Zoya Bylinskii, MIT. “Computational Perception for Multimodal Document Understanding”

Email:  zoya@mit.edu

Position:  Research Scientist

Current Institution:  Adobe Research

Abstract:  Computational Perception for Multimodal Document Understanding

Multi-modal documents occur in a variety of forms: graphs in technical reports, diagrams in textbooks, and graphic designs in bulletins. Humans efficiently process the visual and textual information they contain to make decisions in areas including business, healthcare, and science. Building computational tools to understand multi-modal documents can therefore have important applications in web search, information retrieval, automatic captioning, and design tools. My research has used machine learning to detect and parse the visual and textual elements of multi-modal documents for topic prediction and automatic summarization. Inspired by human perception, I have developed novel data collection methods and built models that predict where people look in graphic designs and information visualizations. These predictions have enabled interactive design applications. My work has made contributions to the fields of human vision, computer vision, and human-computer interaction.

Bio:
Zoya Bylinskii recently started as a research scientist at Adobe Research in Cambridge, Massachusetts. She received a PhD from MIT in September 2018, a master’s in electrical engineering and computer science from MIT in 2015, and an honors bachelor’s degree in computer science and statistics from the University of Toronto in 2012. She is a 2016 Adobe Research Fellow, a 2014-2016 NSERC Postgraduate Scholar, and a 2013 Julie Payette Research Scholar, and she received a Google Anita Borg Memorial Scholarship in 2011. She works at the interface of human vision, computer vision, and human-computer interaction: building computational models of people’s memory and attention, and applying the findings to graphic design and data visualization, with applications in automatic summarization and redesign (AI for creativity).
