As a typical cross-modal problem, visual question answering (VQA) has received increasing attention from the computer vision and natural language processing communities. Reading and reasoning about text in images is a burgeoning and important topic in VQA, especially for applications that assist the visually impaired. Given an image, the task is to predict an answer to a natural language question closely related to the image's textual content. In this paper, we propose a novel end-to-end text-based VQA model that grounds question answering in both visual and textual information.