AUDITORY GRAPHS FROM DENOISING REAL IMAGES USING FULLY SYMMETRIC CONVOLUTIONAL NEURAL NETWORKS
Venue: Proceedings of the ICAD
Publication type: Conference (non-DCC)
Auditory graphs are a very useful way to deliver numerical information to visually impaired users. Several tools have been proposed for chart data sonification, including audible spreadsheets, custom interfaces, interactive tools, and automatic models. In the case of the latter, most of these models are aimed at the extraction of contextual information, and few solutions have been proposed for generating an auditory graph directly from the pixels of an image by automatically extracting the underlying data. These kinds of tools can dramatically increase the availability and usability of auditory graphs for the visually impaired community. We propose a deep learning-based approach for the automatic sonification of an image containing a bar or line chart using only pixel information. In particular, we take a denoising approach to this problem, based on a fully symmetric convolutional neural network architecture. Our results show that this approach works as a basis for the automatic sonification of charts directly from the information contained in the pixels of an image.
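As an illustrative sketch only (the abstract does not specify the actual layer counts, channel widths, or training setup used in the paper), a "fully symmetric" convolutional denoiser is typically an encoder-decoder in which every downsampling convolution is mirrored by a matching upsampling transposed convolution, so the output has the same resolution as the input chart image. A minimal PyTorch version of that idea might look like:

```python
import torch
import torch.nn as nn

class SymmetricDenoiser(nn.Module):
    """Minimal fully symmetric convolutional encoder-decoder (hypothetical).

    Each strided Conv2d in the encoder is mirrored by a ConvTranspose2d in
    the decoder with the channel counts reversed, so the reconstructed
    (denoised) image has the same spatial size as the input.
    """

    def __init__(self, channels=(1, 16, 32)):
        super().__init__()
        # Encoder: halve the spatial resolution at each stage.
        enc = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            enc += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                    nn.ReLU()]
        self.encoder = nn.Sequential(*enc)
        # Decoder: mirror the encoder, doubling the resolution at each stage.
        rev = list(reversed(channels))
        dec = []
        for c_in, c_out in zip(rev[:-1], rev[1:]):
            dec += [nn.ConvTranspose2d(c_in, c_out, kernel_size=3, stride=2,
                                       padding=1, output_padding=1),
                    nn.ReLU()]
        dec[-1] = nn.Sigmoid()  # final activation: pixel intensities in [0, 1]
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SymmetricDenoiser()
noisy = torch.rand(1, 1, 64, 64)  # stand-in for a noisy grayscale chart image
clean = model(noisy)              # denoised estimate, same shape as the input
```

In this framing, the "noise" to be removed is everything in the chart image that is not the underlying data series; the cleaned output can then be reduced to a sequence of values and mapped to pitch to produce the auditory graph.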