Context & challenge
Autonomy in accommodation and tourism spaces is a complex challenge for people with severe low vision. The difficulty of acquiring clear information about the layout of rooms and the position of electrical sockets, controls, furniture and services compromises the autonomous and safe use of these places.
The Sherlock project was born within ECOS4IN, a European project dedicated to inclusive innovation, and was developed by Hackability@Milano in collaboration with A.N.S. – Associazione Nazionale Subvedenti and Fondazione Giacomo Brodolini, with the aim of creating a device capable of providing structured audio descriptions of hospitality and tourism spaces.
Through an active co-design process with severely visually impaired users, Sherlock integrates intuitive tactile navigation and an advanced audio description system, providing an accessible and autonomous orientation experience.
Where the idea comes from
Sherlock is the result of an advanced co-design process based on the Hackability method, which received an Honourable Mention at the Compasso d’Oro 2020 for its ability to innovate through active collaboration between designers and end users.
The process did not merely involve users: it made them an active and creative part of development, bringing direct experience and expertise to bear on the definition of the device. Through three online workshops, people with severe low vision, accessibility experts and designers worked together to model the architecture of the audio descriptions, design the physical interface and optimise the user experience.
This methodology made it possible to develop a truly inclusive and replicable system, responding in a concrete way to the needs for autonomy in hospitality and tourism spaces.
Functions and features
Design and development by:
Teo Bistoni
Dario Comini
Alessio Crivelli
Rossella Indaco
Tam Huynh
Francesco Rodighiero
Luisa C. Baraglia

The device offers a more accessible experience for people with severe low vision while remaining useful and intuitive for everyone.
The technological heart of the device is a Raspberry Pi, which manages the structured audio description system and delivers detailed information. Interaction is simple and straightforward thanks to three physical buttons, easily recognisable by touch, that allow content to be navigated fluidly. The device also integrates a courtesy light, useful both for those with residual vision and for those who need discreet illumination, and a Bluetooth speaker, which allows Sherlock to double as an audio speaker for mobile devices.
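The navigation described above can be sketched in hardware-independent form. This is a minimal illustration, not the project's actual code: it assumes the three buttons map to "previous" and "next" actions over an ordered list of audio descriptions, with wrap-around so navigation never dead-ends (the track names are invented for the example).

```python
# Illustrative sketch of three-button navigation over structured audio
# descriptions. On the real device, button presses (e.g. via GPIO) would
# trigger these methods and play the corresponding audio file.

class AudioNavigator:
    def __init__(self, tracks):
        self.tracks = list(tracks)
        self.index = 0

    def current(self):
        return self.tracks[self.index]

    def next(self):
        # Wrap around past the last description back to the first.
        self.index = (self.index + 1) % len(self.tracks)
        return self.current()

    def previous(self):
        # Python's modulo keeps the index in range for -1 as well.
        self.index = (self.index - 1) % len(self.tracks)
        return self.current()

# Hypothetical content structure for a hotel room:
nav = AudioNavigator(["room_layout", "sockets", "bathroom", "services"])
print(nav.next())      # sockets
print(nav.previous())  # room_layout
print(nav.previous())  # services (wraps around)
```

On the physical device, each method would be bound to one of the tactile buttons and followed by playback of the selected description.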
It features quick access to a microSD card, allowing audio content to be easily updated without technical intervention and ensuring flexible information management. It is also multi-language ready, allowing descriptions to be adapted to user needs and contexts of use.
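One way such multi-language, card-based content could be organised is one folder per language code on the SD card, with a fallback language when a requested one is missing. The layout and names below are assumptions for illustration, not the project's documented structure.

```python
# Sketch: resolve the audio-content folder for a requested language,
# falling back to a default when that language is not on the card.
from pathlib import Path

def content_dir(root, language, fallback="en"):
    """Return the folder holding audio files for `language`.

    `root` is the mount point of the SD card (hypothetical layout:
    one subfolder per ISO language code, e.g. root/en, root/it).
    """
    candidate = Path(root) / language
    return candidate if candidate.is_dir() else Path(root) / fallback
```

Because content lives in plain folders of audio files, a host can update or translate descriptions by copying files onto the card, with no technical intervention on the device itself.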
Made entirely with digital fabrication technologies, Sherlock combines 3D printing, laser cutting and manual soldering of electronic components, making the device highly replicable and adaptable to different contexts.
Development, prototyping and future vision
The development of Sherlock followed a process based on research, experimentation and testing with real users. The team built several versions of the prototype, progressively refining both the hardware and the user experience.
The tests conducted confirmed the effectiveness of the system, demonstrating the value of co-design as a method for developing accessible and truly functional solutions. Recognition in the ADI Index further underlined this.
Looking to the future, the project is already designed to be adapted to new scenarios, expanding the potential of the device in contexts such as museums and public spaces. Sherlock’s open architecture (open source under a Creative Commons licence) allows for modifications, making it a versatile and scalable solution.