‘Horus’ is a wearable device for people with visual impairments that observes, understands and describes the environment, reading text and recognizing faces, objects and much more

Horus is a wearable device that observes, understands and describes the environment to the person using it, providing useful information at the right time and in a discreet way through bone conduction. Horus can read text, recognize faces, objects and much more.

Horus is composed of a wearable headset with cameras and a pocket unit that contains a powerful processor and a long-lasting battery. The user can activate each function through a set of buttons located on both the headset and the pocket unit; the buttons are easy to find thanks to their different shapes. In some circumstances, Horus can also understand when something is relevant to the user and trigger an action automatically.

[Render of the Horus device]

Text recognition
Horus can recognize printed text, even on non-flat surfaces, and helps the user frame the page correctly through audible cues. Once the page is framed, Horus starts reading, and the text no longer needs to be kept in front of the camera.
Face recognition
Horus can learn and recognize people's faces. After detecting a face, Horus quickly determines whether it belongs to a known person and notifies the user. If the person is unknown, teaching the device their face takes just a couple of seconds.
Object recognition
By rotating an object in front of the cameras, Horus can learn its appearance and shape. Thanks to 3D perception, Horus will then recognize it from different angles, helping the user distinguish objects that are similar in shape. To help the user frame the object correctly, Horus generates audible cues.
Mobility assistance
When the user is moving, Horus can promptly alert them to obstacles along their path by generating 3D sounds with varying intensity and pitch that represent the obstacle's position. Depending on the obstacle's position and distance, the user hears sounds coming from the side of the obstacle, at a varying repetition rate.
Scene and photo description
Thanks to the latest advances in artificial intelligence, Horus can describe what its cameras see. Whether it is a postcard, a photograph or a landscape, the device provides a short description of what is in front of it.


Horus is composed of a wearable headset and a pocket unit. The headset runs along the back of the head and holds the cameras and the bone conduction transducers, while the pocket unit contains the battery and the processor. The two units are connected by a thin wire.

The headset is similar to a pair of headphones and can be worn comfortably, with or without glasses. With the buttons located near the ends of the headset, the user can interact with Horus and adjust the audio volume.

The pocket unit is slightly bigger than a smartphone and contains Horus' main processor and a long-lasting battery. Buttons on its sides allow the user to interact with the device and adjust the audio volume.

Currently, Horus works in Italian, English and Japanese, with plans to quickly expand to other European languages.

When its cameras detect a foreground object, Horus automatically tries to recognize it. If the object is not in its database, pressing the triangle button starts the learning process, during which you rotate the object in front of the cameras. Horus then prompts you to record the object's name before adding it to the database.
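The learning flow described above can be sketched in a few lines. This is an illustrative sketch only, not Horus' actual software: the names (`learn_object`, `extract_features`, the dictionary database) are hypothetical, and a real system would compute 3D appearance descriptors rather than the placeholder used here.

```python
def extract_features(frame):
    # Placeholder descriptor; a real device would compute a
    # 3D/appearance feature from the camera frame.
    return hash(frame)

def learn_object(frames, record_name, database):
    """Store views of an object, captured while it is rotated,
    under a name recorded by the user."""
    views = [extract_features(frame) for frame in frames]
    name = record_name()          # user speaks the object's name
    database[name] = views
    return name

def recognize(frame, database):
    """Return the name of a matching known object, or None."""
    feature = extract_features(frame)
    for name, views in database.items():
        if feature in views:
            return name
    return None
```

Because several views are stored per object, a later frame only needs to match one of them, which is a simple stand-in for the multi-angle recognition the text describes.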

While this application is active, Horus constantly scans the environment for obstacles. Each obstacle is translated into a sound whose character depends on the obstacle's position and distance: a distant obstacle on the left produces a slow-paced sound coming from the left, while a close obstacle on the right produces a fast-paced sound coming from the right.
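The position-to-sound mapping above can be sketched as a small function. This is a minimal illustration under assumed conventions (azimuth in degrees, negative to the left; a hypothetical 5 m sensing range; repetition rates from roughly 1 Hz far away to 8 Hz up close), not the device's actual tuning.

```python
def obstacle_cue(azimuth_deg, distance_m, max_distance_m=5.0):
    """Map an obstacle's position to a stereo pan and repetition rate.

    azimuth_deg: negative = left of the user, positive = right.
    Returns (pan, repetition_hz), with pan in [-1, 1] (left..right)
    and a repetition rate that rises as the obstacle gets closer.
    """
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))
    closeness = max(0.0, 1.0 - distance_m / max_distance_m)
    repetition_hz = 1.0 + 7.0 * closeness  # slow when far, fast when near
    return pan, repetition_hz
```

For example, a far obstacle on the left yields a negative pan with a slow repetition rate, while a near obstacle on the right yields a positive pan with a fast one, matching the behavior described in the text.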