Sensory Substitution:

The vOICe is a visual-to-auditory sensory substitution algorithm, originally created by Dr. Peter Meijer and adapted for the After Sight Model 1. The software approximates vision by letting users train their brains to see with sound. Although the result is not the same as the sense of vision, users who train over extended periods can achieve remarkable results. Please visit www.seeingwithsound.com for additional information.
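To give a feel for the image-to-sound idea, here is a minimal sketch of a vOICe-style mapping: the image is scanned left to right over roughly a second, pixel height maps to pitch, and brightness maps to loudness. The scan time, frequency range, and sample rate below are illustrative assumptions, not the parameters of the actual vOICe or After Sight implementation.

```python
import math

def image_to_soundscape(image, scan_time=1.0, f_low=500.0, f_high=5000.0,
                        sample_rate=8000):
    """image: list of rows (top row first) of brightness values in 0..1.
    Returns a list of audio samples for one left-to-right scan."""
    n_rows = len(image)
    n_cols = len(image[0])
    samples_per_col = int(scan_time * sample_rate / n_cols)
    # Exponential frequency spacing: the top row gets the highest pitch.
    span = max(n_rows - 1, 1)
    freqs = [f_low * (f_high / f_low) ** ((n_rows - 1 - r) / span)
             for r in range(n_rows)]
    out = []
    for c in range(n_cols):                      # scan columns left to right
        for i in range(samples_per_col):
            t = (c * samples_per_col + i) / sample_rate
            # One sinusoid per pixel row, weighted by that pixel's brightness.
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            out.append(s / n_rows)               # normalize to keep |s| <= 1
    return out
```

A bright diagonal in the image would thus sound like a tone sweeping downward in pitch as the scan moves across it.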

Object Recognition:

Teradeep is a neural-network object identification application. Neural networks are machine-learning models that can use computer vision to identify objects. As currently implemented, the 'best' guesses are read over the audio channel each time the recognition pass completes. The object recognition library is stored on the device, and no 'training' is required of the user. Currently 1,000 objects are recognized; in the future we plan to update to a 5,000-object library.
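As a rough illustration of how the spoken "best guesses" might be selected from the classifier's output, the sketch below ranks class probabilities and keeps the top few above a confidence threshold. The threshold, count, and label names are assumptions for illustration, not the actual Teradeep interface.

```python
def best_guesses(scores, labels, top_k=3, threshold=0.1):
    """scores: per-class probabilities; labels: matching class names.
    Returns spoken phrases for the top_k classes above threshold,
    most confident first."""
    ranked = sorted(zip(scores, labels), reverse=True)
    return [f"{label}, {score:.0%}"
            for score, label in ranked[:top_k]
            if score >= threshold]
```

For example, scores of 0.70, 0.05, and 0.25 over the hypothetical labels "cup", "dog", and "chair" would yield the phrases "cup, 70%" and "chair, 25%", with the low-confidence "dog" guess suppressed.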

Distance Sensing:

Using the built-in ultrasonic rangefinder, the distance to the nearest object is reported. There are three options for feedback:

1. A vibration signal that changes with distance.
2. A tonal signal.
3. The distance read aloud in English to the nearest tenth of a meter.

The effective range is 0.3-5.0 meters.
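The three feedback modes can be sketched as a single mapping from distance to an output signal. The clamping to the 0.3-5.0 m effective range and the spoken rounding follow the description above; the vibration scaling and tone frequency range are illustrative assumptions, not the device's actual parameters.

```python
def distance_feedback(distance_m, mode):
    """distance_m: measured distance in meters.
    mode: 'vibration', 'tone', or 'speech'."""
    d = min(max(distance_m, 0.3), 5.0)       # effective range is 0.3-5.0 m
    if mode == "vibration":
        # Closer objects -> stronger vibration (0..1 intensity, assumed).
        return (5.0 - d) / (5.0 - 0.3)
    if mode == "tone":
        # Closer objects -> higher pitch; 200-2000 Hz is an assumed range.
        return 200.0 + (5.0 - d) / (5.0 - 0.3) * 1800.0
    if mode == "speech":
        # Read aloud to the nearest tenth of a meter, as described above.
        return f"{round(d, 1)} meters"
    raise ValueError("unknown feedback mode")
```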

Face Detection and Localization:

This function detects whether a face is present in the camera view and, if so, reads back the location of the face on a 3x3 grid. Output is spoken in English, for example "Face at Upper Mid" or "Face at Lower Right", or any of the other possible locations. This helps the user locate where a person is relative to them.
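Mapping a detected face to the 3x3 grid can be sketched as dividing the frame into thirds and naming the cell containing the face's center. The grid names follow the examples above; the coordinate convention and cell boundaries are assumptions for illustration.

```python
ROWS = ["Upper", "Mid", "Lower"]   # top of frame first
COLS = ["Left", "Mid", "Right"]    # left of frame first

def face_location(cx, cy, frame_w, frame_h):
    """cx, cy: pixel coordinates of the face's center (origin top-left).
    Returns the spoken phrase, e.g. 'Face at Upper Mid'."""
    col = min(int(cx * 3 / frame_w), 2)   # clamp the right/bottom edges
    row = min(int(cy * 3 / frame_h), 2)
    return f"Face at {ROWS[row]} {COLS[col]}"
```

A face centered near the top middle of a 640x480 frame, say at (320, 50), would be announced as "Face at Upper Mid".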

After Sight Model 1 Operating System:

The operating system is built on Linux; it takes input from the rotary knob/switch and provides feedback with spoken words, giving the user complete control over the operation and configuration of the device. The operating system also tracks the battery condition and notifies users when external power is connected or disconnected. As of April 2016, an update mechanism has been built into the system, allowing users to plug the device into a network and receive software updates. As we add new functions, existing users benefit from these efforts at no extra cost.

Future Plans:

In the future, we plan to add a straight-line following application to aid in keeping a heading over short distances of up to about 20 meters, with a goal of final deviations of less than 5 degrees. The device already has the sensor required for this task, but the application has not been written yet.
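Since the application does not exist yet, the following is only a sketch of the core idea: compare the current compass heading against a stored target heading and report the signed deviation, which could then drive audio or vibration cues. Wrapping into the (-180, 180] range handles headings that cross north.

```python
def heading_deviation(target_deg, current_deg):
    """Signed deviation in degrees, in (-180, 180].
    Positive means the user has drifted right of the target heading."""
    d = (current_deg - target_deg) % 360.0
    return d - 360.0 if d > 180.0 else d
```

For example, walking a target heading of 350 degrees while actually pointing at 10 degrees is a drift of +20 degrees, not -340, because the wrap across north is accounted for.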

Devices produced after March 2016 have Bluetooth and Wi-Fi capability built in. We plan to add support for these over the coming months; users will then be able to use Bluetooth headsets and deal with fewer wires.

With wireless networking available, the possibilities for applications that take advantage of it are huge. One we have mused about developing is a facial recognition application: users could add friends from Facebook and use the tagged photos they have already posted to build the training library. One could then walk into a meeting with acquaintances and have the device announce when a friend comes into view.

Another potential application would be viewing YouTube videos as conversions to sensory substitution soundscapes. A user from South Africa recently demonstrated what this might be like using the vOICe LE for Windows on a YouTube video of a SpaceX rocket landing. At roughly one frame per second, the conversion is not the same as watching live action; it is more like reading the frames of a graphic novel. Not perfect, but it opens up new experiences and gives the visually impaired access to the world in ways that are not yet fully realized.

We regularly ask blind stakeholders what they would like, and try to prioritize and build to their desires as much as possible.