With voice and eye: is sound the future of digital?

With the proliferation of smart objects and the development of digital personal assistants, the future will integrate sound ever further into man-machine interfaces. The challenge? To make the use of technology ever more natural. So explains Antoine Châron, co-founder of Sound To Sight, a sound design agency that works on smart vehicles and smart objects.

“Sound design is taking up a central place in man-machine interfaces”

Today, sight is the sense most called upon in our interactions with digital tools. However, with the rise of smart objects, voice commands and sounds are taking up more and more space in man-machine interfaces. Can we speak of a paradigm shift?

It is becoming more and more natural to chat with objects, but many of the sounds they produce remain accidental: generated by default, interchangeable, mediocre, and played through cheap speakers. Smart objects force us to innovate, however, because they often have no graphic interface: the only way to communicate with them is to dig into our phone in search of information. The designer’s work is therefore to offer sounds that save us from having to get our phone out, with a “grammar” of sounds designed to arrive at just the right time.

A “grammar of sounds”? What do you mean by this?

These sound emoticons, also called “earcons”, must be consistent with one another: to associate a sound with an object, one must be able to identify it within its own sound family. They must also be very graphic, so that we know what the object wants to say, and coherent with the context in which we hear them. In a smart car, for example, if I choose a very long sound to say “stop”, the driver won’t understand, but a sudden, short sound will make them react immediately. Another example: with Netatmo (editor’s note: a startup specialised in the smart home), we racked our brains to come up with a sound announcing that someone has entered the home. The notification needed to be striking but without necessarily alarming the user, because the presence detector doesn’t know whether the person is friend or foe. We therefore chose a sound with a questioning connotation.
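
(Editor’s note: to make the idea of a sound “grammar” concrete, here is a minimal sketch of how such a vocabulary might be organised in code. The event names, families, durations and rules are illustrative assumptions, not Sound To Sight’s actual catalogue.)

```python
# A hypothetical earcon "grammar": each event maps to a short sound whose
# family, length and pitch contour encode what it means.
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class Earcon:
    family: str        # shared timbre: a sound must be identifiable within its family
    duration_s: float  # urgent events get sudden, short sounds
    contour: str       # "rising", "falling", "flat" or "questioning"

# Illustrative vocabulary only.
EARCON_GRAMMAR: Dict[str, Earcon] = {
    "emergency_stop":    Earcon(family="car",  duration_s=0.15, contour="falling"),
    "notification":      Earcon(family="car",  duration_s=0.30, contour="rising"),
    "presence_detected": Earcon(family="home", duration_s=0.50, contour="questioning"),
}

def check_grammar(grammar: Dict[str, Earcon]) -> None:
    """Enforce one rule of the grammar: safety-critical earcons stay short."""
    for event, earcon in grammar.items():
        if "stop" in event and earcon.duration_s > 0.2:
            raise ValueError(f"{event}: an urgent earcon must be sudden and short")

check_grammar(EARCON_GRAMMAR)
```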

What are the issues surrounding these “earcons” for smart objects?

Not needing to look at a screen is sometimes a safety issue. In the case of a driving assistant plugged into a car, sound language is what enables the driver to keep their eyes on the road. Yet there is still much to be accomplished in this area. We see, for example, very smart vehicles using the same sound for several use cases, such as blind-spot obstacle detection and sudden braking of the vehicle in front. If you are driving distractedly, this generic sound gives you no way of knowing what is happening, which can be dangerous. The challenge is therefore to associate a pertinent sound with each important function: if the car is asking the driver to slow down, the sound gets lower; if it is a generic notification, we use a “pop-up” sound that rises in pitch.
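
(Editor’s note: a rough sketch of the pitch logic described above: a short sweep that falls to say “slow down” and one that rises for a generic notification. The frequencies and durations are illustrative assumptions.)

```python
import numpy as np

SAMPLE_RATE = 44_100  # Hz

def sweep(f_start: float, f_end: float, duration_s: float) -> np.ndarray:
    """Short sine sweep whose pitch glides from f_start to f_end."""
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    freq = np.linspace(f_start, f_end, t.size)           # linear pitch glide
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE    # integrate frequency
    # Quick fade-in/out at the edges to avoid audible clicks.
    fade = np.minimum(1.0, 10 * np.minimum(t, duration_s - t) / duration_s)
    return np.sin(phase) * fade

slow_down = sweep(880.0, 440.0, 0.4)   # "slow down": the sound gets lower
pop_up    = sweep(440.0, 880.0, 0.25)  # generic notification: it rises
```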

For manufacturers, it is also a brand identity issue…

Yes. They are asking for sounds that are not only efficient but also consistent with their brand universe. While working with McLaren on a vehicle for the Geneva Motor Show, we realised that nearly all competitors use the same type of sounds, including in top-of-the-range cars. The brain, however, synthesises all the senses to form an impression, and if the sounds are too basic, the product is devalued. The challenge for us here is therefore to offer sounds that are ergonomic, aesthetic, coherent with the vehicle, and recognisable enough to become a brand signature.

How do you produce these graphic sounds?

User testing enables us to categorise and classify sounds so as to use them wisely. During this analysis phase, we rely on synaesthesia, i.e. using images to describe a sound. Once we have found our vocabulary, we can move on to the aesthetics phase: for example, a short repeated sound could have the same effect but not the same aesthetics, depending on whether we hear “plum-plum” or “chak-chak”. The designer’s work thus mixes things as varied as aesthetics, brand image and ergonomics, but also psychology and even fundamental analysis.

What are the new technologies behind these innovations?

Wave Field Synthesis (WFS), for example, which makes it possible to create sound holograms. The technology is rather expensive, so it is not yet frequently used outside of experimental music concerts. We tested it with a German manufacturer on a vehicle cabin equipped with spatialised sound. We were able to move sounds within the space: you’re driving and looking forwards, so there is no reason for the blind-spot warning to come from your dashboard. We therefore imagined a sound that started at the front of the cabin and moved to the left or right to guide the driver’s attention.

Likewise for a traffic-jam warning: thanks to this three-dimensional technology, we were able to imagine a sound that started off far away, came closer to the driver, then moved away again, to suggest slowing down. Spatialised sound is not yet well developed, but WFS technology makes it possible to create sounds that all the occupants of a cabin localise in the same place. An interplay of phase and delay between several speakers produces this sensation, which is of great interest for cars.
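
(Editor’s note: a highly simplified delay-and-sum sketch of the principle at work here: each speaker plays the signal with a delay and gain derived from its distance to a virtual source, so the wavefronts combine as if the sound came from that point. Real WFS driving functions are more involved; the speaker layout and source path below are assumptions for illustration.)

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 48_000    # Hz

def delays_and_gains(speakers_x: np.ndarray, source_x: float, source_y: float):
    """Per-speaker delay (in samples) and gain for a virtual source at (x, y)."""
    distances = np.hypot(speakers_x - source_x, source_y)  # speaker-to-source distance
    delays = (distances - distances.min()) / SPEED_OF_SOUND * SAMPLE_RATE
    gains = distances.min() / distances                    # 1/r amplitude falloff
    return delays.round().astype(int), gains

# Eight speakers along the cabin's front edge, 15 cm apart (assumption).
speakers_x = np.arange(8) * 0.15

# Virtual source sliding from front-centre towards the left blind spot.
for x in np.linspace(0.5, -0.5, 5):
    delays, gains = delays_and_gains(speakers_x, x, -0.3)
    # Each speaker plays the alert shifted by delays[i] samples, scaled by gains[i].
    print(f"source at x = {x:+.2f} m -> delays (samples): {delays.tolist()}")
```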

Any other examples?

There are many! Among promising technologies is the vibrating pot, a membrane-less speaker that is attached to the back of an object and uses the material itself to emit sound. With Otis, we imagined an immersive surround-sound booth to combat “lift fear”: by fitting vibrating pots onto all of the lift’s surfaces and broadcasting sounds with lots of reverberation, we give the impression that the space is much bigger.
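
(Editor’s note: a minimal sketch of the reverberation trick, under the assumption of a synthetic impulse response: convolving a dry signal with a long, decaying noise tail makes a small booth sound like a much larger space.)

```python
import numpy as np

SAMPLE_RATE = 44_100
rng = np.random.default_rng(0)

def big_room_impulse_response(rt60_s: float = 3.0) -> np.ndarray:
    """Synthetic impulse response: white noise decaying by 60 dB over rt60_s seconds."""
    n = int(SAMPLE_RATE * rt60_s)
    t = np.arange(n) / SAMPLE_RATE
    decay = 10.0 ** (-3.0 * t / rt60_s)      # reaches -60 dB at t = rt60_s
    return rng.standard_normal(n) * decay

def widen_space(dry: np.ndarray, wet_mix: float = 0.5) -> np.ndarray:
    """Mix in a heavy reverb tail so a small booth sounds much larger."""
    wet = np.convolve(dry, big_room_impulse_response())[: dry.size]
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet / peak                     # normalise the reverberant signal
    return (1.0 - wet_mix) * dry + wet_mix * wet
```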

Could this spatialised sound have a place in the smart home?

Currently, the video games industry is the biggest user of these technologies. But we can see promising uses in everyday life: we can imagine future spatialised signage, or a voice that follows you through the house. For an event with the French national railway company SNCF, we used parametric antennas, also known as “sound cannons”: with these ultra-directional loudspeakers, we were able to transmit information to a precise spot without it being heard outside that perimeter, offering bilingual navigation through the space with floor markings specific to each language. However, this technology is still rather bulky, a bit expensive, and its audio quality remains quite mediocre.

A similar experiment can be found in binaural technology, which offers a personal 3D representation of the acoustic space through headphones. To guide visually impaired people, for example, we can imagine a sound that moves along with the person, even in the street, to help them identify a flow of traffic or a bus stop. Disabled people and the elderly are in fact the first to benefit from these innovations around sound. To boost their independence, our final-year project installed a sound man-machine interface with voice command in their kitchen: the user could say “I’m looking for plates” and the cupboard beeped. These are sometimes technologies that have been around for a long time, but they can really find their place among all the sound signals that we need to broadcast today.
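
(Editor’s note: a very simplified sketch of binaural placement using interaural time and level differences. Real binaural rendering relies on measured HRTFs; the head-model constants below are textbook approximations, not the agency’s tools.)

```python
import numpy as np

SAMPLE_RATE = 44_100
HEAD_RADIUS = 0.0875      # m, average human head (textbook value)
SPEED_OF_SOUND = 343.0    # m/s

def binaural_pan(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Place a mono signal at azimuth_deg (0 = straight ahead, +90 = full right)."""
    az = np.radians(azimuth_deg)
    # Woodworth approximation of the interaural time difference.
    itd_s = HEAD_RADIUS / SPEED_OF_SOUND * (abs(az) + abs(np.sin(az)))
    itd_samples = int(round(itd_s * SAMPLE_RATE))
    near = mono
    far = np.concatenate([np.zeros(itd_samples), mono])[: mono.size]
    far = far * (1.0 - 0.4 * abs(np.sin(az)))   # crude head-shadow level drop
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)      # columns: left and right channels

# Example: a beep placed 45 degrees to the right, e.g. marking a bus stop.
beep = np.sin(2 * np.pi * 660.0 * np.arange(SAMPLE_RATE // 4) / SAMPLE_RATE)
stereo = binaural_pan(beep, azimuth_deg=45.0)
```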
