Most teams running a mobile or web application struggle to improve their app's key performance indicators.
For consumer-facing applications, teams might care about conversion rates or engagement. For professional tools such as CRMs and ERPs, the most important goals might be better data quality and more completed tasks.
Whatever the goal, improving it is not an easy task. The low-hanging fruit has long been picked, and teams spend hours in meetings deciding which colours to A/B test next for their CTA buttons, or whether to use the word ‘purchase’ or ‘buy’.
While this might be a fun exercise, everyone already knows that no such change will move these metrics much.
But there is one thing that can be done fairly easily and that might make filling in an average form up to five times faster, or cut seconds off the time a search takes.
Most of today’s user interfaces are operated using just two modalities: touch (tactile input) and vision. We click, type, and tap, and we watch our displays to see what is happening.
The third common modality is voice, and many people already have a good example of a device built around it in their living rooms: the smart speaker.
Voice is unlike the other two modalities in that it works naturally in both directions: you can command a device by voice, and it can reply by voice. Touch, in contrast, is used only for input, and vision only for output.
However, voice has its drawbacks too. The main one is that it is a rather slow means of transmitting information, and if you mishear one critical piece somewhere in the middle of an utterance, there is no easy way to get back to it. Compare that with a book, where you can reread a line as many times as you like.
Another issue is that when a smart speaker fails, the experience can be frustrating, partly because smart speakers support none of the other modalities. This is why smart speakers are probably not the future of voice.
But what about using all three modalities at the same time? What if your average application didn't limit itself to the two most common modalities, and didn't replace them with a single smart speaker skill, but leveraged all three?
Well, that would be the way to get the improvements I promised earlier.
For example, in this video a regular web form for booking flights is turned into a multimodal form that supports voice and touch simultaneously and shows the results in real time for fast feedback.
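To make this concrete, here is a minimal sketch of how voice input can be layered on top of an ordinary form without taking touch away. It assumes a browser that exposes the Web Speech API (SpeechRecognition); the field ids and the utterance parsing below are hypothetical, not taken from the video.

```typescript
// Minimal sketch: voice input alongside normal touch/typing on a flight form.
// Assumes a browser exposing the Web Speech API (SpeechRecognition).
// The field ids and the parsing rule below are hypothetical.

const Recognition =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new Recognition();
recognition.continuous = true;      // keep listening across pauses
recognition.interimResults = true;  // stream partial results for fast feedback

// The form fields stay fully usable by touch; voice just fills them too.
const fromField = document.getElementById('from') as HTMLInputElement;
const toField = document.getElementById('to') as HTMLInputElement;

// Very naive utterance parser, e.g. "from London to Singapore".
function applyUtterance(text: string): void {
  const match = /from\s+(\w+)\s+to\s+(\w+)/i.exec(text);
  if (match) {
    fromField.value = match[1];
    toField.value = match[2];
  }
}

recognition.onresult = (event: any) => {
  // Apply even interim transcripts so the user sees the form react
  // in real time, and can correct any field by touch at any point.
  const latest = event.results[event.results.length - 1];
  applyUtterance(latest[0].transcript);
};

recognition.start();
```

The key design point is that voice only adds a parallel input path: every field remains editable by touch, so a misrecognised city can be corrected with a tap instead of repeating the whole utterance.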
That, I think, is the simple way to improve your application's key metrics. Or you can go back to wondering whether a blue button would work better after all.