Everywhere you look these days, voice is becoming a dominant interface. Whether in the car, at home, in the office, or anywhere else, voice is connecting us with the technologies we use every day.
Juniper Research estimates that more than 3.25 billion voice-enabled devices are in circulation today, and predicts voice commerce will grow to more than $80 billion by 2023. Companies across industries, both incumbents and startups, are recognizing this and developing voice-enabled applications to create better, more efficient processes for businesses and consumers alike. According to PwC and CB Insights, venture capital funding of AI companies reached a record $9.3 billion, underscoring the momentum the speech-enablement and AI markets are gaining.
Of course, much of the hype has been driven, first, by Apple’s Siri, followed by Amazon’s Alexa, Microsoft’s Cortana, Google Assistant, and now Samsung’s Bixby. These are only the tip of the iceberg, but they have helped validate the AI market and created demand for massive AI innovation.
Much of the growth can be attributed to advances in accuracy. If voice recognition engines didn’t work, they wouldn’t be useful. Today, most major platforms boast accuracy ratings well above 90%, making them much more enticing to users. In fact, 32% of American adults say they use voice search for the fun of it. That figure jumps to 51% for teenagers, who will soon be entering the workforce, 55% of whom already use voice search on a daily basis.
What does that mean? comScore predicts that voice will account for half of all searches by next year, and Gartner goes even further, predicting that 30% of all searches will be done without a screen at all.
For businesses looking to engage their customers in the most efficient and desirable way, that means they had better get their AI development hats on quickly.
To that end, Adam Cheyer, co-founder of both Siri and Bixby, will be in Los Angeles tomorrow, June 15, to talk not only about Samsung’s Bixby Developer program, but also about how AI can be combined with existing APIs and services to build rich conversational experiences for users.
It’s all about the experience. Cheyer’s engagement at the Bixby Developer Session will let attendees experience firsthand, in an immersive, hands-on training environment, how new voice interaction capabilities can create new levels of engagement for Bixby’s more than 500 million users.
Details of tomorrow’s event:
Date: Saturday, June 15, 2019
Location: Cross Campus, 29 Colorado Avenue, Santa Monica, California 90401
If you’re on the East Coast, there will be a Bixby Developer Session in New York next weekend, Saturday, June 22.
The Future of Work Expo will take place February 12-14, 2020, in Ft. Lauderdale, Florida, featuring three days of discussion about how AI, chatbots, and automation are enabling businesses to reinvent themselves and become more agile and customer-centric.