Event Session – Beyond Cortana & Siri: Using Speech Recognition & Speech Synthesis for the Next Generation of Mobile Apps

Speech is probably the topic I’m most passionate about when it comes to app development (ok, I have a soft spot for GIS too). From HAL9000 in 2001: A Space Odyssey and Joshua in WarGames, to the Star Trek computers, Siri and Cortana, having conversations with a semi-sentient computer using natural language and speech is probably the ultimate frontier of technology. But speech also brings a responsibility for us developers: to make sure our apps are usable by all, and to keep our users – and those around them – safe. This talk is one of my favorites. It’s about using Speech Recognition & Speech Synthesis to build the next generation of mobile apps.

I presented this talk at Philly Code Camp 2014 last weekend and at the Microsoft Mobile App Devs of New Jersey (MMAD) Meetup. I also presented it at Internet Week NY 2014 last month, and I’ve done variations of this talk at other events in the past, including VSLive, CodePalousa, DevTeach, DVLUP Day Boston and the M3 Conference.

Session Description

Our society has a problem. Individuals are hooked on apps, phones, tablets and social networking. We created these devices and apps, and they have become a core part of our lives, but we stopped short: we failed to recognize some of the problematic situations where our apps are used. People text, email and chat while driving. Pedestrians walk into busy intersections and sidewalk hazards because they refuse to put their phones down. We cannot entirely blame them. We created a mobile revolution, and now we can’t simply ask them to put it on hold when it’s not convenient. It’s almost an addiction, and too often it has led to fatal results.

Furthermore, mobile applications are not always easy to work with due to small screens and on-screen keyboards. Other people struggle to use traditional computing devices because of disabilities. Using our voice is a natural form of communication among humans. Ever since 2001: A Space Odyssey, we’ve been dreaming of computers that can converse with us like HAL9000 or the Star Trek computers. Or maybe you’re part of the new generation of geeks dreaming of Halo’s Cortana? Thanks to new advances and SDKs for speech recognition and synthesis (aka text-to-speech), we are now several steps closer to this reality. Siri is not the end game; she’s the beginning.

This session explores the design models and development techniques you can use to add voice recognition to your mobile applications, including in-app commands, standard & custom grammars, and voice commands usable outside your app. We’ll also see how your apps can respond to the user via speech synthesis, opening up a new world of hands-free scenarios. This reality is here: you’ll see live cross-platform speech demos, and you can learn how to build the same features yourself. Speech support is not just cool or convenient; in many apps it should be a necessity.
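To make the listen-match-respond pattern concrete before the demos, here is a minimal sketch on one of the covered platforms, Android, using its built-in SpeechRecognizer and TextToSpeech APIs. The activity name and the "read my messages" command are illustrative assumptions, not material from the session; the equivalent Windows Phone and iOS APIs follow the same shape.

```java
// Minimal sketch: in-app speech recognition plus speech synthesis on Android.
// The class name and the sample command are hypothetical. Requires the
// RECORD_AUDIO permission in the app manifest.
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.speech.tts.TextToSpeech;
import java.util.ArrayList;
import java.util.Locale;

public class VoiceNotesActivity extends Activity implements TextToSpeech.OnInitListener {
    private SpeechRecognizer recognizer;
    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The text-to-speech engine initializes asynchronously; see onInit below.
        tts = new TextToSpeech(this, this);

        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> matches =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (matches != null && !matches.isEmpty()) {
                    handleCommand(matches.get(0)); // best hypothesis first
                }
            }
            // Remaining callbacks are left empty to keep the sketch short.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onError(int error) {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    /** Start listening; wire this to a button or other trigger in a real app. */
    private void listen() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    /** A hand-rolled "grammar": match the recognized text against known commands. */
    private void handleCommand(String utterance) {
        if (utterance.toLowerCase(Locale.US).contains("read my messages")) {
            // Respond hands-free via speech synthesis instead of the screen.
            tts.speak("You have two new messages.", TextToSpeech.QUEUE_FLUSH, null);
        } else {
            tts.speak("Sorry, I didn't catch that.", TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US);
        }
    }

    @Override protected void onDestroy() {
        recognizer.destroy();
        tts.shutdown();
        super.onDestroy();
    }
}
```

The point of the pattern is the round trip: constrain what you listen for (a platform grammar, or the simple keyword match above), then answer through the speaker instead of the screen, so the user never has to look down.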

Session Slides, Demos & Resources

Continue reading this post and access the slides, demos and additional resources at Age of Mobility here.