Today we’re going to talk about how computers understand speech and speak themselves. As computers play an increasing role in our daily lives, there has been a growing demand for voice user interfaces, but speech is also terribly complicated. Vocabularies are diverse, sentence structure can change the meaning of individual words, and computers also have to deal with accents, mispronunciations, and all sorts of common linguistic faux pas. The field of Natural Language Processing, or NLP, attempts to solve these problems with a number of techniques we’ll discuss today. And even though our virtual assistants like Siri, Alexa, Google Home, Bixby, and Cortana have come a long way from the first speech processing and synthesis models, there is still much room for improvement.
Produced in collaboration with PBS Digital Studios: http://youtube.com/pbsdigitalstudios
Want to know more about Carrie Anne?
https://about.me/carrieannephilbin
The Latest from PBS Digital Studios: https://www.youtube.com/playlist?list=PL1mtdjDVOoOqJzeaJAV15Tq0tZ1vKj7ZV
Want to find Crash Course elsewhere on the internet?
Facebook - https://www.facebook.com/YouTubeCrash...
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids