Since Chrome 25 we have had access to the new Web Speech API, which allows us to create web apps that can utilise voice to text or voice control with a microphone. I have been wanting to experiment with this for quite a while, so I built a simple example using voice commands to control an e-learning module made with my e-learning framework. I recorded a video below demonstrating navigation through voice commands. After the video I will show you how easy it is to set up this basic control.
The following link provides a tutorial on implementing a speech-to-text example: http://updates.html5rocks.com/2013/01/Voice-Driven-Web-Apps-Introduction-to-the-Web-Speech-API
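Keep in mind that the API is prefixed and, at the time of writing, only available in Chrome, so it is worth checking for support before wiring anything up. A minimal check might look something like this:
// only proceed if the prefixed Web Speech API is available (Chrome 25+)
if ('webkitSpeechRecognition' in window) {
    // safe to create a new webkitSpeechRecognition instance here
} else {
    // fall back to the usual click/keyboard navigation
}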
To use speech recognition for voice commands, here is how I implemented it:
1. Create a new speech recognition object
var recognition = new webkitSpeechRecognition();
2. Make the object continuously check the microphone
recognition.continuous = true;
3. Set the language to use. By default it will use the document’s language
recognition.lang = "en-AU";
4. Start the speech recognition
recognition.start();
5. Get results on the ‘onresult’ event
recognition.onresult = function (e) {
    // loop through the results
    for (var i = e.resultIndex; i < e.results.length; ++i) {
        // only get the final results
        if (e.results[i].isFinal) {
            // trim any whitespace from the result and pass it to our command handler
            // note: I am using jQuery here to trim the string because my e-learning demo already had jQuery included
            runCommand($.trim((e.results[i][0].transcript).toLowerCase()));
        }
    }
};
6. Set up a function to handle the commands
function runCommand(command) {
    switch (command) {
        case "alert": alert("Hello"); break;
        case "prompt": prompt("Enter some text"); break;
        case "confirm": confirm("Confirm?"); break;
    }
}
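Putting the steps together, a minimal sketch of the whole thing looks something like the code below. The goToNextPage() and goToPreviousPage() calls are placeholders for whatever navigation functions your own framework exposes; swap in your own.
// a minimal sketch combining the steps above; goToNextPage() and
// goToPreviousPage() are placeholder navigation functions
if ('webkitSpeechRecognition' in window) {
    var recognition = new webkitSpeechRecognition();
    recognition.continuous = true;
    recognition.lang = "en-AU";

    recognition.onresult = function (e) {
        for (var i = e.resultIndex; i < e.results.length; ++i) {
            if (e.results[i].isFinal) {
                runCommand($.trim((e.results[i][0].transcript).toLowerCase()));
            }
        }
    };

    recognition.start();
}

function runCommand(command) {
    switch (command) {
        case "next": goToNextPage(); break;     // placeholder navigation call
        case "back": goToPreviousPage(); break; // placeholder navigation call
    }
}
Because the switch matches the exact lowercased transcript, the recogniser has to return the command word-for-word for anything to happen, so short, distinct words make the easiest commands.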