
Asked 1 month ago by AstralHunter928

How can I resolve the 'Could not deserialize speech context' error in Azure Pronunciation Assessment using JavaScript?

I'm implementing a pronunciation assessment system using Azure's JS SDK (refer to the documentation) and encountering this error in the console:

"> Could not deserialize speech context. websocket error code: 1007"

Below is my implementation:

JAVASCRIPT
assessPronunciation(fileUrl) {
  const speechConfig = window.SpeechSDK.SpeechConfig.fromSubscription("xxx", "westeurope");
  speechConfig.speechRecognitionLanguage = "en-GB";

  // Fetch the WAV file and create an AudioConfig
  fetch(fileUrl)
    .then(response => response.blob())
    .then(blob => {
      // Convert the blob to a File object
      const file = new File([blob], "audio.wav", { type: "audio/wav" });

      // Create an AudioConfig using the File object
      const audioConfig = window.SpeechSDK.AudioConfig.fromWavFileInput(file);

      var pronunciationAssessmentConfig = new window.SpeechSDK.PronunciationAssessmentConfig({
        referenceText: "Hello this is a test",
        gradingSystem: "HundredMark",
        granularity: "Phoneme"
      });

      var speechRecognizer = new window.SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);
      pronunciationAssessmentConfig.applyTo(speechRecognizer);

      speechRecognizer.sessionStarted = (s, e) => {
        console.log(`SESSION ID: ${e.sessionId}`);
      };

      pronunciationAssessmentConfig.applyTo(speechRecognizer);

      speechRecognizer.recognizeOnceAsync(
        function(speechRecognitionResult) {
          if (speechRecognitionResult.reason === window.SpeechSDK.ResultReason.RecognizedSpeech) {
            // The pronunciation assessment result as a Speech SDK object
            var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
            console.log("pronunciationAssessmentResult", pronunciationAssessmentResult);

            // The pronunciation assessment result as a JSON string
            var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult);
            console.log("pronunciationAssessmentResultJson", pronunciationAssessmentResultJson);
          } else {
            console.error("Speech not recognized. Reason:", speechRecognitionResult);
          }
        },
        function(error) {
          console.error("Error during recognition:", error);
          if (error instanceof window.SpeechSDK.SpeechRecognitionCanceledEventArgs) {
            console.error("Recognition canceled. Reason:", error.reason);
            console.error("Error details:", error.errorDetails);
          }
        }
      );
    })
    .catch(error => {
      console.error("Error fetching WAV file:", error);
    });
}

I have verified that the recording (fileUrl) returns a working WAV file. Here is my recording setup configuration:

JAVASCRIPT
startRecording(event) {
  event.preventDefault();
  if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
      this.recorder = new RecordRTC(stream, {
        type: 'audio',
        mimeType: 'audio/wav',
        recorderType: RecordRTC.StereoAudioRecorder,
        desiredSampRate: 16000,
        numberOfAudioChannels: 1,
        audioBitsPerSecond: 128000
      });
      this.startRecorder(event);
    }).catch((error) => {
      console.log("The following error occurred: " + error);
      alert("Please grant permission for microphone access");
    });
  } else {
    alert("Your browser does not support audio recording, please use a different browser or update your current browser");
  }
}

I'm looking for help to identify what might be causing the error. Thanks!

SOLUTION

JAVASCRIPT
var audioConfig = window.SpeechSDK.AudioConfig.fromStreamInput(pushStream);
var pronunciationAssessmentConfig = new window.SpeechSDK.PronunciationAssessmentConfig(
  "My voice is my passport, verify me.",
  window.SpeechSDK.PronunciationAssessmentGradingSystem.HundredMark,
  window.SpeechSDK.PronunciationAssessmentGranularity.Phoneme
);

2 Answers

Answer by StackOverflowTychoAI Agent 1 month ago

The error

Could not deserialize speech context. websocket error code: 1007

usually indicates that some properties in the speech context (the JSON that’s sent to the service) are malformed or not recognized. In the case of pronunciation assessment, a common pitfall is providing configuration values as plain strings rather than using the SDK’s provided enumeration values.

Below are a few adjustments you should make:


1. Use Enumerations for Configuration

Instead of passing string values like "HundredMark" and "Phoneme", use the SDK-provided enums. For example:

JAVASCRIPT
var pronunciationAssessmentConfig = new window.SpeechSDK.PronunciationAssessmentConfig(
  "My voice is my passport, verify me.",
  window.SpeechSDK.PronunciationAssessmentGradingSystem.HundredMark,
  window.SpeechSDK.PronunciationAssessmentGranularity.Phoneme
);

If you need to assess a different reference text, just change it in the first parameter.


2. Use the Correct AudioConfig Input

The sample solution in the documentation demonstrates using a push stream instead of feeding a WAV file directly. While using a WAV file should work, if the service expects a streaming input for pronunciation context, you might get errors. For testing, try switching to a push stream:

JAVASCRIPT
// Example: configuring a push stream input
var pushStream = window.SpeechSDK.AudioInputStream.createPushStream();
var audioConfig = window.SpeechSDK.AudioConfig.fromStreamInput(pushStream);

Then push your WAV file’s data into the stream.
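
For example, in the browser you could fetch the recorded WAV as an ArrayBuffer and feed it in. A minimal sketch, assuming fileUrl points at the 16 kHz mono WAV produced by the recording setup above:

JAVASCRIPT
fetch(fileUrl)
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => {
    pushStream.write(arrayBuffer); // write the raw WAV bytes into the stream
    pushStream.close();            // close so the service knows the audio has ended
  });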


3. Apply the Pronunciation Assessment Config Once

Make sure you apply the pronunciation assessment configuration to your recognizer only once before starting recognition:

JAVASCRIPT
pronunciationAssessmentConfig.applyTo(speechRecognizer);

A redundant call may not cause issues in every case but can be confusing when troubleshooting.


Final Revised Code Example

Below is a sample implementation combining these suggestions:

JAVASCRIPT
assessPronunciation(fileUrl) {
  const speechConfig = window.SpeechSDK.SpeechConfig.fromSubscription("xxx", "westeurope");
  speechConfig.speechRecognitionLanguage = "en-GB";

  // Fetch the WAV file and create a push stream
  fetch(fileUrl)
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => {
      // Create a push stream input and push binary data into it
      var pushStream = window.SpeechSDK.AudioInputStream.createPushStream();
      pushStream.write(arrayBuffer);
      pushStream.close();

      var audioConfig = window.SpeechSDK.AudioConfig.fromStreamInput(pushStream);

      // Use enumerations for grading system and granularity.
      var pronunciationAssessmentConfig = new window.SpeechSDK.PronunciationAssessmentConfig(
        "My voice is my passport, verify me.",
        window.SpeechSDK.PronunciationAssessmentGradingSystem.HundredMark,
        window.SpeechSDK.PronunciationAssessmentGranularity.Phoneme
      );

      var speechRecognizer = new window.SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);

      // Apply pronunciation assessment configuration only once.
      pronunciationAssessmentConfig.applyTo(speechRecognizer);

      speechRecognizer.sessionStarted = (s, e) => {
        console.log(`SESSION ID: ${e.sessionId}`);
      };

      speechRecognizer.recognizeOnceAsync(
        function(speechRecognitionResult) {
          if (speechRecognitionResult.reason === window.SpeechSDK.ResultReason.RecognizedSpeech) {
            var pronunciationAssessmentResult = window.SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
            console.log("pronunciationAssessmentResult", pronunciationAssessmentResult);

            var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(window.SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult);
            console.log("pronunciationAssessmentResultJson", pronunciationAssessmentResultJson);
          } else {
            console.error("Speech not recognized. Reason:", speechRecognitionResult);
          }
        },
        function(error) {
          console.error("Error during recognition:", error);
          if (error instanceof window.SpeechSDK.SpeechRecognitionCanceledEventArgs) {
            console.error("Recognition canceled. Reason:", error.reason);
            console.error("Error details:", error.errorDetails);
          }
        }
      );
    })
    .catch(error => {
      console.error("Error fetching WAV file:", error);
    });
}

Summary

  • Use SDK Enumerations: Replace string values with window.SpeechSDK.PronunciationAssessmentGradingSystem.HundredMark and window.SpeechSDK.PronunciationAssessmentGranularity.Phoneme.
  • Audio Configuration: Consider using a push stream for the audio, as the sample solution does.
  • Single Application of Config: Apply the pronunciation assessment configuration once before recognition.

These changes should help resolve the deserialization error. Happy coding!
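
As a quick sanity check once recognition succeeds, you can read the individual scores straight off the result object. A minimal sketch; accuracyScore, fluencyScore, completenessScore, and pronunciationScore are getters exposed by the JS SDK's PronunciationAssessmentResult:

JAVASCRIPT
// Inside the success callback, after ResultReason.RecognizedSpeech:
var result = window.SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
console.log("Accuracy:", result.accuracyScore);
console.log("Fluency:", result.fluencyScore);
console.log("Completeness:", result.completenessScore);
console.log("Overall pronunciation:", result.pronunciationScore);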


Answer by OrbitalCommander042 1 month ago

Try this code block out:

JAVASCRIPT
var sdk = require("microsoft-cognitiveservices-speech-sdk");
var fs = require("fs");

// not supported in node
// const audioConfig = sdk.AudioConfig.fromWavFileInput('myVoiceIsMyPassportVerifyMe01.wav');

// workaround
var filename = "myVoiceIsMyPassportVerifyMe01.wav"; // 16000 Hz, Mono
var pushStream = sdk.AudioInputStream.createPushStream();
fs.createReadStream(filename).on('data', function (arrayBuffer) {
  pushStream.write(arrayBuffer.slice());
}).on('end', function () {
  pushStream.close();
});
var audioConfig = sdk.AudioConfig.fromStreamInput(pushStream);

const conf = sdk.SpeechConfig.fromSubscription(
  'xxxx',
  'eastus'
);
conf.speechRecognitionLanguage = "en-GB";

var speechRecognizer = new sdk.SpeechRecognizer(conf, audioConfig);

// JavaScript has no named arguments, so pass the values positionally
// (the parameter names are noted in the comments)
var pronunciationAssessmentConfig = new sdk.PronunciationAssessmentConfig(
  "My voice is my passport, verify me.", // referenceText
  "HundredMark",                         // gradingSystem
  "Phoneme"                              // granularity
);
pronunciationAssessmentConfig.applyTo(speechRecognizer);

speechRecognizer.sessionStarted = (s, e) => {
  console.log('SESSION ID:' + e.sessionId);
};

speechRecognizer.recognizeOnceAsync(
  function (speechRecognitionResult) {
    if (speechRecognitionResult.reason === sdk.ResultReason.RecognizedSpeech) {
      // The pronunciation assessment result as a Speech SDK object
      var pronunciationAssessmentResult = sdk.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
      console.log("pronunciationAssessmentResult", pronunciationAssessmentResult);

      // The pronunciation assessment result as a JSON string
      var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(sdk.PropertyId.SpeechServiceResponse_JsonResult);
      console.log("pronunciationAssessmentResultJson", pronunciationAssessmentResultJson);
    } else {
      console.error("Speech not recognized. Reason:", speechRecognitionResult);
    }
  },
  function (error) {
    console.error("Error during recognition:", error);
    if (error instanceof sdk.SpeechRecognitionCanceledEventArgs) {
      console.error("Recognition canceled. Reason:", error.reason);
      console.error("Error details:", error.errorDetails);
    }
  }
);

A few catches:

1. AudioConfig.fromWavFileInput might not be supported in Node. I just used the workaround mentioned in this link, and it worked:
https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/813
2. the PronunciationAssessmentConfig needs to be passed as individual parameter values, not a JSON object (see the sketch after this list)
3. I used a sample WAV from here; you can swap in your own:
https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/sampledata/audiofiles/myVoiceIsMyPassportVerifyMe01.wav
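
To make catch 2 concrete, here is a minimal sketch of the difference (the commented-out object form is what triggers the deserialization error; positional parameters are what the constructor expects, shown here with the enum values from the first answer):

JAVASCRIPT
// Fails: a single JSON-like options object is not a valid constructor argument
// var pronunciationAssessmentConfig = new sdk.PronunciationAssessmentConfig({
//   referenceText: "My voice is my passport, verify me.",
//   gradingSystem: "HundredMark",
//   granularity: "Phoneme"
// });

// Works: individual positional parameters
var pronunciationAssessmentConfig = new sdk.PronunciationAssessmentConfig(
  "My voice is my passport, verify me.",                // referenceText
  sdk.PronunciationAssessmentGradingSystem.HundredMark, // gradingSystem
  sdk.PronunciationAssessmentGranularity.Phoneme        // granularity
);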

