Speech SDK - LoadGrammar() raising UnauthorizedAccessException
Keywords: Speech SDK, LoadGrammar, UnauthorizedAccessException | Updated: 2023-09-27 18:20:42
I am trying to use the speech recognition library in C# (.NET 4.5) on Windows 8.
I installed the "Microsoft Speech Platform SDK 11", and I get an exception when calling LoadGrammar.
My program:
using System;
using System.Speech.Recognition;
using System.Speech.Synthesis;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace SpeechRecognition
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create an in-process speech recognizer for the en-US locale.
            using (SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine())
            {
                // Create and load a dictation grammar.
                // An unhandled exception of type 'System.UnauthorizedAccessException' occurred in System.Speech.dll
                recognizer.LoadGrammar(new DictationGrammar());

                // Add a handler for the speech recognized event.
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

                // Configure input to the speech recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                // Start asynchronous, continuous speech recognition.
                recognizer.RecognizeAsync(RecognizeMode.Multiple);

                // Keep the console window open.
                while (true)
                {
                    Console.ReadLine();
                }
            }
        }

        // Handle the SpeechRecognized event.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("Recognized text: " + e.Result.Text);
        }
    }
}
An unhandled exception of type 'System.UnauthorizedAccessException' occurred in System.Speech.dll

Stack trace:

at System.Speech.Recognition.RecognizerBase.Initialize(SapiRecognizer recognizer, Boolean inproc)
at System.Speech.Recognition.SpeechRecognitionEngine.get_RecoBase()
at System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar(Grammar grammar)
at SpeechRecognition.Program.Main(String[] args) in 测试中心\语音识别\语音识别\Program.cs:line 23
at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
I tested on both Win7 and Win8, and neither worked.
Can anyone help me?
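One way to narrow this down (a diagnostic sketch, not a confirmed fix): System.Speech initializes its SAPI recognizer lazily, and the parameterless SpeechRecognitionEngine constructor relies on a default-culture lookup that can fail if no matching recognizer is installed. Listing the installed recognizers and constructing the engine with an explicit culture, while catching the exception to see its full details, can show whether that lookup is the problem. The class and culture choice below are assumptions for illustration:

```csharp
using System;
using System.Globalization;
using System.Speech.Recognition;

class RecognizerDiagnostics
{
    static void Main()
    {
        // List every recognizer System.Speech can see; an empty list means
        // there is nothing for the engine to initialize with.
        foreach (RecognizerInfo info in SpeechRecognitionEngine.InstalledRecognizers())
        {
            Console.WriteLine("{0} ({1})", info.Name, info.Culture);
        }

        try
        {
            // Request a recognizer for an explicit culture instead of
            // relying on the default lookup.
            using (var recognizer = new SpeechRecognitionEngine(new CultureInfo("en-US")))
            {
                recognizer.LoadGrammar(new DictationGrammar());
                Console.WriteLine("LoadGrammar succeeded.");
            }
        }
        catch (UnauthorizedAccessException ex)
        {
            // Print the full exception, including any inner COM error code,
            // which is more useful than the bare message.
            Console.WriteLine(ex);
        }
    }
}
```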
Oddly enough, I seem to remember having a similar problem with the Speech SDK, but I can't find the solution anymore. I believe it involved changing the owner or the access rights of some file or folder on the machine. Some more googling may turn up the fix I found back then, or you could use Process Monitor to see what the process is trying to do and failing at. Maybe eventvwr will show something, too.
I tried installing the Speech Platform SDK 11 and the Speech Platform Runtime, but I believe that is the server version of the technology, used through the Microsoft.Speech namespace of its .NET wrapper. I also installed Speech SDK 5.3, but I don't think that is the latest version. In the end I installed the Windows 8.1 SDK, and I believe that is what did the trick for me. This works fine in my WPF application:
XAML:
<Window x:Class="SpeechTestApp.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="MainWindow" Height="350" Width="525">
<Grid>
<TextBlock
x:Name="tb"/>
</Grid>
</Window>
C#
using System.Diagnostics;
using System.Globalization;
using System.Speech.Recognition;
using System.Windows;

namespace SpeechTestApp
{
    public partial class MainWindow : Window
    {
        private SpeechRecognitionEngine recognizer;

        public MainWindow()
        {
            InitializeComponent();

            // Create a SpeechRecognitionEngine object for the default recognizer in the en-US locale.
            this.recognizer = new SpeechRecognitionEngine(new CultureInfo("en-US"));

            // Create a grammar for finding services in different cities.
            Choices services = new Choices(new string[] { "restaurants", "hotels", "gas stations" });
            Choices cities = new Choices(new string[] { "Seattle", "Boston", "Dallas" });
            GrammarBuilder findServices = new GrammarBuilder("Find");
            findServices.Append(services);
            findServices.Append("near");
            findServices.Append(cities);

            // Create a Grammar object from the GrammarBuilder and load it into the recognizer.
            Grammar servicesGrammar = new Grammar(findServices);
            recognizer.LoadGrammarAsync(servicesGrammar);

            // Add a handler for the speech recognized event.
            recognizer.SpeechRecognized += recognizer_SpeechRecognized;
            recognizer.SpeechDetected += RecognizerOnSpeechDetected;
            recognizer.SpeechHypothesized += RecognizerOnSpeechHypothesized;
            recognizer.SpeechRecognitionRejected += RecognizerOnSpeechRecognitionRejected;
            recognizer.AudioStateChanged += RecognizerOnAudioStateChanged;
            recognizer.AudioSignalProblemOccurred += RecognizerOnAudioSignalProblemOccurred;

            // Configure the input to the speech recognizer.
            recognizer.SetInputToDefaultAudioDevice();

            // Start asynchronous, continuous speech recognition.
            recognizer.RecognizeAsync(RecognizeMode.Multiple);
        }

        private void RecognizerOnAudioSignalProblemOccurred(object sender, AudioSignalProblemOccurredEventArgs audioSignalProblemOccurredEventArgs)
        {
            Debug.WriteLine(audioSignalProblemOccurredEventArgs.AudioSignalProblem.ToString());
        }

        private void RecognizerOnAudioStateChanged(object sender, AudioStateChangedEventArgs audioStateChangedEventArgs)
        {
            Debug.WriteLine(audioStateChangedEventArgs.AudioState.ToString());
        }

        private void RecognizerOnSpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs speechRecognitionRejectedEventArgs)
        {
            Debug.WriteLine("RecognizerOnSpeechRecognitionRejected: " + speechRecognitionRejectedEventArgs.Result.Text);
        }

        private void RecognizerOnSpeechHypothesized(object sender, SpeechHypothesizedEventArgs speechHypothesizedEventArgs)
        {
            Debug.WriteLine("Hypothesized: " + speechHypothesizedEventArgs.Result.Text);
            tb.Text = speechHypothesizedEventArgs.Result.Text;
        }

        private void RecognizerOnSpeechDetected(object sender, SpeechDetectedEventArgs e)
        {
            Debug.WriteLine("Detected position: " + e.AudioPosition);
        }

        private void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Debug.WriteLine("Recognized text: " + e.Result.Text);
            tb.Text = e.Result.Text;
        }
    }
}
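One thing the sample above never does is stop the engine: RecognizeAsync(RecognizeMode.Multiple) keeps recognition running until it is explicitly stopped. A minimal cleanup sketch (my addition, not part of the original answer) would override OnClosed in the same MainWindow partial class, using the real RecognizeAsyncStop and Dispose members of SpeechRecognitionEngine:

```csharp
using System;
using System.Speech.Recognition;
using System.Windows;

namespace SpeechTestApp
{
    // Fragment meant to be merged into the MainWindow class above: stop
    // continuous recognition and release the engine when the window closes,
    // so the microphone and SAPI resources are freed promptly.
    public partial class MainWindow : Window
    {
        protected override void OnClosed(EventArgs e)
        {
            if (recognizer != null)
            {
                // RecognizeAsyncStop finishes the phrase in progress and then
                // stops; RecognizeAsyncCancel would abort immediately instead.
                recognizer.RecognizeAsyncStop();
                recognizer.Dispose();
                recognizer = null;
            }
            base.OnClosed(e);
        }
    }
}
```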