Using System.Speech with the Kinect
Keywords: speech, Kinect, System.Speech | Updated: 2023-09-27 18:16:46
I am developing a prototype speech-to-text captioning application for a university project. I will be using gesture recognition later in the project, so I thought that using the Kinect as the microphone source, rather than an additional microphone, would be a good idea. The idea of my application is to recognize spontaneous speech such as long and complex sentences (I do realize that speech dictation will never be perfect). I have seen many Kinect speech samples that reference Microsoft.Speech, but not System.Speech. Since I need to train the speech engine and load a DictationGrammar into the speech recognition engine, System.Speech is my only option.
I have managed to make it work when using the Kinect as a direct microphone audio source, but since I am now loading the Kinect for video preview and gesture recognition, I can no longer access it as a direct microphone.
This is the code that accesses the microphone directly, without loading the Kinect hardware for gestures and so on, and it works perfectly:
private void InitializeSpeech()
{
    var speechRecognitionEngine = new SpeechRecognitionEngine();
    speechRecognitionEngine.SetInputToDefaultAudioDevice();
    speechRecognitionEngine.LoadGrammar(new DictationGrammar());
    speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
    speechRecognitionEngine.SpeechRecognized += (s, args) => MessageBox.Show(args.Result.Text);
}
And this is where I need to access the audio source through the Kinect after it has been loaded, which does nothing at all. This is what I want to do:
using (var audioSource = new KinectAudioSource())
{
    audioSource.FeatureMode = true;
    audioSource.AutomaticGainControl = false;
    audioSource.SystemMode = SystemMode.OptibeamArrayOnly;

    var recognizerInfo = GetKinectRecognizer();
    var speechRecognitionEngine = new SpeechRecognitionEngine(recognizerInfo.Id);
    speechRecognitionEngine.LoadGrammar(new DictationGrammar());
    speechRecognitionEngine.SpeechRecognized += (s, args) => MessageBox.Show(args.Result.Text);

    using (var s = audioSource.Start())
    {
        speechRecognitionEngine.SetInputToAudioStream(s, new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
        speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
    }
}
So the question is: is it possible to use System.Speech instead of Microsoft.Speech with the current Kinect SDK, and what am I doing wrong in the second code sample?
The GetKinectRecognizer method:

private static RecognizerInfo GetKinectRecognizer()
{
    Func<RecognizerInfo, bool> matchingFunc = r =>
    {
        string value;
        r.AdditionalInfo.TryGetValue("Kinect", out value);
        return "True".Equals(value, StringComparison.InvariantCultureIgnoreCase)
            && "en-US".Equals(r.Culture.Name, StringComparison.InvariantCultureIgnoreCase);
    };
    return SpeechRecognitionEngine.InstalledRecognizers().Where(matchingFunc).FirstOrDefault();
}
From my own experiments, I can tell you that it is actually possible to use both libraries at the same time.
Try this code instead of your current code (make sure you have added a reference to System.Speech, obviously):
using (var audioSource = new KinectAudioSource())
{
    audioSource.FeatureMode = true;
    audioSource.AutomaticGainControl = false;
    audioSource.SystemMode = SystemMode.OptibeamArrayOnly;

    System.Speech.Recognition.RecognizerInfo ri = GetKinectRecognizer();
    var speechRecognitionEngine = new SpeechRecognitionEngine(ri.Id);
    speechRecognitionEngine.LoadGrammar(new DictationGrammar());
    speechRecognitionEngine.SpeechRecognized += (s, args) => MessageBox.Show(args.Result.Text);

    using (var s = audioSource.Start())
    {
        speechRecognitionEngine.SetInputToAudioStream(s, new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
        speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
    }
}
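One caveat about the sample above (my own observation, not something from the Kinect documentation): RecognizeAsync returns immediately, so the inner using block disposes the Kinect audio stream right after recognition starts, which can silently stop recognition. Keeping the audio source, stream, and engine alive for the lifetime of recognition, for example as fields, avoids that. A sketch under the same assumed KinectAudioSource API:

```csharp
// Sketch: hold the audio source, stream, and engine as fields so they
// outlive the method call, instead of disposing them immediately after
// RecognizeAsync (which is asynchronous) returns.
private KinectAudioSource audioSource;
private Stream audioStream;
private SpeechRecognitionEngine engine;

private void StartKinectRecognition()
{
    audioSource = new KinectAudioSource
    {
        FeatureMode = true,
        AutomaticGainControl = false,
        SystemMode = SystemMode.OptibeamArrayOnly
    };

    engine = new SpeechRecognitionEngine(GetKinectRecognizer().Id);
    engine.LoadGrammar(new DictationGrammar());
    engine.SpeechRecognized += (s, args) => MessageBox.Show(args.Result.Text);

    audioStream = audioSource.Start();
    engine.SetInputToAudioStream(audioStream,
        new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
    engine.RecognizeAsync(RecognizeMode.Multiple);
    // Dispose audioStream and audioSource only when recognition should stop.
}
```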
Good luck!