Building a Jarvis-like application for local languages

The Jarvis application as currently developed is in English, and I want to customize it to use my local language. How do I develop this kind of app for local languages, and which programming languages do I need to know to proceed with the development? I have tested the English version of Jarvis and it works well for me. How do I connect C# with HTK for this purpose?
There are 2 answers below.
You don't need to develop from scratch; take existing software and build on it. For example, you can consider https://github.com/jasperproject/jasper-client, which is actively developed.
Most NLP libraries are written in Python or Java. You will also need shell scripting (awk/Perl) experience, because models are often built with Linux command-line tools.
For speech recognition, the easiest option is CMUSphinx; the tutorial for adding your own language to CMUSphinx is at http://cmusphinx.sourceforge.net/wiki/tutorialam.
There are several ways to handle the interoperability:
1) C# can invoke the HTK tools as external binaries through Process.Start (a minimal sketch follows this list): http://msdn.microsoft.com/en-us/library/system.diagnostics.process.start(v=vs.110).aspx
2) You can build HTK as a native library and call it from C# via P/Invoke.
3) You can wrap the HTK tools in a TCP or HTTP server and connect to it from your C# application to get the recognition results (a client-side sketch appears at the end of this answer).
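For option 1, here is a minimal sketch of calling an HTK tool (HVite in this example) from C# with Process.Start and capturing its output. The model, script, dictionary, and word-network file names are placeholders for whatever your own HTK setup uses:

```csharp
using System;
using System.Diagnostics;

class HtkRunner
{
    static void Main()
    {
        // File names below are placeholders for your own HTK model files.
        var psi = new ProcessStartInfo
        {
            FileName = "HVite",
            Arguments = "-H hmm/hmmdefs -S test.scp -i results.mlf " +
                        "-w wdnet dict monophones",
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (var proc = Process.Start(psi))
        {
            // Read whatever HVite prints to stdout, then wait for it to finish.
            string output = proc.StandardOutput.ReadToEnd();
            proc.WaitForExit();

            Console.WriteLine("HVite exit code: " + proc.ExitCode);
            Console.WriteLine(output);
        }
    }
}
```

The same pattern works for the other HTK command-line tools (HCopy, HERest, etc.); after the run you would parse the output MLF file to get the recognized words.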
Overall, you can probably build on the existing solutions mentioned above; the hard parts are already implemented, and you only need to configure them for your local language.
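If you go with option 3, here is a minimal sketch of the C# client side. It assumes you have already wrapped the HTK tools in a server listening on port 5000 with a simple line-based protocol; the host, port, and protocol are assumptions, not something HTK provides:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

class RecognitionClient
{
    static void Main()
    {
        // Host, port and the line-based protocol are assumptions;
        // adapt them to whatever server you build around the HTK tools.
        using (var client = new TcpClient("localhost", 5000))
        using (var stream = client.GetStream())
        using (var writer = new StreamWriter(stream) { AutoFlush = true })
        using (var reader = new StreamReader(stream))
        {
            // Ask the server to recognize a given audio file.
            writer.WriteLine("RECOGNIZE test.wav");

            // Read the recognition result back as a single line of text.
            string result = reader.ReadLine();
            Console.WriteLine("Recognized: " + result);
        }
    }
}
```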