How does a transition-based dependency parser decide which operation to perform next at each configuration?
I understand that the model uses previously trained part-of-speech tags during parsing. But what if most of the words are new? How would the parser decide which operation to take then?
I'd like to flesh out @Quantum's answer into a more detailed one as follows:
Before 2014, many parsers depended on a manually designed set of feature templates, and such methods have two drawbacks: 1) they require a lot of linguistic expertise and the templates are usually incomplete; 2) most of the runtime is consumed by feature extraction at each configuration. After Chen and Manning published their paper, A Fast and Accurate Dependency Parser using Neural Networks (2014), almost all transition-based parsers came to rely on neural networks instead.
Let's see how Chen and Manning did the job.
As illustrated in the architecture diagram in the paper, the output of the neural network is a probability distribution over transitions produced by a softmax function, so deciding the next operation reduces to a simple classification problem given some information about the current configuration. That information has three parts:
1) words: the top 3 words on the stack and on the buffer; the first and second leftmost/rightmost children of the top two words on the stack; and the leftmost-of-leftmost and rightmost-of-rightmost grandchildren of those two words (18 tokens in total);
2) the POS tags of all of the above (18 tags);
3) the arc labels of the children and grandchildren (12 labels).
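To make the feature set concrete, here is a minimal Python sketch (not the paper's code; the index-based stack/buffer and the (head, dependent, label) arc triples are my own assumed representation) of how the 18 word positions can be read off an arc-standard configuration:

```python
NULL = "<NULL>"  # placeholder used when a position does not exist

def leftmost_child(arcs, head, k=1):
    # k-th leftmost dependent of `head`, or None if there is none
    deps = sorted(d for h, d, _ in arcs if h == head and d < head)
    return deps[k - 1] if len(deps) >= k else None

def rightmost_child(arcs, head, k=1):
    # k-th rightmost dependent of `head`, or None if there is none
    deps = sorted((d for h, d, _ in arcs if h == head and d > head), reverse=True)
    return deps[k - 1] if len(deps) >= k else None

def word_features(stack, buffer, arcs, words):
    # stack/buffer hold token indices; arcs are (head, dependent, label) triples
    get = lambda i: words[i] if i is not None else NULL
    feats = [get(stack[-i]) if len(stack) >= i else NULL for i in (1, 2, 3)]
    feats += [get(buffer[i]) if len(buffer) > i else NULL for i in (0, 1, 2)]
    for i in (1, 2):  # the top two words on the stack
        s = stack[-i] if len(stack) >= i else None
        for child in (leftmost_child, rightmost_child):
            feats.append(get(child(arcs, s)) if s is not None else NULL)
            feats.append(get(child(arcs, s, k=2)) if s is not None else NULL)
        lc = leftmost_child(arcs, s) if s is not None else None
        rc = rightmost_child(arcs, s) if s is not None else None
        feats.append(get(leftmost_child(arcs, lc)) if lc is not None else NULL)
        feats.append(get(rightmost_child(arcs, rc)) if rc is not None else NULL)
    return feats  # 18 word features; the POS and label features mirror these

words = ["<ROOT>", "He", "has", "good", "control", "."]
print(word_features([0, 2], [3, 4, 5], [(2, 1, "nsubj")], words))
```

The POS-tag and arc-label features are collected from exactly the same positions, so the classifier sees 18 + 18 + 12 = 48 inputs per configuration.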
The inputs are looked up in embedding matrices and the resulting vectors are concatenated, then transformed by two weight matrices (with, as shown in the paper's figure, a cube activation function between them: h = (W1 x + b1)^3) to become the logits and finally, after the softmax, the distribution over transitions at the top of the network (in the unlabeled case the three operations SHIFT, LEFT-ARC, and RIGHT-ARC).
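To make the computation concrete, here is a minimal NumPy sketch of the forward pass. The 50-dimensional embeddings and 200 hidden units follow the paper; the single shared embedding table, vocabulary size, weight names, and random initialization are simplifications of my own:

```python
import numpy as np

rng = np.random.default_rng(0)

n_feats, d_embed, d_hidden, n_trans = 48, 50, 200, 3  # 18 + 18 + 12 features
# one embedding table for simplicity; the paper uses separate word/tag/label tables
E = rng.normal(scale=0.01, size=(10_000, d_embed))
W1 = rng.normal(scale=0.01, size=(d_hidden, n_feats * d_embed))
b1 = np.zeros(d_hidden)
W2 = rng.normal(scale=0.01, size=(n_trans, d_hidden))

def predict_transition(feature_ids):
    # feature_ids: 48 integer ids for the word/POS/label features
    x = E[feature_ids].reshape(-1)     # concatenate the 48 embeddings
    h = (W1 @ x + b1) ** 3             # the cube activation from the paper
    logits = W2 @ h
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()                 # [P(SHIFT), P(LEFT-ARC), P(RIGHT-ARC)]

probs = predict_transition(rng.integers(0, 10_000, size=n_feats))
print(probs, probs.argmax())           # pick the best transition greedily
```

At parse time the parser is greedy: it applies the highest-probability transition, updates the stack, buffer, and arc set, and repeats until the buffer is empty and only the root remains on the stack. That also answers the unseen-word part of the question: in the paper, rare and unseen words are mapped to a special UNKNOWN token whose embedding is learned like any other, and the POS-tag and arc-label features still carry plenty of signal, so the classifier can choose a sensible transition even for words it never saw in training.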
HTH :)
References: 1) Chen and Manning (2014), A Fast and Accurate Dependency Parser using Neural Networks; 2) CMU Neural Nets for NLP 2017, Lecture 12: Transition-based Dependency Parsing