Working on your text...
Welcome to Pythia
Pythia is an online tool for word sense disambiguation and sentiment analysis.
To test this demo, insert your text in the text area on the left and click Next, or click on the Magic Wand and watch.
For more details check the About page.
Select text type
Based on your choice, a different parser is used to break your text into tokens.
Choose this option if you entered a short, Twitter-like message.
Choose this option if you entered a longer and/or more formal text.
Select disambiguation method
First Sense (FS)
This method chooses the first (most popular) sense of each word as it appears in WordNet. Very fast. No actual disambiguation is performed.
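A minimal sketch of the First Sense heuristic. The toy sense inventory below is a hypothetical stand-in for WordNet (with NLTK's WordNet interface the same idea would be `wn.synsets(word)[0]`):

```python
# Toy sense inventory: senses listed most popular first, as in WordNet.
# These entries are illustrative, not real WordNet data.
SENSES = {
    "bank": ["bank.n.01 (financial institution)", "bank.n.02 (river bank)"],
    "bass": ["bass.n.01 (low-pitched tone)", "bass.n.02 (the fish)"],
}

def first_sense(word):
    """Return the most popular sense; no actual disambiguation."""
    return SENSES[word][0]

print(first_sense("bank"))  # bank.n.01 (financial institution)
```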
Weighted Degree (WDEG)
This method chooses, for each word in a sentence, the sense with the highest degree in the sentence graph. Quite fast.
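A sketch of the Weighted Degree idea on a hypothetical sentence graph: nodes are candidate senses, weighted edges encode pairwise sense relatedness, and each word keeps the sense with the largest summed edge weight. The graph and weights below are made up for illustration:

```python
# Hypothetical sentence graph: edge weights are sense-relatedness scores.
EDGES = {
    ("bank.1", "money.1"): 0.9,
    ("bank.2", "money.1"): 0.1,
    ("bank.1", "deposit.1"): 0.8,
    ("bank.2", "deposit.1"): 0.2,
}

def weighted_degree(sense):
    """Sum of weights of all edges touching this sense node."""
    return sum(w for pair, w in EDGES.items() if sense in pair)

def wdeg(candidates):
    """candidates: dict mapping each word to its candidate senses."""
    return {word: max(senses, key=weighted_degree)
            for word, senses in candidates.items()}

print(wdeg({"bank": ["bank.1", "bank.2"]}))  # {'bank': 'bank.1'}
```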
Integer Linear Programming (ILP)
This method chooses, for the words of a sentence, the senses that maximize the overall semantic coherence of the sentence. Moderately fast, but more accurate.
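The objective behind this method can be illustrated on a toy two-word sentence: pick one sense per word so the total pairwise relatedness of the chosen senses is maximal. A real ILP solver handles this at scale; the exhaustive search below is only a stand-in, and all relatedness scores are hypothetical:

```python
from itertools import product

# Hypothetical pairwise sense-relatedness scores.
REL = {("bank.1", "deposit.1"): 0.8, ("bank.1", "deposit.2"): 0.1,
       ("bank.2", "deposit.1"): 0.2, ("bank.2", "deposit.2"): 0.3}

def coherence(assignment):
    """Total pairwise relatedness of one sense assignment."""
    total = 0.0
    for i, a in enumerate(assignment):
        for b in assignment[i + 1:]:
            total += REL.get((a, b), 0.0) + REL.get((b, a), 0.0)
    return total

# One candidate-sense list per word of the sentence.
candidates = [["bank.1", "bank.2"], ["deposit.1", "deposit.2"]]
best = max(product(*candidates), key=coherence)
print(best)  # ('bank.1', 'deposit.1')
```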
Select similarity metric
Our WSD methods rely on sense relatedness.
We currently offer three solutions.
Semantic Relatedness (SR)
This metric computes pairwise sense relatedness on the semantic graph of WordNet. A knowledge-based metric.
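One common way to score relatedness on a semantic graph, shown here as a sketch: relatedness is `1 / (1 + shortest-path length)` between two sense nodes. The tiny graph is a hypothetical fragment standing in for WordNet's relation graph, and the exact scoring formula is an assumption, not necessarily the one Pythia uses:

```python
from collections import deque

# Hypothetical fragment of a WordNet-like semantic graph (adjacency lists).
GRAPH = {
    "dog.n.01": ["canine.n.02"],
    "canine.n.02": ["dog.n.01", "carnivore.n.01"],
    "carnivore.n.01": ["canine.n.02", "feline.n.01"],
    "feline.n.01": ["carnivore.n.01", "cat.n.01"],
    "cat.n.01": ["feline.n.01"],
}

def shortest_path(a, b):
    """BFS shortest-path length between two senses, or None."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nbr in GRAPH[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

def relatedness(a, b):
    d = shortest_path(a, b)
    return 0.0 if d is None else 1.0 / (1.0 + d)

print(relatedness("dog.n.01", "cat.n.01"))  # 0.2 (path length 4)
```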
Pointwise Mutual Information (PMI)
This metric computes pairwise sense relatedness based on sense co-occurrence in a semantically annotated corpus. A corpus-based metric.
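The PMI of two senses follows the standard formula `PMI(a, b) = log2(p(a, b) / (p(a) * p(b)))`, with probabilities estimated from co-occurrence counts in the annotated corpus. A sketch with made-up counts:

```python
import math

# Hypothetical counts from a sense-annotated corpus.
N = 10000       # total number of contexts
count_a = 200   # contexts containing sense a
count_b = 300   # contexts containing sense b
count_ab = 60   # contexts containing both senses

def pmi(c_a, c_b, c_ab, n):
    """Pointwise mutual information of two senses, in bits."""
    return math.log2((c_ab / n) / ((c_a / n) * (c_b / n)))

print(round(pmi(count_a, count_b, count_ab, N), 3))  # 3.322
```

A positive PMI means the two senses co-occur more often than chance, i.e. they are semantically related.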
This metric computes pairwise sense relatedness based on the Lesk-like similarity of sense glosses in WordNet.
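A minimal sketch of Lesk-style gloss overlap: the relatedness of two senses is the number of non-stopword words their glosses share. The glosses and stopword list below are abbreviated stand-ins, not real WordNet data:

```python
# Small illustrative stopword list.
STOP = {"a", "an", "the", "of", "or", "and", "that"}

def overlap(gloss_a, gloss_b):
    """Count non-stopword words shared by two sense glosses."""
    words_a = set(gloss_a.lower().split()) - STOP
    words_b = set(gloss_b.lower().split()) - STOP
    return len(words_a & words_b)

g1 = "a financial institution that accepts deposits"
g2 = "an institution that manages money and deposits"
print(overlap(g1, g2))  # 2 ("institution", "deposits")
```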
Select sense pruning
Use sense pruning to accelerate the WSD process. All senses whose definitions contain none of the sentence terms are ignored.
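A sketch of this pruning filter: drop every candidate sense whose definition mentions none of the sentence's terms. The definitions below are abbreviated stand-ins for WordNet glosses:

```python
# Hypothetical sense definitions (abbreviated WordNet-style glosses).
DEFINITIONS = {
    "bank.1": "a financial institution that accepts deposits",
    "bank.2": "sloping land beside a body of water",
}

def prune(senses, sentence_terms):
    """Keep only senses whose definition shares a word with the sentence."""
    kept = []
    for sense in senses:
        definition_words = set(DEFINITIONS[sense].split())
        if definition_words & sentence_terms:
            kept.append(sense)
    return kept

terms = {"money", "deposits", "account"}
print(prune(["bank.1", "bank.2"], terms))  # ['bank.1']
```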
Use this option if you do not want any senses to be ignored.
Select sentence sentiment classification model
Represent the text using the 40 semantic features. Logistic Regression will be used as the model.
Represent the text using the 214,342 term n-gram features. SVM will be used as the model.
Represent the text using the 11,923 character n-gram features. Naive Bayes will be used as the model.
Represent the text using the union of all n-grams (225,475 features). Naive Bayes will be used as the model.
Represent the text using the union of all features (225,515 features). Naive Bayes will be used as the model.
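The two n-gram feature families above can be sketched as follows; the feature counts in the real models come from the training-corpus vocabulary, so the toy extractors below only illustrate what a term n-gram and a character n-gram are:

```python
def term_ngrams(text, n):
    """Sliding window of n consecutive words (term n-grams)."""
    tokens = text.split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def char_ngrams(text, n):
    """Sliding window of n consecutive characters (character n-grams)."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

text = "great movie"
print(term_ngrams(text, 2))       # ['great movie']
print(char_ngrams(text, 3)[:3])   # ['gre', 'rea', 'eat']
```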
You are done
Press Check to disambiguate and detect sentiment. Press Back to change your options. Press
to start over with the same text.