I am currently working on a Java web server project that requires natural language processing, specifically Named Entity Recognition (NER).
I was using OpenNLP for Java, since it is easy to add custom training data, and it works perfectly.
However, I also need to be able to extract entities inside other entities (nested named entity recognition). I tried doing this in OpenNLP, but I got parsing errors, so my guess is that OpenNLP sadly does not support nested entities.
Here is an example of what I need to parse:
Remind me to [START:reminder] give some presents to [START:contact] John [END] and [START:contact] Charlie [END][END].
If this cannot be achieved with OpenNLP, is there any other Java NLP library that can do this? If there are no Java libraries at all, are there NLP libraries in any other language that can?
Please help. Thanks!
The short answer is:
This cannot be achieved with OpenNLP NER, which is suitable only for contiguous entities because it uses a BIO tagging scheme.
I don't know of any library, in any language, capable of doing this.
I think you are stretching the concept of an entity too far: it is habitually associated with persons, places, organizations, gene names, etc.,
but not with the identification of complex structures within text.
For that purpose you need to think about a more elaborate solution, one that takes into account the grammatical structure of the sentence, which can be obtained with a parser like the one in OpenNLP, and perhaps combines it with the output of the NER process.
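To see why the BIO scheme rules this out, consider the question's example encoded as flat tag sequences (a minimal sketch; the label names are illustrative): each token gets exactly one tag per sequence, so a single sequence cannot mark "John" as both inside the reminder and the start of a contact.

```python
# Why flat BIO tagging cannot express nesting: one tag per token.
tokens = ["Remind", "me", "to", "give", "some", "presents", "to",
          "John", "and", "Charlie", "."]

# Outer entity only (the reminder span).
reminder = ["O", "O", "O",
            "B-reminder", "I-reminder", "I-reminder", "I-reminder",
            "I-reminder", "I-reminder", "I-reminder", "O"]

# Inner entities only (the two contacts).
contact = ["O", "O", "O", "O", "O", "O", "O",
           "B-contact", "O", "B-contact", "O"]

# "John" (index 7) would need two different tags at once, so the
# nested annotation cannot be collapsed into one BIO sequence.
print(reminder[7], contact[7])
```

A common workaround, shown later in this thread, is to flatten the data into one tag sequence per nesting level and train a separate flat model on each.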
For the purpose of named entity recognition (Java-based) I use the following:
Apache UIMA
ClearTK
https://github.com/merishav/cleartk-tutorials
You can train models for your use case; I have already trained NER models for person, places, date of birth, and profession.
ClearTK gives you a wrapper around MalletCRFClassifier.
Use this Python 3 source code: https://gist.github.com/ttpro1995/cd8c60cfc72416a02713bb93dff9ae6f
It creates multiple un-nested versions of the nested data for you.
For the input sentence below (the input data must be tokenized first, so there are spaces between the tags and everything around them):
Remind me to <START:reminder> give some presents to <START:contact> John <END> and <START:contact> Charlie <END> <END> .
It outputs multiple sentences with different nesting levels:
Remind me to give some presents to John and Charlie .
Remind me to <START:reminder> give some presents to John and Charlie <END> .
Remind me to give some presents to <START:contact> John <END> and <START:contact> Charlie <END> .
Here is the full source code for quick copy-paste:
import sys

END_TAG = 0
START_TAG = 1
NOT_TAG = -1


def detect_tag(in_token):
    """
    Detect the tag in a token.
    :param in_token:
    :return:
    """
    if "<START:" in in_token:
        return START_TAG
    elif "<END>" == in_token:
        return END_TAG
    return NOT_TAG


def remove_nest_tag(in_str):
    """
    Handles nested tags, e.g. (from a Vietnamese corpus) with
    <START:ORGANIZATION> Sở Cảnh sát Phòng cháy , chữa cháy ( PCCC ) và cứu nạn , cứu hộ <START:LOCATION> Hà Nội <END> <END>
    :param in_str:
    :return:
    """
    state = 0
    tag_dict = dict()
    sentence_token = in_str.split()

    # record the nesting level of every tag token
    max_nest = 0
    for index, token in enumerate(sentence_token):
        tag = detect_tag(token)
        if tag == START_TAG:
            state += 1
            if max_nest < state:
                max_nest = state
            tag_dict[index] = (index, state, token)
        elif tag == END_TAG:
            tag_dict[index] = (index, state, token)
            state -= 1

    # emit one sentence per nesting level (level 0 = no tags at all)
    generate_sentences = []
    for state in range(max_nest + 1):
        generate_sentence_token = []
        for index, token in enumerate(sentence_token):
            if detect_tag(token) >= 0:  # is a tag
                if tag_dict[index][1] == state:
                    generate_sentence_token.append(token)
            else:  # not a tag
                generate_sentence_token.append(token)
        generate_sentences.append(' '.join(generate_sentence_token))
    return generate_sentences


def test():
    tstr2 = ("Remind me to <START:reminder> give some presents to "
             "<START:contact> John <END> and <START:contact> Charlie <END> <END> .")
    result = remove_nest_tag(tstr2)
    print("-----")
    for sentence in result:
        print(sentence)


if __name__ == "__main__":
    """
    un-nest a dataset for OpenNLP name finder training
    """
    # test()
    if len(sys.argv) > 1:
        inpath = sys.argv[1]
        with open(inpath, 'r') as infile, open(inpath + ".out", 'w') as outfile:
            for line in infile:
                for sentence in remove_nest_tag(line):
                    outfile.write(sentence + "\n")
    else:
        print("usage: python unnest_data.py input.txt")
I'm using the Java API of Apache Jena to store and retrieve documents and the words within them. For this I decided to set up the following data structure:
_dataset = TDBFactory.createDataset("./database");
_dataset.begin(ReadWrite.WRITE);
Model model = _dataset.getDefaultModel();
Resource document= model.createResource("http://name.space/Source/DocumentA");
document.addProperty(RDF.value, "Document A");
Resource word = model.createResource("http://name.space/Word/aword");
word.addProperty(RDF.value, "aword");
Resource resource = model.createResource();
resource.addProperty(RDF.value, word);
resource.addProperty(RSS.items, "5");
document.addProperty(RDF.type, resource);
_dataset.commit();
_dataset.end();
The code example above represents a document ("Document A") containing five (5) occurrences of a word ("aword"). The occurrences of a word in a document are counted and stored as a property. A word can also occur in other documents, so the occurrence count relating a specific word to a specific document is linked to both through a blank node. (I'm not entirely sure whether this structure makes any sense, as I'm fairly new to this way of storing information, so please feel free to provide better solutions!)
My major question is: How can I get a list of all distinct words and the sum of their occurences over all documents?
Your data model is a bit unconventional, in my opinion. With your code you'll end up with data that looks like this (in Turtle notation), which uses rdf:type and rdf:value in unconventional ways:
:doc rdf:value "document a" ;
     rdf:type :resource .

:resource rdf:value :word ;
          :items 5 .

:word rdf:value "aword" .
It's unusual because you wouldn't normally attach such complex information to the rdf:type of a resource. From the SPARQL standpoint, though, rdf:type and rdf:value are properties just like any other, and you can still retrieve the information you're looking for with a simple query. It would look more or less like this (though you'll need to define some prefixes, etc.):
select ?word (sum(?n) as ?nn) where {
  ?document rdf:type ?type .
  ?type rdf:value/rdf:value ?word ;
        :items ?n .
}
group by ?word
That query will produce a result for each word, and with each will be the sum of all the values of the :items properties associated with the word. There are lots of questions on Stack Overflow that have examples of running SPARQL queries with Jena. E.g., (the first one that I found with Google): Query Jena TDB store.
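If you do want a more conventional model, one option is to give each word occurrence its own blank node under an explicit predicate and keep rdf:type for real classes. This is only a sketch: :Document, :Word, :hasOccurrence, :ofWord and :count are made-up names, not part of any standard vocabulary.

```turtle
:documentA a :Document ;
    rdfs:label "Document A" ;
    :hasOccurrence [ :ofWord :aword ;
                     :count  5 ] .

:aword a :Word ;
    rdfs:label "aword" .
```

The aggregation then reads naturally: match `?doc :hasOccurrence [ :ofWord/rdfs:label ?word ; :count ?n ]` and group by ?word. Also note that the count should be stored as a typed number (e.g. 5, not the string "5"), or sum(?n) will not add up.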
The Stanford NLP, demo'd here, gives an output like this:
Colorless/JJ green/JJ ideas/NNS sleep/VBP furiously/RB ./.
What do the Part of Speech tags mean? I am unable to find an official list. Is it Stanford's own system, or are they using universal tags? (What is JJ, for instance?)
Also, when I am iterating through the sentences, looking for nouns, for instance, I end up doing something like checking to see if the tag .contains('N'). This feels pretty weak. Is there a better way to programmatically search for a certain part of speech?
The Penn Treebank Project. Look at the Part-of-speech tagging page.
JJ is adjective; NNS is noun, plural; VBP is verb, present tense; RB is adverb.
That's for English. For Chinese it's the Penn Chinese Treebank, and for German it's the NEGRA corpus.
CC Coordinating conjunction
CD Cardinal number
DT Determiner
EX Existential there
FW Foreign word
IN Preposition or subordinating conjunction
JJ Adjective
JJR Adjective, comparative
JJS Adjective, superlative
LS List item marker
MD Modal
NN Noun, singular or mass
NNS Noun, plural
NNP Proper noun, singular
NNPS Proper noun, plural
PDT Predeterminer
POS Possessive ending
PRP Personal pronoun
PRP$ Possessive pronoun
RB Adverb
RBR Adverb, comparative
RBS Adverb, superlative
RP Particle
SYM Symbol
TO to
UH Interjection
VB Verb, base form
VBD Verb, past tense
VBG Verb, gerund or present participle
VBN Verb, past participle
VBP Verb, non-3rd person singular present
VBZ Verb, 3rd person singular present
WDT Wh-determiner
WP Wh-pronoun
WP$ Possessive wh-pronoun
WRB Wh-adverb
Explanation of each tag from the documentation:
CC: conjunction, coordinating
& 'n and both but either et for less minus neither nor or plus so
therefore times v. versus vs. whether yet
CD: numeral, cardinal
mid-1890 nine-thirty forty-two one-tenth ten million 0.5 one forty-
seven 1987 twenty '79 zero two 78-degrees eighty-four IX '60s .025
fifteen 271,124 dozen quintillion DM2,000 ...
DT: determiner
all an another any both del each either every half la many much nary
neither no some such that the them these this those
EX: existential there
there
FW: foreign word
gemeinschaft hund ich jeux habeas Haementeria Herr K'ang-si vous
lutihaw alai je jour objets salutaris fille quibusdam pas trop Monte
terram fiche oui corporis ...
IN: preposition or conjunction, subordinating
astride among uppon whether out inside pro despite on by throughout
below within for towards near behind atop around if like until below
next into if beside ...
JJ: adjective or numeral, ordinal
third ill-mannered pre-war regrettable oiled calamitous first separable
ectoplasmic battery-powered participatory fourth still-to-be-named
multilingual multi-disciplinary ...
JJR: adjective, comparative
bleaker braver breezier briefer brighter brisker broader bumper busier
calmer cheaper choosier cleaner clearer closer colder commoner costlier
cozier creamier crunchier cuter ...
JJS: adjective, superlative
calmest cheapest choicest classiest cleanest clearest closest commonest
corniest costliest crassest creepiest crudest cutest darkest deadliest
dearest deepest densest dinkiest ...
LS: list item marker
A A. B B. C C. D E F First G H I J K One SP-44001 SP-44002 SP-44005
SP-44007 Second Third Three Two * a b c d first five four one six three
two
MD: modal auxiliary
can cannot could couldn't dare may might must need ought shall should
shouldn't will would
NN: noun, common, singular or mass
common-carrier cabbage knuckle-duster Casino afghan shed thermostat
investment slide humour falloff slick wind hyena override subhumanity
machinist ...
NNS: noun, common, plural
undergraduates scotches bric-a-brac products bodyguards facets coasts
divestitures storehouses designs clubs fragrances averages
subjectivists apprehensions muses factory-jobs ...
NNP: noun, proper, singular
Motown Venneboerger Czestochwa Ranzer Conchita Trumplane Christos
Oceanside Escobar Kreisler Sawyer Cougar Yvette Ervin ODI Darryl CTCA
Shannon A.K.C. Meltex Liverpool ...
NNPS: noun, proper, plural
Americans Americas Amharas Amityvilles Amusements Anarcho-Syndicalists
Andalusians Andes Andruses Angels Animals Anthony Antilles Antiques
Apache Apaches Apocrypha ...
PDT: pre-determiner
all both half many quite such sure this
POS: genitive marker
' 's
PRP: pronoun, personal
hers herself him himself hisself it itself me myself one oneself ours
ourselves ownself self she thee theirs them themselves they thou thy us
PRP$: pronoun, possessive
her his mine my our ours their thy your
RB: adverb
occasionally unabatingly maddeningly adventurously professedly
stirringly prominently technologically magisterially predominately
swiftly fiscally pitilessly ...
RBR: adverb, comparative
further gloomier grander graver greater grimmer harder harsher
healthier heavier higher however larger later leaner lengthier less-
perfectly lesser lonelier longer louder lower more ...
RBS: adverb, superlative
best biggest bluntest earliest farthest first furthest hardest
heartiest highest largest least less most nearest second tightest worst
RP: particle
aboard about across along apart around aside at away back before behind
by crop down ever fast for forth from go high i.e. in into just later
low more off on open out over per pie raising start teeth that through
under unto up up-pp upon whole with you
SYM: symbol
% & ' '' ''. ) ). * + ,. < = > # A[fj] U.S U.S.S.R * ** ***
TO: "to" as preposition or infinitive marker
to
UH: interjection
Goodbye Goody Gosh Wow Jeepers Jee-sus Hubba Hey Kee-reist Oops amen
huh howdy uh dammit whammo shucks heck anyways whodunnit honey golly
man baby diddle hush sonuvabitch ...
VB: verb, base form
ask assemble assess assign assume atone attention avoid bake balkanize
bank begin behold believe bend benefit bevel beware bless boil bomb
boost brace break bring broil brush build ...
VBD: verb, past tense
dipped pleaded swiped regummed soaked tidied convened halted registered
cushioned exacted snubbed strode aimed adopted belied figgered
speculated wore appreciated contemplated ...
VBG: verb, present participle or gerund
telegraphing stirring focusing angering judging stalling lactating
hankerin' alleging veering capping approaching traveling besieging
encrypting interrupting erasing wincing ...
VBN: verb, past participle
multihulled dilapidated aerosolized chaired languished panelized used
experimented flourished imitated reunifed factored condensed sheared
unsettled primed dubbed desired ...
VBP: verb, present tense, not 3rd person singular
predominate wrap resort sue twist spill cure lengthen brush terminate
appear tend stray glisten obtain comprise detest tease attract
emphasize mold postpone sever return wag ...
VBZ: verb, present tense, 3rd person singular
bases reconstructs marks mixes displeases seals carps weaves snatches
slumps stretches authorizes smolders pictures emerges stockpiles
seduces fizzes uses bolsters slaps speaks pleads ...
WDT: WH-determiner
that what whatever which whichever
WP: WH-pronoun
that what whatever whatsoever which who whom whosoever
WP$: WH-pronoun, possessive
whose
WRB: Wh-adverb
how however whence whenever where whereby whereever wherein whereof why
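Regarding the second question: matching `tag.contains('N')` is indeed fragile, because 'N' also appears in non-noun tags such as IN (preposition). Exactly the four noun tags begin with "NN", so test against that set or that prefix instead. A minimal sketch in Python (the (word, tag) pairs are hard-coded here to stand in for tagger output):

```python
# The four Penn Treebank noun tags; no other POS tag starts with "NN".
NOUN_TAGS = {"NN", "NNS", "NNP", "NNPS"}

# Hard-coded (word, tag) pairs standing in for tagger output.
tagged = [("Colorless", "JJ"), ("green", "JJ"), ("ideas", "NNS"),
          ("sleep", "VBP"), ("furiously", "RB"), (".", ".")]

nouns = [word for word, tag in tagged if tag in NOUN_TAGS]

# A prefix test selects exactly the same tags, unlike `'N' in tag`,
# which would also match IN (preposition).
same = [word for word, tag in tagged if tag.startswith("NN")]
print(nouns)
```

The same idea carries over to Java: keep a `Set<String>` of the tags you want, or use `tag.startsWith("NN")`.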
The accepted answer above is missing the following information:
There are also 9 punctuation tags defined (which are not listed in some references, see here). These are:
#
$
'' (used for all forms of closing quote)
( (used for all forms of opening parenthesis)
) (used for all forms of closing parenthesis)
,
. (used for all sentence-ending punctuation)
: (used for colons, semicolons and ellipses)
`` (used for all forms of opening quote)
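Because these punctuation tags never overlap with the word-level tags, they are easy to filter out; a small Python sketch (the tagged pairs are hand-made for illustration):

```python
# The nine Penn Treebank punctuation tags listed above.
PUNCT_TAGS = {"#", "$", "''", "(", ")", ",", ".", ":", "``"}

# Hand-made (token, tag) pairs for illustration; note that "!" is
# tagged "." because "." covers all sentence-ending punctuation.
tagged = [("``", "``"), ("Well", "UH"), (",", ","),
          ("world", "NN"), ("!", "."), ("''", "''")]

words = [(tok, tag) for tok, tag in tagged if tag not in PUNCT_TAGS]
print(words)
```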
Here is a more complete list of tags for the Penn Treebank (posted here for the sake of completeness):
http://www.surdeanu.info/mihai/teaching/ista555-fall13/readings/PennTreebankConstituents.html
It also includes tags for clause and phrase levels.
Clause Level
- S
- SBAR
- SBARQ
- SINV
- SQ
Phrase Level
- ADJP
- ADVP
- CONJP
- FRAG
- INTJ
- LST
- NAC
- NP
- NX
- PP
- PRN
- PRT
- QP
- RRC
- UCP
- VP
- WHADJP
- WHADVP
- WHNP
- WHPP
- X
(descriptions in the link)
Codified:
/**
 * Represents the English parts-of-speech, encoded using the
 * de facto <a href="http://www.cis.upenn.edu/~treebank/">Penn Treebank
 * Project</a> standard.
 *
 * @see Penn Treebank Specification
 */
public enum PartOfSpeech {
  ADJECTIVE( "JJ" ),
  ADJECTIVE_COMPARATIVE( ADJECTIVE + "R" ),
  ADJECTIVE_SUPERLATIVE( ADJECTIVE + "S" ),

  /* This category includes most words that end in -ly as well as degree
   * words like quite, too and very, post-head modifiers like enough and
   * indeed (as in good enough, very well indeed), and negative markers like
   * not, n't and never.
   */
  ADVERB( "RB" ),

  /* Adverbs with the comparative ending -er but without a strictly comparative
   * meaning, like <i>later</i> in <i>We can always come by later</i>, should
   * simply be tagged as RB.
   */
  ADVERB_COMPARATIVE( ADVERB + "R" ),
  ADVERB_SUPERLATIVE( ADVERB + "S" ),

  /* This category includes how, where, why, etc.
   */
  ADVERB_WH( "W" + ADVERB ),

  /* This category includes and, but, nor, or, yet (as in Yet it's cheap,
   * cheap yet good), as well as the mathematical operators plus, minus, less,
   * times (in the sense of "multiplied by") and over (in the sense of "divided
   * by"), when they are spelled out. <i>For</i> in the sense of "because" is
   * a coordinating conjunction (CC) rather than a subordinating conjunction.
   */
  CONJUNCTION_COORDINATING( "CC" ),
  CONJUNCTION_SUBORDINATING( "IN" ),
  CARDINAL_NUMBER( "CD" ),
  DETERMINER( "DT" ),

  /* This category includes which, as well as that when it is used as a
   * relative pronoun.
   */
  DETERMINER_WH( "W" + DETERMINER ),
  EXISTENTIAL_THERE( "EX" ),
  FOREIGN_WORD( "FW" ),
  LIST_ITEM_MARKER( "LS" ),
  NOUN( "NN" ),
  NOUN_PLURAL( NOUN + "S" ),
  NOUN_PROPER_SINGULAR( NOUN + "P" ),
  NOUN_PROPER_PLURAL( NOUN + "PS" ),
  PREDETERMINER( "PDT" ),
  POSSESSIVE_ENDING( "POS" ),
  PRONOUN_PERSONAL( "PRP" ),
  PRONOUN_POSSESSIVE( "PRP$" ),

  /* This category includes the wh-word whose.
   */
  PRONOUN_POSSESSIVE_WH( "WP$" ),

  /* This category includes what, who and whom.
   */
  PRONOUN_WH( "WP" ),
  PARTICLE( "RP" ),

  /* This tag should be used for mathematical, scientific and technical symbols
   * or expressions that aren't English words. It should not be used for any and
   * all technical expressions. For instance, the names of chemicals, units of
   * measurements (including abbreviations thereof) and the like should be
   * tagged as nouns.
   */
  SYMBOL( "SYM" ),
  TO( "TO" ),

  /* This category includes my (as in My, what a gorgeous day), oh, please,
   * see (as in See, it's like this), uh, well and yes, among others.
   */
  INTERJECTION( "UH" ),
  VERB( "VB" ),
  VERB_PAST_TENSE( VERB + "D" ),
  VERB_PARTICIPLE_PRESENT( VERB + "G" ),
  VERB_PARTICIPLE_PAST( VERB + "N" ),
  VERB_SINGULAR_PRESENT_NONTHIRD_PERSON( VERB + "P" ),
  VERB_SINGULAR_PRESENT_THIRD_PERSON( VERB + "Z" ),

  /* This category includes all verbs that don't take an -s ending in the
   * third person singular present: can, could, (dare), may, might, must,
   * ought, shall, should, will, would.
   */
  VERB_MODAL( "MD" ),

  /* Stanford.
   */
  SENTENCE_TERMINATOR( "." );

  private final String tag;

  private PartOfSpeech( String tag ) {
    this.tag = tag;
  }

  /**
   * Returns the encoding for this part-of-speech.
   *
   * @return A string representing a Penn Treebank encoding for an English
   * part-of-speech.
   */
  public String toString() {
    return getTag();
  }

  protected String getTag() {
    return this.tag;
  }

  public static PartOfSpeech get( String value ) {
    for( PartOfSpeech v : values() ) {
      if( value.equals( v.getTag() ) ) {
        return v;
      }
    }

    throw new IllegalArgumentException( "Unknown part of speech: '" + value + "'." );
  }
}
I am providing the whole list here, and also giving a reference link:
1. CC Coordinating conjunction
2. CD Cardinal number
3. DT Determiner
4. EX Existential there
5. FW Foreign word
6. IN Preposition or subordinating conjunction
7. JJ Adjective
8. JJR Adjective, comparative
9. JJS Adjective, superlative
10. LS List item marker
11. MD Modal
12. NN Noun, singular or mass
13. NNS Noun, plural
14. NNP Proper noun, singular
15. NNPS Proper noun, plural
16. PDT Predeterminer
17. POS Possessive ending
18. PRP Personal pronoun
19. PRP$ Possessive pronoun
20. RB Adverb
21. RBR Adverb, comparative
22. RBS Adverb, superlative
23. RP Particle
24. SYM Symbol
25. TO to
26. UH Interjection
27. VB Verb, base form
28. VBD Verb, past tense
29. VBG Verb, gerund or present participle
30. VBN Verb, past participle
31. VBP Verb, non-3rd person singular present
32. VBZ Verb, 3rd person singular present
33. WDT Wh-determiner
34. WP Wh-pronoun
35. WP$ Possessive wh-pronoun
36. WRB Wh-adverb
You can find the whole list of part-of-speech tags here.
Regarding your second question of finding words/chunks tagged with a particular POS (e.g., noun), here is sample code you can follow:
public static void main(String[] args) {
    Properties properties = new Properties();
    properties.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(properties);

    String input = "Colorless green ideas sleep furiously.";
    Annotation annotation = pipeline.process(input);
    List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);

    List<String> output = new ArrayList<>();
    String regex = "([{pos:/NN|NNS|NNP/}])"; // noun

    for (CoreMap sentence : sentences) {
        List<CoreLabel> tokens = sentence.get(CoreAnnotations.TokensAnnotation.class);
        TokenSequencePattern pattern = TokenSequencePattern.compile(regex);
        TokenSequenceMatcher matcher = pattern.getMatcher(tokens);
        while (matcher.find()) {
            output.add(matcher.group());
        }
    }
    System.out.println("Input: " + input);
    System.out.println("Output: " + output);
}
The output is:
Input: Colorless green ideas sleep furiously.
Output: [ideas]
They seem to be Brown Corpus tags.
Stanford CoreNLP Tags for Other Languages: French, Spanish, German, ...
I see you are using the parser for English, which is the default model.
You may use the parser for other languages (French, Spanish, German, ...), but be aware that both the tokenizers and the part-of-speech taggers are different for each language. If you want to do that, you must download the specific model for the language (using a build tool like Maven, for example) and then set the model you want to use.
Here you have more information about that.
Here are lists of tags for different languages:
Stanford CoreNLP POS Tags for Spanish
Stanford CoreNLP POS Tagger for German uses the Stuttgart-Tübingen Tag Set (STTS)
Stanford CoreNLP POS tagger for French uses the following tags:
Part of Speech Tags for French
A (adjective)
Adv (adverb)
CC (coordinating conjunction)
Cl (weak clitic pronoun)
CS (subordinating conjunction)
D (determiner)
ET (foreign word)
I (interjection)
NC (common noun)
NP (proper noun)
P (preposition)
PREF (prefix)
PRO (strong pronoun)
V (verb)
PONCT (punctuation mark)
Phrasal Categories Tags for French:
AP (adjectival phrases)
AdP (adverbial phrases)
COORD (coordinated phrases)
NP (noun phrases)
PP (prepositional phrases)
VN (verbal nucleus)
VPinf (infinitive clauses)
VPpart (nonfinite clauses)
SENT (sentences)
Sint, Srel, Ssub (finite clauses)
Syntactic Functions for French:
SUJ (subject)
OBJ (direct object)
ATS (predicative complement of a subject)
ATO (predicative complement of a direct object)
MOD (modifier or adjunct)
A-OBJ (indirect complement introduced by à)
DE-OBJ (indirect complement introduced by de)
P-OBJ (indirect complement introduced by another preposition)
spaCy is very fast, I think; even on a low-end notebook it runs like this:
import spacy
import time

start = time.time()

with open('d:/dictionary/e-store.txt') as f:
    text = f.read()

word = 0
result = []

nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
for token in doc:
    if token.pos_ == "NOUN":
        result.append(token.text)
    word += 1

elapsed = time.time() - start
print("From", word, "words, there is", len(result), "NOUN found in", elapsed, "seconds")
The output over several trials:
From 3547 words, there is 913 NOUN found in 7.768507719039917 seconds
From 3547 words, there is 913 NOUN found in 7.408619403839111 seconds
From 3547 words, there is 913 NOUN found in 7.431427955627441 seconds
So I think you don't need to worry about looping over each POS tag check. :)
I got a further improvement when I disabled part of the pipeline:
nlp = spacy.load("en_core_web_sm", disable=['ner'])
So the result is faster:
From 3547 words, there is 913 NOUN found in 6.212834596633911 seconds
From 3547 words, there is 913 NOUN found in 6.257707595825195 seconds
From 3547 words, there is 913 NOUN found in 6.371225833892822 seconds