Hi, I am using HttpServletRequest and trying to get the set of headers that were set. Here is the code:
public static Map<String, String> getHeaders(HttpServletRequest request) {
    Map<String, String> headers = new HashMap<String, String>();
    Enumeration<String> headerNames = request.getHeaderNames();
    if (headerNames != null) {
        while (headerNames.hasMoreElements()) {
            String headerName = headerNames.nextElement();
            String header = request.getHeader(headerName);
            headers.put(headerName, header);
        }
    }
    return headers;
}
This method seems to be throwing a NullPointerException at headerNames.nextElement(). Is it possible that the hasMoreElements() check returns true but the subsequent headerNames.nextElement() call still throws a NullPointerException?
Stack trace:
Caused by: java.lang.NullPointerException
    at org.apache.tomcat.util.buf.ByteChunk.equalsIgnoreCase(ByteChunk.java:608)
    at org.apache.tomcat.util.buf.MessageBytes.equalsIgnoreCase(MessageBytes.java:325)
    at org.apache.tomcat.util.http.NamesEnumerator.findNext(MimeHeaders.java:414)
    at org.apache.tomcat.util.http.NamesEnumerator.nextElement(MimeHeaders.java:438)
    at org.apache.tomcat.util.http.NamesEnumerator.nextElement(MimeHeaders.java:396)
    at generateRequestHeaderMap...
It would be great if you guys could help me out with this issue.
I suspect that the problem is caused by a mangled request. Here is what findNext() is doing (in Tomcat 6.0.18):
private void findNext() {
    next = null;
    for( ; pos < size; pos++ ) {
        next = headers.getName( pos ).toString();
        for( int j = 0; j < pos; j++ ) {
            if( headers.getName( j ).equalsIgnoreCase( next )) {
                // duplicate.
                next = null;
                break;
            }
        }
        if( next != null ) {
            // it's not a duplicate
            break;
        }
    }
    // next time findNext is called it will try the
    // next element
    pos++;
}
The salient lines are these:
next=headers.getName( pos ).toString();
if( headers.getName( j ).equalsIgnoreCase( next )) {
If the header is mangled, it may be possible for getName(j) to return null. If that happens, the ByteChunk path of the equalsIgnoreCase method will throw an NPE.
If you are going to track this down scientifically, you need to:
get hold of the actual raw bytes of the request and examine them forensically to determine the nature of the corruption (if any);
set up a test harness that lets you run your app on this request with a debugger attached, and trap the exception at its source.
The non-scientific approach would be to upgrade Tomcat to the most recent patch release of Tomcat 6 ... or a later version. It might fix the problem. Or not.
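If neither is feasible right away, a defensive workaround in the application code is also possible. This is only a sketch, under the assumption that silently dropping the remaining headers of such a corrupt request is acceptable; it works around the Tomcat bug rather than fixing it:

public static Map<String, String> getHeaders(HttpServletRequest request) {
    Map<String, String> headers = new HashMap<String, String>();
    Enumeration<String> headerNames = request.getHeaderNames();
    if (headerNames != null) {
        while (headerNames.hasMoreElements()) {
            try {
                String headerName = headerNames.nextElement();
                headers.put(headerName, request.getHeader(headerName));
            } catch (NullPointerException e) {
                // NPE thrown from inside Tomcat's NamesEnumerator on a mangled
                // header: stop iterating instead of letting it propagate.
                break;
            }
        }
    }
    return headers;
}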
Here's another report of this problem in Tomcat 6.0.20 from back in 2010:
https://mail-archives.apache.org/mod_mbox/tomcat-users/201002.mbox/%3C4B7EBCE4.1010604#christopherschultz.net%3E
This is how I successfully patched the Apache Tomcat in JBoss 6.1.0 Final (in deploy/jbossweb.sar/jbossweb.jar) based on Apache Tomcat 6.0.20 source code:
org.apache.tomcat.util.http.MimeHeaders.NamesEnumerator.findNext()
private void findNext() {
    next = null;
    for( ; pos < size; pos++ ) {
        // (4 lines changed): check mb for null as suggested here: https://stackoverflow.com/questions/37493552/enumeration-null-pointer-exception/37493888#37493888
        MessageBytes mb = headers.getName( pos );
        if (mb != null) {
            next = mb.toString();
        }
        for( int j = 0; j < pos; j++ ) {
            // (2 lines changed): check mb and next for null as suggested here: https://stackoverflow.com/questions/37493552/enumeration-null-pointer-exception/37493888#37493888
            mb = headers.getName( j );
            if (mb != null && next != null && mb.equalsIgnoreCase( next )) {
                // duplicate.
                next = null;
                break;
            }
        }
        // new (just 1 comment line): if mb == null we assume next == null, thus it will be a duplicate (i.e. not found, causing no break)
        if( next != null ) {
            // it's not a duplicate
            break;
        }
    }
    // next time findNext is called it will try the
    // next element
    pos++;
}
Sure, it does not avoid the non-thread-safe implementation mentioned in https://mail-archives.apache.org/mod_mbox/tomcat-users/201002.mbox/%3c27699460.post#talk.nabble.com%3e but at least it avoids the NullPointerException while reading unnecessary headers.
I am testing a method using the JUnit API and I think I am setting all the values, but I am still getting a NullPointerException. I don't want to catch it, and I don't even expect it, since I am setting my values. l = appConfigDao.getAppConfig(); is the line which throws the exception, and I am using Mockito to return the List. For getCacheProvider() I am setting the value using the setter; while debugging it shows as null, but I don't get an exception there.
Method under test:
public List<AppConfigTO> getAppConfig(boolean ignoreCache) {
    List<AppConfigTO> l = null;
    if (!ignoreCache) {
        if (getCacheProvider() != null) {
            l = (List<AppConfigTO>) getCacheProvider().getFromCache(CacheConstants.CONFIG_APPCONFIG, CacheConstants.TABLE_CACHE_KEY);
        }
    }
    if (l == null) {
        l = appConfigDao.getAppConfig();
        if (!ignoreCache) {
            if (getCacheProvider() != null) {
                getCacheProvider().addToCache(CacheConstants.CONFIG_APPCONFIG, CacheConstants.TABLE_CACHE_KEY, l);
            }
        }
    }
    return l;
}
JUnit test:
private ICacheProvider cacheProvider;

@Test
public void testGetAppConfig() throws Exception {
    AppConfigManager configManager = new AppConfigManager();
    configManager.setCacheProvider(cacheProvider);
    List<AppConfigTO> list = new ArrayList<>();

    // Mocking IAppConfigDao
    IAppConfigDao configDao = Mockito.mock(IAppConfigDao.class);
    Mockito.when(configDao.getAppConfig()).thenReturn(list);

    list = configManager.getAppConfig(false);
}
This is just a happy path, because I want to see if I am setting all the values correctly; I will then work on branch coverage, once I stop getting the exception.
Thanks,
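One thing stands out in the test as posted: the mocked IAppConfigDao is created but never attached to configManager, so the appConfigDao field inside getAppConfig(...) is still null when l = appConfigDao.getAppConfig(); runs. A sketch of wiring it in, where setAppConfigDao is a hypothetical setter name (use whatever injection point AppConfigManager actually exposes):

@Test
public void testGetAppConfig() throws Exception {
    AppConfigManager configManager = new AppConfigManager();
    configManager.setCacheProvider(cacheProvider);

    // Mock the DAO and hand it to the manager BEFORE calling getAppConfig,
    // otherwise the real (null) appConfigDao field is dereferenced.
    IAppConfigDao configDao = Mockito.mock(IAppConfigDao.class);
    Mockito.when(configDao.getAppConfig()).thenReturn(new ArrayList<AppConfigTO>());
    configManager.setAppConfigDao(configDao); // hypothetical setter

    List<AppConfigTO> list = configManager.getAppConfig(false);
}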
In my code I call this method, as a preprocessing step to 'stem' words:
public void getStem(String word)
{
    WordnetStemmer stem = new WordnetStemmer( dict );
    List<String> stemmed_words = stem.findStems(word, POS.VERB);
    System.out.println( stemmed_words.get(0) );
}
Usually everything is fine if it gets a normal word (I'm using the Java WordNet Interface to handle the stemming). The thing is, I don't always get a normal word; sometimes I get things along the lines of isa, which is a conjunction of is and a. In such a case that method will return null and my program will crash. How can I defend against this?
This is how I call that code:
public Sentence(String verb, String object, String subject) throws IOException
{
    WordNet wordnet = new WordNet();
    this.verb = verb;
    this.object = object;
    this.subject = subject;
    wordnet.getStem( verb );
}
Eventually I want that to read:
this.verb = wordnet.getStem( verb );
I once heard about doing something with null objects (the Null Object pattern); is that applicable here? I tried this, but it didn't work; I want to do something like this:
public void getStem(String word)
{
    WordnetStemmer stem = new WordnetStemmer( dict );
    List<String> stemmed_words = stem.findStems(word, POS.VERB);
    if( stemmed_words != null )
        System.out.println( stemmed_words.get(0) );
    else
        System.out.println( word );
}
This is the output:
prevent
contain
contain
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0
    at java.util.Collections$EmptyList.get(Collections.java:4454)
    at inference_learner.WordNet.getStem(WordNet.java:76)
    at inference_learner.Sentence.<init>(Sentence.java:23)
    at inference_learner.RegEx.match_regex_patterns(RegEx.java:33)
    at inference_learner.ReadFile.readFile(ReadFile.java:30)
    at inference_learner.Main.main(Main.java:38)
That won't work, because the List is not null; the List is empty. You have to do the check like this: if (stemmed_words.size() > 0).
Try:
if( stemmed_words != null && stemmed_words.size() > 0 )
    System.out.println( stemmed_words.get(0) );
else
    System.out.println( word );
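Putting both checks together, here is a sketch of a getStem that returns a value, so that the eventual this.verb = wordnet.getStem( verb ); works, falling back to the original word when no stem is found:

public String getStem(String word)
{
    WordnetStemmer stem = new WordnetStemmer( dict );
    List<String> stemmed_words = stem.findStems(word, POS.VERB);
    // findStems returning an empty (or null) list means "no stem found",
    // so fall back to the input word instead of indexing into the list
    if( stemmed_words != null && !stemmed_words.isEmpty() )
        return stemmed_words.get(0);
    return word;
}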
I am currently working on creating an IDE for the custom, very Lua-like scripting language MobTalkerScript (MTS), which provides me with an ANTLR4 lexer. Since the grammar file for MTS puts comments into the HIDDEN_CHANNEL channel, I need to tell the lexer to actually read from that channel. This is how I tried to do that:
Mts3Lexer lexer = new Mts3Lexer(new ANTLRInputStream("<replace this with the input>"));
lexer.setTokenFactory(new CommonTokenFactory(false));
lexer.setChannel(Token.HIDDEN_CHANNEL);

Token token = lexer.emit();
int type = token.getType();

do {
    switch (type) {
        case Mts3Lexer.LINE_COMMENT:
        case Mts3Lexer.COMMENT:
            System.out.println("token " + token.getText() + " is a comment");
            break;
        default:
            System.out.println("token " + token.getText() + " is not a comment");
    }
} while ((token = lexer.nextToken()) != null && (type = token.getType()) != Token.EOF);
Now, if I use this code on the following input, nothing but token ... is not a comment gets printed to the console.
function foo()
    -- this should be a single-line comment
    something = "blah"
    --[[ this should
         be a multi-line
         comment ]]--
end
The tokens containing the comments never show up, though. So I searched for the source of this problem and found the following method in the ANTLR4 Lexer class:
/** Return a token from this source; i.e., match a token on the char
 *  stream.
 */
@Override
public Token nextToken() {
    if (_input == null) {
        throw new IllegalStateException("nextToken requires a non-null input stream.");
    }

    // Mark start location in char stream so unbuffered streams are
    // guaranteed at least have text of current token
    int tokenStartMarker = _input.mark();
    try {
        outer:
        while (true) {
            if (_hitEOF) {
                emitEOF();
                return _token;
            }

            _token = null;
            _channel = Token.DEFAULT_CHANNEL;
            _tokenStartCharIndex = _input.index();
            _tokenStartCharPositionInLine = getInterpreter().getCharPositionInLine();
            _tokenStartLine = getInterpreter().getLine();
            _text = null;
            do {
                _type = Token.INVALID_TYPE;
                // System.out.println("nextToken line "+tokenStartLine+" at "+((char)input.LA(1))+
                //                    " in mode "+mode+
                //                    " at index "+input.index());
                int ttype;
                try {
                    ttype = getInterpreter().match(_input, _mode);
                }
                catch (LexerNoViableAltException e) {
                    notifyListeners(e); // report error
                    recover(e);
                    ttype = SKIP;
                }
                if ( _input.LA(1) == IntStream.EOF ) {
                    _hitEOF = true;
                }
                if ( _type == Token.INVALID_TYPE ) _type = ttype;
                if ( _type == SKIP ) {
                    continue outer;
                }
            } while ( _type == MORE );

            if ( _token == null ) emit();
            return _token;
        }
    }
    finally {
        // make sure we release marker after match or
        // unbuffered char stream will keep buffering
        _input.release(tokenStartMarker);
    }
}
The line that caught my eye was the following.
_channel = Token.DEFAULT_CHANNEL;
I don't know much about ANTLR, but apparently this line keeps the lexer in the DEFAULT_CHANNEL channel.
Is the way I tried to read from the HIDDEN_CHANNEL channel correct, or can't I use nextToken() with the hidden channel?
I found out why the lexer didn't give me any tokens containing the comments - I seem to have missed that the grammar file skips comments instead of putting them into the hidden channel. Contacted the author, changed the grammar file and now it works.
Note to myself: pay more attention to what you read.
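For anyone who lands here with the same symptom: once the grammar routes comments to the hidden channel (-> channel(HIDDEN)) instead of skipping them (-> skip), a plain nextToken() loop does see them, because the lexer itself returns tokens from every channel; it is the token stream (e.g. CommonTokenStream) that filters by channel. A minimal sketch, reusing the Mts3Lexer from above:

Mts3Lexer lexer = new Mts3Lexer(new ANTLRInputStream("<replace this with the input>"));
for (Token token = lexer.nextToken();
        token.getType() != Token.EOF;
        token = lexer.nextToken()) {
    if (token.getChannel() == Token.HIDDEN_CHANNEL) {
        // comments end up here once the grammar uses channel(HIDDEN)
        System.out.println("hidden token: " + token.getText());
    }
}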
For Go (golang) this snippet works for me:
import (
    "github.com/antlr/antlr4/runtime/Go/antlr"
)

type antlrparser interface {
    GetParser() antlr.Parser
}

func fullText(prc antlr.ParserRuleContext) string {
    p := prc.(antlrparser).GetParser()
    ts := p.GetTokenStream()
    tx := ts.GetTextFromTokens(prc.GetStart(), prc.GetStop())
    return tx
}
just pass your ctx.GetSomething() into fullText. Of course, as shown above, whitespace has to go to the hidden channel in the *.g4 file:
WS: [ \t\r\n] -> channel(HIDDEN);
I am just starting with Lucene, so it's probably a beginner's question. We are trying to implement a semantic search on digital books and already have a concept generator; for example, the concepts I generate for a new article could be:
|Green Beans | Spring Onions | Cooking |
I am using Lucene to create an index on the books/articles using only the extracted concepts (stored in a temporary document for that purpose). Now the standard analyzer creates single-word tokens: Green, Beans, Spring, Onions, Cooking, which of course is not the same.
My question: is there an analyzer that is able to detect delimiters around tokens (the | characters in our example), or an analyzer that is able to detect multi-word constructs? I'm afraid we'll have to create our own analyzer, but I don't quite know where to start.
Creating an analyzer is pretty easy. An analyzer is just a tokenizer optionally followed by token filters. In your case, you'd have to create your own tokenizer. Fortunately, you have a convenient base class for this: CharTokenizer.
You implement the isTokenChar method and make sure it returns false on the | character and true on any other character. Everything else will be considered part of a token.
Once you have the tokenizer, the analyzer should be straightforward, just look at the source code of any existing analyzer and do likewise.
Oh, and if you can have spaces between your | chars, just add a TrimFilter to the analyzer.
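For what it's worth, a minimal sketch of such an analyzer, assuming Lucene 5 or later (where Tokenizer constructors no longer take a Reader; the CharTokenizer package location varies between versions):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.CharTokenizer; // package differs in newer Lucene versions

public final class PipeDelimitedAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        // everything except '|' is part of a token; '|' is the delimiter
        Tokenizer tokenizer = new CharTokenizer() {
            @Override
            protected boolean isTokenChar(int c) {
                return c != '|';
            }
        };
        return new TokenStreamComponents(tokenizer);
    }
}

With the example input |Green Beans | Spring Onions | Cooking | this yields the multi-word tokens, including any surrounding spaces, hence the TrimFilter suggestion above.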
I came across this question because I am doing something with my Lucene mechanisms which creates data structures to do with sequencing, in effect "hijacking" the Lucene classes. Otherwise I can't imagine why people would want knowledge of the separators ("delimiters") between tokens, but as it was quite tricky I thought I'd put it here for the benefit of anyone who might need it.
You have to rewrite your own versions of StandardTokenizer and StandardTokenizerImpl. These are both final classes so you can't extend them.
SeparatorDeliveringTokeniserImpl (tweaked from source of StandardTokenizerImpl):
3 new fields:
private int startSepPos = 0;
private int endSepPos = 0;
private String originalBufferAsString;
Tweak these methods:
public final void getText(CharTermAttribute t) {
    t.copyBuffer(zzBuffer, zzStartRead, zzMarkedPos - zzStartRead);
    if( originalBufferAsString == null ){
        originalBufferAsString = new String( zzBuffer, 0, zzBuffer.length );
    }
    // startSepPos == -1 is a "flag condition": it means that this token is the last one and it won't be followed by a sep
    if( startSepPos != -1 ){
        // if the flag is NOT set, record the start pos of the next sep...
        startSepPos = zzMarkedPos;
    }
}
public final void yyreset(java.io.Reader reader) {
    zzReader = reader;
    zzAtBOL = true;
    zzAtEOF = false;
    zzEOFDone = false;
    zzEndRead = zzStartRead = 0;
    zzCurrentPos = zzMarkedPos = 0;
    zzFinalHighSurrogate = 0;
    yyline = yychar = yycolumn = 0;
    zzLexicalState = YYINITIAL;
    if (zzBuffer.length > ZZ_BUFFERSIZE)
        zzBuffer = new char[ZZ_BUFFERSIZE];

    // reset fields responsible for delivering separator...
    originalBufferAsString = null;
    startSepPos = 0;
    endSepPos = 0;
}
(inside getNextToken:)
if ((zzAttributes & 1) == 1) {
    zzAction = zzState;
    zzMarkedPosL = zzCurrentPosL;
    if ((zzAttributes & 8) == 8) {
        // every occurrence of a separator char leads here...
        endSepPos = zzCurrentPosL;
        break zzForAction;
    }
}
And make a new method:
String getPrecedingSeparator() {
    String sep = null;
    if( originalBufferAsString == null ){
        sep = new String( zzBuffer, 0, endSepPos );
    }
    else if( startSepPos == -1 || endSepPos <= startSepPos ){
        sep = "";
    }
    else {
        sep = originalBufferAsString.substring( startSepPos, endSepPos );
    }
    if( zzMarkedPos < startSepPos ){
        // ... then this is a sign that the next token will be the last one... and will NOT have a trailing separator
        // so set a "flag condition" for next time this method is called
        startSepPos = -1;
    }
    return sep;
}
SeparatorDeliveringTokeniser (tweaked from source of StandardTokenizer):
Add this:
private String separator;

String getSeparator(){
    // normally this delivers a preceding separator... but after incrementToken returns false, if there is a trailing
    // separator, it then delivers that...
    return separator;
}
(inside incrementToken:)
while(true) {
    int tokenType = scanner.getNextToken();
    // added NB this gives you the separator which PRECEDES the token
    // which you are about to get from scanner.getText( ... )
    separator = scanner.getPrecedingSeparator();
    if (tokenType == SeparatorDeliveringTokeniserImpl.YYEOF) {
        // NB at this point sep is equal to the trailing separator...
        return false;
    }
    ...
Usage:
In my FilteringTokenFilter subclass, called TokenAndSeparatorExamineFilter, the methods accept and end look like this:
@Override
public boolean accept() throws IOException {
    String sep = ((SeparatorDeliveringTokeniser) input).getSeparator();
    // a preceding separator can only be an empty String if we are currently
    // dealing with the first token and if the sequence starts with a token
    if (!sep.isEmpty()) {
        // ... do something with the preceding separator
    }
    // then get the token...
    String token = getTerm();
    // ... do something with the token
    // my filter does no filtering! Every token is accepted...:
    return true;
}
@Override
public void end() throws IOException {
    // deals with trailing separator at the end of a sequence of tokens and
    // separators (if there is one, i.e. if it doesn't end with a token)
    String sep = ((SeparatorDeliveringTokeniser) input).getSeparator();
    // NB will be an empty String if there is no trailing separator
    if (!sep.isEmpty()) {
        // ... do something with this trailing separator
    }
}
I am having trouble with the logic of deleting an entry in an Address Book. I am saving all the entries in an array.
I am trying to set array[i] = null if array[i]'s name equals the name entered by the user. But after I delete an entry and then try to view all entries again, nothing shows, and the output says:
Exception in thread "main" java.lang.NullPointerException
    at AddressBook.viewAll(AddressBook.java:61)
    at AddressBook.main(AddressBook.java:35)
Java Result: 1
This is my code for deleting an entry:
public void deleteEntry() {
    SName = JOptionPane.showInputDialog("Enter Name to delete: ");
    for (int i = 0; i < counter; i++) {
        if (entry[i].getName().equals(SName)) {
            //JOptionPane.showMessageDialog(null, "Found!");
            entry[i] = null;
        }
    }
}
Can you help me figure out what is wrong with my code, or where the logical error is? If you have any suggestion or a better way to delete an entry, that would be a big help.
if (entry[i].getName().equals(SName)) {
If on one pass you make entry[i] = null, then how would you call getName() on it afterwards? Try adding a null check to your if statement:
if (entry[i] != null && entry[i].getName().equals(SName)) {
EDIT: Benjamin brings up a good point. You should be prepared for a null result from showInputDialog(). For example, there's a cancel button, right? If they press that, you'll get null, I believe. Here's some better code for that case:
public void deleteEntry() {
    /* get the input */
    SName = JOptionPane.showInputDialog("Enter Name to delete: ");

    /* if no input, nothing to delete */
    if (SName == null) return;

    /* find the name */
    for (int i = 0; i < counter; i++) {
        /* make sure we have an entry */
        /* we know SName is not null */
        if (entry[i] != null && SName.equals(entry[i].getName())) {
            /* null out the deleted entry */
            entry[i] = null;
            // break; /* If you know you have unique names, you can leave the for loop now */
        } /* end if */
    } /* end for i */
}
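Also note that your stack trace points at viewAll, not deleteEntry: once entries can be null, every loop over the array needs the same guard. Here is a sketch of what that might look like (the body of viewAll is assumed; only the entry and counter fields are taken from your question):

public void viewAll() {
    StringBuilder all = new StringBuilder();
    for (int i = 0; i < counter; i++) {
        if (entry[i] != null) { // skip deleted entries instead of dereferencing null
            all.append(entry[i].getName()).append('\n');
        }
    }
    JOptionPane.showMessageDialog(null, all.toString());
}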