Try Catch Finally - finally block always has null in variable - Java

I am currently facing an issue with my code and I can't figure out why this statement is evaluating as it is. This is the first time I am using a finally block, so it may be that there is some fundamental behaviour I haven't understood.
What this method does is fetch a JSON document from an API and store it as this.thisPage. Another method, sliceItem, then splits the results field into an array of JSON objects.
A MalformedJsonException is thrown whenever the API returns JSON with bad fields (e.g. String fields stored as int, or int as double). The request is retried 10 times (handled by failSafeGet), and if it still fails after 10 tries, a MalformedJsonException (a RuntimeException) is thrown. What I would like slicePage to do in that case is fetch the next page instead of continuing with the current one. To simplify: each page has 100 entries, so if offset 3500 is broken, we want to fetch offset 3600.
The issue I am facing is that resp always evaluates to null in the finally block. I cannot understand why this is the case, since the try block can return something other than null (a JSONObject).
Any help would be greatly appreciated and if you need more information/code, I am willing to provide it.
public synchronized void slicePage() {
    JSONObject resp = null; // otherwise java complains that not initialised
    ApiClient apiClient = new ApiClient();
    RestEndPoint pageUrl;
    while (true) {
        pageUrl = getNextPageEndPoint();
        if (pageUrl == null) {
            throw new IllegalStateException("We have reached the end and the code isn't designed to handle the end here"); // we have reached the end
        }
        currentPageNumber++;
        try {
            resp = apiClient.failSafeGet(pageUrl, getRetryCount());
            break;
        }
        catch (MalformedJsonException e) {
            logger.info(String.format("The json was still broken after %d retries. Skipping this page and notifying listeners", getRetryCount()));
            for (Consumer<Integer> consumer : onSkipListenerList) {
                consumer.accept(batchSize); // inform each listener that we are skipping this many entries
            }
        }
        finally { // We need to set the next page end point no matter the outcome of the try catch. N.B. this gets executed even if there is a break
            if (resp == null) {
                // no next possible
                setNextPageEndPoint(null); // don't consider next; we reached the max
                this.thisPage = null;
            } else {
                if (currentPageNumber > maxPages - 1) {
                    // because a request has been made already, so reduce by 1
                    setNextPageEndPoint(null); // don't consider next; we reached the max
                } else {
                    // else consider next page
                    setNextPageEndPoint(constructNextPageEndPoint(pageUrl, resp));
                }
                this.thisPage = this.parseResult(resp);
                setTotalCount(resp.getInt("totalResults"));
            }
        }
    }
}
EDIT: I forgot to mention that when I said it always evaluates to null, I meant that my IDE, IntelliJ IDEA, is warning me that the if condition always evaluates to true. The following is the help that shows up in IntelliJ (with Ctrl+F1).
Condition 'resp == null' is always 'true'
This inspection analyzes method control and data flow to report possible conditions that are always true or false, expressions whose value is statically proven to be constant, and situations that can lead to nullability contract violations.
Variables, method parameters and return values marked as @Nullable or @NotNull are treated as nullable (or not-null, respectively) and used during the analysis to check nullability contracts, e.g. report possible NullPointerException errors.
More complex contracts can be defined using the @Contract annotation, for example:
@Contract("_, null -> null") — the method returns null if its second argument is null
@Contract("_, null -> null; _, !null -> !null") — the method returns null if its second argument is null and not-null otherwise
@Contract("true -> fail") — a typical assertFalse method which throws an exception if true is passed to it
The inspection can be configured to use custom @Nullable / @NotNull annotations (by default the ones from annotations.jar will be used).
EDIT 2
As it turns out, the code analysis is wrong; the value is non-null at run time. Thank you everyone (including the commenters) for sharing your insights and advice. Ultimately I inserted a logger.info with the value before the condition and everything seemed to work. The reason it seemed to stop working is that the graph server was running into timeouts.

This is normal behaviour. This call
apiClient.failSafeGet(pageUrl, getRetryCount());
throws the exception, so the assignment to resp is never completed, and in the finally block the value is still null. So either your method is always throwing an exception or, if not, it is returning null at some point.

In your code:
try {
    resp = apiClient.failSafeGet(pageUrl, getRetryCount());
    break;
}
catch (MalformedJsonException e) {
    logger.info(String.format("The json was still broken after %d retries. Skipping this page and notifying listeners", getRetryCount()));
    for (Consumer<Integer> consumer : onSkipListenerList) {
        consumer.accept(batchSize); // inform each listener that we are skipping this many entries
    }
}
finally {.....
If resp = apiClient.failSafeGet(pageUrl, getRetryCount()); throws an exception, resp will still be null, because the program failed before assigning the result to resp.
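For illustration, a minimal, self-contained sketch (not the OP's code) showing that a variable assigned inside try stays null in finally when the call throws:
public class FinallyNullDemo {
    public static void main(String[] args) {
        String resp = null;
        try {
            resp = fetch(); // throws, so the assignment never completes
        } catch (RuntimeException e) {
            System.out.println("caught: " + e.getMessage());
        } finally {
            System.out.println("resp in finally: " + resp); // prints "resp in finally: null"
        }
    }

    static String fetch() {
        throw new RuntimeException("simulated MalformedJsonException");
    }
}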

Related

How do you handle blank non String values with a FieldSet in FieldSetMapper?

I am using common FieldSetMapper logic found through searches and in examples on StackOverflow and I have run into a situation which surprised me. Either it is a feature or a bug, but I thought I would present it here for review to see how others handle it.
Using Spring Batch, I have a pipe-delimited file which has string and number values that may be optional depending on position. For example:
string|string|number|number|string
string||number||string
In your field set mapper class which implements FieldSetMapper, you usually do some mapping such as:
newThingy.setString1(fieldSet.readString("string1"));
newThingy.setString2(fieldSet.readString("string2"));
newThingy.setValue1(fieldSet.readInt("value1"));
newThingy.setValue2(fieldSet.readInt("value2"));
newThingy.setString3(fieldSet.readString("string3"));
During testing, the code for line 1 above worked fine.
For line 2, with the blank values for string2 and value2, a Java exception was thrown for the number but not the string:
Caused by: java.lang.NumberFormatException: Unparseable number:
at org.springframework.batch.item.file.transform.DefaultFieldSet.parseNumber(DefaultFieldSet.java:754)
at org.springframework.batch.item.file.transform.DefaultFieldSet.readInt(DefaultFieldSet.java:323)
at org.springframework.batch.item.file.transform.DefaultFieldSet.readInt(DefaultFieldSet.java:335)
at com.healthcloud.batch.mapper.MemberFieldSetMapper.mapFieldSet(MemberFieldSetMapper.java:31)
at com.healthcloud.batch.mapper.MemberFieldSetMapper.mapFieldSet(MemberFieldSetMapper.java:1)
I did some research in the DefaultFieldSet class provided by Spring Batch (which implements the FieldSet interface) to try to understand what is going on.
What I found is that the readAndTrim function called by readString returns null if the value read is blank
protected String readAndTrim(int index) {
    String value = tokens[index];
    if (value != null) {
        return value.trim();
    }
    else {
        return null;
    }
}
... but when using readInt (and maybe others) an exception is thrown instead.
private Number parseNumber(String candidate) {
    try {
        return numberFormat.parse(candidate);
    }
    catch (ParseException e) {
        throw new NumberFormatException("Unparseable number: " + candidate);
    }
}
I do see where you can supply a default value in some of the methods, but null is obviously not allowed. What I would expect is consistent behavior across all methods in FieldSet implementations, which would let me match the file to my database as the data is read. Blank values in delimited and fixed-length files are fairly common.
If number-based values cannot be properly handled, I will probably have to read everything as String and then manually handle the conversion for the database, which obviously defeats the purpose of using Spring Batch.
Am I missing something that I should handle better? I can add more code if needed, I just felt this is commonly used and I could keep this short. Will edit as needed.
Edit: Add info on Unit Tests found for Spring Batch class
The comments in the test case state a default should be set instead, but why? I don't want a default. My database allows a null value in the Integer column. I would have to set the default to some arbitrary number which hopefully no one EVER sends, check for it before insert and then switch to null on insert. I still don't like this "feature."
@Test
public void testReadBlankInt() {
    // Trying to parse a blank field as an integer, but without a default
    // value should throw a NumberFormatException
    try {
        fieldSet.readInt(13);
        fail();
    }
    catch (NumberFormatException ex) {
        // expected
    }
    try {
        fieldSet.readInt("BlankInput");
        fail();
    }
    catch (NumberFormatException ex) {
        // expected
    }
}
Always sanity check your input/data. I'll usually throw together a Util class with all the parse/read/verification I need. Bare bones version below...
public static Integer getInteger(FieldSet fs, String key, Integer defaultValue) {
    // note: "default" is a reserved word in Java, so the parameter is named defaultValue
    if (StringUtils.isNumeric(fs.readString(key))) {
        return fs.readInt(key);
    } else {
        return defaultValue;
    }
}
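For context, a hedged sketch of how such a helper might be called from a mapper. The Thingy class, its setters, and the ParseUtil holder for the helper above are hypothetical; it assumes Spring Batch's FieldSetMapper/FieldSet and Apache Commons Lang's StringUtils are on the classpath:
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

public class ThingyFieldSetMapper implements FieldSetMapper<Thingy> {

    @Override
    public Thingy mapFieldSet(FieldSet fieldSet) throws BindException {
        Thingy thingy = new Thingy();                                      // hypothetical target type
        thingy.setString2(fieldSet.readString("string2"));                 // blank strings come through as-is
        thingy.setValue2(ParseUtil.getInteger(fieldSet, "value2", null));  // blank numbers become null instead of a NumberFormatException
        return thingy;
    }
}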

Taking advantage of try catch block instead of doing repeated null checks

I was pondering removing unnecessary null checks from my code today, mostly involving deeply nested objects.
For example:
I have a class Big that holds a reference to a Small object (with getters and setters); likewise, Small holds a reference to a Tiny object, and Tiny has a tinyLength field of type Integer.
So, for example, if I'm writing a method that takes a Big parameter and returns tinyLength at the very end, I usually do something like this:
public Integer getTinyLength(Big big) {
    if (big != null && big.getSmall() != null && big.getSmall().getTiny() != null &&
            big.getSmall().getTiny().getTinyLength() != null) {
        return big.getSmall().getTiny().getTinyLength();
    }
    else {
        // throw exception or return null or do whatever
    }
}
So my question here is rather than doing all this gunk, why don't I just do something like:
try {
    return big.getSmall().getTiny().getTinyLength();
}
catch (Exception e) {
    // null pointer exception being a subclass of Exception class
    // log your error here
    throw new NullPointerException();
}
I feel like the second code snippet is very concise and to the point. Are there any disadvantages to doing something like this?
Especially when there are multiple nested objects, dereferencing each one of them usually results in highly unreadable code, and it is also prone to mistakes if you miss one of them and it turns out to be null.
The first code sample is inefficient and not thread-safe. getSmall() is called four times, getTiny() is called three times, and getTinyLength() is called twice.
The second code sample is dangerous. Catching Exception when you just want to catch NullPointerException can lead to catching exceptions you didn't mean to catch. Plus, the overhead of constructing a NullPointerException object, throwing it, catching it, discarding it, and creating yet another NullPointerException is non-trivial.
Better would be:
Integer length = null;
if (big != null) {
    Small small = big.getSmall();
    if (small != null) {
        Tiny tiny = small.getTiny();
        if (tiny != null) {
            length = tiny.getTinyLength();
        }
    }
}
if (length != null) {
    return length;
} else {
    // Throw exception, return null, or do whatever
}
Better still would be to hide that code away in a getter inside Big#getTinyLength().
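For illustration, a minimal sketch of what such a delegating getter could look like, using the classes assumed from the question:
// Inside Big: callers then need at most one null check on the result.
public Integer getTinyLength() {
    Small small = getSmall();
    if (small == null) {
        return null;
    }
    Tiny tiny = small.getTiny();
    return (tiny == null) ? null : tiny.getTinyLength();
}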
A serious look at your data model would also be in order. Can you have a Big that doesn't have a Small? Can you have a Small that doesn't contain a Tiny? Why would Tiny not have a length? If these members should never be null, then let the JVM check for it and throw an exception, but don't catch it just to rethrow an equivalent one:
return big.getSmall().getTiny().getTinyLength();
EDIT
If you are using Java 8, then the Optional type could simplify your code:
public Integer getTinyLength(Big big) {
    return Optional.ofNullable(big)
            .map(Big::getSmall)
            .map(Small::getTiny)
            .map(Tiny::getTinyLength)
            .orElse(null);
}
This constructs an Optional<Big> from big, and proceeds to unwrap it to get to the tiny length value. If big is null, or if any mapping function used in the .map(...) calls returns null, an empty optional is produced. Finally, the value of the Optional<Integer> is returned if present, and null otherwise.
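If the caller should get an exception rather than null, the same chain can end with orElseThrow instead (a sketch; the exception type here is just an example):
public Integer getTinyLengthOrFail(Big big) {
    return Optional.ofNullable(big)
            .map(Big::getSmall)
            .map(Small::getTiny)
            .map(Tiny::getTinyLength)
            .orElseThrow(() -> new IllegalArgumentException("tiny length is missing"));
}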

Is having a return statement just to satisfy syntax bad practice?

Consider the following code:
public Object getClone(Cloneable a) throws TotallyFooException {
    if (a == null) {
        throw new TotallyFooException();
    }
    else {
        try {
            return a.clone();
        } catch (CloneNotSupportedException e) {
            e.printStackTrace();
        }
    }
    // can't be reached, in for syntax
    return null;
}
The return null; is necessary since an exception may be caught; however, in such a case, since we already checked whether it was null (and let's assume we know the class we are calling supports cloning), we know the try statement will never fail.
Is it bad practice to put in the extra return statement at the end just to satisfy the syntax and avoid compile errors (with a comment explaining it will not be reached), or is there a better way to code something like this so that the extra return statement is unnecessary?
A clearer way without an extra return statement is as follows. I wouldn't catch CloneNotSupportedException either, but let it go to the caller.
if (a != null) {
    try {
        return a.clone();
    } catch (CloneNotSupportedException e) {
        e.printStackTrace();
    }
}
throw new TotallyFooException();
It's almost always possible to fiddle with the order to end up with a more straight-forward syntax than what you initially have.
It definitely can be reached. Note that you're only printing the stacktrace in the catch clause.
In the scenario where a != null and there will be an exception, the return null will be reached. You can remove that statement and replace it with throw new TotallyFooException();.
In general*, if null is a valid result of a method (i.e. the user expects it and it means something), then returning it as a signal for "data not found" or "an exception happened" is not a good idea. Otherwise, I don't see any problem with returning null.
Take for example the Scanner#ioException method:
Returns the IOException last thrown by this Scanner's underlying Readable. This method returns null if no such exception exists.
In this case, the returned value null has a clear meaning, when I use the method I can be sure that I got null only because there was no such exception and not because the method tried to do something and it failed.
*Note that sometimes you do want to return null even when the meaning is ambiguous. For example the HashMap#get:
A return value of null does not necessarily indicate that the map contains no mapping for the key; it's also possible that the map explicitly maps the key to null. The containsKey operation may be used to distinguish these two cases.
In this case null can indicate that the value null was found and returned, or that the hashmap doesn't contain the requested key.
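A small, self-contained sketch of that ambiguity and how containsKey resolves it:
import java.util.HashMap;
import java.util.Map;

public class NullValueDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("a", null);

        System.out.println(map.get("a"));          // null: key present, explicitly mapped to null
        System.out.println(map.get("b"));          // null: key absent
        System.out.println(map.containsKey("a"));  // true  -> distinguishes the first case
        System.out.println(map.containsKey("b"));  // false -> distinguishes the second case
    }
}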
Is it bad practice to put in the extra return statement at the end just to satisfy the syntax and avoid compile errors (with a comment explaining it will not be reached)
I think return null is bad practice for the terminus of an unreachable branch. It is better to throw a RuntimeException (AssertionError would also be acceptable) as to get to that line something has gone very wrong and the application is in an unknown state.
Most likely this is (as above) because the developer has missed something (objects can be non-null and un-cloneable).
I'd likely not use InternalError unless I'm very very sure that the code is unreachable (for example after a System.exit()) as it is more likely that I make a mistake than the VM.
I'd only use a custom exception (such as TotallyFooException) if getting to that "unreachable line" means the same thing as anywhere else you throw that exception.
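For illustration, a small generic sketch of that idea (deliberately not the OP's clone example):
static String describe(int x) {
    if (x > 0) return "positive";
    if (x < 0) return "negative";
    if (x == 0) return "zero";
    // The compiler still demands a return or throw here. Throwing AssertionError
    // instead of "return null" makes a broken assumption surface immediately.
    throw new AssertionError("unreachable: x = " + x);
}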
You caught the CloneNotSupportedException which means your code can handle it. But after you catch it, you have literally no idea what to do when you reach the end of the function, which implies that you couldn't handle it. So you're right that it is a code smell in this case, and in my view means you should not have caught CloneNotSupportedException.
I would prefer to use Objects.requireNonNull() to check that the parameter a is not null. That way it is clear when you read the code that the parameter must not be null.
And to avoid checked exceptions I would rethrow the CloneNotSupportedException as a RuntimeException.
For both you could add a message explaining why this shouldn't happen or be the case.
public Object getClone(Object a) {
    Objects.requireNonNull(a);
    try {
        return a.clone();
    } catch (CloneNotSupportedException e) {
        throw new IllegalArgumentException(e);
    }
}
The examples above are valid and very Java. However, here's how I would address the OP's question on how to handle that return:
public Object getClone(Cloneable a) throws CloneNotSupportedException {
return a.clone();
}
There's no benefit to checking a for null; it's going to throw a NullPointerException anyway. Printing a stack trace is also not helpful: the stack trace is the same regardless of where it is handled.
There is no benefit to junking up the code with unhelpful null tests and unhelpful exception handling. By removing the junk, the return issue is moot.
(Note that the OP included a bug in the exception handling; this is why the return was needed. The OP would not have gotten the method I propose wrong.)
In this sort of situation I would write
public Object getClone(SomeInterface a) throws TotallyFooException {
    // Precondition: "a" should be null or should have a someMethod method that
    // does not throw a SomeException.
    if (a == null) {
        throw new TotallyFooException() ; }
    else {
        try {
            return a.someMethod(); }
        catch (SomeException e) {
            throw new IllegalArgumentException(e) ; } }
}
Interestingly you say that the "try statement will never fail", but you still took the trouble to write a statement e.printStackTrace(); that you claim will never be executed. Why?
Perhaps your belief is not that firmly held. That is good (in my opinion), since your belief is not based on the code you wrote, but rather on the expectation that your client will not violate the precondition. Better to program public methods defensively.
By the way, your code won't compile for me. You can't call a.clone() even if the type of a is Cloneable. At least Eclipse's compiler says so. Expression a.clone() gives error
The method clone() is undefined for the type Cloneable
What I would do for your specific case is
public Object getClone(PubliclyCloneable a) throws TotallyFooException {
    if (a == null) {
        throw new TotallyFooException(); }
    else {
        return a.clone(); }
}
Where PubliclyCloneable is defined by
interface PubliclyCloneable {
    public Object clone() ;
}
Or, if you absolutely need the parameter type to be Cloneable, the following at least compiles.
public static Object getClone(Cloneable a) throws TotallyFooException {
    // Precondition: "a" should be null or point to an object that can be cloned without
    // throwing any checked exception.
    if (a == null) {
        throw new TotallyFooException(); }
    else {
        try {
            return a.getClass().getMethod("clone").invoke(a) ; }
        catch( IllegalAccessException e ) {
            throw new AssertionError(null, e) ; }
        catch( InvocationTargetException e ) {
            Throwable t = e.getTargetException() ;
            if( t instanceof Error ) {
                // Unchecked exceptions are bubbled
                throw (Error) t ; }
            else if( t instanceof RuntimeException ) {
                // Unchecked exceptions are bubbled
                throw (RuntimeException) t ; }
            else {
                // Checked exceptions indicate a precondition violation.
                throw new IllegalArgumentException(t) ; } }
        catch( NoSuchMethodException e ) {
            throw new AssertionError(null, e) ; } }
}
Is having a return statement just to satisfy syntax bad practice?
As others have mentioned, in your case this does not actually apply.
To answer the question, though, Lint type programs sure haven't figured it out! I have seen two different ones fight it out over this in a switch statement.
switch (var)
{
    case A:
        break;
    default:
        return;
        break; // Unreachable code. Coding standard violation?
}
One complained that not having the break was a coding standard violation. The other complained that having it was one because it was unreachable code.
I noticed this because two different programmers kept re-checking the code in with the break added then removed then added then removed, depending on which code analyzer they ran that day.
If you end up in this situation, pick one and comment the anomaly, which is the good form you showed yourself. That's the best and most important takeaway.
It isn't 'just to satisfy syntax'. It is a semantic requirement of the language that every code path leads to a return or a throw. This code doesn't comply. If the exception is caught a following return is required.
No 'bad practice' about it, or about satisfying the compiler in general.
In any case, whether syntax or semantic, you don't have any choice about it.
I would rewrite this to have the return at the end. Pseudocode:
if a == null throw ...
// else not needed, if this is reached, a is not null
Object b
try {
b = a.clone
}
catch ...
return b
No one mentioned this yet so here goes:
public static final Object ERROR_OBJECT = ...
//...
public Object getClone(Cloneable a) throws TotallyFooException {
    Object ret;
    if (a == null)
        throw new TotallyFooException();
    // no need for else here
    try {
        ret = a.clone();
    } catch (CloneNotSupportedException e) {
        e.printStackTrace();
        // something went wrong! ERROR_OBJECT could also be null
        ret = ERROR_OBJECT;
    }
    return ret;
}
I dislike return inside try blocks for that very reason.
The return null; is necessary since an exception may be caught; however, in such a case, since we already checked whether it was null (and let's assume we know the class we are calling supports cloning), we know the try statement will never fail.
If you know details about the inputs involved in a way where you know the try statement can never fail, what is the point of having it? Avoid the try if you know for sure things are always going to succeed (though it is rare that you can be absolutely sure for the whole lifetime of your codebase).
In any case, the compiler unfortunately isn't a mind reader. It sees the function and its inputs, and given the information it has, it needs that return statement at the bottom as you have it.
Is it bad practice to put in the extra return statement at the end
just to satisfy the syntax and avoid compile errors (with a comment
explaining it will not be reached), or is there a better way to code
something like this so that the extra return statement is unnecessary?
Quite the opposite: I'd suggest it's good practice to avoid any compiler warnings, even if that costs another line of code. Don't worry too much about line count here. Establish the reliability of the function through testing and then move on. Even supposing you could omit the return statement, imagine coming back to that code a year later and trying to decide whether that return statement at the bottom causes more confusion than a comment detailing the minutiae of why it was omitted because of assumptions you can make about the input parameters. Most likely the return statement is going to be easier to deal with.
That said, specifically about this part:
try {
    return a.clone();
} catch (CloneNotSupportedException e) {
    e.printStackTrace();
}
...
// can't be reached, in for syntax
return null;
I think there's something slightly odd with the exception-handling mindset here. You generally want to swallow exceptions at a site where you have something meaningful you can do in response.
You can think of try/catch as a transaction mechanism. try making these changes, if they fail and we branch into the catch block, do this (whatever is in the catch block) in response as part of the rollback and recovery process.
In this case, merely printing a stacktrace and then being forced to return null isn't exactly a transaction/recovery mindset. The code transfers the error-handling responsibility to all the code calling getClone to manually check for failures. You might prefer to catch the CloneNotSupportedException and translate it into another, more meaningful form of exception and throw that, but you don't want to simply swallow the exception and return a null in this case since this is not like a transaction-recovery site.
You'll end up leaking the responsibilities to the callers to manually check and deal with failure that way, when throwing an exception would avoid this.
It's like if you load a file, that's the high-level transaction. You might have a try/catch there. During the process of trying to load a file, you might clone objects. If there's a failure anywhere in this high-level operation (loading the file), you typically want to throw exceptions all the way back to this top-level transaction try/catch block so that you can gracefully recover from a failure in loading a file (whether it's due to an error in cloning or anything else). So we generally don't want to just swallow up an exception in some granular place like this and then return a null, e.g., since that would defeat a lot of the value and purpose of exceptions. Instead we want to propagate exceptions all the way back to a site where we can meaningfully deal with it.
Your example is not ideal to illustrate your question as stated in the last paragraph:
Is it bad practice to put in the extra return statement at the end
just to satisfy the syntax and avoid compile errors (with a comment
explaining it will not be reached), or is there a better way to code
something like this so that the extra return statement is unnecessary?
A better example would be the implementation of clone itself:
public class A implements Cloneable {
    public Object clone() {
        try {
            return super.clone() ;
        } catch (CloneNotSupportedException e) {
            throw new InternalError(e) ; // vm bug.
        }
    }
}
Here the catch clause should never be entered. Still, the syntax requires that we either throw something or return a value. Since returning something does not make sense, an InternalError is used to indicate a severe VM condition.

Trouble finding NullPointerException - used try catch instead of if statement

I've been struggling to find out why my if statement didn't work properly, so I used a try catch block instead. This is the if statement as I had it:
//selectArtistByName returns an Artist object
if (!selectArtistByName(artist.getName()).equals(artist.getName()) ||
selectArtistByName(artist.getName())==null) {
//save data to database
}
When I ran the above, I got a NullPointerException because the method selectArtistByName was returning null, as the database was empty. What I don't understand is why it didn't go into the if statement when I was getting null. So I did this and it worked:
try {
    if (!selectArtistByName(artist.getName()).equals(artist.getName())) {
    }
} catch (NullPointerException e) {
    m_db.insert(TABLE_ARTIST, null, artistContents);
}
I'm not a Java guru, but it looks like a horrible fix to me. How can I fix this?
You just need to change the order of the conditions in the if block:
if (selectArtistByName(artist.getName()) == null ||
!selectArtistByName(artist.getName()).equals(artist.getName())) {
//save data to database
}
Do the null check first.
If that succeeds, the 2nd condition is not evaluated, and hence there is no NullPointerException. This is how the short-circuit OR operator works: it only evaluates the 2nd expression if the 1st one evaluates to false.
If the null check fails, the 2nd condition is evaluated, which won't throw an NPE, as non-nullness has already been confirmed by the first condition.
Also, as rightly pointed out by @ruakh in a comment, your condition seems to be broken: selectArtistByName seems to return an Artist, which you can't compare with a String.
I guess you don't even need the 2nd condition. I would assume the selectArtistByName() method has already done the equality check on the name in order to return an Artist. Just checking whether selectArtistByName returns null would be enough. So you could change the if block to:
if (selectArtistByName(artist.getName()) == null) {
//save data to database
}
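For illustration, a sketch of the reordered check that also calls the lookup only once (it assumes Artist exposes getName(), as the question's code suggests):
Artist found = selectArtistByName(artist.getName()); // look up once and reuse the result
if (found == null || !found.getName().equals(artist.getName())) {
    // the || short-circuits: equals() only runs when found != null
    // save data to database
}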
Just put the null condition check at the beginning to shortcut when artist is unknown:
if (selectArtistByName(artist.getName())==null || !selectArtistByName(artist.getName()).equals(artist.getName())) {
//save data to database
}
You can find more info about lazy evaluation in this other question: Does Java have lazy evaluation?

Dead code warning where it shouldn't be

I have some very simple Java code, like this (this is just an excerpt):
for (;;)
{
    AnObject object = null;
    for (AnObject elem : list) // where the list is of the type List<AnObject>
    {
        if (<some dynamic condition goes here>)
        {
            object = elem;
        }
    }
    Log.v(TAG, object.property); // was initially omitted, added for the answer
    // more code skipped for simplicity
    if (object == null)
    {                            //
        break;                   //
    }                            // this all is marked as dead code
}
In Eclipse, the fragment with the comments is marked as dead code. Why? There are no final elements in the condition. The object variable is never assigned the constant null except at the very beginning of the loop, after which it is normally overwritten, though not always. The object can very well end up either null or non-null.
Am I missing something?
The Answer
Well, I found the answer, and to show it I must add one line of code to my example, which I omitted by accident when I tried to simplify the code excerpt, but it is important. The line is:
Log.v(TAG, object.property);
So object must be non-null at the later check; otherwise the code after this line is unreachable because of the exception. The line was added temporarily for debugging purposes, which is why it was out of my consideration.
I can't reproduce that warning in Eclipse.
Could it be that you accidentally left out some piece of code in your original question which might lead to a situation where no code after it is executed? As you marked the complete if statement as dead code and not just its body, this seems to be the case. That missing line could be something that actually uses object (probably as simple as a logging statement where you try to access one of object's properties).
Even if the if condition is not marked as dead, its body might still be. Either object is not null (and the if body would not be executed) or – if object is null – a NullPointerException would be thrown in that missing line so that the execution stops there.
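A minimal, self-contained sketch (not the OP's code) of the same flow-analysis reasoning:
public class DeadCodeDemo {
    public static void main(String[] args) {
        String s = pickOrNull();
        System.out.println(s.length());  // dereference: throws NullPointerException when s is null
        if (s == null) {                 // flow analysis: s cannot be null if the line above completed,
            System.out.println("never"); // so this body is flagged as dead code
        }
    }

    static String pickOrNull() {
        return Math.random() < 0.5 ? "hello" : null;
    }
}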
Can't reproduce with the following program:
import java.util.Date;
import java.util.LinkedList;
import java.util.List;

public class Test {
    public static void main(String[] args) {
        for (;;)
        {
            List<Object> list = new LinkedList<Object>();
            Object object = null;
            for (Object elem : list)
            {
                if (new Date().getSeconds() % 3 == 0)
                {
                    object = elem;
                }
            }
            if (object == null)
            {
                break;
            }
        }
    }
}
This is Eclipse 3.7.2 with Java SE 1.7
Because, if list is null (in your code there is no initialization for list), you won't ever initialize object.
My mistake, disregard that.
If the inner 'if' always evaluates to false, the dead code could be marked inside it.
If the elements are null, the comparison at the end still applies.
If you return before your dead code block, it will raise an unreachable code error.
As others say, I couldn't replicate it.
