I am creating a set of TestNG tests in Eclipse for existing code that supposedly validates certificates against a CRL.
In my test, I instantiate the class that provides the validation functions, like this:
public void testRevokedCertificate() throws Exception {
    EmbeddedFileServer embeddedFileServer = new EmbeddedFileServer(CertificateResourceHelper.getResourcePath("."));
    embeddedFileServer.start();
    URL crlUrl = new URL("http://localhost:" + embeddedFileServer.getPort() + "/certs/" + "test_ca1.crl");
    CachingValidCrlProvider prov = new CachingValidCrlProvider(crlUrl, publicKey, 1, 2);
    assertNotNull(prov);
}
The constructor CachingValidCrlProvider(URL, PublicKey, int, int) always returns null! This is a mystery to me, as the constructor does not look special at all:
public CachingValidCrlProvider(URL crlUrl, PublicKey expectedPublicKey, int failedDownloadBackoffTimeInSeconds, int forcedCrlRefreshIntervalInSeconds) {
    System.out.println("Creating CachingValidCrlProvider (this is never displayed)");
    this.crlUrl = crlUrl;
    this.expectedPublicKey = expectedPublicKey;
    crlDownloadState = new CrlDownloadState(failedDownloadBackoffTimeInSeconds, forcedCrlRefreshIntervalInSeconds);
}
The first debug println() is apparently never reached, since I cannot find the printed line in stdout.
AFAIK, one of the few ways a constructor call can appear to yield null is when a static {} block fails somewhere down the line. However, I do not see any such failure (there is one such block in the codebase, but it is not reachable in this scenario).
How do I troubleshoot this issue?
Quote from an answer here on Stack Overflow by Jon Skeet:
From section 15.9.4 of the JLS:
The value of a class instance creation expression is a reference to the newly created object of the specified class. Every time the expression is evaluated, a fresh object is created.
So no, it can never return null.
A constructor in Java CANNOT return null. The only way your reference can end up null is if you never assigned it the result of the constructor call, or the constructor raised an exception before the assignment happened.
EDIT:
As for your code, the most likely explanation IMO is that execution never actually reaches your assertNotNull(prov) and instead crashes somewhere before it. You should try stepping through with the debugger or adding more sysout.
Static initialization blocks are executed before anything else, so that code is reached; you just don't see it. I suggest you add a breakpoint in the static block and use a debugger (included in Eclipse or IntelliJ) to see where it fails. Good luck.
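For reference, here is a minimal sketch (a hypothetical class, not the asker's code) of what a failing static initialization actually looks like: the first use of the class blows up with ExceptionInInitializerError; it never quietly yields null.

class BrokenConfig {
    // The parse fails at class-initialization time, before any constructor runs.
    static final int TIMEOUT = Integer.parseInt("not a number");
}

public class StaticInitDemo {
    public static void main(String[] args) {
        // The first use of BrokenConfig triggers class initialization and throws
        // java.lang.ExceptionInInitializerError; the expression never evaluates to null.
        BrokenConfig config = new BrokenConfig();
        System.out.println(config);
    }
}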
Thank you all for the responses. They helped me get to the root cause. As it turned out, there was a mock initialization of this class in another test (there are ~300 existing tests by other people in this codebase).
Because the code uses singletons extensively, my call to new had no effect on the already-initialized singleton instance.
I found this thanks to your remarks and hints, which made me look at different parts of the codebase. Thanks a lot.
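For anyone who lands here with the same symptom, here is a minimal, self-contained sketch (hypothetical names, not the actual codebase) of how a singleton initialized by an earlier test can make a later call to new look like it does nothing:

class ProviderHolder {
    private static Object instance;                 // set once, then read by the code under test

    static void initialize(Object provider) {       // e.g. called from another test's mock setup
        if (instance == null) {
            instance = provider;
        }
    }

    static Object get() {
        return instance;
    }
}

public class SingletonPitfallDemo {
    public static void main(String[] args) {
        ProviderHolder.initialize("mock registered by some other test");

        // Constructing a fresh provider later only changes this local variable;
        // everything that goes through the holder still sees the mock.
        Object freshProvider = new Object();
        System.out.println(ProviderHolder.get());                   // the mock, not freshProvider
        System.out.println(freshProvider == ProviderHolder.get());  // false
    }
}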
How do I avoid the Java warning that says
The value of the local variable is not used
for a variable that is declared as private?
You have multiple choices:
Remove the field.
It is unused, so it shouldn't be there.
Comment out the field, e.g. using // TODO
Good for temporary hiding of warning until you write code using field.
Suppress the warning using @SuppressWarnings("unused") (see the sketch after this list).
Disable the warning in IDE settings. For Eclipse, that would be in
Window > Preferences
Java > Compiler > Errors/Warnings
Unnecessary code > Unused private member
Select option Ignore
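A minimal sketch of option 3 (the annotation can also go on a method or a whole class, but the narrower the scope, the better):

public class Example {
    @SuppressWarnings("unused")     // silences the warning for this field only
    private int temporarilyUnusedField;

    void compute() {
        @SuppressWarnings("unused") // also legal on a local variable declaration
        int scratch = 42;
    }
}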
As for #3 and #4: you can, but why would you want to?
Since it is not used and does not contain any code you are interested in, you can delete it.
It is saying that because the value of the variable is not used within the program. To remove this warning, you can simply delete the variable, since you are not using it anywhere, or use it at some point in your code.
Actually, in some cases it's not that easy. Here's an example of why it isn't as simple as just commenting out the code. (This is really more of a comment, but I don't have the 50 reputation needed to comment, and a code block wouldn't fit in a comment anyway. The commented-out (void) cast below does not work in Java, by the way.)
try {
    // rather than a wait, we check for the exit value;
    // if that throws an exception, the process is
    // still running. Ooooooookkkkkaaaaaayyyyyy, no problem.
    int exitValue = pShowProcess.exitValue();
    // guess we don't do this to get rid of the "not referenced" warning
    //(void)exitValue;
    // we don't care what the exit value was
    exitValue = 0;
    // but if we get here, then the show stopped, so
    // if we stop it now, it won't need to wait; it will be fine,
    // we think.
    endShow();
} catch (Exception ex) {
    // Process is still running, so just keep going until
    // mouse clicks or something else stops the show.
}
Was there any reason why the designers of Java felt that local variables should not be given a default value? Seriously, if instance variables can be given a default value, then why can't we do the same for local variables?
And it also leads to problems as explained in this comment to a blog post:
Well, this rule is most frustrating when trying to close a resource in a finally block. If I instantiate the resource inside the try but try to close it within the finally, I get this error. If I move the instantiation outside the try, I get another error stating that it must be within a try.
Very frustrating.
Local variables are declared mostly to do some calculation. So it's the programmer's decision to set the value of the variable and it should not take a default value.
If the programmer, by mistake, did not initialize a local variable and it took a default value, then the output could be some unexpected value. So for local variables, the compiler requires the programmer to initialize the variable with some value before accessing it, to avoid the use of undefined values.
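A minimal sketch of what the compiler enforces (the names are just for illustration):

public class LocalInitDemo {
    public static void main(String[] args) {
        int local;                      // declared but not yet assigned
        // System.out.println(local);   // compile-time error:
        //                              // "variable local might not have been initialized"
        local = 42;                     // assign first...
        System.out.println(local);      // ...then use: prints 42
    }
}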
The "problem" you link to seems to be describing this situation:
SomeObject so;
try {
    // Do some work here ...
    so = new SomeObject();
    so.DoUsefulThings();
} finally {
    so.CleanUp(); // Compiler error here
}
The commenter's complaint is that the compiler balks at the line in the finally section, claiming that so might be uninitialized. The comment then mentions another way of writing the code, probably something like this:
// Do some work here ...
SomeObject so = new SomeObject();
try {
    so.DoUsefulThings();
} finally {
    so.CleanUp();
}
The commenter is unhappy with that solution because the compiler then says that the code "must be within a try." I guess that means some of the code may raise an exception that isn't handled anymore. I'm not sure. Neither version of my code handles any exceptions, so anything exception-related in the first version should work the same in the second.
Anyway, this second version of the code is the correct way to write it. In the first version, the compiler's error message was correct: the so variable might be uninitialized. In particular, if the SomeObject constructor fails, so will not have been initialized, and it will then be an error to attempt to call so.CleanUp. Always enter the try section after you have acquired the resource that the finally section finalizes.
The try-finally block after the so initialization is there only to protect the SomeObject instance, to make sure it gets cleaned up no matter what else happens. If there are other things that need to run, but they aren't related to whether the SomeObject instance was properly allocated, then they should go in another try-finally block, probably one that wraps the one I've shown.
Requiring variables to be assigned manually before use does not lead to real problems. It only leads to minor hassles, but your code will be better for it. You'll have variables with more limited scope, and try-finally blocks that don't try to protect too much.
If local variables had default values, then so in the first example would have been null. That wouldn't really have solved anything. Instead of getting a compile-time error in the finally block, you'd have a NullPointerException lurking there that might hide whatever other exception could occur in the "Do some work here" section of the code. (Or do exceptions in finally sections automatically chain to the previous exception? I don't remember. Even so, you'd have an extra exception in the way of the real one.)
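(For what it's worth, they don't chain automatically: an exception thrown in a finally block simply discards the one that was propagating out of the try block; only with try-with-resources does the original stay primary, with the close() failure attached as a suppressed exception.) A small sketch of that masking, using a plain Object as a stand-in for a defaulted-to-null resource:

public class FinallyMaskingDemo {
    public static void main(String[] args) {
        try {
            Object resource = null;        // pretend locals defaulted to null
            try {
                throw new IllegalStateException("real failure in the try block");
            } finally {
                resource.toString();       // NullPointerException thrown here...
            }
        } catch (Exception e) {
            System.out.println(e);         // ...is all the caller sees; the
                                           // IllegalStateException is gone
        }
    }
}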
Moreover, in the example below, an exception may be thrown inside the SomeObject constructor, in which case the so variable would be null and the call to CleanUp would throw a NullPointerException:
SomeObject so;
try {
    // Do some work here ...
    so = new SomeObject();
    so.DoUsefulThings();
} finally {
    so.CleanUp(); // Compiler error here
}
What I tend to do is this:
SomeObject so = null;
try {
    // Do some work here ...
    so = new SomeObject();
    so.DoUsefulThings();
} finally {
    if (so != null) {
        so.CleanUp(); // safe
    }
}
The actual answer to your question is that method variables are allocated by simply adding a number to the stack pointer; zeroing them would be an extra step. Class variables, by contrast, are placed in initialized memory on the heap.
Why not take the extra step? Take a step back: nobody has mentioned that the "warning" in this case is a Very Good Thing.
You should never initialize your variable to zero or null on the first pass (when you are first coding it). Either assign it the actual value or don't assign it at all, because if you don't, Java can tell you when you really screw up. Take Electric Monk's answer as a great example. In the first case, it's actually amazingly useful that the compiler tells you that if the try block fails because SomeObject's constructor threw an exception, you would end up with an NPE in the finally. If the constructor can't throw an exception, it shouldn't be in the try.
This warning is an awesome multi-path bad-programmer checker that has saved me from doing stupid stuff, since it checks every path and makes sure that if you use a variable on some path, then you initialized it on every path that leads up to that use. I now never explicitly initialize variables until I have determined that it is the correct thing to do.
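A small sketch of that multi-path (definite assignment) check, with illustrative names:

public class DefiniteAssignmentDemo {
    public static void main(String[] args) {
        int result;
        if (args.length > 0) {
            result = args.length;
        }
        // System.out.println(result);   // compile-time error: 'result' is not assigned
        //                               // on the path where the if-branch is skipped

        if (args.length > 0) {
            result = args.length;
        } else {
            result = 0;                  // cover every path...
        }
        System.out.println(result);      // ...and the use compiles fine
    }
}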
On top of that, isn't it better to explicitly say "int size=0" rather than "int size" and make the next programmer go figure out that you intend it to be zero?
On the flip side I can't come up with a single valid reason to have the compiler initialize all uninitialized variables to 0.
Notice that final instance/member variables do not get initialized by default: because they are final and cannot be changed afterwards, Java gives them no default value and forces the programmer to initialize them.
Non-final member variables, on the other hand, can be changed later, so the compiler gives them a default value rather than forcing initialization. As for local variables, their scope is much narrower and the compiler knows exactly where they are used, so forcing the programmer to initialize them makes sense.
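A minimal sketch of the three cases described above (the names are just illustrative):

public class DefaultsDemo {
    int counter;             // non-final field: silently defaults to 0
    final int limit;         // final field: no default, must be definitely assigned...

    DefaultsDemo() {
        limit = 100;         // ...by the time every constructor finishes
    }

    public static void main(String[] args) {
        int local;           // local variable: no default either
        // System.out.println(local);   // compile-time error until 'local' is assigned
        DefaultsDemo d = new DefaultsDemo();
        System.out.println(d.counter + " " + d.limit);   // prints "0 100"
    }
}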
For me, the reason comes down to this: the purpose of local variables is different from the purpose of instance variables. Local variables are there to be used as part of a calculation; instance variables are there to contain state. If you use a local variable without assigning it a value, that's almost certainly a logic error.
That said, I could totally get behind requiring that instance variables always be explicitly initialized; the error would occur on any constructor where the result allows an uninitialized instance variable (e.g., not initialized at the declaration and not in the constructor). But that's not the decision Gosling et al. took in the early '90s, so here we are. (And I'm not saying they made the wrong call.)
I could not get behind defaulting local variables, though. Yes, we shouldn't rely on compilers to double-check our logic, and one doesn't, but it's still handy when the compiler catches one out. :-)
I think the primary purpose was to maintain similarity with C/C++. However, the compiler detects and warns you about the use of uninitialized variables, which reduces the problem to a minimum. From a performance perspective, it's a little faster to let you declare uninitialized variables, since the compiler does not have to emit an assignment whose value you would overwrite in the next statement anyway.
It is more efficient not to initialize variables, and in the case of local variables it is safe to do so, because initialization can be tracked by the compiler.
In cases where you need a variable to be initialized you can always do it yourself, so it is not a problem.
The idea behind local variables is they only exist inside the limited scope for which they are needed. As such, there should be little reason for uncertainty as to the value, or at least, where that value is coming from. I could imagine many errors arising from having a default value for local variables.
For example, consider the following simple code... (N.B. let us assume for demonstration purposes that local variables are assigned a default value, as specified, if not explicitly initialized)
System.out.println("Enter grade");
int grade = new Scanner(System.in).nextInt(); // I won't bother with exception handling here, to cut down on lines.
char letterGrade; // Let us assume the default value for a char is '\0'
if (grade >= 90)
letterGrade = 'A';
else if (grade >= 80)
letterGrade = 'B';
else if (grade >= 70)
letterGrade = 'C';
else if (grade >= 60)
letterGrade = 'D';
else
letterGrade = 'F';
System.out.println("Your grade is " + letterGrade);
When all is said and done, assuming the compiler assigned a default value of '\0' to letterGrade, this code as written would work properly. However, what if we forgot the else statement?
A test run of our code might result in the following
Enter grade
43
Your grade is
This outcome, while to be expected, surely was not the coder's intent. Indeed, probably in a vast majority of cases (or at least, a significant number, thereof), the default value wouldn't be the desired value, so in the vast majority of cases the default value would result in error. It makes more sense to force the coder to assign an initial value to a local variable before using it, since the debugging grief caused by forgetting the = 1 in for(int i = 1; i < 10; i++) far outweighs the convenience in not having to include the = 0 in for(int i; i < 10; i++).
It is true that try-catch-finally blocks could get a little messy (but it isn't actually a catch-22 as the quote seems to suggest), when for example an object throws a checked exception in its constructor, yet for one reason or another, something must be done to this object at the end of the block in finally. A perfect example of this is when dealing with resources, which must be closed.
One way to handle this in the past might be like so...
Scanner s = null; // Declared and initialized to null outside the block: this gives us the needed scope and an initial value.
try {
    s = new Scanner(new FileInputStream(new File("filename.txt")));
    int someInt = s.nextInt();
} catch (InputMismatchException e) {
    System.out.println("Some error message");
} catch (IOException e) {
    System.out.println("different error message");
} finally {
    if (s != null) // In case an exception during initialization prevented assignment of a new non-null value to s.
        s.close();
}
However, as of Java 7, this finally block is no longer necessary using try-with-resources, like so.
try (Scanner s = new Scanner(new FileInputStream(new File("filename.txt")))) {
    ...
    ...
} catch (IOException e) {
    System.out.println("different error message");
}
That said, (as the name suggests) this only works with resources.
And while the former example is a bit yucky, this perhaps speaks more to the way try-catch-finally or these classes are implemented than it speaks about local variables and how they are implemented.
It is true that fields are initialized to a default value, but this is a bit different. When you say, for example, int[] arr = new int[10];, as soon as you've initialized this array, the object exists in memory at a given location. Let's assume for a moment that there were no default values, but instead the initial value is whatever series of 1s and 0s happens to be in that memory location at the moment. This could lead to non-deterministic behavior in a number of cases.
Suppose we have...
int[] arr = new int[10];
if (arr[0] == 0)
    System.out.println("Same.");
else
    System.out.println("Not same.");
It would be perfectly possible that Same. might be displayed in one run and Not same. might be displayed in another. The problem could become even more grievous once you start talking reference variables.
String[] s = new String[5];
According to the definition, each element of s should point to a String (or be null). However, if the initial value were whatever series of 0s and 1s happens to occur at that memory location, not only would there be no guarantee you'd get the same results each time, there would also be no guarantee that the object s[0] points to (assuming it points to anything meaningful) is even a String (perhaps it's a Rabbit, :p)! This lack of concern for type would fly in the face of pretty much everything that makes Java Java. So while having default values for local variables could be seen as optional at best, having default values for instance variables is closer to a necessity.
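A short sketch of the guarantee Java actually gives for array elements:

public class ArrayDefaultsDemo {
    public static void main(String[] args) {
        int[] arr = new int[10];
        String[] s = new String[5];

        System.out.println(arr[0]);         // always 0, never leftover memory contents
        System.out.println(s[0]);           // always null, never a dangling non-String
        System.out.println(s[0] == null);   // true, so the type system stays honest
    }
}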
Flip this around and ask: why are fields initialised to default values? If the Java compiler required you to initialise fields yourself instead of using their default values, that would be more efficient because there would be no need to zero out memory before you used it. So it would be a sensible language design if all variables were treated like local variables in this regard.
The reason is not because it's more difficult to check this for fields than for local variables. The Java compiler already knows how to check whether a field is definitely initialised by a constructor, because it has to check this for final fields. So it would be little extra work for the compiler to apply the same logic to other fields to ensure they are definitely assigned in the constructor.
The reason is that, even for final fields where the compiler proves that the field is definitely assigned in the constructor, its value before assignment can still be visible from other code:
class A {
    final int x;

    A() {
        this.x = calculate();
    }

    int calculate() {
        System.out.println(this.x);
        return 1;
    }
}
In this code, the constructor definitely assigns to this.x, but even so, the field's default initial value of 0 is visible in the calculate method at the point where this.x is printed. If the field wasn't zeroed out before the constructor was invoked, then the calculate method would be able to observe the contents of uninitialised memory, which would be non-deterministic behaviour and have potential security concerns.
The alternative would be to forbid the method call calculate() at this point in the code where the field isn't yet definitely assigned. But that would be inconvenient; it is useful to be able to call methods from the constructor like this. The convenience of being able to do that is worth more than the tiny performance cost of zeroing out the memory for the fields before invoking the constructor.
Note that this reasoning does not apply to local variables, because a method's uninitialised local variables are not visible from other methods; because they are local.
Eclipse even gives you warnings of uninitialized variables, so it becomes quite obvious anyway. Personally I think it's a good thing that this is the default behaviour, otherwise your application may use unexpected values, and instead of the compiler throwing an error it won't do anything (but perhaps give a warning) and then you'll be scratching your head as to why certain things don't quite behave the way they should.
Instance variables have default values, but local variables cannot have default values. Since local variables basically live in methods/behaviour, their main aim is to do some operations or calculations, so it is not a good idea to set default values for them. Otherwise, it would be very hard and time-consuming to track down the reasons for unexpected results.
Local variables are stored on the stack, but instance variables are stored on the heap, so there is some chance that a previous value left on the stack would be read instead of a default value, as happens on the heap.
For that reason the JVM doesn't allow a local variable to be used without initializing it.
The memory stack for a method is created at execution time, and the order of method calls is only decided at execution time.
A method might never be called at all, so initializing local variables at object-instantiation time would be a complete waste of memory. Also, object variables remain in memory for the complete lifecycle of an object, whereas local variables and their values become eligible for garbage collection the moment they are popped from the stack.
So giving default values to the variables of methods that might never be called, and that in any case do not remain in memory for the object's lifecycle, would be completely illogical and memory-wasting.
The answer is that instance variables can be initialized in the class constructor or in any class method, but a local variable is defined within a single method, so whatever value it is to have must be given to it inside that method.
I can think of the following two reasons:
As most of the answers have said, the constraint of initializing local variables ensures that a local variable gets assigned a value the programmer actually intends, so the expected results are computed.
Instance variables can be hidden (shadowed) by local variables with the same name; to ensure the expected behaviour, local variables are forced to be initialized to a value (I would totally avoid such shadowing, though). A sketch follows below.
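A minimal sketch of reason 2 (hypothetical names): because the shadowing local has no default, you cannot accidentally read it while believing you are reading the field.

public class ShadowingDemo {
    private int count = 10;           // instance field, has a value

    int readCount() {
        int count;                    // shadows the field inside this method
        // return count;              // compile-time error: the local 'count' is
        //                            // uninitialized, so the mistake is caught
        count = this.count + 1;       // be explicit about which one you mean
        return count;
    }

    public static void main(String[] args) {
        System.out.println(new ShadowingDemo().readCount());   // prints 11
    }
}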
Note: To be clear, the question being asked is not "How can I display line numbers in Eclipse?" That is answered elsewhere on Stack Overflow.
Goal: I want to know whether Eclipse has a macro that acts as getLineNumber(), in the same way Visual Studio can use the preprocessor macro __LINE__ to easily and universally get the line number where that particular call to getLineNumber() lives. It's useful for knowing whether a line of code was hit, without using a debugger/breakpoints.
I'd like to debug using: println("I just hit file#line: "+__FILE__ +'#'+ __LINE__ );
I am seeking a universal macro that can be copied and pasted to any file/any line, so that those values are known at/before compile time. Does Java/Eclipse offer such a feature? Possibly in an Eclipse plugin?
I can't find any info on this anywhere, I'm at a loss. Thanks.
You can try this:
System.out.println("I just hit file#line: "
        + Thread.currentThread().getStackTrace()[1].getFileName()
        + '#' + Thread.currentThread().getStackTrace()[1].getLineNumber());
Thread.currentThread().getStackTrace() will return an array of StackTraceElement, and we take the second element (the first one will be in the Thread class), and get the file name and line number from that.
EDIT: You may also want to create a static function (you can put it in a utility class) to do that (which avoids problems with different Java implementations):
public static void printCurrentFileAndLine() {
    final StackTraceElement[] ste = Thread.currentThread().getStackTrace();
    for (int i = 0; i < ste.length; i++) {
        if (ste[i].getMethodName().equals("printCurrentFileAndLine")) {
            System.out.println("I just hit file#line: "
                    + ste[i + 1].getFileName() + '#' + ste[i + 1].getLineNumber());
            break;
        }
    }
}
And you just need to call:
printCurrentFileAndLine();
Be careful with the Thread.getStackTrace() method; it's costly. Only use it when you really need it.
For more information about that:
How Expensive is Thread.getStackTrace()?