Sometimes it would be convenient to have an easy way of doing the following:
Foo a = dosomething();
if (a != null) {
    if (a.isValid()) {
        ...
    }
}
My idea was to have some kind of static "default" methods for uninitialized variables, like this:
class Foo {
    public boolean isValid() {
        return true;
    }
    public static boolean isValid() {
        return false;
    }
}
And now I could do this…
Foo a = dosomething();
if (a.isValid()) {
    // In our example case -> variable is initialized and the "normal" method gets called
} else {
    // In our example case -> variable is null
}
So, if a == null, the static "default" method from our class gets called; otherwise, the method of our object gets called.
Is there some keyword I'm missing to do exactly this, or is there a reason why this is not already implemented in programming languages like Java/C#?
Note: this example is not very breathtaking on its own, but there are cases where this would be - indeed - very nice.
It's very slightly odd; ordinarily, x.foo() runs the foo() method as defined by the object that the x reference is pointing to. What you propose is a fallback mechanism: if x is null (is referencing nothing), then we don't look at the object that x is pointing to (there's nothing it's pointing at; hence, that is impossible), but instead we look at the type of x, the variable itself, and ask this type: Hey, can you give me the default impl of foo()?
The core problem is that you're assigning a definition to null that it just doesn't have. Your idea requires a redefinition of what null means, which means the entire community needs to go back to school. I think the current definition of null in the java community is some nebulous, ill-defined cloud of confusion, so this is probably a good idea, but it is a huge commitment, and it is extremely easy for the OpenJDK team to dictate a direction and for the community to just ignore it. The OpenJDK team should be very hesitant in trying to 'solve' this problem by introducing a language feature, and they are.
Let's talk about the definitions of null that make sense, which definition of null your idea specifically is catering to (to the detriment of the other interpretations!), and how catering to that specific idea is already easy to do in current java. In other words: what you propose sounds outright daft to me, in that it's just unnecessary and forces an opinion of what null means down everybody's throats for no reason.
Not applicable / undefined / unset
This definition of null is exactly how SQL defines it, and it has the following properties:
There is no default implementation available. By definition! How can one define what the size of, say, an unset list is? You can't say 0. You have no idea what the list is supposed to be. The very point is that interaction with an unset/not-applicable/unknown value should immediately lead to a result that represents either [A] the programmer messed up - the fact that they think they can interact with this value means they programmed a bug; they made an assumption about the state of the system which does not hold - or [B] the unset nature is infectious: the operation returns the notion 'unknown / unset / not applicable' as its result.
SQL chose the B route: Any interaction with NULL in SQL land is infectious. For example, even NULL = NULL in SQL is NULL, not TRUE. It also means that all booleans in SQL are tri-state, but this actually 'works', in that one can honestly fathom this notion. If I ask you: Hey, are the lights on?, then there are 3 reasonable answers: Yes, No, and I can't tell you right now; I don't know.
In my opinion, java as a language is meant for this definition as well, but has mostly chosen the [A] route: Throw an NPE to let everybody know: There is a bug, and to let the programmer get to the relevant line extremely quickly. NPEs are easy to solve, which is why I don't get why everybody hates NPEs. I love NPEs. So much better than some default behaviour that is usually, but not always, what I intended (objectively speaking, it is better to have 50 bugs that each take 3 minutes to solve than one bug that takes an entire working day, by a large margin!) - this definition 'works' with the language:
Uninitialized fields, and uninitialized values in an array, begin as null, and in the absence of further information, treating them as unset is correct.
They are, in fact, infectiously erroneous: Virtually all attempts to interact with them result in an exception, except ==, but that is intentional, for the same reason that in SQL IS NULL will return TRUE or FALSE and not NULL: Now we're actually talking about the pointer nature of the reference itself ("foo" == "foo" can be false if the 2 strings aren't the same ref: Clearly == in java between objects is about the references themselves and not about the objects referenced).
A key aspect to this is that null has absolutely no semantic meaning, at all. Its lack of semantic meaning is the point. In other words, null doesn't mean that a value is short or long or blank or indicative of anything in particular. The only thing it does mean is that it means nothing. You can't derive any information from it. Hence, foo.size() is not 0 when foo is unset/unknown - the question 'what is the size of the object foo is pointing at' is unanswerable, in this definition, and thus NPE is exactly right.
Your idea would hurt this interpretation - it would confound matters by giving answers to unanswerable questions.
Sentinel / 'empty'
null is sometimes used as a value that does have semantic meaning. Something specific. For example, if you ever wrote this, you're using this interpretation:
if (x == null || x.isEmpty()) return false;
Here you've assigned a semantic meaning to null - the same meaning you assigned to an empty string. This is common in java and presumably stems from some bass-ackwards notion of performance. For example, in the eclipse ecj java parser system, all empty arrays are represented with null pointers. The definition of a method has a field Argument[] arguments (for the method parameters; 'argument' is slightly the wrong word, but it is used to store the param definitions); for methods with zero parameters, the semantically correct choice is obviously new Argument[0]. However, that is NOT what ecj fills the Abstract Syntax Tree with, and if you are hacking around on the ecj code and assign new Argument[0] to this field, other code will mess up, as it just wasn't written to deal with that.
This is in my opinion bad use of null, but is quite common. And, in ecj's defense, it is about 4 times faster than javac, so I don't think it's fair to cast aspersions at their seemingly deplorably outdated code practices. If it's stupid and it works it isn't stupid, right? ecj also has a better track record than javac (going mostly by personal experience; I've found 3 bugs in ecj over the years and 12 in javac).
This kind of null does get a lot better if we implement your idea.
The better solution
What ecj should have done to get the best of both worlds: Make a public constant for it! new Argument[0], the object, is entirely immutable. You need to make a single instance, once, ever, for an entire JVM run. The JVM itself does this; try it: List.of() returns the 'singleton empty list'. So does Collections.emptyList(), for the old-timers in the crowd. All lists 'made' with Collections.emptyList() are actually just refs to the same singleton 'empty list' object. This works because the lists these methods make are entirely immutable.
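For instance, a minimal sketch of such a constant (Argument here is just a stand-in for ecj's actual type; the names are made up):

// Argument stands in for ecj's parameter-definition type.
class Argument {}

final class Arguments {
    // One immutable, shared instance for the whole JVM run,
    // exactly like the singleton behind Collections.emptyList().
    static final Argument[] NO_ARGUMENTS = new Argument[0];

    private Arguments() {}
}

Zero-parameter methods would then be assigned Arguments.NO_ARGUMENTS instead of null.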
The same can and generally should apply to you!
If you ever write this:
if (x == null || x.isEmpty())
then you messed up if we go by the first definition of null, and you're simply writing needlessly wordy, but correct, code if we go by the second definition. You've come up with a solution to address this, but there's a much, much better one!
Find the place where x got its value, and address the boneheaded code that decided to return null instead of "". You should in fact emphatically NOT be adding null checks to your code, because it's far too easy to get into this mode where you almost always do it, and therefore you rarely actually have null refs - but it's just slices of swiss cheese laid on top of each other: there may still be holes, and then you get NPEs. Better to never check, so you get NPEs very quickly in the development process - somebody returned null where they should have returned "" instead.
Sometimes the code that made the bad null ref is out of your control. In that case, do the same thing you should always do when working with badly designed APIs: Fix it ASAP. Write a wrapper if you have to. But if you can commit a fix, do that instead. This may require making such an object.
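A minimal sketch of such a wrapper (LegacyUserService and its nickname method are hypothetical):

// Hypothetical third-party API that returns null where "" would be saner.
interface LegacyUserService {
    String nickname(int userId); // documented: may return null
}

final class UserService {
    private final LegacyUserService legacy;

    UserService(LegacyUserService legacy) {
        this.legacy = legacy;
    }

    /** Never returns null: a missing nickname becomes "". */
    String nickname(int userId) {
        String raw = legacy.nickname(userId);
        return raw == null ? "" : raw;
    }
}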
Sentinels are awesome
Sometimes sentinel objects (objects that 'stand in' for a default / blank value, such as "" for strings, or List.of() for lists) can be a bit more fancy than this. For example, one can imagine using LocalDate.of(1800, 1, 1) as a sentinel for a missing birthdate, but do note that this particular instance is not a great idea: it does crazy stuff. If you write code to determine the age of a person, it starts giving completely wrong answers - it'll say someone is 212 years old all of a sudden. That is significantly worse than throwing an exception: with the exception you know you have a bug sooner, and you get a stacktrace that lets you find it in literally 500 milliseconds (just click the line, voila - that is the exact line you need to look at right now to fix the problem).
But you could make a LocalDate object that does some things (such as: it CAN print itself; sentinel.toString() doesn't throw an NPE but prints something like 'unset date'), but that throws an exception for other things. For example, .getYear() would throw.
You can also make more than one sentinel. If you want a sentinel that means 'far future', that's trivially made (LocalDate.of(9999, 12, 31) is pretty good already), and you can also have one as 'for as long as anyone remembers', e.g. 'distant past'. That's cool, and not something your proposal could ever do!
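A sketch of what those sentinels could look like (the names are made up; note the deliberate use of == - each sentinel is a single shared instance, so reference comparison is exactly right):

import java.time.LocalDate;

final class Dates {
    static final LocalDate DISTANT_PAST = LocalDate.of(-9999, 1, 1);
    static final LocalDate FAR_FUTURE = LocalDate.of(9999, 12, 31);

    private Dates() {}

    static boolean isSentinel(LocalDate d) {
        // Reference comparison on purpose: there is only ever one instance of each.
        return d == DISTANT_PAST || d == FAR_FUTURE;
    }
}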
You will have to deal with the consequences, though. In some small ways the java ecosystem's definitions don't mesh with this, and null would perhaps have been a better stand-in. For example, the equals contract clearly states that a.equals(a) must always hold, and yet, just like in SQL NULL = NULL isn't TRUE, you probably don't want missingDate.equals(missingDate) to be true; that's conflating the meta with the value: you can't actually tell me that 2 missing dates are equal. By definition: the dates are missing. You do not know if they are equal or not. It is not an answerable question. And yet we can't implement the equals method of missingDate as return false; (or, better yet - as you also can't really know they aren't equal - throw an exception), as that breaks the contract (equals methods must have the identity property and must not throw, as per their own javadoc, so we can't do either of those things).
Dealing with null better
There are a few things that make dealing with null a lot easier:
Annotations: APIs can and should be very clear in communicating when their methods can return null and what that means. Annotations that turn that documentation into compiler-checked documentation are awesome. Your IDE can start warning you, as you type, that null may occur and what that means, and will say so in auto-complete dialogs too. And it's all entirely backwards compatible in all senses of the word: No need to start considering giant swaths of the java ecosystem as 'obsolete' (unlike Optional, which mostly sucks).
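For example, with JSR-305-style annotations (the PhoneBook interface is made up; JSpecify or the JetBrains annotations work the same way):

import javax.annotation.Nullable;

interface PhoneBook {
    /** Returns the number, or null if the user has none on record. */
    @Nullable
    String numberFor(String userId);
}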
Optional, except this is a non-solution. The type isn't orthogonal: you can't write a method that takes a List<MaybeOptionalOrNot<String>> and works on both List<String> and List<Optional<String>>, even though a method that merely checks the 'is it some or is it none?' state of all list members (and at most shuffles things around) would work equally well on both. And yet you just can't write it. This is bad, and it means all usages of Optional must be 'unrolled' on the spot, and e.g. Optional<X> should show up pretty much never ever as a parameter type or field type. Only as a return type, and even that is dubious - I'd just stick to what Optional was made for: the return type of Stream terminal operations.
Adopting it also isn't backwards compatible. For example, hashMap.get(key) should, in all possible interpretations of what Optional is for, obviously return an Optional<V>, but it doesn't, and it never will, because java doesn't break backwards compatibility lightly and breaking that is obviously far too heavy an impact. The only real solution is to introduce java.util2 and a complete incompatible redesign of the collections API, which is splitting the java ecosystem in twain. Ask the python community (python2 vs. python3) how well that goes.
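For completeness, a quick sketch of the one use I'd endorse - Optional as a stream terminal result, unrolled on the spot:

import java.util.List;
import java.util.Optional;

class OptionalDemo {
    public static void main(String[] args) {
        Optional<String> first = List.of("a", "bb", "ccc").stream()
                .filter(s -> s.length() > 1)
                .findFirst();
        // Unroll immediately; don't store or pass the Optional around.
        System.out.println(first.orElse("")); // prints "bb"
    }
}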
Use sentinels, use them heavily, make them available. If I were designing LocalDate, I'd have created LocalDate.FAR_FUTURE and LocalDate.DISTANT_PAST (but let it be clear that I think Stephen Colebourne, who designed JSR310, is perhaps the best API designer out there; nothing is so perfect that it can't be complained about, right?).
Use API calls that allow defaulting. Map has this.
Do NOT write this code:
String phoneNr = phoneNumbers.get(userId);
if (phoneNr == null) return "Unknown phone number";
return phoneNr;
But DO write this:
return phoneNumbers.getOrDefault(userId, "Unknown phone number");
Don't write:
Map<Course, List<Student>> participants;

void enrollStudent(Student student) {
    List<Student> participating = participants.get(econ101);
    if (participating == null) {
        participating = new ArrayList<Student>();
        participants.put(econ101, participating);
    }
    participating.add(student);
}
instead write:
Map<Course, List<Student>> participants;

void enrollStudent(Student student) {
    participants.computeIfAbsent(econ101, k -> new ArrayList<Student>())
        .add(student);
}
and, crucially, if you are writing APIs, ensure things like getOrDefault, computeIfAbsent, etc. are available so that the users of your API don't have to deal with null nearly as much.
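A sketch of what that looks like on the API side (SettingsStore is a made-up example):

// The defaulting accessor means callers never have to juggle null themselves.
interface SettingsStore {
    String get(String key); // may return null for unknown keys

    default String getOrDefault(String key, String fallback) {
        String value = get(key);
        return value == null ? fallback : value;
    }
}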
You can write a static test() method like this:
import java.util.function.Predicate;

static <T> boolean test(T object, Predicate<T> validation) {
    return object != null && validation.test(object);
}
and
static class Foo {
    public boolean isValid() {
        return true;
    }
}

static Foo dosomething() {
    return new Foo();
}

public static void main(String[] args) {
    Foo a = dosomething();
    if (test(a, Foo::isValid))
        System.out.println("OK");
    else
        System.out.println("NG");
}
output:
OK
If dosomething() returns null, it prints NG.
Not exactly, but take a look at Optional:
Optional.ofNullable(dosomething())
        .filter(Foo::isValid)
        .ifPresent(a -> ...);
I have a getter that returns a String and I am comparing it to some other String. I check the returned value for null, so my if statement looks like this (and I really do exit early if it is true):
if (someObject.getFoo() != null && someObject.getFoo().equals(someOtherString)) {
    return;
}
Performance-wise, would it be better to store the returned String rather than calling the getter twice, like this? Does it even matter?
String foo = someObject.getFoo();
if (foo != null && foo.equals(someOtherString)) {
    return;
}
To answer questions from the comments, this check is not performed very often and the getter is fairly simple. I am mostly curious how allocating a new local variable compares to executing the getter an additional time.
It depends entirely on what the getter does. If it's a simple getter (retrieving a data member), then the JVM will be able to inline it on-the-fly if it determines that code is a hot spot for performance. This is actually why Oracle/Sun's JVM is called "HotSpot". :-) It will apply aggressive JIT optimization where it sees that it needs it (when it can). If the getter does something complex, though, naturally it could be slower to use it and have it repeat that work.
If the code isn't a hot spot, of course, you don't care whether there's a difference in performance.
Someone once told me that the inlined getter can sometimes be faster than the value cached to a local variable, but I've never proven that to myself and don't know the theory behind why it would be the case.
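If you want to actually measure this rather than theorize, a micro-benchmark harness such as JMH is the right tool. A minimal sketch (names are made up, and the usual JMH caveats about constant folding and dead-code elimination apply):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class GetterBenchmark {
    private String foo = "bar";

    String getFoo() { return foo; }

    @Benchmark
    public boolean callGetterTwice() {
        // mirrors the first block: two getter calls
        return getFoo() != null && getFoo().equals("bar");
    }

    @Benchmark
    public boolean cacheInLocal() {
        // mirrors the second block: one call, cached in a local
        String f = getFoo();
        return f != null && f.equals("bar");
    }
}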
Use the second block. The first block will most likely get optimized to the second anyway, and the second is more readable. But the main reason is that, if someObject is ever accessed by other threads, and if the optimization somehow gets disabled, the first block can throw no end of NullPointerExceptions: the value may become null between the null check and the second getter call.
Also: even without multi-threading, if someObject is by any chance made volatile, the optimization will disappear. (Bad for performance, and, of course, really bad with multiple threads.) And lastly, the second block will make using a debugger easier (not that that would ever be necessary.)
You can omit the first null check since equals does that for you:
The result is true if and only if the argument is not null and is a String object that represents the same sequence of characters as this object.
So the best solution is simply:
if (someOtherString.equals(someObject.getFoo()))
They both look the same, even performance-wise. Use the first block if you are sure you won't be using the returned value further; if not, use the second block.
I prefer the second code block because it assigns foo once, and the local foo then cannot change to null/not-null behind your back.
Nulls are often required, and Java could solve this with a safe-navigation ('Elvis'-style) operator:
if (someObject.getFoo()?.equals(someOtherString)) {
    return;
}
When reading JDK source code, I find it common that the author will check whether the parameters are null and then throw new NullPointerException() manually.
Why do they do it? I think there's no need to do so, since a NullPointerException will be thrown anyway as soon as any method is called on the null reference. (Here is some source code of HashMap, for instance:)
public V computeIfPresent(K key,
                          BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
    if (remappingFunction == null)
        throw new NullPointerException();
    Node<K,V> e; V oldValue;
    int hash = hash(key);
    if ((e = getNode(hash, key)) != null &&
        (oldValue = e.value) != null) {
        V v = remappingFunction.apply(key, oldValue);
        if (v != null) {
            e.value = v;
            afterNodeAccess(e);
            return v;
        }
        else
            removeNode(hash, key, null, false, true);
    }
    return null;
}
There are a number of reasons that come to mind, several being closely related:
Fail-fast: If it's going to fail, best to fail sooner rather than later. This allows problems to be caught closer to their source, making them easier to identify and recover from. It also avoids wasting CPU cycles on code that's bound to fail.
Intent: Throwing the exception explicitly makes it clear to maintainers that the error is there purposely and the author was aware of the consequences.
Consistency: If the error were allowed to happen naturally, it might not occur in every scenario. If no mapping is found, for example, remappingFunction would never be used and the exception wouldn't be thrown. Validating input in advance allows for more deterministic behavior and clearer documentation.
Stability: Code evolves over time. Code that encounters an exception naturally might, after a bit of refactoring, cease to do so, or do so under different circumstances. Throwing it explicitly makes it less likely for behavior to change inadvertently.
It is for clarity, consistency, and to prevent extra, unnecessary work from being performed.
Consider what would happen if there wasn't a guard clause at the top of the method. It would always call hash(key) and getNode(hash, key), even when null had been passed in for the remappingFunction, before the NPE was thrown.
Even worse, if the if condition is false, remappingFunction is never used at all, which means the method doesn't always throw an NPE when a null is passed; whether it does depends on the state of the map.
Both scenarios are bad. If null is not a valid value for remappingFunction the method should consistently throw an exception regardless of the internal state of the object at the time of the call, and it should do so without doing unnecessary work that is pointless given that it is just going to throw. Finally, it is a good principle of clean, clear code to have the guard right up front so that anyone reviewing the source code can readily see that it will do so.
Even if the exception were currently thrown by every branch of code, it is possible that a future revision of the code would change that. Performing the check at the beginning ensures it will definitely be performed.
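Since Java 7, the idiomatic way to write that guard is Objects.requireNonNull, which also lets you name the offending parameter; a sketch of the same guard in that style (the surrounding class is illustrative):

import java.util.Objects;
import java.util.function.BiFunction;

class Guarded<K, V> {
    V computeIfPresent(K key,
            BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        // Same fail-fast guard, but the exception message names the culprit.
        Objects.requireNonNull(remappingFunction, "remappingFunction");
        // ... the actual map logic would follow here
        return null;
    }
}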
In addition to the reasons listed by #shmosel's excellent answer ...
Performance: There may be / have been performance benefits (on some JVMs) to throwing the NPE explicitly rather than letting the JVM do it.
It depends on the strategy that the Java interpreter and JIT compiler take to detecting the dereferencing of null pointers. One strategy is to not test for null, but instead trap the SIGSEGV that happens when an instruction tries to access address 0. This is the fastest approach in the case where the reference is always valid, but it is expensive in the NPE case.
An explicit test for null in the code would avoid the SIGSEGV performance hit in a scenario where NPEs were frequent.
(I doubt that this would be a worthwhile micro-optimization in a modern JVM, but it could have been in the past.)
Compatibility: The likely reason that there is no message in the exception is for compatibility with NPEs that are thrown by the JVM itself. In a compliant Java implementation, an NPE thrown by the JVM has a null message. (Android Java is different.)
Apart from what other people have pointed out, it's worth noting the role of convention here. In C#, for example, you also have the same convention of explicitly raising an exception in cases like this, but it's specifically an ArgumentNullException, which is somewhat more specific. (The C# convention is that NullReferenceException always represents a bug of some kind - quite simply, it shouldn't ever happen in production code; granted, ArgumentNullException usually represents a bug too, but it could be more of a "you don't understand how to use the library correctly" kind of bug.)
So, basically, in C# NullReferenceException means that your program actually tried to use a null reference, whereas ArgumentNullException means that it recognized that the value was wrong and didn't even bother to try to use it. The implications can actually be different (depending on the circumstances), because ArgumentNullException means that the method in question didn't have side effects yet (since it failed the method preconditions).
Incidentally, if you're raising something like ArgumentNullException or IllegalArgumentException, that's part of the point of doing the check: you want a different exception than you'd "normally" get.
Either way, explicitly raising the exception reinforces the good practice of being explicit about your method's preconditions and expected arguments, which makes the code easier to read, use, and maintain. If you didn't explicitly check for null, I wouldn't know whether it's because you thought that no one would ever pass a null argument, you're counting on it to throw the exception anyway, or you just forgot to check for that.
It is so you will get the exception as soon as you perpetrate the error, rather than later on when you're using the map and won't understand why it happened.
It turns a seemingly erratic error condition into a clear contract violation: The function has some preconditions for working correctly, so it checks them beforehand, enforcing them to be met.
The effect is, that you won't have to debug computeIfPresent() when you get the exception out of it. Once you see that the exception comes from the precondition check, you know that you called the function with an illegal argument. If the check were not there, you would need to exclude the possibility that there is some bug within computeIfPresent() itself that leads to the exception being thrown.
Obviously, throwing the generic NullPointerException is a really bad choice, as it does not signal a contract violation in and of itself. IllegalArgumentException would be a better choice.
Sidenote:
Java allows this too, via the assert keyword (though assertions are disabled at runtime unless you pass -ea); C/C++ programmers use an assert() in this case, which is significantly better for debugging: It tells the program to crash immediately and as hard as possible should the provided condition evaluate to false. So, if you ran
void MyClass_foo(MyClass* me, int (*someFunction)(int)) {
    assert(me);
    assert(someFunction);
    ...
}
under a debugger, and something passed NULL into either argument, the program would stop right at the line telling which argument was NULL, and you would be able to examine all local variables of the entire call stack at leisure.
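For the record, the Java analogue would look like this (assertions are no-ops unless the JVM runs with -ea, which is why production guards tend to use Objects.requireNonNull instead):

import java.util.function.IntUnaryOperator;

class AssertDemo {
    static void foo(Object me, IntUnaryOperator someFunction) {
        assert me != null : "me is null";
        assert someFunction != null : "someFunction is null";
        // ...
    }
}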
It's because it's possible for the NPE not to happen naturally. Consider a piece of code like this:
boolean isUserAMoron(User user) {
    Connection c = UnstableDatabase.getConnection();
    if (user.name.equals("Moron")) {
        // in this case we don't need to connect to the DB
        return true;
    } else {
        return c.makeMoronishCheck(user.id);
    }
}
(Of course there are numerous code-quality problems in this sample; sorry, too lazy to imagine a perfect one.)
A situation where c will not actually be used, and a NullPointerException will not be thrown even if c == null, is possible.
In more complicated situations it becomes very hard to hunt down such cases. This is why a general check like if (c == null) throw new NullPointerException() is better.
It is intentional, to protect against further damage or against getting into an inconsistent state.
Apart from all other excellent answers here, I'd also like to add a few cases.
You can add a message if you create your own exception
If you throw your own NullPointerException you can add a message (which you definitely should!)
The default message is null, both from new NullPointerException() and from all the methods that use it, for instance Objects.requireNonNull without a message argument. If you print that null message, it can even translate to an empty string...
A bit short and uninformative...
The stack trace will give a lot of information, but for the user to know what was null they have to dig up the code and look at the exact row.
Now imagine that NPE being wrapped and sent over the net, e.g. as a message in a web service error, perhaps between different departments or even organizations. Worst case scenario, no one may figure out what null stands for...
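A sketch of a guard whose message travels with the exception, even across the wire (the save method and its parameter are illustrative):

import java.util.Objects;

class OrderService {
    void save(Object order) {
        // The message ends up in logs and serialized error payloads.
        Objects.requireNonNull(order, "order must not be null when saving");
        // ...
    }
}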
Chained method calls will keep you guessing
An exception will only tell you on what row the exception occurred. Consider the following row:
repository.getService(someObject.someMethod());
If you get an NPE and it points at this row, which one of repository and someObject was null?
Instead, checking these variables when you get them will at least point to a row where they are hopefully the only variable being handled. And, as mentioned before, even better if your error message contains the name of the variable or similar.
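For example (the types are minimal stand-ins, just so the sketch compiles):

import java.util.Objects;

interface Service {}
interface Repository { Service getService(String key); }
interface SomeObject { String someMethod(); }

class Lookup {
    static Service lookup(Repository repository, SomeObject someObject) {
        // One check per line: the failing check names the culprit.
        Objects.requireNonNull(repository, "repository");
        Objects.requireNonNull(someObject, "someObject");
        return repository.getService(someObject.someMethod());
    }
}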
Errors when processing lots of input should give identifying information
Imagine that your program is processing an input file with thousands of rows and suddenly there's a NullPointerException. You look at the place and realize some input was incorrect... what input? You'll need more information about the row number, perhaps the column or even the whole row text to understand what row in that file needs fixing.
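A sketch of that kind of error reporting (the file format and column rule are invented):

import java.util.List;

class Importer {
    static void importRows(List<String> rows) {
        for (int i = 0; i < rows.size(); i++) {
            String row = rows.get(i);
            String[] parts = row.split(";");
            if (parts.length < 2 || parts[1].isEmpty()) {
                // Identify the row and its content instead of a bare NPE later on.
                throw new IllegalArgumentException(
                        "Missing value in column 2 at row " + (i + 1) + ": \"" + row + "\"");
            }
            // ... actual processing of the row
        }
    }
}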
I have 2 java statements:
if(myvar!=null)
if(null!=myvar)
Some people say that the second form is better because it helps to avoid NPEs. Is it true? What is generally better to use in Java?
if(myvar!=null)
if(null!=myvar)
Some people say that the second form is better because it helps to avoid NPEs. Is it true?
No. These are exactly the same, and there is no risk of NPE here to avoid.
Maybe you confused the example with this situation:
if (myvar.equals("something"))
if ("something".equals(myvar))
Here, if myvar is null, the first form would throw an NPE, since .equals would be dereferencing a null value, but the second one works just fine, as the .equals implementation of String handles a null parameter gracefully, returning false in this example. For this reason, in this example, the 2nd form is generally recommended.
A related argument: which one of these is preferred?
if (myvar == null)
if (null == myvar)
Consider the possibility of a typo, writing a single = instead of ==:
if (myvar = null)
if (null = myvar)
If myvar is a Boolean, then the first form will compile, the second form will not. So it may seem that the second form is safer. However, for any other variable type, neither form will compile. And even in the case of a Boolean, the damage is very limited, because regardless of the value of myvar, the program will always crash with an NPE when the if statement is reached, due to unboxing a null value.
Since no test will ever get past this statement, and since you should not release untested code, making such a mistake is unrealistic.
In short, the safety benefit is so marginally small that it's practically non-existent, so I don't see a reason to prefer this unusual writing style.
Update
As #Nitek pointed out in a comment, an advantage of adopting the second form could be if you make it a habit, so that when you program in other languages where myvar = null might compile, you'd be slightly safer, out of your "good habits".
I'd still point out that in many languages comparisons with null are special, with no possibility of such typo errors. For example, in Python, myvar == None is considered incorrect and should be written as myvar is None, so there's no == left to mistype.
Strictly speaking, although the null == myvar writing style will not protect you in all languages, it might protect you in some, so I'd have to admit it seems to be a good habit.
This is not true, they are the same.
I prefer the first one because I think it reads better.
if(myvar!=null)
if myvar is not equal to null
and
if(null!=myvar)
if null is not equal to myvar
There is no real difference between them; both refer to the same check:
if(myvar!=null)
if(null!=myvar)
Both are the exact same thing.
But in a deeper context, the second is the Yoda condition.
It is generally criticized for its readability issues, so try to make everything simple for yourself and for other people who might read your code, as this is not a standard notation.
This is primarily opinion-based, but I always go with
if (variable == null)
if (variable != null)
because imo it's a better programming style.
And a short answer to your post: no, there is no difference between them.
There is no practical difference in your case.
15.21. Equality Operators
The equality operators are commutative if the operand expressions have no side effects.
That is, you can have a situation where the LHS and RHS matter because evaluating them can cause a change in the other, but not if one of them is the keyword null.
Consider the following example:
public class Example {
    static int x = 0;

    public static void main(String[] args) {
        System.out.println(doit() == x); // false
        System.out.println(x == doit()); // true
    }

    static int doit() {
        x++;
        return 0;
    }
}
Furthermore,
15.21.3. Reference Equality Operators == and !=
The result of != is false if the operand values are both null or both refer to the same object or array; otherwise, the result is true.
shows that there is no difference in the evaluation.
As Nitek wrote in the comment to my initial question:
The second one prevents typos like "myvar=null" because "null=myvar" won't compile. Might save you some trouble.
So, we have a BIG advantage of the second form - it helps to prevent a serious logic error. Example:
Boolean i = false;
....
if (null = i) {
}

won't compile, but

Boolean i = false;
....
if (i = null) {
}

will.
But the second form has a big disadvantage: it's harder to read.
So I would say that in 99% of cases the first form is fine, but I prefer to use the second form. If you are sure you won't mix == and = up, use the first form. If not, use the second. I'm sure there are a couple of other cases where the second form is preferred, but I can't recall them at the moment.
Hi
In our company they follow a strict rule for comparing with null values. When I write if (variable != null), in code review I get comments telling me to change it to if (null != variable). Is there any performance hit for the above code?
If anybody could explain, it would be highly appreciated.
Thanks in advance
I don't see any advantage in following this convention. In C, where boolean types don't exist, it's useful to write
if (5 == variable)
rather than
if (variable == 5)
because if you forget one of the equal signs, you end up with
if (variable = 5)
which assigns 5 to variable and always evaluates to true. But in Java, a boolean is a boolean. And with !=, there is no reason at all.
One good piece of advice, though, is to write
if (CONSTANT.equals(myString))
rather than
if (myString.equals(CONSTANT))
because it helps avoiding NullPointerExceptions.
My advice would be to ask for a justification of the rule. If there's none, why follow it? It doesn't help readability.
No performance difference - the reason is that if you get used to writing (null == somevar) instead of (somevar == null), then you'll never accidentally use a single equals sign instead of two, because the compiler won't allow it, where it will allow (somevar = null). They're just extending this to != to keep it consistent.
I personally prefer (somevar == null) myself, but I see where they're coming from.
It's a "left-over" from old C-coding standards.
the expression if (var = null) would compile without problems. But it would actually assign the value null to the variable thus doing something completely different. This was the source for very annoying bugs in C programs.
In Java that expression does not compile and thus it's more a tradition than anything else. It doesn't erver any purpose (other than coding style preferences)
This has nothing to do with performance. It's meant to prevent accidentally assigning instead of comparing. An assignment null = var won't make any sense, but in Java var = null inside an if won't compile either (except for a Boolean), so the rule of turning them around doesn't make sense anymore and only makes the code less readable.