I have to compare two Integer objects (not int). What is the canonical way to compare them?
Integer x = ...
Integer y = ...
I can think of this:
if (x == y)
The == operator only compares references, so this will only work for small integer values, where the boxed objects come from the Integer cache. But perhaps auto-boxing kicks in...?
if (x.equals(y))
This looks like an expensive operation. Are there any hash codes calculated this way?
if (x.intValue() == y.intValue())
A little bit verbose...
EDIT: Thank you for your responses. Although I know what to do now, the facts are distributed across all of the existing answers (even the deleted ones :)) and I don't really know which one to accept. So I'll accept the best answer that refers to all three comparison possibilities, or at least the first two.
This is what the equals method does:
public boolean equals(Object obj) {
    if (obj instanceof Integer) {
        return value == ((Integer) obj).intValue();
    }
    return false;
}
As you can see, there's no hash code calculation, but there are a few other operations taking place there. Although x.intValue() == y.intValue() might be slightly faster, you're getting into micro-optimization territory there. Plus the compiler might optimize the equals() call anyway, though I don't know that for certain.
I generally would use the primitive int, but if I had to use Integer, I would stick with equals().
Use the equals method. Why are you so worried that it's expensive?
if (x.equals(y))
This looks like an expensive operation. Are there any hash codes calculated this way?
It is not an expensive operation, and no hash codes are calculated. Java does not magically calculate hash codes; equals(...) is just a method call, no different from any other method call.
The JVM will most likely even optimize the method call away (inlining the comparison that takes place inside the method), so this call is not much more expensive than using == on two primitive int values.
Note: Don't prematurely apply micro-optimizations; your assumptions like "this must be slow" are most likely wrong or don't matter, because the code isn't a performance bottleneck.
Minor note: since Java 1.7 the Integer class has a static compare(int, int) method, so (thanks to auto-unboxing) you can just call Integer.compare(x, y) and be done with it (questions about optimization aside).
Of course that code is incompatible with versions of Java before 1.7, so I would recommend using x.compareTo(y) instead, which is compatible back to 1.2.
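To make the options concrete, here is a minimal sketch of the idioms discussed above (the value 1000 is purely illustrative):

Integer x = 1000;
Integer y = 1000;
System.out.println(x.equals(y));                  // true: compares the wrapped values
System.out.println(x.intValue() == y.intValue()); // true: explicit unboxing
System.out.println(x.compareTo(y) == 0);          // true: ordering used for equality (Java 1.2+)
System.out.println(Integer.compare(x, y) == 0);   // true: the arguments auto-unbox (Java 7+)
System.out.println(x == y);                       // false on typical JDKs: 1000 lies outside the cache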
I would go with x.equals(y) because that's the consistent way to check equality for all classes.
As far as performance goes, equals is actually more expensive, because it ends up calling intValue().
EDIT: You should avoid autoboxing in most cases. It can get really confusing, especially when the author doesn't know what he was doing. You can try this code and you will be surprised by the result:
Integer a = 128;
Integer b = 128;
System.out.println(a == b); // false: 128 lies just outside the cached range -128..127
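For contrast, values inside the cached range compare reference-equal (a sketch; on HotSpot the cache's upper bound can even be raised with -XX:AutoBoxCacheMax, so never rely on == for boxed values):

Integer c = 127;
Integer d = 127;
System.out.println(c == d);      // true on the stock JDK: both references come from the Integer cache
System.out.println(c.equals(d)); // true always: value comparison is the reliable check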
To compare Integers and print them in ascending or descending order, implement the Comparator interface and override its compare method, comparing the values as below:
@Override
public int compare(Integer o1, Integer o2) {
    // Integer.compare avoids the overflow that o1 - o2 can suffer
    return ascending ? Integer.compare(o1, o2) : Integer.compare(o2, o1);
}
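A minimal usage sketch, with a lambda standing in for the Comparator implementation above (the list contents and the ascending flag are illustrative; assumes java.util imports):

List<Integer> values = new ArrayList<>(Arrays.asList(5, 1, 3));
boolean ascending = true;
values.sort((o1, o2) -> ascending ? Integer.compare(o1, o2) : Integer.compare(o2, o1));
System.out.println(values); // [1, 3, 5]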
"equals" is it. To be on the safe side, you should test for null-ness:
x == y || (x != null && x.equals(y))
The x == y tests for null == null, which IMHO should be true.
The code will be inlined by the JIT if it is called often enough, so performance considerations should not matter.
Of course, avoiding "Integer" in favor of plain "int" is the best way, if you can.
[Added]
Also, the null-check is needed to guarantee that the equality test is symmetric -- x.equals(y) should be the same as y.equals(x), but isn't if one of them is null.
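Since Java 7 this exact pattern ships in the standard library as java.util.Objects.equals, so the null-safe check can be a one-liner:

if (Objects.equals(x, y)) {
    // equal values, or both null
}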
The Integer class implements Comparable<Integer>, so you could try,
x.compareTo(y) == 0
Also, if rather than equality you are looking to compare these integers, then:
x.compareTo(y) < 0 will tell you if x is less than y.
x.compareTo(y) > 0 will tell you if x is greater than y.
Of course, it would be wise, in these examples, to ensure that x is non-null before making these calls.
I just encountered this in my code and it took me a while to figure it out. I was doing an intersection of two sorted lists and was only getting small numbers in my output. I could get it to work by using (x - y == 0) instead of (x == y) during the comparison (the subtraction forces unboxing, so the comparison happens on primitive int values; beware of overflow for extreme values, though).
Related
I understand the difference between these two terms and which methods you would use to check whether two objects have the same reference or the same value. My question is: when would you ever have to check if two objects have the same reference, as opposed to checking if they have the same content or value? (There has never been a time where I've had to check if two objects have the same reference.)
That's not really "instead of" the equality check, but you could do the reference check before you do the equality check for performance and null-safety reasons.
In effect, this is what happens when you call Objects.equals(a,b) instead of a.equals(b).
public static boolean equals(Object a, Object b) {
    return (a == b) || (a != null && a.equals(b));
}
Well, a number of situations arise off the top of my head. The first is just in debugging, to make sure you are not creating multiple instances or adding the same content to more than one variable. The second might be in game design/game modding (Minecraft, for example), to check whether it's the same block being placed somewhere. The third is when working with APIs, where it can be used a lot. Besides, even if you don't ever use this feature, it's always nice to know you have it :).
There has never been a time where I've had to check if two objects have the reference
That means you have never overridden an equals() method (or at least not properly).
Reference equality is part of object equality: when both references being compared point to the same object, you can short-circuit the check just by using == and skip any further comparison if it returns true.
public boolean equals(Object anObject) {
    if (this == anObject) {
        return true;
    }
    // ...
}
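As a minimal sketch of the pattern in full (the Point class and its fields are hypothetical, purely for illustration):

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object anObject) {
        if (this == anObject) {
            return true; // reference check as a fast path
        }
        if (!(anObject instanceof Point)) {
            return false;
        }
        Point other = (Point) anObject;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y; // keep hashCode consistent with equals
    }
}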
Your point is right: you need the == operator for comparing primitives most of the time. There is another case where you might compare the references of your objects: say you have shuffled a list and want to find a specific object again, so you check object references. But that's a trivial case in general.
Imagine now a world where Java supported operator overloading. The answer to your question would have been totally different. Let me remind you why Java does not support operator overloading:
I left out operator overloading as a fairly personal choice because I had seen too many people abuse it in C++.
James Gosling. Source:
http://www.gotw.ca/publications/c_family_interview.htm
I recently saw a discussion in an SO chat, but with no clear conclusions, so I ended up asking here.
Is this for historical reasons or for consistency with other languages? Looking at the signatures of compareTo in various languages, it always returns an int.
Why doesn't it return an enum instead? For example, in C# we could do:
enum CompareResult {LessThan, Equals, GreaterThan};
and :
public CompareResult CompareTo(Employee other) {
if (this.Salary < other.Salary) {
return CompareResult.LessThan;
}
if (this.Salary == other.Salary){
return CompareResult.Equals;
}
return CompareResult.GreaterThan;
}
In Java, enums were introduced after this concept (I don't remember about C#), but it could have been solved by an extra class such as:
public final class CompareResult {
    public static final CompareResult LESS_THAN = new CompareResult();
    public static final CompareResult EQUALS = new CompareResult();
    public static final CompareResult GREATER_THAN = new CompareResult();

    private CompareResult() {}
}
and
interface Comparable<T> {
    CompareResult compareTo(T obj);
}
I'm asking this because I don't think an int represents well the semantics of the data.
For example in C#,
l.Sort(delegate(int x, int y)
{
return Math.Min(x, y);
});
and its twin in Java 8,
l.sort(Integer::min);
both compile, because Min/min respects the contract of the comparator interface (take two ints and return an int).
Obviously the results in both cases are not the ones expected. If the return type were CompareResult, it would have caused a compile error, thus forcing you to implement a "correct" behavior (or at least making you aware of what you are doing).
A lot of semantics is lost with this return type (and it can potentially cause bugs that are difficult to find), so why design it like this?
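To see the Java pitfall concretely, here is a small self-contained demo (hypothetical, not from the original post):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MinAsComparatorDemo {
    public static void main(String[] args) {
        List<Integer> l = new ArrayList<>(Arrays.asList(3, 1, 2));
        // Integer::min fits the Comparator<Integer> shape (two ints in, int out),
        // so this compiles, but min() is not a comparison function...
        l.sort(Integer::min);
        // ...for positive values min() is always > 0, which the sort reads as
        // "left > right" for every pair, so the result is not sorted ascending.
        System.out.println(l);
    }
}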
[This answer is for C#, but it probably also applies to Java to some extent.]
This is for historical, performance and readability reasons. It potentially increases performance in two places:
Where the comparison is implemented. Often you can just return "(lhs - rhs)" (if the values are numeric types). But this can be dangerous: See below!
The calling code can use <= and >= to naturally represent the corresponding comparison. This will use a single IL (and hence processor) instruction compared to using the enum (although there is a way to avoid the overhead of the enum, as described below).
For example, we can check if a lhs value is less than or equal to a rhs value as follows:
if (lhs.CompareTo(rhs) <= 0)
...
Using an enum, that would look like this:
if (lhs.CompareTo(rhs) == CompareResult.LessThan ||
lhs.CompareTo(rhs) == CompareResult.Equals)
...
That is clearly less readable and is also inefficient since it is doing the comparison twice. You might fix the inefficiency by using a temporary result:
var compareResult = lhs.CompareTo(rhs);
if (compareResult == CompareResult.LessThan || compareResult == CompareResult.Equals)
...
It's still a lot less readable IMO - and it's still less efficient since it's doing two comparison operations instead of one (although I freely admit that it is likely that such a performance difference will rarely matter).
As raznagul points out below, you can actually do it with just one comparison:
if (lhs.CompareTo(rhs) != CompareResult.GreaterThan)
...
So you can make it fairly efficient - but of course, readability still suffers. ... != GreaterThan is not as clear as ... <=
(And if you use the enum, you can't avoid the overhead of turning the result of a comparison into an enum value, of course.)
So this is primarily done for reasons of readability, but also to some extent for reasons of efficiency.
Finally, as others have mentioned, this is also done for historical reasons. Functions like C's strcmp() and memcmp() have always returned ints.
Assembler compare instructions also tend to be used in a similar way.
For example, to compare two integers in x86 assembler, you can do something like this:
CMP AX, BX ;
JLE lessThanOrEqual ; jump to lessThanOrEqual if AX <= BX
or
CMP AX, BX
JG greaterThan ; jump to greaterThan if AX > BX
or
CMP AX, BX
JE equal ; jump to equal if AX == BX
You can see the obvious comparisons with the return value from CompareTo().
Addendum:
Here's an example which shows that it's not always safe to use the trick of subtracting the rhs from the lhs to get the comparison result:
int lhs = int.MaxValue - 10;
int rhs = int.MinValue + 10;
// Since lhs > rhs, we expect (lhs-rhs) to be +ve, but:
Console.WriteLine(lhs - rhs); // Prints -21: WRONG!
Obviously this is because the arithmetic has overflowed. If you had "checked" arithmetic turned on for the build, the code above would in fact throw an exception.
For this reason, the optimization of using subtraction to implement comparison is best avoided. (See comments from Eric Lippert below.)
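The same trap exists in Java's subtraction-based comparators; a minimal sketch showing why Integer.compare (Java 7+) is the safe replacement:

int lhs = Integer.MAX_VALUE - 10;
int rhs = Integer.MIN_VALUE + 10;
// lhs > rhs, so a correct comparison must yield a positive value, but the subtraction wraps:
System.out.println(lhs - rhs);                 // -21: overflow, wrong sign
System.out.println(Integer.compare(lhs, rhs)); // 1: correct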
Let's stick to bare facts, with an absolute minimum of handwaving and/or unnecessary/irrelevant/implementation-dependent details.
As you already figured out yourself, compareTo is as old as Java (Since: JDK1.0, from the Integer JavaDoc); Java 1.0 was designed to be familiar to C/C++ developers and mimicked a lot of their design choices, for better or worse. Also, Java has a backwards-compatibility policy - thus, once a method is implemented in the core lib, it is almost bound to stay there forever.
As to C/C++: strcmp/memcmp, which have existed for as long as string.h, so essentially as long as the C standard library, return exactly the same values (or rather, compareTo returns the same values as strcmp/memcmp) - see e.g. C ref - strcmp. At the time of Java's inception, going that way was the logical thing to do. There weren't any enums in Java at that time, no generics, etc. (all that came in >= 1.5).
The very decision on the return values of strcmp is quite obvious - first and foremost, a comparison has 3 basic outcomes, so selecting a positive value for "bigger", a negative one for "smaller", and 0 for "equal" was the logical thing to do. Also, as pointed out, you can get the value easily by subtraction, and returning int allows you to use it easily in further calculations (in the traditional C type-unsafe way), while also allowing an efficient single-op implementation.
If you need/want to use your enum-based typesafe comparison interface, you're free to do so, but since the convention of strcmp returning positive/zero/negative is as old as contemporary programming, it actually does convey semantic meaning, in the same way null can be interpreted as an unknown/invalid value, or an out-of-bounds int value (e.g. a negative number supplied for a positive-only quantity) can be interpreted as an error code. Maybe it's not the best coding practice, but it certainly has its pros, and is still commonly used, e.g. in C.
On the other hand, asking "why does the standard library of language XYZ conform to legacy standards of language ABC" is itself moot, as it can only be accurately answered by the very language designers who implemented it.
TL;DR it's that way mainly because it was done that way in legacy versions for legacy reasons and POLA for C programmers, and is kept that way for backwards-compatibility & POLA, again.
As a side note, I consider this question (in its current form) too broad to be answered precisely, highly opinion-based, and borderline off-topic on SO due to directly asking about Design Patterns & Language Architecture.
This practice comes from comparing integers this way, and from using a subtraction between the first non-matching characters of two strings.
Note that this practice is dangerous with things that are only partially comparable while using -1 to mean that a pair of things was incomparable: it could create a situation where a < b and b < a (which the application might use to define "incomparable"). Such a situation can lead to loops that don't terminate correctly.
An enumeration with values {lt,eq,gt,incomparable} would be more correct.
My understanding is that this is done because you can order the results (i.e., the relation is total and transitive). For example, if you have three objects (A, B, C) you can compare A to B and B to C, and use the resulting values to order them properly. The contract implies that if A.compareTo(B) < 0 and B.compareTo(C) < 0, then A.compareTo(C) < 0.
See Java's Comparator documentation.
This is due to performance reasons.
If you need to compare ints, as often happens, you can simply return their difference.
In fact, comparisons are often implemented as subtractions.
As an example:
public class MyComparable implements Comparable<MyComparable> {
    public int num;

    public int compareTo(MyComparable x) {
        // beware: this subtraction can overflow when the operands have
        // opposite signs and large magnitudes (see the overflow example above)
        return num - x.num;
    }
}
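A sketch of the overflow-safe equivalent, replacing the subtraction trick with an explicit comparison (Java 7+):

public int compareTo(MyComparable x) {
    return Integer.compare(num, x.num); // correct sign for all int inputs
}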
Consider:
int a = 42;
// Reference equality on two boxed ints with the same value
Console.WriteLine( (object)a == (object)a ); // False
// Same thing - listed only for clarity
Console.WriteLine(ReferenceEquals(a, a)); // False
Clearly, each boxing instruction allocates a separate instance of a boxed Int32, which is why reference-equality between them fails. This page appears to indicate that this is specified behaviour:
The box instruction converts the 'raw' (unboxed) value type into an object reference (type O). This is accomplished by creating a new object and copying the data from the value type into the newly allocated object.
But why does this have to be the case?
Is there any compelling reason why the CLR does not choose to hold a "cache" of boxed Int32s, or even stronger, common values for all primitive value-types (which are all immutable)? I know Java has something like this.
In the days of no generics, wouldn't it have helped a lot with reducing the memory requirements, as well as the GC workload, for a large ArrayList consisting mainly of small integers? I'm also sure that there exist several modern .NET applications that do use generics but, for whatever reason (reflection, interface assignments, etc.), run up large boxing allocations that could be massively reduced with (what appears to be) a simple optimization.
So what's the reason? Some performance implication I haven't considered (I doubt if testing that the item is in the cache etc. will result in a net performance loss, but what do I know)? Implementation difficulties? Issues with unsafe code? Breaking backwards compatibility (I can't think of any good reason why a well-written program should rely on the existing behaviour)? Or something else?
EDIT: What I was really suggesting was a static cache of "commonly-occurring" primitives, much like what Java does. For an example implementation, see Jon Skeet's answer. I understand that doing this for arbitrary, possibly mutable, value-types or dynamically "memoizing" instances at run-time is a completely different matter.
EDIT: Changed title for clarity.
One reason which I find compelling is consistency. As you say, Java does cache boxed values in a certain range... which means it's all too easy to write code which works for a while:
// Passes in all my tests. Shame it fails if they're > 127...
if (value1 == value2) {
// Do something
}
I've been bitten by this - admittedly in a test rather than production code, fortunately, but it's still nasty to have something which changes behaviour significantly outside a given range.
Don't forget that any conditional behaviour also incurs a cost on all boxing operations - so in cases where it wouldn't use the cache, you'd actually find that it was slower (because it would first have to check whether or not to use the cache).
If you really want to write your own caching box operation, of course, you can do so:
public static class Int32Extensions
{
    private static readonly object[] BoxedIntegers = CreateCache();

    private static object[] CreateCache()
    {
        object[] ret = new object[256];
        for (int i = -128; i < 128; i++)
        {
            ret[i + 128] = i;
        }
        return ret;
    }

    public static object Box(this int i)
    {
        return (i >= -128 && i < 128) ? BoxedIntegers[i + 128] : (object) i;
    }
}
Then use it like this:
object y = 100.Box();
object z = 100.Box();
if (y == z)
{
// Cache is working
}
I can't claim to be able to read minds, but here are a couple of factors:
1) Caching the value types can make for unpredictability - comparing two boxed values that are equal could be reference-equal or not, depending on cache hits and implementation. Ouch!
2) The lifetime of a boxed value type is most likely short - so how long do you hold the value in the cache? Now you either have a lot of cached values that will no longer be used, or you need to make the GC implementation more complicated to track the lifetime of cached value types.
With these downsides, what is the potential win? A smaller memory footprint in an application that does a lot of long-lived boxing of equal value types. Since this win is something that is going to affect a small number of applications and can be worked around by changing code, I'm going to agree with the C# spec writers' decisions here.
Boxed value objects are not necessarily immutable. It is possible to change the value in a boxed value type, such as through an interface.
So if boxing a value type always returned the same instance based on the same original value, it would create references which may not be appropriate (for example, two different value type instances which happen to have the same value end up with the same reference even though they should not).
public interface IBoxed
{
    int X { get; set; }
    int Y { get; set; }
}

public struct BoxMe : IBoxed
{
    public int X { get; set; }
    public int Y { get; set; }
}

public static void Test()
{
    BoxMe original = new BoxMe()
    {
        X = 1,
        Y = 2
    };
    object boxed1 = (object) original;
    object boxed2 = (object) original;
    ((IBoxed) boxed1).X = 3;
    ((IBoxed) boxed1).Y = 4;
    Console.WriteLine("original.X = " + original.X);
    Console.WriteLine("original.Y = " + original.Y);
    Console.WriteLine("boxed1.X = " + ((IBoxed) boxed1).X);
    Console.WriteLine("boxed1.Y = " + ((IBoxed) boxed1).Y);
    Console.WriteLine("boxed2.X = " + ((IBoxed) boxed2).X);
    Console.WriteLine("boxed2.Y = " + ((IBoxed) boxed2).Y);
}
Produces this output:
original.X = 1
original.Y = 2
boxed1.X = 3
boxed1.Y = 4
boxed2.X = 1
boxed2.Y = 2
If boxing didn't create a new instance, then boxed1 and boxed2 would have the same values, which would be inappropriate if they were created from different original value type instance.
There's an easy explanation for this: un/boxing is fast. It needed to be back in the .NET 1.x days. After the JIT compiler generates the machine code for it, there's but a handful of CPU instructions generated for it, all inline without method calls. Not counting corner cases like nullable types and large structs.
The effort of looking up a cached value would greatly diminish the speed of this code.
I wouldn't think a run-time-filled cache would be a good idea, but I would think it might be reasonable on 64-bit systems to define ~8 billion of the 64 quintillion possible object-reference values as being integer or float literals, and on any system to pre-box all primitive literals. Testing whether the upper 31 bits of a reference hold some value should probably be cheaper than a memory reference.
Adding to the answers already listed is the fact that in .NET, at least with the normal garbage collector, object references are internally stored as direct pointers. This means that when a garbage collection is performed the system has to update every single reference to every object that gets moved, but it also means that "main-line" operation can be very fast. If object references were sometimes direct pointers and sometimes something else, this would require extra code every time an object is dereferenced. Since object dereferencing is one of the most common operations during the execution of a .NET program, even a 5% slowdown here would be devastating unless it was matched by an awesome speedup.
It's possible, for example, that a "64-bit compact" model, in which each object reference was a 32-bit index into an object table, might offer better performance than the existing model in which each reference is a 64-bit direct pointer. Dereferencing operations would require an extra table lookup, which would be bad, but object references would be smaller, thus allowing more of them to be stored in the cache at once. In some circumstances, that could be a major performance win (maybe often enough to be worthwhile - maybe not). It's unclear, though, that allowing an object reference to sometimes be a direct memory pointer and sometimes be something else would really offer much advantage.
I find that I often make a recursive call just to reorder arguments.
For example, here's my solution for endOther from codingbat.com:
Given two strings, return true if either of the strings appears at the very end of the other string, ignoring upper/lower case differences (in other words, the computation should not be "case sensitive"). Note: str.toLowerCase() returns the lowercase version of a string.
public boolean endOther(String a, String b) {
return a.length() < b.length() ? endOther(b, a)
: a.toLowerCase().endsWith(b.toLowerCase());
}
I'm very comfortable with recursions, but I can certainly understand why some perhaps would object to it.
There are two obvious alternatives to this recursion technique:
Swap a and b traditionally
public boolean endOther(String a, String b) {
if (a.length() < b.length()) {
String t = a;
a = b;
b = t;
}
return a.toLowerCase().endsWith(b.toLowerCase());
}
Not convenient in a language like Java that doesn't pass by reference
Lots of code just to do a simple operation
An extra if statement breaks the "flow"
Repeat code
public boolean endOther(String a, String b) {
return (a.length() < b.length())
? b.toLowerCase().endsWith(a.toLowerCase())
: a.toLowerCase().endsWith(b.toLowerCase());
}
Explicit symmetry may be a nice thing (or not?)
Bad idea unless the repeated code is very simple
...though in this case you can get rid of the ternary and just || the two expressions (see the sketch after this list)
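A minimal sketch of that ||-based variant, which sidesteps the argument reordering entirely:

public boolean endOther(String a, String b) {
    // the check is symmetric, so no swap or recursion is needed
    return a.toLowerCase().endsWith(b.toLowerCase())
            || b.toLowerCase().endsWith(a.toLowerCase());
}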
So my questions are:
Is there a name for these 3 techniques? (Are there more?)
Is there a name for what they achieve? (e.g. "parameter normalization", perhaps?)
Are there official recommendations on which technique to use (when)?
What are other pros/cons that I may have missed?
Another example
To focus the discussion more on the technique rather than the particular codingbat problem, here's another example where I feel that the recursion is much more elegant than a bunch of if-else's, swaps, or repetitive code.
// sorts 3 values and return as array
static int[] sort3(int a, int b, int c) {
return
(a > b) ? sort3(b, a, c) :
(b > c) ? sort3(a, c, b) :
new int[] { a, b, c };
}
Recursion and ternary operators don't bother me as much as it bothers some people; I honestly believe the above code is the best pure Java solution one can possibly write. Feel free to show me otherwise.
Let’s first establish that code duplication is usually a bad idea.
So whatever solution we take, the logic of the method should only be written once, and we need a means of swapping the arguments around that does not interfere with the logic.
I see three general solutions to that:
Your first recursion (either using if or the conditional operator).
swap – which, in Java, is a problem, but might be appropriate in other languages.
Two separate methods (as in #Ha’s solution) where one acts as the implementation of the logic and the other as the interface, in this case to sort out the parameters.
I don’t know which of these solutions is objectively the best. However, I have noticed that there are certain algorithms for which (1) is generally accepted as the idiomatic solution, e.g. Euclid’s algorithm for calculating the GCD of two numbers.
I am generally averse to the swap solution (2) since it adds an extra call which doesn’t really do anything in connection with the algorithm. Now, technically this isn’t a problem – I doubt that it would be less efficient than (1) or (3) using any decent compiler. But it adds a mental speed-bump.
Solution (3) strikes me as over-engineered although I cannot think of any criticism except that it’s more text to read. Generally, I don’t like the extra indirection introduced by any method suffixed with “Impl”.
In conclusion, I would probably prefer (1) for most cases although I have in fact used (3) in similar circumstances.
Another +1 for "In any case, my recommendation would be to do as little in each statement as possible. The more things that you do in a single statement, the more confusing it will be for others who need to maintain your code."
Sorry, but your code:
// sorts 3 values and return as array
static int[] sort3(int a, int b, int c) {
return
(a > b) ? sort3(b, a, c) :
(b > c) ? sort3(a, c, b) :
new int[] { a, b, c };
}
It's perhaps, for you, the best "pure Java code", but for me it's the worst... unreadable code. If we don't have the method name or the comment, we just can't tell at first sight what it's doing...
Hard-to-read code should only be used where high performance is needed (though many performance problems are really due to bad architecture...). If you HAVE TO write such code, the least you can do is provide good Javadoc and unit tests... we developers often don't care about the implementation of such methods if we just have to use them rather than rework them... but since the first sight doesn't tell us what it does, we have to trust that it works as we expect, and we can lose time...
Recursive methods are OK when they are short, but I think a recursive method should be avoided if the algorithm is complex and there's another way to do it in about the same computation time... particularly if other people will probably work on this method.
For your example it's OK, since it's a short method, but anyway, if you're just not concerned with performance you could have used something like this:
// sorts any number of Integer values (assumes java.util imports)
public static Integer[] sort(Integer... intValues) {
    List<Integer> list = new ArrayList<Integer>();
    for (Integer i : intValues) {
        list.add(i);
    }
    Collections.sort(list);
    return list.toArray(new Integer[0]);
}
A simple way to implement your method, easily readable by any Java >= 1.5 developer, and it works for 1 to n integers...
Not the fastest, but anyway, if it's just about speed, use C++ or asm :)
For this particular example, I wouldn't use anything you suggested... I would instead write:
public boolean endOther(String a, String b) {
    String alower = a.toLowerCase();
    String blower = b.toLowerCase();
    if (a.length() < b.length()) {
        return blower.endsWith(alower);
    } else {
        return alower.endsWith(blower);
    }
}
While the ternary operator does have its place, the if statement is often more intelligible, especially when the operands are fairly complex. In addition, if you repeat code in different branches of an if statement, they will only be evaluated in the branch that is taken (in many programming languages, both operands of the ternary operator are evaluated no matter which branch is selected). While, as you have pointed out, this is not a concern in Java, many programmers have used a variety of languages and might not remember this level of detail, and so it is best to use the ternary operator only with simple operands.
One frequently hears of "recursive" vs. "iterative"/"non-recursive" implementations. I have not heard of any particular names for the various options that you have given.
In any case, my recommendation would be to do as little in each statement as possible. The more things that you do in a single statement, the more confusing it will be for others who need to maintain your code.
In terms of your complaint about repetition... if there are several lines being repeated, then it is time to create a "helper" function that does that part. Function composition is there to reduce repetition. Swapping just doesn't make any sense, since it takes more effort to swap than to simply repeat... also, if code later in the function uses the parameters, the parameters now mean different things than they used to.
EDIT
My argument vis-a-vis the ternary operator was not a valid one... the vast majority of programming languages use lazy evaluation with the ternary operator (I was thinking of Verilog at the time of writing, which is a hardware description language (HDL) in which both branches are evaluated in parallel). That said, there are valid reasons to avoid using complicated expressions in ternary operators; for example, with an if...else statement, it is possible to set a breakpoint on one of the conditional branches whereas, with the ternary operator, both branches are part of the same statement, so most debuggers won't split on them.
It is slightly better to use another method instead of recursion
public boolean endOther(String a, String b) {
    return a.length() < b.length() ? endOtherImpl(b, a) : endOtherImpl(a, b);
}

protected boolean endOtherImpl(String longStr, String shortStr) {
    return longStr.toLowerCase().endsWith(shortStr.toLowerCase());
}
Is there any justifiable reason in Java to write something like
Long l = new Long(SOME_CONSTANT)
This creates an extra object and is tagged by FindBugs, and is obviously a bad practice. My question is whether there is ever a good reason to do so?
I previously asked this about String constructors and got a good answer, but that answer doesn't seem to apply to numbers.
Only if you want to make sure you get a unique instance, so practically never.
Some numbers can be cached when autoboxed (although Longs aren't guaranteed to be), which might cause problems. But any code that would break because of caching probably has deeper issues. Right now, I can't think of a single valid case for it.
My question is whether there is ever a good reason to do so?
You might still use it if you want to write code compatible with older JREs. valueOf(long) was only introduced in Java 1.5, so in Java 1.4 and before the constructor was the only way to go directly from a long to a Long. I expect it isn't deprecated because the constructor is still used internally.
The only thing I can think of is to make the boxing explicit, although the equivalent autoboxed code is actually compiled into Long.valueOf(SOME_CONSTANT), which can cache small values (from the JDK source):
public static Long valueOf(long l) {
    final int offset = 128;
    if (l >= -128 && l <= 127) { // will cache
        return LongCache.cache[(int) l + offset];
    }
    return new Long(l);
}
It's not a big deal, but I dislike seeing code that continually boxes and unboxes without regard for type, which can get sloppy.
Functionally, though, I can't see a difference one way or the other. The new Long will still be equals() to the autoboxed one and have an equal hashCode, so I can't see how you could even make a functional distinction if you wanted to.
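For illustration, a small demo of the only observable difference (the -128..127 cache used by Long.valueOf is a JDK implementation detail, not a JLS guarantee for long):

Long a = Long.valueOf(100L);
Long b = Long.valueOf(100L);
System.out.println(a == b);                           // true on the stock JDK: same cached instance
System.out.println(new Long(100L) == new Long(100L)); // false: the constructor always allocates
System.out.println(new Long(100L).equals(Long.valueOf(100L))); // true: value equality is unaffected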