Why does compareTo return an integer - java

I recently saw a discussion in an SO chat room, but with no clear conclusions, so I ended up asking here.
Is this for historical reasons or for consistency with other languages? Looking at the signatures of compareTo in various languages, it always returns an int.
Why doesn't it return an enum instead? For example, in C# we could do:
enum CompareResult {LessThan, Equals, GreaterThan};
and:
public CompareResult CompareTo(Employee other) {
    if (this.Salary < other.Salary) {
        return CompareResult.LessThan;
    }
    if (this.Salary == other.Salary) {
        return CompareResult.Equals;
    }
    return CompareResult.GreaterThan;
}
In Java, enums were introduced only later (I don't remember about C#), but the same thing could have been achieved with an extra class such as:
public final class CompareResult {
    public static final CompareResult LESS_THAN = new CompareResult();
    public static final CompareResult EQUALS = new CompareResult();
    public static final CompareResult GREATER_THAN = new CompareResult();
    private CompareResult() {}
}
and
interface Comparable<T> {
    CompareResult compareTo(T obj);
}
I'm asking this because I don't think an int represents the semantics of the data well.
For example in C#,
l.Sort(delegate(int x, int y)
{
return Math.Min(x, y);
});
and its twin in Java 8,
l.sort(Integer::min);
both compile, because Min/min matches the signature required by the comparator interface (take two ints, return an int).
Obviously the results in both cases are not the ones expected. If the return type were CompareResult, this would have caused a compile error, forcing you to implement a "correct" behavior (or at least making you aware of what you are doing).
A lot of semantics is lost with this return type (and that can potentially cause bugs that are difficult to find), so why was it designed like this?
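To make the pitfall concrete, here is a minimal Java sketch (list contents are arbitrary) contrasting the accidental Integer::min "comparator" with a correct one:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ComparatorPitfall {
    public static void main(String[] args) {
        List<Integer> l = new ArrayList<>(Arrays.asList(3, 1, 2));

        // Compiles, because min(int, int) happens to match the comparator's shape,
        // but the returned value has nothing to do with "negative/zero/positive":
        l.sort(Integer::min);
        System.out.println(l); // not the sorted order (larger lists may even throw)

        // Correct: Integer.compare returns a negative, zero or positive int,
        // exactly as the Comparator contract requires.
        l.sort(Integer::compare);
        System.out.println(l); // [1, 2, 3]
    }
}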

[This answer is for C#, but it probably also applies to Java to some extent.]
This is for historical, performance and readability reasons. It potentially increases performance in two places:
Where the comparison is implemented. Often you can just return "(lhs - rhs)" (if the values are numeric types). But this can be dangerous: See below!
The calling code can use <= and >= to naturally represent the corresponding comparison. This will use a single IL (and hence processor) instruction compared to using the enum (although there is a way to avoid the overhead of the enum, as described below).
For example, we can check if a lhs value is less than or equal to a rhs value as follows:
if (lhs.CompareTo(rhs) <= 0)
...
Using an enum, that would look like this:
if (lhs.CompareTo(rhs) == CompareResult.LessThan ||
lhs.CompareTo(rhs) == CompareResult.Equals)
...
That is clearly less readable and is also inefficient since it is doing the comparison twice. You might fix the inefficiency by using a temporary result:
var compareResult = lhs.CompareTo(rhs);
if (compareResult == CompareResult.LessThan || compareResult == CompareResult.Equals)
...
It's still a lot less readable IMO - and it's still less efficient since it's doing two comparison operations instead of one (although I freely admit that it is likely that such a performance difference will rarely matter).
As raznagul points out below, you can actually do it with just one comparison:
if (lhs.CompareTo(rhs) != CompareResult.GreaterThan)
...
So you can make it fairly efficient - but of course, readability still suffers. ... != GreaterThan is not as clear as ... <=
(And if you use the enum, you can't avoid the overhead of turning the result of a comparison into an enum value, of course.)
So this is primarily done for reasons of readability, but also to some extent for reasons of efficiency.
Finally, as others have mentioned, this is also done for historical reasons. Functions like C's strcmp() and memcmp() have always returned ints.
Assembler compare instructions also tend to be used in a similar way.
For example, to compare two integers in x86 assembler, you can do something like this:
CMP AX, BX ;
JLE lessThanOrEqual ; jump to lessThanOrEqual if AX <= BX
or
CMP AX, BX
JG greaterThan ; jump to greaterThan if AX > BX
or
CMP AX, BX
JE equal ; jump to equal if AX == BX
You can see the obvious parallels between these and the return value from CompareTo().
Addendum:
Here's an example which shows that it's not always safe to use the trick of subtracting the rhs from the lhs to get the comparison result:
int lhs = int.MaxValue - 10;
int rhs = int.MinValue + 10;
// Since lhs > rhs, we expect (lhs-rhs) to be +ve, but:
Console.WriteLine(lhs - rhs); // Prints -21: WRONG!
Obviously this is because the arithmetic has overflowed. If you had checked arithmetic turned on for the build (the C# "checked" option), the code above would in fact throw an exception.
For this reason, the optimization of using subtraction to implement comparison is best avoided. (See comments from Eric Lippert below.)
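The Java analogue of the safe approach is to skip the subtraction entirely; a minimal sketch using Integer.compare (available since Java 7):
int lhs = Integer.MAX_VALUE - 10;
int rhs = Integer.MIN_VALUE + 10;

System.out.println(lhs - rhs);                 // -21: overflow, wrong sign
System.out.println(Integer.compare(lhs, rhs)); // 1: correct, lhs > rhs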

Let's stick to bare facts, with an absolute minimum of handwaving and/or unnecessary/irrelevant/implementation-dependent details.
As you already figured out yourself, compareTo is as old as Java ("Since: JDK1.0" in the Integer JavaDoc); Java 1.0 was designed to be familiar to C/C++ developers, and mimicked a lot of its design choices, for better or worse. Also, Java has a backwards-compatibility policy - thus, once a method is implemented in the core library, it is almost bound to stay there forever.
As to C/C++ - strcmp/memcmp, which have existed for as long as string.h, so essentially as long as the C standard library, return exactly the same kind of values (or rather, compareTo returns the same kind of values as strcmp/memcmp) - see e.g. the C reference for strcmp. At the time of Java's inception, going that way was the logical thing to do. There were no enums in Java at that time, no generics, etc. (all of that came in 1.5 or later).
The choice of strcmp's return values is quite obvious - first and foremost, a comparison has 3 basic outcomes, so selecting a positive value for "bigger", a negative value for "smaller" and 0 for "equal" was the logical thing to do. Also, as pointed out, you can often obtain the value cheaply by subtraction, and returning an int allows it to be used easily in further calculations (in the traditional, type-unsafe C way), while also allowing an efficient single-operation implementation.
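For illustration, here is a rough Java sketch (a hypothetical helper, not taken from any library) of a strcmp-style comparison that returns the difference of the first mismatching characters - exactly the convention compareTo inherited:
// Negative if a < b, zero if equal, positive if a > b - strcmp-style.
static int strcmpLike(String a, String b) {
    int n = Math.min(a.length(), b.length());
    for (int i = 0; i < n; i++) {
        int diff = a.charAt(i) - b.charAt(i); // safe: char values are 16-bit, no overflow
        if (diff != 0) {
            return diff;
        }
    }
    return a.length() - b.length(); // the shorter string sorts first
}
The sign of the result can then feed directly into sorting or further arithmetic, which is precisely the flexibility the int return type buys.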
If you need/want to use your enum-based, typesafe comparison interface, you're free to do so, but since the convention of strcmp returning positive/zero/negative is as old as contemporary programming, it actually does convey semantic meaning, in the same way null can be interpreted as an unknown/invalid value, or an out-of-bounds int value (e.g. a negative number supplied for a positive-only quantity) can be interpreted as an error code. Maybe it's not the best coding practice, but it certainly has its pros, and is still commonly used, e.g. in C.
On the other hand, asking "why does the standard library of language XYZ conform to the legacy conventions of language ABC" is itself moot, as it can only be accurately answered by the very language designers who implemented it.
TL;DR it's that way mainly because it was done that way in legacy versions for legacy reasons and POLA for C programmers, and is kept that way for backwards-compatibility & POLA, again.
As a side note, I consider this question (in its current form) too broad to be answered precisely, highly opinion-based, and borderline off-topic on SO due to directly asking about Design Patterns & Language Architecture.

This practice comes from comparing integers this way, and from using a subtraction between the first non-matching characters of two strings.
Note that this practice is dangerous with things that are only partially comparable if -1 is used to mean that a pair of things is incomparable. This is because it can create a situation where a < b and b < a at the same time (which the application might use to define "incomparable"). Such a situation can lead to loops that don't terminate correctly.
An enumeration with values {lt,eq,gt,incomparable} would be more correct.
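A minimal Java sketch of what such a result type could look like (the names are illustrative, not from any standard API):
// Result of comparing two elements of a partially ordered set.
enum PartialOrdering {
    LESS_THAN,
    EQUAL,
    GREATER_THAN,
    INCOMPARABLE // explicit, instead of overloading -1
}

interface PartiallyComparable<T> {
    PartialOrdering comparePartially(T other);
}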

My understanding is that this is done because you can order the results (i.e., the operation is reflexive and transitive). For example, if you have three objects (A,B,C) you can compare A->B and B->C, and use the resulting values to order them properly. There is an implied assumption that if A.compareTo(B) == A.compareTo(C) then B==C.
See java's comparator documentation.

This is due to performance reasons.
If you need to compare ints, as often happens, you can simply return their difference; in fact, comparisons are often implemented as subtractions.
As an example:
public class MyComparable implements Comparable<MyComparable> {
    public int num;

    @Override
    public int compareTo(MyComparable x) {
        // Works only when the difference cannot overflow (see the overflow caveat in the answer above).
        return num - x.num;
    }
}
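As noted in the earlier answer, the subtraction trick overflows when the values have opposite signs and large magnitudes; an overflow-safe sketch of the same class would be:
public class MyComparable implements Comparable<MyComparable> {
    public int num;

    @Override
    public int compareTo(MyComparable x) {
        // Integer.compare (Java 7+) returns -1, 0 or 1 and can never overflow.
        return Integer.compare(num, x.num);
    }
}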

Related

Java 8 lambda and alpha equivalence

I am wondering, is there any nice way (if it is possible at all) to implement an alpha-equivalence comparison in Java 8?
Predicate<Integer> l1 = x -> x == 1;
Predicate<Integer> l2 = y -> y == 1;
Obviously these two lambdas are alpha-equivalent. Let us suppose that in some circumstances we want to detect this fact. How can it be achieved?
I'm going out on a limb with this answer, but it may be worth mentioning this:
There is no way to do this. As Brian Goetz pointed out in an answer to a related question, there are no specified, reliable ways of obtaining the "contents" of a lambda, in that sense.
But (and now comes the vague, handwaving part) :
There is no way to do this yet.
It might be possible to do this in the future. Maybe not with Java 9, but later. Project Panama has ambitious goals, among them giving the developer deeper access to lambdas, aiding in further (runtime) optimizations, translations and processing.
And recently, Radosław Smogura posted on the mailing list:
I try to capture lambda expression to get them as expression tree during runtime. I’m able to do it for simple lambdas like (o) -> (var == var) && ((varX == varX) && (someField + 1 == 1)), so later user can use (missing) API to examine tree.
Right now tree can be accessed with code like this:
Method m = BasicMatadataTest.class.getDeclaredMethod("lambda$meta0");
Expression e = (Expression) m.invoke(null);
BinaryExpression top = (BinaryExpression) e;
BinaryExpression vars = (BinaryExpression) top.getLefthandExpression(); // represents (var == var)
(VariableExpression) vars.getLefthandExpression() // represents first var, and it’s reference equal to vars.getRighthandExpression() as it’s same variable
...
The key point here may be the comment:
represents first var, and it’s reference equal to vars.getRighthandExpression() as it’s same variable
(emphasis by me)
So if I understood your question and this mailing list post correctly, then it might be possible to determine the equivalence between such expressions: Comparing the tree structure would be fairly trivial (given the functionality sketched above). And then it might boil down to treating two VariableExpression as being "equal", regardless of the actual variable name.
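To illustrate just the comparison idea, here is a self-contained sketch with a tiny stand-in expression model (defined below; this is not the experimental API from the post), showing how alpha-equivalence could be decided by comparing tree shape while ignoring variable names:
import java.util.Objects;

public class AlphaEquivalenceSketch {
    // Minimal stand-in expression model (NOT the experimental metaexpression API quoted above).
    static abstract class Expr {}
    static class Variable extends Expr { final String name; Variable(String name) { this.name = name; } }
    static class Constant extends Expr { final Object value; Constant(Object value) { this.value = value; } }
    static class Binary extends Expr {
        final String op; final Expr left; final Expr right;
        Binary(String op, Expr left, Expr right) { this.op = op; this.left = left; this.right = right; }
    }

    // Same shape and operators, variables in matching positions; variable names are ignored.
    static boolean alphaEquivalent(Expr a, Expr b) {
        if (a instanceof Variable && b instanceof Variable) {
            return true; // single-variable lambdas: the name doesn't matter
        }
        if (a instanceof Constant && b instanceof Constant) {
            return Objects.equals(((Constant) a).value, ((Constant) b).value);
        }
        if (a instanceof Binary && b instanceof Binary) {
            Binary x = (Binary) a, y = (Binary) b;
            return x.op.equals(y.op)
                && alphaEquivalent(x.left, y.left)
                && alphaEquivalent(x.right, y.right);
        }
        return false;
    }

    public static void main(String[] args) {
        // x -> x == 1   versus   y -> y == 1
        Expr l1 = new Binary("==", new Variable("x"), new Constant(1));
        Expr l2 = new Binary("==", new Variable("y"), new Constant(1));
        System.out.println(alphaEquivalent(l1, l2)); // true
    }
}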
The mailing list message points to the repositories:
https://bitbucket.org/radoslaw_smogura/java-lambda-metaexpression-jdk9-langtools/branch/metaexpr
https://bitbucket.org/radoslaw_smogura/java-lambda-metaexpresions-jdk/commits/branch/metaexpr
(Disclaimer: I have not tested this, and don't know how to get this running (or whether it works at all) - but to my understanding, it is at least very close to what the question was actually about).

Java not deterministic?

I've written a little predator-prey simulation in Java. Even though the rules are quite complicated and result in a chaotic system, the techniques used are simple:
arithmetics and decisions on basic data types
no external libraries
no external systems are included
no concurrency occurs
no use of current time or date
So I thought when initializing the system with identical parameters it should output identical results, but it doesn't and I wonder why.
Some thoughts on that:
My application uses Randoms, but for this test I initialize them all with a fixed seed, so to my understanding they should produce the same outputs in the same order on every run.
I'm iterating through Sets, and I know that the order a Set is iterated in isn't defined. But I don't see any reason why a Set that is filled in the same order with the same values should behave differently in several runs. Does it?
I'm using a lot of floats. Datatypes where 1 + 1 = 1.9999999999725 are always suspect to me, but even if their behavior seems strange, it should at least be the same strangeness every time. Shouldn't it?
Garbage Collection isn't deterministic, but as long as I don't rely on destructors I should be safe.
As said above, there is no concurrency and no datatypes depending on the actual time in use.
I can't reproduce that behavior in a simple example. But going through my code, I can't see anything that could be unpredictable. So are any of my assumptions above wrong? Any ideas what I could be missing?
Here's a test to verify my assumptions:
public static void main(String[] args) {
    Random r = new Random(1);
    Set<Float> s = new HashSet<Float>();
    for (int i = 0; i < 1000000; i++) {
        s.add(r.nextFloat());
    }
    float ret = 1;
    int cnt = 0;
    for (Float f : s) {
        float multiply = 0.3f;
        if (cnt++ % 2 == 0) {
            multiply = 0.7f;
        }
        float f2 = (f * multiply);
        ret += f2;
    }
    System.out.println(ret);
}
It always results in 242455.25 for me.
You can write a deterministic program in Java. You just need to eliminate the possible sources of non-determinism.
It's hard to know what could be causing non-determinism without seeing your actual code, and concrete evidence of that non-determinism.
There are any number of library methods that could potentially be sources of non-deterministic behaviour ... depending on how you use them.
For example, the value returned by Object.hashCode() (the first time it is called on an instance) is non-deterministic. And that percolates through to any library that uses hashing. It can definitely affect the order in which elements of a HashSet or HashMap are returned when you iterate them ... if the element class doesn't override hashCode().
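A small sketch of that effect, together with one way to make iteration order deterministic (the Creature class is made up for illustration):
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class IterationOrderDemo {
    // Deliberately does NOT override hashCode()/equals(), so it falls back to
    // the identity hash code, which can differ from run to run.
    static class Creature {
        final String name;
        Creature(String name) { this.name = name; }
        @Override public String toString() { return name; }
    }

    public static void main(String[] args) {
        Set<Creature> hashed = new HashSet<>();        // iteration order may vary between runs
        Set<Creature> ordered = new LinkedHashSet<>(); // iteration order = insertion order, always

        for (String n : new String[] {"fox", "hare", "owl", "vole"}) {
            Creature c = new Creature(n);
            hashed.add(c);
            ordered.add(c);
        }
        System.out.println("HashSet:       " + hashed);
        System.out.println("LinkedHashSet: " + ordered);
    }
}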
Random number generators may or may not be deterministic. If they are pseudo-random and they are initialized with fixed seeds, then the sequence of numbers produced by each one will be deterministic.
Floating point arithmetic should be deterministic. For any (fixed) set of inputs to an arithmetic expression, the result should always be the same. (I'm not sure that determinism of floating point arithmetic is guaranteed by the JLS, but it would be mighty strange if non-determinism happened in practice. As in ... you are running on broken hardware.)
FOLLOWUP ... on strictfp and non-determinism.
According to the JLS 15.4:
"Within an expression that is not FP-strict, some leeway is granted for an implementation to use an extended exponent range to represent intermediate results; the net effect, roughly speaking, is that a calculation might produce "the correct answer" in situations where exclusive use of the float value set or double value set might result in overflow or underflow."
This doesn't exactly say how much "leeway" the implementation has in a non-FP-strict expressions. However, I'd have thought that that leeway would not extend to allowing non-deterministic behaviour. I'd have thought that a JIT compiler on a particular platform would always generate equivalent native code for the same expression, and that code would be deterministic. (I can't see any reason for non-determinism ... unless the hardware itself has non-deterministic floating point.) The other possible source of non-determinism might be that behaviour of JIT compiled and interpreted code might be different. But frankly, I think that it would be "nuts" to allow that to happen ... and I think we'd have heard of it.
So while non-FP-strict expression evaluation could be non-deterministic in theory, I think we should discount this ... unless there is clear evidence that it happens in practice.
(Note that I'm talking about real non-determinism, not platform differences.)
I'm iterating through Sets, and I know that the order a Set is iterated in isn't defined. But I don't see any reason why a Set that is filled in the same order with the same values should behave differently in several runs. Does it?
It can. The implementation is free to use, for example, the object's location in memory as the key into the underlying hash table. That can vary depending on when garbage collection runs.

Reordering arguments using recursion (pro, cons, alternatives)

I find that I often make a recursive call just to reorder arguments.
For example, here's my solution for endOther from codingbat.com:
Given two strings, return true if either of the strings appears at the very end of the other string, ignoring upper/lower case differences (in other words, the computation should not be "case sensitive"). Note: str.toLowerCase() returns the lowercase version of a string.
public boolean endOther(String a, String b) {
return a.length() < b.length() ? endOther(b, a)
: a.toLowerCase().endsWith(b.toLowerCase());
}
I'm very comfortable with recursions, but I can certainly understand why some perhaps would object to it.
There are two obvious alternatives to this recursion technique:
Alternative 1: Swap a and b traditionally
public boolean endOther(String a, String b) {
    if (a.length() < b.length()) {
        String t = a;
        a = b;
        b = t;
    }
    return a.toLowerCase().endsWith(b.toLowerCase());
}
Not convenient in a language like Java that doesn't pass by reference
Lots of code just to do a simple operation
An extra if statement breaks the "flow"
Alternative 2: Repeat code
public boolean endOther(String a, String b) {
return (a.length() < b.length())
? b.toLowerCase().endsWith(a.toLowerCase())
: a.toLowerCase().endsWith(b.toLowerCase());
}
Explicit symmetry may be a nice thing (or not?)
Bad idea unless the repeated code is very simple
...though in this case you can get rid of the ternary and just || the two expressions (see the sketch below)
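That simplification would look something like this (a minimal sketch of the same method):
public boolean endOther(String a, String b) {
    String al = a.toLowerCase();
    String bl = b.toLowerCase();
    // Symmetric: true if either string ends with the other, so no ordering of a/b is needed.
    return al.endsWith(bl) || bl.endsWith(al);
}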
So my questions are:
Is there a name for these 3 techniques? (Are there more?)
Is there a name for what they achieve? (e.g. "parameter normalization", perhaps?)
Are there official recommendations on which technique to use (when)?
What are other pros/cons that I may have missed?
Another example
To focus the discussion more on the technique rather than the particular codingbat problem, here's another example where I feel that the recursion is much more elegant than a bunch of if-else's, swaps, or repetitive code.
// sorts 3 values and return as array
static int[] sort3(int a, int b, int c) {
return
(a > b) ? sort3(b, a, c) :
(b > c) ? sort3(a, c, b) :
new int[] { a, b, c };
}
Recursion and ternary operators don't bother me as much as it bothers some people; I honestly believe the above code is the best pure Java solution one can possibly write. Feel free to show me otherwise.
Let’s first establish that code duplication is usually a bad idea.
So whatever solution we take, the logic of the method should only be written once, and we need a means of swapping the arguments around that does not interfere with the logic.
I see three general solutions to that:
Your first recursion (either using if or the conditional operator).
swap – which, in Java, is a problem, but might be appropriate in other languages.
Two separate methods (as in #Ha’s solution) where one acts as the implementation of the logic and the other as the interface, in this case to sort out the parameters.
I don’t know which of these solutions is objectively the best. However, I have noticed that there are certain algorithms for which (1) is generally accepted as the idiomatic solution, e.g. Euklid’s algorithm for calculating the GCD of two numbers.
I am generally averse to the swap solution (2) since it adds an extra call which doesn’t really do anything in connection with the algorithm. Now, technically this isn’t a problem – I doubt that it would be less efficient than (1) or (3) using any decent compiler. But it adds a mental speed-bump.
Solution (3) strikes me as over-engineered although I cannot think of any criticism except that it’s more text to read. Generally, I don’t like the extra indirection introduced by any method suffixed with “Impl”.
In conclusion, I would probably prefer (1) for most cases although I have in fact used (3) in similar circumstances.
Another +1 for "In any case, my recommendation would be to do as little in each statement as possible. The more things that you do in a single statement, the more confusing it will be for others who need to maintain your code."
Sorry but your code:
// sorts 3 values and return as array
static int[] sort3(int a, int b, int c) {
return
(a > b) ? sort3(b, a, c) :
(b > c) ? sort3(a, c, b) :
new int[] { a, b, c };
}
It's perhaps, for you, the best "pure Java code", but for me it's the worst... unreadable code: without the method name or the comment, we just can't tell at first sight what it's doing...
Hard-to-read code should only be used when high performance is needed (but anyway, many performance problems are due to bad architecture...). If you HAVE TO write such code, the least you can do is write good Javadoc and unit tests... we developers often don't care about the implementation of such methods if we just have to use them, not rework them... but since a first glance doesn't tell us what it does, we have to trust that it works the way we expect, and we can lose time...
Recursive methods are OK when the method is short, but I think recursion should be avoided if the algorithm is complex and there's another way to do it in almost the same computation time... particularly if other people will probably work on this method.
For your example it's OK since it's a short method, but anyway, if you're not concerned with performance, you could have used something like this:
// sorts int values (requires java.util.ArrayList, java.util.Collections, java.util.List)
public static Integer[] sort(Integer... intValues) {
    List<Integer> list = new ArrayList<>();
    for (Integer i : intValues) {
        list.add(i);
    }
    Collections.sort(list);
    return list.toArray(new Integer[0]);
}
A simple way to implement your method, easily readable by any Java >= 1.5 developer, that works for 1 to n integers...
Not the fastest, but anyway, if it's just about speed, use C++ or asm :)
For this particular example, I wouldn't use anything you suggested.. I would instead write:
public boolean endOther(String a, String b) {
    String alower = a.toLowerCase();
    String blower = b.toLowerCase();
    if (a.length() < b.length()) {
        return blower.endsWith(alower);
    } else {
        return alower.endsWith(blower);
    }
}
While the ternary operator does have its place, the if statement is often more intelligible, especially when the operands are fairly complex. In addition, if you repeat code in different branches of an if statement, they will only be evaluated in the branch that is taken (in many programming languages, both operands of the ternary operator are evaluated no matter which branch is selected). While, as you have pointed out, this is not a concern in Java, many programmers have used a variety of languages and might not remember this level of detail, and so it is best to use the ternary operator only with simple operands.
One frequently hears of "recursive" vs. "iterative"/"non-recursive" implementations. I have not heard of any particular names for the various options that you have given.
In any case, my recommendation would be to do as little in each statement as possible. The more things that you do in a single statement, the more confusing it will be for others who need to maintain your code.
In terms of your complaint about repetition... if there are several lines that are being repeated, then it is time to create a "helper" function that does that part. Function composition is there to reduce repetition. Swapping just doesn't make any sense to do, since there is more effort to swap than to simply repeat... also, if code later in the function uses the parameters, the parameters now mean different things than they used to.
EDIT
My argument vis-a-vis the ternary operator was not a valid one... the vast majority of programming languages use lazy evaluation with the ternary operator (I was thinking of Verilog at the time of writing, which is a hardware description language (HDL) in which both branches are evaluated in parallel). That said, there are valid reasons to avoid using complicated expressions in ternary operators; for example, with an if...else statement, it is possible to set a breakpoint on one of the conditional branches whereas, with the ternary operator, both branches are part of the same statement, so most debuggers can't break on them separately.
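In Java specifically, only the chosen branch of the conditional operator is evaluated; a tiny sketch confirming it:
public class TernaryEvaluation {
    static int boom() {
        throw new IllegalStateException("this branch was evaluated!");
    }

    public static void main(String[] args) {
        boolean takeFirst = args.length == 0; // true when run without arguments
        // Only the taken branch is evaluated, so boom() is never called here.
        int v = takeFirst ? 42 : boom();
        System.out.println(v); // 42
    }
}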
It is slightly better to use another method instead of recursion
public boolean endOther(String a, String b) {
    return a.length() < b.length() ? endOtherImpl(b, a) : endOtherImpl(a, b);
}

protected boolean endOtherImpl(String longStr, String shortStr) {
    return longStr.toLowerCase().endsWith(shortStr.toLowerCase());
}

How do I compare two Integers? [duplicate]

This question already has answers here:
How can I properly compare two Integers in Java?
(10 answers)
Closed 7 years ago.
I have to compare two Integer objects (not int). What is the canonical way to compare them?
Integer x = ...
Integer y = ...
I can think of this:
if (x == y)
The == operator only compares references, so this will only work for lower integer values. But perhaps auto-boxing kicks in...?
if (x.equals(y))
This looks like an expensive operation. Are there any hash codes calculated this way?
if (x.intValue() == y.intValue())
A little bit verbose...
EDIT: Thank you for your responses. Although I know what to do now, the facts are distributed on all of the existing answers (even the deleted ones :)) and I don't really know, which one to accept. So I'll accept the best answer, which refers to all three comparison possibilities, or at least the first two.
This is what the equals method does:
public boolean equals(Object obj) {
    if (obj instanceof Integer) {
        return value == ((Integer)obj).intValue();
    }
    return false;
}
As you can see, there's no hash code calculation, but there are a few other operations taking place there. Although x.intValue() == y.intValue() might be slightly faster, you're getting into micro-optimization territory there. Plus the compiler might optimize the equals() call anyway, though I don't know that for certain.
I generally would use the primitive int, but if I had to use Integer, I would stick with equals().
Use the equals method. Why are you so worried that it's expensive?
if (x.equals(y))
This looks like an expensive operation. Are there any hash codes calculated this way?
It is not an expensive operation and no hash codes are calculated. Java does not magically calculate hash codes, equals(...) is just a method call, not different from any other method call.
The JVM will most likely even optimize the method call away (inlining the comparison that takes place inside the method), so this call is not much more expensive than using == on two primitive int values.
Note: Don't prematurely apply micro-optimizations; your assumptions like "this must be slow" are most likely wrong or don't matter, because the code isn't a performance bottleneck.
Minor note: since Java 1.7 the Integer class has a static compare(Integer, Integer) method, so you can just call Integer.compare(x, y) and be done with it (questions about optimization aside).
Of course that code is incompatible with versions of Java before 1.7, so I would recommend using x.compareTo(y) instead, which is compatible back to 1.2.
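For illustration, both calls look like this in use (values chosen arbitrarily; Integer.compare needs Java 7+):
Integer x = 1000;
Integer y = 1000;

System.out.println(Integer.compare(x, y) == 0); // true (Java 7+)
System.out.println(x.compareTo(y) == 0);        // true (Java 1.2+)
System.out.println(x == y);                     // typically false: 1000 is outside the default cache range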
I would go with x.equals(y) because that's a consistent way to check equality for all classes.
As far as performance goes, equals is actually more expensive because it ends up calling intValue().
EDIT: You should avoid autoboxing in most cases. It can get really confusing, especially if the author doesn't know what he is doing. You can try this code and you may be surprised by the result:
Integer a = 128;
Integer b = 128;
System.out.println(a==b); // prints false: 128 is outside the Integer cache range (-128..127)
To compare Integers and print them in ascending or descending order, all you have to do is implement the Comparator interface, override its compare method, and compare the values as below:
@Override
public int compare(Integer o1, Integer o2) {
    if (ascending) {
        // Note: the subtraction trick can overflow for extreme values; Integer.compare is safer.
        return o1.intValue() - o2.intValue();
    } else {
        return o2.intValue() - o1.intValue();
    }
}
"equals" is it. To be on the safe side, you should test for null-ness:
x == y || (x != null && x.equals(y))
The x == y part covers the null == null case, which IMHO should be true.
The code will be inlined by the JIT if it is called often enough, so performance considerations should not matter.
Of course, avoiding "Integer" in favor of plain "int" is the best way, if you can.
[Added]
Also, the null-check is needed to guarantee that the equality test is symmetric -- x.equals(y) should be the same as y.equals(x), but isn't if one of them is null.
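Since Java 7, the same null-safe check is available in the standard library as java.util.Objects.equals, so the expression above can be written as:
if (Objects.equals(x, y)) // true when both are null, false when exactly one is, x.equals(y) otherwise
    ...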
The Integer class implements Comparable<Integer>, so you could try,
x.compareTo(y) == 0
also, if rather than equality, you are looking to compare these integers, then,
x.compareTo(y) < 0 will tell you if x is less than y.
x.compareTo(y) > 0 will tell you if x is greater than y.
Of course, it would be wise, in these examples, to ensure that x is non-null before making these calls.
I just encountered this in my code and it took me a while to figure it out. I was doing an intersection of two sorted lists and was only getting small numbers in my output. I could get it to work by using (x - y == 0) instead of (x == y) during comparison.

Is there any justifiable reason to use new and a constructor on a number class in Java?

Is there any justifiable reason to write, in Java, something like
Long l = new Long(SOME_CONSTANT)
This creates an extra object and is tagged by FindBugs, and is obviously a bad practice. My question is whether there is ever a good reason to do so?
I previously asked this about String constructors and got a good answer, but that answer doesn't seem to apply to numbers.
Only if you want to make sure you get a unique instance, so practically never.
Some numbers can be cached when autoboxed (although Longs aren't guaranteed to be), which might cause problems. But any code that would break because of caching probably has deeper issues. Right now, I can't think of a single valid case for it.
My question is whether there is ever a good reason to do so?
You might still use it if you want to write code compatible with older JREs. valueOf(long) was only introduced in Java 1.5, so in Java 1.4 and before the constructor was the only way to go directly from a long to a Long. I expect it isn't deprecated because the constructor is still used internally.
The only thing I can think of is to make the boxing explicit, although the equivalent autoboxed code is actually compiled into Long.valueOf(SOME_CONSTANT), which can cache small values (from the JDK source):
public static Long valueOf(long l) {
    final int offset = 128;
    if (l >= -128 && l <= 127) { // will cache
        return LongCache.cache[(int)l + offset];
    }
    return new Long(l);
}
Not a big deal, but I dislike seeing code that continually boxes and unboxes without regard for type, which can get sloppy.
Functionally, though, I can't see a difference one way or the other. The new Long will still be equals() to the autoboxed one and have the same hashCode(), so I can't see how you could even make a functional distinction if you wanted to.
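For illustration, the only observable difference is reference identity, never equals()/hashCode(); a small sketch (caching of boxed longs outside -128..127 is implementation-dependent, but the small value below is cached in practice):
Long constructed = new Long(42L); // always a fresh object (and deprecated in recent JDKs)
Long boxed = 42L;                 // compiled to Long.valueOf(42L), served from the cache
Long valueOf = Long.valueOf(42L); // the same cached instance

System.out.println(constructed.equals(boxed));                  // true  - same value
System.out.println(constructed.hashCode() == boxed.hashCode()); // true
System.out.println(constructed == boxed);                       // false - distinct instance
System.out.println(boxed == valueOf);                           // true in practice for -128..127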
