I wrote something like
@Override
public int compareTo(Cuboid that) {
    return -ComparisonChain.start()
        .compare(this.d0, that.d0)
        .compare(this.d1, that.d1)
        .compare(this.d2, that.d2)
        .result();
}
To reverse the order, I simply negated the result, but now I see it was wrong, as the docs say:
Ends this comparison chain and returns its result: a value having the
same sign as the first nonzero comparison result in the chain, or zero if
every result was zero.
So Integer.MIN_VALUE is an allowed return value, and then the negation fails. In the source code, I can see that nothing but -1, 0, and +1 ever gets returned, but this isn't something I'd like to depend on.
Instead of the negation I could swap all the operands. Simple and ugly, but I'm curious if there's a better solution.
One option that may be more clear than reversing the compare arguments would be to compare each pair of arguments with the reverse of their natural ordering:
.compare(this.d0, that.d0, Ordering.natural().reverse())
I don't know if that's better (personally I don't like to involve floating-point operations), but you could send it through Math#signum (note it returns a float or double, so the result needs a cast back to int):
return (int) -Math.signum( .... );
I'd probably just swap the operands.
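Here's a minimal stdlib-only sketch of the operand-swapping idea (no Guava); the Cuboid class and its d0/d1/d2 fields are assumptions mirroring the question:

```java
// Sketch: descending order by comparing (that, this) instead of negating.
// Cuboid and its double fields d0, d1, d2 are assumed from the question.
class Cuboid implements Comparable<Cuboid> {
    final double d0, d1, d2;

    Cuboid(double d0, double d1, double d2) {
        this.d0 = d0; this.d1 = d1; this.d2 = d2;
    }

    @Override
    public int compareTo(Cuboid that) {
        // Swapped operands: 'that' first, 'this' second gives descending
        // order, and the result is always a valid compareTo value.
        int c = Double.compare(that.d0, this.d0);
        if (c != 0) return c;
        c = Double.compare(that.d1, this.d1);
        if (c != 0) return c;
        return Double.compare(that.d2, this.d2);
    }
}
```

Since the operands are swapped in every comparison, the result is already a valid descending-order value and never needs negating.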
Maybe this?
int temp = ComparisonChain.start()
    .compare(this.d0, that.d0)
    .compare(this.d1, that.d1)
    .compare(this.d2, that.d2)
    .result();
return temp == Integer.MIN_VALUE ? Integer.MAX_VALUE : -temp;
There's actually a signum function using no floating point, namely Integer#signum(int), and it's about 3.5 times faster than Math.signum with the cast to double.
Out of curiosity, the fastest solution is
return -((x&1) | (x>>1));
but only by about 10%.
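Out of curiosity, here's a sketch cross-checking that bit trick against Integer.signum; note the trick produces a value with the opposite sign of x, not necessarily -1/0/+1, which is all a comparator needs:

```java
// Sketch comparing Integer.signum with the bit trick -((x & 1) | (x >> 1)).
class SignumCheck {
    static int viaSignum(int x) {
        return -Integer.signum(x);
    }

    static int viaBitTrick(int x) {
        // (x >> 1) keeps the sign (arithmetic shift); (x & 1) rescues the
        // lowest bit so that x == 1 doesn't collapse to 0.
        return -((x & 1) | (x >> 1));
    }
}
```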
Which piece of code has better performance?
Pseudocode:
1)
private String doSth(String s) {
    ...
    return s.substring(0, Math.min(s.length(), constparam));
}
2)
private String doSth(String s) {
    if (s.length() > constparam) {
        return s.substring(0, constparam);
    }
    return s;
}
In most cases (99%), s.length() < constparam.
This method is invoked 20-200 times per second.
Which solution (and why) would have a better performance?
Will it be a significant impact?
Let's look at what each one does:
1 always finds the lower of two values and always calls substring, returning a String.
2 always compares two values and only sometimes calls substring, so it only sometimes creates a new String.
So 2, because some of the time it will do less work and create fewer objects.
If s.length() < constparam in most cases, variant 2 will be faster, as the substring() operation need not be done most of the time.
public static int min(int a, int b) {
    return (a <= b) ? a : b;
}
From Math. I would run a JMH test, and I would say that the two presented solutions won't show statistically significant differences.
If you are really concerned about performance, don't even use substring (check the code: it has 3 ifs and, unless the requested range covers the whole string, creates a new String every time you call it); you should operate with char arrays instead.
And then again: in real life I don't think it will matter. Run the JMH test with some real parameters (lengths of strings and constant values). I think you'll see numbers which are enough for almost every sane use-case.
Function substring with arguments (0, length) returns the string unmodified.
The difference is checking if s.length() > constparam, but basically it's what Math.min does.
So in my opinion, there's almost no performance difference, assuming a substring invocation takes much more time than this conditional or Math.min, or even without this assumption.
The branching may cost something like a few nanoseconds. In both cases there's no needless char[] copying. The method call overhead is rather big, but gets optimized out.
A few nanoseconds times 200 makes a few microseconds at most. The difference between the two approaches is smaller still, so you may spend 0.0001% of the time on this. This is a very rough estimate, but even if I were off by a factor of a thousand, there would be no point in optimizing here.
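For concreteness, here's a sketch of the two pseudocode variants as compilable Java (CONSTPARAM is an illustrative constant standing in for constparam):

```java
// Sketch of the two truncation variants from the question.
class Truncate {
    static final int CONSTPARAM = 10; // illustrative value

    // Variant 1: always calls substring.
    static String v1(String s) {
        return s.substring(0, Math.min(s.length(), CONSTPARAM));
    }

    // Variant 2: only calls substring when the string is too long.
    static String v2(String s) {
        if (s.length() > CONSTPARAM) {
            return s.substring(0, CONSTPARAM);
        }
        return s;
    }
}
```

Both return identical strings; any difference is in allocation and branching behavior, which is exactly what a JMH benchmark would measure.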
I store flags using bits within a 64-bit integer.
I want to know if there is a single bit set, whatever its position within the 64-bit integer (i.e. I do not care about the position of any specific bit).
boolean isOneSingleBitSet(long integer64) {
    return ....;
}
I could count the number of bits using the Bit Twiddling Hacks (by Sean Eron Anderson), but I am wondering what the most efficient way is to just detect whether a single bit is set...
I found some other related questions:
(8051) Check if a single bit is set
Detecting single one-bit streams within an integer
and also some Wikipedia pages:
Find first one
Bit manipulation
Hamming weight
NB: my application is in java, but I am curious about optimizations using other languages...
EDIT: Lưu Vĩnh Phúc pointed out that the first link in my question already has the answer: see the section Determining if an integer is a power of 2 in the Bit Twiddling Hacks (by Sean Eron Anderson). I had not realized that having one single bit set is the same as being a power of two.
If you just literally want to check if one single bit is set, then you are essentially checking if the number is a power of 2. To do this you can do:
if ((number & (number-1)) == 0) ...
This will also count 0 as a power of 2, so you should check for the number not being 0 if that is important. So then:
if (number != 0 && (number & (number-1)) == 0) ...
(using x as the argument)
Detecting if at least one bit is set is easy:
return x!=0;
Likewise detecting if bit one (second lowest bit) is set is easy:
return (x&2)!=0;
Exactly one bit is set iff it is a power of two. This works:
return x!=0 && (x & (x-1))==0;
The wrapper class java.lang.Long has a static function bitCount() that returns the number of bits in a long (64-bit int):
boolean isSingleBitSet(long l) {
    return Long.bitCount(l) == 1;
}
Note that ints are 32-bit in java.
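As a sanity check, here's a sketch showing Long.bitCount agreeing with the x & (x - 1) power-of-two test from the other answers:

```java
// Sketch: two equivalent single-bit tests on a long.
class SingleBit {
    static boolean viaBitCount(long x) {
        return Long.bitCount(x) == 1;
    }

    static boolean viaPowerOfTwo(long x) {
        // x & (x - 1) clears the lowest set bit; the x != 0 guard
        // excludes zero, which the trick alone would accept.
        return x != 0 && (x & (x - 1)) == 0;
    }
}
```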
Assuming you have already an efficient - or hardware - implementation of ffs() - find first set - you may act as follows:
bool isOneSingleBitSet(long integer64) {
    return (integer64 >> ffs(integer64)) == 0;
}
The ffs() function may already be available, or you may like to check the links above. (Note that integer64 == 0 would also pass this test, so guard for it if zero is possible.)
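Java has no ffs(), but Long.numberOfTrailingZeros plays a similar role. Here's a sketch of the same idea in Java, handling two pitfalls: x == 0 (where numberOfTrailingZeros returns 64) and Java masking long shift counts to 6 bits (so a single shift by 64 would be a no-op):

```java
// Sketch of the ffs-style single-bit test using Long.numberOfTrailingZeros.
class FfsStyle {
    static boolean isOneSingleBitSet(long x) {
        // Shift the lowest set bit down to position 0, then shift it out.
        // The shift is split into two steps because x >>> 64 would leave
        // x unchanged (Java masks long shift counts to 6 bits).
        return x != 0 && ((x >>> Long.numberOfTrailingZeros(x)) >>> 1) == 0;
    }
}
```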
Let's assume X is a 64-bit integer full of 0s except the one bit you are looking for;
return ((integer64 & X) == X);
Seems like you can do a bitwise AND with a long representation of the single bit you want to check. For example, to check the LSB
return( (integer64 & 1L)!=0 );
Or to check the 4th bit from the right
return( (integer64 & 8L)!=0 );
I'm doing some basic computation using the Apache Commons Library, and I have a 2x2 symmetric RealMatrix for which I need to compute the EigenDecomposition. The matrix is as follows:
{{10.387035702893005, 0.14862451664049367},
{0.14862451664049442, -5.1952457826500815}}
The top right and bottom left elements, of type double, are supposed to be identical, and you'll notice that they almost are. When I pass the matrix to a new instance of EigenDecomposition, however, I get an exception: isSymmetric() evaluates to false, and because the constructor passes in 'true' as a parameter, the isSymmetric() method raises an exception. I basically need to bypass this check. What are my options? Thanks!
public EigenDecomposition(final RealMatrix matrix,
                          final double splitTolerance) {
    if (isSymmetric(matrix, true)) {
        transformToTridiagonal(matrix);
        findEigenVectors(transformer.getQ().getData());
    }
}
N.B. The split tolerance parameter, which one might think specifies a tolerance level, is merely a dummy parameter.
The problem seems to be a numerical error - the values are almost identical, but not exactly. A quick and dirty solution could be:
Check if the two values equal each other using the condition Math.abs(matrix[0][1] - matrix[1][0]) < DELTA, where DELTA is your tolerance factor (the maximum difference you can tolerate while still considering the matrix symmetric).
If it is, assign matrix[0][1] = matrix[1][0].
It is easy to see that a matrix that satisfies the condition will now be symmetric by definition.
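A sketch of that fix on a plain double[][] (with Commons Math you would do the same via getEntry/setEntry before constructing the EigenDecomposition); DELTA is an illustrative tolerance, not a value from the library:

```java
// Sketch: force near-symmetric data to be exactly symmetric before
// handing it to an eigendecomposition. DELTA is an illustrative tolerance.
class Symmetrize {
    static final double DELTA = 1e-10;

    static void symmetrize(double[][] m) {
        for (int i = 0; i < m.length; i++) {
            for (int j = i + 1; j < m[i].length; j++) {
                if (Math.abs(m[i][j] - m[j][i]) >= DELTA) {
                    throw new IllegalArgumentException("not nearly symmetric");
                }
                m[i][j] = m[j][i]; // copy one triangle over the other
            }
        }
    }
}
```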
Can you override isSymmetric so that the second parameter is ignored? You could then call your own isSymmetric method.
For example:
@Override
public boolean isSymmetric(RealMatrix m, boolean b) {
    return _isSymmetric(m);
}
where _isSymmetric(m) is your own implementation. You could then compare the double values in any way you see fit. I would recommend using a delta rather than a straight ==, as double values are very rarely exactly equal but usually are equal enough ;)
If the two values are supposed to be equal, can you just copy one over the top of the other?
matrix[0][1] = matrix[1][0];
Every time I need to implement a comparator, I get stuck trying to remember when I should return -1 and when 1, and I have to look it up.
I mean, obviously -1 is less, so it implies that first is less than second. But whenever I say that to myself, I get that nagging "are you sure?" feeling. I suspect part of my confusion comes from implementing it the other way around whenever I need a descending sort.
What do you use to remember which is which?
I use this simple "subtraction" mnemonic:
first - second
So, if first is "less" than second you'll get a negative result; otherwise a positive result, or zero if they are equal.
comparator.compare(a, b) < 0 <==> a < b
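One caveat to the subtraction mnemonic: first - second can overflow when the operands are far apart, so in real code Integer.compare is the safer spelling. A sketch:

```java
// Sketch: the subtraction mnemonic vs. the overflow-safe library call.
class CompareDemo {
    // Mnemonic form: fine as a memory aid, but a - b overflows when the
    // operands are far apart, flipping the sign of the result.
    static int bySubtraction(int a, int b) {
        return a - b;
    }

    // Overflow-safe form: always returns a correctly signed result.
    static int byLibrary(int a, int b) {
        return Integer.compare(a, b);
    }
}
```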
I am not sure what you mean by mnemonic. However, I have had a very similar cognitive dissonance.
I am very visual, so I use the number line (the one I was taught in grade school). I just visualize the negative numbers as "left", 0 as "center" and positive numbers as "right". That corresponds to the truth: -1 < 0 < 1.
I remember the base integer case (pseudocode):
int comparator(int a, int b) {
    return a - b;
}
So if we pass in a small a and a large b (that is, a < b), we get a negative result.
I have a more visual memory, so remembering the "structure" of that function is easy and natural for me.
I used to always check the documentation when implementing Comparator and Comparable interfaces.
Question: Compare a and b
Let's first look at ascending order, since descending order will be just the inverse of whatever we do.
The question can be translated to: given two numbers a and b, how would you put them on the number line?
if a < b, then we will put a on the negative side and b on the positive side.
else if a = b then we will put both at the center (at 0)
otherwise b will be on the negative side and a will be on the positive side.
Comparator Implementation:
Here you are comparing a to b.
@Override
public int compare(MyClass a, MyClass b) { // always good to name your variables like this, for consistency
    return a.f1 - b.f1;
}
Comparable Implementation:
Here you are comparing this to other.
@Override
public int compareTo(MyClass other) { // this is the same as compare(this, other)
    return this.f1 - other.f1;
}
Can anybody explain how this function works?
public int TestAdd(int a, int b) {
    if (a < 1) return b;
    return TestAdd((a & b) << 1, a ^ b);
}
Adding two matching set bits is equivalent to setting the next bit up: in binary, 1 + 1 = 10. So the function computes that carry for all matching bits ((a & b) << 1), keeps the unmatched bits as the partial sum (a ^ b), and carries the result over to another round. When no carries remain, it's done.
Since you can obviously test to see that it does indeed add two numbers, I assume you aren't understanding what those symbols are doing. Java's operators are described here:
http://download.oracle.com/javase/tutorial/java/nutsandbolts/operators.html
And you can easily look up the definitions of "logical AND" and "bitwise exclusive OR" and how they apply to ints.
Here we are using recursion: the function calls itself with the carries shifted left by one bit ((a & b) << 1, the carry produced while adding) and the XOR of the numbers (a ^ b, the sum without carries), until a becomes less than 1 (i.e. the carries run out).
Thus it returns the sum of the numbers.
You can step through the function with some sample values for better understanding.
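An iterative rendering of the same idea, as a sketch (it loops on a != 0 rather than the question's a < 1, so that a negative carry word doesn't end the loop early):

```java
// Sketch: carry-propagation addition without '+', the iterative twin of
// the recursive TestAdd above.
class BitwiseAdd {
    static int add(int a, int b) {
        while (a != 0) {
            int carry = (a & b) << 1; // bits that produce a carry
            b = a ^ b;                // sum without the carries
            a = carry;                // feed the carries back in
        }
        return b;
    }
}
```

Thanks to two's-complement wraparound, this computes addition modulo 2^32, so it works for negative operands as well.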