Invert image pixels - java

I'm currently trying to convert a piece of MATLAB code to Java. The purpose of the code is to invert and normalize the pixels of an image file. In Java, the pixels are stored in a byte array. Below is the MATLAB code of importance:
inp2=1024.-inp.-min; %inp is the input array (double precision). min is the minimum value in that matrix.
The image is 16-bit, but only 10 bits are used for storage, so that's where the 1024 comes from (2^10). I know definitively that this code works in MATLAB. However, I'm personally not proficient in MATLAB, and my Java translation isn't behaving the same way as its counterpart.
Below is the method where I've tried inverting the image matrix:
// bitsStored is the bit depth. In this test, it is 10.
// imageBytes is the pixel data in a byte array.
public static short[] invert(int bitsStored) {
    short min = min(imageBytes); // custom method; gets the minimum value in the byte array
    short range = (short) (2 << bitsStored);
    short[] holder = new short[imageBytes.length];
    for (int i = 0; i < imageBytes.length; i++) {
        holder[i] = (short) (range - imageBytes[i] - min);
    }
    imageBytes = holder;
    return imageBytes;
}
However, instead of inverting the color channels, the image loses some data and becomes much harsher looking (higher contrast, less blending, etc.). What am I doing wrong here?
Let me know if I can make anything clearer for you. Thank you.
UPDATE:
Hi, I have another question regarding this code. Can the above code (fixed to use short[] rather than byte[]) be used in reverse on the same file? That is, if I rerun an inverted version of the original image through this code, should I get back the original input/image from the start of the program? The only problem I see with it is that the min value changes between runs.

A byte has a range of -128 to 127; it cannot hold 1024 different values. So either you need to use a wider type (like short) to model your points, or your byte array has to be unpacked before processing.
One more thing: double is floating point, and it does not play well with the integers used in the rest of your code. The following seems better:
short range = (short) (1 << bitsStored); // 2^bitsStored
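If the pixel data arrives as raw bytes, the unpacking step might look like this (a minimal sketch; it assumes two bytes per pixel in little-endian order, which depends on your actual file format):

static short[] unpack(byte[] imageBytes) {
    short[] pixels = new short[imageBytes.length / 2];
    for (int i = 0; i < pixels.length; i++) {
        int lo = imageBytes[2 * i] & 0xFF;      // mask to undo sign extension
        int hi = imageBytes[2 * i + 1] & 0xFF;
        pixels[i] = (short) (lo | (hi << 8));   // combine into one 16-bit sample
    }
    return pixels;
}

After this, the inversion loop can work on pixels directly, and every 10-bit value fits comfortably in a short.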

The correct equation for inversion is:
newValue[i] = maxPossibleValue - currentValue[i]
Your maxPossibleValue is 1023 (the largest 10-bit value, 2^10 - 1).
The other thing is that you can't hold an image with a depth of 10 bits in an array of bytes, because bytes only have 8 bits.

On your second question about the reversibility of your algorithm.
Your formula looks like result[i] = 1024 - min(data) - data[i] where data ranges from 0 to 1023. Let's imagine that all your data points are 1023. Then min is 1023, so all the result[i] will be -1022.
So the result does not even fit in the same range as the data.
Then, if you run your algorithm on that result array to produce result1, all its points will be 1024 - (-1022) - (-1022), i.e. 3068, and not the original 1023.
So the answer is no: applying this algorithm twice does not produce a result equal to the input.
Please note that the algorithm mentioned in the other answer (maxPossibleValue - currentValue[i]) preserves the range and reverses itself when applied twice.
BTW, it should be
short range = (short) (1 << bitsStored);
instead of
short range = (short) (2 << bitsStored);
to produce 2^bitsStored.
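Putting the two corrections together, an inversion that undoes itself could look like this (a minimal sketch; it assumes the pixels have already been unpacked into a short[], and the method name is hypothetical):

static short[] invertPixels(short[] pixels, int bitsStored) {
    short maxValue = (short) ((1 << bitsStored) - 1); // 1023 for 10 bits
    short[] out = new short[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        out[i] = (short) (maxValue - pixels[i]);      // stays within 0..maxValue
    }
    return out;
}

Because maxValue - (maxValue - v) == v, running invertPixels twice returns the original data exactly, with no dependence on a per-image minimum.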

Related

Emulating multiplication of 128-bit integers with pairs of 64-bit integers [duplicate]

I need to multiply two 8-byte (64-bit) arrays in the fastest way possible. The byte arrays are little endian. The arrays can be wrapped in a ByteBuffer and treated as little endian to easily resolve a Java "long" value that correctly represents the bytes (but not the real nominal value, since Java longs are two's complement).
Java's standard way to handle large math is BigInteger. But that implementation is slow and unnecessary, since I'm very strictly working with 64 bits x 64 bits. In addition, you can't throw the "long" value into one because the nominal value is incorrect, and I can't use the byte array directly because it's little endian. I need to be able to do this without using up more memory/CPU to reverse the array. This type of multiplication should be able to execute 1M+ times per second. BigInteger doesn't really come close to meeting that requirement anyway, so I'm trying to do it by splitting the high-order bits from the low-order bits, but I can't get it working consistently.
The high-order-bits-only code only works for a subset of longs, because even the intermediate addition can overflow. I got my current code from this answer:
high bits of long multiplication in Java?
Is there a more generic pattern for getting the hi/lo order bits of a 128-bit multiplication, one that works for the largest long values?
Edit:
FWIW I'm prepared for the answer to be "can't do that in Java, do it in C++ and call it via JNI", though I'm hoping someone can give a Java solution before it comes to that.
As of Java 9 (which was a bit too new at the time this question was asked), there is now a trivial way to get the upper half of the 128-bit product of two signed 64-bit integers: Math.multiplyHigh
There is a relatively simple conversion from "upper half of signed product" to "upper half unsigned product" (see Hacker's Delight chapter 8), which can be used to implement an unsigned multiply high like this:
static long multiplyHighUnsigned(long x, long y) {
    long signedUpperHalf = Math.multiplyHigh(x, y);
    return signedUpperHalf + ((x >> 63) & y) + ((y >> 63) & x);
}
This has the potential to be more efficient (on platforms on which multiplyHigh is treated as an intrinsic function by the JIT) than the more manual approach used by the old answer, which I will leave below the line.
It can be done manually without BigInteger by splitting the longs up into two halves, creating the partial products, and then summing them up. Naturally the low half of the sum can be left out.
The partial products overlap, like this:
      LL
   LH
   HL
HH
(each step to the left is a 32-bit shift; LL = xl*yl, LH/HL are the cross products, HH = xh*yh)
So the high halves of LH and HL must be added to the high result, and furthermore the low halves of LH and HL together with the high half of LL may carry into the bits of the high half of the result. The low half of LL is not used.
So something like this (only slightly tested):
static long hmul(long x, long y) {
    long m32 = 0xffffffffL;
    // split
    long xl = x & m32;
    long xh = x >>> 32;
    long yl = y & m32;
    long yh = y >>> 32;
    // partial products
    long t00 = xl * yl;
    long t01 = xh * yl;
    long t10 = xl * yh;
    long t11 = xh * yh;
    // resolve sum and carries
    // high halves of t10 and t01 overlap with the low half of t11
    t11 += (t10 >>> 32) + (t01 >>> 32);
    // the sum of the low halves of t10 + t01 plus
    // the high half of t00 may carry into the high half of the result
    long tc = (t10 & m32) + (t01 & m32) + (t00 >>> 32);
    t11 += tc >>> 32;
    return t11;
}
This of course treats the inputs as unsigned, which does not mean they have to be positive in the sense that Java would treat them as positive: you can absolutely input -1501598000831384712L and -735932670715772870L, and the right answer comes out, as confirmed by Wolfram Alpha.
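A quick sanity check (a sketch that uses java.math.BigInteger purely as a slow reference, not in the hot path):

long x = -1501598000831384712L, y = -735932670715772870L;
BigInteger expected = new BigInteger(Long.toUnsignedString(x))
        .multiply(new BigInteger(Long.toUnsignedString(y)))
        .shiftRight(64);                              // upper 64 bits of the unsigned product
System.out.println(expected.longValue() == hmul(x, y)); // prints true

The same comparison also passes against the multiplyHighUnsigned version above.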
If you are prepared to interface with native code: in C++ with MSVC you could use __umulh, and with GCC/Clang you can compute the product as an __uint128_t and just shift it right. The codegen for that is actually fine; it doesn't cause a full 128x128 multiply.

Converting 4D Vector to Long

I'm trying to figure out a way of converting a 4D vector into a bounded long. However, the vector and the resultant long have certain restrictions. The vector itself is composed of 4 integers: The first integer can be anything within Java's capability (so Integer.MIN_VALUE all the way to Integer.MAX_VALUE). The second and fourth integers are always between -2999984 and 2999984 (both inclusive). And finally, the third is always between 0 and 255 (again, both inclusive). So it follows this format:
([Integer min - Integer max], [-2999984 - 2999984], [0 - 255], [-2999984 - 2999984])
That vector needs to be converted to a long between -824629322721380016 and 824629339968358064.
I'm aware that there is probably no function that results in a 1 to 1 matching, but I'm trying to figure out a function that will result in as few collisions as possible.
If you are wondering, these bounds for the vector and the long are not arbitrary. As I've tagged the post with Minecraft, I ought to explain why. I'm trying to match a certain blockpos in one dimension with a blockpos in another. The 4D vector is [dimension id, x pos, y pos, z pos], and the resultant long is the serialized form of BlockPos (BlockPos#fromLong). You can see the forums post that sparked my inquiry. I'm asking here because my question isn't necessarily MC-specific; it's mainly mathematical and code-based.
I would recommend converting your 4D vector into bits, turning that bit representation into a BigInteger, and hashing that integer with a hashing algorithm designed for low collisions.
Your number of 'buckets' is effectively the range of the long.
According to the following post, Murmur2 appears to be the best hashing algorithm for numbers:
https://softwareengineering.stackexchange.com/questions/49550/which-hashing-algorithm-is-best-for-uniqueness-and-speed
You can Google for Murmur2 Java implementations, but here is one such example at the time of writing this answer:
https://github.com/sangupta/murmur
Worth noting that if you could limit your number of dimensions to 65536 (16 bits), you could have a 1-to-1 mapping on bits alone. Possibly you could do this by limiting the number of virtual worlds that a user can go into?
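As a rough sketch of the pack-then-hash idea (the hash here is a 64-bit FNV-1a, standing in for Murmur2 only to keep the example self-contained; you would also still need to map the result into your bounded long range, e.g. with a modulo):

static long hashVector(int dim, int x, int y, int z) {
    byte[] bytes = java.nio.ByteBuffer.allocate(16)
            .putInt(dim).putInt(x).putInt(y).putInt(z).array();
    long h = 0xcbf29ce484222325L;   // FNV-1a 64-bit offset basis
    for (byte b : bytes) {
        h ^= (b & 0xFF);
        h *= 0x100000001b3L;        // FNV-1a 64-bit prime
    }
    return h;
}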
Unfortunately this can't be done. A long only holds 64 bits, but your 4D vector needs 32 + 23 + 8 + 23 = 86 bits > 64.
If you could restrict your input a little to make it fit, you could convert it along the lines of the following code (an example of a 2D-int-vector <-> long conversion):
long toLong(int int1, int int2) {
    return ((long) int1 << 32) | (int2 & (-1L >>> 32));
}

int[] toInts(long both) {
    int[] ints = new int[2];
    ints[0] = (int) (both >> 32);
    ints[1] = (int) both;
    return ints;
}
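A round trip with these helpers then looks like:

long packed = toLong(-7, 42);
int[] back = toInts(packed); // back[0] == -7, back[1] == 42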

Combining elements of a byte[] array into 16-bit numbers

This is an excerpt of code from a music tuner application. A byte[] array is created, audio data is read into the buffer array, and then the for loop iterates through buffer and combines the values at indices i and i+1 to create an array of 16-bit numbers that is half the length.
byte[] buffer = new byte[2 * 1200];
targetDataLine.read(buffer, 0, buffer.length);
for (int i = 0; i < n; i += 2) {
    int value = (short) ((buffer[i] & 0xFF) | ((buffer[i+1] & 0xFF) << 8)); // **Don't understand**
    a[i >> 1] = value;
}
So far, what I have is this:
From a different SO post, I learned that every byte being stored in a larger type must be &'d with 0xFF, due to its conversion to a 32-bit number. I guess the leading 24 bits are filled with 1s (though I don't know why they aren't filled with zeros... wouldn't leading 1s change the value of the number? 000000000010 (2) is different from 111111110010 (-14), after all), so the purpose of 0xFF is to only grab the last 8 bits (which is the whole byte).
When buffer[i+1] is shifted left by 8 bits, this makes it so that, when ORing, the eight bits from buffer[i+1] are in the most significant positions, and the eight bits from buffer[i] are in the least significant eight bits. We wind up with a 16-bit number that is of the form buffer[i+1] + buffer[i]. (I'm using + but I understand it's closer to concatenation.)
First, why are we ORing buffer[i] | buffer[i+1] << 8? This seems to destroy the original sound information unless we pull it back out in the same way; while I understand that OR will combine them into one value, I don't see how that value can be useful or used in calculations later. And the only way this data is accessed later is as its literal values:
diff += Math.abs(a[j] - a[i+j]);
If I have 101 and 111, added together I should get 12, or 1100. Yet 101 | 111 << 3 gives 111101, which is equal to 61. The closest I got to understanding was that 101 (5) | 111000 (56) is the same as adding 5+56=61. But the order matters -- doing the reverse 101 <<3 | 111 is completely different. I really don't understand how the data can remain useful, when it is OR'd in this way.
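For a concrete round trip showing that the OR loses nothing (the two bytes land in non-overlapping bit positions, so they can be pulled back apart by shifting and masking):

int lo = 0b101, hi = 0b111;
int packed = lo | (hi << 8);       // 0b0000011100000101
int loBack = packed & 0xFF;        // 0b101 again
int hiBack = (packed >> 8) & 0xFF; // 0b111 again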
The other problem I'm having is that, because Java uses signed bytes, the eighth position doesn't indicate a value, but the sign. If I'm ORing two signed binary numbers, then in the resulting 16-bit number, the bit at 2⁷ is now acting as a value instead of a placeholder. If I had a negative byte before running the OR, then in my final value post-operation, it would now erroneously be acting as though the original number had a positive 2⁷ in it. 0xFF doesn't get rid of this, because it preserves the eighth (sign) bit, so shouldn't this be a problem?
For example, 1111 (-1) and 0101, when OR'd, might give 01011111. But 1111 wasn't representing POSITIVE 1111, it was representing the signed version; yet in the final answer, it now is acting as a positive 2³.
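To see the sign-extension behavior in isolation (the leading bits are copies of the sign bit, so they are 1s only when the byte is negative, and the mask discards them):

byte b = (byte) 0xF2;   // bit pattern 11110010, i.e. -14 as a signed byte
int widened = b;        // sign-extended to 0xFFFFFFF2, still -14
int masked = b & 0xFF;  // 0x000000F2, i.e. 242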
UPDATE: I marked the accepted answer, but it took that + a little extra work to figure out where I went wrong. For anyone who may read this in the future:
As far as the signing goes, the code I have uses signed bytes. My only guess as to why this doesn't mess anything up was that all of the values received might be of positive sign. Except that this doesn't make sense, given that a waveform varies in amplitude over [-1,1]. I was going to play around with this to try and figure it out. If there are negative signs, the implementation here doesn't seem to remove the 1 when ORing, so I suspected it wouldn't affect the computation too much (we're dealing with really large values; diff += means diff will be really large, so a few extra 1s shouldn't hurt the outcome, given the code and the comparisons it relies on). So this was all wrong. I gave it some more thought and it's really simple, actually: the only reason this was such a problem is that I didn't know about big-endian, and then once I read about it, I misunderstood exactly how it is implemented. Endianness is explained in the next bullet point.
Regarding the order in which the bits are placed, destroying the sound, etc.: the code I'm using sets bigEndian=false, meaning that the byte order goes from least significant byte to most significant byte. For this reason, combining the two indices of buffer requires taking the second index, placing its bits first, and placing the first index second (so we are now in big-endian byte order). One of the problems I had was the impression that endianness determines the bit order. I thought 10010101 big-endian would become 10101001 little-endian. Turns out this is not the case: the bits in each byte remain in their original order; the difference is that the bytes are ordered "backward". So 10110101 11100001 big-endian becomes 11100001 10110101: same bit order within each byte, but a different byte order.
Finally, I'm not sure why, but the accepted answer is correct: targetDataLine.read() may place the bits into a byte array only (not just in my code, but in all Java code using targetDataLine; read() only accepts a byte array as the destination), but the data is in fact one short split into two bytes. It is for this reason that every two indices must be combined.
Coming back to the signing: it should be obvious by now why this isn't an issue. This is the commenting that I now have in the code, which explains more coherently what it took all of this^ to explain before:
/* The Javadoc explains that the targetDataLine will only read to a byte-typed array.
However, because the sample size is 16-bit, it is actually storing 16-bit numbers
there (shorts), auto-parsing them every eight bits. Additionally, because it is storing
them in little-endian, bits [2^0,2^7] are stored in index[i] in normal order (powers 76543210)
while bits [2^8,2^15] are stored in index[i+1]. So, together they currently read as [7-6-5-4-3-2-1-0 15-14-13-12-11-10-9-8],
which is a problem. In the next for loop, we take care of this and re-organize the bytes by swapping every pair (remember the bits are ok, but the bytes are out of order).
Also, although the array is signed, this will not matter when we combine bytes, because the sign-bit (2^15) will be placed
back at the beginning like it normally is; although 2^7 currently exists as the most significant bit in its byte,
it is not a sign-indicating bit,
because it is really the middle of the short which was split. */
This is combining a byte stream from input, in low-byte-first order, into a stream of shorts in internal byte order.
As for sign extension, it is more a question of the sign encoding of the original byte stream. If the original byte stream is unsigned (coding values from 0 to 255), then the & 0xFF overcomes the otherwise unwanted effects of Java treating the values as signed. So the educated guess is that the external byte stream encodes unsigned bytes.
Judging whether the code is plausible needs information on what external encoding is being treated and what internal encoding is used. E.g. (wild guess, could be totally wrong!): the two-byte chunks read could belong to 2 channels of a stereo sound encoding and are put into a single short for ease of internal processing. You should look at the encoding being read and at the use of the converted data within the application.
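For what it's worth, if the stream really is 16-bit little-endian PCM (one short per sample, as discussed above), the standard library can do the combining, the sign handling, and the byte order all at once; a minimal sketch:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

static short[] toShorts(byte[] buffer) {
    short[] samples = new short[buffer.length / 2];
    ByteBuffer.wrap(buffer)
              .order(ByteOrder.LITTLE_ENDIAN)
              .asShortBuffer()
              .get(samples);   // reads each byte pair as one little-endian short
    return samples;
}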

java opengl: glDrawElements() with >32767 vertices

I have a complex model that has >32767 vertices. Now, the indices can only be passed to OpenGL as type GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT. Java has no concept of unsigned, so the unsigned short option maps to a plain (signed) short, which is 16 bits, or up to +32767. When I specify the vertices, I need to pass OpenGL a short[], where the values in the array point to a vertex in the vertex array. However, if there are >32767 vertices, the value won't fit in the short[].
Is there another way to specify the indices? code snippet is below,
short[] indices = ... read the indices ...;
...
ShortBuffer indicesBuffer = null;
ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * Short.SIZE / 8);
ibb.order(ByteOrder.nativeOrder());
indicesBuffer = ibb.asShortBuffer();
indicesBuffer.put(indices);
indicesBuffer.position(0);
...
gl.glDrawElements(GL10.GL_TRIANGLES, numOfIndices, GL10.GL_UNSIGNED_SHORT, indicesBuffer);
...
I haven't used OpenGL from Java, so I'm speculating here, but there's a good chance that you can just use the negative numbers whose binary representation is the same as the unsigned positive numbers you really want. You're giving GL some byte pairs and telling it to interpret them as unsigned, and as long as they have the right value when interpreted that way, it should work. It doesn't matter if Java thought they meant something different when it stored those bits in memory.
If you're iterating, just ignore the wraparound and keep on incrementing. When you get to -1, you're done.
If you're calculating the index numbers as ints (which don't have this range problem) and then casting to short, subtract 65536 from any number that's greater than 32767.
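For example (assuming the index is first computed as an int):

int index = 40000;              // > 32767, but fits in 16 unsigned bits
short stored = (short) index;   // bit pattern 0x9C40; Java reads it as -25536
int readBack = stored & 0xFFFF; // 40000 again when interpreted as unsigned

GL interprets the stored bit pattern as unsigned, so the negative-looking Java value is harmless.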
