Convert Objective-C code to Java code - java

I have some Objective-C code,
but I want to convert it to Java code.
I know Objective-C's NSData is equivalent to Java's byte[],
but I don't know the equivalents of the rest of the keywords.
Objective-C CODE
NSData * updatedValue = characteristic.value;
uint8_t* dataPointer = (uint8_t*)[updatedValue bytes];
uint8_t flags = dataPointer[0]; dataPointer++;
int32_t tempData = (int32_t)CFSwapInt32LittleToHost(*(uint32_t*)dataPointer); dataPointer += 4;
int8_t exponent = (int8_t)(tempData >> 24);
int32_t mantissa = (int32_t)(tempData & 0x00FFFFFF);
if( tempData == 0x007FFFFF )
{
NSLog(@"Invalid temperature value received");
return;
}
float tempValue = (float)(mantissa*pow(10, exponent));
self.tempString = [NSString stringWithFormat:@"%.1f", tempValue];
Please help me

You could try an
Objective-C to Java converter
In case you need your Java code to be converted to Objective-C, there is also a
Java to Objective-C converter
Reference

Do not attempt to convert it to Java mechanically; determine what it does and write it in Java.
A little educated guesswork based on your knowledge of programming should get you a long way towards understanding the code. This is a great advantage of programming languages over natural languages: understand programming and you can usually make a good educated guess at the meaning of a fragment of code even if you don't know the language. In natural languages the same simply does not hold; knowing, say, French is little help in reading Hindi!
So let's see, uint8_t is probably a type, what type could it be? Well int sounds a lot like integer, 8 is probably the size of the integer in bits - it occurs in the second line which also contains the word bytes, and the u probably means unsigned. So guess that uint8_t is an unsigned 8-bit integer. Now look at the other type-like words, do they make sense in the same way?
So what is the code doing? Well, you've figured out that NSData * is "like" byte[], so what would the code set flags to? The first byte in the array, maybe? How about tempData? Well, there is a 32 in the types here, and that is four bytes.
Having got tempData, what does the code do? Some manipulation which results in tempValue, which is a float. Maybe float is a 32-bit floating point number? Which is of course what it is in Java.
However, here you're going to hit a wall. If you look up how a 32-bit floating-point number is represented in IEEE 754 - the most common way to represent floating point numbers - you will discover that it is stored in binary with the mantissa being a fraction (see Wikipedia).
Now look at the code: pow(10, exponent) looks a lot like 10 to the power, not 2 to the power. And does the mantissa look like it's being treated as a fraction?
So however those 4 bytes are being converted into a float, it looks like either (a) they are not a typical 32-bit float or (b) the Objective-C code is wrong...
So back to the first point - determine what this code is meant to do and then write it directly in Java, don't try to convert it.
HTH
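For what it's worth, here is a minimal Java sketch of the same logic (a flags byte, then a 32-bit little-endian word whose top 8 bits are a signed base-10 exponent and whose low 24 bits are the mantissa). The class and method names are made up for illustration; ByteBuffer is simply a convenient way to get the little-endian read that CFSwapInt32LittleToHost performs in the original:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class TemperatureParser {

    // value is the raw characteristic value, like NSData in the Objective-C code
    public static String parse(byte[] value) {
        ByteBuffer buffer = ByteBuffer.wrap(value).order(ByteOrder.LITTLE_ENDIAN);

        byte flags = buffer.get();       // dataPointer[0] (read but unused, as in the original)
        int tempData = buffer.getInt();  // 32-bit little-endian word

        if (tempData == 0x007FFFFF) {
            return null;                 // "Invalid temperature value received"
        }

        byte exponent = (byte) (tempData >> 24);   // top 8 bits, signed base-10 exponent
        int mantissa = tempData & 0x00FFFFFF;      // low 24 bits

        float tempValue = (float) (mantissa * Math.pow(10, exponent));
        return String.format("%.1f", tempValue);   // like stringWithFormat:@"%.1f"
    }
}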

Related

Convert Java long to int64 in C++

I am receiving some numerical data from a Java client via a socket connection on a C++ server. When I receive 4-byte int data, all I need is the ntohl() function (or reversing the byte order) to convert it to a C++ int. However, I'm having trouble trying to convert the long data type from Java. No matter what I tried, I could not recover the correct value. I used LONG64, ULONG64 and int64_t as well, and none of them worked.
For example, when I send long s = 1 from Java, on C++ side I did:
int64_t size;
recv(client, (char *)&size, sizeof(int64_t), 0);
if I do
size = ntohl(size)
Then size will become 0, whatever the original long value in Java is!
If I don't do the ntohl() conversion, then size = 72057594037927936 for s = 1.
I have hardly found any useful information on this topic and I would appreciate any suggestion.
The value 72057594037927936 is 0x0100000000000000 in Hex. As you may have guessed, that's simply backwards byte ordering, the 1 is in front instead of back.
ntohl() is 32-bit, so it is throwing out those top four bytes (the first 8 hex digits), giving you zero. You could possibly use htonll instead, but that isn't quite right. The best thing is to reverse the order of the bytes yourself.
#include <algorithm>  // for std::reverse

int64_t size;
recv(client, (char *)&size, sizeof(int64_t), 0);
char *start = (char *)&size, *end = start + sizeof(size);
std::reverse(start, end);
There are a ton of ways of reversing the bytes, and a ton of ways of dealing with little/big endian problems in general.
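For completeness, on the Java side DataOutputStream.writeLong always sends the 8 bytes in big-endian (network) order, which is why a little-endian C++ receiver sees them reversed. A minimal sketch of the sending side (the socket setup is assumed, not taken from the question):

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public class LongSender {
    public static void send(Socket socket, long s) throws IOException {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeLong(s);  // writes the high byte first (big-endian / network byte order)
        out.flush();
    }
}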

Convert native uint8_t (Java byte) into an int

I have a native function (from a library) that does some work on uint8_t types (unsigned 8-bit number 0-255). The closest thing Java has is byte which must be signed.
How can I convert this byte into a proper positive integer to use in Java? I know I'll have to store it in a short or int in order to properly represent numbers from 0-255, but I don't know how to convert the byte.
I tried int intValue = byteValue & 0xFF;, but that is giving me unexpected results, so I suspect it's incorrect. Or that is correct and I am misunderstanding the expected results from the native library function. Would appreciate confirmation either way.
In Java, you can use a Guava library function to convert a byte to an int, treating it as unsigned: UnsignedBytes.toInt. So, if you return the C++ unsigned value as a byte to Java, you can then fix it up on the Java side.
If you want to make it into an int on the native side and then return it to Java as an int, that should be perfectly straightforward.
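For example (assuming Guava is on the classpath; byteValue here just stands in for whatever the native call returned):

import com.google.common.primitives.UnsignedBytes;

byte byteValue = (byte) 200;                    // the native uint8_t value 200 arrives in Java as -56
int viaGuava = UnsignedBytes.toInt(byteValue);  // 200
int viaMask = byteValue & 0xFF;                 // 200, the plain-Java equivalent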

python - what is max byte equivalent to java Byte.MAX_VALUE

Does Python have an equivalent of Java's Byte.MAX_VALUE, representing the maximum byte value? I had a look at the Python sys module, but I only managed to find sys.maxint. Does it have anything like sys.maxbyte?
UPDATE:
In my case, I am doing an HBase row key scan. My row key looks like rk1_rk2. In order to scan all results for rk1 without knowing the exact rk2, my Java code looks like:
byte[] startRowBytes = "rk1".getBytes();
byte[] endRowBytes = ("rk1" + (char) Byte.MAX_VALUE).getBytes();
HbaseScanQuery query = new HbaseScanQuery(tableName, colFamily);
query.setStartRow(startRowBytes).setStopRow(endRowBytes);
I am just trying to work out the Python equivalent of the Byte.MAX_VALUE part.
I think you will have to define the value yourself. A byte has 2^8 = 256 unique states, so the largest integer it can represent is 255. Java's byte type, however, is a signed byte, so half the states are reserved for positives (and 0) and the other half is used for negatives. Therefore the equivalent of Java's Byte.MAX_VALUE is 127, and the equivalent of Java's Byte.MIN_VALUE is -128.
Since Python bytes are unsigned, the equivalent of Java's Byte.MIN_VALUE would be 128, which is the representation of -128 in two's complement notation (the de facto standard for representing signed integers). Thanks to Ignacio Vazquez-Abrams for pointing that out.
I haven't dealt with Python in a while, but I believe what you want is ("rk1" + chr(127)).
Given your update, there is an even better answer: Don't worry about what the max byte value is. According to the HBase documentation, the setStartRow and setStopRow methods work just like Python's slicing; namely, the start is inclusive, but the stop is exclusive, meaning your endRowBytes should simply be 'rk2'.
Also, the documentation mentions that you can make the stop row inclusive by adding a zero byte, so another alternative is 'rk1' + chr(0) (or 'rk1\0' or 'rk1\x00', whichever is clearest to you). In fact, the example used to explain HBase scans in the linked documentation illustrates exactly your use case.
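Applied to the Java snippet from the question (HbaseScanQuery is the asker's own wrapper class, so its method names are taken from the question, not from the HBase API), the prefix scan would then look like this:

byte[] startRowBytes = "rk1".getBytes();
byte[] endRowBytes = "rk2".getBytes();  // stop row is exclusive, so this covers every key starting with "rk1"
HbaseScanQuery query = new HbaseScanQuery(tableName, colFamily);
query.setStartRow(startRowBytes).setStopRow(endRowBytes);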

Primitive Data Type Casting in Java - Internal Logic

Guys, I want to understand how widening or narrowing casting is internally implemented in Java. I know that it involves bit fiddling.
For example:
// implicit (widening)
int i = 2400;
long a = i;
// explicit (narrowing)
float d = (float) 2.23423;
Updates:
I wrote this post after looking at the question posted here: Bitshifting to read/write data. Peter Lawrey gave the following answer:
public long create(int one, int two) {
    return ((long) one << 32) | (two & 0xFFFFFFFFL);
}
To reiterate: does a widening conversion like the one above happen at the machine level with more or less the same logic Peter used?
Kindly let me know your valuable comments.
Java uses the IEEE 754 standard machine code instructions supported by your CPU. As such Java does not implement this functionality using something you can break down further.
For conversion from double to float:
the sign is preserved
the exponent is truncated; however, if the number is too large it goes to infinity, and if too small it goes to zero
both formats have an implied top bit which is 1; this is unchanged
the top 23 bits of the mantissa are kept (with optional rounding of the 24th bit)
For float to double the process is similar except fields are extended.
However this is all done in the floating point processor unit and Java plays no part in how it happens.
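A few concrete values illustrate the behaviour described above (ordinary Java expressions, easy to check in jshell):

double big = 1e300;
double tiny = 1e-300;
double fraction = 0.1;

System.out.println((float) big);       // Infinity (exponent too large for float)
System.out.println((float) tiny);      // 0.0 (exponent too small for float)
System.out.println((float) fraction);  // 0.1 (re-rounded; only ~24 mantissa bits survive)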
A double has 64 bits, whereas a float has 32 bits, but narrowing a double to a float is not simply dropping 32 bits: the exponent and mantissa fields have different widths in the two formats, so the value is re-encoded (with rounding), as described above.
Widening an int to a long is closer to that intuition: the 32-bit value is sign-extended, i.e. the sign bit is copied into the upper 32 bits, which are all zeros for non-negative values.
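The sign-extension part of widening is easy to see by printing the bit patterns (plain Java; the values are arbitrary examples):

int positive = 2400;
int negative = -1;

System.out.println(Long.toHexString((long) positive));  // 960              - upper 32 bits are zeros
System.out.println(Long.toHexString((long) negative));  // ffffffffffffffff - sign bit copied into the upper 32 bits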

Most elegant way to convert a byte to an int in Java

Example code:
int a = 255;
byte b = (byte) a;
int c = b & 0xff; // Here be dragons
System.out.println(a);
System.out.println(b);
System.out.println(c);
So we start with an integer value of 255, convert it to a byte (becoming -1) and then convert it back to an int by using a magic formula. The expected output is:
255
-1
255
I'm wondering if this & 0xff masking is the most elegant way to do this conversion. Checkstyle, for example, complains about using a magic number at this place, and it's not a good idea to ignore this value for the check, because in other places 255 may really be a magic number which should be avoided. And it's quite annoying to define a constant for stuff like this on my own. So I wonder: is there a standard method in the JRE which does this conversion instead? Or maybe an already defined constant with the highest unsigned byte value (similar to Byte.MAX_VALUE, which is the highest signed value)?
So to keep the question short: How can I convert a byte to an int without using a magic number?
Ok, so far the following possibilities were mentioned:
Keep using & 0xff and ignore the magic number 255 in checkstyle. Disadvantage: other places which may use this number in some other scope (not bit operations) are then not checked either. Advantage: short and easy to read.
Define my own constant for it and then use code like & SomeConsts.MAX_UNSIGNED_BYTE_VALUE. Disadvantage: If I need it in different classes then I have to define my own constant class just for this darn constant. Advantage: No magic numbers here.
Do some clever math like b & ((1 << Byte.SIZE) - 1). The compiler output is most likely the same, because it gets optimized to a constant value. Disadvantage: quite a lot of code, difficult to read. Advantage: as long as 1 is not defined as a magic number (checkstyle ignores it by default) we have no magic number here, and we don't need to define custom constants. And if bytes are redefined to be 16 bits some day (just kidding), it still works, because then Byte.SIZE will be 16 and not 8.
Are there more ideas? Maybe some other clever bit-wise operation which is shorter then the one above and only uses numbers like 0 and 1?
This is the standard way to do that transformation. If you want to get rid of the checkstyle complaints, try defining a constant; it could help:
public final static int MASK = 0xff;
BTW - keep in mind that it is still a custom conversion. byte is a signed datatype, so a byte can never hold the value 255. A byte can store the bit pattern 1111 1111, but this represents the integer value -1.
So in fact you're doing bit operations - and bit operations always require some magic numbers.
BTW-2: Yes, there is a Byte.MAX_VALUE constant, but this is - because byte is signed - defined as 2^7 - 1 (= 127). So it won't help in your case. You need a byte constant for -1.
Ignore checkstyle. 0xFF is not a magic number. If you define a constant for it, the constant is a magic constant, which is much less understandable than 0xFF itself. Every programmer educated in the recent centuries should be more familiar with 0xFF than with his girlfriend, if any.
should we write code like this?
for(int i = Math.ZERO; ... )
Guava to the rescue.
com.google.common.primitives.UnsignedBytes.toInt
Java 8 provides Byte.toUnsignedInt and Byte.toUnsignedLong (probably for really big bytes) methods:
byte b = (byte)255;
int c = Byte.toUnsignedInt(b); // 255
long asLong = Byte.toUnsignedLong(b); // 255
I wrote a method for this like
public static int unsigned(byte x) {
    return x & 0xFF;
}
which is overloaded for short and int parameters, too (where int gets extended to long).
Instead of 0xFF you could use Byte.MAX_VALUE + Byte.MAX_VALUE + 1 to keep FindBugs quiet, but I'd consider it an obfuscation. And it's too easy to get it wrong (see previous versions of this answer).
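A sketch of what those overloads might look like (the byte version follows the method above; the short and int variants are my reconstruction, not the answerer's exact code):

public static int unsigned(byte x) {
    return x & 0xFF;
}

public static int unsigned(short x) {
    return x & 0xFFFF;
}

public static long unsigned(int x) {
    return x & 0xFFFFFFFFL;
}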
