How to convert a C++ operator to Java

I'm trying to use the OpenCV library (Java version). I found some code written in C++ and I'm trying to rewrite it in Java. However, I can't understand one construct.
Here is the C++ code:
void scaleDownImage(cv::Mat &originalImg, cv::Mat &scaledDownImage)
{
    for (int x = 0; x < 16; x++)
    {
        for (int y = 0; y < 16; y++)
        {
            int yd = ceil((float)(y * originalImg.cols / 16));
            int xd = ceil((float)(x * originalImg.rows / 16));
            scaledDownImage.at<uchar>(x, y) = originalImg.at<uchar>(xd, yd);
        }
    }
}
I can't understand how to translate this line:
scaledDownImage.at<uchar>(x,y) = originalImg.at<uchar>(xd,yd);

Have a look at the Mat accessor functions here: http://docs.opencv.org/java/org/opencv/core/Mat.html#get(int,%20int)
So, to translate your example:
scaledDownImage.at<uchar>(r,c) = originalImg.at<uchar>(rd,cd); // c++
would be :
byte [] pixel = new byte[1]; // byte[3] for rgb
originalImg.get( rd, cd, pixel );
scaledDownImage.put( r,c, pixel );
Note that it's (row, col), not (x, y)!
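One more subtlety worth noting when translating the loop: in the C++ code, y*originalImg.cols/16 is integer division, so the cast to float and the ceil never actually round anything up. A plain-Java sketch of the index mapping (the helper name srcIndex is made up for illustration) makes this visible:

```java
// Index mapping used by scaleDownImage: for each of the 16x16 destination
// cells, pick a source row/col. Because i * srcSize / 16 is integer
// division, the (float) cast and Math.ceil are no-ops here.
public class ScaleDownIndex {
    static int srcIndex(int i, int srcSize) {
        return (int) Math.ceil((float) (i * srcSize / 16)); // same as i * srcSize / 16
    }

    public static void main(String[] args) {
        // e.g. destination row 15 of a 100-row image reads source row 93
        System.out.println(ScaleDownIndex.srcIndex(15, 100)); // 93
    }
}
```

Inside the double loop, each destination cell is then filled with originalImg.get(xd, yd, pixel) followed by scaledDownImage.put(x, y, pixel), as shown above.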

uchar = unsigned char.
This statement:
scaledDownImage.at<uchar>(x,y)
returns the unsigned char (a reference to it, actually) at position (x, y).
So in Java it would be something like:
unsigned char scaledDownImageChar = scaledDownImage.charAt(x, y);
scaledDownImageChar = originalImg.charAt(x, y);
This is not real code, just an example.

It is called a template; search for generics in Java.
Basically, the "at" method's code uses some type T
(it may or may not literally be called T)
which could be int, float, char, any class...
just something that is unspecified at that point.
In this case, you're calling the method with T being uchar.

Related

Is there something like MAKELPARAM in Java / JNA?

I would like to implement the code from this answer and simulate a click without simulating mouse movement inside a non-Java app window. I know about JNA, which, in theory, should have all WinAPI functions. The latest JNA version is 5.6.0, but I didn't find anything similar to MAKELPARAM.
POINT pt;
pt.x = 30; // This is your click coordinates
pt.y = 30;
HWND hWnd = WindowFromPoint(pt);
LPARAM lParam = MAKELPARAM(pt.x, pt.y);
PostMessage(hWnd, WM_RBUTTONDOWN, MK_RBUTTON, lParam);
PostMessage(hWnd, WM_RBUTTONUP, MK_RBUTTON, lParam);
Does anyone know if there is something similar in Java or JNA?
Please do not suggest Java Robot. I have tried it, but unfortunately the mouse cursor moves (disappears) for a few milliseconds from the starting position to the point where you need to click and back to the starting position.
public void performClick(int x, int y) {
    Point origLoc = MouseInfo.getPointerInfo().getLocation();
    robot.mouseMove(x, y);
    robot.mousePress(InputEvent.BUTTON1_MASK);
    robot.mouseRelease(InputEvent.BUTTON1_MASK);
    robot.mouseMove(origLoc.x, origLoc.y);
}
Short answer:
No, but you can easily do it yourself.
Long answer:
As you said, "JNA ... in theory, should have all WinAPI functions." What is important to recognize is that there are two components to JNA: the core functionality that allows Java to interface with native (C) code via libffi, contained in the jna artifact, and the user-contributed platform mappings (including many WinAPI mappings) in jna-platform. So JNA has the ability to map anything in WinAPI, but someone needs to contribute it to the project to share their work with others.
Now regarding MAKELPARAM, it is simply a macro. You can see the source code for it here:
#define MAKELPARAM(l, h) ((LPARAM)(DWORD)MAKELONG(l, h))
It calls the MAKELONG macro with (WORD) inputs l and h, casts the result to a DWORD, and further casts that to an LPARAM.
The MAKELONG macro is defined in Windef.h:
#define MAKELONG(a, b) ((LONG)(((WORD)(((DWORD_PTR)(a)) & 0xffff)) | ((DWORD)((WORD)(((DWORD_PTR)(b)) & 0xffff))) << 16))
JNA does have the LPARAM type mapped, present in the WinDef class. It takes a long argument to the constructor.
So you simply need to take two 16-bit values l and h, map them to the rightmost 32 bits of a long, and pass that long to the LPARAM constructor.
So the solution you seek is:
// int args are needed for unsigned 16-bit values
public static WinDef.LPARAM makeLParam(int l, int h) {
    // note the high word bitmask must include L
    return new WinDef.LPARAM((l & 0xffff) | (h & 0xffffL) << 16);
}
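As a quick sanity check of the bit packing (done here on a plain long so it runs without the JNA WinDef.LPARAM wrapper; the class and method names are just for illustration):

```java
// MAKELPARAM packs the low word (l) into bits 0-15 and the high word (h)
// into bits 16-31; the result fits in the low 32 bits of a long.
public class MakeLParamDemo {
    static long makeLParamValue(int l, int h) {
        return (l & 0xffff) | (h & 0xffffL) << 16;
    }

    public static void main(String[] args) {
        long lp = MakeLParamDemo.makeLParamValue(30, 30); // click coordinates (30, 30)
        System.out.println(Long.toHexString(lp)); // 1e001e
    }
}
```

The resulting value can then be wrapped in new WinDef.LPARAM(...) and passed to PostMessage via jna-platform's User32 mapping, mirroring the C++ snippet in the question.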

How to feed boolean placeholder by means of TensorFlowInferenceInterface.java?

I'm trying to run a TensorFlow graph trained in Keras by means of the Java TensorFlow API.
Aside from the standard input image placeholder, this graph contains a 'keras_learning_phase' placeholder that needs to be fed with a boolean value.
The thing is, there is no method in TensorFlowInferenceInterface for boolean values - you can only feed it float, double, int, or byte values.
Obviously, when I try to pass an int to this tensor by means of this code:
inferenceInterface.fillNodeInt("keras_learning_phase",
        new int[]{1}, new int[]{0});
I get
tensorflow_inference_jni.cc:207 Error during inference: Internal:
Output 0 of type int32 does not match declared output type bool for
node _recv_keras_learning_phase_0 = _Recvclient_terminated=true,
recv_device="/job:localhost/replica:0/task:0/cpu:0",
send_device="/job:localhost/replica:0/task:0/cpu:0",
send_device_incarnation=4742451733276497694,
tensor_name="keras_learning_phase", tensor_type=DT_BOOL,
_device="/job:localhost/replica:0/task:0/cpu:0"
Is there a way to circumvent it?
Maybe it is possible somehow to explicitly convert the Placeholder node in the graph to a Constant?
Or maybe it is possible to avoid creating this Placeholder in the graph in the first place?
The TensorFlowInferenceInterface class is essentially a convenience wrapper over the full TensorFlow Java API, which does support boolean values.
You could perhaps add a method to TensorFlowInferenceInterface to do what you want. Similar to fillNodeInt, you could add the following (note the caveat that booleans in TensorFlow are represented as one byte):
public void fillNodeBool(String inputName, int[] dims, boolean[] src) {
    byte[] b = new byte[src.length];
    for (int i = 0; i < src.length; ++i) {
        b[i] = src[i] ? (byte) 1 : (byte) 0;
    }
    addFeed(inputName, Tensor.create(DataType.BOOL, mkDims(dims), ByteBuffer.wrap(b)));
}
Hope that helps. If it works, I'd encourage you to contribute back to the TensorFlow codebase.
This is in addition to the answer by ash, as the TensorFlow API has changed a little. Using this worked for me:
public void feed(String inputName, boolean[] src, long... dims) {
    byte[] b = new byte[src.length];
    for (int i = 0; i < src.length; i++) {
        b[i] = src[i] ? (byte) 1 : (byte) 0;
    }
    addFeed(inputName, Tensor.create(Boolean.class, dims, ByteBuffer.wrap(b)));
}
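Whichever variant you use, the core of both answers is the same boolean-to-byte packing (TensorFlow stores DT_BOOL as one byte per element); it can be checked standalone (the class and helper name here are made up for illustration):

```java
// Pack a boolean[] into the byte[] layout TensorFlow expects for DT_BOOL:
// one byte per element, 1 for true and 0 for false.
public class BoolPack {
    static byte[] toBytes(boolean[] src) {
        byte[] b = new byte[src.length];
        for (int i = 0; i < src.length; i++) {
            b[i] = src[i] ? (byte) 1 : (byte) 0;
        }
        return b;
    }

    public static void main(String[] args) {
        byte[] b = BoolPack.toBytes(new boolean[]{true, false, true});
        System.out.println(java.util.Arrays.toString(b)); // [1, 0, 1]
    }
}
```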

Float.floatToIntBits in php or the right way pass int value inside a float

I need to pass an int value inside a float from java code to php.
The reason is that the third-party API that I have to use in between accepts only float values.
In java I have the following code, that works as expected:
int i1 = (int) (System.currentTimeMillis() / 1000L);
float f = Float.intBitsToFloat(i1);
int i2 = Float.floatToIntBits(f);
// i1 == i2
Then I pass float value from Float.intBitsToFloat() to the third-party API and it sends a string to my server with float:
"value1":1.4237714E9
In php I receive and parse many such strings and get an array:
{
"value1" => 1.4237714E9, (Number)
"value2" => 1.4537614E9 (Number)
...
}
Now I need to do the equivalent of Float.floatToIntBits() for each element in PHP, but I'm not sure how. Will these PHP numbers be 4 bytes long? Or maybe I can somehow get an integer while parsing the string? Any suggestions?
Thank you in advance!
Thank you, guys! Yes, I forgot about pack/unpack.
It's not really an answer, yet it works for my case:
function floatToIntBits($float_val)
{
    $int = unpack('i', pack('f', $float_val));
    return $int[1];
}
But not vice versa! The strange thing:
$i1 = 1423782793;
$bs = pack('i', $i1);
$f = unpack('f', $bs);
// array { 1 => 7600419110912 } while it should be 7.6004191E12 (E replaced with 109?)
// or maybe 7600419110000, which is also right, but not 7600419110912!
I can't explain this. Double-checked on my home system and on the server (PHP 5.5 and 5.4) - the same result.
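For comparison, on the Java side the int -> float -> int direction is guaranteed lossless for any bit pattern that is not a NaN (floatToIntBits collapses all NaNs to one canonical value), which is why the snippet in the question works:

```java
// Round-trip an int through float bits; for non-NaN patterns the
// Java Language Specification guarantees the original int comes back.
public class BitsRoundTrip {
    public static void main(String[] args) {
        int i1 = 1423782793;                // a Unix timestamp, as in the question
        float f = Float.intBitsToFloat(i1); // reinterpret the bits as a float
        int i2 = Float.floatToIntBits(f);   // and back
        System.out.println(i1 == i2); // true
    }
}
```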
Hi, the stuff that I found you probably will not like:
function FloatToHex1($data)
{
    return bin2hex(strrev(pack("f", $data)));
}
function HexToFloat1($data)
{
    $value = unpack("f", strrev(pack("H*", $data)));
    return $value[1];
}
//usage
echo HexToFloat1(FloatToHex1(7600419100000));
This gives the result 7600419110912.
So the 109 is NOT a substitution for E; the problem is the recalculation of numbers with a floating point. It sounds funny, but PHP's recalculation brings you the most accurate answer possible, and that answer is 7600419110912.
Read this post for more info: https://cycling74.com/forums/topic/probably-a-stupid-question-but/
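A short Java sketch confirms that explanation: 7600419100000 needs more significand bits than float's 24, and near 7.6e12 neighbouring floats are 2^19 = 524288 apart, so the value gets rounded to the nearest representable float, 7600419110912:

```java
// Demonstrate the float rounding behind the "109 instead of E" puzzle:
// 7600419100000 is not representable as a float, so the cast rounds it
// to the nearest float, which is 7600419110912.
public class FloatPrecision {
    public static void main(String[] args) {
        long exact = 7600419100000L;
        float f = (float) exact; // rounds to the nearest representable float
        System.out.printf("%.0f%n", (double) f); // 7600419110912
    }
}
```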

Converting a pointer to an array in C++ to Java (OpenCV)

I'm in the process of converting an OpenCV function from C++ to Java. Link.
I don't know much about C++ and I'm struggling to convert this part:
/// Set the ranges ( for B,G,R) )
float range[] = { 0, 256 } ; //the upper boundary is exclusive
const float* histRange = { range };
This is what I have so far:
//Set of ranges
float ranges[] = {0,256};
final float histRange = {ranges};
Edit:
Thanks for your help, I have managed to get it working. This question was in the context of OpenCV (Sorry if I didn't make it clear). Code:
//Set of ranges
float ranges[] = {0,256};
MatOfFloat histRange = new MatOfFloat(ranges);
Unless I am mistaken with my pointers today, the second line in the C++ code duplicates the pointer to range, so both point at the same pair of values. What you want in Java would be this:
float ranges[] = {0,256};
final float histRange[] = ranges;

Simple assignment coming up with wrong value

private static void convert(int x) {
    // assume we've passed in x=640.
    final int y = (x + 64 + 127) & (~127);
    // as expected, y = 768
    final int c = y;
    // c is now 320?!
}
Are there any sane explanations for why the above code would produce the values above? This method is called from JNI. The x that is passed in is originally a C++ int type that is static_cast to a jint like so: static_cast<jint>(x);
In the debugger, with the breakpoint set on the y assignment, I see x=640. Stepping one line, I see y=768. Stepping another line and c=320. Using the debugger, I can set the variable c = y and it will correctly assign it 768.
This code is single threaded and runs many times per second and the same result is always observed.
Update from comments below
This problem has now disappeared entirely after a day of debugging it. I'd blame it on cosmic rays if it didn't happen reproducibly for an entire day. Oddest thing I've seen in a very long time.
I'll leave this question open for a while in case someone has some insight on what could possibly cause this.
Step 1: compile it correctly; see the comments under your post.
If needed, try it with this code:
C# code:
private void callConvert(object sender, EventArgs e)
{
    string myString = Convert.ToString(convert123(640));
    textBox1.Text = myString;
}
private static int convert123(int x) {
    // assume we've passed in x=640.
    int y = (x + 64 + 127) & (~127);
    // as expected, y = 768
    int c = y;
    // c is now 320?!
    return (c);
}
but it's C# code.
And a tip for you: NEVER give your function a name that is already used as a standard name in the language.
convert is used in most languages
(System.Convert).
Have you set c to 320 recently? If so, it may have been stored in some memory and the compiler may have reassigned it to what it thought it was and not what it should be. I am, in part, guessing though.
It looks like a problem with the memory size of temporary variables if the program is optimized for memory usage. The debugger may not be reliable. If the temporary ~127 is stored in a byte, then you may reach the scenario you observed. It all depends on what ~127 is stored in at run time.
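For what it's worth, the arithmetic itself can be checked in isolation: (x + 64 + 127) & ~127 rounds x + 64 up to the next multiple of 128, and for x = 640 that is 768, so the observed c == 320 had to come from outside this method (a stale build, JNI marshalling, or a debugger artefact), not from the assignment:

```java
// Standalone copy of the method under suspicion; in pure Java the
// assignment c = y cannot change the value.
public class AlignCheck {
    static int convert(int x) {
        final int y = (x + 64 + 127) & (~127); // round x + 64 up to a multiple of 128
        final int c = y;
        return c;
    }

    public static void main(String[] args) {
        System.out.println(AlignCheck.convert(640)); // 768
    }
}
```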
