Converting a pointer to an array in C++ to Java (OpenCV)

I'm in the process of converting an OpenCV function from C++ to Java. Link.
I don't know much C++ and I'm struggling to convert this part:
/// Set the ranges ( for B,G,R) )
float range[] = { 0, 256 } ; //the upper boundary is exclusive
const float* histRange = { range };
This is what I have so far:
//Set of ranges
float ranges[] = {0,256};
final float histRange = {ranges};
Edit:
Thanks for your help, I have managed to get it working. This question was in the context of OpenCV (sorry if I didn't make that clear). Code:
//Set of ranges
float ranges[] = {0,256};
MatOfFloat histRange = new MatOfFloat(ranges);

Unless I am mistaken about my pointers today, the second line in the C++ code copies the pointer to range, so both pointers point at the same pair of values. What you want in Java should be this:
float ranges[] = {0,256};
final float histRange[] = ranges;
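If you are feeding this into OpenCV's histogram calculation, the Java bindings wrap the ranges in a MatOfFloat, as the edit above shows. A minimal sketch of how that is typically passed to Imgproc.calcHist (the method name computeHist and the single-channel input are illustrative; the 256 bins over [0, 256) mirror the C++ tutorial):
import java.util.Arrays;
import org.opencv.core.Mat;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfInt;
import org.opencv.imgproc.Imgproc;

// Computes a 256-bin histogram of a single-channel Mat over the range [0, 256).
static Mat computeHist(Mat gray) {
    Mat hist = new Mat();
    MatOfFloat histRange = new MatOfFloat(0f, 256f); // the upper boundary is exclusive
    MatOfInt histSize = new MatOfInt(256);
    MatOfInt channels = new MatOfInt(0);
    Imgproc.calcHist(Arrays.asList(gray), channels, new Mat(), hist, histSize, histRange);
    return hist;
}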


Related

Need help converting DCGAN to Java for Tensorflow for Java

I am trying to get DCGAN (Deep Convolutional Generative Adversarial Networks) to work with TensorFlow for Java.
I have added the necessary code to DCGAN's model.py, as below, to output a graph to be used later in TensorFlow for Java.
# At the beginning, to define where the model will be saved:
self.load_dir = load_dir
self.models_dir = models_dir
graph = tf.Graph()
self.graph = graph
self.graph.as_default()
# Near the end, where the session is run in order to build and save the model to be used
# in TensorFlow for Java. A model is saved every 200 samples, as defined by DCGAN's default settings:
steps = "training_steps-" + "{:08d}".format(step)
set_models_dir = os.path.join(self.models_dir, steps)
builder = tf.saved_model.builder.SavedModelBuilder(set_models_dir)
self.builder = builder
self.builder.add_meta_graph_and_variables(self.sess, [tf.saved_model.tag_constants.SERVING])
self.builder.save()
The above code outputs a graph that is loaded by the following Java code:
package Main;

import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Random;
import javax.imageio.ImageIO;
import org.tensorflow.Tensor;

public class DCGAN {
    public static void main(String[] args) throws Exception {
        String model_dir = "E:\\AgentWeb\\mnist-steps\\training_steps-00050000";
        //SavedModelBundle model = SavedModelBundle.load(model_dir, "serve");
        //Session sess = model.session();
        Random rand = new Random();
        int sample_num = 64;
        int z_dim = 100;
        float[][] gen_random = new float[64][100];
        for (int i = 0; i < sample_num; i++) {
            for (int j = 0; j < z_dim; j++) {
                gen_random[i][j] = (float) rand.nextGaussian();
            }
        }
        Tensor<Float> sample_z = Tensor.<Float>create(gen_random, Float.class);
        Tensor<Float> sample_inputs = Tensor.<Float>create(placeholder, Float.class);
        // placeholder is the tensor which I want to create after solving the problem below.
        //Tensor result = sess.runner().fetch("t_vars").feed("z", sample_z).feed("inputs", sample_inputs).run().get(3);
    }
}
(I have left some comments in, as I used them for debugging.)
With this method I am stuck at a certain portion of translating the Python code to Java for use in TensorFlow for Java. In DCGAN's model.py, where the images are processed, there is the following code:
[get_image(sample_file,
           input_height=self.input_height,
           input_width=self.input_width,
           resize_height=self.output_height,
           resize_width=self.output_width,
           crop=self.crop,
           grayscale=self.grayscale) for sample_file in sample_files]
which calls get_image in saved_utils.py as follows:
def get_image(image_path, input_height, input_width,
              resize_height=64, resize_width=64,
              crop=True, grayscale=False):
    image = imread(image_path, grayscale)
    return transform(image, input_height, input_width,
                     resize_height, resize_width, crop)
which then calls a method called imread as follows
def imread(path, grayscale=False):
    if (grayscale):
        return scipy.misc.imread(path, flatten=True).astype(np.float)
    else:
        # Reference: https://github.com/carpedm20/DCGAN-tensorflow/issues/162#issuecomment-315519747
        img_bgr = cv2.imread(path)
        # Reference: https://stackoverflow.com/a/15074748/
        img_rgb = img_bgr[..., ::-1]
        return img_rgb.astype(np.float)
My question is that I am unsure what the img_rgb = img_bgr[..., ::-1] part does and how to translate it for use in my Java file with TensorFlow for Java.
I am familiar with the way Python slices arrays, but I am unfamiliar with the three dots used there.
I did read the Stack Overflow question referenced there, and it mentions that it is similar to img[:, :, ::-1], but I am not really sure what exactly it is doing.
Any help is appreciated, and thank you for taking the time to read this long post.
What imread and get_image basically do is:
1) read an image
2) convert it from BGR to RGB
3) convert it to floats
4) rescale the image
You can do this in Java either by using an imaging library, such as JMagick or AWT, or by using TensorFlow.
If you use TensorFlow, it is possible to run this preprocessing in eager mode or by building and running a small graph. For example, given tf, an instance of org.tensorflow.op.Ops:
tf.image.decode* can read the content of an image (you need to know the type of your image, though, to pick the right operation).
tf.reverse can reverse the values in your channel dimension (BGR to RGB; see the sketch below for what this reversal does).
tf.dtypes.cast can convert the image to floats.
tf.image.resizeBilinear can rescale your image.
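For intuition about what img_bgr[..., ::-1] does, here is a minimal plain-Java sketch (no library; it assumes the image has already been decoded into a float[height][width][3] array, which is not shown) that reverses the last axis, i.e. swaps the B and R channels:
// Reverses the channel axis of an H x W x 3 image, mirroring NumPy's img_bgr[..., ::-1].
// Assumes the image is already a float[h][w][3] array (an illustrative setup, not from the question).
static float[][][] bgrToRgb(float[][][] bgr) {
    int h = bgr.length, w = bgr[0].length, c = bgr[0][0].length;
    float[][][] rgb = new float[h][w][c];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            for (int k = 0; k < c; k++) {
                rgb[y][x][k] = bgr[y][x][c - 1 - k]; // last channel becomes first
            }
        }
    }
    return rgb;
}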

Float.floatToIntBits in PHP, or the right way to pass an int value inside a float

I need to pass an int value inside a float from Java code to PHP.
The reason is that the third-party API that I have to use in between accepts only float values.
In Java I have the following code, which works as expected:
int i1 = (int) (System.currentTimeMillis() / 1000L);
float f = Float.intBitsToFloat(i1);
int i2 = Float.floatToIntBits(f);
//i1 == i2
Then I pass the float value from Float.intBitsToFloat() to the third-party API, and it sends a string to my server with the float:
"value1":1.4237714E9
In PHP I receive and parse many such strings and get an array:
{
"value1" => 1.4237714E9, (Number)
"value2" => 1.4537614E9 (Number)
...
}
Now I need to do the equivalent of Float.floatToIntBits() for each element in PHP, but I'm not sure how. Will these PHP numbers be 4 bytes long? Or maybe I can somehow get an integer while parsing the string? Any suggestions?
Thank you in advance!
Thank you, guys! Yes, I forgot about pack/unpack.
It's not really an answer, yet it works for my case:
function floatToIntBits($float_val)
{
$int = unpack('i', pack('f', $float_val));
return $int[1];
}
But not vice versa! The strange thing:
$i1 = 1423782793;
$bs = pack('i', $i1);
$f = unpack('f', $bs);
// array { 1 => 7600419110912 } while it should be 7.6004191E12 (E replaced with 109?)
// or maybe 7600419110000, which would also be right, but not 7600419110912!
I can't explain this. Double checked on home system and on server (5.5 and 5.4 php) - the same result.
Hi, the stuff that I found you probably will not like:
function FloatToHex1($data)
{
return bin2hex(strrev(pack("f",$data)));
}
function HexToFloat1($data)
{
$value=unpack("f",strrev(pack("H*",$data)));
return $value[1];
}
//usage
echo HexToFloat1(FloatToHex1(7600419100000));
This gives a result like 7600419110912.
So the 109 is NOT a substitution for E; the problem is the precision of floating-point numbers. It sounds funny, but PHP's recalculation brings you the most accurate answer possible, and that answer is 7600419110912.
So read this post for more info https://cycling74.com/forums/topic/probably-a-stupid-question-but/
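A quick way to confirm this from the Java side is to round-trip the value through a 32-bit float: the nearest representable float to 7600419100000 is exactly 7600419110912, which is the value PHP reports. A minimal sketch:
public class FloatPrecisionDemo {
    public static void main(String[] args) {
        long original = 7600419100000L;
        float asFloat = original;              // long -> float rounds to the nearest representable float
        System.out.println((double) asFloat);  // prints 7.600419110912E12
    }
}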

How to convert a C++ operator to Java

I'm trying to use the OpenCV library (Java version). I found some code written in C++ and I'm trying to rewrite it in Java. However, I can't understand one construct.
Here is the C++ code:
void scaleDownImage(cv::Mat &originalImg, cv::Mat &scaledDownImage)
{
    for (int x = 0; x < 16; x++)
    {
        for (int y = 0; y < 16; y++)
        {
            int yd = ceil((float)(y * originalImg.cols / 16));
            int xd = ceil((float)(x * originalImg.rows / 16));
            scaledDownImage.at<uchar>(x, y) = originalImg.at<uchar>(xd, yd);
        }
    }
}
I can't understand how to translate this line:
scaledDownImage.at<uchar>(x,y) = originalImg.at<uchar>(xd,yd);
Have a look at the Mat accessor functions here: http://docs.opencv.org/java/org/opencv/core/Mat.html#get(int,%20int)
So, to translate your example:
scaledDownImage.at<uchar>(r,c) = originalImg.at<uchar>(rd,cd); // c++
would be:
byte[] pixel = new byte[1]; // byte[3] for rgb
originalImg.get(rd, cd, pixel);
scaledDownImage.put(r, c, pixel);
Note that it's (row, col), not (x, y)!
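Putting that together, a sketch of the whole function translated to Java (assuming both Mats are single-channel 8-bit, i.e. CV_8UC1, and that scaledDownImage is already allocated as at least 16x16):
import org.opencv.core.Mat;

static void scaleDownImage(Mat originalImg, Mat scaledDownImage) {
    byte[] pixel = new byte[1]; // one channel per pixel
    for (int x = 0; x < 16; x++) {
        for (int y = 0; y < 16; y++) {
            int yd = (int) Math.ceil((float) (y * originalImg.cols() / 16));
            int xd = (int) Math.ceil((float) (x * originalImg.rows() / 16));
            originalImg.get(xd, yd, pixel);     // read the source pixel (row, col)
            scaledDownImage.put(x, y, pixel);   // write the destination pixel
        }
    }
}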
uchar = unsigned char.
This statement:
scaledDownImage.at<uchar>(x,y)
returns the unsigned char (a reference to it, I guess) at position (x, y).
So, in Java it would be something like:
unsigned char scaledDownImageChar = scaledDownImage.charAt(x, y);
scaledDownImageChar = originalImg.charAt(x, y);
This is not real code, just an example.
It is called a template; search for generics in Java.
Basically, the "at" method's code uses some type T
(I don't know if it is called T or something else)
which could be int, float, char, any class...
just something which is unspecified at the moment.
And in this case, you're calling the method with T being a uchar.

How to transform coordinate from WGS84 to a coordinate in a projection with PROJ.4?

I have a GPS-coordinate in WGS84 that I would like to transform to a map-projection coordinate in SWEREF99 TM using PROJ.4 in Java or Proj4js in JavaScript.
It's hard to find documentation for PROJ.4 and how to use it. If you have a good link, please post it as a comment.
The PROJ.4 parameters for SWEREF99 TM are +proj=utm +zone=33 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs
I have tried to use a PROJ.4 Java library to transform Lat: 55° 00’ N, Long: 12° 45’ E with this code:
String[] proj4_w = new String[] {
"+proj=utm",
"+zone=33",
"+ellps=GRS80",
"+towgs84=0,0,0,0,0,0,0",
"+units=m",
"+no_defs"
};
Projection proj = ProjectionFactory.fromPROJ4Specification(proj4_w);
Point2D.Double testLatLng = new Point2D.Double(55.0000, 12.7500);
Point2D.Double testProjec = proj.transform(testLatLng, new Point2D.Double());
This gives me the point Point2D.Double[5197915.86288144, 1822635.9083898761], but it should be N: 6097106.672, E: 356083.438.
What am I doing wrong, and what method and parameters should I use instead?
The correct values are taken from Lantmäteriet.
I am not sure if proj.transform(testLatLng, new Point2D.Double()) is the right method to use.
Is 55 the latitude or the longitude?
EDIT: it seems you should simply swap the lat and long parameters.
EDIT2: i.e.
Point2D.Double testLatLng = new Point2D.Double(12.7500, 55.0000);
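In context, a sketch of the corrected call, reusing the Projection and ProjectionFactory setup from the question (the library treats the input as (x, y), i.e. (longitude, latitude)):
Projection proj = ProjectionFactory.fromPROJ4Specification(proj4_w);
// longitude first, then latitude: (x, y)
Point2D.Double lonLat = new Point2D.Double(12.7500, 55.0000);
Point2D.Double sweref = proj.transform(lonLat, new Point2D.Double());
// sweref.x should be the easting (E) and sweref.y the northing (N) in SWEREF99 TM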

Digital Filter, Math in Java

I need your help, and thank you for reading my question!
I am currently writing a Java program that will use a Direct Form II Transposed filter. I know that the function filter in MATLAB will do that just fine, but I have to use Java.
So does anyone know how to implement this Direct Form II Transposed, that is, this math function:
y(n) = b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb)
- a(2)*y(n-1) - ... - a(na+1)*y(n-na)
in any programming language? All it takes is hopefully a pointer in the right direction so I can figure it out! Maybe there is a C lib that implements some of the MATLAB functions, just anything.
So thank you for your time,
yours Elektro
Follow up:
I tried for a couple of days to understand your function, but I couldn't.
This is the function from MATLAB: filter
http://www.mathworks.com/access/helpdesk/help/techdoc/index.html?/access/helpdesk/help/techdoc/ref/filter.html&http://www.google.de/search?hl=de&q=filter+matlab&btnG=Google-Suche&meta=&aq=f&oq=
All I know is that I use the function in MATLAB like this:
newArray = filter(1,LPC_Faktor,OldArray)
All I have to do is implement the filter function.
So could you help again?
Thanks
Elektro
Whatever language you use, the direct form II transposed structure is quite simple.
For example, in C, it could be something like:
float myFilter(float u)
{
    static float x[nb] = {0,0,0,...,0};  // initialize x
    static float y[na] = {0,0,0,...,0};  // initialize y
    static float b1 = ....;              // put b(1) here
    static float b[nb] = {...,...,...,...,...}; // put b(2) to b(nb+1) here
    static float a[na] = {...,...,...,...,...}; // put a(2) to a(na+1) values here
    // initialization
    float sum = 0;
    int i = 0;
    // compute the value
    for (i = 0; i < nb; i++)
        sum += b[i] * x[i];
    for (i = 0; i < na; i++)
        sum -= a[i] * y[i];
    sum += b1 * u;
    // prepare the values for the next time (shift from the end so nothing is overwritten)
    for (i = nb - 1; i > 0; i--)
        x[i] = x[i - 1];
    x[0] = u;
    for (i = na - 1; i > 0; i--)
        y[i] = y[i - 1];
    y[0] = sum;
    // return the value
    return sum;
}
I did not test the code, but it is something like that.
The Direct Form II transposed is the simplest form in which to implement such a filter (numerically, and especially in fixed point, it is not the best, but it is the form that requires the fewest operations).
Of course, it is possible to have a better implementation (with a circular array, for example). If needed, I can provide it, too.
EDIT: I answered too quickly. The algorithm you provided,
y(n) = b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb) - a(2)*y(n-1) - ... - a(na+1)*y(n-na)
is not the Direct Form II but the Direct Form I. It requires storing na+nb values (na and nb depend on the order of your filter), whereas the Direct Form II requires only max(na,nb).
The algorithm used for the Direct Form II is
e(n) = u(n) - a(2)*e(n-1) - a(3)*e(n-2) - ... - a(na+1)*e(n-na)
y(n) = b(1)*e(n) + b(2)*e(n-1) + ... + b(nb+1)*e(n-nb)
Tell me if you need this form or not.
After long searching I found the answer.
Thank you, you showed the right way:
void filter(int ord, float *a, float *b, int np, float *x, float *y)
{
    /* assumes the coefficients are normalized so that a[0] == 1 */
    int i, j;
    y[0] = b[0] * x[0];
    for (i = 1; i < ord + 1; i++)
    {
        y[i] = 0.0;
        for (j = 0; j < i + 1; j++)
            y[i] = y[i] + b[j] * x[i - j];
        for (j = 0; j < i; j++)
            y[i] = y[i] - a[j + 1] * y[i - j - 1];
    }
    /* end of initial part */
    for (i = ord + 1; i < np + 1; i++)
    {
        y[i] = 0.0;
        for (j = 0; j < ord + 1; j++)
            y[i] = y[i] + b[j] * x[i - j];
        for (j = 0; j < ord; j++)
            y[i] = y[i] - a[j + 1] * y[i - j - 1];
    }
} /* end of filter */
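Since the question asks for Java, here is a minimal sketch of the same MATLAB-style filter(b, a, x) difference equation translated to Java (it assumes the coefficients are normalized so that a[0] == 1, as MATLAB's filter expects after dividing through by a(1)):
// Minimal sketch of MATLAB-style filter(b, a, x) in Java (Direct Form I, computed sample by sample).
// Implements y[n] = b[0]*x[n] + ... + b[nb]*x[n-nb] - a[1]*y[n-1] - ... - a[na]*y[n-na],
// assuming a[0] == 1 (normalized coefficients).
static float[] filter(float[] b, float[] a, float[] x) {
    float[] y = new float[x.length];
    for (int n = 0; n < x.length; n++) {
        float sum = 0f;
        for (int j = 0; j < b.length && j <= n; j++) {  // feed-forward (numerator) part
            sum += b[j] * x[n - j];
        }
        for (int j = 1; j < a.length && j <= n; j++) {  // feedback (denominator) part
            sum -= a[j] * y[n - j];
        }
        y[n] = sum;
    }
    return y;
}
For the call in the question, newArray = filter(1, LPC_Faktor, OldArray) would then become something like float[] newArray = filter(new float[]{1f}, lpcFaktor, oldArray); (the Java variable names here are just illustrative).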
