Rosin thresholding (unimodal thresholding) in Java

For my project I have to implement many image filters (all you can imagine) in Java (I use Java JAI). I have done all of them except the unimodal thresholding by Paul L. Rosin. I found only this document and an implementation in C++. Unfortunately, I'm terrible at C++. Can you help me, please? Thanks!

Just use the Catalano Framework. It's very easy and fast, and there are a lot of examples in its samples folder.
import Catalano.Imaging.FastBitmap;
import Catalano.Imaging.Filters.RosinThreshold;
// If you want to use parallel processing, change the import to:
// import Catalano.Imaging.Concurrent.Filters.RosinThreshold;

FastBitmap fb = new FastBitmap(bufferedImage);
fb.toGrayscale();                         // the filter expects a grayscale image
RosinThreshold rosin = new RosinThreshold();
rosin.applyInPlace(fb);                   // thresholds the image in place
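For a complete end-to-end run, here is a minimal sketch (the file names are placeholders, and FastBitmap.toBufferedImage() is assumed from the Catalano API):

import java.io.File;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import Catalano.Imaging.FastBitmap;
import Catalano.Imaging.Filters.RosinThreshold;

public class RosinDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage input = ImageIO.read(new File("input.png"));
        FastBitmap fb = new FastBitmap(input);
        fb.toGrayscale();                       // Rosin thresholding needs a grayscale image
        new RosinThreshold().applyInPlace(fb);  // binarizes the pixels in place
        ImageIO.write(fb.toBufferedImage(), "png", new File("thresholded.png"));
    }
}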

Related

How to use MLeap DenseTensor in Java

I am using MLeap to run a PySpark logistic regression model in a Java program. Once I run the pipeline I am able to get a DefaultLeapFrame object with one row: Stream(Row(1.3,12,3.6,DenseTensor([D#538613b3,List(2)),1.0), ?).
But I am not sure how to actually inspect the DenseTensor object. When I call getTensor(3) on this row I just get an Object back. I am not familiar with Scala, but that seems to be how this is meant to be interacted with. In Java, how can I get the values within this DenseTensor?
Here is roughly what I am doing. I'm guessing Object is not the right type...
DefaultLeapFrame df = leapFrameSupport.select(frame2, Arrays.asList("feat1", "feat2", "feat3", "probability", "prediction"));
Tensor<Object> tensor = df.dataset().head().getTensor(3);
Thanks
The MLeap documentation for the Java DSL is not great, but I was able to look over some unit tests (link) that pointed me to the right thing to use. In case anyone else is interested, this is what I did:
DefaultLeapFrame df = leapFrameSupport.select(frame, Arrays.asList("feat1", "feat2", "feat3", "probability", "prediction"));
TensorSupport tensorSupport = new TensorSupport();
List<Double> tensor_vals = tensorSupport.toArray(df.dataset().head().getTensor(3));
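For anyone wiring this up from scratch, here is the same thing with the imports spelled out; the package paths below are my best guess for the MLeap Java DSL and may vary between versions:

import java.util.Arrays;
import java.util.List;
import ml.combust.mleap.runtime.frame.DefaultLeapFrame;
import ml.combust.mleap.runtime.javadsl.LeapFrameSupport;
import ml.combust.mleap.runtime.javadsl.TensorSupport;

LeapFrameSupport leapFrameSupport = new LeapFrameSupport();
TensorSupport tensorSupport = new TensorSupport();

// "frame" is the DefaultLeapFrame produced by running the pipeline, as above.
DefaultLeapFrame df = leapFrameSupport.select(
        frame, Arrays.asList("feat1", "feat2", "feat3", "probability", "prediction"));
// Column index 3 is "probability" in the selection above.
List<Double> probabilities = tensorSupport.toArray(df.dataset().head().getTensor(3));
System.out.println(probabilities);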

How to create a tensorflow serving client for the 'wide and deep' model?

I've created a model based on the 'wide and deep' example (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py).
I've exported the model as follows:
m = build_estimator(model_dir)
m.fit(input_fn=lambda: input_fn(df_train, True), steps=FLAGS.train_steps)
results = m.evaluate(input_fn=lambda: input_fn(df_test, True), steps=1)
print('Model statistics:')
for key in sorted(results):
    print("%s: %s" % (key, results[key]))
print('Done training!!!')

# Export model
export_path = sys.argv[-1]
print('Exporting trained model to %s' % export_path)
m.export(
    export_path,
    input_fn=serving_input_fn,
    use_deprecated_input_fn=False,
    input_feature_key=INPUT_FEATURE_KEY)
My question is, how do I create a client to make predictions from this exported model? Also, have I exported the model correctly?
Ultimately I need to be able to do this in Java too. I suspect I can do this by creating Java classes from the proto files using gRPC.
The documentation is very sketchy, which is why I am asking here.
Many thanks!
I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model.
TL;DR
To export an estimator there are four steps:
Define features for export as a list of all features used during estimator initialization.
Create a feature config using create_feature_spec_for_parsing.
Build a serving_input_fn suitable for use in serving using input_fn_utils.build_parsing_serving_input_fn.
Export the model using export_savedmodel().
To run a client script properly you need to do the following four steps:
Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/
Create or modify corresponding BUILD file by adding a py_binary.
Build and run a model server, e.g. tensorflow_model_server.
Create, build, and run a client that sends a tf.Example to the tensorflow_model_server for inference.
For more details look at the tutorial itself.
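Since the question also asks about a Java client: below is a rough, untested sketch, assuming you have generated Java classes from the TensorFlow and TensorFlow Serving .proto files (predict.proto, model.proto, prediction_service.proto, example.proto) with protoc and the gRPC Java plugin. The model name "wide_deep", the port 9000, and the feature "age" are placeholders:

import com.google.protobuf.ByteString;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import org.tensorflow.example.Example;
import org.tensorflow.example.Feature;
import org.tensorflow.example.Features;
import org.tensorflow.example.FloatList;
import org.tensorflow.framework.DataType;
import org.tensorflow.framework.TensorProto;
import org.tensorflow.framework.TensorShapeProto;
import tensorflow.serving.Model;
import tensorflow.serving.Predict;
import tensorflow.serving.PredictionServiceGrpc;

public class WideDeepClient {
    public static void main(String[] args) {
        // Connect to the running tensorflow_model_server.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 9000)
                .usePlaintext()
                .build();
        PredictionServiceGrpc.PredictionServiceBlockingStub stub =
                PredictionServiceGrpc.newBlockingStub(channel);

        // The parsing serving_input_fn expects serialized tf.Example protos.
        Example example = Example.newBuilder()
                .setFeatures(Features.newBuilder()
                        .putFeature("age", Feature.newBuilder()
                                .setFloatList(FloatList.newBuilder().addValue(35f))
                                .build()))
                .build();
        ByteString serialized = example.toByteString();

        // Wrap the serialized example in a string tensor of shape [1].
        TensorProto examples = TensorProto.newBuilder()
                .setDtype(DataType.DT_STRING)
                .setTensorShape(TensorShapeProto.newBuilder()
                        .addDim(TensorShapeProto.Dim.newBuilder().setSize(1)))
                .addStringVal(serialized)
                .build();

        Predict.PredictRequest request = Predict.PredictRequest.newBuilder()
                .setModelSpec(Model.ModelSpec.newBuilder().setName("wide_deep"))
                .putInputs("examples", examples)   // key from the serving_input_fn
                .build();

        Predict.PredictResponse response = stub.predict(request);
        System.out.println(response.getOutputsMap());
        channel.shutdown();
    }
}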
Just spent a solid week figuring this out. First off, m.export is going to be deprecated in a couple of weeks, so instead of that block, use: m.export_savedmodel(export_path, input_fn=serving_input_fn).
This means you then have to define serving_input_fn(), which of course is supposed to have a different signature than the input_fn() defined in the wide and deep tutorial. Namely, moving forward, I guess it's recommended that input_fn()-type things return an InputFnOps object, defined here.
Here's how I figured out how to make that work:
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow.python.ops import array_ops
from tensorflow.python.framework import dtypes

def serving_input_fn():
    features, labels = input_fn()
    features["examples"] = tf.placeholder(tf.string)
    serialized_tf_example = array_ops.placeholder(dtype=dtypes.string,
                                                  shape=[None],
                                                  name='input_example_tensor')
    inputs = {'examples': serialized_tf_example}
    labels = None  # labels are not known at serving time!
    return input_fn_utils.InputFnOps(features, labels, inputs)
This is probably not 100% idiomatic, but I'm pretty sure it works. For now.

How to configure Stanford QNMinimizer to get results similar to scipy.optimize.minimize L-BFGS-B

I want to configure the QNMinimizer from the Stanford CoreNLP library to get optimization results close to those of scipy's L-BFGS-B implementation, or to find a standard L-BFGS configuration that is suitable for most problems. I set the standard parameters as follows:
The Python example I want to copy:
scipy.optimize.minimize(neuralNetworkCost, input_theta, method = 'L-BFGS-B', jac = True)
My attempt to do the same in Java:
QNMinimizer qn = new QNMinimizer(10, true);  // m = 10 previous estimates, robust options on
qn.terminateOnMaxItr(batch_iterations);
//qn.setM(10);
output = qn.minimize(neuralNetworkCost, 1e-5, input, 15000);  // tolerance 1e-5, max 15000 function evaluations
What I need is a solid, general L-BFGS configuration that is suitable for most problems.
I'm also not sure whether I need to set any of these parameters for a standard L-BFGS configuration:
useAveImprovement = ?;
useRelativeNorm = ?;
useNumericalZero = ?;
useEvalImprovement = ?;
Thanks in advance for your help; I'm new to this field.
Resources for Information:
Stanford Core NLP QNMinimizer:
http://nlp.stanford.edu/nlp/javadoc/javanlp-3.5.2/edu/stanford/nlp/optimization/QNMinimizer.html#setM-int-
https://github.com/stanfordnlp/CoreNLP/blob/master/src/edu/stanford/nlp/optimization/QNMinimizer.java
Scipy Optimize L-BFGS-B:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.fmin_l_bfgs_b.html
Thanks in advance!
What you have should be just fine. (Have you actually had any problems with it?)
Setting termination both on max iterations and max function evaluations is probably overkill, so you might omit the last argument to qn.minimize(), but it seems from the documentation that scipy does use both with a default value of 15000.
In general, using the robustOptions (with a second argument of true as you do) should give a reliable minimizer, similar to the pgtol convergence criterion of scipy. The other options are for special situations or just to experiment with how they work.
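Putting that together, a minimal sketch of a general-purpose setup (neuralNetworkCost and initialTheta are placeholders for your own DiffFunction and starting point):

import edu.stanford.nlp.optimization.DiffFunction;
import edu.stanford.nlp.optimization.QNMinimizer;

// m = 10 matches scipy's default history size; true turns on the robust options.
QNMinimizer qn = new QNMinimizer(10, true);
qn.terminateOnMaxItr(1000);  // cap iterations, as in the question
// Omitting the max-function-evaluations argument, per the note above:
double[] theta = qn.minimize(neuralNetworkCost, 1e-5, initialTheta);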

OpenCV Constants.CaptureProperty

Hi, I use OpenCV in Java and have a problem.
I open a video file and try to get properties like FPS,
and others:
CV_CAP_PROP_POS_MSEC
CV_CAP_PROP_FRAME_COUNT
So first I opened the video like this:
VideoCapture vC = new VideoCapture(url2);
and next I have a problem with the function
vC.get(int i)
In OpenCV C++ it looks like:
vC.get(CV_CAP_PROP_FPS);
Where do I find these constants in Java? I didn't find them in HighGui. All I found is another library for OpenCV that has these constants: http://siggiorn.com/wp-content/uploads/libraries/opencv-java/docs/sj/opencv/Constants.CaptureProperty.html. But where do I find them in OpenCV Java? And how do I use the vC.get() function? Maybe a working example?
There is a bug report about this issue.
Until it is fixed, I suggest that you find these constants in the C++ source code, and define them yourself.
Edit:
I was just curious myself. You find them in the file modules/highgui/include/opencv2/highgui.hpp. They are:
CAP_PROP_POS_MSEC =0,
CAP_PROP_POS_FRAMES =1,
CAP_PROP_POS_AVI_RATIO =2,
CAP_PROP_FRAME_WIDTH =3,
CAP_PROP_FRAME_HEIGHT =4,
CAP_PROP_FPS =5,
CAP_PROP_FOURCC =6,
CAP_PROP_FRAME_COUNT =7,
CAP_PROP_FORMAT =8,
CAP_PROP_MODE =9,
CAP_PROP_BRIGHTNESS =10,
CAP_PROP_CONTRAST =11,
CAP_PROP_SATURATION =12,
CAP_PROP_HUE =13,
CAP_PROP_GAIN =14,
CAP_PROP_EXPOSURE =15,
CAP_PROP_CONVERT_RGB =16,
CAP_PROP_WHITE_BALANCE_BLUE_U =17,
CAP_PROP_RECTIFICATION =18,
CAP_PROP_MONOCROME =19,
CAP_PROP_SHARPNESS =20,
CAP_PROP_AUTO_EXPOSURE =21, // DC1394: exposure control done by camera, user can adjust refernce level using this feature
CAP_PROP_GAMMA =22,
CAP_PROP_TEMPERATURE =23,
CAP_PROP_TRIGGER =24,
CAP_PROP_TRIGGER_DELAY =25,
CAP_PROP_WHITE_BALANCE_RED_V =26,
CAP_PROP_ZOOM =27,
CAP_PROP_FOCUS =28,
CAP_PROP_GUID =29,
CAP_PROP_ISO_SPEED =30,
CAP_PROP_BACKLIGHT =32,
CAP_PROP_PAN =33,
CAP_PROP_TILT =34,
CAP_PROP_ROLL =35,
CAP_PROP_IRIS =36,
CAP_PROP_SETTINGS =37
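Following that suggestion, here is a minimal sketch of defining the ones you need yourself until the bindings expose them (the class and field names are up to you):

// Values mirrored from modules/highgui/include/opencv2/highgui.hpp above.
public final class CaptureProperty {
    public static final int CV_CAP_PROP_POS_MSEC    = 0;
    public static final int CV_CAP_PROP_FPS         = 5;
    public static final int CV_CAP_PROP_FRAME_COUNT = 7;
    private CaptureProperty() {}
}

VideoCapture vC = new VideoCapture(url2);
double fps = vC.get(CaptureProperty.CV_CAP_PROP_FPS);
double frameCount = vC.get(CaptureProperty.CV_CAP_PROP_FRAME_COUNT);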
Use the org.opencv.videoio.Videoio class:
import org.opencv.videoio.Videoio;

vc.open(FD.class.getResource("1.avi").getPath());
double totalFrameNumber = vc.get(Videoio.CAP_PROP_FRAME_COUNT);
System.out.println("\n" + totalFrameNumber);
It seems the bug is solved. Now you should be able to use it as:
VideoCapture vC = new VideoCapture(...);
double nbFrames = vC.get(Videoio.CAP_PROP_FRAME_COUNT);

How to extract programmatically video frames?

I need to programmatically extract frames from an mp4 video file, so that each frame goes into a separate file. Please advise a library that produces a result similar to the following VLC command (http://www.videolan.org/vlc/):
vlc v1.mp4 --video-filter=scene --vout=dummy --start-time=1 --stop-time=5 --scene-ratio=1 --scene-prefix=img- --scene-path=./images vlc://quit
A library for any of Java / Python / Erlang / Haskell will do the job for me.
Consider using the following class by Popscan. The usage is as follows:
VideoSource vs = new VideoSource("file://c:\\test.avi"); // backslashes must be escaped in Java strings
vs.initialize();
...
int frameIndex = 12345; // any frame
BufferedImage frame = vs.getFrame(frameIndex);
I would personally look for libraries that wrap ffmpeg/libavcodec (the understands-most-things library that many encoders and players use).
I've not really tried any yet, so I can't say anything about code quality and ease of use, but the five-line pyffmpeg example suggests it's an easy option, though it may well be *nix-only.
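As one concrete Java option along those lines, JavaCV's FFmpegFrameGrabber wraps ffmpeg/libavcodec (JavaCV is my suggestion here, not the answerer's; the API below is org.bytedeco.javacv):

import java.io.File;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.Java2DFrameConverter;

public class FrameDumper {
    public static void main(String[] args) throws Exception {
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("v1.mp4");
        Java2DFrameConverter converter = new Java2DFrameConverter();
        grabber.start();
        Frame frame;
        int i = 0;
        while ((frame = grabber.grabImage()) != null) {  // grabImage() skips audio frames
            BufferedImage img = converter.convert(frame);
            ImageIO.write(img, "png", new File(String.format("img-%05d.png", i++)));
        }
        grabber.stop();
    }
}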
