In Python you can simply pass a NumPy array to predict() to get predictions from your model. What is the equivalent using Java with a SavedModelBundle?
Python
model = tf.keras.models.Sequential([
# layers go here
])
model.compile(...)
model.fit(x_train, y_train)
predictions = model.predict(x_test_maxabs) # <= This line
Java
SavedModelBundle model = SavedModelBundle.load(path, "serve");
model.predict() // ????? What does it take as input? A Tensor?
TensorFlow Python automatically converts your NumPy array to a tf.Tensor. In TensorFlow Java, you manipulate tensors directly.
Now, the SavedModelBundle does not have a predict method. You need to obtain the session and run it, using the SessionRunner and feeding it with input tensors.
For example, based on the next generation of TF Java (https://github.com/tensorflow/java), your code ends up looking like this (note that I'm making a lot of assumptions here about x_test_maxabs, since your code sample does not explain clearly where it comes from):
try (SavedModelBundle model = SavedModelBundle.load(path, "serve")) {
try (Tensor<TFloat32> input = TFloat32.tensorOf(...);
Tensor<TFloat32> output = model.session()
.runner()
.feed("input_name", input)
.fetch("output_name")
.run()
.get(0)
.expect(TFloat32.class)) {
float prediction = output.data().getFloat();
System.out.println("prediction = " + prediction);
}
}
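To build the input tensor from plain Java arrays (the rough equivalent of passing a NumPy array in Python), you can copy the data into a TFloat32. A minimal sketch, assuming the model expects a single row of float features; the values and shape are only illustrative, and exact class names vary slightly between TF Java versions:
import org.tensorflow.Tensor;
import org.tensorflow.ndarray.StdArrays;
import org.tensorflow.types.TFloat32;
// One row of features, playing the role of x_test_maxabs in the Python code.
float[][] features = new float[][] {{0.2f, 0.7f, 1.3f}};
Tensor<TFloat32> input = TFloat32.tensorOf(StdArrays.ndCopyOf(features));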
If you are not sure what the names of the input/output tensors are in your graph, you can obtain them programmatically by looking at the signature definition:
model.metaGraphDef().getSignatureDefMap().get("serving_default")
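For example, a quick way to dump the tensor names from the default serving signature (a sketch; the proto classes below ship with TF Java, though their package names may vary slightly between versions):
import java.util.Map;
import org.tensorflow.proto.framework.SignatureDef;
import org.tensorflow.proto.framework.TensorInfo;
SignatureDef signature = model.metaGraphDef().getSignatureDefMap().get("serving_default");
// Each entry maps a logical signature key to the actual graph tensor name to feed/fetch.
for (Map.Entry<String, TensorInfo> e : signature.getInputsMap().entrySet()) {
    System.out.println("input:  " + e.getKey() + " -> " + e.getValue().getName());
}
for (Map.Entry<String, TensorInfo> e : signature.getOutputsMap().entrySet()) {
    System.out.println("output: " + e.getKey() + " -> " + e.getValue().getName());
}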
You can try Deep Java Library (DJL).
DJL internally uses TensorFlow Java and provides a high-level API to make inference easy:
Criteria<Image, Classifications> criteria =
Criteria.builder()
.setTypes(Image.class, Classifications.class)
.optModelUrls("https://example.com/squeezenet.zip")
.optTranslator(ImageClassificationTranslator
.builder().addTransform(new ToTensor()).build())
.build();
try (ZooModel<Image, Classifications> model = ModelZoo.load(criteria);
Predictor<Image, Classifications> predictor = model.newPredictor()) {
Image image = ImageFactory.getInstance().fromUrl("https://myimage.jpg");
Classifications result = predictor.predict(image);
}
Check out the GitHub repo: https://github.com/awslabs/djl
There is a blog post: https://towardsdatascience.com/detecting-pneumonia-from-chest-x-ray-images-e02bcf705dd6
And the demo project can be found here: https://github.com/aws-samples/djl-demo/blob/master/pneumonia-detection/README.md
In the 0.3.1 API:
val model: SavedModelBundle = SavedModelBundle.load("path/to/model", "serve")
val inputTensor = TFloat32.tensorOf(..)
val function: ConcreteFunction = model.function(Signature.DEFAULT_KEY)
val result: Tensor = function.call(inputTensor) // you can cast it to the type you expect; the type of the returned tensor can be checked via the signature: model.function("serving_default").signature().toString()
After you get a result Tensor of any subtype, you can iterate over its values. In my example, I had a TFloat32 with shape (1, 56), so I found the max value by scanning result.get(0, idx).
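For completeness, the same scan written in Java could look roughly like this (a sketch; the (1, 56) shape and the TFloat32 type are just assumptions taken from my case above):
TFloat32 probs = (TFloat32) result;
int bestIdx = 0;
float bestValue = probs.getFloat(0, 0);
// Scan the 56 values in the single row and keep the index of the largest one.
for (int idx = 1; idx < 56; idx++) {
    float value = probs.getFloat(0, idx);
    if (value > bestValue) {
        bestValue = value;
        bestIdx = idx;
    }
}
System.out.println("best class = " + bestIdx + " (score " + bestValue + ")");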
Related
A field in the table is normalized using Java as shown below,
String name = customerRecord.getName().trim();
name = name.replaceAll("œ", "oe");
name = name.replaceAll("æ", "ae");
name = Normalizer.normalize(name, Normalizer.Form.NFKD).replaceAll("[^\\p{ASCII}]", "");
name = name.toLowerCase();
Now I'm trying to query the same DB using Python. How do I do Normalizer.normalize(name, Normalizer.Form.NFKD) in Python so that it is compatible with the way the data was written?
An almost complete translation of the above Java code to Python would be as follows:
import unicodedata
ASCII_REPLACEMENTS = {
    'œ': 'oe',
    'æ': 'ae',
}

def normalize(search_term):
    text = ''.join([ASCII_REPLACEMENTS.get(c, c) for c in search_term])
    ascii_term = (
        unicodedata.normalize('NFKD', text)
        .encode('ascii', errors='ignore')
        .decode()
    )
    return ascii_term.lower()
ASCII_REPLACEMENTS should be amended with whatever characters won't get translated correctly by unicodedata.normalize compared to Java's Normalizer.normalize(name, Normalizer.Form.NFKD). This way we can ensure compatibility between the two.
I am attempting to port some Java code utilizing Xerces v3.2.2 that loads a schema file, retrieves the XSModel* and parses it into some custom data structures.
JAVA
import org.apache.xerces.xs.XSImplementation;
import org.apache.xerces.xs.XSLoader;
import org.apache.xerces.xs.XSModel;
XSImplementation xsImpl = null;
XSLoader xsLoader = null;
XSModel xsModel = null;
xsImpl = (XSImplementation) domRegistry.getDOMImplementation("XS-Loader");
xsLoader = xsImpl.createXSLoader(null);
xsModel = xsLoader.loadURI("path-to-schema.xsd");
myDataStruct = new MyDataStruct(xsModel);
I have been unable to find anything in the Xerces-C++ documentation that would yield similar results. As far as I can tell, I can access the XSModel* from the xercesc::GrammarResolver* through the xercesc::AbstractDOMParser, but this would require me to derive from the parser, as the accessor is a protected function.
CPP
#include <xercesc/framework/psvi/XSModel.hpp>
#include <xercesc/parsers/XercesDOMParser.hpp>
#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/validators/common/Grammar.hpp>
#include <xercesc/validators/common/GrammarResolver.hpp>
using namespace xercesc;
class MyDOMParser : public XercesDOMParser
{
public:
using AbstractDOMParser::getGrammarResolver;
};
int main()
{
XMLPlatformUtils::Initialize();
MyDOMParser parser;
parser.loadGrammar("path-to-schema.xsd", Grammar::GrammarType::SchemaGrammarType);
auto resolver = parser.getGrammarResolver();
auto xsModel = resolver->getXSModel();
MyDataStruct myDataStruct{xsModel};
return 0;
}
Is this the route I must go? Will this even work? Are there examples out in the wild that show a better way of doing this?
The above solution I attempted for CPP does appear to achieve what I'm trying to accomplish. By deriving from XercesDOMParser I am able to access the GrammarResolver and therefore the XSModel. The model seems to contain the data my data structure requires for parsing.
I have retrained the Inception model for my own data set. The model is built in Python, and I now have the saved graph as a .pb file and the label file as a .txt. Now I need to predict using this model for an image through Java. Can anyone please help me?
The TensorFlow team is developing a Java interface, but it is not stable yet. You can find the existing code here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/java and follow updates on its development here https://github.com/tensorflow/tensorflow/issues/5. You can take a look at GraphTest.java, SessionTest.java and TensorTest.java to see how it is currently used (although, as explained, this may change in the future). Basically, you need to load the binary saved graph into a Graph object, create a Session with it and run it with the appropriate values (as Tensors) to receive a List<Tensor> with the output. Put together from the examples in the source:
import java.nio.file.Files;
import java.nio.file.Paths;
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
try (Graph graph = new Graph()) {
graph.importGraphDef(Files.readAllBytes(Paths.get("saved_model.pb")));
try (Session sess = new Session(graph)) {
try (Tensor x = Tensor.create(1.0f);
Tensor y = sess.runner().feed("x", x).fetch("y").run().get(0)) {
System.out.println(y.floatValue());
}
}
}
The code I used that worked reads a protobuf file, ending with .pb:
try (SavedModelBundle b = SavedModelBundle.load("/tmp/model", "serve")) {
Session sess = b.session();
...
float[][]matrix = sess.runner()
.feed("x", input)
.feed("keep_prob", keep_prob)
.fetch("y_conv")
.run()
.get(0)
.copyTo(new float[1][10]);
...
}
The python code I used to save it was:
signature = tf.saved_model.signature_def_utils.build_signature_def(
inputs = {'x': tf.saved_model.utils.build_tensor_info(x)},
outputs = {'y_conv': tf.saved_model.utils.build_tensor_info(y_conv)},
)
builder = tf.saved_model.builder.SavedModelBuilder("/tmp/model" )
builder.add_meta_graph_and_variables(sess,
[tf.saved_model.tag_constants.SERVING],
signature_def_map={tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature}
)
builder.save()
I've been reading the H2O documentation for a while, and I haven't found a clear example of how to load a model trained and saved using the Python API. I was following the example below.
import h2o
from h2o.estimators.naive_bayes import H2ONaiveBayesEstimator
model = H2ONaiveBayesEstimator()
h2o_df = h2o.import_file("http://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
model.train(y = "IsDepDelayed", x = ["Year", "Origin"],
training_frame = h2o_df,
family = "binomial",
lambda_search = True,
max_active_predictors = 10)
h2o.save_model(model, path=models)
But if you check the official documentation, it states that you have to download the model as a POJO from the Flow UI. Is that the only way, or can I achieve the same result via Python? Just for information, I show the doc's example below. I need some guidance.
import java.io.*;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.prediction.*;
public class main {
private static String modelClassName = "gbm_pojo_test";
public static void main(String[] args) throws Exception {
hex.genmodel.GenModel rawModel;
rawModel = (hex.genmodel.GenModel) Class.forName(modelClassName).newInstance();
EasyPredictModelWrapper model = new EasyPredictModelWrapper(rawModel);
//
// By default, unknown categorical levels throw PredictUnknownCategoricalLevelException.
// Optionally configure the wrapper to treat unknown categorical levels as N/A instead:
//
// EasyPredictModelWrapper model = new EasyPredictModelWrapper(
// new EasyPredictModelWrapper.Config()
// .setModel(rawModel)
// .setConvertUnknownCategoricalLevelsToNa(true));
RowData row = new RowData();
row.put("Year", "1987");
row.put("Month", "10");
row.put("DayofMonth", "14");
row.put("DayOfWeek", "3");
row.put("CRSDepTime", "730");
row.put("UniqueCarrier", "PS");
row.put("Origin", "SAN");
row.put("Dest", "SFO");
BinomialModelPrediction p = model.predictBinomial(row);
System.out.println("Label (aka prediction) is flight departure delayed: " + p.label);
System.out.print("Class probabilities: ");
for (int i = 0; i < p.classProbabilities.length; i++) {
if (i > 0) {
System.out.print(",");
}
System.out.print(p.classProbabilities[i]);
}
System.out.println("");
}
}
h2o.save_model will save the binary model to the provided file system; however, looking at the Java application above, it seems you want to use the model in a Java-based scoring application.
Because of that, you should use the h2o.download_pojo API to save the model to the local file system, along with the genmodel jar file. The API is documented as below:
download_pojo(model, path=u'', get_jar=True)
Download the POJO for this model to the directory specified by the path; if the path is "", then dump to screen.
:param model: the model whose scoring POJO should be retrieved.
:param path: an absolute path to the directory where POJO should be saved.
:param get_jar: retrieve the h2o-genmodel.jar also.
Once you have downloaded the POJO, you can use the above sample application to perform the scoring; make sure the POJO class name and the "modelClassName" are the same, along with the model type.
I have a sentiment analysis program to predict whether a given movie review is positive or negative using a recurrent neural network. I'm using the Deeplearning4j deep learning library for that program. Now I need to add that program to an Apache Spark pipeline.
When doing it, I have a class MovieReviewClassifier which extends org.apache.spark.ml.classification.ProbabilisticClassifier, and I have to add an instance of that class to the pipeline. The features that are needed to build the model are passed to the program using the setFeaturesCol(String s) method. The features I add are in String format, since they are a set of strings used for sentiment analysis. But the features should be in the form org.apache.spark.mllib.linalg.VectorUDT. Is there a way to convert the strings to VectorUDT?
I have attached my code for pipeline implementation below:
public class RNNPipeline {
final static String RESPONSE_VARIABLE = "s";
final static String INDEXED_RESPONSE_VARIABLE = "indexedClass";
final static String FEATURES = "features";
final static String PREDICTION = "prediction";
final static String PREDICTION_LABEL = "predictionLabel";
public static void main(String[] args) {
SparkConf sparkConf = new SparkConf();
sparkConf.setAppName("test-client").setMaster("local[2]");
sparkConf.set("spark.driver.allowMultipleContexts", "true");
JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(javaSparkContext);
// ======================== Import data ====================================
DataFrame dataFrame = sqlContext.read().format("com.databricks.spark.csv")
.option("inferSchema", "true")
.option("header", "true")
.load("/home/RNN3/WordVec/training.csv");
// Split in to train/test data
double [] dataSplitWeights = {0.7,0.3};
DataFrame[] data = dataFrame.randomSplit(dataSplitWeights);
// ======================== Preprocess ===========================
// Encode labels
StringIndexerModel labelIndexer = new StringIndexer().setInputCol(RESPONSE_VARIABLE)
.setOutputCol(INDEXED_RESPONSE_VARIABLE)
.fit(data[0]);
// Convert indexed labels back to original labels (decode labels).
IndexToString labelConverter = new IndexToString().setInputCol(PREDICTION)
.setOutputCol(PREDICTION_LABEL)
.setLabels(labelIndexer.labels());
// ======================== Train ========================
MovieReviewClassifier mrClassifier = new MovieReviewClassifier().setLabelCol(INDEXED_RESPONSE_VARIABLE).setFeaturesCol("Review");
// Fit the pipeline for training.
Pipeline pipeline = new Pipeline().setStages(new PipelineStage[] { labelIndexer, mrClassifier, labelConverter});
PipelineModel pipelineModel = pipeline.fit(data[0]);
}
}
Review is the feature column, which contains the strings to be predicted as positive or negative.
I get the following error when I execute the code:
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Column Review must be of type org.apache.spark.mllib.linalg.VectorUDT#f71b0bce but was actually StringType.
at scala.Predef$.require(Predef.scala:233)
at org.apache.spark.ml.util.SchemaUtils$.checkColumnType(SchemaUtils.scala:42)
at org.apache.spark.ml.PredictorParams$class.validateAndTransformSchema(Predictor.scala:50)
at org.apache.spark.ml.Predictor.validateAndTransformSchema(Predictor.scala:71)
at org.apache.spark.ml.Predictor.transformSchema(Predictor.scala:116)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:167)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:167)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:108)
at org.apache.spark.ml.Pipeline.transformSchema(Pipeline.scala:167)
at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:62)
at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:121)
at RNNPipeline.main(RNNPipeline.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
According to its documentation:
User-defined type for Vector which allows easy interaction with SQL via DataFrame.
and the fact that, in the ML library,
DataFrame supports many basic and structured types; see the Spark SQL datatype reference for a list of supported types. In addition to the types listed in the Spark SQL guide, DataFrame can use ML Vector types.
and the fact that you are asked for an org.apache.spark.sql.types.UserDefinedType<Vector>,
you can probably get away with passing either a DenseVector or a SparseVector, created from your String.
The conversion from String ("Review" ??? ) to a Vector depends on how you have organized your data.
The way to convert the String column to a Vector UDT is to use Word2Vec. I had to add a Word2Vec object to the Spark pipeline to do the conversion.
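For illustration, a minimal sketch of how the Spark ML Word2Vec stage could slot into the pipeline above (the "ReviewWords" column name and the vector size are assumptions; the raw review text also has to be tokenized into words first):
import org.apache.spark.ml.feature.Tokenizer;
import org.apache.spark.ml.feature.Word2Vec;
// Split the raw review text into words, then embed each review into a fixed-size Vector.
Tokenizer tokenizer = new Tokenizer().setInputCol("Review").setOutputCol("ReviewWords");
Word2Vec word2Vec = new Word2Vec()
        .setInputCol("ReviewWords")
        .setOutputCol(FEATURES)
        .setVectorSize(100)
        .setMinCount(0);
// The classifier now reads the Vector column instead of the raw strings.
MovieReviewClassifier mrClassifier = new MovieReviewClassifier()
        .setLabelCol(INDEXED_RESPONSE_VARIABLE)
        .setFeaturesCol(FEATURES);
Pipeline pipeline = new Pipeline().setStages(
        new PipelineStage[] { labelIndexer, tokenizer, word2Vec, mrClassifier, labelConverter });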