In Python I've trained a TensorFlow LinearClassifier and saved it like:
model = tf.contrib.learn.LinearClassifier(feature_columns=columns)
model.fit(input_fn=train_input_fn, steps=100)
model.export_savedmodel(export_dir, parsing_serving_input_fn)
By using the TensorFlow Java API I am able to load this model in Java using:
model = SavedModelBundle.load(export_dir, "serve");
It seems I should be able to run the graph using something like
model.session().runner().feed(???, ???).fetch(???, ???).run()
but what variable names/data should I feed to/fetch from the graph to provide it features and to fetch the probabilities of the classes? The Java documentation is lacking this information as far as I can see.
The names of the nodes to feed depend on what parsing_serving_input_fn does; in particular, they are the names of the Tensor objects returned by parsing_serving_input_fn. The names of the nodes to fetch depend on what you're predicting (the arguments to model.predict() when using your model from Python).
That said, the TensorFlow saved model format does include the "signature" of the model (i.e., the names of all Tensors that can be fed or fetched) as metadata that can provide hints.
From Python you can load the saved model and list out its signature using something like:
with tf.Session() as sess:
    md = tf.saved_model.loader.load(sess, ['serve'], export_dir)
    sig = md.signature_def[tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    print(sig)
Which will print something like:
inputs {
  key: "inputs"
  value {
    name: "input_example_tensor:0"
    dtype: DT_STRING
    tensor_shape {
      dim {
        size: -1
      }
    }
  }
}
outputs {
  key: "scores"
  value {
    name: "linear/binary_logistic_head/predictions/probabilities:0"
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: -1
      }
      dim {
        size: 2
      }
    }
  }
}
method_name: "tensorflow/serving/classify"
Suggesting that what you want to do in Java is:
Tensor t = /* Tensor object to be fed */;
model.session().runner().feed("input_example_tensor", t).fetch("linear/binary_logistic_head/predictions/probabilities").run();
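For a fuller picture, here is a minimal, hypothetical sketch that builds a serialized tf.Example in Java, feeds it, and reads back the two class probabilities. It assumes the org.tensorflow:proto artifact is on the classpath and that the model's feature columns expect a single float feature named "x"; the feature name and shape must match whatever parsing_serving_input_fn actually parses:

import org.tensorflow.SavedModelBundle;
import org.tensorflow.Tensor;
import org.tensorflow.example.Example;
import org.tensorflow.example.Feature;
import org.tensorflow.example.Features;
import org.tensorflow.example.FloatList;

public class LinearClassifierPredict {
    public static void main(String[] args) {
        String exportDir = args[0];
        try (SavedModelBundle model = SavedModelBundle.load(exportDir, "serve")) {
            // Build a tf.Example; "x" is a placeholder feature name.
            Example example = Example.newBuilder()
                .setFeatures(Features.newBuilder()
                    .putFeature("x", Feature.newBuilder()
                        .setFloatList(FloatList.newBuilder().addValue(1.5f))
                        .build()))
                .build();

            // A DT_STRING tensor of shape [1] holding the serialized Example.
            try (Tensor input = Tensor.create(new byte[][] {example.toByteArray()})) {
                Tensor probabilities = model.session().runner()
                    .feed("input_example_tensor", input)
                    .fetch("linear/binary_logistic_head/predictions/probabilities")
                    .run()
                    .get(0);
                float[][] probs = new float[1][2];  // the signature shape is [-1, 2]
                probabilities.copyTo(probs);
                System.out.println("P(0)=" + probs[0][0] + " P(1)=" + probs[0][1]);
            }
        }
    }
}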
You can also extract this information purely within Java if your program includes the generated Java code for TensorFlow protocol buffers (packaged in the org.tensorflow:proto artifact) using something like this:
// Same as tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
// in Python. Perhaps this should be an exported constant in TensorFlow's Java API.
final String DEFAULT_SERVING_SIGNATURE_DEF_KEY = "serving_default";
final SignatureDef sig =
MetaGraphDef.parseFrom(model.metaGraphDef())
.getSignatureDefOrThrow(DEFAULT_SERVING_SIGNATURE_DEF_KEY);
You will have to add:
import org.tensorflow.framework.MetaGraphDef;
import org.tensorflow.framework.SignatureDef;
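With those imports in place, you can also pull the tensor names out of the SignatureDef instead of hard-coding them. A sketch, continuing the fragment above (the "inputs" and "scores" keys are the ones shown in the signature printout, and the returned names carry the ":0" output-index suffix, which Session.Runner's feed() and fetch() accept):

SignatureDef sig =
    MetaGraphDef.parseFrom(model.metaGraphDef())
        .getSignatureDefOrThrow(DEFAULT_SERVING_SIGNATURE_DEF_KEY);

String inputName = sig.getInputsOrThrow("inputs").getName();    // "input_example_tensor:0"
String scoresName = sig.getOutputsOrThrow("scores").getName();  // ".../probabilities:0"

Tensor scores = model.session().runner()
    .feed(inputName, t)
    .fetch(scoresName)
    .run()
    .get(0);

As with the fragment above, MetaGraphDef.parseFrom can throw InvalidProtocolBufferException, so this needs to live in a method that declares or handles it.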
Since the Java API and the SavedModel format are somewhat new, there is much room for improvement in the documentation.
Hope that helps.
Related
I have built an Alloy model that contains all of my system logic, and I want to do a large-scale analysis. My plan is to use Java to read a data file and pass that data to Alloy to check whether it meets the constraints I defined in the Alloy model. To do that, I intend to create sig objects from the data and pass them to Alloy.
As my system model is complex, I have summarized my problem with the following code:
sig A {
  val: Int
}
sig B {
  chunk: Int
}
fact {
  A.val > 10 && A.val < 15
}
Now I want to pass the following sig declaration and run command from Java.
sig C {
  name: String
}
run {} for 4
How can I pass that code? I am following this example: https://github.com/ikuraj/alloy/blob/master/src/edu/mit/csail/sdg/alloy4whole/ExampleUsingTheAPI.java but I am not able to figure it out.
There is currently a branch pkriens/api in progress that makes this quite easy. Look at the testcases in the classic test project.
We're working on integrating this in the master branch soon (before the end of 2019).
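In the meantime, here is a rough sketch of the string-based route using the classic Alloy 4 compiler API (the same API family as the linked example; package names differ in newer Alloy releases, and the appended sig and run command are simply the text from the question, so treat this as an illustration rather than the official approach):

import edu.mit.csail.sdg.alloy4.A4Reporter;
import edu.mit.csail.sdg.alloy4compiler.ast.Command;
import edu.mit.csail.sdg.alloy4compiler.ast.Module;
import edu.mit.csail.sdg.alloy4compiler.parser.CompUtil;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Options;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Solution;
import edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod;

public class RunAlloyFromJava {
    public static void main(String[] args) throws Exception {
        // Base model (could also be read from an .als file) plus the extra
        // sig and run command assembled from your data in Java.
        String model =
            "sig A { val: Int }\n" +
            "sig B { chunk: Int }\n" +
            "fact { A.val > 10 && A.val < 15 }\n" +
            "sig C { name: String }\n" +  // String handling depends on the Alloy version
            "run {} for 4\n";

        A4Reporter rep = A4Reporter.NOP;
        Module world = CompUtil.parseEverything_fromString(rep, model);

        A4Options options = new A4Options();
        options.solver = A4Options.SatSolver.SAT4J;

        for (Command cmd : world.getAllCommands()) {
            A4Solution sol = TranslateAlloyToKodkod.execute_command(
                rep, world.getAllReachableSigs(), cmd, options);
            System.out.println(cmd + " satisfiable: " + sol.satisfiable());
        }
    }
}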
I have this code that works in Python:
import numpy
from keras.models import load_model

X = numpy.loadtxt("compiledFeatures.csv", delimiter=",")
model = load_model("kerasnaive.h5")
predictions = model.predict(X)
print(predictions)
and I am trying to write code with the same functionality in Java.
I have written the code below, but it does not work. Does anyone know what I am doing wrong, or is there a simpler way to do it?
The code goes into the catch block, and while debugging it looks like all the information obtained from the model file is null.
path = String.format("%s\\kerasnaive.h5", System.getProperty("user.dir"),
pAgents[i]);
try {
network = KerasModelImport.importKerasModelAndWeights(path, false);
}
catch (Exception e){
System.out.println("cannot build keras layers");
}
INDArray input = Nd4j.create(1);
input.add(featuresInput); //an NDarray that i got in the method
INDArray output = network[i].outputSingle(input);
It seems that the model is not built (the network is still null).
The Python code loads the model and works; in Java I get the error: "Could not determine number of outputs for layer: no output_dim or nb_filter field found. For more information, see http://deeplearning4j.org/model-import-keras."
although the same file is used in both cases.
Thanks,
Ori
You are currently importing the trained Keras model using importKerasModelAndWeights. I'm not sure how you trained your model, but Keras has two types of models: the Sequential model, and the Model class that uses the functional API. You can read more here.
If you used the Sequential model when you created the network, you need to use the importKerasSequentialModelAndWeights function instead. See the Keras Sequential model documentation.
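A minimal sketch of that, assuming the network really is a Sequential model saved as kerasnaive.h5 and that the input is a flat float array (the feature values and count below are placeholders and must match the model's input layer):

import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class KerasPredict {
    public static void main(String[] args) throws Exception {
        String path = System.getProperty("user.dir") + "/kerasnaive.h5";

        // Sequential Keras models map to DL4J's MultiLayerNetwork.
        MultiLayerNetwork network =
            KerasModelImport.importKerasSequentialModelAndWeights(path, false);

        // One row of features, shape [1, numFeatures].
        float[] features = {0.1f, 0.2f, 0.3f, 0.4f};
        INDArray input = Nd4j.create(new float[][] {features});

        INDArray output = network.output(input);
        System.out.println(output);
    }
}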
I use the Apache Thrift protocol for tablet-server and inter-language integration, and it has all worked fine for a few years.
The integration is between languages (C#/C++/PC Java/Dalvik Java), and Thrift is probably one of the simplest and safest options. So I want to pack and repack sophisticated data structures (which have changed over the years) with the Thrift library. Let's say, in Thrift terms, a kind of OfflineTransport or OfflineProtocol.
Scenario:
I want to build a backup solution, for example to process data in offline mode during an internet provider failure: serialize it, store it, and try to process it in a few ways, e.g. send the serialized data by ordinary email over a poor backup connection.
The question is: where in the Thrift philosophy is the best extension point for me?
I understand that only part of the online protocol can be backed up offline, i.e. a real-time return value is not possible; that is OK.
Look for a serializer. There are miscellaneous implementations, but they all share the same concept: use a buffer, file, or stream as the transport medium.
Writing data in C#
E.g. we plan to store the bits in a byte[] buffer, so one could write:
var trans = new TMemoryBuffer();
var prot = new TCompactProtocol( trans);
var instance = GetMeSomeDataInstanceToSerialize();
instance.Write(prot);
Now we can get a hold of the data:
var data = trans.GetBuffer();
Reading data in C#
Reading works similarly, except that you need to know from somewhere which root instance to construct:
var trans = new TMemoryBuffer( serializedBytes);
var prot = new TCompactProtocol( trans);
var instance = new MyCoolClass();
instance.Read(prot);
Additional Tweaks
One solution to the chicken-egg problem during load could be to use a union as an extra serialization container:
union GenericFileDataContainer {
  1 : MyCoolClass coolclass;
  2 : FooBar foobar;
  // more to come later
}
By always using this container as the root instance during serialization, it is easy to add more classes without breaking compatibility, and there is no need to know up front what exactly is in a file: you just read it and check which element is set in the union.
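Since the question also lists Java among the target languages, the same buffer-based idea looks roughly like this with libthrift's TSerializer/TDeserializer (a sketch; MyCoolClass stands for any struct generated by the Thrift compiler):

import org.apache.thrift.TDeserializer;
import org.apache.thrift.TSerializer;
import org.apache.thrift.protocol.TCompactProtocol;

public class OfflineRoundTrip {
    public static void main(String[] args) throws Exception {
        MyCoolClass instance = new MyCoolClass();  // fill in fields as needed

        // Write: struct -> byte[] (store it, e-mail it, queue it, ...)
        TSerializer serializer = new TSerializer(new TCompactProtocol.Factory());
        byte[] data = serializer.serialize(instance);

        // Read: byte[] -> struct
        MyCoolClass restored = new MyCoolClass();
        TDeserializer deserializer = new TDeserializer(new TCompactProtocol.Factory());
        deserializer.deserialize(restored, data);
    }
}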
There is an RPC framework named "Thrifty" that uses the standard Thrift protocol. It has the same effect as using Thrift IDL to define the service; that is, Thrifty can be compatible with code that uses Thrift IDL, which is very helpful for cross-platform work. It also has a ThriftSerializer class:
[ThriftStruct]
public class LogEntry
{
    [ThriftConstructor]
    public LogEntry([ThriftField(1)] String category, [ThriftField(2)] String message)
    {
        this.Category = category;
        this.Message = message;
    }

    [ThriftField(1)]
    public String Category { get; }

    [ThriftField(2)]
    public String Message { get; }
}
ThriftSerializer serializer = new ThriftSerializer(ThriftSerializer.SerializeProtocol.Binary);
byte[] bytes = serializer.Serialize<LogEntry>(new LogEntry("category", "message"));
LogEntry entry = serializer.Deserialize<LogEntry>(bytes);
You can try it here: https://github.com/endink/Thrifty
I'm making checkboxes using Scala. I found a nice example, but it is in Java, and I couldn't convert it to Scala.
This is Java code:
Form<StudentFormData> formData = Form.form(StudentFormData.class).fill(studentData);
Scala's play.api.data.Form class doesn't have "fill" and "form" methods like Java's play.data.Form does. How can I create a Form in Scala?
Here is a function that I use to get data from the Form and generate a Location object.
def add = DBAction { implicit rs =>
  val data = LocationForm.form.bindFromRequest.get
  Locations.create(Some(data.venueName), data.lat, data.lon)
  Redirect(routes.LocationController.all)
}
I am working on refactoring an existing application written in PowerBuilder and Java and which runs on Sybase EA Server (Jaguar). I am building a small framework to wrap around Jaguar API functions that are available in EA Server. One of the classes is to get runtime statistics from EA Server using the Monitoring class.
Without going into too much detail, Monitoring is a class in EA Server API that provides Jaguar Runtime Monitoring statistics (actual classes are in C++; EA Server provides a wrapper for these in Java, so they can be accessed through CORBA).
Below is a simplified version of my class. (I made a superclass that I inherit from for getting stats for components, connection caches, HTTP, etc.)
public class JagMonCompStats {
    ...
    public void dumpStats(String type, String entity) {
        // Example arguments: type = "Component", entity = "web_business_rules"
        String[] header = {"Active", "Pooled", "invoke"};
        // This has a lot more keys, simplified for this discussion
        short[] compKeys = {
            (short) (MONITOR_COMPONENT_ACTIVE.value),
            (short) (MONITOR_COMPONENT_POOLED.value),
            (short) (MONITOR_COMPONENT_INVOKE.value)
        };
        double[] data = null;
        ...
        /* Call to Jaguar API */
        Monitoring jm = MonitoringHelper.narrow(session.create("Jaguar/Monitoring"));
        data = jm.monitor(type, entity, compKeys);
        ...
        printStats(entity, header, data);
        ...
    }

    protected void printStats(String entityName, String[] header, double[] data) {
        /* print the header and print data in a formatted way */
    }
}
The line data = jm.monitor(...) is the call to the Jaguar API. It takes the type of the entity, the name of the entity, and the keys of the stats we want, and it returns a double array. I then print the header and data as formatted output.
The program works, but I would like to get experts' opinions on the OO design. For one, I want to be able to customize printStats to print in different formats (e.g., a full-blown report or a one-liner). Beyond that, I am also thinking of showing the stats on a web page or a PowerBuilder screen, in which case printStats may not even be relevant. How would you do this in a real OO way?
Well, it's quite simple. Don't print stats from this class. Return them. And let the caller decide how the returned stats should be displayed.
Now that you can get stats, you can create a OneLinerStatsPrinter, a DetailedStatsPrinter, an HtmlStatsFormatter, or whatever you want.
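A minimal sketch of that separation (the interface, class, and getStats names here are illustrative, not part of the EAServer API):

// Formatting is a separate concern behind an interface; the stats class
// would expose something like getStats(...) that simply returns the double[]
// it gets from jm.monitor(...).
public interface StatsFormatter {
    String format(String entity, String[] header, double[] data);
}

public class OneLinerStatsPrinter implements StatsFormatter {
    @Override
    public String format(String entity, String[] header, double[] data) {
        StringBuilder line = new StringBuilder(entity).append(':');
        for (int i = 0; i < header.length && i < data.length; i++) {
            line.append(' ').append(header[i]).append('=').append(data[i]);
        }
        return line.toString();
    }
}

A DetailedStatsPrinter, an HtmlStatsFormatter, or a web/PowerBuilder front end is then just another consumer of the same returned data.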