Load pre-trained models in TensorFlow for Java

I'm trying to load pre-trained models in TensorFlow using the Java API.
I notice that over time the format of the saved model files has changed, and now there are saved models with the file formats .pb and .ckpt, as well as model directories containing model.ckpt.data-00000-of-00001 and model.ckpt.index.
I am following the way a model is read in the LabelImage example, but in that example the file format is protobuf (.pb). I see that the latest saved models are stored in the .ckpt format, or as model.ckpt.data-00000-of-00001 and model.ckpt.index files.
I tried to use the SavedModelBundle method with an export_dir containing the files model.ckpt.data-00000-of-00001 and model.ckpt.index, but I get this error:
2018-07-18 16:54:00.388790: I tensorflow/cc/saved_model/loader.cc:291] SavedModel load for tags { }; Status: fail. Took 95 microseconds.
Exception in thread "main" org.tensorflow.TensorFlowException: SavedModel not found in export directory: /path/to/model_dir
at org.tensorflow.SavedModelBundle.load(Native Method)
at org.tensorflow.SavedModelBundle.load(SavedModelBundle.java:39)
Could someone please tell me what I'm doing wrong, or let me know how I can read saved models in formats other than .pb in Java?

I think there are two ways you can try to solve your problem:
Convert the saved model (the checkpoint files) to a protobuf file.
After restoring the saved model into the current session sess:
# Freeze the graph; output_node_names is the list of output node names
# chosen when the model was constructed, e.g. output_node_names = ["prediction"]
frozen_graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, output_node_names)
# Save the frozen graph
with open(frozen_graph_file, "wb") as f:
    f.write(frozen_graph_def.SerializeToString())
This should convert the checkpoint format into a frozen .pb graph.
Alternatively, retrain and save the model directly in the .pb format.

Related

Is there a Java code to convert csv files into pbix?

We need Java code which automatically converts csv files into pbix files, so they can be opened and worked on further in PowerBI Desktop. Now, I know PowerBI offers this super cool feature which converts csv files and many other formats into pbix manually. However, we need a function which automatically converts our reports directly into pbix, so that no intermediate files need to be created and stored somewhere.
We have already been able to develop a function with three parameters: the first corresponds to the report selected from our database; the second corresponds to the directory in which the converted report should be generated; and the third is the converted output file itself. The first two parameters work well, and the code is able to generate a copy of any report we select into any directory we select. However, it is able to generate csv files only; any other format ends up the same size as the csv and won't open.
This is what we've tried so far for the conversion part of the code:
Util.writeFile("C:\\" + "test.csv", byteString);
The above piece of code works just fine; however, csv is not what we want, since the original reports are already in csv format anyway.
Util.writeFile("C:\\" + "test.pbix", byteString);
Util.writeFile("C:\\" + "test.pdf", byteString);
Util.writeFile("C:\\" + "test.xlsx", byteString);
Each of the three lines above generates one file in the indicated format; however, each generated file is only as large as its corresponding csv (it should be much larger) and therefore cannot be opened.
File file = new File("C:\\" + "test1.csv");
File file2 = new File("C:\\" + "test1.pbix");
file.renameTo(file2);
The above piece of code does not generate any file at all, but I thought it was worth mentioning, as it doesn't throw any exception either.
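One thing worth checking in the rename attempt above: File.renameTo reports failure only through its boolean return value, so a failed rename is easy to miss. A minimal sketch of checking that value (using temporary files instead of the C:\ paths above), with java.nio.file.Files.move as the alternative that throws on failure:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RenameCheck {
    public static void main(String[] args) throws IOException {
        // Create a throwaway csv file to rename
        Path csv = Files.createTempFile("test1", ".csv");
        File source = csv.toFile();
        File target = new File(source.getParent(), "test1.pbix");

        // renameTo never throws on failure; it just returns false
        boolean renamed = source.renameTo(target);
        System.out.println("renamed: " + renamed);

        // Files.move throws an IOException instead, which makes failures visible
        if (renamed) {
            Files.move(target.toPath(), csv, StandardCopyOption.REPLACE_EXISTING);
        }
        Files.deleteIfExists(csv);
        Files.deleteIfExists(target.toPath());
    }
}
```

Note that renaming only changes the file name; it cannot turn csv content into a valid pbix file, which (as far as I know) is a zip-based container that PowerBI itself has to produce.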
P.S. We would also be interested in Java code which converts csv for any other BI reporting software besides PowerBI, like Tableau, BIRT, Knowage, etc.
P.S.2 The first piece of code uses a class (sailpoint.tools.Util) which is apparently only available to those who have access to SailPoint.

Serving the inception model v3 in Java using SavedModelBundle

Using org.tensorflow:tensorflow:1.3.0-rc0.
I have generated the inception model from the checkpoints as per the tutorial https://tensorflow.github.io/serving/serving_inception:
inception_saved_model --checkpoint_dir=/root/xmod/inception-v3
This went OK and generated a saved_model.pb file and a variables/ subdirectory with data, and I moved all this content to the /tmp/inception-model directory.
Now I'm trying to use this model by essentially converting https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/src/main/java/org/tensorflow/examples/LabelImage.java
I am loading the model like this with no errors:
SavedModelBundle modelBundle = SavedModelBundle.load("/tmp/inception-model", "serve");
Now I am trying to formulate the query (similar to this https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/src/main/java/org/tensorflow/examples/LabelImage.java#L112) but I'm stuck trying to figure out how to use the feed and fetch methods:
private static float[] executeInceptionGraph(SavedModelBundle modelBundle, Tensor image) throws Exception {
Tensor result = modelBundle.session().runner().feed(???).fetch(???).run().get(0);
Any help how to write this query is much appreciated.
You need to feed your input (here your tensor image) together with the name of its node in the graph. From the link you posted, it seems the tutorial uses "images" (see https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/inception_client.py#L49, the Python client that queries the server built in the tutorial https://tensorflow.github.io/serving/serving_inception).
Then you fetch your output node by its name too. Looking at a sample of the server response at https://tensorflow.github.io/serving/serving_inception, you can fetch "classes" or "scores", depending on which one you'd like to have.
So one of the two commands below should work:
Tensor result = modelBundle.session().runner().feed("images", image).fetch("classes").run().get(0);
OR
Tensor result = modelBundle.session().runner().feed("images", image).fetch("scores").run().get(0);
I've found that it works with frozen models only. The argument to the fetch method is the one used as the output_node_names argument of freeze_graph. See https://github.com/tensorflow/models/blob/master/slim/export_inference_graph.py#L32

Convert one json format to another in java

I am looking for a utility which converts one JSON format to another, respecting conversion definitions given in (preferably) an XML file. Is there any library that does something like this in Java?
For example source json is:
{"name":"aa","surname":"bb","accounts":[{"accountid":10,"balance":100}]}
target json is :
{"owner":"aa-bb","accounts":[{"accountid":10,"balance":100}]}
sample config xml:
t.owner = s.name.concat("-").concat(s.surname)
t.accounts = s.accounts
PS: Please don't post solutions for this example; it is just to give an idea. There will be quite different mapping scenarios.
Is this what you need?
Open the input file.
Read / parse the JSON from the file using a JSON library.
Convert the in-memory data structure to the new structure.
Open the output file.
Unparse the in-memory data structure to the file using the JSON library.
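The conversion step is where the mapping rules from the question would be applied. A minimal sketch of that step alone, using plain Maps and Lists to stand in for whatever tree your JSON library produces (field names taken from the sample JSON in the question):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class JsonTransform {
    // Applies the mapping t.owner = s.name + "-" + s.surname, t.accounts = s.accounts
    static Map<String, Object> transform(Map<String, Object> source) {
        Map<String, Object> target = new LinkedHashMap<>();
        target.put("owner", source.get("name") + "-" + source.get("surname"));
        target.put("accounts", source.get("accounts"));
        return target;
    }

    public static void main(String[] args) {
        // Build the source structure from the question's sample JSON
        Map<String, Object> source = new LinkedHashMap<>();
        source.put("name", "aa");
        source.put("surname", "bb");
        List<Map<String, Object>> accounts = new ArrayList<>();
        Map<String, Object> account = new LinkedHashMap<>();
        account.put("accountid", 10);
        account.put("balance", 100);
        accounts.add(account);
        source.put("accounts", accounts);

        System.out.println(transform(source));
        // prints {owner=aa-bb, accounts=[{accountid=10, balance=100}]}
    }
}
```

To drive this from an XML config file, each rule would have to be parsed into such a transformation. Jolt (a JSON-to-JSON transformation library) is also worth a look, though its mapping specs are written in JSON rather than XML.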

Libgdx read internal text file for android

I am trying to load some data from an internal .txt file. After some effort with FileHandle, the only thing I've accomplished is to put this .txt file into a String variable. Instead of this String I need the integers that are stored inside:
FileHandle handle = Gdx.files.internal("txt/questions.txt");
String lines = handle.readString();
Part of the txt file:
0
#a)1! b)0,350,190,185! c)180,1247,180,153! d)710,970,124,101! e)615,1105,175,120! //sheep
#a)2! b)208,344,248,191! c)403,957,142,127! d)655,1250,142,130! e)0,1075,263,150! // elafi
#a)3! b)460,344,164,200! c)10,1232,165,155! d)245,915,150,133! e)268,1083,235,145! //elephant
#a)4! b)624,344,234,190! c)835,260,150,55! d)500,1228,155,172! e)800,1117,185,108! //horse
#a)5! b)858,330,167,203! c)10,890,220,174! d)822,1235,178,145! e)575,943,128,141! //rabbit
You need to "parse" your text file. You could write a simple parser for your file format (there is nothing special in Libgdx to support parsing text files, so any standard Java approach, e.g. Java - Parsing Text File or http://pages.cs.wisc.edu/~hasti/cs302/examples/Parsing/parseString.html, might help).
Alternatively, it might be simpler to put your data in a format that is easy for existing code to parse. That generally means JSON; since JSON is not Libgdx-specific, there are lots of tools and tutorials explaining it. (This approach makes more sense if your file is generated by a tool and isn't maintained by a human directly.)
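A hand-rolled parser for the line format shown in the question might look like the sketch below. It assumes (from the sample) that each record line starts with '#', that fields are separated by '!', that each field is a letter plus ')' followed by comma-separated integers, and that '//' starts a comment:

```java
import java.util.ArrayList;
import java.util.List;

public class QuestionParser {
    // Extracts all integers from one record line of the file format shown above
    static List<Integer> parseLine(String line) {
        List<Integer> numbers = new ArrayList<>();
        // Drop the trailing "//..." comment, if any
        int comment = line.indexOf("//");
        if (comment >= 0) {
            line = line.substring(0, comment);
        }
        // Fields look like "a)1" or "b)0,350,190,185", separated by '!'
        for (String field : line.split("!")) {
            int paren = field.indexOf(')');
            if (paren < 0) continue;               // not a value field
            String values = field.substring(paren + 1).trim();
            if (values.isEmpty()) continue;
            for (String v : values.split(",")) {
                numbers.add(Integer.parseInt(v.trim()));
            }
        }
        return numbers;
    }

    public static void main(String[] args) {
        String line = "#a)1! b)0,350,190,185! c)180,1247,180,153! d)710,970,124,101! e)615,1105,175,120! //sheep";
        System.out.println(parseLine(line));
    }
}
```

Gdx.files.internal("txt/questions.txt").readString() would supply the String; splitting it on '\n' and feeding each '#' line to parseLine yields the integers.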

how to get the data from xml feeds

I have the following feed from my vendor:
http://scores.cricandcric.com/cricket/getFeed?key=4333433434343&format=xml&tagsformat=long&type=schedule
I want to get the data from that XML feed as Java objects, so that I can insert it into my database regularly.
The data is just regular updates from the vendor, which I then publish on my website.
Can you please suggest what options are available to get this working? Should I use a web service, or just XStream, to get my final output? Please advise, as I am a newcomer to this concept.
The vendor has told me he can give me the data in three formats: RSS, XML or JSON. I am not sure which is easiest and cheapest to consume.
I would suggest just write a program that parses the XML and inserts the data directly into your database.
Example
This Groovy script inserts data into an H2 database.
//
// Dependencies
// ============
@Grapes([
    @Grab(group='com.h2database', module='h2', version='1.3.163'),
    @GrabConfig(systemClassLoader=true)
])
import groovy.sql.Sql
//
// Main program
// ============
def sql = Sql.newInstance("jdbc:h2:db/cricket", "user", "pass", "org.h2.Driver")
def dataUrl = new URL("http://scores.cricandcric.com/cricket/getFeed?key=4333433434343&format=xml&tagsformat=long&type=schedule")
dataUrl.withReader { reader ->
    def feeds = new XmlSlurper().parse(reader)
    feeds.matches.match.each {
        def data = [
            it.id,
            it.name,
            it.type,
            it.tournamentId,
            it.location,
            it.date,
            it.GMTTime,
            it.localTime,
            it.description,
            it.team1,
            it.team2,
            it.teamId1,
            it.teamId2,
            it.tournamentName,
            it.logo
        ].collect { it.text() }
        sql.execute("INSERT INTO matches (id,name,type,tournamentId,location,date,GMTTime,localTime,description,team1,team2,teamId1,teamId2,tournamentName,logo) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)", data)
    }
}
Well... you could use an XML parser (stream or DOM), or a JSON parser (again, stream or DOM), and build the objects on the fly. But with this data, which seems to consist of records of cricket matches, why not go with a csv format?
This seems to be your basic 'datum':
<id>1263</id>
<name>Australia v India 3rd Test at Perth - Jan 13-17, 2012</name>
<type>TestMatch</type>
<tournamentId>137</tournamentId>
<location>Perth</location>
<date>2012-01-14</date>
<GMTTime>02:30:00</GMTTime>
<localTime>10:30:00</localTime>
<description>3rd Test day 2</description>
<team1>Australia</team1>
<team2>India</team2>
<teamId1>7</teamId1>
<teamId2>1</teamId2>
<tournamentName>India tour of Australia 2011-12</tournamentName>
<logo>/cricket/137/tournament.png</logo>
Of course you would still have to parse the csv and deal with character delimiting (such as when you have a ' or a " in a string), but it will reduce your network traffic quite substantially and likely parse much faster on the client. Of course, this depends on what your client is.
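The delimiting issue mentioned above is the usual sticking point with csv. A minimal sketch of splitting one csv line while respecting double-quoted fields (a simplification that ignores escaped quotes inside fields):

```java
import java.util.ArrayList;
import java.util.List;

public class CsvSplit {
    // Splits one csv line on commas, except commas inside double-quoted fields
    static List<String> split(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuotes = false;
        for (char c : line.toCharArray()) {
            if (c == '"') {
                inQuotes = !inQuotes;          // toggle quoted state, drop the quote
            } else if (c == ',' && !inQuotes) {
                fields.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        fields.add(current.toString());
        return fields;
    }

    public static void main(String[] args) {
        // The match name contains a comma, so it must be quoted in the csv
        String line = "1263,\"Australia v India 3rd Test at Perth - Jan 13-17, 2012\",TestMatch";
        System.out.println(split(line));
    }
}
```

In practice a csv library (e.g. Apache Commons CSV or OpenCSV) handles the remaining corner cases, such as escaped quotes and embedded newlines.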
Actually, you have a RESTful source that can return data in several formats; you only need to read from it, and no further interaction is needed.
So you can use any XML parser to parse the XML data and put the extracted data into whatever data structure you want or already have.
I hadn't heard of XStream before, but you can find more information about selecting the best parser for your situation in this StackOverflow question.
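For the "any XML parser" route, here is a sketch using the DOM parser that ships with the JDK (javax.xml.parsers), fed a trimmed-down version of the sample match record shown earlier; the element names are taken from that sample:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class MatchParser {
    // Returns the text of the first element with the given tag name
    static String field(Document doc, String tag) {
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String xml =
            "<match>" +
            "<id>1263</id>" +
            "<type>TestMatch</type>" +
            "<location>Perth</location>" +
            "<team1>Australia</team1>" +
            "<team2>India</team2>" +
            "</match>";

        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        System.out.println(field(doc, "id") + ": " +
                field(doc, "team1") + " v " + field(doc, "team2") +
                " at " + field(doc, "location"));
    }
}
```

For the real feed, the same Document could be walked once per match element and the extracted text fed into your database inserts.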
