How to use a language model in TensorFlow Lite? - Java

Does anyone know how to use a language model in TensorFlow Lite? I have a language model with an LSTM structure generated in TensorFlow, and I have already converted it to .tflite for use on Android. Now my question is: how can I use it? My intention is to use the model to predict the next word in a sentence. The original model works perfectly in Python; what I need now is for it to work and make predictions in Java or Kotlin.

You can achieve this in two ways:
Add the .tflite model to the assets folder, set it up in your Gradle build file, and then load and run it from Java/Kotlin, as shown here.
Use Firebase: Firebase provides custom machine learning model deployment options. You can add the model to Firebase and use it as a remote service; here is the link to the docs.
I'd recommend the first method if your model is small and you want your app to run offline.
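For the first option, here is a minimal sketch of loading and running the converted model from Java (the model file name "lm.tflite", the tensor shapes, and the vocabulary handling are assumptions; match them to your converted LSTM graph):

```java
import android.content.Context;
import android.content.res.AssetFileDescriptor;
import org.tensorflow.lite.Interpreter;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class NextWordPredictor {
    private final Interpreter interpreter;

    public NextWordPredictor(Context context) throws IOException {
        // "lm.tflite" is a placeholder; place your converted model in app/src/main/assets
        interpreter = new Interpreter(loadModelFile(context, "lm.tflite"));
    }

    private static MappedByteBuffer loadModelFile(Context context, String name) throws IOException {
        AssetFileDescriptor fd = context.getAssets().openFd(name);
        try (FileInputStream in = new FileInputStream(fd.getFileDescriptor())) {
            return in.getChannel().map(FileChannel.MapMode.READ_ONLY,
                    fd.getStartOffset(), fd.getDeclaredLength());
        }
    }

    // Feeds a [1 x sequenceLength] batch of encoded token ids and returns the
    // scores over the vocabulary; the argmax is the predicted next-word id,
    // which you map back to a word with the same vocabulary used in training.
    public float[] predict(float[][] tokenIds, int vocabSize) {
        float[][] output = new float[1][vocabSize];
        interpreter.run(tokenIds, output); // shapes must match the converted graph
        return output[0];
    }
}
```

Note that you typically also need aaptOptions { noCompress "tflite" } in build.gradle so the asset is not compressed and can be memory-mapped.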

Related

Is there a way to update a trained machine learning model in Weka while making predictions for new data (Java)?

I have implemented classification algorithms in Weka using Java. I would like to deploy the trained model on a Raspberry Pi for testing. I then want the trained model to be updated every time it receives new data, and to keep making predictions. Is there a way to do that with Weka in Java? Could you share your thoughts on how to proceed?
Thanks in advance
Well, I'm bored, so I'll try to help, since this is a tough and common problem as more and more people integrate machine learning into their normal dev processes... even though this is a bit broad for SO.
I would ask yourself a few questions:
How often does this particular model need to be refreshed in order to maintain relevancy to the task it is performing for you?
Typically, retraining a classification model when every new row of data is written to the place you get training data from would be insane. So I would think about that.
How long does it take to build the model, and how long WILL it take to build it as more and more training data piles up?
Where do you keep your training data, and how do you label it so fast that you would be able to retrain the model every time you get new data? Or is it not a typical supervised classification model?
I ask this because, from what I've done, the data you train with will go into some kind of database, or file system, or whatever, and if the Java code you use to build the model reads from a standard location on disk or a DB, rebuilding the model isn't all that hard... it can be a CRON job or a Jenkins job or whatever that rebuilds the model (read data, build model, write model to disk, deploy model). You'll want the process that uses the model to be able to read its location from configuration, and you'll want the code that builds the model to be able to configure the location of the training data corpus. A simple Java properties file might suffice for this piece (see the sketch below).
Will you need to reprocess all your data every time you build a new model? This is also a common problem, sometimes solved by tagging each classified item with the version of the model that you used to classify it. In this case, you can set up a "reprocessing pipeline" that looks for old classification results and pumps them through the new model. This opens up a can of worms depending on how you handle your data (dedupe strategy, history keeping, etc.), so think about that one.
I don't know anything at all about the Raspberry Pi, but that part doesn't seem relevant, since this is really a software architecture problem. One way I've done the automagic deployment piece is to use Jenkins and something like Puppet to push/pull the model to the machine it will be used on. In the past I've put NLP models onto Hadoop and Storm clusters with Puppet, and the Java code picks them up from a static NFS mount that is on all nodes.
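To make the configuration-driven loading concrete, here is a minimal sketch with Weka (the properties file name, the "model.path" key, and the file locations are made up for illustration):

```java
import weka.classifiers.Classifier;
import weka.core.Instance;
import weka.core.SerializationHelper;

import java.io.FileInputStream;
import java.util.Properties;

public class ConfiguredModel {

    // The rebuild job (CRON, Jenkins, ...) writes the serialized model to the
    // path named in the properties file; the scoring process just re-reads it
    // whenever a new version is deployed.
    public static Classifier load() throws Exception {
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream("model.properties")) {
            config.load(in);
        }
        return (Classifier) SerializationHelper.read(config.getProperty("model.path"));
    }

    public static double score(Classifier model, Instance instance) throws Exception {
        return model.classifyInstance(instance);
    }
}
```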
HTH
Check out the MOA (Massive Online Analysis) package from the Weka developers. It does basically what you want: updating the trained model incrementally (i.e. "online").
MOA is (freely) available as a standalone product, or as an extension to Weka, as far as I remember.
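If you stay within plain Weka, the incremental learners implement the weka.classifiers.UpdateableClassifier interface, which is the same "online" idea. A minimal sketch with NaiveBayesUpdateable (the ARFF file name is illustrative):

```java
import weka.classifiers.bayes.NaiveBayesUpdateable;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class OnlineLearner {
    public static void main(String[] args) throws Exception {
        // "train.arff" is a placeholder; any data source with the right header works
        Instances data = new DataSource("train.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Initialize from an empty copy that only carries the attribute header
        NaiveBayesUpdateable model = new NaiveBayesUpdateable();
        model.buildClassifier(new Instances(data, 0));

        // Fold in instances one at a time as they arrive
        for (int i = 0; i < data.numInstances(); i++) {
            model.updateClassifier(data.instance(i));
        }

        // The updated model can make predictions at any point in the stream
        double prediction = model.classifyInstance(data.instance(0));
        System.out.println("Predicted class index: " + prediction);
    }
}
```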

TensorFlow Android demo: load a custom graph in?

The TensorFlow Android demo provides a decent base for building an Android app that uses a TensorFlow graph, but I've been getting stuck on how to repurpose it for an app that does not do image classification. As it is, it loads the Inception graph from a .pb file and uses that to run inferences (and the code assumes as much), but what I'd like to do is load my own graph in (from a .pb file) and provide a custom implementation of how to handle the graph's input/output.
The graph in question is from Assignment 6 of Udacity's deep learning course, an RNN that uses LSTMs to generate text. (I've already frozen it into a .pb file.) However, the Android demo's code is based on the assumption that they're dealing with an image classifier. So far I've figured out that I'll need to change the values of the parameters passed into tensorflow.initializeTensorflow (called in TensorFlowImageListener), but several of the parameters represent properties of image inputs (e.g. IMAGE_SIZE), which the graph I'm looking to load in doesn't have. Does this mean I'll have to change the native code? More generally, how can I approach this entire issue?
Look at TensorFlow Serving for a generic way to load and serve tensorflow models.
Good news: it recently became a lot easier to embed a pre-trained TensorFlow model in your Android app. Check out my blog posts here:
https://medium.com/@daj/using-a-pre-trained-tensorflow-model-on-android-e747831a3d6 (part 1)
https://medium.com/@daj/using-a-pre-trained-tensorflow-model-on-android-part-2-153ebdd4c465 (part 2)
My blog post goes into a lot more detail, but in summary, all you need to do is:
Include the compile org.tensorflow:tensorflow-android:+ dependency in your build.gradle.
Use the Java TensorFlowInferenceInterface class to interface with your model (no need to modify any of the native code).
The TensorFlow Android demo app has been updated to use this new approach. See TensorFlowImageClassifier.recognizeImage for where it uses the TensorFlowInferenceInterface.
You'll still need to specify some configuration, like the names of the input and output nodes in the graph and the size of the input, but you should be able to figure that information out using TensorBoard or by inspecting the training script.
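For reference, a minimal sketch of driving a custom (non-image) graph through TensorFlowInferenceInterface; the model path, node names, and shapes below are placeholders, so pull the real ones from TensorBoard or your training script as noted above:

```java
import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public class TextRnnClient {
    // Placeholder values; replace with your own frozen graph's details
    private static final String MODEL_FILE = "file:///android_asset/lstm_text.pb";
    private static final String INPUT_NODE = "input";
    private static final String OUTPUT_NODE = "output";

    private final TensorFlowInferenceInterface inference;

    public TextRnnClient(AssetManager assets) {
        inference = new TensorFlowInferenceInterface(assets, MODEL_FILE);
    }

    public float[] run(float[] encodedInput, int outputSize) {
        float[] result = new float[outputSize];
        inference.feed(INPUT_NODE, encodedInput, 1, encodedInput.length); // batch of 1
        inference.run(new String[] { OUTPUT_NODE });
        inference.fetch(OUTPUT_NODE, result);
        return result;
    }
}
```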

Spark Model to use in Java Application

For analysis, I know we can use the save function and load the model in a Spark application, but that works only within Spark applications (Java, Scala, Python).
We can also use PMML to export the model to other types of applications.
Is there any way to use a Spark model in a Java application?
I am one of the creators of MLeap. Check us out; it is meant for exactly your use case. If there is a transformer you need that is not currently supported, get in touch with me and we will get it in there.
Our serialization format is solely JSON/Protobuf right now, so it is very portable and supports large models like RandomForest. You can serialize your model to a zip file and then load it up wherever you need it.
Take a look at our demo to get a use case:
https://github.com/TrueCar/mleap-demo
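For a rough idea of the Java side, loading a serialized bundle looks something like this with MLeap's javadsl (class names and signatures recalled from the MLeap docs, so treat them as assumptions and check the demo above for the current API):

```java
import ml.combust.mleap.runtime.MleapContext;
import ml.combust.mleap.runtime.frame.Transformer;
import ml.combust.mleap.runtime.javadsl.BundleBuilder;
import ml.combust.mleap.runtime.javadsl.ContextBuilder;

import java.io.File;

public class MleapLoader {
    public static Transformer load(String bundlePath) {
        // bundlePath points at the zip bundle exported from the Spark pipeline
        MleapContext context = new ContextBuilder().createMleapContext();
        return new BundleBuilder().load(new File(bundlePath), context).root();
    }
}
```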
Currently no; your options are to use PMML for those models that support it, or to write your own framework for using models outside of Spark.
There is movement towards enabling this (see this issue). You could also check out MLeap.

Store data to file in Codename One AND a Swing GUI Java application

I've started using Codename One to develop an app that displays a curriculum of data made up of modules, categories, and topics in a hierarchical data structure (modules contain categories, which contain topics). My goal is to make this a general-purpose framework within which the data determines the app's behaviour and content, so organizations can customize the app by using a desktop GUI to edit the data.
So I took a break from my mobile App Development to add a Swing GUI to generate the data model instead of hard-coded test data.
I used java.io.ObjectInputStream/ObjectOutputStream to conveniently write my data structure to a file and read it back. Then I tried to use the same file input and output code in my Codename One project and got errors like in this StackOverflow question: In Codename One, why can I not get FileInputStream to import or compile?
So, Codename One recommends using its own APIs to read and write to storage. Is there a way to read and write the data files in the Swing GUI desktop application such that my compiled Codename One app will be able to read them? How should I implement file (or, hopefully, object) I/O in the desktop Swing app, and then be able to package that data file with the app when I compile it?
We usually use a common format when interacting between Java SE/EE and Codename One: XML or JSON, for which parsers/generators are available on both platforms and which allow us to be "future proof" for any type of technology.
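For example, the Swing tool can write the curriculum out as plain JSON with any desktop JSON library, and the Codename One app can read the same file back with the built-in com.codename1.io.JSONParser. A minimal sketch of the Codename One side (the file name and its location are illustrative):

```java
import com.codename1.io.JSONParser;
import com.codename1.ui.Display;

import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.Map;

public class CurriculumLoader {
    // Reads a curriculum.json bundled with the app, e.g.
    // {"modules":[{"name":"Module 1","categories":[...]}]}
    public Map<String, Object> load() throws IOException {
        Reader reader = new InputStreamReader(
                Display.getInstance().getResourceAsStream(getClass(), "/curriculum.json"));
        return new JSONParser().parseJSON(reader);
    }
}
```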
We also have the ability to work with objects through our own custom externalization code; the webservice wizard generates Java SE/EE compatible versions of these classes, and you can use those within your Swing project.
FYI Codename One also supports desktop and JavaScript builds. We also have quite a lot of Swing applications and we are slowly migrating them to Codename One since Swing doesn't get updated anymore.

Convert PMML - Model (Artificial Neural Network) to Java Code

I have a PMML file of a trained Artificial Neural Network (ANN). I would like to create a Java method which simply takes in the inputs and returns the targeted value.
This seems pretty easy, but I do not know how to realize it.
The PMML Version = 3.0
Update: 24.05.2013
I tried to use the jpmml Java API.
This is what I have done:
(1) Downloaded three .jar files via the Maven Central Repository (link):
pmml-manager-1.0.2.jar
pmml-model-1.0.2.jar
pmml-evaluator-1.0.2.jar
(2) Used Eclipse's "configure Build Path" and added those three external .jars
(3) Imported my PMML file named "text.xml" (an artificial neural network (ANN)), PMML version="3.0"
(4) Tried to run the example "TreeModelTraversalExample.java" provided by the jpmml project
Obviously it did not work, for a few reasons:
The mentioned example is not for ANNs. How do I rewrite it?
My PMML file is in XML format. Is that the right format?
I do not know how to handle or add Java APIs. Should I even add them via "configure Build Path" in Eclipse?
Obvious fact #2: I have no clue what I'm doing :-)
Thanks again and kindest regards.
Stefan
JPMML should be able to handle PMML 3.X and newer versions of NeuralNetwork models without problem. Moreover, it should be able to handle all the normalization and denormalization transformations that may accompany such models.
I could use a clarification as to why you are interested in converting PMML models to Java code in the first place. It complicates the whole matter a lot and doesn't add any value. The JPMML library itself is rather compact and has minimal external dependencies (at the moment of writing this, it only depends on commons-math). There shouldn't be much difference performance-wise. You can reasonably expect to obtain up to 10'000 scorings/sec on a modern desktop computer.
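For direct in-process scoring, a minimal sketch with the JPMML evaluator (this uses the newer JPMML-Evaluator API rather than the 1.0.2 jars listed in the question, and the input preparation assumes your PMML declares its input fields):

```java
import org.dmg.pmml.FieldName;
import org.jpmml.evaluator.Evaluator;
import org.jpmml.evaluator.FieldValue;
import org.jpmml.evaluator.InputField;
import org.jpmml.evaluator.LoadingModelEvaluatorBuilder;

import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;

public class PmmlScorer {
    public static Map<FieldName, ?> score(File pmmlFile, Map<String, ?> rawInputs) throws Exception {
        Evaluator evaluator = new LoadingModelEvaluatorBuilder()
                .load(pmmlFile)
                .build();
        evaluator.verify(); // runs the model's built-in verification data, if any

        Map<FieldName, FieldValue> arguments = new LinkedHashMap<>();
        for (InputField inputField : evaluator.getInputFields()) {
            FieldName name = inputField.getName();
            // prepare() validates the raw value and converts it to the PMML data type
            arguments.put(name, inputField.prepare(rawInputs.get(name.getValue())));
        }
        return evaluator.evaluate(arguments);
    }
}
```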
The JPMML codebase has recently moved to GitHub: http://github.com/jpmml/jpmml
Fellow coders at Turn Inc. have forked this codebase and are implementing PMML-to-Java translation (see the top-level module "pmml-translation") for selected model types: https://github.com/turn/jpmml
At the moment I recommend you to check out the Openscoring project (uses JPMML internally): http://www.openscoring.org
Then, you could try the following:
Deploy your XML file using the HTTP PUT method.
Get your model summary information using the HTTP GET method. If the request succeeds (as opposed to failing with an HTTP status 500 error code) then your model is well supported.
Execute the model either in single prediction mode or batch prediction mode using the HTTP POST method. Try sending larger batches to see if it meets your performance requirements.
Undeploy the model using the HTTP DELETE method.
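A rough sketch of the deploy step from Java, using nothing but HttpURLConnection (the endpoint URL and the "AnnModel" id are assumptions; check the Openscoring docs for the exact paths):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class OpenscoringDeploy {
    public static void main(String[] args) throws Exception {
        // Assumes a local Openscoring instance and a chosen model id of "AnnModel"
        URL url = new URL("http://localhost:8080/openscoring/model/AnnModel");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/xml");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(Files.readAllBytes(Paths.get("text.xml"))); // the PMML file
        }
        System.out.println("Deploy status: " + conn.getResponseCode());
        // GET the same URL for the model summary, POST for single or batch
        // predictions, and DELETE to undeploy, mirroring the steps above.
    }
}
```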
You can always try contacting project owners for more insight. I'm sure they are nice people.
Another approach would be to use the Cascading API. There's a library for Cascading called "Pattern", which translates PMML models into Cascading apps in Java. https://github.com/Cascading/pattern
Generally those are for Hadoop jobs; however, if you use the "local mode" flow planner in Cascading, it can be built as a JAR file to include with some other Java app.
There is work in progress for ANN models. Check on the developer email list: https://groups.google.com/forum/?fromgroups#!forum/pattern-user
I think this might do what you need. It is an open-source library that claims to be able to read and evaluate PMML neural networks. I have not tried it myself.
https://code.google.com/p/jpmml/
