I've been reading the H2O documentation for a while, and I haven't found a clear example of how to load a model that was trained and saved using the Python API. I was following the example below.
import h2o
from h2o.estimators.naive_bayes import H2ONaiveBayesEstimator

model = H2ONaiveBayesEstimator()
h2o_df = h2o.import_file("http://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
model.train(y="IsDepDelayed", x=["Year", "Origin"],
            training_frame=h2o_df,
            family="binomial",
            lambda_search=True,
            max_active_predictors=10)
h2o.save_model(model, path="models")
But if you check the official documentation, it states that you have to download the model as a POJO from the Flow UI. Is that the only way, or can I achieve the same result via Python? Just for information, I show the doc's example below. I need some guidance.
import java.io.*;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.prediction.*;
public class main {
  private static String modelClassName = "gbm_pojo_test";

  public static void main(String[] args) throws Exception {
    hex.genmodel.GenModel rawModel;
    rawModel = (hex.genmodel.GenModel) Class.forName(modelClassName).newInstance();
    EasyPredictModelWrapper model = new EasyPredictModelWrapper(rawModel);

    //
    // By default, unknown categorical levels throw PredictUnknownCategoricalLevelException.
    // Optionally configure the wrapper to treat unknown categorical levels as N/A instead:
    //
    //   EasyPredictModelWrapper model = new EasyPredictModelWrapper(
    //       new EasyPredictModelWrapper.Config()
    //           .setModel(rawModel)
    //           .setConvertUnknownCategoricalLevelsToNa(true));

    RowData row = new RowData();
    row.put("Year", "1987");
    row.put("Month", "10");
    row.put("DayofMonth", "14");
    row.put("DayOfWeek", "3");
    row.put("CRSDepTime", "730");
    row.put("UniqueCarrier", "PS");
    row.put("Origin", "SAN");
    row.put("Dest", "SFO");

    BinomialModelPrediction p = model.predictBinomial(row);
    System.out.println("Label (aka prediction) is flight departure delayed: " + p.label);
    System.out.print("Class probabilities: ");
    for (int i = 0; i < p.classProbabilities.length; i++) {
      if (i > 0) {
        System.out.print(",");
      }
      System.out.print(p.classProbabilities[i]);
    }
    System.out.println("");
  }
}
h2o.save_model will save the binary model to the provided file system. However, looking at the Java application above, it seems you want to use the model in a Java-based scoring application. In that case you should use the h2o.download_pojo API to save the model to the local file system along with the genmodel jar file. The API is documented as follows:
download_pojo(model, path=u'', get_jar=True)

    Download the POJO for this model to the directory specified by the path; if the path is "", then dump to screen.

    :param model: the model whose scoring POJO should be retrieved.
    :param path: an absolute path to the directory where POJO should be saved.
    :param get_jar: retrieve the h2o-genmodel.jar also.
Once you have downloaded the POJO, you can use the sample application above to perform the scoring; make sure the POJO class name and the modelClassName match, along with the model type.
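For completeness, here is a minimal sketch of both options from Python. The directory paths are placeholders, not anything H2O requires; h2o.save_model, h2o.load_model, and h2o.download_pojo are all part of the Python API.

import h2o

# Option 1: save the binary model and load it back later from Python.
model_path = h2o.save_model(model, path="/tmp/h2o_models")  # returns the full saved path
loaded_model = h2o.load_model(model_path)

# Option 2: export a POJO (plus h2o-genmodel.jar) for a Java scoring application.
h2o.download_pojo(model, path="/tmp/h2o_pojo", get_jar=True)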
I understand this has been asked multiple times, but I am really stuck here, and if it is fairly easy, please help me.
I have a sample Java program and a jar file.
Here is what is inside the Java program (WriterSample.java).
// (c) Copyright 2014. TIBCO Software Inc. All rights reserved.
package com.spotfire.samples;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Date;
import java.util.Random;
import com.spotfire.sbdf.BinaryWriter;
import com.spotfire.sbdf.ColumnMetadata;
import com.spotfire.sbdf.FileHeader;
import com.spotfire.sbdf.TableMetadata;
import com.spotfire.sbdf.TableMetadataBuilder;
import com.spotfire.sbdf.TableWriter;
import com.spotfire.sbdf.ValueType;
/**
* This example is a simple command line tool that writes a simple SBDF file
* with random data.
*/
public class WriterSample {

  public static void main(String[] args) throws IOException {
    // The command line application requires one argument, which is supposed to be
    // the name of the SBDF file to write.
    if (args.length != 1) {
      System.out.println("Syntax: WriterSample output.sbdf");
      return;
    }
    String outputFile = args[0];

    // First we just open the file as usual, and then we need to wrap the stream
    // in a binary writer.
    OutputStream outputStream = new FileOutputStream(outputFile);
    BinaryWriter writer = new BinaryWriter(outputStream);

    // When writing an SBDF file you first need to write the file header.
    FileHeader.writeCurrentVersion(writer);

    // The second part of the SBDF file is the metadata; in order to create
    // the table metadata we need to use the builder class.
    TableMetadataBuilder tableMetadataBuilder = new TableMetadataBuilder();

    // The table can have metadata properties defined. Here we add a custom
    // property indicating the producer of the file. This will be imported as
    // a table property in Spotfire.
    tableMetadataBuilder.addProperty("GeneratedBy", "WriterSample.exe");

    // All columns in the table need to be defined and added to the metadata builder;
    // the required information is the name of the column and the data type.
    ColumnMetadata col1 = new ColumnMetadata("Category", ValueType.STRING);
    tableMetadataBuilder.addColumn(col1);

    // Similar to tables, columns can also have metadata properties defined. Here
    // we add another custom property. This will be imported as a column property
    // in Spotfire.
    col1.addProperty("SampleProperty", "col1");

    ColumnMetadata col2 = new ColumnMetadata("Value", ValueType.DOUBLE);
    tableMetadataBuilder.addColumn(col2);
    col2.addProperty("SampleProperty", "col2");

    ColumnMetadata col3 = new ColumnMetadata("TimeStamp", ValueType.DATETIME);
    tableMetadataBuilder.addColumn(col3);
    col3.addProperty("SampleProperty", "col3");

    // We need to call the build function in order to get an object that we can
    // write to the file.
    TableMetadata tableMetadata = tableMetadataBuilder.build();
    tableMetadata.write(writer);

    int rowCount = 10000;
    Random random = new Random();

    // Now that we have written all the metadata we can start writing the actual data.
    // Here we use a TableWriter to write the data; remember to close the table writer,
    // otherwise you will not generate a correct SBDF file.
    TableWriter tableWriter = new TableWriter(writer, tableMetadata);
    for (int i = 0; i < rowCount; ++i) {
      // You need to perform one addValue call for each column, for each row, in the
      // same order as you added the columns to the table metadata object.
      // In this example we just generate some random values of the appropriate types.

      // Here we write the first string column.
      String[] col1Values = new String[] {"A", "B", "C", "D", "E"};
      tableWriter.addValue(col1Values[random.nextInt(5)]);

      // Next we write the second double column.
      double doubleValue = random.nextDouble();
      if (doubleValue < 0.5) {
        // Note that if you want to write a null value you shouldn't send null to
        // addValue; instead you should use the InvalidValue property of the column's
        // ValueType.
        tableWriter.addValue(ValueType.DOUBLE.getInvalidValue());
      } else {
        tableWriter.addValue(random.nextDouble());
      }

      // And finally the third date time column.
      tableWriter.addValue(new Date());
    }

    // Finally we need to close the file and write the end of table marker.
    tableWriter.writeEndOfTable();
    writer.close();
    outputStream.close();

    System.out.print("Wrote file: ");
    System.out.println(outputFile);
  }
}
The jar file is sbdf.jar, which is in the same directory as the java file.
I can now compile with:
javac -cp "sbdf.jar" WriterSample.java
This will generate a WriterSample.class file.
The problem is that when I try to execute the program with
java -cp .:./sbdf.jar WriterSample
I get the error message:
Error: Could not find or load main class WriterSample
What should I do? Thanks!
You should use the fully qualified name of WriterSample, which is com.spotfire.samples.WriterSample, and the correct java command is:
java -cp .:./sbdf.jar com.spotfire.samples.WriterSample
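One more detail worth checking: because the class is declared in package com.spotfire.samples, the compiled WriterSample.class must live in a matching com/spotfire/samples directory for the java command above to find it. A simple way to arrange that (the -d flag tells javac where to root the package directories; on Windows use ; instead of : in the classpath) is:

javac -d . -cp sbdf.jar WriterSample.java
java -cp .:sbdf.jar com.spotfire.samples.WriterSample output.sbdf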
I have a folder containing many videos that I'd like to rename, and I can't think of any convenient way of doing so. The naming convention is "SeasonX, EpisodeY: Episode name", or "SXEY:Name" for short.
An example: S01E01:JavaCode
That would be Season One, Episode One of an episode called JavaCode.
I wrote something that is able to change the file names, but I need a different, unique file name for every episode because it's a TV show.
Here's the code:
import java.io.File;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BatchFileRenamer {

  public static void main(String[] args) {
    File folder = new File("C:\\Users\\Tony\\Videos\\New folder");
    String name = "name";
    File[] files = folder.listFiles();

    // Matches the file extension (everything from the first dot onward).
    Pattern p = Pattern.compile("\\..*");

    for (int i = 0; i < files.length; i++) {
      Matcher m = p.matcher(files[i].getName());
      System.out.println(files[i].getName());
      m.find();
      // Zero-pad the episode number so that, e.g., the third file becomes "S01E03".
      files[i].renameTo(new File(folder.getAbsolutePath() + "\\" + name + " S01E"
          + (i + 1 < 10 ? "0" : "") + (i + 1) + m.group()));
    }
  }
}
I was thinking of creating an array containing the episode names, but that's just as much work as manually renaming them in Windows. It would help if there were a text file I could download for each TV show with the episode names in it.
Anyway, any suggestions would be greatly appreciated!
I think the best way to do this would be to use the Open Movie Database (OMDb) API. With it, you can get a REST response containing a list of episodes for each season of a show. (Example request.)
With this, you could use Gson or another parser to deserialize the list of episodes.
Here is a Gist of some sample code. (There is probably a better getter method, but you get the point.)
What the code does is fetch the information from the sample request above via the API, then deserialize it into a basic POJO from the Episodes.java class using Gson:
Gson gson = new Gson();
Episodes episodes = gson.fromJson(download, Episodes.class);
System.out.println(episodes);
You can then use this information to create the individual file names for the video files.
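The Gist itself isn't reproduced here, but a minimal sketch of what such an Episodes POJO might look like is below. The field names assume the OMDb season-response JSON (Title, Season, and an Episodes array); adjust them to whatever the actual response contains.

import java.util.List;

// Hypothetical POJO matching the OMDb season response; Gson maps JSON keys to field names.
public class Episodes {
  public String Title;            // show title
  public String Season;           // season number
  public List<Episode> Episodes;  // one entry per episode

  public static class Episode {
    public String Title;    // episode name, usable for building file names
    public String Episode;  // episode number within the season
  }

  @Override
  public String toString() {
    return Title + " season " + Season + ": " + Episodes.size() + " episodes";
  }
}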
I have retrained an Inception model on my own data set. The model is built in Python, and I now have the saved graph as a .pb file and the label file as a .txt. Now I need to make predictions with this model from Java for an image. Can anyone please help me?
The TensorFlow team is developing a Java interface, but it is not stable yet. You can find the existing code here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/java and follow updates on its development here https://github.com/tensorflow/tensorflow/issues/5. You can take a look at GraphTest.java, SessionTest.java and TensorTest.java to see how it is currently used (although, as explained, this may change in the future). Basically, you need to load the binary saved graph into a Graph object, create a Session with it and run it with the appropriate values (as Tensors) to receive a List<Tensor> with the output. Put together from the examples in the source:
import java.nio.file.Files;
import java.nio.file.Paths;
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
try (Graph graph = new Graph()) {
  graph.importGraphDef(Files.readAllBytes(Paths.get("saved_model.pb")));
  try (Session sess = new Session(graph)) {
    try (Tensor x = Tensor.create(1.0f);
         Tensor y = sess.runner().feed("x", x).fetch("y").run().get(0)) {
      System.out.println(y.floatValue());
    }
  }
}
The code I used that worked reads a protobuf file ending in .pb:
try (SavedModelBundle b = SavedModelBundle.load("/tmp/model", "serve")) {
  Session sess = b.session();
  ...
  float[][] matrix = sess.runner()
      .feed("x", input)
      .feed("keep_prob", keep_prob)
      .fetch("y_conv")
      .run()
      .get(0)
      .copyTo(new float[1][10]);
  ...
}
The Python code I used to save it was:

signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={'x': tf.saved_model.utils.build_tensor_info(x)},
    outputs={'y_conv': tf.saved_model.utils.build_tensor_info(y_conv)},
)

builder = tf.saved_model.builder.SavedModelBuilder("/tmp/model")
builder.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
    },
)
builder.save()
I need to create RDF/XML documents containing objects in the OSLC namespace.
e.g.
<oslc_disc:ServiceProviderCatalog
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dc="http://purl.org/dc/terms/"
    xmlns:oslc_disc="http://open-services.net/xmlns/discovery/1.0/"
    rdf:about="{self}">
  <dc:title>{catalog title}</dc:title>
  <oslc_disc:details rdf:resource="{catalog details uri}" />
</oslc_disc:ServiceProviderCatalog>
What is the simplest way to create this document using the Jena API?
(I know about Lyo; they use a JSP for this doc :-)
Thanks, Carsten
Here's a complete example to start you off. Be aware that this will be equivalent to the XML output you want, but it may not be identical: the order of properties, for example, may vary, and there are other ways to write the same content.
import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.vocabulary.DCTerms;

public class Jena {
  // Vocabulary items -- could use schemagen to generate a class for this
  final static String OSLC_DISC_NS = "http://open-services.net/xmlns/discovery/1.0/";

  final static Resource ServiceProviderCatalog =
      ResourceFactory.createResource(OSLC_DISC_NS + "ServiceProviderCatalog");

  final static Property details =
      ResourceFactory.createProperty(OSLC_DISC_NS, "details");

  public static void main(String[] args) {
    // Inputs
    String selfURI = "http://example.com/self";
    String catalogTitle = "Catalog title";
    String catalogDetailsURI = "http://example.com/catalogDetailsURI";

    // Create an in-memory model
    Model model = ModelFactory.createDefaultModel();

    // Set prefixes
    model.setNsPrefix("dc", DCTerms.NS);
    model.setNsPrefix("oslc_disc", OSLC_DISC_NS);

    // Add an item of type ServiceProviderCatalog
    Resource self = model.createResource(selfURI, ServiceProviderCatalog);

    // Add the title
    self.addProperty(DCTerms.title, catalogTitle);

    // Add details, which points to a resource
    self.addProperty(details, model.createResource(catalogDetailsURI));

    // Write pretty RDF/XML
    model.write(System.out, "RDF/XML-ABBREV");
  }
}
I want to insert data into my ontology using this code:
Resource resource = model.createResource(X_NAMESPACE + Global_ID);
Property prop = model.createProperty(RDF_NAMESPACE + "type");
Resource obj = model.createResource(X_NAMESPACE + "X");
model.add(resource, prop, obj);
First, does this code correctly create an individual of the specified type?
When I run this code, it saves without a problem and the model looks correct, but when I query the model I run into problems. For example, I save some data in X, and when I retrieve it, all the other data is retrieved as well.
Your code for creating a resource is correct, but it's not very idiomatic. There are methods provided by the Model interface that will make creating resources easier, and there are methods in the Resource interface that make adding types easier too. Here's code that illustrates these:
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.vocabulary.RDF;
public class CreateResourceExample {
  public static void main(String[] args) {
    Model model = ModelFactory.createDefaultModel();
    String NS = "http://stackoverflow.com/q/22471651/1281433/";
    model.setNsPrefix("", NS);

    // Create the class resource
    Resource thing = model.createResource(NS + "ThingA");

    // The Model API provides methods for creating resources
    // of specified types.
    Resource x = model.createResource(NS + "X", thing);

    // If you want to create the triples manually, you can
    // use the predefined vocabulary classes.
    Resource y = model.createResource(NS + "Y");
    model.add(y, RDF.type, thing);

    // You can also use the Resource API to add properties.
    Resource z = model.createResource(NS + "Z");
    z.addProperty(RDF.type, thing);

    // Show the model
    model.write(System.out, "TTL");
  }
}
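As for the retrieval problem (getting all the other data back when you only want X): once your individuals carry rdf:type triples as above, you can list just the resources of a given type instead of reading the whole model. A minimal sketch against the model built above, using Jena's listResourcesWithProperty (ResIterator comes from com.hp.hpl.jena.rdf.model):

// List only the resources whose rdf:type is ThingA,
// rather than every statement in the model.
ResIterator it = model.listResourcesWithProperty(RDF.type, thing);
while (it.hasNext()) {
  System.out.println(it.next());
}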