Load a TensorFlow model in Java

I am trying to load a TensorFlow model in Java. This is how I save the model in Python:
tf.saved_model.simple_save(
    sess,
    "/tmp/model/" + timestamp,
    inputs={"input_x": cnn.input_x},
    outputs={"input_y": cnn.input_y})
And this is how I load it in Java:
public static void main(String[] args) throws IOException {
    // print the TensorFlow version number (1.12.0 here)
    System.out.println(TensorFlow.version());
    final int NUM_PREDICTIONS = 1;
    Random r = new Random();
    long[] shape = new long[] {1, 56};
    IntBuffer buf = IntBuffer.allocate(1 * 56);
    for (int i = 0; i < 56; i++) {
        buf.put(r.nextInt());
    }
    buf.flip();
    // load the model bundle
    try (SavedModelBundle b = SavedModelBundle.load("/tmp/model/1549001254", "serve")) {
        Session sess = b.session();
        // run the model and get the result
        try (Tensor x = Tensor.create(shape, buf)) {
            float[] result = sess.runner()
                    .feed("input_x", x)
                    .fetch("input_y")
                    .run()
                    .get(0)
                    .copyTo(new float[1][2])[0];
            // print out the result
            System.out.println(result[0]);
        }
    }
}
This is the SignatureDef of the SavedModel:
The given SavedModel SignatureDef contains the following input(s):
  inputs['input_x'] tensor_info:
      dtype: DT_INT32
      shape: (-1, 56)
      name: input_x:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['input_y'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 2)
      name: input_y:0
Method name is: tensorflow/serving/predict
The inputs and outputs appear to be saved correctly. But when I run the Java program, it fails. Here is the console output (TensorFlow version, load log, then the error):
1.12.0
2019-02-01 15:58:59.065677: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /tmp/model/1549001254
2019-02-01 15:58:59.072601: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-02-01 15:58:59.085912: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2
2019-02-01 15:58:59.132271: I tensorflow/cc/saved_model/loader.cc:162] Restoring SavedModel bundle.
2019-02-01 15:58:59.199331: I tensorflow/cc/saved_model/loader.cc:138] Running MainOp with key legacy_init_op on SavedModel bundle.
2019-02-01 15:58:59.199435: I tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { serve }; Status: success. Took 133774 microseconds.
Exception in thread "main" java.lang.IllegalArgumentException: You must feed a value for placeholder tensor 'input_y' with dtype float and shape [?,2]
[[{{node input_y}} = Placeholder[_output_shapes=[[?,2]], dtype=DT_FLOAT, shape=[?,2], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:314)
at org.tensorflow.Session$Runner.run(Session.java:264)
at Use_model.main(Use_model.java:38)
I don't know what the problem is or how to fix it.

There is some confusion about input_y in your code. The exception says:
You must feed a value for placeholder tensor 'input_y' with dtype float and shape [?,2]
This means that, in your Python code, input_y is defined as a placeholder. I guess this is the placeholder that holds the labels of the input_x items. input_y should then be used in your loss function to compare the last layer of your cnn (let's call it cnn.output) against the actual labels (cnn.input_y), e.g.:
loss = tf.square(cnn.input_y - cnn.output)
Then, your Python code should save cnn.output in the outputs dictionary, not cnn.input_y:
tf.saved_model.simple_save(
    sess,
    "/tmp/model/" + timestamp,
    inputs={"input_x": cnn.input_x},
    outputs={"output": cnn.output})
In your Java code you should then fetch "output":
float[] result = sess.runner()
        .feed("input_x", x)
        .fetch("output")
        .run()
        .get(0)
        .copyTo(new float[1][2])[0];
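Note that feed() and fetch() take graph operation names, which do not have to match the keys used in simple_save. If "output" is not found under that name, one way to discover the actual node names is to iterate over the operations of the loaded graph — a minimal sketch, assuming the SavedModelBundle b from the question's code (requires java.util.Iterator and org.tensorflow.Operation imports):
// Print every operation name in the loaded graph so the real output node can be identified.
Iterator<Operation> ops = b.graph().operations();
while (ops.hasNext()) {
    System.out.println(ops.next().name());
}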

Related

How to pass input data to an existing TensorFlow 2.x model in Java?

I'm taking my first steps with TensorFlow. After creating a simple model for the MNIST data in Python, I now want to import this model into Java and use it for classification. However, I can't manage to pass the input data to the model.
Here is the Python code for model creation:
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32')
train_images /= 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32')
test_images /= 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
NrTrainimages = train_images.shape[0]
NrTestimages = test_images.shape[0]
import os
import numpy as np
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K
# Network architecture
model = Sequential()
mnist_inputshape = train_images.shape[1:4]
# Convolutional block 1
model.add(Conv2D(32, kernel_size=(5,5),
activation = 'relu',
input_shape=mnist_inputshape,
name = 'Input_Layer'))
model.add(MaxPooling2D(pool_size=(2,2)))
# Convolutional block 2
model.add(Conv2D(64, kernel_size=(5,5),activation= 'relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
# Prediction block
model.add(Flatten())
model.add(Dense(128, activation='relu', name='features'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax', name = 'Output_Layer'))
model.compile(loss='categorical_crossentropy',
optimizer='Adam',
metrics=['accuracy'])
LOGDIR = "logs"
my_tensorboard = TensorBoard(log_dir = LOGDIR,
histogram_freq=0,
write_graph=True,
write_images=True)
my_batch_size = 128
my_num_classes = 10
my_epochs = 5
history = model.fit(train_images, train_labels,
batch_size=my_batch_size,
callbacks=[my_tensorboard],
epochs=my_epochs,
use_multiprocessing=False,
verbose=1,
validation_data=(test_images, test_labels))
score = model.evaluate(test_images, test_labels)
modeldir = 'models'
model.save(modeldir, save_format = 'tf')
For Java, I am trying to adapt the App.java code published here.
I am struggling with replacing this snippet:
Tensor result = s.runner()
        .feed("input_tensor", inputTensor)
        .feed("dropout/keep_prob", keep_prob)
        .fetch("output_tensor")
        .run().get(0);
While that code uses a particular named input tensor to pass the data, my model contains only layers and no individually named tensors. Thus, the following doesn't work:
Tensor<?> result = s.runner()
        .feed("Input_Layer/kernel", inputTensor)
        .fetch("Output_Layer/kernel")
        .run().get(0);
How do I pass the data to and get the output from my model in Java?
With the newest version of TensorFlow Java, you don't need to search for the names of the input/output tensors in the model signature or in the graph yourself. You can simply call the following:
try (SavedModelBundle model = SavedModelBundle.load("./model", "serve");
     Tensor<TFloat32> image = TFloat32.tensorOf(...); // there are many ways to pass your image bytes here
     Tensor<TFloat32> result = model.call(image).expect(TFloat32.DTYPE)) {
    System.out.println("Result is " + result.data().getFloat());
}
TensorFlow Java will automatically take care of mapping your input/output tensors to the right nodes.
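For example, if the MNIST image is already held as a float array, one way to build the TFloat32 input is via the NdArray utilities that ship with TensorFlow Java (a minimal sketch; StdArrays from org.tensorflow.ndarray is assumed to be available in the same API version as the snippet above):
// Sketch: wrap a [1][28][28][1] float array of normalized pixel values in a TFloat32 tensor.
float[][][][] pixels = new float[1][28][28][1]; // fill with values scaled to 0..1
try (Tensor<TFloat32> image = TFloat32.tensorOf(StdArrays.ndCopyOf(pixels))) {
    // pass 'image' to model.call(...) exactly as in the snippet above
}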
I finally managed to find a solution. To get all the tensor names in the graph, I used the following code:
for (Iterator<Operation> it = smb.graph().operations(); it.hasNext(); ) {
    Operation op = it.next();
    System.out.println("Operation name: " + op.name());
}
From this, I figured out that the following works:
SavedModelBundle smb = SavedModelBundle.load("./model", "serve");
Session s = smb.session();
Tensor<Float> inputTensor = Tensor.<Float>create(imagesArray, Float.class);
Tensor<Float> result = s.runner()
        .feed("serving_default_Input_Layer_input", inputTensor)
        .fetch("StatefulPartitionedCall")
        .run().get(0).expect(Float.class);
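For completeness, imagesArray has to match the model's input shape of (batch, 28, 28, 1). A minimal sketch of preparing a single image for the call above (grayscalePixels is a hypothetical 28x28 array of 0-255 pixel values, not part of the original code):
// Build a single-image batch matching the (1, 28, 28, 1) float input of the model.
float[][][][] imagesArray = new float[1][28][28][1];
for (int row = 0; row < 28; row++) {
    for (int col = 0; col < 28; col++) {
        // normalize to 0..1, matching the training-time scaling in the Python code
        imagesArray[0][row][col][0] = grayscalePixels[row][col] / 255.0f;
    }
}
Tensor<Float> inputTensor = Tensor.<Float>create(imagesArray, Float.class);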

Loading a saved TensorFlow model in Java

I have developed a TensorFlow model with Python in Linux based on the tutorial here: "http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/". I trained and saved the model using "tf.train.Saver". I am able to deploy the model in the Linux environment and perform prediction successfully. Now I need to be able to load this saved model in Java on Windows. Through extensive research online I have read that it does not work with "tf.train.Saver" and that I have to change my code to use "Serving" to be able to load a saved TF model in Java. Therefore, I followed the tutorial here: "https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_saved_model.py" and changed my code. However, I have an error with "tf.FixedLenFeature" where it is asking me to use "FixedLenSequenceFeature". Here is the complete error message:
"ValueError: First dimension of shape for feature x unknown. Consider using FixedLenSequenceFeature."
which is happening here:
feature_configs = {'x': tf.FixedLenFeature(shape=[None, img_size,img_size,num_channels], dtype=tf.float32),}
I am not sure this is the right path to take, since I have a batch of images of size [batchsize*128*128*3] and should not be using the sequence feature! It would be great if someone could clear this up for me and answer these questions:
1- Do I have to change my code from "tf.train.Saver" to "Serving" to be able to load the saved model and deploy it in Java?
2- If the answer to the above question is yes, how can I feed the data correctly and solve the aforementioned error?
3- Is there any example of how to deploy a model that was saved using "Serving"?
Here is my training code that throws the error:
import dataset
import tensorflow as tf
import time
from datetime import timedelta
import math
import random
import numpy as np
import os
#Adding Seed so that random initialization is consistent
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
batch_size = 32
#Prepare input data
classes = ['class1','class2','class3']
num_classes = len(classes)
# 20% of the data will automatically be used for validation
validation_size = 0.2
img_size = 128
num_channels = 3
train_path='/home/user1/Downloads/Expression/Augmented/Data/Train'
# We shall load all the training and validation images and labels into memory using openCV and use that during training
data = dataset.read_train_sets(train_path, img_size, classes, validation_size=validation_size)
print("Complete reading input data. Will Now print a snippet of it")
print("Number of files in Training-set:\t\t{}".format(len(data.train.labels)))
print("Number of files in Validation-set:\t{}".format(len(data.valid.labels)))
session = tf.Session()
serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
feature_configs = {'x': tf.FixedLenFeature(shape=[None, img_size,img_size,num_channels], dtype=tf.float32),}
tf_example = tf.parse_example(serialized_tf_example, feature_configs)
x = tf.identity(tf_example['x'], name='x') # use tf.identity() to assign name
# x = tf.placeholder(tf.float32, shape=[None, img_size,img_size,num_channels], name='x')
## labels
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
##Network graph params
filter_size_conv1 = 3
num_filters_conv1 = 32
filter_size_conv2 = 3
num_filters_conv2 = 32
filter_size_conv3 = 3
num_filters_conv3 = 64
fc_layer_size = 128
def create_weights(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def create_biases(size):
    return tf.Variable(tf.constant(0.05, shape=[size]))
def create_convolutional_layer(input,
                               num_input_channels,
                               conv_filter_size,
                               num_filters):
    ## We shall define the weights that will be trained using create_weights function.
    weights = create_weights(shape=[conv_filter_size, conv_filter_size, num_input_channels, num_filters])
    ## We create biases using the create_biases function. These are also trained.
    biases = create_biases(num_filters)
    ## Creating the convolutional layer
    layer = tf.nn.conv2d(input=input,
                         filter=weights,
                         strides=[1, 1, 1, 1],
                         padding='SAME')
    layer += biases
    ## We shall be using max-pooling.
    layer = tf.nn.max_pool(value=layer,
                           ksize=[1, 2, 2, 1],
                           strides=[1, 2, 2, 1],
                           padding='SAME')
    ## Output of pooling is fed to Relu which is the activation function for us.
    layer = tf.nn.relu(layer)
    return layer
def create_flatten_layer(layer):
    # We know that the shape of the layer will be [batch_size img_size img_size num_channels]
    # But let's get it from the previous layer.
    layer_shape = layer.get_shape()
    ## Number of features will be img_height * img_width * num_channels. But we shall calculate it in place of hard-coding it.
    num_features = layer_shape[1:4].num_elements()
    ## Now, we flatten the layer so we shall have to reshape to num_features
    layer = tf.reshape(layer, [-1, num_features])
    return layer
def create_fc_layer(input,
                    num_inputs,
                    num_outputs,
                    use_relu=True):
    # Let's define trainable weights and biases.
    weights = create_weights(shape=[num_inputs, num_outputs])
    biases = create_biases(num_outputs)
    # Fully connected layer takes input x and produces wx+b. Since these are matrices, we use the matmul function in Tensorflow
    layer = tf.matmul(input, weights) + biases
    if use_relu:
        layer = tf.nn.relu(layer)
    return layer
layer_conv1 = create_convolutional_layer(input=x,
num_input_channels=num_channels,
conv_filter_size=filter_size_conv1,
num_filters=num_filters_conv1)
layer_conv2 = create_convolutional_layer(input=layer_conv1,
num_input_channels=num_filters_conv1,
conv_filter_size=filter_size_conv2,
num_filters=num_filters_conv2)
layer_conv3= create_convolutional_layer(input=layer_conv2,
num_input_channels=num_filters_conv2,
conv_filter_size=filter_size_conv3,
num_filters=num_filters_conv3)
layer_flat = create_flatten_layer(layer_conv3)
layer_fc1 = create_fc_layer(input=layer_flat,
num_inputs=layer_flat.get_shape()[1:4].num_elements(),
num_outputs=fc_layer_size,
use_relu=True)
layer_fc2 = create_fc_layer(input=layer_fc1,
num_inputs=fc_layer_size,
num_outputs=num_classes,
use_relu=False)
y_pred = tf.nn.softmax(layer_fc2,name='y_pred')
y_pred_cls = tf.argmax(y_pred, dimension=1)
values, indices = tf.nn.top_k(y_pred, 3)
table = tf.contrib.lookup.index_to_string_table_from_tensor(
tf.constant([str(i) for i in xrange(3)]))
prediction_classes = table.lookup(tf.to_int64(indices))
session.run(tf.global_variables_initializer())
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session.run(tf.global_variables_initializer())
def show_progress(epoch, feed_dict_train, feed_dict_validate, val_loss):
    acc = session.run(accuracy, feed_dict=feed_dict_train)
    val_acc = session.run(accuracy, feed_dict=feed_dict_validate)
    msg = "Training Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Validation Loss: {3:.3f}"
    print(msg.format(epoch + 1, acc, val_acc, val_loss))
total_iterations = 0
# saver = tf.train.Saver()
def train(num_iteration):
    global total_iterations
    for i in range(total_iterations,
                   total_iterations + num_iteration):
        x_batch, y_true_batch, _, cls_batch = data.train.next_batch(batch_size)
        x_valid_batch, y_valid_batch, _, valid_cls_batch = data.valid.next_batch(batch_size)
        feed_dict_tr = {x: x_batch,
                        y_true: y_true_batch}
        feed_dict_val = {x: x_valid_batch,
                         y_true: y_valid_batch}
        session.run(optimizer, feed_dict=feed_dict_tr)
        if i % int(data.train.num_examples/batch_size) == 0:
            print(i)
            val_loss = session.run(cost, feed_dict=feed_dict_val)
            epoch = int(i / int(data.train.num_examples/batch_size))
            show_progress(epoch, feed_dict_tr, feed_dict_val, val_loss)
            print("Saving the model Now!")
            # saver.save(session, save_path_full, global_step=i)
    total_iterations += num_iteration
train(num_iteration=10000)#3000
# Export model
# WARNING(break-tutorial-inline-code): The following code snippet is
# in-lined in tutorials, please update tutorial documents accordingly
# whenever code changes.
export_path_base = './SavedModel/'
export_path = os.path.join(
tf.compat.as_bytes(export_path_base),
tf.compat.as_bytes(str(1)))
print 'Exporting trained model to', export_path
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
# Build the signature_def_map.
classification_inputs = tf.saved_model.utils.build_tensor_info(
serialized_tf_example)
classification_outputs_classes = tf.saved_model.utils.build_tensor_info(
prediction_classes)
classification_outputs_scores = tf.saved_model.utils.build_tensor_info(values)
classification_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={
tf.saved_model.signature_constants.CLASSIFY_INPUTS:
classification_inputs
},
outputs={
tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES:
classification_outputs_classes,
tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES:
classification_outputs_scores
},
method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME))
tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
tensor_info_y = tf.saved_model.utils.build_tensor_info(y_pred)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={'images': tensor_info_x},
outputs={'scores': tensor_info_y},
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
builder.add_meta_graph_and_variables(
    session, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        'predict_images':
            prediction_signature,
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            classification_signature,
    },
    legacy_init_op=legacy_init_op)
builder.save()
print 'Done exporting!'

Using Spring Data MongoDB, how can I avoid a Duplicate vertices error?

I get the following error in one of the polygons I am importing.
Write failed with error code 16755 and error message 'Can't extract geo keys: { _id: "b9c5ac0c-e469-4b97-b059-436cd02ffe49", _class: .... ] Duplicate vertices: 0 and 15'
Full stack Trace: https://gist.github.com/boundaries-io/927aa14e8d1e42d7cf516dc25b6ebb66#file-stacktrace
This is the GeoJSON MultiPolygon I am importing using Spring Data MongoDB:
public class MyPolgyon {
    @Id
    String id;
    @GeoSpatialIndexed(type=GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonPoint position;
    @GeoSpatialIndexed(type=GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonPoint location;
    @GeoSpatialIndexed(type=GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonPolygon polygon;

    public static GeoJsonPolygon generateGeoJsonPolygon(List<LngLatAlt> coordinates) {
        List<Point> points = new ArrayList<Point>();
        for (LngLatAlt point : coordinates) {
            org.springframework.data.geo.Point dataPoint = new org.springframework.data.geo.Point(point.getLongitude(), point.getLatitude());
            points.add(dataPoint);
        }
        return new GeoJsonPolygon(points);
    }
}
How can I avoid this error in Java? I can load the GeoJSON fine in http://geojson.io.
Here is the GeoJSON: https://gist.github.com/boundaries-io/4719bfc386c3728b36be10af29860f4c#file-rol-ca-part1-geojson
I attempted removal of the duplicates using:
for (com.vividsolutions.jts.geom.Coordinate coordinate : geometry.getCoordinates()) {
    Point lngLatAlt = new Point(coordinate.x, coordinate.y);
    boolean isADup = points.contains(lngLatAlt);
    if (!isADup) {
        points.add(lngLatAlt);
    } else {
        LOGGER.debug("Duplicate, [" + lngLatAlt.toString() + "] index[" + count + "]");
    }
    count++;
}
Logging:
2017-10-27 22:38:18 DEBUG TestBugs:58 - Duplicate, [Point [x=-97.009868, y=52.358242]] index[15]
2017-10-27 22:38:18 DEBUG TestBugs:58 - Duplicate, [Point [x=-97.009868, y=52.358242]] index[3348]
In this case you have a duplicate vertex at index 0 and index 1341 of the 2nd polygon.
[ -62.95859676499998, 46.20653318300003 ]
The insertion fails when MongoDB tries to build the 2dsphere index for the document. Remove the coordinate at index 1341 and you should be able to persist successfully.
You just have to cleanse the data when you find the error.
You can write a small program to read the error from MongoDB and provide the corrected document back to the client. The client can act on those messages and retry the request.
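A minimal sketch of such a cleanup step (a hypothetical helper, assuming each ring is held as a list of Spring Data Point objects; note that the first and last vertex of a GeoJSON ring are intentionally equal and must be kept):
// Drop repeated interior vertices while preserving the closing vertex of the ring.
static List<Point> dropDuplicateVertices(List<Point> ring) {
    List<Point> cleaned = new ArrayList<>();
    for (int i = 0; i < ring.size(); i++) {
        Point p = ring.get(i);
        boolean isClosingPoint = i == ring.size() - 1 && p.equals(ring.get(0));
        if (isClosingPoint || !cleaned.contains(p)) {
            cleaned.add(p);
        }
    }
    return cleaned;
}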
More information on geo errors can be found here.
You can look at the GeoParser code here to see how and what errors are generated. For the specific error you got, take a look here: GeoParser. This error is generated by the S2 library that MongoDB uses for validation.

OpenCV crash when saving trained machine learning data (like SVM or ANN) to file

I have built a simple project in Android Studio and included OpenCV in order to train either an SVM (support vector machine) or an ANN (artificial neural network). Everything seems to go well, including data creation, training and inspection of the trained data, except for saving. Whenever I save an OpenCV ML object (like ann.save(...) or svm.save(...)), Android Studio crashes.
SVM -
When I extract the support vectors using the line
classifier.getSupportVectors()
the numbers seem sane. However, the app crashes when I step past a breakpoint placed at
classifier.save("C:\\foo\\trentsvm.txt");
In logcat I dug up the following feedback:
07-04 14:36:10.939 25258-25258/com.example.tbrandsa.opencvtest A/libc:
Fatal signal 11 (SIGSEGV), code 2, fault addr 0x7f755f53f0 in tid
25258 (ndsa.opencvtest) [ 07-04 14:36:10.942 439: 439 W/]
debuggerd: handling request: pid=25258 uid=10227 gid=10227 tid=25258
I get a similar error if I instead try to save an artificial neural network (ANN); see the update far below.
I have tried saving the file as XML and as txt, as "C:\trentsvm.someformat", and as "trentsvm.someformat". I also get the same error in my Eclipse Java project. High pain, no gain. Full code is below. Could you help?
PS: I use OpenCV version 3.2.0 and Android Studio 2.3.2.
// I based this code on material I found online. Not sure if all of it is important or good.
// Purpose: multilabel classification - digit recognition for android app.
// Create data and labels for a digit recognition algorithm
int numTargets = 10; // (0-9 => 10 types of labels)
int totalSamples = 100; // Could have been number of images of digits
int totalIndicators = 10; // Could have been number of properties per digit image.
Mat labels = new Mat(totalSamples,1, CvType.CV_16S);
Mat data = new Mat(totalSamples, totalIndicators,CvType.CV_16S);
// Fill with dummy values:
for (int s = 0; s < totalSamples; s++)
{
    int someLabel = s % numTargets;
    labels.put(s, 0, (double) someLabel);
    for (int m = 0; m < totalIndicators; m++)
    {
        int someDataValue = (s % numTargets) * totalIndicators + m;
        data.put(s, m, (double) someDataValue);
    }
}
data.convertTo(data, CvType.CV_32F);
labels.convertTo(labels, CvType.CV_32S);
SVM classifier = SVM.create();
TermCriteria criteria = new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER,100,0.1);
classifier.setKernel(SVM.LINEAR);
classifier.setType(SVM.C_SVC); //We choose here the type CvSVM::C_SVC that can be used for n-class classification (n >= 2).
classifier.setGamma(0.5);
classifier.setNu(0.5);
classifier.setC(1);
classifier.setTermCriteria(criteria);
classifier.train(data, Ml.ROW_SAMPLE, labels);
// Check how trained SVM predicts the training data
Mat estimates = new Mat(totalSamples, 1, CvType.CV_32F);
classifier.predict(data, estimates, StatModel.RAW_OUTPUT);
for (int i = 0; i < totalSamples; i++)
{
    double l = labels.get(i, 0)[0];
    double e = estimates.get(i, 0)[0];
    System.out.print("\n fact: " + l + ", estimate: " + e);
}
Mat suppV = classifier.getSupportVectors();
try {
    if (classifier.isTrained()) {
        // It crashes at the next line!
        classifier.save("C:\\foo\\trentsvm.txt");
    }
}
catch (Exception e)
{
}
Update July 5th: As suggested by ZdaR, I tried to use an in-phone address, but it did not solve the problem.
String address = Environment.getExternalStorageDirectory().getPath()+"/trentsvm.xml";
// address now has value "storage/emulated/0/trentsvm.xml"
classifier.save(address);
In logcat:
07-05 14:50:12.420 11743-11743/com.example.tbrandsa.opencv2 A/libc:
Fatal signal 11 (SIGSEGV), code 2, fault addr 0x7d517f1990 in tid
11743 (brandsa.opencv2)
[ 07-05 14:50:12.424 3134: 3134 W/ ] debuggerd: handling
request: pid=11743 uid=10319 gid=10319 tid=11743
Update July 6th:
When I run the same code in Eclipse with a debugger (JUnit 4, VM arguments: -Djava.library.path=C:\Users\tbrandsa\Downloads\opencv\build\java\x64;src\test\jniLibs), debugging on the PC without a device, the caught exception "e" contains the following:
cause= Exception,
detailMessage= "Unknown Exception" ,
stackTrace=> StackTraceElement[0] ,
suppressedExeptions= Collections$UnmodifiableRandomAccessList,
Update July 13th:
I just tried with an artificial neural network (ANN) too, and it crashes when trying to save.
Error:
Fatal signal 11 (SIGSEGV), code 1, fault addr 0x15a57e688000c in tid
8507 (brandsa.opencv2) debuggerd: handling request: pid=8507
uid=10319 gid=10319 tid=8507
Code:
// Mat data is of size 100*20*CV_32FC1,
// Mat labels is of size 100*1*CV_32FC1
// layerSizes is of size 3*1*CV_8UC1
int[] hiddenLayers = {10};
Mat layerSizes = new Mat(2 + hiddenLayers.length,1,CvType.CV_8U);
layerSizes.put(0, 0, data.width());
for (int l = 0; l < hiddenLayers.length; l++) {
    layerSizes.put(1 + l, 0, hiddenLayers[l]);
}
layerSizes.put(1 + hiddenLayers.length, 0,labels.width());
ANN_MLP ann = ANN_MLP.create();
ann.setLayerSizes(layerSizes);
ann.setActivationFunction(ANN_MLP.SIGMOID_SYM);
ann.train(data, Ml.ROW_SAMPLE , labels);
ann.save("/storage/emulated/0/Pictures/no.rema.priceagent.test/trentann.xml");

How to overcome SVMWithSGD throwing ArrayIndexOutOfBoundsException for indices bigger than 5000?

In order to detect visitor demographics based on their behavior, I used the SVM algorithm from Spark MLlib:
JavaRDD<LabeledPoint> data = MLUtils.loadLibSVMFile(sc.sc(), "labels.txt").toJavaRDD();
JavaRDD<LabeledPoint> training = data.sample(false, 0.6, 11L);
training.cache();
JavaRDD<LabeledPoint> test = data.subtract(training);
// Run training algorithm to build the model.
int numIterations = 100;
final SVMModel model = SVMWithSGD.train(training.rdd(), numIterations);
// Clear the default threshold.
model.clearThreshold();
JavaRDD<Tuple2<Object, Object>> scoreAndLabels = test.map(new SVMTestMapper(model));
Unfortunately, final SVMModel model = SVMWithSGD.train(training.rdd(), numIterations); throws an ArrayIndexOutOfBoundsException:
Caused by: java.lang.ArrayIndexOutOfBoundsException: 4857
labels.txt is a text file whose lines are composed of:
Visitor criterion (is male) | list of [siteId: access count]
1 27349:1 23478:1 35752:1 9704:2 27896:1 30050:2 30018:1
1 36214:1 26378:1 26606:1 26850:1 17968:2
1 21870:1 41294:1 37388:1 38626:1 10711:1 28392:1 20749:1
1 29328:1 34370:1 19727:1 29542:1 37621:1 20588:1 42426:1 30050:6 28666:1 23190:3 7882:1 35387:1 6637:1 32131:1 23453:1
I tried with a lot of data and algorithms, and as far as I can see it gives an error whenever site IDs are bigger than 5000.
Is there any way to overcome this, or another library that handles it? Or, since the data matrix is so sparse, should I use SVD?
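One thing worth checking (an assumption, not a confirmed fix) is the feature dimension: loadLibSVMFile infers the number of features from the file when it is not given explicitly, and here the site IDs are used directly as feature indices. MLUtils has an overload that takes the number of features as a parameter — a sketch, with 50000 as an illustrative upper bound on the site IDs:
// Load with an explicit feature count that covers the largest site ID in labels.txt.
int numFeatures = 50000; // illustrative value, large enough for all site IDs
JavaRDD<LabeledPoint> data =
        MLUtils.loadLibSVMFile(sc.sc(), "labels.txt", numFeatures).toJavaRDD();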
