I'm trying to do a simple prediction in DL4J (I'm going to use it later on a large dataset with n features), but no matter what I do my network just doesn't learn and behaves very strangely. Of course I studied all the tutorials and followed the same steps shown in the dl4j repo, but somehow it doesn't work for me.
For the dummy feature data I use:
double[][] features; where val = linspace(-10,10)... and x = Math.sqrt(Math.abs(val)) * val;
my y is: double[][] labels; where y = Math.sin(val) / val
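In code, the data generation looks roughly like this (just a sketch; nSamples and the linspace helper are placeholders, not my real method names):
double[] val = linspace(-10, 10, nSamples); // placeholder helper producing an evenly spaced grid
double[][] features = new double[nSamples][1];
double[][] labels = new double[nSamples][1];
for (int i = 0; i < nSamples; i++) {
    features[i][0] = Math.sqrt(Math.abs(val[i])) * val[i];
    labels[i][0] = Math.sin(val[i]) / val[i]; // note: undefined at val = 0
}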
DataSetIterator dataset_train_iter = getTrainingData(x_features, y_outputs_train, batchSize, rnd);
DataSetIterator dataset_test_iter = getTrainingData(x_features_test, y_outputs_test, batchSize, rnd);
// Normalize data, including labels (fitLabel=true)
NormalizerMinMaxScaler normalizer = new NormalizerMinMaxScaler(0, 1);
normalizer.fitLabel(true);
normalizer.fit(dataset_train_iter);
normalizer.fit(dataset_test_iter);
// Use the .transform function only if you are working with a small dataset and no iterator
normalizer.transform(dataset_train_iter.next());
normalizer.transform(dataset_test_iter.next());
dataset_train_iter.setPreProcessor(normalizer);
dataset_test_iter.setPreProcessor(normalizer);
//DataSet setNormal = dataset.next();
//Create the network
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
.seed(seed)
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
.weightInit(WeightInit.XAVIER)
//.miniBatch(true)
//.l2(1e-4)
//.activation(Activation.TANH)
.updater(new Nesterovs(0.1,0.3))
.list()
.layer(new DenseLayer.Builder().nIn(numInputs).nOut(20).activation(Activation.TANH)
.build())
.layer(new DenseLayer.Builder().nIn(20).nOut(10).activation(Activation.TANH)
.build())
.layer( new DenseLayer.Builder().nIn(10).nOut(6).activation(Activation.TANH)
.build())
.layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
.activation(Activation.IDENTITY)
.nIn(6).nOut(1).build())
.build();
//Train and fit network
final MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();
net.setListeners(new ScoreIterationListener(100));
//Train the network on the full data set, and evaluate it periodically
final INDArray[] networkPredictions = new INDArray[nEpochs / plotFrequency];
for (int i = 0; i < nEpochs; i++) {
//fit() already performs backpropagation; see the release notes:
// https://deeplearning4j.konduit.ai/release-notes/1.0.0-beta3
net.fit(dataset_train_iter);
dataset_train_iter.reset();
if((i+1) % plotFrequency == 0) networkPredictions[i/ plotFrequency] = net.output(x_features, false);
}
// evaluate and plot
dataset_test_iter.reset();
dataset_train_iter.reset();
INDArray predicted = net.output(dataset_test_iter, false);
System.out.println("PREDICTED ARRAY " + predicted);
INDArray output_train = net.output(dataset_train_iter, false);
//Revert data back to original values for plotting
// normalizer.revertLabels(predicted);
normalizer.revertLabels(output_train);
normalizer.revertLabels(predicted);
PlotUtil.plot(om, y_outputs_train, networkPredictions);
My output then looks very weird (see picture below), even when I use mini-batches (1, 20, 100 samples per batch), change the number of epochs, or add hidden nodes and hidden layers (I tried adding 1000 nodes and 5 layers). The network either outputs very stochastic values or one constant y. I just can't see what is going wrong here. Why doesn't the network even come close to the training function?
Another question: what does iter.reset() do exactly? Does it move the iterator's pointer back to batch 0 of the DataSetIterator?
A pretty common problem when people work on toy problems like this is dl4j's built-in assumption of minibatches (which 99% of real problems tend to be). You aren't actually doing minibatch learning here (which defeats the point of using an iterator, which is meant to iterate over slices of a dataset, not an in-memory toy dataset) - a small recommendation is to just use the plain DataSet API (which is what dataset.next() returns).
Ensure you turn off the minibatch penalty dl4j assigns to all losses with:
.miniBatch(false) - you can see that configuration here:
https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/NeuralNetConfiguration.java#L434
A unit test testing this behavior can be found here:
https://github.com/eclipse/deeplearning4j/blob/b4047006ac8175df295c2f3c008e7601437ea4dc/deeplearning4j/deeplearning4j-core/src/test/java/org/deeplearning4j/gradientcheck/GradientCheckTests.java#L94
For posterity, here is the relevant configuration:
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder().miniBatch(false)
.dataType(DataType.DOUBLE)
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).updater(new NoOp())
.list()
.layer(0,
new DenseLayer.Builder().nIn(4).nOut(3)
.dist(new NormalDistribution(0, 1))
.activation(Activation.TANH)
.build())
.layer(1, new OutputLayer.Builder(LossFunction.MCXENT)
.activation(Activation.SOFTMAX).nIn(3).nOut(3).build())
.build();
You'll notice two things: minibatch is set to false, and the data type is configured as double. You are welcome to try both for your problem.
To save memory, dl4j defaults to float as its data type. That is a reasonable choice for larger problems, but it may not work well for toy problems.
For reference, you can find the application of the minibatch math here:
https://github.com/eclipse/deeplearning4j/blob/fc735d30023981ebbb0fafa55ea9520ec44292e0/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/updater/BaseMultiLayerUpdater.java#L332
This affects the gradient updates.
The score penalty can be found in the output layer:
https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/BaseOutputLayer.java#L84
Essentially, both of these automatically scale the loss and the gradient updates by the minibatch size of your dataset.
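Applied to your network, those two changes would look roughly like this (a sketch that keeps your layers, updater and seed unchanged):
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(seed)
        .miniBatch(false)          // disable the minibatch loss/gradient scaling
        .dataType(DataType.DOUBLE) // use double precision instead of the float default
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .weightInit(WeightInit.XAVIER)
        .updater(new Nesterovs(0.1, 0.3))
        .list()
        .layer(new DenseLayer.Builder().nIn(numInputs).nOut(20).activation(Activation.TANH).build())
        .layer(new DenseLayer.Builder().nIn(20).nOut(10).activation(Activation.TANH).build())
        .layer(new DenseLayer.Builder().nIn(10).nOut(6).activation(Activation.TANH).build())
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .activation(Activation.IDENTITY).nIn(6).nOut(1).build())
        .build();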
I'm working on a tool for humanitarian logistics. In this model I have some lorries which pick up items to support people affected by an earthquake and, after picking them up, drive to the earthquake epicenter to drop the items off. I need to manage the availability of the warehouses: for example, if a warehouse has 5 items available and a lorry has a transport capacity of 2, the availability of that warehouse has to become 3. Obviously I need to do this for every warehouse in my supply chain. As you can see in the picture I've uploaded, I've added a parameter (availability) to the warehouse class (named Magazzini).
This is the algorithm that manages lorry movement, into which I need to add the command that changes the availability.
List <Magazzini> subsetlist = findAll(main.magazzinis, w->w.capacita>0);
List <Magazzini> sortmag = new ArrayList<Magazzini>();
List <Double> distance = new ArrayList<Double>();
sortmag = subsetlist;
System.out.println(sortmag);
for (Magazzini m : subsetlist)
{
m.distance = distanceTo(m);
}
sortmag = sortAscending(sortmag, p-> p.distance);
//main.magazzinis.cap = main.magazzinis.cap - 2;
moveTo(sortmag.get(0));
System.out.println(sortmag);
partenza = time();
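What I'm trying to express is roughly the following (just a sketch of the intent; availability is the parameter I added to the Magazzini class, and 2 is the lorry's transport capacity):
// reduce the stock of the chosen (closest) warehouse by the lorry's capacity
Magazzini closest = sortmag.get(0);
closest.availability = closest.availability - 2;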
The command I actually wrote (the commented-out line in the code above) doesn't work, though. How can I fix it?
I'm trying to use OpenNLP to classify invoices. Based on their descriptions, I will group them into two classes. I have built a training file with 20K descriptions and tagged each one with the correct class.
The training data looks like (first column is a code, that I use as class, and the second column is the invoice description):
85171231 IPHONE 5S CINZA ESPACIAL 16GB (ME432BZA)
85171231 Galaxy S6 SM-G920I
85171231 motorola - MOTO G5 XT1672
00000000 MOTONETA ITALIKA AT110
00000000 CJ BOX UNIBOX MOLA 138X57X188 VINHO
Using DocumentCategorizer from OpenNLP, I achieved 98.5% correctness. But, trying to improve the accuracy, I took the wrongly categorized documents and used them to expand the training data.
For instance, when I first ran it, "MOTONETA ITALIKA AT110" was classified as "85171231". That's fine, since in the first run "MOTONETA ITALIKA AT110" wasn't in the training data. So I taught the classifier explicitly by adding "MOTONETA ITALIKA AT110" tagged as "00000000".
But, running it again, OpenNLP insists on classifying it as "85171231", even though the training data contains an explicit mapping to "00000000".
So my question is: am I teaching OpenNLP right? How do I improve its accuracy?
The code that I'm using is:
MarkableFileInputStreamFactory dataIn = new MarkableFileInputStreamFactory(new File("data.train"));
ObjectStream<String> lineStream = new PlainTextByLineStream(dataIn, StandardCharsets.UTF_8);
ObjectStream<DocumentSample> sampleStream = new DocumentSampleStream(lineStream);
TrainingParameters params = new TrainingParameters();
params.put(TrainingParameters.ITERATIONS_PARAM, "100");
params.put(TrainingParameters.CUTOFF_PARAM, "0");
DoccatModel model = DocumentCategorizerME.train("pt", sampleStream, params, new DoccatFactory());
DocumentCategorizer doccat = new DocumentCategorizerME(model);
double[] aProbs = doccat.categorize("MOTONETA ITALIKA AT110".replaceAll("[^A-Za-z0-9 ]", " ").split(" "));
doccat.getBestCategory(aProbs);
By default, DocumentCategorizer uses a bag of words. That means the sequence of terms is not taken into account.
If any term of MOTONETA ITALIKA AT110 occurs with high frequency in the group 85171231, the classifier will be inclined to use that group.
You have a few alternatives:
You can add more variants of MOTONETA ITALIKA AT110 to the group 00000000;
Try changing the feature generator.
The second option would be to change the creation of your model, like this:
int minNgramSize = 2;
int maxNgramSize = 3;
DoccatFactory customFactory = new DoccatFactory(
new FeatureGenerator[]{
new BagOfWordsFeatureGenerator(),
new NGramFeatureGenerator(minNgramSize, maxNgramSize)
}
);
DoccatModel model = DocumentCategorizerME.train("pt", sampleStream, params, customFactory);
You can experiment with the feature generators by removing the BagOfWordsFeatureGenerator and changing the minimum and maximum n-gram sizes.
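For instance, a factory that relies only on n-gram features might look like this (a sketch; the n-gram sizes are just example values):
int minNgramSize = 1;
int maxNgramSize = 3;
DoccatFactory ngramOnlyFactory = new DoccatFactory(
        new FeatureGenerator[]{
                new NGramFeatureGenerator(minNgramSize, maxNgramSize)
        }
);
DoccatModel ngramModel = DocumentCategorizerME.train("pt", sampleStream, params, ngramOnlyFactory);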
I'm new to the deeplearning4j library, but I've got some experience with neural networks in general.
I'm trying to train a recurrent neural network (an LSTM in particular) which is supposed to detect beats in music in real time. All the examples for using recurrent neural nets with deeplearning4j that I've found so far use a reader which reads the training data from a file. Since I want to record music in real time via a microphone, I can't read a pregenerated file, so the data fed into the neural network is generated in real time by my application.
This is the code that I'm using to generate my network:
NeuralNetConfiguration.ListBuilder builder = new NeuralNetConfiguration.Builder()
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).iterations(1)
.learningRate(0.1)
.rmsDecay(0.95)
.regularization(true)
.l2(0.001)
.weightInit(WeightInit.XAVIER)
.updater(Updater.RMSPROP)
.list();
int nextIn = hiddenLayers.length > 0 ? hiddenLayers[0] : numOutputs;
builder = builder.layer(0, new GravesLSTM.Builder().nIn(numInputs).nOut(nextIn).activation("softsign").build());
for(int i = 0; i < hiddenLayers.length - 1; i++){
nextIn = hiddenLayers[i + 1];
builder = builder.layer(i + 1, new GravesLSTM.Builder().nIn(hiddenLayers[i]).nOut(nextIn).activation("softsign").build());
}
builder = builder.layer(hiddenLayers.length, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT).nIn(nextIn).nOut(numOutputs).activation("softsign").build());
MultiLayerConfiguration conf = builder.backpropType(BackpropType.TruncatedBPTT).tBPTTForwardLength(DEFAULT_RECURRENCE_DEPTH).tBPTTBackwardLength(DEFAULT_RECURRENCE_DEPTH)
.pretrain(false).backprop(true)
.build();
net = new MultiLayerNetwork(conf);
net.init();
In this case I'm using about 700 inputs (which is mostly FFT-data of the recorded audio), 1 output (which is supposed to output a number between 0 [no beat] and 1 [beat]) and my hiddenLayers array consists of the ints {50, 25, 10}.
For getting the output of the network I'm using this code:
double[] output = new double[]{net.rnnTimeStep(Nd4j.create(netInputData)).getDouble(0)};
where netInputData is the data I want to input into the network as a one-dimensional double array.
I'm relatively sure that this code is working fine, since I get some output for an untrained network which looks something like this when I plot it.
However, once I try to train a network (even if I train it just for a short time, which should alter the weights of the network just a little bit, so that the output should be very similar to the untrained network), I get an output which looks like a constant.
This is the code which I'm using to train the network:
for (int timestep = 0; timestep < trainingData.length - DEFAULT_RECURRENCE_DEPTH; timestep++) {
    // input: shape [miniBatch = 1, numInputs, timeSeriesLength = DEFAULT_RECURRENCE_DEPTH]
    INDArray inputDataArray = Nd4j.create(new int[]{1, numInputs, DEFAULT_RECURRENCE_DEPTH}, 'f');
    for (int inputPos = 0; inputPos < trainingData[timestep].length; inputPos++) {
        for (int inputTimeWindowPos = 0; inputTimeWindowPos < DEFAULT_RECURRENCE_DEPTH; inputTimeWindowPos++) {
            inputDataArray.putScalar(new int[]{0, inputPos, inputTimeWindowPos},
                    trainingData[timestep + inputTimeWindowPos][inputPos]);
        }
    }
    // labels: shape [miniBatch = 1, numOutputs, timeSeriesLength = DEFAULT_RECURRENCE_DEPTH]
    INDArray desiredOutputDataArray = Nd4j.create(new int[]{1, numOutputs, DEFAULT_RECURRENCE_DEPTH}, 'f');
    for (int outputPos = 0; outputPos < desiredOutputData[timestep].length; outputPos++) {
        for (int inputTimeWindowPos = 0; inputTimeWindowPos < DEFAULT_RECURRENCE_DEPTH; inputTimeWindowPos++) {
            desiredOutputDataArray.putScalar(new int[]{0, outputPos, inputTimeWindowPos},
                    desiredOutputData[timestep + inputTimeWindowPos][outputPos]);
        }
    }
    net.fit(new DataSet(inputDataArray, desiredOutputDataArray));
}
Once again, I've got my data for the input and for the desired output as a double array. This time the two arrays are two-dimensional. The first index represents the time (where index 0 is the first audio data of the recorded audio) and the second index represents the input (or respectively the desired output) for this time step.
Given the shown output after training a network, I tend to think that there must be something wrong with my code used for creating the INDArrays from my data. Am I missing some important step for initializing these arrays or did I mess up the order I need to put my data into these arrays?
Thank you for any help in advance.
I'm not sure, but perhaps 99.99% of your training examples are 0, with only an occasional 1 exactly where the beat occurs. This might be too imbalanced to learn. Good luck.
I am trying to model a genetics problem we are trying to solve, building up to it in steps. I can successfully run the PiAverage example from the Spark examples. That example "throws darts" at a circle (10^6 darts in our case) and counts the number that "land in the circle" to estimate Pi.
Let's say I want to repeat that process 1000 times (in parallel) and average all those estimates. I am trying to figure out the best approach; it seems like there would have to be two calls to parallelize? Nested calls? Is there no way to chain map or reduce calls together? I can't see it.
I want to know the wisdom of something like the idea below. I thought of tracking the resulting estimates using an accumulator. jsc is my SparkContext, full code of single run is at end of question, thanks for any input!
Accumulator<Double> accum = jsc.accumulator(0.0);
// make a list 1000 long to pass to parallelize (no for loops in Spark, right?)
List<Integer> numberOfEstimates = new ArrayList<Integer>(HOW_MANY_ESTIMATES);
// pass this "dummy list" to parallelize, which then
// calls a pieceOfPI method to produce each individual estimate
// accumulating the estimates. PieceOfPI would contain a
// parallelize call too with the individual test in the code at the end
jsc.parallelize(numberOfEstimates).foreach(accum.add(pieceOfPI(jsc, numList, slices, HOW_MANY_ESTIMATES)));
// get the value of the total of PI estimates and print their average
double totalPi = accum.value();
// output the average of averages
System.out.println("The average of " + HOW_MANY_ESTIMATES + " estimates of Pi is " + totalPi / HOW_MANY_ESTIMATES);
The matrix-based and other answers I see on SO don't seem to address this specific question; I have done several searches but I am not seeing how to do this without "parallelizing the parallelization." Is that a bad idea?
(and yes I realize mathematically I could just do more estimates and effectively get the same results :) Trying to build a structure my boss wants, thanks again!
I have put my entire single-test program here if that helps, sans an accumulator I was testing out. The core of this would become PieceOfPI():
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import org.apache.spark.Accumulable;
import org.apache.spark.Accumulator;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.SparkConf;
public class PiAverage implements Serializable {
public static void main(String[] args) {
PiAverage pa = new PiAverage();
pa.go();
}
public void go() {
// should make a parameter like all these finals should be
// int slices = (args.length == 1) ? Integer.parseInt(args[0]) : 2;
final int SLICES = 16;
// how many "darts" are thrown at the circle to get one single Pi estimate
final int HOW_MANY_DARTS = 1000000;
// how many "dartboards" to collect to average the Pi estimate, which we hope converges on the real Pi
final int HOW_MANY_ESTIMATES = 1000;
SparkConf sparkConf = new SparkConf().setAppName("PiAverage")
.setMaster("local[4]");
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
// setup "dummy" ArrayList of size HOW_MANY_DARTS -- how many darts to throw
List<Integer> throwsList = new ArrayList<Integer>(HOW_MANY_DARTS);
for (int i = 0; i < HOW_MANY_DARTS; i++) {
throwsList.add(i);
}
// setup "dummy" ArrayList of size HOW_MANY_ESTIMATES
List<Integer> numberOfEstimates = new ArrayList<Integer>(HOW_MANY_ESTIMATES);
for (int i = 0; i < HOW_MANY_ESTIMATES; i++) {
numberOfEstimates.add(i);
}
JavaRDD<Integer> dataSet = jsc.parallelize(throwsList, SLICES);
long totalPi = dataSet.filter(new Function<Integer, Boolean>() {
public Boolean call(Integer i) {
double x = Math.random();
double y = Math.random();
return x * x + y * y < 1;
}
}).count();
System.out.println(
"The estimate of Pi from " + HOW_MANY_DARTS + " darts is " + 4 * totalPi / (double) HOW_MANY_DARTS);
jsc.stop();
jsc.close();
}
}
Let me start with your "background question". Transformation operations like map, join, groupBy, etc. fall into two categories: those that require a shuffle of data as input from all the partitions, and those that don't. Operations like groupBy and join require a shuffle, because you need to bring together all records with the same keys from all of the RDD's partitions (think of how SQL JOIN and GROUP BY operations work). On the other hand, map, flatMap, filter, etc. don't require shuffling, because each operation works fine on the output of the previous step's partition. They work on single records at a time, not on groups of them with matching keys, so no shuffling is necessary.
This background is necessary to understand that an "extra map" does not add significant overhead. A sequence of operations like map, flatMap, etc. is "squashed" together into a "stage" (which is shown when you look at the details of a job in the Spark web console), so that only one RDD is materialized: the one at the end of the stage.
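For instance, a chain like the following runs as a single stage, because each record flows straight through both narrow transformations without a shuffle (a toy sketch, unrelated to your code):
JavaRDD<Integer> nums = jsc.parallelize(java.util.Arrays.asList(1, 2, 3, 4, 5, 6));
// map and filter are narrow: each output partition depends on exactly one input partition,
// so Spark pipelines them in one stage and only materializes the final RDD
long count = nums.map(i -> i * i).filter(s -> s % 2 == 0).count();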
On to your first question. I wouldn't use an accumulator for this. They are intended for "side-band" data, like counting how many bad lines you parsed. In this example, you might use accumulators to count how many (x, y) pairs were inside the radius of 1 vs. outside.
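For illustration, such a side-band count could look like this (a sketch using the old Accumulator API already present in your imports, applied to the dataSet RDD from your program; it's bookkeeping, not the main result):
final Accumulator<Integer> inside = jsc.accumulator(0);
final Accumulator<Integer> outside = jsc.accumulator(0);
dataSet.foreach(i -> {
    double x = Math.random();
    double y = Math.random();
    if (x * x + y * y < 1) inside.add(1); else outside.add(1);
});
System.out.println("inside: " + inside.value() + ", outside: " + outside.value());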
The JavaPiSpark example in the Spark distribution is about as good as it gets. You should study why it works: it's the right dataflow model for Big Data systems. You could also use "aggregators": in the Javadocs, click the "index" and look at the agg, aggregate, and aggregateByKey functions. However, they are no easier to understand, and they are not necessary here. They provide greater flexibility than map followed by reduce, so they are worth knowing.
The problem with your code is that you are effectively trying to tell Spark what to do, rather than expressing your intent and letting Spark optimize how it does it for you.
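For example, one way to express that intent without nesting parallelize calls is to parallelize over the estimate ids and let each task throw its own darts - this is just a sketch, reusing the constants and the jsc from your program:
List<Integer> estimateIds = new ArrayList<Integer>(HOW_MANY_ESTIMATES);
for (int i = 0; i < HOW_MANY_ESTIMATES; i++) {
    estimateIds.add(i);
}
double averagePi = jsc.parallelize(estimateIds, SLICES)
        .map(estimateId -> {
            long hits = 0;
            for (int d = 0; d < HOW_MANY_DARTS; d++) {
                double x = Math.random();
                double y = Math.random();
                if (x * x + y * y < 1) hits++;
            }
            return 4.0 * hits / HOW_MANY_DARTS; // one independent Pi estimate per element
        })
        .reduce((a, b) -> a + b) / HOW_MANY_ESTIMATES;
System.out.println("The average of " + HOW_MANY_ESTIMATES + " estimates of Pi is " + averagePi);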
Finally, I suggest you buy and study O'Reilly's "Learning Spark". It does a good job explaining the internal details, like staging, and it shows lots of example code you can use, too.
I'm on OpenCV for Java (but that's not relevant, I guess). I'm using the BackgroundSubtractorMOG2 class, which is (poorly) documented here. I have read and understood the Zivkovic paper about the algorithm, which you can find here.
BackgroundSubtractorMOG2 takes in its constructor a parameter called history. What is it, and how does it influence the result? Could you point me to its reference inside the paper, for example?
From the class source code, line 106, it is said that alpha = 1/history. That would mean that history is namely the T parameter inside the paper, i.e. (more or less) the number of frames that constitute the training set.
However it doesn't seem so. Changing the value in the constructor, from 10 to 500 or beyond, has no effect on the final result. This is what I'm calling:
Mat result = new Mat();
int history = 10; //or 50, or 500, or whatever
BackgroundSubtractorMOG2 sub = new BackgroundSubtractorMOG2(history, 16, false);
for (....) {
sub.apply(frame[i], result);
}
imshow(result); //let's see last frame
It doesn't matter what history I set, be it 5, 10, 500, 1000 - the result is always the same. Whereas, if I change the alpha value (the learning rate) through apply(), I can see its real influence:
Mat result = new Mat();
double alpha = 0.1; //learning rate, 1/T (1/history?)
BackgroundSubtractorMOG2 sub = new BackgroundSubtractorMOG2(whatever, 16, false);
for (...) {
sub.apply(frame[i], result, alpha);
}
imshow(result);
If I change alpha here, result changes a lot, which is understandable. So, two conjectures:
history is not really 1/alpha as the source code states. But then, what is it, and how does it affect the algorithm?
history is really 1/alpha, but there's a bug in the java wrapper that makes the history value you set in the constructor useless.
Could you help me?
(Tagging c++ also as this is mainly a question about an OpenCV class and the whole OpenCV java framework is just a wrapper around c++).
It seems clear that alpha = 1 / history (except during an initial transient). In the BackgroundSubtractorMOG2Impl::apply method:
learningRate = learningRate >= 0 && nframes > 1 ? learningRate : 1./std::min( 2*nframes, history );
You can check whether the BackgroundSubtractorMOG2 object is actually using the history value you pass in the constructor by calling the getHistory() method.
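For example, with the OpenCV 3.x Java bindings (where a factory method replaces the constructor), a quick check could look like this; frames is just a placeholder for your input images:
Mat result = new Mat();
BackgroundSubtractorMOG2 sub = Video.createBackgroundSubtractorMOG2(500, 16, false);
System.out.println("history = " + sub.getHistory()); // should print 500
// a negative learning rate tells OpenCV to derive alpha itself as
// 1. / min(2 * nframes, history), per the line quoted above
for (int i = 0; i < frames.length; i++) {
    sub.apply(frames[i], result, -1);
}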