How to use my neural net properly with dl4j? - java

My problem:
I implemented a feed forward model and a recurrent model with deeplearning4j to detect anomalies in a 1D signal.
Maybe I'm missing an abstraction but I thought I could solve this problem the following way:
Preprocess the data. I have 5 different failure categories with roughly 40 examples each.
Each failure has its own "structure".
Building a neural net with 5 output neurons, one for each failure.
Train and evaluate.
Now I wanted to test my net with real data and it should detect the anomalies in a very long 1D
signal. The idea was that the net should somehow "iterate" over the signal and detect these failures
in it.
Is this approach even possible? Do you have any ideas?
Thanks in advance!

It depends on what the structure of those defects looks like.
Given that you have a 1D signal, I expect that your examples are a sequence of data that is effectively a window over your continuous signal.
There are multiple ways to model that problem:
Sliding window
This works if all of your examples have the same length. In that case, you can make a normal feed forward network, that just takes a fixed number of steps as input and returns a single classification.
If your real data doesn't have enough data, you can pad it, and if it has more data than the example length, then you slide over the sequence (e.g. with a window size of 2 the sequence abcd turns into [ab], [bc], [cd] and you get 3 classifications).
As far as I know there is nothing in DL4J out of the box that implements this solution. But on the other hand it shouldn't be too hard to implement it yourself using RecordConverter.toRecord and RecordConverter.toArray to transform your real data into NDArrays.
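For illustration, here is a rough sketch of the sliding-window loop itself, assuming a trained MultiLayerNetwork and a window size that matches your training examples; the variable names, the stride of one, and the argmax-based class readout are assumptions about your setup, not code from DL4J:

import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import java.util.Arrays;

public class SlidingWindowClassifier {
    public static void classify(MultiLayerNetwork net, double[] signal, int windowSize) {
        // slide a fixed-size window over the long signal, one step at a time
        for (int start = 0; start + windowSize <= signal.length; start++) {
            double[] window = Arrays.copyOfRange(signal, start, start + windowSize);
            INDArray features = Nd4j.create(window, new int[]{1, windowSize});
            INDArray out = net.output(features);            // one classification per window
            int predicted = Nd4j.argMax(out, 1).getInt(0);  // most likely failure class
            System.out.println("window at " + start + " -> class " + predicted);
        }
    }
}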
Recurrent Network
Using a recurrent network, you can apply a neural network to any length of sequence data. This will likely be your choice if the faults you are looking for can have different lengths in the signal.
The recurrent network can have an internal state that gets updated on each call during inference and it will produce a classification after each step of your signal.
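As a rough sketch of what that inference loop could look like with DL4J's rnnTimeStep (the input shape, the one-value-per-step layout, and the argmax readout are assumptions about your network, not a prescribed API usage):

import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class StreamingInference {
    public static void detect(MultiLayerNetwork net, double[] signal) {
        net.rnnClearPreviousState();                        // reset the internal recurrent state
        for (int t = 0; t < signal.length; t++) {
            // one time step: shape [miniBatchSize=1, nIn=1]
            INDArray input = Nd4j.create(new double[]{signal[t]}, new int[]{1, 1});
            INDArray out = net.rnnTimeStep(input);          // classification after this step
            int predicted = Nd4j.argMax(out, 1).getInt(0);  // most likely failure class
            System.out.println("t=" + t + " -> class " + predicted);
        }
    }
}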
Which solution is right for you will depend entirely on your actual, concrete use case.

Related

Neural networks in evolution simulator - how to calculate error without expected result

I am creating an evolution simulator in Java. The simulation consists of a map with cold/hot regions and high/low elevation, etc.
I want the creatures in the world to evolve in two ways: every single creature will evolve its AI during the course of its lifetime, and when a creature reproduces there is a chance of mutation.
I thought it would be good to make the brain of the creatures a neural network that takes the sensor's data as input (only eyes at the moment), and produces commands to the thrusters (which move the creature around).
However, I only have experience with basic neural networks that receive desired outputs from the user and calculate the error accordingly. In this simulator, there is no optimal result. Results can be rated by a fitness function I have created (which takes into account energy changes, number of offspring, etc.), but it is unknown which output node is wrong and which is right.
Am I using the correct approach for this problem? Or perhaps neural networks are not the best solution for it?
If it is a viable way to achieve what I desire, how can I make the neural network adjust the correct weights if I do not know them?
Thanks in advance, and sorry for any English mistakes.
You ran into a common problem with neural networks and games. As mentioned in the comments, a genetic algorithm is often used when there is no 'correct' solution.
So your goal is basically to somehow combine neural networks and genetic algorithms. Luckily somebody has done this before and described the process in this paper.
Since the paper is relatively complex and it is very time-consuming to implement the algorithm, you should consider using a library.
Since I couldn't find any suitable library, I decided to write my own; you can find it here
The library should work well enough for 'smaller' problems like yours. You will find some example code in the Main class.
Combine networks using
Network.breedWith(Network other);
Create networks using
Network net = new Network(int inputs, int outputs);
Mutate networks using
Network.innovate();
As you will see in the example code it is important to always have an initial amount of mutations for each new network. This is because when you create a new network there are no connections, so innovation (fancy word for mutation) is needed to create connections.
If needed you can always create copies of networks (Network.getCopy();). The Network class and all of its attributes implement Serializable, so you can save/load a network using an ObjectOutputStream.
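To make the flow concrete, here is a rough sketch of how those calls might fit together in an evolution loop. The population size, the number of initial innovations, the fitness function, and the assumption that breedWith returns the offspring network are all illustrative; check the example code in the Main class for the real usage.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class EvolutionSketch {

    public static void main(String[] args) {
        // initial population; each new network needs some innovations to create connections
        List<Network> population = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            Network net = new Network(4, 2);   // e.g. 4 sensor inputs, 2 thruster outputs
            for (int m = 0; m < 10; m++) {
                net.innovate();                // initial mutations to create connections
            }
            population.add(net);
        }

        for (int generation = 0; generation < 100; generation++) {
            // rank by the simulator's fitness function, highest first
            population.sort(Comparator.comparingDouble(EvolutionSketch::fitness).reversed());

            // breed the fittest networks; assumes breedWith returns the offspring
            List<Network> next = new ArrayList<>();
            for (int i = 0; i < population.size(); i++) {
                Network parentA = population.get(i % 10);
                Network parentB = population.get((i + 1) % 10);
                Network child = parentA.breedWith(parentB);
                child.innovate();              // chance of mutation in the offspring
                next.add(child);
            }
            population = next;
        }
    }

    // placeholder: in the simulator this would run the creature and score it
    // (energy changes, number of offspring, etc.)
    static double fitness(Network net) {
        return 0.0;
    }
}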
If you decide to use my library please let me know what results you got!

Cross Correlation: Android AudioRecord create sample data for TDoA

On one side with my Android smartphone I'm recording an audio stream using AudioRecord.read(). For the recording I'm using the following specs
SampleRate: 44100 Hz
MonoChannel
PCM-16Bit
size of the array I use for AudioRecord.read(): 100 (short array)
Using this small size allows me to read every 0.5 ms (mean value), so I can use this timestamp later for the multilateration (at least I think so :-) ). Maybe this will be obsolete if I can use cross correlation to determine the TDoA?!? (see below)
On the other side I have three speakers emitting different sounds using the WebAudio API and the following specs
freq1: 17500 Hz
freq2: 18500 Hz
freq3: 19500 Hz
signal length: 200 ms + a 5 ms fade-in and fade-out of the gain node, so 210 ms in total
My goal is to determine the time difference of arrival (TDoA) between the emitted sounds. So in each iteration I read 100 samples from my AudioRecord buffer and then I want to determine the time difference (if I found one of my sounds). So far I've used a simple frequency filter (using FFT) to determine the TDoA, but this is really inaccurate in the real world.
So far I've found out that I can use cross correlation to determine the TDoA value even better (http://paulbourke.net/miscellaneous/correlate/ and some threads here on SO). Now my problem: at the moment I think I have to correlate the recorded signal (my short array) with a generated signal of each of my three sounds above. But I'm struggling to generate this signal. Using the code found at (http://repository.tudelft.nl/view/ir/uuid%3Ab6c16565-cac8-448d-a460-224617a35ae1/ section B1.1. genTone()) does not clearly solve my problem because it will generate an array way bigger than my recorded samples. And as far as I know, cross correlation needs two arrays of the same size to work. So how can I generate a sample array?
Another question: is the thinking of how to determine the TDoA so far correct?
Here are some lessons I've learned over the past few days:
I can either use cross correlation (xcorr) or a frequency recognition technique to determine the TDoA. The latter is far more imprecise, so I focus on the xcorr.
I can achieve the TDoA by applying the xcorr to my recorded signal and two reference signals. E.g. my record has a length of 1000 samples. With the xcorr I recognize sound A at sample 500 and sound B at sample 600. So I know they have a time difference of 100 samples (which can be converted to seconds depending on the sample rate).
Therefore I generate a linear chirp (chirps are better than simple sine waves (see the literature)) using this code found on SO. For an easy example and to check if my experiment seems to work, I save my record as well as my generated chirp sounds as .wav files (there are plenty of code examples for how to do this). Then I use MATLAB as an easy way to calculate the xcorr: see here
Another point: "do the inputs of xcorr have to be the same size?" I'm not quite sure about this part, but I think this has to be done. We can achieve this by zero-padding the two signals to the same length (preferably a power of two, so we can use the efficient radix-2 implementation of the FFT) and then use the FFT to calculate the xcorr (see another link from SO).
I hope this is correct so far and covers some questions from other people :-)
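Here is a minimal, self-contained sketch of generating such a linear chirp as 16-bit PCM samples in the same 44100 Hz mono short[] format as the recording; the start/end frequencies and duration are parameters you would fill in for each of the three sounds:

public class ChirpGenerator {
    public static short[] linearChirp(double startHz, double endHz,
                                      double durationSec, int sampleRate) {
        int n = (int) (durationSec * sampleRate);
        short[] samples = new short[n];
        double k = (endHz - startHz) / durationSec;          // chirp rate in Hz per second
        for (int i = 0; i < n; i++) {
            double t = (double) i / sampleRate;
            // instantaneous phase of a linear chirp: 2*pi*(f0*t + k*t^2/2)
            double phase = 2 * Math.PI * (startHz * t + 0.5 * k * t * t);
            samples[i] = (short) Math.round(Math.sin(phase) * Short.MAX_VALUE);
        }
        return samples;
    }
}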
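A brute-force, time-domain sketch of that cross correlation looks like the following (an FFT-based version is much faster for long signals; here the zero padding is handled implicitly by only scoring lags where the reference fits inside the recording):

public class CrossCorrelation {
    // returns the lag (in samples) at which the reference correlates most strongly
    public static int bestLag(short[] recording, short[] reference) {
        int bestLag = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int lag = 0; lag + reference.length <= recording.length; lag++) {
            double score = 0;
            for (int i = 0; i < reference.length; i++) {
                score += (double) recording[lag + i] * reference[i];
            }
            if (score > bestScore) {
                bestScore = score;
                bestLag = lag;
            }
        }
        return bestLag;
    }
}

With two reference chirps, the difference between bestLag(recording, chirpA) and bestLag(recording, chirpB), divided by the sample rate, gives the TDoA between the two sounds.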

Encog - How to load training data for Neural Network

The NeuralDataSet objects that I've seen in action haven't been anything but XOR which is just two small data arrays... I haven't been able to figure out anything from the documentation on MLDataSet.
It seems like everything must be loaded at once. I would like to loop through the training data until I reach EOF and then count that as 1 epoch. However, in everything I've seen, all the data must be loaded into one 2D array from the beginning. How can I get around this?
I've read this question, and the answers didn't really help me. And besides that, I haven't found a similar question asked on here.
This is possible: you can either use an existing implementation of a data set that supports streaming operation, or you can implement your own on top of whatever source you have. Check out the BasicMLDataSet interface and the SQLNeuralDataSet code as an example. You will have to implement a codec if you have a specific format. For CSV there is an implementation already; I haven't checked whether it is memory based, though.
Remember when doing this that your data will be streamed fully for each epoch, and from my experience that is a much bigger bottleneck than the actual computation of the network.
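If you end up implementing your own, a library-agnostic sketch of the streaming idea looks roughly like this; the CSV layout, the column counts, and the per-record train(...) hook are assumptions for illustration, not Encog API:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class StreamingEpoch {
    public static void runEpoch(String csvPath, int inputCount, int idealCount) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(csvPath))) {
            String line;
            while ((line = reader.readLine()) != null) {    // reaching EOF ends the epoch
                String[] cols = line.split(",");
                double[] input = new double[inputCount];
                double[] ideal = new double[idealCount];
                for (int i = 0; i < inputCount; i++) input[i] = Double.parseDouble(cols[i]);
                for (int i = 0; i < idealCount; i++) ideal[i] = Double.parseDouble(cols[inputCount + i]);
                train(input, ideal);                        // hypothetical per-record training step
            }
        }
    }

    static void train(double[] input, double[] ideal) {
        // plug the actual training update for one record in here
    }
}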

Artificial Neural network PSO training

I am working on a FF Neural network (used for classification problems) which I am training using a PSO. I only have one hidden layer and I can vary the amount of neurons in that layer.
My problem is that the NN can learn linearly separable problems quite easily but cannot learn problems that are not linearly separable (like XOR), which it should be able to do.
I believe my PSO is working correctly because I can see that it tries to minimise the error function of each particle (using mean squared error over the training set).
I have tried using sigmoid and linear activation functions with similar (bad) results. I also have a bias unit (which doesn't help much either).
What I want to know is if there is something specific that I might be doing wrong that might cause this type of problem, or maybe just some things I should look at where the error might be.
I am a bit lost at the moment
Thanks
PSO can train a neural network to solve non-linearly separable problems like XOR. I've done this before; my algorithm takes about 50 or so iterations at most. Sigmoid is a good activation function for XOR. If it does not converge for non-separable problems, then my guess is that somehow your hidden layer is not having an effect, or is being bypassed, since the hidden layer is what typically makes non-separable problems learnable.
When I debug AI, I often find it useful to first determine whether my training code or my evaluation code (the neural network in this case) is at fault. You might want to create a second trainer for your network. Then you can make sure your network code is calculating the output correctly. You could even do a simple "hill climber": pick a random weight and change it by a random small amount (up or down). Did your error get better? Keep the weight change and repeat. Did your error get worse? Drop the change and try again.
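As a minimal sketch of that hill climber, assuming the weights live in a flat double[] and that an error function over the training set (e.g. mean squared error) is available; the ErrorFunction interface below is hypothetical, not part of any library:

import java.util.Random;

public class HillClimber {

    // error over the training set for a given weight vector, e.g. mean squared error
    public interface ErrorFunction {
        double evaluate(double[] weights);
    }

    public static void train(double[] weights, ErrorFunction errorFn,
                             int iterations, double stepSize) {
        Random rng = new Random();
        double currentError = errorFn.evaluate(weights);
        for (int i = 0; i < iterations; i++) {
            int idx = rng.nextInt(weights.length);                 // pick a random weight
            double delta = (rng.nextDouble() * 2 - 1) * stepSize;  // small change, up or down
            weights[idx] += delta;
            double newError = errorFn.evaluate(weights);
            if (newError < currentError) {
                currentError = newError;   // error got better: keep the change
            } else {
                weights[idx] -= delta;     // error got worse: drop the change and try again
            }
        }
    }
}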

Neural network for letter recognition

I'm trying to add to the code for a single layer neural network which takes a bitmap as input and has 26 outputs for the likelihood of each letter in the alphabet.
The first question I have is regarding the single hidden layer that is being added. Am I correct in thinking that the hidden layer will have its own set of output values and weights only? It doesn't need to have its own biases?
Can I also confirm that I'm thinking about the feedforward aspect correctly? Here's some pseudocode:
// input => hidden
for j in hiddenOutput.length:
    sum = 0
    for i in inputs.length:
        sum += inputs[i] * hiddenWeights[i][j]
    hiddenOutput[j] = activationFunction(sum)
// hidden => output
for j in output.length:
    sum = 0
    for i in hiddenOutput.length:
        sum += hiddenOutput[i] * weights[i][j]
    output[j] = activationFunction(sum)
Assuming that is correct, would the training be something like this?
def train(input[], desired[]):
    iterate through output and determine errors[]
    update weights & bias accordingly
    iterate through hiddenOutput and determine hiddenErrors[]
    update hiddenWeights & (same bias?) accordingly
Thanks in advance for any help, I've read so many examples and tutorials and I'm still having trouble determining how to do everything correctly.
Dylan, this is probably long after your homework assignment was due, but I do have a few thoughts about what you've posted.
Make the hidden layer much bigger than the size of the input bitmaps.
You should have different weights and biases from input -> hidden than from hidden -> output.
Spend a lot of time on your error function (discriminator).
Understand that neural nets have a tendency to get quickly locked in to a set of weights (usually incorrect). You'll need to start over and train in a different order.
The thing I learned about neural nets is that you never know why they're working (or not working). That alone is reason to keep it out of the realms of medicine and finance.
You might want to read http://www.ai-junkie.com/ann/evolved/nnt1.html . It mentions something about exactly what you are doing, and it also provides code along with a (mostly) simple explanation of how it learns. Although the learning aspect is completely different from feed-forward training, this should hopefully give you some ideas about the nature of NNs.
It is my belief that both the hidden and output layers should have a bias.
Also, NNs can be tricky; try identifying only one letter first, getting a consistent high/low signal from only a single output. Then try to keep that signal with different variations of the same letter. Then you can progress and add more. You might do that by training 26 different networks that each give an output only on a match, or you might make it one large NN with 26 outputs. Two different approaches.
As far as the use of bias terms is concerned, I found the section "Why use a bias/threshold?" in the comp.ai.neural-nets FAQ very useful. I highly recommend reading that FAQ.
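For reference, here is a compact, generic sketch (the array names are illustrative, not from the question's code) of how a separate bias per layer enters the weighted sums of the forward pass:

public class ForwardPass {

    // one layer: weighted sum of the inputs plus a per-neuron bias, then sigmoid
    static double[] layer(double[] in, double[][] weights, double[] bias) {
        double[] out = new double[bias.length];
        for (int j = 0; j < out.length; j++) {
            double sum = bias[j];                       // each neuron gets its own bias
            for (int i = 0; i < in.length; i++) {
                sum += in[i] * weights[i][j];
            }
            out[j] = 1.0 / (1.0 + Math.exp(-sum));      // sigmoid activation
        }
        return out;
    }

    static double[] feedForward(double[] inputs, double[][] hiddenWeights, double[] hiddenBias,
                                double[][] outputWeights, double[] outputBias) {
        double[] hidden = layer(inputs, hiddenWeights, hiddenBias);   // input -> hidden
        return layer(hidden, outputWeights, outputBias);              // hidden -> output
    }
}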
