Java: Simple neural network implementation not working

I'm trying to implement a neural network with:
5 input nodes (+1 bias)
1 hidden layer with 1 hidden node (+1 bias)
1 output unit.
The training data I'm using is the disjunction of the 5 input units. The overall error oscillates instead of decreasing, and reaches very high values.
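For concreteness, the disjunction training set over 5 binary inputs is the 32-row truth table whose label is 1 unless all five inputs are 0. A sketch that could generate such a disjunction.csv (the header row and column names are assumptions; the readCSV method below skips the first line it reads):

```java
public class DisjunctionData {
    // Builds the 32-row truth table for a 5-input OR:
    // the label is 1 unless all five inputs are 0.
    public static String generate() {
        StringBuilder sb = new StringBuilder("x1,x2,x3,x4,x5,y\n"); // header line (skipped by readCSV)
        for (int row = 0; row < 32; row++) {
            int label = 0;
            for (int bit = 4; bit >= 0; bit--) {
                int v = (row >> bit) & 1;
                label |= v; // OR of all inputs seen so far
                sb.append(v).append(',');
            }
            sb.append(label).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(generate());
    }
}
```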
package neuralnetworks;
import java.io.File;
import java.io.FileNotFoundException;
import java.math.*;
import java.util.Random;
import java.util.Scanner;
public class NeuralNetworks {
private double[] weightslayer1;
private double[] weightslayer2;
private int[][] training;
public NeuralNetworks(int inputLayerSize, int weights1, int weights2) {
weightslayer1 = new double[weights1];
weightslayer2 = new double[weights2];
}
public static int[][] readCSV() {
Scanner readfile = null;
try {
readfile = new Scanner(new File("disjunction.csv"));
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
Scanner delimit;
int[][] train = new int[32][6];
int lines = 0;
while (readfile.hasNext()) {
String line = readfile.nextLine();
delimit = new Scanner(line);
delimit.useDelimiter(",");
int features = 0;
while (delimit.hasNext() && lines > 0) {
train[lines - 1][features] = Integer.parseInt(delimit.next());
features++;
}
lines++;
}
return train;
}
public double linearcomb(double[] input, double[] weights) { //calculates the sum of the multiplication of weights and inputs
double sigma = 0;
for (int i = 0; i < input.length; i++) {
sigma += (input[i] * weights[i]);
}
return sigma;
}
public double hiddenLayerOutput(int[] inputs) { //calculates the output of the hiddenlayer
double[] formattedInput = new double[6]; //adds the bias unit
formattedInput[0] = 1;
for (int i = 1; i < formattedInput.length; i++)
formattedInput[i] = inputs[i - 1];
double hlOutput = linearcomb(formattedInput, weightslayer1);
return hlOutput;
}
public double feedForward(int[] inputs) { //calculates the output
double hlOutput = hiddenLayerOutput(inputs);
double[] olInput = new double[2];
olInput[0] = 1;
olInput[1] = hlOutput;
double output = linearcomb(olInput, weightslayer2);
return output;
}
public void backprop(double predoutput, double targetout, double hidout, double learningrate, int[] input) {
double outputdelta = predoutput * (1 - predoutput) * (targetout - predoutput);
double hiddendelta = hidout * (1 - hidout) * (outputdelta * weightslayer2[1]);
updateweights(learningrate, outputdelta, hiddendelta, input);
}
public void updateweights(double learningrate, double outputdelta, double hiddendelta, int[] input) {
for (int i = 0; i < weightslayer1.length; i++) {
double deltaw1 = learningrate * hiddendelta * input[i];
weightslayer1[i] += deltaw1;
}
for (int i = 0; i < weightslayer2.length; i++) {
double deltaw2 = learningrate * outputdelta * hiddenLayerOutput(input);
weightslayer2[i] += deltaw2;
}
}
public double test(int[] inputs) {
return feedForward(inputs);
}
public void train() {
double learningrate = 0.01;
double output;
double hiddenoutput;
double error = 100;
do {
error = 0;
for (int i = 0; i < training.length; i++) {
output = feedForward(training[i]);
error += (training[i][5] - output) * (training[i][5] - output) / 2;
hiddenoutput = hiddenLayerOutput(training[i]);
backprop(output, training[i][5], hiddenoutput, learningrate, training[i]);
}
//System.out.println(error);
}while(error>1);
}
public static void main(String[] args) {
NeuralNetworks nn = new NeuralNetworks(6, 6, 2);
Random rand = new Random();
nn.weightslayer2[0] = (rand.nextDouble() - 0.5);
nn.weightslayer2[1] = (rand.nextDouble() - 0.5);
for (int i = 0; i < nn.weightslayer1.length; i++)
nn.weightslayer1[i] = (rand.nextDouble() - 0.5);
nn.training = readCSV();
/*for (int i = 0; i < nn.training.length; i++) {
for (int j = 0; j < nn.training[i].length; j++)
System.out.print(nn.training[i][j] + ",");
System.out.println();
}*/
nn.train();
int[] testa = { 0, 0, 0, 0, 0 };
System.out.println(nn.test(testa));
}
}
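One thing worth noting when reading the code above: the deltas in backprop use the o * (1 - o) form, which is the derivative of the sigmoid expressed in terms of the sigmoid's output, so it only makes sense if the forward pass actually squashes its sums through a sigmoid. A minimal sketch of that pairing (the method names here are illustrative, not from the original):

```java
public class SigmoidPair {
    // Logistic activation: maps any real-valued sum into (0, 1).
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // Derivative written in terms of the output o = sigmoid(x),
    // i.e. the o * (1 - o) form used in the backprop deltas above.
    static double dsigmoidFromOutput(double o) {
        return o * (1.0 - o);
    }

    public static void main(String[] args) {
        double o = sigmoid(0.0);                   // 0.5
        System.out.println(o);
        System.out.println(dsigmoidFromOutput(o)); // 0.25
    }
}
```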

Related

Why is my neural network unable to solve MNIST? (debug help)

I have implemented a general neural network framework from scratch, and ensured it is ostensibly doing the right thing by solving the XOR problem (I have done so with both 1 and more hidden layers to ensure I don't have a bug in my multi-hidden layer implementation).
I've also implemented my own Matrix math library from scratch (more as an exercise in matrix math as a precursor to understanding the NN math rather than for lightweight reasons).
So now I am trying to finalise the framework by solving MNIST with a vector-input network with layers of size {784, 100, 50, 10}, a learning rate of 0.2, weights initialised to random floats in [-1, 1], and biases initialised to zero.
After training the network on all 60,000 training examples (running backprop for each one, no mini-batching), I'm getting rubbish out. I've verified that I'm not putting rubbish in; however, I've written the framework in Java, and the training data needs quite a bit of prep before it's usable.
I have mapped each value in [0, 255] to [0.01, 1] using the transformation x -> x * (0.99 / 255) + 0.01, so that no input is exactly zero (a zero input zeroes out the weight adjustments it feeds).
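That transformation in isolation, so the endpoints can be checked:

```java
public class PixelScale {
    // Maps a raw pixel value in [0, 255] to [0.01, 1.0], so no input is
    // exactly zero (a zero input contributes nothing to weight updates).
    static float scale(float v) {
        return v * (0.99f / 255f) + 0.01f;
    }

    public static void main(String[] args) {
        System.out.println(scale(0f));   // 0.01
        System.out.println(scale(255f)); // approximately 1.0
    }
}
```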
That's the background; I'll dump the code below. Hopefully it isn't too terribly written:
Network class
public class Network {
int inputNum, outputNum;
Layer[] layers;
float cost;
float learning_rate;
public Network(int input, int[] hidden, int output, float learning_rate, float weight_bound) {
this.inputNum = input;
this.outputNum = output;
this.layers = new Layer[hidden.length + 1];
this.learning_rate = learning_rate;
this.layers[0] = new Layer(input, hidden[0], weight_bound);
this.layers[this.layers.length - 1] = new Layer(hidden[hidden.length - 1], output, weight_bound);
for (int i = 1; i < hidden.length; i++) {
layers[i] = new Layer(hidden[i - 1], hidden[i], weight_bound);
}
}
public Matrix predict(Matrix input) throws DimensionException {
Matrix currentIn = input;
for (Layer l : this.layers) {
currentIn = l.forwardPropogate(currentIn);
}
return currentIn;
}
public void train(Matrix[] inputs, Matrix[] labels) throws DimensionException {
for (int i = 0; i < inputs.length; i++) {
Matrix currentIn = inputs[i];
for (Layer l : this.layers) {
currentIn = l.forwardPropogate(currentIn);
}
Matrix error = Matrix.sub(labels[i], currentIn);
this.cost = Matrix.apply(error, x -> 0.5f * x * x).sum();
for (int l = this.layers.length - 1; l >= 0; l--) {
error = this.layers[l].backwardPropogate(error, learning_rate);
}
}
}
public static float sigmoid(float x) {
return 1 / (1 + (float) Math.exp(-x));
}
public static float dsigmoid(float y) {
return y * (1 - y);
}
}
Layer class
import java.util.concurrent.ThreadLocalRandom;
public class Layer {
Matrix weights;
Matrix biases;
Matrix inValues;
Matrix outValues;
public Layer(int input_size, int output_size, float init_bound) {
this.weights = new Matrix(output_size, input_size);
this.weights.apply(x -> (float) ThreadLocalRandom.current().nextDouble(-init_bound, init_bound));
this.biases = new Matrix(output_size, 1);
}
public Matrix forwardPropogate(Matrix in) throws DimensionException {
this.inValues = in;
this.outValues = this.weights.multiply(this.inValues);
this.outValues.add(this.biases);
this.outValues.apply(x -> Network.sigmoid(x));
return outValues;
}
public Matrix backwardPropogate(Matrix out_error, float learning_rate) throws DimensionException {
Matrix errorByDSig = Matrix.apply(this.outValues, x -> Network.dsigmoid(x)).multiply(out_error);
Matrix weightAdjustment = errorByDSig.multiply(Matrix.transpose(this.inValues));
weightAdjustment.multiply(learning_rate);
Matrix biasAdjustment = errorByDSig.copy();
biasAdjustment.multiply(learning_rate);
Matrix newError = Matrix.transpose(this.weights).multiply(errorByDSig);
this.weights.add(weightAdjustment);
this.biases.add(biasAdjustment);
return newError;
}
}
Matrix library
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Function;
class Matrix {
int rows, cols;
float[][] matrix;
public Matrix (int rows, int cols) {
this.rows = rows;
this.cols = cols;
this.matrix = new float[this.rows][this.cols];
}
public void multiply(float n) {
this.apply(x -> (float)x * n);
}
public Matrix multiply(Matrix m) throws DimensionException {
Matrix newM;
if (this.cols == m.cols && this.rows == m.rows) {
newM = this.copy();
for (int r = 0; r < m.rows; r++) {
for (int c = 0; c < m.cols; c++) {
newM.matrix[r][c] *= m.matrix[r][c];
}
}
return newM;
}
if (this.cols == m.rows) {
newM = new Matrix(this.rows, m.cols);
for (int r = 0; r < this.rows; r++) {
for (int c = 0; c < m.cols; c++) {
float sum = 0;
for (int s = 0; s < m.rows; s++) {
sum += this.matrix[r][s] * m.matrix[s][c];
}
newM.matrix[r][c] = sum;
}
}
return newM;
}
throw new DimensionException("Matrix dimensions are incorrect");
}
public void add(float n) {
this.apply(x -> x + n);
}
public void add(Matrix m) throws DimensionException{
if (this.cols != m.cols || this.rows != m.rows) {
throw new DimensionException("Matrix dimensions are incorrect");
}
for (int r = 0; r < this.rows; r++) {
for (int c = 0; c < this.cols; c++) {
this.matrix[r][c] += m.matrix[r][c];
}
}
}
public void sub(float n) {
this.add(-n);
}
public void sub(Matrix m) throws DimensionException {
this.add(Matrix.apply(m, x -> -x));
}
public void apply(Function<Float, Float> f) {
for (int r = 0; r < this.rows; r++) {
for (int c = 0; c < this.cols; c++) {
this.matrix[r][c] = f.apply(this.matrix[r][c]);
}
}
}
public Matrix copy() {
Matrix m = new Matrix(this.rows, this.cols);
for (int r = 0; r < this.rows; r++) {
for (int c = 0; c < this.cols; c++) {
m.matrix[r][c] = this.matrix[r][c];
}
}
return m;
}
public String toString() {
String s = "";
for (int r = 0; r < this.rows; r++) {
for (int c = 0; c < this.cols; c++) {
s += Float.toString(this.matrix[r][c]) + " ";
}
s += "\n";
}
return s;
}
public float sum() {
float total = 0;
for (int r = 0; r < this.rows; r++) {
for (int c = 0; c < this.cols; c++) {
total += this.matrix[r][c];
}
}
return total;
}
public static Matrix multiply(Matrix m, float n) {
Matrix newM = m.copy();
newM.add(n);
return newM;
}
public static Matrix multiply(Matrix m, Matrix n) throws DimensionException {
Matrix newM = m.copy();
return newM.multiply(n);
}
public static Matrix add(Matrix m, float n) {
Matrix newM = m.copy();
newM.apply(x -> x + n);
return newM;
}
public static Matrix add(Matrix m, Matrix n) throws DimensionException {
Matrix newM = m.copy();
newM.add(n);
return newM;
}
public static Matrix sub(Matrix m, float n) {
Matrix newM = m.copy();
newM.sub(n);
return newM;
}
public static Matrix sub(Matrix m, Matrix n) throws DimensionException {
Matrix newM = m.copy();
newM.sub(n);
return newM;
}
public static Matrix apply(Matrix m, Function<Float, Float> f) {
Matrix newM = m.copy();
newM.apply(f);
return newM;
}
public static Matrix random(int rows, int cols, int bound) {
Matrix m = new Matrix(rows, cols);
m.apply(x -> x + ThreadLocalRandom.current().nextInt(bound));
return m;
}
public static Matrix fromArray(int rows, int cols, float[] array) {
Matrix newMatrix = new Matrix(rows, cols);
for (int r = 0; r < rows; r++) {
for (int c = 0; c < cols; c++) {
newMatrix.matrix[r][c] = array[r * cols + c];
}
}
return newMatrix;
}
public static Matrix transpose(Matrix m) {
Matrix newMatrix = new Matrix(m.cols, m.rows);
for (int r = 0; r < m.rows; r++) {
for (int c = 0; c < m.cols; c++) {
newMatrix.matrix[c][r] = m.matrix[r][c];
}
}
return newMatrix;
}
}
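One subtlety worth calling out in the Matrix class above: multiply(Matrix) performs an elementwise product when the two shapes match exactly, and a true matrix product otherwise, so multiplying two same-sized square matrices silently takes the elementwise path. A standalone illustration of how different the two operations are (plain arrays here, not the Matrix class):

```java
public class MulDemo {
    // Elementwise (Hadamard) product: shapes must match exactly.
    static float[][] hadamard(float[][] a, float[][] b) {
        float[][] out = new float[a.length][a[0].length];
        for (int r = 0; r < a.length; r++)
            for (int c = 0; c < a[0].length; c++)
                out[r][c] = a[r][c] * b[r][c];
        return out;
    }

    // True matrix product: a's column count must equal b's row count.
    static float[][] matmul(float[][] a, float[][] b) {
        float[][] out = new float[a.length][b[0].length];
        for (int r = 0; r < a.length; r++)
            for (int c = 0; c < b[0].length; c++)
                for (int s = 0; s < b.length; s++)
                    out[r][c] += a[r][s] * b[s][c];
        return out;
    }

    public static void main(String[] args) {
        float[][] a = {{1, 2}, {3, 4}}, b = {{5, 6}, {7, 8}};
        // Same pair of 2x2 matrices, two very different results:
        System.out.println(hadamard(a, b)[0][0]); // 5.0
        System.out.println(matmul(a, b)[0][0]);   // 19.0
    }
}
```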
Training / getting an output
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.Arrays;
public class tests {
public static void main(String[] args) throws DimensionException, FileNotFoundException, IOException {
File training_files = new File("C:/Users/2001b/OneDrive/Desktop/data/data/training");
File label_file = new File("C:/Users/2001b/OneDrive/Desktop/data/data/labels.csv");
Matrix[] training_data = new Matrix[60000];
Matrix[] training_labels = new Matrix[60000];
int file_count = 0;
for (File f : training_files.listFiles()) {
float[] pixels = new float[784];
int p_count = 0;
BufferedReader br = new BufferedReader(new FileReader(f));
String line;
while ((line = br.readLine()) != null) {
double[] values = Arrays.stream(line.split(",")).mapToDouble(Double::parseDouble).toArray();
for (double v : values) {
pixels[p_count++] = ((float) v * (0.99f / 255) + 0.01f);
}
}
training_data[file_count++] = Matrix.fromArray(784, 1, pixels);
br.close();
}
BufferedReader br = new BufferedReader(new FileReader(label_file));
String line;
int count = 0;
while ((line = br.readLine()) != null) {
int value = Integer.valueOf(line);
Matrix answerMatrix = new Matrix(10, 1);
answerMatrix.matrix[value][0] = 1;
training_labels[count++] = answerMatrix;
}
br.close();
System.out.println(training_labels[5]);
Network network = new Network(784, new int[] {100, 50}, 10, 0.2f, 1);
network.train(training_data, training_labels);
System.out.println(network.predict(training_data[5]));
}
}
After training with 10-element one-hot label vectors (all zeros except a 1 at the expected digit), I'm getting output vectors like this:
0.0629406
0.09087993
0.09197301
0.08965302
0.052927334
0.08770021
0.11267567
0.071576655
0.12798244
0.09146147
Any help would be much appreciated
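For reference, the predicted digit for an output vector like the one above is just the index of its largest component; a minimal sketch:

```java
public class Argmax {
    // Index of the largest component = the network's predicted class.
    static int argmax(float[] v) {
        int best = 0;
        for (int i = 1; i < v.length; i++)
            if (v[i] > v[best]) best = i;
        return best;
    }

    public static void main(String[] args) {
        // The output vector quoted in the question above.
        float[] out = {0.0629406f, 0.09087993f, 0.09197301f, 0.08965302f,
                       0.052927334f, 0.08770021f, 0.11267567f, 0.071576655f,
                       0.12798244f, 0.09146147f};
        System.out.println(argmax(out)); // 8
    }
}
```

Applied to the printed vector, the prediction would be digit 8, even though the components are all close to uniform.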

Average, Standard Deviation, and mode from text file with integers

I need help with my program, which calculates the average, standard deviation, and mode of up to 1,000 integers read from a text file. My program compiles; however, I end up with "null" as my output for both standard deviation and average, and "-1" for my mode.
import java.util.Scanner;
import java.io.File;
import java.io.IOException;
public class Statistics
{
/**default constructor for Statistics class*/
private static int [] unmodifiedNumbers, numbers;
public Statistics()
{
unmodifiedNumbers = new int [1000];
}
/**reads numbers from files and puts them into an array*/
public void readFromFile()
{
try
{
Scanner in = new Scanner(new File("numbers.txt"));
int x;
for(x = 0; in.hasNextInt(); x++)
unmodifiedNumbers[x] = in.nextInt();
numbers = new int [x];
for(int y = 0; y < x; y++)
numbers[y] = unmodifiedNumbers[y];
}
catch(Exception e)
{
System.out.println(e.getMessage());
}
}
/**calculates and returns and prints average*/
public static double avg()
{
double average = 0;
try
{
Scanner in = new Scanner(new File("numbers.txt"));
double total = 0;
for(int x = 0; x < numbers.length; x++)
total += x;
average = (double)total/numbers.length;
System.out.println(average);
}
catch(Exception c)
{
System.out.println(c.getMessage());
}
return average;
}
/**calculates and displays mode*/
public static void mode()
{
int count = 0, mode = -1, maxCount = 0;
try
{
count = 0;
Scanner in = new Scanner(new File("numbers.txt"));
for(int x = 0; x < numbers.length; x++)
{
for(int y = 0; y < numbers.length; y++)
if(numbers[y] == numbers[x])
count++;
if(count > maxCount)
{
mode = numbers[x];
maxCount = count;
}
}
}
catch(Exception b)
{
System.out.println(b.getMessage());
}
System.out.println(mode);
}
/**calculates and displays standard deviation*/
public static void stddev()
{
double stddev = 0;
long total = 0;
double average = avg();
try
{
Scanner in = new Scanner(new File("numbers.txt"));
for(int x = 0; x < numbers.length; x++)
total += Math.pow((average - numbers[x]), 2);
System.out.println(Math.sqrt((double)total/(numbers.length - 1)));
}
catch(Exception d)
{
System.out.println(d.getMessage());
}
System.out.println(stddev);
}
}
I have made some changes to your code to test it and make it work:
First I add a main to test:
public static void main(String args[]){
readFromFile();
avg();
mode();
stddev();
}
You can see that I call readFromFile.
Because of that, I changed readFromFile to be static.
/**reads numbers from files and puts them into an array*/
public static void readFromFile()
And I changed this:
/**default constructor for Statistics class*/
private static int [] unmodifiedNumbers = new int [1000], numbers;
to initialize unmodifiedNumbers.
Now it works.
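One more thing worth flagging: avg() accumulates the loop index x rather than the element numbers[x], so even after the fixes above it returns the wrong value. A corrected sketch, operating on an already-populated array:

```java
public class AvgFix {
    // Sums the array elements themselves, not the loop counter.
    static double avg(int[] numbers) {
        double total = 0;
        for (int x = 0; x < numbers.length; x++)
            total += numbers[x];
        return total / numbers.length;
    }

    public static void main(String[] args) {
        System.out.println(avg(new int[] {2, 4, 6, 8})); // 5.0
    }
}
```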

Test method for multilayer perceptron

This is a multi-layer perceptron using the backpropagation algorithm. I found this code on codetidy.com and I want to test it.
"mlp.java"
/***** This ANN assumes a fully connected network *****/
import java.util.*;
import java.io.*;
public class MLP {
static ArrayList<Neuron> input, hidden, output;
ArrayList<Pattern> pattern;
static double bias;
double learningRate;
Random random;
public MLP(int numInput, int numHidden, int numOutput, int rangeMin, int rangeMax, double learningRate, Random random, File f) {
this.learningRate = learningRate;
this.random = random;
input = new ArrayList<Neuron>();
hidden = new ArrayList<Neuron>();
output = new ArrayList<Neuron>();
pattern = readPattern(f);
int i;
// bias is random value between [rangeMin, rangeMax] --> [-1, 1]
bias = 1;//randomDouble(rangeMin, rangeMax);
// initialize inputs
for (i = 0; i < numInput; i++) {
input.add(new Neuron("x"+(i+1), 0, randomDoubleArray(numHidden, rangeMin, rangeMax))); // set initial values to 0
}
// initialize hidden
for (i = 0; i < numHidden; i++) {
hidden.add(new Neuron("h"+(i+1), randomDoubleArray(numOutput, rangeMin, rangeMax)));
}
// initialize output
for (i = 0; i < numOutput; i++) {
output.add(new Neuron("y"+(i+1)));
}
// link inputs forward to hidden
for (Neuron x : input) {
x.connect(hidden, 1);
}
// link hidden
for (Neuron h : hidden) {
// back to inputs
h.connect(input, 0);
// forward to output
h.connect(output, 1);
}
// link output back to hidden
for (Neuron y : output) {
y.connect(hidden, 0);
}
}
void train() {
int i;
double[] error = new double[pattern.size()];
boolean done = false;
// main training loop
while(!done) {
// loop through input patterns, save error for each
for (i = 0; i < pattern.size(); i++) {
/*** Set new pattern ***/
setInput(pattern.get(i).values);
/*** Feed-forward computation ***/
forwardPass();
/*** Backpropagation with weight updates ***/
error[i] = backwardPass();
}
boolean pass = true;
// check if error for all runs is <= 0.05
for (i = 0; i < error.length; i++) {
if (error[i] > 0.05)
pass = false;
}
if (pass) // if all cases <= 0.05, convergence reached
done = true;
}
}
void setInput(int[] values) {
for (int i = 0; i < values.length; i++) {
input.get(i).value = values[i];
}
}
double backwardPass() {
int i;
double[] outputError = new double[output.size()];
double[] outputDelta = new double[output.size()];
double[] hiddenError = new double[hidden.size()];
double[] hiddenDelta = new double[hidden.size()];
/*** Backpropagation to the output layer ***/
// calculate delta for output layer: d = error * sigmoid derivative
for (i = 0; i < output.size(); i++) {
// error = desired - y
outputError[i] = getOutputError(output.get(i));
// using sigmoid derivative = sigmoid(v) * [1 - sigmoid(v)]
outputDelta[i] = outputError[i] * output.get(i).value * (1.0 - output.get(i).value);
}
/*** Backpropagation to the hidden layer ***/
// calculate delta for hidden layer: d = error * sigmoid derivative
for (i = 0; i < hidden.size(); i++) {
// error(i) = sum[outputDelta(k) * w(kj)]
hiddenError[i] = getHiddenError(hidden.get(i), outputDelta);
// using sigmoid derivative
hiddenDelta[i] = hiddenError[i] * hidden.get(i).value * (1.0 - hidden.get(i).value);
}
/*** Weight updates ***/
// update weights connecting hidden neurons to output layer
for (i = 0; i < output.size(); i++) {
for (Neuron h : output.get(i).left) {
h.weights[i] = learningRate * outputDelta[i] * h.value;
}
}
// update weights connecting input neurons to hidden layer
for (i = 0; i < hidden.size(); i++) {
for (Neuron x : hidden.get(i).left) {
x.weights[i] = learningRate * hiddenDelta[i] * x.value;
}
}
// return outputError to be used when testing for convergence?
return outputError[0];
}
void forwardPass() {
int i;
double v, y;
// loop through hidden layers, determine current value
for (i = 0; i < hidden.size(); i++) {
v = 0;
// get v(n) for hidden layer i
for (Neuron x : input) {
v += x.weights[i] * x.value;
}
// add bias
v += bias;
// calculate f(v(n))
y = activate(v);
hidden.get(i).value = y;
}
// calculate output?
for (i = 0; i < output.size(); i++) {
v = 0;
// get v(n) for output layer
for (Neuron h : hidden) {
v += h.weights[i] * h.value;
}
// add bias
v += bias;
// calculate f(v(n))
y = activate(v);
output.get(i).value = y;
}
}
double activate(double v) {
return (1 / (1 + Math.exp(-v))); // sigmoid function
}
double getHiddenError(Neuron j, double[] outputDelta) {
// calculate error sum[outputDelta * w(kj)]
double sum = 0;
for (int i = 0; i < j.right.size(); i++) {
sum += outputDelta[i] * j.weights[i];
}
return sum;
}
double getOutputError(Neuron k) {
// calculate error (d - y)
// note: desired is 1 if input contains odd # of 1's and 0 otherwise
int sum = 0;
double d;
for (Neuron x : input) {
sum += x.value;
}
if (sum % 2 != 0)
d = 1.0;
else
d = 0.0;
return Math.abs(d - k.value);
}
double[] randomDoubleArray(int n, double rangeMin, double rangeMax) {
double[] a = new double[n];
for (int i = 0; i < n; i++) {
a[i] = randomDouble(rangeMin, rangeMax);
}
return a;
}
double randomDouble(double rangeMin, double rangeMax) {
return (rangeMin + (rangeMax - rangeMin) * random.nextDouble());
}
ArrayList<Pattern> readPattern(File f) {
ArrayList<Pattern> p = new ArrayList<Pattern>();
try {
BufferedReader r = new BufferedReader(new FileReader(f));
String s = "";
while ((s = r.readLine()) != null) {
String[] columns = s.split(" ");
int[] values = new int[columns.length];
for (int i = 0; i < values.length; i++) {
values[i] = Integer.parseInt(columns[i]);
}
p.add(new Pattern(values));
}
r.close();
}
catch (IOException e) { }
return p;
}
public static void main(String[] args) {
Random random = new Random(1234);
File file = new File("input.txt");
MLP mlp = new MLP(4, 4, 1, -1, 1, 0.1, random, file);
mlp.train();
}
}
"neuron.java"
import java.util.ArrayList;
public class Neuron {
String name;
double value;
double[] weights;
ArrayList<Neuron> left, right;
public Neuron(String name, double value, double[] weights) { // constructor for input neurons
this.name = name;
this.value = value;
this.weights = weights;
right = new ArrayList<Neuron>();
}
public Neuron(String name, double[] weights) { // constructor for hidden neurons
this.name = name;
this.weights = weights;
value = 0; // default initial value
left = new ArrayList<Neuron>();
right = new ArrayList<Neuron>();
}
public Neuron(String name) { // constructor for output neurons
this.name = name;
value = 0; // default initial value
left = new ArrayList<Neuron>();
}
public void connect(ArrayList<Neuron> ns, int direction) { // 0 is left, 1 is right
for (Neuron n : ns) {
if (direction == 0)
left.add(n);
else
right.add(n);
}
}
}
"pattern.java"
public class Pattern {
int [] values;
public Pattern (int [] Values)
{
this.values=Values;
}
}
How can I count the correctly and incorrectly classified samples?
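One way to do that, sketched independently of the MLP class above: run each pattern through the trained network, threshold the single output at 0.5, and compare against the parity target that getOutputError already encodes (1 if the inputs contain an odd number of 1's). The Predictor function below is an assumption standing in for setInput + forwardPass + reading output.get(0).value:

```java
import java.util.function.Function;

public class Evaluator {
    // Parity target used by the MLP above: 1 if the inputs hold an odd number of 1's.
    static int parityTarget(int[] inputs) {
        int sum = 0;
        for (int v : inputs) sum += v;
        return (sum % 2 != 0) ? 1 : 0;
    }

    // Counts correct classifications given some forward-pass function.
    static int countCorrect(int[][] patterns, Function<int[], Double> predict) {
        int correct = 0;
        for (int[] p : patterns) {
            int predicted = predict.apply(p) >= 0.5 ? 1 : 0; // threshold the output
            if (predicted == parityTarget(p)) correct++;
        }
        return correct;
    }

    public static void main(String[] args) {
        int[][] patterns = {{0,0,0,0}, {1,0,0,0}, {1,1,0,0}, {1,1,1,0}};
        // A perfect "network" for demonstration: it outputs the parity itself.
        int correct = countCorrect(patterns, p -> (double) parityTarget(p));
        System.out.println(correct + " correct, " + (patterns.length - correct) + " wrong");
    }
}
```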

Arrays, Statistics, and calculating mean, median, mode, average and sqrt

I need to implement four static methods in a class named ArrayStatistics. Each of the four methods will calculate the mean, median, mode, and population standard deviation, respectively, of the values in the array.
This is my first time working with Java, and I cannot figure out what I should do next. I was given some test values to, you guessed it, test out my program.
public class ArrayStatistics {
public static void main(String[] args) {
final int[] arr;
int[] testValues = new int[] { 10, 20, 30, 40 };
meanValue = a;
meadianValue = b;
modeValue = c;
sqrtDevValue = d;
average = (sum / count);
System.out.println("Average is " );
}
static double[] mean(int[] data) {
for(int x = 1; x <=counter; x++) {
input = NumScanner.nextInt();
sum = sum + inputNum;
System.out.println();
}
return a;
}
static double[] median(int[] data) {
// ...
}
public double getMedian(double[] numberList) {
int factor = numberList.length - 1;
double[] first = new double[(double) factor / 2];
double[] last = new double[first.length];
double[] middleNumbers = new double[1];
for (int i = 0; i < first.length; i++) {
first[i] = numbersList[i];
}
for (int i = numberList.length; i > last.length; i--) {
last[i] = numbersList[i];
}
for (int i = 0; i <= numberList.length; i++) {
if (numberList[i] != first[i] || numberList[i] != last[i]) middleNumbers[i] = numberList[i];
}
if (numberList.length % 2 == 0) {
double total = middleNumbers[0] + middleNumbers[1];
return total / 2;
} else {
return b;
}
}
static double[] mode(int[] data) {
public double getMode(double[] numberList) {
HashMap<Double,Double> freqs = new HashMap<Double,Double>();
for (double d: numberList) {
Double freq = freqs.get(d);
freqs.put(d, (freq == null ? 1 : freq + 1));
}
double mode = 0;
double maxFreq = 0;
for (Map.Entry<Double,Doubler> entry : freqs.entrySet()) {
double freq = entry.getValue();
if (freq > maxFreq) {
maxFreq = freq;
mode = entry.getKey();
}
}
return c;
}
static double[] sqrt(int[] sqrtDev) {
return d;
}
}
This is pretty easy.
public double mean(ArrayList<Double> list) {
double ans = 0;
for (int i = 0; i < list.size(); i++) {
ans += list.get(i);
}
return ans / list.size();
}
Median (this assumes the list is sorted):
public double median(ArrayList<Double> list) {
if (list.size() % 2 == 0) return (list.get(list.size() / 2 - 1) + list.get(list.size() / 2)) / 2;
else return list.get(list.size() / 2);
}
For mode, just keep a tally of how often each number occurs; extremely easy.
For standard deviation, find the mean and just use the formula given here: https://www.mathsisfun.com/data/standard-deviation-formulas.html
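A population standard deviation sketch along those lines: compute the mean first, then take the square root of the average squared deviation:

```java
public class StdDev {
    static double mean(double[] values) {
        double sum = 0;
        for (double v : values) sum += v;
        return sum / values.length;
    }

    // Population standard deviation: sqrt of the mean squared deviation.
    static double stddev(double[] values) {
        double m = mean(values);
        double sumSq = 0;
        for (double v : values) sumSq += (v - m) * (v - m);
        return Math.sqrt(sumSq / values.length);
    }

    public static void main(String[] args) {
        double[] data = {10, 20, 30, 40}; // the test values from the question
        System.out.println(stddev(data)); // sqrt(125), about 11.18
    }
}
```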

Implementing a Neural Network in Java: Training and Backpropagation issues

I'm trying to implement a feed-forward neural network in Java.
I've created three classes: NNeuron, NLayer, and NNetwork. The "simple" calculations seem fine (I get correct sums/activations/outputs), but when it comes to the training process, I don't seem to get correct results. Can anyone please tell me what I'm doing wrong?
The whole code for the NNetwork class is quite long, so I'm posting the part that is causing the problem:
[EDIT]: this is actually pretty much all of the NNetwork class
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class NNetwork
{
public static final double defaultLearningRate = 0.4;
public static final double defaultMomentum = 0.8;
private NLayer inputLayer;
private ArrayList<NLayer> hiddenLayers;
private NLayer outputLayer;
private ArrayList<NLayer> layers;
private double momentum = NNetwork.defaultMomentum; // alpha: momentum, default 0.8
private ArrayList<Double> learningRates;
public NNetwork (int nInputs, int nOutputs, Integer... neuronsPerHiddenLayer)
{
this(nInputs, nOutputs, Arrays.asList(neuronsPerHiddenLayer));
}
public NNetwork (int nInputs, int nOutputs, List<Integer> neuronsPerHiddenLayer)
{
// the number of neurons on the last layer build so far (i.e. the number of inputs for each neuron of the next layer)
int prvOuts = 1;
this.layers = new ArrayList<>();
// input layer
this.inputLayer = new NLayer(nInputs, prvOuts, this);
this.inputLayer.setAllWeightsTo(1.0);
this.inputLayer.setAllBiasesTo(0.0);
this.inputLayer.useSigmaForOutput(false);
prvOuts = nInputs;
this.layers.add(this.inputLayer);
// hidden layers
this.hiddenLayers = new ArrayList<>();
for (int i=0 ; i<neuronsPerHiddenLayer.size() ; i++)
{
this.hiddenLayers.add(new NLayer(neuronsPerHiddenLayer.get(i), prvOuts, this));
prvOuts = neuronsPerHiddenLayer.get(i);
}
this.layers.addAll(this.hiddenLayers);
// output layer
this.outputLayer = new NLayer(nOutputs, prvOuts, this);
this.layers.add(this.outputLayer);
this.initCoeffs();
}
private void initCoeffs ()
{
this.learningRates = new ArrayList<>();
// learning rates of the hidden layers
for (int i=0 ; i<this.hiddenLayers.size(); i++)
this.learningRates.add(NNetwork.defaultLearningRate);
// learning rate of the output layer
this.learningRates.add(NNetwork.defaultLearningRate);
}
public double getLearningRate (int layerIndex)
{
if (layerIndex > 0 && layerIndex <= this.hiddenLayers.size()+1)
{
return this.learningRates.get(layerIndex-1);
}
else
{
return 0;
}
}
public ArrayList<Double> getLearningRates ()
{
return this.learningRates;
}
public void setLearningRate (int layerIndex, double newLearningRate)
{
if (layerIndex > 0 && layerIndex <= this.hiddenLayers.size()+1)
{
this.learningRates.set(
layerIndex-1,
newLearningRate);
}
}
public void setLearningRates (Double... newLearningRates)
{
this.setLearningRates(Arrays.asList(newLearningRates));
}
public void setLearningRates (List<Double> newLearningRates)
{
int len = (this.learningRates.size() <= newLearningRates.size())
? this.learningRates.size()
: newLearningRates.size();
for (int i=0; i<len; i++)
this.learningRates
.set(i,
newLearningRates.get(i));
}
public double getMomentum ()
{
return this.momentum;
}
public void setMomentum (double momentum)
{
this.momentum = momentum;
}
public NNeuron getNeuron (int layerIndex, int neuronIndex)
{
if (layerIndex == 0)
return this.inputLayer.getNeurons().get(neuronIndex);
else if (layerIndex == this.hiddenLayers.size()+1)
return this.outputLayer.getNeurons().get(neuronIndex);
else
return this.hiddenLayers.get(layerIndex-1).getNeurons().get(neuronIndex);
}
public ArrayList<Double> getOutput (ArrayList<Double> inputs)
{
ArrayList<Double> lastOuts = inputs; // the last computed outputs of the last 'called' layer so far
// input layer
//lastOuts = this.inputLayer.getOutput(lastOuts);
lastOuts = this.getInputLayerOutputs(lastOuts);
// hidden layers
for (NLayer layer : this.hiddenLayers)
lastOuts = layer.getOutput(lastOuts);
// output layer
lastOuts = this.outputLayer.getOutput(lastOuts);
return lastOuts;
}
public ArrayList<ArrayList<Double>> getAllOutputs (ArrayList<Double> inputs)
{
ArrayList<ArrayList<Double>> outs = new ArrayList<>();
// input layer
outs.add(this.getInputLayerOutputs(inputs));
// hidden layers
for (NLayer layer : this.hiddenLayers)
outs.add(layer.getOutput(outs.get(outs.size()-1)));
// output layer
outs.add(this.outputLayer.getOutput(outs.get(outs.size()-1)));
return outs;
}
public ArrayList<ArrayList<Double>> getAllSums (ArrayList<Double> inputs)
{
//*
ArrayList<ArrayList<Double>> sums = new ArrayList<>();
ArrayList<Double> lastOut;
// input layer
sums.add(inputs);
lastOut = this.getInputLayerOutputs(inputs);
// hidden nodes
for (NLayer layer : this.hiddenLayers)
{
sums.add(layer.getSums(lastOut));
lastOut = layer.getOutput(lastOut);
}
// output layer
sums.add(this.outputLayer.getSums(lastOut));
return sums;
}
public ArrayList<Double> getInputLayerOutputs (ArrayList<Double> inputs)
{
ArrayList<Double> outs = new ArrayList<>();
for (int i=0 ; i<this.inputLayer.getNeurons().size() ; i++)
outs.add(this
.inputLayer
.getNeuron(i)
.getOutput(inputs.get(i)));
return outs;
}
public void changeWeights (
ArrayList<ArrayList<Double>> deltaW,
ArrayList<ArrayList<Double>> inputSet,
ArrayList<ArrayList<Double>> targetSet,
boolean checkError)
{
for (int i=0 ; i<deltaW.size()-1 ; i++)
this.hiddenLayers.get(i).changeWeights(deltaW.get(i), inputSet, targetSet, checkError);
this.outputLayer.changeWeights(deltaW.get(deltaW.size()-1), inputSet, targetSet, checkError);
}
public int train2 (
ArrayList<ArrayList<Double>> inputSet,
ArrayList<ArrayList<Double>> targetSet,
double maxError,
int maxIterations)
{
ArrayList<Double>
input,
target;
ArrayList<ArrayList<ArrayList<Double>>> prvNetworkDeltaW = null;
double error;
int i = 0, j = 0, trainingSetLength = inputSet.size();
do // during each iteration...
{
error = 0.0;
for (j = 0; j < trainingSetLength; j++) // ... for each training element...
{
input = inputSet.get(j);
target = targetSet.get(j);
prvNetworkDeltaW = this.train2_bp(input, target, prvNetworkDeltaW); // ... do backpropagation, and return the new weight deltas
error += this.getInputMeanSquareError(input, target);
}
i++;
} while (error > maxError && i < maxIterations); // iterate as much as necessary/possible
return i;
}
public ArrayList<ArrayList<ArrayList<Double>>> train2_bp (
ArrayList<Double> input,
ArrayList<Double> target,
ArrayList<ArrayList<ArrayList<Double>>> prvNetworkDeltaW)
{
ArrayList<ArrayList<Double>> layerSums = this.getAllSums(input); // the sums for each layer
ArrayList<ArrayList<Double>> layerOutputs = this.getAllOutputs(input); // the outputs of each layer
// get the layer deltas (inc the input layer that is null)
ArrayList<ArrayList<Double>> layerDeltas = this.train2_getLayerDeltas(layerSums, layerOutputs, target);
// get the weight deltas
ArrayList<ArrayList<ArrayList<Double>>> networkDeltaW = this.train2_getWeightDeltas(layerOutputs, layerDeltas, prvNetworkDeltaW);
// change the weights
this.train2_updateWeights(networkDeltaW);
return networkDeltaW;
}
public void train2_updateWeights (ArrayList<ArrayList<ArrayList<Double>>> networkDeltaW)
{
for (int i=1; i<this.layers.size(); i++)
this.layers.get(i).train2_updateWeights(networkDeltaW.get(i));
}
public ArrayList<ArrayList<ArrayList<Double>>> train2_getWeightDeltas (
ArrayList<ArrayList<Double>> layerOutputs,
ArrayList<ArrayList<Double>> layerDeltas,
ArrayList<ArrayList<ArrayList<Double>>> prvNetworkDeltaW)
{
ArrayList<ArrayList<ArrayList<Double>>> networkDeltaW = new ArrayList<>(this.layers.size());
ArrayList<ArrayList<Double>> layerDeltaW;
ArrayList<Double> neuronDeltaW;
for (int i=0; i<this.layers.size(); i++)
networkDeltaW.add(new ArrayList<ArrayList<Double>>());
double
deltaW, x, learningRate, prvDeltaW, d;
int i, j, k;
for (i=this.layers.size()-1; i>0; i--) // for each layer
{
learningRate = this.getLearningRate(i);
layerDeltaW = new ArrayList<>();
networkDeltaW.set(i, layerDeltaW);
for (j=0; j<this.layers.get(i).getNeurons().size(); j++) // for each neuron of this layer
{
neuronDeltaW = new ArrayList<>();
layerDeltaW.add(neuronDeltaW);
for (k=0; k<this.layers.get(i-1).getNeurons().size(); k++) // for each weight (i.e. each neuron of the previous layer)
{
d = layerDeltas.get(i).get(j);
x = layerOutputs.get(i-1).get(k);
prvDeltaW = (prvNetworkDeltaW != null)
? prvNetworkDeltaW.get(i).get(j).get(k)
: 0.0;
deltaW = -learningRate * d * x + this.momentum * prvDeltaW;
neuronDeltaW.add(deltaW);
}
// the bias !!
d = layerDeltas.get(i).get(j);
x = 1;
prvDeltaW = (prvNetworkDeltaW != null)
? prvNetworkDeltaW.get(i).get(j).get(prvNetworkDeltaW.get(i).get(j).size()-1)
: 0.0;
deltaW = -learningRate * d * x + this.momentum * prvDeltaW;
neuronDeltaW.add(deltaW);
}
}
return networkDeltaW;
}
ArrayList<ArrayList<Double>> train2_getLayerDeltas (
ArrayList<ArrayList<Double>> layerSums,
ArrayList<ArrayList<Double>> layerOutputs,
ArrayList<Double> target)
{
// get output deltas
ArrayList<Double> outputDeltas = new ArrayList<>(); // the output layer deltas
double
oErr, // output error given a target
s, // sum
o, // output
d; // delta
int
nOutputs = target.size(), // #TODO ?== this.outputLayer.size()
nLayers = this.hiddenLayers.size()+2; // #TODO ?== layerOutputs.size()
for (int i=0; i<nOutputs; i++) // for each neuron...
{
s = layerSums.get(nLayers-1).get(i);
o = layerOutputs.get(nLayers-1).get(i);
oErr = (target.get(i) - o);
d = -oErr * this.getNeuron(nLayers-1, i).sigmaPrime(s); // #TODO "s" or "o" ??
outputDeltas.add(d);
}
// get hidden deltas
ArrayList<ArrayList<Double>> hiddenDeltas = new ArrayList<>();
for (int i=0; i<this.hiddenLayers.size(); i++)
hiddenDeltas.add(new ArrayList<Double>());
NLayer nextLayer = this.outputLayer;
ArrayList<Double> nextDeltas = outputDeltas;
int
h, k,
nHidden = this.hiddenLayers.size(),
nNeurons;
double
wdSum = 0.0;
for (int i=nHidden-1; i>=0; i--) // for each hidden layer
{
nNeurons = this.hiddenLayers.get(i).getNeurons().size(); // layer sizes may differ, so look this up per layer
hiddenDeltas.set(i, new ArrayList<Double>());
for (h=0; h<nNeurons; h++)
{
wdSum = 0.0;
for (k=0; k<nextLayer.getNeurons().size(); k++)
{
wdSum += nextLayer.getNeuron(k).getWeight(h) * nextDeltas.get(k);
}
s = layerSums.get(i+1).get(h);
d = this.getNeuron(i+1, h).sigmaPrime(s) * wdSum;
hiddenDeltas.get(i).add(d);
}
nextLayer = this.hiddenLayers.get(i);
nextDeltas = hiddenDeltas.get(i);
}
ArrayList<ArrayList<Double>> deltas = new ArrayList<>();
// input layer deltas: void
deltas.add(null);
// hidden layers deltas
deltas.addAll(hiddenDeltas);
// output layer deltas
deltas.add(outputDeltas);
return deltas;
}
public double getInputMeanSquareError (ArrayList<Double> input, ArrayList<Double> target)
{
double diff, mse=0.0;
ArrayList<Double> output = this.getOutput(input);
for (int i=0; i<target.size(); i++)
{
diff = target.get(i) - output.get(i);
mse += (diff * diff);
}
mse /= 2.0;
return mse;
}
}
Some method names (together with their return types) are fairly self-explanatory: "this.getAllSums" returns the sums (sum(x_i*w_i) for each neuron) of each layer, "this.getAllOutputs" returns the outputs (sigmoid(sum) for each neuron) of each layer, and "this.getNeuron(i,j)" returns the j'th neuron of the i'th layer.
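One note on the #TODO in train2_getLayerDeltas ("s" or "o"?): for the logistic activation, sigma'(s) = sigma(s)*(1 - sigma(s)), so the derivative can be computed either from the raw sum s or from the stored output o = sigma(s), as long as the two forms are not mixed. A minimal sketch (the helper names sigma/sigmaPrimeFromSum/sigmaPrimeFromOutput are illustrative, not from the code above):

```java
public class SigmoidDemo {
    // logistic activation
    public static double sigma(double s) {
        return 1.0 / (1.0 + Math.exp(-s));
    }

    // derivative computed from the raw sum s
    public static double sigmaPrimeFromSum(double s) {
        double o = sigma(s);
        return o * (1 - o);
    }

    // derivative computed from the already-stored output o = sigma(s)
    public static double sigmaPrimeFromOutput(double o) {
        return o * (1 - o);
    }

    public static void main(String[] args) {
        double s = 0.7;
        double o = sigma(s);
        // the two forms agree; feeding o into the "from sum" form (or vice versa) silently breaks training
        System.out.println(Math.abs(sigmaPrimeFromSum(s) - sigmaPrimeFromOutput(o)) < 1e-12); // prints true
    }
}
```

Passing o where sigmaPrime expects s (or vice versa) produces wrong deltas without any exception, which is exactly the kind of bug that shows up as oscillating error.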
Thank you in advance for your help :)
Here is a very simple Java implementation with tests in the main method:
import java.util.Arrays;
import java.util.Random;
public class MLP {
public static class MLPLayer {
float[] output;
float[] input;
float[] weights;
float[] dweights;
boolean isSigmoid = true;
public MLPLayer(int inputSize, int outputSize, Random r) {
output = new float[outputSize];
input = new float[inputSize + 1];
weights = new float[(1 + inputSize) * outputSize];
dweights = new float[weights.length];
initWeights(r);
}
public void setIsSigmoid(boolean isSigmoid) {
this.isSigmoid = isSigmoid;
}
public void initWeights(Random r) {
for (int i = 0; i < weights.length; i++) {
weights[i] = (r.nextFloat() - 0.5f) * 4f;
}
}
public float[] run(float[] in) {
System.arraycopy(in, 0, input, 0, in.length);
input[input.length - 1] = 1;
int offs = 0;
Arrays.fill(output, 0);
for (int i = 0; i < output.length; i++) {
for (int j = 0; j < input.length; j++) {
output[i] += weights[offs + j] * input[j];
}
if (isSigmoid) {
output[i] = (float) (1 / (1 + Math.exp(-output[i])));
}
offs += input.length;
}
return Arrays.copyOf(output, output.length);
}
public float[] train(float[] error, float learningRate, float momentum) {
int offs = 0;
float[] nextError = new float[input.length];
for (int i = 0; i < output.length; i++) {
float d = error[i];
if (isSigmoid) {
d *= output[i] * (1 - output[i]);
}
for (int j = 0; j < input.length; j++) {
int idx = offs + j;
nextError[j] += weights[idx] * d;
float dw = input[j] * d * learningRate;
weights[idx] += dweights[idx] * momentum + dw;
dweights[idx] = dw;
}
offs += input.length;
}
return nextError;
}
}
MLPLayer[] layers;
public MLP(int inputSize, int[] layersSize) {
layers = new MLPLayer[layersSize.length];
Random r = new Random(1234);
for (int i = 0; i < layersSize.length; i++) {
int inSize = i == 0 ? inputSize : layersSize[i - 1];
layers[i] = new MLPLayer(inSize, layersSize[i], r);
}
}
public MLPLayer getLayer(int idx) {
return layers[idx];
}
public float[] run(float[] input) {
float[] actIn = input;
for (int i = 0; i < layers.length; i++) {
actIn = layers[i].run(actIn);
}
return actIn;
}
public void train(float[] input, float[] targetOutput, float learningRate, float momentum) {
float[] calcOut = run(input);
float[] error = new float[calcOut.length];
for (int i = 0; i < error.length; i++) {
error[i] = targetOutput[i] - calcOut[i]; // error term: target minus actual output
}
for (int i = layers.length - 1; i >= 0; i--) {
error = layers[i].train(error, learningRate, momentum);
}
}
public static void main(String[] args) throws Exception {
float[][] train = new float[][]{new float[]{0, 0}, new float[]{0, 1}, new float[]{1, 0}, new float[]{1, 1}};
float[][] res = new float[][]{new float[]{0}, new float[]{1}, new float[]{1}, new float[]{0}};
MLP mlp = new MLP(2, new int[]{2, 1});
mlp.getLayer(1).setIsSigmoid(false);
Random r = new Random();
int en = 500;
for (int e = 0; e < en; e++) {
for (int i = 0; i < res.length; i++) {
int idx = r.nextInt(res.length);
mlp.train(train[idx], res[idx], 0.3f, 0.6f);
}
if ((e + 1) % 100 == 0) {
System.out.println();
System.out.printf("%d epoch\n", e + 1); // print the epoch header once, not once per pattern
for (int i = 0; i < res.length; i++) {
float[] t = train[i];
System.out.printf("%.1f, %.1f --> %.3f\n", t[0], t[1], mlp.run(t)[0]);
}
}
}
}
}
I tried going over your code, but as you stated, it was pretty long.
Here's what I suggest:
To verify that your network is learning properly, first train it on a simple problem, like the XOR operator. This shouldn't take long.
Use the simplest back-propagation algorithm. Stochastic backpropagation (where the weights are updated after the presentation of each training input) is the easiest. Implement the algorithm without the momentum term at first, and with a constant learning rate (i.e., don't start with adaptive learning rates). Once you're satisfied that the algorithm works, introduce the momentum term. Doing too many things at once increases the chance that more than one thing goes wrong, which makes it much harder to pinpoint where the problem is.
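For a single sigmoid output unit, stochastic backpropagation without momentum boils down to a few lines. Here is an illustrative sketch (all names are assumptions for this example, not taken from your code):

```java
import java.util.Random;

public class StochasticUpdateDemo {
    // One stochastic gradient step for a single sigmoid output neuron:
    // the weights are updated immediately after presenting one training example,
    // with a constant learning rate and no momentum term.
    public static void trainOne(double[] weights, double[] input, double target, double learningRate) {
        // forward pass: weighted sum, with the last weight acting as the bias
        double sum = weights[weights.length - 1];
        for (int i = 0; i < input.length; i++) sum += weights[i] * input[i];
        double output = 1.0 / (1.0 + Math.exp(-sum));
        // delta for squared error with a sigmoid unit: (t - o) * o * (1 - o)
        double delta = (target - output) * output * (1 - output);
        // immediate weight update
        for (int i = 0; i < input.length; i++) weights[i] += learningRate * delta * input[i];
        weights[weights.length - 1] += learningRate * delta; // bias input is 1
    }

    public static void main(String[] args) {
        // learn OR of two inputs (linearly separable, so a single neuron suffices)
        double[][] xs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] ts = {0, 1, 1, 1};
        double[] w = new double[3];
        Random r = new Random(42);
        for (int step = 0; step < 20000; step++) {
            int k = r.nextInt(4);
            trainOne(w, xs[k], ts[k], 0.5);
        }
        double sum = w[2] + w[0] + w[1]; // net input for (1, 1)
        System.out.println(1.0 / (1.0 + Math.exp(-sum)) > 0.9); // prints true
    }
}
```

Once this bare version converges on a toy problem, adding the momentum term is a one-line change to the weight update.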
If you want to go over some code, you can check out some code that I wrote; you want to look at Backpropagator.java. I've basically implemented the stochastic backpropagation algorithm with a momentum term. I also have a video where I provide a quick explanation of my implementation of the backpropagation algorithm.
Hopefully this is of some help!