I am trying to create some nodes in Neo4j through a Maven Java application and to create relationships between those nodes. Specifically, I want to create 16807 nodes and 17210368 relationships. I read a file to get the row variable, which holds the number of nodes I must create, and I also have a list with 34420736 elements (= 17210368 * 2). I want to create a relationship from node[element 0 of list] to node[element 1 of list], from node[element 2 of list] to node[element 3 of list], and so on. The maximum element of the list is 16807. I build an ArrayList<Node> so the nodes are created dynamically, because I want the program to run with different files (and therefore different row values).
Here is my code:
GraphDatabaseFactory dbFactory = new GraphDatabaseFactory();
GraphDatabaseService graphDb = dbFactory.newEmbeddedDatabase("C:\\Users\\....\\default.graphdb");
Transaction tx = graphDb.beginTx();
try {
final RelationshipType type2 = DynamicRelationshipType.withName("KNOW");
ArrayList<Node> nodelist = new ArrayList<Node>();
for (int k = 0; k < row; k++) { //row=16807
nodelist.add(graphDb.createNode());
nodelist.get(k).setProperty("Name", "ListNode " + k);
}
int count=0;
for (int j = 0; j < list.size() ; j++) { //list.size()=34420736
nodelist.get(list.get(count)).createRelationshipTo(nodelist.get(list.get(count+1)), type2);
count=count+2;
}
tx.success();
}
finally {
tx.close();
}
graphDb.shutdown();
If I run the code without trying to create relationships, it creates the nodes and runs correctly. When I add the for loop that creates the relationships, it throws the following error:
Exception in thread "main" org.neo4j.graphdb.TransactionFailureException: Unable to rollback transaction
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:131)
at com.mycompany.traverse_test.traverse_main.main(traverse_main.java:138)
Caused by: java.lang.IllegalStateException: No RelationshipState for added relationship!
at org.neo4j.kernel.api.txstate.RelationshipChangeVisitorAdapter$1.visit(RelationshipChangeVisitorAdapter.java:132)
at org.neo4j.kernel.api.txstate.RelationshipChangeVisitorAdapter.visitAddedRelationship(RelationshipChangeVisitorAdapter.java:83)
at org.neo4j.kernel.api.txstate.RelationshipChangeVisitorAdapter.visitAdded(RelationshipChangeVisitorAdapter.java:106)
at org.neo4j.kernel.api.txstate.RelationshipChangeVisitorAdapter.visitAdded(RelationshipChangeVisitorAdapter.java:47)
at org.neo4j.kernel.impl.util.diffsets.DiffSets.accept(DiffSets.java:76)
at org.neo4j.kernel.impl.api.state.TxState.accept(TxState.java:156)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.rollback(KernelTransactionImplementation.java:542)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.close(KernelTransactionImplementation.java:404)
at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:112)
... 1 more
Any ideas??
Neo4j is trying to roll back your transaction due to a bug in your code. The fact that it is failing to roll back may be a bug in neo4j, but that is really not your main problem.
Looking at your code, it looks like you are iterating through your list too many times. That is, the code in the list loop is using up 2 list elements at a time, so you should only be looping list.size()/2 times.
Here is code that should fix that bug, and it also makes a few other improvements.
GraphDatabaseFactory dbFactory = new GraphDatabaseFactory();
GraphDatabaseService graphDb = dbFactory.newEmbeddedDatabase("C:\\Users\\....\\default.graphdb");
Transaction tx = graphDb.beginTx();
try {
final RelationshipType type2 = DynamicRelationshipType.withName("KNOW");
ArrayList<Node> nodelist = new ArrayList<Node>();
for (int k = 0; k < row; k++) { //row=16807
Node node = graphDb.createNode();
node.setProperty("Name", "ListNode " + k);
nodelist.add(node);
}
for (int j = 0; j < list.size() ; j += 2) { //list.size()=34420736
nodelist.get(list.get(j)).createRelationshipTo(
nodelist.get(list.get(j+1)), type2);
}
tx.success();
} catch(Throwable e) {
e.printStackTrace();
// You may want to re-throw the exception, rather than just eating it here...
} finally {
tx.close();
}
graphDb.shutdown();
[EDITED]
However, the above code can still run out of memory, since it is trying to create so many resources (16K nodes and 17M relationships) in a single transaction.
The following example code does the work in multiple transactions (one for creating the nodes and node list, and multiple transactions for the relationships).
NUM_RELS_PER_CHUNK specifies the maximum number of relationships to be created in each transaction. The createRelEndpointList() method must be modified to fill in the list of relationship endpoint (node) indices (each index is the 0-origin position of a node in nodeList).
public class MyCode {
private static final int NODE_COUNT = 16807;
private static final int NUM_RELS_PER_CHUNK = 1000000;
public static void main(String[] args) {
doIt();
}
private static void doIt() {
GraphDatabaseFactory dbFactory = new GraphDatabaseFactory();
GraphDatabaseService graphDb = dbFactory.newEmbeddedDatabase(new File("C:\\Users\\....\\default.graphdb"));
try {
RelationshipType type = DynamicRelationshipType.withName("KNOW");
List<Node> nodeList = createNodes(graphDb, NODE_COUNT);
List<Integer> list = createRelEndpointList();
final int numRels = list.size() / 2;
final int numChunks = (numRels + NUM_RELS_PER_CHUNK - 1)/NUM_RELS_PER_CHUNK;
int startRelIndex = 0, endRelIndexPlus1;
for (int i = numChunks; --i >= 0 && startRelIndex < numRels; ) {
endRelIndexPlus1 = (i > 0) ? startRelIndex + NUM_RELS_PER_CHUNK : numRels;
createRelationships(graphDb, nodeList, list, startRelIndex, endRelIndexPlus1, type);
startRelIndex = endRelIndexPlus1;
}
} finally {
graphDb.shutdown();
}
}
private static List<Node> createNodes(GraphDatabaseService graphDb, int rowCount) {
ArrayList<Node> nodeList = new ArrayList<Node>(rowCount);
Transaction tx = graphDb.beginTx();
try {
final StringBuilder sb = new StringBuilder("ListNode ");
final int initLength = sb.length();
for (int k = 0; k < rowCount; k++) {
Node node = graphDb.createNode();
sb.setLength(initLength);
sb.append(k);
node.setProperty("Name", sb.toString());
nodeList.add(node);
}
tx.success();
System.out.println("Created nodes.");
} catch(Exception e) {
e.printStackTrace();
tx.failure();
return null;
} finally {
tx.close();
}
return nodeList;
}
private static List<Integer> createRelEndpointList() {
final List<Integer> list = new ArrayList<Integer>();
// Fill
// list
// ...
return list;
}
private static void createRelationships(GraphDatabaseService graphDb, List<Node> nodeList, List<Integer> list, int startRelIndex, int endRelIndexPlus1, RelationshipType type) {
Transaction tx = graphDb.beginTx();
try {
final int endPlus2 = endRelIndexPlus1 * 2;
for (int j = startRelIndex * 2; j < endPlus2; ) {
Node from = nodeList.get(list.get(j++));
Node to = nodeList.get(list.get(j++));
from.createRelationshipTo(to, type);
}
tx.success();
System.out.println("Created rels. Start: " + startRelIndex + ", count: " + (endRelIndexPlus1 - startRelIndex));
} catch(Exception e) {
e.printStackTrace();
tx.failure();
// You may want to re-throw the exception, rather than just eating it here...
} finally {
tx.close();
}
}
}
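As an aside, createRelEndpointList() above is left as a stub because the question does not show the file format. Purely as an illustration, assuming a hypothetical file named relationships.txt with one node index per line (pairs of lines forming from/to endpoints), it might be filled in like this; note it also needs the java.io imports:
private static List<Integer> createRelEndpointList() {
    final List<Integer> list = new ArrayList<Integer>();
    // Hypothetical input format: one node index per line, consumed in pairs (from, to).
    try (BufferedReader reader = new BufferedReader(new FileReader("relationships.txt"))) {
        String line;
        while ((line = reader.readLine()) != null) {
            line = line.trim();
            if (!line.isEmpty()) {
                list.add(Integer.parseInt(line));
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return list;
}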
I'm trying to create unique nodes of type TEST with a unique property of "id".
However, the forNodes() method is not detecting duplicates. Is there a better method using the Java API only, and why does the code below not work?
public class Neo4jTest
{
public static void main(String args[])
{
GraphDatabaseService graph = new TestGraphDatabaseFactory().newImpermanentDatabase();
Label testLabel = new Label()
{
@Override
public String name()
{
return "TEST";
}
};
try (Transaction tx = graph.beginTx())
{
graph.schema()
.constraintFor(testLabel)
.assertPropertyIsUnique("id")
.create();
tx.success();
}
try (Transaction tx = graph.beginTx())
{
int k = 99;
for (int i = 0; i < 4; i++)
{
System.out.println("indexing... i="+i);
Index<Node> testIndex = graph.index().forNodes(testLabel.name());
IndexHits<Node> testIterator = testIndex.get("id", k);
if (!testIterator.hasNext())
{
System.out.println("creating node... i="+i);
Node testNode = graph.createNode(testLabel);
testNode.setProperty("id", k);
tx.success();
}
}
}
}
}
The above returns:
indexing... i=0
creating node... i=0
indexing... i=1
creating node... i=1
Exception in thread "main" org.neo4j.graphdb.ConstraintViolationException: Node 0 already exists with label TEST and property "id"=[99]
Shouldn't the above detect, when i=1, that there's already a node with id = 99?
EDIT: the same error also occurs with separate transactions:
public class Neo4jTest
{
public static void main(String args[])
{
GraphDatabaseService graph = new TestGraphDatabaseFactory().newImpermanentDatabase();
Label testLabel = new Label()
{
@Override
public String name()
{
return "TEST";
}
};
try (Transaction tx = graph.beginTx())
{
graph.schema()
.constraintFor(testLabel)
.assertPropertyIsUnique("id")
.create();
tx.success();
}
int k = 99;
try (Transaction tx = graph.beginTx())
{
System.out.println("indexing... i=" + 0);
Index<Node> testIndex = graph.index().forNodes(testLabel.name());
IndexHits<Node> testIterator = testIndex.get("id", k);
if (!testIterator.hasNext())
{
System.out.println("creating node... i=" + 0);
Node testNode = graph.createNode(testLabel);
testNode.setProperty("id", k);
}
tx.success();
}
try (Transaction tx = graph.beginTx())
{
System.out.println("indexing... i=" + 1);
Index<Node> testIndex = graph.index().forNodes(testLabel.name());
IndexHits<Node> testIterator = testIndex.get("id", k);
if (!testIterator.hasNext())
{
System.out.println("creating node... i=" + 1);
Node testNode = graph.createNode(testLabel);
testNode.setProperty("id", k);
}
tx.success();
}
}
}
The real problem you are finding is that testIterator.hasNext() returns false when it should return true. When i == 1, the lookup should already find the node created at i == 0, so no further insertion should happen.
But it is not working that way: the lookup finds nothing, the code proceeds to insert, and you get an exception, as you should, because the graph does recognize the uniqueness violation.
What you don't know is whether the iterator is being cached without being updated after the transaction is committed:
if (!testIterator.hasNext())
{
...
}
Notice that going from 0 to 1 nodes does NOT involve any uniqueness check. This is where the iterator is failing: it is not being updated.
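Since a manual (legacy) index only contains entries that are added to it explicitly, one sketch of making the lookup see the nodes you create is to add each new node to that index yourself, right after creating it (same names as the code above; this is only an illustration of the idea, not necessarily the best approach):
try (Transaction tx = graph.beginTx())
{
    int k = 99;
    Index<Node> testIndex = graph.index().forNodes(testLabel.name());
    for (int i = 0; i < 4; i++)
    {
        IndexHits<Node> testIterator = testIndex.get("id", k);
        if (!testIterator.hasNext())
        {
            Node testNode = graph.createNode(testLabel);
            testNode.setProperty("id", k);
            // A manual index only contains what is added explicitly,
            // so register the node under the key used for the lookup.
            testIndex.add(testNode, "id", k);
        }
        testIterator.close();
    }
    tx.success();
}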
I need to implement a "round-robin" scheduler with a job class that I cannot modify. The round-robin scheduler should process the job that has been waiting the longest first, then reset its timer to zero. If two jobs have the same wait time, the lower id is processed first. The job class only gives three values (job id, remaining duration, and priority, which is not needed for this). Each job has a start time, so only a couple of jobs may be available during the first cycle, a few more the next cycle, etc. Since the "job array" I am calling is different every time I call it, I'm not sure how to store the wait times.
This is the job class:
public class Jobs{
private int[] stas = new int[0];
private int[] durs = new int[0];
private int[] lefs = new int[0];
private int[] pris = new int[0];
private int[] fins = new int[0];
private int clock;
public Jobs()
{
this("joblist.csv");
}
public Jobs(String filename)
{
BufferedReader fp = null;
String line = "";
String[] b = null;
int[] tmp;
try
{
fp = new BufferedReader(new FileReader(filename));
while((line = fp.readLine()) != null)
{
b = line.split(",");
if(b.length == 3)
{
try
{
int sta = Integer.parseInt(b[0]);
//System.out.println("sta: " + b[0]);
int dur = Integer.parseInt(b[1]);
//System.out.println("dur: " + b[1]);
int pri = Integer.parseInt(b[2]);
//System.out.println("pri: " + b[2]);
stas = app(stas, sta);
//System.out.println("stas: " + Arrays.toString(stas));
durs = app(durs, dur);
//System.out.println("durs: " + Arrays.toString(durs));
lefs = app(lefs, dur);
//System.out.println("lefs: " + Arrays.toString(lefs));
pris = app(pris, pri);
//System.out.println("pris: " + Arrays.toString(pris));
fins = app(fins, -1);
//System.out.println("fins: " + Arrays.toString(fins));
}
catch(NumberFormatException e) {}
}
}
fp.close();
}
catch(FileNotFoundException e) { e.printStackTrace(); }
catch(IOException e) { e.printStackTrace(); }
clock = 0;
}
public boolean done()
{
boolean done = true;
for(int i=0; done && i<lefs.length; i++)
if(lefs[i]>0) done=false;
return done;
}
public int getClock() { return clock; }
public int[][] getJobs()
{
int count = 0;
for(int i=0; i<stas.length; i++)
if(stas[i]<=clock && lefs[i]>0)
count++;
int[][] jobs = new int[count][3];
count = 0;
for(int i=0; i<stas.length; i++)
if(stas[i]<=clock && lefs[i]>0)
{
jobs[count] = new int[]{i, lefs[i], pris[i]};
count++;
}
return jobs;
}
public int cycle() { return cycle(-1); }
public int cycle(int j)
{
if(j>=0 && j<lefs.length && clock>=stas[j] && lefs[j]>0)
{
lefs[j]--;
if(lefs[j] == 0) fins[j] = clock+1;
}
clock++;
return clock;
}
private int[] app(int[] a, int b)
{
int[] tmp = new int[a.length+1];
for(int i=0; i<a.length; i++) tmp[i] = a[i];
tmp[a.length] = b;
return tmp;
}
public String report()
{
String r = "JOB,PRIORITY,START,DURATION,FINISH,DELAY,PRI*DELAY\n";
float dn=0;
float pdn=0;
for(int i=0; i<stas.length; i++)
{
if(fins[i]>=0)
{
int delay = ((fins[i]-stas[i])-durs[i]);
r+= ""+i+","+pris[i]+","+stas[i]+","+durs[i]+","+fins[i]+","+delay+","+(pris[i]*delay)+"\n";
dn+= delay;
pdn+= pris[i]*delay;
}
else
{
int delay = ((clock*10-stas[i])-durs[i]);
r+= ""+i+","+pris[i]+","+stas[i]+","+durs[i]+","+fins[i]+","+delay+","+(pris[i]*delay)+"\n";
dn+= delay;
pdn+= pris[i]*delay;
}
}
if(stas.length>0)
{
r+= "Avg,,,,,"+(dn/stas.length)+","+pdn/stas.length+"\n";
}
return r;
}
public String toString()
{
String r = "There are "+stas.length+" jobs:\n";
for(int i=0; i<stas.length; i++)
{
r+= " JOB "+i+": START="+stas[i]+" DURATION="+durs[i]+" DURATION_LEFT="+lefs[i]+" PRIORITY="+pris[i]+"\n";
}
return r;
}
}
I don't need full code, just an idea of how to store wait times and cycle the correct job.
While an array-based solution may work, I would advocate a more object-oriented approach. Create a 'Job' class with the desired attributes (id, start_time, wait, etc.). Using the csv file, create Job objects and hold them in a list. Write a comparator to sort this job list (in this case, job wait/age would be the sorting factor); a minimal sketch follows the pseudocode below.
The job executor then has to do the following:
while(jobs exist) {
iterate on the list {
if job is executable // start_time > current sys_time
consume cycles/job for executable jobs
mark completed jobs (optional)
}
remove the completed jobs
}
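A minimal sketch of that idea, assuming a hypothetical Job class of your own built from the csv (the names and fields below are illustrative, not the provided Jobs class):
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative Job class; not the provided Jobs class.
class Job {
    final int id;
    final int start;     // cycle at which the job becomes available
    int remaining;       // cycles of work left
    int waitTime;        // cycles waited since the job last ran

    Job(int id, int start, int remaining) {
        this.id = id;
        this.start = start;
        this.remaining = remaining;
    }
}

class RoundRobin {
    // Longest wait first; ties broken by lower id.
    static final Comparator<Job> ORDER =
            Comparator.comparingInt((Job j) -> j.waitTime).reversed()
                      .thenComparingInt(j -> j.id);

    static void run(List<Job> jobs) {
        int clock = 0;
        while (jobs.stream().anyMatch(j -> j.remaining > 0)) {
            List<Job> ready = new ArrayList<>();
            for (Job j : jobs)
                if (j.start <= clock && j.remaining > 0) ready.add(j);
            if (!ready.isEmpty()) {
                ready.sort(ORDER);
                Job next = ready.get(0);
                next.remaining--;                 // give it this cycle
                next.waitTime = 0;                // reset its timer
                for (Job j : ready)
                    if (j != next) j.waitTime++;  // everyone else keeps waiting
            }
            clock++;
        }
    }
}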
// This loop adds +1 to the wait time of each currently available job
for(int i = 0; i < jobs.length; i++)
{
waitTime[jobs[i][0]] += 1;
}
int longestWait = 0; // greatest wait time found so far
int nextJob = 0; // index of the job with the greatest wait time
// This loop checks for the greatest wait time and sets the variables accordingly
// (using > rather than >= means ties go to the lower job id)
for(int i = 0; i < waitTime.length; i++)
{
if(waitTime[i] > longestWait)
{
longestWait = waitTime[i];
nextJob = i;
}
}
// Cycle the job with the highest wait time
jobsource.cycle(nextJob);
// Reset the wait time for the processed job
waitTime[nextJob] = 0;
I've trained a neural network in NetBeans and saved it as neural_network.ser using Serializable (all classes implement Serializable). Now I want to use it in my Android application, but when loading the network a ClassNotFoundException is raised:
java.lang.ClassNotFoundException: neural_network.BackPropagation
Here are the classes:
BackPropagation class:
public class BackPropagation extends Thread implements Serializable
{
private static final String TAG = "NetworkMessage";
private static final long serialVersionUID = -8862858027413741101L;
private double OverallError;
// The minimum Error Function defined by the user
private double MinimumError;
// The user-defined expected output pattern for a set of samples
private double ExpectedOutput[][];
// The user-defined input pattern for a set of samples
private double Input[][];
// User defined learning rate - used for updating the network weights
private double LearningRate;
// Users defined momentum - used for updating the network weights
private double Momentum;
// Number of layers in the network
private int NumberOfLayers;
// Number of training sets
private int NumberOfSamples;
// Current training set/sample that is used to train network
private int SampleNumber;
// Maximum number of Epochs before the training stops
private long MaximumNumberOfIterations;
// Public Variables
public LAYER Layer[];
public double ActualOutput[][];
long delay = 0;
boolean die = false;
// Calculate the node activations
public void FeedForward()
{
int i,j;
// Since no weights contribute to the output
// vector from the input layer,
// assign the input vector from the input layer
// to all the node in the first hidden layer
for (i = 0; i < Layer[0].Node.length; i++)
Layer[0].Node[i].Output = Layer[0].Input[i];
Layer[1].Input = Layer[0].Input;
for (i = 1; i < NumberOfLayers; i++)
{
Layer[i].FeedForward();
// Unless we have reached the last layer, assign layer i's output vector
// to the (i+1) layer's input vector
if (i != NumberOfLayers-1)
Layer[i+1].Input = Layer[i].OutputVector();
}
}
// FeedForward()
// Back propagate the network output error through
// the network to update the weight values
public void UpdateWeights()
{
CalculateSignalErrors();
BackPropagateError();
}
private void CalculateSignalErrors()
{
int i,j,k,OutputLayer;
double Sum;
OutputLayer = NumberOfLayers-1;
// Calculate all output signal error
for (i = 0; i < Layer[OutputLayer].Node.length; i++)
{
Layer[OutputLayer].Node[i].SignalError =
(ExpectedOutput[SampleNumber][i] -Layer[OutputLayer].Node[i].Output) *
Layer[OutputLayer].Node[i].Output *
(1-Layer[OutputLayer].Node[i].Output);
}
// Calculate signal error for all nodes in the hidden layer
// (back propagate the errors
for (i = NumberOfLayers-2; i > 0; i--)
{
for (j = 0; j < Layer[i].Node.length; j++)
{
Sum = 0;
for (k = 0; k < Layer[i+1].Node.length; k++)
Sum = Sum + Layer[i+1].Node[k].Weight[j] *
Layer[i+1].Node[k].SignalError;
Layer[i].Node[j].SignalError = Layer[i].Node[j].Output*(1 -
Layer[i].Node[j].Output)*Sum;
}
}
}
private void BackPropagateError()
{
int i,j,k;
// Update Weights
for (i = NumberOfLayers-1; i > 0; i--)
{
for (j = 0; j < Layer[i].Node.length; j++)
{
// Calculate Bias weight difference to node j
Layer[i].Node[j].ThresholdDiff = LearningRate *
Layer[i].Node[j].SignalError +
Momentum*Layer[i].Node[j].ThresholdDiff;
// Update Bias weight to node j
Layer[i].Node[j].Threshold =
Layer[i].Node[j].Threshold +
Layer[i].Node[j].ThresholdDiff;
// Update Weights
for (k = 0; k < Layer[i].Input.length; k++)
{
// Calculate weight difference between node j and k
Layer[i].Node[j].WeightDiff[k] =
LearningRate * Layer[i].Node[j].SignalError * Layer[i-1].Node[k].Output +
Momentum * Layer[i].Node[j].WeightDiff[k];
// Update weight between node j and k
Layer[i].Node[j].Weight[k] =
Layer[i].Node[j].Weight[k] +
Layer[i].Node[j].WeightDiff[k];
}
}
}
}
private void CalculateOverallError()
{
int i,j;
OverallError = 0;
for (i = 0; i < NumberOfSamples; i++)
for (j = 0; j < Layer[NumberOfLayers-1].Node.length; j++)
{
OverallError = OverallError +
0.5 * Math.pow(ExpectedOutput[i][j] - ActualOutput[i][j], 2);
}
}
public BackPropagation(int NumberOfNodes[],
double InputSamples[][],
double OutputSamples[][],
double LearnRate,
double Moment,
double MinError,
long MaxIter
)
{
int i,j;
// Initiate variables
NumberOfSamples = InputSamples.length;
MinimumError = MinError;
LearningRate = LearnRate;
Momentum = Moment;
NumberOfLayers = NumberOfNodes.length;
MaximumNumberOfIterations = MaxIter;
// Create network layers
Layer = new LAYER[NumberOfLayers];
// Assign the number of node to the input layer
Layer[0] = new LAYER(NumberOfNodes[0],NumberOfNodes[0]);
// Assign number of nodes to each layer
for (i = 1; i < NumberOfLayers; i++)
Layer[i] = new LAYER(NumberOfNodes[i],NumberOfNodes[i-1]);
Input = new double[NumberOfSamples][Layer[0].Node.length];
ExpectedOutput = new double[NumberOfSamples][Layer[NumberOfLayers-1].Node.length];
ActualOutput = new double[NumberOfSamples][Layer[NumberOfLayers-1].Node.length];
// Assign input set
for (i = 0; i < NumberOfSamples; i++)
for (j = 0; j < Layer[0].Node.length; j++)
Input[i][j] = InputSamples[i][j];
// Assign output set
for (i = 0; i < NumberOfSamples; i++)
for (j = 0; j < Layer[NumberOfLayers-1].Node.length; j++)
ExpectedOutput[i][j] = OutputSamples[i][j];
}
public void TrainNetwork()
{
int i,j;
long k=0;
do
{
// For each pattern
for (SampleNumber = 0; SampleNumber < NumberOfSamples; SampleNumber++)
{
for (i = 0; i < Layer[0].Node.length; i++)
Layer[0].Input[i] = Input[SampleNumber][i];
FeedForward();
// Assign calculated output vector from network to ActualOutput
for (i = 0; i < Layer[NumberOfLayers-1].Node.length; i++)
ActualOutput[SampleNumber][i] = Layer[NumberOfLayers-1].Node[i].Output;
UpdateWeights();
// if we've been told to stop training, then
// stop thread execution
if (die){
return;
}
// if
}
k++;
// Calculate Error Function
CalculateOverallError();
System.out.println("OverallError = " + Double.toString(OverallError) + "\n");
System.out.print("Epoch = "+Long.toString(k)+"\n");
} while ((OverallError > MinimumError) &&(k < MaximumNumberOfIterations));
}
public LAYER[] get_layers() { return Layer; }
// called when testing the network.
public double[] test(double[] input)
{
int winner = 0;
NODE[] output_nodes;
for (int j = 0; j < Layer[0].Node.length; j++)
{ Layer[0].Input[j] = input[j];}
FeedForward();
// get the last layer of nodes (the outputs)
output_nodes = (Layer[Layer.length - 1]).get_nodes();
double[] actual_output = new double[output_nodes.length];
for (int k=0; k < output_nodes.length; k++)
{
actual_output[k]=output_nodes[k].Output;
} // for
return actual_output;
}//test()
public double get_error()
{
CalculateOverallError();
return OverallError;
} // get_error()
// to change the delay in the network
public void set_delay(long time)
{
if (time >= 0) {
delay = time;
} // if
}
//save the trained network
public void save(String FileName)
{
try{
FileOutputStream fos = new FileOutputStream (new File(FileName), true);
// Serialize data object to a file
ObjectOutputStream os = new ObjectOutputStream(fos);
os.writeObject(this);
os.close();
fos.close();
System.out.println("Network Saved!!!!");
}
catch (IOException E){System.out.println(E.toString());}
catch (Exception e){System.out.println(e.toString());}
}
public BackPropagation load(String FileName)
{
BackPropagation myclass= null;
try
{
//File patternDirectory = new File(Environment.getExternalStorageDirectory().getAbsolutePath().toString()+"INDIAN_NUMBER_RECOGNITION.data");
//patternDirectory.mkdirs();
FileInputStream fis = new FileInputStream(new File(FileName));
//FileInputStream fis =context.openFileInput(FileName);
ObjectInputStream is = new ObjectInputStream(fis);
myclass = (BackPropagation) is.readObject();
System.out.println("Error After Reading = "+Double.toString(myclass.get_error())+"\n");
is.close();
fis.close();
return myclass;
}
catch (Exception e){System.out.println(e.toString());}
return myclass;
}
// needed to implement threading.
public void run() {
TrainNetwork();
File Net_File = new File(Environment.getExternalStorageDirectory(),"Number_Recognition_1.ser");
save(Net_File.getAbsolutePath());
System.out.println( "DONE TRAINING :) ^_^ ^_^ :) !\n");
System.out.println("With Network ERROR = "+Double.toString(get_error())+"\n");
} // run()
// to notify the network to stop training.
public void kill() { die = true; }
}
Layer Class:
public class LAYER implements Serializable
{
private double Net;
public double Input[];
// Vector of inputs signals from previous
// layer to the current layer
public NODE Node[];
// Vector of nodes in current layer
// The FeedForward function is called so that
// the outputs for all the nodes in the current
// layer are calculated
public void FeedForward() {
for (int i = 0; i < Node.length; i++) {
Net = Node[i].Threshold;
for (int j = 0; j < Node[i].Weight.length; j++)
{Net = Net + Input[j] * Node[i].Weight[j];
System.out.println("Net = "+Double.toString(Net)+"\n");
}
Node[i].Output = Sigmoid(Net);
System.out.println("Node["+Integer.toString(i)+".Output = "+Double.toString(Node[i].Output)+"\n");
}
}
// The Sigmoid function calculates the
// activation/output from the current node
private double Sigmoid (double Net) {
return 1/(1+Math.exp(-Net));
}
// Return the output from all node in the layer
// in a vector form
public double[] OutputVector() {
double Vector[];
Vector = new double[Node.length];
for (int i=0; i < Node.length; i++)
Vector[i] = Node[i].Output;
return (Vector);
}
public LAYER (int NumberOfNodes, int NumberOfInputs) {
Node = new NODE[NumberOfNodes];
for (int i = 0; i < NumberOfNodes; i++)
Node[i] = new NODE(NumberOfInputs);
Input = new double[NumberOfInputs];
}
// added by DSK
public NODE[] get_nodes() { return Node; }
}
Node Class:
public class NODE implements Serializable
{
public double Output;
// Output signal from current node
public double Weight[];
// Vector of weights from previous nodes to current node
public double Threshold;
// Node Threshold /Bias
public double WeightDiff[];
// Weight difference between the nth and the (n-1) iteration
public double ThresholdDiff;
// Threshold difference between the nth and the (n-1) iteration
public double SignalError;
// Output signal error
// InitialiseWeights function assigns a randomly
// generated number, between -1 and 1, to the
// Threshold and Weights to the current node
private void InitialiseWeights() {
Threshold = -1+2*Math.random();
// Initialise threshold nodes with a random
// number between -1 and 1
ThresholdDiff = 0;
// Initially, ThresholdDiff is assigned to 0 so
// that the Momentum term can work during the 1st
// iteration
for(int i = 0; i < Weight.length; i++) {
Weight[i]= -1+2*Math.random();
// Initialise all weight inputs with a
// random number between -1 and 1
WeightDiff[i] = 0;
// Initially, WeightDiff is assigned to 0
// so that the Momentum term can work during
// the 1st iteration
}
}
public NODE (int NumberOfNodes) {
Weight = new double[NumberOfNodes];
// Create an array of Weight with the same
// size as the vector of inputs to the node
WeightDiff = new double[NumberOfNodes];
// Create an array of weightDiff with the same
// size as the vector of inputs to the node
InitialiseWeights();
// Initialise the Weights and Thresholds to the node
}
public double[] get_weights() { return Weight; }
public double get_output() { return Output; }
}
I wrote the code in NetBeans exactly like this; it only differs in the save method, in where the file is saved.
How can I load the file correctly so I don't get this exception?
I solved this by saving the network to an XML file and then loading it again in Android, so it only took two hours of training instead of days, without any serialization problems. Although it took some time to load that XML, I serialized it again to neural_network.ser afterwards so it will load much faster.
I know it's not the best solution, but that's what I've done.
Here is the code:
public void SaveToXML(String FileName)throws
ParserConfigurationException, FileNotFoundException,
TransformerException, TransformerConfigurationException
{
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = factory.newDocumentBuilder();
Document doc = parser.newDocument();
Element root = doc.createElement("neuralNetwork");
Element layers = doc.createElement("structure");
layers.setAttribute("numberOfLayers",Integer.toString(this.NumberOfLayers));
for (int il=0; il<this.NumberOfLayers; il++){
Element layer = doc.createElement("layer");
layer.setAttribute("index",Integer.toString(il));
layer.setAttribute("numberOfNeurons",Integer.toString(this.Layer[il].Node.length));
if(il==0)
{
for(int in=0;in<this.Layer[il].Node.length;in++)
{
Element neuron = doc.createElement("neuron");
neuron.setAttribute("index",Integer.toString(in));
neuron.setAttribute("NumberOfInputs",Integer.toString(1));
neuron.setAttribute("threshold",Double.toString(this.Layer[il].Node[in].Threshold));
Element input = doc.createElement("input");
double[] weights = this.Layer[il].Node[in].get_weights();
input.setAttribute("index",Integer.toString(in));
input.setAttribute("weight",Double.toString(weights[in]));
neuron.appendChild(input);
layer.appendChild(neuron);
}
layers.appendChild(layer);
}
else
{
for (int in=0; in<this.Layer[il].Node.length;in++){
Element neuron = doc.createElement("neuron");
neuron.setAttribute("index",Integer.toString(in));
neuron.setAttribute("NumberOfInputs",Integer.toString(this.Layer[il].Node[in].Weight.length));
neuron.setAttribute("threshold",Double.toString(this.Layer[il].Node[in].Threshold));
for (int ii=0; ii<this.Layer[il].Node[in].Weight.length;ii++) {
double[] weights = this.Layer[il].Node[in].get_weights();
Element input = doc.createElement("input");
input.setAttribute("index",Integer.toString(ii));
input.setAttribute("weight",Double.toString(weights[ii]));
neuron.appendChild(input);
}
layer.appendChild(neuron);
layers.appendChild(layer);
}
}
}
root.appendChild(layers);
doc.appendChild(root);
File xmlOutputFile = new File(FileName);
FileOutputStream fos;
Transformer transformer;
fos = new FileOutputStream(xmlOutputFile);
TransformerFactory transformerFactory = TransformerFactory.newInstance();
transformer = transformerFactory.newTransformer();
DOMSource source = new DOMSource(doc);
StreamResult result = new StreamResult(fos);
transformer.setOutputProperty("encoding","iso-8859-2");
transformer.setOutputProperty("indent","yes");
transformer.transform(source, result);
}
LoadFromXML Function:
public BackPropagation LoadFromXML(String FileName)throws
ParserConfigurationException, SAXException, IOException, ParseException
{
BackPropagation myclass= new BackPropagation();
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = factory.newDocumentBuilder();
File source = new File(FileName);
Document doc = parser.parse(source);
Node nodeNeuralNetwork = doc.getDocumentElement();
if (!nodeNeuralNetwork.getNodeName().equals("neuralNetwork")) throw new ParseException("[Error] NN-Load: Parse error in XML file, neural network couldn't be loaded.",0);
NodeList nodeNeuralNetworkContent = nodeNeuralNetwork.getChildNodes();
System.out.print("<neuralNetwork>\n");
for (int innc=0; innc<nodeNeuralNetworkContent.getLength(); innc++)
{
Node nodeStructure = nodeNeuralNetworkContent.item(innc);
if (nodeStructure.getNodeName().equals("structure"))
{
System.out.print("<structure numberOfLayers = ");
myclass.NumberOfLayers = Integer.parseInt(((Element)nodeStructure).getAttribute("numberOfLayers"));
myclass.Layer = new LAYER[myclass.NumberOfLayers];
System.out.print(Integer.toString(myclass.NumberOfLayers)+">\n");
NodeList nodeStructureContent = nodeStructure.getChildNodes();
for (int isc=0; isc<nodeStructureContent.getLength();isc++)
{
Node nodeLayer = nodeStructureContent.item(isc);
if (nodeLayer.getNodeName().equals("layer"))
{
int index = Integer.parseInt(((Element)nodeLayer).getAttribute("index"));
System.out.print("<layer index = "+Integer.toString(index)+" numberOfNeurons = ");
int number_of_N = Integer.parseInt(((Element)nodeLayer).getAttribute("numberOfNeurons"));
System.out.print(Integer.toString(number_of_N)+">\n");
if(index==0)
{
myclass.Layer[0]=new LAYER(number_of_N,800);
}
else
{
int j=index-1;
myclass.Layer[index]=new LAYER(number_of_N,myclass.Layer[j].Node.length);
}
NodeList nodeLayerContent = nodeLayer.getChildNodes();
for (int ilc=0; ilc<nodeLayerContent.getLength();ilc++)
{
Node nodeNeuron = nodeLayerContent.item(ilc);
if (nodeNeuron.getNodeName().equals("neuron"))
{
System.out.print("<neuron index = ");
int neuron_index = Integer.parseInt(((Element)nodeNeuron).getAttribute("index"));
myclass.Layer[index].Node[neuron_index].Threshold = Double.parseDouble(((Element)nodeNeuron).getAttribute("threshold"));
System.out.print(Integer.toString(neuron_index)+" threshold = "+Double.toString(myclass.Layer[index].Node[neuron_index].Threshold)+">\n");
NodeList nodeNeuronContent = nodeNeuron.getChildNodes();
for (int inc=0; inc < nodeNeuronContent.getLength();inc++)
{
Node nodeNeuralInput = nodeNeuronContent.item(inc);
if (nodeNeuralInput.getNodeName().equals("input"))
{
System.out.print("<input index = ");
int index_input = Integer.parseInt(((Element)nodeNeuralInput).getAttribute("index"));
myclass.Layer[index].Node[neuron_index].Weight[index_input] = Double.parseDouble(((Element)nodeNeuralInput).getAttribute("weight"));
System.out.print(Integer.toString(index_input)+" weight = "+Double.toString(myclass.Layer[index].Node[neuron_index].Weight[index_input])+">\n");
}
}
}
}
}
}
System.out.print("</structure>");
}
}
return myclass;
}
Consider a few web server instances running in parallel. Each server holds a reference to a single shared "Status keeper", whose role is keeping the last N requests from all servers.
For example (N=3):
Server a: "Request id = ABCD" Status keeper=["ABCD"]
Server b: "Request id = XYZZ" Status keeper=["ABCD", "XYZZ"]
Server c: "Request id = 1234" Status keeper=["ABCD", "XYZZ", "1234"]
Server b: "Request id = FOO" Status keeper=["XYZZ", "1234", "FOO"]
Server a: "Request id = BAR" Status keeper=["1234", "FOO", "BAR"]
At any point in time, the "Status keeper" might be called from a monitoring application that reads these last N requests for an SLA report.
What's the best way to implement this producer-consumer scenario in Java, giving the web servers higher priority than the SLA report?
CircularFifoBuffer seems to be the appropriate data structure to hold the requests, but I'm not sure what the optimal way is to implement efficient concurrency.
Buffer fifo = BufferUtils.synchronizedBuffer(new CircularFifoBuffer());
Here's a lock-free ring buffer implementation. It implements a fixed-size buffer - there is no FIFO functionality. I would suggest you store a Collection of requests for each server instead. That way your report can do the filtering rather than getting your data structure to filter.
/**
* Container
* ---------
*
* A lock-free container that offers a close-to O(1) add/remove performance.
*
*/
public class Container<T> implements Iterable<T> {
// The capacity of the container.
final int capacity;
// The list.
AtomicReference<Node<T>> head = new AtomicReference<Node<T>>();
// TESTING {
AtomicLong totalAdded = new AtomicLong(0);
AtomicLong totalFreed = new AtomicLong(0);
AtomicLong totalSkipped = new AtomicLong(0);
private void resetStats() {
totalAdded.set(0);
totalFreed.set(0);
totalSkipped.set(0);
}
// TESTING }
// Constructor
public Container(int capacity) {
this.capacity = capacity;
// Construct the list.
Node<T> h = new Node<T>();
Node<T> it = h;
// One created, now add (capacity - 1) more
for (int i = 0; i < capacity - 1; i++) {
// Add it.
it.next = new Node<T>();
// Step on to it.
it = it.next;
}
// Make it a ring.
it.next = h;
// Install it.
head.set(h);
}
// Empty ... NOT thread safe.
public void clear() {
Node<T> it = head.get();
for (int i = 0; i < capacity; i++) {
// Trash the element
it.element = null;
// Mark it free.
it.free.set(true);
it = it.next;
}
// Clear stats.
resetStats();
}
// Add a new one.
public Node<T> add(T element) {
// Get a free node and attach the element.
totalAdded.incrementAndGet();
return getFree().attach(element);
}
// Find the next free element and mark it not free.
private Node<T> getFree() {
Node<T> freeNode = head.get();
int skipped = 0;
// Stop when we hit the end of the list
// ... or we successfully transit a node from free to not-free.
while (skipped < capacity && !freeNode.free.compareAndSet(true, false)) {
skipped += 1;
freeNode = freeNode.next;
}
// Keep count of skipped.
totalSkipped.addAndGet(skipped);
if (skipped < capacity) {
// Put the head as next.
// Doesn't matter if it fails. That would just mean someone else was doing the same.
head.set(freeNode.next);
} else {
// We hit the end! No more free nodes.
throw new IllegalStateException("Capacity exhausted.");
}
return freeNode;
}
// Mark it free.
public void remove(Node<T> it, T element) {
totalFreed.incrementAndGet();
// Remove the element first.
it.detach(element);
// Mark it as free.
if (!it.free.compareAndSet(false, true)) {
throw new IllegalStateException("Freeing a freed node.");
}
}
// The Node class. It is static so needs the <T> repeated.
public static class Node<T> {
// The element in the node.
private T element;
// Are we free?
private AtomicBoolean free = new AtomicBoolean(true);
// The next reference in whatever list I am in.
private Node<T> next;
// Construct a node of the list
private Node() {
// Start empty.
element = null;
}
// Attach the element.
public Node<T> attach(T element) {
// Sanity check.
if (this.element == null) {
this.element = element;
} else {
throw new IllegalArgumentException("There is already an element attached.");
}
// Useful for chaining.
return this;
}
// Detach the element.
public Node<T> detach(T element) {
// Sanity check.
if (this.element == element) {
this.element = null;
} else {
throw new IllegalArgumentException("Removal of wrong element.");
}
// Useful for chaining.
return this;
}
public T get () {
return element;
}
@Override
public String toString() {
return element != null ? element.toString() : "null";
}
}
// Provides an iterator across all items in the container.
public Iterator<T> iterator() {
return new UsedNodesIterator<T>(this);
}
// Iterates across used nodes.
private static class UsedNodesIterator<T> implements Iterator<T> {
// Where next to look for the next used node.
Node<T> it;
int limit = 0;
T next = null;
public UsedNodesIterator(Container<T> c) {
// Snapshot the head node at this time.
it = c.head.get();
limit = c.capacity;
}
public boolean hasNext() {
// Made into a `while` loop to fix issue reported by @Nim in code review
while (next == null && limit > 0) {
// Scan to the next non-free node.
while (limit > 0 && it.free.get() == true) {
it = it.next;
// Step down 1.
limit -= 1;
}
if (limit != 0) {
next = it.element;
}
}
return next != null;
}
public T next() {
T n = null;
if ( hasNext () ) {
// Give it to them.
n = next;
next = null;
// Step forward.
it = it.next;
limit -= 1;
} else {
// Not there!!
throw new NoSuchElementException ();
}
return n;
}
public void remove() {
throw new UnsupportedOperationException("Not supported.");
}
}
@Override
public String toString() {
StringBuilder s = new StringBuilder();
Separator comma = new Separator(",");
// Keep counts too.
int usedCount = 0;
int freeCount = 0;
// I will iterate the list myself as I want to count free nodes too.
Node<T> it = head.get();
int count = 0;
s.append("[");
// Scan to the end.
while (count < capacity) {
// Is it in-use?
if (it.free.get() == false) {
// Grab its element.
T e = it.element;
// Is it null?
if (e != null) {
// Good element.
s.append(comma.sep()).append(e.toString());
// Count them.
usedCount += 1;
} else {
// Probably became free while I was traversing.
// Because the element is detached before the entry is marked free.
freeCount += 1;
}
} else {
// Free one.
freeCount += 1;
}
// Next
it = it.next;
count += 1;
}
// Decorate with counts "]used+free".
s.append("]").append(usedCount).append("+").append(freeCount);
if (usedCount + freeCount != capacity) {
// Perhaps something was added/freed while we were iterating.
s.append("?");
}
return s.toString();
}
}
Note that this is close to O(1) for put and get. A Separator just emits "" the first time around and then its parameter from then on.
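Separator itself is not shown; a minimal stand-in that matches the way it is used above (sep() and reset()) could look like this (an assumption, not the original class):
// Emits "" on the first call to sep() and the configured separator afterwards.
public class Separator {
    private final String separator;
    private boolean first = true;

    public Separator(String separator) {
        this.separator = separator;
    }

    public String sep() {
        if (first) {
            first = false;
            return "";
        }
        return separator;
    }

    // Makes the next sep() call return "" again.
    public void reset() {
        first = true;
    }
}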
Edit: Added test methods.
// ***** Following only needed for testing. *****
private static boolean Debug = false;
private final static String logName = "Container.log";
private final static NamedFileOutput log = new NamedFileOutput("C:\\Junk\\");
private static synchronized void log(boolean toStdoutToo, String s) {
if (Debug) {
if (toStdoutToo) {
System.out.println(s);
}
log(s);
}
}
private static synchronized void log(String s) {
if (Debug) {
try {
log.writeLn(logName, s);
} catch (IOException ex) {
ex.printStackTrace();
}
}
}
static volatile boolean testing = true;
// Tester object to exercise the container.
static class Tester<T> implements Runnable {
// My name.
T me;
// The container I am testing.
Container<T> c;
public Tester(Container<T> container, T name) {
c = container;
me = name;
}
private void pause() {
try {
Thread.sleep(0);
} catch (InterruptedException ex) {
testing = false;
}
}
public void run() {
// Spin on add/remove until stopped.
while (testing) {
// Add it.
Node<T> n = c.add(me);
log("Added " + me + ": " + c.toString());
pause();
// Remove it.
c.remove(n, me);
log("Removed " + me + ": " + c.toString());
pause();
}
}
}
static final String[] strings = {
"One", "Two", "Three", "Four", "Five",
"Six", "Seven", "Eight", "Nine", "Ten"
};
static final int TEST_THREADS = Math.min(10, strings.length);
public static void main(String[] args) throws InterruptedException {
Debug = true;
log.delete(logName);
Container<String> c = new Container<String>(10);
// Simple add/remove
log(true, "Simple test");
Node<String> it = c.add(strings[0]);
log("Added " + c.toString());
c.remove(it, strings[0]);
log("Removed " + c.toString());
// Capacity test.
log(true, "Capacity test");
ArrayList<Node<String>> nodes = new ArrayList<Node<String>>(strings.length);
// Fill it.
for (int i = 0; i < strings.length; i++) {
nodes.add(i, c.add(strings[i]));
log("Added " + strings[i] + " " + c.toString());
}
// Add one more.
try {
c.add("Wafer thin mint!");
} catch (IllegalStateException ise) {
log("Full!");
}
c.clear();
log("Empty: " + c.toString());
// Iterate test.
log(true, "Iterator test");
for (int i = 0; i < strings.length; i++) {
nodes.add(i, c.add(strings[i]));
}
StringBuilder all = new StringBuilder ();
Separator sep = new Separator(",");
for (String s : c) {
all.append(sep.sep()).append(s);
}
log("All: "+all);
for (int i = 0; i < strings.length; i++) {
c.remove(nodes.get(i), strings[i]);
}
sep.reset();
all.setLength(0);
for (String s : c) {
all.append(sep.sep()).append(s);
}
log("None: " + all.toString());
// Multiple add/remove
log(true, "Multi test");
for (int i = 0; i < strings.length; i++) {
nodes.add(i, c.add(strings[i]));
log("Added " + strings[i] + " " + c.toString());
}
log("Filled " + c.toString());
for (int i = 0; i < strings.length - 1; i++) {
c.remove(nodes.get(i), strings[i]);
log("Removed " + strings[i] + " " + c.toString());
}
c.remove(nodes.get(strings.length - 1), strings[strings.length - 1]);
log("Empty " + c.toString());
// Multi-threaded add/remove
log(true, "Threads test");
c.clear();
for (int i = 0; i < TEST_THREADS; i++) {
Thread t = new Thread(new Tester<String>(c, strings[i]));
t.setName("Tester " + strings[i]);
log("Starting " + t.getName());
t.start();
}
// Wait for 10 seconds.
long stop = System.currentTimeMillis() + 10 * 1000;
while (System.currentTimeMillis() < stop) {
Thread.sleep(100);
}
// Stop the testers.
testing = false;
// Wait some more.
Thread.sleep(1 * 100);
// Get stats.
double added = c.totalAdded.doubleValue();
double skipped = c.totalSkipped.doubleValue();
//double freed = c.freed.doubleValue();
log(true, "Stats: added=" + c.totalAdded + ",freed=" + c.totalFreed + ",skipped=" + c.totalSkipped + ",O(" + ((added + skipped) / added) + ")");
}
Maybe you want to look at Disruptor - Concurrent Programming Framework.
Find a paper describing the alternatives, the design, and also a performance comparison to java.util.concurrent.ArrayBlockingQueue here: pdf
Consider reading the first three articles from BlogsAndArticles.
If the library is too much, stick to java.util.concurrent.ArrayBlockingQueue
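For reference, a bare-bones hand-off with ArrayBlockingQueue might look like the sketch below; note that, unlike a circular buffer, a full ArrayBlockingQueue does not evict the oldest entry (offer() simply returns false and put() would block), so this only covers the producer/consumer hand-off, not the last-N semantics.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class HandOffSketch {
    // Bounded queue shared between the web-server threads and one drainer thread.
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(1024);

    // Called by the web servers (producers); never blocks,
    // returns false if the queue is momentarily full.
    public boolean record(String requestId) {
        return queue.offer(requestId);
    }

    // Run by a single dedicated consumer thread.
    public void drainLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            String requestId = queue.take();   // blocks until an element arrives
            // ... append requestId to whatever keeps the last N entries ...
        }
    }
}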
I would have a look at ArrayDeque, or for a more concurrent implementation have a look at the Disruptor library, which is one of the most sophisticated/complex ring buffers in Java.
An alternative is to use an unbounded queue, which is more concurrent as the producer never needs to wait for the consumer, such as Java Chronicle.
Unless your needs justify the complexity, an ArrayDeque may be all you need.
Also have a look at java.util.concurrent.
Blocking queues will block until there is something to consume or (optionally) space to produce:
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/BlockingQueue.html
Concurrent linked queue is non-blocking and uses a slick algorithm that allows a producer and consumer to be active concurrently:
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ConcurrentLinkedQueue.html
Hazelcast's Queue offers almost everything you ask for, but doesn't support circularity. But from your description I am not sure if you actually need it.
If it were me, I would use the CircularFIFOBuffer as you indicated, and synchronize around the buffer when writing (add). When the monitoring application wants to read the buffer, synchronize on the buffer, and then copy or clone it to use for reporting.
This suggestion is predicated on the assumption that the latency of copying/cloning the buffer to a new object is minimal. If there are a large number of elements, and copying is slow, then this is not a good idea.
Pseudo-Code example:
public void writeRequest(String requestID) {
synchronized(buffer) {
buffer.add(requestID);
}
}
public Collection<String> getRequests() {
synchronized(buffer) {
return buffer.clone();
}
}
Since you specifically ask to give writers (that is web servers) higher priority than the reader (that is monitoring), I would suggest the following design.
Web servers add request information to a concurrent queue which is read by a dedicated thread, which adds requests to a thread-local (therefore non-synchronized) queue that overwrites the oldest element, like EvictingQueue or CircularFifoQueue.
After every request it processes, this same thread checks a flag which indicates whether a report has been requested, and if so, produces a report from all elements present in the thread-local queue.
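A rough sketch of that design using only the JDK, with an ArrayDeque standing in for EvictingQueue/CircularFifoQueue (class and method names below are illustrative):
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

public class StatusKeeper {
    private final int capacity;                  // N
    private final ConcurrentLinkedQueue<String> incoming = new ConcurrentLinkedQueue<String>();
    private final AtomicBoolean reportRequested = new AtomicBoolean(false);
    private volatile List<String> lastReport = new ArrayList<String>();

    public StatusKeeper(int capacity) {
        this.capacity = capacity;
    }

    // Producers (web servers): cheap and non-blocking.
    public void record(String requestId) {
        incoming.offer(requestId);
    }

    // Monitoring: ask for a snapshot, then read it once the worker has published it.
    public void requestReport() {
        reportRequested.set(true);
    }

    public List<String> lastReport() {
        return lastReport;
    }

    // A single dedicated worker thread owns lastN, so it needs no synchronization.
    // (This sketch busy-polls the incoming queue for brevity.)
    public void workerLoop() {
        Deque<String> lastN = new ArrayDeque<String>(capacity);
        while (!Thread.currentThread().isInterrupted()) {
            String id = incoming.poll();
            if (id != null) {
                if (lastN.size() == capacity) {
                    lastN.removeFirst();         // evict the oldest entry
                }
                lastN.addLast(id);
            }
            if (reportRequested.compareAndSet(true, false)) {
                lastReport = new ArrayList<String>(lastN);   // publish a snapshot
            }
        }
    }
}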
I've done the breadth-first search in a normal way.
Now I'm trying to do it in a multithreaded way.
I have one queue which is shared between the threads.
I use synchronized(LockObject) when I remove a node from the queue (a FIFO queue).
What I'm trying to do is this: when a thread finds a solution, all the other threads should stop immediately.
I assume you are traversing a tree for your BFS.
Create a thread pool.
For each unexplored child of the node, retrieve a thread from the thread pool (perhaps using a Semaphore), mark the child node as 'explored', and explore the node's children in a BFS manner. When you have found a solution or are done exploring all the nodes, release the semaphore.
^ I've never done this before so I might have missed something.
Assuming you want to do this iteratively (see the note at the bottom for why there may be better closed-form solutions), this is not a great problem for exercising multithreading. The problem is that multithreading is great when you don't depend on previous results, but here you want the minimum number of coins.
As you point out, a breadth-first solution guarantees that once you reach the desired amount, you won't find any further solutions with fewer coins in a single-threaded environment. However, in a multithreaded environment, once you start calculating a solution, you cannot guarantee that it will finish before some other solution. Imagine the value 21: it can be a 20c coin and a 1c, or four 5c coins and a 1c; if both are being calculated simultaneously, you cannot guarantee that the first (and correct) solution will finish first. In practice the situation is unlikely to happen, but when you work with multithreading you want the solution to work in theory, because multithreaded code always fails in the demonstration, no matter that it should not have failed until the heat death of the universe.
Now you have two possible solutions: one is to introduce choke points at the beginning of each level; you don't start a level until the previous level is finished. The other is, once you reach a solution, to continue only those calculations at a lower level than the current result (which means you cannot purge the others). Probably, with all the synchronization needed, you would get better performance by going single-threaded, but let's go on.
For the first solution, the natural form is to iterate, increasing the level. You can use the solution provided by happymeal, with a Semaphore. An alternative is to use the newer classes provided by Java.
CoinSet getCoinSet(int desiredAmount) throws InterruptedException {
// Use whatever number of threads you prefer or another member of Executors.
final ExecutorService executor = Executors.newFixedThreadPool(10);
ResultContainer container = new ResultContainer();
container.getNext().add(new Producer(desiredAmount, new CoinSet(), container));
while (container.getResult() == null) {
executor.invokeAll(container.setNext(new Vector<Producer>()));
}
return container.getResult();
}
public class Producer implements Callable<CoinSet> {
private final int desiredAmount;
private final CoinSet data;
private final ResultContainer container;
public Producer(int desiredAmount, CoinSet data, ResultContainer container) {
this.desiredAmount = desiredAmount;
this.data = data;
this.container = container;
}
public CoinSet call() {
if (data.getSum() == desiredAmount) {
container.setResult(data);
return data;
} else {
Collection<CoinSet> nextSets = data.addCoins();
for (CoinSet nextSet : nextSets) {
container.getNext().add(new Producer(desiredAmount, nextSet, container));
}
return null;
}
}
}
// Probably it is better to split this class, but you then need to pass too many parameters
// The only really needed part is to create a wrapper around getNext, since invokeAll is
// undefined if you modify the list of tasks.
public class ResultContainer {
// I use Vector because it is synchronized.
private Vector<Producer> next = new Vector<Producer>();
private CoinSet result = null;
// Note I return the existing value.
public Vector<Producer> setNext(Vector<Producer> newValue) {
Vector<Producer> current = next;
next = newValue;
return current;
}
public Vector<Producer> getNext() {
return next;
}
public synchronized void setResult(CoinSet newValue) {
result = newValue;
}
public synchronized CoinSet getResult() {
return result;
}
}
This still has the problem that existing tasks are executed; however, it is simple to fix: pass the thread executor into each Producer (or the container). Then, when you find a result, call executor.shutdownNow. The threads that are already executing won't be interrupted, but the operation in each thread is trivial so it will finish fast; the runnables that have not started won't start.
The second option means you have to let all the current tasks finish, unless you keep track of how many tasks you have run at each level. You no longer need to keep track of the levels, though, and you don't need the while cycle. Instead, you just call
executor.submit(new Producer(new CoinSet(), desiredAmount, container)).get();
And then, the call method is pretty similar (assume you have executor in the Producer):
public CoinSet call() {
// Keep working only if there is no result yet, or this set is still smaller
// than the best result found so far.
if (container.getResult() == null || data.getCount() < container.getResult().getCount()) {
if (data.getSum() == desiredAmount) {
container.setResult(data);
return data;
} else {
Collection<CoinSet> nextSets = data.addCoins();
for (CoinSet nextSet : nextSets) {
executor.submit(new Producer(desiredAmount, nextSet, container));
}
}
}
return null;
}
You also have to modify container.setResult, since you cannot rely on the value not having been set by some other thread between the if check and the assignment (threads are really annoying, aren't they?):
public synchronized void setResult(CoinSet newValue) {
if (result == null || newValue.getCount() < result.getCount()) {
result = newValue;
}
}
In all previous answers, CoinSet.getSum() returns the sum of the coins in the set, CoinSet.getCount() returns the number of coins, and CoinSet.addCoins() returns a Collection of CoinSet in which each element is the current CoinSet plus one coin of each possible different value
Note: For the problem of the coins with the values 1, 5, 10 and 20, the simplest solution is to take the amount and divide it by the largest coin. Then take the modulus of that and use the next largest value, and so on. That is the minimum number of coins you are going to need. This rule applies (AFAICT) when the following property is true: for all consecutive pairs of coin values (i.e. in this case, 1-5, 5-10, 10-20), you can reach any integer multiple of the lower element in the pair with a smaller number of coins by using the larger element and whatever coins are necessary. You only need to prove it up to the least common multiple of both elements in the pair (after that it repeats itself).
I gather from your comment on happymeal's answer that you are trying to find how to reach a specific amount of money by adding coins of 1c, 5c, 10c and 20c.
Since each coin denomination divides the denomination of the next bigger coin, this can be solved in constant time as follows:
int[] coinCount(int amount) {
int[] coinValue = {20, 10, 5, 1};
int[] coinCount = new int[coinValue.length];
for (int i = 0; i < coinValue.length; i++) {
coinCount[i] = amount / coinValue[i];
amount -= coinCount[i] * coinValue[i];
}
return coinCount;
}
Take home message: Try to optimize your algorithm before resorting to multithreading, because algorithmic improvements can yield much greater improvements.
I've successfully implemented it.
What I did is take all the nodes in the first level, let's say 4 nodes.
Then I had 2 threads. Each one takes 2 nodes and generates their children. Whenever a thread finds a solution, it has to report the level it found the solution at and limit the search level so other threads don't go beyond that level.
Only the reporting method should be synchronized.
I did the code for the coin change problem. Here is my code for others to use.
Main Class (CoinsProblemBFS.java)
package coinsproblembfs;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;
import java.util.Scanner;
/**
*
* @author Kassem M. Bagher
*/
public class CoinsProblemBFS
{
private static List<Item> MoneyList = new ArrayList<Item>();
private static Queue<Item> q = new LinkedList<Item>();
private static LinkedList<Item> tmpQ;
public static Object lockLevelLimitation = new Object();
public static int searchLevelLimit = 1000;
public static Item lastFoundNode = null;
private static int numberOfThreads = 2;
private static void InitializeQueu(Item Root)
{
for (int x = 0; x < MoneyList.size(); x++)
{
Item t = new Item();
t.value = MoneyList.get(x).value;
t.Totalvalue = MoneyList.get(x).Totalvalue;
t.Title = MoneyList.get(x).Title;
t.parent = Root;
t.level = 1;
q.add(t);
}
}
private static int[] calculateQueueLimit(int numberOfItems, int numberOfThreads)
{
int total = 0;
int[] queueLimit = new int[numberOfThreads];
for (int x = 0; x < numberOfItems; x++)
{
if (total < numberOfItems)
{
queueLimit[x % numberOfThreads] += 1;
total++;
}
else
{
break;
}
}
return queueLimit;
}
private static void initializeMoneyList(int numberOfItems, Item Root)
{
for (int x = 0; x < numberOfItems; x++)
{
Scanner input = new Scanner(System.in);
Item t = new Item();
System.out.print("Enter the Title and Value for item " + (x + 1) + ": ");
String tmp = input.nextLine();
t.Title = tmp.split(" ")[0];
t.value = Double.parseDouble(tmp.split(" ")[1]);
t.Totalvalue = t.value;
t.parent = Root;
MoneyList.add(t);
}
}
private static void printPath(Item item)
{
System.out.println("\nSolution Found in Thread:" + item.winnerThreadName + "\nExecution Time: " + item.searchTime + " ms, " + (item.searchTime / 1000) + " s");
while (item != null)
{
for (Item listItem : MoneyList)
{
if (listItem.Title.equals(item.Title))
{
listItem.counter++;
}
}
item = item.parent;
}
for (Item listItem : MoneyList)
{
System.out.println(listItem.Title + " x " + listItem.counter);
}
}
public static void main(String[] args) throws InterruptedException
{
Item Root = new Item();
Root.Title = "Root Node";
Scanner input = new Scanner(System.in);
System.out.print("Number of Items: ");
int numberOfItems = input.nextInt();
input.nextLine();
initializeMoneyList(numberOfItems, Root);
System.out.print("Enter the Amount of Money: ");
double searchValue = input.nextDouble();
int searchLimit = (int) Math.ceil((searchValue / MoneyList.get(MoneyList.size() - 1).value));
System.out.print("Number of Threads (Must be less than the number of items): ");
numberOfThreads = input.nextInt();
if (numberOfThreads > numberOfItems)
{
System.exit(1);
}
InitializeQueu(Root);
int[] queueLimit = calculateQueueLimit(numberOfItems, numberOfThreads);
List<Thread> threadList = new ArrayList<Thread>();
for (int x = 0; x < numberOfThreads; x++)
{
tmpQ = new LinkedList<Item>();
for (int y = 0; y < queueLimit[x]; y++)
{
tmpQ.add(q.remove());
}
BFS tmpThreadObject = new BFS(MoneyList, searchValue, tmpQ);
Thread t = new Thread(tmpThreadObject);
t.setName((x + 1) + "");
threadList.add(t);
}
for (Thread t : threadList)
{
t.start();
}
boolean finish = false;
while (!finish)
{
Thread.sleep(250);
for (Thread t : threadList)
{
if (t.isAlive())
{
finish = false;
break;
}
else
{
finish = true;
}
}
}
printPath(lastFoundNode);
}
}
Item Class (Item.java)
package coinsproblembfs;
/**
*
* @author Kassem
*/
public class Item
{
String Title = "";
double value = 0;
int level = 0;
double Totalvalue = 0;
int counter = 0;
Item parent = null;
long searchTime = 0;
String winnerThreadName="";
}
Threads Class (BFS.java)
package coinsproblembfs;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
/**
*
* @author Kassem M. Bagher
*/
public class BFS implements Runnable
{
private LinkedList<Item> q;
private List<Item> MoneyList;
private double searchValue = 0;
private long start = 0, end = 0;
public BFS(List<Item> monyList, double searchValue, LinkedList<Item> queue)
{
q = new LinkedList<Item>();
MoneyList = new ArrayList<Item>();
this.searchValue = searchValue;
for (int x = 0; x < queue.size(); x++)
{
q.addLast(queue.get(x));
}
for (int x = 0; x < monyList.size(); x++)
{
MoneyList.add(monyList.get(x));
}
}
private synchronized void printPath(Item item)
{
while (item != null)
{
for (Item listItem : MoneyList)
{
if (listItem.Title.equals(item.Title))
{
listItem.counter++;
}
}
item = item.parent;
}
for (Item listItem : MoneyList)
{
System.out.println(listItem.Title + " x " + listItem.counter);
}
}
private void addChildren(Item node, LinkedList<Item> q, boolean initialized)
{
for (int x = 0; x < MoneyList.size(); x++)
{
Item t = new Item();
t.value = MoneyList.get(x).value;
if (initialized)
{
t.Totalvalue = 0;
t.level = 0;
}
else
{
t.parent = node;
t.Totalvalue = MoneyList.get(x).Totalvalue;
if (t.parent == null)
{
t.level = 0;
}
else
{
t.level = t.parent.level + 1;
}
}
t.Title = MoneyList.get(x).Title;
q.addLast(t);
}
}
@Override
public void run()
{
start = System.currentTimeMillis();
try
{
while (!q.isEmpty())
{
Item node = null;
node = (Item) q.removeFirst();
node.Totalvalue = node.value + node.parent.Totalvalue;
if (node.level < CoinsProblemBFS.searchLevelLimit)
{
if (node.Totalvalue == searchValue)
{
synchronized (CoinsProblemBFS.lockLevelLimitation)
{
CoinsProblemBFS.searchLevelLimit = node.level;
CoinsProblemBFS.lastFoundNode = node;
end = System.currentTimeMillis();
CoinsProblemBFS.lastFoundNode.searchTime = (end - start);
CoinsProblemBFS.lastFoundNode.winnerThreadName=Thread.currentThread().getName();
}
}
else
{
if (node.level + 1 < CoinsProblemBFS.searchLevelLimit)
{
addChildren(node, q, false);
}
}
}
}
} catch (Exception e)
{
e.printStackTrace();
}
}
}
Sample Input:
Number of Items: 4
Enter the Title and Value for item 1: one 1
Enter the Title and Value for item 2: five 5
Enter the Title and Value for item 3: ten 10
Enter the Title and Value for item 4: twenty 20
Enter the Amount of Money: 150
Number of Threads (Must be less than the number of items): 2