The code below works well; however, it isn't quite what I'm looking for. It finds an efficient way to cut variable sizes (user input) from one static stock size. I want to alter the code so that it finds an efficient way to cut the variable sizes from multiple static stock sizes, four to be specific.
public static void binPacking(double[] FinalCuttingList, double size, int TotalCuts)
{
    int StockLengthCount = 0;
    double[] StockValues = new double[TotalCuts];
    for (int i = 0; i < StockValues.length; i++)
        StockValues[i] = size;
    // First-fit: place each cut into the first stock piece with enough length left
    for (int i = 0; i < TotalCuts; i++)
        for (int o = 0; o < StockValues.length; o++)
        {
            if (StockValues[o] - FinalCuttingList[i] >= 0)
            {
                StockValues[o] -= FinalCuttingList[i];
                break;
            }
        }
    // Count every stock piece that was actually cut into
    for (int i = 0; i < StockValues.length; i++)
        if (StockValues[i] != size)
            StockLengthCount++;
    System.out.println("Number of 3400mm pieces required is: " + StockLengthCount);
}
static double[] sort(double[] sequence)
{
    // Bubble sort in descending order, swapping without a temp variable
    for (int i = 0; i < sequence.length; i++)
        for (int o = 0; o < sequence.length - 1; o++)
            if (sequence[o] < sequence[o + 1])
            {
                sequence[o] = sequence[o] + sequence[o + 1];
                sequence[o + 1] = sequence[o] - sequence[o + 1];
                sequence[o] = sequence[o] - sequence[o + 1];
            }
    return sequence;
}
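One way to extend the first-fit approach above to several stock lengths is to keep a list of opened pieces and, when a cut doesn't fit into any of them, open a new piece of the smallest stock length that can hold it. The following is only a rough sketch of that idea, not the original code; the method name binPackingMultiStock, the stockSizes array (which would hold the four lengths), and the List-based bookkeeping are illustrative assumptions. It needs java.util.ArrayList, java.util.Arrays, and java.util.List.
public static void binPackingMultiStock(double[] FinalCuttingList, double[] stockSizes)
{
    // Remaining length of every stock piece opened so far
    List<Double> open = new ArrayList<>();
    // Sort ascending, so a newly opened piece is always the smallest length that fits
    double[] sizes = stockSizes.clone();
    Arrays.sort(sizes);
    for (double cut : FinalCuttingList)
    {
        boolean placed = false;
        // First-fit into an already-opened piece
        for (int i = 0; i < open.size(); i++)
            if (open.get(i) >= cut)
            {
                open.set(i, open.get(i) - cut);
                placed = true;
                break;
            }
        // Otherwise open the smallest stock length that can hold the cut
        if (!placed)
            for (double s : sizes)
                if (s >= cut)
                {
                    open.add(s - cut);
                    placed = true;
                    break;
                }
        if (!placed)
            System.out.println("Cut " + cut + " is longer than every stock size");
    }
    System.out.println("Number of stock pieces required is: " + open.size());
}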
Firstly, I have a matrix which I need to sort in the way shown in the picture, using selection sort:
[picture: the way to sort the Matrix]
The array should be sorted in increasing order from top to bottom; what's more, we have to sort the elements of the right diagonal of the matrix.
Here's my code, but it doesn't work properly when I take the diagonal into account.
public class PAS {
    public static void main(String[] args) {
        int n = 5; int max = 25; int minimL = 0; int MinRow, Temp;
        int[][] array = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                array[i][j] = (int) (Math.random() * (max - minimL + 1) + minimL);
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++)
                System.out.print(" " + array[i][j]);
            System.out.println();
        }
        System.out.println();
        for (int NumCol = n - 1; NumCol >= 0; NumCol--) {
            for (int NumRow = 0; NumRow < n; NumRow++) {
                // Getting the right diagonal of the matrix, changing positions of columns to rows in order to sort them
                if (NumCol >= n - NumRow - 1) {
                    MinRow = NumRow;
                    for (int j = NumRow + 1; j < n; j++)
                        if (array[(n - 1) - NumCol][NumCol] > array[MinRow][(n - 1) - NumCol])
                            MinRow = j;
                    Temp = array[NumRow][(n - 1) - NumCol];
                    array[(n - 1) - NumCol][NumCol] = array[MinRow][(n - 1) - NumCol];
                    array[MinRow][(n - 1) - NumCol] = Temp;
                }
            }
        }
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                System.out.print(" " + array[i][j]);
            }
            System.out.println();
        }
    }
}
Is it possible to sort it properly in the way shown in the picture while staying within O(n^3)?
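Since the exact target layout is only visible in the picture, here is a sketch of just the diagonal part of the task: copy the right (anti-)diagonal into a one-dimensional array, selection-sort it in ascending order, and write it back. This stays well within O(n^3); the diag array is an illustrative assumption, not part of the original code.
// Copy the anti-diagonal (top-right to bottom-left) into a 1-D array
int[] diag = new int[n];
for (int i = 0; i < n; i++)
    diag[i] = array[i][n - 1 - i];
// Selection sort, ascending
for (int i = 0; i < n - 1; i++) {
    int min = i;
    for (int j = i + 1; j < n; j++)
        if (diag[j] < diag[min])
            min = j;
    int tmp = diag[i]; diag[i] = diag[min]; diag[min] = tmp;
}
// Write the sorted values back onto the diagonal
for (int i = 0; i < n; i++)
    array[i][n - 1 - i] = diag[i];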
I have to sort an array using a sorting algorithm and show in which index the values were originally placed. (see expected result vs actual result: https://i.imgur.com/GpIpkPE.png).
I managed to sort the values correctly, but the printed indices are only half correct, which is confusing me.
public static void sortNumbers(double[] averageNotes) {
    for (int i = 0; i < averageNotes.length; i++) {
        double max = averageNotes[i];
        int maxId = i;
        for (int j = i + 1; j < averageNotes.length; j++) {
            if (averageNotes[j] > max) {
                max = averageNotes[j];
                maxId = j;
            }
        }
        double temp = averageNotes[i];
        averageNotes[i] = max;
        averageNotes[maxId] = temp;
        System.out.println(averageNotes[i] + " (" + maxId + ")");
    }
}
Any help is highly appreciated. Thank you.
Use an extra array for your indices and sort both simultaneously. (In your version, maxId is the position the value was taken from in the already partially sorted array, not its original position, which is why only some of the printed indices look correct.)
public static void sortNumbers(double[] averageNotes) {
    // create an array for your indices
    int[] indices = new int[averageNotes.length];
    // fill indices
    for (int i = 0; i < averageNotes.length; i++) {
        indices[i] = i;
    }
    // sort both arrays simultaneously
    for (int i = 0; i < averageNotes.length; i++) {
        for (int j = i + 1; j < averageNotes.length; j++) {
            if (averageNotes[i] < averageNotes[j]) {
                double temp = averageNotes[i];
                averageNotes[i] = averageNotes[j];
                averageNotes[j] = temp;
                int t = indices[i];
                indices[i] = indices[j];
                indices[j] = t;
            }
        }
    }
    // print
    for (int i = 0; i < averageNotes.length; i++) {
        System.out.println(averageNotes[i] + " (" + indices[i] + ")");
    }
}
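For example, with a small hypothetical input (just to illustrate the output format):
double[] notes = {3.5, 1.0, 4.2};
sortNumbers(notes);
// prints:
// 4.2 (2)
// 3.5 (0)
// 1.0 (1)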
Firstly, I should mention that I only recently started programming (about a year ago). My main language is Java.
To elaborate on what I've done:
To learn about neural networks I watched 3Blue1Brown's series on the topic.
After (mostly) understanding it, I started on the actual implementation.
I implemented a file reader to turn the raw numbers of the database (I used MNIST, just like 3b1b) into three arrays: one for the labels, one for the images, and one input array for the grayscale values (I mapped the RGB values to between 0 and 1).
I then designed a test() and a train() method; this is what I did:
public class Network {
    int L;
    int[] Lsize;
    double[][][] weights;
    double[][] biases;

    public Network(int... Lsize) {
        L = Lsize.length;
        this.Lsize = Lsize;
        // Random weights and biases in [-1, 1)
        weights = new double[L - 1][][];
        for (int i = 0; i < L - 1; i++) {
            weights[i] = new double[Lsize[i + 1]][Lsize[i]];
            for (int j = 0; j < Lsize[i + 1]; j++) {
                for (int k = 0; k < Lsize[i]; k++) {
                    weights[i][j][k] = (Math.random() * 2) - 1;
                }
            }
        }
        biases = new double[L - 1][];
        for (int i = 0; i < L - 1; i++) {
            biases[i] = new double[Lsize[i + 1]];
            for (int j = 0; j < Lsize[i + 1]; j++) {
                biases[i][j] = (Math.random() * 2) - 1;
            }
        }
    }

    public static void main(String[] args) {
        Network n = new Network(28 * 28, 16, 16, 10);
        Database mnist_train = new Database(60000, 28, 28, "mnist_train");
        Database mnist_test = new Database(10000, 28, 28, "mnist_test");
        System.out.println("accuracy= " + n.accuracy(mnist_test));
        for (int i = 0; i < 50; i++) {
            n.train(mnist_train, 10, 0.1);
            System.out.println("accuracy= " + n.accuracy(mnist_test));
        }
    }

    public void train(Database data, int batchsize, double factor) {
        Batch[] batches = data.dividetoBatches(batchsize);
        for (int b = 0; b < data.n / batchsize; b++) {
            System.out.println("Step " + b + " started!");
            Batch batch = batches[b];
            double[][][] averagegweights = new double[L - 1][][];
            double[][] averagegbiases = new double[L - 1][];
            for (int i = 0; i < L - 1; i++) {
                averagegweights[i] = new double[Lsize[i + 1]][Lsize[i]];
                averagegbiases[i] = new double[Lsize[i + 1]];
            }
            double averagecost = 0;
            for (int e = 0; e < batchsize; e++) {
                double[] target = batch.target[e];
                double[] values = batch.values[e];
                // Forward pass
                double[][] z = new double[L - 1][];
                double[][] a = new double[L][];
                a[0] = values;
                for (int i = 0; i < L - 1; i++) {
                    a[i + 1] = new double[Lsize[i + 1]];
                    z[i] = new double[Lsize[i + 1]];
                    for (int j = 0; j < Lsize[i + 1]; j++) {
                        double sum = biases[i][j];
                        for (int k = 0; k < Lsize[i]; k++) {
                            sum += weights[i][j][k] * a[i + 1][j];
                        }
                        z[i][j] = sum;
                        a[i + 1][j] = sigmoid(sum);
                    }
                }
                double[][][] gweights = new double[L - 1][][];
                double[][] gbiases = new double[L - 1][];
                double[][] dCa = new double[L][];
                double cost = 0;
                dCa[L - 1] = new double[Lsize[L - 1]];
                for (int i = 0; i < Lsize[L - 1]; i++) {
                    dCa[L - 1][i] = 2 * (target[i] - a[L - 1][i]);
                    cost += (target[i] - a[L - 1][i]) * (target[i] - a[L - 1][i]);
                }
                // Backpropagation:
                for (int i = L - 2; i >= 0; i--) {
                    dCa[i] = new double[Lsize[i]];
                    gweights[i] = new double[Lsize[i + 1]][Lsize[i]];
                    gbiases[i] = new double[Lsize[i]];
                    for (int j = 0; j < Lsize[i + 1]; j++) {
                        gbiases[i][j] = dsigmoid(z[i][j]) * dCa[i + 1][j];
                        for (int k = 0; k < Lsize[i]; k++) {
                            gweights[i][j][k] = a[i][k] * dsigmoid(z[i][j]) * dCa[i + 1][j];
                        }
                    }
                    for (int k = 0; k < Lsize[i]; k++) {
                        dCa[i][k] = 0;
                        for (int j = 0; j < Lsize[i + 1]; j++) {
                            dCa[i][k] += weights[i][j][k] * dsigmoid(z[i][j]) * dCa[i + 1][j];
                        }
                    }
                }
                // Accumulate this example's gradients into the batch average
                for (int i = 0; i < L - 1; i++) {
                    for (int j = 0; j < Lsize[i + 1]; j++) {
                        averagegbiases[i][j] += gbiases[i][j];
                        for (int k = 0; k < Lsize[i]; k++) {
                            averagegweights[i][j][k] += gweights[i][j][k];
                        }
                    }
                }
                averagecost += cost;
            }
            // Average over the batch and flip the sign for descent
            for (int i = 0; i < L - 1; i++) {
                for (int j = 0; j < Lsize[i + 1]; j++) {
                    averagegbiases[i][j] = averagegbiases[i][j] / batchsize * -1;
                    for (int k = 0; k < Lsize[i]; k++) {
                        averagegweights[i][j][k] = averagegweights[i][j][k] / batchsize * -1;
                    }
                }
            }
            // Apply the update, scaled by the learning rate (factor)
            for (int i = 0; i < L - 1; i++) {
                for (int j = 0; j < Lsize[i + 1]; j++) {
                    biases[i][j] += averagegbiases[i][j] * factor;
                    for (int k = 0; k < Lsize[i]; k++) {
                        weights[i][j][k] += averagegweights[i][j][k] * factor;
                    }
                }
            }
            averagecost = averagecost / batchsize;
            System.out.println("averagecost = " + averagecost);
            // System.out.println(Arrays.deepToString(batch.target));
            System.out.println("This should be a " + data.labels[0] + "!");
            data.shuffle();
            double[] output = test(data.values[0]);
            for (int i = 0; i < output.length; i++) {
                float val = (float) output[i];
                System.out.print(i + ": ");
                System.out.printf("%.2f", val);
                System.out.println();
            }
            System.out.println("Step " + b + " finished!");
        }
    }

    public double accuracy(Database data) {
        int rightanswers = 0;
        int answers = 0;
        for (int i = 0; i < data.n; i++) {
            if (maxIndex(test(data.values[i])) == data.labels[i]) {
                rightanswers++;
            }
            answers++;
        }
        return (double) rightanswers / (double) answers;
    }

    public int maxIndex(double[] output) {
        int index = 0;
        for (int i = 0; i < output.length; i++) {
            if (output[i] > output[index])
                index = i;
        }
        return index;
    }

    public double[] test(double[] input) {
        double[][] values = new double[L][];
        values[0] = input;
        for (int i = 0; i < L - 1; i++) {
            values[i + 1] = new double[Lsize[i + 1]];
            for (int j = 0; j < Lsize[i + 1]; j++) {
                double sum = biases[i][j];
                for (int k = 0; k < Lsize[i]; k++) {
                    sum += weights[i][j][k] * values[i][k];
                }
                values[i + 1][j] = sigmoid(sum);
            }
        }
        double[] output = values[L - 1];
        return output;
    }

    public double sigmoid(double x) {
        return 1 / (1 + Math.pow(Math.E, x));
    }

    // derivative of the sigmoid-function
    public double dsigmoid(double x) {
        return sigmoid(x) * (1 - sigmoid(x));
    }
}
My problem now is that when I run the training, the cost function decreases, but only because all of the output values are approaching 0, not because the network has actually matched the right number to the picture.
After about one run through the database, the average cost stagnates around 0.9.
Am I missing something fundamental, or am I just not noticing a simple error?
Thank you in advance.
I'm sorry for my bad English; I'm actually German.
The first thing you can do is increase the complexity of the network. I just tried your architecture with my implementation, and the network you currently have definitely isn't big enough to fit handwritten digits. I would recommend 784, 128, 128, 10, as that had around 90% accuracy after 3 runs over the training set. If that doesn't yield better results, make sure your implementation of a neural network works on something less complex, like the XOR problem. If XOR is learnable by your implementation, the issue could be the way you are altering the dataset. Debugging a neural network is tough, but you have to make sure the foundation is built correctly before looking at higher-level problems like the learning rate and optimizers. You're using vanilla gradient descent, so you should definitely implement momentum: it is easier to implement than what you've already done and is a significant improvement over your current method.
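A minimal sketch of what momentum could look like on top of the final update loop in train(). The velocity arrays vWeights and vBiases (shaped like weights and biases, initialized to zero in the constructor) and the coefficient beta are assumed additions, not part of the original code:
// Momentum: blend the previous update into the new one before applying it.
double beta = 0.9; // assumed momentum coefficient
for (int i = 0; i < L - 1; i++) {
    for (int j = 0; j < Lsize[i + 1]; j++) {
        vBiases[i][j] = beta * vBiases[i][j] + averagegbiases[i][j] * factor;
        biases[i][j] += vBiases[i][j];
        for (int k = 0; k < Lsize[i]; k++) {
            vWeights[i][j][k] = beta * vWeights[i][j][k] + averagegweights[i][j][k] * factor;
            weights[i][j][k] += vWeights[i][j][k];
        }
    }
}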
I am working on a solution to a coding problem, and I tweaked some existing code so it can figure out how many semi-primes exist up to and including a certain number.
However, I am stuck at the part where I want to count the number of unique semi-primes between two numbers, e.g. 4 and 10, which would be 4, 6, 9, and 10, i.e. 4 of them. My code simply says 10 has 4 semi-primes up to it and 4 has 1 semi-prime up to it, so the number of semi-primes between them is 4 - 1 = 3. This is where I am going wrong.
Code is here:
public class SemiPrimeRange {
    public static int[] solution(int N, int[] P, int[] Q) {
        int arrSize = P.length;
        int[] arr = new int[arrSize];
        for (int i = 0; i < arr.length; i++) {
            int n = NoSemiPrimes(Q[i]);
            int m = NoSemiPrimes(P[i]);
            arr[i] = n - m;
        }
        for (int i : arr) {
            System.out.println(i);
        }
        return arr;
    }

    public static int NoSemiPrimes(int large) {
        int n = 0;
        boolean[] primeTop = new boolean[large + 1];
        boolean[] semiprimeTop = new boolean[large + 1];
        // Sieve of Eratosthenes to find the primes up to large
        for (int i = 2; i <= large; i++) {
            primeTop[i] = true;
        }
        for (int i = 2; i * i <= large; i++) {
            if (primeTop[i]) {
                for (int j = i; i * j <= large; j++) {
                    primeTop[i * j] = false;
                }
            }
        }
        int primes = 0;
        for (int i = 2; i <= large; i++) {
            if (primeTop[i])
                primes++;
        }
        // Mark every product of two primes as a semi-prime
        for (int i = 0; i < large; i++) {
            semiprimeTop[i] = false;
        }
        for (int i = 0; i <= large; i++) {
            for (int j = i; j <= large; j++) {
                if (primeTop[j] && primeTop[i]) {
                    if (i * j <= large) {
                        semiprimeTop[j * i] = true;
                    }
                }
            }
        }
        for (int i = 0; i < semiprimeTop.length; i++) {
            System.out.println(semiprimeTop[i]);
        }
        int semiprimes = 0;
        for (int i = 2; i <= large; i++) {
            if (semiprimeTop[i])
                semiprimes++;
        }
        System.out.println("The number of semiprimes <= " + large + " is " + semiprimes);
        return semiprimes;
    }

    public static void main(String[] args) {
        int[] P = { 1, 4, 16 };
        int[] Q = { 26, 10, 20 };
        int N = 26;
        solution(N, P, Q);
    }
}
If you want the number of semi-primes between x and y (y > x), count(y) - count(x), where count(a) is the number of semi-primes between 1 and a, is not the correct formula, because it omits x itself when x is a semi-prime. The correct formula is count(y) - count(x - 1).
Also note that your code is inefficient, because it counts everything between 1 and the lesser number twice.
The method signature should be
public static int NoSemiPrimes(int small, int large)
and change the loop
int semiprimes = 0;
for (int i = 2; i <= large; i++) {
    if (semiprimeTop[i])
        semiprimes++;
}
to
int semiprimes = 0;
for (int i = small; i <= large; i++) {
    if (semiprimeTop[i])
        semiprimes++;
}
to count the number of semi-primes in the desired range directly, instead of calling NoSemiPrimes(int large) twice.
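Alternatively, if you keep the one-argument version, applying the count(y) - count(x - 1) formula in solution() is a one-line change (a sketch, assuming P[i] >= 1):
for (int i = 0; i < arr.length; i++) {
    // count up to Q[i], minus count strictly below P[i], so P[i] itself is included
    arr[i] = NoSemiPrimes(Q[i]) - NoSemiPrimes(P[i] - 1);
}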
I have written code in Lucene which first indexes XML documents and then finds the number of unique terms in the index.
Say there are n unique terms.
I want to generate a matrix of dimensions n x n, where
m[i][j] = (co-occurrence value of terms (i, j)) / (occurrence value of term i)
The co-occurrence of terms (i, j) is the number of documents in which both the i-th and the j-th term occur, and the occurrence of term j is the number of documents in which term j occurs.
My code works fine, but it is not efficient: for a large number of files, where the number of terms exceeds 2000, it takes more than 10 minutes.
Here is my code for finding the co-occurrence:
int cooccurrence(IndexReader reader, String term_one, String term_two) throws IOException {
    int common_doc_no = 0, finaldocno_one = 0, finaldocno_two = 0;
    int termdocid_one[] = new int[6000];
    int termdocid_two[] = new int[6000];
    int first_docids[] = new int[6000];
    int second_docids[] = new int[6000];
    int k = 0;
    // Collect the distinct doc ids containing term_one, across all fields
    for (java.util.Iterator<String> it = reader.getFieldNames(
            FieldOption.ALL).iterator(); it.hasNext();) {
        String fieldname = (String) it.next();
        TermDocs t = reader.termDocs(new Term(fieldname, term_one));
        while (t.next()) {
            int x = t.doc();
            if (termdocid_one[x] != 1) {
                finaldocno_one++;
                first_docids[k] = x;
                k++;
            }
            termdocid_one[x] = 1;
        }
    }
    /*
     * System.out.println("value of finaldoc_one - " + finaldocno_one);
     * for (int i = 0; i < finaldocno_one; i++) {
     *     System.out.println("" + first_docids[i]);
     * }
     */
    k = 0;
    // Collect the distinct doc ids containing term_two, across all fields
    for (java.util.Iterator<String> it = reader.getFieldNames(
            FieldOption.ALL).iterator(); it.hasNext();) {
        String fieldname = (String) it.next();
        TermDocs t = reader.termDocs(new Term(fieldname, term_two));
        while (t.next()) {
            int x = t.doc();
            if (termdocid_two[x] != 1) {
                finaldocno_two++;
                second_docids[k] = x;
                k++;
            }
            termdocid_two[x] = 1;
        }
    }
    /*
     * System.out.println("value of finaldoc_two - " + finaldocno_two);
     * for (int i = 0; i < finaldocno_two; i++) {
     *     System.out.println("" + second_docids[i]);
     * }
     */
    // Walk one doc id list and test membership in the other term's array
    int max;
    int search = 0;
    if (finaldocno_one > finaldocno_two) {
        max = finaldocno_one;
        search = 1;
    } else {
        max = finaldocno_two;
        search = 2;
    }
    if (search == 1) {
        for (int i = 0; i < max; i++) {
            if (termdocid_two[first_docids[i]] == 1)
                common_doc_no++;
        }
    } else if (search == 2) {
        for (int i = 0; i < max; i++) {
            if (termdocid_one[second_docids[i]] == 1)
                common_doc_no++;
        }
    }
    return common_doc_no;
}
Code for calculating the knowledge matrix:
void knowledge_matrix(double matrix[][], IndexReader reader, double avg_matrix[][]) throws IOException {
    ArrayList<String> unique_terms_array = new ArrayList<>();
    int totallength = unique_term_count(reader, unique_terms_array);
    int co_occur_matrix[][] = new int[totallength + 3][totallength + 3];
    double rowsum = 0;
    for (int i = 1; i <= totallength; i++) {
        rowsum = 0;
        for (int j = 1; j <= totallength; j++) {
            int co_occurence;
            int occurence = docno_single_term(reader,
                    unique_terms_array.get(j - 1));
            // Co-occurrence is symmetric, so reuse the value already computed
            if (i > j) {
                co_occurence = co_occur_matrix[i][j];
            } else {
                co_occurence = cooccurrence(reader,
                        unique_terms_array.get(i - 1),
                        unique_terms_array.get(j - 1));
                co_occur_matrix[i][j] = co_occurence;
                co_occur_matrix[j][i] = co_occurence;
            }
            matrix[i][j] = (float) co_occurence / (float) occurence;
            rowsum += matrix[i][j];
            if (i > 1) {
                avg_matrix[i - 1][j] = matrix[i - 1][j] - matrix[i - 1][0];
            }
        }
        matrix[i][0] = rowsum / totallength;
    }
    for (int j = 1; j <= totallength; j++) {
        avg_matrix[totallength][j] = matrix[totallength][j]
                - matrix[totallength][0];
    }
}
Can anyone suggest a more efficient way to implement this?
I think you can merge the searches for term_one and term_two into one loop, and use two HashSets to save the doc ids you have found. Then use termOneSet.retainAll(termTwoSet) to get the docs which contain both term_one and term_two.
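A minimal sketch of that idea, reusing the same Lucene calls (getFieldNames, termDocs) as the question's code; it additionally needs java.util.HashSet and java.util.Set:
int cooccurrence(IndexReader reader, String term_one, String term_two) throws IOException {
    Set<Integer> docsOne = new HashSet<>();
    Set<Integer> docsTwo = new HashSet<>();
    // One pass over the fields, collecting doc ids for both terms
    for (java.util.Iterator<String> it = reader.getFieldNames(
            FieldOption.ALL).iterator(); it.hasNext();) {
        String fieldname = it.next();
        TermDocs t1 = reader.termDocs(new Term(fieldname, term_one));
        while (t1.next())
            docsOne.add(t1.doc());
        TermDocs t2 = reader.termDocs(new Term(fieldname, term_two));
        while (t2.next())
            docsTwo.add(t2.doc());
    }
    // Intersection: documents that contain both terms
    docsOne.retainAll(docsTwo);
    return docsOne.size();
}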