I need to make a high score list in a txt file. In the first game, the txt file should be empty, as it is the first game. After each game, the score list must be updated with the player's name and the player's score. The list should of course be ordered from high to low according to the player's score. After 10 games, the lowest entries should be removed so that only 10 remain in the list.
I am trying to do this, but every time my txt file stays empty. How can I fix this issue?
My HighScore class:
import java.io.FileWriter;
import java.io.IOException;
import java.util.Scanner;
import java.util.Formatter;
import java.nio.file.Paths;
public class HighScore {
public class HighScoreEntry {
private String name;
private int score;
public HighScoreEntry(String name, int score) {
this.name = name;
this.score = score;
}
public String getName() {
return name;
}
public int getScore() {
return score;
}
}
public void writeHighScores(HighScoreEntry[] highScores) {
Formatter f = null;
FileWriter fw = null;
try {
fw = new FileWriter("highscores.txt",true);
f = new Formatter(fw);
for (int i = 0; i < highScores.length; i++) {
f.format("%s:%d%n", highScores[i].getName(), highScores[i].getScore());
}
} catch (IOException e) {
System.out.println("An error occurred while writing the high scores file.");
} finally {
if (f != null) {
f.close();
}
}
}
public HighScoreEntry[] readHighScores() {
HighScoreEntry[] highScores = new HighScoreEntry[10];
// Initialize the high scores array with default values
for (int i = 0; i < highScores.length; i++) {
highScores[i] = new HighScoreEntry("", 0);
}
Scanner reader = null;
try {
reader = new Scanner(Paths.get("highscores.txt"));
int i = 0;
while (reader.hasNextLine() && i < 10) {
String line = reader.nextLine();
String[] parts = line.split(":");
String name = parts[0];
int score = Integer.parseInt(parts[1]);
highScores[i] = new HighScoreEntry(name, score);
i++;
}
} catch (IOException e) {
System.out.println("An error occurred while reading the high scores file.");
} finally {
if (reader != null) {
reader.close();
}
}
return highScores;
}
public void updateHighScores(String name, int score) {
System.out.println("Updating high scores with name " + name + " and score " + score);
// Write the player's score and name to the high scores file
writeHighScores(new HighScoreEntry[] {new HighScoreEntry(name, score)});
// Read the high scores from the file
HighScoreEntry[] highScores = readHighScores();
// Sort the high scores
sortHighScores(highScores);
}
private void sortHighScores(HighScoreEntry[] highScores) {
for (int i = 0; i < highScores.length - 1; i++) {
for (int j = i + 1; j < highScores.length; j++) {
if (highScores[i].getScore() < highScores[j].getScore()) {
HighScoreEntry temp = highScores[i];
highScores[i] = highScores[j];
highScores[j] = temp;
}
}
}
}
}
My calling method in the Game class:
HighScore highScore = new HighScore();
highScore.updateHighScores(user, playerPoints);
I just have to use them. I can't use anything other than these.
You're producing a formatted string, but you're not capturing the result of your format call, nor are you writing the resulting string to your FileWriter.
It should look something like this:
String result = String.format("%s:%d%n", highScores[i].getName(), highScores[i].getScore());
fw.write(result);
Give or take a newline.
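Beyond the write itself, note that updateHighScores appends the new entry first, then reads the file and sorts the array in memory, and discards the sorted result, so the file never ends up sorted or truncated to 10 entries. Here is a minimal sketch of one way to restructure it, reusing the question's own methods; it assumes writeHighScores is changed to open its FileWriter with append set to false, so the file is rewritten on every update:
public void updateHighScores(String name, int score) {
    // Read the current list (readHighScores pads missing slots with ("", 0) entries)
    HighScoreEntry[] highScores = readHighScores();
    // Sort descending so the weakest entry sits at the end
    sortHighScores(highScores);
    // If the new score beats the weakest entry, replace it and re-sort
    if (score > highScores[highScores.length - 1].getScore()) {
        highScores[highScores.length - 1] = new HighScoreEntry(name, score);
        sortHighScores(highScores);
    }
    // Rewrite the whole file: at most 10 entries, ordered high to low
    writeHighScores(highScores);
}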
So, I'm having trouble generating random numbers with a uniform distribution in Java, given the maximum and the minimum value of some attributes in a data set (Iris from the UCI machine learning repository). What I have is the iris dataset in a 2-d array called samples. I put random values, generated according to the maximum and the minimum value of each attribute in the iris data set (without the class attribute), into a 2-d array called gworms (which has some extra fields for other values of the algorithm).
So far the full algorithm is not working properly, and my suspicion is that the gworms (the points in 4-d space) are not being generated correctly, or not with good randomness. I think the points are too close to each other (I suspect this because of some results obtained later, whose code is not shown here). So I'm asking for your help to validate this code, in which I implement the "uniform distribution" for the gworms (for the first 4 positions):
/*
* To change this license header, choose License Headers in Project
Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package glowworms;
import java.lang.Math;
import java.util.ArrayList;
import java.util.Random;
import weka.core.AttributeStats;
import weka.core.Instances;
/**
*
* @author oscareduardo937
*/
public class GSO {
/* ************ Initializing parameters of CGSO algorithm ******************** */
int swarmSize = 1000; // Swarm size m
int maxIte = 200;
double stepSize = 0.03; // Step size for the movements
double luciferin = 5.0; // Initial luciferin level
double rho = 0.4; // Luciferin decay parameter
double gamma = 0.6; // Luciferin reinforcement parameter
double rs = 0.38; // Initial radial sensor range. This parameter depends on the data set and needs to be found by running experiments
double gworms[][] = null; // Glowworms of the swarm.
/* ************ Initializing parameters of clustering problem and data set ******************** */
int numAtt; // Dimension of the position vector
int numClasses; // Number of classes
int total_data; //Number of instances
int threshold = 5;
int runtime = 1;
/*Algorithm can be run many times in order to see its robustness*/
double minValuesAtts[] = new double[this.numAtt]; // Minimum values for all attributes
double maxValuesAtts[] = new double[this.numAtt]; // Maximum values for all attributes
double samples[][] = new double[this.total_data][this.numAtt]; //Samples of the selected dataset.
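// Note: numAtt and total_data are still 0 when the field initializers above run;
// these arrays are re-created with their real sizes in instancesToSamples().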
ArrayList<Integer> candidateList;
double r;
/*a random number in the range [0,1)*/
/* *********** Method to put the instances in a matrix and get max and min values for attributes ******************* */
public void instancesToSamples(Instances data) {
this.numAtt = data.numAttributes();
System.out.println("********* NumAttributes: " + this.numAtt);
AttributeStats attStats = new AttributeStats();
if (data.classIndex() == -1) {
//System.out.println("reset index...");
data.setClassIndex(data.numAttributes() - 1);
}
this.numClasses = data.numClasses();
this.minValuesAtts = new double[this.numAtt];
this.maxValuesAtts = new double[this.numAtt];
System.out.println("********* NumClasses: " + this.numClasses);
this.total_data = data.numInstances();
samples = new double[this.total_data][this.numAtt];
double[] values = new double[this.total_data];
for (int j = 0; j < this.numAtt; j++) {
values = data.attributeToDoubleArray(j);
for (int i = 0; i < this.total_data; i++) {
samples[i][j] = values[i];
}
}
for(int j=0; j<this.numAtt-1; j++){
attStats = data.attributeStats(j);
this.maxValuesAtts[j] = attStats.numericStats.max;
this.minValuesAtts[j] = attStats.numericStats.min;
//System.out.println("** Min Value Attribute " + j + ": " + this.minValuesAtts[j]);
//System.out.println("** Max Value Attribute " + j + ": " + this.maxValuesAtts[j]);
}
//Checking
/*for(int i=0; i<this.total_data; i++){
for(int j=0; j<this.numAtt; j++){
System.out.print(samples[i][j] + "** ");
}
System.out.println();
}*/
} // End of method InstancesToSamples
public void initializeSwarm(Instances data) {
this.gworms = new double[this.swarmSize][this.numAtt + 2]; // D-dimensional vector plus luciferin, fitness and intradistance.
double intraDistance = 0;
Random r = new Random(); //Random r;
for (int i = 0; i < this.swarmSize; i++) {
for (int j = 0; j < this.numAtt - 1; j++) {
//Uniform randomization of d-dimensional position vector
this.gworms[i][j] = this.minValuesAtts[j] + (this.maxValuesAtts[j] - this.minValuesAtts[j]) * r.nextDouble();
}
this.gworms[i][this.numAtt - 1] = this.luciferin; // Initial luciferin level for all swarm
this.gworms[i][this.numAtt] = 0; // Initial fitness for all swarm
this.gworms[i][this.numAtt + 1] = intraDistance; // Intra-distance for gworm i
}
//Checking gworms
/*for(int i=0; i<this.swarmSize; i++){
for(int j=0; j<this.numAtt+2; j++){
System.out.print(gworms[i][j] + "** ");
}
System.out.println();
}*/
} // End of method initializeSwarm
}
The main class is this one:
package uniformrandomization;
/**
*
* @author oscareduardo937
*/
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileNotFoundException;
import weka.core.Instances;
import glowworms.GSO;
public class UniformRandomization {
public UniformRandomization(){
super();
}
//Loading the data from the filename file to the program. It can be .arff or .csv
public static BufferedReader readDataFile(String filename) {
BufferedReader inputReader = null;
try {
inputReader = new BufferedReader(new FileReader(filename));
} catch (FileNotFoundException ex) {
System.err.println("File not found: " + filename);
}
return inputReader;
}
/**
* @param args the command line arguments
*/
public static void main(String[] args) throws Exception {
// TODO code application logic here
BufferedReader datafile1 = readDataFile("src/data/iris.arff");
Instances data = new Instances(datafile1);
GSO gso = new GSO();
gso.instancesToSamples(data);
gso.initializeSwarm(data);
System.out.println("Fin...");
}
}
So I want to know whether, with this code, the numbers at position ij of the gworms are generated within the range between the min and max values of attribute j.
Thanks so much in advance.
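For what it's worth, the scaling formula in initializeSwarm, minValuesAtts[j] + (maxValuesAtts[j] - minValuesAtts[j]) * r.nextDouble(), does yield values uniformly distributed in [min, max), because nextDouble() is uniform in [0, 1). Below is a minimal standalone sketch to sanity-check it; the min/max bounds are assumed example values (roughly the iris sepal-length range), not read from the actual file:
import java.util.Random;

public class UniformCheck {
    public static void main(String[] args) {
        Random r = new Random();
        double min = 4.3, max = 7.9; // assumed example bounds
        int n = 1_000_000;
        double sum = 0, lo = Double.MAX_VALUE, hi = -Double.MAX_VALUE;
        for (int i = 0; i < n; i++) {
            double v = min + (max - min) * r.nextDouble();
            sum += v;
            if (v < lo) lo = v;
            if (v > hi) hi = v;
        }
        // For a uniform distribution, the sample extremes should hug the bounds
        // and the sample mean should approach (min + max) / 2.
        System.out.printf("observed [%f, %f], mean %.4f, expected mean %.4f%n",
                lo, hi, sum / n, (min + max) / 2);
    }
}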
I have this .java file. It is part of an ImageJ plugin.
The whole data structure is here:
package mosaic.plugins;
import ij.IJ;
import ij.ImagePlus;
import ij.macro.Interpreter;
import ij.measure.ResultsTable;
import ij.process.ByteProcessor;
import java.awt.Color;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.Map;
import java.util.Map.Entry;
import java.util.TreeMap;
import javax.swing.BorderFactory;
import javax.swing.JButton;
import javax.swing.JDialog;
import javax.swing.JPanel;
import javax.swing.JTextPane;
import javax.swing.WindowConstants;
import mosaic.plugins.utils.PlugIn8bitBase;
import net.imglib2.Cursor;
import net.imglib2.IterableInterval;
import net.imglib2.RandomAccess;
import net.imglib2.img.ImagePlusAdapter;
import net.imglib2.img.Img;
import net.imglib2.img.ImgFactory;
import net.imglib2.img.array.ArrayImgFactory;
import net.imglib2.img.display.imagej.ImageJFunctions;
import net.imglib2.type.NativeType;
import net.imglib2.type.numeric.NumericType;
import net.imglib2.type.numeric.RealType;
import net.imglib2.type.numeric.integer.UnsignedByteType;
import net.imglib2.type.numeric.real.FloatType;
import net.imglib2.view.IntervalView;
import net.imglib2.view.Views;
public class Naturalization extends PlugIn8bitBase
{
// Precision in finding your best T
private static final float EPS = 0.0001f;
// Prior parameter for first order
// In this case is for all channels
// Fixed parameter
private static final float T1_pr = 0.3754f;
// Number of bins for the Laplacian Histogram
// In general is 4 * N_Grad
// max of laplacian value is 4 * 255
private static final int N_Lap = 2041;
// Offset shift in the histogram bins
// Has to be N_Lap / 2;
private static final int Lap_Offset = 1020;
// Number of bins for the Gradient
private static final int N_Grad = 512;
// Offset for the gradient histogram shift
private static final int Grad_Offset = 256;
// Prior parameter for second order (Parameters learned from trained data set)
// For different color R G B
// For one channel image use an average of them
private final float T2_pr[] = {0.2421f ,0.2550f, 0.2474f, 0.24816666f};
// Keeps values of PSNR for all images and channels in case of RGB. Maps: imageNumber -> map (channel, PSNR value)
private final Map<Integer, Map<Integer, Float>> iPsnrOutput = new TreeMap<Integer, Map<Integer, Float>>();
private synchronized void addPsnr(int aSlice, int aChannel, float aValue) {
Map<Integer, Float> map = iPsnrOutput.get(aSlice);
boolean isNewMap = false;
if (map == null) {
map = new TreeMap<Integer, Float>();
isNewMap = true;
}
map.put(aChannel, aValue);
if (isNewMap) {
iPsnrOutput.put(aSlice, map);
}
}
@Override
protected void processImg(ByteProcessor aOutputImg, ByteProcessor aOrigImg, int aChannelNumber) {
// perform naturalization
final ImagePlus naturalizedImg = naturalize8bitImage(aOrigImg, aChannelNumber);
// set processed pixels to output image
aOutputImg.setPixels(naturalizedImg.getProcessor().getPixels());
}
@Override
protected void postprocessBeforeShow() {
// Create result table with all stored PSNRs.
final ResultsTable rs = new ResultsTable();
for (final Entry<Integer, Map<Integer, Float>> e : iPsnrOutput.entrySet()) {
rs.incrementCounter();
for (final Entry<Integer, Float> m : e.getValue().entrySet()) {
switch(m.getKey()) {
case CHANNEL_R: rs.addValue("Naturalization R", m.getValue()); rs.addValue("Estimated R PSNR", calculate_PSNR(m.getValue())); break;
case CHANNEL_G: rs.addValue("Naturalization G", m.getValue()); rs.addValue("Estimated G PSNR", calculate_PSNR(m.getValue())); break;
case CHANNEL_B: rs.addValue("Naturalization B", m.getValue()); rs.addValue("Estimated B PSNR", calculate_PSNR(m.getValue())); break;
case CHANNEL_8G: rs.addValue("Naturalization", m.getValue()); rs.addValue("Estimated PSNR", calculate_PSNR(m.getValue())); break;
default: break;
}
}
}
if (!Interpreter.isBatchMode()) {
rs.show("Naturalization and PSNR");
showMessage();
}
}
private ImagePlus naturalize8bitImage(ByteProcessor imp, int aChannelNumber) {
Img<UnsignedByteType> TChannel = ImagePlusAdapter.wrap(new ImagePlus("", imp));
final float T2_prior = T2_pr[(aChannelNumber <= CHANNEL_B) ? 2-aChannelNumber : CHANNEL_8G];
final float[] result = {0.0f}; // ugly, but one way to get a result back via parameters
// Perform naturalization and store PSNR result. Finally return image in ImageJ format.
TChannel = performNaturalization(TChannel, T2_prior, result);
addPsnr(imp.getSliceNumber(), aChannelNumber, result[0]);
return ImageJFunctions.wrap(TChannel,"temporaryName");
}
/**
* Naturalize the image
* @param image_orig original image
* @param Theta balance parameter
* @param cls_t class of the image type
* @param T2_prior prior to use
* @param result one-element array to store the naturalization factor
*/
private <T extends NumericType<T> & NativeType<T> & RealType<T>, S extends RealType<S>> Img<T> doNaturalization(Img<T> image_orig, S Theta,Class<T> cls_t, float T2_prior, float[] result) throws InstantiationException, IllegalAccessException
{
if (image_orig == null) {return null;}
// Check that the image data set is 8 bit
// Otherwise return an error or hint to scale down
final T image_check = cls_t.newInstance();
final Object obj = image_check;
if (!(obj instanceof UnsignedByteType)) {
IJ.error("Error: this only works with 8-bit images");
return null;
}
final float Nf = findNaturalizationFactor(image_orig, Theta, T2_prior);
result[0] = Nf;
final Img<T> image_result = naturalizeImage(image_orig, Nf, cls_t);
return image_result;
}
private <S extends RealType<S>, T extends NumericType<T> & NativeType<T> & RealType<T>>
Img<T> naturalizeImage(Img<T> image_orig, float Nf, Class<T> cls_t)
throws InstantiationException, IllegalAccessException
{
// Mean of the original image
// S mean_original = cls_s.newInstance();
// Mean<T,S> m = new Mean<T,S>();
// m.compute(image_orig.cursor(), mean_original);
// TODO: quick fix for deprecated code above. Is new 'mean' utility introduced in imglib2?
float mean_original = 0.0f;
final Cursor<T> c2 = image_orig.cursor();
float count = 0.0f;
while (c2.hasNext()) {
c2.next();
mean_original += c2.get().getRealFloat();
count += 1.0f;
}
mean_original /= count;
// Create result image
final long[] origImgDimensions = new long[2];
image_orig.dimensions(origImgDimensions);
final Img<T> image_result = image_orig.factory().create(origImgDimensions, cls_t.newInstance());
// for each pixel naturalize
final Cursor<T> cur_orig = image_orig.cursor();
final Cursor<T> cur_ir = image_result.cursor();
while (cur_orig.hasNext()) {
cur_orig.next();
cur_ir.next();
final float tmp = cur_orig.get().getRealFloat();
// Naturalize
float Nat = (int) ((tmp - mean_original)*Nf + mean_original + 0.5);
if (Nat < 0)
{Nat = 0;}
else if (Nat > 255)
{Nat = 255;}
cur_ir.get().setReal(Nat);
}
return image_result;
}
private <S extends RealType<S>, T extends NumericType<T> & NativeType<T> & RealType<T>> float findNaturalizationFactor(Img<T> image_orig, S Theta, float T2prior) {
final ImgFactory<FloatType> imgFactoryF = new ArrayImgFactory<FloatType>();
// Create one dimensional image (Histogram)
final Img<FloatType> LapCDF = imgFactoryF.create(new long[] {N_Lap}, new FloatType());
// Two dimensional image for Gradient
final Img<FloatType> GradCDF = imgFactoryF.create(new long[] {N_Grad, 2}, new FloatType());
// GradientCDF = integral of the histogram of the gradient field
// LaplacianCDF = integral of the histogram of the Laplacian field
final Img<FloatType> GradD = create2DGradientField();
calculateLaplaceFieldAndGradient(image_orig, LapCDF, GradD);
convertGrad2dToCDF(GradD);
calculateGradCDF(GradCDF, GradD);
calculateLapCDF(LapCDF);
// For each channel find the best T1
// EPS=precision
// for X component
float T_tmp = (float)FindT(Views.iterable(Views.hyperSlice(GradCDF, GradCDF.numDimensions()-1 , 0)), N_Grad, Grad_Offset, EPS);
// for Y component
T_tmp += FindT(Views.iterable(Views.hyperSlice(GradCDF, GradCDF.numDimensions()-1 , 1)), N_Grad, Grad_Offset, EPS);
// Average them and divide by the prior parameter
final float T1 = T_tmp/(2*T1_pr);
// Find the best parameter and divide by the T2 prior
final float T2 = (float)FindT(LapCDF, N_Lap, Lap_Offset, EPS)/T2prior;
// Calculate naturalization factor!
final float Nf = (float) ((1.0-Theta.getRealDouble())*T1 + Theta.getRealDouble()*T2);
return Nf;
}
/**
* Calculate the peak SNR from the Naturalization factor
*
* @param x naturalization factor
* @return the PSNR
*/
String calculate_PSNR(double x)
{
if (x >= 0 && x <= 0.934)
{
return String.format("%.2f", new Float(23.65 * Math.exp(0.6 * x) - 20.0 * Math.exp(-7.508 * x)));
}
else if (x > 0.934 && x < 1.07)
{
return new String("> 40");
}
else if (x >= 1.07 && x < 1.9)
{
return String.format("%.2f", new Float(-11.566 * x + 52.776));
}
else
{
return String.format("%.2f",new Float(13.06*x*x*x*x - 121.4 * x*x*x + 408.5 * x*x -595.5*x + 349));
}
}
private Img<UnsignedByteType> performNaturalization(Img<UnsignedByteType> channel, float T2_prior, float[] result) {
// Parameters balance between first order and second order
final FloatType Theta = new FloatType(0.5f);
try {
channel = doNaturalization(channel, Theta, UnsignedByteType.class, T2_prior, result);
} catch (final InstantiationException e) {
e.printStackTrace();
} catch (final IllegalAccessException e) {
e.printStackTrace();
}
return channel;
}
// Original data
// N = number of bins
// offset of the histogram
// T current
private double FindT_Evalue(float[] p_d, int N, int offset, float T)
{
double error = 0;
for (int i=-offset; i<N-offset; ++i) {
final double tmp = Math.atan(T*(i)) - p_d[i+offset];
error += (tmp*tmp);
}
return error;
}
// Find the T
// data CDF Histogram
// N number of bins
// Offset of the histogram
// eps precision
private double FindT(IterableInterval<FloatType> data, int N, int OffSet, float eps)
{
//find the best parameter between data and model atan(Tx)/pi+0.5
// Search between 0 and 1.0
float left = 0;
float right = 1.0f;
float m1 = 0.0f;
float m2 = 0.0f;
// Create p_t to save computation (shift and rescale the original CDF)
final float p_t[] = new float[N];
// Copy the data
final Cursor<FloatType> cur_data = data.cursor();
for (int i = 0; i < N; ++i)
{
cur_data.next();
p_t[i] = (float) ((cur_data.get().getRealFloat() - 0.5)*Math.PI);
}
// While the precision is bigger than eps
while (right-left>=eps)
{
// move left and right of 1/3 (m1 and m2)
m1=left+(right-left)/3;
m2=right-(right-left)/3;
// Evaluate on m1 and m2, and move the extreme point
if (FindT_Evalue(p_t, N, OffSet, m1) <=FindT_Evalue(p_t, N, OffSet, m2)) {
right=m2;
}
else {
left=m1;
}
}
// return the average
return (m1+m2)/2;
}
private Img<FloatType> create2DGradientField() {
final long dims[] = new long[2];
dims[0] = N_Grad;
dims[1] = N_Grad;
final Img<FloatType> GradD = new ArrayImgFactory<FloatType>().create(dims, new FloatType());
return GradD;
}
private void calculateLapCDF(Img<FloatType> LapCDF) {
final RandomAccess<FloatType> Lap_hist2 = LapCDF.randomAccess();
//convert Lap to CDF
for (int i = 1; i < N_Lap; ++i)
{
Lap_hist2.setPosition(i-1,0);
final float prec = Lap_hist2.get().getRealFloat();
Lap_hist2.move(1,0);
Lap_hist2.get().set(Lap_hist2.get().getRealFloat() + prec);
}
}
private void calculateGradCDF(Img<FloatType> GradCDF, Img<FloatType> GradD) {
final RandomAccess<FloatType> Grad_dist = GradD.randomAccess();
// Gradient on x pointer
final IntervalView<FloatType> Gradx = Views.hyperSlice(GradCDF, GradCDF.numDimensions()-1 , 0);
// Gradient on y pointer
final IntervalView<FloatType> Grady = Views.hyperSlice(GradCDF, GradCDF.numDimensions()-1 , 1);
integrateOverRowAndCol(Grad_dist, Gradx, Grady);
scaleGradiens(Gradx, Grady);
}
private void scaleGradiens(IntervalView<FloatType> Gradx, IntervalView<FloatType> Grady) {
final RandomAccess<FloatType> Gradx_r2 = Gradx.randomAccess();
final RandomAccess<FloatType> Grady_r2 = Grady.randomAccess();
//scale, divide the number of integrated bins
for (int i = 0; i < N_Grad; ++i)
{
Gradx_r2.setPosition(i,0);
Grady_r2.setPosition(i,0);
Gradx_r2.get().set((float) (Gradx_r2.get().getRealFloat() / 255.0));
Grady_r2.get().set((float) (Grady_r2.get().getRealFloat() / 255.0));
}
}
private void integrateOverRowAndCol(RandomAccess<FloatType> Grad_dist, IntervalView<FloatType> Gradx, IntervalView<FloatType> Grady) {
final int[] loc = new int[2];
// pGrad2D has 2D CDF
final RandomAccess<FloatType> Gradx_r = Gradx.randomAccess();
// Integrate over the row
for (int i = 0; i < N_Grad; ++i)
{
loc[1] = i;
Gradx_r.setPosition(i,0);
// get the row
for (int j = 0; j < N_Grad; ++j)
{
loc[0] = j;
// Set the position
Grad_dist.setPosition(loc);
// integrate over the row to get 1D vector
Gradx_r.get().set(Gradx_r.get().getRealFloat() + Grad_dist.get().getRealFloat());
}
}
final RandomAccess<FloatType> Grady_r = Grady.randomAccess();
// Integrate over the column
for (int i = 0; i < N_Grad; ++i)
{
loc[1] = i;
Grady_r.setPosition(0,0);
for (int j = 0; j < N_Grad; ++j)
{
loc[0] = j;
Grad_dist.setPosition(loc);
Grady_r.get().set(Grady_r.get().getRealFloat() + Grad_dist.get().getRealFloat());
Grady_r.move(1,0);
}
}
}
private <T extends RealType<T>> void calculateLaplaceFieldAndGradient(Img<T> image, Img<FloatType> LapCDF, Img<FloatType> GradD) {
final RandomAccess<FloatType> Grad_dist = GradD.randomAccess();
final long[] origImgDimensions = new long[2];
image.dimensions(origImgDimensions);
final Img<FloatType> laplaceField = new ArrayImgFactory<FloatType>().create(origImgDimensions, new FloatType());
// Cursor localization
final int[] indexD = new int[2];
final int[] loc_p = new int[2];
final RandomAccess<T> img_cur = image.randomAccess();
final RandomAccess<FloatType> Lap_f = laplaceField.randomAccess();
final RandomAccess<FloatType> Lap_hist = LapCDF.randomAccess();
// Normalization 1/(Number of pixel of the original image)
long n_pixel = 1;
for (int i = 0 ; i < laplaceField.numDimensions() ; i++)
{n_pixel *= laplaceField.dimension(i)-2;}
// unit to sum
final double f = 1.0/(n_pixel);
// Inside the image for Y
final Cursor<FloatType> cur = laplaceField.cursor();
// For each point of the Laplacian field
while (cur.hasNext())
{
cur.next();
// Localize cursors
cur.localize(loc_p);
// Exclude the border
boolean border = false;
for (int i = 0 ; i < image.numDimensions() ; i++)
{
if (loc_p[i] == 0)
{border = true;}
else if (loc_p[i] == image.dimension(i)-1)
{border = true;}
}
if (border == true) {
continue;
}
// get the stencil value;
img_cur.setPosition(loc_p);
float L = -4*img_cur.get().getRealFloat();
// Laplacian
for (int i = 0 ; i < 2 ; i++)
{
img_cur.move(1, i);
final float G_p = img_cur.get().getRealFloat();
img_cur.move(-1,i);
final float G_m = img_cur.get().getRealFloat();
img_cur.move(-1, i);
final float L_m = img_cur.get().getRealFloat();
img_cur.setPosition(loc_p);
L += G_p + L_m;
// Calculate the gradient + convert into bin
indexD[1-i] = (int) (Grad_Offset + G_p - G_m);
}
Lap_f.setPosition(loc_p);
// Set the Laplacian field
Lap_f.get().setReal(L);
// Histogram bin conversion
L += Lap_Offset;
Lap_hist.setPosition((int)(L),0);
Lap_hist.get().setReal(Lap_hist.get().getRealFloat() + f);
Grad_dist.setPosition(indexD);
Grad_dist.get().setReal(Grad_dist.get().getRealFloat() + f);
}
}
private void convertGrad2dToCDF(Img<FloatType> GradD) {
final RandomAccess<FloatType> Grad_dist = GradD.randomAccess();
final int[] loc = new int[GradD.numDimensions()];
// for each row
for (int j = 0; j < GradD.dimension(1); ++j)
{
loc[1] = j;
for (int i = 1; i < GradD.dimension(0) ; ++i)
{
loc[0] = i-1;
Grad_dist.setPosition(loc);
// Precedent float
final float prec = Grad_dist.get().getRealFloat();
// Move to the actual position
Grad_dist.move(1, 0);
// integration up to the current position
Grad_dist.get().set(Grad_dist.get().getRealFloat() + prec);
}
}
//col integration
for (int j = 1; j < GradD.dimension(1); ++j)
{
// Move to the actual position
loc[1] = j-1;
for (int i = 0; i < GradD.dimension(0); ++i)
{
loc[0] = i;
Grad_dist.setPosition(loc);
// Precedent float
final float prec = Grad_dist.get().getRealFloat();
// Move to the actual position
Grad_dist.move(1, 1);
Grad_dist.get().set(Grad_dist.get().getRealFloat() + prec);
}
}
}
/**
* Show information about authors and paper.
*/
private void showMessage()
{
// Create main window with panel to store gui components
final JDialog win = new JDialog((JDialog)null, "Naturalization", true);
final JPanel msg = new JPanel();
msg.setBorder(BorderFactory.createEmptyBorder(10, 10, 10, 10));
// Create message not editable but still focusable for copying
final JTextPane text = new JTextPane();
text.setContentType("text/html");
text.setText("<html>Y. Gong and I. F. Sbalzarini. Image enhancement by gradient distribution specification. In Proc. ACCV, <br>"
+ "12th Asian Conference on Computer Vision, Workshop on Emerging Topics in Image Enhancement and Restoration,<br>"
+ "pages w7–p3, Singapore, November 2014.<br><br>"
+ "Y. Gong and I. F. Sbalzarini, Gradient Distributions Priors for Biomedical Image Processing, 2014<br>http://arxiv.org/abs/1408.3300<br><br>"
+ "Y. Gong and I. F. Sbalzarini. A Natural-Scene Gradient Distribution Prior and its Application in Light-Microscopy Image Processing.<br>"
+ "IEEE Journal of Selected Topics in Signal Processing, Vol.10, No.1, February 2016, pages 99-114<br>"
+ "ISSN: 1932-4553, DOI: 10.1109/JSTSP.2015.2506122<br><br>"
+ "</html>");
text.setBorder(BorderFactory.createLineBorder(Color.BLACK, 2));
text.setEditable(false);
msg.add(text);
// Add button "Close" for closing window easily
final JButton button = new JButton("Close");
button.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
win.dispose();
}
});
msg.add(button);
// Finally show window with message
win.add(msg);
win.pack();
win.setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
win.setVisible(true);
}
@Override
protected boolean showDialog() {
return true;
}
@Override
protected boolean setup(String aArgs) {
setFilePrefix("naturalized_");
return true;
}
}
I want to compile it again and get a .class file, or a whole .jar file, of this plugin.
Which structure and files do I need to get a .class file?
What about the imports: where can I get the ij, java, javax and net files, and in which structure must they be?
I am a novice in Java and only know that the compile command is javac.
On Linux there is a command to do it, which is javac.
Just: javac HelloWorld.java
It might be the same thing on Windows, but I am not sure (install a virtual Linux box if there is no other way).
If something goes wrong, google the error.
If you want to compile a Java program from the command line, you should use the javac command; to run it afterwards, just write java followed by the name of your program.
Compiling the file will give you the .class file that you are looking for.
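Note that this particular plugin also depends on external libraries (ij.jar for ImageJ, the imglib2 jars, and the mosaic utils it imports), so a bare javac Naturalization.java will fail on the imports. A sketch of what the compile and packaging commands could look like; the jar names and paths here are assumptions, so point them at the jars you actually have (on Windows, use ; instead of : as the classpath separator):
javac -cp ij.jar:imglib2.jar:mosaic_utils.jar mosaic/plugins/Naturalization.java
jar cf Naturalization_.jar mosaic/plugins/*.class
The source file must sit in a mosaic/plugins/ directory matching its package declaration, and the resulting jar can then be dropped into ImageJ's plugins folder.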
I've trained a neural network in NetBeans and saved it as neural_network.ser using Serializable (all classes implement Serializable). Now I want to use it in my Android application, but when loading the network a ClassNotFoundException is raised:
java.lang.ClassNotFoundException: neural_network.BackPropagation
Here are the classes:
BackPropagation class:
public class BackPropagation extends Thread implements Serializable
{
private static final String TAG = "NetworkMessage";
private static final long serialVersionUID = -8862858027413741101L;
private double OverallError;
// The minimum Error Function defined by the user
private double MinimumError;
// The user-defined expected output pattern for a set of samples
private double ExpectedOutput[][];
// The user-defined input pattern for a set of samples
private double Input[][];
// User defined learning rate - used for updating the network weights
private double LearningRate;
// Users defined momentum - used for updating the network weights
private double Momentum;
// Number of layers in the network
private int NumberOfLayers;
// Number of training sets
private int NumberOfSamples;
// Current training set/sample that is used to train network
private int SampleNumber;
// Maximum number of epochs before the network stops training
private long MaximumNumberOfIterations;
// Public Variables
public LAYER Layer[];
public double ActualOutput[][];
long delay = 0;
boolean die = false;
// Calculate the node activations
public void FeedForward()
{
int i,j;
// Since no weights contribute to the output
// vector from the input layer,
// assign the input vector from the input layer
// to all the nodes in the first hidden layer
for (i = 0; i < Layer[0].Node.length; i++)
Layer[0].Node[i].Output = Layer[0].Input[i];
Layer[1].Input = Layer[0].Input;
for (i = 1; i < NumberOfLayers; i++)
{
Layer[i].FeedForward();
// Unless we have reached the last layer, assign layer i's output vector
// to the (i+1) layer's input vector
if (i != NumberOfLayers-1)
Layer[i+1].Input = Layer[i].OutputVector();
}
}
// FeedForward()
// Back-propagate the network output error through
// the network to update the weight values
public void UpdateWeights()
{
CalculateSignalErrors();
BackPropagateError();
}
private void CalculateSignalErrors()
{
int i,j,k,OutputLayer;
double Sum;
OutputLayer = NumberOfLayers-1;
// Calculate all output signal error
for (i = 0; i < Layer[OutputLayer].Node.length; i++)
{
Layer[OutputLayer].Node[i].SignalError =
(ExpectedOutput[SampleNumber][i] -Layer[OutputLayer].Node[i].Output) *
Layer[OutputLayer].Node[i].Output *
(1-Layer[OutputLayer].Node[i].Output);
}
// Calculate signal error for all nodes in the hidden layer
// (back-propagate the errors)
for (i = NumberOfLayers-2; i > 0; i--)
{
for (j = 0; j < Layer[i].Node.length; j++)
{
Sum = 0;
for (k = 0; k < Layer[i+1].Node.length; k++)
Sum = Sum + Layer[i+1].Node[k].Weight[j] *
Layer[i+1].Node[k].SignalError;
Layer[i].Node[j].SignalError = Layer[i].Node[j].Output*(1 -
Layer[i].Node[j].Output)*Sum;
}
}
}
private void BackPropagateError()
{
int i,j,k;
// Update Weights
for (i = NumberOfLayers-1; i > 0; i--)
{
for (j = 0; j < Layer[i].Node.length; j++)
{
// Calculate Bias weight difference to node j
Layer[i].Node[j].ThresholdDiff = LearningRate *
Layer[i].Node[j].SignalError +
Momentum*Layer[i].Node[j].ThresholdDiff;
// Update Bias weight to node j
Layer[i].Node[j].Threshold =
Layer[i].Node[j].Threshold +
Layer[i].Node[j].ThresholdDiff;
// Update Weights
for (k = 0; k < Layer[i].Input.length; k++)
{
// Calculate weight difference between node j and k
Layer[i].Node[j].WeightDiff[k] =
LearningRate * Layer[i].Node[j].SignalError * Layer[i-1].Node[k].Output +
Momentum * Layer[i].Node[j].WeightDiff[k];
// Update weight between node j and k
Layer[i].Node[j].Weight[k] =
Layer[i].Node[j].Weight[k] +
Layer[i].Node[j].WeightDiff[k];
}
}
}
}
private void CalculateOverallError()
{
int i,j;
OverallError = 0;
for (i = 0; i < NumberOfSamples; i++)
for (j = 0; j < Layer[NumberOfLayers-1].Node.length; j++)
{
OverallError = OverallError +
0.5 * Math.pow(ExpectedOutput[i][j] - ActualOutput[i][j], 2);
}
}
public BackPropagation(int NumberOfNodes[],
double InputSamples[][],
double OutputSamples[][],
double LearnRate,
double Moment,
double MinError,
long MaxIter
)
{
int i,j;
// Initiate variables
NumberOfSamples = InputSamples.length;
MinimumError = MinError;
LearningRate = LearnRate;
Momentum = Moment;
NumberOfLayers = NumberOfNodes.length;
MaximumNumberOfIterations = MaxIter;
// Create network layers
Layer = new LAYER[NumberOfLayers];
// Assign the number of nodes to the input layer
Layer[0] = new LAYER(NumberOfNodes[0],NumberOfNodes[0]);
// Assign number of nodes to each layer
for (i = 1; i < NumberOfLayers; i++)
Layer[i] = new LAYER(NumberOfNodes[i],NumberOfNodes[i-1]);
Input = new double[NumberOfSamples][Layer[0].Node.length];
ExpectedOutput = new double[NumberOfSamples][Layer[NumberOfLayers-1].Node.length];
ActualOutput = new double[NumberOfSamples][Layer[NumberOfLayers-1].Node.length];
// Assign input set
for (i = 0; i < NumberOfSamples; i++)
for (j = 0; j < Layer[0].Node.length; j++)
Input[i][j] = InputSamples[i][j];
// Assign output set
for (i = 0; i < NumberOfSamples; i++)
for (j = 0; j < Layer[NumberOfLayers-1].Node.length; j++)
ExpectedOutput[i][j] = OutputSamples[i][j];
}
public void TrainNetwork()
{
int i,j;
long k=0;
do
{
// For each pattern
for (SampleNumber = 0; SampleNumber < NumberOfSamples; SampleNumber++)
{
for (i = 0; i < Layer[0].Node.length; i++)
Layer[0].Input[i] = Input[SampleNumber][i];
FeedForward();
// Assign calculated output vector from network to ActualOutput
for (i = 0; i < Layer[NumberOfLayers-1].Node.length; i++)
ActualOutput[SampleNumber][i] = Layer[NumberOfLayers-1].Node[i].Output;
UpdateWeights();
// if we've been told to stop training, then
// stop thread execution
if (die){
return;
}
// if
}
k++;
// Calculate Error Function
CalculateOverallError();
System.out.println("OverallError =
"+Double.toString(OverallError)+"\n");
System.out.print("Epoch = "+Long.toString(k)+"\n");
} while ((OverallError > MinimumError) &&(k < MaximumNumberOfIterations));
}
public LAYER[] get_layers() { return Layer; }
// called when testing the network.
public double[] test(double[] input)
{
int winner = 0;
NODE[] output_nodes;
for (int j = 0; j < Layer[0].Node.length; j++)
{ Layer[0].Input[j] = input[j];}
FeedForward();
// get the last layer of nodes (the outputs)
output_nodes = (Layer[Layer.length - 1]).get_nodes();
double[] actual_output = new double[output_nodes.length];
for (int k=0; k < output_nodes.length; k++)
{
actual_output[k]=output_nodes[k].Output;
} // for
return actual_output;
}//test()
public double get_error()
{
CalculateOverallError();
return OverallError;
} // get_error()
// to change the delay in the network
public void set_delay(long time)
{
if (time >= 0) {
delay = time;
} // if
}
//save the trained network
public void save(String FileName)
{
try{
FileOutputStream fos = new FileOutputStream (new File(FileName), true);
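// Note: the 'true' argument opens the stream in append mode, so saving more than
// once stacks several serialized objects in one file, and readObject() will then
// keep returning the first (oldest) network.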
// Serialize data object to a file
ObjectOutputStream os = new ObjectOutputStream(fos);
os.writeObject(this);
os.close();
fos.close();
System.out.println("Network Saved!!!!");
}
catch (IOException E){System.out.println(E.toString());}
catch (Exception e){System.out.println(e.toString());}
}
public BackPropagation load(String FileName)
{
BackPropagation myclass= null;
try
{
//File patternDirectory = new File(Environment.getExternalStorageDirectory().getAbsolutePath().toString()+"INDIAN_NUMBER_RECOGNITION.data");
//patternDirectory.mkdirs();
FileInputStream fis = new FileInputStream(new File(FileName));
//FileInputStream fis =context.openFileInput(FileName);
ObjectInputStream is = new ObjectInputStream(fis);
myclass = (BackPropagation) is.readObject();
System.out.println("Error After Reading = "+Double.toString(myclass.get_error())+"\n");
is.close();
fis.close();
return myclass;
}
catch (Exception e){System.out.println(e.toString());}
return myclass;
}
// needed to implement threading.
public void run() {
TrainNetwork();
File Net_File = new File(Environment.getExternalStorageDirectory(),"Number_Recognition_1.ser");
save(Net_File.getAbsolutePath());
System.out.println( "DONE TRAINING :) ^_^ ^_^ :) !\n");
System.out.println("With Network ERROR = "+Double.toString(get_error())+"\n");
} // run()
// to notify the network to stop training.
public void kill() { die = true; }
}
Layer Class:
public class LAYER implements Serializable
{
private double Net;
public double Input[];
// Vector of inputs signals from previous
// layer to the current layer
public NODE Node[];
// Vector of nodes in current layer
// The FeedForward function is called so that
// the outputs for all the nodes in the current
// layer are calculated
public void FeedForward() {
for (int i = 0; i < Node.length; i++) {
Net = Node[i].Threshold;
for (int j = 0; j < Node[i].Weight.length; j++)
{Net = Net + Input[j] * Node[i].Weight[j];
System.out.println("Net = "+Double.toString(Net)+"\n");
}
Node[i].Output = Sigmoid(Net);
System.out.println("Node["+Integer.toString(i)+".Output = "+Double.toString(Node[i].Output)+"\n");
}
}
// The Sigmoid function calculates the
// activation/output from the current node
private double Sigmoid (double Net) {
return 1/(1+Math.exp(-Net));
}
// Return the output from all node in the layer
// in a vector form
public double[] OutputVector() {
double Vector[];
Vector = new double[Node.length];
for (int i=0; i < Node.length; i++)
Vector[i] = Node[i].Output;
return (Vector);
}
public LAYER (int NumberOfNodes, int NumberOfInputs) {
Node = new NODE[NumberOfNodes];
for (int i = 0; i < NumberOfNodes; i++)
Node[i] = new NODE(NumberOfInputs);
Input = new double[NumberOfInputs];
}
// added by DSK
public NODE[] get_nodes() { return Node; }
}
Node Class:
public class NODE implements Serializable
{
public double Output;
// Output signal from current node
public double Weight[];
// Vector of weights from previous nodes to current node
public double Threshold;
// Node Threshold /Bias
public double WeightDiff[];
// Weight difference between the nth and the (n-1) iteration
public double ThresholdDiff;
// Threshold difference between the nth and the (n-1) iteration
public double SignalError;
// Output signal error
// InitialiseWeights function assigns a randomly
// generated number, between -1 and 1, to the
// Threshold and Weights to the current node
private void InitialiseWeights() {
Threshold = -1+2*Math.random();
// Initialise threshold nodes with a random
// number between -1 and 1
ThresholdDiff = 0;
// Initially, ThresholdDiff is assigned to 0 so
// that the Momentum term can work during the 1st
// iteration
for(int i = 0; i < Weight.length; i++) {
Weight[i]= -1+2*Math.random();
// Initialise all weight inputs with a
// random number between -1 and 1
WeightDiff[i] = 0;
// Initially, WeightDiff is assigned to 0
// so that the Momentum term can work during
// the 1st iteration
}
}
public NODE (int NumberOfNodes) {
Weight = new double[NumberOfNodes];
// Create an array of Weight with the same
// size as the vector of inputs to the node
WeightDiff = new double[NumberOfNodes];
// Create an array of weightDiff with the same
// size as the vector of inputs to the node
InitialiseWeights();
// Initialise the Weights and Thresholds to the node
}
public double[] get_weights() { return Weight; }
public double get_output() { return Output; }
}
I wrote the code in NetBeans exactly like this; the only difference is in the save method (where the file should be saved). How can I load the file correctly so I don't get this exception?
I solved this by saving the network to an XML file and then loading it again in Android, so it just took two hours of training instead of days, without any serialization problems. Although it took some time to load that XML, I serialized it again to neural_network.ser so it will load much faster.
I know it's not the best solution, but that's what I've done.
Here is the code:
public void SaveToXML(String FileName)throws
ParserConfigurationException, FileNotFoundException,
TransformerException, TransformerConfigurationException
{
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = factory.newDocumentBuilder();
Document doc = parser.newDocument();
Element root = doc.createElement("neuralNetwork");
Element layers = doc.createElement("structure");
layers.setAttribute("numberOfLayers",Integer.toString(this.NumberOfLayers));
for (int il=0; il<this.NumberOfLayers; il++){
Element layer = doc.createElement("layer");
layer.setAttribute("index",Integer.toString(il));
layer.setAttribute("numberOfNeurons",Integer.toString(this.Layer[il].Node.length));
if(il==0)
{
for(int in=0;in<this.Layer[il].Node.length;in++)
{
Element neuron = doc.createElement("neuron");
neuron.setAttribute("index",Integer.toString(in));
neuron.setAttribute("NumberOfInputs",Integer.toString(1));
neuron.setAttribute("threshold",Double.toString(this.Layer[il].Node[in].Threshold));
Element input = doc.createElement("input");
double[] weights = this.Layer[il].Node[in].get_weights();
input.setAttribute("index",Integer.toString(in));
input.setAttribute("weight",Double.toString(weights[in]));
neuron.appendChild(input);
layer.appendChild(neuron);
}
layers.appendChild(layer);
}
else
{
for (int in=0; in<this.Layer[il].Node.length;in++){
Element neuron = doc.createElement("neuron");
neuron.setAttribute("index",Integer.toString(in));
neuron.setAttribute("NumberOfInputs",Integer.toString(this.Layer[il].Node[in].Weight.length));
neuron.setAttribute("threshold",Double.toString(this.Layer[il].Node[in].Threshold));
for (int ii=0; ii<this.Layer[il].Node[in].Weight.length;ii++) {
double[] weights = this.Layer[il].Node[in].get_weights();
Element input = doc.createElement("input");
input.setAttribute("index",Integer.toString(ii));
input.setAttribute("weight",Double.toString(weights[ii]));
neuron.appendChild(input);
}
layer.appendChild(neuron);
layers.appendChild(layer);
}
}
}
root.appendChild(layers);
doc.appendChild(root);
File xmlOutputFile = new File(FileName);
FileOutputStream fos;
Transformer transformer;
fos = new FileOutputStream(xmlOutputFile);
TransformerFactory transformerFactory = TransformerFactory.newInstance();
transformer = transformerFactory.newTransformer();
DOMSource source = new DOMSource(doc);
StreamResult result = new StreamResult(fos);
transformer.setOutputProperty("encoding","iso-8859-2");
transformer.setOutputProperty("indent","yes");
transformer.transform(source, result);
}
LoadFromXML Function:
public BackPropagation LoadFromXML(String FileName)throws
ParserConfigurationException, SAXException, IOException, ParseException
{
BackPropagation myclass= new BackPropagation();
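// Assumes BackPropagation also has a no-argument constructor
// (not shown in the listing above).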
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = factory.newDocumentBuilder();
File source = new File(FileName);
Document doc = parser.parse(source);
Node nodeNeuralNetwork = doc.getDocumentElement();
if (!nodeNeuralNetwork.getNodeName().equals("neuralNetwork")) throw new ParseException("[Error] NN-Load: Parse error in XML file, neural network couldn't be loaded.",0);
NodeList nodeNeuralNetworkContent = nodeNeuralNetwork.getChildNodes();
System.out.print("<neuralNetwork>\n");
for (int innc=0; innc<nodeNeuralNetworkContent.getLength(); innc++)
{
Node nodeStructure = nodeNeuralNetworkContent.item(innc);
if (nodeStructure.getNodeName().equals("structure"))
{
System.out.print("<stucture nuumberOfLayers = ");
myclass.NumberOfLayers = Integer.parseInt(((Element)nodeStructure).getAttribute("numberOfLayers"));
myclass.Layer = new LAYER[myclass.NumberOfLayers];
System.out.print(Integer.toString(myclass.NumberOfLayers)+">\n");
NodeList nodeStructureContent = nodeStructure.getChildNodes();
for (int isc=0; isc<nodeStructureContent.getLength();isc++)
{
Node nodeLayer = nodeStructureContent.item(isc);
if (nodeLayer.getNodeName().equals("layer"))
{
int index = Integer.parseInt(((Element)nodeLayer).getAttribute("index"));
System.out.print("<layer index = "+Integer.toString(index)+" numberOfNeurons = ");
int number_of_N = Integer.parseInt(((Element)nodeLayer).getAttribute("numberOfNeurons"));
System.out.print(Integer.toString(number_of_N)+">\n");
if(index==0)
{
myclass.Layer[0]=new LAYER(number_of_N,800);
}
else
{
int j=index-1;
myclass.Layer[index]=new LAYER(number_of_N,myclass.Layer[j].Node.length);
}
NodeList nodeLayerContent = nodeLayer.getChildNodes();
for (int ilc=0; ilc<nodeLayerContent.getLength();ilc++)
{
Node nodeNeuron = nodeLayerContent.item(ilc);
if (nodeNeuron.getNodeName().equals("neuron"))
{
System.out.print("<neuron index = ");
int neuron_index = Integer.parseInt(((Element)nodeNeuron).getAttribute("index"));
myclass.Layer[index].Node[neuron_index].Threshold = Double.parseDouble(((Element)nodeNeuron).getAttribute("threshold"));
System.out.print(Integer.toString(neuron_index)+" threshold = "+Double.toString(myclass.Layer[index].Node[neuron_index].Threshold)+">\n");
NodeList nodeNeuronContent = nodeNeuron.getChildNodes();
for (int inc=0; inc < nodeNeuronContent.getLength();inc++)
{
Node nodeNeuralInput = nodeNeuronContent.item(inc);
if (nodeNeuralInput.getNodeName().equals("input"))
{
System.out.print("<input index = ");
int index_input = Integer.parseInt(((Element)nodeNeuralInput).getAttribute("index"));
myclass.Layer[index].Node[neuron_index].Weight[index_input] = Double.parseDouble(((Element)nodeNeuralInput).getAttribute("weight"));
System.out.print(Integer.toString(index_input)+" weight = "+Double.toString(myclass.Layer[index].Node[neuron_index].Weight[index_input])+">\n");
}
}
}
}
}
}
System.out.print("</structure");
}
}
return myclass;
}
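With the two methods above, the one-time conversion described earlier could look like the sketch below; the file paths are assumptions, and it relies on the no-argument constructor used inside LoadFromXML:
try {
    // Parse the trained weights from XML once...
    BackPropagation net = new BackPropagation().LoadFromXML("neural_network.xml");
    // ...then write the fast-loading serialized form next to it.
    net.save("neural_network.ser");
} catch (Exception e) {
    e.printStackTrace();
}
// Later runs can call load("neural_network.ser") directly, which deserializes
// much faster than re-parsing the XML.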
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package sim;
import java.io.*;
import java.util.Arrays;
import java.util.Scanner;
import java.util.logging.Level;
import java.util.logging.Logger;
import static jdk.nashorn.internal.objects.NativeMath.max;
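// NOTE: this is an internal builtin of the Nashorn JavaScript engine, most likely
// auto-imported by the IDE; it is not java.lang.Math.max.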
/**
*
* @author admin
*/
public class Sim {
public String[][] bigramizedWords = new String[500][100];
public String[] words = new String[500];
public File file1 = new File("file1.txt");
public File file2 = new File("file2.txt");
public int tracker = 0;
public double matches = 0;
public double denominator = 0; //This will hold the sum of the bigrams of the 2 words
public double res;
public double results;
public Scanner a;
public PrintWriter pw1;
public Sim(){
initialize();
// bigramize();
results = max(res);
System.out.println("\n\nThe Bigram Similarity value between " + words[0] + " and " + words[1] + " is " + res + ".");
pw1.close();
}
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
Sim si=new Sim();
// TODO code application logic here
}
public void initialize() {
int j[]=new int[35];
try {
File file1=new File("input.txt");
File file2=new File("out.txt");
Scanner a = new Scanner(file1);
PrintWriter pw1= new PrintWriter(file2);
int i=0,count = 0;
while (a.hasNext()) {
java.lang.String gram = a.next();
if(gram.startsWith("question")|| gram.endsWith("?"))
{
count=0;
count-=1;
}
if(gram.startsWith("[")||gram.startsWith("answer")||gram.endsWith(" ") )
{
//pw1.println(count);
j[i++]=count;
count=0;
//pw1.println(gram);
//System.out.println(count);
}
else
{
// System.out.println(count);
count+=1;
//System.out.println(count + " " + gram);
}
int line=gram.length();
int sa_length;
//int[] j = null;
int refans_length=j[1];
//System.out.println(refans_length);
for(int k=2;k<=35;k++)
// System.out.println(j[k]);
//System.out.println(refans_length);
for(int m=2;m<=33;m++)
{
sa_length=j[2];
//System.out.println(sa_length);
for(int s=0;s<=refans_length;s++)
{
for(int l=0;l<=sa_length;l++)
{
for (int x = 0; x <= line - 2; x++) {
int tracker = 0;
bigramizedWords[tracker][x] = gram.substring(x, x + 2);
System.out.println(gram.substring(x, x + 2) + "");
//bigramize();
}
// bigramize();
}
}
}
bigramize();
words[tracker] = gram;
tracker++;
}
//pw1.close();
}
catch (FileNotFoundException ex) {
Logger.getLogger(Sim.class.getName()).log(Level.SEVERE, null, ex);
}
}
public void bigramize() {
//for(int p=0;p<=sa_length;p++)
denominator = (words[0].length() - 1) + (words[1].length() - 1);
for (int k = 0; k < bigramizedWords[0].length; k++) {
if (bigramizedWords[0][k] != null) {
for (int i = 0; i < bigramizedWords[1].length; i++) {
if (bigramizedWords[1][i] != null) {
if (bigramizedWords[0][k].equals(bigramizedWords[1][i])) {
matches++;
}
}
}
}
}
matches *= 2;
res = matches / denominator;
}
}
I have tried the above code for bigramizing the words in the file "input.txt". I got the bigram results, but I didn't get the similarity value.
For example, the input file contains:
answer:
high
risk
simulate
behaviour
solution
set
rules
[2]
rules
outline
high
source
knowledge
[1]
set
rules
simulate
behaviour
in the above example I have to compare the words under answer with every word under [2] as {high,rules} {high,outline} {high,high} {high,source} {high,knowledge} and I have to store the maximum value of the above comparison and again the second word from answer is taken and then similar process is taken. At last, mean of maximum value of each iteration is taken.