FASTEST way to truncate a float in Java

I have a program that takes in anywhere from 20,000 to 500,000 velocity vectors and must output these vectors multiplied by some scalar. The program allows the user to set a variable accuracy, which is basically just how many decimal places to truncate to in the calculations. The program is quite slow at the moment, and I discovered that it's not because of multiplying a lot of numbers but because of the method I'm using to truncate floating-point values.
I've already looked at several solutions on here for truncating decimals, like this one, and they mostly recommend DecimalFormat. This works great for formatting decimals once or twice to print nice user output, but is far too slow for hundreds of thousands of truncations that need to happen in a few seconds.
What is the most efficient way to truncate a floating-point value to n decimal places, keeping execution time as the utmost priority? I do not care whatsoever about resource usage, convention, or use of external libraries. Just whatever gets the job done the fastest.
EDIT: Sorry, I guess I should have been more clear. Here's a very simplified version of what I'm trying to illustrate:
import java.util.*;
import java.lang.*;
import java.text.DecimalFormat;
import java.math.RoundingMode;

public class MyClass {

    static class Vector {
        float x, y, z;

        @Override
        public String toString() {
            return "[" + x + ", " + y + ", " + z + "]";
        }
    }

    public static ArrayList<Vector> generateRandomVecs() {
        ArrayList<Vector> vecs = new ArrayList<>();
        Random rand = new Random();
        for (int i = 0; i < 500000; i++) {
            Vector v = new Vector();
            v.x = rand.nextFloat() * 10;
            v.y = rand.nextFloat() * 10;
            v.z = rand.nextFloat() * 10;
            vecs.add(v);
        }
        return vecs;
    }

    public static void main(String[] args) {
        int precision = 2;
        float scalarToMultiplyBy = 4.0f;
        ArrayList<Vector> velocities = generateRandomVecs();
        System.out.println("First 10 raw vectors:");
        for (int i = 0; i < 10; i++) {
            System.out.print(velocities.get(i) + " ");
        }

        /*
        This is the code that I am concerned about
        */
        DecimalFormat df = new DecimalFormat("##.##");
        df.setRoundingMode(RoundingMode.DOWN);
        long start = System.currentTimeMillis();
        for (Vector v : velocities) {
            /* Highly inefficient way of truncating */
            v.x = Float.parseFloat(df.format(v.x * scalarToMultiplyBy));
            v.y = Float.parseFloat(df.format(v.y * scalarToMultiplyBy));
            v.z = Float.parseFloat(df.format(v.z * scalarToMultiplyBy));
        }
        long finish = System.currentTimeMillis();
        long timeElapsed = finish - start;

        System.out.println();
        System.out.println("Runtime: " + timeElapsed + " ms");
        System.out.println("First 10 multiplied and truncated vectors:");
        for (int i = 0; i < 10; i++) {
            System.out.print(velocities.get(i) + " ");
        }
    }
}
The reason it is very important to do this is because a different part of the program will store trigonometric values in a lookup table. The lookup table will be generated to n places beforehand, so any velocity vector that has a float value to 7 places (e.g. 5.2387471) must be truncated to n places before lookup. Truncation is needed instead of rounding because, in the context of this program, it is OK if a vector is slightly less than its true value, but not greater.
Lookup table for 2 decimal places:
...
8.03 -> -0.17511085919
8.04 -> -0.18494742685
8.05 -> -0.19476549993
8.06 -> -0.20456409661
8.07 -> -0.21434223706
...
Say I wanted to look up the cosines of each element in the vector {8.040844, 8.05813164, 8.065688} in the table above. Obviously, I can't look up these values directly, but I can look up {8.04, 8.05, 8.06} in the table.
What I need is a very fast method to go from {8.040844, 8.05813164, 8.065688} to {8.04, 8.05, 8.06}

The fastest way, which will introduce rounding error, is going to be to multiply by 10^n, call Math.rint, and divide by 10^n.
That's...not really all that helpful, though, considering the introduced error, and -- more importantly -- that it doesn't actually buy anything. Why drop decimal places if it doesn't improve efficiency or anything? If it's about making the values shorter for display or the like, truncate then, but until then, your program will run as fast as possible if you just use full float precision.
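For what it's worth, here is a minimal sketch of the scale/truncate/rescale idea described above. The helper name is mine, and it uses Math.floor rather than Math.rint, since the question asks for truncation (never exceeding the true value) rather than rounding; for n = 2 the factor is 100.

    static float truncate(float value, float factor) {
        // Scale up, drop everything after the decimal point, scale back down.
        // Math.floor (rather than a plain cast) also rounds negative values downward.
        return (float) (Math.floor(value * factor) / factor);
    }

For example, truncate(8.040844f, 100f) yields approximately 8.04f (subject to the usual binary float representation error), which can then be used as the lookup-table key.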

Related

Euler's Method in java

I have written an Euler's Method code to find an approximate value for x(10) and compare it to the value of x(10) given by the exact solution of the separable ODE. However, my code displays a chaotic number for x(10). Can you please identify the major error?
Thank you.
//#(#)euler.java
//This method attempts to find solutions to dx/dt = (e^t)(sin(x)) via
//Euler's iterative method and find an approximate value for x(10)
import java.text.DecimalFormat;

public class euler
{
    public static void main(String[] Leonhard)
    {
        DecimalFormat df = new DecimalFormat("#.0000");
        double h = (1.0/3.0); // h is the step-size
        double t_0 = 0;       // initial condition
        double x_0 = .3;      // initial condition
        double x_f = 10;      // I want to find x(10) using this method and compare it to an exact value of x(10)
        double[] t_k = new double[ (int)( ( x_f - x_0 ) / h ) + 1 ]; // these two arrays hold the values of t_k and x_k
        double[] x_k = new double[ (int)( ( x_f - x_0 ) / h ) + 1 ];
        int i; // the counter
        System.out.println( "k\t t_k\t x_k" ); // table header
        for ( i = 0; i < t_k.length; i++ )
        {
            if ( i == 0 ) // this if statement handles the initial conditions
            {
                t_k[i] = t_0;
                x_k[i] = x_0;
            }
            else
            {
                t_k[i] = i*h;
                x_k[i] = x_k[i-1] + h*( Math.exp(t_k[i-1]) )*( Math.sin(x_k[i-1]) );
            }
            System.out.println( i + " " + df.format(t_k[i]) + " " + df.format(x_k[i]) );
        }
    }
}
Your code seems to work. The problem is that Euler's method is a fairly simplistic way of approximately integrating a differential equation. Its accuracy is strongly dependent upon the step size you're using, as you noticed.
I ran your code and compared it with another implementation of the same algorithm. The results overlap in the regime where the approximation is working, and quite a while beyond; they only differ once the method breaks down strongly.
A thing to note is that the Euler method doesn't work very well for this particular differential equation, for the point you wish to reach. A step size of 1/3 is much too big to begin with, but even if you choose a much smaller step size, e.g. 1/10000, the method tends to break down before reaching t=10. Something like exp(t)*sin(x) is hard to deal with. The real solution becomes flat, approaching pi, so sin(x) should go to zero, making the derivative zero as well. However, exp(t) blows up, so the derivative is numerically unstable.
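To see the step-size dependence for yourself, here is a minimal, self-contained Euler loop for dx/dt = exp(t)*sin(x) with the same initial condition; the structure mirrors the original code, but the names and the while-loop form are my own. Try h = 1.0/3.0 versus h = 1.0/10000.0 and compare the results.

    public class EulerStepSize {
        public static void main(String[] args) {
            double h = 1.0 / 10000.0; // step size: try 1.0/3.0 as well
            double t = 0.0;           // initial t
            double x = 0.3;           // initial x
            double tEnd = 10.0;       // integrate up to t = 10
            while (t < tEnd) {
                x += h * Math.exp(t) * Math.sin(x); // Euler update: x_{k+1} = x_k + h * f(t_k, x_k)
                t += h;
            }
            System.out.println("x(" + tEnd + ") ~ " + x);
        }
    }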

Implementation of Logistic regression with Gradient Descent in Java

I have implemented Logistic Regression with Gradient Descent in Java. It doesn't seem to work well (it does not classify records properly; the predicted probability of y=1 is high for a lot of records). I don't know whether my implementation is correct. I have gone through the code several times and I am unable to find any bug. I have been following Andrew Ng's tutorials on machine learning on Coursera. My Java implementation has 3 classes, namely:
DataSet.java : To read the data set
Instance.java : Has two members : 1. double[ ] x and 2. double label
Logistic.java : This is the main class that implements Logistic Regression with Gradient Descent.
This is my cost function:
$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)} \log\bigl(h_\theta(x^{(i)})\bigr) + \bigl(1 - y^{(i)}\bigr)\log\bigl(1 - h_\theta(x^{(i)})\bigr)\right]$$
For the above Cost function, this is my Gradient Descent algorithm:
Repeat (simultaneously updating all $\theta_j$):
$$\theta_j := \theta_j - \alpha \sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x_j^{(i)}$$
import java.io.FileNotFoundException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Logistic {

    /** the learning rate */
    private double alpha;

    /** the weights to learn */
    private double[] theta;

    /** the number of iterations */
    private int ITERATIONS = 3000;

    public Logistic(int n) {
        this.alpha = 0.0001;
        theta = new double[n];
    }

    private double sigmoid(double z) {
        return (1 / (1 + Math.exp(-z)));
    }

    public void train(List<Instance> instances) {
        double[] temp = new double[3];

        //Gradient Descent algorithm for minimizing theta
        for (int i = 1; i <= ITERATIONS; i++) {
            for (int j = 0; j < 3; j++) {
                temp[j] = theta[j] - (alpha * sum(j, instances));
            }
            //simultaneous update of theta
            for (int j = 0; j < 3; j++) {
                theta[j] = temp[j];
            }
            System.out.println(Arrays.toString(theta));
        }
    }

    private double sum(int j, List<Instance> instances) {
        double[] x;
        double prediction, sum = 0, y;

        for (int i = 0; i < instances.size(); i++) {
            x = instances.get(i).getX();
            y = instances.get(i).getLabel();
            prediction = classify(x);
            sum += ((prediction - y) * x[j]);
        }
        return (sum / instances.size());
    }

    private double classify(double[] x) {
        double logit = .0;
        for (int i = 0; i < theta.length; i++) {
            logit += (theta[i] * x[i]);
        }
        return sigmoid(logit);
    }

    public static void main(String... args) throws FileNotFoundException {
        // DataSet is a class with a static method readDataSet which reads the dataset
        // Instance is a class with two members: double[] x, double label y
        // x contains the features and y is the label.
        List<Instance> instances = DataSet.readDataSet("data.txt");
        // 3 : number of theta parameters corresponding to the features x
        // x0 is always 1
        Logistic logistic = new Logistic(3);
        logistic.train(instances);

        //Test data
        double[] x = new double[3];
        x[0] = 1;
        x[1] = 45;
        x[2] = 85;

        System.out.println("Prob: " + logistic.classify(x));
    }
}
Can anyone tell me what I am doing wrong?
Thanks in advance! :)
As I am studying logistic regression, I took the time to review your code in detail.
TLDR
In fact, it appears the algorithm is correct.
The reason you got so many false negatives or false positives is, I think, the hyperparameters you chose.
The model was under-trained, so the hypothesis was under-fitting.
Details
I had to create the DataSet and Instance classes because you did not publish them, and I set up a training data set and a test data set based on the Cryotherapy dataset.
See http://archive.ics.uci.edu/ml/datasets/Cryotherapy+Dataset+.
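For reference, here is a minimal sketch of what those two classes might look like. This is my own reconstruction from the description in the question, not the asker's code, and it assumes a comma-separated file with two features followed by the label.

    import java.io.File;
    import java.io.FileNotFoundException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Scanner;

    class Instance {
        private final double[] x;    // features, with x[0] = 1 as the bias term
        private final double label;  // 0 or 1

        Instance(double[] x, double label) {
            this.x = x;
            this.label = label;
        }

        double[] getX()   { return x; }
        double getLabel() { return label; }
    }

    class DataSet {
        // Assumed format: one record per line, "feature1,feature2,label".
        static List<Instance> readDataSet(String file) throws FileNotFoundException {
            List<Instance> instances = new ArrayList<>();
            try (Scanner scanner = new Scanner(new File(file))) {
                while (scanner.hasNextLine()) {
                    String[] parts = scanner.nextLine().split(",");
                    double[] x = { 1.0,
                                   Double.parseDouble(parts[0]),
                                   Double.parseDouble(parts[1]) };
                    instances.add(new Instance(x, Double.parseDouble(parts[2])));
                }
            }
            return instances;
        }
    }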
Then, using your exact same code (for the logistic regression part), choosing a learning rate alpha of 0.001 and 100000 iterations, I got a precision rate of 80.64516129032258 percent on the test data set, which is not so bad.
I tried to get a better precision rate by tweaking those hyperparameters manually but could not obtain any better result.
At this point, an enhancement would be to implement regularization, I suppose.
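As a rough sketch of that enhancement (not part of the original code; lambda is an assumed new hyperparameter), the gradient for every weight except the bias term gets an extra (lambda / m) * theta[j]:

    // Hypothetical regularized gradient, reusing the existing sum() helper.
    // theta[0], the weight for the bias feature x0 = 1, is conventionally not regularized.
    private double regularizedGradient(int j, List<Instance> instances, double lambda) {
        double grad = sum(j, instances);                    // (1/m) * sum((h(x) - y) * x_j)
        if (j > 0) {
            grad += (lambda / instances.size()) * theta[j]; // regularization term
        }
        return grad;
    }

In train(), the call to sum(j, instances) would then be replaced by regularizedGradient(j, instances, lambda).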
Gradient descent formula
In Andrew Ng's video about the cost function and gradient descent, it is true that the 1/m term is omitted.
A possible explanation is that the 1/m term is included in the alpha term.
Or maybe it's just an oversight.
See https://www.youtube.com/watch?v=TTdcc21Ko9A&index=36&list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN&t=6m53s at 6m53s.
But if you watch Andrew Ng's video about regularization and logistic regression you'll notice that the term 1/m is clearly present in the formula.
See https://www.youtube.com/watch?v=IXPgm1e0IOo&index=42&list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN&t=2m19s at 2m19s.
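In other words, the two forms of the update differ only by a constant factor, which can be absorbed into the learning rate:
$$\theta_j := \theta_j - \alpha \sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x_j^{(i)} \qquad\text{versus}\qquad \theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x_j^{(i)}$$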

Coefficient Correlation Over a Large Binary Image Data-Set - Slow Performance

I am trying to build an OCR by calculating the correlation coefficient between characters extracted from an image and every character I have pre-stored in a database. My implementation is based on Java, and the pre-stored characters are loaded into an ArrayList when the application starts, i.e.
ArrayList<byte[]> storedCharacters, extractedCharacters;
storedCharacters = load_all_characters_from_database();
extractedCharacters = extract_characters_from_image();

// Calculate the coefficient between every extracted character
// and every character in the database.
double maxCorr = -1, corr;
for(byte[] extractedCharacter : extractedCharacters)
    for(byte[] storedCharacter : storedCharacters)
    {
        corr = findCorrelation(extractedCharacter, storedCharacter);
        if (corr > maxCorr)
            maxCorr = corr;
    }
...
...
public double findCorrelation(byte[] extractedCharacter, byte[] storedCharacter)
{
    double mag1 = 0, mag2 = 0, corr = 0;
    for(int i = 0; i < extractedCharacter.length; i++)
    {
        mag1 += extractedCharacter[i] * extractedCharacter[i];
        mag2 += storedCharacter[i] * storedCharacter[i];
        corr += extractedCharacter[i] * storedCharacter[i];
    } // for
    corr /= Math.sqrt(mag1 * mag2);
    return corr;
}
There are around 100-150 extractedCharacters per image, but the database has 15,600 stored binary characters. Checking the correlation coefficient between every extracted character and every stored character has a big impact on performance, as it takes around 15-20 seconds per image on an Intel i5 CPU.
Is there a way to improve the speed of this program, or can you suggest another approach that gives similar results? (The results produced by comparing every character against such a large dataset are quite good.)
Thank you in advance
UPDATE 1
public static void run() {
    ArrayList<byte[]> storedCharacters, extractedCharacters;
    storedCharacters = load_all_characters_from_database();
    extractedCharacters = extract_characters_from_image();

    // Calculate the coefficient between every extracted character
    // and every character in the database.
    computeNorms(storedCharacters, extractedCharacters);
    double maxCorr = -1;
    for(int extCharNo = 0; extCharNo < extractedCharacters.size(); extCharNo++)
        for(int strCharIndex = 0; strCharIndex < storedCharacters.size(); strCharIndex++)
        {
            double corr = findCorrelation(extractedCharacters.get(extCharNo),
                                          storedCharacters.get(strCharIndex),
                                          strCharIndex, extCharNo);
            if (corr > maxCorr)
                maxCorr = corr;
        }
}

private static double[] storedNorms;
private static double[] extractedNorms;

// Correlation between two binary images
public static double findCorrelation(byte[] arr1, byte[] arr2, int strCharIndex, int extCharNo) {
    final int dotProduct = dotProduct(arr1, arr2);
    final double corr = dotProduct * storedNorms[strCharIndex] * extractedNorms[extCharNo];
    return corr;
}

public static void computeNorms(ArrayList<byte[]> storedCharacters, ArrayList<byte[]> extractedCharacters) {
    storedNorms = computeInvNorms(storedCharacters);
    extractedNorms = computeInvNorms(extractedCharacters);
}

private static double[] computeInvNorms(List<byte[]> a) {
    final double[] result = new double[a.size()];
    for (int i = 0; i < result.length; ++i)
        result[i] = 1 / Math.sqrt(dotProduct(a.get(i), a.get(i)));
    return result;
}

private static int dotProduct(byte[] arr1, byte[] arr2) {
    int dotProduct = 0;
    for(int i = 0; i < arr1.length; i++)
        dotProduct += arr1[i] * arr2[i];
    return dotProduct;
}
Nowadays it's hard to find a CPU with only a single core (even in phones). Since the per-character tasks are nicely independent, you can parallelize the outer loop with just a few lines, so I'd go for it, though the gain is limited.
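For instance, a minimal sketch using Java 8 parallel streams (assuming each extracted character's best match can be computed independently; findCorrelation and storedCharacters are as in the question):

    // Compute the best correlation for each extracted character in parallel.
    double[] bestPerCharacter = extractedCharacters.parallelStream()
            .mapToDouble(extracted -> {
                double best = -1;
                for (byte[] stored : storedCharacters) {
                    double corr = findCorrelation(extracted, stored);
                    if (corr > best) {
                        best = corr;
                    }
                }
                return best;
            })
            .toArray();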
In case you really mean cross-correlation, then a transform like DFT or DCT could help. They surely do for big images, but with yours at 12x16, I'm not sure.
Maybe you mean just a dot product? And maybe you should tell us?
Note that you actually don't need to compute the correlation itself; most of the time all you need is to find out whether it's bigger than a threshold:
corr = findCorrelation(extractedCharacter, storedCharacter)
..... more code to check if this is the best match ......
This may lead to some optimizations or not, depending on what the images look like.
Note also that a simple low level optimization can give you nearly a factor of 4 as in this question of mine. Maybe you really should tell us what you're doing?
UPDATE 1
I guess that due to the computation of three products in the loop, there's enough instruction level parallelism, so a manual loop unrolling like in my above question is not necessary.
However, I see that those three products get computed some 100 * 15600 times, while only one of them depends on both extractedCharacter and storedCharacter. So you can compute
100 + 15600 + 100 * 15600
dot products instead of
3 * 100 * 15600
This way you may get a factor of three pretty easily.
Or not: after this step there's only a single sum computed in the inner loop, so the problem linked above applies, and so does its solution (manual unrolling).
Factor 5.2
While byte[] is nicely compact, the computation involves extending the values to ints, which costs some time, as my benchmark shows. Converting the byte[]s to int[]s before all the correlations get computed saves time. Even better, the conversion of storedCharacters can be done once beforehand.
Unrolling the loop manually by a factor of two helps, but unrolling further doesn't.
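A rough sketch of those last two ideas combined (the conversion helper and the unrolled dot product; the method names are mine, and the unrolled version assumes even-length arrays, otherwise a tail step is needed):

    // Convert once, outside the correlation loops; the stored characters can even
    // be converted when they are loaded from the database.
    static int[] toInts(byte[] a) {
        int[] result = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            result[i] = a[i];
        }
        return result;
    }

    // Dot product unrolled by two; assumes arr1.length is even and equals arr2.length.
    static int dotProduct(int[] arr1, int[] arr2) {
        int sum0 = 0, sum1 = 0;
        for (int i = 0; i < arr1.length; i += 2) {
            sum0 += arr1[i] * arr2[i];
            sum1 += arr1[i + 1] * arr2[i + 1];
        }
        return sum0 + sum1;
    }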

Class to count variables design issue

I'm new to OO programming and having a bit of trouble designing my program around its concepts. I have done the tutorials but am still having problems.
I have a recursion that takes a number of items (they could be anything; in this example, stocks) and figures out what quantity of each is needed to equal a specific value (100 in this code). This part works, but I also want to know when a stock's weighting exceeds a threshold. Originally I approached this with a method that ran a for loop over the entire list of values, but that is very inefficient because it runs on every step of the recursion. I thought this would be a good time to learn classes, because a class could maintain the state, just increment the value on each step, and let me know when the threshold is hit.
I think I have the code, but I don't fully understand how to design this problem with classes. As it stands, it still runs the loop on every step of the recursion because that is where I'm initializing the class. Is there a better way to design this? My end goal is to be notified when a weighting is exceeded (which I can somewhat already do), but in a way that uses the fewest resources (avoiding inefficient/unnecessary for loops).
Code (here's the entire code I have been using to learn, but the problem is with the Counter class and its placement within the findVariables method):
import java.util.Arrays;

public class LearningClassCounting {

    public static int[] stock_price = new int[]{ 20, 5, 20 };
    public static int target = 100;

    public static void main(String[] args) {
        // takes items from the first list
        findVariables(stock_price, 100, new int[] {0, 0, 0}, 0, 0);
    }

    public static void findVariables(int[] constants, int sum,
            int[] variables, int n, int result) {
        Counter Checker = new Counter(stock_price, variables);
        if (n == constants.length) {
            if (result == sum) {
                System.out.println(Arrays.toString(variables));
            }
        } else if (result <= sum) { //keep going
            for (int i = 0; i <= 100; i++) {
                variables[n] = i;
                Checker.check_total_percent(n, i);
                findVariables(constants, sum, variables, n+1, result+constants[n]*i);
            }
        }
    }
}

class Counter {
    private int[] stock_price;
    private int[] variables;
    private int value_so_far;

    public Counter(int[] stock_price, int[] variables) {
        this.stock_price = stock_price;
        this.variables = variables;
        for (int location = 0; location < variables.length; location++) {
            //System.out.println(variables[location] + " * " + stock_price[location] + " = " + (variables[location] * stock_price[location]) );
            value_so_far = value_so_far + (variables[location] * stock_price[location]);
        }
        //System.out.println("Total value so far is " + value_so_far);
        //System.out.println("************");
    }

    public void check_total_percent(int current_location, int percent) {
        // Check to see if weight exceeds threshold
        //System.out.println("we are at " + current_location + " and " + percent + " and " + Arrays.toString(variables));
        //System.out.println("value is " + stock_price[current_location] * percent);
        //formula I think I need to use is:
        if (percent == 0) {
            return;
        }
        int current_value = (stock_price[current_location] * percent);
        int overall_percent = current_value / (value_so_far + current_value);
        if (overall_percent > 50) {
            System.out.println("item " + current_location + " is over 50%");
        }
    }
}
What you're describing sounds like a variant of the famous knapsack problem. There are many approaches to these problems, which are inherently hard to solve efficiently.
Inherently, one may need to check "all the combinations". The so-called optimization comes from backtracking when a certain selection subset is already too large (e.g., if 10 given stocks are over my sum, no need to explore other combinations). In addition, one can cache certain subsets (e.g., if I know that X Y and Z amount to some value V, I can reuse that value). You'll see a lot of discussion of how to approach these sort of problems and how to design solutions.
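To make the pruning idea concrete, here is a minimal sketch (the names and structure are mine, not the original code): the recursion abandons a branch as soon as the running total would exceed the target, and reports a solution when the total hits the target exactly.

    // Enumerate how many of each stock to take, cutting off any branch whose
    // running total already exceeds the target.
    static void search(int[] prices, int[] counts, int index, int total, int target) {
        if (index == prices.length) {
            if (total == target) {
                System.out.println(java.util.Arrays.toString(counts));
            }
            return;
        }
        // The loop bound is the pruning step: stop as soon as adding one more
        // of this stock would push the total past the target.
        for (int count = 0; total + count * prices[index] <= target; count++) {
            counts[index] = count;
            search(prices, counts, index + 1, total + count * prices[index], target);
        }
        counts[index] = 0; // reset before backtracking
    }

Calling search(new int[]{20, 5, 20}, new int[3], 0, 0, 100) would print the same combinations the original findVariables finds, without ever exploring branches that are already over the target.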
That being said, my view is that while algorithmic problems of this sort may be important for learning how to program and structure code and data structures, they're generally a very poor choice for learning object-oriented design and modelling.

How to compute the probability of a multi-class prediction using libsvm?

I'm using libsvm, and the documentation leads me to believe that there's a way to output the estimated probability associated with a predicted classification. Is this so? And if so, can anyone provide a clear example of how to do it in code?
Currently, I'm using the Java libraries in the following manner
SvmModel model = Svm.svm_train(problem, parameters);
SvmNode x[] = getAnArrayOfSvmNodesForProblem();
double predictedValue = Svm.svm_predict(model, x);
Given your code-snippet, I'm going to assume you want to use the Java API packaged with libSVM, rather than the more verbose one provided by jlibsvm.
To enable prediction with probability estimates, train a model with the svm_parameter field probability set to 1. Then, just change your code so that it calls the svm method svm_predict_probability rather than svm_predict.
Modifying your snippet, we have:
parameters.probability = 1;
svm_model model = svm.svm_train(problem, parameters);
svm_node x[] = problem.x[0]; // let's try the first data pt in problem
double[] prob_estimates = new double[NUM_LABEL_CLASSES];
svm.svm_predict_probability(model, x, prob_estimates);
It's worth knowing that training with multiclass probability estimates can change the predictions made by the classifier. For more on this, see the question Calculating Nearest Match to Mean/Stddev Pair With LibSVM.
The accepted answer worked like a charm. Make sure to set probability = 1 during training.
If you are trying to drop the prediction when the confidence does not meet a threshold, here is a code sample:
double[] confidenceScores = new double[model.nr_class];
svm.svm_predict_probability(model, svmVector, confidenceScores);

/*System.out.println("text=" + text);
for (int i = 0; i < model.nr_class; i++) {
    System.out.println("i=" + i + ", labelNum:" + model.label[i] + ", name=" + classLoadMap.get(model.label[i]) + ", score=" + confidenceScores[i]);
}*/

// finding max confidence
int maxConfidenceIndex = 0;
double maxConfidence = confidenceScores[maxConfidenceIndex];
for (int i = 1; i < confidenceScores.length; i++) {
    if (confidenceScores[i] > maxConfidence) {
        maxConfidenceIndex = i;
        maxConfidence = confidenceScores[i];
    }
}

double threshold = 0.3; // set this based on the data & no. of classes
int labelNum = model.label[maxConfidenceIndex];
// reverse map number to name
String targetClassLabel = classLoadMap.get(labelNum);
LOG.info("classNumber:{}, className:{}; confidence:{}; for text:{}",
        labelNum, targetClassLabel, maxConfidence, text);
if (maxConfidence < threshold) {
    LOG.info("Not enough confidence; threshold={}", threshold);
    targetClassLabel = null;
}
return targetClassLabel;
