Take 10% away from a double in Java

I was wondering how I could take 10% away from a double value in Java. I've tried researching percentages in Java, but it's a bit confusing.

You can multiply by 0.1 instead of using percents.
For example:
double x = 100;   // your starting value
double y = 0.1;   // the percentage you want to take away (0.1 is the same as 10%)
double z = x * y; // z is 10% of x
double a = x - z; // a takes your starting amount and takes away 10%; a is your answer
Note that the values must be doubles, not ints: an int cannot hold 0.1.
A simpler way of doing this is to multiply by 0.9 (it multiplies by 90%):
double x = 100;   // your starting number
double y = 0.9;   // the fraction you want to keep
double z = x * y; // z is your answer
This takes less code and fewer equations.
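A minimal sketch of the same idea as a reusable method (the name minusPercent is mine, not from the answer above):
// Returns value reduced by the given percentage, e.g. percent = 10 for 10%.
static double minusPercent(double value, double percent) {
    return value * (1 - percent / 100.0);
}
// minusPercent(100, 10) returns 90.0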

Related

Taylor series - calculating sin(x) until 6 digits precision

I must calculate sin(x) with a Taylor series until the output has 6 decimal places of precision. The argument is an angle. I haven't implemented the precision check yet; I'm just printing successive values (to check whether it's working), but after 10-20 iterations it shows infinities/NaNs.
What's wrong in my thinking?
public static void sin(double x) {
    double sin = 0;
    int n = 1;
    while (1 < 2) {
        sin += (Math.pow(-1, n) / factorial(2 * n + 1)) * Math.pow(x, 2 * n + 1);
        n++;
        try {
            Thread.sleep(50);
        } catch (InterruptedException ex) {
        }
        // CHECKING THE PRECISION HERE LATER
        System.out.println(sin);
    }
}
The equation: sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
Don't compute each term using factorials and powers! You will rapidly overflow.
Just realize that each next term is -term * x * x / ((n+1)*(n+2)) where n increases by 2 for each term:
double tolerance = 0.0000007; // or whatever limit you want
double sin = 0.;
int n = 1;
double term = x;
while (Math.abs(term) > tolerance) {
    sin += term;
    term *= -((x / (n + 1)) * (x / (n + 2)));
    n += 2;
}
To add on to the answers provided by @Xoce (and @FredK), remember that you are computing the Maclaurin series (the special case of the Taylor series about x = 0). While this converges fairly quickly for values within about pi/2 of zero, you may not get convergence of the digits before the factorial explodes for values of x further out.
My recommendation is to use the actual Taylor series about the closest value for which sin is known exactly (i.e., the nearest multiple of pi/2), not just about zero. And definitely do the convergence check!
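For illustration, a hedged sketch of that kind of range reduction, folding the argument into [-pi/2, pi/2] before summing the series (the helper name reduceToHalfPi is mine):
// Fold x into [-pi/2, pi/2] using periodicity and sin(pi - x) == sin(x).
static double reduceToHalfPi(double x) {
    double twoPi = 2 * Math.PI;
    x = x % twoPi;                          // now in (-2*pi, 2*pi)
    if (x > Math.PI)  x -= twoPi;           // now in (-pi, pi]
    if (x < -Math.PI) x += twoPi;
    if (x > Math.PI / 2)  x = Math.PI - x;  // sin(pi - x) == sin(x)
    if (x < -Math.PI / 2) x = -Math.PI - x; // sin(-pi - x) == sin(x)
    return x;
}
The reduced argument can then be fed to the series loop above, where it converges in a handful of terms.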
Problem:
NaN ("not a number") typically appears when a computation becomes undefined, for example dividing by zero or by a value that has overflowed.
Solution:
This happens because your factorial overflows, and at some point you end up dividing by zero.
If your factorial takes an int argument, change it to use, for example, a BigInteger.
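A hedged sketch of that fix, computing the factorial exactly with BigInteger so large n no longer overflows:
import java.math.BigInteger;

// Exact factorial; never overflows, only gets slower as n grows.
static BigInteger factorial(int n) {
    BigInteger result = BigInteger.ONE;
    for (int i = 2; i <= n; i++) {
        result = result.multiply(BigInteger.valueOf(i));
    }
    return result;
}
Note that dividing by this factorial still requires converting back to double (e.g. via doubleValue(), which returns infinity for huge values), so the term recurrence in the accepted answer, which avoids factorials entirely, is the more robust fix.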

How can I show up to 2 decimal places without rounding off?

I want to keep only two decimal places of a float without rounding off, e.g. 4.21777 should become 4.21 and not 4.22. How do I do this?
A simple answer:
double x = 4.21777;
double y = Math.floor(x * 100) / 100;
Subtract 0.005 and then round. For example, if you just want to print the number, you can use a format of %6.2f with the value x - 0.005.
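A quick hedged illustration of that printf trick:
double x = 4.21777;
// %.2f would round 4.21777 up to 4.22, but 4.21277 rounds down to 4.21
System.out.printf("%6.2f%n", x - 0.005); // prints "  4.21"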
float f = 4.21777f * 100; // 421.777
int solution = (int) f;   // truncated to 421
f = solution / 100f;      // 4.21 (100f forces float division, not integer division)
This should work ;)
Explanation: By multiplying by 100, you get 421.777, which, cast to int, is truncated down to 421. Dividing by 100f then returns the value with two decimal places.
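If you need exact decimal truncation (binary floats cannot represent most decimal fractions exactly, so the tricks above can misbehave at the edges), a hedged alternative is BigDecimal with RoundingMode.DOWN:
import java.math.BigDecimal;
import java.math.RoundingMode;

// Truncate to 2 decimal places without ever rounding up.
BigDecimal truncated = new BigDecimal("4.21777").setScale(2, RoundingMode.DOWN);
System.out.println(truncated); // 4.21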

Java double computing speed

I have a piece of code that does many computations on double values and takes too much time. Can I speed it up by dropping some decimals? If I use a formatter to parse the double, won't that do the calculation first and then shed the extra decimals, so nothing would be gained? What's the best way of doing this?
Just something to get an idea:
double avgRatingForPreferredItem = (double) tempAverageRating.get(matrix.get(0).getItemID1()) / matrix.size();
double avgRatingForRandomItem = (double) tempAverageRating.get(matrix.get(0).getItemID2()) / matrix.size();
double numarator = 0;
for (MatrixColumn matrixCol : matrix) {
    numarator += (matrixCol.getRatingForItemID1() - avgRatingForPreferredItem)
               * (matrixCol.getRatingForItemID2() - avgRatingForRandomItem);
}
double numitor = 0;
double numitorStanga = 0;
double numitorDreapta = 0;
for (MatrixColumn matrixCol : matrix) {
    numitorStanga += (matrixCol.getRatingForItemID1() - avgRatingForPreferredItem)
                   * (matrixCol.getRatingForItemID1() - avgRatingForPreferredItem);
    numitorDreapta += (matrixCol.getRatingForItemID2() - avgRatingForRandomItem)
                    * (matrixCol.getRatingForItemID2() - avgRatingForRandomItem);
}
numitor = Math.sqrt(numitorStanga * numitorDreapta);
double corelare = numarator / numitor;
I don't believe the actual values involved can make any difference.
It's worth at least trying to reduce the computations here:
for (MatrixColumn matrixCol : matrix) {
    numitorStanga += (matrixCol.getRatingForItemID1() - avgRatingForPreferredItem)
                   * (matrixCol.getRatingForItemID1() - avgRatingForPreferredItem);
    numitorDreapta += (matrixCol.getRatingForItemID2() - avgRatingForRandomItem)
                    * (matrixCol.getRatingForItemID2() - avgRatingForRandomItem);
}
It depends on how smart the JIT compiler is - and I'm assuming getRatingForItemID1 and getRatingForItemID2 are just pass-through properties - but your code at least looks like it's doing redundant subtractions. So:
for (MatrixColumn matrixCol : matrix) {
    double diff1 = matrixCol.getRatingForItemID1() - avgRatingForPreferredItem;
    double diff2 = matrixCol.getRatingForItemID2() - avgRatingForRandomItem;
    numitorStanga += diff1 * diff1;
    numitorDreapta += diff2 * diff2;
}
You could try changing everything to float instead of double - on some architectures that may make things faster; on others it may well not.
Are you absolutely sure that it's the code you've shown which has the problem, though? It's only an O(N) algorithm - how long is it taking, and how large is the matrix?
Floating-point calculations take the same time regardless of the number of decimal places; the hardware operates on the complete value every time anyway. Also keep in mind that the number of decimal places is irrelevant here: double stores numbers in binary, and truncating decimal places could well produce a binary representation of the same length.
Another way to make this faster is to use arrays instead of objects. The problem with using objects is that you have no idea how they are arranged in memory (often badly, in my experience, as the JVM doesn't optimise for this at all).
double avgRatingForPreferredItem = (double) tempAverageRating.get(matrix.get(0).getItemID1()) / matrix.size();
double avgRatingForRandomItem = (double) tempAverageRating.get(matrix.get(0).getItemID2()) / matrix.size();
double[] ratingForItemID1 = matrix.getRatingForItemID1();
double[] ratingForItemID2 = matrix.getRatingForItemID2();
double numarator = 0, numitorStanga = 0, numitorDreapta = 0;
for (int i = 0; i < ratingForItemID1.length; i++) {
    double rating1 = ratingForItemID1[i] - avgRatingForPreferredItem;
    double rating2 = ratingForItemID2[i] - avgRatingForRandomItem;
    numarator += rating1 * rating2;
    numitorStanga += rating1 * rating1;
    numitorDreapta += rating2 * rating2;
}
double numitor = Math.sqrt(numitorStanga * numitorDreapta);
double corelare = numarator / numitor;
Accessing data contiguously in memory can be 5x faster than random access.
You might be able to speed up your algorithm (depending on the value range used) by changing your floating point values into long values that are scaled according to the number of decimal places you need, i.e. value * 10000 for 4 decimal places.
If you choose to do this, you will need to keep the scale in mind for division and multiplication (numitorDreapta += (diff2 * diff2) / 10000;), which does add some clutter to your code.
You will need to convert before and after, but if you need to do a lot of calculations, using integer arithmetic instead of floating point might yield the speedup you are looking for.
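A hedged sketch of that fixed-point idea, with 4 decimal places (the names SCALE, toFixed, fromFixed and mulFixed are mine):
// Represent values as longs scaled by 10^4.
static final long SCALE = 10_000;

static long toFixed(double v)   { return Math.round(v * SCALE); }
static double fromFixed(long v) { return (double) v / SCALE; }

// A product of two scaled values carries SCALE twice, so divide once:
static long mulFixed(long a, long b) { return (a * b) / SCALE; }
Watch the value range: a long holds about 18 decimal digits, so products of large scaled values can overflow.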

How to round a number to within a certain range?

I have a value like this:
421.18834
And I have to round it mathematically correctly with a mask, which can look like this:
0.05
0.04
0.1
For example, if the mask is 0.04, I have to get the value 421.20, because .18834 is nearer to .20 than to .16.
None of the functions I found via Google worked.
Can you please help me?
double initial = 421.18834;
double range = 0.04;
long factor = Math.round(initial / range); // 10530 - will round to the correct value
double result = factor * range;            // 421.20
(Note that Math.round(double) returns a long, not an int.)
You don't need a special function. You multiply your original number by (1/mask), round to the nearest integer, and divide again by the same factor.
Example with 0.05:
factor = 1/0.05 = 20
421.18834 * 20 = 8423.7668
round( 8423.7668 ) = 8424
8424.0 / 20.0 = 421.20
Example with 0.1:
factor = 1/0.1 = 10
421.18834 * 10 = 4211.8834
round( 4211.8834 ) = 4212
4212.0 / 10.0 = 421.20
Contrary to all the answers you will probably get here about multiplying and dividing, you can't do this exactly with doubles, because binary floating point doesn't have decimal places. You need to convert to a decimal radix and then round. BigDecimal does that.
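A hedged BigDecimal sketch of that approach (divide by the mask, round to a whole multiple, multiply back):
import java.math.BigDecimal;
import java.math.RoundingMode;

BigDecimal value = new BigDecimal("421.18834");
BigDecimal mask  = new BigDecimal("0.04");
// Round value to the nearest whole multiple of mask, exactly in decimal.
BigDecimal result = value.divide(mask, 0, RoundingMode.HALF_UP).multiply(mask);
System.out.println(result); // 421.20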
Both fredley and Matteo assume that 1 is an exact multiple of the rounding factor. For factors like 0.06 or 0.07, that assumption does not hold.
Here's my Java routine:
public double rounded(double number, double factor) {
    long integer = (long) number;
    double fraction = number - integer;
    double multiple = Math.round(fraction / factor);
    return factor * multiple + integer;
}
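Hypothetical usage of the routine above (exact values shown; actual doubles will be within rounding error):
System.out.println(rounded(421.18834, 0.04)); // 421.20
System.out.println(rounded(421.18834, 0.06)); // 421.18 (0.18834 / 0.06 rounds to 3)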

Fast transcendent / trigonometric functions for Java

Since the trigonometric functions in java.lang.Math are quite slow: is there a library that does a quick and good approximation? It seems possible to do a calculation several times faster without losing much precision. (On my machine a multiplication takes 1.5 ns, and java.lang.Math.sin takes 46 ns to 116 ns.) Unfortunately there is not yet a way to use the hardware functions.
UPDATE: The functions should be accurate enough for, say, GPS calculations. That means you would need at least 7 decimal digits of accuracy, which rules out simple lookup tables. And it should be much faster than java.lang.Math.sin on a basic x86 system; otherwise there would be no point in it.
For values over pi/4, Java does some expensive computations in addition to the hardware functions. It does so for a good reason, but sometimes you care more about speed than about last-bit accuracy.
Computer Approximations by Hart. Tabulates Chebyshev-economized approximate formulas for a bunch of functions at different precisions.
Edit: Getting my copy off the shelf, it turned out to be a different book that just sounds very similar. Here's a sin function using its tables. (Tested in C since that's handier for me.) I don't know if this will be faster than the Java built-in, but it's guaranteed to be less accurate, at least. :) You may need to range-reduce the argument first; see John Cook's suggestions. The book also has arcsin and arctan.
#include <math.h>
#include <stdio.h>

// Return an approximation to sin(pi/2 * x) where -1 <= x <= 1.
// In that range it has a max absolute error of 5e-9
// according to Hastings, Approximations For Digital Computers.
static double xsin (double x) {
    double x2 = x * x;
    return ((((.00015148419 * x2
             - .00467376557) * x2
             + .07968967928) * x2
             - .64596371106) * x2
             + 1.57079631847) * x;
}

int main () {
    double pi = 4 * atan (1);
    printf ("%.10f\n", xsin (0.77));
    printf ("%.10f\n", sin (0.77 * (pi/2)));
    return 0;
}
Here is a collection of low-level tricks for quickly approximating trig functions. There is example code in C which I find hard to follow, but the techniques are just as easily implemented in Java.
Here's my equivalent implementation of invsqrt and atan2 in Java.
I could have done something similar for the other trig functions, but I have not found it necessary as profiling showed that only sqrt and atan/atan2 were major bottlenecks.
public class FastTrig {

    /** Fast approximation of 1.0 / sqrt(x).
     * See http://www.beyond3d.com/content/articles/8/
     * @param x Positive value to estimate inverse of square root of
     * @return Approximately 1.0 / sqrt(x)
     **/
    public static double invSqrt(double x) {
        double xhalf = 0.5 * x;
        long i = Double.doubleToRawLongBits(x);
        i = 0x5FE6EB50C7B537AAL - (i >> 1);
        x = Double.longBitsToDouble(i);
        x = x * (1.5 - xhalf * x * x);
        return x;
    }

    /** Approximation of arctangent.
     * Slightly faster and substantially less accurate than
     * {@link Math#atan2(double, double)}.
     **/
    public static double fast_atan2(double y, double x) {
        double d2 = x * x + y * y;

        // Bail out if d2 is NaN, zero or subnormal
        if (Double.isNaN(d2) ||
            (Double.doubleToRawLongBits(d2) < 0x10000000000000L)) {
            return Double.NaN;
        }

        // Normalise such that 0.0 <= y <= x
        boolean negY = y < 0.0;
        if (negY) { y = -y; }
        boolean negX = x < 0.0;
        if (negX) { x = -x; }
        boolean steep = y > x;
        if (steep) {
            double t = x;
            x = y;
            y = t;
        }

        // Scale to unit circle (0.0 <= y <= x <= 1.0)
        double rinv = invSqrt(d2); // rinv ≅ 1.0 / hypot(x, y)
        x *= rinv; // x ≅ cos θ
        y *= rinv; // y ≅ sin θ, hence θ ≅ asin y

        // Hack: we want: ind = floor(y * 256)
        // We deliberately force truncation by adding floating-point numbers whose
        // exponents differ greatly. The FPU will right-shift y to match exponents,
        // dropping all but the first 9 significant bits, which become the 9 LSBs
        // of the resulting mantissa.
        // Inspired by a similar piece of C code at
        // http://www.shellandslate.com/computermath101.html
        double yp = FRAC_BIAS + y;
        int ind = (int) Double.doubleToRawLongBits(yp);

        // Find φ (a first approximation of θ) from the LUT
        double φ = ASIN_TAB[ind];
        double cφ = COS_TAB[ind]; // cos(φ)

        // sin(φ) == ind / 256.0
        // Note that sφ is truncated, hence not identical to y.
        double sφ = yp - FRAC_BIAS;
        double sd = y * cφ - x * sφ; // sin(θ-φ) ≡ sinθ cosφ - cosθ sinφ

        // asin(sd) ≅ sd + ⅙sd³ (from first 2 terms of Maclaurin series)
        double d = (6.0 + sd * sd) * sd * ONE_SIXTH;
        double θ = φ + d;

        // Translate back to correct octant
        if (steep) { θ = Math.PI * 0.5 - θ; }
        if (negX) { θ = Math.PI - θ; }
        if (negY) { θ = -θ; }

        return θ;
    }

    private static final double ONE_SIXTH = 1.0 / 6.0;
    private static final int FRAC_EXP = 8; // LUT precision == 2 ** -8 == 1/256
    private static final int LUT_SIZE = (1 << FRAC_EXP) + 1;
    private static final double FRAC_BIAS =
        Double.longBitsToDouble((0x433L - FRAC_EXP) << 52);
    private static final double[] ASIN_TAB = new double[LUT_SIZE];
    private static final double[] COS_TAB = new double[LUT_SIZE];

    static {
        /* Populate trig tables */
        for (int ind = 0; ind < LUT_SIZE; ++ind) {
            double v = ind / (double) (1 << FRAC_EXP);
            double asinv = Math.asin(v);
            COS_TAB[ind] = Math.cos(asinv);
            ASIN_TAB[ind] = asinv;
        }
    }
}
This might do it: http://sourceforge.net/projects/jafama/
I'm surprised that the built-in Java functions would be so slow. Surely the JVM is calling the native trig functions on your CPU, not implementing the algorithms in Java. Are you certain your bottleneck is calls to trig functions and not some surrounding code? Maybe some memory allocations?
Could you rewrite in C++ the part of your code that does the math? Just calling C++ code to compute trig functions probably wouldn't speed things up, but moving some context too, like an outer loop, to C++ might speed things up.
If you must roll your own trig functions, don't use Taylor series alone. The CORDIC algorithms are much faster unless your argument is very small. You could use CORDIC to get started, then polish the result with a short Taylor series. See this StackOverflow question on how to implement trig functions.
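For illustration only, a hedged CORDIC sketch in rotation mode (floating point for clarity; a real implementation would use fixed-point integers and proper range reduction):
// CORDIC rotation mode: sin/cos for |angle| <= ~1.74 radians.
static final int ITERS = 30;
static final double[] ATAN = new double[ITERS];
static final double GAIN; // compensates the cumulative rotation scaling
static {
    double k = 1.0;
    for (int i = 0; i < ITERS; i++) {
        ATAN[i] = Math.atan(Math.scalb(1.0, -i)); // atan(2^-i)
        k *= Math.cos(ATAN[i]);
    }
    GAIN = k;
}

// Returns { sin(angle), cos(angle) }; each step uses only adds and halvings.
static double[] sinCos(double angle) {
    double x = GAIN, y = 0.0, z = angle, pow2 = 1.0;
    for (int i = 0; i < ITERS; i++) {
        double d = (z >= 0) ? 1.0 : -1.0;
        double nx = x - d * y * pow2;
        double ny = y + d * x * pow2;
        x = nx;
        y = ny;
        z -= d * ATAN[i];
        pow2 *= 0.5;
    }
    return new double[] { y, x };
}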
On x86, the java.lang.Math sin and cos functions do not directly call the hardware instructions, because Intel didn't always do such a good job implementing them. There is a nice explanation in bug #4857011:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4857011
You might want to think hard before accepting an inexact result. It's amusing how often I spend time finding this in others' code:
"But the comment says Sin..."
You could pre-store your sin and cos values in an array if you only need approximate values.
For example, to store the values for 0° to 359°:
double[] sin = new double[360];
for (int i = 0; i < sin.length; ++i) {
    sin[i] = Math.sin(i / 180.0 * Math.PI);
}
You then index this array with degrees as integers instead of radians as doubles.
I haven't heard of any libs, probably because it's rare enough to see trig-heavy Java apps. It's also easy enough to roll your own with JNI (same precision, better performance), numerical methods (variable precision/performance), or a simple approximation table.
As with any optimization, best to test that these functions are actually a bottleneck before bothering to reinvent the wheel.
Trigonometric functions are the classical example for a lookup table. See the excellent Lookup table article at Wikipedia.
If you're searching a library for J2ME you can try:
the Fixed Point Integer Math Library MathFP
The java.lang.Math functions call the hardware functions. There should be simple approximations you can make, but they won't be as accurate.
On my laptop, sin and cos take about 144 ns each.
In my sin/cos test I evaluated the integers from zero to one million. I assume that 144 ns is not fast enough for you.
Do you have a specific requirement for the speed you need?
Can you quantify your requirement in terms of a satisfactory time per operation?
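A hedged sketch of that kind of micro-benchmark (no warm-up or JMH here, so treat the numbers as rough):
long start = System.nanoTime();
double sum = 0;
for (int i = 0; i < 1_000_000; i++) {
    sum += Math.sin(i);
}
long elapsed = System.nanoTime() - start;
// elapsed / 1e6 is the average ns per call over one million calls;
// printing sum keeps the JIT from eliminating the loop as dead code.
System.out.printf("%.1f ns per Math.sin call (sum=%f)%n", elapsed / 1e6, sum);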
Check out the Apache Commons Math package if you want to use existing stuff.
If performance is really of the essence, then you can implement these functions yourself using standard math methods - Taylor/Maclaurin series, specifically.
For example, the classic Maclaurin expansions (from Wikipedia) are:
sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...
Could you elaborate on what you need to do if these routines are too slow? You might be able to do some coordinate transformations ahead of time in one way or another.