I'm currently working in Java for Android. I'm trying to implement an FFT in order to build a kind of frequency viewer.
Actually I was able to do it, but the display is not fluid at all.
I added some traces in order to check the processing time of each part of my code, and it turns out the FFT takes about 300ms to be applied to my complex array, which has 4096 elements. And I need it to take less than 100ms, as my thread (that displays the frequencies) is refreshed every 100ms. I reduced the initial array so that the FFT result has only 1024 elements, and then it is fast enough, but the result is much less accurate.
Does someone have an idea?
I used the default FFT.java and Complex.java classes that can be found on the internet.
For information, my code computing the FFT is the following:
int bytesPerSample = 2;
Complex[] x = new Complex[bufferSize / 2];
for (int index = 0; index < bufferReadResult - bytesPerSample + 1; index += bytesPerSample)
{
    // 16 BITS = 2 BYTES: assemble one little-endian 16-bit sample from two bytes
    double sample = 0;
    for (int b = 0; b < bytesPerSample; b++) {
        int v = buffer[index + b];
        if (b < bytesPerSample - 1 || bytesPerSample == 1) {
            v &= 0xFF; // mask every byte except the most significant one
        }
        sample += v << (b * 8);
    }
    double sample32 = 100 * (sample / 32768.0); // don't know the purpose of this computation...
    x[index / bytesPerSample] = new Complex(sample32, 0);
}
Complex[] tx = new Complex[1024];
// reduce the size of the signal in order to improve the FFT processing time
for (int i = 0; i < x.length / 4; i++)
{
    tx[i] = new Complex(x[i * 4].re(), 0);
}
// Signal retrieval thanks to the FFT
fftRes = FFT.fft(tx);
I don't know Java, but your way of converting between your input data and an array of complex values seems very convoluted. You're building two arrays of complex data where only one is necessary.
Also it smells like your complex real and imaginary values are doubles. That's way over the top for what you need, and ARMs are very slow at double arithmetic anyway. Is there a complex class based on single-precision floats?
Thirdly, you're performing a complex FFT on real data by filling the imaginary part of your complex values with zero. Whilst the result will be correct, it is twice as much work straight off (unless the routine is clever enough to spot that, which I doubt). If possible, perform a real FFT on your data and save half your time.
And then as Simon says there's the whole issue of avoiding garbage collection and memory allocation.
Also it looks like your FFT has no preparatory step. This means that the routine FFT.fft() is calculating the complex exponentials every time. The longest part of the FFT calculation is working out the complex exponentials, which is a shame because for any given FFT length the exponentials are constants; they don't depend on your input data at all. In the real-time world we use FFT routines where we calculate the exponentials once at the start of the program and then the actual FFT itself takes that const array as one of its inputs. I don't know if your FFT class can do something similar.
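For illustration, here is a minimal sketch of that idea, adapted to the float[][] version posted further down. It precomputes one table of cos/sin values for the largest FFT size and reuses it every frame; the constant N_MAX and the idea of indexing the table with a stride are my own assumptions, not something the stock FFT.java provides:

public class FftTables {
    // Assumed maximum FFT size; pick the largest length you will ever transform.
    static final int N_MAX = 4096;
    static final float[] COS = new float[N_MAX / 2];
    static final float[] SIN = new float[N_MAX / 2];

    static {
        // Runs once at class load, well before the first audio frame.
        for (int j = 0; j < N_MAX / 2; j++) {
            double kth = -2 * Math.PI * j / N_MAX;
            COS[j] = (float) Math.cos(kth);
            SIN[j] = (float) Math.sin(kth);
        }
    }
}

// In the combine loop of a sub-FFT of size N, the twiddle factor exp(-2*pi*i*k/N)
// is the table entry at index k * (N_MAX / N), so the per-frame cos/sin calls go away:
//     int stride = FftTables.N_MAX / N;
//     float wkcos = FftTables.COS[k * stride];
//     float wksin = FftTables.SIN[k * stride];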
If you do end up going to something like FFTW then you're going to have to get used to calling C code from your Java. Also make sure you get a version that supports (I think) NEON, ARM's answer to SSE, AVX and Altivec. It's worth ploughing through their release notes to check. Also I strongly suspect that FFTW will only be able to offer a significant speed up if you ask it to perform an FFT on single precision floats, not doubles.
Google luck!
--Edit--
I meant of course 'good luck'. Give me a real keyboard quick, these touchscreen ones are unreliable...
First, thanks for all your answers.
I followed them and ran two tests:
first, I replaced the double used in my Complex class by float. The result is just a bit better, but not enough.
then I rewrote the fft method so that it no longer uses Complex, but a two-dimensional float array instead. For each row of this array, the first column contains the real part and the second one the imaginary part.
I also changed my code so that the float array is instantiated only once, in the onCreate method.
And the result... is worse!! Now it takes a little more than 500ms instead of 300ms.
I don't know what to do now.
You can find below the initial fft function, and then the one I've rewritten.
Thanks for your help.
// compute the FFT of x[], assuming its length is a power of 2
public static Complex[] fft(Complex[] x) {
    int N = x.length;

    // base case
    if (N == 1) return new Complex[] { x[0] };

    // radix 2 Cooley-Tukey FFT
    if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2 : " + N); }

    // fft of even terms
    Complex[] even = new Complex[N/2];
    for (int k = 0; k < N/2; k++) {
        even[k] = x[2*k];
    }
    Complex[] q = fft(even);

    // fft of odd terms
    Complex[] odd = even; // reuse the array
    for (int k = 0; k < N/2; k++) {
        odd[k] = x[2*k + 1];
    }
    Complex[] r = fft(odd);

    // combine
    Complex[] y = new Complex[N];
    for (int k = 0; k < N/2; k++) {
        double kth = -2 * k * Math.PI / N;
        Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
        y[k] = q[k].plus(wk.times(r[k]));
        y[k + N/2] = q[k].minus(wk.times(r[k]));
    }
    return y;
}
public static float[][] fftf(float[][] x) {
    /**
     * x[][0] = real part
     * x[][1] = imaginary part
     */
    int N = x.length;

    // base case
    if (N == 1) return new float[][] { x[0] };

    // radix 2 Cooley-Tukey FFT
    if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2 : " + N); }

    // fft of even terms
    float[][] even = new float[N/2][2];
    for (int k = 0; k < N/2; k++) {
        even[k] = x[2*k];
    }
    float[][] q = fftf(even);

    // fft of odd terms
    float[][] odd = even; // reuse the array
    for (int k = 0; k < N/2; k++) {
        odd[k] = x[2*k + 1];
    }
    float[][] r = fftf(odd);

    // combine
    float[][] y = new float[N][2];
    double kth, wkcos, wksin;
    for (int k = 0; k < N/2; k++) {
        kth = -2 * k * Math.PI / N;
        // Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
        wkcos = Math.cos(kth); // real part
        wksin = Math.sin(kth); // imaginary part
        // y[k] = q[k].plus(wk.times(r[k]));
        y[k][0] = (float) (q[k][0] + wkcos * r[k][0] - wksin * r[k][1]);
        y[k][1] = (float) (q[k][1] + wkcos * r[k][1] + wksin * r[k][0]);
        // y[k + N/2] = q[k].minus(wk.times(r[k]));
        y[k + N/2][0] = (float) (q[k][0] - (wkcos * r[k][0] - wksin * r[k][1]));
        y[k + N/2][1] = (float) (q[k][1] - (wkcos * r[k][1] + wksin * r[k][0]));
    }
    return y;
}
actually I think I don't understand everything.
First, about Math.cos and Math.sin: how do you want me not to compute them each time? Do you mean that I should compute all the values only once (e.g. store them in an array) and reuse them for each computation?
Second, about the N % 2 check: indeed it's not very useful, I could do the test before calling the function.
Third, about Simon's advice: I mixed what he said and what you said, that's why I replaced the Complex by a two-dimensional float[][]. If that was not what he suggested, then what was it?
Lastly, I'm not an FFT expert, so what do you mean by a "real FFT"? Do you mean that my imaginary part is useless? If so, I'm not sure, because later in my code I compute the magnitude of each frequency, i.e. sqrt(real[i]*real[i] + imag[i]*imag[i]). And I think that my imaginary part is not equal to zero...
thanks!
I've seen a lot of discussion on SO related to rounding float values, but no solid Q&A considering the efficiency aspect. So here it is:
What is the most efficient (but correct) way to round a float value to the nearest integer?
(int) (mFloat + 0.5);
or
Math.round(mFloat);
or
FloatMath.floor(mFloat + 0.5);
or something else?
Preferably I would like to use something available in standard java libraries, not some external library that I have to import.
Based on the Q&A's that I think you are referring to, the relative efficiency of the various methods depends on the platform you are using.
But the bottom line is that:
the latest JREs have the performance fix for Math.floor / StrictMath.floor, and
unless you are doing an awful lot of rounding, it probably doesn't make any difference which way you do it.
References:
Does this mean that Java Math.floor is extremely slow?
http://bugs.sun.com/view_bug.do?bug_id=6908131 ("Pure Java implementations of java.lang.StrictMath.floor(double) & java.lang.StrictMath.ceil(double)")
public class Main {

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 10; i++) {
            measurementIteration();
        }
    }

    public static void measurementIteration() {
        long s, t1 = 0, t2 = 0;
        float mFloat = 3.3f;
        int f, n1 = 0, n2 = 0;
        for (int i = 0; i < 1E4; i++) {
            switch ((int) (Math.random() * 2)) {
            case 0:
                n1 += 1E4;
                s = System.currentTimeMillis();
                for (int k = 0; k < 1E4; k++)
                    f = (int) (mFloat + 0.5);
                t1 += System.currentTimeMillis() - s;
                break;
            case 1:
                n2 += 1E4;
                s = System.currentTimeMillis();
                for (int k = 0; k < 1E4; k++)
                    f = Math.round(mFloat);
                t2 += System.currentTimeMillis() - s;
                break;
            }
        }
        System.out.println(String.format("(int) (mFloat + 0.5): n1 = %d -> %.3fms/1000", n1, t1 * 1000.0 / n1));
        System.out.println(String.format("Math.round(mFloat) : n2 = %d -> %.3fms/1000", n2, t2 * 1000.0 / n2));
    }
}
Output on Java SE6:
(int) (mFloat + 0.5): n1 = 500410000 -> 0.003ms/1000
Math.round(mFloat) : n2 = 499590000 -> 0.022ms/1000
Output on Java SE7 (thanks to alex for the results):
(int) (mFloat + 0.5): n1 = 50120000 -> 0,002ms/1000
Math.round(mFloat) : n2 = 49880000 -> 0,002ms/1000
As you can see, there was a huge performance improvement on Math.round from SE6 to SE7. I think in SE7 there is no significant difference anymore and you should choose whatever seems more readable to you.
I would go for Math.round(mFloat), because it encapsulates the rounding logic in a method (even if it's not your method).
According to its documentation, the code you've written does the same thing Math.round executes (except that Math.round also checks border cases).
Anyway, what matters more is the time complexity of your algorithm, not the time for small constant-like things... unless you are writing something that will be invoked millions of times! :D
Edit: I don't know FloatMath. Is it from the JDK?
You may benchmark it using System.currentTimeMillis(). You will see that the difference is very small.
Simply adding 0.5 will give an incorrect result for negative numbers. See Faster implementation of Math.round? for a better solution.
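A small illustration of that caveat (a sketch with arbitrary values; note that Math.round(float) returns an int in Java):

public class RoundingDemo {
    public static void main(String[] args) {
        float g = -2.7f;
        int viaCast = (int) (g + 0.5);            // -2.2 truncated toward zero -> -2 (wrong, nearest is -3)
        int viaRound = Math.round(g);             // -3 (correct)
        int viaFloor = (int) Math.floor(g + 0.5); // -3 (correct)
        System.out.println(viaCast + " " + viaRound + " " + viaFloor);
    }
}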
I'm looking to calculate mutual information between two features, using Java.
I've read Calculating Mutual Information For Selecting a Training Set in Java already, but that was a discussion of if mutual information was appropriate for the poster, with only some light pseudo-code as to the implementation.
My current code is below, but I'm hoping there is a way to optimise it, as I have large quantities of information to process. I'm aware that calling out to another language/framework may improve speed, but would like to focus on solving this in Java for now.
Any help much appreciated.
public static double calculateNewMutualInformation(double frequencyOfBoth, double frequencyOfLeft,
                                                   double frequencyOfRight, int noOfTransactions) {
    if (frequencyOfBoth == 0 || frequencyOfLeft == 0 || frequencyOfRight == 0)
        return 0;

    // supp = f11
    double supp = frequencyOfBoth / noOfTransactions;       // P(x,y)
    double suppLeft = frequencyOfLeft / noOfTransactions;   // P(x)
    double suppRight = frequencyOfRight / noOfTransactions; // P(y)

    double f10 = (suppLeft - supp);     // P(x) - P(x,y)
    double f00 = (1 - suppRight) - f10; // 1 - P(y) - (P(x) - P(x,y)) = P(not x, not y)
    double f01 = (suppRight - supp);    // P(y) - P(x,y)

    // -1 * ((P(x) * log(P(x))) + ((1 - P(x)) * log(1 - P(x))))
    double HX = -1 * ((suppLeft * MathUtils.logWithoutNaN(suppLeft)) + ((1 - suppLeft) * MathUtils.logWithoutNaN(1 - suppLeft)));
    // -1 * ((P(y) * log(P(y))) + ((1 - P(y)) * log(1 - P(y))))
    double HY = -1 * ((suppRight * MathUtils.logWithoutNaN(suppRight)) + ((1 - suppRight) * MathUtils.logWithoutNaN(1 - suppRight)));

    double one = (supp * MathUtils.logWithoutNaN(supp)); // P(x,y) * log(P(x,y))
    double two = (f10 * MathUtils.logWithoutNaN(f10));
    double three = (f01 * MathUtils.logWithoutNaN(f01));
    double four = (f00 * MathUtils.logWithoutNaN(f00));
    double HXY = -1 * (one + two + three + four);

    return (HX + HY - HXY) / (HX == 0 ? MathUtils.EPSILON : HX);
}
public class MathUtils {

    public static final double EPSILON = 0.000001;

    public static double logWithoutNaN(double value) {
        if (value == 0) {
            return Math.log(EPSILON);
        } else if (value < 0) {
            return 0;
        }
        return Math.log(value);
    }
}
I have found the following to be fast, but I have not compared it against your method, only against the one provided in WEKA.
It works on the premise of re-arranging the MI equation so that it is possible to minimise the number of floating point operations:
We start by defining as count/frequency over number of samples/transactions. So, we define the number of items as n, the number of times x occurs as |x|, the number of times y occurs as |y| and the number of times they co-occur as |x,y|. We then get,
I(X;Y) = Σ_x Σ_y (|x,y|/n) · log( (|x,y|/n) / ((|x|/n)·(|y|/n)) )
Now, we can re-arrange that by flipping the bottom of the inner divide, which gives us (n|x,y|)/(|x||y|). Also, we use N = 1/n so we have one less divide operation. This gives us:
I(X;Y) = Σ_x Σ_y N·|x,y| · log( (n·|x,y|) / (|x|·|y|) )
This gives us the following code:
/***
 * Computes MI between variables t and a. Assumes that a.length == t.length.
 * @param a candidate variable a
 * @param avals number of values a can take (max(a) == avals)
 * @param t target variable
 * @param tvals number of values t can take (max(t) == tvals)
 * @return the mutual information between a and t
 */
static double computeMI(int[] a, int avals, int[] t, int tvals) {
    double numinst = a.length;
    double oneovernuminst = 1 / numinst;
    double sum = 0;
    // log2 was not defined in the snippet as posted; assuming it converts the natural log to base 2 (bits)
    final double log2 = 1.0 / Math.log(2);

    // longs are required here because of the big products in the calculation
    long[][] crosscounts = new long[avals][tvals];
    long[] tcounts = new long[tvals];
    long[] acounts = new long[avals];

    // Compute counts for the two variables
    for (int i = 0; i < a.length; i++) {
        int av = a[i];
        int tv = t[i];
        acounts[av]++;
        tcounts[tv]++;
        crosscounts[av][tv]++;
    }

    for (int tv = 0; tv < tvals; tv++) {
        for (int av = 0; av < avals; av++) {
            if (crosscounts[av][tv] != 0) {
                // Main fraction: (n|x,y|)/(|x||y|)
                double sumtmp = (numinst * crosscounts[av][tv]) / (acounts[av] * tcounts[tv]);
                // Weight (|x,y|/n) times the log of the fraction, accumulated into the sum
                sum += oneovernuminst * crosscounts[av][tv] * Math.log(sumtmp) * log2;
            }
        }
    }
    return sum;
}
This code assumes that the values of a and t are not sparse (i.e. min(t)=0 and tvals=max(t)) for it to be efficient. Otherwise (as commented) large and unnecessary arrays are created.
I believe this approach improves further when computing MI between several variables at once (the count operations can be condensed - especially that of the target). The implementation I use is one that interfaces with WEKA.
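As a hedged sketch of that idea (this is not the WEKA-interfacing implementation mentioned above, and the method name and signature are my own), the target counts can be computed once and shared across all candidate attributes:

static double[] computeMIMany(int[][] attrs, int[] avals, int[] t, int tvals) {
    int n = t.length;
    double oneovern = 1.0 / n;

    // Target counts are computed once and reused for every attribute.
    long[] tcounts = new long[tvals];
    for (int i = 0; i < n; i++) tcounts[t[i]]++;

    double[] mi = new double[attrs.length];
    for (int j = 0; j < attrs.length; j++) {
        int[] a = attrs[j];
        long[] acounts = new long[avals[j]];
        long[][] cross = new long[avals[j]][tvals];
        for (int i = 0; i < n; i++) {
            acounts[a[i]]++;
            cross[a[i]][t[i]]++;
        }
        double sum = 0;
        for (int av = 0; av < avals[j]; av++) {
            for (int tv = 0; tv < tvals; tv++) {
                if (cross[av][tv] != 0) {
                    double ratio = ((double) n * cross[av][tv]) / ((double) acounts[av] * tcounts[tv]);
                    sum += oneovern * cross[av][tv] * Math.log(ratio); // natural log; divide by Math.log(2) for bits
                }
            }
        }
        mi[j] = sum;
    }
    return mi;
}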
Finally, it might be more efficient still to take the log out of the summations, although I am unsure whether log or power will take more computation within the loop. This is done by:
Apply a·log(b) = log(b^a)
Move the log outside the summations, using log(a) + log(b) = log(ab)
and gives:
I(X;Y) = N · log( Π_x Π_y ( (n·|x,y|) / (|x|·|y|) )^|x,y| )
I am not a mathematician, but...
there are just a bunch of floating point calculations here. Some mathemagician might be able to reduce this to fewer calculations; try the Math SE.
Meanwhile, you should be able to use a static final double for Math.log(EPSILON), as sketched below.
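A minimal sketch of that suggestion, applied to the MathUtils class from the question, so Math.log(EPSILON) is evaluated once instead of on every call:

public class MathUtils {
    public static final double EPSILON = 0.000001;
    private static final double LOG_EPSILON = Math.log(EPSILON); // computed once at class load

    public static double logWithoutNaN(double value) {
        if (value == 0) {
            return LOG_EPSILON;
        } else if (value < 0) {
            return 0;
        }
        return Math.log(value);
    }
}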
Your problem might not be a single call but the volume of data for which this calculation has to be done. That problem is better solved by throwing more hardware at it.
I am looking to implement the simple equation:
i, j = (-Q ± √(Q² - 4PR)) / 2P
To do so I have the following code (note: P = 10. Q = 7. R = 10):
// Q*Q - 4PR = -351, and -351 mod 11 = -10 mod 11 = 1, √1 = 1
double test = Math.sqrt(modulo(((Q*Q) - ((4*P)*R))));
// Works, but why is *-10 needed?
i = (int)(((-Q+test)/(P*2))*-10); // i = 3
j = (int)(((-Q-test)/(P*2))*-10); // j = 4
To put it simply, test takes the first part of the equation and mods it to a non-zero integer between 0 and 11, then i and j are computed. They come out as the right numbers, but for some reason *-10 is needed to get them right (a factor I guessed to get the correct values).
If possible, I'd like to find a better way of performing the above equation because my way of doing it seems wrong and just works. I'd like to do it as the equation suggests, rather than hack it to work.
The quadratic equation is more usually expressed in terms of a, b and c. To satisfy ax² + bx + c = 0, you get x = (-b ± √(b² - 4ac)) / 2a as the answers.
I think your basic problem is that you're using modulo for some reason instead of just taking the square root. The factor of -10 is a fudge factor which happens to work for your test case.
You should have something like this:
public static void findRoots(double a, double b, double c)
{
    if (b * b < 4 * a * c)
    {
        throw new IllegalArgumentException("Equation has no roots");
    }
    double tmp = Math.sqrt(b * b - 4 * a * c);
    double firstRoot = (-b + tmp) / (2 * a);
    double secondRoot = (-b - tmp) / (2 * a);
    System.out.println("Roots: " + firstRoot + ", " + secondRoot);
}
EDIT: Your modulo method is currently going to recurse pretty chronically. Try this instead:
public static int modulo(int x)
{
return ((x % 11) + 11) % 11;
}
Basically the result of the first % 11 will be in the range [-10, 10] - so after adding another 11 and taking % 11 again, it'll be correct. No need to recurse.
At that point there's not much reason to have it as a separate method, so you can use:
public static void findRoots(double a, double b, double c)
{
    int squareMod11 = (int) ((((b * b - 4 * a * c) % 11) + 11) % 11);
    double tmp = Math.sqrt(squareMod11);
    double firstRoot = (-b + tmp) / (2 * a);
    double secondRoot = (-b - tmp) / (2 * a);
    System.out.println("Roots: " + firstRoot + ", " + secondRoot);
}
You need to take the square root. Note that Q^2-4PR yields a negative number, and consequently you're going to have to handle complex numbers (or restrict input to avoid this scenario). Apache Math may help you here.
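If you do not want to pull in a library, a minimal sketch of handling the negative discriminant directly (plain Java, names of my own choosing) could look like this:

public static void printRoots(double a, double b, double c) {
    double disc = b * b - 4 * a * c;
    if (disc >= 0) {
        double sq = Math.sqrt(disc);
        System.out.println("Real roots: " + (-b + sq) / (2 * a) + ", " + (-b - sq) / (2 * a));
    } else {
        // complex conjugate pair: re ± im*i
        double re = -b / (2 * a);
        double im = Math.sqrt(-disc) / (2 * a);
        System.out.println("Complex roots: " + re + " + " + im + "i and " + re + " - " + im + "i");
    }
}

With P = 10, Q = 7, R = 10 the discriminant is 49 - 400 = -351, so this prints roots of roughly -0.35 ± 0.94i.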
Use Math.sqrt for the square root. Why do you cast i and j to ints? This is the equation giving you the roots of a quadratic function, so i and j can be complex numbers. You should limit the discriminant to non-negative values for real (double) roots; otherwise you need complex numbers.
double test = Q*Q - 4*P*R;
if (test < 0) throw new Exception("negative discriminant!");
else {
    test = Math.sqrt(test);
    double i = (-Q + test) / (2*P);
    double j = (-Q - test) / (2*P);
}
Why are you doing modulo and not square root? Your code seems to be the way to get the roots of a quadratic equation ((-b ± sqrt(b²-4ac))/2a), so the code should be:
double delta = Q*Q - 4*P*R;
if (delta < 0.0) {
    throw new Exception("no roots");
}
double d = Math.sqrt(delta);
double r1 = (-Q + d) / (2*P);
double r2 = (-Q - d) / (2*P);
As pointed out by others, your use of mod isn't even wrong. Why are you making up mathematics like this?
It's well known that the naive solution to the quadratic equation can have problems when 4ac is small compared with b²: then √(b² - 4ac) is very nearly equal to |b|, and one of the roots is computed by subtracting two nearly equal numbers, losing precision.
A better way to do it is suggested in section 5.6 of "Numerical Recipes in C++": if we define
q = -½ · (b + sgn(b)·√(b² - 4ac))
Then the two roots are:
x₁ = q / a
and
x₂ = c / q
Your code also needs to account for pathological cases (e.g., a = 0).
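A minimal sketch of that formulation in Java (real-root case only; the a = 0 and negative-discriminant cases are handled crudely here, and the method name is my own):

public static double[] stableRoots(double a, double b, double c) {
    if (a == 0) {
        if (b == 0) throw new IllegalArgumentException("not an equation in x");
        return new double[] { -c / b };            // degenerate linear case
    }
    double disc = b * b - 4 * a * c;
    if (disc < 0) throw new IllegalArgumentException("complex roots");
    double sgnB = (b >= 0) ? 1.0 : -1.0;
    double q = -0.5 * (b + sgnB * Math.sqrt(disc));
    if (q == 0) return new double[] { 0, 0 };      // b == 0 and c == 0
    return new double[] { q / a, c / q };          // avoids subtracting nearly equal numbers
}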
Let's substitute your values into these formulas and see what we get. If a = 10, b = 7, and c = 10, then:
b² - 4ac = 49 - 400 = -351
The discriminant is negative, so the two roots are complex:
x₁ ≈ -0.35 + 0.9367i
and
x₂ ≈ -0.35 - 0.9367i
I think I have the signs right.
If your calculation is giving you trouble, it's likely due to the fact that you have complex roots that your method can't take into account properly. You'll need a complex number class.
I'm thinking in particular of how to display pagination controls, when using a language such as C# or Java.
If I have x items which I want to display in chunks of y per page, how many pages will be needed?
Found an elegant solution:
int pageCount = (records + recordsPerPage - 1) / recordsPerPage;
Source: Number Conversion, Roland Backhouse, 2001
Converting to floating point and back seems like a huge waste of time at the CPU level.
Ian Nelson's solution:
int pageCount = (records + recordsPerPage - 1) / recordsPerPage;
Can be simplified to:
int pageCount = (records - 1) / recordsPerPage + 1;
AFAICS, this doesn't have the overflow bug that Brandon DuRette pointed out, and because it only uses recordsPerPage once, you don't need to store it specially if it comes from an expensive function that fetches the value from a config file or something.
I.e. this might be inefficient, if config.fetch_value used a database lookup or something:
int pageCount = (records + config.fetch_value('records per page') - 1) / config.fetch_value('records per page');
This creates a variable you don't really need, which probably has (minor) memory implications and is just too much typing:
int recordsPerPage = config.fetch_value('records per page')
int pageCount = (records + recordsPerPage - 1) / recordsPerPage;
This is all one line, and only fetches the data once:
int pageCount = (records - 1) / config.fetch_value('records per page') + 1;
For C# the solution is to cast the values to a double (as Math.Ceiling takes a double):
int nPages = (int)Math.Ceiling((double)nItems / (double)nItemsPerPage);
In java you should do the same with Math.ceil().
This should give you what you want. You definitely want x items divided by y items per page; the problem is when uneven numbers come up, so if there is a partial page we also want to add one page.
int x = number_of_items;
int y = items_per_page;
// without a library
int pages = x/y + (x % y > 0 ? 1 : 0);
// with the library
int pages = (int)Math.Ceiling((double)x / (double)y);
The integer math solution that Ian provided is nice, but suffers from an integer overflow bug. Assuming the variables are all int, the solution could be rewritten to use long math and avoid the bug:
int pageCount = (int)((-1L + records + recordsPerPage) / recordsPerPage);
If records is a long, the bug remains. The modulus solution does not have the bug.
In need of an extension method:
public static int DivideUp(this int dividend, int divisor)
{
return (dividend + (divisor - 1)) / divisor;
}
No checks here (overflow, DivideByZero, etc), feel free to add if you like. By the way, for those worried about method invocation overhead, simple functions like this might be inlined by the compiler anyways, so I don't think that's where to be concerned. Cheers.
P.S. you might find it useful to be aware of this as well (it gets the remainder):
int remainder;
int result = Math.DivRem(dividend, divisor, out remainder);
How to round up the result of integer division in C#
I was interested to know what the best way is to do this in C# since I need to do this in a loop up to nearly 100k times. Solutions posted by others using Math are ranked high in the answers, but in testing I found them slow. Jarod Elliott proposed a better tactic in checking if mod produces anything.
int result = (int1 / int2);
if (int1 % int2 != 0) { result++; }
I ran this in a loop 1 million times and it took 8ms. Here is the code using Math:
int result = (int)Math.Ceiling((double)int1 / (double)int2);
Which ran at 14ms in my testing, considerably longer.
A variant of Nick Berardi's answer that avoids a branch:
int q = records / recordsPerPage, r = records % recordsPerPage;
int pageCount = q - (-r >> (Integer.SIZE - 1));
Note: (-r >> (Integer.SIZE - 1)) consists of the sign bit of r, repeated 32 times (thanks to sign extension of the >> operator.) This evaluates to 0 if r is zero or negative, -1 if r is positive. So subtracting it from q has the effect of adding 1 if records % recordsPerPage > 0.
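For example (values chosen arbitrarily, just to show the trick at work):

int records = 23, recordsPerPage = 10;
int q = records / recordsPerPage, r = records % recordsPerPage; // q = 2, r = 3
int pageCount = q - (-r >> (Integer.SIZE - 1));                 // -3 >> 31 == -1, so pageCount = 3
// With records = 20: q = 2, r = 0, and -0 >> 31 == 0, so pageCount stays 2.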
Another alternative is to use the mod() function (or '%'). If there is a non-zero remainder then increment the integer result of the division.
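A minimal sketch of this alternative in Java (variable names borrowed from the other answers):

int pageCount = records / recordsPerPage;
if (records % recordsPerPage != 0) {
    pageCount++;
}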
For records == 0, rjmunro's solution gives 1. The correct solution is 0. That said, if you know that records > 0 (and I'm sure we've all assumed recordsPerPage > 0), then rjmunro solution gives correct results and does not have any of the overflow issues.
int pageCount = 0;
if (records > 0)
{
pageCount = (((records - 1) / recordsPerPage) + 1);
}
// no else required
All the integer math solutions are going to be more efficient than any of the floating point solutions.
I do the following; it handles any overflows:
var totalPages = totalResults.IsDivisble(recordsperpage) ? totalResults/(recordsperpage) : totalResults/(recordsperpage) + 1;
And I use this extension, which also covers the case of 0 results:
public static bool IsDivisble(this int x, int n)
{
return (x%n) == 0;
}
Also, for the current page number (wasn't asked but could be useful):
var currentPage = (int) Math.Ceiling(recordsperpage/(double) recordsperpage) + 1;
you can use
(int)Math.Ceiling(((decimal)model.RecordCount )/ ((decimal)4));
Alternative to remove branching in testing for zero:
int pageCount = (records + recordsPerPage - 1) / recordsPerPage * (records != 0);
Not sure if this will work in C#, should do in C/C++.
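In Java or C# the comparison has to be mapped to an integer explicitly, so a rough equivalent (an untested sketch) would be:

int pageCount = (records + recordsPerPage - 1) / recordsPerPage * (records != 0 ? 1 : 0);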
I made this for myself, thanks to the replies from Jarod Elliott & SendETHToThisAddress.
public static int RoundedUpDivisionBy(this int @this, int divider)
{
    var result = @this / divider;
    if (@this % divider is 0) return result;
    return result + Math.Sign(@this * divider);
}
Then I realized it is overkill for the CPU compared to the top answer.
However, I think it's readable and works with negative numbers as well.
A generic method, whose result you can iterate over may be of interest:
public static Object[][] chunk(Object[] src, int chunkSize) {
    int overflow = src.length % chunkSize;
    int numChunks = (src.length / chunkSize) + (overflow > 0 ? 1 : 0);
    Object[][] dest = new Object[numChunks][];
    for (int i = 0; i < numChunks; i++) {
        dest[i] = new Object[(i < numChunks - 1 || overflow == 0) ? chunkSize : overflow];
        System.arraycopy(src, i * chunkSize, dest[i], 0, dest[i].length);
    }
    return dest;
}
The following should do rounding better than the above solutions, but at the expense of performance (due to floating point calculation of 0.5*rctDenominator):
uint64_t integerDivide( const uint64_t& rctNumerator, const uint64_t& rctDenominator )
{
// Ensure .5 upwards is rounded up (otherwise integer division just truncates - ie gives no remainder)
return (rctDenominator == 0) ? 0 : (rctNumerator + (int)(0.5*rctDenominator)) / rctDenominator;
}
I had a similar need where I needed to convert minutes to hours & minutes. What I used was:
int hrs = 0;
int mins = 0;
float tm = totalmins;
if (tm > 60) {
    hrs = (int) (tm / 60);
}
mins = (int) (tm - (hrs * 60));
System.out.println("Total time in Hours & Minutes = " + hrs + ":" + mins);
You'll want to do floating point division, and then use the ceiling function, to round up the value to the next integer.
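For example, a minimal Java version of that (the names x and y follow the earlier answer):

int pages = (int) Math.ceil((double) x / y);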