I need help to make my code below more efficient, and to clean it up a little.
As shown in this image, x and y can be any point around the whole screen, and I am trying to find the angle t. Is there a way I can reduce the number of lines here?
Note: The origin is in the top left corner, and moving right/down is moving in the positive direction
o := MiddleOfScreenX - x;
a := MiddleOfScreenY - y;
t := Abs(Degrees(ArcTan(o / a)));
if(x > MiddleOfScreenX)then
begin
if(y > MiddleOfScreenY)then
t := 180 + t
else
t := 360 - t;
end
else
if(y > MiddleOfScreenY)then
t := 180 - t;
The code is in Pascal, but answers in other languages with similar syntax, or in C++ or Java, are fine as well.
:= sets the variable to that value
Abs() returns the absolute value (removes the sign)
Degrees() converts from radians to degrees
ArcTan() returns the inverse tangent
See http://www.cplusplus.com/reference/clibrary/cmath/atan2/ for a C function.
atan2 takes 2 separate arguments, so it can determine the quadrant.
Pascal may have ArcTan2; see http://www.freepascal.org/docs-html/rtl/math/arctan2.html or http://www.gnu-pascal.de/gpc/Run-Time-System.html
o := MiddleOfScreenX - x;
a := MiddleOfScreenY - y;
t := Degrees(ArcTan2(o, a));
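Since you said Java answers are fine too, roughly the same thing there looks like this (Math.atan2 handles the quadrant for you; the middle-of-screen names are just carried over from your code, and the final adjustment is only needed if you want the angle in 0..360):

double o = middleOfScreenX - x;
double a = middleOfScreenY - y;
double t = Math.toDegrees(Math.atan2(o, a)); // atan2 picks the correct quadrant itself, range -180..180
if (t < 0) {
    t += 360; // optional: keep the angle in the range 0..360
}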
The number of lines of code isn't necessarily the only optimization you need to consider. Trigonometric functions are costly in terms of the time it takes for a single one to finish its computation (e.g. a single cos() call may require hundreds of additions and multiplications depending on the implementation).
In the case of a commonly used function in signal processing, the discrete Fourier transform, the results of thousands of cos() and sin() calculations are pre-calculated and stored in a massive lookup table. The tradeoff is that you use more memory when running your application, but it runs MUCH faster.
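As a rough illustration of the lookup-table idea (the table size, the names and the no-interpolation lookup below are arbitrary choices for the sketch, not something your angle calculation necessarily needs):

// Precompute sin() at 0.1-degree resolution; trades memory for speed.
static final int STEPS = 3600;
static final double[] SIN_TABLE = new double[STEPS];
static {
    for (int i = 0; i < STEPS; i++) {
        SIN_TABLE[i] = Math.sin(Math.toRadians(i * 360.0 / STEPS));
    }
}

// Look up an approximate sine for an angle given in degrees.
static double fastSin(double degrees) {
    int index = (int) Math.round(degrees / 360.0 * STEPS) % STEPS;
    if (index < 0) index += STEPS;
    return SIN_TABLE[index];
}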
Please see the following article, or search for the importance of "precomputed twiddle factors", which essentially means calculating a ton of complex exponentials in advance:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.34.9421&rep=rep1&type=pdf
In the future, you should also mention what you are trying to optimize for (e.g. CPU cycles used, number of bytes of memory used, cost, among other things). I can only assume that you mean to optimize in terms of instructions executed, and by extension the number of CPU cycles used (i.e. you want to reduce CPU overhead).
You should only need one test to determine what to do with the arctan; your existing tests recover the information destroyed by Abs().
atan() normally returns in the range -pi/2 to pi/2. Your coordinate system is a bit strange: rotate 90 deg clockwise to get a "standard" one, though you take atan of x/y as opposed to y/x. I'm already having a hard time resolving this in my head.
Anyway, I believe your test just needs to be: if a is negative, add 180 deg. If you want to avoid negative angles, add 360 deg if the result is then negative.
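In code, that single-test version might look something like this (a Java sketch, since the question says Java is fine; it keeps the plain arctangent and just drops Abs()):

double o = middleOfScreenX - x;
double a = middleOfScreenY - y;
double t = Math.toDegrees(Math.atan(o / a)); // no Abs(): the sign carries the quadrant information
if (a < 0) {
    t += 180; // below the centre of the screen: flip into the other half
}
if (t < 0) {
    t += 360; // optional: avoid negative angles
}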
I'm using the non-linear least squares Levenberg-Marquardt algorithm in Java to fit a number of exponential curves (A + B*exp(C*x)). Although the data is quite clean and approximates the model well, the algorithm is not able to model the majority of the curves, even with an excessive number of iterations (5000-6000). For the curves it can model, it does so in about 150 iterations.
LeastSquaresProblem problem = new LeastSquaresBuilder()
.start(start).model(jac).target(dTarget)
.lazyEvaluation(false).maxEvaluations(5000)
.maxIterations(6000).build();
LevenbergMarquardtOptimizer optimizer = new LevenbergMarquardtOptimizer();
LeastSquaresOptimizer.Optimum optimum = optimizer.optimize(problem);
My question is: how would I define a convergence criterion in Apache Commons so the optimizer stops before hitting the maximum number of iterations?
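If I remember the Commons Math least-squares API correctly, the builder also accepts a convergence checker, so something along these lines may be what you are after (EvaluationRmsChecker and the tolerance values below are assumptions on my part; check them against the version you are using):

LeastSquaresProblem problem = new LeastSquaresBuilder()
        .start(start).model(jac).target(dTarget)
        .checker(new EvaluationRmsChecker(1e-10, 1e-10)) // stop once the RMS of the residuals stabilises
        .lazyEvaluation(false).maxEvaluations(5000)
        .maxIterations(6000).build();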
I don't believe Java is your problem. Let's address the mathematics.
This problem is easier to solve if you change your function.
Your assumed equation is:
y = A + B*exp(C*x)
It'd be easier if you could do this:
y-A = B*exp(C*x)
Now A is just a constant that can be zero or whatever value you need to shift the curve up or down. Let's call the left-hand side, y - A, a new variable z:
z = B*exp(C*x)
Taking the natural log of both sides:
ln(z) = ln(B*exp(C*x))
We can simplify that right hand side to get the final result:
ln(z) = ln(B) + C*x
Transform your (x, y) data to (x, z) and you can use least squares fitting of a straight line where C is the slope in (x, z) space and ln(B) is the intercept. Lots of software available to do that.
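A sketch of that in Java, using Commons Math's SimpleRegression (org.apache.commons.math3.stat.regression) for the straight-line fit; aEstimate is an assumed, already-known estimate of the offset A:

// Fit ln(y - A) = ln(B) + C*x as a straight line in (x, ln(z)) space.
SimpleRegression regression = new SimpleRegression();
for (int i = 0; i < xs.length; i++) {
    double z = ys[i] - aEstimate;                    // shift the curve down by the assumed offset A
    regression.addData(xs[i], Math.log(z));
}
double c = regression.getSlope();                    // C is the slope in (x, ln(z)) space
double b = Math.exp(regression.getIntercept());      // ln(B) is the intercept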
I have a Java program that moves an object according to a sine function a*sin(b*x).
The object moves at a certain speed by changing the x parameter by the time interval times the speed. However, this only moves the object at a constant speed along the x axis.
What I want to do is move the object at a constant tangential speed along the curve of the function. Could someone please help me? Thanks.
Let's say you have a function f(x) that describes a curve in the xy plane. The problem consists in moving a point along this curve at a constant speed S (i.e., at a constant tangential speed, as you put it.)
So, let's start at an instant t and a position x. The point has coordinates (x, f(x)). An instant later, say, at t + dt the point has moved to (x + dx, f(x + dx)).
The distance between these two locations is:
dist = sqrt((x + dx - x)^2 + (f(x+dx) - f(x))^2) = sqrt(dx^2 + (f(x+dx) - f(x))^2)
Now, let's factor out dx to the right. We get:
dist = sqrt(1 + f'(x)^2) dx
where f'(x) is the derivative (f(x+dx) - f(x)) /dx.
If we now divide by the time elapsed dt we get
dist/dt = sqrt(1 + f'(x)^2) dx/dt.
But dist/dt is the speed at which the point moves along the curve, so it is the constant S. Then
S = sqrt(1 + f'(x)^2) dx/dt
Solving for dx
dx = S / sqrt(1 + f'(x)^2) dt
which gives you how much you have to move the x-coordinate of the point after dt units of time.
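For your curve f(x) = a*sin(b*x) the derivative is f'(x) = a*b*cos(b*x), so one update step could look like this (a sketch; x, a, b, speed and dt are whatever your program already keeps):

// Advance x so that the point (x, a*sin(b*x)) moves roughly `speed` units of arc length per dt.
double slope = a * b * Math.cos(b * x);              // f'(x)
x += speed / Math.sqrt(1 + slope * slope) * dt;      // dx = S / sqrt(1 + f'(x)^2) dt
double y = a * Math.sin(b * x);                      // new position on the curve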
The arc length on a sine curve as a function of x is an elliptic integral of the second kind. To determine the x coordinate after you move a particular distance (or a particular time with a given speed) you would need to invert this elliptic integral. This is not an elementary function.
There are ways to approximate the inverse of the elliptic integral that are much simpler than you might expect. You could combine a good numerical integration algorithm such as Simpson's rule with either Newton's method or binary search to find a numerical root of arc length(x) = kt. Whether this is too computationally expensive depends on how accurate you need it to be and how often you need to update it. The error will decrease dramatically if you estimate the length of one period once, and then reduce t mod the arc length on one period.
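A sketch of that approach in Java (trapezoid rule standing in for Simpson's rule to keep it short, plus a binary search; it assumes x >= 0, that hi is large enough, and that s is within range):

// Arc length of y = a*sin(b*x) from 0 to x, integrated numerically.
static double arcLength(double a, double b, double x, int steps) {
    double h = x / steps;
    double sum = 0;
    double prev = Math.sqrt(1 + a * a * b * b);      // integrand at t = 0 (cos(0) = 1)
    for (int i = 1; i <= steps; i++) {
        double slope = a * b * Math.cos(b * (i * h));
        double cur = Math.sqrt(1 + slope * slope);
        sum += (prev + cur) / 2 * h;                 // trapezoid rule
        prev = cur;
    }
    return sum;
}

// Binary search for the x whose arc length from 0 equals s; works because arc length grows monotonically.
static double xAfterDistance(double a, double b, double s, double hi) {
    double lo = 0;
    for (int i = 0; i < 50; i++) {
        double mid = (lo + hi) / 2;
        if (arcLength(a, b, mid, 1000) < s) lo = mid; else hi = mid;
    }
    return (lo + hi) / 2;
}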
Another approach you might try is to use a different curve than a sine curve with a more tractable arc length parametrization. Unfortunately, there are few of those, which is why the arc length exercises in calculus books repeat the same types of curves over and over. Another possibility is to accept a speed that isn't constant but doesn't go too far above or below a specified constant, which you can get with some Fourier analysis.
Another approach is to recognize the arc length parametrization as a solution to a 2-dimensional ordinary differential equation. A first order numerical approximation (Euler's method) might suffice, and I think that's what Leandro Caniglia's answer suggests. If you find that the round off errors are too large, you can use a higher order method such as Runge-Kutta.
I want an image processing function that returns a 1 for each pixel if all the pixels around it (say, within +/- 4) have nearly the same RGB values (to within some threshold, epsilon). Otherwise, zero is returned at that pixel location.
I have written this using get() and put(), and with the Java API, sweeping over the entire image matrix, but it is very slow.
Is there some tactic I can use to leverage existing OpenCV image processing functions to achieve the same result but much faster?
What about the inRange() function in OpenCV? I think it will satisfy your condition.
In terms of well-known, already-implemented functions this is a question about the algorithm, not the language. Try this with the R channel of your initial image (matrix) as A:
calc Amin -- the result of a windowed (window half-size = 4) min operation on A, called "erosion"
calc Asub-min = A - Amin
threshold Asub-min by epsilon ( if(Asub_ij < epsilon) Asub_ij = 1 else Asub_ij = 0 ), and call the result Asub-min-thr
do the same for the max function ("dilation") to get Asub-max-thr
Ares = Asub-min-thr AND Asub-max-thr -- Ares is answer for one channel
do the same for all channels
Ares-rgb = Ares-r AND Ares-g AND Ares-b -- your final answer.
All the functions above are well implemented in OpenCV, including GPU acceleration (the GPU is not always reachable from Java). No more get() and put() at all!
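With the standard OpenCV Java bindings (org.opencv.*) that might look roughly like this for one channel; the 9x9 kernel corresponds to half-size 4, and epsilon and the use of 255 as the "1" value for 8-bit channels are my assumptions:

// One channel: pixel becomes 255 when both (A - Amin) and (Amax - A) stay below epsilon in the window.
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(9, 9));
Mat aMin = new Mat(), aMax = new Mat(), subMin = new Mat(), subMax = new Mat();
Imgproc.erode(channel, aMin, kernel);     // windowed min
Imgproc.dilate(channel, aMax, kernel);    // windowed max
Core.subtract(channel, aMin, subMin);     // A - Amin
Core.subtract(aMax, channel, subMax);     // Amax - A
Mat thrMin = new Mat(), thrMax = new Mat(), result = new Mat();
Imgproc.threshold(subMin, thrMin, epsilon, 255, Imgproc.THRESH_BINARY_INV); // 255 where diff <= epsilon
Imgproc.threshold(subMax, thrMax, epsilon, 255, Imgproc.THRESH_BINARY_INV);
Core.bitwise_and(thrMin, thrMax, result); // answer for this channel; AND the three channel results afterwards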
If you need not a per-channel epsilon distance but the L2 distance in RGB, you will have to do more calculations, like Asub-rgb^2 = Asub-r*Asub-r + Asub-g*Asub-g + Asub-b*Asub-b and then thresholding by epsilon^2, but for the 3-dimensional RGB space the per-channel epsilon differs from the L2 epsilon by at most a factor of sqrt(3), so you may ignore the difference and do the fast calculation.
The algorithm to implement a low-pass filter is stated as follows (sourced from Wikipedia):
for i from 1 to n
y[i] := y[i-1] + α * (x[i] - y[i-1])
where
α = T/(tau + T)
T is the period, in other words the time interval at which data is received, and tau is the time constant, defined as:
tau = RC.
OK, it's all clear. Everyone seems to come up with a different value for α, but it beats me: how can one reach a logical decision for this value?
Surely the values of R and C are not available to use, or are they?
Does anybody know how to determine the value of tau, and thus the value of α?
Thanks one and all!
T: sampling period.
tau: time constant.
fc: cutoff frequency of the filter.
fc = 1 / (2π * tau)
then
alpha = T / ( T + 1/(2π * fc) )
best regards!
I had the same problem with the calculation of the smoothing parameter (ALPHA) of the compass sensor data using low pass filter.
I have figured out and the calculated ALPHA value works good for my application.
For more understanding and discussion, refer to the following post:
How to calculate the value of the smoothing parameter for lowpass filter (in case of smoothing of compass sensor data)
For a different lowpass filter, you may want to consider the RBJ biquad:
http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt
The implementation of which is described in detail here:
http://blog.bjornroche.com/2012/08/basic-audio-eqs.html
From Wikipedia's RC time constant entry, under "Cutoff frequency": the time constant is related to the cutoff frequency fc, an alternative parameter of the RC circuit:
tau = 1 / (2π * f)
Why 2π? From Wikipedia's time constant entry:
ω = 2π * f is the frequency in radians per second.
From the same entry, tau is the equivalent of RC and is the rise time of the system. A low rise time means that higher-frequency input will not excite the system. It is easy to imagine, then, that it is connected to the low-pass filter's cutoff frequency. Ultimately it controls how much of the feedback signal is mixed in with the new input signal.
In my 2nd order low pass filter, I use the following for alpha.
α = 1 / (T * tau)
In my audio application the 2nd-order filter is two single-order filters chained, and I calculate the filter output like this. filter1Out and filter2Out are the current values of the filters, and this is the update after receiving an input sample.
filter1Out = filter1Out + (alpha * (input - filter1Out));
filter2Out = filter2Out + (alpha * (filter1Out - filter2Out));
To determine what you want your cutoff frequency to be with the Android compass, I would at first not implement any filtering and try to use the data as provided. The cutoff really depends on what you are doing with the signal. Are you smoothing it for on screen animations? Are you smoothing it for path tracking? Is there noise in the signal that you want to reject? Each could need a different setting. If the unfiltered signal changes too often then figure out how often you want it to change and use that as your starting point for the filter cutoff.
I hope this helps. The derivation of the DSP math is beyond my skills, but I've implemented low pass filters for audio applications a few times.
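Pulling the formulas above together, a minimal single-pole filter in Java might look like this (the class name is mine; dt is the sampling period T from the question, alpha = T/(tau + T) as quoted there, and tau = 1/(2π*fc) is the relation from the Wikipedia quote above):

// Simple exponentially-weighted low-pass: y[i] = y[i-1] + alpha * (x[i] - y[i-1]).
class LowPassFilter {
    private final double alpha;
    private double y;

    LowPassFilter(double cutoffHz, double dt) {
        double tau = 1.0 / (2 * Math.PI * cutoffHz); // RC time constant from the cutoff frequency
        this.alpha = dt / (tau + dt);                // alpha = T / (tau + T)
    }

    double update(double x) {
        y += alpha * (x - y);
        return y;
    }
}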
I have a bunch of floating point numbers (Java doubles), most of which are very close to 1, and I need to multiply them together as part of a larger calculation. I need to do this a lot.
The problem is that while Java doubles have no problem with a number like:
0.0000000000000000000000000000000001 (1.0E-34)
they can't represent something like:
1.0000000000000000000000000000000001
As a consequence, I lose precision rapidly (the limit seems to be around 1.000000000000001 for Java's doubles).
I've considered just storing the numbers with 1 subtracted, so for example 1.0001 would be stored as 0.0001 - but the problem is that to multiply them together again I have to add 1 and at this point I lose precision.
To address this I could use BigDecimals to perform the calculation (convert to BigDecimal, add 1.0, then multiply), and then convert back to doubles afterwards, but I have serious concerns about the performance implications of this.
Can anyone see a way to do this that avoids using BigDecimal?
Edit for clarity: This is for a large-scale collaborative filter, which employs a gradient descent optimization algorithm. Accuracy is an issue because often the collaborative filter is dealing with very small numbers (such as the probability of a person clicking on an ad for a product, which may be 1 in 1000, or 1 in 10000).
Speed is an issue because the collaborative filter must be trained on tens of millions of data points, if not more.
Yep: because
(1 + x) * (1 + y) = 1 + x + y + x*y
In your case, x and y are very small, so x*y is going to be far smaller - way too small to influence the results of your computation. So as far as you're concerned,
(1 + x) * (1 + y) = 1 + x + y
This means you can store the numbers with 1 subtracted, and instead of multiplying, just add them up. As long as the results are always much less than 1, they'll be close enough to the mathematically precise results that you won't care about the difference.
EDIT: Just noticed: you say most of them are very close to 1. Obviously this technique won't work for numbers that are not close to 1 - that is, if x and y are large. But if one is large and one is small, it might still work; you only care about the magnitude of the product x*y. (And if both numbers are not close to 1, you can just use regular Java double multiplication...)
Perhaps you could use logarithms?
Logarithms conveniently reduce multiplication to addition.
Also, to take care of the initial precision loss, there is the function log1p (at least, it exists in C/C++), which returns log(1+x) without any precision loss. (e.g. log1p(1e-30) returns 1e-30 for me)
Then you can use expm1 to convert the summed logarithm back, giving you the actual result minus 1 (again without losing precision near 1).
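Java has both of these as Math.log1p and Math.expm1, so storing the logarithms and summing could look like this (a sketch; offsets is a hypothetical array holding your values with 1 already subtracted):

// Multiply many values of the form (1 + offset) without ever forming 1 + offset in a double.
double logProduct = 0;
for (double offset : offsets) {
    logProduct += Math.log1p(offset);                // log(1 + offset), accurate even for tiny offsets
}
double productMinusOne = Math.expm1(logProduct);     // the final product, still stored as (result - 1)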
Isn't this sort of situation exactly what BigDecimal is for?
Edited to add:
"Per the second-last paragraph, I would prefer to avoid BigDecimals if possible for performance reasons." – sanity
"Premature optimization is the root of all evil" - Knuth
There is a simple solution practically made to order for your problem. You are concerned it might not be fast enough, so you want to do something complicated that you think will be faster. The Knuth quote gets overused sometimes, but this is exactly the situation he was warning against. Write it the simple way. Test it. Profile it. See if it's too slow. If it is then start thinking about ways to make it faster. Don't add all this additional complex, bug-prone code until you know it's necessary.
Depending on where the numbers are coming from and how you are using them, you may want to use rationals instead of floats. Not the right answer for all cases, but when it is the right answer there's really no other.
If rationals don't fit, I'd endorse the logarithms answer.
Edit in response to your edit:
If you are dealing with numbers representing low response rates, do what scientists do:
Represent them as the excess / deficit (normalize out the 1.0 part)
Scale them. Think in terms of "parts per million" or whatever is appropriate.
This will leave you dealing with reasonable numbers for calculations.
It's worth noting that you are testing the limits of your hardware rather than of Java. Java uses the 64-bit floating point of your CPU.
I suggest you test the performance of BigDecimal before you assume it won't be fast enough for you. You can still do tens of thousands of calculations per second with BigDecimal.
As David points out, you can just add the offsets up.
(1+x) * (1+y) = 1 + x + y + x*y
However, it seems risky to choose to drop out the last term. Don't. For example, try this:
x = 1e-8
y = 2e-6
z = 3e-7
w = 4e-5
What is (1+x)*(1+y)*(1+z)*(1+w)? In double precision, I get:
(1+x)*(1+y)*(1+z)*(1+w)
ans =
1.00004231009302
However, see what happens if we just do the simple additive approximation.
1 + (x+y+z+w)
ans =
1.00004231
We lost the low order bits that may have been important. This is only an issue if some of the differences from 1 in the product are at least sqrt(eps), where eps is the precision you are working in.
Try this instead:
f = @(u,v) u + v + u*v;
result = f(x,y);
result = f(result,z);
result = f(result,w);
1+result
ans =
1.00004231009302
As you can see, this gets us back to the double precision result. In fact, it is a bit more accurate, since the internal value of result is 4.23100930230249e-05.
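The same idea translated to Java (the snippet above is MATLAB/Octave); result stays stored as the offset from 1 the whole time:

// Accumulate (1 + result) * (1 + v) - 1 = result + v + result * v, never touching the 1.
double result = 0;
for (double v : new double[] {1e-8, 2e-6, 3e-7, 4e-5}) {
    result = result + v + result * v;
}
// result is now about 4.23100930230249e-05, i.e. the product minus 1.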
If you really need the precision, you will have to use something like BigDecimal, even if it's slower than Double.
If you don't really need the precision, you could perhaps go with David's answer. But even if you do a lot of multiplications, worrying about it may be premature optimization, so BigDecimal might be the way to go anyway.
When you say "most of which are very close to 1", how many, exactly?
Maybe you could have an implicit offset of 1 in all your numbers and just work with the fractions.