Is this multiplication technically possible? - java

I have two matrices of doubles, 200,000 x 3,000 and 3,000 x 200,000. They are dense; most values (about 80%) are non-zero.
How many iterations are needed for this?

The naive algorithm performs roughly 200,000 * 3,000 * 200,000 = 120,000,000,000,000 multiply-add operations, i.e. 120 trillion, so it will probably take a while.
The operands will each take about 4.5 GiB, whereas the output matrix will require about 298 GiB, assuming 8 bytes per double.
It is not straightforward to compare Strassen to the naive algorithm.
Furthermore, there is no need for the matrices to be square. Non-square matrices can be split in half using the same methods, yielding smaller non-square matrices. If the matrices are sufficiently non-square, it is worthwhile to reduce the initial operation to more nearly square products, using simple methods which are essentially O(n²). For instance (a block-wise sketch follows after the two cases below):
A product of size [2N x N] * [N x 10N] can be done as 20 separate [N x N] * [N x N] operations, arranged to form the result;
A product of size [N x 10N] * [10N x N] can be done as 10 separate [N x N] * [N x N] operations, summed to form the result.
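As a minimal sketch of the first case, assuming the dimensions are exact multiples of N and using a plain triple loop as the per-block kernel (in a real Strassen implementation each block product would be handed to the Strassen routine instead):

// Hypothetical sketch: multiply an [rN x N] matrix A by an [N x cN] matrix B by
// decomposing the work into r*c independent [N x N] * [N x N] block products.
static double[][] multiplyByBlocks(double[][] a, double[][] b, int n) {
    int rowBlocks = a.length / n;       // r: row blocks of A (and of the result)
    int colBlocks = b[0].length / n;    // c: column blocks of B (and of the result)
    double[][] result = new double[a.length][b[0].length];
    for (int bi = 0; bi < rowBlocks; bi++) {
        for (int bj = 0; bj < colBlocks; bj++) {
            // One [N x N] * [N x N] product filling block (bi, bj) of the result;
            // this is where a Strassen kernel would be plugged in.
            for (int i = 0; i < n; i++) {
                for (int k = 0; k < n; k++) {
                    double aik = a[bi * n + i][k];
                    for (int j = 0; j < n; j++) {
                        result[bi * n + i][bj * n + j] += aik * b[k][bj * n + j];
                    }
                }
            }
        }
    }
    return result;
}

For the [2N x N] * [N x 10N] example this yields the 20 separate [N x N] * [N x N] products mentioned above.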
These techniques will make the implementation more complicated, compared to simply padding to a power-of-two square; however, it is a reasonable assumption that anyone undertaking an implementation of Strassen, rather than conventional, multiplication, will place a higher priority on computational efficiency than on simplicity of the implementation.
See also Adaptive Strassen’s Matrix Multiplication.

Related

Smart algorithm to randomize a Double in range but with odds

I use the following function to generate a random double in a specific range:
nextDouble(1.50, 7.00)
However, I've been trying to come up with an algorithm that makes the randomization more likely to produce a double close to 1.50 than to 7.00, and I don't even know where to start. Does anything come to mind?
Java is also welcome.
You should start by discovering what probability distribution you need. Based on your requirements, and assuming that the random number generations are independent, perhaps the Poisson distribution is what you are looking for:
a call center receives an average of 180 calls per hour, 24 hours a day. The calls are independent; receiving one does not change the probability of when the next one will arrive. The number of calls received during any minute has a Poisson probability distribution with mean 3: the most likely numbers are 2 and 3 but 1 and 4 are also likely and there is a small probability of it being as low as zero and a very small probability it could be 10.
The usual probability distributions are already implemented in libraries e.g. org.apache.commons.math3.distribution.PoissonDistribution in Apache Commons Math3.
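As a small, hedged example of drawing from that class, assuming Apache Commons Math3 is on the classpath (mapping the sample onto the asker's [1.50, 7.00) range is left out, since it depends on the distribution actually chosen):

import org.apache.commons.math3.distribution.PoissonDistribution;

public class PoissonSampleExample {
    public static void main(String[] args) {
        PoissonDistribution poisson = new PoissonDistribution(3.0);  // mean of 3 events
        int count = poisson.sample();  // non-negative integer; 2 and 3 are the most likely values
        System.out.println("sampled count = " + count);
    }
}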
I suggest not thinking about this problem in terms of generating a random number with an irregular probability distribution. Instead, think about generating a random number uniformly in some range, and then mapping that range onto another one in a non-linear way.
Let's split our algorithm into 3 steps:
Generate a random number in the [0, 1) range uniformly (using a standard random generator).
Map it onto another [0, 1) range in a non-linear way.
Map the resulting [0, 1) onto [1.5, 7) linearly.
Steps 1 and 3 are easy; the core of the algorithm is step 2. We need a way to map [0, 1) onto another [0, 1), but non-linearly, so e.g. an input of 0.7 does not have to produce 0.7. Classic math helps here: we just need to look at the graphs of some algebraic functions.
In your case you expect that as the input number increases from 0 to 1, the result first grows very slowly (to stay near 1.5 for longer), but then speeds up. This is exactly how the function y = x² behaves. Your resulting code could be something like:
import kotlin.math.pow
import kotlin.random.Random

fun generateDouble(): Double {
    val step1 = Random.nextDouble()   // step 1: uniform in [0, 1)
    val step2 = step1.pow(2.0)        // step 2: non-linear map onto [0, 1)
    val step3 = step2 * 5.5 + 1.5     // step 3: linear map onto [1.5, 7)
    return step3
}
or just:
fun generateDouble() = Random.nextDouble().pow(2.0) * 5.5 + 1.5
By changing the exponent to bigger numbers, the curve becomes more aggressive, so it favors 1.5 more. By making the exponent closer to 1 (e.g. 1.4), the result becomes closer to linear, but it still favors 1.5. Making the exponent smaller than 1 starts to favor 7.
You can also look at other algebraic functions with this shape, e.g. y = 2 ^ x - 1.
What you could do is 'correct' the random value with a factor in the direction of 1.5, i.e. create some sort of bias factor. Like this:
@Test
void doubleTest() {
    double origin = 1.50;
    final double fairRandom = new Random().nextDouble(origin, 7);  // uniform in [1.50, 7)
    System.out.println(fairRandom);

    double biasFactor = 0.9;
    final double biasedDiff = (fairRandom - origin) * biasFactor;  // shrink the distance from 1.50
    double biasedRandom = origin + biasedDiff;
    System.out.println(biasedRandom);
}
The lower you set the bias factor (it must be > 0 and <= 1), the stronger the bias towards 1.50.
You can take a straightforward approach. Since you want a higher probability of getting a value closer to 1.5 than to 7.00, you can set that probability explicitly. The midpoint of the range is (1.5 + 7) / 2 = 4.25.
So let's say I want a 70% probability that the random value is closer to 1.5 and a 30% probability that it is closer to 7:
double finalResult;
double mid = (1.5+7)/2;
double p = nextDouble(0,100);
if(p<=70) finalResult = nextDouble(1.5,mid);
else finalResult = nextDouble(mid,7);
Here, the final result has a 70% chance of being closer to 1.5 than to 7.
Since you did not specify an exact probability, you can even make it random: generate nextDouble(50, 100), which gives you a value of at least 50 and less than 100, and use that percentage as the probability in the calculation above.
I missed that I am using the same solution strategy as the reply by Nafiul Alam Fuji, but since I have already formulated my answer, I am posting it anyway.
One way is to split the range into two subranges, say nextDouble(1.50, 4.25) and nextDouble(4.25, 7.0). You select one of the subranges by generating a random number between 0.0 and 1.0 using nextDouble() and comparing it to a threshold K. If the random number is less than K, you do nextDouble(1.50, 4.25). Otherwise nextDouble(4.25, 7.0).
Now if K=0.5, it is like doing nextDouble(1.50, 7). But by increasing K, you will do nextDouble(1.50, 4.25) more often and favor it over nextDouble(4.25, 7.0). It is like flipping an unfair coin where K determines the extent of the cheating.
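A minimal sketch of that unfair-coin idea, assuming ThreadLocalRandom rather than the asker's nextDouble helper; K = 0.7 reproduces the 70/30 split from the previous answer:

import java.util.concurrent.ThreadLocalRandom;

// Sketch only: picks the lower subrange with probability k, the upper one otherwise.
static double biasedSample(double k) {
    ThreadLocalRandom rnd = ThreadLocalRandom.current();
    return rnd.nextDouble() < k
            ? rnd.nextDouble(1.50, 4.25)   // lower subrange, chosen with probability k
            : rnd.nextDouble(4.25, 7.00);  // upper subrange, chosen with probability 1 - k
}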

Why do these two methods of sampling primes run equally long?

So I've implemented my own little RSA algorithm and in the course of that I wrote a function to find large prime numbers.
First I wrote a function prime? that tests for primality, and then I wrote two versions of a prime-searching function. In the first version I just test random BigIntegers until I hit a prime. In the second version I sample one random BigInteger and then increment it until I find a prime.
(defn resampling []
  (let [rnd (Random.)]
    (->> (repeatedly #(BigInteger. 512 rnd))
         (take-while (comp not prime?))
         (count))))

(defn incrementing []
  (->> (BigInteger. 512 (Random.))
       (iterate inc)
       (take-while (comp not prime?))
       (count)))

(let [n 100]
  {:resampling   (/ (reduce + (repeatedly n resampling)) n)
   :incrementing (/ (reduce + (repeatedly n incrementing)) n)})
Running this code yielded the two averages of 332.41 for the resampling function and 310.74 for the incrementing function.
Now the first number makes complete sense to me. The prime number theorem states that the n'th prime is about n*ln(n) in size (where ln is the natural logarithm). So the distance between adjacent primes is approximately n*ln(n) - (n-1)*ln(n-1) ≈ (n - (n - 1))*ln(n) = ln(n) (For large values of n ln(n) ≈ ln(n - 1)). Since I'm sampling 512-bit integers I'd expect the distance between primes to be in the vicinity of ln(2^512) = 354.89. Therefore random sampling should take about 354.89 attempts on average before hitting a prime, which comes out quite nicely.
The puzzle for me is why the incrementing function is taking about just as many steps. If I imagine throwing a dart at a grid where primes are spaced 355 units apart, it should take only about half that many steps on average to walk to the next higher prime, since on average I'd be hitting the center between two primes.
(The code for prime? is a little lengthy. You can take a look at it here.)
You assume that primes are evenly distributed, which seems not to be the case.
Consider the following hypothetical scenario: if primes always came in pairs, for example 10...01 and 10...03, then the next pair would arrive after about 2*ln(n). For the sampling algorithm this distribution makes no difference, but for the incrementing algorithm the probability of starting inside such a pair is almost 0, so on average it would have to walk half of the big distance, which is ln(n).
In a nutshell: to correctly estimate the behavior of the incrementing algorithm, it is not enough to know the average distance between the primes.
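An illustrative simulation (my own sketch, not part of the original answer) of why incrementing does not halve the work: with unevenly sized gaps, a uniformly random starting point lands in a gap with probability proportional to that gap's length, so the expected walk is E[g²] / (2·E[g]) rather than E[g] / 2.

import java.util.concurrent.ThreadLocalRandom;

public class GapBiasDemo {
    public static void main(String[] args) {
        // Hypothetical gap pattern: a tight pair followed by one huge gap,
        // with an average gap of 355, mimicking ln(2^512) ≈ 354.89.
        int[] gaps = {2, 708};
        long total = 0;
        for (int g : gaps) total += g;

        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        long steps = 0, trials = 1_000_000;
        for (long t = 0; t < trials; t++) {
            long x = rnd.nextLong(total);   // uniform starting point
            long pos = 0;
            for (int g : gaps) {            // walk forward to the end of the gap x fell into
                if (x < pos + g) { steps += (pos + g) - x; break; }
                pos += g;
            }
        }
        // Prints roughly 353: close to the full average gap of 355, not half of it.
        System.out.println("average walk: " + (double) steps / trials);
    }
}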

Using a low pass filter to calculate average?

If I want to calculate an average of 400 data points (noise values from an accelerometer sensor), can I use a low pass function such as this one to do that?
private float lowPass(float alpha, float input, float previousOutput) {
    return alpha * previousOutput + (1 - alpha) * input;
}
I'm comparing this to simply storing the 400 data points in a List<Float>, summing them up and dividing by 400.
I'm getting quite different results even with high values for alpha. Am I doing something wrong? Can I use the low pass filter to calculate an average, or is it generally better to simply calculate the "real" average?
EDIT
My low pass function originally took a float[] as input and output, since my data comes from a 3-axis accelerometer. I changed this to float and removed the internal for loop to avoid confusion. This also means that the input/output is now passed as primitive values, so the method returns a float instead of operating directly on the output array.
If you can afford to compute the arithmetic mean (which doesn't even require extra storage if you keep a running sum), then that would probably be the better option in most cases, for the reasons described below.
Warning: maths ahead
For the sake of comparing the arithmetic average with the first-order recursive low-pass filter you are using, let's start with a signal of N samples, where each sample has a value equal to m plus some Gaussian noise of variance v. Let's further assume that the noise is independent from sample to sample.
The computation of the arithmetic average on this signal will give you a random result with mean m and variance v/N.
Assuming the first previousOutput is initialized to zero, deriving the mean and variance of the last output (output[N-1]) of the low-pass filter gives a mean of m * (1 - alpha^N) and a variance of v * (1-alpha)^2 * (1-alpha^(2*N)) / (1 - alpha^2).
An immediate problem is that for large m, the estimated mean m * (1 - alpha^N) can be quite far from the true value m. This problem unfortunately gets worse as alpha gets closer to 1, because the filter does not have time to ramp up to its steady-state value.
To avoid this issue, one may consider initializing the first previousOutput with the first input sample.
In this case the mean and variance of the last output would be m and v * ((1-alpha)^2 * (1-alpha^(2*N-2)) / (1-alpha^2) + alpha^(2*N-2)), respectively. This time the problem is that for larger alpha the output variance is largely dominated by the variance of that first sample used for the initialization. This is particularly obvious when comparing the output variance (normalized by the input variance) for the two initializations.
So, either you get a bias in the estimated mean when initializing previousOutput with zero, or you get a large residual variance when initializing with the first sample (much more so than with the arithmetic mean computation).
Note in conclusion that actual performance may vary for your specific data, depending on the nature of the observed variations.
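A small sketch comparing the two estimators under the assumptions above (N = 400 samples, Gaussian noise, alpha = 0.95, previousOutput initialized with the first sample); the "true" value and the noise level are made up for illustration:

import java.util.Random;

public class MeanVsLowPass {
    public static void main(String[] args) {
        int n = 400;
        double m = 9.81, noiseStdDev = 0.5;   // illustrative true value and noise level
        float alpha = 0.95f;
        Random rnd = new Random(42);

        double sum = 0;
        float filtered = 0;
        for (int i = 0; i < n; i++) {
            float sample = (float) (m + noiseStdDev * rnd.nextGaussian());
            sum += sample;
            filtered = (i == 0) ? sample                           // initialize with the first sample
                                : lowPass(alpha, sample, filtered);
        }
        System.out.println("arithmetic mean: " + sum / n);   // variance ≈ v / N
        System.out.println("low-pass output: " + filtered);  // noticeably noisier estimate
    }

    // Same first-order recursive filter as in the question.
    private static float lowPass(float alpha, float input, float previousOutput) {
        return alpha * previousOutput + (1 - alpha) * input;
    }
}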
What's output[]? If it holds the results and you initialize it with 0s, then the term alpha * output[i] will always be zero.
And in general:
A low-pass filter is a filter that passes signals with a frequency
lower than a certain cutoff frequency and attenuates signals with
frequencies higher than the cutoff frequency.
So it is not an average; it is basically a cutoff at a specific frequency threshold.

Random but most likely 1 float

I want to randomize a float so that:
There is a 95% chance it is about 1
There is a 0.01% chance it is < 0.1 or > 1.9
It never becomes 0 or 2
Is this possible by using Random.nextFloat() several times for example?
A visual illustration of the probability:
You need to find a function f such that:
f is continuous and increasing on [0, 1]
f(0) > 0 and f(1) < 2
f(0.01) >= 0.1 and f(0.99) <= 1.9
f(x) is "about 1" for 0.025 <= x <= 0.975
And then just take f(Random.nextDouble())
For example, Math.tan(3*(x-0.5))/14.11 + 1 fits this, so for your expression I'd use:
Math.tan(3*(Random.nextDouble()-0.5))/14.11 + 1
The probability is distributed as:
I do not code in Java, but anyway, if I wanted to use the built-in pseudo-random generator (I usually use different approaches for this) I would do it like this:
Definitions
Let's say we have a pseudo-random generator Random.nextFloat() returning values in the range <0,1> with uniform distribution.
Create a mapping from the uniform <0,1> onto your (0,2)
It would be something like:
THE 0.001 SHOULD BE 0.0001 !!! I thought it was 0.1% instead of 0.01% while drawing ...
Let's call it f(x). It can be a table (piecewise interpolation), or construct some polynomial that will match the properties you need (BEZIER,Interpolation polynomials,...)
As you can see, the x axis is the probability and the y axis is the pseudo-random value (in your range). As built-in pseudo-random generators are uniform, they generate uniformly distributed numbers in <0,1>, which can be used directly as x.
To avoid the 0.0 and 2.0, either throw them away or use the interval <0.0+ulp, 2.0-ulp>, where ulp is the unit in the last place.
The graph is drawn in Paint and consists of 2x cubic BEZIER (4 control points per cubic) and a single Line.
Now just convert the ranges
So your pseudo-random value will be:
value=f(Random.nextFloat());
[Notes]
This would work better with fixed-point numbers; otherwise you need curvatures of insanely high order to have any effect, or a very large amount of data, to match the desired probability output.
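A hedged sketch of the table-based f(x) described above, using plain piecewise-linear interpolation; the control points are made up for illustration and would need tuning (or the Bézier treatment from the drawing) to hit the exact 95% / 0.01% figures:

import java.util.Random;

public class TableMapping {
    // x = cumulative probability (uniform input), y = output value; both strictly increasing.
    static final double[] X = {0.0,    0.0001, 0.025, 0.975, 0.9999, 1.0};
    static final double[] Y = {0.0001, 0.1,    0.95,  1.05,  1.9,    1.9999};

    static double f(double x) {
        for (int i = 1; i < X.length; i++) {
            if (x <= X[i]) {
                double t = (x - X[i - 1]) / (X[i] - X[i - 1]);  // position within the segment
                return Y[i - 1] + t * (Y[i] - Y[i - 1]);        // linear interpolation
            }
        }
        return Y[Y.length - 1];
    }

    public static void main(String[] args) {
        double value = f(new Random().nextFloat());   // value = f(Random.nextFloat())
        System.out.println(value);
    }
}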

Compute the product a * b² * c³ ... efficiently

What is the most efficient way to compute the product
a · b² · c³ · d⁴ · e⁵ · …
assuming that squaring costs about half as much as multiplication? The number of operands is less than 100.
Is there a simple algorithm also for the case that the multiplication time is proportional to the square of operand length (as with java.math.BigInteger)?
The first (and only) answer is perfect w.r.t. the number of operations.
Funnily enough, when applied to sizable BigIntegers, this part doesn't matter at all. Even computing a·b·b·c·c·c·d·d·d·d·e·e·e·e·e without any optimizations takes about the same time.
Most of the time gets spent in the final multiplication (BigInteger implements none of the smarter algorithms like Karatsuba, Toom–Cook, or FFT, so the time is quadratic). What's important is assuring that the intermediate multiplicands are about the same size, i.e., given numbers p, q, r, s of about the same size, computing (pq) (rs) is usually faster than ((pq) r) s. The speed ratio seems to be about 1:2 for some dozens of operands.
Update
In Java 8, there are both Karatsuba and Toom–Cook multiplications in BigInteger.
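A tiny illustration of the balanced grouping mentioned above (operand sizes picked arbitrarily); both expressions compute the same product, but the first multiplies two similarly sized intermediates, while the second keeps multiplying a growing number by a comparatively small one:

import java.math.BigInteger;
import java.util.Random;

public class GroupingDemo {
    public static void main(String[] args) {
        Random rnd = new Random();
        BigInteger p = new BigInteger(100_000, rnd), q = new BigInteger(100_000, rnd),
                   r = new BigInteger(100_000, rnd), s = new BigInteger(100_000, rnd);

        BigInteger balanced   = p.multiply(q).multiply(r.multiply(s));  // (p·q)·(r·s)
        BigInteger sequential = p.multiply(q).multiply(r).multiply(s);  // ((p·q)·r)·s
        System.out.println(balanced.equals(sequential));                // same value, different cost
    }
}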
I absolutely don't know if this is the optimal approach (although I think it is asymptotically optimal), but you can do it all in O(N) multiplications. You group the arguments of a * b^2 * c^3 like this: c * (c*b) * (c*b*a). In pseudocode:
result = 1
accum = 1
for i in n-1 down to 0:      # walk the arguments from the last one to the first
    accum = accum * arg[i]   # accum = arg[i] * arg[i+1] * ... * arg[n-1]
    result = result * accum
I think it is asymptotically optimal, because you have to use N-1 multiplications just to multiply N input arguments.
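A minimal BigInteger version of that accumulation scheme (the method name is mine), assuming args[i] is the operand to be raised to the power i + 1, i.e. args = {a, b, c, ...} for a · b² · c³ · ...:

import java.math.BigInteger;

static BigInteger risingPowersProduct(BigInteger[] args) {
    BigInteger result = BigInteger.ONE;
    BigInteger accum = BigInteger.ONE;
    for (int i = args.length - 1; i >= 0; i--) {
        accum = accum.multiply(args[i]);   // accum = args[i] · args[i+1] · ... · args[n-1]
        result = result.multiply(accum);   // args[i] ends up multiplied in (i + 1) times overall
    }
    return result;
}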
As mentioned in the Oct 26 '12 edit:
With multiplication time superlinear in the size of the operands, it would be advantageous to keep the sizes of the operands of the long operations similar (especially if the only Toom-Cook variant available is toom-2, i.e. Karatsuba). Short of a full optimisation, putting the operands in a queue that allows popping them in order of increasing (significant) length looks like a decent shot from the hip.
Then again, there are special cases: 0, powers of 2, multiplications where one factor is (otherwise) "trivial" ("long-by-single-digit multiplication", linear in sum of factor lengths).
And squaring is simpler/faster than general multiplication (question suggests assuming ½), which would suggest the following strategy:
in a pre-processing step, count trailing zeroes weighted by exponent
result 0 if encountering a 0
remove trailing zeroes, discard resulting values of 1
result 1 if no values left
find and combine values occurring more than once
set up a queue allowing extraction of the "shortest" number. For each pair (number, exponent), insert the factors exponentiation by squaring would multiply
optional: combine "trivial factors" (see above) and re-insert
Not sure how to go about this. Say factors of length 12 were trivial, and the initial factors are of length 1, 2, …, 10, 11, 12, …, n. Optimally, you combine 1+10, 2+9, … for 7 trivial factors from 12. Combining the shortest first gives 3, 6, 9, 12, for 8 from 12.
extract the shortest pair of factors, multiply and re-insert (see the queue-based sketch after this list)
once there is just one number left, the result is that number with the zeroes from the first step tacked back on
(If factorisation was cheap, it would have to go on pretty early to get most from cheap squaring.)
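A hedged sketch (my own code, not the answer's) of the shortest-first queue from the list above, ignoring the zero / power-of-two pre-processing and the squaring-specific steps:

import java.math.BigInteger;
import java.util.Comparator;
import java.util.PriorityQueue;

static BigInteger multiplyShortestFirst(Iterable<BigInteger> factors) {
    // Pop operands in order of increasing bit length so the long multiplications
    // always combine numbers of roughly similar size.
    PriorityQueue<BigInteger> queue =
            new PriorityQueue<>(Comparator.comparingInt(BigInteger::bitLength));
    for (BigInteger f : factors) queue.add(f);
    if (queue.isEmpty()) return BigInteger.ONE;
    while (queue.size() > 1) {
        BigInteger a = queue.poll();
        BigInteger b = queue.poll();
        queue.add(a.multiply(b));   // re-insert the product for further combining
    }
    return queue.poll();
}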
