Out Of Memory error with HackerEarth Problem: Reverse Primes - java

Generate as many distinct primes P as possible, such that reverse(P) is
also prime and is not equal to P.
Output: Print one integer (≤ 10^15) per line. Don't print more than
10^6 integers in all.
Scoring: Let N = correct outputs.
M = incorrect outputs. Your score will be max(0,N-M).
Note: Only one of P and reverse(P) will be counted as correct. If both are in the file, one will be counted as incorrect.
Sample Output:
107
13
31
17
2
Explanation
Score will be 1, since 13, 107 and 17 are correct (N = 3). 31 is incorrect because
13 is already there, and 2 is incorrect because reverse(2) = 2 (M = 2).
Here is the code I've written, which gives me an Out Of Memory error in Eclipse.
Since the memory requirement is 256 MB, I set -Xmx256M, but it still gives an Out Of Memory error, so I must have misunderstood the question or my code is buggy in terms of memory utilization. What am I doing wrong here? I get the desired output for smaller values of lONGMAX like 10000 or 1000000.
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class ReversePrime {
    final static long lONGMAX = 1000000000000000L;
    final static int MAXLISTSIZE = 1000000;
    final static boolean[] isPrime = isPrime();

    public static void main(String... strings) {
        Set<Long> reversedCheckedPrime = new LinkedHashSet<Long>();
        int isPrimeLength = isPrime.length;
        for (int i = 0; i < isPrimeLength; i++) {
            if (isPrime[i]) {
                long prime = 2 * i + 3;
                long revrse = reversePrime(prime);
                if ((!(prime == revrse)) && (!reversedCheckedPrime.contains(revrse))
                        && (reversedCheckedPrime.size() <= MAXLISTSIZE)) {
                    reversedCheckedPrime.add(prime);
                }
                if (reversedCheckedPrime.size() == MAXLISTSIZE) {
                    break;
                }
            }
        }
        for (Long prime : reversedCheckedPrime) {
            System.out.println(prime);
        }
    }

    private static long reversePrime(long prime) {
        long result = 0;
        long rem;
        while (prime != 0) {
            rem = prime % 10;
            prime = prime / 10;
            result = result * 10 + rem;
        }
        return result;
    }

    private static boolean[] isPrime() {
        int root = (int) Math.sqrt(lONGMAX) + 1;
        root = root / 2 - 1;
        int limit = (int) ((lONGMAX - 1) / 2);
        boolean[] isPrime = new boolean[limit];
        Arrays.fill(isPrime, true);
        for (int i = 0; i < root; i++) {
            if (isPrime[i]) {
                for (int j = 2 * i * (i + 3) + 3, p = 2 * i + 3; j < limit; j = j + p) {
                    isPrime[j] = false;
                }
            }
        }
        return isPrime;
    }
}
Hackerearth Link

There are two possibilities:
You use -Xmx256M, which means a 256 MB heap. But there's more than just the heap, and your VM may get killed when it tries to get more.
You give 256 MB to your VM but your program needs more and gets killed. <---- As RealSkeptic says, this is the case.
In order to get 1M primes, you need to investigate somewhere under 100M numbers (*). So with a prime sieve working below 100,000,000 it should work, and this way the sieve covers the reversed numbers as well. By skipping the evens, you need only 50 MB, so you can set the limit to maybe 100M.
You could reduce the memory used by a factor of 8 by using bits instead of bytes. You could reduce it by another factor of 2 by ignoring numbers starting with an even digit, but this gets complicated.
(*) This is something you can easily try out before submitting.

You declare this:
final static long lONGMAX=1000000000000000L;
And then when you allocate your boolean array, you calculate this:
int limit= (int) ((lONGMAX-1)/2);
Based on that definition, limit will be 1,382,236,159. That's 1.3 GB, assuming a boolean takes one byte. You might be thinking that the VM only allocates one bit per boolean, but that's not how it works.
Consider using a java.util.BitSet instead.
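For a sense of scale, here is a minimal sketch of the same odd-only sieve backed by a BitSet (my illustration, not the asker's code; the 100M limit is the assumption suggested in the first answer). One bit per flag means roughly 6 MB instead of roughly 50 MB of booleans:

import java.util.BitSet;

public class BitSieve {
    public static void main(String[] args) {
        int limit = 100_000_000;              // assumed bound, per the first answer
        int half = (limit - 1) / 2;           // bit i stands for the odd number 2*i + 3
        BitSet composite = new BitSet(half);  // ~6 MB instead of ~50 MB of booleans
        for (int i = 0; 2L * i * (i + 3) + 3 < half; i++) {
            if (!composite.get(i)) {
                int p = 2 * i + 3;            // the prime this bit represents
                for (long j = 2L * i * (i + 3) + 3; j < half; j += p) {
                    composite.set((int) j);   // mark odd multiples of p, starting at p*p
                }
            }
        }
        // 2*i + 3 is prime exactly when composite.get(i) is false; 2 is handled separately.
    }
}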

You actually should replace your boolean[] with a List, as the out-of-memory error probably comes from this table. You're not using the best strategy, since you're storing a value for every long in the range.
You'd do better to keep only the prime numbers in memory: rethink the definition of a prime number, and build the list by iterative deduction.
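One way to read that suggestion (my interpretation, not the answerer's code) is to keep only the primes found so far and test each candidate by trial division against them; memory then grows with the number of primes rather than with the range:

import java.util.ArrayList;
import java.util.List;

public class PrimeList {
    public static void main(String[] args) {
        List<Long> primes = new ArrayList<>();
        candidates:
        for (long candidate = 2; primes.size() < 1_000_000; candidate++) {
            for (long p : primes) {
                if (p * p > candidate) break;                // no divisor beyond sqrt(candidate)
                if (candidate % p == 0) continue candidates; // composite, skip it
            }
            primes.add(candidate);
        }
        System.out.println("last prime found: " + primes.get(primes.size() - 1));
    }
}

Note this is far slower than a sieve; it only illustrates the memory trade-off.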

Related

How does index*int in a for loop end up with zero as result?

We tried, just for fun, to create a for loop like the one below. We assumed that the number we'd get would be very high, but we got 0. Why is it 0 and not something big?
We even tried it with a long because we thought it might be bigger than an int.
Thanks in advance.
private static void calculate() {
    int currentSolution = 1;
    for (int i = 1; i < 100; i++) {
        currentSolution *= i;
    }
    System.out.println(currentSolution);
}
Your int is wrapping past +2147483647 round to -2147483648.
By an amazing coincidence [1], a multiplication by zero is introduced into your product.
See for yourself: write
if (currentSolution == 0){
// What is the value of i?
}
You'll need a BigInteger to evaluate 100!.
[1] Really it's not that amazing: it's just that 100! has 2^32 as a factor.
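To make the footnote concrete, here is a tiny check (my addition, not part of the answer) that finds where the running int product first becomes 0. By Legendre's formula the exponent of 2 in i! first reaches 32 at i = 34, so that is where all 32 bits have been shifted out:

public class FirstZero {
    public static void main(String[] args) {
        int product = 1;
        for (int i = 1; i < 100; i++) {
            product *= i;
            if (product == 0) {
                System.out.println("product first becomes 0 at i = " + i); // prints 34
                break;
            }
        }
    }
}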
Your for loop calculates the factorial of 100 (well, with i < 100 it actually stops at 99!, but the point is the same): 1 * 2 * 3 * ... * 99 * 100, also written as 100!, which equals about 9.332621544×10^157.
The range of int is -2,147,483,648 to 2,147,483,647 and the range of long is -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, so with a number around 10^157 you hit multiplication overflow long before you get anywhere near the end of the for loop.
The overflow eventually results in all bits zeroing out, thus producing the result 0.
Your datatypes are too small for this calculation, even if you use long, so you cannot do it with primitive types. You get an overflow: the values become negative and at some point 0, and once the product is 0 it stays 0 after every further iteration.
You can see that by printing currentSolution inside the loop.
In order to get the correct solution, try using BigInteger (and note the loop has to run up to i <= 100; with i < 100 you'd compute 99!):

public static void calculate() {
    BigInteger currentSolution = BigInteger.valueOf(1);
    for (int i = 1; i <= 100; i++) {
        currentSolution = currentSolution.multiply(BigInteger.valueOf(i));
    }
    System.out.println(currentSolution);
}

This outputs the correct solution: 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
You're basically calculating 100!, which is approximately 9.3e+157, whereas a long's maximum value is 2^63 - 1, so approximately 9.2e+18.
It can't fit; that's why you get 0.

Efficient way of altering data in an array with threads

I've been trying to figure out the most efficient way for many threads to alter a very big byte array at the bit level. For ease of explanation I'll base the question around a multithreaded Sieve of Eratosthenes. The code should not be expected to be fully complete, as I'll omit certain parts that aren't directly related. The sieve also won't be fully optimised, as that's not the direct question. The sieve works in such a way that it saves which values are primes in a byte array, where each byte contains 7 numbers (we can't alter the first bit since all things are signed).
Let's say our goal is to find all the primes below 1 000 000 000 (1 billion). As a result we would need a byte array of length 1 000 000 000 / 7 + 1, or 142 857 143 (about 143 million).
class Prime {
    int max = 1000000000;
    byte[] b = new byte[(max/7)+1];

    Prime() {
        for (int i = 0; i < b.length; i++) {
            b[i] = (byte) 127; // Setting all values to 1 at start
        }
        findPrimes();
    }

    /*
     * Calling remove will set the bit value associated with the number
     * to 0, signaling that it isn't a prime
     */
    void remove(int i) {
        int j = i/7; // gets which array index to access
        b[j] = (byte) (b[j] & ~(1 << (i%7)));
    }

    void findPrimes() {
        remove(1); // 1 is not a prime and we want to remove it from the start
        int prime = 2;
        while (prime*prime < max) {
            for (int i = prime*2; i < max; i = prime + i) {
                remove(i);
            }
            prime = nextPrime(prime); // This returns the next prime from the list
        }
    }

    ... // Omitting code, not relevant to question
}
Now we have a basic outline where something runs through all the numbers in a certain multiplication table and calls remove to set the bits of the numbers that turn out not to be primes to 0.
Now to up the ante we create threads that do the checking for us. We split the work so that each thread takes a part of the removing from the table. For example, if we have 4 threads and we are running through the multiplication table for the prime 2, we would assign thread 1 everything in the 8 times table with a starting offset of 2, that is 4, 10, 18, ...; the second thread gets an offset of 4, so it goes through 6, 14, 22..., and so on. They then call remove on the ones they want.
Now to the real question. As most can see, while the prime is less than 7 we will have multiple threads accessing the same array index. While running through 2, for example, thread 1, thread 2 and thread 3 will all try to access b[0] to alter the byte, which causes a race condition that we don't want.
The question therefore is: what's the best way of optimising access to the byte array?
So far the thoughts I've had are:
Putting synchronized on the remove method. This would obviously be very easy to implement, but it's a horrible idea, as it would remove any gain from having threads.
Creating a mutex array equal in size to the byte array. To write to an index, a thread would need the mutex for that same index. This would be fairly fast, but requires another very big array in memory, which might not be the best way to do it.
Limiting the numbers stored in each byte to the prime number we start running on. So if we start on 2 we would have 2 numbers per byte. This would however increase our array length to 500 000 000 (500 million).
Are there other ways of doing this in a fast and optimal way without overusing the memory?
(This is my first question here, so I tried to be as detailed and thorough as possible, but I would accept any comments on how I can improve the question - too much detail, needs more detail, etc.)
You can use an array of atomic integers for this. Unfortunately there isn't a getAndAnd operation, which would be ideal for your remove() function, but you can CAS in a loop:

java.util.concurrent.atomic.AtomicIntegerArray aia;
....
void remove(int i) {
    int j = i / 32; // gets which array index to access
    boolean updated;
    do {
        int oldVal = aia.get(j);
        int newVal = oldVal & ~(1 << (i % 32));
        updated = aia.weakCompareAndSet(j, oldVal, newVal);
    } while (!updated);
}
Basically you keep trying to adjust the slot to remove that bit, but you only succeed if nobody else modifies it out from under you. Safe, and likely to be very efficient. weakCompareAndSet is basically an abstracted Load-link/Store conditional instruction.
BTW, there's no reason not to use the sign bit.
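For completeness, a small setup sketch (my addition; the size and the all-ones initialisation are assumptions based on the question's sieve, and here every number simply gets its own bit):

import java.util.concurrent.atomic.AtomicIntegerArray;

int max = 1_000_000_000;
AtomicIntegerArray aia = new AtomicIntegerArray(max / 32 + 1);
for (int j = 0; j < aia.length(); j++) {
    aia.set(j, -1); // -1 has all 32 bits set: every number starts out "possibly prime"
}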
I think you could avoid synchronizing the threads...
For example, this task:

for (int i = prime*2; i < max; i = prime + i) {
    remove(i);
}

could be partitioned into small tasks:

for (int i = 0; i < threadPool; i++) {
    int totalPos = max / 8;                    // dividing the virtual array in bytes
    int partitionSize = totalPos / threadPool; // dividing the bytes by the thread pool
    removeAll(prime, partitionSize*i*8, (i + 1) * partitionSize*8);
}
....
// no collisions!!!
void removeAll(int prime, int initial, int max) {
    int k = initial / prime;
    if (k < 2) k = 2;
    for (int i = k * prime; i < max; i = i + prime) {
        remove(i);
    }
}
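A sketch of how those partitioned tasks might be launched (my addition; removeAll and the partition arithmetic come from the answer above, the executor wiring is an assumption):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

void removeAllParallel(final int prime, final int max, final int threadPool) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(threadPool);
    for (int i = 0; i < threadPool; i++) {
        final int from = (max / 8 / threadPool) * i * 8;
        final int to = (max / 8 / threadPool) * (i + 1) * 8;
        pool.submit(() -> removeAll(prime, from, to)); // each task touches a disjoint byte range
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS); // finish this prime before moving to the next
}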

Splitting an array into two subarrays with minimal sum

My question: given an array, we have to split it into two sub-arrays such that the absolute difference between the sums of the two arrays is minimal, with the condition that the difference between the numbers of elements of the arrays should be at most one.
Let me give you an example. Suppose:
Example 1: 100 210 100 75 340
Answer: Array1{100,210,100} and Array2{75,340} --> Difference = |410-415| = 5
Example 2: 10 10 10 10 40
Answer: Array1{10,10,10} and Array2{10,40} --> Difference = |30-50| = 20
Here we can see that though we could divide the array into {10,10,10,10} and {40}, we do not, because the constraint "the difference in the number of elements between the arrays should be at most 1" would be violated.
Can somebody provide a solution for this?
My approach:
-> Calculate the sum of the array
-> Divide the sum by 2
-> Let the size of the knapsack = sum/2
-> Consider the weights of the array values as 1 (if you have come across the knapsack problem, you may know about the weight concept)
-> Then consider the array values as the values of the weights
-> Calculate the answer, which will be the array1 sum
-> Total sum - answer = array2 sum
This approach fails.
Calculating the two arrays' sums is enough. We are not interested in which elements form the sum.
Thank you!
Source: This is an ICPC problem.
I have an algorithm that works in O(n^3) time, but I have no hard proof it is optimal. It seems to work for every test input I give it (including some with negative numbers), so I figured it was worth sharing.
You start by splitting the input into two equally sized arrays (call them one[] and two[]?). Start with one[0], and see which element in two[] would give you the best result if swapped. Whichever one gives the best result, swap. If none give a better result, don't swap it. Then move on to the next element in one[] and do it again.
That part is O(n^2) by itself. The problem is, it might not get the best results the first time through. If you just keep doing it until you don't make any more swaps, you end up with an ugly bubble-type construction which makes it O(n^3) total.
Here's some ugly Java code to demonstrate (also at ideone.com if you want to play with it):
import java.util.Arrays;

static int[] input = {1,2,3,4,5,-6,7,8,9,10,200,-1000,100,250,-720,1080,200,300,400,500,50,74};

public static void main(String[] args) {
    int[] two = new int[input.length/2];
    int[] one = new int[input.length - two.length];
    int totalSum = 0;
    for (int i = 0; i < input.length; i++) {
        totalSum += input[i];
        if (i < one.length)
            one[i] = input[i];
        else
            two[i-one.length] = input[i];
    }
    float goal = totalSum / 2f;
    boolean swapped;
    do {
        swapped = false;
        for (int j = 0; j < one.length; j++) {
            int curSum = sum(one);
            float curBestDiff = Math.abs(goal - curSum);
            int curBestIndex = -1;
            for (int i = 0; i < two.length; i++) {
                int testSum = curSum - one[j] + two[i];
                float diff = Math.abs(goal - testSum);
                if (diff < curBestDiff) {
                    curBestDiff = diff;
                    curBestIndex = i;
                }
            }
            if (curBestIndex >= 0) {
                swapped = true;
                System.out.println("swapping " + one[j] + " and " + two[curBestIndex]);
                int tmp = one[j];
                one[j] = two[curBestIndex];
                two[curBestIndex] = tmp;
            }
        }
    } while (swapped);
    System.out.println(Arrays.toString(one));
    System.out.println(Arrays.toString(two));
    System.out.println("diff = " + Math.abs(sum(one) - sum(two)));
}

static int sum(int[] list) {
    int sum = 0;
    for (int i = 0; i < list.length; i++)
        sum += list[i];
    return sum;
}
Can you provide more information on the upper limit of the input?
For your algorithm, I think you are trying to pick floor(n/2) items and find the maximum sum of value as the array1 sum... (if this is not your original thought then please ignore the following lines)
If this is the case, then the knapsack size should be n/2 instead of sum/2,
but even so, I think it's still not working. The answer is min(|a - b|), and maximizing a is a different issue. For example, with {2,2,10,10} you will get a = 20, b = 4, while the answer is a = b = 12.
To answer the problem, I think I need more information about the upper limit of the input.
I cannot come up with a brilliant dp state, but here is a 3-dimensional state:
dp(i,n,v) := using the first i items, can we pick n of them that make a sum of value v
each state is either 0 or 1 (false or true)
dp(i,n,v) = dp(i-1, n, v) | dp(i-1, n-1, v-V[i])
This dp state is so naive that it has a really high complexity, which usually cannot pass an ACM/ICPC problem, so if possible please provide more information and see if I can come up with a better solution... Hope I can help a bit :)
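For what it's worth, here is a compact Java sketch of essentially that DP (my addition, assuming non-negative inputs). It tracks which sums are reachable using exactly k picked items, then scans the sums reachable with floor(n/2) items; memory is O(n * totalSum), which is exactly why the input limits matter:

static int minSplitDifference(int[] a) {
    int n = a.length, total = 0;
    for (int x : a) total += x;
    int half = n / 2;
    boolean[][] dp = new boolean[half + 1][total + 1]; // dp[k][v]: can k items sum to v?
    dp[0][0] = true;
    for (int x : a) {
        for (int k = half; k >= 1; k--) {       // k and v run downward so each item is used once
            for (int v = total; v >= x; v--) {
                if (dp[k - 1][v - x]) dp[k][v] = true;
            }
        }
    }
    int best = Integer.MAX_VALUE;
    for (int v = 0; v <= total; v++) {
        if (dp[half][v]) best = Math.min(best, Math.abs(total - 2 * v));
    }
    return best;
}

On the question's second example {10,10,10,10,40} this returns 20, matching the expected answer.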
A DP-style solution gives O(n) time: use two arrays, iterate one from start to end calculating the running sum, and the other from end to start doing the same. Finally, iterate from start to end and take the minimal difference.

Is this a "good enough" random algorithm; why isn't it used if it's faster?

I made a class called QuickRandom, and its job is to produce random numbers quickly. It's really simple: just take the old value, multiply by a double, and take the decimal part.
Here is my QuickRandom class in its entirety:
public class QuickRandom {
    private double prevNum;
    private double magicNumber;

    public QuickRandom(double seed1, double seed2) {
        if (seed1 >= 1 || seed1 < 0) throw new IllegalArgumentException("Seed 1 must be >= 0 and < 1, not " + seed1);
        prevNum = seed1;
        if (seed2 <= 1 || seed2 > 10) throw new IllegalArgumentException("Seed 2 must be > 1 and <= 10, not " + seed2);
        magicNumber = seed2;
    }

    public QuickRandom() {
        this(Math.random(), Math.random() * 10);
    }

    public double random() {
        return prevNum = (prevNum*magicNumber)%1;
    }
}
And here is the code I wrote to test it:
public static void main(String[] args) {
    QuickRandom qr = new QuickRandom();
    /*for (int i = 0; i < 20; i ++) {
        System.out.println(qr.random());
    }*/
    // Warm up
    for (int i = 0; i < 10000000; i ++) {
        Math.random();
        qr.random();
        System.nanoTime();
    }
    long oldTime;
    oldTime = System.nanoTime();
    for (int i = 0; i < 100000000; i ++) {
        Math.random();
    }
    System.out.println(System.nanoTime() - oldTime);
    oldTime = System.nanoTime();
    for (int i = 0; i < 100000000; i ++) {
        qr.random();
    }
    System.out.println(System.nanoTime() - oldTime);
}
It is a very simple algorithm that simply multiplies the previous double by a "magic number" double. I threw it together pretty quickly, so I could probably make it better, but strangely, it seems to be working fine.
This is sample output of the commented-out lines in the main method:
0.612201846732229
0.5823974655091941
0.31062451498865684
0.8324473610354004
0.5907187526770246
0.38650264675748947
0.5243464344127049
0.7812828761272188
0.12417247811074805
0.1322738256858378
0.20614642573072284
0.8797579436677381
0.022122999476108518
0.2017298328387873
0.8394849894162446
0.6548917685640614
0.971667953190428
0.8602096647696964
0.8438709031160894
0.694884972852229
Hm. Pretty random. In fact, that would work for a random number generator in a game.
Here is sample output of the non-commented out part:
5456313909
1427223941
Wow! It performs almost 4 times faster than Math.random.
I remember reading somewhere that Math.random used System.nanoTime() and tons of crazy modulus and division stuff. Is that really necessary? My algorithm performs a lot faster and it seems pretty random.
I have two questions:
Is my algorithm "good enough" (for, say, a game, where really random numbers aren't too important)?
Why does Math.random do so much when it seems just simple multiplication and cutting out the decimal will suffice?
Your QuickRandom implementation doesn't really have a uniform distribution. The frequencies are generally higher at the lower values, while Math.random() has a more uniform distribution. Here's an SSCCE which shows that:
package com.stackoverflow.q14491966;

import java.util.Arrays;

public class Test {

    public static void main(String[] args) throws Exception {
        QuickRandom qr = new QuickRandom();
        int[] frequencies = new int[10];
        for (int i = 0; i < 100000; i++) {
            frequencies[(int) (qr.random() * 10)]++;
        }
        printDistribution("QR", frequencies);

        frequencies = new int[10];
        for (int i = 0; i < 100000; i++) {
            frequencies[(int) (Math.random() * 10)]++;
        }
        printDistribution("MR", frequencies);
    }

    public static void printDistribution(String name, int[] frequencies) {
        System.out.printf("%n%s distribution |8000 |9000 |10000 |11000 |12000%n", name);
        for (int i = 0; i < 10; i++) {
            char[] bar = new char[50]; // 50 blank chars
            Arrays.fill(bar, ' ');
            Arrays.fill(bar, 0, Math.max(0, Math.min(50, frequencies[i] / 100 - 80)), '#');
            System.out.printf("0.%dxxx: %6d :%s%n", i, frequencies[i], new String(bar));
        }
    }

}
The average result looks like this:
QR distribution |8000 |9000 |10000 |11000 |12000
0.0xxx: 11376 :#################################
0.1xxx: 11178 :###############################
0.2xxx: 11312 :#################################
0.3xxx: 10809 :############################
0.4xxx: 10242 :######################
0.5xxx: 8860 :########
0.6xxx: 9004 :##########
0.7xxx: 8987 :#########
0.8xxx: 9075 :##########
0.9xxx: 9157 :###########
MR distribution |8000 |9000 |10000 |11000 |12000
0.0xxx: 10097 :####################
0.1xxx: 9901 :###################
0.2xxx: 10018 :####################
0.3xxx: 9956 :###################
0.4xxx: 9974 :###################
0.5xxx: 10007 :####################
0.6xxx: 10136 :#####################
0.7xxx: 9937 :###################
0.8xxx: 10029 :####################
0.9xxx: 9945 :###################
If you repeat the test, you'll see that the QR distribution varies heavily depending on the initial seeds, while the MR distribution is stable. Sometimes it reaches the desired uniform distribution, but more often than not it doesn't. Here's one of the more extreme examples; it goes even beyond the borders of the graph:
QR distribution |8000 |9000 |10000 |11000 |12000
0.0xxx: 41788 :##################################################
0.1xxx: 17495 :##################################################
0.2xxx: 10285 :######################
0.3xxx: 7273 :
0.4xxx: 5643 :
0.5xxx: 4608 :
0.6xxx: 3907 :
0.7xxx: 3350 :
0.8xxx: 2999 :
0.9xxx: 2652 :
What you are describing is a type of random generator called a linear congruential generator. The generator works as follows:
Start with a seed value and multiplier.
To generate a random number:
Multiply the seed by the multiplier.
Set the seed equal to this value.
Return this value.
This generator has many nice properties, but has significant problems as a good random source. The Wikipedia article linked above describes some of the strengths and weaknesses. In short, if you need good random values, this is probably not a very good approach.
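For reference, here is a minimal LCG in the shape just described (a sketch with example constants, not any particular library's generator; these are the multiplier and increment Knuth uses for MMIX, with the modulus implicitly 2^64 via long overflow):

public class TinyLcg {
    private long state;

    public TinyLcg(long seed) {
        this.state = seed;
    }

    public long next() {
        state = state * 6364136223846793005L + 1442695040888963407L; // Knuth's MMIX constants
        return state;
    }
}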
Your random number function is poor, as it has too little internal state -- the number output by the function at any given step is entirely dependent on the previous number. For instance, if we assume that magicNumber is 2 (by way of example), then the sequence:
0.10 -> 0.20
is strongly mirrored by similar sequences:
0.09 -> 0.18
0.11 -> 0.22
In many cases, this will generate noticeable correlations in your game -- for instance, if you make successive calls to your function to generate X and Y coordinates for objects, the objects will form clear diagonal patterns.
Unless you have good reason to believe that the random number generator is slowing your application down (and this is VERY unlikely), there is no good reason to try and write your own.
The real problem with this is that its output histogram depends far too much on the initial seed: much of the time it will end up with a near-uniform output, but a lot of the time it will have distinctly non-uniform output.
Inspired by this article about how bad PHP's rand() function is, I made some random matrix images using QuickRandom and System.Random. This run shows how sometimes the seed can have a bad effect (in this case favouring lower numbers), whereas System.Random is pretty uniform.
QuickRandom
System.Random
Even Worse
If we initialise QuickRandom as new QuickRandom(0.01, 1.03) we get this image:
The Code
using System;
using System.Drawing;
using System.Drawing.Imaging;

namespace QuickRandomTest
{
    public class QuickRandom
    {
        private double prevNum;
        private readonly double magicNumber;

        private static readonly Random rand = new Random();

        public QuickRandom(double seed1, double seed2)
        {
            if (seed1 >= 1 || seed1 < 0) throw new ArgumentException("Seed 1 must be >= 0 and < 1, not " + seed1);
            prevNum = seed1;
            if (seed2 <= 1 || seed2 > 10) throw new ArgumentException("Seed 2 must be > 1 and <= 10, not " + seed2);
            magicNumber = seed2;
        }

        public QuickRandom()
            : this(rand.NextDouble(), rand.NextDouble() * 10)
        {
        }

        public double Random()
        {
            return prevNum = (prevNum * magicNumber) % 1;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var rand = new Random();
            var qrand = new QuickRandom();

            int w = 600;
            int h = 600;

            CreateMatrix(w, h, rand.NextDouble).Save("System.Random.png", ImageFormat.Png);
            CreateMatrix(w, h, qrand.Random).Save("QuickRandom.png", ImageFormat.Png);
        }

        private static Image CreateMatrix(int width, int height, Func<double> f)
        {
            var bitmap = new Bitmap(width, height);
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    var c = (int) (f() * 255);
                    bitmap.SetPixel(x, y, Color.FromArgb(c, c, c));
                }
            }
            return bitmap;
        }
    }
}
One problem with your random number generator is that there is no 'hidden state' - if I know what random number you returned on the last call, I know every single random number you will send until the end of time, since there is only one possible next result, and so on and so on.
Another thing to consider is the 'period' of your random number generator. Obviously with a finite state size, equal to the mantissa portion of a double, it will only be able to return at most 2^52 values before looping. But that's in the best case - can you prove that there are no loops of period 1, 2, 3, 4...? If there are, your RNG will have awful, degenerate behavior in those cases.
In addition, will your random number generation have a uniform distribution for all starting points? If it does not, then your RNG will be biased - or worse, biased in different ways depending on the starting seed.
If you can answer all of these questions, awesome. If you can't, then you know why most people do not re-invent the wheel and use a proven random number generator ;)
(By the way, a good adage is: The fastest code is code that does not run. You could make the fastest random() in the world, but it's no good if it is not very random)
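To make the "no hidden state" point concrete, here is a small demonstration (my addition, with arbitrary example seeds): anyone who sees one output and knows the magic number can clone the generator, because the last output is the entire state:

public class PredictQuickRandom {
    public static void main(String[] args) {
        QuickRandom target = new QuickRandom(0.5, 3.7);      // arbitrary example seeds
        double observed = target.random();                   // attacker sees one output...
        QuickRandom clone = new QuickRandom(observed, 3.7);  // ...and rebuilds the whole state
        for (int i = 0; i < 5; i++) {
            System.out.println(clone.random() == target.random()); // prints true every time
        }
    }
}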
One common test I always did when developing PRNGs was to:
Convert the output to char values
Write the char values to a file
Compress the file
This let me quickly iterate on ideas that were "good enough" PRNGs for sequences of around 1 to 20 megabytes. It also gave a better top-down picture than just inspecting it by eye, as any "good enough" PRNG with half a word of state could quickly exceed your eyes' ability to see the cycle point.
If I was really picky, I might take the good algorithms and run the DIEHARD/NIST tests on them, to get more of an insight, and then go back and tweak some more.
The advantage of the compression test, as opposed to a frequency analysis, is that it is trivially easy to construct a good distribution: simply output a 256-length block containing all chars of values 0 - 255, and do this 100,000 times. But this sequence has a cycle of length 256.
A skewed distribution, even by a small margin, should be picked up by a compression algorithm, particularly if you give it enough (say 1 megabyte) of the sequence to work with. If some characters, or bigrams, or n-grams occur more frequently, a compression algorithm can encode this distribution skew to codes that favor the frequent occurrences with shorter code words, and you get a delta of compression.
Since most compression algorithms are fast, and they require no implementation (as OSs have them just lying around), the compression test is a very useful one for quickly rating pass/fail for a PRNG you might be developing.
Good luck with your experiments!
Oh, I performed this test on the RNG you have above, using the following small mod of your code:

import java.io.*;

public class QuickRandom {
    private double prevNum;
    private double magicNumber;

    public QuickRandom(double seed1, double seed2) {
        if (seed1 >= 1 || seed1 < 0) throw new IllegalArgumentException("Seed 1 must be >= 0 and < 1, not " + seed1);
        prevNum = seed1;
        if (seed2 <= 1 || seed2 > 10) throw new IllegalArgumentException("Seed 2 must be > 1 and <= 10, not " + seed2);
        magicNumber = seed2;
    }

    public QuickRandom() {
        this(Math.random(), Math.random() * 10);
    }

    public double random() {
        return prevNum = (prevNum*magicNumber)%1;
    }

    public static void main(String[] args) throws Exception {
        QuickRandom qr = new QuickRandom();
        FileOutputStream fout = new FileOutputStream("qr20M.bin");
        for (int i = 0; i < 20000000; i ++) {
            fout.write((char)(qr.random()*256));
        }
    }
}
The results were :
Cris-Mac-Book-2:rt cris$ zip -9 qr20M.zip qr20M.bin2
adding: qr20M.bin2 (deflated 16%)
Cris-Mac-Book-2:rt cris$ ls -al
total 104400
drwxr-xr-x 8 cris staff 272 Jan 25 05:09 .
drwxr-xr-x+ 48 cris staff 1632 Jan 25 05:04 ..
-rw-r--r-- 1 cris staff 1243 Jan 25 04:54 QuickRandom.class
-rw-r--r-- 1 cris staff 883 Jan 25 05:04 QuickRandom.java
-rw-r--r-- 1 cris staff 16717260 Jan 25 04:55 qr20M.bin.gz
-rw-r--r-- 1 cris staff 20000000 Jan 25 05:07 qr20M.bin2
-rw-r--r-- 1 cris staff 16717402 Jan 25 05:09 qr20M.zip
I would consider a PRNG good if the output file could not be compressed at all.
To be honest, I did not think your PRNG would do so well; only 16% on ~20 Megs is pretty impressive for such a simple construction. But I still consider it a fail.
The fastest random generator you could implement is this:
XD. Jokes apart, besides everything said here, I'd like to contribute by citing
that testing random sequences "is a hard task" [ 1 ], and there are several tests
that check certain properties of pseudo-random numbers. You can find a lot of them
here: http://www.random.org/analysis/#2005
One simple way to evaluate random generator "quality" is the old Chi Square test.
static double chisquare(int numberCount, int maxRandomNumber) {
    long[] f = new long[maxRandomNumber];
    for (long i = 0; i < numberCount; i++) {
        f[randomint(maxRandomNumber)]++;
    }
    long t = 0;
    for (int i = 0; i < maxRandomNumber; i++) {
        t += f[i] * f[i];
    }
    return (((double) maxRandomNumber * t) / numberCount) - (double) (numberCount);
}
Citing [ 1 ]
The idea of the χ² test is to check whether or not the numbers produced are
spread out reasonably. If we generate N positive numbers less than r, then we'd
expect to get about N/r numbers of each value. But (and this is the essence of
the matter) the frequencies of occurrence of all the values should not be exactly
the same: that wouldn't be random!
We simply calculate the sum of the squares of the frequencies of occurrence of
each value, scaled by the expected frequency, and then subtract off the size of the
sequence. This number, the "χ² statistic," may be expressed mathematically as

χ² = (r * Σ fᵢ²) / N - N     (summing fᵢ, the frequency of value i, over 0 ≤ i < r)

If the χ² statistic is close to r, then the numbers are random; if it is too far away,
then they are not. The notions of "close" and "far away" can be more precisely
defined: tables exist that tell exactly how to relate the statistic to properties of
random sequences. For the simple test that we're performing, the statistic should
be within 2√r of r.
Using this theory and the following code:
abstract class RandomFunction {
    public abstract int randomint(int range);
}

public class test {
    static QuickRandom qr = new QuickRandom();

    static double chisquare(int numberCount, int maxRandomNumber, RandomFunction function) {
        long[] f = new long[maxRandomNumber];
        for (long i = 0; i < numberCount; i++) {
            f[function.randomint(maxRandomNumber)]++;
        }
        long t = 0;
        for (int i = 0; i < maxRandomNumber; i++) {
            t += f[i] * f[i];
        }
        return (((double) maxRandomNumber * t) / numberCount) - (double) (numberCount);
    }

    public static void main(String[] args) {
        final int ITERATION_COUNT = 1000;
        final int N = 5000000;
        final int R = 100000;
        double total = 0.0;
        RandomFunction qrRandomInt = new RandomFunction() {
            @Override
            public int randomint(int range) {
                return (int) (qr.random() * range);
            }
        };
        for (int i = 0; i < ITERATION_COUNT; i++) {
            total += chisquare(N, R, qrRandomInt);
        }
        System.out.printf("Ave Chi2 for QR: %f \n", total / ITERATION_COUNT);

        total = 0.0;
        RandomFunction mathRandomInt = new RandomFunction() {
            @Override
            public int randomint(int range) {
                return (int) (Math.random() * range);
            }
        };
        for (int i = 0; i < ITERATION_COUNT; i++) {
            total += chisquare(N, R, mathRandomInt);
        }
        System.out.printf("Ave Chi2 for Math.random: %f \n", total / ITERATION_COUNT);
    }
}
I got the following result:
Ave Chi2 for QR: 108965,078640
Ave Chi2 for Math.random: 99988,629040
Which, for QuickRandom, is far away from r (outside of r ± 2√r).
That being said, QuickRandom may be fast, but (as stated in other answers) it is not good as a random number generator.
[ 1 ] SEDGEWICK, ROBERT, Algorithms in C, Addison-Wesley Publishing Company, 1990, pages 516 to 518
I put together a quick mock-up of your algorithm in JavaScript to evaluate the results. It generates 100,000 random integers from 0 to 99 and tracks the occurrences of each integer.
The first thing I notice is that you are more likely to get a low number than a high number. You see this the most when seed1 is high and seed2 is low. In a couple of instances, I got only 3 numbers.
At best, your algorithm needs some refining.
If the Math.Random() function calls the operating system to get the time of day, then you cannot compare it to your function. Your function is a PRNG, whereas that function is striving for real random numbers. Apples and oranges.
Your PRNG may be fast, but it does not have enough state information to achieve a long period before it repeats (and its logic is not sophisticated enough to even achieve the periods that are possible with that much state information).
Period is the length of the sequence before your PRNG begins to repeat itself. This happens as soon as the PRNG machine makes a state transition to a state which is identical to some past state. From there, it will repeat the transitions which began in that state. Another problem with PRNG's can be a low number of unique sequences, as well as degenerate convergence on a particular sequence which repeats. There can also be undesirable patterns. For instance, suppose that a PRNG looks fairly random when the numbers are printed in decimal, but an inspection of the values in binary shows that bit 4 is simply toggling between 0 and 1 on each call. Oops!
Take a look at the Mersenne Twister and other algorithms. There are ways to strike a balance between the period length and CPU cycles. One basic approach (used in the Mersenne Twister) is to cycle around in the state vector. That is to say, when a number is being generated, it is not based on the entire state, just on a few words from the state array subject to a few bit operations. But at each step, the algorithm also moves around in the array, scrambling the contents a little bit at a time.
There are many, many pseudo-random number generators out there. For example Knuth's ranarray, the Mersenne Twister, or look for LFSR generators. Knuth's monumental "Seminumerical Algorithms" analyzes the area and proposes some linear congruential generators (simple to implement, fast).
But I'd suggest you just stick to java.util.Random or Math.random; they're fast and at least OK for occasional use (i.e., games and such). If you are just paranoid about the distribution (some Monte Carlo program, or a genetic algorithm), check out their implementation (source is available somewhere), and seed them with some truly random number, either from your operating system or from random.org. If this is required for some application where security is critical, you'll have to dig yourself. And as in that case you shouldn't believe what some colored square with missing bits spouts here, I'll shut up now.
It is very unlikely that random number generation performance would be an issue for any use case you come up with, unless you are accessing a single Random instance from multiple threads (because Random is synchronized).
However, if that really is the case and you need lots of random numbers fast, your solution is far too unreliable. Sometimes it gives good results, sometimes it gives horrible results (based on the initial settings).
If you want the same numbers that the Random class gives you, only faster, you can get rid of the synchronization in there:
public class QuickRandom {
    private long seed;

    private static final long MULTIPLIER = 0x5DEECE66DL;
    private static final long ADDEND = 0xBL;
    private static final long MASK = (1L << 48) - 1;

    public QuickRandom() {
        this((8682522807148012L * 181783497276652981L) ^ System.nanoTime());
    }

    public QuickRandom(long seed) {
        this.seed = (seed ^ MULTIPLIER) & MASK;
    }

    public double nextDouble() {
        return (((long)(next(26)) << 27) + next(27)) / (double)(1L << 53);
    }

    private int next(int bits) {
        seed = (seed * MULTIPLIER + ADDEND) & MASK;
        return (int)(seed >>> (48 - bits));
    }
}
I simply took the java.util.Random code and removed the synchronization which results in twice the performance compared to the original on my Oracle HotSpot JVM 7u9. It is still slower than your QuickRandom, but it gives much more consistent results. To be precise, for the same seed values and single threaded applications, it gives the same pseudo-random numbers as the original Random class would.
This code is based on the current java.util.Random in OpenJDK 7u which is licensed under GNU GPL v2.
EDIT 10 months later:
I just discovered that you don't even have to use my code above to get an unsynchronized Random instance. There's one in the JDK, too!
Look at Java 7's ThreadLocalRandom class. The code inside it is almost identical to my code above. The class is simply a local-thread-isolated Random version suitable for generating random numbers quickly. The only downside I can think of is that you can't set its seed manually.
Example usage:
Random random = ThreadLocalRandom.current();
'Random' is more than just about getting numbers.... what you have is pseudo-random
If pseudo-random is good enough for your purposes, then sure, it's way faster (and XOR+Bitshift will be faster than what you have)
Rolf
Edit:
OK, after being too hasty in this answer, let me answer the real reason why your code is faster:
From the JavaDoc for Math.Random()
This method is properly synchronized to allow correct use by more than one thread. However, if many threads need to generate pseudorandom numbers at a great rate, it may reduce contention for each thread to have its own pseudorandom-number generator.
This is likely why your code is faster.
java.util.Random is not much different; it's a basic LCG described by Knuth. However, it has 2 main advantages/differences:
thread safety - each update is a CAS, which is more expensive than a simple write and needs a branch (even if perfectly predicted single-threaded). Depending on the CPU it could be a significant difference.
undisclosed internal state - this is very important for anything non-trivial. You want the random numbers not to be predictable.
Below is the main routine generating 'random' integers in java.util.Random:

protected int next(int bits) {
    long oldseed, nextseed;
    AtomicLong seed = this.seed;
    do {
        oldseed = seed.get();
        nextseed = (oldseed * multiplier + addend) & mask;
    } while (!seed.compareAndSet(oldseed, nextseed));
    return (int)(nextseed >>> (48 - bits));
}
If you remove the AtomicLong and the undisclosed state (i.e. use all bits of the long), you'd get more performance than the double multiplication/modulo.
Last note: Math.random should not be used for anything but simple tests; it's prone to contention, and if you have even a couple of threads calling it concurrently the performance degrades. One little-known historical feature of it is the introduction of CAS in Java - to beat an infamous benchmark (first by IBM via intrinsics, and then Sun made "CAS from Java").
This is the random function I use for my games. It's pretty fast, and has good (enough) distribution.
public class FastRandom {
    public static int randSeed;

    public static final int random() {
        // this makes a 'nod' to being potentially called from multiple threads
        int seed = randSeed;
        seed *= 1103515245;
        seed += 12345;
        randSeed = seed;
        return seed;
    }

    public static final int random(int range) {
        return ((random()>>>15) * range) >>> 17;
    }

    public static final boolean randomBoolean() {
        return random() > 0;
    }

    public static final float randomFloat() {
        return (random()>>>8) * (1.f/(1<<24));
    }

    public static final double randomDouble() {
        return (random()>>>8) * (1.0/(1<<24));
    }
}
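A brief usage sketch (my addition; the seeding line is an assumption, since randSeed otherwise defaults to 0):

FastRandom.randSeed = (int) System.nanoTime(); // seed once, e.g. from the clock
int roll = FastRandom.random(6) + 1;           // die roll in 1..6
float f = FastRandom.randomFloat();            // float in [0, 1)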

Dealing with overflow in Java without using BigInteger

Suppose I have a method to calculate combinations of r items from n items:
public static long combi(int n, int r) {
    if (r == n) return 1;
    long numr = 1;
    for (int i = n; i > (n-r); i--) {
        numr *= i;
    }
    return numr / fact(r);
}

public static long fact(int n) {
    long rs = 1;
    if (n < 2) return 1;
    for (int i = 2; i <= n; i++) {
        rs *= i;
    }
    return rs;
}
As you can see it involves factorials, which can easily overflow the result. For example, for fact(200) I get zero. The question is: why do I get zero?
Secondly, how do I deal with overflow in the above context? The method should return the largest possible number that fits in a long if the result is too big, instead of returning a wrong answer.
One approach (but this could be wrong) is that if the result exceeds some large number, for example 1,400,000,000, then return the remainder of the result modulo
1,400,000,001. Can you explain what this means and how I can do that in Java?
Note that I do not guarantee that the above methods are accurate for calculating factorials and combinations. Extra bonus if you can find errors and correct them.
Note that I can only use int or long, and if it is unavoidable, can also use double. Other data types are not allowed.
I am not sure who marked this question as homework. This is NOT homework. I wish it were homework and I were back at university as a young student. But I am old, with more than 10 years working as a programmer. I just want to practice developing highly optimized solutions in Java. In our time at university, the Internet did not even exist. Today's students are lucky that they can even post their homework on a site like SO.
Use the multiplicative formula, instead of the factorial formula.
Since it's homework, I won't just give you a solution. However, a hint I will give is that instead of calculating two large numbers and dividing the result, try calculating both together. E.g. calculate the numerator until it's about to overflow, then calculate the denominator. In this last step you can choose to divide the numerator instead of multiplying the denominator. This stops both values from getting really large when the ratio of the two is relatively small.
I got this result before an overflow was detected:
combi(61,30) = 232714176627630544 which is 2.52% of Long.MAX_VALUE
The only "bug" I found in your code is not having any overflow detection, since you know it's likely to be a problem. ;)
To answer your first question (why did you get zero), the values of fact() as computed by modular arithmetic were such that you hit a result with all 64 bits zero! Change your fact code to this:
public static long fact(int n) {
    long rs = 1;
    if (n < 2) return 1;
    for (int i = 2; i <= n; i++) {
        rs *= i;
        System.out.println(rs);
    }
    return rs;
}
Take a look at the outputs! They are very interesting.
Now onto the second question....
It looks like you want to give exact integer (er, long) answers for values of n and r that fit, and throw an exception if they do not. This is a fair exercise.
To do this properly you should not use factorial at all. The trick is to recognize that C(n,r) can be computed incrementally by adding terms. This can be done using recursion with memoization, or by the multiplicative formula mentioned by Stefan Kendall.
As you accumulate the results into a long variable that you will use for your answer, check the value after each addition to see if it goes negative. When it does, throw an exception. If it stays positive, you can safely return your accumulated result as your answer.
To see why this works consider Pascal's triangle
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
which is generated like so:
C(0,0) = 1 (base case)
C(1,0) = 1 (base case)
C(1,1) = 1 (base case)
C(2,0) = 1 (base case)
C(2,1) = C(1,0) + C(1,1) = 2
C(2,2) = 1 (base case)
C(3,0) = 1 (base case)
C(3,1) = C(2,0) + C(2,1) = 3
C(3,2) = C(2,1) + C(2,2) = 3
...
When computing the value of C(n,r) using memoization, store the results of recursive invocations as you encounter them in a suitable structure such as an array or hashmap. Each value is the sum of two smaller numbers. The numbers start small and are always positive. Whenever you compute a new value (let's call it a subterm) you are adding smaller positive numbers. Recall from your computer organization class that whenever you add two modular positive numbers, there is an overflow if and only if the sum is negative. It only takes one overflow in the whole process for you to know that the C(n,r) you are looking for is too large.
This line of argument could be turned into a nice inductive proof, but that might be for another assignment, and perhaps another StackExchange site.
ADDENDUM
Here is a complete application you can run. (I haven't figured out how to get Java to run on codepad and ideone).
/**
 * A demo showing how to do combinations using recursion and memoization, while detecting
 * results that cannot fit in 64 bits.
 */
public class CombinationExample {

    /**
     * Returns the number of combinations of r things out of n total.
     */
    public static long combi(int n, int r) {
        if (n < 0 || r < 0 || r > n) {
            throw new IllegalArgumentException("Nonsense args");
        }
        long[][] cache = new long[n + 1][n + 1];
        return c(n, r, cache);
    }

    /**
     * Recursive helper for combi.
     */
    private static long c(int n, int r, long[][] cache) {
        if (r == 0 || r == n) {
            return cache[n][r] = 1;
        } else if (cache[n][r] != 0) {
            return cache[n][r];
        } else {
            cache[n][r] = c(n-1, r-1, cache) + c(n-1, r, cache);
            if (cache[n][r] < 0) {
                throw new RuntimeException("Woops too big");
            }
            return cache[n][r];
        }
    }

    /**
     * Prints out a few example invocations.
     */
    public static void main(String[] args) {
        String[] data = ("0,0,3,1,4,4,5,2,10,0,10,10,10,4,9,7,70,8,295,100," +
                "34,88,-2,7,9,-1,90,0,90,1,90,2,90,3,90,8,90,24").split(",");
        for (int i = 0; i < data.length; i += 2) {
            int n = Integer.valueOf(data[i]);
            int r = Integer.valueOf(data[i + 1]);
            System.out.printf("C(%d,%d) = ", n, r);
            try {
                System.out.println(combi(n, r));
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
    }
}
Hope it is useful. It's just a quick hack so you might want to clean it up a little.... Also note that a good solution would use proper unit testing, although this code does give nice output.
You can use the java.math.BigInteger class to deal with arbitrarily large numbers.
If you make the return type double, it can handle up to fact(170), but you'll lose some precision because of the nature of double (I don't know whether you need exact precision for such huge numbers).
For input over 170, the result is infinity.
Note that java.lang.Long includes constants for the min and max values for a long.
When you add together two signed two's-complement positive values of a given size and the result overflows, the result will be negative. Bit-wise, it will be the same bits you would have gotten with a larger representation; only the high-order bits are truncated away.
Multiplying is a bit more complicated, unfortunately, since you can overflow by more than one bit.
But you can multiply in parts. Basically you break the two multipliers into low and high halves (or more than that, if you already have an "overflowed" value), perform the four possible multiplications between the halves, then recombine the results. (It's really just like doing decimal multiplication by hand, but each "digit" is, say, 32 bits.)
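If detecting the overflow is enough (rather than computing the full wide product), here is a small sketch (my addition; Math.multiplyExact is standard since Java 8) that also honours the question's "return the largest possible long" request:

public static long fact(int n) {
    long rs = 1;
    for (int i = 2; i <= n; i++) {
        try {
            rs = Math.multiplyExact(rs, i); // throws ArithmeticException on overflow
        } catch (ArithmeticException overflow) {
            return Long.MAX_VALUE;          // clamp instead of returning a wrapped value
        }
    }
    return rs;
}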
You can copy the code from java.math.BigInteger to deal with arbitrarily large numbers. Go ahead and plagiarize.
