Arcane isPrime method in Java

Consider the following method:
public static boolean isPrime(int n) {
    return !(new String(new char[n])).matches(".?|(..+?)\\1+");
}
I've never been a regular expression guru, so can anyone fully explain how this method actually works? Furthermore, is it efficient compared to other possible methods for determining whether an integer is prime?

First, note that this regex applies to numbers represented in a unary counting system, i.e.
1 is 1
11 is 2
111 is 3
1111 is 4
11111 is 5
111111 is 6
1111111 is 7
and so on. Really, any character can be used (hence the .s in the expression), but I'll use "1".
Second, note that this regex matches composite (non-prime) numbers; thus negation detects primality.
Explanation:
The first half of the expression,
.?
says that the strings "" (0) and "1" (1) are matches, i.e. not prime (by definition, though arguable.)
The second half, in simple English, says:
Match the shortest string whose length is at least 2, for example, "11" (2). Now, see if we can match the entire string by repeating it. Does "1111" (4) match? Does "111111" (6) match? Does "11111111" (8) match? And so on. If not, then try it again for the next shortest string, "111" (3). Etc.
You can now see how, if the original string can't be matched as a multiple of its substrings, then by definition, it's prime!
BTW, the non-greedy operator ? is what makes the "algorithm" start from the shortest and count up.
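To see the composite-matching half in action, here is a small demo (my own illustration, not code from the original post) that prints which unary strings it accepts; a literal '1' is substituted for the dots, since the filler character does not matter:
public class RegexPrimeDemo {
    public static void main(String[] args) {
        for (int n = 0; n <= 12; n++) {
            // Build the unary representation of n ("111..." with n ones).
            String unary = new String(new char[n]).replace('\0', '1');
            // Second alternative only: can a block of >= 2 ones, repeated, cover the whole string?
            boolean composite = unary.matches("(11+?)\\1+");
            System.out.println(n + " -> " + (composite ? "matched (composite)" : "not matched"));
        }
    }
}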
Efficiency:
It's interesting, but certainly not efficient, by various arguments, some of which I'll consolidate below:
As @TeddHopp notes, the well-known sieve-of-Eratosthenes approach would not bother to check multiples of integers such as 4, 6, and 9, which have already been "visited" while checking multiples of 2 and 3 (see the sketch after this list). Alas, this regex approach exhaustively checks every smaller integer.
As @PetarMinchev notes, we can "short-circuit" the multiples-checking scheme once we reach the square root of the number. We can do this because any factor greater than the square root must be paired with a factor less than the square root (otherwise two factors both greater than the square root would produce a product greater than the number), and if such a greater factor exists, we would already have encountered, and matched, the lesser factor.
As @Jesper and @Brian note with concision, from a non-algorithmic perspective, consider how this regex approach begins by allocating memory to store the string, e.g. a char[9000] for 9000. Well, that was easy, wasn't it? ;)
As @Foon notes, there exist probabilistic methods which may be more efficient for larger numbers, though they may not always be correct (turning up pseudoprimes instead). But there are also deterministic tests that are 100% accurate and far more efficient than sieve-based methods. Wolfram MathWorld has a nice summary.
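For comparison, here is a minimal sieve-of-Eratosthenes sketch (my own illustration, not code from the answers) that marks the composites below a limit without re-checking numbers already crossed off:
// Marks composite[i] = true for every composite i < limit; primes stay false.
static boolean[] sieve(int limit) {
    boolean[] composite = new boolean[limit];
    for (int p = 2; (long) p * p < limit; p++) {
        if (composite[p]) continue;            // p was already crossed off by a smaller prime
        for (int m = p * p; m < limit; m += p) {
            composite[m] = true;               // every multiple of p from p*p upward is composite
        }
    }
    return composite;                          // composite[i] == false means i is prime, for i >= 2
}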

The unary characteristics of primes, and why this works, have already been covered. So here's a test using a conventional approach and this approach:
public class Main {
    public static void main(String[] args) {
        long time = System.nanoTime();
        for (int i = 2; i < 10000; i++) {
            isPrimeOld(i);
        }
        time = System.nanoTime() - time;
        System.out.println(time + " ns (" + time / 1000000 + " ms)");

        time = System.nanoTime();
        for (int i = 2; i < 10000; i++) {
            isPrimeRegex(i);
        }
        time = System.nanoTime() - time;
        System.out.println(time + " ns (" + time / 1000000 + " ms)");
        System.out.println("Done");
    }

    public static boolean isPrimeRegex(int n) {
        return !(new String(new char[n])).matches(".?|(..+?)\\1+");
    }

    public static boolean isPrimeOld(int n) {
        if (n == 2)
            return true;
        if (n < 2)
            return false;
        if ((n & 1) == 0)
            return false;
        int limit = (int) Math.round(Math.sqrt(n));
        for (int i = 3; i <= limit; i += 2) {
            if (n % i == 0)
                return false;
        }
        return true;
    }
}
This test checks primality for every number from 2 up to 9,999. And here's its output on a relatively powerful server:
8537795 ns (8 ms)
30842526146 ns (30842 ms)
Done
So it is grossly inefficient once the numbers get large enough. (For numbers up to 999, the regex version runs in about 400 ms.) It's fast for small numbers, but it's still faster to generate the primes up to 9,999 the conventional way than it is to generate the primes up to just 99 with the regex (23 ms).

This is not a really efficient way to check if a number is prime (it checks every divisor).
An efficient way is to check for divisors only up to sqrt(number). That is if you want to be certain whether a number is prime; otherwise there are probabilistic primality checks which are faster, but not 100% correct.

Related

Is there even an algorithm for 2^(n) - 1 which lies in Ө(1)?

So I have a question about an algorithm I'm supposed to "invent"/"find". It's an algorithm which calculates 2^(n) - 1 in Ө(n^n), in Ө(1), and in Ө(n).
I thought about it for several hours but couldn't find a solution for the first two (the last one was the easiest imo; I posted that algorithm below). I'm just not skilled enough to "invent"/"find" a very slow and a very fast algorithm.
So far my algorithms are (in pseudocode):
The one for Ө(n):
int f(int n) {
    int number = 2
    if (n == 0) then return 0
    if (n == 1) then return 1
    while (n > 1) {
        number = number * 2
        n--
    }
    number = number - 1
    return number
}
A simple and kinda obvious one which uses recursion, though I don't know how fast it is (it would be nice if someone could tell me that):
int f(int n) {
    if (n == 0) then return 0
    if (n == 1) then return 1
    return 3*f(n-1) - 2*f(n-2)
}
Assuming n is not bounded by any constant (and the output is not a simple int, but a data type that can hold arbitrarily large integers), there is no algorithm that yields 2^n - 1 in Ө(1): the size of the output itself is Ө(log(2^n)) = Ө(n) bits. If such an algorithm existed, running in constant time and performing fewer than C operations, then for n = 2^(C+1) you would need more than C operations just to write out the output, which contradicts the assumption that C is an upper bound. So there is no such algorithm.
For Ө(n^n): if you have a more efficient algorithm (the Ө(n) one, for example), you can add a pointless loop that runs an extra n^n iterations and does nothing of importance; that makes the whole algorithm Ө(n^n).
There is also a Ө(log(n)*M(log n)) algorithm, using exponentiation by squaring and then simply subtracting 1 from the result. Here M(x) is the complexity of your multiplication operator for numbers containing x digits.
As @kajacx comments, you can even improve the exponentiation-by-squaring approach by applying Fourier-transform-based multiplication.
Something like:
HugeInt h = 1;
h = h << n;
h = h - 1;
Obviously HugeInt is pseudo-code for an integer type that can be of arbitrary size allowing for any n.
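In Java this pseudo-code translates almost one-to-one to BigInteger, for example:
import java.math.BigInteger;

// 2^n - 1 via a single left shift and a subtraction; the shift writes the n bits
// of the result directly, so the work is dominated by the size of the output itself.
static BigInteger twoToNMinusOne(int n) {
    return BigInteger.ONE.shiftLeft(n).subtract(BigInteger.ONE);
}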
=====
Look at amit's answer instead!
The Ө(n^n) one is too tricky for me, but a real Ө(1) algorithm on any "binary" architecture would be:
return n-1 bits filled with 1
(assuming your architecture can allocate and fill n-1 bits in constant time)
;)

Luhn checksum validation in Java

I have to replicate the Luhn algorithm in Java; the problem I face is how to implement it in an efficient and elegant way (not a requirement, but that is what I want).
The Luhn algorithm works like this:
You take a number, let's say 56789
loop over the next steps till there are no digits left
You pick the left-most digit and add it to the total sum. sum = 5
You discard this digit and go the next. number = 6789
You double this digit; if the result has more than one digit, you take the number apart and add the digits to the sum separately: 2*6 = 12, so sum = 5 + 1 = 6 and then sum = 6 + 2 = 8.
Additional restrictions
For this particular problem I was required to read all digits one at a time and do computations on each of them separately before moving on. I also assume that all numbers are positive.
The problems I face and the questions I have
As said before, I try to solve this in an elegant and efficient way. That's why I don't want to invoke toString() on the number to access the individual digits, which requires a lot of converting. I also can't use the usual modulo approach, because of the restriction above that once I read a digit I should do the computations on it right away. I could only use modulo if I knew the number of digits in advance, but that feels like I'd first have to count all the digits one by one, which is against the restriction. The only way I can think of now requires a lot of computation and only ever gives the first digit*:
int firstDigit(int x) {
    while (x > 9) {
        x /= 10;
    }
    return x;
}
Found here: https://stackoverflow.com/a/2968068/3972558
*However, when I think about it, this is basically a different and weird way of using the length of the number, by dividing it repeatedly until only one digit is left.
So basically I am stuck: I think I must use the length property of a number, which it does not really have, so I would have to find it by hand. Is there a good way to do this? Now I am thinking that I should use modulo in combination with the length of the number.
That way I'd know whether the total number of digits is even or odd, and then I could do the computations from right to left. Just for fun, I think I could use this to get the length of a number efficiently: https://stackoverflow.com/a/1308407/3972558
This question appeared in the book Think like a programmer.
You can optimise it by unrolling the loop once (or as many times as you like). This will be close to twice as fast for large numbers, but will make small numbers slower. If you have an idea of the typical range of your numbers, you can decide how far to unroll this loop.
int firstDigit(int x) {
    while (x > 99)
        x /= 100;
    if (x > 9)
        x /= 10;
    return x;
}
Use org.apache.commons.validator.routines.checkdigit.LuhnCheckDigit.isValid().
Maven Dependency:
<dependency>
    <groupId>commons-validator</groupId>
    <artifactId>commons-validator</artifactId>
    <version>1.4.0</version>
</dependency>
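A minimal usage sketch (the test values are the classic Luhn example number and a corrupted copy of it, not numbers from the question):
import org.apache.commons.validator.routines.checkdigit.LuhnCheckDigit;

public class LuhnDemo {
    public static void main(String[] args) {
        LuhnCheckDigit luhn = new LuhnCheckDigit();
        // isValid expects the full number including its trailing check digit.
        System.out.println(luhn.isValid("79927398713"));  // true
        System.out.println(luhn.isValid("79927398710"));  // false
    }
}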
Normally you would process the number from right to left, using division by 10 to shift the digits and modulo 10 to extract the last one. You can still use this technique when processing the number from left to right: just divide by 1000000000 to extract the first digit and multiply by 10 to shift left:
0000056789
0000567890
0005678900
0056789000
0567890000
5678900000
6789000000
7890000000
8900000000
9000000000
Some of those numbers exceed the maximum value of int. If you have to support the full range of input, you will have to store the number as a long:
static int checksum(int x) {
    long n = x;
    int sum = 0;
    while (n != 0) {
        long d = 1000000000L;
        int digit = (int) (n / d);
        n %= d;
        n *= 10;
        // add digit to sum
    }
    return sum;
}
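A sketch of how the "// add digit to sum" part could be completed, assuming the fixed 10-digit (leading-zero) representation used above so that each digit's position from the right is known up front; this is my completion, not the original answer's:
static int luhnSum(int x) {
    long n = x;
    int sum = 0;
    long d = 1000000000L;
    for (int pos = 10; pos >= 1; pos--) {   // pos = digit position counted from the right
        int digit = (int) (n / d);
        n = (n % d) * 10;
        if (pos % 2 == 0) {                 // every second digit from the right is doubled
            digit *= 2;
            if (digit > 9) digit -= 9;      // equivalent to adding the two digits of the product
        }
        sum += digit;
    }
    return sum;                             // the number passes the Luhn check when luhnSum(x) % 10 == 0
}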
As I understand it, you will eventually need to read every digit, so what is wrong with converting the initial number to a String (and therefore a char[]) and then implementing the algorithm by iterating over that char array?
The JDK implementation of Integer.toString is rather optimized, so you would be hard-pressed to beat it with your own optimizations: e.g. it uses lookup tables for the conversion, converts two chars at once, etc.
final static int[] sizeTable = { 9, 99, 999, 9999, 99999, 999999, 9999999,
                                 99999999, 999999999, Integer.MAX_VALUE };

// Requires positive x
static int stringSize(int x) {
    for (int i = 0; ; i++)
        if (x <= sizeTable[i])
            return i + 1;
}
This was just an example, but feel free to check the complete implementation :)
I would first convert the number to a kind of BCD (binary coded decimal). I'm not sure I can find a better optimisation than the JDK's Integer.toString() conversion method, but as you said, you did not want to use it:
List<Byte> bcd(int i) {
    List<Byte> l = new ArrayList<Byte>(10); // max size for an integer, to avoid reallocations
    if (i == 0) {
        l.add((byte) i);
    } else {
        while (i != 0) {
            l.add((byte) (i % 10));
            i = i / 10;
        }
    }
    return l;
}
It is more or less what you proposed to get the first digit, but now you have all your digits in one single pass and can use them for your algorithm.
I proposed byte because it is big enough, but since Java always converts to int to do computations, it might be more efficient to use a List<Integer> directly, even if it wastes memory.
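As a hypothetical usage, note that the digits come out least-significant first, which is exactly the order in which the standard Luhn rule doubles every second digit:
List<Byte> digits = bcd(56789);       // yields [9, 8, 7, 6, 5] - least significant digit first
int sum = 0;
for (int idx = 0; idx < digits.size(); idx++) {
    int d = digits.get(idx);
    if (idx % 2 == 1) {               // every second digit from the right is doubled
        d *= 2;
        if (d > 9) d -= 9;            // same as summing the two digits of the product
    }
    sum += d;
}
// 56789 -> 9 + (8*2 -> 7) + 7 + (6*2 -> 3) + 5 = 31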

Number Guessing Game Over Intervals

I have just started my long path to becoming a better coder on CodeChef. People begin with the problems marked 'Easy' and I have done the same.
The Problem
The problem statement defines the following -:
n, where 1 <= n <= 10^9. This is the integer which Johnny is keeping secret.
k, where 1 <= k <= 10^5. For each test case or instance of the game, Johnny provides exactly k hints to Alice.
A hint is of the form op num Yes/No, where -
op is an operator from <, >, =.
num is an integer, again satisfying 1 <= num <= 10^9.
Yes or No are answers to the question: Does the relation n op num hold?
If the answer to the question is correct, Johnny has uttered a truth. Otherwise, he is lying.
Each hint is fed to the program and the program determines whether it is the truth or possibly a lie. My job is to find the minimum possible number of lies.
Now CodeChef's Editorial answer uses the concept of segment trees, which I cannot wrap my head around at all. I was wondering if there is an alternative data structure or method to solve this question, maybe a simpler one, considering it is in the 'Easy' category.
This is what I tried -:
class Solution //Represents a test case.
{
HashSet<SolutionObj> set = new HashSet<SolutionObj>(); //To prevent duplicates.
BigInteger max = new BigInteger("100000000"); //Max range.
BigInteger min = new BigInteger("1"); //Min range.
int lies = 0; //Lies counter.
void addHint(String s)
{
String[] vals = s.split(" ");
set.add(new SolutionObj(vals[0], vals[1], vals[2]));
}
void testHints()
{
for(SolutionObj obj : set)
{
//Given number is not in range. Lie.
if(obj.bg.compareTo(min) == -1 || obj.bg.compareTo(max) == 1)
{
lies++;
continue;
}
if(obj.yesno)
{
if(obj.operator.equals("<"))
{
max = new BigInteger(obj.bg.toString()); //Change max value
}
else if(obj.operator.equals(">"))
{
min = new BigInteger(obj.bg.toString()); //Change min value
}
}
else
{
//Still to think of this portion.
}
}
}
}
class SolutionObj //Represents a single hint.
{
String operator;
BigInteger bg;
boolean yesno;
SolutionObj(String op, String integer, String yesno)
{
operator = op;
bg = new BigInteger(integer);
if(yesno.toLowerCase().equals("yes"))
this.yesno = true;
else
this.yesno = false;
}
@Override
public boolean equals(Object o)
{
if(o instanceof SolutionObj)
{
SolutionObj s = (SolutionObj) o; //Make the cast
if(this.yesno == s.yesno && this.bg.equals(s.bg)
&& this.operator.equals(s.operator))
return true;
}
return false;
}
@Override
public int hashCode()
{
return this.bg.intValue();
}
}
Obviously this partial solution is incorrect, save for the range check that I have done before entering the if(obj.yesno) portion. I was thinking of updating the range according to the hints provided, but that approach has not borne fruit. How should I be approaching this problem, apart from using segment trees?
Consider the following approach, which may be easier to understand. Picture the 1D axis of integers, and place the k hints on it. Every hint can be regarded as '(' or ')' or '=' (greater than, less than, or equal, respectively).
Example:
-----(---)-------(--=-----)-----------)
Now, the true value is somewhere on one of the 40 values of this axis, but actually only 8 segments are interesting to check, since anywhere inside a segment the number of true/false hints remains the same.
That means you can scan the hints according to their ordering on the axis, and maintain a counter of the true hints at that point.
In the example above it goes like this:
segment counter
-----------------------
-----( 3
--- 4
)-------( 3
-- 4
= 5 <---maximum
----- 4
)----------- 3
) 2
This algorithm only requires sorting the k hints and then scanning them. It's near linear in k (O(k*log k), with no dependence on n), so it should have a reasonable running time.
Notes:
1) In practice the hints may have non-distinct positions, so you'll have to handle all hints of the same type on the same position together.
2) If you need to return the minimum set of lies, then you should maintain a set rather than a counter. That shouldn't have an effect on the time complexity if you use a hash set.
Calculate the number of lies if the target number = 1 (store this in a variable lies).
Let target = 1.
Sort and group the statements by their respective values.
Iterate through the statements.
Update target to the current statement group's value. Update lies according to how many of those statements would become either true or false.
Then update target to that value + 1 (Why do this? Consider when you have > 5 and < 7 - 6 may be the best value) and update lies appropriately (skip this step if the next statement group's value is this value).
Return the minimum value for lies.
Running time:
O(k) for the initial calculation.
O(k log k) for the sort.
O(k) for the iteration.
O(k log k) total.
My idea for this problem is similar to how Eyal Schneider views it. Denoting '>' as greater than, '<' as less than and '=' as equals, we can sort all the hints by their num and scan through the interesting points one by one.
For each point i, we keep the number of '<' and '=' hints from 0 up to that point (in an array int[] lessAndEqual) and the number of '>' and '=' hints from that point onward (in an array int[] greaterAndEqual). We can easily see that the number of lies at a particular point i is equal to
lessAndEqual[i] + greaterAndEqual[i + 1]
We can easily fill the lessAndEqual and greaterAndEqual arrays with two scans in O(k), after sorting all the hints in O(k log k), so the total time complexity is O(k log k).
Note: special treatment is needed for hints whose num values are equal. Also notice that the range for num is 10^9, which requires some form of coordinate compression to fit the arrays into memory.
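For reference, here is a brute-force sketch of the candidate-point idea shared by the answers above: the lie count can only change at a hint's value or just after it, so it is enough to test those points. It is quadratic in k for clarity; the prefix/suffix counting described above brings this down to O(k log k). Class and method names are illustrative, not from the original posts.
import java.util.List;
import java.util.TreeSet;

class HintCheck {

    // One hint: "op num Yes/No" -> Johnny claims the relation (n op num) is Yes or No.
    static class Hint {
        final char op;       // '<', '>' or '='
        final long num;
        final boolean yes;
        Hint(char op, long num, boolean yes) { this.op = op; this.num = num; this.yes = yes; }

        boolean truthfulFor(long n) {
            boolean relation = (op == '<') ? n < num : (op == '>') ? n > num : n == num;
            return relation == yes;   // the hint is honest iff the claimed answer matches
        }
    }

    // Minimum number of lies over all n in [1, 10^9].
    static int minLies(List<Hint> hints) {
        // Candidate values: 1, every hinted num, and num + 1.
        TreeSet<Long> candidates = new TreeSet<>();
        candidates.add(1L);
        for (Hint h : hints) {
            candidates.add(h.num);
            candidates.add(h.num + 1);
        }
        int best = Integer.MAX_VALUE;
        for (long n : candidates) {
            if (n < 1 || n > 1_000_000_000L) continue;   // n must stay in the allowed range
            int lies = 0;
            for (Hint h : hints) {
                if (!h.truthfulFor(n)) lies++;
            }
            best = Math.min(best, lies);
        }
        return best;
    }
}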

Dealing with overflow in Java without using BigInteger

Suppose I have a method to calculate combinations of r items from n items:
public static long combi(int n, int r) {
    if (r == n) return 1;
    long numr = 1;
    for (int i = n; i > (n - r); i--) {
        numr *= i;
    }
    return numr / fact(r);
}

public static long fact(int n) {
    long rs = 1;
    if (n < 2) return 1;
    for (int i = 2; i <= n; i++) {
        rs *= i;
    }
    return rs;
}
As you can see, it involves a factorial, which can easily overflow the result. For example, if I call fact(200) I get zero. The question is: why do I get zero?
Secondly, how do I deal with overflow in the above context? The method should return the largest possible number that fits in a long if the result is too big, instead of returning a wrong answer.
One approach (but this could be wrong) is that if the result exceeds some large number, for example 1,400,000,000, then return the remainder of the result modulo 1,400,000,001. Can you explain what this means and how I can do that in Java?
Note that I do not guarantee that the above methods are accurate for calculating factorials and combinations. Extra bonus if you can find errors and correct them.
Note that I can only use int or long, and if it is unavoidable, can also use double. Other data types are not allowed.
I am not sure who marked this question as homework. This is NOT homework. I wish it were homework and I were back at university as a young student. But I am old, with more than 10 years working as a programmer. I just want to practice developing highly optimized solutions in Java. In our time at university, the Internet did not even exist. Today's students are lucky that they can even post their homework on a site like SO.
Use the multiplicative formula, instead of the factorial formula.
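A sketch of that formula (my own illustration of this hint, not code from the answer): C(n, r) = prod_{i=1..r} (n - r + i) / i, where every intermediate result is itself a binomial coefficient, so each division is exact:
public static long combiMultiplicative(int n, int r) {
    if (r < 0 || r > n) throw new IllegalArgumentException("need 0 <= r <= n");
    r = Math.min(r, n - r);                   // C(n, r) == C(n, n - r); keep the loop short
    long result = 1;
    for (int i = 1; i <= r; i++) {
        // result is C(n - r + i - 1, i - 1) here, so the division below is exact.
        result = result * (n - r + i) / i;    // can still overflow for large n; see the answers below
    }
    return result;
}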
Since it's homework, I won't just give you a solution. However, a hint I will give is that instead of calculating two large numbers and dividing the result, try calculating both together. E.g. calculate the numerator until it's about to overflow, then calculate the denominator. In this last step you can choose to divide the numerator instead of multiplying the denominator. This stops both values from getting really large when the ratio of the two is relatively small.
I got this result before an overflow was detected.
combi(61,30) = 232714176627630544 which is 2.52% of Long.MAX_VALUE
The only "bug" I found in your code is not having any overflow detection, since you know its likely to be a problem. ;)
To answer your first question (why you got zero): fact(200) is effectively computed modulo 2^64 in a long, and 200! contains far more than 64 factors of 2, so you hit a result whose 64 low-order bits are all zero. Change your fact code to this:
public static long fact(int n) {
    long rs = 1;
    if (n < 2) return 1;
    for (int i = 2; i <= n; i++) {
        rs *= i;
        System.out.println(rs);
    }
    return rs;
}
Take a look at the outputs! They are very interesting.
Now onto the second question....
It looks like you want to give exact integer (er, long) answers for values of n and r that fit, and throw an exception if they do not. This is a fair exercise.
To do this properly you should not use factorial at all. The trick is to recognize that C(n,r) can be computed incrementally by adding terms. This can be done using recursion with memoization, or by the multiplicative formula mentioned by Stefan Kendall.
As you accumulate the results into a long variable that you will use for your answer, check the value after each addition to see if it goes negative. When it does, throw an exception. If it stays positive, you can safely return your accumulated result as your answer.
To see why this works consider Pascal's triangle
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
which is generated like so:
C(0,0) = 1 (base case)
C(1,0) = 1 (base case)
C(1,1) = 1 (base case)
C(2,0) = 1 (base case)
C(2,1) = C(1,0) + C(1,1) = 2
C(2,2) = 1 (base case)
C(3,0) = 1 (base case)
C(3,1) = C(2,0) + C(2,1) = 3
C(3,2) = C(2,1) + C(2,2) = 3
...
When computing the value of C(n,r) using memoization, store the results of recursive invocations as you encounter them in a suitable structure such as an array or hashmap. Each value is the sum of two smaller numbers. The numbers start small and are always positive. Whenever you compute a new value (let's call it a subterm) you are adding smaller positive numbers. Recall from your computer organization class that whenever you add two modular positive numbers, there is an overflow if and only if the sum is negative. It only takes one overflow in the whole process for you to know that the C(n,r) you are looking for is too large.
This line of argument could be turned into a nice inductive proof, but that might be for another assignment, and perhaps another StackExchange site.
ADDENDUM
Here is a complete application you can run. (I haven't figured out how to get Java to run on codepad and ideone).
/**
 * A demo showing how to do combinations using recursion and memoization, while detecting
 * results that cannot fit in 64 bits.
 */
public class CombinationExample {

    /**
     * Returns the number of combinations of r things out of n total.
     */
    public static long combi(int n, int r) {
        if (n < 0 || r < 0 || r > n) {
            throw new IllegalArgumentException("Nonsense args");
        }
        long[][] cache = new long[n + 1][n + 1];
        return c(n, r, cache);
    }

    /**
     * Recursive helper for combi.
     */
    private static long c(int n, int r, long[][] cache) {
        if (r == 0 || r == n) {
            return cache[n][r] = 1;
        } else if (cache[n][r] != 0) {
            return cache[n][r];
        } else {
            cache[n][r] = c(n - 1, r - 1, cache) + c(n - 1, r, cache);
            if (cache[n][r] < 0) {
                throw new RuntimeException("Woops too big");
            }
            return cache[n][r];
        }
    }

    /**
     * Prints out a few example invocations.
     */
    public static void main(String[] args) {
        String[] data = ("0,0,3,1,4,4,5,2,10,0,10,10,10,4,9,7,70,8,295,100," +
                "34,88,-2,7,9,-1,90,0,90,1,90,2,90,3,90,8,90,24").split(",");
        for (int i = 0; i < data.length; i += 2) {
            int n = Integer.valueOf(data[i]);
            int r = Integer.valueOf(data[i + 1]);
            System.out.printf("C(%d,%d) = ", n, r);
            try {
                System.out.println(combi(n, r));
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
    }
}
Hope it is useful. It's just a quick hack so you might want to clean it up a little.... Also note that a good solution would use proper unit testing, although this code does give nice output.
You can use the java.math.BigInteger class to deal with arbitrarily large numbers.
If you make the return type double, it can handle up to fact(170), but you'll lose some precision because of the nature of double (I don't know why you'd need exact precision for such huge numbers).
For input over 170, the result is infinity
Note that java.lang.Long includes constants for the min and max values for a long.
When you add two signed two's-complement positive values of a given size and the result overflows, the result will be negative. Bit-wise, it will be the same bits you would have gotten with a larger representation, only with the high-order bits truncated away.
Multiplying is a bit more complicated, unfortunately, since you can overflow by more than one bit.
But you can multiply in parts. Basically, you break the two multipliers into low and high halves (or more than that, if you already have an "overflowed" value), perform the four possible multiplications between the halves, then recombine the results. (It's really just like doing decimal multiplication by hand, but each "digit" is, say, 32 bits.)
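A minimal sketch of that split-and-recombine idea for a full 64x64 -> 128-bit product, assuming non-negative operands (on Java 9+ you could instead reach for Math.multiplyHigh, or use Math.multiplyExact just to detect the overflow):
// Returns {high 64 bits, low 64 bits} of a * b, for non-negative a and b.
static long[] mulFull(long a, long b) {
    long aLo = a & 0xFFFFFFFFL, aHi = a >>> 32;   // split each operand into 32-bit halves
    long bLo = b & 0xFFFFFFFFL, bHi = b >>> 32;
    long lolo = aLo * bLo;                        // the four partial products
    long lohi = aLo * bHi;
    long hilo = aHi * bLo;
    long hihi = aHi * bHi;
    long mid  = (lolo >>> 32) + (lohi & 0xFFFFFFFFL) + (hilo & 0xFFFFFFFFL);
    long low  = (mid << 32) | (lolo & 0xFFFFFFFFL);
    long high = hihi + (lohi >>> 32) + (hilo >>> 32) + (mid >>> 32);
    // The product fits in a non-negative signed long only when high == 0 and low >= 0.
    return new long[] { high, low };
}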
You can copy the code from java.math.BigInteger to deal with arbitrarily large numbers. Go ahead and plagiarize.

Find an integer n > 0 which holds the following three conditions

Some definitions for starters: flip(n) is the 180-degree rotation of a number shown on a seven-segment display, so a 2 in seven-segment font is flipped to a 2. 0, 1, 2, 5, 8 are mapped to themselves, 6 -> 9, 9 -> 6, and 3, 4, 7 are not defined. Therefore any number containing 3, 4, or 7 is not flippable. More examples: flip(112) = 211, flip(168) = 891, flip(3112) is not defined.
(By the way, I am quite sure that flip(1) should be undefined, but the homework says that flip(168) = 891, so for this assignment flip(1) is defined.)
The original challenge: Find an integer n > 0 which holds the following three conditions:
flip(n) is defined and flip(n) = n
flip(n*n) is defined
n is divisible by 2011 -> n % 2011 == 0
Our solution, which you can find below, seems to work, but it does not find an answer, at least not for 2011. If I use 1991 instead (I searched for some "base" number for which the problem could be solved), I get a pretty fast answer saying 1515151 is the one. So the basic concept seems to work, but not for the given "base" in the homework. Am I missing something here?
Solution written in pseudocode (we have an implementation in Small Basic and I made a multithreaded one in Java):
for (i = 1; i < Integer.MaxValue; i++) {
    n = i * 2011;
    f = flip(n, true);
    if (f != null && flip(n*n, false) != null) {
        print n + " is the number";
        return;
    }
}

flip(n, symmetry) {
    l = n.length;
    l2 = (symmetry) ? ceil(l/2) : l;
    f = "";
    for (i = 0; i < l2; i++) {
        s = n.substr(i, 1);
        switch (s) {
            case 0,1,2,5,8:
                r = s; break;
            case 6:
                r = 9; break;
            case 9:
                r = 6; break;
            default:
                r = "";
        }
        if (r == "") {
            print n + " is not flippable";
            return null;
        } elseif (symmetry && r != n.substr(l-i-1, 1)) {
            print n + " is not flip(n)";
            return null;
        }
        f = r + f;
    }
    return (symmetry) ? n : f;
}
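For reference, a compact Java version of the flip() part of the pseudocode above (my own translation; it returns null where the pseudocode prints a message and bails out):
// Returns flip(s) as a string, or null when s contains a digit (3, 4 or 7) that cannot be flipped.
static String flip(String s) {
    StringBuilder out = new StringBuilder(s.length());
    for (int i = s.length() - 1; i >= 0; i--) {   // read right to left: rotating reverses the digit order
        char c = s.charAt(i);
        if (c == '6') out.append('9');
        else if (c == '9') out.append('6');
        else if ("01258".indexOf(c) >= 0) out.append(c);
        else return null;                          // 3, 4 and 7 have no 180-degree counterpart
    }
    return out.toString();
}
// n passes the first two conditions when s.equals(flip(s)) for s = Long.toString(n)
// and flip(Long.toString(n * n)) != null (watch out for overflow in n * n).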
Heuristically (with admittedly minimal experimentation, going mainly on intuition), it is not very likely you will find a solution without optimising your search technique mathematically (e.g. employing a method of construction to build a perfect square that doesn't contain 3, 4, 7 and is flippably symmetrical, as opposed to optimising the computations, which will not change the complexity by a noticeable amount):
I'll start with a list of all numbers that satisfy two criteria (that the number and its flip are the same, i.e. it is flippably symmetrical, and that it is a multiple of 2011), less than 10^11:
192555261 611000119 862956298
988659886 2091001602 2220550222
2589226852 6510550159 8585115858
10282828201 12102220121 18065559081
18551215581 19299066261 20866099802
22582528522 25288188252 25510001552
25862529852 28018181082 28568189582
28806090882 50669869905 51905850615
52218581225 55666299955 58609860985
59226192265 60912021609 68651515989
68828282889 69018081069 69568089569
85065859058 85551515558 89285158268
91081118016 92529862526 92852225826
95189068156 95625052956 96056895096
96592826596 98661119986 98882128886
98986298686
There are 46 numbers there, all flippably symmetrical according to the definition and all multiples of 2011, under 10^11. Seemingly, multiples of 2011 that satisfy this condition will become scarcer, because as the number of digits increases, fewer of the multiples will be palindromes, statistically.
I.e. for any given range, say [1, 10^11] (as above), there were 46. For the adjacent range of equal width, [10^11 + 1, 2*10^11], we might guess we'd find another 46 or thereabouts. But as we continue with intervals of the same width at higher powers of 10, the count of candidates per interval stays comparable (because we analyse equal-width intervals), while the palindrome condition now falls on more digits, because the number of digits increases. So approaching infinity we expect the number of palindromes in any fixed-width interval to approach 0. Or, more formally (but without proof): for every positive value N, with probability 0 a given interval (of predetermined width) will have more than N multiples of 2011 that are palindromes.
So the number of palindromes we can find will decrease as an exhaustive search continues. As for the probability that any found palindrome's square is flippable, we assume a uniform distribution of digits in the squares of palindromes (since we have no analysis to tell us otherwise, and no reason to believe otherwise); then the probability that a given square of d digits is flippable is (7/10)^d.
Let's start with the smallest such square we found
192555261 ^ 2 = 37077528538778121
which is already 17 digits long, giving it a probability of around 0.002 (approx. 1/430) that it's flippably defined. But already by the time we've reached the last on the list:
98986298686 ^ 2 = 9798287327554005326596
which is 22 digits long, and has a probability of only about 1/2500 of being flippably defined.
So as the search continues in higher numbers, the number of palindromes decreases, and the probability that any found palindrome's square is flippable also decreases - a double edged blade.
What's left is to find some sort of ratio of densities and accordingly see how improbable finding a solution is... Although it's clear intuitively that finding a solution gets much less likely probabilistically speaking (which by no means rules out that one or even a large number of solutions exist (possibly an infinite number?)).
Good luck! I hope someone solves this. As with many problems, the solutions are often not as simple as running the algorithm on a faster machine or with more parallelism or for a longer period of time or whatnot, but with a more advanced technique or more inventive methods of attacking the problem, which themselves further the field. The answer, a number, is of much less interest (usually) than the method used to derive it.
You are searching through all of the numbers divisible by 2011, then checking whether they are the flip of themselves. But once you've reached 7-digit numbers, the condition that a number be a flip of itself is more restrictive than the condition that it be divisible by 2011. So I'd suggest that you instead iterate through all of the numbers that can be constructed without the digits 3, 4, 7, construct from each the number that is a flip of itself (the flip prepended to the original, possibly squishing a middle digit if the middle digits are 11, 22, 55, or 88), then test for divisibility by 2011, and then test whether n*n is flippable.
Be very, very aware of the possibility that n*n will hit integer overflow. By the time you've reached a 5-digit number for the base, your n will be 9 or 10 digits long, and n*n will be 18-21 digits long.
Not necessarily a complete solution, more like a thought process which may help you on the way.
n = flip(n) => n reads the same after a 180° rotation: every digit that maps to itself in flip() (0, 1, 2, 5, 8) must sit opposite a copy of itself, and 6 and 9 may appear only as mirrored pairs.
flip(n*n) is defined. Thus n*n may not contain 3, 4, 7
n % 2011 = 0.
n > 0.
