How do you find an abrupt change in an array? For example, if you have the following array:
1,3,8,14,58,62,69
In this case, there is a jump from 14 to 58
OR
79,77,68,61,9,3,1
In this case, there is a drop from 61 to 9
In both examples, there are small and big jumps. For example, in the 2nd case, there is a small drop from 77 to 68. However, this must be ignored if a larger jump/drop is found. I have the following algorithm in mind but I am not sure if it will cover all possible cases:
ALGO
Iterate over the array
Compute diff = arr[i+1] - arr[i]
Store the first difference in a variable
If the next diff is bigger than the previous one, overwrite it
However, this algorithm will not work for the following case:
1, 2, 4, 6, 34, 38, 41, 67, 69, 71
There are two jumps in this array. So it should be arranged like
[1, 2, 4, 6], [34, 38, 41], [67, 69, 71]
In the end, this is pure statistics. You have a data set, and you are looking for certain forms of outliers. In that sense, your requirement to detect "abrupt changes" is not very precise.
I think you should step back here and have a deeper look into the mathematics behind your problem, to come up with clear semantics and crisp definitions for your actual problem (for example based on average, deviation, etc.). The Wikipedia link I gave above should be a good starting point for that part.
From there, to get to a Java implementation, you might start looking here.
I would look into using a moving average: this involves looking at an average of the last X values. Do this based on the change in value (Y1 - Y2). Any large deviation from the average could be seen as a big shift.
However, given how small your data sets are, a moving average would likely yield bad results. With such a small sample size it might be better to take an average over the whole array instead:
double[] nums = new double[] {79, 77, 68, 61, 9, 3, 1};
double[] deltas = new double[nums.length - 1];
double avgDelta = 0;
for (int i = 0; i < nums.length - 1; i++) {
    deltas[i] = nums[i + 1] - nums[i];
    avgDelta += deltas[i] / deltas.length; // running average of all the deltas
}
// search for deltas larger (in magnitude) than the average delta
for (int i = 0; i < deltas.length; i++) {
    if (Math.abs(deltas[i]) > Math.abs(avgDelta)) {
        System.out.println("Big jump between " + nums[i] + " " + nums[i + 1]);
    }
}
This problem doesn't have an absolute solution, you'll have to determine thresholds for the context in which the solution is to be applied.
No algorithm can give us the rule for the jump. We as humans are able to determine these changes because we can see the entire data set at one glance, at least for now. If the data set is large enough, it becomes difficult even for us to say which jumps should be considered. For example, if the differences between consecutive numbers are 10 on average, then any difference above that could be considered a jump. However, in a large data set there could be differences which are short-lived spikes, and others which establish a new normal, e.g. the differences suddenly going from around 10 to around 100. We then have to decide whether we want to detect jumps relative to the difference average of 10 or of 100.
If we are interested in local spikes only, then it's possible to use a moving average as suggested by ug_.
However, a moving average has to be moving, meaning we maintain a local set of numbers with a fixed size. Over that set we calculate the average of the differences and then compare it to the local differences.
Here again we face the problem of determining the size of the local set. This threshold determines the granularity of the jumps that we capture: a very large set will tend to ignore the closer jumps, while a smaller one will tend to produce false positives.
Following is a simple solution where you can try setting the thresholds. The local set size in this case is 3, which is the minimum that can be used, as it gives the minimum number of differences required, namely 2.
public class TestJump {
    public static void main(String[] args) {
        int[] arr = {1, 2, 4, 6, 34, 38, 41, 67, 69, 71};
        //int[] arr = {1, 4, 8, 13, 19, 39, 60, 84, 109};
        double thresholdDeviation = 50; // percent jump to detect, set for your requirement
        double thresholdDiff = 3; // minimum difference between consecutive differences, to avoid false positives like 1,2,4
        System.out.println("Started");
        for (int i = 1; i < arr.length - 1; i++) {
            double diffPrev = Math.abs(arr[i] - arr[i-1]);
            double diffNext = Math.abs(arr[i+1] - arr[i]);
            double deviation = Math.abs(diffNext - diffPrev) / diffPrev * 100;
            if (deviation > thresholdDeviation && Math.abs(diffNext - diffPrev) > thresholdDiff) {
                System.out.printf("Abrupt change # %d: (%d, %d, %d)%n", i, arr[i-1], arr[i], arr[i+1]);
                i++;
            }
            //System.out.println(deviation + " : " + Math.abs(diffNext - diffPrev));
        }
        System.out.println("Finished");
    }
}
Output
Started
Abrupt change # 3: (4, 6, 34)
Abrupt change # 6: (38, 41, 67)
Finished
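For larger arrays, the windowed moving-average idea discussed earlier in this answer could look roughly like the sketch below. This is only an illustration: the window size of 4 and the "2x the local average" threshold are arbitrary choices of mine, not values taken from the answers, and they would need tuning for your data.
public class MovingAverageJump {
    public static void main(String[] args) {
        double[] nums = {1, 2, 4, 6, 34, 38, 41, 67, 69, 71};
        int window = 4;            // how many recent differences form the local average
        double factor = 2.0;       // flag differences more than 'factor' times the local average

        double windowSum = 0;
        int inWindow = 0;
        for (int i = 1; i < nums.length; i++) {
            double diff = Math.abs(nums[i] - nums[i - 1]);
            if (inWindow > 0 && diff > factor * (windowSum / inWindow)) {
                System.out.println("Big jump between " + nums[i - 1] + " and " + nums[i]);
            }
            // update the moving window of differences
            windowSum += diff;
            inWindow++;
            if (inWindow > window) {
                windowSum -= Math.abs(nums[i - window] - nums[i - window - 1]);
                inWindow--;
            }
        }
    }
}
On the sample array above this reports the jumps 6 to 34 and 41 to 67 while ignoring the small steps, but with a different window size or factor the behaviour changes, which is exactly the granularity trade-off described earlier.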
If you're trying to solve a larger problem than just arrays, like finding spikes in medical data or images, then you should check out neural networks.
Related
I need a fast way to find the maximum value when intervals are overlapping. Unlike finding the point covered by the most intervals, there is an "order": I have int[][] data with 2 values in each int[], where the first number is the center and the second number is the radius; the closer a point is to the center, the larger the value at that point is going to be. For example, if I am given data like:
int[][] data = new int[][]{
    {1, 1},
    {3, 3},
    {2, 4}};
Then on a number line, this is how it looks:
x axis:  -2 -1  0  1  2  3  4  5  6  7
1 1:            1  2  1
3 3:            1  2  3  4  3  2  1
2 4:      1  2  3  4  5  4  3  2  1
So for the value of my point to be as large as possible, I need to pick the point x = 2, which gives a total value of 1 + 3 + 5 = 9, the largest possible value. Is there a way to do this fast, say with a time complexity of O(n) or O(n log n)?
This can be done with a simple O(n log n) algorithm.
Consider the value function v(x), and then consider its discrete derivative dv(x) = v(x) - v(x-1). Suppose you only have one interval, say {3,3}. dv(x) is 0 from -infinity to -1, then 1 from 0 to 3, then -1 from 4 to 7, then 0 from 8 to infinity. That is, the derivative changes by 1 "just after" -1, by -2 just after 3, and by 1 just after 7.
For n intervals, there are 3*n derivative changes (some of which may occur at the same point). So find the list of all derivative changes (x,change), sort them by their x, and then just iterate through the set.
Behold:
intervals = [(1, 1), (3, 3), (2, 4)]

events = []
for mid, width in intervals:
    before_start = mid - width - 1   # slope rises by 1 just after this point
    after_end = mid + width + 1      # slope returns to 0 just after this point
    events += [(before_start, 1), (mid, -2), (after_end, 1)]
events.sort()

prev_x = -1000
v = 0
dv = 0
best_v = -1000
best_x = None
for x, change in events:
    dx = x - prev_x
    v += dv * dx            # value at this event point
    if v > best_v:
        best_v = v
        best_x = x
    dv += change            # the slope changes just after x
    prev_x = x

print(best_x, best_v)
And the Java code:
TreeMap<Integer, Integer> ts = new TreeMap<Integer, Integer>();
for (int i = 0; i < cows.size(); i++) {
    int center = cows.get(i)[0];
    int radius = cows.get(i)[1];
    // slope +1 from (center - radius), -2 just after the peak, back to 0 after the end
    ts.merge(center - radius, 1, Integer::sum);
    ts.merge(center + 1, -2, Integer::sum);
    ts.merge(center + radius + 2, 1, Integer::sum);
}
int value = 0;
int best = 0;
int change = 0;                 // current slope
int indexBefore = -100000000;
while (ts.size() > 1) {
    int index = ts.firstKey();
    value += (index - indexBefore) * change;   // value at this event point
    best = Math.max(value, best);
    change += ts.get(index);
    indexBefore = index;
    ts.remove(index);
}
where cows is the input data (the list of {center, radius} pairs).
Hmmm, a general O(n log n) or better would be tricky, probably solvable via linear programming, but that can get rather complex.
After a bit of wrangling, I think this can be solved via line intersections and summation of functions (represented by line segments). Basically, think of each input as a triangle sitting on a line. If the input is (C, R), the triangle is centered on C and has a radius of R. The points on the line are C-R (value 0), C (value R) and C+R (value 0). Each line segment of the triangle represents a value.
Consider any 2 such "triangles"; the max value occurs in one of 2 places:
The peak of one of the triangles
The intersection point of the triangles, i.e. a point where the two triangles overlap. Multiple triangles just mean more possible intersection points; sadly the number of possible intersections grows quadratically, so O(N log N) or better may be impossible with this method (unless some good optimizations are found, or unless the number of intersections is O(N) or less).
To find all the intersection points, we can just use a standard algorithm for that, but we need to modify things in one specific way: we need to add a line extending from each peak high enough that it is higher than any other line, so basically from (C, C) to (C, Max_R). We then run the algorithm; output-sensitive intersection-finding algorithms are O(N log N + k), where k is the number of intersections. Sadly k can be as high as O(N^2) (consider the case (1,100), (2,100), (3,100), ... and so on up to (50,100): every line would intersect every other line). Once you have the O(N + k) intersections, at each one you can calculate the value by summing the contributions of all triangles covering that point. The running sum can be kept as a cached value so it only changes O(k) times, though that might not be possible, in which case it would be O(N*k) instead, making it potentially O(N^3) (in the worst case for k) :(. For each intersection you need to sum up to O(N) lines to get the value at that point, though in practice the performance would likely be better.
There are optimizations that could be done considering that you aim for the max and not just to find intersections. There are likely intersections not worth pursuing; however, I could also see situations so close that you can't cut them down. It reminds me of convex hull: in many cases you can easily discard 90% of the data, but there are cases where you see worst-case results (every point, or almost every point, is a hull point). For example, in practice there are certainly cases where you can be sure that a sum is going to be less than the current known max value.
Another optimization might be building an interval tree.
In this problem I have a number of queries, and for each query k I have to output the count of integers in the array that are divisible by k. The array contains duplicate elements. I am trying to optimise the problem and my approach is given below:
Code:
public static void main(String[] args) {
    Scanner sc = new Scanner(System.in);
    int[] ar = {2, 4, 6, 9, 11, 34, 654, 23, 32, 54, 76, 21432, 32543, 435, 43543, 643, 2646, 4567, 457654, 75, 754, 7, 567865, 8765877, 53, 2};
    int query = sc.nextInt();
    int length = ar.length;
    for (int i = 0; i < query; i++) {
        int x = sc.nextInt();
        int count = 0; // reset the count for each query
        for (int j = 0; j < length; j++) {
            if (ar[j] > x) {
                if (ar[j] % x == 0) {
                    count++;
                }
            }
        }
        System.out.println("Count:" + count);
    }
}
The above code gives the correct output, but the complexity is O(query*length), and if the array size is much bigger the program will time out.
Can anyone help me optimize the problem?
One optimization that you could do is to take advantage of short-circuiting, and use one if statement (instead of two).
So change this:
if(ar[j]>x) {
if(ar[j]%x==0) {
to this:
if(ar[j]>x && ar[j]%x==0) {
This will not affect the time complexity of your algorithm, but it will help Branch Prediction.
and maximum value of an element in an array is 10^5
This makes it trivial. Use a boolean[100001] and mark all multiples of all queries. Then use it for testing the elements.
The new problem is how to mark all the multiples when there are many small queries. The worst case would be queries like {1, 1, 1, ...}, but duplicates can be trivially removed, e.g. using a HashSet. Now the worst case is {1, 2, 3, 4, ...}, which needs 100001/1 + 100001/2 + 100001/3 ... steps. You can do some special treatment for the smallest queries, like removing multiples.
For example, when you look at all queries up to 10 and remove multiples among them, then the worst case is {2, 3, 5, 7, 11, 12, 13, 14, ...}, which should make the marking pretty fast. This step may not be needed at all.
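For per-query counts, the same walk-over-multiples idea can be realized with a frequency table instead of a boolean array. A rough sketch, assuming (as in the quoted constraint) that values and queries are at most 10^5; the sample data, class name and variable names are only illustrative, and it counts all multiples rather than reproducing the `ar[j] > x` check from the question:
import java.util.*;

public class DivisibilityQueries {
    public static void main(String[] args) {
        int[] ar = {2, 4, 6, 9, 11, 34, 654, 23, 32, 54, 76, 21432, 32543, 435};
        int[] queries = {2, 3, 9};
        final int MAX = 100_000;            // assumed upper bound on values and queries

        // frequency of each value present in the array
        int[] freq = new int[MAX + 1];
        for (int v : ar) freq[v]++;

        // drop duplicate queries so each divisor is walked only once
        Set<Integer> distinct = new LinkedHashSet<>();
        for (int q : queries) distinct.add(q);

        // total work is the harmonic sum MAX/q1 + MAX/q2 + ... over distinct queries
        Map<Integer, Integer> answer = new HashMap<>();
        for (int q : distinct) {
            int count = 0;
            for (int m = q; m <= MAX; m += q) count += freq[m];
            answer.put(q, count);
        }

        for (int q : queries) System.out.println("Count for " + q + ": " + answer.get(q));
    }
}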
You can do a precomputation step to build a divisors' table. For every element in the array calculate its divisors. You can calculate divisors of a number efficiently in O(sqrt(V)) assuming V is the maximum value in the array. Building the full table will cost O(n * sqrt(V)) which according to your constraints equals 100,000 * sqrt(100,000) =~ 32M which shouldn't be a lot. Memory complexity would be the same.
Then you can answer your queries in O(1) by a lookup in the table.
You can check this link for how to generate divisors in O(sqrt(V)).
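A minimal sketch of this divisor-table idea, assuming values fit in an int; the map name and sample data are mine, not from the answer:
import java.util.*;

public class DivisorTable {
    public static void main(String[] args) {
        int[] ar = {2, 4, 6, 9, 12, 18};
        int[] queries = {2, 3, 6};

        // divisorCount[d] = how many array elements have d as a divisor
        Map<Integer, Integer> divisorCount = new HashMap<>();
        for (int v : ar) {
            // enumerate divisors of v in O(sqrt(v))
            for (int d = 1; d * d <= v; d++) {
                if (v % d == 0) {
                    divisorCount.merge(d, 1, Integer::sum);
                    int other = v / d;
                    if (other != d) divisorCount.merge(other, 1, Integer::sum);
                }
            }
        }

        // each query is now a single lookup
        for (int q : queries) {
            System.out.println("Count for " + q + ": " + divisorCount.getOrDefault(q, 0));
        }
    }
}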
Consider this method:
public static int[] countPairs(int min, int max) {
    int lastIndex = primes.size() - 1;
    int i = 0;
    int howManyPairs[] = new int[(max - min) + 1];
    for (int outer : primes) {
        for (int inner : primes.subList(i, lastIndex)) {
            int sum = outer + inner;
            if (sum > max)
                break;
            if (sum >= min && sum <= max)
                howManyPairs[sum - min]++;
        }
        i++;
    }
    return howManyPairs;
}
As you can see, I have to count how many times each number between min and max can be expressed as a sum of two primes.
primes is an ArrayList with all primes between 2 and 2000000. In this case, min is 1000000 and max is 2000000, that's why primes goes until 2000000.
My method works fine, but the goal here is to do something faster.
My method takes two loops, one inside the other, and it makes my algorithm an O(n²). It sucks like bubblesort.
How can I rewrite my code to accomplish the same result with a better complexity, like O(n log n)?
One last thing: I'm coding in Java, but your reply can also be in Python, VB.Net, C#, Ruby, C, or even just an explanation in English.
For each number x between min and max, we want to compute the number of ways x can be written as the sum of two primes. This number can also be expressed as
sum(prime(n)*prime(x-n) for n in xrange(x+1))
where prime(x) is 1 if x is prime and 0 otherwise. Instead of counting the number of ways that two primes add up to x, we consider all ways two nonnegative integers add up to x, and add 1 to the sum if the two integers are prime.
This isn't a more efficient way to do the computation. However, putting it in this form helps us recognize that the output we want is the discrete convolution of two sequences. Specifically, if p is the infinite sequence such that p[x] == prime(x), then the convolution of p with itself is the sequence such that
convolve(p, p)[x] == sum(p[n]*p[x-n] for n in xrange(x+1))
or, substituting the definition of p,
convolve(p, p)[x] == sum(prime(n)*prime(x-n) for n in xrange(x+1))
In other words, convolving p with itself produces the sequence of numbers we want to compute.
The straightforward way to compute a convolution is pretty much what you were doing, but there are much faster ways. For n-element sequences, a fast Fourier transform-based algorithm can compute the convolution in O(n*log(n)) time instead of O(n**2) time. Unfortunately, this is where my explanation ends. Fast Fourier transforms are kind of hard to explain even when you have proper mathematical notation available, and as my memory of the Cooley-Tukey algorithm isn't as precise as I'd like it to be, I can't really do it justice.
If you want to read more about convolution and Fourier transforms, particularly the Cooley-Tukey FFT algorithm, the Wikipedia articles I've just linked would be a decent start. If you just want to use a faster algorithm, your best bet would be to get a library that does it. In Python, I know scipy.signal.fftconvolve would do the job; in other languages, you could probably find a library pretty quickly through your search engine of choice.
What you're searching for is the count of Goldbach partitions for each number in your range, and imho there is no efficient algorithm for it.
Odd numbers have a partition only when n-2 is prime (one of the two primes must be 2); even numbers below 4*10^18 are guaranteed to have more than 0. But other than that: whether even numbers (bigger than 4*10^18) with 0 partitions exist has been an unsolved problem since the 1700s, and such things as exact counts are even more complicated.
There are some asymptotic and heuristic results, but if you want the exact number, other than getting more CPU and RAM, there isn't much you can do.
The other answers have an outer loop that goes from N to M. It's more efficient, however, for the outer loop (or loops) to be pairs of primes, used to build a list of numbers between N and M that equal their sums.
Since I don't know Java I'll give a solution in Ruby for a specific example. That should allow anyone interested to implement the algorithm in Java, regardless of whether they know Ruby.
I initially assume that the two primes whose sum equals a number between N and M must be distinct. In other words, 4 cannot be expressed as 4 = 2+2.
Use Ruby's prime number library.
require 'prime'
Assume M and N are 5 and 50.
lower = 5
upper = 50
Compute the prime numbers up to upper-2 #=> 48, the 2 being the first prime number.
primes = Prime.each.take_while { |p| p < upper-2 }
#=> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
Construct an enumerator of all combinations of two primes.
enum = primes.combination(2)
=> #<Enumerator: [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]:combination(2)>
We can see the elements that will be generated by this enumerator by converting it to an array.
enum.to_a
#=> [[2, 3], [2, 5],..., [2, 47], [3, 5],..., [43, 47]] (105 elements)
Just think of enum as an array.
Now construct a counting hash whose keys are numbers between lower and upper for which there is at least one pair of primes that sum to that number and whose values are the numbers of pairs of primes that sum to the value of the associated key.
enum.each_with_object(Hash.new(0)) do |(x,y),h|
  sum = x+y
  h[sum] += 1 if (lower..upper).cover?(sum)
end
#=> {5=>1, 7=>1, 9=>1, 13=>1, 15=>1, 19=>1, 21=>1, 25=>1, 31=>1, 33=>1,
# 39=>1, 43=>1, 45=>1, 49=>1, 8=>1, 10=>1, 14=>1, 16=>2, 20=>2, 22=>2,
# 26=>2, 32=>2, 34=>3, 40=>3, 44=>3, 46=>3, 50=>4, 12=>1, 18=>2, 24=>3,
# 28=>2, 36=>4, 42=>4, 48=>5, 30=>3, 38=>1}
This shows, for example, that there are two ways that 16 can be expressed as the sum of two primes (3+13 and 5+11), three ways for 34 (3+31, 5+29 and 11+23) and no way for 6.
If the two primes being summed need not be unique (e.g., 4=2+2 is to be included), only a slight change is needed.
arr = primes.combination(2).to_a.concat(primes.zip(primes))
whose sorted values are
a = arr.sort
#=> [[2, 2], [2, 3], [2, 5], [2, 7],..., [3, 3],..., [5, 5],.., [47, 47]] (120 elements)
then
a.each_with_object(Hash.new(0)) do |(x,y),h|
  sum = x+y
  h[sum] += 1 if (lower..upper).cover?(sum)
end
#=> {5=>1, 7=>1, 9=>1, 13=>1, 15=>1, 19=>1, 21=>1, 25=>1, 31=>1, 33=>1,
# 39=>1, 43=>1, 45=>1, 49=>1, 6=>1, 8=>1, 10=>2, 14=>2, 16=>2, 20=>2,
# 22=>3, 26=>3, 32=>2, 34=>4, 40=>3, 44=>3, 46=>4, 50=>4, 12=>1, 18=>2,
# 24=>3, 28=>2, 36=>4, 42=>4, 48=>5, 30=>3, 38=>2}
a should be replaced by arr. I used a here merely to order the elements of the resulting hash so that it would be easier to read.
Since I just wanted to describe the approach, I used a brute force method to enumerate the pairs of elements of primes, throwing away 44 of the 120 pairs of primes because their sums fall outside the range 5..50 (a.count { |x,y| !(lower..upper).cover?(x+y) } #=> 44). Clearly, there's considerable room for improvement.
A sum of two primes means N = A + B, where A and B are primes, and A < B, which means A < N / 2 and B > N / 2. Note that they can't be equal to N / 2.
So, your outer loop should only loop from 1 to floor((N - 1) / 2). In integer math, the floor is automatic.
Your inner loop can be eliminated if the primes are stored in a Set. Assuming your array is sorted (fair assumption), use a LinkedHashSet, such that iterating the set in the outer loop can stop at (N - 1) / 2.
I'll leave it up to you to code this.
Update
Sorry, the above is an answer to the problem of finding A and B for a particular N. Your question was to find all N between min and max (inclusive).
If you follow the logic of the above, you should be able to apply it to your problem.
Outer loop should be from 1 to max / 2.
Inner loop should be from min - outer to max - outer.
To find the starting point of the inner loop, you can keep some extra index variables around, or you can rely on your prime array being sorted and use Arrays.binarySearch(primes, min - outer). First option is likely a little bit faster, but second option is definitely simpler.
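A rough sketch of that updated structure (outer loop over primes up to max/2, inner start found with Arrays.binarySearch). Restricting the pairs to a <= b, so each unordered pair is counted once, is my own addition rather than something stated above, and the class and variable names are illustrative:
import java.util.Arrays;

public class GoldbachCounts {
    // counts[n - min] = number of prime pairs (a, b) with a <= b and a + b == n
    static int[] countPairs(int[] primes, int min, int max) {
        int[] counts = new int[max - min + 1];
        for (int a : primes) {
            if (a > max / 2) break;                 // outer loop: primes up to max/2
            // inner loop starts at the first prime b with b >= max(a, min - a)
            int from = Arrays.binarySearch(primes, Math.max(a, min - a));
            if (from < 0) from = -from - 1;         // insertion point if not found
            for (int j = from; j < primes.length; j++) {
                int sum = a + primes[j];
                if (sum > max) break;
                counts[sum - min]++;                // sum >= min is guaranteed by the start index
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        int[] primes = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47};
        int[] counts = countPairs(primes, 5, 50);
        System.out.println("16 -> " + counts[16 - 5]);   // 2 pairs: 3+13 and 5+11
    }
}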
Some definitions for starters: flip(n) is the 180 degree rotation of a number rendered in a seven segment display font, so a 2 in seven segment font will be flipped to a 2. 0, 1, 2, 5, 8 are mapped to themselves, 6 -> 9, 9 -> 6, and 3, 4, 7 are not defined. Therefore any number containing 3, 4 or 7 will not be flippable. More examples: flip(112) = 211, flip(168) = 891, flip(3112) = not defined.
(By the way, I am quite sure that flip(1) should be undefined, but the homework says that flip(168) = 891 so regarding this assignment flip(1) is defined)
The original challenge: Find an integer n > 0 which holds the following three conditions:
flip(n) is defined and flip(n) = n
flip(n*n) is defined
n is divisible by 2011 -> n % 2011 == 0
Our solution, which you can find below, seems to work, but it does not find an answer, at least not for 2011. If I use 1991 instead (I searched for some "base" number for which the problem could be solved), I get a pretty fast answer saying 1515151 is the one. So the basic concept seems to work, but not for the given "base" in the homework. Am I missing something here?
Solution written in pseudo code (we have an implementation in Small Basic and I made a multithreaded one in Java):
for (i = 1; i < Integer.MaxValue; i++) {
    n = i * 2011;
    f = flip(n, true);
    if (f != null && flip(n*n, false) != null) {
        print n + " is the number";
        return;
    }
}
flip(n, symmetry) {
    l = n.length;
    l2 = (symmetry) ? ceil(l/2) : l;
    f = "";
    for (i = 0; i < l2; i++) {
        s = n.substr(i,1);
        switch(s) {
            case 0,1,2,5,8:
                r = s; break;
            case 6:
                r = 9; break;
            case 9:
                r = 6; break;
            default:
                r = "";
        }
        if (r == "") {
            print n + " is not flippable";
            return null;
        } elseif (symmetry && r != n.substr(l-i-1,1)) {
            print n + " is not flip(n)";
            return null;
        }
        f = r + f;
    }
    return (symmetry) ? n : f;
}
Heuristically (with admittedly minimal experimentation, going mainly on intuition), it is not very likely you will find a solution without optimising your search technique mathematically, e.g. employing a method of construction to build a number that is flippably symmetrical and whose square doesn't contain 3, 4, 7; as opposed to optimising the computations, which will not change the complexity by a noticeable amount:
I'll start with a list of all numbers that satisfy 2 of the criteria (that the number and its flip be the same, i.e. flippably symmetrical, and that it be a multiple of 2011), less than 10^11:
192555261 611000119 862956298
988659886 2091001602 2220550222
2589226852 6510550159 8585115858
10282828201 12102220121 18065559081
18551215581 19299066261 20866099802
22582528522 25288188252 25510001552
25862529852 28018181082 28568189582
28806090882 50669869905 51905850615
52218581225 55666299955 58609860985
59226192265 60912021609 68651515989
68828282889 69018081069 69568089569
85065859058 85551515558 89285158268
91081118016 92529862526 92852225826
95189068156 95625052956 96056895096
96592826596 98661119986 98882128886
98986298686
There are 46 numbers there, all flippably symmetrical according to the definition and multiples of 2011, under 10^11. Seemingly, multiples of 2011 that satisfy this condition will become scarcer, because as the number of digits increases, statistically fewer of the multiples will be palindromes.
I.e. for any given range, say [1, 10^11] (as above), there were 46. For the adjacent range of equal width, [10^11+1, 2*10^11], we might guess to find another 46 or thereabouts. But as we continue up with intervals of the same width into higher powers of 10, the count of numbers per interval stays the same (because we analyse equal-width intervals), while the palindrome condition now falls on more digits. So approaching infinity we expect the number of palindromes in any fixed-width interval to approach 0. Or, more formally (but without proof): for every positive value N, with probability 0 a given interval (of predetermined width) will have more than N multiples of 2011 that are palindromes.
So the number of palindromes we can find will decrease as an exhaustive search continues. As for the probability that any found palindrome's square is flippable, we assume a uniform distribution of the squares' digits (since we have no analysis telling us otherwise, and no reason to believe otherwise), and then the probability that any given square of d digits is flippable is (7/10)^d.
Let's start with the smallest such square we found
192555261 ^ 2 = 37077528538778121
which is already 17 digits long, giving it a probability of around 0.002 (approx. 1/430) that it's flippably defined. But already by the time we've reached the last on the list:
98986298686 ^ 2 = 9798287327554005326596
which is 22 digits long, and has a probability of only about 1/2500 of being flippably defined.
So as the search continues in higher numbers, the number of palindromes decreases, and the probability that any found palindrome's square is flippable also decreases - a double edged blade.
What's left is to find some sort of ratio of densities and accordingly see how improbable finding a solution is... Although it's clear intuitively that finding a solution gets much less likely probabilistically speaking (which by no means rules out that one or even a large number of solutions exist (possibly an infinite number?)).
Good luck! I hope someone solves this. As with many problems, the solutions are often not as simple as running the algorithm on a faster machine or with more parallelism or for a longer period of time or whatnot, but with a more advanced technique or more inventive methods of attacking the problem, which themselves further the field. The answer, a number, is of much less interest (usually) than the method used to derive it.
You are searching through all of the numbers divisible by 2011, then checking whether they are the flip of themselves. But after you've reached 7-digit numbers, the condition that a number be the flip of itself is more restrictive than the condition that it be divisible by 2011. So I'd suggest that you instead iterate through all of the numbers that can be constructed without the digits 3, 4, 7, then construct the number formed by prepending the flip of that number to itself (possibly squishing a middle digit if the middle digits are 11, 22, 55, or 88). Then test for divisibility by 2011, then test whether n*n is flippable.
Be very, very aware of the possibility that n*n will hit integer overflow. By the time you've reached a 5-digit number for the base, your n will be 9 or 10 digits long, and n*n will be 18-21 digits long.
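A rough sketch of that construction idea, and of sidestepping the overflow concern with BigInteger. This is my own illustration, not the poster's code: it only builds the even-length candidates (half plus its flip, with no squished middle digit), and the class name, range bound and helper are hypothetical:
import java.math.BigInteger;

public class FlipSearch {

    // 180-degree rotation of a digit string; returns null if any digit is 3, 4 or 7
    static String flip(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = s.length() - 1; i >= 0; i--) {
            char c = s.charAt(i);
            switch (c) {
                case '0': case '1': case '2': case '5': case '8': sb.append(c); break;
                case '6': sb.append('9'); break;
                case '9': sb.append('6'); break;
                default: return null;   // 3, 4, 7 are not flippable
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // enumerate left halves instead of multiples of 2011; only even-length
        // candidates (half + flip(half)) are shown here, the odd-length variant
        // with a middle digit from {0, 1, 2, 5, 8} works the same way
        for (long h = 1; h < 10_000_000L; h++) {
            String half = Long.toString(h);
            String flippedHalf = flip(half);
            if (flippedHalf == null) continue;               // half contains 3, 4 or 7
            BigInteger n = new BigInteger(half + flippedHalf);   // flip(n) == n by construction
            if (!n.mod(BigInteger.valueOf(2011)).equals(BigInteger.ZERO)) continue;
            BigInteger square = n.multiply(n);               // no overflow with BigInteger
            if (flip(square.toString()) != null) {
                System.out.println(n + " is the number");
                return;
            }
        }
        System.out.println("no even-length candidate found in this range");
    }
}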
Not necessarily a complete solution, more like thought process which may help you on the way.
n = flip(n) => n reads the same after the 180° rotation in flip(): digits that map to themselves (0, 1, 2, 5, 8) must sit in mirrored positions, any 6 must be mirrored by a 9 (and vice versa), and n may not contain 3, 4 or 7.
flip(n*n) is defined. Thus n*n may not contain 3, 4, 7
n % 2011 = 0.
n > 0.
I'm having trouble figuring out why the following code isn't producing the expected output. Instead, result = 272, which does not seem right.
/*
*Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
*Find the sum of all the even-valued terms in the sequence which do not exceed four million.
*/
public class Fibonacci
{
    public static void main (String[] args)
    {
        int result = 0;
        for (int i = 2; i <= 33; i++)
        {
            System.out.println("i:" + fib(i));
            if (i % 2 == 0) //if i is even
            {
                result += i;
                System.out.println("result:" + result);
            }
        }
    }

    public static long fib(int n)
    {
        if (n <= 1)
            return n;
        else
            return fib(n-1) + fib(n-2);
    }
}
The line result += i; doesn't add a Fibonacci number to result.
You should be able to figure out how to make it add a Fibonacci number to result.
Hint: Consider making a variable that stores the number you're trying to work with.
First of all, you got one thing wrong for the Fib. The definition for Fib can be found here: http://en.wikipedia.org/wiki/Fibonacci_number.
Second of all, (i % 2 == 0) is true for every other number (2, 4, 6 and so on), which means it is true for fib(2), fib(4), and so on.
And last, result += i adds the index. What you want to add is the result of fib(i). So first you need to calculate fib(i), store that in a variable, check if THAT is an even or odd number, and if it is even, add the variable to the result.
[Edit]
One last point: computing fib recursively when you want to add up all the numbers can be really bad. If you are working with high enough numbers you might even end up with a StackOverflowError, so it is always a good idea to find a way to avoid calculating the same numbers over and over again. In this example, you want to sum the numbers, so instead of computing fib(0), then fib(1) and so on from scratch each time, you should just walk along the sequence, check every number on the way, and add it to the result if it matches your criteria.
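A minimal sketch of that iterative idea, walking the sequence once and summing the even terms up to four million as in the problem statement quoted above; the class name and loop structure are just one way to write it:
public class FibonacciSum {
    public static void main(String[] args) {
        long a = 1, b = 2;      // the sequence as defined in the problem: 1, 2, 3, 5, 8, ...
        long sum = 0;
        while (a <= 4_000_000) {
            if (a % 2 == 0) {
                sum += a;       // only even-valued terms contribute
            }
            long next = a + b;
            a = b;
            b = next;
        }
        System.out.println("result: " + sum);
    }
}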
Well, here's a good starting point, but it's C, not Java. Still, it might help: link text