I was asked a question today that I don't believe is possible, but I could be wrong or may be overthinking it. How can you reverse an array without using iteration in C?
My thinking is that it's impossible, because the array can be any size and no C program can be written to support that without using some form of iteration.
The answer to your question is that, yes, it is possible to reverse an array without iteration. The phrasing of the question itself might be ambiguous, but the spirit of the question is obvious: a recursive algorithm can be used, and there is no ambiguity at all as to the meaning of "recursive" in this sense.
If, in an interview situation with a top-flight company, you were asked this question, then the following pseudo-code would be sufficient to demonstrate you truly understood what is meant by recursion:
function reverse(array)
    if (length(array) < 2) then
        return array
    end
    n = length(array)
    left_half  = reverse(array[0 .. (n/2)-1])
    right_half = reverse(array[(n/2) .. (n-1)])
    return right_half + left_half
end
For example, if we have an array of 16 elements containing the first 16 letters of the Latin Alphabet, [A]..[P], the above reverse algorithm could be visualised as follows:
Original Input
1. ABCDEFGHIJKLMNOP Recurse
2. ABCDEFGH IJKLMNOP Recurse
3. ABCD EFGH IJKL MNOP Recurse
4. AB CD EF GH IJ KL MN OP Recurse
5. A B C D E F G H I J K L M N O P Terminate
6. BA DC FE HG JI LK NM PO Reverse
7. DCBA HGFE LKJI PONM Reverse
8. HGFEDCBA PONMLKJI Reverse
9. PONMLKJIHGFEDCBA Reverse
Reversed Output
Any problem that is solved with a recursive algorithm follows the Divide and Conquer paradigm, namely that:
The problem is divided into [two or more] sub-problems where each sub-problem is smaller than, but can be solved in a similar manner to, the original problem (Divide).
The problem is divided into [two or more] sub-problems where each sub-problem is independent and can be solved either recursively, or in a straightforward manner if small enough (Conquer).
The problem is divided into [two or more] sub-problems where the results of those sub-problems are combined to give the solution for the original problem (Combine).
The pseudo-code above for reversing an array strictly satisfies the above criteria. Thus, it can be considered a recursive algorithm and we can state without any doubt that reversing an array can be done without using iteration.
ADDITIONAL BACKGROUND INFORMATION
The difference between Iteration, Recursive Implementations and Recursive Algorithms
It is a common misunderstanding that a recursive implementation means an algorithm is recursive. They are not equivalent. Here is a definitive explanation as to why, including a detailed explanation of the above solution.
What are Iteration and Recursion?
Back in 1990, three of the most respected scholars of modern algorithm analysis in the field of computer science, Thomas H. Cormen, Charles E. Leiserson and Ronald L. Rivest, released their much acclaimed Introduction to Algorithms. In this book, which drew together over 200 respected texts in their own right, and which for over 20 years has been used as a standard text for teaching algorithms in many of the top universities around the world, Messrs. Cormen, Leiserson, and Rivest were explicit about what constitutes Iterating and what constitutes Recursing.
In their analysis and comparison of two classic sorting algorithms, Insertion Sort and Merge Sort, they explain the fundamental properties of iterative and recursive algorithms (sometimes termed incremental algorithms to disambiguate when the classical mathematical notion of iteration is being used in the same context).
Firstly, Insertion Sort is classified as an Iterative algorithm, with its behaviour summarised as follows:
Having sorted the subarray A[1..j-1], we insert the single item A[j] into its proper place, yielding the sorted array A[1..j].
Source: Introduction to Algorithms - Cormen, Leiserson, Rivest, 1990 MIT Press
This statement classifies an Iterative algorithm as one that relies on the result or state of a previous execution ("iteration") of the algorithm, and that such results or state information are then used to solve the problem for the current iteration.
Merge Sort, on the other hand, is classified as a recursive algorithm. A recursive algorithm conforms to a processing paradigm called Divide and Conquer which is a set of three fundamental criteria that differentiate the operation of recursive algorithms from non-recursive algorithms. An algorithm can be considered recursive if, during the processing of a given problem:
The problem is divided into [two or more] sub-problems where each sub-problem is smaller than, but can be solved in a similar manner to, the original problem (Divide).
The problem is divided into [two or more] sub-problems where each sub-problem can be solved either recursively, or in a straightforward manner if small enough (Conquer).
The problem is divided into [two or more] sub-problems where the results of those sub-problems are combined to give the solution for the original problem (Combine).
Reference: Introduction to Algorithms - Cormen, Leiserson, Rivest, 1990 MIT Press
Both Iterative algorithms and Recursive algorithms continue their work until a terminating condition has been reached. The terminating condition in Insertion Sort is that the j'th item has been properly placed in the array A[1..j]. The terminating condition in a Divide and Conquer algorithm is when Criteria 2 of the paradigm "bottoms out", that is, the size of a sub-problem reaches a sufficiently small size that it can be solved without further sub-division.
It's important to note that the Divide and Conquer paradigm requires that sub-problems must be solvable in a similar manner to the original problem to allow recursion. As the original problem is a standalone problem, with no outside dependencies, it follows that the sub-problems must also be solvable as if they were standalone problems with no outside dependencies, particularly on other sub-problems. This means that sub-problems in Divide and Conquer algorithms should be naturally independent.
Conversely, it is equally important to note that input to iterative algorithms is based on previous iterations of the algorithm, and so must be considered and processed in order. This creates dependencies between iterations which prevent the algorithm dividing the problem into sub-problems that can be recursively solved. In Insertion Sort, for example, you cannot divide the items A[1..j] into two sub-sets such that the sorted position in the array of A[j] gets decided before all items A[1..j-1] have been placed, as the real proper position of A[j] may move while any of A[1..j-1] are being themselves placed.
Recursive Algorithms vs. Recursive Implementations
The general misunderstanding of the term recursion stems from the fact there is a common and wrong assumption that a recursive implementation for some task automatically means that the problem has been solved with a recursive algorithm. Recursive algorithms are not the same as recursive implementations and never have been.
A recursive implementation involves a function, or group of functions, that eventually call themselves in order to solve a sub-portion of the overall task in exactly the same manner that the overall task is being solved in. It happens that recursive algorithms (i.e., those that satisfy the Divide and Conquer paradigm) lend themselves well to recursive implementations. However, recursive algorithms can be implemented using just iterative constructs like for(...) and while(...), as all algorithms, including recursive algorithms, end up performing some task repeatedly in order to get a result.
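To make this tangible, the divide-and-conquer reversal visualised earlier in this answer can be implemented with nothing but for loops, by performing the Combine levels bottom-up: each pass swaps adjacent blocks of doubling width. This is a minimal sketch with names of my own choosing, and for simplicity it assumes the array length is a power of two:

```c
#include <assert.h>

/* Swap two non-overlapping blocks of width w. */
static void swap_blocks(char *a, char *b, int w) {
    for (int i = 0; i < w; i++) {
        char t = a[i];
        a[i] = b[i];
        b[i] = t;
    }
}

/* Loop-only reversal: pass w = 1, 2, 4, ... swaps each pair of
 * adjacent w-sized blocks, mirroring the Combine levels of the
 * recursion from the bottom up. Assumes n is a power of two. */
void reverse_bottom_up(char *arr, int n) {
    for (int w = 1; w < n; w *= 2)
        for (int i = 0; i + 2 * w <= n; i += 2 * w)
            swap_blocks(arr + i, arr + i + w, w);
}
```

For the 16-letter example above, the successive passes reproduce exactly the lines labelled Reverse in the visualisation, yet the code contains no recursive calls.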
Other contributors to this post have demonstrated perfectly that iterative algorithms can be implemented using a recursive function. In fact, recursive implementations are possible for everything that involves iterating until some terminating condition has been met. Recursive implementations where there is no Divide or Combine steps in the underlying algorithm are equivalent to iterative implementations with a standard terminating condition.
Taking Insertion Sort as an example, we already know (and it has been proven) that Insertion Sort is an iterative algorithm. However, this does not prevent a recursive implementation of Insertion Sort. In fact, a recursive implementation can be created very easily as follows:
function insertionSort(array)
    if (length(array) == 1)
        return array
    end
    itemToSort = array[length(array)]
    array = insertionSort(array[1 .. (length(array)-1)])
    find position of itemToSort in array
    insert itemToSort into array
    return array
end
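A direct C rendering of that pseudo-code might look as follows (0-based indices; the function name is my own choice):

```c
#include <assert.h>

/* Recursive implementation of (iterative-in-nature) insertion sort:
 * sort the first n-1 items recursively, then insert the last item
 * into its proper place. */
void insertion_sort_rec(int a[], int n) {
    if (n <= 1)
        return;                     /* terminating condition */
    insertion_sort_rec(a, n - 1);   /* sort a[0 .. n-2] first */
    int item = a[n - 1];            /* itemToSort */
    int j = n - 2;
    while (j >= 0 && a[j] > item) { /* find its position, shifting */
        a[j + 1] = a[j];            /* larger items to the right   */
        j--;
    }
    a[j + 1] = item;                /* insert itemToSort */
}
```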
As can be seen, the implementation is recursive. However, Insertion Sort is an iterative algorithm and this we know. So, how do we know that even by using the above recursive implementation that our Insertion Sort algorithm hasn't become recursive? Let us apply the three criteria of the Divide and Conquer paradigm to our algorithm and check.
The problem is divided into [two or more] sub-problems where each sub-problem is smaller than, but can be solved in a similar manner to, the original problem.
YES: Excluding an array of length one, the method for inserting an item A[j] into its proper place in the array is identical to the method used to insert all previous items A[1..j-1] into the array.
The problem is divided into [two or more] sub-problems where each sub-problem is independent and can be solved either recursively, or in a straightforward manner if small enough.
NO: Proper placement of item A[j] is wholly dependent on the array containing A[1..j-1] items and those items being sorted. Therefore, item A[j] (called itemToSort) is not put in the array before the rest of the array is processed.
The problem is divided into [two or more] sub-problems where the results of those sub-problems are combined to give the solution for the original problem.
NO: Being an iterative algorithm, only one item A[j] can be properly placed in any given iteration. The space A[1..j] is not divided into sub-problems where A[1], A[2]...A[j] are all properly placed independently and then all these properly placed elements combined to give the sorted array.
Clearly, our recursive implementation has not made the Insertion Sort algorithm recursive in nature. In fact, the recursion in the implementation in this case is acting as flow control, allowing the iteration to continue until the terminating condition has been met. Therefore, using a recursive implementation did not change our algorithm into a recursive algorithm.
Reversing an Array Without Using an Iterative Algorithm
So now that we understand what makes an algorithm iterative, and what makes one recursive, how is it that we can reverse an array "without using iteration"?
There are two ways to reverse an array. Both methods require you to know the length of the array in advance. The iterative algorithm is favoured for its efficiency and its pseudo-code looks as follows:
function reverse(array)
    for each index i = 0 to (length(array) / 2 - 1)
        swap array[i] with array[length(array) - 1 - i]
    next
end
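In C (0-based indices; the function name is mine), the same iterative algorithm reads:

```c
#include <assert.h>

/* In-place iterative reversal: walk to the middle, swapping
 * opposite ends as we go. */
void reverse_iter(int a[], int n) {
    for (int i = 0; i < n / 2; i++) {
        int t = a[i];
        a[i] = a[n - 1 - i];
        a[n - 1 - i] = t;
    }
}
```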
This is a purely iterative algorithm. Let us examine why we can come to this conclusion by comparing it to the Divide and Conquer paradigm which determines an algorithm's recursiveness.
The problem is divided into [two or more] sub-problems where each sub-problem is smaller than, but can be solved in a similar manner to, the original problem.
YES: Reversal of the array is broken down to its finest granularity, elements, and processing for each element is identical to all other processed elements.
The problem is divided into [two or more] sub-problems where each sub-problem is independent and can be solved either recursively, or in a straightforward manner if small enough.
YES: Reversal of element i in the array is possible without requiring that element (i + 1) (for example) has been reversed or not. Furthermore, reversal of element i in the array does not require the results of other element reversals in order to be able to complete.
The problem is divided into [two or more] sub-problems where the results of those sub-problems are combined to give the solution for the original problem.
NO: Being an iterative algorithm, only one calculation stage is performed at every algorithm step. It does not divide problems into subproblems and there is no merge of the results of two or more sub-problems to get a result.
The above analysis of our first algorithm confirmed that it does not fit the Divide and Conquer paradigm, and therefore cannot be considered to be a recursive algorithm. However, as both criteria (1) and (2) were satisfied, it is apparent that a recursive algorithm could be possible.
The key lies in the fact that the sub-problems in our iterative solution are of the smallest possible granularity (i.e. elements). By dividing the problem into successively smaller and smaller sub-problems (instead of going for the finest granularity from the start), and then merging the results of the sub-problems, the algorithm can be made recursive.
For example, if we have an array of 16 elements containing the first 16 letters of the Latin Alphabet (A..P), a recursive algorithm would visually look as follows:
Original Input
1. ABCDEFGHIJKLMNOP Divide
2. ABCDEFGH IJKLMNOP Divide
3. ABCD EFGH IJKL MNOP Divide
4. AB CD EF GH IJ KL MN OP Divide
5. A B C D E F G H I J K L M N O P Terminate
6. BA DC FE HG JI LK NM PO Conquer (Reverse) and Merge
7. DCBA HGFE LKJI PONM Conquer (Reverse) and Merge
8. HGFEDCBA PONMLKJI Conquer (Reverse) and Merge
9. PONMLKJIHGFEDCBA Conquer (Reverse) and Merge
Reversed Output
From top level, the 16 elements are progressively broken into smaller sub-problem sizes of exactly equal size (levels 1 to 4) until we reach the finest granularity of sub-problem; unit-length arrays in forward order (step 5, individual elements). At this point, our 16 array elements still appear to be in order. However, they are at the same time also reversed as a single element array is also a reversed array in its own right. The results of the single-element arrays are then merged to get eight reversed arrays of length two (step 6), then merged again to get four reversed arrays of length four (step 7), and so on until our original array has been reconstructed in reverse (steps 6 to 9).
The pseudo-code for the recursive algorithm to reverse an array looks as follows:
function reverse(array)
    /* Check terminating condition. All single elements are also
     * reversed arrays of unit length.
     */
    if (length(array) < 2) then
        return array
    end
    n = length(array)
    /* Divide the problem into two equal sub-problems. We process the
     * sub-problems in reverse order so that, when combined, the array
     * has been reversed.
     */
    return reverse(array[(n/2) .. (n-1)]) + reverse(array[0 .. ((n/2)-1)])
end
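Because C arrays cannot be concatenated with +, a C sketch of this algorithm (the name and signature are my own choices) can instead write the two recursively reversed halves into a caller-supplied output buffer:

```c
#include <assert.h>

/* Divide-and-conquer reversal: the reversed right half lands first
 * in the output, followed by the reversed left half. out must have
 * room for n elements. */
void reverse_dc(const int *in, int *out, int n) {
    if (n < 2) {            /* terminating condition: a unit-length */
        if (n == 1)         /* array is already a reversed array   */
            out[0] = in[0];
        return;
    }
    int half = n / 2;
    reverse_dc(in + half, out, n - half);      /* right half first */
    reverse_dc(in, out + (n - half), half);    /* then left half   */
}
```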
As you can see, the algorithm breaks the problem into sub-problems until it reaches the finest granularity of sub-problem that gives an instant result. It then reverses the results while they are being merged to give a reversed result array. Although we think that this algorithm is recursive, let us apply the three criteria for Divide and Conquer algorithms to confirm.
The problem is divided into [two or more] sub-problems where each sub-problem is smaller than, but can be solved in a similar manner to, the original problem.
YES: Reversing the array at level 1 can be done using exactly the same algorithm as at levels 2, 3, 4, or 5.
The problem is divided into [two or more] sub-problems where each sub-problem is independent and can be solved either recursively, or in a straightforward manner if small enough.
YES: Every sub-problem that is not unit length is solved by splitting the problem into two independent sub-arrays and recursively reversing those sub-arrays. Unit length arrays, the smallest arrays possible, are themselves reversed so providing a terminating condition and a guaranteed first set of combine results.
The problem is divided into [two or more] sub-problems where the results of those sub-problems are combined to give the solution for the original problem.
YES: Every problem at levels 6, 7, 8, and 9 is composed only of results from the level immediately above, i.e. of its sub-problems. Reversal of the array at each level results in a reversed result overall.
As can be seen, our recursive algorithm passed the three criteria for the Divide and Conquer paradigm and so can be considered a truly recursive algorithm. Therefore, it is possible to reverse an array without using an iterative algorithm.
It is interesting to note that our original iterative algorithm for array reversal can be implemented using a recursive function. The pseudo code for such an implementation is as follows:
function reverse(array)
    if (length(array) < 2) then
        return
    end
    n = length(array)
    swap array[0] and array[n-1]
    reverse(array[1 .. (n-2)])
end
This is similar to solutions proposed by other posters. It is a recursive implementation, as the defined function eventually calls itself to repeatedly perform the same task over all the elements in the array. However, it does not make the algorithm recursive, as there is no division of the problem into sub-problems and no merging of the results of sub-problems to give the final result. In this case, the recursion is simply being used as a flow-control construct, and the implementation can be shown to perform the same sequence of steps, in exactly the same order, as the original iterative algorithm proposed for the solution.
That is the difference between an Iterative Algorithm, a Recursive Algorithm, and a Recursive Implementation.
As people have said in the comments, it depends on the definition of iteration.
Case 1. Iteration as a programming style, different from recursion
If one takes recursion (simply) as an alternative to iteration, then the recursive solution presented by Kalai is the right answer.
Case 2. Iteration as lower bound linear time
If one takes iteration as "examining each element," then the question becomes one of whether array reversal requires linear time or can be done in sublinear time.
To show there is no sublinear algorithm for array reversal, consider an array with n elements. Assume an algorithm A exists for reversal which does not need to read each element. Then there exists an element a[i] for some i in 0..n-1 that the algorithm never reads, yet is still able to correctly reverse the array. (EDIT: We must exclude the middle element of an odd-length array from this range -- see the comments below -- but this does not impact whether the algorithm is linear or sublinear in the asymptotic case.)
Since the algorithm never reads element a[i] we can change its value. Say we do this. Then the algorithm, having never read this value at all, will produce the same answer for reversal as it did before we changed its value. But this answer will not be correct for the new value of a[i]. Hence a correct reversal algorithm which does not at least read every input array element (save one) does not exist. Hence array reversal has a lower bound of Ω(n) and thus requires iteration (according to the working definition for this scenario).
(Note that this proof is only for array reversal and does not extend to algorithms that truly have sublinear implementations, like binary search and element lookup.)
Case 3. Iteration as a looping construct
If iteration is taken as "looping until a condition is met" then this translates into machine code with conditional jumps, known to require some serious compiler optimization (taking advantage of branch prediction, etc.) In this case, someone asking if there is a way to do something "without iteration" may have in mind loop unrolling (to straight line code). In this case you can in principle write straight-line (loop-free) C code. But this technique is not general; it only works if you know the size of the array beforehand. (Sorry to add this more-or-less flippant case to the answer, but I did so for completeness and because I have heard the term "iteration" used in this way, and loop unrolling is an important compiler optimization.)
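For example, a fully unrolled, straight-line (jump-free) reversal is possible in C when the size is fixed at compile time; this sketch (names mine) handles exactly eight elements:

```c
#include <assert.h>

static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Straight-line code: no loops, no conditional jumps. This only
 * works because the array length (8) is known beforehand. */
void reverse8(int a[8]) {
    swap_int(&a[0], &a[7]);
    swap_int(&a[1], &a[6]);
    swap_int(&a[2], &a[5]);
    swap_int(&a[3], &a[4]);
}
```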
Use a recursive function.
void reverse(int a[], int start, int end)
{
    int temp;
    if (start >= end)   /* nothing left to swap; also handles empty arrays */
        return;
    temp = a[start];
    a[start] = a[end];
    a[end] = temp;
    reverse(a, start + 1, end - 1);
}
Just call the above method as reverse(array,0,lengthofarray-1)
Implement a recursive function to reverse a sorted array. I.e., given the array [1, 2, 3, 4, 5] your procedure should return [5, 4, 3, 2, 1].
void reverse(int a[], int start, int end)
{
    std::cout << a[end] << std::endl;
    if (end == start)
        return;
    reverse(a, start, end - 1);
}
This seems a better way, as we don't even require a loop to print the array values (note that it prints the array in reverse rather than reversing it in place).
Here's a neat solution using recursion in a JavaScript function. It does not require any parameters other than the array itself.
/* Use recursion to reverse an array */
function reverse(a){
    if(a.length === undefined || a.length < 2){
        return a;
    }
    var b = reverse(copyChop(a)); /* reverse everything after the first element */
    b.push(a[0]);                 /* then append the first element at the end   */
    return b;

    /* Return a copy of an array minus the first element */
    function copyChop(a){
        return a.slice(1);
    }
}
Call it as follows:
reverse([1,2,3,4]);
Note that b must be declared with var and assigned (rather than pushed) the result of the recursive call. Without the var declaration, b is an undeclared global shared between reverse and copyChop, and if you also push the recursive result instead of assigning it, you end up with an array nested inside your final result.
#include <stdio.h>

void rev(int *a, int i, int n)
{
    if (i < n / 2)
    {
        int temp = a[i];
        a[i] = a[n - i - 1];
        a[n - i - 1] = temp;
        rev(a, ++i, n);
    }
}

int main()
{
    int array[] = {3, 2, 4, 5, 6, 7, 8};
    int len = (int)(sizeof(array) / sizeof(int));
    rev(array, 0, len);
    for (int i = 0; i < len; i++)
    {
        printf("array[%d]->%d\n", i, array[i]);
    }
    return 0;
}
One solution could be:
#include <stdio.h>
#include <stdlib.h>
void swap(int v[], int v_start, int v_middle, int v_end) {
    /* temporarily save the left block v[v_start..v_middle] */
    int *aux = calloc(v_middle - v_start + 1, sizeof(int));
    int k = 0;
    for (int i = v_start; i <= v_middle; i++) {
        aux[k] = v[i];
        k = k + 1;
    }
    /* move the right block to the front */
    k = v_start;
    for (int i = v_middle + 1; i <= v_end; i++) {
        v[k] = v[i];
        k = k + 1;
    }
    /* copy the saved left block after it */
    for (int i = 0; i <= v_middle - v_start; i++) {
        v[k] = aux[i];
        k = k + 1;
    }
    free(aux);
}

void divide(int v[], int v_start, int v_end) {
    if (v_start < v_end) {
        int v_middle = (v_start + v_end) / 2;
        divide(v, v_start, v_middle);
        divide(v, v_middle + 1, v_end);
        swap(v, v_start, v_middle, v_end);
    }
}
int main() {
int v[10] = {4, 20, 12, 100, 50, 9}, n = 6;
printf("Array: \n");
for (int i = 0; i < n; i++) {
printf("%d ", v[i]);
}
printf("\n\n");
divide(v, 0, n - 1);
printf("Reversed: \n");
for (int i = 0; i < n; i++) {
printf("%d ", v[i]);
}
return 0;
}
I came across this question recently but didn't get any idea how to solve it. Can someone help with pseudo-code?
Given an array with four integers A, B, C, D, shuffle them in some order. If the integers are unique then there are 24 shuffles. My task is to find the shuffle such that
F(S) = abs(s[0]-s[1]) + abs(s[1]-s[2]) + abs(s[2]-s[3])
is maximised.
For example consider this example
A=5, B= 3, C=-1, D =5
s[0]=5, s[1]=-1, s[2]= 5, s[3] =3
will give me the maximum sum, which is
F[s] =14
The time and space complexity are O(1).
Since your array has a bounded size, any algorithm you use that terminates will have time and space complexity O(1). Therefore, the simple algorithm of "try all permutations and find the best one" will solve the problem in the appropriate time bounds. I don't mean to say that this is by any stretch of the imagination the ideal algorithm, but if all you need is something that works in time/space O(1), then you've got your answer.
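As a sketch in C (the helper names are my own), trying all 24 orderings looks like this:

```c
#include <assert.h>
#include <stdlib.h>

static int score(const int s[4]) {
    return abs(s[0] - s[1]) + abs(s[1] - s[2]) + abs(s[2] - s[3]);
}

/* Enumerate all permutations of s[k..3] by swapping, tracking the
 * best score seen so far. */
static void permute(int s[4], int k, int *best) {
    if (k == 4) {
        int f = score(s);
        if (f > *best)
            *best = f;
        return;
    }
    for (int i = k; i < 4; i++) {
        int t = s[k]; s[k] = s[i]; s[i] = t;   /* choose s[i] */
        permute(s, k + 1, best);
        t = s[k]; s[k] = s[i]; s[i] = t;       /* un-choose   */
    }
}

int best_shuffle_score(int a, int b, int c, int d) {
    int s[4] = {a, b, c, d};
    int best = 0;                 /* scores are non-negative */
    permute(s, 0, &best);
    return best;
}
```

For the example in the question (5, 3, -1, 5) this yields 14, matching the expected maximum.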
Hope this helps!
Algorithm
Consider laying out your points in sorted order:
A B C D
Let x be the distance AB
Let y be the distance BC
Let z be the distance CD
An order which will always give the best score is BDAC with score 2x+3y+2z.
Example
In your example, the sorted points are:
A=-1, B=3, C=5, D=5
x=4, y=2, z=0
So the best order will be BDAC=3->5->-1->5 with score 14.
Hints towards Proof
You can prove this result by simply considering all permutations of the path between the 4 points and computing the score in terms of x, y, z.
e.g.
ABCD -> x+y+z
ACBD -> x+3y+z
ADBC -> x+3y+2z
etc.
In any permutation, the score will use x at most twice (because A is on the end so the route can only go to or from A twice). Similarly, z is used at most twice because D is on the end. y can be used at most three times because there are three things being added.
The permutation BDAC uses x twice, z twice, and y three times so can never be beaten.
If the array is sorted, this solution also works:
F(S)= 2*abs(s[0]-s[3]) + abs(s[1]-s[2])
where s[0]=A, s[1]=B, s[2]=C and s[3]=D.
Do you know any way to get the k-th element of an m-element combination in O(1)? The expected solution should work for any size of input data and any value of m.
Let me explain this problem by example (python code):
>>> import itertools
>>> data = ['a', 'b', 'c', 'd']
>>> k = 2
>>> m = 3
>>> result = [''.join(el) for el in itertools.combinations(data, m)]
>>> print result
['abc', 'abd', 'acd', 'bcd']
>>> print result[k-1]
abd
For the given data, the k-th (2nd in this example) element of the m-element combinations is abd. Is it possible to get that value (abd) without creating the whole list of combinations?
I'm asking because I have data of ~1,000,000 characters and it is impossible to create the full m-character-length combination list to get the k-th element.
The solution can be pseudo code, or a link the page describing this problem (unfortunately, I didn't find one).
Thanks!
http://en.wikipedia.org/wiki/Permutation#Numbering_permutations
Basically, express the index in the factorial number system, and use its digits as a selection from the original sequence (without replacement).
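A small C sketch of that idea (my own names; 0-based index, limited to n ≤ 16 for simplicity) fills out with the index-th permutation of 0..n-1 in lexicographic order:

```c
#include <assert.h>

/* Express index in the factorial number system; each digit selects
 * the next element from the pool of those not yet used. */
void nth_permutation(int n, long index, int *out) {
    int pool[16];                    /* candidate elements, in order */
    long fact = 1;
    for (int i = 0; i < n; i++)
        pool[i] = i;
    for (int i = 1; i < n; i++)
        fact *= i;                   /* fact = (n-1)! */
    for (int i = 0; i < n; i++) {
        long digit = index / fact;   /* next factorial-base digit */
        index %= fact;
        out[i] = pool[digit];
        for (long j = digit; j < n - 1 - i; j++)
            pool[j] = pool[j + 1];   /* selection without replacement */
        if (i < n - 1)
            fact /= (n - 1 - i);
    }
}
```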
Not necessarily O(1), but the following should be very fast:
Take the original combinations algorithm:
def remove(e, elems):
    # helper assumed by the snippets below: elems without (one copy of) e
    rest = list(elems)
    rest.remove(e)
    return rest

def combinations(elems, m):
    # The k-th element depends on what order you use for
    # the combinations. Assuming it looks something like this...
    # (note this enumerates ordered selections without replacement)
    if m == 0:
        return [[]]
    else:
        combs = []
        for e in elems:
            combs += [[e] + c for c in combinations(remove(e, elems), m - 1)]
        return combs
For n initial elements and selection length m, the enumeration above yields n!/(n-m)! ordered selections (n!/((n-m)!m!) combinations if order is ignored). We can use this counting to skip directly to our desired combination:
def ncombs(n, m):
    # number of length-m ordered selections from n elements: n!/(n-m)!
    total = 1
    for i in range(n, n - m, -1):
        total *= i
    return total

def kth_comb(elems, m, k):
    # High level pseudo code, now runnable; k is 0-based
    if m == 0:
        return []
    else:
        combs_per_set = ncombs(len(elems) - 1, m - 1)
        i = k // combs_per_set
        k = k % combs_per_set
        x = elems[i]
        return [x] + kth_comb(remove(x, elems), m - 1, k)
First calculate r = n!/(m!(n-m)!) with n the number of elements.
Then floor(r/k) is the index of the first element in the result;
remove it (shift everything following to the left),
do m--, n-- and k = r % k,
and repeat until m is 0 (hint: when k is 0, just copy the remaining chars to the result).
I have written a class to handle common functions for working with the binomial coefficient, which is the type of problem that your problem appears to fall under. It performs the following tasks:
Outputs all the K-indexes in a nice format for any N choose K to a file. The K-indexes can be substituted with more descriptive strings or letters. This method makes solving this type of problem quite trivial.
Converts the K-indexes to the proper index of an entry in the sorted binomial coefficient table. This technique is much faster than older published techniques that rely on iteration. It does this by using a mathematical property inherent in Pascal's Triangle. My paper talks about this. I believe I am the first to discover and publish this technique, but I could be wrong.
Converts the index in a sorted binomial coefficient table to the corresponding K-indexes. I believe it too is faster than other published techniques.
Uses Mark Dominus method to calculate the binomial coefficient, which is much less likely to overflow and works with larger numbers.
The class is written in .NET C# and provides a way to manage the objects related to the problem (if any) by using a generic list. The constructor of this class takes a bool value called InitTable that when true will create a generic list to hold the objects to be managed. If this value is false, then it will not create the table. The table does not need to be created in order to perform the 4 above methods. Accessor methods are provided to access the table.
There is an associated test class which shows how to use the class and its methods. It has been extensively tested with 2 cases and there are no known bugs.
To read about this class and download the code, see Tablizing The Binomial Coefficient.
It should not be hard to convert this class to Java, Python, or C++.
I have an algorithm and I want to figure out what it does. I'm sure some of you can just look at it and tell me, but I've been looking at it for half an hour and I'm still not sure. It just gets messy when I try to play with it. What are your techniques for breaking down an algorithm like this? How do I analyze stuff like this and know what's going on?
My guess is that it's sorting the numbers from smallest to biggest, but I'm not too sure.
1. mystery(a1 , a2 , . . . an : array of real numbers)
2. k = 1
3. bk = a1
4. for i = 2 to n
5. c = 0
6. for j = 1 to i − 1
7. c = aj + c
8. if (ai ≥ c)
9. k = k + 1
10. bk = ai
11. return b1 , b2 , . . . , bk
Here's an equivalent I tried to write in Java, but I'm not sure if I translated it properly:
public int[] foo(int[] a) {
    int k = 1;
    int nSize = 10;
    int[] b = new int[nSize];
    b[k] = a[1];
    for (int i = 2; i < a.length;) {
        int c = 0;
        for (int j = 1; j < i - 1;)
            c = a[j] + c;
        if (a[i] >= c) {
            k = k + 1;
            b[k] = a[i];
        }
    }
    return b;
}
Google never ceases to amaze, due on the 29th I take it? ;)
A Java translation is a good idea, once operational you'll be able to step through it to see exactly what the algorithm does if you're having problems visualizing it.
A few pointers: the psuedo code has arrays indexed 1 through n, Java's arrays are indexed 0 through length - 1. Your loops need to be modified to suit this. Also you've left the increments off your loops - i++, j++.
Making b magic constant sized isn't a good idea either - looking at the pseudo code we can see it's written to at most n - 1 times, so that would be a good starting point for its size. You can resize it to fit at the end.
Final tip: the algorithm is O(n²) timed. This is easy to determine - the outer for loop runs n times and the inner for loop n / 2 times on average, for a total running time of n * (n / 2). The n * n factor dominates, which is what Big O is concerned with, making this an O(n²) algorithm.
The easiest way is to take a small sample set of numbers and work through it on paper. In your case, let's take the array a[] = {3,6,1,19,2}. Now we need to step through your algorithm:
Initialization:
i = 2
b[1] = 3
After Iteration 1:
i = 3
b[2] = 6
After Iteration 2:
i = 4
b[2] = 6
After Iteration 3:
i = 5
b[3] = 19
After Iteration 4:
i = 6
b[3] = 19
Result b[] = {3,6,19}
You probably can guess what it is doing.
Your code is pretty close to the pseudo-code, but there are a few errors:
Your for loops are missing the increment rules: i++, j++
Java arrays are 0-based, not 1-based, so you should initialize k=0, use a[0], start i at 1, etc.
Also, this isn't sorting, more of a filtering - you get some of the elements, but in the same order.
How do I analyze stuff like this and know whats going on?
The basic technique for something like this is to hand execute it with a pencil and paper.
A more advanced technique is to decompose the code into parts, figure out what the parts do and then mentally reassemble it. (The trick is picking the boundaries for decomposing. That takes practice.)
Once you get better at it, you will start to be able to "read" the pseudo-code ... though this example is probably a bit too gnarly for most coders to handle like that.
When converting to java, be careful with array indexes, as this pseudocode seems to imply a 1-based index.
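For comparison, here is one faithful translation into C rather than Java (0-based indices, the missing increments restored; the signature and names are my own choices). It returns how many elements were written to b:

```c
#include <assert.h>

/* Assumes n >= 1; b must have room for n elements. */
int mystery(const int a[], int n, int b[]) {
    int k = 0;
    b[0] = a[0];
    for (int i = 1; i < n; i++) {
        int c = 0;
        for (int j = 0; j < i; j++)  /* c = a[0] + ... + a[i-1] */
            c += a[j];
        if (a[i] >= c)               /* keep items at least as large  */
            b[++k] = a[i];           /* as the sum of all before them */
    }
    return k + 1;
}
```

Running it on the worked example {3,6,1,19,2} reproduces the trace given above: it keeps 3, 6, and 19.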
From static analysis:
a is the input and doesn't change
b is the output
k appears to be a pointer to an element of b, that will only increment in certain circumstances (so we can think of k = k+1 as meaning "the next time we write to b, we're going to write to the next element")
what does the loop in lines 6-7 accomplish? That is, what's the value of c when it finishes?
using the previous answer then, when is line 8 true?
since lines 9-10 are only executed when line 8 is true, what does that say about the elements in the output?
Then you can start to sanity check your answer:
will all the elements of the output be set?
try running through the pseudocode with an input like [1,2,3,4] -- what would you expect the output to be?
def funct(*a)
sum = 0
a.select {|el| (el >= sum).tap { sum += el }}
end
Srsly? Who invents those homework questions?
By the way: since this is doing both a scan and a filter at the same time, and the filter depends on the scan, which language has the necessary features to express this succinctly in such a way that the sequence is only traversed once?