Let's say you have an arbitrarily large two-dimensional array with an even number of items in it. Let's also assume, for clarity, that you can only choose between two things to put as a given item in the array. How would you go about putting a random choice at a given index, such that once the array is filled there is an even split between the two choices?
If there are any answers with code, Java is preferred but other languages are fine as well.
You could think about it the opposite way: rather than deciding, for a given index, which value to put in it, select n/2 elements of the array and place the first value in them, then place the second value in the remaining n/2.
A 2-D array A[M,N] can be mapped to a vector V[M*N] (you can use row-major or column-major order for the mapping).
Start with a vector V[M*N]. Fill its first half with the first choice, and the second half of the array with the second choice object. Run a Fisher-Yates shuffle, and convert the shuffled array to a 2-D array. The array is now filled with elements that are evenly split among the two choices, and the choices at each particular index are random.
The code below creates a List<T> the size of the area of the matrix, and fills one half with the first choice (space[0]) and the other half with the second (space[1]). Afterward, it applies a shuffle (namely Fisher-Yates, via Collections.shuffle) and fills the matrix with these values.
static <T> void fill(final T[][] matrix, final T... space) {
    final int w = matrix.length;
    final int h = matrix[0].length;
    final int area = w * h;
    final List<T> sample = new ArrayList<T>(area);
    final int half = area >> 1;
    sample.addAll(Collections.nCopies(half, space[0]));
    sample.addAll(Collections.nCopies(half, space[1]));
    Collections.shuffle(sample);
    final Iterator<T> cursor = sample.iterator();
    for (int x = w - 1; x >= 0; --x) {
        final T[] column = matrix[x];
        for (int y = h - 1; y >= 0; --y) {
            column[y] = cursor.next();
        }
    }
}
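For example, a quick usage sketch with a Boolean matrix (the dimensions here are arbitrary, as long as the area is even):

Boolean[][] grid = new Boolean[4][6];     // 24 cells, even area
fill(grid, Boolean.TRUE, Boolean.FALSE);  // 12 TRUEs and 12 FALSEs, shuffled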
Pseudo-code:
int trues_remaining = size / 2;
int falses_remaining = size / 2;
while (trues_remaining + falses_remaining > 0)
{
    if (trues_remaining > 0 && falses_remaining > 0)
    {
        bool b = getRandomBool();
        array.push(b);
        if (b) trues_remaining--; else falses_remaining--;
    }
    else if (trues_remaining > 0)
    {
        array.push(true);
        trues_remaining--;
    }
    else
    {
        array.push(false);
        falses_remaining--;
    }
}
Doesn't really scale to more than two values, though. How about:
assoc_array = { 1 = 4, 2 = 4, 3 = 4, 4 = 4 };
while (! assoc_array.isEmpty())
{
    int index = rand(assoc_array.getNumberOfKeys());
    int n = assoc_array.getKeyAtIndex(index);
    array.push(n);
    assoc_array[n]--;
    if (assoc_array[n] <= 0) assoc_array.deleteKey(n);
}
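A hedged Java sketch of the same idea (the names are mine; note that, as in the pseudo-code, a remaining key is picked uniformly rather than weighted by its remaining count):

import java.util.*;

Map<Integer, Integer> remaining = new HashMap<>();
for (int v = 1; v <= 4; v++) remaining.put(v, 4);   // four values, four copies each
List<Integer> result = new ArrayList<>();
Random rnd = new Random();
while (!remaining.isEmpty()) {
    List<Integer> keys = new ArrayList<>(remaining.keySet());
    int n = keys.get(rnd.nextInt(keys.size()));     // pick a random remaining key
    result.add(n);
    remaining.merge(n, -1, Integer::sum);           // decrement its count
    if (remaining.get(n) <= 0) remaining.remove(n); // drop exhausted keys
}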
EDIT: just noticed you asked for a two-dimensional array. Well, it should be easy to adapt this approach to n-dimensional arrays.
EDIT2: from your comment above, "school yard pick" is a great name for this.
It doesn't sound like your requirements for randomness are very strict, but I thought I'd contribute some more thoughts for anyone who may benefit from them.
You're basically asking for a pseudorandom binary sequence, and the most popular one I know of is the maximum length sequence (m-sequence). This uses an n-bit linear feedback shift register (LFSR) to define a periodic series of 1's and 0's that has a perfectly flat frequency spectrum; at least, it is perfectly flat within certain bounds, determined by the sequence's period (2^n - 1 bits).
What does that mean? Basically it means that the sequence is guaranteed to be maximally random across all shifts (and therefore frequencies) if its full length is used. When compared to an equal length sequence of numbers generated from a random number generator, it will contain MORE randomness per length than your typical randomly generated sequence.
It is for this reason that it is used to determine impulse functions in white noise analysis of systems, especially when experiment time is valuable and higher order cross effects are less important. Because the sequence is random relative to all shifts of itself, its auto-correlation is a perfect delta function (aside from qualifiers indicated above) so the stimulus does not contaminate the cross correlation between stimulus and response.
I don't really know what your application for this matrix is, but if it simply needs to "appear" random then this would do that very effectively. In terms of being balanced, 1's vs 0's, the sequence is guaranteed to have exactly one more 1 than 0. Therefore if you're trying to create a grid of 2^n, you would be guaranteed to get the correct result by tacking a 0 onto the end.
So an m-sequence is more random than anything you'll generate using a random number generator and it has a defined number of 0's and 1's. However, it doesn't allow for unqualified generation of 2d matrices of arbitrary size - only those where the total number of elements in the grid is a power of 2.
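For the curious, here is a minimal Java sketch of a maximal-length LFSR. I'm assuming a 4-bit register with the known primitive polynomial x^4 + x^3 + 1, giving period 2^4 - 1 = 15; this is just an illustration, not the only valid tap choice:

int lfsr = 0b1001;                              // any non-zero seed
for (int i = 0; i < 15; i++) {
    int bit = ((lfsr >> 3) ^ (lfsr >> 2)) & 1;  // feedback: taps at bits 4 and 3
    System.out.print(lfsr & 1);                 // emit one bit of the m-sequence
    lfsr = (lfsr >> 1) | (bit << 3);            // shift and feed back
}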
I have been struggling to solve an array problem in linear time.
The problem is:
Assuming we are given an array A[1...n], write an algorithm that returns true if:
There are two numbers x, y in the array such that:
x < y
x repeats more than n/3 times
y repeats more than n/4 times
I tried to write the following Java program to do so, assuming we have a sorted array, but I don't think it is the best implementation.
public static boolean solutionManma() {
    int[] arr = {2, 2, 2, 3, 3, 3};
    int n = arr.length;
    int xCount = 1;
    int yCount = 1;
    int maxXcount = xCount, maxYCount = yCount;
    int currX = arr[0];
    int currY = arr[n - 1];
    for (int i = 1; i < n - 2; i++) {
        int right = arr[n - 2 - i + 1];
        int left = arr[i];
        if (currX == left) {
            xCount++;
        } else {
            maxXcount = Math.max(xCount, maxXcount);
            xCount = 1;
            currX = left;
        }
        if (currY == right) {
            yCount++;
        } else {
            maxYCount = Math.max(yCount, maxYCount);
            yCount = 1;
            currY = right;
        }
    }
    return (maxXcount > n / 3 && maxYCount > n / 4);
}
If anyone has an algorithm idea for this kind of issue (preferably O(n)) I would much appreciate it because I got stuck with this one.
The key part of this problem is to find in linear time and constant space the values which occur more than n/4 times. (Note: the text of your question says "more than" and the title says "at least". Those are not the same condition. This answer is based on the text of your question.)
There are at most three values which occur more than n/4 times, and a list of such values must also include any value which occurs more than n/3 times.
The algorithm we'll use returns a list of up to three values. It only guarantees that all values which satisfy the condition are in the list it returns. The list might include other values, and it does not provide any information about the precise frequencies.
So a second pass is necessary, which scans the vector a second time counting the occurrences of each of the three values returned. Once you have the three counts, it's simple to check whether the smallest value which occurs more than n/3 times (if any) is less than the largest value which occurs more than n/4 times.
To construct the list of candidates, we use a generalisation of the Boyer-Moore majority vote algorithm, which finds a value which occurs more than n/2 times. The generalisation, published in 1982 by J. Misra and D. Gries, uses k-1 counters, each possibly associated with a value, to identify values which might occur more than 1/k times. In this case, k is 4 and so we need three counters.
Initially, all of the counters are 0 and are not associated with any value. Then for each value in the array, we do the following:
If there is a counter associated with that value, we increment it.
If no counter is associated with that value but some counter is at 0, we associate that counter with the value and increment its count to 1.
Otherwise, we decrement every counter's count.
Once all the values have been processed, the values associated with counters with positive counts are the candidate values.
For a general implementation where k is not known in advance, it would be possible to use a hash-table or other key-value map to identify values with counts. But in this case, since it is known that k is a small constant, we can just use a simple vector of three value-count pairs, making this algorithm O(n) time and O(1) space.
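A sketch of the whole procedure in Java (the names are mine, and the thresholds follow the "more than" reading discussed above):

static boolean existsXY(int[] a) {
    int n = a.length;
    int[] cand = new int[3];                 // candidate values
    int[] count = new int[3];                // their counters
    for (int v : a) {
        int free = -1, match = -1;
        for (int c = 0; c < 3; c++) {
            if (count[c] > 0 && cand[c] == v) match = c;
            else if (count[c] == 0 && free < 0) free = c;
        }
        if (match >= 0) count[match]++;                   // v already has a counter
        else if (free >= 0) { cand[free] = v; count[free] = 1; }
        else for (int c = 0; c < 3; c++) count[c]--;      // decrement every counter
    }
    int[] exact = new int[3];                // second pass: exact frequencies
    for (int v : a)
        for (int c = 0; c < 3; c++)
            if (count[c] > 0 && cand[c] == v) exact[c]++;
    for (int i = 0; i < 3; i++)              // x > n/3 times, y > n/4 times, x < y
        for (int j = 0; j < 3; j++)
            if (exact[i] * 3 > n && exact[j] * 4 > n && cand[i] < cand[j])
                return true;
    return false;
}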
I will suggest the following solution, using the following assumption:
In an array of length n there will be at most n different numbers
The key feature will be to count the frequency of occurrence of each distinct input using a histogram with n bins, meaning O(n) space. The algorithm is as follows:
1. Create a histogram vector with n bins, initialized to zeros.
2. For index ii in the length of the input array a:
   2.1. Increase the value: hist[a[ii]] += 1
3. Set found_x and found_y to False.
4. For the ii-th bin in the histogram, check:
   4.1. If found_x == False:
        4.1.1. If hist[ii] > n/3, set found_x = True and set x = ii
   4.2. Else if found_y == False:
        4.2.1. If hist[ii] > n/4, set y = ii and return x, y
Explanation
In the first run over the array you record the occurrence frequency of all the numbers. In the run over the histogram array, which also has a length of n, you check the occurrences. First you check if there is a number that occurred more than n/3 times, and if there is, then for the rest of the numbers (larger than x by construction, since the bins are scanned in increasing order) you check if there is another number that occurred more than n/4 times. If there is, you return the found x and y, and if there isn't, you simply return "not found" after covering all the bins in the histogram.
As for time complexity, you go over the input array once and over the histogram of the same length once, therefore the time complexity is O(n), as requested.
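A short Java sketch of this histogram approach (it assumes, as the answer does, that the values can index the n bins, i.e. 0 <= a[i] < n):

static int[] findXY(int[] a) {               // returns {x, y}, or null if none
    int n = a.length;
    int[] hist = new int[n];                 // assumes 0 <= a[i] < n
    for (int v : a) hist[v]++;
    int x = -1;
    for (int v = 0; v < n; v++) {
        if (x < 0) {
            if (hist[v] * 3 > n) x = v;      // first number with > n/3 occurrences
        } else if (hist[v] * 4 > n) {
            return new int[]{x, v};          // v > x since bins are scanned in order
        }
    }
    return null;
}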
In AP Computer Science A, our class recently learned about arrays. Our teacher posed a riddle to us.
Say you have 20 numbers, 10 through 100 inclusive (these numbers are gathered from another file using Scanners).
As each number is read, we must print the number if and only if it is not a duplicate of a number already read. Now, here's the catch. We must use the smallest array possible to solve the problem.
That's the real problem I'm having. All of my solutions require a pretty big array that has 20 slots in it.
I am required to use an array. What would be the smallest array that we could use to solve the problem efficiently?
If anyone could explain the method with pseudocode (or in words) that would be awesome.
In the worst case we have to use an array of length 19.
Why 19? Each unique number has to be remembered in order to sort out duplicates from the following numbers. Since you know that there are 20 numbers incoming, but not more, you don't have to store the last number. Either the 20th number already appeared (then don't do anything), or the 20th number is unique (then print it and exit – no need to save it).
By the way: I wouldn't call an array of length 20 big :)
If your numbers are integers: you have a range from 10 to 100, so you need 91 bits to store which values have already been read. A Java long has 64 bits, so you will need an array of two longs. Let every bit (except for the superfluous ones) stand for a number from 10 to 100. Initialize both longs with 0. When a number is read, check whether the bit mapped to the read value is set to 1. If yes, the read number is a duplicate; if no, set the bit to 1.
This is the idea behind the BitSet class.
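A minimal sketch of the two-long variant (the bit mapping v - 10 is my choice; import java.util.Scanner):

long[] seen = new long[2];                            // 128 bits cover the 91 values
Scanner in = new Scanner(System.in);
for (int i = 0; i < 20; i++) {
    int v = in.nextInt();
    int bit = v - 10;                                 // map 10..100 onto bits 0..90
    if ((seen[bit / 64] & (1L << (bit % 64))) == 0) {
        System.out.println(v);                        // not seen before: print it
        seen[bit / 64] |= 1L << (bit % 64);
    }
}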
Agreed with Socowi: if the number of numbers is known and equal to N, it is always possible to use an array of size N-1 to store the duplicates. Once the last element of the input is received, and it is already known to be the last element, there is no real need to store it in the duplicates array.
Another idea: if your numbers are small and really confined to the range [10, 100], you can use one long to store at least two small integers, and extract them back using binary AND. In this case an array of size N/2 is possible. But it makes searching the array more complicated and does not save much memory; only the number of items in the array is decreased.
Technically you don't need an array: since the input size is fixed, you can just declare 20 variables. But let's say it wasn't fixed.
As another answer says, the worst case indeed needs 19 slots. But, assuming we are talking about integers here, there is a better-case scenario where some numbers form a contiguous interval. In that case, you only have to remember the highest and lowest number, since anything in between is also a duplicate. You can use an array of intervals.
With the range of 10 to 100, the numbers can be spaced apart and you still need an array of 19 intervals, in the worst case. But let's say, that the best case occurs, and all numbers form a contiguous interval, then you only need 1 array slot.
The problem you'd still have to solve is to create an abstraction over an array that expands itself by 1 when an element is added, so it uses the minimal size necessary (similar to ArrayList, except that ArrayList doubles in size when capacity is reached).
Since an array cannot change size at run time, you need a companion variable to count the numbers that are not duplicates, and you fill the array only partially with those numbers.
Here is a simple snippet that uses the companion variable currentSize and fills the array partially.
Alternatively, you can use an ArrayList, which changes size at run time.
final int LENGTH = 20;
double[] numbers = new double[LENGTH];
int currentSize = 0;
Scanner in = new Scanner(System.in);
while (in.hasNextDouble()) {
    if (currentSize < numbers.length) {
        numbers[currentSize] = in.nextDouble();
        currentSize++;
    }
}
Edit
Now currentSize holds the count of numbers actually kept, and you will not fill all 20 elements if there were duplicates. Of course, you still need some code to determine whether a number is a duplicate or not.
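One hedged way to add that missing check, replacing the body of the while loop above with a linear scan over the partially filled array:

double value = in.nextDouble();
boolean duplicate = false;
for (int i = 0; i < currentSize; i++) {
    if (numbers[i] == value) { duplicate = true; break; }
}
if (!duplicate && currentSize < numbers.length) {
    numbers[currentSize] = value;
    currentSize++;
    System.out.println(value);                // print only first occurrences
}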
My last answer misunderstood what you were needing, but I turned up this thing that does it with an int array of 5 elements, using bit shifting. Since we know the max number is 100, we can (quite messily) store four numbers in each index.
Random rand = new Random();
int[] numbers = new int[5];   // 5 ints hold 20 bytes: 4 numbers per int
int curNum;
for (int i = 0; i < 20; i++) {
    curNum = rand.nextInt(100);
    System.out.println(curNum);
    boolean print = true;
    for (int x = 0; x < i; x++) {
        // extract the x-th stored number (byte x % 4 of int x / 4)
        byte numberToCheck = (byte) (numbers[(x - (x % 4)) / 4] >>> ((x % 4) * 8));
        if (numberToCheck == curNum) {
            print = false;
        }
    }
    if (print) {
        System.out.println("No Match: " + curNum);
    }
    // pack curNum into byte i % 4 of int i / 4
    int index = (i - (i % 4)) / 4;
    numbers[index] = numbers[index] | (curNum << ((i % 4) * 8));
}
I use rand to get my ints but you could easily change this to a scanner.
I am trying to solve this question: https://www.hackerrank.com/challenges/journey-to-the-moon i.e. a problem of finding the connected components of a graph. What I have is a list of vertices (from 0 to N-1), and each line of standard input gives me a pair of vertices that are connected by an edge (so if I get "1 3", it means vertices 1 and 3 are in one connected component). My question is: what is the best way to store the input, i.e. how should I represent my graph? My idea is to use an ArrayList of ArrayLists: each position in the outer list stores a list of adjacent vertices. This is the code:
public static List<ArrayList<Integer>> graph;
and then in the main() method:
graph = new ArrayList<ArrayList<Integer>>(N);
for (int j = 0; j < N; j++) {
    graph.add(new ArrayList<Integer>());
}
// then for each line in the standard input I fill the corresponding values in the array:
for (int j = 0; j < I; j++) {
    String[] line2 = br.readLine().split(" ");
    int a = Integer.parseInt(line2[0]);
    int b = Integer.parseInt(line2[1]);
    graph.get(a - 1).add(b);
    graph.get(b - 1).add(a);
}
I'm pretty sure that for solving the question I have to put vertex a at position b-1 and vertex b at position a-1, so this part should not change. What I am looking for is a better way to represent the graph.
Using Java's collections (ArrayList, for example) adds a lot of memory overhead: each Integer object takes at least 12 bytes, on top of the 4 bytes required for storing the int.
Just use one huge int array (let's call it edgeArray) representing the adjacency matrix, and enter 1 in every cell that corresponds to an edge: if nodes k and m are seen in the input, then cell (k, m) gets 1, else 0. In row-major order that is index k * N + m, i.e. edgeArray[k * N + m] = 1 (you can equally choose column-major order). But this int array will be very sparse, and it's trivial to implement a sparse array: just keep an array of the non-zero indices in sorted order, so that you can binary search it. The number of elements will be on the order of the number of edges.
Of course, while you are building the adjacency matrix, you won't know how many edges there are, so you won't be able to allocate the array up front. Just use a hash set; but don't use Java's HashSet, which is very inefficient (it boxes every int). Look at IntOpenHashSet from fastutil; if you are not allowed to use libraries, implement one similar to it.
Let us say the open hash set variable you will be using is called adjacencyMatrix. If you see 3 and 2, and there are 10^6 nodes in total (N = 10^6), then you just do:
adjacencyMatrix.add(3 * 1000000 + 2);
Once you have processed all the inputs, then you can make the sparse adjacency matrix implementation above:
final int[] edgeArray = adjacencyMatrix.toIntArray(new int[adjacencyMatrix.size()]);
IntArrays.sort(edgeArray);
Given a node, finding all adjacent nodes:
So if you need all the nodes connected to node p, you would binary search for the next value that is greater than or equal to p * N (O(log (number of edges))). Then you will just traverse the array until you hit a value that is greater than or equal to (p + 1) * N. All the values you encounter will be nodes connected to p.
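A sketch of that lookup in Java (edgeArray is the sorted code array from above; the names are mine, and it assumes the codes k * N + m fit in an int):

static List<Integer> neighbors(int[] edgeArray, int p, int N) {
    List<Integer> result = new ArrayList<>();       // import java.util.*
    int lo = 0, hi = edgeArray.length;
    while (lo < hi) {                               // lower bound for p * N
        int mid = (lo + hi) >>> 1;
        if (edgeArray[mid] < p * N) lo = mid + 1;
        else hi = mid;
    }
    for (int i = lo; i < edgeArray.length && edgeArray[i] < (p + 1) * N; i++)
        result.add(edgeArray[i] % N);               // decode m from k * N + m
    return result;
}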
Comparing it with the approach you mentioned in your question:
It uses O(N*b) space, where N is the number of nodes and b is the branching factor; this is lower-bounded by the number of edges.
For the approach I mentioned, the space complexity is just O(E). In fact it's exactly E integers, plus the header of the int array.
I used var graph = new Dictionary<long, List<long>>();
See here for complete solution in c# - https://gist.github.com/newton3/a4a7b4e6249d708622c1bd5ea6e4a338
PS - 2 years late, but just in case someone stumbles onto this.
I have a map of items with some probability distribution:
Map<SingleObjectiveItem, Double> itemsDistribution;
Given a certain m I have to generate a Set of m elements sampled from the above distribution.
As of now I was using the naive way of doing it:
while (mySet.size < m)
    mySet.add(getNextSample(itemsDistribution));
The getNextSample(...) method fetches an object from the distribution as per its probability. Now, as m increases the performance severely suffers. For m = 500 and itemsDistribution.size() = 1000 elements, there is too much thrashing and the function remains in the while loop for too long. Generate 1000 such sets and you have an application that crawls.
Is there a more efficient way to generate a unique set of random numbers with a "predefined" distribution? Most collection shuffling techniques and the like are uniformly random. What would be a good way to address this?
UPDATE: The loop will call getNextSample(...) "at least" 1 + 2 + 3 + ... + m = m(m+1)/2 times. That is, in the first iteration we'll definitely get a sample for the set; the 2nd iteration it may be called at least twice, and so on. If getNextSample is sequential in nature, i.e., goes through the entire cumulative distribution to find the sample, then the run time complexity of the loop is at least n*m(m+1)/2, where 'n' is the number of elements in the distribution. If m = cn with 0 < c <= 1, then the loop is at least Ω(n^3). And that too is a lower bound!
If we replace sequential search by binary search, the complexity would be at least Ω(n^2 log n). Efficient, but perhaps not by a large margin.
Also, removing from the distribution is not possible since I call the above loop k times, to generate k such sets. These sets are part of a randomized 'schedule' of items. Hence a 'set' of items.
Start out by generating a number of random points in two dimensions.
Then apply your distribution.
Now keep all the points that fall under the distribution curve and take their x coordinates: these are your random numbers, with the requested distribution.
The problem is unlikely to be the loop you show:
Let n be the size of the distribution, and I the number of invocations of getNextSample. We have I = sum_i(C_i), where C_i is the number of invocations of getNextSample while the set has size i. To find E[C_i], observe that C_i is the inter-arrival time of a Poisson process with λ = 1 - i / n, and is therefore exponentially distributed with parameter λ. Therefore, E[C_i] = 1 / λ = 1 / (1 - i / n) <= 1 / (1 - m / n). Therefore, E[I] < m / (1 - m / n).
That is, sampling a set of size m = n/2 will take, on average, less than 2m = n invocations of getNextSample. If that is "slow" and "crawls", it is likely because getNextSample is slow. This is actually unsurprising, given the unsuitable way the distribution is passed to the method (the method will, of necessity, have to iterate over the entire distribution to find a random element).
The following should be faster (if m < 0.8 n)
class Distribution<T> {
    private double[] cumulativeWeight;
    private T[] item;
    private double totalWeight;

    Distribution(Map<T, Double> probabilityMap) {
        int i = 0;
        cumulativeWeight = new double[probabilityMap.size()];
        item = (T[]) new Object[probabilityMap.size()];
        for (Map.Entry<T, Double> entry : probabilityMap.entrySet()) {
            item[i] = entry.getKey();
            totalWeight += entry.getValue();
            cumulativeWeight[i] = totalWeight;
            i++;
        }
    }

    T randomItem() {
        double weight = Math.random() * totalWeight;
        int index = Arrays.binarySearch(cumulativeWeight, weight);
        if (index < 0) {
            index = -index - 1;
        }
        return item[index];
    }

    Set<T> randomSubset(int size) {
        Set<T> set = new HashSet<>();
        while (set.size() < size) {
            set.add(randomItem());
        }
        return set;
    }
}
public class Test {
    public static void main(String[] args) {
        int max = 1_000_000;
        HashMap<Integer, Double> probabilities = new HashMap<>();
        for (int i = 0; i < max; i++) {
            probabilities.put(i, (double) i);
        }
        Distribution<Integer> d = new Distribution<>(probabilities);
        Set<Integer> set = d.randomSubset(max / 2);
        //System.out.println(set);
    }
}
The expected runtime is O(m / (1 - m / n) * log n). On my computer, a subset of size 500_000 of a set of 1_000_000 is computed in about 3 seconds.
As we can see, the expected runtime approaches infinity as m approaches n. If that is a problem (i.e. m > 0.9 n), the following more complex approach should work better:
Set<T> randomSubset(int size) {
    Set<T> set = new HashSet<>();
    while (set.size() < size) {
        T randomItem = randomItem();
        remove(randomItem); // removes the item from the distribution
        set.add(randomItem);
    }
    return set;
}
To efficiently implement remove requires a different representation for the distribution, for instance a binary tree where each node stores the total weight of the subtree whose root it is.
But that is rather complicated, so I wouldn't go that route if m is known to be significantly smaller than n.
If you are not too concerned with randomness properties, I would do it like this:
create buffer for pseudo-random numbers
double buff[MAX]; // [edit1] double pseudo random numbers
MAX should be big enough, 1024*128 for example.
The type can be anything (float, int, DWORD, ...).
fill buffer with numbers
You have a range of numbers x in <x0, x1> and a probability function probability(x) defined by your probability distribution, so do this:
for (i = 0, x = x0; x <= x1; x += stepx)
    for (j = 0, n = probability(x) * MAX, q = 0.1 * stepx / n; j < n; j++, i++) // [edit1] unique pseudo-random numbers
        buff[i] = x + (double(i) * q); // [edit1] ...
stepx is your accuracy for items (for integral types, = 1). Now the buff[] array has the distribution you need, but it is not pseudo-random. You should also add a check that i does not reach MAX, to avoid array overruns; note that at the end the real size of buff[] is i (it can be less than MAX due to rounding).
shuffle buff[]
Do just a few loops of swapping buff[i] and buff[j], where i is the loop variable and j is pseudo-random in <0, MAX).
write your pseudo-random function
It just returns numbers from the buffer: the first call returns buff[0], the second buff[1], and so on. For standard generators, when you hit the end of buff[] you shuffle buff[] again and start from buff[0]. But as you need unique numbers, you must not reach the end of the buffer, so set MAX big enough for your task, otherwise uniqueness will not be assured.
[Notes]
MAX should be big enough to store the whole distribution you want. If it is not big enough, items with low probability can be missing completely.
[edit1] - tweaked answer a little to match the question needs (pointed by meriton thanks)
PS: the complexity of initialization is O(N), and getting a number is O(1).
You could implement your own random number generator (using a Monte Carlo method, or any good uniform generator like the Mersenne Twister) based on the inversion method (here).
For example, the exponential law: generate a uniform random number u in [0,1]; then your random variable with the exponential law would be ln(1-u)/(-lambda), lambda being the exponential law's parameter and ln the natural logarithm.
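In Java that example would look something like this (the value of lambda is an assumption for illustration):

double lambda = 2.0;                         // assumed rate parameter
double u = Math.random();                    // uniform in [0, 1)
double x = Math.log(1 - u) / (-lambda);      // exponentially distributed variate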
Hope it'll help ;).
I think you have two problems:
Your itemsDistribution doesn't know you need a set, so when the set you are building gets large, you will pick a lot of elements that are already in the set. (If you started with the set full and removed elements, you would run into the same problem for very small sets.) Is there a reason why you don't remove the element from the itemsDistribution after you pick it? Then you wouldn't pick the same element twice.
The choice of data structure for itemsDistribution looks suspicious to me. You want the getNextSample operation to be fast, but doesn't the map from values to probabilities force you to iterate through large parts of the map for each getNextSample? I'm no good at statistics, but couldn't you represent itemsDistribution the other way around: as a map from probability (or rather, from the sum of all smaller probabilities plus the element's own probability) to the element?
Your performance depends on how your getNextSample function works. If you have to iterate over all probabilities when you pick the next item, it might be slow.
A good way to pick several unique random items from a list is to first shuffle the list and then pop items off it. You can shuffle the list once with the given distribution; from then on, picking your m items is just popping off the list.
Here's an implementation of a probabilistic shuffle:
List<Item> prob_shuffle(Map<Item, int> dist)
{
    int n = dist.length;
    List<Item> a = dist.keys();
    int psum = 0;
    int i, j;

    for (i in dist) psum += dist[i];
    for (i = 0; i < n; i++) {
        int ip = rand(psum); // 0 <= ip < psum
        int jp = 0;
        for (j = i; j < n; j++) {
            jp += dist[a[j]];
            if (ip < jp) break;
        }
        psum -= dist[a[j]];
        Item tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
    return a;
}
This is not Java, but pseudocode after an implementation in C, so please take it with a grain of salt. The idea is to append items to the shuffled area by continuously picking items from the unshuffled area.
Here, I used integer probabilities. (The probabilities don't have to add up to a special value; it's just "bigger is better".) You can use floating-point numbers, but because of inaccuracies you might end up going beyond the array when picking an item; you should use item n - 1 then. If you add that safety net, you could even have items with zero probability that always get picked last.
There might be a method to speed up the picking loop, but I don't really see how. The swapping renders any precalculations useless.
Accumulate your probabilities in a table
          Probability
Item      Actual    Accumulated
Item1     0.10      0.10
Item2     0.30      0.40
Item3     0.15      0.55
Item4     0.20      0.75
Item5     0.25      1.00
Make a random number between 0.0 and 1.0 and do a binary search for the first item with a sum that is greater than your generated number. This item would have been chosen with the desired probability.
Ebbe's method is called rejection sampling.
I sometimes use a simple method, using an inverse cumulative distribution function, which is a function that maps a number X between 0 and 1 onto the Y axis.
Then you just generate a uniformly distributed random number between 0 and 1, and apply the function to it.
That function is also called the "quantile function".
For example, suppose you want to generate a normally distributed random number.
Its cumulative distribution function is called Phi.
The inverse of that is called probit.
There are many ways to generate normal variates, and this is just one example.
You can easily construct an approximate cumulative distribution function for any univariate distribution you like, in the form of a table.
Then you can just invert it by table-lookup and interpolation.
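A small Java sketch of that table-based inversion (xs and cdf are my names: sample points of the support and the cumulative probabilities at those points, assumed strictly increasing with cdf[cdf.length - 1] == 1.0):

static double sampleFromTable(double[] xs, double[] cdf) {
    double u = Math.random();
    int i = 0;
    while (i < cdf.length - 1 && cdf[i] < u) i++;    // first bin with cdf[i] >= u
    if (i == 0) return xs[0];
    double t = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1]);
    return xs[i - 1] + t * (xs[i] - xs[i - 1]);      // linear interpolation
}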
Possible Duplicate:
In-place transposition of a matrix
I recently attended a technical written interview and came across the following question.
I have an array, say
testArray = {a1,a2,a3,...an,b1,b2,b3,....bn,c1,c2,c3,.....,cn}
I need to sort this array as
testArray = {a1,b1,c1,a2,b2,c2,a3,b3,c3,.....,an,bn,cn}
Constraint is I should not use extra memory, should not use any inbuilt function.
Should write complete code, it can be in any language and can also use any data structure.
e.g.:
Input: {1,2,3,4,5,6,7,8,9}, n = 3
Output: {1,4,7,2,5,8,3,6,9}
I could not get any solution within the constraint, can anyone provide solution or suggestion?
This is just a matrix transpose operation. And there is even a problem and solution for in-place matrix transposition on Wikipedia.
No extra space is impossible, since you need to at least go through the array. O(1) additional memory is possible, with heavy penalty on the time complexity.
The solution is built on the follow-the-cycle algorithm from the Wikipedia page: for each cell, we find the cell with the smallest index in its cycle. If that smallest index is greater than or equal to (>=) the index of the current cell, we perform the chain of swaps; otherwise we skip the cell, since it has already been swapped into place. The (loosely analyzed) upper bound on time complexity can go as high as O((MN)^2) (we go through M * N cells, and a cycle can only be as long as the total number of cells).
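A hedged Java sketch of that follow-the-cycle idea, for an M x N matrix flattened row-major into a[] (the standard transpose permutation sends index i to (i * M) mod (M * N - 1); the first and last cells are fixed points, and the names are mine):

static void transposeInPlace(int[] a, int M, int N) {
    int size = M * N - 1;
    for (int start = 1; start < size; start++) {
        int next = start, smallest = start;
        do {                                         // find the cycle's smallest index
            next = (int) ((long) next * M % size);
            if (next < smallest) smallest = next;
        } while (next != start);
        if (smallest < start) continue;              // cycle already handled
        int carried = a[start];                      // rotate the cycle once
        next = start;
        do {
            next = (int) ((long) next * M % size);
            int tmp = a[next];
            a[next] = carried;
            carried = tmp;
        } while (next != start);
    }
}

With the question's example, transposeInPlace on {1,2,3,4,5,6,7,8,9} with M = 3, N = 3 yields {1,4,7,2,5,8,3,6,9}.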
Impossibility
It is impossible to implement this without any extra use of memory for arbitrary lengths, because you need at least an iterator to traverse the list, and that takes up space.
Finding the right indices to swap
For a fixed length of the array and fixed n, the algorithm you are looking for is a matrix transpose algorithm: http://en.wikipedia.org/wiki/Transpose
Basically, you have to swap the m-th element of the n-th component with the n-th element of the m-th component, swapping every element exactly once. This can be done in a double loop.
m = length(array) / n;
// note: this simple pairwise swap only transposes correctly when the
// matrix is square (m == n); the j > i guard avoids swapping pairs back
for (i = 0; i < m; i++)
    for (j = i + 1; j < n; j++)
    {
        index_1 = i * n + j;
        index_2 = j * n + i;
        swap(index_1, index_2);
    }
Note: For fixed m and n this loop can be completely unrolled and therefore m, i, j can be replaced by a constant.
Swapping without memory consumption
In order to swap every element without using extra space, you can use the XOR swap algorithm, as pointed out in the comments:
X := X XOR Y
Y := Y XOR X
X := X XOR Y
The simplest way to swap two numbers (a and b) without using a temporary variable is like this:
b = b + a;
a = b - a;
b = b - a;
If you write that in a function, then you're part of the way there. How you keep track of which variable to swap within the arrays without using a temporary variable eludes me right now.
Bear in mind voters: he doesn't actually need to sort the array, just swap the right values.
Edit: this will work with large values in Java (and in C/C++ unless you turn on some very aggressive compiler optimisations - the behaviour is undefined but defaults to sane). The values will just wrap around.
Second edit: some (rather untested) code to flip the array around, with, I think, 4 integers over the memory limit. While it's technically massively un-threadsafe, it would be parallelisable, just because you access each array location at most once:
static int[] a = {1,  2,  3,  4,
                  5,  6,  7,  8,
                  9,  10, 11, 12,
                  13, 14, 15, 16};
static int n = 4;

public static void main(String[] args)
{
    for (int i = 0; i < a.length / n; i++)  // 1 integer
        for (int j = 0; j < n; j++)         // 1 integer
            if (j > i)
                swap(i * n + j, j * n + i);
}

static void swap(int aPos, int bPos) // 2 integers
{
    if (a[aPos] != a[bPos])
    {
        a[bPos] = a[aPos] + a[bPos];
        a[aPos] = a[bPos] - a[aPos];
        a[bPos] = a[bPos] - a[aPos];
    }
}
Apologies if this misunderstands the question; I read it carefully and couldn't work out what was needed other than this.
Take a look at the Quicksort algorithm.
For more information about available algorithms, go to the Sorting algorithm page.