I want to solve a 3-dimensional knapsack problem.
I have a number of boxes, each with its own width, height, length and value. Given a specified space, I want to place boxes into it so that I get the optimal profit. I would like to do it using brute force.
I'm programming in Java.
I tried to do it with recursion, so:
public void solveBruteforce(double freeX, double freeY, double freeZ) {
    for (int i = 0; i < numOfBoxes; i++) {
        for (int j = 0; j < BoxObject.numOfVariations; j++) {
            if (possible to place box) {
                place(box);
                add(value);
                solveBruteforce(newX, newY, newZ);
            }
        }
    }
    remove(box);
    remove(value);
}
But I run into the problem that each recursive call has a different free x, y and z.
Could someone help me find another way to do it?
First thing is, use an octree to keep track of where things are in the space. An occupancy octree is a 3D tree with out-degree 8 and an occupancy flag at every node, dividing your space into regions that are efficient to search over. This is useful if you want some kind of heuristic search to place the boxes, and even if you are trying all possibilities: it can short-circuit the forbidden (crowded) placements.
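As a rough sketch of what that occupancy structure could look like (all names here are made up for illustration, not taken from any library):

```java
// Minimal occupancy-octree sketch (hypothetical names, not from the question).
// Each node covers an axis-aligned cube and splits into 8 children on demand.
class OctreeNode {
    final double x, y, z, size;   // min corner and edge length of this cube
    boolean occupied;             // true if anything inside this cube is filled
    OctreeNode[] children;        // null until subdivided

    OctreeNode(double x, double y, double z, double size) {
        this.x = x; this.y = y; this.z = z; this.size = size;
    }

    boolean contains(double px, double py, double pz) {
        return px >= x && px < x + size
            && py >= y && py < y + size
            && pz >= z && pz < z + size;
    }

    // Mark a point occupied, subdividing down to a minimum cell size.
    void markOccupied(double px, double py, double pz, double minSize) {
        occupied = true;
        if (size <= minSize) return;
        if (children == null) subdivide();
        for (OctreeNode c : children) {
            if (c.contains(px, py, pz)) {
                c.markOccupied(px, py, pz, minSize);
                return;
            }
        }
    }

    private void subdivide() {
        children = new OctreeNode[8];
        double h = size / 2;
        for (int i = 0; i < 8; i++) {
            // bits of i select which half of each axis the child occupies
            children[i] = new OctreeNode(
                x + ((i & 1) != 0 ? h : 0),
                y + ((i & 2) != 0 ? h : 0),
                z + ((i & 4) != 0 ? h : 0), h);
        }
    }
}
```

A placement test would then descend only into subtrees whose occupancy flag allows the candidate box, skipping crowded regions wholesale.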
Brute force will take a long time. But if that's what you want you need to define an ordering for trying out permutations of placements.
Since you will need many iterations, recursion is not so great either: deep recursion risks a stack overflow.
A first-draft alternative is a greedy algorithm: take the box that maximizes your profit (say, the most valuable), place it, then take the next best box, find the best fit for it, and so on.
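A minimal sketch of that greedy idea, under the crude simplifying assumption that the container is just a volume budget (real packing must also respect geometry; the Box class and all names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical box type: dimensions plus a profit value.
class Box {
    final double w, h, l;
    final int value;
    Box(double w, double h, double l, int value) {
        this.w = w; this.h = h; this.l = l; this.value = value;
    }
    double volume() { return w * h * l; }
}

class GreedyPacker {
    // Sort by value descending, take each box whose volume still fits.
    // Treating the space as a pure volume budget over-approximates what a
    // real geometric placement test would allow.
    static int greedyByValue(List<Box> boxes, double capacity) {
        List<Box> sorted = new ArrayList<>(boxes);
        sorted.sort(Comparator.comparingInt((Box b) -> b.value).reversed());
        int profit = 0;
        double used = 0;
        for (Box b : sorted) {
            if (used + b.volume() <= capacity) {
                used += b.volume();
                profit += b.value;
            }
        }
        return profit;
    }
}
```

Greedy gives no optimality guarantee, but it produces a quick lower bound you can use to prune the brute-force search.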
But, say you wanted to try all possible combinations:
def maximize_profit(boxes, space):
    max_profit = 0
    best_fits = list()
    while Arranger.hasNext():
        a_fit, a_profit = Arranger.next(boxes, space)
        if a_profit == max_profit:
            best_fits.append(a_fit)
        elif a_profit > max_profit:
            max_profit = a_profit
            best_fits = [a_fit]
    return best_fits, max_profit
For ideas on how to define the Arranger, think about choosing #{box} slots from #{space} possibilities, respecting arrangements that are identical w.r.t. symmetry. Alternately maybe a "flood fill" method will give you ideas.
Related
Situation: at the end of procedurally generating my game's world, I am left with a 2048^2 (~4.2 million) size Stack of tiles. I then need to calculate where in my handler's list of stacks they need to go. Here is my method:
public static final void addTile(Tile t) {
    for (int i = 0; i < sections.length; i++) {
        for (int j = 0; j < sections[i].length; j++) {
            if (sections[i][j].contains(t.x, t.y)) { // determine which list according to tile's pos
                world.get(i).get(j).get(TILE_LIST).push(t);
                return;
            }
        }
    }
}
There is a Rectangle[][] that corresponds to each spot in the 'world' ArrayList. As you can see, this is an O(n^2) loop that needs to be executed 4.2 million times. Even with 4 threads running concurrently, processing all the tiles takes ~20 seconds.
This isn't a completely unviable processing time, but I think there must be a better algorithm. Any suggestions?
An R-tree, as mentioned by MBo, is a good spatial index for arbitrary point/poly searching; however, since we are looking for axis-aligned rectangles, you would likely be better off using a quad-tree (or an octree in 3 dimensions).
https://gamedev.stackexchange.com/questions/63536/how-do-shapes-rectangles-work-in-quad-trees
You certainly need a good data structure for fast searching of rectangles that contain a given point.
An R-tree is designed to answer such queries very quickly.
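A minimal quad-tree sketch for point-in-rectangle queries (the Rect type, the bucket size of 4, and the rule of keeping boundary-straddling rectangles at interior nodes are all assumptions made for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical axis-aligned rectangle with integer pixel coordinates.
class Rect {
    final int x, y, w, h;
    Rect(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
    boolean contains(int px, int py) {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
}

class QuadTree {
    final Rect bounds;
    final List<Rect> items = new ArrayList<>();
    QuadTree[] children;                  // null until this node splits

    QuadTree(Rect bounds) { this.bounds = bounds; }

    void insert(Rect r) {
        if (children == null && items.size() < 4) { items.add(r); return; }
        if (children == null) split();
        for (QuadTree c : children) {
            // push down only if the child fully contains the rectangle
            if (c.bounds.contains(r.x, r.y)
                    && c.bounds.contains(r.x + r.w - 1, r.y + r.h - 1)) {
                c.insert(r);
                return;
            }
        }
        items.add(r);                     // straddles a boundary: keep it here
    }

    // Find any stored rectangle containing (px, py), or null.
    Rect query(int px, int py) {
        for (Rect r : items) {
            if (r.contains(px, py)) return r;
        }
        if (children != null) {
            for (QuadTree c : children) {
                if (c.bounds.contains(px, py)) return c.query(px, py);
            }
        }
        return null;
    }

    private void split() {
        int hw = bounds.w / 2, hh = bounds.h / 2;
        children = new QuadTree[] {
            new QuadTree(new Rect(bounds.x, bounds.y, hw, hh)),
            new QuadTree(new Rect(bounds.x + hw, bounds.y, bounds.w - hw, hh)),
            new QuadTree(new Rect(bounds.x, bounds.y + hh, hw, bounds.h - hh)),
            new QuadTree(new Rect(bounds.x + hw, bounds.y + hh, bounds.w - hw, bounds.h - hh))
        };
        List<Rect> old = new ArrayList<>(items);
        items.clear();
        for (Rect r : old) insert(r);     // redistribute buffered items
    }
}
```

Each query then descends only into the one quadrant containing the point, instead of scanning every section rectangle.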
I'm writing a program which calculates C(n, k) combinations with a big difference between n and k (e.g. n=39, k=13 -> 8122425444 combinations). Also, I need to make some calculations with every combination in real time. The question is: how can I divide my algorithm across several threads to make it faster?
public void getCombinations(List<Item> items) {
    int n = items.size();
    int k = 13;
    int[] res = new int[k];
    for (int i = 1; i <= k; i++) {
        res[i - 1] = i;
    }
    int p = k;
    while (p >= 1) {
        // here I make a Set from items in List by ids in res[]
        Set<Item> cards = convert(res, items);
        // some calculations
        if (res[k - 1] == n) {
            p--;
        } else {
            p = k;
        }
        if (p >= 1) {
            for (int i = k; i >= p; i--) {
                res[i - 1] = res[p - 1] + i - p + 1;
            }
        }
    }
}
private Set<Item> convert(int[] res, List<Item> items) {
    Set<Item> set = new TreeSet<Item>();
    for (int i : res) {
        set.add(items.get(i - 1));
    }
    return set;
}
If you're using JDK 7 then you could use fork/join to divide and conquer this algorithm.
If you want to keep things simple then I would just get a thread to compute a subset of the input and use a CountDownLatch until all threads have completed. The number of threads depends on your CPU.
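A sketch of that pattern, with workers that just sum slices of an array standing in for the real per-combination calculation (all names here are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

class LatchExample {
    // Split the input into one contiguous slice per thread; each worker
    // processes its slice, then counts down. The caller blocks on await().
    static long parallelSum(int[] data, int threads) {
        CountDownLatch done = new CountDownLatch(threads);
        AtomicLong total = new AtomicLong();
        int chunk = (data.length + threads - 1) / threads;
        for (int t = 0; t < threads; t++) {
            final int from = t * chunk;
            final int to = Math.min(data.length, from + chunk);
            new Thread(() -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                total.addAndGet(sum);
                done.countDown();        // signal this worker is finished
            }).start();
        }
        try {
            done.await();                // block until every worker counts down
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return total.get();
    }
}
```

For the combinations problem the "slice" would be a range of starting elements or ranks rather than array indices, but the latch-based coordination is the same.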
You could also use Hadoop's map/reduce if you expect the input to grow, so you can compute across several computers. You will need to restate the problem as a map/reduce operation - look at the examples.
The simplest way to split combinations is to have combinations of combinations. ;)
For each possible "first" value you can create a new task in a thread pool. Or you can create each possible pair of "first" and "second" values as a new task, or each triple, etc. You only need to create as many tasks as you have CPUs, so you don't need to go overboard.
e.g. say you want to create all possible selections of 13 from 39 items.
for (Item item : items) {
    List<Item> items2 = new ArrayList<Item>(items);
    items2.remove(item);
    // create a task which considers all selections of 12 from 38 (plus item)
    createCombinationsOf(item, items2, 12);
}
This creates roughly equal work for up to 39 CPUs, which may be more than enough. If you want more, create pairs (39*38/2 of them).
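One way to sketch that splitting with a standard ExecutorService, here just counting the combinations each task enumerates rather than doing the question's real per-combination calculation (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class SplitCombinations {
    // Fix the first element of the combination; each task enumerates the
    // remaining C(n - first - 1, k - 1) selections independently.
    static long countCombinations(int n, int k) {
        ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
        List<Future<Long>> tasks = new ArrayList<>();
        for (int i = 0; i <= n - k; i++) {
            final int first = i;
            tasks.add(pool.submit(() -> countFrom(first + 1, n, k - 1)));
        }
        long total = 0;
        try {
            for (Future<Long> f : tasks) total += f.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        pool.shutdown();
        return total;
    }

    // Recursively enumerate (and count) selections of k elements from [from, n).
    private static long countFrom(int from, int n, int k) {
        if (k == 0) return 1;             // one complete combination reached
        long count = 0;
        for (int i = from; i <= n - k; i++) {
            count += countFrom(i + 1, n, k);
        }
        return count;
    }
}
```

In the real program the `k == 0` base case would run the per-combination calculation instead of returning 1; the task boundaries stay the same.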
Your question is quite vague.
What problem are you having right now: implementing the divide-and-conquer part of the algorithm (threading, joining, etc.), or figuring out how to divide the problem into its sub-parts?
The latter should be your first step. Do you know how to break your original problem into several smaller problems (that can then be dispatched to Executor threads or a similar mechanism to be processed), and how to join the results?
I have been working on some code that works with combinatoric sets of this size. Here are a few suggestions for getting output in a reasonable amount of time.
Instead of building a list of combinations and then processing them, write your program to take a rank for a combination. You can safely assign signed 64 bit long values to each combination for all k values up to n = 66. This will let you easily break up the number system and assign it to different threads/hardware.
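A sketch of that ranking idea using the combinatorial number system, so each thread can be handed a contiguous range of ranks and expand them on its own (names are illustrative; this binomial routine is exact for sizes like the question's n=39, k=13):

```java
class CombinationRank {
    // Exact binomial coefficient; the running product stays integral at
    // every step. Safe for the question's n=39, k=13 sizes.
    static long binomial(int n, int k) {
        if (k < 0 || k > n) return 0;
        long r = 1;
        for (int i = 1; i <= k; i++) {
            r = r * (n - k + i) / i;
        }
        return r;
    }

    // Map a rank in [0, C(n, k)) to the k sorted element indices of the
    // combination with that rank in lexicographic order.
    static int[] unrank(long rank, int n, int k) {
        int[] combo = new int[k];
        int x = 0;                        // smallest candidate element
        for (int i = 0; i < k; i++) {
            // skip whole blocks of combinations that start with x
            while (binomial(n - x - 1, k - i - 1) <= rank) {
                rank -= binomial(n - x - 1, k - i - 1);
                x++;
            }
            combo[i] = x++;
        }
        return combo;
    }
}
```

With this, thread t of T simply processes ranks [t * total / T, (t + 1) * total / T) with no shared state at all.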
If your computation is simple, you should look at using OpenCL or CUDA to do the work. There are a couple of options for doing this. Rootbeer and Aparapi are options for staying in Java and letting a library take care of the GPU details. JavaCL is a nice binding to OpenCL, if you do not mind writing kernels directly in C99. AWS has GPU instances for doing this type of work.
If you are going to collect a result for each combination, you are really going to need to consider storage space. For your example of C(39,13), you would need a little under 61 Gigs just to store a long for each combination. You need a good strategy for dealing with datasets of this size.
If you are trying to roll up this data into a simple result for the entire set of combinations, then follow #algolicious' suggestion and look at map/reduce to solve this problem.
If you really need answers for each combination, but a little error is OK, you may want to look at using AI algorithms or a linear solver to compress the data. Be aware that these techniques will only work if there is something to learn in the resulting data.
If some error will not work, but you need every answer, you may want to just consider recomputing it each time you need it, based on the element's rank.
This is an odd question. I have an integer array in Java, where each int represents a color. They will either be 0xFFFFFFFF or 0x0. What would be the FASTEST way to find if this array contains ANY values equal to 0xFFFFFFFF?
This is my current code:
int length = w * h;
for (int i = 0; i < length; i++) {
    if (pixels[i] == 0xFFFFFFFF) {
        return true;
    }
}
return false;
I have no clue if there is a faster way to do this or not. I imagine you vets could have a trick or two though.
EDIT: Seeing as it is just a dumb array of pixels from Bitmap.getPixels(), there's no way it would be sorted or transformed to another storage structure. Thanks for the input, everyone, it seems like looping through is the best way in this case.
No, there is no faster way unless the array of integers is already sorted, which I doubt given it's an array of colours.
To scan through an unsorted array takes linear time, O(n). That's what you do, and you exit the method as soon as a match is found, which is good too.
Without switching to some other data structure, no, there is no better way to find whether the array contains that value. You have to look at all the array elements to see if it's there, since if you don't check some particular location you might miss the one copy of that pixel color.
That said, there are alternative ways that you could solve this problem. Here are a few thoughts on how to speed this up:
If every value is guaranteed to be either white or black, you could store two extra boolean values alongside the array representing whether there are white or black pixels. That way, once you've run the scan once, you could just read the booleans back. You could also store a count of the number of white and black pixels along with the array, and then whenever you write a pixel update the count by decrementing the number of pixels of the original color and incrementing the number of pixels of the new color. This would then give you the ability to check if a pixel of a given color exists in O(1) by just seeing if the correct counter is nonzero.
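A minimal sketch of that counter idea (the wrapper class and its names are invented for illustration):

```java
// Keep a white-pixel count alongside the array so "any white pixel?"
// becomes an O(1) query; the count is maintained on every write.
class CountedImage {
    private final int[] pixels;
    private int whiteCount;

    CountedImage(int[] pixels) {
        this.pixels = pixels;
        for (int p : pixels) {
            if (p == 0xFFFFFFFF) whiteCount++;   // one-time O(n) scan
        }
    }

    void setPixel(int i, int color) {
        if (pixels[i] == 0xFFFFFFFF) whiteCount--;  // retire the old color
        if (color == 0xFFFFFFFF) whiteCount++;      // record the new one
        pixels[i] = color;
    }

    boolean hasWhite() { return whiteCount > 0; }
}
```

The trade-off is a tiny constant cost on every write in exchange for constant-time existence queries.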
Alternatively, if you happen to know something about the image (perhaps where the white and black pixels ought to be), you could consider doing the iteration in a different order. For example, if the pixels you're looking for tend to be clustered in the center of the image, rewriting the loop to check there first might be a good idea since if there are any pixels of that type you'll find them more rapidly. This still has the same worst-case behavior, but for "realistic" images might be much faster.
If you have multiple threads available and the array is really huge (millions of elements), you could consider having multiple threads each search a part of the array for the value. This would only be feasible if you had a reason to suspect that most of the image was not white.
Since in most realistic images you might assume that the image is a mixture of colors and you're just looking for something of one color, then you might want to consider storing the image as a sparse array, where you store a list of the pixels that happen to be of one color (say, white) and then assume everything else is black. If you expect most images to be a solid color with a few outliers, this might be a very good representation. Additionally, it would give you constant-time lookup of whether any black or white pixels exist - just check if the list of set pixels is empty or consists of the entire image.
If the order doesn't matter, you could also store the elements in some container like a hash table, which could give you O(1) lookup of whether or not the element is there. You could also sort the array and then just check the endpoints.
As a micro-optimization, you could consider appending two sentinel values to the image - one white pixel and one black pixel - so that you can always iterate until you find the value. This eliminates one comparison from the loop (the bounds check) and is recommended by some authors for very large arrays.
If you assume that most images are a nice mixture of white and black and are okay with getting the wrong answer a small fraction of the time, you could consider probing a few random locations and checking if any of them are the right color. If so, then clearly a pixel of the correct color exists and you're done. Otherwise, run the full linear scan. For images that are a nice blend of colors, this could save you an enormous amount of time, since probing a small number of locations (say, O(log n) of them) lets you skip the full linear scan in most cases.
If every value is either white or black, you could also consider storing the image in a bitvector. This compresses the array by the pixel width (32x when each 32-bit int pixel becomes a single bit). You could then iterate across the compressed array and check whether any word is nonzero to see if any of the pixels are white. This also saves a huge amount of space, and I'd actually suggest doing this since it makes a lot of other operations easy as well.
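For example, java.util.BitSet already gives you that packed representation (the helper names here are invented for illustration):

```java
import java.util.BitSet;

// Pack the white/black image into one bit per pixel and answer
// "any white pixel?" by checking whether any bit is set.
class PixelBits {
    static BitSet pack(int[] pixels) {
        BitSet bits = new BitSet(pixels.length);
        for (int i = 0; i < pixels.length; i++) {
            if (pixels[i] == 0xFFFFFFFF) bits.set(i);   // white -> 1, black -> 0
        }
        return bits;
    }

    static boolean hasWhite(BitSet bits) {
        return !bits.isEmpty();   // BitSet scans one machine word at a time
    }
}
```

If the image is maintained in this form from the start, the query costs one word-wise scan rather than a per-pixel loop.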
Hope this helps!
It doesn't matter at the bytecode level, but at the native-code level,
if (pixels[i] != 0)
is likely to be a bit faster, given that you're sure only these two values can appear.
If your array is really big, it might be worth it to divide and conquer. That is, assign segments of the data to multiple threads (probably t threads where t is the number of available processor cores). With a sufficiently large data set, the parallelism may amortize the thread startup cost.
Here is a simple optimization that helps on large arrays: put the requested value at the end of the array, eliminating the array bounds check. (templatetypedef has already mentioned this optimization.) On large arrays this saves about 25% of the loop's running time:
static int indexOfWhite(int[] a, int n) {
    int tmp = a[n - 1];
    a[n - 1] = 0xFFFFFFFF;      // sentinel: the loop is now guaranteed to stop
    int pos = 0;
    while (a[pos] != 0xFFFFFFFF) {
        pos++;
    }
    a[n - 1] = tmp;             // restore the original last element
    if (a[pos] == 0xFFFFFFFF) {
        return pos;             // genuine match (possibly the last element itself)
    }
    return -1;
}
There is a C# implementation with a running-time analysis at this address.
The only scope for improving the performance is the comparison. I feel a bitwise operation could be a bit faster than the conditional operator.
You could do this
int length = w * h;
for (int i = 0; i < length; i++) {
    // in Java an int is not a boolean, so the mask needs an explicit comparison
    if ((pixels[i] & 0xFFFFFFFF) != 0) {
        return true;
    }
}
Can't you check when you insert the color into the array? If so, you could store the index of the element that contains the 0xFFFFFFFF color. Since you want ANY entry with that value, this should do the trick :D
If not, your answer has complexity O(n), which is the best it can be, since the array isn't (and cannot be, as you say) ordered.
Using the built-in for-each loop is a tad faster than the indexed for, as it eliminates a bounds check:
for (int pix : pixels) {
    if (pix != 0) {
        return true;
    }
}
Arrays.asList(...).contains(...) (note this only works for an Integer[]; with a primitive int[], Arrays.asList produces a List<int[]> containing the array itself, not a List<Integer>).
I have a function named resize, which takes a source array and resizes it to a new width and height. The method I'm using is, I think, inefficient; I heard there's a better way to do it. Anyway, the code below works when scale is an int. However, there's a second function called half, which uses resize to shrink an image in half. So I made scale a double and used a typecast to convert it back to an int. This method is not working, and I don't know what the error is (the teacher uses his own grading and tests on these functions, and it's not passing). Can you spot the error, or is there a more efficient way to write a resize function?
public static int[][] resize(int[][] source, int newWidth, int newHeight) {
    int[][] newImage = new int[newWidth][newHeight];
    double scale = newWidth / (source.length);
    for (int i = 0; i < newWidth / scale; i++)
        for (int j = 0; j < newHeight / scale; j++)
            for (int s1 = 0; s1 < scale; s1++)
                for (int s2 = 0; s2 < scale; s2++)
                    newImage[(int) (i * scale + s1)][(int) (j * scale + s2)] = source[i][j];
    return newImage;
}
/**
* Half the size of the image. This method should be just one line! Just
* delegate the work to resize()!
*/
public static int[][] half(int[][] source) {
    int[][] newImage = new int[source.length / 2][source[0].length / 2];
    newImage = resize(source, source.length / 2, source[0].length / 2);
    return newImage;
}
So one scheme for changing the size of an image is to resample it (technically this is really the only way, every variation is really just a different kind of resampling function).
Cutting an image in half is super easy: you read every other pixel in each direction, and load that pixel into the new half-sized array. The hard part is making sure your bookkeeping is sound.
static int[][] halfImage(int[][] orig) {
    int[][] hi = new int[orig.length / 2][orig[0].length / 2];
    // bound the loops by the half-image so odd-sized inputs don't overrun it
    for (int r = 0, newr = 0; newr < hi.length; r += 2, newr++) {
        for (int c = 0, newc = 0; newc < hi[0].length; c += 2, newc++) {
            hi[newr][newc] = orig[r][c];
        }
    }
    return hi;
}
In the code above I'm indexing into the original image reading every other pixel in every other row starting at the 0th row and 0th column (assuming images are row major, here). Thus, r tells us which row in the original image we're looking at, and c tells us which column in the original image we're looking at. orig[r][c] gives us the "current" pixel.
Similarly, newr and newc index into the "half-image" matrix designated hi. For each increment in newr or newc we increment r and c by 2, respectively. By doing this, we skip every other pixel as we iterate through the image.
Writing a generalized resize routine that doesn't operate on nice fractional quantities (like 1/2, 1/4, 1/8, etc.) is really pretty hard. You'd need to define a way to determine the value of a sub-pixel -- a point between pixels -- for more complicated factors, like 0.13243, for example. This is, of course, easy to do, and you can develop a very naive linear interpolation principle, where when you need the value between two pixels you simply take the surrounding pixels, construct a line between their values, then read the sub-pixel point from the line. More complicated versions of interpolation might be a sinc based interpolation...or one of many others in widely published literature.
Blowing up the size of the image involves something a little different than we've done here (and if you do in fact have to write a generalized resize function you might consider splitting your function to handle upscaling and downscaling differently). You need to somehow create more values than you have originally -- those interpolation functions work for that too. A trivial method might simply be to repeat a value between points until you have enough, and slight variations on this as well, where you might take so many values from the left and so many from the right for a particular position.
What I'd encourage you to think about -- and since this is homework I'll stay away from the implementation -- is treating the scaling factor as something that causes you to make observations on one image, and writes on the new image. When the scaling factor is less than 1 you generally sample from the original image to populate the new image and ignore some of the original image's pixels. When the scaling factor is greater than 1, you generally write more often to the new image and might need to read the same value several times from the old image. (I'm doing a poor job highlighting the difference here, hopefully you see the dualism I'm getting at.)
What you have is pretty understandable. The four nested loops look like an O(n^4) algorithm, but the inner two loops each run only scale times, so the total work is actually proportional to the number of output pixels. Still, ouchies!
You can improve it slightly by pushing the i*scale and j*scale out of the inner two loops - they are invariant where they are now. The optimizer might be doing it for you, however. There are also some other similar optimizations.
Regarding the error, run it twice: once with an input array that has an even length (6x6) and once with an odd length (7x7). And 6x7 and 7x6 while you're at it.
Based on your other question, it seems like you may be having trouble with mixing of types - with numeric conversions. One way to do this, which can make your code more debuggable and more readable to others not familiar with the problem space, would be to split the problematic line into multiple lines. Each minor operation would be one line, until you reach the final value. For example,
newImage[(int)(i*scale+s1)][(int)(j*scale+s2)] =source[i][j];
would become
int x = (int) (i * scale); // i * scale is a double, so an explicit cast is needed
x += s1;
int y = (int) (j * scale);
y += s2;
newImage[x][y] = source[i][j];
Now, you can run the code in a debugger and look at the values of each item after each operation is performed. When a value doesn't match what you think it should be, look at it and figure out why.
Now, back to the suspected problem: I expect that you need to use doubles somewhere, not ints - in your other question you talked about scaling factors. Is the factor less than 1? If so, when it's converted to an int, it'll be 0, and you'll get the wrong result.
Say I have p nodes on an n by m pixel 2D surface. I want the nodes to be attracted to each other such that the further they are apart, the stronger the attraction. But if the distance between two nodes, say d(A,B), is less than some threshold, say k, then they start to repel. Could anyone get me started on some code for how to update the coordinates of the nodes over time?
I have something a little like the code below, which is starting to do the attraction, but I'm looking for some advice. (P.S. I cannot use an existing library for this.)
public class node {
    float posX;
    float posY;
}

public class mySimulator {
    ArrayList<node> myNodes = new ArrayList<node>();

    // Imagine I add a load of nodes to myNodes
    // myNodes.add(.....

    // Now imagine this is the updating routine that is called at every fixed time increment
    public void updateLocations() {
        for (int i = 0; i < myNodes.size(); i++) {
            for (int j = 0; j < myNodes.size(); j++) {
                myNodes.get(i).posX += SOME_CONSTANT * (myNodes.get(j).posX - myNodes.get(i).posX);
                myNodes.get(i).posY += SOME_CONSTANT * (myNodes.get(j).posY - myNodes.get(i).posY);
            }
        }
    }
}
This kinetic model of elastic collisions is completely unrelated to magnetism, but the design might give you some ideas on modeling an ensemble of interacting particles.
Say I have p nodes on an n by m pixel 2D surface. I want the nodes to be attracted to each other such that the further they are apart, the stronger the attraction. But if the distance between two nodes, say d(A,B), is less than some threshold, say k, then they start to repel.
You realize, of course, that this is not how the physics of magnetism work?
Could anyone get me started on some code for how to update the coordinates of the nodes over time?
Nobody will be able to give you code to do this easily, because it's actually a difficult problem.
You can numerically integrate the ordinary differential equations for each particle over time. Given initial conditions for the position, velocity, and acceleration vectors in 2D, you take a time step, integrate the equations to get the values at the end of the step, update the values by adding the increments, and then do it again.
It requires some knowledge of 2D vectors, numerical integration, ordinary differential equations, linear algebra, and physics. Do you know anything about those?
Even if you "make up" your own physical laws governing the interactions between your particles, you'll still have to integrate that set of equations.
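As a starting point simpler than full Runge-Kutta, here is a sketch of one semi-implicit Euler time step under a made-up force law matching your description (attract beyond radius k, repel inside it); the constants and the force law itself are purely illustrative:

```java
// Hypothetical particle with position and velocity in 2D.
class Particle {
    double x, y, vx, vy;
    Particle(double x, double y) { this.x = x; this.y = y; }
}

class Integrator {
    // One semi-implicit Euler step: accumulate pairwise forces, then
    // update velocities first and positions from the new velocities.
    static void step(Particle[] p, double k, double dt) {
        double[] fx = new double[p.length], fy = new double[p.length];
        for (int i = 0; i < p.length; i++) {
            for (int j = 0; j < p.length; j++) {
                if (i == j) continue;
                double dx = p[j].x - p[i].x, dy = p[j].y - p[i].y;
                double d = Math.sqrt(dx * dx + dy * dy);
                if (d < 1e-9) continue;                 // avoid division by zero
                // made-up law: attract when farther than k, repel when closer
                double strength = (d > k) ? 0.01 * (d - k) : -0.05 * (k - d);
                fx[i] += strength * dx / d;
                fy[i] += strength * dy / d;
            }
        }
        for (int i = 0; i < p.length; i++) {
            p[i].vx += fx[i] * dt;                      // velocity first...
            p[i].vy += fy[i] * dt;
            p[i].x += p[i].vx * dt;                     // ...then position
            p[i].y += p[i].vy * dt;
        }
    }
}
```

Semi-implicit Euler is less accurate than Runge-Kutta but noticeably more stable than plain explicit Euler for oscillatory systems like this, which is why it is a common first choice in simple simulations.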
I'd recommend looking at Runge-Kutta for systems of ODEs. "Numerical Recipes" has a nice chapter on it, even if you go elsewhere for the implementation.
"NR" is now in its third edition. It's a bit controversial, but the prose is very good.