I am using JUNG for a project, and when I display relatively large graphs (e.g. 1500 nodes) my PC cannot handle it: the graphs are rendered, but as soon as I try to navigate the graph the system becomes very slow. Any suggestions?
So, there are two areas in which JUNG visualization doesn't always scale very well right now:
iterative force-directed layouts
interaction: figuring out which node or edge (if any) is being referenced for hover and click events.
It sounds like it's the latter that you're running into right now.
Depending on your requirements, you have a couple of options:
(a) turn off mouse events, or at least hover events
(b) hack the visualization system so that lookups of event targets aren't O(m+n).
Simple solutions for (b) basically just partition the viewing area into smallish chunks and only send events to elements that are in the same chunk as the pointer. (Obviously, the smaller you make the chunks, the more memory is required.)
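To make the idea concrete, here is a minimal sketch of that kind of spatial grid, assuming you maintain it alongside the layout; the class and method names are invented for illustration and are not part of JUNG's API.

import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical spatial grid: buckets nodes by cell so that picking only has to
// inspect the handful of nodes near the pointer instead of all n of them.
class SpatialGrid<V> {
    private final double cellSize;
    private final Map<Long, List<V>> cells = new HashMap<>();

    SpatialGrid(double cellSize) {
        this.cellSize = cellSize;
    }

    private long key(double x, double y) {
        long cx = (long) Math.floor(x / cellSize);
        long cy = (long) Math.floor(y / cellSize);
        return (cx << 32) | (cy & 0xffffffffL);
    }

    // Populate once the layout has settled; rebuild if positions change.
    void insert(V node, Point2D position) {
        cells.computeIfAbsent(key(position.getX(), position.getY()),
                              k -> new ArrayList<>()).add(node);
    }

    // Only the cell under the pointer is searched; a fuller version would also
    // check neighbouring cells to catch nodes sitting near a cell boundary.
    List<V> candidatesNear(Point2D pointer) {
        return cells.getOrDefault(key(pointer.getX(), pointer.getY()), new ArrayList<>());
    }
}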
We've had plans to do (b) (and a design sketched out) for some time but have been working on other things. Anyone that wants to help with a more permanent solution, please contact me.
How much memory are you starting your VM with? Assuming you're working on Windows, looking at the Task Manager, does the VM hit the maximum amount of allocated memory and start using swap?
The problem probably lies with the calculation of your vertices' positions. The only layout that I've found fairly easy to calculate is the tree layout, and obviously that's not suitable for all data sets.
The solution is probably to write your own custom layout that does far fewer calculations than, say, an FRLayout.
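As a rough illustration of what a lower-cost layout can look like, here is a sketch that computes every vertex position once on a simple grid in a single pass, instead of running an iterative force-directed layout. The class name is made up; wiring the resulting map into JUNG (for example as the initializer of a StaticLayout, if I recall that API correctly) is left out.

import java.awt.geom.Point2D;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: assign each vertex a fixed grid position in one O(n) pass.
class GridPositions {
    static <V> Map<V, Point2D> layout(List<V> vertices, int columns, double spacing) {
        Map<V, Point2D> positions = new HashMap<>();
        for (int i = 0; i < vertices.size(); i++) {
            double x = (i % columns) * spacing;
            double y = (i / columns) * spacing;
            positions.put(vertices.get(i), new Point2D.Double(x, y));
        }
        return positions;
    }
}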
Obviously it takes a lot of memory to store an array holding the history of changes... that's how I had my application working, but it seems like there should be a smarter way to do this.
ArrayList<Photo> photoHistory = new ArrayList<>();
photoHistory.add(originalPhoto);
photoHistory.add(change1);
photoHistory.add(change2);
// bad implementation - lots of memory
Maybe store only an original and a current view model and keep a log of the methods/filters used? Then when a user hits 'undo' it would take the total number of changes made and run through all of them again minus one? This also seems incredibly inefficient.
I guess I'm just looking for advice on how to implement a general 'undo' function of a software application.
Here is a tip from how GIMP implements it:
GIMP's implementation of Undo is rather sophisticated. Many operations require very little Undo memory (e.g., changing visibility of a layer), so you can perform long sequences of them before they drop out of the Undo History. Some operations, such as changing layer visibility, are compressed, so that doing them several times in a row produces only a single point in the Undo History. However, there are other operations that may consume a lot of undo memory. Most filters are implemented by plug-ins, so the GIMP core has no efficient way of knowing what changed. As such, there is no way to implement Undo except by memorizing the entire contents of the affected layer before and after the operation. You might only be able to perform a few such operations before they drop out of the Undo History.
Source
So to do it as optimally as possible, you have to do different things depending on what action is being undone. Showing or hiding a layer can be represented in a negligible amount of space, but filtering the whole image might necessitate storing another copy of the whole image. However, if you only filter part of the image (or draw in a small section of it), perhaps you only need to store that piece of the image.
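One way to express that idea in code is to give each kind of action its own undo record, so that cheap actions store almost nothing and expensive ones store only the affected region. This is only a sketch of that pattern; Document, the record classes, and their methods are hypothetical names, not any particular library's API.

import java.util.ArrayDeque;
import java.util.Deque;

// Each undoable action records only what it needs to reverse itself.
interface UndoRecord {
    void undo(Document doc);
}

// Cheap action: toggling layer visibility stores just an index and a flag.
class VisibilityChange implements UndoRecord {
    final int layerIndex;
    final boolean wasVisible;
    VisibilityChange(int layerIndex, boolean wasVisible) {
        this.layerIndex = layerIndex;
        this.wasVisible = wasVisible;
    }
    public void undo(Document doc) { doc.setLayerVisible(layerIndex, wasVisible); }
}

// Expensive action: a filter stores a copy of only the region it touched.
class RegionSnapshot implements UndoRecord {
    final int x, y, width, height;
    final int[] pixelsBefore;
    RegionSnapshot(int x, int y, int width, int height, int[] pixelsBefore) {
        this.x = x; this.y = y; this.width = width; this.height = height;
        this.pixelsBefore = pixelsBefore;
    }
    public void undo(Document doc) { doc.writePixels(x, y, width, height, pixelsBefore); }
}

class UndoStack {
    private final Deque<UndoRecord> history = new ArrayDeque<>();
    void push(UndoRecord record) { history.push(record); }
    void undo(Document doc) { if (!history.isEmpty()) history.pop().undo(doc); }
}

// Stand-in for whatever image/model class the application actually uses.
interface Document {
    void setLayerVisible(int layerIndex, boolean visible);
    void writePixels(int x, int y, int width, int height, int[] pixels);
}

A bounded-memory version would also track the total bytes held by the stack and drop the oldest records once a budget is exceeded, which is essentially what the GIMP description above is getting at.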
To clarify, I know why my game is running slowly. I have a lot of different objects in the current area, and the game has to tick and render all of them. I just don't know how to fix the problem other than simply creating fewer objects.
The answer I am looking for is more of a concept of how I can go about fixing this problem rather than just making a bunch of code for me to paste into my game.
I am designing my games based off of tutorials by RealTutsGML. There were some issues I had to work around with his method of building games, but I figured them out.
So every tick in my game, I have to look through all of the objects that currently exist. The more objects that exist, the longer it takes to process all of them. I need to find a way to help free up memory if those objects are currently not in view, for example. I know games like Minecraft use chunks to free up unused memory. (Blocks outside of the view distance are not generated) What can I do to allow for an environment with many objects without causing so much lag? I want to be able to have a big level without having so much lag from all the objects that have to be ticked and rendered.
Another thing that I will clarify is that all of the objects loaded into the levels are held in a LinkedList so that I can easily create and destroy objects. Every tick, I run a for loop through those linked lists to process every object's behavior and how it is rendered.
[EDIT APRIL 28]
The objects in the game I was working on are organized in a very grid-like format. So that includes the tiles, the player, and all of the other game objects.
You haven't given too much information about your game (I'm not going to look through the tutorial either). You may want to give more background information, and maybe some code snippets.
I know one thing about your code with certainty: you are using linked lists. Linked lists, especially when you add and remove things from the middle, are slow. The reason for this is memory (or cache) locality. When people say computers are growing exponentially faster, they mean the processor is. Your data still has to be transported from its home in memory to a place where it can be used. When data is needed, it is transported by a bus, which also brings neighboring data. (Note that "bus" is actually the technical name for the component.) Linked lists, especially the way you're using them, scatter data in a way that destroys those neighborhoods, so the bus essentially becomes a "taxi", fetching data one item at a time. And the bus, according to the graph in the cache-locality link below, is a stunning 10x faster than the computers of the 1980s (remember, the graph has an exponential scale).
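If you do need to add and remove objects frequently, one common locality-friendly alternative is an ArrayList with "swap remove": removal copies the last element over the removed slot, which keeps the data contiguous and runs in O(1), at the cost of not preserving order (usually fine for a tick/render list). A minimal sketch with invented names:

import java.util.ArrayList;
import java.util.List;

// Contiguous object list with O(1) removal; element order is not preserved.
class GameObjectList<T> {
    private final List<T> objects = new ArrayList<>();

    void add(T obj) { objects.add(obj); }

    void removeAt(int index) {
        int last = objects.size() - 1;
        objects.set(index, objects.get(last)); // overwrite with the last element
        objects.remove(last);                  // drop the now-duplicate tail
    }

    List<T> all() { return objects; }
}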
Also, it seems to me that you probably don't need to tick EVERY object EVERY frame. Yes, some objects, like mobs, will need to tick every frame (if they are close enough to be active). But from what I assume your game looks like, you have each block of grass being its own object and ticking every frame. Stop watching the grass grow.
Minecraft, for example, will only tick sand blocks when a neighboring block changes (which is why sand generated in the air only falls when disturbed).
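Here is a minimal sketch of that "only tick what was disturbed" idea, assuming the game can call wake() whenever something changes next to an object; the class and method names are invented.

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.function.Consumer;

// Instead of ticking every object, keep a queue of objects that were
// "disturbed" (e.g. a neighbour changed) and only tick those this frame.
class TickScheduler<T> {
    private final Set<T> scheduled = new HashSet<>();
    private final Queue<T> pending = new ArrayDeque<>();

    // Call when something changes next to the object.
    void wake(T obj) {
        if (scheduled.add(obj)) {
            pending.add(obj);
        }
    }

    // Call once per frame; tickFn is whatever per-object update the game does.
    void runTicks(Consumer<T> tickFn) {
        int toProcess = pending.size(); // only what was queued before this frame
        for (int i = 0; i < toProcess; i++) {
            T obj = pending.poll();
            scheduled.remove(obj);
            tickFn.accept(obj);
        }
    }
}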
You may want to check these pages out:
Question about memory locality.
Question about linked lists and memory locality
Site explaining cache locality, and source of picture.
Question about slow graphics loop.
Code Review is a good place to get feedback on your code.
Game Development will give more game-based answers.
In an organized, grid-like environment like this, making chunks (which hold tiles/game objects) and mega-chunks (which hold chunks, since chunks alone were still running slow) seems like a clear solution. Like someone on here said, clearly not everything needs to be processed, or even exist, all at once.
Here is how I see chunks being useful.
I tried this, but didn't see much of a difference, so I am going to try making chunks to hold chunks and hopefully that will help.
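For what it's worth, a chunked world can be as small as a map from chunk coordinates to the objects inside each chunk, with the game loop only touching chunks near the player. This is just a sketch with made-up names and an arbitrary chunk size.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Objects live in chunks keyed by chunk coordinates; the game loop only
// ticks/renders objects from chunks within a small radius of the player.
class ChunkedWorld<T> {
    static final int CHUNK_SIZE = 16; // world units per chunk, arbitrary

    private final Map<Long, List<T>> chunks = new HashMap<>();

    private long key(int cx, int cy) { return ((long) cx << 32) | (cy & 0xffffffffL); }

    void add(T obj, int worldX, int worldY) {
        int cx = Math.floorDiv(worldX, CHUNK_SIZE);
        int cy = Math.floorDiv(worldY, CHUNK_SIZE);
        chunks.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(obj);
    }

    // Gather every object in chunks within `radius` chunks of the player.
    List<T> nearPlayer(int playerX, int playerY, int radius) {
        int pcx = Math.floorDiv(playerX, CHUNK_SIZE);
        int pcy = Math.floorDiv(playerY, CHUNK_SIZE);
        List<T> active = new ArrayList<>();
        for (int cx = pcx - radius; cx <= pcx + radius; cx++) {
            for (int cy = pcy - radius; cy <= pcy + radius; cy++) {
                List<T> bucket = chunks.get(key(cx, cy));
                if (bucket != null) active.addAll(bucket);
            }
        }
        return active;
    }
}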
I'm busy coding Conway's Game of Life, and I'm trying to optimise it using some data structure that records which cells should be checked on each life cycle.
I'm using an ArrayList as a dynamic data structure to keep a record of all living cells and their neighbours. Is there a better data structure or way of keeping a shorter list that will increase the game's speed?
I ask this because often many cells are checked but not changed so I feel like my implementation could be improved.
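For comparison, here is a minimal sketch (independent of your code) of the usual sparse approach: keep only the live cells in a hash set and, each generation, count neighbours only for cells adjacent to something alive, so empty space costs nothing.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class SparseLife {
    record Cell(int x, int y) {}

    // One generation: neighbour counts are accumulated only around live cells.
    static Set<Cell> step(Set<Cell> alive) {
        Map<Cell, Integer> neighbourCounts = new HashMap<>();
        for (Cell c : alive) {
            for (int dx = -1; dx <= 1; dx++) {
                for (int dy = -1; dy <= 1; dy++) {
                    if (dx == 0 && dy == 0) continue;
                    neighbourCounts.merge(new Cell(c.x() + dx, c.y() + dy), 1, Integer::sum);
                }
            }
        }
        Set<Cell> next = new HashSet<>();
        for (Map.Entry<Cell, Integer> e : neighbourCounts.entrySet()) {
            int n = e.getValue();
            if (n == 3 || (n == 2 && alive.contains(e.getKey()))) {
                next.add(e.getKey());
            }
        }
        return next;
    }
}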
I believe that the Hashlife algorithm could help you.
It is based on the idea of using a quadtree (a tree data structure in which each internal node has exactly four children) to store the data, and it uses hash tables to store the nodes of the quadtree.
For further reading, this post, written by Eric Burnett, gives great insight into how Hashlife works, its performance, and its implementation (although in Python). It's worth a read.
I built a Life engine that operated on 256x512 bit grids directly mapped to screen pixels back in the 1970s, using a 2 MHz 8-bit 6800 computer. I did it directly on the display pixels (they were one-bit on/off white/black) because I wanted to see the results and didn't see the point in copying the Life image to the display.
Its fundamental trick was to treat the problem as one of evaluating a Boolean logic formula for "this cell is on" based on rules of Life, rather than counting live neighbors as is usual. This formula is pretty easy to figure out, so left as a homework exercise. What made it fast was that the Boolean formula was computed on a per-bit basis, doing 8 bits at a time. If you sweep down the screen and across rows, you can in essence evaluate N bits at once (8 on the 6800, 64 on modern PCs) with very low overhead. If you go nuts, you can likely use the SIMD vector extensions and do 256 bits or more at "once". Over the top would be doing this with a GPU.
The 6800 version would process a complete screen in about 0.5 seconds; you could watch the update ripple down the screen from top to bottom (60 Hz refresh). On a modern CPU with 1000x the clock rate (1 GHz is pretty easy to get) and 64 bits at a time, it should be able to produce thousands of frames per second. So fast you can't watch it run :-{
A useful observation is that much of the Life world is dead (blank) and processing that part mostly produces more dead cells. This suggests using a sparse representation. Another poster suggested quadtrees, which I think is a very good suggestion. Your quadtree regions don't have to be square, either.
Combining the two ideas, quadtrees for non-blank regions with bit-level processing for blocks of bits designated by the quadtrees is likely to give an astonishingly fast Life algorithm.
I am investigating this field to achieve object detection in real time.
Video example:
http://www.youtube.com/watch?v=Bm5qUG-06V8
http://www.youtube.com/watch?v=aYd2kAN0Y20
But how can they extract SIFT keypoints and match them so fast?
SIFT extraction generally takes about a second.
I'm an OpenIMAJ developer and responsible for making the first video.
We're not doing anything particularly fancy to make the matching fast in that video, and the SIFT detection and extraction is carried out on the entirety of every frame. In fact that video was made well before we did any optimisation; the current version of that demo is much smoother. We do also have a version with a hybrid KLT-tracker that works even faster by not having to perform SIFT on every frame.
As suggested by @Mario, the image size does have a big effect on the speed of the extraction, so processing a smaller frame can give a big win. Secondly, in the original description of the difference-of-Gaussian interest point localisation in the SIFT paper, Lowe suggested that the input image first be doubled in size to increase the number of features. By not performing this double-sizing you also get a big performance boost, at the expense of having fewer features to match.
The code is open source (BSD license) and you can get it by following the links at http://www.openimaj.org. As stated in the video description, the image-processing code is pure Java; the only native code is a thin interface to the webcam. Tutorial number 7 in the current tutorial pdf document walks through the process of using SIFT in OpenIMAJ. Disabling the double-sizing can be achieved by doing:
DoGSIFTEngine engine = new DoGSIFTEngine();
engine.getOptions().setDoubleInitialImage(false);
SIFT can be accelerated in several ways:
if you can afford approximations, you can use a SIFT-inspired descriptor called SURF, which is much faster (it uses integral images for most tasks)
you can use parallel implementations, at the CPU level (e.g. OpenCV uses Intel's TBB) or at the GPU level (search for "SIFT GPU" for related code and documentation).
Anyway, none of these is available (AFAIK) in Java, so you'll have to use a Java wrapper to OpenCV or work it out yourself.
General and first idea: ask the video uploader(s). We can only guess at what's done or how it's done. It would also help to know what you've done so far (e.g. your video resolution, your processing power, image preparation, etc.).
I haven't used SIFT specifically, but I did quite a bit of object/motion tracking over the last few years, so this is more general advice. You might have tried some of these points already, I don't know.
Reduce your image resolution: going from 640x480 to 320x240 reduces your data to 25%. Going down to 160x120 cuts it to a quarter of that again (so only 6.25% of the data is left) without necessarily hurting your algorithm significantly (see the resizing sketch after these tips).
In a similar way, it might be useful to reduce the color depth of your image (not just 256-level grayscale, but perhaps even coarser, e.g. 64 levels).
Try other methods to make features more obvious or faster to find, e.g. try running an edge detector over your image.
At least the second video mentions a tracking system, so you could try to predict the region where the tracked object should reappear in the next frame (using some simple a/b filter or whatever on coordinates and possibly rotation), then run SIFT on that sub-area (with some added margin) only. Only analyze the whole image if you can't find it again. At around 40 or 50 seconds into the second video they lose the object and need quite some time/tries to find it again.
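On the first point, here is a minimal sketch of downscaling each frame before feature extraction, using plain AWT; whatever imaging library you actually use probably has its own resize call.

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Scale a frame down (e.g. 640x480 -> 320x240) before handing it to the
// feature extractor; bilinear interpolation keeps the result reasonably clean.
class FrameDownscaler {
    static BufferedImage scale(BufferedImage src, int targetWidth, int targetHeight) {
        BufferedImage dst = new BufferedImage(targetWidth, targetHeight, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, targetWidth, targetHeight, null);
        g.dispose();
        return dst;
    }
}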
I saw this video, and I am really curious how it was performed. Does anyone have any ideas? My intuition is that he scraped pixels from the screen (one per 'box'), and then fed that into some program to determine the next move.
Is scraping pixel-by-pixel the way to do this, or is there a better way? I am looking to do something similar with either Java or Python.
Thanks
Probably that's the most reliable way. There are ways to inspect what is happening inside a process - looking directly at its internal state and memory - but they are platform-specific and very prone to misbehaving because you're dealing with a slightly different version of something - that includes a different Flash version as well as a different version of the app. Those methods are more often used for "trainers" for exe games, where there's typically only one or two versions of the executable to worry about.
Lots of screenshots, comparing, and figuring out reliable indicator pixels seems the way to go - plus keeping track of what you expect to happen, of course. When the app is running, it should work from one screenshot at a time (hopefully ensuring a consistent picture, with no half-updated views) and then test the minimum number of pixels needed using (perhaps) a decision tree.
There are ways to automate construction of efficient decision trees, but it's probably easier to do it manually based on comparing screen shots. In this case, since Tetris normally creates all new pieces at the same position, with a 1:1 relationship between colour and shape, you can probably determine the shape and position of a new piece from a single pixel colour - so "decision tree" is probably the wrong term, really, in this case - though there are other things the bot needs to read from the screen.
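For the piece-detection part, a single pixel sample per move may be all that is needed. Here is a rough sketch using java.awt.Robot; the spawn coordinates and the colour-to-piece table are placeholders you would have to measure from screenshots of the actual game.

import java.awt.AWTException;
import java.awt.Color;
import java.awt.Robot;
import java.util.HashMap;
import java.util.Map;

// Sample one pixel at the piece spawn position and map its colour to a shape.
class PieceReader {
    static final int SPAWN_X = 400; // placeholder screen coordinates
    static final int SPAWN_Y = 80;

    private final Robot robot;
    private final Map<Integer, String> colourToPiece = new HashMap<>();

    PieceReader() throws AWTException {
        robot = new Robot();
        colourToPiece.put(new Color(0, 255, 255).getRGB(), "I"); // example entries only
        colourToPiece.put(new Color(255, 255, 0).getRGB(), "O");
    }

    String readNewPiece() {
        Color c = robot.getPixelColor(SPAWN_X, SPAWN_Y);
        return colourToPiece.getOrDefault(c.getRGB(), "unknown");
    }
}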
What's more interesting is the logic to actually make gameplay decisions, since that bot clearly isn't just slotting every piece into the most immediately obvious position, but deliberately aiming to create opportunities to clear 3 or 4 rows at a time.
Yes, I think he scanned the pixels. Actually it should be fairly simple, because you only need to scan the newly spawned shape for each move. With that information you can maintain the grid locally and use it for your AI calculations.