I am having a problem with my SWT Tree. It contains many leaves, which makes expanding an item very time consuming. Sometimes I even need to expand all items. Is there a way to expand it asynchronously? I have tried calling asyncExec() on the display with expandAll() inside the run() method, but it didn't help. And it doesn't solve the first problem, where I want to expand only one item. Any ideas?
Additional note: The slow expansion of an item happens only the first time I expand it. All later expansions of the same item (after collapsing it) are fast.
I solved performance issues with large trees by changing the content provider to an ILazyTreeContentProvider. This won't help if you have to expand the full tree at once.
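For illustration, here is a minimal sketch of such a provider, assuming a TreeViewer created with the SWT.VIRTUAL style and a hypothetical Node model (not from the question) whose input is itself a Node:

import org.eclipse.jface.viewers.ILazyTreeContentProvider;
import org.eclipse.jface.viewers.TreeViewer;
import org.eclipse.jface.viewers.Viewer;

class LazyNodeContentProvider implements ILazyTreeContentProvider {
    private TreeViewer viewer;

    public void inputChanged(Viewer v, Object oldInput, Object newInput) {
        this.viewer = (TreeViewer) v;
    }

    // called only when an element scrolls into view; materialize just that child
    public void updateElement(Object parent, int index) {
        Node child = ((Node) parent).getChildren().get(index);
        viewer.replace(parent, index, child);
        viewer.setChildCount(child, child.getChildren().size());
    }

    // called instead of fetching the whole subtree up front
    public void updateChildCount(Object element, int currentChildCount) {
        viewer.setChildCount(element, ((Node) element).getChildren().size());
    }

    public Object getParent(Object element) {
        return ((Node) element).getParent();
    }

    public void dispose() {}
}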
An alternative: have a closer look at your content and label providers. Maybe their operations are too expensive, and you can speed things up by caching or pre-calculating some information for the tree. If, for example, you have a normal (non-lazy) content provider that loads the items from a database one by one, expanding the tree will take forever...
Are you loading the model items in the UI thread? You should ideally load your model items in a non-UI thread and then update the tree with the items in the UI thread.
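A rough sketch of that split, assuming a JFace TreeViewer named "viewer" and a hypothetical loadChildren() that does the expensive model work (neither is from the question):

void expandInBackground(final TreeViewer viewer, final Object parent) {
    new Thread(() -> {
        final Object[] children = loadChildren(parent); // expensive work, off the UI thread
        viewer.getControl().getDisplay().asyncExec(() -> {
            if (!viewer.getControl().isDisposed()) {
                viewer.add(parent, children);           // quick structural update on the SWT thread
                viewer.setExpandedState(parent, true);
            }
        });
    }).start();
}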
Obviously it takes a lot of memory to store an array holding a history of changes... that's how I had my application working, but it just seems like there's a smarter way to go about doing this.
// bad implementation - every entry stores a full Photo, so memory grows with each edit
ArrayList<Photo> photoHistory = new ArrayList<>();
photoHistory.add(originalPhoto);
photoHistory.add(change1);
photoHistory.add(change2);
Maybe store only the original and the current view model and keep a log of the methods/filters used? Then when a user hits 'undo', it would take the total number of changes made and run through all of them again minus the last one? This also seems incredibly inefficient.
I guess I'm just looking for advice on how to implement a general 'undo' function of a software application.
Here is a tip from how GIMP implements it:
GIMP's implementation of Undo is rather sophisticated. Many operations require very little Undo memory (e.g., changing visibility of a layer), so you can perform long sequences of them before they drop out of the Undo History. Some operations, such as changing layer visibility, are compressed, so that doing them several times in a row produces only a single point in the Undo History. However, there are other operations that may consume a lot of undo memory. Most filters are implemented by plug-ins, so the GIMP core has no efficient way of knowing what changed. As such, there is no way to implement Undo except by memorizing the entire contents of the affected layer before and after the operation. You might only be able to perform a few such operations before they drop out of the Undo History.
Source
So to do it as optimally as possible, you have to do different things depending on what action is being undone. Showing or hiding a layer can be represented in a negligible amount of space, but filtering the whole image might necessitate storing another copy of the whole image. However, if you only filter part of the image (or draw in a small section of it), perhaps you only need to store that piece of the image.
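A sketch of that per-action approach, with a hypothetical Document image model standing in for the real thing: each action stores only what its own undo needs, and the history is trimmed by total memory cost rather than by count.

import java.awt.Rectangle;
import java.util.ArrayDeque;
import java.util.Deque;

interface UndoableAction {
    void undo(Document doc);
    long memoryCost(); // used to decide when old entries drop out of the history
}

// cheap: toggling layer visibility needs almost no memory
class ToggleLayerVisibility implements UndoableAction {
    private final int layerIndex;
    ToggleLayerVisibility(int layerIndex) { this.layerIndex = layerIndex; }
    public void undo(Document doc) { doc.toggleVisibility(layerIndex); }
    public long memoryCost() { return 16; }
}

// expensive: a filter snapshots only the pixels of the region it touched
class FilterRegion implements UndoableAction {
    private final Rectangle region;
    private final int[] before;
    FilterRegion(Document doc, Rectangle region) {
        this.region = region;
        this.before = doc.copyPixels(region); // snapshot taken before the filter runs
    }
    public void undo(Document doc) { doc.pastePixels(region, before); }
    public long memoryCost() { return 16 + before.length * 4L; }
}

class UndoHistory {
    private final Deque<UndoableAction> stack = new ArrayDeque<>();
    private final long budget; // total bytes allowed for undo data
    private long used;
    UndoHistory(long budgetBytes) { this.budget = budgetBytes; }

    void push(UndoableAction a) {
        stack.push(a);
        used += a.memoryCost();
        while (used > budget && stack.size() > 1) {
            used -= stack.removeLast().memoryCost(); // oldest entries drop out first
        }
    }

    void undo(Document doc) {
        UndoableAction a = stack.poll();
        if (a != null) {
            used -= a.memoryCost();
            a.undo(doc);
        }
    }
}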
I want to get all data from an index. Since the number of items is too large for memory, I use the scroll API (a nice feature):
client.prepareSearch(index)
        .setTypes(myType)
        .setSearchType(SearchType.SCAN)           // scan: no scoring, just bulk retrieval
        .setScroll(new TimeValue(60000))          // keep the scroll context alive for 60s
        .setSize(amountPerCall)                   // hits per shard per scroll call
        .setQuery(QueryBuilders.matchAllQuery())
        .execute().actionGet();
Which works nicely when calling:
client.prepareSearchScroll(scrollId)
.setScroll(new TimeValue(600000))
.execute().actionGet()
But when I call the former method multiple times, I get the same scrollId each time, so I cannot run multiple scrolls in parallel.
I found http://elasticsearch-users.115913.n3.nabble.com/Multiple-scrolls-simultanious-td4024191.html which states that it is possible, though I don't know the author's affiliation with ES.
Am I doing something wrong?
After searching some more, I got the impression that this (the same scrollId) is by design: the scroll context stays open until its timeout expires, and the timeout is reset after each call (see Elasticsearch scan and scroll - add to new index).
So you can only have one open scroll per index.
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html states:
Scrolling is not intended for real time user requests, but rather for processing large amounts of data, e.g. in order to reindex the contents of one index into a new index with a different configuration.
So it appears what I wanted is not an option, on purpose - possibly because of optimization.
Update
As stated, creating multiple scrolls cannot be done, but this is only true when the query you use for scrolling is the same. If you scroll for, for instance, another type, another index, or just another query, you can have multiple scrolls.
You can scroll the same index at the same time; this is what elasticsearch-hadoop does.
Just don't forget that, under the hood, an index is composed of multiple shards that own the data, so you can scroll each shard in parallel by using:
.setPreference("_shards:1")
I've implemented a search over lots of items (hundreds) in a JList using Lucene - when someone types in the search box it performs a search and displays the results in a JList. It does this by adding and removing items from the underlying JList model as each character is typed, but this approach blocks the UI (because adding something to a ListModel has to be performed on the EDT). The search itself is very quick; it's the adding and removing of items that takes the time.
How would I approach the problem to not block the EDT while the model is being modified?
The lag isn't huge - it's definitely usable at the moment, just not as snappy as I'd like (for want of a better word). I'm expecting people to run the software on less powerful machines than mine, though, hence my interest in sorting the issue out.
Other details:
I have profiled the application, the lag is definitely caused by adding / removing lots of items. A typical step could see any number of items getting added or removed, from a few to hundreds. For instance, if I search for the letter "x" in the text box then most of the items will get removed since few contain that letter. If I then remove the letter all the items will be added again. If I search for a more common term, "the" for instance, then just a few items may be removed since the bulk of them contain that term.
I'm not dealing with strings directly; they're relatively simple objects made up of just a few strings (Song, to be precise, made up of things like title, author, lyrics, etc.), and they're all cached using SoftReferences where possible (so assume none of these objects are being created or destroyed; they shouldn't be for a typical user).
This may not be the answer you're looking for, but I wonder if your best solution is simply not to add hundreds of items. There's no way that the user will be able to, or want to, scroll through that many items in a JList, so perhaps your smartest move is to limit how many items are added to a reasonable number, say 20 or so.
I think of this as similar to a word processor displaying a document on screen, or other immediate "look-up" components I've used in the past. If the document is large, often the whole thing isn't loaded into memory but rather somehow cached to disk. If you have no choice but to load a lot of items, then perhaps you can take this portion of the model "off-line", show a modal wait dialog, load the items off the EDT, then bring the model back online and release the modal dialog.
I think the easiest way would be to use a JTable instead of a JList and add a RowFilter to the JTable; then there is no reason to add/remove/modify large numbers of items.
For adding/removing/modifying large numbers of items in the XxxModel in the background, there is SwingWorker.
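A minimal sketch of the JTable + RowFilter combination, assuming a "songs" collection and a "searchField" text field (both stand-ins for the real model); the point is that filtering only hides rows, so the model never changes on a keystroke:

import javax.swing.*;
import javax.swing.event.*;
import javax.swing.table.*;
import java.util.regex.Pattern;

DefaultTableModel model = new DefaultTableModel(new Object[]{"Title", "Author"}, 0);
for (Song s : songs) {
    model.addRow(new Object[]{s.getTitle(), s.getAuthor()});
}
JTable table = new JTable(model);
final TableRowSorter<DefaultTableModel> sorter = new TableRowSorter<>(model);
table.setRowSorter(sorter);

searchField.getDocument().addDocumentListener(new DocumentListener() {
    private void filter() {
        String text = searchField.getText();
        // no rows are added or removed - the sorter just hides non-matching ones,
        // so the EDT only has to repaint the visible rows
        sorter.setRowFilter(text.isEmpty() ? null
                : RowFilter.regexFilter("(?i)" + Pattern.quote(text)));
    }
    public void insertUpdate(DocumentEvent e) { filter(); }
    public void removeUpdate(DocumentEvent e) { filter(); }
    public void changedUpdate(DocumentEvent e) { filter(); }
});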
I am using JUNG for a project, and when I display relatively large graphs (e.g. 1500 nodes) my PC cannot handle it (graphs are rendered, but if I want to navigate the graph the system becomes very slow). Any suggestions?
So, there are two areas where JUNG visualization doesn't always scale very well right now:
iterative force-directed layouts
interaction: figuring out which node or edge (if any) is being referenced for hover and click events.
It sounds like it's the latter that you're running into right now.
Depending on your requirements, you have a couple of options:
(a) turn off mouse events, or at least hover events
(b) hack the visualization system so that lookups of event targets aren't O(m+n).
Simple solutions for (b) basically just partition the viewing area into smallish chunks and only send events to elements that are in the same chunk as the pointer. (Obviously, the smaller you make the chunks, the more memory is required.)
We've had plans to do (b) (and a design sketched out) for some time but have been working on other things. Anyone that wants to help with a more permanent solution, please contact me.
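To make the chunking concrete, here is a rough sketch of the bucketing structure (this is not JUNG's actual picking API; types and the cell size are placeholders): vertices are indexed into coarse grid cells after each layout pass, and hit-testing only looks at the cell under the pointer.

import java.awt.geom.Point2D;
import java.util.*;

class GridIndex<V> {
    private final int cellSize;
    private final Map<Long, List<V>> cells = new HashMap<>();

    GridIndex(int cellSize) { this.cellSize = cellSize; }

    private long key(double x, double y) {
        long cx = (long) Math.floor(x / cellSize);
        long cy = (long) Math.floor(y / cellSize);
        return (cx << 32) ^ (cy & 0xffffffffL);
    }

    // call once per vertex after the layout assigns positions
    void insert(V vertex, Point2D p) {
        cells.computeIfAbsent(key(p.getX(), p.getY()), k -> new ArrayList<>()).add(vertex);
    }

    // only vertices in the pointer's cell are candidates, so a mouse-move
    // costs O(cell population) instead of O(n) over all vertices
    List<V> candidates(Point2D pointer) {
        return cells.getOrDefault(key(pointer.getX(), pointer.getY()),
                Collections.<V>emptyList());
    }
}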
How much memory are you starting your VM with? Assuming you're working on Windows: looking at the Task Manager, does the VM hit the maximum amount of allocated memory and start using swap?
The problem probably lies with the calculation of your vertices' positions. The only layout that I've found fairly easy to calculate was the Tree Layout, and obviously that's not suitable for all data sets.
The solution is probably to write your own custom layout with far fewer calculations than, say, an FRLayout.
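For example (a hedged sketch against the JUNG 2.x API), a layout that just places vertices on a fixed grid does a single pass and no iterative force calculation at all:

import edu.uci.ics.jung.algorithms.layout.AbstractLayout;
import edu.uci.ics.jung.graph.Graph;
import java.awt.geom.Point2D;

class GridLayout<V, E> extends AbstractLayout<V, E> {
    GridLayout(Graph<V, E> graph) { super(graph); }

    // one pass over the vertices, no per-frame iteration
    public void initialize() {
        int cols = (int) Math.ceil(Math.sqrt(getGraph().getVertexCount()));
        int i = 0;
        for (V v : getGraph().getVertices()) {
            setLocation(v, new Point2D.Double((i % cols) * 50 + 25, (i / cols) * 50 + 25));
            i++;
        }
    }

    public void reset() { initialize(); }
}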
I am new to the prefuse visualization toolkit and have a couple of general questions. For my purpose, I would like to perform an initial visualization using prefuse (graphview / graphml). Once rendered, upon a user click of a node, I would like to completely reload a new xml file for a new visualization. I want to do this so that I can "pre-package" graphs for display.
For example: if I search for Ted, I would like an xml file relating to Ted to load and render a display. Now in the display I see that Ted has associated nodes called Bill and Joe. When I click Joe, I would like to clear the display and load an xml file associated with Joe. And so on.
I have looked into loading one very large xml file containing all node and node relationship info and allowing prefuse to handle this using the hops from one level to another. However, eventually I am sure that system performance issues will arise due to the size of data.
Thanks in advance for any help,
John
Of course, as you said, one option is loading all nodes and then setting the nodes you don't need to be invisible. Prefuse scales fairly well, but of course it has its limits. The second option is to just create a brand new panel and replace the old panel; I've used option 2 and it works quite well.
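A rough sketch of option 2, assuming a hypothetical buildDisplay() helper that wires up the Visualization, renderers and actions for a freshly read graph:

import javax.swing.JFrame;
import prefuse.data.Graph;
import prefuse.data.io.DataIOException;
import prefuse.data.io.GraphMLReader;

void showGraphFor(JFrame frame, String graphmlFile) throws DataIOException {
    Graph graph = new GraphMLReader().readGraph(graphmlFile); // e.g. "joe.xml"
    prefuse.Display panel = buildDisplay(graph); // hypothetical: sets up Visualization, renderers, actions
    frame.getContentPane().removeAll();          // throw away the old panel entirely
    frame.getContentPane().add(panel);
    frame.revalidate();
    frame.repaint();
}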
I'm far from an expert on Prefuse's performance issues, but I think it is definitely more resource-intensive to have a huge xml file loaded all at once than to do the processing to re-load only the necessary nodes.
I don't know what kind of graph you are using, but I would add a 'refreshGraph' method that removes the graph from the Visualization object, cancels the Activity, cancels the Layout, refreshes the ActionList and starts over. It would probably turn out something like this:
public void refresh(Node clickedNode) {
    visualization.removeGroup(GRAPH);  // drop the current graph group
    visualization.removeGroup(AGGR);   // drop any aggregate items
    activity.cancel();                 // stop the running Activity
    actionList.cancel();               // stop the Layout/ActionList
    visualization.reset();
    // process the XML for clickedNode and reload your graph here
}