Gephi: Creating and streaming a dynamic graph using the toolkit - java

I'm trying to create a dynamic graph and stream it using the Gephi toolkit. So far, I've followed the toolkit and the streaming plugin tutorials to create a normal graph and stream it to the Gephi GUI.
I'm having a hard time figuring out how to make the graph dynamic - I have managed to add TimeInterval columns to the Node and Edge tables using the AttributeModel but when I open the Timeline window in the GUI, it says that the graph is not dynamic. The models/controllers are a bit confusing to me.
Here's the code that I have right now:
ProjectController pc = Lookup.getDefault().lookup(ProjectController.class);
pc.newProject();
Workspace workspace = pc.getCurrentWorkspace();
AttributeController attributeController = Lookup.getDefault().lookup(AttributeController.class);
AttributeModel attributeModel = attributeController.getModel();
AttributeColumn nodeTimeColumn = attributeModel.getNodeTable().addColumn(DynamicModel.TIMEINTERVAL_COLUMN, AttributeType.TIME_INTERVAL, AttributeOrigin.PROPERTY);
AttributeColumn edgeTimeColumn = attributeModel.getEdgeTable().addColumn(DynamicModel.TIMEINTERVAL_COLUMN, AttributeType.TIME_INTERVAL, AttributeOrigin.PROPERTY);
GraphController graphController = Lookup.getDefault().lookup(GraphController.class);
GraphModel graphModel = graphController.getModel();
DirectedGraph graph = graphModel.getDirectedGraph();
// At this point, I want to make the graph dynamic so that I can use
// the Timeline feature when I stream to the GUI.
StreamingServer server = Lookup.getDefault().lookup(StreamingServer.class);
ServerControllerFactory controllerFactory = Lookup.getDefault().lookup(ServerControllerFactory.class);
ServerController serverController = controllerFactory.createServerController(graph);
String context = "/testing";
server.register(serverController, context);

Gephi Graph Streaming is, in principle, a way to visualize changes in a graph in real time using much less memory than loading a big dynamic graph. So, at each point in time, you don't have the full dynamic graph in memory, but a static graph representing the current state of the graph. That's why, by default, the plugin creates and updates just a static graph.
Unfortunately, updating a dynamic graph through graph streaming is not possible yet. We are planning to support it in the near future, and such operations will be made available by changing the following class:
https://github.com/gephi/gephi-plugins/blob/graph-streaming/StreamingAPI/src/org/gephi/streaming/api/GraphUpdaterEventHandler.java
As you can see in this file, there's no handling of dynamic attributes; new nodes/edges/attributes are added, changed or removed, but there's no update to dynamic columns.
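For reference, my understanding is that, locally (without streaming), the Timeline only treats the graph as dynamic once the TimeInterval column actually holds values on nodes or edges. Here is a minimal sketch of what that could look like with the 0.8.x toolkit API (assuming TimeInterval is org.gephi.data.attributes.type.TimeInterval):
// Hedged sketch, Gephi 0.8.x API assumed: give a node a time interval so the
// DynamicModel can recognize the graph as dynamic.
Node n0 = graphModel.factory().newNode("n0");
n0.getNodeData().setLabel("Node 0");
n0.getNodeData().getAttributes().setValue(
        nodeTimeColumn.getIndex(), new TimeInterval(2000.0, 2005.0));
graph.addNode(n0);
Over the streaming plugin, however, this still won't propagate, for the reason explained above.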

Related

How to use getScaleController() and PinchGesture?

I am looking for real-time scaling of objects/models in ARCore. I found that it can be done using
getScaleController() and the PinchGesture class. I have a TransformableNode and I want to use the above method and class to achieve real-time scaling of the model, updating its dimensions at runtime.
I checked the documentation already, but it didn't help me.
Here is my code:
TransformableNode node =
new TransformableNode(arFragment.getTransformationSystem());
node.setParent(anchorNode);
node.setRenderable(modelRenderable);
node.select();
/* After this I want to use PinchGesture or PinchGestureListener and
node.getScaleController() to achieve my goal */
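From what I understood so far, I think the calls would look roughly like this, assuming the Sceneform ScaleController API with setMinScale/setMaxScale (I haven't verified this):
// Hedged sketch: constrain the runtime pinch-to-scale behaviour through the
// node's ScaleController (method names assumed from com.google.ar.sceneform.ux).
node.getScaleController().setMinScale(0.2f); // smallest scale the pinch gesture may reach
node.getScaleController().setMaxScale(3.0f); // largest scale the pinch gesture may reach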
Thanks in advance

Programmatically render template area in Magnolia CMS

I am using Magnolia CMS 5.4 and I want to build a module that will render some content of a page and expose it over a REST API. The task sounds simple, but I'm not sure how to approach it or where to start.
I want my module to generate a partial template or an area of a template for a given reference, let's say "header". I need to render the header template/area, get the HTML, and return that as a response to another system.
So my questions are: is this possible at all, and where do I start?
OK, after asking here and on the Magnolia forum without getting an answer, I dug into the source code and found a way to do it.
First of all, the rendering works based on different renderers, which could be JCR, plain-text or Freemarker renderers. In Magnolia these are chosen and used by the RenderingEngine and its implementation, DefaultRenderingEngine. The rendering engine allows you to render a whole page node, which is one step closer to what I am trying to achieve. So let's see how this can be done:
I'll skip some steps, but I added a command and made it work over REST so I could see what's happening when I send a request to the endpoint. The command extends BaseRepositoryCommand to allow access to the JCR repositories.
@Inject
public void setDefaultRenderingEngine(
        final RendererRegistry rendererRegistry,
        final TemplateDefinitionAssignment templateDefinitionAssignment,
        final RenderableVariationResolver variationResolver,
        final Provider<RenderingContext> renderingContextProvider
) {
    renderingEngine = new DefaultRenderingEngine(rendererRegistry, templateDefinitionAssignment,
            variationResolver, renderingContextProvider);
}
This creates your rendering engine, and from here you can start rendering nodes, with a few small gotchas. I tried injecting the rendering engine directly, but that didn't work as all of its internals were empty/null, so I decided to grab all the constructor parameters and initialise my own instance.
The next step is to render a page node. The rendering engine is built around the idea that it renders to an HttpServletResponse and ties into the request/response flow, but we need to capture the generated markup in a variable, so I added a new implementation of FilteringResponseOutputProvider:
public class AppendableFilteringResponseOutputProvider extends FilteringResponseOutputProvider {

    private final FilteringAppendableWrapper appendable;
    private OutputStream outputStream = new ByteArrayOutputStream();

    public AppendableFilteringResponseOutputProvider(HttpServletResponse aResponse) {
        super(aResponse);
        OutputStreamWriter writer = new OutputStreamWriter(outputStream);
        appendable = Components.newInstance(FilteringAppendableWrapper.class);
        appendable.setWrappedAppendable(writer);
    }

    @Override
    public Appendable getAppendable() throws IOException {
        return appendable;
    }

    @Override
    public OutputStream getOutputStream() throws IOException {
        ((Writer) appendable.getWrappedAppendable()).flush();
        return outputStream;
    }

    @Override
    public void setWriteEnabled(boolean writeEnabled) {
        super.setWriteEnabled(writeEnabled);
        appendable.setWriteEnabled(writeEnabled);
    }
}
So the idea of the class is to expose the output stream while still preserving the FilteringAppendableWrapper, which allows us to filter the content we want to write. This is not needed in the general case; you can stick to using AppendableOnlyOutputProvider with a StringBuilder appendable and easily retrieve the entire page markup.
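For that simpler case, the sketch would be roughly the following (assuming AppendableOnlyOutputProvider takes the target Appendable in its constructor, and reusing the pageNode retrieved below):
// Simple variant: render the whole page into a StringBuilder,
// no filtering and no fake HttpServletResponse needed.
StringBuilder markup = new StringBuilder();
OutputProvider simpleProvider = new AppendableOnlyOutputProvider(markup);
renderingEngine.render(pageNode, simpleProvider);
String fullPageHtml = markup.toString();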
// here I needed to create a fake HttpServletResponse
OutputProvider outputProvider = new AppendableFilteringResponseOutputProvider(new FakeResponse());
Once you have the output provider, you need a page node, and since you are faking the request you need to set up the Magnolia global environment to be able to retrieve the JCR node:
// populate repository and root node as those are not set for commands
super.setRepository(RepositoryConstants.WEBSITE);
super.setPath(nodePath); // this can be any existing path like: "/home/page"
Node pageNode = getJCRNode(context);
Now that we have the output provider and the node we want to render, the next thing is actually running the rendering engine:
renderingEngine.render(pageNode, outputProvider);
String html = outputProvider.getOutputStream().toString();
And that's it, you should have your content rendered and you can use it as you wish.
Now we come to my special case, where I want to render just one area of the whole page, in this case the header. This is all handled by the same renderingEngine, though you need to add a rendering listener that overrides the writing process. First, inject it in the command:
@Inject
public void setAreaFilteringListener(final AreaFilteringListener aAreaFilteringListener) {
areaFilteringListener = aAreaFilteringListener;
}
This is where the magic happens: the AreaFilteringListener checks whether you are currently rendering the requested area, and if so it enables the output provider for writing; otherwise it keeps the provider locked and skips all unrelated areas. You need to add the listener to the rendering engine like so:
// add the area filtering listener that generates specific area HTML only
LinkedList<AbstractRenderingListener> listeners = new LinkedList<>();
listeners.add(areaFilteringListener);
renderingEngine.setListeners(listeners);
// we need to provide the exact same Response instance that the WebContext is using
// otherwise the voters against the AreaFilteringListener will skip the execution
renderingEngine.initListeners(outputProvider, MgnlContext.getWebContext().getResponse());
I hear you ask: "But where do we specify the area to be rendered?" Aha, here it comes:
// enable the area filtering listener through a global flag
MgnlContext.setAttribute(AreaFilteringListener.MGNL_AREA_PARAMETER, areaName);
MgnlContext.getAggregationState().setMainContentNode(pageNode);
The area filtering listener checks whether a specific Magnolia context property, "mgnlArea", is set. If it is found, the listener reads its value, uses it as the area name, checks whether that area exists in the node, and enables writing once we hit that area. This can also be used through URLs like https://demopublic.magnolia-cms.com/~mgnlArea=footer~.html, which will give you just the footer area generated as an HTML page.
Here is the full solution: http://yysource.com/2016/03/programatically-render-template-area-in-magnolia-cms/
Just use the path of the area and make an HTTP request using that URL, e.g. http://localhost:9080/magnoliaAuthor/travel/main/0.html
As far as I can see, there is no need to go through everything programmatically as you did.
Direct component rendering
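For completeness, a minimal sketch of how another system could fetch such a direct-rendering URL, using nothing but java.net.HttpURLConnection (the class and method names here are illustrative only, and the URL is just the example path above):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class AreaFetcher {

    // Fetches the rendered HTML of an area/component via its direct URL,
    // e.g. "http://localhost:9080/magnoliaAuthor/travel/main/0.html".
    public static String fetchArea(String areaUrl) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(areaUrl).openConnection();
        connection.setRequestMethod("GET");
        StringBuilder html = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                html.append(line).append('\n');
            }
        }
        return html.toString();
    }
}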

SplashScreen/Performance: Proper way to initialize application data during startup, especially using a SplashScreen

I am writing a Java desktop application (I am a student).
It has to deal with four kinds of data during startup:
1) Project tree (like the Eclipse project tree) view data.
Currently I use XMLEncoder/XMLDecoder to save to and reload from an XML file.
2) User preference data, such as fonts, recently used files and so on.
Currently I use java.util.prefs.Preferences.
3) Class data. Some factory classes like MenuFactory and util classes like DatabaseUtil, FileUtil and so on have some static data.
Currently I use static initializers in these classes to initialize default data.
4) Database-related information, such as connection configuration and frequently used database and table names.
Currently I use java.util.Properties.
What I want to improve:
1) Is this the right way to save my application data for the four kinds mentioned above?
2) Since there is so much data to load, what should I do during the splash screen?
Should I load everything at startup, or delay loading until the data is actually used?
At the very least, I do not want to deceive the user with the following code (which does not update the progress bar at meaningful times):
SwingWorker<Void, Integer> worker = new SwingWorker<Void, Integer>() {
    @Override
    protected Void doInBackground() throws Exception {
        for (int i = 0; i < 50; i++) {
            Thread.sleep(100); // Simulate loading
            publish(i * 2);    // Notify progress
        }
        return null;
    }
};
3) I think too many static initializers may slow down the program startup; any suggestions?
If your app is targeted at high-end devices, then you can certainly load most of it during the startup/splash screen.
But the issue comes when low-end devices are also among your target devices. On some of them, loading too much data at startup leaves very little memory for other processing, sometimes even leading to crashes.
Thus it is a decision you have to take wisely; if it is only a small amount of data, it is better to load it on demand, or while the relevant screen is being loaded.
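If you do decide to defer the heavier pieces, one common Java pattern is the initialization-on-demand holder idiom, which postpones static data until the first time it is actually used instead of paying for it at startup. A minimal sketch below; DatabaseUtil and the /db.properties resource are just placeholders borrowed from the question, not a prescribed layout:
public final class DatabaseUtil {

    private DatabaseUtil() { }

    // The nested holder class is not loaded (and the Properties file not read)
    // until getConnectionConfig() is called for the first time.
    private static final class Holder {
        static final java.util.Properties CONNECTION_CONFIG = loadConfig();

        private static java.util.Properties loadConfig() {
            java.util.Properties props = new java.util.Properties();
            try (java.io.InputStream in =
                    DatabaseUtil.class.getResourceAsStream("/db.properties")) {
                if (in != null) {
                    props.load(in);
                }
            } catch (java.io.IOException e) {
                throw new IllegalStateException("Could not load database configuration", e);
            }
            return props;
        }
    }

    public static java.util.Properties getConnectionConfig() {
        return Holder.CONNECTION_CONFIG;
    }
}
That way the splash screen only has to cover the data the first screen really needs, and the progress it reports stays honest.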

Eclipse RCP IPageLayout woes

I have an Eclipse RCP application with a sort of three column layout:
The editor area is at the extreme right. Now, when you get an IPageLayout to work with, the editor area is already added in. That's fine: we add area B to the left of the editor, and area A to the left of B, and the layout is exactly what we need.
The issue is that when you move the sash between A and B, views A and B change without resizing the editor area (good); but when you move the other sash between B and the editor area, all three views are resized: the layout manager acts to maintain the ratio of the widths of A and B, and that's not what we want. We want the user to be able to move each sash independently, and have it influence only the two views it touches.
It seems like the root cause of this is that the editor is in place when you get your IPageLayout, and therefore you have to position the IFolderLayouts relative to it. If you could position the editor relative to B instead, then resizing would do the right thing.
So my questions:
Is there any way to tell the IPageLayout to position the editor relative to a view, instead of the other way around?
Barring that, is there any other way to influence the layout algorithm, like writing some kind of layout manager?
I know of no way to alter the layout tree of IPageLayout in Eclipse 3.x. In Eclipse 4.2, however, the Application Model can be changed dynamically at runtime.
So, if you would consider migrating your application to Eclipse 4, this solution could be an option. To keep the original application and UI code as untouched as possible, this solution will
take full advantage of the compatibility layer of Eclipse 4 to create an Application Model from the Eclipse 3 based RCP application. There is no need to create an Application Model or alter the UI code of the application.
rearrange the editor area's layout after the application is active. This is done by creating an addon class in a separate plugin.
allow easy migration to more Eclipse 4 functionality in the future: should you decide to build your own Application Model, you can just unhook the addon plugin.
I started with the regular RCP Mail template of Eclipse 3 and altered the perspective to recreate the problem. This is the Perspective class I used in my test application:
import org.eclipse.ui.IPageLayout;
import org.eclipse.ui.IPerspectiveFactory;
public class Perspective implements IPerspectiveFactory {
public static final String ID = "wag.perspective";
public void createInitialLayout(IPageLayout layout) {
String editorArea = layout.getEditorArea();
layout.setEditorAreaVisible(true);
layout.addStandaloneView(AView.ID, false, IPageLayout.LEFT,
0.25f, editorArea);
layout.addStandaloneView(BView.ID, false, IPageLayout.LEFT,
0.25f, editorArea);
layout.getViewLayout(AView.ID).setCloseable(false);
layout.getViewLayout(BView.ID).setCloseable(false);
}
}
It basically creates the scenario you described: a three column layout where one sash affects all three parts and the other one only two.
I then proceeded to migrate the application and alter the Application Model.
Migrate the Eclipse 3 based RCP application to Eclipse 4
There are online tutorials available for this process. I found Eclipse 4.1: Run your 3.x RCP in 4.1 and Eclipse 4 and the Compatibility Layer - Tutorial to be very helpful.
I recommend including the org.eclipse.e4.tools.emf.liveeditor and its required plug-ins in your product dependencies. With the live editor, you can take a look at the Application Model that is created by the compatibility layer.
Once the application starts, the sashes will still behave the same way. Open the live editor on your application window and take a look at your model.
You can see that the PartSashContainer including the placeholder for the AView contains another PartSashContainer. Moving the sash between AView and that container will update the rest of the layout tree, while moving the sash between BView and the editor does not affect other parts of the layout.
You could now drag the placeholder for the AView to the container where the BView and the editor are located. This would instantly create the effect you desire: The sashes will only affect their direct neighbours. But these changes will only be saved in one's own runtime workspace. Something else is needed to alter the layout structure automatically.
Altering the Application Model at runtime
Since I didn't want to touch the original code if possible, I created another plugin to make a contribution to the Application Model.
Create a Plug-In Project without an Activator without using a template.
Add an Addon class: select New->Other->Eclipse 4->Classes->New Addon Class
Add a Model Fragment: select New->Other->Eclipse 4->Model->New Model Fragment. Open the created fragment.e4xmi file and add a Model Fragment. For the Element Id, put org.eclipse.e4.legacy.ide.application (this is the standard id of legacy applications) and for the Feature Name, put addons. Add an Addon to the Model Fragment. Enter an ID and set the Class URI to your addon class.
Now add your fragment.e4xmi to your org.eclipse.e4.workbench.model extension point:
<extension
id="id1"
point="org.eclipse.e4.workbench.model">
<fragment
uri="fragment.e4xmi">
</fragment>
</extension>
Add your contribution plugin to the dependencies of your application product. When you start your application and look at the model with the live editor, you should see your Addon listed in the model.
Now we can implement the Addon. This is the code of my Addon class:
package wag.contribution.addons;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.inject.Inject;
import org.eclipse.e4.core.services.events.IEventBroker;
import org.eclipse.e4.ui.model.application.MApplication;
import org.eclipse.e4.ui.model.application.ui.MElementContainer;
import org.eclipse.e4.ui.model.application.ui.MUIElement;
import org.eclipse.e4.ui.model.application.ui.advanced.MPlaceholder;
import org.eclipse.e4.ui.workbench.modeling.EModelService;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventHandler;
public class LayoutSorter {
@Inject private IEventBroker broker;
private EventHandler handler;
// The part IDs we are interested in, sorted in the sequence they should be
// shown
private static List<String> PART_IDS = Arrays.asList(new String[] {
"wag.aView", "wag.bView", "org.eclipse.ui.editorss" });
// Listen to the e4 core service's event broker to find the magical time
// when the application is created and try to sort the layout.
@PostConstruct
void hookListeners(final MApplication application,
final EModelService service) {
if (handler == null) {
handler = new EventHandler() {
// Try to sort the layout. Unsubscribe from event broker if
// successful.
@Override
public void handleEvent(Event event) {
try {
sort(application, service);
// sort did finish: stop listening to the broker.
broker.unsubscribe(handler);
} catch (Exception e) {
// Something went wrong, the application model was not ready yet.
// Keep on listening.
}
}
};
// Subscribe "ServiceEvent.MODIFIED" to grab the application.STARTED
// event. Does anybody know how to do this in a better way?
broker.subscribe("org/osgi/framework/ServiceEvent/MODIFIED",
handler);
}
}
private void sort(MApplication application, EModelService service) {
// find all placeholders
List<MPlaceholder> placeholders = service.findElements(application,
null, MPlaceholder.class, null);
// only keep the ones we are interested in
for (int i = placeholders.size() - 1; i > -1; i--) {
if (!PART_IDS.contains(placeholders.get(i).getElementId())) {
placeholders.remove(i);
}
}
// find the parents of the placeholders
List<MElementContainer<MUIElement>> parents = new ArrayList<>(
placeholders.size());
for (MPlaceholder placeholder : placeholders) {
parents.add(placeholder.getParent());
}
// find the parent that is "deepest down" in the tree
MElementContainer<MUIElement> targetParent = null;
for (MElementContainer<MUIElement> parent : parents) {
for (MUIElement child : parent.getChildren()) {
if (parents.contains(child)) {
continue;
}
targetParent = parent;
}
}
// move all parts to the target parent
if (targetParent != null) {
for (int i = 0; i < placeholders.size(); i++) {
if (targetParent != placeholders.get(i).getParent()) {
service.move(placeholders.get(i), targetParent, i);
}
}
}
}
@PreDestroy
void unhookListeners() {
if (handler != null) {
// in case it wasn't unhooked earlier
broker.unsubscribe(handler);
}
}
}
(Please note that the code above is a bit of a hack because it is only really suited for this specific problem.)
After a restart, the application should now behave in the desired way. Take a look at the Application Model to see your changes.
One thing to be aware of is that local changes are saved in the runtime workspace in the file .metadata\.plugins\org.eclipse.e4.workbench\workbench.xmi if saving is switched on, so for recreating the unaltered model for testing this file has to be deleted.
I don't think it's possible to achieve exactly what you want (so the answers to your questions would be 1. no, 2. no). But there is a third alternative, which IMO behaves quite nicely.
When trying it in Eclipse: start with viewA on the left and the editor on the right. When you drag viewB to the right side of viewA, you get the (wrong) setup you describe. But if you instead drag it to the left part of the editor, you get a different configuration, where dragging the right sash behaves as you want. Dragging the left sash resizes viewA and the editor and MOVES viewB.
I would say that the code to achieve this would be:
IFolderLayout areaA = layout.createFolder("A", IPageLayout.LEFT, 0.33f, editorArea);
IFolderLayout areaB = layout.createFolder("B", IPageLayout.LEFT, 0.5f, editorArea);

How to implement a Lazy List using SmartGWT and SQL

I spent all of yesterday trying to integrate a SQL database with SmartGWT for a lazy list, but I just couldn't figure out how to implement it. (JavaDoc, and example of a lazy list)
What I want to do is create a list of a bunch of "sites" all over the world. The problem is there will probably be about a million of them, so I'm trying to load as few as possible at a time. Each site in my DB has an address, so I'm trying to organize them in a tree structure like (Country->State->City->Sites). Every time you go down a level there will be a query to the DB asking for everything at the next level (whether that be all the cities that have sites in the chosen state, or whatever).
Any help is greatly appreciated.
ALSO:
In the linked example the folders and leaves are the same type of element; is there a way to keep folders as folders and make the leaves a separate type of object?
After a while I finally got it. I ended up creating my own RPC that would serve up an array of strings that would represent the names of all the TreeNodes for the next level.
So the entry point would be:
private NodeServiceAsync nodesRpc; //The RPC that grabs more nodes
private Tree data; //The data structure to hold all of the nodes
private ColumnTree list; //The GUI element that is shown on in the browser
public void onModuleLoad() {
nodesRpc = (NodeServiceAsync) GWT.create(NodeService.class);
data = new Tree();
list = new ColumnTree();
list.setAutoFetchData(true);
list.setLoadDataOnDemand(true);
list.addNodeSelectedHandler(new NodeSelectedHandler () {
public void onNodeSelected(NodeSelectedEvent event) {
if(/*Node is folder and hasn't been opened before*/) {
//Get More Nodes
AsyncCallback<String[]> callback = new NodeGetter<String[]>();
nodesRpc.getData(event.getNode(), callback);
}
else if(/*Node is not a folder (at the end) */) {
//Do something else
}
}
});
list.setData(data); //Make the GUI Element Represent The Data Structure
RootPanel.get().add(list); //Add to screen
}
The servlet on the server side creates the query, executes it, then translates the ResultSet into an array of Strings and passes that back. All the callback has to do, back on the client side, is translate that array into an array of TreeNodes and attach them to the original node that was clicked on. Finally, after all of this, the GUI element is redrawn with the new nodes.
I was surprised that there was very little downtime between node loads (less than 1 second), even when a hundred or so nodes were being queried and then displayed.
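To make the server side a bit more concrete, here is a minimal sketch of what such a per-level query could look like with plain JDBC; the sites table, its column names and the getChildNames method are assumptions for illustration, not the actual schema:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class NodeQueries {

    // Returns the distinct names one level below the given parent, e.g. all
    // cities that have sites in the chosen state. The column names are supplied
    // by the servlet itself, never taken from user input.
    public static String[] getChildNames(Connection connection,
                                         String childColumn,
                                         String parentColumn,
                                         String parentValue) throws Exception {
        String sql = "SELECT DISTINCT " + childColumn
                + " FROM sites WHERE " + parentColumn + " = ? ORDER BY " + childColumn;
        List<String> names = new ArrayList<>();
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, parentValue);
            try (ResultSet resultSet = statement.executeQuery()) {
                while (resultSet.next()) {
                    names.add(resultSet.getString(1));
                }
            }
        }
        return names.toArray(new String[0]);
    }
}
A call like getChildNames(connection, "city", "state", stateName) would then back the City level of the tree.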
Note there is also a Pro version of the product which includes SQL connectivity like this out of the box (for Java server platforms). Showcase here:
http://www.smartclient.com/smartgwtee/showcase/
The SQL connector in the Pro product includes load on demand / data paging, search, and all 4 CRUD operations, as well as DataSource Wizards that can generate a working SQL DataSource for an existing database table if you just enter JDBC settings.
Note the Pro product doesn't require SQL, that's just one of the things it can connect to.
