Implementing Apache TinkerPop over my JavaBeans - java

I'm new to graph databases (although I have extensive experience with Semantic Web technologies) and I'd like to understand whether what I have in mind makes sense.
I have my own data model, made of JavaBean objects. The model is rather graph-like, with a Node interface (and a few subclasses), an Edge interface (and a few subclasses), and methods to query the model (get Node instances with attribute = 'x', get all edges for a node, etc.).
I'd like to wrap this model with one of the query languages out there (say Cypher or Gremlin), so that I have something more standardised and can avoid implementing my own query language and, most importantly, my own query engine.
One obvious way would be to use Neo4j or some TinkerPop implementation as a backend for my object model (or, similarly, to convert/sync my objects to a graph for one of those frameworks). However, because the model is already graph-like, has good search methods and efficient storage components (to/from simple XML files), I'm also thinking that maybe I could adapt a query language to my model. TinkerPop seems designed to support that.
Does this make sense? Is TinkerPop the best (or a good) way to go? Is/are there documentation/tutorials about that?

As a committer on SimpleGraph I had similar needs, which led me to start the
SimpleGraph open source project in the first place.
For conversion of POJOs to and from TinkerPop there is the ORM/OGM stack Ferma.
The idea of SimpleGraph is to "graphenize" other information sources, e.g. the tabular structures of Excel tables or SQL databases.
Since your own data structures are already in graph form, the mapping to and from TinkerPop is obviously much simpler. The SimpleGraph approach in this case would be a simple back-and-forth link between your node and edge structures and TinkerPop's, so that each TinkerPop vertex corresponds to one of your nodes and each TinkerPop edge corresponds to one of your edges. I have successfully used this approach, e.g. for a graphical representation of UML models, by mapping XML structural elements to TinkerPop elements and to graphical representation elements in a graph editor at the same time. So my answers would be:
Does this make sense? Yes
Is TinkerPop the best (or a good) way to go? Yes
Is/are there documentation/tutorials about that? I'd say neither Yes nor No on this one.
I have not seen a specific tutorial for your use case. If you experiment a bit, e.g. with the SimpleGraph modules, you might get a feeling for how things work.
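If you do decide to go the conversion/synchronisation route rather than implementing TinkerPop's structure API over your beans, a minimal sketch of the idea looks like the following. It assumes hypothetical MyNode/MyEdge bean interfaces standing in for your model and uses the in-memory TinkerGraph reference implementation; treat it as an illustration, not a drop-in solution.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

public class BeanGraphBridge {

    // Hypothetical bean interfaces standing in for your own Node/Edge model.
    interface MyNode { String getId(); String getName(); }
    interface MyEdge { MyNode getFrom(); MyNode getTo(); String getType(); }

    /** Copies the bean model into an in-memory TinkerGraph so it can be queried with Gremlin. */
    static TinkerGraph toTinkerGraph(Iterable<MyNode> nodes, Iterable<MyEdge> edges) {
        TinkerGraph graph = TinkerGraph.open();
        Map<String, Vertex> byId = new HashMap<>();
        for (MyNode n : nodes) {
            byId.put(n.getId(), graph.addVertex("beanId", n.getId(), "name", n.getName()));
        }
        for (MyEdge e : edges) {
            byId.get(e.getFrom().getId()).addEdge(e.getType(), byId.get(e.getTo().getId()));
        }
        return graph;
    }

    public static void main(String[] args) {
        // ... build or load your bean model here ...
        TinkerGraph graph = toTinkerGraph(Collections.<MyNode>emptyList(), Collections.<MyEdge>emptyList());
        GraphTraversalSource g = graph.traversal();
        // Gremlin: names of all vertices whose 'name' property equals "x"
        g.V().has("name", "x").values("name").forEachRemaining(System.out::println);
    }
}
```

Implementing the TinkerPop structure API directly over your beans avoids the copy, but it is considerably more work; the conversion approach is usually the quicker way to get Gremlin queries running.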

Related

What is the best design pattern for a node - link diagram in Java

The model should be separable from the graphical representation.
There are several types of nodes.
There are rules as to which nodes can connect to other nodes and how many.
Java 1.7
You should study the literature on the graph data structure.
https://en.wikipedia.org/wiki/Graph_(abstract_data_type)
Then there are well-known algorithms you can implement. Depending on what you want to do, either depth-first or breadth-first search may be more appropriate for you:
https://en.wikipedia.org/wiki/Depth-first_search
https://en.wikipedia.org/wiki/Breadth-first_search
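For illustration, a minimal breadth-first traversal over a plain adjacency-list graph might look like this (node ids and the sample graph are made up); depth-first search differs only in using a stack or recursion instead of the queue.

```java
import java.util.*;

// Breadth-first search over a simple adjacency-list graph (hypothetical String node ids).
public class Bfs {

    public static List<String> bfs(Map<String, List<String>> adjacency, String start) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(start);
        visited.add(start);
        while (!queue.isEmpty()) {
            String node = queue.poll();
            order.add(node);
            List<String> next = adjacency.get(node);
            if (next == null) {
                continue;
            }
            for (String n : next) {
                if (visited.add(n)) {      // false if already seen, so cycles terminate
                    queue.add(n);
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = new HashMap<>();
        g.put("a", Arrays.asList("b", "c"));
        g.put("b", Arrays.asList("c"));
        g.put("c", Arrays.asList("a"));   // cycle back to "a" is handled by the visited set
        System.out.println(bfs(g, "a"));  // [a, b, c]
    }
}
```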
If you want to separate your model from the view, you can use the MVC pattern. For the problem of the nodes, you need to study graph data structures.
To have multiple types of nodes you can take a look at the composite pattern, which works like the DOM in HTML (you have parents and children). You can adapt it to build a graph, but take care when you traverse it: there may be cycles (have a look at the graph exploration algorithms).
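A sketch of what that can look like in code (type names and the connection rule are invented for the example): the model knows nothing about the view, and the rule about which node types may connect is enforced in one place.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal node-link model kept separate from any view (MVC), with a simple
// connection rule per node type. NodeType and the rule itself are illustrative.
public class DiagramModel {

    enum NodeType { SOURCE, PROCESS, SINK }

    static class Node {
        final NodeType type;
        final List<Node> links = new ArrayList<>();
        Node(NodeType type) { this.type = type; }
    }

    /** Rule: SOURCE -> PROCESS, PROCESS -> PROCESS or SINK, SINK -> nothing. */
    static boolean canConnect(Node from, Node to) {
        switch (from.type) {
            case SOURCE:  return to.type == NodeType.PROCESS;
            case PROCESS: return to.type == NodeType.PROCESS || to.type == NodeType.SINK;
            default:      return false;
        }
    }

    static void connect(Node from, Node to) {
        if (!canConnect(from, to)) {
            throw new IllegalArgumentException(from.type + " may not connect to " + to.type);
        }
        from.links.add(to);
    }
}
```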

Design for defining graphs or flowing structures

I'm trying to create a system for representing and designing graphs in an easy way. That means it should be easy to create a graphical representation from the data structure, but it should also be easy to store the structure and do simple calculations on it. Simple calculations in this sense are questions like: which nodes are the next nodes from a given node in the graph?
Is there some nice way to define stuff like this in XML or database structures? That would make it easier to edit later.
Is there maybe already some good java library abstract enough to support my problems?
I'm trying to define a production process which can also have cycles (these cycles are not so important and could be modeled differently), but it feels kind of weird having to make these fundamental design decisions when the problem is so generic.
JUNG - http://jung.sourceforge.net/, may be a good solution for you. It's pretty extensible and has visualization, graph algorithm support etc
Neo4j is the "standard" graph database (see also). You can abstract away from a particular implementation (so that you can change the database without changing your code) using Blueprints.
Alternatively, if the database part is not so important, a library like JGraphT (I wasn't aware of JUNG, from Chris's answer, but it looks similar) gives you access to the usual algorithms for in-memory structures.
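As a concrete example of the in-memory route, here is a small JGraphT sketch (class and method names as in recent JGraphT 1.x releases; check the current docs) that answers the "which nodes come next after a given node" question:

```java
import org.jgrapht.Graph;
import org.jgrapht.Graphs;
import org.jgrapht.graph.DefaultDirectedGraph;
import org.jgrapht.graph.DefaultEdge;

// A directed production-process graph held in memory; node names are invented.
public class ProcessGraph {
    public static void main(String[] args) {
        Graph<String, DefaultEdge> g = new DefaultDirectedGraph<>(DefaultEdge.class);
        g.addVertex("prepare");
        g.addVertex("assemble");
        g.addVertex("test");
        g.addEdge("prepare", "assemble");
        g.addEdge("assemble", "test");
        g.addEdge("test", "assemble");   // cycles are allowed in a plain directed graph

        // Next nodes reachable in one step from "assemble"
        System.out.println(Graphs.successorListOf(g, "assemble"));  // [test]
    }
}
```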

Jena/ARQ: Difference between Model, Graph and DataSet

I'm starting to work with the Jena Engine and I think I got a grasp of what semantics are.
However I'm having a hard time understanding the different ways to represent a bunch of triples in Jena and ARQ:
The first thing you stumble upon when starting is Model, and the documentation says it's Jena's name for RDF graphs.
However, there is also Graph, which seemed to be the necessary tool when I want to query a union of models; it does not seem to share a common interface with Model, although one can get the Graph out of a Model.
Then there is DataSet in ARQ, which also seems to be a collection of triples of some sort.
Sure, after some looking around in the API, I found ways to somehow convert one into another. However, I suspect there is more to it than three different interfaces for the same thing.
So, question is: What are the key design differences between these three? When should I use which one ? Especially: When I want to hold individual bunches of triples but query them as one big bunch (union), which of these datastructures should I use (and why)?
Also, do I "loose" anything when "converting" from one into another (e.g. does model.getGraph() contain less information in some way than model)?
Jena is divided into an API, for application developers, and an SPI for systems developers, such as people making storage engines, reasoners etc.
DataSet, Model, Statement, Resource and Literal are API interfaces and provide many conveniences for application developers.
DataSetGraph, Graph, Triple, Node are SPI interfaces. They're pretty spartan and simple to implement (as you'd hope if you've got to implement the things).
The wide variety of API operations all resolve down to SPI calls. To give an example, the Model interface has four different contains methods. Internally each one results in a call to:
Graph#contains(Node, Node, Node)
such as
graph.contains(nodeS, nodeP, nodeO); // model.contains(s, p, o) or model.contains(statement)
graph.contains(nodeS, nodeP, Node.ANY); // model.contains(s, p)
Concerning your question about losing information: with Model and Graph you don't (as far as I recall). The more interesting case is Resource versus Node. Resources know which Model they belong to, so you can (in the API) write resource.addProperty(...), which eventually becomes a Graph#add. Node has no such convenience and is not associated with a particular Graph. Hence Resource#asNode is lossy.
Finally:
When I want to hold individual bunches of triples but query them as one big bunch (union), which of these datastructures should I use (and why)?
You're clearly a normal user, so you want the API. You want to store triples, so use Model. Now you want to query the models as one union: You could:
Model#union() everything, which will copy all the triples into a new model.
ModelFactory.createUnion() everything, which will create a dynamic union (i.e. no copying).
Store your models as named models in a TDB or SDB dataset store, and use the unionDefaultGraph option.
The last of these works best for large numbers of models and large models, but is a little more involved to set up.
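For the first two options, a short sketch of a dynamic union queried with SPARQL (resource URIs are made up; package names are from current Apache Jena, older releases used com.hp.hpl.jena.*):

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.RDFS;

public class UnionExample {
    public static void main(String[] args) {
        Model m1 = ModelFactory.createDefaultModel();
        Model m2 = ModelFactory.createDefaultModel();
        m1.createResource("http://example.org/a").addProperty(RDFS.label, "a");
        m2.createResource("http://example.org/b").addProperty(RDFS.label, "b");

        // Dynamic union: a view over both models, no triples are copied.
        Model union = ModelFactory.createUnion(m1, m2);

        String q = "SELECT ?s ?label WHERE { ?s <" + RDFS.label.getURI() + "> ?label }";
        try (QueryExecution qexec = QueryExecutionFactory.create(q, union)) {
            ResultSet rs = qexec.execSelect();
            while (rs.hasNext()) {
                System.out.println(rs.next());   // one row per label, drawn from either model
            }
        }
    }
}
```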
Short answer: Model is just a stateless wrapper with lots of convenience methods around a Graph. ModelFactory.createModelForGraph(Graph) wraps a graph in a model. Model.getGraph() gets the wrapped graph.
Most application programmers would use Model. Personally I prefer to use Graph because it's simpler. I have trouble remembering all the cruft on the Model class.
Dataset is a collection of several Models: one “default model” and zero or more “named models”. This corresponds to the notion of an “RDF dataset” in SPARQL. (Technically speaking, SPARQL is not a query language for “RDF graphs” but for “RDF datasets” which can be collections of named RDF graphs plus a default graph.)
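The wrapping relationship is easy to see in code; a minimal sketch (current Apache Jena package names, example URIs invented):

```java
import org.apache.jena.graph.Graph;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

// Model as a thin wrapper over Graph: adding through the Model API lands in the
// underlying Graph, and wrapping the Graph again gives an equivalent Model.
public class WrapExample {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.createResource("http://example.org/x")
             .addProperty(model.createProperty("http://example.org/p"), "value");

        Graph graph = model.getGraph();                          // unwrap: the SPI view
        System.out.println(graph.size());                        // 1 -- same triple, no copy

        Model again = ModelFactory.createModelForGraph(graph);   // re-wrap: same data, new wrapper
        System.out.println(again.size());                        // 1
    }
}
```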

Generate Java code from a diagram model

In my application, I model a decision diagram (nodes + connections). I have the model classes ready (two basic classes, Node and Connection, plus subclasses for special cases). This diagram gets very big, and keeping track of all the connections and nodes through code alone is not easy (also taking future maintenance into account). I was wondering if there's a tool (Eclipse plugin or other) that I could feed with my model classes (i.e. types of nodes, types of connections), use to "draw" the diagram graphically (adding nodes and connections), and that would then generate the code for the diagram.
Model classes:
Node: contains a List<Connection> of all connections FROM this node
Connection: Node from, Node to
EDIT:
I want to generate a method that initializes all the needed nodes and connections (Node and Connection objects) and returns the head/start node. This in-memory structure is then traversed by the application when it makes decisions.
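Whatever tool you use, the generated artefact would essentially be a method like the following (class shapes follow your description; the node names are invented):

```java
import java.util.ArrayList;
import java.util.List;

// Shape of the code such a tool would need to emit: one method that wires up the
// Node/Connection objects and returns the head/start node.
public class DecisionDiagram {

    static class Node {
        final String name;
        final List<Connection> connections = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    static class Connection {
        final Node from;
        final Node to;
        Connection(Node from, Node to) {
            this.from = from;
            this.to = to;
            from.connections.add(this);   // register the outgoing connection on its source node
        }
    }

    /** Generated-style initializer: builds the whole diagram and returns the head node. */
    static Node buildDiagram() {
        Node start = new Node("start");
        Node check = new Node("check input");
        Node accept = new Node("accept");
        Node reject = new Node("reject");
        new Connection(start, check);
        new Connection(check, accept);
        new Connection(check, reject);
        return start;
    }
}
```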
Sounds a bit like you want something like JGraph? http://www.jgraph.com/jgraph.html
Did you try AndroMDA?
AndroMDA (pronounced: andromeda) is an open source code generation framework that follows the Model Driven Architecture (MDA) paradigm. It takes model(s) from CASE-tool(s) and generates fully deployable applications and other components.
If you can create your graph using some UML tools(not sure how easy that would be) AndroMDA can generate the java code for you.
It supports many UML tools including some free tools.
Take a look at Velocity. It is widely used for code generation.
You might like Graphviz. It is very easy to build a directed graph diagram with it. There are several wrapper libraries to help integrate it if you want to do that. Or, if you just want to feed it the graph and generate a picture, this is very straightforward. Check out the examples here.
FWIW: I have used this extensively for class hierarchies, interaction flow descriptions, mind maps.. :)
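If you go the "just generate a picture" route, the DOT text can be written by hand from your model in a few lines (file name and graph content invented; Files.writeString needs Java 11+) and then rendered with the dot command:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Emits a Graphviz DOT file; render it afterwards with e.g. `dot -Tpng diagram.dot -o diagram.png`.
public class DotExport {
    public static void main(String[] args) throws IOException {
        String dot = "digraph decisions {\n"
                   + "  start -> check;\n"
                   + "  check -> accept [label=\"yes\"];\n"
                   + "  check -> reject [label=\"no\"];\n"
                   + "}\n";
        Files.writeString(Path.of("diagram.dot"), dot);
    }
}
```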

Data Model Evolution

When writing code I keep seeing requirements to change data models (e.g. adding/changing/removing data members of a class). When these data models belong to an interface, it seems difficult to change them without breaking existing client code. So I am wondering if there is any best practice for designing interfaces/data models in a way that minimizes the impact during evolution.
The closest thing I can find on Google is data contract versioning, but that seems to be a .NET-specific topic. I am wondering whether the same practice applies to the Java world, or whether there is a different or more generic way to deal with data model evolution.
Thanks
There are some tools which can help; have a look at LiquiBase.
This article gives a good overview on developerWorks.
There are no easy answers to this in either the Java or data modeling domains.
Some changes are upwards compatible; e.g. addition of new methods, optional fields, subclasses and so on.
Some changes are not compatible, but can be handled using a simple transformation; e.g. the addition of a mandatory field could be supported by a transformation that adds an extra constructor argument.
Some changes unavoidably require major programmer intervention.
Another point to note is that the problem gets a lot harder when the data corresponding to the data models is persistent, and cannot be thrown away when the data model changes. This is referred to as the "schema evolution" problem, and I believe that it has been proven that there is no general solution.
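To make the "upwards compatible" case concrete, one common Java idiom is to add the new member as optional with a default and keep the old constructor as a delegating overload, so existing client code keeps compiling (class and field names are illustrative):

```java
import java.util.Optional;

// Evolving a data class without breaking existing clients: the new optional field
// gets a default, and the old constructor is kept as a delegating overload.
public class Customer {
    private final String name;
    private final String email;   // added in v2; optional

    // v1 constructor kept for existing callers
    public Customer(String name) {
        this(name, null);          // default for the new field
    }

    // v2 constructor for new callers
    public Customer(String name, String email) {
        this.name = name;
        this.email = email;
    }

    public String getName() { return name; }

    /** Optional-style accessor so clients must handle the "not set" case explicitly. */
    public Optional<String> getEmail() { return Optional.ofNullable(email); }
}
```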
