Prior to using SDN 4, I used custom REST client code to implement my own DAO layer between the client and the Neo4j database. I was able to add a number of labels to the nodes I created. This also appears to have been possible in SDN 3, from what I can deduce from the docs and other questions, using the @Labels annotation.
However, @Labels does not appear to be supported in SDN 4, and the SDN 4 documentation implies that only the class name of an entity class (and any superclasses) will be added to a node entity on creation.
Is there a way to add additional labels to a node? I need the values of such labels to be supplied by the user, not hard coded in an annotation.
An unmanaged extension would be the way to do it currently. When a node is created, the extension can assign the additional labels as required. You could either write an unmanaged extension directly or build a GraphAware Runtime Module.
Up to Spring Boot 2.3.4, I've been using the @QueryResult annotation to map some custom Cypher query responses to POJOs. I'm now testing the first Spring Boot 2.4 RC and trying to follow the instructions on how to drop OGM, since its support has been removed. I successfully replaced the other annotations with the ones provided here:
https://neo4j.github.io/sdn-rx/current/#migrating
but I'm now left with my @QueryResult annotations, for which nothing is specified. When I delete them I get mapping errors:
org.springframework.data.mapping.MappingException: Could not find mappable nodes or relationships inside Record
I've looked up some of the mapping explanations, but here's the thing: my custom POJOs don't represent any entity from the database, nor do they represent parts of an entity. Rather, they are relevant bits from different nodes.
Let me give an example:
I want to get all b nodes that are targets of the MY_REL relationship from a:
(a:Node {label:"my label"})-[:MY_REL]->(b:Node)
For my purposes, I don't need to get the nodes in the response, so my POJO only has 2 attributes:
a "source" String which is the beginning node's label
a "targets" Set of String which is the list of end nodes' labels
and I return this:
RETURN a.label AS source, COLLECT(b.label) AS targets
My POJO was simply annotated with @QueryResult in order to get the mapping done.
Does anyone know how to reproduce this behaviour with the SB 2.4 release candidate? As I said, removing the now faulty annotation leaves me with a mapping error, but I don't know what I should replace it with.
Spring Data Neo4j 6 now supports projections (formerly known as @QueryResult) in line with the other Spring Data modules.
Having said this, the simplest thing you could do, assuming that this @Query is written in a Neo4jRepository<Node,...>, would be to also return the a.
I know that this sounds ridiculous at first, but by choosing the repository abstraction, you say that everything that should get processed during the mapping phase is a Node, and you want to project its properties (or a subset) into the POJO (DTO projection). SDN cannot ensure that you are really working with the right type when it starts the mapping, so it throws the exception you are facing. Neo4j-OGM was more relaxed behind the scenes when mapping @QueryResults, but unfortunately it was also wrong in this regard.
If your use case is as simple as you have described, I would strongly suggest using the Neo4jClient (docs), which gives you direct access to the mapping.
It has a fluent API for querying and manual mapping, and it participates in the ongoing Spring transactions your repositories are running within.
There is a lot in there when it comes to projections, so I would suggest also reading that section of the documentation.
The actual use case I'm working on has many classes that should be persisted (basically different sensor types). Currently I have to create the table by hand for every sensor type. Isn't there a mechanism in the driver that could auto-create the respective tables if they don't exist (as seen in e.g. Hibernate)?
This would allow me to deploy the app on other systems without needing to recreate the tables. Furthermore, this is quite handy for quick prototyping ;)
I created a partial solution to the problem - a table/UDT create-query generation facility. It can be found here:
https://gist.github.com/eintopf/3ae360110846cb80a227
Unfortunately, the type mapping is NOT complete at the moment, since the respective type mapper class in the DataStax object mapper package is private.
The program just builds all the CREATE queries; one can use them as desired (copy-paste into cqlsh, or run them directly on the Cassandra session via Java).
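To illustrate the idea, here is a minimal, self-contained sketch of such a generator. The type mapping and the first-field-is-partition-key convention are my own assumptions for the example, not the gist's actual behaviour:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: derive a CQL CREATE TABLE statement from a POJO's fields via
// reflection. The Java-to-CQL type map below is an assumption and only
// covers a handful of common types.
class CqlTableGenerator {
    private static final Map<Class<?>, String> TYPES = new LinkedHashMap<>();
    static {
        TYPES.put(String.class, "text");
        TYPES.put(int.class, "int");
        TYPES.put(long.class, "bigint");
        TYPES.put(double.class, "double");
        TYPES.put(java.util.UUID.class, "uuid");
        TYPES.put(java.util.Date.class, "timestamp");
    }

    // Assumption: the first declared field is the partition key. Note that
    // getDeclaredFields() order is not guaranteed by the JLS, although
    // HotSpot returns declaration order in practice.
    static String createTableFor(Class<?> pojo) {
        StringBuilder cql = new StringBuilder("CREATE TABLE IF NOT EXISTS ")
                .append(pojo.getSimpleName().toLowerCase()).append(" (");
        Field[] fields = pojo.getDeclaredFields();
        for (Field f : fields) {
            String cqlType = TYPES.get(f.getType());
            if (cqlType == null) {
                throw new IllegalArgumentException("No CQL mapping for " + f.getType());
            }
            cql.append(f.getName()).append(' ').append(cqlType).append(", ");
        }
        cql.append("PRIMARY KEY (").append(fields[0].getName()).append("))");
        return cql.toString();
    }
}

class TemperatureSensor {   // hypothetical sensor entity
    java.util.UUID id;
    String location;
    double value;
}
```

The generated string can then be executed on the session or pasted into cqlsh.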
Not at the moment, but this is a planned feature (JAVA-569).
My application has about 50 entities that are displayed in grid format in the UI. All 50 entities have CRUD operations. Most of the operations have the standard flow,
i.e. for get: read entities from the repository, convert to DTOs, and return a list of DTOs;
for create/update/delete: take DTOs, convert them to entities, use the repository to create/update/delete in the DB, and return the updated DTOs.
Mind you, for SOME entities there are also some entity-specific operations that have to be done.
Currently, we have a get/create/update/delete method for all our entities like
getProducts
createProducts
updateProducts
getCustomers
createCustomers
updateCustomers
In each of these methods, we use the Product/Customer repository to perform the CRUD operation AFTER conversion from entity to DTO and vice versa.
I feel there is a lot of code repetition, and there must be a way by which we can remove so many of these methods.
Can I use some pattern (e.g. the Command pattern) to do away with the code repetition?
Have a look at the Spring Data JPA project. It does away with boilerplate code for DAOs.
I believe it basically uses AOP to interpret calls like
findByNameAndPassword(String name, String passwd)
to do a query based upon the parameters passed in, selecting the fields named in the method name (you only write an interface).
Being a Spring project, it has very minimal requirements for Spring libraries.
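To illustrate the naming convention (this is not Spring's actual parser, just a sketch of how such a method name can be decomposed mechanically):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a derived query method name such as "findByNameAndPassword"
// into the bean property names it queries. Spring Data's real parser is far
// more sophisticated (nested properties, Or, ignore-case, etc.).
class QueryMethodNameParser {
    static List<String> properties(String methodName) {
        int by = methodName.indexOf("By");
        if (by < 0) {
            throw new IllegalArgumentException("Not a derived query method: " + methodName);
        }
        String criteria = methodName.substring(by + 2);   // e.g. "NameAndPassword"
        List<String> props = new ArrayList<>();
        for (String part : criteria.split("And")) {
            // lower-case the first letter to get the bean property name
            props.add(Character.toLowerCase(part.charAt(0)) + part.substring(1));
        }
        return props;
    }
}
```

A framework can then turn each property into a WHERE clause bound to the corresponding method parameter, which is exactly why no implementation class is needed.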
Basically, you have 2 ways to do this.
First way: Code generation
Write a class that can generate the code given a database schema.
Note that this way you will create basic classes for each entity.
If you have custom code (code specific to certain entities) you can put that in subclasses so that it doesn't get overwritten when you regenerate the basic classes.
Object instantiation should be via factory methods so that the correct subclass is used.
Make sure you add comments in the generated code that clearly states that the code is generated automatically (so that people don't start editing them directly).
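A minimal sketch of that factory idea, with all class names hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// The "generated" base class: boilerplate CRUD that the generator emits.
class BaseDao {
    String save(String dto) { return "generated save of " + dto; }
}

// Hand-written subclass holding entity-specific logic; it survives
// regeneration because the generator never touches it.
class CustomerDao extends BaseDao {
    @Override
    String save(String dto) { return "customer-specific save of " + dto; }
}

// Factory that hands out the correct subclass per entity.
class DaoFactory {
    private static final Map<String, BaseDao> DAOS = new HashMap<>();
    static {
        DAOS.put("Product", new BaseDao());       // generated class is enough
        DAOS.put("Customer", new CustomerDao());  // entity-specific subclass
    }

    static BaseDao daoFor(String entity) {
        BaseDao dao = DAOS.get(entity);
        if (dao == null) {
            throw new IllegalArgumentException("Unknown entity: " + entity);
        }
        return dao;
    }
}
```

Callers only ever go through `DaoFactory`, so swapping a generated class for a custom subclass is invisible to them.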
Second way: Reflection
This solution, while being more elegant, is also more complex.
Instead of generating one basic class for each entity, you have one basic class that can handle any entity. The class would use reflection to access the DTOs.
If you have custom code (code specific to certain entities) you can put that in other classes. These other classes would be injected into the generic class.
Using reflection would require a strict naming policy for your DTOs.
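A tiny sketch of the reflection approach (names are illustrative; a real version would also handle writes, type conversion, nested objects, and so on):

```java
import java.lang.reflect.Field;

// One generic handler that can read any DTO's fields by name, relying on a
// naming convention instead of per-entity code.
class GenericDtoHandler {
    static Object read(Object dto, String fieldName) {
        try {
            Field f = dto.getClass().getDeclaredField(fieldName);
            f.setAccessible(true);   // tolerate private fields
            return f.get(dto);
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException(
                    "DTO " + dto.getClass().getSimpleName() + " has no field " + fieldName, e);
        }
    }
}

class ProductDto {   // follows the convention: one plain field per column
    String name = "widget";
}
```

The same handler works for every DTO, which is where the elegance (and the fragility, if the naming policy slips) of this approach comes from.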
Conclusion
I was on a project that used the first method, a migration project generating DTO classes for the service interface between the new application server (running Java) and the fat clients, and it worked quite well. We had more than 100 generated DTO classes. I am aware that what you are attempting is slightly different. Editing database records is a generic problem (all projects need it), but there aren't (m)any frameworks for it.
I have been thinking about creating a generic tool or framework for it but I have never gotten around to it.
I am new to Java and am working on a public-transit Java app as a first small project.
I am loading transit data from a server through an XML API (using the DOM XML API). So when you call a constructor for, say, BusStop(int id), the constructor loads the info about that stop from the server based on the id provided. I am wondering about a couple of things: how can I make sure I don't instantiate two BusStop objects with the same id (I just want one object for each BusStop)?
Also, does anyone have recommendations on how I should load the objects, so I don't need to load the whole database every time I run the app, just the BusStop and the relevant Arrivals and BusTrips objects for that stop? I have done C++ and MVC PHP programming previously, but haven't had experience loading large numbers of objects with circular object references etc.
Thanks!
I wouldn't start the download/deserialization process in a constructor. I would write a manager class per entity type with a method to fetch a Java object for a given entity based on its ID. Use a HashMap with the key type being your entity ID and the value type being the Java class for that entity. The manager would be a singleton using your preferred pattern (I would probably use static members for simplicity).
The first thing the fetch method should do is check the map to see if it contains an entry for the given ID. If it has already fetched and built this object, return it. If it has not, fetch the entity from the remote service, deserialize the object appropriately, put it into the HashMap under the given ID, and return it.
Regarding references to other objects, I suggest you represent those as IDs in your Java objects rather than storing them as Java object references and deserializing them at the same time as the referencing object. The application can lazily instantiate those objects on demand through the relevant manager. This reduces problems with circular references.
If the amount of data is likely to exceed available RAM on your JVM you'd need to consider periodically removing older objects from the map to recover memory (confident they would be reloaded when required).
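A minimal sketch of such a manager, with the remote fetch stubbed out as a function so the identity-map behaviour is visible (all names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class BusStop {
    final int id;
    BusStop(int id) { this.id = id; }
}

// Per-entity manager: an identity map keyed by ID. The remote XML call is
// injected as a Function so it can be stubbed out here.
class BusStopManager {
    private final Map<Integer, BusStop> cache = new HashMap<>();
    private final Function<Integer, BusStop> fetcher;

    BusStopManager(Function<Integer, BusStop> fetcher) { this.fetcher = fetcher; }

    // Returns the single BusStop instance for this ID, fetching it at most once.
    BusStop fetch(int id) {
        return cache.computeIfAbsent(id, fetcher);
    }
}
```

Because every lookup goes through `fetch`, two requests for the same ID always yield the same object, which is exactly the one-object-per-BusStop guarantee asked for.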
For this application I would use the following Java EE technologies: JAX-RS, JPA, and JAXB. You will find these technologies included in almost every Java application server (e.g. GlassFish).
JPA - Java Persistence API
Provides a simple means of converting your objects to/from the database. Through annotations you can mark a relationship as lazy to prevent the entire database from being read. Also, through the use of caching, database access and object creation are reduced.
JAXB - Java Architecture for XML Binding
Provides a simple means of converting your objects to/from XML. An implementation is included in Java SE 6.
JAX-RS - Java API for RESTful Services
Provides a simple API (over HTTP) for interacting with XML.
Example
You can check out an example I posted to my blog:
Part 1 - The Database
Part 2 - Mapping the Database to JPA Entities
Part 3 - Mapping JPA entities to XML (using JAXB)
Part 4 - The RESTful Service
Part 5 - The Client
For the classes you want to load only once per given id, use some kind of Factory design pattern. Internally you may want to store the id-to-instance mapping in a Map. Before actually fetching the data from the server, first do a lookup in this map to see if you already have an instance for this id. If not, go ahead with the fetch and then update the map.
I've been using JPA on a small application I've been working on. I now have a need to create a data structure that basically extends or encapsulates a graph data structure object. The graph will need to be persisted to the database.
For persistable objects I write myself, it is very easy to extend them and have the extending classes also persist easily. However, I now find myself wanting to use a library of graph-related objects (nodes, edges, simple graphs, directed graphs, etc.): the JGraphT library. However, the base classes are not defined as persistable JPA objects, so I'm not sure how to get those classes to save into the database.
I have a couple ideas and I'd like some feedback.
Option 1)
Use the decorator design pattern as I go along to add persistence to an extended version of the base class.
Challenges:
-- How do I persist the private fields of a class that are needed for it to be in the correct state? Do I just extend the class, add an ID field, and mark it as persistable? How will JPA get the necessary fields from the parent class? (Something like Ruby's runtime class modification would be awesome here.)
-- There is a class hierarchy (Abstract Graph, Directed Graph, Directed Weighted Graph, etc.). If I extend to get persistence, the extending classes still won't have a common parent class. How do I resolve this? (Again, something like Ruby's runtime class modification would be awesome here.)
Option 2) Copy-paste the entire code base. Modify the source code of each file to make it JPA-compatible.
-- obviously this is a lot of work
I'm sure there are other options. What have you got for me, SO?
Do the base classes follow the JavaBeans naming conventions? If so, you should be able to map them using the XML syntax of JPA.
This is documented in Chapter 10 of the specification:
The XML descriptor is intended to serve as both an alternative to and an overriding mechanism for Java language metadata annotations.
This XML file is usually called orm.xml. The schema is available online.
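For illustration, a minimal orm.xml fragment mapping a third-party class without touching its source might look like this (the class, table, and column names are hypothetical, not actual JGraphT types):

```xml
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                 version="2.0">
  <!-- Hypothetical third-party class; FIELD access avoids needing setters -->
  <entity class="com.example.graph.Node" access="FIELD">
    <table name="GRAPH_NODE"/>
    <attributes>
      <id name="id">
        <generated-value strategy="AUTO"/>
      </id>
      <basic name="label"/>
    </attributes>
  </entity>
</entity-mappings>
```

Placed in META-INF/orm.xml, this maps the class exactly as annotations would, without modifying the library's source.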
Your options with JPA annotations seem pretty limited if you're working with a pre-existing library. One alternative would be to use something like Hibernate XML mapping files instead of JPA annotations. You can declare your mappings outside of the classes themselves. Private fields aren't an issue; Hibernate will ignore access modifiers via reflection. However, even this may end up being more trouble than it's worth, depending on the internal logic of the code (Hibernate's use of special collections and proxies, for instance, will get you in hot water if the classes directly access some of their fields instead of using getter methods internally).
On the other hand, I don't see why you'd consider option 2 'a lot of work'. Creating an ORM mapping isn't really a no-brainer task no matter how you go about it, and personally I'd consider option 2 probably the least-effort approach. You'd probably want to maintain it as a patch file so you could keep up with updates to the library, rather than just forking.