I am struggling with a design/persistence issue. I have a tree in which all elements are subclasses of the same abstract superclass. The tree has a root object with a list of children, which themselves can have children, and so on. Changes to the attributes of a child need to be propagated to all parent levels. To make it easier to illustrate, let's say we deal with products, which have multiple releases, which in turn have multiple development phases. Each of these has a lifetime. So we would have something like
@Entity
@Access(AccessType.PROPERTY) // JPA reading and writing attributes through their getter and setter methods
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(
        discriminatorType = DiscriminatorType.STRING,
        name = "lifetime_level",
        length = 10
)
public abstract class LifeTime implements Serializable {

    // all classes are incomplete for sake of brevity
    protected Integer version; // Optimistic locking support
    protected LocalDate plannedBegin; // The calculated, potentially shifted, begin resulting from children assignments
    protected LocalDate plannedEnd;

    private LifeTime parent;
    protected List<LifeTime> children;

    @ManyToOne
    public LifeTime getParent() {
        return parent;
    }

    public void setParent(LifeTime parent) {
        this.parent = parent;
    }

    @OneToMany(mappedBy = "parent")
    public List<LifeTime> getChildren() {
        return children;
    }

    private void setChildren(List<LifeTime> children) {
        this.children = FXCollections.observableList(children);
    }

    public void addChild(LifeTime child) {
        children.add(child);
    }

    public LocalDate getPlannedBegin() {
        return plannedBegin;
    }

    public void setPlannedBegin(LocalDate aBegin) {
        this.plannedBegin = aBegin;
        adjustParentBegin(parent, this);
    }

    public LocalDate getPlannedEnd() {
        return plannedEnd;
    }

    public void setPlannedEnd(LocalDate aEnd) {
        this.plannedEnd = aEnd;
        adjustParentEnd(parent, this);
    }
    protected void adjustParentBegin(LifeTime parent, LifeTime child) {
        if (child.getPlannedBegin().isBefore(parent.getPlannedBegin())) {
            parent.setPlannedBegin(child.getPlannedBegin());
        }
    }

    protected void adjustParentEnd(LifeTime parent, LifeTime child) {
        if (child.getPlannedEnd().isAfter(parent.getPlannedEnd())) {
            parent.setPlannedEnd(child.getPlannedEnd());
        }
    }
    @Version
    public Integer getVersion() {
        return version;
    }

    private void setVersion(Integer version) {
        this.version = version;
    }
}
We would have the concrete classes Product, Release and Phase, all extending LifeTime. As root object we have a product. The product has children representing releases of the product, and each release has several development phases. We would also have baselines and iterations, which we ignore for the moment. Next, we have somewhere a service class that takes care of the business logic:
@Service
public class LifeTimeService {

    @Autowired ProductRepository productRepository;
    @Autowired ReleaseRepository releaseRepository;
    @Autowired PhaseRepository phaseRepository;

    public Product createProduct() {
        Product product = new Product();
        product.setPlannedBegin(LocalDate.now());
        product.setPlannedEnd(LocalDate.now());
        product = productRepository.save(product); // without this, saving the release throws TransientPropertyValueException: object references an unsaved transient instance
        createRelease(product);
        return productRepository.save(product);
    }
    public Release createRelease(Product product) {
        Release release = new Release();
        product.addChild(release);
        release.setParent(product);
        release.setPlannedBegin(product.getPlannedBegin()); // initial values
        release.setPlannedEnd(product.getPlannedEnd());
        release = releaseRepository.save(release); // without this, saving the phases throws TransientPropertyValueException: object references an unsaved transient instance
        addPhases(release); // add phases and adjust begin and end of the release
        return releaseRepository.save(release);
    }
    protected void addPhases(Release release) {
        LocalDate date = release.getPlannedBegin();

        Phase phase1 = new Phase();
        release.addChild(phase1);
        phase1.setParent(release);
        phase1.setPlannedBegin(date);
        date = date.plusMonths(3);
        phase1.setPlannedEnd(date);
        phaseRepository.save(phase1);

        Phase phase2 = new Phase();
        release.addChild(phase2);
        phase2.setParent(release);
        phase2.setPlannedBegin(date);
        date = date.plusMonths(3);
        phase2.setPlannedEnd(date);
        phaseRepository.save(phase2);
    }
}
Let's say we have a Controller class that makes use of the Service:
@Controller
public class MyController {

    @Autowired LifeTimeService service;

    protected Product product;

    public void myTest() {
        Product product = service.createProduct(); // this creates a product with an initial release and all phases
        Release release = service.createRelease(product); // now the product has a later plannedEnd than the corresponding database row
    }
}
Obviously, I want the createRelease method to create and return a release. The issue is that it alters all levels of the tree: it creates phases, but it also changes the begin and end date of the parent product. I would need to save the product after calling createRelease to persist this change, and this approach doesn't scale if the tree has more levels. If, on the other hand, I save the product within createRelease, the product in the myTest method has the wrong version. Or createRelease returns the saved parent (which is counterintuitive), and I have to write a method that returns the last created release. All of this is unsatisfying.
While the class example above follows Domain-Driven Design, whereby each object is in charge of securing its own integrity, I was also considering an Anemic Domain Model: moving the two adjustment methods to the service class and making them recursive. But I don't see how that changes or fixes the above issue.
In reality my tree spans at least 5 hierarchical levels. So whatever solution you propose, it should scale to multiple levels.
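For what it's worth, the recursive propagation itself already scales to any depth, because each setter only talks to its direct parent. Here is a minimal, persistence-free sketch of that mechanism (class and method names are illustrative, not taken from the question's entities):

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Setting a date walks up the ancestor chain one level at a time,
// so a change on a leaf reaches the root regardless of tree depth.
class TreeNode {
    private LocalDate plannedBegin;
    private LocalDate plannedEnd;
    private TreeNode parent;
    private final List<TreeNode> children = new ArrayList<>();

    TreeNode(LocalDate begin, LocalDate end) {
        this.plannedBegin = begin;
        this.plannedEnd = end;
    }

    void addChild(TreeNode child) {
        children.add(child);
        child.parent = this;
    }

    LocalDate getPlannedBegin() { return plannedBegin; }
    LocalDate getPlannedEnd()   { return plannedEnd; }

    void setPlannedBegin(LocalDate begin) {
        this.plannedBegin = begin;
        // Widen the parent if it no longer covers this node; the recursive
        // call repeats the same check one level up until the root is reached.
        if (parent != null && begin.isBefore(parent.plannedBegin)) {
            parent.setPlannedBegin(begin);
        }
    }

    void setPlannedEnd(LocalDate end) {
        this.plannedEnd = end;
        if (parent != null && end.isAfter(parent.plannedEnd)) {
            parent.setPlannedEnd(end);
        }
    }
}
```

Given that, the persistence side of the problem is mostly about making all touched entities share one persistence context: a single @Transactional service method that loads the product, mutates the tree, and lets JPA dirty checking flush the changes (instead of intermediate save() calls) would avoid the stale-version product in myTest. That is a design suggestion, not something the question's code already does.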
Related
I'm having problems with removing or changing existing relationships between two nodes using Spring Boot (v1.5.10) and Neo4j OGM (v2.1.6, with Spring Data Neo4j v4.2.10). I have found a few traces of similar problems reported by people using older Neo4j OGM versions (1.x), but I think those should be long gone with 2.1.6 and the latest Spring Boot v1 release. Therefore, I don't know whether this is a regression or whether I am not using the API correctly.
So, my node entities are defined as follows:
@NodeEntity
public class Task {

    @GraphId
    private Long id;

    private String key;

    @Relationship(type = "HAS_STATUS")
    private Status status;

    public Task() {
    }

    public Long getId() {
        return id;
    }

    public String getKey() {
        return key;
    }

    public void setKey(String key) {
        this.key = key;
    }

    public Status getStatus() {
        return status;
    }

    public void setStatus(Status status) {
        this.status = status;
    }
}
@NodeEntity
public class Status {

    @GraphId
    private Long id;

    private String key;

    @Relationship(type = "HAS_STATUS", direction = Relationship.INCOMING)
    private Set<Task> tasks;

    public Status() {
        tasks = new HashSet<>();
    }

    public Long getId() {
        return id;
    }

    public String getKey() {
        return key;
    }

    public void setKey(String key) {
        this.key = key;
    }

    public Set<Task> getTasks() {
        return tasks;
    }

    public void addTask(Task task) {
        tasks.add(task);
    }

    public boolean removeTask(Task task) {
        if (this.hasTask(task)) {
            return this.tasks.remove(task);
        }
        return false;
    }

    public boolean hasTask(Task task) {
        return this.tasks.contains(task);
    }
}
This is how it can be represented in Cypher-like style:
(t:Task)-[:HAS_STATUS]->(s:Status)
Here is the service method that tries to update a task's status:
public void updateTaskStatus(Task task, Status status) {
    Status prevStatus = task.getStatus();
    if (prevStatus != null) {
        prevStatus.removeTask(task);
        this.saveStatus(prevStatus);
    }
    task.setStatus(status);
    if (status != null) {
        status.addTask(task);
        this.saveStatus(status);
    }
    this.saveTask(task);
}
As a result of an update, I get two HAS_STATUS relationships to two different Status nodes (the old and the new one), or, if I try to remove an existing relationship, nothing happens (the old relationship remains).
The complete demo that illustrates the problem can be found on the GitHub here:
https://github.com/ADi3ek/neo4j-spring-boot-demo
Any clues or suggestions that can help me resolve that issue are more than welcome! :-)
If you annotate your commands with @Transactional (because this is where the entities get loaded), it will work.
The underlying problem is that loading an entity opens a new transaction with a new session (context), finds the relationships, and caches the information about them in that context. The transaction (and session) then gets closed because the operation is done.
The subsequent save/update does not find an open transaction and, as a consequence, opens a new one (with a new session/session context). When executing the save, it looks at the entity in its current state and no longer sees the old relationship.
Two answers:
it is a bug ;(
EDIT: After a few days of thinking about this, I retract the statement above. It is not a real bug but rather unexpected behaviour. There is nothing wrong in SDN. It uses two sessions (one for each operation) to do the work, and since nobody told it to do the work in one transaction, the loaded object is not 'managed' or 'attached' (as in JPA) to a session context.
you can work around this by using an explicit transaction for your unit of work
I will close the issue for SDN and try to migrate all the information to one of the two issues on GitHub, because it is an OGM problem.
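A minimal sketch of that workaround, assuming a Spring service wrapping the question's update logic (the service and repository names are illustrative, not from the original project):

```java
@Service
public class TaskStatusService {

    @Autowired
    private TaskRepository taskRepository;

    // One transaction = one OGM session context: the load and the save
    // share the same state, so the old HAS_STATUS relationship is seen
    // and removed instead of lingering next to the new one.
    @Transactional
    public void updateTaskStatus(Task task, Status status) {
        Status prevStatus = task.getStatus();
        if (prevStatus != null) {
            prevStatus.removeTask(task);
        }
        task.setStatus(status);
        if (status != null) {
            status.addTask(task);
        }
        taskRepository.save(task);
    }
}
```

This is a framework-wiring sketch, not runnable on its own; the point is only that the whole unit of work sits inside one @Transactional boundary.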
I'm having problems with a relationship entity:
@RelationshipEntity(type = RelTypes.Tag.TAG_ON_OBJECT_EVALUATION)
public class TagOnObjectEvaluation {

    @StartNode
    private Mashup taggableObject;

    @EndNode
    private Tag tag;

    // Other fields, getters and setters
}
In both of the entities involved (Mashup and Tag), I have this field (with opposite direction):
@RelatedToVia(type = RelTypes.Tag.TAG_ON_OBJECT_EVALUATION,
        direction = Direction.INCOMING /* Direction.OUTGOING */)
private Set<TagOnObjectEvaluation> tagOnObjectEvaluations =
        new HashSet<TagOnObjectEvaluation>();
Then, I have various service class to manage Tag, Mashup and TagOnObjectEvaluation. The class under test now is the latter.
Note: the naming is a bit confusing; it's a legacy from the previous coder, so you can read DAO as Service. Also, GenericNeo4jDAOImpl (again, read it as GenericServiceNeo4jImpl) simply defines standard methods for entity management (create(), find(), update(), delete(), fetch()).
@Service
public class TagOnObjectEvaluationDAONeo4jImpl extends
        GenericNeo4jDAOImpl<TagOnObjectEvaluation> implements
        TagOnObjectEvaluationDAO {

    @Autowired
    private TagOnObjectEvaluationRepository repository;

    public TagOnObjectEvaluationDAONeo4jImpl() {
        super(TagOnObjectEvaluation.class);
    }

    public TagOnObjectEvaluationDAONeo4jImpl(
            Class<? extends TagOnObjectEvaluation> entityClass) {
        super(entityClass);
    }

    @Override
    public TagOnObjectEvaluation create(TagOnObjectEvaluation t) {
        Transaction tx = template.getGraphDatabaseService().beginTx();
        TagOnObjectEvaluation savedT = null;
        try {
            // This is to enforce the uniqueness of the relationship. I know
            // it can fail in many ways, but this is not a problem ATM.
            savedT = template.getRelationshipBetween(
                    t.getTaggableObject(), t.getTag(),
                    TagOnObjectEvaluation.class,
                    RelTypes.Tag.TAG_ON_OBJECT_EVALUATION);
            if (savedT == null)
                savedT = super.create(t);
            tx.success();
        } catch (Exception e) {
            tx.failure();
            savedT = null;
        } finally {
            tx.finish();
        }
        return savedT;
    }
}
It seems pretty straightforward until now.
But when I'm trying to persist a RelationshipEntity instance, I have many problems.
@Test
public void testRelationshipEntityWasPersisted() {
    TagOnObjectEvaluation tagOnObjectEvaluation = new TagOnObjectEvaluation(taggedObject, tag);
    tagOnObjectEvaluationDao.create(tagOnObjectEvaluation);
    assertNotNull(tagOnObjectEvaluation.getId());
    LOGGER.info("TagOnObjectEvaluation id = " + tagOnObjectEvaluation.getId());

    tagDao.fetch(tag);
    assertEquals(1, tag.getTaggedObjectsEvaluations().size());
}
The last assertion fails: the size is 0, not 1. Also, although the entity seems to be stored correctly (it gets an id assigned), when I browse the db later on there is no trace of it at all.
I've also tried to add the relationship in a different way, using the sets of the involved nodes, e.g.
tag.getTaggedObjectsEvaluations().add(tagOnObjectEvaluation);
tagDao.update(tag);
but with no improvements at all.
You need to change the direction of the relationship in your entity Mashup (the entity corresponding to the @StartNode of your @RelationshipEntity TagOnObjectEvaluation):
@NodeEntity
class Mashup {
    // ...
    @RelatedToVia(type = RelTypes.Tag.TAG_ON_OBJECT_EVALUATION, direction = Direction.OUTGOING)
    private Set<TagOnObjectEvaluation> tagOnObjectEvaluations = new HashSet<TagOnObjectEvaluation>();
}
Just to point out: according to the specification of the @RelatedToVia spring-data-neo4j annotation, the default direction is OUTGOING, so you don't really need to specify the direction in this case. This should also be correct:
@RelatedToVia(type = RelTypes.Tag.TAG_ON_OBJECT_EVALUATION)
private Set<TagOnObjectEvaluation> tagOnObjectEvaluations = new HashSet<TagOnObjectEvaluation>();
Hope it helps.
Suppose I have an entity "Parent" which holds a list of "Child" objects.
In Java this looks like this:
public class ParentEntity implements Parent {

    protected int id;

    @Override
    public int getId() { return id; }

    @Override
    public void setId(int id) { this.id = id; }

    protected List<Child> children;

    @Override
    public List<Child> getChildren() { return children; }

    @Override
    public void setChildren(List<Child> children) { this.children = children; }

    @Override
    public void save() {
        // Do some Hibernate "save" magic here...
    }

    public static Parent getById(int id) {
        Session session = HibernateUtil.getSessionFactory().openSession();
        Parent entity = (Parent) session.get(ParentEntity.class, id);
        session.close();
        return entity;
    }
}
My business logic class shall only work with the interface class, like this:
public class BusinessLogic {

    public void doSomething() {
        Parent parent = ParentEntity.getById(1);
        for (Child c : parent.getChildren())
            System.out.println("I love my daddy.");
    }
}
Unfortunately, this doesn't work because the parent's children do not get loaded and the loop crashes with a NullPointerException.
1. Approach "Eager Loading"
There are two problems with this approach. First, even though I wrote lazy="false" in the XML, Hibernate seems to ignore it.
Secondly, eager loading is not desirable in my case, since we could potentially have hundreds of children.
2. Approach "Load/Initialize on 'GET'"
@Override
public List<Child> getChildren() {
    if (!Hibernate.isInitialized(children)) {
        Session session = HibernateUtil.getSessionFactory().openSession();
        Hibernate.initialize(children);
        session.close();
    }
    return children;
}
This doesn't work either: I get an exception saying that the collection is not linked to a session, because the session which was used to load the parent entity was closed previously.
What do you suggest is the 'best practice' solution here? I really don't want to mess with Hibernate sessions in my business logic.
Either use a query object or a custom query and fetch all children in the scenarios where you really need them (search for "left join fetch"); for a few thousand objects this might work.
However, if the number of records could reach millions, you will most likely run into memory issues; then you should think about a shared cache, or retrieving the data page by page. Just search for "n+1" and Hibernate and you will find plenty of discussions around this topic.
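For illustration, the fetch-join variant could look like this, reusing the question's HibernateUtil and ParentEntity (the method name is made up):

```java
// One SQL round-trip: the parent row plus its children, so the collection
// is initialized before the session closes. "distinct" avoids duplicate
// parent instances caused by the join.
public static Parent getByIdWithChildren(int id) {
    Session session = HibernateUtil.getSessionFactory().openSession();
    Parent entity = (Parent) session.createQuery(
            "select distinct p from ParentEntity p "
          + "left join fetch p.children "
          + "where p.id = :id")
        .setParameter("id", id)
        .uniqueResult();
    session.close();
    return entity;
}
```

This is a sketch against the question's setup, not tested code; the business logic can then iterate over getChildren() without touching the session.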
The simplest hack I can think of:
public static Parent getParentWithChildrenById(int id) {
    Session session = HibernateUtil.getSessionFactory().openSession();
    Parent entity = (Parent) session.get(ParentEntity.class, id);
    Hibernate.initialize(entity.getChildren()); // force-load the collection while the session is still open
    session.close();
    return entity;
}
Side note: having the data access logic in your domain layer is considered bad practice.
I've always allowed Spring and JPA to manage my Hibernate sessions, so most of the painful boilerplate code disappears at that point. But you still have to call entity.getChildren().size() (or something similar) before you exit the data layer call where the session is opened, to force Hibernate to do the retrieval while there's still a session to use.
We are using spring-data-neo4j release 2.2.2.Release and Neo4j 1.9
Saving and updating nodes (properties) works fine using a GraphRepository
Our most simple example looks like this:
public interface LastReadMediaRepository extends GraphRepository<Neo4jLastReadMedia> {}
We also set some relationships connected to a node, the node class looks like this
@NodeEntity
public class Neo4jLastReadMedia {

    @GraphId
    Long id;

    @JsonIgnore
    @Fetch @RelatedToVia(type = "read", direction = Direction.OUTGOING)
    Set<LastReadMediaToLicense> licenseReferences;

    public Neo4jLastReadMedia() {
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public void read(final Neo4jLicense license, final Long lastAccess, final float progress, final Long chapterId) {
        licenseReferences.add(new LastReadMediaToLicense(this, license, lastAccess, progress, chapterId));
    }

    public Set<LastReadMediaToLicense> getLicenseReferences() {
        return licenseReferences;
    }

    @Override
    public String toString() {
        return "Neo4jLastReadMedia [id=" + id + "]";
    }
}
Now, we save a node using the repository's save() method. The relationships are saved, too, at least for the first save.
Later, when we want to change properties on a relationship that already exists (e.g. lastAccess), i.e. update the relationship, we retrieve the node from the database, manipulate its relationship set (here Set<LastReadMediaToLicense> licenseReferences) and then try to save the node back with save().
Unfortunately, the relationship is not updated and all properties remain the same...
We know how to do that using annotated cypher queries in the repo, but there has to be an "abstracted" way?!
Thanks a lot!
Regards
EDIT: If I remove a relationship from the set and then perform a save() on the node, the relationship is deleted. Only updating does not work! Or is that intended?
Andy,
SDN only checks for modifications of the set, i.e. additions and removals; it doesn't check each of the relationships for a change, as that would be even more costly.
Usually that can be solved by saving the relationship via a repository or template instead of adding it to a set and then saving the node. That is also faster.
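A sketch of that approach, assuming LastReadMediaToLicense exposes a setter for lastAccess (the repository name and setter are illustrative, not confirmed by the question):

```java
// A repository for the relationship entity itself:
public interface LastReadMediaToLicenseRepository
        extends GraphRepository<LastReadMediaToLicense> {
}

// In the service: update the relationship entity directly and save it
// through its own repository, instead of mutating the node's set and
// re-saving the node.
LastReadMediaToLicense rel = media.getLicenseReferences().iterator().next();
rel.setLastAccess(newLastAccess); // assumed setter on the relationship entity
relationshipRepository.save(rel);
```

The design point is that SDN treats the relationship entity as the unit of change here, so saving it directly persists the modified properties.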
This is my first project with Hibernate/JPA and Play!. I've got a menu that, once working, will support changes (i.e. easily adding nodes to the tree). I'm struggling (in the >5 hours sense) just to get the basic modelling together.
The model:
@Entity
@Table(name = "Node")
public class Node extends Model {

    @Column(nullable = false)
    public String description;

    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    @JoinColumn(name = "parent")
    public List<Node> children = new LinkedList<Node>();

    @ManyToOne
    @JoinColumn(name = "parent", insertable = false, updatable = false)
    public Node parent;

    public Node() {}
}
The util class:
public class NodeUtil {

    public static void addChild(Node root, String description) {
        Node child = new Node();
        child.description = description;
        child.parent = root;
        root.children.add(child);
        root.save();
    }

    private static final String MENU_NAME = "MainMenu";

    public static Node getMenu() {
        return getRoot(MENU_NAME);
    }

    public static Node getRoot(String name) {
        Node root = Node.find("byDescription", name).first();
        if (root == null) {
            root = new Node();
            root.description = name;
            root.save();
        }
        return root;
    }
}
The test:
public class MenuTest extends UnitTest {

    private static final String TEST_MENU = "testMenu";

    @Test
    public void testMenu() {
        // test build/get
        Node root = NodeUtil.getRoot(TEST_MENU);
        assertNotNull(root);

        // delete all children - maybe from previous tests etc.
        for (Node o : root.children)
            o.delete();
        root.save();

        // test add
        root = NodeUtil.getRoot(TEST_MENU);
        NodeUtil.addChild(root, "child 1");
        NodeUtil.addChild(root, "child 2");
        NodeUtil.addChild(root, "child 3");
        assertEquals(3, root.children.size());
        assertEquals("child 3", root.children.get(2).description);
        assertEquals(0, root.children.get(1).children.size());

        Node node = root.children.get(1);
        NodeUtil.addChild(node, "subchild 1");
        NodeUtil.addChild(node, "subchild 2");
        NodeUtil.addChild(node, "subchild 3");
        NodeUtil.addChild(node, "subchild 4");
        NodeUtil.addChild(root.children.get(2), "sub subchild");
        assertEquals("sub subchild", root
                .children.get(1)
                .children.get(2)
                .children.get(0)
                .description);
        assertEquals(4, root.children.get(1).children.size());
        root.save();

        // test delete
        root = NodeUtil.getRoot(TEST_MENU); // regrab the root via Hibernate; assuming it isn't cached, this will get the changes that have been persisted to the db (maybe?)
        root.children.get(1).children.get(2).delete();
        assertEquals(3, root.children.get(1).children.size());
        //root.delete();
    }
}
Questions:
What am I doing wrong? (I.e. I just can't get this simple idea modelled so that it passes the unit test. Like I said, I'm new to Hibernate, and every change I make yields a new Hibernate error variant, which means nothing to me; e.g. the current setup throws "detached entity passed to persist: models.Node".)
Initially I had the entire util class as a bunch of static methods in the model class. Firstly, do static methods affect Hibernate's modelling? If so, in brief, under what circumstances can I have static methods (and members, come to think of it) that are "transient" to the object modelling?
Assuming I keep the util methods in a separate public util class, where is this class conventionally stored in the Play framework? At the moment it's in the models package, next to Node.java.
I'm not familiar with Play Framework, but I can make some point regarding working with Hibernate:
Maintaining consistent state of the objects in memory is your responsibility. Consider the following code:
for (Node o : root.children)
    o.delete();
root.save();
You instructed Hibernate to delete the children from the database, but the root object in memory still references them. Since the relationship is configured with cascading, Hibernate will try to save them again (I guess that's the reason for the "detached entity passed to persist" error). So, keep the in-memory state of the object consistent by clearing root.children.
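In the test above that would look like this (a sketch based on the question's code, not run against Play's JPA wrapper):

```java
// Delete the rows, then drop the in-memory references so the cascade
// does not try to persist the now-deleted children again on save().
for (Node o : root.children)
    o.delete();
root.children.clear();
root.save();
```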
Hibernate heavily relies on the concept of a Unit of Work. I'm not sure how Play manages it, but it looks like you should call clearJPASession() in unit tests to make sure that existing session state doesn't affect further operations:
root.save();
clearJPASession();
// test delete
root = NodeUtil.getRoot(TEST_MENU);
The way you defined the relationship is supported but not recommended (see 2.2.5.3.1.1. Bidirectional). Use the following approach instead:
@OneToMany(mappedBy = "parent", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
public List<Node> children = new LinkedList<Node>();

@ManyToOne
@JoinColumn(name = "parent")
public Node parent;
Static methods don't interfere with Hibernate.