SDN save nodes and update properties on attached relationships - java

We are using Spring Data Neo4j release 2.2.2.RELEASE and Neo4j 1.9.
Saving and updating nodes (properties) works fine using a GraphRepository.
Our simplest example looks like this:
public interface LastReadMediaRepository extends GraphRepository<Neo4jLastReadMedia> {}
We also set some relationships connected to a node; the node class looks like this:
@NodeEntity
public class Neo4jLastReadMedia {

    @GraphId
    Long id;

    @JsonIgnore
    @Fetch
    @RelatedToVia(type = "read", direction = Direction.OUTGOING)
    Set<LastReadMediaToLicense> licenseReferences;

    public Neo4jLastReadMedia() {
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public void read(final Neo4jLicense license, final Long lastAccess, final float progress, final Long chapterId) {
        licenseReferences.add(new LastReadMediaToLicense(this, license, lastAccess, progress, chapterId));
    }

    public Set<LastReadMediaToLicense> getLicenseReferences() {
        return licenseReferences;
    }

    @Override
    public String toString() {
        return "Neo4jLastReadMedia [id=" + id + "]";
    }
}
Now, we save a node using the repository's save() method. The relationships are saved, too, at least for the first save.
Later, when we want to change properties on a relationship that already exists (i.e. update it, e.g. lastAccess), we retrieve the node from the database, manipulate its relationship Set (here Set<LastReadMediaToLicense> licenseReferences) and then try to save the node back with save().
Unfortunately, the relationship is not updated and all its properties remain the same...
We know how to do that using annotated Cypher queries in the repository, but surely there has to be an "abstracted" way?!
Thanks a lot!
Regards
EDIT: If I remove a relationship from the Set and then perform a save() on the node, the relationship is deleted. Only updating does not work! Or is that the intention?

Andy,
SDN only checks the set for modifications, i.e. additions and removals; it doesn't check each relationship for changes, as that would be even more costly.
Usually this can be solved by saving the relationship itself via a repository or template, instead of adding it to the set and then saving the node. That is also faster. For example:
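A minimal sketch of that approach, assuming a repository for the relationship entity and a setLastAccess() setter on LastReadMediaToLicense (both are assumptions, not shown in the question):

// Hypothetical repository for the relationship entity itself
public interface LastReadMediaToLicenseRepository extends GraphRepository<LastReadMediaToLicense> {}

// Load the node, change the relationship's properties, and save the relationship directly:
Neo4jLastReadMedia media = mediaRepository.findOne(mediaId);
for (LastReadMediaToLicense rel : media.getLicenseReferences()) {
    rel.setLastAccess(System.currentTimeMillis()); // hypothetical setter
    relationshipRepository.save(rel);              // persists the changed relationship properties
}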

Related

Axon: Create and Save another Aggregate in Saga after creation of an Aggregate

Update: The issue seems to be the id that I'm using twice; in other words, the id from the Product entity that I want to reuse for the ProductInventory entity. As soon as I generate a new id for the ProductInventory entity, it seems to work fine. But I want both to have the same id, since they represent the same product.
I have 2 Services:
ProductManagementService (saves a Product entity with product details)
1.) For saving the Product entity, I implemented an EventHandler that listens to ProductCreatedEvent and saves the product to a MySQL database.
ProductInventoryService (saves a ProductInventory entity with the stock quantities of a product, for a certain productId defined in ProductManagementService)
2.) For saving the ProductInventory entity, I also implemented an EventHandler that listens to ProductInventoryCreatedEvent and saves the product inventory to a MySQL database.
What I want to do:
When a new Product is created in ProductManagementService, I want to create a ProductInventory entity in ProductInventoryService directly afterwards and save it to my MySQL table. The new ProductInventory entity shall have the same id as the Product entity.
To accomplish that, I created a Saga which listens to a ProductCreatedEvent and sends a new CreateProductInventoryCommand. As soon as the CreateProductInventoryCommand triggers a ProductInventoryCreatedEvent, the EventHandler described in 2.) should catch it. Except it doesn't.
The only thing that gets saved is the Product entity, so in summary:
1.) works, 2.) doesn't. A ProductInventory Aggregate does get created, but it doesn't get saved since the saving process that is connected to an EventHandler isn't triggered.
I also get an exception, though the application doesn't crash: Command 'com.myApplication.apicore.command.CreateProductInventoryCommand' resulted in org.axonframework.commandhandling.CommandExecutionException(OUT_OF_RANGE: [AXONIQ-2000] Invalid sequence number 0 for aggregate 3cd71e21-3720-403b-9182-130d61760117, expected 1)
My Saga:
@Saga
@ProcessingGroup("ProductCreationSaga")
public class ProductCreationSaga {

    @Autowired
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "productId")
    public void handle(ProductCreatedEvent event) {
        System.out.println("ProductCreationSaga, SagaEventHandler, ProductCreatedEvent");
        String productInventoryId = event.productId;
        SagaLifecycle.associateWith("productInventoryId", productInventoryId);
        // takes the ID from the product entity and sets all 3 stock attributes to zero
        commandGateway.send(new CreateProductInventoryCommand(productInventoryId, 0, 0, 0));
    }

    @SagaEventHandler(associationProperty = "productInventoryId")
    public void handle(ProductInventoryCreatedEvent event) {
        System.out.println("ProductCreationSaga, SagaEventHandler, ProductInventoryCreatedEvent");
        SagaLifecycle.end();
    }
}
The EventHandler that works as intended and saves a Product Entity:
@Component
public class ProductPersistenceService {

    @Autowired
    private ProductEntityRepository productRepository;

    // works as intended
    @EventHandler
    void on(ProductCreatedEvent event) {
        System.out.println("ProductPersistenceService, EventHandler, ProductCreatedEvent");
        ProductEntity entity = new ProductEntity(event.productId, event.productName, event.productDescription, event.productPrice);
        productRepository.save(entity);
    }

    @EventHandler
    void on(ProductNameChangedEvent event) {
        System.out.println("ProductPersistenceService, EventHandler, ProductNameChangedEvent");
        ProductEntity existingEntity = productRepository.findById(event.productId).get();
        ProductEntity entity = new ProductEntity(event.productId, event.productName, existingEntity.getProductDescription(), existingEntity.getProductPrice());
        productRepository.save(entity);
    }
}
The EventHandler that should save a ProductInventory Entity, but doesn't:
@Component
public class ProductInventoryPersistenceService {

    @Autowired
    private ProductInventoryEntityRepository productInventoryRepository;

    // doesn't work
    @EventHandler
    void on(ProductInventoryCreatedEvent event) {
        System.out.println("ProductInventoryPersistenceService, EventHandler, ProductInventoryCreatedEvent");
        ProductInventoryEntity entity = new ProductInventoryEntity(event.productInventoryId, event.physicalStock, event.reservedStock, event.availableStock);
        System.out.println(entity.toString());
        productInventoryRepository.save(entity);
    }
}
Product-Aggregate:
@Aggregate
public class Product {

    @AggregateIdentifier
    private String productId;
    private String productName;
    private String productDescription;
    private double productPrice;

    public Product() {
    }

    @CommandHandler
    public Product(CreateProductCommand command) {
        System.out.println("Product, CommandHandler, CreateProductCommand");
        AggregateLifecycle.apply(new ProductCreatedEvent(command.productId, command.productName, command.productDescription, command.productPrice));
    }

    @EventSourcingHandler
    protected void on(ProductCreatedEvent event) {
        System.out.println("Product, EventSourcingHandler, ProductCreatedEvent");
        this.productId = event.productId;
        this.productName = event.productName;
        this.productDescription = event.productDescription;
        this.productPrice = event.productPrice;
    }
}
ProductInventory-Aggregate:
@Aggregate
public class ProductInventory {

    @AggregateIdentifier
    private String productInventoryId;
    private int physicalStock;
    private int reservedStock;
    private int availableStock;

    public ProductInventory() {
    }

    @CommandHandler
    public ProductInventory(CreateProductInventoryCommand command) {
        System.out.println("ProductInventory, CommandHandler, CreateProductInventoryCommand");
        AggregateLifecycle.apply(new ProductInventoryCreatedEvent(command.productInventoryId, command.physicalStock, command.reservedStock, command.availableStock));
    }

    @EventSourcingHandler
    protected void on(ProductInventoryCreatedEvent event) {
        System.out.println("ProductInventory, EventSourcingHandler, ProductInventoryCreatedEvent");
        this.productInventoryId = event.productInventoryId;
        this.physicalStock = event.physicalStock;
        this.reservedStock = event.reservedStock;
        this.availableStock = event.availableStock;
    }
}
What you are noticing right now is the uniqueness requirement of the [aggregate identifier, sequence number] pair within a given event store. This requirement is in place to safeguard you from potential concurrent access to the same aggregate instance, as all events for the same aggregate need a unique overall sequence number. This number is furthermore used to determine the order in which events need to be handled, to guarantee the aggregate is consistently recreated in the same order.
So you might think this calls for a "sorry, there is no solution in place", but luckily that is not the case. There are roughly three things you can do in this setup:
Live with the fact that both aggregates will have unique identifiers.
Use distinct bounded contexts between both applications.
Change the way aggregate identifiers are written.
Option 1 is arguably the most pragmatic and is used by the majority. You have however noted that reusing the identifier is necessary, so I am assuming you have already disregarded this option entirely. Regardless, I would try to revisit this approach, as using UUIDs by default for each new entity you create can save you from trouble in the future.
Option 2 reflects the Bounded Context notion pulled in by DDD. Letting the Product aggregate and the ProductInventory aggregate reside in distinct contexts means you will have distinct event stores for both. Thus, the uniqueness constraint is kept, as no single store contains both aggregates' event streams. Whether this approach is feasible, however, depends on whether both aggregates actually belong to the same context or not. If not, you could for example use Axon Server's multi-context support to create two distinct applications.
Option 3 requires a little insight into what Axon does. When it stores an event, it invokes the toString() method on the @AggregateIdentifier annotated field within the aggregate. As your @AggregateIdentifier annotated field is a String, you are given the identifier as is. What you could do is use typed identifiers, whose toString() method doesn't return only the identifier but appends the aggregate type to it. Doing so makes the stored aggregateIdentifier unique, whereas from the usage perspective it still seems like you are reusing the identifier.
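A minimal sketch of such a typed identifier (the class name and prefix are illustrative assumptions, not from the original answer):

// Hypothetical typed identifier wrapping the shared product id
public class ProductInventoryId {

    private final String identifier;

    public ProductInventoryId(String identifier) {
        this.identifier = identifier;
    }

    @Override
    public String toString() {
        // Appending the aggregate type makes the stored aggregate identifier
        // unique, while both aggregates still share the same underlying id.
        return "ProductInventory-" + identifier;
    }
}

The @AggregateIdentifier annotated field of ProductInventory would then be of type ProductInventoryId instead of String.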
Which of the three options suits your solution best is hard to deduce from my perspective. What I did do is order them by what seems most reasonable to me.
Hoping this will help you further, @Jan!

Unable to change or delete relationship between nodes with Neo4j OGM and Spring Boot Data

I'm having problems with removing or changing existing relationships between two nodes using Spring Boot (v1.5.10) and Neo4j OGM (v2.1.6, with Spring Data Neo4j v4.2.10). I have found a few traces of similar problems reported by people using older Neo4j OGM versions (like 1.x), but I think those should be long gone with 2.1.6 and the latest Spring Boot v1 release. Therefore, I don't know whether this is a regression or whether I am not using the API correctly.
So, my node entities are defined as follows:
@NodeEntity
public class Task {

    @GraphId
    private Long id;

    private String key;

    @Relationship(type = "HAS_STATUS")
    private Status status;

    public Task() {
    }

    public Long getId() {
        return id;
    }

    public String getKey() {
        return key;
    }

    public void setKey(String key) {
        this.key = key;
    }

    public Status getStatus() {
        return status;
    }

    public void setStatus(Status status) {
        this.status = status;
    }
}

@NodeEntity
public class Status {

    @GraphId
    private Long id;

    private String key;

    @Relationship(type = "HAS_STATUS", direction = Relationship.INCOMING)
    private Set<Task> tasks;

    public Status() {
        tasks = new HashSet<>();
    }

    public Long getId() {
        return id;
    }

    public String getKey() {
        return key;
    }

    public void setKey(String key) {
        this.key = key;
    }

    public Set<Task> getTasks() {
        return tasks;
    }

    public void addTask(Task task) {
        tasks.add(task);
    }

    public boolean removeTask(Task task) {
        if (this.hasTask(task)) {
            return this.tasks.remove(task);
        }
        return false;
    }

    public boolean hasTask(Task task) {
        return this.tasks.contains(task);
    }
}
This is how it can be represented in Cypher-like style:
(t:Task)-[:HAS_STATUS]->(s:Status)
Here is the service method that tries to update the task's status:
public void updateTaskStatus(Task task, Status status) {
    Status prevStatus = task.getStatus();
    if (prevStatus != null) {
        prevStatus.removeTask(task);
        this.saveStatus(prevStatus);
    }
    task.setStatus(status);
    if (status != null) {
        status.addTask(task);
        this.saveStatus(status);
    }
    this.saveTask(task);
}
As a result of an update, I get two HAS_STATUS relationships to two different Status nodes (the old and the new one); or, if I try to remove an existing relationship, nothing happens (the old relationship remains).
The complete demo that illustrates the problem can be found on GitHub here:
https://github.com/ADi3ek/neo4j-spring-boot-demo
Any clues or suggestions that can help me resolve that issue are more than welcome! :-)
If you annotate your commands with @Transactional (because this is where the entities get loaded), it will work.
The underlying problem is that loading an entity opens a new transaction with a new session (context), finds the relationships, and caches the information about them in the context. The transaction (and session) then gets closed because the operation is done.
The subsequent save/update does not find an open transaction and, as a consequence, opens a new one (with a new session/session context). When executing the save, it looks at the entity in its current state and does not see the old relationship anymore.
Two answers:
it is a bug ;(
EDIT: After a few days of thinking about this, I revert the statement above. It is not a real bug but rather unexpected behaviour. There is nothing wrong in SDN. It uses two sessions (one for each operation) to do the work, and since nobody told it to do the work in one transaction, the loaded object is not 'managed' or 'attached' (as in JPA) to a session context.
you can work around this by using an explicit transaction for your unit of work
I will close the issue for SDN and try to migrate all the information to one of the two issues on GitHub, because it is an OGM problem.
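A minimal sketch of that workaround, assuming the entities are loaded and saved within one Spring-managed unit of work (the method signature and repository names are illustrative):

import org.springframework.transaction.annotation.Transactional;

// One transaction spans loading the entities and saving the changes,
// so the OGM session still knows the old relationship when the update is computed.
@Transactional
public void changeTaskStatus(Long taskId, Long statusId) {
    Task task = taskRepository.findOne(taskId);          // hypothetical repository
    Status status = statusRepository.findOne(statusId);  // hypothetical repository
    updateTaskStatus(task, status); // the service method from the question
}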

Neo4J OGM Session.load(ID) returns null object for existing ID

I am conducting some Neo4j tests and running into the following peculiar problem. I created a small model which I intend to use with OGM. The model has a superclass Entity and a child class Child, both in the package persistence.model. Entity has the required Long id field with a matching getId() getter.
public abstract class Entity {

    private Long id;

    public Long getId() {
        return id;
    }
}

@NodeEntity
public class Child extends Entity {

    String name;

    public Child() {
    }
}
Creating Child objects and persisting them through OGM works fine. I'm basing myself on the examples found in the documentation and using a Neo4jSessionFactory object, which initialises the SessionFactory with the package persistence.model. The resulting database contains objects with proper IDs filled in.
The problem arises when I try to fetch a Child for a given ID. I'm trying it with three methods, using two connection systems (Bolt and OGM):
boltSession.run("MATCH (a:Child) WHERE id(a) = {id} RETURN a", parameters("id", childId));
ogmSession.query("MATCH (a:Child) WHERE id(a) = $id RETURN a", params);
ogmSession.load(Child.class, childId, 1);
The first two methods actually return the correct data. The last one returns a null value. The last one, using OGM, has some obvious benefits, and I'd love to be able to use it properly. Can anyone point me in the right direction?
In your test code, you are doing a lookup by an id of type int:
private int someIdInYourDatabase = 34617;
The internal ids in Neo4j are of type Long.
If you change the type of the id to long or Long, it will work:
private long someIdInYourDatabase = 34617;
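For illustration, the OGM lookup from the question then behaves as expected (a sketch; the session setup is assumed):

// The long value boxes to Long, matching Neo4j's internal id type
long someIdInYourDatabase = 34617L;
Child child = ogmSession.load(Child.class, someIdInYourDatabase, 1);
// child is now populated instead of null (assuming a node with that id exists)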

DynamoDBMapper: How to get saved item?

For a simple Java REST API, I created a save function to persist my model to a DynamoDB table.
The model uses an auto-generated range key, as you can see here:
@DynamoDBTable(tableName = "Events")
public class EventModel {

    private int country;
    private String id;
    // ...

    @DynamoDBHashKey
    public int getCountry() {
        return country;
    }

    public void setCountry(int country) {
        this.country = country;
    }

    @DynamoDBRangeKey
    @DynamoDBAutoGeneratedKey
    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    // ...
}
Unfortunately, DynamoDBMapper's save() method does not return anything. I want to return the created item to set the proper Location header in my 201 HTTP response.
public EventModel create(EventModel event) {
    mapper.save(event);
    return null;
}
How can I make that work? Any suggestions? Of course I could generate the id on the client, but I don't want to do this, because solving the potential atomicity issue needs additional logic on both client and server side.
I'm using aws-java-sdk-dynamodb version 1.11.86.
Never mind, I figured out how to do it. The save() method updates the reference of the passed object: after calling mapper.save(event); the id property is populated and has its value.
So the way to make it work is just:
public EventModel create(EventModel event) {
    mapper.save(event);
    return event;
}
That's it!
There is no direct way through DynamoDBMapper to get what is saved in DynamoDB after a put/update. The approach mentioned by @m4xy would work if you are saving with a DynamoDBConfig of CLOBBER or UPDATE. If you are using UPDATE_SKIP_NULL_ATTRIBUTES, this approach won't work.
If you are using the mapper, you have to explicitly call the database again to read the existing value (which might have been updated in the meantime if there are multiple writers, so you might get an unexpected result). To ensure the read returns what you expect, you can implement locking for writes, such that while a lock is held by a given thread, no other thread can write for that key. This approach, however, has the downside of slowing down your application.
Alternatively, you can use a DynamoDbClient, which has APIs that support returning database values after a write:
https://sdk.amazonaws.com/java/api/2.0.0-preview-11/index.html?software/amazon/awssdk/services/dynamodb/DynamoDbClient.html
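For illustration, a sketch with the v2 DynamoDbClient linked above; the attribute being updated is an assumption, and the key attributes come from the question's model. UpdateItem accepts ReturnValue.ALL_NEW, which returns the item as it exists after the write:

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ReturnValue;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;

public class EventUpdater {

    private final DynamoDbClient client = DynamoDbClient.create();

    // Returns the full item as it exists after the write (ALL_NEW)
    public Map<String, AttributeValue> updateEventName(int country, String id, String newName) {
        return client.updateItem(UpdateItemRequest.builder()
                .tableName("Events") // from the question's @DynamoDBTable
                .key(Map.of(
                        "country", AttributeValue.builder().n(Integer.toString(country)).build(),
                        "id", AttributeValue.builder().s(id).build()))
                .updateExpression("SET #n = :n") // "name" is an illustrative attribute
                .expressionAttributeNames(Map.of("#n", "name"))
                .expressionAttributeValues(Map.of(":n", AttributeValue.builder().s(newName).build()))
                .returnValues(ReturnValue.ALL_NEW)
                .build())
                .attributes();
    }
}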

Hide Hibernate session handling from business logic

Suppose I have an entity "Parent" which holds a list of "Child" objects.
In Java this looks like this:
public class ParentEntity implements Parent {

    protected int id;

    @Override
    public int getId() { return id; }

    @Override
    public void setId(int id) { this.id = id; }

    protected List<Child> children;

    @Override
    public List<Child> getChildren() { return children; }

    @Override
    public void setChildren(List<Child> children) { this.children = children; }

    @Override
    public void save() {
        // Do some Hibernate "save" magic here...
    }

    public static Parent getById(int id) {
        Session session = HibernateUtil.getSessionFactory().openSession();
        Parent entity = (Parent) session.get(ParentEntity.class, id);
        session.close();
        return entity;
    }
}
My business logic class shall only work with the interface class, like this:
public class BusinessLogic {

    public void doSomething() {
        Parent parent = ParentEntity.getById(1);
        for (Child c : parent.getChildren())
            System.out.println("I love my daddy.");
    }
}
Unfortunately, this doesn't work because the parent's children do not get loaded and the loop crashes with a NullPointerException.
1. Approach "Eager Loading"
There are two problems with this approach. First, even though I wrote lazy='false' in the XML, Hibernate seems to ignore it.
Secondly, eager loading is not desirable in my case, since we could potentially have hundreds of children.
2. Approach "Load/Initialize on 'GET'"
@Override
public List<Child> getChildren() {
    if (!Hibernate.isInitialized(children)) {
        Session session = HibernateUtil.getSessionFactory().openSession();
        Hibernate.initialize(children);
        session.close();
    }
    return children;
}
This doesn't work either: I get an exception saying that the collection is not linked to a session. The session used to load the parent entity had already been closed.
What do you suggest is the 'best practice' solution here? I really don't want to mess with Hibernate sessions in my business logic.
Either you can use a query object or a custom query and fetch all children in the scenarios where you really need them (search for "left join fetch"); for a few thousand objects this might work.
However, if the number of records could reach millions, you will most likely run into memory issues; then you should think about a shared cache or retrieving the data on a page-by-page basis. Just search for "n+1" and Hibernate; you will find plenty of discussions around this topic.
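For illustration, a fetch-join query along those lines, using the question's own classes (a sketch; it assumes the collection is mapped under the name children):

// Loads the parent and its children in a single query, so no lazy loading is needed later
Session session = HibernateUtil.getSessionFactory().openSession();
Parent parent = (Parent) session
        .createQuery("select p from ParentEntity p left join fetch p.children where p.id = :id")
        .setParameter("id", 1)
        .uniqueResult();
session.close();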
The simplest hack I can think of:
public static Parent getParentWithChildrenById(int id) {
    Session session = HibernateUtil.getSessionFactory().openSession();
    Parent entity = (Parent) session.get(ParentEntity.class, id);
    Hibernate.initialize(entity.getChildren()); // force-load the collection while the session is open
    session.close();
    return entity;
}
Side note: having data access logic in your domain layer is considered bad practice.
I've always allowed Spring and JPA to manage my Hibernate sessions, so most of the painful boilerplate code disappears at that point. But you still have to call entity.getChildren().size() (or something similar) before you exit the data-layer call where the session was opened, to force Hibernate to do the retrieval while there's still a session to use.
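A minimal sketch of that pattern with Spring managing the session (the service class and method names are illustrative):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ParentService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(readOnly = true)
    public Parent getParentWithChildren(int id) {
        Parent parent = entityManager.find(ParentEntity.class, id);
        parent.getChildren().size(); // touch the collection while the session is still open
        return parent;
    }
}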
