We are running Spring Data Elasticsearch 4.1.0 and Spring 5.2.10.
This may sound weird, but here is the scenario (starting state of Elasticsearch: no index or mapping):
Fire up the container, and the first thing spring-data-es does is create the index with all the mapping. Yea! But if that process fails for some reason, the mapping does not get created. OK, understandable.
After that (mapping failed) you save an entity... it appears Spring/ES will dynamically start generating the mapping for that entity as it is getting saved. Cool... yea! But... some of the @Field attributes are not getting into the mapping, e.g. the copy_to attribute.
I don't know how all the dynamic mapping works, whether it is on the Java side or the ES side. I guess if the dynamic mapping is happening on the ES side, then this behavior makes sense. But I think I noticed other @Field attributes making their way into the mapping, like the field type and data conversion stuff.
Is this the expected behavior? I am thinking that @Field annotation attributes should make their way into the mapping regardless of how the mapping gets created.
Your assumptions are correct.
The @Field annotations are only considered when the mapping is written by Spring Data Elasticsearch - on repository initialization or when one of the corresponding methods of the IndexOperations interface is called.
When writing the mapping fails on index creation, it is not automatically retried afterwards. It will also not be done on the next application start, as the index already exists by then.
When an entity is stored in an index that does not have a mapping defined, Elasticsearch itself creates the mapping dynamically, and Elasticsearch does not know anything about Spring Data Elasticsearch annotations.
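For completeness, a minimal sketch of writing the @Field-derived mapping manually through IndexOperations after a failed index creation (the ProductDocument entity and the injected ElasticsearchOperations are assumptions for illustration):

import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.IndexOperations;

public class MappingRepair {

    // ProductDocument is a hypothetical @Document entity carrying @Field annotations
    void recreateIndexAndMapping(ElasticsearchOperations operations) {
        IndexOperations indexOps = operations.indexOps(ProductDocument.class);
        if (!indexOps.exists()) {
            indexOps.create(); // create the index itself first
        }
        // derive the mapping from the @Field annotations and write it to the index
        indexOps.putMapping(indexOps.createMapping(ProductDocument.class));
    }
}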
Did you get an error in the application when the mapping could not be stored?
Using up to Spring Boot 2.3.4, I've been using the @QueryResult annotation to map some custom Cypher query responses to POJOs. I'm now testing the first Spring Boot 2.4 RC and trying to follow the instructions on how to drop OGM, since the support has been removed. I successfully replaced other annotations with the ones provided here:
https://neo4j.github.io/sdn-rx/current/#migrating
but I'm now left with my @QueryResult annotations, for which nothing is specified. When I delete them I get mapping errors:
org.springframework.data.mapping.MappingException: Could not find mappable nodes or relationships inside Record
I've looked up some of the mapping explanations, but here's the thing: my custom POJOs don't represent any entity from the database, nor do they represent part(s) of an entity. They're rather relevant bits from different nodes.
Let me give an example:
I want to get all b nodes that are targets of the MY_REL relationship from a:
(a:Node {label:"my label"})-[:MY_REL]->(b:Node)
For my purposes, I don't need to get the nodes in the response, so my POJO only has 2 attributes:
a "source" String which is the beginning node's label
a "targets" Set of String which is the list of end nodes' labels
and I return this:
RETURN a.label AS source, COLLECT(b.label) AS targets
My POJO was simply annotated with @QueryResult in order to get the mapping done.
Does anyone know how to reproduce this behaviour with the SB 2.4 release candidate? As I said, removing the now faulty annotation prompts me with a mapping error, but I don't know what I should do to replace it.
Spring Data Neo4j 6 now supports projections (formerly known as @QueryResult) in line with the other Spring Data modules.
Having said this, the simplest thing you could do, assuming that this @Query is written in a Neo4jRepository<Node,...>, would be to also return the a node.
I know that this sounds ridiculous at first, but by choosing the repository abstraction you say that everything that gets processed during the mapping phase is a Node, and you want to project its properties (or a subset) into the POJO (DTO projection). SDN cannot ensure that you are really working with the right type when it starts the mapping, so it throws the exception you are facing. Neo4j-OGM was more relaxed behind the scenes when mapping @QueryResults, but unfortunately also wrong in this regard.
If your use case is as simple as you have described it, I would strongly suggest using the Neo4jClient (docs), which gives you direct access to the mapping.
It has a fluent API for querying and manual mapping, and it participates in the ongoing Spring transactions your repositories are running within.
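A minimal sketch of what that could look like for the query above (the injected Neo4jClient bean and the SourceAndTargets POJO are assumptions; the Cypher and result names are taken from the question):

import java.util.Collection;
import java.util.HashSet;
import org.springframework.data.neo4j.core.Neo4jClient;

// neo4jClient is the injected Neo4jClient bean; SourceAndTargets is your POJO
// with a (String source, Set<String> targets) constructor
Collection<SourceAndTargets> result = neo4jClient
        .query("MATCH (a:Node {label: $label})-[:MY_REL]->(b:Node) "
                + "RETURN a.label AS source, COLLECT(b.label) AS targets")
        .bind("my label").to("label")
        .fetchAs(SourceAndTargets.class)
        .mappedBy((typeSystem, record) -> new SourceAndTargets(
                record.get("source").asString(),
                new HashSet<>(record.get("targets").asList(v -> v.asString()))))
        .all();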
There is a lot in there when it comes to projections, so I would also suggest reading the section in the documentation.
I have an entity "Event" that has a ManyToOne relationship with the entity "Organization". So an Organization can have multiple events.
What I originally wanted to do was to filter the entity Event using a property of the Organization entity. So basically when I fetch events, only return the events that have an Organization.code= :codeParam.
To accomplish that, I implemented a Hibernate filter with:
@FilterDef(name = "codeFilter", parameters = @ParamDef(name = "codeParam", type = "string"))
...
@ManyToOne
@JoinColumn(name = "Organization_Id")
@Filter(name = "codeFilter", condition = "code = :codeParam")
private Organization organization;
...
Filter hibernateFilter = sess.enableFilter("codeFilter");
hibernateFilter.setParameter("codeParam", "hola");
Unfortunately, according to a post from the Hibernate team on the Hibernate forums, this is not possible:
A Hibernate data filter does not change the multiplicity of an association. By definition it therefore does not filter many-to-one, one-to-one, or any load() or get() operation.
What is it supposed to do, return NULL instead of an instance? NULL does not mean FILTERED, it means NULL. You guys are using filters wrong.
So my question is: is there any way to filter the base entity ("Event") with a condition on the entity from a ManyToOne relationship (Organization.code = :codeParam)?
I need this to be enforced every time there is a fetch of events, so a solution using the already existing hibernate filters or something similar would be greatly appreciated.
EDIT 1: The question is a simple example of what needs to be done on a significantly bigger scale. Basically, we want to add security to all our entities and their own nested entities through a globally defined filter on a Unix-style permissions column that all our tables have.
WARNING: Do not do this; it depends on Hibernate internals and is prone to breaking on schema changes, and possibly on variations in individual query setup.
Set Hibernate to show its generated SQL, run the query you want to filter (in this case, loading some Event objects), and check what name it assigns to the join used for fetching the related Organization. For example, the generated SQL might include inner join Organization someNameHere on this_.Organization_Id = someNameHere.OrganizationId. Then apply the filter, not to the association, but to the Event class, with the condition "someNameHere.code = :codeParam", as in the sketch below.
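A sketch of what that could look like (again: "someNameHere" must match the alias Hibernate actually generates for your query; the other names follow the question):

import javax.persistence.Entity;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import org.hibernate.annotations.Filter;
import org.hibernate.annotations.FilterDef;
import org.hibernate.annotations.ParamDef;

@Entity
@FilterDef(name = "codeFilter", parameters = @ParamDef(name = "codeParam", type = "string"))
// the filter sits on the class, and the condition references Hibernate's
// internal join alias instead of the association itself
@Filter(name = "codeFilter", condition = "someNameHere.code = :codeParam")
public class Event {

    @ManyToOne
    @JoinColumn(name = "Organization_Id")
    private Organization organization;

    // ...
}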
This is, unfortunately, the only way I've been able to find to filter one class by the properties of an associated class.
I'm trying to make a more robust solution, but it's a complex issue and I'm still in the research stage for that. I expect it will use code generation (through an annotation processor) and programmatic modification of Hibernate's mapping information on startup, but I'm not sure what else yet.
Unfortunately, we are stuck using JPA 1.0 and Java EE 5 (SAP implementation). If it matters, we are using an Oracle DB.
We have an entity class that is basically a database description of a file-type object, holding the name, file type and a byte[] (BLOB) field.
When performing a lookup on the table, we don't want to eagerly load the blob every time and return it through the web service; however, we still want to load the file name and type fields.
Now, we've tried several things:
Firstly, we tried @Basic(fetch = FetchType.LAZY). This doesn't seem to do anything. (Probably because lazy fetching of basic attributes is only a hint that JPA providers are not required to honor.)
Secondly, we tried making a one-to-one relationship to the byte data in a separate table and setting that to a lazy fetch type. When doing this, the exception occurs in the web service, because the JAX-WS proxy object for this item is pretty much empty.
Thirdly, we tried eagerly loading the entire "File" table, but then just blanking out the byte[] data after the database query; this avoids returning the large byte data over the web service, but it is not ideal, as the byte data is still retrieved from the database into the program "context".
So, is there a way to tell JAX-WS to ignore a specific field if the proxy object is empty? I couldn't seem to find anything in the documentation for this.
Why not just create a separate JAX-WS type like "FileMetadata" which contains only the name and type fields?
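A minimal sketch of that idea (entity and package names are assumptions), using a JPQL constructor expression, which already works in JPA 1.0 and never reads the blob column:

// the lightweight type exposed through the web service instead of the entity;
// JAXB needs the no-arg constructor
public class FileMetadata {

    private String name;
    private String fileType;

    public FileMetadata() {
    }

    public FileMetadata(String name, String fileType) {
        this.name = name;
        this.fileType = fileType;
    }

    public String getName() { return name; }
    public String getFileType() { return fileType; }
}

// in the lookup; JPA 1.0 has no typed queries, hence the suppressed warning
@SuppressWarnings("unchecked")
List<FileMetadata> files = em.createQuery(
        "SELECT NEW com.example.FileMetadata(f.name, f.fileType) FROM File f")
    .getResultList();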
We are creating a new web application backed by JPA to replace an old web application. As part of the migration we are converting the old application's database to a new, more sophisticated, JPA-managed database.
So I've written a 'script' that converts the old database to a set of JPA entities and subsequently saves them. It works like this:
Create an order of conversion based on the dependencies of the domain models
For each entity:
Execute a database query against the legacy DB
Store a new object for each obtained table row in a list in memory
Iterate over the generated lists in the same order as the conversion, and persist each entity.
Now, the first two steps work well. Upon persisting, however, I get an exception. The exception occurs when one entity has a relation to another entity. For example, if one of our entities were a Book and another a Chapter defining a @ManyToOne(optional=false) relation to Book: upon persisting the Chapter, it throws the exception java.lang.IllegalStateException: org.hibernate.TransientPropertyValueException: Not-null property references a transient value - transient instance must be saved before current operation: models.Chapter.book -> models.Book.
Of course, this indicates that something is wrong with the state of the book: it seems it is either not set or has not yet been persisted. However, I can verify that the Book is set properly in the conversion of the Chapter, and I can also verify that all entities of type Book are persisted by the EntityManager before the entities of type Chapter get persisted. Obviously, my JPA provider does not behave as expected and does not truly persist my Book objects for some reason.
What solution would allow me to save the entire graph of objects that I have converted to the database? I use Hibernate as my JPA provider, and I also use Spring 3.1 for injecting dependencies and EntityManagers.
EDIT 1: Some additional info: I've again verified that entityManager.persist() is called on each of the book objects before entityManager.persist() is called on the chapters. However, the id of the book object remains null, meaning it is not properly persisted. The database also remains empty, despite not using transactions.
EDIT 2: Because I don't think it's clear from the text above: the Book and Chapter story is just an example. It happens for any entity that references another entity. This makes it seem as if I'm not using JPA/Hibernate properly as opposed to not setting the values of my entities properly.
EDIT 3: The core issue seems to be that despite persisting Book properly, having all the right annotations, book.getId() remains null. Basically, Hibernate is not setting the ids on my entities after persisting them, leading to problems when I need to use those entities later.
I once battled with such an error from Hibernate myself. It turned out that it was a combination of a cycle in the object graph and the cascade settings that caused the problem.
It has been a while, so the following might not be 100% accurate, but maybe it is enough information to track down your problem:
Hibernate wants to insert the chapter and realizes it needs to insert the book first.
It wants to insert the book and realizes it needs to insert another entity first (e.g. the publisher).
It inserts the publisher and performs the cascades defined on the publisher (e.g. authors).
An author has e.g. a reference to his latestBook. Because Hibernate internally already marked the book as processed (in step 2), you would now get an exception stating that author.book references a transient instance.
To find out if this is your problem, you can enable full Hibernate debug logging and follow the path Hibernate takes through your object graph.
I've found the answer thanks to the discussion I've had with user1888440.
The solution was that the Spring @Transactional annotation was nonfunctional in my application. This meant that nothing Hibernate did occurred in the context of a transaction, which in turn meant that Hibernate would not set ids after persisting, and so all conversions broke down.
The reason why @Transactional did not work is probably a fact I did not mention: this script is part of a Play 2.0 (actually 2.1) app and is thus built using SBT. SBT doesn't use a normal Java setup to build an application, but instead uses the Scala compiler to compile Java as well. My guess is that the Scala compiler did not work well with the AspectJ that Spring requires to make @Transactional work.
Instead, I performed all of the database work involved in this conversion within a programmatically defined Spring transaction (section 11.6 of the Spring reference documentation), as sketched below. Now everything behaves as expected.
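For reference, a rough sketch of such a programmatically defined transaction with Spring's TransactionTemplate (persistAllConvertedEntities() is a stand-in for the conversion code):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

// transactionManager is the injected PlatformTransactionManager
TransactionTemplate txTemplate = new TransactionTemplate(transactionManager);
txTemplate.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // persists all Books first, then all Chapters, inside one transaction,
        // so Hibernate assigns the ids as expected
        persistAllConvertedEntities();
    }
});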
Check the unsaved-value for your primary key / object ID in your hbm files. If ID creation is automated by the Hibernate framework and you are setting the ID somewhere yourself, it will throw this error. By default the unsaved-value is 0, so if you set the ID to 0 you will see this error.
Sounds like you are forgetting to assign a Book to each Chapter before persisting it. Even if you have persisted the Book, it needs to be assigned to the book property of the Chapter instance before you can persist the Chapter. This is because you have specified the relationship as non-optional: book can never be null.
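In code, that ordering looks like this (a minimal sketch using the names from the question):

Book book = new Book();
// ... set the book's fields ...
entityManager.persist(book);

Chapter chapter = new Chapter();
chapter.setBook(book); // the @ManyToOne(optional = false) side must be set
// ... set the chapter's fields ...
entityManager.persist(chapter);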
Working with JPA / Hibernate in an OSIV Web environment is driving me mad ;)
The scenario is the following: I have an entity A that is loaded via JPA and has a collection of B entities. Those B entities have a required field.
When the user adds a new B to A by pressing a link in the webapp, that required field is not set (since there is no sensible default value).
Upon the next HTTP request, the OSIV filter tries to merge the A entity, but this fails, as Hibernate complains that the new B has a required field that is not set.
javax.persistence.PersistenceException: org.hibernate.PropertyValueException: not-null property references a null or transient value
Reading the JPA spec, I see no sign that those checks are required in the merge phase (I have no transaction active).
I can't keep the collection of B's outside of A and only add them to A when the user presses 'save' (aka entityManager.persist()), as the place where the save button lives does not know about the B's, only about A.
Also, A and B are only examples; I have similar stuff all over the place.
Any ideas? Do other JPA implementations behave the same here?
Thanks in advance.
I did a lot of reading and testing. The problem comes from my misunderstanding of JPA / Hibernate: merge() always hits the DB and also schedules an update for the entity. I did not find any mention of this in the JPA spec, but the 'Java Persistence with Hibernate' book does mention it.
Looking through the EntityManager (and Session as a fallback) API, it looks as if there is no means of just assigning an entity to the current persistence context WITHOUT scheduling an update. After all, what I want is to navigate the object graph, changing properties as needed, and trigger an update (with a version check if needed) later on. Something I think every webapp out there using ORM must do?
The basic workflow I'm looking for:
load an entity from the DB (or create a new one)
let the entity (and all its associations) become detached (as the EntityManager closes at the end of an HTTP request)
when the next HTTP request comes in, work again with those objects, navigating the tree without fear of LazyInitializationExceptions
call a method that persists all changes made during steps 1-3
With the OSIV filter from Spring in conjunction with an IModel implementation from Wicket, I thought I had achieved this.
I basically see two possible ways out of it:
a) load the entity and all the associations needed when entering a certain page (use case), letting them become detached, and adding/changing them as needed over the course of several HTTP requests. Then reattach them when the user initiates a save (validators will ensure a valid state) and submit them to the database; a sketch of this follows after these options.
b) use the current setup, but make sure that all newly added entities have all their required fields set (probably using some wizard components). I would still have all the updates to the database for every merge(), but hopefully the database admin won't notice ;)
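A rough sketch of option a) (names are illustrative; the save step must run inside a transaction, and the A-to-B association must cascade MERGE):

// request 1: load A together with the associations the page will need
A a = (A) entityManager.createQuery(
        "SELECT a FROM A a JOIN FETCH a.bs WHERE a.id = :id")
    .setParameter("id", id)
    .getSingleResult();
// the EntityManager closes at the end of the request; a and a.bs are detached

// requests 2..n: work on the detached graph, no open session required
B b = new B();
b.setRequiredField(value); // ensured by validators before saving
a.getBs().add(b);

// save request: reattach and write all changes in one transaction
entityManager.merge(a); // cascades to the B's when the association cascades MERGE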
How do other people work with JPA in a web environment? Any other options for me?