As I have read in many articles (e.g. here), to enable Hibernate's second-level cache for a given entity we need to set a cache concurrency strategy on the entity via the @org.hibernate.annotations.Cache annotation.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Person {
Besides that, I also use the query-level cache (using query.setCacheable(true)) on some queries that fetch this entity, and it works well.
My question relates to custom queries that use a DTO projection, i.e. queries like this:
Query query = session.createQuery("SELECT new PersonDto(person.id, person.name) FROM Person person WHERE person.name = :name");
query.setParameter("name", name);
query.setCacheable(true);
query.uniqueResult();
Do I need to set the @Cache annotation for PersonDto as well? I have tried running the query without the annotation and the DTO was successfully cached.
Could you explain why we need the annotation only for entity objects, while non-entity objects do not require it?
Thanks.
I'm not 100% sure on this, but you are manually setting cacheable to true for the query.
The annotation on Person is the equivalent for an entity.
I wouldn't think of it as PersonDTO being cached in this instance. If you were to write another query saying select new PersonDTO(person.id, person.name) from Person person where person.id = 10, I don't think it will look into your cache to see if a PersonDTO with id == 10 exists; whereas the entity's cache would, because it understands they are the same thing.
I would think of it as the query itself being cached (meaning that if it is run again before the TTL expires, the cached results are returned). It's caching the fact that you ran this query with a certain name parameter, not that a PersonDTO with that name exists in the cache. Does that make sense?
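To illustrate the difference (a minimal sketch building on the question's code; the open session and PersonDto are assumed from above): the query cache key is essentially the query string plus the bound parameter values, so only an identical query with identical parameters is served from the cache.

Query query = session.createQuery("SELECT new PersonDto(p.id, p.name) FROM Person p WHERE p.name = :name");
query.setParameter("name", "Alice");
query.setCacheable(true);
query.uniqueResult();   // 1st run: hits the database, result stored under (HQL + "Alice")

query = session.createQuery("SELECT new PersonDto(p.id, p.name) FROM Person p WHERE p.name = :name");
query.setParameter("name", "Alice");
query.setCacheable(true);
query.uniqueResult();   // same HQL, same parameter: served from the query cache

query = session.createQuery("SELECT new PersonDto(p.id, p.name) FROM Person p WHERE p.name = :name");
query.setParameter("name", "Bob");
query.setCacheable(true);
query.uniqueResult();   // different parameter: different cache key, the database is hit again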
Given the following domain model, I want to load all Answers including their Values and their respective sub-children, and put them into an AnswerDTO to then convert to JSON. I have a working solution, but it suffers from the N+1 problem that I want to get rid of by using an ad-hoc @EntityGraph. All associations are configured LAZY.
#Query("SELECT a FROM Answer a")
#EntityGraph(attributePaths = {"value"})
public List<Answer> findAll();
Using an ad-hoc @EntityGraph on the repository method I can ensure that the values are pre-fetched to prevent N+1 on the Answer->Value association. While my result is fine, there is another N+1 problem, caused by lazy loading of the selected association of the MCValues.
Using this
@EntityGraph(attributePaths = {"value.selected"})
fails, because the selected field is of course only part of some of the Value entities:
Unable to locate Attribute with the the given name [selected] on this ManagedType [x.model.Value];
How can I tell JPA to only try fetching the selected association when the value is an MCValue? I need something like optionalAttributePaths.
You can only use an EntityGraph if the association attribute is part of the superclass, and thereby also part of all subclasses. Otherwise, the EntityGraph will always fail with the exception that you currently get.
The best way to avoid your N+1 select issue is to split your query into 2 queries:
The 1st query fetches the MCValue entities using an EntityGraph to fetch the association mapped by the selected attribute. After that query, these entities are then stored in Hibernate's 1st level cache / the persistence context. Hibernate will use them when it processes the result of the 2nd query.
#Query("SELECT m FROM MCValue m") // add WHERE clause as needed ...
#EntityGraph(attributePaths = {"selected"})
public List<MCValue> findAll();
The 2nd query then fetches the Answer entity and uses an EntityGraph to also fetch the associated Value entities. For each Value entity, Hibernate will instantiate the specific subclass and check if the 1st level cache already contains an object for that class and primary key combination. If that's the case, Hibernate uses the object from the 1st level cache instead of the data returned by the query.
#Query("SELECT a FROM Answer a")
#EntityGraph(attributePaths = {"value"})
public List<Answer> findAll();
Because we already fetched all MCValue entities with the associated selected entities, we now get Answer entities with an initialized value association. And if the association contains an MCValue entity, its selected association will also be initialized.
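Putting the two queries together could look like this (just a sketch; the repository and service names such as MCValueRepository and AnswerQueryService are made up, the important part is that both calls run in the same persistence context):

@Service
public class AnswerQueryService {

    private final MCValueRepository mcValueRepository; // declares the 1st query above
    private final AnswerRepository answerRepository;   // declares the 2nd query above

    public AnswerQueryService(MCValueRepository mcValueRepository, AnswerRepository answerRepository) {
        this.mcValueRepository = mcValueRepository;
        this.answerRepository = answerRepository;
    }

    @Transactional(readOnly = true)
    public List<Answer> loadAnswers() {
        // 1st query: loads the MCValues together with their selected association
        // and places them in the persistence context.
        mcValueRepository.findAll();

        // 2nd query: loads the Answers with their Value association; for MCValues,
        // Hibernate reuses the fully initialized instances loaded above.
        return answerRepository.findAll();
    }
}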
I don't know what Spring Data is doing there, but to do that you usually have to use the TREAT operator to be able to access the sub-association, and the implementation of that operator is quite buggy.
Hibernate supports implicit subtype property access, which is what you would need here, but apparently Spring Data can't handle this properly. I can recommend that you take a look at Blaze-Persistence Entity Views, a library that works on top of JPA and allows you to map arbitrary structures against your entity model. You can map your DTO model in a type-safe way, including the inheritance structure. Entity views for your use case could look like this:
@EntityView(Answer.class)
interface AnswerDTO {
    @IdMapping
    Long getId();
    ValueDTO getValue();
}

@EntityView(Value.class)
@EntityViewInheritance
interface ValueDTO {
    @IdMapping
    Long getId();
}

@EntityView(TextValue.class)
interface TextValueDTO extends ValueDTO {
    String getText();
}

@EntityView(RatingValue.class)
interface RatingValueDTO extends ValueDTO {
    int getRating();
}

@EntityView(MCValue.class)
interface MCValueDTO extends ValueDTO {
    @Mapping("selected.id")
    Set<Long> getOption();
}
With the Spring Data integration provided by Blaze-Persistence you can define a repository like this and directly use the result:
@Transactional(readOnly = true)
interface AnswerRepository extends Repository<Answer, Long> {
    List<AnswerDTO> findAll();
}
It will generate an HQL query that selects just what you mapped in the AnswerDTO, which is something like the following:
SELECT
a.id,
v.id,
TYPE(v),
CASE WHEN TYPE(v) = TextValue THEN v.text END,
CASE WHEN TYPE(v) = RatingValue THEN v.rating END,
CASE WHEN TYPE(v) = MCValue THEN s.id END
FROM Answer a
LEFT JOIN a.value v
LEFT JOIN v.selected s
My latest project used GraphQL (a first for me) and we had a big issue with N+1 queries, trying to optimize the queries to only join tables when they are required. I have found Cosium/spring-data-jpa-entity-graph irreplaceable. It extends JpaRepository and adds methods to pass an entity graph into the query. You can then build dynamic entity graphs at runtime to add left joins for only the data you need.
Our data flow looks something like this:
Receive GraphQL request
Parse GraphQL request and convert to list of entity graph nodes in the query
Create entity graph from the discovered nodes and pass into the repository for execution
To solve the problem of invalid nodes ending up in the entity graph (for example __typename from GraphQL), I created a utility class which handles the entity graph generation, as sketched below. The calling class passes in the entity class it is generating the graph for, and the utility validates each node in the graph against the metamodel maintained by the ORM. If a node is not in the model, it is removed from the list of graph nodes. (This check needs to be recursive and check each child as well.)
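A rough sketch of such a utility (the class and method names here are mine, not from any library; it only relies on the standard JPA Metamodel API obtained from the EntityManager):

public class EntityGraphSanitizer {

    private final Metamodel metamodel;

    public EntityGraphSanitizer(EntityManager entityManager) {
        this.metamodel = entityManager.getMetamodel();
    }

    // Keeps only the requested paths (e.g. "value.selected") that actually exist
    // in the JPA metamodel, dropping things like GraphQL's __typename.
    public List<String> sanitize(Class<?> rootType, List<String> requestedPaths) {
        List<String> valid = new ArrayList<>();
        for (String path : requestedPaths) {
            if (isValidPath(rootType, path.split("\\."), 0)) {
                valid.add(path);
            }
        }
        return valid;
    }

    private boolean isValidPath(Class<?> type, String[] parts, int index) {
        ManagedType<?> managedType;
        Attribute<?, ?> attribute;
        try {
            managedType = metamodel.managedType(type);           // throws if the type is not managed
            attribute = managedType.getAttribute(parts[index]);  // throws if the attribute is unknown
        } catch (IllegalArgumentException e) {
            return false; // unknown node -> remove it from the graph
        }
        if (index == parts.length - 1) {
            return true;
        }
        // Recurse into the attribute's target type for the remaining path parts.
        Class<?> targetType = attribute instanceof PluralAttribute
                ? ((PluralAttribute<?, ?, ?>) attribute).getElementType().getJavaType()
                : attribute.getJavaType();
        return isValidPath(targetType, parts, index + 1);
    }
}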
Before finding this I had tried projections and every other alternative recommended in the Spring Data JPA / Hibernate docs, but nothing seemed to solve the problem elegantly, or at least not without a ton of extra code.
Edited after your comment:
My apologies, I hadn't understood your issue in the first round: it occurs on startup of Spring Data, not only when you call findAll().
The full example can be pulled from my GitHub:
https://github.com/bdzzaid/stackoverflow-java/blob/master/jpa-hibernate/
You can easily reproduce and fix your issue inside this project.
Effectively, Spring Data and Hibernate are not able to determine the "selected" graph by default, so you need to specify the way to collect the selected option.
So first, you have to declare the NamedEntityGraphs of the Answer class.
As you can see, there are two NamedEntityGraphs for the value attribute of the Answer class:
The first is for all Value entities, without a specific relationship to load.
The second is for the specific multi-choice value. If you remove this one, you reproduce the exception.
Second, you need to be in a transactional context when calling answerRepository.findAll() if you want to fetch LAZY associations.
@Entity
@Table(name = "answer")
@NamedEntityGraphs({
        @NamedEntityGraph(
                name = "graph.Answer",
                attributeNodes = @NamedAttributeNode(value = "value")
        ),
        @NamedEntityGraph(
                name = "graph.AnswerMultichoice",
                attributeNodes = @NamedAttributeNode(value = "value"),
                subgraphs = {
                        @NamedSubgraph(
                                name = "graph.AnswerMultichoice.selected",
                                attributeNodes = {
                                        @NamedAttributeNode("selected")
                                }
                        )
                }
        )
})
public class Answer
{
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(updatable = false, nullable = false)
    private int id;

    @OneToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "value_id", referencedColumnName = "id")
    private Value value;

    // ..
}
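With those graphs declared, a Spring Data repository method can reference the multi-choice graph by name; a sketch of the usage (the repository and service names are illustrative, and the read-only transaction keeps any remaining lazy associations accessible):

interface AnswerRepository extends JpaRepository<Answer, Integer> {

    // Reference the named graph that also pulls in MCValue.selected.
    @EntityGraph(value = "graph.AnswerMultichoice", type = EntityGraph.EntityGraphType.LOAD)
    @Override
    List<Answer> findAll();
}

@Service
public class AnswerService {

    private final AnswerRepository answerRepository;

    public AnswerService(AnswerRepository answerRepository) {
        this.answerRepository = answerRepository;
    }

    @Transactional(readOnly = true)
    public List<Answer> loadAnswers() {
        return answerRepository.findAll();
    }
}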
How can I update every field in an entity without writing an SQL query like:
(update Entity u set u.m1= ?1, u.m2= ?2, ... where u.id = ?3)
The class has 20+ fields, and writing such a query by hand would take a long time. I also often add new fields.
Can I update everything automatically? Like this:
entityRepo.update(entity);
If I do entityRepo.save(), it creates an unnecessary new record in the database.
No, you can use JpaRepository.save(S entity), which inserts the entity or updates it if it already exists.
To achieve that, make sure that the entity has its JPA @Id set before invoking save(); otherwise a new record will indeed be created.
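For example (a sketch assuming a Spring Data repository called entityRepo for the entity above; the setters for m1 and m2 are placeholders):

// Load the existing entity, change some fields and save it again.
// Because the @Id is set and the row already exists, save() results in
// an UPDATE, not in a new record.
Entity entity = entityRepo.findById(entityId)
        .orElseThrow(() -> new IllegalArgumentException("No entity with id " + entityId));
entity.setM1("new value");
entity.setM2("another value");
entityRepo.save(entity);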
This is an alternative to @davidxxx's answer.
If the transaction in which the entity was fetched is not yet closed (i.e. the entity is still attached), you can simply update the Java object and the changes will be committed to the database when the transaction is committed.
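A minimal sketch of that variant (assuming a Spring-managed transaction; the method and setter names are illustrative):

@Transactional
public void updateM1(Long entityId, String newM1) {
    // The entity stays attached for the duration of the transaction.
    Entity entity = entityRepo.findById(entityId)
            .orElseThrow(() -> new IllegalArgumentException("No entity with id " + entityId));

    // No explicit save() or update() call needed: dirty checking detects the
    // change and flushes the UPDATE when the transaction commits.
    entity.setM1(newM1);
}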
I'm trying to understand EclipseLink's behaviour when I use a native query. I have an entity like this:
class Entity {

    @OneToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "other_entity_id")
    private OtherEntity otherEntity;

    @Column(name = "name")
    private String name;

    // gets ... sets ...
}
and corresponding table looks like:
**ENTITY**
INTEGER ID;
VARCHAR NAME;
OTHER_ENTITY_ID;
And then I run a native query:
Query query = getEntityManager().createNativeQuery("select * from ENTITY", Entity.class);
query.getResultList();
Within Entity I have declared OtherEntity otherEntity, which is annotated with FetchType.LAZY; however, my query selects (*), i.e. all of the columns, including OTHER_ENTITY_ID. The question is: if I run a native query that fetches all columns, will fields annotated with FetchType.LAZY be populated as if they were FetchType.EAGER, or not? I've never worked with EclipseLink before and am trying to decide whether it is worth using, so I would really appreciate any help.
Thanks, Cheers
My first advice is to turn on EclipseLink's SQL logging and execute the equivalent JPQL to load what you are looking for; seeing the SQL EclipseLink generates to accomplish that gives you an understanding of what is required to build objects from your native queries based on your current mappings.
Relationships are generally loaded with a secondary query using the values read in from the foreign keys, so eager or lazy fetching is not affected by the native query used to read in "Entity" - the query requires the other_entity_id value regardless of the fetch type. When required based on eager/lazy loading, EclipseLink will issue the query required by the mapping.
You can change this, though, by marking the relationship to use joining. In this case, EclipseLink will expect not only the Entity values to be in the query, but the referenced OtherEntity values as well.
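For example, with EclipseLink's @JoinFetch annotation from org.eclipse.persistence.annotations (a sketch, not part of standard JPA; note that join fetching effectively makes the relationship loaded from the joined result, so the native SQL would also have to return the OtherEntity columns):

class Entity {

    // EclipseLink reads OtherEntity via an outer join in the same query
    // instead of issuing a secondary query per row.
    @JoinFetch(JoinFetchType.OUTER)
    @OneToOne
    @JoinColumn(name = "other_entity_id")
    private OtherEntity otherEntity;

    @Column(name = "name")
    private String name;

    // gets ... sets ...
}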
I have an entity (say Employee) and a find method which uses a TypedQuery to execute a named query and return the Employee rows. When the properties of the returned Employee instance are changed, they are persisted.
I am trying to figure out the JPA concept behind this and how it is different from an update. Is it good to update a single row like this if only a few column values of the existing rows in the db need to change?
Looking for pointers to the JPA concept that explains this.
Here is the code snippet.
@Entity
@NamedQueries({
        @NamedQuery(name = "Employee.findInActiveEmployee", query = "SELECT e FROM Employee e " +
                "WHERE some_prop = :something")
})
public class Employee implements Serializable {

    @Id
    @NotNull
    @Column(name = "id")
    private String id;

    @Column(name = "name")
    private String name;

    // ...and so on: properties, getters and setters
}
The finder method:
TypedQuery<Employee> query = getEntityManager()
        .createNamedQuery("Employee.findInActiveEmployee", Employee.class);
query.setParameter("something", "somevalue");
try {
    return query.getSingleResult();
} catch (NoResultException e) {
    throw new NoSuchObjectException("somevalue");
}
It's not really different from update.
In JPA you usually don't need to explicitly merge changes, since the JPA implementation will keep track of what data has changed in managed objects (i.e. entities the EntityManager knows about, such as ones that it has just loaded for you) and will make sure to save those changes to the underlying database.
If you don't want that, you can explicitly detach the entity with em.detach(Object o), so the EntityManager no longer manages it. After that you'll need to call merge() to persist any further changes.
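A short sketch (assuming the Employee from the question and an injected EntityManager em; the setter name is illustrative):

Employee employee = query.getSingleResult(); // managed: changes are tracked

em.detach(employee);                  // now detached: changes are no longer tracked
employee.setName("new name");         // this alone will not be persisted on commit

Employee managed = em.merge(employee); // re-attach and copy the pending changes
// on commit, the UPDATE for the changed name is flushed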
Entities you get back from JPQL queries are managed by the EntityManager. In other words, they are attached, and there is no need to merge them (like you would need to do for detached entities).
If you alter the entities you got back from the query and you have an open transaction the changes will be committed back to the database.
If you want to update a large number of entities at the same time, or your entities contain members with a really large serialized footprint, then it might pay off performance-wise to use JPQL update queries.
This may be a simple question, but I'm trying to find out if there is a way to create a JPQL update query that would allow me to update a single persisted entity using a unique column identifier that is not the primary key.
Say I have an entity like the following:
@Entity
public class Customer {

    @Id
    private Long id;

    @Column
    private String uniqueExternalId;

    @Column
    private String firstname;

    ....
}
Updating this entity with a Customer that has the id value set is easy. However, I'd like to update this Customer entity using the uniqueExternalId, without having to pre-query for the local entity and merge the changes in, and without manually constructing a JPQL query with all the fields in it.
Something like
UPDATE Customer c SET c = :customer WHERE c.uniqueExternalId = :externalId
Is something like this possible in JPQL?
You cannot do it in the exact way you describe - by passing an entity reference, but you can use bulk queries to achieve the same effect.
UPDATE Customer c SET c.firstname = :firstname WHERE c.uniqueExternalId = :externalId
Please note that you will have to explicitly define each updated attribute.
It is important to note that bulk queries bypass the persistence context. Entity instances that are managed within the persistence context will not reflect the changes to the records changed by the bulk update. Further, if you use optimistic locking, consider incrementing the @Version field of your entities in the bulk update:
UPDATE Customer c SET c.firstname = :firstname, c.version = c.version + 1 WHERE c.uniqueExternalId = :externalId
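Executed through the EntityManager it could look like this (a sketch; it assumes the Customer entity has a @Version attribute mapped as version):

int updatedRows = em.createQuery(
        "UPDATE Customer c SET c.firstname = :firstname, c.version = c.version + 1 " +
        "WHERE c.uniqueExternalId = :externalId")
    .setParameter("firstname", "John")
    .setParameter("externalId", externalId)
    .executeUpdate();

// Customer instances loaded before this bulk update still hold stale state;
// call em.refresh(customer) if you need to re-read them from the database.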
EDIT: The JPA 2.0 spec advises in § 4.10:
In general, bulk update and delete operations should only be performed
within a transaction in a new persistence context or before fetching
or accessing entities whose state might be affected by such
operations.