I have a rather big system with a specification that is built by several methods on a child entity. So I have a User with a OneToMany on pets, as in this question. My relation is bidirectional, so Pet also has a ManyToOne back to User, and I'm struggling to transform the specification on the child entity so that it applies to the parent entity instead.
Most questions I looked up show how to apply a specification on a different entity, OR how to get a different entity once the specification is executed. This is NOT what I'm looking for.
I'm trying to write a method like this:
public static Specification<User> extendsPetSpecToUser(Specification<Pet> petSpec) {
    // ???
}
but I don't know how to write it. I tried using a Join, but didn't manage to tell JPA to combine the join and the specification, i.e. to run the same constraints from a different root.
Given the specification is big AND also used in other parts to query directly for Pet, rewriting it from a different root isn't really an option.
Thank you for your time, and sorry if this is a duplicate, I really didn't see another question matching my needs.
First, it feels like this problem hides a strange construction of the queries you use. Why would you build a Specification<Pet> if you need a Specification<User> in the end? There might be a code-architecture issue to think about.
Anyway, to achieve the stated goal, have you tried using a subquery?
public class TransformToUser implements Specification<User> {

    private final Specification<Pet> specification;

    public TransformToUser(Specification<Pet> specification) {
        this.specification = specification;
    }

    @Override
    public Predicate toPredicate(
            Root<User> root, CriteriaQuery<?> criteriaQuery, CriteriaBuilder criteriaBuilder) {
        // Build a subquery over Pet and apply the existing Pet specification to it
        Subquery<Pet> subqueryPet = criteriaQuery.subquery(Pet.class);
        Root<Pet> fromPet = subqueryPet.from(Pet.class);
        subqueryPet.select(fromPet);
        subqueryPet.where(
                specification.toPredicate(fromPet, criteriaQuery, criteriaBuilder));
        // Keep users having at least one pet matching the subquery
        return root.join(User_.pets).in(subqueryPet);
    }
}
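Conceptually, this adapter lifts a predicate on Pet to a predicate on User that matches users owning at least one matching pet. A plain-Java sketch of that idea, with hypothetical minimal PetPojo/UserPojo classes (not the JPA entities), shows the same semantics the IN-subquery expresses in SQL:

```java
import java.util.List;
import java.util.function.Predicate;

class PetPojo {
    String name;
    PetPojo(String name) { this.name = name; }
}

class UserPojo {
    List<PetPojo> pets;
    UserPojo(List<PetPojo> pets) { this.pets = pets; }
}

class SpecAdapter {
    // Lift a Pet predicate to a User predicate: a user matches when at least
    // one of their pets matches -- the in-memory analogue of the IN-subquery.
    static Predicate<UserPojo> extendPetToUser(Predicate<PetPojo> petPred) {
        return user -> user.pets.stream().anyMatch(petPred);
    }
}
```

For example, `SpecAdapter.extendPetToUser(p -> p.name.equals("Rex"))` matches exactly the users owning a pet named "Rex".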
Currently I am working with the JpaSpecificationExecutor API, because I need to create dynamic queries based on a lot of optional search criteria. For this I start with a root entity, which is the generic type parameter of the JpaSpecificationExecutor interface.
The code that builds the Specification (which is passed to the repository's findAll method) is the following:
public Specification<T> build() {
    return (root, query, criteriaBuilder) -> {
        // createPredicate builds the predicates from the search criteria
        // (its internals do not matter here)
        Predicate filters = createPredicate(root, criteriaBuilder, searchCriterias);
        return filters;
    };
}
Now everything is working fine, but there is one detail: in the query I am using joins to relate other entities, but when I get the response, as in this code:
List<EXAMPLE> examples = repository.findAll(filters.build());
I am not able to see the filtered data from the other entities; I can only see the fields of the root entity (the EXAMPLE entity), because only those appear in the generated query. So I would like to include the other entities in the SELECT, in order to see the filtered data brought in by the JOINs, but so far I have not been able to do it.
I have seen in some posts that I can use the following calls in my build code; I have tried them, but the new fields do not appear in the generated query:
query.select or query.multiselect
Ref:
Ref. 1
Also, I have found that this does not work with JpaSpecificationExecutor, because the list of attributes in the SELECT is determined by the entity (Ref. 2).
My question is: does anyone know whether there is another way to change the fields in the SELECT with JpaSpecificationExecutor?
Regards.
Suppose an entity model where an Employee has a Supervisor who has an id. Using hibernate-jpamodelgen to generate the meta model for the entities, how can I query a nested field?
For instance, "get all employees whose supervisor has id 4", using JpaSpecificationExecutor:
Page<Employee> getEmployeesBySupervisorId(int id) {
    return findAll((root, query, criteriaBuilder) ->
            criteriaBuilder.equal(root.get(Employee_.supervisor.id), id));
}
Note that Employee_ is the model meta class for Employee (and was generated by Hibernate).
This code will produce an error because the id symbol cannot be found on type SingularAttribute<Employee, Supervisor>. I get that, but it seems like these should somehow be chainable. I can't find great examples of how to do this cleanly.
In order to navigate to related entities, you must use the From#join() method, which works well with the MetaModel:
CriteriaQuery<Employee> cq = criteriaBuilder.createQuery(Employee.class);
Root<Employee> from = cq.from(Employee.class);
Predicate p = criteriaBuilder.equal(from.join(Employee_.supervisor).get(Supervisor_.id), id);
cq.where(p);
See also
Oracle's Java EE Tutorial - Using the Criteria API and Metamodel API to Create Basic Typesafe Queries
Yes, I also stumbled upon the problem that the Metamodel classes do not offer visibility into relationships deeper than one level: while accessing A.b is possible, A.b.c is not.
But there is another possibility besides joins: just chain several get() calls. For this you need a root element (i.e. a CriteriaQuery and CriteriaBuilder).
return criteriaBuilder.equal(root.get(Employee_.supervisor).get(Supervisor_.id), id);
While this still ensures type safety at each step, make sure the whole path is correct, as it is not validated until runtime.
Also, for sorting a result set using the Metamodel, there is a similar solution. Say you want to sort by the Supervisor's id: use JpaSort and JpaSort.path:
JpaSort.of(JpaSort.Direction.ASC, JpaSort.path(Employee_.supervisor).dot(Supervisor_.id));
I'm new to JPA and got stuck on a rather straightforward use case. All I want is to add some conditions to my criteria based on the filters the user passes to my application. The user passes the information as key/value pairs, and we decode them to determine each parameter's type.
E.g.:
1) key=id and value=20: we interpret it as search by id=20, where id is an integer
2) key=name and value='test': we interpret it as search by name='test', where name is a string
This is achieved by a method like this:
public <T> T getSearchValue(Object inputValue) {
    if (condition) {
        return (T) inputValue.toString();
    } else {
        return (T) Integer.valueOf(inputValue.toString());
    }
}
I was thinking I could easily add the condition using the CriteriaBuilder, something like this:
cb.equal(getSearchKey(), getSearchValue(inputValue));
cb.gt(getSearchKey(), getSearchValue(inputValue));
Here cb.equal works fine, but cb.gt (and other operations like lessThan, like, etc.) all give a compilation error saying there are two matching methods and the call is therefore ambiguous.
The first method is gt(Expression<? extends Number>, Expression<? extends Number>)
and the second one is gt(Expression<? extends Number>, Number).
So how do I resolve this and make sure the call resolves to, for example, Number as the second parameter? The problem is that I don't know the type in advance, since it depends on the key the user searches with. Is it possible to dynamically cast to a class somehow, or is there a better approach?
The idea is to supply the restrictions to the where clause of the "SQL statement":
SELECT * FROM table_name WHERE key=value
Lots of people like to write JPQL queries similar to SQL because they are easier to understand when reading. Nevertheless, the Criteria API has some advantages: it is dynamic, less error-prone, requires less attention to security, and is easier to refactor.
The first step is getting the CriteriaBuilder and then creating a CriteriaQuery with it. Afterwards, you define which class you want to build the Root from and use it to access the columns. Joins could also follow.
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<YourClass> cq = cb.createQuery(YourClass.class);
Root<YourClass> root = cq.from(YourClass.class);
As a next step, you want to define the where clause with its restrictions (Predicates). As in a normal SQL statement, there can be only one where; if you have multiple restrictions, they have to be combined, for example with criteriaBuilder.and().
There are several ways of going about this. Here is how I usually do it (others may as well): I create a List with all the restrictions I want to use, turn it into an array, and combine them with cb.and().
List<Predicate> predicates = new ArrayList<>();
predicates.add(cb.equal(root.get("key1"), "value1"));
predicates.add(cb.gt(root.get("key2"), 100));
cq.select(root).where(cb.and(predicates.toArray(new Predicate[predicates.size()])));
Below is a complete example of such a DAO method (the method name is illustrative):
public List<Foo> findFoos(String state, int size, String column, String value) {
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<Foo> cq = cb.createQuery(Foo.class);
Root<Foo> root = cq.from(Foo.class);
List<Predicate> predicates = new ArrayList<>();
predicates.add(cb.equal(root.get(Foo_.bla), state));
predicates.add(cb.gt(root.get(Foo_.blub), size));
predicates.add(cb.equal(root.get(column), value));
cq.select(root).where(cb.and(predicates.toArray(new Predicate[predicates.size()])));
return entityManager.createQuery(cq).getResultList();
}
For the column names (or singular attributes), I used hibernate-jpamodelgen. Have a look at this plugin: it automatically generates metamodel classes with underscores (e.g. Foo_), which makes using column names more typesafe.
Update: if you do not know the names of the columns for your where-clause restrictions beforehand, it is simple to adjust (see the third restriction above).
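The ambiguity from the question can be reproduced without JPA at all: it is plain Java overload resolution interacting with an inferred generic return type. In this minimal sketch, the gt overloads only mimic the shapes of CriteriaBuilder#gt (they are not the real API); an explicit cast on the second argument pins the overload:

```java
class OverloadDemo {
    // Mimics gt(Expression<? extends Number>, Expression<? extends Number>)
    static String gt(Comparable<?> a, Comparable<?> b) { return "expr,expr"; }

    // Mimics gt(Expression<? extends Number>, Number)
    static String gt(Comparable<?> a, Number b) { return "expr,number"; }

    // Mimics the question's getSearchValue: the caller decides T
    @SuppressWarnings("unchecked")
    static <T> T getSearchValue(Object inputValue) {
        return (T) Integer.valueOf(inputValue.toString());
    }

    public static void main(String[] args) {
        // gt("id", getSearchValue("20"));   // does not compile: T fits both overloads
        String picked = gt("id", (Number) getSearchValue("20")); // cast resolves the ambiguity
        System.out.println(picked); // prints "expr,number"
    }
}
```

In the actual Criteria code the same trick should apply: something like `cb.gt(root.<Number>get(key), (Number) getSearchValue(inputValue))` forces the gt(Expression, Number) overload.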
I usually work on 3-tier applications using Hibernate in the persistence layer, and I take care not to use the domain model classes in the presentation layer. This is why I use the DTO (Data Transfer Object) design pattern.
But I always face a dilemma in my entity-DTO mapping: either I lose the benefit of lazy loading, or I add complexity to the code by introducing filters that decide whether or not to call the domain model getters.
Example: consider a DTO UserDto that corresponds to the entity User:
public UserDto toDto(User entity, OptionList... optionList) {
    if (entity == null) {
        return null;
    }
    UserDto userDto = new UserDto();
    userDto.setId(entity.getId());
    userDto.setFirstname(entity.getFirstname());
    if (optionList.length == 0 || optionList[0].contains(User.class, UserOptionList.AUTHORIZATION)) {
        IGenericEntityDtoConverter<Authorization, AuthorizationDto> authorizationConverter =
                converterRegistry.getConverter(Authorization.class);
        List<AuthorizationDto> authorizations =
                new ArrayList<>(authorizationConverter.toListDto(entity.getAuthorizations(), optionList));
        userDto.setAuthorizations(authorizations);
        ...
    }
    return userDto;
}
OptionList is used to filter the mapping and map just what is wanted.
Although this last solution allows lazy loading, it is quite heavy, because the optionList must be specified in the service layer.
Is there any better solution to preserve lazy loading with the DTO design pattern?
For the same entity persistent state, I don't like having fields of an object uninitialized in some execution paths while they might be initialized in others. This causes too many maintenance headaches:
in the better cases it will cause a NullPointerException
if null is also a valid value (and thus does not cause a NullPointerException), it could be mistaken to mean the data was removed and might trigger unexpected removal business rules, while the data is in fact still there
I would rather create a DTO hierarchy of interfaces and/or classes, starting with UserDto. All fields of an actual DTO implementation are filled to mirror the persistent state: if there is data, the field of the DTO is not null.
So then you just need to ask the service layer for the implementation of the DTO you want:
public <T extends UserDto> T toDto(User entity, Class<T> dtoClass) {
...
}
Then in the service layer, you could have a :
Map<Class<? extends UserDto>, UserDtoBuilder> userDtoBuilders = ...
where you register the different builders that will create and initialize the various UserDto implementations.
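A minimal runnable sketch of such a registry, with hypothetical DTO and entity classes (a real builder would copy the relevant persistent state into each variant):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class UserEntity {
    String firstname = "Ada";
}

class UserDto {
    String firstname;
}

// A richer DTO variant that would also mirror the authorizations state
class FullUserDto extends UserDto {
    boolean authorizationsLoaded;
}

class UserDtoRegistry {
    private final Map<Class<? extends UserDto>, Function<UserEntity, ? extends UserDto>> builders =
            new HashMap<>();

    <T extends UserDto> void register(Class<T> dtoClass, Function<UserEntity, T> builder) {
        builders.put(dtoClass, builder);
    }

    // The service layer asks for the DTO implementation it wants
    @SuppressWarnings("unchecked")
    <T extends UserDto> T toDto(UserEntity entity, Class<T> dtoClass) {
        return (T) builders.get(dtoClass).apply(entity);
    }
}
```

Each DTO variant gets its own registered builder, so callers pick the representation (and thus the loading depth) by class, without optionList flags.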
I'm not sure why you want lazy loading, but I guess it's because your UserDto serves multiple representations through the optionList configuration?
I don't know what your presentation layer code looks like, but I guess you have lots of if-else code for each element in optionList?
How about having different representations, i.e. subclasses, instead? I'm asking because I'd like to suggest giving Blaze-Persistence Entity Views a try. Here is a little code example that fits your domain.
@EntityView(User.class)
public interface SimpleUserView {
    // The id of the user entity
    @IdMapping("id")
    int getId();

    String getFirstname();
}

@EntityView(Authorization.class)
public interface AuthorizationView {
    // The id of the authorization entity
    @IdMapping("id")
    int getId();

    // Whatever properties you want
}

@EntityView(User.class)
public interface AuthorizationUserView extends SimpleUserView {
    List<AuthorizationView> getAuthorizations();
}
These are the DTOs with some metadata about the mapping to the entity model. And here comes the usage:
@Transactional
public <T> T findByName(String name, EntityViewSetting<T, CriteriaBuilder<T>> setting) {
// Omitted DAO part for brevity
EntityManager entityManager = // jpa entity manager
CriteriaBuilderFactory cbf = // query builder factory from Blaze-Persistence
EntityViewManager evm = // manager that can apply entity views to query builders
CriteriaBuilder<User> builder = cbf.create(entityManager, User.class)
.where("name").eq(name);
List<T> result = evm.applySetting(builder, setting)
.getResultList();
return result;
}
Now if you use it like service.findByName("someName", EntityViewSetting.create(SimpleUserView.class)) it will generate a query like
SELECT u.id, u.firstname
FROM User u
WHERE u.name = :param_1
and if you use the other view like service.findByName("someName", EntityViewSetting.create(AuthorizationUserView.class)) it will generate
SELECT u.id, u.firstname, a.id
FROM User u LEFT JOIN u.authorizations a
WHERE u.name = :param_1
Apart from being able to get rid of the manual object mapping, the performance will improve because of using optimized queries!
I am trying to add NoSQL data to my JPA-based application, following this tutorial.
The entity I want to add was previously modeled without NoSQL in this way:
Triple.java
@Entity
@IdClass(ConceptPk.class)
@Table(name = "triple")
public class TripleDBModel {

    protected List<Annotation> annotations;
    public Concept conceptUriSubject;
    public Concept conceptUriObject;
    public Concept conceptUriPredicate;

    @ManyToMany(
        cascade = { CascadeType.ALL },
        fetch = FetchType.LAZY
    )
    @JoinTable(name = "triple_has_annotation",
        joinColumns = { @JoinColumn(name = "uri_concept_subject"), @JoinColumn(name = "uri_concept_object"), @JoinColumn(name = "uri_concept_predicate") },
        inverseJoinColumns = @JoinColumn(name = "annotation_id"))
    public List<Annotation> getAnnotations() {
        return annotations;
    }

    public void setAnnotations(List<Annotation> annotations) {
        this.annotations = annotations;
    }
}
ConceptPk.java
@Embeddable
public class ConceptPk implements java.io.Serializable {

    private static final long serialVersionUID = 1L;

    public Concept conceptUriSubject;
    public Concept conceptUriObject;
    public Concept conceptUriPredicate;

    @Id
    @ManyToOne(cascade = CascadeType.MERGE)
    @JoinColumn(name = "uri_concept_subject")
    public Concept getConceptUriSubject() {
        return conceptUriSubject;
    }

    public void setConceptUriSubject(Concept conceptUriSubject) {
        this.conceptUriSubject = conceptUriSubject;
    }
}
I am omitting the repetitions, but all 3 Concepts are part of the primary key, i.e. of the @Id.
Adapting this entity to NOSQL:
@Entity
@IdClass(ConceptPk.class)
@Table(name = "triple")
@NodeEntity(partial = true)
public class TripleDBModel {

    // Fields referring to other entities shouldn't be initialized
    protected List<Annotation> annotations;
    // public Concept conceptUriSubject;
    // public Concept conceptUriObject;
    // public Concept conceptUriPredicate;

    @RelatedTo(type = "conceptUriSubject", elementClass = Concept.class)
    Set<Concept> conceptUriSubject;
Now the question, which is actually two questions:
A) @RelatedTo(type = "conceptUriSubject", elementClass = Concept.class) gives me an error in Eclipse, which advises me to add a cast, but that doesn't solve the error. I don't know whether I must add an annotation or anything else to the class.
B) As specified, the primary key is composed of the 3 concepts, and ConceptPk.java is required. The JPA modelling is fine, but I don't know how to do the same in NoSQL.
Mujer,
your domain looks like it would be much easier to model in the graph database itself, as what you are annotating here are RDF-like triples.
You are right that Spring Data Graph does not currently support compound keys. We will look into that in the future, but I can't promise anything.
In the graph, you could model your nodes as Concepts (URIs), with the relationship type representing what you want to express with that Concept:
(TripleDBModel) - SUBJECT -> (Concept [URI = ""])
(TripleDBModel) - PREDICATE -> (Concept [URI = ""])
(TripleDBModel) - OBJECT -> (Concept [URI = ""])
(TripleDBModel) - HAS_ANNOTATION -> (Annotation)
This could be easily modelled with Spring Data Graph (or also with the pure Neo4j API)
@NodeEntity
class Concept {
    private URI uri;
}

@NodeEntity
class Triple {
    // will be automatically mapped to a relationship with the name "subject"
    private Concept subject;

    // or provide an explicit mapping
    @RelatedTo(elementClass = Concept.class, type = "PREDICATE")
    private Concept predicate;

    private Concept object;

    @RelatedTo(elementClass = Annotation.class, type = "HAS_ANNOTATION")
    private Set<Annotation> annotations;
}
The Eclipse error is annoying but just a wrong visualization; the AspectJ team is in the process of fixing it.
Hope that helps. If you need further advice, just ask.
Michael
Well, you haven't said which NoSQL engine you're going to, which is pretty important. Most NoSQL data stores don't support the concept of a composite primary key, and some won't even give you unique columns in the first place.
First, note that I work for a NoSQL vendor, http://gigaspaces.com/ - I'm not unbiased.
However, going from JPA to NoSQL is not hard, no matter what your engine is. With GigaSpaces, for example, you can use JPA to talk to the data grid with very few changes, although then you're still stuck with JPA.
To really move beyond JPA, you need to think about your data as data rather than as organizational structure: you basically have a triple, which means each NoSQL data item consists of three fields (predicate, subject, object, as you've used). For most NoSQL engines you'll probably also want an id, just for efficiency's sake.
The ID is the "primary key", and enforcing unique triples after that is going to be on your end more than the NoSQL engine's; this is one area where NoSQL "suffers" compared to SQL, but it's also where you find the greatest speed and storage improvements.
For some NoSQL engines, then, you'll build a document, consisting of the three data items, and you'd just query for that document before writing it into the database.
I could give you an example for many NoSQL engines (and certainly can for GigaSpaces) but I don't know which one you're targeting or why.
Have a look at Kundera.
http://mevivs.wordpress.com/2012/02/13/how-to-crud-and-jpa-association-handling-using-kundera/
Use JPA over NoSQL.
PlayOrm is another solution, which is JPA-like but with NoSQL-specific features: for example, you can put @NoSqlEmbedded on a List of Strings, and it is embedded in that row, leading to a table where each row can have a different length than the others. This is a common NoSQL pattern, and it is why NoSQL ORMs need a slight break from JPA.
PlayOrm also supports joins and S-SQL (scalable SQL).
later,
Dean