The usual way is to map a single query result to a class, e.g.
List<MyPojo> p = ctx.fetch("select * from table...").into(MyPojo.class);
But what if I want to build a POJO based on the results of several queries? Note that some of the POJO's fields are objects.
class MyPojo {
    String field1;          // from query1
    int field2;             // from query1
    List<SomeClass> field3; // from query2
    SomeClass2 field4;      // from query3
    ...
}
Will really appreciate any thoughts.
You could obviously map your results manually by running several queries and assembling them into your result type yourself (there's no automatic way to do this in jOOQ). But much better than that, just use:
jOOQ's code generator (things will get so much better with that!)
MULTISET, nested records, and ad-hoc converters
An example would be:
List<MyPojo> result =
ctx.select(
T.FIELD1,
T.FIELD2,
multiset(
select(U.COL1, U.COL2)
.from(U)
.where(U.T_ID.eq(T.ID))
).convertFrom(r -> r.map(Records.mapping(SomeClass::new))),
field(
select(row(X.COL1, X.COL2).mapping(SomeClass2::new))
.from(X)
.where(X.T_ID.eq(T.ID))
))
.from(T)
.fetch(Records.mapping(MyPojo::new));
All of this is completely type safe and doesn't use any reflection. All you need for those constructor references to work is to add a matching constructor to your MyPojo, or make it a record instead:
record MyPojo(
String field1,
int field2,
List<SomeClass> field3,
SomeClass2 field4
) {}
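For completeness, a minimal sketch of the plain-class alternative with an explicit constructor (hypothetical, using the field names from the question; the constructor parameter order must match the projected columns):
class MyPojo {
    final String field1;
    final int field2;
    final List<SomeClass> field3;
    final SomeClass2 field4;

    // This is the constructor the MyPojo::new reference above binds to
    MyPojo(String field1, int field2, List<SomeClass> field3, SomeClass2 field4) {
        this.field1 = field1;
        this.field2 = field2;
        this.field3 = field3;
        this.field4 = field4;
    }
}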
Related
I have 2 entities:
record Customer(String name, List<CustomerContact> contactHistory) {}
record CustomerContact(LocalDateTime contactAt, CustomerContact.Type type) {
public enum Type {
TEXT_MESSAGE, EMAIL
}
}
These are persisted in a schema with 2 tables:
CREATE TABLE customer(
"id". BIGSERIAL PRIMARY KEY,
"name" TEXT NOT NULL
);
CREATE TABLE customer_contact(
"customer_id" BIGINT REFERENCES "customer" (ID) NOT NULL,
"type" TEXT NOT NULL,
"contact_at" TIMESTAMPTZ NOT NULL DEFAULT (now() AT TIME ZONE 'utc')
);
I want to retrieve the details of my Customers with a single query, and use the arrayAgg method to add the contactHistory to each customer. I have a query like this:
//pseudo code
DSL.select(field("customer.name"))
.select(arrayAgg(field("customer_contact.contact_at"))) //TODO How to aggregate both fields into a CustomerContact object
.from(table("customer"))
.join(table("customer_contact")).on(field("customer_contact.customer_id").eq("customer.id"))
.groupBy(field("customer_contact.customer_id"))
.fetchOptional()
.map(asCustomer());
The problem I have with this is that arrayAgg only works with a single field. I want to use 2 fields, bind them into a single object (CustomerContact), and then use that as the basis for the arrayAgg.
Apologies if I have not explained this clearly! Any help much appreciated.
Rather than using ARRAY_AGG, how about using the much more powerful MULTISET_AGG or MULTISET to get the most out of jOOQ's type safe capabilities? Combine that with ad-hoc conversion for type safe mapping to your Java records, as shown also in this article. Your query would then look like this:
Using MULTISET_AGG
List<Customer> customers =
ctx.select(
CUSTOMER.NAME,
multisetAgg(CUSTOMER_CONTACT.CONTACT_AT, CUSTOMER_CONTACT.TYPE)
.convertFrom(r -> r.map(Records.mapping(CustomerContact::new))))
.from(CUSTOMER)
.join(CUSTOMER_CONTACT).on(CUSTOMER_CONTACT.CUSTOMER_ID.eq(CUSTOMER.ID))
.groupBy(CUSTOMER_CONTACT.CUSTOMER_ID)
.fetch(Records.mapping(Customer::new));
Note that the entire query type checks. If you change anything about the query or about your records, it won't compile anymore, giving you additional type safety. This is assuming that your Type enum is either:
Generated from a PostgreSQL ENUM type
Converted automatically using an enum converter, attached to generated code (see the converter sketch below)
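For the second option, a minimal sketch of such a converter, assuming the type column is stored as TEXT (the converter class name is hypothetical; it would be attached to CUSTOMER_CONTACT.TYPE via a forcedType in the code generator configuration):
// Hypothetical converter mapping the TEXT column to the CustomerContact.Type enum,
// based on jOOQ's built-in EnumConverter
public class ContactTypeConverter extends org.jooq.impl.EnumConverter<String, CustomerContact.Type> {
    public ContactTypeConverter() {
        super(String.class, CustomerContact.Type.class);
    }
}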
Depending on your taste, using implicit joins could slightly simplify the query for you:
List<Customer> customers =
ctx.select(
CUSTOMER_CONTACT.customer().NAME,
multisetAgg(CUSTOMER_CONTACT.CONTACT_AT, CUSTOMER_CONTACT.TYPE)
.convertFrom(r -> r.map(Records.mapping(CustomerContact::new))))
.from(CUSTOMER_CONTACT)
.groupBy(CUSTOMER_CONTACT.CUSTOMER_ID)
.fetch(Records.mapping(Customer::new));
It's not a big deal in this query, but in a more complex query, it can reduce complexity.
Using MULTISET
An alternative is to nest your query instead of aggregating, like this:
List<Customer> customers =
ctx.select(
CUSTOMER.NAME,
multiset(
select(CUSTOMER_CONTACT.CONTACT_AT, CUSTOMER_CONTACT.TYPE)
.from(CUSTOMER_CONTACT)
.where(CUSTOMER_CONTACT.CUSTOMER_ID.eq(CUSTOMER.ID))
).convertFrom(r -> r.map(Records.mapping(CustomerContact::new))))
.from(CUSTOMER)
.fetch(Records.mapping(Customer::new));
Code generation
For this answer, I was assuming you're using the code generator (you should!), as it would greatly contribute to this code being type safe, and make this answer more readable.
Much of the above can be done without code generation (except implicit joins), but I'm sure this answer nicely demonstrates the benefits in terms of type safety.
Let's say we have the tables car and parts. To fetch all cars with their parts, we use the following query:
@Transactional
public List<ReadCarDto> getAllCars() {
return getDslContext().select(
CAR.ID,
CAR.NAME,
CAR.DESCRIPTION,
multiset(
selectDistinct(
PARTS.ID,
PARTS.NAME,
PARTS.TYPE,
PARTS.DESCRIPTION
).from(PARTS).where(PARTS.CAR_ID.eq(CAR.ID))
).convertFrom(record -> record.map(record1 -> new ReadPartDto(
record1.value1(),
record1.value2(),
record1.value3(),
record1.value4()
)))
).from(CAR).fetch(record -> new ReadCarDto(
record.value1(),
record.value2(),
record.value3(),
record.value4()
));
}
Question: I always want to fetch the full car and part rows. Is there a way to reuse my existing private RecordMapper<CarRecord, ReadCarDto> getCarMapper() method that already implements the DTO conversion (For parts too of course)? Otherwise I have to retype the conversion in my multiset queries.
It looks like the selectDistinct method only has support for 1 - 22 fields and select().from(CAR) doesn't provide a multiset method.
Sidenote: I don't want to use the reflection conversion.
Your question reminds me of this one. You probably want to use a nested row() expression to produce nested records. Something like this:
return getDslContext()
.select(
row(
CAR.ID,
CAR.NAME,
CAR.DESCRIPTION,
...
).mapping(carMapper),
multiset(...)
)
A future jOOQ version will let you use CAR directly as a nested row in your projections, see https://github.com/jOOQ/jOOQ/issues/4727. As of jOOQ 3.16, this isn't available yet.
Consider this trivial query:
SELECT 1 as first, 2 as second
When using Hibernate we can then do something like:
em.createNativeQuery(query).getResultList()
However, there seems to be no way of getting the aliases (or column names). This would be very helpful for creating a List<Map<String, Object>> where each map would be a row with its aliases, for instance in this case: [{first: 1, second: 2}].
Is there a way to do something like that?
I would suggest a slightly different approach which may meet your needs.
In JPA 2.1 there is a feature called "result set mapping".
Basically you have to define a POJO class which would hold the result values (all the values must be passed using the constructor):
public class ResultClass{
private String fieldOne;
private String fieldTwo;
public ResultClass(String fieldOne, String fieldTwo){
this.fieldOne = fieldOne;
this.fieldTwo = fieldTwo;
}
}
Then you have to declare the mapping on one of your entities (it does not matter on which, it just has to be a declared @Entity):
@SqlResultSetMapping(name = "ResultMapping", classes = {
    @ConstructorResult(targetClass = ResultClass.class,
        columns = {@ColumnResult(name = "columnOne"), @ColumnResult(name = "columnTwo")})
})
The columnOne and columnTwo are aliases as declared in the select clause of the native query.
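For context, a native query whose aliases match that mapping could look like this (the table and column names here are hypothetical):
// Hypothetical query; what matters is that the aliases match the @ColumnResult names above
String query = "SELECT t.col_a AS columnOne, t.col_b AS columnTwo FROM some_table t";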
And finally use in the query creation:
List<ResultClass> results = em.createNativeQuery(query, "ResultMapping").getResultList();
In my opinion this is a more elegant, "a level above" solution, as you are not working with generic Map key/value pairs but with a concrete POJO class.
You can use the ResultTransformer interface and implement a custom mapper to map values to their aliases.
Here is an example: https://vladmihalcea.com/why-you-should-use-the-hibernate-resulttransformer-to-customize-result-set-mappings/
With ResultTransformer you can easily customize the result set type, especially if you need the aliases.
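A minimal sketch, assuming Hibernate 5.x where setResultTransformer(..) is still available (deprecated, but functional); fetchAsMaps is a hypothetical helper name:
import java.util.List;
import java.util.Map;
import javax.persistence.EntityManager;
import org.hibernate.query.NativeQuery;
import org.hibernate.transform.AliasToEntityMapResultTransformer;

// Returns each row of the native query as a Map keyed by the column aliases
@SuppressWarnings({"unchecked", "deprecation"})
static List<Map<String, Object>> fetchAsMaps(EntityManager em, String sql) {
    return em.createNativeQuery(sql)
             .unwrap(NativeQuery.class)
             .setResultTransformer(AliasToEntityMapResultTransformer.INSTANCE)
             .getResultList();
}

// e.g. fetchAsMaps(em, "SELECT 1 AS first, 2 AS second") -> [{first=1, second=2}]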
I have a table experiment and a table tags. There may be many tags for one experiment.
schema:
 ----------                       ----------
 | Table1 |  1                 n  | Table2 |
 |        | <-------------------> |        |
 ----------                       ----------
(experiment)                        (tags)
Is it possible to create a query with jOOQ which returns the experiments and the corresponding list of tags?
Something like Result<Record> where each record is an ExperimentRecord plus a list of tags, or a Map<ExperimentRecord, List<TagRecord>>.
I also have a query which returns only one result; is there something convenient for that case?
EDIT: Java 8, latest jOOQ.
There are many ways to materialise a nested collection with SQL, and / or with jOOQ. I'm just going through some of them:
Using joins
If you don't deeply nest those collections, denormalising (flattening) your results with a JOIN might do the trick for you, without adding too much overhead as data is being duplicated. Essentially, you'll write:
Map<ExperimentRecord, Result<Record>> map =
DSL.using(configuration)
.select()
.from(EXPERIMENT)
.join(TAGS)
.on(...)
.fetchGroups(EXPERIMENT);
The above map contains experiment records as keys, and nested collections containing all the tags as values.
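If you prefer domain objects over a map of records, consuming that map could look like this (a sketch; ExperimentWithTags is the same hypothetical wrapper type used in the two-query example below):
// Turn each (ExperimentRecord, Result<Record>) entry into a wrapper,
// converting the joined records back into TagsRecords via Result.into(Table)
List<ExperimentWithTags> result = map.entrySet().stream()
    .map(e -> new ExperimentWithTags(e.getKey(), e.getValue().into(TAGS)))
    .collect(Collectors.toList());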
Creating two queries
If you want to materialise a complex object graph, using joins might no longer be optimal. Instead, you probably want to collect the data in your client from two distinct queries:
Result<ExperimentRecord> experiments =
DSL.using(configuration)
.selectFrom(EXPERIMENT)
.fetch();
And
Result<TagsRecord> tags =
DSL.using(configuration)
.selectFrom(TAGS)
.where(... restrict to the previous experiments ...)
.fetch();
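One way to write that restriction, as a sketch (using the TAGS.EXPERIMENT_ID column referenced elsewhere in this answer and the experiments result from the first query):
// e.g. restrict the tags to the experiments fetched above
.where(TAGS.EXPERIMENT_ID.in(experiments.getValues(EXPERIMENT.ID)))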
And now, merge the two results in your client's memory, e.g.
List<ExperimentWithTags> result = experiments.stream()
    .map(e -> new ExperimentWithTags(
        e,
        tags.stream()
            .filter(t -> e.getId().equals(t.getExperimentId()))
            .collect(Collectors.toList())
    ))
    .collect(Collectors.toList());
Nesting collections using SQL/XML or SQL/JSON
This question didn't require it, but others may find this question in search for a way of nesting to-many relationships with jOOQ. I've provided an answer here. Starting with jOOQ 3.14, you can use your RDBMS's SQL/XML or SQL/JSON capabilities, and then use Jackson, Gson, or JAXB to nest collections like this:
List<Experiment> experiments =
ctx.select(
EXPERIMENT.asterisk(),
field(
select(jsonArrayAgg(jsonObject(TAGS.fields())))
.from(TAGS)
.where(TAGS.EXPERIMENT_ID.eq(EXPERIMENT.ID))
).as("tags")
)
.from(EXPERIMENT)
.fetchInto(Experiment.class);
Where Experiment is a custom Java class like this:
class Experiment {
long id;
String name;
List<Tag> tags;
}
class Tag {
long id;
String name;
}
Nesting collections using MULTISET
Even better than the above, you can hide the use of SQL/XML or SQL/JSON behind jOOQ 3.15's new MULTISET operator support. Assuming the above Java classes are Java 16 records (or any other immutable classes), you can even map nested collections type safely into your DTOs:
List<Experiment> experiments =
ctx.select(
EXPERIMENT.ID,
EXPERIMENT.NAME,
multiset(
select(TAGS.ID, TAGS.NAME)
.from(TAGS)
.where(TAGS.EXPERIMENT_ID.eq(EXPERIMENT.ID))
).as("tags").convertFrom(r -> r.map(Records.mapping(Tag::new)))
)
.from(EXPERIMENT)
.fetch(Records.mapping(Experiment::new));
Where Experiment is a custom Java class like this:
record Experiment(long id, String name, List<Tag> tags) {}
record Tag(long id, String name) {}
See also this blog post for more information.
You can now use SimpleFlatMapper to map your result to a Tuple2<ExperimentRecord, List<TagRecord>>. All you need to do is:
1 - create a mapper, specifying the key column (assumed here to be id)
JdbcMapper mapper =
JdbcMapperFactory
.newInstance()
.addKeys(EXPERIMENT.ID.getName())
.newMapper(new TypeReference<Tuple2<ExperimentRecord, List<TagRecord>>>() {});
2 - use the mapper on the ResultSet of your query
try (ResultSet rs = DSL.using(configuration)
.select()
.from(EXPERIMENT)
.join(TAGS)
.on(...)
.fetchResultSet()) {
Stream<Tuple2<ExperimentRecord, List<TagRecord>>> stream = mapper.stream(rs);
....
}
See here for more details
We have an SQL statement which is executed by Jdbi (org.skife.jdbi.v2). For binding parameters we use Jdbi's bind method:
Handle handle = ...
Query<Map<String, Object>> sqlQuery = handle.createQuery(query);
sqlQuery.bind(...)
However, we have a problem with IN lists, and currently we are using String.format for those. So our query can look like this:
SELECT DISTINCT
tableOne.columnOne,
tableTwo.columnTwo,
tableTwo.columnThree
FROM tableOne
JOIN tableTwo
ON tableOne.columnOne = tableTwo.columnOne
WHERE tableTwo.columnTwo = :parameterOne
AND tableTwo.columnThree IN (%s)
The %s is replaced via String.format, so we have to generate a proper string in Java code. Then, after all %s placeholders are replaced, we use Jdbi's bind method to bind all other parameters (:parameterOne or ?).
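For illustration, the current workaround looks roughly like this (a sketch; the variable names and QUERY_TEMPLATE are hypothetical):
// The IN list is rendered into the SQL text before binding the other parameters
String inList = columnThreeValues.stream()
        .map(v -> "'" + v.replace("'", "''") + "'")   // manual quoting, exactly what we'd like to avoid
        .collect(java.util.stream.Collectors.joining(", "));
String sql = String.format(QUERY_TEMPLATE, inList);   // QUERY_TEMPLATE contains the %s

Query<Map<String, Object>> sqlQuery = handle.createQuery(sql);
sqlQuery.bind("parameterOne", parameterOne);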
Is there a way to replace String.format with jdbi? There is a method bind(String, Object) but it doesn't handle lists/arrays by default. I have found this article which explains how to write our own factory for binding custom objects but it looks like a lot of effort, especially for something that should be already supported.
The article you linked also describes the @BindIn annotation. This provides a general-purpose implementation for lists.
@UseStringTemplate3StatementLocator
public interface MyQuery {
    @SqlQuery("select id from foo where name in (<nameList>)")
    List<Integer> getIds(@BindIn("nameList") List<String> nameList);
}
Please note that you'll have to escape all angle brackets < like this: \\<. There is a previous discussion on SO: How to do in-query in jDBI?
I just wanted to add an example since I recently spent considerable time getting a slightly more complex scenario to work:
Query:
select * from sometable where id <:id and keys in (<keys>)
What worked for me:
@UseStringTemplate3StatementLocator
public interface someDAO {
....
....
// This is the method that uses BindIn
@Mapper(someClassMapper.class)
@SqlQuery("select something from sometable where age \\< :age and name in (<names>)")
List<someclass> someMethod (@Bind("age") long age, @BindIn("names") List<String> names);
@Mapper(someClassMapper.class)
@SqlQuery("select something from sometable where id = :id")
List<someclass> someMethod1 (@Bind("id") long id);
...
...
}
Note: I also had to add the dependency below, since I am using @UseStringTemplate3StatementLocator:
<dependency>
<groupId>org.antlr</groupId>
<artifactId>stringtemplate</artifactId>
<version>3.2.1</version>
</dependency>
The main thing to observe in the above example: you only need to escape the less-than operator (i.e. <) and not the <> that surround the collection variable (names).
As you can see, I did not use a sql.stg file to write my queries in. Initially I incorrectly assumed that when using @UseStringTemplate3StatementLocator, we have to write the queries in a sql.stg file. However, I never got my sql.stg file to work, and I eventually reverted to writing the queries within the DAO class using @SqlQuery.
For the most recent Jdbi version, things got easier:
public interface MyDao {
#SqlQuery("select id from foo where name in (<nameList>)")
List<Integer> getIds(#BindList("nameList") List<String> nameList);
}
@BindIn → @BindList, and it no longer requires StringTemplate.
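A minimal usage sketch, assuming Jdbi 3 with the SqlObject plugin on the classpath (the JDBC URL and the argument list are placeholders):
// Uses org.jdbi.v3.core.Jdbi and org.jdbi.v3.sqlobject.SqlObjectPlugin:
// register the SqlObject plugin and call the DAO declared above
Jdbi jdbi = Jdbi.create("jdbc:postgresql://localhost/mydb");  // placeholder URL
jdbi.installPlugin(new SqlObjectPlugin());

List<Integer> ids = jdbi.withExtension(MyDao.class,
        dao -> dao.getIds(List.of("alice", "bob")));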
Reference: https://jdbi.org/