Is it possible to map static properties after selecting data from the DB?
For example, I want the result of some field to always be ENUM_VALUE. I'll add an example to make myself clearer.
Target DTO I want to fill via jOOQ:
internal data class MytargetClass(val property: Int, val static_value: MyEnum)
Since static_value is a val and not nullable, I have to initialize that property when constructing the object, so creating the object with only property and mapping static_value afterwards is not possible.
This is a solution that works: I add my enum value as a string via DSL.val, but this approach is not really something that I want.
// Repository file, other fields and method omitted for simplicity
jooq
.select(
t.property.`as`("property"),
DSL.`val`(Enum.MY_VALUE.name).`as`("static_value"),
)
// from and where clause omitted for simplicity
.fetchInto(MytargetClass::class.java)
But I would like to ask if there is a way to do something like
// Repository file, other fields and method omitted for simplicity
jooq
.select(
t.property.`as`("property"),
)
// from and where clause omitted for simplicity
.fetchInto(MytargetClass::class.java) { data -> data.static_value = Enum.MY_VALUE // possibly more transformations of the object inside this lambda }
So I don't have to select these properties statically via SQL, and can therefore bypass problems with ENUM dialects and other issues linked to this approach.
Using DefaultRecordMapper
Just use a custom RecordMapper via ResultQuery.fetch(RecordMapper) and call apply on the mapped value:
.fetch {
    it.into(MytargetClass::class.java).apply {
        // note: this requires static_value to be declared as a mutable var
        static_value = Enum.MY_VALUE
    }
}
Using any other RecordMapper
The above just assumes you want to continue using the DefaultRecordMapper, which is reflection based. But you don't have to. You can do anything you want with it:
.fetch {
MytargetClass(it[t.property], Enum.MY_VALUE)
}
Related
I have 2 entities:
record Customer(String name, List<CustomerContact> contactHistory) {}
record CustomerContact(LocalDateTime contactAt, Type type) {
public enum Type {
TEXT_MESSAGE, EMAIL
}
}
These are persisted in a schema with 2 tables:
CREATE TABLE customer(
"id". BIGSERIAL PRIMARY KEY,
"name" TEXT NOT NULL
);
CREATE TABLE customer_contact(
"customer_id" BIGINT REFERENCES "customer" (ID) NOT NULL,
"type" TEXT NOT NULL,
"contact_at" TIMESTAMPTZ NOT NULL DEFAULT (now() AT TIME ZONE 'utc')
);
I want to retrieve the details of my Customers with a single query, and use the arrayAgg method to add the contactHistory to each customer. I have a query like this:
//pseudo code
DSL.select(field("customer.name"))
.select(arrayAgg(field("customer_contact.contact_at"))) // TODO How to aggregate both fields into a CustomerContact object
.from(table("customer"))
.join(table("customer_contact")).on(field("customer_contact.customer_id").eq("customer.id"))
.groupBy(field("customer_contact.customer_id"))
.fetchOptional()
.map(asCustomer());
The problem I have with this is that arrayAgg will only work with a single field. I want to use 2 fields, bind them into a single object (CustomerContact), and then use that as the basis for the arrayAgg.
Apologies if I have not explained this clearly! Any help much appreciated.
Rather than using ARRAY_AGG, how about using the much more powerful MULTISET_AGG or MULTISET to get the most out of jOOQ's type safe capabilities? Combine that with ad-hoc conversion for type safe mapping to your Java records, as shown also in this article. Your query would then look like this:
Using MULTISET_AGG
List<Customer> customers =
ctx.select(
CUSTOMER.NAME,
multisetAgg(CUSTOMER_CONTACT.CONTACT_AT, CUSTOMER_CONTACT.TYPE)
.convertFrom(r -> r.map(Records.mapping(CustomerContact::new))))
.from(CUSTOMER)
.join(CUSTOMER_CONTACT).on(CUSTOMER_CONTACT.CUSTOMER_ID.eq(CUSTOMER.ID))
.groupBy(CUSTOMER_CONTACT.CUSTOMER_ID)
.fetch(Records.mapping(Customer::new));
Note that the entire query type checks. If you change anything about the query or about your records, it won't compile anymore, giving you additional type safety. This is assuming that your Type enum is either:
Generated from a PostgreSQL ENUM type
Converted automatically using an enum converter, attached to generated code (see the sketch below)
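For the second option, a minimal sketch of what such a converter could look like, assuming the type column stays TEXT (attaching it to generated code would then happen through the code generator's forcedType configuration):

import org.jooq.Converter;
import org.jooq.impl.EnumConverter;

class TypeConverterExample {
    // jOOQ's built-in EnumConverter maps VARCHAR/TEXT values to the enum by its literal name
    static final Converter<String, CustomerContact.Type> TYPE_CONVERTER =
        new EnumConverter<>(String.class, CustomerContact.Type.class);
}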
Depending on your tastes, using implicit joins could slightly simplify the query for you?
List<Customer> customers =
ctx.select(
CUSTOMER_CONTACT.customer().NAME,
multisetAgg(CUSTOMER_CONTACT.CONTACT_AT, CUSTOMER_CONTACT.TYPE)
.convertFrom(r -> r.map(Records.mapping(CustomerContact::new))))
.from(CUSTOMER_CONTACT)
.groupBy(CUSTOMER_CONTACT.CUSTOMER_ID)
.fetch(Records.mapping(Customer::new));
It's not a big deal in this query, but in a more complex query, it can reduce complexity.
Using MULTISET
An alternative is to nest your query instead of aggregating, like this:
List<Customer> customers =
ctx.select(
CUSTOMER.NAME,
multiset(
select(CUSTOMER_CONTACT.CONTACT_AT, CUSTOMER_CONTACT.TYPE)
.from(CUSTOMER_CONTACT)
.where(CUSTOMER_CONTACT.CUSTOMER_ID.eq(CUSTOMER.ID))
).convertFrom(r -> r.map(Records.mapping(CustomerContact::new))))
.from(CUSTOMER)
.fetch(Records.mapping(Customer::new));
Code generation
For this answer, I was assuming you're using the code generator (you should!), as it would greatly contribute to this code being type safe, and make this answer more readable.
Much of the above can be done without code generation (except implicit joins), but I'm sure this answer nicely demonstrates the benefits in terms of type safety.
Let's say we have tables car and parts. To fetch all cars with their parts we use the following query:
@Transactional
public List<ReadCarDto> getAllCars() {
return getDslContext().select(
CAR.ID,
CAR.NAME,
CAR.DESCRIPTION,
multiset(
selectDistinct(
PARTS.ID,
PARTS.NAME,
PARTS.TYPE,
PARTS.DESCRIPTION
).from(PARTS).where(PARTS.CAR_ID.eq(CAR.ID))
).convertFrom(record -> record.map(record1 -> new ReadPartDto(
record1.value1(),
record1.value2(),
record1.value3(),
record1.value4()
)))
).from(CAR).fetch(record -> new ReadCarDto(
record.value1(),
record.value2(),
record.value3(),
record.value4()
));
}
Question: I always want to fetch the full car and part rows. Is there a way to reuse my existing private RecordMapper<CarRecord, ReadCarDto> getCarMapper() method that already implements the DTO conversion (For parts too of course)? Otherwise I have to retype the conversion in my multiset queries.
It looks like the selectDistinct method only has support for 1 - 22 fields and select().from(CAR) doesn't provide a multiset method.
Sidenote: I don't want to use the reflection conversion.
Your question reminds me of this one. You probably want to use a nested row() expression to produce nested records. Something like this:
return getDslContext()
.select(
row(
CAR.ID,
CAR.NAME,
CAR.DESCRIPTION,
...
).mapping(carMapper),
multiset(...)
)
A future jOOQ version will let you use CAR directly as a nested row in your projections, see https://github.com/jOOQ/jOOQ/issues/4727. As of jOOQ 3.16, this isn't available yet.
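As a side note, the hand-written value1()..value4() conversion in the question's multiset could likely be shortened with Records.mapping, shown in the previous answer (assuming ReadPartDto has a matching four-argument constructor):

multiset(
    selectDistinct(PARTS.ID, PARTS.NAME, PARTS.TYPE, PARTS.DESCRIPTION)
    .from(PARTS)
    .where(PARTS.CAR_ID.eq(CAR.ID))
)
// Records.mapping adapts the ReadPartDto constructor reference to the Record4 rows
.convertFrom(r -> r.map(Records.mapping(ReadPartDto::new)))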
Consider this trivial query:
SELECT 1 as first, 2 as second
When using Hibernate we can then do something like:
em.createNativeQuery(query).getResultList()
However, there seems to be no way of getting the aliases (or column names). This would be very helpful for creating a List<Map<String, Object>> where each map would be a row keyed by the aliases, for instance in this case: [{first: 1, second: 2}].
Is there a way to do something like that?
I would suggest a slightly different approach which may meet your needs.
In JPA 2.1 there is a feature called "result set mapping".
Basically you have to define a POJO class which would hold the result values (all the values must be passed using the constructor):
public class ResultClass{
private String fieldOne;
private String fieldTwo;
public ResultClass(String fieldOne, String fieldTwo){
this.fieldOne = fieldOne;
this.fieldTwo = fieldTwo;
}
}
Then you have to declare the mapping on one of your entities (it does not matter on which; it just has to be a declared @Entity):
@SqlResultSetMapping(name="ResultMapping", classes = {
    @ConstructorResult(targetClass = ResultClass.class,
        columns = {@ColumnResult(name="columnOne"), @ColumnResult(name="columnTwo")})
})
The columnOne and columnTwo are aliases as declared in the select clause of the native query.
And finally use in the query creation:
List<ResultClass> results = em.createNativeQuery(query, "ResultMapping").getResultList();
In my opinion this is more elegant and "a level above" solution as you are not working with a generic Map key/values pairs but with a concrete POJO class.
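For completeness, the aliases in the native query itself must match the @ColumnResult names; a hypothetical query for the mapping above could look like this:

// Hypothetical table and columns; what matters is that the aliases match the @ColumnResult names
String query = "SELECT t.field_one AS columnOne, t.field_two AS columnTwo FROM some_table t";
List<ResultClass> results = em.createNativeQuery(query, "ResultMapping").getResultList();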
You can use the ResultTransformer interface and implement a custom mapper that maps values to their aliases.
Here is an example: https://vladmihalcea.com/why-you-should-use-the-hibernate-resulttransformer-to-customize-result-set-mappings/
With a ResultTransformer you can easily customize the result set type, especially if you need the aliases.
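For instance, a minimal sketch of a transformer that builds the List<Map<String, Object>> the question asks for (the unwrap call assumes a Hibernate-backed EntityManager; ResultTransformer is deprecated in newer Hibernate versions but still works, and Hibernate's built-in AliasToEntityMapResultTransformer does much the same thing):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import javax.persistence.EntityManager;
import org.hibernate.query.NativeQuery;
import org.hibernate.transform.ResultTransformer;

@SuppressWarnings({"unchecked", "deprecation"})
public List<Map<String, Object>> queryAsMaps(EntityManager em, String sql) {
    return em.createNativeQuery(sql)
        .unwrap(NativeQuery.class)
        .setResultTransformer(new ResultTransformer() {
            @Override
            public Object transformTuple(Object[] tuple, String[] aliases) {
                // one map per row, keyed by the column aliases from the select clause
                Map<String, Object> row = new LinkedHashMap<>();
                for (int i = 0; i < aliases.length; i++) {
                    row.put(aliases[i], tuple[i]);
                }
                return row;
            }

            @Override
            public List transformList(List collection) {
                return collection; // no list-level post-processing needed
            }
        })
        .getResultList();
}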
I was following an example of building an OData service with Olingo in Java (a Maven project). The provided example doesn't have any database interaction; it uses a Storage class that contains hard-coded data.
You can find the sample code on Git; please refer to example p0_all at the provided URL.
Does anyone know how we can connect the Git example to a database and perform CRUD operations?
Please help me with some good examples or concepts.
Thanks in advance.
I recently built an oData producer using Olingo and found myself similarly frustrated. I think that part of the issue is that there really are a lot of different ways to build an oData service with Olingo, and the data access piece is entirely up to the developer to sort out in their own project.
Firstly, you need an application that has a database connection set up. So completely disregarding Olingo, you should have an app that connects to and can query a database. If you are uncertain of how to build a java application that can query a MySQL datasource, then you should Google around for tutorials that are related to that problem and have nothing to do with Olingo.
Next you need to write the methods and queries to perform CRUD operations in your application. Again, these methods have nothing to do with Olingo.
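For example, a bare-bones JDBC sketch of such a data-access method; the foo table, its columns, and the Foo class (which also appears later in this answer) are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public List<Foo> getAllFoos(Connection connection) throws SQLException {
    List<Foo> foos = new ArrayList<>();
    // plain JDBC; nothing here is Olingo-specific
    try (PreparedStatement stmt = connection.prepareStatement("SELECT id, foo_name FROM foo");
         ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            foos.add(new Foo(rs.getLong("id"), rs.getString("foo_name")));
        }
    }
    return foos;
}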
Where Olingo starts to come into play is in your implementation of the processor classes: EntityCollectionProcessor, EntityProcessor etc. (note that there are other concerns, such as setting up your CsdlEntityTypes and Schema/Service Document, but those are outside the scope of your question)
Let's start by looking at EntityCollectionProcessor. By implementing the EntityCollectionProcessor interface you need to override the readEntityCollection() function. The purpose of this function is to parse the oData URI for the entity name, fetch an EntityCollection for that entity, and then serialize the EntityCollection into an oData compliant response. Here's the implementation of readEntityCollection() from your example link:
public void readEntityCollection(ODataRequest request, ODataResponse response, UriInfo uriInfo, ContentType responseFormat)
throws ODataApplicationException, SerializerException {
// 1st: we have to retrieve the requested EntitySet from the uriInfo object
// (representation of the parsed service URI)
List<UriResource> resourcePaths = uriInfo.getUriResourceParts();
UriResourceEntitySet uriResourceEntitySet = (UriResourceEntitySet) resourcePaths.get(0);
// in our example, the first segment is the EntitySet
EdmEntitySet edmEntitySet = uriResourceEntitySet.getEntitySet();
// 2nd: fetch the data from backend for this requested EntitySetName
// it has to be delivered as EntityCollection object
EntityCollection entitySet = getData(edmEntitySet);
// 3rd: create a serializer based on the requested format (json)
ODataSerializer serializer = odata.createSerializer(responseFormat);
// 4th: Now serialize the content: transform from the EntitySet object to InputStream
EdmEntityType edmEntityType = edmEntitySet.getEntityType();
ContextURL contextUrl = ContextURL.with().entitySet(edmEntitySet).build();
final String id = request.getRawBaseUri() + "/" + edmEntitySet.getName();
EntityCollectionSerializerOptions opts = EntityCollectionSerializerOptions.with().id(id).contextURL(contextUrl).build();
SerializerResult serializerResult = serializer.entityCollection(serviceMetadata, edmEntityType, entitySet, opts);
InputStream serializedContent = serializerResult.getContent();
// Finally: configure the response object: set the body, headers and status code
response.setContent(serializedContent);
response.setStatusCode(HttpStatusCode.OK.getStatusCode());
response.setHeader(HttpHeader.CONTENT_TYPE, responseFormat.toContentTypeString());
}
You can ignore (and reuse) everything in this example except for the "2nd" step:
EntityCollection entitySet = getData(edmEntitySet);
This line of code is where Olingo finally starts to interact with our underlying system, and the pattern that we see here informs how we should set up the rest of our CRUD operations.
The function getData(edmEntitySet) can be anything you want, in any class you want. The only restriction is that it must return an EntityCollection. So what you need to do is call a function that queries your MySQL database and returns all records for the given entity (using the string name of the entity). Then, once you have a List, or Set (or whatever) of your records, you need to convert it to an EntityCollection.
As an aside, I think that this is probably where the disconnect between the Olingo examples and real world application comes from. The code behind that getData(edmEntitySet); call can be architected in infinitely different ways, depending on the design pattern used in the underlying system (MVC etc.), styling choices, scalability requirements etc.
Here's an example of how I created an EntityCollection from a List that returned from my query (keep in mind that I am assuming you know how to query your MySQL datasource and have already coded a function that retrieves all records for a given entity):
private List<Foo> getAllFoos(){
// ... code that queries dataset and retrieves all Foo records
}
// loop over List<Foo>, converting each instance of Foo into an Olingo Entity
private EntityCollection makeEntityCollection(List<Foo> fooList){
EntityCollection entitySet = new EntityCollection();
for (Foo foo: fooList){
entitySet.getEntities().add(createEntity(foo));
}
return entitySet;
}
// Convert instance of Foo object into an Olingo Entity
private Entity createEntity(Foo foo){
Entity tmpEntity = new Entity()
.addProperty(createPrimitive(Foo.FIELD_ID, foo.getId()))
.addProperty(createPrimitive(Foo.FIELD_FOO_NAME, foo.getFooName()));
return tmpEntity;
}
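The createPrimitive helper used above isn't part of Olingo itself; in the Olingo tutorials it is roughly a one-line wrapper around the Property class, something like:

import org.apache.olingo.commons.api.data.Property;
import org.apache.olingo.commons.api.data.ValueType;

// wraps a raw value in an Olingo Property with a primitive value type
private Property createPrimitive(final String name, final Object value) {
    return new Property(null, name, ValueType.PRIMITIVE, value);
}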
Just for added clarity, getData(edmEntitySet) might look like this:
public EntityCollection getData(String edmEntitySet){
// ... code to determine which query to call based on entity name
List<Foo> foos = getAllFoos();
EntityCollection entitySet = makeEntityCollection(foos);
return entitySet;
}
If you can find an Olingo example that uses a DataProvider class, there are some basic examples of how you might set up the // ...code to determine which query to call based on entity name. I ended up modifying that pattern heavily using Java reflection, but that is totally unrelated to your question.
So getData(edmEntitySet) is a function that takes an entity name, queries the datasource for all records of that entity (returning a List<Foo>), and then converts that List<Foo> into an EntityCollection. The EntityCollection is made by calling the createEntity() function which takes the instance of my Foo object and turns it into an Olingo Entity. The EntityCollection is then returned to the readEntityCollection() function and can be properly serialized and returned as an oData response.
This example exposes a bit of the architecture problem that Olingo has with its own examples. In my example Foo is an object that has constants that are used to identify the field names, which are used by Olingo to generate the oData Schema and Service Document. This object has a method to return its own CsdlEntityType, as well as a constructor, its own properties and getters/setters etc. You don't have to set your system up this way, but for the scalability requirements of my project this is how I chose to do things.
This is the general pattern that Olingo uses. Override methods of an interface, then call functions in a separate part of your system that interact with your data in the desired manner. Then convert the data into Olingo readable objects so they can do whatever "oData stuff" needs to be done in the response. If you want to implement CRUD for a single entity, then you need to implement EntityProcessor and its various CRUD methods, and inside those methods, you need to call the functions in your system (totally separate from any Olingo code) that create(), read() (single entity), update(), or delete().
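As a rough skeleton (the method signatures come from Olingo's EntityProcessor interface; readSingleFromDatabase and convertToEntity are hypothetical stand-ins for your own data layer):

import org.apache.olingo.commons.api.format.ContentType;
import org.apache.olingo.server.api.ODataApplicationException;
import org.apache.olingo.server.api.ODataLibraryException;
import org.apache.olingo.server.api.ODataRequest;
import org.apache.olingo.server.api.ODataResponse;
import org.apache.olingo.server.api.processor.EntityProcessor;
import org.apache.olingo.server.api.uri.UriInfo;

public abstract class MyEntityProcessor implements EntityProcessor {

    // init(OData, ServiceMetadata) and the serialization boilerplate are omitted;
    // they follow the same shape as in readEntityCollection() above

    @Override
    public void readEntity(ODataRequest request, ODataResponse response,
            UriInfo uriInfo, ContentType responseFormat)
            throws ODataApplicationException, ODataLibraryException {
        // 1. parse the URI for the entity set and key, as in readEntityCollection()
        // 2. call your own data layer (nothing Olingo-specific):
        //        Foo foo = readSingleFromDatabase(key);
        // 3. convert the result to an Olingo Entity and serialize it into the response:
        //        Entity entity = convertToEntity(foo);
    }

    // createEntity(), updateEntity() and deleteEntity() follow the same pattern:
    // deserialize/parse the request, call your own CRUD methods, then build
    // the oData response from the result.
}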
I'm using Hibernate Envers in my app to track changes in all fields of my entities.
I'm using the @Audited(withModifiedFlag=true) annotation to do it.
The records are being correctly recorded in the database and the _mod fields correctly indicate the changed fields.
I want to get a particular revision of some entity together with the information about which fields were changed. I'm using the following method to do it:
List<Object[]> results = reader.createQuery()
.forRevisionsOfEntity(this.getDao().getClazz(), false, true)
.add(AuditEntity.id().eq(id))
.getResultList();
This method returns a list of object arrays with my entity as the first element.
The problem is that the returned entity doesn't have any information about the changed fields. So, my question is: how to get the information about the changed fields?
I know that this question is a bit old now but I was trying to do this and didn't really find any answers.
There doesn't seem to be a nice way to achieve this, but here is how I went about it.
Firstly you need to use projections, which no longer give you a nice, already-mapped entity model. You'll still get back an array of Objects, but each object in the array corresponds to a projection that you added (in order).
final List<Object[]> resultList = reader.createQuery()
.forRevisionsOfEntity(this.getDao().getClazz(), false, true)
// if you want revision properties like revision number/type etc
.addProjection(AuditEntity.revisionNumber())
// for your normal entity properties
.addProjection(AuditEntity.id())
.addProjection(AuditEntity.property("title")) // for each of your entity's properties
// for the modification properties
.addProjection(new AuditProperty<Object>(new ModifiedFlagPropertyName(new EntityPropertyName("title"))))
.add(AuditEntity.id().eq(id))
.getResultList();
You then need to map each result manually. This part is up to you, but I use a separate class as a revision model, as it contains extra data on top of the normal entity data. If you wanted, you could probably achieve this with @Transient properties on your entity class though.
final List<MyEntityRevision> results = resultList.stream().map(this::transformRevisionResult)
.collect(Collectors.toList());
private MyEntityRevision transformRevisionResult(Object[] revisionObjects) {
final MyEntityRevision rev = new MyEntityRevision();
rev.setRevisionNumber((Integer) revisionObjects[0]);
rev.setId((Long) revisionObjects[1]);
rev.setTitle((String) revisionObjects[2]);
rev.setTitleModified((Boolean) revisionObjects[3]);
return rev;
}