I have an interface method in my repository
@Query("from Alert a")
Stream<Alert> streamAll();
Since I may have a lot of alerts, but I only need to process them one record at a time and I don't need the associated data afterwards, I write code as follows:
streamAll()
    .forEach(alert -> {
        doProcessing(alert);
        entityManager.detach(alert);
    });
where I explicitly detach each alert. I was wondering: is there a nicer way of doing this, so that I can simply write
streamAllWithDetach()
.forEach(this::doProcessing);
Without writing a lot of custom code or AOP wrappers.
Just keep it simple: let's create a method to wrap

doProcessing(alert);
entityManager.detach(alert);
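For example, a small generic helper that wraps any processing action with a cleanup step. The names here (withCleanup, DetachingStreams) are made up for illustration; in the real code the cleanup argument would be entityManager::detach, so the call site becomes streamAll().forEach(withCleanup(this::doProcessing, entityManager::detach)):

```java
import java.util.function.Consumer;
import java.util.stream.Stream;

public class DetachingStreams {

    // Wraps a processing action so that a cleanup action (e.g. entityManager::detach)
    // always runs after it, even if processing throws.
    static <T> Consumer<T> withCleanup(Consumer<T> body, Consumer<T> cleanup) {
        return item -> {
            try {
                body.accept(item);
            } finally {
                cleanup.accept(item);
            }
        };
    }

    public static void main(String[] args) {
        // Stand-in demo: in the real code the cleanup step would be entityManager::detach
        StringBuilder log = new StringBuilder();
        Stream.of("a", "b")
              .forEach(withCleanup(log::append, x -> log.append('!')));
        System.out.println(log); // a!b!
    }
}
```

Because the detach happens in a finally block, the entity is detached even when doProcessing throws, which the plain forEach version above does not guarantee.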
I will usually have 5-6 events per aggregate and would like not to store projections in the DB. What would be the easiest way to always build the view projection at query time?
The short answer is that there is no easy/quick way to do this.
However, it most certainly is doable to implement a 'replay given events at request time' setup.
What I would suggest consists of several steps:
1. Create the Query Model you would like to return, which can handle events (use @EventHandler annotated methods on the model).
2. Create a component which can handle the query and return the Query Model from step 1 (use a @QueryHandler annotated method for this).
3. The query-handling component should be able to retrieve a stream of events from the EventStore. If this is based on an aggregateIdentifier, use the EventStore#readEvents(String) method. If you need the entire event stream, you need the StreamableMessageSource#openStream(TrackingToken) method (note: the EventStore interface implements StreamableMessageSource).
4. Upon query handling, create an AnnotationEventHandlerAdapter, giving it a fresh instance of your Query Model.
5. For every event in the event stream you've created in step 3, call the AnnotationEventHandlerAdapter#handle(EventMessage) method. This method will call the @EventHandler annotated methods on your Query Model object.
6. Once the stream is depleted, you are ensured that all necessary events for your Query Model have been handled. Thus, you can now return the Query Model.
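The steps above can be sketched roughly as follows. The model and query class names (AlertSummary, FetchAlertSummary) are hypothetical; the Axon calls shown (EventStore#readEvents, AnnotationEventHandlerAdapter#handle) are the ones referenced in the steps:

```java
import org.axonframework.eventhandling.AnnotationEventHandlerAdapter;
import org.axonframework.eventsourcing.eventstore.DomainEventStream;
import org.axonframework.eventsourcing.eventstore.EventStore;
import org.axonframework.queryhandling.QueryHandler;

public class AlertSummaryQueryHandler {

    private final EventStore eventStore;

    public AlertSummaryQueryHandler(EventStore eventStore) {
        this.eventStore = eventStore;
    }

    @QueryHandler
    public AlertSummary handle(FetchAlertSummary query) throws Exception {
        // Fresh Query Model instance carrying the @EventHandler annotated methods
        AlertSummary model = new AlertSummary();
        AnnotationEventHandlerAdapter adapter = new AnnotationEventHandlerAdapter(model);

        // Replay the aggregate's events through the model at request time
        DomainEventStream events = eventStore.readEvents(query.getAggregateId());
        while (events.hasNext()) {
            adapter.handle(events.next());
        }
        // Stream depleted: all events have been applied, so the model is complete
        return model;
    }
}
```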
So, again, I don't think this is overly trivial, easy or quick to set up.
Additionally, step 3 has quite a caveat in there. Retrieving the stream of a given Aggregate based on the Aggregate Identifier is pretty fast/concise, as an Aggregate in general doesn't have a lot of events.
However, retrieving the Event Stream based on a TrackingToken, which you'd need if your Query Model spans several Aggregates, can mean pulling in the entire event store to instantiate your models on the fly. Granted, since you're dealing with a TrackingToken you can fine-tune the point in time from which the Event Stream should return events, but the chances are pretty high that the result will be incomplete and relatively slow.
However, you stated you want to retrieve events for a given Aggregate Identifier.
I'd thus think this should be a workable solution in your scenario.
Hope this helps!
I have to save records to a database and then send some data to a restful web service. I need them to happen together. If one fails then the other should not happen as well. So for example, consider the following code:
saveRecords(records);
sendToRestService(records);
If saveRecords fails with a database constraint violation, then I don't want the REST call to happen. I could make saveRecords run in its own transaction and commit it before the call to sendToRestService, but there is still the potential for the REST service to be down. I could track whether the REST call succeeds and, if it doesn't, retry later. I was just wondering if there is a better strategy, since this seems like a common scenario.
Thanks for any advice.
Why don't you try the Observer design pattern?
I'm assuming saveRecords(records) and sendToRestService(records) methods are in two different classes.
If you use the Observer design pattern, you can notify the class containing the sendToRestService() method when the calling class's object changes.
Ref: Observer Design Pattern
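A minimal sketch of that idea (all names here, RecordSaver and SaveListener, are made up): the saving class notifies registered listeners only after the save succeeds, so a constraint violation prevents the REST call from ever firing.

```java
import java.util.ArrayList;
import java.util.List;

public class ObserverDemo {

    interface SaveListener {
        void onSaved(List<String> records);
    }

    static class RecordSaver {
        private final List<SaveListener> listeners = new ArrayList<>();

        void addListener(SaveListener listener) {
            listeners.add(listener);
        }

        void saveRecords(List<String> records) {
            if (records.isEmpty()) {
                // Stand-in for a database constraint violation
                throw new IllegalArgumentException("nothing to save");
            }
            // ... persist the records here ...

            // Only reached when the save succeeded
            for (SaveListener listener : listeners) {
                listener.onSaved(records);
            }
        }
    }

    public static void main(String[] args) {
        RecordSaver saver = new RecordSaver();
        // In the real application this listener would call sendToRestService(records)
        saver.addListener(records -> System.out.println("sending " + records.size() + " records"));
        saver.saveRecords(List.of("r1", "r2")); // prints: sending 2 records
    }
}
```

Note that this only orders the two calls. If sendToRestService fails after a successful commit, you still need compensation or a retry, as the question anticipates.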
So, I'm working on using jOOQ to create a caching layer over Postgres. I've been using the MockConnection/MockDataProvider objects to intercept every query, and this is working, but I'm having a few issues.
First, how do I determine between reads and writes? That is, how do I tell whether a query is an insert/update/etc or a select, given only the MockExecuteContext that's passed into the execute method in MockDataProvider?
And I'm a bit confused on how I can do invalidations. The basic scheme I'm implementing right now is that whenever a "write" query is made to a table, I invalidate all cached queries that involve that table. This goes back to my first question, on telling different types of queries from each other, but also brings up another issue: how would I identify the tables used in a query given only the sql string and the bindings (both are attributes of MockExecuteContext)?
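For reference, the naive fallback I could imagine looks like this: keyword prefix matching plus a regex (these helper names are just mine; I assume jOOQ's parser would be the robust way to do this):

```java
import java.util.LinkedHashSet;
import java.util.Locale;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SqlClassifier {

    enum Kind { READ, WRITE }

    // Naive classification by leading keyword; CTEs ("WITH ...") and vendor
    // syntax would need real SQL parsing to handle correctly.
    static Kind classify(String sql) {
        String head = sql.trim().toLowerCase(Locale.ROOT);
        return head.startsWith("select") ? Kind.READ : Kind.WRITE;
    }

    // Naive table extraction: identifiers following FROM/JOIN/INTO/UPDATE.
    private static final Pattern TABLE =
            Pattern.compile("(?i)\\b(?:from|join|into|update)\\s+([a-zA-Z_][a-zA-Z0-9_.]*)");

    static Set<String> tables(String sql) {
        Set<String> out = new LinkedHashSet<>();
        Matcher m = TABLE.matcher(sql);
        while (m.find()) {
            out.add(m.group(1).toLowerCase(Locale.ROOT));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(classify("SELECT * FROM alerts"));            // READ
        System.out.println(tables("select * from alerts join users u")); // [alerts, users]
    }
}
```

I suspect this breaks on CTEs, subqueries, and quoted identifiers, which is why I'm hoping jOOQ exposes something better.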
Also, is this a correct approach to caching? My first thought was to override the fetch() method, but that method is final, and I'd rather not change something already embedded in jOOQ itself. This is the only other way I could think of to intercept all requests so I could build a separate, persistent caching layer.
I have seen this question (https://groups.google.com/forum/#!topic/jooq-user/xSjrvnmcDHw), but I'm still not clear on how Lukas recommended identifying tables from the query object. I could try a Postgres NOTIFY-based approach, but I wanted something native to jOOQ first. I've also seen this issue (https://github.com/jOOQ/jOOQ/issues/2665) pop up a lot, but I'm not sure how it applies.
Keep in mind that I'm new to jOOQ, so it's quite possible that I'm missing something obvious.
Thanks!
I'm having a problem where JPA is trying to lazily load my data when I don't want it to. Essentially, I'm using a service to retrieve some data, and when I go to serialize that data into JSON, the JSON library triggers Hibernate to try to lazily load it. Is there any way to stop this? I've given an example below.
// Web controller method
public String getEmployeesByQuery(String query) {
    Gson gson = new Gson();
    List<Employee> employees = employeeService.findEmployeesByQuery(query);
    // Here is where the problem occurs: the gson.toJson() method (I imagine)
    // uses my getters to format the JSON output, which triggers Hibernate to
    // try to lazily load my data...
    return gson.toJson(employees);
}
Is it possible to set JPA/hibernate to not try and lazily load the data?
UPDATE: I realize that you can use FetchType.EAGER, but what if I don't want to eagerly load that data? I just want to stop Hibernate from trying to retrieve more data; I already have the data I want. Right now, whenever I access a getter, Hibernate throws a "no session or session is closed" error, which makes sense because my transaction was already committed by my service.
Thanks!
There are several options:
If you always need to load your collection eagerly, you can specify fetch = FetchType.EAGER in your mapping, as suggested in other answers.
Otherwise you can enable eager fetching for a particular query:
By using JOIN FETCH clause in HQL/JPQL query:
SELECT e FROM Employee e JOIN FETCH e.children WHERE ...
By using fetch profiles (in JPA you can access the Hibernate Session via em.unwrap(Session.class)).
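A sketch of the fetch-profile route (the profile name and the children mapping are illustrative; @FetchProfile, FetchMode, and Session#enableFetchProfile are Hibernate-specific APIs, not part of standard JPA):

```java
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;

import org.hibernate.annotations.FetchMode;
import org.hibernate.annotations.FetchProfile;

@Entity
@FetchProfile(name = "employee-with-children", fetchOverrides = {
        @FetchProfile.FetchOverride(entity = Employee.class,
                                    association = "children",
                                    mode = FetchMode.JOIN)
})
public class Employee {

    @Id
    private Long id;

    @OneToMany
    private List<Employee> children; // lazy by default
}

// At query time, enable the profile before running the query:
//
//   Session session = em.unwrap(Session.class);
//   session.enableFetchProfile("employee-with-children");
//   List<Employee> employees = em.createQuery("SELECT e FROM Employee e", Employee.class)
//                                .getResultList(); // children fetched via join for this query
```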
You really have two options:
1. You can copy the data from the Employee entity into an object that is not being proxied by Hibernate.
2. See if there is a way to keep the JSON library from reflecting over the entire object graph. I know some JSON libraries allow you to serialize only some of an object's properties.
Personally, I would think #1 would be easier if your library only uses reflection.
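Option #1 could look like this (Employee here is a stand-in for the Hibernate-proxied entity, and EmployeeDto is a made-up name): copy only the fields you need into a plain object with no relationships, then serialize that.

```java
import java.util.List;
import java.util.stream.Collectors;

public class EmployeeDtoDemo {

    // Stand-in for the Hibernate-managed entity with a lazy collection
    static class Employee {
        final String name;
        final List<Employee> reports; // lazy in the real entity

        Employee(String name, List<Employee> reports) {
            this.name = name;
            this.reports = reports;
        }
    }

    // Plain DTO with no relationships: safe to hand to gson.toJson(...)
    static class EmployeeDto {
        final String name;

        EmployeeDto(String name) {
            this.name = name;
        }
    }

    // Copy while the session is still open, so no lazy loading happens later
    static List<EmployeeDto> toDtos(List<Employee> employees) {
        return employees.stream()
                        .map(e -> new EmployeeDto(e.name))
                        .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Employee> employees = List.of(new Employee("Ann", List.of()));
        System.out.println(toDtos(employees).get(0).name); // Ann
    }
}
```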
As others have stated, this is not an issue with JPA/Hibernate but rather with the JSON serialization library you are using. You should instruct Gson to exclude the properties you don't want traversed.
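For instance, with a Gson ExclusionStrategy (the "reports" field name is hypothetical; GsonBuilder#setExclusionStrategies and FieldAttributes are Gson's real API):

```java
import com.google.gson.ExclusionStrategy;
import com.google.gson.FieldAttributes;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class GsonConfig {

    // Builds a Gson instance that skips the lazy association, so serialization
    // never touches the uninitialized Hibernate proxy.
    static Gson lazySafeGson() {
        return new GsonBuilder()
                .setExclusionStrategies(new ExclusionStrategy() {
                    @Override
                    public boolean shouldSkipField(FieldAttributes f) {
                        // "reports" stands in for whatever lazy field you have
                        return f.getName().equals("reports");
                    }

                    @Override
                    public boolean shouldSkipClass(Class<?> clazz) {
                        return false;
                    }
                })
                .create();
    }
}
```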
Yes:
@*ToMany(fetch = FetchType.EAGER)
I suggest you make a fetched copy of the entities you want to use outside of a transaction. That way, the lazy loading will occur from within a transaction and you can pass Gson a plain, non-enhanced POJO.
You can use Dozer to do this. It is very flexible, and with a little configuration (read: you're going to lose your hair configuring it) you can even retrieve only part of the data you want to send to Gson.
You could always change the fetch attribute to FetchType.EAGER, but it is also worth considering whether your transactions have the right scope. Collections will be loaded correctly if they are accessed within a transaction.
Your problem is that you are serializing the data. We ran into the same sort of problem with Flex and JPA/Hibernate. The trick is, depending on how much you want to mangle things, to either:
1. Change your data model so it doesn't chase after the data you don't want.
2. Copy the data you do want into some sort of DTO that has no relationships to worry about.
3. Assuming you're using Hibernate, add the Open-Session-in-View filter... it's something like that; it will keep the session open while you serialize the entire database. ;)
Option one is what we did for the first big project we tackled, but it ruined the data access library for any sort of general-purpose use. Since then we've tended more toward option two.
YMMV
The easy and straightforward thing to do is create new data classes (something like DTOs).
Use Hibernate.isInitialized() to check whether an object has been initialized by Hibernate.
I am checking whether Gson lets me override anything; I will post here if I find anything new.
I am somewhat new to using JPA -- I'll put that out there right off the bat. I'm getting more familiar with it, but there are big holes in my knowledge right now.
I am working on an application that uses JPA, and deletes entities using the EntityManager.remove(Object entity) function. However, the application also links into a third-party application, and I would like to add logic that gets executed whenever a certain type of Entity is removed from the persistence layer.
My question is this: is there a way to add logic to the EntityManager.remove(Object entity) function at the Entity class level, such that every time that type of entity is deleted the extra logic is executed?
Thanks much.
An entity class may have methods annotated with @PreRemove or @PostRemove.
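A sketch (the Certificate entity and its callback bodies are made up; @PreRemove and @PostRemove are standard JPA lifecycle annotations):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PostRemove;
import javax.persistence.PreRemove;

@Entity
public class Certificate {

    @Id
    private Long id;

    @PreRemove
    void beforeRemove() {
        // Runs when EntityManager.remove(...) is called on this entity,
        // before the delete is flushed to the database.
    }

    @PostRemove
    void afterRemove() {
        // Runs after the delete; e.g. notify the third-party application here.
    }
}
```

If you'd rather keep the logic out of the entity, the same annotations can live in a separate class registered via @EntityListeners on the entity.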
If you are using EclipseLink, it has a much more fine-grained native event system via the DescriptorEventListener interface.