I currently have a default Spring architecture: Repository, Service, Controller (Spring Web MVC), with a Jackson JSON mapper as the "view". All my Repository/Service/Controller methods look like:
public Collection<Pet> findPetsWithName(String name) {}
So basically each layer retrieves data, does some calculations, and returns the result to the next layer.
With increasing data size I have been playing with Spring's JdbcTemplate, fetch size settings, and RowCallbackHandler in order to "stream" database results rather than fetching them all at once.
My question is now: can I apply the "callback" approach to all layers, not only the Repository layer, so that all results are put into a callback function instead of being returned as a Collection? Does it work with Spring MVC views? I think I'd end up with a chained callback of:
RowCallbackHandler(ServiceCallbackHandler(ControllerCallbackHandler(SpringViewHandler(HttpServletResponse))))
public void findPetsWithName(String name, Callback<Pet> callback) {}
Does anyone have experience with this approach? Are there existing patterns or templates for it? I think there is only a benefit for large data sizes, because it is more difficult to design.
The only time I had a use for streaming data from the row mapper to the response was when we were storing large encrypted binary data in the database and wanted to stream it as-is, to be decrypted by our thick client.
Assuming this is the kind of situation you are thinking about, you should use a ResultSetExtractor.
You can get the stream from the result set in the callback (assuming your data type is blob-equivalent) and pipe it to the response output stream, which is accepted as a parameter by your repo method.
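For example, here is a minimal sketch of that approach with Spring's JdbcTemplate; the table, column, and method names are only illustrative:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;
import org.springframework.util.FileCopyUtils;

public class DocumentRepository {

    private final JdbcTemplate jdbcTemplate;

    public DocumentRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // The response's OutputStream is passed in, so the blob is piped out
    // directly instead of being materialized in memory.
    public void streamDocument(long id, OutputStream out) {
        ResultSetExtractor<Void> extractor = rs -> {
            if (rs.next()) {
                try (InputStream in = rs.getBinaryStream("payload")) {
                    FileCopyUtils.copy(in, out); // stream the data as-is
                } catch (IOException e) {
                    throw new IllegalStateException("Failed to stream blob", e);
                }
            }
            return null;
        };
        jdbcTemplate.query("SELECT payload FROM documents WHERE id = ?",
                ps -> ps.setLong(1, id), // PreparedStatementSetter
                extractor);
    }
}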
Let me know if you are looking to implement a design where each row should be mapped to an object and the callback mechanism should pass the objects back to the higher layers one by one.
I will usually have 5-6 events per aggregate and would prefer not to store projections in the DB. What would be the easiest way to always build the view projection at query time?
The short answer to this is that there is no easy/quick way to do it.
However, it most certainly is doable to implement a 'replay given events at request time' set up.
What I would suggest consists of several steps:
Create the Query Model you would like to return, which can handle events (use @EventHandler annotated methods on the model).
Create a component which can handle the query and return the Query Model from step 1 (use a @QueryHandler annotated method for this).
The query-handling component should be able to retrieve a stream of events from the EventStore. If this is based on an aggregate identifier, use the EventStore#readEvents(String) method. If you need the entire event stream, use the StreamableMessageSource#openStream(TrackingToken) method (note: the EventStore interface implements StreamableMessageSource).
Upon query handling, create an AnnotationEventHandlerAdapter, giving it a fresh instance of your Query Model.
For every event in the stream you retrieved in step 3, call the AnnotationEventHandlerAdapter#handle(EventMessage) method. This method will invoke the @EventHandler annotated methods on your Query Model object.
Once the stream is depleted, you are ensured that all necessary events for your Query Model have been dealt with. Thus you can now return the Query Model; the sketch after this list pulls these steps together.
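A minimal sketch of those steps, assuming Axon 4 APIs (PetSummary, PetNamedEvent, and FindPetSummaryQuery are hypothetical names for illustration):

import org.axonframework.eventhandling.AnnotationEventHandlerAdapter;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.eventsourcing.eventstore.DomainEventStream;
import org.axonframework.eventsourcing.eventstore.EventStore;
import org.axonframework.queryhandling.QueryHandler;

// Hypothetical event, query, and Query Model, for illustration only.
class PetNamedEvent {
    private final String name;
    PetNamedEvent(String name) { this.name = name; }
    String getName() { return name; }
}

class FindPetSummaryQuery {
    private final String aggregateId;
    FindPetSummaryQuery(String aggregateId) { this.aggregateId = aggregateId; }
    String getAggregateId() { return aggregateId; }
}

class PetSummary {
    private String name;

    @EventHandler // step 1: the Query Model handles events itself
    void on(PetNamedEvent event) { this.name = event.getName(); }

    String getName() { return name; }
}

public class PetSummaryQueryHandler {

    private final EventStore eventStore;

    public PetSummaryQueryHandler(EventStore eventStore) {
        this.eventStore = eventStore;
    }

    @QueryHandler // step 2: component handling the query
    public PetSummary handle(FindPetSummaryQuery query) throws Exception {
        // Step 4: fresh Query Model instance per request, wrapped in an adapter.
        PetSummary model = new PetSummary();
        AnnotationEventHandlerAdapter adapter = new AnnotationEventHandlerAdapter(model);

        // Steps 3 and 5: replay the aggregate's events through the model's
        // @EventHandler annotated methods.
        DomainEventStream events = eventStore.readEvents(query.getAggregateId());
        while (events.hasNext()) {
            adapter.handle(events.next());
        }
        // Step 6: the stream is depleted, so the model is up to date.
        return model;
    }
}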
So, again, I don't think this is overly trivial, easy or quick to set up.
Additionally, step 3 has quite a caveat in there. Retrieving the stream of a given Aggregate based on the Aggregate Identifier is pretty fast/concise, as an Aggregate in general doesn't have a lot of events.
However, retrieving the Event Stream based on a TrackingToken, which you'd need if your Query Model spans several Aggregates, can mean you pull in the entire event store to instantiate your models on the fly. Granted, you can fine-tune the point in time from which you want the Event Stream to return events, as you're dealing with a TrackingToken, but the chances are pretty high that the result will be incomplete and relatively slow.
However, you stated you want to retrieve events for a given Aggregate Identifier.
I'd thus think this should be a workable solution in your scenario.
Hope this helps!
I have been trying to get bulk documents based on a list of IDs, but for some reason I don't see a method in the CRUD repo which can give me that data. I was able to locate a method named findAll(List), but this seems to work only on the view named "all", and I don't want to introduce a view for a simple lookup (which should primarily be built-in functionality of Couchbase).
Can someone please let me know what options I have to achieve my end goal if I don't want to end up using views or N1QL queries?
Also, why is this not supported by Spring Data Couchbase? Is this something that is not expected?
The repository needs to be able to findAll() the documents that it is tasked with saving. The problem is, in Couchbase you can save all sorts of documents in the same bucket, so the repository needs a way of isolating only the documents that match its Entity type.
This is done via the required view for CRUD operations, and for generated N1QL queries by appending a criterion on the _class field to the WHERE clause.
When you provide a List of keys, the Couchbase repository simply reuses the view you had to configure for findAll() to work, which has the additional benefit of ensuring that keys not corresponding to a correct Entity (that is, not indexed by the view) are ignored.
That said I think it is on the roadmap to remove the view requirement... (But that's up to the Couchbase team. Maybe raise an issue to get a more definitive answer to that).
Spring Data Kay and its support for Reactive Programming will most likely also change the landscape.
According to the documentation, with RxJava you can do efficient batching.
Code example
bucket.async()
    .query(N1qlQuery.simple("SELECT meta().id as id FROM bucket"))
    .doOnNext(res -> res.info()
        .map(N1qlMetrics::elapsedTime)
        .forEach(t -> System.out.println("time elapsed: " + t)))
    .flatMap(AsyncN1qlQueryResult::rows)
    .flatMap(row -> bucket.async().get(row.value().getString("id")))
    .map(JsonDocument::content)
    .toList()
    .toBlocking()
    .single();
RxJava is asynchronous, will save additional round-trips, and should end up being the better performer!
Background
I'm using the EventStore (from geteventstore.com) in a project.
So far I have implemented the write side of the application. That is, I can read and write events for a given aggregate.
Now I'm on the read side and need to subscribe to a stream. I'm using the Java API and everything is working here as well.
Now the problem
The stream doesn't exist... I have to create a projection that aggregates events from different streams into a single stream for my read model.
How can I create a projection via the API? Preferably with the Java API, but the HTTP API would also do.
Elaboration
As projections are the means for a read model to get exactly the events it needs, new projections will be created as business needs change. My idea is therefore that a read-model service will check for, and potentially create, the projection it needs when it starts up.
It would be unacceptable to manually create the projections before starting the service. That would be like migrating your SQL DB by hand.
From http://docs.geteventstore.com/dotnet-api/4.0.0/projections/
public Task CreateContinuousAsync(string name, string query, UserCredentials userCredentials = null)
Creates a projection that will run until the end of the log and then continue running. The query parameter contains the javascript you want to be created as a one time projection. Continuous projections have explicit names and can be enabled/disabled via this name
There are other options like creating a one-time projection, etc.
It refers to the .NET API. Since there seems to be no specific documentation for the Java API I am assuming they are similar.
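Since the Java client may not expose a projections manager, one workable route is the HTTP API. Below is a rough sketch; the endpoint, query parameters, and the default admin:changeit credentials reflect my reading of the EventStore HTTP API docs, so verify them against your server version:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ProjectionCreator {

    public static void createContinuousProjection(String name, String jsQuery) throws Exception {
        // POST /projections/continuous creates (and here also enables) a continuous projection.
        URL url = new URL("http://127.0.0.1:2113/projections/continuous?name=" + name
                + "&enabled=yes&emit=yes&checkpoints=yes");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        String credentials = Base64.getEncoder()
                .encodeToString("admin:changeit".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            // The request body is the projection's JavaScript query.
            out.write(jsQuery.getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() != HttpURLConnection.HTTP_CREATED) {
            throw new IllegalStateException("Projection creation failed: HTTP " + conn.getResponseCode());
        }
    }
}

A service could call something like this on startup, creating the projection only when it doesn't already exist, which matches the "migrate on boot" idea from the question.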
I'm working on a school project and our task is to design a project management tool. We are allowed to use any design pattern as long as we can explain how it is good according to the GRASP principles.
I'll give a synopsis of the project tool:
CRUD-functionality for projects
CRUD-functionality for tasks (a project has tasks)
CRUD-functionality for users (a user is assigned to tasks)
A simple GUI
We decided to go with the MVC-pattern and we are not allowed to use a database. My question is: Where should I store objects?
Should I do this in the controller? Currently we do it like this:
public class ProjectController
{
    // Domain objects held directly in the controller (the design being questioned)
    private ArrayList<Project> projects;
    private TaskController taskController;

    public ProjectController(TaskController taskController)
    {
        this.taskController = taskController;
        projects = new ArrayList<Project>();
    }
}
I have a feeling there is something wrong with keeping the objects in the controller, but I can't explain why. Can anyone explain what the best practice is according to the GRASP principles?
EDIT:
Thank you, I learned something from everyone but can only pick one answer.
For a very short answer: NO, don't put your store in the controller. This is a bad idea and it goes against the MVC principle.
Usually, the model is the only place responsible for your data, BUT the M part is frequently split into:
Fetching the data.
Storing the data in the application.
The interesting part in this is that no one cares where your data comes from: a database, a file, a REST API, whatever; it doesn't matter.
I'm not saying I have the best solution for you, but here is how you could do this with an example:
You store your user data in a file.
You create a PHP class UserDataRepository that fetches the user data files and sets the data into your UserModel class.
From the controller, you call your UserDataRepository and get back your UserModel.
This way your controller doesn't have any idea how you are fetching the data. It just asks a repository to fetch the data and gets back the UserModel that the controller is allowed to manipulate.
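The same idea in the question's Java context, as a minimal sketch (all class names are only illustrative, not prescribed):

import java.util.ArrayList;
import java.util.List;

class UserModel {
    private final String name;
    UserModel(String name) { this.name = name; }
    String getName() { return name; }
}

class UserDataRepository {
    // Could just as well read from a file, a database, or a REST API;
    // the controller never knows.
    List<UserModel> findAll() {
        List<UserModel> users = new ArrayList<>();
        users.add(new UserModel("alice"));
        users.add(new UserModel("bob"));
        return users;
    }
}

class UserController {
    private final UserDataRepository repository;

    UserController(UserDataRepository repository) {
        this.repository = repository;
    }

    List<UserModel> listUsers() {
        // The controller only orchestrates; it never stores the objects itself.
        return repository.findAll();
    }
}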
I hope this will help you
Increase abstraction: create a model class and keep your ArrayList (of model objects) there. Your controller should still access/call model methods.
Tomorrow you might want to dump that data into a file or into a DB, and you will have one hell of a ride doing that with the current design. So separate your model from your controller and keep the design clean.
No. If you store data in the controller then you are not using MVC. You have to do it in the model. You can store the data in memory or in files, but always store it through the model. For example, you could implement the DAO pattern to manipulate the data.
Maybe not now, but eventually you will need a database. With the DAO pattern, it won't be difficult to adapt your current persistence mechanism to a database; a short sketch follows below.
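For illustration, a minimal DAO sketch along those lines (names are hypothetical; it assumes a Project class with a getId() accessor, and the in-memory implementation could later be swapped for a database-backed one):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

interface ProjectDao {
    void save(Project project);
    Project findById(long id);
    List<Project> findAll();
    void delete(long id);
}

class InMemoryProjectDao implements ProjectDao {
    // In-memory "persistence"; a JDBC-backed class could implement the same interface.
    private final Map<Long, Project> store = new HashMap<>();

    public void save(Project project) { store.put(project.getId(), project); }
    public Project findById(long id) { return store.get(id); }
    public List<Project> findAll() { return new ArrayList<>(store.values()); }
    public void delete(long id) { store.remove(id); }
}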
In the MVC pattern, M means model, V means view, and C means controller. A common MVC flow is: when a request comes in, the controller receives it, does the necessary processing, and retrieves the result data, then passes that data to the view for rendering. Once the view layer is rendered, it is displayed to the users via the GUI.
So the controller can be regarded as a commander: it controls the process, but it is not good to handle data retrieval in the controller. The model should be responsible for retrieving and organizing data. That means data objects should be stored in the model instead of the controller, and the controller calls the model to retrieve data objects. Taking a Java application as an example, these parts are usually needed (a sketch follows after this list):
ProjectController: calls the ProjectService.getAllProjects() method to retrieve the result. Once retrieved, the view layer uses the result to render the GUI for display. I suggest the controller layer should be thin.
ProjectService: has a getAllProjects() method, which calls the ProjectDAO.getAllProjects() method to retrieve project data, plus any other handling. This is where the business logic goes.
ProjectDAO: has several methods that deal with Project objects; data is dealt with in this layer! But these methods should be independent of business logic (business logic should be dealt with in ProjectService).
The Project object itself.
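A bare-bones sketch of that layering, with the in-memory storage living in the DAO (assuming a simple Project class; method names mirror the list above):

import java.util.ArrayList;
import java.util.List;

class ProjectDAO {
    // Data lives here, not in the controller.
    private final List<Project> projects = new ArrayList<>();

    List<Project> getAllProjects() {
        return new ArrayList<>(projects); // defensive copy
    }
}

class ProjectService {
    private final ProjectDAO projectDAO;

    ProjectService(ProjectDAO projectDAO) { this.projectDAO = projectDAO; }

    List<Project> getAllProjects() {
        // Business logic (filtering, validation, sorting) would go here.
        return projectDAO.getAllProjects();
    }
}

class ProjectController {
    private final ProjectService projectService;

    ProjectController(ProjectService projectService) { this.projectService = projectService; }

    List<Project> showProjects() {
        // Thin controller: delegate to the service, then hand the result to the view.
        return projectService.getAllProjects();
    }
}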
Hope it helps.
The ever-so-popular discussion on designing proper DAOs always concludes with something along the lines of "DAOs should only perform simple CRUD operations".
So what's the best place to perform things like aggregations and such? And should DAOs return complex object graphs resembling your data source's schema?
Assume I have the following DAO interface:
public interface UserDao {
public User getByName(String name);
}
And here are the Objects it returns:
public class Transaction {
public int amount;
public Date transactionDate;
}
public class User {
public String name;
public Transaction[] transactions;
}
First of all, I consider the DAO to be returning a standard value object if all it does is CRUD operations.
So now I have modeled my DAO to return something based on a data-store relationship. Is this correct? What if I have a more complex object graph?
Update: I guess what I am asking in this part is, should the return value of a DAO, be it VO, DTO, or whatever you want to call it, be modeled after the data store's representation of the data? Or should I, say, introduce a new DAO to get a user's transactions and, for each user pulled by the UserDAO, invoke a call to the TransactionDAO to get them?
Secondly, let's say I want to perform an aggregation over all of a user's transactions. Using this DAO, I can simply get a user and, in my Service, loop through the transactions array and perform the aggregation myself. After all, it's perfectly reasonable to say that such an aggregation is a business rule that belongs in the Service.
But what if a user's transactions number in the tens of thousands? That would have a negative impact on application performance. Would it be incorrect to introduce a new method on the DAO that does said aggregation?
Of course this might be making the assumption that the DAO is backed by a database where I can write a simple SELECT SUM() query. And if the DAO implementation changes to, say, a flat file or something, I would need to do the aggregation in memory anyway.
So what's the best practice here?
I use the DAO as the translation layer: read the DB objects, create the Java-side business objects, and vice versa. Sometimes a couple of calls might be used to store or create a business object. For the provided example, I would make two calls: one for the user info, one for the list of the user's transactions. The cost is an extra database call. I'm not afraid to make an extra call if I'm using connection pooling and I'm not repeating calculations.
Separate calls are simpler to use (unpacking an array of composite types from a JDBC call is not simple and typically requires the proprietary connection object) and provide reusable components. Let's say you wanted the user object for a login screen: you can use the same user DAO and not have to pull in the transaction stuff.
If you didn't actually want the transaction details but were just interested in the aggregate, I would do the aggregate work on the database side and expose it via a view or a stored procedure. Relational databases are built for and excel at these kinds of set operations. You are unlikely to perform the operations better. Also, there is no point sending all the data over the wire if the result will do. So sure, add another dao for the aggregate if there are times you are only interested in that.
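If the DAO is indeed backed by a relational database, a hedged sketch of such an aggregate method might look like this, assuming Spring's JdbcTemplate and hypothetical transactions/users tables:

import org.springframework.jdbc.core.JdbcTemplate;

public class JdbcTransactionSummaryDao {

    private final JdbcTemplate jdbcTemplate;

    public JdbcTransactionSummaryDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Let the database do the set operation instead of shipping tens of
    // thousands of rows to the service layer.
    public long sumTransactionAmounts(String userName) {
        Long sum = jdbcTemplate.queryForObject(
                "SELECT COALESCE(SUM(t.amount), 0) FROM transactions t"
                        + " JOIN users u ON u.id = t.user_id"
                        + " WHERE u.name = ?",
                Long.class, userName);
        return sum == null ? 0L : sum;
    }
}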
Is it safe to assume the DAO maps to a relational DB? If that is how you are starting, I would wager that the backing data store will remain a relational DB. Sometimes there is a lot of fuss and worry about keeping it generic, and if you can, great. But it seems to me that just changing the type of relational DB in the back is further than most apps would go (let alone changing to a non-relational store like a flat file).