A system handles two types of resources. There are write and delete APIs for managing the resources. A client (user) will use a library API to manage these resources. Each resource write (or create) will result in updating a store or a database.
The API would look like:
1) Create Library client. The user will use the returned client to operate on the resources.
MyClient createClient(); //to create the client
2) MyClient interface, providing operations on a resource:
void writeResourceType1(String id);
void deleteResourceType1(String id);
void writeResourceType2(String id);
void deleteResourceType2(String id);
Some resources depend on others. The user may write them out of order (a resource may be written before the resource it depends on). To prevent the system from reaching an inconsistent state, all changes (resource updates) are first written to a staging location. The changes are written to the actual store only when the user indicates that he/she has written everything.
This means I would need a commit-style method in the MyClient interface above. So the access pattern would look like:
Client client = provider.createClient();
..
client.writeResourceType1(..)
client.writeResourceType1(..)
client.deleteResourceType2(..)
client.commit(); //<----
I'm not comfortable having the commit API in the MyClient interface. I feel it pollutes the interface and is the wrong level of abstraction.
Is there a better way to handle this?
Another option I thought of is taking all the updates as part of a single call. This API would act as a batch API:
writeOrDelete(List<Operations> writeAndDeleteOpsForAllResources)
The downside is that the user has to combine all the operations on their end before making the call. It also stuffs too much into a single call. So I'm not inclined toward this approach.
While both ways you've presented can be viable options, the thing is that at some point the user must somehow say: "OK, these are my changes, take them all or leave them". This is exactly what commit is, IMO.
And this alone makes some kind of call necessary that must be present in the API.
In the first approach you've presented it's obviously explicit, done with the commit method.
In the second approach it's rather implicit, determined by the content of the list that you pass into the writeOrDelete method.
So in my understanding commit must exist somehow, but the question is how you make it less "annoying" :)
Here are a couple of tricks:
Trick 1: Builder / DSL
interface MyBuilder {
    MyBuilder addResourceType1(String id);
    MyBuilder addResourceType2(String id);
    MyBuilder deleteResourceType1(String id);
    MyBuilder deleteResourceType2(String id);
    BatchRequest build();
}

interface MyClient {
    BatchExecutionResult executeBatchRequest(BatchRequest req);
}
This is more or less like your second option, but it has a clear way of "adding resources" and a single point of creation. (It's pretty much like MyClient, except I believe MyClient will eventually grow more methods, so it may be a good idea to separate the two. As you stated: "I'm not comfortable having the commit API in the MyClient interface. I feel it pollutes the interface and is the wrong level of abstraction.")
An additional argument for this approach is that the builder becomes the obvious abstraction to reach for in calling code: you don't have to think about passing around a reference to a list, or about what happens if someone calls clear() on that list, and so on and so forth. The builder has a precisely defined API of what can be done.
In terms of creating the builder, you can go with something like a static utility class, or even add a method to MyClient:
// option 1
public class MyClientDSL {
    private MyClientDSL() {}

    public static MyBuilder createBuilder();
}

// option 2
public interface MyClient {
    MyBuilder newBuilder();
}
References for this approach: jOOQ (they have a DSL like this), and OkHttp, which has builders for HTTP requests, bodies and so forth (decoupled from OkHttpClient itself).
Trick 2: Providing an execution code block
Now this can be tricky to implement depending on what kind of environment you run in, but the basic idea is borrowed from Spring:
To guarantee a transaction while working with databases, Spring provides the @Transactional annotation which, when placed on a method, basically says: "everything inside this method runs in a transaction; I'll commit it myself so that the user won't deal with transactions/commits at all. I'll also roll back upon exception."
So in code it looks like:
class MyBusinessService {
    private MyClient myClient; // injected

    @Transactional
    public void doSomething() {
        myClient.addResourceType1();
        ...
        myClient.addResourceType2();
        ...
    }
}
Under the hood they maintain ThreadLocals to make this possible in a multithreaded environment, but the point is that the API is clean. A commit method might still exist, but it probably won't be used in most cases, leaving aside the really sophisticated scenarios where the user genuinely needs that fine-grained control.
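To make that concrete, here is a rough sketch of the ThreadLocal bookkeeping such a framework might do; all names here are hypothetical:

public final class TransactionContext {
    private static final ThreadLocal<MyBuilder> CURRENT = new ThreadLocal<>();

    static MyBuilder begin() {
        MyBuilder builder = MyClientDSL.createBuilder(); // from Trick 1
        CURRENT.set(builder);
        return builder;
    }

    static MyBuilder current() {
        return CURRENT.get(); // each thread sees only its own builder
    }

    static void end() {
        CURRENT.remove(); // avoid leaking state between pooled threads
    }
}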
If you use Spring or any other container that manages your code, you can integrate with it (the technical way of doing this is out of scope of this question, but you get the idea).
If not, you can provide the most simplistic version of it yourself:
@FunctionalInterface
interface CodeBlock<T> {
    T run(MyBuilder builder);
}

public class MyClientCommitableBlock {

    public static <T> T executeInTransaction(CodeBlock<T> codeBlock) {
        MyBuilder builder = MyClientDSL.createBuilder();
        T result = codeBlock.run(builder);
        // build the request, execute and commit
        return result;
    }
}
Here is how it looks:
import static MyClientCommitableBlock.*;
public static void main(String[] args) {
    Integer result = executeInTransaction(builder -> {
        builder.addResourceType1();
        ...
        return 42;
    });
}
// or using a method reference:
class Bar {
    Integer foo() {
        return executeInTransaction(this::bar);
    }

    private Integer bar(MyBuilder builder) {
        ....
    }
}
In this approach the builder, while still defining a precise set of APIs, need not expose an "explicit" commit method to the end user. Instead it can have a package-private method used from within the MyClientCommitableBlock class.
See if this suits you.
Let us have a flag in the staging table: a column named status, with these values:
New: record inserted by the user
ReadyForProcessing: records ready for processing
Completed: records processed and updated in the actual store
Add the method below instead of commit(). Once the user invokes this method/service, pick up the records for this user that are in status New and post them from the staging location into the actual store:
client.userUpdateCompleted();
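A minimal JDBC sketch of that status flip, assuming a plain DataSource; the table and column names here are illustrative, not from the question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class StagingPromoter {
    private final DataSource dataSource;

    public StagingPromoter(DataSource dataSource) { this.dataSource = dataSource; }

    public void userUpdateCompleted(String userId) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE staging SET status = 'ReadyForProcessing' "
                    + "WHERE user_id = ? AND status = 'New'")) {
                ps.setString(1, userId);
                ps.executeUpdate();
            }
            // ...copy ReadyForProcessing rows into the actual store,
            // then mark them Completed...
            con.commit();
        }
    }
}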
There is another option as well: take the client intervention out entirely (no client.commit() or client.userUpdateCompleted()) and instead have a batch process, a scheduler that runs at specific intervals, scans the staging table, and populates the meaningful, user-completed records into the actual store.
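A sketch of that scheduler variant, assuming Spring's @Scheduled is available; StagingRepository, StagingRecord and their methods are hypothetical names:

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class StagingSweeper {

    private final StagingRepository staging; // hypothetical repository over the staging table

    public StagingSweeper(StagingRepository staging) {
        this.staging = staging;
    }

    @Scheduled(fixedDelay = 60_000) // scan the staging table once a minute
    public void promoteCompletedRecords() {
        for (StagingRecord record : staging.findByStatus("ReadyForProcessing")) {
            staging.copyToActualStore(record); // post into the actual store
            staging.markCompleted(record);     // flip status to Completed
        }
    }
}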
Related
I'd prefer it as a record, as there is less boilerplate, but would there be issues?
IntelliJ is suggesting that I turn a basic Java @Service class like this:
@Service
public class LocationService {
    private final PlaceRepository placeRepository;

    @Autowired
    public LocationService(PlaceRepository placeRepository) {
        this.placeRepository = placeRepository;
    }

    public List<PlaceDto> findPlacesByRegionId(Long regionId) {
        return placeRepository.findByRegionId(regionId).stream()
                .map(place -> new PlaceDto(place.getId(), place.getName()))
                .toList();
    }
}
into a Java record @Service like this:
@Service
public record LocationService(PlaceRepository placeRepository) {
    public List<PlaceDto> findPlacesByRegionId(Long regionId) {
        return placeRepository.findByRegionId(regionId).stream()
                .map(place -> new PlaceDto(place.getId(), place.getName()))
                .toList();
    }
}
You could do that, but records have getters (well, without the get prefix), which a service layer shouldn't have. Your service facade exposes public methods, usually @Transactional ones; you don't want to mix them with methods no one is going to call.
Also, records define equals() and hashCode(), which aren't needed for service classes either.
In the end, the only common theme between records and services is that all fields are usually final and all of them are usually passed via constructor. That isn't much commonality, so it sounds like a bad idea to use records for this.
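To illustrate where a record does fit: the PlaceDto this service returns is a transparent, immutable data carrier, so it is a natural record (a one-line sketch, assuming its two fields):

// A transparent carrier for immutable data - exactly the record's core semantic.
public record PlaceDto(Long id, String name) {}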
Let me quote an Oracle guy:
JEP 395 says:
[Records] are classes that act as transparent carriers for immutable data.

So by creating a record you're telling the compiler, your colleagues, the whole wide world that this type is about data. More precisely, data that's (shallowly) immutable and transparently accessible. That's the core semantic - everything else follows from here.

If this semantic doesn't apply to the type you want to create, then you shouldn't create a record. If you do it anyways (maybe lured in by the promise of no boilerplate or because you think records are equivalent to @Data/@Value or data classes), you're muddying your design and chances are good that it will come back to bite you. So don't.
UPD.
I spent a couple of minutes figuring out the root cause of your statement that "IntelliJ is suggesting that I turn a basic Java class @Service like this", and found the following discussion: https://youtrack.jetbrains.com/issue/IDEA-252036
Thereby:
Using records for Spring beans is definitely a bad idea: such beans are not eligible for auto-proxying, and moreover records are not designed for such scenarios.
It is embarrassing, but JetBrains does mislead CE users here.
I have a use case where I have a Database interface vended by an external vendor; let's say it looks like the following:
interface Database {
    public Value get(Key key);
    public void put(Key key, Value value);
}
The vendor provides multiple implementations of this interface, e.g. ActualDatabaseImpl and MockDatabaseImpl. My consumers want to consume the Database interface, but before calling some of the APIs they want to perform some additional work, e.g. call a client-side rate limiter before making the call. So rather than every consumer having to do the extra work of checking the rate limiter's limit, I thought of creating a decorated class that abstracts away the rate-limiting part, so consumers can interact with the DB without knowing about the RateLimiter logic. E.g.:
class RateLimitedDatabase implements Database {
    private Database db;

    public RateLimitedDatabase(Database db) { this.db = db; }

    public Value get(Key key) {
        Ratelimiter.waitOrNoop();
        return db.get(key);
    }

    public void put(Key key, Value value) {
        Ratelimiter.waitOrNoop();
        db.put(key, value); // put returns void, so no return here
    }
}
This works fine as long as the Database interface doesn't introduce new methods. But as soon as the vendor starts adding APIs that I don't really care about, e.g. delete/getDBInfo/deleteDB etc., problems start arising.
Whenever a new version of the DB with newer methods is released, my build for RateLimitedDatabase will break. One option is to implement the new methods in the decorated class upon investigating the root cause of the build failure, but that's just extra pain for developers. Is there any other way to deal with such cases, since this seems to be a common problem when using the decorator pattern with an ever-changing/extending interface?
NOTE: I can also think of building a reflection-based solution, but that seems to be overkill/over-engineering for this particular problem.
If that's feasible (you need to modify all your client code), you can extract a "mirror" of the vendor.Database interface, call it e.g. mirror.Database, and copy just the methods you need from vendor.Database to mirror.Database (with the very same signatures).
Edit the client code to use the mirror.Database interface, and let RateLimitedDatabase implement this mirror.Database interface. Since all method signatures are the same, switching the client code to the mirrored interface should be painless. RateLimitedDatabase will delegate to a vendor.Database implementation, of course.
(I think what I described is more or less the Bridge pattern, using an interface to "shield" against underlying changes: https://en.wikipedia.org/wiki/Bridge_pattern)
Aspect-oriented programming has a solution to this issue.
Most AOP frameworks will generate a dynamic proxy for your interface, so it is always in sync.
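For instance, a plain JDK dynamic proxy can play the decorator's role without naming each method. A minimal sketch, reusing the question's Database interface and Ratelimiter.waitOrNoop():

import java.lang.reflect.Proxy;

public final class RateLimitedDatabaseFactory {

    // Wraps any vendor Database in a proxy that rate-limits every call.
    // New methods added to the interface are covered automatically,
    // because the handler intercepts all of them.
    public static Database rateLimited(Database delegate) {
        return (Database) Proxy.newProxyInstance(
                Database.class.getClassLoader(),
                new Class<?>[] {Database.class},
                (proxy, method, args) -> {
                    Ratelimiter.waitOrNoop();
                    // (a production version would unwrap InvocationTargetException)
                    return method.invoke(delegate, args);
                });
    }
}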
I don't quite know how to explain the situation; I will try to be as clear as possible.
I am currently writing a web application, using Spring to manage the beans. Obviously, more than one person will use this application. Each user has a set of data related to himself. My problem comes from some poor design I introduced when I had just entered the development field. Here is the case:
@Component
public class ServiceClass implements IService {

    @Autowired
    private Dependency firstDependency;

    @Autowired
    private UsefulObject secondDependency;

    private DataSet dataSet; // THIS LINE IS IMPORTANT

    public void entryPoint(String arg1, int arg2, Structure arg3) {
        /* Query data from a database specific to the project (not SQL
           oriented). I absolutely need this information to keep going. */
        dataSet = gatherDataSet(arg1);

        /* Treat the data */
        subMethodOne(arg1);
        subMethodTwo(arg2);
        subMethodThree(arg3);
    }

    private void subMethodOne(String arg1) {
        // Do some things with arg1, whatever
        subSubMethod(arg1);
    }

    private void subSubMethod(String arg1) {
        /* Use the DataSet previously gathered */
        dataSet.whateverDoing();
    }

    ... // Functions calling sub-methods, using the DataSet
As every user has a different dataSet, I thought it would be good to fetch it at the beginning of every call to my service. Likewise, as it is used very deep in the call hierarchy, I thought it would be a good idea to store it as an attribute.
The problem I encounter is that when two users go through this service nearly simultaneously, I get a cross-data issue. The following happens:
The first user comes in and calls gatherDataSet.
The second user comes in and calls gatherDataSet. The first user is still processing!
The first user keeps using the dataSet object, which has been overwritten by the second user.
Basically, the data the first user works with becomes wrong, because he is using data from the second user, who came in shortly after him.
My questions are the following:
Are there design patterns/methods to avoid this kind of behavior?
Can you configure Spring so that it uses two instances for two users (and so on), to avoid this kind of problem?
Bonus (kind of unrelated): How do you implement a very large data mapper?
Object member variables (fields) are stored on the heap along with the object. Therefore, if two threads call a method on the same object instance and this method updates object member variables, the method is not thread-safe.
However, if a resource is created, used and disposed of within the control of the same thread, and never escapes the control of that thread, the use of that resource is thread-safe.
With this in mind, change your design. https://books.google.co.in/books?isbn=0132702258 is a must-read book for coming up with good Java-based software design.
More Stack Overflow links: Why are local variables thread safe in Java, Instance methods and thread-safety of instance variables
Spring promotes the singleton pattern (it is the default bean scope). Configuring Spring to have two service-class objects for two different users is called prototype bean scoping, but it should be avoided as far as possible.
Consider using an in-memory Map, or an external NoSQL datastore or relational database.
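A minimal sketch of the in-memory Map idea: key the per-user data by user id instead of storing it in a shared field. DataSet and gatherDataSet are from the question; the userId parameter is an assumption:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PerUserDataSets {

    private final ConcurrentMap<String, DataSet> byUser = new ConcurrentHashMap<>();

    // Each user's DataSet lives under his own key, so concurrent users
    // can no longer overwrite each other's data.
    public DataSet forUser(String userId, String arg1) {
        return byUser.computeIfAbsent(userId, id -> gatherDataSet(arg1));
    }

    private DataSet gatherDataSet(String arg1) {
        // ...query the project-specific database, as in the question...
        throw new UnsupportedOperationException("omitted");
    }
}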
Can you configure Spring so that it uses two instances for two users (and so on), to avoid this kind of problem?
You already mentioned correctly that the design decisions you took are flawed. But to answer your specific question, the following should get your use case working correctly, though at a performance cost:
You can give Spring beans various scopes (relevant for your use case: prototype, request or session), which changes when the beans get instantiated. The default behaviour is one bean per Spring container (singleton), hence the concurrency issues. See https://docs.spring.io/spring/docs/3.0.0.M3/reference/html/ch04s04.html
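For example, with the request scope (a sketch using Spring's real @Scope annotation; the scoped proxy lets the bean be injected into singletons):

import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.stereotype.Component;
import org.springframework.web.context.WebApplicationContext;

// One ServiceClass instance per HTTP request, so two simultaneous users
// no longer share the dataSet field.
@Component
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class ServiceClass implements IService {
    // ...same fields and methods as in the question...
}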
The easiest solution is simply not to store the dataset in a class field.
Instead, store the dataset in a local variable and pass it as an argument to the other functions. This way there will not be any concurrency problems, as each call stack has its own instance.
Example:
public void entryPoint(String arg1, int arg2, Structure arg3) {
    // Store the dataset in a local variable, avoiding concurrency problems
    DataSet dataSet = gatherDataSet(arg1);

    // Treat the data, passing the dataset as an argument
    subMethodOne(arg1, dataSet);
    subMethodTwo(arg2, dataSet);
    subMethodThree(arg3, dataSet);
}
Use the synchronized modifier for it.
As "Synchronization plays a key role in applications where multiple threads tend to share the same resources, especially if these resources must keep some kind of sensitive state where manipulations done by multiple threads at the same time could lead the resource to become in an inconsistent state."
public void someMethod() {
    synchronized (object) {
        // A thread that is executing this code section
        // has acquired the object's intrinsic lock.
        // Only a single thread may execute this
        // code section at a given time.
    }
}
Scenario
A web service receives a request in the form of XML from some other system. Based on the contents of this request, the web service should perform an arbitrary number of user-definable tasks (such as storing the contents of the XML in a database, extracting certain values, or making a call to some other service). The behaviour of the requesting system cannot be changed (e.g. to call different actions for different things).
Proposed Design
My proposed design would be to have an interface something like...
interface PipelineTask {
    void run(String xml);
}
With implementations of this for each user action, for example...
public class LogToDatabaseTask implements PipelineTask {
    public void run(String xml) {
        db.store(xml); // some call to the database to store the XML
    }
}
Then a database table containing rules (maybe as XPath expressions) and the class to invoke should those rules be satisfied by the received document. I'd then use reflection, or perhaps a factory(?), to invoke the correct implementation and run it; a sketch of that dispatch loop follows.
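A sketch of the rule-driven dispatch; Rule, its accessors and the (xpath, className) pairing are hypothetical stand-ins for rows of the rules table, while the XPath calls are the standard JDK API:

import java.io.StringReader;
import java.util.List;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class PipelineDispatcher {

    private final List<Rule> rules; // loaded from the database table of rules

    public PipelineDispatcher(List<Rule> rules) {
        this.rules = rules;
    }

    public void dispatch(String xml) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        for (Rule rule : rules) {
            boolean matches = (Boolean) xpath.evaluate(
                    rule.expression(), new InputSource(new StringReader(xml)),
                    XPathConstants.BOOLEAN);
            if (matches) {
                // reflection-based factory: instantiate the configured task and run it
                PipelineTask task = (PipelineTask) Class.forName(rule.className())
                        .getDeclaredConstructor().newInstance();
                task.run(xml);
            }
        }
    }
}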
Question
To me, it sounds like there should be some kind of existing pattern for implementing something like this which I've missed and can't find online anywhere. Would this kind of approach make sense, or is there some better, perhaps more flexible way of doing this?
As you already mentioned, a rule engine seems a good fit for this case. You can define a rule that takes facts about the current state and provides the next action in the sequence. Below is a simple Java rule method as an example. You can also use a rules framework like Drools. The response from the rule can be used with a factory or a strategy.
For example, consider the sequence of actions:
UPDATE_DB
EXTRACT_VALUES
INVOKE_XYZ_SERVICE
END
For every web service request, check the rule after every step and execute actions until you receive a rule response with next action END. The RuleRequest also contains the contents of the input document:
public RuleResponse execute(RuleRequest request) {
    // initialization and extraction code here
    RuleResponse response = new RuleResponse();
    if (request.previousAction.equals("EXTRACT_VALUES") && ...) {
        response.nextAction = "INVOKE_XYZ_SERVICE";
    }
    return response;
}
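The driving loop implied above might then look like this (a sketch; rulesEngine, actionFactory and the field names are hypothetical):

public void process(String xmlDocument) {
    RuleRequest request = new RuleRequest(xmlDocument, "START");
    RuleResponse response = rulesEngine.execute(request);
    while (!"END".equals(response.nextAction)) {
        // strategy/factory: look up and run the task for the chosen action
        actionFactory.get(response.nextAction).run(xmlDocument);
        request.previousAction = response.nextAction;
        response = rulesEngine.execute(request);
    }
}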
I know that you tagged the question as Java, but you can actually reuse a lot of the MSDN logical model of the Pipes & Filters design pattern. The article is very good, and I've already used it in Java modules.
You can also read the Wikipedia article Pipeline_software first - it helped me a lot with ideas.
I'm doing some maintenance/evolution on a multi-layered Spring 3.0 project. The client is a heavy RCP application invoking Spring bean methods from the service layer (managers) on an RMI-based server.
I have several huge methods in the managers, some of them over 250 lines. Here is an example (I've omitted code for clarity):
@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public Declaration saveOrUpdateOrDelete(Declaration decla, List<Declaration> toDeleteList ...) {
    if (decla.isNew()) {
        // create from scratch and apply business rules for a creation
        manager1.compute(decla);
        dao1.save(decla);
        ...
    } else if (decla.isCopy()) {
        // copy from another Declaration and apply business rules for a copy
        ...
    } else {
        // update Declaration
        ...
    }
    if (toDeleteList != null) {
        // delete declarations and apply business rules for a mass delete
        ...
    }
}
The first three branches are mutually exclusive, and each represents a unit of work. The last branch (delete) can happen simultaneously with the other branches.
Isn't it better to divide this method into something more 'CRUDy' for the sake of clarity and maintainability? I've been thinking of dividing this behemoth into other manager methods like:
public Declaration create(Declaration decla ...) { ... }
public Declaration update(Declaration decla ...) { ... }
public Declaration copyFrom(Declaration decla ...) { ... }
public void delete(List<Declaration> declaList ...) { ... }
But my colleagues say it will transfer complexity and business rules to the client, and that I will lose the benefit of atomicity, etc. Who is right here?
The decision about what updateOrCreateOrWhatever really does is made in the client anyway, as the client has to set the corresponding field on the Declaration object.
The client could equally well just call the appropriate method.
That way the code is definitely more manageable and testable (fewer branches to care about).
The only argument for keeping it as it is is the network round-trips mentioned by @Pangea. I think this could be handled by a custom dispatcher class; IMO it doesn't form part of the business logic, and as such shouldn't be taken care of in the service layer.
Another thing to take into consideration is transaction logic. Do creates/updates and deletes have to happen in the same transaction? Can both decla and toDeleteList be non-null at the same time?
One of the basic principles to keep in mind when designing remote services is to make them coarse-grained in order to reduce network latency/round-trips. Also, after going through your code, it seems the method encapsulates a logical unit of work, as it is transactional. In this case, I suggest keeping it as it is.
However, you can still refactor it into multiple methods, as long as they are not exposed for remote invocation, which would force the client to manage transactions from the client layer. So make them private, along the lines of the sketch below.
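A sketch of that shape, keeping the remote facade coarse-grained while the logic lives in private, testable methods (bodies elided; signatures follow the question's proposed split):

@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public Declaration saveOrUpdateOrDelete(Declaration decla, List<Declaration> toDeleteList) {
    Declaration result;
    if (decla.isNew()) {
        result = create(decla);
    } else if (decla.isCopy()) {
        result = copyFrom(decla);
    } else {
        result = update(decla);
    }
    if (toDeleteList != null) {
        delete(toDeleteList); // runs in the same transaction as the branch above
    }
    return result;
}

// private, unit-testable pieces; not exposed over RMI
private Declaration create(Declaration decla) { /* creation rules */ return decla; }
private Declaration copyFrom(Declaration decla) { /* copy rules */ return decla; }
private Declaration update(Declaration decla) { /* update rules */ return decla; }
private void delete(List<Declaration> declaList) { /* mass-delete rules */ }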
Bad design. If you really have to make the transaction atomic or complete it in one trip, create a more specific method instead of having this.
What's the difference between writing:
public Object doIt(Object... obj) {
    ...
}