I have a use case with a Database interface vended by an external vendor; let's say it looks like the following:
interface Database {
    public Value get(Key key);
    public void put(Key key, Value value);
}
The vendor provides multiple implementations of this interface, e.g. ActualDatabaseImpl and MockDatabaseImpl. My consumers want to consume the Database interface, but before calling some of the APIs they want to perform some additional work, e.g. calling a client-side rate limiter before making the call. So rather than every consumer having to do the extra work of checking the rate limiter's limit, I thought of creating a decorated class which will abstract out the rate limiting part, so consumers can interact with the DB without knowing the logic of the RateLimiter, e.g.:
class RateLimitedDatabase implements Database {
    private final Database db;

    public RateLimitedDatabase(Database db) { this.db = db; }

    public Value get(Key key) {
        RateLimiter.waitOrNoop();
        return db.get(key);
    }

    public void put(Key key, Value value) {
        RateLimiter.waitOrNoop();
        db.put(key, value);
    }
}
This works fine as long as the Database interface doesn't introduce new methods. But as soon as the vendor starts adding APIs that I don't really care about, e.g. delete/getDBInfo/deleteDB, problems start arising.
Whenever a new version of the DB with newer methods is released, my build for RateLimitedDatabase breaks. One option is to implement the new methods in the decorating class after investigating the root cause of the build failure, but that's just extra pain for developers. Is there any other way to deal with such cases? This seems to be a common problem when using the Decorator pattern with an ever changing/extending interface.
NOTE: I can also think of building a reflection-based solution, but that seems to be overkill/over-engineering for this particular problem.
If that's feasible (you need to modify all your client code), you can extract a "mirror" of the vendor.Database interface, call it e.g. mirror.Database, and copy just the methods you need from vendor.Database to mirror.Database (with the very same signatures).
Edit the client code to use the mirror.Database interface, and let RateLimitedDatabase implement this mirror.Database interface. Since all method signatures are the same, switching the client code to the mirrored interface should be painless. RateLimitedDatabase will delegate to a vendor.Database implementation, of course.
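A minimal sketch of that layout (assuming the vendor interface lives in a vendor package, and reusing the RateLimiter helper from the question):

// mirror/Database.java -- our own interface, containing only the methods we use
package mirror;

public interface Database {
    Value get(Key key);
    void put(Key key, Value value);
}

// RateLimitedDatabase implements the mirror interface and delegates to the
// vendor implementation, so new vendor methods no longer break the build.
public class RateLimitedDatabase implements mirror.Database {
    private final vendor.Database db;

    public RateLimitedDatabase(vendor.Database db) { this.db = db; }

    public Value get(Key key) {
        RateLimiter.waitOrNoop();
        return db.get(key);
    }

    public void put(Key key, Value value) {
        RateLimiter.waitOrNoop();
        db.put(key, value);
    }
}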
(I think what I described is more or less the Bridge pattern, using an interface to "shield" against underlying changes: https://en.wikipedia.org/wiki/Bridge_pattern)
Aspect-oriented programming has a solution to this issue.
Most frameworks will generate a dynamic proxy for your interface, so it is always in sync.
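For instance, with a plain JDK dynamic proxy (no framework required), the decorator can never fall out of sync, because the handler covers whatever methods the interface declares at runtime. A sketch, reusing the RateLimiter from the question:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public final class RateLimitingProxy {

    // Wraps any implementation of the given interface in a proxy that
    // rate-limits every call and then delegates to the target.
    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            RateLimiter.waitOrNoop();
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}

// Usage:
// Database db = RateLimitingProxy.wrap(new ActualDatabaseImpl(), Database.class);

Note that this rate-limits every method, including the ones you don't care about; a filter on method.getName() inside the handler can restrict it to the calls that matter.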
Our application uses Spring Cache and needs to know whether the response was returned from the cache or was actually calculated. We are looking to add a flag in the result HashMap that will indicate this. However, whatever the method returns is cached, so I'm not sure we can do it in the calculate method implementation.
Is there any way to know whether the calculate method was executed or the return value came from the cache when calling calculate?
Code we are using for the calculate method:
@Cacheable(
    cacheNames = "request",
    key = "#cacheMapKey",
    unless = "#result['ErrorMessage'] != null")
public Map<String, Object> calculate(Map<String, Object> cacheMapKey, Map<String, Object> message) {
    // method implementation
    return result;
}
With a little extra work, it is rather simple to add a bit of state to your @Cacheable component service methods.
I use this technique when answering SO questions like this one, to show that the value came from the cache vs. the service method by actually computing the value.
You will notice the @Cacheable @Service class extends an abstract base class (CacheableService) to help manage the "cacheable" state. That way, multiple @Cacheable @Service classes can utilize this functionality if need be.
The CacheableService class contains methods to query the state of the cache operation, like isCacheMiss() and isCacheHit(). Inside the @Cacheable methods, which are only invoked on a "cache miss", is where you would set this bit by calling setCacheMiss(), as sketched below.
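A minimal sketch of the shape this takes (a hedged reconstruction, not the exact example code; the CalculationService name and the reset-on-read behavior of isCacheMiss() are my assumptions):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

public abstract class CacheableService {

    private final AtomicBoolean cacheMiss = new AtomicBoolean(false);

    public boolean isCacheHit() {
        return !isCacheMiss();
    }

    public boolean isCacheMiss() {
        // Reading the bit also resets it, so the next operation starts clean.
        return cacheMiss.getAndSet(false);
    }

    protected void setCacheMiss() {
        cacheMiss.set(true);
    }
}

@Service
class CalculationService extends CacheableService {

    @Cacheable(cacheNames = "request", key = "#cacheMapKey")
    public Map<String, Object> calculate(Map<String, Object> cacheMapKey,
                                         Map<String, Object> message) {
        setCacheMiss(); // only reached on a cache miss; a hit never enters the method
        Map<String, Object> result = new HashMap<>();
        // ... actual computation ...
        return result;
    }
}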
However, a few words of caution!
First, while the abstract CacheableService class manages the state of the cacheMiss bit with a thread-safe class (i.e. AtomicBoolean), the CacheableService class itself is not thread-safe in a highly concurrent environment where multiple @Cacheable service methods set the cacheMiss bit.
That is, if you have a component class with multiple @Cacheable service methods, all setting the cacheMiss bit using setCacheMiss() in a multi-threaded environment (which is especially likely in a Web application), then it is possible to read stale cacheMiss state when querying the bit. The cacheMiss bit could be true or false depending on the state of the cache, the operation called, and the interleaving of threads. More work is needed in that case, so be careful if you are relying on the cacheMiss bit for critical decisions.
Second, this approach, using an abstract CacheableService class, does not work for Spring Data (CRUD) repositories based on an interface. As others have mentioned in the comments, you could encapsulate this caching logic in an AOP advice and intercept the appropriate calls in this case. Personally, I prefer that caching, security, transactions, etc. all be managed in the service layer of the application rather than the data access layer.
Finally, there are undoubtedly other limitations you might run into, as the example code I provided above was meant for demonstration purposes only, never for production. I leave it to you as an exercise to figure out how to mold these bits to your needs.
A system handles two types of resources. There are write and delete APIs for managing the resources. A client (user) will use a library API to manage these resources. Each resource write (or create) will result in updating a store or a database.
The API would look like:
1) Create Library client. The user will use the returned client to operate on the resources.
MyClient createClient(); //to create the client
2) MyClient interface. Providing operations on a resource
writeResourceType1(id);
deleteResourceType1(id);
writeResourceType2(id);
deleteResourceType2(id);
Some resources are dependent on the other. The user may write them out-of-order (might write a resource before writing its dependent). In order to prevent the system from having an inconsistent state, all changes (resource updates) will be written to a staging location. The changes will be written to the actual store only when the user indicates he/she has written everything.
This means I would need a commit kind of method in the above MyClient interface. So, the access pattern will look like:
Client client = provider.createClient();
..
client.writeResourceType1(..)
client.writeResourceType1(..)
client.deleteResourceType2(..)
client.commit(); //<----
I'm not comfortable having the commit API in the MyClient interface. I feel it is polluting it and is a wrong level of abstraction.
Is there a better way to handle this?
Another option I thought of is getting all the updates as part of a single call. This API would act as a batch API:
writeOrDelete(List<Operations> writeAndDeleteOpsForAllResources)
The downside of this is that the user has to combine all the operations on their end to call it, and it stuffs too much into a single call. So I'm not inclined toward this approach.
While both ways that you've presented can be viable options, the thing is that at some point in time the user must somehow say: "OK, these are my changes, take them all or leave them." This is exactly what commit is, IMO.
And this alone makes some kind of call necessary, which must be present in the API.
In the first approach that you've presented it's obviously explicit, and is done with the commit method.
In the second approach it's rather implicit and is determined by the content of the list that you pass into the writeOrDelete method.
So in my understanding commit must exist somehow, but the question is how to make it less "annoying" :)
Here are a couple of tricks:
Trick 1: Builder / DSL
interface MyBuilder {
    MyBuilder addResourceType1(id);
    MyBuilder addResourceType2(id);
    MyBuilder deleteResourceType1(id);
    MyBuilder deleteResourceType2(id);
    BatchRequest build();
}

interface MyClient {
    BatchExecutionResult executeBatchRequest(BatchRequest req);
}
This method is more or less like the second method; however, it has a clear way of "adding resources" and a single point of creation. (It's pretty much like MyClient, but I believe MyClient will eventually have more methods, so it's probably a good idea to separate the two. As you stated: "I'm not comfortable having the commit API in the MyClient interface. I feel it is polluting it and is a wrong level of abstraction.")
An additional argument for this approach is that now you know there is a builder, and it's the "abstraction to go to" in the code that uses this. You don't have to think about passing a reference to a list, about what happens if someone calls something like clear() on that list, and so on and so forth. The builder has a precisely defined API of what can be done.
In terms of creating the builder:
You can go with something like Static Utility class or even add a method to MyClient:
// option 1
public class MyClientDSL {
    private MyClientDSL() {}

    public static MyBuilder createBuilder() { ... }
}

// option 2
public interface MyClient {
    MyBuilder newBuilder();
}
References for this approach: jOOQ (which has a DSL like this), and OkHttp, which has builders for HTTP requests, bodies and so forth (decoupled from the OkHttpClient itself).
Trick 2: Providing an execution code block
Now this can be tricky to implement depending on what kind of environment you run in, but basically the idea is borrowed from Spring:
In order to guarantee a transaction while working with databases, they provide a special annotation, @Transactional, that when placed on a method basically says: "everything inside the method runs in a transaction; I'll commit it myself so that the user won't deal with transactions/commits at all. I'll also roll back upon exception."
So in code it looks like:
class MyBusinessService {
    private MyClient myClient; // injected

    @Transactional
    public void doSomething() {
        myClient.addResourceType1();
        ...
        myClient.addResourceType2();
        ...
    }
}
Under the hood they maintain ThreadLocals to make this possible in a multithreaded environment, but the point is that the API is clean. A commit method might still exist, but it probably won't be used in most cases, leaving aside the really sophisticated scenarios where the user might truly need this fine-grained control.
If you use Spring or any other container that manages your code, you can integrate with it (the technical way of doing this is out of scope of this question, but you get the idea).
If not, you can provide the most simplistic version of it:
public class MyClientCommitableBlock {

    public static <T> T executeInTransaction(Function<MyBuilder, T> codeBlock) {
        MyBuilder builder = MyClientDSL.createBuilder();
        T result = codeBlock.apply(builder);
        // build the request, execute and commit
        return result;
    }
}
Here is how it looks:
import static MyClientCommitableBlock.*;

public static void main(String[] args) {
    Integer result = executeInTransaction(builder -> {
        builder.addResourceType1();
        ...
        return 42;
    });
}
// or using a method reference:
class Bar {
    Integer foo() {
        return executeInTransaction(this::bar);
    }

    private Integer bar(MyBuilder builder) {
        ....
    }
}
In this approach the builder, while still precisely defining a set of APIs, might not expose an "explicit" commit method to the end user. Instead it can have a package-private method used from within the MyClientCommitableBlock class.
See if this suits you.
Have a flag in the staging table, in a column named status.
Status column values:
New: record inserted by the user
ReadyForProcessing: records ready for processing
Completed: records processed and updated in the actual store
Add the method below instead of commit(). Once the user invokes this method/service, pick up this user's records which are in status New and post them into the actual store from the staging location:
client.userUpdateCompleted();
There is another option as well: take the client intervention out entirely (no client.commit() or client.userUpdateCompleted()) and instead run a batch process on a scheduler at specific intervals, which scans the staging table and populates the records whose user updates are complete into the actual store.
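If you are on Spring, a sketch of that scheduler could look like the following (the DAO, writer and record types are hypothetical placeholders):

import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class StagingProcessor {

    // Hypothetical collaborators: a DAO over the staging table and a
    // writer for the actual store.
    public interface StagingDao {
        List<StagedRecord> findByStatus(String status);
        void updateStatus(long id, String status);
    }

    public interface ActualStoreWriter {
        void write(StagedRecord record);
    }

    public static class StagedRecord {
        final long id;
        StagedRecord(long id) { this.id = id; }
    }

    private final StagingDao staging;
    private final ActualStoreWriter actualStore;

    public StagingProcessor(StagingDao staging, ActualStoreWriter actualStore) {
        this.staging = staging;
        this.actualStore = actualStore;
    }

    // Runs at a fixed interval: promote records the user has finished with
    // from the staging table into the actual store.
    @Scheduled(fixedDelay = 60000)
    public void promoteCompletedRecords() {
        for (StagedRecord record : staging.findByStatus("ReadyForProcessing")) {
            actualStore.write(record);
            staging.updateStatus(record.id, "Completed");
        }
    }
}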
I need to implement some specific functionality and I am wondering if you might know of a Java library that does this, or something like it, already. I would rather use an existing one before building my own.
Basically I want a collection type class that contains instances of FooBar. When the container is constructed, I specify the attribute values or ranges that qualify a FooBar for membership, so when a FooBar instance is inserted, it will be rejected if it does not meet the criteria. In addition, if the attributes of any FooBar in the container are modified in such a way that they no longer meet the membership criteria, they will be ejected (preferably with some sort of callback to registered listeners).
I can elaborate more, but I think the concept is fairly straightforward.
Seen anything like that?
Thanks.
You could add a set of listeners to the class performing the add/retain, which allows that class to "choose" whether to add a candidate as a listener, and also, later on, based on your interesting events occurring, whether to "retain" them or evict them:
public interface AddableEvictable {
    public boolean isAddable();
    public boolean evict();
}
Then add a Set<AddableEvictable> to your interesting server class. Candidate classes to add (and later evict) would need to (a) implement this simple interface and (b) offer themselves to the server class for it to decide whether to add them or not (and later, when/if to evict them).
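A rough sketch of the server-class side of this idea (the GuardingServer name and its methods are invented for illustration):

import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class GuardingServer {

    private final Set<AddableEvictable> members = new HashSet<>();

    // The server decides: only candidates that report themselves
    // addable get in.
    public boolean add(AddableEvictable candidate) {
        return candidate.isAddable() && members.add(candidate);
    }

    // Call this when one of your interesting events occurs: members that
    // report they should be evicted are removed.
    public void onInterestingEvent() {
        Iterator<AddableEvictable> it = members.iterator();
        while (it.hasNext()) {
            if (it.next().evict()) {
                it.remove();
            }
        }
    }
}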
I have been brushing up on my design patterns and came across a thought that I could not find a good answer for anywhere. So maybe someone with more experience can help me out.
Is the DAO pattern only meant to be used to access data in a database?
Most of the answers I found imply yes; in fact, most who talk or write about the DAO pattern tend to automatically assume that you are working with some kind of database.
I disagree, though. I could have a DAO like the following:
public interface CountryData {
    public List<Country> getByCriteria(Criteria criteria);
}

public final class SQLCountryData implements CountryData {
    public List<Country> getByCriteria(Criteria criteria) {
        // Get from SQL database.
    }
}

public final class GraphCountryData implements CountryData {
    public List<Country> getByCriteria(Criteria criteria) {
        // Get from an injected in-memory graph data structure.
    }
}
Here I have a DAO interface and 2 implementations, one that works with an SQL database and one that works with say an in-memory graph data structure. Is this correct? Or is the graph implementation meant to be created in some other kind of layer?
And if it is correct, what is the best way to abstract implementation specific details that are required by each DAO implementation?
For example, take the Criteria Class I reference above. Suppose it is like this:
public final class Criteria {
    private String countryName;

    public String getCountryName() {
        return this.countryName;
    }

    public void setCountryName(String countryName) {
        this.countryName = countryName;
    }
}
For SQLCountryData, it needs to somehow map the countryName property to an SQL identifier so that it can generate the proper SQL. For GraphCountryData, perhaps some sort of predicate object against the countryName property needs to be created to filter out vertices from the graph that fail it.
What's the best way to abstract details like this without coupling client code, written against the abstract CountryData, to such implementation-specific details?
Any thoughts?
EDIT:
The example I included of the Criteria class is simple enough, but consider if I want to allow the client to construct complex criteria, where they should not only specify the property to filter on, but also the equality operator, logical operators for compound criteria, and the value.
DAOs are part of the DAL (Data Access Layer), and you can have data backed by any kind of implementation (XML, RDBMS, etc.). You just need to ensure that the proper instance is injected/used at runtime; DI frameworks like Spring/Guice shine in this case. Also, your Criteria interface/implementation should be generic enough that only business details are captured (i.e. the country name criterion) and the actual mapping is again handled by the implementation class.
For SQL, in your case, you can either hand-generate the SQL, generate it using a helper library like Spring, or use a full-fledged framework like MyBatis. In our project, Spring XML configuration files were used to decouple the client and the implementation; it might vary in your case.
EDIT: I see that you have raised a similar concern in a previous question. The answer still remains the same. You can add as much flexibility as you want in your interface; you just need to ensure that the implementation is smart enough to make sense of all the arguments it receives and map them appropriately to the underlying source. In our case, we retrieved the value object from the business layer and converted it to a map in the SQL implementation layer which could be used by MyBatis. Again, this process was pretty much transparent, and the only way for the service layer to communicate with the DAO was via the interface-defined value objects.
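For illustration, that conversion step could look roughly like this, reusing the CountryData/Criteria types from the question (the mapper statement id is hypothetical):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.ibatis.session.SqlSession;

public final class SQLCountryData implements CountryData {

    private final SqlSession sqlSession;

    public SQLCountryData(SqlSession sqlSession) {
        this.sqlSession = sqlSession;
    }

    @Override
    public List<Country> getByCriteria(Criteria criteria) {
        // Flatten the business-level value object into the parameter map
        // that the MyBatis mapper statement expects.
        Map<String, Object> params = new HashMap<>();
        params.put("countryName", criteria.getCountryName());
        return sqlSession.selectList("CountryMapper.findByCriteria", params);
    }
}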
No, I don't believe it's tied only to databases. The acronym stands for Data Access Object, not "Database Access Object", so it can be used with any type of data source.
The whole point of it is to separate the application from the backing data store so that the store can be modified at will, provided it still follows the same rules.
That doesn't just mean turfing Oracle and putting in DB2. It could also mean switching to a totally non-DBMS-based solution.
OK, this is a bit of a philosophical question, so I'll tell you what I think about it.
DAO usually stands for Data Access Object. The source of data is not always a database, although in the real world implementations usually come down to one.
It can be XML, a text file, some remote system, or, as you stated, an in-memory graph of objects.
From what I've seen in real-world projects: yes, you're right, you should provide different DAO implementations for accessing the data in different ways.
In this case one DAO goes to the DB, and another DAO implementation goes to the object graph.
The interface of the DAO has to be designed very carefully. Your Criteria has to be generic enough to encapsulate the kind of source you're going to get the data from.
How to achieve this level of decoupling? The answer can vary depending on your system, but in general I would say: "as usual, by adding another level of indirection" :)
You can also think about your criteria object as a pure data object, where you supply only the data needed for the query. In this case you won't even need to support different Criteria types.
Each particular implementation of the DAO will take this data and treat it in its own way: one will construct a query for the graph, another will bind the data to its SQL, as sketched below.
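A small sketch of that idea: the criteria stays pure data and each DAO interprets it (the Filter shape below is invented for illustration):

// Pure data: one filter condition, with no knowledge of SQL or graphs.
public final class Filter {

    private final String property; // e.g. "countryName"
    private final String operator; // e.g. "=", "LIKE"
    private final Object value;

    public Filter(String property, String operator, Object value) {
        this.property = property;
        this.operator = operator;
        this.value = value;
    }

    public String getProperty() { return property; }
    public String getOperator() { return operator; }
    public Object getValue() { return value; }
}

// The SQL DAO binds it into a WHERE clause, e.g.:
//   String where = columnFor(filter.getProperty()) + " " + filter.getOperator() + " ?";
// while the graph DAO turns the same data into a vertex predicate, e.g.:
//   Predicate<Country> p = c -> filter.getValue().equals(c.getName());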
To minimize the maintenance hassle, I would suggest using a dependency injection framework (like Spring, for example). These frameworks are usually well suited to instantiating your DAO objects and wiring everything together.
Good luck!
No, DAO for databases only is a common misconception.
DAO is a "Data Access Object", not a "Database Access Object". Hence anywhere you need to CRUD data to/from somewhere (e.g. a file, memory, a database, etc.), you can use a DAO.
In Domain Driven Design there is a Repository pattern. While Repository as a word is far better than three random letters (DAO), the concept is the same.
The purpose of the DAO/Repository pattern is to abstract a backing data store, which can be anything that can hold a state.
I'm currently thinking about some design details of remoting/serialization between a Java Swing Web Start application (fat client) and some remote services running on Tomcat. I want to use an HTTP-compatible transport to communicate with the server, and since I'm already using Spring, I assume Spring's HTTP remoting is a good choice. But I'm open to other alternatives here. My design problem is best illustrated with a small example.
The client will call some services on the remote side. A sample service interface:
public interface ServiceInterface extends Serializable {
    // Get immutable reference data
    public List<Building> getBuildings();
    public List<Office> getOffices();

    // Create, read and update Employee objects
    public void insertEmployee(Employee employee);
    public Employee getEmployee();
    public void updateEmployee(Employee employee);
}
Building and Office are immutable reference data objects, e.g.
public class Building implements Serializable {
    String name;

    public Building(String name) { this.name = name; }

    public String getName() { return name; }
}

public class Office implements Serializable {
    Building building;
    int maxEmployees;

    public Office(Building building, int maxEmployees) {
        this.building = building;
        this.maxEmployees = maxEmployees;
    }

    public Building getBuilding() { return building; }

    public int getMaxEmployees() { return maxEmployees; }
}
The available Buildings and Offices won't change during runtime and should be prefetched by the client, to have them available for selection lists, filter conditions, and so on. I want to have only one instance of each particular Building and Office on the client, and one instance on the server side. On the server side it is not a big problem, but in my eyes the problem starts when I call getOffices() after getBuildings(). The Buildings referenced by the Offices returned from getOffices() share the same instance among themselves (if they have the same Building assigned), but they are not the same instances as the Buildings returned by getBuildings().
This might be solved by using some getReferenceData() method returning both sets of information in the same call, but then the problem starts again once I have Employees referencing Offices.
I was thinking about some custom serialization (readObject, writeObject) transferring only the primary key and then getting the instance of the object from some class holding the reference data objects. But is this the best solution to this problem? I assume this is not an uncommon problem, but I did not find anything on Google. Is there a better solution? If not, what would be the best way to implement it?
If you're going to serialize, you'll probably need to implement readResolve to guarantee that you're not creating additional instances:
From the javadoc for Serializable:

Classes that need to designate a replacement when an instance of it is read from the stream should implement this special method with the exact signature:

ANY-ACCESS-MODIFIER Object readResolve() throws ObjectStreamException;
I seem to remember reading about this approach in pre-enum days for handling serialization of objects that had to be guaranteed to be singular, like typesafe enums.
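Applied to the Building example from the question, a sketch could look like this (the canonical-instance registry is my assumption; a real client would populate it from the prefetched reference data):

import java.io.ObjectStreamException;
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Building implements Serializable {

    private static final long serialVersionUID = 1L;

    // Client-side registry of canonical instances, keyed by name.
    private static final Map<String, Building> CANONICAL = new ConcurrentHashMap<>();

    private final String name;

    public Building(String name) { this.name = name; }

    public String getName() { return name; }

    // Called by the serialization machinery after deserialization:
    // swap the freshly created duplicate for the canonical instance.
    private Object readResolve() throws ObjectStreamException {
        return CANONICAL.computeIfAbsent(name, n -> this);
    }
}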
I'd also strongly recommend that you include an explicit serialVersionUID in your serialized classes, so that you can manually control when the application decides that your classes represent incompatible versions that can't be deserialized.
However, on a more basic level, I'd question the whole approach. Rather than trying to guarantee object identity over the network, which sounds like, at the very least, a concurrency nightmare, why not just pass the raw data around and have your logic determine identity by ID, the way we did it in my grandpappy's day? Your back end has a Building object; it gets one from the front end and compares via ID. (If you've altered the object on the front end, you'll have to commit your object to your central datastore and determine what's changed, which could be a synchronization issue with multiple clients, but you'd have that issue anyway.)
Passing data remotely via Spring's HTTP remoting is nice and simple, and a bit less low-level than RMI.
Firstly, I'd recommend using RMI for your remoting, which can be proxied over HTTP (IIRC). Secondly, if you serialize via the ServiceInterface, I believe the serialization mechanism will maintain the relative references when the result is deserialized in the remote JVM.