I'm writing a desktop MMO game and I'd like to consult on an architecture question. I have classes such as NetworkManager, ClientWindow, and CachingTextureAtlas that are needed in only one instance for the game, and here is my question: is it correct to make them singletons? If yes, all interaction goes through what are effectively global classes, which seems bad from a design point of view. If no, we have to compose them in a facade and pass all of them to the constructors of too many classes, which is not convenient either. What is the better choice?
Singleton should be used carefully (almost never). I have seen it go wrong in many places, the most recent scenario being a Hibernate session factory: initially we thought a single instance would be fine, but then we came across a scenario that needed multiple instances and ended up refactoring the code.
The other problem is unit testing: writing tests becomes a nightmare for all the classes that depend on these singleton classes.
One solution might be to inject a kind of factory that can provide these instances; it gives you a level of indirection and avoids direct coupling. The other approach is to have a container (e.g. PicoContainer) and inject the objects as constructor dependencies, which is what you have pointed out as well.
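For illustration, here is a minimal sketch of the factory approach using the classes from your question (ServiceFactory and WorldRenderer are made-up names for this example):

// The asker's single-instance classes, stubbed for the example.
class NetworkManager {}
class CachingTextureAtlas {}

// The factory gives indirection without global state.
interface ServiceFactory {
    NetworkManager networkManager();
    CachingTextureAtlas textureAtlas();
}

// A consumer receives its dependencies instead of reaching for globals,
// so a test can hand it a fake factory.
class WorldRenderer {
    private final CachingTextureAtlas atlas;

    WorldRenderer(ServiceFactory services) {
        this.atlas = services.textureAtlas();
    }
}

Whether the factory returns one shared instance or a fresh one is now decided in a single place, at wiring time, rather than being baked into every class.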
Related
I am developing an application where I need to create an object, and multiple classes have to access and modify that object. How can each class see the recent changes made by the other classes, and how can I access the object centrally from all the classes without passing it as a parameter everywhere?
I am creating an Apache POI document where I am adding multiple tables, multiple headers/footers and paragraphs. I want only a single XWPFDocument object present in my application.
Is there any design pattern with which we can achieve this?
Well the singleton design pattern would work - but isn't terribly clean; you end up with global static state which is hard to keep track of and hard to test. It's often considered an anti-pattern these days. There are a very few cases where it still makes sense, but I try to avoid it.
A better approach would be to use dependency injection: make each class which needs one of these objects declare a constructor parameter of that type (or possibly have a settable property). Each class shouldn't care too much about how shared or otherwise the object is (beyond being aware that it can be shared). Then it's up to the code which initializes the application to decide which objects should be shared.
There are various dependency injection frameworks available for Java, including Guice and Spring. The idea of these frameworks is to automatically wire up all the dependencies in your application, given appropriate configuration.
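For example, wiring a shared instance with Guice could look roughly like this (a sketch; ReportWriter is a made-up stand-in for whatever class holds your single XWPFDocument):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Singleton;

// An ordinary class; nothing in it enforces "only one instance".
class ReportWriter {}

class ReportModule extends AbstractModule {
    @Override
    protected void configure() {
        // The single-instance decision lives here, in the wiring.
        bind(ReportWriter.class).in(Singleton.class);
    }
}

public class Main {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new ReportModule());
        ReportWriter writer = injector.getInstance(ReportWriter.class);
        // Every injection of ReportWriter now receives this same object.
    }
}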
There is the Singleton pattern for this: it creates a single instance for the application and shares it without passing it around.
But it is not the best of options.
Why is it a bad option?
It harms the testability of your code
It makes the design hard to extend
A better alternative to the Singleton pattern is an application-wide single instance
Create a single object for the application and share it using some context object. More on this is explained by Misko in his guide to testable code
Single instance, not the Singleton pattern
It stands for an application-wide single instance which does NOT enforce its singletonness through a static instance field.
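A minimal sketch of that idea (Configuration here is just an example name):

// An application-wide single instance: created once at startup and
// passed down explicitly. No static field enforces singletonness.
class Configuration {}

class Application {
    private final Configuration config;

    Application(Configuration config) {
        this.config = config;
    }
}

public class Main {
    public static void main(String[] args) {
        Configuration config = new Configuration(); // the only instance, by convention
        new Application(config);
        // A test remains free to construct its own Configuration.
    }
}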
Why are Singletons hard to test?
Static access prevents collaborating with a subclass or wrapped version of another class. By hard-coding the dependency, we lose the power and flexibility of polymorphism.
Every test using global state needs it to start in an expected state, or the test will fail. But another object might have mutated that global state in a previous test.
Global state often prevents tests from being able to run in parallel, which forces test suites to run slower.
If you add a new test (which doesn’t clean up global state) and it runs in the middle of the suite, another test may fail that runs after it.
Singletons enforcing their own “Singletonness” end up cheating.
You’ll often see mutator methods such as reset() or setForTest(…) on so-called singletons, because you’ll need to change the instance during tests. If you forget to reset the Singleton after a test, a later use will use the stale underlying instance and may fail in a way that’s difficult to debug.
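A sketch of what such a "cheating" singleton tends to look like (names made up):

class Counter {
    private static Counter instance = new Counter();
    private int count;

    static Counter getInstance() { return instance; }

    // Test-only back doors that betray the design:
    static void setForTest(Counter fake) { instance = fake; }
    static void reset() { instance = new Counter(); }

    void increment() { count++; }
    int count() { return count; }
}

Forget to call Counter.reset() in one test's teardown, and a later test sees the stale count; that is exactly the hard-to-debug failure described above.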
I am developing some software with a few other people. In the completed class diagrams there is one Database class, and the Order class, for example, has two constructors: one which takes no arguments and one which accepts an id. It also has a save() method. Going by those class features, I assume that if you supply an id in the constructor, the class will use the Database class to populate the object's properties. There is also no place to inject this Database class, either in the constructor or in a setter method, so I presume they want to use a Singleton.
I want to know if my arguments are valid before I say it to them so here they are:
Doing it this way breaks the SOLID Single Responsibility Principle (SRP)
It introduces tight coupling between all our classes which need the Database class
It hides our objects dependencies
It makes unit testing much harder
Introduces unnecessary global state
Would they be valid arguments and are there any more flaws doing it this way? If my points are valid would it be worth saying it to them?
Thanks.
This is a well-known pattern called active record. It is commonly employed in several large frameworks such as Ruby on Rails. It does have the downsides that you mention, and I think that you should highlight the potential problem, but not without having any alternatives to discuss.
One common alternative is to have a service façade which saves your objects - a set of DAOs. With this pattern you make accessing the database more explicit and less convenient, but IMO you decrease DB coupling in your main app. This is better from an SRP perspective, as you mention, which among other things makes testing much easier.
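As a sketch of that DAO alternative (OrderDao and JdbcOrderDao are hypothetical names), the database dependency becomes explicit and injectable while Order stays a plain object:

import javax.sql.DataSource;

class Order {
    long id;
    // ...other order fields
}

interface OrderDao {
    Order findById(long id);
    void save(Order order);
}

class JdbcOrderDao implements OrderDao {
    private final DataSource dataSource; // injected, not a global Database

    JdbcOrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public Order findById(long id) { /* SELECT ... */ return new Order(); }
    public void save(Order order)  { /* INSERT or UPDATE ... */ }
}

In tests, a fake OrderDao replaces the database entirely, so none of the five objections above apply.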
We have a typical n-tier Java application, and I noticed that our data access layer has DAOs of the form FooDAO and FooDAOImpl. I was looking to justify the need for the two, and here is my analysis.
If you had multiple implementations of the same interface, the abstraction would be helpful. But given that we have already chosen the framework for the DAOImpl (say iBATIS), is it really required?
Help in proxying via Spring. From what I gather, classes that have interfaces can be proxied easily (going the JdkProxy route), whereas classes without interfaces go the CGLIB route, where one has to subclass the class to be proxied. Subclassing has its problems when the class to be proxied is final or has no default constructor - both of which are highly unlikely at the data access layer. Performance used to be a factor, but from what I hear it is no longer a cause for concern.
Help in mocking. Classes with interfaces are more suited to be mocked by mocking frameworks. I have only heard this, but not seen it in practice - so can't really count on it, but maybe because of the same factors as mentioned in #2 above.
With these points, I don't feel the real need for a separate FooDAO and FooDAOImpl where a simple FooDAO should suffice. Feel free to correct any of the points that I have mentioned.
Thanks in advance!
I tried #3 with Mockito and it was able to mock POJOs without interfaces just fine. Given the arguments against #1 and #2, I am inclined not to go with separate DAO and DAOImpl for now. Feel free to add other points of comparison.
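To make the Mockito point concrete, here is roughly what worked for me (FooDAO is a plain class with no interface; names are illustrative, and the class and method must not be final):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class Foo {}

// A concrete DAO with no interface.
class FooDAO {
    Foo findById(long id) { /* real iBATIS call here */ return null; }
}

class FooServiceTest {
    void looksUpFoo() {
        FooDAO dao = mock(FooDAO.class);          // no interface required
        when(dao.findById(42L)).thenReturn(new Foo());
        // ...exercise the class under test with the mocked DAO
    }
}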
I see a fourth reason:
Hide implementation details
Depending for instance on the team, the expected lifetime of the software or the amount of change anticipated in the future, it pays to hide as much as possible even if there is only one implementation.
I would say that having a FooDAO interface with a single implementation FooDAOImpl is generally an anti-pattern. The simpler solution (no separate interfaces for DAOs) is much preferable, unless you have a solid reason otherwise - which doesn't seem to be the case here.
Personally, I would go even further and say that having DAOs at all is not the best choice. I prefer having a single persistence facade class implemented on top of an ORM API such as Hibernate or JPA. Maybe iBATIS is too low-level for that, though.
I'm running a server which on occasion has to search what a client queries. I'd like to write the client query to disk for the records, but I don't want to slow down the search any more than I have to. (The search is already the bottleneck...)
So, when the client performs a search, I'm having the client's thread send a message to a singleton thread, which handles the disk write while the client thread continues to handle the client's requests. That way, the file on disk doesn't run into sync issues, and the client's experience isn't slowed down.
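Concretely, the design I have in mind looks something like this rough sketch (not my actual code):

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class QueryLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    QueryLogger(String path) {
        Thread writer = new Thread(() -> {
            try (BufferedWriter out = Files.newBufferedWriter(Paths.get(path),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
                while (true) {
                    out.write(queue.take()); // blocks until a query arrives
                    out.newLine();
                    out.flush();
                }
            } catch (IOException | InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true); // don't keep the server alive just to log
        writer.start();
    }

    // Called from client threads; returns immediately.
    void log(String query) {
        queue.offer(query);
    }
}

Only the one writer thread ever touches the file, which is why there are no sync issues on disk.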
I have a conceptual question here: is the singleton appropriate in this case? I've been using the singleton design pattern a little too much in my recent programming, and I want to make sure that I'm using it for its intended use.
Any feedback is greatly appreciated.
The singleton pattern is definitely overused and comes with its share of difficulties (unit-testing is the canonical example), but like everything in design, you need to weigh the pros and cons for your specific scenario. The singleton pattern does have its uses. There are options that may allow you to get the singleton behaviour, while alleviating some of the inherent issues:
Interception (often referred to as aspect-oriented programming, though I've seen debate that they are not exactly the same thing... can't find the article I read on this at the moment) is definitely an option. You could use any combination of constructor injection, the decorator pattern, an abstract factory and an inversion of control container. I'm not up on my Java IoC containers, but some .NET containers allow automatic interception (I believe Spring.NET does, so Spring for Java likely has this built in). This is very handy for any kind of cross-cutting concern where you need to perform certain types of actions across multiple layers (security, logging, etc.). Also, most IoC containers let you control lifetime management, so you would be able to treat your logger as a singleton without having to actually implement the singleton pattern manually.
To sum it up: if a singleton fits your scenario (which seems plausible from your description), go for it. Just make sure you have weighed the pros and cons. You may want to try a different approach and compare the two.
As you may know, when you proxy an object - such as when you create a bean with transactional attributes for Spring/EJB, or even when you create a partial mock with some frameworks - the proxied object doesn't know about the proxy, so internal calls are not redirected through it and are therefore not intercepted either...
That's why if you do something like that in Spring:
@Transactional
public void doSomething() {
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void doSomethingInNewTransaction() {
    ...
}
When you call doSomething, you expect to get 3 new transactions in addition to the main one, but actually, due to this problem, you only get one...
So I wonder how you handle this kind of problem...
I'm actually in a situation where I must handle a complex transactional system, and I don't see any better way than splitting my service into many small services, so that I'm sure to pass through all the proxies...
That bothers me a lot because all the code belongs to the same functional domain and should not be split...
I've found this related question with interesting answers:
Spring - #Transactional - What happens in background?
Rob H says that we can inject the Spring proxy inside the service, and call proxy.doSomethingInNewTransaction(); instead.
It's quite easy to do and it works, but I don't really like it...
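If I understand it correctly, it looks something like this (a sketch, assuming Spring allows the self-reference, e.g. with @Lazy):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeService {

    @Autowired
    @Lazy // avoids a circular-reference problem at startup
    private SomeService self;

    @Transactional
    public void doSomething() {
        self.doSomethingInNewTransaction(); // goes through the proxy, so it is intercepted
        self.doSomethingInNewTransaction();
        self.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // ...
    }
}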
Yunfeng Hou says this:
So I write my own version of CglibSubclassingInstantiationStrategy and proxy creator so that it will use CGLIB to generate a real subclass which delegates calls to its super rather than to another instance, which is what Spring is doing now. So I can freely annotate any method (as long as it is not private), and from wherever I call these methods, they will be taken care of. Well, I still have a price to pay: 1. I must list all annotations that I want to enable the new CGLIB subclass creation. 2. I cannot annotate a final method, since I am now generating a subclass, so a final method cannot be intercepted.
What does he mean by "which spring is doing now"? Does this mean internal transactional calls are now intercepted?
What do you think is better?
Do you split your classes when you need some transactional granularity?
Or do you use some workaround like above? (please share it)
I'll talk about Spring and @Transactional, but the advice applies to many other frameworks as well.
This is an inherent problem with proxy-based aspects. It is discussed in the Spring documentation here:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
There are a number of possible solutions.
Refactor your classes to avoid the self-invocation calls that bypass the proxy.
The Spring documentation describes this as "The best approach (the term best is used loosely here)".
Advantages of this approach are its simplicity and that there are no ties to any framework. However, it may not be appropriate for a very transaction-heavy code base, as you'd end up with many trivially small classes.
Internally in the class get a reference to the proxy.
This can be done by injecting the proxy or with a hard-coded AopContext.currentProxy() call (see the Spring docs above).
This method allows you to avoid splitting the classes but in many ways negates the advantages of using the transactional annotation. My personal opinion is that this is one of those things that is a little ugly but the ugliness is self contained and might be the pragmatic approach if lots of transactions are used.
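For reference, the hard-coded variant looks roughly like this (a sketch; it only works if proxy exposure is enabled, e.g. @EnableAspectJAutoProxy(exposeProxy = true) or its XML equivalent):

import org.springframework.aop.framework.AopContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeService {

    @Transactional
    public void doSomething() {
        // currentProxy() throws IllegalStateException unless exposeProxy is on
        SomeService proxy = (SomeService) AopContext.currentProxy();
        proxy.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // ...
    }
}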
Switch to using AspectJ
As AspectJ does not use proxies, self-invocation is not a problem.
This is a very clean method, though it comes at the expense of introducing another framework. I've worked on a large project where AspectJ was introduced for this very reason.
Don't use #Transactional at all
Refactor your code to use manual transaction demarcation - possibly using the decorator pattern.
An option, but one that requires moderate refactoring, introduces additional framework ties and increases complexity - so probably not a preferred option.
My Advice
Usually splitting up the code is the best answer, and it can also be good for separation of concerns. However, if I had a framework/application that relied heavily on nested transactions, I would consider using AspectJ to allow self-invocation.
As always when modelling and designing complex use cases - focus on understandable and maintainable design and code. If you prefer a certain pattern or design but it clashes with the underlying framework, consider if it's worth a complex workaround to shoehorn your design into the framework, or if you should compromise and conform your design to the framework where necessary. Don't fight the framework unless you absolutely have to.
My advice - if you can accomplish your goal with such an easy compromise as to split out into a few extra service classes - do it. It sounds a lot cheaper in terms of time, testing and agony than the alternative. And it sure sounds a lot easier to maintain and less of a headache for the next guy to take over.
I usually make it simple, so I split the code into two objects.
The alternative, if you need to keep everything in the same file, is to demarcate the new transaction yourself using a TransactionTemplate. It's a few more lines of code, but not more than defining a new bean, and it sometimes makes the point more obvious.
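A sketch of what that looks like (assuming a PlatformTransactionManager bean is available for injection):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class SomeService {
    private final TransactionTemplate newTx;

    public SomeService(PlatformTransactionManager txManager) {
        this.newTx = new TransactionTemplate(txManager);
        this.newTx.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    public void doSomething() {
        for (int i = 0; i < 3; i++) {
            // Each execute() call runs in its own new transaction.
            newTx.execute(status -> {
                // ...the work that used to live in doSomethingInNewTransaction()
                return null;
            });
        }
    }
}

No proxies are involved in the inner calls, so self-invocation is simply not an issue here.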