Automatically binding multiple interfaces to one impl in Guice - java

I have a design like the one shown below, with one interface extending multiple parent interfaces, and one implementation of that interface.
In my client classes I want to depend only on one or more of the parent interfaces, rather than the ZooKeeperClient. I feel like this is a better design as it reduces the surface area of my client class's dependencies, and it also makes it easier to mock things in tests.
e.g.
@Inject
public Foo(ServiceUpdater su) {
    // ...
}
However, in order to achieve this I need to manually add bindings from each interface to the implementation class:
bind(ServiceCreator.class).to(ZooKeeperClientImpl.class);
bind(ServiceDeleter.class).to(ZooKeeperClientImpl.class);
bind(ServiceUpdater.class).to(ZooKeeperClientImpl.class);
// ...
bind(ZooKeeperClient.class).to(ZooKeeperClientImpl.class);
Is there any way I can avoid this repetition and tell Guice to bind the whole hierarchy at once? Something like...
bind(ZooKeeperClient.class /* and its parents */).to(ZooKeeperClientImpl.class);
If not, is there something wrong with my design here? Am I doing something un-Guicy?

There is no such way in Guice; you can use a utility like ClassUtils.getAllInterfaces() to iterate over all the interfaces and bind each of them.
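For instance, a module along these lines (just a sketch using Apache Commons Lang's ClassUtils.getAllInterfaces(); the unchecked cast is needed because the interfaces are only known as Class<?> at runtime):

import java.util.List;
import org.apache.commons.lang3.ClassUtils;
import com.google.inject.AbstractModule;

public class ZooKeeperModule extends AbstractModule {
    @Override
    protected void configure() {
        // Binds every interface implemented (directly or transitively) by the impl,
        // including ZooKeeperClient and its parents. Filter the list if the class
        // also implements unrelated interfaces such as Serializable.
        List<Class<?>> interfaces = ClassUtils.getAllInterfaces(ZooKeeperClientImpl.class);
        for (Class<?> iface : interfaces) {
            @SuppressWarnings("unchecked")
            Class<Object> key = (Class<Object>) iface;
            bind(key).to(ZooKeeperClientImpl.class); // add .in(Singleton.class) for one shared instance
        }
    }
}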

In Silk you can do autobind on the implementation type.
autobind(ZooKeeperClientImpl.class).toConstructor();
This will bind the class to all its interfaces and superclasses (except Object). These binds are weaker than explicit binds, so binding one of ZooKeeperClientImpl's supertypes to something else
bind(ServiceUpdater.class).to(AnotherImplementation.class);
would dominate the autobind, so you don't get conflicts caused by ambiguous binds.
Silk is very much like Guice, so if you don't have too much Guice code it is easy and fast to switch.

Related

Should @EventListener methods be included on the interface?

Maybe this is a question prone to be deleted, but just in case.
I've been wondering lately, while writing @EventListener-annotated methods on my services, whether those methods should be included on the service's interface or not.
I mean, with a class like:
class FooServiceImpl implements FooService {

    @EventListener
    public void doSomethingWithEvent(ApplicationEvent event) {
        // do something
    }
}
Should doSomethingWithEvent be included in FooService?
I think it shouldn't as the method is not meant to be directly invoked by any other instance but the one managing the events.
But, on the other hand, I would have a public method on my service that is not included on the interface, and for some reason, that smells bad to me (maybe it's just a habit).
So, what to do? Is there any convention regarding this?
I would say this is primarily opinion-based, because AFAIK there is no real convention for this question. I didn't flag your question, because I think it is a good one and because I'm not sure whether it is on or off topic.
Just ask yourself the following question: is doSomethingWithEvent() part of my service? Is it part of the contract that its consumers (classes which use FooService) rely on?
Or to break it down: is there any case where a class that uses FooService should be able to call doSomethingWithEvent() directly?
I don't think so.
So, with this in mind, basically: no, you shouldn't include that method in your interface. Programming against interfaces means you provide interfaces to your consumers and they can talk to them without needing to know their implementations. That also means there could be (imho should be) different implementations of one interface. Some might provide an EventListener, some won't.
I personally would prefer to create a separate interface - let's say ApplicationEventAware - and implement it in FooServiceImpl. You will find this approach in Spring many times. In this case I would name my implementation EventAwareFooService and avoid *Impl classes, which I personally consider bad design (some might even call it an anti-pattern).
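A rough sketch of that idea (ApplicationEventAware and EventAwareFooService are only the illustrative names suggested above, not Spring types):

import org.springframework.context.ApplicationEvent;
import org.springframework.context.event.EventListener;

// Hypothetical marker interface for the event-handling side of the service.
interface ApplicationEventAware {
    void doSomethingWithEvent(ApplicationEvent event);
}

// The business contract (FooService) stays untouched; event handling lives in its own interface.
class EventAwareFooService implements FooService, ApplicationEventAware {

    @Override
    @EventListener
    public void doSomethingWithEvent(ApplicationEvent event) {
        // react to the event
    }

    // ... FooService methods ...
}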
There is already ApplicationListener<E extends ApplicationEvent>, so why not just implement that?
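That would look roughly like this; ApplicationListener is Spring's existing callback interface, so nothing extra has to leak into FooService:

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;

class FooServiceImpl implements FooService, ApplicationListener<ApplicationEvent> {

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        // handle the event; FooService itself stays free of listener methods
    }
}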
I don't believe that there is a convention regarding this. However, I agree that doSomethingWithEvent() shouldn't be included in FooService. It will be interesting to see other people's opinions on this.

Java: How to listen on method invocations without registering each object explicitly?

I want to listen on method calls in order to attach additional behavior dynamically around the call. I've already done it on JUnit methods with a custom annotation and runner. I'm trying to do it on a standard java application.
The main idea is to do:
@Override
public void beforeInvoke(Object self, Method m, Object[] args) {
    Object[] newargs = modifyArgs(args);
    m.invoke(self, newargs);
}
It's just an abstract idea, I don't have any concrete example, but I'm curious if it's possible in java.
I've found some approaches:
java.lang.reflect.Proxy.newProxyInstance(...)
where a proxy can be defined for an interface only (it cannot decorate concrete classes). It seems similar to the injection pattern, and it's a different concern.
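For reference, the interface-only JDK variant looks roughly like this (MyService, MyServiceImpl and modifyArgs() are hypothetical placeholders):

import java.lang.reflect.Proxy;

static MyService withBeforeHook(MyService target) {
    return (MyService) Proxy.newProxyInstance(
            MyService.class.getClassLoader(),
            new Class<?>[] { MyService.class },
            (proxyObj, method, args) -> {
                Object[] newArgs = modifyArgs(args);   // "before" hook
                return method.invoke(target, newArgs); // delegate to the real object, not the proxy
            });
}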
Another approach here uses a factory pattern with the ProxyFactory class. This other solution requires explicit calls to the create() method to produce object proxies that listen on method invocations. So, if you bypass it by using the natural constructors of your classes, it doesn't work. It's very constraining if you must make an explicit call to a factory each time you create an object.
Is there a way to do it transparently?
Like Proxy.newProxyInstance(), but also working on concrete classes?
Thanks.
Well, this is commonly seen with the Spring Framework and Aspect-Oriented Programming. Since you delegate your constructor calls to Spring, it is quite easy for Spring to put a proxy in place to intercept calls to the actual objects.
As far as I can tell, the only way to intercept calls is to use a proxy. Either in the way you mentioned or using Spring and AOP.
I think cglib lets you instrument concrete classes.
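A minimal cglib sketch (MyConcreteClass and modifyArgs() are hypothetical; the class must not be final and needs an accessible constructor):

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;

Enhancer enhancer = new Enhancer();
enhancer.setSuperclass(MyConcreteClass.class);
enhancer.setCallback((MethodInterceptor) (obj, method, args, methodProxy) -> {
    Object[] newArgs = modifyArgs(args);          // "before" hook
    return methodProxy.invokeSuper(obj, newArgs); // run the original method on the generated subclass
});
// Instances must still be created through the Enhancer rather than with plain `new`.
MyConcreteClass proxied = (MyConcreteClass) enhancer.create();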
As far as I know there is no easy way to intercept method calls that are called on a concrete class.
As mentioned, you could manipulate the bytecode during compilation (as used in AOP) or at class-loading time (as used by cglib).
Another product to instrument classes would be JMockit (http://jmockit.org/). I would usually use this special kind of black magic only in testing environments and not in a production environment.
Another way you could go is annotation processing. It works during the compilation process. You have to write a Processor which walks through your source code and generates source code that contains the original code plus the enhanced method calls you need.
Depending on how much source-code you have to enhance, this method might be a good idea, but in general it is a lot of work.
Here's a link (https://deors.wordpress.com/2011/10/08/annotation-processors/).
Although it is usually used in combination with annotations, that is not a strict requirement.

Runtime determination of base class in Java

I have two classes, one which is hardware-dependent and one which is not (let's call them HardwareDependent and HardwareIndependent respectively). The HardwareDependent class extends the HardwareIndependent class. Now I have another class which at least must extend HardwareIndependent, but I would prefer it to extend HardwareDependent when possible so it may leverage the additional functionality. Is there a possibility of using reflection or something else to accomplish this? Or is this a total technical impossibility? I suppose if all else fails, I could write the class twice, but that seems an ineffective approach. Thanks for any help in advance.
Inheritance is fixed at compile time.
It sounds like you don't want your new class to extend HardwareIndependent or HardwareDependent; you want it to use an object which could be either. You want composition, not inheritance. Your third class (we'll call it HardwareComposite) has a reference to a HardwareIndependent. Then you can check whether it is HardwareDependent at runtime with the instanceof operator, and if so cast it to HardwareDependent and use the additional facilities that provides.
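A sketch of that composition approach, using the class names from above (commonOperation() and hardwareOnlyOperation() are hypothetical methods on HardwareIndependent and HardwareDependent respectively):

class HardwareComposite {

    private final HardwareIndependent device;

    HardwareComposite(HardwareIndependent device) {
        this.device = device;
    }

    void doWork() {
        device.commonOperation();                   // always available
        if (device instanceof HardwareDependent) {
            // Use the extra facilities only when the hardware-dependent variant was supplied.
            ((HardwareDependent) device).hardwareOnlyOperation();
        }
    }
}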
If your design is forcing you to mix concepts of inheritance and composition, you might look into the Facade and Factory patterns.

How to package Factories in Java

I was wondering how to package the factories that I have in my application. Should the Factory be in the same package as the classes that use it, in the same package as the objects it creates or in its own package?
Thanks for your time and feedback
Usually factories are in the same package as the objects they create; after all, their purpose is to create those objects, and there is rarely a reason to put them in a separate package. Having the factory in the same package as the objects it creates also allows you to exploit package visibility.
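For example, keeping the factory next to the implementation lets the implementation and its constructor stay package-private, so the factory is the only way to obtain an instance (hypothetical names, each type in its own file within the same package):

// Widget.java
package org.example.widgets;

public interface Widget {
    void render();
}

// DefaultWidget.java - package-private, so it is invisible outside org.example.widgets
class DefaultWidget implements Widget {
    DefaultWidget() { }                 // package-private constructor
    @Override
    public void render() { /* ... */ }
}

// WidgetFactory.java
public class WidgetFactory {
    public static Widget create() {
        return new DefaultWidget();     // works only because factory and impl share the package
    }
}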
The whole point of a Factory is to have a configurable way to create implementation instances for interfaces. The convention to have the factory in the same package as the implementation classes it provides adds a completely unnecessary restriction you're unlikely to meet in the future. Also if the implementation returned is not the same across all contexts, it makes even less sense to have it in the same package.
For example, imagine a service lookup factory that is shared between the client and server part of an application, which returns a client side implementation (which resides in a client-only package) on the client, and a server side implementation (in a server-only package) when called from within the server's runtime.
Your factory may even be configurable (we do this by having a XML file which defines which implementation class to return for which interface), so the implementation classes can easily be switched, or different mappings can be used for different contexts.
For example, when unit testing we use a configuration which returns mockup implementations for the interfaces (to be able to write unit tests that are not integration tests), and it would make no sense at all to require those mockup implementations to be in the same package as the factory, as they're part of the testing code rather than the runtime code.
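A heavily simplified sketch of such a configurable lookup; the real one described above reads an XML mapping, but here a properties file mapping interface names to implementation class names stands in for it (all names hypothetical):

import java.io.InputStream;
import java.util.Properties;

public final class ServiceLookup {

    private static final Properties MAPPING = new Properties();

    static {
        // e.g. "com.example.FooService=com.example.client.ClientFooService"
        try (InputStream in = ServiceLookup.class.getResourceAsStream("/services.properties")) {
            MAPPING.load(in);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(Class<T> iface) throws Exception {
        String implName = MAPPING.getProperty(iface.getName());
        return (T) Class.forName(implName).getDeclaredConstructor().newInstance();
    }
}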
My recommendation:
Don't add any package restrictions on the implementation classes, as you don't know which implementations will be used in the future, or in different contexts.
The interfaces may be in the same package, but this restriction is also unnecessary and only makes the configuration rigid.
Configurable factories (such as a service lookup) can be reused and shared across projects when the interface/implementation mapping isn't hardcoded. This point alone justifies having the factory separated from both the interfaces and the implementation classes.
The unit of reuse is the unit of release. This means there shouldn't be coupling across packages, as the package is generally the lowest granularity of release. When you organize a package, imagine yourself saying, "here's everything you need to use these classes."
I like to put the factory in the package it creates objects for. Naming is key here; if the naming is clear and transparent, it will help the maintenance effort down the line.
For example, an action factory could be structured as:
package org.program.actions
interface org.program.actions.Action
enum org.program.actions.ActionTypes
factory org.program.actions.ActionFactory (or .ActionManager)
action implementation classes org.program.actions.LogAction, etc.
Following patterns like this throughout your projects helps project members find classes where they actually are, even in projects they haven't been involved in before.
That wholly depends on the way you're intending to use said factories. Sometimes it makes sense to put a factory in its own package.
You might for example have an interface, foo.bar.ui.Interface. You want to have different implementations of that interface, one for AWT, one for Swing, one for the console, etc. Then it would be more appropriate to create a foo.bar.ui.swing.SwingInterfaceFactory that creates a foo.bar.ui.swing.SwingInterface. The factory for the foo.bar.ui.awt.AWTInterface would then reside in foo.bar.ui.awt.AWTInterfaceFactory.
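In code, that layout might look roughly like this (using the names from the example above, each type in its own file; the SwingInterface implementation itself is omitted):

// foo/bar/ui/Interface.java
package foo.bar.ui;

public interface Interface {
    void show();
}

// foo/bar/ui/swing/SwingInterfaceFactory.java
package foo.bar.ui.swing;

import foo.bar.ui.Interface;

public class SwingInterfaceFactory {
    public static Interface create() {
        // The Swing-specific implementation stays hidden inside foo.bar.ui.swing.
        return new SwingInterface();
    }
}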
Point is, there is no always-follow-this rule. Use whatever is appropriate for your problem.
Why not make it as close as possible if there are no other objections? Actually, why not this:
public interface Toy
{
    static class Factory
    {
        public static final Toy make() { ... }
    }
}
Toy toy = Toy.Factory.make();
HA!
But make() shouldn't statically depend on subclasses of Toy; that would be bad. It can do some dynamic magic, depending on your factory strategy.
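One way to keep make() free of static references to concrete Toy classes is to resolve the implementation dynamically, for example via ServiceLoader; this is only a sketch, and the implementation is discovered from a META-INF/services entry rather than named in code:

import java.util.ServiceLoader;

public interface Toy
{
    static class Factory
    {
        public static Toy make()
        {
            // Looks up whichever Toy implementation is registered under
            // META-INF/services/<fully qualified Toy name> on the classpath.
            return ServiceLoader.load(Toy.class)
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("no Toy implementation on the classpath"));
        }
    }
}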

Coding to interfaces? [duplicate]

This question already has answers here:
What does it mean to "program to an interface"?
(33 answers)
Closed 9 years ago.
I want to solidify my understanding of the "coding to interface" concept. As I understand it, one creates interfaces to delineate expected functionality, and then implements these "contracts" in concrete classes. To use the interface one can simply call the methods on an instance of the concrete class.
The obvious benefit is knowing the functionality provided by the concrete class, irrespective of its specific implementation.
Based on the above:
Are there any fallacies in my understanding of "coding to interfaces"?
Are there any benefits of coding to interfaces that I missed?
Thanks.
Just one possible correction:
To use the interface one can simply call the methods on an instance of the concrete class.
One would call the methods on a reference of the type interface, which happens to use the concrete class as implementation:
List<String> l = new ArrayList<String>();
l.add("foo");
l.add("bar");
If you decided to switch to another List implementation, the client code works without change:
List<String> l = new LinkedList<String>();
This is especially useful for hiding implementation details, auto generating proxies, etc.
You'll find that frameworks like Spring and Guice encourage programming to an interface. It's the basis for ideas like aspect-oriented programming, auto-generated proxies for transaction management, etc.
Your understanding seems to be right on. Your co-worker just swung by your desk and has all the latest pics of the Christmas party, starring your drunk boss, loaded onto his thumb drive. Neither of you thinks twice about how this thumb drive works; to you it's a black box, but you know it works because of the USB interface.
It doesn't matter whether it's a SanDisk or a Titanium (not even sure that is a brand); size and color don't matter either. In fact, the only thing that matters is that it is not broken (readable) and that it plugs into USB.
Your USB thumb drive abides by a contract; it is essentially an interface. One can assume it fulfills some very basic duties:
Plugs into USB
Abides by the contract method CopyDataTo:
public interface IUSB {
    void CopyDataTo(String somePath); // used to copy data from the thumb drive to...
}
Abides by the contract method CopyDataFrom:
public interface IUSB {
    void CopyDataFrom(); // used to copy data from your PC to the thumb drive
}
OK, maybe not those methods, but the IUSB interface is just a contract that thumb drive vendors have to abide by to ensure functionality across various platforms/vendors. So SanDisk makes their thumb drive according to the interface:
public class SanDiskUSB implements IUSB
{
    // todo: define methods of the interface here
}
Ari, I think you already have a solid understanding (from what it sounds like) about how interfaces work.
The main advantage is that the use of an interface loosely couples a class with its dependencies. You can then change a class, or introduce a new concrete implementation of the interface, without ever having to change the classes which depend on it.
To use the interface one can simply call the methods on an instance of the concrete class.
Typically you would have a variable typed to the interface type, thus allowing only access to the methods defined in the interface.
The obvious benefit is knowing of the functionality provided by the concrete class, irrespective of its specific implementation.
Sort of. Most importantly, it allows you to write APIs that take parameters with interface types. Users of the API can then pass in their own classes (which implement those interfaces) and your code will work on those classes even though they didn't exist yet when it was written (such as java.util.Arrays.sort() being able to sort anything that implements Comparable or comes with a suitable Comparator).
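For instance, Arrays.sort() was written against the Comparable interface, yet it can sort a user-defined type created long after sort() itself (a minimal sketch):

import java.util.Arrays;

class Person implements Comparable<Person> {
    final String name;

    Person(String name) { this.name = name; }

    @Override
    public int compareTo(Person other) {
        return name.compareTo(other.name); // natural order by name
    }
}

// sort() only knows about Comparable, yet it sorts Person instances:
Person[] people = { new Person("Zoe"), new Person("Adam") };
Arrays.sort(people);                       // people is now [Adam, Zoe]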
From a design perspective, interfaces allow/enforce a clear separation between API contracts and implementation details.
The aim of coding against interfaces is to decouple your code from the concrete implementation in use. That is, your code will not make assumptions about the concrete type, only the interface. Consequently, the concrete implementation can be exchanged without needing to adjust your code.
You didn't list the part about how you get an implementation of the interface, which is important. If you explicitly instantiate the implementing class with a constructor, then your code is tied to that implementation. You can use a factory to get an instance for you, but then you're as tied to the factory as you were before to the implementing class. Your third alternative is to use dependency injection, which means having a factory plug the implementing object into the object that uses it; in that case the class that uses the object is tied neither to the implementing class nor to a factory.
I think you may have hinted at this, but I believe one of the biggest benefits of coding to an interface is that you are breaking dependency on concrete implementation. You can achieve loose coupling and make it easier to switch out specific implementations without changing much code. If you are just learning, I would take a look at various design patterns and how they solve problems by coding to interfaces. Reading the book Head First: Design Patterns really helped things click for me.
As I understand it, one creates interfaces to delineate expected functionality, and then implements these "contracts" in concrete classes.
The only small correction I see in your thinking is this: you're going to call out expected contracts, not expected functionality. The functionality is implemented in the concrete classes. The interface only states that you will be able to call something that implements the interface with the expected method signatures. Functionality is hidden from the calling object.
This will allow you to stretch your thinking into polymorphism as follows.
SoundMaker sm = new Duck();
SoundMaker sm1 = new ThunderousCloud();
sm.makeSound();  // quack, calls all sorts of stuff like larynx, etc.
sm1.makeSound(); // BOOM!, completely different operations here...
