Usefulness of java dynamic proxies vs regular proxies - java

I need some advice on the scenarios in which a dynamic proxy would prove more useful than a regular proxy.
I've put a lot of effort into learning how to use dynamic proxies effectively. For this question, set aside that frameworks like AspectJ can do basically everything we try to achieve with dynamic proxies, or that, e.g., CGLIB can be used to address some of the shortcomings of dynamic proxies.
Use cases
Decorators - e.g., perform logging on method invocation, or cache return values of complex operations
Uphold contract - that is, making sure parameters are within the accepted range and return values conform to accepted values.
Adapter - Saw some clever article somewhere describing how this is useful. I rarely come across this design pattern though.
Are there others?
Dynamic proxy advantages
Decorator: Log all method invocations, e.g.,
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    // 'target' is the wrapped object held by the invocation handler
    System.out.println("before method " + method.getName());
    return method.invoke(target, args);
}
The decorator pattern is definitely useful, as it allows adding side effects to all proxied methods (although this behaviour is a textbook example of using aspects ..).
Contract: In contrast to a regular proxy, we need not implement the full interface. E.g.,
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    if ("getValues".equals(method.getName())) {
        // check or transform parameters and/or return values, e.g.,
        return RangeUtils.validateResponse(method.invoke(target, args));
    }
    if ("getVersion".equals(method.getName())) {
        // another example, this time with no delegation at all
        return 3;
    }
    // delegate all other methods to the target unchanged
    return method.invoke(target, args);
}
The contract use case, on the other hand, only gives the benefit of not having to implement the complete interface. Then again, renaming a proxied method would silently break the dynamic proxy, since the method names are only referenced as strings.
Conclusion
So what I see here is one real use case, and one questionable use case. What's your opinion?

There are a number of potential uses for dynamic proxies beyond what you've described -
Event publishing - on method x(), transparently call y() or send message z.
Transaction management (for db connections or other transactional ops; see the sketch after this list)
Thread management - thread out expensive operations transparently.
Performance tracking - timing operations checked by a CountDownLatch, for example.
Connection management - thinking of APIs like Salesforce's Enterprise API that require clients of their service to start a session before executing any operations.
Changing method parameters - in case you want to pass default values for nulls, if that's your sort of thing.
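To make the transaction-management item concrete, here is a minimal sketch. The TransactionManager interface and its begin/commit/rollback methods are assumed names for illustration, not any particular framework's API:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface TransactionManager { void begin(); void commit(); void rollback(); }

class TransactionalHandler implements InvocationHandler {
    private final Object target;
    private final TransactionManager tx;

    TransactionalHandler(Object target, TransactionManager tx) {
        this.target = target;
        this.tx = tx;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        tx.begin();
        try {
            Object result = method.invoke(target, args);
            tx.commit();
            return result;
        } catch (InvocationTargetException e) {
            tx.rollback();
            throw e.getCause(); // rethrow the exception the target actually threw
        }
    }
}

// Usage: any interface can be wrapped the same way, e.g.
// OrderDao dao = (OrderDao) Proxy.newProxyInstance(
//         OrderDao.class.getClassLoader(), new Class<?>[] { OrderDao.class },
//         new TransactionalHandler(realDao, txManager));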
Those are just a few options in addition to validation and logging like you've described above. FWIW, JSR 303, a bean validation specification, has an AOP-style implementation in Hibernate Validator, so you don't need to implement it for your data objects specifically. Spring framework also has validation built in and has really nice integration with AspectJ for some of the stuff described here.

Indeed, AOP is where dynamic proxies give the most benefit, because you can create a dynamic proxy around an object that you don't know about in advance.
Another useful side of a dynamic proxy is when you want to apply the same operation to all methods. With a static proxy you'd need a lot of duplicated code (each proxied method would make the same call and then delegate to the proxied object), while a dynamic proxy minimizes this, as shown below.
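A minimal sketch of such a generic proxy (the wrap helper is my own name; only java.lang.reflect.Proxy is required):

import java.lang.reflect.Proxy;

public final class LoggingProxy {

    // One handler covers every method of whatever interface is passed in,
    // so there is no per-method boilerplate as with a hand-written proxy.
    @SuppressWarnings("unchecked")
    public static <T> T wrap(final T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                (proxy, method, args) -> {
                    System.out.println("before method " + method.getName());
                    return method.invoke(target, args);
                });
    }

    public static void main(String[] args) {
        Runnable task = LoggingProxy.wrap(() -> System.out.println("running"), Runnable.class);
        task.run(); // prints "before method run", then delegates and prints "running"
    }
}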
Also note that Adapter and Decorator are separate patterns. They look like the Proxy pattern in the way they are implemented (by object composition), but they serve a different purpose:
the decorator pattern allows you to have multiple concrete decorators, thus adding functionality at runtime
the adapter pattern is meant to adapt an object to an interface it does not implement. The best example I can think of is EnumerationIterator - it adapts an Enumeration to the Iterator interface.

Another use case I can think of is to dynamically implement interfaces at runtime, which is the way some frameworks work.
Take for instance Retrofit, a Java library for consuming REST services. You define a Java interface that reflects the operations available in the REST API and decorate the methods with annotations to configure specifics of the request. It's easy to see that in this case all methods defined in the interface must execute an HTTP request against some server, transform the method arguments into request parameters, and then parse the response into a Java object defined as the method's return type.
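As a rough sketch of how a framework can hand you an implementation of such an interface at runtime (the @GET annotation, the GitHubApi interface and the fake "request" below are made-up placeholders, not Retrofit's actual API):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Proxy;

// Made-up annotation standing in for the request metadata a real framework would read.
@Retention(RetentionPolicy.RUNTIME)
@interface GET { String value(); }

interface GitHubApi {
    @GET("/users/{user}/repos")
    String listRepos(String user);
}

final class TinyRestClient {
    @SuppressWarnings("unchecked")
    static <T> T create(Class<T> api, String baseUrl) {
        return (T) Proxy.newProxyInstance(api.getClassLoader(), new Class<?>[] { api },
                (proxy, method, args) -> {
                    GET get = method.getAnnotation(GET.class);
                    String path = get.value().replace("{user}", String.valueOf(args[0]));
                    // A real implementation would execute the HTTP request here and map
                    // the response body to method.getReturnType(); we just return the URL.
                    return baseUrl + path;
                });
    }
}

// Usage:
// GitHubApi api = TinyRestClient.create(GitHubApi.class, "https://api.github.com");
// String url = api.listRepos("octocat"); // "https://api.github.com/users/octocat/repos"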

Related

Only use library if available java

In C++ or C, you can do things like this:
#ifdef WINAPI
public void showWindow(int cmdShow);
#endif
But in Java, how can I define methods that will only be compiled when the user has enabled a library? I'm making a cross-platform application that uses certain native features which are not yet abstracted by the JVM.
Also, I often make constructors that allow building my class from an object coming from some library. In that case, once the constructor is there, it forces the user to have that library. Instead, I'd like it to be enabled only when the user has that library.
Java does not have the concept of macros or templates. Instead it has reflection and generics. In your case, you would use reflection. The idea is to code to interfaces and then at runtime figure out which implementation to use. If no suitable/custom implementation is found you need to fall back to some default (possibly a no-op implementation if you want nothing to happen by default).
The best way to support such architecture is to provide an entry point to your hierarchy of interfaces, i.e., a factory. The entry point will then provide to all clients the implementations they need. The entry point can use reflection to figure out which implementation you want, e.g.,
public final class LibraryManager {
public static LibraryInterface find(String url) { ... }
}
The LibraryManager above figures out via reflection which implementation of LibraryInterface you want to obtain at runtime. The url can be simply the fully qualified class name of the required implementation of LibraryInterface, e.g., com.my.pack.MyLibraryInterfaceImpl.
To understand this in practice, take a look at JDBC's DriverManager: you get an implementation of Connection by providing the DriverManager.getConnection method with a JDBC URL. Behind the scenes, DriverManager uses reflection to find the right driver and returns the implementation needed. If the driver library for the given URL is not available you will get an exception.
In your case, the only modification you need to make to that pattern is to provide some default implementation if no library is specified. If the implementations rely on 3rd party libraries you are going to need to write appropriate adapters that use these, etc.
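A minimal sketch combining the reflective lookup with the default fallback (LibraryInterface comes from the fragment above; NoOpLibrary and its method are placeholders):

interface LibraryInterface {
    void doNativeStuff(); // placeholder operation
}

// Default used when no (or no working) implementation is available.
class NoOpLibrary implements LibraryInterface {
    @Override
    public void doNativeStuff() { /* intentionally does nothing */ }
}

public final class LibraryManager {

    private LibraryManager() { }

    // 'url' is the fully qualified class name of the desired implementation,
    // e.g. "com.my.pack.MyLibraryInterfaceImpl".
    public static LibraryInterface find(String url) {
        try {
            Class<?> impl = Class.forName(url);
            return (LibraryInterface) impl.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException | ClassCastException e) {
            // The library (or a 3rd-party dependency it needs) is missing from the
            // classpath, or the class is not a LibraryInterface: fall back to the default.
            return new NoOpLibrary();
        }
    }
}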
Note that in many cases you would actually return a factory to your library implementation so you can create many instances of the library objects. This works exactly the same way as above except you return some LibraryFactoryInterface instead.
Finally, if you use some kind of IoC or DI framework like Spring, you can define your implementation factory at configuration time to be injected in your application. A typical example and an alternative to DriverManager is DataSource. It's very common in a Spring application to define your DataSources in the configuration file. Spring will take care of wiring the DataSource into the objects that need to connect to the database.

"Passing arguments" via ThreadLocal ok?

I'm building both a Java networking library and an application which makes use of it. The library consists of:
An interface PacketSocket which has methods for sending and receiving packets of bytes.
Two implementations of it, one over TCP and one over UDP.
An ObjectConnection class which is built on top of a PacketSocket and handles serialization of objects to byte packets.
The application uses RequestConnection on top of a UDPPacketSocket. The UDPPacketSocket implementation is unique in that it supports specifying, per packet, whether delivery should be guaranteed. I would like to be able to use this from within the application, but there is no way to express it through the ObjectConnection and PacketSocket interfaces.
I could of course add a boolean guaranteed parameter to the applicable methods in those interfaces, but then I'd eventually (when there will be more implementations of PacketSocket) have to add many more parameters that are specific to certain implementations only and ignored by others.
Instead I thought I could do it with a static thread-local property of UDPPacketSocket, like so:
class Application {
public void sendStuff() {
// is stored in a ThreadLocal, so this code is still thread-safe
UDPPacketSocket.setGuaranteed(true);
try {
myObjCon.send(...);
} finally {
// ... restore old value of guaranteed
}
}
}
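For reference, the static thread-local flag on UDPPacketSocket would look roughly like this (a sketch; the accessor names follow the usage above and the send() body is only indicative):

class UDPPacketSocket /* implements PacketSocket */ {

    // Each thread sees its own value; the default is "not guaranteed".
    private static final ThreadLocal<Boolean> GUARANTEED =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    public static void setGuaranteed(boolean guaranteed) {
        GUARANTEED.set(guaranteed);
    }

    public static boolean isGuaranteed() {
        return GUARANTEED.get();
    }

    public void send(byte[] packet) {
        boolean guaranteed = isGuaranteed();
        // ... use 'guaranteed' to decide whether to add ack/retransmit handling ...
    }
}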
What do you think of an approach like that?
I think it's an ugly hack; however, sometimes it is the only option, especially if you are "passing" a value through many layers of code and you cannot easily modify that code.
I would avoid it if you can. A better option, if possible, would be to have the following:
myObjCon.sendGuaranteed(...);
I agree that this is an ugly hack. It will work, but you may end up regretting doing it.
I'd deal with this by using a Properties object to pass the various PacketSocket implementation parameters. If that is unpalatable, define a PacketSocketParameters interface with a hierarchy of implementation classes for the different kinds of PacketSocket.
I'd recommend some sort of "performance characteristics" parameter, maybe something like a Properties instance. Then each implementation could use its own arbitrary properties (e.g. "guaranteed" for your current implementation). Note that you can avoid string parsing by using the Object methods on Properties (e.g. get() instead of getProperty()) or by using a plain Map instance; then your values could be true objects (e.g. Boolean).
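A sketch of that idea (the overloaded send method and the key name are only illustrative):

import java.io.IOException;
import java.util.Map;

interface PacketSocket {
    void send(byte[] packet) throws IOException;

    // Optional per-call hints; each implementation picks out the keys it
    // understands and silently ignores the rest.
    void send(byte[] packet, Map<String, Object> characteristics) throws IOException;
}

// Caller: only the UDP implementation will actually honour this hint.
// socket.send(data, Map.of("guaranteed", Boolean.TRUE));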
Since we know it's UDP, we can de-abstract the layers and access the concrete class:
( (UDPPacketSocket) connection.getSocket() ).setGuaranteed(true);

Java Socket RPC protocol

I've been asking some questions about adapting the command protocol for use in my client server environment. However, after some experimentation, I have come to the conclusion that it will not work for me. It is not designed for this scenario. I'm thus at a loose end.
I have implemented a sort of RPC mechanism before whereby I had a class entitled "Operation". I also had an enum entitled "Action" that contained names for actions that could be invoked on the server.
Now, in my old project, every time that the client wanted to invoke an action on the server, it would create an instance of 'Operation' and set the action variable with a value from the "Action" enum. For example
Operation serverOpToInvoke = new Operation();
serverOpToInvoke.setAction(Action.CREATE_TIME_TABLE);
serverOpToInvoke.setParameters(params); // params is a Map of named parameters
ServerReply reply = NetworkManager.sendOperation(serverOpToInvoke);
...
On the server side, I had to perform the horrible task of determining which method to invoke by examining the 'Action' enum value with a load of 'if/else' statements. When a match was found, I would call the appropriate method.
The problem with this was that it was messy, hard to maintain and was ultimately bad design.
My question is thus - is there some sort of pattern that I can follow to implement a nice, clean and maintainable RPC mechanism over a TCP socket in Java? RMI is a no-go for me here as the client (Android) doesn't support RMI. I've exhausted all avenues at this stage. The only other option would maybe be a REST service. Any advice would be very helpful.
Thank you very much
Regards
Probably the easiest solution is to loosely follow the path of RMI.
You start out with an interface and an implementation:
interface FooService {
Bar doThis( String param );
String doThat( Bar param );
}
class FooServiceImpl implements FooService {
...
}
You deploy the interface to both sides and the implementation to the server side only.
Then to get a client object, you create a dynamic proxy. Its invocation handler will do nothing else but serialize the service classname, the method name and the parameters and send it to the server (initially you can use an ObjectOutputStream but you can use alternative serialization techniques, like XStream for example).
The server listener takes this request and executes it using reflection, then sends the response back.
The implementation is fairly easy and it is transparent from both sides, the only major caveat being that your services will effectively be singletons.
I can include some more implementation detail if you need, but this is the general idea I would follow if I had to implement something like that.
Having said that, I'd probably search a bit more for an already existing solution, like webservices or something similar.
Update: This is what an ordinary (local) invocation handler would do.
class MyHandler implements InvocationHandler {

    private final Object serviceObject;

    MyHandler(Object serviceObject) {
        this.serviceObject = serviceObject;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        return method.invoke(serviceObject, args);
    }
}
Where serviceObject is your service implementation object wrapped into the handler.
This is what you have to cut in half, and instead of calling the method, you need to send the following to the server:
The full name of the interface (or some other value that uniquely identifies the service interface)
The name of the method.
The names of the parameter types the method expects.
The args array.
The server side will have to:
Find the implementation for that interface (the easiest way is to have some sort of map where the keys are the interface names and the values the implementation singleton instance)
Find the method, using Class.getMethod( name, paramTypes );
Execute the method by calling method.invoke(serviceObject, args); and send the return value back.
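Putting the two halves together, a stripped-down sketch could look like this (class names and the wire format are illustrative only; a real version needs error handling, an exception envelope for the reply, and connection management):

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.net.Socket;
import java.util.Map;

// What goes over the wire for each call; args must themselves be Serializable.
class RemoteCall implements Serializable {
    String interfaceName;
    String methodName;
    Class<?>[] parameterTypes;
    Object[] args;
}

final class RpcClient {
    @SuppressWarnings("unchecked")
    static <T> T proxy(Class<T> iface, String host, int port) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface },
                (p, method, args) -> {
                    RemoteCall call = new RemoteCall();
                    call.interfaceName = iface.getName();
                    call.methodName = method.getName();
                    call.parameterTypes = method.getParameterTypes();
                    call.args = args;
                    try (Socket socket = new Socket(host, port)) {
                        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
                        out.writeObject(call);
                        out.flush();
                        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
                        return in.readObject(); // the return value sent back by the server
                    }
                });
    }
}

final class RpcServer {
    // interface name -> singleton service implementation
    private final Map<String, Object> services;

    RpcServer(Map<String, Object> services) { this.services = services; }

    // Called by the listener for each deserialized request.
    Object dispatch(RemoteCall call) throws Exception {
        Object service = services.get(call.interfaceName);
        Method method = service.getClass().getMethod(call.methodName, call.parameterTypes);
        return method.invoke(service, call.args);
    }
}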
You should look into Protocol Buffers from Google: http://code.google.com/p/protobuf/
This library defines an IDL for generating struct-like classes that can be written to and read from a stream/byte array/etc. It also defines an RPC mechanism using the defined messages.
I've used this library for a similar problem and it worked very well.
RMI is the way to go.
Java RMI is a Java application programming interface that performs the object-oriented equivalent of remote procedure calls (RPC).

How to test Java app operating directly on external API

Coming from the Ruby world, I'm having some problems doing TDD in Java. The biggest issue is when I have an application that just communicates with an external API.
Say I want to just fetch some data from Google Calendar, or 5 tweets from some Twitter user and display it.
In Ruby, I don't have any problems, because I can monkey-patch the API library in tests directly, but I have no such option in Java.
If I think about this in terms of MVC, my model objects are directly accessing the API through some library. The question is, is this bad design? Should I always wrap any API library in some interface, so I can mock/stub it in Java?
Because when I think about this, the only purpose of that interface would be to simulate (please don't kill me for saying this) the monkey-patch. Meaning that any time I use any external resource, I have to wrap each layer in interface that can be stubbed out.
# do I have to abstract everything just to do this in Java?
Twitter.stub!(:search)
Now you might say that I should always abstract away the interface, so I can change the underlying layer to anything else. But if I'm writing twitter app, I'm not going to change it to RSS reader.
Yes, I could add, for example, Facebook, and then it would make sense to have an interface. But when there is no other resource that can be substituted for the one I'm using, then I still have to wrap everything in interfaces to make it testable.
Am I missing something, or is this just a way to test in the Java world?
Using interfaces is just generally good practice in Java. Some languages have multiple inheritance, others have duck typing; Java has interfaces. It's a key feature of the language: it lets me use
different aspects of a class in different contexts and
different implementations of the same contract without changing client code.
So interfaces are a concept you should embrace in general, and then you would reap the benefits in situations like this where you could substitute your services by mock objects.
One of the most important books about Java best practices is Effective Java by Joshua Bloch. I would highly suggest reading it. In this context the most important part is Item 52: Refer to objects by their interfaces. Quote:
More generally, you should favor the use of interfaces rather than classes to refer to objects. If appropriate interface types exist, then parameters, return values, variables, and fields should all be declared using interface types. The only time you really need to refer to an object's class is when you're creating it with a constructor.
And if you take things even further (e.g. when using dependency injection), you aren't even calling the constructor.
One of the key problems of switching languages is that you have to switch the way of thinking too. You can't program language x effectively while thinking in language y. You can't program C effectively without using pointers, Ruby not without duck typing and Java not without Interfaces.
Wrapping the external API is the way I would do this.
So, as you already said, you would have an interface and two classes: the real one and the dummy implementation.
Yes, it may seem unreasonable, because some services really are specific, like Twitter. But this way your build process doesn't depend on external resources. Depending on external libraries isn't all that bad, but having your tests depend on actual data being present or not present out there on the web can mess up the build process.
The easiest way is to wrap the API service with your interface/class pair and use that throughout your code.
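As a concrete sketch (TweetSource and the class names are made up for illustration):

import java.util.List;

// The rest of the application depends only on this interface.
interface TweetSource {
    List<String> latestTweets(String user, int count);
}

// Real implementation delegates to whatever Twitter client library you use.
class TwitterTweetSource implements TweetSource {
    @Override
    public List<String> latestTweets(String user, int count) {
        // call the actual Twitter API here via your client library of choice
        throw new UnsupportedOperationException("omitted in this sketch");
    }
}

// Dummy implementation for tests and offline builds: canned data, no network.
class FakeTweetSource implements TweetSource {
    @Override
    public List<String> latestTweets(String user, int count) {
        return List.of("first canned tweet", "second canned tweet");
    }
}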
I understand that what you want are Mock objects.
As you described it, one of the ways one can generate "test versions" of objects is by implementing a common interface and using it.
However, what you are missing is that you can simply extend the class (provided that it is not declared final) and override the methods that you want to mock. (NB: the possibility of doing that is the reason why it is considered bad form for a library to declare its classes final - it can make testing considerably harder.)
There are a number of Java libraries that aim to facilitate the use of mock objects - you can look at Mockito or EasyMock.
Mockito is handier and closer to your Ruby mocks.
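For example, with Mockito a stub is close to the Ruby one-liner (the TwitterSearch interface here is hypothetical; any interface or non-final class can be mocked):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.Test;

public class TimelineTest {

    // Hypothetical wrapper around the Twitter API used by the code under test.
    interface TwitterSearch {
        List<String> search(String query);
    }

    @Test
    public void displaysSearchResults() {
        TwitterSearch twitter = mock(TwitterSearch.class);
        when(twitter.search("java")).thenReturn(List.of("tweet 1", "tweet 2"));

        // pass 'twitter' into the object under test; here we just assert the stub
        assertEquals(2, twitter.search("java").size());
    }
}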
You can "monkey-patch" an API in Java. The Java language itself does not provide specific means to do it, but the JVM and the standard libraries do. In Ruby, developers can use the Mocha library for that. In Java, you can use the JMockit library (which I created because of limitations in older mocking tools).
Here is an example JMockit test, equivalent to the test_should_calculate_value_of_unshipped_orders test available in Mocha documentation:
@Test
public void shouldCalculateValueOfUnshippedOrders()
{
final Order anOrder = new Order();
final List<Order> orders = asList(anOrder, new Order(), new Order());
new NonStrictExpectations(Order.class)
{{
Order.findAll(); result = orders;
anOrder.getTotalCost(); result = 10;
}};
assertEquals(30, Order.unshippedValue());
}

Would you use DI or a factory?

My application stores files, and you have the option of storing the files on your own server or using S3.
I defined an interface:
interface FileStorage {
}
Then I have 2 implementations, S3FileStorage and LocalFileStorage.
In the control panel, the administrator chooses which FileStorage method they want, and this value is stored in a SiteConfiguration object.
Since the FileStorage setting can be changed while the application is already running, would you still use Spring's DI to do this?
Or would you just do this in your code:
FileStorage fs = null;
switch (siteConfig.getFileStorageMethod()) {
    case S3:
        fs = new S3FileStorage();
        break;
    case Local:
        fs = new LocalFileStorage();
        break;
}
Which one makes more sense?
I believe you can use DI with Spring at runtime, but I haven't read much about it at this point.
I would inject a factory, and let clients request the actual services from it at runtime. This will decouple your clients from the actual factory implementation, so you can have several factory implementations as well, for example, for testing.
You can also use some kind of proxy object with several strategies behind it instead of the factory, but that can cause problems if a sequence of calls (like open, write, close for file storage) from one client cannot be served by different implementations.
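A sketch of the injected-factory idea (the accessor on SiteConfiguration and the enum constants are assumed from the question):

// Injected once; consults the live SiteConfiguration on every call, so a
// runtime change of the setting is picked up by the next request.
class FileStorageFactory {

    private final SiteConfiguration siteConfig;

    FileStorageFactory(SiteConfiguration siteConfig) {
        this.siteConfig = siteConfig;
    }

    FileStorage current() {
        switch (siteConfig.getFileStorageMethod()) {
            case S3:
                return new S3FileStorage();
            case Local:
            default:
                return new LocalFileStorage();
        }
    }
}

// Client code receives the factory via DI and asks for storage when it needs it:
// FileStorage fs = fileStorageFactory.current();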
I would still use Dependency Injection here. If it can be changed at runtime, you should inject it using setter injection, rather than constructor injection. The benefit of using any dependency injection is that you can easily add new implementations of the interface without changing the actual code.
DI without question. Or would you prefer to update your factory code every time you create/update/delete an implementation? IMO, if you're programming to an interface, then you shouldn't bootstrap your implementations, however many layers deep it actually occurs.
Also, DI isn't synonymous with Spring et al. It's as simple as having a constructor with the abstracted interface as an argument, i.e. public FileApp(FileStorage fs) { }.
FYI, another possibility is a proxy.
