In C++ or C, you can do things like this:
#ifdef WINAPI
void showWindow(int cmdShow);
#endif
But in Java, how can I define methods that will only be compiled when the user has enabled a library? I'm making a cross-platform application that uses certain native features that have not yet been abstracted by the JVM.
Also, I often make constructors that allow building my class from an object coming from some library. In that case, once the constructor is there, it forces the user to have that library. Instead, I'd like it to be enabled only when the user has that library.
Java does not have the concept of macros or templates. Instead it has reflection and generics. In your case, you would use reflection. The idea is to code to interfaces and then at runtime figure out which implementation to use. If no suitable/custom implementation is found you need to fall back to some default (possibly a no-op implementation if you want nothing to happen by default).
The best way to support such architecture is to provide an entry point to your hierarchy of interfaces, i.e., a factory. The entry point will then provide to all clients the implementations they need. The entry point can use reflection to figure out which implementation you want, e.g.,
public final class LibraryManager {
public static LibraryInterface find(String url) { ... }
}
The LibraryManager above figures out via reflection which implementation of LibraryInterface you want to obtain at runtime. The url can be simply the fully qualified class name of the required implementation of LibraryInterface, e.g., com.my.pack.MyLibraryInterfaceImpl.
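A minimal sketch of such a lookup, assuming the implementation class has a public no-arg constructor (NoOpLibraryImplementation is a hypothetical fallback, not part of the example above):

public final class LibraryManager {

    private LibraryManager() {}

    public static LibraryInterface find(String url) {
        try {
            // 'url' is the fully qualified class name, e.g. "com.my.pack.MyLibraryInterfaceImpl"
            Class<?> clazz = Class.forName(url);
            return (LibraryInterface) clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // the library (or its implementation class) is not on the classpath:
            // fall back to a default, e.g. a no-op implementation
            return new NoOpLibraryImplementation();
        }
    }
}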
To see this in practice, take a look at JDBC's DriverManager: you get an implementation of Connection by providing the DriverManager.getConnection method with a JDBC URL. Behind the scenes, DriverManager uses reflection to find the right driver and returns the implementation needed. If the driver library for the given URL is not available, you will get an exception.
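For example, using the standard JDBC API (the URL and credentials here are placeholders):

// DriverManager locates a suitable registered driver for the given URL
Connection connection = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/mydb", "user", "password");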
In your case, the only modification you need to make to that pattern is to provide some default implementation if no library is specified. If the implementations rely on 3rd party libraries you are going to need to write appropriate adapters that use these, etc.
Note that in many cases you would actually return a factory to your library implementation so you can create many instances of the library objects. This works exactly the same way as above except you return some LibraryFactoryInterface instead.
Finally, if you use some kind of IoC or DI framework like Spring, you can define your implementation factory at configuration time to be injected in your application. A typical example and an alternative to DriverManager is DataSource. It's very common in a Spring application to define your DataSources in the configuration file. Spring will take care of wiring the DataSource into the objects that need to connect to the database.
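As a sketch, with Spring's Java configuration this could look as follows (DriverManagerDataSource is a simple Spring-provided DataSource implementation; the connection details are placeholders):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class PersistenceConfig {

    @Bean
    public DataSource dataSource() {
        // Spring wires this DataSource into any bean that needs one
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/mydb");
        ds.setUsername("user");
        ds.setPassword("password");
        return ds;
    }
}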
Related
I maintain a Java library that is used across many different JVM environments, including those that use alternative languages like Groovy.
Within the library, in an upcoming feature branch, there is a class similar to the following:
public class SomeData implements Map<String,Object> {
// ...
@Override
public String toString() {
// custom implementation here
}
}
The overridden toString implementation is there specifically to prevent certain security information in the object's values from being accidentally exposed in application logs or System.out.println calls. The data is still necessary, however, so it still needs to exist in the object's name/value pairs.
However, if an application developer using the library chooses to write the following in Groovy, the Groovy GString does not honor the overridden implementation:
def someData = getSomeData()
println "Hi, I have ${someData}"
As discussed in this SO answer, this is because GString bypasses someData#toString() and uses Groovy's InvokerHelper instead, presumably iterating over the key/value pairs and printing them directly.
This is very undesirable because of the security implications this could have.
There are many reasons why SomeData implements Map<String,Object>; they are not discussed here, for brevity. Nor is there any desire to change the core library API just to appease this behavior in the Groovy programming language.
In short, it is unreasonable to expect a library implementation to change to make this safer for Groovy environments alone, when it is already safe by the existing design.
Is there a way to disable this feature for GString for just instances of the SomeData class?
Is there a reason why GString doesn't check whether the method is overridden before attempting its custom key/value rendering logic?
What workaround, if any, exists to get this behavior automatically, instead of being forced to tell Groovy users, "Sorry, you need to remember to always call things like this:"
println "Hi, I have ${someData.toString()}"
It's incredibly easy to forget to do this, so any solution should ideally be automatic or enabled via global configuration settings somewhere. Are there any options like this?
Is there an "official" way to "unwrap" (i.e., obtain the non-enhanced class) for classes enhanced by Guice AOP?
So far, I detect these classes by looking for the string "$$EnhancerByGuice$$" in the class name and - if it is present - reverting to the superclass (Guice AOP works on classes using inheritance).
I'd prefer something that does not break when Guice decides to change this suffix string (which is by no means part of any API or contract).
As far as I can tell, there is no official way. There is an open issue to address it, but given the prioritization I doubt it will happen. In the meantime, if you want to avoid breaking when Guice decides to change the suffix string, add a unit test that proves you can detect an enhanced class.
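A small helper along the lines of what you already do might look like this (the suffix below is the current Guice convention, not a contract, which is exactly why the unit test is worth having; the helper name is made up):

public static Class<?> unwrap(Class<?> clazz) {
    // Guice AOP enhances classes by subclassing, so walking up the
    // hierarchy (defensively, in a loop) recovers the original class
    while (clazz.getName().contains("$$EnhancerByGuice$$")) {
        clazz = clazz.getSuperclass();
    }
    return clazz;
}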
After coming from the Ruby world, I'm having a few problems doing TDD in Java. The biggest issue arises when I have an application that just communicates with an external API.
Say I want to just fetch some data from Google Calendar, or 5 tweets from some Twitter user and display it.
In Ruby, I don't have any problems, because I can monkey-patch the API library in tests directly, but I have no such option in Java.
If I think about this in terms of MVC, my model objects are directly accessing the API through some library. The question is, is this bad design? Should I always wrap any API library in some interface, so I can mock/stub it in Java?
Because when I think about this, the only purpose of that interface would be to simulate (please don't kill me for saying this) the monkey-patch. Meaning that any time I use any external resource, I have to wrap each layer in an interface that can be stubbed out.
# do I have to abstract everything just to do this in Java?
Twitter.stub!(:search)
Now you might say that I should always abstract away the interface so I can swap the underlying layer for anything else. But if I'm writing a Twitter app, I'm not going to change it to an RSS reader.
Yes, I could add, for example, Facebook, and then it would make sense to have an interface. But when there is no other resource that can be substituted for the one I'm using, then I still have to wrap everything in interfaces to make it testable.
Am I missing something, or is this just a way to test in the Java world?
Using interfaces is just generally good practice in Java. Some languages have multiple inheritance, others have duck typing; Java has interfaces. It's a key feature of the language: it lets me use
different aspects of a class in different contexts and
different implementations of the same contract without changing client code.
So interfaces are a concept you should embrace in general, and then you would reap the benefits in situations like this where you could substitute your services by mock objects.
One of the most important books about Java best practices is Effective Java by Joshua Bloch. I highly suggest you read it. In this context the most important part is Item 52: Refer to objects by their interfaces. Quote:
More generally, you should favor the use of interfaces rather than classes to refer to objects. If appropriate interface types exist, then parameters, return values, variables, and fields should all be declared using interface types. The only time you really need to refer to an object's class is when you're creating it with a constructor.
And if you take things even further (e.g. when using dependency injection), you aren't even calling the constructor.
One of the key problems of switching languages is that you have to switch your way of thinking too. You can't program in language x effectively while thinking in language y. You can't program C effectively without using pointers, Ruby not without duck typing, and Java not without interfaces.
Wrapping the external API is the way I would do this.
So, as you already said, you would have an interface and two classes: the real one and the dummy implementation.
Yes, it may seem unreasonable, since some services are indeed specific, like Twitter. But this way your build process doesn't depend on external resources. Depending on external libraries isn't all that bad, but having your tests depend on actual data being present or not present out there on the web can mess up the build process.
The easiest way is to wrap the API service with your interface/class pair and use that throughout your code.
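A sketch of such a pair, using hypothetical names for a Twitter search (the interface is all your application code sees; tests substitute the stub):

import java.util.Arrays;
import java.util.List;

public interface TweetSource {
    List<String> latestTweets(String user, int count);
}

// production implementation, delegating to the real client library
public class TwitterTweetSource implements TweetSource {
    @Override
    public List<String> latestTweets(String user, int count) {
        // ... call the actual Twitter API here ...
        throw new UnsupportedOperationException("not shown");
    }
}

// test implementation with canned data
public class StubTweetSource implements TweetSource {
    @Override
    public List<String> latestTweets(String user, int count) {
        return Arrays.asList("tweet 1", "tweet 2");
    }
}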
I understand that what you want are mock objects.
As you described it, one of the ways one can generate "test versions" of objects is by implementing a common interface and using it.
However, what you are missing is to simply extend the class (provided that it is not declared final) and override the methods that you want to mock. (NB: the possibility of doing that is the reason why it is considered bad form for a library to declare its classes final - it can make testing considerably harder.)
There are a number of Java libraries that aim to facilitate the use of mock objects - you can look at Mockito or EasyMock.
Mockito is handier and closer to your Ruby mocks.
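For illustration, a minimal Mockito test against a hypothetical TwitterClient interface (mock, when, and verify are Mockito's standard API):

import static java.util.Arrays.asList;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

// given an interface such as: interface TwitterClient { List<String> search(String query); }
@Test
public void shouldReturnCannedSearchResults() {
    TwitterClient client = mock(TwitterClient.class);
    when(client.search("java")).thenReturn(asList("tweet 1", "tweet 2"));

    // code under test uses the mock exactly as it would use the real client
    assertEquals(2, client.search("java").size());
    verify(client).search("java");
}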
You can "monkey-patch" an API in Java. The Java language itself does not provide specific means to do it, but the JVM and the standard libraries do. In Ruby, developers can use the Mocha library for that. In Java, you can use the JMockit library (which I created because of limitations in older mocking tools).
Here is an example JMockit test, equivalent to the test_should_calculate_value_of_unshipped_orders test available in Mocha documentation:
@Test
public void shouldCalculateValueOfUnshippedOrders()
{
final Order anOrder = new Order();
final List<Order> orders = asList(anOrder, new Order(), new Order());
new NonStrictExpectations(Order.class)
{{
Order.findAll(); result = orders;
anOrder.getTotalCost(); result = 10;
}};
assertEquals(30, Order.unshippedValue());
}
I need some advice on the scenarios in which a dynamic proxy proves more useful than a regular proxy.
I've put a lot of effort into learning how to use dynamic proxies effectively. For this question, set aside that frameworks like AspectJ can perform basically everything we try to achieve with dynamic proxies, or that, e.g., CGLIB can be used to address some of the shortcomings of dynamic proxies.
Use cases
Decorators - e.g., perform logging on method invocation, or cache return values of complex operations
Uphold contract - That is, making sure parameters are within accepted range and return types conform to accepted values.
Adapter - Saw some clever article somewhere describing how this is useful. I rarely come across this design pattern though.
Are there others?
Dynamic proxy advantages
Decorator: Log all method invocations, e.g.,
// 'target' here is the wrapped object, held by the invocation handler
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    System.out.println("before method " + method.getName());
    return method.invoke(target, args);
}
The decorator pattern is definitely useful, as it allows adding side effects to all of a proxy's methods (although this behaviour is a textbook example of using aspects).
Contract: In contrast to a regular proxy, we need not implement the full interface. E.g.,
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    if ("getValues".equals(method.getName())) {
        // check or transform parameters and/or return types, e.g.,
        return RangeUtils.validateResponse(method.invoke(target, args));
    }
    if ("getVersion".equals(method.getName())) {
        // another example, with no delegation
        return 3;
    }
    // delegate all other methods unchanged
    return method.invoke(target, args);
}
The contract, on the other hand, only gives the benefit of avoiding the need to implement a complete interface. Then again, renaming proxied methods would silently break the dynamic proxy, since methods are matched by name.
Conclusion
So what I see here is one real use case, and one questionable use case. What's your opinion?
There are a number of potential uses for dynamic proxies beyond what you've described -
Event publishing - on method x(), transparently call y() or send message z.
Transaction management (for db connections or other transactional ops)
Thread management - thread out expensive operations transparently.
Performance tracking - timing operations, checked by a CountDownLatch, for example.
Connection management - thinking of APIs like Salesforce's Enterprise API that require clients of their service to start a session before executing any operations.
Changing method parameters - in case you want to pass default values for nulls, if that's your sort of thing.
Those are just a few options in addition to validation and logging like you've described above. FWIW, JSR 303, a bean validation specification, has an AOP-style implementation in Hibernate Validator, so you don't need to implement it for your data objects specifically. Spring framework also has validation built in and has really nice integration with AspectJ for some of the stuff described here.
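For reference, a complete minimal logging proxy tying these pieces together (the Service interface and all names here are made up for the example; Proxy.newProxyInstance is the standard JDK entry point):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class LoggingProxyDemo {

    interface Service {
        String fetch(String id);
    }

    @SuppressWarnings("unchecked")
    static <T> T withLogging(final T target, Class<T> type) {
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(),
                new Class<?>[] { type },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args)
                            throws Throwable {
                        System.out.println("before method " + method.getName());
                        return method.invoke(target, args); // delegate to the real object
                    }
                });
    }

    public static void main(String[] args) {
        Service real = new Service() {
            public String fetch(String id) { return "value-" + id; }
        };
        Service logged = withLogging(real, Service.class);
        System.out.println(logged.fetch("42")); // logs the call, then prints "value-42"
    }
}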
Indeed, AOP is what benefits most from dynamic proxies. That's because you can create a dynamic proxy around an object that you don't know in advance.
Another useful aspect of a dynamic proxy is when you want to apply the same operation to all methods. With a static proxy you'd need a lot of duplicated code (each proxied method would contain the same call, then delegate to the proxied object); a dynamic proxy minimizes this.
Also note that Adapter and Decorator are separate patterns. They look like the Proxy pattern in the way they are implemented (by object composition), but they serve a different purpose:
the decorator pattern allows you to have multiple concrete decorators, thus adding functionality at runtime
the adapter pattern is meant to adapt an object to a non-matching interface. The best example I can think of is EnumerationIterator - it adapts an Enumeration to the Iterator interface.
Another use case I can think of is to dynamically implement interfaces at runtime, which is the way some frameworks work.
Take for instance Retrofit, a Java library for consuming REST services. You define a Java interface that reflects the operations available in the REST API and decorate the methods with annotations to configure specifics of the request. It's easy to see that in this case all methods defined in the interface must execute an HTTP request against some server, transform the method arguments into request parameters, and then parse the response into a Java object defined as the method's return type.
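A sketch of such an interface (the @GET/@Path annotations, Call, and Retrofit.create are Retrofit's actual API; the endpoint and the Repo type are hypothetical):

import java.util.List;
import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.http.GET;
import retrofit2.http.Path;

public interface GitHubService {
    // Retrofit implements this interface at runtime with a dynamic proxy;
    // each call is turned into an HTTP request against the base URL
    @GET("users/{user}/repos")
    Call<List<Repo>> listRepos(@Path("user") String user);
}

// usage (in practice a converter, e.g. for JSON, would be registered on the builder):
Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://api.github.com/")
        .build();
GitHubService service = retrofit.create(GitHubService.class);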
My application stores files, and you have the option of storing the files on your own server or using S3.
I defined an interface:
interface FileStorage {
}
Then I have 2 implementations, S3FileStorage and LocalFileStorage.
In the control panel, the administrator chooses which FileStorage method they want, and this value is stored in a SiteConfiguration object.
Since the FileStorage setting can be changed while the application is already running, would you still use spring's DI to do this?
Or would you just do this in your code:
FileStorage fs = null;
switch (siteConfig.getFileStorageMethod()) {
    case S3:
        fs = new S3FileStorage();
        break; // without break, execution would fall through to the next case
    case Local:
        fs = new LocalFileStorage();
        break;
}
Which one makes more sense?
I believe you can use DI with Spring at runtime, but I haven't read much about it at this point.
I would inject a factory, and let clients request the actual services from it at runtime. This will decouple your clients from the actual factory implementation, so you can have several factory implementations as well, for example, for testing.
You can also use some kind of proxy object with several strategies behind it instead of the factory, but that can cause problems if a sequence of calls (like open, write, close for file storage) from one client cannot be served by different implementations.
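A sketch of such an injected factory, using the types from the question (getFileStorageMethod is an assumed accessor on SiteConfiguration):

public class FileStorageFactory {

    private final SiteConfiguration siteConfig;

    public FileStorageFactory(SiteConfiguration siteConfig) {
        this.siteConfig = siteConfig;
    }

    public FileStorage get() {
        // the setting is re-read on every call, so changes made in the
        // control panel while the application is running take effect
        switch (siteConfig.getFileStorageMethod()) {
            case S3:
                return new S3FileStorage();
            default:
                return new LocalFileStorage();
        }
    }
}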
I would still use Dependency Injection here. If it can be changed at runtime, you should inject it using setter injection, rather than constructor injection. The benefit of using any dependency injection is that you can easily add new implementations of the interface without changing the actual code.
DI, without question. Or would you prefer to revisit your factory code every time you create/update/delete an implementation? IMO, if you're programming to an interface, then you shouldn't bootstrap your implementations, however many layers deep it actually occurs.
Also, DI isn't synonymous with Spring, et al. It can be as simple as a constructor taking the abstracted interface as an argument, i.e. public FileApp(FileStorage fs) { }.
FYI, another possibility is a proxy.