Need help improving a tightly coupled design - java

I have an in-house enterprise application (EJB2) that works with a certain BPM vendor. The current implementation of the in-house application involves pulling in an object that is only exposed by the vendor's API and making changes to it through the exposed methods in the API.
I'm thinking that I need to somehow map an internal object to this external one, but that seems too simple and I'm not quite sure of the best strategy to go about doing this. Can anyone shed some light on how they have handled such a situation in the past?
I want to "black box" this vendor's software so I can replace it easily if needed. What would be the best approach from a design point of view to somehow map an internal object to this exposed API object? Keep in mind that my in-house app needs to talk to the API still, so there is going to be some dependency between the two, but I want to reduce it so I can also test in isolation from this software using junit.
Thanks,
Jason

Create an interface for the service layer; internally, all your code can work with that. Then make a class that implements that interface and calls the third-party API methods, acting as the API facade.
i.e.
interface IAPIEndpoint {
    MyDomainDataEntity getData();
}

class MyAPIEndpoint implements IAPIEndpoint {
    public MyDomainDataEntity getData() {
        MyDomainDataEntity dataEntity = new MyDomainDataEntity();
        // Call the third party api and fill it
        return dataEntity;
    }
}
It is always a good idea to interface out third party apis so you don't get their funk invading your app domain, and you can swap out as needed. You could make another class implementation that uses a different service entirely.
To use it in code you just call
IAPIEndpoint endpoint = new MyAPIEndpoint(); // or get it specific to the lang you are using.
Making your stuff based on interfaces when it spans multiple implementations is the way to go. It works great for TDD as well so you can just swap out the interface to a local test one that can inspect your domain code entirely separate from the third party api.
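For JUnit, one way to do that swap is a hand-written stub of the same interface (a sketch; MyDomainDataEntity and its contents are assumed from the snippet above):

class StubAPIEndpoint implements IAPIEndpoint {
    public MyDomainDataEntity getData() {
        // No vendor call at all: return canned test data so the domain code
        // can be exercised in isolation from the BPM software.
        MyDomainDataEntity dataEntity = new MyDomainDataEntity();
        return dataEntity;
    }
}

Any test can then pass a StubAPIEndpoint wherever production code expects an IAPIEndpoint, while the real MyAPIEndpoint stays the only class that touches the vendor API.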

Abstraction; implement a DAL which will provide the transition from internal to external and back.
Then, if you switched vendors, your internals would remain valuable and you could swap out the vendor-specific code, assuming the vendors provide the same functionality and their data types can be mapped to each other.

I will be the black sheep here and advocate for the YAGNI principle. The problem is that if you do an abstraction layer now, it will look so close to the third party API that it will just be a redundant layer. Since you don't know now what a hypothetical future second vendor's API will look like, you don't know what differences you need to account for, and any future port is likely to require a rework for those unforeseen differences anyway.
If you need a test framework, my recommendation is to make your own test implementation using the same API as the BPM vendor. Even better, almost all reputable API providers provide some sort of sandbox mode for testing. If they don't, you should ask for one.

Related

In modern application design, how do you map between TransferObject and BusinessObject?

With organizations which are slow to adapt to modern technology finally junking EJBs and getting ready to move to Spring Boot, microservices, REST and Angular, there are some questions about application design. One is about TransferObjects and BusinessObjects.
When the call comes to the REST Controller, is it still popular to populate a TO (POJO) and then make a Service call, which in turn populates a BusinessObject and then calls a Repository service?
OR
At the REST Controller layer, do we directly populate the BO and send it to the Service? (This does not make sense to me, because a BO is populated only during the execution of business logic.)
If nowadays it's still Option 1, then how do we avoid writing two nearly identical POJO classes in most cases (in order to use BeanUtils.copyProperties()), with the BO decorated with @Id, @Column etc.?
To elaborate on @Turing85's comments...
Option 1 usually (see the end of my answer) makes infinitely more sense. It's a question of responsibility (purpose) and change, applied to the two logical components you refer to (a REST API and a repository / system service):
Responsibility: a REST service cares about working with its callers, so ideally when designing a REST API you should be involving someone from the client side (client as in caller), because if the API doesn't work for them it's not going to be an effective API. On the other hand, repositories are somewhat self-centered, and may need to consider things that are of no interest to API callers (and vice versa).
Change: if you pay attention to design principles, like SOLID, you'll know that each part of a system should do one job - as a way of limiting the reasons why it needs to change (see: SRP). Trying to use one object across both outward-facing APIs and inward-facing repositories is asking for trouble because it's trying to do too much - it's trying to help solve problems in two very different parts of the wider solution, and they both have very different change drivers working against them. Turing85's comment about the persistence layer stems from the same idea.
"Option 1 usually makes infinitely more sense":
One case where the REST API's objects will / can bear a very close resemblance to those that hit the actual repository (or even be reused, I guess) is when the REST API is a System API - i.e. a dedicated façade / proxy to the repository. In this case, the System API is largely driven by the repository i.e. the main change driver is just the repository.
After researching a bit I agree, keeping things simple will result in simpler code. I found a nice simple way to take care of this manual work.
https://www.baeldung.com/entity-to-and-from-dto-for-a-java-spring-application
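For what it's worth, a minimal sketch of that manual work using the BeanUtils.copyProperties mentioned in the question (the User/UserDto pair is hypothetical; copyProperties only copies properties whose names and types match, so anything beyond that still needs explicit mapping):

import org.springframework.beans.BeanUtils;

// Hypothetical entity/DTO pair with matching field names.
class User {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class UserDto {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class UserMapper {

    public static UserDto toDto(User entity) {
        UserDto dto = new UserDto();
        BeanUtils.copyProperties(entity, dto); // copies same-named, same-typed properties
        return dto;
    }

    public static User toEntity(UserDto dto) {
        User entity = new User();
        BeanUtils.copyProperties(dto, entity);
        return entity;
    }
}

The linked article takes a similar approach using ModelMapper instead of BeanUtils.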

JAVA refactoring using reflection

I am using a 3rd party API in a few Java applications. They have updated a few things in the latest version. We will have to update to the latest version, and it needs corresponding changes in our code.
The changes are,
1) The interface and abstract class names which we used to implement/extend have been changed. Also, the method names have been changed.
These are all just name changes.
2) We need to annotate the classes which implement these interfaces with @Service.
3) Then we need to add some new Java files and a property file.
4) We also have an abstract class which extends the 3rd party abstract class, and then there are many concrete classes. So, a few methods from the 3rd party abstract class are overridden in our base abstract class, and a few are overridden in the concrete classes.
I can do the refactoring through the Eclipse IDE, but we don't prefer this.
I'd like this to be completely automated, like running a script.
I tried using Java reflection to find all the concrete classes of an abstract class and rename the methods. Still, it looks risky.
Is there any other better approach?
It depends on how much code you need to change, how long each step takes, and how many times you repeat the same refactoring.
If it is only a few hundred classes and/or simpler refactorings like rename class/interface can do most of the work, then do it by hand.
Otherwise if you really want to, you can try to write rules in a tool like AutoRefactor: https://github.com/JnRouvignac/AutoRefactor
Disclaimer: I am the author of AutoRefactor.
I remember reading somewhere that a programmer is someone who would rather spend 12 hours writing a script to automate a manual task than to spend 20 minutes actually doing that task.
I understand why you want to automate this - the API you're using is making life hard for its clients by renaming things. It's unusual for APIs to break compatibility with naming only - are you sure it's as simple as that?
My strong recommendation is to just bite the bullet and manually refactor. It will almost certainly take less time than automating the process, you'll identify further opportunities to improve your own application's design, and it's unlikely you will ever need to use the refactoring script again.
Unfortunately, I do not know the exact details of your situation, but I can point out some principles which, in my experience, can simplify life in the future.
In short, if you are using any 3rd party API, try to minimize its propagation into your code. Hide the 3rd party code behind your own abstractions (interfaces) using patterns like Adapter, Facade etc.
Then, in case the 3rd party code changes, you will make changes in only one place. This approach also gives you extra freedom: if you decide to use another 3rd party API, switching will be simple, because the major piece of your code will not be touched. It is also useful for testing: you can mock the actual 3rd party functionality.
For example, suppose your project needs persistent storage. You can start by declaring an interface like this:
interface IStorage {
    void save(Model m);
    Model load(int id);
}
This will allow you to:
- Defer the decision about the storage provider (maybe it will be MySQL, MongoDB, or simply an XML file on disk) until later.
- Easily substitute one 3rd party API for another (for example, change from file storage to a DB).
- Test your business logic easily by mocking this interface instead of using real storage.
- Speed up development in case some modules (which other developers have to write) require working storage (they will just use the IStorage interface as if it were already implemented).
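A sketch of what could sit behind that interface (VendorStorageClient is a hypothetical stand-in for whatever the vendor exposes, and Model is assumed to have a getId(); the in-memory variant is what the tests would use):

// Production adapter: the only class that knows about the 3rd party API.
class VendorStorage implements IStorage {
    private final VendorStorageClient client = new VendorStorageClient(); // hypothetical vendor class

    public void save(Model m) {
        client.store(m.getId(), m);      // hypothetical vendor call
    }

    public Model load(int id) {
        return (Model) client.fetch(id); // hypothetical vendor call
    }
}

// In-memory fake for tests: no vendor code involved at all.
class InMemoryStorage implements IStorage {
    private final java.util.Map<Integer, Model> data = new java.util.HashMap<Integer, Model>();

    public void save(Model m) { data.put(m.getId(), m); }

    public Model load(int id) { return data.get(id); }
}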

How many GWT services

Starting a new GWT application and wondering if I can get some advice from someone's experience.
I have a need for a lot of server-side functionality through RPC services...but I am wondering where to draw the line.
I can make a service for every little call or I can make fewer services which handle more operations.
Let's say I have Customer, Vendor and Administration services. I could make 3 services or a service for each function in each category.
I noticed that much of the service implementation does not provide compile-time help and is at times troublesome to get going, but it provides good modularity. When I have a larger service, I don't have the modularity I described, but I don't have the service creation issues, and I reduce the entries in my web.xml file.
Is there a resource issue with using a lot of services? What is the best practice to determine what level of granularity to use?
In my opinion, you should make an RPC service for "logical" things.
In your example:
one for customers, another for vendors, and a third one for admin.
That way, you keep the services grouped by meaning, and you will have only a few lines to maintain in the web.xml file (and that is good news :-)
More seriously, RPC services are usually wrappers that call database stuff, so you could even make a single 'MagicBlackBoxRpc' with a single web.xml entry and thousands of operations!
But making a separate RPC service for admin operations, like you suggest, seems like a good thing.
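As a sketch, one such "logical" service with standard GWT-RPC plumbing (the Customer type and method names are made up):

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// One service per logical area: every customer-related call lives here.
@RemoteServiceRelativePath("customerService")
public interface CustomerService extends RemoteService {
    Customer getCustomer(long id);
    void saveCustomer(Customer customer);
}

GWT-RPC also requires the matching CustomerServiceAsync interface on the client side, and each such service needs only one servlet entry in web.xml, which keeps that file small.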
Read general advice on "how big should a class be?", which is available in any decent software engineering book.
In my opinion:
One class = One Subject (i.e. a group of functions or behaviours that are related)
A class should not deal with more than one subject. For example:
Class PersonDao -> Subject: interface between the DB and Java code.
It WILL NOT:
- cache Person instances
- update fields automatically (for example, update the field 'lastModified')
- find the database
Why?
Because for all these other things, there will be other classes doing it! Respectively:
- a cache around the PersonDao is concerned with the efficient storage of information to avoid hitting the DB more often than necessary
- the Service class which uses the DAO is responsible for modifying anything that needs to be modified automagically.
- Finding the database is the responsibility of the DataSource (usually part of a framework like Spring), and your DAO should NOT be worried about that. It's not part of its subject.
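A rough sketch of that split (the class and field names are purely illustrative):

// Subject: moving Person rows between the DB and Java code, nothing else.
public class PersonDao {
    private final javax.sql.DataSource dataSource; // located/configured by the framework, not by the DAO

    public PersonDao(javax.sql.DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public Person findById(long id) {
        // plain JDBC / ORM lookup goes here
        return null;
    }

    public void save(Person person) {
        // plain JDBC / ORM insert or update goes here
    }
}

// Subject: business rules around persons; the "automagic" updates live here, not in the DAO.
public class PersonService {
    private final PersonDao personDao;

    public PersonService(PersonDao personDao) {
        this.personDao = personDao;
    }

    public void update(Person person) {
        person.setLastModified(new java.util.Date()); // illustrative field
        personDao.save(person);
    }
}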
TDD is the answer
The need for this kind of separation becomes really clear when you do TDD (Test-Driven Development). Try to do TDD on bad code where a single class does all sorts of things! You can't even get started with one unit test! So this is my final hint: use TDD and that will tell you how big a class should be.
I think the thing to optimize for is that you can accomplish a result in one round trip to the server. I have an ad-hoc collection of methods on my service object, one for each situation the client finds itself in when it has to get something done. You do not want the client to RPC to the server several times in a row while the user is sitting there waiting.
REST makes things orthogonal, but orthogonality has a cost: there is a reason that the frequently used verbs in languages are irregular. In terms of maintaining a clean orthogonal structure for your app, make sure your schema is well designed. That is, each class should have semantics orthogonal to those of the other classes. When the semantics of each RPC call can be stated cleanly in the schema, there will be no confusion as to what they mean, even if they aren't REST-fully ideal.

Java Interfaces Methodology: Should every class implement an interface?

I've been programming in Java for a few courses in the University and I have the following question:
Is it methodologically accepted that every class should implement an interface? Is it considered bad practice not to do so? Can you describe a situation where it's not a good idea to use interfaces?
Edit: Personally, I like the notion of using interfaces for everything as a methodology and habit, even if it's not clearly beneficial. Eclipse automatically creates a class file with all the methods, so it doesn't waste any time anyway.
You don't need to create an interface if you are not going to use it.
Typically you need an interface when:
Your program will provide several implementations for your component. For example, a default implementation which is part of your code, and a mock implementation which is used in a JUnit test. Some tools automate creating a mock implementation, like for instance EasyMock.
You want to use dependency injection for this class, with a framework such as Spring or the JBoss Micro-Container. In this case it is a good idea to specify the dependencies from one class with other classes using an interface.
Following the YAGNI principle, a class should implement an interface only if you really need it. Otherwise, what do you gain from it?
Edit: Interfaces provide a sort of abstraction. They are particularly useful if you want to interchange between different implementations (many classes implementing the same interface). If it is just a single class, then there is no gain.
No, it's not necessary for every class to implement an interface. Use interfaces only if they make your code cleaner and easier to write.
If your program has no current need to have more than one implementation for a given class, then you don't need an interface. For example, in a simple chess program I wrote, I only needed one type of Board object. A chess board is a chess board is a chess board. Making a Board interface and implementing it would have just required more code to write and maintain.
It's so easy to switch to an interface if you eventually need it.
Every class does implement an interface (i.e. contract) insofar as it provides a non-private API. Whether you should choose to represent the interface separately as a Java interface depends on whether the implementation is "a concept that varies".
If you are absolutely certain that there is only one reasonable implementation then there is no need for an interface. Otherwise an interface will allow you to change the implementation without changing client code.
Some people will shout "YAGNI", assuming that you have complete control over changing the code should you discover a new requirement later on. Other people will be justly afraid that they will need to change the unchangeable - a published API.
If you don't implement an interface (and use some kind of factory for object creation) then certain kinds of changes will force you to break the Open-Closed Principle. In some situations this is commercially acceptable, in others it isn't.
Can you describe a situation where it's not a good idea to use interfaces?
In some languages (e.g. C++, C#, but not Java) you can get a performance benefit if your class contains no virtual methods.
In small programs, or applications without published APIs, you might see a small cost to maintaining separate interfaces.
If you see a significant increase in complexity due to separating interface and implementation then you are probably not using interfaces as contracts. Interfaces reduce complexity. From the consumer's perspective, components become commodities that fulfil the terms of a contract instead of entities that have sophisticated implementation details in their own right.
Creating an interface for every class is unnecessary. Some commonly cited reasons include mocking (unneeded with modern mocking frameworks like Mockito) and dependency injection (e.g. Spring, also unneeded in modern implementations).
Create an interface if you need one, especially to formally document public interfaces. There are a couple of nifty edge cases (e.g. marker interfaces).
For what it's worth, on a recent project we used interfaces for everything (both DI and mocking were cited as reasons) and it turned out to be a complete waste that added a lot of complexity - it was just as easy to add an interface in the rare cases where we actually needed to mock something out. In the end, I'm sure someone will wind up going in and deleting all of the extraneous interfaces some weekend.
I do notice that C programmers first moving to Java tend to like lots of interfaces ("it's like headers"). The current version of Eclipse supports this, by allowing control-click navigation to generate a pop-up asking for interface or implementation.
To answer the OP's question in a very blunt way: no, not all classes need to implement an interface. Like for all design questions, this boils down to one's best judgment. Here are a few rules of thumb I normally follow:
- Purely functional objects probably don't need to (e.g. Pattern, CharMatcher – even though the latter does implement Predicate, it is secondary to its core function).
- Pure data holders probably don't need to (e.g. LogRecord, Locale).
- If you can envision a different implementation of a given functionality (say, an in-memory Cache vs. a disk-based Cache), try to isolate the functionality into an interface. But don't go too far trying to predict the future either.
- For testing purposes, it's very convenient when classes that do I/O or start threads are easily mockable, so that users don't pay a penalty when running their tests.
- There's nothing worse than an interface that leaks its underlying implementation. Pay attention to where you draw the line and make sure your interface's Javadoc is neutral in that way. If it's not, you probably don't need an interface.
- Generally speaking, it is preferable for classes meant for public consumption outside your package/project to implement interfaces so that your users are less coupled to your implementation du jour.
Note that you can probably find counter-examples for each of the bullets in that list. Interfaces are very powerful, so they need to be used and created with care, especially if you're providing external APIs (watch this video to convince yourself). If you're too quick in putting an interface in front of everything, you'll probably end up leaking your single implementation, and you are only making things more complicated for the people following you. If you don't use them enough, you might end up with a codebase that is equally hard to maintain because everything is statically bound and very hard to change. The non-exhaustive list above is where I try to draw the line.
I've found that it is beneficial to define the public methods of a class in a corresponding interface and, when defining references to other classes, to strictly use interface references. This allows for easy inversion of control, and it also facilitates unit testing with mocking and stubbing. It also gives you the liberty of replacing the implementation with some other class that implements that interface, so if you are into TDD it may make things easier (or more contrived, if you are a critic of TDD).
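A sketch of what that buys you in a unit test, assuming JUnit 4 and Mockito on the classpath (ReportService and ReportRepository are made-up names):

import static org.mockito.Mockito.*;
import static org.junit.Assert.*;
import org.junit.Test;

public class ReportServiceTest {

    // Hypothetical collaborator, referenced only through its interface.
    interface ReportRepository {
        String findReport(long id);
    }

    // Class under test depends on the interface, not on a concrete class.
    static class ReportService {
        private final ReportRepository repository;
        ReportService(ReportRepository repository) { this.repository = repository; }
        String describe(long id) { return "Report: " + repository.findReport(id); }
    }

    @Test
    public void describesReportFromRepository() {
        ReportRepository repository = mock(ReportRepository.class);
        when(repository.findReport(42L)).thenReturn("Q3 numbers");

        ReportService service = new ReportService(repository);

        assertEquals("Report: Q3 numbers", service.describe(42L));
    }
}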
Interfaces are the way to get polymorphism. So if you have only one implementation, one class of a particular type, you don't need an interface.
A good way of learning what are considered good methodologies, especially when it comes to code structure design, is to look at freely available code. With Java, the obvious example is to take a look at the JDK system libraries.
You will find many examples of classes that do not implement any interfaces, or that are meant to be used directly, such as java.util.StringTokenizer.
If you use the Service Provider Interface pattern in your application, interfaces are harder to extend than abstract classes. If you add a method to an interface, all service providers must be rewritten. But if you add a non-abstract method to an abstract class, none of the service providers have to be rewritten.
Interfaces also make programming harder if only a small part of the interface methods usually have a meaningful implementation.
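A sketch of that difference (PaymentProvider is a made-up SPI base type):

// Abstract-class SPI: existing providers keep compiling when a method is added.
abstract class PaymentProvider {
    abstract void pay(long amountInCents);

    // Added later as a non-abstract method: subclasses inherit it untouched.
    void refund(long amountInCents) {
        pay(-amountInCents); // naive default behaviour
    }
}

// Had PaymentProvider been an interface, adding refund(long) would have forced
// every existing provider to be rewritten (at least before Java 8 default methods).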
When I design a new system from scratch I use a component-oriented approach: each component (10 or more classes) provides an interface, which allows me (sometimes) to reuse them.
When designing a tool (or a simple system) that does not necessarily have to be an extensible framework, I introduce interfaces only when I need a second implementation as an option.
I have seen some products which exposed nearly every piece of functionality through an interface; it simply took too much time to understand the unnecessary complexity.
An interface is like a contract between a service provider (server) and the user of such a service (client).
If we are developing a web service and we are exposing the REST routes via controller classes, the controller classes can implement interfaces, and those interfaces act as the agreement between the web service and the other applications which use it.
Java interfaces like Serializable, Cloneable and Remote are used to indicate something to the compiler or the JVM. When the JVM sees a class that implements these interfaces, it performs some operation on it to support serialization, cloning or Remote Method Invocation. If your class needs these features, then you will have to implement these interfaces.
Using interfaces is about making your application framework resilient to change. Since, as I mentioned here (Multiple Inheritance Debates II: according to Stroustrup), multiple inheritance was left out of Java and C#, which I regret, one should always use interfaces because you never know what the future will be.

How to expose an EJB as a webservice that will later let me keep client compatibility when ejb changes?

Lots of frameworks let me expose an EJB as a web service.
But then, 2 months after publishing the initial service, I need to change the EJB or some part of its interface. I still have clients that need to access the old interface, so I obviously need to have 2 web services with different signatures.
Anyone have any suggestions on how I can do this, preferably letting the framework do the grunt work of creating wrappers and copying logic (unless there's an even smarter way).
I can choose webservice framework on basis of this, so suggestions are welcome.
Edit: I know my change is going to break compatibility, and I am fully aware that I will need two services with different namespaces at the same time. But how can I do it in a simple manner?
I don't think you need any additional frameworks to do this. Java EE lets you directly expose the EJB as a web service (since EJB 2.1; see example for J2EE 1.4), but with EE 5 it's even simpler:
@WebService
@SOAPBinding(style = Style.RPC)
public interface ILegacyService extends IOtherLegacyService {
    // the interface methods
    ...
}

@Stateless
@Local(ILegacyService.class)
@WebService(endpointInterface = "...ILegacyService", ...)
public class LegacyServiceImpl implements ILegacyService {
    // implementation of ILegacyService
}
Depending on your application server, you should be able to provide ILegacyService at any location that fits. As jezell said, you should try to put changes that do not change the contract directly into this interface. If you have additional changes, you may just provide another implementation with a different interface. Common logic can be pulled up into a superclass of LegacyServiceImpl.
I'm not an EJB guy, but I can tell you how this is generally handled in the web service world. If you have a non-breaking change to the contract (for instance, adding a property that is optional), then you can simply update the contract and consumers should be fine.
If you have a breaking change to a contract, then the way to handle it is to create a new service with a new namespace for its types. For instance, if your first service had a namespace of:
http://myservice.com/2006
Your new one might have:
http://myservice.com/2009
Expose this contract to new consumers.
How you handle the old contract is up to you. You might direct all the requests to an old server and let clients choose when to upgrade to the new servers. If you can use some amount of logic to upgrade the requests to the format that the new service expects, then you can rip out the old service's logic and replace it with calls to the new one. Or, you might just deprecate it altogether and fail all calls to the old service.
PS: This is much easier to handle if you create message class objects rather than reusing domain entities.
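A sketch of the two contracts living side by side, assuming plain JAX-WS annotations (the service and message class names are illustrative):

import javax.jws.WebService;

// Old contract, kept alive for existing consumers.
@WebService(targetNamespace = "http://myservice.com/2006")
public interface OrderService2006 {
    OrderResult2006 placeOrder(OrderRequest2006 request); // hypothetical message classes
}

// New, breaking contract published under its own namespace.
@WebService(targetNamespace = "http://myservice.com/2009")
public interface OrderService2009 {
    OrderResult2009 placeOrder(OrderRequest2009 request);
}

Each interface gets its own endpoint implementation, so the old one can later delegate to the new service or simply be retired.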
OK, here goes:
It seems like dozer.sourceforge.net is an acceptable starting point for doing the grunt work of copying data between two parallel structures. I suppose a lot of web frameworks can generate client proxies that can be re-used in a server context to maintain compatibility.
