I am using a 3rd party API in a few Java applications. A few things have been updated in the latest version. We will have to upgrade to the latest version, and that requires corresponding changes in our code.
The changes are,
1) The names of the interface and the abstract class which we used to implement/extend have been changed. The method names have also been changed.
These are all just the name changes.
2) We need to annotate the class which implements these interfaces with @Service.
3) Then we need to add a new Java file and a property file.
4) We also have a base abstract class which extends the 3rd party abstract class, and many concrete classes which extend our base class. A few methods from the 3rd party abstract class are overridden in our base abstract class, and a few are overridden in the concrete classes.
I can do the refactoring through the Eclipse IDE, but we would prefer not to do it that way.
I would like this to be completely automated, like running a script.
I tried using Java reflection to find all the concrete classes of an abstract class and rename the methods. Still, it looks risky.
Is there any other better approach?
It depends on how much code you need to change, how long each step takes, and how many times you repeat the same refactoring.
If it is only a few hundred classes, and/or simple refactorings like renaming a class/interface can do most of the work, then do it by hand.
Otherwise, if you really want to, you can try writing rules in a tool like AutoRefactor: https://github.com/JnRouvignac/AutoRefactor
Disclaimer: I am the author of AutoRefactor.
I remember reading somewhere that a programmer is someone who would rather spend 12 hours writing a script to automate a manual task than to spend 20 minutes actually doing that task.
I understand why you want to automate this - the API you're using is making life hard for its clients by renaming things. It's unusual for APIs to break compatibility with naming only - are you sure it's as simple as that?
My strong recommendation is to just bite the bullet and manually refactor. It will almost certainly take less time than automating the process, you'll identify further opportunities to improve your own application's design, and it's unlikely you will ever need to use the refactoring script again.
Unfortunately, I do not know the exact details of your situation, but I can point out some principles which, in my experience, can simplify life in the future.
In short, if you are using any 3rd party API, try to minimize its propagation into your code. Hide the 3rd party code behind your own abstractions (interfaces) using patterns like Adapter, Facade, etc.
Then, if the 3rd party code changes, you will only have to make changes in one place. This approach also gives you extra freedom: if you decide to switch to another 3rd party API, it will be simple, because the major piece of your code will not be touched. It is also useful for testing: you can mock the actual 3rd party functionality.
For example, suppose your project needs persistent storage. You can start by declaring an interface like this:
interface IStorage {
    void save(Model m);
    Model load(int id);
}
This will allow you to:
Decide on the storage provider (maybe it will be MySQL, or MongoDB, or simply an XML file on disk) later.
Easily substitute one 3rd party API with another (for example, change from file storage to a DB). A sketch of such an adapter follows this list.
Test your business logic easily by mocking this interface instead of using real storage.
Speed up development in case some modules (which other developers have to write) require working storage (they will just use the IStorage interface as if it were already implemented).
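As a rough sketch of the Adapter idea above, the only class that touches the 3rd party API could look like the following. ThirdPartyFileStore is a hypothetical stand-in for the vendor's API, and Model's getId/toBytes/fromBytes helpers are assumed:

// Stand-in for the vendor's storage API (hypothetical).
class ThirdPartyFileStore {
    void writeRecord(int id, byte[] payload) { /* vendor code */ }
    byte[] readRecord(int id) { /* vendor code */ return new byte[0]; }
}

// Adapter: the single place in the codebase that knows the vendor API.
class FileStorage implements IStorage {
    private final ThirdPartyFileStore store = new ThirdPartyFileStore();

    public void save(Model m) {
        store.writeRecord(m.getId(), m.toBytes());
    }

    public Model load(int id) {
        return Model.fromBytes(store.readRecord(id));
    }
}

If the vendor renames classes or methods in a new release, only FileStorage has to change.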
Here's the scenario. As a creator of publicly licensed, open source APIs, my group has created a Java-based web user interface framework (so what else is new?). To keep things nice and organized, as one should in Java, we have used packages with the naming convention org.mygroup.myframework.x, with the x being things like components, validators, converters, utilities, and so on (again, what else is new?).
Now, somewhere in class org.mygroup.myframework.foo.Bar there is a method void doStuff() that performs logic specific to my framework, and I need to be able to call it from a few other places in my framework, for example org.mygroup.myframework.far.Boo. Given that Boo is neither a subclass of Bar nor in the exact same package, the method doStuff() must be declared public to be callable by Boo.
However, my framework exists as a tool to allow other developers to create simpler, more elegant RIAs for their clients. But if com.yourcompany.yourapplication.YourComponent calls doStuff(), it could have unexpected and undesirable consequences. I would prefer that this never be allowed to happen. Note that Bar contains other methods that are genuinely public.
In an ivory tower world, we would re-write the Java language and insert a tokenized analogue to default access, that would allow any class in a package structure of our choice to access my method, maybe looking similar to:
[org.mygroup.myframework.*] void doStuff() { .... }
where the wildcard would mean any class whose package begins with org.mygroup.myframework can call, but no one else.
Given that this world does not exist, what other good options might we have?
Note that this is motivated by a real-life scenario; names have been changed to protect the guilty. There exists a real framework where, peppered throughout its Javadoc, one will find public methods commented as "THIS METHOD IS INTERNAL TO MYFRAMEWORK AND NOT PART OF ITS PUBLIC API. DO NOT CALL!!!!!!" A little research shows these methods are called from elsewhere within the framework.
In truth, I am a developer using the framework in question. Although our application is deployed and is a success, my team experienced so many challenges that we want to convince our bosses to never use this framework again. We want to do this in a well thought out presentation of the poor design decisions made by the framework's developers, and not just as a rant. This issue would be one (of several) of our points, but we just can't put a finger on how we might have done it differently. There has already been some lively discussion here at my workplace, so I wondered what the rest of the world would think.
Update: No offense to the two answerers so far, but I think you've missed the mark, or I didn't express it well. Either way, allow me to try to illuminate things. Put as simply as I can: how should the framework's developers have refactored the following? Note this is a really rough example.
package org.mygroup.myframework.foo;

public class Bar {

    /** Adds a Bar component to application UI */
    public boolean addComponentHTML() {
        // Code that adds the HTML for a Bar component to a UI screen
        // returns true if successful
        // I need users of my framework to be able to call this method, so
        // they can actually add a Bar component to their application's UI
    }

    /** Not really public, do not call */
    public void doStuff() {
        // Code that performs internal logic to my framework
        // If other users call it, Really Bad Things could happen!
        // But I need it to be public so org.mygroup.myframework.far.Boo can call it
    }
}
Another update: So I just learned that C# has the "internal" access modifier. So perhaps a better way to have phrased this question might have been, "How to simulate/emulate internal access in Java?" Nevertheless, I am not in search of new answers. Our boss ultimately agreed with the concerns mentioned above.
You get closest to the answer when you mention the documentation problem. The real issue isn't that you can't "protect" your internal methods; rather, it is that the internal methods pollute your documentation and introduce the risk that a client module may call an internal method by mistake.
Of course, even if you did have fine-grained permissions, you still aren't going to be able to prevent a client module from calling internal methods; the JVM doesn't protect against reflection-based calls to private methods anyway.
The approach I use is to define an interface for each problematic class, and have the class implement it. The interface can be documented solely in terms of client modules, while the implementing class can provide what internal documentation you desire. You don't even have to include the implementation javadoc in your distribution bundle if you don't want to, but either way the boundary is clearly demarcated.
As long as you ensure that at runtime only one implementation is loaded per documentation-interface, a modern JVM will guarantee that you don't suffer any performance penalty for using it; and you can load harness/stub versions during testing for an added bonus.
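A minimal sketch of that split, reusing the Bar example from the question (the two types would live in separate files):

// Client-facing interface: the only type documented for framework users.
public interface BarComponent {
    /** Adds a Bar component to the application UI. */
    boolean addComponentHTML();
}

// Implementation: internal Javadoc lives here, and this file can be
// left out of the distributed documentation bundle entirely.
public class Bar implements BarComponent {
    public boolean addComponentHTML() {
        // ... add the HTML for a Bar component ...
        return true;
    }

    // Never declared on BarComponent, so clients coding against the
    // interface never see it in their docs or autocompletion.
    public void doStuff() {
        // framework-internal logic for org.mygroup.myframework.far.Boo
    }
}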
The only idea I can think of to supply this missing "framework-level access modifier" is CDI and a better design.
If you have to use a method from very different classes and packages in various (but few) situations, there will certainly be a way to redesign those classes in order to make those methods "private" and inaccessible.
There is no support in the Java language for this kind of access level (you would like something like "internal" scoped to a namespace). You can only restrict access to the package level, or use the familiar public-protected-private inheritance model.
From my experience, you can use the Eclipse convention:
create a package called "internal"; the entire class hierarchy (including sub-packages) under this package is considered non-API code and may be changed at any time with no guarantees for your users. In that non-API code, use public methods wherever you like. Since it is only a convention and is not enforced by the JVM or the Java compiler, you cannot prevent users from using the code, but at least you let them know that these classes were not meant to be used by 3rd parties.
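To illustrate the convention (nothing here is enforced by the compiler or JVM; the package name alone carries the warning):

// File: org/mygroup/myframework/internal/FrameworkInternals.java
package org.mygroup.myframework.internal;

public class FrameworkInternals {
    public void doStuff() {
        // framework-internal logic; public only so that sibling packages
        // under org.mygroup.myframework can reach it
    }
}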
By the way, the Eclipse platform source code has a complex plugin model that keeps you from using the internal code of other plugins, by implementing a custom class loader for each plugin that prevents loading classes that should be "internal" to those plugins.
Interfaces and dynamic proxies are sometimes used to make sure you only expose the methods you actually want to expose.
However, that comes at a fairly hefty performance cost if your methods are called very often.
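A minimal sketch of the proxy approach, assuming Bar implements a hypothetical BarApi interface that declares only the supported operations:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Public-facing view of Bar: only the supported operations appear here.
interface BarApi {
    boolean addComponentHTML();
}

public final class BarProxyFactory {
    // Returns a proxy implementing only BarApi. Unlike handing out a Bar
    // reference directly, the proxy cannot be downcast to Bar to reach
    // doStuff().
    public static BarApi expose(Bar target) {
        InvocationHandler handler =
                (proxy, method, args) -> method.invoke(target, args);
        return (BarApi) Proxy.newProxyInstance(
                BarApi.class.getClassLoader(),
                new Class<?>[] { BarApi.class },
                handler);
    }
}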
Using the @Deprecated annotation might also be an option: although it won't stop external users from invoking your "framework private" methods, they can't say they weren't warned.
In general, I don't think you should worry too much about your users deliberately shooting themselves in the foot, as long as you have made it clear to them that they shouldn't use something.
I have an in-house enterprise application (EJB2) that works with a certain BPM vendor. The current implementation of the in-house application involves pulling in an object that is only exposed by the vendor's API and making changes to it through the exposed methods in the API.
I'm thinking that I need to somehow map an internal object to this external one, but that seems too simple and I'm not quite sure of the best strategy to go about doing this. Can anyone shed some light on how they have handled such a situation in the past?
I want to "black box" this vendor's software so I can replace it easily if needed. What would be the best approach from a design point of view to somehow map an internal object to this exposed API object? Keep in mind that my in-house app needs to talk to the API still, so there is going to be some dependency between the two, but I want to reduce it so I can also test in isolation from this software using junit.
Thanks,
Jason
Create an interface for the service layer; internally, all your code can work with that. Then make a class that implements that interface, calls the third-party API methods, and acts as the API facade.
i.e.
interface IAPIEndpoint {
    MyDomainDataEntity getData();
}

class MyAPIEndpoint implements IAPIEndpoint {
    public MyDomainDataEntity getData() {
        MyDomainDataEntity dataEntity = new MyDomainDataEntity();
        // Call the third-party API here and fill in the entity
        return dataEntity;
    }
}
It is always a good idea to interface out third-party APIs so you don't get their funk invading your app domain, and so you can swap them out as needed. You could make another class implementation that uses a different service entirely.
To use it in code you just call:
IAPIEndpoint endpoint = new MyAPIEndpoint(); // or obtain it in whatever way is idiomatic for your setup
Basing your design on interfaces when it spans multiple implementations is the way to go. It works great for TDD as well, since you can swap the interface out for a local test implementation that exercises your domain code entirely separately from the third-party API.
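For the TDD angle, a hand-rolled test double of the same interface might look like this (a sketch; IAPIEndpoint and MyDomainDataEntity are carried over from above):

// Test double: returns canned data, no third-party call involved.
class FakeAPIEndpoint implements IAPIEndpoint {
    public MyDomainDataEntity getData() {
        return new MyDomainDataEntity();
    }
}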
Abstraction: implement a DAL (data access layer) which will provide the translation from internal to external and back.
Then, if you switched vendors, your internals would remain valuable and you could swap out the vendor-specific code, assuming the new vendor provides the same functionality and the data types can be mapped to each other.
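A hedged sketch of such a translation layer, where VendorTask stands in for the object exposed by the vendor's API and every name is hypothetical:

// Stand-in for the type exposed by the vendor's API (hypothetical).
interface VendorTask {
    String getId();
    String getStatus();
    void setStatus(String status);
}

// Our internal model: the rest of the app only ever sees this.
class InternalTask {
    String id;
    String status;
}

// The translation layer: maps in both directions at the boundary.
class VendorTaskMapper {
    InternalTask toInternal(VendorTask v) {
        InternalTask t = new InternalTask();
        t.id = v.getId();
        t.status = v.getStatus();
        return t;
    }

    void applyToVendor(InternalTask t, VendorTask v) {
        v.setStatus(t.status); // push our changes back through the API
    }
}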
I will be the black sheep here and advocate for the YAGNI principle. The problem is that if you do an abstraction layer now, it will look so close to the third party API that it will just be a redundant layer. Since you don't know now what a hypothetical future second vendor's API will look like, you don't know what differences you need to account for, and any future port is likely to require a rework for those unforeseen differences anyway.
If you need a test framework, my recommendation is to make your own test implementation using the same API as the BPM vendor. Even better, almost all reputable API providers provide some sort of sandbox mode for testing. If they don't, you should ask for one.
I'm writing a small application in RCP to wrap around the business logic in another (non-RCP) simulation library. I can access and use the library fine from any of my plugins, but I don't know where I should put the instance of the Simulation library so that, say, one of the command handlers can make calls to it.
From reading the docs it sounds like I should be storing 'global' information like this in the workbench - but I still don't really understand how to do that.
Help?
First, the business layer (BL) can and should reside in its own plugin. That will provide decent decoupling between the layers.
Second, you should carefully decide what the interface should be and which classes are exposed. Ideally, you should mostly expose interfaces and data objects.
Finally, decide how the "handshake" works, i.e. how to obtain the initial reference to the BL interface. Since the BL is a plugin, it has an Activator which loads it, and you could add a method to the activator which returns the BL interface.
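A minimal sketch of that handshake, assuming hypothetical ISimulationService and SimulationServiceImpl types in the BL plugin:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    // Held statically so other plugins can reach the BL interface.
    private static ISimulationService service;

    public void start(BundleContext context) {
        service = new SimulationServiceImpl();
    }

    public void stop(BundleContext context) {
        service = null;
    }

    public static ISimulationService getSimulationService() {
        return service;
    }
}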
If you are looking for something more decoupled, you could create an extension point or deploy the BL as an OSGi service, but that's a bit of overkill for your needs.
If I understand you correctly, I see two ways:
Store the instance in the model plug-in itself, using something like SimulationFactory.getInstance(String myAppId). The passed String is a constant in your app that is always used when obtaining the reference.
Define a new class, e.g. GlobalAccess, in your app that is initialized with an instance of your model and has some getters (whether you use a single instance again or only provide public static methods is a matter of taste). A sketch follows below.
The second way is similar to some classes in Eclipse like Platform or PlatformUI, where you can obtain initial references and navigate through the workbench.
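A minimal sketch of the GlobalAccess idea, where Simulation stands in for whatever type the simulation library exposes:

public final class GlobalAccess {
    private static Simulation simulation;

    private GlobalAccess() {}

    // Called once at startup, e.g. from the plug-in's activator.
    public static void init(Simulation s) {
        simulation = s;
    }

    // Command handlers and other plug-in code call this.
    public static Simulation getSimulation() {
        return simulation;
    }
}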
Edit: I just found a tutorial that might help you: Passing Data between Plug-ins
I've been programming in Java for a few courses in the University and I have the following question:
Is it methodologically accepted that every class should implement an interface? Is it considered bad practice not to do so? Can you describe a situation where it's not a good idea to use interfaces?
Edit: Personally, I like the notion of using interfaces for everything as a methodology and habit, even if it's not clearly beneficial. Eclipse automatically creates a class file with all the methods, so it doesn't waste any time anyway.
You don't need to create an interface if you are not going to use it.
Typically you need an interface when:
Your program will provide several implementations for your component. For example, a default implementation which is part of your code, and a mock implementation which is used in a JUnit test. Some tools automate creating a mock implementation, like for instance EasyMock.
You want to use dependency injection for this class, with a framework such as Spring or the JBoss Micro-Container. In this case it is a good idea to specify the dependencies of one class on other classes using an interface.
Following the YAGNI principle, a class should implement an interface only if you really need it. Otherwise, what do you gain from it?
Edit: Interfaces provide a sort of abstraction. They are particularly useful if you want to switch between different implementations (many classes implementing the same interface). If it is just a single class, then there is no gain. A sketch of the several-implementations case follows.
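A minimal sketch of the several-implementations case, with a hand-written mock standing in for what EasyMock would generate (all names illustrative):

// Production code depends only on the interface.
interface MailSender {
    void send(String to, String body);
}

class SmtpMailSender implements MailSender {
    public void send(String to, String body) {
        // real SMTP call would go here
    }
}

// Mock implementation for a JUnit test: records calls instead of sending.
class RecordingMailSender implements MailSender {
    final java.util.List<String> sent = new java.util.ArrayList<>();

    public void send(String to, String body) {
        sent.add(to + ": " + body);
    }
}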
No, it's not necessary for every class to implement an interface. Use interfaces only if they make your code cleaner and easier to write.
If your program has no current need to have more than one implementation for a given class, then you don't need an interface. For example, in a simple chess program I wrote, I only need one type of Board object. A chess board is a chess board is a chess board. Making a Board interface and implementing it would have just required more code to write and maintain.
It's so easy to switch to an interface if you eventually need it.
Every class does implement an interface (i.e. a contract) insofar as it provides a non-private API. Whether you should represent that interface separately as a Java interface depends on whether the implementation is "a concept that varies".
If you are absolutely certain that there is only one reasonable implementation then there is no need for an interface. Otherwise an interface will allow you to change the implementation without changing client code.
Some people will shout "YAGNI", assuming that you have complete control over changing the code should you discover a new requirement later on. Other people will be justly afraid that they will need to change the unchangeable - a published API.
If you don't implement an interface (and use some kind of factory for object creation), then certain kinds of changes will force you to break the Open-Closed Principle. In some situations this is commercially acceptable; in others it isn't. The sketch below shows the interface-plus-factory combination.
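A minimal sketch of that interface-plus-factory combination (names are illustrative):

// Clients depend only on the interface.
interface ReportRenderer {
    String render(String data);
}

class HtmlReportRenderer implements ReportRenderer {
    public String render(String data) {
        return "<pre>" + data + "</pre>";
    }
}

// The factory is the single point of change: swapping implementations
// later leaves client code closed to modification but open to extension.
class ReportRenderers {
    static ReportRenderer defaultRenderer() {
        return new HtmlReportRenderer();
    }
}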
Can you describe a situation where it's not a good idea to use interfaces?
In some languages (e.g. C++, C#, but not Java) you can get a performance benefit if your class contains no virtual methods.
In small programs, or applications without published APIs, then you might see a small cost to maintaining separate interfaces.
If you see a significant increase in complexity due to separating interface and implementation then you are probably not using interfaces as contracts. Interfaces reduce complexity. From the consumer's perspective, components become commodities that fulfil the terms of a contract instead of entities that have sophisticated implementation details in their own right.
Creating an interface for every class is unnecessary. Some commonly cited reasons include mocking (no longer required with modern mocking frameworks like Mockito) and dependency injection (e.g. Spring, which likewise no longer requires interfaces in modern implementations).
Create an interface if you need one, especially to formally document public interfaces. There are a couple of nifty edge cases (e.g. marker interfaces).
For what it's worth, on a recent project we used interfaces for everything (both DI and mocking were cited as reasons) and it turned out to be a complete waste that added a lot of complexity - it was just as easy to add an interface when we actually needed to mock something out, in the rare cases where that was needed. In the end, I'm sure someone will wind up going in and deleting all of the extraneous interfaces some weekend.
I do notice that C programmers first moving to Java tend to like lots of interfaces ("it's like headers"). The current version of Eclipse supports this, by allowing control-click navigation to generate a pop-up asking for interface or implementation.
To answer the OP's question in a very blunt way: no, not all classes need to implement an interface. Like all design questions, this boils down to one's best judgment. Here are a few rules of thumb I normally follow:
Purely functional objects probably don't need to (e.g. Pattern, CharMatcher – even though the latter does implement Predicate, it is secondary to its core function).
Pure data holders probably don't need to (e.g. LogRecord, Locale).
If you can envision a different implementation of a given functionality (say, an in-memory Cache vs. a disk-based Cache), try to isolate the functionality into an interface. But don't go too far trying to predict the future either.
For testing purposes, it's very convenient when classes that do I/O or start threads are easily mockable, so that users don't pay a penalty when running their tests.
There's nothing worse than an interface that leaks its underlying implementation. Pay attention to where you draw the line and make sure your interface's Javadoc is neutral in that way. If it's not, you probably don't need an interface.
Generally speaking, it is preferable for classes meant for public consumption outside your package/project to implement interfaces, so that your users are less coupled to your implementation du jour.
Note that you can probably find counter-examples for each of the bullets in that list. Interfaces are very powerful, so they need to be used and created with care, especially if you're providing external APIs (watch this video to convince yourself). If you're too quick to put an interface in front of everything, you'll probably end up leaking your single implementation, and you will only make things more complicated for the people following you. If you don't use them enough, you might end up with a codebase that is equally hard to maintain, because everything is statically bound and very hard to change. The non-exhaustive list above is where I try to draw the line.
I've found it beneficial to define the public methods of a class in a corresponding interface, and to use interface references strictly when defining references to other classes. This allows for easy inversion of control, and it also facilitates unit testing with mocking and stubbing. It also gives you the liberty of replacing the implementation with some other class that implements the interface, so if you are into TDD it may make things easier (or more contrived, if you are a critic of TDD).
Interfaces are the way to get polymorphism. So if you have only one implementation, one class of a particular type, you don't need an interface.
A good way of learning what are considered good methodologies, especially when it comes to code structure design, is to look at freely available code. With Java, the obvious example is to take a look at the JDK system libraries.
You will find many examples of classes that do not implement any interfaces, or that are meant to be used directly, such as java.util.StringTokenizer.
If you use the Service Provider Interface pattern in your application, interfaces are harder to extend than abstract classes. If you add a method to an interface, all service providers must be rewritten. But if you add a non-abstract method to the abstract class, no service provider has to be rewritten.
Interfaces also make programming harder if only a small part of the interface methods usually have a meaningful implementation.
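A minimal sketch of the SPI point (names are illustrative; Java 8 default methods later softened this for interfaces):

public abstract class StorageProvider {
    public abstract void save(byte[] record);

    // Added in a later release: existing subclasses still compile,
    // whereas a new abstract or interface method would break them all.
    public void saveAll(java.util.List<byte[]> records) {
        for (byte[] r : records) {
            save(r);
        }
    }
}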
When I design a new system from scratch, I use a component-oriented approach: each component (10 or more classes) provides an interface, which (sometimes) allows me to reuse them.
When designing a tool (or a simple system) that does not necessarily have to be an extensible framework, I introduce interfaces only when I need a second implementation as an option.
I have seen some products which exposed nearly every piece of functionality through an interface; it simply took too much time to understand the unnecessary complexity.
An interface is like a contract between a service provider (server) and the user of such a service (client).
If we are developing a web service and exposing REST routes via controller classes, the controller classes can implement interfaces, and those interfaces act as the agreement between the web service and the other applications which use it.
Java interfaces like Serializable, Cloneable and Remote are used to indicate something to the compiler or the JVM. When the JVM sees a class that implements these interfaces, it performs extra operations on it to support serialization, cloning or Remote Method Invocation. If your class needs these features, you will have to implement these interfaces.
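For example, Serializable is a pure marker interface: there is nothing to implement, but the JVM's serialization machinery checks for it at runtime and refuses to serialize objects that lack it.

import java.io.Serializable;

public class Settings implements Serializable {
    private static final long serialVersionUID = 1L;
    public String theme = "dark";
}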
Using interfaces makes your application framework resilient to change. Since multiple inheritance was left out of Java and C# (as I mentioned here: Multiple Inheritance Debates II: according to Stroustrup), which I regret, one should always use interfaces, because you never know what the future will hold.
Would it be OK to put my public interfaces into their own package (for my organisation only)?
For example:
com.example.myprogram - contains all normal code
com.example.myprogram.public - contains public accessible interfaces
com.example.myprogram.abstract - contains abstract classes
Is this a good or a bad thing to do, are there any disadvantages?
I wouldn't like this practice at all. You should group classes, both abstract and concrete, and interfaces according to functionality.
Look at the Java API as an example. Did Sun separate the Collections interfaces from implementations? No. Sun's practices aren't always the best guide, but in this case I agree.
Don't do it.
I can suggest two common approaches:
If you really think that your interfaces may have more implementations in the future (i.e. you're working on an API), then move them to a separate module and create a special package there named 'core', for example (com.example.myprogram.core). Implementations should go in corresponding packages (like com.example.myprogram.firstimpl).
If you have only one implementation, then keep all your interfaces in the com.example.myprogram package and all concrete classes in the com.example.myprogram.impl package.
I can't see that as being bad practice; however, you might want to consider, as an alternative, organizing your code by logical functionality rather than by syntactic definition, so that all the code for a given unit of functionality (interfaces/abstract classes/normal code) goes in the same package. This is one of the principles of modular programming.
That said, putting all the interfaces (but only those) in a separate package might be necessary depending on the size of the project, and might even become almost mandatory if you have a pure component-based plugin architecture (so that other modules know only about the interfaces and the actual implementation is somehow dynamically injected).
Public interfaces are a formal contract between system modules or systems. Because of that, it makes sense to isolate them from the remainder of the code, to make them stand out.
For example, in a system I've worked on, all public interfaces between the server and client components of the system were placed in a special system module (called, no surprise, "api"). This has a number of desirable effects, among them:
- semantically, you know where to look if you need any kind of information on how communication should take place
- you can version the api module separately, which is especially useful when you don't want a moving target, i.e. you sign a contract to deliver an application which will support "the api v1.1" rather than constantly playing catch-up while someone else changes the interface and requires you to adapt your side
That doesn't mean you shouldn't organize them further in sub-packages to distinguish what they are for. :)
In summary, you are doing the right thing by separating the interfaces from the rest of the code base, although depending on your specific needs, you might do well to take it a step further and isolate the interfaces in a separate system module.