I am working on a big Java project that is not very well engineered, and we actually have two main development branches.
One branch, A, is a subset of the second, B: it has all the functionality of the latter, but none of the security checks integrated into the user operations (these are just hashes on files that record which user did what).
Since development is done on branch A, I have to manually merge all the work into branch B whenever a bugfix is done.
The codebase is huge and has interdependencies all around, but rewriting it is out of the question (funding problems, as usual). Moreover, the whole architecture is so complex that any structural change can have strange side effects.
(I realize that this is a programmer's nightmare!).
Now, my question as a Java beginner is the following: would it be possible to "externalize" some functions of some classes -- that is, all the functions that implement security checks -- into an external library, so that the code executes these functions whenever the library is present on the classpath, and executes the plain "no-security" functions otherwise?
Just to be clear, here's a small schematic of what I would like to do:
--- branch A ---
+ class ONE
f1()
f2()
+ class TWO
g1()
g2()
--- branch B ---
+ class ONE
f1*()
f2()
+ class TWO
g1*()
g2()
The code has to execute f1() and g1() whenever the library is not present, but their starred versions if the library is there.
Ideally, given the problems mentioned above, I would like to just cut & paste the "security-related" functions into a set of Java files, compile them as a library, and perform the changes to these functions manually when needed -- they are not modified often.
Is there otherwise a way to deal with this situation that avoids these problems?
Thanks a lot in advance!
@RH6, what you are asking is certainly possible, but it may not be very easy in the situation you described. The fundamental idea is to look for the presence/absence of the library in question and behave accordingly. This is more of a design matter, and there is more than one approach, so right from the onset you should be prepared to modify your design to incorporate this behaviour.
One avenue you could explore is to use AspectJ and weave advice (an around advice) onto the methods in question. In the advice body you check whether the required JAR is present; if it is, you load/create an object of the required class (a custom class loader is one option, though it is not necessary if the JAR is on the classpath) and execute the f1*()/g1*() method. If the JAR is not present, you proceed to execute the plain f1()/g1() method.
This approach is only slightly intrusive at the source level (the intrusion happens at the build level), but it requires you to modify the build process as well as to develop and maintain the advices.
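To make that concrete, here is a minimal sketch using AspectJ's annotation style; the aspect name, the application package, and the probe class com.example.security.SecurityChecks are all assumptions, not parts of your codebase:
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class SecuritySwitch {

    // Wrap the unsecured methods; adjust the pointcut to your real classes.
    @Around("execution(* com.example.app.ONE.f1()) || execution(* com.example.app.TWO.g1())")
    public Object routeToSecureVersion(ProceedingJoinPoint pjp) throws Throwable {
        try {
            // Probe for a class that exists only in the optional security JAR.
            Class<?> checks = Class.forName("com.example.security.SecurityChecks");
            Object impl = checks.getDeclaredConstructor().newInstance();
            // Hypothetical convention: the secured variant lives on the probe class.
            return checks.getMethod(pjp.getSignature().getName() + "Secured").invoke(impl);
        } catch (ClassNotFoundException e) {
            return pjp.proceed(); // JAR absent: run the plain f1()/g1()
        }
    }
}
The aspect is woven either at compile time (ajc) or at load time (-javaagent:aspectjweaver.jar), so the original sources stay untouched.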
I don't think you need to load functions dynamically. For example, you can either:
Make B extend A (and name it something like SecuredA) and override f1() and g1() to add the required security checks.
Create a SecurityManager interface that is called inside f1() and g1(). You then have to create two implementations: one that does nothing (= A) and one that performs the security-related checks (= B). Then you just have to inject/use the correct SecurityManager depending on the current case (see the sketch below).
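A minimal sketch of that second option, with hypothetical names (AppSecurityManager avoids clashing with java.lang.SecurityManager); the lookup uses the standard ServiceLoader mechanism, whose findFirst() requires Java 9+:
import java.util.ServiceLoader;

// The seam that f1() and g1() call into.
public interface AppSecurityManager {
    void check(String user, String operation);
}

// Ships with the main code base: branch A behaviour, no checks.
class NoOpSecurityManager implements AppSecurityManager {
    public void check(String user, String operation) { /* do nothing */ }
}

// Ships only in the optional security JAR: branch B behaviour.
class HashingSecurityManager implements AppSecurityManager {
    public void check(String user, String operation) {
        // write the hash on file that marks which user did what
    }
}

class Security {
    // Picks the JAR's implementation when present, the no-op otherwise;
    // the JAR registers its implementation via META-INF/services.
    static final AppSecurityManager INSTANCE =
            ServiceLoader.load(AppSecurityManager.class)
                         .findFirst()
                         .orElseGet(NoOpSecurityManager::new);
}
f1() then simply calls Security.INSTANCE.check(user, "f1"), and the behaviour is decided by which JARs are deployed.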
There are various design principles to solve this.
For example: IoC (inversion of control).
In software engineering, inversion of control (IoC) describes a design in which custom-written portions of a computer program receive the flow of control from a generic, reusable library. A software architecture with this design inverts control as compared to traditional procedural programming: in traditional programming, the custom code that expresses the purpose of the program calls into reusable libraries to take care of generic tasks, but with inversion of control, it is the reusable code that calls into the custom, or task-specific, code.
The most popular framework for this (as far as I know) is Spring. During the instantiation of your objects you start using a factory method, and this factory method checks an XML file for a possible override.
Here is an example:
<?xml version="1.0" encoding="UTF-8"?>
<beans ...>
    <bean id="myClass" class="my.pkg.MyClass" />
</beans>
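For completeness, the lookup side might then look roughly like this (beans.xml is an assumed file name):
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import my.pkg.MyClass;

// The XML decides which concrete class backs the "myClass" bean;
// callers go through the context, never the implementation class.
ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
MyClass obj = ctx.getBean("myClass", MyClass.class);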
Alternatively, if you don't like the Spring dependency, you can create something yourself using some reflection:
// Note: "package" is a reserved word in Java, so the placeholder class is renamed to my.pkg.MyClass here.
Class<?> defaultClass = my.pkg.MyClass.class;
String overruledClassName = System.getProperty(defaultClass.getName() + ".clazz");
Class<?> clazz = (overruledClassName == null) ? defaultClass : Class.forName(overruledClassName);
Object createdObject = clazz.getDeclaredConstructor().newInstance();
In combination with a property file that contains the following property:
my.pkg.MyClass.clazz = my.pkg.MyClassVersion2
Related
I am using a 3rd party API in a few Java applications. They have updated a few things in the latest version. We will have to update to the latest version, and this requires corresponding changes in our code.
The changes are:
1) The interface and abstract class names which we used to implement/extend have been changed. Also, the method names have been changed.
These are all just name changes.
2) We need to annotate the classes which implement these interfaces with @Service.
3) Then we need to add some new Java files and a property file.
4) We also have a base abstract class which extends the 3rd party abstract class, and then there are many concrete classes. A few methods from the 3rd party abstract class are overridden in our base abstract class, and a few others in the concrete classes.
I can do the refactoring through the Eclipse IDE, but we would prefer not to.
I would like this to be completely automated, like running a script.
I tried using Java reflection to find all the concrete classes of an abstract class and rename the methods. Still, it looks risky.
Is there any other better approach?
It depends on how much code you need to change, how long it takes to do each step, and how many times you repeat the same refactoring.
If it is only a few hundred classes and/or simpler refactorings like rename class/interface can do most of the work, then do it by hand.
Otherwise if you really want to, you can try to write rules in a tool like AutoRefactor: https://github.com/JnRouvignac/AutoRefactor
Disclaimer: I am the author of AutoRefactor.
I remember reading somewhere that a programmer is someone who would rather spend 12 hours writing a script to automate a manual task than spend 20 minutes actually doing that task.
I understand why you want to automate this - the API you're using is making life hard for its clients by renaming things. It's unusual for APIs to break compatibility with naming only - are you sure it's as simple as that?
My strong recommendation is to just bite the bullet and manually refactor. It will almost certainly take less time than automating the process, you'll identify further opportunities to improve your own application's design, and it's unlikely you will ever need to use the refactoring script again.
Unfortunately, I do not know the exact details of your situation, but I can point out some principles which, in my experience, can simplify life in the future.
In short, if you are using any 3rd party API, try to minimize its propagation into your code. Hide the 3rd party code behind your own abstractions (interfaces) using patterns like Adapter, Facade, etc.
That way, if the 3rd party code changes, you will make changes in only one place. This approach also gives you extra freedom: if you decide to use another 3rd party API, switching will be simple, because the major piece of your code will not be touched. And it is useful for testing: you can mock the actual 3rd party functionality.
For example, suppose your project needs persistent storage. You can start by declaring an interface like this:
interface IStorage {
    void save(Model m);
    Model load(int id);
}
This will allow you to:
Defer the decision about the storage provider (maybe it will be MySQL, MongoDB, or simply an XML file on disk) until later.
Easily substitute one 3rd party API for another (for example, change from file storage to a DB) -- see the sketch after this list.
Test your business logic easily by mocking this interface instead of using real storage.
Speed up development in case some modules (which other developers have to build) require working storage: they can just use the IStorage interface as if it were already implemented.
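For instance, a sketch of one such adapter (FileStorage and the file layout are made up; the actual 3rd party serializer calls are elided):
import java.nio.file.Path;

// Hides the concrete storage choice behind IStorage; swapping providers
// later means replacing only this class.
public class FileStorage implements IStorage {
    private final Path dir;

    public FileStorage(Path dir) {
        this.dir = dir;
    }

    public void save(Model m) {
        // call the 3rd party serializer here, writing into dir;
        // that dependency never leaks outside this class
    }

    public Model load(int id) {
        // read dir.resolve(id + ".xml") and deserialize with the same API
        return null; // placeholder for the deserialized Model
    }
}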
I need to create a map from our domain classes' simple names to their fully canonical names. I want to do this only for classes that are under our package structure and that implement Serializable.
In serialization we use the canonical names of classes a lot -- it's a good default behaviour, as it's a very conservative approach, but our model objects are going to move around between packages, and I don't want that to be a breaking change requiring migration scripts, so I'd like this map. I've already tooled our serializer to use this map; now I just need a good strategy for populating it. It's been frustrating.
First alternative: have each class announce itself statically
The most obvious and most annoying: edit each class in question to include the code
static {
    Bootstrapper.classAliases.put(
        ThisClass.class.getSimpleName(),
        ThisClass.class.getCanonicalName()
    );
}
I knew I could do this from the get-go; I started on it, and I really hate it. There's no way this is going to be maintained properly: new classes will be introduced, somebody will forget to add this line, and I'll get myself in trouble.
Second alternative: read through the jar
traverse the jar our application is in, load each class, and see if it should be added to this map. This solution smelled pretty bad -- I'm disturbing the normal loading order and I'm coupled tightly to a particular deployment scheme. Gave up on this fairly quickly.
Third alternative: use java.lang.instrument.Instrumentation
Requires me to run Java with a Java agent. More deployment specifics.
Fourth alternative: hijack class loaders
My first idea was to see if I could add a listener to the class loaders, and then listen for my desired classes being loaded, adding them to this map as they're loaded into the JVM. Strictly speaking this isn't doing it statically, but it's close enough.
After discovering the tree-like nature of class loaders, and the various different schemes used by different threads and different libraries, I decided that implementing this solution would both be too complicated and lead to bugs.
Fifth alternative: leverage the build system & a properties file
This one seems like one of the better solutions, but I don't have the Ant skill to do it. My plan would be to search each file for the pattern
//using human readable regex
[whitespace]* package [whitespace]* com.mycompany [char]*;
[char not 'class']*
class [whitespace]+ (<capture:"className">[nameCharacter]+) [char not '{']* implements [char not '{'] Serializable [char not '{'] '{'
//using notepad++'s regex
\s*package\s+([A-Za-z\._]*);.*class\s+(\w+)\s+implements\s+[\w,_<>\s]*Serializable
and then write out each matching entry in the form [pathFound][className]=[className] to a properties file.
Then I add some fairly simple code to load this properties file into a map at runtime.
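That runtime side might look like this (a sketch; the classAliases.properties file name is an assumption, and Bootstrapper is the class from the first alternative):
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Load the build-generated alias entries into a map at startup.
Properties props = new Properties();
try (InputStream in = Bootstrapper.class.getResourceAsStream("/classAliases.properties")) {
    props.load(in);
}
Map<String, String> classAliases = new HashMap<>();
for (String key : props.stringPropertyNames()) {
    classAliases.put(key, props.getProperty(key));
}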
Am I missing something obvious? Why is this so difficult to do? I know that the lazy nature of Java classes means that the language is antithetical to code asking the question "what classes are there?", and I guess my problem is a derivative of that question, but still, I'm surprised at how much I'm having to scratch my brain to do this.
So I suppose my question is twofold:
how would you go about making this map?
If it would be with your build system, what is the Ant code needed to do it? Is this worth converting to Gradle for?
Thanks for any help
I would start with your fifth alternative. There is a bytecode manipulation project called javassist which lets you load .class files and deal with them as Java objects. For example, you can load a Foo.class and start asking it things like its package, its public methods, etc.
Check out the ClassPool & CtClass objects.
List<CtClass> classes = new ArrayList<>();
// "default" is a reserved word, so the pool gets a different variable name;
// it can also be created once, outside the loop.
ClassPool pool = ClassPool.getDefault();
// Using Apache Commons I/O you can use a glob pattern to populate ALL_CLASS_FILES_IN_PROJECT
for (File file : ALL_CLASS_FILES_IN_PROJECT) {
    classes.add(pool.makeClass(new FileInputStream(file.getPath())));
}
The classes list will have all the classes ready for you to deal with. You can add this to a static block in some entry-point class that always gets loaded.
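From there, building the map from the question might look like this (a sketch; com.mycompany is taken from your regex above, and subtypeOf() needs the supertypes to be resolvable by the ClassPool):
Map<String, String> classAliases = new HashMap<>();
CtClass serializable = pool.get("java.io.Serializable");
for (CtClass ct : classes) {
    // Keep only our own Serializable model classes.
    if (ct.getName().startsWith("com.mycompany.") && ct.subtypeOf(serializable)) {
        classAliases.put(ct.getSimpleName(), ct.getName());
    }
}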
If this doesn't work for you, the next bet is to use a javaagent to do this. It's not that hard to do, but it will have some implications for your deployment (the agent lib jar should be made available and -javaagent added to the startup args).
Here's the scenario. As a creator of publicly licensed, open source APIs, my group has created a Java-based web user interface framework (so what else is new?). To keep things nice and organized, as one should in Java, we have used packages with the naming convention org.mygroup.myframework.x, with the x being things like components, validators, converters, utilities, and so on (again, what else is new?).
Now, somewhere in class org.mygroup.myframework.foo.Bar there is a method void doStuff() that performs logic specific to my framework, and I need to be able to call it from a few other places in my framework, for example org.mygroup.myframework.far.Boo. Given that Boo is neither a subclass of Bar nor in the exact same package, the method doStuff() must be declared public to be callable by Boo.
However, my framework exists as a tool to allow other developers to create simpler, more elegant RIAs for their clients. But if com.yourcompany.yourapplication.YourComponent calls doStuff(), it could have unexpected and undesirable consequences. I would prefer that this never be allowed to happen. Note that Bar contains other methods that are genuinely public.
In an ivory tower world, we would re-write the Java language and insert a tokenized analogue to default access, that would allow any class in a package structure of our choice to access my method, maybe looking similar to:
[org.mygroup.myframework.*] void doStuff() { .... }
where the wildcard would mean any class whose package begins with org.mygroup.myframework can call, but no one else.
Given that this world does not exist, what other good options might we have?
Note that this is motivated by a real-life scenario; names have been changed to protect the guilty. There exists a real framework where, peppered throughout its Javadoc, one will find public methods commented as "THIS METHOD IS INTERNAL TO MYFRAMEWORK AND NOT PART OF ITS PUBLIC API. DO NOT CALL!!!!!!" A little research shows these methods are called from elsewhere within the framework.
In truth, I am a developer using the framework in question. Although our application is deployed and is a success, my team experienced so many challenges that we want to convince our bosses to never use this framework again. We want to do this in a well thought out presentation of the poor design decisions made by the framework's developers, and not just as a rant. This issue would be one (of several) of our points, but we just can't put a finger on how we might have done it differently. There has already been some lively discussion here at my workplace, so I wondered what the rest of the world would think.
Update: No offense to the two answerers so far, but I think you've missed the mark, or I didn't express it well. Either way, allow me to try to illuminate things. Put as simply as I can: how should the framework's developers have refactored the following? Note this is a really rough example.
package org.mygroup.myframework.foo;

public class Bar {

    /** Adds a Bar component to application UI */
    public boolean addComponentHTML() {
        // Code that adds the HTML for a Bar component to a UI screen
        // returns true if successful
        // I need users of my framework to be able to call this method, so
        // they can actually add a Bar component to their application's UI
    }

    /** Not really public, do not call */
    public void doStuff() {
        // Code that performs internal logic to my framework
        // If other users call it, Really Bad Things could happen!
        // But I need it to be public so org.mygroup.myframework.far.Boo can call
    }
}
Another update: So I just learned that C# has the "internal" access modifier. So perhaps a better way to have phrased this question might have been, "How to simulate/emulate internal access in Java?" Nevertheless, I am not in search of new answers. Our boss ultimately agreed with the concerns mentioned above.
You get closest to the answer when you mention the documentation problem. The real issue isn't that you can't "protect" your internal methods; rather, it is that the internal methods pollute your documentation and introduce the risk that a client module may call an internal method by mistake.
Of course, even if you did have fine-grained permissions, you still aren't going to be able to prevent a client module from calling internal methods -- the JVM doesn't protect against reflection-based calls to private methods anyway.
The approach I use is to define an interface for each problematic class, and have the class implement it. The interface can be documented solely in terms of client modules, while the implementing class can provide what internal documentation you desire. You don't even have to include the implementation javadoc in your distribution bundle if you don't want to, but either way the boundary is clearly demarcated.
As long as you ensure that at runtime only one implementation is loaded per documentation-interface, a modern jvm will guarantee you don't suffer any performance penalty for using it; and, you can load harness/stub versions during testing for an added bonus.
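A rough sketch with the classes from the question (BarComponent is a name I made up for the client-facing interface):
// BarComponent.java -- the only type documented for client modules
package org.mygroup.myframework.foo;

public interface BarComponent {
    /** Adds a Bar component to the application UI. */
    boolean addComponentHTML();
}

// Bar.java -- the implementation; doStuff() is absent from the interface,
// so clients coding against BarComponent never see it
public class Bar implements BarComponent {
    @Override
    public boolean addComponentHTML() {
        return true; // placeholder for the real HTML-adding logic
    }

    public void doStuff() {
        // internal framework logic, documented only in the implementation
    }
}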
The only idea I can think of to supply this missing "framework-level access modifier" is CDI and a better design.
If you have to use a method from very different classes and packages in various (but few) situations, there will almost certainly be a way to redesign those classes in order to make those methods "private" and inaccessible.
There is no support in the Java language for this kind of access level (you would like something like "internal" with a namespace). You can only restrict access to package level (or use the known public-protected-private inheritance model).
From my experience, you can use the Eclipse convention: create a package called "internal"; the entire class hierarchy under this package (including sub-packages) is considered non-API code and may be changed at any time with no guarantees for your users. In that non-API code, use public methods whenever you like. Since it is only a convention and is not enforced by the JVM or the Java compiler, you cannot prevent users from using the code, but at least you let them know that these classes were not meant to be used by 3rd parties.
By the way, the Eclipse platform source code has a complex plug-in model that enforces this: a custom class loader for each plug-in prevents loading classes that should be "internal" to other plug-ins.
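If you are on Eclipse/OSGi anyway, PDE can, as far as I recall, even back this convention up in the bundle manifest with an Export-Package directive along these lines (the package name follows the question's naming):
Export-Package: org.mygroup.myframework.internal;x-internal:=true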
Interfaces and dynamic proxies are sometimes used to make sure you only expose methods that you do want to expose.
However, that comes at a fairly hefty performance cost if your methods are called very often.
Using the @Deprecated annotation might also be an option; although it won't stop external users invoking your "framework private" methods, they can't say they weren't warned.
In general I don't think you should worry too much about your users deliberately shooting themselves in the foot, so long as you have made it clear to them that they shouldn't use something.
I'm working on a Scala-based script language (an internal DSL) that allows users to define multiple data transformation functions in a Scala script file. Since applying these functions could take several hours, I would like to cache the results in a database.
Users are allowed to change the definitions of the transformation functions and also to add new ones. However, when the user restarts the application with a slightly modified script, I would like to execute only those functions that have been changed or added. The question is how to detect those changes. For simplicity, let us assume that the user can only adapt the script file, so that any reference to something not defined in this script can be assumed to be unchanged.
In this case what's the best practice for detecting changes to such user-defined functions?
Until now I have thought about:
parsing the script file and calculating fingerprints based on the source code of the function definitions
getting the bytecode of each function at runtime and building fingerprints based on this data
applying the functions to some test data and calculating fingerprints on the results
However, all three approaches have their pitfalls.
Writing a parser for Scala to extract the function definitions could be quite some work, especially if you want to detect changes that indirectly affect a function's behaviour (e.g. if your function calls another (changed) function defined in the script).
Bytecode analysis could be another option, but I have never worked with those libraries, so I have no idea whether they can solve my problem or how they deal with Java's dynamic binding.
The approach with example data is definitely the simplest one, but it has the drawback that different user-defined functions could accidentally be mapped to the same fingerprint if they return the same results for my test data.
Does someone have experience with one of these "solutions", or can you suggest a better one?
The second option doesn't look difficult. For example, with the Javassist library, obtaining the bytecode of a method is as simple as:
CtClass c = ClassPool.getDefault().get(className);
for (CtMethod m : c.getDeclaredMethods()) { // note: getDeclaredMethods(), plural
    CodeAttribute ca = m.getMethodInfo().getCodeAttribute();
    if (ca != null) { // i.e. if the method is not native or abstract
        byte[] byteCode = ca.getCode();
        ...
    }
}
So, as long as you assume that the results of your methods depend only on the code of those methods, it's pretty straightforward.
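Hashing that array then gives the fingerprint; a minimal sketch with plain JDK classes (byteCode comes from the loop above):
import java.security.MessageDigest;
import java.util.Base64;

// SHA-256 is guaranteed to be available in every JRE.
MessageDigest md = MessageDigest.getInstance("SHA-256");
String fingerprint = Base64.getEncoder().encodeToString(md.digest(byteCode));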
UPDATE:
On the other hand, since your methods are written in Scala, they probably contain closures, meaning that parts of their code reside in anonymous classes, and you may need to trace the usage of these classes somehow.
I'm writing a small application in RCP to wrap around the business logic in another (non-RCP) simulation library. I can access and use the library fine from any of my plugins, but I don't know where I should put the instance of the Simulation library so that, say, one of the command handlers can make calls to it.
From reading the docs it sounds like I should be storing 'global' information like this in the workbench - but I still don't really understand how to do that.
Help?
First, the business layer (BL) can and should reside in its own plug-in. That will provide decent decoupling between the layers.
Second, you should carefully decide what the interface should be and which classes are exposed. Ideally, you should mostly expose interfaces and data objects.
Finally, decide how the "handshake" works, e.g. how to obtain the initial interface to the BL. Since it is a plug-in, it could have an Activator which loads it; you could add a method to the Activator which returns the BL interface.
If you are looking for something more decoupled, you could create an extension point or deploy the BL as an OSGi service, but that's a bit of overkill for your need.
If I understand you correctly, I see two ways:
Store the instance in the model plug-in itself, using SimulationFactory.getInstance(String myAppId). The passed String is a constant in your app that is always used when obtaining the reference.
Define a new class, e.g. GlobalAccess, in your app that is initialized with an instance of your model and has some getters (whether you use a single instance again or only provide public static methods is a matter of taste).
The second way is similar to some classes in Eclipse like Platform or PlatformUI, where you can obtain initial references and navigate through the workbench.
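A minimal sketch of that second option (all names are hypothetical, with Simulation standing in for your library's entry class):
// Initialized once at startup (e.g. in your Activator), then read from
// anywhere in the plug-in, such as a command handler.
public final class GlobalAccess {
    private static Simulation simulation;

    private GlobalAccess() {}

    public static void initialize(Simulation s) {
        simulation = s;
    }

    public static Simulation getSimulation() {
        return simulation;
    }
}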
Edit: I just found a tutorial that might help you: Passing Data between Plug-ins