What is the convention for keeping old code for reference? - java

I have a fairly new small team project with a new team and we have tried a few different approaches and had a few significant iterations in design.
We have some classes lying around that should not be used, and a few classes with messes like ClassStuff and ClassStuffImproved. We have SVN, but I don't think nuking all the junk and making people dig manually through the history is productive. Some things may need to be re-implemented properly, and the previous poor implementation would provide a reference. I do, however, want to break anything that depends on junk, and I also want the project to build even if these junk files are broken.
Is there a folder convention? Should I put all nuked content in a text file so at least it's easily searchable when someone wonders where a class went?
What is the typical convention here?

I've kept SVN history logs in place by using TortoiseSVN's Repository Browser. If you have a recent SVN server, operations like "Move to", "Copy to" and "Rename" will keep the history logs attached to the original. Be sure to update all checkouts after these kinds of changes (otherwise local changes will have a hard time merging with the new server reality).
Files you no longer use can be moved to a "museum" or "attic" branch (and try to keep the original directory structure in there).
Apache projects have been changing package names to indicate new versions that more or less break with previous versions (e.g. compare commons-lang 2.6 with 3.2). In SVN you can do this by simply renaming a directory in the trunk. I find it a good convention: all code that depends on the old version breaks, but is easily updated by adding one character to the import statements (and of course you verify the new version works as expected by reading the updated API docs).
Re-implementing junk is, in my experience, harder than it looks at first glance. I use the @Deprecated annotation at class level at first for classes that need to be replaced. Your IDE will clearly show where deprecated code is used and the compiler will show warnings. Create the re-implementation in a 'version 2' package. Then you can update like you would update from commons-lang version 2 to commons-lang version 3. Once this is all done, delete the deprecated classes (or package).
Be careful when deleting deprecated classes though: the last time I tried, I found a dependency on the deprecated classes in one small, old, but crucial and heavily tested program, and had to keep the deprecated classes in place to maintain backwards compatibility.
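The deprecate-then-replace workflow above can be sketched in a few lines. This is a minimal, single-file illustration; ClassStuff and its replacement are hypothetical names standing in for your own classes, and in a real project the replacement would live in a separate 'v2' package rather than carrying a V2 suffix.

```java
/** Old implementation, kept temporarily for reference; IDEs and javac flag every use. */
@Deprecated
class ClassStuff {
    String greet() { return "old"; }
}

/** Replacement; in a real project this would live in a 'v2' package instead. */
class ClassStuffV2 {
    String greet() { return "new"; }
}

public class DeprecationDemo {
    public static void main(String[] args) {
        // javac emits a deprecation warning here, which is exactly the point:
        // every remaining caller of the junk class becomes visible at build time.
        ClassStuff old = new ClassStuff();
        ClassStuffV2 current = new ClassStuffV2();
        System.out.println(old.greet() + " -> " + current.greet());
    }
}
```

Once the warnings have all been chased down and fixed, deleting the deprecated class is a compile-verified operation rather than a leap of faith.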

I'm not sure there is a convention, in my projects I just copy it into a new version with a name like:
MyClass2.java
This way I won't get into trouble when I'm importing an old version of a source file. I also comment out the whole contents of MyClass.java with /* ... */. After some reasonable amount of time I drop the old MyClass version and leave MyClass2 in the project. I leave it named MyClass2, so the name states that this file has history and is a more advanced version of my class, which brings a little fun into the process.
I've never seen anyone doing this, but the practice seems quite intuitive.

Related

Java snippet based AST-Manipulation before/during compilation

The question is whether the functionality I describe below already exists, or whether I need to make an attempt at creating it myself. I am aware that I am probably looking at a lot of work if it does not exist yet, and I am also sure that others have already tried. I am nevertheless grateful for comments such as "project A tried this, but..." or "dude D already failed because...". If somebody has an overall more elegant solution, that would of course be welcome as well.
I want to change the way I develop (private) Java code by introducing a multiplexing layer. What I mean by that is that I want to be able to create library-like parameterizable AST-snippets, which I want to insert into my code via some sort of placeholders (such as annotations). I am aware of project https://projectlombok.org/ and have found that, while I find it useful for small applications, it does not generally suit my requirements, as it does not seem possible to insert own snippets without forking the entire project and making major modifications. Also lombok only ever modifies a single file at a time, while I am looking for a solution that will need to 'know' multiple files at a time.
I imagine a structure like this:
Source S: (Parameterizable) AST-snippets that can be included via some sort of reference in Source A.
Source A: Regular Java code, in which I can reference snippets from Source S. This code will not be compiled directly, as it is lacking the referenced snippets, and would thus throw a lot of compile-time errors.
Source T: Target Source, which is an AST-equivalent copy of Source A, except that all references of AST-Snippets have been replaced by their respective Snippet from Source S. It needs to be mappable to the original Source A as well as the resolved snippets from Source S, where applicable, as most development will happen there.
I see several challenges with this concept, not the least of which are debuggability, source-mapping and compatibility with other frameworks/APIs. Also, it seems a challenge to work around the one-file-at-a-time limitation, memory wise.
The advantage over lombok would be flexibility, since lombok only provides a fixed set of snippets for specific purposes, whereas this would enable devs to write their own snippets, or make modifications to getters, setters etc. Also, lombok hooks into the compilation step, and does not output the 'fused' source, afaik.
I want to target at least javac and eclipse's ecj compilers.

Where to store thrift or grpc interfaces?

A Thrift interface can be compiled across multiple languages. It's just text files, so why are there no online tools like SwaggerHub? I don't want to copy-paste the interface across the projects that use it.
I also don't find it useful to package the interface with a jar file, because only JVM languages can resolve that interface, and it's not a user-friendly approach. This isn't only about Thrift; it applies to gRPC as well. I didn't find any docs concerned with this question and couldn't find any best practices.
Assuming you have a .proto file with your interfaces, each sub project will need to know about the file. There are two main approaches to this problem: Vendor the file, or copy the file.
Vendor the File
In this option, you make an additional project (like a git repo) which stores all your interface definitions. Each project that needs to know about the interfaces will include a reference (git submodule or git subtree) to the interface project. When you build your project, the interfaces will need to be synced and then used to generate the necessary code.
The downside of this approach is that git subtree and submodule (or whatever version control you use) are more difficult to use, and require extra work by people building your code. If you make changes to the interface in the subproject it can be difficult to apply those changes back upstream to the interface project.
Copy the File
In this option, you manually copy the file around between projects, and manually keep them in sync. Each time you make a change, you'll want to apply that change to every other project that depends on the interface. When using Protobuf though, it is important to note that you don't have to do this. Protos are designed to be highly backwards compatible.
For example, code that is changing a proto definition from one form to the other can actually use both forms. Old code will look at the old form, and new code can decide on looking at the old or new form. Once all users have been upgraded, you can remove the old form.
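The use-both-forms migration described above can be illustrated in plain Java, without a protobuf dependency. This is only a sketch of the pattern: a map stands in for a decoded message, and the field names `user_name` and `display_name` are hypothetical examples of an old and a new field during a rename.

```java
import java.util.HashMap;
import java.util.Map;

public class FieldMigrationDemo {
    // During the migration, writers fill BOTH the old and the new field,
    // so old readers (which only know "user_name") keep working.
    static Map<String, String> encodeDuringMigration(String value) {
        Map<String, String> msg = new HashMap<>();
        msg.put("user_name", value);     // old field, kept for old readers
        msg.put("display_name", value);  // new field
        return msg;
    }

    // New readers prefer the new field and fall back to the old one,
    // so messages from not-yet-upgraded writers still decode correctly.
    static String decode(Map<String, String> msg) {
        String v = msg.get("display_name");
        return v != null ? v : msg.get("user_name");
    }

    public static void main(String[] args) {
        Map<String, String> fromOldWriter = new HashMap<>();
        fromOldWriter.put("user_name", "alice");
        System.out.println(decode(fromOldWriter));            // old-form message
        System.out.println(decode(encodeDuringMigration("bob"))); // dual-form message
    }
}
```

Once every reader and writer has been upgraded, the old field can be dropped from both sides, completing the migration without ever breaking a deployed client.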
The downside to this approach is that it pushes complexity into the decoding portion of your code. You end up needing to be backwards compatible, with an unknown number of older clients. Since not every project will be in sync with the interface definitions, all the users of the interface will need to be more flexible. This problem is not specific to Proto, but happens naturally; it happens to everyone.
A second downside is having to manually copy changes. You must make sure never to reuse field numbers or names. If you have a lot of projects that depend on the interface, it's more work for you.
Which to Choose?
Neither approach is objectively better than the other. Each one pushes complexity into a different part of your build. From what I have seen, most people prefer to copy the file, since it is easier than learning advanced git commands.

Verify consistency of method contracts between classes of java

We have a huge application in our company that was bought with its source code (almost all of it, actually), and we have been adapting it to our needs, applying fixes, etc. The way someone decided, a long time ago, to compile the project was to keep the precompiled jars from the original company and only edit or add the classes we need in an Override folder, populated from zip files containing the original source code. So when we compile, we only compile our edited classes, put the original jar on the classpath, and then join our new or edited classes with the original jar to create a new jar.
The issue seems to be that people are changing method contracts, editing interfaces, and adding abstract methods to abstract classes. When we compile, we don't recompile the complete source code, only ours, so the compiler only checks contract consistency of our overridden source code.
I have started to notice that things seem to be breaking: old classes try to call a method that doesn't exist any more, or call methods of classes that implement what is really just a partial version of the interface.
I cannot simply join the original source code with our overridden source code, because some classes and Java files are autogenerated at build time, so I could accidentally "fix" something that really isn't broken. (I tried; that's how I know.)
Is there a way to validate contract consistency between the classes of various compiled jars automatically? After fixing those inconsistencies I will be able to merge both source directories and avoid these issues in the future.
Why not compile everything? (You do have the source code.) That way you will at least get compile errors.
I assume you don't have any tests to verify things still work. Maybe you should start writing those too. A good place to start is to test the new code that is written that has broken before. That way you focus on testing the parts that are changing.
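If a full recompile really isn't feasible, a rough automated check is possible with reflection: load each class from the combined jar and verify that it provides a concrete method for every method of the interfaces it claims to implement, which is exactly the "partial implementation" breakage described in the question. The sketch below uses toy in-file classes as stand-ins; in a real setup you would load the classes from the patched jars with a URLClassLoader and run the check over all of them. (Dedicated tools exist for comparing jar API compatibility, e.g. japicmp, which may save you writing this yourself.)

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class ContractCheckDemo {
    interface Service {
        String run();
        void stop();
    }

    // Fully implements the contract.
    static class GoodImpl implements Service {
        public String run() { return "ok"; }
        public void stop() {}
    }

    // Only partially implements the contract (legal because it's abstract);
    // mixing jars compiled at different times produces the same effect,
    // surfacing at runtime as AbstractMethodError.
    static abstract class PartialImpl implements Service {
        public String run() { return "ok"; }
    }

    /** True if every interface method resolves to a concrete method on impl. */
    static boolean implementsContract(Class<?> impl, Class<?> iface) {
        for (Method m : iface.getMethods()) {
            try {
                Method found = impl.getMethod(m.getName(), m.getParameterTypes());
                if (Modifier.isAbstract(found.getModifiers())) return false;
            } catch (NoSuchMethodException e) {
                return false; // method missing entirely: contract broken
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(implementsContract(GoodImpl.class, Service.class));
        System.out.println(implementsContract(PartialImpl.class, Service.class));
    }
}
```

Running such a check over every class in the merged jar turns runtime NoSuchMethodError/AbstractMethodError surprises into a report you can work through before shipping.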

non-java files in package structure

We have a developer who is in the habit of committing non-java files (xsd, dtd etc) in the java packages under the src/java folder in our repository. Admittedly, these are relevant files to that package, but I just hate to see non-java files in the src folder.
Is this is a common practice that I should get used to or are we doing something strange by maintaining these files like this?
The problem with putting non-Java files that are closely tied to the code in a different place than the code is knowing where to find them. It is possible to standardize the locations, and then theoretically everyone will know where to go and what to do, but I find that in practice that does not happen.
Imagine your app still being maintained 5 or 10 years down the road by a team of junior-to-intermediate developers who do not work at the company now and will never talk to anyone who works on your project now. Putting files closely linked to the source in the source package structure could make their lives easier.
I am a big proponent of eliminating as many ambiguities as possible within reason.
It's very common and even recommended, as long as it's justifiable. Generally it's justifiable when the file is a static resource (DTD/XSLT for proprietary formats, premade scripts, etc.), but not when it's something likely to be updated by a third party, like an IP/geographic-location database dump.
I think it gets easier if you think of 'src' as not specifically meaning 'source code'. Think of it as the source of resources that are things needed by your program at compile time and/or runtime.
Things that are a product of compile or build activities should not go here.
Admittedly, like most things, exceptions may apply :)
Update:
Personally, I like to break down src further with subdirectories for each resource type underneath it. Others may like that division at a higher level.
There are a lot of jar libraries that follow the same practice.
I think it is acceptable and convenient.
In Eclipse it works well for us to have a src folder containing Java classes, and a configuration folder (which is blessed as a source folder) containing property files etc. Then they all go into the output folder together and can be found on the classpath, while still being in separate folders inside Eclipse.
One of the advantages of keeping all the auxiliary files next to the source is that version consistency is maintained between these 3rd party libraries and your source code. If you ever need to go back and debug a specific version, you can pull the entire set of source+config and have it all be the same version.
That being said I'd put them in a $project/config/ directory, or some such, rather than in $project/src/java itself. They're not source, nor java, really, so it's misleading having them in that directory.
When you really get down to it, though, this is an issue of personal style. There's no "Right" answer and you should be talking with those team members and understanding why they made this decision. Using this thread as evidence to support a unilateral decision probably won't go over well. ;)
It's pretty common; you can find it in really popular frameworks, e.g. the xsd files for Spring's various schemas. People also usually place Hibernate mapping files in the same package as the model classes.
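The practical payoff of co-locating resources with classes is that the build copies them into the same classpath location as the compiled class, so code can load them relative to the class without knowing anything about the filesystem layout. A minimal sketch, where "mapping.xsd" is a hypothetical resource name that would sit in the same package directory as the class:

```java
import java.io.InputStream;

public class ResourceDemo {
    public static void main(String[] args) {
        // A relative resource name is resolved against ResourceDemo's own
        // package on the classpath, whether that is a directory or a jar.
        InputStream in = ResourceDemo.class.getResourceAsStream("mapping.xsd");
        System.out.println(in != null ? "found next to class" : "not on classpath");
    }
}
```

This works identically whether the code runs from an IDE output folder or from inside a jar, which is why frameworks like Spring and Hibernate rely on it for their schema and mapping files.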
I think this is common as long as the files are necessary. The problems arise when people start committing files that are not needed with the source, such as design specs or random text files.
It is surely common, but incredibly lazy and sloppy. My skin crawls when I see it.
Using a tool such as Maven to build your products enables you to easily, and clearly separate code from resources.
Eclipse bundles can be similarly separated.

How can I override a class using a separate jar?

A customer requires a preview of a new feature of our product. They asked to have that feature sent to them in a jar file (like a patch). There's no problem with including the new classes in said jar file. However, an existing class was modified, which is needed to integrate the new feature. They just want to add this new jar file without having to update the core classes of our product. So, the question is: is it possible to override an already existing class using a separate jar? If so, how?
Thanks in advance.
There's a chance it'll work if you put the new jar earlier in the classpath than the original jar. It's worth trying, although it still sounds like a recipe for disaster - or at least, really hard to debug issues if somehow both classes are loaded.
EDIT: I had planned to write this bit earlier, but got interrupted by the end of a train journey...
I would go back to the customer and explain that while what they're asking is possible, it may cause unexpected problems. Updating the jar file is a much safer fix, with much less risk. The phrases "unexpected problems" and "risk" are likely to ring alarm bells with the customer, so hopefully they'll let you do the right thing.
Yes and no, it depends on your environment.
If you use, for example, OSGi and have your versions under control, it's just a matter of installing a new bundle with the exported package at a higher version (assuming your version ranges are lenient enough).
If you use plain old Java with no fancy custom class loading, you should be good to go putting it earlier on your class path (as others already mentioned).
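When relying on classpath ordering like this, it's worth verifying at runtime which jar actually "won". Asking a class for its code source is a simple, standard way to do that; the jar names in the comment are hypothetical.

```java
public class WhichJarDemo {
    public static void main(String[] args) {
        // Prints the jar or directory this class was loaded from, e.g. after
        //   java -cp patch.jar:original.jar WhichJarDemo
        // it should print the location of patch.jar, since it appears first.
        System.out.println(WhichJarDemo.class
                .getProtectionDomain()
                .getCodeSource()
                .getLocation());
    }
}
```

Putting a line like this (or a logged equivalent) in the patched class itself gives the customer and you immediate evidence of whether the override took effect, which defuses the hard-to-debug scenario mentioned above.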
If you do have custom class loading, you'll need to make sure that all the classes that your 'patched' class needs, and indeed the entire transitive dependency hull, is visible from the class loader which is loading the patched version, which might mean you need to ship the entire application, worst case.
All of the answers that stipulate putting the updated classes before the ones they replace on the classpath are correct, but only provided the original JAR is not sealed or signed.
Yes, it may be possible, by putting it earlier on the classpath than your original jar. However, relying on the ordering of your classpath is not always going to lead to happiness. I'm not sure if it is even documented in the Java Language Spec; if not, then it's going to break for different JVMs and even different versions of the same JVM.
Instead, consider quoting a realistic time frame to integrate the new feature into the current codebase. This is perhaps not the answer you're looking for.
Probably more than you need for this specific case, but in general, if you just want to tweak or augment an existing class, you can also use AspectJ with load-time weaving.