In so many articles on Java's builder design pattern, it is implemented as follows:
public class YourModel {
    // your fields here
    private final long id;
    //...

    private YourModel(YourModelBuilder builder) {
        // set everything from the builder
        this.id = builder.id;
        //...
    }

    public static class YourModelBuilder {
        // same fields as the model it is trying to build
        // (not final here, or the chaining setters below couldn't assign them)
        private long id;
        //...

        public YourModelBuilder(long id/* , .... */) {
            // the normal construction pattern here...
            this.id = id;
            //...
        }

        // builder methods for setting individual fields while allowing for chaining
        public YourModelBuilder withId(long id) {
            this.id = id;
            return this;
        }

        public YourModel build() {
            YourModel model = new YourModel(this);
            // do validation here
            return model;
        }
    }
}
or something similar.
This implementation of the design pattern seems to satisfy my use case (quickly and easily creating models by hand for my Katalon Studio tests, in a way that is easy to understand), but it seems like it could become a nightmare to maintain, especially given that the AUT these models are being created for is constantly changing.
How can we abstract away the field declarations being copied from the model to the model builder?
Let's specifically name the problem:
You'd want to be able to change the model class by:
Adding a new field
Removing an existing field
Renaming an existing field
Changing the type of an existing field
and have as much as possible of the infrastructure surrounding your model class (from the builder's 'setters' to your toString implementation) automatically adapt, without having to explicitly go in and fix things.
Java isn't that flexible, so doing this properly requires fairly drastic measures. You have a few options here:
IDE tooling. Instead of just editing the Java source file in what amounts to a 'dumb' editor, use tooling. For example, many IDEs support the notion of 'refactor -> rename' on a field, which would also change the this.foo = foo; in your setter (if you had one). If you've never seen this, it's a bit magical, so I'd better describe it: you select any identifier, hit the shortcut for 'refactor -> rename', and a little highlight box appears around that identifier and all other places that identifier comes up. Your IDE understands scope and won't "select" different variables that merely happen to share a name; this is not some sort of global search/replace. You then start typing, and what you type appears, as if by magic, in ALL those little boxes. You're doing a live search/replace on just those identifiers that actually refer to the thing you selected! Pretty nice; this is what IDEs are all about. They understand Java and thus can do things like 'modify only those identifier nodes which actually refer to this thing'. Most do not change the setFoo method name to setBar, but some offer a popup asking if that's your intent. If an IDE refactoring system were aware of builders, it could feasibly be written such that, for example, adding a field via a refactor action adds the field, updates the equals and hashCode methods and the toString method, and fixes up the builder, all as you type, in one go. Let's call this explicit code generation. Note that, for Eclipse at least, I'm not aware of refactor tools that go quite this far. More usually you'd just delete ALL the infrastructure, update your model class, and then regenerate ALL the infrastructure. If you forget, your code is broken. Point is: such an IDE plugin could exist and isn't even that hard to build, so that is an option. I'm just not aware of any that actually exist.
Build tooling and templating. Have a system where you just write the model class and some hints as to what you want from it, and then have some aspect of the build tooling run with it: it takes your model class and the templating hints and generates all the infrastructure surrounding your needs automatically. This ensures that the infrastructure is always in sync with your model class and, unlike the previous option, keeps your actual (non-generated) codebase nice and crisp. There are a few tools for this, but Project Lombok is the only one that works 'as you type', integrated directly into your Eclipse process (e.g. the outline view updates as you type with, say, the new builder setter method). The rest tend to work as annotation processors: you need a build cycle. Let's call this one implicit code generation.
Reflective bonanza is a third option: you make an actual model class and the builder appears at runtime. This, naturally, is not an option for plain Java, which is static and explicitly typed, but you can make it work if all the code that interacts with your models is dynamic, for example because it's written in JavaScript or Groovy or some such. I'm assuming this doesn't interest you, so I won't go into further detail on how to set it up. This solution is very common in dynamic languages. One way or another, you lose as-you-write introspection unless the reflective tool ships with IDE plugins, at which point it's as complex as the first two solutions.
Pick your poison. I'd pick the implicit code gen option, and then I'd use the tool that most tightly integrates to keep productivity up by keeping 'waiting around for the build!' down to a minimum. But then, I am one of the core contributors to Project Lombok so I might be a tad biased :)
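For a sense of what the implicit option buys you, here is a minimal sketch of the question's model under Lombok's @Builder (the name field is invented for illustration):

import lombok.Builder;
import lombok.Value;

// @Value makes the class immutable (private final fields, getters,
// equals/hashCode, toString); @Builder generates the entire builder class.
// Add, remove or rename a field and all of that infrastructure follows suit.
@Value
@Builder
public class YourModel {
    long id;
    String name;  // hypothetical second field, for illustration
}

Usage would then look like YourModel.builder().id(42L).name("example").build().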
Related
Here's the scenario. As a creator of publicly licensed, open source APIs, my group has created a Java-based web user interface framework (so what else is new?). To keep things nice and organized, as one should in Java, we have used packages with the naming convention org.mygroup.myframework.x, with the x being things like components, validators, converters, utilities, and so on (again, what else is new?).
Now, somewhere in class org.mygroup.myframework.foo.Bar is a method void doStuff() that performs logic specific to my framework, and I need to be able to call it from a few other places in my framework, for example org.mygroup.myframework.far.Boo. Given that Boo is neither a subclass of Bar nor in the exact same package, the method doStuff() must be declared public to be callable by Boo.
However, my framework exists as a tool to allow other developers to create simpler, more elegant RIAs for their clients. But if com.yourcompany.yourapplication.YourComponent calls doStuff(), it could have unexpected and undesirable consequences. I would prefer that this never be allowed to happen. Note that Bar contains other methods that are genuinely public.
In an ivory tower world, we would re-write the Java language and insert a tokenized analogue to default access, that would allow any class in a package structure of our choice to access my method, maybe looking similar to:
[org.mygroup.myframework.*] void doStuff() { .... }
where the wildcard would mean any class whose package begins with org.mygroup.myframework can call, but no one else.
Given that this world does not exist, what other good options might we have?
Note that this is motivated by a real-life scenario; names have been changed to protect the guilty. There exists a real framework where, peppered throughout its Javadoc, one will find public methods commented as "THIS METHOD IS INTERNAL TO MYFRAMEWORK AND NOT PART OF ITS PUBLIC API. DO NOT CALL!!!!!!" A little research shows these methods are called from elsewhere within the framework.
In truth, I am a developer using the framework in question. Although our application is deployed and is a success, my team experienced so many challenges that we want to convince our bosses to never use this framework again. We want to do this in a well thought out presentation of the poor design decisions made by the framework's developers, and not just as a rant. This issue would be one (of several) of our points, but we just can't put a finger on how we might have done it differently. There has already been some lively discussion here at my workplace, so I wondered what the rest of the world would think.
Update: No offense to the two answerers so far, but I think you've missed the mark, or I didn't express it well. Either way, allow me to try to illuminate things. Put as simply as I can: how should the framework's developers have refactored the following? Note this is a really rough example.
package org.mygroup.myframework.foo;

public class Bar {

    /** Adds a Bar component to application UI */
    public boolean addComponentHTML() {
        // Code that adds the HTML for a Bar component to a UI screen
        // returns true if successful
        // I need users of my framework to be able to call this method, so
        // they can actually add a Bar component to their application's UI
        return true;
    }

    /** Not really public, do not call */
    public void doStuff() {
        // Code that performs internal logic to my framework
        // If other users call it, Really Bad Things could happen!
        // But I need it to be public so org.mygroup.myframework.far.Boo can call it
    }
}
Another update: So I just learned that C# has the "internal" access modifier. So perhaps a better way to have phrased this question might have been, "How to simulate/emulate internal access in Java?" Nevertheless, I am not in search of new answers. Our boss ultimately agreed with the concerns mentioned above.
You get closest to the answer when you mention the documentation problem. The real issue isn't that you can't "protect" your internal methods; rather, it is that the internal methods pollute your documentation and introduce the risk that a client module may call an internal method by mistake.
Of course, even if you did have fine-grained permissions, you still aren't going to be able to prevent a client module from calling internal methods; the JVM doesn't protect against reflection-based calls to private methods anyway.
The approach I use is to define an interface for each problematic class, and have the class implement it. The interface can be documented solely in terms of client modules, while the implementing class can provide what internal documentation you desire. You don't even have to include the implementation javadoc in your distribution bundle if you don't want to, but either way the boundary is clearly demarcated.
As long as you ensure that at runtime only one implementation is loaded per documentation interface, a modern JVM will guarantee you don't suffer any performance penalty for using it; and you can load harness/stub versions during testing for an added bonus.
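A rough sketch of that split, reusing the Bar example from the question (the BarImpl name and the division into two files are mine):

// Bar.java: the documented, client-facing API; clients compile against this.
public interface Bar {
    /** Adds a Bar component to application UI. */
    boolean addComponentHTML();
}

// BarImpl.java: the implementation; doStuff() lives only here, so it never
// appears in the client-facing Javadoc.
public class BarImpl implements Bar {
    @Override
    public boolean addComponentHTML() {
        // code that adds the HTML for a Bar component
        return true;
    }

    // Framework-internal; org.mygroup.myframework.far.Boo calls this on BarImpl.
    public void doStuff() {
        // internal logic
    }
}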
The only idea I can think of to supply this missing "framework-level access modifier" is CDI and a better design.
If you have to use a method from very different classes and packages in various (but few) situations, there will certainly be a way to redesign those classes to make those methods "private" and inaccessible.
There is no support in the Java language for that kind of access level (you would want something like "internal" scoped to a namespace). You can only restrict access to the package level (or use the familiar public/protected/private model).
From my experience, you can use the Eclipse convention:
create a package called "internal"; the whole class hierarchy under it (including sub-packages) is considered non-API code and may change at any time with no guarantees for your users. In that non-API code, use public methods whenever you like. Since it is only a convention and is not enforced by the JVM or the Java compiler, you cannot prevent users from using the code, but at least it lets them know that these classes were not meant to be used by third parties.
By the way, in the Eclipse platform source code there is a complex plugin model that keeps you from using internal code of other plugins, by giving each plugin a custom class loader that refuses to load classes that should be "internal" to those plugins.
Interfaces and dynamic proxies are sometimes used to make sure you only expose methods that you do want to expose.
However, that comes at a fairly hefty performance cost if your methods are called very often.
Using the @Deprecated annotation might also be an option: although it won't stop external users from invoking your "framework private" methods, at least they can't say they hadn't been warned.
In general, I don't think you should worry too much about your users deliberately shooting themselves in the foot, so long as you have made it clear to them that they shouldn't use something.
I am in charge of maintenance of an old application written in Swing, combined with a CAD-like tool written in Java3D. We are having problems with memory usage; profiling shows this is related to the undo functionality in the application.
All undo functionality is state-based, with a basic concept like this:
public class UndoAction {
    private UndoTarget target;
    private Object old_data;
    private Object new_data;
}
Code to create these UndoActions is basically littered throughout the application. Because there is no distinction between modifications of new objects, modifications of existing objects and modifications of subtrees, the following happens for a single action:
Create a new object A.
Modify field foo of the object. A new UndoAction is placed on the stack, which contains foo_old and foo_new.
Modify field bar of the object. A new UndoAction is placed on the stack, which contains bar_old and bar_new.
Execute B.setField(A). A new UndoAction is placed on the stack, which contains field_old and field_new (== A).
There is no granularity or control over this at all, which does not help maintainability.
I want to refactor this system so it becomes maintainable and memory-friendly. Unfortunately, implementing the undo system using the Command pattern is not possible; the actions are too far-reaching to revert. I want to implement the following:
Use annotations to provide "undo demarcation". @Undoable would mark a method as generating an UndoAction which is put on the stack. This can be parametrised just like transactions: REQUIRE, NEST, JOIN... The full object graph is cloned upon entering the @Undoable method.
When a Transaction (=method) finishes, an algorithm should compare the new state with the old state and save a diff.
To implement this, we can use AOP. This allows us to keep the core code very clean.
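Since no existing framework is assumed here, the following is only a sketch of how that demarcation could look with AspectJ; Undoable, Propagation, UndoStack, deepClone and diff are all hypothetical:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.ArrayDeque;
import java.util.Deque;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Hypothetical demarcation annotation, parametrised like transactions.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Undoable {
    Propagation value() default Propagation.REQUIRE;
}

enum Propagation { REQUIRE, NEST, JOIN }

// Placeholder stack of diffs; a real one would cap memory use.
class UndoStack {
    private static final Deque<Object> EDITS = new ArrayDeque<>();
    static void push(Object edit) { EDITS.push(edit); }
}

@Aspect
class UndoAspect {
    @Around("@annotation(undoable)")
    public Object demarcate(ProceedingJoinPoint pjp, Undoable undoable) throws Throwable {
        Object before = deepClone(pjp.getTarget());    // snapshot the graph on entry
        Object result = pjp.proceed();                 // run the "transaction"
        UndoStack.push(diff(before, pjp.getTarget())); // store only the diff
        return result;
    }

    // The hard parts, deliberately left as stubs.
    private Object deepClone(Object graph) { throw new UnsupportedOperationException(); }
    private Object diff(Object before, Object after) { throw new UnsupportedOperationException(); }
}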
And now, my question:
Do any of the above three functionalities already exist in Java? I can imagine I am not the first to wrestle with state-based undo and the problems linked to it (undo demarcation, state comparison, ...).
After this question has been open for quite some time, it seems the answer is: "No, no such framework exists."
As a guide for other people, I am looking into Eclipse Modeling Framework and the EMF.Edit framework. In this framework, you define the model in a descriptor language, and the framework handles the model and any manipulations for you. This automatically results in Actions and Undo/Redo being created.
For reference, one other framework that may serve as a model (if not a solution) is UndoManager, which supports a limited number of edits. It's part of the javax.swing.undo package, one of several core Text Component Features.
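A minimal sketch of UndoManager usage, for reference (the edit body here is a placeholder):

import javax.swing.undo.AbstractUndoableEdit;
import javax.swing.undo.UndoManager;

public class UndoDemo {
    public static void main(String[] args) {
        UndoManager undo = new UndoManager();
        undo.setLimit(100); // bound memory by capping the number of retained edits

        undo.addEdit(new AbstractUndoableEdit() {
            @Override public void undo() { super.undo(); /* restore old state */ }
            @Override public void redo() { super.redo(); /* reapply new state */ }
        });

        if (undo.canUndo()) {
            undo.undo();
        }
    }
}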
This seems like it should be fairly straight-forward, but I can't see anything obvious. What I basically want to do is point at a method and refactor -> extract class. This would move the method in question to a new class, with that method as its top-level public API. The refactoring would also drag any required methods and variables along with it to the new class, deleting them from the old class if nothing else in the old class is using them.
This is a repetitive task I often encounter when refactoring legacy code. Anyway, I'm currently using Eclipse 3.0.2, but would still be interested in the answer if its available in a more recent version of eclipse. Thanks!
I don't think this kind of refactoring exists yet.
Bug 225716 has been logged for that kind of feature (since early 2008).
Bug 312347 would also be a good implementation of such a refactoring.
"Create a new class and move the relevant fields and methods from the old class into the new class."
I mention a workaround in this SO answer.
In Eclipse 3.7.1 there is an option to move methods and fields out of a class. To do so:
Make sure the destination class exists (empty class is fine, just as long as it exists in the project).
In the source class, select the methods that you want to remove (the outline view works great for this), right click on the selection, and choose Move
Select the destination class in the drop down/Browse
Your members are now extracted. Fix any visibility issues (Source > Generate Getters and Setters is very useful for this) and you are all set.
"This seems like it should be fairly straight-forward..."
Actually, Extract Class is one of the more difficult refactorings. Even in your simple example of moving a single method and its dependencies, there are possible complications:
If the moved method might be used in code you don't know about, you need to keep a proxy method in the original class that will delegate to (call) the moved method; see the sketch after this list. (If your application is self-contained, or if you know all the clients of the moved method, then the refactoring code could update the calling code instead.)
If the moved method is part of an interface or if the moved method is inherited, then you will also need to have a "proxy method".
Your method may call a private method/field that some other method also calls. You need to choose a class for the called member (maybe the class that uses it the most), and you will need to change its access from "private" to something more general.
Depending on how much the original class and the extracted class need to know about each other, one or both may need to have fields initialized that point to the other.
Etc.
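For the first complication, the delegating "proxy method" looks roughly like this (class names are invented):

// Before the refactoring, doStuff() lived on LegacyService.
public class LegacyService {
    private final ExtractedHelper helper = new ExtractedHelper();

    /** Delegating proxy kept so unknown external callers don't break. */
    public void doStuff() {
        helper.doStuff();
    }
}

public class ExtractedHelper {
    public void doStuff() {
        // the moved implementation
    }
}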
This is why I encourage everybody to vote for bug 312347 to get fixed.
Have you tried the Move feature of the Refactor group? You can create a helper class and move anything you want there.
My team is moving to Spring 3.0 and there are some people who want to start moving everything into annotations. I just get a really bad feeling in my gut (code smell?) when I see a class that has methods like this (just an example; not all real annotations):
@Transaction
@Method("GET")
@PathElement("time")
@PathElement("date")
@Autowired
@Secure("ROLE_ADMIN")
public void manage(@Qualifier("time") int time) {
    ...
}
Am I just behind the times, or does this all seem like a horrible idea to anyone else? Rather than using OO concepts like inheritance and polymorphism, everything is now done by convention or through annotations. I just don't like it. Having to recompile all the code to change things that, IMO, are configuration seems wrong. But it seems to be the way everything (especially Spring) is going. Should I just "get over it", or should I push back and try to keep our code as annotation-free as possible?
Actually, I think that the bad feeling in your gut has more to do with annotations like this mixing configuration with code.
Personally I feel the same way as you do, I would prefer to leave configuration (such as transaction definitions, path elements, URLs that a controller should be mapped to, etc.) outside of the code base itself and in external Spring XML context files.
I think though that the correct approach here comes down to opinion and which method you prefer - I would predict that half the community would agree with the annotations approach and the other half would agree with the external configuration approach.
Maybe you have a problem with redundant annotations that are all over the code. With meta-annotations redundant annotations can be replaced and your annotations are at least DRY.
From the Spring Blog:
@Service
@Scope("request")
@Transactional(rollbackFor = Exception.class)
@Retention(RetentionPolicy.RUNTIME)
public @interface MyService {
}

@MyService
public class RewardsService {
    …
}
Because Java evolves so slowly, people are putting features that are missing from the language into annotations. This is a good thing, in that Java can be extended in some form, and a bad thing, in that most of the annotations are workarounds and add complexity.
I was also initially skeptical about annotations, but seeing them in use, they can be a great thing. They can also be overused.
The main thing to remember about annotations is that they are static. They cannot change at runtime. Any other configuration method (XML, self-description in code, whatever) does not suffer from this. I have seen people here on SO have issues with Spring in terms of injecting test configurations in a test environment, and having to drop down to XML to get it done.
XML isn't polymorphic, inherited or anything else either, so it is not a step backwards in that sense.
The advantage of annotations is that it can give you more static checking on your configuration and can avoid a lot of verbosity and coordination difficulties in the XML configurations (basically keeping things DRY).
Just like XML, annotations can be overused. The main point is to balance the needs and advantages of each. Annotations, to the degree that they give you less verbose and DRYer code, are a tool to be leveraged.
EDIT: Regarding the comment about an annotation replacing an interface or abstract class, I think that can be reasonable at the framework boundary. In a framework intended to be used by hundreds, if not thousands, of projects, having an interface or base class can really crimp things (especially a base class, although if you can do it with annotations, there is no reason you couldn't do it with a regular interface).
Consider JUnit4. Before, you had to extend a base class that had a setup and a tear-down method. For my point, it doesn't really matter whether those had been on an interface or in a base class. Now I have a completely separate project with its own inheritance hierarchy, and they all have to honor this method. First of all, they can't have their own conflicting method names (not a big deal in a testing framework, but you get my point). Second of all, you have to have the chain of calling super all the way down, because all the methods must be coupled.
Now with JUnit4, you can have different @Before methods in different classes in the hierarchy, and they can be independent of each other, as the sketch below illustrates. There is no equally DRY way to accomplish this without annotations.
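To illustrate (class names invented; these would be two separate files):

import org.junit.Before;
import org.junit.Test;

// BaseTest.java
public class BaseTest {
    @Before
    public void baseSetUp() { /* shared fixtures */ }
}

// ChildTest.java
public class ChildTest extends BaseTest {
    // No name clash, no super call: JUnit4 runs baseSetUp() first, then this.
    @Before
    public void childSetUp() { /* subclass fixtures */ }

    @Test
    public void somethingWorks() { }
}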
From the point of view of the developers of JUnit, it is a disaster. Much better to have a defined type that you can call setUp and teardown on. But a framework doesn't exist for the convenience of the framework developer, it exists for the convenience of the framework user.
All of this applies if your code doesn't need to care about the type (that is, in your example, nothing would ever really use a Controller type anyway). Then you could even say that implementing the framework's interface is leakier than putting on an annotation.
If, however, you are going to be writing code to read that annotation in your own project, run far away.
It's 2018 and this point is still relevant.
My biggest problem with annotations is that you have no idea what they are doing. You're cutting some caller code off and hiding it somewhere disconnected from the callee.
Annotations were introduced to make the language more declarative and less programmatic. But if you're moving the majority of the functionality to annotations, you are effectively switching your code to a different language (and not a very good one at that). There's very little compile-time checking. This article makes the same point: https://blog.softwaremill.com/the-case-against-annotations-4b2fb170ed67
The whole heuristic of "move everything to configuration so that people don't have to learn how to code" has gotten out of control. Engineering managers aren't thinking.
Exceptions:
JUnit
JAX-RS
I personally feel that annotations have taken over too much and have blown up from their original and super-useful purpose (e.g., minor things like indicating an overridden method) into this crazy metaprogramming tool. I don't feel the Java mechanism is robust enough to handle these clusters of annotations preceding each method.
For instance, I'm fighting with JUnit annotations these days because they restrict me in ways that I don't like.
That being said, in my experience the XML based configuration isn't pretty either. So to quote South Park, you're choosing between a giant douche and a t*rd sandwich.
I think that the main decision you have to make is whether you are more comfortable with delocalizing the Spring configuration (i.e., maintaining two files instead of one), and whether you use tools or IDE plugins that benefit from the annotations. Another important question is whether the developers who will use or maintain your code truly understand annotations.
Like many things, there are pros and cons. In my opinion, some annotations are fine, though sometimes it feels like there is a tendency to overuse annotations when a plain old function calling approach might be superior, and taken as a whole, this can unintentionally increase cognitive load because they increase the number of ways to "do stuff."
Let me explain. For example, I'm glad you mentioned the @Transactional annotation. Most Spring developers probably are going to know about and use @Transactional. But how many of those developers know how @Transactional actually works? And would they know off the top of their head how to create and manage a transaction without using the @Transactional annotation? Using @Transactional makes it easier for me to use transactions in the majority of cases, but in particular cases when I need more fine-grained control over a transaction, it hides those details from me. So in a way it is a double-edged sword.
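For reference, a sketch of the non-annotation route with Spring's TransactionTemplate (assuming a PlatformTransactionManager is available to inject):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

public class ManualTransactionExample {
    private final TransactionTemplate tx;

    public ManualTransactionExample(PlatformTransactionManager transactionManager) {
        tx = new TransactionTemplate(transactionManager);
        tx.setTimeout(5); // the kind of fine-grained control @Transactional hides
    }

    public void doWork() {
        tx.execute(new TransactionCallback<Void>() {
            @Override
            public Void doInTransaction(TransactionStatus status) {
                // work that would otherwise live inside an @Transactional method;
                // status.setRollbackOnly() is available for manual rollback decisions
                return null;
            }
        });
    }
}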
Another example is @Profile in Spring config classes. In the general case, it makes it easier to specify which profiles you want a Spring component loaded in. However, if you need more powerful logic than just specifying a list of profiles for which you want the component loaded, you would have to get the Environment object yourself and write a function to do this. Again, most Spring developers would probably be familiar with @Profile, but the side effect of that is they become less familiar with the details of how it works, like the Environment.acceptsProfiles(String... profiles) function, for instance.
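A sketch of what dropping down to the Environment looks like (the bean, the use.embedded.db property and the helper methods are invented):

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

@Configuration
public class DataSourceConfig {

    @Autowired
    private Environment env;

    // Logic that @Profile's simple list of names cannot express.
    @Bean
    public DataSource dataSource() {
        if (env.acceptsProfiles("dev", "qa")
                && env.getProperty("use.embedded.db", Boolean.class, false)) {
            return embeddedDataSource();
        }
        return productionDataSource();
    }

    private DataSource embeddedDataSource() { /* ... */ return null; }
    private DataSource productionDataSource() { /* ... */ return null; }
}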
Finally, when annotations don't work, it can be harder to understand why, and you can't just put a breakpoint on the annotation. (For instance, if you forgot the @EnableTransactionManagement on your config, what would happen?) You have to find the annotation processor and debug that. With a function-calling approach, you can of course just put a breakpoint in the function.
Annotations have to be used sparingly. They are good for some things but not for all. At least the XML configuration approach keeps the config in one file (or a few) instead of spread all over the place. That would introduce (as I like to call it) crappy code organization. You will never see the full picture of the configuration if it is spread across hundreds of files.
Annotations often introduce dependencies where such dependencies do not belong.
I have a class which happens by coincidence to have properties which resemble the attributes from a table in an RDBMS schema. The class was created with this mapping in mind. There is clearly a relationship between the class and the table but I am happy to keep the class free from any metadata declaring that relationship. Is it right that this class makes a reference to a table and its columns in a completely different system? I certainly don't object to external metadata that associates the two and leaves each free of an understanding of the other. What did I gain? It is not as if metadata in the source code provides type safety or mapping conformance. Any verification tool that could analyze JPA annotations could equally well analyze hibernate mapping files. Annotations did not help.
At one contract, I had created a maven module with a package of implementations of interfaces from an existing package. It is unfortunate that this new package was one of many directories within a monolithic build; I saw it as something separate from the other code. Nonetheless, the team was using classpath scanning so I had to use annotations in order to get my component wired into the system. Here I did not desire centralized configuration; I simply wanted external configuration. XML configuration was not perfect because it conflated dependency wiring with component instantiation. Given that Rod Johnson didn't believe in component based development, this was fair. Nonetheless, I felt once again that annotations did not help me.
Let's contrast this with something that doesn't bother me: TestNG and JUnit tests. I use annotations here because I write this test knowing that I am using either TestNG or JUnit. If I replace one for the other, I understand that I will have to perform a costly transition that will stray close to a rewrite of the tests.
For whatever reason, I accept that TestNG, JUnit, QUnit, unittest, and NUnit owns my test classes. Under no circumstances does either JPA or Hibernate own those domain classes which happen to get mapped to tables. Under no circumstances does Spring own my services. I control my logical and physical packaging in order to isolate units which depend upon either. I want to ensure that a move away from one doesn't leave me crippled because of all the dependencies it left behind. Saying goodbye is always easier than leaving. At some point, leaving is necessary.
Check these answers to similar questions
What are the Pros/Cons of Annotations (non-compiler) compared to xml config files
Xml configuration versus Annotation based configuration
Basically it boils down to: use both. Both of them have their use cases. Don't use annotations for things which should remain configurable without recompiling everything (especially things which your users should perhaps be able to configure without needing you to recompile anything).
I think it depends to some extent on when you started programming. Personally, I think they are horrid. Primarily because they have some quasi-'meaning' which you will not understand unless you happen to be aware of the annotation in question. As such they form a new programming language all by themselves and move you further away from POJOs. Compared to (say) plain old OO code. Second reason - they can prevent the compiler doing your work for you. If I have a large code base and want to refactor something or rename something I'd ideally like the compiler to throw up everything that needs to be changed, or as much as possible. An annotation should just be that. An annotation. Not central to the behaviour of your code. They were designed originally to be optionally omitted upon compilation which tells you all you need to know.
And yes, I am aware that XML config suffers in the same way. That doesn't make it worse, just equally bad. At least I can pretend to ignore that though - it doesn't stare me in the face in every single method or parameter declaration.
Given the choice I'd actually prefer the horrible old J2EE remote/home interfaces etc. (so criticised by the Spring folks originally), as at least that gives me an idea of what's happening without having to research @CoolAidFrameworkThingy and its foibles.
One of the problems with the framework folks is that they need to tie you to their framework in order to make the whole enterprise financially viable. This is at odds with designing a framework well (i.e. for it to be as independent of, and removable from, your code as possible).
Unfortunately, though, annotations are trendy. So you will have a hard time preventing your team using them unless you are into code reviews/standards and the like (also, out of fashion!)
I read that Stroustrup left annotations out of C++ as he feared they would be misused. Sometimes things go in the wrong direction for decades, but you can hope things will come full circle in time.
I think annotations are good if they are used in moderation. Annotations like @WebService do a lot of work at deployment and run time, but they don't interfere in the class. @Cachexxx or @Transactional clearly interfere by creating proxies and a lot of artifacts, but I think they are under control.
Things begin to get messy when using Hibernate or JPA with annotations, plus CDI: the annotations grow a lot.
IMO, @Service and @Repository are intrusions of Spring into your application code. They make your application Spring-dependent and usable only with Spring.
The case of Spring Data Graph is another story: @NodeEntity, for instance, adds methods to the class at build time to save the domain object. Unless you have Eclipse and the Spring plugin, you will get errors because those methods don't exist in the source code.
Configuration near the object has its benefits, but so does a single configuration point. Annotations are good in moderation, but they aren't good for everything, and they are definitely bad when there are as many annotation lines as source code lines.
I think the path Spring is taking is wrong, mainly because in some cases there is no other way to do such funny things. It is as if Spring wants to do extreme coding while at the same time locking developers into the Spring framework. Probably the Java language needs another way to do some of these things.
Annotations are plain bad in my experience:
Inability to enforce type safety in annotations
Serialization issues
Cross-compiling (to, for instance, JavaScript) can be an issue.
Libraries/frameworks requiring annotations exclude non-annotated classes from external libraries.
not overridable or interchangeable
your project eventually becomes strongly dependent on the system that requires the annotations
If Java had something like "method literals", you could express a class's annotations in a corresponding companion class. Something like the following:
Take for instance javax.persistence, and the following annotated class:
@Entity
class Person {

    @Column
    private String firstname;
    public String getFirstname() { return firstname; }
    public void setFirstname(String value) { firstname = value; }

    @Column
    private String surname;
    public String getSurname() { return surname; }
    public void setSurname(String value) { surname = value; }
}
Instead of the annotations, I'd suggest a mapping class like:
class PersonEntity extends Entity<Person> {

    @Override
    public Class<Person> getEntityClass() { return Person.class; }

    @Override
    public Collection<PersistentProperty> getPersistentProperties() {
        LinkedList<PersistentProperty> result = new LinkedList<>();
        result.add(new PersistentProperty<Person>(Person#getFirstname, Person#setFirstname));
        result.add(new PersistentProperty<Person>(Person#getSurname, Person#setSurname));
        return result;
    }
}
The fictional "#" sign in this pseudo-Java code represents a method literal which, when invoked on an instance of the given class, invokes the corresponding delegate (written with "::" since Java 8) of that instance.
The "PersistentProperty" class should be able to enforce the method literals to be referring to the given generic argument, in this case the class Person.
This way, you get more benefits than annotations can deliver (like subclassing your 'annotate' class), and you have none of the aforementioned cons.
You can have more domain-specific approaches too.
The only pro annotations have over this is that with annotations you can quickly see whether you have forgotten to include a property/method. But this too could be handled more concisely and more correctly with better metadata support in Java (think, for instance, of something like required/optional as in Protocol Buffers).
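Since Java 8, the "::" method references mentioned above get most of the way there; a sketch of what a real PersistentProperty could look like (the two-type-parameter shape is my own guess):

import java.util.function.BiConsumer;
import java.util.function.Function;

// Real-Java approximation of the proposed method literals: the generics tie
// both references to the entity type at compile time, which is the type
// safety the answer asks PersistentProperty to enforce.
class PersistentProperty<T, V> {
    final Function<T, V> getter;
    final BiConsumer<T, V> setter;

    PersistentProperty(Function<T, V> getter, BiConsumer<T, V> setter) {
        this.getter = getter;
        this.setter = setter;
    }
}

// Usage: result.add(new PersistentProperty<>(Person::getFirstname, Person::setFirstname));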
With generated Java source code, like
code generated with Hibernate tools
code generated with JAXB schema binding (xjc)
code generated with WSDL2Java (cxf)
all generated classes are "value object" types, without business logic. And if I add methods to the generated source code, I will lose these methods if I repeat the source code generation.
Do these Java code generation tools offer ways to "extend" the generated code?
For example,
to override the toString method (for logging)
to implement the visitor pattern (for data analysis / validation)
For JAXB, see Adding Behaviours.
Basically, you configure JAXB to return a custom instance of the object you'd normally expect. In the example below you create a new class PersonEx which extends the JAXB-generated Person. This mechanism works well in that you're deriving from the generated classes, and not altering the JAXB classes or schemas at all.
package org.acme.foo.impl;

class PersonEx extends Person {
    @Override
    public void setName(String name) {
        if (name.length() < 3) throw new IllegalArgumentException();
        super.setName(name);
    }
}

@XmlRegistry
class ObjectFactoryEx extends ObjectFactory {
    @Override
    public Person createPerson() {
        return new PersonEx();
    }
}
Note that the @Override annotation is important in case your JAXB object changes; it will prevent your customisation from becoming orphaned.
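A sketch of hooking the extended factory into unmarshalling; note that the "com.sun.xml.bind.ObjectFactory" property name is specific to the JAXB reference implementation, so treat it as an assumption to verify against the linked documentation:

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

public class UnmarshalDemo {
    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(Person.class);
        Unmarshaller u = ctx.createUnmarshaller();
        // RI-specific hook: instantiate objects via the extended factory,
        // so every unmarshalled Person comes back as a PersonEx.
        u.setProperty("com.sun.xml.bind.ObjectFactory", new ObjectFactoryEx());
        Person p = (Person) u.unmarshal(new File("person.xml"));
    }
}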
As for Hibernate, you may tweak the template files used in code generation to change their behaviour. If you want to tweak Hibernate Tools you can edit, for example, dao/daohome.ftl.
You may even add fields to the toString() output by editing the .hbm.xml files:
...
<property name="note" type="string">
<meta attribute="use-in-tostring">true</meta>
<column name="note" />
</property>
...
Both for logging and validation you may consider using AOP with AspectJ (I don't recommend messing with the generated code, since you might want to build that from scratch many times over).
First, I would reiterate that modification of generated code has many problems associated with it and that, where possible, it should be avoided. That said, sometimes this is impractical, or avoiding it is more effort than just dealing with the changes when the code is regenerated.
Sadly, Java doesn't support the concept of partial classes that C# has, which exist precisely to solve this sort of problem.
You should see if your code generation tools support some form of meaningful comments that delimit regions added by yourself in the class (this is unlikely, and won't help if you are modifying the generated code rather than adding to it).
Your best option, if you really wish to do this, is to generate the files initially but check them into a version control repository immediately.
Then make your changes and check those in.
Next time you rerun the tools and let them overwrite the existing files, you can diff against your source-controlled ones and merge the changes back in (most trivial changes, like the addition of new columns/tables, will be little effort).
This will not help you as much if the code generator suddenly generates radically different code (say, a new version), but in those cases any code you added (which wasn't simply additional convenience methods relying on data/methods already exposed publicly) will have problems no matter how it is mixed into the class. The version control system does still help, however, since it also records the original changes, so you can see what you added previously and what, one would assume, you need to recreate in the new style.
It is not a good idea to edit generated code files, either by editing the files themselves or by subclassing. Whatever you do, be sure to leave the signature created by the tool intact, so that it will be possible to tell in the future that the file was auto-generated.
I recommend that you research the command-line options of the tools to see if they allow you some flexibility. Some tools can generate abstract classes or interfaces instead of concrete classes. If this is not possible, create a domain object that includes the autogenerated object as a member variable.
The way I have used Hibernate is to generate base classes that I then extend. I add all my business logic (if any) to these subclasses. I quite often also end up changing the FreeMarker templates used by Hibernate to further customize the generated classes.
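In outline, that "generated base class plus hand-written subclass" arrangement looks like this (names are placeholders):

// PersonBase.java: generated by the tool; safe to overwrite on regeneration.
public abstract class PersonBase {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// Person.java: hand-written; survives regeneration untouched.
public class Person extends PersonBase {
    public boolean hasName() {
        return getName() != null; // business logic lives in the subclass
    }
}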
The AOP citation is a good one. I'll add Spring, which has very nice AOP features built in.
Have a look at
http://code.google.com/p/jaxb-method-inserter/
It's a small plugin for JAXB I wrote. It's quite simple to use. Hope it helps.