How to merge complex business validation with JSR-303?

I'm stuck with validation in my current use case.
My app has a standard structure (WEB <-> EJB3 Services <-> EJB3 DAO <-> DB).
I have an entity which has validation annotations applied to it.
@Entity
class PhoneNumber {
    ...
    private NumberType numberType;
}
where
enum NumberType {
    FIXED,
    MOBILE,
    ANY
}
Now I have a new validation rule to apply: on PhoneNumber update it should not be possible to change NumberType to ANY if it was previously set to either FIXED or MOBILE.
My Bean Validation rules are checked just before DB operations, but the rule above should be applied in the service layer (at least I think so), since that layer has DB access to fetch the previous version of the entity for comparison.
But because the bean has not been validated yet at that point, I'm forced to check manually that e.g. numberType is not null.
Can you please give me some advice or general rules on how to deal with more complex business validations (not only checking single fields' values in isolation) when using Bean Validation?

I don't think Bean Validation is the right solution for implementing this kind of business logic.
Instead you could implement this check in the setNumberType() method of the PhoneNumber entity. There you have the old value at hand, and compared to an implementation in the service layer, there is no chance of performing an illegal state transition by circumventing (accidentally or intentionally) the service implementing the check.
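A minimal sketch of what that could look like (the exception type and the exact rule wording are my assumptions, not from the question):

@Entity
class PhoneNumber {

    private NumberType numberType;

    public void setNumberType(NumberType newType) {
        // Once the type has been narrowed to FIXED or MOBILE, widening it
        // back to ANY is an illegal state transition (the rule from the question).
        if (newType == NumberType.ANY
                && (numberType == NumberType.FIXED || numberType == NumberType.MOBILE)) {
            throw new IllegalStateException(
                    "Cannot change NumberType back to ANY once it is " + numberType);
        }
        this.numberType = newType;
    }
}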

Here you can find a good description of how to write a custom validator which can do "cross-field" validation.

Related

JPA/Hibernate/Bean Validator - @Pattern on Discriminator?

So I am trying to use Bean Validation to ascertain that the DiscriminatorValue of a given entity can only be one of a selected few.
If the discriminator were an ordinary field, that would be an easy task using @Pattern with a matching regexp.
Since it's not, how do I go about this?
The short answer is that you cannot do that with Bean Validation. As you say, the discriminator column/value is not even part of your entity. It is just a JPA internal value.
I guess my second question is why you would want to do that. What do you want to achieve? After all, these discriminator values are determined at development time by the developer. Provided you let the JPA provider handle the data, there should never be a problem.

domain driven design depends on static methods?

I have been reading a lot online/offline about where to put validation and business rules in general for domain-driven design. What I could not understand is how an entity can provide methods that perform validation and apply business rules without resorting to static methods or a service. This is especially important for cases where the domain object does not need to be instantiated yet, but we need to validate a value that will eventually be used to set the object's attribute.
I noticed blog postings such as http://lostechies.com/jimmybogard/2007/10/24/entity-validation-with-visitors-and-extension-methods/ rely on .NET-specific extension methods, which are not available in programming languages such as Java. I personally don't like static methods, as they cannot be overridden and are hard to test.
Is there any way I could do this without static methods or having to instantiate an unnecessary domain object just to use its validation and business rules methods? If not, does that mean domain-driven design is very dependent on static methods?
Thanks
Use Value Objects, not Entities.
In the registration case, a UserName value object could be introduced. Create a UserName object when receiving the registration, and implement the validation in the constructor of UserName.
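A minimal sketch of such a value object (the concrete rules shown, non-empty and a length limit, are assumed examples):

public final class UserName {

    private final String value;

    public UserName(String value) {
        // Validating in the constructor guarantees an invalid UserName can never exist.
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalArgumentException("Username must not be empty");
        }
        if (value.length() > 50) {
            throw new IllegalArgumentException("Username must not exceed 50 characters");
        }
        this.value = value;
    }

    public String value() {
        return value;
    }
}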
See this question and this presentation for more detail.
Edit1:
1. How to handle cases where different validation rules apply in different contexts? For example: the username must not contain numbers for certain types of members, but numbers are required for other types.
Maybe different factory methods could do that, like UserName.forGoldenCardMember(...) or UserName.forPlainMember(...). Or make MemberType (a hierarchy, maybe) validate the UserName.
Another alternative is to use an AggregateFactory (AccountFactory in this case).
2. Is the constructor the only place to put the validation code? I did read online about two points of view: an object must always be valid vs. not always. Both present good arguments, but is there any other approach?
I personally prefer the always-valid approach. Passing around a possibly invalid value object harms encapsulation.
Edit2:
Requirements:
a) validation business rules based on context (different username rules for member types)
b) keep validating all business rules even if one of them fails
Stick with the Single Responsibility Principle by using a Value Object (MemberType in this case).
An AggregateFactory could be introduced to ease the application layer (coarser granularity).
class AccountFactory {
    Account registerWith(Username username, MemberType type, ....) {
        List<String> errors = new ArrayList<String>();
        errors.addAll(type.listErrorsWith(username));
        // errors.addAll(...); collect the other error reports here
        if (CollectionUtils.isEmpty(errors)) {
            return new Account(username, ....);
        } else {
            throw new CannotRegisterAccountException(errors);
        }
    }
}
Edit3:
For the questions in the comments:
a) Shouldn't the Username object be the one that has a method that returns the errors, like listErrorsWith()? After all, it is the username that has different rules for different member types.
We could look at this question from another perspective: MemberTypes have different rules for usernames. This lets polymorphism replace the if/else block in Username.listErrorsWith(String, MemberType).
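A rough sketch of that polymorphic shape (the subtype, its rule, and the Username.value() accessor are assumptions for illustration):

import java.util.ArrayList;
import java.util.List;

abstract class MemberType {
    // Each member type knows its own username rules.
    public abstract List<String> listErrorsWith(Username username);
}

class GoldenCardMemberType extends MemberType {
    @Override
    public List<String> listErrorsWith(Username username) {
        List<String> errors = new ArrayList<String>();
        // Assumed example rule: golden-card usernames must not contain digits.
        if (username.value().matches(".*\\d.*")) {
            errors.add("Golden card usernames must not contain numbers");
        }
        return errors;
    }
}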
b) If we have the method in the MemberType, the knowledge will not be encapsulated in the Username. Also, we are talking about making sure Username is always valid.
We could define the validity of Username without MemberType rules. Let's say "hippoom@stackoverflow.com" is a valid username; it is a good candidate for a GoldenCard member but not good for a SilverCard member.
c) I still can't see how to perform validation that returns a list of errors without getting the list from an exception thrown by the constructor or a static method. Neither looks ideal IMHO.
Yes, the signature listErrorsWith(): List looks weird; I'd rather use validate(username) with no return value (throwing an exception when it fails). But that would force the client to catch at every validation step in order to run all validations at once.
If you decided to use DDD in your application, you need to build a more complex solution. I agree with @Hippoom: you shouldn't use an Entity for this purpose.
I would suggest this solution:
DTO -> Service Layer (ValidationService -> Converter) -> Persistence Layer (Repository)
Some explanation:
When you receive a DTO from the client side with all necessary parameters, you should validate it in your service layer (e.g. using another service like ValidationService), which can throw an exception if something is wrong. If all is OK, you can create an Entity from your DTO in the Converter and persist it in the Repository.
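A minimal sketch of that flow (ValidationService, AccountConverter, and the DTO/repository types are hypothetical names, not an established API):

public class AccountService {

    private final ValidationService validationService;
    private final AccountConverter converter;
    private final AccountRepository repository;

    public AccountService(ValidationService validationService,
                          AccountConverter converter,
                          AccountRepository repository) {
        this.validationService = validationService;
        this.converter = converter;
        this.repository = repository;
    }

    public void register(AccountDto dto) {
        // Throws a validation exception if any business rule fails.
        validationService.validate(dto);
        // Only valid DTOs ever reach the persistence layer.
        Account entity = converter.toEntity(dto);
        repository.save(entity);
    }
}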
If you want a flexible solution for the ValidationService, I'd suggest Drools.

How do you keep clean layers separation with Hibernate/ORM?

How is it possible to keep clean layers with Hibernate (or other ORMs)?
What I mean by clean layer separation is, for example, keeping all of the Hibernate stuff in the DAO layer.
For example, when creating a big CSV export stream, we often have to do some Hibernate operations like evict to avoid OutOfMemoryError. The filling of the output stream belongs to the view, but the evict belongs to the DAO.
What I mean is that we are not supposed to put evict operations in the frontend / service layer, and neither are we supposed to put business logic in the DAO. So what can we do in such situations?
There are many cases where you have to do some stuff like evict, flush, clear, refresh, particularly when you play a bit with transactions, large data or things like that...
So how do you do to keep clear layers separation with an ORM tool like Hibernate?
Edit: something I don't like either at work is that we have a custom abstract DAO that permits a service to pass a Hibernate criterion as an argument. This is practical, but in my opinion a service that calls this DAO shouldn't be aware of a criterion. I mean, we shouldn't have to import Hibernate stuff into the business / view logic in any way.
Is there an answer, simple or otherwise?
If by "clean" you mean that upper layers don't know about implementations of the lower layers, you can usually apply the
Tell, don't ask principle. For your CSV streaming example, it would be something like, say:
// This is a "global" API (meaning it is visible to all layers). This is ok as
// it is a specification and not an implementation.
public interface FooWriter {
void write(Foo foo);
}
// DAO layer
public class FooDaoImpl {
...
public void streamBigQueryTo(FooWriter fooWriter, ...) {
...
for (Foo foo: executeQueryThatReturnsLotsOfFoos(...)) {
fooWriter.write(foo);
evict(foo);
}
}
...
}
// UI layer
public class FooUI {
...
public void dumpCsv(...) {
...
fooBusiness.streamBigQueryTo(new CsvFooWriter(request.getOutputStream()), ...);
...
}
}
// Business layer
public class FooBusinessImpl {
...
public void streamBigQueryTo(FooWriter fooWriter, ...) {
...
if (user.canQueryFoos()) {
beginTransaction();
fooDao.streamBigQueryTo(fooWriter, ...);
auditAccess(...);
endTransaction();
}
...
}
}
In this way you can deal with your specific ORM with freedom. The downside of this "callback" approach: if your layers are on different JVMs then it might not be very workable (in the example you would need to be able to serialize CsvFooWriter).
About generic DAOs: I have never felt the need; most object access patterns I have found are different enough to make a specific implementation desirable. But certainly doing layer separation and forcing the business layer to create Hibernate criteria are contradictory paths. I would specify a different query method in the DAO layer for each different query, and then I would let the DAO implementation get the results in whatever way it might choose (criteria, query language, raw SQL, ...). So instead of:
public class FooDaoImpl extends AbstractDao<Foo> {
    ...
    public Collection<Foo> getByCriteria(Criteria criteria) {
        ...
    }
}

public class FooBusinessImpl {
    ...
    public void doSomethingWithFoosBetween(Date from, Date to) {
        ...
        Criteria criteria = ...;
        // Build your criteria to get only foos between from and to
        Collection<Foo> foos = fooDaoImpl.getByCriteria(criteria);
        ...
    }

    public void doSomethingWithActiveFoos() {
        ...
        Criteria criteria = ...;
        // Build your criteria to filter out passive foos
        Collection<Foo> foos = fooDaoImpl.getByCriteria(criteria);
        ...
    }
    ...
}
I would do:
public class FooDaoImpl {
    ...
    public Collection<Foo> getFoosBetween(Date from, Date to) {
        // build and execute query according to from and to
    }

    public Collection<Foo> getActiveFoos() {
        // build and execute query to get active foos
    }
}

public class FooBusinessImpl {
    ...
    public void doSomethingWithFoosBetween(Date from, Date to) {
        ...
        Collection<Foo> foos = fooDaoImpl.getFoosBetween(from, to);
        ...
    }

    public void doSomethingWithActiveFoos() {
        ...
        Collection<Foo> foos = fooDaoImpl.getActiveFoos();
        ...
    }
    ...
}
Though someone could think that I'm pushing some business logic down to the DAO layer, it seems a better approach to me: changing the ORM implementation to an alternative one would be easier this way. Imagine, for example, that for performance reasons you need to read Foos using raw JDBC to access some vendor-specific extension: with the generic DAO approach you would need to change both the business and DAO layers, while with this approach you would just reimplement the DAO layer.
Well, you can always tell your DAO layer to do what it needs to do when you want to. Having a method like cleanUpDatasourceCache in your DAO layer, or something similar (or even a set of these methods for different objects), is not bad practice to me.
And your service layer is then able to call that method without any assumptions about what is done by the DAO under the hood. A specific implementation which uses direct JDBC calls would do nothing in that method.
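For instance, a sketch of what such a method could look like (the names are illustrative, not from the answer):

public interface FooDao {
    // Lets the service request cache cleanup without knowing how (or if) it happens.
    void cleanUpDatasourceCache();
}

public class HibernateFooDao implements FooDao {

    private final SessionFactory sessionFactory;

    public HibernateFooDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public void cleanUpDatasourceCache() {
        // Detach everything held in the session-level cache.
        sessionFactory.getCurrentSession().clear();
    }
}

public class JdbcFooDao implements FooDao {

    @Override
    public void cleanUpDatasourceCache() {
        // Direct JDBC holds no entity cache, so there is nothing to do.
    }
}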
Usually a DAO layer to wrap the data access logic is necessary. Other times the EntityManager is all you want to use for CRUD operations; for those cases, I wouldn't use a DAO, as it would add unnecessary complexity to the code.
How should EntityManager be used in a nicely decoupled service layer and data access layer?
If you don't want to tie your code to Hibernate, you can use Hibernate through JPA instead and not bother too much about abstracting everything within your DAOs. You are less likely to switch from JPA to something else than to replace Hibernate.
My 2 cents: I think the layer separation pattern is great as a starting point for most cases, but there is a point where we have to analyze each specific application case by case and design a more flexible solution. What I mean is, ask yourself for example:
Is your DAO expected to be reused in another context other than exporting CSV data?
Does it make sense to have another implementation of the same DAO interface without Hibernate?
If both answers are no, maybe a little bit of coupling between persistence and data presentation is OK. I like the callback solution proposed above.
IMHO, a strict implementation of a pattern sometimes has a higher cost in readability, maintainability, etc., which are the very issues we were trying to fix by adopting a pattern in the first place.
You can achieve layer separation by implementing the DAO pattern and doing all the Hibernate/JDBC/JPA related stuff in the DAO itself.
For example, you can specify a generic DAO interface as:
public interface GenericDao<T, PK extends Serializable> {

    /** Persist the newInstance object into the database. */
    PK create(T newInstance);

    /**
     * Retrieve an object that was previously persisted to the database,
     * using the indicated id as primary key.
     */
    T read(PK id);

    /** Save changes made to a persistent object. */
    void update(T transientObject);

    /** Remove an object from persistent storage in the database. */
    void delete(T persistentObject);
}
and its implementation as:
public class GenericDaoHibernateImpl<T, PK extends Serializable>
        implements GenericDao<T, PK>, FinderExecutor {

    private Class<T> type;

    public GenericDaoHibernateImpl(Class<T> type) {
        this.type = type;
    }

    public PK create(T o) {
        return (PK) getSession().save(o);
    }

    public T read(PK id) {
        return (T) getSession().get(type, id);
    }

    public void update(T o) {
        getSession().update(o);
    }

    public void delete(T o) {
        getSession().delete(o);
    }

    // getSession() is assumed to be provided elsewhere, e.g. obtained from
    // a SessionFactory held by a base class or injected into the DAO.
}
so service classes can call any method on any DAO without any assumptions about the internal implementation of the method.
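Usage could then look something like this (Person, its Long primary key, and its accessors are assumed example types):

GenericDao<Person, Long> personDao = new GenericDaoHibernateImpl<Person, Long>(Person.class);
Long id = personDao.create(new Person("Alice"));
Person loaded = personDao.read(id);
loaded.setName("Alice B.");
personDao.update(loaded);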
Have a look at the GenericDao link.
Hibernate (either as a SessionManager or a JPA EntityManager) is the DAO. The Repository pattern is, as far as I have seen, the best starting place. There is a great image over at the DDD Sample website which I think speaks volumes about how you keep things separate.
My application layer has interfaces that are explicit business actions or values. The business rules are in the domain model, and things like Hibernate live in the infrastructure. Services are defined at the domain layer as interfaces, and implemented in the infrastructure in my case. This means that for a given Foo domain object (an aggregate root in DDD terminology) I usually get the Foo from a FooService, and the FooService talks to a FooRepository, which allows one to find a Foo based on some criteria. Those criteria are expressed via method parameters (possibly complex object types) which at the implementation side, for example in a HibernateFooRepository, would be translated into HQL or Hibernate criterion.
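A sketch of that shape (the repository method and the HQL inside the Hibernate implementation are illustrative assumptions):

import java.util.Date;
import java.util.List;
import org.hibernate.SessionFactory;

// Domain layer: a repository interface free of any Hibernate types.
public interface FooRepository {
    List<Foo> findActiveCreatedBetween(Date from, Date to);
}

// Infrastructure layer: the Hibernate-specific implementation.
public class HibernateFooRepository implements FooRepository {

    private final SessionFactory sessionFactory;

    public HibernateFooRepository(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    @SuppressWarnings("unchecked")
    public List<Foo> findActiveCreatedBetween(Date from, Date to) {
        // Method parameters are translated to HQL only here, so no criterion
        // or query type ever leaks out of the infrastructure layer.
        return sessionFactory.getCurrentSession()
                .createQuery("from Foo f where f.active = true and f.created between :from and :to")
                .setParameter("from", from)
                .setParameter("to", to)
                .list();
    }
}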
If you need batch processing, it should exist at the application level and use domain services to facilitate this. StartBatchTransaction/EndBatchTransaction. Hibernate may listen to start/end events in order to coordinate purging, loading, whatever.
In the specific case of serializing domain entities, though, I see nothing wrong with taking a set of criteria and iterating over them one at a time (from root entities).
I find that often, in the pursuit of separation, we try to make things completely general. They are not one and the same: your application has to do something, and that something can and should be expressed rather explicitly.
If you can substitute an InMemoryFooRepository where a HibernateFooRepository was previously being used, you're on the right path. The natural flow through unit and integration testing your objects encourages this when you adhere or at least try to respect the layering outlined in the image I linked above.
You got some good answers here; I would like to add my thoughts on this (by the way, this is something to take care of in our code as well). I would also like to focus on the issue of having Hibernate/JPA annotations on entities that you might need to use outside of your DAL (i.e. in business logic, or even sent to your client side).
A. If you use the GenericDAO pattern for a given entity, you may find your entity being annotated with Hibernate (or maybe JPA) annotations such as @Table, @ManyToOne and so on. This means that your client code may contain Hibernate/JPA annotations and you would require an appropriate JAR to get it compiled, or need some other support in your client code. This is the case, for example, if you use GWT as your client (which can have support for JPA annotations in order to get entities compiled) and share the entities between the server and the client code, or if you write a Java client that performs a bean lookup using InitialContext against a Java application server (in this case you will need a JAR with the annotated entity classes on the client side).
B. Another approach is to work with Hibernate/JPA annotated code on the server side and expose web services (say, a RESTful web service or SOAP). This way, the client works with an "interface" that does not expose knowledge of Hibernate/JPA (for example, in the case of SOAP, a WSDL defines the contract between the client of the service and the service itself). By breaking the architecture into a service-oriented one, you get all kinds of benefits such as loose coupling and ease of replacing pieces of code, and you can concentrate all the DAL logic in one service that serves the rest of your services and later on replace the DAL, if needed, by another service.
C. You can use an "object to object" mapping framework such as Dozer to map objects of classes with Hibernate/JPA annotations to what I call "true" POJOs, i.e. Java beans with no annotations whatsoever on them.
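For example, a minimal Dozer sketch (PersonEntity and PersonDto are assumed classes with matching property names):

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

// Copies matching properties from the annotated entity into an
// annotation-free DTO that can safely cross layer boundaries.
Mapper mapper = new DozerBeanMapper();
PersonDto dto = mapper.map(personEntity, PersonDto.class);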
D. Finally, regarding annotations: why use annotations at all? Hibernate offers hbm XML files as an alternative way to do the "ORM magic"; this way your classes can remain free of annotations.
E. One last point: I would like to suggest you look at the stuff we did at oVirt. You can download the code by git cloning our repo. Under engine/backend/manager/modules/bll you will find a Maven project holding our BLL logic, and under engine/backend/manager/modules/dal our DAL layer (although it is currently implemented with Spring-JDBC, with some Hibernate experiments, you will get some good ideas on how to use it in your code). I would like to add that if you go for a similar solution, I suggest you inject the DAOs into your code rather than holding them in a singleton like we did with the getXXXDao methods (this is legacy code we should strive to remove and move to injection).
I would recommend you let the database handle the export-to-CSV operation rather than building it yourself in Java; it isn't as efficient. ORM shouldn't really be used for those large-scale batch operations, because ORM should only be used to manipulate transactional data.
Large-scale Java batch operations should really be done with JDBC directly, with transactional support turned off.
However, if you do this regularly, I recommend setting up a reporting database that is a delayed replica of the production database, is not used by the application, and is fed by database-specific replication tools that may come with your database.
Your solution architect should be able to work with the other groups to help set this up for you.
If you really have to do it in the application tier, then using raw JDBC calls may be the better option. With raw JDBC you can perform a query that assembles the data you require on the database side, fetch the data one row at a time, and write it to your output stream.
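A minimal sketch of that row-at-a-time streaming (the foo table, its columns, and the already-opened Connection are assumptions):

import java.io.BufferedWriter;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public void streamFoosAsCsv(Connection con, OutputStream out) throws Exception {
    try (PreparedStatement ps = con.prepareStatement(
                "SELECT id, name FROM foo",
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
         BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(out))) {
        ps.setFetchSize(500); // hint the driver to stream rows instead of buffering them all
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // One row in memory at a time, written straight to the stream.
                writer.write(rs.getLong("id") + "," + rs.getString("name"));
                writer.newLine();
            }
        }
        writer.flush();
    }
}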
To answer your layers question: I don't like using the word "layers" because it usually implies one thing on top of another; I would rather use the word "components". I have the following component groups.
application
- domain - just annotated JPA classes, no persistence logic; usually a plain JAR file, but I recommend just plopping it in as a package in the EJB rather than having to deal with classpath issues
- contracts - WSDL and XSD files that define an interface between different components, be it web services or just the UI
- transaction scripts - stateless EJBs that have a transaction and persistence units injected into them and do the manipulation and persistence of the domain objects; these may implement the interfaces generated from the contracts
- UI - a separate WAR project with EJBs injected into it
database
- O/R diagram - this is the contract agreed upon by the application and data teams to ensure THE MINIMUM that the database will provide; it does not have to show everything
- DDLs - this is the database-side implementation of the O/R diagram, which will contain everything, but generally no one should care because it is implementation detail
- batch - batch operations such as export or replicate
- reporting - provides queries to get business-value reports from the system
legacy
- messaging contracts - these are contracts used by messaging systems such as JMS or WS-Notifications, or standard web services
- their implementation
- transformation scripts - used to transform one contract to another
It seems to me we need to take another look at the layers.
(I hope someone corrects me if I get this wrong.)
Front End/UI
Business
Service/DAO
So for the case of generating a report, the layers break down like so:
Front End/UI
will have a UI with a button "Get Some Report"
the button will then call the Business layer, which knows what the report is about.
The data returned by the report generator is given any final formatting before being returned to the user.
Business
MyReportGenerator.GenerateReportData() or similar will be called
Service/DAO
inside the report generator, DAOs will be used. DAOLocator.GetDAO(Entity.class) or similar factory-type methods would be used to get the DAOs; the returned DAOs will extend a common DAO interface.
Well, to get a clean separation of concerns (you could say clean layer separation), you can add a Service layer to your application, which lies between your front end and your DAO layer.
You can put your business logic in the Service layer and the database-related things in the DAO layer, using Hibernate.
So if you need to change something in your business logic, you can edit your Service layer without changing the DAO, and if you want to change the DAO layer, you can do so without touching the actual business logic, i.e. the Service layer.

Creating entities rules

I'd like to know the answer to this simple question.
When I create an entity object and I want to restrict the setting of an attribute (for example, I don't want to allow anyone to set an integer value less than 1 on an attribute), should I implement it in the setter of this attribute, or should I check this restriction later in a class that handles these objects? Generally, can I implement getters and setters however I want, as long as my getters return and my setters set attributes?
I know there are some rules (code conventions) in Java, so I don't want to break any of them.
Thanks in advance; hope that my question is clear enough, and sorry for any grammar mistakes I might have made.
Yes, getters/setters are useful for that.
for example:
public void setAge(int age) {
    if (age < 0) {
        // or, if you don't want to throw an exception, you can handle it other ways too
        throw new IllegalArgumentException("Invalid age : " + age);
    }
    this.age = age; // only reached when the value passed validation
}
You can also use Java-EE's Bean Validators for this
public class Person {

    @Min(value = 0)
    @Max(value = 99)
    private Integer age;

    //some other code
}
My preferred approach is to use JSR 303 (Bean Validation API) to ensure that the properties of the class are valid.
It is quite alright to perform validation in setters, but this is not always a desirable approach. There is the potential of mixing the needs of several contexts that are not related to each other. For example, some of your properties must never be set from the user-interface, and would instead be computed by a service, before being persisted. In such an event, it is not desirable to have this logic inside a setter, for you would need to know the context in which the setter is being invoked; you'll need to apply different rules in your UI layer and in your persistence layer. JSR 303 allows you to separate these concerns using validation groups, so that your UI validation group is different from your persistence validation group.
In JPA 2.0, when you annotate your class using constraints that are evaluated by a JSR 303 validator, your persistence provider can automatically evaluate these constraints on the PrePersist, PreUpdate and PreRemove (typically not done; see below) lifecycle events of entities. To perform validation of entities in your JPA provider, you must specify either the validation-mode element or the javax.persistence.validation.mode property in your persistence.xml file; the values must be either AUTO (the default) or CALLBACK (and not NONE).
The presence of a Bean Validation provider is sufficient to ensure that validation occurs on JPA entity lifecycle events, as the default value is AUTO. You get this by default, in a Java EE 6 application server; Glassfish uses the RI implementation of JSR 303 which is Hibernate Validator, and it works quite well with EclipseLink as well.
The CALLBACK mode will allow you to override the validation groups that are to be applied when the lifecycle events are triggered. By default, the default Bean validation group (Default) will be validated for update and persist events; the remove event does not involve any validation. The CALLBACK mode allows you to specify a different validation group for these events, using the properties javax.persistence.validation.group.pre-persist, javax.persistence.validation.group.pre-update and javax.persistence.validation.group.pre-remove.
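As a rough sketch of how such groups can look (the group interface, field names, and the property value below are made-up examples):

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

// Hypothetical marker interface for rules that must hold before persisting.
public interface PersistenceChecks {}

public class Person {

    // Belongs to the default group, so it is validated on the
    // pre-persist and pre-update events out of the box.
    @Size(min = 1, max = 50)
    private String name;

    // Validated only when the PersistenceChecks group runs, e.g. after setting
    // in persistence.xml:
    //   javax.persistence.validation.group.pre-persist=com.example.PersistenceChecks
    @NotNull(groups = PersistenceChecks.class)
    private String internalCode;
}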
Do keep in mind that JSR 303 validation can be used outside a Java EE container, although the Bean Validation API documentation link that I've posted above is from the Java EE 6 API documentation.
This is the goal of getters and setters.
If we cannot add some behavior in these methods, well... why wouldn't we just use public attributes?
From my understanding of your question, it's pretty much related to the OO principle of encapsulation.
You can have a look at this article: http://www.tutorialspoint.com/java/java_encapsulation.htm
Getters and setters are great for adding the restrictions, just like Jigar Joshi has in his answer. That way you get feedback immediately and can handle the problem when it is introduced.
Another solution would be to use object validation (something like a JSR-303 implementation), which would allow you to annotate the field with min and max values. Something like:
@Min(value = 1)
private int myvalue;
Then you can validate the entire object in one go and get all messages if you have other constrained fields. This is obviously not useful everywhere, but if it fits your need it is an option.
Finally, when you say "entity" I think of something stored in a database or related to ORM tools. If that is the case, you will want to be careful with what you do in your getter. For instance, if you do lazy initialization in the getter some ORM suppliers will mark the entity as dirty and attempt to flush it to the database possibly causing an unintended write.

Struts2 xwork Type Conversion with hibernate

What is the best way to convert types in a Struts2 application?
Right now I want to create a CRUD for a certain hibernate entity in my application. Say I wanted to change the Account that a User is associated with. I can just pass in the parameter user.account.id with a specific value, provided that I have all of the proper getters/setters.
This works perfectly fine when creating an object for the first time, where the account would be null. This makes OGNL create a new Account object and set its id to what was passed in.
The problem happens when trying to change the encapsulated Account object. Using the same user.account.id parameter, OGNL interprets this as getUser().getAccount().setId(param), and Hibernate interprets this as an attempt to change the primary key.
I understand why it does this; I am just wondering if there is a better way of handling this case. It is very common in our application, and I don't want to have to keep creating multiple objects and marshaling them over before I save them via Hibernate.
Does anyone know a better way to solve this problem in Struts2?
Type Converters for Persistence
Create a type converter for the entity and then just pass user.account rather than user.account.id. This will invoke getUser().setAccount(account) and won't cause you the headaches.
When you update the record, just pass user.account as a hidden field in the form.
As for a widespread solution for your entities, you have a few options:
Multiple Converters
Create an abstract type converter that handles most of the logic so that you have a subclass-per-entity that is really lightweight. Register each converter in your xwork-conversion.properties.
Interface-Driven Converter
The approach that I use is that I have an interface called IdBasedJpaEntity which 99.9% of my entities implement. It defines a getId() method of type Integer. I then have a JpaDAORegistry singleton class that I create when my app starts. I register each of my entities with it and it constructs a single instance of each DAO (basically, a de-facto singleton). I have a map of entity class to DAO instance. This allows my type converter to look up the appropriate DAO instance for any given IdBasedJpaEntity, allowing me to have a single JpaEntityConverter class that works with any entity that implements the interface. This route is a little bit more work up front, but has proven highly reusable for me.
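A sketch of that converter (JpaDAORegistry comes from the description above, but its daoFor() and findById() methods are my assumed names):

import java.util.Map;
import org.apache.struts2.util.StrutsTypeConverter;

public class JpaEntityConverter extends StrutsTypeConverter {

    @Override
    @SuppressWarnings("rawtypes")
    public Object convertFromString(Map context, String[] values, Class toClass) {
        if (values == null || values.length == 0 || values[0].isEmpty()) {
            return null;
        }
        // Look up the DAO registered for this entity class and load by id.
        Integer id = Integer.valueOf(values[0]);
        return JpaDAORegistry.getInstance().daoFor(toClass).findById(id);
    }

    @Override
    @SuppressWarnings("rawtypes")
    public String convertToString(Map context, Object o) {
        Integer id = ((IdBasedJpaEntity) o).getId();
        return id == null ? "" : id.toString();
    }
}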
