I'm looking for some best practices for developing a clean domain object model. By 'clean', I mean a domain (or business) model that isn't cluttered up with a bunch of DB persistence, XML/JSON serialization/deserialization, and dependency injection stuff. For example, I've read through several 'how-to' tutorials about implementing a REST API. When they get to the point of implementing the 'model', they all end up with annotations about transforming from the 'pojo/poco' to the XML/JSON view via [XmlAttribute], or making a field more user-friendly in the UI via a [Display/DisplayType] attribute. The platform doesn't matter; I've seen the cluttering in the Java world too (I'm not familiar with other scripting languages).
I'm aware of the Data Transfer Object design pattern, as those objects could carry these attributes, but is this the only method? DTOs seem like they would require a lot of object mapping to/from the view to the business layer. If that's what it takes to have a clean domain layer, then great; I'm just looking for feedback.
Thanks
The simple truth is that all of that "annotation clutter" rose up out of a rejection of all the "XML clutter".
Taking both JPA and JAXB in Java as examples, all of those annotations can be replaced by external XML files describing the same metadata for the underlying frameworks. In both cases, the frameworks offer "OK" defaults for unannotated data, but the truth is few are really satisfied with the Convention-over-Configuration default mappings the frameworks offer, and thus more explicit configuration needs to be done.
And all of that configuration has to be captured somewhere, somehow.
For many folks and many applications, embedded metadata via annotations is cleaner and easier to use than the external XML mapping methods.
In the end, from a Java perspective, the domain models are "just" objects; the annotations have no bearing, in general, outside of the respective frameworks. But in truth, there's always some coupling with the frameworks, and they have a tendency to influence implementation details within the model. These aren't particularly glaring, but the simple fact is that when there are two ways to model something and one way is "more friendly" to the framework, for many that's enough to tilt the decision in that direction rather than fighting for purity above the framework.
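To make the trade-off concrete, here is a minimal sketch of what such embedded metadata looks like, using standard JPA and JAXB annotations. The Customer class, its table and its column names are purely illustrative, not from the question:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlRootElement;

// A domain object that is "just" an object, but carries persistence
// and serialization metadata for JPA and JAXB side by side.
@Entity
@Table(name = "customers")
@XmlRootElement
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "full_name", nullable = false, length = 100)
    @XmlAttribute
    private String name;

    // Outside JPA/JAXB the annotations are inert; the class behaves as a plain POJO.
    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

The same mappings could instead live in an external orm.xml or JAXB bindings file, which is exactly the XML-versus-annotations trade-off discussed above.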
We are, in my company, at the beginning of a huge refactoring to migrate from home made database access to Hibernate.
We want to do it cleanly, and therefore we will use entities, DAOs, etc.
We use Maven, and we will therefore have two Maven projects: one for the entities, one for the DAOs (is that good, or is it better to have both in the same project?).
Knowing that, our question is the following: our business layer will use the DAOs.
As most of the DAOs' methods return entities, our business layer will have to know about entities. And therefore, our business layer will have to know about Hibernate, as our entities will be Hibernate-annotated (or at least JPA-annotated).
Is this a problem? If yes, what is the solution to give the business layer the very minimum knowledge of the data layer?
Thank you,
Seb
Here is how I typically model the dependencies, along with the reasoning.
Let's distinguish 4 things:
a. the business logic
b. entities
c. DAO interfaces
d. DAO implementations
For me, the first three belong together and therefore belong in the same Maven module, and even in the same package. They are closely related, and a change in one will very likely cause a change in the others. And things that change together should be close together.
The implementation of the DAO is to a large extent independent of the business logic. Even more importantly, the business logic should NOT depend on where the data is coming from. It is a completely separate concern. So if your data comes today from a database and tomorrow from a web service, nothing should change in your business logic.
You are right: Hibernate (or JPA) annotations on the entities violate that rule to some extent. You have three options:
a. Live with it. While it creates a dependency on Hibernate artifacts, it does not create a dependency on any Hibernate implementation. So in most scenarios, having the annotations around is acceptable.
b. Use XML configuration. This will fix the dependency issue, but in my opinion at the rather hefty cost of dealing with XML-based configuration. Not worth it, in my opinion.
c. Don't use Hibernate. I don't think the dependency on annotations is the important problem you have to consider. The more serious problem is that Hibernate is rather invasive. When you navigate an object graph, Hibernate will trigger lazy loading, i.e. the execution of SQL statements at points that are not at all obvious from looking at the code. This basically means your data access code starts to leak into every part of the application if you are not careful. One can keep this contained, but it is not easy and requires great care and an in-depth understanding of Hibernate, which most teams don't have when they start with it. So Hibernate (or JPA) trades the simple but tedious task of writing SQL statements for the difficult task of creating a software architecture that keeps mostly invisible dependencies in check. I would therefore recommend avoiding Hibernate altogether and trying something simpler. I personally have high hopes for MyBatis, but haven't used it in real projects yet.
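To illustrate the invisible-SQL point from option c, here is a hedged sketch; the Order/OrderItem entities are hypothetical, but the behavior is standard JPA/Hibernate lazy loading:

```java
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import java.util.List;

@Entity
public class Order {
    @Id
    private Long id;

    // LAZY is the default for @OneToMany: 'items' is a Hibernate proxy collection.
    @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
    private List<OrderItem> items;

    public List<OrderItem> getItems() { return items; }
}

// Far away from any data-access code, this innocent-looking navigation...
//     int n = order.getItems().size();
// ...silently issues a SELECT if the Hibernate Session is still open, and
// throws LazyInitializationException if it has already been closed.
```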
More important than managing the dependencies between technical layers is, in my opinion, the separation of domain modules. And I'm not alone in that opinion.
I would use separate artifacts (i.e. Maven modules) only to separate things that you want to deploy independently. If, for example, you have a rich client and a backend server, two Maven artifacts for those, plus maybe a third for common code, make sense. For everything else I'd simply use packages and tests that fail when illegal dependencies get created. For those I use Degraph, but I'm the author of that, so I might be biased.
JSF and Spring are two different web frameworks. I would like to ask two questions to clear things up in my head:
What is the purpose of using these two frameworks together?
I have heard that JSF is for the view tier. So can we make a complex web application containing business logic with JSF alone?
Could someone explain? Thanks
Ans 1. We integrate two frameworks to exploit the best features of both of them.
In your case, JSF is one of the best frameworks for the view (UI) part, and Spring is good at maintaining beans because of its dependency injection (DI) feature.
Ans 2. The main goals of creating JSF were:
1. Create a standard UI component framework that can be leveraged by development tools to make it easier for tool users to both create high-quality UIs and manage the UI's connections to application behavior.
2. Define a set of simple, lightweight Java base classes for UI components, component state, and input events. These classes will address UI lifecycle issues, notably managing a component's persistent state for the lifetime of its page.
3. Provide a set of common UI components, including the standard HTML form input elements. These components will be derived from the simple set of base classes (outlined in #1) that can be used to define new components.
4. Provide a JavaBeans model for dispatching events from client-side UI controls to server-side application behavior.
5. Define APIs for input validation, including support for client-side validation.
6. Specify a model for internationalization and localization of the UI.
7. Provide for automatic generation of appropriate output for the target client, taking into account all available client configuration data, such as the browser version.
8. Provide for automatic generation of output containing required hooks for supporting accessibility, as defined by the Web Accessibility Initiative (WAI).
Yes, you can create a complex application with JSF alone, but it's a lot easier to use it with some other framework like Seam, Spring, etc.
Source: JSF: The Complete Reference
I am working on a project to provide a RESTful API for hospital-related data transactions. And I am using Jersey as the server-side framework.
However, apart from the accepted notion of dividing the code into resources, models and data access, I can't find information that provides helpful best practices on the subject.
Any useful suggestions?
I'll try to compile some best practices that I've learned into a few topics.
JPA and ORM
If you use an ORM, then use JPA. It helps to keep your ORM of choice and the application loosely coupled, i.e. you can easily switch between ORMs.
Dependency Injection
This is an awesome way, again, to keep your application as loosely coupled as possible. Use Guice or Spring. Basically, with this you can inject generic instances into your classes without coupling them to their concrete implementations.
This is useful with DAOs. You can inject a GenericDao (interface) into your JAX-RS classes, while the true implementation of it is a JpaDao, for instance.
Also, this is awesome for quickly switching to test environments. When testing some logic in your application, you probably don't want to use the database but just a dummy implementation of your GenericDao, for example. I consider using DAOs itself another important best practice.
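As a rough sketch of that idea (all class names here are hypothetical): the service depends only on a GenericDao interface, and either a JPA-backed or an in-memory implementation is supplied, by Guice/Spring in production or by hand in a test:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical DAO abstraction: service/JAX-RS classes depend only on this interface.
interface GenericDao<T> {
    void save(T entity);
    List<T> findAll();
}

// Dummy implementation for tests: no database, no ORM, just memory.
class InMemoryDao<T> implements GenericDao<T> {
    private final List<T> store = new ArrayList<>();
    public void save(T entity) { store.add(entity); }
    public List<T> findAll() { return new ArrayList<>(store); }
}

// In production, a JpaDao<T> implementing GenericDao<T> on top of an
// EntityManager would be bound instead (omitted here to stay self-contained).

// The service never names a concrete DAO; Guice/Spring (or a test) supplies one.
class PatientService {
    private final GenericDao<String> dao;
    PatientService(GenericDao<String> dao) { this.dao = dao; }
    void register(String patientName) { dao.save(patientName); }
    int count() { return dao.findAll().size(); }
}
```

Swapping InMemoryDao for the JPA-backed implementation then requires changing only the binding, not the service.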
Security
I have some questions about this on my profile, but basically use OAuth or HTTP Basic/Digest + SSL (HTTPS). It's a bit hard to accomplish security the way you want, surprisingly. You can use the security mechanisms your Servlet Container may provide or something internal to your application like Apache Shiro, Spring Security or even manually defining your security filters.
HATEOAS (and other REST constraints)
Most RESTful APIs aren't REST. People often misunderstand this: REST implies a set of constraints. When these constraints aren't met, it's simply an HTTP API, which is also OK. In any case, I advise you to link your resource representations so that the client can navigate through your API. This is called HATEOAS, and I've merely scratched the surface of the matter. Please read more about REST if you want a true REST API with all its benefits.
Maven
This is a special best practice, not related to the application itself but to its development. Indeed, Maven increases productivity a lot, especially due to its dependency management capabilities. I couldn't live without it.
I don't know if this information was useful to you. I hope it was.
If you need information about any other topic, I'll edit the answer if I know it.
In addition to the above answers: designing the resources so that HTTP verbs stay out of your base URLs, and carefully selecting the @PathParam, @QueryParam, @FormParam and @FormDataParam annotations, are things I strongly emphasize.
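As an illustrative sketch of those parameter annotations (the /patients resource and its parameters are hypothetical, not from the question):

```java
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// The noun lives in the URL; the HTTP verb (GET) carries the action.
@Path("/patients")
public class PatientResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getPatient(@PathParam("id") long id,                      // from the URL path
                             @QueryParam("page") @DefaultValue("1") int page) { // from ?page=...
        return "{\"id\": " + id + ", \"page\": " + page + "}";
    }
}
```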
For error handling, I return a Response object with HTTP response codes to convey the error to the client calling my API.
return Response.status(<HTTPErrorCode>).entity("Error msg here or my Error Bean object as an argument").build();
Using documentation tools like Swagger helps a lot in developer testing.
Brian Mulloy's Web API Design eBook and Vinay Sahni's post have been handy resources for me to review/correct my design.
I want to develop a simple ORM which performs CRUD functionality. Should I use reflection?
Do libraries like Hibernate use reflection?
Will using reflection cause speed to drop by a large extent?
Yes, Hibernate uses reflection and annotations (or XML configuration files), but it will only index and read all the meta information once (at startup). I would recommend, though, looking at the existing ORM solutions before you start rolling your own.
A simple approach is the DAO (Data Access Object) pattern. You can specify your CRUD operations very well with it.
For More ORM patterns or Methodology, read Martin Fowler's book: Patterns of Enterprise Application Architecture
Also, you can use the existing JPA (Java Persistence API) and write your own implementation of it.
Reflection, dynamic proxies, CGLIB, ASM, Javassist - all are used in ORM tools.
But you really don't want to create a new one, because you can't create a simple ORM. ORMs aren't simple to create, and you will realize it once you reach a certain point. So don't waste your time. Use an existing one. There are plenty: some more complicated, some less complicated (and less powerful).
You can google for "simple ORM" and you will have plenty of choices that are (more or less) easy to use. (But not to implement)
Well, not so long ago, I wrote an ORM layer for GAE named gaedo. This framework is modular enough to also fit relational databases. For what it's worth, it was my third attempt at such a job. So, here is what is needed and why.
Reflection is the root of all ORM mapping tools, since it allows you to explore classes, looking for their attribute names and values. That is its first use. It will also allow you to instantiate objects loaded from your datastore, provided your bean has a convenient constructor (usually, ORM frameworks rely upon Java Beans, since these beans guarantee a no-arg constructor exists). Finally, reflection will allow you to load values from the datastore into beans, which is, I think, the most important thing. Unfortunately, you'll quickly be faced with the issue of a query loading the whole database, which will require the next two steps.
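A bare-bones sketch of that first reflection step, in plain JDK Java (the User bean and the "row" map are illustrative stand-ins for a real datastore row):

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the reflection step an ORM performs: walk a bean's fields
// and collect column-name/value pairs. Class and field names are illustrative.
class ReflectiveMapper {
    static Map<String, Object> toRow(Object bean) {
        Map<String, Object> row = new LinkedHashMap<>();
        for (Field f : bean.getClass().getDeclaredFields()) {
            try {
                f.setAccessible(true);              // read private fields, as ORMs do
                row.put(f.getName(), f.get(bean));  // field name stands in for the column name
            } catch (IllegalAccessException e) {
                throw new IllegalStateException("cannot read field " + f.getName(), e);
            }
        }
        return row;
    }
}

class User {            // a plain bean, as ORM frameworks expect
    private long id = 42;
    private String name = "alice";
}
```

A real framework would also cache the Field lookups per class, since (as noted below) doing this on every call is where the performance cost bites.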
Considering graph loading, you'll quickly need to rely upon dynamic proxies to create lazily loadable objects. Obviously, if you rely solely upon the JDK, you will only be able to use them on objects implementing well-known interfaces (collections and maps, for example, are very good candidates for dynamic proxies implementing their interfaces).
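A minimal JDK-only sketch of such a proxy: the list contents are only materialized on first access (the loader lambda stands in for a real datastore query):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.function.Supplier;

// Lazy loading via a JDK dynamic proxy. As noted above, this only works
// for interface types such as List; names here are illustrative.
class LazyList {
    @SuppressWarnings("unchecked")
    static <T> List<T> lazy(Supplier<List<T>> loader) {
        return (List<T>) Proxy.newProxyInstance(
            LazyList.class.getClassLoader(),
            new Class<?>[] { List.class },
            new InvocationHandler() {
                private List<T> target;   // stays null until first access
                public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                    if (target == null) target = loader.get(); // "hit the datastore" here
                    return m.invoke(target, args);             // then delegate every call
                }
            });
    }
}
```

The first method call on the returned List triggers the loader; every later call delegates to the already-loaded target.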
Finally, annotations will be of lesser use. They'll allow you to define key elements (used to generate the database key for an object, as an example), define parent-child relationships, or even define the lazy-loading strategy, in association with the previously mentioned dynamic proxies.
This is an interesting, but mostly useless, research effort. Interesting, because it will teach you tons of concepts regarding reflection, proxies, and all those things people ignore and tend to consider reserved for so-called dynamic languages. But useless, because you'll always encounter corner cases that require hacking your code.
As Emmanuel Bernard said on "Les Cast Codeurs" (a French Java podcast), I think: each year, someone comes up with a "reimplementation" of Hibernate, and each year this implementation reveals itself to be lacking some important pieces, like transactions (local or distributed), cache handling, ...
So, try to code it, but never forget that it may soon be dropped due to too great an overlap with established frameworks.
To answer the last part of your question: yes, reflection is a serious performance hit. All the work that the compiler normally does for you instead has to be done at run time, so use reflection sparingly (cache classes, for example, so you only look them up once, preferably at startup).
I haven't looked through Hibernate's code, but I expect it uses reflection as well, as optimized as possible.
My recommendation is that you write a working dead-simple solution first, then start optimizing as you go along.
Try JLibs-JDBC.
This is a simple ORM which doesn't use reflection or XML configuration.
I've observed the strange fact (based on the questions in the hibernate tag) that people are still actively using XML files instead of annotations to specify their ORM (Hibernate/JPA) mappings.
There are a few cases, where this is necessary:
you are using classes that are provided, and you want to map them.
you are writing an API, whose domain classes can be used without a JPA provider, so you don't want to force a JPA/Hibernate dependency.
But these are not common cases, I think.
My assumptions are:
people are used to xml files and don't feel comfortable / don't want to bother learning to use the annotation approach.
Java pre-1.5 is forced upon the project and there is nothing to do about it
people don't know that annotations are a full-featured replacement of xml mapping.
legacy systems are supported and hence changing the approach is considered risky
people fear that mixing annotations (meta-information) with their classes is wrong.
Any other possible explanations?
The domain layer and the persistence layer are considered by some to be separate concerns. Using the pure XML approach keeps the two layers as loosely coupled as possible; using annotations couples the two layers more tightly as you have persistence-related code embedded in the domain code.
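As an illustration of that pure-XML approach, here is a fragment of a classic hbm.xml mapping (the class and column names are made up): the persistence metadata lives entirely outside the Java source, so the Customer class needs no Hibernate or JPA imports at all:

```xml
<!-- Illustrative Hibernate mapping file, e.g. Customer.hbm.xml -->
<hibernate-mapping>
  <class name="com.example.domain.Customer" table="customers">
    <id name="id" column="id">
      <generator class="identity"/>
    </id>
    <property name="name" column="full_name" not-null="true"/>
  </class>
</hibernate-mapping>
```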
Lack of an overview of what's been mapped. You need to dig into the source code.
people don't know that annotations are a full-featured replacement of xml mapping.
Ah, but they're not. Three cases off the top of my head (there are probably more) that you can't do (well) with annotations:
Use a formula as part of an association key (admittedly, rather esoteric).
Join-via-subselect - @Loader is not an adequate replacement. Not too common, but quite useful. Envers provides a viable alternative approach.
Losing column order for schema generation. This one's an absolute killer. I understand why it's done this way, but it still annoys me no end.
Don't get me wrong, though - annotations are great; doubly so when they're coupled with Validator (though, again, #3 above kills the buzz on this one). They also provide certain aspects of functionality that XML mappings do not.
Using XML to complement the annotations, where environment or system specific configuration is needed.
Some information is carried nicely in annotations, such as the cardinality of relationships between entities. These annotations provide more detail about the model itself, rather than how the model relates to something else.
However, bindings, whether to a persistence store or XML or anything else, are extrinsic to the model. They change depending on the context in which the model is used. Including them in the model is as bad as using inline style definitions in HTML. I use external binding (usually—though not necessarily—XML) documents for the same reasons I reference an external CSS.
I initially found the annotation syntax very weird. It looks like line noise and mixes in with where I usually put comments. It's vastly better than dealing with the XML files, though, because all of the changes are in one place: the model file. Perhaps one limitation of annotations is possible collision with other annotations, but I haven't seen that yet.
I think the real reason that it isn't used more is that it isn't really considered the default. You have to use an additional jar file. It should be part of core and the XML approach should be the optional one.
I've switched to annotations, but sometimes I miss the XML mappings, mainly because the documentation was so much more comprehensive, with examples of many scenarios. With annotations, I stick to pretty basic mappings (which is great if you control the data and object model), but I've done some very complex things in the XML that I don't know if I could replicate in the annotations.
So what if you want to deploy your class to multiple datastores? Do you still want to annotate column definitions into it? Different datastores have different conventions, etc., and XML is the only sane place in that situation: you can have one mapping file for MySQL, one for Derby, and one for Oracle or whatever. You can still put the basic persistence/relation annotations in if you wish, but the schema-specific stuff would go into the XML in that case.
--Andy (DataNucleus)
I have a new one: http://www.summerofnhibernate.com/
A very nice screencast series, not yet covering annotations. I have written some apps with it to learn the basics, not for my job but out of curiosity, but have never migrated to annotations yet. The series was suggested as still relevant on SO. I will still migrate to annotations if I get some more spare time, but for the time being I could be one of the persons asking questions about it.
I worked on a project where the database would change very frequently, and we had to regenerate the Java files and configuration files each time that happened. We do not actually use all the relationships and configurations generated by the Hibernate tool; basically, we use the tool and then modify/tweak them.
So when you want to modify/tweak the default configurations, it is easier to do in the XML file than through annotations.
I feel that it makes the code much more readable if we do not use annotations. The use of annotations can really help if the configuration info changes frequently, but take the case of web.xml: how many times does the info in that change? So why use annotations for servlets?
We continue to use XML because, typically for deployed sites, getting a patch (binary code) approved for installation takes time that you may not have. Updates to ASCII files (e.g. XML files) are considered configuration changes and not patches...