Architecture and project organization [closed] - java

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
We are, in my company, at the beginning of a huge refactoring to migrate from home-made database access to Hibernate.
We want to do it cleanly, and we will therefore use entities, DAOs, etc.
We use Maven, and we will therefore have two Maven projects: one for the entities, one for the DAOs (is that good, or is it better to have both in the same project?).
Knowing that, our question is the following: our business layer will use the DAOs.
As most of the DAOs' methods return entities, our business layer will have to know about entities. And therefore our business layer will have to know about Hibernate, as our entities will be Hibernate-annotated (or at least JPA-annotated).
Is this a problem? If yes, what is the solution to give the business layer the very minimum knowledge about the data layer?
Thank you,
Seb

Here is how I typically model the dependencies, along with the reasoning.
Let's distinguish 4 things:
a. the business logic
b. entities
c. DAO interfaces
d. DAO implementations
For me the first three belong together and therefore belong in the same Maven module, AND even in the same package. They are closely related, and a change in one will very likely cause a change in the others. And things that change together should be close together.
The implementation of the DAO is to a large extent independent of the business logic. Even more important, the business logic should NOT depend on where the data is coming from; it is a completely separate concern. So if your data comes from a database today and from a web service tomorrow, nothing should change in your business logic.
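A minimal sketch of that split (all names here are illustrative, not from the question): the DAO interface lives next to the business logic, while implementations live elsewhere and can be swapped without touching the business code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Lives next to the business logic and the entity.
interface CustomerDao {
    Optional<Customer> findById(long id);
}

class Customer {
    final long id;
    final String name;
    Customer(long id, String name) { this.id = id; this.name = name; }
}

// Business logic: no hint of Hibernate, JDBC, or web services.
class GreetingService {
    private final CustomerDao dao;
    GreetingService(CustomerDao dao) { this.dao = dao; }
    String greet(long customerId) {
        return dao.findById(customerId)
                  .map(c -> "Hello, " + c.name)
                  .orElse("Hello, stranger");
    }
}

// One possible implementation; a HibernateCustomerDao in another module
// could replace it without changing GreetingService at all.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<Long, Customer> store = new HashMap<>();
    void save(Customer c) { store.put(c.id, c); }
    public Optional<Customer> findById(long id) { return Optional.ofNullable(store.get(id)); }
}
```

Because the business layer sees only `CustomerDao`, the data source can change from database to web service behind that interface.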
You are right, Hibernate (or JPA) annotations on the entities violate that rule to some extent. You have three options:
a. Live with it. While it creates a dependency on Hibernate artifacts, it does not create a dependency on any Hibernate implementation. So in most scenarios, having the annotations around is acceptable.
b. Use XML configuration. This fixes the dependency issue, but in my opinion at the rather hefty cost of dealing with XML-based configuration. Not worth it, in my opinion.
c. Don't use Hibernate. I don't think the dependency on annotations is the important problem you have to consider. The more serious problem is that Hibernate is rather invasive. When you navigate an object graph, Hibernate will trigger lazy loading, i.e. the execution of SQL statements at points that are not at all obvious from looking at the code. This basically means your data access code starts to leak into every part of the application if you are not careful. One can keep this contained, but it is not easy and requires great care and an in-depth understanding of Hibernate that most teams don't have when they start with it. So Hibernate (or JPA) trades the simple but tedious task of writing SQL statements for the difficult task of creating a software architecture that keeps mostly invisible dependencies in check. I would therefore recommend avoiding Hibernate altogether and trying something simpler. I personally have high hopes for MyBatis, but haven't used it in real projects yet.
More important than managing the dependencies between technical layers is, in my opinion, the separation of domain modules. And I'm not alone with that opinion.
I would use separate artifacts (i.e. Maven modules) only to separate things that you want to deploy independently. If you have, for example, a rich client and a backend server, two Maven artifacts for those, plus maybe a third one for common code, make sense. For everything else I'd simply use packages, plus tests that fail when illegal dependencies get created. For those I use Degraph, but I'm the author of that, so I might be biased.

Related

Best practices for clean domain model [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I'm looking for some best practices for developing a clean domain object model. By 'clean', I mean a domain (or business) model that isn't cluttered up with a bunch of DB persistence, XML/JSON serialization/deserialization, or dependency injection stuff. For example, I've read through several 'how-to' tutorials about implementing a REST API. When they get to the point of implementing the 'model', they all end up with annotations transforming the 'pojo/poco' to the XML/JSON view via [XmlAttribute], or making a field more user-friendly in the UI via a [Display/DisplayType] attribute. The platform doesn't matter; I've seen the cluttering in the Java world (I'm not familiar with other languages).
I'm aware of the Data Transfer Object design pattern, as those objects could carry these attributes, but is this the only method? DTOs seem like they would require a lot of object mapping between the view and the business layer. If that's what it takes to have a clean domain layer, then great; I'm just looking for feedback.
Thanks
The simple truth is that all of that "annotation clutter" rose up out of a rejection of all the "XML clutter".
Taking both JPA and JAXB in Java as examples, all of those annotations can be replaced by external XML files describing the same metadata for the underlying frameworks. In both cases, the frameworks offer "ok" defaults for unannotated data, but the truth is that few are really satisfied with the convention-over-configuration default mappings the frameworks offer, and thus more explicit configuration needs to be done.
And all of that configuration has to be captured somewhere, somehow.
For many folks and many applications, embedded metadata via annotations is cleaner and easier to use than the external XML mapping methods.
In the end, from a Java perspective, the domain models are "just" objects; the annotations have no bearing, in general, outside of the respective frameworks. But in truth there's always some coupling with the frameworks, and they have a tendency to influence implementation details within the model. These aren't particularly glaring, but the simple fact is that when there are two ways to model something and one way is "more friendly" to the framework, for many that's enough to tilt the decision in that direction rather than fighting for purity above the framework.

Is package by feature approach good? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Recently I came across this javalobby post http://java.dzone.com/articles/how-changing-java-package on packaging java code by feature.
I like the idea, but I have a few questions on this approach. I asked my question but didn't get a satisfactory reply. I hope someone on Stack Overflow can clarify my questions.
I like the idea of package by feature, which greatly reduces the time spent moving across packages while coding, since all the related stuff is in one place (package). But what about interactions between the services in different packages?
Suppose we are building a blog app and we put all user-related operations (controllers/services/repositories) in the com.mycompany.myblog.users package, and all blog-post-related operations (controllers/services/repositories) in the com.mycompany.myblog.posts package.
Now I want to show User Profile along with all the posts that he posted. Should I call myblog.posts.PostsService.getPostsByUser(userId) from myblog.users.UserController.showUserProfile()?
What about coupling between packages?
Also, wherever I read about package by feature, everyone says it's a good practice. Then why do many book authors and even frameworks encourage grouping by layers? Just curious to know :-)
Take a look at Uncle Bob's Package Design Principles. He explains the reasons and motivations behind those principles, which I have elaborated on below:
Classes that get reused together should be packaged together, so that the package can be treated as a sort of complete product available to you. And those which are reused together should be separated from the ones they are not reused with. For example, your logging utility classes are not necessarily used together with your file IO classes, so package the logging classes separately. But the logging classes can be related to one another, so create a sort of complete product for logging (say, for want of a better name, commons-logging) packaged in a (re)usable jar, and another separate complete product for IO utilities (again for want of a better name, say commons-io.jar).
If you update the commons-io library, say to support Java NIO, you do not necessarily want to make any changes to the logging library. So separating them is better.
Now, let's say you wanted your logging utility classes to support structured logging for some sort of log analysis by tools like Splunk. Some clients of your logging utility may want to update to your newer version; others may not. So when you release a new version, package together all classes which are needed and reused together for migration. That way, some clients of your utility classes can safely delete your old commons-logging jar and move to the commons-logging-new jar, while other clients stay with the older jar. However, no client is forced to have both jars (new and old) just because you made them use some classes from the older packaged jar.
Avoid cyclic dependencies: a depends on b; b on c; c on d; but d depends on a. Such a scenario is obviously undesirable, as it becomes very difficult to define layers or modules, and you cannot vary them independently relative to each other.
Also, you could package your classes such that if a layer or module changes, other modules or layers do not necessarily have to change. So, for example, if you decide to move from an old MVC framework to a REST API, then only the view and controller may need changes; your model does not.
I personally like the "package by feature" approach, although you do need to apply quite a lot of judgement on where to draw the package boundaries. It's certainly a feasible and sensible approach in many circumstances.
You should probably achieve coupling between packages and modules using public interfaces - this keeps the coupling clean and manageable.
It's perfectly fine for the "blog posts" package to call into the "users" package as long as it uses well designed public interfaces to do so.
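A minimal sketch of that cross-package call, with hypothetical names (the interface and method shapes below are assumptions, not from the question): the users package depends only on a small public interface exposed by the posts package.

```java
import java.util.Arrays;
import java.util.List;

// Public interface of the posts feature (conceptually in com.mycompany.myblog.posts).
interface PostService {
    List<String> getPostTitlesByUser(long userId);
}

// Controller in the users feature (conceptually in com.mycompany.myblog.users).
// It knows only the PostService interface, not its implementation.
class UserProfileController {
    private final PostService posts;
    UserProfileController(PostService posts) { this.posts = posts; }
    String showUserProfile(long userId) {
        return "Posts: " + String.join(", ", posts.getPostTitlesByUser(userId));
    }
}
```

The coupling is limited to one interface, which keeps the feature packages independently testable and replaceable.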
One big piece of advice though, if you go down this approach: be very thoughtful about your dependencies, and in particular avoid circular dependencies between packages. A good design should look like a dependency tree, with the higher-level areas of functionality depending on a set of common services which depend upon libraries of utility functions, etc. To some extent, this will start to look like architectural "layers", with front-end packages calling into back-end services.
There are many other aspects of package design besides coupling. I would suggest looking at OOAD principles, especially the package design principles:
REP (Release Reuse Equivalency Principle): the granule of reuse is the granule of release.
CCP (Common Closure Principle): classes that change together are packaged together.
CRP (Common Reuse Principle): classes that are used together are packaged together.
ADP (Acyclic Dependencies Principle): the dependency graph of packages must have no cycles.
SDP (Stable Dependencies Principle): depend in the direction of stability.
SAP (Stable Abstractions Principle): abstractness increases with stability.
For more information, you can read the book "Agile Software Development, Principles, Patterns, and Practices".

Program Design - Package by Feature vs. Layer or Both?

I am in the design stage of a web application that allows users to create requests of work and the workers to put time against those requests. The application will also have reporting capabilities for supervisors to get daily totals, reports, and account for time spent, "cost allocation".
Applications I've worked on in the past were designed using the package-by-layer approach. I'm thinking it would be more efficient to use a package-by-feature design, and I have a question about this design.
What I am currently thinking for the packages by feature:
Requests - CRUD the requests, assign them, add invoice numbers, etc.
Work Time - CRUD daily time for users against requests, holiday, training, or meetings
Cost Allocation - create reports, accounting things that accountants want ...
The front-end will be a Tomcat server and JSP, and the back-end will be an Oracle database with EclipseLink doing the persistence.
My question:
In my understanding of package by feature, the entities and DAOs would go into the package associated with them, spreading the persistence layer across several packages and leaving packages to call entities from other packages. With all of that overlap, is this really functional? There would be no isolation between the packages. What are the pros and cons of using package by feature? Would it be good design to go with an additional persistence layer? Or do I have this understanding totally wrong?
5 years later...
(Suspenseful music in the background)
Imagine this ridiculous situation:
a Managers company, a Programmers company, a Human Resources company and a Marketing company, where the Programmers company has only programmers and no managers, marketeers or human resources.
We wouldn't want to split co-workers by their profession instead of organizing (self-coordinating) teams, would we?
Packaging stuff together by what it is, rather than by what it does, will only make you jump ten times to the place you are looking for.
Now doesn't that just look sexy? By looking at a feature-based structure, you can already tell what the app is all about. Not satisfied? Read the full article.
I would suggest starting to package things based on business entities. Within each of those, you can divide things based on layers.
With all of the overlap is this really functional?
I have been practicing it for a long time. I don't see any major issues with this approach. You must find out what to decouple and how much it should be decoupled. For example, calling a persistence method of orders from a customer package, using the API provided by orders, is pretty fine with me.
What are the pros and cons to using package by feature?
I find it simpler, more direct, more understandable and easier to work with than strict layer-oriented packaging. It also helps when you want to split things up and distribute them to different places.
Would it be good design to go with an additional persistence layer?
Look at this SO thread; I found that JPA, and the like, don't encourage the DAO pattern.
Further Reading
Generic Repository and DDD
If I had to choose between the two, package by feature vs. package by layer, I would choose package by layer, for several reasons:
In a layered architecture the interfaces/dependencies should be clearly defined between layers; appropriate packaging will quickly highlight whether you are introducing unwanted dependencies.
It isolates dependencies in one layer (for example persistence using Oracle) from your other layers.
I find it cleaner to think of each layer in isolation
But to answer your question of features vs. layers or both: I would say both; package primarily by layers, then by features.

developing a simple orm [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I want to develop a simple ORM which performs CRUD functionality. Should I use reflection?
Do libraries like Hibernate use reflection?
Will using reflection cause the speed to drop by a large extent?
Yes, Hibernate uses reflection and annotations (or XML configuration files), but it will only index and read all the meta information once (at startup). I would recommend, though, looking at the existing ORM solutions before you start rolling your own.
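A minimal sketch of that "read the meta information once" idea (the class and method names are illustrative, not Hibernate's): field metadata is discovered by reflection on first access and then served from a cache, so the reflective scan is paid only once per class.

```java
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Caches per-class "column" metadata so reflection runs once per class.
class MetadataCache {
    private static final Map<Class<?>, List<String>> COLUMNS = new ConcurrentHashMap<>();

    static List<String> columnsOf(Class<?> type) {
        // computeIfAbsent does the reflective scan only on the first call.
        return COLUMNS.computeIfAbsent(type, t ->
            Arrays.stream(t.getDeclaredFields())
                  .map(Field::getName)
                  .collect(Collectors.toList()));
    }
}

// A bean the "framework" might map.
class Person { String name; int age; }
```

Repeated lookups return the same cached list, which is why the startup scan dominates and per-operation reflection cost can stay low.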
A simple alternative is the DAO (Data Access Object) pattern, with which you can specify your CRUD operations very well.
For More ORM patterns or Methodology, read Martin Fowler's book: Patterns of Enterprise Application Architecture
Also, you can look at the existing JPA (Java Persistence API) and write your own implementation of it.
Reflection, dynamic proxies, cglib, ASM, Javassist - all are used in ORM tools.
But you really don't want to create a new one, because you can't create a simple ORM. ORMs aren't simple to create, and you will realize it once you reach a certain point. So don't waste your time; use an existing one. There are plenty, some more complicated, some less complicated (and less powerful).
You can google for "simple ORM" and you will have plenty of choices that are (more or less) easy to use (but not to implement).
Well, not so long ago I wrote an ORM layer for GAE named gaedo. This framework is modular enough to also fit relational databases. Fortunately, it was my third attempt at such a job. So, here is what is needed and why.
Reflection is the root of all ORM tools, since it allows you to explore classes looking for their attribute names and values. That is the first use. It will also allow you to instantiate objects from your datastore, provided your bean has a convenient constructor (usually ORM frameworks rely upon Java Beans, since these beans ensure a no-arg constructor exists). Finally, reflection will allow you to load values from the datastore into beans, which is, I think, the most important thing. Unfortunately, you'll quickly be faced with the issue of a query loading the whole database, which leads to the next two steps.
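This first step can be sketched in a few lines (the mapper name and the map-based "row" are assumptions for illustration): reflection creates the bean through its no-arg constructor and pushes each column value into the matching field.

```java
import java.lang.reflect.Field;
import java.util.Map;

// Fills a bean from a datastore "row" represented as a column-name -> value map.
class RowMapper {
    static <T> T map(Class<T> type, Map<String, Object> row) throws Exception {
        T bean = type.getDeclaredConstructor().newInstance(); // needs a no-arg constructor
        for (Map.Entry<String, Object> col : row.entrySet()) {
            Field f = type.getDeclaredField(col.getKey());
            f.setAccessible(true);   // also reach private fields
            f.set(bean, col.getValue());
        }
        return bean;
    }
}

// A plain bean; the implicit no-arg constructor is what RowMapper relies on.
class Book {
    private String title;
    private int pages;
    String title() { return title; }
    int pages() { return pages; }
}
```

Real frameworks add type conversion, error handling and metadata caching on top, but the core mechanism is this.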
Considering graph loading, you'll quickly need to rely upon dynamic proxies to create lazily loadable objects. Obviously, if you rely solely upon the JDK, you will only be able to use them on objects implementing well-known interfaces (as an example, collections and maps are very good candidates for dynamic proxies implementing their interfaces).
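A minimal sketch of such a lazily loaded collection using only the JDK (the helper name is made up): a `java.lang.reflect.Proxy` stands in for the `List` and defers the expensive load until the first method call, just as an ORM association would defer its SQL.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

// Wraps a loader in a List proxy; the loader runs on first use only.
class Lazy {
    @SuppressWarnings("unchecked")
    static <T> List<T> list(Supplier<List<T>> loader) {
        return (List<T>) Proxy.newProxyInstance(
            Lazy.class.getClassLoader(),
            new Class<?>[] { List.class },
            new InvocationHandler() {
                private List<T> target; // filled in lazily
                public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                    if (target == null) target = loader.get(); // the hidden "SQL" would run here
                    return m.invoke(target, args);
                }
            });
    }
}
```

This also shows why lazy loading is invisible at the call site: `list.size()` looks like a plain method call, yet it may trigger a datastore access.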
Finally, annotations will be of smaller use. They'll allow you to define key elements (used to generate the datastore key for an object, as an example), define parent-child relationships, or even define the lazy-loading strategy, in association with the previously mentioned dynamic proxies.
This is an interesting, but mostly useless, research effort. Interesting because it will teach you tons of concepts regarding reflection, proxies, and all those things people ignore and tend to consider reserved for so-called dynamic languages.
But useless, because you'll always encounter corner cases that require hacking your code.
As Emmanuel Bernard said on "Les Cast Codeurs" (a French Java podcast), I think, each year someone comes up with a "reimplementation" of Hibernate. And each year this implementation turns out to be lacking some important pieces, like transactions (local or distributed), cache handling, ...
So, try to code it, but never forget that it may soon be dropped because of too great an overlap with established frameworks.
To answer the last part of your question: yes, reflection is a serious performance hit. All the work that the compiler normally does for you instead has to be done at run time, so use reflection sparingly (cache class metadata, for example, so you only build it once, preferably at startup).
I haven't looked through Hibernate's code, but I expect it uses reflection as well, optimized as much as possible.
My recommendation is that you write a working dead-simple solution first, then start optimizing as you go along.
Try JLibs-JDBC.
This is a simple ORM which doesn't use reflection or XML configuration.

Why are people continuing to use xml mapping files instead of annotations? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I've observed the strange fact (based on the questions in the hibernate tag) that people are still actively using xml files instead of annotations to specify their ORM (Hibernate/JPA) mappings.
There are a few cases where this is necessary:
you are using classes that are provided, and you want to map them.
you are writing an API, whose domain classes can be used without a JPA provider, so you don't want to force a JPA/Hibernate dependency.
But these are not common cases, I think.
My assumptions are:
people are used to xml files and don't feel comfortable / don't want to bother learning to use the annotation approach.
Java pre-1.5 is forced upon the project and there is nothing to do about it
people don't know that annotations are a full-featured replacement of xml mapping.
legacy systems are supported and hence changing the approach is considered risky
people fear that mixing annotations (meta-information) with their classes is wrong.
Any other possible explanations?
The domain layer and the persistence layer are considered by some to be separate concerns. Using the pure XML approach keeps the two layers as loosely coupled as possible; using annotations couples the two layers more tightly as you have persistence-related code embedded in the domain code.
Lack of overview of what's been mapped. You need to dig into the source code.
people don't know that annotations are a full-featured replacement of xml mapping.
Ah, but they're not. Three cases off the top of my head (there are probably more) that you can't do (well) with annotations:
Use a formula as part of an association key (admittedly rather esoteric).
Join-via-subselect - @Loader is not an adequate replacement. Not too common but quite useful. Envers provides a viable alternative approach.
Losing column order for schema generation. This one's an absolute killer. I understand why it's done this way, but it still annoys me to no end.
Don't get me wrong, though - annotations are great; doubly so when they're coupled with Validator (though, again, #3 above kills the buzz on this one). They also provide certain aspects of functionality that XML mappings do not.
Using XML to complement the annotations, where environment or system specific configuration is needed.
Some information is carried nicely in annotations, such as the cardinality of relationships between entities. These annotations provide more detail about the model itself, rather than how the model relates to something else.
However, bindings, whether to a persistence store or XML or anything else, are extrinsic to the model. They change depending on the context in which the model is used. Including them in the model is as bad as using inline style definitions in HTML. I use external binding (usually—though not necessarily—XML) documents for the same reasons I reference an external CSS.
I initially found the annotation syntax very weird. It looks like line noise and mixes in with where I usually put comments. It's vastly better than dealing with the XML files, though, because all of the changes are in one place, the model file. Perhaps one limitation of annotations is possible collision with other annotations, but I haven't seen that yet.
I think the real reason that it isn't used more is that it isn't really considered the default. You have to use an additional jar file. It should be part of core and the XML approach should be the optional one.
I've switched to annotations, but sometimes I miss the XML mappings, mainly because the documentation was so much more comprehensive, with examples of many scenarios. With annotations, I stick to pretty basic mappings (which is great if you control the data and object model), but I've done some very complex things in the XML that I don't know if I could replicate in the annotations.
So what if you want to deploy your class to multiple datastores? Do you really want to annotate column definitions into it? Different datastores have different conventions, etc., and XML is the only sane option in that situation, since you can have one file for MySQL, one for Derby, one for Oracle, or whatever. You can still put the basic persistence/relation annotations in if you wish, but the schema-specific stuff would go into XML in that case.
--Andy (DataNucleus)
I have a new one: http://www.summerofnhibernate.com/
A very nice screencast series, not yet covering annotations. I have written some apps with it to learn the basics, not for my job but out of curiosity, but I have never migrated to annotations yet. The series was suggested as still relevant on SO. I will still migrate to annotations if I get some more spare time, but for the time being I could be one of the persons asking questions about it.
I worked on a project where the database changed very frequently, and we had to regenerate the Java files and configuration files each time that happened. We did not actually use all the relationships and configurations generated by the Hibernate tool; basically we used the tool and then modified/tweaked the output.
So when you want to modify/tweak the default configurations, it is easier to do so in the XML file than through annotations.
I feel that it makes the code much more readable if we do not use annotations. Annotations can really help if the configuration info changes frequently; but take the case of web.xml: how many times does the info in that change? So why use annotations for servlets?
We continue to use XML because, typically for deployed sites, getting a patch (binary code) approved for installation takes time that you may not have. Updates to ASCII files (e.g. XML files) are considered configuration changes and not patches...
