
Maintaining Clean Architecture in Spring MVC with a data-centric approach
I'm trying to map out the architecture for the front-end of a new Java-based web app (portal type application) we are making at work. I want to get this right from day one, and I would like to kick off a discussion here to help me implement Uncle Bob's Clean Architecture in my architectural design.
Here's a quick run-down of our tech stack, top to bottom (the technology isn't important, the structure is):
Oracle Database
Oracle Service Bus exposing services using WSDLs
JAX-WS generated Java-classes from the WSDLs (let's call this the "generated service layer")
A Domain module consisting of POJOs mapped to the generated data objects
A Consumer-module exposing the "generated service layer" to the front-end application
A Spring MVC based front-end module using FreeMarker to render the views
A key point:
In particular, the name of something declared in an outer circle must not be mentioned by the code in an inner circle. That includes functions, classes, variables, or any other named software entity.
Attempting to adhere to Bob's Clean Architecture, I've gone back and forth a bit with myself regarding where to place the application logic, namely the "Use Case"-layer in his architecture.
Here is the approach I've come up with:
Layer 1 - Entities
Entities encapsulate Enterprise wide business rules.
This is where our Domain module containing the domain objects lives; these are self-contained objects with minimal dependencies on each other. Only logic pertaining to the objects themselves may live on these domain objects, and no use-case specific logic.
Access to our database is exposed via WSDLs using a service bus that transforms the data, as opposed to an ORM like JPA or Hibernate. Because of this, we do not have "entities" in the traditional sense (with Ids), but a data-centric approach making this layer a data access layer, presented to the rest of the application by the Consumer-module.
Layer 2 - Use Cases
The software in this layer contains application specific business rules.
This is where logic specific to our application's use cases lives. Changes to this layer should not affect the data access layer (layer 1). Changes to the GUI or framework implementation (Spring MVC) should not affect this layer.
This is where it gets a little tricky:
Since our data access layer (in layer 1) must be kept clean of application logic, we need a layer that facilitates use of that layer in a fashion that suits the use cases. One solution I've found to this problem is to use a variant of the "MVVM-pattern" that I choose to call MVC-VM. See below for an explanation. The "VM"-part of this lives in this Use Case-layer, represented by *ViewModel-classes that encapsulate this Use Case-specific logic.
Layer 3 - Interface Adapters
The software in this layer is a set of adapters that convert data from the format most convenient for the use cases and entities, to the format most convenient for some external agency such as the Database or the Web.
This is where the MVC-architecture of our GUI lives (the "MVC" in our "MVC-VM"). Essentially this is where the Controller-classes get data from the *ViewModel-classes and put it in Spring MVC's ModelMap objects that are used directly by the FreeMarker-templates in the View.
The way I see it, the service bus would in our case also fall under this layer.
Layer 4 - Frameworks and Drivers
Generally you don’t write much code in this layer other than glue code that communicates to the next circle inwards.
This layer is really just a configuration-layer in our application, namely the Spring configuration. This would for example be where we specify that FreeMarker is used to render the view.
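For illustration, the kind of glue that lives here might be a Spring configuration along these lines. This is only a sketch with made-up class and path names; the actual setup could just as well be done in XML:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.view.freemarker.FreeMarkerConfigurer;
import org.springframework.web.servlet.view.freemarker.FreeMarkerViewResolver;

// Frameworks & Drivers layer: nothing but wiring, no application logic.
@Configuration
public class ViewConfiguration {

    // Tells FreeMarker where the .ftl templates live (path is an example).
    @Bean
    public FreeMarkerConfigurer freeMarkerConfigurer() {
        FreeMarkerConfigurer configurer = new FreeMarkerConfigurer();
        configurer.setTemplateLoaderPath("/WEB-INF/templates/");
        return configurer;
    }

    // Resolves logical view names returned by controllers to FreeMarker templates.
    @Bean
    public FreeMarkerViewResolver viewResolver() {
        FreeMarkerViewResolver resolver = new FreeMarkerViewResolver();
        resolver.setSuffix(".ftl");
        return resolver;
    }
}
```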
Model View ViewModel Pattern
MVVM facilitates a clear separation of the development of the graphical user interface (either as markup language or GUI code) from the development of the business logic or back end logic known as the model (also known as the data model to distinguish it from the view model). The view model of MVVM is a value converter meaning that the view model is responsible for exposing the data objects from the model in such a way that those objects are easily managed and consumed.
More on the MVVM-pattern at Wikipedia.
The MVC-VM roles would be fulfilled in our application like so:
Model - represented simply by the ModelMap data structure in Spring MVC that is used by the view templates.
View - FreeMarker templates
Controller - Spring's Controller-classes that direct HTTP URL requests to specific handlers (and as such function as a FrontController). The handlers in these classes are responsible for fetching data from the use case-layer and pushing it out to the view templates when showing data (HTTP GET), as well as sending data down for storing (HTTP POST). This way the controller essentially functions as a binder between the ViewModel and the View, using the Model.
ViewModel - These classes are responsible for 1) structuring data from the data access layer in a fashion that is usable by the View and 2) handling data input from the View, which means validating it and breaking it down so that it can be sent down the stack for storing. This layer would take form as <UseCase>VM-classes in a viewmodel package in our Spring MVC front-end module.
A key component here is the implicit binding that happens in Spring MVC between ModelMap and the FreeMarker-templates. The templates only use the model (ModelMaps), into which the controller has put the data in a format the view can use. That way we can make templates like so:
<body>
<h1>Welcome ${user}!</h1>
<p>Our latest product: ${latestProduct.name}!</p>
</body>
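For context, the handler that fills that model could look roughly like this. WelcomeViewModel, Product and the attribute names are invented for the example, standing in for whatever the use case layer actually exposes:

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class WelcomeController {

    // Hypothetical collaborator (the "VM" part in MVC-VM).
    private final WelcomeViewModel welcomeViewModel;

    public WelcomeController(WelcomeViewModel welcomeViewModel) {
        this.welcomeViewModel = welcomeViewModel;
    }

    @RequestMapping(value = "/welcome", method = RequestMethod.GET)
    public String welcome(ModelMap model) {
        // The controller binds ViewModel output into the ModelMap the template reads.
        model.addAttribute("user", welcomeViewModel.currentUserName());
        model.addAttribute("latestProduct", welcomeViewModel.latestProduct());
        return "welcome"; // resolved to welcome.ftl by the view resolver
    }
}

// Hypothetical types standing in for the real use case layer.
interface WelcomeViewModel {
    String currentUserName();
    Product latestProduct();
}

class Product {
    private final String name;
    Product(String name) { this.name = name; }
    public String getName() { return name; } // read by the template as ${latestProduct.name}
}
```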
I apologize for the verbose explanation, but I could not explain this (relatively simple) architecture in fewer words.
I would greatly appreciate some input on my approach here - am I on the right track? Does the MVC-VM thing make sense? Am I violating any Clean Architecture Principles?
There are of course many solutions to this, but I am trying to find a solution that is 1) not over-engineered and 2) adheres to the principles of Bob's Clean Architecture.
Update:
I think the key issue that puts me off here is what form the "Use case" layer takes in this application. Remember we have an MVC front-end that gets data from a data access layer. If the MVC part fits in Bob's "Interface Adapters", and the domain models of the data layer fit in Bob's "Entities" layer, then what do I call the use case classes that implement application logic? I am tempted to just call them <UseCase>Models and put them in the MVC project, but according to Bob
The models are likely just data structures that are passed from the controllers to the use cases, and then back from the use cases to the presenters and views.
so that means my model objects should be "dumb" (like a simple Map, e.g. ModelMap in Spring) and it is then the responsibility of the controller to put data from the Use Case class into this Map-structure.
So again, what form does my Use Case-classes take? How about <UseCase>Interactor?
But in conclusion I realize that the MVC-VM thing is over-engineering (or simply incorrect) - as "mikalai" indicates below, this is essentially just a two-layer application in its current form: a data access layer and a front-end MVC layer. Simple as that.

Whoa that was a lot. And I think you have mostly translated Uncle Bob's jargon over to your Spring Java app.
Since architecture is mostly opinion and since your question is sort of asking for one...
There are many different styles of architecture and ... most are overrated. Because most are the same thing: higher cohesion and looser coupling through indirection and abstraction.
What matters MOST (IMHO) are the dependencies. Making lots of small projects as opposed to one giant monolithic project is the best way to get "clean" architecture.
Your most important technology for clean architecture will not be "Spring MVC" technology or "Freemarker" templating language, or another Dr. Dobb's article with diagrams of boxes, hexagons and various other abstract polygons.
Focus on your build and dependency management technology, because it is this technology that will enforce your architecture rules.
Also if your code is hard to test.. you probably have bad architecture.
Focus on making your code easy to test and write lots of tests.
If you do that it will be easy to change your code without worry... so much so that you could even change your architecture :)
Beware of focusing too much on bull#%$## architecture rules. Seriously: if your code is easy to test, easy to change, easy to understand and performs well, then you have a good architecture. There is no "6 weeks to 6 pack abs" article for this (sorry Uncle Bob). It takes experience and time... there is no magic bullet plan.
So here are my own "clean" architecture... I mean guidelines:
Make many small projects
Use dependency management (e.g. Maven, Gradle)
Refactor constantly
Understand and use some sort of dependency injection (Spring)
Write unit tests
Understand cross-cutting concerns (e.g. when you need AspectJ, metaprogramming, etc.)

My solution
So it turns out that implementing Bob's "Clean Architecture" in Java/Spring MVC is borderline non-trivial and requires more architectural facets than I originally had included.
And I could not actually find any example implementations online.
Evidently my architecture was missing a separate module for the "Use Case"-layer as this logic should not live in the Spring MVC Web module (and not be called "*ViewModel"). The Web/MVC module is simply a detail of the application, and the application logic should be completely separated from it, and separately testable.
This new "Use Case"-module now contains *Interactor-classes which get data from the domain module (entities). Moreover, "Request/Response Objects" are needed to facilitate the communication between the MVC/Web-module and the Use Case module.
My dependency chain now looks like this:
Spring MVC module -> Use Case module -> Domain module
where every arrow (dependency) takes form as a Boundary, meaning an interface is defined in the module to the right of the arrow that is implemented where required and injected where needed (Inversion of Control).
Here are the interfaces I ended up with (per Use Case):
I<UseCase>Request - implemented in the MVC module, instantiated in the Controller
I<UseCase>Response - implemented in the Use Case module, instantiated in the Interactor
I<UseCase>Interactor - implemented in the UseCase module, injected in the Controller
I<UseCase>Consumer - implemented in the Domain module, injected in the Interactor
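A minimal sketch of what those boundaries might look like, using an invented "ShowOrders" use case. All names below are illustrative rather than my actual code, and the types are shown together for brevity; each would go in its own file in the module that owns it:

```java
import java.util.List;

// Defined in the Use Case module; implemented and instantiated by the Controller in the MVC module.
public interface IShowOrdersRequest {
    String getCustomerId();
}

// Defined and implemented in the Use Case module; instantiated by the Interactor.
public interface IShowOrdersResponse {
    List<String> getOrderSummaries();
}

// The input boundary the Controller is injected with; the Interactor in the Use Case module implements it.
public interface IShowOrdersInteractor {
    IShowOrdersResponse showOrders(IShowOrdersRequest request);
}

// Boundary towards the data access side; implemented in the Domain/Consumer module, injected into the Interactor.
public interface IShowOrdersConsumer {
    List<Order> fetchOrdersFor(String customerId);
}

// Hypothetical domain POJO from the Domain module, only here to make the sketch self-contained.
public class Order {
    private final String number;
    private final String status;

    public Order(String number, String status) {
        this.number = number;
        this.status = status;
    }
    public String getNumber() { return number; }
    public String getStatus() { return status; }
}
```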
How it works?
The Controller takes parameters from the HTTP request and packs them in a RequestModel which it passes down to the Interactor. The Interactor fetches the data it needs from the domain module's *Consumer and imposes its application-specific logic on it, then puts it in a ResponseModel and sends it back up to the Controller. The Controller finally puts this (now GUI-friendly) data in a Map object and forwards it to the FreeMarker template, which uses this data directly to render the HTML.
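Sketched in code, reusing the invented "ShowOrders" boundaries from the sketch above (again, these classes are illustrative, shown together here but living in separate files and modules):

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

// MVC module: packs HTTP input into a RequestModel, unpacks the ResponseModel into the ModelMap.
@Controller
public class OrdersController {

    private final IShowOrdersInteractor showOrders; // injected boundary

    public OrdersController(IShowOrdersInteractor showOrders) {
        this.showOrders = showOrders;
    }

    @RequestMapping(value = "/orders", method = RequestMethod.GET)
    public String orders(@RequestParam("customerId") String customerId, ModelMap model) {
        IShowOrdersResponse response = showOrders.showOrders(new ShowOrdersRequest(customerId));
        model.addAttribute("orders", response.getOrderSummaries()); // GUI-friendly data for FreeMarker
        return "orders";
    }
}

// Controller-side implementation of the request boundary.
class ShowOrdersRequest implements IShowOrdersRequest {
    private final String customerId;
    ShowOrdersRequest(String customerId) { this.customerId = customerId; }
    public String getCustomerId() { return customerId; }
}

// Use Case module: applies application-specific logic, knows nothing about HTTP or FreeMarker.
class ShowOrdersInteractor implements IShowOrdersInteractor {

    private final IShowOrdersConsumer consumer; // injected; implemented in the Domain module

    ShowOrdersInteractor(IShowOrdersConsumer consumer) {
        this.consumer = consumer;
    }

    public IShowOrdersResponse showOrders(IShowOrdersRequest request) {
        final List<String> summaries = new ArrayList<String>();
        for (Order order : consumer.fetchOrdersFor(request.getCustomerId())) {
            // Use-case specific shaping of domain data into a plain response model.
            summaries.add(order.getNumber() + " (" + order.getStatus() + ")");
        }
        return new IShowOrdersResponse() {
            public List<String> getOrderSummaries() { return summaries; }
        };
    }
}
```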
A Presenter could get involved in that last part there, making this an implementation of the Model-View-Presenter pattern, but I'm leaving that for now.
My conclusion
I ended up with more files than are strictly necessary this early in development. However, as the complexity and size of the application grows, I am confident that this structure will keep it easy for us to maintain low coupling and high cohesion. Also, the Web-module is now easily replaceable - it just delivers requests to the use-case module and receives response objects. Moreover, each layer of the application (domain logic, application logic, and GUI logic) is separately testable, with only the View part requiring a web server in order to be tested.
Thanks for all advice and pointers I received here. And please comment on my solution - I don't claim that it is perfect.

Because of this, we do not have "entities" in the traditional sense (with Ids), but a data-centric approach making this layer a data access layer, presented to the rest of the application by the Consumer-module.
Something seems odd to me in that part. Why couldn't your entities have IDs even if you get them from web services?
In the Clean Architecture approach, the Entities layer is precisely not a data access layer. Data access should be a detail in your architecture, not a central concern. As you said yourself, Entities contain domain-specific business rules. Business rules, or behavior, is very different from the way you fetch your data.
Entities is where all the domain logic happens, not where you get your data from. According to Clean Architecture, you get your persisted or external data from Gateways.
One solution I've found to this problem is to use a variant of the "MVVM-pattern" that I choose to call MVC-VM. See below for an explanation. The "VM"-part of this lives in this Use Case-layer, represented by *ViewModel-classes that encapsulate this Use Case-specific logic.
ViewModel clearly refers to a View, which is a presentation artifact, another detail. Use cases/Interactors should be devoid of such details. Instead, Interactors should send and receive delivery mechanism-agnostic data structures (RequestModels and ResponseModels) through Boundaries.
I understand that this is a custom pattern of yours and doesn't involve a reference to a presentation framework, but the word "View" is just misleading.

Related

Bean validation inside the domain abstraction

I have been reading Clean Architecture by R. C. Martin.
I'm trying to make sense of it, by developing a small project where I'm trying to apply its concepts.
One core concept in the domain layer is to not use frameworks or 3rd party libs and to avoid @Annotations; simply make the classes in the domain layer pure POJOs.
I would like to know two things: is it conceptually right to do my "entities" validations inside the domain layer, and if so, would using Bean Validation be a reasonable option since it is a specification by Java itself?
The job of a POJO domain (business) object is to faithfully represent the values of the content, and maintain the integrity of the information it represents. Validating any data input is a key part of that. Protecting against faulty inputs is a main job of the domain POJO.
So, yes, it makes perfect sense to use the Bean Validation framework to assist in this effort to faithfully represent the domain data correctly.
The admonition against frameworks and libraries should not be misinterpreted as simply and literally no frameworks/libraries. The goal of that advice is to not intertwine the internals of the domain POJO with the outer world of the application’s complexities. The domain POJO should be unaware of how it is being used. So you should be able to pick up the class of a domain POJO from this app’s codebase and drop it into any other app’s code base with no further programming. The domain object should be agnostic and ignorant of the app within which it is being used.
Avoiding this kind of unnecessary messy intertwining is what is meant by “clean” versus “dirty” architecture. Every part of your app should focus on its own responsibility, to do a job that no other part of the app can do, with as little interference or entanglement from other parts of your app as is practical.
The Bean Validation implementation library is used internally by your domain POJO, without concern for the outer app, except for the configuration necessary to load a Bean Validation implementation. This scenario is entirely reasonable, and does not violate Martin’s advice.
For example, your Customer, Invoice, and PurchaseOrder classes should remain blissfully ignorant of your choice of a reactive/flow architecture, or some event bus coordinating parts of your app, or whether your app is a local desktop app built in JavaFX versus a web app built in Vaadin Flow.
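For example, a hypothetical Customer POJO could declare its integrity rules with standard Bean Validation annotations and still know nothing about the application around it (field names and values below are invented):

```java
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import java.util.Set;

// A domain POJO that only knows about its own integrity rules,
// not about how or where the application uses it.
public class Customer {

    @NotNull
    @Size(min = 1, max = 100)
    private final String name;

    @Min(0)
    private final int loyaltyPoints;

    public Customer(String name, int loyaltyPoints) {
        this.name = name;
        this.loyaltyPoints = loyaltyPoints;
    }

    public String getName() { return name; }
    public int getLoyaltyPoints() { return loyaltyPoints; }

    public static void main(String[] args) {
        // Any Bean Validation implementation (e.g. Hibernate Validator) can be loaded
        // by the outer application; the POJO itself only declares the constraints.
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        Set<ConstraintViolation<Customer>> violations = validator.validate(new Customer(null, -5));
        System.out.println(violations.size() + " violations"); // 2 with the values above
    }
}
```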

MVC and Swing in desktop application

After realizing that I have completely ignored the MVC pattern I have tried to utilize the concept in an application with a Swing view. I have now read most of the posts on the subject of MVC with Swing but am still a bit confused, because it is too complicated for me to grasp, and I think I need some basic clarifications so I don't set off on the wrong path.
I also wonder how common it is to use MVC in real projects. Many online tutorials seem to leave out the controller and mix it with the model, while I was confused by XSLT's business logic capabilities. Why would you want to address a datasource from a JSP view?
These thoughts aside, my proper question is this:
If you have a Swing component, should the event listener in that Swing class update the component state by calling (static, perhaps?) methods in a POJO controller class, which in turn gets the appropriate business logic from the model, which is made up of a POJO class hierarchy and associated persistence?
I've worked as a freelancer for a long time and almost 90% of the projects were Java Swing (desktop) applications. A lot of projects also involved migration from languages like Visual FoxPro to Java, and it was a pain, because the hard part is not thinking through the logic, which is already done; the hard part is taking code that is a mess and turning it into good-looking code that follows good practices and uses design patterns. That's why it is a good idea to make a schema or a map in your mind of how you can separate your code following the concepts of Model, View, Controller.
MVC, as mentioned, helps you have good-looking, maintainable and easy-to-read code, and helps you follow programming paradigms and good practices.
View: Obviously, the part that interacts with the user (user interface), in case of Swing, your windows, frames, panels and all the code that involves the graphic components you need for your app.
Controller: Involves the core or business logic you establish for your application; in this "layer" you should include the functionality and the "how will my application achieve its goals?".
Model: Related to the data you manage, for example your entities and the classes that represent the data you want to manage or maintain.
Applying MVC is not so hard, but as I mentioned, it can sometimes be a pain when you have to migrate your code from a not-applying-MVC structure to an MVC-structured application. It is easier to start coding using MVC.
A way I got used to it was by using Maven and separating my application into little "modules". Of course you don't need Maven, I just found it useful at the time, but in any case you can practice or get used to MVC by separating your application into little projects, for instance:
Java Project 1: application-data-model (contains all the code related with data management: entities, dtos, beans, daos)
Java Project 2: application-core-controller (contains all the business logic and functionality, you can use a facade pattern here if you want to make your code more "transparent" when you relate with your view)
Java Project 3: application-view-ui (contains all the panels, frames and graphic components)
Working this way helped me (and forced me) to get used to separating my code and to keep an eye on what really matters to the project I'm working on. For instance, if I'm on application-data-model I'm focused on the data model; I'm not thinking about business logic or the graphic interface.
Long explanation, and maybe somebody could do it better, but I hope I have helped you or at least given you a hand with this.
Best regards.
First, the URL for a basic understanding of MVC:
http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
Now, the approach to implementing it in Swing applications. Don't confuse controller functionality with listener functionality.
UI controls and listeners attached to them should be defined in your view classes.
On any event, whenever you need to invoke business logic, you call the controller class, for example to fetch some value from the database.
Controller class should talk to your model to fetch the data and manipulate it if required.
Model classes should work on the data.
The idea of using MVC is to reduce redundant code and make the code more manageable. So if you are doing some calculations/manipulations then those can be moved to controllers. Controllers can be called from different views requiring the same stuff. Similarly, a model can be used by multiple controllers to fetch the data.
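To make that wiring concrete, here is a bare-bones sketch (all class names invented): the listener stays in the view and delegates to a controller instance, which in turn asks the model for data.

```java
import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

// Model: plain object holding/serving data (in a real app it would talk to persistence).
class CustomerModel {
    String findCustomerName(int id) {
        return "Customer #" + id; // stand-in for a real lookup
    }
}

// Controller: business logic entry point; knows the model, not the Swing widgets.
class CustomerController {
    private final CustomerModel model;
    CustomerController(CustomerModel model) { this.model = model; }

    String loadCustomerName(int id) {
        return model.findCustomerName(id);
    }
}

// View: owns the widgets and their listeners; delegates the actual work to the controller.
class CustomerView {
    private final CustomerController controller;
    CustomerView(CustomerController controller) { this.controller = controller; }

    void show() {
        JFrame frame = new JFrame("MVC sketch");
        JLabel label = new JLabel("...");
        JButton button = new JButton("Load");
        // The listener lives in the view but immediately hands off to the controller.
        button.addActionListener(e -> label.setText(controller.loadCustomerName(42)));
        frame.add(label, BorderLayout.CENTER);
        frame.add(button, BorderLayout.SOUTH);
        frame.setSize(250, 120);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}

public class Main {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(
            () -> new CustomerView(new CustomerController(new CustomerModel())).show());
    }
}
```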

External Systems Integration Best Practice

Quick question on what is the best practice for integrating with external systems.
We have a system that deals with Companies which we represent by our own objects. We also use an external system via SOAP that returns an Organization object. They are very similar but not the same (ours is a subset of theirs).
My question is, should we wrap the SOAP service via a Facade so we return only Company objects to our application, or should we return another type of object (e.g. OrgCompany), or even just use the Organization object in our code.
The SOAP service and Organization object are defined by an external company (a bank), who we have no control over.
Any advice and justification is much appreciated.
My two cents: introducing external objects into an application is always a problem, especially during maintenance. A small service change might lead to a big code change in the application.
It's always good to have a layer of abstraction between the external service and the application. I would suggest creating a service layer which does the translation from external service objects to your application domain objects, and using those within the application. A clear separation / decoupling helps a lot in maintenance.
Your decision here is how you want to manage external code dependencies in your application. Some factors that should play into your decision:
1) How often will the API change, and what's the expected nature of the changes?
2) What's the utility of your application outside its dependencies? If you removed the SOAP service dependency, would your app still serve a purpose?
A defensive approach is to build a facade or adapter around the SOAP service, so that your code only depends on your object model. This gives you a lot of control and a relatively loose coupling between your code/logic and the service. The price that you pay for this control is that when the SOAP contract changes, you must usually also change a layer of your code.
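A sketch of that facade approach, with hypothetical names standing in for the classes generated from the bank's WSDL (the real Organization type and port will look different):

```java
// Hypothetical stand-ins for the classes generated from the bank's WSDL.
interface BankSoapPort {
    Organization getOrganization(String id);
}

class Organization {
    private String legalName;
    private String regNo;
    public String getLegalName() { return legalName; }
    public void setLegalName(String legalName) { this.legalName = legalName; }
    public String getRegNo() { return regNo; }
    public void setRegNo(String regNo) { this.regNo = regNo; }
}

// Your own domain object: the only type the rest of the application sees.
class Company {
    private final String name;
    private final String registrationNumber;

    Company(String name, String registrationNumber) {
        this.name = name;
        this.registrationNumber = registrationNumber;
    }
    public String getName() { return name; }
    public String getRegistrationNumber() { return registrationNumber; }
}

// Facade/adapter around the SOAP service; the only place that knows about Organization.
class CompanyService {
    private final BankSoapPort bankPort;

    CompanyService(BankSoapPort bankPort) {
        this.bankPort = bankPort;
    }

    public Company findCompany(String id) {
        Organization org = bankPort.getOrganization(id); // external call
        // Translate the external representation into our own model.
        return new Company(org.getLegalName(), org.getRegNo());
    }
}
```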
A different approach is to use the objects you're getting from the WSDL directly. This is beneficial when it doesn't make sense to introduce a level of indirection in your application between the client code, i.e. your application is just a feeder into a different system and the whole point of the app is to stuff the Organization object into a JMS pipeline or something similar. If the SOAP API contract never changes and you don't expect the output of your app to change much, then introducing an extra layer of indirection will just hinder the readability of your codebase long term.
Most J2EE developers tend to take the former approach in my experience, both because of the nature of their applications, and because they want to separate their application logic from the details of the data source.
hope this helps.
I can't think of any situation where it's good to use the objects that another company controls. The first thing you should do is bridge those objects into your own. Also, by having your own objects, you can expand their functionality beyond what is provided by the third party you connect to (for example if in the future you need to talk to more than one Company object provider).
Look at the Adapter pattern.
I'd support Sridhar's suggestion; I'd just like to add that for translating external service objects to your application domain objects you can use Dozer:
http://dozer.sourceforge.net/documentation/mappings.html
I typically always Adapt externally defined domain objects to an internal representation.
I also create a comprehensive suite of tests against the external domain object, that will highlight any problems quickly if the external vendor produces a new release.
The Enterprise Service Bus architecture might be useful here.
Its primary use is in Enterprise Application Integration of heterogeneous and complex landscapes.
(from Wikipedia)
I would check out open source Mule if you are looking for an open source solution

Is there an acceptable way to keep these layers/dependencies separate?

I am currently struggling with whether or not I've achieved a good level of separation, or if I've missed the point somewhere, as I am relatively new to learning the disciplined side of development...
My goal when I started was to create a layer that was agnostic of any persistence mechanism - I called this data-api. I then implemented these interfaces using JDO, and called this project data-jdo. The logic layer is ideally only aware of data-api.
This is the point where I'm not sure what makes sense. The business logic layer has to be invoked somehow, right? So is the expectation that the implementation of the data-api (data-jdo, or something else depending on experimentation) is provided (is "injected" the appropriate word?) by the invoker?
So the goal would be (largely for experience and not for productivity) to implement, say, a data-jpa package that could be substituted in place of data-jdo. So the topmost layers (a web service, a generic main method as part of a tool, unit tests, whatever) are the ones that choose which implementation to use.
Should I be using some framework like Spring to allow me to choose which implementation of my data-api is used, via XML?
Sorry if that's a little vague... I guess the root question is, at what point does the consumer of an API depend on, supply, or become paired with, the implementation of that API? If the answer is or should be "never" then what is used to make sure everything is available at runtime and how does the consumer get an instance of whatever the "API" is describing with only interfaces?
I come from a .net background - not a Java one, so I'm afraid I can't help you with Java specifics.
The business logic layer has to be invoked somehow, right? So is the expectation that the implementation of the data-api (data-jdo, or something else depending on experimentation) is provided (appropriate to say/do injected?) by the invoker?
Yes. In the .NET world I use a Factory (as in an instance of the Factory Pattern) that dynamically returns the data provider implementation (which one of those to use is set by config). The data provider is returned by the factory as an 'object' and it's up to the calling business logic code to cast it to the correct type - as specified by the interface that the business logic is working against.
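Translated into Java terms (all names below are invented), such a factory might read a configured class name and hand back only the data-api interface, so the business layer never mentions the JDO or JPA implementation:

```java
// Hypothetical data-api interface (stub shown only to make the sketch self-contained).
interface UserDao {
    // find/save methods would go here
}

// Factory that hides which data-api implementation is used; only configuration names the class.
final class DaoFactory {

    private DaoFactory() {}

    static UserDao createUserDao() {
        // e.g. started with -Ddata.userdao.impl=com.example.data.jpa.JpaUserDao to swap implementations
        String implClass = System.getProperty("data.userdao.impl", "com.example.data.jdo.JdoUserDao");
        try {
            return (UserDao) Class.forName(implClass).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not create UserDao implementation: " + implClass, e);
        }
    }
}

// Business layer code depends only on the interface:
// UserDao dao = DaoFactory.createUserDao();
```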
I've got (another!) article on Dependency Injection for .NET which might help explain some of the issues, but I'm sure there are good Java-based ones around somewhere.
Should I be using some framework like Spring to allow me to choose which implementation of my data-api is used, via XML?
Probably. I'd say spend your time getting to grips with the concepts first, and worry about "best practice" after that. FYI, I learnt AJAX the hard way - by writing all the code myself. These days I'd run straight to a good framework, but I only think I have the confidence to do that after having really grokked the basics by doing some hard graft at the coal-face :)
... If the answer is or should be "never" then what...
Yeah - it's never. Use a Factory.
Your data-api is a DAO interface layer, that's all your business (aka service) layer should know about persistence. And the presentation layer or any other layer above the business layer shouldn't have any "knowledge" of the DAO layer underneath.
To achieve that, relying on a framework like Spring is a good idea. The top level layer loads an application context which contains all the information for the framework to load the appropriate implementation.
For example, you could load applicationContext.xml from the front-end to use data-jdo, and load testApplicationContext.xml from the unit tests to use data-jpa.
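In Java terms that might look like the snippet below, where the context file (names invented here) is the only place the concrete DAO class appears:

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

// Hypothetical data-api interface from the data-api module.
interface UserDao {
    // find/save methods would go here
}

public class Bootstrap {
    public static void main(String[] args) {
        // applicationContext.xml might declare:
        //   <bean id="userDao" class="com.example.data.jdo.JdoUserDao"/>
        // while testApplicationContext.xml points the same bean id at the JPA implementation.
        ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");

        // The caller only ever sees the data-api interface.
        UserDao userDao = context.getBean("userDao", UserDao.class);
        // ... hand userDao to the business layer ...
    }
}
```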

Looking for design patterns to isolate framework layers from each other

I'm wondering if anyone has any experience in "isolating" framework objects from each other (Spring, Hibernate, Struts). I'm beginning to see design "problems" where an object from one framework gets used in another object from a different framework. My fear is we're creating tightly coupled objects.
For instance, I have an application where we have a DynaActionForm with several attributes...one of which is a POJO generated by the Hibernate Tools. This POJO gets used everywhere...the JSP populates data to it, the Struts Action sends it down to a Service Layer, the DAO will persist it...ack!
Now, imagine that someone decides to do a little refactoring on that POJO...so that means the JSP, Action, Service, DAO all needs to be updated...which is kind of painful...There has got to be a better way?!
There's a book called Core J2EE Patterns: Best Practices and Design Strategies (2nd Edition)...is this worth a look? I don't believe it touches on any specific frameworks, but it looks like it might give some insight on how to properly layer the application...
Thanks!
For instance, I have an application where we have a DynaActionForm with several attributes...one of which is a POJO generated by the Hibernate Tools. This POJO gets used everywhere...the JSP populates data to it, the Struts Action sends it down to a Service Layer, the DAO will persist it...ack!
To me, there is nothing wrong with having Domain Objects as a "transversal" layer in a web application (after all, you want their state to go from the database to the UI, and I don't see the need to map them into intermediate structures).
Now, imagine that someone decides to do a little refactoring on that POJO...so that means the JSP, Action, Service, DAO all needs to be updated...which is kind of painful...There has got to be a better way?!
Sure, you could read "Beans" from the database at the DAO layer level, map them into "Domain Objects" at the service layer and map the Domain Objects into "Value Objects" for the presentation layer and you would have very low coupling. But then you'll realize that:
Adding a column in a database usually means adding some information on the view and vice-versa.
Duplication of objects and mappings are extremely painful to do and to maintain.
And you'll forget this idea.
There's a book called Core J2EE Patterns: Best Practices and Design Strategies (2nd Edition)...is this worth a look? I don't believe it touches on any specific frameworks, but it looks like it might give some insight on how to properly layer the application...
This book was a "showcase" of how to implement (over-engineered) applications using the whole J2EE stack (with EJB 2.x) and has somehow always been considered too complicated (too many patterns). On top of that, it is today clearly outdated. So it is interesting but must be taken with a giant grain of salt.
In other words, I wouldn't recommend that book (at least certainly not as state of the art). Instead, have a look at Real World Java EE Patterns - Rethinking Best Practices (see Chapter 3 - Mapping of the Core J2EE patterns into Java EE) and/or the Spring literature if you are not using Java EE.
First, avoid Struts 1. Having to extend a framework class (like DynaActionForm) is one of the reasons this framework is no longer a good choice.
You don't use Spring classes in the usual scenarios. Spring is non-invasive - it just wires your objects. You depend on it only if you use some interfaces like ApplicationContextAware, or if you use the Hibernate or JDBC extensions. Using these extensions together with Hibernate/JDBC is completely fine and it is not an undesired coupling.
Update: If you are forced to work with Struts 1 (honestly, try negotiating for Struts 2; Struts 1 is obsolete!), the usual way to go was to create a copy of the Form class that contains the exact same fields but does not extend the framework class. There would be a factory method that takes the form and returns the simple POJO, as sketched below. This is duplication of code, but I've seen it in practice and it is not that bad (compared to the use of Struts 1 :) )
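A rough sketch of that approach (field names invented): the POJO mirrors the form's fields, and a small factory is the only code that touches the Struts type:

```java
import org.apache.struts.action.DynaActionForm;

// Plain POJO handed to the service/DAO layers; no Struts dependency at all.
class CustomerData {
    private String name;
    private String email;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

// The factory is the only class that reads from the Struts form.
class CustomerDataFactory {
    static CustomerData fromForm(DynaActionForm form) {
        CustomerData data = new CustomerData();
        data.setName((String) form.get("name"));
        data.setEmail((String) form.get("email"));
        return data;
    }
}
```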
I think your problem is not so big as it seems.
Let's imagine what you can really change in your POJO:
1) name of its class: any IDE with refactoring support will automatically make all necessary changes for you
2) add some field/method: this almost always means adding new functionality, which should always be done manually and carefully. It usually causes some changes in your service layer, very seldom in the DAO, and usually in your view (JSP).
3) change a method's implementation: with good design this shouldn't cause any changes in other classes.
That's all, imho.
Make a decision about the technology for implementing your business logic (EJB or Spring) and use its dependency injection facilities. Using DI will make the different parts of your program communicate with each other through interfaces. That should be enough to reach the necessary (small enough) level of coupling.
It's always nice to keep things clear if you can and separate the layers etc. But don't go overboard. I've seen systems where the developers were so intent on strictly adhering to their adopted patterns and practices that they ended up with a system worse than the imaginary one they were trying to avoid.
The art of good design is understanding the good practices and patterns, knowing when and how to apply them, but also knowing when it's appropriate to break or ignore them.
So take a good look at how you can achieve what you are after, read up on the patterns. Then do a trial on a separate proof of concept or a small part of your system to see your ideas in practice. My experience is that only once you actually put some code in place, do you really see the pros and cons of the idea. Once you have done that, you will be able to make an informed decision about what you will or will not introduce.
Finally, it's possible to build a system which does handle all the issues you are concerned about, but be pragmatic - is each goal you are attempting to reach worth the extra code and APIs you will have to introduce to reach it?
I'd say that Core J2EE Patterns: Best Practices and Design Strategies (2nd Edition) addresses EJB 2.0 concerns, some of which would be considered anti-patterns today. Knowledge is never wasted, but I wouldn't make this my first choice.
The problem is that it's impossible to decouple all the layers. Refactoring the POJO means modifying the problem you're solving, so all the layers DO have to be modified. There's no way around that.
Pure decoupling of layers that have no knowledge of each other requires a lot of duplication, translation, and mapping to occur. Don't fall for the idea that loose coupling means this work goes away.
One thing you can do is have a service layer that's expressed in terms of XML requests and responses. It forces you to map the XML to objects on the service side, but it does decouple the UI from the rest.
