Pass enumeration across tiers in a Java application

I have an application with 3 layers: GWT-RPC, Business and DAO
In every layer I have different beans. For example, for a User I'd have UserRPC (for the UI), User (business) and UserDTO (to persist). At every layer boundary I convert the objects.
The main problem is the enumerations. The enumerations are exactly the same across the layers but I need to replicate them in order to keep the layer independence.
Any suggestions?

(Short answer)
If they are truly the same, you want to create a commons project / jar. Inside it go all the things that are common to the entire application, such as utility classes, enumerations, etc.
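For example, a minimal sketch of what that commons jar might contain (the package and enum names here are made up):

    // Lives in the shared commons jar, visible to the GWT-RPC, Business and DAO layers.
    package myapp.commons;

    // java.lang.Enum is already Serializable, so GWT-RPC and the persistence
    // layer can both use this type directly without any conversion.
    public enum UserStatus {
        ACTIVE,
        SUSPENDED,
        DELETED
    }

UserRPC, User and UserDTO would then all declare a field of type UserStatus, and only the non-shared parts of each bean need converting at the layer boundaries.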
(Long answer)
Consider the overall architecture of your system: there should not be any need to have duplicated data in any tier. If this is occurring, it means there is a flaw in the design of the system, and a scenario of tight coupling may be on the horizon. This is part of the reason why, when developing software, engineers often (should) document the interfaces (contracts / API) before doing any implementation. Once these interfaces are approved and there is little to no chance of duplication, the actual implementation can begin. That would catch the scenario of the same enumeration, which is a set of constants, being created at each individual tier of the application.
Also keep in mind that if something in your business tier changes in a way that impacts an enumeration, you must recompile the code and redeploy. And take care not to let an enumeration become a catch-all for all of your system constants, as that is an entirely different issue you may have to deal with.

The enumerations are exactly the same across the layers but I need to
replicate them in order to keep the layer independence.
Data duplication is never a good idea. How do you make sure that updates to one enumeration are reflected in the other layers? You should probably create a Utility layer and put all the common classes there.

Related

How to avoid duplicate POJOs in different web services, Java

I have a webapp with multiple Spring services (they each have their own ear and web-controllers) and they talk to each other via REST calls. Some of the different services use the same data POJOs. The existing code just has duplicates of those data objects in different services.
For example, my /users service has a myApp.users.UserData object, and my /emails service calls /users/{userId} and holds the result as a myApp.emails.UserData object. Both of these objects are identical.
The problem here is I have to keep myApp.emails.UserData and myApp.users.UserData in sync, since in reality they represent the same info and are meant to be the "same" class. Say I update the name of a field in emails.UserData, I better remember to update it in users.UserData, otherwise things will break.
I know I could make a shared dependency called something like SharedDataObjects, define myApp.sharedDataObjects.UserData there, and just have both versions refer to that. For some reason, though, my gut feeling is that this is not a good solution... (maybe it is though?)
Are there any better ways of approaching this issue? Is there something fundamentally wrong with the way the webapp is structured, and if so, how could that be addressed?
It's always tempting to spot duplicated (or nearly duplicated) code and try to eliminate the duplication. An obvious path in that direction is to make a shared library (or module) and have different services (or applications) depend on it.
Doing so, however, increases coupling between services/applications/modules, which has its own set of drawbacks in many contexts. Especially in a microservices architecture*, that kind of coupling often leads to headaches. It can even lead to loss of the value of using microservices* in the first place.
If two services* are supposed to be independent but integrated, they have an integration protocol of some kind. For REST services, for example, that's almost always HTTP and JSON**. By introducing a shared library, you have coupled them in a binary way, separate from (and more binding than) HTTP and JSON. Not a good situation; experience has taught many of us that painful lesson.
Instead, focus on the public interfaces that each service exposes, and use appropriate versioning to evolve those interfaces when needed. Don't worry about some duplicate-looking classes, especially if they're anemic POJOs; that's a minor concern compared to a tightly-coupled set of services that are supposed to be independent.
Not that shared code libraries are inherently bad; they're not, and they have real value in many ways. Rather, my point is that you need to make sure each service maintains its own definition of what the important things are, so that each can remain as independent as possible - and evolve as independently as possible.
By the way, this is somewhat related to the concept of Bounded Contexts; you might want to read more about it if you're working with microservices*.
*or whatever you call your architecture of independently-managed services/modules/apps
** Could also be some kind of messaging/queueing platform, as another example
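To make that concrete, here is a hedged sketch of what "each service keeps its own definition" can look like, reusing the class and endpoint names from the question (the field names and the Jackson usage are my assumptions, not something the answer prescribes): the emails service declares its own myApp.emails.UserData and simply deserializes whatever /users/{userId} returns, ignoring fields it does not care about.

    package myApp.emails;

    import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

    // The emails service's own view of a user. It only declares the fields
    // it actually needs; unknown fields in the /users/{userId} payload are
    // ignored, so the users service can evolve its own UserData independently.
    // Typically deserialized with something like:
    //   UserData user = new ObjectMapper().readValue(json, UserData.class);
    @JsonIgnoreProperties(ignoreUnknown = true)
    public class UserData {
        private String userId;
        private String email;

        public String getUserId() { return userId; }
        public void setUserId(String userId) { this.userId = userId; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

The coupling between the two services is then the JSON contract, not a shared binary; either side can add fields without forcing a lockstep redeploy of the other.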

In modern application design, how do you implement / convert between TransferObject and BusinessObject?

With organizations which are slow to adapt to modern technology finally junking EJBs and getting ready to move to Spring Boot, microservices, REST and Angular, there are some questions about application design. One is about TransferObjects and BusinessObjects.
When the call comes to the REST Controller, is it still popular to populate a TO (POJO) and then make Service call, which in turn populates a BusinessObject and then calls a Repository service?
OR
At the REST Controller layer do we directly populate the BO and send it to the Service (This does not make any sense to me, because a BO is populated only during the execution business logic).
If nowadays it's still Option 1, then how do we avoid writing two nearly identical POJO classes in most cases (in order to use BeanUtils.copyProperties()), with the BO decorated with @Id, @Column etc.?
To elaborate on #Turing85's comments...
Option 1 usually (see the end of my answer) makes infinitely more sense. It's a question of responsibility (purpose) and change; the two logical components you refer to, a REST API and a repository / system service:
Responsibility: a REST service cares about working with its callers, so ideally when designing a REST API you should involve someone from the client side (client as in caller), because if the API doesn't work for them it's not going to be an effective API. On the other hand, repositories are somewhat self-centered, and may need to consider things that are of no interest to API callers (and vice versa).
Change: if you pay attention to design principles, like SOLID, you'll know that each part of a system should do one job - as a way of limiting the reasons why it needs to change (see: SRP). Trying to use one object across both the outward-facing API and the inward-facing repository is asking for trouble, because it's trying to do too much - it's trying to help solve problems in two very different parts of the wider solution, and the two have very different change drivers working against them. Turing85's comment about the persistence layer stems from the same idea.
"Option 1 usually makes infinitely more sense":
One case where the REST API's objects will / can bear a very close resemblance to those that hit the actual repository (or even be reused, I guess) is when the REST API is a System API - i.e. a dedicated façade / proxy to the repository. In this case, the System API is largely driven by the repository i.e. the main change driver is just the repository.
After researching a bit I agree: keeping things simple results in simpler code. I found a nice, easy way to take care of this manual mapping work:
https://www.baeldung.com/entity-to-and-from-dto-for-a-java-spring-application
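The linked article uses a mapping library, but a hand-rolled mapper makes the same point. A minimal sketch (the UserBO / UserTO names and fields are invented for illustration):

    import javax.persistence.Column;   // jakarta.persistence.* on newer stacks
    import javax.persistence.Entity;
    import javax.persistence.Id;

    // Business object: carries the persistence annotations.
    @Entity
    class UserBO {
        @Id private Long id;
        @Column(name = "user_name") private String name;

        Long getId() { return id; }
        String getName() { return name; }
    }

    // Transfer object returned by the REST controller: a plain POJO with
    // no JPA annotations, exposing only what the API caller needs.
    class UserTO {
        private Long id;
        private String name;

        void setId(Long id) { this.id = id; }
        void setName(String name) { this.name = name; }
    }

    // Manual mapping in one place keeps the two classes free to evolve separately.
    final class UserMapper {
        static UserTO toTransferObject(UserBO bo) {
            UserTO to = new UserTO();
            to.setId(bo.getId());
            to.setName(bo.getName());
            return to;
        }
    }

Whether you write this by hand, use BeanUtils.copyProperties(), or pull in a dedicated mapping library is mostly a matter of taste; the important part is that the TO and the BO remain separate types.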

How many GWT services

Starting a new GWT application and wondering if I can get some advice from someone's experience.
I have a need for a lot of server-side functionality through RPC services...but I am wondering where to draw the line.
I can make a service for every little call or I can make fewer services which handle more operations.
Let's say I have Customer, Vendor and Administration services. I could make 3 services or a service for each function in each category.
I noticed that much of the service implementation does not provide compile-time help and is at times troublesome to get going, but it provides good modularity. When I have a larger service, I don't have the modularity I described, but I don't have the service-creation issues and I reduce the number of entries in my web.xml file.
Is there a resource issue with using a lot of services? What is the best practice to determine what level of granularity to use?
In my opinion, you should make an RPC service for "logical" things.
In your example:
one for customers, another for vendors and a third one for admin.
That way, you keep several services grouped by meaning, and you will have only a few lines to maintain in the web.xml file (and that is good news :-)
More seriously, RPC services are usually wrappers to call database stuff, so you could even make a single 'MagicBlackBoxRpc' with a single web.xml entry and thousands of operations!
But making a separate RPC for admin operations, like you suggest, seems a good thing.
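A minimal sketch of that grouping using the standard GWT-RPC interfaces (CustomerService and its methods are invented; Customer is assumed to be a serializable bean from the question's domain):

    import java.util.List;

    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // One service interface (and one web.xml entry) for everything
    // customer-related, instead of one tiny service per operation.
    @RemoteServiceRelativePath("customer")
    public interface CustomerService extends RemoteService {
        Customer getCustomer(String id);
        void saveCustomer(Customer customer);
        List<Customer> findCustomersByName(String name);
    }

GWT-RPC also expects a matching CustomerServiceAsync interface on the client side, so grouping operations this way keeps the number of interface pairs (and web.xml entries) manageable.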
Read general advice on "how big should a class be?", which is available in any decent software engineering book.
In my opinion:
One class = One Subject (i.e. a group of functions or behaviours that are related)
A class should not deal with more than one subject. For example:
Class PersonDao -> Subject: interface between the DB and Java code.
It WILL NOT:
- cache Person instances
- update fields automatically (for example, update the field 'lastModified')
- find the database
Why?
Because for all these other things, there will be other classes doing it! Respectively:
- a cache around the PersonDao is concerned with the efficient storage of information to avoid hitting the DB more often than necessary
- the Service class which uses the DAO is responsible for modifying anything that needs to be modified automagically.
- Finding the database is the responsibility of the DataSource (usually part of a framework like Spring), and your DAO should NOT worry about that. It's not part of its subject.
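As a rough sketch of that separation (the class names are illustrative, and Person is assumed to be an existing entity with a getId()): the DAO only talks to the database, and caching lives in its own wrapper class.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Subject: the interface between the DB and Java code. Nothing else.
    interface PersonDao {
        Person findById(long id);
        void save(Person person);
    }

    // Caching is a different subject, so it gets its own class that wraps
    // a PersonDao instead of being baked into it.
    class CachingPersonDao implements PersonDao {
        private final PersonDao delegate;
        private final Map<Long, Person> cache = new ConcurrentHashMap<>();

        CachingPersonDao(PersonDao delegate) { this.delegate = delegate; }

        public Person findById(long id) {
            return cache.computeIfAbsent(id, delegate::findById);
        }

        public void save(Person person) {
            delegate.save(person);
            cache.remove(person.getId()); // keep the cache honest after writes
        }
    }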
TDD is the answer
The need for this kind of separation becomes really clear when you do TDD (Test-Driven Development). Try to do TDD on bad code where a single class does all sorts of things! You can't even get started with one unit test! So this is my final hint: use TDD and that will tell you how big a class should be.
I think the thing to optimize for is that you can accomplish a result in one round trip to the server. I have an ad-hoc collection of methods on my service object, one for each situation the client finds itself in when it has to get something done. You do not want the client to RPC to the server several times in a row while the user is sitting there waiting.
REST makes things orthogonal, but orthogonality has a cost: there is a reason that the frequently used verbs in languages are irregular. In terms of maintaining a clean orthogonal structure in your app, make sure your schema is well designed. That is where each class should have semantics orthogonal to those of the other classes. When the semantics of each RPC call can be stated cleanly in the schema, there will be no confusion as to what they mean, even if they aren't REST-fully ideal.
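To make the "one round trip" advice concrete, here is a hypothetical coarse-grained call (all names are invented) that bundles everything one screen needs, instead of forcing the client into several sequential RPCs:

    import java.io.Serializable;
    import java.util.List;

    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // Everything the "order details" screen needs, bundled into one bean
    // so the client receives it in a single round trip.
    class OrderDetailsView implements Serializable {
        String customerName;
        String orderStatus;
        List<String> shipmentNotes;
    }

    @RemoteServiceRelativePath("order")
    interface OrderService extends RemoteService {
        // One call instead of getOrder() + getCustomer() + getShipments().
        OrderDetailsView loadOrderDetails(String orderId);
    }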

How much business logic should Value objects contain? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
One mentor I respect suggests that a simple bean is a waste of time - that value objects 'MUST' contain some business logic to be useful.
Another says such code is difficult to maintain and that all business logic must be externalized.
I realize this question is subjective. I'm asking anyway - I want to hear answers from more perspectives.
The idea of putting data and business logic together is to promote encapsulation, and to expose as little internal state as possible to other objects. That way, clients can rely on an interface rather than on an implementation. See the "Tell, Don't Ask" principle and the Law of Demeter. Encapsulation makes it easier to understand the states data can be in, easier to read code, easier to decouple classes and generally easier to unit test.
Externalising business logic (generally into "Service" or "Manager" classes) makes questions like "where is this data used?" and "What states can it be in?" a lot more difficult to answer. It's also a procedural way of thinking, wrapped up in an object. This can lead to an anemic domain model.
Externalising behaviour isn't always bad. For example, a service layer might orchestrate domain objects, but without taking over their state-manipulating responsibilities. Or, when you are mostly doing reads/writes to a DB that map nicely to input forms, maybe you don't need a domain model - or the painful object/relational mapping overhead it entails - at all.
Transfer Objects often serve to decouple architectural layers from each other (or from an external system) by providing the minimum state information the calling layer needs, without exposing any business logic.
This can be useful, for example when preparing information for the view: just give the view the information it needs, and nothing else, so that it can concentrate on how to display the information, rather than what information to display. For example, the TO might be an aggregation of several sources of data.
One advantage is that your views and your domain objects are decoupled. Using your domain objects in JSPs can make your domain harder to refactor and promotes the indiscriminate use of getters and setters (hence breaking encapsulation).
However, there's also an overhead associated with having a lot of Transfer Objects and often a lot of duplication, too. Some projects I've been on end up with TO's that basically mirror other domain objects (which I consider an anti-pattern).
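A tiny illustration of the encapsulation argument (the Account classes here are hypothetical): the first version tells the object what to do and keeps the rule with the data; the second asks for state and spreads the same rule across a manager class that nothing forces other callers to go through.

    // Encapsulated: the invariant ("never overdraw") lives with the data.
    class Account {
        private long balanceInCents;

        void withdraw(long amountInCents) {
            if (amountInCents > balanceInCents) {
                throw new IllegalStateException("Insufficient funds");
            }
            balanceInCents -= amountInCents;
        }
    }

    // Anemic alternative: state is fully exposed through getters/setters...
    class AnemicAccount {
        private long balanceInCents;

        long getBalanceInCents() { return balanceInCents; }
        void setBalanceInCents(long balanceInCents) { this.balanceInCents = balanceInCents; }
    }

    // ...and the business rule migrates into a "Manager" that any caller
    // with access to the setter can bypass.
    class AccountManager {
        void withdraw(AnemicAccount account, long amountInCents) {
            if (amountInCents > account.getBalanceInCents()) {
                throw new IllegalStateException("Insufficient funds");
            }
            account.setBalanceInCents(account.getBalanceInCents() - amountInCents);
        }
    }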
You'd do better to call them Transfer Objects or Data Transfer Objects (DTOs).
Earlier, this same J2EE pattern was called 'Value Object', but the name was changed because it was being confused with this:
http://dddcommunity.org/discussion/messageboardarchive/ValueObjects.html
To answer your question, I would only put minimal logic to my DTOs, logic that is required for display reasons.
Even better, if we are talking about a database based web application, I would go beyond the core j2ee patterns and use Hibernate or the Java Persistence API to create a domain model that supports lazy loading of relations and use this in the view.
See the Open session in view.
In this way, you don't have to program a set of DTOs and you have all the business logic available to use in your views/controllers etc.
It depends.
oops, did I just blurt out a cliche?
The basic question to ask for designing an object is: will the logic governing the object's data be different or the same when used/consumed by other objects?
If different areas of usage call for different logic, externalise it. If it is the same no matter where the object travels to, place it together with the class.
My personal preference is to put all business logic in the domain model itself, that is, in the "true" domain objects. So when Data Transfer Objects are created they are mostly just an (immutable) state representation of domain objects and hence contain no business logic. They can contain methods for cloning and comparing, though, but the meat of the business logic code stays in the domain objects.
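A hedged sketch of that style (the Invoice names are invented): the domain object keeps the behaviour, and the DTO it produces is just an immutable snapshot of its state.

    // Domain object: owns the business logic.
    class Invoice {
        private final String number;
        private long totalInCents;

        Invoice(String number) { this.number = number; }

        void addLine(long amountInCents) { totalInCents += amountInCents; }

        // The DTO is derived from the domain object, never the other way round.
        InvoiceDto toDto() { return new InvoiceDto(number, totalInCents); }
    }

    // Immutable state representation handed to other layers: no business
    // logic, just the data (plus equals/hashCode-style plumbing if needed).
    final class InvoiceDto {
        private final String number;
        private final long totalInCents;

        InvoiceDto(String number, long totalInCents) {
            this.number = number;
            this.totalInCents = totalInCents;
        }

        String getNumber() { return number; }
        long getTotalInCents() { return totalInCents; }
    }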
What Korros said.
Value Object := A small simple object, like money or a date range, whose equality isn't based on identity.
DTO := An object that carries data between processes in order to reduce the number of method calls.
These are the definitions proposed by Martin Fowler, and I'd like to popularize them.
I agree with Panagiotis: the open session in view pattern is much better than using DTOs. Put otherwise, I've found that an application is much much simpler if you traffic in your domain objects(or some composite thereof) from your view layer all the way down.
That said, it's hard to pull off, because you will need to make your HttpSession coincident with your persistence layer's unit of work. Then you will need to ensure that all database modifications (i.e. creates, updates and deletes) are intentional. In other words, you do not want it to be the case that the view layer has a domain object, a field gets modified, and the modification gets persisted without the application code intentionally saving the change. Another problem that is important to deal with is ensuring that your transactional semantics are satisfactory. Usually fetching and modifying one domain object will take place in one transactional context, and it's not difficult to make your ORM layer require a new transaction. What is challenging is a nested transaction, where you want to include a second transactional context within the first one opened.
If you don't mind investigating how a non-Java API handles these problems, it's worth looking at Rails' Active Record, which allows Ruby server pages to work directly with the domain model and traverse its associations.

Static layers in a java web application

I am building a small website for fun/learning using a fairly standard Web/Service/Data Access layered design.
To save me from constantly having to create instances of my service layer/data access layer classes, I have made the methods in them all static. I shouldn't get concurrency issues as they use local variables etc and do not share any resources (things are simple enough for this at the moment).
As far as I can see the only trade-off for this is that I am not really following a true OO approach, but then again it keeps the code much cleaner.
Is there any reason this would not be a viable approach? What sort of problems might arise later on? Would it be better to have a "factory" class that can return me instances of the service and data layer classes as needed?
You know those rides at the amusement park where they say "please keep your hands and feet inside the ride at all times"? It turns out the ride is a lot more fun if you don't. The only real trade-off is that you're not really following a true keeping-your-hands-and-feet-inside-the-ride-at-all-times approach.
The point is this -- there is a reason you should follow a "true OO approach", just as there's a reason to keep your hands and feet inside the ride -- it's great fun until you start bleeding everywhere.
The way you describe it, this isn't the "wrong" approach per se but I don't really see the problem you're trying to avoid. Can't you just create a single instance of these business objects when the server starts up and pass them to your servlets as needed?
If you're ready to throw OO out the window you might want to check out the Singleton pattern as well.
Disadvantages:
You will be unable to write unit tests as you will be unable to write mock data access/business logic objects to test against.
You will have concurrency problems as different threads try to access the static code at the same time - or if you use synchronized static methods you will end up with threads queuing up to use the static methods.
You will not be able to use instance variables, which will become a restriction as the code becomes more complex.
It will be more difficult to replace elements of the business or data access layers if you need to.
If you intend to write your application in this manner you would be better off using a language designed to work in this way, such as PHP.
You would be better off going for non-static business/data access layer classes by either:
Using the singleton pattern (creating a single instance of each class and sharing them among threads)...
Or creating instances of the classes in each thread as and when they are needed.
Keep in mind that each user/session connected to your application will be running in its own thread - so your web application is inherently multi-threaded.
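A minimal sketch of the non-static alternative (all class names are invented): construct the instances once, wire them together, and share them across request threads - they hold no mutable state, so that is safe.

    // Plain classes with instance methods instead of static ones.
    class UserDao {
        String findUserName(long id) {
            return "user-" + id; // placeholder for the real JDBC/ORM lookup
        }
    }

    class UserService {
        private final UserDao userDao;

        UserService(UserDao userDao) { this.userDao = userDao; }

        String greet(long userId) {
            return "Hello, " + userDao.findUserName(userId);
        }
    }

    // Wired up once at application start-up (singleton-style); the same
    // instances are then handed to every servlet/request that needs them.
    class ApplicationWiring {
        static final UserService USER_SERVICE = new UserService(new UserDao());
    }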
I don't really see the advantage of your design, and there are many things that could go wrong. You are saving a line of code, maybe? Here are some disadvantages of your approach:
You cannot easily replace implementations of your business logic
You cannot define instance variables to facilitate breaking up logic into multiple methods
Your assumption that multi-threaded issues will not arise is almost certainly wrong
You cannot easily mock them for testing
I really don't see that the omission of one line of code is buying you anything.
It's not really an "OO Design" issue, but more of an appropriateness. Why are you even using Java in such a procedural way? Surely PHP would be more appropriate to this kind of design (and actually save you time by not having to compile and deploy).
I would just make your business layer non-static; it will make it so much easier to maintain, change, and evolve your application.
You may have difficulty unit-testing your objects with this type of architecture. For example, if you have a layer of business objects that reference your static data access layer, it could be difficult to test the business layer because you won't be able to easily use mock data access objects. That is, when testing your business layer, you probably won't want to use the "real" methods in the data access layer because they will make unwanted changes to your database. If your data access layer was not static, you could provide mock data access objects to your business layer for testing purposes.
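To make the testing point concrete, a hedged sketch (invented class names, assuming JUnit 4 on the classpath): because the business class receives the DAO through its constructor, a test can hand it a fake that never touches the database - something that isn't possible when the data access methods are static.

    // Production code: the business layer depends on an interface,
    // not on static methods, so the data access can be swapped out.
    interface OrderDao {
        double loadOrderTotal(long orderId);
    }

    class BillingService {
        private final OrderDao orderDao;

        BillingService(OrderDao orderDao) { this.orderDao = orderDao; }

        double totalWithTax(long orderId) {
            return orderDao.loadOrderTotal(orderId) * 1.2; // flat 20% tax for the example
        }
    }

    // Unit test: a hand-rolled fake DAO, no database involved.
    class BillingServiceTest {
        @org.junit.Test
        public void addsTwentyPercentTax() {
            OrderDao fakeDao = orderId -> 100.0; // lambda over the one-method interface
            BillingService service = new BillingService(fakeDao);
            org.junit.Assert.assertEquals(120.0, service.totalWithTax(42L), 0.0001);
        }
    }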
I would think that you will have concurrency issues with all-static methods and multiple users. The web layer will run concurrent users on separate threads. Can all your static methods handle this? Perhaps, but won't they constantly be locked up, queuing the requests in single file? I'm not sure; I've never tried your idea.
