I have two services (service1 and service2), and both use the same data model, "studentModel". I'm wondering how to share the studentModel between the two services:
1. Build a studentModel.jar and have all services refer to this jar
2. Copy and paste the code
Please advise on how to reuse code in a microservices architecture.
I would recommend going even further. From my experience, the best approach would be the following:
to build a separate module with all models for the microservice
to build a separate client library (module) for the microservice
Following this approach, you can release a new client library each time you change your microservice; it will be easy to maintain and manage.
In addition, it will save you a lot of time as your system grows. Just imagine you're going to use your core service (e.g. a user service or profile service) as a dependency for all other services; copy-paste is definitely not an option in that case.
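As a rough sketch of what that might look like (the names here, StudentModel and StudentServiceClient, are illustrative assumptions, not anything from the question):

```java
// models module (e.g. built as student-model.jar): plain, dependency-free
// POJOs shared by the supplier service and all of its consumers.
public class StudentModel {
    private String id;
    private String name;

    public StudentModel() { }

    public StudentModel(String id, String name) {
        this.id = id;
        this.name = name;
    }

    public String getId() { return id; }
    public String getName() { return name; }
}

// client module (e.g. built as student-service-client.jar): depends on the
// models module and hides the wire protocol from consumers. service1 and
// service2 depend on this jar instead of copy-pasting model code.
public interface StudentServiceClient {
    StudentModel findStudent(String id);
}
```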
Update: these days we have tools such as OpenAPI and GraphQL in our toolsets. It's enough to design a good schema for the supplier service and simply use code-generation tools on the consumer side.
When it comes to microservices, it's OK to keep duplicated files, because with shared libraries you might end up with a distributed monolith. Remember the Bounded Context from DDD and apply your own judgement. No shared library means no coupling.
Then again, DRY (Don't Repeat Yourself) says you should not duplicate code, but to what extent?
A failure in one shared library should not cause all the microservices using it to fail; otherwise, the whole purpose of microservices is defeated.
There are tools for sharing code among microservices; you can have a look at https://bitsrc.io/
These are just my thoughts; there may well be better approaches.
For better version control, I would recommend building a jar and adding it as a dependency to your microservices. Alternatively, you can explore git submodules: put the duplicated code in a submodule and use it from each respective microservice module.
I'm building a REST API in Java with Jersey and Jetty. I have a complex domain in which I have a class (Workers). I want to be able to dynamically add workers through a POST. However, the business logic requires a few default workers with fixed values, so at API startup I need to add them to my db (right now it's in-memory). In terms of clean code, what's the best way to go about that?
I thought about initializing my repository with these default workers, but I feel like that violates the SRP for the WorkerRepo class; that should be the job of the application layer, as it is specific to this application, not to the domain, if that makes sense. Where should I move the logic for this initialization? Thanks!
From my perspective, I would design this just as I would design any other use case, e.g. with a SetupWorkerInteractor.
I would use an ApplicationRunner that encapsulates the application startup logic: parse the command-line args, build the application context, call the initialization process, and run the application. Of course, I would also separate these aspects into different classes, but I guess you get the picture of what I mean.
In my case I would use the ApplicationLog as the "presenter" of the setup use case's output.
For simplicity I omitted the entity and request/response models.
If I do it this way, it doesn't matter whether the SetupWorkerInputBoundary is called from the ApplicationRunner, a REST service, or e.g. a messaging system. I can also test the setup just like any other use case.
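A minimal sketch of that startup flow (the interface and method names are my assumptions, modeled on the class names mentioned above):

```java
// The use-case boundary: the runner only knows this interface, not the
// concrete SetupWorkerInteractor behind it.
public interface SetupWorkerInputBoundary {
    void setUpDefaultWorkers();
}

public class ApplicationRunner {
    private final SetupWorkerInputBoundary setupWorkers;

    public ApplicationRunner(SetupWorkerInputBoundary setupWorkers) {
        this.setupWorkers = setupWorkers;
    }

    public void run(String[] args) {
        // 1. parse command-line args, 2. build the application context ...
        setupWorkers.setUpDefaultWorkers(); // 3. initialization as a use case
        // 4. ... then start the REST server (Jersey/Jetty) as usual
    }
}
```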
After a bit of thinking, I moved the setup into the application layer, where I actually instantiate all my dependencies (in a class named ApplicationContext).
I created an interface WorkerContext and a concrete implementation WorkerContextX, where X = NAMEOFTHEAPP. This context class contains all the default values and uses the injected repo to add those defaults to it. So at the startup of the API, in the ApplicationContext class, I call the WorkerContext method that sets up my workerRepo.
This way I can easily change the setup strategy in the blink of an eye, and it no longer violates the SRP in the repo. I now also respect the DIP, as the repo (which was in the domain) doesn't rely on things dictated by the application layer.
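A sketch of that arrangement, assuming a simple Worker constructor and a WorkerRepo.add method (both assumptions on my part):

```java
public interface WorkerContext {
    void setUpWorkers(WorkerRepo repo);
}

// One concrete context per application; the hard-coded defaults live here,
// in the application layer, instead of inside the repository itself.
public class WorkerContextMyApp implements WorkerContext {
    @Override
    public void setUpWorkers(WorkerRepo repo) {
        repo.add(new Worker("default-worker-1"));
        repo.add(new Worker("default-worker-2"));
    }
}
```

ApplicationContext would then call something like new WorkerContextMyApp().setUpWorkers(workerRepo) once at startup.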
I posted this because I thought it was a decent solution that could help other people; feel free to critique or improve it.
The Camel Java DSL provides type-safety, code completion and proper support for refactoring. It also helps to modularize and (unit-)test your code in a great manner.
As for the Camel XML syntax, the only advantage I see is being able to modify and reload routes at runtime (e.g. via hawtio).
Obviously I'm really missing something here - so what is the rationale behind the use of Camel XML routes today?
In-place editing of routes (although I would discourage doing that)
Quick-and-dirty one-off routes (e.g. routing from a test to a QA environment) or very simple projects, when you have a container like Karaf or ServiceMix: no need to fire up your IDE and compile, just write the XML and drop it into the deploy folder
Maybe easier for non-developers
It is a matter of taste and preference.
I have used both, and I have to say the Java DSL is by far the easier and more powerful of the two.
But the best approach is to combine them, especially if you are deploying to an OSGi environment like Karaf.
Use Blueprint to define your beans and RouteBuilder beans and bind them; the actual implementation is done in RouteBuilder classes. In Blueprint you can define properties and do a few other things as well, but the actual behavior of the routes is implemented in Java.
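For illustration, a minimal RouteBuilder of the kind you would declare as a bean in the Blueprint XML and bind to the Camel context there (the endpoint URIs are placeholders):

```java
import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // the route's behavior lives in Java:
        // type-safe, refactorable, unit-testable
        from("file:orders/in")
            .routeId("order-route")
            .log("Processing ${file:name}")
            .to("jms:queue:orders");
    }
}
```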
First off, when you say XML do you mean Spring XML DSL or Blueprint XML DSL? While they do share most of their syntax, they are not identical. Blueprint XML DSL is the preferred way of defining Camel routes for an OSGi environment (i.e. Apache Karaf runtimes) while Spring XML DSL is nowadays more or less a legacy of the times when you could only use Spring through XML.
Having said that, I think it really boils down to personal preference: a lot of developers still prefer XML over Java for defining routes and find it easier to read and follow. I myself have always preferred the Java DSL, since it's more flexible and powerful, but I have to admit that XML provides a better overview of the routes.
I am about to develop a Java desktop application, which I would like to keep modular so that it is easy for me to customize. As an example, let's take a billing system. Right now I can divide it into a few modules:
Accounting
Billing
Print Bill
Email Bill
If someone told me "I don't need to print the bill", I could remove the "Print Bill" module, and so on.
I have seen some applications (in C++) that were developed as separate applications and somehow combined together.
In Java, what is the best way to do module-wise development? The best way I know is creating packages and managing things via interfaces.
Whatever it is, the main advantage would be minimizing the effort when customizations appear. What are your suggestions?
Answering a question about the "best way" is hard, because the only real answer is "it depends", on the specific circumstances as well as the opinion of the developer.
What I would suggest is to take an approach that defines clear interfaces between modules, and maybe split them into separate jars. This allows you to hide the implementation details behind the abstraction of the interface, so callers do not need to care about them and only call the correct interface.
Also, for high customizability I'd favor "configuration over code", which means that you select the modules in use through configuration, not by deploying specific binaries. With separate jars, both are possible; see the sketch after this answer.
So your idea of using different packages and interfaces seems very valid to me. Maybe I'd pack them into different jars or use them depending on the configuration.
Using a bunch of different executables and connecting them by pipelining is also an option, but I somewhat dislike it, because it adds effort in handling the communication between the executables. That is unnecessary overhead when your application can handle it all in one process.
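One way to realize "configuration over code" in plain Java is the built-in ServiceLoader; each feature jar registers its implementation under META-INF/services, so removing the jar removes the feature. A minimal sketch (the BillModule interface is made up for this example):

```java
import java.util.ServiceLoader;

interface BillModule {
    String name();
    void process(String billId); // simplified; a real app would pass a Bill object
}

public class ModuleLoader {
    public static void main(String[] args) {
        // Discovers whatever implementations are on the classpath: dropping
        // the "print bill" jar removes that feature without any code change.
        for (BillModule module : ServiceLoader.load(BillModule.class)) {
            System.out.println("Loaded module: " + module.name());
        }
    }
}
```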
Separate the parts into artifacts, built into separate jars. Hide everything behind interfaces. Then have an "application" project that uses all the needed artifacts and integrates them together. Use a dependency management tool like Maven or Gradle to build it all, and Spring to integrate the modules in the resulting application.
For a desktop application, you may want to use a platform like Eclipse RCP or NetBeans RCP; they each have their own plugin system and dependency injection / integration frameworks.
A naive approach would be to use packages only to functionally partition your code. You also want your packages to have as few dependencies as possible. Any shared dependencies should be "moved up" into another package like "core", or something else that would be a dependency of all the packages that need it.
A quick example based on yours :
package client would make it possible to manipulate clients
package accounting would handle client account information and depend on client
package billing would handle billing and depend on client (or accounting?)
package billing.receipt would handle receipt generation and depend on billing (and indirectly on client)
package billing.receipt.printing would handle receipt printing and depend on billing.receipt (and indirectly on billing and client)
package billing.receipt.email would handle receipt email sending and depend on billing.receipt (and indirectly on billing and client)
For a more industrial version, you should separate your code into different Java projects with interdependencies, which you can build into a single application with a tool like Maven. The package separation would still hold, but using different projects and a formal build process helps enforce loose coupling, as the sketch below illustrates.
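To make the dependency direction concrete, a tiny sketch using the example packages (the class names are invented): billing.receipt.printing depends on billing.receipt's public types, never the other way around.

```java
// file: billing/receipt/ReceiptRenderer.java
package billing.receipt;

public interface ReceiptRenderer {
    void render(Receipt receipt); // Receipt lives in billing.receipt too
}

// file: billing/receipt/printing/PrintingReceiptRenderer.java
package billing.receipt.printing;

import billing.receipt.Receipt;
import billing.receipt.ReceiptRenderer;

public class PrintingReceiptRenderer implements ReceiptRenderer {
    @Override
    public void render(Receipt receipt) {
        // a real application would send this to a printer
        System.out.println(receipt);
    }
}
```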
We are, in my company, at the beginning of a huge refactoring to migrate from home-made database access to Hibernate.
We want to do it cleanly, and therefore we will use entities, DAOs, etc.
We use Maven, and we will therefore have two Maven projects: one for the entities, one for the DAOs (is that good, or is it better to have both in the same project?).
Knowing that, our question is the following: our business layer will use the DAOs.
As most of the DAOs' methods return entities, our business layer will have to know about entities. Therefore, our business layer will have to know about Hibernate, as our entities will be Hibernate-annotated (or at least JPA-annotated).
Is this a problem? If yes, what is the solution that gives the business layer the minimum knowledge of the data layer?
Thank you,
Seb
Here is how I typically model the dependencies, along with the reasoning.
Let's distinguish 4 things:
a. the business logic
b. entities
c. DAO interfaces
d. DAO implementations
For me, the first three belong together, and therefore belong in the same Maven module, and even in the same package. They are closely related, and a change in one will very likely cause a change in another. Things that change together should be close together.
The implementation of the DAO is to a large extent independent of the business logic. Even more importantly, the business logic should NOT depend on where the data comes from; that is a completely separate concern. So if your data comes from a database today and from a web service tomorrow, nothing should change in your business logic.
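In code, that layout might look like the following (the names are illustrative); only the last class would live in the separate DAO-implementation module and import Hibernate:

```java
// business module: entity + DAO interface, no Hibernate imports needed here
// apart from the (optional) JPA annotations discussed below.
public class Customer {
    private Long id;
    private String name;
    // getters/setters omitted
}

public interface CustomerDao {
    Customer findById(Long id);
    void save(Customer customer);
}

// persistence module: the only place that knows about Hibernate
public class HibernateCustomerDao implements CustomerDao {
    private final org.hibernate.SessionFactory sessionFactory;

    public HibernateCustomerDao(org.hibernate.SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public Customer findById(Long id) {
        return sessionFactory.getCurrentSession().get(Customer.class, id);
    }

    @Override
    public void save(Customer customer) {
        sessionFactory.getCurrentSession().persist(customer);
    }
}
```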
You are right, Hibernate (or JPA) annotations on the entities violate that rule to some extent. You have three options:
a. Live with it. While it creates a dependency on Hibernate artifacts, it does not create a dependency on any Hibernate implementation. So in most scenarios, having the annotations around is acceptable.
b. Use XML configuration. This fixes the dependency issue, but in my opinion at the rather hefty cost of dealing with XML-based configuration. Not worth it, in my opinion.
c. Don't use Hibernate. I don't think the dependency on annotations is the important problem you have to consider. The more serious problem is that Hibernate is rather invasive: when you navigate an object graph, Hibernate will trigger lazy loading, i.e. the execution of SQL statements at points that are not at all obvious from looking at the code. This basically means your data access code starts to leak into every part of the application if you are not careful. One can keep this contained, but it is not easy, and it requires great care and an in-depth understanding of Hibernate that most teams don't have when they start with it. So Hibernate (or JPA) trades the simple but tedious task of writing SQL statements for the difficult task of creating a software architecture that keeps mostly invisible dependencies in check. I would therefore recommend avoiding Hibernate altogether and trying something simpler. I personally have high hopes for MyBatis, but haven't used it in real projects yet.
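To make the lazy-loading point concrete, a small sketch (assuming an Order entity with a lazily loaded items collection, which is the JPA default for @OneToMany):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class LazyLoadingExample {
    static int countItems(SessionFactory factory, long orderId) {
        try (Session session = factory.openSession()) {
            Order order = session.get(Order.class, orderId); // SELECT ... FROM orders
            // Looks like a plain getter, but this call fires a second,
            // invisible SELECT; after the session is closed it would throw
            // LazyInitializationException instead.
            return order.getItems().size();
        }
    }
}
```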
More important than managing the dependencies between technical layers is, in my opinion, the separation of domain modules. And I'm not alone in that opinion.
I would use separate artifacts (i.e. Maven modules) only to separate things that you want to deploy independently. If, for example, you have a rich client and a backend server, two Maven artifacts for those, plus maybe a third for common code, make sense. For everything else I'd simply use packages, plus tests that fail when illegal dependencies get created. For those I use Degraph, but I'm the author of that tool, so I might be biased.
Recently I came across this JavaLobby post, http://java.dzone.com/articles/how-changing-java-package, on packaging Java code by feature.
I like the idea, but I have a few questions about this approach. I asked my question but didn't get a satisfactory reply; I hope someone on Stack Overflow can clarify it.
I like the idea of package-by-feature, which greatly reduces the time spent moving across packages while coding, since all the related stuff is in one place (the package). But what about interactions between the services in different packages?
Suppose we are building a blog app and we put all user-related operations (controllers/services/repositories) in the com.mycompany.myblog.users package, and all blog-post-related operations (controllers/services/repositories) in the com.mycompany.myblog.posts package.
Now I want to show a user profile along with all the posts that the user has written. Should I call myblog.posts.PostsService.getPostsByUser(userId) from myblog.users.UserController.showUserProfile()?
What about coupling between packages?
Also, wherever I read about package-by-feature, everyone says it's a good practice. Then why do many book authors and even frameworks encourage grouping by layers? Just curious to know :-)
Take a look at Uncle Bob's Package Design Principles. He explains the reasons and motivations behind those principles, which I have elaborated on below:
Classes that get reused together should be packaged together, so that the package can be treated as a sort of complete product available to you. And those which are reused together should be separated from the ones they are not reused with. For example, your logging utility classes are not necessarily used together with your file I/O classes, so package them separately. But the logging classes may be related to one another, so create a sort of complete product for logging (say, for want of a better name, commons-logging) packaged in a (re)usable jar, and another separate complete product for the I/O utilities (again, for want of a better name, commons-io.jar).
If you update, say, the commons-io library to support java.nio, you may not necessarily want to make any changes to the logging library. So separating them is better.
Now let's say you want your logging utility classes to support structured logging, for some sort of log analysis by tools like Splunk. Some clients of your logging utility may want to move to the newer version; others may not. So when you release a new version, package together all the classes that are needed and reused together for the migration. That way, some clients of your utility classes can safely delete the old commons-logging jar and move to the new one, while other clients stay on the old jar, but no client is forced to keep both jars (new and old) just because you made them use some classes from the older packaging.
Avoid cyclic dependencies: a depends on b, b on c, c on d, but d depends on a. Such a scenario is obviously a problem, as it becomes very difficult to define layers or modules, and you cannot vary them independently of each other.
Also, you could package your classes such that if a layer or module changes, other modules or layers do not necessarily have to change. So, for example, if you decide to move from an old MVC framework to a REST API, only the view and controller need changes; your model does not.
I personally like the "package by feature" approach, although you do need to apply quite a lot of judgement on where to draw the package boundaries. It's certainly a feasible and sensible approach in many circumstances.
You should probably achieve coupling between packages and modules through public interfaces; this keeps the coupling clean and manageable.
It's perfectly fine for the "blog posts" package to call into the "users" package as long as it uses well-designed public interfaces to do so (see the sketch after this answer).
One big piece of advice, though, if you go down this approach: be very thoughtful about your dependencies, and in particular avoid circular dependencies between packages. A good design should look like a dependency tree, with the higher-level areas of functionality depending on a set of common services, which depend on libraries of utility functions, and so on. To some extent this will start to look like architectural "layers", with front-end packages calling into back-end services.
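Using the packages from the question, a minimal sketch of such interface-based coupling (UserProfile and Post are assumed helper types, not from the question):

```java
// file: com/mycompany/myblog/posts/PostsService.java
package com.mycompany.myblog.posts;

import java.util.List;

public interface PostsService {
    List<Post> getPostsByUser(String userId);
}

// file: com/mycompany/myblog/users/UserController.java
package com.mycompany.myblog.users;

import com.mycompany.myblog.posts.Post;
import com.mycompany.myblog.posts.PostsService;
import java.util.List;

public class UserController {
    private final PostsService postsService; // coupling only to the interface

    public UserController(PostsService postsService) {
        this.postsService = postsService;
    }

    public UserProfile showUserProfile(String userId) {
        // the users package calls into posts via its public interface
        List<Post> posts = postsService.getPostsByUser(userId);
        return new UserProfile(userId, posts);
    }
}
```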
There are many other aspects of package design besides coupling. I would suggest looking at the OOAD principles, especially the package design principles:
REP (Release/Reuse Equivalency Principle): the granule of reuse is the granule of release.
CCP (Common Closure Principle): classes that change together are packaged together.
CRP (Common Reuse Principle): classes that are used together are packaged together.
ADP (Acyclic Dependencies Principle): the dependency graph of packages must have no cycles.
SDP (Stable Dependencies Principle): depend in the direction of stability.
SAP (Stable Abstractions Principle): abstractness increases with stability.
For more information, you can read the book "Agile Software Development: Principles, Patterns, and Practices".