Spring library to share commons - Java

In a context of several Spring Boot apps sharing some components, is it considered bad practice to publish a shared artifact that is used in those apps?
I'm planning to reuse abstract controllers and services and low-level classes (for statistics requiring fast write access, so web services are excluded).

There are two contrary paradigms here; both definitely make sense on their own, but in this case they conflict. Hardcore microservice evangelists would roughly claim that there should be no common dependencies at all, to reduce coupling between the different services/applications. That means that even when the services share many architectural patterns, you end up with a lot of copy-and-paste code. And exactly that makes the don't-repeat-yourself (DRY) faction angry, because it also makes sense to ask why you should not share already implemented functionality.
So the correct answer is: "It depends." It will always be a tradeoff between following the one principle at the expense of the other. You can only make an economical, cost-effective decision: figure out what is feasible for your infrastructure and what causes less technical debt.
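If you do go the shared-artifact route, keeping the library thin helps limit the coupling. Below is a minimal sketch of what such a shared abstraction might look like, assuming Spring Web on the classpath; CrudService and AbstractCrudController are hypothetical names, not an established API. A concrete application would extend the base class, annotate it with @RestController and a base @RequestMapping, and supply its own service.

```java
import java.util.Optional;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Hypothetical service abstraction shared across the apps.
interface CrudService<T, ID> {
    Optional<T> findById(ID id);
}

// Hypothetical reusable base controller; a concrete app extends it,
// adds @RestController and a base @RequestMapping, and plugs in its service.
public abstract class AbstractCrudController<T, ID> {

    protected abstract CrudService<T, ID> service();

    @GetMapping("/{id}")
    public ResponseEntity<T> get(@PathVariable ID id) {
        return service().findById(id)
                .map(ResponseEntity::ok)
                .orElse(ResponseEntity.notFound().build());
    }
}
```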

What is the relationship between DDD and microservices?

The microservice architecture is well known and will not be repeated here. When we create microservices, we need to create services with high cohesion and low coupling. The bounded context in DDD perfectly matches this requirement, and a bounded context can be understood as one microservice process.
The above describes the similarities between the two from a more intuitive point of view.
Once a system becomes complicated, we need divide and conquer to break the problem down. There are generally two ways to do this: along the technical dimension and along the business dimension. The technical dimension is similar to MVC, while the business dimension refers to dividing the system by business area.
The microservice architecture emphasizes dividing and conquering along the business dimension to deal with system complexity, and DDD also focuses on the business perspective. If the goals (business dimensions) pursued by the two coincide, what are the connections and differences in concrete practice?
DDD is a strategy for modeling your business using OOP, implementing business requirements directly within the model. DDD helps build software more effectively by allowing better mutual understanding between software programmers and business experts.
Microservices is a software architecture. It drives how you technically structure your software to achieve good execution performance, scalability, security, and maintainability of a growing code base without expanding the technical debt. It helps build software more efficiently.
Actually the two concepts are largely orthogonal, although both are needed for good software quality. You could, for instance, build a non-DDD microservice, or a monolithic DDD application.
In my opinion, when we think of distributed systems and microservices, we think of DDD as well. The two approaches don't live alone; they complement each other.
There are many reasons why we segregate our applications into microservices, whether driven by business or technical concerns, and we always use DDD concepts to translate the business world into our tech world.
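To make the bounded-context idea concrete, here is a minimal sketch of an aggregate root owned by a single, hypothetical ordering context; the names and rules are illustrative only, and the example assumes Java 16+ for records. Other services would interact with this context through its published interface rather than by sharing its internal model.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical aggregate root of an "ordering" bounded context,
// deployable as its own microservice. Domain rules live in the model.
public class Order {
    private final List<OrderLine> lines = new ArrayList<>();
    private boolean placed;

    public void addLine(String sku, int quantity) {
        if (placed) throw new IllegalStateException("order already placed");
        lines.add(new OrderLine(sku, quantity));
    }

    public void place() {
        if (lines.isEmpty()) throw new IllegalStateException("empty order");
        placed = true; // invariant enforced inside the aggregate
    }

    record OrderLine(String sku, int quantity) {}
}
```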

Adding dependencies to microservices

I have built a few microservices that consume a number of external services. A few of these external services are consumed by more than one of the microservices I have built. I built the connectors to these external services as a library project and included it as a dependency in all my microservice projects. However, I read that all logic for a microservice should be self-contained and that duplication of logic is ok. If that's the case, is it recommended to define these connectors within each and every microservice instead of having a shared library?
...all logic for microservices should be self-contained and
duplication of logic is ok
I think this is the core of the issue you are struggling with. Is this statement actually true?
A quick google search later:
http://www.simplicityitself.io/our%20team/2015/01/12/sharing-code-between-microservices.html
This article talks about this exact question, which we can now frame as What is the appropriate level of re-use in microservice architecture?
The author provides a list of reasons why developers feel the need to share code, ordered from lowest to highest in terms of coupling and loss of isolation:
Leveraging existing technical functionality.
Sharing data schemas: using a class, for example, as an enforcement of a shared schema.
Sharing data sources: use of the same database by multiple services.
Though this list covers most of the reasons, I would add another important reason to share code, which has to do with a common framework for rapidly standing up microservices, commonly called the Microservice Chassis pattern.
The author goes on to say:
It is of utmost importance to pin down your motivation for wanting to
share code, as unfortunately there is no right answer to this
question. Like everything else, it’s contextual.
So, all that said, should you centralize your connectors or not?
Well, where do these dependencies fit in on our list? And what degree of coupling can you endure before you're no longer doing microservices but building a monolith instead?
These are not easy questions to answer, but hopefully this will help guide you to the correct conclusion.
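As an illustration only: if the connectors stay at the "leverage existing technical functionality" end of the list, the shared library can be kept very thin, wrapping transport concerns and nothing else. A minimal sketch, with a hypothetical external payments service and plain JDK 11+ HTTP:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical shared connector: transport only, no business logic.
// Each consuming microservice maps the raw response to its own domain types.
public class PaymentsConnector {
    private final HttpClient http = HttpClient.newHttpClient();
    private final URI baseUri;

    public PaymentsConnector(URI baseUri) {
        this.baseUri = baseUri;
    }

    public String getPaymentStatus(String paymentId) throws Exception {
        HttpRequest request = HttpRequest
                .newBuilder(baseUri.resolve("/payments/" + paymentId))
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

Kept at this level, the dependency shares plumbing rather than domain logic, which is the least coupling-prone kind of reuse on the list above.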

Migrating a large-scale application from JavaEE to Akka

Suppose I have a very large-scale server-side web application written in JavaEE (and the related technologies classically combined with it), and I have decided to migrate it completely to Akka (and the technologies usually combined with it, including moving the code to Scala). The reasons for the migration decision are not important: suppose I have to do it, and that's all there is to it.
My question is: What would be the strategy to follow here, aiming to optimize the migration time and the scalability of the resulting application?
If the question lacks details, I can provide some, although I would like to hear strategies without getting very specific.
This is an open-ended question, but let me try to give you some ideas. Having worked with both J2EE and Play2/Akka/Spray.io (Scala) based systems, I can provide you with the following high-level, general guidance for migration.
Partition your system: Partition your current system based on functionality and rank the partitions according to their criticality to the business, your stakeholders, and clients. Partitioning can be done along different dimensions (architectural components at runtime, business features, development teams/modules, etc.). You also need to find the dependencies between these partitions.
Identify candidate partition: Once you have ranked the partitions, it's useful to pick the smallest possible partition that overlaps in as many dimensions as possible and has the least amount of coupling. Usually this is the case if your initial architecture is modular.
Implement a prototype: Take the candidate partition and create a prototype that provides the same functional capability (a minimal sketch follows below). Now evaluate and compare the new capability against the old in terms of various quality attributes (performance, modifiability, extensibility, etc.). The prototype will also give you an estimate of the technical risk, challenges, and effort.
Create a new architecture: I think at this point you should have enough input to create the first version of your new architecture. Also identify how the capabilities of the other partitions will be implemented in this new architecture. Selecting the most complex partition and trying to map it to this new architecture is a really good exercise and can massively reduce your technical risk later on.
Field the prototype: Try to field the prototype to a small subset of users/stakeholders and get feedback. Decoupling the prototype using REST/pub-sub interfaces is a good idea.
Plan for migration: Create a plan and schedule for the rest of your system.
I can be more specific if you provide more targeted questions.
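To illustrate the prototype step, here is a minimal sketch using Akka's classic Java API; the actor, message, and service names are hypothetical, not taken from any real migration:

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

// One migrated partition fronted by an actor: messages replace method
// calls, and the legacy service call goes where the comment indicates.
public class InvoiceActor extends AbstractActor {

    public static class Generate {
        public final String customerId;
        public Generate(String customerId) { this.customerId = customerId; }
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Generate.class, msg -> {
                    // Delegate to the existing (legacy) service code here,
                    // then reply asynchronously to the sender.
                    getSender().tell("invoice-for-" + msg.customerId, getSelf());
                })
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("migration-prototype");
        ActorRef invoices =
                system.actorOf(Props.create(InvoiceActor.class), "invoices");
        invoices.tell(new Generate("42"), ActorRef.noSender());
    }
}
```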

Which is more scalable? A simple CRUD webapp vs. a webapp talking to a REST service

I think the title says it clearly. I am no scalability guru, but I am on the verge of creating a web application which needs to scale to large data sets and possibly many (I won't exaggerate here; let's say thousands of) concurrent users.
MongoDB is the data repository, and I am torn between writing a simple Play! webapp talking to MongoDB versus a Play! app talking to a REST service app (in Scala) which does the heavy lifting of all business logic and persistence.
Part of me thinks that having the business logic wrapped as a service is future-proof and allows deploying just the webapp on multiple nodes (scaling). I come from the Java EE stack, and Play! is a rebel among Java web frameworks. This approach assures me that I can move away from Play! if needed.
Part of me also thinks that a Play! app plus a Scala service app is additional complexity and may not be fruitful in the long run.
Any suggestions are appreciated.
NOTE: I am a newbie to Scala, MongoDB and Play!. Pardon me if my question is silly.
Scalability is an engineering art, which means that you have lots of parameters and must apply your experience to the specific values of those parameters to come to a solution. So general advice, without more specific data about your problem, is hard.
Having said that, from experience, some general advice:
Keep your application as clean and simple as possible. This allows you to keep your options open. In your case, start with a simple Play app. Concentrate on clean code so you can easily rework what you have into a different architectural model (with clean code, that's simpler than you'd think :-))
Measure, don't guess, where the bottlenecks are. It's simple enough to flood a server with requests. Use profiling, memory dumps, whatever, to pinpoint your scalability bottleneck.
Only then, with a working app in hand (which you could launch early with) and data on where your scaling bottlenecks are, can you make decisions about what to split off into (horizontally scalable) services.
At the outset, services look nice and scalable, but they often get you into an early mess: services need to communicate with each other, so you start introducing messaging, etcetera. Keep it simple, measure, optimize.
The question does not appear to be silly. For me, encapsulating your data access behind the REST layer does not directly improve the scalability of the application. (Not significantly, anyway; of course, there is the server that can perform HTTP caching, handle request queues, etc., but from your description, your application looks small enough.) You can achieve similar scalability without the REST layer. Having said that, the service layer could have an indirect impact.
First, it makes your application cleaner (a UI talking directly to the DB is messy) and much more maintainable. The REST layer can provide the middle tier that you may need in your application. Also, a correctly designed REST layer will have to be resource-driven, and in my experience a resource-driven architecture is a good middle ground between ease of implementation and highly scalable design.
So I strongly suggest that you use the service layer (REST is the way to go :) ), but scalability in itself cannot justify the decision.
Putting the service between the UI and the data source encapsulates the data source, so the UI need not know the details of how data is persisted. It also prevents the UI from reaching directly into the data source. This allows the service to authenticate, authorize, validate, bind, and perform business logic as needed.
The downside is a slight speed bump for the app.
I'd say that adding the service has a small cost and a big upside. I'd vote for that.
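For illustration, here is a minimal sketch of what that service boundary could look like in plain Java; all names are hypothetical, and the repository stands in for whatever MongoDB driver wrapper you use. The point is that the web layer depends only on the interface, so validation, authorization, and the persistence choice stay behind it:

```java
import java.util.Optional;

// Hypothetical domain type and repository abstraction (Java 16+ record).
record Article(String title, String body) {}

interface ArticleRepository {
    Optional<Article> findBySlug(String slug);
    Article save(Article article);
}

// The service boundary the UI talks to; it never sees the data source.
public interface ArticleService {
    Optional<Article> findBySlug(String slug);
    Article publish(String title, String body);
}

// One possible implementation, backed by MongoDB behind the repository.
// Validation and business rules live here, not in the controller.
class MongoArticleService implements ArticleService {
    private final ArticleRepository repository;

    MongoArticleService(ArticleRepository repository) {
        this.repository = repository;
    }

    @Override
    public Optional<Article> findBySlug(String slug) {
        return repository.findBySlug(slug);
    }

    @Override
    public Article publish(String title, String body) {
        if (title == null || title.isBlank())
            throw new IllegalArgumentException("title required");
        return repository.save(new Article(title, body));
    }
}
```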
The answer, as usual, is: it depends.
If there is some heavy lifting involved and some business logic: yup, that is best put into its own layer, and if you add a RESTful interface to it, you can serve it up to whatever front-end technology you want.
Nowadays, people often don't bother with a separate web app layer but serve the data via AJAX directly to the client.
You might consider adding a layer, if you either need to maintain a lot of user session state or have an opportunity to cache data on the presentation layer. There are more reasons why you would want a presentation layer, for example, serving out different presentations to different devices/clients.
Don't just add layers for complexity's sake, though.
I might add that you should try to employ the HATEOAS principle. That will ease things significantly when scaling out the solution.

What remoting approach for a Java application would you recommend?

I wonder what the best way is to integrate Java modules developed as separate J(2)EE applications. Each of these modules exposes Java interfaces. The POJO entities (Hibernate) are used along with those Java interfaces; there are no DTO objects. What would be the best way to integrate these modules, i.e. one module calling another module's interface remotely?
I was thinking about EJB3, Hessian, SOAP, and JMS. There are pros and cons to each of the approaches.
Folks, what is your opinion or your experience?
Having dabbled with a few of the remoting technologies and found them universally unfun, I would now use Spring remoting as an abstraction from the implementation.
It allows you to concentrate on writing your functionality and let Spring handle the remote part with some configuration. You have the choice of several implementations (RMI, Spring's HTTP invoker, Hessian, Burlap, and JMS). The abstraction means you can pick one implementation and simply swap it out if your needs change.
See the SpringSource docs for more information.
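As a sketch of that swap-ability, here is roughly what the classic Spring remoting configuration looks like with the HTTP invoker implementation; the service interface, URL, and bean names are hypothetical. (Note that this classic remoting support has been deprecated and removed in recent Spring versions, so treat this as illustrative of the pattern rather than current advice.)

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;
import org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter;

// Hypothetical shared interface; both sides compile against it only.
interface AccountService {
    double balance(String accountId);
}

// Server application: expose an existing bean over HTTP.
// The bean name becomes the URL path under BeanNameUrlHandlerMapping.
@Configuration
class ServerRemotingConfig {
    @Bean(name = "/AccountService")
    HttpInvokerServiceExporter accountServiceExporter(AccountService impl) {
        HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
        exporter.setService(impl);
        exporter.setServiceInterface(AccountService.class);
        return exporter;
    }
}

// Client application: a proxy that looks like a local AccountService.
// Swapping to RMI or Hessian means changing only this bean definition.
@Configuration
class ClientRemotingConfig {
    @Bean
    HttpInvokerProxyFactoryBean accountService() {
        HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
        proxy.setServiceUrl("http://example.com:8080/remoting/AccountService");
        proxy.setServiceInterface(AccountService.class);
        return proxy;
    }
}
```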
The standard approach would be to use plain RMI between the various service components, but this brings issues of sharing your Java interfaces and versioning changes to your domain model, especially if you have lots of components using the same classes.
Are you really running each service in a separate VM? If these EJBs are always talking to each other, then you're best off putting them in the same VM and avoiding any remote procedure calls, since these services can use their local interfaces.
The other thing that may bite you is using Hibernate POJOs. You may think that these are simple POJOs, but behind the scenes Hibernate has been busy with CGLib, trying to do things like allow lazy initialization. If these beans are serialized and passed across remote boundaries, you may end up with odd Hibernate exceptions getting thrown. Personally, I'd prefer to create simple DTOs or write the POJOs out as XML to pass between components. My colleagues would go one step further and write custom wire protocols for transferring the data, for performance reasons.
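A minimal sketch of the DTO approach mentioned above; the Customer entity and its fields are hypothetical:

```java
import java.io.Serializable;

// Stand-in for the (hypothetical) Hibernate entity.
class Customer {
    private String name;
    private String email;
    String getName() { return name; }
    String getEmail() { return email; }
}

// A flat, serialization-safe copy of the entity's state, with no
// Hibernate proxies or lazy collections attached.
public class CustomerDTO implements Serializable {
    private final String name;
    private final String email;

    public CustomerDTO(String name, String email) {
        this.name = name;
        this.email = email;
    }

    // Copy fields while the Hibernate session is still open, forcing
    // lazy state to be resolved before the object crosses the wire.
    public static CustomerDTO from(Customer entity) {
        return new CustomerDTO(entity.getName(), entity.getEmail());
    }

    public String getName() { return name; }
    public String getEmail() { return email; }
}
```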
Recently I have been using the Mule ESB to integrate various service components. It's quite nice, as you can have a mix of RMI, sockets, web services, etc., without having to write most of the boilerplate code.
http://www.mulesource.org/display/COMMUNITY/Home
Why would you go with anything other than the simplest thing that works?
In your case that sounds like EJB3 or maybe JMS, depending on whether the communication needs to be synchronous or asynchronous.
EJB3 is by far the easiest, being built on top of RMI with the container providing all the additional features you might need: security, transactions, etc. Presumably your POJOs are in a shared jar and can therefore simply be passed between your EJBs, although I tend towards passing value objects myself. The other benefit of EJB, when done right, is that it's the most performant (that's just my opinion, btw ;-).
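For context, a minimal sketch of what the EJB3 route looks like; the interface and bean names are hypothetical:

```java
import javax.ejb.Remote;
import javax.ejb.Stateless;

// Hypothetical business interface, packaged in a jar both modules share.
@Remote
public interface PricingService {
    double quote(String productId);
}

// The container provides remoting, security, and transactions; the bean
// itself contains only the business logic.
@Stateless
class PricingServiceBean implements PricingService {
    @Override
    public double quote(String productId) {
        return 42.0; // placeholder business logic
    }
}
```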
JMS is a little more involved, but not much, and a system based on asynchronous communication affords certain niceties in terms of parallelizing tasks, etc.
The performance overhead of web services, the inevitable extra configuration, and the additional points of failure make them, IMHO, not worth the hassle unless you have a requirement that mandates their use; I'm thinking of interop with non-Java clients or providing data to external parties here.
If you need network communication between Java-only applications, Java RMI is the way to go. It has the best integration, the most transparency, and the least overhead.
If, however, some of your clients aren't Java-based, you should probably consider other options (Java RMI actually has an IIOP dialect, which allows it to interact with CORBA; however, I wouldn't recommend doing this unless it's for some legacy-code integration). Depending on your needs, web services are probably your friend. If you are concerned about the network load, you could go with web services over Hessian.
Do you literally mean remotely? As in running in a different environment, with therefore different availability characteristics and network overheads?
Assuming yes, my first step would be to take a service approach and set aside the invocation technology for a moment. Just consider the design and meaning of your services. You know they are comparatively expensive to invoke, hence small, busy interfaces tend to be a bad thing. You know that the service system might fail between invocations, so you may favour stateless services. You may need to retry requests after failure, so you may favour idempotent service designs.
Then consider availability relationships. Can your client work without the remote system? In some cases you simply can't progress if the remote system isn't available (e.g. you can't enable the employee if you can't get to the HR system); in other cases you can adopt a "fire-and-tell-me-later" philosophy: queue up the requests and process the responses later.
Where there is an availability dependency, simply exposing a synchronous interface seems to fit. You can do that with SLSB EJBs; if everything is Java EE, that works. I tend to generalise, expecting that if my services are useful then non-Java-EE clients may want them too, so SOAP (or REST) tends to be useful. These days adding a web service interface to your SLSB is pretty trivial.
But my pet theory is that any sufficiently large IT system ends up needing asynchronous communication: you need to decouple the availability constraints. So I would tend to look for a JMS-style relationship. An MDB facade in front of your services, or SOAP over JMS, is not too hard to do (a sketch follows below). Such an approach tends to highlight the failure-case design issues that were probably lurking anyway; JMS tends to make you think: suppose I don't get an answer? Suppose my answer comes late?
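Here is a minimal sketch of such an MDB facade; the queue name is hypothetical, and the exact activation-config property names vary slightly between containers and spec versions:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Requests arrive on a queue and are delegated to the real service,
// so callers never block on its availability.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "jms/EmployeeRequests")
})
public class EmployeeRequestListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            String payload = ((TextMessage) message).getText();
            // Delegate to the underlying service here; any reply would go
            // to the message's JMSReplyTo destination.
            System.out.println("processing request: " + payload);
        } catch (Exception e) {
            // Rethrow unchecked so the container handles redelivery or
            // dead-lettering according to its configuration.
            throw new RuntimeException(e);
        }
    }
}
```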
I would go for SOAP.
JMS would be more efficient, but you would need to code up a message-driven bean for each interface.
SOAP, on the other hand, comes with lots of useful toolkits that will generate your message definition (WSDL) and all the necessary handlers (client and server) when given an EJB.
With SOAP you can (but don't have to) deal with certificate security and secure connections over public networks. As the default protocol is HTTP over port 80, you will have minimal pain with firewalls etc. SOAP is also great for heterogeneous clients (in your case, anything that isn't J2EE), with good support for most common languages on most common platforms.
