So far I've been studying micro-services architecture and decoupling a monolithic monster.
I'm currently using Feign clients to simplify communication between micro-services.
As I'm neck-deep in the code of my monolithic application, I've found that I'm using far too many Feign calls, which is compromising my dream of a totally decoupled application with independent micro-services.
So my question is about gathering ideas or just opinions, because on the internet it's all rainbows and flowers about Feign; no one points out that, after all, it is a form of coupling, because micro-service A won't deliver any answer unless it receives data from B.
So can you think of any way to reduce Feign calls? Or do you even see this as a drawback of micro-services architecture?
You can't avoid communication in a distributed system; the services must call each other to avoid duplicating logic and data. If you can redesign the system, you can potentially swap some of the synchronous Feign calls for asynchronous events, e.g. by using Apache Kafka.
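A minimal sketch of what that swap might look like with Spring Kafka (the topic name and event payload are invented for illustration, not taken from the question):

    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class OrderEventPublisher {

        private final KafkaTemplate<String, String> kafkaTemplate;

        public OrderEventPublisher(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        // Instead of service A blocking on a Feign call to service B,
        // publish an event and let B (and anyone else) consume it when it can.
        public void orderCreated(String orderId) {
            kafkaTemplate.send("order-events", orderId, "ORDER_CREATED");
        }
    }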
The drawback might be the size of your micro-services. If you find yourself constantly modifying a number of them to deliver a single feature, it might be that they are too fine-grained. There is no one-size-fits-all when it comes to micro-services.
I have created a simple blogging application using Spring Boot and RESTful APIs, and connected it to a MySQL database to run queries such as adding a blog, deleting a blog, etc.
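Roughly, the kind of endpoint that implies (the Blog entity and repository below are my own illustration, assuming Spring Data JPA, not taken from the tutorial):

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.web.bind.annotation.*;

    @Entity
    class Blog {
        @Id @GeneratedValue public Long id;
        public String title;
        public String content;
    }

    interface BlogRepository extends JpaRepository<Blog, Long> {}

    @RestController
    @RequestMapping("/blogs")
    public class BlogController {

        private final BlogRepository repository; // backed by the MySQL database

        public BlogController(BlogRepository repository) {
            this.repository = repository;
        }

        @GetMapping
        public List<Blog> all() {
            return repository.findAll();
        }

        @PostMapping
        public Blog add(@RequestBody Blog blog) {
            return repository.save(blog);
        }

        @DeleteMapping("/{id}")
        public void delete(@PathVariable Long id) {
            repository.deleteById(id);
        }
    }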
My questions are as follows:
Does this mean that I have used a microservice architecture? When does an architecture become microservice-based? (I ask because many similar websites describe such an application as microservice-based. Other than the main domain, e.g. currency exchange instead of blogging, I see no other difference; for example, this one does have many more aspects, but they are not contributing to its microservice-ness, IMHO.)
Can I call an application horizontally scalable if I am using a microservice-based architecture?
Note: The tutorial I followed is here and the GitHub repo is here.
First of all: those aren't exact yes/no questions. I'll give you my opinion, but others will disagree.
You have created what most people would agree qualifies as a Microservice. But a Microservice does not make a Microservice architecture, in the same way that a tree doesn't make a forest.
A Microservice architecture is defined by creating a greater application that consists of several distributed components. What you have done is created a monolith (which is absolutely fine in most cases).
Almost every talk about Microservices that I have attended has featured this advice: start with a monolith, evolve to microservices once you need it.
Regarding the last question: your application is horizontally scalable if it is stateless. If you keep any session state, it can still be horizontally scalable, but you'll need a smart LB managing sticky sessions or distributed sessions. That's when things get interesting, and when you can start thinking about a Microservice architecture.
Common problems include: how can I still show my customers my website if the order database, cart service, payment provider, etc. are down? Service discovery, autoscaling, retry strategies and evolving REST APIs are all typical concerns in a Microservice architecture. The more of them you use and need, the more you can claim to have a Microservice architecture.
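To make the retry-strategy point concrete, a plain-Java sketch (names invented); the idea is to degrade gracefully instead of letting one unavailable dependency take the whole page down:

    import java.util.function.Supplier;

    public final class Retry {

        // Try the remote call a few times; if it keeps failing, return a fallback
        // (e.g. an empty cart or a cached product list) instead of an error page.
        public static <T> T withRetry(Supplier<T> remoteCall, int attempts, T fallback) {
            for (int i = 0; i < attempts; i++) {
                try {
                    return remoteCall.get();
                } catch (RuntimeException e) {
                    // log and optionally back off before the next attempt
                }
            }
            return fallback;
        }
    }

Libraries such as Resilience4j or Spring Retry package the same idea, adding backoff, circuit breaking and so on.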
Not at all. The use of microservices is an advanced architecture pattern that is hard to implement right, but that gives useful benefits in huge projects. This should not be of any concern to a small project, unless you want to test this particular architectural style.
Breaking an application into smaller chunks does increase its scalability, as resources can be added at a finer granularity. However, statelessness, among other properties, is also a key component of a scalable architecture.
First of all, what you showed us doesn't look like a microservice architecture at all.
You can say that you have an application that uses a microservices architecture when it is composed of microservices (obviously) with independent functionality that can be scaled. Scaling one service means running multiple instances of it (possibly on multiple hosts) in a way that is transparent to the other services.
A good example to illustrate this is a web store composed of 4 microservices:
Sale Microservice
Product Microservice
Messaging Microservice
Authentication Microservice
During a Black Friday event, for example, when a lot of purchases will theoretically occur, you can scale only the Sale Microservice, saving resources on the other three (of course this involves a bunch of other technologies like proxies, load balancers, ...). If you were using a monolithic architecture, you would need to scale your whole application.
If you are using a microservice architecture correctly, then yes, you can say that your application is horizontally scalable.
I've recently seen that there are frameworks out there that allow a messaging architecture to be implemented in-process, using either the same or different threads. The ones I know about are Spring, Guava EventBus and Reactor.
My question is about the good use cases for using them instead of sending messages to a full-fledged broker. I understand that they allow better decoupling of the business logic, but in a microservices architecture you would normally publish events to be consumed by other microservices. The advantage there is the fault tolerance you gain from a cluster of brokers, where an erroneous message caused by a failure in one instance can be retried by another. Decomposing logic into messages that are later consumed by the same system, especially when the subscribers run in different threads, seems to me to make it difficult to bring the data back to a consistent state.
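For reference, the in-process variant the question is about looks roughly like this with plain Spring application events (the event and listener names are invented):

    import org.springframework.context.ApplicationEventPublisher;
    import org.springframework.context.event.EventListener;
    import org.springframework.stereotype.Component;
    import org.springframework.stereotype.Service;

    class OrderPlacedEvent {
        final String orderId;
        OrderPlacedEvent(String orderId) { this.orderId = orderId; }
    }

    @Service
    class OrderService {
        private final ApplicationEventPublisher publisher;

        OrderService(ApplicationEventPublisher publisher) {
            this.publisher = publisher;
        }

        void placeOrder(String orderId) {
            // ... business logic ...
            publisher.publishEvent(new OrderPlacedEvent(orderId)); // never leaves the JVM
        }
    }

    @Component
    class InvoiceListener {
        // Runs in the same process; annotate with @Async to move it to another thread,
        // which is exactly where the consistency question above starts to matter.
        @EventListener
        void on(OrderPlacedEvent event) {
            // create the invoice for event.orderId
        }
    }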
The advantage of microservices over in-process messaging is not really about how messages are consumed.
Microservices allow you to execute portions of your code on specific nodes within a cluster, so you can allocate heavy calculations to powerful machines and secondary or lightweight work to less powerful ones. Overall this lets you balance performance better and scale resources for the portions of code that need it.
Also, whenever you update the code of a microservice you do not impact the other services, so your changes (and errors) are isolated. If everything runs within the same process, a wrong update might render the entire solution unusable.
In the end, getting the communication out of your process (via a third-party broker) allows you to share it with more people, agents, processes, etc. Otherwise they have to become part of your process (a module?), which is really not efficient.
Honestly, the only good reason for intra-process communication within your monolith is speed (in-memory communication rather than on-the-wire communication).
I've got one question concerning microservices architecture. I am designing a system based on microservices. I've read a few articles and I think I understand the idea. However, I don't know how microservices should communicate with each other when they have separate business responsibilities...
What I mean is that if I have a system for booking train tickets, I would divide the backend application into modules:
Client (login, logout, registration)
Reservations (booking a train seat for a user, getting all reservations for a user)
ConnectionsDetails (searching for connections, getting connection details)
Trains (information about trains: seat numbers, class, etc.)
Now, I can only think that when a user searches for connections, the ConnectionsDetails module communicates with the Trains module and asks for particular train details. But how else could the microservices communicate? If a user wants to log in, she/he asks the Client module directly; if she/he wants to get all her reservations, she/he asks the Reservations module DIRECTLY, etc.
So my question is: how should modules communicate if they do different things? I'm sorry if my question is trivial or stupid; I'm just starting with microservices.
EDIT:
I didn't mean what tools I could use for communication. My question is about the logic. In the example I showed, why would one microservice ask another microservice about something if the client can ask the other one directly? As I said earlier, how should they communicate (what exactly should they ask each other) if they do separate things?
Finding the right contexts, boundaries and communication channels is IMHO one of the most difficult parts of a microservice architecture. It is about finding the data you need, how the relationships look and which service is responsible for what (responsible meaning the only one allowed to change it). Have a look at Martin Fowler's blog.
Microservices are not modules. Each service should be independent with regard to development and deployment. And yes, they may communicate with each other, but a client may also talk to them individually. The microservice approach is also about using the right tool for the problem, so each service can be implemented in a different programming language. They can use different kinds of storage, such as an RDBMS, NoSQL or a key-value store. And they will be scaled individually: many instances of ConnectionsDetails and fewer of Reservations, for example.
What will happen if one service is not available? Each service should be as fault tolerant as possible and degrade its service gracefully if nothing else is possible. You should think about minimising the communication needed between the services by choosing the right boundaries, keeping data independent and maybe introducing caching. Don't forget the CAP theorem; a microservice approach makes it more visible. Here are some slides about resilience that may help. Do not share the same database or replicate everything between services.
"How should modules communicate if they do different things?" You should choose a language-independent way of communication and, depending on your problem, a synchronous or asynchronous method. As language-independent formats, JSON and XML are the most common. Synchronous communication can be based on REST, asynchronous communication on messaging. Authentication ("Client") is typically a synchronous REST service, whereas sending the booked tickets via email is more of a message-driven, asynchronous service.
I think this is essentially the classic question of SOA vs. microservices.
I guess you can find many opposing answers to it.
So IMHO:
If the services in your architecture communicate with each other, they are not microservices, since there are dependencies between them.
If, instead, each microservice has all the functionality (or, say, components) it needs and does not depend on or communicate with the others, then they are microservices.
So in your example you have 4 components:
Client, Reservations, ConnectionsDetails, Trains.
But microservices may not necessarily match them exactly.
As you said, "if a user searches for a connection"...
So "Search Connection" is the microservice, which includes all the needed components (Client, ConnectionsDetails, Trains) and is independent.
And finally, how the components (not the microservices) communicate with each other is up to you. Within a microservice you have the luxury of using straight POJOs, with no transformations, protocols or transport layers at all.
Or you can make the communication more formal, which pushes you back closer to classical SOA rather than microservices.
I think the title says it clearly. I am no scalability guru. I am on the verge of creating a web application which needs to scale to large data sets and possibly many (I won't exaggerate here, let's say thousands of) concurrent users.
MongoDB is the data repository, and I am torn between writing a simple Play! web app talking to MongoDB versus a Play! app talking to a REST service app (in Scala) which does the heavy lifting of all business logic and persistence.
Part of me thinks that having the business logic wrapped as a service is future-proof and allows deploying just the web app on multiple nodes (scaling). I come from the Java EE stack, and Play! is a rebel among Java web frameworks. This approach assures me that I can move away from Play! if needed.
Part of me also thinks that a Play! app + Scala service app is additional complexity and may not be fruitful in the long run.
Any suggestions are appreciated.
NOTE: I am a newbie to Scala, MongoDB and Play!. Pardon me if my question was silly.
Scalability is an engineering art. That means you have lots of parameters and apply your experience to specific values of those parameters to arrive at a solution. So general advice, without more specific data about your problem, is hard to give.
Having said that, from experience, some general advice:
Keep your application as clean and simple as possible. This allows you to keep your options open. In your case, start with a simple Play app. Concentrate on clean code so you can easily rework what you have into a different architectural model (with clean code, that's simpler than you'd think :-))
Measure, don't guess, where the bottlenecks are. It's easy enough to flood a server with requests. Use profiling, memory dumps, whatever, to pinpoint your scalability bottleneck.
Only then, with a working app in hand (which you could launch early) and data on where your scaling bottlenecks are, can you make decisions about what to split off into (horizontally scalable) services.
At the outset, services look nice and scalable, but they often get you into an early mess: services need to communicate with each other, so you start introducing messaging, etcetera. Keep it simple, measure, optimize.
The question does not appear to be silly. For me, encapsulating your data access behind a REST layer does not directly improve the scalability of the application (not significantly, anyway; of course the server can perform HTTP caching, handle request queues, etc., but from your description your application looks small enough). You can achieve similar scalability without the REST layer. Having said that, the service layer could have an indirect impact.
First, it makes your application cleaner (the UI talking to the database directly is messy) and it makes the application far more maintainable. The REST layer can also provide the middle tier that you may need in your application. Furthermore, a correctly designed REST layer has to be resource-driven, and in my experience a resource-driven architecture is a good middle ground between ease of implementation and a highly scalable design.
So I strongly suggest that you use the service layer (REST is the way to go :) ), but scalability in itself cannot justify the decision.
Putting a service between the UI and the data source encapsulates the data source, so the UI need not know the details of how data is persisted. It also prevents the UI from reaching directly into the data source. This allows the service to authenticate, authorize, validate, bind, and perform business logic as needed.
The downside is a slight speed bump for the app.
I'd say that adding the service has a small cost and a big upside. I'd vote for that.
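To make the encapsulation described above concrete, a rough framework-agnostic sketch in Java (all types are invented for illustration):

    public class ArticleService {

        private final ArticleRepository repository; // hides how and where data is persisted

        public ArticleService(ArticleRepository repository) {
            this.repository = repository;
        }

        public Article publish(Article article, String callerRole) {
            if (!"AUTHOR".equals(callerRole)) {
                throw new SecurityException("not allowed to publish");   // authorization
            }
            if (article.getTitle() == null || article.getTitle().isEmpty()) {
                throw new IllegalArgumentException("title is required"); // validation
            }
            return repository.save(article); // the UI never touches the data source directly
        }
    }

    // Minimal supporting types so the sketch stands on its own.
    class Article {
        private final String title;
        Article(String title) { this.title = title; }
        String getTitle() { return title; }
    }

    interface ArticleRepository {
        Article save(Article article);
    }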
The answer, as usual, is: it depends.
If there is some heavy-lifting involved and some business logic: Yup, that is best put into its own layer and if you add a RESTful interface to it, you can serve that up to whatever front-end technology you want.
Nowadays, people are often not bothering with having a separate web app layer, but serve the data via AJAX directly to the client.
You might consider adding a layer if you either need to maintain a lot of user session state or have an opportunity to cache data in the presentation layer. There are more reasons why you might want a presentation layer, for example serving different presentations to different devices/clients.
Don't add layers just for complexity's sake, though.
I might add that you should try to employ the HATEOAS principle. That will ease things significantly when scaling out the solution.
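As one concrete (and hedged) illustration of HATEOAS, here is what it might look like with the Spring HATEOAS library, which is not mentioned above but implements the same principle: each response carries links the client can follow, so clients are not hard-wired to server URLs. The controller and Account type are invented for the sketch:

    import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
    import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

    import org.springframework.hateoas.EntityModel;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class AccountController {

        @GetMapping("/accounts/{id}")
        public EntityModel<Account> get(@PathVariable String id) {
            Account account = new Account(id); // would normally come from the service layer
            // Attach a self link; related resources (orders, profile, ...) would be added the same way.
            return EntityModel.of(account,
                    linkTo(methodOn(AccountController.class).get(id)).withSelfRel());
        }
    }

    class Account {
        public final String id;
        Account(String id) { this.id = id; }
    }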
I wonder what the best way is to integrate Java modules developed as separate J(2)EE applications. Each of these modules exposes Java interfaces. The POJO entities (Hibernate) are used along with those Java interfaces; there are no DTO objects. What would be the best way to integrate these modules, i.e. to have one module call another module's interface remotely?
I was thinking about EJB3, Hessian, SOAP and JMS. There are pros and cons to each of these approaches.
Folks, what is your opinion, or what are your experiences?
Having dabbled with a few of the remoting technologies and found them universally unfun, I would now use Spring remoting as an abstraction over the implementation.
It allows you to concentrate on writing your functionality and let Spring handle the remote part with some configuration. You have the choice of several implementations (RMI, Spring's HTTP invoker, Hessian, Burlap and JMS). The abstraction means you can pick one implementation and simply swap it if your needs change.
See the SpringSource docs for more information.
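A minimal sketch of the client side using Spring's HTTP invoker flavour (the shared interface and URL are invented; choosing RMI, Hessian, Burlap or JMS instead would mostly mean swapping the proxy/exporter classes in configuration):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

    // Plain Java interface shared by the client and server modules (illustrative).
    interface CustomerService {
        String findCustomerName(long id);
    }

    @Configuration
    public class RemotingClientConfig {

        @Bean
        public HttpInvokerProxyFactoryBean customerService() {
            HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
            proxy.setServiceUrl("http://customers-host:8080/remoting/CustomerService"); // assumed URL
            proxy.setServiceInterface(CustomerService.class);
            return proxy; // injectable elsewhere as a CustomerService
        }
    }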
The standard approach would be to use plain RMI between the various service components, but this brings issues of sharing your Java interfaces and versioning changes to your domain model, especially if you have lots of components using the same classes.
Are you really running each service in a separate VM? If these EJBs are always talking to each other, then you're best off putting them in the same VM and avoiding remote procedure calls altogether, since the services can use their local interfaces.
The other thing that may bite you is using Hibernate POJOs. You may think these are simple POJOs, but behind the scenes Hibernate has been busy with CGLib, trying to do things like allow lazy initialization. If these beans are serialized and passed across remote boundaries, you may end up with odd HibernateExceptions being thrown. Personally, I'd prefer to create simple DTOs or write the POJOs out as XML to pass between components. My colleagues would go one step further and write custom wire protocols for transferring the data, for performance reasons.
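The "simple DTO" option might look like this: copy only the fields you need out of the Hibernate entity into a plain serializable object before it crosses the remote boundary (the entity and field names are invented):

    import java.io.Serializable;

    public class CustomerDto implements Serializable {

        private static final long serialVersionUID = 1L;

        private final Long id;
        private final String name;

        public CustomerDto(Long id, String name) {
            this.id = id;
            this.name = name;
        }

        // Build the DTO while the Hibernate session is still open,
        // so no lazy CGLib proxies escape over the wire.
        public static CustomerDto from(Customer entity) {
            return new CustomerDto(entity.getId(), entity.getName());
        }

        public Long getId() { return id; }
        public String getName() { return name; }
    }

    // Stand-in for the Hibernate entity, just to make the sketch compile.
    class Customer {
        private Long id;
        private String name;
        Long getId() { return id; }
        String getName() { return name; }
    }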
Recently I have been using the Mule ESB to integrate various service components. It's quite nice, as you can have a mix of RMI, sockets, web services, etc. without having to write most of the boilerplate code.
http://www.mulesource.org/display/COMMUNITY/Home
Why would you go with anything other than the simplest thing that works?
In your case that sounds like EJB3 or maybe JMS, depending on whether the communication needs to be synchronous or asynchronous.
EJB3 is by far the easiest, being built on top of RMI with the container providing all the additional features you might need: security, transactions, etc. Presumably your POJOs are in a shared jar and can therefore simply be passed between your EJBs, although I tend towards passing value objects myself. The other benefit of EJB is that, when done right, it's the most performant (that's just my opinion, btw ;-).
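Roughly what the EJB3 option amounts to: a shared remote business interface plus a stateless session bean, with the container handling security, transactions and the RMI plumbing (names are invented):

    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    @Remote
    interface PricingService {
        double priceFor(String productId);
    }

    @Stateless
    public class PricingServiceBean implements PricingService {

        @Override
        public double priceFor(String productId) {
            // business logic here; the returned value object is serialized back to the caller
            return 42.0;
        }
    }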
JMS is a little more involved, but not much, and a system based on asynchronous communication affords certain niceties in terms of parallelizing tasks, etc.
The performance overhead of web services, the inevitable extra configuration and the additional points of failure make them, IMHO, not worth the hassle unless you have a requirement that mandates their use; I'm thinking of interop with non-Java clients or providing data to external parties here.
If you need network communication between Java-only applications, Java RMI is the way to go. It has the best integration, most transparency and the least overhead.
If, however, some of your clients aren't Java-based, you should probably consider other options. (Java RMI actually has an IIOP dialect, which allows it to interact with CORBA; however, I wouldn't recommend doing this unless it's for some legacy-code integration.) Depending on your needs, web services are probably your friend. If you are concerned about the network load, you could go with web services over Hessian.
Do you literally mean remotely? As in running in a different environment, with therefore different availability characteristics and network overheads?
Assuming "yes", my first step would be to take a service approach and set aside the invocation technology for a moment. Just consider the design and meaning of your services. You know they are comparatively expensive to invoke, hence fine-grained, chatty interfaces tend to be a bad thing. You know that the remote system might fail between invocations, so you may favour stateless services. You may need to retry requests after failure, so you may favour idempotent service designs.
Then consider the availability relationships. Can your client work without the remote system? In some cases you simply can't make progress if the remote system isn't available (e.g. you can't enable the employee if you can't get to the HR system); in other cases you can adopt a "fire-and-tell-me-later" philosophy: queue up the requests and process the responses later.
Where there is an availability dependency, simply exposing a synchronous interface seems to fit. You can do that with SLSB EJBs; if everything is Java EE, that works. I tend to generalise, expecting that if my services are useful then non-Java-EE clients may want them too, so SOAP (or REST) tends to be useful. These days adding a web service interface to your SLSB is pretty trivial.
But my pet theory is that any sufficiently large IT system ends up needing asynchronous communication: you need to decouple the availability constraints. So I would tend to look for a JMS-style relationship, and an MDB facade in front of your services, or SOAP over JMS, is not too hard to do. Such an approach tends to highlight the failure-case design issues that were probably lurking anyway; JMS makes you think: "suppose I don't get an answer? suppose my answer comes late?"
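A sketch of such an MDB facade (the queue name and HrService interface are invented, echoing the "enable the employee" example above): requests arrive on a queue and the bean delegates to the real service, so the caller does not depend on this component being up at the moment the request is sent.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.EJB;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Hypothetical local business interface the facade delegates to.
    interface HrService {
        void enableEmployee(String employeeId);
    }

    @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue"),
            @ActivationConfigProperty(propertyName = "destinationLookup",
                                      propertyValue = "jms/EnableEmployeeQueue")
    })
    public class EnableEmployeeMdb implements MessageListener {

        @EJB
        private HrService hrService;

        @Override
        public void onMessage(Message message) {
            try {
                String employeeId = ((TextMessage) message).getText();
                hrService.enableEmployee(employeeId);
            } catch (Exception e) {
                // rethrow so the container can redeliver or route to a dead-letter queue
                throw new RuntimeException(e);
            }
        }
    }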
I would go for SOAP.
JMS would be more efficient, but you would need to code up a message-driven bean for each interface.
SOAP, on the other hand, comes with lots of useful toolkits that will generate your message definitions (WSDL) and all the necessary handlers (client and server) when given an EJB.
With SOAP you can (but don't have to) deal with certificate security and secure connections over public networks. As the default protocol is HTTP over port 80, you will have minimal pain with firewalls etc. SOAP is also great for heterogeneous clients (in your case, anything that isn't J2EE) with good support for most common languages on most common platforms.