I am new to EAI and have read that there are two ways to achieve it:
1) Broker / hub-and-spoke model
2) ESB
Is the broker model the same thing as JMS?
I have worked with Spring Integration, which is a lightweight ESB, so I have some idea of how an ESB works,
but I am not sure about the broker model.
Can anyone elaborate on the broker model and how to implement it?
Thanks in advance
Regards
Ramandeep S.
A broker, or hub-and-spoke, is an integration pattern based on centralized middleware.
And yes, JMS brokers (such as ActiveMQ) are an implementation of this pattern; JMS itself is the API you use to talk to them.
See this:
Integration Hubs
... When translating the concept of hub and
spoke to the world of integration it is useful to have a closer look
at what a connection between two systems really entails, i.e. what
does the line between two boxes really represent? In some cases, the
line might be a message queue, in other cases it might be a
publish-subscribe topic or in yet other cases it might be a URI. So
depending on the system, having a lot of lines might not immediately be a
problem. While it sure would be a pain to set up a lot of message
queues, publish-subscribe topics and URIs are largely logical
concepts and having a lot of them might mean a bit more maintenance
but is unlikely to be the end of the world.
But the Hub-and-Spoke architecture also provides another significant
benefit -- it decouples sender and receiver by inserting an active
mediator in the middle - the hub. For example, this hub can perform
the important function of routing incoming messages to the correct
destination. As such, it decouples the sender of the message from
having to know the location of the receiver. Having all messages
travel through a central component is also great for logging messages
or to control message flow. The Hub-and-Spoke style applied in this
manner is commonly referred to as Message Broker because the hub
brokers messages between the participants.
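As an illustration, the routing-and-logging role the hub plays above can be sketched in plain Java. This is a toy model, not a real messaging library, and all class and method names here are invented:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy hub: senders only know a logical destination name; the hub owns
// the routing table, so senders are decoupled from receiver locations.
class MessageHub {
    private final Map<String, Consumer<String>> routes = new HashMap<>();
    final List<String> log = new ArrayList<>();   // central logging point

    void register(String destination, Consumer<String> receiver) {
        routes.put(destination, receiver);
    }

    void send(String destination, String payload) {
        log.add(destination + ": " + payload);    // every message passes the hub
        Consumer<String> receiver = routes.get(destination);
        if (receiver == null) {
            throw new IllegalArgumentException("no route for: " + destination);
        }
        receiver.accept(payload);                 // the hub, not the sender, finds the receiver
    }
}
```

If the receiver for "billing" moves or is replaced, only the hub's routing table changes; senders keep calling `send("billing", ...)` unchanged.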
Data Format Considerations
A Message Broker should also include a protocol translation and data
transformation function. For example, a message may arrive via a
message queue, but has to be passed on via HTTP. Also, location
transparency is only an illusion unless data format translation is
also provided. Otherwise, a change in the destination (i.e. a request
in form of a message is now being serviced by another component) is
very likely to require a change in the message data format. Without a
Message Translator in between, the message originator would also have
to be changed. Therefore, the implementation of this type of
Hub-and-Spoke architecture typically includes data format translation
capabilities.
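One hedged sketch of such a data-format translation step, sitting between sender and receiver so neither has to change when the other's format does. The formats here (a "key=value;..." source and a minimal JSON target) are hypothetical:

```java
// Toy Message Translator: the hub converts the sender's format into the
// receiver's format. Neither endpoint knows the other's representation.
class MessageTranslator {
    // Hypothetical source format: "key=value" pairs separated by ';'.
    // Hypothetical target format: a flat JSON object of strings.
    static String toJson(String kvPayload) {
        StringBuilder json = new StringBuilder("{");
        String[] pairs = kvPayload.split(";");
        for (int i = 0; i < pairs.length; i++) {
            String[] kv = pairs[i].split("=", 2);
            if (i > 0) json.append(",");
            json.append("\"").append(kv[0]).append("\":\"").append(kv[1]).append("\"");
        }
        return json.append("}").toString();
    }
}
```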
I am new to Spring Integration and trying to design a relatively abstract architecture in Java that can accommodate inputs and outputs of various natures.
For example, on input: pick up a file, receive an HTTP request, read from a DB, etc.
On output: send an email, send an HTTP reply (e.g. JSON), create a report/PDF/whatever, etc.
What would be a good design for such entry/exit points in the application?
For example, on the input side, could I use several different gateways or adapters attached to the same input channel, from which the nature of the input could then be resolved and processed accordingly?
Any suggestion or example of a good design for such entry/exit points would be more than welcome.
Cheers
Yes, you can do that with Spring Integration.
Inbound Channel Adapters (for the various source protocols) can indeed send their messages to the same channel. There you can apply any complex logic in a Service Activator, or add a Router to send different messages to different downstream flows.
On the output you can use a PublishSubscribeChannel to deliver the same message to different outputs - Outbound Channel Adapters.
We might not have such a sample, but here is an existing set: https://github.com/spring-projects/spring-integration-samples
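As a framework-free illustration of that shape, many "adapters" feeding one shared entry point, with a router choosing the downstream flow by payload type. This is a toy model with invented names, not the Spring Integration API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy version of "many inbound adapters -> one channel -> router".
class InputRouter {
    private final Map<String, Function<String, String>> flows = new HashMap<>();

    void addFlow(String payloadType, Function<String, String> flow) {
        flows.put(payloadType, flow);
    }

    // Every "adapter" (file poller, HTTP endpoint, DB reader, ...) calls this
    // same entry point, tagging the payload with its type, like a message header.
    String receive(String payloadType, String payload) {
        return flows.getOrDefault(payloadType, p -> "unhandled: " + p).apply(payload);
    }
}
```

In Spring Integration the type tag would live in a message header and the routing would be a Router component, but the shape is the same: adapters converge, the router diverges.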
I have several similar systems which are authoritative for different parts of my data, but there's no way I can tell just from my "keys" which system owns which entities.
I'm working to build this system on top of AMQP (RabbitMQ), and it seems like the best way to handle this would be:
Create a Fanout exchange, called thingInfo, and have all of my other systems bind their own anonymous queues to that exchange.
Send a message out to the exchange: {"thingId": "123abc"}, and set a reply_to queue.
Wait for a single one of the remote hosts to reply to my message, or for some timeout to occur.
Is this the best way to go about solving this sort of problem? Or is there a better way to structure what I'm looking for? This feels mostly like the RPC example from the RabbitMQ docs, except I feel like using a broadcast exchange complicates things.
I think I'm basically trying to emulate the model described for MCollective's Message Flow. While I think MCollective generally expects more than one response, in this case I would expect/require precisely one, or preferably a clear "nope, don't have it, go fish" response from "everyone" (if that is even knowable in this sort of architecture?).
Perhaps another model that mostly fits is "Scatter-Gather"? It seems there's support for this in Spring Integration.
It's a reasonable architecture (have the uninterested consumers simply ignore the message).
If there's some way to extract the pertinent data that the consumers use to decide interest into headers, then you can gain some efficiency by using a topic exchange instead of a fanout.
In either case, it gets tricky if more than one consumer might reply.
As you say, you can use a timeout for the case where zero consumers reply. But if you expect that case to be frequent, you may be better off using arbitrary two-way messaging and doing the reply correlation in your own code, rather than using request/reply and tying up a thread waiting for a reply that will never come before timing out.
This could also deal with the multi-reply case.
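The "first reply wins, or give up after a timeout" behaviour can be sketched like this, with plain Java threads standing in for the AMQP consumers. All names are hypothetical and the transport is in-process:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

// Toy scatter-gather: broadcast a query to all consumers, take the first
// non-null reply, or return null after a timeout if nobody owns the entity.
class ScatterGather {
    static String ask(List<Function<String, String>> consumers, String key, long timeoutMs)
            throws InterruptedException {
        BlockingQueue<String> replies = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(consumers.size());
        for (Function<String, String> consumer : consumers) {
            pool.submit(() -> {
                String reply = consumer.apply(key);  // each consumer decides interest itself
                if (reply != null) replies.offer(reply);
            });
        }
        String first = replies.poll(timeoutMs, TimeUnit.MILLISECONDS);  // null on timeout
        pool.shutdownNow();
        return first;
    }
}
```

With RabbitMQ the broadcast would be the fanout (or topic) exchange and the reply queue would be the `reply_to` queue; the timeout logic is the part you own either way.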
Could someone explain the Broker pattern to me in plain english? Possibly in terms of Java or a real life analogy.
Try to imagine that 10 people have messages they need to deliver. Another 10 people are expecting messages from the previous group. In an open environment, each person in the first group would have to deliver their message to the recipient manually, so each person has to visit at least one member of the second group. This is inefficient and chaotic.
In broker, there is a control class (in this case the postman) who receives all the messages from group one. The broker then organizes the messages based off destination and does any operations needed, before visiting each recipient once to deliver all messages for them. This is far more efficient.
In software design, this lets remote and heterogeneous classes communicate with each other easily. The control class exposes an interface with which all incoming messages can interact, so all sorts of messages can be sent and interpreted correctly. Keep in mind that this is not very scalable, so it loses effectiveness in larger systems.
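The postman analogy can be sketched in a few lines of Java. This is a toy model with made-up names, just to show the batching the analogy describes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy postman: senders hand all messages to the broker; the broker groups
// them by recipient and delivers each recipient's batch in one "visit".
class Postman {
    private final Map<String, List<String>> byRecipient = new HashMap<>();

    void accept(String recipient, String message) {
        byRecipient.computeIfAbsent(recipient, r -> new ArrayList<>()).add(message);
    }

    // One visit per recipient, all of their messages at once.
    Map<String, List<String>> deliverAll() {
        Map<String, List<String>> delivered = new HashMap<>(byRecipient);
        byRecipient.clear();
        return delivered;
    }
}
```

Ten senders call `accept` without knowing where any recipient lives; only the postman visits recipients.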
Hope this helped!
I am developing a distributed system which consists of different components (services) which are loosely (asynchronously) coupled via JMS (ActiveMQ).
Since I do not want to reinvent the wheel, I am looking for a well-known protocol/library that facilitates remote procedure calls between these components and helps me deal with method interfaces.
So let's decompose the problem I am already solving right now via dirty solutions:
A consumer component wants to call a service, so it constructs a request string (hand-written and dirty)
The request string is then compressed and put into a JMS message (dirty as well)
The request message is then transmitted via JMS and routing mechanisms (that part is OK)
The service first of all needs to decompress and parse the request string to identify the right method (dirty)
The method gets called and the reply goes like #2 - #4.
So that looks pretty much like SOAP, although I think SOAP is too heavy for my application, and furthermore I am not using HTTP at all. Given that, I was thinking one might be able to decompose the problem into different parts.
Part A: HTTP is replaced by JMS (that one is okay)
Part B: XML is replaced by something more lightweight (alright, MessagePack comes in handy here)
Part C: Mechanism to parse request/reply string to identify operation name and parameter values (that one is the real problem here)
I was looking into MessagePack, Protocol Buffers, Thrift and so forth, but what I don't like about them is that they introduce their own way of handling the actual (TCP) communication, bypassing my already sophisticated JMS infrastructure (which also handles load balancing and such).
To further elaborate on Part C above, this is how I am currently handling it. Right now I would do something like the following if a consumer were to call a service; let's assume the service takes a text and replies with keywords. The consumer would create a JMS message and transmit it (via ActiveMQ) to the service. The message would contain:
Syntax: OPERATION_NAME [PARAMETERS]
Method: GET_ALL_KEYWORDS [String text] returns [JSON String[] keywords]
Example Request: GET_ALL_KEYWORDS "Hello world, this is my input text..."
Example Reply: ["hello", "world", "text"]
Needless to say, it feels hacked together. The problem I see is that if I were to change the method interface by adding or deleting parameters, I would have to check all the request/reply string construction/deconstruction code to synchronize the changes. That is pretty error-prone. I'd rather have a library construct the right request/reply syntax by looking at a Java interface, and throw real exceptions at runtime if I mess things up, like "Protocol Exception: Mandatory parameter not set" or something...
Any projects/libs known for that?
Requirements would be
It's small, lightweight and fast.
It's implemented in Java.
It doesn't serve too many purposes (like some full-blown framework, e.g. Spring).
I think this Spring package is what you're looking for. See JmsInvokerProxyFactoryBean and related classes.
From the javadoc:
FactoryBean for JMS invoker proxies. Exposes the proxied service for
use as a bean reference, using the specified service interface.
Serializes remote invocation objects and deserializes remote
invocation result objects. Uses Java serialization just like RMI, but
with the JMS provider as communication infrastructure.
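To see why the interface-driven approach removes the hand-rolled parsing, here is a toy sketch using a JDK dynamic proxy: the client codes against a Java interface, the proxy serializes the method name plus arguments, and the service side dispatches reflectively. The transport is faked in-process, the wire format is invented, and this is not how the Spring classes are implemented:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.function.Function;

// Toy JMS-style invoker. Changing the interface changes both sides at once,
// instead of hand-synchronizing request/reply string builders and parsers.
class MiniInvoker {
    interface KeywordService {
        String[] getAllKeywords(String text);
    }

    // Stand-in for the JMS round trip: request string in, reply string out.
    // Hypothetical wire format: methodName, '\n', single String argument.
    static String serve(KeywordService impl, String request) throws Exception {
        String[] parts = request.split("\n", 2);
        Method m = KeywordService.class.getMethod(parts[0], String.class);
        return String.join(",", (String[]) m.invoke(impl, parts[1]));
    }

    static KeywordService proxyFor(Function<String, String> transport) {
        return (KeywordService) Proxy.newProxyInstance(
                KeywordService.class.getClassLoader(),
                new Class<?>[]{KeywordService.class},
                (proxy, method, args) ->
                        transport.apply(method.getName() + "\n" + args[0]).split(","));
    }
}
```

The Spring JMS invoker does the same interface-driven trick, but with Java serialization of invocation objects and a real JMS provider as the transport.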
I have an existing protocol I'd like to write a Java client for. The protocol consists of messages that have a header containing the message type and message length, followed by the announced number of payload bytes.
I'm having some trouble modeling it, since creating a class for each message type seems a bit excessive to me (that would turn out to be 20+ classes just to represent the messages that go over the wire), so I've been thinking about alternative models, but I can't come up with one that works.
I don't want anything fancy to work on the messages aside from notifying via publish subscribe when a message comes in and in some instances reply back.
Any pointers as to where to look?
A class for each message type is the natural OO way to model this. The fact that there are 20 classes should not put you off. (Depending on the relationship between the messages, you can probably implement common features in superclasses.)
My advice is to not worry too much about efficiency to start with. Just focus on getting clean APIs that provide the required functionality. Once you've got things working, profile the code and see if the protocol classes are a significant bottleneck. If they are ... then think about how to make the code more efficient.
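A hedged sketch of that shape: the header parsing lives once in a superclass, and a factory keyed on the header's type field picks the concrete class. The wire format and all names here are invented:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy protocol frame: [type byte][length byte][`length` payload bytes].
abstract class WireMessage {
    final byte[] payload;
    WireMessage(byte[] payload) { this.payload = payload; }

    // One factory entry per message type; concrete classes stay tiny.
    static final Map<Integer, Function<byte[], WireMessage>> FACTORIES = new HashMap<>();

    // Common header parsing lives here, written once for all 20+ types.
    static WireMessage parse(byte[] frame) {
        int type = frame[0] & 0xFF;
        int length = frame[1] & 0xFF;
        byte[] body = new byte[length];
        System.arraycopy(frame, 2, body, 0, length);
        Function<byte[], WireMessage> factory = FACTORIES.get(type);
        if (factory == null) throw new IllegalArgumentException("unknown type " + type);
        return factory.apply(body);
    }
}

// Each concrete class only adds its own meaning/behavior.
class PingMessage extends WireMessage {
    PingMessage(byte[] p) { super(p); }
}
```

Subscribers can then be notified with the typed message, and only types that need a reply override some `reply()` hook; the other 19 classes stay one-liners.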