I am new to Spring Integration and am trying to design a relatively abstract architecture in Java that could accommodate inputs and outputs of various natures.
For example, on the input side: pick up a file, receive an HTTP request, read from a DB, etc.
On the output side: send an email, return an HTTP reply (e.g. JSON), create a report/PDF/whatever, etc.
What could be a good design for such entry/exit points in the application?
For example, on the input side, could I use several different gateways or adapters attached to the same input channel, from which the nature of the input could then be resolved and processed accordingly?
Any suggestion or example of a good design for such entry/exit points would be more than welcome.
Cheers
Yes, you can do that with Spring Integration.
Inbound Channel Adapters (for various protocols) can indeed send their messages to the same channel. From there you can apply any complex logic in a Service Activator, or add a Router to send different messages to different downstream flows.
On the output side you can use a PublishSubscribeChannel to deliver the same message to several outputs (Outbound Channel Adapters).
We might not have such a sample, but here is an existing set: https://github.com/spring-projects/spring-integration-samples
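For illustration, here is a minimal sketch of that layout using the Java DSL (Spring Integration 5). The directory, the HTTP path and the downstream channel names ("fileFlow", "httpFlow") are made-up assumptions, not part of any existing sample:

    import java.io.File;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.channel.DirectChannel;
    import org.springframework.integration.channel.PublishSubscribeChannel;
    import org.springframework.integration.config.EnableIntegration;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;
    import org.springframework.integration.dsl.Pollers;
    import org.springframework.integration.file.dsl.Files;
    import org.springframework.integration.http.dsl.Http;
    import org.springframework.messaging.MessageChannel;

    @Configuration
    @EnableIntegration
    public class EntryExitConfig {

        // Single channel shared by all inbound endpoints.
        @Bean
        public MessageChannel inputChannel() {
            return new DirectChannel();
        }

        // Inbound endpoint #1: poll a directory for files.
        @Bean
        public IntegrationFlow fileInbound() {
            return IntegrationFlows
                    .from(Files.inboundAdapter(new File("/tmp/in")),
                            e -> e.poller(Pollers.fixedDelay(5000)))
                    .channel("inputChannel")
                    .get();
        }

        // Inbound endpoint #2: receive HTTP requests.
        @Bean
        public IntegrationFlow httpInbound() {
            return IntegrationFlows
                    .from(Http.inboundChannelAdapter("/incoming"))
                    .channel("inputChannel")
                    .get();
        }

        // Resolve the nature of the input and route to dedicated downstream flows
        // ("fileFlow" and "httpFlow" are hypothetical channels defined elsewhere).
        @Bean
        public IntegrationFlow routing() {
            return IntegrationFlows
                    .from("inputChannel")
                    .<Object, Class<?>>route(Object::getClass, r -> r
                            .channelMapping(File.class, "fileFlow")
                            .channelMapping(String.class, "httpFlow"))
                    .get();
        }

        // On the output side, outbound adapters (mail, HTTP, file, ...) can all
        // subscribe to one publish-subscribe channel and receive the same message.
        @Bean
        public MessageChannel outputChannel() {
            return new PublishSubscribeChannel();
        }
    }

Each inbound adapter simply targets the shared inputChannel; the router then resolves the nature of the input, and on the output side any number of outbound adapters can subscribe to outputChannel.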
I'm new to JMS, and I'm currently designing a Battleship game.
I'm using JMS with ActiveMQ for the communication between them. So far I have made four classes for JMS communication: topic and queue receivers and senders, with simple methods for changing the destination and sending.
Now I'm facing a problem when I want to handle these messages.
I've decided that every message will be delivered via an ObjectMessage, and the inner object will tell the listener how to handle it.
I have 5 different categories of messages:
Authentication,
Data (Such as highscores, replays etc),
InGameMessages (ShipRegistration, TurnUpdate and so on),
ChatMessage,
MatchMakingMessages (Only GameSearch and GameSearchCancel),
So I thought it would be a good idea to add a MessageType enum to each message,
but I eventually ended up writing the listener with more than 20 cases in the switch statement and a huge amount of class casting.
Now I want to write it all over again, but I'm still stuck on the message handling, since I can't find a different approach or any design pattern that can handle this issue.
Any thoughts?
You can set a JMS property with the value of your "MessageType" enum. JMSType is a built-in property that you can use for that, or you can add your own property to every message (with a name like "MessageType").
On the client side, read the message, test the value of that property, convert it back to the enum in a switch statement, and then cast the message to the class associated with that type. Use only one topic, with each client subscribing to it.
Instead of the switch statement, you can have one listener per message type based on a JMS selector, each one selecting only one value of the JMS property (i.e. MessageType). It all depends on your use case, of course (strict ordering, etc.).
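A minimal sketch of the selector approach, assuming ActiveMQ on localhost, a topic named battleship.game, and a MessageType string property set by the sender (all names are illustrative, and ChatMessage stands for your own Serializable payload class):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.ObjectMessage;
    import javax.jms.Session;
    import javax.jms.Topic;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ChatListenerSetup {

        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("battleship.game");

            // Senders set the property with message.setStringProperty("MessageType", "CHAT").
            // The selector makes the broker deliver only chat messages to this consumer,
            // so there is no switch statement and only a single cast.
            MessageConsumer chatConsumer =
                    session.createConsumer(topic, "MessageType = 'CHAT'");
            chatConsumer.setMessageListener(message -> {
                try {
                    ChatMessage chat = (ChatMessage) ((ObjectMessage) message).getObject();
                    // handle the chat message...
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });

            connection.start();
        }
    }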
You can use different topics/queues for different message categories.
You can also add properties to your message.
You can use a framework like Spring that can do this for you, specifically MessageListenerAdapter. Have a look at Spring JMS.
EDIT:
There are some design patterns you can have a look at: Chain of Responsibility, and maybe Observer or EventListener.
The point here is that you currently have only one handler that handles all kinds of messages: it reads a property and then decides what to do with the message. The solution may be to have many handlers, one per kind of message, and to find a way to route each message to the right handler.
Chain of Responsibility can do the trick here: pass the message to the first handler of the chain; if that handler is concerned with the message it handles it, otherwise it passes the message on to the next handler, and so on.
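A rough sketch of such a chain (class names are hypothetical; ChatMessage stands for one of your payload classes):

    // Each handler either consumes the payload or passes it to the next one.
    public abstract class MessageHandler {

        private MessageHandler next;

        public MessageHandler setNext(MessageHandler next) {
            this.next = next;
            return next;
        }

        public void handle(Object payload) {
            if (!tryHandle(payload) && next != null) {
                next.handle(payload);
            }
        }

        // Return true if this handler consumed the payload.
        protected abstract boolean tryHandle(Object payload);
    }

    // One concrete handler per message category.
    class ChatHandler extends MessageHandler {
        @Override
        protected boolean tryHandle(Object payload) {
            if (!(payload instanceof ChatMessage)) {
                return false;
            }
            ChatMessage chat = (ChatMessage) payload;
            // display / dispatch the chat message...
            return true;
        }
    }

The JMS listener then only extracts the ObjectMessage payload and hands it to the first handler of the chain, e.g. chatHandler.setNext(inGameHandler).setNext(matchMakingHandler).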
First of all, I'm gathering information about this question so that I can implement this feature in a more elegant way.
Let's look at the picture below
The target server (green circle)
This is an API server that I use to fetch some data.
Features:
HTTPS connections only
Responses in JSON format
Accepts GET requests like [ https://api.server.com/user=1&option&api_key=? ]
Proxy controller (blue square)
It's a simple server that stores a list of proxies and sends and receives some data. I want to talk about the software that I will run on top of it.
Features:
Proxy list
Api keys list
I think it should be a hashmap that stores an ip => token list, or a database table if I want to scale my application.
Workers
They just analyze a JSON response and pass the data to the DB.
Let's go closer to the proxy controller server.
The first idea:
Create a fixed thread pool (Executors.newFixedThreadPool)
Pass the url/token to a worker: server.submit(new Worker(url, token, proxy))
The worker analyzes the data and passes it to the DB.
But in my opinion this solution is quite big and hard to maintain; I want to add an endpoint that gathers stats and kills or spawns workers, and so on.
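For reference, a rough sketch of what this first idea boils down to (class and field names are hypothetical):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ProxyController {

        // Fixed pool of workers, as in step 1.
        private final ExecutorService pool = Executors.newFixedThreadPool(16);

        // Step 2: hand a url/token/proxy triple to a worker.
        public void submit(String url, String token, String proxy) {
            pool.submit(new Worker(url, token, proxy));
        }

        private static class Worker implements Runnable {
            private final String url;
            private final String token;
            private final String proxy;

            Worker(String url, String token, String proxy) {
                this.url = url;
                this.token = token;
                this.proxy = proxy;
            }

            @Override
            public void run() {
                // 1. Execute the HTTPS GET through the assigned proxy.
                // 2. Parse the JSON response.
                // 3. Persist the extracted data to the DB.
            }
        }
    }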
The second idea:
A worker generates a request like https://host/user=1&option=1
Passes it to the proxy controller
The proxy controller assigns an API key and a proxy server to the request
Executes the request
Accepts the response
Passes the response back to the worker (I think the best idea is to put a load balancer between the workers and the proxy controller).
This solution seems quite hacky to me. For example, if a worker is dead, the proxy server keeps sending a bunch of requests to the dead worker, which could lead to data loss.
The third idea:
The same as the second, but instead of sending data directly to a worker, the proxy controller passes it to some bus. I found some information about Apache Camel that would let me organize this solution. In this case a dead worker is just a dead worker, and data loss is zero (maybe).
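A rough sketch of this third idea using Apache Camel's JMS component (the queue name and broker URL are assumptions): the proxy controller puts responses on a queue and whichever worker is alive consumes them, so a dead worker does not cause data loss.

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.component.jms.JmsComponent;
    import org.apache.camel.impl.DefaultCamelContext;

    public class ResponseBusWorker {

        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(
                    new ActiveMQConnectionFactory("tcp://localhost:61616")));

            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Messages stay on the queue until some worker picks them up.
                    from("jms:queue:api.responses")
                            .process(exchange -> {
                                String json = exchange.getIn().getBody(String.class);
                                // parse the JSON and write the result to the DB...
                            });
                }
            });

            context.start();
            Thread.sleep(Long.MAX_VALUE); // keep the worker running
        }
    }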
Of course, none of the three cases handles errors. Some errors can be solved by resending the request with additional data; others can be solved by respawning the workers.
So, in your opinion, what is the best solution in this case? Am I missing some hidden problems that could appear later? Which tools should I use?
Thanks
What are you trying to achieve?
Maybe you could consider using this architecture:
NGINX (proxy + load balance) -> WORKER SERVERS -> DB SERVER (maybe use some NoSQL like Cassandra)
I am new to EAI and have read that there are two ways to achieve EAI:
1) Broker/ hub-spoke model
2) ESB
Is the broker model JMS?
I have worked with Spring Integration, which is a lightweight ESB, so I have some idea of how an ESB works.
But I am not sure about the broker model.
Can anyone elaborate on the broker model and how to implement it?
Thanks in advance
Regards
Ramandeep S.
A broker, or hub-and-spoke, is an integration pattern based on centralized middleware.
And yes, JMS is an implementation of this pattern.
See this:
Integration Hubs
... When translating the concept of hub and
spoke to the world of integration it is useful to have a closer look
at what a connection between two systems really entails, i.e. what
does the line between two boxes really represent? In some cases, the
line might be a message queue, in other cases it might be a
publish-subscribe topic or in yet other cases it might be the URI. So
depending on the system, having a lot of lines might not immediately be
a problem. While it sure would be a pain to set up a lot of message
queues, publish-subscribe topics and URIs are largely logical
concepts and having a lot of them might mean a bit more maintenance
but is unlikely to be the end of the world.
But the Hub-and-Spoke architecture also provides another significant
benefit -- it decouples sender and receiver by inserting an active
mediator in the middle - the hub. For example, this hub can perform
the important function of routing incoming messages to the correct
destination. As such, it decouples the sender of the message from
having to know the location of the receiver. Having all messages
travel through a central component is also great for logging messages
or to control message flow. The Hub-and-Spoke style applied in this
manner is commonly referred to as Message Broker because the hub
brokers messages between the participants.
Data Format Considerations
A Message Broker should also include a protocol translation and data
transformation function. For example, a message may arrive via a
message queue, but has to be passed on via HTTP. Also, location
transparency is only an illusion unless data format translation is
also provided. Otherwise, a change in the destination (i.e. a request
in form of a message is now being serviced by another component) is
very likely to require a change in the message data format. Without a
Message Translator in between, the message originator would also have
to be changed. Therefore, the implementation of this type of
Hub-and-Spoke architecture typically includes data format translation
capabilities.
I'm designing a system using CometD where there is a common channel to which data gets published. I need to filter the data using some conditions based on the client's subscription details. Can anyone tell me how I can do this? I thought I could do it using a DataFilter.
Channel.addDataFilter(DataFilter filter);
Is this the correct way? If so, is there any sample code to achieve this, please?
There is no Channel.addDataFilter(DataFilter) method, but you can achieve the same results in a different way.
First, have a look at the DataFilter implementations that are already available.
Then it's enough to add a DataFilterMessageListener to the channel you want to filter data on, and specify one or more DataFilters for that DataFilterMessageListener.
You can find an example of this in the CometD demos shipped with the CometD distribution, for example here.
The right way to add the DataFilterMessageListener is during channel initialization, as it is done in the example linked above through a #Configure annotation, or equivalently via ServerChannel.Initializer.
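A minimal sketch in the style of the CometD annotated-services documentation; the channel name and the filter class are illustrative, and the exact DataFilterMessageListener constructor arguments may vary between CometD versions:

    import javax.inject.Inject;

    import org.cometd.annotation.Configure;
    import org.cometd.annotation.Service;
    import org.cometd.bayeux.server.BayeuxServer;
    import org.cometd.bayeux.server.ConfigurableServerChannel;
    import org.cometd.bayeux.server.ServerChannel;
    import org.cometd.bayeux.server.ServerSession;
    import org.cometd.server.filter.DataFilter;
    import org.cometd.server.filter.DataFilterMessageListener;

    @Service
    public class FilteringService {

        @Inject
        private BayeuxServer bayeuxServer;

        // Attach the filter while the channel is being created, so no message
        // can slip through before the listener is in place.
        @Configure("/data/common")
        public void configureCommonChannel(ConfigurableServerChannel channel) {
            channel.addListener(
                    new DataFilterMessageListener(bayeuxServer, new SubscriptionFilter()));
        }

        // Hypothetical filter: inspect or rewrite data before it reaches subscribers.
        private static class SubscriptionFilter implements DataFilter {
            @Override
            public Object filter(ServerSession session, ServerChannel channel, Object data) {
                // Whatever is returned here is what ALL subscribers will see.
                return data;
            }
        }
    }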
Finally, have a look at how messages are processed on the server from the documentation: http://docs.cometd.org/reference/#concepts_message_processing.
It is important to understand that modifications made by DataFilter are seen by all subscribers.
I am developing a distributed system which consists of different components (services) that are loosely (asynchronously) coupled via JMS (ActiveMQ).
Since I do not want to reinvent the wheel, I am looking for a (well-)known protocol/library that facilitates remote procedure calls between these components and helps me deal with method interfaces.
So let's decompose the problem I am already solving right now via dirty solutions:
A consumer component wants to call a service, so it constructs a request string (hand-written and dirty)
The request string is then compressed and put into a JMS message (dirty as well)
The request message is then transmitted via JMS and routing mechanisms (that part is OK)
The service first needs to decompress and parse the request string to identify the right method (dirty)
The method gets called and the reply goes like #2 - #4.
So that looks pretty much like SOAP, but I think SOAP is too heavy for my application, and furthermore I am not using HTTP at all. Given that, I was thinking one might be able to deconstruct the problem into different parts.
Part A: HTTP is replaced by JMS (that one is okay)
Part B: XML is replaced by something more lightweight (alright, MessagePack comes in handy here)
Part C: Mechanism to parse request/reply string to identify operation name and parameter values (that one is the real problem here)
I was looking into MessagePack, Protocol Buffers, Thrift and so forth, but what I don't like about them is that they introduce their own way of handling the actual (TCP) communication and bypass my already sophisticated JMS infrastructure (which also handles load balancing and such).
To further elaborate on Part C above, this is how I am currently handling it. Right now, if a consumer were to call a service, I would do something like the following; let's assume the service takes a text and replies with keywords. I would have the consumer create a JMS message and transmit it (via ActiveMQ) to the service. The message would contain:
Syntax: OPERATION_NAME [PARAMETERS]
Method: GET_ALL_KEYWORDS [String text] returns [JSON String[] keywords]
Example Request: GET_ALL_KEYWORDS "Hello world, this is my input text..."
Example Reply: ["hello", "world", "text"]
Needless to say, it feels hacked together. The problem I see is that if I were to change the method interface by adding or removing parameters, I would have to check all the request/reply string construction/deconstruction code to synchronize the changes. That is pretty error-prone. I'd rather have a library construct the right request/reply syntax by looking at a Java interface and throw real exceptions at runtime if I mess something up, like "Protocol Exception: Mandatory parameter not set" or something...
Any projects/libs known for that?
Requirements would be:
It's small, lightweight and fast.
It's implemented in Java.
It doesn't serve too many purposes (unlike some full-blown framework, e.g. Spring).
I think this Spring package is what you're looking for. See JmsInvokerProxyFactoryBean and related classes.
From the javadoc:
FactoryBean for JMS invoker proxies. Exposes the proxied service for
use as a bean reference, using the specified service interface.
Serializes remote invocation objects and deserializes remote
invocation result objects. Uses Java serialization just like RMI, but
with the JMS provider as communication infrastructure.
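A minimal Java-config sketch of that approach, assuming a hypothetical KeywordService interface, an ActiveMQ broker on localhost, and a queue named keywordService. In practice the exporter and listener container live in the service application and the proxy bean in the consumer application:

    import java.util.List;

    import javax.jms.ConnectionFactory;

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.listener.SimpleMessageListenerContainer;
    import org.springframework.jms.remoting.JmsInvokerProxyFactoryBean;
    import org.springframework.jms.remoting.JmsInvokerServiceExporter;

    @Configuration
    public class RemotingConfig {

        // The plain Java interface shared by consumer and service.
        public interface KeywordService {
            List<String> getAllKeywords(String text);
        }

        @Bean
        public ConnectionFactory connectionFactory() {
            return new ActiveMQConnectionFactory("tcp://localhost:61616");
        }

        // --- Service side: expose the implementation over a queue. ---
        @Bean
        public JmsInvokerServiceExporter keywordServiceExporter(KeywordService impl) {
            // "impl" is your implementation bean (not shown here).
            JmsInvokerServiceExporter exporter = new JmsInvokerServiceExporter();
            exporter.setServiceInterface(KeywordService.class);
            exporter.setService(impl);
            return exporter;
        }

        @Bean
        public SimpleMessageListenerContainer listenerContainer(
                ConnectionFactory cf, JmsInvokerServiceExporter exporter) {
            SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
            container.setConnectionFactory(cf);
            container.setDestinationName("keywordService");
            container.setMessageListener(exporter);
            return container;
        }

        // --- Consumer side: a proxy that looks like a local KeywordService. ---
        @Bean
        public JmsInvokerProxyFactoryBean keywordServiceProxy(ConnectionFactory cf) {
            JmsInvokerProxyFactoryBean proxy = new JmsInvokerProxyFactoryBean();
            proxy.setConnectionFactory(cf);
            proxy.setQueueName("keywordService");
            proxy.setServiceInterface(KeywordService.class);
            return proxy;
        }
    }

The consumer then just injects KeywordService and calls getAllKeywords("Hello world, this is my input text..."); Spring builds the request/reply messages from the interface, so adding or removing a parameter becomes a compile-time change instead of a string-format change.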