I currently work on a trading application that does not use Camel.
It essentially takes in trades, does some processing and sends the details to an external system.
We now have a need to integrate with 3 new systems, using FTP for 2 systems and JMS for 1 system.
I would like to use Camel in my application for these integrations. I have read a good chunk of Camel in Action, but I was unclear on how we could kick off our Camel routes.
Essentially, we don't want to modify any part of the existing application too drastically, as it's working well in production.
In the existing application, we generate a Trade Value Object, and it's from this object that I want to kick off our Camel integration.
I don't have a database table or JMS queue from which I can kick off the route.
I had a quick look at the chapter on bean routing and remoting in the Camel in Action book, but I wanted to get people's advice first before proceeding with any steps.
What would be the best approach for this integration?
Thanks
Damien
You can use Camel's POJO Producing feature, which allows you to send a message to a Camel endpoint from a Java bean. If you have no need for JMS or a database, you can use a "direct:", "seda:" or "vm:" endpoint as the <from> part of your route.
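For example, a minimal sketch of POJO producing (the endpoint name is an assumption, and the trade parameter stands in for your own Trade Value Object):

import org.apache.camel.Produce;
import org.apache.camel.ProducerTemplate;

public class TradeHandler {

    // Camel's bean post-processor injects a producer bound to this endpoint
    @Produce(uri = "direct:newTrade")
    private ProducerTemplate producer;

    public void onTrade(Object trade) { // your Trade Value Object
        // hands the trade to whatever Camel route starts at direct:newTrade
        producer.sendBody(trade);
    }
}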
POJO producing, as Konstantin V. Salikhov stated. However, you need to be sure you have a Spring application and are scanning your beans with Spring, or wire them up explicitly.
"If a bean is defined in Spring XML or scanned using the Spring component scanning mechanism and a is used or a CamelBeanPostProcessor then we process a number of Camel annotations to do various things such as injecting resources or producing, consuming or routing messages."
If this approach would add too many changes to your application, you could use a ProducerTemplate and just invoke a direct endpoint (or SEDA, for that matter).
The choice of protocol here might be important. The direct protocol is a safe choice, since the overhead is simply a method call; exceptions will also propagate well through direct endpoints, as will transactions. SEDA endpoints are asynchronous (like JMS) but do not feature persistence, so there is a slight chance of losing in-flight data in case of a crash. This might or might not be an issue. However, under high load, the SEDA protocol stages better and gives your application better resilience to load peaks.
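For example, a minimal self-contained sketch of the ProducerTemplate approach (the endpoint name and target URI are illustrative, not your actual systems):

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class TradeIntegration {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // synchronous hand-off: exceptions propagate back to the caller
                from("direct:trades").to("ftp://user@host/inbox");
            }
        });
        context.start();

        // call this from the existing code wherever the Trade Value Object is created
        ProducerTemplate template = context.createProducerTemplate();
        Object trade = new Object(); // stand-in for the Trade Value Object
        template.sendBody("direct:trades", trade);

        context.stop();
    }
}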
I am planning the development of a microservice-based architecture application, and I decided to use Kafka for the internal communication. While reading the book Microservice Architecture by Ronnie Mitra, Matt McLarty, Mike Amundsen and Irakli Nadareishvili, I came across this passage:
"letting microservices directly interact with message brokers (such as RabbitMQ, etc.) is rarely a good idea. If two microservices are directly communicating via a message-queue channel, they are sharing a data space (the channel) and we have already talked, at length, about the evils of two microservices sharing a data space. Instead, what we can do is encapsulate message-passing behind an independent microservice that can provide message-passing capability, in a loosely coupled way, to all interested microservices."
I am using Netflix Eureka for service registration and discovery, Zuul as an edge server, and Hystrix.
That said, in practice, how can I implement that kind of microservice? How can I make my microservices independent of the communication channel (in this case Kafka)?
At the moment I'm interacting with the channel directly, so I don't have an extra layer between my publishers/subscribers and Kafka.
UPDATE 06/02/2018
To be more precise, we have a couple of microservices: one publishes news on a topic (ActiveMQ, Kafka...), and the other is subscribed to that topic and performs some operations on the messages coming through. So we have services that are coupled to the message broker (to the channel): we have the message broker's APIs embedded in our code, and if we want to change the message broker, for example, we have to change every microservice that uses the broker's API. So they are suggesting to use a microservice (in the picture, I assume it is the Events Hub) that acts as the dispatcher of the various messages. That way, it is the only component that interacts with the channel.
A general foreword: don't do it if you don't need it. Introducing a queue system can be a big improvement if you are dealing with a high number of events, events backing up, and so on. But if you don't face any such issues, you are probably better off with the lower complexity of direct service communication.
Back to your question: it sounds like you want to abstract your communication with the queue because you are worried about the effort of replacing the queue with a different system. Is that correct?
In that case you can either do what you proposed and develop a new service in the middle. This comes with all the baggage of a physical service (deployment, scaling, etc.).
Or, as a second alternative, you can write a client library that abstracts the queue the way you want and can be reused in all services that need to participate in the queue. This way you don't have to physically deploy another service, but you are still in full control of what your interface to the queue looks like, and you have a single piece of code in which to incorporate changes (at least toward the direction of the queue). This works as long as you are confident the app-facing side of the library can stay stable enough.
But, again, don't do any of this in the first iteration, when you are not sure you need all the complexity. (Over-engineering is a dangerous thing.)
You should create an interface, let's say Queue, which provides all the functionality you want from Kafka or RabbitMQ, then create different implementations such as KafkaQueue and RabbitMQQueue and inject whichever implementation you want to use in your system.
This way, if a new queue system is adopted, your existing code will not need to change.
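As a rough sketch of that idea (the method signature and publish-only scope are assumptions; the Kafka calls are the standard producer API):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Broker-agnostic contract the services code against (each type in its own file;
// in real code you might pick a name that doesn't clash with java.util.Queue).
public interface Queue {
    void publish(String topic, String payload);
}

// Kafka-specific implementation; a RabbitMQQueue would implement the same contract.
class KafkaQueue implements Queue {
    private final Producer<String, String> producer;

    KafkaQueue(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    @Override
    public void publish(String topic, String payload) {
        producer.send(new ProducerRecord<>(topic, payload));
    }
}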
Creating another microservice would be extra overhead in this case.
In a service architecture, the proper way to make your code independent of the constraints of the communication channel is to have properly modeled, self-sufficient messages. Historic examples would be WSDL in document mode, EDIFACT, HATEOAS, etc. From this point of view, microservices with Spring Boot and Kafka are just a different implementation of the same old thing, done since mainframes ruled the world.
Essentially, if you take a view of your app as a black-box asynchronous server, everything the app does is receive events and produce new ones. It should not matter how events are raised within the app. HTTP requests, XML within JMS messages, JSON in Kafka, whatever: all those things are just a way to pass events, and the business layer of the application should respond only to the content of events.
So the business layer is usually structured around some custom model/domain delivered as the payload. The business layer is invoked/triggered by a listener/producer layer which talks to the communication channel (Kafka listener, HTTP listener, etc.). Aside from logging and enforcing security, you should not have communication-channel logic in the app. I have seen unfortunate examples of business logic driven by the originating JMS connection or by parsing the URL of a request. If you ever have this in your code, you have failed to properly structure it.
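For illustration, a minimal sketch of that separation, using Spring for Apache Kafka as one possible listener layer (all names here are hypothetical):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Business layer: pure domain logic, no knowledge of the transport
@Service
public class NewsService {
    public void onNews(String headline) {
        // ... business rules react only to the content of the event
    }
}

// Listener layer: the only place that knows the event arrived via Kafka
@Component
class NewsKafkaAdapter {

    private final NewsService service;

    NewsKafkaAdapter(NewsService service) {
        this.service = service;
    }

    @KafkaListener(topics = "news")
    public void listen(String payload) {
        // translate the transport payload into a plain domain call
        service.onNews(payload);
    }
}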
However, that is easier said than done. Some people are good at this level of modeling, and some never learn.
And there is no other way to learn but to try and fail.
We have a number of related Java Spring applications running on our servers. Let's call them App1, App2 and App3. As is standard, all of these use the common code in our-common-utils.jar.
I want these applications (App1, App2 and App3) to broadcast their state to one or more remote listeners. For example:
App1: I failed to read file abc.
App2: I am using more than 90% of my heap space etc.
The listener/s of these events will take specific actions such as send emails to support and/or clients based on the notifications received.
The best solution I can think of is to have a JMX-enabled NotificationSender bean (extending NotificationBroadcasterSupport) in our-common-utils.jar. This will have a thread consuming from a queue of Notifications and firing off sendNotification() to the listeners for each Notification. This will be done by each of the apps in our ecosystem, but using common code from common-utils.
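To make that concrete, here is roughly what I have in mind (the names are illustrative, and each type would live in its own file):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

// MBean interface so the bean can be registered with the MBean server
public interface NotificationSenderMBean {
    void publish(String type, String message);
}

public class NotificationSender extends NotificationBroadcasterSupport
        implements NotificationSenderMBean {

    private final BlockingQueue<Notification> queue = new LinkedBlockingQueue<>();
    private final AtomicLong sequence = new AtomicLong();

    public NotificationSender() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    // fires the notification to every registered listener
                    sendNotification(queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "notification-sender");
        consumer.setDaemon(true);
        consumer.start();
    }

    // called from anywhere in App1/App2/App3 via our-common-utils
    @Override
    public void publish(String type, String message) {
        queue.offer(new Notification(type, this, sequence.incrementAndGet(), message));
    }
}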
Do you see any flaws in this design? Any more efficient ways/frameworks of doing it?
Many Thanks :)
An alternative solution is to use a distributed coordination service, ZooKeeper for example; I used one in my very first microservice project. As I can see, you are using Spring. Spring Cloud provides the necessary solutions in a declarative way. I would draw your attention to @FeignClient: it is very simple to use and flexible in the Spring world.
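As a hypothetical illustration (the service ID and endpoint are made up; on older Spring Cloud versions the import is org.springframework.cloud.netflix.feign.FeignClient):

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;

// Declarative HTTP client; "app1" is the service ID registered in Eureka
@FeignClient(name = "app1")
public interface App1StatusClient {

    // maps to GET /status on whichever app1 instance discovery returns
    @GetMapping("/status")
    String status();
}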
If I were working on this issue now, I would use a Spring Hystrix-based solution. To simplify integration between your Java services, I would also recommend checking out service-registration-and-discovery.
Ignore my opinion if Spring is not a core part of your projects (maybe you need other vendors' solutions; there are plenty of alternatives). I concentrate on Spring solutions because Spring is not restricted in my projects and I can use anything I wish if it's reasonable.
Is there a way to trace transactions end to end across a distributed application system using Spring AOP or AspectJ, without changing the existing code? The web service interactions between applications may be RMI, SOAP or REST. I am looking for a general approach and just want to know if it is possible using Spring AOP and AspectJ.
Yes, it is possible with AspectJ, but there is no easy "cooking recipe" or "template for dummies"; you need a custom solution. In order to answer your question concretely I would have to see your code. Another guy from India recently asked me the same; maybe he works on the same project as you.
The general approach is to transfer state between client and server by injecting a unique parameter (something like a transaction ID) into the request and using it on the server. Both client and server should be aspect-enabled. This should be possible via RMI, SOAP and REST, provided you can find a place to inject the additional parameter. In RMI and SOAP this could be an existing general-purpose key-value dictionary for optional parameters; in REST it could be a header field or a request parameter.
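By way of illustration only, a client-side sketch (the pointcut and package are hypothetical; how you attach the ID to the outgoing request depends on the protocol):

import java.util.UUID;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Client-side aspect: tags every outgoing call with a transaction ID.
@Aspect
public class TransactionTraceAspect {

    // carries the ID across layers within one JVM and one thread
    private static final ThreadLocal<String> TX_ID = new ThreadLocal<>();

    // hypothetical pointcut: adapt to your own client classes
    @Around("execution(* com.example.client.*Client.*(..))")
    public Object trace(ProceedingJoinPoint pjp) throws Throwable {
        if (TX_ID.get() == null) {
            TX_ID.set(UUID.randomUUID().toString()); // start of a new transaction
        }
        long start = System.nanoTime();
        try {
            // here you would inject TX_ID.get() into the outgoing request,
            // e.g. as an HTTP header for REST or a header element for SOAP
            return pjp.proceed();
        } finally {
            System.out.printf("[tx %s] %s took %d us%n",
                    TX_ID.get(), pjp.getSignature(), (System.nanoTime() - start) / 1000);
        }
    }
}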
I am experimenting with Camel and finding it a convenient tool for endpoint integration. I've set up the following experimental application:
The first endpoint is a simple HTTP GET request (using curl on the command line). This interfaces with a central switch using Jetty (this is the Camel-based app). This does some elementary tinkering and passes the request to another endpoint (a Thrift server) which handles the request. Its response is then routed back to the command-line client. The setup is therefore a kind of three-tier, over-engineered hello-world architecture.
My routes typically take this form:
from("jetty:http://localhost:8080/hello").process(new DummyProcessor()).process(new HelloProcessor());
My question is as follows:
Given that the HelloProcessor sends a Thrift message to another endpoint for processing, shouldn't this rather be a Component? Is it good (acceptable) practice to use a Processor for such a task? Furthermore, what are the advantages of writing a component, if it is indeed acceptable?
There are not really any benefits in writing a component if you are going to use it in one or a few routes.
If you intend to use this processor in multiple routes in the future, and you need a way to configure it by some parameters - then you typically write your own component. It also perhaps makes the route more readable. A component is also an easy artifact to share between different Camel applications and projects.
from("file:///var/files/inbox").to("http://www.example.com/");
vs
from("file:///var/files/inbox").process(sendHttpToExampleDotComProcessor); // or whatever
If it's a one-time use - don't overcomplicate.
I have a web service, that takes an input xml message, transforms it, and then forwards it to another web service.
The application is deployed to two WebLogic app servers for performance and resilience reasons.
I would like a single monitoring web page that allows two things:
ability to stop/ start forwarding of messages
ability to monitor throughput, e.g. the number of messages in the last hour, the number of different senders into the web service, etc.
I was wondering what the best way to implement this was.
My current idea is to have an in-memory database (e.g. Derby or HSQL) replicating data to share the information between the two (or more) instances of my application that run in different instances of the app server. I imagine I would have to set up some sort of master/slave configuration.
I would love a link to an article that discusses how to solve this problem.
(Note, this is a simple spring application using spring MVC)
thanks,
David.
This sounds like a good match for Java Management Extensions (JMX)
JMX allows you to expose certain operations (eg: start/stop forwarding messages)
JMX allows you to monitor certain performance indicators (eg: moving average of messages processed)
Spring has good support for exposing beans as JMX MBeans; see the Spring JMX documentation for more information.
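For instance, a rough sketch of what such a bean could look like (the class and object name are made up; the annotations are Spring's standard JMX support, and you would also need an MBeanExporter, e.g. via <context:mbean-export/>):

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

@Component
@ManagedResource(objectName = "com.example:type=MessageForwarder")
public class MessageForwarder {

    private final AtomicBoolean forwarding = new AtomicBoolean(true);
    private final AtomicLong processed = new AtomicLong();

    @ManagedOperation(description = "Stop forwarding messages")
    public void stop() { forwarding.set(false); }

    @ManagedOperation(description = "Start forwarding messages")
    public void start() { forwarding.set(true); }

    @ManagedAttribute(description = "Messages processed since startup")
    public long getProcessedCount() { return processed.get(); }

    public void forward(String message) {
        if (forwarding.get()) {
            processed.incrementAndGet();
            // ... transform and forward to the downstream web service
        }
    }
}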
Then you could use an open-source web-based JMX console, such as jManage
Hope this helps.
Sounds like you are looking for a message queue; some MDBs and a configurable design would let you do all of this. Spring has support for JMS queues, if I'm not wrong.
I think you are looking for a message queue. If you need additional monitoring, using a web service as the endpoint may not suffice: with regard to stopping/starting the forwarding of messages, monitoring HTTP requests to a web service is more cumbersome than tracking messages on a queue (even though you can do it).
If you are exposing this service to a third party, then the web service will sit on top of the message queue and delegate to it.
In my experience, RabbitMQ is a fine messaging queue service with a relatively simple learning curve.