Apache Camel - JDBC Stored Procedures and transaction handling doubts

I'm trying to build a small, proof-of-concept, Camel-based application (running on FuseESB) which could possibly replace part of our existing integration system built on EJBs.
Right now, I'm trying to figure out the best way to handle the following scenario with apache camel:
JMS text message comes in
I have to execute a series of database operations based on the message content, invoking mainly stored procedures/functions
from the results of the DB calls I have to construct a reply message and send it to a specific JMS queue.
In case of an error/exception I would like to use a dead letter channel handling mechanism.
I can build simple Camel routes, and handling errors and exceptions in Camel looks easy too. What I don't get is how to use the Camel SQL component (I understand that the JDBC component cannot be a transactional client) to make all my DB calls part of a single transaction. From what I found on the net, the Camel SQL component cannot be used to execute stored procedures - is that true? If it is, should I use Processors or plain POJO classes to do my JDBC calls? What about transactions when using a POJO or Processor? I would highly appreciate any pointers to resources describing how to handle such a use case.

I would suggest using a Java bean to do the JDBC interaction, since you want to make multiple calls and use stored procedures. Sometimes plain Java code is easier.
For example, Spring's JdbcTemplate has a good abstraction over the JDK JDBC API and makes it fairly easy to call stored procedures.
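As a minimal sketch, here is a stored procedure call with Spring's SimpleJdbcCall (which builds on JdbcTemplate); the procedure name book_trade and its parameters are invented for the example:

import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;

public class TradeDao {
    private final SimpleJdbcCall bookTradeCall;

    public TradeDao(DataSource dataSource) {
        // SimpleJdbcCall reads parameter metadata from the database,
        // so usually only the procedure name is needed
        this.bookTradeCall = new SimpleJdbcCall(dataSource).withProcedureName("book_trade");
    }

    public Map<String, Object> bookTrade(String tradeId, int quantity) {
        MapSqlParameterSource params = new MapSqlParameterSource()
                .addValue("p_trade_id", tradeId)
                .addValue("p_quantity", quantity);
        return bookTradeCall.execute(params); // returns the procedure's OUT parameters
    }
}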
Alternatively, MyBatis has support for calling stored procedures as well.
http://loianegroner.com/2011/03/ibatis-mybatis-working-with-stored-procedures/
And there is a camel-mybatis component as well.
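On the transaction side, if the bean is invoked from a transacted route, all of its JDBC calls run in the one transaction started for the incoming JMS message. A rough sketch, reusing the TradeDao idea from above (the endpoint names and the transaction manager registered in the Spring context are assumptions, not taken from your setup):

import org.apache.camel.builder.RouteBuilder;

public class TradeRoute extends RouteBuilder {
    @Override
    public void configure() {
        // failed exchanges end up on a dead letter queue
        errorHandler(deadLetterChannel("activemq:queue:trades.dead"));

        from("activemq:queue:trades.in")
            .transacted()                          // joins/starts a transaction via the registered transaction manager
            .to("bean:tradeDao?method=process")    // every JDBC call inside the bean shares that transaction
            .to("activemq:queue:trades.reply");
    }
}

Two caveats: making the JMS consumption and the JDBC work fully atomic requires an XA transaction manager; with a plain DataSourceTransactionManager the DB calls are atomic among themselves and the JMS message is redelivered on rollback, which is often acceptable. And the dead letter channel here is a simplification; with JMS transactions the broker's own redelivery/DLQ settings also come into play.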

Related

How to use a JTA transaction with two databases?

App1 interacts with App2 (an EJB application) using a client API exposed by App2. It uses a CMT-managed JTA transaction in JBoss. We are getting the UserTransaction from App2 (JBoss) using a JNDI lookup.
App1 makes a call to App2 to insert data into DS2, using the UserTransaction's begin() and commit().
App1 makes a call to DS1 using Hibernate JPA to insert data, using the JPATransactionManager.
Is it possible to wrap both of the above DB operations in a single (distributed) transaction?
To do this it's necessary to implement your own transactional resource, capable of joining an ongoing JTA transaction. See this answer as well for some guidelines; one way to see how this is done is to look at the XA driver code for a database or JMS resource, and base yourself on that.
This is not trivial to do and a very rare use case; in practice it is usually solved by adopting an alternative design. One way would be to extract the necessary code from App2 into a JAR library and use it in Tomcat with a standalone JTA transaction manager like Atomikos, connected to two XA datasources.
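As a rough illustration of that Atomikos option (resource names, driver classes and SQL are placeholders, not a tested configuration):

import java.sql.Connection;
import java.util.Properties;
import javax.transaction.UserTransaction;
import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.jdbc.AtomikosDataSourceBean;

// one Atomikos-managed XA datasource per database
AtomikosDataSourceBean ds1 = new AtomikosDataSourceBean();
ds1.setUniqueResourceName("ds1");
ds1.setXaDataSourceClassName("org.postgresql.xa.PGXADataSource");
Properties p1 = new Properties();
p1.setProperty("serverName", "host1");
p1.setProperty("databaseName", "db1");
ds1.setXaProperties(p1);

AtomikosDataSourceBean ds2 = new AtomikosDataSourceBean();
ds2.setUniqueResourceName("ds2");
ds2.setXaDataSourceClassName("org.postgresql.xa.PGXADataSource");
Properties p2 = new Properties();
p2.setProperty("serverName", "host2");
p2.setProperty("databaseName", "db2");
ds2.setXaProperties(p2);

UserTransaction utx = new UserTransactionImp();
utx.begin();
try {
    try (Connection c1 = ds1.getConnection(); Connection c2 = ds2.getConnection()) {
        // both connections enlist in the same XA transaction
        c1.createStatement().executeUpdate("insert into t1 values (1)");
        c2.createStatement().executeUpdate("insert into t2 values (1)");
    }
    utx.commit();   // two-phase commit across both databases
} catch (Exception e) {
    utx.rollback();
}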
Another way is to flush the SQL statements to the database from Tomcat and see if that works, before sending a synchronous call to JBoss that returns whether the transaction in JBoss went through, and then committing or rolling back in Tomcat depending on the result. This does not guarantee it will work 100% of the time (network failures etc.) but might be acceptable depending on what the system does and the business consequences of a failed transaction.
Yet another way is to make the operation reversible on the JBoss side and expose a compensation service used by Tomcat in case errors are detected. For that, and by making the two servers JBoss, you could take advantage of the JBoss Narayana engine; see also this answer.
Which way is better depends on the use case, but implementing your own XA transactional resources is a big undertaking; it would be simpler to change the design. The reason very few projects do it is that it's complex and there are simpler alternatives.
Tomcat is a web server, so it does not support global transactions.
JBoss is an application server, so it supports global transactions.
If you have to combine both, you have to use JOTM or Atomikos, which act as transaction managers and handle commit or rollback.

Camel - Integrating with Existing Application

I currently work on a trading application that does not use camel.
It essentially takes in trades, does some processing and sends the details to an external system.
We now have a need to integrate with 3 new systems, using FTP for 2 systems and JMS for 1 system.
I would like to use Camel in my application for these integrations. I have read a good chunk of Camel in Action but I was unclear on how we could kick off our Camel routes.
Essentially, we don't want to modify any part of the existing application too drastically, as it's working well in production.
In the existing application, we generate a Trade value object, and it's from this object that I want to kick off our Camel integration.
I don't have a database table or JMS queue that I can kick off the route from.
I had a quick look at the chapter on bean routing and remoting in the Camel in Action book but I wanted to get people's advice first before proceeding with any steps.
What would be the best approach for this integration?
Thanks
Damien
You can use Camel's POJO Producing feature, which allows you to send a message to a Camel endpoint from a Java bean. If you have no need for JMS or a DB, you can use a "direct:", "seda:" or "vm:" endpoint as the <from> part of your route.
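A minimal sketch of POJO producing (the endpoint name direct:newTrade and the surrounding class are invented for the example):

import org.apache.camel.Produce;
import org.apache.camel.ProducerTemplate;

public class TradeGateway {

    // Camel injects a ProducerTemplate bound to this endpoint
    @Produce(uri = "direct:newTrade")
    private ProducerTemplate producer;

    public void onNewTrade(Object trade) {
        // hands the existing Trade value object over to the Camel route
        producer.sendBody(trade);
    }
}

A route starting with from("direct:newTrade") then carries the object on to the FTP or JMS endpoints.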
POJO producing, as Konstantin V. Salikhov stated. However, you need to be sure you have a Spring application and are scanning your beans with Spring, or wire them up yourself.
"If a bean is defined in Spring XML or scanned using the Spring component scanning mechanism and a is used or a CamelBeanPostProcessor then we process a number of Camel annotations to do various things such as injecting resources or producing, consuming or routing messages."
If this approach would add too many changes to your application, you could use a ProducerTemplate and just invoke a direct endpoint (or SEDA, for that matter).
The choice of protocol here might be important. The direct protocol is a safe choice, since the overhead is simply a method call. Also, exceptions will propagate well through direct endpoints, as will transactions. As SEDA endpoints are asynchronous (like JMS) but do not feature persistence, there is a slight chance of losing in-flight data in case of a crash. This might or might not be an issue. However, under high load, the SEDA protocol scales better and gives your application better resistance to load peaks.
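For comparison, the ProducerTemplate variant needs only a handle on the CamelContext (again assuming a route that starts at direct:newTrade, with trade being the existing value object):

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;

// create once and reuse - a ProducerTemplate is thread-safe
ProducerTemplate template = camelContext.createProducerTemplate();
template.sendBody("direct:newTrade", trade);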

Are there any design patterns that could work in this scenario?

We have a system (Java web application) that's been in active development / maintenance for a long time now (something like ten years).
What we're looking at doing is implementing a RESTful API for the web app. This API, using Jersey, will be a separate project, with the intent that it should be able to run alongside the main application or be deployed in the cloud.
Because of the nature and age of our application, we've had to implement a (somewhat) comprehensive caching layer on top of the database (Postgres) to help keep load down. Anyway, for the RESTful API, the idea is that GET requests will go to the cache first instead of the database, to keep load off the database.
The cache will be populated in a way to help ensure that most things registered API users will need should be in there.
If there is a cache miss, the needed data should be retrieved from the database (also being entered into the cache in the process).
Obviously, this should remain transparent to the RESTful endpoint methods in my code. We've come up with the idea of creating a 'Broker' to handle communications with the DB and the cache. The REST layer will simply pass across IDs (if looking to retrieve) or populated Java objects (if looking to insert/update) and the broker will take care of retrieving, updating, invalidating, etc.
There is also the issue of extensibility. To begin with, the API will live alongside the rest of our servers, so access to the database won't be an issue. However, if we deploy to the cloud, we're going to need a different Broker implementation that will communicate with the system (namely the database) in a different manner (potentially through the use of an internal API).
I already have a rough idea of how I could implement this, but it struck me that this is probably a problem for which a suitable pattern already exists. If I could follow an established pattern as opposed to coming up with my own solution, that would probably be the better choice. Any ideas?
Ehcache has an implementation of just such a cache that it calls a SelfPopulatingCache.
Requests are made to the cache, not to the database. Then if there is a cache miss Ehcache will call the database (or whatever external data source you have) on your behalf.
You just need to implement a CacheEntryFactory which has a single method:
Object createEntry(Object key) throws Exception;
So as the name suggests, Ehcache implements this concept with a pretty standard factory pattern...
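A rough sketch of the wiring (the cache name, key type and DAO call are assumptions for the example):

import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.constructs.blocking.CacheEntryFactory;
import net.sf.ehcache.constructs.blocking.SelfPopulatingCache;

CacheManager manager = CacheManager.getInstance();

// wraps a cache configured in ehcache.xml; on a miss, createEntry is
// called and its result is stored under the requested key
SelfPopulatingCache cache = new SelfPopulatingCache(
        manager.getCache("widgets"),
        new CacheEntryFactory() {
            public Object createEntry(Object key) throws Exception {
                return widgetDao.findById((Long) key); // hypothetical DB lookup
            }
        });

Element element = cache.get(42L);   // only hits the database on a miss
Object widget = element.getObjectValue();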
There's no pattern. Just hide the initial DB services behind interfaces, build tests around their intended behavior, then switch in an implementation that uses the caching layer. I guess dependency injection would be the best thing to help you do that?
Sounds like decorator pattern will suit your need: http://en.wikipedia.org/wiki/Decorator_pattern.
You can create a DAO interface for data access, something like:
Value get(long id);
First create a direct DB implementation, then create a cache implementation which calls an underlying DAO instance; in this case it should be the DB implementation.
The cache implementation will try to get the value from its own managed cache, and fall back to the underlying DAO if that fails.
So both your old application and the REST layer will only see the DAO interface, without knowing any implementation details, and in the future you can change the implementation freely.
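A minimal sketch of that decorator arrangement (the Value class, the DB lookup and the in-memory map are placeholders; a real implementation would delegate to Ehcache or similar):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Value { /* your domain object */ }

interface ValueDao {
    Value get(long id);
}

class DbValueDao implements ValueDao {
    public Value get(long id) {
        // real JDBC/JPA lookup omitted in this sketch
        return null;
    }
}

// the decorator: same interface, wraps another ValueDao
class CachingValueDao implements ValueDao {
    private final ValueDao delegate;
    private final Map<Long, Value> cache = new ConcurrentHashMap<Long, Value>();

    CachingValueDao(ValueDao delegate) {
        this.delegate = delegate;
    }

    public Value get(long id) {
        Value value = cache.get(id);
        if (value == null) {               // miss: fall through to the wrapped DAO
            value = delegate.get(id);
            if (value != null) {
                cache.put(id, value);
            }
        }
        return value;
    }
}

// callers (the old app and the REST layer) only ever see ValueDao:
ValueDao dao = new CachingValueDao(new DbValueDao());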
The best design pattern for transparently caching HTTP requests is to use an HTTP cache.

Calling/Using JMS from PL/SQL

Is it possible to call/use JAVA Messaging Service (JMS) from PL/SQL?
I know we can call Java from PL/SQL, but calling Java is different from calling JMS queues or JMS topics, because JMS depends upon JNDI resource naming, and when we use JNDI-based resources we first have to deploy them in some J2EE container and then use them. So calling JMS always involves deploying on some J2EE container and then utilizing its functionality.
Coming back to my question: as I mentioned earlier, I want to use JMS from PL/SQL - how would it handle the deployment and JNDI-based resources stuff?
There are two issues in your question that need to be addressed separately:
JNDI
No, calling a JMS service does not depend on having a JNDI resource, nor do you need to have the JMS client deployed in a container. The reason for using JNDI within a container is to avoid having configuration parameters hard-coded in your application code (by using a "directory" of named "things").
For example, we use JNDI to get a connection pool from which to get a JDBC connection, but I could equally create a JDBC connection directly. The latter is fine for testing or for a command-line utility, but it is certainly not fine for the general case (which is why we typically opt for the former, JNDI-based option).
With JMS, yep, you indeed need JNDI, but that doesn't mean your client needs to be in an EE container. Take a look at the JMS tutorial on the Oracle/Sun site, and check the simple examples section:
http://download.oracle.com/javaee/1.3/jms/tutorial/1_3_1-fcs/doc/client.html
IIRC, every example shows clients that can be run from the command line and where you simply pass the queue name and other parameters from the command line. It should be easy to retrofit that code so that you can load them up from a property file or as parameters in a function call.
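The shape of such a standalone client is roughly this (the provider URL and initial context factory are ActiveMQ-style placeholders; substitute your provider's values):

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
env.put(Context.PROVIDER_URL, "tcp://localhost:61616");

// JNDI here is just a lookup against properties - no EE container involved
Context ctx = new InitialContext(env);
ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");

Connection connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = session.createQueue("my.queue");
session.createProducer(queue).send(session.createTextMessage("hello from a standalone client"));
connection.close();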
Java in Stored Procedures
Once you have a command-line client that can access the JMS queue you want, you can retrofit that code so that it runs as a stored procedure. Yes, you can use Java to write stored procedures with Oracle...
... now, I think that is a horrible feature, one that is way too open to abuse. But, if you have a legitimate need to access a JMS provider from PL/SQL, this would be one way to go.
First, convert your command-line JMS client into a stored procedure. Check the existing documentation on how to create Java-based stored procedures with Oracle.
http://www.stanford.edu/dept/itss/docs/oracle/10g/java.101/b12021/storproc.htm
http://download.oracle.com/docs/cd/B10501_01/java.920/a96659.pdf
Then have your PL/SQL code call the stored procedure just as they would call any other stored proc or SQL statement. And voila.
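As a hypothetical sketch of what the Java side can look like: a static method (Oracle's Java stored procedures dispatch to static methods) that reuses the standalone-client code from above, loaded into the database with loadjava and exposed through a call specification. The names and connection details are invented:

public class JmsBridge {
    public static void sendMessage(String queueName, String text) throws Exception {
        java.util.Properties env = new java.util.Properties();
        env.put(javax.naming.Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        env.put(javax.naming.Context.PROVIDER_URL, "tcp://localhost:61616");
        javax.naming.Context ctx = new javax.naming.InitialContext(env);
        javax.jms.ConnectionFactory cf = (javax.jms.ConnectionFactory) ctx.lookup("ConnectionFactory");
        javax.jms.Connection conn = cf.createConnection();
        try {
            javax.jms.Session session = conn.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
            session.createProducer(session.createQueue(queueName)).send(session.createTextMessage(text));
        } finally {
            conn.close();
        }
    }
}

The matching PL/SQL call specification would then be along the lines of CREATE OR REPLACE PROCEDURE send_jms_message(p_queue VARCHAR2, p_text VARCHAR2) AS LANGUAGE JAVA NAME 'JmsBridge.sendMessage(java.lang.String, java.lang.String)'; after which PL/SQL code can call send_jms_message like any other procedure.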
Parting Thoughts
I've never done any of this, and there might be problems along the way. However, at least conceptually, it should be possible. At the very least you should be able to create a JMS command-line utility that you can then convert into a Java-based stored proc.
edit
Apparently Oracle has something called "Oracle Advanced Queueing" where you can access a JMS provider directly via PL/SQL.
http://www.akadia.com/services/ora_advanced_queueing.html
http://technology.amis.nl/blog/2384/enqueuing-aq-jms-text-message-from-plsql-on-oracle-xe
http://download.oracle.com/docs/cd/B10500_01/appdev.920/a96587/qintro.htm
Looks like a lot of reading and elbow grease involved, but it is certainly feasible (assuming you are using the right Oracle version.)
I might be updating an old thread, but I just successfully used JMS to send out messages from a PL/Java trigger function. The one requirement that I never found written down anywhere is that you have to load the JMS broker jar files (I used ActiveMQ) into your database through the PL/Java install function. The other steps are the same as in this example.

Asynchronous processing in Java from a servlet

I currently have a Tomcat container with a servlet running on it, listening for requests. I need the result of an HTTP request to be a submission to a job queue, which will then be processed asynchronously. I want each "job" to be persisted as a row in a DB for tracking and for recovery in case of failure. I've been doing a lot of reading. Here are my options (note I have to use open-source stuff for everything).
1) JMS -- use ActiveMQ (but who is the consumer of the job in this case, another servlet?)
2) Have my request create a row in the DB. Have a separate servlet inside my Tomcat container that always runs; it uses Quartz Scheduler or the utilities provided in java.util.concurrent to continuously process the rows as jobs (using thread pooling).
I am leaning towards the latter because looking at the JMS documentation gives me a headache, and while I know it's a more robust solution, I need to implement this relatively quickly. I'm not anticipating huge amounts of load in the early days of deploying this server in any case.
A lot of people say Spring might be good for either 1 or 2. However I've never used Spring and I wouldn't even know how to start using it to solve this problem. Any pointers on how to dive in without having to re-write my entire project would be useful.
Otherwise if you could weigh in on option 1 or 2 that would also be useful.
Clarification: The asynchronous process would be to screen-scrape a third-party web site and send a message notification to the original requester. The third-party web site is a bit flaky and slow, and that's why it will be handled as an asynchronous process (with several retry attempts built in). I will also be pulling files from that site and storing them in S3.
Your Quartz job doesn't need to be a servlet! You can persist incoming jobs in the DB and have Quartz started when your main servlet starts up. The Quartz job can be a simple POJO that checks the DB for any jobs periodically.
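A minimal sketch of that arrangement with the Quartz 2 API (the table polling logic and the 30-second interval are placeholders):

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class DbPollingJob implements Job {
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // query the jobs table for PENDING rows, process each one,
        // then mark it DONE or FAILED (left out here)
    }
}

// run once at startup, e.g. from a ServletContextListener:
Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
scheduler.start();
scheduler.scheduleJob(
    JobBuilder.newJob(DbPollingJob.class).withIdentity("dbPoller").build(),
    TriggerBuilder.newTrigger()
        .startNow()
        .withSchedule(SimpleScheduleBuilder.repeatSecondlyForever(30))
        .build());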
However, I would suggest taking a look at Spring. It's not hard to learn and easy to set up within Tomcat. You can find a lot of good information in the Spring reference documentation. It has Quartz integration, which is much easier than doing it manually.
A suitable solution which will not require you to do a lot of design and programming is to create the object you will need later in the servlet and serialize it to a byte array. Then put that in a BLOB field in the database and be done with it.
Then your processing thread can just read the contents, deserialize them and work with the resurrected object.
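A quick sketch of that round trip (the table and column names are made up, and it assumes an open JDBC connection and a job object that implements Serializable):

import java.io.*;
import java.sql.*;

// servlet side: serialize and insert
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
ObjectOutputStream out = new ObjectOutputStream(buffer);
out.writeObject(job);              // job implements Serializable
out.close();

PreparedStatement insert = connection.prepareStatement(
        "insert into job_queue (payload, status) values (?, 'PENDING')");
insert.setBytes(1, buffer.toByteArray());
insert.executeUpdate();

// worker side: read back and deserialize
ResultSet rs = connection.createStatement()
        .executeQuery("select payload from job_queue where status = 'PENDING'");
while (rs.next()) {
    ObjectInputStream in = new ObjectInputStream(
            new ByteArrayInputStream(rs.getBytes("payload")));
    Object restored = in.readObject();   // the resurrected job object
    // ... process it ...
}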
But, you may get better answers by describing what you need your system to actually DO :)
