Web services and transactions - Java

I have a client application which needs to communicate via SOAP calls with a second application on a web server. Some of the operations must be atomic, and most of them include web service calls.
I have read about Web Services Transactions (IBM), but could not locate implementations, road-maps, or other hands-on material on this topic.
Should I consider two-phase commit or other distributed protocols for transactions, or are there other ways (methodologies) for achieving this?

"I have read about Web Services Transactions (IBM)"
It's not just IBM. The 1.0 standard was IBM, IONA, Microsoft and assorted others. 1.1 was IBM, IONA and JBoss.
"but could not locate implementations, road-maps, or other hands-on material on this topic."
coughgooglecough
WebSphere:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.express.doc/info/exp/ae/cjta_wstran.html
JBoss:
http://docs.redhat.com/docs/en-US/JBoss_Enterprise_Application_Platform/5/html/Transactions_Development_Guide/pt03.html
GlassFish:
http://metro.java.net/guide/Using_Web_Services_Atomic_Transactions.html
"Should I consider two-phase commit or other distributed protocols for transactions, or are there other ways (methodologies) for achieving this?"
"The locking model used by two phase commit transactions is really only suitable for short lived transactions in the same domain of control. If your services run in the same company datacenter you'll probably get away with it. For wider distribution, be it geographical or administrative, you probably want to look at WS-BA, a web service transactions protocol specifically designed for such use."
(That's from Transaction rollback and web services BTW, although you could also have found Transaction options over Web Service calls without too much trouble)
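If it helps to picture the container-managed side, here is a minimal sketch (the class names and the DataSource JNDI entry are invented) of a transactional EJB exposed as a SOAP endpoint; enlisting the remote caller's transaction via WS-AT is then a matter of vendor-specific configuration, as described in the links above.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.sql.DataSource;

// Hypothetical order endpoint: the container wraps each SOAP call in a JTA
// transaction; enabling WS-AT on the endpoint (vendor-specific) lets a remote
// caller's transaction context propagate into it.
@Stateless
@WebService(serviceName = "OrderService")
public class OrderService {

    // Assumed XA-capable DataSource configured in the application server.
    @Resource(lookup = "jdbc/ordersXA")
    private DataSource ds;

    @WebMethod
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void placeOrder(String customerId, String itemId) throws SQLException {
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO orders (customer_id, item_id) VALUES (?, ?)")) {
            ps.setString(1, customerId);
            ps.setString(2, itemId);
            ps.executeUpdate();
            // A runtime exception thrown here rolls back the whole JTA
            // transaction, including any other enlisted XA resources.
        }
    }
}
```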

Related

EJB3 Enterprise Application As Portal & Client Web Apps - Architecture/Design

As shown in the above pic, I have an EJB-3 Enterprise application (EAR file), which acts as a portal and holds 3 web applications (WAR files) that communicate and transact with the same datastore. These 3 webapps are not portlet implementations, but normal webapps which interact with the datastore through the Enterprise App's Persistence Layer. These webapps are developed independently, and so some of them use web services from the Enterprise App and some of them use EJB clients.
Also, there is another option of replacing these webapps (Web App1, Web App2 and Web App3) with independent Enterprise Apps that communicate and transact with the database, as shown below:
Now, my questions are:
1) Which is the best of the 2 options listed above?
2) What is the effect of replacing those webapps, which act as clients to the Enterprise App, with independent Enterprise Apps (EAR files)?
3) Which is the better model for transaction handling, SSO functionality, scalability and other factors?
4) Are there any other, better models?
EDIT:
1) In the first model, which is the preferred way to interact with the EAR file - web services or the ejb-client jar file/library (interfaces and utility classes)?
2) How do the two models differ in memory usage (server RAM) and performance? Is there any considerable difference?
Since you are being so abstract, I will be as well. If we strip away all the buzzwords such as "Portal", "Enterprise Apps" and so on... what we have in the end is three web apps and a common library or framework (the Enterprise App).
Seen as simply as possible: you have three developers who need to develop three web apps, and you will provide some common code that is useful for building their apps. The model you should use depends on what kind of code you will provide them.
1.- You will only provide some utilities and common business code. Maybe the classical library fits your needs. (In Java EE environments you must take into account how you can take advantage of the level 2 persistence cache by sharing a Session Factory for a single datastore.)
2.- You will provide shared services such as persistence, cache, security, audit, and so on... You will need a service layer, as in the first option. You will have shared state, so you need only one instance.
3.- The most common case is both: you provide a business API plus a service layer for the common services.
You aren't indicating any requirement that forces you to use a more complex solution for your scenario.
EDIT:
On whether RMI (the ejb-client) or web services are preferred: I always use RMI to connect applications that are geographically close. It is simple to use and the protocol is much faster than web services (you can find plenty of comparisons on this topic by searching for "rmi webservices performance" on Google).
On the other hand, RMI is more sensitive to network latency, requires special firewall configuration and is more tightly coupled than web services. So if I intend to offer services to a third party or connect geographically dispersed servers, I prefer web services or even REST.
About the last question: initially there is no difference between deploying one or ten applications on the same server. The deployment cost will be insignificant compared to the overhead of actually using the applications. Of course, take this as a general assumption; obviously the size of your applications and how you deploy them will have an impact on memory consumption and the rest.
Take into account that these decisions can be easily changed as you need. So, as I said, you could start with the simple solution, and if you run into a problem deploying your applications you could restructure your EARs easily.
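To make the rmi-vs-webservices point concrete, this is roughly what the ejb-client style looks like: a remote business interface packaged in the shared client jar and looked up over RMI/IIOP. All names and the JNDI string below are illustrative, not taken from the question.

```java
// CatalogService.java - shipped in the ejb-client jar shared with the web apps
import javax.ejb.Remote;

@Remote
public interface CatalogService {
    String lookupTitle(String productId);
}

// CatalogServiceBean.java - lives inside the Enterprise App (EAR)
import javax.ejb.Stateless;

@Stateless
public class CatalogServiceBean implements CatalogService {
    public String lookupTitle(String productId) {
        return "title-for-" + productId; // placeholder business logic
    }
}

// CatalogClient.java - inside one of the web apps; calls travel over RMI/IIOP
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class CatalogClient {
    public CatalogService locate() throws NamingException {
        // The JNDI name is container-specific; this one is only an example.
        return (CatalogService) new InitialContext()
                .lookup("java:global/portal-ear/portal-ejb/CatalogServiceBean");
    }
}
```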
I'm inclined to agree with Fedox. If there is no reason for choosing one solution over the other (business reason, technical reason, etc.) then you might as well choose the path of least resistance. To my mind that would be the first solution.
In general terms start simple and add complexity as you need to. Your solutions have no meaning without context. A banking app needs different considerations to a blog.
Hope this helps
There is a platform called Vitria BusinessWare, a very successful product which is worth millions.
Now let's look at how it works and what it does, so that we can do the same in theory:
It interconnects projects with their databases, web services with their EJBs, etc.
From their concept we can learn the following (a rough sketch in code follows the list):
Create a main stateless EJB (API) whose job is to pass messages from:
web services to other web services
web services to webapps
webapps to other web services
The purpose of this EJB is first to run validations against the main database and then pass the calls on to the other modules.
Only this EJB has access to the DB, which keeps the connections more secure.
This EJB will queue messages until the target modules are free to accept them.
This EJB will control all the processes in the DB.
This EJB will decide where to send the messages.
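A minimal sketch of that router idea, assuming one JMS queue per target module and hypothetical names throughout (the Sender entity, the JNDI names and the queue are all invented):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical routing facade: validates against the main database, then
// queues the message for the target module (web service or web app).
@Stateless
public class MessageRouterBean {

    @PersistenceContext
    private EntityManager em;                    // only this EJB touches the DB

    @Resource(lookup = "jms/ConnectionFactory")  // assumed JNDI names
    private ConnectionFactory factory;

    @Resource(lookup = "jms/moduleAQueue")
    private Queue moduleAQueue;

    public void route(String senderId, String payload) throws Exception {
        // 1. Validate against the main database (Sender is a made-up entity).
        Long known = em.createQuery(
                "select count(s) from Sender s where s.id = :id", Long.class)
                .setParameter("id", senderId)
                .getSingleResult();
        if (known == 0) {
            throw new IllegalArgumentException("unknown sender " + senderId);
        }
        // 2. Queue the message; the consuming module picks it up when it is free.
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage message = session.createTextMessage(payload);
            session.createProducer(moduleAQueue).send(message);
        } finally {
            connection.close();
        }
    }
}
```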

Java EE / EJB vs Spring for Distributed Transaction management with multiple DB Clusters

I have a requirement to produce a prototype (running in a J2EE-compatible application server with MySQL) demonstrating the following:
Demonstrate the ability to distribute a transaction over multiple databases located at different sites globally (application-managed data replication)
Demonstrate the ability to write a transaction to a database chosen from a number of database clusters located at multiple locations, where the selection of which database to write to is based on the user's location (database-managed data replication)
I have the option to choose either a Spring stack or a Java EE stack (EJB etc). It would be useful to know of your opinions as to which stack is better at supporting distributed transactions on multiple database clusters.
If possible, could you also please point me to any resources you think would be useful to learn of how to implement the above using either of the two stacks. I think seeing examples of both would help in understanding how they work and probably be in a better position to decide which stack to use.
I have seen a lot of sites while searching on Google, but most seem to be outdated (i.e. pre-EJB 3 and pre-Spring 3).
Thanks
I would use the Java EE stack in the following way:
configure a XA DataSource for each database server
according to the user's location, a stateless EJB looks up the corresponding DataSource and gets a connection from it
when broadcasting a transaction to all servers, a stateless EJB has to iterate over all the configured DataSources and execute one or more queries on each, but within a single transaction
In case of a technical failure, the transaction is rolled back on all concerned servers. In case of a business failure, the code can trigger a rollback thanks to context.setRollbackOnly().
That way, you benefit from Java EE's automatic distributed transaction demarcation first, and you can then use more complex patterns if you need to manage transactions manually.
BUT the more servers you enlist in your transaction, the longer the two-phase commit operation will take, especially if you have high latency between the systems. And I doubt MySQL is the best relational database implementation for such complex distributed transactions.
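A rough sketch of the broadcasting bullet above, with two XA DataSources under made-up JNDI names:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.sql.DataSource;

// Container-managed transaction: the writes against every configured XA
// DataSource either all commit or all roll back via two-phase commit.
@Stateless
public class ReplicatingWriterBean {

    @Resource(lookup = "jdbc/siteEuropeXA")   // assumed XA DataSource names
    private DataSource europe;

    @Resource(lookup = "jdbc/siteAsiaXA")
    private DataSource asia;

    @Resource
    private SessionContext context;

    public void broadcastInsert(String key, String value) {
        try {
            for (DataSource ds : new DataSource[] { europe, asia }) {
                try (Connection c = ds.getConnection();
                     PreparedStatement ps = c.prepareStatement(
                             "INSERT INTO replicated_data (k, v) VALUES (?, ?)")) {
                    ps.setString(1, key);
                    ps.setString(2, value);
                    ps.executeUpdate();
                }
            }
        } catch (Exception technicalFailure) {
            // Technical failure: a runtime exception rolls back on all servers.
            throw new RuntimeException(technicalFailure);
        }
        if (value == null || value.isEmpty()) {
            // Example business failure: mark the transaction for rollback.
            context.setRollbackOnly();
        }
    }
}
```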

Usage of a Data Access Layer (DAL) in a SOA design

As we are in the beginning phases of rejuvenating our application into an SOA design, I have some questions that I cannot get a clear answer/picture on.
I have been doing a lot of reading, mostly books from Thomas Erl, and following that design approach for understanding what Task Services, Entity Services and Utility Services are.
What I am stumbling on is the whole DAL concept of how that would look. So this is more of a verification of understanding or a clarification so as to help make the best approach for our platform.
So, background: we currently have several web-based e-commerce applications that have pretty much been built in silos and are pretty much copies of each other. We have supporting applications such as daemons and miscellaneous web services out there. Many of these applications are older than 5 years and are built on older technology (Model 1). All of our applications are centered around conducting auction sales, so during a sale event we will be taking bids from users, determining who is winning, and displaying that information back. Each sale event has a set amount of time that it will be available to the users.
The company is moving towards a SOA solution as a lot of things we end up doing can be shared across not only our group but across other groups.
So what I understand about the DAL is that it is itself a service which sits on top of the data, in this case different databases - MSSQL, Oracle, MSSQL. Each of these databases has different schemas (Oracle), etc.
So the services (Task, Entity, Utility, and the Presentation Tier if needed) will make calls to the DAL to retrieve data. It is the responsibility of the DAL to determine, from the contents of the message, what it needs to do in order to fulfill the request.
So for example, we have a Security Service candidate. This service needs to authenticate with LDAP and to authorize from the data that is stored for that given application.
The thought here is that a Utility Service will be created to wrap up all the operations required to communicate with LDAP, and that the Security Service will call upon the Utility Service and the DAL to fetch the authorization data. The DAL then has the responsibility to go to the correct database/schema to retrieve the information. The information will be in XML format (standard SOA communication).
So, am I on the right track here? Have others done similar things or not? What other things do I need to consider? (We are currently getting statistics on how many bids we take in an hour, on average.)
Should each service have its own DAL - for example, should the Security Service have the DAL as part of the service, or should the DAL be a shared service which all services can use?
In your case, the approach to use for a full SOA-based deployment would be an ESB, an identity provider and a data services solution.
To break it down: the DAL should be implemented using data services. This way, it will be a globally accessible, language-neutral service, and will support re-use and loose coupling. So all your data access logic can be implemented as web service operations in a data service.
For authentication and authorization management, in the SOA world there is a standard called XACML, which is used for fine-grained authorization. So what you will need is an XACML server which authorizes the user according to specific criteria, and which should also be able to authenticate against LDAP.
Then your "Security Service" will be implemented as a service on the ESB, where that service will query the identity provider for authentication/authorization and, according to its response, will call the appropriate operations in the data service with suitable parameters to fetch the data and return it to the user.
The above scenarios can be implemented using WSO2 Data Services Server, WSO2 Identity Server and WSO2 ESB respectively, which are open-source products and can be freely used and found here.
I once worked on (developing) an SOA project that used a "data service". It was some time ago, and I was only involved marginally, but my recollection is that it ended up being too complicated and slow.
In particular, we had no real need for a data service - it would have made more sense to place the same abstractions in a library layer, which would have given better efficiency and no real loss of functionality (for our particular needs). This was exacerbated by the fact that the data tended to be requested in many small "chunks".
I guess it comes down to the trade-offs involved in the implementation. In our case, with a relatively closed system and a single underlying database technology, we could easily have exploited the support for distributed access that the database provided; instead we ended up duplicating this in a slower, more general message bus, which added nothing except complexity. But I can easily imagine different cases where access to data is more "distant".
How you should use SOA in your design depends on its requirements.
In general, you can write coarse-grained services and expose them as web services. In your case you can write some services which call the databases and produce the results; the authorization logic can also be written alongside the service logic.
The other approach is to use an ESB or a BPEL engine to write the integration logic and expose the integrated service as a web service. In this case you can use data services to expose database data in XML format and integrate them. You can use separate services for the different schemas and call the correct service based on the request data. The authorization logic can also be added to the service integration logic.
Security aspects such as authentication, confidentiality and integrity are considered non-functional requirements and can therefore be applied to any service without writing an explicit security service.
The following articles describe a sample integration of services as mentioned in the second approach:
http://wso2.org/library/articles/2011/05/integrate-business-rules-bpel
http://wso2.org/library/articles/2011/06/securing-web-service-integration
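As a rough, vendor-neutral sketch of the data-service idea from the second approach (all names are hypothetical), a DAL operation can be exposed as a plain JAX-WS operation returning a JAXB-bound XML payload:

```java
// AuthorizationData.java - JAXB type marshalled to XML in the SOAP response
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class AuthorizationData {
    public String userId;
    public String[] roles;
}

// AuthorizationDataService.java - hypothetical DAL-style data service; callers
// such as the Security Service ask for authorization data, and the
// implementation decides which underlying database/schema to hit.
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

@WebService(serviceName = "AuthorizationDataService")
public class AuthorizationDataService {

    @WebMethod
    public AuthorizationData fetchAuthorization(
            @WebParam(name = "applicationId") String applicationId,
            @WebParam(name = "userId") String userId) {
        // Routing to the correct database (MSSQL, Oracle, ...) based on
        // applicationId would happen here.
        AuthorizationData data = new AuthorizationData();
        data.userId = userId;
        data.roles = new String[] { "bidder" };   // placeholder result
        return data;
    }
}
```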

Spring remote services with a transaction context

I have the following scenario:
I have an interface server which listens on a queue and receives messages from the "outside world". This server then calls an "internal" business service, which in turn calls other services, and so on.
These services can each reside on a different machine, and can be clustered for that matter.
I need the notion of a transaction to span across these services and machines.
My development stack includes Spring (3.0.5) and JPA 2.0 (Hibernate underneath) in a J2SE environment.
Can I achieve this without an app server, assuming I plug in an external JTA transaction manager (like Atomikos, for example)?
We've chosen to go with Spring for many reasons, the most important ones being the service abstractions, extensive DI and the ability to work without a heavy app server. I know we can use Spring in an app server, but if someone is going to recommend this I'd like to hear why it should be done, assuming I could forfeit Spring and go all app server.
BTW, just to reassure anyone reading this post: yes, we've thought about the problematic issues of distributed transactions, but we still think we will need such a transaction, as this is the business logic of the service and it will need to span machines, since some of the services will be under a lot of pressure.
Thanks in advance,
Ittai
We ended up using JBoss with Spring.
JBoss indeed supplied the distributed transactions that were needed, while Spring contained all the DI and such.
We still kept Spring as we felt its IoC was cleaner and more comfortable.
It is possible we should have used CDI in JBoss, but that was not on our radar.
We use Spring 3 and Atomikos for distributed (XA) transactions on Apache Tomcat and Oracle databases in production, so for us this is a very useful setup. Have a look at the Atomikos Spring integration example:
http://www.atomikos.com/Documentation/SpringIntegration
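For reference, a minimal Java-config sketch of that kind of standalone setup; the Atomikos class names below are the commonly used ones, but check the linked guide for the exact wiring (and the Atomikos XA DataSource wrappers) that your version expects.

```java
import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.jta.JtaTransactionManager;

// Standalone (Tomcat / J2SE) JTA setup: Atomikos supplies the transaction
// manager, and Spring drives it through its JtaTransactionManager abstraction.
@Configuration
public class JtaConfig {

    @Bean(initMethod = "init", destroyMethod = "close")
    public UserTransactionManager atomikosTransactionManager() {
        UserTransactionManager utm = new UserTransactionManager();
        utm.setForceShutdown(false);
        return utm;
    }

    @Bean
    public JtaTransactionManager transactionManager() throws Exception {
        UserTransactionImp userTransaction = new UserTransactionImp();
        userTransaction.setTransactionTimeout(300); // seconds; adjust as needed
        return new JtaTransactionManager(userTransaction, atomikosTransactionManager());
    }
}
```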

Multiple instances of a java web application sharing a resource

I have a web service that takes an input XML message, transforms it, and then forwards it to another web service.
The application is deployed to two WebLogic app servers for performance and resilience reasons.
I would like a single website monitoring page that allows two things:
the ability to stop/start forwarding of messages
the ability to monitor throughput, e.g. the number of messages in the last hour, the number of different senders into the web service, etc.
I was wondering what the best way to implement this was.
My current idea is to have an in-memory database (e.g. Derby or HSQL) replicating data to share the information between the two (or more) instances of my application that are running in different instances of the app server. I imagine I would have to set up some sort of master/slave configuration.
I would love a link to an article that discusses how to solve this problem.
(Note, this is a simple spring application using spring MVC)
thanks,
David.
This sounds like a good match for Java Management Extensions (JMX).
JMX allows you to expose certain operations (e.g. start/stop forwarding of messages).
JMX allows you to monitor certain performance indicators (e.g. a moving average of messages processed).
Spring has good support for exposing beans as JMX MBeans. See here for more information.
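For illustration, a small sketch of what that can look like with Spring's JMX annotations (the bean, object and attribute names are invented):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

// Exposed as an MBean on each instance; a JMX console attached to the
// instance can flip forwarding on/off and read the counters.
@Component
@ManagedResource(objectName = "forwarder:name=MessageForwarder")
public class ForwardingControl {

    private final AtomicBoolean forwardingEnabled = new AtomicBoolean(true);
    private final AtomicLong forwardedLastHour = new AtomicLong();

    @ManagedOperation(description = "Stop forwarding messages on this instance")
    public void stopForwarding() {
        forwardingEnabled.set(false);
    }

    @ManagedOperation(description = "Resume forwarding messages on this instance")
    public void startForwarding() {
        forwardingEnabled.set(true);
    }

    @ManagedAttribute(description = "Messages forwarded in the last hour")
    public long getForwardedLastHour() {
        return forwardedLastHour.get();
    }

    public boolean isForwardingEnabled() {   // checked by the forwarding code
        return forwardingEnabled.get();
    }

    public void recordForwarded() {          // called for each forwarded message
        forwardedLastHour.incrementAndGet(); // rolling-window reset omitted
    }
}
```

You would also need Spring's MBean exporter enabled (for example via `<context:mbean-export/>`) so that annotated beans are actually registered.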
Then you could use an open-source web-based JMX console, such as jManage
Hope this helps.
Sounds like you are looking for a message queue; some MDBs and a configurable design would let you do all of this. Spring has support for JMS queues, if I'm not wrong.
I think you are looking for a message queue. If you need the additional monitoring, using a web service as the endpoint may not suffice: with regard to stopping/starting or forwarding messages, monitoring HTTP requests to a web service is more cumbersome than tracking messages on a queue (even though you can do it).
If you are exposing this service to a third party, then the web service will sit on top of the message queue and delegate to it.
In my experience, RabbitMQ is a fine message queue with a relatively simple learning curve.
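If you do go the queue route, the producing side is small. Here is a hedged sketch using the RabbitMQ Java client (the broker host and queue name are made up):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

// Publishes the transformed XML onto a durable queue instead of calling the
// downstream web service directly; a separate consumer forwards it on, which
// makes stop/start and throughput monitoring a matter of watching the queue.
public class QueuePublisher {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("mq.example.com");   // assumed broker host

        Connection connection = factory.newConnection();
        try {
            Channel channel = connection.createChannel();
            // durable, non-exclusive, non-auto-delete queue
            channel.queueDeclare("outbound-messages", true, false, false, null);
            String xml = "<message>transformed payload</message>";
            channel.basicPublish("", "outbound-messages", null, xml.getBytes("UTF-8"));
        } finally {
            connection.close();
        }
    }
}
```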
