What are the benefits of JCA?

Our application often connects to different kinds of back-ends over web services, MQ, JDBC, proprietary (direct over socket) and other kinds of transport. We already have a number of implementations that let us connect from our application to these back-ends, and while all of these implementations implement the common Java interface, they do not share anything else.
We have realized that there are significant portions of code that are common to all of these particular connector implementations, and we have decided to streamline the development of future connectors through one universal connector. This connector will be capable of formatting messages into the format expected by the back-end and sending them using the available transport mechanism, for example a fixed-length message format over MQ or over a socket.
One of the dilemmas we are facing is choosing the most appropriate technology for this kind of connector. So far, our connectors have been basic Java classes that implement the common Java interface. Since we generally host our applications in some Java EE application server, it seems that the Java Connector Architecture would be the most appropriate technology for this piece of software. However, implementing a JCA-compliant connector seems to be relatively complex. What are the palpable benefits of going with the standard (JCA), and do the benefits justify the additional effort?

Indeed, JCA seems the most appropriate technology for you. Excellent arguments have already been made, namely portability, a standardised interface, connection pooling and transaction support. And don't forget security.
With WebSphere Process Server the adapters can be exposed as an SCA service, which can have a lot of benefits if that's important to you.
Also, some development tools have extensive support for developing and testing JCA connectors.
Another benefit is that experienced Java EE administrators and developers should know the standard, so administration and development should be easy to streamline.
But in the end you will have to find the reasons for implementing JCA in the scope of your project, your future plans for it, or perhaps your company's policies.

Short answer: I see no benefit in selecting JCA over other technologies; I see it as a drawback, since you need a Java EE container.
Long answer:
I've been skeptical about these Java EE standards for some time now. I don't see a compelling technical reason to use a full-featured Java EE server anymore, since there are better open source implementations of every feature offered. I've been bitten several times by implementation incompatibilities when moving to/from "enterprisey" solutions.
The idea of using JCA is surfacing here right now, and I am pushing to try Apache Camel or Spring Integration instead. I am all for open source implementations that you can use everywhere, and there is a lot going on. Check this list of components. Granted, it is maybe smaller than what has already been developed with JCA, but every bit is open source and it's all in one place. Also, I believe the documentation is simpler and more complete. The urge for integration calls for a powerful SPI with plenty of open source, real-life examples, developed in the same fashion and found in the same place.
I hate the negativity, but I don't like full-featured application servers. For instance, I would go for Tomcat and Terracotta any day over other "enterprisey" products, just as I would go with Camel before JCA, until the need for JCA is proven. I don't like the idea of the Java committee telling me how I should develop my own applications, because I don't trust them. I believe it is in my best interest when a piece of software can work just as easily in Java SE/RCP as in a Java EE environment or in a pure servlet container.

I've just developed an inbound resource adapter for a GPS device communicating over a proprietary protocol. It wasn't that much hassle, though I've got the impression that developing an outbound one might require more work. The worst thing about JCA is the lack of documentation; all the books and articles seem to have the same dumb example.
The thing I'm most pleased with is the portability. Once you've written the adapter, you can plug the RAR (resource adapter archive) into any application server to give deployed applications the ability to communicate with the EIS (enterprise information system) supported by your resource adapter. Or you can bundle the RAR into the WAR/EAR.
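For readers who haven't seen one, here is a rough sketch (not the poster's actual adapter) of the central class an inbound resource adapter revolves around, using the standard javax.resource.spi API. The class name and the socket/GPS specifics are made up for illustration, and a real adapter also needs its packaging metadata (ra.xml or, in JCA 1.6, the @Connector annotation) before it can be built into a RAR.

```java
import javax.resource.ResourceException;
import javax.resource.spi.ActivationSpec;
import javax.resource.spi.BootstrapContext;
import javax.resource.spi.ResourceAdapter;
import javax.resource.spi.ResourceAdapterInternalException;
import javax.resource.spi.endpoint.MessageEndpointFactory;
import javax.transaction.xa.XAResource;

// Hypothetical inbound adapter: listens on a socket and pushes incoming
// messages to message-driven beans registered through an ActivationSpec.
public class SocketResourceAdapter implements ResourceAdapter {

    private BootstrapContext bootstrapContext;

    @Override
    public void start(BootstrapContext ctx) throws ResourceAdapterInternalException {
        // Called once when the RAR is deployed; open listeners here and
        // schedule work via ctx.getWorkManager() rather than raw threads.
        this.bootstrapContext = ctx;
    }

    @Override
    public void stop() {
        // Release the sockets/threads opened in start().
    }

    @Override
    public void endpointActivation(MessageEndpointFactory factory,
                                   ActivationSpec spec) throws ResourceException {
        // An MDB interested in our messages has been deployed; remember the
        // factory so incoming data can be delivered via factory.createEndpoint(...).
    }

    @Override
    public void endpointDeactivation(MessageEndpointFactory factory,
                                     ActivationSpec spec) {
        // The MDB is being undeployed; stop delivering messages to it.
    }

    @Override
    public XAResource[] getXAResources(ActivationSpec[] specs) throws ResourceException {
        return null; // no XA recovery support in this sketch
    }
}
```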

The benefits are primarily for vendors who wish to sell connectors to proprietary back-end systems for use with any app server, and for customers who want to be able to drop in a connector without worrying about whether it only works on WebLogic and not WebSphere, etc. Indeed, this is the goal of Java EE in general.
Note that JBoss has decided to route several things through JCA; for example, JDBC connections go via JCA.
Your future client code will have a standardised interface, some pooling and transaction support, etc., but it's important to keep sight of the bigger picture, namely that the benefits are not targeted at you and your one project specifically, but at a software ecosystem consisting of many app servers, many back-end systems, many connectors and so on.

Sounds like a good use for a JBI container with binding components. Discussion of JCA vs JBI.

Related

What are tips and tricks for an integration with RPG on AS/400?

I would like to ask the community for some information to help with an architectural choice. I work on a telecommunication suite that is based on a proprietary, Java EE 7-oriented development platform (from now on, DP).
In the requirements analysis phase, a customer required strong integration of their AS-IS services into the new products based on our DP.
This integration is not a problem in itself; this kind of work is exactly what we do.
The customer's AS-IS services are implemented in the IBM RPG programming language and are deployed on an IBM System i (AS/400). Actually, they aren't services but a plethora of programs interfaced with an instance of an IBM DB2 database.
The CRUD operations on the database aren't a problem; we can use an ORM. Now we're studying a way to interact with the RPG programs.
After a preliminary analysis we found several approaches, two of which are very interesting:
JTOpen, it "is a library of Java classes supporting the
client/server and internet programming models to a system running
IBM i (or i5/OS or OS/400). The classes can be used by Java applets,
servlets, and applications to easily access IBM i data and
resources" (by http://jt400.sourceforge.net/). The idea is to
develop a module to invoke RPG commands via REST (API).
Use WebSphere on AS/400 to wrap RPG commands via Web Service
(directly distributed by IBM) here a tutorial:
http://www-01.ibm.com/support/docview.wss?uid=swg27009770&aid=1
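To make option 1 more concrete, here is a rough sketch of what invoking an RPG program through JTOpen's ProgramCall class might look like. The host name, the library/program path (MYLIB/MYPGM) and the 10-character parameter layout are placeholders that would have to match the real program's parameter list.

```java
import com.ibm.as400.access.AS400;
import com.ibm.as400.access.AS400Message;
import com.ibm.as400.access.AS400Text;
import com.ibm.as400.access.ProgramCall;
import com.ibm.as400.access.ProgramParameter;

public class RpgCallExample {
    public static void main(String[] args) throws Exception {
        // Connection to the IBM i system (host/user/password are placeholders).
        AS400 system = new AS400("as400.example.com", "USER", "PASSWORD");

        // One 10-character input parameter and one 10-character output buffer;
        // the real layout must mirror the RPG program's parameter list.
        AS400Text text10 = new AS400Text(10, system);
        ProgramParameter[] parms = new ProgramParameter[] {
            new ProgramParameter(text10.toBytes("HELLO")), // input
            new ProgramParameter(10)                       // output buffer
        };

        ProgramCall call = new ProgramCall(system, "/QSYS.LIB/MYLIB.LIB/MYPGM.PGM", parms);
        if (call.run()) {
            String result = (String) text10.toObject(parms[1].getOutputData());
            System.out.println("RPG program returned: " + result.trim());
        } else {
            // The program failed; dump the job messages for diagnosis.
            for (AS400Message msg : call.getMessageList()) {
                System.err.println(msg.getID() + ": " + msg.getText());
            }
        }
        system.disconnectAllServices();
    }
}
```

A REST module would simply wrap a call like this behind an HTTP endpoint, which is where the performance question (connection reuse, data conversion cost) comes in.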
We need to understand which solution is better. For example, it's not easy to estimate the level of performance degradation for either solution.
Can you give us some advice?
Thank you.
As is usually the case in IT, it depends.
Option 2 will be quicker and easier, but there are some limitations in what it can support, though IBM has been steadily removing those limits.
Note that the document you linked to is considerably out of date. A better reference is the "Modernizing IBM i Applications" Redbook; see also the Integrated Web Services for IBM i web page.
The Redbook linked to earlier actually covers both options you mention in Chapter 5 - Interfacing.

Advantages of SCA over Spring?

I have experience developing Java web applications with Spring, but not so much with the world of SOA. I was reading about SCA (SCA4J, http://www.service-conduit.org/user-guide.pdf), and a lot of it seems very similar to Spring.
I was trying to learn in which situations SCA would be useful, but I still don't understand what features or benefits SCA offers over using Spring standalone.
I found this old blog post (http://rajith.2rlabs.com/2007/08/05/sca-vs-spring-a-reply-to-dans-post/), but nothing really stood out to me from the SOA jargon.
I'd appreciate it if anyone could give an explanation geared more towards a Spring developer (who is very green in the world of SOA terminology/methodology).
Thanks
I'm not the most knowledgeable about Spring, but am pretty familiar with SCA from having worked with it in IBM's WebSphere Integration Developer IDE and the environments it deploys to: WebSphere Enterprise Service Bus and WebSphere Process Server.
It really all has to do with abstraction and with letting developers focus on what is most important: business logic. We are all familiar with the concept of object-oriented programming and how that abstraction better represents the "real world". Then along came web services and the service-oriented architecture approach. Web services further abstract our logic by making it less dependent on the language behind it. Now C++ or .NET or Java or even RPG or COBOL or whatever could be behind our web service. We can get languages and systems to talk to each other in a way that doesn't depend on CORBA, shared libraries and whatnot.
SCA (Service Component Architecture) attempts to take SOA to the next level. It attempts to abstract the protocol and address used to talk to another system or service. Here's why: when working with web services, you as a developer still need to deal with the protocol and write or hook in a LOT of boilerplate code. You have to know whether you are on HTTP or HTTPS. You have to know whether you are using (in the Java world) JAX-RPC, JAX-WS 2.0, JAX-WS 2.1, JAX-WS 2.2 or even JAX-RS (REST-based). You need to know whether you are working with JSON, XML, or SOAP, and if SOAP, is it 1.0, 1.1, or 1.2? And sometimes you even have to know how the vendor of your application server implements certain things (you shouldn't, but it can be the case). And then what happens if you want your web service to talk to another service, but that second service happens to be messaging based? Does that mean JMS? MQ? JMS over MQ? Something else? And what about plain HTTP POST and GET?
This is where SCA comes in. SCA attempts to abstract the endpoints of your services and hide the protocol implementation from you, the developer. When you need a service, you just look it up via the SCA APIs and then invoke it (I think the method is execute? At least it is in IBM's extension of SCA). Now you do not have to know that the service you are communicating with is JAX-WS 2.1 or REST or even MQ. You don't have to know whether you are working with SOAP/HTTP or JSON/XML or SOAP/JMS or whatever. SCA hides all of this from you. It allows you to connect services with differing implementations to each other so they can all talk to one another via a common "service interface".
As you can imagine, this is another layer of abstraction and technology on top of existing abstracted technologies, but having seen it myself, I believe it is worth looking into. I know IBM and Apache (and I think others that just don't come to mind at the moment) worked on coming up with the SCA standard. (And actually, IBM's version of SCA is now built on the open standard that Apache presented; hopefully other vendors that support SCA do the same.)
I think it is worth taking the time to look at. It can help you focus not so much on the integration of services based on their protocols, but rather on the business logic of the services, which is really the value they bring to the table.
SCA is being standardized through OASIS (the Assembly Specification), so you can choose from different implementations (e.g. Apache Tuscany or Fabric3).
SCA defines applications in terms of the following basic building blocks:
interface: defines available operations
component: describes an implementation artifact in terms of which "services" it offers, which "references" it requires, and which configurable "properties" it exposes
binding: declares the communication protocol used by a service or reference
policy: captures non-functional requirements for services, references, or implementations
To build SOA applications, concrete "types" of these entities are assembled into composites. For example:
interface: WSDL port type, Java interface
component implementation: Java class, BPEL process, Python, Spring
binding: JMS, Web Service, RMI/IIOP
policy: transaction, security
In addition, SCA defines unified client APIs to invoke components both synchronously and asynchronously (including one-way). For Java this includes annotation-based reference injection.
Combining these capabilities enables you to easily create distributed applications from heterogeneous technologies and evolve them by adding or swapping binding, implementation, interface, or policy technologies.
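As a rough illustration of the annotation-based wiring mentioned above, here is a sketch of an SCA component implementation. The PortfolioService/StockQuote names are invented, and the exact annotation package depends on the runtime (org.oasisopen.sca.annotation in OASIS-based implementations such as Apache Tuscany 2.x, org.osoa.sca.annotations in older OSOA-based ones).

```java
// Adjust imports to your SCA runtime; shown here for an OASIS-based one.
import org.oasisopen.sca.annotation.Property;
import org.oasisopen.sca.annotation.Reference;

// Plain Java interfaces used as the SCA "interface" building block.
interface StockQuote { double getPrice(String symbol, String currency); }
interface PortfolioService { double value(String symbol, int shares); }

// Hypothetical component implementation: the business logic only sees the
// StockQuote interface; whether the reference is wired over a web service,
// JMS, or a local call is decided by the binding in the composite file.
public class PortfolioServiceImpl implements PortfolioService {

    @Reference
    protected StockQuote stockQuote;   // injected by the SCA runtime

    @Property
    protected String currency = "USD"; // configurable from the composite

    public double value(String symbol, int shares) {
        return shares * stockQuote.getPrice(symbol, currency);
    }
}
```

The point is that the implementation class depends only on plain Java interfaces; swapping the binding (say from SOAP/HTTP to JMS) is a change in the composite descriptor, not in the code.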
It is worth looking at Spring Integration (http://www.springsource.org/spring-integration) as opposed to basic Spring when comparing to SCA, since Spring Integration offers a very nice framework for transparently wiring together remote components.

What good middleware solutions are there for clustered/distributed services?

I'm looking for existing middleware solutions that address aspects of service clustering/distribution for load-balancing and availability. I'm looking into building up my own infrastructure for this based on a messaging system (more specifically, JMS). However, if possible I'd rather use something which already exists.
The system should have the ability to run various services on a number of computers. Based on service descriptions, the system should be able to figure out how many instances of a specific service to start in the cluster. Based on pending service requests, it should dynamically adjust the number of services running. Monitoring services and deploying new versions of services should also be handled by the system.
By services, I mean "independent units of functionality" that has a predefined interface. Clients would just know the interface and the middleware should take care of making sure that the service is running on enough nodes in order to answer incoming requests made through the interface.
It should be something that integrates well with Java. Some of my services are implemented as native code but I have a good solution for wrapping those into a Java based service.
I've looked at some middleware/ESB solutions like ICE and Mule, but I didn't find that they address the dynamic service provisioning and load-balancing aspects I described above very well (if at all). So I'm wondering what else might be out there that somebody here would recommend taking a look at.
I would recommend looking deeper into OSGi:
it has a fairly dynamic module system that allows you to move your services around the network
there are existing open-source offerings that you can build upon: Eclipse Equinox and Virgo, Apache Felix and Karaf, Knopflerfish, Concierge, GlassFish 3, etc. (comparison)
as for the remoting side, OSGi 4.2 has a Remote Services specification for which there are several implementations out there. Most notably, ECF seems to be one implementation that could satisfy your needs if you want to use JMS (article on DZone); a minimal registration sketch follows this list.
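Here is a minimal sketch of what exporting a service through the Remote Services chapter of the OSGi Compendium looks like. The service interface is invented, and the value of the config-type property is provider-specific (the one shown is only a placeholder for whatever your ECF/JMS provider expects).

```java
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Hypothetical service interface to be made available across the cluster.
interface GreetingService { String greet(String name); }

public class Activator implements BundleActivator {

    public void start(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        // Standard Remote Services properties: exporting the interface tells
        // the distribution provider (ECF, CXF DOSGi, ...) to make the service
        // reachable from other nodes.
        props.put("service.exported.interfaces", "*");
        // Provider-specific config type; placeholder value.
        props.put("service.exported.configs", "ecf.generic.server");

        context.registerService(GreetingService.class.getName(),
                new GreetingService() {
                    public String greet(String name) { return "Hello " + name; }
                }, props);
    }

    public void stop(BundleContext context) {
        // Services registered via this context are unregistered automatically
        // when the bundle stops.
    }
}
```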
As a final note, you could take a look at Paremus Service Fabric - from the description it sounds quite similar to the beast that you are trying to build (except that it uses JINI instead of JMS). If nothing else, it could be a source for inspiration.
They also used to have an open-source version called Newton, but that was discontinued due to lack of interest. It was mentioned that it lived on under the name Service Fabric Community Edition, but currently I could not find any reference to it on their website (most probably it was just cancelled).
Finally, here is one more project for inspiration: Bundle-Bee - transparent, grid-distributed OSGi computing. Most probably there are more similar projects out there.

Why do Java apps need an application server while .NET needs just the IIS web server?

Why is there so much confusion in the Java world with various servers like Apache, Tomcat, JBoss, Jetty, etc., while in the .NET world it is just IIS that does that job? I would like to understand the need for and use of these servers; I am not trying to start a Java vs. .NET debate.
There are several reasons.
A Java EE app server is a transaction monitor for distributed components. It provides a number of abstractions (e.g., naming, pooling, component lifecycle, persistence, messaging, etc.) to help accomplish this.
Lots of these services are part of the Windows operating system. Java EE needs the abstraction because it's independent of the operating system.
It should also be said that the full Java EE specification isn't necessary for developing web applications. JDBC, the part of Java that deals with relational databases, is part of Java SE proper. Java EE adds servlets, which are HTTP listeners, and JavaServer Pages, a markup language for generating servlets. You can develop fully functional web applications using just these technologies and Java SE. Tomcat and Jetty are two servlet/JSP engines that can stand in for full Java EE app servers.
If you take note of the fact that .NET has HTTP listeners built into the System.Net namespace, you realize that it's as if .NET took a page from Java and folded the javax.servlet functionality into the framework.
If you add Spring and a messaging functionality like ActiveMQ or RabbitMQ, you can write complete applications without having to resort to WebLogic, WebSphere, JBoss, or Glassfish. You don't need EJBs or the full Java EE spec.
UPDATE:
Spring Boot offers the possibility of developing and running full-featured Java applications as an executable JAR file. There's no need for any Java EE app server, just JDK 8 or higher.
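As a minimal sketch (assuming the spring-boot-starter-web dependency is on the classpath), the whole application can be a single class, packaged as an executable JAR and started with java -jar; the servlet container is embedded rather than installed separately.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A complete web application: the embedded servlet container (Tomcat by
// default) is pulled in by spring-boot-starter-web, and the build plugin
// packages everything into one runnable JAR.
@SpringBootApplication
@RestController
public class Application {

    @GetMapping("/hello")
    public String hello() {
        return "Hello from an executable JAR";
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```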
This is because Sun and Microsoft had very different goals with their software, and ways to reach that goal.
The Sun mantra for Java has been, right from the beginning, "Write once, run everywhere", and as a result much effort has been put into creating APIs that specify what the environment should look like so that a minimal piece of code can do its job.
The API for "process a web request and return a web response" was named servlets, and it has been extremely successful because it filled a void and was well specified. All mainstream Java-based web servers I know of can run servlets; an early implementation of a complete servlet-capable web server was only about 1500 lines of code. Later this was expanded to include JSPs, to provide HTML with server-side code (like PHP).
For any solution to be truly scalable, including web solutions, eventually the load becomes so high that one computer is not powerful enough to run it on its own anymore. A scalable solution MUST be able to spread over multiple computers that are aware of each other, and that single requirement brings a LOT of other things to the table:
Code must be able to invoke code running on a different computer (EJB's).
Data must be available to all computers in a consistent way (database).
Access to said database must be efficient (database connection pooling).
... and much much more
Sun then created APIs for all of the functions they found necessary for this to work, named the result "Java Enterprise Edition" (in those days the word "Enterprise" was used for a lot of things), and created a system implementing all these APIs which people could buy and use.
The difference between Microsoft and Sun now comes into play. Microsoft would just make IIS public and say "use these APIs" in your clients, without actually wanting anybody else to create another server providing those APIs, because they want to sell Windows to run it!
Sun wanted people to use the language instead, so they made it possible for ANYONE to implement the Java EE specification, but implementers had to pass a rigorous test suite from Sun (and pay) to be allowed to use the Java EE brand. This has resulted in a large number of Java EE servers being available, where you can usually reuse the core business logic but have to configure each Java EE server to provide the resources the application needs.
See http://en.wikipedia.org/wiki/Java_Platform,_Enterprise_Edition#Certified_application_servers for the state of servers today. Both commercial and open source are available based on your needs - pick the one that suits you best.
So, the reason is that Java EE is a set of well-defined APIs that anyone can implement, and many vendors have.
First off, you can run .NET code on Apache using mod_mono, so it is not limited to IIS. There are also several other web servers (Cassini and Mono's XSP come to mind) that will run ASP.NET as well.
In order to run a dynamic web application you need both a web server and an application server. Sometimes these integrate so well they appear to be one and the same, sometimes not.
As for Java: it has always supported more platforms than .NET and has been more open, and therefore it got integrated with more web servers (on the Linux stack).
As both .NET and IIS are technologies that came from Microsoft, ASP.NET and its application server aspects (aspnet_isapi.dll) were bundled with IIS, and the various .NET installers integrate with IIS. Of course, Microsoft only implemented it on their OS and for their web server.
Apache is very analogous to IIS, and doesn't have much to do with Java.
Application Servers in Java provide additional services that .NET provides in various ways, with different products or from the Windows operating system.
Apache is typically used in Java deployments as a proxy to an application server behind it, and potentially serves static content, or handles SSL, and similar concerns. It is entirely optional, although there are good reasons to use it.
Tomcat and Jetty are basically Java web servers which provide a defined framework (servlets, among other things) for creating dynamic web sites with Java code. They are often components of a larger application server, or they can be deployed alone.
JBoss is an example of an application server (GlassFish and WebLogic are two other very common ones), which provides the full J2EE specification. The idea behind the J2EE specification is to define how to build an application server so that an application can be switched between application servers from different vendors that comply with the spec. The specification is about how to interact with defined services that are useful for server-side programs.
Because Java EE is a specification, not a product itself. Remember that Java is a lot more open than .NET (In the specification sense).
Each application server has different features, different performance, different target users/enterprises, different price tags, runs on different platforms and requires different hardware. Differentiation is why all those application servers exist; one size does not fit all.
One reason is that writing a servlet is as easy as implementing the javax.servlet.Servlet interface in a concrete class. Servlet containers, then, only need to support a fairly simple API in order to call themselves web servers. This makes setting out to develop a servlet container extremely simple because of this limited contract of functionality.
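To illustrate how small that contract is: in practice one usually extends javax.servlet.http.HttpServlet, which implements the Servlet interface, and overrides a single method. A minimal sketch, deployable to any servlet container (Tomcat, Jetty, or a full Java EE server) once mapped to a URL in web.xml or via @WebServlet:

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The container handles sockets, threads, and HTTP parsing; the servlet
// only turns a request object into a response object.
public class HelloServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from a servlet");
    }
}
```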
The choice of tools is one of the advantages and disadvantages of Java; look at the available Java web development frameworks and you could evaluate them endlessly just to decide. In .NET it's pretty much MVC. With servers it's relatively simple: most go with Tomcat if they need a web server and JBoss if they need a free application server. The reasons for this have already been given: J2EE is a specification.

Is there a want for a Java 7 Cloud Server Framework that is not Spring/Tomcat based?

Is there a demand out there for a small, lightweight, Java 7 based open source project that is geared toward making Cloud services more elegant? I have written several servers in my lifetime, and was curious if there was a need for this.
My thoughts were to keep it simple and lightweight, and to use the Java 7 NIO.2 functionality for network communications. I was also thinking of using either a broadcast address for local cloud-based communication between servers in a rack solution (MBONE) or a serialization-based communications protocol.
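For illustration, here is a minimal sketch of the kind of NIO.2-based listener being described, using the Java 7 asynchronous channel API; the port number and the echo behaviour are placeholders, not part of any actual design.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

// Minimal asynchronous echo-style listener: the completion-handler style
// avoids dedicating a thread to every open connection.
public class AsyncServer {

    public static void main(String[] args) throws Exception {
        final AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(9090));

        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override
            public void completed(final AsynchronousSocketChannel client, Void att) {
                server.accept(null, this);              // keep accepting new clients
                ByteBuffer buf = ByteBuffer.allocate(1024);
                client.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                    @Override
                    public void completed(Integer bytesRead, ByteBuffer b) {
                        b.flip();
                        client.write(b);                // echo the bytes back
                    }
                    @Override
                    public void failed(Throwable exc, ByteBuffer b) { }
                });
            }
            @Override
            public void failed(Throwable exc, Void att) { }
        });

        Thread.currentThread().join();                  // keep the JVM alive
    }
}
```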
I don't want to use Spring or Tomcat, as they are heavyweight and written on older Java technology. Furthermore, I don't want to use another Apache project because it would be too dependent on Apache technologies. Keywords here are "small", "lightweight", "portable", and "efficient".
Maybe this will even have the potential of being installed and used in mobile devices as background servers, or even mobile cloud networks.
From my own point of view, no.
If I want a lightweight servlet server, I use Jetty.
If I want a more powerful, versatile Web app server, I use Tomcat.
If I want a full J2EE server, I use Glassfish.
All of these are of course highly proven. Memory is cheap enough these days that I'm not very worried about a little bloat. That comes standard with Java apps :)
Also, I'd consider it crazy to deploy server technology on mobile devices. Maybe other people have bright new ideas, but I think mobile devices should communicate with central servers.
I would probably not want to use a Java 7 server not based on J2EE, at least the servlet part, unless someone comes up with a really compelling alternative. On the other hand, I wonder how small you could make a compliant server.
Finally, as far as I know, Tomcat already (optionally) supports nio: http://tomcat.apache.org/tomcat-6.0-doc/aio.html .
Strictly a personal opinion from an old curmudgeon.
