I have experience developing Java web applications with Spring, but not so much with the world of SOA. I was reading about SCA (SCA4J - http://www.service-conduit.org/user-guide.pdf) and a lot of it seems very similar to Spring.
I was trying to learn in what situations SCA would be useful, but I still don't understand what features or benefits SCA offers over using Spring standalone.
I found this old blog post - http://rajith.2rlabs.com/2007/08/05/sca-vs-spring-a-reply-to-dans-post/ - but nothing really stood out to me from the SOA jargon.
I'd appreciate it if anyone could give an explanation geared more towards a Spring developer (who is very green in the world of SOA terminology and methodology).
Thanks
I'm not the most knowledgeable about Spring, but am pretty familiar with SCA from having worked with it in IBM's WebSphere Integration Developer IDE and the environments it deploys to: WebSphere Enterprise Service Bus and WebSphere Process Server.
It really all has to do with abstraction and the goal of letting developers focus on what is most important: business logic. We are all familiar with the concept of object-oriented programming and how that abstraction better represents the "real world". Then along came web services and the service-oriented architecture approach. Web services further abstract our logic by making it less dependent on the language behind it. Now C++ or .NET or Java or even RPG or COBOL or whatever could be behind our web service. We can get languages and systems to talk to each other in a way that doesn't depend on CORBA and libraries and whatnot.
SCA (Service Component Architecture) attempts to take SOA to the next level. It attempts to abstract the protocol and address used to talk to another system or service. Here's why: when working with web services, you as a developer still need to deal with the protocol and write or hook in a LOT of boilerplate code. You have to know whether you are on http or https. You have to know whether you are (in the Java world) using JAX-RPC, JAX-WS 2.0, JAX-WS 2.1, JAX-WS 2.2, or even JAX-RS (REST based). You need to know whether you are working with JSON, XML, or SOAP, and if SOAP, is it 1.1 or 1.2? And sometimes you even have to know how the vendor of your application server implements certain things (you shouldn't, but it can be the case). And then what happens if you want your web service to talk to another service, but that second service happens to be messaging based? Does that mean JMS? MQ? JMS over MQ? Something else? And what about plain HTTP POST and GET?
This is where SCA comes in. SCA attempts to abstract the end points of your services and hide the protocol implementation from you, the developer. When you need a service you just look it up via the SCA APIs and then invoke it (I think the method is execute? At least it is in IBM's extension of SCA). But anyway... now you do not have to know that the service you are communicating with is JAX-WS 2.1 or REST or even MQ. You don't have to know whether you are working with SOAP/HTTP or JSON/XML or SOAP/JMS or whatever. SCA hides all of this from you. It allows you to connect services with differing implementations so that they can all talk to one another via a common "service interface".
As you can imagine, this is another layer of abstraction and technology on top of existing abstracted technologies. But having seen it myself, I believe it is worth looking into. I know IBM and Apache (and I think others that just don't come to mind at the moment) worked on coming up with the SCA standard. (And actually IBM's version of SCA is now built on the open standard that Apache presented. Hopefully other vendors that support SCA do the same.)
I think it is worth taking the time to look at. It can help you focus not so much on the integration of services based on their protocols, but rather on the business logic of the services, which is really the value they bring to the table.
SCA is being standardized through OASIS (the Assembly Specification), so you can choose from different implementations (e.g. Apache Tuscany or Fabric3).
SCA defines applications in terms of the following basic building blocks:
interface: defines available operations
component: describes an implementation artifact in terms of which "services" it offers, which "references" it requires, and which configurable "properties" it exposes
binding: declares the communication protocol used by a service or reference
policy: captures non-functional requirements for services, references, or implementations
To build SOA applications, concrete "types" of these entities are assembled into composites. For example:
interface: WSDL port type, Java interface
component implementation: Java class, BPEL process, Python, Spring
binding: JMS, Web Service, RMI/IIOP
policy: transaction, security
In addition, SCA defines unified client APIs to invoke components both synchronously and asynchronously (including one-way). For Java this includes annotation-based reference injection.
Combining these capabilities enables you to easily create distributed applications from heterogeneous technologies and evolve them by adding or swapping binding, implementation, interface, or policy technologies.
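To make the "annotation-based reference injection" concrete, here is a minimal sketch of a Java component implementation. The interfaces and names are made up for illustration, and the annotation package shown is the OASIS one; older runtimes such as SCA4J or Tuscany 1.x use the equivalent org.osoa.sca.annotations package instead.

    import org.oasisopen.sca.annotation.Reference;
    import org.oasisopen.sca.annotation.Service;

    // Hypothetical business interfaces; in the composite file either one could be
    // bound to SOAP, JMS, or a local wire without this code changing.
    interface LoanService {
        boolean approve(String customerId, double amount);
    }

    interface CreditCheck {
        int score(String customerId);
    }

    // The component only declares WHAT it offers (@Service) and WHAT it needs
    // (@Reference); HOW the reference is reached is decided by the binding in
    // the .composite file, not here.
    @Service(LoanService.class)
    public class LoanServiceImpl implements LoanService {

        @Reference
        protected CreditCheck creditCheck;

        public boolean approve(String customerId, double amount) {
            return creditCheck.score(customerId) > 600 && amount < 100000;
        }
    }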
It is worth looking at Spring Integration (http://www.springsource.org/spring-integration) as opposed to basic Spring when comparing to SCA, since Spring Integration offers a very nice framework for transparently wiring together remote components.
Related
I have a Java application and a .NET application residing on two different machines, and I need to design a communication layer between these two applications. Any input or ideas would be really helpful. The nature of the interaction between the two applications is described below.
The Java application sends large amounts of data to the .NET application
Data latency should be kept to a minimum
The .NET application should also be able to request some data (synchronously/asynchronously)
The easiest way for .NET and Java to talk is using web services - we have done this at my company with much success (using Apache CXF on the Java side and standard code on the .NET side).
But if latency and size are the main requirements, you should use sockets - both platforms offer pretty extensive socket frameworks, and that would give you the best performance possible.
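If raw sockets do turn out to be the right trade-off, the Java side is not much code. A minimal sketch, where the port number and the length-prefixed framing are just illustrative assumptions that would have to be agreed with the .NET side (note also that DataInputStream is big-endian, which the .NET reader must match):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class DataServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9090)) {   // port is an arbitrary choice
                while (true) {
                    try (Socket client = server.accept();
                         DataInputStream in = new DataInputStream(client.getInputStream());
                         DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
                        // Simple length-prefixed framing: 4-byte length, then payload bytes.
                        int length = in.readInt();
                        byte[] payload = new byte[length];
                        in.readFully(payload);
                        // ... hand the payload off to the application here ...
                        out.writeInt(2);
                        out.write("OK".getBytes("UTF-8"));
                    }
                }
            }
        }
    }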
I think this can be done by setting up an XML web services layer on the Java side. You can use RESTEasy for RESTful web services. Just my 2 cents.
Another alternative is some form of MOM (Message Oriented Middleware). There are a lot of implementations, but one to look at first might be ActiveMQ as it has both Java and C# bindings (among others).
I'm not saying this is better than using a web-service, it entirely depends on what your requirements are.
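For a rough idea of the MOM option, this is roughly what the Java producer side could look like with ActiveMQ's JMS client; the queue name and broker URL are placeholders, and the .NET side would use ActiveMQ's NMS client against the same queue.

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class DataPublisher {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616"); // default broker URL
            Connection connection = factory.createConnection();
            connection.start();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer =
                        session.createProducer(session.createQueue("data.from.java"));
                TextMessage message = session.createTextMessage("<payload>...</payload>");
                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }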
We have had good experiences with providing web services with JAX-WS (part of the standard runtime in Java 6). It explicitly lists .NET compatibility as a goal and is well supported in IDEs.
The Endpoint.publish() mechanism allows for small, simple deployments.
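For illustration, a minimal endpoint published this way might look like the following (the service name and operation are made up); a .NET client can then consume the WSDL generated at the ?wsdl address.

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService
    public class DataService {

        public String fetchReport(String id) {
            return "report for " + id;   // placeholder business logic
        }

        public static void main(String[] args) {
            // Publishes the service using the JAX-WS runtime built into Java 6+;
            // the WSDL becomes available at http://localhost:8080/data?wsdl
            Endpoint.publish("http://localhost:8080/data", new DataService());
        }
    }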
You can use web services. JAX-WS is the API in Java that lets you do this. As the implementation of this API I recommend Metro (http://metro.java.net/); it already comes with the JDK and has great integration with NetBeans.
As someone already mentioned, you can use a socket and create a communication channel on top of it, but this has some problems, starting with security. Don't use this in real-life applications.
If you need help with this subject you can start reading this:
Getting started with JAX-WS
It really depends on your requirements. The simple way is generally Web services. However, if you want higher performance, or more fine-grained access to the API on the other platform, you might want to consider JNBridgePro (www.jnbridge.com).
Disclosure: I work for JNBridge.
I'm developing a web application with multiple frameworks (Spring, Hibernate, Spring Security, ZK for the GUI), and using Tomcat as the app server. I must say I have absolutely no experience with Java web service technologies. The thing is, I will almost certainly have to expose a number of services for some external applications in the near future, and I was wondering what would be the way to go (considering the frameworks I'm using)...
I saw and read various tutorials and some questions (link) regarding Axis, Axis2, and JAX-WS... The thing that confuses me a little is that I don't know the common practice (if any) for integrating services into an existing web application (mainly in terms of project organization). As I see it now, the services I need to implement will rely partially on the existing source code, so I don't know whether I should use a completely separate project or put them inside my existing web app folder (which I tried with Axis2, but I don't know if it's good practice).
Thanks.
How to organize the projects?
In general I agree with #ericacm, but there is one thing you should keep in mind... You said you're going to develop a number of services in the near future. You may come to a point at which you want to host the services on a separate server, e.g. for performance, availability, or maintainability reasons. This may influence your decision about separating the projects. Furthermore, separation "enforces" loose coupling, but it introduces other challenges, like session sharing across multiple WARs. It's a case-by-case decision.
If I were in your situation I'd first ask myself whether the service(s) logically belongs to the web application or not.
Implementation
When it comes to WS-* implementations you have to make two decisions:
Decide on an API; today, I can't see any reason not to go with JAX-WS together with JAXB, since they work well and are standardized.
Decide on a framework; I have experience using Axis2 as well as Metro (keep in mind that Java SE 1.6+ provides basic JAX-WS support). Both work well. It's fairly easy to change frameworks if you stick to the JAX-WS APIs.
I have good experience with Spring-WS 2+ and manual Castor mapping. It is an easy but powerful combination.
Spring-WS 2:
provides contract-first development (especially good for a web app with a number of services)
provides WS annotations
supports XML mapping (Castor, JAXB, etc.)
Castor:
mapping based on XML configuration
allows mapping multiple messages (requests/responses) to one Java object (based on XML configuration)
If you are using a Java EE 6 server, also consider JAXB for manual mapping (a short sketch follows this list):
mapping based on annotations
should be faster than Castor
allows mapping multiple messages (requests/responses) to one Java object (when you use Java inheritance)
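A minimal JAXB sketch, assuming a hypothetical OrderRequest message (the field names are made up):

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlElement;
    import javax.xml.bind.annotation.XmlRootElement;

    @XmlRootElement(name = "orderRequest")
    @XmlAccessorType(XmlAccessType.FIELD)
    public class OrderRequest {

        @XmlElement
        private String customerId;

        @XmlElement
        private int quantity;

        // JAXB requires a no-arg constructor; getters/setters omitted for brevity.

        public static void main(String[] args) throws Exception {
            JAXBContext context = JAXBContext.newInstance(OrderRequest.class);

            // Marshal an object to XML...
            OrderRequest request = new OrderRequest();
            StringWriter xml = new StringWriter();
            context.createMarshaller().marshal(request, xml);

            // ...and unmarshal XML back into an object.
            OrderRequest parsed = (OrderRequest) context.createUnmarshaller()
                    .unmarshal(new StringReader(xml.toString()));
        }
    }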
You can go ahead and put them into the same project. Each web service will be an additional interface and implementation class along with some configuration.
Since you are using Spring, CXF is a good choice for JAX-WS, as it integrates well with Spring. See this page as a starter.
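For a rough idea, CXF can also expose a bean programmatically via its JaxWsServerFactoryBean; a sketch, where the OrderService interface, its implementation, and the address are all made-up placeholders (in a Spring setup you would typically wire this through CXF's Spring namespace instead):

    import javax.jws.WebService;
    import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

    // Hypothetical contract standing in for an existing service-layer interface.
    @WebService
    interface OrderService {
        String status(String orderId);
    }

    class OrderServiceImpl implements OrderService {
        public String status(String orderId) {
            return "OPEN";   // placeholder business logic
        }
    }

    public class PublishOrderService {
        public static void main(String[] args) {
            JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
            factory.setServiceClass(OrderService.class);              // the contract
            factory.setServiceBean(new OrderServiceImpl());           // in practice a Spring-managed bean
            factory.setAddress("http://localhost:9000/orderService"); // WSDL served at ...?wsdl
            factory.create();                                         // starts the embedded endpoint
        }
    }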
Spring-WS is a complex framework for simple web services. If you want to understand web services completely and know their nuts and bolts, learn Spring-WS. It is extremely flexible and provides a lot of options.
Otherwise, if you want a simpler alternative, use JAX-WS. Spring supports JAX-WS annotations. Refer to section 17.5.7, "Exporting web services using the JAX-WS RI's Spring support".
http://static.springsource.org/spring/docs/2.5.x/reference/remoting.html
Is it possible to create a WS server and WS client manually (without generators) with JAX-WS? Especially if you are developing a big application, you want to reuse objects, but generators produce a lot of classes that can be 99% the same (for example, if your app is a WS client and you have to connect to a badly designed external WS server). Is there a tutorial on how to create web services manually?
There are a lot of reasons why I don't like generators, and I completely agree with http://ogrigas.eu/spring/2010/04/spring-ws-and-jaxb-without-a-code-generator
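One way to avoid generated client stubs is to hand-write the service endpoint interface and call it through the javax.xml.ws.Service API. A sketch with made-up names and namespace; in practice the SEI usually also needs @WebMethod/@WebParam annotations so that operation and parameter names match the external WSDL exactly:

    import java.net.URL;
    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.Service;

    // Hand-written SEI -- this replaces the wsimport-generated class. The
    // targetNamespace and operations must match what the external WSDL declares.
    @WebService(targetNamespace = "http://example.com/orders")
    interface OrderPort {
        String getOrderStatus(String orderId);
    }

    public class ManualClient {
        public static void main(String[] args) throws Exception {
            URL wsdl = new URL("http://example.com/orders?wsdl");
            QName serviceName = new QName("http://example.com/orders", "OrderService");

            Service service = Service.create(wsdl, serviceName);
            OrderPort port = service.getPort(OrderPort.class);
            System.out.println(port.getOrderStatus("42"));
        }
    }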
Sorry for such a naive question, but can anyone explain to me the difference between the behaviour of Java web services (JAX-WS) and .NET web services?
Since the term "web service" is used with slightly diverging meanings, I assume we're talking about its W3C definition.
This definition basically comprises two specifications: WSDL and SOAP. Additionally, there are a bunch of other specifications known as WS-* that define special usage of WSDL and SOAP for special purposes (e.g. security).
Both Java and .NET try to implement a web service engine that adheres to these specifications. Since these specifications are fairly complex, both make mistakes. Furthermore, the goal of providing interoperability is not completely met. For example, the SOAP specification defines an optional SOAPAction HTTP header that is not used in JAX-WS but is required in .NET (I don't know if this is still true for current versions).
So the Metro web site (Metro is a web service engine using JAX-WS) mentions regular interoperability tests with .NET.
By the way, JAX-WS is the name of the specification as well as of a reference implementation thereof.
Ideally, the idea of web services is to not give you a chance to have such questions. :)
A client should not be able to distinguish between web services implemented using either technology, or indeed any other technology. The promise of web services is that they should be interoperable across many platforms, and service providers can use whatever technologies they prefer, so a Java shop would use Java and JAX-WS, for example, and a .NET shop their technologies - clients just don't care, they use the WSDL.
Things get a bit more interesting when we move away from basic SOAP/HTTP web services and use standards for security, transactions, messaging, etc. (the whole WS-* space). Ideally the implementation transparency still holds, but you can't count on an arbitrary implementor supporting what you want to do. The WS-I organisation, and its participating vendors, do a great deal of work to ensure interoperability, so the story is not too bad even for these more advanced WS-* standards.
Be careful: WCF compares to Metro; JAX-WS has just the basics, not all the WS-* stuff.
Metro and WCF implement an interoperability standard (WSE 3) and can communicate with each other through XML without any problem; you just don't know whether the service is .NET- or Java-based.
Regards
Our application often connects to different kinds of back-ends over web services, MQ, JDBC, proprietary protocols (directly over a socket), and other kinds of transport. We already have a number of implementations that let us connect from our application to these back-ends, and while all of these implementations implement the common Java interface, they do not share anything else.
We have realized that there are significant portions of code common to all of these particular connector implementations, and we have decided to streamline the development of future connectors through one universal connector. This connector will be capable of formatting messages into the format expected by the back-end and sending them using the available transport mechanism - for example, a fixed-length message format over MQ or over a socket.
One of the dilemmas we are facing is the most appropriate technology for this kind of connector. So far, our connectors have been basic Java classes that implement the common Java interface. Since we generally host our applications in some Java EE application server, it seems that the Java Connector Architecture would be the most appropriate technology for this piece of software. However, implementing a JCA-compliant connector seems to be relatively complex. What are the palpable benefits of going with the standard (JCA), and do the benefits justify the additional effort?
Indeed JCA seems the most appropriate technology for you. Already excellent arguments have been made, namely the portability, standardised interface, the connection pooling and transaction support. And don't forget security.
With WebSphere Process Server the adapters can be exposed as an SCA service, which can have a lot of benefits if that's important to you.
Also some development tools have extensive support for developing and testing JCA connectors.
Another benefit is (experienced) Java EE Administrators and Java EE developers (should) know the standard so administration and development should be easy to streamline.
But in the end you have to find reasons to implement JCA based on the scope of your project, your future plans for it, or perhaps your company's policy.
Short answer: I see no benefit in selecting JCA over other technologies; I see it as a drawback, since you need a Java EE container.
Long answer:
I've been skeptical about these Java EE standards for some time now. I don't see a compelling technical reason to use a full-featured Java EE server anymore, since there are better open source implementations of every feature offered. I've been bitten several times by implementation incompatibilities when moving to/from "enterprisey" solutions.
The idea of JCA is surfacing here right now, and I am pushing to try Apache Camel or Spring Integration instead. I am all for open source implementations that you can use everywhere. And there is a lot going on - check this list of components. Granted, it is maybe smaller than what's already developed with JCA, but every bit is open source and it's all in one location. Also, I believe the documentation is simpler and more complete. The urge for integration calls for a powerful SPI with plenty of open source, real-life examples, developed in the same fashion, that can be found in the same place.
I hate the negativity, but I don't like full-featured application servers. For instance, I would go for Tomcat and Terracotta any day over other "enterprisey" products, just as I would go with Camel before JCA, until the need for JCA is proven. I don't like the Java committee telling me how I should develop my own applications, because I don't trust them. I believe it is in my best interest when a piece of software can work just as easily on Java SE/RCP as in a Java EE environment or in a pure servlet container.
I've just developed an inbound resource adapter for a GPS device communicating over a proprietary protocol. It wasn't that much hassle, though I get the impression that developing an outbound one might require more work. The worst thing about JCA is the lack of documentation; all books and articles seem to have the same dumb example.
The thing I'm most pleased with is the portability. Once you've written the adapter you can plug the RAR (resource adapter archive) into any application server to give deployed applications the ability to communicate with the EIS supported by your resource adapter. Or you can bundle the RAR into the WAR/EAR.
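For anyone gauging the effort: the skeleton of a resource adapter is essentially one class implementing javax.resource.spi.ResourceAdapter, plus a deployment descriptor (or the @Connector annotation in JCA 1.6). A bare sketch, with the GPS-specific logic left as comments:

    import javax.resource.ResourceException;
    import javax.resource.spi.ActivationSpec;
    import javax.resource.spi.BootstrapContext;
    import javax.resource.spi.ResourceAdapter;
    import javax.resource.spi.ResourceAdapterInternalException;
    import javax.resource.spi.endpoint.MessageEndpointFactory;
    import javax.transaction.xa.XAResource;

    public class GpsResourceAdapter implements ResourceAdapter {

        private BootstrapContext context;

        public void start(BootstrapContext ctx) throws ResourceAdapterInternalException {
            // Called once when the RAR is deployed: open listeners or schedule
            // work via ctx.getWorkManager().
            this.context = ctx;
        }

        public void stop() {
            // Release sockets, threads, etc. on undeploy.
        }

        public void endpointActivation(MessageEndpointFactory factory, ActivationSpec spec)
                throws ResourceException {
            // A message-driven bean has been deployed against this adapter:
            // start delivering inbound messages to endpoints created from 'factory'.
        }

        public void endpointDeactivation(MessageEndpointFactory factory, ActivationSpec spec) {
            // The endpoint went away: stop delivery for it.
        }

        public XAResource[] getXAResources(ActivationSpec[] specs) throws ResourceException {
            return null; // no XA recovery in this sketch
        }
    }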
The benefits are primarily for vendors who wish to sell connectors to proprietary back-end systems for use with any app server, and for customers who want to be able to drop in a connector without worrying about whether it only works on WebLogic and not WebSphere, etc. Indeed, this is the goal of Java EE in general.
Note that JBoss has decided to put several things into JCA; for example, JDBC connections go via JCA.
Your future client code will have a standardised interface, some pooling and transaction support etc. but it's important to keep sight of the bigger picture; namely that the benefits are not targeted at you and your one project specifically, but at a software eco-system consisting of many app servers, many back end systems, many connectors and so on.
Sounds like a good use for a JBI container with binding components. Discussion of JCA vs JBI.
I have to choose a technology to connect my Application/Presentation Layer (Java Based) with the Service Layer (Java Based). Basically looking up appropriate Spring Service from the Business Delegate Object.
There are so many options out there that it is confusing me. Here are the options I've narrowed it down to, but I'm not sure:
Spring RMI
Apache Camel
Apache ServiceMix (ESB)
Iona FUSE (ESB)
Here is what I want to know
If you have worked on (or evaluated) any of these, which choice do you think is most appropriate? (And it wouldn't hurt to tell me why. :)
Are there other technologies that I should be looking at as well?
As of now I do not see the application and service layers being distributed, but I do not want to rule out this possibility in the future. Is it a good idea to design for this flexibility?
Any help would be useful. Thanks!
Spring Remoting would seem like the simplest approach. It also would leave you open to more complex approaches in the future if that is the direction you want to take.
From the limited view of your requirements, I would stick with a simple solution with a lower learning curve, and leave the ESB till you determine you actually need it.
The KISS principle is a wonderful thing.
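For reference, a minimal Spring Remoting setup over RMI might look like this, assuming a hypothetical AccountService interface in the service layer (RmiServiceExporter and RmiProxyFactoryBean are the standard Spring classes; equivalent XML bean definitions work just as well):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.remoting.rmi.RmiProxyFactoryBean;
    import org.springframework.remoting.rmi.RmiServiceExporter;

    // Hypothetical service-layer contract (normally lives in the service module).
    interface AccountService {
        String balanceFor(String accountId);
    }

    @Configuration
    public class RemotingConfig {

        @Bean
        public AccountService accountService() {
            return accountId -> "balance for " + accountId;   // stand-in implementation
        }

        // Server side: exposes the existing Spring bean over RMI.
        @Bean
        public RmiServiceExporter accountServiceExporter(AccountService accountService) {
            RmiServiceExporter exporter = new RmiServiceExporter();
            exporter.setServiceName("AccountService");
            exporter.setServiceInterface(AccountService.class);
            exporter.setService(accountService);
            exporter.setRegistryPort(1199);           // arbitrary port
            return exporter;
        }

        // Client side: the presentation layer just injects AccountService and
        // stays unaware that calls go over the wire.
        @Bean
        public RmiProxyFactoryBean accountServiceProxy() {
            RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();
            proxy.setServiceUrl("rmi://localhost:1199/AccountService");
            proxy.setServiceInterface(AccountService.class);
            return proxy;
        }
    }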
It mostly boils down to whether you want to use Spring Remoting (which Spring RMI and Apache Camel are implementations of), or JAX-WS for web services (which CXF and Metro implement). That is, do you want automatic remoting of your POJOs, or do you want WS with WSDL contracts and so forth?
Once you've decided on the remoting technology, your next decision is whether you want to bundle it inside your application as a library (e.g. Spring RMI or Camel), or deploy it in an ESB container like ServiceMix so you can hot-redeploy modules and so forth.
If the latter is your choice then use Apache ServiceMix - or use the FUSE ESB if you want a commercial distribution with more documentation, frequent releases, commercial support and so forth.
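As a flavour of the library-style option, a Camel route that fronts a service-layer bean might look roughly like this; the names are illustrative, and "direct:" is used only to keep the sketch self-contained (routes can equally be defined in Spring XML):

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class ServiceRoute extends RouteBuilder {

        // Stand-in for an existing service-layer bean.
        public static class AccountHandler {
            public String handle(String accountId) {
                return "balance for " + accountId;
            }
        }

        @Override
        public void configure() {
            // Swapping this URI for e.g. "activemq:queue:account.requests" or a
            // "cxf:" endpoint changes the transport without touching business code.
            from("direct:accountRequests")
                .bean(AccountHandler.class, "handle");
        }

        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new ServiceRoute());
            context.start();
            String reply = context.createProducerTemplate()
                    .requestBody("direct:accountRequests", "42", String.class);
            System.out.println(reply);
            context.stop();
        }
    }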
Here you can find a simple solution to integrate Metro and Camel together: http://www.everit.biz/web/guest/everit-blog/-/blogs/calling-a-camel-route-from-web-service-using-metro-and-tomcat?_33_redirect=/web/guest/everit-blog