I come from a background of MoM. I think I understand ESB conceptually. However, I'm not too sure about the practical differences between the two when it comes to making a choice architecturally.
Here is what I want to know:
1) Are there any good links online which can help me in this regard?
2) Can someone tell me where it makes sense to use one over the other?
Any help would be useful.
Messaging tends to concentrate on the reliable exchange of messages around a network, using queues as a reliable load balancer and topics to implement publish/subscribe.
An ESB typically tends to add different features above and beyond messaging such as orchestration, routing, transformation and mediation.
I'd recommend reading about the Enterprise Integration Patterns, which give an overview of the common patterns you'll tend to use in integration problems; they are all based on top of a message bus (though they can be used with other networking technologies too).
For example, using open source: Apache ActiveMQ provides loosely coupled, reliable exchange of messages. Then you can use Apache Camel to implement the Enterprise Integration Patterns for smart routing, transformation, orchestration, working with other technologies, and so forth.
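To make that concrete, here is a minimal sketch of a Camel 2.x-style route implementing the Content-Based Router pattern on top of ActiveMQ. The broker URL and queue names are hypothetical; the point is that the EIP logic lives in the route, not scattered through your application code.

```java
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Content-Based Router EIP: consume from a queue, inspect a header,
        // and forward the message to different destinations.
        from("activemq:queue:orders")
            .choice()
                .when(header("orderType").isEqualTo("priority"))
                    .to("activemq:queue:orders.priority")
                .otherwise()
                    .to("activemq:queue:orders.standard");
    }

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Wire the ActiveMQ component to a (hypothetical) local broker.
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
        context.addRoutes(new OrderRoutes());
        context.start();
        Thread.sleep(60_000);   // keep the route running for a minute
        context.stop();
    }
}
```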
I put MOM solutions and ESB solutions on two distinct planes.
I consider MOM a building block for ESB solutions. In fact, ESB solutions get their loose coupling and asynchronous communication capabilities from the paradigm offered by the specific MOM implementation underneath.
Therefore, MOMs are solutions for distributing data/events at a customizable level of QoS (according to the specific vendor implementation), while ESBs are solutions providing the capabilities to realize complex orchestrations in an SOA scenario (where we have multiple providers offering their services, and multiple consumers interested in consuming the services offered by the former).
Complex orchestrations imply communication between legacy systems, each of which has its own data domain representation (rules and services on specific data) and its own communication paradigm (one consumer interacts with the ESB using CORBA, another using web services, and so on).
It is clear that an ESB is a more complex architectural solution, aimed at providing the abstraction of a data bus (much like the electronic buses inside every PC), able to connect a plethora of service providers to a loosely specified plethora of service consumers, hiding heterogeneity in (i) data representation and (ii) communication.
Sorry for the long post, but the concepts are complex and it is very difficult to be effective and efficient in a short statement.
An ESB is typically a layer that routes, logs, transforms, and performs other 'technical' (i.e. non-business) functions on messages. It could process messages from a messaging system (such as something JMS-based), or it could work with other types of message (such as SOAP-based web services). In that respect, it's more general than MoM.
Disclaimer: I am an IBM WebSphere consultant - although I am not contributing here in an official capacity.
An ESB with web services, in its true form, provides application loose coupling by sending the data through one of the elements of the message.
MOM provides not only application loose coupling but process loose coupling as well.
An ESB comes with additional features supporting a governance-centric approach.
Both can be used independently or together depending upon the scenario.
IBM and Oracle have SOA certifications. Since they're the leaders in the marketplace (Gartner Magic Quadrant), I would read about how they define SOA and ESBs (along with the methodology and the components needed to support SOA, like governance, registries, etc.).
ESB is just yet another buzzword, as is SOA 2.0.
You can have an ESB system easily implemented with normal web services with a queue behind them.
You can have message routing and/or orchestration with SOA 1.0 (TIBCO, BizTalk); one thing does not stop the other, really. More importantly, it is the semantics given to the messages exchanged in the system that play an important role, in this case events. Messages as events are triggers about something that happened in your system, so the context is different.
Hi guys: I've had "simplistic" workflow management tricks (like rotating file queues, controller threads, etc.) work in a wide variety of producer/consumer contexts, where files are simply renamed, deleted, and created in a systematic manner, or where a "main" thread calls and coordinates workers.
In contrast, I've also "played" with JMS in some toy applications, and I can see how it might be used to coordinate a complex application workflow.
I was wondering: What do messaging services like JMS offer over standard producer/consumer workflows (of course, if I'm missing something here, or have the wrong idea of when/why JMS is used, feel free to correct me)?
In particular, what type of applications require enterprise-grade messaging frameworks?
What do messaging services like JMS offer over standard producer/consumer workflows?
Scalability, availability, transparency, manageability. In point-to-point communication the sender is bound to the receiver and vice versa. You, as the application developer, are responsible for thinking about what to do when traffic increases and for implementing the necessary changes. Your application must be aware of the environment in which it works and must be changed every time the environment changes. You are forced to reinvent the wheel while solving typical messaging problems, for example temporary congestion (what to do when the consumer can't keep pace with the producer for a while?). You have to provide your own means of monitoring the current situation if something does not work as expected. The list goes on...
Now imagine you have to wire 10 different systems this way. Obviously, you'll need to come up with a fairly universal solution so that you don't implement each connection's logic from scratch; that would be terribly expensive to produce, not to mention maintain. A JMS message broker is one such possible general solution.
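As a small sketch of what that decoupling looks like in code (the broker URL and queue name are hypothetical, using ActiveMQ as an example JMS provider), the producer below only knows about the broker and a destination, not about any consumer:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PriceUpdatePublisher {

    public static void main(String[] args) throws Exception {
        // The producer knows the broker URL and a queue name; it neither knows
        // nor cares how many consumers there are, or whether they are up right now.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("PRICE.UPDATES");
        MessageProducer producer = session.createProducer(queue);

        TextMessage message = session.createTextMessage("ACME 42.17");
        producer.send(message);   // the broker buffers this if consumers are slow or offline

        connection.close();
    }
}
```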
In particular, what type of applications require enterprise-grade messaging frameworks?
Complicated, in short. I work for a company that has a network of about 70 systems, some of them 30 years old. New systems are added to the network as time passes, the old systems don't need to be changed, and new systems need not be aware of ancient data exchange protocols: a centralized cluster of message brokers can translate a JMS message into some mainframe message format I have no idea about, and translate the answer back the same way.
I have many years of experience with large J2EE web applications and highly transactional core Java applications, but I have never had any experience with SOA.
Currently I am working on a new project, but the architecture was already done. We (Java developers) develop EJB services which ultimately send JAXB-based Java objects to C#.NET clients that render the UI, which is used only within the company (11,000 users). The idea is that there may be internet users around the world in the future, and we will be developing a J2EE-based web application which will use the same services.
Is this truly a Service Oriented Architecture? Can SOA be done in this way, using JAXB-bound Java objects which can be consumed from many platforms?
I have never done any SOA work so I want to get some terms correct. Thank you.
For an architecture to be SOA, it has to stick to the rules below:
• SOA components are loosely coupled. When we say loosely coupled, we mean every service is self-contained and exists on its own logically. For instance, we can take the ‘payment gateway’ service and attach it to a different system.
• SOA services are black boxes. In SOA, services hide their inner complexities. They interact only through messages and provide their services in response to those messages. By visualizing services as black boxes, services become more loosely coupled.
• SOA services should be self-defined: SOA services should be able to describe themselves.
• SOA services are maintained in a listing: SOA services are maintained in a central repository. Applications can search for services in the central repository and use them accordingly.
• SOA components can be orchestrated and linked to achieve a particular piece of functionality. SOA services can be used/orchestrated in a plug-and-play manner.
It does not matter what technologies/languages you are using as long as you don't break any of the above rules.
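As a hedged illustration of that last point (all class names here are hypothetical), a JAXB-bound payload exposed through a JAX-WS endpoint is consumable from C#.NET purely via the generated WSDL; whether the result is SOA depends on the rules above, not on the binding technology:

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.bind.annotation.XmlRootElement;

// Two classes shown in one listing; in practice they live in separate files.

// JAXB-bound payload: on the wire it is just XML described by the WSDL/schema,
// so a C#.NET client can generate its own proxy types from it.
@XmlRootElement
class AccountSummary {
    public String accountId;
    public double balance;
}

// The published contract is what matters for SOA; the Java/EJB/JAXB plumbing
// behind it is an implementation detail the consumer never sees.
@WebService
public class AccountService {

    @WebMethod
    public AccountSummary getSummary(String accountId) {
        AccountSummary summary = new AccountSummary();
        summary.accountId = accountId;
        summary.balance = 0.0;   // placeholder; a real service would look this up
        return summary;
    }
}
```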
for more info:
http://www.codeproject.com/KB/aspnet/SoftArch7.aspx
All SOA means is that "external" components can consume functionality. Usually SOA refers to XML/RESTful interfaces, but that's just convention.
http://en.wikipedia.org/wiki/Service-oriented_architecture
SOA doesn't have anything to do with implementation details such as EJB or JAXB. SOA is all about creating loosely coupled, discrete services (usually web services). These services can then be used by any business logic layer to satisfy a business need.
You can then add a UI layer (say Java Swing or SWT) onto the business logic layer to create a client application; similarly, you could create a web app. In each case you are using the same web services. This is SOA.
Is this a truly a Service Oriented Architecture?
SOA is a buzzword. You can also think about it as RBMDC => "Reusable By Many Different Clients" architecture
It has nothing to do with the actual data type (XML, JSON, binary, etc..) nor with the protocol (HTTP, TCP/IP, SOAP, etc..).
What it really boils down to is you have X "business functions" that you expose to be usable by external or internal clients. These business functions are technically labeled services, hence your architecture is Service Oriented.
What you describe in your example is what buzz architects call SOA => the answer is YES.
Yes. That's exactly what SOA is. Ask yourself these questions:
Are you developing a layer which encapsulates business logic, and maybe interacts with a database while doing so?
Is that layer being designed in such a way that multiple views, or other layers, can call on it to obtain information?
If your answer is yes, then that's SOA. You will have multiple clients -> calling on a gateway (maybe your web server) -> which directs the request to your service and then returns the data back.
Once you have developed the gateway, all you need to concentrate on is developing the services, and some other module can consume them.
It's wonderful to have loose coupling, isn't it?
I was in a project that did exactly what you are doing. C# SOA and Java EJBs at the backend.. :)
We have what I think is a fairly typical client/server architecture, with a frontend written in .NET, displaying data sent from a backend written in Java.
Currently, we use a custom message-based framework for transmitting data snapshots and updates down to clients. This might be upgraded; although the basic Java service/.NET client setup is set in stone, we want to look at replacements for the message framework, for example WPF MVVM (with an eye on Silverlight) with data bindings to Java web services, or perhaps Coherence.
I was wondering what experiences others have had with this and other approaches (obviously there's no silver bullet for all situations...).
Our requirements are that the clients can show large, frequently updating and editable datasets, primarily in grids.
Update
I've accepted that REST/SOAP is the standard way to do it, but I'd still be interested to hear any other approaches, especially from a performance point of view.
Web services are the most common choice:
RESTful service - more flexible, no strictly defined schema
SOAP service - rigid schema, less flexible
Check out protobuf, which is a good platform-agnostic protocol.
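For the RESTful option above, a minimal JAX-RS sketch might look like the following; the resource and its payload are hypothetical, and a real service would return bound objects rather than hand-built JSON:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource exposing grid snapshots to the .NET front end.
@Path("/snapshots")
public class SnapshotResource {

    @GET
    @Path("/{gridId}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getSnapshot(@PathParam("gridId") String gridId) {
        // A real implementation would return bound objects and let the JAX-RS
        // provider serialize them; a hand-built string keeps the sketch self-contained.
        return "{\"gridId\":\"" + gridId + "\",\"rows\":[]}";
    }
}
```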
I wonder what the best way is to integrate Java modules developed as separate J(2)EE applications. Each of those modules exposes Java interfaces. The POJO entities (Hibernate) are used along with those Java interfaces; there are no DTO objects. What would be the best way to integrate those modules, i.e. one module calling another module's interface remotely?
I was thinking about EJB3, Hessian, SOAP, and JMS. There are pros and cons to each of the approaches.
Folks, what are your opinions or experiences?
Having dabbled with a few of the remoting technologies and found them universally unfun, I would now use Spring remoting as an abstraction over the implementation.
It allows you to concentrate on writing your functionality and let Spring handle the remote part with some config. You have the choice of several implementations (RMI, Spring's HTTP invoker, Hessian, Burlap and JMS). The abstraction means you can pick one implementation and simply swap it if your needs change.
See the SpringSource docs for more information.
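As a rough sketch of what that looks like with Spring's HTTP invoker (the QuoteService interface and URLs are hypothetical, and the exporter assumes a DispatcherServlet mapped to /remoting/* with bean-name URL mapping):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;
import org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter;

// Hypothetical business interface, packaged in a jar shared by both sides.
interface QuoteService {
    double latestPrice(String symbol);
}

// Server side: export the service over HTTP. Swapping this exporter for an
// RMI or Hessian exporter does not touch QuoteService or its callers.
@Configuration
class ServerRemotingConfig {

    @Bean
    public QuoteService quoteService() {
        return symbol -> 42.0;   // stub implementation for the sketch
    }

    // Bean-name URL mapping: with a DispatcherServlet mapped to /remoting/*,
    // this bean answers at http://host:port/remoting/QuoteService.
    @Bean(name = "/QuoteService")
    public HttpInvokerServiceExporter quoteServiceExporter(QuoteService quoteService) {
        HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
        exporter.setService(quoteService);
        exporter.setServiceInterface(QuoteService.class);
        return exporter;
    }
}

// Client side: code against QuoteService; the proxy hides the HTTP call.
@Configuration
class ClientRemotingConfig {

    @Bean
    public HttpInvokerProxyFactoryBean quoteService() {
        HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
        proxy.setServiceUrl("http://localhost:8080/remoting/QuoteService");
        proxy.setServiceInterface(QuoteService.class);
        return proxy;
    }
}
```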
The standard approach would be to use plain RMI between the various service components but this brings issues of sharing your Java interfaces and versioning changes to your domain model especially if you have lots of components using the same classes.
Are you really running each service in a separate VM? If these EJBs are always talking to each other then you're best off putting them into the same VM and avoiding any remote procedure calls as these services can use their LocalInterfaces.
The other thing that may bite you is using Hibernate POJOs. You may think that these are simple POJOs, but behind the scenes Hibernate has been busy with CGLib trying to do things like allow lazy initialization. If these beans are serialized and passed over remote boundaries then you may end up with odd Hibernate exceptions getting thrown. Personally I'd prefer to create simple DTOs or write the POJOs out as XML to pass between components. My colleagues would go one step further and write custom wire protocols for transferring the data for performance reasons.
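A minimal sketch of that DTO approach, with a hypothetical Customer entity, might look like this:

```java
import java.io.Serializable;

// Hypothetical Hibernate entity (mapped elsewhere); it may be wrapped in a CGLib
// proxy and hold lazy collections, so we never send it over the wire directly.
class Customer {
    private Long id;
    private String name;
    public Long getId() { return id; }
    public String getName() { return name; }
}

// A deliberately plain DTO: no proxies, no lazy collections, nothing that can
// blow up with a Hibernate exception once it leaves the server VM.
public class CustomerDTO implements Serializable {
    private static final long serialVersionUID = 1L;

    private final Long id;
    private final String name;

    public CustomerDTO(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }

    // Do the mapping while the Hibernate session is still open, copying only
    // the fields the remote caller actually needs.
    public static CustomerDTO fromEntity(Customer entity) {
        return new CustomerDTO(entity.getId(), entity.getName());
    }
}
```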
Recently I have been using the Mule ESB to integrate various service components. It's quite nice as you can have a mix of RMI, sockets, web services etc. without having to write most of the boilerplate code.
http://www.mulesource.org/display/COMMUNITY/Home
Why would you go with anything other than the simplest thing that works?
In your case that sounds like EJB3 or maybe JMS, depending on whether the communication needs to be synchronous or asynchronous.
EJB3 is by far the easiest, being built on top of RMI with the container providing all the additional features you might need - security, transactions, etc. Presumably your POJOs are in a shared jar and can therefore simply be passed between your EJBs, although I tend towards passing value objects myself. The other benefit of EJB, when done right, is that it's the most performant (that's just my opinion btw ;-).
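For example, a bare-bones EJB3 setup could be as small as the following sketch (names are hypothetical):

```java
import javax.ejb.Remote;
import javax.ejb.Stateless;

// Two types shown in one listing; in practice the interface lives in the shared
// jar and the bean in the providing module.

@Remote
public interface PricingService {
    double priceFor(String productCode);
}

@Stateless
public class PricingServiceBean implements PricingService {

    @Override
    public double priceFor(String productCode) {
        return 9.99;   // stub; a real bean would consult its own data layer
    }
}
```

The consuming module injects the interface with @EJB (or looks it up via JNDI) and calls it like a local object; the container takes care of serializing arguments and return values.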
JMS is a little more involved, but not much, and a system based on asynchronous communication affords certain niceties in terms of parallelizing tasks, etc.
The performance overhead of web-services, the inevitable extra config and additional points of failure make them, IMHO, not worth the hassle unless you've a requirement that mandates their use - I'm thinking interop with non-Java clients or providing data to external parties here.
If you need network communication between Java-only applications, Java RMI is the way to go. It has the best integration, most transparency and the least overhead.
If, however, some of your clients aren't Java-based, you should probably consider other options (Java RMI actually has an IIOP dialect, which allows it to interact with CORBA; however, I wouldn't recommend doing this unless it's for some legacy-code integration). Depending on your needs, web services are probably your friend. If you are concerned about the network load, you could favour Hessian over web services.
You literally mean remotely? As in running in a different environment with therefore different availability characteristics? With network overheads?
Assuming "yes" my first step would be to take a service approach, set aside the invocation technology for a moment. Just consider the design and meaning of your services. You know they are comparativley expensive to invoke, hence small busy interfaces tend to be a bad thing. You know that the service system might fail between invocations, so you may favour stateless services. You may need to retry requests after failure, so you may favour idempotent service designs.
Then consider availability relationships. Can your client work without the remote system. In some cases you simply can't progress if the remote system isn't available (eg. can't enable the employee if you can't get to the HR system) in other cases you can adopt a "fire-and-tell-me-later" philosophy; queue up the requests and process responses later.
Where there is an availability depdency, then simply exposing a synchronous interface seems to fit. You can do that with SLSB EJBs, if everything is Java EE, that works. I tend to generalise expecting that if my services are useful then non Java EE clients may want them too. So SOAP (or REST) tends to be useful. These days adding a web service interface to your SLSB is pretty trivial.
But my pet theory is that any sufficiently large IT system ends up needing aynch communications: you need to decouple the availability contraints. So I would tend to look for a JMS-style relationship. An MDB facade in front of your services, or SOAP/JMS is not too hard to do. Such an approach tends to highlight the failure-case design issues that were probably lurking anyway, JMS tends to make you think: "suppose I don't get an answer? suppose my answer comes late?"
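A sketch of such an MDB facade might look like the following; the queue name and the delegated service are hypothetical, and activation-config property names vary slightly between containers:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical MDB facade: requests arrive on a queue, so the caller never blocks
// on our availability; replies (if any) go back on another queue asynchronously.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "queue/EnableEmployeeRequests")
})
public class EnableEmployeeMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            String employeeId = ((TextMessage) message).getText();
            // Delegate to the underlying service; if the HR system is down, the
            // message can be redelivered later or parked on a dead-letter queue.
            // employeeService.enable(employeeId);
        } catch (JMSException e) {
            throw new RuntimeException(e);   // force redelivery per container policy
        }
    }
}
```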
I would go for SOAP.
JMS would be more efficient, but you would need to code up a message-driven bean for each interface.
SOAP, on the other hand, comes with lots of useful toolkits that will generate your message definitions (WSDL) and all the necessary handlers (client and server) when given an EJB.
With SOAP you can (but don't have to) deal with certificate security and secure connections over public networks. As the default protocol is HTTP over port 80, you will have minimal pain with firewalls etc. SOAP is also great for heterogeneous clients (in your case, anything that isn't J2EE), with good support for most common languages on most common platforms.
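For instance, exposing an existing stateless bean over SOAP can be as little as this (names hypothetical; the container publishes the WSDL):

```java
import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;

// Adding @WebService to an existing stateless session bean is usually all it takes
// for the container to publish a WSDL; any SOAP toolkit (Java, .NET, scripting
// languages) can then generate a client from that WSDL.
@Stateless
@WebService
public class OrderStatusService {

    @WebMethod
    public String statusOf(String orderId) {
        return "SHIPPED";   // stub; a real bean would query the order store
    }
}
```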
In my company, we are going to use Flex3 for the presentation layer of a new financial web application and Spring for the business layer but a debate is still going on regarding the best messaging/remoting technology. Can you share your own experiences in terms of pros and cons of using one or the other technology?
In my experience, use BlazeDS unless you need to use web services which a variety of technologies can access.
BlazeDS
Pros: Less server intensive, less client parsing time, smaller data package (it's binary), meaning it's overall a faster call. Can do publish/subscribe as well as method invocation.
Cons: Not compatible with non-Flex front ends (although it's open source, so in theory, it could be.)
Webservices
Pros: Well established, pretty much cross platform. Easy to read and translate issues.
Cons: Much more verbose. If you use the internal translation of XML to AS Objects, the client has to do some intensive parsing. If you use the objects as XML, encapsulation will be weakened (objects outside of the call would have to know detailed information about the XML object, meaning refactoring can be problematic.)
For a good comparison on actual data with actual numbers, see James Ward's Census application.
BlazeDS supports real-time message streaming over AMF and HTTP, but the number of clients it can handle is lower than with the more efficient RTMP of Adobe LiveCycle ES. You can always switch to LiveCycle later if you need the performance boost, but there is a price tag involved (I don't know how expensive it is).