Heterogeneous Java EE cluster - java

Is it possible to form a cluster containing different types of application servers? For instance, one JBoss, one GlassFish and one WebSphere? Let's assume we are using EJB 3.0.
Stateless session beans should be relatively easy; simple load balancing among the instances should do the job. But what about SFSBs and session replication? Is it possible to use a cache store like Infinispan for that?
I would appreciate any comments or sharing your experience on this topic.

I assume it may be possible if you use an application-server-agnostic solution like Hazelcast. According to its documentation it's pretty easy to configure web session replication, and the only requirements it has are:
The target application or web server should support Java 1.5+
The target application or web server should support the Servlet 2.4+ spec
Session objects that need to be clustered have to be Serializable
I've not tried to configure a cluster the way you've described; however, I think it may do the trick.
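As a minimal, untested sketch of that idea using Hazelcast's Java API: every node, regardless of vendor, joins the same Hazelcast cluster and keeps session-scoped state in a distributed map instead of relying on the container's own replication. The map name and key scheme below are mine, not from the Hazelcast docs; Hazelcast's documented web session replication is normally configured via its servlet filter, this just illustrates the vendor-neutral storage idea.

```java
import java.io.Serializable;
import java.util.Map;

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// Each node (JBoss, GlassFish, WebSphere, ...) starts a Hazelcast member and
// reads/writes shared session state from the same distributed map.
public class SharedSessionStore {

    private final HazelcastInstance hz =
            Hazelcast.newHazelcastInstance(new Config());

    private final Map<String, Serializable> sessions = hz.getMap("web-sessions");

    public void put(String sessionId, Serializable state) {
        sessions.put(sessionId, state);   // values must be Serializable
    }

    public Serializable get(String sessionId) {
        return sessions.get(sessionId);   // visible to every node in the cluster
    }
}
```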

The response is simply no. Clustering is a non-standard feature; it is up to the Java EE implementation to provide clustering while keeping standard behaviour (with very few constraints, as stickiness is expected and session objects are expected to be serializable), and no interoperability is foreseen.
You can of course build the cluster yourself, setting up an external data grid to serve as a session store and managing the cache yourself, but then you lose any framework functionality related to the session (you will need to do everything yourself), and what is the point of using a full Java EE application server any more? And yes, you would then need to forget about SFSBs.
I am really curious what problem you want to solve with this type of architecture. I don't see any benefit that can overcome the cost of maintaining 3 different apps (application servers have slight differences on the development side) and, more importantly, 3 different infrastructure/operations stacks (on this side there are a lot of differences, so you have to multiply the operations team's knowledge).

Related

App Server Clustering vs Terracotta

I've heard the term "clustering" used for application servers like GlassFish, as well as with Terracotta; and I'm trying to understand what the word clustering implies when used in conjunction with application servers, and when used in conjunction with Terracotta.
My understanding is:
If a GlassFish server is clustered, then it means we have multiple physical/virtual machines, each with their own JRE/JVM running separate instances of GlassFish. However, since they are clustered, they will all communicate through their admin server ("DAS"), and have the same apps deployed to all of them. They will effectively act (to the end user) as if they are a single app server - but now with load balancing, failover/redundancy and scalability added into the mix.
Terracotta is, essentially, a product that makes multiple JVMs, running on different physical/virtual machines, act as if they are a single JVM.
Thus, if my understanding is correct, the following are implied:
You cluster app servers when you want load balancing and failover tolerance
You use Terracotta when any particular JVM is too small to contain your application and you need more "horsepower"
Thus, technically, if you have a GlassFish cluster of, say, 5 server instances; each of those 5 instances could actually be an array/cluster of Terracotta instances; meaning each GlassFish server instance is actually a GlassFish instance living across the JVMs of multiple machines itself
If any of these assertions/assumptions are untrue, please correct me! If I have gone way off-base and clearly don't understand clustering and/or the very purpose of Terracotta, please point me in the right direction!
Terracotta enables you to have shared state across all your nodes (it's stateful). Basically it creates a shared memory space between different JVMs. This is useful when the nodes in a cluster all need access to the same objects.
If your application is stateless and you just need load balancing and failover, you can use a solution like JGroups. In this scenario each node just handles requests and has little idea about the other nodes. Objects in memory are not shared across nodes; each JVM runs on its own and has no idea about the other JVMs. This often works nicely for request/response applications; a web server serving content (without sessions) does this, for example.
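As a rough, hypothetical sketch of how thin that coupling is with plain JGroups (not something from the original answer): each node only learns the current membership view and can broadcast messages; no object state is shared automatically. The cluster name is arbitrary and a JGroups 3.x API is assumed.

```java
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

// A "stateless" cluster member: it joins a named channel, is told about
// membership changes, and can broadcast, but owns no shared objects.
public class StatelessNode {

    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();          // default protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void viewAccepted(View view) {
                // the only knowledge of other nodes: the membership list
                System.out.println("Members: " + view.getMembers());
            }

            @Override
            public void receive(Message msg) {
                System.out.println("Got: " + msg.getObject());
            }
        });
        channel.connect("stateless-cluster");
        channel.send(new Message(null, "node up")); // null destination = broadcast
        // ... handle requests independently; close the channel on shutdown
        channel.close();
    }
}
```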
Dealing with a stateless cluster is often simpler than dealing with a stateful cluster, because in a stateless cluster the nodes know almost nothing about each other, which means fewer things can go wrong.
GlassFish sits somewhere in the middle of the above concepts. Objects in memory within GlassFish are visible to all nodes. However, the frontend (HTTP connectors) works statelessly.
So to answer your questions:
1) Yes, those are the two most obvious reasons. However, sometimes people only want failover, only want load balancing, or want both. Not all clustering solutions fix both of these problems.
2) Yes. Although technically speaking Terracotta only solves the shared memory part, not the CPU part. However, by solving the memory part it automatically solves the CPU part, since you can now just add JVMs to the shared memory space.
3) I don't know if that's practically possible, but as a thought experiment: yes.
Clustering can mean one of the following:
Multiple instances can be managed as one. Deploy an application to the cluster, it is deployed to all instances in the cluster. Make a configuration change, and that change will be pushed to all nodes in the cluster. GlassFish supports this out of the box.
Service Availability. If any one instance fails, the application is available on another instance. Without high availability enabled, any instance failure also results in session loss for any session being managed by that instance. GlassFish supports this out of the box.
High availability. If any one instance fails, the application is available on another instance, and there is no session loss because a session replica is also maintained on another instance. GlassFish supports this. You will have to choose either #2 or #3 in any one cluster.
What you are asking about IMHO is really #3, because it is the only real case where Terracotta - in the context of high availability clustering - will offer value w/GlassFish. GlassFish already offers built-in high availability, so there had better be a very good reason to add Terracotta to the solution because it will complicate the deployment architecture.
The primary reason I can think of for adding Terracotta is that you may want to offload session management to a data grid and free up GlassFish to run business logic. This may be due to more frequent garbage collection or wanting to manage more users per GlassFish instance. However, I'm not sure that Terracotta can do this seamlessly. With GlassFish's built-in HA clustering, replicating sessions is seamless (no application logic modifications). You may have to write code to put/get data from a Terracotta cache; I'll let you research that :-) Oracle GlassFish Server also integrates (seamlessly) with Coherence to solve this problem. You can separate session management into a Coherence data grid without modifying your application code.
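Purely as a hypothetical illustration of that put/get code (using Ehcache, which can be Terracotta-backed when configured that way in ehcache.xml, rather than Terracotta's own API): once session state is offloaded to a grid, the application reads and writes it explicitly instead of relying on HttpSession replication. The cache name and key scheme are made up.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

// Explicit put/get of offloaded session data against a (possibly
// Terracotta-backed) Ehcache region defined in ehcache.xml.
public class OffloadedSessionData {

    private final Cache sessionCache =
            CacheManager.create().getCache("sessionData"); // assumes ehcache.xml defines it

    public void save(String sessionId, java.io.Serializable state) {
        sessionCache.put(new Element(sessionId, state));
    }

    public Object load(String sessionId) {
        Element e = sessionCache.get(sessionId);
        return e == null ? null : e.getObjectValue();
    }
}
```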
Unless you know for a fact up front that your application must scale to a very large number of concurrent users, start with built-in HA clustering, run tests, and go from there.
Hope this helps.

EJB3 Enterprise Application As Portal & Client Web Apps - Architecture/Design

As shown in the above picture, I have an EJB 3 enterprise application (EAR file) which acts as a portal and holds 3 web applications (WAR files) that communicate and transact with the same datastore. These 3 webapps are not portlet implementations, but normal webapps which interact with the datastore through the enterprise app's persistence layer. These webapps are developed independently, so some of them use web services from the enterprise app and some of them use EJB clients.
Also, there is another option: replacing these webapps (Web App1, Web App2 and Web App3) with independent enterprise apps that communicate and transact with the database, as shown below:
Now, my questions are:
1) What is the best Option among the listed 2 options (above)?
2) How does it affect when we replace those webapps acting as clients to the Enterprise App, as independent Enterprise Apps (EAR files)?
3) What is a better model for Transaction handling, SSO functionality, Scalability and other factors?
4) Are there are any other better models?
EDIT:
1) In the first model, which method is a preferred way to interact with the EAR file - webservices or ejb-client jar file/library (interfaces and utility classes)?
2) How do both models differ in memory usage (server RAM) and performance. Is there any considerable difference?
Since you are being so abstract, I will be too. If we strip away all the buzzwords like "Portal", "Enterprise Apps" and so on, what we have in the end is three web apps and a common library or framework (the enterprise app).
Looking at the app as simply as possible: you have three developers who need to develop three web apps, and you will provide some common code useful for building their apps. The model you use will depend on what kind of code you provide them.
1. You only provide some utilities and common business code. Maybe the classic shared library fits your needs. (In Java EE environments you must take into account how you can take advantage of the level-2 persistence cache by sharing a session factory for a single datastore.)
2. You provide shared services such as persistence, caching, security, auditing and so on. You will need a service layer, as in the first option. You have shared state, so you need only one instance.
3. The most common case is both: you provide some business API plus a service layer for the common services.
You haven't indicated any requirement that forces you to use a more complex solution for your scenario.
EDIT:
Regarding whether RMI (the EJB client) or web services is preferable: I always use RMI to connect applications that are geographically close. Its use is simple and the protocol is much faster than web services (you can find plenty of comparisons on this topic by searching for "rmi webservices performance" on Google).
On the other hand, RMI is more sensitive to network latency, requires special firewall configuration and is more tightly coupled than web services. So if I intend to offer services to a third party or to connect geographically dispersed servers, I prefer web services or even REST.
Regarding the last question: initially there is no difference between deploying one or ten applications on the same server. The deployment overhead will be insignificant compared to the overhead of actually using the applications. Of course, take this as a general assumption; obviously the size of your applications and how you deploy them will have an impact on memory consumption and other factors.
Keep in mind that these decisions can easily be changed as needed. So, as I said, you could start with the simple solution, and if you run into problems deploying your applications you could restructure your EARs easily.
I'm inclined to agree with Fedox. If there is no reason for choosing one solution over the other (business reason, technical reason, etc.) then you might as well choose the path of least resistance. To my mind that would be the first solution.
In general terms start simple and add complexity as you need to. Your solutions have no meaning without context. A banking app needs different considerations to a blog.
Hope this helps
There is a new platform called Vitria's BusinessWare; it's a very successful product which is worth millions.
Now let's see how it works and what it does, so that we can do the same in theory:
It interconnects projects with their databases, web services with their EJBs, etc.
From their concept we can learn the following:
Create a main stateless EJB (the API) whose job is to pass messages from:
web services to other web services
web services to webapps
webapps to other web services
The purpose of this EJB is to first perform validations against the main database and then pass the calls on to the other modules.
Only this EJB has access to the DB, which makes the connections more secure.
This EJB will queue messages until the target modules are free to accept them.
This EJB will control all the processes in the DB.
This EJB will decide where to send the messages.
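Purely as an illustration of that idea (not Vitria's actual design), such a gateway bean might look roughly like this in EJB 3.0. The persistence unit, JNDI names and the RegisteredModule entity are invented for the sketch.

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// The only component with database access: it validates each message against
// the main database and queues it until the target module can accept it.
@Stateless
public class MessageRouterBean {

    @PersistenceContext(unitName = "mainDb")          // assumed persistence unit
    private EntityManager em;

    @Resource(mappedName = "jms/ConnectionFactory")   // assumed JNDI names
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/moduleQueue")
    private Queue moduleQueue;

    public void route(String targetModule, String payload) {
        // 1) Validate in the main database before forwarding.
        Long known = (Long) em.createQuery(
                "select count(m) from RegisteredModule m where m.name = :name")
                .setParameter("name", targetModule)
                .getSingleResult();
        if (known == 0) {
            throw new IllegalArgumentException("Unknown module: " + targetModule);
        }

        // 2) Queue the message; the target module consumes it when it is free.
        try {
            Connection con = connectionFactory.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(moduleQueue);
                TextMessage msg = session.createTextMessage(payload);
                msg.setStringProperty("targetModule", targetModule);
                producer.send(msg);
            } finally {
                con.close();
            }
        } catch (JMSException e) {
            throw new RuntimeException("Could not queue message", e);
        }
    }
}
```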

Java EE Application-scoped variables in a clustered environment (Websphere)?

Is there any easy way in a Java EE application (running on Websphere) to share an object in an application-wide scope across the entire cluster? Something maybe similar to Servlet Context parameters, but that is shared across the cluster.
For example, in a cluster of servers "A" and "B", if a value is set on server A (key=value), that value should immediately (or nearly so) be available to requests on server B.
(Note: Would like to avoid distributed caching solutions if possible. This really isn't a caching scenario as the objects being stored are fairly dynamic)
I'm watching this to see if any simple solutions appear, but I don't know of any myself. Last time I asked something like this, the answer was to use a distributed object store.
Our substitute was manual notification over HTTP to a configured list of URLs, one for each Web Container's direct server:port combination. (That is, bypassing any fronting proxy/web server/plugin.)
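For what it's worth, that manual approach can be as simple as the following sketch. The member URLs and the internal servlet path are placeholders, not part of any WebSphere API.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;

// When a value changes on one node, that node POSTs the key/value to every
// configured member directly (bypassing the fronting proxy/web server); each
// member exposes a small internal servlet that updates its local copy.
public class ClusterNotifier {

    private final List<String> memberUrls = Arrays.asList(
            "http://serverA:9080/myapp/internal/cacheUpdate",
            "http://serverB:9080/myapp/internal/cacheUpdate");

    public void publish(String key, String value) {
        for (String member : memberUrls) {
            try {
                HttpURLConnection con =
                        (HttpURLConnection) new URL(member).openConnection();
                con.setRequestMethod("POST");
                con.setDoOutput(true);
                OutputStream out = con.getOutputStream();
                out.write((key + "=" + value).getBytes("UTF-8"));
                out.close();
                con.getResponseCode();  // fire and (mostly) forget
                con.disconnect();
            } catch (Exception e) {
                // A failed node picks the value up on its next sync/restart.
            }
        }
    }
}
```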
Try using the WebSphere workarea

Oracle/Java application, recommended architectures

I am working on a desktop Java application that is supposed to connect to an Oracle database via a proxy which can be a Servlet or an EJB or something else that you can suggest.
My question is: what architecture should be used?
Simple servlets as a proxy between the client and the database, connecting to the database and sending results back to the client.
An enterprise application with EJBs and remote interfaces to access the database
Any other options that I haven't thought of.
Thanks
Depending on how scalable you want the solution to be, you can make a choice.
EJB (3) can be a good choice, but then you need a full-blown app server.
You can connect directly using JDBC, but that exposes the DB URL (exposed in the sense that every client desktop app makes its own connection to the DB; you cannot pool connections, and you lose a lot of flexibility). I would not recommend going down this path unless your app is really a simple one.
You can create a servlet to act as a proxy, but it's tedious and not as scalable; you will have to write a lot of code at both ends.
What I would recommend is creating a REST-based service that performs the desired operations on the DB and consuming it in your desktop app.
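As a rough illustration of that recommendation (not a full design), a JAX-RS resource might look like the following. The path and payload are invented; the point is only that the desktop client speaks HTTP to this layer and never opens a JDBC connection itself, so pooling and access control stay on the server.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Minimal JAX-RS resource acting as the DB proxy for the desktop client.
@Path("/customers")
public class CustomerResource {

    @GET
    @Path("/{id}")
    @Produces("application/json")
    public String findCustomer(@PathParam("id") long id) {
        // A real implementation would delegate to a DAO/service backed by a
        // pooled Oracle DataSource; a literal keeps the sketch self-contained.
        return "{\"id\": " + id + ", \"name\": \"example\"}";
    }
}
```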
Start off simple. I would begin with a simple servlet/JDBC-based solution and get the system working end-to-end. From that point, consider:
do you want to make use of connection pooling (most likely)? Consider C3P0 / Apache DBCP
do you want to embrace a framework like Spring? You can migrate to this gradually, starting with the servlet MVC capabilities, IoC etc., and adopt more complex solutions as you require
do you want to use an ORM? Do you have complex object graphs that you're persisting/querying, and will an ORM simplify your development?
If you do decide to take this approach, make sure your architecture is well-layered, so you can swap out (say) raw JDBC in favour of an ORM, and that your development is test-driven, such that you have sufficient test cases to confirm that your solution works whilst you're performing the above migrations.
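To make that layering concrete, here is a hypothetical sliver of such a structure: callers depend only on the interface, so the raw-JDBC implementation can later be swapped for an ORM-backed one without touching them. Table and column names are invented.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// The layer boundary: servlets/services talk to OrderDao, not to JDBC.
interface OrderDao {
    String findStatus(long orderId) throws SQLException;
}

// Raw JDBC implementation; a Hibernate-backed OrderDao could replace it later.
class JdbcOrderDao implements OrderDao {

    private final DataSource dataSource; // typically pooled (C3P0 / DBCP)

    JdbcOrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String findStatus(long orderId) throws SQLException {
        Connection con = dataSource.getConnection();
        try {
            PreparedStatement ps =
                    con.prepareStatement("select status from orders where id = ?");
            ps.setLong(1, orderId);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getString(1) : null;
        } finally {
            con.close(); // returns the connection to the pool
        }
    }
}
```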
Note that you may never finalise on a solution. As your requirements change, and your application scales, you'll likely want to swap in/out the technology most suitable for your current requirements. Consequently the architecture of your app is more important than the particular toolset that you choose.
Direct usage of JDBC through some ORM (Hibernate, for example)?
If you're developing a stand-alone application, it's better to keep it simple. In order to use an ORM or other frameworks you don't need a J2EE app server (and all the complexity it brings with it).
If you need to exchange huge amounts of data between the DB and the application, just forget about EJBs, servlets and web services, and go with Hibernate (or directly with plain old JDBC).
A REST-based web services solution may be good, as long as you don't have complex data or high volumes (try profiling how long it actually takes to unmarshal SOAP messages back into Java objects).
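As a minimal sketch of the "plain Hibernate from a stand-alone app" option, assuming a hibernate.cfg.xml on the classpath that points at the Oracle database (the query is deliberately trivial so the sketch stays self-contained):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// No app server involved: the desktop application builds its own
// SessionFactory and talks to the database directly.
public class StandaloneHibernate {

    public static void main(String[] args) {
        SessionFactory factory = new Configuration()
                .configure()              // reads hibernate.cfg.xml from the classpath
                .buildSessionFactory();

        Session session = factory.openSession();
        try {
            session.beginTransaction();
            // Round-trip check straight against Oracle; mapped entities would
            // be used the same way via session.get()/save().
            Number one = (Number) session
                    .createSQLQuery("select 1 from dual")
                    .uniqueResult();
            System.out.println("Connected, got " + one);
            session.getTransaction().commit();
        } finally {
            session.close();
            factory.close();
        }
    }
}
```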
I have had a great deal of success using Spring remoting and a servlet-based approach. This is a great setup for development as well, since you can easily test your code without deploying to a web container.
You start by defining a service interface to retrieve/store your data (POJO's).
Create the implementation, which can use ORM, straight JDBC or some pooling library (container provided or 3rd party). This is irrelevant to the remote deployment.
Develop your application which uses this service directly (no deployment to a server).
When you are satisfied with everything, wrap your implementation in a war and deploy with the Spring DispatcherServlet. If you use maven, it can be done via the war plugin
Configure the desktop to use the service via Spring remoting.
I have found the ability to easily develop the code by running the service as part of the application to be a huge advantage over developing/debugging something running on a server. I have used this approach both with and without an EJB, although the EJB was still accessed via the servlet in our particular case. Only service to service calls used the EJB directly (also using Spring remoting).
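For the sake of illustration, steps 1 and 5 might look roughly like this with Spring's HTTP invoker flavour of remoting. The interface, URL and names are examples, and in a real setup the proxy would normally be defined in Spring configuration rather than built by hand.

```java
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

// Step 1: the service interface both sides share (POJO contract).
interface CustomerService {
    String findCustomerName(long id);
}

// Step 5: the desktop client obtains a proxy to the servlet-hosted service.
public class DesktopClient {

    public static void main(String[] args) {
        HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
        proxy.setServiceUrl("http://server:8080/myapp/remoting/CustomerService");
        proxy.setServiceInterface(CustomerService.class);
        proxy.afterPropertiesSet(); // normally done by the Spring container

        CustomerService service = (CustomerService) proxy.getObject();
        System.out.println(service.findCustomerName(42L));
    }
}
```

During development the client can be wired to a local implementation of CustomerService instead of the proxy, which is exactly the "no server needed" advantage described above.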

What's different when implementing Flex authentication/authorization in a clustered environment?

Are there any differences implementing Flex application security in a clustered Java environment (such as Oracle Application Server/OC4J or a JBoss cluster) vs a single application server environment? (And/or does it depend on the specific environment software?)
What considerations are there in a situation where you need to authenticate with LDAP (AD) and store user access information in a database (ex. USER table containing username + permissions/roles info)?
Are sessions shared across nodes with no issues? Any differences between Blaze DS and Granite DS?
Yes, BlazeDS is a pain when it comes to clustering, full stop. LCDS isn't much better, but it at least has more support for clustering (with the downside of being ridiculously expensive).
The problem is the JSESSIONID which the instance uses to identify the Flex client that's making the call. The associated Flex Session object isn't shared in the cluster by default, and IIRC, BlazeDS doesn't have any option for sharing, while LCDS has limited options... Sticky Sessions or port broadcasting.
I can't speak for any of the Open Source options, but clustering support is usually the purview of paid-for solutions...
