WebLogic Transaction Manager - java

Does anyone know what I have to change so that the WebLogic transaction manager works with a cluster? I have tested it and it currently works with a single server. How can I run it on a cluster?
InterposedTransactionManager itm = TxHelper.getClientInterposedTransactionManager(initialCtx, serverName);
I believe it is the second parameter that needs to be changed!

That is the correct call - from the documentation:
If the initial context is obtained from a non-clustered server, then the server name specified should refer to the same server. If the initial context is obtained from a cluster, then the server name specified should refer to a server within the cluster.
Just pick any server in your cluster and it should still work the same way.
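For example, a minimal sketch of the cluster variant (the t3 cluster address, member name and package weblogic.transaction are assumptions; only the TxHelper call itself comes from the question):
// Hedged sketch: obtain the initial context from the cluster address, then pass
// the name of any member of that cluster as the second argument.
// "myhost1", "myhost2" and "ManagedServer1" are placeholders.
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://myhost1:7001,myhost2:7001");   // cluster address
Context initialCtx = new InitialContext(env);

// Any server belonging to the cluster should work as the server name here.
InterposedTransactionManager itm =
        TxHelper.getClientInterposedTransactionManager(initialCtx, "ManagedServer1");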
You will probably want to look into other options if you are clustering, such as this one (from the Oracle API):
setClusterwideRecoveryEnabled(boolean isClusterwideRecoveryEnabled)
Specifies whether recovery operations for a distributed transaction are applied to all the servers of the cluster hosting InterposedTransactionManager rather than just the server hosting the InterposedTransactionManager.

Related

Pivotal gemfire cluster configuration

I am trying to set up a Pivotal GemFire cluster with two nodes/hosts, more precisely two different Unix servers. The idea is to create one locator and one cache server on each host, where the locators take care of load balancing among the cache servers. A replicated region will be created in both cache servers. When a client creates/updates a region on one cache server (using gfsh or the Java API), it should be replicated to the other one.
Using gfsh, I am able to start a locator (locator 1) and a cache server (server 1) in host_A and likewise in host_B. I have created a region (RegionA) in both the servers.
Is that all I have to do? Pivotal tutorials talk about having a locator and multiple cache servers on the same machine. I could not find an appropriate resource that talks about a multi-server/multi-host configuration.
After starting the locators on both hosts, I start the servers on each host like this:
start server --name=server1 --locators=host_A[10334],host_B[10334] --group=group1 --server-port=40406
start server --name=server2 --locators=host_A[10334],host_B[10334] --group=group1 --server-port=40406
When I do "list members" in gfsh, host_B shows (locator 2, server 1 [from host_A], server 2), but host_A shows locator 1 only. Ideally I am expecting 2 locators and 2 servers as members on both machines. Is that not right?
The steps look just fine; are you having any issues, or is something not working while using the started cluster? You can go through Pivotal GemFire in 15 Minutes or Less to get to know how to start locators and servers, and how to interact with them as well. The only extra item I can think of (not mentioned within the previous link, as all members are started locally within the same gfsh session) is that you need to correctly configure the --locators parameter when starting your members; more information about how this works can be found in How Member Discovery Works and Configuring Peer-to-Peer Discovery.
Just for your reference, you can have as many members as you want per host, there's no implicit limit about this other than the actual physical resources on the host itself (memory, disk, ports, network throughput, etc.). Keep in mind, however, that it is always better to have only one member per host to achieve the highest reliability and availability for both your data and locator services.
Hope this helps, cheers.
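If it helps to verify the replication from a Java client, here is a minimal sketch (assuming the GemFire/Geode client API; only host_A, host_B, port 10334 and RegionA come from the question, the key and value are made up):
// Hedged sketch of a client that connects through the two locators and writes
// to the replicated RegionA; the entry should then be readable through either
// cache server. Package is org.apache.geode.* in recent releases
// (com.gemstone.gemfire.* in older Pivotal GemFire versions).
ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("host_A", 10334)
        .addPoolLocator("host_B", 10334)
        .create();

Region<String, String> regionA = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("RegionA");

regionA.put("key-1", "value-1");            // stored on the servers, replicated
System.out.println(regionA.get("key-1"));   // readable from either server

cache.close();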

Hazelcast: Execute EntryEvictedListener only once per cluster in client mode

We are connecting to Hazelcast cluster using Java clients from multiple nodes.
HazelcastClient.newHazelcastClient(cfg)
We need our EntryEvictedListener to be executed only once per cluster.
By default it is executed on all connected clients.
We found how to reach this goal with embedded Hazelcast (Time Based Eviction in Hazelcast), but it looks like
map.addLocalEntryListener(...)
is not allowed for clients.
So is there any way to execute eviction listener only once per cluster using client?
Unfortunately not. Your listener would need to run on a cluster node, since a local entry listener is tied directly to the underlying partitioning scheme. What do you want to do on the evict event? Maybe you can achieve it differently.
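For reference, a minimal sketch of the member-side approach (assuming a recent Hazelcast 3.x embedded member running on one of your cluster nodes; the map name is made up):
// Hedged sketch: runs inside a cluster member, not a client. A local entry
// listener only fires for entries owned by that member, so if every member
// registers it, each eviction is handled exactly once cluster-wide.
HazelcastInstance member = Hazelcast.newHazelcastInstance();   // embedded member
IMap<String, String> map = member.getMap("sessions");          // "sessions" is illustrative

map.addLocalEntryListener(new EntryEvictedListener<String, String>() {
    @Override
    public void entryEvicted(EntryEvent<String, String> event) {
        // react here; this runs only on the member that owned the entry
        System.out.println("Evicted: " + event.getKey());
    }
});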

How to get database connection through websphere ORB call?

I've got a WebSphere 6.1 cluster environment composed of two nodes with two application servers each. Let's call them NodeA, which includes Server1 (2809) and Server2 (2810), and NodeB, which includes Server3 (2811) and Server4 (2812). Meanwhile, I created a cluster-scoped data source with the JNDI name local_db.
Right now I want to get a database connection in a Java client through a WAS ORB call against the above environment. The relevant part of the Java code looks like this:
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
env.put(Context.PROVIDER_URL, "iiop://localhost:2809");
Context initialContext = new InitialContext(env);   // was missing in the original snippet
javax.sql.DataSource ds = (DataSource) initialContext.lookup("local_db");
Connection cn = ds.getConnection();
If the above Java client code is run, will the database connection request be load-balanced among the four connection pools of the application servers?
Moreover, if my Java client successfully gets a database connection and then runs a big SQL query that returns a large result set, which WAS application server bears the memory cost? Only Server1, because port 2809 is used above, or the target server that returned the database connection?
BTW, if I put two server members in that PROVIDER_URL, such as iiop://localhost:2809,localhost:2810, does that mean load balancing or failover?
Please help explain, and correct me if I'm understanding this wrongly!
Thanks
Let me start with the easy ones and proceed to the rest.
Having two provider URLs implies failover. If you can't connect to the first naming server, it connects to the second naming server and continues to the end of that list. Note that the failover applies to the connection to the naming server, not to the resource itself (see the sketch at the end of this answer).
The lookup is done on the server that you connect to. The local_db name represents a data source (and its connection pool) on that server. You will only work with Server1 (as you are connecting to that name server) and will get connections from the data source hosted on that server.
You will never get any connections from the other servers. In other words, there is no load balancing (one request using a connection from Server1, another using a connection from Server2, etc.). I believe this is what you mean by load balancing in your question above.
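To make the failover point concrete, a minimal sketch (hosts and ports are the ones from the question; the corbaloc form is the one IBM documents for listing several bootstrap addresses):
// Hedged sketch: several bootstrap addresses give failover for the connection
// to the naming server only, not for the database resource itself.
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
env.put(Context.PROVIDER_URL, "corbaloc::localhost:2809,:localhost:2810");
Context ctx = new InitialContext(env);
DataSource ds = (DataSource) ctx.lookup("local_db");
Connection cn = ds.getConnection();   // pooled on the server that served the lookup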
HTH
A DataSource is neither remotable nor serializable. Therefore, if you look up local_db from the server, it will return a javax.naming.Reference that the client uses to create a new DataSource instance with the same configuration as the one on the server. That also means that once the lookup has been performed, no JDBC method will ever send requests to the WebSphere server.

Websphere application server 5.1 DataSource no longer valid when DB is rebooted

First of all, we are running a Java web application on WAS 5.1. Behind that, we use an Oracle database. The problem we're facing is really simple, but after a couple of hours of Google searching, I decided to ask you.
We have an application that is running on WAS. When we start the server, WAS sets up its DataSource so that it points to the database. Everything works fine, except when the DBAs have to reboot the database server. When they do, the data source is no longer valid and we have to manually restart all the servers, and we are currently trying to correct that, if possible. We need to find a way to do it because we have 3 pre-production environments for our application, and there are two servers associated with each: one for the application and the other for a report-generator web service. So, when the DBAs want to reboot the database server (and they usually don't tell us!), we have to reboot six servers. I was wondering if, in Java, there was a way to reset the data source so that we don't need to restart the servers.
For your information, WebSphere is v5.1 and Oracle is 9i, with Java 1.4.2.17.
We also use RAD:
Version: 6.0.1
Build id: 20050725_1800
You should configure your application server to always test the connection before leasing it out to a client. I'm not that familiar with WebSphere, but in WebLogic you can set a JDBC SQL statement such as select 1 from dual, and the container removes stale connections from the connection pool.
Here is a link on how to do it in WebSphere:
http://www-01.ibm.com/support/docview.wss?uid=swg21439688
Based on what I read from your note, you should be receiving StaleConnectionException, as WAS has stale handles in its pool because the DB has been restarted.
The data source can be configured to purge the entire pool once a stale connection is detected; the default policy is to purge only the individual connection.
Adopting this would save you from having to restart your WAS servers. On the application side you can also catch the stale connection exception and retry, as in the sketch below.
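A minimal retry sketch (assuming the WebSphere-specific com.ibm.websphere.ce.cm.StaleConnectionException; the retry limit of 2 is arbitrary, and plain SQLExceptions still propagate to the caller):
// Hedged sketch: retry getConnection() when the pooled handle turns out to be
// stale after a database restart. With "purge entire pool" configured, the
// second attempt should hand back a freshly created connection.
Connection conn = null;
for (int attempt = 0; attempt < 2 && conn == null; attempt++) {
    try {
        conn = ds.getConnection();   // ds looked up from JNDI beforehand
    } catch (com.ibm.websphere.ce.cm.StaleConnectionException sce) {
        conn = null;                 // stale handle detected; try once more
    }
}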
There are a number of resources in this space:
http://www-01.ibm.com/support/docview.wss?uid=swg21063645
HTH
Manglu

Java Hazelcast problems with multiple clusters

I am running a small system that relies on Hazelcast for clustering, distributed computing and messaging in multicast mode (the standard config as available in the download). I have a number of server modules that run as "Core" Hazelcast instances and a Java Swing application that is implemented as a Hazelcast "Native Client". This all works well, and I would now like to commission the system in production. I would hence need to run two separate clusters (dev + prod), and that is where I run into problems.
According to the documentation, all you need to do is use separate group names + passwords for the two clusters, and I get the impression that the two clusters should sort themselves out automatically!? This appears to work for the server modules, but when I try to connect a "Client" instance to the prod environment, I can see from the logs of one of the server modules in prod that the client appears to connect successfully:
INFO: [prod] received auth from Connection [/192.168.0.2:55863 -> null] live=true,
client=true, type=JAVA_CLIENT, this group name:prod, auth group name:prod,
successfully authenticated
But the client never shows up as a member of prod. Instead, I find that the client has become a member of the dev environment, even though the authentication took place against prod!
Involuntary mixing of the two clusters is obviously a giant problem for me and a showstopper. Does anyone know if there is anything that I am doing wrong, or if there are any configuration changes that I can make to resolve the problem?
When a client connects to the cluster, it never becomes a member of the cluster.
So I suspect that your client did connect to prod, but somewhere in your code you have something like Hazelcast.getMap(), which results in starting a member in that JVM. Since the default configuration that this member uses is the same as dev's, this new member joins your dev cluster.
So in fact you have one client that is connected to prod, and another member that is connected to the dev cluster.
Try to put something through the client and see in which cluster those entries appear.
Am I making sense?
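To double-check which cluster receives the data, a minimal client-only sketch (Hazelcast 3.x-style client API assumed; the member address, password and map name are placeholders, only the group name prod comes from your logs):
// Hedged sketch: a pure client connection to prod. Nothing here starts an
// embedded member, so this JVM cannot accidentally join the dev cluster.
ClientConfig clientConfig = new ClientConfig();
clientConfig.getGroupConfig().setName("prod").setPassword("prod-pass");   // password is a placeholder
clientConfig.getNetworkConfig().addAddress("192.168.0.1:5701");           // any prod member, placeholder

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
IMap<String, String> map = client.getMap("test");   // entries should show up in prod only

// By contrast, a static call such as Hazelcast.getMap("test") (older API) or
// Hazelcast.newHazelcastInstance() starts a *member* with the default (dev)
// configuration inside the same JVM, which is the mix-up described above.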
