Connection Pooling and Oracle Session - Java

Before I start with my question, I would like to clarify that I am a DB developer and have limited understanding of things on Java/J2EE side.
Ours is a web application (n-tier, with an app server/web server). We are using connection pooling to manage connections to the database. My understanding of connection pooling is limited: the app server manages connections for applications, lets an application borrow a connection from the pool, and takes the connection back into the pool once the application is done with it.
Let's say that I follow these steps -
1. I log in to the application.
2. The application requests a connection from the connection pool to authenticate me.
3. Once authentication is done, the app server returns the connection to the pool.
4. I browse to a page where I have to do some CRUD operation; let's say that I am updating some data on the page.
5. The app server again requests a connection from the pool.
6. The application processes the data using that connection.
Here is my problem statement -
Let's say that I have to capture audit information using triggers (on the tables undergoing updates). One of the attributes I need to capture is the username of the logged-in user.
I set a global package variable when I log in (steps 1-3), which stores the logged-in user's name. My trigger reads this global package variable for the username. Since the connection is not going to remain the same (the connection pool manages connections), will my global package variable still be available when the trigger fires?
What will happen to the variable (it obviously depends on the answer to the first question) when multiple users are logged in and accessing the application?
I tried to look around but have not been able to get a clear answer to my doubts.
Pardon me, if my question is not clear. Let me know and I can edit to provide more information.

You can use the CLIENT_IDENTIFIER attribute to preserve the identity of the actual user who logged in to the application.
Here is more information from the Oracle documentation:
Support for Application User Models by Using Client Identifiers
Many applications use session pooling to set up a number of sessions to be reused by multiple application users. Users authenticate themselves to a middle-tier application, which uses a single identity to log in to the database and maintains all the user connections. In this model, application users are users who are authenticated to the middle tier of an application, but who are not known to the database. Oracle Database supports use of a CLIENT_IDENTIFIER attribute that acts like an application user proxy for these types of applications.
In this model, the middle tier passes a client identifier to the database upon the session establishment. The client identifier could actually be anything that represents a client connecting to the middle tier, for example, a cookie or an IP address. The client identifier, representing the application user, is available in user session information and can also be accessed with an application context (by using the USERENV naming context). In this way, applications can set up and reuse sessions, while still being able to keep track of the application user in the session. Applications can reset the client identifier and thus reuse the session for a different user, enabling high performance.
You can set the CLIENT_IDENTIFIER in Java with a snippet like the following (using a bind variable instead of hardcoding the username):
public Connection prepare(Connection conn, String userName) throws SQLException {
    // Tag this session so triggers can read the user via
    // SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER')
    try (CallableStatement cs =
             conn.prepareCall("{ call DBMS_SESSION.SET_IDENTIFIER(?) }")) {
        cs.setString(1, userName);
        cs.execute();
    }
    return conn;
}
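To sketch how this could be wired around the pool's borrow/return cycle (the class and method names below are illustrative, not from any library): set the identifier when a connection is borrowed, and clear it with DBMS_SESSION.CLEAR_IDENTIFIER before handing the connection back, so the next borrower does not inherit a stale username.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Illustrative helper: tags a pooled connection with the application user on
// borrow and clears the tag on return, so triggers reading
// SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER') never see a stale user.
public class AuditedConnections {
    static final String SET_SQL   = "{ call DBMS_SESSION.SET_IDENTIFIER(?) }";
    static final String CLEAR_SQL = "{ call DBMS_SESSION.CLEAR_IDENTIFIER() }";

    private final DataSource ds;

    public AuditedConnections(DataSource ds) { this.ds = ds; }

    // Borrow a connection and mark it with the logged-in user's name.
    public Connection borrowFor(String userName) throws SQLException {
        Connection conn = ds.getConnection();
        try (CallableStatement cs = conn.prepareCall(SET_SQL)) {
            cs.setString(1, userName);
            cs.execute();
        }
        return conn;
    }

    // Clear the identifier, then close() to hand the connection back to the pool.
    public void release(Connection conn) throws SQLException {
        try (CallableStatement cs = conn.prepareCall(CLEAR_SQL)) {
            cs.execute();
        } finally {
            conn.close();
        }
    }
}
```

With this in place, a trigger can read SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER') instead of a package variable, which survives the fact that pooled sessions are shared between users.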

Related

Reuse JCo 3 connection pool for different SSO tickets

We've created a Java application that uses JCo 3 to access remote SAP system data.
We are using SSO tickets to connect to that system.
The question is whether there is some way to reuse the same connection pool for all the users' SSO tickets instead of creating a dedicated pool for each token.
Currently we have a DestinationDataProvider.getDestinationProperties implementation that takes an SSO ticket as a parameter and returns the corresponding Properties instance, ending up, I believe, with one connection pool per user.
I'm not sure how efficient this configuration is, and I would like to know whether those connections could somehow be reused.
The technical design of RFC connections does not allow reusing an RFC connection with a different user. An RFC connection is bound to a user identity, which cannot be switched. Therefore a connection pool with multiple physical connections that would use different user IDs on demand cannot be implemented.
This is not a limitation of JCo but of RFC in general.
However, this is not so tragic, as the most expensive part of establishing an RFC connection is not opening a new physical connection via TCP/IP but the RFC user authorization process, with its RFC context object creations and internal initializations. So having a connection pool per destination and user is what really helps to achieve better performance. You do not need to worry about optimizing the internal JCo connection pooling; it already works fairly well out of the box, even with a separate pool for each user ID.

Application continuity with Universal Connection Pool java JDBC Oracle 12c

I am trying to achieve application continuity with Oracle 12c database & Oracle UCP(Universal Connection Pool). As per the official documentation, I have implemented the following in my application. I am using ojdbc8.jar along with the equivalent ons.jar and the ucp.jar in my application.
PoolDataSource pds = oracle.ucp.jdbc.PoolDataSourceFactory.getPoolDataSource();
Properties as per the Oracle documentation:
pds.setConnectionFactoryClassName("oracle.jdbc.replay.OracleDataSourceImpl");
pds.setUser("username");
pds.setPassword("password");
pds.setInitialPoolSize(10);
pds.setMinPoolSize(10);
pds.setMaxPoolSize(20);
pds.setFastConnectionFailoverEnabled(true);
pds.setONSConfiguration("nodes=IP_1:ONS_PORT_NUMBER,IP_2:ONS_PORT_NUMBER");
pds.setValidateConnectionOnBorrow(true);
pds.setURL("jdbc:oracle:thin:@my_scan_name.my_domain_name.com:PORT_NUMBER/my_service_name");
// I have also tried using a TNS-like URL as well.
However, I am not able to achieve application continuity. I have some in-flight transactions that I expect to be replayed when I bring down the RAC node on which my database service is running. What I observe is that my service migrates to the next available RAC node in the cluster; however, my in-flight transactions fail. What I expect to happen here is that the driver automatically replays the failed in-flight transactions, but I don't see this happening. For the queries I fire at the database, I sometimes see them being triggered again on the database side, but we get a Connection Closed exception on the client side.
According to some documentation, application continuity allows the application to mask outages from the user. My doubt here is whether my understanding is correct that application continuity will replay the SQL statements that were in flight when the outage occurred, or whether the true meaning of application continuity is something else.
I have referred to some blogs, such as this one:
https://martincarstenbach.wordpress.com/2013/12/13/playing-with-application-continuity-in-rac-12c/
The example mentioned there does not seem to be intended for replaying in-flight SQL statements.
Is application continuity capable of replaying the in-flight SQL statements during an outage, or do FCF and application continuity only restore the state of the connection object and make it usable again after the outage has occurred? If the former is true, please point out what I am missing in the application-level settings in my code that is keeping me from achieving replay.
Yes your understanding is correct. With the replay driver, Application Continuity can replay in-flight work so that an outage is invisible to the application and the application can continue, hence the name of the feature. The only thing that's visible from the application is a slight delay on the JDBC call that hit the outage. What's also visible is an increase in memory usage on the JDBC side because the driver maintains a queue of calls. What happens under the covers is that, upon outage, your physical JDBC connection will be replaced by a brand new one and the replay driver will replay its queue of calls.
Now there could be cases where replay fails. For example replay will fail if the data has changed. Replay will also be disabled if you have multiple transactions within a "request". A "request" starts when a connection is borrowed from the pool and ends when it's returned back to the pool. Typically a "request" matches a servlet execution. If within this request you have more than one "commit" then replay will be disabled at runtime and the replay driver will stop queuing. Also note that auto-commit must be disabled.
[I'm part of the Oracle team that designed and implemented this feature]
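As an illustration of the "one commit per request" rule described above, here is a minimal sketch. The table, columns, and helper class are made up for illustration; in practice the Connection would come from the replay-enabled UCP data source configured in the question.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// One logical "request" against a replay-capable connection: auto-commit off,
// do the work, then exactly one commit so replay stays enabled.
public class ReplayFriendlyRequest {
    public static int debit(Connection conn, long accountId, long cents)
            throws SQLException {
        conn.setAutoCommit(false); // replay requires auto-commit disabled
        int updated;
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE accounts SET balance = balance - ? WHERE id = ?")) {
            ps.setLong(1, cents);
            ps.setLong(2, accountId);
            updated = ps.executeUpdate();
        }
        conn.commit(); // single commit ends the replay-protected unit of work
        return updated;
    }
}
```

If the same borrowed connection issued a second commit before being returned to the pool, replay would be disabled at runtime for the rest of that request, per the answer above.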
I think the JDBC connection string could be your problem:
pds.setURL("jdbc:oracle:thin:@my_scan_name.my_domain_name.com:PORT_NUMBER/my_service_name");
You are using a so-called EZConnect string, but this is not supported with AC. Use a full connect descriptor instead:
Alias (or URL) = (DESCRIPTION=
(CONNECT_TIMEOUT=120)(RETRY_COUNT=20)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)
(ADDRESS_LIST=(LOAD_BALANCE=on)
(ADDRESS=(PROTOCOL=TCP)(HOST=primary-scan)(PORT=1521)))
(ADDRESS_LIST=(LOAD_BALANCE=on)
(ADDRESS=(PROTOCOL=TCP)(HOST=secondary-scan)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=gold-cloud)))
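If you build the URL in code, a small helper can assemble a descriptor of this shape; the host names, port, and service name below are placeholders. You would then pass "jdbc:oracle:thin:@" + descriptor to pds.setURL.

```java
// Assembles a connect descriptor of the shape shown above. The timeout and
// retry values mirror the example; hosts/ports/service are placeholders.
public class ConnectDescriptor {
    public static String build(String primaryScan, String secondaryScan,
                               int port, String serviceName) {
        return "(DESCRIPTION="
             + "(CONNECT_TIMEOUT=120)(RETRY_COUNT=20)(RETRY_DELAY=3)"
             + "(TRANSPORT_CONNECT_TIMEOUT=3)"
             + "(ADDRESS_LIST=(LOAD_BALANCE=on)"
             + "(ADDRESS=(PROTOCOL=TCP)(HOST=" + primaryScan + ")(PORT=" + port + ")))"
             + "(ADDRESS_LIST=(LOAD_BALANCE=on)"
             + "(ADDRESS=(PROTOCOL=TCP)(HOST=" + secondaryScan + ")(PORT=" + port + ")))"
             + "(CONNECT_DATA=(SERVICE_NAME=" + serviceName + ")))";
    }
}
```

Keeping the descriptor in one place also makes it easy to verify that every parenthesis is balanced, which is a common source of connect errors.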

Opening a new database connection for every client that connects to the server application?

I am in the process of building a client-server application, and I would really appreciate some advice on how to design the server-database connection part.
Let's say the basic idea is the following:
Client authenticates himself on the server.
Client sends a request to server.
Server stores client's request to the local database.
In terms of Java Objects we have
Client Object
Server Object
Database Object
So when a client connects to the server, a session is created between them through which all data is exchanged. What bothers me now is whether I should create a database object/connection for each client session, or create one database object that will handle all requests.
Thus the two concepts are
Create one database object that handles all client requests
For each client-server session create a database object that is used exclusively for the client.
Going with option 1, I guess that all methods would have to become synchronized in order to avoid one client thread overwriting the variables of another. However, making them synchronized will be time-consuming when there are many concurrent requests, as each request will be queued until the running one completes.
Going with option 2 seems more appropriate, but creating a database object for every client-server session is memory-consuming; moreover, creating a database connection for each client could again lead to problems when the number of concurrently connected users is large.
These are just my thoughts, so please add any comments that may help with the decision.
Thank you
Option 3: use a connection pool. Every time you want to connect to the database, you get a connection from the pool. When you're done with it, you close the connection to give it back to the pool.
That way, you can
have several clients accessing the database concurrently (your option 1 doesn't allow that)
have a reasonable number of connections open, avoiding bringing the database to its knees or running out of available connections (your option 2 doesn't allow that)
avoid opening new database connections all the time (your option 2 doesn't allow that); opening a connection is a costly operation
Basically all server apps use this strategy. All Java EE servers come with a connection pool. You can also use it in Java SE applications, by using a pool as a library (HikariCP, Tomcat connection pool, etc.)
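The borrow/return mechanics a pool implements can be illustrated with a toy (non-production) pool built on a blocking queue. Real pools such as HikariCP or the Tomcat pool add connection validation, timeouts, eviction, and leak detection on top of this core idea.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Toy pool showing only the borrow/return mechanics: a fixed set of objects
// is created up front; borrow() blocks until one is free; giveBack() returns it.
public class ToyPool<T> {
    private final BlockingQueue<T> idle;

    public ToyPool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // eagerly create all pooled objects
        }
    }

    public T borrow() throws InterruptedException {
        return idle.take(); // blocks when every object is checked out
    }

    public void giveBack(T obj) {
        idle.offer(obj); // make the object available to the next borrower
    }

    public int available() {
        return idle.size();
    }
}
```

This is why the pool caps resource usage: no matter how many clients arrive, at most `size` expensive objects (connections) ever exist, and excess demand simply waits.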
I would suggest a third option: database connection pooling. This way you create a specified number of connections and hand out the first free connection as soon as one becomes available. This gives you the best of both worlds: there will almost always be free connections available quickly, and you keep the number of connections to the database at a reasonable level. There are plenty of out-of-the-box Java connection pooling solutions, so have a look online.
Just use connection pooling and go with option 2. There are quite a few - C3P0, BoneCP, DBCP. I prefer BoneCP.
Neither option is a good solution.
Problem with Option 1:
You already stated the problems with synchronization when there are multiple threads. Apart from that, there are many other problems, such as transaction management (when are you going to commit your connection?) and security (all clients can see uncommitted values), just to state a few.
Problem with Option 2:
Two of the biggest problems with this are:
It takes a lot of time to create a new connection each and every time, so performance becomes an issue.
Database connections are expensive resources and should be used in limited numbers. If you start creating DB connections for every client, you will soon run out of them, although most of the connections would not be actively used. You will also see your application's performance drop.
The Connection Pooling Option
That is why almost all client-server applications go with the connection pooling solution. You have a set of connections in the pool which are obtained and released appropriately. Almost all Java frameworks have sophisticated connection pooling solutions.
If you are not using any JDBC framework (most use Spring JDBC/Hibernate), read the following article:
http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/pool.html
If you are using any of the popular Java Frameworks like Spring, I would suggest you use Connection Pooling provided by the framework.

How to get a database connection through a WebSphere ORB call?

I've got a WebSphere 6.1 cluster environment composed of two nodes with two app servers each. Let's call them NodeA, including Server1 (2809) and Server2 (2810), and NodeB, including Server3 (2811) and Server4 (2812). I also created a cluster-scoped datasource with the JNDI name local_db.
Now I want to get a database connection in a Java client through a WAS ORB call to the above environment. The relevant part of the Java code looks like this:
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
env.put(Context.PROVIDER_URL, "iiop://localhost:2809");
InitialContext initialContext = new InitialContext(env); // this line was missing
javax.sql.DataSource ds = (DataSource) initialContext.lookup("local_db");
Connection cn = ds.getConnection();
If the above client code runs, will the request to obtain a database connection be load-balanced among the four connection pools of all the application servers?
Moreover, if my Java client successfully gets a database connection and then runs a big SQL query with a large result set, which WAS application server bears the memory cost: only Server1, because port 2809 is used above, or the target server that returned the database connection?
By the way, if I put two server members in that PROVIDER_URL, such as iiop://localhost:2809,localhost:2810, does that mean load balancing or failover?
Please help explain, and correct me if I'm understanding this wrongly!
Thanks
Let me start with the easy ones and proceed to the rest.
Having two provider URLs implies failover. If the client can't connect to the first naming server, it connects to the second and continues to the end of that list. Note that the failover applies to the connection to the naming server, not to the resource itself.
The lookup is done on the server that you connect to. The local_db name represents a datasource (and its connection pool) on that server. You will only work with Server1 (as you are connecting to that naming server) and will get connections from the datasource hosted on that server.
You will never get a connection from the other servers. In other words, there is no load balancing (one request using a connection from Server1, another using a connection from Server2, and so on), which I believe is what you mean by load balancing in your question above.
HTH
A DataSource is neither remotable nor serializable. Therefore, if you look up local_db from the server, it will return a javax.naming.Reference that the client uses to create a new DataSource instance with the same configuration as the one on the server. That also means that once the lookup has been performed, no JDBC method will ever send requests to the WebSphere server.

Simultaneous access to Database in Web application

How can I know the average or exact number of users accessing the database simultaneously in my Java EE web application? I would like to see whether the connection pool settings I configured in the GlassFish application server are suitable for my web application. I need to set the maximum number of connections in the connection pool settings correctly. Recently, my application ran out of connections and threw exceptions when client requests for database connections timed out.
There are multiple ways.
The first and easiest would be to get help from your DBAs; they can tell you exactly how many connections are active from your web server (or from the connection pool's user ID) at a given time.
If you want some excitement, you can use the JMX management extensions provided by GlassFish. Listing 6 on this page gives an example of how to write a JMX-based snippet to monitor a connection pool.
Finally, you must make sure that all connections are closed explicitly by a connection.close() type of call in your application. In some cases, you need to close the ResultSet as well.
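On the "close everything" point: with Java 7+, try-with-resources guarantees the close calls even when an exception is thrown, closing resources in reverse order of acquisition, which is exactly what you want for Connection, Statement, and ResultSet. A sketch with stand-in resources (the Tracked class exists only for demonstration):

```java
import java.util.ArrayList;
import java.util.List;

// Demonstrates that try-with-resources closes resources in reverse order of
// acquisition: resultset first, then statement, then connection.
public class CloseOrderDemo {
    static class Tracked implements AutoCloseable {
        final String name;
        final List<String> log;
        Tracked(String name, List<String> log) { this.name = name; this.log = log; }
        @Override public void close() { log.add("closed " + name); }
    }

    public static List<String> run() {
        List<String> log = new ArrayList<>();
        try (Tracked conn = new Tracked("connection", log);
             Tracked stmt = new Tracked("statement", log);
             Tracked rs   = new Tracked("resultset", log)) {
            log.add("working");
        } // closed automatically in reverse declaration order
        return log;
    }
}
```

With real JDBC objects the pattern is identical: declare the Connection, Statement, and ResultSet in the try header and there is no way to leak them back out of the pool unclosed.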
Next is throttling your HTTP thread pool to avoid too much concurrent access if your DB connections are taking long to close.
