Reuse JCo 3 connection pool for different SSO tickets - Java

We've created a Java application that uses JCo3 to access data in a remote SAP system.
We are using SSO tickets to connect to that system.
The question is whether there is some way to reuse the same connection pool for all user SSO tickets instead of creating a dedicated pool for each token.
Currently we have a DestinationDataProvider.getDestinationProperties implementation that takes the SSO ticket as a parameter and returns the corresponding Properties instance, ending up, I believe, with a connection pool per user.
I'm not sure how efficient this configuration is and would like to know whether those connections could somehow be reused.

The technical design of RFC connections does not allow reusing an RFC connection with a different user. An RFC connection is bound to a user identity, which cannot be switched. Therefore a connection pool with multiple physical connections that take on different user IDs on demand cannot be implemented.
This is not a limitation of JCo but of RFC in general.
However, this is not as bad as it sounds, because the most expensive part of establishing an RFC connection is not opening a new physical connection via TCP/IP but the RFC user authorization process, with its RFC context object creations and internal initializations. So having connection pools per destination and user is what really helps to achieve better performance. You do not need to worry about optimizing the internal JCo connection pooling; it already works fairly well out of the box, even with a separate pool for each user ID.
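For illustration only, here is a minimal sketch (not the poster's actual code) of a DestinationDataProvider that registers one destination, and therefore one JCo-managed pool, per SSO ticket and sets the pool limits explicitly. The host, system number, client, the destination naming scheme and the pool sizes are assumed placeholders:

```java
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.ext.DestinationDataEventListener;
import com.sap.conn.jco.ext.DestinationDataProvider;
import com.sap.conn.jco.ext.Environment;

public class SsoTicketDestinationProvider implements DestinationDataProvider {

    private final Map<String, Properties> destinations = new ConcurrentHashMap<>();

    /** Registers destination properties for one SSO ticket; JCo pools connections per destination. */
    public void addDestinationForTicket(String destinationName, String ssoTicket) {
        Properties props = new Properties();
        props.setProperty(DestinationDataProvider.JCO_ASHOST, "sap.example.com"); // placeholder host
        props.setProperty(DestinationDataProvider.JCO_SYSNR, "00");               // placeholder system number
        props.setProperty(DestinationDataProvider.JCO_CLIENT, "100");             // placeholder client
        props.setProperty(DestinationDataProvider.JCO_MYSAP_SSO2, ssoTicket);     // the user's SSO ticket
        props.setProperty(DestinationDataProvider.JCO_POOL_CAPACITY, "3");        // idle connections kept per user
        props.setProperty(DestinationDataProvider.JCO_PEAK_LIMIT, "10");          // max concurrent connections per user
        destinations.put(destinationName, props);
    }

    @Override
    public Properties getDestinationProperties(String destinationName) {
        return destinations.get(destinationName);
    }

    @Override
    public boolean supportsEvents() {
        return false;
    }

    @Override
    public void setDestinationDataEventListener(DestinationDataEventListener listener) {
        // events not used in this sketch
    }

    public static void main(String[] args) throws JCoException {
        SsoTicketDestinationProvider provider = new SsoTicketDestinationProvider();
        Environment.registerDestinationDataProvider(provider);

        provider.addDestinationForTicket("SSO_someUser", "<sso-ticket-string>");   // placeholder ticket
        JCoDestination destination = JCoDestinationManager.getDestination("SSO_someUser");
        destination.ping(); // executes over a pooled connection bound to that user's identity
    }
}
```

Each user ends up with a small pool of its own, which matches the "pool per destination and user" behaviour described above.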

Related

Direct DB connection per HTTP request vs connection pooling - what is the difference

Let's say I am storing data for Person(id, country_id, name), and the user sends the id and country_id and we send back the name.
Now I have one DB and 2 web servers, and each web server keeps a connection pool (e.g. c3p0) of 20 connections.
That means the DB is maintaining 40 connections and each web server is maintaining 20 connections.
Analyzing the above system, we can see that we used a connection pool because people say "creating a DB connection is expensive".
This all makes sense.
Now let's say I shard the table data on country_id, so there may be 200 DBs, and assuming our app is popular we now need 50 web servers.
Now the above connection pooling strategy fails, because if each web server keeps 20 pooled connections for each DB,
that means each web server will have 20 * 200 DBs = 4000 connections
and each DB will have 50 web servers * 20 = 1000 connections.
This doesn't sound good, which raises the question: why use connection pooling, and what is the overhead of creating one connection per web request?
So I ran a test and saw that DriverManager.getConnection() takes an average of 20 ms on localhost.
An extra 20 ms per request is not a game killer.
Question 1: Is there any other downside to using one connection per web request?
Question 2: People all over the internet say "a DB connection is expensive". What are the different expenses?
PS: I also see Pinterest doing the same: https://medium.com/@Pinterest_Engineering/sharding-pinterest-how-we-scaled-our-mysql-fleet-3f341e96ca6f
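(Illustrative only: a rough sketch of the kind of timing test described above. The JDBC URL, credentials and driver are assumptions, and the numbers will differ per environment.)

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionCostBenchmark {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/test"; // placeholder URL; driver must be on the classpath
        int iterations = 100;
        long totalNanos = 0;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) { // placeholder credentials
                totalNanos += System.nanoTime() - start; // measure only the time to open the connection
            }
        }
        System.out.printf("average open time: %.1f ms%n", totalNanos / iterations / 1_000_000.0);
    }
}
```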
Other than the connection creation and close cycle being time consuming (i.e. costly), pooling is also done to control the number of simultaneously open connections to your database, since there is a limit on the number of simultaneous connections a DB server can handle. When you use one connection per request, you lose that control and your application is always at risk of crashing at peak load.
Secondly, you would unnecessarily tie your web server capacity to your database capacity. The goal is also to treat DB connection management not as a developer concern but as an infrastructure concern. Would you want each developer's code to decide when a production application opens a database connection?
In traditional monolithic application servers like WebLogic, JBoss, WebSphere etc., it is the sysadmin who creates a connection pool sized to the DB server's capacity and passes the JNDI name on to developers. The developer's job is only to get a connection using that JNDI name (see the sketch after this answer).
Next, if the database is shared among various independent applications, pooling lets you know what you are giving out to which application. Some apps might be more data intensive and some might not be.
The traditional problem of resource leaks, i.e. when developers forget to cleanly close their connections, is also taken care of by pooling.
All in all, the idea behind pooling is to let developers be concerned only with using a connection to do their job, not with opening and closing it. If a connection is not used for X minutes, it is returned to the pool per the configuration.
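Here is a hedged sketch of that developer-side view: the administrator configures the pool, and the application code only looks up the DataSource by its JNDI name and borrows connections from it. The JNDI name "java:comp/env/jdbc/AppDB" and the query are assumptions for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class PersonDao {
    public String findName(long id) throws NamingException, SQLException {
        // Look up the container-managed, pooled DataSource by the JNDI name the admin provided.
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/AppDB");
        // Borrow a connection from the pool; closing it returns it to the pool.
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT name FROM person WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}
```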
If you have a busy web site and every request to the database opens and closes a connection, you are dead in the water.
The 20 ms you measured were for a localhost connection. I don't think all your 50 web servers will be on localhost...
Apart from the time it takes to establish and close a database connection, it also uses resources on the database server. This is mostly the CPU, but there could also be contention on kernel data structures.
Also, if you allow several thousand connections, there is nothing that keeps them from all getting busy at the same time, in which case your database server will be overloaded and unresponsive unless it has several thousand cores (and even then you'd be limited by lock contention).
Your solution is an external connection pool like pgBouncer.

Datasource Microsoft JDBC Driver for SQL Server (AlwaysOn Availability Groups)

I have a question about the scenario of connecting from a Java application, using the Microsoft JDBC Driver 4.0, to a SQL Server 2014 with AlwaysOn Availability Groups set up for high availability.
With this setup, we will be connecting to an availability group listener (specified in the DB connection string instead of any particular instance), so that DB failover etc. is handled gracefully by the listener, which tries to connect to the next available instance behind the scenes if the current primary in the AG cluster goes down.
The questions I have are:
In the datasource configured on the J2EE application server side (we use WebSphere), what happens to the connections already pooled by that datasource?
When a database goes down, the AG listener will reconnect to the next available DB on the database side. But will it also, through the JDBC driver, send an event or something similar to the datasource on the app server so that the already-pooled connections are discarded and new ones created, and transactions on the application side won't fail (though they might for a while, until new connections are created and failover succeeds)? Or does the Java application only find out after requesting a connection from the datasource?
WebSphere Application Server is able to cope with bad connections and removes them from the pool. Exactly when this happens depends on some configurable options and on how fully the Microsoft JDBC driver takes advantage of the javax.sql.ConnectionEventListener API to send notifications to the application server.

In the ideal case, where a JDBC driver sends the connectionErrorOccurred event immediately for all connections, WebSphere Application Server responds by removing all of these connections from the pool and by marking any connection that is currently in use as bad, so that it does not get returned to the pool once the application closes the handle. Lacking this, WebSphere Application Server will discover the first bad connection upon next use by the application. It is discovered either by a connectionErrorOccurred event sent by the JDBC driver at that time or, lacking that, by inspecting the SQLState/error code of an exception for known indicators of bad connections. WebSphere Application Server then purges bad connections from the pool according to the configured Purge Policy. There are 3 options:
Purge Policy of Entire Pool - all connections are removed from the pool, and in-use connections are marked as bad so that they are not pooled.
Purge Policy of Failing Connection Only - only the specific connection upon which the error actually occurred is removed from the pool, or marked as bad and not returned to the pool.
Purge Policy of Validate All Connections - all connections are tested for validity (Connection.isValid API), and connections found to be bad are removed from the pool, or marked as bad and not returned to the pool. Connections found to be valid remain in the pool and continue to be used.
I'm not sure from your description if you are using WebSphere Application Server traditional or Liberty. If traditional, there is an additional option for pre-testing connections as they are handed out of the pool, but be aware that turning this on can have performance implications.
That said, the one thing to be aware of is that regardless of any of the above, your application will always need to be capable of handling the possibility of errors due to bad connections (even if the connection pool is cleared, connections can go bad while in use) and respond by requesting a new connection and retrying the operation in a new transaction.
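A minimal sketch of that retry pattern, assuming a container-managed DataSource; the SQL and the single-retry policy are illustrative, not from the answer:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.sql.DataSource;

public class RetryingWriter {
    private final DataSource dataSource;

    public RetryingWriter(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void insertAudit(String message) throws SQLException {
        SQLException last = null;
        for (int attempt = 0; attempt < 2; attempt++) {        // original attempt plus one retry
            try (Connection conn = dataSource.getConnection()) {
                conn.setAutoCommit(false);                      // new transaction per attempt
                try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO audit_log(message) VALUES (?)")) {
                    ps.setString(1, message);
                    ps.executeUpdate();
                    conn.commit();
                    return;
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                }
            } catch (SQLException e) {
                last = e;                                       // connection may have gone bad (e.g. after failover); retry once
            }
        }
        throw last;
    }
}
```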
Version 4 of that SQL Server JDBC driver is old and doesn't know anything about the AlwaysOn feature.
Any data source connection pool can be configured to check the status of the connection from the pool prior to doling it out to the client. If the connection cannot be used the pool will create a new one. That's true of all vendors and versions. I believe that's the best you can do.

Opening a new database connection for every client that connects to the server application?

I am in the process of building a client-server application and I would really like some advice on how to design the server-database connection part.
Let's say the basic idea is the following:
Client authenticates himself on the server.
Client sends a request to server.
Server stores client's request to the local database.
In terms of Java Objects we have
Client Object
Server Object
Database Object
So when a client connects to the server, a session is created between them through which all the data is exchanged. Now what bothers me is whether I should create a database object/connection for each client session or create one database object that will handle all requests.
Thus the two concepts are
Create one database object that handles all client requests
For each client-server session create a database object that is used exclusively for the client.
Going with option 1, I guess that all methods should be synchronized in order to avoid one client thread overwriting the variables of another. However, synchronizing will be time consuming when there are many concurrent requests, as each request will be queued until the running one completes.
Option 2 seems a more appropriate solution, but creating a database object for every client-server session consumes memory, and creating a database connection for each client could again become a problem when the number of concurrently connected users is large.
These are just my thoughts, so please add any comments that it may help on the decision.
Thank you
Option 3: use a connection pool. Every time you want to connect to the database, you get a connection from the pool. When you're done with it, you close the connection to give it back to the pool.
That way, you can
have several clients accessing the database concurrently (your option 1 doesn't allow that)
have a reasonable number of connections open and avoid bringing the database to its knees or running out of available connections (your option 2 doesn't allow that)
avoid opening new database connections all the time (your option 2 doesn't allow that). Opening a connection is a costly operation.
Basically all server apps use this strategy. All Java EE servers come with a connection pool. You can also use it in Java SE applications, by using a pool as a library (HikariCP, Tomcat connection pool, etc.)
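For example, here is a minimal standalone-pool sketch using HikariCP; the JDBC URL, credentials and pool size are placeholders, not values from the question:

```java
import java.sql.Connection;
import java.sql.SQLException;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class Database {
    private static final HikariDataSource POOL = createPool();

    private static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/app"); // placeholder URL
        config.setUsername("app");                                  // placeholder user
        config.setPassword("secret");                               // placeholder password
        config.setMaximumPoolSize(20); // upper bound on concurrent DB connections from this app
        return new HikariDataSource(config);
    }

    public static Connection getConnection() throws SQLException {
        // Borrows a pooled connection; calling close() on it returns it to the pool.
        return POOL.getConnection();
    }
}
```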
I would suggest a third option: database connection pooling. This way you create a specified number of connections and hand out the first free connection as soon as one becomes available. This gives you the best of both worlds: there will almost always be free connections available quickly, and you keep the number of connections to the database at a reasonable level. There are plenty of out-of-the-box Java connection pooling solutions, so have a look online.
Just use connection pooling and go with option 2. There are quite a few - C3P0, BoneCP, DBCP. I prefer BoneCP.
Neither is a good solution.
Problem with Option 1:
You already stated the problems with synchronization when there are multiple threads. But apart from that, there are many other problems, such as transaction management (when are you going to commit the connection?) and security (all clients can see values that have not yet been committed), just to name a few.
Problem with Option 2:
Two of the biggest problems with this are:
Creating a new connection each and every time takes a lot of time, so performance becomes an issue.
Database connections are extremely expensive resources which should be used in limited numbers. If you start creating DB Connections for every client you will soon run out of them although most of the connections would not be actively used. You will also see your application performance drop.
The Connection Pooling Option
That is why almost all client-server applications go with the connection pooling solution. You have a set of connections in the pool which are obtained and released appropriately. Almost all Java frameworks have sophisticated connection pooling solutions.
If you are not using any JDBC framework (most use Spring JDBC/Hibernate), read the following article:
http://docs.oracle.com/javase/jndi/tutorial/ldap/connect/pool.html
If you are using any of the popular Java Frameworks like Spring, I would suggest you use Connection Pooling provided by the framework.
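As an illustration of letting the framework handle it (class names and SQL are assumptions, not from the question): with Spring's JdbcTemplate on top of a pooled DataSource, the code never opens or closes connections itself.

```java
import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

public class ClientRequestRepository {
    private final JdbcTemplate jdbc;

    public ClientRequestRepository(DataSource pooledDataSource) {
        // The DataSource is expected to be backed by a connection pool managed by the framework.
        this.jdbc = new JdbcTemplate(pooledDataSource);
    }

    public void saveRequest(long clientId, String payload) {
        // JdbcTemplate borrows a connection from the pool, executes the statement, and returns it.
        jdbc.update("INSERT INTO client_request(client_id, payload) VALUES (?, ?)",
                    clientId, payload);
    }
}
```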

Maintaining MySQL database connections in a web app

I have a MySQL server and am making trivial CRUD calls to it from Java servlets, establishing a connection before each call and closing it at the end.
What is the ideal way to handle this scenario so the application can scale and handle user load decently?
You should maintain a connection pool which recycles recently used connections. That way, each request to the MySQL server doesn't require connection setup.
Definitely the way to go is using connection pools (connection pooling is a technique for sharing database connections among requesting clients). Connection pooling provides significant benefits in terms of application performance, concurrency and scalability, and usually involves little or no code modification.
It's pretty straightforward to use connection pools with Tomcat, but the implementation details usually differ from version to version.

Simultaneous access to Database in Web application

How can I know the average or exact number of users accessing the database simultaneously in my Java EE web application? I would like to see whether the connection pool settings I configured in the GlassFish application server are suitable for my web application. I need to correctly set the maximum number of connections in the connection pool settings of the application server. Recently, my application ran out of connections and threw exceptions when clients' DB requests timed out.
There are multiple ways.
The first and easiest would be to get help from your DBAs - they can tell you exactly how many connections are active from your web server, or for the connection pool's user ID, at a given time.
If you want some excitement, you can use the JMX management extensions provided by GlassFish. Listing 6 on this page gives an example of how to write a JMX-based snippet to monitor a connection pool.
Finally, you must make sure that all connections are closed explicitly with a connection.close() type of call in your application. In some cases, you need to close the ResultSet as well.
Next is throttling your HTTP thread pool to avoid too much concurrent access if your DB connections are taking longer to close.
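Regarding closing everything explicitly, here is a small sketch using try-with-resources (the DataSource and query are placeholders), which guarantees the ResultSet, Statement and Connection are closed, and the connection returned to the pool, even when an exception is thrown:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

public class OrderCounter {
    public int countOrders(DataSource pooledDataSource, long customerId) throws SQLException {
        try (Connection conn = pooledDataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT COUNT(*) FROM orders WHERE customer_id = ?")) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        } // all three resources are closed here, so no connection leaks
    }
}
```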
