I have a J2EE app that runs on JBoss AS 4.2.3 and does a lot of reads against a MySQL database.
I'd like to set up several more MySQL instances so that the app can decide which one to contact.
We are using Hibernate as the ORM (full JPA support).
I've been looking at Hibernate Shards and OpenJPA, but perhaps there is another way to do it?
Is there a way to keep using JPA (so we won't need to change our code) and have some kind of read balancing in the provider?
I could pull some tricks with the hosts file or use DNS with a short TTL, but I'm looking for a simpler solution.
Is there one?
Just to be clear, sharding is not interesting at this point; just reading.
So eventually what I did was extend the MySQL JDBC driver and modify the mysql-ds.xml file.
In the XML file I changed the connection string to contain several hosts, changed :mysql: to :myjdbc:, and entered my class as the driver.
In the driver subclass I had to parse the connection string in two methods:
1. acceptsURL
2. connect
My code parses the connection string, selects a machine at random, and calls the superclass method with a revised, legal connection string.
And it just worked!
If a MySQL read replica fails, the connection fails, and on the next attempt a different machine is chosen.
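Roughly, the driver subclass looks like this. It is only a simplified sketch: it assumes the old Connector/J driver class (com.mysql.jdbc.Driver), a URL of the form jdbc:myjdbc://host1,host2,host3:3306/db, and leaves out error handling.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;
import java.util.Random;

// Delegates to the stock MySQL driver after picking one host at random.
public class BalancingDriver extends com.mysql.jdbc.Driver {

    private static final String MY_PREFIX = "jdbc:myjdbc://";
    private static final Random RANDOM = new Random();

    static {
        try {
            DriverManager.registerDriver(new BalancingDriver());
        } catch (SQLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // The parent constructor declares SQLException, so ours must too.
    public BalancingDriver() throws SQLException {
        super();
    }

    @Override
    public boolean acceptsURL(String url) throws SQLException {
        return url != null && url.startsWith(MY_PREFIX);
    }

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        if (!acceptsURL(url)) {
            return null; // not our URL, let another driver handle it
        }
        return super.connect(rewrite(url), info);
    }

    // Turn jdbc:myjdbc://h1,h2,h3:3306/db into jdbc:mysql://<one host>:3306/db
    private String rewrite(String url) {
        String rest = url.substring(MY_PREFIX.length());   // "h1,h2,h3:3306/db"
        int end = rest.indexOf(':');                        // start of ":port/db"
        if (end < 0) end = rest.indexOf('/');               // no explicit port
        String hostsPart = end < 0 ? rest : rest.substring(0, end);
        String tail = end < 0 ? "" : rest.substring(end);
        String[] hosts = hostsPart.split(",");
        String chosen = hosts[RANDOM.nextInt(hosts.length)].trim();
        return "jdbc:mysql://" + chosen + tail;
    }
}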
whooohooo!
Do you have any comments?
I have a Java REST API application using Quarkus as the framework. The application uses a PostgreSQL database, which is configured via the application.properties config file for Hibernate entities (using the "quarkus-hibernate-orm" module), etc. However, there are cases where I will have to dynamically connect to a remote database (connection info supplied as parameters) to read and write data at runtime as well. What is the best way to go about this with Quarkus? For simplicity we can assume that the remote databases are of the same type (PostgreSQL), so we don't have to worry about whether the correct driver is locally available.
Is there something provided by Quarkus or the environment to establish these connections and read/write? I don't necessarily need an ORM layer here, as I may not know the structure beforehand either; simple queries are sufficient. When I research this subject I only find information about static Hibernate or datasource configurations in Quarkus, but I won't know what they look like beforehand. Basically, is there some kind of "db connection provider" I should use, or do I simply have to manually create new plain JDBC connections in my own code?
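To illustrate the fallback I have in mind, the plain-JDBC variant would look roughly like this; the query, table, and parameter names are made up, and the PostgreSQL JDBC driver is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Opens a connection to a remote PostgreSQL instance whose coordinates
// only become known at runtime; no pooling, no ORM.
public class RemoteDbClient {

    public void readSomething(String host, int port, String db,
                              String user, String password) throws SQLException {
        String url = "jdbc:postgresql://" + host + ":" + port + "/" + db;
        try (Connection conn = DriverManager.getConnection(url, user, password);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT * FROM some_table WHERE id = ?")) {
            ps.setLong(1, 42L);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getObject(1)); // structure not known beforehand
                }
            }
        } // connection closed here
    }
}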
Does anyone know whether GlassFish 5 has support for global transactions with 2PC (the XA protocol), without installing extra tools?
I have looked for information on the GlassFish page ("The Open Source Java EE Reference Implementation") where I downloaded the app server, and on other pages, but I have not had any luck.
I am trying transactions across two microservices that each insert a value into the database. I have configured GlassFish's JNDI with "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" and it looks like it works, but when I check the database only the value from one service has been added (the global transaction with 2PC does not work). I am beginning to think that GlassFish does not support 2PC.
I have read that it can be done with Tomcat, but that requires adding tools like Atomikos, Bitronix, etc. The idea is to do it with GlassFish without installing anything else.
Regards.
Does anyone know whether GlassFish 5 has support for global transactions with 2PC (the XA protocol), without installing extra tools?
GlassFish 5 supports transactions using XA datasources. You can write a program that executes transactions combining operations on multiple databases. For instance, you can create a transaction that performs operations on both an Oracle and an IBM DB2 database. If one of the operations in the transaction fails, the other operations (in the same and in the other databases) will not be executed, or will be rolled back.
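For example, a container-managed bean can enlist two XA datasources in a single transaction roughly like this; the JNDI names, tables, and columns are placeholders for whatever you have configured in GlassFish:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

// Container-managed transaction: both inserts commit together (2PC) or roll back together.
@Stateless
public class TwoDatabaseWriter {

    @Resource(lookup = "jdbc/oracleXA")
    private DataSource oracleDs;

    @Resource(lookup = "jdbc/db2XA")
    private DataSource db2Ds;

    // The default transaction attribute is REQUIRED, so the container starts a JTA
    // transaction and enlists both XA connections in it.
    public void writeBoth(int id, String value) {
        try (Connection c1 = oracleDs.getConnection();
             PreparedStatement p1 = c1.prepareStatement("INSERT INTO t1 (id, val) VALUES (?, ?)")) {
            p1.setInt(1, id);
            p1.setString(2, value);
            p1.executeUpdate();

            try (Connection c2 = db2Ds.getConnection();
                 PreparedStatement p2 = c2.prepareStatement("INSERT INTO t2 (id, val) VALUES (?, ?)")) {
                p2.setInt(1, id);
                p2.setString(2, value);
                p2.executeUpdate();
            }
        } catch (SQLException e) {
            // A runtime exception makes the container mark the whole transaction for rollback.
            throw new RuntimeException(e);
        }
    }
}

Note that this only works when both operations happen in the same transaction context; if each insert lives in a separate microservice, the second service runs in its own transaction, which is the situation described below.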
I am trying transactions across two microservices that each insert a value into the database. I have configured GlassFish's JNDI with "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" and it looks like it works, but when I check the database only the value from one service has been added.
If your program invokes a REST/web service inside a transaction, the operations performed by that REST/web service do not join the transaction. An error in the calling program will not roll back the operations already performed by the invoked REST/web service.
Is it possible to start up and shut down multiple H2 databases within a JVM?
My goal is to support multi-tenancy by giving each user/account their own database. Each account has very little data. Data between the accounts is never accessed together, compared, or grouped; each account is entirely separate from the others. Each account is only accessed briefly once a day or a few times a month. So there are few upsides to housing the data together in a single database, and some serious downsides.
So my idea is that when a user logs in for a particular account, that account's database is loaded. When that user logs out, or their web app session (Vaadin app) times out, that account's database is closed, its data flushed to storage, and possibly a backup performed. This opening and closing would happen for any number of databases in parallel.
Benefits include minimizing the amount of memory in use at any one time for caching data and indexes, minimizing locking and other contention, and allowing for smooth scaling.
I'm new to H2, so I'm not sure if its architecture can support this. I'm asking for a denial or confirmation of this capability, along with any tips or caveats.
Yes, it is possible. Each database has its own mini environment, with no possible pollution between databases.
You could, for example, use a JDBC URL based on the user's id or login:
jdbc:h2:user1 in H2 1.3.x embedded mode
jdbc:h2:./user1 in H2 1.4.x embedded mode
jdbc:h2:tcp://localhost/user1 in tcp mode
You can use any naming convention for the database name, provided your OS allows it: user1, user2, etc., or simply the name of the login.
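A minimal sketch of that per-account pattern in server mode, with one short-lived connection per login; the base directory, credentials, and SHUTDOWN-on-logout policy are just assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// One H2 database per account, opened on login and closed on logout or session timeout.
public class AccountDatabases {

    private String urlFor(String accountId) {
        return "jdbc:h2:tcp://localhost/./accounts/" + accountId;
    }

    public Connection open(String accountId) throws SQLException {
        // The database is created on first connection if it does not exist yet;
        // this is where a schema migrator (see the tips below) would run.
        return DriverManager.getConnection(urlFor(accountId), "sa", "");
    }

    public void close(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("SHUTDOWN"); // flushes and closes this account's database
        } finally {
            conn.close();
        }
    }
}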
Tips:
use the server mode rather than the embedded mode, allowing multiple connections for the same user from multiple sessions/hosts
have a schema migrator (like Flyway) to initialize each newly created db
ensure you manage name collisions at the top level of your app, and possibly store these databases and corresponding logins in a dedicated database as well
Caveats:
do not use a connection pool as connections will be difficult to reuse
You must make sure IFEXISTS=TRUE is not used on the server
avoid tweaks on the JDBC URL, like setting LOG=0, UNDO_LOG=0, etc.
I do not know whether your OS or the JVM will limit how many database files can be opened like this, nor whether such a limit can be tweaked; I could not find one in the manual.
Please refer to the H2 manual if in doubt about URL parameters.
Is there a way I can use JDBC to target multiple databases when I execute statements (basic inserts, updates, deletes)?
For example, assume both servers [200.200.200.1] and [200.200.200.2] have a database named MyDatabase, and the databases are exactly the same. I'd like to run "INSERT INTO TestTable VALUES(1, 2)" on both databases at the same time.
Note regarding JTA/XA:
We're developing a JTA/XA architecture to target multiple databases in the same transaction, but it won't be ready for some time. I'd like to use standard JDBC batch commands and have them hit multiple servers for now, if it's possible. I realize it won't be transaction-safe; I just want the commands to hit both servers for basic testing at the moment.
You need one connection per database. Once you have those, the standard auto commit/rollback calls will work.
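For example, a rough sketch (assuming MySQL just for concreteness; the URLs and credentials are placeholders, and each database commits independently):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Executes the same statement against both servers. Not transaction-safe:
// each connection commits on its own, so one can succeed while the other fails.
public class DualWriter {

    private static final String[] URLS = {
            "jdbc:mysql://200.200.200.1/MyDatabase",
            "jdbc:mysql://200.200.200.2/MyDatabase"
    };

    public void executeOnBoth(String sql, String user, String password) throws SQLException {
        for (String url : URLS) {
            try (Connection conn = DriverManager.getConnection(url, user, password);
                 Statement st = conn.createStatement()) {
                conn.setAutoCommit(false);
                st.executeUpdate(sql);
                conn.commit();
            }
        }
    }
}

Usage: new DualWriter().executeOnBoth("INSERT INTO TestTable VALUES(1, 2)", "user", "pw");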
You could try Spring; it already has transaction managers set up.
Even if you don't use Spring, all you have to do is get XA versions of the JDBC driver JARs into your CLASSPATH. Two-phase commit will not work if you don't have them.
I'd wonder if replication using the database would not be a better idea. Why should the middle tier care about database clustering?
The best quick-and-dirty way for development is to use multiple database connections. They won't be in the same transaction since they are on different connections. I don't think this would be much of an issue if it's just for testing.
When your JTA/XA architecture is ready, just plug it into the already working code.
I need to improve traceability in a web application that usually runs with a fixed database user. The DBA should have fast access to information about the heavy users that are degrading the database.
Five years ago, I implemented a .NET ORM engine which logs the user and the application server using the DBMS_APPLICATION_INFO package, via a wrapper over the connection manager with the following code:
DBMS_APPLICATION_INFO.SET_MODULE('" + User + " - " + appServerMachine + "','');
Each time the application gets a connection from the pool, the package is executed to record the information in V$SESSION.
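In Java the equivalent call can be issued on the pooled connection, for example through Hibernate's Session.doWork; this is only an illustration of the same idea, with the module/action strings as placeholders:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import org.hibernate.Session;
import org.hibernate.jdbc.Work;

// Tags the current Oracle session in V$SESSION before the real work runs.
public class SessionTagger {

    public void tag(Session session, final String user, final String appServerMachine) {
        session.doWork(new Work() {
            @Override
            public void execute(Connection connection) throws SQLException {
                try (CallableStatement cs = connection.prepareCall(
                        "{ call DBMS_APPLICATION_INFO.SET_MODULE(?, ?) }")) {
                    cs.setString(1, user + " - " + appServerMachine); // module
                    cs.setString(2, "");                              // action
                    cs.execute();
                }
            }
        });
    }
}

With JPA, the Hibernate Session can be obtained via entityManager.unwrap(Session.class).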
Has anyone discovered or implemented a solution for this problem using TopLink or Hibernate? Is there a default implementation?
I found here a solution like the one I implemented five years ago, but I'd like to know whether anyone has a better solution, integrated with the ORM.
Using DBMS_APPLICATION_INFO with JBoss
My application is built on Spring, the DAOs are implemented with JPA (using Hibernate), and it currently runs directly in Tomcat, with plans to migrate to SAP NetWeaver Application Server next year.
Thanks.
In Oracle 10g we can use DBMS_SESSION.SET_IDENTIFIER to uniquely identify a specific session. Oracle also provides a JDBC built-in to hook this into a connection pool. You will have to provide your own means of uniquely identifying a session, which will depend on your application.
Your DBA will then have enough information to identify the resource-hungry session.
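A bare-bones illustration of the idea (the identifier value is whatever uniquely names a session in your application):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

// Marks the Oracle session with an application-level identifier; the DBA can then
// filter V$SESSION on the CLIENT_IDENTIFIER column or enable tracing per identifier.
public class ClientIdentifier {

    public static void set(Connection connection, String appSessionId) throws SQLException {
        try (CallableStatement cs = connection.prepareCall(
                "{ call DBMS_SESSION.SET_IDENTIFIER(?) }")) {
            cs.setString(1, appSessionId);
            cs.execute();
        }
    }

    public static void clear(Connection connection) throws SQLException {
        // Clear the identifier before the connection goes back to the pool
        try (CallableStatement cs = connection.prepareCall(
                "{ call DBMS_SESSION.CLEAR_IDENTIFIER() }")) {
            cs.execute();
        }
    }
}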
No DBA I know would be impressed with a huge text file generated from the middle tier.
If you want to know about queries that are costing a lot to run, you should go directly into your database server. There are monitoring tools for that, specific to each server. For example in PostgreSQL you would run SELECT * FROM pg_stat_activity as an admin to check each connection and what it's doing, how long it's been running, etc.
If you really want to/need to do it from the container, then maybe you can define an interceptor with Spring AOP to execute that statement you need before doing anything. Keep in mind that a database connection is not always used by the same application user, since you're using a pool.
You should be able to configure a logger (e.g. Log4j) on your connection pool. You may need a custom appender to pull back the user ID.
Two points to consider:
On a busy system this will generate a big log file.
Frequent connections are not necessarily an indication of behaviour that would degrade the DB.