Consider a situation where each client's data is stored in its own database/catalog, and all such databases live in a single RDBMS (client-data). Master data (e.g. clients, ...) is kept in another RDBMS (master-data). How can we dynamically access a particular database in the client-data RDBMS by means of JdbcTemplate?
Defining a DataSource for each database in the client-data RDBMS and then dynamically selecting one, as suggested here, is not an option for us, since the databases are created and destroyed dynamically.
I basically need something like JDBC's Connection.setCatalog(String catalog), but I have not found anything like that available in Spring's JdbcTemplate.
Maybe you could wrap the data source with DelegatingDataSource, call setCatalog() in getConnection(), and use the wrapped data source when creating the JdbcTemplate:
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DelegatingDataSource;

class MyDelegatingDS extends DelegatingDataSource {

    private final String catalogName;

    public MyDelegatingDS(final String catalogName, final DataSource dataSource) {
        super(dataSource);
        this.catalogName = catalogName;
    }

    @Override
    public Connection getConnection() throws SQLException {
        final Connection cnx = super.getConnection();
        cnx.setCatalog(this.catalogName);
        return cnx;
    }

    // maybe also override the other getConnection(username, password) variant
}

// then use like this: new JdbcTemplate(new MyDelegatingDS("catalogName", dataSource));
You can access the Connection from JdbcTemplate:
jdbcTemplate.getDataSource().getConnection().setCatalog(catalogName);
You'll only have to make sure the database driver supports this functionality.
jdbcTemplate.getDataSource().getConnection().setSchema(schemaName)
was what I needed for switching schemas using Postgres. Props to @m3th0dman for putting me on the right track. I'm only adding this in case others find this answer while searching for schema switching, as I was.
I have implemented multitenancy with MySQL and Hibernate, but I have doubts that it will work in the real world.
As per the following quote from the Hibernate documentation, it should be possible:
Connections could point to the database itself (using some default schema) but the Connections would be altered using the SQL SET SCHEMA (or similar) command. Using this approach, we would have a single JDBC Connection pool for use to service all tenants, but before using the Connection it would be altered to reference the schema named by the “tenant identifier” associated with the currently logged in user.
Here is the link from which I got the above paragraph:
Multitenancy in hibernate
So I overrode MultiTenantConnectionProvider as below:
@Override
public Connection getConnection(String tenantIdentifier) throws SQLException {
    Connection tenantSpecificConnection = dataSource.getConnection();
    if (!StringUtils.isEmpty(tenantIdentifier)) {
        try (Statement statement = tenantSpecificConnection.createStatement()) {
            // "USE ..." produces no result set, so use execute() rather than executeQuery()
            statement.execute("use " + tenantIdentifier);
        }
        tenantSpecificConnection.setSchema(tenantIdentifier);
    } else {
        tenantSpecificConnection.setSchema(Constants.DEFAULT);
    }
    return tenantSpecificConnection;
}
This is a very basic first iteration; I am just able to switch the database. But I still have questions: would this work in the real world? I think multiple concurrent users might cause trouble with this approach. According to the Hibernate documentation it should not, but it looks like it may cause problems. Has anyone tried this? I need help on this one.
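One real-world concern with the snippet above is that the tenant identifier is concatenated directly into the "use ..." statement, which is an SQL injection risk if the identifier ever comes from user input. A minimal sketch of a whitelist check (the class name, pattern, and length limit are illustrative assumptions, not part of the original code):

```java
import java.util.regex.Pattern;

public class TenantNames {
    // Hypothetical policy: letters, digits, underscore; 1-64 characters.
    private static final Pattern SAFE = Pattern.compile("[A-Za-z0-9_]{1,64}");

    // Returns the identifier unchanged if it is safe to splice into "use <id>",
    // otherwise rejects it before it ever reaches the Statement.
    static String requireSafe(String tenantIdentifier) {
        if (tenantIdentifier == null || !SAFE.matcher(tenantIdentifier).matches()) {
            throw new IllegalArgumentException("Unsafe tenant identifier: " + tenantIdentifier);
        }
        return tenantIdentifier;
    }

    public static void main(String[] args) {
        System.out.println(requireSafe("tenant_42")); // accepted
        try {
            requireSafe("x; DROP DATABASE prod");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");           // malicious value is refused
        }
    }
}
```

With a check like this in place, `statement.execute("use " + requireSafe(tenantIdentifier))` at least cannot be abused to run arbitrary SQL, though the concurrency questions above remain.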
The application uses Spring and Hibernate.
Is it possible to change a database link before executing an SQL query in Java, whether with native JDBC or Hibernate?
I know it's possible to
refer to a table or view on the other database by appending @dblink to
the table or view name
But I can't do that, as the SQL query is written by a user and I don't have control over it (it's too complicated to parse).
The code below shows what I'm trying to do...
// TODO connect to the db link ???
List<Object> results = this.getSession().doReturningWork(new ReturningWork<List<Object>>() {
    @Override
    public List<Object> execute(Connection connection) throws SQLException {
        final ResultSet rs = connection.createStatement().executeQuery(userQuery);
        // doing something with the results....
    }
});
You can create a synonym in your schema referencing the table on the other database:
CREATE OR REPLACE SYNONYM "SOURCE_SCHEMA"."SYNONYM_NAME" FOR "TARGET_SCHEMA"."TARGET_TABLE"#"DBLINK"
That way you can access the external table by the synonym name, shielding you from the exact physical location of the object:
SELECT * FROM SYNONYM_NAME
If you want to map this entity using Hibernate, you would use the synonym name just like any other object.
If all references in the query go through DB link(s) or are fully qualified, you could do this:
Create a new user in your (local) Oracle database
Create private DB links for that user that have the same names as the ones you want replaced
If the original DB links are public, you only need to redefine the DB links you want redirected; otherwise you need to redefine all of them
Use this DB user in your Java application
Private DB links take precedence over public DB links of the same name; and among private DB links, the current user's own are the ones used, obviously.
Either you can create synonyms for all tables that need to be accessed using the DB link. Create these once and let them live in your connecting schema. Or,
You can write a method that parses userQuery to determine whether each table is locally owned or lives in the DB-linked schema, and correspondingly appends @DBLink to the table name in the passed query.
I'd prefer approach 1, as approach 2 might be difficult to achieve with complex queries.
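To see why approach 2 is fragile, here is a deliberately naive sketch of the rewriting idea (class name, regex, and the sample query are illustrative assumptions). It only handles the identifier immediately following FROM or JOIN:

```java
import java.util.regex.Pattern;

public class DbLinkRewriter {
    // Naive rewrite: append @dblink to the bare identifier after FROM or JOIN.
    // Breaks on subqueries, quoted identifiers, schema-qualified names, hints, etc.
    static String appendDbLink(String sql, String dbLink) {
        return Pattern.compile("(?i)\\b(FROM|JOIN)\\s+(\\w+)")
                .matcher(sql)
                .replaceAll("$1 $2@" + dbLink);
    }

    public static void main(String[] args) {
        String in = "SELECT * FROM emp JOIN dept ON emp.dno = dept.dno";
        System.out.println(appendDbLink(in, "REMOTE"));
        // -> SELECT * FROM emp@REMOTE JOIN dept@REMOTE ON emp.dno = dept.dno
    }
}
```

Even this tiny example already mis-rewrites `FROM (SELECT ...)` subqueries, which is exactly why the synonym approach is the safer choice for user-written queries.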
Is there a reason you would want to delete the synonym?
I simply initialize a new connection over the DB link. As the username is unique, it works.
final Configuration configuration = new Configuration();
configuration.setProperty("hibernate.connection.url", jdbcUrl);
configuration.setProperty("hibernate.connection.username", userDbLinkName);
configuration.setProperty("hibernate.connection.password", userDbLinkPassword);
final StandardServiceRegistryBuilder builder = new StandardServiceRegistryBuilder().applySettings(configuration.getProperties());
final SessionFactory sessionFactory = configuration.buildSessionFactory(builder.build());
return sessionFactory.openSession();
I'm using Spring and JDBC template to manage database access, but build the actual SQL queries using JOOQ. For instance, one DAO may look like the following:
public List<DrupalTaxonomyLocationTerm> getLocations(String value, String language) throws DataAccessException {
DSLContext ctx = DSL.using(getJdbcTemplate().getDataSource(), SQLDialect.MYSQL);
SelectQuery q = ctx.selectQuery();
q.addSelect(field("entity_id").as("id"));
q.addFrom(table("entity").as("e"));
[...]
}
As you can see from the above, I'm building and executing queries using JOOQ. Does Spring still take care of closing the ResultSet I get back from JOOQ, or do I somehow "bypass" Spring when I access the data source directly and pass the data source on to JOOQ?
Spring doesn't do anything with the objects generated from your DataSource, i.e. Connection, PreparedStatement, ResultSet. From a Spring (or generally from a DataSource perspective), you have to do that yourself.
However, jOOQ will always:
close Connection objects obtained from a DataSource. This is documented in jOOQ's DataSourceConnectionProvider
close PreparedStatement objects right after executing them - unless you explicitly tell jOOQ to keep an open reference through Query.keepStatement()
close ResultSet objects right after consuming them through any ResultQuery.fetchXXX() method - unless you explicitly want to keep an open Cursor with ResultQuery.fetchLazy()
By design, jOOQ inverts JDBC's default behaviour of keeping all resources open and having users tediously close them explicitly. jOOQ closes all resources eagerly (which is what people want 95% of the time) and allows you to explicitly keep resources open where this is useful for performance reasons.
See this page of the jOOQ manual for differences between jOOQ and JDBC.
I am using JPA in a JavaSE application and wrote a class to manage database connections and persist objects to the database.
The connection parameters for the database connection are passed using the class constructor and the class has a method to validate the connection parameters.
public class DatabaseManager {

    private EntityManagerFactory entityManagerFactory = null;

    public DatabaseManager(String connectionDriver, String connectionUrl, String username, String password) {
        Properties props = new Properties();
        props.put("javax.persistence.jdbc.driver", connectionDriver);
        props.put("javax.persistence.jdbc.url", connectionUrl);
        props.put("javax.persistence.jdbc.user", username);
        props.put("javax.persistence.jdbc.password", password);
        entityManagerFactory = Persistence.createEntityManagerFactory("name of persistence unit", props);
    }

    public boolean checkConnection() {
        try {
            entityManagerFactory.createEntityManager();
        } catch (Exception e) {
            return false;
        }
        return true;
    }
}
When I call the checkConnection method, it tries to create a new EntityManager with the given parameters. If no connection can be established, the EntityManagerFactory throws an exception and the method returns false.
When I test the method I can see the following results:
All parameters are correct -> the method returns true as expected.
The URL or the username are not correct -> the method returns false as expected.
The driver name or the user password is not correct -> the method returns true, but it should return false. <- This is my problem.
Can someone tell me why it behaves like this and what is a proper way to test connection parameters without writing data to some database tables?
At the moment I am using EclipseLink but I'm looking for some provider independent way.
Thanks for your answers.
Creating an EntityManager doesn't have to create a connection at that point. The JPA implementation could delay obtaining the connection until the first persist or flush, for example (the JPA spec doesn't define when a connection has to be obtained).
Why not just use a simple few lines of JDBC to check your DB credentials? At least that way it works independent of how the JPA implementation has decided to handle connections (and those JPA properties have "jdbc" in the name since that is almost certainly what the JPA implementation is using itself to obtain connections).
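A minimal sketch of that plain-JDBC check (class and method names are illustrative; `Connection.isValid` is standard JDBC 4, and the timeout value is an arbitrary choice):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionChecker {
    // Returns true only if a physical connection can actually be opened
    // with the given URL and credentials; any failure (bad URL, bad user,
    // bad password, missing driver) yields false instead of an exception.
    public static boolean checkConnection(String url, String user, String password) {
        try (Connection cnx = DriverManager.getConnection(url, user, password)) {
            return cnx.isValid(5); // 5-second validation timeout
        } catch (SQLException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // With an unknown URL this reports false rather than throwing.
        System.out.println(checkConnection("jdbc:unknown://nowhere", "u", "p"));
    }
}
```

Unlike createEntityManager(), this forces a real connection attempt immediately, so wrong passwords and wrong driver settings are detected up front, independently of the JPA provider's connection strategy.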
From JPA 2.0 Specification Section 8.2.1
The persistence-unit element consists of the name and transaction-type attributes and the following sub-elements: description, provider, jta-data-source, non-jta-data-source, mapping-file, jar-file, class, exclude-unlisted-classes, shared-cache-mode, validation-mode, and properties.
The name attribute is required; the other attributes and elements are optional. Their semantics are described in the following subsections.
An EntityManager doesn't need a connection to be created, as it ONLY needs the persistence unit name. I think you should go with DataNucleus's suggestion to make a simple JDBC connection to validate your connection parameters.
Currently I make a connection to a database this way:
MyClass.java
try {
    DataSource datasource = JNDILoader.getDataSourceObject(pathToSource);
    Class.forName("net.sourceforge.jtds.jdbc.Driver");
    connection = datasource.getConnection();
    stmt = connection.prepareStatement("{call storageProcedureXXX(?,?)}");
    stmt.setString(1, "X");
    stmt.setString(2, "Y");
    result = stmt.executeQuery();
} catch (SQLException e) {
    //TODO
} catch (Exception e) {
    //TODO
}
That works for one class that makes requests for the data, but would it be better to create a singleton class and get the connection from it (performance? maintainability? simplicity?)? Which option would be better: a singleton, or stored procedures per request?
Note: in the end, the application (a RESTful web service) will need to connect to different databases to load data for different specialized classes; some classes would even load data from plain text.
First of all, you are mixing two different things: singletons and stored procedures. Singleton is a design pattern; stored procedures are procedures executed on the database, typically encapsulating some business logic.
What you wrote is not really the preferred way of connecting to a database. If you have many requests and create one connection per request, you will soon have problems with too many connections to the database. You should use a connection pool. The most famous one for Java is DBCP. Another one is c3p0.
For connecting to different databases you could use something like Hibernate.
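On the singleton half of the question: if you do keep a single shared object (typically the pooled DataSource itself, not raw connections), the initialization-on-demand holder idiom gives you thread-safe lazy construction without locking. A generic sketch (class name and the instance counter are illustrative; in practice the held object would be a pooled DataSource):

```java
import java.util.concurrent.atomic.AtomicInteger;

public final class ConnectionFactoryHolder {
    // Counter only exists to demonstrate single construction; not needed in real code.
    static final AtomicInteger created = new AtomicInteger();

    // The JVM initializes the nested Holder class exactly once, on first access,
    // so no explicit synchronization is required.
    private static class Holder {
        static final ConnectionFactoryHolder INSTANCE = new ConnectionFactoryHolder();
    }

    private ConnectionFactoryHolder() {
        created.incrementAndGet();
    }

    public static ConnectionFactoryHolder getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        ConnectionFactoryHolder a = getInstance();
        ConnectionFactoryHolder b = getInstance();
        System.out.println(a == b);        // true: same instance
        System.out.println(created.get()); // 1: constructed exactly once
    }
}
```

Note that what you share as a singleton should be the pool, which hands out and reclaims connections per request; sharing one Connection across requests is exactly the thread-safety trap mentioned below.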
Stored procedures are executed on the database; you pass/retrieve data to/from them through the connection.
You have to check whether your setup is thread-safe (I don't think so) if you'll be making concurrent calls.
Generally a stored procedure = 1 transaction happening in the database.
Why are you using stored procedures in the first place?