Dynamic schema in Hibernate @Table annotation - java

Imagine you have four MySQL database schemas across two environments:
foo (the prod db),
bar (the in-progress restructuring of the foo db),
foo_beta (the test db),
and bar_beta (the test db for new structures).
Further, imagine you have a Spring Boot app with Hibernate annotations on the entities, like so:
@Table(name="customer", schema="bar")
public class Customer { ... }
@Table(name="customer", schema="foo")
public class LegacyCustomer { ... }
When developing locally it's no problem. You mimic the production database table names in your local environment. But then you try to demo functionality before it goes live and want to upload it to the server. You start another instance of the app on another port and realize this copy needs to point to "foo_beta" and "bar_beta", not "foo" and "bar"! What to do!
Were you using only one schema in your app, you could've left off the schema altogether and specified hibernate.default_schema, but... you're using two. So that's out.
Spring EL (e.g. @Table(name="customer", schema="${myApp.schemaName}")) isn't an option (with even some snooty "no-one needs this" comments), so if dynamically defining schemas is absurd, what does one do? Other than, you know, not getting into this ridiculous scenario in the first place.

I fixed this kind of problem by adding support for my own schema annotation to Hibernate. It is not hard to implement by extending LocalSessionFactoryBean (or AnnotationSessionFactoryBean for Hibernate 3). The annotation looks like this:
@Target(TYPE)
@Retention(RUNTIME)
public @interface Schema {
    String alias() default "";
    String group() default "";
}
Example usage:
@Entity
@Table
@Schema(alias = "em", group = "ref")
public class SomePersistent {
}
A schema name for every combination of alias and group is then specified in the Spring configuration.
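A minimal sketch of the lookup such an extended session factory bean could perform. The SchemaResolver name and the "alias.group" key format are assumptions for illustration, not Hibernate or Spring API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical resolver: maps "alias.group" keys (taken from the @Schema
// annotation on each entity) to physical schema names supplied by Spring config.
public class SchemaResolver {
    private final Map<String, String> schemaNames = new HashMap<>();

    public void register(String alias, String group, String physicalSchema) {
        schemaNames.put(alias + "." + group, physicalSchema);
    }

    // Returns the configured schema, or null when no mapping exists.
    public String resolve(String alias, String group) {
        return schemaNames.get(alias + "." + group);
    }
}
```

In the extended LocalSessionFactoryBean you would read each entity's @Schema values and rewrite that entity's table mapping with the resolved schema before the SessionFactory is built.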

You can try interceptors:
public class CustomInterceptor extends EmptyInterceptor {
    @Override
    public String onPrepareStatement(String sql) {
        String preparedStatement = super.onPrepareStatement(sql);
        preparedStatement = preparedStatement.replaceAll("schema", "Schema1");
        return preparedStatement;
    }
}
Add this interceptor to the session as:
Session session = sessionFactory.withOptions().interceptor(new CustomInterceptor()).openSession();
So whenever onPrepareStatement is executed, this block of code is called and the schema name is changed from "schema" to "Schema1".
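Note that a blunt replaceAll will also rewrite column names or string literals that happen to contain the schema name. A hedged sketch of a safer rewrite, matching the name only when it is used as a qualifier prefix (the foo/foo_beta names are taken from the question's scenario):

```java
// Sketch: rewrite only the schema qualifier "foo." (i.e. followed by a dot),
// leaving identifiers that merely contain "foo" untouched.
public class SchemaRewriter {
    public static String rewrite(String sql, String from, String to) {
        // \b ensures "foo." matches only as a whole word followed by a dot
        return sql.replaceAll("\\b" + from + "\\.", to + ".");
    }
}
```

This is still string surgery on generated SQL, so it remains fragile; it is shown only to make the interceptor approach less likely to corrupt unrelated identifiers.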

You can override the settings you declare in the annotations using an orm.xml file. Configure Maven (or whatever you use to generate your deployable build artifacts) to create that override file for the test environment.
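For example, a META-INF/orm.xml generated only into the beta build could override the @Table schema per entity (the com.example package and class names follow the question's example and are assumptions):

```xml
<!-- orm.xml for the beta environment: per-entity <table> elements
     override the @Table annotation values per the JPA spec. -->
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                 version="2.0">
    <entity class="com.example.Customer">
        <table name="customer" schema="bar_beta"/>
    </entity>
    <entity class="com.example.LegacyCustomer">
        <table name="customer" schema="foo_beta"/>
    </entity>
</entity-mappings>
```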

Related

I am using a Java Spring Boot REST API with Hibernate/JPA. The issue is with the table name when the name I need to access contains a dot, like FOO.BAR.

The problem is that when Hibernate builds the query it ignores the dot and sets the prepared statement's "from" to look like
"from foo_bar" when it actually needs to be "foo.bar". So even though it connects to the primary database fine, it never finds the table. This is a DB2 schema where it is Database->table.sub-table (not a join, but a naming convention the DBAs use).
I have tried adding the dot in the @Table name prop.
A snippet example is like:
@Entity
@Table(name="FOO.BAR")
public class SomeClassName {
}
I tried using application.properties (spring.datasource.url=jdbc:db2://server:port/dbname) and modifying that. Any ideas? Do I need to create my own naming convention or something?
Welcome to Stack Overflow, Richard.
I am fairly confident that the first value would be considered the schema name.
Perhaps trying the following would work?
@Entity
@Table(name="BAR", schema="FOO")
public class SomeClassName {
}

Spring Batch + Hibernate: Resolve ManyToMany on Data Migration

We are doing a data migration from one database to another using Hibernate and Spring Batch. The example below is slightly disguised. We are using the standard processing pipeline:
return jobBuilderFactory.get("migrateAll")
        .incrementer(new RunIdIncrementer())
        .listener(listener)
        .flow(DConfiguration.migrateD())
        .end()
        .build();
and migrateD consists of three steps:
@Bean(name="migrateDsStep")
public Step migrateDs() {
    return stepBuilderFactory.get("migrateDs")
            .<org.h2.D, org.mssql.D>chunk(100)
            .reader(dReader())
            .processor(dItemProcessor)
            .writer(dWriter())
            .listener(chunkLogger)
            .build();
}
Now assume that this table has a many-to-many relationship to another table. How can I persist that? I basically have a JPA entity class for all my entities and fill those in the processor, which does the actual migration from the old database objects to the new ones.
@Component
@Import({mssqldConfiguration.class, H2dConfiguration.class})
public class ClassificationItemProcessor implements ItemProcessor<org.h2.D, org.mssql.D> {

    public ClassificationItemProcessor() {
        super();
    }

    public org.mssql.D process(org.h2.D a) throws Exception {
        org.mssql.D di = new org.mssql.D();
        di.setA(a.getA());
        di.setB(a.getB());
        // Asking for the related objects would be possible via, e.g.:
        // Set<E> es = eRepository.findById(a.getEs());
        // di.setEs(es);
        // ... but this does not work. How to model an m:n?
        return di;
    }
}
So I could basically ask for the related object via another database call (a repository) and add it to d. But when I do that, I tend to run into LazyInitializationExceptions, or, even when it succeeds, the data in the intermediate tables sometimes will not have been filled.
What is the best practice to model this?
This is not a Spring Batch issue, it is rather a Hibernate mapping issue. As far as Spring Batch is concerned, your input items are of type org.h2.D and your output items are of type org.mssql.D. It is up to you to define what an item is and how to "enrich" it in your item processor.
You need to make sure that items received by the writer are completely "filled in", meaning that you have already set any other entities on them (be it a single entity or a set of entities, such as di.setEs(es) in your example). If this leads to lazy initialization exceptions, you need to change your model to be eagerly initialized instead, because Spring Batch cannot help at that level.

How to set annotation's values according to properties?

I am using Hibernate's ORM and the Hibernate generator to generate the entities with annotations. I need to switch databases frequently (dev/release), so I have to change the entity's annotation every time. I want to know if there is a way to configure it.
@Entity
@Table(name = "my", catalog = "dev_db")
public class MyEntity {
}
As you can see, I have to change the catalog every time. How can I configure it according to a jdbc.properties?
You can use interceptors to modify the SQL generated by Hibernate:
public String onPrepareStatement(String sql) {
    String superSQL = super.onPrepareStatement(sql);
    // replace all catalog occurrences in superSQL with the desired value
    return superSQL;
}
See e.g. Add a column to all MySQL Select Queries in a single shot
Your interceptor can read the catalog value from config and change the SQL.
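A hedged sketch of the rewrite such an interceptor's onPrepareStatement could delegate to; the "app.catalog" property name is an assumption, and "dev_db" is the catalog from the question:

```java
// Sketch: swap the hard-coded catalog qualifier "dev_db." for the value of
// the (hypothetical) app.catalog system property, read from config at runtime.
public class CatalogRewriter {
    public static String rewriteCatalog(String sql) {
        String target = System.getProperty("app.catalog", "dev_db");
        // \b limits the match to "dev_db" used as a qualifier prefix
        return sql.replaceAll("\\bdev_db\\.", target + ".");
    }
}
```

In a real interceptor you would call this from onPrepareStatement and read the target catalog from jdbc.properties instead of a system property.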

How do I configure JPA table name at runtime?

I have an issue where I have only one database to use, but multiple servers, and I want each server to use a different table name.
Right now my class is configured as:
@Entity
@Table(name="loader_queue")
public class LoaderQueue { ... }
I want to be able to have dev1 server point to loader_queue_dev1 table, and dev2 server point to loader_queue_dev2 table for instance.
Is there a way I can do this, with or without annotations?
I want to be able to have one single build and then at runtime use something like a system property to change that table name.
For Hibernate 4.x, you can use a custom naming strategy that generates the table name dynamically at runtime. The server name could be provided by a system property and so your strategy could look like this:
public class ServerAwareNamingStrategy extends ImprovedNamingStrategy {

    @Override
    public String classToTableName(String className) {
        String tableName = super.classToTableName(className);
        return resolveServer(tableName);
    }

    private String resolveServer(String tableName) {
        StringBuilder tableNameBuilder = new StringBuilder();
        tableNameBuilder.append(tableName);
        tableNameBuilder.append("_");
        tableNameBuilder.append(System.getProperty("SERVER_NAME"));
        return tableNameBuilder.toString();
    }
}
And supply the naming strategy as a Hibernate configuration property:
<property
name="hibernate.ejb.naming_strategy"
value="my.package.ServerAwareNamingStrategy"
/>
I would not do this. It is very much against the grain of JPA and very likely to cause problems down the road. I'd rather add a layer of views to the tables providing unified names to be used by your application.
But you asked, so here are some ideas how it might work:
You might be able to create the mapping for your classes, completely by code. This is likely to be tedious, but gives you full flexibility.
You can implement a NamingStrategy which translates your class name to table names, and depends on the instance it is running on.
You can change your code during the build process to build two (or more) artefacts from one source.

Unit testing Hibernate with multiple database catalogs

I have an issue testing a Hibernate application which queries multiple catalogs/schemas.
The production database is Sybase and in addition to entities mapped to the default catalog/schema there are two entities mapped as below. There are therefore three catalogs in total.
@Table(catalog = "corp_ref_db", schema = "dbo", name = "WORKFORCE_V2")
public class EmployeeRecord implements Serializable {
}
@Table(catalog = "reference", schema = "dbo", name = "cntry")
public class Country implements Serializable {
}
This all works in the application without any issues. However when unit testing my usual strategy is to use HSQL with hibernate's ddl flag set to auto and have dbunit populate the tables.
This all works fine when the tables are all in the same schema.
However, since adding these additional tables, testing is broken, as the DDL will not run because HSQL only supports one catalog:
create table corp_ref_db.dbo.WORKFORCE_V2
user lacks privilege or object not found: CORP_REF_DB
If there were only two catalogs, then I think it might be possible to get round this by changing the default catalog and schema in the HSQL database to the one explicitly defined.
Is there any other in-memory database for which this might work, or is there any strategy for getting the tests to run in HSQL?
I had thought of providing an orm.xml file which specified the default catalog and schema (overriding any annotations and having all the defined tables created in the default catalog/schema); however, these overrides do not seem to be observed when the DDL is executed, i.e. I get the same error as above.
Essentially, then I would like to run my existing tests and either somehow have the tables created as they are defined in the mappings or somehow override the catalog/schema definitions at the entity level.
I cannot think of any way to achieve either outcome. Any ideas?
I believe H2 supports catalogs. I haven't used them in it myself, but there's a CATALOGS table in the Information Schema.
I managed to achieve something like this in H2 via the IGNORE_CATALOGS property (version 1.4.200).
However, the URL example from their docs did not seem to work for me, so I added a statement in my schema.xml:
SET IGNORE_CATALOGS = true;