I am using EclipseLink 2.5.2 (JPA 2.1), Spring 4.1.5, and I am deploying on WebLogic 12 and Oracle 12c.
I need to deploy my application with 2 schemas (2 users on the same DB). The first contains the application data; the second contains lookup data which will never change. This is a hard requirement set by my client (the lookup schema may be used by other applications); however, I know that both will be on the same Oracle instance. My JPA data model contains entities from both schemas and references between them. Likewise, at the DB level there are FKs from the data schema to the lookup schema.
I would like to:
map my entities in a way that abstracts away the fact that some of them reside in a different schema (prefixing the generated SQL queries with the user will be sufficient)
build a war file that is portable (no schema is hardcoded)
avoid synonyms, they are hard to maintain and the 2 schemas have a couple of metadata tables with the same name
My current solution:
I have a single persistence unit with all the entities from both schemas. I added an orm.xml for the lookup entities, where I define their schema at build time through Maven:
<entity class="my.package.lookup.ActionTaken">
<table name="ACTION_TAKEN" schema="${db.lookup.username}"/>
</entity>
I do this to avoid hardcoding the lookup schema in the @Table annotation on the lookup entities.
This works well: the generated SQL has the correct prefix for tables in the lookup schema. The problem is, however, that as the lookup schema is defined at build time, the resulting war file is not portable.
Any thoughts on how to achieve this?
Some more thoughts:
I currently have a single persistence unit. I don't think that multiple persistence units would work well with entities from the first persistence unit referencing entities from the second.
I tried to have Spring filter the orm.xml file (i.e. I could define the lookup schema in a Spring profile), but Spring seems to be able to filter only its own configuration.
EclipseLink has its own Composite persistence unit, but I am ruling it out because:
Joins across tables in different data sources are not supported.
If you can use the same datasource to access the different schemas, then you can change the schema name using EclipseLink customizers, as described here: http://eclipse.org/eclipselink/documentation/2.5/jpa/extensions/a_customizer.htm
You will need to change the table/schema name both on the entity's descriptor and on any 1:M and M:M mappings that use a join table.
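A minimal sketch of such a customizer, assuming the schema name is resolved at runtime rather than at build time (the lookup.schema system property and the class name are placeholders of my own, not standard EclipseLink settings). It would be registered per entity with @Customizer(LookupSchemaCustomizer.class):
import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.internal.helper.DatabaseTable;
import org.eclipse.persistence.mappings.DatabaseMapping;
import org.eclipse.persistence.mappings.ManyToManyMapping;

public class LookupSchemaCustomizer implements DescriptorCustomizer {

    @Override
    public void customize(ClassDescriptor descriptor) throws Exception {
        // Resolve the lookup schema from deployment configuration at runtime
        // (a system property here; JNDI or an env variable would also work).
        String schema = System.getProperty("lookup.schema");

        // Qualify the entity's own table(s).
        for (DatabaseTable table : descriptor.getTables()) {
            table.setTableQualifier(schema);
        }
        // Also qualify join tables used by M:M mappings, as noted above.
        for (DatabaseMapping mapping : descriptor.getMappings()) {
            if (mapping instanceof ManyToManyMapping) {
                ((ManyToManyMapping) mapping).getRelationTable().setTableQualifier(schema);
            }
        }
    }
}
Because the qualifier is applied when the session initializes, the same war can be deployed against any schema name, which addresses the portability requirement.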
Related
Hopefully I won't misuse any technical terms.
We have a DB2 schema with many tables, which we will connect with the appropriate relationships (one-to-many, many-to-many, etc.) in the coming days.
The goal is to reproduce this exact schema on the Java side using Spring JPA. We want to validate the schema created with Spring JPA so that it matches the database schema.
I'm using the property spring.jpa.hibernate.ddl-auto=validate, and it seems to work in one direction only: it checks whether the database schema satisfies the JPA schema. It doesn't seem to work the other way around: the database schema has tables that are not defined on the Java side, yet the application runs successfully.
Can we somehow validate the JPA schema so that it matches the database schema exactly?
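As far as I know, validate only checks that every mapped entity has a matching table. A rough sketch of how one could cover the opposite direction, comparing the JPA metamodel against JDBC metadata (the class and method names are made up for illustration, and the table-name derivation is deliberately simplified):
import java.sql.Connection;
import java.sql.ResultSet;
import java.util.HashSet;
import java.util.Set;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Table;
import javax.persistence.metamodel.EntityType;
import javax.sql.DataSource;

public class SchemaCoverageCheck {

    // Returns DB tables with no corresponding JPA entity. Simplifying
    // assumptions: one table per entity, default naming when @Table is
    // absent, and join/sequence tables are not accounted for.
    public static Set<String> unmappedTables(EntityManagerFactory emf,
                                             DataSource ds,
                                             String schema) throws Exception {
        Set<String> mapped = new HashSet<>();
        for (EntityType<?> e : emf.getMetamodel().getEntities()) {
            Table t = e.getJavaType().getAnnotation(Table.class);
            String name = (t != null && !t.name().isEmpty()) ? t.name() : e.getName();
            mapped.add(name.toUpperCase());
        }
        Set<String> extra = new HashSet<>();
        try (Connection c = ds.getConnection();
             ResultSet rs = c.getMetaData().getTables(null, schema, "%", new String[] {"TABLE"})) {
            while (rs.next()) {
                String table = rs.getString("TABLE_NAME").toUpperCase();
                if (!mapped.contains(table)) {
                    extra.add(table);
                }
            }
        }
        return extra; // fail a startup check or a test if this is non-empty
    }
}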
The problem is how to implement tracking of data changes on, for example, master-detail tables, i.e. two entities in a one-to-many relationship, in Spring Boot/Spring Data.
After storing data, I need to be able to get the master entity with its details at a specific version, and to be able to revert it to a specific version.
You can use Hibernate Envers to audit and version changes to your persistent entities.
The Envers project aims to enable easy auditing of persistent classes. All that you have to do is annotate your persistent class, or some of its properties that you want to audit, with @Audited. For each audited entity, a table will be created which holds the history of changes made to the entity. You can then retrieve and query historical data without much effort.
Similarly to Subversion, the library has a concept of revisions. Basically, one transaction is one revision (unless the transaction didn't modify any audited entities). As the revisions are global, having a revision number, you can query for various entities at that revision, retrieving a (partial) view of the database at that revision. You can find a revision number having a date, and the other way round, you can get the date at which a revision was committed.
The library works with Hibernate and requires Hibernate Annotations or Entity Manager. For the auditing to work properly, the entities must have immutable unique identifiers (primary keys). You can use Envers wherever Hibernate works: standalone, inside JBoss AS, with JBoss Seam or Spring. (source)
You can query for historic data in a way similar to querying data via the Hibernate criteria API. The audit history of an entity can be accessed using the AuditReader interface, which can be obtained with an open EntityManager or Session via the AuditReaderFactory. (source)
With Hibernate Envers you can record your data changes and then access them, either through your persistence context or via SQL, in order to apply your version changes using the provided revision id. With it you have 80% of the task done.
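To make this concrete, a minimal sketch for a master-detail case (the Master entity and the revert-by-merge shortcut are illustrative assumptions, not the only way to do it):
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;
import org.hibernate.envers.Audited;

@Entity
@Audited // Envers creates a MASTER_AUD table and records every change in it
class Master {
    @Id
    Long id;
    String name;
    // a @OneToMany to an equally @Audited Detail entity would be versioned too
}

class MasterVersioning {

    // All revisions at which the given master changed.
    List<Number> revisions(EntityManager em, Long masterId) {
        return AuditReaderFactory.get(em).getRevisions(Master.class, masterId);
    }

    // Load the master as it looked at a revision and write that old state
    // back as the new current state ("revert").
    Master revertTo(EntityManager em, Long masterId, Number revision) {
        AuditReader reader = AuditReaderFactory.get(em);
        Master historical = reader.find(Master.class, masterId, revision);
        return em.merge(historical);
    }
}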
Check these tutorials:
Setting up Hibernate Envers with Spring Boot
Auditing with JPA, Hibernate, and Spring Data JPA
Hibernate Envers: Simple Implementations
If you use JPA, object auditing frameworks like Hibernate Envers or Javers might help.
I am creating a Java application that uses a JPA-annotated model (the core model). On top of these entities, at runtime, I would like to add a jar file from an external source that contains some other JPA class definitions and mappings. The imported archive might change its class structure and mappings, but it is the application's duty to refresh the entire schema when that happens.
However, when trying to add the jar to the Hibernate Configuration, I get a
org.hibernate.service.spi.ServiceException: Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
The inner exception is related to the Hibernate dialect:
org.hibernate.HibernateException: Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set
However, I am sure that I specified the hibernate.dialect property in the persistence.xml file. Below is the code I am using in my application:
org.hibernate.cfg.Configuration cfg = new org.hibernate.cfg.Configuration();
cfg.addJar(new java.io.File("path/to/jar.jar")); // register the external jar's entity classes
org.hibernate.SessionFactory sessionFactory = cfg.buildSessionFactory(); // throws the ServiceException above
What am I doing wrong?
Also, could you please tell me whether you consider this a good approach to creating a dynamically updatable schema shared between multiple applications?
I managed to solve the problem. The main point is that, when using EntityManagerFactory (the JPA API), the Hibernate persistence provider reads only the persistence.xml configuration files and loads the persistence units specified there.
When using the Hibernate API Configuration, however, Hibernate does not read persistence.xml, so one has to specify all aspects such as the dialect, connection parameters, etc. explicitly in the hibernate.cfg.xml file.
However, I managed to work around this issue. In the dynamically loaded jar file, one must export the folders (the META-INF folder especially) and configure a persistence.xml file in there too. Note that giving two persistence units the same name does not merge their classes or any of their other properties: by default, Hibernate loads the first persistence unit found and treats identically named ones as distinct.
So I created a more flexible core schema that allows access to multiple persistence units while caching them in something similar to dictionaries. For each schema in my application, I load the corresponding persistence unit and store all of them in a dictionary-style container, which also lets the application get notified should the underlying jar file change.
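A sketch of the dictionary-style container described above (class and method names are mine, not from any framework): one EntityManagerFactory per persistence unit, rebuilt when the jar changes.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class PersistenceUnitRegistry {

    private final Map<String, EntityManagerFactory> factories = new ConcurrentHashMap<>();

    // Lazily create and cache one factory per persistence unit name.
    public EntityManagerFactory get(String unitName) {
        return factories.computeIfAbsent(unitName, Persistence::createEntityManagerFactory);
    }

    // Call when the underlying jar changed: drop the stale factory so the
    // next get() rebuilds it against the new class definitions and mappings.
    public void refresh(String unitName) {
        EntityManagerFactory old = factories.remove(unitName);
        if (old != null) {
            old.close();
        }
    }
}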
We have two schemas in one Oracle database. We are writing a Spring/Hibernate application which will write to tables in both schemas in one operation.
My question is: can one datasource write to both schemas in one transaction, and roll back all updates in both schemas if required?
We are in a non-Java-EE environment, using just Tomcat, so there is no out-of-the-box support for global transactions/JTA. I know that, if global transactions are required, we could use Spring's support for JTA (and Atomikos).
However, are global transactions required in the above situation, given that both schemas are in one database? Is this a use case for JTA?
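For what it's worth, the premise can be sketched in plain JDBC: a single connection to one Oracle instance can update tables in both schemas inside one local transaction, and a single commit or rollback covers both (the table names below are made up, and the connecting user is assumed to have privileges on both schemas):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class TwoSchemaTransaction {

    public void updateBothSchemas(DataSource ds) throws SQLException {
        try (Connection con = ds.getConnection()) {
            con.setAutoCommit(false); // one local transaction on one connection
            try (Statement st = con.createStatement()) {
                st.executeUpdate("UPDATE schema_one.orders SET status = 'SHIPPED' WHERE id = 1");
                st.executeUpdate("INSERT INTO schema_two.order_log (order_id, status) VALUES (1, 'SHIPPED')");
                con.commit();   // both updates become visible together
            } catch (SQLException e) {
                con.rollback(); // both updates are undone together
                throw e;
            }
        }
    }
}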
I am working on a project in which I currently have a single persistence unit file, as I have only one database schema in my db. Now I need to separate that schema into two different schemas, so I made two different ORM files and mapped them in the PU. Now when I build my EJB project it works fine, but as soon as I build my WEB project it starts giving me compilation errors.
So, is there any other way to manage two different schemas together?
Note that both schemas are related by foreign keys.
Please help me out.
If you are using Oracle and you have SCHEMA_1 and SCHEMA_2, you can define synonyms:
As SCHEMA_2, grant the appropriate privileges to SCHEMA_1
Define synonyms in SCHEMA_1 for the tables in SCHEMA_2
Now in SCHEMA_1 you should be able to use the SCHEMA_2 tables as if they were local; a sketch of the DDL follows below.
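A sketch of the one-time DDL, issued here through JDBC so it can live in a migration step (LOOKUP_TABLE is a made-up name; the GRANT must run as SCHEMA_2 or a DBA, and the CREATE SYNONYM as SCHEMA_1 or a user with the CREATE ANY SYNONYM privilege):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class SynonymSetup {

    public void exposeLookupTable(DataSource privilegedDs) throws SQLException {
        try (Connection con = privilegedDs.getConnection();
             Statement st = con.createStatement()) {
            // Step 1: let SCHEMA_1 read the table owned by SCHEMA_2.
            st.execute("GRANT SELECT ON schema_2.lookup_table TO schema_1");
            // Step 2: hide the owning schema behind a synonym in SCHEMA_1.
            st.execute("CREATE SYNONYM schema_1.lookup_table FOR schema_2.lookup_table");
        }
    }
}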