How to access multiple database schemas from single persistence unit? - java

I am working on a project that currently has a single persistence unit, as there was only one database schema in my DB. Now I need to split that schema into two separate schemas. So I made two different ORM files and mapped them into the persistence unit. When I build my EJB project it works fine, but as soon as I build my WEB project it gives me compilation errors.
So, is there any other way to manage two different schemas together?
Note that the two schemas are related by foreign keys.
Please help me out.

If you are using Oracle with SCHEMA_1 and SCHEMA_2, you can define synonyms:
As SCHEMA_2, grant the appropriate privileges to SCHEMA_1.
Define synonyms in SCHEMA_1 for the tables in SCHEMA_2.
Now in SCHEMA_1 you should be able to use the SCHEMA_2 tables as if they were local.
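As a sketch, assuming an illustrative table named EMPLOYEES (adjust the granted privileges to what the application actually needs):

```sql
-- Run as SCHEMA_2: allow SCHEMA_1 to read and write the table
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA_2.EMPLOYEES TO SCHEMA_1;

-- Run as SCHEMA_1: make the table visible without a schema prefix
CREATE SYNONYM EMPLOYEES FOR SCHEMA_2.EMPLOYEES;
```

Entities in a persistence unit that connects as SCHEMA_1 can then map EMPLOYEES without any schema qualifier, and the foreign keys between the two schemas keep working because everything still lives in one database.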

Related

How do I migrate data from my old schema database to a new schema with a different database connection

How do I migrate data from tables belonging to schema A of database A to tables belonging to schema B of database B, in a Maven project?
Can someone tell me the ways to do it?
I have already written a few SQL scripts and executed them in the SQL editor of schema A. (I just wanted to check whether the scripts I wrote were correct, so I created the schema B tables inside schema A.) Now, how do I actually perform this migration from the tables of schema A to the tables of schema B in a Java way only?
Note: the design of these tables has changed, hence they belong to different schemas across different databases.
You can use an INSERT INTO ... SELECT statement: https://www.oracletutorial.com/oracle-basics/oracle-insert-into-select/
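In Java-only terms, a minimal sketch is to build the INSERT INTO ... SELECT statement and run it over a JDBC connection. The schema, table, and column names below are illustrative placeholders, not taken from the question:

```java
import java.util.List;

public class SchemaMigration {
    // Builds an INSERT INTO ... SELECT statement that copies rows from a
    // table in the source schema into the same-named table in the target
    // schema. Column list is passed in explicitly because the table design
    // changed between schemas.
    static String buildInsertSelect(String targetSchema, String sourceSchema,
                                    String table, List<String> columns) {
        String cols = String.join(", ", columns);
        return "INSERT INTO " + targetSchema + "." + table + " (" + cols + ") "
             + "SELECT " + cols + " FROM " + sourceSchema + "." + table;
    }

    public static void main(String[] args) {
        String sql = buildInsertSelect("SCHEMA_B", "SCHEMA_A",
                "CUSTOMERS", List.of("ID", "NAME"));
        System.out.println(sql);
    }
}
```

The generated statement can then be executed with `connection.createStatement().executeUpdate(sql)`. Note this only works when both schemas are reachable from a single connection (same instance, or via a database link); when the two databases are truly separate, you instead SELECT over a connection to A and batch-INSERT over a connection to B.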

Spring Boot SQL database DML and DDL scripts

How can I define some schema and data to be inserted into a SQL database in Spring Boot?
Can I also do this for embedded databases?
For example, I am using two databases and I want to populate some data or define some schema and apply it to the different databases before the application starts.
A file named import.sql in the root of the classpath is executed on startup if Hibernate creates the schema from scratch (that is, if the ddl-auto property is set to create or create-drop). This can be useful for demos and for testing if you are careful but is probably not something you want to be on the classpath in production. It is a Hibernate feature (and has nothing to do with Spring).
You can take a look at the Spring docs.
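Besides import.sql, Spring Boot itself picks up schema.sql (DDL) and data.sql (DML) from the classpath root on startup. The file contents here are illustrative:

```sql
-- src/main/resources/schema.sql
CREATE TABLE country (id INT PRIMARY KEY, name VARCHAR(100));

-- src/main/resources/data.sql
INSERT INTO country (id, name) VALUES (1, 'France');
```

For embedded databases (H2, HSQLDB, Derby) these scripts run by default; for a regular database you typically have to opt in (spring.datasource.initialization-mode=always in older Spring Boot versions, spring.sql.init.mode=always in newer ones). With two databases, only the auto-configured primary DataSource gets this treatment; a second DataSource is defined by you, so its scripts have to be wired up manually (for example with a DataSourceInitializer bean).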

Compile Dropwizard Code without migration.xml

I have a finalized database in SQL Server containing 50+ tables, and I need to connect it to my Dropwizard code.
I am new to Java, so my understanding of migrations.xml is that it is used to create the tables in the database, and that if any change to the database is needed, it is applied through migrations.xml.
So, since I don't need any changes to the database (as stated earlier, it is finalized),
can I skip the migrations.xml file?
I need some expert advice, please.
If you are handling your database changes elsewhere, then you have no need for any migration XML files within your Dropwizard project. It's an optional module; you don't need to use it. You don't even need to include the dropwizard-migrations dependency if you don't want database updates in your Dropwizard project. You can still connect to your database fine within Dropwizard. The docs provide examples using the dropwizard-jdbi and dropwizard-hibernate modules.
To connect to your database, add the appropriate code to your Java configuration class and YAML config as explained in the docs:
jdbi
http://www.dropwizard.io/0.9.2/docs/manual/jdbi.html
hibernate
http://www.dropwizard.io/0.9.2/docs/manual/hibernate.html
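For example, following the docs, the YAML config gets a database block that Dropwizard binds to a DataSourceFactory in your configuration class. The driver, credentials, and URL below are illustrative values for SQL Server:

```yaml
database:
  driverClass: com.microsoft.sqlserver.jdbc.SQLServerDriver
  user: appuser
  password: secret
  url: jdbc:sqlserver://localhost:1433;databaseName=mydb
```

In the configuration class you expose it as a `DataSourceFactory` field annotated with `@JsonProperty("database")`, and pass it to the JDBI or Hibernate bundle in your application's `initialize`/`run` methods. No migrations are involved anywhere in that path.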

JPA: Map multiple Oracle users on single persistence unit

I am using EclipseLink 2.5.2 (JPA 2.1), Spring 4.1.5, and I am deploying on Weblogic 12 and Oracle 12c.
I need to deploy my application to use 2 schemas (2 users on the same DB). The first contains the application data, the second contains lookup data which will never change. This is a hard requirement set by my client (the lookup schema may be used by other applications), however I know that they will be on the same Oracle instance. My JPA data model contains entities from both schemas and references between them. Likewise, at the DB level there are FKs from the data schema to the lookup schema.
I would like to:
map my entities in a way that will abstract away the fact that they reside on a different schema (prefixing the generated SQL queries with the user will be sufficient)
build a war file that is portable (no schema is hardcoded)
avoid synonyms, they are hard to maintain and the 2 schemas have a couple of metadata tables with the same name
My current solution:
I have a single persistence unit with all the entities from both schemas. I added an orm.xml for the lookup entities, where I define their schema at build time through Maven:
<entity class="my.package.lookup.ActionTaken">
<table name="ACTION_TAKEN" schema="${db.lookup.username}"/>
</entity>
I do this to avoid hardcoding the lookup schema in the @Table annotation on the lookup entities.
This works well, and the generated SQL has the correct prefix for tables in the lookup schema. However, as the lookup schema is defined at build time, the resulting war file is not portable.
Any thoughts on how to achieve this?
Some more thoughts:
I currently have a single persistence unit. I don't think that multiple persistence units would work well with entities from the first persistence unit referencing entities from the second.
I tried to have Spring filter the orm.xml file (i.e. I could define the lookup schema in a Spring profile), but Spring seems to be able to filter its own configuration only.
EclipseLink has its own composite persistence unit, but I am ruling it out because joins across tables in different data sources are not supported.
If you can use the same datasource to access the different schemas, then you can change the schema name using EclipseLink's customizers as described here: http://eclipse.org/eclipselink/documentation/2.5/jpa/extensions/a_customizer.htm .
You will need to change the table/schema name on both the entity's descriptor as well as any 1:M and M:M mappings that use a join table.
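A minimal sketch of that approach, assuming EclipseLink's DescriptorCustomizer API and a system property named lookup.schema supplied at deploy time (the property name and default are assumptions, not from the question):

```java
import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.internal.helper.DatabaseTable;

// Attached to each lookup entity via @Customizer(LookupSchemaCustomizer.class);
// resolves the schema name at runtime instead of baking it in at build time,
// which keeps the war file portable.
public class LookupSchemaCustomizer implements DescriptorCustomizer {
    @Override
    public void customize(ClassDescriptor descriptor) {
        String schema = System.getProperty("lookup.schema", "LOOKUP");
        for (DatabaseTable table : descriptor.getTables()) {
            table.setTableQualifier(schema);
        }
    }
}
```

As the answer notes, this covers the entity's own tables; join tables belonging to 1:M and M:M mappings into the lookup schema need their qualifier adjusted in the same customizer as well.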

Autocreate Spring Entity "authorities" during testing

When running unit tests with Spring Security & Hibernate, none of the security entities "users" or "authorities" are being auto-created. What I have done so far is to write a "user" BO that triggers generation of the appropriate table. However, I am stuck with the authorities:
(as advised by http://java.dzone.com/articles/getting-started-spring for postgresql)
CREATE TABLE authorities
(
username character varying(50) NOT NULL,
authority character varying(50) NOT NULL,
CONSTRAINT fk_authorities_users FOREIGN KEY (username)
REFERENCES users (username) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
);
Question: With Hibernate/JPA2, what is the appropriate syntax to create a BO representing this query?
Question: Actually, I do not want to create the entry using my own BO. Any better way to make Spring Security or Hibernate create all required tables during test run?
Thanks
Set the Hibernate property hibernate.hbm2ddl.auto to update, for example. This should let Hibernate automatically create (and update) the tables it needs.
<property name="hibernate.hbm2ddl.auto" value="update" />
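For the first question, a sketch of an entity mapping the authorities table could look like the following, assuming JPA 2 (javax.persistence) annotations; the class names are arbitrary, and since the table has no single-column primary key, a composite ID class is used:

```java
import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.IdClass;
import javax.persistence.Table;

@Entity
@Table(name = "authorities")
@IdClass(AuthorityId.class)
public class Authority {
    @Id
    @Column(name = "username", length = 50, nullable = false)
    private String username;

    @Id
    @Column(name = "authority", length = 50, nullable = false)
    private String authority;
}

// Composite key: one row per (username, authority) pair
class AuthorityId implements Serializable {
    String username;
    String authority;
}
```

With hbm2ddl.auto set to update or create, Hibernate will generate the table from this mapping; mapping username as a @ManyToOne to the user entity instead would also make Hibernate emit the foreign key to users.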
Actually, I do not want to create the entry using my own BO. Any better way to make Spring Security or Hibernate create all required tables during test run?
If you don't plan to use Hibernate to interact with these tables, it makes indeed little sense to have Entities for them.
My suggestion would thus be to place the Spring Security table-creation script in an import.sql file, put this file at the root of the classpath, and Hibernate will automatically execute it after schema export. See Spring/Hibernate testing: Inserting test data after DDL creation for details (just put each DDL statement on a single line).
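For example, an import.sql at the classpath root carrying the standard Spring Security tables, with each statement on a single line as Hibernate requires:

```sql
CREATE TABLE users (username varchar(50) NOT NULL PRIMARY KEY, password varchar(50) NOT NULL, enabled boolean NOT NULL);
CREATE TABLE authorities (username varchar(50) NOT NULL, authority varchar(50) NOT NULL, CONSTRAINT fk_authorities_users FOREIGN KEY (username) REFERENCES users (username));
```

Hibernate only runs this file when hbm2ddl.auto is create or create-drop, i.e. when it exports the schema from scratch.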
Thanks, Pascal, this is just what I have been looking for; however, it does not work. I use Maven and put import.sql into the resources dir root (content: CREATE TABLE justatest (aaa character varying(50) NOT NULL);). I also set . Running mvn test copies import.sql to the target dir... but nothing happens. logback[debug] does not mention import.sql at all. Any idea where I am going wrong? (Hibernate v3.5.1-Final)
I'm using this feature with Maven and I cannot reproduce your problem. I have hbm2ddl.auto set to create, my import.sql file is in src/test/resources and it gets executed as expected at the end of the schema export when running tests. Here is the log entry I get (using logback):
20:44:37.949 [main] INFO o.h.tool.hbm2ddl.SchemaExport - Executing import script: /import.sql
