How to stop the persistence provider from altering the database, JPA - Java

I am using Camel and OpenJPA as the persistence provider, but I don't want ALTER statements to be run on production.
Snapshot of persistence.xml:
<persistence-unit name="camel-openjpa-oracle-alert" transaction-type="RESOURCE_LOCAL">
    ...
    <provider>
        org.apache.openjpa.persistence.PersistenceProviderImpl
    </provider>
    <properties>
        <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=false)" />
    </properties>
    ...
</persistence-unit>
What value do we have to put for openjpa.jdbc.SynchronizeMappings so that ALTER commands are not executed?
I searched but was unable to find any such value.

It would be nice to know a little more about what you are doing and why you need to use SynchronizeMappings. The ForeignKeys flag tells OpenJPA whether to read your schema and determine if you have any database FKs defined (i.e. so OpenJPA knows about these FKs and can order SQL properly to honor parent/child FK constraints). That is a perfectly valid use of SynchronizeMappings. However, by using 'buildSchema', you are specifically telling OpenJPA to make "the database schema match your existing mappings" (that phrase is lifted from this OpenJPA doc):
http://openjpa.apache.org/builds/1.2.3/apache-openjpa/docs/ref_guide_mapping.html#ref_guide_mapping_synch
Therefore, you are specifically telling OpenJPA to update your database schema. You can remove 'buildSchema' if you don't want OpenJPA to update your schema to match your domain model. That is, try:
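For example (a sketch: with the property simply removed, OpenJPA's default is to make no schema changes at all):
<properties>
    <!-- openjpa.jdbc.SynchronizeMappings removed: OpenJPA no longer issues CREATE/ALTER statements -->
</properties>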
Or you could use 'validate' in place of 'buildSchema'. However, as the above doc states, OpenJPA will then throw an exception if it finds a schema/domain mismatch, which may not be what you want. I suggest you read the above doc and look at the options available to you.
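A sketch of that variant (plain 'validate' with no plugin arguments; see the doc above for the arguments the actions accept):
<property name="openjpa.jdbc.SynchronizeMappings" value="validate" />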
Thanks,
Heath Thomann

Related

Make OpenJPA cache only chosen tables

How can I set up the OpenJPA cache so that it works only for chosen entities? Maybe I need to use some annotation on them?
my persistence.xml contains:
<property name="openjpa.DataCache" value="true"/>
<property name="openjpa.RemoteCommitProvider" value="sjvm"/>
but those settings work for all my entities (tables), so I want to cache, for example, only this table:
@Entity(name = "IsoCountryCodes")
@Table(name = "ISO_COUNTRY_CODES", schema = "ANALYSIS")
@DataCache(timeout=120000)
public class IsoCountryCodes implements Serializable {
    ....
}
But @DataCache doesn't fix it; it only sets the timeout of the cache entries.
UPDATE:
I cannot use OpenJPA 2.0 because my project is deployed on WebLogic 10.3.6, which provides Kodo with OpenJPA 1.3.
Also I tried to include only chosen entities by adding the property:
<property name="openjpa.DataCache" value="true(Types=foo.bar.FullTimeEmployee)"/>
but got this error:
org.apache.openjpa.lib.util.ParseException: There was an error while setting up the configuration plugin option "DataCache". The plugin was of type "class kodo.datacache.KodoConcurrentDataCache". The plugin property "Type" had no corresponding setter method or accessible field. All possible plugin properties are: [CacheSize, EvictionSchedule, FailFast, NAME_DEFAULT, Name, SoftReferenceSize].
Can you help me? Maybe you know other ways to exclude or include entities from caching, perhaps with Ehcache?
<property name="openjpa.DataCache" value="true"/>
That enables the L2 cache for all entities. If you are using JPA 2.0, try adding <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode> to turn the cache on selectively. Also, replace the @DataCache annotation with a @javax.persistence.Cacheable annotation.
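A minimal sketch of that combination under JPA 2.0 (the unit name is made up; the entity is the one from the question):
<persistence-unit name="my-unit">
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
    ...
</persistence-unit>
@javax.persistence.Cacheable
@Entity(name = "IsoCountryCodes")
@Table(name = "ISO_COUNTRY_CODES", schema = "ANALYSIS")
public class IsoCountryCodes implements Serializable {
    ....
}
With ENABLE_SELECTIVE, only entities explicitly marked @Cacheable end up in the shared cache; everything else is left out.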

Inconsistency checking between entity and table

I'm looking for an easy way to check for inconsistency between entity and table in my JPA application.
After changing a table definition (e.g. column name or type, adding a new column, deleting a column), I sometimes forget to change the entity definition.
So I'd like to be notified if the entity and table definitions are inconsistent.
Is some tool available? An Eclipse plugin is preferable, but others are also worth considering.
I know Dali, but this tool does not suit me because I would have to modify Dali's output.
(I'm using class inheritance, as in this question, and so on.)
Your JPA implementation should provide a property in persistence.xml to do this for you. For example, Hibernate provides the hibernate.hbm2ddl.auto property, which allows you to create the schema, update it, or just validate it.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<persistence ...>
    <persistence-unit ...>
        <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
        <properties>
            <!-- ... -->
            <property name="hibernate.hbm2ddl.auto" value="validate"/>
        </properties>
    </persistence-unit>
</persistence>
This runs the schema validation process on EntityManagerFactory initialization.
Check on your current JPA implementation documentation to find the equivalent property.
Good luck!

Merge an entity, change its ID, merge again: causes a "mapped to a primary key column in the database. Updates are not allowed" error

I have a JPA program with EclipseLink as the persistence provider. When I merge a user entity, change its ID, and try to merge the same user instance again, an error is thrown. I have rewritten my code to illustrate the problem in the simplest way:
User user = userManager.find(1);
userManager.merge(user);
System.out.println("User is managed? " + userManager.contains(user));
user.setId(2);
userManager.merge(user);
The above code is not in a transaction context. userManager is a stateless session bean with an EntityManager injected. When executed, the console prints:
User is managed? false
Exception [EclipseLink-7251] (Eclipse Persistence Services - 2.1.3.v20110304-r9073): org.eclipse.persistence.exceptions.ValidationException
Exception Description: The attribute [id] of class [demo.model.User] is mapped to a primary key column in the database. Updates are not allowed.
The exception occurs at the second merge() invocation.
If I create a new user, set its ID, and merge it, it works:
User user = userManager.find(1);
userManager.merge(user);
System.out.println("User is managed? " + userManager.contains(user));
User newUser = new User();
newUser.setId(2);
userManager.merge(newUser);
So what is the difference between the first scenario and the second one? According to the JPA specification, as long as the entity is in the detached state, the merge should succeed, right?
(Assuming the entity with ID=2 exists)
Why does the EclipseLink provider seem to be bothered by the fact that the user entity has been merged before?
Update: It seems to be a bug in EclipseLink. I have replaced the persistence provider, switching from EclipseLink to Hibernate:
I changed
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
to
<provider>org.hibernate.ejb.HibernatePersistence</provider>
No error has been thrown.
The reason is that the ID may be inserted/defined, as you do in your second example, but not changed/updated, as you try in your first example. The JPA provider tries to reflect the change in the database and fails.
The JPA 2 spec, section 2.4, says:
The application must not change the value of the primary key. The behavior is undefined if this occurs.
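If you really do need the row under a new ID, a workaround consistent with the question's second example (a sketch; the copied fields are hypothetical) is to copy the detached entity into a fresh instance instead of mutating its key:
User copy = new User();          // fresh instance, as in the second example
copy.setId(2);                   // the new primary key
copy.setName(user.getName());    // hypothetical: copy the remaining state field by field
userManager.merge(copy);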
It seems to be a bug in EclipseLink. I have changed the persistence provider from EclipseLink to Hibernate:
from
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
to
<provider>org.hibernate.ejb.HibernatePersistence</provider>
No error has been thrown.
The version of EclipseLink is 2.3.2 (which is shipped with the latest GlassFish application server, 3.1.2).
The version of Hibernate is, as of now, the latest: 4.1.7.
Try <property name="eclipselink.weaving.internal" value="false"/> in persistence.xml, as per:
http://blogs.nologin.es/rickyepoderi/index.php?/archives/95-Weaving-Problem-in-EclipseLink.html
This answer is 4 years late, but anyway.
You can update it by executing regular update queries using SQL, JPQL, or the Criteria API. I find the last one the best.
Here is a code example that can do the trick. I have tried it in a similar situation and it works fine with EclipseLink.
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaUpdate<User> cu = cb.createCriteriaUpdate(User.class);
Root<User> c = cu.from(User.class);
cu.set(User_.id, newId).where( cb.equal(c.get(User_.id), oldId) );
em.createQuery(cu).executeUpdate();
Instead of User_.id you can pass the name of the field as a String, e.g. "id".
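For reference, the JPQL variant of the same bulk update would look roughly like this (a sketch; like any bulk update, it must run inside a transaction and bypasses the persistence context):
em.createQuery("UPDATE User u SET u.id = :newId WHERE u.id = :oldId")
    .setParameter("newId", newId)
    .setParameter("oldId", oldId)
    .executeUpdate();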
Another example: http://www.thoughts-on-java.org/criteria-updatedelete-easy-way-to/

JPA and TopLink: create tables only if they don't already exist?

Looks like JPA is something that makes me ask a lot of questions.
Having added this
<property name="toplink.ddl-generation" value="create-tables"/>
my JPA application always creates tables when running, which results in exceptions if the tables already exist. I would like JPA to check whether the tables already exist and create them only if they don't; however, I could not find a value for the property above which does this.
So if I just turn it off, is there a way to tell JPA manually at some point to create all the tables?
Update: here's the exception I get:
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'tags' already exists
Error Code: 1050
Call: CREATE TABLE tags (ID BIGINT AUTO_INCREMENT NOT NULL, NAME VARCHAR(255), OCCURRENCE INTEGER, PRIMARY KEY (ID))
MySQLSyntaxErrorException?! Now that's wrong for sure
According to http://www.oracle.com/technology/products/ias/toplink/JPA/essentials/toplink-jpa-extensions.html#Java2DBSchemaGen, TopLink does not have an option to update existing tables; I'm not sure I would trust it to do the right thing anyway.
You could configure TopLink to generate a SQL script that you would then have to execute manually to create all tables. The file names and location can be configured like this:
<property name="toplink.ddl-generation" value="create-tables"/>
<property name="toplink.ddl-generation.output-mode" value="sql-script"/>
<property name="toplink.create-ddl-jdbc-file-name" value="createDDL.sql"/>
<property name="toplink.drop-ddl-jdbc-file-name" value="dropDDL.sql"/>
<property name="toplink.application-location" value="/tmp"/>
I would like [my] JPA [provider] to check if the tables already exist and if not create them, however I could not find a value for the property above which does this.
Weird. According to the TopLink Essentials documentation about the toplink.ddl-generation extension, create-tables should leave existing tables unchanged:
TopLink JPA Extensions for Schema Generation
Specify what Data Definition Language (DDL) generation action you want for your JPA entities. To specify the DDL generation target, see toplink.ddl-generation.output-mode.
Valid values:
none - do not generate DDL; no schema is generated.
create-tables - create DDL for non-existent tables; leave existing tables unchanged (see also toplink.create-ddl-jdbc-file-name).
drop-and-create-tables - create DDL for all tables; drop all existing tables (see also toplink.create-ddl-jdbc-file-name and toplink.drop-ddl-jdbc-file-name).
If you are using persistence outside the EJB container and would like to create the DDL files without creating tables, additionally define a Java system property INTERACT_WITH_DB and set its value to false.
Liquibase (http://www.liquibase.org) is good at this. It takes some time to get fully used to it, but I think it's worth the effort.
The Liquibase way is independent of which JPA persistence provider you use. Actually, it's even database agnostic.
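A minimal sketch of a Liquibase changelog for the tags table from the stack trace above (the changeset id, author, and schema version are assumptions); Liquibase records applied changesets, so the table is only created once:
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
    <changeSet id="create-tags" author="demo">
        <createTable tableName="tags">
            <column name="ID" type="BIGINT" autoIncrement="true">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="NAME" type="VARCHAR(255)"/>
            <column name="OCCURRENCE" type="INTEGER"/>
        </createTable>
    </changeSet>
</databaseChangeLog>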

Execute SQL script after JPA/EclipseLink creates tables?

Is there a way to execute a SQL script after EclipseLink has generated the DDL?
In other words, is it possible to use the EclipseLink property "eclipselink.ddl-generation" with "drop-and-create-tables" and have EclipseLink execute another SQL file (to insert some data into the tables just created) after creating the table definitions?
I'm using EclipseLink 2.x and JPA 2.0 with GlassFish v3.
Or can I initialize the tables from a Java method that is called on deployment of the project (a WAR with EJB3)?
I came across this question for the same reasons, trying to find an approach to run an initialization script after DDL generation. I offer this answer to an old question in hopes of shortening the amount of "literary research" for those looking for the same solution.
I'm using GlassFish 4 with its default EclipseLink 2.5 JPA implementation. The new schema generation feature in JPA 2.1 makes it fairly straightforward to specify an "initialization" script to run after DDL generation completes.
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="cbesDatabase" transaction-type="JTA">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <jta-data-source>java:app/jdbc/cbesPool</jta-data-source>
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
            <property name="javax.persistence.schema-generation.create-source" value="metadata"/>
            <property name="javax.persistence.schema-generation.drop-source" value="metadata"/>
            <property name="javax.persistence.sql-load-script-source" value="META-INF/sql/load_script.sql"/>
            <property name="eclipselink.logging.level" value="FINE"/>
        </properties>
    </persistence-unit>
</persistence>
The above configuration generates DDL scripts from metadata (i.e. annotations), after which the META-INF/sql/load_script.sql script is run to populate the database. In my case, I seed a few tables with test data and generate additional views.
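For illustration, hypothetical contents of META-INF/sql/load_script.sql (the table and view names are made up; the script can contain any SQL your database accepts):
INSERT INTO COUNTRY (CODE, NAME) VALUES ('US', 'United States');
INSERT INTO COUNTRY (CODE, NAME) VALUES ('DE', 'Germany');
CREATE VIEW ACTIVE_COUNTRIES AS SELECT * FROM COUNTRY;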
Additional information on EclipseLink's use of JPA's properties can be found in the DDL Generation section of EclipseLink/Release/2.5/JPA21. Likewise, Section 37.5, "Database Schema Creation", in Oracle's Java EE 7 Tutorial and TOTD #187 also offer a quick introduction.
Have a look at "Running a SQL Script on startup in EclipseLink", which describes a solution presented as a kind of equivalent to Hibernate's import.sql feature [1]. Credits to Shaun Smith:
Running a SQL Script on startup in EclipseLink
Sometimes, when working with DDL generation, it's useful to run a script to clean up the database first. In Hibernate, if you put a file called "import.sql" on your classpath, its contents will be sent to the database. Personally I'm not a fan of magic filenames, but this can be a useful feature.
There's no built-in support for this in EclipseLink, but it's easy to do thanks to EclipseLink's high extensibility. Here's a quick solution I came up with: I simply register an event listener for the session postLogin event, and in the handler I read a file and send each SQL statement to the database; nice and clean. I went a little further and supported setting the name of the file as a persistence unit property. You can specify this all in code or in the persistence.xml.
The ImportSQL class is configured as a SessionCustomizer through a persistence unit property which, on the postLogin event, reads the file identified by the "import.sql.file" property. This property is also specified as a persistence unit property, which is passed to createEntityManagerFactory. This example also shows how you can define and use your own persistence unit properties.
import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.eclipse.persistence.config.PersistenceUnitProperties;
import org.eclipse.persistence.config.SessionCustomizer;
import org.eclipse.persistence.sessions.Session;
import org.eclipse.persistence.sessions.SessionEvent;
import org.eclipse.persistence.sessions.SessionEventAdapter;
import org.eclipse.persistence.sessions.UnitOfWork;

public class ImportSQL implements SessionCustomizer {

    private void importSql(UnitOfWork unitOfWork, String fileName) {
        // Open file
        // Execute each line, e.g.,
        // unitOfWork.executeNonSelectingSQL("select 1 from dual");
    }

    @Override
    public void customize(Session session) throws Exception {
        session.getEventManager().addListener(new SessionEventAdapter() {
            @Override
            public void postLogin(SessionEvent event) {
                String fileName = (String) event.getSession().getProperty("import.sql.file");
                UnitOfWork unitOfWork = event.getSession().acquireUnitOfWork();
                importSql(unitOfWork, fileName);
                unitOfWork.commit();
            }
        });
    }

    public static void main(String[] args) {
        Map<String, Object> properties = new HashMap<String, Object>();
        // Enable DDL Generation
        properties.put(PersistenceUnitProperties.DDL_GENERATION, PersistenceUnitProperties.DROP_AND_CREATE);
        properties.put(PersistenceUnitProperties.DDL_GENERATION_MODE, PersistenceUnitProperties.DDL_DATABASE_GENERATION);
        // Configure Session Customizer which will pipe sql file to db before DDL Generation runs
        properties.put(PersistenceUnitProperties.SESSION_CUSTOMIZER, "model.ImportSQL");
        properties.put("import.sql.file", "/tmp/someddl.sql");
        EntityManagerFactory emf = Persistence
                .createEntityManagerFactory("employee", properties);
    }
}
I'm not sure it's a strict equivalent, though; I'm not sure the script will run after the database generation. Testing required. If it doesn't, maybe it can be adapted.
[1] Hibernate has a neat little feature that is heavily under-documented and unknown. You can execute a SQL script during SessionFactory creation, right after the database schema generation, to import data into a fresh database. You just need to add a file named import.sql to your classpath root and set either create or create-drop as your hibernate.hbm2ddl.auto property.
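A sketch of that Hibernate variant (the INSERT is a made-up example; the property value and the import.sql location are exactly as the footnote describes):
<!-- persistence.xml: schema is rebuilt on startup, then import.sql is replayed -->
<property name="hibernate.hbm2ddl.auto" value="create"/>
-- src/main/resources/import.sql (ends up at the classpath root)
INSERT INTO ISO_COUNTRY_CODES (CODE, NAME) VALUES ('DE', 'Germany');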
This might help, as there is some confusion here:
Use exactly the same set of properties (except the logger) for data seeding.
DO NOT USE:
<property name="eclipselink.ddl-generation" value="create-tables"/>
<property name="eclipselink.ddl-generation.output-mode" value="database"/>
DO USE:
<property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
<property name="javax.persistence.schema-generation.create-source" value="metadata"/>
<property name="javax.persistence.schema-generation.drop-source" value="metadata"/>
I confirm this worked for me.
:) Just substitute with your data:
<property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
<property name="javax.persistence.schema-generation.create-source" value="metadata-then-script"/>
<property name="javax.persistence.sql-load-script-source" value="META-INF/seed.sql"/>
It is called BEFORE DDL execution. And there seems to be no nice way to adapt it, as there is no suitable event one could use.
This process offers executing SQL before DDL statements, whereas what would be nice (for example, to insert seed data) is to have something which executes after DDL statements. I don't know if I am missing something here. Can somebody please tell me how to execute SQL AFTER EclipseLink has created the tables (when the create-tables property is set to true)?
