How can I configure the OpenJPA cache so that it works only for chosen entities? Maybe I need to use some annotation on them?
My persistence.xml contains:
<property name="openjpa.DataCache" value="true"/>
<property name="openjpa.RemoteCommitProvider" value="sjvm"/>
But those settings apply to all of my entities (tables). I want to cache only a chosen table, for example:
@Entity(name = "IsoCountryCodes")
@Table(name = "ISO_COUNTRY_CODES", schema = "ANALYSIS")
@DataCache(timeout=120000)
public class IsoCountryCodes implements Serializable {
....
}
But @DataCache doesn't restrict caching to this entity; it only sets the cache timeout.
UPDATE:
I cannot use OpenJPA 2.0 because my project is deployed on WebLogic 10.3.6, which provides Kodo OpenJPA 1.3.
I also tried to include only chosen entities by adding the property:
<property name="openjpa.DataCache" value="true(Types=foo.bar.FullTimeEmployee)"/>
but got this error:
org.apache.openjpa.lib.util.ParseException: There was an error while setting up the configuration plugin option "DataCache". The plugin was of type "class kodo.datacache.KodoConcurrentDataCache". The plugin property "Type" had no corresponding setter method or accessible field. All possible plugin properties are: [CacheSize, EvictionSchedule, FailFast, NAME_DEFAULT, Name, SoftReferenceSize].
Can you help me? Maybe you know other ways to include or exclude entities from caching, perhaps with Ehcache?
<property name="openjpa.DataCache" value="true"/>
That enables the L2 cache for all entities. If you are using JPA 2.0, try adding <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode> to persistence.xml to turn the cache on per entity, and replace the @DataCache annotation with a @javax.persistence.Cacheable annotation.
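A minimal sketch of that JPA 2.0 setup, reusing the entity from the question (this assumes a JPA 2.0 provider, which the Kodo 1.3 constraint above rules out):

<!-- persistence.xml: cache only entities explicitly marked as cacheable -->
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>

// the entity opts in to the shared cache
@Cacheable(true)
@Entity(name = "IsoCountryCodes")
@Table(name = "ISO_COUNTRY_CODES", schema = "ANALYSIS")
public class IsoCountryCodes implements Serializable {
    ...
}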
I have an issue with a Hibernate query; my IDEA inspection flags a syntax error:
This inspection controls whether the Persistence QL Queries are error-checked
But I created a mapping for Task objects in my hibernate.cfg.xml:
<session-factory>
<property name="connection.url">jdbc:postgresql://localhost:5432/todo_list</property>
<property name="connection.driver_class">org.postgresql.Driver</property>
<property name="connection.username">postgres</property>
<property name="connection.password">1</property>
<property name="dialect">org.hibernate.dialect.PostgreSQL95Dialect</property>
<mapping resource="ru/pravvich/model/Task.hbm.xml" />
</session-factory>
Facets: (screenshot of the IDEA Facets settings omitted)
If I cheat the IDE and, instead of createQuery("select t from Task t"), create a variable and pass it into createQuery:
String hql = String.format("select t from Task t where t.id > %s", 0);
session.createQuery(hql);
It works, but it's not normal code. How can I fix this issue?
Here is what resolved the same issue for me:
Open IDEA's Preferences (Settings)/Editor/Language Injections and, in the list of languages, find Session (org.hibernate).
In the Language column, Hibernate QL should be selected.
Double-click on it and a list of operations will be displayed.
Select the operations that you need.
IDEA doesn't recognise which descriptor you are using. Check Project Structure -> Facets -> Hibernate. You should find a cfg.xml file under Descriptors. If you are using package scanning through a Spring session factory definition, you should find a session factory bean. If neither of them exists, you may add one.
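If you are on the Spring path, a sketch of such a session factory bean (the package name is taken from the mapping above; Spring's Hibernate 4 support is assumed):

<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="packagesToScan" value="ru.pravvich.model"/>
</bean>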
I am using Camel with OpenJPA as the persistence provider, but I don't want ALTER statements to be run in production.
Snapshot of persistence.xml
<persistence-unit name="camel-openjpa-oracle-alert" transaction-type="RESOURCE_LOCAL">
.
.
<provider>
org.apache.openjpa.persistence.PersistenceProviderImpl
</provider>
<properties>
<property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=false)" />
</properties>
.
.
</persistence-unit>
What value do we have to use for openjpa.jdbc.SynchronizeMappings so that ALTER commands are not executed?
I searched but was unable to find any such value.
It would be nice to know a little more about what you are doing and why you need to use SynchronizeMappings. The fact that you pass a ForeignKeys flag tells me you want OpenJPA to know whether you have any database FKs defined (i.e. so OpenJPA can order SQL properly to honor parent/child FK constraints). This is a perfectly valid use of SynchronizeMappings. However, by using 'buildSchema' you are specifically telling OpenJPA to make "the database schema match your existing mappings" (this comment is lifted from the following OpenJPA doc):
http://openjpa.apache.org/builds/1.2.3/apache-openjpa/docs/ref_guide_mapping.html#ref_guide_mapping_synch
Therefore, you are specifically telling OpenJPA to update your database schema. You can remove 'buildSchema' if you don't want OpenJPA to update your schema to match your domain model. Or you could use 'validate' in place of 'buildSchema'; however, as the above doc states, OpenJPA will throw an exception if it finds a schema/domain mismatch, which may not be what you want. I suggest you read the above doc and look at the options available to you.
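For illustration, the 'validate' variant might look like this (a sketch; check the linked doc for the exact set of supported actions):

<!-- 'validate' checks the schema against the mappings but never issues ALTER/CREATE -->
<property name="openjpa.jdbc.SynchronizeMappings" value="validate" />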
Thanks,
Heath Thomann
I am trying DataNucleus with the JDO API using only XML to define the persistence model, without adding annotations like @PersistenceCapable. That is something that is supposedly supported by both JDO and DataNucleus, if I understood both documentations correctly.
For instance, if I remove all annotations from Book.java, Inventory.java, and Product.java in the DataNucleus example and run mvn clean compile, I should get the job done, because package.orm defines those classes; but I get the following error for all of these classes:
(main) DEBUG [DataNucleus.MetaData] - Class org.datanucleus.samples.jdo.tutorial.Inventory was specified in persistence-unit (maybe by not putting exclude-unlisted-classes) Tutorial but not annotated, so ignoring
....
(main) INFO [DataNucleus.Enhancer] - DataNucleus Enhancer completed with success for 0 classes.
What am I missing?
Actual configuration files:
persistence.xml
...
<persistence-unit name="Tutorial">
<class>org.datanucleus.samples.jdo.tutorial.Inventory</class>
<class>org.datanucleus.samples.jdo.tutorial.Product</class>
<class>org.datanucleus.samples.jdo.tutorial.Book</class>
<exclude-unlisted-classes/>
...
</persistence-unit>
...
package-h2.orm
<orm>
<package name="org.datanucleus.samples.jdo.tutorial">
<!-- persistence-modifier is by default equal to: persistence-capable -->
<class name="Inventory" table="INVENTORIES" >...</class>
<class name="Product" table="PRODUCTS">...</class>
<class name="Book" table="BOOKS">...</class>
</package>
</orm>
ORM metadata exists to OVERRIDE JDO metadata. Consequently you need either annotations OR a JDO XML metadata file (package.jdo).
"class" entries in persistence.xml are to specify classes that have annotations, and you say you have none.
"mapping-file" entries in persistence.xml are to specify XML metadata files ... and you haven't specified any. For example (a sketch, assuming a package.jdo placed next to the tutorial classes):
<mapping-file>org/datanucleus/samples/jdo/tutorial/package.jdo</mapping-file>
I'm getting the following error message from Hibernate when attempting to insert a row into a table:
org.hibernate.exception.ConstraintViolationException: Column 'priority' cannot be null
I know that I could put a line into the code to set the value, but there are many other places where the program relies on the default value in the database (the DB is MySQL).
I read somewhere that you can provide a default value in the hbm.xml file, but Hibernate is not recognizing it. Here's the corresponding section from JobQueue.hbm.xml:
<property name="priority" type="integer">
<column name="priority" default="0" />
</property>
I suppose another option would be to modify the JobQueue.java file that gets generated (I'm using Eclipse Hibernate Tools to auto-generate the Hibernate classes), but for now I'd like to try to get the hbm.xml configuration to work.
I'm using version 4.1.3 of the Hibernate libraries and Eclipse Hibernate Tools 3.4.0.x.
default="0" is only relevant for SchemaExport, which generates the database schema; beyond that, Hibernate completely ignores the setting. You could try setting not-null="true" for the column.
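For example, the mapping from the question with that change (note that whether the database default is actually applied still depends on Hibernate not including the column in its INSERT):

<property name="priority" type="integer">
    <column name="priority" default="0" not-null="true" />
</property>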
Even if you are not able to recreate the whole database schema, you can set the default value in the variable initialization.
In your model, set the priority to 0 at initialization.
In your class:
private Integer priority = 0;
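A slightly fuller sketch (class and field names are taken from the question; the accessors are assumed):

public class JobQueue implements Serializable {
    // initialized so Hibernate persists 0 instead of NULL when no priority is set
    private Integer priority = 0;

    public Integer getPriority() { return priority; }
    public void setPriority(Integer priority) { this.priority = priority; }
}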
I ended up modifying the JobQueue.java POJO to set the default value. To make sure that the Hibernate Tools code generation wouldn't overwrite this change, I set it up so that the files are generated in a temp folder and the necessary files are then copied over to the permanent source location.
Is there a possibility to execute an SQL script after EclipseLink has generated the DDL?
In other words, is it possible that the EclipseLink property "eclipselink.ddl-generation" is used with "drop-and-create-tables" and EclipseLink executes another SQL file (to insert some data into the tables just created) after creating the table definitions?
I'm using EclipseLink 2.x and JPA 2.0 with GlassFish v3.
Or can I initialize the tables within a Java method that is called on deployment of the project (a WAR with EJB3)?
I came across this question for the same reasons, trying to find an approach to run an initialization script after DDL generation. I offer this answer to an old question in hopes of shortening the amount of "literary research" for those looking for the same solution.
I'm using GlassFish 4 with its default EclipseLink 2.5 JPA implementation. The new Schema Generation feature under JPA 2.1 makes it fairly straightforward to specify an "initialization" script after DDL generation is completed.
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
<persistence-unit name="cbesDatabase" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>java:app/jdbc/cbesPool</jta-data-source>
<properties>
<property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
<property name="javax.persistence.schema-generation.create-source" value="metadata"/>
<property name="javax.persistence.schema-generation.drop-source" value="metadata"/>
<property name="javax.persistence.sql-load-script-source" value="META-INF/sql/load_script.sql"/>
<property name="eclipselink.logging.level" value="FINE"/>
</properties>
</persistence-unit>
</persistence>
The above configuration generates DDL scripts from metadata (i.e. annotations), after which the META-INF/sql/load_script.sql script is run to populate the database. In my case, I seed a few tables with test data and generate additional views.
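For illustration, a hypothetical META-INF/sql/load_script.sql (the table and column names are invented):

-- seed reference data once the tables exist
INSERT INTO COUNTRY (CODE, NAME) VALUES ('US', 'United States');
INSERT INTO COUNTRY (CODE, NAME) VALUES ('DE', 'Germany');
-- additional views can be created here as well
CREATE VIEW ACTIVE_COUNTRY AS SELECT CODE, NAME FROM COUNTRY;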
Additional information on EclipseLink's use of JPA's properties can be found in the DDL Generation section of EclipseLink/Release/2.5/JPA21. Likewise, Section 37.5 Database Schema Creation in Oracle's Java EE 7 Tutorial and TOTD #187 also offer a quick introduction.
Have a look at Running a SQL Script on startup in EclipseLink, which describes a solution presented as a kind of equivalent to Hibernate's import.sql feature [1]. Credits to Shaun Smith:
Running a SQL Script on startup in EclipseLink
Sometimes, when working with DDL generation, it's useful to run a script to clean up the database first. In Hibernate, if you put a file called "import.sql" on your classpath, its contents will be sent to the database. Personally I'm not a fan of magic filenames, but this can be a useful feature.
There's no built-in support for this in EclipseLink, but it's easy to do thanks to EclipseLink's high extensibility. Here's a quick solution I came up with: I simply register an event listener for the session postLogin event, and in the handler I read a file and send each SQL statement to the database--nice and clean. I went a little further and supported setting the name of the file as a persistence unit property. You can specify this all in code or in the persistence.xml.
The ImportSQL class is configured as a SessionCustomizer through a persistence unit property which, on the postLogin event, reads the file identified by the "import.sql.file" property. This property is also specified as a persistence unit property, which is passed to createEntityManagerFactory. This example also shows how you can define and use your own persistence unit properties.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.eclipse.persistence.config.PersistenceUnitProperties;
import org.eclipse.persistence.config.SessionCustomizer;
import org.eclipse.persistence.sessions.Session;
import org.eclipse.persistence.sessions.SessionEvent;
import org.eclipse.persistence.sessions.SessionEventAdapter;
import org.eclipse.persistence.sessions.UnitOfWork;
public class ImportSQL implements SessionCustomizer {
    // One way to fill in the stub from the original post: execute each
    // non-empty line of the script as a separate non-selecting statement
    private void importSql(UnitOfWork unitOfWork, String fileName) {
        try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                if (!line.isEmpty()) {
                    unitOfWork.executeNonSelectingSQL(line);
                }
            }
        } catch (IOException e) {
            throw new RuntimeException("Could not import SQL from " + fileName, e);
        }
    }
    @Override
    public void customize(Session session) throws Exception {
        session.getEventManager().addListener(new SessionEventAdapter() {
            @Override
            public void postLogin(SessionEvent event) {
                String fileName = (String) event.getSession().getProperty("import.sql.file");
                UnitOfWork unitOfWork = event.getSession().acquireUnitOfWork();
                importSql(unitOfWork, fileName);
                unitOfWork.commit();
            }
        });
    }
    public static void main(String[] args) {
        Map<String, Object> properties = new HashMap<String, Object>();
        // Enable DDL generation
        properties.put(PersistenceUnitProperties.DDL_GENERATION, PersistenceUnitProperties.DROP_AND_CREATE);
        properties.put(PersistenceUnitProperties.DDL_GENERATION_MODE, PersistenceUnitProperties.DDL_DATABASE_GENERATION);
        // Configure the SessionCustomizer which will pipe the SQL file to the DB before DDL generation runs
        properties.put(PersistenceUnitProperties.SESSION_CUSTOMIZER, "model.ImportSQL");
        properties.put("import.sql.file", "/tmp/someddl.sql");
        EntityManagerFactory emf = Persistence
                .createEntityManagerFactory("employee", properties);
    }
}
I'm not sure it's a strict equivalent, though: I'm not sure the script will run after the database generation. Testing is required. If it doesn't, maybe it can be adapted.
[1] Hibernate has a neat little feature that is heavily under-documented and unknown. You can execute an SQL script during SessionFactory creation, right after the database schema generation, to import data into a fresh database. You just need to add a file named import.sql to your classpath root and set either create or create-drop as your hibernate.hbm2ddl.auto property.
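A sketch of that Hibernate setup (shown as a persistence.xml property; the script simply sits at the classpath root, e.g. src/main/resources/import.sql):

<!-- import.sql is only executed when the schema is (re)created -->
<property name="hibernate.hbm2ddl.auto" value="create-drop"/>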
This might help, as there is some confusion here:
Use exactly the same set of properties (except the logger) for data seeding.
DO NOT USE:
<property name="eclipselink.ddl-generation" value="create-tables"/>
<property name="eclipselink.ddl-generation.output-mode" value="database"/>
DO USE:
<property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
<property name="javax.persistence.schema-generation.create-source" value="metadata"/>
<property name="javax.persistence.schema-generation.drop-source" value="metadata"/>
I confirm this worked for me.
Just substitute your own data. :)
<property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
<property name="javax.persistence.schema-generation.create-source" value="metadata-then-script"/>
<property name="javax.persistence.sql-load-script-source" value="META-INF/seed.sql"/>
It is called BEFORE DDL execution, and there seems to be no nice way to adapt it, as there is no suitable event one could use.
This process executes SQL before the DDL statements, whereas what would be nice (for example, to insert seed data) is something that executes after the DDL statements. I don't know if I am missing something here. Can somebody please tell me how to execute SQL AFTER EclipseLink has created the tables (when the create-tables property is set to true)?