Hello everybody. I am using JPA with EclipseLink and Oracle as the database, and I need to set the JDBC connection property v$session.program, which lets you attach an identifying application name to the session for auditing purposes. So far I have had no luck setting it up. I have been trying to do it through the EntityManager, following the example on this page: http://wiki.eclipse.org/Configuring_a_EclipseLink_JPA_Application_(ELUG). It does not show any error, but it does not set the application name at all: when I look at the audit in Oracle, the sessions are not audited with the name I set in code ("Customers") but with OS_program_name=JDBC Thin Client. This means the property in the code is not being applied, and I have no idea where the issue is. The code I am using is the following:
Map<String, Object> emProperties = new HashMap<>();
emProperties.put("v$session.program", "Customers");
factory = Persistence.createEntityManagerFactory("clients", emProperties);
em = factory.createEntityManager(emProperties);
em.merge(clients);
Does anybody know how to do this, or have any ideas?
Thanks.
v$session.program is a JDBC connection property, but Persistence.createEntityManagerFactory expects persistence unit properties; there is no direct way to pass an arbitrary JDBC property through the entity manager.
However, in EclipseLink you can use a SessionCustomizer:
import org.eclipse.persistence.config.SessionCustomizer;
import org.eclipse.persistence.sessions.Session;

public class ProgramCustomizer implements SessionCustomizer {

    @Override
    public void customize(Session session) throws Exception {
        // Passed to the JDBC driver when EclipseLink opens connections for this session
        session.getDatasourceLogin().setProperty("v$session.program", "Customers");
    }
}
Then register it when creating the factory; note that the value has to be the fully qualified class name of the customizer:
emProperties.put(PersistenceUnitProperties.SESSION_CUSTOMIZER, ProgramCustomizer.class.getName());
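The customizer can also be registered in persistence.xml instead of the properties map; the value there is the fully qualified class name (the package below is a placeholder):
<property name="eclipselink.session.customizer" value="my.package.ProgramCustomizer"/>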
You can also achieve this without a SessionCustomizer, directly in persistence.xml:
<property name="eclipselink.jdbc.property.v$session.program" value="Customers" />
FYI: https://www.eclipse.org/eclipselink/documentation/2.7/jpa/extensions/persistenceproperties_ref.htm#CIHHJHHD
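For context, a minimal persistence unit using that property might look roughly like this (the driver, URL and credentials below are placeholders, not the asker's real values):
<persistence-unit name="clients" transaction-type="RESOURCE_LOCAL">
    <properties>
        <property name="javax.persistence.jdbc.driver" value="oracle.jdbc.OracleDriver"/>
        <property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:@//localhost:1521/XE"/>
        <property name="javax.persistence.jdbc.user" value="scott"/>
        <property name="javax.persistence.jdbc.password" value="tiger"/>
        <!-- passed through to the Oracle JDBC driver; shows up as PROGRAM in V$SESSION -->
        <property name="eclipselink.jdbc.property.v$session.program" value="Customers"/>
    </properties>
</persistence-unit>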
I have an issue where I have only one database to use, but multiple servers, and I want each server to use a different table name.
Right now my class is configured as:
@Entity
@Table(name = "loader_queue")
class LoaderQueue
I want the dev1 server to point to the loader_queue_dev1 table and the dev2 server to point to the loader_queue_dev2 table, for instance.
Is there a way I can do this, with or without using annotations?
I want to be able to have one single build and then at runtime use something like a system property to change that table name.
For Hibernate 4.x, you can use a custom naming strategy that generates the table name dynamically at runtime. The server name could be provided by a system property and so your strategy could look like this:
import org.hibernate.cfg.ImprovedNamingStrategy;

public class ServerAwareNamingStrategy extends ImprovedNamingStrategy {

    @Override
    public String classToTableName(String className) {
        String tableName = super.classToTableName(className);
        return resolveServer(tableName);
    }

    // Appends the current server's name (e.g. "dev1") to the base table name
    private String resolveServer(String tableName) {
        StringBuilder tableNameBuilder = new StringBuilder();
        tableNameBuilder.append(tableName);
        tableNameBuilder.append("_");
        tableNameBuilder.append(System.getProperty("SERVER_NAME"));
        return tableNameBuilder.toString();
    }
}
And supply the naming strategy as a Hibernate configuration property:
<property name="hibernate.ejb.naming_strategy" value="my.package.ServerAwareNamingStrategy"/>
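Each server then just needs the system property set in its JVM options at startup, for example (value assumed per server):
-DSERVER_NAME=dev1
so the same build resolves loader_queue_dev1 on dev1 and loader_queue_dev2 on dev2.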
I would not do this. It is very much against the grain of JPA and very likely to cause problems down the road. I'd rather add a layer of views to the tables providing unified names to be used by your application.
But you asked, so here are some ideas for how it might work:
You might be able to create the mapping for your classes completely in code. This is likely to be tedious, but gives you full flexibility.
You can implement a NamingStrategy which translates your class names to table names and depends on the instance it is running on.
You can change your code during the build process to build two (or more) artefacts from one source.
I am using Hibernate as the ORM for my project, with a MySQL database.
I have a table "Products" inside the database "catalog".
I have put the @Table(name="Products", schema="catalog") annotation on the entity Products in my application.
However, when I try to run the application I get the exception below. Can you please help me resolve this issue?
Exception:
Exception in thread "main" org.hibernate.HibernateException: Missing table:Products
at org.hibernate.cfg.Configuration.validateSchema(Configuration.java:1281)
at org.hibernate.tool.hbm2ddl.SchemaValidator.validate(SchemaValidator.java:155)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:508)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1769)
at org.eros.purchase.db.utils.HibernateUtil.configure(HibernateUtil.java:17)
at Test.main(Test.java:14)
Any thoughts on how I can fix this?
The stack trace shows the schema is being validated at startup (SchemaValidator), which means Hibernate expects the Products table to already exist. Update your hibernate.cfg.xml file by adding this property so Hibernate creates or updates the table itself:
<property name="hibernate.hbm2ddl.auto">create</property>
or
<property name="hibernate.hbm2ddl.auto">update</property>
I got the same exception from Hibernate. In my case, it was because the database user was not properly authorized to see the tables; the tables were definitely there in the database.
After I assigned the db_datareader role to the user for this database, it worked.
However, in another case where the tables really weren't there, I got exactly the same exception from Hibernate. In cases where the tables do exist, I think Hibernate may deliberately not show any more information, for security reasons.
I think your mapping is referring to a table which is actually not in the database, so check your Hibernate XML mapping and the points below.
package com.mypackage;

// First, check that the two annotations come from the right package
import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table(name = "Products", schema = "catalog")
public class Products {
    // entity attributes
}

// Second, check the mapping file (if you use one), i.e. that the Products class is registered with the right spelling.
// Third, check your Hibernate util class if you register the mapped class there, like this:

import org.hibernate.cfg.Configuration;

public class CustomHibernateUtil {

    private static Configuration getConfiguration() {
        Configuration configuration = new Configuration().configure(); // loads hibernate.cfg.xml
        configuration.addAnnotatedClass(Products.class);
        return configuration;
    }
}
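For reference, if the class is registered through hibernate.cfg.xml rather than in code, the mapping entry would look something like this (package name assumed from the snippet above):
<mapping class="com.mypackage.Products"/>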
We are using Play Framework 2.1 in our web application. We want to explicitly set the database schema (not the public schema) in the PostgreSQL database that the application uses. How can I set it?
As far as I know, from what I have tried before, you need to define the schema name on each Model you want mapped to a non-default schema. It should look like this:
import play.db.ebean.Model;

import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table(schema = "schema2")
public class TableOnSchema2 extends Model {
    ...
}
This solution does take extra effort, because you have to annotate each Model with the schema name; I don't know whether there is a configuration value for specifying a default database schema for the application. But it works for me!
Hope this helps you.. :)
If your tables are all located outside of the public schema, the best thing to do is to change the search_path for your application user:
alter user your_appuser set search_path = 'schema1';
If you have multiple schemas, you can add all of them:
alter user your_appuser set search_path = 'schema1,schema2,public';
Don't forget to commit this statement. The change will only take effect the next time the user logs in; existing connections will not be affected.
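To confirm the new setting is picked up, you can check it from a fresh session of that user, for example:
show search_path;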
Play Framework 2.8.x example using Scala:
We can add the entry below to application.conf:
db {
# You can declare as many datasources as you want.
# By convention, the default datasource is named `default`
default.driver = org.postgresql.Driver
default.url = "jdbc:postgresql://localhost/postgres?currentSchema=backoffice"
default.username = "user"
default.password = "password"
}
Play Framework will create a default connection pool with these parameters.
The Postgres JDBC driver gives you the ability to define the default schema in the connection URL using ?currentSchema=backoffice, from version 9.4 onwards.
A Dao object can use this database as below:
import com.google.inject.Inject
import play.api.db.Database

class PostgresDao @Inject()(val backofficeDb: Database) {
  // some more methods using backofficeDb
}
I have an issue testing a Hibernate application which queries multiple catalogs/schemas.
The production database is Sybase and in addition to entities mapped to the default catalog/schema there are two entities mapped as below. There are therefore three catalogs in total.
@Table(catalog = "corp_ref_db", schema = "dbo", name = "WORKFORCE_V2")
public class EmployeeRecord implements Serializable {
}

@Table(catalog = "reference", schema = "dbo", name = "cntry")
public class Country implements Serializable {
}
This all works in the application without any issues. However when unit testing my usual strategy is to use HSQL with hibernate's ddl flag set to auto and have dbunit populate the tables.
This all works fine when the tables are all in the same schema.
However, since adding these additional tables, testing is broken: the DDL will not run, as HSQL only supports one catalog.
create table corp_ref_db.dbo.WORKFORCE_V2
user lacks privilege or object not found: CORP_REF_DB
If there were only two catalogs, then I think it might be possible to get round this by changing the default catalog and schema in the HSQL database to the one that is explicitly defined.
Is there any other in-memory database for which this might work, or is there any strategy for getting the tests to run in HSQL?
I had thought of providing an orm.xml file which specified the default catalog and schema (overriding any annotations and having all the defined tables created in the default catalog/schema), but these overrides do not seem to be observed when the DDL is executed, i.e. I get the same error as above.
Essentially, then I would like to run my existing tests and either somehow have the tables created as they are defined in the mappings or somehow override the catalog/schema definitions at the entity level.
I cannot think of any way to achieve either outcome. Any ideas?
I believe H2 supports catalogs. I haven't used them in it myself, but there's a CATALOGS table in the Information Schema.
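If you want to verify what your embedded H2 instance exposes, you can query that table directly, for example:
SELECT * FROM INFORMATION_SCHEMA.CATALOGS;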
I managed to achieve something like this in H2 (version 1.4.200) via the IGNORE_CATALOGS property.
However, the URL example from their docs did not seem to work for me, so I added a statement to my schema.xml:
SET IGNORE_CATALOGS = true;
I'm looking for a way to get the SQL update script that Hibernate produces when it automatically updates tables.
I'm using hibernate.hbm2ddl.auto=update in the development environment only, and I need the SQL script that updates the tables for production.
I want these SQL scripts as text files for review and potential editing.
How can this be done?
Thanks for any advice.
There are some suggestions and general discussion here.
In a nutshell, you can turn on logging (to standard output):
hibernate.show_sql=true
Alternatively, if you use log4j, you can add this to your log4j.properties file:
log4j.logger.org.hibernate.SQL=DEBUG
Both of these approaches output Hibernate's prepared statements with parameter placeholders (so the parameter values themselves are not inlined). To get around this, you could use an interceptor like P6Spy. Details on that can be found here.
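If you only need the parameter values during development, Hibernate's own binding log can show them as well; with log4j that is typically something like the following (the exact logger name varies between Hibernate versions):
log4j.logger.org.hibernate.SQL=DEBUG
log4j.logger.org.hibernate.type=TRACE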
The org.hibernate.cfg.Configuration class has the method:
public String[] generateSchemaUpdateScript(Dialect dialect, DatabaseMetadata metadata)
which generates the required update script.
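Outside of Grails, a plain-Java sketch of the same call might look roughly like this (based on the Hibernate 3.x / early 4.x API; the connection settings are assumed to come from hibernate.cfg.xml):
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Arrays;

import org.hibernate.cfg.Configuration;
import org.hibernate.dialect.Dialect;
import org.hibernate.tool.hbm2ddl.DatabaseMetadata;

public class SchemaUpdateScriptDumper {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        Dialect dialect = Dialect.getDialect(cfg.getProperties());
        Connection connection = DriverManager.getConnection(
                cfg.getProperty("hibernate.connection.url"),
                cfg.getProperty("hibernate.connection.username"),
                cfg.getProperty("hibernate.connection.password"));
        try {
            DatabaseMetadata metadata = new DatabaseMetadata(connection, dialect);
            // One DDL statement per array element, e.g. "alter table ... add column ..."
            String[] script = cfg.generateSchemaUpdateScript(dialect, metadata);
            Files.write(Paths.get("schema-update.sql"), Arrays.asList(script), StandardCharsets.UTF_8);
        } finally {
            connection.close();
        }
    }
}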
I've just implemented this in Grails:
// DefaultGrailsDomainConfiguration extends the Hibernate configuration
configuration = new DefaultGrailsDomainConfiguration(
        grailsApplication: grailsApplication,
        properties: props)

Connection c = SessionFactoryUtils.getDataSource(sessionFactory)
        .getConnection(props.'hibernate.connection.username', props.'hibernate.connection.password')

def md = new DatabaseMetadata(c, DialectFactory.buildDialect(props.'hibernate.dialect'))
def script = configuration.generateSchemaUpdateScript(
        DialectFactory.buildDialect(props.'hibernate.dialect'), md)
Check the SchemaExport script in Grails for further information; it uses Hibernate to generate the schema.
(I had to implement it as a service because we have an external domain model.)