I want my H2 database to be stored into a file, so that once I close the application and open it again, all the data that was previously written to the database is still there, but for some reason, at the moment whenever I start the application, the database is completely empty. Any suggestions?
@Bean
public DataSource dataSource() {
    File f = new File(".");
    JdbcDataSource ds = new JdbcDataSource();
    ds.setURL("jdbc:h2:file:" + f.getAbsolutePath() + "/db/aurinko");
    ds.setUser("");
    ds.setPassword("");
    return ds;
}
private Properties getHibernateProperties() {
    Properties prop = new Properties();
    prop.put("hibernate.format_sql", "true");
    prop.put("hibernate.show_sql", "false");
    prop.put("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
    prop.put("hibernate.hbm2ddl.auto", "update");
    return prop;
}
@Bean
public SessionFactory sessionFactory() throws IOException {
    LocalSessionFactoryBuilder builder = new LocalSessionFactoryBuilder(dataSource());
    builder.scanPackages("io.aurinko.server.jpa").addProperties(getHibernateProperties());
    return builder.buildSessionFactory();
}
I was using Spring Boot. It turns out that Spring Boot auto-configures its own H2 database, which meant I had two separate databases: the file-based one I was trying to use, and the in-memory one I was actually using.
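If you stay with Spring Boot's auto-configuration instead of defining your own DataSource bean, you can point the auto-configured database at the same file. A minimal sketch for application.properties (the path is an assumption based on the URL above):

```properties
# Point Spring Boot's auto-configured DataSource at a file-based H2 database
spring.datasource.url=jdbc:h2:file:./db/aurinko
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
```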
Maybe try setting auto-commit to true in the config/property file. It may work.
When there is an issue with the external database for a live webapp and the bean fails to instantiate even when the application retries, the app goes down. How can we fix this problem?
@Bean(destroyMethod = "close")
public DataSource dataSource() {
    HikariConfig hikariConfig = new HikariConfig();
    hikariConfig.setDriverClassName("com.mysql.jdbc.Driver");
    hikariConfig.setJdbcUrl("jdbc:mysql://localhost:3306/spring-test");
    hikariConfig.setUsername("root");
    hikariConfig.setPassword("admin");
    hikariConfig.setMaximumPoolSize(5);
    hikariConfig.setConnectionTestQuery("SELECT 1");
    hikariConfig.setPoolName("springHikariCP");
    hikariConfig.addDataSourceProperty("dataSource.cachePrepStmts", "true");
    hikariConfig.addDataSourceProperty("dataSource.prepStmtCacheSize", "250");
    hikariConfig.addDataSourceProperty("dataSource.prepStmtCacheSqlLimit", "2048");
    hikariConfig.addDataSourceProperty("dataSource.useServerPrepStmts", "true");
    return new HikariDataSource(hikariConfig);
}
Your code will probably throw an exception if the datasource is down. Wrap it in a try/catch like so:
try {
    <your code goes here>
} catch (Exception e) {
    return null;
}
But code that asks for the datasource will then have to handle a possibly null result, rather than risk NullPointerExceptions.
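Instead of returning null on the first failure, the bean method could also retry the datasource creation a few times before giving up. A generic sketch of that idea (the helper class and its names are illustrative, not part of any Spring API):

```java
import java.util.function.Supplier;

public class Retry {

    // Calls the supplier up to maxAttempts times, sleeping between failures.
    // Returns the first successful result, or null once every attempt has
    // failed, so callers must still be prepared to handle a null datasource.
    public static <T> T withRetry(Supplier<T> supplier, int maxAttempts, long delayMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return supplier.get();
            } catch (Exception e) {
                if (attempt == maxAttempts) {
                    return null;
                }
                try {
                    Thread.sleep(delayMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return null;
                }
            }
        }
        return null;
    }
}
```

The @Bean method could then return something like Retry.withRetry(() -> new HikariDataSource(hikariConfig), 3, 1000) instead of wrapping everything in a single try/catch.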
Up until now I had been using in-memory H2 DB with Spring Batch. However, now I switched to connecting to external postgres DB. Here was my connection object (with some obfuscation):
@Bean
public DataSource postgresDatasource() {
    DriverManagerDataSource datasource = new DriverManagerDataSource();
    datasource.setDriverClassName("org.postgresql.Driver");
    datasource.setUrl("jdbc:postgresql://x.x.x.x:xxxx/blah");
    datasource.setUsername("Joe");
    datasource.setPassword("password");
    return datasource;
}
When I start my application, I get:
Caused by: org.springframework.jdbc.BadSqlGrammarException:
PreparedStatementCallback; bad SQL grammar [SELECT JOB_INSTANCE_ID,
JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = ? and JOB_KEY = ?];
nested exception is org.postgresql.util.PSQLException: ERROR: relation
"batch_job_instance" does not exist
I then read that Spring Batch uses the database to save metadata for its recover/retry functionality, and with embedded databases, these are tables Spring Batch sets up by default. Ok, so that would explain why I had never seen this error before.
However, it said I could set this property:
spring.batch.initialize-schema=never
So I put this in my application.properties file. However, I am still getting the error. I would be grateful for any ideas.
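For reference, the same property can also be used the other way round: if the goal is for Spring Batch to create its metadata tables in the external database on startup, the documented values are always, embedded (the default), and never. A sketch for application.properties:

```properties
# Let Spring Batch create BATCH_JOB_INSTANCE and the other metadata tables on startup
spring.batch.initialize-schema=always
```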
I was able to address this myself. Ultimately I needed the Spring Batch repository independent from my actual target relational database. So I found this reference:
https://github.com/spring-projects/spring-batch/blob/342d27bc1ed83312bdcd9c0cb30510f4c469e47d/spring-batch-core/src/main/java/org/springframework/batch/core/configuration/annotation/DefaultBatchConfigurer.java#L84
I was able to take the DefaultBatchConfigurer class from that example and make a minor change to the data source setter by adding the @Qualifier for the embedded/local data source:
@Autowired(required = false)
public void setDataSource(@Qualifier("dataSource") DataSource dataSource) {
    this.dataSource = dataSource;
    this.transactionManager = new DataSourceTransactionManager(dataSource);
}
Then, on my Spring Batch reader (in my other batch config class), I made a minor change to the data source by adding the @Qualifier for the postgres data source:
@Bean
public ItemReader<StuffDto> itemReader(@Qualifier("postgresDataSource") DataSource dataSource) {
    return new JdbcCursorItemReaderBuilder<StuffDto>()
            .name("cursorItemReader")
            .dataSource(dataSource)
            .sql(GET_DATA)
            .rowMapper(new BeanPropertyRowMapper<>(StuffDto.class))
            .build();
}
Then lastly (or firstly, really, as I did this first), I explicitly named my data source beans so Spring could tell them apart and inject them as above:
@Configuration
public class PersistenceContext {

    @Bean(name = "dataSource")
    public DataSource dataSource() {
        DriverManagerDataSource datasource = new DriverManagerDataSource();
        datasource.setDriverClassName("org.h2.Driver");
        datasource.setUrl("jdbc:h2:file:/tmp/test");
        datasource.setUsername("sa");
        datasource.setPassword("");
        return datasource;
    }

    @Bean(name = "postgresDataSource")
    public DataSource postgresDatasource() {
        DriverManagerDataSource datasource = new DriverManagerDataSource();
        datasource.setDriverClassName("org.postgresql.Driver");
        datasource.setUrl("jdbc:postgresql://x.x.x.x:xxxx/blah");
        datasource.setUsername("joe");
        datasource.setPassword("password");
        return datasource;
    }
}
Once I did all the above, the error disappeared and everything worked.
I have to develop a WAR application in NetBeans. As the web server I use the built-in Jetty to test the WAR, because the release version will also use Jetty.
In NetBeans I have set things up to redeploy the WAR after saving a file.
And there I have the problem that after a redeploy (the first deploy works fine), when connecting to the Firebird database again, the application stops at the first output of:
Hibernate: create table HT_mytable (myid numeric(18,0) not null, hib_sess_id CHAR(36))
This is my code for creating the EntityManager:
protected EntityManager createEntityManager(String databaseName) {
    String user = this.settingsService.getValue("user", "database");
    String password = this.settingsService.getValue("password", "database");
    Properties properties = new Properties();
    properties.put("hibernate.show_sql", true);
    properties.put("javax.persistence.jdbc.url", this.createUrlForDatabase(databaseName));
    properties.put("javax.persistence.jdbc.user", user);
    properties.put("javax.persistence.jdbc.password", password);
    properties.put("javax.persistence.jdbc.driver", "org.firebirdsql.jdbc.FBDriver");
    try {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory(UNITNAME_PREFIX + databaseName, properties);
        return emf.createEntityManager();
    } catch (PersistenceException ex) {
        Logger.getLogger(EntityService.class.getName()).log(Level.SEVERE, null, ex);
    }
    return null;
}
Thanks :)
I am setting up my DataSource in a Spring Boot / Spring Cloud Connectors project running on Cloud Foundry using Tomcat JDBC Connection Pool and MariaDB JDBC driver like so:
@Configuration
@Profile("cloud")
public class MyDataSourceConfiguration extends AbstractCloudConfig {

    @Bean
    public DataSource dataSource() {
        Map<String, Object> dataSourceProperties = new HashMap<>();
        dataSourceProperties.put("initialSize", "4"); // OK
        dataSourceProperties.put("maxActive", "4");   // OK
        dataSourceProperties.put("maxWait", "2000");  // OK
        dataSourceProperties.put("connectionProperties",
                "useUnicode=yes;characterEncoding=utf8;"); // ignored
        DataSourceConfig conf = new DataSourceConfig(dataSourceProperties);
        return connectionFactory().dataSource(conf);
    }
}
For some reason only the properties for the pool size and maxWait, but not the connectionProperties, are picked up by the DataSource bean - see the log output:
maxActive=4; initialSize=4; maxWait=2000;
connectionProperties=null
Any hints?
Note: Trying to set the connectionProperties via Spring's ConnectionConfig class didn't work either.
Try using the form of DataSourceConfig that takes separate PoolConfig and ConnectionConfig beans, like this:
@Bean
public DataSource dataSource() {
    PoolConfig poolConfig = new PoolConfig(4, 4, 2000);
    ConnectionConfig connectionConfig = new ConnectionConfig("useUnicode=yes;characterEncoding=utf8;");
    DataSourceConfig dbConfig = new DataSourceConfig(poolConfig, connectionConfig);
    return connectionFactory().dataSource(dbConfig);
}
Try the following:
Replace
connProperties.put("connectionProperties", "useUnicode=yes;characterEncoding=utf8;");
with
connProperties.put("connectionProperties", "useUnicode=yes;characterEncoding=UTF-8;");
Alternatively, you can also specify the following property directly in application.properties:
spring.datasource.connectionProperties=useUnicode=true;characterEncoding=utf-8;
I am trying to configure Atomikos transactions without using Spring. First I am trying to set up the EntityManagerFactory without Spring; the following is the code I have tried:
private static AtomikosDataSourceBean prepareDataSource() {
    AtomikosDataSourceBean atomikosDataSourceBean = new AtomikosDataSourceBean();
    atomikosDataSourceBean.setUniqueResourceName("demo");
    atomikosDataSourceBean.setXaDataSourceClassName("oracle.jdbc.xa.client.OracleXADataSource");
    Properties properties = new Properties();
    properties.setProperty("user", "demo");
    properties.setProperty("password", "demo");
    properties.setProperty("URL", "jdbc:oracle:thin:@localhost:1521/xe");
    atomikosDataSourceBean.setXaProperties(properties);
    return atomikosDataSourceBean;
}
public static EntityManagerFactory getEntityManagerFactory() {
    LocalContainerEntityManagerFactoryBean entityManagerFactory = new LocalContainerEntityManagerFactoryBean();
    entityManagerFactory.setDataSource(prepareDataSource());
    entityManagerFactory.setPersistenceUnitName("demo");
    entityManagerFactory.setPersistenceXmlLocation("classpath*:META-INF/persistence.xml");
    Properties properties = new Properties();
    properties.setProperty("hibernate.transaction.jta.platform", "com.demo.AtomikosJtaPlatform");
    properties.setProperty("hibernate.show_sql", "true");
    return (EntityManagerFactory) entityManagerFactory;
}
The above code throws a ClassCastException. How can I get the same EntityManagerFactory without using Spring?
I would refer to the official documentation of Atomikos, which actually contains an example for those who opt not to use Spring:
Atomikos without Spring