I am running migrations for unit tests with Liquibase. I use a class called ${projectName}Liquibase.java that holds two static methods:
public class ${projectName}Liquibase {
    ...
    public static void runMigrations(Connection conn, DB_TYPE dbType) {
        Liquibase liquibase;
        Database database = null;
        try {
            database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(conn));
            liquibase = new Liquibase(dbType.filePath, new FileSystemResourceAccessor(), database);
            liquibase.validate();
            liquibase.update(null);
        } catch (LiquibaseException e) {
            // Chain the original exception as the cause so its stack trace is not lost.
            throw new RuntimeException("File at " + dbType.filePath + " Error: " + e.getMessage(), e);
        }
    }

    public static void dropTables() {
        ...
    }
}
I build the dbType.filePath value from System.getProperty("user.dir") plus the rest of the path.
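For illustration only (the real path segments are starred out in the logs below, so the segments here are hypothetical):

// Hypothetical example of how dbType.filePath could be assembled;
// the actual package segments are elided in the question.
String filePath = System.getProperty("user.dir")
        + "/../core/src/main/java/com/example/liquibase/hsqldb.sql";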
The file is read fine; however, the update only runs the very first changeset and then hangs for the duration of the test, so the test never runs.
Tests run successfully from other files and submodules within my IntelliJ project. In particular, our integration test suite runs successfully using the same interface from a different submodule. All of the tests pass up until this one:
Running *.*.*.*.*.*DAOTest
2013-11-03 14:59:53,144 DEBUG [main] c.j.bonecp.BoneCPDataSource : JDBC URL = jdbc:hsqldb:mem:*, Username = SA, partitions = 2, max (per partition) = 5, min (per partition) = 5, helper threads = 3, idle max age = 60 min, idle test period = 240 min
INFO 11/3/13 2:59 PM:liquibase: Reading from PUBLIC.DATABASECHANGELOG
INFO 11/3/13 2:59 PM:liquibase: Successfully acquired change log lock
INFO 11/3/13 2:59 PM:liquibase: Reading from PUBLIC.DATABASECHANGELOG
INFO 11/3/13 2:59 PM:liquibase: /Users/davidgroff/repo/services/${projectName}/server/../core/src/main/java/com/*/*/liquibase/hsqldb.sql: 1::davidgroff: Custom SQL executed
INFO 11/3/13 2:59 PM:liquibase: /Users/davidgroff/repo/services/${projectName}/server/../core/src/main/java/com/*/*/liquibase/hsqldb.sql: 1::davidgroff: ChangeSet /Users/davidgroff/repo/services/*/*/../core/src/main/java/com/*/*/liquibase/hsqldb.sql::1::davidgroff ran successfully in 3ms
INFO 11/3/13 2:59 PM:liquibase: Successfully released change log lock
After this, the test repeatedly hangs, as if in some infinite loop.
I have the following setup:
<dependency>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-core</artifactId>
<version>3.0.6</version>
</dependency>
<dependency>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-maven-plugin</artifactId>
<version>3.0.6</version>
</dependency>
I'm using Java 7 on Maven 3.1.0.
It may be that a separate transaction has locked a row in your database and Liquibase is hanging waiting for the other transaction to complete.
You said "the update only goes through the very first changeset and then hangs for the duration of the test", does that mean the first changeSet runs successfully? If that is the case, then the locked record is either a table lock on the DATABASECHANGELOG table that is preventing the INSERT INTO DATABASECHANGELOG from completing or a problem with your second changeSet.
Assuming it is a problem with the DATABASECHANGELOG table, is there a separate thread or process that would have been trying to delete from that table?
The issue turned out to be that a connection was being created and used after the Liquibase changeset was applied, with a command like
connection.createStatement(..."***SQL***"...);
and its work was never committed to the database, because afterwards a new connection was created or that connection had gone stale. It is a mystery why this worked before we used Liquibase to run migrations. The fix is simply to commit the above statement by calling:
connection.commit();
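A minimal sketch of the fix in context (the DB_TYPE constant and data source are assumptions; only the commit() call is the actual change):

// Hypothetical test setup: run migrations, then execute the follow-up SQL on the
// same connection and commit explicitly (assumes auto-commit is disabled).
try (Connection connection = dataSource.getConnection()) {
    ${projectName}Liquibase.runMigrations(connection, DB_TYPE.HSQLDB);
    try (Statement statement = connection.createStatement()) {
        statement.execute("***SQL***"); // the post-migration statement from above
    }
    connection.commit(); // the missing call: without it the changes stay invisible to other connections
}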
We have a Spring Boot 2 driven HA Java application in which we use PostgreSQL underneath.
For certain reasons, like unexpected crashes or exceptions, Liquibase ends up with a stale DATABASECHANGELOGLOCK that is never released.
This results in subsequent deployments of the app failing, with the app waiting for the changelog lock and then exiting as follows:
2020-03-04T11:10:31.78+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:31.78+0200 Waiting for changelog lock....
2020-03-04T11:10:32.87+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:32.87+0200 Waiting for changelog lock....
2020-03-04T11:10:41.78+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:41.78+0200 Waiting for changelog lock....
2020-03-04T11:10:42.87+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:42.87+0200 Waiting for changelog lock....
2020-03-04T11:10:51.79+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:51.79+0200 Waiting for changelog lock....
2020-03-04T11:10:52.88+0200 SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-03-04T11:10:52.88+0200 Waiting for changelog lock....
2020-03-04T11:10:54.00+0200 ERR 2020-03-04 09:10:54.010 UTC
2020-03-04T11:10:55.88+0200 [HEALTH/0] ERR Failed to make TCP connection to port 8080: connection refused
2020-03-04T11:10:55.88+0200 [CELL/0] ERR Failed after 1m0.626s: readiness health check never passed.
2020-03-04T11:10:55.89+0200 [CELL/SSHD/0] OUT Exit status 0
2020-03-04T11:10:55.89+0200 info [native] Initiating shutdown sequence for Java agent
2020-03-04T11:10:55.89+0200 info [] Connection Status (120 times 300s) : 0909
Is there a configuration for removing the Liquibase DATABASECHANGELOGLOCK automatically after a certain time, or for removing it on application start if it is older than, say, 5 minutes or some predefined time period?
Or can this be done programmatically at app start, before Liquibase starts looking for the changelog lock?
So I was able to achieve this via the following approach:
We initialise Liquibase using a SpringLiquibase bean.
Within this bean's factory method, before the SpringLiquibase instance is constructed, I call a method that uses plain JDBC Statements to check the database for a lock; any locks older than 5 minutes are deleted.
@Bean
public SpringLiquibase liquibase(DataSource dataSource) {
    // Hook to clear stale locks before Liquibase initialises.
    removeDBLock(dataSource);
    SpringLiquibase liquibase = new SpringLiquibase();
    liquibase.setChangeLog(Constants.DDL_XML);
    liquibase.setDataSource(dataSource);
    return liquibase;
}

private void removeDBLock(DataSource dataSource) {
    // Cut-off timestamp, currently set to 5 minutes or older.
    final Timestamp lastDBLockTime = new Timestamp(System.currentTimeMillis() - (5 * 60 * 1000));
    final String query = format("DELETE FROM DATABASECHANGELOGLOCK WHERE LOCKED=true AND LOCKGRANTED<'%s'", lastDBLockTime.toString());
    // Close the Connection as well as the Statement so it is not leaked.
    try (Connection conn = dataSource.getConnection();
         Statement stmt = conn.createStatement()) {
        int updateCount = stmt.executeUpdate(query);
        if (updateCount > 0) {
            log.error("Locks Removed Count: {}.", updateCount);
        }
    } catch (SQLException e) {
        log.error("Error! Remove Change Lock threw an Exception.", e);
    }
}
The default lock implementation provided by Liquibase uses a database table called 'DATABASECHANGELOGLOCK'. Once a process that has acquired the lock is unexpectedly terminated, the only way to recover is to manually release that lock (using the Liquibase CLI or using a SQL statement). Please take a look at this Liquibase extension, which replaces the StandardLockService, by using database locks: https://github.com/blagerweij/liquibase-sessionlock
This extension uses MySQL or Postgres user lock statements, which are automatically released when the database connection is closed (e.g. when the container is stopped unexpectedly). The only thing required to use the extension is to add a dependency to the library. Liquibase will automatically detect the improved LockService.
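For reference, wiring in the extension is just one dependency; the coordinates below match the GitHub project named above, but the version is an example, so check Maven Central for the current release:

<dependency>
<groupId>com.github.blagerweij</groupId>
<artifactId>liquibase-sessionlock</artifactId>
<version>1.6.9</version> <!-- example version; check Maven Central -->
</dependency>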
I'm not the author of the library, but I stumbled upon it when I was searching for a solution. I helped the author by releasing the library to Maven Central. It currently supports MySQL and PostgreSQL, but it should be fairly easy to add support for other RDBMSes.
We're running a simple webapp on WebSphere Liberty that uses Hibernate as the persistence provider (included as a library in the WAR file).
When the application is starting up, Hibernate is initialized and opens a connection to DB2 to issue some SQL statements. However, this fails when running on CICS with a JDBC Type 2 Driver DataSource. The following messages are logged (some extra line breaks added for readability):
WARN org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator -
HHH000342: Could not obtain connection to query metadata : [jcc][50053][12310][4.19.56]
T2zOS exception: [jcc][T2zos]T2zosCicsApi.checkApiStatus:
Thread is not CICS-DB2 compatible: CICS_REGION_BUT_API_DISALLOWED ERRORCODE=-4228, SQLSTATE=null
...
ERROR org.hibernate.hql.spi.id.IdTableHelper - Unable obtain JDBC Connection
com.ibm.db2.jcc.am.SqlException: [jcc][50053][12310][4.19.56] T2zOS exception: [jcc][T2zos]T2zosCicsApi.checkApiStatus:
Thread is not CICS-DB2 compatible: CICS_REGION_BUT_API_DISALLOWED ERRORCODE=-4228, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source) ~[db2jcc4.jar:?]
...
at com.ibm.db2.jcc.t2zos.T2zosConnection.a(Unknown Source) ~[db2jcc4.jar:?]
...
at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(Unknown Source) ~[db2jcc4.jar:?]
at com.ibm.cics.wlp.jdbc.internal.CICSDataSource.getConnection(CICSDataSource.java:176) ~[?:?]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) ~[our-app.war:5.1.0.Final]
at org.hibernate.internal.SessionFactoryImpl$3.obtainConnection(SessionFactoryImpl.java:643) ~[our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.IdTableHelper.executeIdTableCreationStatements(IdTableHelper.java:67) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.global.GlobalTemporaryTableBulkIdStrategy.finishPreparation(GlobalTemporaryTableBulkIdStrategy.java:125) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.global.GlobalTemporaryTableBulkIdStrategy.finishPreparation(GlobalTemporaryTableBulkIdStrategy.java:42) [our-app.war:5.1.0.Final]
at org.hibernate.hql.spi.id.AbstractMultiTableBulkIdStrategyImpl.prepare(AbstractMultiTableBulkIdStrategyImpl.java:88) [our-app.war:5.1.0.Final]
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:451) [our-app.war:5.1.0.Final]
My current understanding is that when running on CICS and using JDBC Type 2 Drivers only some threads are capable of opening a DB2 connection. That would be the application threads (the ones processing HTTP requests) as well as worker threads servicing CICSExecutorService.
The current solution is to:
1. Disable the JDBC metadata lookup in JdbcEnvironmentInitiator by setting the hibernate.temp.use_jdbc_metadata_defaults property to false.
2. Wrap the execution of IdTableHelper#executeIdTableCreationStatements in a Runnable and submit it to CICSExecutorService.
Would you consider this solution to be sufficient and suitable for production? Or maybe you use some different approach?
Versions used:
CICS Transaction Server for z/OS 5.3.0
WebSphere Application Server 8.5.5.8
Hibernate 5.1.0
Update: Just to clarify, once our application is started, it can query DB2 with no problems (when servicing HTTP requests). The problem is only related to startup.
CICS TS v5.3 support for the JPA feature in Liberty was recently made available in a service refresh (July 2016). Prior to that update, attempting to run JPA in applications would result in problems very similar to those you describe.
Although you are running Hibernate and you are on a CICS-enabled thread, that thread does not have the API environment that would allow the type 2 JDBC call to succeed. New detection logic was developed specifically (but not exclusively) for use with the DB2 JDBC type 2 driver and JPA. This update was shipped in a recent service refresh and might cure the issues you are seeing.
Try applying:
http://www-01.ibm.com/support/docview.wss?crawler=1&uid=swg1PI58375
The description says it is for 'Standard-mode Liberty' support, but it contains other developments as outlined above.
The following solution was tested and works OK.
The idea is to execute the SQL/DDL statements using CICSExecutorService#runAsCICS. The following extension is registered via the hibernate.hql.bulk_id_strategy property.
package org.hibernate.hql.spi.id.global;

import java.util.concurrent.*;
import org.hibernate.boot.spi.MetadataImplementor;
import org.hibernate.engine.jdbc.connections.spi.JdbcConnectionAccess;
import org.hibernate.engine.jdbc.spi.JdbcServices;
import org.springframework.util.ClassUtils;
import com.ibm.cics.server.*;

public class CicsAwareGlobalTemporaryTableBulkIdStrategy extends GlobalTemporaryTableBulkIdStrategy {

    @Override
    protected void finishPreparation(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess, MetadataImplementor metadata, PreparationContextImpl context) {
        execute(() -> super.finishPreparation(jdbcServices, connectionAccess, metadata, context));
    }

    @Override
    public void release(JdbcServices jdbcServices, JdbcConnectionAccess connectionAccess) {
        execute(() -> super.release(jdbcServices, connectionAccess));
    }

    private void execute(Runnable runnable) {
        if (isCics() && IsCICS.getApiStatus() == IsCICS.CICS_REGION_BUT_API_DISALLOWED) {
            // Run on a CICS API-enabled thread and wait for completion.
            RunnableFuture<Void> task = new FutureTask<>(runnable, null);
            CICSExecutorService.runAsCICS(task);
            try {
                task.get();
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException("Failed to execute in a CICS API-enabled thread. " + e.getMessage(), e);
            }
        } else {
            runnable.run();
        }
    }

    private boolean isCics() {
        return ClassUtils.isPresent("com.ibm.cics.server.CICSExecutorService", null);
    }
}
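For reference, the registration mentioned above is a single property entry; the key comes from the question and the value is just the fully qualified class name:

hibernate.hql.bulk_id_strategy=org.hibernate.hql.spi.id.global.CicsAwareGlobalTemporaryTableBulkIdStrategy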
Note that the newer JCICS API version has an overload of the runAsCICS method that accepts a Callable, which might be useful to simplify the CICS branch of the execute method to something like this:
CICSExecutorService.runAsCICS(() -> { runnable.run(); return null; }).get();
A few alternatives tried:
Wrapping just the connection acquisition action (org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl#getConnection) did not work, as the connection was already closed by the time it was used in the main thread.
Wrapping the whole application startup (org.springframework.web.context.ContextLoaderListener#contextInitialized) led to classloading issues.
Edit: Eventually went with a custom Hibernate MultiTableBulkIdStrategy implementation that does not run any SQL/DDL on startup (see the project page on GitHub).
I'm running a tomcat server with services developed using spring v 4.1.0. I'm creating jdbc connections to an informix database and occasionally get an error. These connections are single connections and not pooled (as I'm connecting to dynamically generated database hosts depending upon varying input criteria).
Over time everything seems to be progressing fine and then all of a sudden I start getting a massive upswing of tomcat threads that continues until I hit my max threads and all requests to the server get rejected. Doing a thread dump shows that all the threads are hung on org.springframework.jdbc.support.SQLErrorCodesFactory.getErrorCodes.
- org.springframework.jdbc.support.SQLErrorCodesFactory.getErrorCodes(javax.sql.DataSource) @bci=56, line=204 (Interpreted frame)
- org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.setDataSource(javax.sql.DataSource) @bci=5, line=134 (Interpreted frame)
- org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.<init>(javax.sql.DataSource) @bci=6, line=97 (Interpreted frame)
- org.springframework.jdbc.support.JdbcAccessor.getExceptionTranslator() @bci=22, line=99 (Interpreted frame)
- org.springframework.jdbc.support.JdbcAccessor.afterPropertiesSet() @bci=25, line=138 (Interpreted frame)
- org.springframework.jdbc.core.JdbcTemplate.<init>(javax.sql.DataSource, boolean) @bci=50, line=182 (Interpreted frame)
- com.business.stores.data.dao.impl.BaseDAOImpl.getJdbcTemplate(int) @bci=86, line=53 (Interpreted frame)
...
I've pulled up the source for the Spring class listed above and there is a synchronized block within it, but I'm not sure why it would fail to execute and hang all the threads in the system. (It appears that after it gets blocked, any subsequent SQL errors will also block until there are no threads left available on the box.) Here is the code from Spring in question:
public SQLErrorCodes getErrorCodes(DataSource dataSource) {
    Assert.notNull(dataSource, "DataSource must not be null");
    if (logger.isDebugEnabled()) {
        logger.debug("Looking up default SQLErrorCodes for DataSource [" + dataSource + "]");
    }
    synchronized (this.dataSourceCache) {
        // Let's avoid looking up database product info if we can.
        SQLErrorCodes sec = this.dataSourceCache.get(dataSource);
        if (sec != null) {
            if (logger.isDebugEnabled()) {
                logger.debug("SQLErrorCodes found in cache for DataSource [" +
                        dataSource.getClass().getName() + '@' + Integer.toHexString(dataSource.hashCode()) + "]");
            }
            return sec;
        }
        // We could not find it - got to look it up.
        try {
            String dbName = (String) JdbcUtils.extractDatabaseMetaData(dataSource, "getDatabaseProductName");
            if (dbName != null) {
                if (logger.isDebugEnabled()) {
                    logger.debug("Database product name cached for DataSource [" +
                            dataSource.getClass().getName() + '@' + Integer.toHexString(dataSource.hashCode()) +
                            "]: name is '" + dbName + "'");
                }
                sec = getErrorCodes(dbName);
                this.dataSourceCache.put(dataSource, sec);
                return sec;
            }
        }
        catch (MetaDataAccessException ex) {
            logger.warn("Error while extracting database product name - falling back to empty error codes", ex);
        }
    }
    // Fallback is to return an empty SQLErrorCodes instance.
    return new SQLErrorCodes();
}
-------UPDATE
At this point I'm at a loss to determine what is locking dataSourceCache or how to fix it.
Turned on logging (and debug) for the Spring module and then forced the issue by calling the service with a site in a different environment (and therefore a different password). The service returned the invalid password response as expected, but these lines were in the log.
It appears to have loaded the data correctly:
2015-10-27 21:09:26,677||DEBUG||SQLErrorCodesFactory.getErrorCodes(175)||||SQL error codes for 'Informix Dynamic Server' found
But it had some sort of issue retrieving the data:
2015-10-27 21:09:33,162||DEBUG||SQLErrorCodesFactory.getErrorCodes(199)||||Looking up default SQLErrorCodes for DataSource [org.springframework.jdbc.datasource.SingleConnectionDataSource@149e2931]
2015-10-27 21:09:34,254||DEBUG||SQLErrorCodesFactory.getErrorCodes(217)||||Database product name cached for DataSource [org.springframework.jdbc.datasource.SingleConnectionDataSource@50e91794]: name is 'Informix Dynamic Server'
2015-10-27 21:09:34,255||INFO ||MarkdownVoidByCashierDAOImpl.getVoidByCashierFromStore(47)||||Created JDBC Template for 68
And then it threw the error that I expected:
2015-10-27 21:09:34,317||WARN ||SQLErrorCodesFactory.getErrorCodes(227)||||Error while extracting database product name - falling back to empty error codes
org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: Incorrect password or user com.informix.asf.IfxASFRemoteException: user1@::ffff:10.63.112.131 is not known on the database server.
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:297)
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:324)
at org.springframework.jdbc.support.SQLErrorCodesFactory.getErrorCodes(SQLErrorCodesFactory.java:214)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.setDataSource(SQLErrorCodeSQLExceptionTranslator.java:134)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.<init>(SQLErrorCodeSQLExceptionTranslator.java:97)
at org.springframework.jdbc.support.JdbcAccessor.getExceptionTranslator(JdbcAccessor.java:99)
at org.springframework.jdbc.support.JdbcAccessor.afterPropertiesSet(JdbcAccessor.java:138)
at org.springframework.jdbc.core.JdbcTemplate.<init>(JdbcTemplate.java:182)
...
Of course, this doesn't appear to have recreated the issue either (I didn't really expect it to; previous attempts at recreating the issue have failed), so I will continue monitoring until the issue recurs.
------UPDATE 2
So the issue has recurred on the box. Looking at the logs with debugging I'm not seeing much to point me towards the root cause though.
I'm seeing this basic pattern over and over again:
2015-10-27 21:28:11,178||DEBUG||SQLErrorCodesFactory.getErrorCodes(199)||||Looking up default SQLErrorCodes for DataSource [org.springframework.jdbc.datasource.SingleConnectionDataSource@3da15c49]
...
2015-10-27 21:28:13,481||DEBUG||SQLErrorCodesFactory.getErrorCodes(217)||||Database product name cached for DataSource [org.springframework.jdbc.datasource.SingleConnectionDataSource@207e4667]: name is 'Informix Dynamic Server'
2015-10-27 21:28:13,482||DEBUG||SQLErrorCodesFactory.getErrorCodes(175)||||SQL error codes for 'Informix Dynamic Server' found
The hex value at the end of the single connection data source is the only thing that changes.
On an error or two I'm seeing the following:
2015-10-27 21:27:33,622||WARN ||SQLErrorCodesFactory.getErrorCodes(227)||||Error while extracting database product name - falling back to empty error codes
But I believe that only shows up when I give a completely invalid server name as the target. It does appear that it goes into the synchronized block on every SQL call, though. A grep on the log for lines containing "Looking up" vs "found" shows a difference of about 300 lookups that never hit a corresponding found. This would be consistent with the threads blocking and being unable to advance, since the "Looking up" debug line occurs outside of the synchronized block.
I had the same problem, and I found the solution. The JDBC/database connection had no timeout property set, and the default is to never time out. Once the pool is exhausted, the metadata lookup that follows a cache miss in this.dataSourceCache.get(dataSource) needs another connection to proceed; it never times out and never gets one, so it waits there forever.
The solution is to set a timeout for JDBC, or for whatever you use for the database connection. Hope it will help.
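A minimal sketch of that idea, assuming plain DriverManager-based connections (the 10-second value is only an example):

import java.sql.DriverManager;
...
// Bound how long establishing a connection may block, so a dead or unreachable
// host cannot hold the synchronized dataSourceCache block forever.
DriverManager.setLoginTimeout(10); // seconds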
I want to write an application that writes 5 Strings (related to file assets) to Cassandra. I based the code off the tutorials in DataStax's documentation. It works for about 30 seconds for a few hundred inserts, but crashes with the error:
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
...
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:103)
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:368)
at com.datastax.driver.core.SessionManager.executeQuery(SessionManager.java:404)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:85)
... 8 more
The process is still running and I can re-run the unit test with the same results: a few hundred inserts and then this error. The server shows no sign of distress or error.
I am using the driver:
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-core</artifactId>
<version>2.0.3</version>
</dependency>
Here's my client code:
private static final String BOUND_STATEMENT = "INSERT INTO myschema.files(file_name, md5, last_modified, size, hash_date) "
        + "VALUES (?, ?, ?, ?, ?);";

@Override
public void persist(FileEntry entry) {
    Session session = cluster.connect();
    // Prepare the statement, if it doesn't exist yet.
    if (persistPs == null) {
        persistPs = session.prepare(BOUND_STATEMENT);
    }
    BoundStatement boundStatement = new BoundStatement(persistPs);
    session.execute(boundStatement.bind(entry.getFileName(), entry.getMd5(), entry.getLastModified(),
            entry.getSize(), entry.getHashDate()));
    session.close();
    System.out.print(".");
}
I am running Cassandra 2.0.9 on my localhost (OS X, on a recent MacBook with a solid-state drive).
Any leads on how to make this not crash? If this is just an issue with the DataStax driver, I'd be happy to use any other driver.
I'm not generating too severe a load, and the server process is not throwing any exceptions or hints as to what could be going wrong. I have heard of other organizations having success with Cassandra, so I assume the problem is in my client code.
Thanks!
BryceAtNetwork23 was correct. The issue was "solved" by passing the session object from call to call.
public final void persist(final FileEntry entry, final Session session) {
    prepareInsertStatement(session);
    final BoundStatement boundStatement = new BoundStatement(persistPs);
    // Bind values from our bean to our insert query.
    session.execute(boundStatement.bind(entry.getFileName(), entry.getMd5(), entry.getLastModified(),
            entry.getSize(), entry.getHashDate()));
}

private synchronized void prepareInsertStatement(final Session session) {
    // Prepare the statement, if it doesn't exist yet.
    if (persistPs == null) {
        persistPs = session.prepare(BOUND_STATEMENT);
    }
}
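The underlying point is that Cluster and Session are heavyweight objects meant to be created once and shared, not opened and closed per insert. A hedged usage sketch (the contact point, dao instance, and entries collection are illustrative, not from the original post):

// Illustrative usage: one Cluster/Session for the whole run, reused for every insert.
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Session session = cluster.connect();
for (FileEntry entry : entries) {
    dao.persist(entry, session);
}
session.close();
cluster.close();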
I don't know if this is an issue with the scalability of Cassandra's engine or of DataStax's driver, but normally I have to work a LOT harder than one day with a platform to bring it to its knees. Regardless, I am frustrated with their documentation. I have never had this much trouble getting a platform to run without crashing. Their examples crash a single node after something like 1000 inserts. If we're evaluating Cassandra, chances are we want to insert a lot more than 1000 rows.
That said, once I passed the session from call to call, the code ran pretty quickly and performed nicely. I have some ambivalence, but am happy everything finally works. Thanks for your help, everybody.
I am trying to get a simple example of the Quartz scheduler working in JBoss Seam 2.2.0.GA. Everything works fine using the RAMJobStore setting, but changing the store from
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
to
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = quartzDatasource
## FIXME Should be a different datasource for the non managed connection.
org.quartz.jobStore.nonManagedTXDataSource = quartzDatasource
org.quartz.jobStore.tablePrefix = qrtz_
org.quartz.dataSource.quartzDatasource.jndiURL = java:/quartzDatasource
allows the scheduler to start up, but whereas the job was previously being triggered and run at the correct interval, now it does not run at all. There is also nothing persisted to the quartz database.
I am aware that the nonManagedTXDataSource shouldn't be the same as the managed datasource, but I am having issues with the datasource not being found by Quartz, even though there is a message earlier on reporting that it was bound successfully (this is probably about to be asked in a separate question). Using the same datasource allows the service to start up without errors.
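For what it's worth, a separate non-managed data source would normally be declared directly in the Quartz properties rather than looked up via JNDI; a hedged sketch, with the driver, URL, and credentials as placeholders:

org.quartz.dataSource.quartzNonManaged.driver = org.postgresql.Driver
org.quartz.dataSource.quartzNonManaged.URL = jdbc:postgresql://localhost:5432/quartz
org.quartz.dataSource.quartzNonManaged.user = quartz
org.quartz.dataSource.quartzNonManaged.password = quartz
org.quartz.dataSource.quartzNonManaged.maxConnections = 5
org.quartz.jobStore.nonManagedTXDataSource = quartzNonManaged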
My components.xml file has the following:
<event type="org.jboss.seam.postInitialization">
<action execute="#{asyncResultMapper.scheduleTimer}"/>
</event>
<async:quartz-dispatcher/>
and ASyncResultMapper has the following:
@In
ScheduleProcessor processor;

private String text = "ahoy";
private QuartzTriggerHandle quartzTriggerHandle;

public void scheduleTimer() {
    String cronString = "* * * * * ?";
    quartzTriggerHandle = processor.createQuartzTimer(new Date(), cronString, text);
}
and ScheduleProcessor is as follows:
#Name("processor")
#AutoCreate
#Startup
#Scope(ScopeType.APPLICATION)
public class ScheduleProcessor {
#Asynchronous
public QuartzTriggerHandle createQuartzTimer(#Expiration Date when, #IntervalCron String interval, String text) {
process(when, interval, text);
return null;
}
private void process(Date when, String interval, String text) {
System.out.println("when = " + when);
System.out.println("interval = " + interval);
System.out.println("text = " + text);
}
}
The logs show the service starting but nothing about the job:
INFO [QuartzScheduler] Quartz Scheduler v.1.5.2 created.
INFO [JobStoreCMT] Using db table-based data access locking (synchronization).
INFO [JobStoreCMT] Removed 0 Volatile Trigger(s).
INFO [JobStoreCMT] Removed 0 Volatile Job(s).
INFO [JobStoreCMT] JobStoreCMT initialized.
INFO [JobStoreCMT] Freed 0 triggers from 'acquired' / 'blocked' state.
INFO [JobStoreCMT] Recovering 0 jobs that were in-progress at the time of the last shut-down.
INFO [JobStoreCMT] Recovery complete.
INFO [JobStoreCMT] Removed 0 'complete' triggers.
INFO [JobStoreCMT] Removed 0 stale fired job entries.
INFO [QuartzScheduler] Scheduler FlibScheduler$_NON_CLUSTERED started.
I'm sure it's probably something trivial I've missed, but I can't find a solution in the forums anywhere.
Managed to solve this for myself in the end. The issue of the JobStoreCMT version not starting and triggering jobs was caused by a mixture of a missing @Transactional (thanks tair) and, more importantly, a need to upgrade Quartz. Once Quartz was upgraded to 1.8.5, the error messages became a lot more useful.
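For anyone following along, the upgrade itself is just a dependency bump; the coordinates below are the ones Quartz 1.8.x uses on Maven Central (worth double-checking against your repository):

<dependency>
<groupId>org.quartz-scheduler</groupId>
<artifactId>quartz</artifactId>
<version>1.8.5</version>
</dependency>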