HSQLDB primary key violation error in JUnit tests - java

We have a testing framework using JUnit, OpenEJB, EclipseLink and HSQLDB. Everything has worked fine so far, and testing the service tier is a breeze. Now, however, we are running into problems when doing mass imports into a table (through the service tier and EntityManager), or, for example, when persisting entities to a list multiple times in a service method.
THIS IS THE WEIRD PART: Our tests only seem to break when they are run from the command line with Maven on a fast enough workstation. When I run the tests through the Eclipse IDE everything is fine, but sometimes, randomly, they fail there too. We suspect it has something to do with the speed the tests are run at, as weird as that sounds. The exception itself is simple enough: it tells us we are trying to add an entity with an already existing ID. We have checked our test data and the HSQLDB database multiple times; there are no pre-existing rows with the IDs we are trying to use. Still, HSQLDB throws the primary key exception at some point. From our logs we can see that the conflicting ID is not always the same; it might be 300015 or 300008.
We are at our wit's end here. Could it have something to do with HSQLDB's transactions or something else causing stale data?
We are using HSQLDB 2.2.8, EclipseLink 2.3.0 and OpenEJB 4.0.0-beta2.
The relation we are trying to add entities to is mapped as follows:
@OneToMany(mappedBy = "invoice", cascade = CascadeType.PERSIST)
private List<InvoiceBalance> getInvoiceBalanceHistory() {
    if (invoiceBalanceHistory == null) {
        this.invoiceBalanceHistory = new ArrayList<InvoiceBalance>();
    }
    return invoiceBalanceHistory;
}
The root exception is:
Caused by: java.sql.SQLIntegrityConstraintViolationException: integrity constraint violation: unique constraint or index violation; SYS_PK_10492 table: INVOICEBALANCE
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.executeUpdate(Unknown Source)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:831)
... 82 more
Caused by: org.hsqldb.HsqlException: integrity constraint violation: unique constraint or index violation; SYS_PK_10492 table: INVOICEBALANCE
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.Constraint.getException(Unknown Source)
at org.hsqldb.index.IndexAVLMemory.insert(Unknown Source)
at org.hsqldb.persist.RowStoreAVL.indexRow(Unknown Source)
at org.hsqldb.TransactionManager2PL.addInsertAction(Unknown Source)
at org.hsqldb.Session.addInsertAction(Unknown Source)
at org.hsqldb.Table.insertSingleRow(Unknown Source)
at org.hsqldb.StatementDML.insertSingleRow(Unknown Source)
at org.hsqldb.StatementInsert.getResult(Unknown Source)
at org.hsqldb.StatementDMQL.execute(Unknown Source)
at org.hsqldb.Session.executeCompiledStatement(Unknown Source)
at org.hsqldb.Session.execute(Unknown Source)
EDIT:
I changed the primary key generation strategy from GenerationType.AUTO (which seems to default to the TABLE strategy) to IDENTITY. After this, our mass persists seem to work without fail. I still don't know why HSQLDB goes "out of sync" with the TABLE strategy. I wouldn't want to change our JPA entities just because our testing framework is buggy :)
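For reference, the change amounts to something like this (a minimal sketch; the field name here is an assumption, not taken from our actual entity):
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY) // was GenerationType.AUTO
private Long id;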

It might be that your allocationSize creates a bottleneck on relatively fast platforms, or does so occasionally.
That is: when the strategy defaults to GenerationType.AUTO, which in turn defaults to TABLE, EclipseLink caches IDs up to the allocated value. It then looks up the generator to confirm its last allocated value.
If a lookup happens around the edge of the allocationSize, before the next set of IDs is cached, you might run into a race condition where EclipseLink allocates the last ID in the cache twice before it updates the cache; it then tries to use both for inserts, and both inserts fail and are rolled back. If you can, check whether this happens around the point where the allocation cache should be incremented, although that kind of check might itself change the behaviour.
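If you want to test this theory, you could pin the generator down explicitly. A sketch of what that might look like (the generator and table names are illustrative assumptions, not from your code); allocationSize = 1 forces a database round trip per ID, which sidesteps the suspected cache race at the cost of throughput:
@Id
@GeneratedValue(strategy = GenerationType.TABLE, generator = "invoiceBalanceGen")
@TableGenerator(name = "invoiceBalanceGen", table = "ID_GEN",
        pkColumnName = "GEN_NAME", valueColumnName = "GEN_VALUE",
        allocationSize = 1)
private Long id;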

Most likely, you are running out of memory while importing lots of rows into a MEMORY table.
You should increase the memory allocation or define this particular table as a CACHED table.
Update: CACHED tables can be used in persistent databases, not in all-in-memory databases:
CREATE CACHED TABLE mytable ...
or for an existing table:
SET TABLE mytable TYPE CACHED
UPDATE:
If this is not caused by OOM (which the fix from changing the generation strategy confirms), then it seems the TABLE generation strategy was not incrementing the generated primary key value at some point. The IDENTITY strategy relies on the database to create the generated value, which works fine.
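With IDENTITY, HSQLDB itself hands out the key at insert time, so there is no ID cache that can fall out of sync. A sketch of the corresponding DDL (table and column names are illustrative assumptions):
CREATE TABLE INVOICEBALANCE (
    ID BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    INVOICE_ID BIGINT,
    BALANCE DECIMAL(12,2)
)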

For integrity constraint violation: unique constraint or index violation
If you're a debugger freak, you can rebuild HSQLDB in debug mode and set a breakpoint in org.hsqldb.index.IndexAVLMemory#insert at a line where the variable compare has been assigned, with the condition compare == 0 on the breakpoint.
The faulty row (i.e. the duplicate one) will be the one passed as an argument.

Related

UCanAccess exception on open connection - UNIQUE constraint does not exist on referenced columns

I am trying to create a very simple connection to an Access database (.accdb) in my Java Maven project using UCanAccess. The database is an external database and I just need to read the content of some tables for a migration task. I don't want to, and shouldn't, modify the tables or write anything to the database.
The Java code that I used is quite simple:
try (Connection connection = DriverManager.getConnection(databaseURL)) {
} catch (SQLException ex) {
    ex.printStackTrace();
}
But the connection fails right at the beginning with the following exception:
net.ucanaccess.jdbc.UcanaccessSQLException: UCAExc:::5.0.0 a UNIQUE constraint does not exist on referenced columns: T1 in statement [ALTER TABLE T2 ADD CONSTRAINT "T2{2EB41B92-C3AB-4A64-A53C-B83095D76202}" FOREIGN KEY (C2) REFERENCES T1 (C1) ]
at net.ucanaccess.jdbc.UcanaccessDriver.connect(UcanaccessDriver.java:231)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:251)
at myproject.Application.main(Application.java:42)
Caused by: java.sql.SQLSyntaxErrorException: a UNIQUE constraint does not exist on referenced columns: T1 in statement [ALTER TABLE T2 ADD CONSTRAINT "T2{2EB41B92-C3AB-4A64-A53C-B83095D76202}" FOREIGN KEY (C2) REFERENCES T1 (C1) ]
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.executeUpdate(Unknown Source)
at net.ucanaccess.converters.LoadJet.exec(LoadJet.java:1510)
at net.ucanaccess.converters.LoadJet.access$000(LoadJet.java:74)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadForeignKey(LoadJet.java:695)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTableFKs(LoadJet.java:918)
at net.ucanaccess.converters.LoadJet$TablesLoader.recreate(LoadJet.java:807)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTableData(LoadJet.java:877)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTableData(LoadJet.java:871)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTableData(LoadJet.java:837)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTablesData(LoadJet.java:1029)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTables(LoadJet.java:1077)
at net.ucanaccess.converters.LoadJet$TablesLoader.access$3200(LoadJet.java:264)
at net.ucanaccess.converters.LoadJet.loadDB(LoadJet.java:1579)
at net.ucanaccess.jdbc.UcanaccessDriver.connect(UcanaccessDriver.java:218)
... 3 more
Caused by: org.hsqldb.HsqlException: a UNIQUE constraint does not exist on referenced columns: T1
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.TableWorks.checkCreateForeignKey(Unknown Source)
at org.hsqldb.TableWorks.addForeignKey(Unknown Source)
at org.hsqldb.StatementSchema.getResult(Unknown Source)
at org.hsqldb.StatementSchema.execute(Unknown Source)
at org.hsqldb.Session.executeCompiledStatement(Unknown Source)
at org.hsqldb.Session.executeDirectStatement(Unknown Source)
at org.hsqldb.Session.execute(Unknown Source)
... 18 more
I tried to make my connection "readonly", so that it doesn't check this constraint and doesn't throw an exception, but I didn't have any success.
try (Connection connection = DriverManager.getConnection(databaseURL + ";readonly=true")) {
} catch (SQLException ex) {
    ex.printStackTrace();
}
Is there any way to turn off this constraint check while creating the connection, just for reading some tables?
I can't post the database to reproduce the issue, because it is a very complex database with many tables and relationships and it doesn't belong to me. I am not allowed to share it (even without data) because of the General Data Protection Regulation.
Furthermore, I didn't want to change anything in the database; that is exactly why I wanted to turn off the constraint checks, so I could just connect and READ the rows of some tables.
In the end I made a copy of the database and deleted all the relationships between the tables, and the code could then connect to the db,
BUT:
IT TOOK 30 MINUTES TO CONNECT! So even if I had managed to copy the database and delete all relationships between all tables each time I want to run my code (which actually isn't possible in my task), I would have had to wait 30 minutes just to open a connection!
I find it really ridiculous that there is no way to turn off the constraint checks and open the connection for reading data when some constraints aren't satisfied!
But anyway, in the end (after two days of searching!) I found a solution that works for my task. I don't use a DriverManager Connection of this kind at all:
Connection connection = DriverManager.getConnection(databaseURL)
Instead, I use a DatabaseBuilder to open the database, get the tables and read the rows.
For anyone else who has the same problem and just wants to connect to an MS Access database in order to READ some data, without changing anything, I suggest this way. It is very easy and fast!
String fileName = "my_database.accdb";
File file = new File(fileName);
// Open the database read-only (we only need to read) and close it automatically.
try (Database db = new DatabaseBuilder(file).setReadOnly(true).open()) {
    Table myTable = db.getTable("myAccessTableName");
    for (Row row : myTable) {
        String firstName = row.getString("first_name");
        String lastName = row.getString("last_name");
    }
} catch (IOException e) {
    e.printStackTrace();
}
The Database, DatabaseBuilder, Row and Table classes come from Jackcess, so these imports are needed (along with java.io.File and java.io.IOException):
import com.healthmarketscience.jackcess.Database;
import com.healthmarketscience.jackcess.DatabaseBuilder;
import com.healthmarketscience.jackcess.Row;
import com.healthmarketscience.jackcess.Table;
More examples of this approach can be found here:
Java Code Examples
and here:
UCanAccess

Could not synchronize database state with session org.hibernate.exception.ConstraintViolationException because of Duplicate GUID

I use Hibernate to insert data into a table using an auto-generated GUID, but the insertion sometimes fails with a duplicate GUID exception.
For example:
From the logs, insertion fails for the first 2 attempts, printing the duplicate GUID '0500edac-0074-4324-3436-31444231342d'. The timestamps are as follows:
1st attempt: 08-27-2018 04:27:00.012
2nd attempt: 08-27-2018 04:27:01.024
3rd attempt: not logged, as it was successful
But in the database I see a row with GUID '0500edac-0074-4324-3436-31444231342d' created at '08-27-2018 04:27:01.054'.
So I am not sure why I get the exceptions for the first 2 attempts while the 3rd insert succeeds.
SQL Table Properties: I have a SQL Server table named "DataHistory" with a column named "DataHistoryGuid" that has the following properties: uniqueidentifier, ROWGUIDCOL, primary key column, NEWSEQUENTIALID().
Hibernate Properties:
I am using Hibernate to store the data in that table; for the GUID column I am using the following mapping:
<id name="dataHistoryGuid" type="java.util.UUID">
    <column name="DataHistoryGuid"/>
    <generator class="guid"/>
</id>
The following is the exception trace:
[event.def.AbstractFlushingEventListener:performExecutions:324]
Could not synchronize database state with session
org.hibernate.exception.ConstraintViolationException: could not
insert: [com.testProj.dataprocessor.model.sql.SqlDataHistory]
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2295)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2688)
at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
at org.springframework.orm.hibernate3.HibernateAccessor.flushIfNecessary(HibernateAccessor.java:390)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:420)
at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374)
at org.springframework.orm.hibernate3.HibernateTemplate.saveOrUpdate(HibernateTemplate.java:748)
at com.testProj.dataprocessor.common.sql.hibernate.HibernateSession.upsertDataHistory(HibernateSession.java:505)
at com.testProj.dataprocessor.common.sql.SqlStore.upsertDataHistory(SqlStore.java:92)
at com.testProj.dataprocessor.common.sql.SqlStore$$FastClassByCGLIB$$18d897d8.invoke(<generated>)
at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:191)
at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:700)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:77)
at com.testProj.dataprocessor.model.performance.Profiler.profile(Profiler.java:15)
at sun.reflect.GeneratedMethodAccessor160.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:627)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:616)
at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:64)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:77)
at com.testProj.dataprocessor.common.sql.SqlRetryPolicy.retry(SqlRetryPolicy.java:20)
at sun.reflect.GeneratedMethodAccessor161.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:627)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:616)
at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:64)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:635)
at com.testProj.dataprocessor.common.sql.SqlStore$$EnhancerByCGLIB$$f3a323cc.upsertDataHistory(<generated>)
at com.testProj.dataprocessor.dao.DataDAO.updateDataHistory(DataDAO.java:88)
at com.testProj.dataprocessor.eventhandler.DataHistoryEventHandler.doWork(DataHistoryEventHandler.java:34)
at com.testProj.dataprocessor.eventhandler.DataHistoryEventHandler.updateDiagnosticsHistory(DataHistoryEventHandler.java:28)
at com.testProj.dataprocessor.DataProcessorService.doWork(DataProcessorService.java:37)
at com.testProj.dataprocessor.DataProcessorService.process(DataProcessorService.java:24)
at com.testProj.dataprocessor.DataProcessorService.process(DataProcessorService.java:80)
at com.testProj.dataprocessor.DataProcessorService.postDataEventSync(DefaultDataProcessorService.java:41)
at com.testProj.dataprocessor.DataProcessorService.postDataEvent(DefaultDataProcessorService.java:36) at sun.reflect.GeneratedMethodAccessor272.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:173)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:89)
at org.apache.cxf.jaxws.JAXWSMethodInvoker.invoke(JAXWSMethodInvoker.java:60)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:75)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:58)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.cxf.workqueue.SynchronousExecutor.execute(SynchronousExecutor.java:37)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:106)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:109)
at Caused by: java.sql.SQLException: Violation of PRIMARY KEY constraint 'PK_DataHistory_on_DataHistoryGuid'. Cannot insert duplicate key in object 'dbo.DataHistory'. The duplicate key value is (0500edac-0074-4324-3436-31444231342d).
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:671)
at net.sourceforge.jtds.jdbc.JtdsStatement.processResults(JtdsStatement.java:613)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQL(JtdsStatement.java:572)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeUpdate(JtdsPreparedStatement.java:727)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.hibernate.jdbc.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:46)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2275) ... 68 more
Assumptions:
1. As SQL Server websites mention, about 1 in a billion created GUIDs will be a duplicate, so I don't expect duplicates to be present in my tables.
Although the question is focused on Hibernate, the following line in the trace indicates that the database is rejecting the insert.
Caused by: java.sql.SQLException: Violation of PRIMARY KEY constraint 'PK_DataHistory_on_DataHistoryGuid'. Cannot insert duplicate key in object 'dbo.DataHistory'. The duplicate key value is (0500edac-0074-4324-3436-31444231342d).
This is more of a design issue than a problem with SQL Server or Hibernate. If an auto-generated UUID as the PRIMARY KEY of the DataHistory table is your goal, then you are much better off using the database to achieve this, rather than Hibernate (or worse, rolling your own solution).
This is because the database is designed to avoid a collision of values when a UNIQUE INDEX or PRIMARY KEY is specified.
See SQL UNIQUE Constraint
The database can create the auto-generated UUID for you, with PRIMARY KEY DEFAULT (NEWID()).
If you need to use the UUID later, select it via a non-primary-key value after it has been created.
Example table:
CREATE TABLE dbo.DataHistory
(
DatahistoryGuid uniqueidentifier PRIMARY KEY DEFAULT (NEWID()),
history_row integer NOT NULL,
data_stuff nvarchar(4000) NOT NULL,
created_date datetime2 DEFAULT (GETDATE())
);
Try setting the insertable property to false in Hibernate. Do this for all columns defaulted by the database. For example:
@Column(name = "DatahistoryGuid", insertable = false)
@Column(name = "created_date", insertable = false)
This results in SQL similar to the following:
INSERT INTO dbo.DataHistory (history_row, data_stuff)
VALUES (1234, 'Joe Bloggs');
See Related Question
To get the DatahistoryGuid value, use a SELECT statement:
SELECT DatahistoryGuid from dbo.DataHistory where history_row = 1234;
SQL Fiddle Example

Hibernate: More than one row with the given identifier was found

@Override
public Application getApplicationForId(Long applicationId) {
    List<Application> applications = executeNamedQuery("applicationById", Application.class, applicationId);
    return applications.isEmpty() ? null : applications.get(0);
}
While debugging in Eclipse, on the line
return applications.isEmpty() ? null : applications.get(0);
the expressions evaluate as:
applications.isEmpty() -> false
applications.get(0) -> (id=171)
applications.size() -> 1
but after this line executes, the following error is thrown:
org.hibernate.HibernateException: More than one row with the given identifier was found: 263536
Even though the size shows as 1, why and how is it getting multiple rows after the execution?
I'm quite sure that this is due to eager fetching, so check your entity and remove the fetch=FetchType.EAGER.
Actually, this is not caused by duplicate rows in the database, as it's obviously not possible to have duplicate primary keys. Instead, it is caused by Hibernate looking up an object and eagerly filling in a relationship: Hibernate assumed a single row would come back, but two came back because there were two objects associated with that relationship.
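A sketch of the suggested change (the entity and field names are assumptions, not taken from the question):
@OneToMany(mappedBy = "application", fetch = FetchType.LAZY) // was FetchType.EAGER
private List<Attachment> attachments;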
In my case the issue was that, while debugging, the server was forcibly stopped in the middle of a transaction (once the purpose of the session was served). Because the server was killed mid-execution, the transaction was never rolled back, leaving dirty data in the database: some rows had already been inserted before the server was terminated, and the primary key's auto-increment counter had already advanced.
Resetting the auto-increment value for the primary key of the table resolved the issue:
1. Identify the table with the dirty data (refer to the stack trace).
2. Sort the primary key column and check its highest value (say somevalue).
3. Use the command:
ALTER TABLE tablename AUTO_INCREMENT = somevalue + 1

Java/Hibernate + HSQLDB java.sql.BatchUpdateException: data exception: string data, right truncation

I am testing an application locally using a memory-based HSQLDB.
So far everything has gone fine; however, when executing a test case with a String longer than 256 chars I ran into an error.
Caused by: java.sql.BatchUpdateException: data exception: string data, right truncation; table: TABLENAME column: COLNAME
at org.hsqldb.jdbc.JDBCPreparedStatement.executeBatch(Unknown Source)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
... 30 more
I gathered that the cause of this error is usually an "overflow" of the datatype one is using.
What bothers me is that I explicitly defined the column to be 4000 chars long, using the hbm.xml files:
<property name="translation" type="java.lang.String" length="4000">
    <column name="COLNAME" not-null="false" />
</property>
When I cut the test string down to 256 chars or less, everything starts working again; with 257+ chars the error is thrown. I don't really see why this is happening. Why would HSQLDB define this column as length="256" when I explicitly state that it's supposed to be 4000?
Can anyone help?
Best regards,
daZza
Well, for whatever reason HSQLDB seems to do a bad mapping from certain types and lengths specified in the config file to the real tables it creates.
I was able to fix the issue by changing the "type" attribute to type="text". Now everything works perfectly. I just hope that after I finish testing, the application still works with the original MSSQL DB and maps text to a varchar then...
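A possible explanation (an assumption on my part, not confirmed in this thread): when a nested <column> element is present, Hibernate takes the column definition from that element and ignores the length attribute on <property>, so the generated DDL falls back to the default length. Moving the length onto the <column> element itself may therefore also fix it:
<property name="translation" type="java.lang.String">
    <column name="COLNAME" length="4000" not-null="false" />
</property>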

Derby SQL Script Bulk Import

My goal is to import data into a Derby database. The data was extracted from a MySQL instance in the form of a SQL dump, and I made sure with a script that only the INSERT statements are used and that the MySQL-specific escaping is transformed to Derby syntax correctly.
To test, I use the derby-maven-plugin:
export MAVEN_OPTS=-Xmx2048m; mvn derby:run
In the first step, I create the schema in the Derby instance (using DDLUtils from MySQL; this works fine).
Second, I try to import the data. I use ij on the command line with the following (shortened) script:
CONNECT 'jdbc:derby://localhost:1527/foodmart';
SET SCHEMA APP;
autocommit off;
INSERT INTO "ACCOUNT" VALUES (1000,NULL,'Assets','Asset','~',NULL), (2000,NULL,'Liabilities','Liability','~',NULL),(3000,5000,'Net Sales','Income','+',NULL),(3100,3000,'Gross Sales','Income','+','LookUpCube(\"[Sales]\",\"(Measures.[Store Sales],\"+time.currentmember.UniqueName+\",\"+ Store.currentmember.UniqueName+\")\")'),(3200,3000,'Cost of Goods Sold','Income','-',NULL),(4000,5000,'Total Expense','Expense','-',NULL),(4100,4000,'General & Administration','Expense','+',NULL),(4200,4000,'Information Systems','Expense','+',NULL),(4300,4000,'Marketing','Expense','+',NULL),(4400,4000,'Lease','Expense','+',NULL),(5000,NULL,'Net Income','Income','+',NULL);
...
commit;
As you can see, for every row of a table there is a parenthesized tuple containing the sample data.
There are of course several more INSERT statements for other tables. After finishing some inserts correctly, the bulk import process chokes on a really big dataset (> 1000 rows) with the following exception (from the Derby logs):
java.lang.StackOverflowError
at org.apache.derby.impl.sql.compile.SetOperatorNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.UnionNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.TableOperatorNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.SetOperatorNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.UnionNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.TableOperatorNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.SetOperatorNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.UnionNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.TableOperatorNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.SetOperatorNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.UnionNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.TableOperatorNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.SetOperatorNode.bindExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.UnionNode.bindExpressions(Unknown Source)
... and many more lines, repeating because of the recursive nature...
Cleanup action completed
ij prints on the command line:
FEHLER XJ001: DERBY SQL error: SQLCODE: -1, SQLSTATE: XJ001, SQLERRMC: java.lang.StackOverflowError^T^TXJ001.U
Does this mean that Derby is not capable of handling SQL bulk-import scenarios?
Should I switch to a CSV-based import instead?
From the stack trace method calls I assume that Derby is not able to create the query plan.
Could it be that for every data tuple of an INSERT statement it internally creates a UNION node, adding a large amount of overhead to each INSERT query?
So should I try to split my long INSERT statements into many small statements?
I don't have the time to look at the Derby sources, so please help me!
I just used many INSERT statements and it worked.
It seems that for every data tuple of an INSERT statement, Derby generates a method call on the stack, which for many tuples leads to a stack overflow due to the deeply recursive method calls.
You just need to transform an INSERT statement from
INSERT INTO mytable VALUES (1,2,3), ..., (4,5,6);
into multiple statements:
INSERT INTO mytable VALUES (1,2,3);
...
INSERT INTO mytable VALUES (4,5,6);
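If the data is loaded from Java rather than through ij, a batched PreparedStatement avoids the giant multi-row INSERT entirely. A minimal sketch, assuming the ACCOUNT table from the script above; the sample values are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DerbyBatchImport {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:derby://localhost:1527/foodmart")) {
            conn.setAutoCommit(false); // one commit for the whole batch
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO \"ACCOUNT\" VALUES (?, ?, ?, ?, ?, ?)")) {
                // One single-row INSERT per tuple, batched, instead of one huge statement.
                Object[][] rows = {
                    { 3100, 3000, "Gross Sales", "Income", "+", "some formula" },
                    { 4300, 4000, "Marketing", "Expense", "+", "n/a" }
                };
                for (Object[] row : rows) {
                    for (int i = 0; i < row.length; i++) {
                        ps.setObject(i + 1, row[i]);
                    }
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            conn.commit();
        }
    }
}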
