error occurred while executing hibernate query - java

I get an exception while executing a fetch HQL query. The query works most of the time, but sometimes it throws this exception.
The database is MySQL and the server is JBoss 5.1.0 GA.
The error that is shown is:
org.hibernate.exception.GenericJDBCException: could not execute query
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126)
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:114)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.loader.Loader.doList(Loader.java:2231)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2125)
at org.hibernate.loader.Loader.list(Loader.java:2120)
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:401)
at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:361)
at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:196)
at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1148)
at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.jboss.util.NestedSQLException: Error; - nested throwable: (java.lang.OutOfMemoryError: GC overhead limit exceeded)
at org.jboss.resource.adapter.jdbc.WrappedConnection.checkException(WrappedConnection.java:873)
at org.jboss.resource.adapter.jdbc.WrappedStatement.checkException(WrappedStatement.java:852)
at org.jboss.resource.adapter.jdbc.WrappedResultSet.checkException(WrappedResultSet.java:1947)
at org.jboss.resource.adapter.jdbc.WrappedResultSet.getString(WrappedResultSet.java:892)
at org.hibernate.type.StringType.get(StringType.java:41)
at org.hibernate.type.NullableType.nullSafeGet(NullableType.java:184)
at org.hibernate.type.NullableType.nullSafeGet(NullableType.java:173)
at org.hibernate.type.AbstractType.hydrate(AbstractType.java:105)
at org.hibernate.persister.entity.AbstractEntityPersister.hydrate(AbstractEntityPersister.java:2124)
at org.hibernate.loader.Loader.loadFromResultSet(Loader.java:1404)
at org.hibernate.loader.Loader.instanceNotYetLoaded(Loader.java:1332)
at org.hibernate.loader.Loader.getRow(Loader.java:1230)
at org.hibernate.loader.Loader.getRowFromResultSet(Loader.java:603)
at org.hibernate.loader.Loader.doQuery(Loader.java:724)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
at org.hibernate.loader.Loader.doList(Loader.java:2228)
... 11 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded

Your application seems to have run out of memory. The query is probably returning a very large result set, so the memory allocated to the application is not sufficient to handle it.
You can either:
Increase the memory allocated to the Java process, or
Fetch your data in subsets (pagination) to avoid returning a large data set at once.

Based on your stack trace, the query was executed successfully in the database and the error occurs while the result set is being materialized inside the JVM.
This exception is a memory-related issue that occurs when your program demands more memory than is available:
"Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector"
Some possibilities are:
Configuration: your heap configuration (-Xmx) is too small for your needs and you should increase it.
Bug: you have a bug in your query that generates a result set bigger than expected.
Design: you need to redesign your code to load this data without exceeding the available memory. You can paginate and hold only a small chunk of data on each iteration; in that case it is important to flush each page to the end user or consumer before loading the next one (see the pagination sketch below the tutorial links).
GC overhead tutorial
Query Pagination tutorial
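
A minimal pagination sketch using Hibernate's Query API, assuming an open Session named session; the entity name, page size and process(...) consumer are placeholders for illustration:

import java.util.List;
import org.hibernate.Session;

static void processAllPaged(Session session) {
    int pageSize = 500;                                   // placeholder page size
    int firstResult = 0;
    List<?> page;
    do {
        page = session.createQuery("from MyEntity")       // hypothetical entity
                .setFirstResult(firstResult)              // offset of this page
                .setMaxResults(pageSize)                  // size of this page
                .list();
        for (Object entity : page) {
            process(entity);                              // flush/stream each page before loading the next
        }
        session.clear();                                  // detach processed entities so the session does not grow
        firstResult += pageSize;
    } while (page.size() == pageSize);
}

static void process(Object entity) {
    // hypothetical consumer: write the entity to the response/file, then discard it
}

Each iteration keeps at most one page of entities in memory, and session.clear() prevents the persistence context itself from accumulating them.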

Related

Getting org.apache.openjpa.persistence.OptimisticLockException: Unable to obtain an object lock on "null"

I am getting the exception below while doing EntityManager.find(). We are using a DB2 database and a WAS 8.0 server for our application. Any help is greatly appreciated.
Caused by: <openjpa-2.1.2-SNAPSHOT-r422266:1709309 nonfatal store error> org.apache.openjpa.persistence.OptimisticLockException: Unable to obtain an object lock on "null".
at org.apache.openjpa.jdbc.sql.DBDictionary.narrow(DBDictionary.java:4930)
at org.apache.openjpa.jdbc.sql.DBDictionary.newStoreException(DBDictionary.java:4908)
at org.apache.openjpa.jdbc.sql.DB2Dictionary.newStoreException(DB2Dictionary.java:603)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:136)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:110)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:62)
at org.apache.openjpa.jdbc.kernel.PreparedSQLStoreQuery$PreparedSQLExecutor.executeQuery(PreparedSQLStoreQuery.java:139)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:1012)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:870)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:801)
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:542)
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:315)
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:331)
... 116 more
Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: DB2 SQL Error: SQLCODE=-913, SQLSTATE=57033, SQLERRMC=00C90088;00000304;ODNC001 .SNCPC145.X'200D65' '.X'43', DRIVER=4.15.134 {prepstmnt -1803801027
SELECT a.column1
FROM table_test a
WHERE (a.column2 = ? AND a.column3 = ?)
[params=(String) 00000, (String) 000011]} [code=-913, state=57033]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:281)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:265)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$700(LoggingConnectionDecorator.java:72)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeQuery(LoggingConnectionDecorator.java:1183)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:284)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$CancelPreparedStatement.executeQuery(JDBCStoreManager.java:1787)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:274)
at org.apache.openjpa.jdbc.kernel.PreparedSQLStoreQuery$PreparedSQLExecutor.executeQuery(PreparedSQLStoreQuery.java:118)
... 123 more
SQLCODE -913 with SQLERRMC=00C90088 means that your connection experienced a deadlock.
If your Db2 server is running on z/OS, ask your DBA for help to find the other Db2 connection and the SQL statements running in both transactions. The access plans and isolation levels used by both connections are also relevant, as are any applicable lock timeouts. The Db2 server DBA has access to diagnostic tools to help you.
There is plenty of advice online on how to reduce the likelihood of Db2 deadlocks, so do your research.
You will need to know the isolation level being used by the WebSphere connection (or package, or SQL statement(s)), and all the statements in the Db2 transaction for your connection.
The other tokens in the message are also relevant, i.e. ODNC001.SNCPC145 may be the involved table.
The version of the JDBC type 4 driver used by your WebSphere is out of date (it looks like a Db2 v10.1 fixpack 5 build), so consider upgrading it to a current version.
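If the deadlock cannot be avoided entirely, a common mitigation is to retry the failed transaction a few times. A hedged sketch using plain JDBC, where the unit of work, retry budget and backoff are placeholders (with OpenJPA/WAS the SQLException is wrapped, so in practice you would unwrap the cause chain before inspecting the error code):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

interface SqlWork {
    void run(Connection con) throws SQLException;          // the transactional unit of work
}

static void runWithDeadlockRetry(DataSource ds, SqlWork work) throws SQLException {
    final int maxAttempts = 3;                              // placeholder retry budget
    for (int attempt = 1; ; attempt++) {
        try (Connection con = ds.getConnection()) {
            con.setAutoCommit(false);
            work.run(con);
            con.commit();
            return;
        } catch (SQLException e) {
            // Db2 reports the deadlock as SQLCODE -913 (SQLSTATE 57033)
            if (e.getErrorCode() == -913 && attempt < maxAttempts) {
                try {
                    Thread.sleep(200L * attempt);           // brief backoff before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
                continue;
            }
            throw e;
        }
    }
}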

Connection to Postgres database lost

On a performance server with a rather big load, I see some weird behavior.
From one moment in time, all connections to the database start saying "connection has been closed".
The only hint so far is this IOException:
Caused by: org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:314)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:168)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:116)
at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:342)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1812)
at org.hibernate.loader.Loader.doQuery(Loader.java:697)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
at org.hibernate.loader.Loader.doList(Loader.java:2232)
... 73 more
Caused by: java.io.IOException: Tried to send an out-of-range integer as a 2-byte value: 33001
at org.postgresql.core.PGStream.sendInteger2(PGStream.java:211)
at org.postgresql.core.v3.QueryExecutorImpl.sendParse(QueryExecutorImpl.java:1409)
at org.postgresql.core.v3.QueryExecutorImpl.sendOneQuery(QueryExecutorImpl.java:1729)
at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1294)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:280)
... 83 more
However, I can't really link it to a specific business scenario for the moment.
Any ideas?
It's a PostgreSQL driver limitation: the maximum number of bind parameters in a single statement is 32767 (a signed 2-byte value).
You have a query that exceeds that limit, and when that happens the driver behaves erratically and closes connections. I ran into this on a JBoss server using Hibernate with PostgreSQL, and the connection closing left the connection pool in a pretty messed-up state.
This parameter is described here, in the Parse section:
"Int16 - The number of parameter data types specified".
The solution is to split that long query into smaller ones with a known number of parameters (see the chunking sketch below).
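
A hedged sketch of chunking a large IN (...) list with Hibernate, assuming an open Session named session; the entity name, property and chunk size are placeholders:

import java.util.ArrayList;
import java.util.List;
import org.hibernate.Session;

static List<Object> findByIds(Session session, List<Long> ids) {
    final int chunkSize = 1000;                             // comfortably below the 32767 bind-parameter ceiling
    List<Object> results = new ArrayList<Object>();
    for (int i = 0; i < ids.size(); i += chunkSize) {
        List<Long> slice = ids.subList(i, Math.min(i + chunkSize, ids.size()));
        results.addAll(session.createQuery("from MyEntity e where e.id in (:ids)")  // hypothetical entity
                .setParameterList("ids", slice)             // a known, bounded number of parameters
                .list());
    }
    return results;
}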

Spark on mesos Executors failing with OOM Errors

We are using Spark 2.0.2 managed by a DC/OS system that fetches data from a Kafka 1.0.0 messaging service and writes Parquet to an HDFS system.
Everything was working fine, but when we increased the number of topics in Kafka, our Spark executors began to crash constantly with OOM errors:
java.lang.OutOfMemoryError: Java heap space
at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
18/03/20 18:41:13 ERROR [Executor task launch worker-0] SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.OutOfMemoryError: Java heap space
at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
We tried to increase the executors' available memory and reviewed the code, but we couldn't find anything wrong.
Another piece of info: we are using RDDs in Spark.
Has anyone encountered a similar problem that has already been solved?
What is the heap configuration for the executor? By default, Java will auto-tune its heap according to the machine memory. You need to change it to fit in your container with the -Xmx setting.
See this article about running Java in a container:
https://github.com/fabianenardon/docker-java-issues-demo/tree/master/memory-sample
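
A hedged sketch of pinning the executor heap explicitly when building the Spark context; the class and app name and the memory value are placeholders and should be sized below your Mesos/DC/OS container limit:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class Kafka2HdfsLauncher {                           // hypothetical launcher class
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("kafka-to-parquet")             // placeholder app name
                .set("spark.executor.memory", "4g")         // executor heap (-Xmx); placeholder value
                .set("spark.executor.extraJavaOptions",
                     "-XX:+HeapDumpOnOutOfMemoryError");    // optional: capture a heap dump on the next OOM
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... the existing Kafka-to-Parquet job would run with this context ...
        sc.stop();
    }
}

The same settings can also be passed to spark-submit with --conf instead of being hard-coded.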

Ehcache Initial table allocation failed

One of our web applications runs within Tomcat 7 deployed on an AS400 server, and it uses Ehcache as a cache component to swap data to disk and reduce memory usage.
A few weeks ago, when we tried to deploy this application for one of our customers, it failed at startup. The log shows:
Caused by: java.lang.IllegalStateException: Cache 'data' creation in EhcacheManager failed.
at org.ehcache.core.EhcacheManager.createCache(EhcacheManager.java:288)
at org.ehcache.core.EhcacheManager.init(EhcacheManager.java:567)
... 7 more
Caused by: org.ehcache.StateTransitionException: Initial table allocation failed.
Initial Table Size (slots) : 64
Allocation Will Require : 1KB
Table Page Source : org.terracotta.offheapstore.disk.paging.MappedPageSource#bc8a4ca2
at org.ehcache.core.StatusTransitioner$Transition.succeeded(StatusTransitioner.java:209)
at org.ehcache.core.Ehcache.init(Ehcache.java:567)
at org.ehcache.core.EhcacheManager.createCache(EhcacheManager.java:261)
... 8 more
Caused by: java.lang.IllegalArgumentException: Initial table allocation failed.
Initial Table Size (slots) : 64
Allocation Will Require : 1KB
Table Page Source : org.terracotta.offheapstore.disk.paging.MappedPageSource#bc8a4ca2
at org.terracotta.offheapstore.OffHeapHashMap.<init>(OffHeapHashMap.java:219)
at org.terracotta.offheapstore.AbstractLockedOffHeapHashMap.<init>(AbstractLockedOffHeapHashMap.java:71)
at org.terracotta.offheapstore.AbstractOffHeapClockCache.<init>(AbstractOffHeapClockCache.java:76)
at org.terracotta.offheapstore.disk.persistent.AbstractPersistentOffHeapCache.<init>(AbstractPersistentOffHeapCache.java:43)
at org.terracotta.offheapstore.disk.persistent.PersistentReadWriteLockedOffHeapClockCache.<init>(PersistentReadWriteLockedOffHeapClockCache.java:36)
at org.ehcache.impl.internal.store.disk.factories.EhcachePersistentSegmentFactory$EhcachePersistentSegment.<init>(EhcachePersistentSegmentFactory.java:73)
at org.ehcache.impl.internal.store.disk.factories.EhcachePersistentSegmentFactory.newInstance(EhcachePersistentSegmentFactory.java:60)
at org.ehcache.impl.internal.store.disk.factories.EhcachePersistentSegmentFactory.newInstance(EhcachePersistentSegmentFactory.java:37)
at org.terracotta.offheapstore.concurrent.AbstractConcurrentOffHeapMap.<init>(AbstractConcurrentOffHeapMap.java:106)
at org.terracotta.offheapstore.concurrent.AbstractConcurrentOffHeapCache.<init>(AbstractConcurrentOffHeapCache.java:48)
at org.terracotta.offheapstore.disk.persistent.AbstractPersistentConcurrentOffHeapCache.<init>(AbstractPersistentConcurrentOffHeapCache.java:52)
at org.ehcache.impl.internal.store.disk.EhcachePersistentConcurrentOffHeapClockCache.<init>(EhcachePersistentConcurrentOffHeapClockCache.java:52)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore.createBackingMap(OffHeapDiskStore.java:279)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore.getBackingMap(OffHeapDiskStore.java:167)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore.access$600(OffHeapDiskStore.java:95)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore$Provider.init(OffHeapDiskStore.java:460)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore$Provider.initStore(OffHeapDiskStore.java:456)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore$Provider.initAuthoritativeTier(OffHeapDiskStore.java:507)
at org.ehcache.impl.internal.store.tiering.TieredStore$Provider.initStore(TieredStore.java:472)
at org.ehcache.core.EhcacheManager$8.init(EhcacheManager.java:499)
at org.ehcache.core.StatusTransitioner.runInitHooks(StatusTransitioner.java:135)
at org.ehcache.core.StatusTransitioner.access$000(StatusTransitioner.java:33)
at org.ehcache.core.StatusTransitioner$Transition.succeeded(StatusTransitioner.java:194)
The code that triggered this is:
CacheConfiguration<String, String[]> dconf = CacheConfigurationBuilder
    .newCacheConfigurationBuilder(String.class, String[].class, ResourcePoolsBuilder.heap(11) // up to 11 entries on heap
        .disk(3, MemoryUnit.GB, false))                                                       // 3 GB disk tier, not persisted across restarts
    .withExpiry(Expirations.timeToLiveExpiration(Duration.of(30, TimeUnit.MINUTES)))          // 30-minute TTL
    .build();
dataCacheManager = CacheManagerBuilder.newCacheManagerBuilder()
    .with(CacheManagerBuilder.persistence(new File(cacheFolder, "requestdata"))) //$NON-NLS-1$ // disk store location
    .withCache(CACHE_NAME_DATA, dconf)
    .build(true);                                                                             // true = initialize the manager immediately
This surprised us because it has never happened before; we have deployed it on several other customers' servers (Windows, AS400, Linux) and none of them has this issue.
This is really a headache. We spent weeks trying to figure it out, reading source code, tuning JVM parameters, googling around..., and found nothing except one unanswered post: https://groups.google.com/forum/#!topic/ehcache-users/ApFAe5nYxuA
Can anyone help us with this? Thanks ahead!
The Ehcache 3 disk store uses java.nio.MappedByteBuffer, which requires access to direct memory.
There is no documented default MaxDirectMemorySize in Java, and the same JVM on different operating systems can behave differently.
If you have not already set the flag -XX:MaxDirectMemorySize=3G when launching your application, that could be the cause of the exception you see.
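For example, on Tomcat the flag can be added through CATALINA_OPTS (a hedged sketch; size the value to match your disk cache configuration):
CATALINA_OPTS="$CATALINA_OPTS -XX:MaxDirectMemorySize=3G"
placed in Tomcat's bin/setenv.sh (or setenv.bat on Windows).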

java.lang.ArrayIndexOutOfBoundsException at oracle.jdbc.driver.T4CTTIrxd.readBitVector(T4CTTIrxd.java:135)

Trying to access a collection using lazy loading and facing the issue below; I'm not able to figure out the cause.
I am using Hibernate 3 with Spring, Oracle JDBC driver 12.1.0.2 and Oracle database 11.2.0.4.0. Below is the error log.
java.lang.ArrayIndexOutOfBoundsException
at oracle.jdbc.driver.T4CTTIrxd.readBitVector(T4CTTIrxd.java:135)
at oracle.jdbc.driver.T4C8TTIrxh.unmarshalV10(T4C8TTIrxh.java:123)
at oracle.jdbc.driver.T4C8Oall.readRXH(T4C8Oall.java:678)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:436)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1066)
at oracle.jdbc.driver.OracleStatement.fetchMoreRows(OracleStatement.java:3716)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.fetchMoreRows(InsensitiveScrollableResultSet.java:1015)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.absoluteInternal(InsensitiveScrollableResultSet.java:979)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.next(InsensitiveScrollableResultSet.java:579)
at com.ibm.ws.rsadapter.jdbc.WSJdbcResultSet.next(WSJdbcResultSet.java:3120)
at org.hibernate.loader.Loader.doQuery(Loader.java:697)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:236)
at org.hibernate.loader.Loader.loadCollection(Loader.java:1994)
at org.hibernate.loader.collection.CollectionLoader.initialize(CollectionLoader.java:36)
at org.hibernate.persister.collection.AbstractCollectionPersister.initialize(AbstractCollectionPersister.java:565)
at org.hibernate.event.def.DefaultInitializeCollectionEventListener.onInitializeCollection(DefaultInitializeCollectionEventListener.java:63)
at org.hibernate.impl.SessionImpl.initializeCollection(SessionImpl.java:1716)
at org.hibernate.collection.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:344)
at org.hibernate.collection.AbstractPersistentCollection.read(AbstractPersistentCollection.java:86)
at org.hibernate.collection.PersistentSet.iterator(PersistentSet.java:163)
at org.apache.commons.collections.CollectionUtils.select(CollectionUtils.java:515)
at com.cmt.acq.service.AcqCampaignService.getGrid(AcqCampaignService.java:1245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.adapter.ThrowsAdviceInterceptor.invoke(ThrowsAdviceInterceptor.java:126)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.adapter.AfterReturningAdviceInterceptor.invoke(AfterReturningAdviceInterceptor.java:50)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.adapter.MethodBeforeAdviceInterceptor.invoke(MethodBeforeAdviceInterceptor.java:50)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at com.sun.proxy.$Proxy157.getGrid(Unknown Source)
at com.cmt.acq.action.AcqManageCampaignAction.generateXMLResponse(AcqManageCampaignAction.java:1073)
This is almost certainly due to row compression. The database recognizes sequential duplicate column values and, instead of sending the value again, conceptually just sends "ditto". So this problem is highly dependent on the data returned. Unless the query contains ORDER BY, even repeated executions of a single query on a single database can return the rows in a different order, so this problem can be difficult to reproduce.
There have been several patches to this code in recent releases. I vaguely recall seeing this problem but I can't find a bug for it. Still, the code has changed a lot recently, so I suggest trying 12.2.0.1.0, which is available on Oracle Maven. If it reproduces with 12.2.0.1.0, please file a bug.
I had the same problem using the 12.1.0.2 driver with an Oracle 10g database. It apparently happens when the driver is performing a fetch, i.e. querying the database for more rows while scrolling through a ResultSet.
It looks like a nasty bug in the driver.
Increasing the fetchSize property worked well as a temporary workaround, forcing the driver to retrieve all results at once with no data left to fetch, but it is not a long-term solution and may have a large negative impact on performance/memory depending on the size and amount of data the query retrieves.
If you're using Hibernate, try to identify the query that raises the exception (use, for instance, hibernate.show_sql), check how many rows it returns and adjust the hibernate.jdbc.fetch_size property accordingly (see the sketch below).
A non-zero value sets the JDBC fetch size (it calls Statement.setFetchSize()).
Or try using a different version of the driver.
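
A hedged sketch of both options with Hibernate 3, assuming an open Session named session; the HQL and fetch size are placeholders, and the global property can also go in hibernate.cfg.xml:

// Globally, in hibernate.properties (placeholder value):
//   hibernate.jdbc.fetch_size=500

// Or per query, via org.hibernate.Query.setFetchSize(), which is forwarded to Statement.setFetchSize():
List<?> rows = session.createQuery("from Campaign c")       // hypothetical HQL
        .setFetchSize(500)                                   // placeholder fetch size
        .list();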
