I am trying to search an index in my project. When I search with the searchText
semester:"S1 / 2016" AND status:Validated AND emp_id>0
I get the results properly. When I search with the searchText
semester:"S1 / 2016" AND status:Validated AND emp_id>500
I get the exception below:
java.lang.NegativeArraySizeException
at com.google.appengine.api.search.dev.GenericScorer.search(GenericScorer.java:196)
at com.google.appengine.api.search.dev.LocalSearchService.searchForApp(LocalSearchService.java:584)
at com.google.appengine.api.search.dev.LocalSearchService.search(LocalSearchService.java:534)
at sun.reflect.GeneratedMethodAccessor93.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.google.appengine.tools.development.ApiProxyLocalImpl$AsyncApiCall.callInternal(ApiProxyLocalImpl.java:541)
at com.google.appengine.tools.development.ApiProxyLocalImpl$AsyncApiCall.call(ApiProxyLocalImpl.java:484)
at com.google.appengine.tools.development.ApiProxyLocalImpl$AsyncApiCall.call(ApiProxyLocalImpl.java:461)
at java.util.concurrent.Executors$PrivilegedCallable$1.run(Executors.java:493)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.concurrent.Executors$PrivilegedCallable.call(Executors.java:490)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Below is my code where the search is performed:
public Results<ScoredDocument> retrieveDocuments(String searchText) {
    if (searchText.length() > 2000) {
        throw new InternalException(new Exception("Error - Too long query !!"),
                BaseValidationMessages.SEARCH_STRING_EXCEEDS_LIMIT);
    }
    QueryOptions options = QueryOptions.newBuilder().setOffset(0).setLimit(2)
            .setSortOptions(createSortOptions("emp_id", "asc")).build();
    Query query = Query.newBuilder().setOptions(options).build(searchText);
    IndexSpec indexSpec = IndexSpec.newBuilder().setName("Beneficiaries").build();
    Index index = SearchServiceFactory.getSearchService().getIndex(indexSpec);
    return index.search(query);
}
Google App Engine throws NegativeArraySizeException if the offset passed to the index search is out of range of the available results count.
For example, suppose there are 50 documents available in the search index for the given query text; if the offset set in the search query options is more than 50, executing the search throws NegativeArraySizeException:
QueryOptions options = QueryOptions.newBuilder().setOffset(51)
So before setting an offset, use the previous results to make sure documents are actually available at that offset.
First query:
QueryOptions options = QueryOptions.newBuilder().setOffset(0).setLimit(50).build();
Query query = Query.newBuilder().setOptions(options).build(searchText);
IndexSpec indexSpec = IndexSpec.newBuilder().setName("Beneficiaries").build();
Index index = SearchServiceFactory.getSearchService().getIndex(indexSpec);
Results<ScoredDocument> results = index.search(query);
If the number of results returned by the above query equals 50, search for the next set, starting where this page ended:
QueryOptions options = QueryOptions.newBuilder().setOffset(50)
If fewer than 50 results come back, there are no more results past this page, so we can stop searching.
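Putting this together, a minimal pagination sketch (assuming the same "Beneficiaries" index and searchText as above; processPage is a hypothetical per-page handler):
int pageSize = 50;
int offset = 0;
IndexSpec indexSpec = IndexSpec.newBuilder().setName("Beneficiaries").build();
Index index = SearchServiceFactory.getSearchService().getIndex(indexSpec);
while (true) {
    QueryOptions options = QueryOptions.newBuilder()
            .setOffset(offset)
            .setLimit(pageSize)
            .setSortOptions(createSortOptions("emp_id", "asc"))
            .build();
    Query query = Query.newBuilder().setOptions(options).build(searchText);
    Results<ScoredDocument> results = index.search(query);
    processPage(results); // hypothetical: consume this page of documents
    if (results.getNumberReturned() < pageSize) {
        break; // short page: no documents at offset + pageSize, stop here
    }
    offset += pageSize;
}
This way the offset only advances after a full page was actually returned, so it never points past the available results.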
I started experimenting with the Oracle OLAP API ('olapi'), but I'm having some issues while running the examples package.
When I run MakingQueriesExamples.java from the source package I get this error:
oracle.olapi.data.cursor.NoDataAvailableException
at oracle.express.olapi.data.full.DefinitionManager.handleException(Unknown Source)
at oracle.express.olapi.data.full.DefinitionManager.createCursorManagerInterfaces(Unknown Source)
at oracle.express.olapi.data.full.DefinitionManager.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManagers(Unknown Source)
at oracle.olapi.data.source.DataProvider.createCursorManager(Unknown Source)
at olap.Context11g._displayResult(Context11g.java:650)
at olap.Context11g.displayResult(Context11g.java:631)
at olap.source.MakingQueriesExamples.controllingMatchingWithAlias(MakingQueriesExamples.java:114)
at olap.source.MakingQueriesExamples.run(MakingQueriesExamples.java:40)
at olap.BaseExample11g.execute(BaseExample11g.java:54)
at olap.BaseExample11g.execute(BaseExample11g.java:74)
at olap.source.MakingQueriesExamples.main(MakingQueriesExamples.java:478)
The part that causes the error is here (last line) (MakingQueriesExamples):
println("\nControlling Input-to-Source Matching With the alias Method");
MdmMeasure mdmUnits = getMdmMeasure("UNITS");
// Get the Source objects for the measure and for the default hierarchies
// of the dimensions.
NumberSource units = (NumberSource) mdmUnits.getSource();
StringSource prodHier = (StringSource)
getMdmPrimaryDimension("PRODUCT").getDefaultHierarchy().getSource();
StringSource custHier = (StringSource)
getMdmPrimaryDimension("CUSTOMER").getDefaultHierarchy().getSource();
StringSource chanHier = (StringSource)
getMdmPrimaryDimension("CHANNEL").getDefaultHierarchy().getSource();
StringSource timeHier = (StringSource)
getMdmPrimaryDimension("TIME").getDefaultHierarchy().getSource();
// Select single values for the hierarchies.
//Source prodSel = prodHier.selectValue("PRODUCT_PRIMARY::ITEM::ENVY ABM");
Source prodSel = prodHier.selectValue("PRIMARY::ITEM::ENVY ABM");
//Source custSel = custHier.selectValue("SHIPMENTS::SHIP_TO::BUSN WRLD SJ");
Source custSel = custHier.selectValue("SHIPMENTS::SHIP_TO::BUSN WRLD SJ");
//Source timeSel = timeHier.selectValue("CALENDAR_YEAR::MONTH::2001.01");
Source timeSel = timeHier.selectValue("CALENDAR::MONTH::2001.01");
// Produce a Source that specifies the units values for the selected
// dimension values.
Source unitsSel = units.join(timeSel).join(custSel).join(prodSel);
// Create aliases for the Channel dimension hierarchy.
Source chanAlias1 = chanHier.alias();
Source chanAlias2 = chanHier.alias();
// Join the aliases to the Source representing the units values specified
// by the selected dimension elements, using the value method to make the
// alias an input.
NumberSource unitsSel1 = (NumberSource) unitsSel.join(chanAlias1.value());
NumberSource unitsSel2 = (NumberSource) unitsSel.join(chanAlias2.value());
// chanAlias2 is the first output of result, so its values are the row
// (slower varying) values; chanAlias1 is the second output of result
// so its values are the column (faster varying) values.
Source result = unitsSel1.gt(unitsSel2)
.join(chanAlias1) // Output 2, column
.join(chanAlias2); // Output 1, row
getContext().commit();
getContext().displayResult(result);
and here (first line) (Context11g.java):
CursorManager cursorManager =
dp.createCursorManager(source);
Cursor cursor = cursorManager.createCursor();
cpw.printCursor(cursor, displayLocVal);
// Close the CursorManager.
cursorManager.close();
I'm using Oracle Database 11.2.0.1.0 with the OLAP option enabled and Oracle Analytic Workspace Manager 11.2.0.4B.
I started by installing the 'global' schema as instructed here:
https://www.oracle.com/technetwork/database/options/olap/global-11g-readme-082667.html
I verified everything in AWM (cubes, dimensions and measures), as well as the data in SQL Developer.
I've noticed that some of the hierarchies' names have changed, so I updated them in the Java code.
Any help would be appreciated!
Thanks in advance.
I'm calling a stored procedure that has INOUT parameters. The database is AS400 DB2 and the type is CHARACTER. I'm getting a DataTruncation error while registering and setting the variable. If I set the string directly on the column, there is no issue; if I put the same string in a variable and use the variable to set the column, it throws the DataTruncation error. Could you please let me know where I'm going wrong, and how I can set the value of the column from a variable without causing the exception? Please see the end of the error stack trace for more information.
try {
    String cstmt_str = "CALL " + storedProcName + "(?)";
    String status = "REJ";
    cstmt = conn.prepareCall(cstmt_str);
    cstmt.registerOutParameter("5506STS", Types.CHAR);
    //cstmt.setString("5506STS", "REJ"); does not give a problem
    cstmt.setString("5506STS", status); // DataTruncation exception occurs here.
    cstmt.execute();
} catch (DataTruncation de) {
    logger.error("DatatruncationException error:", de);
    displayError(de);
    de.printStackTrace();
}
public static void displayError(DataTruncation dataTruncation) {
    logger.info("Data truncation error: ");
    logger.info("dataTruncation.getDataSize(): " + dataTruncation.getDataSize()
            + " - number of bytes of data that should have been transferred.");
    if (!dataTruncation.getRead())
        logger.info("dataTruncation.getRead() is false, so the truncation happened on a write.");
    logger.info("dataTruncation.getTransferSize(): " + dataTruncation.getTransferSize()
            + " - number of bytes of data actually transferred.");
}
Stored procedure signature:
Number  Mode   Name     DataType   Length
1       INOUT  5506STS  CHARACTER  3
I tried to find out the length of the string status:
int len = status.getBytes().length; // Output: 5
I'm out of options. Please let me know how I can successfully set the value of the variable "status" into "5506STS".
ERROR TRACE:
DatatruncationException error:
java.sql.DataTruncation: Data truncation
at com.ibm.as400.access.AS400JDBCPreparedStatement.testDataTruncation(AS400JDBCPreparedStatement.java:3450)
at com.ibm.as400.access.AS400JDBCPreparedStatement.setValue(AS400JDBCPreparedStatement.java:3361)
at com.ibm.as400.access.AS400JDBCPreparedStatement.setString(AS400JDBCPreparedStatement.java:2999)
at com.ibm.as400.access.AS400JDBCCallableStatement.setString(AS400JDBCCallableStatement.java:3082)
at org.jboss.jca.adapters.jdbc.WrappedCallableStatement.setString(WrappedCallableStatement.java:1563)
at com.ssss.ssjtracdbws.dao.SSJTracDBWSDAO.submitNewRequestJSON(SSJTracDBWSDAO.java:699)
at com.ssss.ssjtracdbws.webservices.SSJTracDBService.submitNewRequestJSON(SSJTracDBService.java:163)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:167)
at org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:269)
at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:227)
at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:216)
at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:542)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:524)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:126)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:208)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:55)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:50)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:295)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:231)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:149)
at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:169)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:145)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:97)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:559)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:102)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:340)
at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:353)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:911)
at org.apache.tomcat.util.net.NioEndpoint$ChannelProcessor.run(NioEndpoint.java:920)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Data truncation error:
dataTruncation.getDataSize():5 number of bytes of data that should have been transferred.
dataTruncation.getRead() is False. Its Written (Error:).
dataTruncation.getTransferSize(): 3 number of bytes of data actually transferred.
Well, I found it out myself. The "status" value that was passed to the function had single quotes surrounding it ('APP'), which made the length 5 instead of 3. Once I removed the single quotes, it worked fine.
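For anyone hitting the same error, a minimal defensive sketch (the quote-stripping and length check are illustrations of my own, not part of the jt400 API) that would have caught this before the driver did:
// Strip surrounding single quotes and verify the value fits CHARACTER(3)
// before binding it to the INOUT parameter.
String cleaned = status.replaceAll("^'|'$", ""); // "'APP'" -> "APP"
if (cleaned.getBytes().length > 3) {
    throw new IllegalArgumentException("5506STS must fit CHARACTER(3): " + cleaned);
}
cstmt.setString("5506STS", cleaned);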
I have a web service Java program which reads 13,000,000 dates like '08-23-2016 12:54:44' as strings from a database. My development environment is Java 8, MySQL 5.7 and Tomcat 8. I have declared a string array String[] data to store them, and I use Guice to inject the array's initial values as empty strings. However, the memory usage is still huge. This is my code:
String[] data; // size is 1,000,000
void generateDataWrapper(String params) {
    // read over 13,000,000 date strings
    ResultSet rs = mySQLCon.readData(params);
    clearData(data); // set all entries to the empty string
    int index = 0;
    while (rs.next()) {
        data[index++] = rs.getString("date");
        if (index == (size - 1)) { // calculate every 1,000,000, 13 times in total
            // calculate statistics
            ...
            // reset all entries to the empty string
            clearData(data);
            index = 0;
        }
    }
}
// mySQLCon.readData function
ResultSet readData(String params) {
    try {
        String query = generateQuery(params);
        Statement postStmt = connection.createStatement();
        ResultSet rs = postStmt.executeQuery(query);
        return rs;
    } catch (Exception e) {
        // exception is silently swallowed here
    }
    return null;
}
If I call this function once, the memory reaches 12 GB; if I call it again, it goes to 20 GB; on the third call it reaches 25 GB and throws a 'java.lang.OutOfMemoryError: GC overhead limit exceeded' in com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2174).
This is part of the error message:
java.lang.OutOfMemoryError: GC overhead limit exceeded
com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2174)
com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1964)
com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:3316)
com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:463)
com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3040)
com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2288)
com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2681)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2547)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2505)
com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1370)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
java.lang.reflect.Method.invoke(Unknown Source)
I have changed the garbage collection settings to:
-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
but it's not helping.
I have tried changing data to a static variable, but the problem remains.
Currently the JVM heap is 8 GB and the Tomcat memory is 24 GB; however, I don't think increasing the memory will solve the problem.
I don't understand why the memory keeps increasing every time I call this function. Could someone give me some suggestions?
Used resources like a ResultSet have to be closed to release the underlying system resources. This can be done automatically by declaring the resource in a try-with-resources block, like try (ResultSet resultSet = ...).
You can also tell the driver to fetch only a limited number of rows from the database as they are requested from the ResultSet, instead of all of them immediately.
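A minimal sketch of both suggestions combined, assuming MySQL Connector/J (where a fetch size of Integer.MIN_VALUE is the driver's convention for row-by-row streaming) and a hypothetical process helper:
// Stream rows instead of buffering all 13,000,000 of them, and close
// the statement and result set automatically when the block exits.
String query = generateQuery(params);
try (Statement stmt = connection.createStatement(
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
    stmt.setFetchSize(Integer.MIN_VALUE); // Connector/J: row-by-row streaming
    try (ResultSet rs = stmt.executeQuery(query)) {
        while (rs.next()) {
            process(rs.getString("date")); // hypothetical per-row handler
        }
    }
}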
Objects become eligible for garbage collection when they are no longer referenced. So your array object stays in memory at its full size for as long as it is referenced; once it is no longer referenced and the VM runs low on memory, the VM can dispose of the array object, possibly avoiding an OutOfMemoryError.
Unexpectedly high memory usage can be analyzed by creating a heap dump and exploring it in the jvisualvm tool of the JDK.
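For example, on a HotSpot JDK a dump of the live objects can be taken with jmap and then opened in jvisualvm via File > Load:
jmap -dump:live,format=b,file=heap.hprof <pid>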
Additionally, you can change your string array to a long array, since strings consume a huge amount of memory. In your case a date string takes 38 bytes (19 chars * 2 bytes) whereas a long takes only 8 bytes:
long[] data; // size is 1,000,000
void generateDataWrapper(String params) {
    // read over 13,000,000 date strings
    ResultSet rs = mySQLCon.readData(params);
    clearData(data);
    int index = 0;
    // note: lowercase yyyy (calendar year), not YYYY (week year)
    SimpleDateFormat formater = new SimpleDateFormat("MM-dd-yyyy HH:mm:ss");
    while (rs.next()) {
        try {
            Date date = formater.parse(rs.getString("date"));
            data[index++] = date.getTime();
        } catch (ParseException pe) {
            pe.printStackTrace();
        }
        if (index == (size - 1)) { // calculate every 1,000,000, 13 times in total
            // calculate statistics
            ...
            // reset the buffer
            clearData(data);
            index = 0;
        }
    }
}
Wherever you need the string, you can simply format it back:
SimpleDateFormat formater = new SimpleDateFormat("MM-dd-yyyy HH:mm:ss");
Date date = new Date(data[i]);
String dateString = formater.format(date);
First, thanks for all your suggestions. I figured this out after reading mm759's answer: I forgot to close the ResultSet when I was done reading. After adding rs.close(), every call takes the same time to finish, although the memory still reaches the maximum I set.
I have broken a wiki XML dump into many small parts of 1 MB and tried to clean it (after it was cleaned with another program by somebody else).
I get an out-of-memory error which I don't know how to solve. Can anyone enlighten me?
I get the following error message:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:212)
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:235)
at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
at org.apache.lucene.index.TermsHashPerField$PostingsBytesStartArray.grow(TermsHashPerField.java:252)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:292)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:645)
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:342)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:301)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:454)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1541)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1256)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1237)
at qa.main.ja.Indexing$$anonfun$5$$anonfun$apply$4.apply(SearchDocument.scala:234)
at qa.main.ja.Indexing$$anonfun$5$$anonfun$apply$4.apply(SearchDocument.scala:224)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.Iterator$class.foreach(Iterator.scala:750)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at qa.main.ja.Indexing$$anonfun$5.apply(SearchDocument.scala:224)
at qa.main.ja.Indexing$$anonfun$5.apply(SearchDocument.scala:220)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
Where line 234 is as follows:
writer.addDocument(document)
It adds a document to the Lucene index.
and where line 224 is as follows:
for (doc <- target_xml \\ "doc") yield {
It is the first line of a for loop that adds various elements as fields to the index.
Is it a code problem, a settings problem or a hardware problem?
EDIT
Hi, this is my for loop:
(for (knowledgeFile <- knowledgeFiles) yield {
  System.err.println(s"processing file: ${knowledgeFile}")
  val target_xml = XML.loadString("<file>" + cleanFile(knowledgeFile).mkString + "</file>")
  for (doc <- target_xml \\ "doc") yield {
    val id = (doc \ "@id").text
    val title = (doc \ "@title").text
    val text = doc.text
    val document = new Document()
    document.add(new StringField("id", id, Store.YES))
    document.add(new TextField("text", new StringReader(title + text)))
    writer.addDocument(document)
    val xml_doc = <page><title>{ title }</title><text>{ text }</text></page>
    id -> xml_doc
  }
}).flatten.toArray
The inner loop just iterates over every doc element, and the outer loop iterates over every file. Is the nested for the source of the problem?
Below is the cleanFile function for reference:
def cleanFile(fileName: String): Array[String] = {
  val tagRe = """<\/?doc.*?>""".r
  val source = Source.fromFile(fileName)
  // read all lines, then close the file handle to avoid leaking it
  val lines = try source.getLines.toArray finally source.close()
  val outLines = new Array[String](lines.length)
  for ((line, lineNo) <- lines.zipWithIndex) {
    if (tagRe.findFirstIn(line) != None)
      outLines(lineNo) = line
    else
      outLines(lineNo) = StringEscapeUtils.escapeXml11(line)
  }
  outLines
}
Thanks again
It looks like you should try increasing the heap size via the -Xmx JVM argument.
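For example (the 4 GB value is just an illustration; pick whatever fits your machine):
java -Xmx4g -cp <classpath> qa.main.ja.Indexing
Independently of the heap size, Lucene's IndexWriter can also be told to flush its in-memory indexing buffer earlier. A minimal sketch, assuming a Lucene 5.x setup and an existing analyzer:
// Cap the indexing buffer so segments are flushed to disk before the
// heap fills up (the 256 MB value here is an arbitrary illustration).
IndexWriterConfig config = new IndexWriterConfig(analyzer);
config.setRAMBufferSizeMB(256.0);
IndexWriter writer = new IndexWriter(FSDirectory.open(Paths.get("index")), config);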
On a device a user can select the languages of preference in which the content is going to be shown. I have used this for a long time, but I updated the system so that this list is ordered:
@Required
@OrderColumn(name = "preferenceOrder")
@ManyToMany(fetch = FetchType.LAZY)
public List<Language> languages;
But when I try to update the list with the following code:
try {
    // Get the codes of the Device
    Device device = Device.findById(deviceId);
    device.languages.clear();
    for (String langCode : languageCodes) {
        Language lang = Language.findByCode(langCode);
        device.languages.add(lang);
    }
    device.save();
    return true;
} catch (Exception e) {
    Logger.error(e, "Problem with updating the languages for device: " + e.getLocalizedMessage());
    return false;
}
I get the following error:
15:30:36,376 ERROR ~ Problem with updating the languages for device: org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 5; expected: 1
javax.persistence.PersistenceException: org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 5; expected: 1
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1389)
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1317)
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1323)
at org.hibernate.ejb.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:965)
at play.db.jpa.JPABase._save(JPABase.java:41)
at play.db.jpa.GenericModel.save(GenericModel.java:215)
at logic.helpers.LanguageUpdatePerformer.doJobWithResult(LanguageUpdatePerformer.java:47)
at logic.helpers.LanguageUpdatePerformer.doJobWithResult(LanguageUpdatePerformer.java:1)
at play.jobs.Job.call(Job.java:146)
at play.jobs.Job$1.call(Job.java:66)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 5; expected: 1
at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:95)
at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:70)
at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:90)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:114)
at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:109)
at org.hibernate.jdbc.AbstractBatcher.prepareBatchStatement(AbstractBatcher.java:244)
at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1401)
at org.hibernate.action.CollectionUpdateAction.execute(CollectionUpdateAction.java:86)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:273)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:265)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:187)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:345)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216)
at org.hibernate.ejb.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:962)
... 12 more
What is the reason for this? It finds 5 records, which is correct in this case, but it expects 1?
Is it because I first clear the whole list, and should I use another way? This is just the easiest way for me to add the languages in the order they are given to me.
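A minimal sketch of one commonly suggested workaround (an assumption on my part, not verified against this exact Play/Hibernate version): flush the clear() before re-adding, so Hibernate deletes the old join-table rows and then inserts the reordered ones, instead of issuing one batched update whose row count it cannot verify:
Device device = Device.findById(deviceId);
device.languages.clear();
device.save(); // flush: the old rows in the join table are deleted here
for (String langCode : languageCodes) {
    device.languages.add(Language.findByCode(langCode));
}
device.save(); // now the languages are inserted in the requested order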