I am doing synchronous inserts via prepared statements on Cassandra, which causes my entire application to break down.
I write about 90k different clustering entries to a single partition in a short time.
List<Statement> statements = new ArrayList<>();
map.forEach((String location, Set<String> set) -> {
    PreparedStatement updateStatement = preparedStatementSupplier.getUpdatePeriodByLocationStatement();
    BoundStatement boundStatement = updateStatement.bind(set, tradePartner, location);
    statements.add(boundStatement);
});
Iterator<Statement> iterator = statements.iterator();
while (iterator.hasNext()) {
    Statement statement = iterator.next();
    try {
        cassandraOperations.execute(statement);
        iterator.remove();
    } catch (RuntimeException e) {
        LOG.error("error on forecast data persistence, reason: {}", e.getMessage(), e);
    }
}
public synchronized PreparedStatement getUpdatePeriodByLocationStatement() {
    if (Objects.isNull(updatePeriodByLocation)) {
        // CREATE TABLE period_by_location (tp text, loc text, pd set<text>, PRIMARY KEY ((tp), loc));
        updatePeriodByLocation = cassandraOperations.getSession().prepare("UPDATE period_by_location SET pd = pd + ? WHERE tp = ? AND loc = ?");
        updatePeriodByLocation.setIdempotent(true);
    }
    return updatePeriodByLocation;
}
This causes timeouts on the server side, I guess, and the driver stops working. Cassandra runs more or less on default settings. The error on the Cassandra nodes looks like this:
ERROR [SharedPool-Worker-3] 2017-11-29 15:41:37,084 ErrorMessage.java:338 - Unexpected exception during request
java.lang.NullPointerException: null
at org.apache.cassandra.serializers.UTF8Serializer$UTF8Validator.validate(UTF8Serializer.java:55) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.serializers.UTF8Serializer.validate(UTF8Serializer.java:34) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:88) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.Sets$Value.fromSerialized(Sets.java:152) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.Sets$Marker.bind(Sets.java:251) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.Sets$Adder.execute(Sets.java:286) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:112) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:59) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:744) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:531) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:519) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:226) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:492) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:469) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:142) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) [apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) [apache-cassandra-2.2.8.jar:2.2.8]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) [apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.2.8.jar:2.2.8]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
What I've read so far suggests slowing down the rate at which cassandraOperations.execute() is called. Is this the right way, or is there a better solution?
I greatly appreciate any tips.
Thank you.
CassandraOperations.execute(…) is the appropriate method. From your code, I'm wondering why you're using CassandraOperations at all: your code seems to be heavily optimized, and besides exception translation, CassandraOperations adds a bit of overhead.
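If you do decide to throttle, a common approach is to cap the number of in-flight requests rather than sleeping between executions. Below is a minimal sketch, not your code, that talks to the driver Session directly and pairs executeAsync() with a Semaphore; the names (session, statements) and the limit of 256 in-flight requests are assumptions you would tune for your cluster.

import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.google.common.util.concurrent.MoreExecutors;

import java.util.List;
import java.util.concurrent.Semaphore;

public class ThrottledWriter {

    private static final int MAX_IN_FLIGHT = 256; // assumption: tune to what your cluster handles

    public static void writeAll(Session session, List<Statement> statements) throws InterruptedException {
        Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
        for (Statement statement : statements) {
            permits.acquire(); // blocks once MAX_IN_FLIGHT requests are pending
            ResultSetFuture future = session.executeAsync(statement);
            // release the permit when the write finishes (successfully or not)
            future.addListener(permits::release, MoreExecutors.directExecutor());
        }
        permits.acquire(MAX_IN_FLIGHT); // wait for the remaining requests to complete
    }
}

This keeps the write rate bounded by how fast the cluster acknowledges requests instead of a fixed sleep, which is usually gentler on both sides.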
I have a loop that pulls a lot of data from Oracle into memory.
int firstResult = 0;
int maxResult = 500;
int targetTotal = 8000; // more or less
int phase = 1;
for (int i = 0; i <= targetTotal; i += maxResult) {
    try {
        Session session = .... init hibernate session ...
        // Start Transaction
        List<Accounts> importableInvAcList = ...getting list using session and firstResult-maxResult...
        List<ContractData> dataList = new ArrayList<>();
        List<ErrorData> errorDataList = new ArrayList<>();
        for (Accounts account : importableInvAcList) {
            ... Converting 500 Accounts object to ContractData object ...
            ... along with 5 more database call using existing session ...
            .. On converting The object we generate thousands of ErrorData...
            dataList.add(.. converted account to Contract data ..);
            errorDataList.add(.. generated error data ..);
        }
        dataList.stream().forEach(session::save); // 500 data
        errorDataList.stream().forEach(session::save); // 10,000-5,000 data
        ... Commit Transaction ...
        phase++;
    } catch (Exception e) {
        return;
    }
}
The exception occurs in the second phase (2nd iteration of the loop); sometimes it comes in the 3rd or 5th phase.
I also checked the Runtime Memory.
Runtime runtime = Runtime.getRuntime();
long total = runtime.totalMemory();
long free = runtime.freeMemory();
long used = total - free;
long max = runtime.maxMemory();
In the second phase, for example, the memory status was:
Used: 1022 MB, Free: 313 MB, Total Allocated: 1335 MB
The stack trace is here:
org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:140)
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:128)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:52)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:449)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167)
at org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142)
at org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85)
at org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1463)
at ibbl.remote.tx.TxSessionImpl.beginTx(TxSessionImpl.java:41)
at ibbl.remote.tx.TxController.initPersistence(TxController.java:70)
at com.ibbl.data.util.CDExporter2.run(CDExporter2.java:130)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Listener refused the connection with the following error:
ORA-12518, TNS:listener could not hand off client connection
Note that this process runs in a thread, and there are 3 similar threads running at a time.
Why does this exception show up only after the loop has been running for a while?
"there are 3 similar Thread running at a time."
If your code creates a total of 3 Threads, then, optimally, you need only 3 Oracle Connections. Create all of them before any Thread is created. Create the Threads, assign each Thread a Connection, then start the Threads.
Chances are good, though, that your code might be way too aggressively consuming resources on whatever machine is hosting it. Even if you eliminate the ORA-12518, the RDBMS server may "go south". By "go south", I mean if your application is consuming too many resources the machine hosting it or the machine hosting the RDBMS server may "panic" or something equally dreadful.
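As a minimal sketch of that suggestion, in plain JDBC terms (the URL, credentials, and worker body are placeholders; with Hibernate you would likewise open one session/connection per worker up front):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class FixedConnectionWorkers {

    public static void main(String[] args) throws SQLException {
        final int workerCount = 3;
        Connection[] connections = new Connection[workerCount];

        // Open every connection up front, before any worker thread exists.
        for (int i = 0; i < workerCount; i++) {
            connections[i] = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password"); // placeholders
        }

        // Hand each thread its own dedicated connection, then start it.
        for (int i = 0; i < workerCount; i++) {
            final Connection connection = connections[i];
            new Thread(() -> {
                try {
                    // ... do this worker's export batches using 'connection' ...
                } finally {
                    try { connection.close(); } catch (SQLException ignored) { }
                }
            }).start();
        }
    }
}

With a fixed, pre-opened set of connections, the listener never has to hand off new connections mid-run, which is exactly what ORA-12518 complains about.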
I'm calling a stored procedure which has INOUT parameters. The database is AS400 DB2 and the parameter type is CHARACTER. I'm getting a data truncation error while registering and setting the variable. If I set the string literal directly on the parameter, there is no issue; if I put the same string in a variable and use the variable to set the parameter, it throws a data truncation error. Could you please let me know where I'm going wrong and how I can set the value of the parameter using a variable without causing the exception? Please see the end of the error stack trace for more information.
try {
    String cstmt_str = "CALL " + storedProcName + "(?)";
    String status = "REJ";
    cstmt = conn.prepareCall(cstmt_str);
    cstmt.registerOutParameter("5506STS", Types.CHAR);
    //cstmt.setString("5506STS", "REJ"); does not give a problem
    cstmt.setString("5506STS", status); // Data truncation Exception occurs here.
    cstmt.execute();
} catch (DataTruncation de) {
    logger.error("DatatruncationException error:", de);
    displayError(de);
    de.printStackTrace();
}
public static void displayError(DataTruncation dataTruncation) {
    logger.info("Data truncation error: ");
    logger.info("dataTruncation.getDataSize():" + dataTruncation.getDataSize()
            + " number of bytes of data that should have been transferred.");
    if (!dataTruncation.getRead())
        logger.info("dataTruncation.getRead() is False. Its Written (Error:). ");
    logger.info("dataTruncation.getTransferSize():" + dataTruncation.getTransferSize()
            + " number of bytes of data actually transferred.");
}
Stored Procedure Signature:
Number Mode Name DataType Length
1 INOUT 5506STS CHARACTER 3
I tried to find out the length of the string status:
int len = status.getBytes().length; // Output: 5
I'm out of options. Please let me know how to successfully set the value of the variable "status" into "5506STS".
ERROR TRACE:
DatatruncationException error:
java.sql.DataTruncation: Data truncation
at com.ibm.as400.access.AS400JDBCPreparedStatement.testDataTruncation(AS400JDBCPreparedStatement.java:3450)
at com.ibm.as400.access.AS400JDBCPreparedStatement.setValue(AS400JDBCPreparedStatement.java:3361)
at com.ibm.as400.access.AS400JDBCPreparedStatement.setString(AS400JDBCPreparedStatement.java:2999)
at com.ibm.as400.access.AS400JDBCCallableStatement.setString(AS400JDBCCallableStatement.java:3082)
at org.jboss.jca.adapters.jdbc.WrappedCallableStatement.setString(WrappedCallableStatement.java:1563)
at com.ssss.ssjtracdbws.dao.SSJTracDBWSDAO.submitNewRequestJSON(SSJTracDBWSDAO.java:699)
at com.ssss.ssjtracdbws.webservices.SSJTracDBService.submitNewRequestJSON(SSJTracDBService.java:163)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:167)
at org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:269)
at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:227)
at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:216)
at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:542)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:524)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:126)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:208)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:55)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:50)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:295)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:231)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:149)
at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:169)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:145)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:97)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:559)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:102)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:340)
at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:353)
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:911)
at org.apache.tomcat.util.net.NioEndpoint$ChannelProcessor.run(NioEndpoint.java:920)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Data truncation error:
dataTruncation.getDataSize():5 number of bytes of data that should have been transferred.
dataTruncation.getRead() is False. Its Written (Error:).
dataTruncation.getTransferSize(): 3 number of bytes of data actually transferred.
Well, I found it out myself. The "status" value that was passed to the function had single quotes around it ('APP'), which made the length 5 instead of 3. I removed the single quotes and it's working fine.
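For anyone hitting the same thing, a minimal sketch of the cleanup (the parameter name and cstmt mirror the question's code; replace() is just one way to strip the quotes):

String status = "'APP'";                  // the value arrived already wrapped in single quotes: 5 bytes
String cleaned = status.replace("'", ""); // "APP": 3 bytes, fits CHARACTER(3)
cstmt.setString("5506STS", cleaned);      // no DataTruncation any more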
I am trying to search an index in my project.
When I search with the searchText semester:"S1 / 2016" AND status:Validated AND emp_id>0, I get the results properly.
When I search with the searchText semester:"S1 / 2016" AND status:Validated AND emp_id>500, I get the exception below:
java.lang.NegativeArraySizeException
at com.google.appengine.api.search.dev.GenericScorer.search(GenericScorer.java:196)
at com.google.appengine.api.search.dev.LocalSearchService.searchForApp(LocalSearchService.java:584)
at com.google.appengine.api.search.dev.LocalSearchService.search(LocalSearchService.java:534)
at sun.reflect.GeneratedMethodAccessor93.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.google.appengine.tools.development.ApiProxyLocalImpl$AsyncApiCall.callInternal(ApiProxyLocalImpl.java:541)
at com.google.appengine.tools.development.ApiProxyLocalImpl$AsyncApiCall.call(ApiProxyLocalImpl.java:484)
at com.google.appengine.tools.development.ApiProxyLocalImpl$AsyncApiCall.call(ApiProxyLocalImpl.java:461)
at java.util.concurrent.Executors$PrivilegedCallable$1.run(Executors.java:493)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.concurrent.Executors$PrivilegedCallable.call(Executors.java:490)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Below is my code where the search is performed:
public Results<ScoredDocument> retrieveDocuments(String searchText) {
    if (searchText.length() > 2000) {
        throw new InternalException(new Exception("Error - Too long query !!"),
                BaseValidationMessages.SEARCH_STRING_EXCEEDS_LIMIT);
    }
    QueryOptions options = QueryOptions.newBuilder().setOffset(0).setLimit(2)
            .setSortOptions(createSortOptions("emp_id", "asc")).build();
    Query query = Query.newBuilder().setOptions(options).build(searchText);
    IndexSpec indexSpec = IndexSpec.newBuilder().setName("Beneficiaries").build();
    Index index = SearchServiceFactory.getSearchService().getIndex(indexSpec);
    return index.search(query);
}
Google App Engine throws NegativeArraySizeException if the offset passed to the index search is out of range of the available results count.
For example, consider that there are 50 documents available in the search index for the given query text; if the offset set in the search query options is more than 50, then executing the search throws NegativeArraySizeException.
QueryOptions options = QueryOptions.newBuilder().setOffset(51)
So before setting the offset, use the previous results to make sure documents actually exist at that offset.
First Query:
QueryOptions options = QueryOptions.newBuilder().setOffset(0).setLimit(50).build();
IndexSpec indexSpec = IndexSpec.newBuilder().setName("Beneficiaries").build();
Index index = SearchServiceFactory.getSearchService().getIndex(indexSpec);
Results<ScoredDocument> results = index.search(query);
If the above query returns exactly 50 results, then search for the next set:
QueryOptions options = QueryOptions.newBuilder().setOffset(51)
If fewer than 50 results come back, we can assume there are no more results available beyond that offset, and we can stop searching.
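Putting that together, here is a minimal paging sketch (not the poster's code; the index name "Beneficiaries" and the plain searchText are carried over as assumptions) that advances the offset only while full pages keep coming back, so it never asks for an offset beyond the available results:

import com.google.appengine.api.search.Index;
import com.google.appengine.api.search.IndexSpec;
import com.google.appengine.api.search.Query;
import com.google.appengine.api.search.QueryOptions;
import com.google.appengine.api.search.Results;
import com.google.appengine.api.search.ScoredDocument;
import com.google.appengine.api.search.SearchServiceFactory;

public class PagedSearch {

    private static final int PAGE_SIZE = 50;

    public static void searchAllPages(String searchText) {
        Index index = SearchServiceFactory.getSearchService()
                .getIndex(IndexSpec.newBuilder().setName("Beneficiaries").build());

        int offset = 0;
        while (true) {
            QueryOptions options = QueryOptions.newBuilder()
                    .setOffset(offset)
                    .setLimit(PAGE_SIZE)
                    .build();
            Results<ScoredDocument> results =
                    index.search(Query.newBuilder().setOptions(options).build(searchText));

            // ... process results.getResults() ...

            // A short page means nothing exists past this offset, so stop
            // instead of requesting an out-of-range offset on the next pass.
            if (results.getResults().size() < PAGE_SIZE) {
                break;
            }
            offset += PAGE_SIZE;
        }
    }
}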
In summary, we have run into this weird behavior when doing concurrent updates on an existing document while the document is not part of the working set (not in resident memory).
More details:
Given a collection with a unique index, when running concurrent updates (3 threads) with upsert set to true on a given existing document, 1 to 2 of the threads raise the following exception:
Processing failed (Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$key_1 dup key: { : 1008 }'):
According to the documentation, I would expect all three updates to succeed because the document I am trying to update already exists. Instead, it looks like it is attempting an insert for some or all of the update requests, and a few fail due to the unique index.
Repeating the same concurrent update on the document does not raise any exceptions. Also, using find() on a document to bring it into the working set and then running the concurrent updates on that document works as expected.
Also, using findAndModify with the same query and settings does not have the same problem.
Is this working as expected or am I missing something?
Setup:
-mongodb java driver 3.0.1
-3 node replica set running MongoDB version "2.6.3"
Query:
BasicDBObject query = new BasicDBObject();
query.put("docId", 123L);
collection.update (query, object, true, false);
Index:
name: docId_1
unique: true
key: {"docId":1}
background: true
Updated on May 28 to include sample code to reproduce the issue.
Run MongoDB locally as follows (note that the test will write about ~4 GB of data):
./mongodb-osx-x86_64-2.6.10/bin/mongod --dbpath /tmp/mongo
Run the following code, restart the database, comment out "fillUpCollection(testMongoDB.col1, value, 0, 300);", then run the code again. Depending on the machine, you may need to tweak some of the numbers to be able to see the exceptions.
package test;

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;
import com.mongodb.MongoClient;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class TestMongoDB {

    public static final String DOC_ID = "docId";
    public static final String VALUE = "value";
    public static final String DB_NAME = "db1";
    public static final String UNIQUE = "unique";
    public static final String BACKGROUND = "background";

    private DBCollection col1;
    private DBCollection col2;

    private static DBCollection getCollection(Mongo mongo, String collectionName) {
        DBCollection col = mongo.getDB(DB_NAME).getCollection(collectionName);
        BasicDBObject index = new BasicDBObject();
        index.append(DOC_ID, 1);
        DBObject indexOptions = new BasicDBObject();
        indexOptions.put(UNIQUE, true);
        indexOptions.put(BACKGROUND, true);
        col.createIndex(index, indexOptions);
        return col;
    }

    private static void storeDoc(String docId, DBObject doc, DBCollection dbCollection) throws IOException {
        BasicDBObject query = new BasicDBObject();
        query.put(DOC_ID, docId);
        dbCollection.update(query, doc, true, false);
        //dbCollection.findAndModify(query, null, null, false, doc, false, true);
    }

    public static void main(String[] args) throws Exception {
        final String value = new String(new char[1000000]).replace('\0', 'a');
        Mongo mongo = new MongoClient("localhost:27017");
        final TestMongoDB testMongoDB = new TestMongoDB();
        testMongoDB.col1 = getCollection(mongo, "col1");
        testMongoDB.col2 = getCollection(mongo, "col2");
        fillUpCollection(testMongoDB.col1, value, 0, 300);
        //restart Database, comment out previous line, and run again
        fillUpCollection(testMongoDB.col2, value, 0, 2000);
        updateExistingDocuments(testMongoDB, value);
    }

    private static void updateExistingDocuments(TestMongoDB testMongoDB, String value) {
        List<String> docIds = new ArrayList<String>();
        for (int i = 0; i < 10; i++) {
            docIds.add(new Random().nextInt(300) + "");
        }
        multiThreadUpdate(testMongoDB.col1, value, docIds);
    }

    private static void multiThreadUpdate(final DBCollection col, final String value, final List<String> docIds) {
        Runnable worker = new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("Started Thread");
                    for (String id : docIds) {
                        storeDoc(id, getDbObject(value, id), col);
                    }
                } catch (Exception e) {
                    System.out.println(e);
                } finally {
                    System.out.println("Completed");
                }
            }
        };
        for (int i = 0; i < 8; i++) {
            new Thread(worker).start();
        }
    }

    private static DBObject getDbObject(String value, String docId) {
        final DBObject object2 = new BasicDBObject();
        object2.put(DOC_ID, docId);
        object2.put(VALUE, value);
        return object2;
    }

    private static void fillUpCollection(DBCollection col, String value, int from, int to) throws IOException {
        for (int i = from; i <= to; i++) {
            storeDoc(i + "", getDbObject(value, i + ""), col);
        }
    }
}
Sample Output on the second run:
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "290" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "170" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "241" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "127" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "120" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "91" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "136" }'
Completed
Completed
This looks like a known issue with MongoDB, at least up to version 2.6. Their recommended fix is to have your code retry the upsert on error.
https://jira.mongodb.org/browse/SERVER-14322
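A minimal sketch of that retry workaround (the bounded retry count is an assumption; the update shape mirrors the storeDoc method from the question):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.DuplicateKeyException;

public class UpsertRetry {

    public static void upsertWithRetry(DBCollection collection, String docId, DBObject doc) {
        BasicDBObject query = new BasicDBObject("docId", docId);
        int attempts = 3; // assumption: a small, bounded number of retries
        while (true) {
            try {
                collection.update(query, doc, true, false);
                return;
            } catch (DuplicateKeyException e) {
                // Another thread won the race and inserted the document first;
                // retrying turns this attempt into a plain update of that document.
                if (--attempts == 0) {
                    throw e;
                }
            }
        }
    }
}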
Your query is too specific and does not find the document even though it exists, e.g. because it does not search by the unique field alone. The upsert then tries to create the document a second time (from another thread) but fails, since it actually exists; it just wasn't found. Please see http://docs.mongodb.org/manual/reference/method/db.collection.update/#upsert-behavior for more details.
Boiled down from the docs: to avoid inserting the same document more than once, only use upsert: true if the query field is uniquely indexed.
Use update operators like $set to include your query document in the upsert document.
If you feel that this isn't the case for you, please provide us with the query and some information about your index.
Update:
If you try to run your code from the CLI, you'll see the following:
> db.upsert.ensureIndex({docid:1},{unique:true})
{
"createdCollectionAutomatically" : true,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
> db.upsert.update({"docid":123},{one:1,two:2},true,false)
WriteResult({
"nMatched" : 0,
"nUpserted" : 1,
"nModified" : 0,
"_id" : ObjectId("55637413ad907a45eec3a53a")
})
> db.upsert.find()
{ "_id" : ObjectId("55637413ad907a45eec3a53a"), "one" : 1, "two" : 2 }
> db.upsert.update({"docid":123},{one:1,two:2},true,false)
WriteResult({
"nMatched" : 0,
"nUpserted" : 0,
"nModified" : 0,
"writeError" : {
"code" : 11000,
"errmsg" : "insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.upsert.$docid_1 dup key: { : null }"
}
})
You have the following issue:
You want to update the document but it is not found. And since your update contains no update operators, your docid field won't be included in the newly created document (or rather, it is set to null, and null can only appear once in a unique index, too).
The next time you try to update your document, you still don't find it, because of the previous step. So MongoDB tries to insert it following the same procedure as before, and fails again: no second null allowed.
Simply change your update so that $set modifies the document and, in the upsert case, your query fields are included in it:
> db.upsert.update({"docid":123},{$set:{one:1,two:2}},true,false)
WriteResult({
"nMatched" : 0,
"nUpserted" : 1,
"nModified" : 0,
"_id" : ObjectId("5562164f0f63858bf27345f3")
})
> db.upsert.find()
{ "_id" : ObjectId("5562164f0f63858bf27345f3"), "docid" : 123, "one" : 1, "two" : 2 }
> db.upsert.update({"docid":123},{$set:{one:1,two:2}},true,false)
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 0 })