I am trying to insert documents into MongoDB from Java. The first record is inserted, but after that I get the error 'E11000 duplicate key error'. I even tried to make the documents unique, but I still get the same error.
MongoDB version: 3.4.10
@sowmyasurampalli, E11000 is a MongoDB error code that means some entry is duplicated. When a field is declared unique (in your case _id is unique by default), you must insert documents with distinct _id values, otherwise this error is thrown. In your app you should also catch that error so you can tell the user that the entry was a duplicate.
Also, if you are sure that the documents you're inserting have unique ids, just drop your collection from the DB, because it still contains the documents inserted by the previous run!
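For illustration, catching the duplicate-key case in the Java driver (as suggested above) could look roughly like this; the class, method, and document contents are placeholders, not the asker's code:

import com.mongodb.ErrorCategory;
import com.mongodb.MongoWriteException;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class InsertWithDuplicateCheck {
    // Insert a document and report E11000 duplicate-key failures instead of crashing
    static void insertOrReport(MongoCollection<Document> collection, Document doc) {
        try {
            collection.insertOne(doc);
        } catch (MongoWriteException e) {
            if (e.getError().getCategory() == ErrorCategory.DUPLICATE_KEY) {
                // E11000: a document with the same _id (or another unique key) already exists
                System.err.println("Duplicate entry, not inserted: " + doc.get("_id"));
            } else {
                throw e;
            }
        }
    }
}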
I just dropped the collection and everything started working fine after that
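For reference, a minimal sketch of dropping the collection from Java with the driver (database and collection names are placeholders, assuming the com.mongodb.client API):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class DropCollection {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // Removes the collection together with its documents and indexes
            client.getDatabase("mydb").getCollection("mycollection").drop();
        }
    }
}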
1.) Delete the database using the command: db.dropDatabase();
(don't worry, this step is less aggressive than it looks; see the note below)
2.) Create a new db: use dbname
3.) Restart the server: npm start
Note: the dropped indexes and db will be rebuilt again from the Schema file when the server is restarted.
I am connecting to a MySQL table using JPA/Hibernate, but I am getting this error in my Java code:
org.hibernate.HibernateException: Missing table
My table is present in the MySQL database schema, so I don't understand why the missing table exception is thrown. This is a newly created table; all other existing tables in the same schema are accessible from Hibernate. I saw similar posts with the same error, but the answers there didn't help. Can you please let me know what the issue could be?
If the table is present, then it is most likely a user permission issue. This happens if you created the table with a different MySQL user. Make sure the MySQL username/password that you are using in Hibernate has access to the table. To test, log in to the MySQL console directly with the Hibernate credentials and run a SELECT query on the table. If you see an error similar to the one below, then you need to grant the Hibernate user access to the table.
ERROR 1142 (42000): SELECT command denied to user
Source: http://www.w3spot.com/2020/10/how-to-solve-caused-by-hibernateexception-missing-table.html
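If you would rather verify this from Java than from the MySQL console, a quick JDBC check with the same credentials Hibernate uses will surface the same 1142 error; the URL, user, password, and table name below are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TableAccessCheck {
    public static void main(String[] args) throws SQLException {
        // Use exactly the JDBC URL and credentials from your Hibernate configuration
        String url = "jdbc:mysql://localhost:3306/myschema";
        try (Connection con = DriverManager.getConnection(url, "hibernate_user", "secret");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1 FROM my_table LIMIT 1")) {
            System.out.println("Table accessible: " + rs.next());
        }
        // A "SELECT command denied" SQLException here means the Hibernate user needs a GRANT on the table
    }
}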
Make sure the user has access to the table
Make sure the names match, including case sensitivity
Make sure the schema name and table name are not misspelled
If you share more information about the issue, it would be easier to pinpoint the problem.
Chances are there is an inheritance scenario with a physical table that you assumed to be abstract.
To dig deeper you can put a breakpoint in org.hibernate.tool.schema.extract.internal.DatabaseInformationImpl#getTablesInformation which calls extractor.getTable to see why your table is not returned as part of schema tables.
Rerun the app with the specified breakpoint and step through the lines to get to the line which queries table names from the database metadata.
@Override
public TableInformation getTableInformation(QualifiedTableName tableName) {
    if ( tableName.getObjectName() == null ) {
        throw new IllegalArgumentException( "Passed table name cannot be null" );
    }
    return extractor.getTable(
            tableName.getCatalogName(),
            tableName.getSchemaName(),
            tableName.getTableName()
    );
}
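As a lighter alternative to stepping through Hibernate internals, you can query the JDBC metadata yourself with the same connection settings and check whether the table is visible to the configured user at all; connection details and names below are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MetadataCheck {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/myschema";
        try (Connection con = DriverManager.getConnection(url, "hibernate_user", "secret");
             // DatabaseMetaData.getTables is the kind of lookup the schema extractor relies on
             ResultSet rs = con.getMetaData().getTables(con.getCatalog(), null, "my_table", new String[] { "TABLE" })) {
            System.out.println("Table visible to this user: " + rs.next());
        }
    }
}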
I'm running the Liquibase command "dropAllForeignKey" on a Sybase database with more than 12,000 tables and more than 380,000 columns. I'm getting an out-of-memory exception because the Liquibase code tries to query all the columns in the database.
The JVM is launched with -Xms64M -Xmx512M (if I increase it to 5 GB it works, but I don't see why we have to query all the columns in the database).
The script I'm using :
<dropAllForeignKeyConstraints baseTableName="Table_Name"/>
When I checked the Liquibase code, I found the following:
In DropAllForeignKeyConstraintsChange, a snapshot is created for the table mentioned in the XML:
Table target = SnapshotGeneratorFactory.getInstance().createSnapshot(
        new Table(catalogAndSchema.getCatalogName(), catalogAndSchema.getSchemaName(),
                database.correctObjectName(getBaseTableName(), Table.class)),
        database);
In JdbcDatabaseSnapshot: when we call getColumns, we call the bulkFetchQuery() instead of fastFetchQuery() because the table is neither "DatabaseChangeLogTableName" nor "DatabaseChangeLogLockTableName". In this case, the bulkFetchQuery does not filter on the table given in the dropAllForeignKey xml. Instead, it uses SQL_FILTER_MATCH_ALL, so it'll retrieve all the columns in the database. (It already takes time to query all the columns)
In ColumnMapRowMapper: for each table, a LinkedHashMap is created with a size equal to the number of columns. This is where I get the out-of-memory error.
Is it normal to query all the columns when dropping all the foreign keys of a given table? If so, why is it needed, and is there a solution to my problem that doesn't involve increasing the JVM heap size?
PS: There is another change called dropForeignKey to drop a foreign key, but it needs the name of the foreign key as an input, and I don't have it. I can find the name of the foreign key for a given database, but I'm running this change on different databases, the foreign key name changes from one to another, and I need a generic Liquibase change. So I can't use dropForeignKey and I need to use dropAllForeignKey.
Here is the stack trace:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.LinkedHashMap.newNode(LinkedHashMap.java:256)
at java.base/java.util.HashMap.putVal(HashMap.java:637)
at java.base/java.util.HashMap.put(HashMap.java:607)
at liquibase.executor.jvm.ColumnMapRowMapper.mapRow(ColumnMapRowMapper.java:35)
at liquibase.executor.jvm.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:72)
at liquibase.snapshot.ResultSetCache$ResultSetExtractor.extract(ResultSetCache.java:297)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData$3.extract(JdbcDatabaseSnapshot.java:774)
at liquibase.snapshot.ResultSetCache$ResultSetExtractor.extract(ResultSetCache.java:288)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData$3.bulkFetchQuery(JdbcDatabaseSnapshot.java:606)
at liquibase.snapshot.ResultSetCache$SingleResultSetExtractor.bulkFetch(ResultSetCache.java:353)
at liquibase.snapshot.ResultSetCache.get(ResultSetCache.java:59)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData.getColumns(JdbcDatabaseSnapshot.java:539)
at liquibase.snapshot.jvm.ColumnSnapshotGenerator.addTo(ColumnSnapshotGenerator.java:106)
at liquibase.snapshot.jvm.JdbcSnapshotGenerator.snapshot(JdbcSnapshotGenerator.java:79)
at liquibase.snapshot.SnapshotGeneratorChain.snapshot(SnapshotGeneratorChain.java:49)
at liquibase.snapshot.DatabaseSnapshot.include(DatabaseSnapshot.java:286)
at liquibase.snapshot.DatabaseSnapshot.init(DatabaseSnapshot.java:102)
at liquibase.snapshot.DatabaseSnapshot.<init>(DatabaseSnapshot.java:59)
at liquibase.snapshot.JdbcDatabaseSnapshot.<init>(JdbcDatabaseSnapshot.java:38)
at liquibase.snapshot.SnapshotGeneratorFactory.createSnapshot(SnapshotGeneratorFactory.java:217)
at liquibase.snapshot.SnapshotGeneratorFactory.createSnapshot(SnapshotGeneratorFactory.java:246)
at liquibase.snapshot.SnapshotGeneratorFactory.createSnapshot(SnapshotGeneratorFactory.java:230)
at liquibase.change.core.DropAllForeignKeyConstraintsChange.generateChildren(DropAllForeignKeyConstraintsChange.java:90)
at liquibase.change.core.DropAllForeignKeyConstraintsChange.generateStatements(DropAllForeignKeyConstraintsChange.java:59)
I have a MongoDB remote server that I am using.
My KEY is a custom object that has other nested objects in it.
Simple inserts work fine, but if I try to run
collection.replaceOne(eq("_id", KEY), document, new UpdateOptions().upsert(true));
I get com.mongodb.MongoWriteException: After applying the update, the (immutable) field '_id' was found to have been altered to _id: .......
If I only have primitives in the key, it works fine. Of course the value of the KEY itself is not changed (I traced it all the way down).
Is this a MongoDB Java driver bug in the replaceOne function?
As it turns out, for Mongo filters the order of JSON properties matters. With debugging it is possible to see the actual order of the properties in the filters, and then you can set your model's property order with @JsonPropertyOrder({"att1", "att2"}) so they match.
This was confirmed by a member of the MongoDB team.
Mongo ticket-> https://jira.mongodb.org/browse/JAVA-3392
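For illustration, assuming the KEY class is serialized through Jackson as in the answer above, pinning the property order might look like this (class and field names are invented):

import com.fasterxml.jackson.annotation.JsonPropertyOrder;

// The order listed here must match the property order of the _id already stored in MongoDB,
// otherwise the filter's _id and the replacement document's _id compare as different values.
@JsonPropertyOrder({ "region", "customerId" })
public class CompositeKey {
    private String region;
    private long customerId;

    // getters and setters omitted
}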
I am doing inserts/updates into a table using the command below:
insertResult = ((InsertReturningStep) ctx.insertInto(jOOQEntity, insertFields)
        .values(insertValue)
        .onDuplicateKeyUpdate()
        .set(tableFieldMapping.duplicateInsertMap))
        .returning()
        .fetch();
But with the above command I am only able to insert/update one record at a time.
I want to insert/update multiple records with a single command.
For this I am passing a list of values for the same fields into values(), but I am getting the error below:
"java.lang.IllegalArgumentException: The number of values must match the number of fields"
Is there any solution to insert/update bulk records in one shot?
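One possible workaround (a sketch only, not tested against your schema; table and column names are placeholders): build one upsert statement per row and send them in a single JDBC batch with DSLContext.batch(), since a single values() call only accepts one row's worth of values:

import java.util.ArrayList;
import java.util.List;

import org.jooq.DSLContext;
import org.jooq.Field;
import org.jooq.Query;
import org.jooq.Record;
import org.jooq.Table;

import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.name;
import static org.jooq.impl.DSL.table;

public class BulkUpsert {

    // Placeholder table and columns; replace with your generated jOOQ classes
    static final Table<Record> MY_TABLE = table(name("my_table"));
    static final Field<Integer> ID = field(name("id"), Integer.class);
    static final Field<String> NAME = field(name("name"), String.class);

    // One INSERT ... ON DUPLICATE KEY UPDATE per row, executed as a single JDBC batch
    static void upsertAll(DSLContext ctx, List<Object[]> rows) {
        List<Query> queries = new ArrayList<>();
        for (Object[] row : rows) {
            Integer id = (Integer) row[0];
            String name = (String) row[1];
            queries.add(
                ctx.insertInto(MY_TABLE, ID, NAME)
                   .values(id, name)
                   .onDuplicateKeyUpdate()
                   .set(NAME, name));
        }
        ctx.batch(queries).execute();
    }
}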
@Override
public Application getApplicationForId(Long applicationId) {
    List<Application> applications = executeNamedQuery("applicationById", Application.class, applicationId);
    return applications.isEmpty() ? null : applications.get(0);
}
While debugging in Eclipse, on the line
return applications.isEmpty() ? null : applications.get(0);
the expressions are evaluated as:
applications.isEmpty() -> false
applications.get(0) -> (id=171)
applications.size() -> 1
but after this line executes, it throws the error:
org.hibernate.HibernateException: More than one row with the given identifier was found: 263536,
Even though the size shows as 1, why and how does it end up finding multiple rows after the execution?
I'm quite sure that this is due to eager fetching. So check your entity and remove the fetch = FetchType.EAGER.
Actually this is not caused by duplicate rows in the database, as it's obviously not possible to have duplicate primary keys. Instead this was caused by Hibernate looking up an object, and eagerly filling in a relationship. Hibernate assumed a single row would come back, but two came back because there were two objects associated with that relationship.
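As an illustration of the suggested fix (the entity layout here is invented, not taken from the question), the eager association would be switched to lazy loading so Hibernate no longer pulls the related rows in while loading the parent:

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToOne;

@Entity
public class Application {

    @Id
    private Long id;

    // Was fetch = FetchType.EAGER: Hibernate then fetched this to-one association while
    // loading Application and failed when more than one matching row came back.
    // Note that if two rows really do match, the underlying data still needs cleaning up.
    @OneToOne(fetch = FetchType.LAZY)
    private ApplicationDetail detail;   // ApplicationDetail is a hypothetical related entity

    // getters and setters omitted
}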
In my case the issue was the following:
While debugging, once the purpose was served, I forcibly stopped the server in the middle of a transaction. Because the server was killed mid-execution, the transaction was never rolled back, and that left dirty/corrupt data in the database: some rows had already been inserted before the server was terminated, and the primary key auto-increment had already advanced.
Resetting the auto-increment value for the primary key of the table resolved the issue.
1. Identify the table with the dirty data (refer to the stack trace).
2. Sort the primary key column and check the highest value in it (say somevalue).
3. Use the command:
ALTER TABLE tablename AUTO_INCREMENT = somevalue + 1