I am doing an insert/update into a table using the command below.
insertResult = ((InsertReturningStep) ctx.insertInto(jOOQEntity, insertFields)
        .values(insertValue)
        .onDuplicateKeyUpdate()
        .set(tableFieldMapping.duplicateInsertMap))
        .returning()
        .fetch();
But with the command above I can only insert/update one record at a time.
I want to insert/update multiple records with a single command.
For this I passed a list of values for the same fields into values(), but I get the error below:
"java.lang.IllegalArgumentException: The number of values must match the number of fields"
Is there any solution to insert/update bulk records in one shot?
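One way to do this (a sketch, not from the original post; rows is an assumed List<Object[]> holding one value array per record) is to chain one values() call per record, so jOOQ renders a single multi-row INSERT and the ON DUPLICATE KEY UPDATE clause applies to every row:

// Sketch: multi-row upsert by chaining values() once per record.
// Needs org.jooq.InsertValuesStepN and org.jooq.Result imports;
// jOOQEntity, insertFields, tableFieldMapping and rows stand in for the poster's objects.
InsertValuesStepN<?> step = ctx.insertInto(jOOQEntity, insertFields);
for (Object[] row : rows) {
    step = step.values(row);                    // one VALUES(...) row per record
}
Result<?> insertResult = step
        .onDuplicateKeyUpdate()
        .set(tableFieldMapping.duplicateInsertMap)
        .returning()
        .fetch();

If the records should instead be sent as many independent statements, ctx.batch(...) with bound values is another option, but it does not return the generated records the way returning().fetch() does.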
I'm running the Liquibase change dropAllForeignKeyConstraints on a Sybase database with more than 12,000 tables and more than 380,000 columns. I'm getting an out-of-memory exception because the Liquibase code tries to query all the columns in the database.
The JVM is launched with -Xms64M -Xmx512M (if I increase it to 5 GB it works, but I don't see why we have to query all the columns in the database).
The changeset I'm using:
<dropAllForeignKeyConstraints baseTableName="Table_Name"/>
When I checked the Liquibase code, I found the following:
In DropAllForeignKeyConstraintsChange, a snapshot is created for the table mentioned in the XML:
Table target = SnapshotGeneratorFactory.getInstance().createSnapshot(
        new Table(catalogAndSchema.getCatalogName(), catalogAndSchema.getSchemaName(),
                database.correctObjectName(getBaseTableName(), Table.class)),
        database);
In JdbcDatabaseSnapshot, when getColumns is called, bulkFetchQuery() is used instead of fastFetchQuery() because the table is neither the DatabaseChangeLog table nor the DatabaseChangeLogLock table. In that case, bulkFetchQuery does not filter on the table given in the dropAllForeignKeyConstraints XML; instead it uses SQL_FILTER_MATCH_ALL, so it retrieves all the columns in the database (which already takes time by itself).
In ColumnMapRowMapper, a LinkedHashMap is created for every row of the result set, i.e. for every column in the database, and this is where I get the out-of-memory error.
Is it normal to query all the columns when dropping all the foreign keys of a single table? If so, why is that needed, and is there a solution to my problem that does not involve increasing the JVM heap size?
PS: There is another change, dropForeignKeyConstraint, that drops a single foreign key, but it needs the name of the foreign key as input and I don't have it. I can look up the foreign key name for a given database, but I run this change against many databases and the name differs from one to another, and I need a generic Liquibase change. So I can't use dropForeignKeyConstraint and I have to use dropAllForeignKeyConstraints.
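Not part of the original post: one possible workaround is a custom change that looks up the foreign-key names from the Sybase system tables at run time and emits one DROP CONSTRAINT per key, so no whole-database column snapshot is taken. The sketch below is an assumption, not verified code; the CustomSqlChange wiring and the sysreferences/object_name query may need adjusting to your Liquibase and Sybase versions:

// Sketch of a custom change (assumption, not verified against a specific Liquibase version).
// It queries the Sybase catalog for the FK names of one table and drops them,
// avoiding the whole-database column snapshot.
package com.example.liquibase;

import liquibase.change.custom.CustomSqlChange;
import liquibase.database.Database;
import liquibase.database.jvm.JdbcConnection;
import liquibase.exception.CustomChangeException;
import liquibase.exception.ValidationErrors;
import liquibase.resource.ResourceAccessor;
import liquibase.statement.SqlStatement;
import liquibase.statement.core.RawSqlStatement;

import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class DropAllForeignKeysWithoutSnapshot implements CustomSqlChange {

    // Configured from the changelog, e.g. via a <param name="baseTableName" value="Table_Name"/>
    // child of <customChange> (exact wiring depends on the Liquibase version).
    private String baseTableName;

    public void setBaseTableName(String baseTableName) {
        this.baseTableName = baseTableName;
    }

    @Override
    public SqlStatement[] generateStatements(Database database) throws CustomChangeException {
        List<SqlStatement> drops = new ArrayList<>();
        JdbcConnection conn = (JdbcConnection) database.getConnection();
        // Assumed Sybase ASE catalog query: one row per foreign key defined on the base table
        String query = "select object_name(constrid) from sysreferences"
                + " where tableid = object_id('" + baseTableName + "')";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(query)) {
            while (rs.next()) {
                drops.add(new RawSqlStatement(
                        "alter table " + baseTableName + " drop constraint " + rs.getString(1)));
            }
        } catch (Exception e) {
            throw new CustomChangeException("Could not list foreign keys of " + baseTableName, e);
        }
        return drops.toArray(new SqlStatement[0]);
    }

    @Override public String getConfirmationMessage() { return "Dropped all foreign keys on " + baseTableName; }
    @Override public void setUp() { }
    @Override public void setFileOpener(ResourceAccessor resourceAccessor) { }
    @Override public ValidationErrors validate(Database database) { return new ValidationErrors(); }
}

It would be referenced from the changelog with a customChange element pointing at this class and passing baseTableName as a parameter.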
Here is the stack trace of the OutOfMemoryError:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.LinkedHashMap.newNode(LinkedHashMap.java:256)
at java.base/java.util.HashMap.putVal(HashMap.java:637)
at java.base/java.util.HashMap.put(HashMap.java:607)
at liquibase.executor.jvm.ColumnMapRowMapper.mapRow(ColumnMapRowMapper.java:35)
at liquibase.executor.jvm.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:72)
at liquibase.snapshot.ResultSetCache$ResultSetExtractor.extract(ResultSetCache.java:297)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData$3.extract(JdbcDatabaseSnapshot.java:774)
at liquibase.snapshot.ResultSetCache$ResultSetExtractor.extract(ResultSetCache.java:288)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData$3.bulkFetchQuery(JdbcDatabaseSnapshot.java:606)
at liquibase.snapshot.ResultSetCache$SingleResultSetExtractor.bulkFetch(ResultSetCache.java:353)
at liquibase.snapshot.ResultSetCache.get(ResultSetCache.java:59)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData.getColumns(JdbcDatabaseSnapshot.java:539)
at liquibase.snapshot.jvm.ColumnSnapshotGenerator.addTo(ColumnSnapshotGenerator.java:106)
at liquibase.snapshot.jvm.JdbcSnapshotGenerator.snapshot(JdbcSnapshotGenerator.java:79)
at liquibase.snapshot.SnapshotGeneratorChain.snapshot(SnapshotGeneratorChain.java:49)
at liquibase.snapshot.DatabaseSnapshot.include(DatabaseSnapshot.java:286)
at liquibase.snapshot.DatabaseSnapshot.init(DatabaseSnapshot.java:102)
at liquibase.snapshot.DatabaseSnapshot.<init>(DatabaseSnapshot.java:59)
at liquibase.snapshot.JdbcDatabaseSnapshot.<init>(JdbcDatabaseSnapshot.java:38)
at liquibase.snapshot.SnapshotGeneratorFactory.createSnapshot(SnapshotGeneratorFactory.java:217)
at liquibase.snapshot.SnapshotGeneratorFactory.createSnapshot(SnapshotGeneratorFactory.java:246)
at liquibase.snapshot.SnapshotGeneratorFactory.createSnapshot(SnapshotGeneratorFactory.java:230)
at liquibase.change.core.DropAllForeignKeyConstraintsChange.generateChildren(DropAllForeignKeyConstraintsChange.java:90)
at liquibase.change.core.DropAllForeignKeyConstraintsChange.generateStatements(DropAllForeignKeyConstraintsChange.java:59)
I've created the following table in Hive:
CREATE TABLE mytable (..columns...) PARTITIONED BY (load_date string) STORED AS ...
And I'm trying to insert data into the table with Spark as follows:
Dataset<Row> dfSelect = df.withColumn("load_date", functions.lit("15_07_2018")); // lit() because withColumn expects a Column
dfSelect.write().mode("append").partitionBy("load_date").save(path);
I also set the following configuration:
sqlContext().setConf("hive.exec.dynamic.partition","true");
sqlContext().setConf("hive.exec.dynamic.partition.mode","nonstrict");
After the write completes I can see the directory /myDbPath/load_date=15_07_2018 on HDFS, and it contains the file that I've written. But when I run a query like:
show partitions mytable
or
select * from mytable where load_date="15_07_2018"
I get 0 records.
What happened and how can I fix this?
EDIT
If I run the following command in Hue:
msck repair table mytable
the problem is solved. How can I do the same from my code?
Hive stores a list of partitions for each table in its metastore. If new partitions are added directly to HDFS (for example with the hadoop fs -put command, or with .save(), etc.), the metastore (and hence Hive) will not be aware of these partitions unless the user runs one of the two commands below:
Metastore check command (msck repair table):
msck repair table <db.name>.<table_name>;
(or)
ALTER TABLE table_name ADD PARTITION commands for each of the newly added partitions.
We can also add partitions with an ALTER TABLE statement; with this approach every newly created partition has to be added to the table individually:
alter table <db.name>.<table_name> add partition(load_date="15_07_2018") location <hdfs-location>;
Run either of the above statements and then check the data again for load_date="15_07_2018".
For more details, refer to these links: add partitions and msck repair table.
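To answer the EDIT ("how can I do it in my code?"): the same statements can be issued from Spark itself. A minimal sketch (not from the original post), assuming a SparkSession with Hive support and the table name from the question:

// Sketch: register the new HDFS partitions from Spark code instead of running the command in Hue.
import org.apache.spark.sql.SparkSession;

public class RepairPartitions {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("repair-partitions")
                .enableHiveSupport()          // required so Spark talks to the Hive metastore
                .getOrCreate();

        // Equivalent of running "msck repair table mytable" in Hue
        spark.sql("MSCK REPAIR TABLE mytable");

        // Or register just the one partition explicitly
        spark.sql("ALTER TABLE mytable ADD IF NOT EXISTS PARTITION (load_date='15_07_2018')");
    }
}

Writing through the metastore in the first place (for example saveAsTable or insertInto instead of save(path)) should also avoid the problem, since Hive then learns about the partitions as they are written.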
I am trying to insert documents into MongoDB from Java. The first record is inserted, and then it shows the error 'E11000 duplicate key error'. I even tried to make the documents unique, but I still get the same error. Here is a screenshot of it.
MongoDB version: v3.4.10
@sowmyasurampalli, E11000 is a MongoDB error code that means some entry is duplicated. When you use a field as a unique field (in your case _id, which is unique by default), you have to insert documents with distinct _ids, otherwise this error will be thrown. So in your app you also need to catch that error in order to inform the user that the entry was duplicated.
Also, if you are sure that the documents you're inserting have unique _ids, just drop your collection from the DB, because it still contains the documents inserted by the previous run!
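For completeness, a minimal sketch (not from the original post) of handling this error in Java with the MongoDB sync driver; the connection string, database, collection, and _id values are hypothetical:

// Sketch: insert a document and handle the E11000 duplicate key case explicitly.
import com.mongodb.ErrorCategory;
import com.mongodb.MongoWriteException;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class DuplicateKeyExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> coll = client.getDatabase("mydb").getCollection("mycoll");
            Document doc = new Document("_id", "some-unique-id").append("name", "test");
            try {
                coll.insertOne(doc);
            } catch (MongoWriteException e) {
                // E11000 maps to the DUPLICATE_KEY error category
                if (e.getError().getCategory() == ErrorCategory.DUPLICATE_KEY) {
                    System.err.println("A document with _id " + doc.get("_id") + " already exists");
                } else {
                    throw e;
                }
            }
        }
    }
}

If you really do want to start from scratch, coll.drop() removes the collection, matching the "drop your collection" advice above.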
I just dropped the collection, and everything started working fine after that.
1. Delete the database using the command: db.dropDatabase();
(don't worry, this step is not as drastic as it looks)
2. Create a new db: use dbname
3. Restart the server: npm start
Note: dropped indexes and the db will be rebuilt by the schema file when the server is restarted.
I have a situation where a user uploads a new file. When the file is successfully submitted, one record is inserted into a database table. I then want to run another class that polls the database: if a new record has been inserted, it retrieves the file name, reads the file, and inserts all of the file's records into a database table. I have no idea how to sort this out; please help me and share your views on this situation.
Thanks
Well, it seems that you want a Java class that periodically checks a table (say TableA) and processes the new records inserted since the last check.
You should at least have a column (e.g. polled_time) to capture whether a record has been polled before. polled_time is the timestamp at which the record was last polled; it is null if the record has never been polled.
Whenever the Java class runs, it should select the records that have not been polled yet (select * from TableA where polled_time is null). After processing each record, update its polled_time to mark it as polled, so that it is not selected again the next time the class runs (update TableA set polled_time = now() where id = xxxxx).
Finally, set up a scheduled task (on Windows) or a cron job (on Linux/Unix) to run this Java class periodically.
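A minimal sketch of such a poller (not from the original post; the table, column names, and JDBC URL are hypothetical, and the scheduling is done in-process instead of with cron):

// Sketch: poll TableA for unprocessed upload records, process each file, then mark it as polled.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FilePoller {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Poll the table every 60 seconds
        scheduler.scheduleAtFixedRate(FilePoller::pollOnce, 0, 60, TimeUnit.SECONDS);
    }

    static void pollOnce() {
        String selectSql = "select id, file_name from TableA where polled_time is null";
        String updateSql = "update TableA set polled_time = now() where id = ?";
        try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "user", "pass");
             PreparedStatement select = con.prepareStatement(selectSql);
             PreparedStatement update = con.prepareStatement(updateSql);
             ResultSet rs = select.executeQuery()) {
            while (rs.next()) {
                long id = rs.getLong("id");
                String fileName = rs.getString("file_name");
                processFile(fileName);          // read the uploaded file and insert its records
                update.setLong(1, id);          // mark this record as polled
                update.executeUpdate();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    static void processFile(String fileName) {
        // hypothetical: parse the uploaded file and insert its rows into the records table
    }
}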
Why can't you perform both operations in the same servlet doPost()? Or get rid of the filename table, keep just the contents table, and avoid the whole polling situation?
doPost(...) {
    try {
        validateFile(...);
        updateFileTable(...);
        updateFilenameTable(...);
    } catch (...) {
        ...
    } finally {
        ...
    }
}
INSERT INTO [UPLOAD_FILE_RECORD_FIELDS_DATA]([RECORD_ID], [FIELD_ORDER], [FIELD_VALUE], [ERROR_CODE])
select ?,?,?,?
union all
select ?,?,?,?
union all
select ?,?,?,?
I have to insert multiple records into one table, so I am using the query above and setting the parameter values. But I am getting error code 77. What is the cause?
The number of records to be inserted is approximately 70,000, so I put 100 records into one query, call addBatch() on the PreparedStatement 700 times, and then execute the whole batch.
Actually 77 was not an error code at all; it was the number of updates per statement. So everything is working fine.
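For reference, a minimal sketch (not from the original post) of the pattern described: 100 rows per statement via UNION ALL, 700 statements per batch. The connection URL and sample values are hypothetical. executeBatch() returns one update count per batched statement, which matches the conclusion above that the number seen was an update count, not an error code:

// Sketch: build one 100-row INSERT ... SELECT ? UNION ALL SELECT ? statement,
// add it to the batch 700 times with different values, then execute the batch.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BulkInsertExample {
    public static void main(String[] args) throws Exception {
        // 100 rows per statement -> 400 parameters (4 columns per row)
        StringBuilder sql = new StringBuilder(
                "INSERT INTO [UPLOAD_FILE_RECORD_FIELDS_DATA]([RECORD_ID], [FIELD_ORDER], [FIELD_VALUE], [ERROR_CODE]) ");
        for (int row = 0; row < 100; row++) {
            sql.append(row == 0 ? "select ?,?,?,? " : "union all select ?,?,?,? ");
        }
        try (Connection con = DriverManager.getConnection("jdbc:sqlserver://localhost;databaseName=mydb", "user", "pass");
             PreparedStatement ps = con.prepareStatement(sql.toString())) {
            for (int statement = 0; statement < 700; statement++) {
                int param = 1;
                for (int row = 0; row < 100; row++) {
                    ps.setLong(param++, statement * 100L + row);   // RECORD_ID
                    ps.setInt(param++, row);                       // FIELD_ORDER
                    ps.setString(param++, "value");                // FIELD_VALUE
                    ps.setString(param++, null);                   // ERROR_CODE
                }
                ps.addBatch();
            }
            // One entry per batched statement: the number of rows affected, not an error code
            int[] updateCounts = ps.executeBatch();
            System.out.println("Statements executed: " + updateCounts.length);
        }
    }
}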