Different response for same method of DatabaseMetaData pointing to different databases? - java

When I invoke the method getIndexInfo(catalog, schema, table, true, false), I receive a ResultSet that differs slightly from what the documentation describes:
With MySQL (5.7), I receive a ResultSet containing:
one row describing the primary key column
n rows describing the unique columns
With SQL Server (14.00), I receive a ResultSet containing:
one tableIndexStatistic row for the primary key
one row describing the primary key column
n rows describing the unique columns
m rows describing the index columns
Due to a project choice, all the primary keys are auto-increment, so there is no case where a primary key column is also a unique column.
I'm trying to write a database-independent solution, since it will be used with both MySQL and SQL Server databases;
MySQL uses the MySQL-AB JDBC Driver 5.1.20, SQL Server uses the Microsoft JDBC Driver 6.4.
Initially I "resolved" this problem by retrieving the driver name from the session, in order to apply a specific filter for each database;
for MySQL I found that the column INDEX_NAME for the primary key is always 'PRIMARY', while for SQL Server I found that the column TYPE is:
0 for the tableIndexStatistic
1 for our SQL Server primary keys (tableIndexClustered)
2 (not found in my case yet, but it is for tableIndexHashed)
3 for the unique keys (tableIndexOther)
A difference between MySQL and SQL Server is that the Primary Keys are respectively of TYPE 3 and 1.
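For reference, these TYPE values match the short constants defined on java.sql.DatabaseMetaData, so the magic numbers in the filter below can be replaced with named constants:

DatabaseMetaData.tableIndexStatistic // 0
DatabaseMetaData.tableIndexClustered // 1
DatabaseMetaData.tableIndexHashed    // 2
DatabaseMetaData.tableIndexOther     // 3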
Filter example:
String driver = session.getConfiguration().getDatabaseId();
DatabaseMetaData metadata = session.getConnection().getMetaData();
ResultSet result = metadata.getIndexInfo(catalog, schema, table, true, false);
while (result.next()) {
    if ("mysql".equals(driver)) {
        // MySQL reports the primary key with INDEX_NAME = 'PRIMARY'
        if (!"PRIMARY".equals(result.getString("INDEX_NAME"))) {
            // ... code to save the result ...
        }
    } else if ("sqlserver".equals(driver)) {
        // SQL Server reports unique keys with TYPE = 3 (tableIndexOther)
        if (DatabaseMetaData.tableIndexOther == result.getShort("TYPE")) {
            // ... code to save the result ...
        }
    } else {
        throw new Exception();
    }
}
This code worked for a while, until I discovered a table with an index on SQL Server; in this case, as per the documentation linked before, the indexes are part of tableIndexOther, so they have the column TYPE with the value 3.
At this point I noticed that the column NON_UNIQUE is true for the unique column descriptions and false for the index column descriptions.
So I was thinking of expanding the SQL Server filter to include the NON_UNIQUE column but, contrary to what the documentation describes, when I retrieve a tableIndexStatistic row I get null instead of false.
I'm a bit confused about how I should approach all these inconsistencies with the documentation, since my main goal is to retrieve the same set of unique keys from both databases.
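For clarity, this is the database-independent shape I'm aiming for, assuming NON_UNIQUE and TYPE behave as the JDBC javadoc describes; ResultSet.wasNull() guards against the null NON_UNIQUE value mentioned above, and the primary key rows would still have to be excluded separately (for example by comparing column names against DatabaseMetaData.getPrimaryKeys()):

DatabaseMetaData metadata = session.getConnection().getMetaData();
ResultSet rs = metadata.getIndexInfo(catalog, schema, table, true, false);
while (rs.next()) {
    // Statistics rows (TYPE = tableIndexStatistic) describe the table, not an index column; skip them.
    if (rs.getShort("TYPE") == DatabaseMetaData.tableIndexStatistic) {
        continue;
    }
    boolean nonUnique = rs.getBoolean("NON_UNIQUE");
    // getBoolean() returns false for SQL NULL, so wasNull() must be checked explicitly.
    if (!rs.wasNull() && !nonUnique) {
        // unique index column: save COLUMN_NAME / INDEX_NAME here ...
    }
}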

Related

Can jOOQ support composite primary keys if auto-increment field isn't the first one

I'm using jOOQ (3.14.11) to manage a table defined (in H2 or MySQL) as:
CREATE TABLE example_one (
    group_id VARCHAR(36) NOT NULL,
    pawn_id INT UNSIGNED AUTO_INCREMENT NOT NULL,
    some_unimportant_attribute INT UNSIGNED DEFAULT 0 NOT NULL,
    another_unimportant_attribute VARCHAR(36),
    CONSTRAINT pk_example_one PRIMARY KEY (group_id, pawn_id)
)
Note that in this SQL, the primary key lists (group_id, pawn_id) in that order, but it is pawn_id, the second one, which is the auto-increment/identity column.
It appears that jOOQ doesn't like this arrangement. When I try to use the Record objects to insert a new row, it does not return the pawn_id value to me:
ExampleOneRecord r = create.newRecord(EXAMPLE_ONE);
r.setGroupId("a group identity");
r.store();
assert r.getPawnId() != null; // <---- FAILS test
Diving into the code, the suspect seems to be AbstractDMLQuery.java's method executeReturningGeneratedKeysFetchAdditionalRows, which has this bit of logic:
// Some JDBC drivers seem to illegally return null
// from getGeneratedKeys() sometimes
if (rs != null)
    while (rs.next())
        list.add(rs.getObject(1));
The call to rs.getObject(1) seems to assume that the generated column is always the first column of the primary key.
Is there any way to convince jOOQ otherwise?
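For what it's worth, a workaround I'm considering (a hedged sketch only; UInteger is jOOQ's mapping for INT UNSIGNED, and I haven't verified that this avoids the same getGeneratedKeys() code path on every dialect) is an explicit INSERT with returningResult() instead of Record.store():

// Explicit insert, asking for the identity column back by name.
UInteger pawnId = create.insertInto(EXAMPLE_ONE)
    .set(EXAMPLE_ONE.GROUP_ID, "a group identity")
    .returningResult(EXAMPLE_ONE.PAWN_ID)
    .fetchOne()
    .value1();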
This is a bug in jOOQ 3.15.1 for H2: https://github.com/jOOQ/jOOQ/issues/12192
It has been fixed in jOOQ 3.16.0 for H2 2.0.202, which now supports the powerful data change delta table syntax, allowing for much easier fetching of generated values from a DML statement (the syntax was implemented before that H2 version, but had a significant bug: https://github.com/h2database/h2database/issues/2502)
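For illustration, a sketch of what the data change delta table syntax looks like on H2 2.x, issued here through plain JDBC against the question's table (the in-memory URL is an assumption for the example):

import java.sql.*;

// FINAL TABLE exposes the inserted row, including the generated pawn_id,
// in the same statement - no getGeneratedKeys() involved.
try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(
         "SELECT pawn_id FROM FINAL TABLE (" +
         "INSERT INTO example_one (group_id) VALUES ('a group identity'))")) {
    if (rs.next()) {
        System.out.println("generated pawn_id = " + rs.getInt(1));
    }
}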

Java: how to perform batch insert with identity column using java jdbc for Sql Server

I have a csv file which I need to write to a SQL Server table using SQLServerBulkCopy. I am using SQLServerBulkCSVFileRecord to load the data from the file.
The target table has the following structure:
create table TEST
(
    ID int identity,
    FIELD_1 int,
    FIELD_2 varchar(20)
)
The csv file has the following structure:
4279895;AA00000002D
4279895;AA00000002D
4279895;AA00000002D
4279896;AA00000003E
4279896;AA00000003E
4279896;AA00000003E
As you can see, the ID (identity) column is not present in the csv; I need the database to add the identity value automatically on insert.
My problem is that the bulk insert does not work as long as the table has the identity column; I get the following error:
com.microsoft.sqlserver.jdbc.SQLServerException: Source and destination schemas do not match.
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.validateColumnMappings(SQLServerBulkCopy.java:1749)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:1579)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:606)
This is the relevant code:
try (
    Connection targetConnection = DriverManager.getConnection(Configuration.TARGET_CONNECTION_URL);
    SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(targetConnection);
    SQLServerBulkCSVFileRecord fileRecord =
        new SQLServerBulkCSVFileRecord(csvPath, Charsets.UTF_8.toString(), ";", false);
) {
    SQLServerBulkCopyOptions copyOptions = new SQLServerBulkCopyOptions();
    copyOptions.setKeepIdentity(false);
    bulkCopy.setBulkCopyOptions(copyOptions);
    fileRecord.addColumnMetadata(1, null, java.sql.Types.INTEGER, 0, 0);  // FIELD_1 int
    fileRecord.addColumnMetadata(2, null, java.sql.Types.VARCHAR, 20, 0); // FIELD_2 varchar(20)
    bulkCopy.setDestinationTableName("TEST");
    bulkCopy.writeToServer(fileRecord);
} catch (Exception e) {
    // [...]
}
The bulk insert ends successfully if I remove the identity column from the table. What is the correct way to perform an identity bulk insert using Java JDBC for SQL Server?
I think you don't need to set the option copyOptions.setKeepIdentity(false); try removing that line. You can also refer to this post: SqlBulkCopy Insert with Identity Column
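Another common fix for the "schemas do not match" error (a sketch, assuming the same fileRecord and bulkCopy objects as in the question) is to map the two CSV columns explicitly onto the destination columns that come after the identity column, using SQLServerBulkCopy.addColumnMapping, so the driver does not try to line the first CSV column up with ID:

// The CSV has 2 columns, the table has 3 (ID is the identity, destination column 1).
bulkCopy.addColumnMapping(1, 2); // CSV column 1 -> FIELD_1
bulkCopy.addColumnMapping(2, 3); // CSV column 2 -> FIELD_2
bulkCopy.setDestinationTableName("TEST");
bulkCopy.writeToServer(fileRecord);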
If you have a leading column with blank values, then the identity will be generated on insert. Depending on the settings, it might generate a new identity even if the first column is not blank.
So either add an extra column or use another (staging) table.
BTW, if you have a really big table, the command-line bcp utility is the fastest; from experience, up to 5 times faster than a JDBC batch insert.

Missing column that was just inserted in cassandra column family

We are constantly hitting a problem on our test cluster.
Cassandra configuration:
cassandra version: 2.2.12
node count: 6 (3 seed nodes, 3 non-seed nodes)
replication factor: 1 (of course for prod we will use 3)
Table configuration where we get problem:
CREATE TABLE "STATISTICS" (
key timeuuid,
column1 blob,
column2 blob,
column3 blob,
column4 blob,
value blob,
PRIMARY KEY (key, column1, column2, column3, column4)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC, column2 ASC, column3 ASC, column4 ASC)
AND caching = {
'keys':'ALL', 'rows_per_partition':'100'
}
AND compaction = {
'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
};
Our Java code details:
java 8
cassandra driver: astyanax
app-nodes count: 4
So, what's happening:
Under high load our application does many inserts into Cassandra tables from all nodes.
During this we have one workflow where we do the following with one row in the STATISTICS table:
insert 3 columns from app-node-1
insert 1 column from app-node-2
insert 1 column from app-node-3
read all columns from the row on app-node-4
At the last step (4), when we read all columns, we are sure that the insert of all columns is done (it is guaranteed by other checks that we have).
The problem is that sometimes (2-5 times per 100,000) at step 4, when we read all columns, we get 4 columns instead of 5, i.e. we are missing the column that was inserted at step 2 or 3.
We even started reading these columns every 100 ms in a loop, and we don't get the expected result. During this time we also check the columns using cqlsh - same result, i.e. 4 instead of 5.
BUT, if we add any new column to this row, then we immediately get the expected result, i.e. we then get 6 columns - 5 columns from the workflow and 1 dummy.
So after inserting the dummy column we get the missing column that was inserted at step 2 or 3.
Moreover, when we get the timestamp of the missing (and then appeared) column, it's very close to the time when this column was actually added from our app node.
Basically the insertions from app-node-2 and app-node-3 are done nearly at the same time, so these two columns always end up with nearly the same timestamp, even if we insert the dummy column 1 minute after the first read of all columns at step 4.
With replication factor 3 we cannot reproduce this problem.
So the open questions are:
Maybe this is expected behavior of Cassandra when the replication factor is 1?
If it's not expected, then what could be the potential reason?
UPDATE 1:
The following code is used to insert a column:
UUID uuid = <some uuid>;
short shortV = <some short>;
int intVal = <some int>;
String strVal = <some string>;

ColumnFamily<UUID, Composite> statisticsCF = ColumnFamily.newColumnFamily(
    "STATISTICS",
    UUIDSerializer.get(),
    CompositeSerializer.get()
);
MutationBatch mb = keyspace.prepareMutationBatch();
ColumnListMutation<Composite> clm = mb.withRow(statisticsCF, uuid);
clm.putColumn(new Composite(shortV, intVal, strVal, null), true);
mb.execute();
UPDATE 2:
Proceeding with testing/investigating.
When we caught this situation again, we immediately stopped (killed) our Java apps, and could then consistently see in cqlsh that the particular row does not contain the inserted column.
To make it appear, first we tried nodetool flush on every Cassandra node:
pssh -h cnodes.txt /path-to-cassandra/bin/nodetool flush
Result: the same, the column did not appear.
Then we just restarted the Cassandra cluster and the column appeared.
UPDATE 3:
We tried disabling the Cassandra row cache by setting the row_cache_size_in_mb property to 0 (before, it was 2 GB):
row_cache_size_in_mb: 0
After that, the problem was gone.
So the problem is probably in OHCProvider, which is used as the default cache provider.
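If the row cache is indeed the culprit, a lighter check than a full cluster restart (an assumption on our side, we have not tried it yet) would be to invalidate only the row cache on every node and re-read the row:
pssh -h cnodes.txt /path-to-cassandra/bin/nodetool invalidaterowcache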

How can I get an id for a new record in a generic way? (JOOQ 3.4 with Postgres)

With jOOQ 3.4 I can't figure out how to do this (with PostgreSQL):
Query query = dsl.insertInto(TABLE)
    .set(TABLE.ID, Sequences.TABLE_ID_SEQ.nextval());
but for a case where I don't know the exact table in advance, something like this:
TableImpl<?> tableImpl;
Query query = dsl.insertInto(tableImpl)
    .set(tableImpl.getIdentity(), tableImpl.getIdentity().getSequence().nextval());
Is it somehow possible?
I tried this:
dsl.insertInto(tableImpl)
    .set(DSL.field("id"),
         tableImpl.getSchema().getSequence("table_id_seq").nextval())
This works but I still don't know how to get the sequence name from the TableImpl object.
Is there a solution for this? Or is there a problem with my approach?
In plain SQL I would do this:
insert into table_A (id) values (nextval('table_A_id_seq'));
insert into table_B (table_A_id, some_val) VALUES (currval('table_A_id_seq'), some_val);
So I need the value of, or a reference to, the id that was generated as a default for the inserted record, for later use, but I don't want to set any other values.
jOOQ currently doesn't have any means of associating a table with the sequence that is implicitly used for its identity column. The reason for this is that the sequence is generated when the table is created, but it isn't formally connected to that table.
Usually, you don't have to explicitly set the serial value of a column in a PostgreSQL database. It is generated automatically on insert. In terms of DDL, this means:
CREATE TABLE tablename (
    colname SERIAL
);
is equivalent to specifying:
CREATE SEQUENCE tablename_colname_seq;
CREATE TABLE tablename (
    colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);
ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;
The above is taken from:
http://www.postgresql.org/docs/9.3/static/datatype-numeric.html#DATATYPE-SERIAL
In other words, just leave out the ID values from the INSERT statements.
"Empty" INSERT statements
Note that if you want to create an "empty" INSERT statement, i.e. a statement where you pass no values at all and a new row with a generated ID is created, you can use the DEFAULT VALUES clause.
With SQL
INSERT INTO tablename DEFAULT VALUES
With jOOQ
DSL.using(configuration)
   .insertInto(TABLENAME)
   .defaultValues()
   .execute();
Returning IDs
Note that PostgreSQL has native support for an INSERT .. RETURNING clause, which is also supported by jOOQ:
With SQL
INSERT INTO tablename (...) VALUES (...) RETURNING ID
With jOOQ
DSL.using(configuration)
   .insertInto(TABLENAME, ...)
   .values(...)
   .returning(TABLENAME.ID)
   .fetchOne();
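To make that last step concrete, a small sketch of reading the generated value out of the record that fetchOne() returns; TablenameRecord and SOME_COLUMN are hypothetical placeholders in the spirit of TABLENAME above:

// fetchOne() returns the inserted record with the requested column populated.
TablenameRecord record = DSL.using(configuration)
    .insertInto(TABLENAME, TABLENAME.SOME_COLUMN) // SOME_COLUMN: hypothetical non-ID column
    .values("some value")
    .returning(TABLENAME.ID)
    .fetchOne();
Integer id = record.getValue(TABLENAME.ID);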

How to get the serial id just after inserting a row? [duplicate]

This question already has answers here:
How to get a value from the last inserted row? [duplicate]
(14 answers)
Closed 9 years ago.
I have a table with a column id (a primary key) whose default is set by a serial sequence in PostgreSQL. I insert into this table by calling
session.getCurrentSession().createSQLQuery("some insert query")
without adding any value for id, as it is filled in by the serial default.
How can I retrieve the id of the just-inserted row?
JDBC statements can return the generated keys. For instance, if the table has a single serial column id (probably the PK) that is not mentioned in the INSERT SQL below, the generated value for this column can be obtained as:
PreparedStatement s = connection.prepareStatement(
    "INSERT INTO my_table (c, d) VALUES (1, 2)",
    Statement.RETURN_GENERATED_KEYS);
s.executeUpdate();
ResultSet keys = s.getGeneratedKeys();
keys.next(); // position on the first (and only) row of generated keys
int id = keys.getInt(1);
This is faster than sending a second query later to obtain the sequence value or the max column value. Also, depending on circumstances, those two other solutions may not be thread safe.
Since it is serial you can use select max(id) from tableName
Using max(id) is a very bad idea. It will not give you the correct result in case of multiple concurrent transactions. The only correct way is to use currval() or the RETURNING clause.
In PostgreSQL (there is already a Stack Overflow question about this, BTW):
INSERT INTO tableName(id, name) VALUES(DEFAULT, 'bob') RETURNING id;
(also)
Get a specific sequence:
SELECT currval('name_of_your_sequence');
Get the last value from the last sequence used:
SELECT lastval();
Manual: http://www.postgresql.org/docs/current/static/functions-sequence.html
For PHP-MySQL users:
From php.net (click here):
<?php
$link = mysqli_connect('localhost', 'mysql_user', 'mysql_password');
if (!$link) {
    die('Could not connect: ' . mysqli_connect_error());
}
mysqli_select_db($link, 'mydb');
mysqli_query($link, "INSERT INTO mytable (product) values ('kossu')");
printf("Last inserted record has id %d\n", mysqli_insert_id($link));
?>
But you need to connect for every query.
Use SELECT currval(...), typically in conjunction with pg_get_serial_sequence:
SELECT currval(pg_get_serial_sequence('tablename', 'id'));
postgreSQL function for last inserted ID
