I am using jOOQ to load CSV data into my DB.
If I provide a string value such as "String Value" for an int column, the value is not inserted into the DB, but no error is thrown either.
How do I know whether the upload failed or not? How should I handle this type of exception? In addition, is there any way to check for, or raise a warning on, a string supplied for an int column?
version : 3.8.x
try (Connection connection = getConnection()) {
    DSLContext create = DSL.using(connection, SQLDialect.MYSQL);
    create.loadInto(Tables.PROCESS_QUEUE_MAP)
          .loadCSV(new File("/my/folder/testInput.csv"))
          .fields(Tables.PROCESS_QUEUE_MAP.PROCESS_QUEUE_ID,
                  Tables.PROCESS_QUEUE_MAP.PROCESS_NAME,
                  Tables.PROCESS_QUEUE_MAP.QUEUE_NAME,
                  Tables.PROCESS_QUEUE_MAP.MARKEPTLACE,
                  Tables.PROCESS_QUEUE_MAP.QUEUE_TYPE,
                  Tables.PROCESS_QUEUE_MAP.CREATED_BY,
                  Tables.PROCESS_QUEUE_MAP.CREATED_TIME,
                  Tables.PROCESS_QUEUE_MAP.LAST_MODIFIED_BY,
                  Tables.PROCESS_QUEUE_MAP.LAST_MODIFIED_TIME)
          .execute();
} catch (Exception ex) {
    ex.printStackTrace();
}
General failure handling with the Loader API
The loader API by default throws all kinds of exceptions that are raised from the underlying database or JDBC driver. This can be configured and overridden by specifying:
LoaderOptionsStep.onErrorAbort() (default)
LoaderOptionsStep.onErrorIgnore()
This only affects JDBC errors, not data loading "errors"
jOOQ's auto-conversion
For historic reasons, throughout the jOOQ API, the automatic conversion between data types is "lenient" rather than "fail-fast". All data type conversion passes through the Convert utility, which returns null when a data type conversion fails. E.g. when calling Convert.convert(Object, Class), the following test will pass:
assertNull(Convert.convert("abc", int.class));
This has been criticised in the past, but cannot be changed easily in the jOOQ API due to backwards compatibility.
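The lenient semantics can be illustrated with a small, self-contained sketch in plain Java. This mirrors the behavior described above for Convert.convert(Object, Class); it is not jOOQ's actual implementation:

```java
public class LenientConvertDemo {
    // Plain-Java sketch of the lenient semantics described above: a failed
    // conversion yields null instead of throwing (this is NOT jOOQ's code).
    public static Integer lenientToInt(Object value) {
        try {
            return Integer.valueOf(value.toString());
        } catch (NumberFormatException e) {
            return null; // lenient: swallow the failure
        }
    }

    public static void main(String[] args) {
        System.out.println(lenientToInt("42"));  // prints 42
        System.out.println(lenientToInt("abc")); // prints null
    }
}
```

This is exactly why the loader ends up writing null (or nothing) instead of failing when a string lands in an int column.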
Workarounds include:
Parsing the CSV content yourself
Passing an Object[][] to LoaderSourceStep.loadArrays()
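For the first workaround, here is a minimal sketch of parsing and validating the int column yourself before loading. It is plain Java with a naive comma split (no quoting support); the column index and row format are assumptions to adapt to your schema:

```java
import java.util.ArrayList;
import java.util.List;

public class CsvPreValidator {
    // Parse CSV lines and fail fast if the given column does not hold an int,
    // instead of silently loading nothing. Naive comma split: no quoting
    // or escaping support.
    public static List<String[]> parseAndValidate(List<String> csvLines, int intColumn) {
        List<String[]> rows = new ArrayList<>();
        for (int i = 0; i < csvLines.size(); i++) {
            String[] fields = csvLines.get(i).split(",");
            try {
                Integer.parseInt(fields[intColumn].trim());
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException(
                        "Row " + (i + 1) + ": not an int: " + fields[intColumn]);
            }
            rows.add(fields);
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("1,processA,queueA", "2,processB,queueB");
        System.out.println(parseAndValidate(lines, 0).size()); // prints 2
    }
}
```

The validated rows can then be passed to LoaderSourceStep.loadArrays() as described above.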
Related
I have an Oracle procedure that takes an input CLOB and returns an output CLOB.
When I try to retrieve the value, I can reach the object; calling toString() on it gives "oracle.sql.CLOB@625a8a83". But whenever I try to read the object's content, no matter how, I always get a connection-closed exception.
In my code:
MapSqlParameterSource parametros = new MapSqlParameterSource();
// setting input parameter
parametros.addValue("PE_IN", new SqlLobValue("IN DATA CLOB", new DefaultLobHandler()),
Types.CLOB);
// Executing call
Map<String, Object> out = jdbcCall.execute(parametros);
salida.setDatosRespuesta(out.get("PS_OUT").toString());
If I change the last line to this:
Clob clob = (Clob) out.get("PS_OUT");
long len = clob.length();
String rtnXml = clob.getSubString(1, (int) len);
I get the connection-closed error. I have tried several approaches and can't solve this problem. Any ideas?
I think you are using Spring's SimpleJdbcCall. If so, the database configuration uses the Oracle driver's defaults, and you need to increase the read timeout for the connection. Check the DatabaseMetaData documentation, and also the OracleConnection property CONNECTION_PROPERTY_THIN_READ_TIMEOUT_DEFAULT. This happens because you are reading a large value from the database; remember that a CLOB can hold up to 4 GB of data.
You also need to keep in mind that, if this process is very common in your application, you should consider the number of connections to the database, so that connections are always available and your application's availability is guaranteed.
Regarding out.get("PS_OUT").toString(): this only shows the identity hash that represents your object, which is why that line appears to work fine.
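For reference, here is the Clob read pattern the question uses, demonstrated against an in-memory SerialClob from the JDK. No Oracle connection is involved, so nothing can close underneath it; with a real Oracle CLOB locator, the read must happen while the connection is still open:

```java
import java.sql.Clob;
import javax.sql.rowset.serial.SerialClob;

public class ClobReadDemo {
    // Read a Clob's entire content; note that Clob positions are 1-based.
    public static String readClob(Clob clob) throws Exception {
        return clob.getSubString(1, (int) clob.length());
    }

    public static void main(String[] args) throws Exception {
        Clob clob = new SerialClob("X2ARB".toCharArray());
        System.out.println(readClob(clob)); // prints X2ARB
    }
}
```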
I want to handle exceptions thrown from a query (find(...).first()) to MongoDB (driver 3.7) in Java (the database is not stored locally). However, no possible exceptions are named in the Javadocs, nor in the MongoDB documentation itself. Can there really be no exceptions? I doubt that, because network errors, for example, could surely occur.
My queries look something like this:
final MongoCollection<Document> collection = database.getCollection("my-collection");
final Bson bsonFilter = Filters.eq("someName", "test");
final Document result = collection.find(bsonFilter).first();
Consider the following code. It connects to a MongoDB instance locally and gets a collection named "test" from the database named "users".
final String connectionStr = "mongodb://localhost/";
MongoClient mongoClient = MongoClients.create(connectionStr);
MongoDatabase database = mongoClient.getDatabase("users");
MongoCollection<Document> collection = database.getCollection("test");
If you provide a wrong host name for the connectionStr value, like "mongodb://localhostXYZ/" (and no such host exists), the code will throw an exception like:
com.mongodb.MongoSocketException: localhostXYZ
Caused by: java.net.UnknownHostException: localhostXYZ
...
com.mongodb.MongoSocketException is a MongoDB Java driver exception. It is a runtime exception. It is also a sub-class of MongoException. From the MongoDB Java API:
public class MongoException extends RuntimeException
Top level Exception for all Exceptions, server-side or client-side, that come
from the driver.
The documentation also lists the following sub-classes (all are runtime exceptions):
MongoChangeStreamException, MongoClientException, MongoExecutionTimeoutException, MongoGridFSException, MongoIncompatibleDriverException, MongoInternalException, MongoInterruptedException, MongoServerException, MongoSocketException.
So, all the exceptions thrown by the MongoDB Java driver APIs are runtime exceptions. These are, in general, not meant to be caught and handled (but a runtime exception can, of course, be caught and handled with try-catch).
Let us consider your code:
final MongoCollection<Document> collection = database.getCollection("my-collection");
final Bson bsonFilter = Filters.eq("someName", "test");
final Document result = collection.find(bsonFilter).first();
The first statement, database.getCollection("my-collection"), looks up a collection named "my-collection" when it runs.
If you want to make sure the collection exists in the database, verify it using listCollectionNames() and check that the collection name is in the returned list. If the collection name doesn't exist, you can throw an exception (if you want to): that is, if you want to tell the user or the application that there was no such collection named "my-collection", you can show or print a message saying so (and then abort the program), or throw a runtime exception with an appropriate message.
So, the code might look like this:
boolean exists = false;
for (String name : database.listCollectionNames()) {
    if (name.equals("my-collection")) {
        exists = true;
        break;
    }
}
if (!exists) {
    // print something and abort the program, or:
    throw new RuntimeException("No collection named \"my-collection\"");
}
// else continue with program execution
Your code final Document result = collection.find(bsonFilter).first(); needs a closer look: collection.find(bsonFilter) returns a FindIterable<Document>, not a Document, and first() then returns the first matching document, or null when nothing matches. So the query outcome can be determined by examining the result; there may be a document or none, and the find method itself doesn't throw any exceptions.
Based on whether any document is returned, you can show a message to the client. This is not a case where you throw an exception.
We have an Oracle database with the following charset settings
SELECT parameter, value FROM nls_database_parameters WHERE parameter like 'NLS%CHARACTERSET'
NLS_NCHAR_CHARACTERSET: AL16UTF16
NLS_CHARACTERSET: WE8ISO8859P15
In this database we have a table with a CLOB field, which has a record that starts with the following string, stored obviously in ISO-8859-15: X²ARB (shown here correctly converted to Unicode; in particular, the superscript two is important and correct).
Then we have the following trivial piece of code to get the value out, which is supposed to automatically convert the charset to unicode via globalization support in Oracle:
private static final String STATEMENT = "SELECT data FROM datatable d WHERE d.id=2562456";
public static void main(String[] args) throws Exception {
Class.forName("oracle.jdbc.driver.OracleDriver");
try (Connection conn = DriverManager.getConnection(DB_URL);
ResultSet rs = conn.createStatement().executeQuery(STATEMENT))
{
if (rs.next()) {
System.out.println(rs.getString(1).substring(0, 5));
}
}
}
Running the code prints:
with ojdbc8.jar and orai18n.jar: X�ARB -- incorrect
with ojdbc7.jar and orai18n.jar: X�ARB -- incorrect
with ojdbc-6.jar: X²ARB -- correct
By using UNISTR and changing the statement to SELECT UNISTR(data) FROM datatable d WHERE d.id=2562456 I can bring ojdbc7.jar and ojdbc8.jar to return the correct value, but this would require an unknown number of changes to the code as this is probably not the only place where the problem occurs.
Is there anything I can do to the client or server configurations to make all queries return correctly encoded values without statement modifications?
It definitely looks like a bug in the JDBC thin driver (I assume you're using thin). It could be related to LOB prefetch, where the CLOB's length, character set ID, and the first part of the LOB data are sent in-band. This feature was introduced in 11.2. As a workaround, you can disable LOB prefetch by setting the connection property
oracle.jdbc.defaultLobPrefetchSize
to "-1". Meanwhile, I'll follow up on this bug to make sure that it gets fixed.
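Setting the property might look like this (a sketch: the actual connection requires the Oracle JDBC driver on the classpath, so that line is shown commented out):

```java
import java.util.Properties;

public class LobPrefetchWorkaround {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Disable LOB prefetch, as described above.
        props.setProperty("oracle.jdbc.defaultLobPrefetchSize", "-1");
        // Connection conn = DriverManager.getConnection(DB_URL, props); // needs ojdbc on the classpath
        System.out.println(props.getProperty("oracle.jdbc.defaultLobPrefetchSize")); // prints -1
    }
}
```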
Please have a look at Database JDBC Developer's Guide - Globalization Support
The basic Java Archive (JAR) file ojdbc7.jar contains all the
necessary classes to provide complete globalization support for:
CHAR or VARCHAR data members of object and collection for the character sets US7ASCII, WE8DEC, WE8ISO8859P1, WE8MSWIN1252, and UTF8.
To use any other character sets in CHAR or VARCHAR data members of
objects or collections, you must include orai18n.jar in the CLASSPATH
environment variable:
ORACLE_HOME/jlib/orai18n.jar
I have a Java project where I call a stored procedure using Hibernate.
Here is the sample code I am using:
public String findCloudName(Long cloudId) {
LOG.info("Entered findCloudName Method - cloudId:{}", cloudId);
String cloudName = null;
ProcedureCall procedureCall = currentSession().createStoredProcedureCall("p_getCloudDetails");
procedureCall.registerParameter( "cloudId", Long.class, ParameterMode.IN ).bindValue( cloudId );
procedureCall.registerParameter( "cloudName", String.class, ParameterMode.OUT );
ProcedureOutputs outputs = procedureCall.getOutputs();
cloudName = (String) outputs.getOutputParameterValue( "cloudName" );
LOG.info("Exiting findCloudName Method - cloudName:{}", cloudName);
return cloudName;
}
This works fine and returns the expected results.
However, it leaves the following message in my logs:
[WARN] [320]org.hibernate.procedure.internal.ProcedureCallImpl[prepareForNamedParameters] - HHH000456: Named parameters are used for a callable statement, but database metadata indicates named parameters are not supported.
I was looking through websites and the Hibernate source code to try to figure out how to get rid of this warning.
Any help is greatly appreciated.
Cheers
Damien
It's related to the fix for HHH-8740:
In some cases, the database metadata is incorrect (i.e., says something is not supported, when it actually is supported). In this case it would be preferable to simply log a warning. If it turns out that named parameters really are not supported, then a SQLException will be thrown later when the named parameter is bound to the SQL statement.
So, the warning is simply telling you that the driver's metadata reports named parameters as unsupported; Hibernate logs it and tries them anyway, and only fails later with a SQLException if they genuinely aren't supported.
I am trying to run a SQL query using Hive as the underlying data store. The query invokes a BigDecimal function and throws the following error:
Method not supported at
org.apache.hadoop.hive.jdbc.HivePreparedStatement.setBigDecimal(HivePreparedStatement.java:317)
That is simply because Hive does not support this method, as its implementation shows:
public void setBigDecimal(int parameterIndex, BigDecimal x) throws SQLException {
// TODO Auto-generated method stub
throw new SQLException("Method not supported");
}
Please suggest other workarounds or fixes available to counter such an issue.
The original Hive JDBC driver only supported a few of the JDBC interfaces; see HIVE-48: Support JDBC connections for interoperability between Hive and RDBMS. So the commit left auto-generated "not supported" stubs for interfaces like CallableStatement or PreparedStatement.
With HIVE-2158: add the HivePreparedStatement implementation based on current HIVE supported data-type, some of the methods were fleshed out (see the commit), but types like Blob, AsciiStream, binary stream and, notably, BigDecimal were not added. When HIVE-2158 was resolved (2011-06-15), support for DECIMAL was not yet in Hive; it came with HIVE-2693: Add DECIMAL data type, on 2013-01-17. When support for DECIMAL was added, it looks like the JDBC driver interface was not updated.
So basically, the JDBC driver needs to be updated with the newly supported types. You should file a JIRA for this. Workaround: don't use DECIMAL, or don't use PreparedStatement.
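If dropping PreparedStatement is not an option, a hypothetical workaround is to render the BigDecimal as a plain string and bind it with setString() instead of the unsupported setBigDecimal() (this assumes Hive will accept and cast the string literal; verify against your Hive version):

```java
import java.math.BigDecimal;

public class DecimalBindWorkaround {
    // Render a BigDecimal without scientific notation so it can be bound
    // via setString() in place of the unsupported setBigDecimal().
    public static String toPlainLiteral(BigDecimal value) {
        return value.toPlainString(); // "1E+2" becomes "100", not "1E+2"
    }

    public static void main(String[] args) {
        System.out.println(toPlainLiteral(new BigDecimal("1E+2")));  // prints 100
        System.out.println(toPlainLiteral(new BigDecimal("12.34"))); // prints 12.34
        // With a real Hive connection (not shown), the binding would be:
        // ps.setString(parameterIndex, toPlainLiteral(amount));
    }
}
```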
I had a similar issue with the .setObject method, but it was resolved after updating to version 1.2.1.
.setBigDecimal is currently not implemented; here is the implementation of the class. However, the .setObject method currently contains a line like this, which in fact handles the case:
if (value instanceof BigDecimal) {
    st.setString(valueIndex, value.toString());
}
This worked for me, but you can lose precision without any warning!
In general, it seems that MetaModel supports DECIMAL poorly. If you get all the columns with a statement like this:
Column[] columnNames = table.getColumns();
and one of the columns is DECIMAL, you'll notice that there is no information about its precision.