Csv Jdbc : ResultSet.getRow() unsupported - java

How do I enable ResultSet.getRow() in CsvJdbc?
(this is a function that is supposed to return the current row number)
It appears to be dependent on an isScrollable member. If anyone has encountered this before, how do you work around it?
Is it a property I need to set in the Properties object passed in?
Would I need to "sanitize" or somehow modify my CSV files in any way?
Thanks!
More Info
An application I use has the capability of importing data from any JDBC source. I need to get some data from CSV files into it, hence I'm using CsvJdbc. This application needs to access the row numbers of each line of data it imports, and unfortunately CsvResultSet#getRow() throws an exception, complaining that "Csv Jdbc : ResultSet.getRow() unsupported".
The following is the implementation of the getRow() method in CsvJdbc (1.0.5):
/**
 * Retrieves the current row number. The first row is number 1, the
 * second number 2, and so on.
 *
 * @return the current row number; <code>0</code> if there is no current row
 * @exception SQLException if a database access error occurs
 */
public int getRow() throws SQLException {
    if (this.isScrollable == ResultSet.TYPE_SCROLL_SENSITIVE) {
        return currentRow;
    } else {
        throw new UnsupportedOperationException(
                "ResultSet.getRow() unsupported");
    }
}
Looking through the rest of the source, it seems the isScrollable member is only ever set in the constructor, where it receives a default value.

Have you tried creating a scrollable statement...
Statement stmt = connection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
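For what it's worth, a minimal sketch of that full flow. Note that the getRow() source quoted above compares isScrollable against ResultSet.TYPE_SCROLL_SENSITIVE, so that is the type requested here. The driver class name and URL prefix are CsvJdbc's; the CSV directory path and file name are assumptions, and whether the 1.0.5 driver honours the requested type depends on its CsvConnection implementation:
import java.sql.*;

public class CsvJdbcRowDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.relique.jdbc.csv.CsvDriver");
        // the directory containing sample.csv (hypothetical path)
        try (Connection conn = DriverManager.getConnection("jdbc:relique:csv:/data/csv");
             Statement stmt = conn.createStatement(
                     ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = stmt.executeQuery("SELECT * FROM sample")) {
            while (rs.next()) {
                // getRow() should now report the 1-based row number
                System.out.println(rs.getRow() + ": " + rs.getString(1));
            }
        }
    }
}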

Related

Java driver 3.0: get distinct values from a column in MongoDB

I am really struggling here and have looked at other questions but just can't seem to get the answer I need.
What I am trying to do is pull all the unique values of a column, iterate through them, and add them to an array, ending up with that one column stored in my array with one entry per distinct value rather than the duplicates I currently get.
Every time I try to call .distinct it asks me for the result class. I have tried many different classes but it just doesn't seem to work... The code is below; any help would be appreciated.
public static void MediaInteraction() {
    //Storing data from MediaInteraction in MediaArray
    //BasicDBObject Query = new BasicDBObject();
    //Query.put("consumerid", "");
    MongoCursor<Document> cursormedia = collectionmedia.distinct("consumerid", (What do I put here?)).iterator();
    while (cursormedia.hasNext()) {
        System.out.println(cursormedia.next());
        MediasessionID.add(cursormedia.next());
    }
    System.out.println("Media Array Complete");
    System.out.println(MediasessionID.size());
}
The change you probably want to introduce is something like this:
MongoCursor<Document> cursormedia = collectionmedia.distinct("consumerid",
        <ConsumerId-DataType>.class).iterator(); //replace with the consumerid field's datatype here
Also from the docs -
/**
 * Gets the distinct values of the specified field name.
 *
 * @param fieldName the field name
 * @param resultClass the class to cast any distinct items into.
 * @param <TResult> the target type of the iterable.
 * @return an iterable of distinct values
 * @mongodb.driver.manual reference/command/distinct/ Distinct
 */
<TResult> DistinctIterable<TResult> distinct(String fieldName, Class<TResult> resultClass);
So in your example, if you are trying to obtain a cursor of Document, you probably want to use Document.class in the code suggested above.
Edit: Also, because you are calling cursormedia.next() twice per iteration, your MediasessionID list ends up with only half the results (and the loop can skip past the end of the cursor). Call .next() once per iteration and reuse the value.
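Putting both fixes together, a minimal sketch of the corrected loop; it assumes consumerid is stored as a String and MediasessionID is a List<String>:
MongoCursor<String> cursormedia =
        collectionmedia.distinct("consumerid", String.class).iterator();
while (cursormedia.hasNext()) {
    String id = cursormedia.next(); // call next() exactly once per iteration
    System.out.println(id);
    MediasessionID.add(id);
}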

Using JDBC to call a PL/SQL stored procedure with custom type input parameter, all fields are null

I'm using JDBC with createStruct() to call a stored procedure on an Oracle database that accepts a custom type as a parameter. The stored procedure inserts the custom type fields into a table and when I SELECT from the table later I see that all the fields that I tried to insert are NULL.
The custom type looks like this:
type record_rec as object (owner_id varchar2 (7),
target_id VARCHAR2 (8),
IP VARCHAR2 (15),
PREFIX varchar2 (7),
port varchar2 (4),
description VARCHAR2 (35),
cost_id varchar2(10))
The stored procedure looks like this:
package body "PKG_RECORDS"
IS
procedure P_ADD_RECORD (p_target_id in out VARCHAR2,
p_record_rec in record_rec)
is
l_target_id targets.target_id%TYPE;
BEGIN
Insert into targets (target_id,
owner_id,
IP,
description,
prefix,
start_date,
end_date,
cost_id,
port,
server_name,
server_code)
values (f_sequence ('TARGETS'),
p_record_rec.owner_id,
p_record_rec.ip,
p_record_rec.description,
p_record_rec.prefix,
sysdate,
to_date ('01-JAN-2050'),
p_record_rec.cost_id,
p_record_rec.port,
'test-server',
'51')
returning target_id
into p_target_id;
END;
END PKG_RECORDS;
My Java code looks something like this:
try (Connection con = m_dataSource.getConnection()) {
    ArrayList<String> ids = new ArrayList<>();
    CallableStatement call = con.prepareCall("{call PKG_RECORDS.P_ADD_RECORD(?,?)}");
    for (Record r : records) {
        call.registerOutParameter("p_target_id", Types.VARCHAR);
        call.setObject("p_record_rec",
                con.createStruct("SCHEME_ADM.RECORD_REC", new Object[] {
                        r.getTarget_id(),
                        null, // will be populated by SP
                        r.getIp(),
                        r.getPrefix(),
                        r.getPort(),
                        r.getDescription(),
                        r.getCost_id()
                }), Types.STRUCT);
        call.execute();
        ids.add(call.getString("p_target_id"));
    }
    return new QueryRunner().query(con,
            "SELECT * from TARGETS_V WHERE TARGET_ID IN (" +
                    ids.stream().map(s -> "?").collect(Collectors.joining(",")) +
                    ")",
            new BeanListHandler<Record>(Record.class),
            ids.toArray(new Object[] {})
    ).stream()
     .collect(Collectors.toList());
} catch (SQLException e) {
    throw new DataAccessException(e.getMessage());
}
Notes:
* That last part is using Apache Commons db-utils - I love their bean stream operations.
* The connection is using C3P0 connection pool - could that be related?
* Just to make it clear - it's not that the bean processor populates null values into the Record bean fields - if I use an SQL explorer to load the table (or view) directly, I can see that the fields in the database are indeed set to NULL.
There are no SQLExceptions when the process runs, or any other notice that something is wrong.
Any ideas what to check?
[Update]
After reading on Oracle Objects and SQLData mappings, I rewrote the code to use SQLData.
The Record class now implements SQLData and its writeSQL() method looks like this:
@Override
public void writeSQL(SQLOutput stream) throws SQLException {
    stream.writeString(owner_id);
    stream.writeString(target_id);
    stream.writeString(Objects.isNull(ip) ? "0" : ip); // weird, but as specified
    stream.writeString(prefix);
    stream.writeString(String.valueOf(port));
    stream.writeString(description);
    stream.writeString(cost_id);
}
Then at the start of the calling code, I've added:
con.getTypeMap().put("SCHEME_ADM.RECORD_REC", Record.class);
And instead of using createStruct(), the setObject() call now looks simply like this:
call.setObject("p_record_rec", r, Types.STRUCT)
But the result is the same - no errors and all the passed values are read as NULL. I've traced through the writeSQL() implementation and I can see that it is called and all values are passed correctly into the Oracle code. I've tried to use Types.JAVA_OBJECT in the setObject() call, and got an error: Invalid column type.
[Update 2]
Bordering on insane helplessness, I've implemented the OracleData pattern:
public class Record implements SQLData, OracleData, OracleDataFactory {
    ...
    @Override
    public Object toJDBCObject(Connection conn) throws SQLException {
        return conn.createStruct(getSQLTypeName(), new Object[] {
                Objects.isNull(owner_id) ? "" : owner_id,
                Objects.isNull(record_id) ? "" : record_id,
                Objects.isNull(ip) ? "0" : ip,
                Objects.isNull(prefix) ? "" : prefix,
                String.valueOf(port),
                Objects.isNull(description) ? "" : description,
                Objects.isNull(cost_id) ? "" : cost_id
        });
    }

    @Override
    public OracleData create(Object jdbcValue, int sqltype) throws SQLException {
        if (Objects.isNull(jdbcValue)) return null;
        LinkedList<Object> attr =
                new LinkedList<>(Arrays.asList(((OracleStruct) jdbcValue).getAttributes()));
        Record r = new Record();
        r.setOwner_id(attr.removeFirst().toString());
        r.setRecord_id(attr.removeFirst().toString());
        r.setIp(attr.removeFirst().toString());
        r.setPrefix(attr.removeFirst().toString());
        r.setPort(Integer.parseInt(attr.removeFirst().toString()));
        r.setDescription(attr.removeFirst().toString());
        r.setCost_id(attr.removeFirst().toString());
        return r;
    }

    public static OracleDataFactory getOracleDataFactory() {
        return new Record();
    }
}
Calling code:
...
// unwrap the Oracle object from C3P0 (standard JDBC v4 API)
OracleCallableStatement ops = call.unwrap(OracleCallableStatement.class);
// I'm not sure why I even need to do this - it looks exactly like
// the standard JDBC code
for (Record r : records) {
    ops.registerOutParameter(1, Types.VARCHAR);
    ops.setObject(2, r);
    ops.execute();
    ids.add(ops.getString(1));
}
...
And again, the same result: no errors, a record is created in the table, and all the provided values are NULL. I've traced through the code; the toJDBCObject() method is called correctly and passes the values correctly into createStruct().
Found the problem. Annoyingly, it's about character encoding.
If, in the toJDBCObject() implementation, I run getAttributes() on the created struct, the resulting Object[] array has every field set to "???". That looks like a character set transcoding failure (although it is weird even for that: three question marks for every field regardless of value length, including empty string values).
According to Oracle's JDBC developer guide, "Globalization Support":
The basic Java Archive (JAR) file, ojdbc7.jar, contains all the necessary classes to provide complete globalization support for:
* Oracle character sets for CHAR, VARCHAR, LONGVARCHAR, or CLOB data that is not being retrieved or inserted as a data member of an Oracle object or collection type.
* CHAR or VARCHAR data members of object and collection types for the character sets US7ASCII, WE8DEC, WE8ISO8859P1, WE8MSWIN1252, and UTF8.
To use any other character sets in CHAR or VARCHAR data members of objects or collections, you must include orai18n.jar in the CLASSPATH environment variable:
ORACLE_HOME/jlib/orai18n.jar
And my setup was using the character set "WE8ISO8859P9" (I have no idea why, what it means, or even if it is selected by the client or the server - I just dumped the STRUCT object created by the OracleData API implementation and it was there somewhere).
So when Oracle says that it does not "provide complete globalization support", they mean "all character fields will be silently converted to NULL". Hmpph.
Anyway, adding orai18n.jar to the CLASSPATH indeed fixed the problem, and now records are added correctly to the database.

Check whether a document exists using MongoDB and Java?

I have created a simple Java application as my college mini project, in which one of the modules allows users to perform operations like insert, delete, update, and search.
For validation purposes I want to display an error message to the user if he tries to delete a record which isn't present in the DB, like
"Sorry, record not found".
I have tried a try/catch block to check whether MongoDB throws an exception if the document is not found, but that didn't work. I'm new to Java and MongoDB and need help.
Here's my code of deleteActionPerformed and of what I tried:
private void deleteActionPerformed(java.awt.event.ActionEvent evt) {
    try {
        // my collection name is activity
        DBCollection col = db.getCollection("activity");
        // Tid is the TextField in which I am taking input of _id
        if (!Tid.getText().equals("")) {
            col.remove(new BasicDBObject().append("_id", (Object) Tid.getText()));
        } else {
            JOptionPane.showMessageDialog(null, "Please Enter the ID");
        }
    } catch (Exception e) {
        JOptionPane.showMessageDialog(null, "Record not Found " + e);
    }
}
The try/catch block does not generate a not-found kind of exception.
This may not be the most efficient method, but it ought to work.
I adapted it from some code of mine looking for a particular document value (other than _id).
There may be a specialized method for _id.
/**
 * Checks if an activity exists with a given id. If no such activity exists,
 * returns false. Returns true for one or more activities with a matching id.
 *
 * @param db
 * @param id
 * @return boolean - true if one or more activities with a matching id exist.
 */
public static boolean activityExists(MongoDatabase db, ObjectId id) {
    FindIterable<Document> iterable = db.getCollection("activity")
            .find(new Document("_id", id));
    return iterable.first() != null;
}
EDIT: It seems that it is best to use the count method. Please refer to the following answer:
How to check if document exists in collection using mongo Java driver 3.0+
In your case, it is significantly faster to use find() + limit() because findOne() will always read + return the document if it exists. find() just returns a cursor (or not) and only reads the data if you iterate through the cursor.
So instead of:
db.collection.findOne({_id: "myId"}, {_id: 1})
you should use:
db.collection.find({_id: "myId"}, {_id: 1}).limit(1)
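Translated to the 3.0+ Java driver, a minimal sketch of the find() + limit(1) existence check; the collection name "activity" is carried over from the question, and the id parameter type is an assumption:
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public static boolean documentExists(MongoDatabase db, Object id) {
    // find() only creates a cursor; limit(1) + first() reads at most one document
    Document first = db.getCollection("activity")
            .find(Filters.eq("_id", id))
            .limit(1)
            .first();
    return first != null;
}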

Java: how to check in SQLite whether the value of a variable "varConsult" exists in column "protocolo" of table "pessoajuridica"

I tried this:
ResultSet existetabela = stm.executeQuery("SELECT * FROM pessoajuridica WHERE protocolo = " + varConsult);
System.out.println(existetabela);
but it only returns a strange string -> org.sqlite.RS@1f959518
I was expecting the value.
Remember: SQLite and Java :S
I want to use the returned value for a comparison: if anything comes back, the record exists and should not be added to the database; if nothing comes back, it can be added!
("if exists" doesn't work for me; it says it is an invalid argument in the SQL command line --')
You can use the ResultSet#next() method to test whether any result was returned:
if (existetabela.next()) {
    // Result was fetched
    // Assuming type of protocol is String (can be anything)
    String protocol = existetabela.getString("protocolo");
} else {
    // No result
}
Now, let's move on to the major issue. You should use PreparedStatement to save yourself from SQL injection.
You need to iterate through the result set to retrieve the actual data that was found:
while (existetabela.next()) {
    System.out.println(existetabela.getObject("protocolo"));
}
Did you look at PreparedStatement?
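To illustrate, a minimal sketch of the same existence check with a PreparedStatement; the Connection variable conn and the text type of the protocolo column are assumptions:
String sql = "SELECT protocolo FROM pessoajuridica WHERE protocolo = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, varConsult); // bound as a parameter, no injection risk
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            // a matching row exists -- do not insert
        } else {
            // no match -- safe to insert
        }
    }
}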

java.sql.SQLException: Numeric Overflow while using IN operator

I implemented a Java application which queries a database based on given set of ids using the query:
select * from STUDENT where ID in (?)
The set of ids will be used to replace ?. However, occasionally, I receive an exception:
Caused by: java.sql.SQLException: Numeric Overflow
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:263)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:271)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:445)
at oracle.jdbc.driver.NumberCommonAccessor.throwOverflow(NumberCommonAccessor.java:4319)
at oracle.jdbc.driver.NumberCommonAccessor.getInt(NumberCommonAccessor.java:187)
at oracle.jdbc.driver.OracleResultSetImpl.getInt(OracleResultSetImpl.java:712)
at oracle.jdbc.driver.OracleResultSet.getInt(OracleResultSet.java:434)
After some testing, I realized that if I divide the list of ids into many smaller sub-lists, the exception stops happening. For some reason, JDBC doesn't like putting too many values into IN (?). I wonder if anyone has seen this issue before and has an explanation for it? As this issue never happens in the production environment but only in a local one (which has less powerful resources), I suspect it has something to do with the server's resources.
Thanks
Update: the source code that I'm using is:
// create a query
private String getQueryString(int numOfParams) {
    StringBuilder out = new StringBuilder();
    out.append("select * from STUDENT where ID in (");
    for (int i = 0; i < numOfParams; i++) {
        if (i == numOfParams - 1) {
            out.append("?");
        } else {
            out.append("?, ");
        }
    }
    out.append(")");
    return out.toString();
}
// set parameters
private void setParams(PreparedStatement ps, Set<String> params) throws SQLException {
    int index = 1;
    for (String param : params) {
        ps.setString(index++, param);
    }
}

public void queryStudent(Connection conn, Set<String> ids) throws Exception {
    String query = this.getQueryString(ids.size());
    PreparedStatement ps = conn.prepareStatement(query);
    this.setParams(ps, ids);
    ps.executeQuery();
    // do some operations with the result
}
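For reference, a minimal sketch of the sub-list workaround described above, reusing getQueryString(); the batch size of 1000 is an assumption (Oracle also caps IN-list expressions at 1000, error ORA-01795):
public void queryStudentsInBatches(Connection conn, List<String> ids) throws SQLException {
    final int batchSize = 1000;
    for (int from = 0; from < ids.size(); from += batchSize) {
        List<String> batch = ids.subList(from, Math.min(from + batchSize, ids.size()));
        try (PreparedStatement ps = conn.prepareStatement(getQueryString(batch.size()))) {
            int index = 1;
            for (String id : batch) {
                ps.setString(index++, id);
            }
            try (ResultSet rs = ps.executeQuery()) {
                // process this batch's results
            }
        }
    }
}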
The issue was caused by a conflict between the ojdbc drivers in GlassFish and in the application. In order to fix it, I needed to:
* Update the application's pom.xml (as I'm using Maven) to use the latest ojdbc, which is ojdbc6-11.2.0.3
* Add ojdbc6-11.2.0.3 to the GlassFish lib
* If necessary, manually remove the ojdbc jar from deployed applications' lib in GlassFish (apparently this is not cleared by undeploy)
Did you check MySQL and/or JDBC max packet size setting? That usually bites you with large IN (...) lists.
This occurs when the ID property, or some other integer-typed property of your entity, is too small for the value. Look at your stack trace:
at oracle.jdbc.driver.NumberCommonAccessor.getInt(NumberCommonAccessor.java:187)
at oracle.jdbc.driver.OracleResultSetImpl.getInt(OracleResultSetImpl.java:712)
at oracle.jdbc.driver.OracleResultSet.getInt(OracleResultSet.java:434)
A value returned by the query does not fit in this property!
Change the Integer properties to wider integer types (long, Long, BigInteger) in all Integer-typed fields of your entity.
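For illustration, the kind of entity change being suggested; the field name is hypothetical:
// before: overflows for NUMBER values larger than Integer.MAX_VALUE
private Integer id;

// after: a wider type accommodates the full range returned by Oracle
private Long id;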
