I have been using the open source dataset provider Casper to get an in-memory representation of a collection of database objects in Java.
GitHub repository: https://github.com/casperds/casperdatasets
Below is the code I have been using to pull data into Casper datasets:
String[] primaryKeys = { "QUESTION_ID" };
if (resultSet != null)
{
    // Load the JDBC result set into a Casper cache container
    container = CDataCacheDBAdapter.loadData(resultSet, null, primaryKeys,
            new HashMap<Object, Object>());
    lCDataRowset = container.getAll();
    preparedStatement.close();
    resultSet.close();
}
The problem with using this is: if I don't specify primary keys, the DBAdapter does not load any data. And if I do specify a column as the primary key, then the ORDER BY clause in my query has no effect on the dataset; it is simply ordered by the primary key.
I want to be able to pull data into the dataset in the order specified in my query.
Has anybody faced this issue? Any kind of help is appreciated! Thanks
Well, it turned out to be a very simple issue. If you pass null for the primaryKeys parameter, the data is returned in the same order as the MySQL query produces it.
I thought this could help someone someday, which is why I'm keeping this post; otherwise I would have deleted it.
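For reference, a minimal sketch of the corrected call as described above, using the same variables as in the question; the only change is passing null for the primaryKeys argument:

if (resultSet != null)
{
    // With null primary keys the rows keep the order produced by the query,
    // so the ORDER BY clause is respected
    container = CDataCacheDBAdapter.loadData(resultSet, null, null,
            new HashMap<Object, Object>());
    lCDataRowset = container.getAll();
    preparedStatement.close();
    resultSet.close();
}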
I'm trying to emulate the query SELECT * FROM namespace.set WHERE pk = "something" through Aerospike's Java client. I know that we can query a secondary index through a Filter and create a PredExp for other predicates, but I'm unable to figure out how to query on a primary key.
Any help would be appreciated. Thanks a lot in advance.
Edit: I have multiple bins in my set, if that makes any difference.
I figured it out. You just have to create a new Key while querying through the Aerospike Java client:
Record record = aerospikeClient.get(null, new Key(namespace, cacheName, key), binNames);
Refer to the discussion: https://discuss.aerospike.com/t/primary-key-search/558/6
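For completeness, a minimal self-contained sketch (hypothetical host, namespace, set, and key names): a primary-key lookup with the Java client is a get() on a Key, not a query with Filter or PredExp:

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class PrimaryKeyLookup {
    public static void main(String[] args) {
        // Adjust host/port to your cluster
        try (AerospikeClient client = new AerospikeClient("localhost", 3000)) {
            // Key takes the namespace, the set name, and the user key
            Key key = new Key("myNamespace", "mySet", "something");
            Record record = client.get(null, key); // null = default read policy
            System.out.println(record);
        }
    }
}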
I am working on a monitoring tool developed in Spring Boot, using Hibernate as the ORM.
I need to compare each row (already-persisted rows of sent messages) in my table and see whether a MailId (unique) has received a feedback (status: OPENED, BOUNCED, DELIVERED...) or not.
I get the feedbacks by reading CSV files from a network folder. The parsing and reading of the CSV files goes very fast, but the update of my database is very slow. My algorithm is not very efficient, because I loop through a list that can have hundreds of thousands of objects and look each one up in my table.
This is the method that performs the update on my table by updating the "target" object (a row in the database table):
@Override
public void updateTargetObjectFoo() throws CSVProcessingException, FileNotFoundException {
    // performProcessing reads the files in a folder, parses them into Java
    // objects, and maps them into a feedback list of type Foo
    List<Foo> feedBackList = performProcessing(env.getProperty("foo_in"),
            EXPECTED_HEADER_FIELDS_STATUS, Foo.class, ".LETTERS.STATUS.");
    for (Foo foo : feedBackList) {
        // findByKey does a simple SELECT in MySQL where MailId = foo.getMailId()
        Foo persistedFoo = fooDao.findByKey(foo.getMailId());
        if (persistedFoo != null) {
            persistedFoo.setStatus(foo.getStatus());
            persistedFoo.setDnsCode(foo.getDnsCode());
            persistedFoo.setReturnDate(foo.getReturnDate());
            persistedFoo.setReturnTime(foo.getReturnTime());
            // saveAccount performs a MySQL UPDATE on the table
            fooDao.saveAccount(persistedFoo);
        }
    }
}
What if I performed this selection/comparison and update on the Java side, and then re-updated the whole list in the database?
Would it be faster?
Thanks to all for your help.
Hibernate is not particularly well-suited for batch processing.
You may be better off using Spring's JdbcTemplate to do JDBC batch processing.
However, if you must do this via Hibernate, this may help: https://docs.jboss.org/hibernate/orm/5.2/userguide/html_single/chapters/batch/Batching.html
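To illustrate the JdbcTemplate route, here is a minimal sketch of a batched UPDATE. The table and column names are hypothetical, and the Foo getters are taken from the question; batchUpdate groups the statements into JDBC batches instead of issuing one round trip per row:

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

public class FooBatchUpdater {

    private final JdbcTemplate jdbcTemplate;

    public FooBatchUpdater(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // One UPDATE per feedback row, sent to MySQL in batches of 1000
    public void updateFeedback(List<Foo> feedBackList) {
        String sql = "UPDATE foo SET status = ?, dns_code = ?, "
                + "return_date = ?, return_time = ? WHERE mail_id = ?";
        jdbcTemplate.batchUpdate(sql, feedBackList, 1000, (ps, foo) -> {
            ps.setString(1, foo.getStatus());
            ps.setString(2, foo.getDnsCode());
            ps.setObject(3, foo.getReturnDate()); // setObject: the date/time types are not shown in the question
            ps.setObject(4, foo.getReturnTime());
            ps.setString(5, foo.getMailId());
        });
    }
}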
As I mentioned in the mail subject, I am having a problem with the non-mutable map inside BasicDynaBean.
As far as I know, this is the default behaviour of this map.
What I would like to do is simply retrieve the result set from the DB, which will create a list of DynaBeans.
For viewing the database table everything works fine; the problem occurs when I try to edit it, and I get the following exception:
Caused by: javax.el.PropertyNotWritableException
at javax.el.MapELResolver.setValue(MapELResolver.java:267)
at com.sun.faces.el.DemuxCompositeELResolver._setValue(DemuxCompositeELResolver.java:255)
at com.sun.faces.el.DemuxCompositeELResolver.setValue(DemuxCompositeELResolver.java:281)
at com.sun.el.parser.AstValue.setValue(AstValue.java:201)
at com.sun.el.ValueExpressionImpl.setValue(ValueExpressionImpl.java:291)
at com.sun.faces.facelets.el.TagValueExpression.setValue(TagValueExpression.java:131)
... 50 more
I assume this is because the map inside the DynaBean is not mutable.
I think one option is to change the default behaviour of the map by editing the source code of the BeanUtils library.
On the other hand, the implementors of this library must have thought of this functionality somehow...
Below is the code snippet that I use for retrieving the result set as DynaBeans:
String query = "SELECT * FROM test.a";
Statement stmt = (Statement) con.createStatement();
ResultSet rs = stmt.executeQuery(query);
RowSetDynaClass rsdc = new RowSetDynaClass(rs);
rs.close();
stmt.close();
dynaObjectList = rsdc.getRows();
I tried to use LazyDynaMap as well; editing the table worked fine, but the map didn't allow me to put multiple rows, since the key (the property name) is not unique across datasets.
I would really appreciate any hints you can suggest.
Thanks in advance.
Best regards,
Ercan CANLIER
It turned out that, by default, BasicDynaBean exposes a non-mutable map for its properties. I wrapped each bean in a modifiable map via DynaBeanPropertyMapDecorator and rebuilt it as a mutable LazyDynaMap, which solved the problem. Hope this helps someone else:
setDecoratedDynaObjectList(rsdc.getRows());
Iterator<DynaBean> it = decoratedDynaObjectList.iterator();
while (it.hasNext()) {
    BasicDynaBean dynaBean = (BasicDynaBean) it.next();
    // readOnly = false, so the decorator map is writable
    Map<String, Object> modifiableMap = new DynaBeanPropertyMapDecorator(dynaBean, false);
    // A LazyDynaMap built from that map behaves as a mutable DynaBean
    DynaBean mutableDynaBean = new LazyDynaMap(modifiableMap);
    modifiableDynaObjectList.add(mutableDynaBean);
}
I am getting the exception below when trying to insert a batch of rows into an existing table:
ORA-00942: table or view does not exist
I can confirm that the table exists in the DB and that I can insert data into it using Oracle SQL Developer. But when I try to insert rows using a PreparedStatement in Java, it throws the "table does not exist" error.
Please find the stack trace of the error below:
java.sql.SQLException: ORA-00942: table or view does not exist
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:573)
at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1889)
at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1093)
at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2047)
at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:1940)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2709)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:589)
at quotecopy.DbConnection.insertIntoDestinationDb(DbConnection.java:591)
at quotecopy.QuoteCopier.main(QuoteCopier.java:72)
Can anyone suggest the reason for this error?
Update: Issue solved
There was no problem with my database connection properties or with my table or view name. The solution to the problem was very strange: one of the columns I was trying to insert was of CLOB type. As I had had a lot of trouble handling CLOB data in Oracle before, I gave it a try by replacing the CLOB setter with a temporary String setter, and the same code executed without any problems; all the rows were inserted correctly.
i.e. preparedStatement.setClob(columnIndex, clob)
was replaced with
preparedStatement.setString(columnIndex, "String")
Why was a "table or view does not exist" error thrown for a problem inserting CLOB data? Could anyone please explain?
Thanks a lot for your answers and comments.
Oracle will also report this error if the table exists, but you don't have any privileges on it. So if you are sure that the table is there, check the grants.
There seems to be some issue with setCLOB() that causes an ORA-00942 under some circumstances when the target table does exist and is correctly privileged. I'm having this exact issue now; I can make the ORA-00942 go away by simply not binding the CLOB into the same table.
I've tried setClob() with a java.sql.Clob and setCLOB() with an oracle.jdbc.CLOB, but with the same result.
As you say, if you bind it as a String the problem goes away, but that limits your data size to 4K.
From testing, it seems to be triggered when a transaction is open on the session prior to binding the CLOB. I'll feed back when I've solved this... checking Oracle support.
#unbeli is right. Not having appropriate grants on a table will result in this error. For what it's worth, I recently experienced this: I had the exact problem you describe, where I could execute insert statements through SQL Developer but they would fail when run through Hibernate. I finally realized that my code was doing more than the obvious insert; it was also inserting into other tables that did not have appropriate grants. Adjusting the grant privileges solved it for me.
Note: I don't have the reputation to comment, otherwise this would have been a comment.
We experienced this issue on a BLOB column. Just in case anyone else lands on this question when encountering this error, here is how we resolved the issue:
We started out with this:
preparedStatement.setBlob(parameterIndex, resultSet.getBlob(columnName)); break;
We resolved the issue by changing that line to this:
java.sql.Blob blob = resultSet.getBlob(columnName);
if (blob != null) {
    // Stream the BLOB's bytes instead of binding the Blob object directly
    java.io.InputStream blobData = blob.getBinaryStream();
    preparedStatement.setBinaryStream(parameterIndex, blobData);
} else {
    preparedStatement.setBinaryStream(parameterIndex, null);
}
I found out how to solve this problem without using JDBC's setString() method, which limits the data to 4K.
What you need to do is use preparedStatement.setClob(int parameterIndex, Reader reader). At least this is what worked for me. I thought the Oracle driver converted the data to a character stream on insert, but it seems it does not, or something specific was causing the error.
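For what it's worth, a minimal sketch of that overload, assuming the CLOB content is already available as a String (clobContent and columnIndex are placeholders):

// The Reader-based overload avoids the 4K limit of setString()
preparedStatement.setClob(columnIndex, new java.io.StringReader(clobContent));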
Using a character stream seems to work for me. I am reading tables from one DB and writing to another using JDBC, and I was getting the "table not found" error just as mentioned above. This is how I solved the problem:
case Types.CLOB: // in a switch statement over all column types; this branch handles CLOB columns
    Clob clobData = resultSet.getClob(columnIndex); // read from the source DB
    if (clobData != null) {
        preparedStatement.setClob(columnIndex, clobData.getCharacterStream());
    } else {
        preparedStatement.setClob(columnIndex, clobData);
    }
    clobData = null;
    return;
All good now.
Is your script providing the schema name, or do you rely on the user logged into the database to select the default schema?
It might be that you are not naming the schema and that you run your batch as a system user instead of the schema user, resulting in the wrong execution context for a script that would work fine if executed by the user that has the target schema set as its default schema. Your best option is to include the schema name in the insert statements:
INSERT INTO myschema.mytable (mycolumns) VALUES ('myvalue')
Update: are you trying to bind the table name as a bound value in your prepared statement? That won't work.
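To spell that out with a short sketch (hypothetical schema, table, and column names): the schema and table must appear in the SQL text itself, and only column values can be bound as parameters:

// Identifiers cannot be bound as '?'; only values can
String sql = "INSERT INTO myschema.mytable (mycolumn) VALUES (?)";
PreparedStatement ps = connection.prepareStatement(sql);
ps.setString(1, "myvalue");
ps.executeUpdate();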
It works for me:
Clob clob1;
while (rs.next()) {
    sta.setString(1, rs.getString("FIELD_1")); // bind on the target statement, not the source ResultSet
    clob1 = rs.getClob("CLOB1");
    if (clob1 != null) {
        // Bind the CLOB as a character stream rather than as a Clob object
        sta.setClob(2, clob1.getCharacterStream());
    } else {
        sta.setClob(2, clob1);
    }
    clob1 = null;
    sta.setString(3, rs.getString("FIELD_3"));
    sta.executeUpdate(); // execute the insert for this row
}
Is it possible that you are doing INSERT for VARCHAR but doing an INSERT then an UPDATE for CLOB?
If so, you'll need to grant UPDATE permission on the table in addition to INSERT.
See https://stackoverflow.com/a/64352414/1089967
Here is how I got the solution to this question. The problem is on GlassFish, if you are using it: when you create the JNDI name, make sure the pool name is correct, i.e. it is the name of the connection pool that you created.
I've started to fiddle with MongoDB and have come up with a question.
Say I have an object (POJO) with an id field (say, named ID) that I would like to represent in JSON and store/load in/from MongoDB.
As far as I understand, any object always has an _id field (with underscore, lowercase).
What I would like is for MongoDB to return my JSON with the field ID instead of _id during the query.
In SQL I would use something like:
SELECT _id AS ID ...
My question is whether it's possible to do this in MongoDB, and if it is, a Java-based example would be really appreciated :)
I understand that it's possible to iterate over the records and substitute _id with ID manually, but I don't want that O(n) loop.
I also don't really want to duplicate the lines and store both "id" and "_id".
So I'm looking for a solution at the level of the query, or maybe of the Java driver.
Thanks in advance and have a nice day.
MongoDB doesn't use SQL; its query language is object-based and works on collections.
What you can try is something similar to the code below using the Mongo Java driver (a rough sketch of the idea, not exact driver API):
Pojo obj = new PojoInstance();
obj.setId(id);
db.yourdb.find(obj);
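For reference, a sketch of what a query on a regular field looks like with the legacy Mongo Java driver used elsewhere in this thread (DBCollection API); the collection and field names are hypothetical:

// Query by a plain field with the legacy DBObject API
DBCollection collection = db.getCollection("myCollection");
DBObject query = new BasicDBObject("ID", id); // "ID" is a hypothetical field name
DBCursor cursor = collection.find(query);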
I've ended up using the following approach with the Java driver:
DBCursor cursor = runSomeQuery();
try {
    while (cursor.hasNext()) {
        DBObject dbObject = cursor.next();
        // Replace the _id field with an ID field holding its string form
        ObjectId id = (ObjectId) dbObject.get("_id");
        dbObject.removeField("_id");
        dbObject.put("ID", id.toString());
        System.out.println(dbObject);
    }
} finally {
    cursor.close();
}
I was wondering whether this is the best solution or whether I have better options.
Mark
Here's an example of what I am doing in JavaScript; it may be helpful to you. In my case I am removing the _id field and aliasing two deeply nested fields to display simpler names:
db.players.aggregate([
{ $match: { accountId: '12345'}},
{ $project: {
"_id": 0,
"id": "$id",
"masterVersion": "$branches.master.configuration.player.template.version",
"previewVersion": "$branches.preview.configuration.player.template.version"
}
}
])
I hope you find this helpful.
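Since the question asked for a Java-based example, here is a hedged equivalent of that pipeline using the modern MongoDB Java driver's aggregation builders (an assumption on my part; the thread above used the legacy driver). Collection and field names mirror the JavaScript above:

import static com.mongodb.client.model.Aggregates.match;
import static com.mongodb.client.model.Aggregates.project;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Projections.computed;
import static com.mongodb.client.model.Projections.excludeId;
import static com.mongodb.client.model.Projections.fields;

import java.util.Arrays;
import org.bson.Document;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;

public class AliasIdExample {
    // Suppresses _id and aliases the nested fields, as in the shell pipeline
    public static void printPlayers(MongoDatabase db) {
        MongoCollection<Document> players = db.getCollection("players");
        for (Document doc : players.aggregate(Arrays.asList(
                match(eq("accountId", "12345")),
                project(fields(
                        excludeId(),
                        computed("id", "$id"),
                        computed("masterVersion",
                                "$branches.master.configuration.player.template.version"),
                        computed("previewVersion",
                                "$branches.preview.configuration.player.template.version")))))) {
            System.out.println(doc.toJson());
        }
    }
}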