While executing the code below:
KeyScanCursor<String> cursor = syncCommands.scan(ScanArgs.Builder.limit(50).match(match));
List<String> values = null;
while (!cursor.isFinished()) {
    for (String key : cursor.getKeys()) {
        values = syncCommands.lrange(key, 0, 50);
    }
    cursor = syncCommands.scan(cursor, ScanArgs.Builder.limit(50).match(match));
}
I get an empty result. But when executing the command below:
redis-cli --cluster call 127.0.0.1:30001 SCAN 0 MATCH "orgId:EC:resetPasswordExpiryHours"
I get the expected result:
127.0.0.1:30003: 22
orgId:EC:resetPasswordExpiryHours
Could someone help me understand why the above code is not working?
The last iteration of your scan is ignored: once the cursor is finished, you exit the loop without processing the keys returned by the final scan call.
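A minimal sketch of a corrected loop (assuming the same syncCommands handle and match pattern as in the question), which processes each batch of keys, including the final one, before checking isFinished():

KeyScanCursor<String> cursor = syncCommands.scan(ScanArgs.Builder.limit(50).match(match));
List<String> values = new ArrayList<>();
while (true) {
    for (String key : cursor.getKeys()) {
        // accumulate results instead of overwriting them on every key
        values.addAll(syncCommands.lrange(key, 0, 50));
    }
    if (cursor.isFinished()) {
        break; // the final batch has already been processed above
    }
    cursor = syncCommands.scan(cursor, ScanArgs.Builder.limit(50).match(match));
}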
I have a .txt file with, for example, this content:
variable1="hello";
variable2="bye";
testing3="parameter";
whatisthis4="hello";
var5="exampletext";
example=3;
wellthen=8;
---
It read the file in, line by line, just fine until I added a way of saving the data.
This whole code, plus another reader (with other variable names, of course), is wrapped in a try-catch statement.
String path_playlist = new File("").getAbsolutePath();
String fileName_playlist = path_playlist
        + "/src/dancefusion/game/playlist.txt";
FileReader fr_playlist = new FileReader(fileName_playlist);
BufferedReader br_playlist = new BufferedReader(fr_playlist);

int track_counter = track_sum*9;
String trackinfos[] = new String[track_counter];
while(track_counter < 0)
{
    System.out.println("linecount="+track_counter);
    trackinfos[track_counter] = br_playlist.readLine();
    System.out.println(trackinfos[track_counter]);
    track_counter--;
}
System.out.println(Arrays.toString(trackinfos));
In this example track_sum equals 1.
The while loop should read in the file one line at a time, but it only reads nulls:
[null, null, null, null, null, null, null, null, null]
Update 1:
The while-condition was set up the wrong way... thanks!
The corrected version:
while(track_counter > 0)
However, now it gives me an "ArrayIndexOutOfBoundsException: 9".
Any guesses?
Final Update:
As mentioned by @GiorgiMoniava, I just needed to reduce track_counter by one before starting to read, since Java arrays begin at index 0. Thanks!
int track_counter = track_sum*8;
String trackinfos[] = new String[track_counter];
track_counter--;
while(track_counter >= 0)
{
    System.out.println("linecount="+track_counter);
    trackinfos[track_counter] = br_playlist.readLine();
    System.out.println(trackinfos[track_counter]);
    track_counter--;
}
Maybe one of you can figure out what I did wrong...
Of course I can deliver more information/code if needed!
Thanks in advance!
This looks weird:
while(track_counter < 0)
Are you sure the loop is ever entered? My guess (from your output) is that track_counter is 9.
About your array-out-of-bounds exception: if you create an array of size N, you can only access it using indexes 0 through N-1.
Try this:
while(track_counter > 0)
{
    System.out.println("linecount="+track_counter);
    track_counter--;
    trackinfos[track_counter] = br_playlist.readLine();
    System.out.println(trackinfos[track_counter]);
}
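As an aside, here is a sketch of an approach that avoids the index arithmetic entirely by collecting the lines in a List (assuming the same br_playlist reader from the question):

List<String> trackinfos = new ArrayList<String>();
String line;
while ((line = br_playlist.readLine()) != null) {
    // readLine() returns null at end of file, so this loop reads
    // exactly as many lines as the file contains
    trackinfos.add(line);
}
System.out.println(trackinfos);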
I have a Java application which reads files and writes to an Oracle DB row by row.
We have come across a strange error during batch insert which does not occur during sequential insert. The error is strange because it occurs only with IBM JDK7 on the AIX platform, and I get it on a different row every time. My code looks like this:
prpst = conn.prepareStatement(query);
while ((line = bf.readLine()) != null) {
    numLine++;
    batchInsert(prpst, line);
    //onebyoneInsert(prpst, line);
}

private static void batchInsert(PreparedStatement prpst, String line) throws IOException, SQLException {
    prpst.setString(1, "1");
    prpst.setInt(2, numLine);
    prpst.setString(3, line);
    prpst.setString(4, "1");
    prpst.setInt(5, 1);
    prpst.addBatch();
    if (++batchedLines == 200) {
        prpst.executeBatch();
        batchedLines = 0;
        prpst.clearBatch();
    }
}

private static void onebyoneInsert(PreparedStatement prpst, String line) throws Exception {
    int batchedLines = 0;
    prpst.setString(1, "1");
    prpst.setInt(2, numLine);
    prpst.setString(3, line);
    prpst.setString(4, "1");
    prpst.setInt(5, 1);
    prpst.executeUpdate();
}
I get this error in batch insert mode:
java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10345)
I already know why this ORA error normally occurs, but that is not my case: I am nearly sure that I am not binding overly large data to a smaller column. Maybe I am hitting a bug in IBM JDK7, but I could not prove it.
My question is whether there is a way I can avoid this problem. One-by-one insert is not an option, because we have big files and it takes too much time.
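One thing worth noting about the snippet as posted: rows accumulated after the last full batch of 200 are never flushed. A minimal sketch of a final flush after the read loop (using the same prpst and batchedLines as above):

while ((line = bf.readLine()) != null) {
    numLine++;
    batchInsert(prpst, line);
}
// flush whatever remains in the final, partial batch
if (batchedLines > 0) {
    prpst.executeBatch();
    prpst.clearBatch();
    batchedLines = 0;
}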
Try with:
prpst.setInt(5, new Integer(1));
What is the type of the variable numLine?
Can you share the types of the columns corresponding to the fields you set on the PreparedStatement?
Try processing once with onebyoneInsert and share the output for that case; it might help identify the root cause.
Also print the value of numLine to the console.
In summary, we have run into this weird behavior when doing concurrent updates on an existing document that is not part of the working set (i.e., not in resident memory).
More details:
Given a collection with a unique index, when running concurrent updates (3 threads) with upsert set to true on a given existing document, 1 to 2 of the threads raise the following exception:
Processing failed (Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$key_1 dup key: { : 1008 }'):
According to the documentation, I would expect all three updates to succeed, because the document I am trying to update already exists. Instead, it looks like it is attempting an insert for some or all of the update requests, and some fail due to the unique index.
Repeating the same concurrent update on the document does not raise any exceptions. Also, using find() on a document to bring it into the working set and then running the concurrent updates on that document works as expected.
Likewise, using findAndModify with the same query and settings does not exhibit the problem.
Is this working as expected or am I missing something?
Setup:
- MongoDB Java driver 3.0.1
- 3-node replica set running MongoDB version "2.6.3"
Query:
BasicDBObject query = new BasicDBObject();
query.put("docId", 123L);
collection.update(query, object, true, false);
Index:
name: docId_1
unique: true
key: {"docId":1}
background: true
Updated on May 28 to include sample code to reproduce the issue.
Run MongoDB locally as follows (note that the test will write about 4 GB of data):
./mongodb-osx-x86_64-2.6.10/bin/mongod --dbpath /tmp/mongo
Run the following code, restart the database, comment out the line fillUpCollection(testMongoDB.col1, value, 0, 300);, then run the code again. Depending on the machine, you may need to tweak some of the numbers to see the exceptions.
package test;

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;
import com.mongodb.MongoClient;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class TestMongoDB {
    public static final String DOC_ID = "docId";
    public static final String VALUE = "value";
    public static final String DB_NAME = "db1";
    public static final String UNIQUE = "unique";
    public static final String BACKGROUND = "background";
    private DBCollection col1;
    private DBCollection col2;

    private static DBCollection getCollection(Mongo mongo, String collectionName) {
        DBCollection col = mongo.getDB(DB_NAME).getCollection(collectionName);
        BasicDBObject index = new BasicDBObject();
        index.append(DOC_ID, 1);
        DBObject indexOptions = new BasicDBObject();
        indexOptions.put(UNIQUE, true);
        indexOptions.put(BACKGROUND, true);
        col.createIndex(index, indexOptions);
        return col;
    }

    private static void storeDoc(String docId, DBObject doc, DBCollection dbCollection) throws IOException {
        BasicDBObject query = new BasicDBObject();
        query.put(DOC_ID, docId);
        dbCollection.update(query, doc, true, false);
        //dbCollection.findAndModify(query, null, null, false, doc, false, true);
    }

    public static void main(String[] args) throws Exception {
        final String value = new String(new char[1000000]).replace('\0', 'a');
        Mongo mongo = new MongoClient("localhost:27017");
        final TestMongoDB testMongoDB = new TestMongoDB();
        testMongoDB.col1 = getCollection(mongo, "col1");
        testMongoDB.col2 = getCollection(mongo, "col2");
        fillUpCollection(testMongoDB.col1, value, 0, 300);
        //restart the database, comment out the previous line, and run again
        fillUpCollection(testMongoDB.col2, value, 0, 2000);
        updateExistingDocuments(testMongoDB, value);
    }

    private static void updateExistingDocuments(TestMongoDB testMongoDB, String value) {
        List<String> docIds = new ArrayList<String>();
        for (int i = 0; i < 10; i++) {
            docIds.add(new Random().nextInt(300) + "");
        }
        multiThreadUpdate(testMongoDB.col1, value, docIds);
    }

    private static void multiThreadUpdate(final DBCollection col, final String value, final List<String> docIds) {
        Runnable worker = new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("Started Thread");
                    for (String id : docIds) {
                        storeDoc(id, getDbObject(value, id), col);
                    }
                } catch (Exception e) {
                    System.out.println(e);
                } finally {
                    System.out.println("Completed");
                }
            }
        };
        for (int i = 0; i < 8; i++) {
            new Thread(worker).start();
        }
    }

    private static DBObject getDbObject(String value, String docId) {
        final DBObject object2 = new BasicDBObject();
        object2.put(DOC_ID, docId);
        object2.put(VALUE, value);
        return object2;
    }

    private static void fillUpCollection(DBCollection col, String value, int from, int to) throws IOException {
        for (int i = from; i <= to; i++) {
            storeDoc(i + "", getDbObject(value, i + ""), col);
        }
    }
}
Sample Output on the second run:
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
Started Thread
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "290" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "170" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "241" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "127" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "120" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "91" }'
Completed
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: db1.col1.$docId_1 dup key: { : "136" }'
Completed
Completed
This looks like a known issue with MongoDB, at least up to version 2.6. Their recommended fix is to have your code retry the upsert on error.
https://jira.mongodb.org/browse/SERVER-14322
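A minimal retry sketch along those lines, applied to the question's storeDoc helper (an assumption: a single retry suffices for this race; production code might prefer a bounded retry loop):

private static void storeDocWithRetry(String docId, DBObject doc, DBCollection dbCollection) {
    BasicDBObject query = new BasicDBObject(DOC_ID, docId);
    try {
        dbCollection.update(query, doc, true, false);
    } catch (com.mongodb.DuplicateKeyException e) {
        // the racing thread has inserted the document by now, so the
        // retry finds and updates it instead of attempting an insert
        dbCollection.update(query, doc, true, false);
    }
}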
Your query is too specific and does not find the document even after it has been created, e.g. because it searches on more than just the unique field. The upsert then tries to create the document a second time (from another thread), which fails because it actually exists but was not found. Please see http://docs.mongodb.org/manual/reference/method/db.collection.update/#upsert-behavior for more details.
Boiled down from the docs: to avoid inserting the same document more than once, only use upsert: true if the query field is uniquely indexed.
Use update operators like $set so that your query document is included in the upserted document.
If you feel that this isn't the case for you, please provide us with the query and some information about your index.
Update:
If you try to run your code from the CLI, you'll see the following:
> db.upsert.ensureIndex({docid:1},{unique:true})
{
"createdCollectionAutomatically" : true,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
> db.upsert.update({"docid":123},{one:1,two:2},true,false)
WriteResult({
"nMatched" : 0,
"nUpserted" : 1,
"nModified" : 0,
"_id" : ObjectId("55637413ad907a45eec3a53a")
})
> db.upsert.find()
{ "_id" : ObjectId("55637413ad907a45eec3a53a"), "one" : 1, "two" : 2 }
> db.upsert.update({"docid":123},{one:1,two:2},true,false)
WriteResult({
"nMatched" : 0,
"nUpserted" : 0,
"nModified" : 0,
"writeError" : {
"code" : 11000,
"errmsg" : "insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.upsert.$docid_1 dup key: { : null }"
}
})
You have the following issue: you want to update the document but don't find it. Since your update contains no update operators, your docid field is not included in the newly created document (or rather, it is set to null, and null can only appear once in a unique index, too).
The next time you try to update your document, you still don't find it, because of the previous step. So MongoDB tries to insert it, following the same procedure as before, and fails again: no second null is allowed.
Simply change your update as follows, using $set so that in the upsert case your query fields are included in the new document:
> db.upsert.update({"docid":123},{$set:{one:1,two:2}},true,false)
WriteResult({
"nMatched" : 0,
"nUpserted" : 1,
"nModified" : 0,
"_id" : ObjectId("5562164f0f63858bf27345f3")
})
> db.upsert.find()
{ "_id" : ObjectId("5562164f0f63858bf27345f3"), "docid" : 123, "one" : 1, "two" : 2 }
> db.upsert.update({"docid":123},{$set:{one:1,two:2}},true,false)
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 0 })
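For reference, here is a sketch of the same $set fix applied to the question's storeDoc helper (assuming the same DBCollection API used in the question):

private static void storeDoc(String docId, DBObject doc, DBCollection dbCollection) {
    BasicDBObject query = new BasicDBObject(DOC_ID, docId);
    // wrapping the document in $set keeps the query field in the
    // upserted document, avoiding the duplicate-null race described above
    dbCollection.update(query, new BasicDBObject("$set", doc), true, false);
}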
As you can see in this code, I want to get all the information about my friends on Twitter, the people I follow.
But doing this:
PagableResponseList<User> users = twitter.getFriendsList(USER_ID, CURSOR);
...only gives me the 20 most recent friends. What can I do?
The complete code:
PagableResponseList<User> users = twitter.getFriendsList(USER_ID, CURSOR);
User user = null;
max = users.size();
System.out.println("Following: "+max);
for (int i = 0; i < users.size(); i++) {
    user = users.get(i);
    System.out.print("\nID: "+user.getId()+" / User: "+user.getName()+" /");
    System.out.print("\nFollowers: "+user.getFollowersCount()+"\n");
    tid.add(Long.parseLong(String.valueOf(user.getId())));
    tusername.add(user.getName());
    tfollowers.add(Long.parseLong(String.valueOf(user.getFollowersCount())));
    tname.add(user.getScreenName());
}
Thanks..
You can try this code to page through the list of people you follow:

long cursor = -1;
PagableResponseList<User> users;
while (cursor != 0) {
    users = twitter.getFriendsList(userId, cursor);
    cursor = users.getNextCursor();
}
I've taken a peek at the documentation of Twitter4J and of Twitter itself, and it's all about that cursor.
To avoid loading you down with a whole bunch of friends at once, Twitter only returns the first 20 results. It doesn't return just those 20 results, though; it also returns a cursor. That cursor is just a number managed by Twitter. When you make the call again and pass that cursor along, the next 20 entries (friends) are returned, again with a new cursor. You can repeat this until the returned cursor is zero, which means there are no more entries available.
In case you want to know more, check these two links: Twitter DEV and the Twitter4J documentation.
Concerning your Java: you just need to read the cursor from each response and pass it to the method again, making the app load the next 20 entries. Based on that, the following should do the trick.
List<User> allUsers = new ArrayList<User>();
PagableResponseList<User> users;
long cursor = -1;
while (cursor != 0) {
    users = twitter.getFriendsList(USER_ID, cursor);
    cursor = users.getNextCursor();
    allUsers.addAll(users); // add the whole page, not the page object itself
}
You should be able to request up to 200 results at a time:
final PagableResponseList<User> users = twitter.getFriendsList(USER_ID, cursor, 200);
cursor = users.getNextCursor();
If you need to start from where you left off between invocations of your program then you need to store the value of cursor somewhere.
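A hypothetical sketch of persisting the cursor between runs (the file name friends.cursor and the plain-text format are assumptions for illustration):

// Resume paging from a cursor saved by a previous run.
File cursorFile = new File("friends.cursor");
long cursor = -1L; // -1 starts from the beginning
if (cursorFile.exists()) {
    try (BufferedReader in = new BufferedReader(new FileReader(cursorFile))) {
        cursor = Long.parseLong(in.readLine().trim());
    }
}
PagableResponseList<User> users = twitter.getFriendsList(USER_ID, cursor, 200);
cursor = users.getNextCursor();
// save the cursor so the next invocation can pick up here
try (PrintWriter out = new PrintWriter(new FileWriter(cursorFile))) {
    out.println(cursor);
}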
Improvements to Sander's answer!
You can pass a count value to the getFriendsList method, as in Jonathan's answer. The maximum value allowed for count is 200, and the loop construct will now collect more than 200 friends: 200 friends per page, i.e. per iteration.
Yet there are rate limits on every request you make. The getFriendsList method uses the GET friends/list API endpoint, which has a rate limit of 15 hits per 15 minutes. Each hit can fetch a maximum of 200 friends, which equates to a total of 3000 friends (15 x 200 = 3000) per 15 minutes. So there is no problem if you have at most 3000 friends; if you have more, an exception will be thrown. You can use the RateLimitStatus class to avoid that exception. The following code is an example implementation.
Method 1: fetchFriends(long userId)
public List<User> fetchFriends(long userId) {
    List<User> friends = new ArrayList<User>();
    PagableResponseList<User> page;
    long cursor = -1;
    try {
        while (cursor != 0) {
            page = twitter.getFriendsList(userId, cursor, 200);
            friends.addAll(page);
            System.out.println("Total number of friends fetched so far: " + friends.size());
            cursor = page.getNextCursor();
            this.handleRateLimit(page.getRateLimitStatus());
        }
    } catch (TwitterException e) {
        e.printStackTrace();
    }
    return friends;
}
Method 2: handleRateLimit(RateLimitStatus rls)
private void handleRateLimit(RateLimitStatus rls) {
    int remaining = rls.getRemaining();
    System.out.println("Rate Limit Remaining: " + remaining);
    if (remaining == 0) {
        int resetTime = rls.getSecondsUntilReset() + 5;
        int sleep = (resetTime * 1000);
        try {
            if (sleep > 0) {
                System.out.println("Rate Limit Exceeded. Sleep for " + (sleep / 1000) + " seconds..");
                Thread.sleep(sleep);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
By doing so, your program will sleep for a period based on the rate-limiting threshold and continue from where it left off after the sleep. This way the program won't stop midway through collecting more than 3000 friends.
I have the solution to my post... thanks to Sander, who gave me some ideas.
The fix was to change the for loop to while ((CURSOR = ids.getNextCursor()) != 0).
And... user = twitter.showUser(id);
Playing with showUser makes it possible to get, albeit slowly, all the info about all my friends...
That's all. Don't use users.get(i);
The query executes fine, but there is a problem with JPA. Please help me find what is wrong.
The query works well if I execute it separately in an Oracle SQL client, but it raises an error when run from the application.
I am getting the exception on this line, according to Eclipse:
return genDODetails(pSess.executeQuery(rq));
Query:
SELECT DOD.ID AS DODID,ORDD.ID AS ORDDID,DOD.SHIPQTY AS DOQTY,DOD.BUYERCODE AS BUYERCODE,DOD.BUYERPARTNUM AS BUYERPARTNUM,
DOD.BUYERPARTDESC AS BUYERPARTDESC,DOD.LINENUM AS DOLINENUM,DOD.LINEREVNUM AS LINEREVNUM,
DOD.LINEINDICATIOR AS LINEINDICATOR,DOD.ORDLINENUM AS ORDLINENUM,ORDD.SHIPQTY AS SHIPPEDQTY,
DOD.RSPREMARK1 AS SUPREMARK,DOD.ORDNUM AS PONUM
FROM RDT_DELIVERYORDERDETAIL DOD,RDT_ORDERDETAIL ORDD ,RDT_ORDER ORDM
WHERE ORDD.LATEST =1
AND ORDM.LATEST =1
AND ORDM.ID = ORDD.ORDID
AND ORDD.RESPSTR1 ='EP'
AND ORDD.LINENUM = DOD.ORDLINENUM
AND ORDM.DOCNUM = DOD.ORDNUM
AND DOD.LATEST =1
AND CONTROLLERID =(SELECT ID FROM RDT_ORGANIZATION WHERE OUCODE ='yes' AND PARENTID IS NULL)
AND DOD.DOID = 72
ORDER BY DODID DESC;
JPA Execution Code:
public List<DODetail> getDODetails(Map<String,Object> hparams) throws Exception
{
    String sqlQuery = pSess.getSQLString4NamedQuery("DO_VIEW_DETAIL");
    String doid = hparams.get("DOID") != null ? (String) hparams.get("DOID") : "";
    Hashtable<String,Object> dbparams = new Hashtable();
    dbparams.put(":DOID", doid);
    sqlQuery = (String) pSess.getParamQuery(sqlQuery, dbparams);
    ReportQuery rq = new ReportQuery();
    rq.setReferenceClass(DODetail.class);
    rq.addAttribute("DODID");
    rq.addAttribute("ORDDID");
    rq.addAttribute("DOQTY");
    rq.addAttribute("BUYERCODE");
    rq.addAttribute("BUYERPARTNUM");
    rq.addAttribute("BUYERPARTDESC");
    rq.addAttribute("DOLINENUM");
    rq.addAttribute("LINEREVNUM");
    rq.addAttribute("LINEINDICATOR");
    rq.addAttribute("ORDLINENUM");
    rq.addAttribute("SHIPPEDQTY");
    rq.addAttribute("SUPREMARK");
    rq.addAttribute("PONUM");
    rq.setSQLString(sqlQuery);
    return genDODetails(pSess.executeQuery(rq));
}
private List<DODetail> genDODetails(Object obj) throws Exception
{
    if (obj == null) return null;
    List newList = (List) obj;
    Iterator it = newList.iterator();
    List<DODetail> doDetails = new ArrayList<DODetail>();
    while (it.hasNext())
    {
        ReportQueryResult rs = (ReportQueryResult) it.next();
        DODetail order = new DODetail();
        order.setId(rs.get("DODID") != null ? ((BigDecimal) rs.get("DODID")).longValue() : new Long(0));
        order.setRefid(rs.get("ORDDID") != null ? ((BigDecimal) rs.get("ORDDID")).longValue() : new Long(0));
        order.setShipqty(rs.get("DOQTY") != null ? (BigDecimal) rs.get("DOQTY") : new BigDecimal(0));
        order.setBuyercode(rs.get("BUYERCODE") != null ? (String) rs.get("BUYERCODE") : "");
        order.setBuyerpartnum(rs.get("BUYERPARTNUM") != null ? (String) rs.get("BUYERPARTNUM") : "");
        order.setCategory(rs.get("BUYERPARTDESC") != null ? (String) rs.get("BUYERPARTDESC") : "");
        order.setLinenum(rs.get("DOLINENUM") != null ? ((BigDecimal) rs.get("DOLINENUM")).longValue() : new Long(0));
        order.setLinerevnum(rs.get("LINEREVNUM") != null ? (String) rs.get("LINEREVNUM") : "");
        order.setLineindicator(rs.get("LINEINDICATOR") != null ? (String) rs.get("LINEINDICATOR") : "");
        order.setOrdlinenum(rs.get("ORDLINENUM") != null ? ((BigDecimal) rs.get("ORDLINENUM")).toPlainString() : "");
        order.setAssignqty(rs.get("SHIPPEDQTY") != null ? (BigDecimal) rs.get("SHIPPEDQTY") : new BigDecimal(0));
        order.setRspremark1(rs.get("SUPREMARK") != null ? (String) rs.get("SUPREMARK") : "");
        order.setOrdnum(rs.get("PONUM") != null ? (String) rs.get("PONUM") : "");
        doDetails.add(order);
    }
    return doDetails;
}
Make sure you don't have a trailing semicolon in your query string before execution.
It is not needed, and it is one of the possible reasons for this error.
The semicolon is not required when you send a query to the database through a JDBC/OCI driver; only interactive clients such as SQL*Plus need it, as the signal to push the SQL to the database engine for execution.
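A minimal sketch of stripping it defensively before the SQL is handed to the ReportQuery (using the sqlQuery variable from the question):

sqlQuery = sqlQuery.trim();
if (sqlQuery.endsWith(";")) {
    // the JDBC driver does not expect the statement terminator
    sqlQuery = sqlQuery.substring(0, sqlQuery.length() - 1);
}
rq.setSQLString(sqlQuery);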