twitter4j result.nextQuery() is always giving me null - java

Hello guys, I would like to ask why my code doesn't get me all the tweets I asked for in the query and just stops after the first page of results. I'm asking because the same code worked very well just six months ago.
Query query = new Query("Carolina OR flood lang:en since:2015-10-04 until:2015-10-09");
query.setCount(100);
QueryResult result;
createNewFile(contFile);
do {
    result = twitterInstance.search(query);
    List<Status> tweets = result.getTweets();
    for (Status tweet : tweets) {
        if (cont > MAX_TWEET_PER_FILE) {
            cont = 1;
            contFile++;
            writer.close();
            createNewFile(contFile);
        }
        writeToFile(cont, tweet);
        cont++;
    }
    if (result.getRateLimitStatus().getRemaining() < 1) {
        try {
            Thread.sleep(result.getRateLimitStatus().getSecondsUntilReset() * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
            throw new RuntimeException(e);
        }
    }
} while (query != null);
writer.flush();
writer.close();
System.exit(0);
So after the first iteration of the do loop, result.nextQuery() is always null, and all I get is the tweets from a few hours of Friday. I thought I wouldn't be able to obtain tweets older than a week, but this is only a day (3 days ago...).
Is there any news or update from the Twitter guys that I've missed?

Try using:
do {
    ...
} while ((query = result.nextQuery()) != null);
Your query will get the results from the next page, if it exists.
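Putting the two together, here is a minimal sketch of the full pagination loop (assuming Twitter4j 4.x and an already authenticated Twitter instance; fetchAllTweets is just a hypothetical helper name, not part of the original code):
static List<Status> fetchAllTweets(Twitter twitter, String queryString) throws TwitterException, InterruptedException {
    List<Status> all = new ArrayList<>();
    Query query = new Query(queryString);
    query.setCount(100);                            // maximum tweets per page
    QueryResult result;
    do {
        result = twitter.search(query);
        all.addAll(result.getTweets());
        RateLimitStatus limit = result.getRateLimitStatus();
        if (limit != null && limit.getRemaining() < 1) {
            // wait until the search rate-limit window resets
            Thread.sleep(limit.getSecondsUntilReset() * 1000L);
        }
    } while ((query = result.nextQuery()) != null); // null once the last page is reached
    return all;
}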

Related

Twitter4j counting the number of tweets found

Here's my method for finding a hashtag:
void getNewTweets()
{
    //try the search
    try
    {
        Query query = new Query(searchString);
        //get the last 50 tweets
        query.count(2);
        QueryResult result = twitter.search(query);
        tweets = result.getTweets();
        System.out.println(tweets);
    }
    //if there is an error then catch it and print it out
    catch (TwitterException te)
    {
        System.out.println("Failed to search tweets: " + te.getMessage());
        System.exit(-1);
    }
}
I'm using the Twitter4j library. How would I count the number of tweets found?
You can also use the QueryResult.getCount() method:
Query query = new Query(searchString);
QueryResult result = twitter.search(query);
int count = result.getCount();
System.out.println("tweet count: " + count);
Alternatively, store the returned tweets in a list and use its size to get the number of tweets:
List<Status> tweets = result.getTweets();
System.out.println(tweets.size());
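Note that both of these only reflect a single page of results; to count everything the search matches, you can page with nextQuery() and sum the page sizes. A rough sketch (assuming an authenticated Twitter instance; countAllTweets is a hypothetical helper, and rate limiting is ignored here):
static int countAllTweets(Twitter twitter, String searchString) throws TwitterException {
    int total = 0;
    Query query = new Query(searchString);
    query.setCount(100);                     // request the maximum page size
    QueryResult result;
    do {
        result = twitter.search(query);
        total += result.getTweets().size();  // add this page's tweets
    } while ((query = result.nextQuery()) != null);
    return total;
}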

Using Twitter4j, how do I get the list of favorited tweets of a particular user

I want the list of tweets that a Twitter user account has favorited.
I have written some sample code that gives me all the posts that the user posted, but I want all the tweets that the user favorited.
public List<Status> getAllTweetsOfUser(Twitter twitter, String user) {
    if (user != null && !user.trim().isEmpty()) {
        List<Status> statuses = new ArrayList<>();
        int pageno = 1;
        while (true) {
            try {
                int size = statuses.size();
                Paging page = new Paging(pageno++, 100);
                statuses.addAll(twitter.getUserTimeline(user, page));
                if (statuses.size() == size) {
                    break;
                }
            } catch (TwitterException e) {
                e.printStackTrace(); // don't swallow the exception silently
            }
        }
        return statuses;
    } else {
        return null;
    }
}
Can anyone help me with this?
You need to start the paging at 1 and then increment the page. However, note that you will be rate limited if you exceed 15 requests per 15 minutes (or 15 * 20 = 300 statuses per 15 minutes).
Paging paging = new Paging(1);
List<Status> list;
do {
    list = twitter.getFavorites(userID, paging);
    for (Status s : list) {
        //do stuff with s
        System.out.println(s.getText());
    }
    paging.setPage(paging.getPage() + 1);
} while (list.size() > 0);
One of the Twitter4J samples does exactly this.
public final class GetFavorites {
    /**
     * Usage: java twitter4j.examples.favorite.GetFavorites
     *
     * @param args message
     */
    public static void main(String[] args) {
        try {
            Twitter twitter = new TwitterFactory().getInstance();
            List<Status> statuses = twitter.getFavorites();
            for (Status status : statuses) {
                System.out.println("@" + status.getUser().getScreenName() + " - " + status.getText());
            }
            System.out.println("done.");
            System.exit(0);
        } catch (TwitterException te) {
            te.printStackTrace();
            System.out.println("Failed to get favorites: " + te.getMessage());
            System.exit(-1);
        }
    }
}
I have tried the following:
ResponseList<Status> status = twitter.getFavorites(twitterScreenName);
It gives me the favorite tweets of the user which I passed as a parameter. But the problem is that I am only able to get 20 favorites, even though the user has many more.
ResponseList<Status> status = twitter.getFavorites(twitterScreenName, paging);
I tried with paging, but I am not sure how to use it, so I am only getting the top 20 favorites using my first snippet. If anybody has tried this, please share how to get all favorites of a given user.
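Combining that attempt with the paging loop shown in the answer above gives something like this minimal sketch (assuming an authenticated twitter instance and keeping the 15-requests-per-15-minutes limit in mind; the calls throw TwitterException, so wrap them as in your own code):
ResponseList<Status> page;
List<Status> allFavorites = new ArrayList<>();
Paging paging = new Paging(1, 20);                          // page 1, 20 favorites per page
do {
    page = twitter.getFavorites(twitterScreenName, paging);
    allFavorites.addAll(page);
    paging.setPage(paging.getPage() + 1);                   // move on to the next page
} while (!page.isEmpty());
System.out.println("total favorites fetched: " + allFavorites.size());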

Streaming the result of REST API from Twitter

I'm working on the REST API of Twitter using the Twitter4J libraries, particularly the https://api.twitter.com/1.1/search/tweets.json endpoint. I am quite aware of Twitter's own Streaming API, but I don't want to use that (at least for now). I have a method that queries the /search/tweets endpoint in a do-while loop, but I want the method's results to be delivered in a streaming fashion, so that I can print them to the console as they come in instead of loading everything at once. Here's the method:
public List<Status> readTweets(String inputQuery) {
    List<Status> tweets = new ArrayList<Status>();
    int counter = 0;
    try {
        RateLimitStatus rateLimit = twitter.getRateLimitStatus().get("/search/tweets");
        int limit = rateLimit.getLimit();
        Query query = new Query(inputQuery);
        QueryResult result;
        do {
            result = twitter.search(query);
            tweets.addAll(result.getTweets());
            counter++;
        } while ((query = result.nextQuery()) != null && counter < (limit - 1));
    } catch (TwitterException e) {
        e.printStackTrace();
        System.out.println("Failed to search tweets: " + e.getMessage());
        tweets = null;
    }
    return tweets;
}
What can you suggest?
P.S. I don't want to put the console printing functionality inside this method.
Thanks.
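One way to get that behaviour, sketched below, is to hand each Status to a callback as soon as its page arrives instead of collecting everything into a list; the printing then stays with the caller. The Consumer-based signature is just one possible design (it assumes Java 8+), not something Twitter4J prescribes:
public void readTweets(String inputQuery, java.util.function.Consumer<Status> onTweet) {
    int counter = 0;
    try {
        RateLimitStatus rateLimit = twitter.getRateLimitStatus().get("/search/tweets");
        int limit = rateLimit.getLimit();
        Query query = new Query(inputQuery);
        QueryResult result;
        do {
            result = twitter.search(query);
            for (Status status : result.getTweets()) {
                onTweet.accept(status);                  // caller decides what to do with each tweet
            }
            counter++;
        } while ((query = result.nextQuery()) != null && counter < (limit - 1));
    } catch (TwitterException e) {
        e.printStackTrace();
        System.out.println("Failed to search tweets: " + e.getMessage());
    }
}
The caller can then do, for example, readTweets(inputQuery, status -> System.out.println(status.getText())); and the tweets are printed page by page as they arrive.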

How to safely use Google translate in Selenium

I am working on a project which requires translating text from different languages to English. In a day, I would have to translate nearly 5000 documents. I have written a small Selenium routine that helps me translate these documents.
Now my question is: if I use Selenium to push a large amount of data through Google Translate, will I be blocked by Google? If yes, what is the solution to avoid being blocked by Google Translate?
I have posted my code below for reference:
public static WebDriver google_translate(WebDriver driver, String filename)
{
    driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
    try {
        driver.get("http://translate.google.com/#auto/en");
        String text = "";
        text = read_contents.read_from_html(filename);
        if (text.length() < 5)
            return driver;
        // Enter the text to translate
        System.out.println("file read");
        WebElement query = driver.findElement(By.id("source"));
        query.sendKeys(text);
        WebElement query1 = driver.findElement(By.id("gt-submit"));
        query1.click();
        System.out.println("text entered");
        Date d = new Date();
        long initial = d.getTime();
        WebElement result;
        do {
            result = driver.findElement(By.id("result_box"));
            d = new Date();
        } while (result.getText().length() < 20 && (d.getTime() - initial < 15000));
        System.out.println("result fetched");
        String output = Global.prop.get(1).toString() + "/" + new File(filename).getName() + ".txt";
        output_writer.txt_writer(result.getText(), output);
    }
    catch (UnhandledAlertException e) {
        e.printStackTrace();
    }
    catch (NoSuchElementException e) {
        e.printStackTrace();
    }
    catch (UnknownServerException e) {
        e.printStackTrace();
    }
    //System.out.println(result.getText());
    return driver;
}
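There is no documented request threshold, so nobody outside Google can promise that automated traffic through translate.google.com won't be blocked; the usual mitigation is simply to keep the request rate low. A minimal sketch of a throttled driver loop (the 5-15 second pause is an arbitrary assumption, not a documented limit, and filenames is assumed to be the list of documents to translate):
Random random = new Random();
for (String filename : filenames) {
    driver = google_translate(driver, filename);
    try {
        // arbitrary randomized pause between documents to keep the request rate low
        Thread.sleep(5000 + random.nextInt(10000));
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}
For 5000 documents a day, Google's official Translate API is the supported route and sidesteps the blocking question entirely.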

Fetching articles from Liferay portal

Our goal is to fetch some of the content from Liferay Portal via SOAP services using Java. We are successfully loading articles right now with JournalArticleServiceSoap. The problem is that the method requires both group id and entry id, and what we want is to fetch all of the articles from a particular group. Hence, we are trying to get the ids first, using AssetEntryServiceSoap but it fails.
AssetEntryServiceSoapServiceLocator aesssLocator = new AssetEntryServiceSoapServiceLocator();
com.liferay.client.soap.portlet.asset.service.http.AssetEntryServiceSoap assetEntryServiceSoap = null;
URL url = null;
try {
    url = new URL(
            "http://127.0.0.1:8080/tunnel-web/secure/axis/Portlet_Asset_AssetEntryService");
} catch (MalformedURLException e) {
    e.printStackTrace();
}
try {
    assetEntryServiceSoap = aesssLocator
            .getPortlet_Asset_AssetEntryService(url);
} catch (ServiceException e) {
    e.printStackTrace();
}
if (assetEntryServiceSoap == null) {
    return;
}
Portlet_Asset_AssetEntryServiceSoapBindingStub assetEntryServiceSoapBindingStub = (Portlet_Asset_AssetEntryServiceSoapBindingStub) assetEntryServiceSoap;
assetEntryServiceSoapBindingStub.setUsername("bruno@7cogs.com");
assetEntryServiceSoapBindingStub.setPassword("bruno");
AssetEntrySoap[] entries;
AssetEntryQuery query = new AssetEntryQuery();
try {
    int count = assetEntryServiceSoap.getEntriesCount(query);
    System.out.println("Entries count: " + Integer.toString(count));
    entries = assetEntryServiceSoap.getEntries(query);
    if (entries != null) {
        System.out.println(Integer.toString(entries.length));
    }
    for (AssetEntrySoap aes : assetEntryServiceSoap.getEntries(query)) {
        System.out.println(aes.getEntryId());
    }
} catch (RemoteException e1) {
    e1.printStackTrace();
}
Although getEntriesCount() returns a positive value like 83, getEntries() always returns an empty array. I'm very new to Liferay portal, but this looks really weird to me.
By the way, we are obviously not looking for performance here, the key is just to fetch some specific content from the portal remotely. If you know any working solution your help would be much appreciated.
Normally an AssetEntryQuery would have a little more information in it, for example:
AssetEntryQuery assetEntryQuery = new AssetEntryQuery();
assetEntryQuery.setClassNameIds(new long[] { ClassNameLocalServiceUtil.getClassNameId("com.liferay.portlet.journal.model.JournalArticle") });
assetEntryQuery.setGroupIds(new long[] { groupId });
So this would return all AssetEntries for the groupId you specify, that are also JournalArticles.
Try this and see, although as you say, the Count method returns a positive number so it might not make a difference, but give it a go! :)
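Wiring that into the SOAP client from the question would look roughly like the sketch below (assuming the SOAP-generated AssetEntryQuery exposes the same setters; JOURNAL_ARTICLE_CLASS_NAME_ID and groupId are placeholders you have to fill in for your installation, e.g. by looking the classNameId up in the ClassName_ table):
AssetEntryQuery query = new AssetEntryQuery();
// restrict the query to JournalArticle assets in one group
query.setClassNameIds(new long[] { JOURNAL_ARTICLE_CLASS_NAME_ID });
query.setGroupIds(new long[] { groupId });

AssetEntrySoap[] entries = assetEntryServiceSoap.getEntries(query);
for (AssetEntrySoap entry : entries) {
    System.out.println(entry.getEntryId());   // feed these ids into JournalArticleServiceSoap
}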
