I'm working with Twitter's REST API using the Twitter4J library, particularly the https://api.twitter.com/1.1/search/tweets.json endpoint. I am aware of Twitter's own Streaming API, but I don't want to use that (at least for now). I have a method that queries the /search/tweets endpoint in a do-while loop, but I want the method to return its results in a streaming fashion, so that I can print them to the console as they arrive instead of loading everything all at once. Here's the method:
public List<Status> readTweets(String inputQuery) {
    List<Status> tweets = new ArrayList<Status>();
    int counter = 0;
    try {
        RateLimitStatus rateLimit = twitter.getRateLimitStatus().get("/search/tweets");
        int limit = rateLimit.getLimit();
        Query query = new Query(inputQuery);
        QueryResult result;
        do {
            result = twitter.search(query);
            tweets.addAll(result.getTweets());
            counter++;
        } while ((query = result.nextQuery()) != null && counter < (limit - 1));
    } catch (TwitterException e) {
        e.printStackTrace();
        System.out.println("Failed to search tweets: " + e.getMessage());
        tweets = null;
    }
    return tweets;
}
What can you suggest?
P.S. I don't want to put the console printing functionality inside this method.
Thanks.
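One direction that keeps the printing out of the method is to invert control and let the caller pass a callback in. The sketch below is only an illustration of that idea (it assumes the same twitter field and rate-limit guard as the snippet above):

public void readTweets(String inputQuery, java.util.function.Consumer<Status> onTweet) {
    try {
        RateLimitStatus rateLimit = twitter.getRateLimitStatus().get("/search/tweets");
        int limit = rateLimit.getLimit();
        int counter = 0;
        Query query = new Query(inputQuery);
        QueryResult result;
        do {
            result = twitter.search(query);
            // hand each page to the caller as soon as it arrives
            for (Status status : result.getTweets()) {
                onTweet.accept(status);
            }
            counter++;
        } while ((query = result.nextQuery()) != null && counter < (limit - 1));
    } catch (TwitterException e) {
        System.out.println("Failed to search tweets: " + e.getMessage());
    }
}

The caller then decides what happens to each tweet, e.g. readTweets("java", s -> System.out.println(s.getText()));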
I was using a Facebook FQL query to fetch share counts for multiple URLs with the code below, without needing any access token.
private static final String fbURL = "https://graph.facebook.com/fql?q=";
private static final String queryPrefix =
        "SELECT url, total_count, share_count FROM link_stat WHERE url in (";

// Note: the generic type parameters of validUrlList and dataMap were lost
// when the snippet was posted; raw types are kept here.
private void callFB(List validUrlList, Map dataMap, long timeStamp, Double calibrationFactor) {
    try {
        StringBuilder urlString = new StringBuilder();
        System.out.println("List Size " + validUrlList.size());
        // quote and comma-separate every URL except the last...
        for (int i = 0; i < (validUrlList.size() - 1); i++) {
            urlString.append("\"" + validUrlList.get(i) + "\",");
        }
        // ...then append the last URL without a trailing comma
        urlString.append("\"" + validUrlList.get(validUrlList.size() - 1) + "\"");
        String out = getConnection(fbURL
                + URLEncoder.encode(queryPrefix + urlString.toString() + ")", "utf-8"));
        dataMap = getSocialPopularity(validUrlList.toArray(), dataMap);
        getJSON(out, dataMap, timeStamp, calibrationFactor);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
But now that Facebook has deprecated FQL, I am planning to use
https://graph.facebook.com/v2.5/?ids=http://timesofindia.indiatimes.com/life-style/relationships/soul-curry/An-NRI-bride-who-was-tortured-to-hell/articleshow/50012721.cms&access_token=abc
But I could not find any code for making a batch request with this endpoint. Also, I am using a page access token, so what would the rate limit be?
Could you please help me find out how to make the batch request using Java for this new version.
You will always be subject to rate limiting... If you're using the /?ids= endpoint, there's already "batch" functionality built in.
See
https://developers.facebook.com/docs/graph-api/using-graph-api/v2.5#multirequests
https://developers.facebook.com/docs/graph-api/advanced/rate-limiting
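For illustration, here is a minimal Java sketch of such a batched ?ids= call over plain HTTP (the URLs and the "abc" token are placeholders; JSON parsing is omitted):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class GraphBatchExample {
    public static void main(String[] args) throws Exception {
        String accessToken = "abc"; // placeholder page access token
        String[] urls = {
            "http://example.com/article-1",
            "http://example.com/article-2"
        };
        // ?ids= takes a comma-separated, URL-encoded list, so one HTTP
        // request covers the whole batch of URLs.
        String ids = URLEncoder.encode(String.join(",", urls), "utf-8");
        URL endpoint = new URL("https://graph.facebook.com/v2.5/?ids=" + ids
                + "&access_token=" + accessToken);
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "utf-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON, keyed by each URL
            }
        }
    }
}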
Hello guys, I would like to ask why my code doesn't get me all the tweets I asked for in the query; it just stops after the first page of results. I'm asking because the same code worked very well just six months ago.
Query query = new Query("Carolina OR flood lang:en since:2015-10-04 until:2015-10-09");
query.setCount(100);
QueryResult result;
createNewFile(contFile);
do {
result = twitterInstance.search(query);
List<Status> tweets = result.getTweets();
for (Status tweet : tweets) {
if (cont > MAX_TWEET_PER_FILE) {
cont = 1;
contFile++;
writer.close();
createNewFile(contFile);
}
writeToFile(cont,tweet);
cont++;
}
if(result.getRateLimitStatus().getRemaining()<1){
try {
Thread.sleep(result.getRateLimitStatus().getSecondsUntilReset() * 1000);
} catch (InterruptedException e) {
e.printStackTrace();
throw new RuntimeException(e);
}
}
} while (query!=null);
writer.flush();
writer.close();
System.exit(0);
So after the first iteration of the do loop, query is always null, and what I get is only the tweets from a few hours of Friday. I thought that I would not be able to obtain tweets older than a week, but this is only a day (3 days ago...).
Are there any news or updates from the Twitter guys that I've missed?
Try using:
do {
    ...
} while ((query = result.nextQuery()) != null);
Your query will get the results from the next page, if it exists.
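Put together with the snippet from the question, the corrected loop would look roughly like this (a sketch; the file-writing details are omitted):

Query query = new Query("Carolina OR flood lang:en since:2015-10-04 until:2015-10-09");
query.setCount(100);
QueryResult result;
do {
    result = twitterInstance.search(query);
    for (Status tweet : result.getTweets()) {
        // write the tweet to a file, as in the original code
    }
    // nextQuery() returns null once there are no more result pages
} while ((query = result.nextQuery()) != null);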
I want the full list of tweets that a Twitter user account has favorited.
I have some sample code that gives me all the posts that the user has published, but I want all the tweets that the user has favorited.
public List<Status> getAllTweetsOfUser(Twitter twitter, String user) {
    if (user != null && !user.trim().isEmpty()) {
        List<Status> statuses = new ArrayList<Status>();
        int pageno = 1;
        while (true) {
            try {
                int size = statuses.size();
                Paging page = new Paging(pageno++, 100);
                statuses.addAll(twitter.getUserTimeline(user, page));
                // stop once a page adds nothing new
                if (statuses.size() == size) {
                    break;
                }
            } catch (TwitterException e) {
                // swallowing the exception silently risks an endless loop;
                // at least log it and stop
                e.printStackTrace();
                break;
            }
        }
        return statuses;
    } else {
        return null;
    }
}
Can anyone help me with this?
You need to start the paging at 1 and then increment the page. However, note that you will be rate limited if you exceed 15 requests per 15 minutes (at the default page size of 20, that is 15 * 20 = 300 statuses per 15 minutes).
Paging paging = new Paging(1);
List<Status> list;
do {
    list = twitter.getFavorites(userID, paging);
    for (Status s : list) {
        // do stuff with s
        System.out.println(s.getText());
    }
    paging.setPage(paging.getPage() + 1);
} while (list.size() > 0);
One of the Twitter4J samples does exactly this.
public final class GetFavorites {
    /**
     * Usage: java twitter4j.examples.favorite.GetFavorites
     *
     * @param args message
     */
    public static void main(String[] args) {
        try {
            Twitter twitter = new TwitterFactory().getInstance();
            List<Status> statuses = twitter.getFavorites();
            for (Status status : statuses) {
                System.out.println("@" + status.getUser().getScreenName() + " - " + status.getText());
            }
            System.out.println("done.");
            System.exit(0);
        } catch (TwitterException te) {
            te.printStackTrace();
            System.out.println("Failed to get favorites: " + te.getMessage());
            System.exit(-1);
        }
    }
}
I have tried it like below:
ResponseList<Status> status = twitter.getFavorites(twitterScreenName);
It gives me the favorite tweets of the user that I passed as a parameter. But the problem is that I am able to get only 20 favorites, though the user has many more.
ResponseList<Status> status = twitter.getFavorites(twitterScreenName, paging);
I tried with paging, but I am not sure how to use it, so I am only getting the top 20 favorites with my first snippet. If anybody has tried this, please share how to get all the favorites of a given user.
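For the screen-name variant, the same paging pattern as in the loop above should work; here is a sketch, assuming twitter and twitterScreenName are set up as in your snippets:

Paging paging = new Paging(1); // start at page 1; the default page size is 20
List<Status> favorites;
do {
    favorites = twitter.getFavorites(twitterScreenName, paging);
    for (Status favorite : favorites) {
        System.out.println(favorite.getText());
    }
    paging.setPage(paging.getPage() + 1); // advance to the next page
} while (!favorites.isEmpty());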
I'm trying to download photos posted with a specific tag in real time. I found the real-time API pretty useless, so I'm using a long-polling strategy. Below is pseudocode, with comments describing subtle bugs in it:
newMediaCount = getMediaCount();
delta = newMediaCount - mediaCount;
if (delta > 0) {
    // If mediaCount has changed by now, realDelta > delta, so realDelta - delta
    // photos won't be grabbed; and on the next poll, if mediaCount didn't change
    // again, realDelta - delta photos would be duplicated; else ...
    // If a photo was posted from a private account, the last photo will be
    // duplicated, as the counter changes but nothing is added to the recent feed.
    recentMedia = getRecentMedia(delta);
    // persist recentMedia
    mediaCount = newMediaCount;
}
The second issue can be addressed with a Set of some sort, I guess. But the first one really bothers me. I've moved the two calls to the Instagram API as close together as possible, but is this enough?
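For the record, the Set idea for the second issue could look something like this (a sketch; persist() is a stand-in for whatever persistence is used):

Set<String> seenIds = new HashSet<>();
for (MediaFeedData media : recentMedia) {
    if (seenIds.add(media.getId())) { // add() returns false for duplicates
        persist(media);               // hypothetical persistence hook
    }
}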
Edit
As Amir suggested, I've rewritten the code to use min/max_tag_ids. But it still skips photos. I couldn't find a better way to test this than saving images to disk for some time and comparing the result to instagram.com/explore/tags/.
public class LousyInstagramApiTest {
    @Test
    public void testFeedContinuity() throws Exception {
        Instagram instagram = new Instagram(Settings.getClientId());
        final String TAG_NAME = "portrait";
        String id = instagram.getRecentMediaTags(TAG_NAME).getPagination().getMinTagId();
        HashtagEndpoint endpoint = new HashtagEndpoint(instagram, TAG_NAME, id);
        for (int i = 0; i < 10; i++) {
            Thread.sleep(3000);
            endpoint.recentFeed().forEach(d -> {
                try {
                    URL url = new URL(d.getImages().getLowResolution().getImageUrl());
                    BufferedImage img = ImageIO.read(url);
                    ImageIO.write(img, "png", new File("D:\\tmp\\" + d.getId() + ".png"));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
    }
}
class HashtagEndpoint {
    private final Instagram instagram;
    private final String hashtag;
    private String minTagId;

    public HashtagEndpoint(Instagram instagram, String hashtag, String minTagId) {
        this.instagram = instagram;
        this.hashtag = hashtag;
        this.minTagId = minTagId;
    }

    public List<MediaFeedData> recentFeed() throws InstagramException {
        TagMediaFeed feed = instagram.getRecentMediaTags(hashtag, minTagId, null);
        List<MediaFeedData> dataList = feed.getData();
        if (dataList.size() == 0) return Collections.emptyList();

        String maxTagId = feed.getPagination().getNextMaxTagId();
        if (maxTagId != null && maxTagId.compareTo(minTagId) > 0) dataList.addAll(paginateFeed(maxTagId));
        Collections.reverse(dataList);
        // dataList.removeIf(d -> d.getId().compareTo(minTagId) < 0);

        minTagId = feed.getPagination().getMinTagId();
        return dataList;
    }

    private Collection<? extends MediaFeedData> paginateFeed(String maxTagId) throws InstagramException {
        System.out.println("pagination required");
        List<MediaFeedData> dataList = new ArrayList<>();
        do {
            TagMediaFeed feed = instagram.getRecentMediaTags(hashtag, null, maxTagId);
            maxTagId = feed.getPagination().getNextMaxTagId();
            dataList.addAll(feed.getData());
        } while (maxTagId.compareTo(minTagId) > 0);
        return dataList;
    }
}
When you use the Tag endpoints to get the recent media with a desired tag, the response returns a min_tag_id in its pagination info, which is tied to the most recently tagged media at the time of your call. Since the API also accepts a min_tag_id parameter, you can pass that number from your last query to receive only the media that were tagged after your last query.
So, based on whatever polling mechanism you have, you just call the API to get any new recent media since the last received min_tag_id.
You will also need to pass a large count parameter and follow the pagination of the response to receive all the data without losing anything when the speed of tagging is faster than your polling.
Update:
Based on your updated code:
public List<MediaFeedData> recentFeed() throws InstagramException {
    TagMediaFeed feed = instagram.getRecentMediaTags(hashtag, minTagId, null, 100000);
    List<MediaFeedData> dataList = feed.getData();
    if (dataList.size() == 0) return Collections.emptyList();

    // follow the pagination
    MediaFeed recentMediaNextPage = instagram.getRecentMediaNextPage(feed.getPagination());
    while (recentMediaNextPage.getPagination() != null) {
        dataList.addAll(recentMediaNextPage.getData());
        recentMediaNextPage = instagram.getRecentMediaNextPage(recentMediaNextPage.getPagination());
    }
    Collections.reverse(dataList);
    minTagId = feed.getPagination().getMinTagId();
    return dataList;
}
Our goal is to fetch some of the content from Liferay Portal via its SOAP services using Java. We are successfully loading articles right now with JournalArticleServiceSoap. The problem is that this method requires both a group id and an entry id, and what we want is to fetch all of the articles from a particular group. Hence, we are trying to get the ids first using AssetEntryServiceSoap, but it fails.
AssetEntryServiceSoapServiceLocator aesssLocator = new AssetEntryServiceSoapServiceLocator();
com.liferay.client.soap.portlet.asset.service.http.AssetEntryServiceSoap assetEntryServiceSoap = null;
URL url = null;
try {
    url = new URL("http://127.0.0.1:8080/tunnel-web/secure/axis/Portlet_Asset_AssetEntryService");
} catch (MalformedURLException e) {
    e.printStackTrace();
}
try {
    assetEntryServiceSoap = aesssLocator.getPortlet_Asset_AssetEntryService(url);
} catch (ServiceException e) {
    e.printStackTrace();
}
if (assetEntryServiceSoap == null) {
    return;
}
Portlet_Asset_AssetEntryServiceSoapBindingStub assetEntryServiceSoapBindingStub =
        (Portlet_Asset_AssetEntryServiceSoapBindingStub) assetEntryServiceSoap;
assetEntryServiceSoapBindingStub.setUsername("bruno@7cogs.com");
assetEntryServiceSoapBindingStub.setPassword("bruno");

AssetEntrySoap[] entries;
AssetEntryQuery query = new AssetEntryQuery();
try {
    int count = assetEntryServiceSoap.getEntriesCount(query);
    System.out.println("Entries count: " + Integer.toString(count));
    entries = assetEntryServiceSoap.getEntries(query);
    if (entries != null) {
        System.out.println(Integer.toString(entries.length));
    }
    for (AssetEntrySoap aes : assetEntryServiceSoap.getEntries(query)) {
        System.out.println(aes.getEntryId());
    }
} catch (RemoteException e1) {
    e1.printStackTrace();
}
Although getEntriesCount() returns a positive value like 83, getEntries() always returns an empty array. I'm very new to Liferay Portal, but this looks really weird to me.
By the way, we are obviously not looking for performance here; the key is just to fetch some specific content from the portal remotely. If you know of any working solution, your help would be much appreciated.
Normally an AssetEntryQuery would have a little more information in it, for example:
AssetEntryQuery assetEntryQuery = new AssetEntryQuery();
assetEntryQuery.setClassNameIds(new long[] { ClassNameLocalServiceUtil.getClassNameId("com.liferay.portlet.journal.model.JournalArticle") });
assetEntryQuery.setGroupIds(new long[] { groupId });
So this would return all AssetEntries for the groupId you specify that are also JournalArticles.
Try this and see. As you say, the count method returns a positive number, so it might not make a difference, but give it a go! :)
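For reference, the equivalent setup in the SOAP client from the question would be roughly as follows (a sketch: ClassNameLocalServiceUtil is server-side only, so the numeric classNameId for com.liferay.portlet.journal.model.JournalArticle has to be looked up some other way, e.g. from the ClassName_ table of your installation):

AssetEntryQuery query = new AssetEntryQuery();
query.setClassNameIds(new long[] { journalArticleClassNameId }); // looked-up value
query.setGroupIds(new long[] { groupId });
AssetEntrySoap[] entries = assetEntryServiceSoap.getEntries(query);
for (AssetEntrySoap entry : entries) {
    System.out.println(entry.getClassPK()); // classPK identifies the article
}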