Requirement: I need to monitor the Twilio account and subaccount usages in near real-time.
Any solution in Java, PHP, Python or even curl will do for me.
Twilio provides the Usage Records API, which offers subresources such as Today, but that returns all data from the start of the day up to the current time. I am unable to find anything in the documentation that retrieves only the usage of the last minute, the last 10 minutes, or the usage between two timestamps. The Usage API accepts two dates, but not times.
Hoping someone out there has a solution to offer.
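Not an official Twilio feature, but since the Today subresource returns cumulative counts since midnight, one workaround is to poll it on an interval and diff successive snapshots. A minimal sketch of the diffing step, with hypothetical snapshot dicts standing in for real API responses (`usage_delta` and the category names are my own, not Twilio's):

```python
def usage_delta(prev, curr):
    """Return per-category usage accrued between two Today snapshots.

    Each snapshot maps a usage category (e.g. 'sms', 'calls') to its
    cumulative count since the start of the day.
    """
    return {cat: curr.get(cat, 0) - prev.get(cat, 0) for cat in curr}

# Simulated snapshots taken ten minutes apart; in a real poller these
# would come from GET /Usage/Records/Today for each account/subaccount.
snapshot_10_00 = {"sms": 120, "calls": 45}
snapshot_10_10 = {"sms": 133, "calls": 45}

print(usage_delta(snapshot_10_00, snapshot_10_10))  # {'sms': 13, 'calls': 0}
```

Polling every minute and diffing gives near-real-time usage without needing sub-day granularity from the API itself (the diff resets naturally at midnight when Today resets to zero).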
I am developing an app that shows nearby mosques within 10 km of the user's current location. Since the Places API allows only a limited number of queries per day, I use Firebase to store the nearby mosques for a given location, and I check whether the data is already in the database before querying. But this still doesn't solve the problem: if a user is on the go all day, the results must keep changing according to his or her location. How can I achieve the desired results?
As mentioned earlier, I save nearby locations in a database together with the location around which they were found. But this doesn't quite solve the problem.
Any help will be greatly appreciated.
The Places API is a commercial offering: you are meant to pay for it if you want to build applications around it.
There is a small number of calls you can make for free, but that is only meant for testing or private use. I am no lawyer, but I would guess that circumventing the fee by scraping the map (for example, setting a bot loose to roam a country and build a database of points of interest) would be illegal and would probably get you a letter from Google telling you to stop.
Use the AutocompleteSessionToken class to generate a token and place it after your key. This token reduces your usage because you can call the Places API multiple times and it will still be counted as a single request. I hope this helps, because I didn't fully understand your question. Here is a sample of the link:
https://maps.googleapis.com/maps/api/place/autocomplete/json?input=1600+Amphitheatre&key=&sessiontoken=1234567890.
For more details, see here.
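To illustrate the idea of the sample link above, here is a sketch that builds such a request URL with a fresh session token. The `autocomplete_url` helper is my own illustration, not part of any Google SDK; a UUID is one common choice of token format:

```python
import uuid
from urllib.parse import urlencode

def autocomplete_url(query, api_key, session_token):
    """Build a Places Autocomplete request URL that reuses one session token."""
    base = "https://maps.googleapis.com/maps/api/place/autocomplete/json"
    params = {"input": query, "key": api_key, "sessiontoken": session_token}
    return base + "?" + urlencode(params)

# One token per user typing session; generate a new one once a place
# is selected, so all keystrokes of a session bill as one request.
token = str(uuid.uuid4())
print(autocomplete_url("1600 Amphitheatre", "YOUR_API_KEY", token))
```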
I have an Android app and I plan to use the Kinvey database to store some data.
One field in each record holds the time the app was last used.
My app sets this last-used time whenever the app is opened.
What I am trying to achieve is to run code at the end of each month that clears every record whose last-used time is more than 10 days old.
Can anyone tell me whether this is possible?
The reason for doing this is to use as little server storage as possible, since the free plan provides only 1 GB per month.
What I understand from this...
What I am trying to achieve is to run code at the end of each month that clears every record whose last-used time is more than 10 days old.
Can anyone tell me whether this is possible?
... is that you would like to have scheduled code, which will be executed once a month. Please take a further look at the Scheduled Code feature of Kinvey.
https://devcenter.kinvey.com/html5/tutorials/scheduled-code-getting-started
Edit: more information on the topic...
Kinvey Scheduled code allows you to execute one of your custom endpoints on a specific date in the future. It is commonly used to
Aggregate, archive, and clean up data.
Pull data from a third-party API into Kinvey.
Send out a batch of e-mails or push notifications.
I won't describe the step-by-step setup of Scheduled Code here, since those steps may change in the future. Please follow the steps in the link above; they should get you going.
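The cleanup logic the scheduled endpoint would run is straightforward. Kinvey custom endpoints are written in JavaScript, but the selection logic can be sketched in Python for illustration (the field names `_id` and `lastUsed` and the helper `stale_record_ids` are assumptions for this sketch, not Kinvey's schema):

```python
from datetime import datetime, timedelta

def stale_record_ids(records, now, max_idle_days=10):
    """Return the ids of records whose lastUsed time is older than the cutoff."""
    cutoff = now - timedelta(days=max_idle_days)
    return [r["_id"] for r in records if r["lastUsed"] < cutoff]

now = datetime(2024, 1, 31)
records = [
    {"_id": "a", "lastUsed": datetime(2024, 1, 30)},  # used yesterday: keep
    {"_id": "b", "lastUsed": datetime(2024, 1, 5)},   # idle 26 days: delete
]
print(stale_record_ids(records, now))  # ['b']
```

In a real endpoint you would express the same cutoff as a query against the collection rather than filtering in memory, so only stale records are fetched and removed.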
So I am currently trying to gather tweets on a specific location and then analyse what is going on in that location from the tweets gathered. My task basically involves a lot of data mining.
The main problem I have come across however is gathering enough tweets that will allow me to make a judgement.
I have been using the Twitter Streaming API, but it delivers only 1% of all tweets, which is far from enough. I mined 100,000 tweets and very few were in English, let alone related to the location I was looking at.
I have also noticed that Twitter rate-limits how often you can call a method via their API. How do sites like trendsmap.com work? Are they somehow accessing a larger data set?
Edit: I have tried the geolocation feature in the twitter4j API. It turns out the rate limits can be avoided if you are careful with your implementation. However, very few people actually have geolocation turned on when tweeting, so the results do not represent the people in that area; I seem to get the same tweets every single time. Twitter does offer a search operator "near" which works great on their website, but as far as I can tell they have not exposed it in their API.
If you are searching using the Twitter API, you can restrict your searches to a specific location using the geocode option.
You can use result_type=recent to ensure you're only getting the most recent tweets.
The maximum count - that is, number of tweets per request - is 100.
The current limit on number of search requests per hour is 450.
So, that's a maximum of 45,000 tweets per hour - is that enough for you?
tl;dr: use the most restrictive set of search parameters to limit the results to those you actually need.
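Putting the parameters above together, a minimal sketch of the most restrictive search request (the `search_params` helper is my own; the parameter names are the search API's):

```python
from urllib.parse import urlencode

def search_params(query, lat, lng, radius_km):
    """Most restrictive search: geocoded, recent tweets only, max page size."""
    return {
        "q": query,
        "geocode": f"{lat},{lng},{radius_km}km",  # latitude,longitude,radius
        "result_type": "recent",                  # only the most recent tweets
        "count": 100,                             # maximum tweets per request
    }

print(urlencode(search_params("coffee", 51.5074, -0.1278, 10)))
```

With 100 tweets per request, these parameters are what get you to the 45,000-tweets-per-hour ceiling mentioned above.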
I am trying to write Java code to get the tweets within a specific range of time.
I am using the since and until functions in the twitter4j API, but it only returns a limited number of tweets from the last few minutes.
Is there a way to get this to work? Has anybody had the same experience?
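For reference, the since/until functions mentioned above correspond to date qualifiers in the raw search query, and they accept dates only (YYYY-MM-DD), not times of day. A sketch of the equivalent query string (the helper name is my own):

```python
def date_bounded_query(keyword, since, until):
    """Compose a search query with since:/until: date qualifiers.

    Both bounds are dates in YYYY-MM-DD form; the search syntax does
    not accept a time of day, only whole-day boundaries.
    """
    return f"{keyword} since:{since} until:{until}"

print(date_bounded_query("python", "2014-01-01", "2014-01-31"))
# python since:2014-01-01 until:2014-01-31
```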
With https://dev.twitter.com/docs/api/1/get/statuses/user_timeline I can get the 3,200 most recent tweets. However, certain sites like http://www.mytweet16.com/ seem to bypass the limit, and browsing the API documentation I could not find anything.
How do they do it, or is there another API that doesn't have the limit?
You can use the Twitter search page to bypass the 3,200 limit, although you will have to scroll down many times on the search results page. For example, I searched for tweets from @beyinsiz_adam. This is the link to the search results:
https://twitter.com/search?q=from%3Abeyinsiz_adam&src=typd&f=realtime
Now, to scroll down automatically many times, you can use the following JavaScript code.
// scroll once per second so Twitter keeps loading older tweets
var myVar = setInterval(function() { myTimer() }, 1000);
function myTimer() {
    // jump to the bottom of the page to trigger the next load
    window.scrollTo(0, document.body.scrollHeight);
}
Just run it in the Firebug console and wait a while for all the tweets to load.
The only way to see more is to start saving tweets before the user's tweet count hits 3,200. Services that show more than 3,200 tweets have saved them in their own databases. There is currently no way to get more than that through any Twitter API.
http://www.quora.com/Is-there-a-way-to-get-more-than-3200-tweets-from-a-twitter-user-using-Twitters-API-or-scraping
https://dev.twitter.com/discussions/276
Note from that second link: "…the 3,200 limit is for browsing the timeline only. Tweets can always be requested by their ID using the GET statuses/show/:id method."
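Since individual tweets remain fetchable by ID per that note, a trivial sketch of the endpoint URL involved (the helper is my own; the tweet ID is a placeholder):

```python
def status_show_url(tweet_id):
    """URL for fetching a single tweet by id via GET statuses/show/:id."""
    return f"https://api.twitter.com/1.1/statuses/show/{tweet_id}.json"

print(status_show_url(210462857140252672))
```

So if you have archived the IDs (but not the bodies) of old tweets, you can still rehydrate them one by one even past the 3,200-tweet timeline window.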
I've been in the Twitter industry for a long time and have witnessed many changes to the Twitter API and its documentation, so let me clarify one thing: there is no way to surpass the 3,200-tweet limit. Twitter does not provide this data even in its new premium API.
The only way someone can surpass this limit is by saving an individual Twitter user's tweets as they are posted.
There are tools that claim to have a wide database and provide more than 3,200 tweets. A few that I know of are followersanalysis.com and keyhole.co.
You can use a tool I wrote that bypasses the limit.
It saves the Tweets in a JSON format.
https://github.com/pauldotknopf/twitter-dump
You can use the Python library snscrape to do it, or you can use the ExportData tool to get all tweets for the user, which returns already-preprocessed CSV and spreadsheet files. The first option is free but has less information and requires more manual work.