I have been using Facebook4j for a Facebook Graph API requirement, and the requirement is quite simple.
Requirement: I need to search Facebook objects (all public objects: posts/comments/pages/etc.) for given keywords frequently and persist all results into the DB.
Problem: Although the requirement looks straightforward, I am stuck on handling pagination of the results and calling the API later without losing any items (posts/pages/comments) across consecutive calls to the API.
Doing some research, I found that the Graph API provides several pagination methods, and cursor-based is the best and recommended one. But unfortunately cursor-based pagination is not available for all kinds of objects, so I had to choose time-based pagination, which uses the until and since parameters.
Q1. Is my decision correct?
Following are sample previous and next URLs I get when I do a search API call using Facebook4j:
previous = https://graph.facebook.com/v1.0/search?limit=200&q=infographic&access_token=[ACCESS_TOKEN]&since=1400152500&__previous=1
next = https://graph.facebook.com/v1.0/search?limit=200&q=infographic&access_token=[ACCESS_TOKEN]&until=1399983583
Say I did an API call using the Facebook4j API method; I should then be able to use the fetch-next method and continue:
facebook.fetchNext()
But whenever I get an exception in my application, or when there are no further results, I assume that I should be able to persist the since/until values and use them for later API calls, continuing the search from where I stopped last time.
Q2. Is this assumption regarding the pagination correct?
So I am assuming that my future API calls would look something like the below. I am not sure whether to use 'since' or 'until'.
Q3. Is the way I call the API below fine for continuing a previous search?
ResponseList<JSONObject> results = facebook.search("KEYWORD",new Reading().limit(200).until("1399983583"));
Also, I am not sure whether these results are paginated in such a way that I get the newest result set when I use the "NEXT url" / fetchNext() provided.
Q4. Please clarify how exactly the NEXT URL provided works, in terms of pagination and future API calls.
If my approaches above are wrong, please suggest the best practices I should follow to handle this requirement.
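One way to persist the point where a search stopped is to pull the until (or since) value out of the next/previous URLs shown above. A minimal sketch, assuming the URL format from those examples (the class and helper names are my own):

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class PagingCursor {

    // Extract query parameters from a Graph API paging URL.
    static Map<String, String> queryParams(String url) {
        Map<String, String> params = new HashMap<>();
        String query = URI.create(url).getRawQuery();
        if (query == null) return params;
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                params.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return params;
    }

    // The "until" value from a "next" URL marks where an older-results crawl can resume.
    static String untilFrom(String nextUrl) {
        return queryParams(nextUrl).get("until");
    }
}
```

Persist the returned value, and on restart pass it back via new Reading().until(value).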
I'm fairly new to REST APIs and working on a product where a client X interacts with n servers (Y1, Y2, …, Yn) to retrieve different types of data from the backend, using POST requests.
Now I also want to retrieve some metadata related to each server (file names, project name, etc.) for our internal use case in client X. Note: this should be a separate request.
How should I implement this using REST?
Can I use the OPTIONS method for this?
I tried implementing this with the GET method but am not sure if it's the best approach.
Since you are going to retrieve information, GET is the most appropriate method. POST, instead, should be used to insert fresh new data. I would suggest taking a look at the meaning of all the HTTP verbs (POST, GET, PUT, PATCH, DELETE) in order to understand them.
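If the client happens to be in Java, the metadata fetch is then a plain GET; a minimal sketch with the JDK's built-in HttpClient (the /metadata path is a made-up example, not something your servers necessarily expose):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class MetadataRequest {

    // Build a GET request for a server's metadata endpoint.
    // The "/metadata" path is hypothetical; use whatever path the servers expose.
    static HttpRequest forServer(String baseUrl) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/metadata"))
                .header("Accept", "application/json")
                .GET()
                .build();
    }
}
```

Send it with HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString()).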
I have a paginated response from a URL, for example https://swapi.dev/api/people.
This endpoint gives only 9 people per page. I want to collect all Star Wars characters using WebClient in a Spring Boot app, but I don't know how to crawl over the pages using WebClient and retrieve all people at one time in a non-blocking way. Does anyone know how to do this? Thank you for your help.
It really depends on how the pagination was implemented by the owners of the API. It might not be possible to get all results at once, but to be sure you would need to get in touch with the ones responsible for the API.
As you can see from the documentation (https://swapi.dev/documentation#people), a request to https://swapi.dev/api/people gives all people, and https://swapi.dev/api/people/1 gives the first people resource. So according to the documentation, there is no pagination.
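For what it's worth, the list response from that API does carry next/previous links alongside a results array, so the pages can be chained reactively. A hedged sketch using WebClient's expand operator (untested; the Page/Person records mirror the documented response shape, and the field names are assumptions based on that documentation — recent Jackson is needed to bind records):

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.util.List;

public class PeopleCrawler {

    // Mirrors the swapi.dev list response: {"next": "...", "results": [...]}
    record Page(String next, List<Person> results) {}
    record Person(String name) {}

    private final WebClient client = WebClient.create();

    // Follow "next" links until they run out, concatenating all results
    // into one non-blocking stream.
    Flux<Person> allPeople() {
        return fetchPage("https://swapi.dev/api/people")
                .expand(page -> page.next() == null
                        ? Mono.empty()
                        : fetchPage(page.next()))
                .flatMapIterable(Page::results);
    }

    private Mono<Page> fetchPage(String url) {
        return client.get().uri(url).retrieve().bodyToMono(Page.class);
    }
}
```

Calling allPeople().collectList() then yields every character in a single Mono without blocking.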
The Splunk dashboard allows you to run a post-process search which uses a base search. I would like to know if the same can be achieved programmatically using the Splunk SDK for Java. I would really appreciate any pointers on this.
Given that we have a base search, using the Splunk Java SDK, we would like to do the below two steps
1) execute the base search and get the results with 'post-process search 1',
2) get the results from the base search that was executed previously with 'post-process search 2' and do something.
I know step 1 is pretty straightforward, but I would like to know how I can take the results from the base search executed in step 1 and apply the second post-process search to them. Should I just get the results from the history? What is the best way to get the results from the base search job?
I had to build a dashboard with post process searches and look at what endpoints were being called using a recording proxy.
What was happening is a job with the base search was launched. Then when results were being retrieved, a post-processing search was being provided. In the raw REST API this is the search parameter on the search job results (and the preview results) endpoint which was being called.
Now I haven't tried it explicitly, but correspondingly there's a setSearch method on the JobResultsArgs object (and similarly on the JobResultsPreviewArgs object) that you'd pass to Job#getResults (or the preview-results equivalent), assuming you don't just pass a simple Map with that version of the function signature.
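Putting that together, a sketch against the Splunk Java SDK might look like the following (untested; the search strings are placeholders, and the only SDK calls used are setSearch as described above plus the standard job creation/polling/results calls):

```java
import com.splunk.Job;
import com.splunk.JobResultsArgs;
import com.splunk.Service;
import java.io.InputStream;

public class PostProcessExample {

    static void run(Service service) throws InterruptedException {
        // 1) Launch the base search once.
        Job job = service.getJobs().create("search index=main sourcetype=access_combined");
        while (!job.isDone()) {
            Thread.sleep(500);
        }

        // 2) Apply a post-process search when retrieving results --
        //    this becomes the "search" parameter on the results endpoint.
        JobResultsArgs args1 = new JobResultsArgs();
        args1.setSearch("stats count by status");   // post-process search 1
        InputStream results1 = job.getResults(args1);

        // 3) Re-use the same job with a different post-process search;
        //    the base search is not re-executed.
        JobResultsArgs args2 = new JobResultsArgs();
        args2.setSearch("timechart count");         // post-process search 2
        InputStream results2 = job.getResults(args2);
    }
}
```

The key point is that both getResults calls hit the same finished job, so the base search runs only once.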
I just want to know the high-level steps of the process. Here's my thought on the process:
Assumption: the API returns JSON format.
1) Check the API documentation to see the structure of the returned JSON.
2) Create a corresponding Java class (e.g. Employee).
3) Make an HTTP call to the endpoint to get the JSON response.
4) Use some JSON library (such as Gson or Jackson) to unmarshal the JSON string into an Employee object.
5) Manipulate the Employee object.
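The unmarshalling step might look like this with Jackson (a sketch; the Employee fields are invented for illustration, and FAIL_ON_UNKNOWN_PROPERTIES is disabled so that fields the API adds later don't break parsing):

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class EmployeeClient {

    // Fields invented for illustration; mirror the documented JSON structure.
    public static class Employee {
        public long id;
        public String name;
    }

    private static final ObjectMapper MAPPER = new ObjectMapper()
            // Tolerate fields the API adds later instead of failing.
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    static Employee parse(String json) throws Exception {
        return MAPPER.readValue(json, Employee.class);
    }
}
```

Ignoring unknown properties covers additive API changes; renamed or removed fields still require updating the class.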
However, what if the JSON returned by the API changes? It's a really tedious task to examine the JSON string every now and then and adjust the corresponding Java class.
Can anyone help me out with this understanding. Thanks
You describe how to consume a JSON-over-HTTP API, which is fine, since most of the APIs out there are just that. If you are interested in consuming RESTful HTTP resources, however, one way would be:
Check the API documentation, aka. the media-types that your client will need to support in order to communicate with its resources. Some RESTafarians argue that all media-types should be standardized, so all clients could potentially support them, but I think that goes a bit far.
Watch out for link representations and processing logic. Media types do not only describe the format of the data, but also how to process it: how to display it if it's an image, how to run code that might be part of the message, how to lay it out on the screen, how to use embedded controls like forms, etc.
Create corresponding Java classes. If the resources "only" describe data (which they usually do in an API context), then simple Java classes will do; otherwise more might be needed. For example: can the representation contain JavaScript to run on the client? You need to embed a JavaScript engine and prepare your class to do just that.
Make a call to a bookmarked URI if you have one. There should be no hardcoded SOAP-like "endpoint" you call. You start with bookmarks and work your way to the state your client needs to be in.
Usually your first call goes to the "start" resource. This is the only bookmark you have in the beginning. You specify the media-types you support for this resource in the Accept header.
You then check whether the returned Content-Type matches one of your accepted media-types (remember, the server is free to ignore your preferences), and then you process the returned representation according to its rules.
For example, you want to get all the accounts for customer 123456, for which you don't yet have a bookmark. You might first GET the start resource for account management. The processing logic there might describe a link to follow for account listings. You follow the link. The representation there might give you a "form" in which you have to fill out the customer number and POST. Finally, you get your representation of the account list. You may at this point bookmark the page, so you don't have to go through the whole chain the next time.
Process representation. This might involve displaying, running, or just handing over the data to some other class.
Sorry for the long post, slow day at work :) Just for completeness, some other points the client needs to know about: caching, handling bookmarks (reacting to 3xx codes), following links in representations.
Versioning is another topic you mention. This is a whole discussion onto itself, but in short: some people (myself included) advocate versioning the media-type. Non-backwards compatible changes simply change the media type's name (for example from application/vnd.company.customer-v1+json, to application/vnd.company.customer-v2+json), and then everything (bookmarks for example) continues to work because of content negotiation.
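From the client's point of view, that negotiation is just an Accept header; a minimal sketch with the JDK HttpClient, using the illustrative vendor media types from the paragraph above (the URI is a placeholder bookmark):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class VersionedAccept {

    // Ask for v2 of the customer representation, accepting v1 as a fallback.
    static HttpRequest customerRequest(String bookmarkUri) {
        return HttpRequest.newBuilder()
                .uri(URI.create(bookmarkUri))
                .header("Accept",
                        "application/vnd.company.customer-v2+json, "
                        + "application/vnd.company.customer-v1+json;q=0.5")
                .GET()
                .build();
    }
}
```

On the response, check Content-Type to see which version the server actually chose before picking a parser, since the server is free to ignore your preferences.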
There are many ways to consume RESTful APIs.
Typically, you need to know what version of the API you are going to use. When the API changes (i.e. a different version is exposed) you need to decide if the new functionality is worth migrating your application(s) to the latest and greatest or not...
In my experience, migrating to a new API always requires some effort and it really depends on the value of doing so (vs. not doing it) and/or whether the old API is going to be deprecated and/or not supported by the publisher.
I have been developing an application to download whole calendars from all users in a domain and save them in ICS format. The app is written in Java. I get access using 2L OAuth. So far I'm able to get most of the calendar data, excluding exceptions to recurring events. The Google API docs say that every recurring event should contain a recurrence list, including EXRULEs. But when I call the API I get only the recurrence rule, without the exceptions.
Is there any way to get these exceptions?
You get the exceptions as regular items, and
event.getOriginalEvent()
will return a reference to the recurring event.
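In other words, the exception instances come back as ordinary entries in the event feed. A hedged sketch (class names and every method other than getOriginalEvent() are from memory of the GData Calendar API and may need adjusting):

```java
import com.google.gdata.data.calendar.CalendarEventEntry;
import com.google.gdata.data.calendar.CalendarEventFeed;

public class RecurrenceExceptions {

    // Print the entries that override single instances of a recurring event.
    static void listExceptions(CalendarEventFeed feed) {
        for (CalendarEventEntry event : feed.getEntries()) {
            if (event.getOriginalEvent() != null) {
                // This entry is an exception: it modifies one instance of a
                // recurring event, and getOriginalEvent() points back at the series.
                System.out.println("Exception for: " + event.getOriginalEvent().getHref());
            }
        }
    }
}
```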