I have a paginated response from a URL, for example https://swapi.dev/api/people .
This endpoint gives only 9 people per page. I want to collect all Star Wars characters using WebClient in a Spring Boot app, but I don't know how to crawl over the pages using WebClient and retrieve all the people at once in a non-blocking way. Does anyone know how to do this? Thank you for your help.
It really depends on how the pagination was implemented by the owners of the API. It might not be possible to get all results at once; to be sure, you would need to get in touch with whoever is responsible for the API.
As you can see from the documentation (https://swapi.dev/documentation#people), a https://swapi.dev/api/people request gives all people, and https://swapi.dev/api/people/1 gives the first people resource. So according to the documentation, there is no pagination.
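That said, the question describes the endpoint returning people page by page. If each page's JSON exposes a link to the following page (a "next" field plus a "results" array are assumed here), a non-blocking crawl with WebClient could look like this minimal sketch (Java records are used for brevity):

import java.util.List;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class PeopleCrawler {

    // Shape of one page; the "next" and "results" field names are
    // assumptions about the JSON the endpoint returns.
    record Person(String name) {}
    record Page(String next, List<Person> results) {}

    private final WebClient client = WebClient.create();

    // Fetch the first page, then keep expanding into the page referenced
    // by "next" until it is null -- all without blocking.
    public Flux<Person> allPeople(String firstPageUrl) {
        return fetchPage(firstPageUrl)
                .expand(page -> page.next() == null
                        ? Mono.empty()
                        : fetchPage(page.next()))
                .flatMapIterable(Page::results);
    }

    private Mono<Page> fetchPage(String url) {
        return client.get()
                .uri(url)
                .retrieve()
                .bodyToMono(Page.class);
    }
}

expand keeps requesting the page referenced by "next" until it is null, and flatMapIterable flattens the pages into a single Flux<Person> that emits every character without ever blocking a thread.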
I'm fairly new to REST APIs and working on a product where client X interacts with n servers (Y1, Y2, …, Yn) to retrieve different types of data from the backend, using POST requests.
Now I also want to retrieve some metadata related to each server (file names, project name, etc.) for our internal use case in client X. Note: this should be a separate request.
How should I implement this using REST?
Can I use the OPTIONS method for this?
I tried implementing this with the GET method but am not sure if it's the best approach.
Since you are going to retrieve information, GET is the most appropriate method. POST should instead be used to create fresh new data. I would suggest taking a look at the meaning of all the HTTP verbs (POST, GET, PUT, PATCH, DELETE) in order to understand them.
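For instance, each server Y could expose its metadata on a dedicated GET endpoint. A minimal Spring sketch, where the /metadata path and the field names are illustrative assumptions rather than anything from the question:

import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical endpoint each server Y could expose.
@RestController
public class MetadataController {

    @GetMapping("/metadata")
    public Map<String, String> metadata() {
        // In a real server this would come from configuration or disk.
        return Map.of(
                "projectName", "example-project",
                "fileName", "data.csv");
    }
}

Keeping it a separate GET resource also means client X can cache it independently of the POST data requests.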
Hi, I am pretty new to Restlet and to building web servers in general. I need to support filtering like this:
http://deviceip:port/resource?id=id
So far I know how to return a JSON message when the user invokes different resources, based on my web server's state. I would attach it to the router and add a class which handles that resource. But how can I return only one resource from a collection, based on its id? What do I need to change in the class responsible for handling that resource? Also, how can I attach this resource to the router? Any help is welcome; if you can write a code snippet to help me, that would be great.
Thanks
So as I understand it, you can approach this in two ways. One is explained by the link above, and the other one is using the query. So basically you don't have to create another resource like in the answer from the link above; instead you can just extract the query with this.getQuery() and then call the method getFirstValue("id"), which will return the entered id.
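Put together, a sketch of such a resource (the class name and the JSON payload are placeholders; the lookup against your own collection goes where indicated):

import org.json.JSONObject;
import org.restlet.ext.json.JsonRepresentation;
import org.restlet.representation.Representation;
import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;

// Handles GET http://deviceip:port/resource?id=<id>
public class ItemResource extends ServerResource {

    @Get("json")
    public Representation getItem() throws Exception {
        // Read the "id" query parameter from the request URL.
        String id = getQuery().getFirstValue("id");
        // Look up the matching element of your collection here; this
        // placeholder just echoes the id back.
        JSONObject json = new JSONObject();
        json.put("id", id);
        return new JsonRepresentation(json);
    }
}

Attaching it to the router then works the same way as for your other resources:
router.attach("/resource", ItemResource.class);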
I'm trying to map a URL like www.host.com/{tenant-id}/home to something like www.host.com/home.xhtml?tenantId={tenant-id}, where "tenant-id" is the name of the tenant that is using the app and could be almost anything.
After some research I found many alternatives, but none of them convinces me. I'll list the alternatives, so maybe I can help someone and get some feedback on missing alternatives at the same time.
Furthermore, my app is written in Java.
Pretty Faces (or Rewrite). http://www.ocpsoft.org/prettyfaces/
Htmleasy (Resteasy) https://github.com/voodoodyne/htmleasy
A filter, handmade.
URL rewrite through a proxy (Apache / HAProxy)
I tried Pretty Faces and got it to work, but I'm concerned about performance issues under high load. I don't know what PF does internally, and I'm afraid that processing every request and applying filters could be bad.
A handmade filter would be impossible to maintain.
Does anyone have experience with Htmleasy?
Do you know any other alternative?
Thanks in advance
Cristian.
I'm not sure I can give the right approach for Java; I come from a .NET background. If this were given to me, I would create an HttpModule that intercepts each request and then translates the URI accordingly. However, it should not be required to carry the tenant code in the URI for every request; I would suggest that you initially create a tenant context within your app and then use it for tenant identification.
This will be far more secure and easier than handling the URI changes per request.
HTH
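In Java, the counterpart of that HttpModule is a servlet filter (PrettyFaces itself is implemented as one). A minimal handmade sketch, assuming the /{tenant-id}/home layout from the question; the class name and path handling are illustrative:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Forwards /{tenant-id}/home to /home.xhtml?tenantId={tenant-id}.
public class TenantFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        // e.g. "/acme/home" splits into ["", "acme", "home"]
        String[] parts = request.getRequestURI()
                .substring(request.getContextPath().length())
                .split("/");
        if (parts.length == 3 && "home".equals(parts[2])) {
            // Rewrite internally; the browser keeps the pretty URL.
            request.getRequestDispatcher("/home.xhtml?tenantId=" + parts[1])
                   .forward(req, res);
            return;
        }
        chain.doFilter(req, res);
    }

    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }
}

Registered once in web.xml (or via @WebFilter), it forwards internally, so the browser keeps the pretty URL while JSF sees home.xhtml with the tenantId parameter.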
I see that AEM 6.0 has a built-in component for page view statistics, displayed as the Impressions column in the author Site Admin. But this built-in component does not support things like filtering for the top page views across sites. It is useful for calculating page views for each page, but I'm facing a performance problem calculating the top page views over more than a thousand pages. Does anyone have a solution for this? Many thanks.
While the impression data initially appears tempting, it is not meant for end-user page view analytics. The CQ integrations with SiteCatalyst, etc. are meant for real analytics (as are third-party solutions such as Google Analytics).
Consider that, for the author to display impressions, one or more publish instances would have to "reverse replicate" impression data back to the author, which would then get pushed right back to the publish instances.
When you also consider the Apache Dispatchers serving up cached pages without passing the request to the publish instances, you can understand how even your production publish instances don't see all the traffic, either.
You can create a variant of the page with a selector: something like a statistics.html.jsp script in your page node. Then:
http://example.com/a.html is the normal page
http://example.com/a.statistics.html is the page that adds the statistics component.
Finding the top 10 most viewed pages, or sorting all pages based on their popularity, using the Impression service provided by CQ is a bit tricky for the following reasons:
The page views may live in an external system, and you then want to import that data as impressions into CQ to have more application context.
You have to aggregate the data across all publish instances.
It's slow.
To calculate top page views over more than a thousand pages, you have three options:
Creating your own Impression service
You can create your own impression service by extending com.day.crx.statistics.Entry. Then you can do all the optimizations yourself.
Adobe Analytics: If you have thousands of pages, then go with Adobe Analytics. It will give you the top results and other filtering options through its REST service.
Modify the OOTB service implementation.
Suppose you don't want to write your own service but want to use the OOTB service available to you. The only problem with this is that you have multiple publish instances and you somehow need to combine all the data into one place to get an accurate picture. It is tricky to get the data from every publish instance (through reverse replication), combine it on the author, and then push it out again. However, you can use one instance to collect all the stat data (a kind of single source of truth) and then replicate it back to all instances every day.
Make sure that you enable page view tracking by adding the following line:
<cq:include script="/libs/foundation/components/page/stats.jsp" />
Then configure all publish instances to point to one DNS name using the following configurations (you can always override these under /apps):
/apps/wcm/core/config.publish/com.day.cq.wcm.core.stats.PageViewStatistics
/apps/wcm/core/config.publish/com.day.cq.wcm.core.stats.PageViewStatisticsImpl
Make sure that pageviewstatistics.trackingurl points to a single domain. (You need to create a domain, something like impression.mydomain.com, that will be a standalone CQ instance taking all impression requests.)
Now you have consolidated page impressions on one machine.
You can easily write a scheduler which runs every night and reverse replicates all the data to the author instance.
Once it is on the author instance, you can use the Replicator service to replicate it to all the other publish instances; see the sketch below.
Then you can tweak some code, as mentioned in the custom approach, to get the popular resources.
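A rough sketch of the nightly job on the author, once the reverse-replicated data has arrived (the OSGi DS annotations, cron expression, statistics path, and service-user setup are all assumptions for illustration):

import javax.jcr.Session;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import com.day.cq.replication.ReplicationActionType;
import com.day.cq.replication.Replicator;

// Nightly job that pushes the consolidated statistics subtree out to the
// publish instances. The cron expression and the /var/statistics/pages
// path are assumptions, not OOTB values.
@Component(service = Runnable.class,
           property = { "scheduler.expression=0 0 2 * * ?" })
public class StatsReplicationJob implements Runnable {

    @Reference
    private Replicator replicator;

    @Reference
    private ResourceResolverFactory resolverFactory;

    @Override
    public void run() {
        try (ResourceResolver resolver =
                 resolverFactory.getServiceResourceResolver(null)) {
            Session session = resolver.adaptTo(Session.class);
            // Activate the stats tree so every publish instance receives it.
            replicator.replicate(session, ReplicationActionType.ACTIVATE,
                    "/var/statistics/pages");
        } catch (Exception e) {
            // Log and let the next scheduled run retry.
            throw new RuntimeException("Stats replication failed", e);
        }
    }
}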
I have been using Facebook4j for a Facebook Graph API related requirement, and my requirement is quite simple.
Requirement: I need to search Facebook objects (all public objects: posts/comments/pages/etc.) for given keywords frequently and persist all the results into the DB.
Problem: Although the requirement looks straightforward, I am stuck on handling pagination of the results and calling the API later without losing any items (posts/pages/comments) in the consecutive calls to the API.
Doing some research, I found that the Graph API provides several pagination methods and that cursor-based is the best and recommended one. But unfortunately cursor-based pagination is not available for all kinds of objects, so I had to choose time-based pagination, which uses the until and since parameters.
Q1. Is my decision correct?
Following are sample previous and next URLs I get when I do a search API call using Facebook4j:
previous = https://graph.facebook.com/v1.0/search?limit=200&q=infographic&access_token=[ACCESS_TOKEN]&since=1400152500&__previous=1
next = https://graph.facebook.com/v1.0/search?limit=200&q=infographic&access_token=[ACCESS_TOKEN]&until=1399983583
Say I did an API call; using the Facebook4j API I should then be able to fetch the next page and continue:
facebook.fetchNext(results.getPaging())
But whenever I get an exception in my application, or at the point where there are no further results, I assume that I should be able to persist the since/until values and use them for later API calls, continuing to get search results from where I stopped last time.
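Something like the sketch below is what I have in mind; persist() and saveCheckpoint() are placeholders for my own DB code:

import facebook4j.Facebook;
import facebook4j.FacebookException;
import facebook4j.Paging;
import facebook4j.Reading;
import facebook4j.ResponseList;
import facebook4j.internal.org.json.JSONObject;

public class SearchCrawler {

    // Crawl search results page by page, remembering the paging state
    // so a later run can resume from the saved until value.
    public void crawl(Facebook facebook, String keyword, String until)
            throws FacebookException {
        Reading reading = new Reading().limit(200);
        if (until != null) {
            reading.until(until);              // resume from the saved point
        }
        ResponseList<JSONObject> page = facebook.search(keyword, reading);
        while (page != null && !page.isEmpty()) {
            persist(page);                     // store this batch in the DB
            Paging<JSONObject> paging = page.getPaging();
            if (paging == null) {
                break;                         // no further pages
            }
            saveCheckpoint(paging);            // remember where we stopped
            page = facebook.fetchNext(paging); // follow the "next" URL
        }
    }

    private void persist(ResponseList<JSONObject> batch) { /* DB insert here */ }
    private void saveCheckpoint(Paging<JSONObject> paging) { /* save since/until here */ }
}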
Q2. Is the assumption regarding the pagination correct?
So I am assuming that my future API calls would be something similar to the line below; I am not sure whether to use 'since' or 'until'.
Q3. Is the way I call the API below to continue searching from a previous search fine?
ResponseList<JSONObject> results = facebook.search("KEYWORD",new Reading().limit(200).until("1399983583"));
Also, I am not sure whether these results are paginated in such a way that I get the newest result set when I use the "next" URL / fetchNext() provided.
Q4. Please clarify how exactly the provided next URL works in terms of pagination and future API calls.
If my approaches above are wrong, please suggest the best practices I should follow to handle this requirement.