The Splunk dashboard allows you to run a post-process search that uses a base search. I would like to know if the same can be achieved programmatically using the Splunk SDK for Java. I would really appreciate any pointers on this.
Given that we have a base search, we would like to do the following two steps using the Splunk Java SDK:
1) execute the base search and get the results with 'post-process search 1',
2) get the results from the same base search job with 'post-process search 2' and do something with them.
I know step 1 is pretty straightforward, but I would like to know how I can take the results of the base search executed in step 1 and apply the second post-process search to them. Should I just get the results from the history? What is the best way to get the results from the base search job?
I had to build a dashboard with post-process searches and looked at which endpoints were being called using a recording proxy.
What happens is that a job with the base search is launched. Then, when results are retrieved, a post-process search is supplied. In the raw REST API this is the search parameter on the search job results (and results preview) endpoint being called.
Now I haven't tried it explicitly, but correspondingly there's a setSearch method on the JobResultsArgs object (and similarly on the JobResultsPreviewArgs object) that you'd pass to Job#getResults (or Job#getResultsPreview), assuming you don't just pass a plain Map with that overload of the method.
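I haven't verified this end-to-end, but based on that, a minimal sketch (placeholder host, credentials and searches) would look roughly like this:

import com.splunk.Job;
import com.splunk.JobResultsArgs;
import com.splunk.Service;
import com.splunk.ServiceArgs;
import java.io.InputStream;

public class PostProcessSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setHost("localhost");
        loginArgs.setPort(8089);
        loginArgs.setUsername("admin");
        loginArgs.setPassword("changeme");
        Service service = Service.connect(loginArgs);

        // 1) Run the base search once as a normal job and wait for it to finish.
        Job baseJob = service.getJobs().create("search index=_internal | head 10000");
        while (!baseJob.isDone()) {
            Thread.sleep(500);
            baseJob.refresh();
        }

        // 2) Fetch results from the same job twice, each time with a different
        //    post-process search passed via setSearch (the 'search' REST parameter).
        JobResultsArgs post1 = new JobResultsArgs();
        post1.setSearch("search log_level=ERROR | stats count by component");
        InputStream results1 = baseJob.getResults(post1);

        JobResultsArgs post2 = new JobResultsArgs();
        post2.setSearch("search log_level=WARN | stats count");
        InputStream results2 = baseJob.getResults(post2);

        // ... read results1/results2 with a ResultsReader as usual ...
    }
}

The base search itself runs only once; both post-process searches are applied server-side to that job's result set.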
I'm fairly new to REST APIs and I'm working on a product where client X interacts with n servers (Y1, Y2, …, Yn) to retrieve different types of data from the backend, using POST requests.
Now I also want to retrieve some metadata related to each server (file names, project name, etc.) for our internal use case in client X. Note: this should be a separate request.
How should I implement this using REST?
Can I use the OPTIONS method for this?
I tried implementing this with the GET method but I'm not sure if it's the best approach.
Since you are going to retrieve information, GET is the most appropriate method. POST should instead be used to 'insert' fresh new data. I would suggest taking a look at the meaning of all the HTTP verbs (POST, GET, PUT, PATCH, DELETE) in order to understand them.
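As a minimal sketch, assuming each server exposes a hypothetical /api/metadata resource (the URL and response shape are illustrative only), the metadata call would just be a plain GET:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetadataClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical metadata resource on server Y1; GET is safe and idempotent,
        // which matches a read-only metadata lookup.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://y1.example.com/api/metadata"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"fileNames": [...], "projectName": "..."}
    }
}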
I am curious if there is a better way to handle an advanced filter design. I have an API for a music catalog that allows users to search for tracks based on the following criteria (if nothing is provided, it just returns a paginated result of all tracks ordered by added date).
That means a base endpoint of something like http://www.myapi.com/api/tracks
Then I offer the following parameters to allow for advanced filtering on multiple things if they choose to do so. If a parameter is not provided, it is not included in the query at all.
Artist Name / Song Title
Genre(s)
Key(s)
BPM Range
Version Type(s)
For example, you could search for Rock songs that are in key 2A, and all songs that meet that criteria would be returned. The endpoint would then look something like this:
http://www.myapi.com/api/tracks?genres=ROCK&keys=2A
My question is: in designing something like this, is there any approach other than setting up your endpoint to basically brute-force through all of the possible request parameter combinations and having a separate query for each combo?
When I think of this I think of something like LinkedIn's advanced filters ... surely they can't brute-force that kind of thing; it would be terrible to maintain and scale. How is that kind of functionality usually implemented in production systems?
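For what it's worth, the usual alternative is to compose a single query dynamically from whichever parameters are present, rather than writing one query per combination. Here is a framework-agnostic sketch (the Track type and filter method are hypothetical; a real service would typically build a SQL WHERE clause or JPA Specification the same way):

import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical domain type, for illustration only.
record Track(String artist, String title, String genre, String key, int bpm) {}

class TrackFilter {
    // AND together only the criteria that were actually supplied; null means "not provided".
    static List<Track> filter(List<Track> tracks, String genre, String key,
                              Integer minBpm, Integer maxBpm) {
        Predicate<Track> p = t -> true;
        if (genre != null)  p = p.and(t -> t.genre().equalsIgnoreCase(genre));
        if (key != null)    p = p.and(t -> t.key().equalsIgnoreCase(key));
        if (minBpm != null) p = p.and(t -> t.bpm() >= minBpm);
        if (maxBpm != null) p = p.and(t -> t.bpm() <= maxBpm);
        return tracks.stream().filter(p).collect(Collectors.toList());
    }
}

// Usage: TrackFilter.filter(allTracks, "ROCK", "2A", null, null)
// handles ?genres=ROCK&keys=2A with the same code path as any other combination.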
I'm developing a REST API using Restlet.
So far everything has been working just fine. However, I now encountered an issue with the Router mapping of URL to ServerResource.
I've got the following scenario:
GET /car returns a list of all cars
GET /car/{id} returns details about the car with the given id
GET /car/advancedsearch?param1=test should run a search across all cars with some parameters
The first two calls work without any problems. If I try to hit the third call though, the Restlet Router somehow maps it to the second one instead. How can I tell Restlet to instead use the third case?
My mapping is defined as follows:
router.attach("/car", CarListResource.class);
router.attach("/car/{id}", CarResource.class);
router.attach("/car/advancedsearch", CarSearchResource.class);
CarSearchResource is never invoked, but rather the request ends up in CarResource.
The router's default matching mode is set to Template.MODE_EQUALS, so that can't be causing it.
Does anyone have any further suggestions on how I could fix this?
Please don't suggest using /car with the parameters instead, as there's already another kind of search in place at that level. Also, I'm not in control of the API structure, so it has to remain as it is.
You need to add .setMatchingQuery(true) to that route in order for it to recognize that there is a query string at the end of it.
Router router = (Router) super.createInboundRoot();
// Attach the search resource with a template that explicitly includes the query part.
TemplateRoute route1 = router.attach("/car/advancedsearch?{query_params}", MyResource.class);
// Make the route match against the query string as well.
route1.setMatchingQuery(true);
return router;
Note that the template must keep exactly the order shown above, i.e. advancedsearch comes first and {query_params} comes after it.
I was able to solve this by simply reordering the attach statements:
router.attach("/car/advancedsearch", CarSearchResource.class);
router.attach("/car", CarListResource.class);
router.attach("/car/{id}", CarResource.class);
I have been using Facebook4j for a Facebook Graph API related requirement, and my requirement is quite simple.
Requirement: I need to search Facebook objects (all public objects: posts/comments/pages, etc.) for given keywords frequently and persist all results into the DB.
Problem: Although the requirement looks straightforward, I am stuck handling pagination of the results and calling the API later without losing any items (posts/pages/comments) in the consecutive calls to the API.
Doing some research, I found that the Graph API provides several pagination methods, and cursor-based is the best and recommended one. But unfortunately cursor-based pagination is not available for all kinds of objects, so I had to choose time-based pagination, which uses the until and since parameters.
Q1. Is my decision correct?
The following are sample previous and next URLs I get when I do a search API call using Facebook4j:
previous = https://graph.facebook.com/v1.0/search?limit=200&q=infographic&access_token=[ACCESS_TOKEN]&since=1400152500&__previous=1
next = https://graph.facebook.com/v1.0/search?limit=200&q=infographic&access_token=[ACCESS_TOKEN]&until=1399983583
Say I did an API call; using the Facebook4j API I should then be able to use the fetchNext method and continue:
facebook.fetchNext()
But whenever I get an exception in my application, or when there are no further results, I assume that I should be able to persist the since/until values and use them for later API calls, continuing to get search results from where I stopped last time.
Q2. Is the assumption regarding the pagination correct?
So I am assuming that my future API calls would be something similar to the below. I am not sure whether to use 'since' or 'until'.
Q3. Is the way I call the API below to continue searching from a previous search fine?
ResponseList<JSONObject> results = facebook.search("KEYWORD", new Reading().limit(200).until("1399983583"));
Also, I am not sure whether these results are paginated in such a way that I get the newest result set when I use the NEXT url / fetchNext() provided.
Q4. Please clarify how exactly the NEXT url provided works in terms of pagination and future API calls.
If my above approaches are wrong, please suggest the best practices which I should follow to handle this requirement.
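To make the intended flow concrete, here is a minimal sketch of the loop described above, under the assumption that Facebook4j's ResponseList/Paging/fetchNext API is used; the persistence step is only a stand-in:

import facebook4j.Facebook;
import facebook4j.FacebookFactory;
import facebook4j.Paging;
import facebook4j.Reading;
import facebook4j.ResponseList;
import facebook4j.internal.org.json.JSONObject;

public class SearchPager {
    public static void main(String[] args) throws Exception {
        // Assumes credentials are configured via facebook4j.properties.
        Facebook facebook = new FacebookFactory().getInstance();

        ResponseList<JSONObject> page = facebook.search("infographic", new Reading().limit(200));
        while (page != null && !page.isEmpty()) {
            for (JSONObject post : page) {
                System.out.println(post);  // stand-in for the real "persist to DB" step
            }
            Paging<JSONObject> paging = page.getPaging();
            if (paging == null || paging.getNext() == null) {
                break;  // no further results
            }
            // The 'until' value could be parsed out of paging.getNext() and stored here,
            // so that a later run can resume with new Reading().until(storedValue).
            page = facebook.fetchNext(paging);
        }
    }
}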
When a workflow is started again with the same workflowId, it gets a different runId. Is there a way to retrieve all such executions (with different runIds) of a given workflowId?
I explored the ListClosedWorkflowExecutionsRequest API, but it just lists all the workflow executions, not those for a particular workflowId.
The problem I am trying to solve: there were many workflows that failed for some reason. In the restarting process, I didn't include the correct time filter, so a few of them were restarted while a few got skipped. What I am trying to do is list all failed workflow IDs using ListClosedWorkflowExecutionsRequest and, for each workflowId, fetch all executions; if the latest of them is successful, skip it, else restart it.
I am a little new to SWF, so is there a better way to accomplish the same?
Thanks
Yes, it is possible to filter by workflowId as described in
How do you get the state of a WorkflowExecution if all you have is a workflowId in Amazon SWF.
The answer is on lines [7] and [9] in the example, in the executions() method call. For your use case, though, since all you want are closed executions, you'll just alter the method call to include closed=True. So, to get all closed executions in the last 24 hours (the default):
In [7]: domain.executions(closed=True)
And for closed executions filtered by workflow id:
In [9]: domain.executions(workflow_id='my_wf_id', closed=True)
If you're not using boto.swf but an alternative library, refer to that library's API docs for the parameters to pass to the ListClosedWorkflowExecutions API.
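For instance, since the question mentions ListClosedWorkflowExecutionsRequest from the AWS SDK for Java, a rough sketch of the same call there might look like the following; the domain name and workflowId are placeholders, and note that the API requires either a start-time or a close-time filter:

import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflow;
import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClientBuilder;
import com.amazonaws.services.simpleworkflow.model.ExecutionTimeFilter;
import com.amazonaws.services.simpleworkflow.model.ListClosedWorkflowExecutionsRequest;
import com.amazonaws.services.simpleworkflow.model.WorkflowExecutionFilter;
import com.amazonaws.services.simpleworkflow.model.WorkflowExecutionInfo;
import com.amazonaws.services.simpleworkflow.model.WorkflowExecutionInfos;
import java.util.Date;

public class ClosedExecutionsByWorkflowId {
    public static void main(String[] args) {
        AmazonSimpleWorkflow swf = AmazonSimpleWorkflowClientBuilder.defaultClient();

        // Mandatory time filter: look at executions started in the last 24 hours.
        ExecutionTimeFilter startedInLastDay = new ExecutionTimeFilter()
                .withOldestDate(new Date(System.currentTimeMillis() - 24L * 60 * 60 * 1000));

        ListClosedWorkflowExecutionsRequest request = new ListClosedWorkflowExecutionsRequest()
                .withDomain("my-domain")                              // placeholder domain
                .withStartTimeFilter(startedInLastDay)
                .withExecutionFilter(new WorkflowExecutionFilter()
                        .withWorkflowId("my_wf_id"));                 // filter to one workflowId

        WorkflowExecutionInfos infos = swf.listClosedWorkflowExecutions(request);
        for (WorkflowExecutionInfo info : infos.getExecutionInfos()) {
            System.out.println(info.getExecution().getRunId() + " -> " + info.getCloseStatus());
        }
        // infos.getNextPageToken() can be used to page through larger result sets.
    }
}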