When a workflow is started again with the same workflowID, it gets a different runID. Is there a way to retrieve all such executions (each with a different runID) for a given workflow ID?
I explored the ListClosedWorkflowExecutionsRequest API, but it just lists all workflow executions, not those for a particular workflowID.
The problem I am trying to solve is: many workflows failed for some reason. When restarting them, I didn't include the correct time filter, so some were restarted while a few were skipped. What I am trying to do now is list all failed workflow IDs using ListClosedWorkflowExecutionsRequest, then for each workflowID fetch all of its executions; if the latest one succeeded, skip it, otherwise restart it.
I am a little new to SWF, so is there a better way to accomplish the same?
Thanks
Yes, it is possible to filter by workflowId, as described in
How do you get the state of a WorkflowExecution if all you have is a workflowId in Amazon SWF.
The answer is in lines In [7] and In [9] of the example: the executions() method call. For your use case, since all you want are closed executions, just alter the call to include closed=True. To get all closed executions from the last 24 hours (the default):
In [7]: domain.executions(closed=True)
And for closed executions filtered by workflow id:
In [9]: domain.executions(workflow_id='my_wf_id', closed=True)
If you're not using boto.swf but an alternative library, refer to the API docs for the parameters to pass to the ListClosedWorkflowExecutions API.
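Since the question mentions ListClosedWorkflowExecutionsRequest, here is a hedged sketch of the same filter in the AWS SDK for Java, combined with the "check latest, restart if failed" logic from the question. The domain and workflow id are placeholders, and note that this call requires a start-time (or close-time) filter alongside the execution filter:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflow;
import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClientBuilder;
import com.amazonaws.services.simpleworkflow.model.ExecutionTimeFilter;
import com.amazonaws.services.simpleworkflow.model.ListClosedWorkflowExecutionsRequest;
import com.amazonaws.services.simpleworkflow.model.WorkflowExecutionFilter;
import com.amazonaws.services.simpleworkflow.model.WorkflowExecutionInfo;
import com.amazonaws.services.simpleworkflow.model.WorkflowExecutionInfos;

public class SwfRestartCheck {

    /** True when the most recent close status is missing or not COMPLETED. */
    static boolean needsRestart(List<String> closeStatusesNewestFirst) {
        return closeStatusesNewestFirst.isEmpty()
                || !"COMPLETED".equals(closeStatusesNewestFirst.get(0));
    }

    public static void main(String[] args) {
        AmazonSimpleWorkflow swf = AmazonSimpleWorkflowClientBuilder.defaultClient();

        ListClosedWorkflowExecutionsRequest request = new ListClosedWorkflowExecutionsRequest()
                .withDomain("my-domain")                        // placeholder
                .withStartTimeFilter(new ExecutionTimeFilter()
                        .withOldestDate(new java.util.Date(0))) // widen as needed
                .withExecutionFilter(new WorkflowExecutionFilter()
                        .withWorkflowId("my-workflow-id"));     // placeholder

        WorkflowExecutionInfos infos = swf.listClosedWorkflowExecutions(request);

        // Sort newest first and inspect the latest execution's close status.
        List<String> statuses = infos.getExecutionInfos().stream()
                .sorted(Comparator.comparing(WorkflowExecutionInfo::getStartTimestamp).reversed())
                .map(WorkflowExecutionInfo::getCloseStatus)
                .collect(Collectors.toList());

        if (needsRestart(statuses)) {
            System.out.println("latest run did not complete -> restart this workflow");
        }
    }
}
```

Large result sets are paginated via nextPageToken, so a production version would loop until the token is empty.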
Is there any way to detect API call from Zapier to my app?
I've created two zaps on Zapier: creating tasks from Wrike in MyApp and vice versa.
I got an infinite loop: when I create a task in Wrike, it is automatically created in MyApp. But then Zapier detects the new task in MyApp and creates the same task in Wrike again, and so on.
I was thinking of adding a new field to the task object (createdFromZapier) and filtering by that field, but is there another way to handle this?
Here is the answer from the Zapier team:
There isn't a great way to do this — at best, you could set the User Agent header within your developer integration and then inspect the header on your API server side to detect when a request is coming from Zapier.
We have a help guide on avoiding Zap loops at https://zapier.com/help/troubleshoot/behavior/zap-is-stuck-in-a-loop which might help.
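Following the Zapier team's suggestion, the server-side check could be as simple as inspecting the User-Agent header. The "zapier" substring below is an assumption; match whatever User-Agent value you actually configure in your developer integration:

```java
public class ZapierRequestFilter {

    /**
     * Returns true when the User-Agent header indicates the request came
     * from Zapier. The "zapier" substring is an assumption -- use the value
     * you set in your Zapier developer integration.
     */
    static boolean isFromZapier(String userAgentHeader) {
        return userAgentHeader != null
                && userAgentHeader.toLowerCase().contains("zapier");
    }
}
```

In a servlet you would call isFromZapier(request.getHeader("User-Agent")) and, when it returns true, skip echoing the newly created task back to the other side, which breaks the loop.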
The Splunk dashboard allows you to run a post-process search that reuses a base search. I would like to know if the same can be achieved programmatically using the Splunk SDK for Java. I would really appreciate any pointers on this.
Given a base search, we would like to do the following two steps using the Splunk Java SDK:
1) execute the base search and get the results with 'post-process search 1',
2) get the results from the previously executed base search with 'post-process search 2' and do something.
I know step 1 is pretty straightforward, but I would like to know how I can take the results of the base search executed in step 1 and apply the second post-process search to them. Should I just get the results from the history? What is the best way to get the results from the base search job?
I had to build a dashboard with post process searches and look at what endpoints were being called using a recording proxy.
What happens is that a job with the base search is launched. Then, when results are retrieved, a post-processing search is provided. In the raw REST API this is the search parameter on the search job results (and preview results) endpoint.
Now I haven't tried it explicitly, but correspondingly, there's a setSearch method on the JobResultsArgs object (and similar for the JobResultsPreviewArgs object) that you'd pass to Job#getResults (or get preview results) assuming you don't just pass a simple Map with that version of the function signature.
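Putting that together, a hedged sketch against the Splunk Java SDK might look like the following. The host, credentials, and the searches themselves are placeholders; the key point is that the job is created once for the base search, and each post-process search is supplied at results-retrieval time via JobResultsArgs.setSearch:

```java
import java.io.InputStream;

import com.splunk.Job;
import com.splunk.JobResultsArgs;
import com.splunk.Service;
import com.splunk.ServiceArgs;

public class SplunkPostProcessDemo {

    // Placeholder searches -- substitute your own.
    static final String BASE_SEARCH = "search index=main sourcetype=access_combined";
    static final String POST_PROCESS_1 = "| stats count by status";
    static final String POST_PROCESS_2 = "| timechart count";

    public static void main(String[] args) throws Exception {
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setUsername("admin");      // placeholders
        loginArgs.setPassword("changeme");
        loginArgs.setHost("localhost");
        loginArgs.setPort(8089);
        Service service = Service.connect(loginArgs);

        // One job is created for the base search...
        Job job = service.getJobs().create(BASE_SEARCH);
        while (!job.isDone()) {
            Thread.sleep(500);
        }

        // ...then each post-process search is applied when retrieving
        // results, reusing the same job both times.
        JobResultsArgs args1 = new JobResultsArgs();
        args1.setSearch(POST_PROCESS_1);
        InputStream results1 = job.getResults(args1);

        JobResultsArgs args2 = new JobResultsArgs();
        args2.setSearch(POST_PROCESS_2);
        InputStream results2 = job.getResults(args2);
        // ... consume both streams, e.g. with a ResultsReaderXml ...
    }
}
```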
I'm developing a REST API using Restlet.
So far everything has been working just fine. However, I now encountered an issue with the Router mapping of URL to ServerResource.
I've got the following scenario:
GET /car returns a list of all cars
GET /car/{id} returns details about the car with the given id
GET /car/advancedsearch?param1=test should run a search across all cars with some parameters
The first two calls work without any problems. If I try to hit the third call though, the Restlet Router somehow maps it to the second one instead. How can I tell Restlet to instead use the third case?
My mapping is defined as follows:
router.attach("/car", CarListResource.class);
router.attach("/car/{id}", CarResource.class);
router.attach("/car/advancedsearch", CarSearchResource.class);
CarSearchResource is never invoked, but rather the request ends up in CarResource.
The router's default matching mode is set to Template.MODE_EQUALS, so that can't be causing it.
Does anyone have any further suggestions how I could fix it?
Please don't suggest to use /car with the parameters instead, as there's already another kind of search in place on that level. Also, I'm not in control of the API structure, so it has to remain as it is.
You need to add .setMatchingQuery(true) to that route so that it recognizes the query string at the end of the URI.
Router router = (Router) super.createInboundRoot();
TemplateRoute route1 = router.attach("/car/advancedsearch?{query_params}", MyResource.class);
route1.setMatchingQuery(true);
return router;
Note that this template uses the exact order defined in the route, i.e. advancedsearch comes first and the query_params follow.
I was able to solve this by simply reordering the attach statements:
router.attach("/car/advancedsearch", CarSearchResource.class);
router.attach("/car", CarListResource.class);
router.attach("/car/{id}", CarResource.class);
I have been using Facebook4j for a Facebook Graph API related requirement, and my requirement is quite simple.
Requirement: I need to search Facebook objects (all public objects: posts/comments/pages/etc.) for given keywords frequently and persist all results into the db.
Problem: Although the requirement looks straightforward, I am stuck handling pagination of the results and making later API calls without losing any items (posts/pages/comments) across consecutive calls.
Doing some research, I found that the Graph API provides several pagination methods, with cursor-based being the best and the recommended one. But unfortunately cursor-based pagination is not available for all kinds of objects, so I had to choose time-based pagination, which uses the until and since parameters.
Q1. Is my decision correct?
The following are sample previous and next URLs I get when I do a search API call using Facebook4j:
previous = https://graph.facebook.com/v1.0/search?limit=200&q=infographic&access_token=
[ACCESS_TOKEN]&since=1400152500&__previous=1,
next = https://graph.facebook.com/v1.0/search?limit=200&q=infographic&access_token=
[ACCESS_TOKEN]&until=1399983583
Say I did an API call; then, using the Facebook4j API, I should be able to use the fetchNext method and continue:
facebook.fetchNext()
But whenever I get an exception in my application, or when there are no further results, I assume I should be able to persist the since/until values and use them in later API calls to continue getting search results from where I stopped last time.
Q2. Is this assumption regarding pagination correct?
So I assume my future API calls would look something like the below. I am not sure whether to use 'since' or 'until'.
Q3. Is the way I call the API below to continue a previous search correct?
ResponseList<JSONObject> results = facebook.search("KEYWORD", new Reading().limit(200).until("1399983583"));
Also, I am not sure whether these results are paginated in such a way that I get the newest result set when I use the next URL / fetchNext().
Q4. Please clarify how exactly the next URL works in terms of pagination and future API calls.
If my approaches above are wrong, please suggest the best practices I should follow for this requirement.
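A sketch of how the checkpointed time-based pagination described above might look with Facebook4j. loadCheckpoint/saveCheckpoint/persist are hypothetical stand-ins for your db layer, and parseUntil just pulls the until value out of the "next" URL so it can be stored and reused after a crash:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import facebook4j.Facebook;
import facebook4j.FacebookFactory;
import facebook4j.Reading;
import facebook4j.ResponseList;
import facebook4j.internal.org.json.JSONObject;

public class SearchPaginator {

    /** Extracts the until parameter from a "next" URL, or null if absent. */
    static String parseUntil(String nextUrl) {
        Matcher m = Pattern.compile("[?&]until=([0-9]+)").matcher(nextUrl);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) throws Exception {
        Facebook facebook = new FacebookFactory().getInstance();

        Reading reading = new Reading().limit(200);
        String savedUntil = loadCheckpoint();          // hypothetical db read
        if (savedUntil != null) {
            reading.until(savedUntil);                 // resume where we stopped
        }

        ResponseList<JSONObject> page = facebook.search("infographic", reading);
        while (page != null && !page.isEmpty()) {
            persist(page);                             // hypothetical db write
            if (page.getPaging() == null || page.getPaging().getNext() == null) {
                break;                                 // no further pages
            }
            saveCheckpoint(parseUntil(page.getPaging().getNext().toString()));
            page = facebook.fetchNext(page.getPaging());
        }
    }

    // Hypothetical persistence helpers -- replace with your db layer.
    static String loadCheckpoint() { return null; }
    static void saveCheckpoint(String until) { }
    static void persist(ResponseList<JSONObject> page) { }
}
```

With the checkpoint saved before each fetchNext, an exception mid-run means at worst re-persisting one already-seen page rather than skipping items.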
I'm looking for something similar to RemovalListener/RemovalNotification - but notification of when values in the cache are modified. Notification would include the old value, as well as the new value that has just been added.
[update]
I only populate the cache via a CacheLoader (load & reload). The "source" of the cached elements is at times flaky (remote to the cache).
So the two primary reasons for also having the replacement element are:
1) Debug logging to indicate when/what values are actually retrieved from the remote source. This one could be accomplished in a class that does the remote retrieval.
2) Generating differences that can then be proactively pushed to (remote) clients, e.g. publishing changes via BlazeDS rather than requiring the clients to continuously "get".
It should be possible to implement this without additional notification via the reload method: get the current cache contents before fetching the new value, then compare the new value with the previous one and take additional action. I was looking for a more generic way to decouple the modification notification, though.
Thanks.
You could file a Guava feature request asking for a method to be added to RemovalNotification that would return the replacement value when the cause is REPLACED. But please provide as much detail as possible about your problem and why this is a good solution for it.
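In the meantime, the reload-based workaround described in the update could be sketched like this, assuming Guava's LoadingCache. fetchFromRemote and onModified are hypothetical placeholders for the remote lookup and the BlazeDS push:

```java
import java.util.Objects;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

public class ModificationAwareCache {

    /** True when a reload produced a different value than the one cached. */
    static boolean changed(Object oldValue, Object newValue) {
        return !Objects.equals(oldValue, newValue);
    }

    static LoadingCache<String, String> build() {
        return CacheBuilder.newBuilder()
                .refreshAfterWrite(1, TimeUnit.MINUTES)
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return fetchFromRemote(key);
                    }

                    // reload already receives the old value, so the diff can
                    // be computed here without any extra notification hook.
                    @Override
                    public ListenableFuture<String> reload(String key, String oldValue) {
                        String newValue = fetchFromRemote(key);
                        if (changed(oldValue, newValue)) {
                            onModified(key, oldValue, newValue);
                        }
                        return Futures.immediateFuture(newValue);
                    }
                });
    }

    // Hypothetical placeholders for the remote source and the client push.
    static String fetchFromRemote(String key) { return "value-for-" + key; }
    static void onModified(String key, String oldV, String newV) {
        System.out.println(key + ": " + oldV + " -> " + newV); // e.g. push via BlazeDS
    }
}
```

This covers both listed reasons (logging and pushing diffs) but only for loader-driven refreshes, which matches a cache populated exclusively through load/reload.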