Is there any maximum limit for ActionProducer.MaxActionTime in the Jemmy library?

As described at http://wiki.netbeans.org/Jemmy_Operators_Environment, the default value of ActionProducer.MaxActionTime is 10000 ms.
I need to increase it to 120000 ms, so I use the following code:
JemmyProperties.setCurrentTimeout("ActionProducer.MaxActionTime", 120000);
When I run the code in debug mode the value is indeed 120000, but I still get the following error:
"Menu pushing: (JMenuItem with text "Modules", JMenuItem with text
"Corporate entity") (ActionProducer.MaxActionTime)" action has not been
produced in 60005 milliseconds
Is 60000 ms the maximum value for ActionProducer.MaxActionTime?
UPDATE:
Every instance of a class implementing org.netbeans.jemmy.Timeoutable can have its own timeout values, so I checked the timeout of the instance that produces the error:
menuBar.getTimeouts().getTimeout("ActionProducer.MaxActionTime")
but the result was the same: it is 120000 ms, and the action still fails at 60000 ms.

Despite the fact that the error message says the "(ActionProducer.MaxActionTime)" action has not been produced, there is another timeout that actually governs this action time:
JMenuOperator.PushMenuTimeout
Even if I set:
JemmyProperties.setCurrentTimeout("JMenuOperator.PushMenuTimeout", 50);
The error is:
"Menu pushing: (JMenuItem with text "Modules", JMenuItem with text
"Corporate entity") (ActionProducer.MaxActionTime)" action has not been
produced in 51 milliseconds
So do not believe Jemmy's log messages; look for the right timeout instead.
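The practical fix, then, is presumably to raise that timeout to the desired value. A minimal sketch, using the same JemmyProperties call as in the question and its 120000 ms target:

// The timeout that actually limits menu pushing is PushMenuTimeout,
// even though the error message names ActionProducer.MaxActionTime.
JemmyProperties.setCurrentTimeout("JMenuOperator.PushMenuTimeout", 120000);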

Related

Cannot create connection to DynamoDB in Java AWS lambda function

I have a Lambda function that I'm trying to use to connect to a DynamoDB table. I'm using this code to establish the connection:
...
context.getLogger().log("Before create client..");
AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                "https://dynamodb.ap-southeast-2.amazonaws.com", "ap-southeast-2"))
        .build();
context.getLogger().log("After create client..");
...
The output I have from the function is as follows:
==================== FUNCTION OUTPUT ====================
{"errorMessage":"2017-07-28T01:11:34.092Z aeee6505-7331-11e7-b28b-db98038611cc Task timed out after 5.00 seconds"}
==================== FUNCTION LOG OUTPUT ====================
START RequestId: aeee6505-7331-11e7-b28b-db98038611cc Version: $LATEST
Before create client..END RequestId: aeee6505-7331-11e7-b28b-db98038611cc
REPORT RequestId: aeee6505-7331-11e7-b28b-db98038611cc Duration: 5003.51 ms Billed Duration: 5000 ms Memory Size: 256 MB Max Memory Used: 62 MB
2017-07-28T01:11:34.092Z aeee6505-7331-11e7-b28b-db98038611cc Task timed out after 5.00 seconds
As you can see, it times out while building the client and never prints the second log statement. Is there a reason it would time out rather than throwing an exception, e.g. if there's an error with the IAM role or something? The DynamoDB region and Lambda region are the same (Sydney - ap-southeast-2), so I'd have thought this would work.
The IAM role the lambda function is using has the following permissions:
AmazonDynamoDBReadOnlyAccess
AmazonS3ReadOnlyAccess
AWSLambdaBasicExecutionRole
Fixed it: bumped up the memory of the Lambda function to 1024 MB. Seriously not sure why that was required, given the memory used was always around 60-70 MB :/ (Likely because Lambda allocates CPU power in proportion to the configured memory, so a small memory setting also means less CPU for initializing the SDK client.)
It's a memory issue only. I changed the Lambda function to 1024 MB and it started working fine.
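As an aside, one way to make such failures surface as an exception instead of a whole-invocation timeout is to tighten the SDK's own timeouts. A hedged sketch against the AWS SDK for Java v1; the timeout values are illustrative assumptions:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

// Fail fast: a connectivity problem then throws an SdkClientException
// well before the Lambda's 5-second timeout fires.
ClientConfiguration clientConfig = new ClientConfiguration()
        .withConnectionTimeout(2000)  // ms to establish the connection
        .withSocketTimeout(2000)      // ms to wait for response data
        .withMaxErrorRetry(1);        // limit the SDK's internal retries

AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.standard()
        .withClientConfiguration(clientConfig)
        .withRegion("ap-southeast-2")
        .build();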

How to check time on all the nodes in hadoop cluster

I am running a Spark job on a Hadoop cluster, and the job occasionally fails with the exception:
exception : Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], main() threw exception, begin > end in range (begin, end): (1494159709088, 1494159706071)
The job ran successfully on the rerun.
After searching on Google, it might be clock skew between the Oozie server host and the launcher host; the two timestamps in the exception differ by about 3 seconds, which is consistent with skew.
Is there a way I can check if there is clock skew? Or how can I check whether the time on all the nodes is in sync?
Thanks
ntptime command output:
ntp_gettime() returns code 0 (OK)
time dcb9b19b.a2328f64 Sun, May 7 2017 14:45:47.633, (.633584090),
maximum error 434990 us, estimated error 815 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
modes 0x0 (),
offset 176.871 us, frequency -25.666 ppm, interval 1 s,
maximum error 434990 us, estimated error 815 us,
status 0x2001 (PLL,NANO),
time constant 10, precision 0.001 us, tolerance 500 ppm,
ntpstat command output:
synchronised to NTP server (174.68.168.57) at stratum 3
time correct to within 77 ms
polling server every 1024 s
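Alternatively, the offset can be checked programmatically from each node. A minimal sketch, assuming Apache Commons Net (commons-net) is on the classpath and using a placeholder NTP server name:

import java.net.InetAddress;
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

public class ClockSkewCheck {
    public static void main(String[] args) throws Exception {
        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout(3000);  // ms; avoid hanging if NTP is unreachable
        TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
        info.computeDetails();  // fills in offset and round-trip delay
        // Run this on every node; offsets differing by seconds indicate skew.
        System.out.println("Local clock offset: " + info.getOffset() + " ms");
        client.close();
    }
}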

Apache-Flink Quickstart - reading CSV file error : Futures timed out after [10000 milliseconds]

I want to read a CSV file locally using the Flink API, with the following code:
String csvPath = "data/weather.csv";
List<Tuple2<String, Double>> csv = env.readCsvFile(csvPath)
        .types(String.class, Double.class)
        .collect();
I tried files of different sizes (from 800 MB to 6 GB). Sometimes the operation completes successfully and sometimes it does not, because of the following timeout exception:
Exception in thread "main" java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:153)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.ready(package.scala:169)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.shutdown(FlinkMiniCluster.scala:439)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.stop(FlinkMiniCluster.scala:408)
at org.apache.flink.client.LocalExecutor.stop(LocalExecutor.java:127)
at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:195)
at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:91)
at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:923)
at org.apache.flink.api.java.DataSet.collect(DataSet.java:410)
at org.apache.flink.simpleCSV.run(simpleCSV.java:83)
How can I fix this problem? Can I increase this timeout programmatically? Or should I put a config file somewhere? Is there a specific heap size I should set based on the file size?
collect() transfers the data from the cluster to the local client. This only works for very small data sets (< 10 MB).
If you have larger data sets, you need to process them on the cluster and emit the results through an output format, e.g., write them to a file, as sketched below.
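A minimal sketch of that approach (Flink DataSet API, same types as in the question; the output path is a placeholder):

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.readCsvFile("data/weather.csv")
        .types(String.class, Double.class)
        .writeAsCsv("data/weather-out", "\n", ",");  // written by the workers, not collected to the client
env.execute("write-csv-instead-of-collect");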
If you are debugging this program, you can set a breakpoint in the constructor of org.apache.flink.api.java.LocalEnvironment (the constructor with config) and run the following command to change the timeout to 200 seconds (Alt+F8 in IntelliJ IDEA):
config.setString("akka.ask.timeout", "200 s")
To find the LocalEnvironment class in IntelliJ IDEA, press Ctrl+N, check "Include non-project classes" in the pop-up window, then type "LocalEnvironment" in the edit box.
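Outside a debugger, the same setting can also be applied by constructing the local environment with an explicit Configuration (a sketch; the key and value are taken from the answer above):

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

Configuration config = new Configuration();
config.setString("akka.ask.timeout", "200 s");
ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(config);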

Websphere HTTP outbound connection pool usage

I have an application running on WebSphere Application Server 6.1.0.43, and I'm having slowdown issues when trying to invoke a remote service.
The slowdown is in the findGroupAndGetConnection method of the OutboundConnectionCache class.
According to the IBM APAR PK94494:
The delay occurs after the client-side JAX-RPC handler (if present) is invoked and before the actual SOAP message is sent to the provider.
Because the delay occurs in the IBM web services engine, this problem
can be difficult to detect.
A com.ibm.ws.webservices.engine.transport.*=all trace will show entries similar to these which repeat:
[8/19/09 18:08:29:658 GMT] 00000047 OutboundConne 1 Enter:
WSWS3595I: Current pool size: 25. Connections-in-use size: 0.
Configured pool size: 25
In addition, that same trace spec will show long delays in executing
the .findGroupAndGetConnection() method:
[8/19/09 18:08:03:428 GMT] 00000047 OutboundConne >
OutboundConnectionCache.findGroupAndGetConnection()
WAITING_THREADS_THRESHOLD is 5 Entry
[8/19/09 18:08:38:358 GMT] 00000047 OutboundConne <
OutboundConnectionCache.findGroupAndGetConnection() Exit
And they recommend the following:
Reduce the 'com.ibm.websphere.webservices.http.connectionPoolCleanUpTime' property from the default of 180 to 120 seconds.
Increase the max connections 'com.ibm.websphere.webservices.http.maxConnection' property from the default of 25 to 50. This will also require increasing the web container thread pool size to 100.
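Expressed as generic JVM arguments (an assumption: these WebSphere web services HTTP transport properties are normally configured as JVM custom properties in the admin console), the recommended values would look like:

-Dcom.ibm.websphere.webservices.http.connectionPoolCleanUpTime=120
-Dcom.ibm.websphere.webservices.http.maxConnection=50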
Before changing the default properties I decided to monitor the web container thread usage, and I noticed that the maximum thread pool size (50) is never reached, but the minimum pool size (10) is reached very often, forcing connections to be destroyed and recreated.
Could running over the minimum pool size cause this slowdown? Should I increase the minimum pool size? Or is my problem something other than the HTTP outbound connection pool?

ADF application hangs for some time

In my Oracle ADF application, a JTable is added to a panel, and its data appears only after a pause of about 6 seconds. When I look at the jbo debug log output, it shows the following entries:
[6024] (47) PCollNode.checkForSplit(1597) $$added root$$ id=-2
[6025] (1108) PCollNode.checkForSplit(1597) $$added root$$ id=-73
The time is spent while these log entries appear. What are these logs, and how can I avoid this?
