Getting desirable JMeter reports from Java code

I'm currently struggling to get the JMeter reports I want from Java code.
My goal is to log latency and throughput to a file for each transaction, and then have a summary per scenario with average and max/min values for latency and throughput.
This is the code I currently have for reporting:
ResultCollector csvlogger = new ResultCollector(summer); // "summer" is presumably a Summariser instance
csvlogger.setFilename(csvLogFile);                       // CSV/.jtl file the results are written to
testPlanTree.add(testPlanTree.getArray()[0], csvlogger); // attach the collector under the Test Plan node
But this way it only logs information for a single transaction, there is no throughput, and the reported latency is simply 0 (with no decimal part).
It looks like this:
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,failureMessage,bytes,sentBytes,grpThreads,allThreads,Latency,IdleTime,Connect
2017/06/28 08:53:49.276,1014,Jedis Sampler,200,OK,Jedis Thread Group 1-1,text,true,,0,0,1,1,0,0,0
Does anyone know how I can tune this?
Thanks!

Only one transaction logged: the .jtl log file contains the execution of a single sampler. Try adding more threads and/or loops at the Thread Group level and you should see more results.
Latency is always zero for scripting-based samplers; you need to explicitly call the SampleResult.setLatency() method and set the desired value.
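For illustration, here is a minimal sketch of a custom Java Request sampler that sets latency explicitly (it assumes your "Jedis Sampler" is implemented this way; the class name and the Redis call are placeholders):

import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

public class JedisSampler extends AbstractJavaSamplerClient {

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();                      // starts the elapsed-time clock
        long before = System.currentTimeMillis();
        // ... issue the Redis call here ...
        long after = System.currentTimeMillis();
        result.setLatency(after - before);         // without this, the Latency column stays 0 in the .jtl
        result.sampleEnd();                        // stops the elapsed-time clock
        result.setSuccessful(true);
        result.setResponseCodeOK();
        return result;
    }
}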
Throughput is not recorded, it is calculated. You need to open the .jtl results file with e.g. an Aggregate Report or Summary Report listener to see the generated value. Take a look at the org.apache.jmeter.util.Calculator class source for the details if you prefer a programmatic, non-GUI approach.
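If you want the summary without opening the file in a GUI listener, a rough sketch like the one below (plain Java; the column positions and timestamp format match the sample log shown above) computes average/min/max latency and throughput the way JMeter does, i.e. number of samples divided by the time window:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.text.SimpleDateFormat;
import java.util.List;

public class JtlSummary {
    public static void main(String[] args) throws Exception {
        // Column layout from the header above: 0=timeStamp, 1=elapsed, 13=Latency
        SimpleDateFormat tsFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss.SSS");
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        long count = 0, sumLatency = 0, minLatency = Long.MAX_VALUE, maxLatency = 0;
        long windowStart = Long.MAX_VALUE, windowEnd = 0;
        for (String line : lines.subList(1, lines.size())) {   // skip the header row
            String[] f = line.split(",");                      // naive split; assumes no commas inside fields
            long start = tsFormat.parse(f[0]).getTime();
            long elapsed = Long.parseLong(f[1]);
            long latency = Long.parseLong(f[13]);
            windowStart = Math.min(windowStart, start);
            windowEnd = Math.max(windowEnd, start + elapsed);
            sumLatency += latency;
            minLatency = Math.min(minLatency, latency);
            maxLatency = Math.max(maxLatency, latency);
            count++;
        }
        double throughput = count / ((windowEnd - windowStart) / 1000.0); // samples per second
        System.out.printf("samples=%d avgLatency=%.1f ms min=%d ms max=%d ms throughput=%.2f/s%n",
                count, (double) sumLatency / count, minLatency, maxLatency, throughput);
    }
}

Run it against csvLogFile once per scenario to get the per-scenario summary; if several scenarios share one file, group the lines by the label column first.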

Related

Is it possible (and wise) to execute another "spark-submit" inside a JavaRDD?

I'm trying to execute a Spark program with spark-submit (in particular the GATK Spark tools, so the command is not literally spark-submit, but something similar): this program accepts only one input, so I'm trying to write some Java code in order to accept more inputs.
In particular, I'm trying to execute a spark-submit for each input through the pipe function of JavaRDD:
JavaRDD<String> bashExec = ubams.map(ubam -> par1 + "|" + par2)
.pipe("/path/script.sh");
where par1 and par2 are parameters that will be passed to script.sh, which will parse them (splitting on "|") and use them to execute something similar to spark-submit.
Now, I don't expect any speedup compared to the execution of a single input, because I'm calling other Spark functions; I just want to distribute the workload of several inputs across different nodes and get execution time that is linear in the number of inputs.
For example, the GATK Spark tool took about 108 minutes with a single input, so with my code I would expect two similar inputs to take roughly 216 minutes.
I noticed that the code "works", or rather I obtain the usual output on my terminal. But after at least 15 hours the task still hadn't completed and was still executing.
So I'm asking whether this approach (executing spark-submit through the pipe function) is stupid, or whether there are other errors?
I hope I have explained my issue clearly.
P.S. I'm using a VM on Azure with 28GB of Memory and 4 execution threads.
Is it possible
Yes, it is technically possible. With a bit of caution it is even possible to create a new SparkContext in a worker thread, but
Is it (...) wise
No. You should never do something like this. There is a good reason why Spark disallows nested parallelization in the first place. Anything that happens inside a task is a black box, therefore it cannot be accounted for during DAG computation and resource allocation. In the worst-case scenario the job will simply deadlock, with the main job waiting for the tasks to finish and the tasks waiting for the main job to release the required resources.
How to solve this? The problem is only roughly outlined, so it is hard to give precise advice, but you can:
Use a driver-local loop to submit multiple jobs sequentially from a single application.
Use threading and in-application scheduling to submit multiple jobs concurrently from a single application (a sketch of this and of the previous option follows the list).
Use an independent orchestration tool to submit multiple independent applications, each handling one set of parameters.
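A minimal sketch of the first two options (the input paths, pool size and per-input pipeline are hypothetical; the point is that plain Spark jobs are submitted from the driver, with no nested spark-submit):

import org.apache.spark.sql.SparkSession;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MultiInputDriver {
    public static void main(String[] args) throws InterruptedException {
        SparkSession spark = SparkSession.builder().appName("multi-input").getOrCreate();
        List<String> inputs = Arrays.asList("/data/input1.ubam", "/data/input2.ubam"); // hypothetical paths

        // Option 1: driver-local loop, one job after another
        // for (String in : inputs) { processOneInput(spark, in); }

        // Option 2: in-application scheduling; actions submitted from different
        // threads run as concurrent jobs within the same SparkContext.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (String in : inputs) {
            pool.submit(() -> processOneInput(spark, in));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.DAYS);
        spark.stop();
    }

    // Hypothetical per-input pipeline: whatever the single-input tool does, expressed
    // as ordinary Spark transformations instead of piping to a nested spark-submit.
    private static void processOneInput(SparkSession spark, String path) {
        long records = spark.read().textFile(path).javaRDD().count();  // placeholder action
        System.out.println(path + " -> " + records + " records");
    }
}

With option 2, resource sharing between the concurrent jobs is governed by the scheduler configuration (e.g. the FAIR scheduler), so execution time close to linear in the number of inputs is only realistic if the cluster actually has resources for the jobs to run side by side.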

Delta between query execution time and Java query call to finish

Context
Our container cluster is located in us-east1-c
We are using the following Java library: google-cloud-bigquery, 0.9.2-beta
Our dataset has around 26M rows and represents ~10G
All of our queries return less than 100 rows as we are always grouping on a specific column
Question
We analyzed the last 100 queries executed in BigQuery; these were all executed in about 2-3 seconds (we measured this by calling bq --format=prettyjson show -j JOBID and taking end time - creation time).
In our Java logs, though, most of the calls to bigquery.query block for 5-6 seconds (and 10 seconds is not out of the ordinary). What could explain the systematic gap between the query finishing in the BigQuery cluster and the results being available in Java? I know 5-6 seconds isn't astronomical, but I am curious whether this is normal behaviour when using the Java BigQuery cloud library.
I didn't dig to the point of analyzing the outbound calls with Wireshark. All our tests were executed in our container cluster (Kubernetes).
Code
QueryRequest request = QueryRequest.newBuilder(sql)
        .setMaxWaitTime(30000L)
        .setUseLegacySql(false)
        .setUseQueryCache(false)
        .build();
QueryResponse response = bigquery.query(request);
Thank you
Just looking at the code briefly here:
https://github.com/GoogleCloudPlatform/google-cloud-java/blob/master/google-cloud-bigquery/src/main/java/com/google/cloud/bigquery/BigQueryImpl.java
It appears that there are multiple potential sources of delay:
Getting query results
Restarting (there are some automatic restarts in there that can explain the delay spikes)
The frequency of checking for new results
It sounds like looking at Wireshark would give you a precise answer as to what is happening.
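To see where the time goes on the client side, a rough sketch along these lines (method names follow the docs for the 0.9.x-beta client and may differ in newer releases; the query itself is a placeholder) times the blocking call and the completion-polling loop separately:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryRequest;
import com.google.cloud.bigquery.QueryResponse;

public class QueryTiming {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        QueryRequest request = QueryRequest.newBuilder("SELECT ...")   // placeholder for your grouped query
                .setMaxWaitTime(30000L)
                .setUseLegacySql(false)
                .setUseQueryCache(false)
                .build();

        long t0 = System.nanoTime();
        QueryResponse response = bigquery.query(request);              // the blocking client call
        long t1 = System.nanoTime();

        // If the job did not finish within maxWaitTime, the client has to poll for completion.
        while (!response.jobCompleted()) {
            Thread.sleep(500);                                         // the polling interval adds client-side latency
            response = bigquery.getQueryResults(response.getJobId());
        }
        long t2 = System.nanoTime();

        System.out.printf("query() call: %d ms, extra polling: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}

Comparing these numbers with the server-side end time - creation time from bq show should tell you whether the gap comes from retries/polling in the client or from fetching the (small) result over the network.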

Instrumenting JVM code to submit timers and counters to AWS CloudWatch

How do you instrument timers and counters in JVM code such that they are passed to AWS CloudWatch? I am interested in manual instrumentation, e.g. adding timer.start(), timer.stop(), and counter.incr() calls where desired.
Our current solution is to use Dropwizard Metrics for the instrumentation, and this project does a great job of sending the metrics to AWS. The problem with this stack is that Dropwizard Metrics has its own internal aggregates that are based on an exponentially decaying reservoir. As others have mentioned, this behaviour is not easily modified. The result is confusing and incorrect data in AWS CloudWatch. For example:
Averages slowly creep toward the actual value, resulting in a sloped line where a flat line is expected
Min and max values get temporarily stuck at some prior value, resulting in a perfectly flat line where a continuously changing value is expected
I think a better solution might be to aggregate no more than a minute of data before submitting it to AWS.
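As an illustration of that idea, here is a rough sketch (AWS SDK for Java v1; the metric name and namespace are made up) that accumulates raw timer values and, once a minute, publishes them as a single pre-aggregated StatisticSet, so CloudWatch reports your real min/max/average instead of a decaying-reservoir approximation:

import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;
import com.amazonaws.services.cloudwatch.model.StandardUnit;
import com.amazonaws.services.cloudwatch.model.StatisticSet;

public class MinuteTimer {
    private final AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();
    private double sum, max, count;
    private double min = Double.MAX_VALUE;

    // Record one timed operation, in milliseconds.
    public synchronized void update(double millis) {
        sum += millis;
        min = Math.min(min, millis);
        max = Math.max(max, millis);
        count++;
    }

    // Call once a minute (e.g. from a ScheduledExecutorService) to flush the raw aggregate.
    public synchronized void flush() {
        if (count == 0) return;
        StatisticSet stats = new StatisticSet()
                .withSampleCount(count)
                .withSum(sum)
                .withMinimum(min)
                .withMaximum(max);
        MetricDatum datum = new MetricDatum()
                .withMetricName("request-latency")          // hypothetical metric name
                .withUnit(StandardUnit.Milliseconds)
                .withStatisticValues(stats);
        cloudWatch.putMetricData(new PutMetricDataRequest()
                .withNamespace("MyApp")                     // hypothetical namespace
                .withMetricData(datum));
        sum = max = count = 0;
        min = Double.MAX_VALUE;
    }
}

A counter is even simpler: sum the increments for the minute and publish one MetricDatum with withValue(total) and StandardUnit.Count.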

Slowness in reading the large ResultSet

I'm having problems generating a report whose result reaches more than 500,000 rows. Believe me, this result is already filtered.
The query (DB2) runs almost instantly, but iterating over the ResultSet is absurdly slow.
I'm doing several tests to try to improve this process, but so far without success.
- At first I was converting the data directly into the bean (used for report generation), but it is very slow and the database times out.
- I tried turning it into a simpler process for testing (ResultSet to HashMap), unsuccessfully.
- I used the setFetchSize configuration (2000) on the statement.
- I looked into the possibility of using threads, but ResultSet does not support thread-safe access.
I have already modified the database timeout to increase the processing time, but my problem was not resolved.
Anyway, I have already tried several possibilities. Does anyone have any tips or a solution to my problem?
First of all, let me be clear:
Reporting and report-generation tasks should never be done on the application DB.
Application and transactional DBs are designed for fast transactions that don't involve heavy result fetching and processing. Those tasks should be handled on a DW server or on standby replicas.
Second,
reporting logic should be processed during less crowded hours (when the system is not used by users, i.e. at night).
If possible, put your processing logic on the DB side in the form of procedures (the maths part) with efficient queries, to improve performance in terms of both processing and data transfer.
Try to collect reports periodically using triggers/scheduled jobs etc., and when creating reports use those intermediate reports instead of the DB (as you said, query execution is not a problem, but this will save iterating over a large set). Since you can reuse values from the intermediate reports, the iteration frequency will be lower.
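If you do keep reading from the application DB for now, streaming the result instead of materializing beans usually helps with a 500,000-row set: a forward-only, read-only cursor, a reasonable fetch size, and writing each row straight to the report output. A rough sketch (connection string, query and column names are made up):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ReportExport {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:db2://host:50000/REPORTS", "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT col_a, col_b, col_c FROM report_view WHERE period = ?",   // hypothetical query
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
             BufferedWriter out = new BufferedWriter(new FileWriter("report.csv"))) {

            ps.setFetchSize(2000);          // fetch rows in chunks instead of one network round trip per row
            ps.setString(1, "2017-06");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Write each row directly; no intermediate bean or HashMap is kept in memory.
                    out.write(rs.getString(1) + ";" + rs.getString(2) + ";" + rs.getString(3));
                    out.newLine();
                }
            }
        }
    }
}

The point is a single pass over the ResultSet with constant memory use; if the bean-mapping or report-rendering step is the slow part, that will show up clearly once the plain export is fast.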

How to tell MapReduce how many mappers to use at the same time?

I am writing an indexing app for MapReduce.
I was able to split the inputs with NLineInputFormat, and now I've got a few hundred mappers in my app. However, only 2 per machine are active at the same time; the rest are "PENDING". I believe that such behavior slows the app down significantly.
How do I make Hadoop run at least 100 of them at the same time per machine?
I am using the old Hadoop API syntax. Here's what I've tried so far:
conf.setNumMapTasks(1000);
conf.setNumTasksToExecutePerJvm(500);
Neither of those seems to have any effect.
Any ideas how I can make the mappers actually RUN in parallel?
JobConf.setNumMapTasks() is just a hint to the MR framework, and I am not sure of the effect of calling it. In your case the total number of map tasks across the whole job should be equal to the total number of lines in the input divided by the number of lines configured in the NLineInputFormat. You can find more details on the total number of map/reduce tasks across the whole job here.
The description for mapred.tasktracker.map.tasks.maximum says
The maximum number of map tasks that will be run simultaneously by a task tracker.
You need to configure mapred.tasktracker.map.tasks.maximum (which defaults to 2) to change the number of map tasks run in parallel on a particular node by the task tracker. I could not find the documentation for 0.20.2, so I am not sure whether the parameter exists or whether the same parameter name is used in the 0.20.2 release.
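For reference, the change would look something like the snippet below in each TaskTracker's mapred-site.xml, followed by a TaskTracker restart (100 is only an illustrative value; the practical limit is set by the node's memory and CPU, and the property name should be checked against your exact release):

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>100</value>
</property>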
