I am using a Jenkins pipeline to run tests in parallel. The problem appears when the tests are sent to ReportPortal: they all end up in separate launches. What I am trying to do is set the launch name (the launch number, to be precise) for the tests manually, so they would all be in one launch.
I have looked here for answers but only found some for NUnit and TestNG (which doesn't help me, since I am running separate instances of the program). I am using a Java main class to run each test in the pipeline, and I read that I can set the launch name as an environment variable. Sadly, I couldn't find any information on what the implementation looks like. My question is: is it even possible to set the launch name without TestNG, and if it is possible with an environment variable, how should I use that variable in the runner method to enforce the launch name?
java -Dmaven.clean.skip=true -Dbrowser=firefox -Dos=linux -Drun.tags=#CreateEntity -jar target/standalone/web-tests.jar
This is my setup for each test (the run tag changes, obviously); the Cucumber glue and the ReportPortal plugin are configured in the runner method.
TestNG is not mandatory for this. You can find the JVM-based client integration configs here: https://reportportal.io/docs/JVM-based-clients-configuration
This means that if you use Cucumber-JVM (which has JUnit under the hood), you can use any related parameter.
To specify the launch name, you can set it in the reportportal.properties file or via the command line, as -Drp.launch=zzz
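For example, assuming the ReportPortal agent on the classpath picks up rp.* system properties, the command from the question could simply gain that property (the launch name zzz is just a placeholder):

java -Dmaven.clean.skip=true -Dbrowser=firefox -Dos=linux -Drun.tags=#CreateEntity -Drp.launch=zzz -jar target/standalone/web-tests.jar

or, equivalently, in reportportal.properties:

rp.launch = zzz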
But that will not resolve the issue for multiple threads. In order to have all parallel threads reported into one launch, you can do it in one of two ways:
Share the launchID across threads. This means that you start the launch at ReportPortal (as part of your test runner, or as a Jenkins pre-step plus a cURL request), receive the launchID, and share it with the other threads/runners. The runners will use this id to post data instead of creating a new launch for each thread. At the end, add a post-step to finish the launch.
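As an illustration, a Jenkins pre-step and post-step could look roughly like the sketch below. The endpoint paths come from the ReportPortal API, but the host, project name, token placeholder and the exact payload field names depend on your ReportPortal version, so treat this as a sketch rather than a copy-paste recipe.

curl -X POST "https://rp.example.com/api/v1/my_project/launch" \
     -H "Authorization: bearer <api-token>" \
     -H "Content-Type: application/json" \
     -d '{"name": "web-tests", "start_time": "2019-01-01T10:00:00.000Z"}'
# the response contains the launch id; pass it to every runner, then after all jobs have finished:
curl -X PUT "https://rp.example.com/api/v1/my_project/launch/<launch-id>/finish" \
     -H "Authorization: bearer <api-token>" \
     -H "Content-Type: application/json" \
     -d '{"end_time": "2019-01-01T11:00:00.000Z"}'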
Merge launches via the UI or the API. Once all executions have completed, you can merge them via the UI. Or you can collect the launchIDs during the parallel sessions and, after all executions have completed, run an API call to merge the launches.
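Again as a rough sketch only (the merge endpoint exists in the API, but field names vary slightly between versions), a merge call could look like this:

curl -X POST "https://rp.example.com/api/v1/my_project/launch/merge" \
     -H "Authorization: bearer <api-token>" \
     -H "Content-Type: application/json" \
     -d '{"name": "web-tests-merged", "launches": [101, 102, 103], "merge_type": "BASIC"}'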
That is relevant for ReportPortal v1-v4.
For version 5+ of ReportPortal, we plan to minimize this effort via the Re-Run feature: https://github.com/reportportal/reportportal/issues/363
Test runners will share the launchID by default via a file on local storage, and if any other parallel thread starts in this environment, that launchID will be used for reporting automatically.
This still does not cover the case where parallel executions are started on multiple VMs, but we will try to address that case as well.
I'm pretty new to TestNG, coming from a Cucumber background.
In my current project, the Jenkins jobs are configured with Maven and TestNG, using Java and Selenium for the scripting.
If any job, say one with 30 tests that takes 2 hours to complete, is abruptly terminated for some reason in the last minute, I do not get the results of any of the tests that ran successfully, and I am forced to run the entire job again.
All I see is the error stack trace and the result:
Results :Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
I am sure there is a better way of designing this, hoping for better approaches.
How can I make sure I do not lose the results of the tests that did run (whether they passed or failed)?
Is a test management tool, or some entity to store the runtime results, a mandatory requirement, or does TestNG have some provision built in?
How can I make sure I do not lose the results of the tests that did run (whether they passed or failed)?
There are usually two ways in which you can build reporting into your TestNG driven tests.
Batched mode - This is usually how all TestNG reports are built. A listener which implements the org.testng.IReporter interface is created, and within its generateReport(), all the logic of consolidating the test results into a report is carried out.
Realtime mode - This is usually done by implementing the TestNG listener org.testng.IInvokedMethodListener and then, within its afterInvocation(), doing the following:
Check the type of the incoming org.testng.IInvokedMethod (to see if it is a configuration method or a @Test method) and handle these kinds of methods differently (if the report needs to show them separately). Then check the status of the org.testng.ITestResult and, based on the status, show them as PASS/FAIL/SKIPPED.
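As an illustration, a minimal realtime listener might look like the sketch below. The log file name and the line format are placeholders; a real implementation would feed a report template or a result store instead.

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class RealtimeResultListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult result) {
        // nothing to do before the method runs
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult result) {
        String kind = method.isTestMethod() ? "TEST" : "CONFIG";
        String status;
        switch (result.getStatus()) {
            case ITestResult.SUCCESS: status = "PASS"; break;
            case ITestResult.FAILURE: status = "FAIL"; break;
            case ITestResult.SKIP:    status = "SKIPPED"; break;
            default:                  status = "UNKNOWN";
        }
        // persist immediately so the result survives an aborted run
        try (java.io.FileWriter out = new java.io.FileWriter("realtime-results.log", true)) {
            out.write(kind + " " + result.getMethod().getMethodName() + " -> " + status + System.lineSeparator());
        } catch (java.io.IOException e) {
            e.printStackTrace();
        }
    }
}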
IReporter implementations are run at the end, after all the tests have run (which is why I call this the batched mode). So if something crashes towards the end, but before the reporting phase is executed, you lose all execution data.
So you might want to try and build a realtime reporting model. You can take a look at the RuntimeReporter report that SeLion uses. It's built on the realtime model.
Is a test management tool, or some entity to store the runtime results, a mandatory requirement, or does TestNG have some provision built in?
There are no such mandatory requirements that TestNG places. As explained above, it all boils down to how you construct your reports. If you construct them in a realtime fashion, you can leverage a templating engine such as Velocity, Freemarker, or Thymeleaf to build your reporting template and then use the IInvokedMethodListener to inject values into it, so that it can be rendered easily.
Read more here for a comparison of the templating engines so that you can choose what fits your needs.
Context: I am working on a project that involves Android-controlled hardware and an iOS app that talks to that Android device via websocket. We are in good shape in terms of lower-level (API, unit, contract) testing, but there's nothing to help us with the UI part of it.
UI automation, especially end-to-end, is not my favorite way of testing because it is flaky and slow, and I believe its purpose is only to guarantee that the main user flows are executable rather than to cover every single piece of functionality.
So I developed a suite that includes both the Android and the iOS code and page objects, but right now the only thing I can do is run each one of them individually:
Start the Appium server and Appium driver for Android, then run the Android app suite
Start the Appium server and Appium driver for iOS, then run the iOS app suite
But that is not quite what I want. Since this is going to be the only test case, I want it to be fully end-to-end: start the Appium server, start the Android server, start the Appium drivers for both platforms, and run a test that places an action on iOS and verifies that Android executes it.
I don't want to have someone manually running this thing and looking at both devices. If this doesn't work, the Android and iOS suites are going to run separately, relying on mocked counterparts.
So I am throwing it here to the community because none of the test engineering groups I posted to were able to come up with an answer.
I need to know if anyone has ever done or seen this, to shed some light on it, or if anyone knows how to do it.
Can Steve Jobs and Andy Rubin talk?
I would look into starting 2 Appium instances via the command line on different ports and then connecting each suite to a given Appium instance. At that point you just need to thread each suite properly so that you can test your code. To do that you will need to add dependencies (which can be easily done using TestNG).
Steps:
1) Create a thread for the iOS suite and one for the Android suite
2) Run each suite on a different Appium session (i.e. different ports)
- You will need to know how to run Appium from the command line for this
3) Set up your tests to depend on one another (I recommend TestNG as the framework)
4) Use threading logic to wait for tests to finish before starting new ones. Yields and timeouts will be very useful, as will TestNG dependencies; they will save your life given what you are doing.
NOTE: Appium has a timeout feature: if a session does not receive a command within 60 seconds (the default), the session is destroyed. Make sure you increase that timeout or find a way to turn it off.
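Putting steps 2-4 and the timeout note together, a rough sketch with the Appium Java client and TestNG could look like the following. The ports, capability values and page-object calls are placeholders, and both Appium servers are assumed to have been started beforehand from the command line (for example appium -p 4723 and appium -p 4724).

import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

import java.net.URL;

public class CrossPlatformTest {

    private AndroidDriver androidDriver;
    private IOSDriver iosDriver;

    @BeforeClass
    public void setUp() throws Exception {
        // Android session against the Appium server on port 4723
        DesiredCapabilities androidCaps = new DesiredCapabilities();
        androidCaps.setCapability("platformName", "Android");
        androidCaps.setCapability("deviceName", "android-device");   // placeholder
        androidCaps.setCapability("app", "/path/to/app.apk");        // placeholder
        androidCaps.setCapability("newCommandTimeout", 600);         // avoid the 60 s default
        androidDriver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), androidCaps);

        // iOS session against the Appium server on port 4724
        DesiredCapabilities iosCaps = new DesiredCapabilities();
        iosCaps.setCapability("platformName", "iOS");
        iosCaps.setCapability("deviceName", "iPhone");                // placeholder
        iosCaps.setCapability("app", "/path/to/app.ipa");             // placeholder
        iosCaps.setCapability("newCommandTimeout", 600);
        iosDriver = new IOSDriver(new URL("http://127.0.0.1:4724/wd/hub"), iosCaps);
    }

    @Test
    public void placeActionOnIos() {
        // drive the iOS app via its page objects here
    }

    @Test(dependsOnMethods = "placeActionOnIos")
    public void verifyActionOnAndroid() {
        // assert on the Android app via its page objects here
    }

    @AfterClass(alwaysRun = true)
    public void tearDown() {
        if (androidDriver != null) androidDriver.quit();
        if (iosDriver != null) iosDriver.quit();
    }
}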
Additionally, as a recommendation, I would advise using TestNG over JUnit. JUnit is a unit testing framework, meaning you are testing specific functional units. That is not ideal for app automation, because many areas of an app depend on prior functionality. For example, if you have a login screen and the login functionality is currently broken, you don't want to run all of the tests that need the user to be logged in. That would not only cause a lot of fright when a large portion of your tests fail, it would also make it harder to track down why they failed. Instead, if all of these tests depend on the login feature passing, then when the login fails there is a single error which can be fixed, and all the tests that depend on the login feature don't run when you already know they are going to fail.
Hope this process helps, sorry I obviously can't send out code in this as it would take hours for me to type/figure out.
Problem solved; it was as simple as it looked.
What I did was implement an abstract class that builds the drivers for both Android and iOS with their capabilities and specific Appium ports, instantiating their respective page objects as well. All the test classes extend this abstract class.
Then I divided the suite in 3 pieces:
One for Android only, which only accesses the page objects for Android;
One for iOS, which also accesses only the page objects for iOS;
And a third test that spins up both iOS and Android and controls them both.
To avoid always starting two Appium servers, and also to avoid always downloading the latest app versions for both Android and iOS, I created Gradle tasks for each platform, so the CI jobs can call only the task that prepares the platform they have to test at a given moment.
I am currently working on an Eclipse plug-in for a static code analyzer. The analyzer is written in Java. Until now, the Eclipse plug-in used its own launch configuration type as well as a subclass of JavaLaunchDelegate to execute the code analyzer in a separate process. The Eclipse plug-in and the code analyzer communicated via stdin and stdout of the new process. It was quite ugly :-P
Now, we aim to clean this up. First, we converted the code analyzer to be not only a jar file, but also an Eclipse plug-in. Second, we replaced the stdio-based communication with a proper Java interface: the code analyzer offers an API to the Eclipse plug-in. This all works fine.
However, the Eclipse plug-in still uses its own launch configuration type with its subclass of JavaLaunchDelegate to run the analyses. Since the code analyzer itself is now an Eclipse plug-in, the analysis is done in the same process, yet the plug-in still launches the extra process with the code analyzer without using it.
Question
What do we still need from the old setup?
I am quite sure we can convert the JavaLaunchDelegate to a simple LaunchConfigurationDelegate. This should prevent the Eclipse plug-in from launching the useless process.
Next, in the plugin.xml, we declare our own launch configuration type like so:
<extension
      point="org.eclipse.debug.core.launchConfigurationTypes">
   <launchConfigurationType
         delegate="com.example.LaunchDelegate"
         id="com.example.launch.config"
         modes="run,debug"
         name="Launch"
         sourceLocatorId="org.eclipse.jdt.launching.sourceLocator.JavaSourceLookupDirector"
         sourcePathComputerId="org.eclipse.jdt.launching.sourceLookup.javaSourcePathComputer">
   </launchConfigurationType>
</extension>
Here, I am not sure whether we can remove the sourceLocatorId and sourcePathComputerId attributes: The launch configuration still launches Java code, but it is no longer launched in a separate process. Do these attributes make sense when they are used with a launch delegate that is not a JavaLaunchDelegate?
Finally, I don't know whether it is a good idea to still use a launch configuration at all. This is because we don't really launch an extra process, but an operation that is executed in the Eclipse process. Is it appropriate to use a launch configuration for this use case? Also, we currently use a subclass of the AbstractLaunchConfigurationTabGroup to configure the parameters of the analysis. Is there an alternative to the own launch configuration type that allows us to launch an operation in the Eclipse process and to provide parameters via a GUI for this operation?
Question Summary
1) Can we replace the JavaLaunchDelegate with a simple LaunchConfigurationDelegate?
2) Can we remove the sourceLocatorId and sourcePathComputerId attributes from our own launch configuration type declaration?
3) Is it appropriate to use a launch configuration to execute a static code analysis that runs in the Eclipse process?
4) If it is not, is there an alternative to our own launch configuration type that allows us to launch an operation in the Eclipse process and to provide parameters via a GUI for this operation?
We now use a simple LaunchConfigurationDelegate, and we removed the sourceLocatorId and sourcePathComputerId attributes from our own launch configuration type declaration. This does indeed prevent the unnecessary process. Also, we did not notice any issues with debugging. Thus, I consider questions 1 and 2 solved. Regarding questions 3 and 4: the simple launch configuration now works fine for us, so we stick with it.
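For reference, a minimal sketch of such a delegate is shown below; the attribute key and the runAnalysis call are placeholders for whatever the analyzer plug-in's API actually offers.

package com.example;

import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.debug.core.ILaunch;
import org.eclipse.debug.core.ILaunchConfiguration;
import org.eclipse.debug.core.model.LaunchConfigurationDelegate;

public class LaunchDelegate extends LaunchConfigurationDelegate {

    @Override
    public void launch(ILaunchConfiguration configuration, String mode,
            ILaunch launch, IProgressMonitor monitor) throws CoreException {
        // read the analysis parameters from the launch configuration ...
        String target = configuration.getAttribute("com.example.launch.target", ""); // placeholder key
        // ... and call into the analyzer plug-in's API instead of spawning a JVM
        runAnalysis(target, monitor);
    }

    private void runAnalysis(String target, IProgressMonitor monitor) {
        // placeholder for the analyzer API call
    }
}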
I have simple Java code integrated with Apache Camel which also uses the camel-kafka component for logging messages to Kafka topics. I have created a class which handles a single request.
Using threads, I can invoke the above class's method from multiple threads to log messages.
Currently I need to load test this JAR using a tool. I want a tool that has a very low learning curve.
Load test:
increasing users to allow multiple messages to be logged concurrently
variation of messages to increase/decrease message size
Time taken by specific users to log specific messages.
I have gone through
JMeter (learning curve is big)
JProfiler (it does not load test but monitors the application, if I am not wrong)
NetBeans Load Generator (again, it uses JMeter)
Download groovy-all-*.jar and drop it into the /lib folder of your JMeter installation
Restart JMeter
Add a Thread Group to the test plan. Set the desired number of virtual users, iterations and/or duration.
Add JSR223 Sampler as a child of the thread group
Choose "groovy" in the "language" dropdown
Put your "simple java code" in JSR223 Sampler's "Script" area
Save test plan.
Run it.
Was that so hard?
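For reference, the JSR223 Sampler's "Script" area can be as small as the two lines below; the class and method names are placeholders for your own Camel/Kafka code, and plain Java syntax is valid Groovy.

// placeholders: replace with your own class that logs a single message to Kafka
com.example.CamelKafkaLogger logger = new com.example.CamelKafkaLogger();
logger.logMessage("load test message from thread " + ctx.getThreadNum());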
I am trying to automate manual testing of modules in my project. We are dealing with IBM WebSphere MQ software. We have a trigger component written in core Java which, when executed, polls for the availability of a message in the configured queue. It is an indefinite while loop that keeps the trigger component running. I have written test cases in JUnit to put a message in the queue; will I now be able to start/stop the trigger component on demand? Invoking the trigger component keeps it running, and I do not get control back to check the expected output. If I start it in a thread, the log files that the trigger component is supposed to update when processing the message are not updated. How can I resolve this situation?
Your suggestions and directions are highly appreciated.
Thanks,
-Vijay
I would look at moving your manual build to a scripted build using something like Apache Ant and its JUnit support; see http://ant.apache.org/manual/Tasks/junit.html.
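As a sketch, a minimal junit target could look like the snippet below; the directory names, the classpath reference and the compile dependency are placeholders for your own build.

<target name="test" depends="compile">
    <mkdir dir="build/test-reports"/>
    <junit printsummary="yes" fork="yes" haltonfailure="no">
        <classpath refid="test.classpath"/>
        <formatter type="xml"/>
        <batchtest todir="build/test-reports">
            <fileset dir="test" includes="**/*Test.java"/>
        </batchtest>
    </junit>
</target>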
Once you have tests which you can run via Ant, you can integrate with a continuous integration server like Hudson (hudson-ci.org) and get it to schedule a build run on a timer. You can also schedule it to run on code check-in.
For more on continuous integration take a look at Martin Fowler's article, http://martinfowler.com/articles/continuousIntegration.html.