I am trying to automate the manual testing of modules in my project. We are dealing with IBM WebSphere MQ software. We have a trigger component, written in core Java, which when executed polls for the availability of a message in the configured queue. It is an indefinite while loop that keeps the trigger component running. I have written JUnit test cases to put a message in the queue; now, will I be able to start and stop the trigger component on demand? Invoking the trigger component keeps it running and I do not get control back to check the expected output. If I start it in a thread, the log files that the trigger component is supposed to update while processing the message are not updated. How can I resolve this situation?
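To make it concrete, this is roughly the shape of the trigger component and the kind of change I have been experimenting with (class and method names below are made up): putting the polling loop behind a volatile flag, so that a JUnit test can start it on a thread and stop it once the message should have been processed.

// Simplified sketch of the trigger component (names are made up)
public class QueueTrigger implements Runnable {

    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            // poll the configured MQ queue, process any message found,
            // and write to the log file, as the real component does
            pollAndProcessOnce();
        }
    }

    public void stop() {
        running = false;
    }

    private void pollAndProcessOnce() {
        // WebSphere MQ polling / processing / logging omitted
    }
}

The idea is that the JUnit test starts it with new Thread(trigger).start(), puts the test message, waits for it to be processed, calls trigger.stop(), and only then checks the expected output - but as described above, when I run it in a thread the log file does not get the expected entries.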
Your suggestions and directions are highly appreciated.
Thanks,
-Vijay
I would look at moving your manual build to a scripted build using something like Apache Ant and its JUnit support; see http://ant.apache.org/manual/Tasks/junit.html.
Once you have tests that you can run via Ant, you can integrate with a continuous integration server like Hudson (hudson-ci.org) and get it to schedule a build run on a timer. You can also have it run on every code check-in.
For more on continuous integration take a look at Martin Fowler's article, http://martinfowler.com/articles/continuousIntegration.html.
I am using a Jenkins pipeline to run the tests in parallel. The problem appears when the tests are sent to ReportPortal: they all end up in separate launches. What I am trying to do is set the launch name (the launch number, to be precise) for the tests manually so they would all be in one launch.
I have looked here for answers but only found some for NUnit and TestNG (which doesn't help me, since I am running separate instances of the program). I am using a Java main class to run each test in the pipeline. I read that I can set the launch name as an environment variable, but sadly I couldn't find any information on what the implementation looks like. My question is: is it even possible to set the launch name without TestNG, and if it is possible with an environment variable, how should I use the variable in the runner method to enforce the launch name?
java -Dmaven.clean.skip=true -Dbrowser=firefox -Dos=linux -jar -Drun.tags=#CreateEntity target/standalone/web-tests.jar
This is my setup for each test (the run tag changes, obviously); the glue for Cucumber and the plugin for ReportPortal are in the runner method.
TestNG is not mandatory for this. Here you can find the JVM-based integration configs: https://reportportal.io/docs/JVM-based-clients-configuration
That means that if you use Cucumber-JVM (which has JUnit under the hood), you can use any related parameter.
To specify the launch name, you can set it in the reportportal.properties file or via the command line, as -Drp.launch=zzz.
But that will not resolve the issue for multiple threads. In order to have all parallel threads reported into one launch, you can do it in two ways:
Share the launchID across threads. This means you can start the launch at ReportPortal (as part of your test runner, or as a Jenkins pre-step plus a cURL request), receive the launchID, and share it with the other threads/runners. The runners will use this ID to post data instead of creating a new launch for each thread. At the end, add a post-step to finish the launch (see the sketch after this list).
Merge launches via the UI or the API. Once all executions have completed, you can merge them via the UI. Or you can collect the launchIDs during the parallel sessions and, after everything has finished, run an API call to merge the launches.
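For the first approach, a rough sketch of such a pre-step is shown below. The class name, launch name and argument handling are purely illustrative, and the endpoint path and payload fields follow the v4 REST API as I recall it, so verify them against the API documentation of your ReportPortal instance (the same goes for the client property, e.g. rp.launch.uuid in newer clients, used to attach a runner to an already started launch).

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Scanner;

// Hypothetical pre-step: start one launch up front and print the response so the
// pipeline can capture the launch ID and hand it to every parallel runner.
// A matching post-step would call the .../launch/{id}/finish endpoint.
public class LaunchStarter {

    public static void main(String[] args) throws Exception {
        String endpoint = args[0];   // e.g. http://reportportal:8080
        String project  = args[1];   // ReportPortal project name
        String token    = args[2];   // API access token

        String body = "{\"name\":\"web-tests\",\"startTime\":\"" + Instant.now() + "\"}";

        HttpURLConnection con = (HttpURLConnection)
                new URL(endpoint + "/api/v1/" + project + "/launch").openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Authorization", "bearer " + token);
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The response JSON contains the ID of the newly started launch
        try (InputStream is = con.getInputStream();
             Scanner s = new Scanner(is, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
            System.out.println(s.hasNext() ? s.next() : "");
        }
    }
}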
That is relevant for ReportPortal v1-v4.
For version 5+ of ReportPortal we plan to minimize this effort via the re-run feature: https://github.com/reportportal/reportportal/issues/363
Test runners will share the launchID by default via a file on local storage, and if any other parallel thread starts in that environment, the launchID will be used for reporting automatically.
This still does not cover the case where parallel executions are started on multiple VMs, but we will try to address that case as well.
If Activiti Modeler is running simultaneously with my application, and if it uses the same database for the Activiti engine as my application, then the service task and script task following the timers (the boundary timer event and the intermediate catching event) do not work, and cause errors. Error descriptions are as follows: "couldn't instantiate " - for the service task (if the class is specified), "Can't find a scripting engine for 'groovy'" - for the script task. If I use Spring, and assign a bean to the service task, then I get an error with the description: "Could not execute service task expression".
At the same time I found and tried this recommendation:
In order for everything to work without errors, you need to compile the classes that are used by the service task and put them, together with the packages they belong to, into the WEB-INF/classes folder. Also, to avoid problems with Groovy, you need to drop the Groovy jar file (the same version used by the main program) into WEB-INF/lib.
This works if Spring beans are not used. But it is also a crutch of a solution, and I would prefer to disable all timer events in the Activiti Modeler database. I have not yet found how to do that.
I looked through the documentation for the system administrator. It describes the properties that can be set in activiti-app.properties. I found several properties that, judging by their descriptions, could help me, and tried to set the necessary values for them:
elastic-search.server.type=none
event.processing.enabled=false
event.generation.enabled=false
But this did not help either.
That is a limitation of running timers based on things that you are changing at runtime. We are fixing this in Activiti Cloud (Activiti 7) by separating the runtime into containers instead of having a single monolithic application.
I'm pretty new to TestNG, hailing from a Cucumber background.
In my current project, the Jenkins jobs are configured with Maven and TestNG, using Java and Selenium for the scripting.
If a job with, say, 30 tests takes 2 hours to complete and is abruptly terminated for some reason in the last minute, I do not get the results of any of the tests that ran successfully, and I am forced to run the entire job again.
All I see is the error stack trace and the result:
Results :Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
I am sure there is a better way of designing this, so I am hoping for better approaches.
How can I make sure I do not lose the results of the tests that did run (whether they passed or failed)?
Is a test management tool, or some entity to store the runtime results, a mandatory requirement, or does TestNG have some built-in provision?
How can I make sure I do not lose the results of the tests that did run (whether they passed or failed)?
There are usually two ways in which you can build reporting into your TestNG driven tests.
Batched mode - This is usually how all the TestNG reports are built. A listener which implements the org.testng.IReporter interface is built, and within its generateReport() method all the logic of consolidating the test results into a report is carried out.
Realtime mode - This is usually done by implementing the TestNG listener org.testng.IInvokedMethodListener and then, within its afterInvocation(), doing the following:
Check the type of the incoming org.testng.IInvokedMethod (to see whether it is a configuration method or a @Test method) and handle these types of methods differently (if the report needs to show them separately). Then check the status of org.testng.ITestResult and, based on the status, show them as PASS/FAIL/SKIPPED.
IReporter implementations are run at the end, after all the tests have run (which is why I call them batched mode). So if something crashes towards the end, but before the reporting phase is executed, you lose all the execution data.
So you might want to try and build a realtime reporting model. You can take a look at the RuntimeReporter that SeLion uses; it's built on the realtime model.
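As an illustration, a minimal realtime listener along the lines below (the file name and output format are arbitrary) appends each finished @Test method to a file the moment it completes, so an aborted job still leaves the earlier results on disk. Register it via the <listeners> tag in testng.xml or the @Listeners annotation.

import java.io.FileWriter;
import java.io.IOException;
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class RealtimeResultWriter implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult result) {
        // nothing to do before the method runs
    }

    @Override
    public synchronized void afterInvocation(IInvokedMethod method, ITestResult result) {
        if (!method.isTestMethod()) {
            return;   // ignore configuration (@Before/@After) methods
        }
        String status;
        switch (result.getStatus()) {
            case ITestResult.SUCCESS: status = "PASS"; break;
            case ITestResult.FAILURE: status = "FAIL"; break;
            default:                  status = "SKIPPED"; break;
        }
        // append the result immediately so it survives an aborted run
        try (FileWriter out = new FileWriter("target/realtime-results.txt", true)) {
            out.write(result.getName() + " = " + status + System.lineSeparator());
        } catch (IOException e) {
            e.printStackTrace();   // reporting should never fail the test itself
        }
    }
}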
Is a test management tool, or some entity to store the runtime results, a mandatory requirement, or does TestNG have some built-in provision?
There are no such mandatory requirements that TestNG imposes. As I explained above, it all boils down to how you are constructing your reports. If you construct the reports in a realtime fashion, you can leverage templating engines such as Velocity/FreeMarker/Thymeleaf to build your reporting template, and then use the IInvokedMethodListener to inject values into the template so that it can be rendered easily.
Read more here for a comparison on the templating engines so that you can choose what fits your need.
I have simple Java code integrated with Apache Camel, which also uses the camel-kafka component for logging messages to Kafka topics. I have created a class which handles a single request.
Using threads, I can invoke the above class's method from multiple threads to log messages.
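Roughly like this (KafkaMessageLogger and logMessage below are placeholders for my actual Camel/camel-kafka class and method):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hand-rolled harness I use today: N threads each log M messages
public class ManualLoadTest {

    public static void main(String[] args) throws InterruptedException {
        int users = 10;             // number of concurrent "users"
        int messagesPerUser = 100;  // messages each user logs

        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                KafkaMessageLogger logger = new KafkaMessageLogger();   // placeholder for my class
                for (int m = 0; m < messagesPerUser; m++) {
                    logger.logMessage("payload-" + m);   // the call I want to measure under load
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}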
Currently I need to load test this JAR using a tool. I want to know of a tool that has a very low learning curve.
Load test:
increasing users to allow multiple messages to be logged concurrently
variation of messages to increase/decrease message size
Time taken by specific users to log specific messages.
I have gone through:
JMeter (the learning curve is big)
JProfiler (it does not load test but monitors the application, if I am not wrong)
NetBeans Load Generator (again, it uses JMeter)
Download the groovy-all-*.jar and drop it into the /lib folder of your JMeter installation.
Restart JMeter.
Add a Thread Group to the test plan. Set the desired number of virtual users, iterations and/or duration.
Add a JSR223 Sampler as a child of the Thread Group.
Choose "groovy" in the "Language" dropdown.
Put your "simple java code" in the JSR223 Sampler's "Script" area (see the sketch after these steps).
Save the test plan.
Run it.
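The "Script" area could then contain something along these lines. Groovy accepts plain Java syntax; MessageLogger and its log() method are placeholders for your own camel-kafka producer class, and messageSizeKb is a variable you would define yourself in the Thread Group or a CSV Data Set.

// JSR223 Sampler script, "groovy" engine, written in Java-style syntax
import com.example.MessageLogger;   // placeholder for your own class

// vars is the JMeter variables binding exposed to JSR223 scripts
int sizeKb = Integer.parseInt(vars.get("messageSizeKb"));

// build a payload of the requested size so the message size can be varied per run
StringBuilder payload = new StringBuilder();
for (int i = 0; i < sizeKb * 1024; i++) {
    payload.append('x');
}

// the call being measured; JMeter records the sampler's elapsed time per virtual user
new MessageLogger().log(payload.toString());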
Was that so hard?
We are planning to automate the build system using Hudson. We are new to Hudson, or it would be better to say that we are new to the build automation process. Our application is on the Java platform and the database is MS SQL. This (automation) milestone is broken down into different goals. The first step is to automate database changes (DDL/DML); if anything goes wrong while updating the database, it should be able to roll back the changes and send an e-mail to a group to notify them of the failure (with reasons). Otherwise, if it succeeds, it moves on to the next step, which is to make the build and deploy with LiveRebel.
I think we should have a central mechanism for build failures: on any instance, if a build fails, it should be able to roll back the changes it has made. For example, if the database changes fail, as I said, it should notify us and not proceed further. And if the database changes succeed but the build process fails (e.g. because of unit tests), it should be able to roll back the database changes. If the notification could include the failure details (like the exception details and the person responsible), it would be very helpful for diagnosing and following up appropriately. How can (should) I do this?
We are also interested in using Liquibase with Hudson.
I would like to ask for your opinion and suggestions on how I should plan this and what would be a good way to achieve it.
First of all, you shouldn't mix up building and deploying. The database update would be part of your deploy process, not of your build process. Even when using continuous integration this should be kept separate. This means you make your database changes after the project has been built and all your JUnit tests have run. If it fails before that, the changes are never performed, so there would be no need for a rollback.
As for your actual problem: I don't know of any plugin that does what you want. In Hudson/Jenkins you always have the option to execute a batch/shell script. Write a script that performs your changes; the build should fail if your script exits with a non-zero return code.
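For example (just a sketch: the connection URL and statements are placeholders, and if you go with Liquibase you would instead invoke its command line or Maven goal and rely on its changelog and rollback support), the script could simply run a small Java class that applies the changes in one transaction and exits with a non-zero code on failure:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DbUpdateStep {

    public static void main(String[] args) {
        // placeholder connection details for the MS SQL database
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=myapp";
        try (Connection con = DriverManager.getConnection(url, args[0], args[1])) {
            con.setAutoCommit(false);              // keep all changes in one transaction
            try (Statement st = con.createStatement()) {
                st.executeUpdate("ALTER TABLE customer ADD middle_name VARCHAR(50)"); // example DDL
                st.executeUpdate("UPDATE customer SET middle_name = ''");             // example DML
                con.commit();                      // commit only if every statement succeeded
            } catch (Exception e) {
                con.rollback();                    // undo the partial changes
                throw e;
            }
        } catch (Exception e) {
            e.printStackTrace();                   // the details end up in the Hudson console output
            System.exit(1);                        // a non-zero exit code fails the build step
        }
    }
}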
For sending notifications on build failure there are various plugins, including E-Mail.