How to run SoapUI tests from Java

I need to run SoapUI tests from Java. Could you please point me to useful links? I would also be happy if you could show me how to load and run tests (code examples).
So far I have found only one link that seems applicable to my project - http://pritikaur23.wordpress.com/2013/06/16/saving-a-soapui-project-and-sending-requests-using-soapui-api/ .
But when I try to do the same, I get the following error:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.xmlbeans.XmlBeans.typeSystemForClassLoader(Ljava/lang/ClassLoader;Ljava/lang/String;)Lorg/apache/xmlbeans/SchemaTypeSystem;
It's weird because I added all the needed jar files. I even tried different versions of xmlbeans.
Thanks in advance.

I found a way to run SoapUI tests from code.
A short explanation:
First, I created a Maven project and added dependencies to pom.xml instead of including the .jar files directly. The SoapUI tests needed the following dependency:
<dependency>
    <groupId>com.github.redfish4ktc.soapui</groupId>
    <artifactId>maven-soapui-extension-plugin</artifactId>
    <version>4.6.4.0</version>
</dependency>
Second, I had to add a few more dependencies because I was getting java.lang.NoSuchMethodError exceptions.
The needed dependencies:
<dependency>
    <groupId>net.java.dev.jgoodies</groupId>
    <artifactId>looks</artifactId>
    <version>2.1.4</version>
</dependency>
<dependency>
    <groupId>net.sf.squirrel-sql.thirdparty-non-maven</groupId>
    <artifactId>com-fifesoft-rsyntaxtextarea</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.karaf.eik.plugins</groupId>
    <artifactId>org.apache.commons.collections</artifactId>
    <version>3.2.1</version>
</dependency>
After preparing the environment, I was able to write the code. Below is an example that runs all test suites and test cases in a specified SoapUI project from Java.
import java.util.ArrayList;
import java.util.List;

import com.eviware.soapui.SoapUI;
import com.eviware.soapui.StandaloneSoapUICore;
import com.eviware.soapui.impl.wsdl.WsdlProject;
import com.eviware.soapui.model.testsuite.TestCase;
import com.eviware.soapui.model.testsuite.TestRunner;
import com.eviware.soapui.model.testsuite.TestSuite;
import com.eviware.soapui.support.types.PropertiesMap;

// method for running all test suites and test cases in the project
public static void getTestSuite() throws Exception {
    String suiteName = "";
    String reportStr = "";
    // variables for measuring duration
    long startTime = 0;
    long duration = 0;
    TestRunner runner = null;
    List<TestSuite> suiteList = new ArrayList<TestSuite>();
    List<TestCase> caseList = new ArrayList<TestCase>();
    SoapUI.setSoapUICore(new StandaloneSoapUICore(true));
    // the SoapUI project to run
    WsdlProject project = new WsdlProject("your-soapui-project.xml");
    // get a list of all test suites in the project
    suiteList = project.getTestSuiteList();
    // you could also use a for-each loop
    for (int i = 0; i < suiteList.size(); i++) {
        // get the name of the i-th test suite
        suiteName = suiteList.get(i).getName();
        reportStr = reportStr + "\nTest Suite: " + suiteName;
        // get a list of all test cases in the i-th test suite
        caseList = suiteList.get(i).getTestCaseList();
        for (int k = 0; k < caseList.size(); k++) {
            startTime = System.currentTimeMillis();
            // run the k-th test case in the i-th test suite
            runner = project.getTestSuiteByName(suiteName)
                    .getTestCaseByName(caseList.get(k).getName())
                    .run(new PropertiesMap(), false);
            duration = System.currentTimeMillis() - startTime;
            reportStr = reportStr + "\n\tTestCase: " + caseList.get(k).getName()
                    + "\tStatus: " + runner.getStatus()
                    + "\tReason: " + runner.getReason()
                    + "\tDuration: " + duration;
        }
    }
    // print the collected results
    System.out.println(reportStr);
}
Output:
Test Suite: TS_ONE
TestCase: TC_ONE Status: FAILED Reason: Cancelling due to failed test step Duration: 1549
TestCase: TC_TWO Status: FINISHED Reason: {} Duration: 1277
...
TestCase: TC_N Status: FAILED Reason: Cancelling due to failed test step Duration: 1282
Test Suite: TS_TWO
TestCase: TC_BlaBla Status: FINISHED Reason: {} Duration: 1280
...
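To run this from a standalone program, a minimal sketch of an entry point could look like this (the class name SoapUiTestRunner is illustrative, not from the SoapUI API):
// Hypothetical wrapper class; assumes getTestSuite() from above is declared here.
public class SoapUiTestRunner {

    public static void main(String[] args) {
        try {
            getTestSuite();
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
    }
}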
I hope the information above will help someone.

Using a continuous integration server (e.g. Hudson is perfect for this), it is possible to run the tests automatically in JUnit format. Below is an example of wrapping a SoapUI project in a JUnit test.
public void testRunner() throws Exception {
    SoapUITestCaseRunner runner = new SoapUITestCaseRunner();
    runner.setProjectFile("src/dist/sample-soapui-project.xml");
    runner.run();
}
More information is available in the SoapUI documentation.
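A slightly fuller sketch, narrowing the run to one suite and emitting JUnit-style reports; setTestSuite, setPrintReport, setJUnitReport, and setOutputFolder are setters SoapUITestCaseRunner exposes, but verify them against the SoapUI version you use:
import com.eviware.soapui.tools.SoapUITestCaseRunner;
import org.junit.Test;

public class SoapUiIT {

    @Test
    public void runSmokeSuite() throws Exception {
        SoapUITestCaseRunner runner = new SoapUITestCaseRunner();
        runner.setProjectFile("src/dist/sample-soapui-project.xml");
        runner.setTestSuite("TS_ONE");                 // suite name is an example
        runner.setPrintReport(true);                   // print a summary to the console
        runner.setJUnitReport(true);                   // write JUnit-style XML reports
        runner.setOutputFolder("target/surefire-reports");
        runner.run();                                  // throws if a test case fails
    }
}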

Currently, only the SoapUI dependency is needed for the code #HeLL provided:
<dependency>
    <groupId>com.smartbear.soapui</groupId>
    <artifactId>soapui</artifactId>
    <version>5.1.3</version>
    <scope>test</scope>
</dependency>

I had the same issue, but I fixed it by using:
<dependency>
    <groupId>com.smartbear.soapui</groupId>
    <artifactId>soapui</artifactId>
    <version>4.6.1</version>
</dependency>

Related

Manually triggering a Kubernetes CronJob from the cluster using Java

I'm trying to trigger a CronJob manually (not on its schedule) using the fabric8 library, but I'm getting the following error:
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://172.20.0.1:443/apis/batch/v1/namespaces/engineering/jobs. Message: Job.batch "app-chat-manual-947171" is invalid: spec.template.spec.containers[0].name: Required value. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.template.spec.containers[0].name, message=Required value, reason=FieldValueRequired, additionalProperties={})], group=batch, kind=Job, name=app-chat-manual-947171, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Job.batch "app-chat-manual-947171" is invalid: spec.template.spec.containers[0].name: Required value, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
My code runs inside the cluster.
Maven dependency:
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>6.3.1</version>
</dependency>
Java code:
public static void triggerCronjob(String cronjobName, String applicableNamespace) {
    KubernetesClient kubernetesClient = new KubernetesClientBuilder().build();
    final String podName = String.format("%s-manual-%s",
            cronjobName.length() > 38 ? cronjobName.substring(0, 38) : cronjobName,
            new Random().nextInt(999999));
    System.out.println("triggerCronjob method invoked, applicableNamespace: " + applicableNamespace
            + ", cronjobName: " + cronjobName + ", podName: " + podName);
    Job job = new JobBuilder()
            .withApiVersion("batch/v1")
            .withNewMetadata()
                .withName(podName)
            .endMetadata()
            .withNewSpec()
                .withBackoffLimit(4)
                .withNewTemplate()
                    .withNewSpec()
                        .addNewContainer()
                            .withName(podName)
                            .withImage("perl")
                            .withCommand("perl", "-Mbignum=bpi", "-wle", "print bpi(2000)")
                        .endContainer()
                        .withRestartPolicy("Never")
                    .endSpec()
                .endTemplate()
            .endSpec()
            .build();
    kubernetesClient.batch().v1().jobs().inNamespace(applicableNamespace).createOrReplace(job);
    kubernetesClient.close();
    System.out.println("CronJob triggered: applicableNamespace: " + applicableNamespace
            + ", cronjob name: " + cronjobName);
}
The code is executed on the Kubernetes cluster, but not from the application; it's an external program running in the cluster.
My goal is to trigger a given job in a given namespace.
If you want to trigger an already existing CronJob, you need to provide an ownerReference to the existing CronJob in the Job:
// Get the already existing CronJob
CronJob cronJob = kubernetesClient.batch().v1()
        .cronjobs()
        .inNamespace(namespace)
        .withName(cronJobName)
        .get();

// Create a new Job object referencing the CronJob
Job newJobToCreate = new JobBuilder()
        .withNewMetadata()
            .withName(jobName)
            .addNewOwnerReference()
                .withApiVersion("batch/v1")
                .withKind("CronJob")
                .withName(cronJob.getMetadata().getName())
                .withUid(cronJob.getMetadata().getUid())
            .endOwnerReference()
            .addToAnnotations("cronjob.kubernetes.io/instantiate", "manual")
        .endMetadata()
        .withSpec(cronJob.getSpec().getJobTemplate().getSpec())
        .build();

// Apply the Job object to the Kubernetes cluster
kubernetesClient.batch().v1()
        .jobs()
        .inNamespace(namespace)
        .resource(newJobToCreate)
        .create();
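Putting the pieces together, a minimal sketch of a complete trigger method (the method name and job-name scheme are illustrative, not part of the fabric8 API). The cronjob.kubernetes.io/instantiate: manual annotation mirrors what kubectl create job --from=cronjob/<name> sets:
import io.fabric8.kubernetes.api.model.batch.v1.CronJob;
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public static void triggerFromCronJob(String cronJobName, String namespace) {
    // KubernetesClient is AutoCloseable, so try-with-resources closes it for us.
    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
        CronJob cronJob = client.batch().v1()
                .cronjobs()
                .inNamespace(namespace)
                .withName(cronJobName)
                .get();

        // Illustrative naming scheme for the manually created Job.
        String jobName = cronJobName + "-manual-" + System.currentTimeMillis();

        Job job = new JobBuilder()
                .withNewMetadata()
                    .withName(jobName)
                    .addNewOwnerReference()
                        .withApiVersion("batch/v1")
                        .withKind("CronJob")
                        .withName(cronJob.getMetadata().getName())
                        .withUid(cronJob.getMetadata().getUid())
                    .endOwnerReference()
                    .addToAnnotations("cronjob.kubernetes.io/instantiate", "manual")
                .endMetadata()
                // Reuse the pod template from the CronJob's jobTemplate, so the
                // container spec (including the required container name) comes along.
                .withSpec(cronJob.getSpec().getJobTemplate().getSpec())
                .build();

        client.batch().v1().jobs().inNamespace(namespace).resource(job).create();
    }
}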

Could not initialize English Chunker

I have included the LanguageTool code in my Java Maven project as below.
Java code:
List<Language> realLanguages = Languages.get();
for (Language language : realLanguages) {
    System.out.println(language.getName() + " ==> " + language.getShortName());
    if (language.getName().startsWith("English (US)")) {
        JLanguageTool langTool = new JLanguageTool(language);
        PatternRuleLoader patternRuleLoader = new PatternRuleLoader();
        List<PatternRule> abstractPatternRuleList = new ArrayList<PatternRule>();
        abstractPatternRuleList = patternRuleLoader.getRules(new File(LTPath + "/CustomGrammar.xml"));
        System.out.println("\n\nDefault Active Rules: " + langTool.getAllActiveRules().size());
        // ... more code goes here ...
This works absolutely fine when the module's jar is invoked from one project (on server 'A'), but it throws the exception below, "Could not initialize English chunker", when invoked from another (on server 'B').
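(For illustration, the elided part typically continues by registering the loaded rules and checking text. This is a hypothetical sketch, not the asker's actual code, assuming the LanguageTool 3.1 API where JLanguageTool.addRule(Rule) and check(String) exist and RuleMatch comes from org.languagetool.rules:)
// Hypothetical continuation (not the asker's code):
for (PatternRule rule : abstractPatternRuleList) {
    langTool.addRule(rule); // register each custom rule from CustomGrammar.xml
}
List<RuleMatch> matches = langTool.check("This are a example sentence.");
for (RuleMatch match : matches) {
    System.out.println("Potential error at characters "
            + match.getFromPos() + "-" + match.getToPos()
            + ": " + match.getMessage());
}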
Dependency
<dependency>
<groupId>org.languagetool</groupId>
<artifactId>language-en</artifactId>
<version>3.1</version>
</dependency>
Exception
Please help!

NoSuchMethodError while using the DistCp Java API

I am trying to use the DistCp Java API to copy data from one Hadoop cluster to another.
However, I am getting the following exception:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.util.StringUtils.toLowerCase(Ljava/lang/String;)Ljava/lang/String;
    at org.apache.hadoop.tools.util.DistCpUtils.getStrategy(DistCpUtils.java:126)
    at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:235)
    at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:174)
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
    at com.monitor.BackupUtil.doBackup(BackupUtil.java:72)
    at com.monitor.BackupUtil.main(BackupUtil.java:45)
I am using the following code:
public void doBackup() throws Exception {
    System.out.println("Beginning Distcp");
    DistCpOptions options = new DistCpOptions(
            new Path(prop.getProperty("sourceClusterDirectory") + "/" + prop.getProperty("tablename")
                    + "/distcp.txt"),
            new Path(prop.getProperty("targetCluster") + prop.getProperty("targetClusterDirectory")));
    System.out.println("Distcp between ---> " + prop.getProperty("sourceClusterDirectory")
            + "/distcp.txt" + " AND " + prop.getProperty("targetCluster")
            + prop.getProperty("targetClusterDirectory"));
    DistCp distcp = new DistCp(new Configuration(), options);
    Job job = distcp.execute();
    job.waitForCompletion(true);
    System.out.println("DistCp Completed Successfully");
}
I am using Hadoop 2.7.1, and the distcp dependency is:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-distcp</artifactId>
    <version>2.7.1</version>
</dependency>

Calling R script function from Java using rJava

My requirement -
I need to deploy a Java webservice on a server which internally executes an R script file. I googled various solutions for calling R from Java, and the best were rJava and Rserve. Using Rserve I can call R functions, BUT as I am running this on Windows it cannot handle multiple requests at a time, and I don't want to switch to Linux.
[Edit]
What I tried -
I used rJava to call an R function:
String[] args = new String[3];
args[0] = "--quiet";      // don't print the startup message
args[1] = "--no-restore"; // don't restore anything
args[2] = "--no-save";    // don't save the workspace on exit
String rFilePath = "D:/Dataset_Info/AI-KMS_v2.0/tika/src/main/resources/HSConcordance.R";
Rengine engine = new Rengine(args, false, null);
if (!engine.waitForR()) {
    System.out.println("Cannot load R");
}
System.out.print("JRI R-Engine call: ");
engine.eval("source(\"" + rFilePath + "\")");
REXP value = engine.eval("as.integer(a<-simple())");
int a = value.asInt();
System.out.println(a);
Maven dependencies -
<dependency>
    <groupId>com.github.lucarosellini.rJava</groupId>
    <artifactId>JRI</artifactId>
    <version>0.9-7</version>
</dependency>
<dependency>
    <groupId>com.github.lucarosellini.rJava</groupId>
    <artifactId>REngine</artifactId>
    <version>0.9-7</version>
</dependency>
<dependency>
    <groupId>com.github.lucarosellini.rJava</groupId>
    <artifactId>JRIEngine</artifactId>
    <version>0.9-7</version>
</dependency>
My R script file -
simple <- function() {
    a = 1
    return(a)
}
Output - JRI R-Engine call: 1
And then it hangs. I debugged it and found that it got stuck in Thread.class.
Any kind of help will be greatly appreciated.
The issue was that when I accessed the webservice a second time, it hung because an instance of Rengine, created during the first call, was already present.
Rengine re = Rengine.getMainEngine();
if (re == null) {
    re = new Rengine(new String[] {"--vanilla"}, false, null);
    if (!re.waitForR()) {
        System.out.println("Cannot load R");
        return "failure";
    }
}
re.eval("source(\"" + rFilePath + "\")");
re.eval("copyfile(\"" + filePath + "\")");
re.end();
A few points to note -
Check whether an instance of Rengine is already present via Rengine re = Rengine.getMainEngine();
Shut down R at the end with re.end();
A consolidated helper for this pattern is sketched below. I hope this helps.
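A minimal sketch of the get-or-create pattern as a reusable helper (the method name is illustrative, not part of the rJava API; JRI allows only one R engine per JVM):
// Illustrative helper, not part of the rJava API.
public static Rengine getOrCreateEngine() {
    // Reuse the engine created on an earlier call, if any.
    Rengine re = Rengine.getMainEngine();
    if (re == null) {
        re = new Rengine(new String[] {"--vanilla"}, false, null);
        if (!re.waitForR()) {
            throw new IllegalStateException("Cannot load R");
        }
    }
    return re;
}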

Get the whole index of Maven Central [duplicate]

I have downloaded the indexes generated for Maven Central from http://mirrors.ibiblio.org/pub/mirrors/maven2/dot-index/nexus-maven-repository-index.gz
I would like to list the artifact information from these index files (groupId, artifactId, and version, for example). I have read that there is a high-level API for this, and it seems I have to use the following Maven dependency. However, I don't know which entry point to use (which class?) or how to use it to access those files:
<dependency>
    <groupId>org.sonatype.nexus</groupId>
    <artifactId>nexus-indexer</artifactId>
    <version>3.0.4</version>
</dependency>
Take a peek at the https://github.com/cstamas/maven-indexer-examples project.
In short: you don't need to download the GZ/ZIP (new/legacy format) manually; the indexer will take care of that for you (moreover, it will handle incremental updates for you too, if possible).
GZ is the "new" format, independent of the Lucene index format (hence, independent of the Lucene version) and containing data only, while ZIP is the "old" format, which is actually a plain Lucene 2.4.x index zipped up. No data content change happens currently, but one is planned for the future.
As I said, there is no data content difference between the two, but some fields (like the ones you noticed) are indexed but not stored in the index; hence, if you consume the ZIP format, you will have them searchable, but not retrievable.
The https://github.com/cstamas/maven-indexer-examples project is obsolete, and its build fails (tests do not pass).
The indexer has since moved along and now includes the examples:
https://github.com/apache/maven-indexer/tree/master/indexer-examples
That builds, and the code works.
Here is a simplified version if you want to roll your own:
Maven:
<dependencies>
    <dependency>
        <groupId>org.apache.maven.indexer</groupId>
        <artifactId>indexer-core</artifactId>
        <version>6.0-SNAPSHOT</version>
        <scope>compile</scope>
    </dependency>
    <!-- For the ResourceFetcher implementation, if used -->
    <dependency>
        <groupId>org.apache.maven.wagon</groupId>
        <artifactId>wagon-http-lightweight</artifactId>
        <version>2.3</version>
        <scope>compile</scope>
    </dependency>
    <!-- Runtime: DI, but using the Plexus shim as we use Wagon -->
    <dependency>
        <groupId>org.eclipse.sisu</groupId>
        <artifactId>org.eclipse.sisu.plexus</artifactId>
        <version>0.2.1</version>
    </dependency>
    <dependency>
        <groupId>org.sonatype.sisu</groupId>
        <artifactId>sisu-guice</artifactId>
        <version>3.2.4</version>
    </dependency>
</dependencies>
Java:
public IndexToGavMappingConverter(File dataDir, String id, String url)
throws PlexusContainerException, ComponentLookupException, IOException
{
this.dataDir = dataDir;
// Create Plexus container, the Maven default IoC container.
final DefaultContainerConfiguration config = new DefaultContainerConfiguration();
config.setClassPathScanning( PlexusConstants.SCANNING_INDEX );
this.plexusContainer = new DefaultPlexusContainer(config);
// Lookup the indexer components from plexus.
this.indexer = plexusContainer.lookup( Indexer.class );
this.indexUpdater = plexusContainer.lookup( IndexUpdater.class );
// Lookup wagon used to remotely fetch index.
this.httpWagon = plexusContainer.lookup( Wagon.class, "http" );
// Files where local cache is (if any) and Lucene Index should be located
this.centralLocalCache = new File( this.dataDir, id + "-cache" );
this.centralIndexDir = new File( this.dataDir, id + "-index" );
// Creators we want to use (search for fields it defines).
// See https://maven.apache.org/maven-indexer/indexer-core/apidocs/index.html?constant-values.html
List<IndexCreator> indexers = new ArrayList();
// https://maven.apache.org/maven-indexer/apidocs/org/apache/maven/index/creator/MinimalArtifactInfoIndexCreator.html
indexers.add( plexusContainer.lookup( IndexCreator.class, "min" ) );
// https://maven.apache.org/maven-indexer/apidocs/org/apache/maven/index/creator/JarFileContentsIndexCreator.html
//indexers.add( plexusContainer.lookup( IndexCreator.class, "jarContent" ) );
// https://maven.apache.org/maven-indexer/apidocs/org/apache/maven/index/creator/MavenPluginArtifactInfoIndexCreator.html
//indexers.add( plexusContainer.lookup( IndexCreator.class, "maven-plugin" ) );
// Create context for central repository index.
this.centralContext = this.indexer.createIndexingContext(
id + "Context", id, this.centralLocalCache, this.centralIndexDir,
url, null, true, true, indexers );
}
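The constructor above only prepares the components; before searching, the local index still has to be fetched or incrementally updated. A sketch of that step, assuming the updater API (IndexUpdateRequest, IndexUpdateResult, WagonHelper) as used in the indexer-examples:
// Sketch of the index update step, modeled on the indexer-examples.
public void updateIndex() throws IOException {
    TransferListener listener = new AbstractTransferListener() {
        @Override
        public void transferStarted(TransferEvent transferEvent) {
            System.out.println("Downloading " + transferEvent.getResource().getName());
        }
    };
    ResourceFetcher fetcher = new WagonHelper.WagonFetcher(this.httpWagon, listener, null, null);
    IndexUpdateRequest updateRequest = new IndexUpdateRequest(this.centralContext, fetcher);
    IndexUpdateResult updateResult = this.indexUpdater.fetchAndUpdateIndex(updateRequest);
    if (updateResult.isFullUpdate()) {
        System.out.println("Full index update done");
    } else {
        System.out.println("Incremental index update done");
    }
}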
final IndexSearcher searcher = this.centralContext.acquireIndexSearcher();
try {
    final IndexReader ir = searcher.getIndexReader();
    Bits liveDocs = MultiFields.getLiveDocs(ir);
    for (int i = 0; i < ir.maxDoc(); i++) {
        if (liveDocs == null || liveDocs.get(i)) {
            final Document doc = ir.document(i);
            final ArtifactInfo ai = IndexUtils.constructArtifactInfo(doc, this.centralContext);
            if (ai == null)
                continue;
            if (ai.getSha1() == null)
                continue;
            if (ai.getSha1().length() != 40)
                continue;
            if ("javadoc".equals(ai.getClassifier()))
                continue;
            if ("sources".equals(ai.getClassifier()))
                continue;
            out.append(StringUtils.lowerCase(ai.getSha1())).append(' ');
            out.append(ai.getGroupId()).append(":");
            out.append(ai.getArtifactId()).append(":");
            out.append(ai.getVersion()).append(":");
            out.append(StringUtils.defaultString(ai.getClassifier()));
            out.append('\n');
        }
    }
} finally {
    this.centralContext.releaseIndexSearcher(searcher);
}
We use this in the Windup project, a JBoss migration tool.
The legacy ZIP index is a simple Lucene index. I was able to open it with Luke and write some simple Lucene code to dump out the headers of interest ("u" in this case):
import org.apache.lucene.document.Document;
import org.apache.lucene.search.IndexSearcher;

public class Dumper {

    public static void main(String[] args) throws Exception {
        // Note: IndexSearcher(String) is the old Lucene 2.x API,
        // matching the Lucene 2.4.x format of the legacy ZIP index.
        IndexSearcher searcher = new IndexSearcher("c:/PROJECTS/Test/index");
        for (int i = 0; i < searcher.maxDoc(); i++) {
            Document doc = searcher.doc(i);
            String metadata = doc.get("u");
            if (metadata != null) {
                System.out.println(metadata);
            }
        }
    }
}
Sample output ...
org.ioke|ioke-lang-lib|P-0.4.0-p11|NA
org.jboss.weld.archetypes|jboss-javaee6-webapp|1.0.1.CR2|sources|jar
org.jboss.weld.archetypes|jboss-javaee6-webapp|1.0.1.CR2|NA
org.nutz|nutz|1.b.37|javadoc|jar
org.nutz|nutz|1.b.37|sources|jar
org.nutz|nutz|1.b.37|NA
org.openengsb.wrapped|com.google.gdata|1.41.5.w1|NA
org.openengsb.wrapped|openengsb-wrapped-parent|6|NA
There may be better ways to achieve this though ...
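If you consume the "u" values above, the layout appears to be groupId|artifactId|version|classifier (with NA meaning no classifier and an optional packaging at the end); that is inferred from the sample output, not from a spec. A minimal parsing sketch under that assumption:
// Splits a "u" field value into a G:A:V[:classifier] string.
// The field layout is inferred from the sample output above.
public static String toGav(String u) {
    String[] parts = u.split("\\|");
    StringBuilder gav = new StringBuilder();
    gav.append(parts[0]).append(':').append(parts[1]).append(':').append(parts[2]);
    if (parts.length > 3 && !"NA".equals(parts[3])) {
        gav.append(':').append(parts[3]); // classifier, e.g. "sources"
    }
    return gav.toString();
}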
For the record, there is now a tool to extract and export Maven indexes as text files: the Maven index exporter. It's available as a Docker image, and no code is required.
It basically downloads all the .gz index files, extracts the indexes using the maven-indexer CLI, and exports them to a text file with clue. It has been tested on Maven Central and works on many other Maven repositories.
