I have been asked to work on a Robot Framework (RF) Java test library that can post test results to a different service through SOAP. While the concept seems easy to understand, I am stuck due to my inexperience with RF.
Ideally, I will write the test library ABC, which contains a method that takes a string as an input parameter: the location of the test result XML file. Then I add ABC to an RF test case. When the test case is run, ABC gets called. The code inside ABC parses this file to get the list of test case IDs and results, then sends this data to the external service using SOAP.
The difficulty now is how to make RF know the value of the test result file and pass that value to ABC at run time. Any ideas? Thank you in advance.
If you want to update an external system during test execution, the best way is to use the listener interface instead of a test library. This is similar to how TestNG and JUnit solve the same problem.
Your listener will have access to the same information that is written to the result XML file, including metadata and tags, where you are presumably going to store the test case IDs.
EDIT:
You can ignore the data sent to the listener and still work with the XML file. If your listener has an output_file method, Robot Framework will send it the path to the output XML file once it has finished writing it. Then you can open and process the file.
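For example, a Java listener along these lines could do the whole job (a minimal sketch, assuming listener API version 2; the parsing and SOAP-posting calls are hypothetical placeholders):

public class ResultPoster {
    // Tells Robot Framework which listener API version this class implements.
    public static final int ROBOT_LISTENER_API_VERSION = 2;

    // Called by Robot Framework with the path to output.xml
    // after the file has been completely written.
    public void outputFile(String path) {
        // Parse output.xml for test case IDs and results,
        // then post them to the external service via SOAP, e.g.:
        // new SoapResultClient().post(parseResults(path));
    }
}

The listener is then taken into use with the --listener command line option rather than being imported in a test case.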
You can also externalize the whole process and create an application or script that takes the location of the output XML file(s) as parameter(s).
I'm looking for a way to access the name of the file being processed during the data transformation within a DoFn.
My pipeline is as shown below:
Pipeline p = Pipeline.create(options);
p.apply(FileIO.match()
        .filepattern(options.getInput())
        .continuously(Duration.standardSeconds(5), Watch.Growth.<String>never()))
    .apply(FileIO.readMatches()
        .withCompression(Compression.GZIP))
    .apply(XmlIO.<MyString>readFiles()
        .withRootElement("root")
        .withRecordElement("record")
        .withRecordClass(MyString.class)) // <-- this only returns the contents of the file
    .apply(ParDo.of(new ProcessRecord())) // <-- I need to access the file name here
    .apply(ParDo.of(new FormatRecord()))
    .apply(Window.<String>into(FixedWindows.of(Duration.standardSeconds(5))))
    .apply(new CustomWrite(options));
Each file that is processed is an XML document. While processing the content, I also need access to the name of the file being processed, so I can include it in the transformed record.
Is there a way to achieve this?
This post has a similar question, but since I'm trying to use XmlIO I haven't found a way to access the file metadata.
Below is an approach I found online, but I'm not sure whether it can be used in the pipeline described above.
p.apply(FileIO.match()
        .filepattern(options.getInput())
        .continuously(Duration.standardSeconds(5), Watch.Growth.<String>never())) // file metadata
    .apply(FileIO.readMatches()
        .withCompression(Compression.GZIP)) // readable files
    .apply(MapElements
        .into(TypeDescriptors.kvs(TypeDescriptors.strings(), new TypeDescriptor<ReadableFile>() {}))
        .via((ReadableFile file) ->
            KV.of(file.getMetadata().resourceId().getFilename(), file)));
Any suggestions are highly appreciated.
Thank you for your time reviewing this.
EDIT:
I took Alexey's advice and implemented a custom XmlIO. It would be nice if we could just extend the class we need and override the appropriate method. In this specific case, however, the code referenced a method that is protected within the SDK, so I couldn't easily override what I needed and instead ended up copying a whole bunch of files. While this works for now, I hope a more straightforward way to access the file metadata in these IO implementations will be available in the future.
I don't think it's possible to do this "out of the box" with the current implementation of XmlIO, since it returns a PCollection<T> where T is the type of your XML record and, if I'm not mistaken, there is no way to add a file name there. However, you can still "reimplement" ReadFiles and XmlSource so that they return the parsed payload along with the input file metadata.
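As a lighter-weight alternative to copying the XmlIO sources, here is a hedged sketch: keep FileIO.readMatches() and parse each ReadableFile yourself with StAX and JAXB, so the file name stays available next to every record. MyString and the "record" element name are taken from the question; everything else is illustrative.

import java.io.InputStream;
import java.nio.channels.Channels;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Replaces the XmlIO.readFiles() step: emits KV.of(fileName, record)
// for every <record> element found in the file.
class ParseWithFileName extends DoFn<FileIO.ReadableFile, KV<String, MyString>> {
  private transient JAXBContext jaxb;

  @Setup
  public void setup() throws Exception {
    jaxb = JAXBContext.newInstance(MyString.class);
  }

  @ProcessElement
  public void process(@Element FileIO.ReadableFile file,
                      OutputReceiver<KV<String, MyString>> out) throws Exception {
    String fileName = file.getMetadata().resourceId().getFilename();
    // open() decompresses according to the compression set on readMatches()
    try (InputStream in = Channels.newInputStream(file.open())) {
      XMLStreamReader reader = XMLInputFactory.newFactory().createXMLStreamReader(in);
      Unmarshaller unmarshaller = jaxb.createUnmarshaller();
      while (reader.hasNext()) {
        if (reader.getEventType() == XMLStreamConstants.START_ELEMENT
            && "record".equals(reader.getLocalName())) {
          // unmarshal() advances the reader past the element's end tag
          MyString record = unmarshaller.unmarshal(reader, MyString.class).getValue();
          out.output(KV.of(fileName, record));
        } else {
          reader.next();
        }
      }
    }
  }
}

In the pipeline, .apply(ParDo.of(new ParseWithFileName())) would then take the place of the XmlIO.readFiles() step, and the downstream ProcessRecord would receive the file name in the key.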
I am creating a Spring Cloud Function to which I want to pass two inputs, an id and a multipart file (a CSV file), but I am having trouble.
If I send a POST with a multipart file, the function doesn't recognise it and fails with an error like "Failed to determine input for function call with parameters:".
The Postman request is a multipart/form-data POST with the CSV attached (screenshot omitted), and the function bean looks like this:
@Bean
public Function<MultipartFile, String> uploadWatchlist() {
    return body -> {
        try {
            return service.convert(body);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    };
}
I have tried something more akin to Spring MVC, like a RequestEntity object, but had no luck.
My backup (other than Python, haha) is a binary-data POST, so the body is just a string containing the contents of the file. That does work, but it requires me to append the id to each row of the CSV, which is a bit messy.
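For that backup route, a hedged sketch: Spring Cloud Function can also accept a Message<String>, so the id could travel as an HTTP header instead of being appended to every CSV row. The header name and the two-argument service method are made up here:

import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.Message;

@Bean
public Function<Message<String>, String> uploadWatchlist() {
    return message -> {
        // Hypothetical header carrying the id; the function adapters map
        // incoming HTTP headers onto the message headers.
        String id = (String) message.getHeaders().get("x-watchlist-id");
        String csvBody = message.getPayload();
        return service.convert(id, csvBody); // hypothetical two-argument overload
    };
}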
There are other solutions, but we are trying to get this working because Java lambdas are our first choice.
This will replace a manual file upload/verification process that is tedious at the moment. The infrastructure looks like: Postman -> load balancer -> lambda -> ECS.
The Postman/load balancer part will be replaced in the future. Ideally the lambda is sorted out in Java, taking in a file and an id.
Thanks for any help :)
I use PlanBuilder.ModifyPlan to retrieve the contents, and the results are in a StringHandle.
I see PlanBuilderBase.ExportablePlanBase, but there is no reference for how to use its exportAs method.
This method should be something like:
ExportablePlan ep = plan.exportAs(String);
Typically, an application wouldn't call exportAs().
Instead, an application would pass the plan to methods of the RowManager class. Internally, the implementation of such methods exports the plan for sending to the server.
In particular, the following RowManager methods take a plan and get its result rows or an explanation of the query preparation:
http://docs.marklogic.com/javadoc/client/com/marklogic/client/expression/class-use/PlanBuilder.Plan.html#com.marklogic.client.row
Here is an example of getting result rows:
http://docs.marklogic.com/guide/java/OpticJava#id_93678
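A minimal sketch along those lines (the host, credentials, and schema/view names are placeholders):

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.expression.PlanBuilder;
import com.marklogic.client.row.RowManager;
import com.marklogic.client.row.RowRecord;
import com.marklogic.client.row.RowSet;

DatabaseClient client = DatabaseClientFactory.newClient(
    "localhost", 8000,
    new DatabaseClientFactory.DigestAuthContext("myUser", "myPassword"));

RowManager rowMgr = client.newRowManager();
PlanBuilder p = rowMgr.newPlanBuilder();

// resultRows() exports the plan and sends it to the server internally;
// no explicit exportAs() call is needed.
PlanBuilder.Plan plan = p.fromView("mySchema", "myView");
RowSet<RowRecord> rows = rowMgr.resultRows(plan);
for (RowRecord row : rows) {
    System.out.println(row);
}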
RowManager also provides methods for binding parameters of the plan to literal values before sending the plan to the server:
http://docs.marklogic.com/javadoc/client/com/marklogic/client/expression/class-use/PlanBuilder.Plan.html#com.marklogic.client.expression
Examples of edge cases where an application might want to export a plan include:
logging
inserting the plan into a JSON document, so that an e-node script could import the plan without receiving it from the client
The exported plan is a JSON document (represented as a String, if the exportAs() method is used). After exporting the plan, the application could process the JSON document in the same way as any other JSON document. For instance, the application could use JSONDocumentManager to write the plan as a document in the content database.
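A minimal sketch of that last case (the document URI is a placeholder, and plan is assumed to be an ExportablePlan such as a ModifyPlan):

import com.marklogic.client.document.JSONDocumentManager;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.StringHandle;

// Export the plan as a JSON string and store it in the content database.
String planJson = plan.exportAs(String.class);
JSONDocumentManager docMgr = client.newJSONDocumentManager();
docMgr.write("/plans/my-plan.json",
    new StringHandle(planJson).withFormat(Format.JSON));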
Hoping that helps,
As I'm trying to automate the API testing process, I have to pass an XML file to the read method, for example:
Given request read ( varXmlFile )
FYI: the XML file is present in the same folder as the feature file.
Doing this throws an exception like this:
com.intuit.karate.exception.KarateException: called: D:\workspace\APIAutomationDemo\target\test-classes\com\org\features\rci_api_testing.feature, scenario: Get Membership Details, line: 15
javascript evaluation failed: read (varXmlFile )
So does Karate not allow this, or is there some other alternative?
Suggestions please.
Thanks
Please ensure the variable is set:
* def varXmlFile = 'some-xml-file.xml'
Given request read(varXmlFile)
Or just use normally:
Given request read('some-xml-file.xml')
The problem is solved: the variable varXmlFile held the file name together with single quotes, like this: 'SampleXmlRequest.xml'.
So I removed the single quotes when returning the value from the method.
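In other words, with a hypothetical helper method on the Java side (the names are made up):

// Wrong: the quotes become part of the file name that read() receives
public static String getRequestFileWrong() {
    return "'SampleXmlRequest.xml'";
}

// Right: return the bare file name; read() needs no extra quoting
public static String getRequestFile() {
    return "SampleXmlRequest.xml";
}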
I have a synchronous pipeline that needs to be executed from time to time (let's say every 30 minutes):
Connect to an FTP server;
Read a .json file (a single file) from folder A;
Unmarshal the content of the file (Class A) and add it to the route context;
Read all the .fixedlength files (multiple files) from folder B (preMove: processingFolder, move: doneFolder, moveFailed: errorFolder);
Unmarshal the content of the files (Class B) and do some logic;
Read all the .xml files (multiple files) from folder C (preMove: processingFolder, move: doneFolder, moveFailed: errorFolder);
Unmarshal the content of the files (Class C) and do some logic;
End the route.
It is a single pipeline created with the Java DSL. If an error happens, the process stops.
I'm really struggling to build this with Camel. Is it possible, or will I need to handle it manually? I created some demos, but none of them work properly.
Any help will be appreciated.
I would approach this in the following manner:
All the interfaces to the FTP server where you read the files are separate routes. Their job is only to pick up the files; they don't deal with parsing or transformation.
Then create separate routes for actually receiving the data, parsing, and transformation.
Finally, the delivery routes take the data and deliver it to your end destination.
This way you can customise the error handling, it is easier to find out what went wrong and where, it is easier to change one part without affecting everything, and you can reuse the routes in several different places.
The way you describe your message pipeline, it seems beneficial to have three separate routes, each handling a different folder on your FTP server. You can have a timer that triggers all three every 30 minutes or so. The FTP component derives from Camel's File component, and there are a lot of useful parameters that would help with your routing logic here.
For each of your 3 routes you would have something like this:
from("ftp://foo#myserver?include=*.xml&preMove=processingFolder&move=doneFolder&moveFailed=errorFolder")
.unmarshal()
...
You can find more info about filtering files by their extensions here
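Putting it together, a minimal sketch of the three polling routes (the folder names, data formats, and direct: endpoints are assumptions; delay is the poll interval in milliseconds):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.BindyType;
import org.apache.camel.model.dataformat.JsonLibrary;

public class FtpPollingRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Shared consumer options: poll every 30 minutes, move files as they are processed.
        String opts = "&preMove=processingFolder&move=doneFolder&moveFailed=errorFolder&delay=1800000";

        from("ftp://foo@myserver/folderA?include=.*\\.json" + opts)
            .unmarshal().json(JsonLibrary.Jackson, ClassA.class)
            .to("direct:handleA");

        from("ftp://foo@myserver/folderB?include=.*\\.fixedlength" + opts)
            .unmarshal().bindy(BindyType.Fixed, ClassB.class)
            .to("direct:handleB");

        from("ftp://foo@myserver/folderC?include=.*\\.xml" + opts)
            .unmarshal().jaxb()
            .to("direct:handleC");

        from("direct:handleA")
            .process(exchange ->
                // e.g. stash the unmarshalled Class A on an exchange property
                exchange.setProperty("classA", exchange.getIn().getBody(ClassA.class)));

        // direct:handleB and direct:handleC would hold the Class B / Class C logic.
    }
}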