Good morning everyone.
So I have this Java code that parses a Swagger documentation file (a JSON file) and splits it:
Swagger swagger = new SwaggerParser().read("C:/Users/admin/Desktop/testdownload.txt");
Map<String, Path> paths = swagger.getPaths();
for (Map.Entry<String, Path> p : paths.entrySet()) {
    Path path = p.getValue();
    Map<HttpMethod, Operation> operations = path.getOperationMap();
    for (Map.Entry<HttpMethod, Operation> o : operations.entrySet()) {
        System.out.println("===");
        System.out.println("PATH:" + p.getKey());
        System.out.println("Http method:" + o.getKey());
        System.out.println("Summary:" + o.getValue().getSummary());
        System.out.println("Parameters number: " + o.getValue().getParameters().size());
        for (Parameter parameter : o.getValue().getParameters()) {
            System.out.println(" - " + parameter.getName());
        }
        System.out.println("Responses:");
        for (Map.Entry<String, Response> r : o.getValue().getResponses().entrySet()) {
            System.out.println(" - " + r.getKey() + ": " + r.getValue().getDescription());
        }
        System.out.println();
    }
}
And here is the input:
And the output is:
What I want to ask is: is it possible to send this output to Apache NiFi one path at a time?
Is there any solution where NiFi extracts those outputs and puts each one of them into a separate processor?
You could start an HTTP listener service in NiFi using the HandleHttpRequest processor.
Some time ago I did something similar and was sending data from my Java application to HandleHttpRequest. This processor is designed to be used in conjunction with the HandleHttpResponse processor in order to create a web service.
You just have to POST your data to this web service; the web service consumes it, and your data is already in NiFi. From then on, you can manipulate and control your data as you please.
You can also look into ListenHTTP.
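For illustration, a minimal sketch of the posting side. The URL, port, and path here are hypothetical; point them at whatever your HandleHttpRequest or ListenHTTP processor is configured to listen on:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NifiPoster {
    // Hypothetical endpoint; match it to your processor's port/path settings
    private static final String NIFI_URL = "http://localhost:8081/contentListener";

    // POST one path's summary per request, so each path becomes its own FlowFile
    public static void postToNifi(String pathSummary) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(NIFI_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/plain");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(pathSummary.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("NiFi responded with HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}

You could then call postToNifi(...) once per path inside the loop from the question instead of (or in addition to) printing to the console.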
I am reading and processing two files from two different locations and comparing their content.
If the 2nd file is not available, the rest of the process should execute with the 1st file alone. If the 2nd file is available, the comparison should happen. For this I am using Camel's pollEnrich, but the problem is that Camel picks up the 2nd file only the first time; without restarting the route, the 2nd file is not picked up again even when it is present.
After restarting the Camel route it works once, but then it again stops picking up the 2nd file.
I am moving the files to different locations after processing them.
Below is my piece of code:
from("sftp:" + firstFileLocation + "?privateKeyFile=" + ppkFileLocation + "&username=" + sftpUsername
+ "&readLock=changed&idempotent=true&move=" + firstFileArchiveLocation)
.pollEnrich("sftp:" + secondFileLocation + "?privateKeyFile=" + ppkFileLocation + "&username=" + sftpUsername
+ "&readLock=changed&idempotent=true&fileExist=Ignore&move="+ secondFileLocationArchive ,10000,new FileAggregationStrategy())
.routeId("READ_INPUT_FILE_ROUTE")
Need help.
You're setting idempotent=true on the SFTP consumer, which means Camel will not process the same file name twice. Since you're moving the files after processing, it would make sense to set idempotent=false.
Quoted from the Camel documentation:
Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory-based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.
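For clarity, here is the route from the question with only that change applied (same variables and enrich logic as above; a sketch, not a tested route):

// idempotent=false: files are moved away after processing, so a file name
// that reappears should be treated as a new file and consumed again
from("sftp:" + firstFileLocation + "?privateKeyFile=" + ppkFileLocation + "&username=" + sftpUsername
        + "&readLock=changed&idempotent=false&move=" + firstFileArchiveLocation)
    .pollEnrich("sftp:" + secondFileLocation + "?privateKeyFile=" + ppkFileLocation + "&username=" + sftpUsername
        + "&readLock=changed&idempotent=false&fileExist=Ignore&move=" + secondFileLocationArchive, 10000, new FileAggregationStrategy())
    .routeId("READ_INPUT_FILE_ROUTE");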
I'm adding an alternative solution, based on the comments on the answer posted by Jeremy Ross. My answer is based on the following code example; I've only included the configure() method of the test route for brevity.
@Override
public void configure() throws Exception {
    String firstFileLocation = "//127.0.0.1/Folder1";
    String secondFileLocation = "//127.0.0.1/Folder2";
    String ppkFileLocation = "./key.pem";
    String sftpUsername = "user";
    String sftpPassword = "xxxxxx";
    String firstFileArchiveLocation = "./Archive1";
    String secondFileLocationArchive = "./Archive2";

    IdempotentRepository repository1 = MemoryIdempotentRepository.memoryIdempotentRepository(1000);
    IdempotentRepository repository2 = MemoryIdempotentRepository.memoryIdempotentRepository(1000);
    getCamelContext().getRegistry().bind("REPO1", repository1);
    getCamelContext().getRegistry().bind("REPO2", repository2);

    from("sftp:" + firstFileLocation
            + "?password=" + sftpPassword + "&username=" + sftpUsername
            + "&readLock=idempotent&idempotent=true&idempotentKey=${file:name}-${file:size}-${file:modified}"
            + "&idempotentRepository=#REPO1&stepwise=true&download=true&delay=10&move=" + firstFileArchiveLocation)
        .to("direct:combined");

    from("sftp:" + secondFileLocation
            + "?password=" + sftpPassword + "&username=" + sftpUsername
            + "&readLock=idempotent&idempotent=true&idempotentKey=${file:name}-${file:size}-${file:modified}"
            + "&idempotentRepository=#REPO2"
            + "&stepwise=true&delay=10&move=" + secondFileLocationArchive)
        .to("direct:combined");

    from("direct:combined")
        .aggregate(constant(true), (oldExchange, newExchange) -> {
            if (oldExchange == null) {
                oldExchange = newExchange;
            }
            String fileName = (String) newExchange.getIn().getHeaders().get("CamelFileName");
            String filePath = (String) newExchange.getIn().getHeaders().get("CamelFileAbsolutePath");
            if (filePath.contains("Folder1")) {
                oldExchange.getIn().setHeader("File1", fileName);
            } else {
                oldExchange.getIn().setHeader("File2", fileName);
            }
            String file1Name = oldExchange.getIn().getHeader("File1", String.class);
            String file2Name = oldExchange.getIn().getHeader("File2", String.class);
            if (file1Name != null && file2Name != null) {
                // Both files are available: compare them
                oldExchange.getIn().setHeader("PROCEED", true);
            } else if (file1Name != null) {
                // Only file 1: no comparison, proceed with file 1
                oldExchange.getIn().setHeader("PROCEED", true);
            } else {
                // Only file 2: do not proceed, keep its data and wait for file 1
                oldExchange.getIn().setHeader("PROCEED", false);
            }
            oldExchange.getIn().setBody("File1: " + file1Name + " File2: " + file2Name);
            System.out.println(oldExchange);
            return oldExchange;
        })
        .completion(exchange -> {
            if (exchange.getIn().getHeader("PROCEED", Boolean.class)) {
                exchange.getIn().removeHeader("File1");
                exchange.getIn().removeHeader("File2");
                return true;
            }
            return false;
        })
        .to("log:Test");
}
In this solution, two SFTP consumers are used instead of pollEnrich, since we need to capture file changes in both SFTP locations. I have used an idempotent repository and an idempotent key to ignore duplicates, and I've used the same idempotent repository as the read-lock store, assuming only Camel routes access the files.
After receiving the files, the SFTP consumers send them to the direct:combined endpoint, which routes each exchange to an aggregator.
In the example aggregation strategy, you can see that the file names are stored in exchange headers. Based on the file information retrieved from those headers, the aggregator can decide how to process the files and whether or not the exchange should proceed. (If only file 2 has been received, the exchange should not proceed to the next stages/routes.)
Finally, the completion predicate decides, based on the headers set by the aggregator, whether to complete the aggregation and log the exchange body. I have added an example clean-up of those headers in the predicate as well.
I hope this example gives you the basic idea of my suggestion to use an aggregator.
I have integrated the Camunda engine with Spring in our application. I want to find the properties assigned to each active task of a running process instance. I am able to get the task instances with the following code:
List<Task> tasks = this.taskService.createTaskQuery().processInstanceId("12").list();
but if I cast the task object to TaskEntity and then call getTaskDefinition(), I get null.
The other way to get task details is through ProcessDefinitionEntity.getTaskDefinitions(), but it also returns null.
How should I get the task details?
To read properties and documentation attributes, use the BPMN Model API.
This example uses an element id to read both.
String processDefinitionId = repositoryService.createProcessDefinitionQuery()
        .processDefinitionKey(DEFINITON_KEY).singleResult().getId();
BpmnModelInstance bpmnModelInstance = repositoryService.getBpmnModelInstance(processDefinitionId);
ServiceTask serviceTask = (ServiceTask) bpmnModelInstance.getModelElementById(ELEMENT_ID);

// Documentation is a collection, but the modeler supports only one attribute
Collection<Documentation> documentations = serviceTask.getDocumentations();

// Properties
Collection<Property> properties = serviceTask.getProperties();
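To actually print what was read, a short follow-up sketch using the two collections from the snippet above (getTextContent() is inherited from the model API's ModelElementInstance):

for (Documentation documentation : documentations) {
    // The documentation body is the element's text content
    System.out.println("documentation: " + documentation.getTextContent());
}
for (Property property : properties) {
    System.out.println("property: " + property.getName());
}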
The above answer gave me a hint but didn't completely solve the problem, so here is the code that serves my purpose.
My user task in the .bpmn file looks like this:
<bpmn:userTask id="Task_063x95d" name="Tech Task">
  <bpmn:documentation>SUCCESS,FAIL</bpmn:documentation>
  <bpmn:extensionElements>
    <camunda:inputOutput>
      <camunda:inputParameter name="language">Java</camunda:inputParameter>
      <camunda:outputParameter name="Platform">Linux</camunda:outputParameter>
    </camunda:inputOutput>
    <camunda:properties>
      <camunda:property name="user" value="Test_User" />
    </camunda:properties>
  </bpmn:extensionElements>
  <bpmn:incoming>SequenceFlow_1xjoyjq</bpmn:incoming>
  <bpmn:outgoing>SequenceFlow_028pkxo</bpmn:outgoing>
</bpmn:userTask>
I have analysed the .bpmn file and then traversed its elements with the help of the code below:
// Active tasks for the currently running instanceId (input to the code below)
List<Task> tasks = this.taskService.createTaskQuery().processInstanceId(instanceId).list();
String documentation = null;
for (Task task : tasks) {
    // This gives the documentation field.
    documentation = task.getDescription();
    // bpmnModelInstance is obtained as shown in the previous answer.
    // Note: the original used tasks.get(0) here, which always resolved the
    // first task; use the current task of the loop instead.
    UserTaskImpl modelElementById = (UserTaskImpl) bpmnModelInstance
            .getModelElementById(task.getTaskDefinitionKey());
    ExtensionElements extensionElements = modelElementById.getExtensionElements();
    Collection<ModelElementInstance> elements = extensionElements.getElements();
    for (ModelElementInstance elem : elements) {
        if (elem instanceof CamundaPropertiesImpl) {
            // To access all properties.
            CamundaPropertiesImpl camundaPropertiesImpl = (CamundaPropertiesImpl) elem;
            Collection<CamundaProperty> camundaProperties = camundaPropertiesImpl.getCamundaProperties();
            for (CamundaProperty property : camundaProperties) {
                System.out.println("camunda property name: " + property.getCamundaName()
                        + " $ " + property.getCamundaValue());
            }
        } else if (elem instanceof CamundaInputOutputImpl) {
            // To access input/output parameters.
            CamundaInputOutputImpl camundaInputOutputImpl = (CamundaInputOutputImpl) elem;
            for (CamundaInputParameter in : camundaInputOutputImpl.getCamundaInputParameters()) {
                log.info("camunda input param name: " + in.getCamundaName()
                        + " $ " + in.getTextContent());
            }
            for (CamundaOutputParameter out : camundaInputOutputImpl.getCamundaOutputParameters()) {
                log.info("camunda output param name: " + out.getCamundaName()
                        + " $ " + out.getTextContent());
            }
        }
    }
}
I am really new to SOAP web services and to NetSuite ERP, and I am trying to generate a report in my company for which I need to obtain all the clients and their invoices from the data available in NetSuite ERP. I followed the Java and Axis tutorial offered with the ERP's sample app, and I successfully created a Java project in Eclipse that consumes the WSDL for NetSuite 2015_2 and compiles the classes needed to run the sample app. I then followed an example from their CRM sample app to obtain a client's information, but the problem is that their example method requires you to enter each client's internal ID. Here is the sample code:
public int getCustomerList() throws RemoteException,
        ExceededUsageLimitFault, UnexpectedErrorFault, InvalidSessionFault,
        ExceededRecordCountFault, UnsupportedEncodingException {
    // This operation requires a valid session
    this.login(true);

    // Prompt for list of internalIds and put in an array
    _console.write("\ninternalIds for records to be retrieved (separated by commas): ");
    String reqKeys = _console.readLn();
    String[] internalIds = reqKeys.split(",");
    return getCustomerList(internalIds, false);
}

private int getCustomerList(String[] internalIds, boolean isExternal)
        throws RemoteException, ExceededUsageLimitFault,
        UnexpectedErrorFault, InvalidSessionFault, ExceededRecordCountFault {
    // Build an array of RecordRef objects and invoke the getList()
    // operation to retrieve these records
    RecordRef[] recordRefs = new RecordRef[internalIds.length];
    for (int i = 0; i < internalIds.length; i++) {
        RecordRef recordRef = new RecordRef();
        recordRef.setInternalId(internalIds[i]);
        recordRefs[i] = recordRef;
        recordRefs[i].setType(RecordType.customer);
    }

    // Invoke getList() operation
    ReadResponseList getResponseList = _port.getList(recordRefs);

    // Process response from getList() operation
    if (!isExternal)
        _console.info("\nRecords returned from getList() operation: \n");

    int numRecords = 0;
    ReadResponse[] getResponses = getResponseList.getReadResponse();
    for (int i = 0; i < getResponses.length; i++) {
        _console.info("\n  Record[" + i + "]: ");
        if (!getResponses[i].getStatus().isIsSuccess()) {
            _console.errorForRecord(getStatusDetails(getResponses[i].getStatus()));
        } else {
            numRecords++;
            Customer customer = (Customer) getResponses[i].getRecord();
            _console.info("    internalId=" + customer.getInternalId()
                    + "\n    entityId=" + customer.getEntityId()
                    + (customer.getCompanyName() == null ? ""
                            : ("\n    companyName=" + customer.getCompanyName()))
                    + (customer.getEntityStatus() == null ? ""
                            : ("\n    status=" + customer.getEntityStatus().getName()))
                    + (customer.getEmail() == null ? ""
                            : ("\n    email=" + customer.getEmail()))
                    + (customer.getPhone() == null ? ""
                            : ("\n    phone=" + customer.getPhone()))
                    + "\n    isInactive=" + customer.getIsInactive()
                    // the original sample had this condition inverted (!= null ? "" : ...),
                    // which would NPE whenever dateCreated was null
                    + (customer.getDateCreated() == null ? ""
                            : ("\n    dateCreated=" + customer.getDateCreated().toString())));
        }
    }
    return numRecords;
}
So as you can see, this method needs the internal ID of each customer, which is not useful for me, as I have many customers and don't want to pass each customer's ID. I read their API docs (which I find hard to navigate and not very helpful) and found a web service called getAll() that returns all the records for a given GetAllRecord object, which in turn requires a GetAllRecordType. However, GetAllRecordType does not support customer entities, so I can't obtain all the customers in the ERP this way.
Is there an easy way to obtain all the customers in my NetSuite ERP (maybe using something other than the SOAP web services they offer)? I am desperate about this situation, as understanding NetSuite's web services API has been really troublesome.
Thanks!
You would normally use a search to select a list of customers. On a large account you would not normally fetch all customers on any regular basis. If you are trying to get the invoices, you might find it more practical to get those with a search as well.
You wrote "in my company". Are you trying to write an application of some sort? If this is an internal project (and even if it's not), you'll probably find using SuiteScript much more efficient in terms of your time and frustration level.
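As an illustration of that point, here is a hedged sketch of a criteria-based customer search using the 2015_2 Axis stubs (class and constant names may differ slightly in your generated proxy):

// Search only the customers modified in the last day, instead of fetching everything
CustomerSearch customerSearch = new CustomerSearch();
CustomerSearchBasic basic = new CustomerSearchBasic();

SearchDateField lastModified = new SearchDateField();
lastModified.setOperator(SearchDateFieldOperator.onOrAfter);
Calendar yesterday = Calendar.getInstance();
yesterday.add(Calendar.DAY_OF_MONTH, -1);
lastModified.setSearchValue(yesterday);

basic.setLastModifiedDate(lastModified);
customerSearch.setBasic(basic);

SearchResult result = _port.search(customerSearch);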
I made it work using the following code in my getCustomerList method:
CustomerSearch customerSrch = new CustomerSearch();
SearchResult searchResult = _port.search(customerSrch);
System.out.println(searchResult.getTotalRecords());

// Iterate over the records actually returned in this page
// (the original loop used getTotalRecords() - 1, which skips the last record)
RecordList rl = searchResult.getRecordList();
Record[] records = rl.getRecord();
for (int i = 0; i < records.length; i++) {
    System.out.println("Customer # " + i);
    Customer testcust = (Customer) records[i];
    System.out.println("First Name: " + testcust.getFirstName());
}
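One caveat worth adding: search() returns only the first page of results, so getTotalRecords() can be larger than what getRecordList() holds. A sketch of paging through the rest with searchMoreWithId(), assuming the same _port and searchResult as above:

// search() returns at most one page; searchMoreWithId() fetches the remaining pages
SearchResult page = searchResult;
while (true) {
    for (Record r : page.getRecordList().getRecord()) {
        Customer customer = (Customer) r;
        System.out.println("entityId: " + customer.getEntityId());
    }
    if (page.getPageIndex() >= page.getTotalPages()) {
        break;
    }
    page = _port.searchMoreWithId(page.getSearchId(), page.getPageIndex() + 1);
}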
I am trying to pull all issues (resolved or not) from the current sprint and display their information. I am using the JIRA REST Java Client (JRJC) to achieve this. I am quite new to JIRA and JRJC, so I would appreciate all the help I can get.
This is the code I have written so far:
SearchResult allIssuesInSprint = restClient.getSearchClient()
        .searchJql("sprint = \"" + 29 + "\" order by rank").claim();
Iterable<Issue> allIssues = allIssuesInSprint.getIssues();
for (Issue issue : allIssues) {
    System.out.println("Key: " + issue.getKey());
    System.out.println("Type: " + issue.getIssueType());
    System.out.println("Status: " + issue.getStatus());
    System.out.println("Priority: " + issue.getPriority());
}
Again, I am new to JIRA's JAR files, so I'm not certain how to use them. Any help would be appreciated.
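One thing to watch out for: searchJql() returns a limited page of results by default (commonly 50), so getting genuinely all issues may require the overload that takes maxResults and startAt. A sketch, assuming the same restClient and the JRJC API shown above; the resolution check is one way to tell resolved issues apart:

// Fetch up to 200 issues starting at offset 0; a null field set uses the default fields
SearchResult result = restClient.getSearchClient()
        .searchJql("sprint = 29 order by rank", 200, 0, null)
        .claim();
for (Issue issue : result.getIssues()) {
    // getResolution() is null while the issue is still unresolved
    String resolution = issue.getResolution() == null
            ? "Unresolved" : issue.getResolution().getName();
    System.out.println(issue.getKey() + " -> " + resolution);
}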
I have been struggling for a couple of hours now with how to link a disc ID to a MusicBrainz MBID.
So, using dietmar-steiner / JMBDiscId:
JMBDiscId discId = new JMBDiscId();
if (discId.init(PropertyFinder.getProperty("libdiscid.path"))) {
    String musicBrainzDiscID = discId.getDiscId(PropertyFinder.getProperty("cdrom.path"));
}
or musicbrainzws2-java:
Disc controller = new Disc();
String drive = PropertyFinder.getProperty("cdrom.path");
try {
    DiscWs2 disc = controller.lookUp(drive);
    log.info("DISC: " + disc.getDiscId() + " match: " + disc.getReleases().size() + " releases");
    ....
I can extract a disc ID for freedb or MusicBrainz easily (more or less), but I have not found a way to calculate the ID that I need to download cover art via the CoverArtArchiveClient from last.fm.
CoverArtArchiveClient client = new DefaultCoverArtArchiveClient();
try {
    UUID mbid = UUID.fromString("mbid to locate release");
    fm.last.musicbrainz.coverart.CoverArt coverArt = client.getByMbid(mbid);
} catch (Exception e) {
    // handle lookup failure
}
Theoretically, I assume, I could use the data collected by musicbrainzws2-java to trigger a search and then use the MBID from the result ... but that cannot be the best option.
I am happy about any push in the right direction...
Cheers,
Ed.
You don't calculate the MBID. The MBID is attached to every entity you retrieve from MusicBrainz.
When getting releases by disc ID you get a list. Each entry is a release and has an MBID, accessible with getId():
for (ReleaseWs2 rel : disc.getReleases()){
log.info("MBID: " + rel.getId() + ", String: " + rel.toString());
}
You then probably want to try the CoverArtArchive (CAA) for every release and take the first cover art you get.
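A sketch of that loop, combining the snippets above (disc comes from the musicbrainzws2-java lookup and client is the DefaultCoverArtArchiveClient from the question):

fm.last.musicbrainz.coverart.CoverArt coverArt = null;
for (ReleaseWs2 rel : disc.getReleases()) {
    try {
        // Each release's MBID is a UUID string, usable directly with the CAA client
        coverArt = client.getByMbid(UUID.fromString(rel.getId()));
        if (coverArt != null && coverArt.getFrontImage() != null) {
            break; // take the first release that actually has front cover art
        }
    } catch (Exception e) {
        // no cover art for this release; try the next one
    }
}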
Unfortunately, I don't know of any API documentation for musicbrainzws2 on the web. I recommend running javadoc on all the source files.