Apache Camel Kafka: aggregating before producing messages loses the header - Java

I am using Apache Camel Kafka with Spring Boot:
<camel.version>3.14.2</camel.version>
I am using the default configuration of the Apache Camel Kafka component:
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-kafka-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
My Camel route - the fileConsume file has 6000 lines:
from(fileConsume).split(body().tokenize()).setHeader("testHeader", constant("valueHeader"))
.aggregate(new GroupedMessageAggregationStrategy()).constant(true).completionTimeout(100L)
.to("kafka:topicTest");
All messages from the file are produced to Kafka very fast (less than 2 seconds), but the header is not present.
When I remove the aggregation:
from(fileConsume).split(body().tokenize()).setHeader("testHeader", constant("valueHeader")).to("kafka:topicTest");
all messages from the file are produced to Kafka very slowly (more than 10 minutes), but the header is present.
I need some help producing messages quickly with the Apache Camel Kafka component while keeping the header.

You must do this in order to keep the header when aggregation is performed:
from("sftp://xxxxxxx#localhost:"
+ "2222/data/in"
+ "?password="
+ "&preferredAuthentications=publickey"
+ "&knownHostsFile=~/.ssh/known_hosts"
+ "&privateKeyFile=xxxxxxx"
+ "&privateKeyPassphrase="
+ "&passiveMode=true"
+ "&fastExistsCheck=true"
+ "&download=true"
+ "&delete=true"
+ "&stepwise=false"
+ "&antInclude=*"
+ "&antExclude=**reject**"
+ "&recursive=false"
+ "&maxMessagesPerPoll=10"
+ "&initialDelay=0"
+ "&delay=0"
+ "&connectTimeout=10000"
+ "&soTimeout=300000"
+ "&timeout=30000"
+ "&shuffle=true"
+ "&eagerMaxMessagesPerPoll=false"
+ "&moveFailed=reject"
+ "&binary=true"
+ "&localWorkDirectory=/opt/camel_data/kafka/"
+ "&readLock=none"
+ "&readLockCheckInterval=1000"
+ "&readLockMinLength=1"
+ "&readLockLoggingLevel=INFO"
+ "&readLockIdempotentReleaseDelay=10000"
+ "&readLockRemoveOnCommit=false"
+ "&readLockRemoveOnRollback=true"
+ "&bulkRequests=1000"
+ "&charset=utf-8")
.routeId("Consume SFTP")
.id("Consume SFTP")
.setProperty("yoda_core_technical_id").header(Exchange.BREADCRUMB_ID)
.setProperty("x_filename_source").header(Exchange.FILE_NAME_ONLY)
.setProperty("x_filepath_source").header("CamelFileAbsolutePath")
.setProperty("x_correlation_id").header("CamelFileName")
.split(body().tokenize())
.setHeader("test",constant("test"))
.aggregate(new GroupedMessageAggregationStrategy())
.constant(true)
.completionTimeout(100L)
.to("direct:aggregate");
from("direct:aggregate")
.process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
System.out.println(exchange);
GenericFileMessage<String> message = (GenericFileMessage<String>) exchange.getMessage().getBody(List.class).get(0);
exchange.getMessage().setHeader("test", message.getHeader("test"));
}
})
.to("mock:result");

Related

camel pollEnrich is not working for the second time

I am reading and processing two files from two different file locations and comparing their content.
If the 2nd file is not available, the rest of the process executes with the 1st file. If the 2nd file is available, the comparison process should happen. For this I am using Camel pollEnrich, but the problem is that Camel picks up the 2nd file only the first time. Without restarting the Camel route, the 2nd file is not picked up again even if it is present there.
After restarting the Camel route it works fine, but after that it again stops picking up the 2nd file.
I am moving the files to different locations after processing them.
Below is my piece of code:
from("sftp:" + firstFileLocation + "?privateKeyFile=" + ppkFileLocation + "&username=" + sftpUsername
+ "&readLock=changed&idempotent=true&move=" + firstFileArchiveLocation)
.pollEnrich("sftp:" + secondFileLocation + "?privateKeyFile=" + ppkFileLocation + "&username=" + sftpUsername
+ "&readLock=changed&idempotent=true&fileExist=Ignore&move="+ secondFileLocationArchive ,10000,new FileAggregationStrategy())
.routeId("READ_INPUT_FILE_ROUTE")
Need help.
You're setting idempotent=true in the SFTP consumer, which means Camel will not process the same file name twice. Since you're moving the files, it would make sense to set idempotent=false.
Quoted from the Camel documentation:
Option to use the Idempotent Consumer EIP pattern to let Camel skip
already processed files. Will by default use a memory based LRUCache
that holds 1000 entries. If noop=true then idempotent will be enabled
as well to avoid consuming the same files over and over again.
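As a reference sketch only (the endpoint options and the FileAggregationStrategy are the ones from the question), the route with that one flag flipped would look like this:
// Same route as in the question, but with idempotent=false on both endpoints
// so a file with the same name can be consumed again on a later poll
// (the files are moved away after processing, so reprocessing loops are avoided).
from("sftp:" + firstFileLocation + "?privateKeyFile=" + ppkFileLocation
        + "&username=" + sftpUsername
        + "&readLock=changed&idempotent=false&move=" + firstFileArchiveLocation)
    .pollEnrich("sftp:" + secondFileLocation + "?privateKeyFile=" + ppkFileLocation
            + "&username=" + sftpUsername
            + "&readLock=changed&idempotent=false&fileExist=Ignore&move=" + secondFileLocationArchive,
            10000, new FileAggregationStrategy())
    .routeId("READ_INPUT_FILE_ROUTE");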
I'm adding an alternative solution based on the comments on the answer posted by Jeremy Ross. My answer is based on the following code example; for brevity I've only included the configure() method of the test route.
@Override
public void configure() throws Exception {
String firstFileLocation = "//127.0.0.1/Folder1";
String secondFileLocation = "//127.0.0.1/Folder2";
String ppkFileLocation = "./key.pem";
String sftpUsername = "user";
String sftpPassword = "xxxxxx";
String firstFileArchiveLocation = "./Archive1";
String secondFileLocationArchive = "./Archive2";
IdempotentRepository repository1 = MemoryIdempotentRepository.memoryIdempotentRepository(1000);
IdempotentRepository repository2 = MemoryIdempotentRepository.memoryIdempotentRepository(1000);
getCamelContext().getRegistry().bind("REPO1", repository1);
getCamelContext().getRegistry().bind("REPO2", repository2);
from("sftp:" + firstFileLocation
+ "?password=" + sftpPassword + "&username=" + sftpUsername
+ "&readLock=idempotent&idempotent=true&idempotentKey=\\${file:name}-\\${file:size}-\\${file:modified}" +
"&idempotentRepository=#REPO1&stepwise=true&download=true&delay=10&move=" + firstFileArchiveLocation)
.to("direct:combined");
from("sftp:" + secondFileLocation
+ "?password=" + sftpPassword + "&username=" + sftpUsername
+ "&readLock=idempotent&idempotent=true&idempotentKey=\\${file:name}-\\${file:size}-\\${file:modified}" +
"&idempotentRepository=#REPO2" +
"&stepwise=true&delay=10&move=" + secondFileLocationArchive)
.to("direct:combined");
from("direct:combined")
.aggregate(constant(true), (oldExchange, newExchange) -> {
if (oldExchange == null) {
oldExchange = newExchange;
}
String fileName = (String) newExchange.getIn().getHeaders().get("CamelFileName");
String filePath = (String) newExchange.getIn().getHeaders().get("CamelFileAbsolutePath");
if (filePath.contains("Folder1")) {
oldExchange.getIn().setHeader("File1", fileName);
} else {
oldExchange.getIn().setHeader("File2", fileName);
}
String file1Name = oldExchange.getIn().getHeader("File1", String.class);
String file2Name = oldExchange.getIn().getHeader("File2", String.class);
if (file1Name != null && file2Name != null) {
// Compare files
// Both files are available
oldExchange.getIn().setHeader("PROCEED", true);
} else if (file1Name != null) {
// No comparison, proceed with File 1
oldExchange.getIn().setHeader("PROCEED", true);
} else {
// Do not proceed, keep file 2 data and wait for File 1
oldExchange.getIn().setHeader("PROCEED", false);
}
String fileName1 = oldExchange.getIn().getHeader("File1", String.class);
String fileName2 = oldExchange.getIn().getHeader("File2", String.class);
oldExchange.getIn().setBody("File1: " + fileName1 + " File2: " + fileName2);
System.out.println(oldExchange);
return oldExchange;
}).completion(exchange -> {
if(exchange.getIn().getHeader("PROCEED", Boolean.class)) {
exchange.getIn().removeHeader("File1");
exchange.getIn().removeHeader("File2");
return true;
}
return false;
}).to("log:Test");
}
In this solution, two SFTP consumers are used instead of pollEnrich, since we need to capture the file changes of both SFTP locations. I have used an idempotent repository and an idempotent key to ignore duplicates. Further, I've used the same idempotent repository as the read-lock store, assuming only Camel routes are accessing the files.
After the files are received by the SFTP consumers, they are sent to the direct:combined producer, which then routes the exchange to an aggregator.
In the example aggregation strategy I have provided, you can see that the file names are stored in exchange headers. Based on the file information retrieved from the headers, the aggregator can decide how to process the file and whether or not to proceed with the exchange. (If only file2 has been received, the exchange should not proceed to the next stages/routes.)
Finally, the completion predicate decides whether or not to proceed with the exchange and log its body, based on the headers set by the aggregator. I have added an example clean-up step in the predicate as well.
I hope this example gives you the basic idea of my suggestion to use an aggregator.

Apache Camel's load balanced route doesn't work if one of the endpoints stops connecting

I have a scenario in which, if my endpoint1 is down, all messages should be routed to endpoint2, or vice versa. If both are up, messages should be sent in round-robin fashion. Can someone please give me some idea of how to handle this scenario?
from(itemFileConfig.getWorkingDir())
.log("Entered into file consumption part::")
.autoStartup(true)
.process(fileProcessor)
.split(body().tokenize("\n"))
.loadBalance()
.roundRobin()
.to("direct:kafkaPosting1", "direct:kafkaPosting2")
.end();
from("direct:kafkaPosting1")
.to("kafka:" + config.getTopicName() + "?" + "brokers=" +
config.getBoostStapServers1() + "&" +"serializerClass=" +
config.getSerializer())
.end();
from("direct:kafkaPosting2")
.to("kafka:" + config.getTopicName() + "?" + "brokers=" +
config.getBoostStapServers2() + "&" +"serializerClass=" +
config.getSerializer())
.end();
Thanks in advance
// use load balancer with failover strategy
// 1 = will try 1 failover attempt before exhausting
// false = do not use Camel error handling
// true = use round robin mode
.loadBalance().failover(1, false, true)
.to("direct:kafkaPosting1").to("direct:kafkaPosting2");

Consuming a SOAP service with a prebuilt string request

I have built a SOAP string message without the HTTP header on top.
Is there any Spring or Java EE technology that can be used to send the message to a SOAP server?
I want to be able to write to the logs exactly what is sent, while also hiding sensitive data in the SOAP request, such as passwords, when it is printed to the logs.
public class SoapRequestTest {
@Test
public void createHttp() throws IOException {
String soapRequest = "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n" +
"<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n" +
" <soap:Body>\n" +
" <Divide xmlns=\"http://tempuri.org/\">\n" +
" <intA>int</intA>\n" +
" <intB>int</intB>\n" +
" </Divide>\n" +
" </soap:Body>\n" +
"</soap:Envelope>";
}
}
I now want to use this SOAP message when calling the Divide method at http://www.dneonline.com/calculator.asmx?wsdl.
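No answer was posted here, but as one possible direction: a prebuilt SOAP 1.1 envelope can be posted as-is over plain HTTP (Spring WS's WebServiceTemplate is another option if marshalling and interceptors are wanted). Below is a minimal sketch, assuming the endpoint http://www.dneonline.com/calculator.asmx with a SOAPAction of "http://tempuri.org/Divide" taken from that service's WSDL, and using a simple regex as a stand-in for whatever masking of sensitive elements is needed before logging:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SoapPostSketch {

    public static void main(String[] args) throws Exception {
        // the prebuilt envelope from the test above would go here
        String soapRequest = "<?xml version=\"1.0\" encoding=\"utf-8\"?>...";

        // mask sensitive elements before logging; the <password> element is only
        // an example of the kind of data that would need redacting
        String loggable = soapRequest.replaceAll(
                "(?s)<password>.*?</password>", "<password>***</password>");
        System.out.println("Sending SOAP request:\n" + loggable);

        HttpURLConnection con = (HttpURLConnection)
                new URL("http://www.dneonline.com/calculator.asmx").openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        // SOAP 1.1 action header; value assumed from the calculator WSDL
        con.setRequestProperty("SOAPAction", "http://tempuri.org/Divide");

        try (OutputStream out = con.getOutputStream()) {
            out.write(soapRequest.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP status: " + con.getResponseCode());
    }
}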

Send a flow from Java to an Apache NiFi processor

Good morning everyone.
So I have this Java code that parses a Swagger documentation file (a JSON file) and splits it:
{ Swagger swagger = new SwaggerParser().read("C:/Users/admin/Desktop/testdownload.txt");
Map<String, Path> paths = swagger.getPaths();
for (Map.Entry<String, Path> p : paths.entrySet()) {
Path path = p.getValue();
Map<HttpMethod, Operation> operations = path.getOperationMap();
for (java.util.Map.Entry<HttpMethod, Operation> o : operations.entrySet()) {
System.out.println("===");
System.out.println("PATH:" + p.getKey());
System.out.println("Http method:" + o.getKey());
System.out.println("Summary:" + o.getValue().getSummary());
System.out.println("Parameters number: " + o.getValue().getParameters().size());
for (Parameter parameter : o.getValue().getParameters()) {
System.out.println(" - " + parameter.getName());
}
System.out.println("Responses:");
for (Map.Entry<String, Response> r : o.getValue().getResponses().entrySet()) {
System.out.println(" - " + r.getKey() + ": " + r.getValue().getDescription());
}
System.out.println("");
}
}
}
What I want to ask is: is it possible to send this output, one path at a time, to Apache NiFi?
Is there any solution where NiFi extracts those outputs and puts each one of them into a separate processor?
You could start an HTTP listener service in NiFi using the HandleHttpRequest processor.
Some time ago I did something like this, sending data from my Java application to HandleHttpRequest. This processor is designed to be used in conjunction with the HandleHttpResponse processor in order to create a web service.
You just have to post your data to this web service; the web service consumes it and you already have your data in NiFi. From then on, you can manipulate and control your data as you please.
You can also look into ListenHTTP.
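As a rough illustration of that last suggestion (the port and path below are placeholders for whatever the ListenHTTP processor is configured with), the loop in the question could POST each extracted path to NiFi instead of printing it; every POST then arrives as its own FlowFile:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NifiSender {

    // placeholder endpoint: ListenHTTP's "Listening Port" and "Base Path"
    // must match what is configured on the NiFi processor
    private static final String NIFI_URL = "http://localhost:8011/contentListener";

    // send one record (e.g. one Swagger path + operation) to NiFi
    static void sendToNifi(String record) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(NIFI_URL).openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "text/plain; charset=utf-8");
        try (OutputStream out = con.getOutputStream()) {
            out.write(record.getBytes(StandardCharsets.UTF_8));
        }
        if (con.getResponseCode() / 100 != 2) {
            throw new IllegalStateException("NiFi rejected the record: " + con.getResponseCode());
        }
    }
}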

Search for issues in current sprint in JIRA

I am trying to pull all issues (resolved or not) from the current sprint and display that information. I am using the JIRA REST Java Client (JRJC) to achieve this. I am quite new to JIRA and the JRJC, so I would like all the help I can get.
This is the code I have written so far:
SearchResult allIssuesInSprint = restClient.getSearchClient().searchJql("sprint = \"" + 29 + "\" order by rank").claim();
Iterable<Issue> allIssues = allIssuesInSprint.getIssues();
for (Issue issue : allIssues) {
System.out.println("Key: " + issue.getKey());
System.out.println("Type: " + issue.getIssueType());
System.out.println("Status: " + issue.getStatus());
System.out.println("Priority: " + issue.getPriority());
}
Again, I am new to JIRA's JAR files, so I'm not certain how to use them. Any help would be appreciated.
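No answer was posted, but one hedged pointer: JQL has an openSprints() function, so the hard-coded sprint id above can be replaced with a query that always targets whichever sprint is currently active (the project key below is a placeholder):
// same JRJC search call as in the question, but with openSprints() in the JQL;
// "PROJ" is a placeholder project key used only to narrow the result set
SearchResult currentSprintIssues = restClient.getSearchClient()
        .searchJql("project = PROJ AND sprint in openSprints() ORDER BY rank")
        .claim();
for (Issue issue : currentSprintIssues.getIssues()) {
    System.out.println(issue.getKey() + " - " + issue.getStatus());
}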
