I am not able to print the response from a SOAP web service.
I have seen a few solutions that involve editing the generated stub code, but I can't edit the generated code because it gets restored to its original form on every build. I am looking for a way to print the response without changing the generated code.
I am consuming the SOAP service from a Spring Boot microservice.
ServiceContext serviceContext = omsSchedulingService._getServiceClient().getServiceContext();
OperationContext operationContext = serviceContext.getLastOperationContext();
MessageContext inMessageContext = operationContext.getMessageContext("Out");
log.info(inMessageContext.getEnvelope().toString());
You can add a message handler for the SOAP message. Once you intercept the message with the handler, you can print out the response.
You will need to add the handler to the handler chain; depending on your project you can do that programmatically or with configuration (a programmatic registration sketch follows the handler example below).
// Excerpt of the handler; handleMessage() and handleFault(), which the
// SOAPHandler interface also requires, delegate to the same handle() method.
final class MyMessageHandler implements SOAPHandler<SOAPMessageContext> {

    @Override
    public void close(MessageContext context) {
        handle(context);
    }

    private boolean handle(MessageContext context) {
        // httpResponseCode, Logger, loggingPrefix, NEWLINE, replaceNewLines() and
        // getCharacterEncoding() are fields/utilities of the surrounding project.
        if (context != null) {
            try {
                Object httpResponseCodeObj = context.get(SOAPMessageContext.HTTP_RESPONSE_CODE);
                if (httpResponseCodeObj instanceof Integer) {
                    httpResponseCode = (Integer) httpResponseCodeObj;
                }
                if (context instanceof SOAPMessageContext) {
                    SOAPMessage message = ((SOAPMessageContext) context).getMessage();
                    ByteArrayOutputStream byteOut = new ByteArrayOutputStream(512);
                    message.writeTo(byteOut);
                    String messageStr = byteOut.toString(getCharacterEncoding(message));
                    boolean outbound = Boolean.TRUE.equals(context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY));
                    Logger.info(loggingPrefix, outbound ? "SOAP request: " : "SOAP response: ", replaceNewLines(messageStr));
                }
            } catch (SOAPException e) {
                Logger.error(e, loggingPrefix, "SOAPException: ", e.getMessage(), NEWLINE);
            } catch (IOException e) {
                Logger.error(e, loggingPrefix, "IOException: ", e.getMessage(), NEWLINE);
            }
        }
        return true;
    }
}
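The handler above still has to be attached to the port's handler chain. Here is a rough sketch of doing that programmatically, assuming a JAX-WS generated port; the variable name port and the helper class are illustrative, not taken from the question:

import java.util.List;
import javax.xml.ws.Binding;
import javax.xml.ws.BindingProvider;
import javax.xml.ws.handler.Handler;

public final class HandlerChainInstaller {

    // Attaches the logging handler to an already created JAX-WS port,
    // without touching the generated stub sources.
    public static void install(Object port) {
        Binding binding = ((BindingProvider) port).getBinding();
        List<Handler> handlerChain = binding.getHandlerChain();
        handlerChain.add(new MyMessageHandler());
        binding.setHandlerChain(handlerChain); // the modified chain must be set back explicitly
    }
}

Because the generated classes are recreated on every build, doing the registration in your own code (for example right after the port is created in a Spring service or configuration class) keeps the logging independent of the generated sources.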
If you don't want to implement an interceptor, the easiest way is to enable logging via VM arguments:
JAVA_OPTS=-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog -Dorg.apache.commons.logging.simplelog.showdatetime=true -Dorg.apache.commons.logging.simplelog.log.httpclient.wire=debug -Dorg.apache.commons.logging.simplelog.log.org.apache.commons.httpclient=debug
This way you should see your requests and responses, including headers, logged to the console.
First, get the AxisConfiguration from the client stub.
AxisConfiguration axisConf = stub._getServiceClient().getAxisConfiguration();
Processing of incoming and outgoing messages is divided into phases. There is a list of phases (a flow) that is processed when everything works correctly (without errors), and another one for situations when a fault occurs, e.g. when an exception is thrown during message processing. Each flow is either incoming or outgoing, so there are four flows altogether.
List<Phase> phasesIn = axisConf.getInFlowPhases(); // normal incoming communication i.e. response from webservice
List<Phase> phasesOut = axisConf.getOutFlowPhases(); // normal outgoing communication
List<Phase> phasesFaultIn = axisConf.getInFaultFlowPhases(); // faulty incoming communication e.g. when an exception occurs during message processing
List<Phase> phasesFaultOut = axisConf.getOutFaultFlowPhases(); // faulty outgoing communication
Some but not all phase names are defined in org.apache.axis2.phaseresolver.PhaseMetadata.
For example "Security" phase processed in Rampart module (module for Web Service Security) won't be found in PhaseMetadata.
You can add a handler to every phase, e.g.
for (Phase p : phasesOut) {
    if (PhaseMetadata.PHASE_TRANSPORT_OUT.equals(p.getName())) {
        p.addHandler(new MessageContentLoggerHandler());
    }
}
A handler is a class which extends org.apache.axis2.handlers.AbstractHandler.
You just have to implement
public InvocationResponse invoke(MessageContext msgContext).
There you have access to the MessageContext. For example, you can get the whole SOAP envelope like this:
msgContext.getEnvelope().toString()
and for example print it to your logs or save as a separate file.
Remember to put
return InvocationResponse.CONTINUE;
at the end of the invoke method for the case when the handler processes the message successfully. Otherwise processing stops in this handler and the message won't get to any other phase.
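A minimal logging handler along these lines could look like the sketch below; the class name matches the MessageContentLoggerHandler used above, and the logging itself is just an example:

import org.apache.axis2.AxisFault;
import org.apache.axis2.context.MessageContext;
import org.apache.axis2.handlers.AbstractHandler;

public class MessageContentLoggerHandler extends AbstractHandler {

    @Override
    public InvocationResponse invoke(MessageContext msgContext) throws AxisFault {
        // The whole SOAP envelope of the message currently passing through this phase.
        System.out.println(msgContext.getEnvelope().toString());
        // Let the message continue through the remaining handlers and phases.
        return InvocationResponse.CONTINUE;
    }
}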
If you need to see the whole message including WSS headers, you can add your own phase. For example, this adds your custom phase as the last one in the outgoing flow (so also after Rampart's security phase):
Phase phase = new Phase("SomePhase");
phase.addHandler(new SomeCustomHandler());
axisConf.getOutFlowPhases().add(phase);
Of course, logging (or exposing in any other way) security headers in a production environment is a very bad idea. Do it only for debugging purposes in a test environment.
I'm really new to Camel concepts. We need to create several identical routes, identical except for some parameters, from a Kafka topic to an HTTP endpoint, with some processing in between.
Besides this, we want to commit the message consumption explicitly only once the HTTP endpoint has been called successfully.
To achieve this we set up a route template that carries the route parameterization and configured it to commit manually after calling the HTTP endpoint:
public void configure() throws Exception {
    // @formatter:off
    routeTemplate(Constantes.KAFKA_GENERIC_ROUTE)
        .templateParameter(Constantes.JOB_NAME)
        .templateParameter(Constantes.TOPIC)
        .templateParameter(Constantes.PUBLISHER_ID)
        .templateParameter(Constantes.CORRELATION_ID_PARAMETER)
        .templateParameter(Constantes.JOB_NAME_PARAMETER)
        .templateParameter(Constantes.CORRELATION_ID_PARAMETER)
        .from(getKafkaEndpoint())
            .messageHistory()
            .filter(simple("${header.publisherId} == '{{publisherId}}'"))
            .process(messageLoggerProcessor)
            .process(modelMapperProcessor)
            .process(jsonlToArrayProcessor)
            .process(payloadProcessor)
            .resequence(header("dmlTimestamp")).batch().timeout(maximumRequestCount)
            .setHeader(Exchange.HTTP_METHOD, simple("POST"))
            .setHeader(Exchange.CONTENT_TYPE, constant("application/json;charset=UTF-8"))
            .setHeader(Constantes.ACCEPT, constant("application/json"))
            .setHeader(Constantes.API_KEY, constant(apiKey))
            .throttle(maximumRequestCount).timePeriodMillis(timePeriodMillis).asyncDelayed(true)
            .process(apiConsumerProcessorLogger)
            .to(this.url)
            .process(kafkaOffsetProcessor);
    // @formatter:on
}
private String getKafkaEndpoint() {
    String endpoint = "kafka:{{topic}}?allowManualCommit=true&autoCommitEnable=false&brokers=" + kafkaBrokers;
    if (securityEnabled()) {
        endpoint += "&securityProtocol=SASL_SSL" + "&saslMechanism=PLAIN"
                + "&saslJaasConfig=org.apache.kafka.common.security.plain.PlainLoginModule required username=\""
                + username + "\" password=\"" + password + "\";" + "&sslTruststoreLocation=" + sslTrustStoreLocation
                + "&sslTruststorePassword=" + sslTruststorePassword;
    }
    return endpoint;
}
public class KafkaOffsetProcessor implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        KafkaManualCommit manual = exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
        manual.commit();
        log.info("Committed Kafka offset");
    }
}
The problem is that we systematically get this error when a message is consumed by a route:
Trace: java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1824)
at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:1808)
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1255)
at org.apache.camel.component.kafka.DefaultKafkaManualCommit.commitOffset(DefaultKafkaManualCommit.java:60)
at org.apache.camel.component.kafka.DefaultKafkaManualCommit.commitSync(DefaultKafkaManualCommit.java:51)
My understanding is that the KafkaConsumer instance is reused across multiple routes and therefore generates the error, but it could also be related to using a SEDA endpoint as stated here (https://issues.apache.org/jira/browse/CAMEL-12722), which we don't explicitly do.
We tried injecting a KafkaComponent local bean into the route:
.templateBean("myKafkaConfiguration").typeClass("org.apache.camel.component.kafka.KafkaConfiguration").property("topic", "{{" + Constantes.TOPIC +"}}").properties(kafkaConfiguration)
.end()
.templateBean("myKafka").typeClass("org.apache.camel.component.kafka.KafkaComponent").property("configuration", "#{{myKafkaConfiguration}}")
.end()
.from("#{{myKafka}}")
But it ends up with another error, because you cannot consume from a Bean endpoint (https://camel.apache.org/components/3.18.x/bean-component.html).
How can we use a different KafkaConsumer for every created route? Or, if the issue is SEDA related, how can we make this route a direct route?
Thank you for your help
I have a Spring Boot application that publishes messages to an Azure Storage queue. I also have an Azure queueTrigger function written in Java that listens to the same queue the Spring Boot application publishes to. The queueTrigger function is not able to detect messages published on the queue.
Here is my publisher code:
public static void addQueueMessage(String connectStr, String queueName, String message) {
    try {
        // Instantiate a QueueClient which will be
        // used to create and manipulate the queue
        QueueClient queueClient = new QueueClientBuilder()
                .connectionString(connectStr)
                .queueName(queueName)
                .buildClient();

        System.out.println("Adding message to the queue: " + message);

        // Add a message to the queue
        queueClient.sendMessage(message);
    } catch (QueueStorageException e) {
        // Output the exception message and stack trace
        System.out.println(e.getMessage());
        e.printStackTrace();
    }
}
Here is my queueTrigger function app code
@FunctionName("queueprocessor")
public void run(
        @QueueTrigger(name = "message",
                      queueName = "queuetest",
                      connection = "AzureWebJobsStorage") String message,
        final ExecutionContext context
) {
    context.getLogger().info(message);
}
I'm passing the same connection string and queue name, and it still doesn't work. If I run the function on my local machine it gets triggered, but with an error.
As the official doc suggests,
Functions expect a base64 encoded string. Any adjustments to the encoding type (in order to prepare data as a base64 encoded string) need to be implemented in the calling service.
Update the sender code to send a Base64-encoded message:
String encodedMsg = Base64.getEncoder().encodeToString(message.getBytes()); // java.util.Base64
queueClient.sendMessage(encodedMsg);
I encountered a knotty problem when receiving a message from a WildFly JMS queue. My code is below:
Session produceSession = connectionFactory.createConnection().createSession(false, Session.CLIENT_ACKNOWLEDGE);
Session consumerSession = connectionFactory.createConnection().createSession(false, Session.CLIENT_ACKNOWLEDGE);
ApsSchedule apsSchedule = new ApsSchedule();
boolean success;
MessageProducer messageProducer = produceSession.createProducer(outQueueMaxusOrder);
success = apsSchedule.sendD90Order(produceSession, messageProducer, d90OrderAps);
if (!success) {
    logger.error("Can't send APS schedule msg ");
} else {
    MessageConsumer consumer = consumerSession.createConsumer(inQueueDeliveryDate);
    data = apsSchedule.receiveD90Result(consumerSession, consumer);
}
Then it gets into receiveD90Result():
public DeliveryData receiveD90Result(Session session, MessageConsumer consumer) {
    DeliveryData data = null;
    try {
        Message message = consumer.receive(10000);
        if (message == null) {
            return null;
        }
        TextMessage msg = (TextMessage) message;
        String text = msg.getText();
        logger.debug("Receive APS d90 result: {}", text);
        ObjectMapper mapper = new ObjectMapper();
        data = mapper.readValue(text, DeliveryData.class);
    } catch (JMSException je) {
        logger.error("Can't receive APS d90 order result: {}", je.getMessage());
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            consumer.close();
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
    return data;
}
But when calling consumer.receive(10000), the project can't get a message from the queue. If I use the asynchronous approach of an MDB listening to the queue, I can get the message from the queue. How can I resolve this?
There are multiple modes you can choose from to get a message from a queue. Message queues are asynchronous in usage by default. There are, however, cases when you want to read synchronously, for example sending a message with an account number and using another queue to read the response, matching it with a message id or a message correlation id. When you do a receive, the program waits for a message to arrive within the polling interval specified in receive.
The code snippet you have, as I see it, uses the pseudo-synchronous approach. If you want to use an MDB instead, you will have to implement a message-driven bean (an EJB resource) or a message listener.
The way an MDB/message listener works is more event based: instead of a poll with a timeout (like receive), you implement a callback called onMessage() that is invoked every time there is a message. Instead of a synchronous call, this becomes asynchronous. Your application may require some changes in its design.
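For illustration, a minimal message-driven bean along those lines could look like the sketch below; the class name and the queue JNDI name are assumptions, not taken from the question:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// The container invokes onMessage() for every message arriving on the configured queue.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "java:/jms/queue/inQueueDeliveryDate"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class D90ResultListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String text = ((TextMessage) message).getText();
                // Hand the payload over to your own processing,
                // e.g. the Jackson mapping already used in receiveD90Result().
                System.out.println("Received APS d90 result: " + text);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}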
I don't see where you're calling javax.jms.Connection.start(). In fact, it doesn't look like you even have a reference to the javax.jms.Connection instance used for your javax.jms.MessageConsumer. If you don't have a reference to the javax.jms.Connection then you can't invoke start() and you can't invoke close() when you're done so you'll be leaking connections.
Furthermore, connections are "heavy" objects and are meant to be re-used. You should create a single connection for both the producer and consumer. Also, if your application is not going to use the javax.jms.Session from multiple threads then you don't need multiple sessions either.
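Here is a rough sketch of that shape, reusing the names from the question and the JMS 1.1-style API it already uses; the method name is made up:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// One connection, started once, shared by the producer and the consumer.
void sendAndReceive(ConnectionFactory connectionFactory, Queue outQueueMaxusOrder, Queue inQueueDeliveryDate) throws JMSException {
    Connection connection = connectionFactory.createConnection();
    try {
        connection.start(); // without start() no messages are delivered to consumers

        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(outQueueMaxusOrder);
        MessageConsumer consumer = session.createConsumer(inQueueDeliveryDate);

        // ... send the order and receive the result as in the existing code ...
    } finally {
        connection.close(); // closes the sessions, producers and consumers created from it
    }
}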
I'm sending a single message that produces multiple messages, two of which arrive on the same JMS endpoint.
runner.send(sendMessageBuilder -> sendMessageBuilder.endpoint(inputMessage.getEndpoint())
        .messageType(MessageType.XML)
        .payload(inputMessage.getPayload())
        .header(JMSOUTPUTCORRELATIONID, correlationId));

for (OutputMessage outputMessage : inputMessage.getOutputMessages()) {
    runner.receive(receiveMessageBuilder -> receiveMessageBuilder.endpoint(outputMessage.getEndpoint())
            .schemaValidation(false)
            .payload(outputMessage.getPayload())
            .header(JMSOUTPUTCORRELATIONID, correlationId));
}
When validating two messages on the same endpoint, I'm having trouble finding a way to match them to their respective expected outputs.
I was wondering if Citrus has a built-in way to do this, or if I could build in a condition that checks the other expected outputs if the first one fails.
I've added a custom validator.
List<OutputMessage> outputMessages = inputMessage.getOutputMessages();
while (outputMessages.size() > 0) {
    OutputMessage outputMessage = outputMessages.get(0);
    runner.receive(receiveMessageBuilder -> receiveMessageBuilder.endpoint(outputMessage.getEndpoint())
            .schemaValidation(true)
            .validator(new MultipleOutputMessageValidator(outputMessages))
            .header(JMSOUTPUTCORRELATIONID, correlationId));
}
The validator is provided with the list of expected outputs that have not yet been validated. It then tries to validate the received message against each of the expected outputs in the list, and if the validation is successful it removes that expected output from the list.
public class MultipleOutputMessageValidator extends DomXmlMessageValidator {

    private static Logger log = LoggerFactory.getLogger(MultipleOutputMessageValidator.class);

    private List<OutputMessage> controlMessages;

    public MultipleOutputMessageValidator(List<OutputMessage> controlMessages) {
        this.controlMessages = controlMessages;
    }

    @Override
    public void validateMessagePayload(Message receivedMessage, Message controlMessage, XmlMessageValidationContext validationContext, TestContext context) throws ValidationException {
        Boolean isValidated = false;
        for (OutputMessage message : this.controlMessages) {
            try {
                super.validateMessagePayload(receivedMessage, message, validationContext, context);
                isValidated = true;
                controlMessages.remove(message);
                break;
            } catch (ValidationException e) {
                // Do nothing for now
            }
        }
        if (!isValidated) {
            throw new ValidationException("None of the messages validated");
        }
    }
}
You should use JMS message selectors so you can "pick" one of the messages from that queue based on a technical identifier. This selector can be a JMS message header for instance (in your case the header JMSOUTPUTCORRELATIONID). This way you make sure to receive the message that you want to validate first.
Example usage:
receive(action -> action.endpoint(someEndpoint)
.selector("correlationId='Cx1x123456789' AND operation='getOrders'"));
Citrus message selector support is described in the Citrus reference documentation.
I have two applications within my server, and use JMS via ActiveMQ to send messages between the two. My two apps are as follows
Web service - accepts HTTP requests, validates, then sends messages to be executed by the other application.
Exec App - accepts object messages, executes order, sends execution report back to the web service to present to the client.
My Exec App receives messages from the web service within 200 ms, no problems there. However, when I send an exec report, the message can hang in the queue for over 10 seconds before being received by the web service. I am using the same code for both sides' consumers, so I am unsure what the cause would be.
Here is my message producer in the Exec App -
public void createAndSendExecReport(OrderExecutionReport theReport) {
    try {
        logger.debug("Posting exec report: " + theReport.getOrderId());
        this.excChannelMessageProducer.send(createMessage(theReport));
    } catch (JMSException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
[there is a createMessage method which converts my POJO into an object message]
MessageListener listener = new MessageListener() {

    @Override
    public void onMessage(Message message) {
        logger.debug("Incoming execution report");
        try {
            OrderExecutionReport report = (OrderExecutionReport) ((ObjectMessage) message).getObject();
            consumeExecutionReport(report);
        } catch (Exception e) {
            logger.error("Message handling failed. Caught: " + e);
            StringWriter sw = new StringWriter();
            e.printStackTrace(new PrintWriter(sw));
            logger.error(sw.toString());
        }
    }
};
I get the log message "sending execution report"
Then nothing happens in the web service for up to 15 seconds, until finally I get "incoming ...".
What could be the cause of this?
Make sure you have enough MDBs running on the Exec App so they can handle the load.
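The answer above assumes container-managed MDBs. If the consuming side is a plain JMS MessageListener, as in the question's web service, a comparable idea is to register the listener on several sessions so that messages can be dispatched concurrently instead of queuing behind a single consumer. A rough sketch, with the method and parameter names made up:

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;

// Attach the same listener to several sessions; each session dispatches on its own
// thread, so the listener must be thread-safe.
void startConcurrentConsumers(Connection connection, Queue execReportQueue,
                              MessageListener listener, int consumerCount) throws JMSException {
    for (int i = 0; i < consumerCount; i++) {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(execReportQueue);
        consumer.setMessageListener(listener);
    }
    connection.start();
}

Whether this removes the delay depends on what is actually holding up delivery (consumer load, prefetch, broker configuration), so treat it as one thing to check rather than a guaranteed fix.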