I am building an application using Spring Integration that sends files from one FTP server (source) to another FTP server (target). I first fetch files from the source to a local directory using the inbound adapter and then send the files from the local directory to the target using the outbound adapter.
My code seems to be working fine and I am able to achieve my goal, but my problem is that when the connection to the target FTP server is reset during a transfer, the transfer does not continue after the connection is restored.
I used Java configuration with inbound and outbound adapters. Can anyone please tell me if it is possible to somehow resume the transfer of files after the connection reset?
P.S.: I am a beginner at Spring, so correct me if I have done something wrong here. Thanks.
AppConfig.java:
@Configuration
@Component
public class FileTransferServiceConfig {
@Autowired
private ConfigurationService configurationService;
public static final String FILE_POLLING_DURATION = "5000";
@Bean
public SessionFactory<FTPFile> sourceFtpSessionFactory() {
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost(configurationService.getSourceHostName());
sf.setPort(Integer.parseInt(configurationService.getSourcePort()));
sf.setUsername(configurationService.getSourceUsername());
sf.setPassword(configurationService.getSourcePassword());
return new CachingSessionFactory<FTPFile>(sf);
}
@Bean
public SessionFactory<FTPFile> targetFtpSessionFactory() {
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost(configurationService.getTargetHostName());
sf.setPort(Integer.parseInt(configurationService.getTargetPort()));
sf.setUsername(configurationService.getTargetUsername());
sf.setPassword(configurationService.getTargetPassword());
return new CachingSessionFactory<FTPFile>(sf);
}
@MessagingGateway
public interface MyGateway {
@Gateway(requestChannel = "toFtpChannel")
void sendToFtp(Message message);
}
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(sourceFtpSessionFactory());
fileSynchronizer.setDeleteRemoteFiles(false);
fileSynchronizer.setRemoteDirectory(configurationService.getSourceDirectory());
fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter(
configurationService.getFileMask()));
return fileSynchronizer;
}
@Bean
@InboundChannelAdapter(channel = "ftpChannel",
poller = @Poller(fixedDelay = FILE_POLLING_DURATION))
public MessageSource<File> ftpMessageSource() {
FtpInboundFileSynchronizingMessageSource source =
new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
source.setLocalDirectory(new File(configurationService.getLocalDirectory()));
source.setAutoCreateLocalDirectory(true);
source.setLocalFilter(new AcceptOnceFileListFilter<File>());
return source;
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler targetHandler() {
FtpMessageHandler handler = new FtpMessageHandler(targetFtpSessionFactory());
handler.setRemoteDirectoryExpression(new LiteralExpression(
configurationService.getTargetDirectory()));
return handler;
}
}
Application.java:
@SpringBootApplication
public class Application {
public static ConfigurableApplicationContext context;
public static void main(String[] args) {
context = new SpringApplicationBuilder(Application.class)
.web(false)
.run(args);
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler sourceHandler() {
return new MessageHandler() {
@Override
public void handleMessage(Message<?> message) throws MessagingException {
Object payload = message.getPayload();
System.out.println("Payload: " + payload);
if (payload instanceof File) {
File file = (File) payload;
System.out.println("Trying to send " + file.getName() + " to target");
}
MyGateway gateway = context.getBean(MyGateway.class);
gateway.sendToFtp(message);
}
};
}
}
First of all, it isn't clear what that sourceHandler is for, but you really should make sure that it (or the targetHandler) is subscribed to the proper channel.
I assume that in your real code the targetHandler is actually subscribed to toFtpChannel.
Anyway, that isn't related.
I think the problem here is exactly the combination of the AcceptOnceFileListFilter and the error. The filter works first, during the directory scan, and loads all the local files into an in-memory queue for performance reasons. Then all of them are sent to the channel for processing. When we reach the targetHandler and get an exception, we silently end up at the global errorChannel, losing the fact that the file hasn't been transferred. And the same happens with all the remaining files in memory. I think the transfer does resume, but it is going to work only for new files in the remote directory.
I suggest you add an ExpressionEvaluatingRequestHandlerAdvice to the targetHandler definition (@ServiceActivator(adviceChain)) and, in case of an error, call AcceptOnceFileListFilter.remove(File):
/**
* Remove the specified file from the filter so it will pass on the next attempt.
* @param f the element to remove.
* @return true if the file was removed as a result of this call.
*/
boolean remove(F f);
This way you remove the failed file from the filter and it will be picked up again on the next poll. You have to expose the AcceptOnceFileListFilter as a bean so it can be accessed from the onFailureExpression. The file is the payload of the request message.
EDIT
The sample for the ExpressionEvaluatingRequestHandlerAdvice:
@Bean
public Advice expressionAdvice() {
ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
advice.setOnFailureExpressionString("@acceptOnceFileListFilter.remove(payload)");
advice.setTrapException(true);
return advice;
}
...
@ServiceActivator(inputChannel = "ftpChannel", adviceChain = "expressionAdvice")
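For the @acceptOnceFileListFilter reference in the expression to resolve, the filter has to be a bean rather than an inline new AcceptOnceFileListFilter<>() in the message source. A minimal sketch of how your configuration could expose it (the bean name acceptOnceFileListFilter is my assumption and only has to match the expression):
@Bean
public AcceptOnceFileListFilter<File> acceptOnceFileListFilter() {
return new AcceptOnceFileListFilter<>();
}
@Bean
@InboundChannelAdapter(channel = "ftpChannel",
poller = @Poller(fixedDelay = FILE_POLLING_DURATION))
public MessageSource<File> ftpMessageSource() {
FtpInboundFileSynchronizingMessageSource source =
new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
source.setLocalDirectory(new File(configurationService.getLocalDirectory()));
source.setAutoCreateLocalDirectory(true);
// reference the filter bean here instead of creating a new instance inline
source.setLocalFilter(acceptOnceFileListFilter());
return source;
}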
Everything else you can get from the JavaDocs.
Related
I am working on a backend service which polls an S3 bucket periodically using Spring AWS Integration and processes the polled objects from S3. Below is the implementation:
@Configuration
@EnableIntegration
@IntegrationComponentScan
@EnableAsync
public class S3PollerConfiguration {
//private static final Logger log = (Logger) LoggerFactory.getLogger(S3PollerConfiguration.class);
@Value("${amazonProperties.bucketName}")
private String bucketName;
@Bean
@InboundChannelAdapter(value = "s3FilesChannel", poller = @Poller(fixedDelay = "5"))
public MessageSource<InputStream> s3InboundStreamingMessageSource() {
S3StreamingMessageSource messageSource = new S3StreamingMessageSource(template());
messageSource.setRemoteDirectory(bucketName);
return messageSource;
}
@Bean
public S3RemoteFileTemplate template() {
return new S3RemoteFileTemplate(new S3SessionFactory(thumbnailGeneratorService.getImagesS3Client()));
}
@Bean
public PollableChannel s3FilesChannel() {
return new QueueChannel();
}
@Bean
IntegrationFlow fileReadingFlow() throws IOException {
return IntegrationFlows
.from(s3InboundStreamingMessageSource(),
e -> e.poller(p -> p.fixedDelay(10, TimeUnit.SECONDS)))
.handle(Message.class, (payload, header) -> processS3Object(payload.getHeaders(), payload.getPayload()))
.get();
}
}
I am getting the messages from S3 on object upload and I am able to process them using the input stream received as part of the message payload. But the problem I face here is that I get a 'Timeout waiting for connection from pool' exception after receiving a few messages:
2019-01-06 02:19:06.156 ERROR 11322 --- [ask-scheduler-5] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Failed to execute on session; nested exception is com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:445)
at org.springframework.integration.file.remote.RemoteFileTemplate.list(RemoteFileTemplate.java:405)
at org.springframework.integration.file.remote.AbstractRemoteFileStreamingMessageSource.listFiles(AbstractRemoteFileStreamingMessageSource.java:194)
at org.springframework.integration.file.remote.AbstractRemoteFileStreamingMessageSource.poll(AbstractRemoteFileStreamingMessageSource.java:180)
at org.springframework.integration.aws.inbound.S3StreamingMessageSource.poll(S3StreamingMessageSource.java:70)
at org.springframework.integration.file.remote.AbstractRemoteFileStreamingMessageSource.doReceive(AbstractRemoteFileStreamingMessageSource.java:153)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:155)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:236)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:250)
I know that the issue is related to not closing the opened S3Object, as stated here: https://github.com/aws/aws-sdk-java/issues/1405, so I have implemented closing the input stream of the S3Object received as part of the message payload. But that does not solve the issue and I keep getting the exceptions. Can someone help me fix this issue?
Your problem is that you still mix Messaging Annotations declarations with the Java DSL in your configuration.
It looks like in the fileReadingFlow you close those InputStreams in your processS3Object() method, but you do nothing with the InputStreams produced by the @InboundChannelAdapter(value = "s3FilesChannel", poller = @Poller(fixedDelay = "5")).
Why do you have it in the first place at all? What makes you keep that code if you don't use it?
This S3StreamingMessageSource is polled twice all the time: by the @InboundChannelAdapter and by IntegrationFlows.from().
You just have to remove that @InboundChannelAdapter from the S3StreamingMessageSource bean definition and that's all.
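In other words, only the annotation goes; the bean itself stays and keeps being polled by the fileReadingFlow. A sketch of your bean definition with the annotation removed:
@Bean
public MessageSource<InputStream> s3InboundStreamingMessageSource() {
S3StreamingMessageSource messageSource = new S3StreamingMessageSource(template());
messageSource.setRemoteDirectory(bucketName);
return messageSource;
}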
Please read the Reference Manual to understand the purpose of that annotation and why you don't need it when you use the Java DSL:
https://docs.spring.io/spring-integration/reference/html/configuration.html#_using_the_literal_inboundchanneladapter_literal_annotation
https://docs.spring.io/spring-integration/reference/html/java-dsl.html#java-dsl-inbound-adapters
Gary Russell kindly answered a previous question of mine about Spring Integration UDP flows. Moving on from there, I have stumbled upon an issue with ports.
The Spring Integration documentation says that you can set 0 as the inbound channel adapter port and the OS will select an available port for the adapter, which can be retrieved at runtime by invoking getPort() on the adapter object. The problem is that at runtime I just get 0 if I try to retrieve the port programmatically.
Here's "my" code (i.e. a slightly modified version of Russell's answer to my previous question, for Spring Integration 4.3.12, which I am currently using).
@SpringBootApplication
public class TestApp {
private final Map<Integer, IntegrationFlowRegistration> registrations = new HashMap<>();
@Autowired
private IntegrationFlowContext flowContext;
public static void main(String[] args) {
SpringApplication.run(TestApp.class, args);
}
@Bean
public PublishSubscribeChannel channel() {
return new PublishSubscribeChannel();
}
@Bean
public TestData test() {
return new TestData();
}
@Bean
public ApplicationRunner runner() {
return args -> {
UnicastReceivingChannelAdapter source;
source = makeANewUdpInbound(0);
makeANewUdpOutbound(source.getPort());
Thread.sleep(5_000);
channel().send(MessageBuilder.withPayload("foo\n").build());
this.registrations.values().forEach(r -> {
r.stop();
r.destroy();
});
this.registrations.clear();
makeANewUdpInbound(1235);
makeANewUdpOutbound(1235);
Thread.sleep(5_000);
channel().send(MessageBuilder.withPayload("bar\n").build());
this.registrations.values().forEach(r -> {
r.stop();
r.destroy();
});
this.registrations.clear();
};
}
public UnicastSendingMessageHandler makeANewUdpOutbound(int port) {
System.out.println("Creating an adapter to send to port " + port);
UnicastSendingMessageHandler adapter = new UnicastSendingMessageHandler("localhost", port);
IntegrationFlow flow = IntegrationFlows.from(channel())
.handle(adapter)
.get();
IntegrationFlowRegistration registration = flowContext.registration(flow).register();
registrations.put(port, registration);
return adapter;
}
public UnicastReceivingChannelAdapter makeANewUdpInbound(int port) {
System.out.println("Creating an adapter to receive from port " + port);
UnicastReceivingChannelAdapter source = new UnicastReceivingChannelAdapter(port);
IntegrationFlow flow = IntegrationFlows.from(source)
.<byte[], String>transform(String::new)
.handle(System.out::println)
.get();
IntegrationFlowRegistration registration = flowContext.registration(flow).register();
registrations.put(port, registration);
return source;
}
}
The output I read is
Creating an adapter to receive from port 0
Creating an adapter to send to port 0
Creating an adapter to receive from port 1235
Creating an adapter to send to port 1235
GenericMessage [payload=bar, headers={ip_packetAddress=127.0.0.1/127.0.0.1:54374, ip_address=127.0.0.1, id=c95d6255-e63a-433d-3723-c389fe66b060, ip_port=54374, ip_hostname=127.0.0.1, timestamp=1517220716983}]
I suspect the library did create adapters on OS-chosen free ports, but I am unable to retrieve the assigned port.
The port is assigned asynchronously; you need to wait until the port is actually assigned. Something like...
int n = 0;
while (n++ < 100 && !source.isListening()) {
Thread.sleep(100);
}
if (!source.isListening()) {
// failed to start in 10 seconds.
}
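Applied to your runner, that could look roughly like this (a sketch reusing your makeANewUdpInbound()/makeANewUdpOutbound() methods):
UnicastReceivingChannelAdapter source = makeANewUdpInbound(0);
int n = 0;
while (n++ < 100 && !source.isListening()) {
Thread.sleep(100);
}
if (!source.isListening()) {
throw new IllegalStateException("UDP inbound adapter did not start listening within 10 seconds");
}
// getPort() now returns the OS-assigned port instead of 0
makeANewUdpOutbound(source.getPort());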
We should probably enhance the adapter to emit an event when the port is ready. Feel free to open an 'Improvement' JIRA Issue.
I am new to Spring Boot but have been asked at my job to implement a small web service using Spring Boot.
The web service needs to accept SSL TCP connections (an external system will connect to my web service using a custom protocol, NOT HTTP). Also, I would like to handle these connections in a background task (or multiple background tasks).
After looking at the official documentation (http://docs.spring.io/spring-integration/reference/html/ip.html), I still don't understand where to place all that XML. When I asked on SO where to place it, I was told that this is a very old method of configuration and should not be used anymore.
What would be the "up-to-date" way to do this?
@SpringBootApplication
public class So43983296Application implements CommandLineRunner {
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(So43983296Application.class, args);
Thread.sleep(10_000);
context.close();
}
@Autowired
private DefaultTcpNetSSLSocketFactorySupport ssl;
@Override
public void run(String... args) throws Exception {
Socket socket = ssl.getSocketFactory().createSocket("localhost", 1234);
socket.getOutputStream().write("foo\r\n".getBytes());
BufferedReader br = new BufferedReader(new InputStreamReader(socket.getInputStream()));
String result = br.readLine();
System.out.println(result);
br.close();
socket.close();
}
@Bean
public TcpNetServerConnectionFactory scf() {
TcpNetServerConnectionFactory scf = new TcpNetServerConnectionFactory(1234);
DefaultTcpNetSSLSocketFactorySupport tcpSocketFactorySupport = tcpSocketFactorySupport();
scf.setTcpSocketFactorySupport(tcpSocketFactorySupport);
// Add custom serializer/deserializer here; default is ByteArrayCrLfSerializer
return scf;
}
@Bean
public DefaultTcpNetSSLSocketFactorySupport tcpSocketFactorySupport() {
TcpSSLContextSupport sslContextSupport = new DefaultTcpSSLContextSupport("classpath:test.ks",
"classpath:test.truststore.ks", "secret", "secret");
DefaultTcpNetSSLSocketFactorySupport tcpSocketFactorySupport =
new DefaultTcpNetSSLSocketFactorySupport(sslContextSupport);
return tcpSocketFactorySupport;
}
@Bean
public TcpInboundGateway inGate() {
TcpInboundGateway inGate = new TcpInboundGateway();
inGate.setConnectionFactory(scf());
inGate.setRequestChannelName("upperCase");
return inGate;
}
#ServiceActivator(inputChannel = "upperCase")
public String upCase(byte[] in) {
return new String(in).toUpperCase();
}
}
If you prefer XML configuration for Spring Integration, add it to a Spring configuration XML file and use @ImportResource("my-context.xml") on the class.
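For example, something like this (assuming the file sits on the classpath as my-context.xml):
@SpringBootApplication
@ImportResource("classpath:my-context.xml")
public class So43983296Application {
// beans declared in my-context.xml are registered alongside the Java config
}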
I know Spring Integration has TcpInboundGateway and ByteArrayStxEtxSerializer to handle data coming through a TCP port.
ByteArrayStxEtxSerializer works great if the TCP server needs to read all the data sent from the client and then process it (request and response model). I am using single-use=false so that multiple requests can be processed on the same connection.
For example, if the client sends 0x02AAPL0x03 then the server can send the AAPL price.
My TCP server is working if the client sends 0x02AAPL0x030x02GOOG0x03; it sends the AAPL price and the GOOG price.
Sometimes clients can send EOT (0x04). If the client sends EOT, I would like to close the socket connection.
For example, the client request can be 0x02AAPL0x030x02GOOG0x03 0x020x040x03. Note that the EOT came in the last packet.
I know the ByteArrayStxEtxSerializer deserializer can be customized to read the bytes sent by the client.
Is the deserializer a good place to close the socket connection? If not, how should the Spring Integration framework be notified to close the socket connection?
Please help.
Here is my spring configuration:
<int-ip:tcp-connection-factory id="crLfServer"
type="server"
port="${availableServerSocket}"
single-use="false"
so-timeout="10000"
using-nio="false"
serializer="connectionSerializeDeserialize"
deserializer="connectionSerializeDeserialize"
so-linger="2000"/>
<bean id="connectionSerializeDeserialize" class="org.springframework.integration.ip.tcp.serializer.ByteArrayStxEtxSerializer"/>
<int-ip:tcp-inbound-gateway id="gatewayCrLf"
connection-factory="crLfServer"
request-channel="serverBytes2StringChannel"
error-channel="errorChannel"
reply-timeout="10000"/> <!-- reply-timeout works on inbound-gateway -->
<int:channel id="toSA" />
<int:service-activator input-channel="toSA"
ref="myService"
method="prepare"/>
<int:object-to-string-transformer id="serverBytes2String"
input-channel="serverBytes2StringChannel"
output-channel="toSA"/>
<int:transformer id="errorHandler"
input-channel="errorChannel"
expression="payload.failedMessage.payload + ':' + payload.cause.message"/>
UPDATE:
Adding throw new SoftEndOfStreamException("Stream closed") in the serializer to close the stream works, and I can see the CLOSED log entry in the EventListener. When the server closes the connection, I expect java.io.InputStream.read() to return -1 on the client. But the client is receiving
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
at sun.nio.cs.StreamDecoder.read0(StreamDecoder.java:107)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:93)
at java.io.InputStreamReader.read(InputStreamReader.java:151)
Is there anything else needed to close the connection on the server side and propagate it to the client?
I appreciate your help.
Thank you
The deserializer doesn't have access to the socket, just the input stream; closing it would probably work, but you will likely get a lot of noise in the log.
The best solution is to throw a SoftEndOfStreamException; that signals that the socket should be closed and everything cleaned up.
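A rough sketch of how that could look for your EOT case; this is my own class (not part of the framework) that reuses the STX/ETX framing and throws SoftEndOfStreamException when the framed payload is just EOT (0x04):
public class EotClosingStxEtxSerializer extends ByteArrayStxEtxSerializer {
@Override
public byte[] deserialize(InputStream inputStream) throws IOException {
byte[] frame = super.deserialize(inputStream); // normal <STX>payload<ETX> handling
if (frame.length == 1 && frame[0] == 0x04) { // client sent <STX><EOT><ETX>
throw new SoftEndOfStreamException("EOT received; closing connection");
}
return frame;
}
}
Then point your connectionSerializeDeserialize bean at this class instead of the plain ByteArrayStxEtxSerializer.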
EDIT
Add a listener to detect/log the close...
@SpringBootApplication
public class So40471456Application {
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(So40471456Application.class, args);
Socket socket = SocketFactory.getDefault().createSocket("localhost", 1234);
socket.getOutputStream().write("foo\r\n".getBytes());
socket.close();
Thread.sleep(10000);
context.close();
}
@Bean
public EventListener eventListener() {
return new EventListener();
}
@Bean
public TcpNetServerConnectionFactory server() {
return new TcpNetServerConnectionFactory(1234);
}
@Bean
public TcpReceivingChannelAdapter inbound() {
TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
adapter.setConnectionFactory(server());
adapter.setOutputChannelName("foo");
return adapter;
}
#ServiceActivator(inputChannel = "foo")
public void syso(byte[] in) {
System.out.println(new String(in));
}
public static class EventListener implements ApplicationListener<TcpConnectionCloseEvent> {
private final Log logger = LogFactory.getLog(getClass());
@Override
public void onApplicationEvent(TcpConnectionCloseEvent event) {
logger.info(event);
}
}
}
With XML, just add a <bean/> for your listener class.
Result:
foo
2016-11-07 16:52:04.133 INFO 29536 --- [pool-1-thread-2] c.e.So40471456Application$EventListener : TcpConnectionCloseEvent
[source=org.springframework.integration.ip.tcp.connection.TcpNetConnection#118a7548],
[factory=server, connectionId=localhost:50347:1234:b9fcfaa9-e92c-487f-be59-1ed7ebd9312e]
**CLOSED**
EDIT2
It worked as expected for me...
@SpringBootApplication
public class So40471456Application {
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(So40471456Application.class, args);
Socket socket = SocketFactory.getDefault().createSocket("localhost", 1234);
socket.getOutputStream().write("foo\r\n".getBytes());
try {
System.out.println("\n\n\n" + socket.getInputStream().read() + "\n\n\n");
context.getBean(EventListener.class).latch.await(10, TimeUnit.SECONDS);
}
finally {
socket.close();
context.close();
}
}
@Bean
public EventListener eventListener() {
return new EventListener();
}
@Bean
public TcpNetServerConnectionFactory server() {
TcpNetServerConnectionFactory server = new TcpNetServerConnectionFactory(1234);
server.setDeserializer(is -> {
throw new SoftEndOfStreamException();
});
return server;
}
@Bean
public TcpReceivingChannelAdapter inbound() {
TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
adapter.setConnectionFactory(server());
adapter.setOutputChannelName("foo");
return adapter;
}
public static class EventListener implements ApplicationListener<TcpConnectionCloseEvent> {
private final Log logger = LogFactory.getLog(getClass());
private final CountDownLatch latch = new CountDownLatch(1);
@Override
public void onApplicationEvent(TcpConnectionCloseEvent event) {
logger.info(event);
latch.countDown();
}
}
}
Result:
2016-11-08 08:27:25.964 INFO 86147 --- [ main] com.example2.So40471456Application : Started So40471456Application in 1.195 seconds (JVM running for 1.764)
-1
2016-11-08 08:27:25.972 INFO 86147 --- [pool-1-thread-2] c.e.So40471456Application$EventListener : TcpConnectionCloseEvent [source=org.springframework.integration.ip.tcp.connection.TcpNetConnection#fee3774], [factory=server, connectionId=localhost:54984:1234:f79a6826-0336-4823-8844-67054903a094] **CLOSED**
I have 2 applications exchanging data using RabbitMQ, and I have implemented this using Spring AMQP. In my scenario, the consumer might encounter an exception while processing a consumed message.
If any exception occurs, I am planning to log it to the database. I have to remove the message from the queue explicitly once it reaches the consumer, whether processing succeeds or an error is encountered.
How do I forcefully remove the message from the queue? Otherwise it will stay
there if my application fails to process it.
Below is my listener code:
@RabbitListener(containerFactory="rabbitListenerContainerFactory",queues=Constants.JOB_QUEUE)
public void handleMessage(JobListenerDTO jobListenerDTO) {
//System.out.println("Received summary: " + jobListenerDTO.getProcessXML());
//amqpAdmin.purgeQueue(Constants.JOB_QUEUE, true);
try{
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("initiator", "cmy5kor");
Deployment deploy = repositoryService.createDeployment().addString(jobListenerDTO.getProcessId()+".bpmn20.xml",jobListenerDTO.getProcessXML()).deploy();
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey(jobListenerDTO.getProcessId(), variables);
System.out.println("Process Instance is:::::::::::::"+processInstance);
}catch(Exception e){
e.printStackTrace();
}
}
Configuration Code
@Configuration
@EnableRabbit
public class RabbitMQJobConfiguration extends AbstractBipRabbitConfiguration {
@Bean
public RabbitTemplate rabbitTemplate() {
RabbitTemplate template = new RabbitTemplate(connectionFactory());
template.setQueue(Constants.JOB_QUEUE);
template.setMessageConverter(jsonMessageConverter());
return template;
}
@Bean
public Queue jobQueue() {
return new Queue(Constants.JOB_QUEUE);
}
@Bean(name="rabbitListenerContainerFactory")
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
Jackson2JsonMessageConverter messageConverter = new Jackson2JsonMessageConverter();
DefaultClassMapper classMapper = new DefaultClassMapper();
Map<String, Class<?>> idClassMapping = new HashMap<String, Class<?>>();
idClassMapping.put("com.bosch.diff.approach.TaskMessage", JobListenerDTO.class);
classMapper.setIdClassMapping(idClassMapping);
messageConverter.setClassMapper(classMapper);
factory.setMessageConverter(messageConverter);
factory.setReceiveTimeout(10L);
return factory;
}
}
I don't know about the Spring API or configuration for RabbitMQ, but this:
I have to remove the message from the queue explicitly once it reaches the consumer, whether processing succeeds or an error is encountered.
is exactly what happens when you set the auto-acknowledge flag. That way, the message is acknowledged as soon as it is consumed, so it is gone from the queue.
As long as your listener catches the exception the message will be removed from the queue.
If your listener throws an exception, it will be requeued by default; that behavior can be modified by throwing an AmqpRejectAndDontRequeueException or setting the defaultRequeueRejected property - see the documentation for details.
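A minimal sketch of the "catch, log, and drop" approach based on your listener (logToDatabase() is a placeholder for your own persistence code):
@RabbitListener(containerFactory="rabbitListenerContainerFactory", queues=Constants.JOB_QUEUE)
public void handleMessage(JobListenerDTO jobListenerDTO) {
try {
// ... deploy and start the process instance as before ...
} catch (Exception e) {
logToDatabase(e); // placeholder for your own error persistence
// Swallowing the exception is enough: the container acks the message and it is
// removed from the queue. To reject it instead (without requeueing), rethrow:
// throw new AmqpRejectAndDontRequeueException("Processing failed", e);
}
}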