I am trying to build a module to be deployed on multiple nodes using Spring Boot. Due to the timing constraints of this specific application, I have to use UDP and cannot rely on the easier-to-use REST facilities that Spring provides.
I have to be able to send datagrams to a set of nodes that may vary in time (i.e. the set may grow or shrink, or some nodes may move to new ip/port "coordinates"). Communication must be unicast.
I have been reading the official documentation on TCP and UDP support, but it is rather... compact, and opaque. The javadocs on the org.springframework.integration classes are also rather brief, for that matter.
From what I could understand, an "inbound" channel is used to send a packet, while an outbound channel is used to receive packets.
So far I haven't been able to find an answer to the following questions about inbound channels (i.e. "send" channels, if I understood correctly):
- How can I create more channels at runtime, to send packets to multiple destinations?
- If a host gets moved, should I just destroy the channel and set up a new one, or can I change a channel's parameters (destination ip/port) at runtime?
For outbound channels ("receive" channels, if I understood correctly), I have similar questions:
- How do I set up multiple channels at runtime?
- How do I change the destination of an existing channel at runtime, so I don't have to tear it down and set it up anew?
- Should I just open/close "raw" UDP sockets instead?
You have inbound and outbound reversed.
Here's an example that should provide you with what you need; it uses a pub/sub channel to broadcast...
@SpringBootApplication
public class So48213450Application {

    private final Map<Integer, IntegrationFlowRegistration> registrations = new HashMap<>();

    public static void main(String[] args) {
        SpringApplication.run(So48213450Application.class, args);
    }

    @Bean
    public PublishSubscribeChannel channel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    public ApplicationRunner runner(PublishSubscribeChannel channel) {
        return args -> {
            makeANewUdpAdapter(1234);
            makeANewUdpAdapter(1235);
            channel.send(MessageBuilder.withPayload("foo\n").build());
            registrations.values().forEach(r -> {
                r.stop();
                r.destroy();
            });
        };
    }

    @Autowired
    private IntegrationFlowContext flowContext;

    public void makeANewUdpAdapter(int port) {
        System.out.println("Creating an adapter to send to port " + port);
        IntegrationFlow flow = IntegrationFlows.from(channel())
                .handle(Udp.outboundAdapter("localhost", port))
                .get();
        IntegrationFlowRegistration registration = flowContext.registration(flow).register();
        registrations.put(port, registration);
    }

}
result:
$ nc -u -l 1234 &
[1] 56730
$ nc -u -l 1235 &
[2] 56739
$ jobs
[1]- Running nc -u -l 1234 &
[2]+ Running nc -u -l 1235 &
$ foo
foo
You can't change parameters at runtime; you would have to destroy the old flow and create a new one.
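For example, if a destination host moves, a sketch along these lines (the moveDestination method name is mine; it assumes the flowContext and registrations map from the example above) tears down the old registration and registers a replacement flow:

// Hypothetical helper, reusing the flowContext and registrations map shown above.
public void moveDestination(int oldPort, String newHost, int newPort) {
    IntegrationFlowRegistration old = this.registrations.remove(oldPort);
    if (old != null) {
        old.stop();     // stop the old outbound adapter
        old.destroy();  // remove its beans from the application context
    }
    IntegrationFlow flow = IntegrationFlows.from(channel())
            .handle(Udp.outboundAdapter(newHost, newPort))
            .get();
    this.registrations.put(newPort, flowContext.registration(flow).register());
}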
EDIT
In response to your comments below...
You can't mix and match Spring Integration jars (2.1.x and 5.0.x); they must all be the same version. My example above used Boot 2.0.0.M7 (Boot 2 is scheduled to be released next month).
The Udp factory class was added to spring-integration-ip in 5.0.0.
Here is a similar example (which also adds receiving adapters) for Boot 1.5.9 and Spring Integration 4.3.13...
@SpringBootApplication
public class So482134501Application {

    private final Map<Integer, IntegrationFlowRegistration> registrations = new HashMap<>();

    @Autowired
    private IntegrationFlowContext flowContext;

    public static void main(String[] args) {
        SpringApplication.run(So482134501Application.class, args);
    }

    @Bean
    public PublishSubscribeChannel channel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    public ApplicationRunner runner(PublishSubscribeChannel channel) {
        return args -> {
            makeANewUdpInbound(1234);
            makeANewUdpInbound(1235);
            makeANewUdpOutbound(1234);
            makeANewUdpOutbound(1235);
            Thread.sleep(5_000);
            channel.send(MessageBuilder.withPayload("foo\n").build());
            this.registrations.values().forEach(r -> {
                r.stop();
                r.destroy();
            });
            this.registrations.clear();
        };
    }

    public void makeANewUdpOutbound(int port) {
        System.out.println("Creating an adapter to send to port " + port);
        IntegrationFlow flow = IntegrationFlows.from(channel())
                .handle(new UnicastSendingMessageHandler("localhost", port))
                .get();
        IntegrationFlowRegistration registration = flowContext.registration(flow).register();
        registrations.put(port, registration);
    }

    public void makeANewUdpInbound(int port) {
        System.out.println("Creating an adapter to receive from port " + port);
        IntegrationFlow flow = IntegrationFlows.from(new UnicastReceivingChannelAdapter(port))
                .<byte[], String>transform(String::new)
                .handle(System.out::println)
                .get();
        IntegrationFlowRegistration registration = flowContext.registration(flow).register();
        registrations.put(port, registration);
    }

}
result:
GenericMessage [payload=foo
, headers={ip_packetAddress=localhost/127.0.0.1:54881, ip_address=127.0.0.1, id=db7dae61-078c-5eb6-dde4-f83fc6c591d1, ip_port=54881, ip_hostname=localhost, timestamp=1515764556722}]
GenericMessage [payload=foo
, headers={ip_packetAddress=localhost/127.0.0.1:54880, ip_address=127.0.0.1, id=d1f79e79-569b-637b-57c5-549051f1b031, ip_port=54880, ip_hostname=localhost, timestamp=1515764556722}]
In my code, an SqsMessageDrivenChannelAdapter is configured to read messages from an AWS SQS queue and push them to a pollable channel (a QueueChannel). A service activator polls that channel and processes the messages.
My exact question: how do I make the service activator multithreaded, so that it polls messages from the pollable channel and processes them in parallel with a specified number of threads?
Channel Adapter:
@Bean
public MessageProducer sqsMessageDrivenChannelAdapterForFlights() {
    log.info("**** start listening to: " + ttFlightsXMLSqsName + " **** ");
    SqsMessageDrivenChannelAdapter adapter =
            new SqsMessageDrivenChannelAdapter(amazonSqs, ttFlightsXMLSqsName);
    adapter.setOutputChannelName(MessageChannelConstants.get_tt_flights);
    adapter.setMaxNumberOfMessages(5);
    return adapter;
}
Pollable Channel:
@Bean(name = MessageChannelConstants.get_tt_flights)
public PollableChannel sqsInputChannelFlights() {
    return new QueueChannel();
}
Service activator:
@ServiceActivator(inputChannel = MessageChannelConstants.get_tt_flights,
        poller = @Poller(fixedRate = "5000"))
public void processFlightData(Message<?> receive) throws PacPlusException {
    // ...
    long startTime = System.currentTimeMillis();
}
Final question: if I point two service activators at the same pollable channel, will that work correctly, and is that a good way to process messages in parallel?
See the task-executor option for the poller configuration. That is exactly what makes the same service activator get called in parallel.
See more in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/endpoint.html#endpoint-namespace
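For instance, a sketch along these lines (the pollerTaskExecutor bean name and the pool sizes are assumptions, not from your configuration) hands each poll off to a thread pool, so the same service activator processes messages in parallel:

// Hypothetical executor bean; pool sizes are illustrative only.
@Bean
public ThreadPoolTaskExecutor pollerTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(5);
    return executor;
}

@ServiceActivator(inputChannel = MessageChannelConstants.get_tt_flights,
        poller = @Poller(fixedRate = "5000", maxMessagesPerPoll = "5",
                taskExecutor = "pollerTaskExecutor"))
public void processFlightData(Message<?> receive) throws PacPlusException {
    // runs on one of the executor's threads
}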
I recently changed from using a standard RabbitTemplate in my Spring Boot application to using an AsyncRabbitTemplate. In the process, I switched from the standard send method to the sendAndReceive method.
Making this change does not seem to affect the publishing of messages to RabbitMQ; however, I now see stack traces like the following when sending messages:
org.springframework.amqp.core.AmqpReplyTimeoutException: Reply timed out
at org.springframework.amqp.rabbit.AsyncRabbitTemplate$RabbitFuture$TimeoutTask.run(AsyncRabbitTemplate.java:762) [spring-rabbit-2.3.10.jar!/:2.3.10]
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) [spring-context-5.3.9.jar!/:5.3.9]
I have tried modifying various settings, including the reply and receive timeouts, but all that changes is how long it takes to receive the above error. I have also tried setting useDirectReplyToContainer to true, as well as setting useChannelForCorrelation to true.
I have managed to recreate the issue in a main method, included below, using a RabbitMQ broker running in Docker.
public static void main(String[] args) {
    com.rabbitmq.client.ConnectionFactory cf = new com.rabbitmq.client.ConnectionFactory();
    cf.setHost("localhost");
    cf.setPort(5672);
    cf.setUsername("<my-username>");
    cf.setPassword("<my-password>");
    cf.setVirtualHost("<my-vhost>");
    ConnectionFactory connectionFactory = new CachingConnectionFactory(cf);

    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setExchange("primary");
    rabbitTemplate.setUseDirectReplyToContainer(true);
    rabbitTemplate.setReceiveTimeout(10000);
    rabbitTemplate.setReplyTimeout(10000);
    rabbitTemplate.setUseChannelForCorrelation(true);

    AsyncRabbitTemplate asyncRabbitTemplate = new AsyncRabbitTemplate(rabbitTemplate);
    asyncRabbitTemplate.start();
    System.out.printf("Async Rabbit Template Running? %b\n", asyncRabbitTemplate.isRunning());

    MessageBuilderSupport<MessageProperties> props = MessagePropertiesBuilder.newInstance()
            .setContentType(MessageProperties.CONTENT_TYPE_TEXT_PLAIN)
            .setMessageId(UUID.randomUUID().toString())
            .setHeader(PUBLISH_TIME_HEADER, Instant.now(Clock.systemUTC()).toEpochMilli())
            .setDeliveryMode(MessageDeliveryMode.NON_PERSISTENT);

    asyncRabbitTemplate.sendAndReceive(
            "1.1.1.csv-routing-key",
            new Message(
                    "a,test,csv".getBytes(StandardCharsets.UTF_8),
                    props.build()
            )
    ).addCallback(new ListenableFutureCallback<>() {

        @Override
        public void onFailure(Throwable ex) {
            System.out.printf("Error sending message:\n%s\n", ex.getLocalizedMessage());
        }

        @Override
        public void onSuccess(Message result) {
            System.out.println("Message successfully sent");
        }

    });
}
I am sure that I am just missing a configuration option, but any help would be appreciated.
Thanks. :)
asyncRabbitTemplate.sendAndReceive(..) will always expect a response from the consumer of the message, hence the timeout you are receiving.
To fire and forget, use the standard RabbitTemplate.send(...) and catch any exceptions in a try/catch block:
String routingKey = "1.1.1.csv-routing-key";
try {
    rabbitTemplate.send(routingKey,
            new Message(
                    "a,test,csv".getBytes(StandardCharsets.UTF_8),
                    props.build()));
} catch (AmqpException ex) {
    log.error("failed to send rabbit message, routing key = {}", routingKey, ex);
}
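Alternatively, if you actually want the request/reply behaviour, something on the consuming side has to send a reply. As a minimal sketch (the csv.queue name and its binding to your primary exchange with the 1.1.1.csv-routing-key routing key are assumptions, not taken from your setup), a @RabbitListener method that returns a value routes its result back to the request's replyTo address, which completes the sendAndReceive future:

// Hypothetical consumer; "csv.queue" is assumed to be bound to the "primary"
// exchange with the routing key used above, so it receives the request.
@Component
public class CsvReplyListener {

    // Returning a value from a @RabbitListener method sends it to the
    // request's replyTo address, so the AsyncRabbitTemplate future completes.
    @RabbitListener(queues = "csv.queue")
    public String handle(String csv) {
        return "received: " + csv;
    }
}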
Set the reply timeout to a bigger value and see the effect.
rabbitTemplate.setReplyTimeout(60000);
https://docs.spring.io/spring-amqp/reference/html/#reply-timeout
Gary Russell kindly answered a previous question of mine about Spring Integration UDP flows. Moving on from there, I have stumbled upon an issue with ports.
The Spring Integration documentation says that you can set the inbound channel adapter's port to 0, and the OS will select an available port for the adapter, which can then be retrieved at runtime by invoking getPort() on the adapter object. The problem is that at runtime I just get 0 back when I try to retrieve the port programmatically.
Here's "my" code (i.e. a slightly modified version of Russell's answer to my previous question, for Spring Integration 4.3.12, which I am currently using).
@SpringBootApplication
public class TestApp {

    private final Map<Integer, IntegrationFlowRegistration> registrations = new HashMap<>();

    @Autowired
    private IntegrationFlowContext flowContext;

    public static void main(String[] args) {
        SpringApplication.run(TestApp.class, args);
    }

    @Bean
    public PublishSubscribeChannel channel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    public TestData test() {
        return new TestData();
    }

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            UnicastReceivingChannelAdapter source;
            source = makeANewUdpInbound(0);
            makeANewUdpOutbound(source.getPort());
            Thread.sleep(5_000);
            channel().send(MessageBuilder.withPayload("foo\n").build());
            this.registrations.values().forEach(r -> {
                r.stop();
                r.destroy();
            });
            this.registrations.clear();
            makeANewUdpInbound(1235);
            makeANewUdpOutbound(1235);
            Thread.sleep(5_000);
            channel().send(MessageBuilder.withPayload("bar\n").build());
            this.registrations.values().forEach(r -> {
                r.stop();
                r.destroy();
            });
            this.registrations.clear();
        };
    }

    public UnicastSendingMessageHandler makeANewUdpOutbound(int port) {
        System.out.println("Creating an adapter to send to port " + port);
        UnicastSendingMessageHandler adapter = new UnicastSendingMessageHandler("localhost", port);
        IntegrationFlow flow = IntegrationFlows.from(channel())
                .handle(adapter)
                .get();
        IntegrationFlowRegistration registration = flowContext.registration(flow).register();
        registrations.put(port, registration);
        return adapter;
    }

    public UnicastReceivingChannelAdapter makeANewUdpInbound(int port) {
        System.out.println("Creating an adapter to receive from port " + port);
        UnicastReceivingChannelAdapter source = new UnicastReceivingChannelAdapter(port);
        IntegrationFlow flow = IntegrationFlows.from(source)
                .<byte[], String>transform(String::new)
                .handle(System.out::println)
                .get();
        IntegrationFlowRegistration registration = flowContext.registration(flow).register();
        registrations.put(port, registration);
        return source;
    }

}
The output I get is:
Creating an adapter to receive from port 0
Creating an adapter to send to port 0
Creating an adapter to receive from port 1235
Creating an adapter to send to port 1235
GenericMessage [payload=bar, headers={ip_packetAddress=127.0.0.1/127.0.0.1:54374, ip_address=127.0.0.1, id=c95d6255-e63a-433d-3723-c389fe66b060, ip_port=54374, ip_hostname=127.0.0.1, timestamp=1517220716983}]
I suspect the library did create adapters on OS-chosen free ports, but I am unable to retrieve the assigned port.
The port is assigned asynchronously; you need to wait until the port is actually assigned. Something like...
int n = 0;
while (n++ < 100 && !source.isListening()) {
    Thread.sleep(100);
}
if (!source.isListening()) {
    // failed to start within 10 seconds
}
We should probably enhance the adapter to emit an event when the port is ready. Feel free to open an 'Improvement' JIRA Issue.
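Putting that together with the code above, a small helper (the waitForPort name and the 10-second limit are my own) could block until the adapter is listening and then hand the real port to the outbound side:

// Hypothetical helper; waits up to ~10 seconds for the inbound adapter's socket to be bound.
private int waitForPort(UnicastReceivingChannelAdapter source) throws InterruptedException {
    int n = 0;
    while (n++ < 100 && !source.isListening()) {
        Thread.sleep(100);
    }
    if (!source.isListening()) {
        throw new IllegalStateException("UDP inbound adapter failed to start listening");
    }
    return source.getPort();
}

// usage in the runner:
// UnicastReceivingChannelAdapter source = makeANewUdpInbound(0);
// makeANewUdpOutbound(waitForPort(source));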
I am trying to register a simple REST service, on the given int port, with a ZooKeeper server at localhost:2181.
I have also checked the path with ls / using the ZooKeeper client.
Any ideas?
private static void registerInZookeeper(int port) throws Exception {
    CuratorFramework curatorFramework = CuratorFrameworkFactory
            .newClient("localhost:2181", new RetryForever(5));
    curatorFramework.start();

    ServiceInstance<Object> serviceInstance = ServiceInstance.builder()
            .address("localhost")
            .port(port)
            .name("worker")
            .uriSpec(new UriSpec("{scheme}://{address}:{port}"))
            .build();

    ServiceDiscoveryBuilder.builder(Object.class)
            .basePath("myNode")
            .client(curatorFramework)
            .thisInstance(serviceInstance)
            .build()
            .start();

    Optional.ofNullable(curatorFramework.checkExists().forPath("/zookeeper")).ifPresent(System.out::println);
    Optional.ofNullable(curatorFramework.checkExists().forPath("/myNode")).ifPresent(System.out::println);
}
I kept receiving "Received packet at server of unknown type 15" from the ZooKeeper server because of compatibility issues.
The registration code here looks correct. In order to print the registered instances, the following code can be executed:
Optional.ofNullable(curatorFramework.getChildren().forPath("/myNode/worker"))
        .orElse(Collections.emptyList())
        .forEach(childNode -> {
            try {
                System.out.println(childNode);
                System.out.println(new String(curatorFramework.getData().forPath("/myNode/worker/" + childNode)));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
The result will look something like this:
07:23:12.353 INFO [main-EventThread] ConnectionStateManager:228 - State change: CONNECTED
48202336-e89b-4724-912b-89620f7c9954
{"name":"worker","id":"48202336-e89b-4724-912b-89620f7c9954","address":"localhost","port":1000,"sslPort":null,"payload":null,"registrationTimeUTC":1515561792319,"serviceType":"DYNAMIC","uriSpec":{"parts":[{"value":"scheme","variable":true},{"value":"://","variable":false},{"value":"address","variable":true},{"value":":","variable":false},{"value":"port","variable":true}]}}
Creating your Curator framework with zk34 compatibility mode (ZooKeeper 3.4 is the version used by Kafka) should fix your problem:
private CuratorFramework buildFramework(String ip) {
    RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
    return CuratorFrameworkFactory.builder().zk34CompatibilityMode(true).connectString(ip + ":2181")
            .retryPolicy(retryPolicy).build();
}
Please note that Curator will just try its best, and some newer methods will fail in this mode (e.g. creatingParentsIfNeeded works, while creatingParentContainersIfNeeded does not), as illustrated below.
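As an illustration of that caveat (the znode path below is made up), stick to the 3.4-era calls when running in compatibility mode:

// Works against a 3.4 server when zk34CompatibilityMode(true) is set:
curatorFramework.create()
        .creatingParentsIfNeeded()
        .forPath("/myNode/worker/some-instance");   // hypothetical path

// Container znodes need ZooKeeper 3.5+, so this variant fails in compatibility mode:
// curatorFramework.create().creatingParentContainersIfNeeded().forPath("/myNode/worker/some-instance");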
Hi – I wonder if anyone can help me. I’m using Netty 4.1.9 to send UDP messages between two Linux machines:
Both machines are attached via 3 separate NICs to the same 3 networks.
I have some working code that sends the messages from one to the other and that is received okay.
I then changed the code to force the UDP traffic onto a different network, out through a different NIC on the sender.
The sending works, and I can see the traffic on the different network, but now the Netty receiver can't see it.
As far as I can see the receiver code shouldn’t care which NIC the data comes in on, so I don’t understand what the issue is.
I’m sure the data arrives at the receiver host on the correct NIC because running “tshark -i ” on the receiver shows the traffic on the correct network, with the correct destination port, and expected length.
The broadcast address I'm using in both cases is 255.255.255.255.
Here's the code for setting up the receiver channel; it works fine when the traffic arrives on the first NIC, but not when it arrives on the other:
public Channel createReceivingChannel(final int port,
                                      final EventLoopGroup myGroup) throws InterruptedException
{
    return new Bootstrap()
            .group(myGroup)
            .channelFactory(new ChannelFactory<NioDatagramChannel>()
            {
                @Override
                public NioDatagramChannel newChannel()
                {
                    return new NioDatagramChannel(InternetProtocolFamily.IPv4);
                }
            })
            .handler(new ReceiverInitializer(this.protocolConfig))
            .bind(port).sync().channel();
}
Here's the sender code that works with the receiver, but puts the UDP messages on the wrong network:
public Channel createSendingChannel(final NetworkAddress localAddress,
                                    final int port,
                                    final EventLoopGroup myGroup) throws InterruptedException
{
    return new Bootstrap()
            .group(myGroup)
            .channelFactory(new ChannelFactory<NioDatagramChannel>()
            {
                @Override
                public NioDatagramChannel newChannel()
                {
                    return new NioDatagramChannel(InternetProtocolFamily.IPv4);
                }
            })
            .localAddress(localAddress.address, port)
            .option(ChannelOption.SO_BROADCAST, true)
            .handler(new SenderInitializer(this.protocolConfig))
            .bind(0).sync().channel();
}
And here's my sender code that puts the data on the right network, but the receiver above can't see it:
public Channel createSendingChannel(
        final NetworkAddress localAddress,
        final int port,
        final EventLoopGroup myGroup) throws InterruptedException
{
    final InetSocketAddress localSocketAddress = new InetSocketAddress(localAddress.address, port);
    return new Bootstrap()
            .group(myGroup)
            .channelFactory(new ChannelFactory<NioDatagramChannel>()
            {
                @Override
                public NioDatagramChannel newChannel()
                {
                    return new NioDatagramChannel(InternetProtocolFamily.IPv4);
                }
            })
            .localAddress(localAddress.address, port)
            .option(ChannelOption.SO_BROADCAST, true)
            .handler(new SenderInitializer(this.protocolConfig))
            .bind(localSocketAddress).sync().channel(); // difference here: bind the sender to the correct NIC
}
Any help gratefully received at this stage. I'm stuck.
Many thanks in advance,
Michael.