Mule FTP polling stops without error or warning - java

I encountered an issue with the FTP polling of Mule ESB standalone:
The application was running for a few days without issues, and then the FTP polling stopped without giving warning or error.
The logs showed activity for the FTP polling right up until it stopped; after that, nothing, although other connectors were still active (mainly SFTP polling). I enabled DEBUG logging on the runtime to see if there was still any activity, and the corresponding connector threads were completely silent, as if stopped or blocked.
In the end, restarting the application solved the issue temporarily, but I am trying to understand why this occurred so I can avoid facing it again. My suspicion is that the FTP connector threads either stopped or were blocked, preventing further polls.
It may be caused by an extended FtpMessageReceiver we use to prevent file deletion after a poll (overriding the postProcess() method). However, looking at the source code of both this component and the base FTP receiver and connector, I cannot see how that could happen.
Any idea why the poll would suddenly stop without throwing an error?
Here is the current connector configuration:
<ftp:connector name="nonDeletingFtpConnector" doc:name="FTP"
pollingFrequency="${frequency}"
validateConnections="true">
<reconnect frequency="${frequency}" count="${count}"/>
<service-overrides messageReceiver="my.comp.NonDeletingFtpMessageReceiver" />
</ftp:connector>
And the corresponding endpoint:
<ftp:inbound-endpoint host="${ftp.source.host}"
                      port="${ftp.source.port}"
                      path="${ftp.source.path}"
                      user="${ftp.source.login}"
                      responseTimeout="10000"
                      password="${ftp.source.password}"
                      connector-ref="nonDeletingFtpConnector"
                      pollingFrequency="${ftp.default.polling.frequency}">
    <file:filename-wildcard-filter pattern="*.zip"/>
</ftp:inbound-endpoint>
The messageReceiver code:
public class NonDeletingFtpMessageReceiver extends FtpMessageReceiver {

    public NonDeletingFtpMessageReceiver(Connector connector, FlowConstruct flowConstruct,
            InboundEndpoint endpoint, long frequency) throws CreateException {
        super(connector, flowConstruct, endpoint, frequency);
    }

    @Override
    protected void postProcess(FTPClient client, FTPFile file, MuleMessage message) throws Exception {
        // do nothing: skip the file deletion performed by the base implementation
    }
}
As you can see, we defined a custom FtpMessageReceiver to avoid deleting the file on poll (the deletion is done further down the flow), but looking at the code I can't see how skipping the super.postProcess() call (which is responsible for deleting the file) could cause issues.
FtpMessageReceiver source code I looked in:
https://github.com/mulesoft/mule/blob/mule-3.5.0/transports/ftp/src/main/java/org/mule/transport/ftp/FtpMessageReceiver.java
Technical config:
Mule Standalone 3.5.0
Ubuntu 14.04.2 LTS
Java OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.14.04.1)
Any help would be appreciated. Thanks in advance!

As discussed in the comments, the bug turned out to be related to the Apache Commons Net FTP client, and I created a specific post about it here.
Here is the solution found: use a custom FtpConnectionFactory to configure the client with a timeout value greater than 0. This way, instead of hanging indefinitely, the blocked call is interrupted and a timeout exception is thrown.
public class SafeFtpConnectionFactory extends FtpConnectionFactory {

    // Default timeout, in milliseconds
    public static int defaultTimeout = 60000;

    public static synchronized int getDefaultTimeout() {
        return defaultTimeout;
    }

    public static synchronized void setDefaultTimeout(int defaultTimeout) {
        SafeFtpConnectionFactory.defaultTimeout = defaultTimeout;
    }

    public SafeFtpConnectionFactory(EndpointURI uri) {
        super(uri);
    }

    @Override
    protected FTPClient createFtpClient() {
        FTPClient client = super.createFtpClient();
        // Set the default timeout here; the socket will use it by default,
        // instead of the 0 timeout that hangs indefinitely.
        client.setDefaultTimeout(getDefaultTimeout());
        return client;
    }
}
And then attaching it to the connector:
<ftp:connector name="archivingFtpConnector" doc:name="FTP"
pollingFrequency="${frequency}"
validateConnections="true"
connectionFactoryClass="my.comp.SafeFtpConnectionFactory">
<reconnect frequency="${reconnection.frequency}" count="${reconnection.attempt}"/>
</ftp:connector>
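If the hard-coded 60-second default is not appropriate, the static setter shown above can be called once during application startup, before the connector opens its first connection. For example (a hypothetical initialization snippet, not part of the original configuration):

// e.g. in a startup bean or a static initializer of the application
SafeFtpConnectionFactory.setDefaultTimeout(30000); // 30 seconds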
I'll try to update this answer if there is any notable change on the other post.

Related

How to know running server details and shutting down forcefully from java code

My Java application is running on multiple servers under heavy CPU load. Each server has 10 threads running at a time to consume Spring Boot WebClient requests.
When an issue occurs in one of the requests, I want the handler code to check the server details of the running thread/request (host, port, instanceId) for tracking, and later I want to completely shut down the server instance that has the problematic request.
public <T> T method(Function<WebClient, T> method) {
    try {
        return method.apply(<webclient object>);
    } catch (Exception e) {
        Thread t = Thread.currentThread(); // able to get thread details
        logger.info("running thread:" + t.getName());
        // how to get server details for problematic request?
        // how to shutdown that server instance forcefully using code?
    }
}
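One possible direction (a sketch only, using hypothetical names such as ServerInfo and TrackingWebClientExecutor that are not from the question) is to carry the target server's metadata alongside the WebClient, so the catch block can log it:

import java.util.function.Function;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.reactive.function.client.WebClient;

// Hypothetical holder for the metadata of the server instance this client talks to.
class ServerInfo {
    final String host;
    final int port;
    final String instanceId;

    ServerInfo(String host, int port, String instanceId) {
        this.host = host;
        this.port = port;
        this.instanceId = instanceId;
    }
}

class TrackingWebClientExecutor {

    private static final Logger logger = LoggerFactory.getLogger(TrackingWebClientExecutor.class);

    private final WebClient webClient;
    private final ServerInfo serverInfo; // captured when the client is built

    TrackingWebClientExecutor(WebClient webClient, ServerInfo serverInfo) {
        this.webClient = webClient;
        this.serverInfo = serverInfo;
    }

    public <T> T method(Function<WebClient, T> method) {
        try {
            return method.apply(webClient);
        } catch (Exception e) {
            // The server details travel with the client, so they are available in the handler.
            logger.error("request failed on thread {} against {}:{} (instance {})",
                    Thread.currentThread().getName(),
                    serverInfo.host, serverInfo.port, serverInfo.instanceId, e);
            throw e;
        }
    }
}

The component that builds the WebClient for a given server would also create the matching ServerInfo, so the two cannot get out of sync.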

Polling for Pod's ready state

I am using the fabric8 Java library to deploy applications on a Kubernetes cluster.
I want to poll the status of the pods to know when they are ready. I started writing my own poller until I read about the Watcher.
I implemented something like this:
deployment = kubeClient.extensions().deployments().inNamespace(namespaceName).create(deployment);
kubeClient.pods().inNamespace(namespaceName).watch(new Watcher<Pod>() {

    @Override
    public void eventReceived(io.fabric8.kubernetes.client.Watcher.Action action, Pod resource) {
        logger.info("Pod event {} {}", action, resource);
        logger.info("Pod status {} , Reason {} ", resource.getStatus().getPhase(),
                resource.getStatus().getReason());
    }

    @Override
    // What causes the watcher to close?
    public void onClose(KubernetesClientException cause) {
        if (cause != null) {
            // throw?
            logger.error("Pod event {} ", cause);
        }
    }
});
I'm not sure I understand the Watcher functionality correctly. Does it time out? Or do I still need to write my poller inside the eventReceived() method? What is the use case for a watcher?
// What causes the watcher to close?
Since watches are implemented using websockets, a connection is subject to closure at any time for any reason or no reason.
What is the use case for a watcher?
I would imagine it's two-fold: you avoid paying the TCP/IP + SSL connection setup cost on every poll, which makes it quicker, and your system becomes event-driven rather than a simple polling loop, which makes every participant (the server and your client) use fewer resources.
But yes, the answer to your question is that you need to have retry logic to reestablish the watcher if you have not yet reached the Pod state you were expecting.
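A minimal sketch of that retry logic, assuming an older fabric8 client where onClose(KubernetesClientException) signals an unexpected closure, and reusing the kubeClient/namespaceName/logger names from the question, could look like this:

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientException;
import io.fabric8.kubernetes.client.Watcher;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class PodWatchRetry {

    private static final Logger logger = LoggerFactory.getLogger(PodWatchRetry.class);

    private final KubernetesClient kubeClient;
    private final String namespaceName;

    PodWatchRetry(KubernetesClient kubeClient, String namespaceName) {
        this.kubeClient = kubeClient;
        this.namespaceName = namespaceName;
    }

    void watchPods() {
        kubeClient.pods().inNamespace(namespaceName).watch(new Watcher<Pod>() {

            @Override
            public void eventReceived(Action action, Pod resource) {
                logger.info("Pod event {} for {}: phase {}", action,
                        resource.getMetadata().getName(), resource.getStatus().getPhase());
                // Decide here whether the Pod has reached the state you were waiting for.
            }

            @Override
            public void onClose(KubernetesClientException cause) {
                if (cause != null) {
                    // The websocket dropped unexpectedly: log it and register a new watch.
                    logger.warn("Watch closed unexpectedly, re-registering", cause);
                    watchPods();
                }
            }
        });
    }
}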

How to unblock an app blocked due to publishing events on rabbitmq server having disk space issues?

I am using the RabbitMQ Java client to publish events from my Java application.
When the server runs out of disk space, all publishes are blocked.
In my application I am OK with events not being published, but I want the application to continue to work. To that end I looked at the documentation and found the BlockedListener on the connection [1].
I was able to use it, but the problem is that unless I throw a runtime exception in the handleBlocked method, my app/publisher stays blocked. I tried to close/abort the connection, but nothing worked.
Is there any other graceful way to unblock my app?
This is my BlockedListener code
private class BlockedConnectionHandler implements BlockedListener {

    @Override
    public void handleBlocked(String reason) throws IOException {
        s_logger.error("rabbitmq connection is blocked with reason: " + reason);
        throw new RuntimeException("unblocking the parent thread");
    }

    @Override
    public void handleUnblocked() throws IOException {
        s_logger.info("rabbitmq connection is unblocked");
    }
}
[1] https://www.rabbitmq.com/connection-blocked.html
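For readers reproducing this setup, here is a minimal, hypothetical sketch of how such a listener is registered on the connection; the host name is a placeholder and the anonymous listener stands in for the poster's BlockedConnectionHandler:

import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BlockedListenerSetup {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.example.com"); // placeholder host

        Connection connection = factory.newConnection();
        // Register the listener so its callbacks fire when the broker raises
        // or clears a resource alarm (for example, low disk space).
        connection.addBlockedListener(new BlockedListener() {
            @Override
            public void handleBlocked(String reason) {
                System.err.println("connection blocked: " + reason);
            }

            @Override
            public void handleUnblocked() {
                System.err.println("connection unblocked");
            }
        });
    }
}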

Apache Camel, send message when server starts and stops

I have a simple Camel MINA server using the Java DSL, and I am running it like the examples documented here:
Running Camel standalone and have it keep running in JAVA
MINA 2 Component
Currently this server receives reports from a queue, updates them, and then sends them on to the next server. Very simple code:
public class MyApp_B {

    private Main main;

    public static void main(String... args) throws Exception {
        MyApp_B loadbalancer = new MyApp_B();
        loadbalancer.boot();
    }

    public void boot() throws Exception {
        main = new Main();
        main.enableHangupSupport();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("mina:tcp://localhost:9991")
                    .setHeader("minaServer", constant("localhost:9991"))
                    .beanRef("service.Reporting", "updateReport")
                    .to("direct:messageSender1");

                from("direct:messageSender1")
                    .to("mina:tcp://localhost:9993")
                    .log("${body}");
            }
        });

        System.out.println("Starting Camel MyApp_B. Use ctrl + c to terminate the JVM.\n");
        main.run();
    }
}
Now, I would like to know if it is possible to do two things:
Make this server send a message to a master server when it starts running. This is basically a "Hello" message with this server's information.
Tell the master server to forget it when I shut the server down by pressing CTRL+C or by some other means.
I have also read this:
http://camel.apache.org/maven/current/camel-core/apidocs/org/apache/camel/support/ServiceSupport.html#doStart%28%29
Technically, by overriding the doStart() and doStop() methods I should get the intended behavior; however, those methods (especially doStop()) don't work at all.
Is there a way to do this ? If yes how? If not, what are my options?
Thanks in advance, Pedro.
The code does work properly after all. The problem was my IDE, Eclipse. When using the Terminate button, Eclipse simply kills the process instead of sending it the CTRL+C signal. Furthermore, it looks like Eclipse has no way to send a CTRL+C signal to a process running in its console.
I have also created a discussion on Eclipse's official forums:
http://www.eclipse.org/forums/index.php/m/1176961/#msg_1176961
May it someday help someone in a situation similar to mine.
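Since the doStart()/doStop() override approach turned out to work, here is a minimal sketch of what it might look like; the master host, port, and message format are assumptions, not part of the original application:

import java.io.OutputStream;
import java.net.Socket;

import org.apache.camel.main.Main;

public class RegisteringMain extends Main {

    // Hypothetical master server coordinates.
    private static final String MASTER_HOST = "masterhost";
    private static final int MASTER_PORT = 9990;

    @Override
    protected void doStart() throws Exception {
        super.doStart();
        notifyMaster("HELLO localhost:9991\n");   // register this instance
    }

    @Override
    protected void doStop() throws Exception {
        notifyMaster("GOODBYE localhost:9991\n"); // deregister before stopping
        super.doStop();
    }

    private void notifyMaster(String message) {
        // Plain TCP write, kept independent of the Camel routes so it also
        // works while the context is starting up or shutting down.
        try (Socket socket = new Socket(MASTER_HOST, MASTER_PORT);
             OutputStream out = socket.getOutputStream()) {
            out.write(message.getBytes("UTF-8"));
            out.flush();
        } catch (Exception e) {
            System.err.println("Could not notify master: " + e.getMessage());
        }
    }
}

In MyApp_B.boot(), main = new RegisteringMain(); would then replace main = new Main();.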

How one can know if the client has closed the connection

I've been playing with the new Servlet 3.0 async features with Tomcat 7.0.4. I found this chat application, which lets clients hang on a GET request to receive message updates. This works just fine when it comes to receiving the messages.
The problem arises when the client disconnects, i.e. the user closes the browser. It seems that the server does not raise an IOException even though the client has disconnected. The message thread (see the source code from the link above) happily keeps writing to all of the stored AsyncContexts' output streams.
Is this a Tomcat bug, or am I missing something here? If this is not a bug, how am I supposed to detect whether the client has closed the connection?
The code there at lines 44-47 takes care of it:
} catch (IOException ex) {
    System.out.println(ex);
    queue.remove(ac);
}
And here too, at lines 75-83, using the timeout mechanism:
req.addAsyncListener(new AsyncListener() {
    public void onComplete(AsyncEvent event) throws IOException {
        queue.remove(ac);
    }
    public void onTimeout(AsyncEvent event) throws IOException {
        queue.remove(ac);
    }
});
EDIT: After getting a little more insight:
Tomcat 7.0.4 is still in beta, so you can expect this kind of behaviour.
I tried hard but can't find the method setAsyncTimeout() in the docs, neither here nor here. So I think they dropped it completely in the final version for some valid reason unknown to me.
The example states, "why should I use the framework instead of waiting for Servlet 3.0 Async API", which implies that it was written before the final API.
So, combining all these facts, what I can say is that you are trying to work with something that is broken in a sense. That may also be the reason for the different and weird results.
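For readers on the final Servlet 3.0 API (where the request no longer has addAsyncListener() or setAsyncTimeout()), the equivalent cleanup registration would look roughly like the sketch below; the Queue<AsyncContext> mirrors the chat example and is an assumption here:

import java.io.IOException;
import java.util.Queue;
import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.http.HttpServletRequest;

public class AsyncCleanup {

    // Attach a listener so the context is dropped from the queue when the
    // request completes, times out, or fails (e.g. the client disconnected).
    static void register(HttpServletRequest req, final Queue<AsyncContext> queue) {
        final AsyncContext ac = req.startAsync();
        ac.setTimeout(10 * 60 * 1000); // replaces the pre-final setAsyncTimeout()
        ac.addListener(new AsyncListener() {
            public void onComplete(AsyncEvent event) throws IOException {
                queue.remove(ac);
            }
            public void onTimeout(AsyncEvent event) throws IOException {
                queue.remove(ac);
            }
            public void onError(AsyncEvent event) throws IOException {
                queue.remove(ac);
            }
            public void onStartAsync(AsyncEvent event) throws IOException {
                // nothing to do
            }
        });
        queue.add(ac);
    }
}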
