Apache James 3.4 message spooled but not delivered - Java

I installed Apache James 3.4 on my local machine and tried sending an SMTP email with debug logging enabled. I can see that the email was received and sent to the spool, but it never actually reaches the database or gets stored anywhere. I see the logs below after an email is sent:
DEBUG 14:40:22,520 | org.apache.james.smtpserver.DataLineJamesMessageHookHandler | executing james message handler org.apache.james.protocols.smtp.core.esmtp.MailSizeEsmtpExtension@139346fe
DEBUG 14:40:22,523 | org.apache.james.smtpserver.DataLineJamesMessageHookHandler | executing hook org.apache.james.protocols.smtp.core.log.HookResultLogger@7e0a3d14
DEBUG 14:40:22,523 | org.apache.james.smtpserver.DataLineJamesMessageHookHandler | executing hook org.apache.james.smtpserver.jmx.HookResultJMXMonitor@7727309d
DEBUG 14:40:22,526 | org.apache.james.smtpserver.DataLineJamesMessageHookHandler | executing james message handler org.apache.james.smtpserver.AddDefaultAttributesMessageHook@2ecd38f
DEBUG 14:40:22,527 | org.apache.james.smtpserver.DataLineJamesMessageHookHandler | executing hook org.apache.james.protocols.smtp.core.log.HookResultLogger@7e0a3d14
DEBUG 14:40:22,527 | org.apache.james.smtpserver.DataLineJamesMessageHookHandler | executing hook org.apache.james.smtpserver.jmx.HookResultJMXMonitor@7727309d
DEBUG 14:40:22,528 | org.apache.james.smtpserver.DataLineJamesMessageHookHandler | executing james message handler org.apache.james.smtpserver.SendMailHandler@e301d29
DEBUG 14:40:22,528 | org.apache.james.smtpserver.SendMailHandler | sending mail
INFO 14:40:22,567 | org.apache.james.smtpserver.SendMailHandler | Successfully spooled mail Mail1584042022511-573e00a5-df5c-4bd0-a9d2-0d4e45e12b0d from MaybeSender{mailAddress=Optional[kart2@kmart.com]} on 127.0.0.1/127.0.0.1 for [kart2@kmart.com]
DEBUG 14:40:22,568 | org.apache.james.smtpserver.DataLineJamesMessageHookHandler | executing hook org.apache.james.protocols.smtp.core.log.HookResultLogger@7e0a3d14
DEBUG 14:40:22,569 | org.apache.james.smtpserver.DataLineJamesMessageHookHandler | executing hook org.apache.james.smtpserver.jmx.HookResultJMXMonitor@7727309d
DEBUG 14:40:22,583 | org.apache.james.mailetcontainer.impl.JamesMailSpooler | ==== Begin processing mail Mail1584042022511-573e00a5-df5c-4bd0-a9d2-0d4e45e12b0d ====
DEBUG 14:40:22,584 | org.apache.james.mailetcontainer.lib.AbstractStateCompositeProcessor | Call MailProcessor root
DEBUG 14:40:22,587 | org.apache.camel.component.direct.DirectProducer | Starting producer: Producer[direct://processor.root]
DEBUG 14:40:22,588 | org.apache.camel.impl.ProducerCache | Adding to producer cache with key: direct://processor.root for producer: Producer[direct://processor.root]
DEBUG 14:40:22,591 | org.apache.camel.impl.ProducerCache | >>>> direct://processor.root Exchange[]
DEBUG 14:40:22,610 | org.apache.camel.processor.MulticastProcessor | Done sequential processing 1 exchanges
DEBUG 14:40:22,616 | org.apache.camel.processor.MulticastProcessor | Done sequential processing 1 exchanges
DEBUG 14:40:22,617 | org.apache.camel.processor.MulticastProcessor | Done sequential processing 1 exchanges
DEBUG 14:40:22,618 | org.apache.camel.processor.MulticastProcessor | ExchangeId: ID-WW-CFT2PV2-1584041951079-0-9 is marked to stop routing: Exchange[ID-WW-CFT2PV2-1584041951079-0-9]
DEBUG 14:40:22,618 | org.apache.camel.processor.MulticastProcessor | Done sequential processing 1 exchanges
DEBUG 14:40:22,619 | org.apache.camel.processor.Pipeline | ExchangeId: ID-WW-CFT2PV2-1584041951079-0-1 is marked to stop routing: Exchange[ID-WW-CFT2PV2-1584041951079-0-1]
DEBUG 14:40:22,637 | org.apache.james.mailetcontainer.impl.JamesMailSpooler | ==== End processing mail Mail1584042022511-573e00a5-df5c-4bd0-a9d2-0d4e45e12b0d ====
Any help is appreciated. Let me know.

I've also been having this issue and found that commenting out the following section in the default conf/mailetcontainer.xml configuration file did the trick.
<mailet matcher="All" class="WithPriority">
<value>8</value>
</mailet>
<mailet matcher="HasPriority=8" class="Null"/>
<mailet matcher="AtLeastPriority=8" class="Null"/>
<mailet matcher="AtMostPriority=8" class="Null"/>
It looks like this section stops mail from getting beyond the root processor.
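To be explicit, "commenting out" here just means wrapping the block in an XML comment in conf/mailetcontainer.xml, something like:
<!--
<mailet matcher="All" class="WithPriority">
    <value>8</value>
</mailet>
<mailet matcher="HasPriority=8" class="Null"/>
<mailet matcher="AtLeastPriority=8" class="Null"/>
<mailet matcher="AtMostPriority=8" class="Null"/>
-->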
I had initially followed the advice in "Apache James Spring distribution not starting" to get the server running, but "Re: Getting Apache James 3.4 to process mail" seems to be more relevant here.

Related

How to get the actual start time of the service using ManagementFactory.getRuntimeMXBean().getStartTime()

I have to implement some logic based on the JVM start time. How can I get the actual start time of the JVM? And if the JVM restarts internally, can I still use the ManagementFactory.getRuntimeMXBean().getStartTime() method?
Below is a sample of the logs of the service that I run in my environment.
I started the Windows service (which runs code written in Java) at 01/01/2022 10:00:00. At that time the wrapper service prints logs like the following:
STATUS | wrapper | 2022/01/01 10:00:00 | Launching a JVM...
INFO | jvm 1 | 2022/01/01 10:00:00 | WrapperManager: Initializing...
INFO | jvm 1 | 2022/01/01 10:00:00 | Wrapper startup method..
After some period of time, the wrapper service prints logs indicating that the JVM is restarting and a new JVM is being launched:
STATUS | wrapper | 2022/01/01 10:58:01 | JVM requested a restart.
INFO | jvm 1 | 2022/01/01 10:58:02 | Going to shutdown the all threads...0
STATUS | wrapper | 2022/01/01 10:58:09 | Launching a JVM...
INFO | jvm 2 | 2022/01/01 10:58:09 | WrapperManager: Initializing...
INFO | jvm 2 | 2022/01/01 10:58:09 | Wrapper startup method..
My doubt is: if the JVM is relaunched internally, without the service being restarted manually, will ManagementFactory.getRuntimeMXBean().getStartTime() still return the start time as 01/01/2022 10:00:00?
Can someone help me find the actual start time of the service?
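As far as I understand it, RuntimeMXBean.getStartTime() is scoped to the current JVM process, so after the wrapper relaunches the JVM ("jvm 2") it will return the new launch time (10:58:09), not 10:00:00. A minimal sketch, including one possible workaround I would assume works under these constraints (persisting the first start time outside the JVM; the marker-file name is made up):

import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Instant;

public class ServiceStartTime {
    public static void main(String[] args) throws Exception {
        // Start time of *this* JVM process only; a wrapper-triggered
        // relaunch gets a fresh value.
        long jvmStart = ManagementFactory.getRuntimeMXBean().getStartTime();
        System.out.println("JVM start:     " + Instant.ofEpochMilli(jvmStart));

        // Hypothetical workaround: record the first JVM start in a file
        // that survives JVM restarts, and treat that as the service start.
        Path marker = Paths.get("service-start.txt"); // assumed location
        if (Files.notExists(marker)) {
            Files.write(marker, Long.toString(jvmStart).getBytes());
        }
        long serviceStart = Long.parseLong(new String(Files.readAllBytes(marker)).trim());
        System.out.println("Service start: " + Instant.ofEpochMilli(serviceStart));
    }
}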

ActiveMQ - Applications unable to connect for a minute on localhost. Eventual reconnect using failover

I'm running an ActiveMQ broker (version 5.15.12, later downgraded to 5.15.8) on a Windows server on which several local applications are running that connect to it. During heavier-than-average load on ActiveMQ, it regularly happens that the applications cannot make their initial connection to ActiveMQ (all 10 applications seem to be affected evenly). On average, the web client of ActiveMQ shows around 1500 connections, and the broker is servicing 91 queues and 0 topics, with some of the queues processing ~100 events a second while other queues barely have any traffic.
The applications attempt to reconnect using the failover mechanism (default configuration). This can sometimes take up to 40 seconds or longer (after the initial reconnections fail, the exponential back-off causes retries to happen after 10, 20 and 40 seconds).
The applications make use of a PooledConnectionFactory with a pool size of 300 (upped to 600 on one of the applications as a test, without impact).
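For context, this is roughly how such a pool is wired up on the client side (a sketch using the values above; the broker URL and everything not mentioned in the question are assumptions):

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PooledClient {
    public static void main(String[] args) throws Exception {
        // Failover transport: retries the connection with exponential back-off.
        ActiveMQConnectionFactory amq =
                new ActiveMQConnectionFactory("failover:(tcp://localhost:61616)");
        PooledConnectionFactory pooled = new PooledConnectionFactory(amq);
        pooled.setMaxConnections(300); // the pool size described above

        Connection connection = pooled.createConnection(); // blocks while failover retries
        connection.start();
        // ... create sessions, producers, consumers ...
        connection.close(); // returns the connection to the pool
        pooled.stop();
    }
}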
This is the URI of the broker:
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=8000&amp;wireFormat.maxFrameSize=104857600"/>
Memory:
wrapper.java.maxmemory=8192
Thinking it was a resource issue, maximumConnections has been upped to 12000 and advisory support disabled, both without change.
While running ActiveMQ with debug logging enabled, I noticed that in the span of 500ms ActiveMQ closed 242 connections with the following logging:
2022-05-01 07:57:16,599 | DEBUG | Unregistering MBean org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Topic,destinationName=ActiveMQ.Advisory.TempQueue_ActiveMQ.Advisory.TempTopic,endpoint=Consumer,clientId=ID_HIT500SRV201-59070-1650984489617-0_624222,consumerId=ID_HIT500SRV201-59070-1650984489617-1_624223_-1_1 | org.apache.activemq.broker.jmx.ManagementContext | ActiveMQ Transport: tcp:///127.0.0.1:62259@61616
2022-05-01 07:57:16,599 | DEBUG | localhost removing consumer: ID:HIT500SRV201-59070-1650984489617-1:624223:-1:1 for destination: ActiveMQ.Advisory.TempQueue,ActiveMQ.Advisory.TempTopic | org.apache.activemq.broker.region.AbstractRegion | ActiveMQ Transport: tcp:///127.0.0.1:62259@61616
2022-05-01 07:57:16,599 | DEBUG | remove connection id: ID:HIT500SRV201-59070-1650984489617-1:624223 | org.apache.activemq.broker.TransportConnection | ActiveMQ Transport: tcp:///127.0.0.1:62259@61616
2022-05-01 07:57:16,599 | DEBUG | Publishing: tcp://HIT500SRV201:61616 for broker transport URI: tcp://HIT500SRV201:61616?maximumConnections=12000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.broker.TransportConnector | ActiveMQ Transport: tcp:///127.0.0.1:62259@61616
2022-05-01 07:57:16,599 | DEBUG | Publishing: tcp://HIT500SRV201:61616 for broker transport URI: tcp://HIT500SRV201:61616?maximumConnections=12000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.broker.TransportConnector | ActiveMQ Transport: tcp:///127.0.0.1:62259@61616
2022-05-01 07:57:16,599 | DEBUG | Unregistering MBean org.apache.activemq:type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=openwire,connectionViewType=clientId,connectionName=ID_HIT500SRV201-59070-1650984489617-0_624222 | org.apache.activemq.broker.jmx.ManagementContext | ActiveMQ Transport: tcp:///127.0.0.1:62259@61616
2022-05-01 07:57:16,599 | DEBUG | Unregistering MBean org.apache.activemq:type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=openwire,connectionViewType=remoteAddress,connectionName=tcp_//127.0.0.1_62259 | org.apache.activemq.broker.jmx.ManagementContext | ActiveMQ Transport: tcp:///127.0.0.1:62259@61616
2022-05-01 07:57:16,599 | DEBUG | Unregistering MBean org.apache.activemq:type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=openwire,connectionViewType=remoteAddress,connectionName=tcp_//127.0.0.1_62259 | org.apache.activemq.broker.jmx.ManagementContext | ActiveMQ Transport: tcp:///127.0.0.1:62259@61616
2022-05-01 07:57:16,599 | DEBUG | Stopping connection: tcp://127.0.0.1:62259 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-4925514
2022-05-01 07:57:16,599 | DEBUG | Stopping transport tcp:///127.0.0.1:62259@61616 | org.apache.activemq.transport.tcp.TcpTransport | ActiveMQ BrokerService[localhost] Task-4925514
2022-05-01 07:57:16,614 | DEBUG | Closed socket Socket[addr=/127.0.0.1,port=62259,localport=61616] | org.apache.activemq.transport.tcp.TcpTransport | ActiveMQ Task-1
2022-05-01 07:57:16,614 | DEBUG | Stopped transport: tcp://127.0.0.1:62259 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-4925514
2022-05-01 07:57:16,614 | DEBUG | Connection Stopped: tcp://127.0.0.1:62259 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-4925514
During and after this time, new connections are started. After this, the applications are able to reconnect again. I'm not even sure if this might just be expected behaviour or if it's related to the problem.
Any insights would be appreciated, thanks!
The combination of our applications, a database and ActiveMQ had exhausted the default dynamic port range. See https://learn.microsoft.com/en-us/windows/client-management/troubleshoot-tcpip-port-exhaust
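For anyone else hitting this: on Windows the dynamic (ephemeral) port range can be inspected and widened with netsh (the numbers below are only an example; see the linked article for guidance):
netsh int ipv4 show dynamicport tcp
netsh int ipv4 set dynamicport tcp start=10000 num=55535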

Camel route with AMQP consumer runs ok in Eclipse, hangs in karaf

I am using Camel 2.17.3 and Karaf 4.0.7 (also tried 4.0.1).
I have a Camel route that runs fine in Eclipse when a JUnit test starts it, but hangs when deployed to Karaf. If I change the amqp: 'from' component to timer:, the route runs fine in Karaf.
My AMQP setup in the RouteBuilder is:
@Override
public void configure() throws Exception {
    getContext().addComponent("amqp", AMQPComponent.amqpComponent(
            String.format("amqp://%s:%s?amqp.saslMechanisms=ANONYMOUS", AMQP_SERVICE_HOST, AMQP_SERVICE_PORT)));
Even this route will hang Karaf, yet run fine in Eclipse:
from("amqp:queue:myqueue").routeId("myRoute")
.log("temp")
In Karaf, when I say "hang", I observe these things:
If I try to exit karaf, it hangs - I need to kill the process.
If I try to stop the bundle, karaf hangs - I need to kill the process.
Neither camel:context-list nor camel:route-list returns anything
I do not get a "route consuming from..." message in the log. This is all the output from starting the bundle:
2016-10-08 23:46:00,593 | INFO | nsole user karaf | bundle | 90 - org.apache.aries.spifly.dynamic.bundle - 1.0.1 | Bundle Considered for SPI providers: mis-routes
2016-10-08 23:46:00,593 | INFO | nsole user karaf | bundle | 90 - org.apache.aries.spifly.dynamic.bundle - 1.0.1 | No 'SPI-Provider' Manifest header. Skipping bundle: mis-routes
2016-10-08 23:46:05,595 | INFO | ool-130-thread-1 | OsgiDefaultCamelContext | 56 - org.apache.camel.camel-core - 2.17.3 | Apache Camel 2.17.3 (CamelContext: mis-routes) is starting
2016-10-08 23:46:05,599 | INFO | ool-130-thread-1 | OsgiDefaultCamelContext | 56 - org.apache.camel.camel-core - 2.17.3 | MDC logging is enabled on CamelContext: mis-routes
2016-10-08 23:46:05,601 | INFO | ool-130-thread-1 | ManagedManagementStrategy | 56 - org.apache.camel.camel-core - 2.17.3 | JMX is enabled
2016-10-08 23:46:05,708 | INFO | ool-130-thread-1 | DefaultRuntimeEndpointRegistry | 56 - org.apache.camel.camel-core - 2.17.3 | Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
2016-10-08 23:46:05,804 | INFO | ool-130-thread-1 | OsgiDefaultCamelContext | 56 - org.apache.camel.camel-core - 2.17.3 | AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.
2016-10-08 23:46:05,805 | INFO | ool-130-thread-1 | OsgiDefaultCamelContext | 56 - org.apache.camel.camel-core - 2.17.3 | StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
Any help would be hugely appreciated. Thanks!
The reason should be related to this issue in Camel JIRA: https://issues.apache.org/jira/browse/CAMEL-10278
The main problem is that proton-j 0.10 is incompatible with qpid-jms-client 0.8. We upgraded the dependency to proton-j 0.12.0, and the fix will be available in the Camel 2.17.4 release.
For the moment, you can use Camel 2.17.4-SNAPSHOT or upgrade the dependency in the Camel-Amqp Karaf feature.
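If you upgrade the dependency yourself, a minimal sketch of pinning proton-j in a Maven build (standard Maven Central coordinates; adapt to however your Karaf feature is assembled):
<dependencyManagement>
  <dependencies>
    <!-- force the proton-j version that is compatible with qpid-jms-client 0.8 -->
    <dependency>
      <groupId>org.apache.qpid</groupId>
      <artifactId>proton-j</artifactId>
      <version>0.12.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>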

ActiveMQ shuts down on startup with "TERM Trapped"

I'm currently using the Java Service Wrapper to start my ActiveMQ (5.10) service running on Ubuntu 14.04.1 LTS. Every now and then, when my Jenkins instance runs the last step in my deploy script
cd /app/apache-activemq-5.10.0/bin/linux-x86-64; ./activemq restart
the wrapper will try to start up, then immediately shut down with the following log entries.
STATUS | wrapper | 2014/10/09 08:15:09 | --> Wrapper Started as Daemon
STATUS | wrapper | 2014/10/09 08:15:09 | Launching a JVM...
STATUS | wrapper | 2014/10/09 08:15:09 | TERM trapped. Shutting down.
WARN | wrapper | 2014/10/09 08:15:09 | JVM exited unexpectedly while stopping the application.
STATUS | wrapper | 2014/10/09 08:15:09 | <-- Wrapper Stopped
I haven't got a clue why this is happening. Ideas anyone?

Play sometimes shuts down with memcached client shutdown

My production Play server sometimes shuts down without any warning.
The last message in system.out is:
~ Selection key is not valid.
INFO | jvm 9 | 2012/02/27 11:01:23 | 11:01:22,654 %0-5p ~ Shut down memcached client
INFO | jvm 9 | 2012/02/27 11:01:23 | 11:01:22,657 %0-5p ~ Shut down channel java.nio.channels.SocketChannel[closed]
INFO | jvm 9 | 2012/02/27 11:01:23 | 11:01:22,672 %0-5p ~ Shut down selector sun.nio.ch.EPollSelectorImpl
The error happens in spymemcached.jar.
Why? Any reason?
Thanks.
