I'm not sure how SocketAppender works. I know that logging events are sent to a particular port, and then we can print the logs on a console or write them to a file.
My question is more about the way logs are sent. Is there e.g. one queue? Is it synchronous or asynchronous? Can using it slow down my program?
I've found some info here, but it isn't clear to me.
From the SocketAppender documentation:
Logging events are automatically buffered by the native TCP
implementation. This means that if the link to server is slow but
still faster than the rate of (log) event production by the client,
the client will not be affected by the slow network connection.
However, if the network connection is slower than the rate of event production, then the client can only progress at the network rate. In particular, if the network link to the server is down, the client will be blocked.
On the other hand, if the network link is up, but the
server is down, the client will not be blocked when making log
requests but the log events will be lost due to server unavailability.
Since the appender uses the TCP protocol, I would say the log events are "sort of synchronous".
Basically, the appender uses TCP to send the first log event to the server. However, if the network latency is so high that the message has still not been sent by the time a second event is generated, then the second log event will have to wait (and thus block), until the first event is consumed. So yes, it would slow down your application, if the app generates log events faster than the network can pass them on.
As mentioned by @Akhil and @Nikita, JMSAppender or AsyncAppender would be better options if you don't want the performance of your application to be impacted by the network latency.
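To illustrate, here is a rough log4j 1.x sketch, configured in code rather than in a properties/XML file, that wraps a SocketAppender in an AsyncAppender so logging calls hand the event to a background thread instead of waiting on the TCP write. The host is a placeholder of mine, not anything from the question:

    import org.apache.log4j.AsyncAppender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.net.SocketAppender;

    public class AsyncSocketLoggingSetup {
        public static void main(String[] args) {
            // Hypothetical log server; 4560 is the usual SocketAppender port
            SocketAppender socket = new SocketAppender("logserver.example.com", 4560);
            socket.setReconnectionDelay(10000); // retry the connection every 10 s after it drops

            // AsyncAppender hands events to a background thread, so logging calls
            // return immediately instead of waiting on the TCP write
            AsyncAppender async = new AsyncAppender();
            async.setBufferSize(512); // bounded buffer: callers block again once it fills up
            async.addAppender(socket);

            Logger.getRootLogger().addAppender(async);
            Logger.getRootLogger().info("this call no longer waits on the network");
        }
    }

The buffer is finite, so this mitigates slow-network stalls rather than eliminating them; with the Blocking option set to false the appender discards events instead of blocking once the buffer is full.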
SocketAppender sends the logs as serialized objects to a SocketNode (log server). Inside the appender, a connector thread with a configured reconnectionDelay checks the connection's integrity and simply drops all log events while the connection is not yet established or has been lost. Hence there is no blocking of the application flow.
If you need richer JMS features for sending log info across JVMs, try JMSAppender.
The Log4j JMS appender can be used to send your log messages to a JMS broker. The events are serialized and transmitted as the JMS message type ObjectMessage.
You can get a sample program here.
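In case it helps, here is a rough programmatic sketch of a JMSAppender setup. The ActiveMQ JNDI factory class, broker URL and topic name are placeholders of mine, not values from the quoted docs, and the ActiveMQ client jar would need to be on the classpath:

    import org.apache.log4j.Logger;
    import org.apache.log4j.net.JMSAppender;

    public class JmsLoggingSetup {
        public static void main(String[] args) {
            JMSAppender jms = new JMSAppender();
            // JNDI settings for an ActiveMQ-style broker (placeholders)
            jms.setInitialContextFactoryName("org.apache.activemq.jndi.ActiveMQInitialContextFactory");
            jms.setProviderURL("tcp://localhost:61616");
            jms.setTopicConnectionFactoryBindingName("ConnectionFactory");
            jms.setTopicBindingName("dynamicTopics/logTopic");
            jms.activateOptions(); // performs the JNDI lookups and opens the JMS connection

            Logger.getRootLogger().addAppender(jms);
            Logger.getRootLogger().info("this event is published to the topic as an ObjectMessage");
        }
    }

Because the event travels as an ObjectMessage, the consumer on the other side of the broker also needs log4j on its classpath to deserialize it.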
It seems to be synchronous (I checked the sources), but I may be mistaken. You can use AsyncAppender to make it asynchronous. See this.
I'm using Kafka log compaction, and I was wondering if there is any callback function that I, as a consumer, can have invoked when the Kafka broker performs log compaction of my topic.
So far I cannot see any callback for this, so I was wondering what the standard strategy is for detecting that log compaction took place.
Regards
The client itself has no communication with the broker for such events. In the past, we used Splunk to capture the compaction events from the LogCleaner process logs, and we could generate webhook events based on that if we needed it for any reason (we only used it for administrative debugging; clients never needed it).
I have a question about the gelf module (http://logging.paluch.biz/), in particular about what happens when the Graylog server is not available for some reason.
Will log4j cache the logs somewhere and send them once the connection to Graylog is recovered?
Will an application using this module stop working while the Graylog server has issues?
Thanks.
Gelf-Appenders are online appenders without a cache. They connect directly to a remote service and submit log events as your application produces them.
If the remote service is down, log events get lost. There are a few options with different impacts:
1. TCP: TCP comes with transport reliability and requires a connection. If the remote service becomes slow or unresponsive, your application is affected as soon as the I/O buffers are saturated. logstash-gelf uses NIO in a non-blocking way as long as all data can be sent. If the TCP connection drops, you will run into connection timeouts if the remote side is not reachable, or connection-refused errors if the remote port is closed. In any case you get reliability, but it can affect your application's performance.
2. UDP: UDP has no notion of a connection; it's used for fire-and-forget communication. If the remote side becomes unhealthy, your application usually is not affected, but you will encounter log event loss.
3. Redis: You can use Redis as an intermediate buffer if your Graylog instance is known to fail or be taken down for maintenance. Once Graylog is available again, it should catch up, and you prevent log event loss to some degree. If your Redis service becomes unhealthy, see point 1.
4. HTTP: HTTP is another option that gives you a degree of flexibility. You can put your Graylog servers behind a load balancer to improve availability and reduce the risk of failure. Log event loss is still possible.
If you want to ensure log continuity and reduce the probability of log event loss, then write logs to disk. It's still not a 100% guarantee against loss (disk failure, full disk), but it improves application performance. The log file (ideally in some JSON-based format) can then be parsed and submitted to Graylog while maintaining a read offset, so you can recover from a remote outage.
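As a rough sketch of that last suggestion, a shipper process could keep a read offset into the JSON log file and only advance it after a successful submission. The file name, Graylog URL and the assumption of a GELF HTTP input are placeholders, not anything from logstash-gelf itself:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class OffsetShipper {
        public static void main(String[] args) throws Exception {
            long offset = 0; // persist this (e.g. in a small offset file) to survive restarts
            while (true) {
                try (RandomAccessFile log = new RandomAccessFile("app-log.json", "r")) {
                    log.seek(offset);
                    String line;
                    while ((line = log.readLine()) != null) {
                        ship(line);                    // send one JSON event
                        offset = log.getFilePointer(); // advance only after a successful send
                    }
                } catch (IOException e) {
                    // Graylog outage or missing file: keep the offset and retry later
                }
                Thread.sleep(1000);
            }
        }

        // POST a single event to a (hypothetical) GELF HTTP input
        static void ship(String jsonEvent) throws IOException {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://graylog.example.com:12201/gelf").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.getOutputStream().write(jsonEvent.getBytes(StandardCharsets.UTF_8));
            if (conn.getResponseCode() >= 300) {
                throw new IOException("Graylog rejected the event: " + conn.getResponseCode());
            }
        }
    }

Because the offset only moves forward after Graylog has accepted the event, an outage delays log delivery but does not lose events (short of losing the disk itself).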
I have a situation where I need to read (ongoing) messages from a topic and put them on another queue. I have doubts whether I need a JMS queue or whether an in-memory Java queue is sufficient. The reading from the queue will be done by other thread(s) in the same JVM, which will client-acknowledge the message to the topic after reading it from the (in-memory) queue and processing it as necessary (sending it to a remote IBM MQ). So if my client crashes, the messages that were in the in-memory queue will be lost, but they will still exist on the topic and will be redelivered to me. Am I right?
Some of this depends on how you have set up the queue/topic and the connection string you are using to read from IBM's MQ, but if you are using the defaults you WILL lose messages if you're reading into an in-memory queue.
I'd use ActiveMQ, for example embedded in the same JVM as a library, so it takes care of receipt, delivery and persistence for you.
Also, if you are listening to a topic, you're not going to be sent missed messages after a crash, even if you reconnect afterwards, unless you have:
- configured your client as a durable subscriber, and
- reconnected in time (before expireMessagesPeriod is reached).
The ActiveMQ library is not large and is worth using if ensuring delivery of every message is important, especially in an asynchronous environment.
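For illustration, here is a minimal sketch of that durable-subscriber setup combined with client acknowledgement, assuming an ActiveMQ broker; the broker URL, client ID, topic and subscription names are placeholders:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class DurableTopicReader {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.setClientID("my-forwarder"); // a fixed client ID is required for durable subscriptions
            connection.start();

            // CLIENT_ACKNOWLEDGE: the broker redelivers anything not yet acknowledged,
            // e.g. messages that were still sitting in an in-memory queue when we crashed
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            Topic topic = session.createTopic("source.topic");
            MessageConsumer consumer = session.createDurableSubscriber(topic, "forwarder-subscription");

            while (true) {
                Message message = consumer.receive();
                forwardToRemoteQueue(message); // hand off, e.g. to the remote IBM MQ queue
                message.acknowledge();         // acknowledge only after the hand-off succeeded
            }
        }

        private static void forwardToRemoteQueue(Message message) {
            // placeholder for the real hand-off
        }
    }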
The main difference is that an in-memory queue loses data when the application goes down, whereas a JMS queue loses data when the server goes down IF the topic/queue is not persistent. The former is much more likely than the latter, so I'd also say go with JMS.
I need to send a continuous flow of messages (simple TextMessages with a timestamp and x/y coordinates) over a wireless network from a moving computer. There will be a lot of these short messages (like 200 per sec) and unfortunately the network connection is most likely unreliable since the sending device will leave the WLAN area from time to time... When the connection is not available, all upcoming messages should be buffered until the connection is back up again. The order of the transmitted messages does not matter, since they contain a timestamp, but ALL messages must be transferred.
What would be a simple but reliable method for sending these telegrams? Would it be possible to just use a "plain" TCP or UDP socket connection? Would messages be buffered when the connection is temporarily down and sent afterwards automatically? Or is the connection loss detected and reported directly, so that I could buffer the messages and try to reconnect periodically on my own? Do libraries like Netty help here?
I also thought about using broker-to-broker communication (e.g. an ActiveMQ network of brokers) as an alternative. Would the overhead be too big here? Would you suggest another messaging middleware in this case?
TCP gives you guaranteed delivery (while it's connected, that is). You should check whether the connection has gone down and put messages in a queue while retrying the connection. Once you see the connection is back up, dump the queue into the TCP socket.
Also look into TCP Keepalive for recognition of a down connection: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
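Here is a sketch of that approach (host, port and the newline-delimited wire format are my own assumptions): the producing code only ever writes to an in-memory queue, while a sender thread drains it whenever the TCP connection is up, so nothing is dropped during an outage:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class BufferedSender implements Runnable {
        private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

        // called by the producer (e.g. 200 times/sec); never touches the network
        public void send(String telegram) {
            pending.add(telegram);
        }

        @Override
        public void run() {
            while (true) {
                try (Socket socket = new Socket("basestation.example.com", 9000)) {
                    socket.setKeepAlive(true); // helps detect half-open connections eventually
                    OutputStream out = socket.getOutputStream();
                    while (true) {
                        String telegram = pending.take();
                        try {
                            out.write((telegram + "\n").getBytes(StandardCharsets.UTF_8));
                            out.flush();
                        } catch (IOException e) {
                            pending.add(telegram); // re-queue so the telegram is not lost (order does not matter here)
                            throw e;
                        }
                    }
                } catch (IOException | InterruptedException e) {
                    // link is down: telegrams keep accumulating in 'pending'; back off, then reconnect
                    try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
                }
            }
        }
    }

You would start run() on its own thread and have the position-producing code call send(). Note that a successful write() only means the bytes reached the local TCP buffer, so for hard guarantees you would still want an application-level acknowledgement from the receiver before discarding a telegram.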
It seems like you could use a messaging wrapper like Java JMS with an "assured persistent" reliability mode. I have not done this myself in the context of text messages, but this idea may lead you to the right answer. Also, there may be an Apache library already written that handles what you need, such as Qpid.
We are running a high-throughput system that uses TIBCO EMS (JMS) to pass large numbers of messages between our main server and our client connections. We've done some statistics and have determined that JMS is causing a lot of latency. How can we make TIBCO EMS more performant? Are there any resources that give a good discussion of this topic?
Using non-persistent messages is one option if you don't need persistence.
Note that even if you do need persistence, sometimes it's better to use non-persistent messages and, in case of a crash, perform a different recovery action (like resending all messages).
This is relevant if:
- crashes are rare (as the recovery takes time)
- you can easily detect a crash
- you can handle duplicate messages (you may not know exactly which messages were delivered before the crash)
EMS also provides some mechanisms that are persistent but less bulletproof than classic guaranteed delivery. These include:
- instead of "exactly once" message delivery, you can use "at least once" or "up to once" delivery.
- you may use the prefetch mechanism, which causes the client to fetch messages into memory before your application requests them.
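For example, switching a producer to non-persistent delivery is done through the standard JMS API that EMS implements; in this sketch the queue name is a placeholder and obtaining the ConnectionFactory (typically via JNDI from the EMS server) is left out:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class NonPersistentSender {
        public static void send(ConnectionFactory factory, String text) throws JMSException {
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("orders.fast");
                MessageProducer producer = session.createProducer(queue);

                // NON_PERSISTENT skips the server-side disk write, trading crash
                // recovery for lower latency per message
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
                producer.send(session.createTextMessage(text));
            } finally {
                connection.close();
            }
        }
    }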
EMS should not be the bottleneck. I've done testing, and we have gotten an enormous amount of throughput on our server.
You need to try to determine where the bottleneck is. Is the problem in the producer of the message or the consumer? Are messages piling up on the queue?
What type of scenario are you running: pub/sub or request-reply?
Are temporary queues piling up? Too many temporary queues can cause performance issues (mostly when they linger because you didn't close something properly).
Are you publishing to a topic with durable subscribers? If so, try bridging the topic to a queue and reading from that. Durable subscribers can cause a small performance hiccup too, since the server needs to track who has copies of all messages.
Ensure that your sending process has one session and multiple calls through that session. Don't open a complete session for each operation. Re-use where possible. Do the same for the consumer.
Make sure you CLOSE connections when you are done. EMS doesn't clean things up on its own, so if you make a connection and just close your app, the connection is still there sucking up resources.
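A minimal sketch of that advice through the plain JMS API (the queue name is a placeholder): create the connection, session and producer once, reuse them for every send, and close the connection on shutdown so EMS can release its resources.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class ReusedSessionSender implements AutoCloseable {
        private final Connection connection;
        private final Session session;
        private final MessageProducer producer;

        public ReusedSessionSender(ConnectionFactory factory) throws JMSException {
            // created once, reused for every message
            connection = factory.createConnection();
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            producer = session.createProducer(session.createQueue("orders"));
        }

        public void send(String text) throws JMSException {
            producer.send(session.createTextMessage(text)); // no per-message connection/session setup
        }

        @Override
        public void close() throws JMSException {
            connection.close(); // also releases the session and producer
        }
    }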
Review your tolerance for lost messages in the event of a crash. If you are doing client acknowledge and it doesn't matter if you crash while processing a message, then switch to auto acknowledge. Also, I believe that if you are using TEMS (TIBCO EMS for WCF) there's a problem with session acknowledge, where a message is only acknowledged once the whole message has been processed; we switched from client ACK to the Dups-OK mode and it worked better.