Zookeeper in-memory log - java

Does ZooKeeper have an "in-memory log"? I have some experience with ZooKeeper, but I have never seen anything like it (from the client side), and after searching I haven't found anything related to an in-memory log. As far as I know, every operation (create, setData, delete) is persisted to disk.
However, in the paper Ravana: Controller Fault-Tolerance in Software-Defined Networking the authors write:
Event logging: The master saves each event in ZooKeeper’s distributed
in-memory log. Slaves monitor the log by registering a
trigger for it. When a new event is propagated to a slave’s log, the
trigger is activated so that the slave can read the newly arrived event
locally.
So, assuming that there is an in-memory log, how would a (Java) client app use it? Or is it server-side only?
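For what it's worth, if the paper's "in-memory log" just means an ordered log kept in ZooKeeper's memory-resident data tree, a client would normally approximate it with sequential znodes plus watches, the watch playing the role of the paper's "trigger". A rough sketch (the /ravana-log parent znode is purely illustrative and assumed to exist):

    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class EventLogSketch {
        // Hypothetical parent znode acting as the event log.
        private static final String LOG = "/ravana-log";

        // Master side: append an event as a persistent sequential child znode.
        static void appendEvent(ZooKeeper zk, byte[] event) throws KeeperException, InterruptedException {
            zk.create(LOG + "/event-", event, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        }

        // Slave side: a child watch acts as the "trigger"; it fires when a new event is appended.
        static void watchLog(ZooKeeper zk) throws KeeperException, InterruptedException {
            Watcher trigger = event -> {
                try {
                    watchLog(zk); // watches are one-shot, so re-register and read the new children
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            };
            List<String> events = zk.getChildren(LOG, trigger);
            // process any children not seen before ...
        }
    }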

Related

Problem loading and initializing a Custom Cache Store with Ignite

We want to use Ignite as a cache layer on top of Postgres.
We have implemented a Custom Cache Store.
We are running into trouble in some situations where Ignite is not stable, and we get errors like this:
javax.cache.CacheException: class org.apache.ignite.IgniteClientDisconnectedException: Client node disconnected: null
    at org.apache.ignite.internal.processors.cache.GridCacheGateway.checkState(GridCacheGateway.java:97)
    at org.apache.ignite.internal.processors.cache.GridCacheGateway.isStopped(GridCacheGateway.java:269)
    at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.checkProxyIsValid(GatewayProtectedCacheProxy.java:1597)
    at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.onEnter(GatewayProtectedCacheProxy.java:1621)
    at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:673)
Ignite is launched separately from our application, and when we launch our app, we call loadCache and disable the WAL.
When we relaunch our app without relaunching Ignite, we run into these issues.
I wonder why. Is it related to the fact that the WAL must not be disabled? How can we tell that the cache is already initialised and does not need loadCache? Do you have recommendations for several apps with a custom cache store connected to one Ignite cluster?
Thanks
Please check out https://ignite.apache.org/docs/latest/clustering/connect-client-nodes:
While a client is in a disconnected state and an attempt to reconnect is in progress, the Ignite API throws an IgniteClientDisconnectedException. The exception contains a future that represents a re-connection operation. You can use the future to wait until the operation is complete.
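In code, that typically means catching the exception around cache operations and waiting on the reconnect future before retrying. A minimal sketch of that pattern (cache name and key handling are illustrative):

    import javax.cache.CacheException;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteClientDisconnectedException;

    public class ReconnectAwareGet {
        static Object getWithReconnect(Ignite ignite, String cacheName, Object key) {
            IgniteCache<Object, Object> cache = ignite.cache(cacheName);
            try {
                return cache.get(key);
            } catch (CacheException e) {
                if (e.getCause() instanceof IgniteClientDisconnectedException) {
                    IgniteClientDisconnectedException cause =
                            (IgniteClientDisconnectedException) e.getCause();
                    cause.reconnectFuture().get();           // block until the client has rejoined the cluster
                    return ignite.cache(cacheName).get(key); // retry after reconnect
                }
                throw e;
            }
        }
    }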
Also, WAL enable/disable is known to have issues, and it is only safe to do on a stable topology. Please share logs if you want to investigate further.

Fail safe mechanism for Kafka

I am working on an application that writes to a Kafka queue which is read by another application. When I am unable to send messages to Kafka due to a network or other failure, I need to write the messages generated during the downtime somewhere else, e.g. Oracle or the local file system, so that I don't lose them. The problem with Oracle or any other DB is that it can go down too. Are there any recommendations on how I could achieve fail-safety during Kafka downtime?
The number of messages generated is approximately 20-25 million per day. For messages stored during the downtime, I am planning to have a batch job re-send them to the destination application once the target application is up again.
Thank you
You can push those messages into a cloud-based messaging service like SQS. It supports around 3K messages per second.
There is also a connector that allows you to push the messages back into Kafka directly, with no other headaches.
If you can't export the data out of your local network, then a cluster of RabbitMQ instances may help, although it wouldn't be a plug-and-play solution.
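The same fallback idea can be sketched at the producer level: try Kafka first and divert to the queue only when the send fails. A minimal sketch, assuming the Kafka client and AWS SDK v1 are on the classpath (the class name and the pre-created fallback queue URL are illustrative):

    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class FailSafeSender {
        private final KafkaProducer<String, String> producer;
        private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        private final String fallbackQueueUrl; // pre-created SQS queue used only during Kafka downtime

        FailSafeSender(KafkaProducer<String, String> producer, String fallbackQueueUrl) {
            this.producer = producer;
            this.fallbackQueueUrl = fallbackQueueUrl;
        }

        void send(String topic, String key, String value) {
            // Kafka's send is asynchronous; the callback fires once the broker acks or retries are exhausted.
            producer.send(new ProducerRecord<>(topic, key, value), (metadata, exception) -> {
                if (exception != null) {
                    // Kafka unreachable: park the message in SQS so a batch job can replay it later.
                    sqs.sendMessage(fallbackQueueUrl, value);
                }
            });
        }
    }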

AWS SQS temporary queues not being deleted on app shutdown

We are attempting to use the AWS SQS temporary queues library for synchronous communication between two of our apps. One app utilises an AmazonSQSRequester while the other uses an AmazonSQSResponder - both are created using the builders from the library and wired in as Spring beans in app config. Through the AWS console we create an SQS queue to act as the 'host queue' required for the request/response pattern. The requesting app sends to this queue, and the responding app uses an SQSMessageConsumer to poll the queue and pass messages to the AmazonSQSResponder. Part of how (I'm fairly sure) the library works is that the Requester spins up a temporary SQS queue (a real, static one), then sends that queue URL as an attribute in a message to the Responder, which then posts its response there.
Communications between the apps work fine and temporary queues are automatically created. The issue is that when the Requester app shuts down, the temporary queue (now orphaned and useless) persists when it should be cleaned up by the library. Information on how we expect this clean-up to work can be found in this AWS post:
The Temporary Queue Client addresses this issue as well. For each host queue with recent API calls, the client periodically uses the TagQueue API action to attach a fresh tag value that indicates the queue is still being used. The tagging process serves as a heartbeat to keep the queue alive. According to a configurable time period (by default, 5 minutes), a background thread uses the ListQueues API action to obtain the URLs of all queues with the configured prefix. Then, it deletes each queue that has not been tagged recently.
The problem we are having is that when we kill the Requester app, unexplained messages appear in the temporary queue/response queue. We are unsure which app puts them there. Messages sitting in the queue prevent the automagic cleanup from happening. The unexplained messages all share the same content, a short string:
.rO0ABXA=
This looks like it was logged as a bug with the library: https://github.com/awslabs/amazon-sqs-java-temporary-queues-client/issues/11. Will hopefully be fixed soon!
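As a side note, the payload can be decoded locally: rO0ABXA= is Base64 for the Java serialization stream header followed by a null reference, i.e. a serialized Java null.

    import java.util.Base64;

    public class DecodePayload {
        public static void main(String[] args) {
            byte[] bytes = Base64.getDecoder().decode("rO0ABXA=");
            // Prints "ac ed 00 05 70": the Java serialization STREAM_MAGIC and STREAM_VERSION
            // followed by TC_NULL, i.e. the body is a serialized Java null.
            for (byte b : bytes) {
                System.out.printf("%02x ", b);
            }
        }
    }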

SQL Server as persistent DB for ActiveMQ

When my ActiveMQ goes down, how can I store the messages that are on their way to ActiveMQ? If the answer is using a persistence DB, then how and when can I re-send those messages that were stored in the DB back to the ActiveMQ queue (assuming it is up and working again)?
(To give you the complete background: whenever a row gets inserted into my DB, the DB triggers an HTTP call to my Java app. This app puts the DB changes as messages into ActiveMQ. We have written it this way as we are not experts in the Java Spring framework.)
Any solutions or suggestions in this regard are much appreciated.
What you are looking for is indeed persistence:
Persistent messaging (ensures the messages are stored in a datastore until the broker receives the acknowledgement that they have been delivered successfully to all consumers)
This will ensure the messages are re-sent (automatically) once the broker is back up.
If you want redundancy, you should look into a master/slave topology.
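To use that from the producing side, messages just need to be sent with PERSISTENT delivery (the JMS default); the broker then writes them to its store (KahaDB by default, or a JDBC store such as SQL Server if configured) before considering the send complete. A minimal sketch against a local broker, with an illustrative queue name:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PersistentSend {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();

            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("db.changes"); // illustrative queue name
            MessageProducer producer = session.createProducer(queue);

            // PERSISTENT delivery: the broker stores the message durably
            // before acknowledging the send, so it survives a broker restart.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.send(session.createTextMessage("row inserted"));

            connection.close();
        }
    }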

log4j: How does a Socket Appender work?

I'm not sure how SocketAppender works. I know that logging events are sent to a particular port. Then we can print the logs to a console or write them to a file.
My question is more about the way the logs are sent. Is there e.g. a single queue? Is it synchronous or asynchronous? Can using it slow down my program?
I've found some info here, but it isn't clear to me.
From the SocketAppender documentation
Logging events are automatically buffered by the native TCP implementation. This means that if the link to the server is slow but still faster than the rate of (log) event production by the client, the client will not be affected by the slow network connection. However, if the network connection is slower than the rate of event production, then the client can only progress at the network rate. In particular, if the network link to the server is down, the client will be blocked.
On the other hand, if the network link is up, but the server is down, the client will not be blocked when making log requests but the log events will be lost due to server unavailability.
Since the appender uses the TCP protocol, I would say the log events are "sort of synchronous".
Basically, the appender uses TCP to send the first log event to the server. However, if the network latency is so high that the message has still not been sent by the time a second event is generated, then the second log event will have to wait (and thus block) until the first event is consumed. So yes, it would slow down your application if the app generates log events faster than the network can pass them on.
As mentioned by @Akhil and @Nikita, JMSAppender or AsyncAppender would be better options if you don't want the performance of your application to be impacted by the network latency.
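For completeness, wrapping the SocketAppender in an AsyncAppender is usually done in the log4j configuration file, but programmatically the wiring looks roughly like this (log4j 1.x; host and port are illustrative):

    import org.apache.log4j.AsyncAppender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.net.SocketAppender;

    public class AsyncSocketLogging {
        public static void main(String[] args) {
            // SocketAppender ships serialized LoggingEvents over TCP to a remote log server
            // (e.g. one started with org.apache.log4j.net.SimpleSocketServer).
            SocketAppender socketAppender = new SocketAppender("localhost", 4560);

            // AsyncAppender buffers events and forwards them on a background thread,
            // so application threads are not blocked by a slow or down network link.
            AsyncAppender asyncAppender = new AsyncAppender();
            asyncAppender.addAppender(socketAppender);

            Logger.getRootLogger().addAppender(asyncAppender);
            Logger.getLogger(AsyncSocketLogging.class).info("hello over the wire");
        }
    }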
SocketAppender sends the logs as a serialized object to a SocketNode or log server. In the appender, the connector thread, with a configured reconnectionDelay, checks the connection's integrity and will drop the log events if the connection is not initialized or gets disconnected. Hence there is no blocking of the application flow.
If you need JMS features when sending log info across JVMs, try JMSAppender.
Log4j's JMS appender can be used to send your log messages to a JMS broker. The events are serialized and transmitted as the JMS message type ObjectMessage.
You can get a sample program HERE.
It seems to be synchronous (I checked the sources), but I may be mistaken. You can use AsyncAppender to make it asynchronous. See this.
