Number of JMS listeners on MQ - java

We are developing an application in Java. We will use JMS to listen to messages arriving on MQ. We expect around 100K messages per day from approximately 100 users (each message around 1,400 characters long). How many listeners would be good for this scenario? What I am trying to find out is how many messages a JMS listener can process per unit of time. An approximate number is enough for now. Is there documentation where we can find this information?

You have to look at two things here: server performance and client performance.
Major JMS providers (HornetQ, ActiveMQ, etc.) can easily handle 5000+ messages per second, so you are covered on that side (if you want more information, have a look at the SPECjms2007 results).
Client performance depends on the computing power of your clients (obviously) and on what you want to do with the messages. Technically, there is no limit to how many messages a client can process. My experience is that message marshalling/unmarshalling is a huge factor, so as a rough estimate you can assume that your client can handle about the same message load as your server, assuming equally powerful machines and light processing of the message content.
In the end you will want to do some load testing.
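For perspective: 100K messages per day averages out to roughly 1.2 messages per second, which even a single listener handles comfortably; extra listeners mainly help absorb bursts and cover slow per-message processing. Below is a minimal sketch of tuning listener concurrency, assuming Spring JMS is in use (the container setup, queue name, and concurrency range are illustrative, not prescriptive):

```java
import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerSetup {

    // Assumes a ConnectionFactory for your MQ provider is already available.
    public DefaultMessageListenerContainer ordersListener(ConnectionFactory cf) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("ORDERS.IN");   // hypothetical queue name
        container.setConcurrency("1-5");             // scale between 1 and 5 concurrent consumers
        container.setMessageListener((MessageListener) message -> {
            // process the message; keep this fast or hand off to a worker pool
        });
        return container;
    }
}
```

Start with a small range and only raise it if your load test shows messages backing up on the queue.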

Related

How to increase WebSocket throughput

I need to pull data from a lot of clients connecting to a Java server through a WebSocket.
There are a lot of WebSocket implementations, and I picked Vert.x.
I made a simple demo where I listen for text frames of JSON, parse them with Jackson, and send a response back. The JSON parsing does not significantly affect throughput.
I am getting an overall throughput of 2.5k messages per second with 2 or 10 clients.
Then I tried buffering, so that clients don't wait for every single response but send a batch of messages (30k - 90k) after a confirmation from the server; throughput increased to 8k messages per second.
I see that the Java process has a CPU bottleneck: one core is used at 100%.
Meanwhile, the Node.js client's CPU consumption is only 5%.
Even one client causes the server to eat almost a whole core.
Do you think it's worth trying other WebSocket implementations such as Jetty?
Is there a way to scale Vert.x across multiple cores?
After I changed the log level from debug to info, I got 70k. The debug level causes Vert.x to print messages for every frame.
It's possible to specify the number of verticle (thread) instances, e.g. by configuring DeploymentOptions: http://vertx.io/docs/vertx-core/java/#_specifying_number_of_verticle_instances
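As a hedged illustration of that option (the verticle class name here is made up), deploying one instance per core looks roughly like this:

```java
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // One verticle instance per core, so the event loops can spread across CPUs.
        DeploymentOptions options = new DeploymentOptions()
                .setInstances(Runtime.getRuntime().availableProcessors());
        vertx.deployVerticle("com.example.WebSocketServerVerticle", options); // hypothetical verticle
    }
}
```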
You were able to create more than 60k connections on a single machine, so I assume the average lifetime of a connection was less than a second. Is that what you expect in production? To compare other solutions, you can try running https://github.com/smallnest/C1000K-Servers
Something doesn't sound right. That's very low performance. It sounds like Vert.x is not configured properly for parallelization. Are you using only one verticle (thread)?
The Kaazing Gateway is one of the first WS implementations. By default, it uses multiple cores and is further configurable with threads, etc. We have users running it for massive IoT deployments, so your issue is not the JVM.
In case you're interested, here's the github repo: https://github.com/kaazing/gateway
Full disclosure: I work for Kaazing

Uniform delivery in ZeroMQ for real-time purposes

I need to implement a real-time analytics system with a server and terminals.
I use the ZeroMQ library (pub/sub mode) to send messages (~40 bytes) to the clients.
If I connect with 1 client, messages arrive with a delay (sometimes more than 250 ms).
If I connect with 100 clients, a lot of them lose uniformity of delivery (no message at all for more than 750 ms, then a huge burst of data). This is a critical issue for me.
I have to publish to more than 6000 terminals...
I publish every 30 ms; in the worst case it is about 1700 bytes to each client (TCP).
Maybe I should use another technology to deliver messages in real time?
As I said in the comment, multicast is the way to go. The primary overriding concern is whether your terminals can join the group you are publishing on, irrespective of how far away they are.
You haven't indicated how the terminals connect to your network (for example VPN over the internet, a private line, whatever). You asked for a better technology: it's multicast.
Now there are some options if you are going to go down the TCP route:
Build a load-balancing infrastructure which sits in front of your service, meaning that your terminals don't connect to your service but to a set of load balancers, which then connect to your service. If you have 10 of these, for example, each one only has to deal with 600 clients and your problem is much smaller; you can scale this way. Don't forget to use asynchronous I/O.
Buy better hardware: for example, Solace and Tervela make hardware message brokers which can scale to very large numbers of concurrent TCP connections, but this is not cheap.
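If you stay with ZeroMQ on the TCP route, a minimal JeroMQ publisher sketch might look like the following (the endpoint, topic, and high-water mark are illustrative; note that the epgm:// multicast transport mentioned above needs a native libzmq built with OpenPGM and is not available in pure-Java JeroMQ):

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class TerminalPublisher {
    public static void main(String[] args) throws InterruptedException {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
            pub.setSndHWM(100_000);      // per-subscriber send queue depth before messages are dropped
            pub.bind("tcp://*:5556");    // with native libzmq you could bind an epgm:// endpoint instead
            while (!Thread.currentThread().isInterrupted()) {
                // Topic-prefixed payload so subscribers can filter; small body as in the question.
                pub.send("terminal.update " + System.nanoTime());
                Thread.sleep(30);        // the 30 ms publish interval from the question
            }
        }
    }
}
```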

Will it make a difference in performance to have multiple queues across different nodes?

Right now I'm working on how to improve the efficiency of RabbitMQ.
For example:
Components:
TCP Load Balancer
Producers
RabbitMQ cluster
Consumers
Producers:
2-5 EC2 servers. Each server has Logstash installed and configured to send messages to RabbitMQ. Nothing special about this. Only one requirement: messages need to be persistent. (Just in case: I/O is not an issue.)
RabbitMQ Cluster:
2 EC2 servers. Lots of memory, CPU, good disks, nice bandwidth.
Consumers:
The number of consumers varies, anywhere from 2 to 15. Consumers connect to the load balancer (ELB). Some of them use basic.get, some use basic.consume. Requirement: no_ack = False, meaning all messages need to be acknowledged.
Right now we have one queue that holds 95% of the traffic. My questions are:
Suppose I create an equal number of queues on each node in the RabbitMQ cluster (right now I'm talking about how to distribute the load of this one high-traffic queue), and each producer publishes messages to its own queue. Consumers on the other end subscribe to all queues and get messages from every queue. Will that increase performance?
Also, will a 1-to-1 relationship between exchanges and queues make any difference in performance?
Finally, what would you recommend in this case? (Consumers can't be dynamically configured.)
Firstly, if you're mirroring queues and using persistent messages, you're going to be harming performance. See my article on the performance of RabbitMQ HA for details.
You're right, though, to consider more queues in order to improve throughput: each RabbitMQ queue is a single Erlang process, so it will only ever use one core. There are other Erlang processes, of course, that help, but you are indeed bottlenecked by that single queue. The answer is the consistent hash exchange type, which will evenly distribute messages across the queues bound to it. YMMV, but I'd imagine you'd want one queue per core on each server.
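For illustration, here is a sketch of wiring up a consistent hash exchange with the RabbitMQ Java client; the exchange and queue names are made up, and the rabbitmq_consistent_hash_exchange plugin has to be enabled on the broker:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConsistentHashSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            // Requires the rabbitmq_consistent_hash_exchange plugin on the broker.
            ch.exchangeDeclare("events.hash", "x-consistent-hash", true);

            // One queue per core; the binding key is a relative weight, not a routing pattern.
            for (int i = 0; i < 4; i++) {
                String q = "events.q" + i;
                ch.queueDeclare(q, true, false, false, null);
                ch.queueBind(q, "events.hash", "1");
            }

            // Messages are spread across queues by hashing the routing key,
            // so use something well distributed (e.g. a message or correlation id).
            ch.basicPublish("events.hash", java.util.UUID.randomUUID().toString(),
                    null, "payload".getBytes());
        }
    }
}
```

The routing key only needs to be well distributed for the hashing to spread load evenly across the bound queues.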

Tib RV - listing all the processes that are publishing to a given topic

We have RV messaging systems publishing and receiving messages. Recently some underlying JARs were upgraded; these are serialization JARs used by all publishers and subscribers. However, it seems that some of the publishers are still referencing old versions of the serialization JARs, and therefore the receivers fail when trying to deserialize the received messages.
Obviously, restarting these publisher services should fix the problem. However, how do I identify all publishers that send messages to a particular topic? There must be some RV admin way of listing all the processes that are publishing to a given topic?
I just gave a similar answer on another question:
There is a really great tool for this called Rai Insight.
Basically, what it can do is sit on a box, silently listen to all the multicast data, and present statistics, even in real time. We used it to monitor traffic-flow spikes with just a few seconds' delay.
It can give you traffic statistics broken down by multicast group, service number, or even sending machine: traffic-flow peak/average, retransmission-rate peak/average, everything you can think of.
It will also give you per-service per-topic information.

What steps can be taken to optimize tibco JMS to be more performant?

We are running a high-throughput system that uses TIBCO EMS JMS to pass large numbers of messages between our main server and our client connections. We've done some statistics and have determined that JMS is causing a lot of latency. How can we make TIBCO EMS JMS more performant? Are there any resources that give a good discussion of this topic?
Using non-persistent messages is one option if you don't need persistence.
Note that even if you do need persistence, it is sometimes better to use non-persistent messages and, in case of a crash, perform a different recovery action (like resending all messages).
This is relevant if:
crashes are rare (as the recovery takes time)
you can easily detect a crash
you can handle duplicate messages (you may not know exactly which messages were delivered before the crash)
EMS also provides some mechanisms that are persistent but less bulletproof than classic guaranteed delivery.
These include:
instead of "exactly once" message delivery, you can use "at least once" or "at most once" delivery;
you can use the prefetch mechanism, which causes the client to fetch messages into memory before your application requests them.
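As a minimal sketch with the plain JMS API (the queue name is made up, and the ConnectionFactory is assumed to come from EMS, e.g. via JNDI or com.tibco.tibjms.TibjmsConnectionFactory), switching a producer to non-persistent delivery looks like this:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class NonPersistentSender {

    // cf is assumed to be an EMS ConnectionFactory obtained elsewhere.
    public static void send(ConnectionFactory cf, String text) throws Exception {
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders.out");       // hypothetical queue name
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // skip disk persistence for speed
            producer.send(session.createTextMessage(text));
        } finally {
            connection.close(); // closes sessions and producers created from this connection
        }
    }
}
```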
EMS should not be the bottleneck. I've done testing and we have gotten a huge amount of throughput on our server.
You need to determine where the bottleneck is. Is the problem in the producer of the message or in the consumer? Are messages piling up on the queue?
What type of scenario are you running: pub/sub or request/reply?
Are temporary queues piling up? Too many temporary queues can cause performance issues (mostly when they linger because you didn't close something properly).
Are you publishing to a topic with durable subscribers? If so, try bridging the topic to a queue and reading from that. Durable subscribers can cause a small performance hiccup too, since EMS needs to track who has copies of all messages.
Ensure that your sending process has one session and makes multiple calls through that session. Don't open a whole new session for each operation; reuse where possible. Do the same for the consumer (see the sketch after this list).
Make sure you CLOSE when you are done. EMS doesn't clean things up, so if you make a connection and just close your app, the connection is still there, eating up resources.
Review your tolerance for lost messages in the event of a crash. If you are doing client acknowledge and it doesn't matter whether you crash while processing a message, then switch to auto acknowledge. Also, I believe that if you are using TEMS (TIBCO EMS for WCF) there is a problem with session acknowledge, where a message is only acknowledged once it has been processed in full; we switched from client ack to the "dups OK" mode and it worked better.
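To illustrate the session-reuse and cleanup advice above, here is a hedged sketch using the plain JMS API (the ConnectionFactory is assumed to come from EMS; the queue name is made up):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class ReusableSender implements AutoCloseable {
    private final Connection connection;
    private final Session session;
    private final MessageProducer producer;

    // cf is assumed to be your EMS ConnectionFactory; the queue name is illustrative.
    public ReusableSender(ConnectionFactory cf) throws Exception {
        connection = cf.createConnection();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("orders.out");
        producer = session.createProducer(queue);
    }

    // Reuse the same session and producer for every send instead of
    // opening a new connection/session per operation.
    public void send(String text) throws Exception {
        producer.send(session.createTextMessage(text));
    }

    // Closing the connection releases its sessions and producers on the server,
    // so EMS doesn't keep stale resources around after the app exits.
    @Override
    public void close() throws Exception {
        connection.close();
    }
}
```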
