I am currently working on a large-scale application that uses GWT with Hibernate. We are facing performance issues with our existing Jetty/Tomcat servers, and we want a server that handles both Hibernate queries and GWT well.
The problem with Tomcat is that it sometimes stops responding to GWT requests, and the client hangs at certain points.
A few servers come to mind:
GlassFish
JBoss
IBM WebSphere AS
etc.
Please suggest a server that handles GWT-RPC requests well and scales in a multi-client environment. We are expecting around 100 concurrent users; hardware is not an issue.
I think your problem is not related to Tomcat or Hibernate; the application itself likely has a scalability problem. I suggest you investigate your application before investing in a fancy application server.
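For example, before switching servers you could check whether the time is actually going into the database layer. A minimal sketch using standard Hibernate diagnostic properties (values go wherever your Hibernate configuration lives):

```
# Print every SQL statement Hibernate issues
hibernate.show_sql=true
# Pretty-print the SQL for readability
hibernate.format_sql=true
# Collect session/query statistics (logged via org.hibernate.stat)
hibernate.generate_statistics=true
```

If a single GWT-RPC call triggers dozens of queries (the classic N+1 problem), no application server will fix that.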
My Spring Boot application becomes slow after 1-2 days when deployed on the production server. I'm using an AWS EC2 instance. At the start the speed is fine, but after a couple of days I have to restart the instance to get back the desired performance. Any hints as to what might be wrong here?
Have you checked for memory leaks in the application? This likely has nothing to do with the EC2 instance itself since, as you mention, it works fine after a restart.
It is not best practice to run an embedded server in production.
I would suggest using the AWS Elastic Beanstalk service for deploying the Spring Boot application; there is no additional charge for the service itself beyond the underlying resources.
Okay, so after some analysis (thread dumps of my Tomcat server in production) I found that some processes (code smells) were consuming all of my CPU, which made the instance slow and affected the overall performance of my application.
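For anyone wanting to do a similar diagnosis from inside the JVM, here is a minimal sketch using the standard ThreadMXBean API to list the hottest threads by CPU time (the class name ThreadCpuReport and the output format are mine, not from any framework):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ThreadCpuReport {

    /** Returns up to n "name cpuMs state" lines, hottest threads first. */
    static List<String> topThreadsByCpu(int n) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        List<long[]> usage = new ArrayList<>();              // {threadId, cpuNanos}
        for (long id : mx.getAllThreadIds()) {
            usage.add(new long[] { id, mx.getThreadCpuTime(id) });
        }
        // Sort descending by CPU time consumed so far
        usage.sort(Comparator.comparingLong((long[] u) -> u[1]).reversed());
        List<String> lines = new ArrayList<>();
        for (long[] u : usage) {
            if (lines.size() == n) break;
            ThreadInfo info = mx.getThreadInfo(u[0]);
            if (info == null) continue;                      // thread exited meanwhile
            lines.add(info.getThreadName() + " cpuMs=" + (u[1] / 1_000_000)
                    + " state=" + info.getThreadState());
        }
        return lines;
    }

    public static void main(String[] args) {
        topThreadsByCpu(5).forEach(System.out::println);
    }
}
```

Comparing two snapshots a few seconds apart shows which threads are actually burning CPU, which you can then match against a `jstack` dump of the same process.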
I'm having some trouble receiving large subprotocol messages using Spring Boot, Spring WebSocket and Undertow. The messages are cut off after 16kB. After doing some digging I found the following configuration property which seems to do what I want:
server.undertow.buffer-size=32768
This configuration property seems to be properly picked up when checking the /configprops actuator endpoint. Unfortunately, this doesn't seem to help in receiving messages larger than 16kB.
I also stumbled upon this ominous line from the Undertow documentation (emphasis mine):
For servers the ideal size is generally 16k, as this is usually the maximum amount of data that can be written out via a write() operation (depending on the network setting of the operating system).
This confirms what I've been experiencing: setting server.undertow.buffer-size has no effect, as it's capped by an OS-level setting. Since I'm using Ubuntu Linux, I have been fiddling with the net.core.rmem_* and net.core.wmem_* settings, but these don't seem to have any effect either. It's not possible to reproduce this issue on macOS.
Does anyone know how to configure Undertow, Spring Boot, and/or Spring WebSocket to support these messages?
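As an aside, the Spring-side limit on incoming WebSocket messages is usually raised via a ServletServerContainerFactoryBean. A sketch of that bean configuration (the class name WebSocketBufferConfig and the sizes are mine; the factory bean and its setters are the standard spring-websocket API):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.server.standard.ServletServerContainerFactoryBean;

@Configuration
public class WebSocketBufferConfig {

    @Bean
    public ServletServerContainerFactoryBean createWebSocketContainer() {
        ServletServerContainerFactoryBean container = new ServletServerContainerFactoryBean();
        container.setMaxTextMessageBufferSize(65536);   // bytes; example value
        container.setMaxBinaryMessageBufferSize(65536); // bytes; example value
        return container;
    }
}
```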
To answer my own question: the Spring Boot settings I proposed do work! The problem was that the load balancer in front of the application, in our case haproxy, was cutting off the messages after 16kB. Tuning haproxy to allow larger messages solved the issue. In the meantime we tweaked our protocol to be more efficient, so we no longer need these large messages, and the haproxy fix was never tested in production (YMMV).
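For anyone hitting the same symptom: haproxy's default tune.bufsize is 16384 bytes, which lines up exactly with a 16kB cutoff. A sketch of the global-section change (the size shown is an example, not what we ran in production):

```
global
    # default is 16384 bytes; raise so larger frames fit in one buffer
    tune.bufsize 65536
```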
Because the developers were all working on macOS and Windows, and the issue only occurred in the acceptance and production environments, which ran Ubuntu, we incorrectly assumed the OS was the cause.
Lessons learned (these are all really dumb and basic but we made these mistakes anyway):
Be sure to validate all your assumptions! If we thought Ubuntu was the issue, we should have singled out Ubuntu in our tests earlier, for example by validating our assumptions on an isolated Ubuntu VM.
Make sure your development environment matches your production environment! In our development environments we weren't running haproxy. As "high level" developers we tend to dismiss load balancers and web servers as commodity infrastructure, but this example once again shows that these "commodities" can directly affect how your application behaves.
I have coded a Jersey-based Java server which is all wrapped in one executable jar.
I am looking for a web hosting service where I can deploy the jar and run it.
I saw some dedicated servers that could do this, but that is overkill for my needs. Any suggestions?
From your comment, I understand that you created a web application with an embedded Jetty server.
I think the best solution for you in this case is to get a virtual machine host, install a JRE, upload your *.jar, and run it from there. Given firewall permissions and correct configuration, you should be able to receive requests on port 80. The downside? It can cost a lot.
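As a sketch of what that "correct configuration" might look like on such a VM (paths, file names, and port numbers are examples; the iptables rule redirects privileged port 80 to the app's unprivileged port so the JVM need not run as root):

```
# Run the jar in the background, assuming it listens on 8080
nohup java -jar /opt/myapp/myapp.jar > /var/log/myapp.log 2>&1 &

# Redirect incoming port 80 traffic to 8080
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```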
Most Java hosts already have a servlet container running (almost always Tomcat), and you can only deploy your web application into it. Having an embedded Jetty server complicates everything for you.
I strongly suggest you detach your web application (or, as you called it, REST server) from Jetty and deploy the *.war to any of the many free Java hosts to test it online.
EDIT
Thanks to you, I did some deeper research on the topic and found an interesting guide to deploying a web application with an embedded Jetty server on Heroku. I've never tried it, nor do I know if it's free, but maybe you can give it a try.
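If you do go the Heroku route, the usual convention is a one-line Procfile at the repository root telling Heroku how to start the process (the jar path below assumes a Maven build and is only an example):

```
web: java -jar target/myapp.jar
```

Note that the app must bind to the port Heroku assigns, typically by reading the PORT environment variable (e.g. `System.getenv("PORT")`) when constructing the embedded Jetty Server.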
DigitalOcean works pretty well for me. Their basic packages are really cheap, and you get root control over your own machine, meaning you can host whatever you want without restrictions. The only downside is that they are pretty old school: you have to set up EVERYTHING yourself, including firewalls etc. There are a lot of guides available on their website though, which makes life a lot easier!
http://www.digitalocean.com
I know this is a touch redundant but I don't have voting or comment rights yet so this is the only method for me to communicate.
DigitalOcean is a solid choice. I am paying $5 a month for a VM with 512 MB RAM and 20 GB of storage (which for my use is just fine). I am still working on my first proper deployment, but as stated above there are tons of tutorials to guide you through it. I had no prior command-line experience, but I've managed to get the server running, create an SSH key, upload my landing page, and get a test project using Spark as the embedded server up and functional in a matter of a few hours. The Droplets are easily scalable from what I've seen. I'm still having trouble deploying a REST-based app with Postgres as the DB, but it seems to have more to do with the ports in play than anything else; I keep getting 404s.
I am working at a startup, and we are just about to roll out our first beta. Knowing that we will have a good number of users, we want seamless deployment when we add new features.
I have worked with Windows Azure before, and I know it supports seamless deployment, so I did some googling, and CloudBees was the first result.
So the question is: with what we have now (a Geronimo server, Rackspace hosting), is it possible to seamlessly redeploy a Java web application? If so, how?
Are there alternative solutions, such as using another hosting provider or a different web server? (Because we are a startup, it would be beneficial if the answer keeps scalability in mind.)
If by a seamless redeploy you mean an upgrade of your application without any downtime or server restarts, LiveRebel might be something to look at.
See http://zeroturnaround.com/liverebel
There are a lot of ways to do this in the Java world. If you don't use sessions (or you share sessions between app servers), you can do a rolling stop/deploy/start of your app servers, taking one offline at a time and using a load balancer to ensure that traffic goes to the other servers.
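A minimal sketch of that rolling pattern, assuming two app servers reachable over SSH, the app managed as a `myapp` systemd service, and a load balancer that health-checks each node (all host and file names here are hypothetical):

```
for host in app1 app2; do
  ssh "$host" 'systemctl stop myapp'        # take one node out; the LB routes around it
  scp build/myapp.war "$host":/opt/myapp/   # push the new version
  ssh "$host" 'systemctl start myapp'       # bring it back up
  # in a real script: poll the node's health-check URL here
  # before moving on to the next host
done
```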
I have heard that GlassFish has such a feature; the reference probably meant this (the GlassFish 3.x redeploy command): http://docs.oracle.com/cd/E19798-01/821-1758/6nmnj7q1h/index.html
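A sketch of how that command is invoked (the application name and archive path are examples):

```
# Redeploy an already-deployed application in place (GlassFish 3.x)
asadmin redeploy --name myapp /path/to/myapp.war
```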
We have a Java-based web application that makes a couple of bursts of asynchronous HTTP calls to web services and APIs. With a default Jetty configuration, the application takes roughly 4 seconds to complete. The same operation in Tomcat takes over a minute.
A slew of configuration changes for Tomcat have been attempted, but nothing seems to help. Any pointers?
Use a profiler to investigate where the time is spent. A good initial choice is jvisualvm in the JDK.
My initial guess would be a DNS issue.
It's not logical that Tomcat needs 60 seconds to process something that Jetty completes in 4; they are both executing Java code.
Is there thread congestion on Tomcat? How many threads can the HTTP connectors of Tomcat and Jetty handle at the same time? What is your configuration?
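For reference, both the connector's thread pool and the DNS guess mentioned earlier are controlled on Tomcat's HTTP connector in server.xml. A sketch with example values:

```
<!-- conf/server.xml: example HTTP connector.
     maxThreads caps concurrent request-processing threads;
     enableLookups="false" avoids a reverse-DNS lookup per request. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           enableLookups="false"
           connectionTimeout="20000" />
```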
One suggestion I have for getting to the bottom of your problem is to download the Tomcat source and step through the code, although, as mentioned, profiling would save you a lot of time. Odds are that it's a DNS issue.