I built a Java web application hosted on Tomcat. It can be found at this URL.
The problem I am experiencing is that the first visit to the page takes about 10 s, while every subsequent visit takes only 100-500 ms. I would attribute the speedup to browser caching, but it isn't that: even when I force a cache refresh (Ctrl+Shift+R) I still get the quick response.
Then, after roughly 5 minutes, I visit the page again and it is slow again.
You can run some tests yourself on the URL provided by changing the search parameter value, e.g. to 1050, 1051, 1052, 2670, 4000, 2300, or 2200.
Another interesting fact I have spotted: no matter how big the payload is (compare 1050 with 2300), the time is almost always the same, approx. 9-10 s. So my assumption is that something on the server side has to get ready, and that is what takes the time.
EDIT:
I first thought it could be Java/Tomcat having to load some resources and then, after about 3-5 minutes, unloading them again for some reason. But as I wrote above, even if I change the URL query string (which causes a different SQL query during execution), the first load is still slow. Could the issue be related to the DB? I am using MySQL.
EDIT2:
The reason it is faster is most likely server-side caching. I am 95% sure, because I ran a couple of experiments, such as trying it on two computers (no, it is not browser caching). Then I realized that if it is fast only when cached, what takes so long is the actual .executeQuery line of code. It takes 10 s even though the exact same query through a client such as MySQL Workbench takes only 0.285 s. Therefore I am going to try a PreparedStatement and experiment further.
The content is 200 kB in size. You need to optimize both the frontend and the backend.
The main problem is in your backend. Check why it takes so long.
On the frontend you can enable gzip compression; check the Tomcat documentation for how to do it. This will reduce the download size.
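For reference, compression is typically enabled on the HTTP connector in conf/server.xml. A sketch only, not your exact config; attribute names vary slightly by Tomcat version (older versions spell it compressableMimeType):

```xml
<!-- conf/server.xml: enable gzip compression on the HTTP connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/xml,text/css,application/javascript,application/json" />
```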
The second request is probably faster for you due to browser caching.
Check the response code in Firebug: a 304 response means it was served from cache, while a 200 response means it was loaded from the server.
ORM systems (like Hibernate) can cause a significantly slow startup if you forget to turn off the initial model-schema synchronization option in the production environment. For example, with Spring -> JPA -> Hibernate, in application-[server].yml:

spring:
  jpa:
    hibernate:
      ddl-auto: update

If your model hasn't changed, switch "update" to "none" in the production environment for a faster startup.
Production Environment: Tomcat 9 on CentOS 7 x64, mysql/mariadb 5.5x
Testing Environment: Tomcat 9 in Eclipse on Windows 7 x64, mysql 5.5x
I'm a Tomcat newbie looking for the best method to have server-wide variables readable/writable from all webapps, for things like MaintenanceMode (on/off), MaintenanceMessage, etc.
These are the variable properties I'm looking for:
Readable/writable from all instances of all Java servlets in all webapps.
Its value persists after an OS, Tomcat, or webapp restart.
I must be able to change its value from one webapp and have all other webapps recognize the change quickly, ideally without restarting.
Ideally, I wouldn't want to read the variable from disk on each server request, in case the server is being DDoSed or something.
Ideally the solution is OS independent.
If it's a disk file solution please recommend a place for me to store the file.
I'm new to Tomcat, so some detail in any answers (or links to details) would be appreciated. I'll probably be using a servlet in its own 'admin' webapp that's only accessible through SSH tunneling, etc., to set the variables. The public webapps would then respond to any changes, e.g. by showing a maintenance message while I back up databases. I could also possibly change the variables using Linux commands if needed.
If I stored the server variables in a database, that could be fine, but I wouldn't want to read the DB on every single request, and when I change a variable I'd once again have to notify every webapp/servlet that something changed so it can re-read the DB.
Thanks for any help.
I implemented this recently in the form of "system messages", some of which are for maintenance, but the effect is the same. We had some additional "requirements" which helped us shape the solution. These may or may not match your expectations/desires:
Multiple-server coordination
Immediate synchronization was not necessary
We used our relational database for the actual data storage: a single table with the "system messages" and a few other fields, such as when a message becomes effective (not_before DATETIME) and when it stops being effective (not_after DATETIME).
On startup, the application reads the system messages table to find the currently-valid messages and stores them in application scope, including their validity dates. Whenever we want to show them on the screen, we check each item in memory and evict any that have expired. This is pretty fast.
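The in-memory part described above can be sketched roughly like this. This is a hypothetical illustration, not the original code; class and field names are made up, and the database read is represented by whatever list you pass to reload():

```java
import java.time.Instant;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.stream.Collectors;

// Hypothetical application-scoped cache of system messages with
// validity windows (not_before / not_after), refreshed periodically.
public class SystemMessageCache {
    public static final class Message {
        public final String text;
        public final Instant notBefore; // message becomes effective
        public final Instant notAfter;  // message stops being effective

        public Message(String text, Instant notBefore, Instant notAfter) {
            this.text = text;
            this.notBefore = notBefore;
            this.notAfter = notAfter;
        }

        public boolean isActiveAt(Instant now) {
            return !now.isBefore(notBefore) && !now.isAfter(notAfter);
        }
    }

    // Replaced wholesale on startup and on each periodic reload.
    private volatile List<Message> messages = new CopyOnWriteArrayList<>();

    // Called on startup and from the reload servlet, with rows read
    // from the system-messages table.
    public void reload(List<Message> fromDatabase) {
        messages = new CopyOnWriteArrayList<>(fromDatabase);
    }

    // Called whenever messages are rendered: filters out entries
    // that are expired or not yet effective.
    public List<Message> activeMessages(Instant now) {
        return messages.stream()
                .filter(m -> m.isActiveAt(now))
                .collect(Collectors.toList());
    }
}
```

Storing the cache in application scope (e.g. as a ServletContext attribute) gives every servlet in the webapp cheap, disk-free reads between reloads.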
Every X minutes (e.g. from cron), we make a request to a special servlet (on each server) that reloads the system messages from the database. We protect that servlet by only allowing requests from certain IPs, so DoS is not an issue.
Not only can we add a system message from any server in the cluster, but we can also add one by writing directly to the database. That may be advantageous if you don't always want to use your application to generate these events.
You can set X as low as 1 (minute) if you are using cron. If you use some other kind of scheduler, you can probably get updates even more often. I would seriously reconsider your requirement of "immediate" recognition, because that requires a system with much worse performance.
If you can guarantee that only your application can generate these changes, and you have a list of all other servers in the cluster somewhere, you could theoretically ping them all with the new message (or notify them that a new message exists and it's time to update their message list), but that kind of thing is better done with an orchestration tool such as Kubernetes, and we are getting a little out of scope, IMO.
I have created a Java application that inserts data into a MySQL database. Under some conditions, I need to send some of this data via email, using a Java application that I will also write.
My problem is that I am not sure how I should implement this.
From what I understand, I could use a UDF inside MySQL to execute a Java application, though there are many opinions against doing that. Besides, both the database and the mail-client application will reside in a VM that I don't have admin access to, and I don't want to install anything that neither I nor the admin knows.
My other alternative, as far as I can think of, is to set up the mail client (or some other application) to run every minute just to check for newly inserted data. Is this a better approach? Isn't it going to use resources while doing almost nothing? At the moment the VM might not be heavily loaded, but I have no idea how many applications might end up running on the same machine.
Is there any other alternative I should consider?
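For what it's worth, the polling alternative described above can be sketched as follows. Everything here is hypothetical: fetchUnsent and send stand in for a real DAO query (e.g. SELECT ... WHERE emailed = 0) and a real mail client:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Hypothetical polling mailer: every minute, fetch rows not yet
// emailed, send each one, and (in real code) mark it as sent.
public class PollingMailer {
    private final Supplier<List<String>> fetchUnsent;
    private final Consumer<String> send;

    public PollingMailer(Supplier<List<String>> fetchUnsent, Consumer<String> send) {
        this.fetchUnsent = fetchUnsent;
        this.send = send;
    }

    // One polling pass; returns how many rows were mailed.
    public int pollOnce() {
        List<String> pending = fetchUnsent.get();
        for (String row : pending) {
            send.accept(row); // real code would also flag the row as emailed
        }
        return pending.size();
    }

    // Schedule the pass every minute; idle passes are very cheap,
    // which addresses the resource-usage concern.
    public ScheduledExecutorService start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::pollOnce, 0, 1, TimeUnit.MINUTES);
        return scheduler;
    }
}
```

A single-threaded scheduler that wakes once a minute and runs one cheap indexed query is usually negligible load, even on a shared VM.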
You also need to consider internet speed, database server load, and system resources. If you have enough memory and the insert load on the database is not too high, you can take the cron approach: on Linux, call a script every 5 minutes that performs the following:
1. Fetch unread emails as files.
2. Run a shell script to read the needed data.
3. Write the data to MySQL.
4. Delete the email.
If the system is heavily loaded, you may need to run this only once or twice an hour, or otherwise vary the interval.
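For example, the cron entry driving the steps above might look like this (the path and script name are illustrative):

```
# run the mail-processing script every 5 minutes
*/5 * * * * /usr/local/bin/process_mail.sh
```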
We are facing two general issues in our production environment and would like recommendations.
We are using a cluster of nodes running JBoss, with an Apache web server for load balancing.
The two problems are:
All the server nodes normally work fine; however, suddenly, within a minute, one of the nodes reaches the maximum DB connection limit (jumping, say, from 30 to 100 connections) and starts throwing errors ("Unable to get managed connection").
I have seen that sometimes we simultaneously get a lot of identical web service calls from one user, for instance more than 1000 calls of the same service by the same user within a minute. It looks like the user may be stuck in some kind of repetitive loop in the browser (not sure).
Regarding the first problem: I have verified that we don't have a connection leak. Mostly we found that the service response time becomes very high, yet the load balancer keeps sending equal traffic to each node, so that node can get exhausted. One solution I was considering is to time out any service call that takes more than a certain time, but I am not sure whether that is a good idea. Any thoughts on what recommendations/practices are available to tackle such a situation?
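The timeout idea being considered could be sketched like this. This is a generic illustration, not your service code; a real deployment would more likely set the timeout in the connection pool or HTTP client config, but the mechanism is the same:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical wrapper: run a service call with a hard timeout so one
// slow call cannot hold a DB connection indefinitely.
public class TimedCall {
    public static <T> T callWithTimeout(Callable<T> call, long timeoutMillis)
            throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<T> future = executor.submit(call);
            try {
                return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                future.cancel(true); // interrupt the slow call
                throw e;            // caller decides how to fail fast
            }
        } finally {
            executor.shutdownNow();
        }
    }
}
```

Failing fast like this frees the connection back to the pool instead of letting queued slow calls exhaust it; the trade-off is that legitimate slow requests fail too, so the threshold needs care.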
Regarding the second problem: I think the application should not have to defend against or check for such a large number of service calls; it should be handled at a higher level, like the firewall or the web server. However, I would like to hear your thoughts on this.
I hope my question makes sense, but if it doesn't, please feel free to ask.
Application - Struts 1.2, Tomcat 6
The Action class calls a service which, through a DAO, executes a query and returns the results.
The query is not long-running and returns results in seconds when run directly against the database (through an SQL client, say SQL Developer). But when the user goes through the application front end and the same query is run in the background by the application, the system hangs and the response either times out or takes a very long time.
The issue is specific to one particular screen, which implies that app-server-to-DB-server connectivity is OK.
Is there a way to enable debug logging for Tomcat/Struts without any code change, to identify which of the two scenarios below (or any other) is occurring?
The query is taking the time.
The response is not being sent back to the browser.
P.S. - Debugging or changing the code to add logging is not an immediate option.
Something to look at is a Java profiler. The one that I've used and liked is YourKit.
I'm having a problem with GAE backends and task queues. Basically, after the backend makes URL fetch calls for a few minutes, tasks start getting stuck without even starting. The enforced rate drops to 0.10/s and the queue hardly moves. It only starts moving again if I restart the backend instance, but it soon reaches the 0.10/s enforced rate again.
I'm currently working on a GAE project that requires the app to traverse around 70000 URLs, retrieve the HTML, check for values in the HTML, and update some records in the datastore based on the values in the HTML.
The implementation involves a cron job that takes around 300 URLs every minute, splits them into groups of 10, and assigns each group to a different task in the task queue. Each task goes through its 10 URLs, processing the contents.
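The batching step described above amounts to partitioning a list into fixed-size chunks, roughly like this (a generic helper, not the poster's code; the task-queue enqueue itself is omitted):

```java
import java.util.ArrayList;
import java.util.List;

// Generic helper: split a list (e.g. of URLs) into fixed-size batches,
// one batch per task-queue task.
public class Batcher {
    public static <T> List<List<T>> partition(List<T> items, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            // copy the sublist so each batch is independent of the source list
            batches.add(new ArrayList<>(items.subList(i, Math.min(i + size, items.size()))));
        }
        return batches;
    }
}
```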
I'm running a B4 static backend instance. Task queue rate is at 5/s. Max concurrent requests is 8.
I tried adding task aging as well but it didn't help.
---- October 19, 2013 ----
Edit: I tried commenting out a lot of code and narrowed the problem down to URL fetching. Apparently, when I remove the URL fetching, things run very smoothly. Still, I'm not sure how to fix this, since I'm pretty sure I closed all connection-related resources.
You may be reaching the quota limit for URL Fetch, which is 3,000 API calls/minute or 657,000 API calls/day.