Static resources (images): Tomcat or Nginx server - Java

What is the right way? I have an nginx server (listening on port 80) which proxies to a Tomcat server (on port 8080, for example). I need to serve static images in my Spring app. I have come up with something like this:
1) map the images on the Tomcat server (alias or Context docBase)
2) map the static content on the nginx server
3) create a separate subdomain, for example images.mysite.com, and work with that.
Also, which of these is better?

There is no universal right way.
If you have a low traffic site: Use what you can set up quickest. Don't worry, if you are running into performance problems, they won't be due to this decision but due to other aspects of your solution.
If you have a high traffic site: Start with the easiest setup (same as before). Then measure where your performance problems are. Again, they most likely won't be due to the delivery of static content, but whatever your biggest performance problem is: fix it, rinse, repeat. If static content delivery accounts for a 0.5% performance improvement while another factor accounts for 20%, guess where you should invest your time (hint: it's not static content delivery).
In this regard I'm totally with Klaus Groenbaek's comment: building a complex system that's harder to maintain, without having some justification (measurements) showing an advantage worth the complexity, is preposterous.
Unless you identify an actual performance bottleneck in your own system, optimize for maintainability, build the simplest possible system.

Performance:
Nginx is a great web server and at the moment is the best at serving static content. You can refer to the benchmarks available online, or benchmark it yourself.
Subdomain/Separate domain for static content:
By using a separate (sub)domain for static content, you eliminate cookies on static requests, reduce HTTP request/response sizes, and get better performance.
You will also increase the number of parallel downloads that the browser can perform. This will reduce your page load time.
This will increase your costs if you have SSL enabled: you need a certificate for your sub/separate domain too.
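For option 2 (nginx serving the static files itself), a minimal sketch of the relevant server block; the filesystem path and the Tomcat port are assumptions for illustration:

server {
    listen 80;
    server_name mysite.com;

    # Serve images directly from disk (path is an assumed example)
    location /images/ {
        root /var/www/static;    # files live under /var/www/static/images/
        expires 30d;             # allow browsers to cache aggressively
        access_log off;
    }

    # Proxy everything else to Tomcat
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

With this layout, option 3 (images.mysite.com) is just a second server block with its own server_name that contains only the static location.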

Related

How to run a very long request that uses high memory in Tomcat?

I have a Tomcat server.
On the Tomcat server, I handle a RESTful request that calls a very memory-intensive operation, which can last 15 minutes and can eventually crash Tomcat.
How can I run this request:
1. without crashing Tomcat?
2. without exceeding the 3-minute limit on RESTful requests?
Thank you.
Try another architectural approach.
REST is designed to be stateless, so you have to introduce status tracking yourself. I suggest you implement:
- the long-running task as a batch job in the background (as #kamran-ghiasvand suggests),
- a submit request that starts the batch and returns a unique ID,
- a status request that reports the status of the task (auto-refresh the screen every 5s, for example). You can do that on an html/page basis or via ajax.
To give you an idea of what you might need on the backend, I quote our PaymentService interface below.
import java.util.Date;
import java.util.List;

public interface PaymentService {
    // Synchronous variant: blocks until the execution has been created
    PnExecution createPaymentExecution(List<Period> periods, Date calculationDate) throws PnValidationException;
    // Asynchronous variant: returns the ID of the background task immediately
    Long createPaymentExecutionAsync(List<Period> periods, Date calculationDate);
    PnExecution simulatePaymentExecution(Period period, Date calculationDate) throws PnValidationException;
    void deletePaymentExecution(Long pnExecutionId, AsyncTaskListener<?, ?> listener);
    Long deletePaymentExecutionAsync(Long pnExecutionId);
    void removePaymentNotificationFromPaymentExecution(Long pnExecutionId, Pn paymentNotification);
}
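As a hedged sketch of the submit/status pattern (all class, endpoint, and method names here are made up for illustration, not taken from the question):

import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.web.bind.annotation.*;

@RestController
public class LongTaskController {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<Long, Future<?>> tasks = new ConcurrentHashMap<>();
    private final AtomicLong nextId = new AtomicLong();

    // Submit request: starts the batch and returns a unique ID immediately
    @PostMapping("/tasks")
    public Long submit() {
        long id = nextId.incrementAndGet();
        tasks.put(id, pool.submit(this::runLongTask));
        return id;
    }

    // Status request: poll this every few seconds from the client
    @GetMapping("/tasks/{id}")
    public String status(@PathVariable Long id) {
        Future<?> f = tasks.get(id);
        if (f == null) return "UNKNOWN";
        return f.isDone() ? "DONE" : "RUNNING";
    }

    private void runLongTask() {
        // the 15-minute, memory-intensive job goes here
    }
}

The HTTP request returns as soon as the task is queued, so neither the 3-minute limit nor a Tomcat worker thread is tied up for the duration of the job.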
About performance:
Try to find the memory consumers, and try to sequentialize the problem: cut it into steps. Make sure you have not created memory leaks by keeping unused objects referenced. The last resort would be concurrency (of independent tasks) or parallelism (processing of similar tasks). But most of these problems are the result of a too straightforward architectural approach.
A Tomcat crash has nothing to do with request processing time, though; it might occur due to JVM heap memory overflow (or thousands of other reasons). You should determine the reason for the crash by investigating the Tomcat logs carefully. If the reason is lack of memory, you can allocate more memory to the JVM when starting Tomcat using the '-Xmx' flag. For example, you can add the following line to your setenv.sh to allocate 2GB of RAM to Tomcat:
CATALINA_OPTS="-Xmx2048m"
In terms of the request timeout, there are also many factors that play a role here: for example, the connectionTimeout of your HTTP connector (see server.xml), and network, browser, or web-client limitations, among others.
Generally speaking, it's very bad practice to make such a long request synchronously via a RESTful call. I suggest you consider other approaches, such as websockets or push notifications, for announcing to the user that his time-consuming request has completed on the server side.
Basically what you are asking boils down to this:
For some task running on Tomcat, which I have not told you anything about, how do I make it run faster, use less memory, and not crash?
In the general case, you need to analyze your code to work out why it is taking so long and using so much memory. Then you need to modify or rewrite it as required to reduce memory utilization and improve its efficiency.
I don't think we can offer sensible advice into how to make the request faster, etc without more details. For example, the advice that some people have been offering to split the request into smaller requests, or perform the large request asynchronously won't necessarily help. You should not try these ideas without first understanding what the real problem is.
It is also possible that your task is taking too long and crashing Tomcat for a specific reason:
It is possible that the request's use of (too much) memory is actually causing the requests to take too long. If a JVM is running out of heap memory, it will spend more and more time running the GC. Ultimately it will fail with an OutOfMemoryError.
The excessive memory use could be related to the size of the task that the request is performing.
The excessive memory use could be caused by a bug (memory leak) in your code or some 3rd party library that you are using.
Depending on the above, the problem could be solved by:
increasing Tomcat's heap size,
fixing the memory leak, or
limiting the size of the "problem" that the request is trying to solve.
It is also possible that you just have a bug in your code; e.g. an infinite loop.
In summary, you have not provided enough information to allow a proper diagnosis. The best we can do is suggest possible causes. Guesses, really.

Tomcat - scaling on a single server

I have an embedded Tomcat application running on an Amazon EC2 instance. The site was getting an increased amount of traffic, so I upgraded to a much larger instance. However, with the same amount of traffic and a much larger server, the slowdown was still there. I increased maxThreads and the -Xmx/-Xms settings, but that didn't help much.
The server resources used are small on both the web server and the database server (RDS) (less than 10% memory and less than 20% CPU).
Is there anything that can be done to speed up Tomcat? Or should I bite the bullet and use multiple Tomcat instances and a load balancer?
EDIT: Just to clarify, nothing has changed in the application, just the traffic increased (almost doubled). My assumption was that (more than) doubling the resources (web server and db) should be adequate. I guess it's not that simple.
It's better if you don't do anything right now. You obviously don't know what's actually slowing down the application; that's why you were surprised when you upgraded the server and it had no effect.
Instead of randomly doing things like a monkey with a typewriter, hoping that something will help, profile your application (and run load tests against it) and see which actions are the "heaviest". Then decide how to fix it, whether with code optimization, architectural changes, load balancing, or any other solution.
Don't guess, know.
stepanian, even though I'm not sure exactly what you mean by 'slow down', I had a similar case a few months back, when a huge spike of traffic (some viral campaign) caused our Tomcat to behave really strangely, even though the bottleneck was NOT the disk, CPU, or memory.
In our case, changing the Connector configuration basically solved the problem, especially understanding these settings:
acceptCount
acceptorThreadCount
maxConnections
maxThreads
Hope this helps! : )
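For reference, these attributes live on the Connector element in Tomcat's server.xml; the values below are placeholders to show the shape, not recommendations:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           acceptCount="200"
           acceptorThreadCount="2"
           maxConnections="10000"
           maxThreads="400" />

Which values are right depends entirely on your traffic pattern, so measure before and after changing them.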

Server loading static resources too slowly

Server loading static resources too slowly - what server optimizations can I make?
Images and CSS content are loading way too slowly; relatively small files are taking over 1 second each to load. What are some optimizations I can make server-side to reduce those load times (other than increasing server processing power/network speed)?
The server is WebSphere.
There are plenty of possibilities (sorted by importance):
Set proper Expires and Last-Modified headers for all static resources. This can reduce the number of requests for static resources dramatically, and thus the server load; the fastest request is the one that is never made. (A filter sketch follows right after this list.)
Serve static resources from a separate cookie-less (sub-)domain.
Use CSS sprites and combine often-used graphics like logos and icons into one single large image.
Combine all your CSS into a single file or just a few files. This reduces the overall request count and improves frontend performance, too.
Optimize your image sizes losslessly with tools like PngOut.
Pre-gzip your css (and js) files and serve them directly from memory; do not read them from hard disk and compress on the fly. (A sketch of this appears a little further below.)
Use a library like jawr if you do not want to do all these things on your own. jawr can handle many of these things for you without negatively impacting your development.
Let an Apache webserver serve this static content for you.
Use something like mod_proxy that relies on your caching headers to serve the content for you. Apache is faster at serving static resources and, more importantly, this can be done from another system in front of your WebSphere server.
Use a CDN for serving your static content.
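Picking up the first item: a minimal sketch of a servlet filter that stamps long-lived caching headers on static resources. The URL pattern, lifetime, and class name are assumptions for illustration:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Map this filter to /static/* (or similar) in web.xml
public class CacheHeaderFilter implements Filter {
    private static final long ONE_WEEK_MS = 7L * 24 * 60 * 60 * 1000;

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Tell browsers and proxies they may cache this resource for a week
        response.setDateHeader("Expires", System.currentTimeMillis() + ONE_WEEK_MS);
        response.setHeader("Cache-Control", "public, max-age=604800");
        chain.doFilter(req, res);
    }

    public void destroy() {}
}

Once the headers are in place, repeat visitors skip those requests entirely.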
Is it possible to wrap these file resources in a .jar file, then use the Java Zip and/or Java Jar APIs to read them?
If you employ a gzip filter to compress the output of static resources, make sure to exclude images: they are already compressed, and gzipping them on the server side before responding only slows them down.
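Picking up the pre-gzip item from the list above: a rough sketch of serving pre-compressed CSS from memory. The paths, content-type handling, and class name are assumptions; real code would also check the Accept-Encoding header and guard against path traversal:

import java.io.*;
import java.nio.file.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.zip.GZIPOutputStream;
import javax.servlet.http.*;

public class PreGzipServlet extends HttpServlet {
    // Gzipped bytes, compressed once and then kept in memory
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String name = req.getPathInfo();                 // e.g. "/app.css"
        byte[] gz = cache.computeIfAbsent(name, this::gzipFromDisk);
        resp.setContentType("text/css");
        resp.setHeader("Content-Encoding", "gzip");
        resp.getOutputStream().write(gz);
    }

    private byte[] gzipFromDisk(String name) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream out = new GZIPOutputStream(bos)) {
                out.write(Files.readAllBytes(Paths.get("/var/www/static", name)));
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}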
You may want to read this: Using IBM HTTP Server diagnostic capabilities with WebSphere,
and this: WebSphere tuning for the impatient: How to get 80% of the performance improvement with 20% of the effort.
Make sure keep-alive is on and functioning; it reduces the overall network overhead required.
Also, make sure you have enough memory allocated to the VM running the server. Using GC stats to log memory usage and GC activity is a good idea, e.g. add these flags to the Java VM:
-verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails

How to implement an HTTP transfer of huge binary files from the file generator to the server (Java)?

Simply put, our system consists of a Server and an Agent. The Agent generates a huge binary file, which may need to be transferred to the Server.
Given:
The system must cope with files up to 1G now, which is likely to grow to 10G in 2 years
The transfer must be over HTTP, because other ports may be closed.
This is not a file sharing system - the Agent just needs to push the file to the Server.
Both the Agent and the Server are written in Java.
The binary file may contain sensitive information, so the transfer must be secure.
I am looking for techniques and libraries to help me with transferring huge files. Some of the topics which I am aware of are:
Compression Which one to choose? We do not limit ourselves to gzip or deflate just because they are the most popular for HTTP traffic; if there is some unusual compression scheme which yields better results for our task - so be it.
Splitting Obviously, the file needs to be split and transferred in several parallel sessions.
Background Transferring a huge file takes a long time. Does it affect the solution, if at all?
Security Is HTTPS the way to go? Or should we take another approach, given the volume of data?
Off-the-shelf I am fully prepared to code it myself (should be fun), but I cannot avoid the question of whether there are any off-the-shelf solutions satisfying my demands.
Has anyone encountered this problem in their products and how was it dealt with?
Edit 1
Some may question the choice of HTTP as the transfer protocol. The thing is that the Server and the Agent may be quite remote from each other, even if located in the same corporate network. We have already faced numerous issues related to the fact that customers keep only HTTP ports open on the nodes in their corporate networks. That does not leave us much choice but to use HTTP. Using FTP is fine, but it would have to be tunneled through HTTP - does that mean we still get all the benefits of FTP, or will it be crippled to the point where other alternatives are more viable? I do not know.
Edit 2
Correction - HTTPS is always open and sometimes (but not always) HTTP is open as well. But that is it.
You can use any protocol on port 80. Using HTTP is a good choice, but you don't have to use it.
Compression Which one to choose? We do not limit ourselves to gzip or deflate just because they are the most popular for HTTP traffic; if there is some unusual compression scheme which yields better results for our task - so be it.
The best compression depends on the content. I would use Deflater for simplicity; however, BZIP2 can give better results (it requires a library).
For your file type, you may find that applying some compression specific to that type first can make the data sent smaller.
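As a rough sketch of the shape this takes on the Agent side: a streaming upload over HTTPS with on-the-fly compression, so the 1G+ file is never buffered in memory. The URL is an assumption and error handling is minimal:

import java.io.*;
import java.net.URL;
import java.util.zip.GZIPOutputStream;
import javax.net.ssl.HttpsURLConnection;

public class AgentUpload {
    public static void upload(File file) throws IOException {
        URL url = new URL("https://server.example.com/upload");   // assumed endpoint
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(64 * 1024);   // stream; never hold the whole file in memory
        conn.setRequestProperty("Content-Encoding", "gzip");
        try (InputStream in = new BufferedInputStream(new FileInputStream(file));
             OutputStream out = new GZIPOutputStream(conn.getOutputStream())) {
            byte[] buf = new byte[64 * 1024];
            for (int n; (n = in.read(buf)) > 0; ) {
                out.write(buf, 0, n);              // compress and send as we read
            }
        }
        if (conn.getResponseCode() != 200) {
            throw new IOException("Upload failed: HTTP " + conn.getResponseCode());
        }
    }
}

Restartability (the Background point below) would sit on top of this: the Agent asks the Server how many bytes it already has and resumes from that offset.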
Splitting Obviously, the file needs to be split and transferred in several parallel sessions.
This is not obvious to me. Downloading data in parallel improves performance by grabbing more of the available bandwidth (i.e. squeezing out other users of the same bandwidth). This may be undesirable or even pointless (if there are no other users).
Background Transferring a huge file takes a long time. Does it affect the solution, if at all?
You will want the ability to re-start the download at any point.
Security Is HTTPS the way to go? Or should we take another approach, given the volume of data?
I am sure it's fine, regardless of the volume of data.
Off-the-shelf I am fully prepared to code it myself (should be fun), but I cannot avoid the question of whether there are any off-the-shelf solutions satisfying my demands.
I would try using existing web servers to see if they are up to the job. I would be surprised if there isn't a free web server which does all the above.
Here is a selection http://www.java-sources.net/open-source/web-servers

Sporadic behavior by the machines under stress

We are doing some Java stress runs (involving network IO). Initially things are all fine and the system responds very fast (average latency in the test: 2 ms). But hours later, when I redo the same test, I observe that performance goes down (20-60 ms). It's the same JAR files, the same JVM, and the same LAN over which the stress test is running. I don't understand the reason for this behavior.
The LAN is 1 Gbps, and given the stress requirements I'm sure we are not using all of it.
So my questions:
Can it be because of some switches in the LANs?
Do the machines slow down after some time? (The machines were restarted, say, about 6 months back, well before the stress tests started; they are RHEL5, 64-bit quad-core Xeon.)
What is the general way to debug such issues?
A few questions...
How much of the environment is under your control, and are you putting any measures in place to ensure it's consistent for each run? I.e., are you sharing the network with other systems, and is the machine you're using dedicated to your stress testing?
The way I'd look at this is to start gathering details on what your machine and code are up to. That means using perfmon (Windows) or sar (Unix) to find out what the OS and hardware are doing, and attaching a profiler to make sure your code is doing the same thing each run and to help pinpoint where the bottleneck is occurring from a code perspective.
Nothing terribly detailed, but I hope it helps get you started.
The general way is "measure everything". This, in particular, might mean:
Ensure the time on all servers is the same (use ntp or something similar).
Measure how long it takes to generate the request (what if the request generator has a bug?).
Measure when the request leaves the client machine(s), or at least how long the I/O takes. Sometimes it is enough to know the average time over many requests.
Measure when the request arrives.
Measure how long it takes to generate a response.
Measure how long it takes to send the response.
You can probably start with the 5th item, as this is (you believe) your critical chain. But it is best to log as much as you can, since, by your own account, it takes hours to produce different results.
If you don't want to modify your code, look for places where you can sniff data without intervening (e.g. define a servlet filter in your web.xml).
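Following that last suggestion, a minimal sketch of such a filter; the class name and logging target are assumptions, and in web.xml you would map it to the URL patterns you care about:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

// Logs per-request latency without touching application code
public class TimingFilter implements Filter {
    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.nanoTime();
        try {
            chain.doFilter(req, res);
        } finally {
            long micros = (System.nanoTime() - start) / 1000;
            String uri = ((HttpServletRequest) req).getRequestURI();
            System.out.println(uri + " took " + micros + " us");  // replace with a real logger
        }
    }

    public void destroy() {}
}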
