Installed a Tomcat environment on my test server (Fedora 26). Everything is from stock packages. I've also installed and set up an Nginx reverse proxy in front. tomcat-users.xml is set up and I can log in to the app manager as expected.
Now, when I try to deploy a WAR to it, I get a critical failure in my Nginx log:
2017/09/25 15:12:21 [crit] 13878#0: *36 open() "/var/lib/nginx/tmp/client_body/000000XXXX" failed (13: Permission denied), client: 200.x.x.x, server: some-sandbox.com, request: "POST /manager/html/upload?org.apache.catalina.filters.CSRF_NONCE=XXXXXXXxxxx HTTP/1.1", host: "some-sandbox.com", referrer: "https://some-sandbox.com/manager/html/upload?org.apache.catalina.filters.CSRF_NONCE=XXXXXXXxxxx
Nginx then returns a 500 Internal Server Error to the browser.
What could I have gotten wrong? Any suggestions on how to tackle this?
Thanks.
Apparently there is some permission issue with the temporary upload folder /var/lib/nginx/tmp. I've made sure that the whole path is owned by the correct system user, but the issue persists.
So to circumvent the issue, I decided to configure Nginx to skip buffering the client body entirely. For my purposes, there is no practical value in buffering the request before proxying it.
Nginx 1.7.11 introduced a new proxy_request_buffering directive. Setting it to off disables the buffering, so any permission issue on the temp directory no longer affects uploads.
So my server section has this:
location / {
    proxy_request_buffering off;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080/;
}
You can also check the file's permissions for the user that nginx runs as.
Related
I have an Nginx proxy pass that redirects to the HTTPS address of my Wildfly deployment.
If I open my URL via http:// the page loads normally, but if I open it via https:// I get these messages in the browser developer tools:
Loading the module from
"https://www.planyourplaylist.com/VAADIN/build/vaadin-bundle-b84b24669ab9c1964b96.cache.js"
was blocked due to a disallowed MIME type ("text/html").
Uncaught (in promise) TypeError: ServiceWorker script at https://www.planyourplaylist.com/sw.js for scope https://www.planyourplaylist.com/ encountered an error during installation.
My widlfly.conf for nginx looks like this:
upstream wildfly {
    # List of Wildfly application servers
    server <ip-adress>;
}

server {
    listen 80;
    server_name <ip-adress>;

    location / {
        #return 301 https://<ip-adress>:8443/;
        proxy_pass http://<ip-adress>:8080/;
    }
    location /management {
        proxy_pass http://<ip-adress>:9990/management;
    }
    location /console {
        proxy_pass http://<ip-adress>:9990/console;
    }
    location /logout {
        proxy_pass http://<ip-adress>:9990/logout;
    }
    location /error {
        proxy_pass http://<ip-adress>:9990;
    }
}
server {
    listen 443 ssl;
    server_name <ip-adress>;

    ssl_certificate ssl_cert/planyourplaylist_cert.cer;
    ssl_certificate_key ssl_cert/planyourplaylist_private.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # when the user requests /
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://<ip-adress>:8443/;
    }
    location ~ \.(css|js|png|gif|jpeg|jpg|swf|ico|woff) {
        root /opt/wildfly/standalone/deployments/planyourplaylist.war;
        expires 360d;
    }
}
Thanks for your help. :)
Thanks to @SimonMartinelli for the link to the post.
The problem was actually fixed by including the mime.types file in nginx.
What I unfortunately don't understand is: I had already included the mime.types file, but in nginx.conf, as described in the official example:
https://www.nginx.com/resources/wiki/start/topics/examples/full/
Hence the question: what is the difference between including the mime.types file in nginx.conf under http { ... } and in widlfly.conf under server { ... }?
In my understanding, the file should already be included when nginx.conf is loaded.
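For reference, the http-level placement from the official example looks roughly like this (paths are the stock ones; a sketch, not your exact config). An include at the http level applies to every server block, including vhost files pulled in from conf.d, so if only the server-level include fixed it, the http-level include was probably not in effect in the configuration that was actually running:

```nginx
# Option 1: in nginx.conf - the types mapping applies to all virtual hosts
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    # vhost files such as widlfly.conf are included inside http { }
    include /etc/nginx/conf.d/*.conf;
}

# Option 2: the same include placed per-vhost, e.g. in widlfly.conf
server {
    listen 443 ssl;
    include /etc/nginx/mime.types;
}
```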
Thanks for your Help. :)
I need to debug a remote java application running behind an nginx reverse proxy.
I get the following error:
Failed to attach to remote debuggee VM. Reason: java.io.IOException: Received invalid handshake
What should be the right nginx configuration to achieve this?
I have successfully attached the VS Code Java debugger to the remote Java application by targeting the app's host directly.
The resolver is 127.0.0.11 because I'm using the nginx Docker image.
My nginx config file app.xyz.com.conf in conf.d:
server {
    listen 1043;
    resolver 127.0.0.11 valid=30s;
    server_name app.xyz.com;
    include /etc/nginx/mime.types;

    location / {
        proxy_buffer_size 8k;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        set $upstream "http://java-app:1043";
        proxy_pass $upstream;
        client_max_body_size 10M;
    }
}
Thanks in advance!
You should proxy the debug port as raw TCP instead of HTTP. The JDWP protocol used for Java debugging is not HTTP, so an http-level proxy_pass breaks the debugger's handshake.
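A rough sketch of what that could look like, reusing the container name java-app and port 1043 from the question (the existing http server block listening on 1043 would have to go, to free the port):

```nginx
# In nginx.conf, at the top level - a sibling of the http { } block, not inside it
stream {
    server {
        listen 1043;               # port the debugger (e.g. VS Code) connects to
        proxy_pass java-app:1043;  # forward raw TCP to the JVM's JDWP agent
    }
}
```

The stream module speaks no HTTP at all, so headers like X-Forwarded-For do not apply here; it simply shuttles bytes between the debugger and the JVM.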
I have a Docker container with a Spring Boot microservice.
I also have a Docker container with jwilder/nginx-proxy, configured with SSL and working fine. The idea is to proxy with SSL.
But when I try to call the Spring Boot microservice I get the following error:
Is it necessary to do the SSL configuration in the Spring Boot app too?
Like this?
server.port: 443
server.ssl.key-store: keystore.p12
server.ssl.key-store-password: mypassword
server.ssl.keyStoreType: PKCS12
server.ssl.keyAlias: tomcat
The default.conf Nginx file (autogenerated by the Nginx Docker image):
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    ''      close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https   on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
resolver 10.12.149.2;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 443 ssl http2;
    access_log /var/log/nginx/access.log vhost;
    return 503;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/default.crt;
    ssl_certificate_key /etc/nginx/certs/default.key;
}
# app.example.com
upstream app.example.com {
    ## Can be connected with the "bridge" network
    # crdx_app-test
    server 172.18.0.2:80;
}
server {
    server_name app.example.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 301 https://$host$request_uri;
}
server {
    server_name app.example.com;
    listen 443 ssl http2;
    access_log /var/log/nginx/access.log vhost;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/app.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/app.example.com.key;
    add_header Strict-Transport-Security "max-age=31536000";

    location / {
        proxy_pass https://app.example.com;
    }
}
I have found the solution, and it is very simple.
Both containers, the Nginx one and the application one, must be on the same user-defined network (created with docker network create), not the default one.
For example:
Nginx Container
docker run -d -p 80:80 -p 443:443 --network="mynetwork" -v /path/to/certs:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy:alpine
App Container
docker run -p 8080 --network="mynetwork" -e VIRTUAL_HOST=app.example.com --name ssl_test -d sslapptest
This way SSL works perfectly.
"Is it necessary to do the SSL configuration in the Spring Boot app too?"
No. As long as you are sure that the network between nginx and your Spring Boot app cannot be eavesdropped on or tampered with by any other party, you don't have to enable SSL in the Spring Boot app.
"Then why is nginx giving a 503?"
I can't tell without seeing your nginx config; most likely it has some issues. This article may help you set up an nginx reverse proxy with SSL termination.
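As a rough sketch of such an SSL-terminating setup (the hostname, upstream name, and certificate paths below are placeholders, not your actual values), nginx terminates TLS and talks plain HTTP to the Spring Boot container:

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/certs/app.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/app.example.com.key;

    location / {
        # Spring Boot listens on plain HTTP inside the Docker network
        proxy_pass http://spring-app:8080;
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The X-Forwarded-Proto header lets the app know the original request was HTTPS even though the proxied hop is HTTP.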
I have two Docker containers running: an nginx one that accepts HTTP and HTTPS requests, and a Jetty one that nginx passes them to.
I have noticed an issue since I switched to Docker: I can't get the right request IP.
The Jetty application checks the request IP to ensure requests come from a particular server. In the servlet I use the following code to get the IP:
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    ...
    String remoteIpAddress = request.getRemoteAddr();
    ...
}
But I then get the IP 172.17.0.x, which seems to be a Docker-internal IP and not the expected IP of the requester.
My Docker images are run with the following params:
docker run -d --read-only --name=jetty -v /tmp -v /run/jetty jetty:9
docker run -d --read-only --name=nginx --link jetty:jetty -v /var/run -p 80:80 -p 443:443 nginx
The important part is the --link param, which links the jetty container's networking into nginx.
In the nginx config I have defined a proxy pass to jetty:
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
and
location / {
    proxy_pass http://jetty:8080;
}
My question is: how do I get the right IP from the request and not the 172.17.0.x one?
The accepted answer seems rather odd for someone using the default Docker Jetty image; we should not be changing or uncommenting things manually like that.
Here is the way to derive the Docker image that worked for me:
FROM jetty:9.4-jre11
COPY checkout/my-app/target/v.war /var/lib/jetty/webapps/v.war
RUN java -jar /usr/local/jetty/start.jar --create-startd --add-to-start=http-forwarded
The file /usr/local/jetty/etc/jetty-http-forwarded.xml, which adds org.eclipse.jetty.server.ForwardedRequestCustomizer to the configuration, is then picked up by Jetty's start configuration automatically.
If using Jetty 9, enable the ForwardedRequestCustomizer.
To do that ...
$ mkdir /path/to/jetty-base/etc
$ cp /path/to/jetty-dist/etc/jetty.xml /path/to/jetty-base/etc/
$ edit /path/to/jetty-base/etc/jetty.xml
Uncomment the lines
<Call name="addCustomizer">
    <Arg><New class="org.eclipse.jetty.server.ForwardedRequestCustomizer"/></Arg>
</Call>
Start your ${jetty.base}
$ cd /path/to/jetty-base
$ java -jar /path/to/jetty-dist/start.jar
Done
When you call request.getRemoteAddr() you get the IP of the connection's immediate peer, which in this case is the nginx container running in Docker.
The lines you added in the nginx config attach headers carrying the original IP, so the only thing you have to do is read the X-Real-IP header.
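In the servlet that means preferring the forwarded headers over getRemoteAddr(). A minimal sketch of that fallback logic (the helper class, its name, and the fallback order are my own; in a servlet you would feed it request.getHeader("X-Real-IP"), request.getHeader("X-Forwarded-For"), and request.getRemoteAddr()):

```java
// Derive the original client IP from the headers nginx sets via proxy_set_header.
public class ClientIp {
    static String clientIp(String xRealIp, String xForwardedFor, String remoteAddr) {
        if (xRealIp != null && !xRealIp.isEmpty()) {
            // X-Real-IP carries exactly one address: the direct client of nginx
            return xRealIp.trim();
        }
        if (xForwardedFor != null && !xForwardedFor.isEmpty()) {
            // X-Forwarded-For may be a chain: "client, proxy1, proxy2" - take the first
            return xForwardedFor.split(",")[0].trim();
        }
        // No forwarding headers: fall back to the direct peer (the proxy itself)
        return remoteAddr;
    }
}
```

Note that these headers are only trustworthy when the app is reachable exclusively through your own proxy; a client talking to Jetty directly could forge them.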
I'm using a free SSL cert from startssl.com for my Artifactory repo. It's all green and nice in my browsers, but of course not from Java. So I installed the certs into cacerts with this handy script:
http://www.ailis.de/~k/uploads/scripts/import-startssl
But I STILL get the:
Server access Error: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException
error. JAVA_HOME is set correctly. Any suggestions highly appreciated!
More info:
It's Ivy from SBT 0.12.2 (using paulp's script https://github.com/paulp/sbt-extras) that is barfing on the cert:
[info] Resolving net.liftmodules#omniauth_2.10;2.5-SNAPSHOT-0.7-SNAPSHOT ...
[error] Server access Error: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target url=https://repo.woodenstake.se/all/net/liftmodules/omniauth_2.10/2.5-SNAPSHOT-0.7-SNAPSHOT/maven-metadata.xml
--
Update:
The problem seems to be something totally different, not related to Java per se. Visiting the page from a browser yields a green cert and I can see it's signed by StartSSL. But even wget or curl chokes and tells me this is a self-signed cert. It seems that different certs are delivered depending on the client.
The repo is at https://repo.woodenstake.se/ - if you paste this into your browser, I would guess you get the StartSSL cert. BUT if you do wget https://repo.woodenstake.se/ you get some old self-signed cert whose origin I don't know.
-- Update to update:
So the problem is that I'm serving a few sites of the form *.woodenstake.se. I was under the impression that it would be possible to have different certs, like:
server {
    listen 443;
    server_name site1.woodenstake.se;
    client_max_body_size 512m;

    ssl on;
    ssl_certificate cert1.crt;
    ssl_certificate_key cert1.key;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://server1;
            break;
        }
    }
}
server {
    listen 443;
    server_name site2.woodenstake.se;
    client_max_body_size 512m;

    ssl on;
    ssl_certificate cert2.crt;
    ssl_certificate_key cert2.key;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://server2;
            break;
        }
    }
}
and it works just fine in all my browsers.
However, it doesn't work from wget or JDK6.
The problem was something completely different. Apparently you can't have more than one certificate on the same IP and be sure that all clients can handle it. I have a few tools on this machine, and my nginx config referenced both the StartSSL cert for this site and a self-signed (snakeoil) cert for some other sites.
My nginx supports TLS SNI:
~ $ sudo nginx -V
nginx version: nginx/0.7.65
TLS SNI support enabled
but apparently wget and Java clients don't handle it. All my browsers do, though.
Maybe it's possible to do something like:
http://library.linode.com/security/ssl-certificates/subject-alternate-names
but I don't know if it is possible to get StartSSL to sign it.
More info here:
http://www.carloscastillo.com.ar/2011/05/multiple-ssl-certificates-on-same-ip.html
Wget test on my Ubuntu-desktop:
viktor@hedefalk-i7:~$ wget https://bob.sni.velox.ch/
--2013-03-25 17:07:19-- https://bob.sni.velox.ch/
Resolving bob.sni.velox.ch (bob.sni.velox.ch)... 62.75.148.60
Connecting to bob.sni.velox.ch (bob.sni.velox.ch)|62.75.148.60|:443... connected.
ERROR: no certificate subject alternative name matches
requested host name `bob.sni.velox.ch'.
To connect to bob.sni.velox.ch insecurely, use `--no-check-certificate'
So I think the answer to my question is:
Your version of Java (or maybe all of them, though it might work in JDK7: http://docs.oracle.com/javase/7/docs/technotes/guides/security/enhancements-7.html) doesn't support TLS SNI, so nginx can't know which certificate to serve, since this is negotiated before HTTP. Buy a wildcard cert for real money from the man, or cry a river.