SSL error while connecting to the Messenger API with Let's Encrypt - Java

I am trying to connect to a service from the Messenger API, and I am getting the following error:
SSL certificate problem: unable to get local issuer certificate
I have used Let's Encrypt as the HTTPS certificate issuer.
This is my NGINX config:
server {
listen ip:80;
server_name example.com;
return 301 https://example.com$request_uri;
}
server {
server_name example.com;
listen ip:443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
#https://www.scalescale.com/tips/nginx/504-gateway-time-out-using-nginx/
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
proxy_pass http://localhost:8083/;
index index.html ;
}
}

Try this configuration format; I think it will help you.
server {
listen example.com:80;
server_name example.com;
return 301 https://example.com$request_uri;
}
server {
listen www.example.com:80;
server_name www.example.com;
return 301 https://example.com$request_uri;
}
server {
server_name example.com;
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
location / {
root /home/admin/web/example.com/public_html/index.html; #your index.html path
index index.html;
include /etc/nginx/mime.types;
try_files $uri $uri/ /index.html;
}
}
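Independent of the server-block layout, "unable to get local issuer certificate" usually means the verifying side cannot build a chain to a trusted root: either the server sends only the leaf certificate, or the machine making the outbound call to the Messenger API has no usable CA bundle. On the nginx side the relevant directives are the ones below (a minimal excerpt; both configs above already do this by using fullchain.pem rather than cert.pem):
# serve leaf + intermediate certificates; cert.pem (leaf only) causes this error for strict clients
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;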

Related

Nginx proxy-pass to HTTPS Wildfly gives blocked MIME type message and doesn't load the UI

I have an Nginx proxy pass that redirects to the HTTPS address of my Wildfly deployment.
If I call my URL via http:// the page loads normally, but if I call it with https:// I get these messages in the browser developer tools:
Loading the module of
"https://www.planyourplaylist.com/VAADIN/build/vaadin-bundle-b84b24669ab9c1964b96.cache.js"
was blocked due to an unapproved MIME type ("text/html").
Uncaught (in promise) TypeError: ServiceWorker script at https://www.planyourplaylist.com/sw.js for scope https://www.planyourplaylist.com/ encountered an error during installation.
My widlfly.conf for nginx looks like this:
upstream wildfly {
# List of Widlfly Application Servers
server <ip-adress>;
}
server {
listen 80;
server_name <ip-adress>;
location / {
#return 301 https://<ip-adress>:8443/;
proxy_pass http://<ip-adress>:8080/;
}
location /management {
proxy_pass http://<ip-adress>:9990/management;
}
location /console {
proxy_pass http://<ip-adress>:9990/console;
}
location /logout {
proxy_pass http://<ip-adress>:9990/logout;
}
location /error {
proxy_pass http://<ip-adress>:9990;
}
}
server {
listen 443 ssl ;
server_name <ip-adress>;
ssl_certificate ssl_cert/planyourplaylist_cert.cer;
ssl_certificate_key ssl_cert/planyourplaylist_private.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
# when user requests /
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass https://<ip-adress>:8443/;
}
location ~ \.(css|js|png|gif|jpeg|jpg|swf|ico|woff){
root /opt/wildfly/standalone/deployments/planyourplaylist.war;
expires 360d;
}
}
Thanks for your help. :)
Thanks to @SimonMartinelli for the link to the post.
The problem was actually fixed by including the mime.types file in nginx.
What I unfortunately don't understand is this: I already had the mime.types file included, but in the nginx.conf file, as described in the official example:
https://www.nginx.com/resources/wiki/start/topics/examples/full/
Hence the question: what is the difference between including the mime.types file in nginx.conf under http{...} and in widlfly.conf under server{...}?
In my understanding the file should already be included when the nginx.conf file is loaded.
Thanks for your help. :)
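For reference, a minimal sketch of what the fix looks like inside the server{} block of widlfly.conf (built from the config in the question; the TLS directives are abbreviated): including mime.types at the server level makes nginx serve the .js bundles with the correct application/javascript type for this vhost.
server {
    listen 443 ssl;
    server_name <ip-adress>;
    # ssl_certificate / ssl_certificate_key as in the original config

    # load the MIME type map for this virtual host
    include /etc/nginx/mime.types;

    location ~ \.(css|js|png|gif|jpeg|jpg|swf|ico|woff)$ {
        root /opt/wildfly/standalone/deployments/planyourplaylist.war;
        expires 360d;
    }

    # application traffic still goes to Wildfly
    location / {
        proxy_pass https://<ip-adress>:8443/;
    }
}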

Setup Nginx + SSL + Spring MVC + Security

I have the following installation: Nginx + self-made SSL cert + Wildfly, all running on the same host. On Wildfly I have a Spring MVC app with context path /myapp and Spring Security.
The app works perfectly when accessed at http://192.168.13.13:8080/myapp (all pages, redirects, logins, etc.).
Now I want to access it through Nginx and the root context path, i.e. http://192.168.13.13/. And at this point I'm stuck.
If I do not use SSL and proxy from / to /, it works, but I have to enter http://192.168.13.13/myapp to reach http://192.168.13.13:8080/myapp.
If I set the proxy from / to /myapp, all Spring redirects break.
If I enable SSL, I can't log in to my app and the Spring redirects also break.
Could someone suggest the correct setup for my installation?
Current nginx config:
upstream wildfly {
server 127.0.0.1:8080 weight=100 max_fails=5 fail_timeout=5;
}
server {
underscores_in_headers on;
listen 80;
listen [::]:80;
server_name example.com;
return 301 https://example.com$request_uri;
}
server {
underscores_in_headers on;
listen 443 ssl;
listen [::]:443 ssl;
server_name example.com;
ssl_certificate /etc/nginx/ssl/self.crt;
ssl_certificate_key /etc/nginx/ssl/private.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:20m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_trusted_certificate /etc/nginx/ssl/self.crt;
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_pass_header X-CSRF-TOKEN;
proxy_pass http://wildfly/;
}
}
Current Spring Security config:
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.authorizeRequests()
.antMatchers("/", "/index").permitAll()
.antMatchers("/login").permitAll()
.antMatchers("/error/**").permitAll()
.antMatchers("/platform/**").authenticated()
.and().cors()
.and().formLogin().loginPage("/login").usernameParameter("username").passwordParameter("password")
.successHandler(new CustomAuthenticationSuccessHandler())
.failureHandler(customAuthenticationFailureHandler())
.and().csrf().csrfTokenRepository(new CookieCsrfTokenRepository())
.and().logout().logoutRequestMatcher(new AntPathRequestMatcher("/logout"))
.addLogoutHandler(new SecurityContextLogoutHandler())
.clearAuthentication(true)
.logoutSuccessUrl("/login?logout").deleteCookies("JSESSIONID")
.invalidateHttpSession(true)
.and().rememberMe().key("remembermekay").tokenValiditySeconds(60*60*24*14)
.and().exceptionHandling().accessDeniedPage("/error/access_denied");
}
If you need any other configs, please ask...
Finally found the solution: in Wildfly I configured a virtual host and mapped it to my application. No other actions were required; now it works as expected.
<host name="example.com" alias="example.com" default-web-module="myapp.war">
<access-log directory="${jboss.server.log.dir}/access" prefix="myapp_access_log."/>
</host>

How to debug remote Java application through nginx reverse proxy

I need to debug a remote Java application running behind an nginx reverse proxy.
I get the following error:
Failed to attach to remote debuggee VM. Reason: java.io.IOException: Received invalid handshake
What should be the right nginx configuration to achieve this?
I have successfully attached the VS Code Java debugger to the remote Java application by targeting the app's host directly.
The resolver is 127.0.0.11 because I'm using the nginx Docker image.
My nginx config file app.xyz.com.conf in conf.d:
server {
listen 1043;
resolver 127.0.0.11 valid=30s;
server_name app.xyz.com;
include /etc/nginx/mime.types;
location / {
proxy_buffer_size 8k;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
set $upstream "http://java-app:1043";
proxy_pass $upstream;
client_max_body_size 10M;
}
}
Thanks in advance!
You should proxy the debugger over a plain TCP port instead of HTTP: the JDWP handshake is not HTTP, so an http-level proxy_pass breaks it.
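A minimal sketch of what that can look like with nginx's stream module (the stream{} block goes at the top level of nginx.conf, alongside http{}, not inside it; java-app:5005 is a hypothetical container name and JDWP port, adjust to the address your JVM's -agentlib:jdwp option listens on):
stream {
    server {
        listen 5005;              # port the IDE debugger connects to on the proxy
        proxy_pass java-app:5005; # plain TCP pass-through, no HTTP handling
    }
}
With this in place the debugger attaches to the proxy host on port 5005 and the JDWP handshake is forwarded untouched.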

SSL SpringBoot App Docker container behind Nginx Proxy

I have a Docker container with a Spring Boot microservice.
I also have a Docker container with jwilder/nginx-proxy, configured with SSL and working fine. The idea is to proxy it with SSL.
But when I try to call the Spring Boot microservice I get the following error:
Is it necessary to do the SSL configuration in the Spring Boot app too?
Like this?
server.port: 443
server.ssl.key-store: keystore.p12
server.ssl.key-store-password: mypassword
server.ssl.keyStoreType: PKCS12
server.ssl.keyAlias: tomcat
default.conf nginx file (autogenerated by the nginx Docker image):
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
resolver 10.12.149.2;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 443 ssl http2;
access_log /var/log/nginx/access.log vhost;
return 503;
ssl_session_tickets off;
ssl_certificate /etc/nginx/certs/default.crt;
ssl_certificate_key /etc/nginx/certs/default.key;
}
# app.example.com
upstream app.example.com {
## Can be connect with "bridge" network
# crdx_app-test
server 172.18.0.2:80;
}
server {
server_name app.example.com;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
return 301 https://$host$request_uri;
}
server {
server_name app.example.com;
listen 443 ssl http2 ;
access_log /var/log/nginx/access.log vhost;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
ssl_prefer_server_ciphers on;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_certificate /etc/nginx/certs/app.example.com.crt;
ssl_certificate_key /etc/nginx/certs/app.example.com.key;
add_header Strict-Transport-Security "max-age=31536000";
location / {
proxy_pass https://app.example.com;
}
}
I have found the solution and it is very simple.
Both containers, the nginx one and the application one, must be on the same Docker network, and it should not be the default one.
For example:
Nginx Container
docker run -d -p 80:80 -p 443:443 --network="mynetwork" -v /path/to/certs:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy:alpine
App Container
docker run -p 8080 --network="mynetwork" -e VIRTUAL_HOST=app.example.com --name ssl_test -d sslapptest
This way SSL works perfectly.
Is it necessary to do the SSL configuration in the Spring Boot app too?
No. As long as you are sure that the network between nginx and Spring Boot cannot be eavesdropped on or tampered with by any other party, you don't have to enable SSL in your Spring Boot app.
Then why is nginx giving a 503?
I don't know the answer until I see your nginx config. Most likely your nginx config has some issues. This article may help you set up an nginx reverse proxy with SSL termination.
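For illustration, terminating SSL at the proxy and talking plain HTTP to the container looks roughly like this (a minimal hand-written sketch, not the jwilder-generated config; spring-app is a hypothetical container name on the shared Docker network and 8080 its plain-HTTP port):
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/certs/app.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/app.example.com.key;

    location / {
        # TLS ends here; the backend stays plain HTTP
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://spring-app:8080;
    }
}
On the Spring Boot side no server.ssl.* properties are needed in this setup.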

nginx as a proxy for Websocket to send text messages

I have a websocket service running on a VM (remote address, port 8090) and I am using Nginx to proxy the connections. The nginx config is as follows:
server {
listen 80;
server_name _;
location / {
proxy_pass http://127.0.0.1:8090;
proxy_pass_request_headers on;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
}
From my local host I was able to connect to the websocket by IP, as ws://111.11.1.1/websocket.
But when I send a message from my local host or application to the remote websocket with websocket.sendTextMessage("Message"), I am not able to hit the socket, so I assume there is something wrong with my nginx config.
UPDATE: I changed the Nginx config by wrapping it in
http{ server{ .. location / { ... } } }
and when I restart the Nginx service, I get this error:
nginx: [emerg] "http" directive is not allowed here in /default.conf:1
nginx: configuration file /nginx.conf test failed
Any suggestions are helpful!
Use the following config; it might help.
server {
listen 80;
server_name _;
location / {
proxy_pass http://127.0.0.1:8090;
proxy_redirect off;
proxy_pass_request_headers on;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection Upgrade;
}
}
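Regarding the error in the UPDATE: files under /etc/nginx/conf.d/ are already included from inside the http{} block of the main nginx.conf, so they may contain only server{} (and upstream{}) blocks; wrapping them in another http{} produces the "http" directive is not allowed here error. A simplified sketch of the stock layout:
# /etc/nginx/nginx.conf (simplified)
events { }

http {
    include /etc/nginx/mime.types;

    # each file here is parsed inside the http{} context,
    # so it must not open its own http{} block
    include /etc/nginx/conf.d/*.conf;
}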
