Is it possible to run MinIO on a non-default path behind nginx?
I have a backend that generates presigned URLs with this code:
MinioClient minioClient = new MinioClient("http://x.x.x.x:9000", "key", "key");
String url = minioClient.presignedGetObject("bucket", "name", 60 * 60 * 24);
where http://x.x.x.x:9000 is the local MinIO instance.
It returns:
http://x.x.x.x:9000/bucket/name?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=admin%2F20181122%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20181122T072255Z&X-Amz-Expires=184&X-Amz-SignedHeaders=host&X-Amz-Signature=460b9b46f5fac13f29de4372dd7c1e8d6d6c747510761695a40d6b9ff08ba7d8
This link works as expected over the VPN, but when I rewrite the URL as https://example.com/bucket/name?... so it can be reached by all users, I get a signature error.
I have nginx as a reverse proxy, with a frontend on the default location:
location / {
    proxy_pass http://x.x.x.x:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
location /bucket/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_connect_timeout 300;
    # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    chunked_transfer_encoding off;
    proxy_pass http://x.x.x.x:9000;
}
The problem is that rewriting the URL invalidates the signature.
Probably, if I ran MinIO at, for example, https://example.com/minio and used that URL as the endpoint when generating the presigned URL, I would not have the signature problem.
MinIO includes the host in its signatures, so when the host changes (from x.x.x.x:9000 to example.com), the signed URL becomes invalid. Try this:
proxy_set_header Host 'x.x.x.x:9000';
We use something similar for our Kubernetes ingress.
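For reference, here is roughly how that override would sit in the /bucket/ location from the question (a sketch only, reusing the internal address from the question; adjust the address and path to your setup):

location /bucket/ {
    # Forward the Host the presigned URL was signed against,
    # not the public hostname the client used
    proxy_set_header Host 'x.x.x.x:9000';
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    chunked_transfer_encoding off;
    proxy_pass http://x.x.x.x:9000;
}

The alternative is to generate the presigned URLs against the public endpoint in the first place, so that the host in the signature matches the host clients actually request.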
I have an nginx proxy_pass that redirects to the HTTPS address of my Wildfly deployment.
If I call my URL via http://, the page loads normally. But if I call the URL with https://, I get these messages in the browser developer tools:
Loading the module from "https://www.planyourplaylist.com/VAADIN/build/vaadin-bundle-b84b24669ab9c1964b96.cache.js" was blocked because of a disallowed MIME type ("text/html").
Uncaught (in promise) TypeError: ServiceWorker script at https://www.planyourplaylist.com/sw.js for scope https://www.planyourplaylist.com/ encountered an error during installation.
My widlfly.conf for nginx looks like this:
upstream wildfly {
    # List of Wildfly application servers
    server <ip-adress>;
}

server {
    listen 80;
    server_name <ip-adress>;

    location / {
        #return 301 https://<ip-adress>:8443/;
        proxy_pass http://<ip-adress>:8080/;
    }
    location /management {
        proxy_pass http://<ip-adress>:9990/management;
    }
    location /console {
        proxy_pass http://<ip-adress>:9990/console;
    }
    location /logout {
        proxy_pass http://<ip-adress>:9990/logout;
    }
    location /error {
        proxy_pass http://<ip-adress>:9990;
    }
}
server {
    listen 443 ssl;
    server_name <ip-adress>;

    ssl_certificate ssl_cert/planyourplaylist_cert.cer;
    ssl_certificate_key ssl_cert/planyourplaylist_private.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # when user requests /
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://<ip-adress>:8443/;
    }

    location ~ \.(css|js|png|gif|jpeg|jpg|swf|ico|woff) {
        root /opt/wildfly/standalone/deployments/planyourplaylist.war;
        expires 360d;
    }
}
Thanks for your help. :)
Thanks to @SimonMartinelli for the link to the post.
The problem was actually fixed by including the mime.types file in nginx.
What I unfortunately don't understand is that I already had the mime.types file included, but in the nginx.conf file, as described in the official example:
https://www.nginx.com/resources/wiki/start/topics/examples/full/
Hence the question: what is the difference between including the mime.types file in nginx.conf under http{...} and including it in widlfly.conf under server{...}?
As I understand it, the file should already be included when nginx.conf is loaded.
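For comparison, the two placements being discussed look roughly like this (a sketch; the include paths are the usual package defaults and may differ on your system):

# /etc/nginx/nginx.conf -- included at the http level, the MIME type map is
# inherited by every server {} that does not define its own types
http {
    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;   # this is where widlfly.conf gets pulled in
}

# widlfly.conf -- included inside server {}, it only affects this one virtual host
server {
    listen 443 ssl;
    include /etc/nginx/mime.types;
    # ... rest of the server block shown above
}

Functionally the two should be equivalent for this virtual host, which suggests the nginx.conf that was actually loaded did not contain the include; nginx -T prints the full configuration exactly as nginx sees it and is a quick way to check.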
Thanks for your help. :)
I am trying to connect to a service from the Messenger API, and I am getting the following error:
SSL certificate problem: unable to get local issuer certificate
I have used Let's Encrypt as the HTTPS certificate issuer.
This is my NGINX config:
server {
    listen ip:80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    server_name example.com;
    listen ip:443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # https://www.scalescale.com/tips/nginx/504-gateway-time-out-using-nginx/
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        send_timeout 600;
        proxy_pass http://localhost:8083/;
        index index.html;
    }
}
Try this configuration format. I think it will help you.
server {
    listen example.com:80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen www.example.com:80;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    server_name example.com;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        root /home/admin/web/example.com/public_html; # directory that contains your index.html
        index index.html;
        include /etc/nginx/mime.types;
        try_files $uri $uri/ /index.html;
    }
}
I have the following installation: nginx + self-made SSL cert + Wildfly running on the same host. On Wildfly I have a Spring MVC app with the context path /myapp and Spring Security.
App is working perfectly when accessing it at http://192.168.13.13:8080/myapp (all pages, redirects, logins, etc.).
Now I want to access it through nginx at the root context path, i.e. http://192.168.13.13/, and at this point I'm stuck.
If I do not use SSL and proxy from / to /, it works, but I have to enter http://192.168.13.13/myapp to access http://192.168.13.13:8080/myapp.
If I proxy from / to /myapp, all Spring redirects break.
If I enable SSL, I can't log in to my app and the Spring redirects also break.
Could someone suggest the correct setup for my installation?
Current nginx config:
upstream wildfly {
    server 127.0.0.1:8080 weight=100 max_fails=5 fail_timeout=5;
}

server {
    underscores_in_headers on;
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    underscores_in_headers on;
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;

    ssl_certificate /etc/nginx/ssl/self.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:20m;
    ssl_session_tickets off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
    ssl_trusted_certificate /etc/nginx/ssl/self.crt;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_pass_header X-CSRF-TOKEN;
        proxy_pass http://wildfly/;
    }
}
Current Spring Security config:
@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        .authorizeRequests()
            .antMatchers("/", "/index").permitAll()
            .antMatchers("/login").permitAll()
            .antMatchers("/error/**").permitAll()
            .antMatchers("/platform/**").authenticated()
        .and().cors()
        .and().formLogin().loginPage("/login").usernameParameter("username").passwordParameter("password")
            .successHandler(new CustomAuthenticationSuccessHandler())
            .failureHandler(customAuthenticationFailureHandler())
        .and().csrf().csrfTokenRepository(new CookieCsrfTokenRepository())
        .and().logout().logoutRequestMatcher(new AntPathRequestMatcher("/logout"))
            .addLogoutHandler(new SecurityContextLogoutHandler())
            .clearAuthentication(true)
            .logoutSuccessUrl("/login?logout").deleteCookies("JSESSIONID")
            .invalidateHttpSession(true)
        .and().rememberMe().key("remembermekay").tokenValiditySeconds(60 * 60 * 24 * 14)
        .and().exceptionHandling().accessDeniedPage("/error/access_denied");
}
If you need any other configs, please ask...
Finally found the solution: in Wildfly I configured a virtual host and mapped it to my application. No other actions were required; it now works as expected.
<host name="example.com" alias="example.com" default-web-module="myapp.war">
    <access-log directory="${jboss.server.log.dir}/access" prefix="myapp_access_log."/>
</host>
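For anyone who would rather keep the /myapp context on Wildfly and strip it at the proxy (the variant from the question where the Spring redirects broke), that setup usually also needs redirect and cookie-path rewriting on the nginx side. A rough, untested sketch:

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # map / on the proxy to /myapp/ on Wildfly
    proxy_pass http://wildfly/myapp/;
    # rewrite Location headers and cookie paths that still carry /myapp
    proxy_redirect /myapp/ /;
    proxy_cookie_path /myapp /;
}

Even then, links the application renders inside its HTML keep the /myapp prefix, which is why the virtual-host approach above is the cleaner fix.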
I need to debug a remote Java application running behind an nginx reverse proxy.
I get the following error:
Failed to attach to remote debuggee VM. Reason: java.io.IOException: Received invalid handshake
What should be the right nginx configuration to achieve this?
I have successfully attached the VS Code Java debugger to the remote Java application by targeting the app's host directly.
The resolver is 127.0.0.11 because I'm using the nginx Docker image.
My nginx config file app.xyz.com.conf in conf.d:
server {
    listen 1043;
    resolver 127.0.0.11 valid=30s;
    server_name app.xyz.com;
    include /etc/nginx/mime.types;

    location / {
        proxy_buffer_size 8k;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        set $upstream "http://java-app:1043";
        proxy_pass $upstream;
        client_max_body_size 10M;
    }
}
Thanks in advance!
You should proxy the debug port as raw TCP instead of HTTP to debug the Java application.
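JDWP is a plain TCP protocol rather than HTTP, so proxying it through an http location block breaks the debugger handshake. A minimal sketch using the stream module, reusing the container name and port from the question (untested; adjust names and ports to your setup):

# Goes at the top level of nginx.conf, next to the http { } block --
# not in a conf.d file, because those are included inside http { }
stream {
    server {
        listen 1043;
        # java-app:1043 is the hostname/port already used in the question
        proxy_pass java-app:1043;
    }
}

server_name and other HTTP-only directives have no effect here; the stream block simply forwards raw TCP, which is what the JDWP handshake needs.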
I have a WebSocket service running on a VM (remote address, port 8090) and am using nginx to proxy the connections. The nginx config is as follows:
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8090;
        proxy_pass_request_headers on;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
From my local host I was able to connect to the WebSocket using the IP, as ws://111.11.1.1/websocket.
But when I send a message from my local host or application to the remote WebSocket with websocket.sendTextMessage("Message"), I am not able to hit the socket. I assume there is something wrong with my nginx config.
UPDATE: I changed the nginx config by adding
http{ server{..location/{...}}}
and when I restarted the nginx service, I got this error:
nginx: [emerg] "http" directive is not allowed here in /default.conf:1
nginx: configuration file /nginx.conf test failed
Any suggestions are helpful!
Use the following config; it might help.
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8090;
        proxy_redirect off;
        proxy_pass_request_headers on;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
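As for the "http" directive is not allowed here error from the UPDATE: files under conf.d are normally included from inside the http block of the main nginx.conf, so they must contain only server blocks, without another http { } wrapper around them. A simplified sketch of the stock layout:

# /etc/nginx/nginx.conf (simplified)
http {
    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;   # default.conf is pulled in here, already inside http { }
}

# /etc/nginx/conf.d/default.conf -- server blocks only
server {
    listen 80;
    server_name _;
}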
}