I'm trying to deploy a solution using Open Trip Planner. Everything is OK if I use HTTP, but the HTTPS connection doesn't work.
I've followed the official docs with no success. The internal server appears to be running: it logs that the expected HTTPS port is listening, and the port is actually shown as listening by the OS (Windows 10 Pro), but no secure connection can be established (I tried the "curl" and "openssl" tests in the page, but both failed).
This is the document I refer to:
http://docs.opentripplanner.org/en/latest/Security/#security
Any help is appreciated, thanks in advance.
Is using a reverse proxy like nginx an option for you? That way nginx can handle the HTTPS requests and pass them on to OpenTripPlanner.
Here's an example nginx configuration:
server {
    listen 443 ssl;
    server_name opentripplanner.example.com;

    ssl_certificate /etc/ssl/cacert.pem;
    ssl_certificate_key /etc/ssl/privkey.pem;

    location / {
        # forward decrypted traffic to the OpenTripPlanner instance
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
References:
https://manual.seafile.com/deploy/https_with_nginx.html
https://nginx.org/en/docs/beginners_guide.html
Related
Currently I am hosting a WAR with Tomcat.
I find that if I access the web app on port 18080,
like http://my-server-site:18080/welcome
the page shows successfully.
However, if I just type:
http://my-server-site/welcome
it says it cannot find the directory '/welcome'.
Does anyone have an idea why? It looks weird to me.
Thanks
Not weird at all...
If you do not specify a port, it defaults to 80 for HTTP and 443 for HTTPS. I guess you have another web server (Apache?) running on the same host that gives you the error you see.
If you are expecting to see the same page on the default port, you will need to configure that web server as a proxy: ProxyPass for Apache, proxy_pass for nginx, as in the sketch below.
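For instance, with nginx a minimal sketch could look like the following (the my-server-site name and the 18080 backend port are taken from the question; treat this as a starting point rather than a drop-in config):
server {
    listen 80;
    server_name my-server-site;

    location / {
        # forward default-port traffic to Tomcat on 18080
        proxy_pass http://127.0.0.1:18080;
        proxy_set_header Host $host;
    }
}
With Apache the equivalent would be a ProxyPass/ProxyPassReverse pair inside a VirtualHost listening on port 80.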
I need some help with running netty-socketio over HTTPS. I have got it to work in my local environment, but not on a server with a secure domain. The server starts but the client isn't able to connect. I tried starting the socket server with the IP as well as the domain name. For the server to start with the domain name as the hostname value in the setHostname method, I added an entry in the /etc/hosts file as follows:
127.0.0.1 localhost example.com
The socket server started with example.com as the hostname, but the client isn't able to connect using the same hostname over HTTPS, as follows:
var socket = io.connect('https://example.com:10443')
I tried with the options { secure: true, reconnect: true, rejectUnauthorized: false } too, but it's the same issue.
On the server side my configuration is as follows:
Configuration configuration = new Configuration();
configuration.setHostname("example.com");
configuration.setPort(10443);
configuration.setKeyStorePassword("mypassword");
InputStream stream = getClass().getClassLoader().getResourceAsStream("keystore.jks");
configuration.setKeyStore(stream);
The .jks file was created using the keytool command for the same domain (example.com).
Is there something more to be done for port 10443 to be used by the socket server? Or is there any other configuration to be done?
Got the solution! I had not mentioned that the domain was set up on Cloudflare. The issue was with the port I used, 10443: it's not supported by Cloudflare. Changed it to 8443 and it worked!
For those who come across this, please find here the list of supported ports that Cloudflare works with. It may save you a lot of time.
Also, please note that I used my public IP as the hostname in the setHostname() method so that I don't need anything added to my hosts file, and then gave the actual domain name with https on the client side to connect to the server. That's it. Thank you all!
Sandeep
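For readers who want to see the whole thing in one place, here is a minimal sketch of the working setup described above, using the netty-socketio Configuration API from the question; the public IP and port 8443 are placeholders based on this thread, so adjust them to your own environment:
import java.io.InputStream;

import com.corundumstudio.socketio.Configuration;
import com.corundumstudio.socketio.SocketIOServer;

public class SocketServer {
    public static void main(String[] args) {
        Configuration config = new Configuration();
        // Bind to the public IP, so no /etc/hosts entry is needed on the server.
        config.setHostname("203.0.113.10");   // placeholder: your public IP
        config.setPort(8443);                 // a Cloudflare-compatible HTTPS port

        // Keystore settings as in the question.
        config.setKeyStorePassword("mypassword");
        InputStream keyStore = SocketServer.class.getClassLoader()
                .getResourceAsStream("keystore.jks");
        config.setKeyStore(keyStore);

        SocketIOServer server = new SocketIOServer(config);
        server.start();

        // Clients connect with the real domain name over HTTPS, e.g.
        // io.connect('https://example.com:8443')
    }
}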
I have serverA with HAProxy and the following configuration:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend http-in
    bind *:80
    default_backend servers

backend servers
    option httpchk OPTIONS /
    option forwardfor
    stats enable
    stats refresh 10s
    stats hide-version
    stats scope .
    stats uri /admin?stats
    stats realm Haproxy\ Statistics
    stats auth admin:pass
    cookie JSESSIONID prefix
    server tomcat1 serverB:10039 cookie JSESSIONID_SERVER_1
    server tomcat2 serverC:10039 cookie JSESSIONID_SERVER_2
Now, I go to http://serverA/admin?stats and get the statistics. Servers serverB and serverC have WebSphere Application Server and WebSphere Portal Server installed (WAS is like Tomcat and WPS is like an application deployed to Tomcat); it is hosted on port 10039. I go to http://serverA/wps/portal and get my portal, but when I click any link to any page, I get redirected to http://serverA:10039/wps/portal/bla/bla. This happens because WPS responds with its own port, 10039, while my HAProxy configuration listens only on port 80. What I've tried:
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
For example, in nginx I did it like this:
my application is hosted on port 3000, and the useful part of my nginx configuration looks like this:
location @ruby {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_read_timeout 300;
    proxy_pass http://app; # upstream app
}
How can I do this in HAProxy?
This question is similar to WebSphere Portal behind reverse proxy and getServerPort().
I think the issue is that WebSphere Application Server (traditional) doesn't honor Host headers properly, which can make reverse proxies hard to get working.
Try the settings recommended in that other answer (adjusting the Apache configuration for HAProxy), and all should be well.
In your backend section, use "http-request set-header" to set $WSSN and $WSSP to your client-visible hostname and port. They will then be used for self-referential redirects; see the sketch below.
Or, set the WebSphere custom properties trusthostheaderport and com.ibm.ws.webcontainer.extractHostHeaderPort (http://www-01.ibm.com/support/docview.wss?uid=swg21569667) to respect the port in the Host: header.
With this option you may need to ask HAProxy to set the Host header to the client's view with "http-request set-header Host", also in the backend section. I'm not sure what the default is.
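As an illustration only (not tested against WPS, and it needs an HAProxy version that supports http-request), the backend from the question might end up looking roughly like this, with portal.example.com and port 80 standing in for the client-visible hostname and port:
backend servers
    option httpchk OPTIONS /
    option forwardfor
    # client-visible values for WebSphere's self-referential redirects
    http-request set-header Host  portal.example.com
    http-request set-header $WSSN portal.example.com
    http-request set-header $WSSP 80
    # (stats directives from the original config omitted for brevity)
    cookie JSESSIONID prefix
    server tomcat1 serverB:10039 cookie JSESSIONID_SERVER_1
    server tomcat2 serverC:10039 cookie JSESSIONID_SERVER_2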
I am using nginx in front of Jetty servlets.
For the purposes of my project I need to initialize two connections to the Jetty servlet and keep them open.
To initialize the downlink I use a normal request and I get the input stream back.
To initialize the uplink I use a chunked-encoding request.
I use nginx version 1.4.6, so chunked encoding should be enabled by default; regardless, I set it in my server definition.
# HTTPS server
server {
    listen 443;
    listen [::]:443;
    server_name localhost;

    ssl on;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_http_version 1.1;
        expires off;
        proxy_buffering off;
        chunked_transfer_encoding on;
        proxy_pass https://127.0.0.1:8080;
        # root html;
        # index index.html index.htm;
    }
}
I've searched through all the forums and I still can't come across a solution.
I've enabled chunked encoding, turned proxy buffering off, etc., and I can't get it to work. I have also done simple tests to make sure that it's not my app's implementation that's blocking it somehow, and it still doesn't work.
Anything else I can try?
So I also posted on the nginx forum and got a reply. The thing I am specifically looking for is called "unbuffered upload", and that is currently a feature that nginx does not provide.
Using WebSockets is out of the question, because this prototype will later need to be implemented in a bigger and older system that uses the HTTP protocol. So the answer is that it's not possible with nginx. A possible workaround for anyone facing the same issue is to use Tengine, which is an nginx fork.
I'm coding a command-line tool to manage the S3 service. On my local machine everything works, but on the server where it should be executed it fails with the following message:
Error Message: Unable to execute HTTP request: Connection to http://s3.amazonaws.com refused
I make the connection with the following code:
s3 = new AmazonS3Client(credentials,clientConf);
clientConf only sets the protocol to HTTP, as I suspected that connecting over HTTPS could be the problem, but I'm having the same result.
Now, the server has the following configuration:
debian 6 64 bits
LAMP installed from source
openssl installed from source
java installed from distribution packages
This is the network configuration:
auto eth0
iface eth0 inet static
address XX.XX.XX.XX
netmask 255.255.255.255
broadcast XX.XX.XX.XX (same as address)
auto eth0:0
iface eth0:0 inet static
address XX.XX.XX.XX
netmask 255.255.255.255
broadcast XX.XX.XX.XX (same as address)
auto eth0:1
iface eth0:1 inet static
address XX.XX.XX.XX
netmask 255.255.255.255
broadcast XX.XX.XX.XX (same as address)
post-up route add 10.255.255.1 dev eth0
post-up route add default gw 10.255.255.1
wget, telnet, curl: everything works except this. I have 3 network interfaces because I have 2 SSL sites and another IP for the other sites.
How should I configure clientConf to make this work? Is it a Java problem? A network problem? At the very least, how can I get more debug info? I tried to catch the AmazonClientException exception, but that doesn't work.
Thanks in advance :)
Regards.
This has been reported as a bug in the AWS SDK for Java. Quoth ZachM#AWS:
This appears to be a bug in the SDK. The problem is that the client
configuration object is shared with the Security Token Service client
that DynamoDB uses to establish a session, and it (unlike Dynamo)
doesn't accept the HTTP protocol. There are a couple workarounds:
1) Create your own instance of STSSessionCredentialsProvider and
provide it to your DynamoDB client, or
2) Instead of specifying the protocol in the ClientConfiguration,
specify it with a call to setEndpoint("http://...")
We'll discuss solutions for this bug.
I would recommend using one of the workarounds for now. Good luck getting your connection to work successfully.
(Additional documentation and workarounds)
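To make workaround 2 concrete for the S3 case in the question, here is a hedged sketch (the credential values are placeholders; the point is that setEndpoint carries the protocol instead of the ClientConfiguration):
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;

public class S3EndpointWorkaround {
    public static void main(String[] args) {
        // Placeholder credentials; use your real keys or a credentials provider.
        BasicAWSCredentials credentials =
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");

        // Do not set the protocol here...
        ClientConfiguration clientConf = new ClientConfiguration();
        AmazonS3Client s3 = new AmazonS3Client(credentials, clientConf);

        // ...set it via the endpoint instead (workaround 2 above).
        s3.setEndpoint("http://s3.amazonaws.com");

        System.out.println(s3.listBuckets());
    }
}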