Spring Cloud Gateway returns empty response - java

I am trying to use Spring Cloud Gateway to route requests to my microservice, but even though the microservice itself works as expected, routing through the gateway returns an empty response.
My microservice is a simple application running on port 5861. After earlier routing attempts with more specific paths, I pointed the gateway at it with a predicate covering all cases, to be sure the route matches.
This is my gateway configuration file:
spring:
  cloud:
    gateway:
      routes:
        - id: product_service
          uri: localhost:5861/
          predicates:
            - Path=/product-service/**
After starting both services and hitting them, my microservice returns the response properly:
$ curl -v http://localhost:5861/product/
* Trying 127.0.0.1:5861...
* Connected to localhost (127.0.0.1) port 5861 (#0)
> GET /product/ HTTP/1.1
> Host: localhost:5861
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
< Content-Type: application/json
< Transfer-Encoding: chunked
< Date: Wed, 01 Jun 2022 19:41:40 GMT
<
* Connection #0 to host localhost left intact
[{"id":1,"name":"Apple","price":2},{"id":2,"name":"Apple","price":2},{"id":3,"name":"Apple","price":2}]
But if I try the same thing through my API gateway it returns nothing, and if I try to reach it from my browser I just get a blank page.
$ curl -v http://localhost:5860/product-service/product
* Trying 127.0.0.1:5860...
* Connected to localhost (127.0.0.1) port 5860 (#0)
> GET /product-service/product HTTP/1.1
> Host: localhost:5860
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-length: 0
<
* Connection #0 to host localhost left intact
Can someone help me with what I am missing? I want to get the same response from the gateway.

It seems like your application exposes a /product/ endpoint.
From the gateway, you are trying to forward /product-service/product to /product on the service.
By default, Spring Cloud Gateway forwards URIs as they are. So currently, I believe that http://localhost:5860/product-service/product is being forwarded to http://localhost:5861/product-service/product.
If you need to map /product-service/** to /** on the product service, use the RewritePath filter.
Here is an example usage that may work for you:
spring:
  cloud:
    gateway:
      routes:
        - id: product_service
          uri: http://localhost:5861
          predicates:
            - Path=/product-service/**
          filters:
            - RewritePath=/product-service/?(?<segment>.*), /$\{segment}
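For reference, the same route can also be expressed with Spring Cloud Gateway's Java DSL. This is only a sketch of an equivalent configuration (the class and bean names are illustrative), using the documented RewritePath pattern and an explicit http:// scheme on the uri:

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator productServiceRoute(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("product_service", r -> r
                        .path("/product-service/**")
                        // strip the /product-service prefix before forwarding downstream
                        .filters(f -> f.rewritePath("/product-service/?(?<segment>.*)", "/${segment}"))
                        .uri("http://localhost:5861"))
                .build();
    }
}

A StripPrefix=1 filter would have the same effect here, since only the first path segment needs to be removed.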

Working update:
spring:
  cloud:
    gateway:
      routes:
        - id: product_service
          uri: http://localhost:5861/
          predicates:
            - Path=/product

Related

Unable to read the original URL when Host is specified

My code is running on localhost and I am hitting one of my URLs as below:
curl -k -vv --http1.1 "https://localhost:8443/versa/login" -H 'Host: google.com'
Now I am trying to read the URL in my code using the following:
StringBuffer url = httpServletRequest.getRequestURL();
The value is always as follows, irrespective of whether the protocol used is HTTP/1.1 or HTTP/2:
https://google.com/versa/login
How can I read the original URL here?
You can't. Have a look at the output from curl - it gives the clue. [I dropped HTTPS to HTTP to simplify, but it is the same for HTTPS, just with more debug output from curl.]
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8443 (#0)
> GET /versa/login HTTP/1.1
> Host: google.com
> User-Agent: curl/7.64.1
> Accept: */*
The first two lines are just debug. The next four lines are what is actually sent to the server. Notice there is nothing about the port or localhost: the only name the server sees is whatever the client put in the Host header, which is why getRequestURL() reconstructs the URL as https://google.com/versa/login.
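As a minimal illustration (the servlet name is made up), everything getRequestURL() returns is reconstructed from the request line plus the Host header, never from the hostname or port the client actually connected to:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DebugServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // getRequestURL() = scheme + server name (+ non-default port) + request URI,
        // where the server name is taken from the Host header the client sent.
        resp.getWriter().println("requestURL  = " + req.getRequestURL());
        resp.getWriter().println("Host header = " + req.getHeader("Host"));   // google.com
        resp.getWriter().println("requestURI  = " + req.getRequestURI());     // /versa/login
        // The local socket address is still available, but it is not part of the URL:
        resp.getWriter().println("local       = " + req.getLocalName() + ":" + req.getLocalPort());
    }
}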

Liveness probe failing but the endpoint is accessible from different pods

I'm trying to implement a simple liveness probe in my Helm chart deployment template. Below is my liveness probe configuration; Spring Boot's /actuator/health endpoint is used as the health check endpoint.
containers:
  - name: {{ .Release.Name }}-container
    image: {{ .Values.container.image }}
    ports:
      - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      failureThreshold: 5
      periodSeconds: 10
      initialDelaySeconds: 30
      timeoutSeconds: 25
This is the error I'm encountering (I tried adding a large initialDelaySeconds and also tried adding a startupProbe; neither worked):
Liveness probe failed: Get http://x.x.x.x:8080/actuator/health: dial tcp x.x.x.x:8080: connect: connection refused
However, I'm able to get a 200 response from this endpoint from other pods, both on the same EC2 instance and on different EC2 instances.
$ k exec -it pod/test sh
# curl http://x.x.x.x:8080/actuator/health -I
HTTP/1.1 200 OK
Connection: keep-alive
Transfer-Encoding: chunked
Content-Type: application/vnd.spring-boot.actuator.v3+json
correlation-id: x-x-x-x-x
Date: Fri, 09 Oct 2020 14:04:56 GMT
Without the liveness probe, the app is working fine and I can access all the endpoints via port 8080.
I tried setting up a liveness probe on an nginx image and it works fine (so ruling out network issues):
Containers:
  liveness:
    Container ID:   docker://0af63462845d6a2b44490308147c73277d22aff56f993ca7c065a495ff97fcfa
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:c628b67d21744fce822d22fdcc0389f6bd763daac23a6b77147d0712ea7102d0
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 29 Sep 2020 15:53:17 +0530
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/ delay=2s timeout=1s period=2s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-57smz (ro)

Websocket connection attempt fails, returns "Connection: Upgrade, close."

I'm building a Java websocket server using Tomcat. On my dev build, it works perfectly. However, when I deploy it to production, the server automatically appends "close" to the Connection response header, immediately closing the socket (which never seems to connect to the server in the first place).
Here's some context for the production environment:
Tomcat 7, Java 8 on RHEL
Communications are encrypted by SSL, websocket uses wss
The server is behind an institutional firewall (but I expect that the encryption should make this a non-issue)
My local dev environment is not an exact clone (as it's used for multiple projects). It's running Tomcat 8, but I believe Tomcat 7 should feature comparable websocket support.
Here's the request/response (as captured by Chrome dev tools) when the websocket is sent to the production server:
General:
Request URL: wss://example.com/WSServer
Request Method: GET
Status Code: 101 Switching Protocols
Response:
HTTP/1.1 101 Switching Protocols
Date: Thu, 10 May 2018 17:04:39 GMT
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Upgrade: websocket
Connection: upgrade, close
Sec-WebSocket-Accept: JFNyciPc/Cza8PFaXWVct6f21qw=
Sec-WebSocket-Extensions: permessage-deflate;client_max_window_bits=15
Content-Length: 0
Content-Type: text/plain; charset=UTF-8
Request:
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cache-Control: no-cache
Connection: Upgrade
Cookie: *redacted*
Host: example.com
Origin: https://example.com
Pragma: no-cache
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits
Sec-WebSocket-Key: OvMcwMxIYqBLrx9ijlFK/w==
Sec-WebSocket-Version: 13
Upgrade: websocket
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36
As far as I can tell, the most revealing part of this is Connection: upgrade, close, which explains the client-side behavior below.
Here's a snippet of the client-side Javascript:
var socket = new WebSocket((window.location.protocol === "http:" ? "ws:" : "wss:") + "//" + window.location.host + "/WSServer");
socket.onopen = function wsOpen() {
    socket.send("Hello!");
};
socket.onclose = function wsClose(reason) {
    log(JSON.stringify(reason)); // debug
};
socket.onopen gets called first. Executed normally, this doesn't produce any console message, but if I delay its execution with a breakpoint I get an error message: "Websocket is already in CLOSED or CLOSING state."
socket.onclose gets called immediately after. The reason code is 1006 with no explanation.
I've also put some debug logging in the ServerEndpointConfig.Configurator.modifyHandshake method, but it never reaches that point, nor does it reach the @OnOpen-annotated method.
Any idea what's causing the connection to fail? Again, the server and client code works in dev, so I'm confident that it's not a code issue. Is it a Tomcat configuration issue? (As far as I can tell, there's nothing unusual about the way it's set up.) Is there something obvious I'm missing?
Thanks in advance for any help!
HTTP/1.1 enables keep-alive connections by default.
A request such as:
GET / HTTP/1.1
Host: example.com
Connection: close
tells the server to disable keep-alive on the connection (the opposite of the HTTP/1.1 default).
Upgrade is a hop-by-hop header, just like Connection, and Upgrade is only valid if it is listed in Connection, e.g. Connection: Upgrade.
When a client makes an HTTP/1.1 request containing Upgrade, the server receiving the request is not required to upgrade, and can instead simply respond with an HTTP/1.1 response.
Connection: upgrade, close asks the server either to upgrade to (one of) the protocols in the Upgrade header, or to respond with plain HTTP/1.1 and close the connection. If the server does upgrade, the close token in Connection is ignored, because immediately after the HTTP/1.1 101 Switching Protocols response the connection switches to the protocol named in the Upgrade response header.
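If it helps to watch the handshake outside the browser, here is a minimal standalone client sketch (assuming a Java 11+ JDK; the URL is the placeholder from the question) that performs the same GET-plus-Upgrade exchange described above and reports whether the server kept the upgraded connection open:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class HandshakeProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // buildAsync sends the GET with Upgrade: websocket / Connection: Upgrade headers;
        // the future completes only if the server answers 101 Switching Protocols with a
        // valid websocket handshake, and fails otherwise.
        WebSocket ws = client.newWebSocketBuilder()
                .buildAsync(URI.create("wss://example.com/WSServer"), new WebSocket.Listener() {
                    @Override
                    public void onOpen(WebSocket webSocket) {
                        System.out.println("upgrade succeeded");
                        WebSocket.Listener.super.onOpen(webSocket);
                    }

                    @Override
                    public CompletionStage<?> onClose(WebSocket webSocket, int statusCode, String reason) {
                        System.out.println("closed: " + statusCode + " " + reason);
                        return WebSocket.Listener.super.onClose(webSocket, statusCode, reason);
                    }
                })
                .join();

        ws.sendText("Hello!", true);
        Thread.sleep(2000); // give the server a moment so an immediate close shows up in onClose
    }
}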

Sending JSON as data with a GET call

I can see that the following curl command works remotely:
curl -X GET -d '{"begin":22, "end":33}' http://myRemoteApp.com:8080/publicApi/user/test/data
However as per the docs at http://curl.haxx.se/docs/manpage.html,
-d, --data
(HTTP) Sends the specified data in a POST request to the HTTP server,
in the same way that a browser does when a user has filled in an HTML
form and presses the submit button. This will cause curl to pass the
data to the server using the content-type
application/x-www-form-urlencoded. Compare to -F, --form.
So how is the GET working with curl if we are using -d to post data?
Also, there is no HttpUrlConnection or Restlet method to send JSON in a GET call. Is there?
According to the curl documentation, -X forces the method word to be a particular value, regardless of whether it results in a sensible request or not. We can run curl with tracing to see what it actually sends in this case:
$ curl -X GET -d '{"begin":22, "end":33}' --trace-ascii - http://localhost:8080/
== Info: About to connect() to localhost port 8080 (#0)
== Info: Trying ::1... == Info: connected
== Info: Connected to localhost (::1) port 8080 (#0)
=> Send header, 238 bytes (0xee)
0000: GET / HTTP/1.1
0010: User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7
0050: NSS/3.14.3.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
0084: Host: localhost:8080
009a: Accept: */*
00a7: Content-Length: 22
00bb: Content-Type: application/x-www-form-urlencoded
00ec:
=> Send data, 22 bytes (0x16)
0000: {"begin":22, "end":33}
So this command does in fact cause curl to send a GET request with a message body.
This question has some extensive discussion of using GET with a request body. The answers agree that it's not actually illegal to send a GET request with a message body, but you can't expect the server to pay attention to the body. It seems that the specific server which you're using does handle these requests, whether due to a bug, happy accident, or deliberate design decision.
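On the Java side of the question: the old HttpURLConnection is not a good fit, since the JDK implementation tends to switch the method to POST once you open the output stream, but the java.net.http.HttpClient introduced in Java 11 does let you attach a body to a GET. A minimal sketch, assuming a Java 11+ runtime and the endpoint from the question; whether the server pays attention to the body is still entirely up to the server, as discussed above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GetWithBody {
    public static void main(String[] args) throws Exception {
        String json = "{\"begin\":22, \"end\":33}";

        // method("GET", ...) attaches a body to a GET, mirroring curl -X GET -d
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://myRemoteApp.com:8080/publicApi/user/test/data"))
                .header("Content-Type", "application/json")
                .method("GET", HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}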

How to customize the Error 400 Bad Request response

I have a web service provided by Jetty.
How can I filter URLs with illegal characters?
I cannot control some of the returned information when the request URL has illegal characters.
Actually, I want to return some specific info when the URL is invalid.
For example, I added a filter in my application to validate the URL; if it is illegal, I return predefined info.
But I cannot filter some URLs like "%adsasd"; they seem to be handled by Jetty itself:
curl -v -X PUT -u user:password 'http://myip.com:8080/%adsasd'
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
* Server auth using Basic with user 'user'
> PUT /%adsasd HTTP/1.1
> Authorization: Basic YWRtaW46MTIzNDU2
> User-Agent: curl/7.35.0
> Accept: */*
> Host: 127.0.0.1:8080
>
< HTTP/1.1 400 Bad Request
< Content-Length: 0
* Server Jetty(9.0.6.v20130930) is not blacklisted
< Server: Jetty(9.0.6.v20130930)
<
* Connection #0 to host 127.0.0.1 left intact
The error response from Jetty
HTTP/1.1 400 Bad Request
indicates that Jetty did detect that as a bad URL and rejected it.
As for how to customize this, that is really tricky, mainly because of how early in the processing of this specific request the error occurs.
This kind of error (400 Bad Request) occurs during the parsing of the raw incoming HTTP request, well before the server has even attempted to figure out which context to dispatch to.
There is no way to have a custom error handler in a specific webapp context handle this sort of fundamental HTTP error, as the server has yet to figure out which context to even talk to.
There is also no way at the server side (even at a global level) to customize this error message.
If you want such a feature, please file a feature request.
