Deploy app on App Engine on port 8761 - Java

I'm deploying a Java app that runs on port 8761 and works fine on localhost.
However, when I push it to the App Engine flexible environment, I get an HTTP 502 server error.
Here is my app.yaml:
runtime: java
env: flex
service: eureka
runtime_config:
  jdk: openjdk8
handlers:
- url: /.*
  script: ignore
  secure: always
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 2
The log from gcloud looks fine and the server is running, but my request doesn't seem to hit the app at all.
I noticed that it works if I run on port 8080. For now it is not a problem to change the default port to 8080, but I would like to understand why I'm not able to run it on 8761.

I think you need to use the network settings section in the app.yaml config file:
network:
  forwarded_ports:
  - 8761/tcp
You might also need to set firewall rules in the Cloud Platform Console.
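For illustration, a minimal sketch of the full app.yaml with the network section added (everything else as in the question), plus a hypothetical firewall rule; the rule name allow-eureka-8761 is made up:

runtime: java
env: flex
service: eureka

network:
  forwarded_ports:
  - 8761/tcp

# hypothetical rule name; opens 8761 to all sources
gcloud compute firewall-rules create allow-eureka-8761 --allow tcp:8761 --source-ranges 0.0.0.0/0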

Related

Error connecting Node.js web client to Java gRPC server

I have a gRPC server written in Java and I'm currently trying to create a web client with React. However, I can't seem to get the connection working between the Envoy proxy the client connects to and the actual server.
I would expect to receive the same message as with the Java client, but I get the error "Http response at 400 or 500 level" and an empty response in the web client, while the Java server doesn't even receive the request.
The server runs on port 8080, and the envoy proxy is configured on port 9090, which is the one used by the web client.
Dockerfile:
FROM envoyproxy/envoy-dev:latest
COPY ./envoy.yaml /etc/envoy/envoy.yaml
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml -l trace --log-path /tmp/envoy_info.log
envoy.yaml:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 9090 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: m_service
              cors:
                allow_origin:
                - "*"
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                expose_headers: grpc-status,grpc-message
                enabled: true
          http_filters:
          - name: envoy.grpc_web
          - name: envoy.cors
          - name: envoy.router
  clusters:
  - name: m_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: localhost
        port_value: 8080
The commands I use to build and run the Docker container are docker build -t m-server . and docker run -p 9090:9090 -td m-server /bin/bash; the proto classes for the front-end are loaded statically.
If there's any more code that'd be useful to post, please let me know. Any advice is appreciated, thank you!
For me the solution was to change the command used to run the container: docker run -p 9090:9090 -td m-server /bin/bash became docker run -d -p 9090:9090 -p 9901:9901 m-server. The main difference is using -d instead of -td, and the second port mapping is for the Envoy admin server.
I am just learning Docker, and from what I understood from the documentation, the explanation is that I was running the container in detached mode but with a pseudo-tty allocated, which is meant for foreground mode. I had seen this here, but the purpose there was slightly different, and at the time I misunderstood it: merely keeping the container running was not what I needed.
Changing 'localhost' to '0.0.0.0', as suggested in this answer, is also important.
Looks like Envoy is not forwarding the request to your Java server. Envoy has an admin interface https://www.envoyproxy.io/docs/envoy/latest/operations/admin . That and the Envoy log files should help troubleshoot this.
socket_address:
  address: localhost
This is the problem. When Envoy runs as a dockerized image, it tries to forward to itself, because localhost is not your Docker host machine (where the gRPC server is running) but the localhost of the running container itself. Use Docker Compose, port mapping, or an external network. Good luck!
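A minimal sketch of the Compose approach (service and image names are hypothetical): both containers join the same Compose network, and Envoy reaches the gRPC server by service name instead of localhost.

version: "3"
services:
  grpc-server:
    image: m-grpc-server     # the Java gRPC server, listening on 8080
  envoy:
    image: m-server          # the Envoy image built from the Dockerfile above
    ports:
    - "9090:9090"
    - "9901:9901"

In envoy.yaml, the cluster then points at address: grpc-server instead of address: localhost.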

Remote Debugging of OpenShift Application in IntelliJ IDEA

I have a Java application running on a remote OpenShift cluster, and I want to debug the app from my local machine with IntelliJ IDEA. The app is built by a Jenkinsfile on another remote Jenkins server (Gradle build, Docker build, and push to OpenShift, where it is automatically deployed).
The Dockerfile exposes port 9009, and my IntelliJ remote debug configuration therefore looks like this:
(screenshot: Remote Debug configuration)
With localhost in the debug config, I need OpenShift port-forwarding:
oc port-forward my-pod 9009
Forwarding from 127.0.0.1:9009 -> 9009
When I start the Debugger I get the following error in Intellij:
Error running 'DTC Remote Debug':
Unable to open debugger port (localhost:9009): java.net.ConnectException "Connection refused: connect"
At the same time the terminal with the port forwarding shows:
Handling connection for 9009
E0927 09:52:33.711817 5996 portforward.go:331] an error occurred forwarding 9009 -> 9009: error forwarding port 9009 to pod ad370...c010, uid : exit status 1: 2019/09/27 03:52:33 socat[129691] E connect(5, AF=2 127.0.0.1:9009, 16): Connection refused
Running an Nmap scan against the URL that serves my application's index.html, I got the following:
nmap -sS my-openshift-url
Starting Nmap 7.80 ( https://nmap.org ) at 2019-09-27 15:01 Mitteleuropäische Sommerzeit
Nmap scan report for my-openshift-url (IP-Address)
Host is up (0.0043s latency).
rDNS record for IP-Address: dispatch-my-domain
Not shown: 997 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
9009/tcp closed pichat
Nmap done: 1 IP address (1 host up) scanned in 6.10 seconds
I guess the problem is the closed port 9009, but I have no clue how to open that port on my OpenShift cluster. I already set several environment variables in the OpenShift web UI (just to be sure):
DEBUG TRUE
DEBUG true
DEBUGGING TRUE
DEBUGGING true
JAVA_DEBUG TRUE
JAVA_DEBUG true
JAVA_DEBUG_PORT 9009
But I can't get it to work. If I switch the port forwarding to 8080, I can access index.html via localhost:8080 in my browser. I don't know whether I need to change something in the project code (Gradle, Docker, Jenkins, etc.) or whether I can just open the port on the deployed service in OpenShift somehow...
If anything is unclear or I missed something, just tell me. I'm grateful for every piece of advice.
Regards,
Christoph
Adding the following environment variable in openshift did the trick:
JAVA_TOOL_OPTIONS -agentlib:jdwp=transport=dt_socket,address=9009,server=y,suspend=n
All the other environment variables from above are obsolete...
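If you prefer the CLI over the web UI, the same variable can be set with oc set env; the deployment config name my-app below is a placeholder:

# placeholder DC name; adjust to your deployment config
oc set env dc/my-app JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,address=9009,server=y,suspend=n'

The JVM picks JAVA_TOOL_OPTIONS up automatically at startup, so after the pod restarts the JDWP agent listens on 9009 and the oc port-forward command from the question connects.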

Dockerized Mac/Java app can't talk to localhost

MacOS + Docker (Version 17.12.0-ce-mac49 (21995)) here. I am trying to Dockerize an existing Spring Boot app. Here's my Dockerfile:
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
ADD application.yml /opt/myapp
ADD logback.groovy /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
Here's my Spring Boot application.yml config file. As you can see, it expects Docker to inject environment variables from an env file:
logging:
  config: 'logback.groovy'
server:
  port: 9200
  error:
    whitelabel:
      enabled: true
spring:
  cache:
    type: none
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://${DB_HOST}:3306/myapp_db?useSSL=false&nullNamePatternMatchesAll=true
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    testWhileIdle: true
    validationQuery: SELECT 1
  jpa:
    show-sql: false
    hibernate:
      ddl-auto: none
      naming:
        physical-strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
        implicit-strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
    properties:
      hibernate.dialect: org.hibernate.dialect.MySQL5InnoDBDialect
      hibernate.cache.use_second_level_cache: false
      hibernate.cache.use_query_cache: false
      hibernate.generate_statistics: false
      hibernate.hbm2ddl.auto: validate
myapp:
  detailsMode: ${DETAILS_MODE}
  tokenExpiryDays:
    alert: 5
  jwtInfo:
    secret: ${JWT_SECRET}
    expiry: ${JWT_EXPIRY}
  topics:
    adminAlerts: admin-alerts
Here's my myapp-local.env file:
DB_HOST=localhost
DB_USERNAME=root
DB_PASSWORD=
DETAILS_MODE=Terse
JWT_SECRET=12345==
JWT_EXPIRY=86400000
It's worth noting that in the env file above I have tried localhost, 127.0.0.1, and 172.17.0.1, and all of them produce the identical errors below.
Then I build the container:
docker build -t myapp .
Success! Then I run the container:
docker run -it -p 9200:9200 --net="host" --env-file myapp-local.env --name myapp myapp
...and I watch the container quickly die with MySQL connection-related exceptions (it can't connect to the MySQL instance running locally). I can confirm that the Spring Boot app has no problem connecting to MySQL when it runs as an executable ("fat") jar outside of Docker, and that the local MySQL instance is up and running and perfectly healthy.
Unable to connect to database. }com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:590)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:57)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1606)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:633)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:347)
When I turn TRACE-level logging on, I see it is trying to connect to:
url=jdbc:mysql://localhost:3306/myapp?useSSL=false&nullNamePatternMatchesAll=true
So Docker does look to be properly injecting the env file's vars into the Spring YAML-based config. This doesn't feel like a config issue, then, but rather an issue with the container reaching the MySQL port on the Docker host.
Can anybody see where I'm going awry?
Accessing the host machine from within a container is not recommended. Usually it can be solved by wrapping the service you need in a container and accessing it via the container name.
There is no clean solution, only workarounds; you can use one of them:
On Mac you can access host services using the docker.for.mac.host.internal DNS name.
You need to set the environment variable like this:
DB_HOST=docker.for.mac.host.internal
and refer to DB_HOST from your connection string.
For more details see the documentation:
From 17.12 onwards our recommendation is to connect to the special
Mac-only DNS name docker.for.mac.host.internal, which resolves to the
internal IP address used by the host.
Note: having --net="host" doesn't let you reach the host machine via localhost. localhost always points to the local machine, but when it is resolved from within a container, it points to the container itself.
So basically the Docker app is not in the same network as the host you're running it from, and that's why you can't access MySQL by pointing at localhost (that is a different network from Docker's point of view).
What you could try is running docker with the --net="host" option, so that it shares the host's network.
You can find a better explanation of this issue in the topic From inside of a Docker container, how do I connect to the localhost of the machine?
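Putting the accepted suggestion together, a minimal sketch of the fix, assuming the rest of the env file stays unchanged (note that --net="host" is dropped, since host networking is not supported by Docker for Mac):

# myapp-local.env (other variables unchanged)
DB_HOST=docker.for.mac.host.internal

docker run -it -p 9200:9200 --env-file myapp-local.env --name myapp myapp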

How to override embedded DNS server for Docker in /etc/resolv.conf from a docker-compose file

I am using a docker-compose command to create and start my containers.
My Docker Version
docker --version
Docker version 17.09.0-ce, build afdb6d4
My Docker-Compose version
docker-compose --version
docker-compose version 1.16.1, build 6d1ac21
The .yml file that I'm using looks something like this (note that I've shortened it to take sensitive things out):
---
services:
  zookeeper:
    image: "zookeeper"
  server-1:
    cap_add:
    - "NET_ADMIN"
  server-0:
    cap_add:
    - "NET_ADMIN"
    dns:
    - 8.8.8.8
    - 9.9.9.9
    environment:
      SERVER_ID: 0
      NETEM_HOSTS: ""
      LOSS_VALUES: ""
      MAX_RATE_VALUES: ""
      DELAY_VALUES: ""
    image: "cloud.mycompany.com:5000/server-0:latest"
  fakedns:
    image: "cloud.mycompany.com:5000/fakedns:latest"
version: "3.3"
Then I start using:
docker-compose --file compose.yml up -d
My questions are these:
1) After the containers come up, when I go into a container (e.g. server-0 in this case), I don't see the /etc/resolv.conf file updated to use these nameservers. Instead it uses Docker's embedded DNS, which is 127.0.0.11.
2) How do I make sure that it uses what I specify in the file used by docker-compose?
3) I tried to do this with the docker run command and it seems to work, but I need to do it from the compose file:
docker run -p 4000:53 --dns=8.8.8.8 cloud.mycompany.com:5000/server-0:latest
4) Ideally, I want to give it the IP address of the 'fakedns' container so that it uses that one instead of the embedded one at 127.0.0.11.
You won't see custom DNS servers in /etc/resolv.conf but Docker's resolver will forward DNS requests to them.
User Defined Networks and DNS
Docker compose definitions that are v2+ create a user defined network by default.
Docker with a user defined network uses an embedded DNS server so that Docker can respond for local container requests (service discovery).
For any DNS hosts Docker can not resolve, the request will be forwarded onto a DNS server. This is either the system default server, the server configured in dockerd or the DNS server configured for the container at run time.
Docker DNS
Be careful when using internal DNS servers. Things in the Docker daemon will break if you point the system's DNS at a container, as you create a chicken-and-egg problem: Docker needs DNS to start, but it can't start the container that provides DNS.
As your example config only sets the DNS for one app container it should be OK, but make sure the DNS container is up and healthy before your application starts (see the sketch below).
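For point 4 of the question, the dns: option expects an IP address rather than a service name, so one possible sketch is to pin the fakedns container to a static address on a user-defined network; the subnet and address below are made up:

version: "3.3"
services:
  fakedns:
    image: "cloud.mycompany.com:5000/fakedns:latest"
    networks:
      testnet:
        ipv4_address: 172.25.0.53    # made-up static address
  server-0:
    image: "cloud.mycompany.com:5000/server-0:latest"
    dns:
    - 172.25.0.53                    # forward unresolved names to fakedns
    networks:
    - testnet
networks:
  testnet:
    ipam:
      config:
      - subnet: 172.25.0.0/16        # made-up subnet

Container lookups still hit the embedded resolver at 127.0.0.11 first; the dns: entry only controls where that resolver forwards names it cannot answer itself.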

JMX doesn't work in Tomcat

I have the following configuration:
I deployed a sample SymmetricDS engine in Tomcat 8. It should have a JMX MBean I can connect to. The configuration file symmetric-server.properties has the following values:
# Enable Java Management Extensions (JMX) web console.
#
jmx.http.enable=true
# Port number for Java Management Extensions (JMX) web console.
#
jmx.http.port=31417
# Enable Java Management Extensions (JMX) remote agent.
#
jmx.agent.enable=true
# Port number for the Java Management Extensions (JMX) remote agent.
#
jmx.agent.port=31418
And yet when I go to localhost:31417 I get a 404, and when I launch JConsole this application is nowhere to be found.
But when I start SymmetricDS with the command bin\sym, it launches using the embedded Jetty server; then I can see the HTTP adaptor on localhost:31417 and can connect via JConsole to the local application, yet I still cannot connect remotely to localhost:31418.
I downloaded the sources of SymmetricDS, and in the file
symmetric-server\src\main\java\org\jumpmind\symmetric\SymmetricWebServer.java
only three configuration values are taken from symmetric-server.properties; judging by the default values, they are jmx.http.port for the HTTP adaptor, https.port for HTTPS, and http.port for the SymmetricWebServer itself.
I also tried changing jmx.agent.enable to false and manually overriding java command line options in sym_service.conf by adding:
wrapper.java.additional.13=-Dcom.sun.management.jmxremote
wrapper.java.additional.14=-Dcom.sun.management.jmxremote.port=31417
wrapper.java.additional.15=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.16=-Dcom.sun.management.jmxremote.ssl=false
to no avail.
Could you please help me: what am I doing wrong?
Update
After grepping the sources I found SystemConstants.java, in which again there were ports for http, https, and jmx.http, but none for the remote agent.
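For reference, a generic sketch of the standard JVM flags for exposing a remote JMX agent, independent of SymmetricDS's own jmx.* properties (the port mirrors the question; app.jar is a placeholder):

java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=31418 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Djava.rmi.server.hostname=localhost \
     -jar app.jar

# then attach with:
jconsole localhost:31418

If the wrapper flags from the question are in effect, it is also worth checking that nothing else already binds the chosen port and that the RMI hostname resolves from the client.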
