I'm using Kubernetes and I have a pod containing an Ignite DB. I added another container to the pod - sscaling/jmx-prometheus-exporter:latest.
I read in the project's GitHub README that I should run this:
To run as a javaagent download the jar and run:
java -javaagent:./jmx_prometheus_javaagent-0.14.0.jar=8080:config.yaml -jar yourJar.jar
but I didn't understand: should I also download that jar file for the Kubernetes container as well?
Can someone assist with how I can continue from here?
I have the following default configuration:
---
hostPort: localhost:5555
username:
password:
rules:
- pattern: ".*"
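For orientation: the javaagent quoted above runs inside the Ignite JVM itself, so it replaces the separate exporter container; with the sscaling/jmx-prometheus-exporter sidecar you don't download that jar, but the exporter reaches Ignite over remote JMX, so Ignite has to expose a JMX port matching hostPort in the config above. A rough sketch of the sidecar wiring, where the Ignite image, the JVM_OPTS variable, the exporter's listen port (5556) and its config path are all assumptions to verify against the respective images' documentation:

apiVersion: v1
kind: Pod
metadata:
  name: ignite
spec:
  containers:
    - name: ignite
      image: apacheignite/ignite          # assumption: your Ignite image
      env:
        - name: JVM_OPTS                  # assumption: the image passes this variable to the JVM
          value: >-
            -Dcom.sun.management.jmxremote
            -Dcom.sun.management.jmxremote.port=5555
            -Dcom.sun.management.jmxremote.local.only=false
            -Dcom.sun.management.jmxremote.authenticate=false
            -Dcom.sun.management.jmxremote.ssl=false
    - name: jmx-exporter
      image: sscaling/jmx-prometheus-exporter:latest
      ports:
        - containerPort: 5556             # assumption: port the exporter serves /metrics on
      volumeMounts:
        - name: jmx-exporter-config
          mountPath: /opt/jmx_exporter/config.yml   # assumption: config path expected by the image
          subPath: config.yml
  volumes:
    - name: jmx-exporter-config
      configMap:
        name: jmx-exporter-config         # ConfigMap holding the YAML shown above

Containers in the same pod share localhost, which is why the default hostPort: localhost:5555 can work, provided Ignite actually listens for JMX on 5555.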
In my Spring Boot app, I have a docker-compose.yml file that is used for Dockerizing my app together with a Dockerfile. On the other hand, for local development, other developers would also need a docker-compose.yml file for creating MySQL in a local Docker environment.
For this kind of general situation, what is the proper way to provide the configuration for local development besides Dockerizing? I looked, but there seems to be no docker-compose-dev.yml file usage as far as I can see. So, what should I do? Where should I keep my compose config? I think I can use whatever file name I like, but in that case should it be in a different location than the default one?
Name the developer-oriented file docker-compose.override.yml. If this file is present, Compose will use its settings to augment and override the settings in the main docker-compose.yml. See Multiple Compose files in the Docker documentation for more details.
Typically if you have this file, you'll do pre-production testing and deployment using only the main docker-compose.yml file but not the override file. This works better if the settings in the override file are purely additive.
You might have a production-oriented Compose file like, for example,
# docker-compose.yml
version: '3.8'
services:
  app:
    image: registry.example.com/app:${APP_TAG:-latest}
    environment: [MYSQL_HOST, MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD]
where most of the interesting settings are injected from host environment variables, and the container is expected to talk to an external database. In development, though, you might want to build this image from source and run a database as part of the setup:
# docker-compose.override.yml
version: '3.8'
services:
  app:
    build: .
    environment:
      MYSQL_HOST: mysql
      et: cetera
  mysql:
    image: mysql
    volumes: ['mysql:/var/lib/mysql']
    environment: {...}
volumes:
  mysql:
The settings from the two files are combined, so if you run
APP_TAG=20221109 docker-compose build
you'll wind up with an image named registry.example.com/app:20221109, for example.
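As a usage note: when docker-compose.override.yml exists, Compose merges it in automatically, so development is just docker-compose up, while a pre-production run names the files explicitly so the override is skipped:

# development: docker-compose.yml + docker-compose.override.yml merged automatically
docker-compose up --build

# pre-production / deployment: only the main file
docker-compose -f docker-compose.yml up -d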
I have a CI/CD pipeline in GitLab that publishes GitLab secrets to a Spring Boot application as follows:
docker-compose.yml:
services:
  my-app:
    image: my-app-image:latest
    environment:
      - SPRING_PROFILES_ACTIVE=test
      - app.welcome.secret=$APP_WELCOME_SECRET
Of course, I defined APP_WELCOME_SECRET as a variable on GitLab's CI/CD variables configuration page.
Problem: running the gitlab-ci pipelines results in:
The APP_WELCOME_SECRET variable is not set. Defaulting to a blank string.
But why?
I could solve it by writing the secret value into a .env file as follows:
gitlab-ci.yml:
deploy:
  stage: deploy
  script:
    - touch .env
    - echo "APP_WELCOME_SECRET=$APP_WELCOME_SECRET" >> .env
    - scp -i $SSH_KEY docker-compose.yml .env user@$MY_SERVER:~/deployment/app
While that works, I would still be interested in a better approach.
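As one sketch of an alternative (not necessarily the better approach you are after): since the .env file ends up next to the compose file on the server anyway, you can hand it to the container directly with env_file instead of interpolating each value, which avoids depending on APP_WELCOME_SECRET being exported in the shell that runs docker-compose:

services:
  my-app:
    image: my-app-image:latest
    env_file: .env   # contains APP_WELCOME_SECRET=...; Spring Boot's relaxed binding should map it to app.welcome.secret
    environment:
      - SPRING_PROFILES_ACTIVE=test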
I have a Spring Web Application in a DevOps repository, with a .yml that looks like this (as generated by the tool in the DevOps web client):
# Build your Java project and deploy it to Azure as a Linux web app
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/java
trigger:
- master

variables:
  Version: '0.0.1'

  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: '-snipped-'

  # Web app name
  webAppName: 'SlackCentralTestApp'

  # Environment name
  environmentName: 'SlackCentralTestApp'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: MavenPackageAndPublishArtifacts
    displayName: Maven Package and Publish Artifacts
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Maven@3
      displayName: 'Maven Package'
      inputs:
        mavenPomFile: 'pom.xml'
        publishJUnitResults: true
        testResultsFiles: '**/surefire-reports/TEST-*.xml'
        javaHomeOption: 'JDKVersion'
        jdkVersionOption: '1.11'
        mavenVersionOption: 'Default'
        mavenAuthenticateFeed: false
        effectivePomSkip: false
        sonarQubeRunAnalysis: false
    - task: CopyFiles@2
      displayName: 'Copy Files to artifact staging directory'
      inputs:
        SourceFolder: '$(System.DefaultWorkingDirectory)'
        Contents: '**/target/SlackbotTest-$(Version)-SNAPSHOT.?(war|jar)'
        TargetFolder: $(Build.ArtifactStagingDirectory)
    - upload: $(Build.ArtifactStagingDirectory)
      artifact: drop

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: DeployLinuxWebApp
    displayName: Deploy Linux Web App
    environment: $(environmentName)
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            displayName: 'Azure Web App Deploy: SlackCentralTestApp'
            inputs:
              azureSubscription: 'Azure for Students (4998490e-1bc4-43fc-a370-80744706d1f5)'
              appType: 'webAppLinux'
              appName: 'SlackCentralTestApp'
              package: '$(Pipeline.Workspace)/drop/target/SlackbotTest-$(Version)-SNAPSHOT.jar'
              runtimeStack: 'JAVA|11-java11'
              startUpCommand: 'java -jar $(Pipeline.Workspace)/drop/target/SlackbotTest-$(Version)-SNAPSHOT.jar'
The deployment process seems to be successful as seen in this screenshot, yet when I take a look at the server log, I sadly get greeted with the following error:
2020-04-14T09:03:55.658599508Z Initializing App Insights applicationinsights-agent-codeless-2.5.0.jar....
2020-04-14T09:03:55.669449167Z STARTUP_FILE=
2020-04-14T09:03:55.676322304Z STARTUP_COMMAND=java -jar /home/vsts/work/1/drop/target/SlackbotTest-0.0.1-SNAPSHOT.jar
2020-04-14T09:03:55.676350305Z No STARTUP_FILE available.
2020-04-14T09:03:55.676428905Z Running STARTUP_COMMAND: java -jar /home/vsts/work/1/drop/target/SlackbotTest-0.0.1-SNAPSHOT.jar
2020-04-14T09:03:55.681352232Z Error: Unable to access jarfile /home/vsts/work/1/drop/target/SlackbotTest-0.0.1-SNAPSHOT.jar
2020-04-14T09:03:55.694784805Z Finished running startup command 'java -jar /home/vsts/work/1/drop/target/SlackbotTest-0.0.1-SNAPSHOT.jar'. Exiting with exit code 1.
This would lead me to conclude that the path as given in the 'startUpCommand' property from the .yml file is incorrect, yet I have not been able to find what the correct path should be.
I have attempted the following:
Specify no directories, only the filename. Leads to the same result, sadly.
Use the 'find' command in Bash to find any .jars on the Web App, which tells me that there are none.
Manually building the application from the Web App does not seem to be an option either, as it's a Java 11 application and the java SE that seems to be included is version 1.8, regardless of the version I specify during the creation of the Web App resource in Azure.
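For what it's worth, the path in the error is the build agent's workspace: $(Pipeline.Workspace) expands to /home/vsts/work/1 while the pipeline runs, and that directory does not exist on the Web App itself. On a Linux Web App the deployed package lands under /home/site/wwwroot, so a hedged sketch of the deploy step would point the startup command there instead (the exact file name under wwwroot is an assumption; verify it, for example via the Kudu/SSH console):

- task: AzureWebApp@1
  inputs:
    azureSubscription: '-snipped-'
    appType: 'webAppLinux'
    appName: 'SlackCentralTestApp'
    # package: is resolved on the build agent, so the $(Pipeline.Workspace) path is fine here
    package: '$(Pipeline.Workspace)/drop/target/SlackbotTest-$(Version)-SNAPSHOT.jar'
    runtimeStack: 'JAVA|11-java11'
    # startUpCommand runs on the Web App, so it must reference the deployed location rather than
    # the agent workspace (file name assumed; alternatively omit startUpCommand and let the
    # platform's default startup pick up the deployed jar)
    startUpCommand: 'java -jar /home/site/wwwroot/SlackbotTest-$(Version)-SNAPSHOT.jar'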
We were using the cf-uaa's gradle tasks to create a docker image but those have been removed in the latest version. I've loaded the war in a recent version, but the service does not seem to be starting correctly.
I've been building the war from the v74 tag, adding it to tomcat:8.5.45-jdk12-openjdk-oracle or tomcat:9.0.24-jdk12-openjdk-oracle, and setting the various env vars that we were passing in to the previous image. I'm not seeing any log entries after the initial tomcat output stating that my war has been deployed and the server startup time.
The Dockerfile is basically just an adaptation of what was being passed in the previous image:
FROM tomcat:8.5.45-jdk12-openjdk-oracle
#FROM tomcat:9.0.24-jdk12-openjdk-oracle
ENV LOGIN_CONFIG_URL WEB-INF/classes/required_configuration.yml
ENV UAA_CONFIG_PATH /uaa
RUN bash -c "rm -r /usr/local/tomcat/webapps/ROOT"
RUN bash -c "rm -r /usr/local/tomcat/webapps/host-manager"
RUN bash -c "rm -r /usr/local/tomcat/webapps/manager"
RUN bash -c "rm -r /usr/local/tomcat/webapps/examples"
RUN bash -c "rm -r /usr/local/tomcat/webapps/docs"
ADD *.war /usr/local/tomcat/webapps/uaa.war
RUN bash -c "echo $LOGIN_CONFIG_URL"
EXPOSE 8080
I would expect to see the service responding to my requests, or some errors in the log indicating that the war failed to deploy. I am not currently getting any log output generated from the application code. When I send a request to the service, the response is a 500 with an error header from the service.
X-Cf-Uaa-Error:Server failed to start. Possible configuration error.
Update: I've located the UAA logs at .../tomcat/logs/uaa.log. I'm not seeing anything indicating that the service failed to deploy, but I am also not seeing anything to indicate that it is picking up the env vars I have set in the container. I recreated the service using the war from the original setup, which started successfully with the uaa.yml that I mounted as a volume. Comparing the logs, the original setup's first log entry is YamlProcessor, which does not show up in the v75 logs at all. In fact, no debug entries show up at all, which suggests to me that my LOG_LEVEL env var is not propagating either.
Update 2: We reverted the image base to FROM tomcat:8.5-jre8 and started seeing Flyway errors in uaa.log. Our previous datasource URL format was url: jdbc:postgresql://${POSTGRES_NAME}:5432/${DB}?currentSchema=uaa, which caused a Flyway exception. After removing the schema reference, it created the tables in the public schema. By creating the uaa schema manually before starting the service, it was able to run with the original format. The Flyway version has been updated, so perhaps there is something new that needs to be set.
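For reference, the manual step mentioned above (creating the uaa schema up front so Flyway can migrate into it when currentSchema=uaa is used) is a single statement in Postgres:

-- run once against the UAA database before starting the service
CREATE SCHEMA IF NOT EXISTS uaa;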
The application seems to be running, but when I try to get a token at /uaa/oauth/token I get a 500 with this error in the logs: Caused by: java.lang.NoSuchMethodError: java.nio.CharBuffer.limit(I)Ljava/nio/CharBuffer;
Since January 2021, UAA server Docker images are available in the cloudfoundry/uaa Docker Hub repository.
docker pull cloudfoundry/uaa:75.0.0
See its Dockerfile for more details.
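A minimal way to try the published image with an externally supplied uaa.yml might look like this; the exposed port and config path are assumptions, so check them against the linked Dockerfile:

docker run -d -p 8080:8080 \
  -e UAA_CONFIG_PATH=/uaa \
  -v "$(pwd)/uaa.yml:/uaa/uaa.yml" \
  cloudfoundry/uaa:75.0.0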
Can you try the following?
https://github.com/hortonworks/docker-cloudbreak-uaa
This works very well.
The following situation:
I have a Spring Boot Application
which runs in a Docker swarm
but fails to start because it was not properly configured (a property is missing).
It seems to me that the docker swarm always tries to restart the container, but always fails because of the missing property.
The restart makes no sense because docker will never be able to start the application unless I fix the missing property.
So currently the swarm ends in an endless loop.
Regarding this problem I already read:
The docker documentation: https://docs.docker.com/config/containers/start-containers-automatically/
and several StackOverflow posts: https://stackoverflow.com/search?q=Docker+restart
My "setup":
The dockerfile:
ARG nexus_docker_registry=mynexus.com:10099
FROM ${nexus_docker_registry}/openjdk:8-jdk-alpine
ADD myjar.jar myjar.jar
ENV JAVA_OPTS=""
ENTRYPOINT [ "java", "-jar", "/myjar.jar" ]
My YML file (an Ansible playbook) to create the docker service:
---
- hosts: docker_manager
  become: false
  vars:
    servicename: 'myservice'
    imageurl: "mynexus.com:10099/myjar:{{version}}"
    extraoptions:
      - "--with-registry-auth"
      - "--detach=true"
      - "--log-driver gelf"
      - "--log-opt 'gelf-address=udp://{{ groups['logstash'][0] }}:10001'"
      - "--hostname 'myhost.com'"
      - "--mount 'type=bind,source=/etc/localtime,destination=/etc/localtime:ro'"
      - "--mount 'type=volume,source=mykeys,destination=/mykeys'"
      - "--env 'spring.profiles.active=docker'"
      - "--publish 8000:6666"
  tasks:
    - name: Include vault
      include_vars: "myvault.yml"
    - name: "delete service '{{ servicename }}'"
      command: sudo docker service rm "{{ servicename }}"
      args:
        warn: false
      ignore_errors: true
      run_once: true
    - name: "create service {{ servicename }}"
      command: sudo docker service create {{ extraoptions | join( ' ' ) }} --name "{{ servicename }}" "{{ imageurl }}"
      args:
        warn: false
      run_once: true
What I want to achieve is:
If the spring boot application is not able to start because of for example a BeanCreationException or something similar, then I don't want the docker service to restart endlessly.
If I restart the swarm etc. the docker service should restart automatically.
In the docker documentation is written:
If you manually stop a container, its restart policy is ignored until the Docker daemon restarts or the container is manually restarted. This is another attempt to prevent a restart loop.
So I guess that what I want to achieve is not possible with a restart policy.
Questions:
But maybe I can write something in my Dockerfile so that I achieve my goals?
Or am I totally wrong here and misinterpreting the documentation?
I am unfortunately not a docker expert and still learning to handle 'the swarm'.
There are 4 different restart policies in Docker:
no - Do not automatically restart the container. (the default)
on-failure - Restart the container if it exits due to an error, which manifests as a non-zero exit code.
always - Always restart the container if it stops
unless-stopped - Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after Docker daemon restarts.
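For a plain container these map onto the --restart flag, for example:

docker run --restart no my-image
docker run --restart on-failure:5 my-image     # give up after 5 failed restart attempts
docker run --restart unless-stopped my-image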
There is no way for docker to "detect" a type of error from an application and restart or not depending on that.
One way to achieve this is to use supervisord within your container and let it handle the restart depending on a list of exit codes that you define. But this means that your container will only restart when supervisord crashes, not when your application does, and you'll have to change your code to return different exit codes for the errors that should trigger a restart and the ones that shouldn't.
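A rough sketch of such a supervisord program section; the program name, the command and the choice of "expected" exit codes are assumptions about your application:

; supervisord.conf fragment (sketch)
[program:myapp]
command=java -jar /myjar.jar
autorestart=unexpected   ; restart only on exit codes not listed in exitcodes
exitcodes=0,3            ; e.g. have the app exit with 3 on a configuration error, so it is not restarted
startretries=3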
Because what I wanted to achieve does not seem to be possible, I read the documentation again (https://docs.docker.com/engine/reference/commandline/service_create/) and found the option --restart-max-attempts, which solves my problem with the endless loop.
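Combined with --restart-condition on-failure, the relevant options look roughly like this (values are illustrative):

sudo docker service create \
  --restart-condition on-failure \
  --restart-max-attempts 3 \
  --restart-delay 10s \
  --name myservice mynexus.com:10099/myjar:1.2.3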
You may want to try and implement the creation of a docker stack based on a docker-compose file.
In this scenario, as the compose v3 documentation indicates, you have full control over the service restart policy.
The following example won't allow any restarts:
version: "3.9"
services:
python:
image: my_user/my_repo:my_container
volumes:
- /home/python:/home
deploy:
restart_policy:
condition: none
You can adjust the restart_policy block with condition: [none | on-failure | any] and with max_attempts: [your_int]
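To apply it, deploy the file as a stack; the deploy: block, including restart_policy, is designed for swarm services:

docker stack deploy --compose-file docker-compose.yml mystack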