I'm setting up a new environment for my application. We currently only have a "prod" environment, and I want to create a testing environment. For that, I configured two Spring profiles, "test" and "prod", and created a new branch called "test" that holds the test environment; pushing that branch to master acts as a kind of "promotion" to production.
This is an extract of our application.yml:
spring:
  profiles: test
{some properties...}
---
spring:
  profiles: prod
{some properties...}
We are using Heroku to deploy our app, with repositories on Azure DevOps, where we also have a pipeline that runs when we push commits to master; this pipeline pushes the Azure DevOps master branch to the Heroku repository. In Heroku we have an application created for "staging"; we haven't added a "production" application yet (not sure if it's relevant, but I wanted to clarify that).
This is the pipeline:
git checkout $(Build.SourceBranchName)
git remote add heroku https://heroku:$(pat)@git.heroku.com/app-hto.git
git push heroku $(Build.SourceBranchName)
To specify the profile I'm using the Procfile in my Java project, which contains:
web: java -Dspring.profiles.active=prod -Dserver.port=$PORT $JAVA_OPTS -jar target/api-0.0.1-SNAPSHOT.jar
As you can see I'm not a Heroku expert, so I don't know how to proceed. My question is: how can I specify which profile to use for each environment? Is there a way to accomplish that using Azure DevOps pipelines?
Azure DevOps might be able to accomplish that, but it would be complicated.
It will be easier to achieve this with Heroku. Heroku itself provides ways to control which profile is active, either via the CLI, the dashboard, or the API; for details, see the Heroku docs on config vars.
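For example, with the Heroku CLI you can set the profile per app via a config var; Spring Boot picks up the SPRING_PROFILES_ACTIVE environment variable automatically. A minimal sketch, assuming your staging app is the app-hto from your pipeline:

heroku config:set SPRING_PROFILES_ACTIVE=test --app app-hto
# a future production app would get its own value:
# heroku config:set SPRING_PROFILES_ACTIVE=prod --app <your-prod-app>

Note that a -Dspring.profiles.active=prod system property in the Procfile takes precedence over the environment variable, so you would drop that flag from the Procfile for the config var to take effect.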
Hope this points you in the right direction.
I am planning to use Gradle as the build tool, with Docker for containerizing Spring Boot applications.
I currently have one question regarding best practices and pros/cons:
a. from a general best-practice perspective;
b. from a CI/CD perspective.
I have understood that I can do it in three ways:
1. Run the Gradle build on your host machine, then dockerize your Spring Boot app.
eg:
./gradlew build
docker build -f Dockerfile ...
2. Run the Gradle build inside the Dockerfile itself.
For option 2, I got inspiration from the folks at DockerCon (https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/).
eg:
FROM gradle:4.7.0-jdk8-alpine AS build
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle build --no-daemon
FROM openjdk:8-jre-slim
EXPOSE 8080
RUN mkdir /app
COPY --from=build /home/gradle/src/build/libs/*.jar /app/spring-boot-application.jar
ENTRYPOINT ["java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-Djava.security.egd=file:/dev/./urandom","-jar","/app/spring-boot-application.jar"]
There are other articles as well:
https://codefresh.io/docs/docs/learn-by-example/java/gradle/
https://codefresh.io/docker-tutorial/java_docker_pipeline/
For option 2, I would also like to point out that:
a. I plan to use Docker's mount option instead of rebuilding the image again and again to reflect local changes.
b. I plan to leverage multi-stage builds, so that we can discard the heavy Gradle input and keep only the jar in the final output.
3. Use buildpacks, Jib, or the Spring Boot build-image command.
Any ideas? If anyone has experienced pros and cons in this area, please share.
After almost 7 years of building Docker images from Gradle, long before Docker became commonplace, I've never done option 2. I've done options 1 and 3, primarily 3.
The problem with #1 is that you lose the information from your Gradle project that can be used to build the image, like the location of the jar file and the project name (there are several others). You end up redefining them on the command line, and the result could be very different.
The problem with #2 is the loss of developer productivity and the conflation of responsibilities. I can't imagine building a Docker image every time I make a change to the code. Gradle is a build tool, Docker is a delivery mechanism, and they have different goals.
There are many articles you can find online about building Docker images that apply equally well to Spring applications. Most notably:
Use layers to avoid rebuilding parts that have not changed.
Use a Gradle Docker plugin to integrate the Docker build into Gradle. I'm not sure whether the Spring Boot plugin has an integrated Docker build task now; if so, use that (see the sketch below).
Use a JRE as the base image instead of a JDK if your code can live with that. Many projects don't need a JDK to run.
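On that middle point: recent versions of the Spring Boot Gradle plugin (2.3 and later) do ship an integrated image-building task, bootBuildImage, which uses Cloud Native Buildpacks rather than a Dockerfile. A minimal sketch (the image name is an assumption, adjust to your registry):

# builds an OCI image from the Gradle project, no Dockerfile required
./gradlew bootBuildImage --imageName=mycompany/myapp:0.0.1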
After a lot of thought and research, I have switched to buildpacks.
I have also included the image build process in the build via the Maven plugin.
Buildpacks are a mature way of building Docker images with all the best practices and security in mind.
A Dockerfile is a good option if you have a really custom requirement that does not fit the ideal scenarios and requires custom development.
An example of buildpacks:
https://paketo.io/docs/howto/java/
With this Java buildpack you can configure:
SSL certificates
JVM options
remote debugging options
JVM vendor switching
a custom launch command
and the list goes on...
Also, you do not need to worry about all the best practices (https://docs.docker.com/develop/develop-images/dockerfile_best-practices/); buildpacks handle that for you.
With a Dockerfile, you might forget to apply standard practices to your image (especially when dealing with a lot of services/images).
So it's another piece of code to maintain, and you have to ensure you follow all the practices correctly.
You can also include buildpacks in your builds,
e.g. via the Spring Boot Gradle or Maven plugin (see the sketch after the refs):
ref: https://docs.spring.io/spring-boot/docs/current/maven-plugin/reference/htmlsingle/#build-image
ref: https://tanzu.vmware.com/developer/blog/understanding-the-differences-between-dockerfile-and-cloud-native-buildpacks/
ref: https://cloud.google.com/blog/topics/developers-practitioners/comparing-containerization-methods-buildpacks-jib-and-dockerfile
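As a concrete sketch of the plugin route (the image name is an assumption): the Spring Boot Maven plugin exposes a build-image goal that drives the buildpacks for you:

# build an OCI image with Cloud Native Buildpacks via the Spring Boot Maven plugin
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=mycompany/myapp:0.0.1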
We have a legacy application written in Java using Apache Struts 1.x and Spring 2.x that we want to containerize.
The problem we have is the way this project is configured: through Maven properties and profiles (one for each environment) that are turned into properties files. These properties are baked into the WAR at build time.
What would be the correct way to create an image of this application without modifying the project code? That is, so that the configuration is somehow externalized, for example in environment variables. Or maybe it should live in a volume?
So far what we have is a two-stage Dockerfile: the first stage compiles with Maven using a specific profile, and the second stage copies the WARs into a Tomcat image. But done this way, the generated Docker image is not environment-independent, which is what we want to achieve.
Thanks!
Spring Cloud Config Server provides an HTTP resource-based API for external configuration (name-value pairs or equivalent YAML content). The server is embeddable in a Spring Boot application by using the @EnableConfigServer annotation.
So you would deploy the Spring Cloud Config Server in one container holding all the environment configurations: https://cloud.spring.io/spring-cloud-config/multi/multi__spring_cloud_config_server.html
Once it's deployed, you can easily deploy your application's Docker image in different environments using a bootstrap.yml that points at the config server based on the profile (dev/uat/staging/prod):
server:
  port: 9092
spring:
  application:
    name: application-name
  cloud:
    config:
      uri: REPLACE_CLOUD_CONFIG_URI (https://<spring_cloud_config_url>:8888/)
  profiles:
    active: REPLACE_PROFILE (dev/uat/staging/prod)
management:
  endpoints:
    web:
      exposure:
        include: refresh
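With that in place, the same image can be pointed at a different environment at run time. A minimal sketch (image name and config-server host are assumptions); Spring Boot's relaxed binding maps these environment variables onto the placeholders above:

# same image, different environment, chosen at run time
docker run -d \
  -e SPRING_PROFILES_ACTIVE=uat \
  -e SPRING_CLOUD_CONFIG_URI=http://config-server:8888 \
  mycompany/legacy-app:latest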
I have created a simple Selenium + Java Maven project locally on my machine. I am able to execute the tests locally, triggering them with Maven.
I want to trigger them from Jenkins, which is installed on a remote machine (my company's QA server).
I am using the option 'custom workspace' of Jenkins.
As Jenkins is on the server, it's not able to understand/locate the local path
'C:\Automation\MavenProject\'
How can I achieve this?
You can do this with the master-slave concept in Jenkins. The slave machine would be your Windows machine, which connects to the Jenkins master with the help of some jars. You need to create a node on your Jenkins server, and after adding the configuration, download the corresponding slave jars onto your machine. Once you execute those jars on your local machine, they will connect to the Jenkins server, and your Jenkins job can then perform further activities on your slave machine.
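Launching the agent usually boils down to a single command, which the node's page in Jenkins displays once the node is created. A sketch (server URL, node name, secret, and work directory are placeholders):

# run this on the Windows machine; the exact values come from the node's page in Jenkins
java -jar agent.jar -jnlpUrl http://<jenkins-server>/computer/<node-name>/slave-agent.jnlp -secret <secret> -workDir C:\jenkins-agent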
Also, in the Jenkins job, you need to pin the job to this node by enabling the following option:
Restrict where this project can be run
1. Create a Maven project in Jenkins; you may have to install a plugin if it's not visible.
2. Push your code to some type of version control, like GitHub (see the git sketch after this list).
3. Configure Jenkins with the GitHub build information and the POM location.
4. Since your Jenkins is hosted on a publicly accessible server, download the GitHub integration plugin and configure Jenkins with the 'GitHub hook trigger for GITScm polling' option, plus the matching webhook on the GitHub side.
This should get you going. Also try Google or YouTube; there are lots of solutions. One example:
https://www.youtube.com/watch?v=PhxZamqYJws
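For step 2, the push itself is plain git; a minimal sketch (the repository URL is a placeholder):

# from the project root on your local machine
git init
git add .
git commit -m "Add Selenium test project"
git remote add origin https://github.com/<your-user>/<your-repo>.git
git push -u origin master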
I am trying to set up an application for a university project. It is a Maven/Tomcat Spring Boot application (a website) which I have coded on my local machine using STS.
The application works fine on my local machine, meaning I've compiled it into a jar file, run it, and I can see it from localhost. Links, cookies and everything work as intended. Now I want to run it on a Google Cloud VM instance with Tomcat preinstalled and a static IP address, but I am quite unsure how to do it.
I tried using scp to transfer the jar file (along with all resources and classes) to my VM instance and ran it from there. But when I kill all tomcat8 processes and run my file, I am still greeted by the classic "It works!" page, not my pages.
I am very new to these things, so be aware that I may be oversimplifying the process. Should I plug the files into some specific folder? Any insight as to how I should proceed is more than welcome. Thank you all for your time. ~Mike
If you want to deploy the .jar file via the preinstalled Tomcat, follow the steps below (note that Tomcat's webapps folder deploys WAR files, so the application needs to be packaged as a WAR for this route; an executable Spring Boot jar can instead be run directly, as sketched below):
1. scp app.jar <IP>:.
2. cp /home/ubuntu/app.jar /opt/tomcat/webapps/
3. cd /opt/tomcat/bin/   # assuming Tomcat is properly installed with users configured
4. ./catalina.sh start   # start Tomcat
The above setup should be sufficient to deploy the application. If you monitor the Tomcat startup logs, you will see the URL under which the application was registered.
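Alternatively, since a Spring Boot jar ships with its own embedded Tomcat, you can skip the preinstalled Tomcat entirely and run the jar directly; a minimal sketch (path and port are assumptions):

# stop the preinstalled Tomcat first so the port is free, then run the jar in the background
sudo /opt/tomcat/bin/catalina.sh stop
nohup java -jar /home/ubuntu/app.jar --server.port=8080 &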
The Tomcat deployment is fine, but there is a better way: using Docker. (For first-time setup users the steps might be a bit confusing. I will try to explain all the steps, but a bit of googling is required if any step does not work as expected.)
The following solution involves these steps:
1. Install Docker on the server.
2. Configure the Spring Boot application for containerization on the host machine.
3. Configure GCP on the host machine for pushing the built image to a private container registry.
4. Configure the server for GCP, pull the image from the private container registry, and finally start the application.
First: install Docker on the server using this link.
Second: set up/configure the Spring Boot application for Docker deployment.
Install the Spotify Docker plugin for Maven, so that the Docker image is built during the Maven build process; we will also add the required Docker configuration for Maven here.
a. At the top of your pom.xml, after the <parent> tag, add the following meta-information about your project:
<groupId>com.companyName</groupId>
<artifactId>projectArtifact</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>Project Name</name>
b. Under the <build><plugins> section, add the following code:
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>1.4.9</version>
  <executions>
    <execution>
      <id>default</id>
      <goals>
        <goal>build</goal>
        <!-- <goal>push</goal> -->
      </goals>
    </execution>
  </executions>
  <configuration>
    <repository>companyName/${project.artifactId}</repository>
    <tag>${project.version}</tag>
    <noCache>true</noCache>
    <buildArgs>
      <JAR_FILE>target/${project.artifactId}-${project.version}.jar</JAR_FILE>
    </buildArgs>
  </configuration>
</plugin>
In the above step, we added the Docker plugin to the Maven build process and also added meta-information about the project, so that the .jar is built with the given name and version.
In the root of the project, create a Dockerfile and add the following content:
# Use Java 8 on bare Linux as our base image
FROM openjdk:8-jdk-alpine
# Accept the jar location argument from the Maven plugin
ARG JAR_FILE
# Set ENV mode
#ENV STAGE=default
ENV DOCKER=true
# Copy the jar in under a fixed name
COPY ${JAR_FILE} app.jar
# Assuming the application port to be 6262. Replace with the appropriate port.
EXPOSE 6262
# Start the application
CMD ["java", "-jar", "/app.jar"]
# For real prod applications, profiles wrt application.properties are used, but for a college project this is fine (ignoring for first-time configuration)
# CMD ["java", "-Dspring.profiles.active=${STAGE}", "-Dserver.port=6262", "-jar", "/app.jar"]
Now the required configuration on the Spring Boot side is done. To get the application ready for deployment, we first need to build the Docker image. This can be done by:
a. cd to the root of the project.
b. docker build -t companyname/projectname .
The above step builds the image. Built images can be listed with docker images.
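Before pushing anywhere, a quick local smoke test is worth it; a sketch reusing the image name and port from the steps above:

# run the freshly built image locally, mapping the assumed application port
docker run -p 6262:6262 --name myapp-test -d companyname/projectname
docker logs -f myapp-test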
Now we need to configure the GCP side. (I will not be descriptive; lots of good articles can be found, please google.)
a. Configure the GCP container registry and also gcloud-cli on your laptop (it is a bit tricky for first-time users, but a bit of research is better than typing out the straightforward answer).
Once gcloud-cli is configured, push the image to the registry:
a. docker images
b. Copy the built image's image id.
c. docker tag <image-id> <asia.gcr.io/gcp-project-id>/<project>:<tag> (please go through the GCP documentation for an example)
d. docker push <asia.gcr.io/gcp-project-id>/<project>:<tag>
Now we have pushed the built image to the private Docker repository.
ssh to the server. (This step can be done in multiple ways; for real use cases a CI/CD pipeline tool is ideally used. Here we will follow a simple method.)
a. Configure gcloud-cli with a new IAM user on the server.
b. Log in to the gcloud repository.
c. docker pull <asia.gcr.io/gcp-project-id>/<project>:<tag>
d. docker run -p <hostport>:<applicationport(6262 here)> --name container_name -d <asia.gcr.io/gcp-project-id>/<project>:<tag>
The above step should start the Docker container with the Spring Boot application running. From here, you can configure a reverse proxy if required, or update the firewall settings so that traffic can reach the said port.
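For the firewall part, a sketch with gcloud (the rule name is an assumption; port 6262 follows the example above):

# allow inbound traffic to the application port on the VM
gcloud compute firewall-rules create allow-app-6262 --allow=tcp:6262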
I understand it's a tad bit difficult and confusing, but a lot of research will help you. Once you are familiar with Docker and gcloud-cli, it won't be confusing any more. Let me know if any step needs more detail.
I am working on a Java REST API for a hobby project and I am using Heroku as my deployment platform. I managed to deploy the application using the heroku-maven-plugin.
Then I integrated my GitHub repo with the Heroku app and tried to deploy from the master branch. But it fails with the following message:
Failed to detect set buildpack https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/java.tgz
More info: https://devcenter.heroku.com/articles/buildpacks#detection-failure
! Push failed
Can you please explain how to fix this?
Update:
I have tried setting the buildpack to heroku/java from both the dashboard and the Heroku CLI tool, but the problem remains.
GitHub : online-activity-diary-api
The Heroku Maven plugin and GitHub deployment to the same app are not compatible; they use different buildpacks that do different work. You'll first need to make sure you're deploying to different apps with these two mechanisms.
When you deploy with GitHub sync, you'll need to make sure your buildpack is configured to heroku/java in the dashboard. Then make sure your application has a pom.xml checked into Git.
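If you prefer the CLI over the dashboard, the buildpack can be pinned the same way; a sketch (the app name is a placeholder):

# pin the official Java buildpack on the app used by GitHub sync
heroku buildpacks:set heroku/java --app <your-app-name>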