I'm building a simple hello world application in Java (based on Spring) which I deploy to AWS through a pipeline.
The buildspec.yml is defined as follows:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
  build:
    commands:
      - mvn package
artifacts:
  files:
    - '**/*'
with the appspec.yml as follows:
version: 0.0
os: linux
files:
  - source: target/helloworld-1.0-SNAPSHOT.jar
    destination: /tmp
hooks:
  ApplicationStart:
    - location: codedeploy/ApplicationStart.sh
      timeout: 60
      runas: root
The file codedeploy/ApplicationStart.sh:
#!/usr/bin/env bash
# Path to the jar copied by CodeDeploy (note the "$" when expanding the variable)
JAR_FILE_HOME='/tmp/helloworld-1.0-SNAPSHOT.jar'
java -jar "$JAR_FILE_HOME"
Weirdly enough the deployment fails with the following error:
Script at specified location: codedeploy/ApplicationStart.sh run as
user root failed with exit code 127
Output log:
[stderr]/opt/codedeploy-agent/deployment-root/5092b759-ecc4-44cb-859a-9823734abc04/d-GVQ6R854B/deployment-archive/codedeploy/ApplicationStart.sh:
line 9: java: command not found
This seems very counter-intuitive, since I've installed Java in the buildspec.yml. Do I need to install Java manually again within the ApplicationStart script, or am I doing something else wrong?
CodeBuild doesn't have a link with your application instance; it only creates a runtime environment when it receives artifacts for a build event.
You don't need to install the Java runtime every time via appspec.yml. I would recommend installing the Java runtime on an EC2 instance and then creating an AMI from it as a reference base image for future use, or you can proceed with Elastic Beanstalk, which has prebuilt environments.
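For instance, a minimal sketch of that AMI approach, assuming Amazon Linux 2 and placeholder identifiers, could look like this:
# On the running EC2 instance (assuming Amazon Linux 2 and its OpenJDK 8 package):
sudo yum install -y java-1.8.0-openjdk
# Then, from a machine with the AWS CLI configured (instance id and AMI name are placeholders):
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "base-image-with-java8"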
The other answer also suggests this, but just to clarify:
CodeBuild (configured in buildspec.yml) does the artifact creation. Simply put, it takes your code and creates the jar; this is where you defined the Java version. However, this has nothing to do with the instance where the artifact will be deployed: it is only the image where the build happens.
CodeDeploy (configured in appspec.yml) is responsible for deploying the artifact to the instances defined and owned by you. If you created the target instance manually, you need to make Java available there. As matesio suggested above, you could simplify/automate instance creation with a proper Java environment, but that is your responsibility, since it is your instance (unlike the environment used for the build, which is configured by AWS in the background).
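Alternatively, if you want CodeDeploy itself to ensure Java is present on an existing instance, here is a hedged sketch of a BeforeInstall hook (the script name and the Amazon Linux package are assumptions):
#!/usr/bin/env bash
# codedeploy/BeforeInstall.sh -- hypothetical hook; assumes an Amazon Linux instance with yum.
# Install a Java runtime only if "java" is not already on the PATH.
if ! command -v java >/dev/null 2>&1; then
    sudo yum install -y java-1.8.0-openjdk
fi
You would reference it in appspec.yml under a BeforeInstall hook, analogous to the ApplicationStart hook shown above.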
OK, I encountered linking problems while creating a Jib Docker image.
I copy the files I want into the container:
jib {
    allowInsecureRegistries = true
    extraDirectories {
        paths {
            path {
                from = file('jnetpcap/jib')
                into = '/native'
            }
        }
    }
.
.
.
and in another task, I point to those libraries:
task cmdScript(type: CreateStartScripts) {
    mainClassName = "cic.cs.unb.ca.ifm.Cmd"
    applicationName = "cfm"
    outputDir = new File(project.buildDir, 'scripts')
    classpath = jar.outputs.files + project.configurations.runtime
    defaultJvmOpts = ["-Djava.library.path=/native"]
}
I checked, and those libraries are added correctly to the container. It is not a problem with copying the libs, but with setting up the linker.
cmdScript sets up the correct linker if I build the project with distTar, but I don't know how to set up the linker when building it with jibDockerBuild.
I couldn't find an answer to my issue here, so I decided to seek help on SO.
UPDATE
I have found some clues here.
I have updated my jib task by adding:
jib {
    allowInsecureRegistries = true
    extraDirectories {
        paths {
            path {
                from = file('jnetpcap/jib')
                into = '/native'
            }
        }
    }
    container.jvmFlags = ["-Djava.library.path=/native/*"]
}
But I keep getting the same error.
The error message is:
Exception in thread "main" java.lang.UnsatisfiedLinkError: 'long com.slytechs.library.NativeLibrary.dlopen(java.lang.String)'
The issue is largely unrelated to Jib. The root cause is missing required libraries inside the container environment.
First things first, it should be container.jvmFlags = ["-Djava.library.path=/native"] (not /native/* with the asterisk).
Now, jNetPcap is a Java wrapper around the Libpcap and WinPcap libraries found on various Unix and Windows platforms. That is, on Linux (which is the OS of the container you are building), it depends on Libpcap and requires it to be installed on the system. Most OpenJDK container images (including the one Jib uses as a base image) do not come with Libpcap pre-installed, and I suspect the first problem is that you are not installing Libpcap into the container.
jNetPcap also requires loading other native libraries. In the case of my example below, they were the two .so shared object files that come with the jNetPcap package: libjnetpcap-pcap100.so and libjnetpcap.so.
To illustrate, below is a complete example that creates a working container image.
Dockerfile
# This Dockerfile is only for demonstration.
FROM adoptopenjdk/openjdk11
# "libpcap-dev" includes the following files:
# - /usr/lib/x86_64-linux-gnu/libpcap.a
# - /usr/lib/x86_64-linux-gnu/libpcap.so -> libpcap.so.0.8
# - /usr/lib/x86_64-linux-gnu/libpcap.so.0.8 -> libpcap.so.1.8.1
# - /usr/lib/x86_64-linux-gnu/libpcap.so.1.8.1
RUN apt-get update && apt-get install -y libpcap-dev
# My machine is x86_64 running Linux.
RUN curl -o jnetpcap.tgz https://master.dl.sourceforge.net/project/jnetpcap/jnetpcap/1.4/jnetpcap-1.4.r1300-1.linux.x86_64.tgz
# The tar includes the following files:
# - jnetpcap-1.4.r1300/jnetpcap.jar
# - jnetpcap-1.4.r1300/libjnetpcap-pcap100.so
# - jnetpcap-1.4.r1300/libjnetpcap.so
RUN tar -zxvf jnetpcap.tgz
# .class file compiled with "javac -cp jnetpcap.jar MyMain.java"
COPY MyMain.class /my-app/
ENTRYPOINT ["java", "-cp", "/my-app:/jnetpcap-1.4.r1300/jnetpcap.jar", "-Djava.library.path=/jnetpcap-1.4.r1300", "MyMain"]
MyMain.java
import java.util.*;
import org.jnetpcap.*;

public class MyMain {
    public static void main(String[] args) {
        Pcap.findAllDevs(new ArrayList<>(), new StringBuilder());
        System.out.println("SUCCESS!");
    }
}
$ docker build -t test .
$ docker run --rm test
SUCCESS!
Therefore, as long as you copy the necessary dependent libraries and have the correct configuration, you should be able to make it work with Jib.
For installing Libpcap, I can think of a couple options:
Prepare a custom base image (for example, apt-get install libpcap-dev as above) and configure jib.from.image to use it.
Manually download and copy the libpcap.so file into, say, /usr/lib, using the extraDirectories feature. (You can even make your Gradle project dynamically download the file when building your project.)
For copying the jNetPcap native libraries (libjnetpcap-pcap100.so and libjnetpcap.so), it's the same story. However, it looks like you already manually downloaded them and attempted to copy them using the extraDirectories feature, so I guess you can continue doing so. Still, preparing a custom base image is another viable option. Note that in the example above, I configured -Djava.library.path=... for jNetPcap (BTW, there are many other ways to have Linux and the JVM load shared libraries from an arbitrary directory), but if you copy the .so files into a standard location (for example, /usr/lib), you wouldn't even need to set -Djava.library.path.
For all of the native libraries (.so files) above, make sure to download the right binaries compatible with the container architecture and OS (probably amd64 and Linux in your case).
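As a quick sanity check after jibDockerBuild, you could inspect the image from the host. This is only a sketch: the image name my-app and the /native paths are assumptions, and it presumes the base image ships a shell with ls and ldd available.
docker run --rm --entrypoint ls my-app /native
docker run --rm --entrypoint ldd my-app /native/libjnetpcap.so
If Libpcap is missing, the ldd output will list it as "not found".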
I'm struggling with the deployment of a Spring app that needs to compile Java code at runtime. My app calls the javac command when a user submits a solution to a problem, so it can later run it with java.
I'm deploying to Cloud Foundry and using the java-buildpack, but unfortunately it doesn't come with a JDK; only a JRE is available, and that has no javac or java commands available.
Do you guys know a way to add a JDK to Cloud Foundry without having to write my own custom buildpack?
Thanks
I would suggest you use multi-buildpack support and use the apt-buildpack to install a JDK. It should work fine alongside the JBP. It just needs to be first in the list.
https://github.com/cloudfoundry/apt-buildpack
Example:
Create an apt.yml.
---
packages:
- openjdk-11-jdk-headless
Bundle that into your JAR with jar uf path/to/your/file.jar apt.yml. It should be added to the root of the JAR, so if you run jar tf path/to/your/file.jar you should see just apt.yml with nothing prefixed to it.
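In other words, the bundling and verification steps could look like this (the JAR path is a placeholder):
jar uf path/to/your/file.jar apt.yml
jar tf path/to/your/file.jar | grep apt.yml   # should print exactly "apt.yml", with no directory prefix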
Update your manifest.yml. Add the apt-buildpack first in the list.
---
applications:
- name: spring-music
  memory: 1G
  path: build/libs/spring-music-1.0.jar
  buildpacks:
    - https://github.com/cloudfoundry/apt-buildpack#v0.2.2
    - java_buildpack
Then cf push. You should see the apt-buildpack run and install the JDK. It will then be installed under ~/deps/0/lib/jvm/java-11-openjdk-amd64. It does not appear to end up on the PATH, so use a full path to javac or update the PATH yourself.
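For example, a rough sketch of invoking the apt-installed compiler from inside the app container (the path follows the location mentioned above and may differ between stacks; the source file is hypothetical):
JAVAC="$HOME/deps/0/lib/jvm/java-11-openjdk-amd64/bin/javac"
"$JAVAC" -version
"$JAVAC" -d /tmp/classes /tmp/submissions/Solution.java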
One step in my Azure DevOps pipeline requires Java to be installed on the agent.
I found the "Java Tool Installer" task here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/java-tool-installer?view=azure-devops
This looks, however, more like an SDK installer. I only need a Java runtime environment. I am looking for something like the Python installer task:
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.6'
Is there anything for Java getting close to this?
Is there anything for Java getting close to this?
I tested with the Python installer task; it is used to select a specific Python version by setting up the environment.
To achieve something similar with Java, you could set the JAVA_HOME and PATH variables at runtime.
You could add a PowerShell task as the first step.
Here is an example:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      echo "##vso[task.setvariable variable=JAVA_HOME]$(JAVA_HOME_11_X64)"
      echo "##vso[task.setvariable variable=PATH]$(JAVA_HOME_11_X64)\bin;$(PATH)"
The $(JAVA_HOME_11_X64) variable is an environment variable provided by the agent.
You can check this variable with the script env | sort; the supported values will be listed in the output.
In this case, the JAVA_HOME variable will be set to the expected value.
Hope this helps.
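To confirm the variables took effect, a later script step could simply run the following (a sketch that assumes a Linux agent; on Windows the same check could be done in PowerShell):
which java
java -version
echo "JAVA_HOME is $JAVA_HOME"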
Since Azure DevOps supports Docker, I would simply go for Docker:
trigger:
- main
pr:
- main
- releases/*
pool:
  vmImage: 'ubuntu-20.04'
container: adoptopenjdk:latest
steps:
- script: ./gradlew check
I need to specify multiple buildpacks in my Spring Boot application.
I have created a file multi-buildpack.yml in the root dir of my application, where I have specified the 2 buildpacks.
multi-buildpack.yml file:
buildpacks:
- https://github.com/cloudfoundry/python-buildpack
- https://github.com/cloudfoundry/java-buildpack
But I am getting the below error while pushing my app:
ERROR A multi-buildpack.yml file must be provided at your app root to use this buildpack.
Can anyone please help!
Thanks
Run the following command to ensure that you are using the cf CLI v6.38 or later:
$ cf version
To push your app with multiple buildpacks, specify each buildpack with a -b flag:
$ cf push YOUR-APP -b BUILDPACK-NAME-1 -b BUILDPACK-NAME-2 ... -b FINAL-BUILDPACK-NAME
The last buildpack you specify is the final buildpack, which can modify the launch environment and set the start command.
To see a list of available buildpacks, run cf buildpacks
More: https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
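Applied to the buildpacks from the question, the push would then look roughly like this (the app name is a placeholder; the Java buildpack comes last so it remains the final buildpack):
cf push my-spring-app -b python_buildpack -b java_buildpack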
Unless you're on an older version of CloudFoundry, you shouldn't use the multi-buildpack buildpack anymore. True multi-buildpack support is available in the platform. Use that instead.
https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
This is currently a two step process due to some limitations with the current cf cli:
cf push YOUR-APP --no-start -b binary_buildpack
cf v3-push YOUR-APP -b BUILDPACK-NAME-1 -b BUILDPACK-NAME-2
That should go away at some point in the future and be just one command.
If you really want to use the multi-buildpack buildpack, double check the location of the multi-buildpack.yml file and the format of it. You can compare to one of the fixture tests here. Maybe even run one of the fixture tests to get a baseline test that works (or at least, should work).
Hope that helps!
When attempting to start Elasticsearch 5.1.1 via
$ elasticsearch
I get the output:
Error: Could not find or load main class -Xms2g
I've looked into a few things:
I read it may be an error in how a class is called, but I'm not doing anything like that explicitly. This thread doesn't help and isn't really my problem, as I'm not installing a plugin.
I installed via Homebrew. Here is some output from that:
$ brew info elasticsearch
elasticsearch: stable 5.1.1, HEAD
Distributed search & analytics engine
https://www.elastic.co/products/elasticsearch
Conflicts with: elasticsearch#1.7, elasticsearch#2.4
/usr/local/Cellar/elasticsearch/5.1.1 (98 files, 35.2M) *
Built from source on 2016-12-14 at 09:23:56
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/elasticsearch.rb
==> Requirements
Required: java >= 1.8 ✔
==> Caveats
Data: /usr/local/var/elasticsearch/elasticsearch_GabbAHH/
Logs: /usr/local/var/log/elasticsearch/elasticsearch_GabbAHH.log
Plugins: /usr/local/Cellar/elasticsearch/5.1.1/libexec/plugins/
Config: /usr/local/etc/elasticsearch/
plugin script: /usr/local/Cellar/elasticsearch/5.1.1/libexec/bin/plugin
To have launchd start elasticsearch now and restart at login:
brew services start elasticsearch
Or, if you don't want/need a background service you can just run:
elasticsearch
$ brew doctor
Please note that these warnings are just used to help the Homebrew maintainers
with debugging if you file an issue. If everything you use Homebrew for is
working fine: please don't worry and just ignore them. Thanks!
Warning: You have unlinked kegs in your Cellar
Leaving kegs unlinked can lead to build-trouble and cause brews that depend on
those kegs to fail to run properly once built. Run `brew link` on these:
mongodb
ruby
I also originally tried installing by manually extracting the .tar.gz package. At first I got some Java permission-denied errors, but after running chown to give myself ownership I got the same "Error: Could not find or load main class" type of error.
I just updated my Java JDK to the latest: 1.8.0_112 and set my JAVA_HOME variable to that directory accordingly.
The most recent version of Elasticsearch 2 (2.4.3) works. Meanwhile, Elasticsearch v5.0.2 fails.
What can I do to have Elasticsearch properly installed on my Mac?
Watch out for this line in your .bashrc or similar:
export GREP_OPTIONS='--color=always'
That will break many shell pipes, so many runners will fail without giving you any clue.
Distros such as Ubuntu often have this enabled by default for system users. Be careful and check that shell environment variable.
This problem was solved on my servers by simply unsetting this variable.
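A quick way to check and clear it (a sketch for the current shell; remove the export line from .bashrc for a permanent fix):
env | grep GREP_OPTIONS   # shows whether the variable is set
unset GREP_OPTIONS        # clears it for the current shell only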
You most likely have a problem in your .bash_profile. Usually Elasticsearch starts out of the box on a Mac.
To make upgrading easier and to use multiple projects on my machine, I usually use the following script:
#!/bin/bash
CURRENT_PROJECT=$(pwd)
CONFIG=$CURRENT_PROJECT/config
DATA=$CURRENT_PROJECT/data
LOGS=$CURRENT_PROJECT/logs
BASH_ES_OPTS="-Epath.conf=$CONFIG -Epath.data=$DATA -Epath.logs=$LOGS"
ELASTICSEARCH=$HOME/Development/elastic/elasticsearch/elasticsearch-5.1.1
$ELASTICSEARCH/bin/elasticsearch $BASH_ES_OPTS
Notice the options in BASH_ES_OPTS; these are the ones that changed a lot in version 5. My structure is a folder per project with this script and a few folders: config, data and logs. The config folder contains the files from the Elasticsearch distribution: elasticsearch.yml, jvm.options and log4j2.properties.
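For completeness, a sketch of the one-time setup for a new project folder, assuming the same Elasticsearch location as in the script above:
ELASTICSEARCH=$HOME/Development/elastic/elasticsearch/elasticsearch-5.1.1
mkdir -p config data logs
cp "$ELASTICSEARCH"/config/elasticsearch.yml "$ELASTICSEARCH"/config/jvm.options "$ELASTICSEARCH"/config/log4j2.properties config/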