build failing after adding jruby in ruby project - java

I have set up a Java package containing a few Java methods.
I want to call those methods from a Ruby on Rails website, and I'm trying to build a POC for it, but it's not working.
I installed JRuby for this, but the build is failing: it's not able to find gems.
RubyGems environment:
gem env
RubyGems Environment:
  - RUBYGEMS VERSION: 3.0.6
  - RUBY VERSION: 2.5.3 (2019-09-09 patchlevel 0) [java]
  - INSTALLATION DIRECTORY: /local/home/skkundra/.rvm/gems/jruby-9.2.6.0
  - USER INSTALLATION DIRECTORY: /home/skkundra/.gem/jruby/2.5.0
  - RUBY EXECUTABLE: /local/home/skkundra/.rvm/rubies/jruby-9.2.6.0/bin/jruby
  - GIT EXECUTABLE: /usr/bin/git
  - EXECUTABLE DIRECTORY: /local/home/skkundra/.rvm/gems/jruby-9.2.6.0/bin
  - SPEC CACHE DIRECTORY: /home/skkundra/.gem/specs
  - SYSTEM CONFIGURATION DIRECTORY: /local/home/skkundra/.rvm/rubies/jruby-9.2.6.0/etc
  - RUBYGEMS PLATFORMS:
    - ruby
    - universal-java-1.8
  - GEM PATHS:
    - /local/home/skkundra/.rvm/gems/jruby-9.2.6.0
    - /local/home/skkundra/.rvm/rubies/jruby-9.2.6.0/lib/ruby/gems/shared
  - GEM CONFIGURATION:
    - :update_sources => true
    - :verbose => true
    - :backtrace => false
    - :bulk_threshold => 1000
    - "install" => "--no-rdoc --no-ri"
    - "update" => "--no-rdoc --no-ri"
  - REMOTE SOURCES:
    - https://rubygems.org/
  - SHELL PATH:
    - /home/skkundra/.rvm/gems/jruby-9.2.6.0/bin
    - /home/skkundra/.rvm/gems/jruby-9.2.6.0@global/bin
    - /home/skkundra/.rvm/rubies/jruby-9.2.6.0/bin
    - /home/skkundra/.rvm/bin
    - /home/skkundra/.toolbox/bin
    - /usr/NX/bin
    - /usr/local/bin
    - /bin
    - /usr/bin
    - /home/skkundra/bin
    - /usr/local/sbin
    - /usr/sbin
    - /sbin
I'm getting this error:
/home/skkundra/VPSInvoiceCentralWebsite/env/VPSInvoiceCentralWebsite-1.0/runtime/ruby2.3.x/lib/ruby/2.3.0/rubygems/dependency.rb:319:in `to_specs': Could not find 'thread_safe' (~> 0.1) - did find: [thread_safe-0.3.6-java] (Gem::LoadError)

Related

quarkus native build is stuck

I'm using Quarkus 8.3.Final with Gradle.
The native build always gets stuck at:
Building native image source jar: /home/runner/work/qu-queue-service/qu-queue-service/build/qu-queue-service-1.0.0-SNAPSHOT-native-image-source-jar/qu-queue-service-1.0.0-SNAPSHOT-runner.jar
Building native image from /home/runner/work/qu-queue-service/qu-queue-service/build/qu-queue-service-1.0.0-SNAPSHOT-native-image-source-jar/qu-queue-service-1.0.0-SNAPSHOT-runner.jar
Using docker to run the native image builder
Checking image status quay.io/quarkus/ubi-quarkus-mandrel:22.0.0.2-Final-java17
22.0.0.2-Final-java17: Pulling from quarkus/ubi-quarkus-mandrel
54e56e6f8572: Pulling fs layer
4f8ddd7f5a75: Pulling fs layer
20939a5b3d59: Pulling fs layer
4f8ddd7f5a75: Verifying Checksum
4f8ddd7f5a75: Download complete
54e56e6f8572: Verifying Checksum
54e56e6f8572: Download complete
54e56e6f8572: Pull complete
20939a5b3d59: Verifying Checksum
20939a5b3d59: Download complete
4f8ddd7f5a75: Pull complete
20939a5b3d59: Pull complete
Digest: sha256:7751b408ac408d6f91a95c864a2b8d85129987c8d5c1fc5356e9940c8e330837
Status: Downloaded newer image for quay.io/quarkus/ubi-quarkus-mandrel:22.0.0.2-Final-java17
quay.io/quarkus/ubi-quarkus-mandrel:22.0.0.2-Final-java17
Running Quarkus native-image plugin on native-image 22.0.0.2-Final Mandrel Distribution (Java Version 17.0.2+8)
docker run --env LANG=C --rm --user 1001:121 -v /home/runner/work/qu-queue-service/qu-queue-service/build/qu-queue-service-1.0.0-SNAPSHOT-native-image-source-jar:/project:z --name build-native-clyaI quay.io/quarkus/ubi-quarkus-mandrel:22.0.0.2-Final-java17 -J-Dsun.nio.ch.maxUpdateArraySize=100 -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-DCoordinatorEnvironmentBean.transactionStatusManagerEnable=false -J-Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=3 -J-Duser.language=en -J-Dfile.encoding=UTF-8 -H:-ParseOnce -J--add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED -J--add-opens=java.base/java.text=ALL-UNNAMED -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy\$BySpaceAndTime -H:+JNI -H:+AllowFoldMethods -J-Djava.awt.headless=true -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -J-Xmx3G -H:-AddAllCharsets -H:EnableURLProtocols=http,https -H:-UseServiceLoaderFeature -H:+StackTrace qu-queue-service-1.0.0-SNAPSHOT-runner -jar qu-queue-service-1.0.0-SNAPSHOT-runner.jar
========================================================================================================================
GraalVM Native Image: Generating 'qu-queue-service-1.0.0-SNAPSHOT-runner'...
========================================================================================================================
[1/7] Initializing... (13.1s @ 0.22GB)
Version info: 'GraalVM 22.0.0.2-Final Java 17 Mandrel Distribution'
8 user-provided feature(s)
- io.quarkus.caffeine.runtime.graal.CacheConstructorsAutofeature
- io.quarkus.hibernate.orm.runtime.graal.DisableLoggingAutoFeature
- io.quarkus.jdbc.postgresql.runtime.graal.SQLXMLFeature
- io.quarkus.runner.AutoFeature
- io.quarkus.runtime.graal.DisableLoggingAutoFeature
- io.quarkus.runtime.graal.ResourcesFeature
- org.hibernate.graalvm.internal.GraalVMStaticAutofeature
- org.hibernate.graalvm.internal.QueryParsingSupport
[2/7] Performing analysis... [**********] (191.2s @ 2.47GB)
26,471 (96.76%) of 27,356 classes reachable
45,531 (68.17%) of 66,791 fields reachable
143,995 (80.30%) of 179,318 methods reachable
1,537 classes, 1,795 fields, and 10,535 methods registered for reflection
65 classes, 77 fields, and 55 methods registered for JNI access
[3/7] Building universe... (7.2s @ 2.60GB)
[4/7] Parsing methods... [******] (36.8s @ 2.34GB)
The last build took 6h on GitHub CI; on my local machine it also stops there while grinding the CPU.
Are there any leads I can follow?
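Nothing in the log above points at a hard failure, and the -J-Xmx3G cap is close to the ~2.5 GB the analysis phase already reports, so one lead worth checking is memory pressure on the native-image process. Below is a hedged sketch of a CI step that raises its heap; the Gradle wrapper invocation, the step shape, and the 6g value are assumptions, not taken from the question:
# Hypothetical GitHub Actions step: quarkus.native.native-image-xmx sets the heap of the
# native-image process itself (the builder that is grinding during "Parsing methods").
- name: Build native image with a larger native-image heap
  run: ./gradlew build -Dquarkus.package.type=native -Dquarkus.native.native-image-xmx=6g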

Java version check using ansible is getting skipped for no apparent reason

As part of my Ansible setup I'm currently checking which Java version is on the VM and, if it's not the expected one, downloading and installing that version. I have given the standard regex to find the Java version, but that step is getting skipped.
- name: "[install] Check for Java install"
shell: "java -version 2>&1 | grep version | awk '{print $3}'"
changed_when: False
register: java_installed
ignore_errors: True
- when: java_installed.stdout != "17.0.2"
block:
- debug:
msg: "Java is not installed"
- name: "[install] Installing Java 17"
become: true
yum:
name: /var/tmp/jdk-17_linux-x64_bin.rpm
state: present
But these steps are getting skipped during execution:
TASK [java : [install] Check for Java install] *****************************************************************************************************************************
skipping: [VM name hidden by me ]
TASK [java : debug] ********************************************************************************************************************************************************
skipping: [VM name hidden by me]
TASK [java : [install] Installing Java 17] *********************************************************************************************************************************
skipping: [VM name hidden by me]
When I execute
java -version 2>&1 | grep version | awk '{print $3}'
This is what I get
"1.8.0_312"
Does anyone know why it's getting skipped? Thanks.
Note: below is an answer to your direct problem. Meanwhile, if Java was installed as a system package, I strongly suggest you have a look at the answer by @Jaay to get the version directly from package facts rather than using shell/command.
This is what I get
"1.8.0_312"
As you can see, the quotes are part of the output. Hence if you debug java_installed.stdout you get (run on my local machine with Java 11):
TASK [debug] ********************************************************************************************************************
ok: [localhost] => {
"java_installed.stdout": "\"11.0.15\""
}
A simple workaround is to read the incoming value as JSON. The following does the job (once again customized to test on my local machine, and using the version test as good practice):
---
- hosts: localhost
  gather_facts: false

  tasks:
    - name: "[install] Check for Java install"
      shell: "java -version 2>&1 | grep version | awk '{print $3}'"
      changed_when: false
      failed_when: false
      register: java_installed

    - name: show the raw and refactored captured var
      vars:
        my_msg:
          - "Raw value for version is: {{ java_installed.stdout }}"
          - "Refactored value for version is: {{ java_installed.stdout | from_json }}"
      debug:
        msg: "{{ my_msg }}"

    - when: java_installed.stdout | from_json is version("11.0.15", "==")
      debug:
        msg: "Java 11 is installed"

    - when: java_installed.stdout | from_json is not version("17.0.2", "==")
      debug:
        msg: "Java 17 is not installed"
and gives
PLAY [localhost] ****************************************************************************************************************
TASK [[install] Check for Java install] *****************************************************************************************
ok: [localhost]
TASK [show the raw and refactored captured var] *********************************************************************************
ok: [localhost] => {
"msg": [
"Raw value for version is: \"11.0.15\"",
"Refactored value for version is: 11.0.15"
]
}
TASK [debug] ********************************************************************************************************************
ok: [localhost] => {
"msg": "Java 11 is installed"
}
TASK [debug] ********************************************************************************************************************
ok: [localhost] => {
"msg": "Java 17 is not installed"
}
PLAY RECAP **********************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
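As a hedged aside, not part of either answer: the surrounding quotes could also be stripped in the shell pipeline itself, so the registered stdout compares cleanly against a plain version string. The task below is a hypothetical variant of the check task from the question, not something either answer proposes:
# Hypothetical variant of the check task: tr -d removes the double quotes around the version
- name: "[install] Check for Java install"
  shell: "java -version 2>&1 | grep version | awk '{print $3}' | tr -d '\"'"
  changed_when: false
  register: java_installed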
I'm not really a big fan of shell commands in Ansible; you can use the package_facts core plugin to retrieve the installed Java version. This way you get rid of the output problem of the shell command.
- name: get the rpm or apt package facts
  package_facts:
    manager: "auto"

- name: show Java version
  debug: var=ansible_facts.packages.jdk[0].version
PS: This will work only if Java is installed with a package manager (not just copied onto your system).
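A hedged follow-up sketch: the same facts could drive the conditional install from the question, assuming the JDK shows up in package facts under the key jdk (the actual package name varies by distribution and install method, so treat that key as an assumption):
- name: get the rpm or apt package facts
  package_facts:
    manager: "auto"

# Hypothetical conditional install based on those facts; adjust the 'jdk' key to
# whatever ansible_facts.packages reports on your hosts.
- name: "[install] Installing Java 17 when the packaged JDK is missing or outdated"
  become: true
  yum:
    name: /var/tmp/jdk-17_linux-x64_bin.rpm
    state: present
  when: >
    'jdk' not in ansible_facts.packages or
    ansible_facts.packages.jdk[0].version is not version('17.0.2', '==')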

Azure Linux Webapp cannot seem to find deployed Spring .jar from DevOps repo

I have a Spring Web Application in a DevOps repository, with a .yml that looks like this (as generated by the tool in the DevOps web client):
# Build your Java project and deploy it to Azure as a Linux web app
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/java

trigger:
- master

variables:
  Version: '0.0.1'

  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: '-snipped-'

  # Web app name
  webAppName: 'SlackCentralTestApp'

  # Environment name
  environmentName: 'SlackCentralTestApp'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: MavenPackageAndPublishArtifacts
    displayName: Maven Package and Publish Artifacts
    pool:
      vmImage: $(vmImageName)

    steps:
    - task: Maven@3
      displayName: 'Maven Package'
      inputs:
        mavenPomFile: 'pom.xml'
        publishJUnitResults: true
        testResultsFiles: '**/surefire-reports/TEST-*.xml'
        javaHomeOption: 'JDKVersion'
        jdkVersionOption: '1.11'
        mavenVersionOption: 'Default'
        mavenAuthenticateFeed: false
        effectivePomSkip: false
        sonarQubeRunAnalysis: false

    - task: CopyFiles@2
      displayName: 'Copy Files to artifact staging directory'
      inputs:
        SourceFolder: '$(System.DefaultWorkingDirectory)'
        Contents: '**/target/SlackbotTest-$(Version)-SNAPSHOT.?(war|jar)'
        TargetFolder: $(Build.ArtifactStagingDirectory)

    - upload: $(Build.ArtifactStagingDirectory)
      artifact: drop

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: DeployLinuxWebApp
    displayName: Deploy Linux Web App
    environment: $(environmentName)
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            displayName: 'Azure Web App Deploy: SlackCentralTestApp'
            inputs:
              azureSubscription: 'Azure for Students (4998490e-1bc4-43fc-a370-80744706d1f5)'
              appType: 'webAppLinux'
              appName: 'SlackCentralTestApp'
              package: '$(Pipeline.Workspace)/drop/target/SlackbotTest-$(Version)-SNAPSHOT.jar'
              runtimeStack: 'JAVA|11-java11'
              startUpCommand: 'java -jar $(Pipeline.Workspace)/drop/target/SlackbotTest-$(Version)-SNAPSHOT.jar'
The deployment process seems to be successful as seen in this screenshot, yet when I take a look at the server log, I sadly get greeted with the following error:
2020-04-14T09:03:55.658599508Z Initializing App Insights applicationinsights-agent-codeless-2.5.0.jar....
2020-04-14T09:03:55.669449167Z STARTUP_FILE=
2020-04-14T09:03:55.676322304Z STARTUP_COMMAND=java -jar /home/vsts/work/1/drop/target/SlackbotTest-0.0.1-SNAPSHOT.jar
2020-04-14T09:03:55.676350305Z No STARTUP_FILE available.
2020-04-14T09:03:55.676428905Z Running STARTUP_COMMAND: java -jar /home/vsts/work/1/drop/target/SlackbotTest-0.0.1-SNAPSHOT.jar
2020-04-14T09:03:55.681352232Z Error: Unable to access jarfile /home/vsts/work/1/drop/target/SlackbotTest-0.0.1-SNAPSHOT.jar
2020-04-14T09:03:55.694784805Z Finished running startup command 'java -jar /home/vsts/work/1/drop/target/SlackbotTest-0.0.1-SNAPSHOT.jar'. Exiting with exit code 1.
This leads me to conclude that the path given in the 'startUpCommand' property of the .yml file is incorrect, yet I have not been able to find what the correct path should be.
I have attempted the following:
- Specifying no directories, only the filename. Sadly, this leads to the same result.
- Using the 'find' command in Bash to look for any .jars on the Web App, which tells me there are none.
- Manually building the application from the Web App does not seem to be an option either, as it's a Java 11 application and the Java SE runtime that seems to be included is version 1.8, regardless of the version I specify when creating the Web App resource in Azure.
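For what it's worth, a hedged sketch of a possible fix, not verified against this pipeline: the startUpCommand above bakes in the agent-side path $(Pipeline.Workspace)/..., which does not exist on the web app, so either drop it or make it reference a location that exists after deployment (app content lands under /home/site/wwwroot on a Linux web app). The wildcard in the package path is likewise an assumption:
- task: AzureWebApp@1
  displayName: 'Azure Web App Deploy: SlackCentralTestApp'
  inputs:
    azureSubscription: 'Azure for Students (4998490e-1bc4-43fc-a370-80744706d1f5)'
    appType: 'webAppLinux'
    appName: 'SlackCentralTestApp'
    # Glob for the jar inside the downloaded artifact instead of hard-coding the agent layout
    package: '$(Pipeline.Workspace)/drop/**/target/SlackbotTest-$(Version)-SNAPSHOT.jar'
    runtimeStack: 'JAVA|11-java11'
    # startUpCommand deliberately omitted: for a Java SE Linux web app the platform starts the
    # deployed jar itself; if a command is needed, it must reference a path that exists on the
    # web app (e.g. under /home/site/wwwroot), not the $(Pipeline.Workspace) path of the agent.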

How to configure continuous integration for AWS Lambda with Java using the AWS services CodeBuild, CodeDeploy, CloudFormation

I am implementing a Lambda function with the AWS continuous integration tools: CodeSource, CodeBuild, CodePipeline.
After setting everything up, when I test the Lambda the result is:
{
  "errorMessage": "Class not found: com.ad.client.App",
  "errorType": "java.lang.ClassNotFoundException"
}
Class not found: com.ad.client.App: java.lang.ClassNotFoundException
java.lang.ClassNotFoundException: com.ad.client.App
  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at java.lang.Class.forName0(Native Method)
  at java.lang.Class.forName(Class.java:348)
All stages of the pipeline succeed (Source, Build, Deploy).
If I load the jar directly in the Lambda console, the result is correct.
I reviewed the build log and found this:
[Container] 2019/06/13 13:09:38 Running command echo THE PATH WORK IS !!!
THE PATH WORK IS !!!
[Container] 2019/06/13 13:09:38 Running command pwd
/codebuild/output/src748698927/src
[Container] 2019/06/13 13:09:38 Running command echo The list of file is !!
The list of file is !!
[Container] 2019/06/13 13:09:38 Running command ls
Readme.md
buildspec.yml
dependency-reduced-pom.xml
ftc-client.iml
outputtemplate.yaml
pom.xml
src
target
template.yaml
[Container] 2019/06/13 13:09:38 Running command echo CODE BUILD SRC DIRECTORY
CODE BUILD SRC DIRECTORY
[Container] 2019/06/13 13:09:38 Running command echo $CODEBUILD_SRC_DIR
/codebuild/output/src748698927/src
[INFO] skip non existing resourceDirectory /codebuild/output/src748698927/src/src/main/resources
Some portion of the log shows that the src path is duplicated; I don't know if it is related to the problem.
My config files are:
template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Ftc-client
Resources:
  FtcClientFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.ad.client.App::handleRequest
      Runtime: java8
      CodeUri: ./
      Events:
        MyFtcClientApi:
          Type: Api
          Properties:
            Path: /client
            Method: GET
buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
  build:
    commands:
      - echo Build started on `date`
      - mvn test
      - export BUCKET=my-bucket-for-test
      - aws cloudformation package --template-file template.yaml --s3-bucket $BUCKET --output-template-file outputtemplate.yaml
    finally:
      - echo THE PATH WORK IS !!!
      - pwd
      - echo The list of file is !!
      - ls
      - echo CODE BUILD SRC DIRECTORY
      - echo $CODEBUILD_SRC_DIR
  post_build:
    commands:
      - echo Build completed on `date`
      - mvn package
artifacts:
  files:
    - target/ftc-client-1.0-SNAPSHOT.jar
    - template.yaml
    - outputtemplate.yaml
  discard-paths: yes
The source code structure is:
/fclient/src/main/java/com/ad/App.java
/tclient/buildspec.yml
/fclient/pom.xml
/fclient/template.yaml
I want to do this, but with Java: https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
Thanks to everyone who can give me a clue.
This is the solution: it was necessary to unzip the jar in the root of my code:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
  pre_build:
    commands:
      - echo Test started on `date`
      - mvn clean compile test
  build:
    commands:
      - echo Build started on `date`
      - export BUCKET=my-bucket-for-test
      - mvn package shade:shade
      - mv target/ftc-client-1.0-SNAPSHOT.jar .
      - unzip ftc-client-1.0-SNAPSHOT.jar
      - rm -rf target tst src buildspec.yml pom.xml ftc-client-1.0-SNAPSHOT.jar
      - aws cloudformation package --template-file template.yaml --s3-bucket $BUCKET --output-template-file outputtemplate.yaml
  post_build:
    commands:
      - echo Build completed on `date` !!!
artifacts:
  files:
    - target/ftc-client-1.0-SNAPSHOT.jar
    - template.yaml
    - outputtemplate.yaml
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html
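As a hedged alternative to unzipping (this is not part of the answer above): since CodeUri: ./ packages the entire working directory, pointing CodeUri at the shaded jar should let aws cloudformation package upload the jar itself and keep the Maven layout intact. The jar path below assumes the artifact name used in the buildspec:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Ftc-client
Resources:
  FtcClientFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.ad.client.App::handleRequest
      Runtime: java8
      # Hypothetical change: package only the shaded jar instead of the whole directory
      CodeUri: target/ftc-client-1.0-SNAPSHOT.jar
      Events:
        MyFtcClientApi:
          Type: Api
          Properties:
            Path: /client
            Method: GET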

CircleCI + Gradle + Heroku deployment

I'm trying to set up continuous deployment with Gradle and Heroku, but for some reason the deployment step is not running.
CircleCI pipeline result (screenshot)
I've already updated CircleCI with the Heroku key.
version: 2
jobs:
  build:
    docker:
      - image: circleci/openjdk:8-jdk

    working_directory: ~/repo

    environment:
      JVM_OPTS: -Xmx3200m
      TERM: dumb

    steps:
      - checkout

      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "build.gradle" }}
            - v1-dependencies-

      - run: gradle dependencies

      - save_cache:
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "build.gradle" }}

      # run tests!
      - run: gradle test

deployment:
  staging:
    branch: master
    heroku:
      appname: my-heroku-app
Could you guys help me, please? Is the deployment step in the right place?
You are using the deployment configuration from CircleCI 1.0, but you are running CircleCI 2.0.
From the documentation for CircleCI 2.0:
The built-in Heroku integration through the CircleCI UI is not implemented for CircleCI 2.0. However, it is possible to deploy to Heroku manually.
To deploy to Heroku with CircleCI 2.0, you need to:
- add the environment variables HEROKU_LOGIN, HEROKU_API_KEY and HEROKU_APP_NAME to your CircleCI project settings (https://circleci.com/gh/<account>/<project>/edit#env-vars)
- create a private SSH key without a passphrase and add it to your CircleCI project settings (https://circleci.com/gh/<account>/<project>/edit#ssh) for the hostname git.heroku.com
- add steps in the .circleci/config.yml file with the fingerprint of your SSH key, as in the snippet below
- run:
    name: Setup Heroku
    command: |
      ssh-keyscan -H heroku.com >> ~/.ssh/known_hosts
      cat > ~/.netrc << EOF
      machine api.heroku.com
      login $HEROKU_LOGIN
      password $HEROKU_API_KEY
      EOF
      cat >> ~/.ssh/config << EOF
      VerifyHostKeyDNS yes
      StrictHostKeyChecking no
      EOF

- add_ssh_keys:
    fingerprints:
      - "<SSH KEY fingerprint>"

- deploy:
    name: "Deploy to Heroku"
    command: git push --force git@heroku.com:$HEROKU_APP_NAME.git HEAD:refs/heads/master
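A hedged sketch of how those steps could be wired into the 2.0 config above, since the question's deployment: block is ignored by CircleCI 2.0: a separate deploy job plus a workflow that runs it after build on master only. The job image and the job/workflow names are assumptions, not taken from the answer:
version: 2
jobs:
  # ... the existing build job from the question stays as-is ...
  deploy:
    docker:
      - image: circleci/openjdk:8-jdk
    steps:
      - checkout
      # the "Setup Heroku" and add_ssh_keys steps from the answer above go here
      - deploy:
          name: "Deploy to Heroku"
          command: git push --force git@heroku.com:$HEROKU_APP_NAME.git HEAD:refs/heads/master
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master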
