My Spring Boot app uses the jSerialComm library (v2.6.0) to do serial comms over a USB port. The jSerialComm documentation notes the importance of adding the user to a number of groups:
Note for Linux users: Serial port access is limited to certain users and groups in Linux. To enable user access, you must open a terminal and enter the following commands before jSerialComm will be able to access the ports on your system. Don't worry if some of the commands fail. All of these groups may not exist on every Linux distro. (Note, this process must only be done once for each user):
sudo usermod -a -G uucp username
sudo usermod -a -G dialout username
sudo usermod -a -G lock username
sudo usermod -a -G tty username
So I wrote the Dockerfile as follows:
FROM adoptopenjdk:11-jre-hotspot
# Run application as non-root user to help to mitigate some risks
RUN groupadd -r spring && useradd -r spring -g spring && \
usermod -a -G uucp spring && \
usermod -a -G dialout spring && \
usermod -a -G tty spring
# `lock` group doesn't seem to exist, hence commented-out:
# usermod -a -G lock spring
USER spring:spring
COPY /Java/tempctrl/build/libs/*.jar app.jar
EXPOSE 80
ENTRYPOINT ["java", "-jar", "/app.jar"]
... and I include --device=/dev/ttyACM0:/dev/ttyACM0 (and, temporarily clutching at straws, --privileged) in the docker run command.
When the app starts, logging confirms that /dev/ttyACM0 is found OK. But when the app tries to read from the serial port it receives a continuous stream of zeros. (Note: I saw this a few times before moving the app to Docker, and it was symptomatic of the USB port already being in use.)
If I comment-out USER spring:spring (i.e. allow the contained app to run as root) everything is fine.
How can I make this work without root privileges?
According to the Docker docs, devices shared with --device appear to be root-owned inside the container (see the image in the linked page).
Perhaps you can try chowning it?
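If chowning the device feels too heavy-handed, a related approach is to hand the device's owning group to the container user at run time. A minimal sketch of the docker run side, assuming the host device is group-owned by dialout (check with stat) and using tempctrl as a made-up image name:

# Check which group owns the device on the host (often "dialout")
stat -c '%G %g' /dev/ttyACM0

# Add that GID as a supplementary group of the container user
docker run --device=/dev/ttyACM0:/dev/ttyACM0 \
    --group-add "$(stat -c '%g' /dev/ttyACM0)" \
    tempctrl

With --group-add the non-root spring user inside the container carries the host GID that owns the device node, so neither chown nor --privileged should be needed.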
I have an issue where running sudo iscsiadm -m discovery -t st -p IP -l logs to dmesg across all terminals on the server.
The command is run from a java application, using:
Runtime.getRuntime().exec(new String[] {"/bin/bash", "-c", "sudo iscsiadm -m discovery -t st -p *IP* -l"});
I have tried the following:
Appending > /dev/null 2>&1 to the end of the iscsiadm discovery... command
Capturing the input streams from the returned process (process.getInputStream() and process.getErrorStream())
Appending > /dev/null 2>&1 to the software launching the Jar.
None of the above attempts prevents the logging across all virtual terminals. Each log line starts with [some_num.some_dec] LOG_MESSAGE, which suggests it is being output to dmesg? If this is true, how do I prevent it? Currently it makes the system impossible to debug because it prints over the terminal prompt.
Thanks
Issue fixed.
It turns out it was not iscsiadm logging to dmesg; it was the mount command run afterwards, because the block device did not exist.
I have modified my Java code to run iscsiadm -m discovery... and then iscsiadm -m session to determine whether the appropriate device has a connection before mounting.
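A rough shell equivalent of the check the Java code now performs (the $IP, $BLOCKDEV and $MOUNTPOINT variables are placeholders, not the real production values):

sudo iscsiadm -m discovery -t st -p "$IP" -l
# Only attempt the mount if a session to the portal actually exists;
# mounting a block device that never appeared is what spammed the console.
if sudo iscsiadm -m session | grep -q "$IP"; then
    sudo mount "$BLOCKDEV" "$MOUNTPOINT"
fi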
I have an application that gets deployed from a Docker image to a Kubernetes pod. The image is built from the following Dockerfile:
FROM openjdk:17.0.1-slim
USER root
WORKDIR /opt/app
ARG JAR_FILE
ARG INFO_APP_BUILD
RUN apt-get update
RUN apt-get install -y sshpass
RUN apt-get install -y openssh-client
COPY /build/libs/*SNAPSHOT.jar /opt/app/app.jar
ENV INFO_APP_BUILD=${INFO_APP_BUILD}
EXPOSE 8080
CMD java -jar /opt/app/app.jar
When the application gets deployed, the user gets set to a non-root user; this is out of my control.
Now the important part: when I try to launch an ssh command, I get the error message no user exists for uid [random id here].
My goal is to configure the Docker image to create a user and grant it permission to use the ssh command.
When the application gets deployed, the user gets set to a non-root user; this is out of my control.
Inside the container, the user running java -jar /opt/app/app.jar is root, because of USER root.
Outside the container, on the host, a deployed application is almost never executed or accessed as root.
But it should still be able to make SSH requests from within the container to a server, provided that:
the openssh service is started;
the container's /root/.ssh has the right public/private key pair;
the ~user/.ssh folder on the target server (where the Docker application is running) has an authorized_keys file containing that public key.
But if the user does not exist inside the container, you need to create it at docker run time, for instance:
docker run -it --rm --entrypoint sh "$@" \
  -c "[ -x /usr/sbin/useradd ] && useradd -m -u $(id -u) u1 -s /bin/sh || adduser -D -u $(id -u) u1 -s /bin/sh;
      exec su - u1"
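If the platform runs the container as a fixed non-root UID rather than a random one, another option is to create the user at build time instead; a sketch of the Dockerfile, where the UID 1001 and the name appuser are assumptions to be matched to whatever your deployment actually assigns:

FROM openjdk:17.0.1-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends sshpass openssh-client && \
    rm -rf /var/lib/apt/lists/*
# Hypothetical UID/user name: align these with the UID the deployment uses
RUN useradd -m -u 1001 -s /bin/bash appuser
WORKDIR /opt/app
COPY /build/libs/*SNAPSHOT.jar /opt/app/app.jar
USER appuser
CMD ["java", "-jar", "/opt/app/app.jar"]

If the UID is random (as in OpenShift-style deployments), a user cannot be baked in at build time and you are back to creating the passwd entry when the container starts, as in the docker run example above.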
I am adding Apache Tika, for extracting text out of documents and images (with TikaOcr), to an already existing service in Azure Functions built on top of App Service. Apache Tika requires Tesseract to be installed locally on the machine. To get it installed, I ssh-ed into the server and ran apt-get, but (from what I understand) that setup is performed on the base App Service layer. As a result, concurrent OCR invocations really slow down my Functions. Since there are no official binaries of Tesseract, I was wondering if any of the following is possible:
Bundle Tesseract with my Functions app
Build a docker image with Tesseract.
Build a multi-container docker app with a tesseract runtime image (tesseract-shadow/tesseract-ocr-re)
I have tried to build a Docker image (following instructions from here) with Tesseract using the following Dockerfile, but Apache Tika fails to perform OCR with it.
ARG JAVA_VERSION=11
# This image additionally contains function core tools – useful when using custom extensions
#FROM mcr.microsoft.com/azure-functions/java:3.0-java$JAVA_VERSION-core-tools AS installer-env
FROM mcr.microsoft.com/azure-functions/java:3.0-java$JAVA_VERSION-build AS installer-env
RUN apt-get update && apt-get install -y tesseract-ocr
COPY . /src/functions-tika-extraction
RUN cd /src/functions-tika-extraction && \
mkdir -p /home/site/wwwroot && \
mvn clean package && \
cd ./target/azure-functions/ && \
cd $(ls -d */|head -n 1) && \
cp -a . /home/site/wwwroot
# This image is ssh enabled
FROM mcr.microsoft.com/azure-functions/java:3.0-java$JAVA_VERSION-appservice
# This image isn't ssh enabled
#FROM mcr.microsoft.com/azure-functions/java:3.0-java$JAVA_VERSION
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]
I'm fairly new to Docker and Azure Platform so I may be missing something here, but how can I get my Azure Functions to work with Tesseract using Docker or any other method?
After reading through the Docker docs and getting to know some basics of Docker, I could finally figure out that Tesseract was in fact installed, but below the Azure App Service layer, which somehow does not allow the container to access it. Tesseract can be made available to Azure Functions if it is installed in the uppermost layer, by adding the install at the bottom of the Dockerfile as follows:
ARG JAVA_VERSION=11
FROM mcr.microsoft.com/azure-functions/java:3.0-java$JAVA_VERSION-build AS installer-env
# remove this line
# RUN apt-get update && apt-get install -y tesseract-ocr
COPY . /src/functions-tika-extraction
RUN cd /src/functions-tika-extraction && \
mkdir -p /home/site/wwwroot && \
mvn clean package && \
cd ./target/azure-functions/ && \
cd $(ls -d */|head -n 1) && \
cp -a . /home/site/wwwroot
# This image is ssh enabled
FROM mcr.microsoft.com/azure-functions/java:3.0-java$JAVA_VERSION-appservice
# add the line here
RUN apt-get update && apt-get install -y tesseract-ocr
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]
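A quick way to confirm the binary really ended up in the final stage is to override the image entrypoint locally (the image tag here is just an example):

docker build -t functions-tika-extraction .
docker run --rm --entrypoint tesseract functions-tika-extraction --version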
While this satisfies my requirement of bundling tesseract-ocr with the Azure Functions Java application, invocation is unfortunately still very slow.
I have a Spring Boot Java app running on Ubuntu 14.x using Oracle Java 1.8.0 that I want to debug remotely with IntelliJ. I have tried to get it to listen on a port for debug purposes but with no success. Note, the ports I tried are all well above the port 1024, to make sure it's not a permission problem. I am not root but I do have sudo access to the box.
I tried adding this to the java command line:
-agentlib:jdwp=transport=dt_socket,address=localhost:9009,server=y,suspend=y
A technique I got from this document:
http://javahowto.blogspot.com/2010/09/java-agentlibjdwp-for-attaching.html
However when I run this command:
sudo netstat -an | grep LISTEN
I don't see port 9009 listed. Also, the app does not pause waiting for debugger attachment as the "suspend=y" parameter says it should; the app initialization messages stream by as normal as the app starts up. Why isn't this working?
Here is the shell script that launches the app. Note, this shell script is launched by supervisord. I point this out in case that might be causing any trouble:
#!/bin/bash
# Shell script to launch Spring Boot app
# Kill subprocess when parent bash process is terminated by supervisor or when CTRL+C is received
trap 'kill -TERM $PID' TERM INT
java \
-Dnetworkaddress.cache.ttl=5 \
-Dnetworkaddress.cache.negative.ttl=5 \
\
-jar spbootapp.jar \
-agentlib:jdwp=transport=dt_socket,address=localhost:9009,server=y,suspend=y \
--spring.application.name=spbootapp-awsdev \
--spring.profiles.active=cluster \
--spring.cloud.config.enabled=false \
--endpoints.configprops.enabled=false \
--endpoints.health.sensitive=false \
&
The debug parameters -agentlib:jdwp=transport=dt_socket,address=localhost:9009,server=y,suspend=y need to go before -jar in the command; anything placed after -jar spbootapp.jar is passed to your application as a program argument instead of being read by the JVM.
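Reordered, the java invocation from the script above becomes (same flags, only the position of the agent changes):

java \
    -Dnetworkaddress.cache.ttl=5 \
    -Dnetworkaddress.cache.negative.ttl=5 \
    -agentlib:jdwp=transport=dt_socket,address=localhost:9009,server=y,suspend=y \
    -jar spbootapp.jar \
    --spring.application.name=spbootapp-awsdev \
    --spring.profiles.active=cluster \
    --spring.cloud.config.enabled=false \
    --endpoints.configprops.enabled=false \
    --endpoints.health.sensitive=false \
    &

Note also that address=localhost:9009 makes the agent listen on the loopback interface only; for a remote IntelliJ connection to a Java 8 JVM you would typically use address=9009 (all interfaces) or tunnel the port over SSH.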
I'm deploying an application to a pre-configured GlassFish Docker container on Elastic Beanstalk. I need to install the MySQL connector/J library so it's accessible to GlassFish. According to the AWS documentation, "With preconfigured Docker platforms you cannot use a configuration file to customize and configure the software that your application depends on." But it does state: "you can add a Dockerfile to your application's root folder."
They provide an example (with no explanation!) on installing a PostgreSQL library for Python but I'm using Java and MySQL. As I am completely unfamiliar with Docker (and really just need to configure this one thing), I can't figure out how to do it. Using Amazon's example Dockerfile I was able to come up with this:
FROM amazon/aws-eb-glassfish:4.1-jdk8-onbuild-3.5.1
# Expose ports 8080 and 3306
EXPOSE 8080 3306
# Install MySQL dependencies
RUN curl -L -o /tmp/mysql-connector-java-5.1.34.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.34/mysql-connector-java-5.1.34.jar && \
apt-get update && \
apt-get install -y /tmp/mysql-connector-java-5.1.34.jar libpq-dev && \
rm -rf /var/lib/apt/lists/*
However, on application deploy, I'm getting the following error:
t update && apt-get install -y /tmp/mysql-connector-java-5.1.34.jar libpq-dev && rm -rf /var/lib/apt/lists/*] returned a non-zero code: 100"
And obviously, my library is not installed. How can I install the MySQL Connector/J library with a Dockerfile?
apt-get install -y /tmp/mysql-connector-java-5.1.34.jar
This is not how apt-get works. It installs Debian packages, not jar files.
There is a package called libmysql-java that may be what you want.
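A sketch of that route (on Debian-based images the package drops the connector jar under /usr/share/java; verify the exact path in your base image):

RUN apt-get update && \
    apt-get install -y libmysql-java && \
    rm -rf /var/lib/apt/lists/*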
You might also try something like
RUN curl -L -o /mysql-connector-java-5.1.34.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.34/mysql-connector-java-5.1.34.jar
ENV CLASSPATH=/mysql-connector-java-5.1.34.jar:${CLASSPATH}
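For GlassFish specifically, the driver usually has to be visible to the application server itself, so copying it into the domain's lib directory is the usual move. A sketch, assuming the preconfigured image keeps its domain at /usr/local/glassfish4/glassfish/domains/domain1 (an assumption; check the real path inside the base image first):

RUN curl -L -o /usr/local/glassfish4/glassfish/domains/domain1/lib/mysql-connector-java-5.1.34.jar \
    https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.34/mysql-connector-java-5.1.34.jar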