Micronaut: Build native image with Consul dependency does not work - java

I am trying to build a native image of a Micronaut (v1.0.4) application.
This application uses Consul for service discovery.
I created the app using the --features option:
$ mn create-app my-app --features discovery-consul --features graal-native-image --build maven
The application works perfectly on my local machine, but when I try to build a Docker container with the native image I get an error:
$ ./docker-build.sh
error: No instances are allowed in the image heap for a class that is initialized or reinitialized at image runtime:
sun.security.provider.NativePRNG
Detailed message:
Error: No instances are allowed in the image heap for a class that is initialized or reinitialized at image runtime: sun.security.provider.NativePRNG
Trace: object java.security.SecureRandom
method com.sun.jndi.dns.DnsClient.query(DnsName, int, int, boolean, boolean)
Call path from entry point to com.sun.jndi.dns.DnsClient.query(DnsName, int, int, boolean, boolean):
at com.sun.jndi.dns.DnsClient.query(DnsClient.java:178)
at com.sun.jndi.dns.Resolver.query(Resolver.java:81)
at com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:434)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:235)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:141)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:129)
at javax.naming.directory.InitialDirContext.getAttributes(InitialDirContext.java:142)
at io.micronaut.discovery.client.DnsResolver.getCNamesFromTxtRecord(DnsResolver.java:59)
at io.micronaut.discovery.client.EndpointUtil.getEC2DiscoveryUrlsFromZone(EndpointUtil.java:197)
at io.micronaut.discovery.client.EndpointUtil.getServiceUrlsFromDNS(EndpointUtil.java:141)
If I remove Consul integration, it works without any problem.
I could not find anything useful in the official documentation:
Microservices as GraalVM native images
Consul Support
Does anyone know where the problem is?

After going over several issues and posts, I finally found the answer.
To get rid of this failure, just add the class com.sun.jndi.dns.DnsClient to the list of classes under the --delay-class-initialization-to-runtime option when you create the native image in your Dockerfile:
Dockerfile
RUN native-image --no-server \
...
--delay-class-initialization-to-runtime=...,com.sun.jndi.dns.DnsClient \
-H:-UseServiceLoaderFeature \
--allow-incomplete-classpath \
-H:Name=model-quotes \
-H:Class=model.quotes.Application
...
After doing that, everything works fine and the Docker image is generated successfully.
It would be a good idea to add this class to the Dockerfile generated by default. It is a bit annoying to generate a new project with the Micronaut CLI and find that native images do not work without further changes.
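Note that newer GraalVM releases renamed this option, so if your native-image version rejects the flag above, the equivalent (assuming GraalVM 19.0 or later) should be:
--initialize-at-run-time=...,com.sun.jndi.dns.DnsClient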

Related

How can I export traces generated by the OpenTelemetry Java agent to Google Cloud Trace?

I've got a Spring Boot application that I'd like to automatically generate traces for using the OpenTelemetry Java agent, and subsequently upload those traces to Google Cloud Trace.
I've added the following code to the entry point of my application for sending traces:
// Imports needed for this snippet (OpenTelemetry SDK plus the Google Cloud Trace exporter):
import com.google.cloud.opentelemetry.trace.TraceExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor;

OpenTelemetrySdk.builder()
    .setTracerProvider(
        SdkTracerProvider.builder()
            .addSpanProcessor(
                SimpleSpanProcessor.create(TraceExporter.createWithDefaultConfiguration()))
            .build())
    .buildAndRegisterGlobal();
...and I'm running my application with the following JVM arguments:
-javaagent:path/to/opentelemetry-javaagent-all.jar \
-jar myapp.jar
...but I don't know how to connect the two.
Is there some agent configuration I can apply? Something like:
-Dotel.traces.exporter=google_cloud_trace
I ended up resolving this as follows:
1. Clone the GoogleCloudPlatform/opentelemetry-operations-java repo:
git clone git@github.com:GoogleCloudPlatform/opentelemetry-operations-java.git
2. Build the exporter-auto project:
./gradlew clean :exporter-auto:shadowJar
3. Copy the jar produced in exporter-auto/build/libs to my target project.
4. Run the application with the following arguments:
-javaagent:path/to/opentelemetry-javaagent-all.jar
-Dotel.javaagent.experimental.extensions=[artifact-from-step-3].jar
-Dotel.traces.exporter=google_cloud_trace
-Dotel.metrics.exporter=none
-jar myapp.jar
Note: This setup does not require any explicit code changes in the target code base.
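For completeness, the same settings can be passed as environment variables instead of system properties; the standard OpenTelemetry autoconfiguration mapping upper-cases the names and replaces dots with underscores:
export OTEL_JAVAAGENT_EXPERIMENTAL_EXTENSIONS=[artifact-from-step-3].jar
export OTEL_TRACES_EXPORTER=google_cloud_trace
export OTEL_METRICS_EXPORTER=none
java -javaagent:path/to/opentelemetry-javaagent-all.jar -jar myapp.jar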

Apache Flink Python Table API UDF Dependencies Problem

After submitting a Python Table API job that involves user-defined functions (UDFs) to a local cluster, it crashes with a
py4j.protocol.Py4JJavaError caused by
java.util.ServiceConfigurationError: org.apache.beam.sdk.options.PipelineOptionsRegistrar: org.apache.beam.sdk.options.DefaultPipelineOptionsRegistrar not a subtype.
I am aware that this is a problem concerning the dependencies on the lib path / classloading. I have already tried to follow all the instructions at the following link: https://ci.apache.org/projects/flink/flink-docs-release-1.10/monitoring/debugging_classloading.html
I have extensively tried different configurations with the classloader.parent-first-patterns-additional config option. Different entries with org.apache.beam.sdk.[...] have led to different, additional error messages.
The following Apache Beam dependencies are on the lib path:
beam-model-fn-execution-2.20.jar
beam-model-job-management-2.20.jar
beam-model-pipeline-2.20.jar
beam-runners-core-construction-java-2.20.jar
beam-runners-java-fn-execution-2.20.jar
beam-sdks-java-core-2.20.jar
beam-sdks-java-fn-execution-2.20.jar
beam-vendor-grpc-1_21_0-0.1.jar
beam-vendor-grpc-1_26_0.0.3.jar
beam-vendor-guava-26_0-jre-0.1.jar
beam-vendor-sdks-java-extensions-protobuf-2.20.jar
I can also rule out my own code as the cause, as I have tested the following sample code from the project website: https://flink.apache.org/2020/04/09/pyflink-udf-support-flink.html
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment, DataTypes
from pyflink.table.descriptors import Schema, OldCsv, FileSystem
from pyflink.table.udf import udf

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)
t_env = StreamTableEnvironment.create(env)

add = udf(lambda i, j: i + j, [DataTypes.BIGINT(), DataTypes.BIGINT()], DataTypes.BIGINT())

t_env.register_function("add", add)

t_env.connect(FileSystem().path('/tmp/input')) \
    .with_format(OldCsv()
                 .field('a', DataTypes.BIGINT())
                 .field('b', DataTypes.BIGINT())) \
    .with_schema(Schema()
                 .field('a', DataTypes.BIGINT())
                 .field('b', DataTypes.BIGINT())) \
    .create_temporary_table('mySource')

t_env.connect(FileSystem().path('/tmp/output')) \
    .with_format(OldCsv()
                 .field('sum', DataTypes.BIGINT())) \
    .with_schema(Schema()
                 .field('sum', DataTypes.BIGINT())) \
    .create_temporary_table('mySink')

t_env.from_path('mySource') \
    .select("add(a, b)") \
    .insert_into('mySink')

t_env.execute("tutorial_job")
When executing this code, the same error message appears.
Does anyone know a configuration of a Flink cluster that can run Python Table API jobs with UDFs? Many thanks for all tips in advance!
The problem is solved by the new version 1.10.1 of Apache Flink. The sample script shown in the question can now be executed via the binaries with the command run -py path/to/script without any problems.
As for the dependencies, they are already bundled in the flink-table_x.xx-1.10.1.jar that ships with the distribution, so no further dependencies need to be added to the lib path, which is what the debugging/configuration attempts described in the question were doing.
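To illustrate the submission (the script path and the local-cluster start below are assumptions, not from the original answer), from a Flink 1.10.1 distribution:
$ ./bin/start-cluster.sh
$ ./bin/flink run -py /path/to/udf_job.py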

Using micronaut on CloudRun fully managed

I tried to run the Micronaut framework on Cloud Run to test cold start performance.
When I deploy from the command line, I get this issue:
Deploying...
Creating Revision... Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information....failed
Deployment failed
ERROR: (gcloud.beta.run.deploy) Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
However, I have tested several Dockerfile configurations, and I think my last one correctly passes the PORT environment variable to the environment variable Micronaut expects:
FROM gradle:jdk11-slim as builder
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle build
FROM adoptopenjdk/openjdk11-openj9:jdk-11.0.1.13-alpine-slim
COPY --from=builder /home/gradle/src/build/libs/micronaut-jib-cr*.jar micronaut-jib-cr.jar
ENV MICRONAUT_SERVER_PORT=${PORT}
EXPOSE ${PORT}
CMD java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dcom.sun.management.jmxremote -noverify ${JAVA_OPTS} -jar micronaut-jib-cr.jar
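Note that references like ${PORT} in the ENV and EXPOSE instructions are resolved when the image is built, not when Cloud Run injects PORT at runtime. A more robust pattern (a sketch, assuming a standard application.yml) is to let Micronaut resolve the variable itself at runtime:
micronaut:
  server:
    port: ${PORT:8080}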
So I dug deeper into the Cloud Run logs, and I saw another possible cause of the problem:
D Container Sandbox Limitation: Unsupported syscall setsockopt(0x8,0x1,0xc,0x2ae1273fc05c,0x4,0x32)
D Container Sandbox Limitation: Unsupported syscall setsockopt(0x8,0x6,0x6,0x2ae1273fc03c,0x4,0x3a)
A Error: Could not find or load main class micronaut.jib.cr.Application
A Caused by: java.lang.ClassNotFoundException: micronaut.jib.cr.Application
D Container Sandbox Limitation: Unsupported syscall semctl(0x1,0x0,0x2,0x2ae12753ef50,0x2,0x2ae12753ef50)
D Container Sandbox Limitation: Unsupported syscall semctl(0x1,0x0,0x2,0x2ae12753ef50,0x2,0x2ae12753ef50)
D Container Sandbox Limitation: Unsupported syscall semctl(0x1,0x0,0x2,0x2ae12753f440,0x2,0x2ae12753f440)
D Container Sandbox Limitation: Unsupported syscall semctl(0x1,0x0,0x2,0x2ae12753f440,0x2,0x2ae12753f440)
D Container Sandbox Limitation: Unsupported syscall semctl(0x1,0x0,0x2,0x2ae12753f440,0x2,0x2ae12753f440)
D Container Sandbox Limitation: Unsupported syscall semctl(0x1,0x0,0x2,0x2ae12753f440,0x2,0x2ae12753f440)
Is it really a wrong port usage? In that case, can you help me with my Dockerfile?
If not, is it a known Cloud Run limitation?
Is there a workaround in Micronaut to solve this syscall limitation?
Thanks for your help.
It's not a port issue. It looks like you've hit a limitation of gVisor, the sandbox used by Cloud Run. Your container is trying to make a syscall that the sandbox does not (yet) support, which causes the container to crash during start-up.
Indeed, John Hanley was right. My Gradle build didn't reference the correct main class... I'm ashamed of this obvious error. But I used Visual Studio Code for the first time (because it seems trendy!), and its package refactoring is not as good as IntelliJ's (or I don't have the right plugin!).
Thanks for your help, it now works perfectly (and even better with GraalVM packaging!)
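For readers hitting the same "Could not find or load main class" error: the fix boils down to making the build reference the actual application class. A minimal sketch for build.gradle, assuming the main class from the logs above (micronaut.jib.cr.Application) and the Gradle application plugin:
mainClassName = 'micronaut.jib.cr.Application'

jar {
    manifest {
        // Without a correct Main-Class attribute, `java -jar` fails with
        // "Could not find or load main class".
        attributes 'Main-Class': 'micronaut.jib.cr.Application'
    }
}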

Spring boot SOAP service running in docker doesn't find ExtensibilityElement class

I'm setting up a SOAP service using Spring Boot and running it in a Docker container.
When I run the jar alone everything works fine, but when I try to run it in a Docker container it fails to initialize and throws this error:
Failed to instantiate [org.springframework.ws.wsdl.wsdl11.Wsdl11Definition]: Factory method 'defaultWsdl11Definition' threw exception;
nested exception is java.lang.NoClassDefFoundError: javax/wsdl/extensions/ExtensibilityElement
I have already tried different images, and also tried creating a base Docker image and installing the Oracle JDK manually.
You can find the exact code here and try it yourself.
To run the app:
gradle build
java -jar build/libs/service-0.0.1-SNAPSHOT.jar
To create the docker image:
docker build -t soap:service --build-arg JAR_FILE=./build/libs/service-0.0.1-SNAPSHOT.jar .
To run the docker image:
docker run soap:service
Any help is appreciated.
In case someone wants to know: the problem was that the wsdl4j library was set to compileOnly; changing it to compile made the library end up in the final jar.
For more information, see https://community.liferay.com/blogs/-/blogs/gradle-compile-vs-compileonly-vs-compileinclude
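To illustrate the change in build.gradle (a sketch; the version shown is an example, keep whatever your project declares):
dependencies {
    // compileOnly puts wsdl4j on the compile classpath only, so
    // javax.wsdl.extensions.ExtensibilityElement is missing at runtime:
    // compileOnly 'wsdl4j:wsdl4j:1.6.3'

    // compile (or implementation on newer Gradle versions) bundles it into the jar:
    compile 'wsdl4j:wsdl4j:1.6.3'
}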

jna Native.LoadLibrary does not manage to load library on server (working in local)

I use JNA to load a C++ library (.so) in a Java project. I package the library inside the jar and load it from the jar when instantiating the Java class that uses it. I do all this like so:
1. mvn install compiles the C++ code and packages the resulting dynamic library inside the jar.
2. In a static context, when instantiating the LibraryWrapperClass, I call
System.load( temp.getAbsolutePath() );
where temp is a temporary file containing the library extracted from the jar. This code is based on the work of adamheinrich.
3. I call Native.loadLibrary(LIBRARYPATH) to wrap the library into a Java interface:
private interface Wrapper extends Library {
    Wrapper INSTANCE = Native.loadLibrary( C_LIBRARY_PATH, Wrapper.class );
    Pointer Constructor();
    ...
}
I run tests that validate the library is found and up and running.
I have a Java web project that depends on this project. It uses Tomcat and runs fine locally.
My issue is that when I deploy on the server, the LibraryWrapperClass cannot be initialized. The error on the server is:
java.lang.NoClassDefFoundError: Could not initialize class pacakgeName.LibraryWrapperClass
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:375)
at org.hibernate.annotations.common.util.StandardClassLoaderDelegateImpl.classForName(StandardClassLoaderDelegateImpl.java:57)
at org.hibernate.boot.internal.MetadataBuilderImpl$MetadataBuildingOptionsImpl$4.classForName(MetadataBuilderImpl.java:758)
at org.hibernate.annotations.common.reflection.java.JavaReflectionManager.classForName(JavaReflectionManager.java:144)
at...
This error suggests that the library is found, since no UnsatisfiedLinkError is thrown, but something else is failing. Does anyone know what could be happening? How could I debug this?
I repeat that everything works perfectly locally.
How could I debug?
1. with strace
strace will show you which files Tomcat is trying to open: strace -f -e trace=file -o log.txt bin/startup.sh
Afterwards, look for packageName in log.txt, or for other files that were not found, with:
egrep ' open.*No such file' log.txt
2. with JConsole
Enable JMX, launch JConsole, go to the VM Summary tab, and carefully check/compare the VM arguments, classpath, library path, and boot class path.
3. dependency listing with ldd
If a dependency issue is likely to be the problem, the ldd sharedLibraryFile.so command lists all the dependencies of the shared library and lets you track down which one might be missing.
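4. by surfacing the first initialization failure
"Could not initialize class" means the static initializer already failed once earlier; the JVM only reports NoClassDefFoundError on later uses, hiding the original cause. A minimal sketch (the class name is the placeholder from the question's stack trace) to log that first error at startup:
// Force initialization in isolation so the first failure is visible,
// instead of letting Hibernate trigger it and only see NoClassDefFoundError.
try {
    Class.forName("packageName.LibraryWrapperClass");
} catch (ExceptionInInitializerError e) {
    // The cause is the real problem, e.g. an UnsatisfiedLinkError from System.load.
    e.getCause().printStackTrace();
} catch (ClassNotFoundException e) {
    e.printStackTrace();
}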
