I would like to use the Spring Cloud Stream Kinesis binder with KPL/KCL enabled. However, when I enable that with kpl-kcl-enabled: true, the following error keeps coming up:
com.amazonaws.services.kinesis.producer.IrrecoverableError: Error starting child process
at com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:537)
at com.amazonaws.services.kinesis.producer.Daemon.startChildProcess(Daemon.java:468)
at com.amazonaws.services.kinesis.producer.Daemon.access$100(Daemon.java:63)
at com.amazonaws.services.kinesis.producer.Daemon$1.run(Daemon.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Cannot run program "/tmp/amazon-kinesis-producer-native-binaries/kinesis_producer_685427917724EC847D7D65F261E7040F3FCCB039": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at com.amazonaws.services.kinesis.producer.Daemon.startChildProcess(Daemon.java:466)
... 5 common frames omitted
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 6 common frames omitted
After quite a few restart attempts, it throws an OutOfMemoryError:
Exception in thread "kpl-daemon-0000" java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at com.amazonaws.services.kinesis.producer.Daemon.<init>(Daemon.java:95)
at com.amazonaws.services.kinesis.producer.KinesisProducer$MessageHandler.onError(KinesisProducer.java:168)
at com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:537)
at com.amazonaws.services.kinesis.producer.Daemon.startChildProcess(Daemon.java:468)
at com.amazonaws.services.kinesis.producer.Daemon.access$100(Daemon.java:63)
at com.amazonaws.services.kinesis.producer.Daemon$1.run(Daemon.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
KPL expects glibc version 2.5 or higher to be available in your Linux distribution.
The openjdk:8-jdk-alpine Docker image does not provide that, since Alpine ships musl libc instead of glibc.
You need to use a different Docker image, for example openjdk:8-jdk-slim to get a JDK and glibc already installed, or frolvlad/alpine-glibc for an Alpine image with glibc.
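As a minimal sketch, assuming a Maven-style build where the artifact lands in target/ (the jar name and tags are placeholders for your own):

# Debian-based image that ships glibc, which the KPL native binary needs
FROM openjdk:8-jdk-slim

COPY target/myapp.jar /app/myapp.jar
ENTRYPOINT ["java", "-jar", "/app/myapp.jar"]

# Alternatively, keep Alpine but with glibc:
# FROM frolvlad/alpine-glibc
# (this base has no JDK, so you would still need to install one on top)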
I am running a Java application on Kubernetes that uploads multiple files from the local container to an S3 bucket using s3a, but I am getting the exception below in the logs and the files are not being uploaded to S3.
Partial files are getting uploaded to S3.
2021-11-30 12:28:44,529 614982 [secor-threadpool-1] ERROR c.pinterest.secor.uploader.Uploader - Error while uploading to S3 : java.lang.RuntimeException: org.apache.hadoop.fs.s3a.AWSClientIOException: copyFromLocalFile(/tmp/secor_data/message_logs/backup/7_22/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate, s3a://prod-dataplatform-hive/orc_secor_test/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate) on /tmp/secor_data/message_logs/backup/7_22/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate: com.amazonaws.AmazonClientException: Unable to complete transfer: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V: Unable to complete transfer: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V
java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.apache.hadoop.fs.s3a.AWSClientIOException: copyFromLocalFile(/tmp/secor_data/message_logs/backup/7_22/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate, s3a://prod-dataplatform-hive/orc_secor_test/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate) on /tmp/secor_data/message_logs/backup/7_22/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate: com.amazonaws.AmazonClientException: Unable to complete transfer: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V: Unable to complete transfer: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at com.pinterest.secor.uploader.FutureHandle.get(FutureHandle.java:34)
at com.pinterest.secor.uploader.Uploader.uploadFiles(Uploader.java:117)
at com.pinterest.secor.uploader.Uploader.checkTopicPartition(Uploader.java:244)
at com.pinterest.secor.uploader.Uploader.applyPolicy(Uploader.java:284)
at com.pinterest.secor.consumer.Consumer.checkUploadPolicy(Consumer.java:141)
at com.pinterest.secor.consumer.Consumer.run(Consumer.java:133)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: org.apache.hadoop.fs.s3a.AWSClientIOException: copyFromLocalFile(/tmp/secor_data/message_logs/backup/7_22/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate, s3a://prod-dataplatform-hive/orc_secor_test/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate) on /tmp/secor_data/message_logs/backup/7_22/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate: com.amazonaws.AmazonClientException: Unable to complete transfer: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V: Unable to complete transfer: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V
at com.pinterest.secor.uploader.HadoopS3UploadManager$1.run(HadoopS3UploadManager.java:63)
... 5 more
Caused by: org.apache.hadoop.fs.s3a.AWSClientIOException: copyFromLocalFile(/tmp/secor_data/message_logs/backup/7_22/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate, s3a://prod-dataplatform-hive/orc_secor_test/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate) on /tmp/secor_data/message_logs/backup/7_22/DAPI_UNICAST_BOOKING_ACTION/driverapi_transactional_ha/dapi_unicast_booking_action/dt=2021-11-29/hr=12/pk_1_1_00000000000000101414.deflate: com.amazonaws.AmazonClientException: Unable to complete transfer: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V: Unable to complete transfer: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:144)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:117)
at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFromLocalFile(S3AFileSystem.java:2039)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2320)
at org.apache.hadoop.fs.FileSystem.moveFromLocalFile(FileSystem.java:2307)
at com.pinterest.secor.util.FileUtil.moveToCloud(FileUtil.java:206)
at com.pinterest.secor.uploader.HadoopS3UploadManager$1.run(HadoopS3UploadManager.java:61)
... 5 more
Caused by: com.amazonaws.AmazonClientException: Unable to complete transfer: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.unwrapExecutionException(AbstractTransfer.java:286)
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.rethrowExecutionException(AbstractTransfer.java:265)
at com.amazonaws.services.s3.transfer.internal.UploadImpl.waitForUploadResult(UploadImpl.java:66)
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerCopyFromLocalFile(S3AFileSystem.java:2099)
at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFromLocalFile(S3AFileSystem.java:2037)
... 9 more
Caused by: java.lang.NoSuchMethodError: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lorg/apache/commons/logging/Log;)V
at com.amazonaws.services.s3.model.S3DataSource$Utils.cleanupDataSource(S3DataSource.java:48)
at com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1863)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1817)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:169)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:149)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:115)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:45)
... 4 more
Looks like this is a known bug: problems caused by AWS SDK classes from the localstack-utils-fat.jar overriding classes defined in the actual Lambda jar/zip itself.
Here's the version you need with the fix. It sounds like there is a workaround:
a partial fix was implemented that moved the localstack-utils-fat.jar later on the classpath, but it only applied to lambdas run using the Docker executor.
Basically, it's not your fault. It's a dependency issue: two jars ship the same classes with incompatible method signatures, and the wrong one wins on the classpath. You need to use the latest localstack-utils-fat.jar, which should fix your issue.
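If you want to confirm the conflict yourself, a quick hypothetical check (the lib/ directory is a placeholder for wherever your runtime classpath jars live) is to see which jars bundle the class whose signature blew up:

# list every jar that contains the AWS SDK's IOUtils class;
# more than one match means two SDK copies are competing on the classpath
for j in lib/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'com/amazonaws/util/IOUtils.class' && echo "$j"
done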
I am trying to run a JavaFX application with OpenJDK/OpenJFX.
This is an existing, working application based on OracleJDK/JavaFX that we want to migrate to OpenJFX.
I followed the instructions on how to build a fat jar from the OpenJFX documentation, using these versions:
OpenJDK 10
OpenJFX 11
I have successfully built a fat jar that contains all the .class files and libraries.
On Windows 10 I am able to build and run the OpenJFX HelloWorld application, but when running my own application I get errors about various stock shaders that could not be loaded:
java.lang.InternalError: Error loading stock shader Solid_Color
Stacktrace:
java.lang.InternalError: Error loading stock shader Solid_Color
at com.sun.prism.d3d.D3DResourceFactory.createStockShader(D3DResourceFactory.java:411)
at com.sun.prism.impl.ps.BaseShaderContext.getPaintShader(BaseShaderContext.java:263)
at com.sun.prism.impl.ps.BaseShaderContext.validatePaintOp(BaseShaderContext.java:484)
at com.sun.prism.impl.ps.BaseShaderContext.validatePaintOp(BaseShaderContext.java:355)
at com.sun.prism.impl.ps.BaseShaderGraphics.fillQuad(BaseShaderGraphics.java:1613)
at com.sun.javafx.tk.quantum.ViewPainter.doPaint(ViewPainter.java:475)
at com.sun.javafx.tk.quantum.ViewPainter.paintImpl(ViewPainter.java:328)
at com.sun.javafx.tk.quantum.UploadingPainter.run(UploadingPainter.java:142)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at com.sun.javafx.tk.RenderJob.run(RenderJob.java:58)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at com.sun.javafx.tk.quantum.QuantumRenderer$PipelineRunnable.run(QuantumRenderer.java:125)
at java.base/java.lang.Thread.run(Thread.java:844)
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.sun.prism.d3d.D3DResourceFactory.createStockShader(D3DResourceFactory.java:408)
at com.sun.prism.impl.ps.BaseShaderContext.getPaintShader(BaseShaderContext.java:263)
at com.sun.prism.impl.ps.BaseShaderContext.validatePaintOp(BaseShaderContext.java:484)
at com.sun.prism.impl.ps.BaseShaderContext.validatePaintOp(BaseShaderContext.java:355)
at com.sun.prism.impl.ps.BaseShaderGraphics.fillQuad(BaseShaderGraphics.java:1613)
at com.sun.javafx.tk.quantum.ViewPainter.doPaint(ViewPainter.java:475)
at com.sun.javafx.tk.quantum.ViewPainter.paintImpl(ViewPainter.java:328)
at com.sun.javafx.tk.quantum.UploadingPainter.run(UploadingPainter.java:142)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at com.sun.javafx.tk.RenderJob.run(RenderJob.java:58)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at com.sun.javafx.tk.quantum.QuantumRenderer$PipelineRunnable.run(QuantumRenderer.java:125)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.RuntimeException: InputStream must be non-null
at com.sun.prism.d3d.D3DResourceFactory.getBuffer(D3DResourceFactory.java:349)
at com.sun.prism.d3d.D3DResourceFactory.createShader(D3DResourceFactory.java:390)
at com.sun.prism.shader.Solid_Color_Loader.loadShader(Solid_Color_Loader.java:47)
... 19 more
I followed the suggestion to increase the VRAM from a similar question (JavaFX on Raspberry PI: Error loading stock shader).
This did not help.
I also read another similar question:
How to recompile JavaFX 11/12
I tried running the application on various Windows installations, with the same problem.
The errors seem to be triggered by code such as:
Pane root = new Pane();
root.setStyle("-fx-background-color: #676767;-fx-base: #676767; -fx-background: #676767;");
which has always worked fine before.
What am I missing?
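For what it's worth, the root cause "InputStream must be non-null" suggests the Prism shader resources never made it into the fat jar, so a sanity check worth trying (the jar name is a placeholder) is:

# the Solid_Color shader should show up as both a loader class and a shader resource;
# if only the .class file appears, the shader data was dropped while building the fat jar
jar tf myapp-fat.jar | grep -i solid_color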
I have downloaded the following project from GitHub, amazon-sumerian-arcore-starter-app, with the intention of running it on my Android device to play around. I am not a Java programmer by nature, so I downloaded Android Studio for Windows in order to build and run the sample.
When I try to turn the source code into a 'module' (which I assume I need to do in order to run the code on a device, since it expects a target module when you create a run configuration), I get the error below. I assume it's similar to a missing DLL/library/reference in .NET C#, but I can't be sure. Does anyone know how to address this issue? I would have expected the sample code to 'just work'.
Error:Internal error: (java.lang.ClassNotFoundException) com.google.wireless.android.sdk.stats.IntellijIndexingStats$Index
java.lang.ClassNotFoundException: com.google.wireless.android.sdk.stats.IntellijIndexingStats$Index
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at com.intellij.util.indexing.counters.IndexCounters.<clinit>(IndexCounters.java:34)
at com.intellij.util.indexing.impl.MapReduceIndex.<init>(MapReduceIndex.java:86)
at org.jetbrains.jps.backwardRefs.index.CompilerReferenceIndex$CompilerMapReduceIndex.<init>(CompilerReferenceIndex.java:214)
at org.jetbrains.jps.backwardRefs.index.CompilerReferenceIndex.<init>(CompilerReferenceIndex.java:73)
at org.jetbrains.jps.backwardRefs.JavaCompilerBackwardReferenceIndex.<init>(JavaCompilerBackwardReferenceIndex.java:12)
at org.jetbrains.jps.backwardRefs.JavaBackwardReferenceIndexWriter.initialize(JavaBackwardReferenceIndexWriter.java:74)
at org.jetbrains.jps.backwardRefs.JavaBackwardReferenceIndexBuilder.buildStarted(JavaBackwardReferenceIndexBuilder.java:40)
at org.jetbrains.jps.incremental.IncProjectBuilder.runBuild(IncProjectBuilder.java:358)
at org.jetbrains.jps.incremental.IncProjectBuilder.build(IncProjectBuilder.java:178)
at org.jetbrains.jps.cmdline.BuildRunner.runBuild(BuildRunner.java:138)
at org.jetbrains.jps.cmdline.BuildSession.runBuild(BuildSession.java:302)
at org.jetbrains.jps.cmdline.BuildSession.run(BuildSession.java:135)
at org.jetbrains.jps.cmdline.BuildMain$MyMessageHandler.lambda$channelRead0$0(BuildMain.java:229)
at org.jetbrains.jps.service.impl.SharedThreadPoolImpl.lambda$executeOnPooledThread$0(SharedThreadPoolImpl.java:42)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
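The failing frames are all inside IntelliJ's own build system (JPS), not in the sample code, so one hedged workaround (not a confirmed fix) is to build and deploy through the Gradle wrapper that ships with the project, bypassing the IDE's internal compiler:

# run from the project root; on Windows use gradlew.bat
./gradlew assembleDebug    # compile the debug APK
./gradlew installDebug     # install it on the connected device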
We are running Spark as a Docker microservice. My Spark application is able to submit tasks to the worker node, but the worker node is not able to connect back to the application and throws an UnknownHostException. Basically, the worker node tries to reach the application by its container ID (658e5d214a60), which does not resolve to the container's IP.
It works when I run these services with docker-compose YAML files on a local Linux machine, but fails when running in containers on AWS EC2.
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:64)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:293)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:201)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:65)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:64)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
... 4 more
Caused by: java.io.IOException: Failed to connect to 658e5d214a60:36335
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: 658e5d214a60
at java.net.InetAddress.getAllByName0(InetAddress.java:1259)
at java.net.InetAddress.getAllByName(InetAddress.java:1171)
at java.net.InetAddress.getAllByName(InetAddress.java:1105)
at java.net.InetAddress.getByName(InetAddress.java:1055)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:143)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:143)
at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32)
at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:208)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:49)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:188)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:174)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:978)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:512)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:423)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:482)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
... 1 more
Spark version 2.3.0
Docker version 1.12.6
Since the container port is dynamic, we can't simply map it to the host and use that.
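One hedged mitigation (the property names are standard Spark settings; the host value and port numbers are placeholders you would adapt) is to make the driver advertise an address the workers can resolve instead of the container ID, and to pin the driver ports so they can be published on the host:

spark-submit \
  --conf spark.driver.host=<host-or-dns-name-workers-can-resolve> \
  --conf spark.driver.bindAddress=0.0.0.0 \
  --conf spark.driver.port=7078 \
  --conf spark.blockManager.port=7079 \
  ...

With fixed ports, docker run -p 7078:7078 -p 7079:7079 (or the equivalent port mapping in your task definition) makes the driver reachable from the workers.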
I'm trying to get IntelliJ IDEA to work with a WildFly 10 server. I installed IntelliJ following this, and WildFly thanks to this script.
The startup script used by IntelliJ is /opt/wildfly-10.0.0.Final/bin/standalone.sh; when I tried to run it manually, I got the following error:
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/wildfly
JAVA: /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java
JAVA_OPTS: -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
=========================================================================
java.lang.IllegalArgumentException: Failed to instantiate class "org.jboss.logmanager.handlers.PeriodicRotatingFileHandler" for handler "FILE"
at org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.validate(AbstractPropertyConfiguration.java:116)
at org.jboss.logmanager.config.LogContextConfigurationImpl.doPrepare(LogContextConfigurationImpl.java:335)
at org.jboss.logmanager.config.LogContextConfigurationImpl.prepare(LogContextConfigurationImpl.java:288)
at org.jboss.logmanager.config.LogContextConfigurationImpl.commit(LogContextConfigurationImpl.java:297)
at org.jboss.logmanager.PropertyConfigurator.configure(PropertyConfigurator.java:546)
at org.jboss.logmanager.PropertyConfigurator.configure(PropertyConfigurator.java:97)
at org.jboss.logmanager.LogManager.readConfiguration(LogManager.java:514)
at org.jboss.logmanager.LogManager.readConfiguration(LogManager.java:476)
at java.util.logging.LogManager$3.run(LogManager.java:399)
at java.util.logging.LogManager$3.run(LogManager.java:396)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.logging.LogManager.readPrimordialConfiguration(LogManager.java:396)
at java.util.logging.LogManager.access$800(LogManager.java:145)
at java.util.logging.LogManager$2.run(LogManager.java:345)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.logging.LogManager.ensureLogManagerInitialized(LogManager.java:338)
at java.util.logging.LogManager.getLogManager(LogManager.java:378)
at org.jboss.modules.Main.main(Main.java:482)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.validate(AbstractPropertyConfiguration.java:114)
... 17 more
Caused by: java.io.FileNotFoundException: /opt/wildfly/standalone/log/server.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.jboss.logmanager.handlers.FileHandler.setFile(FileHandler.java:151)
at org.jboss.logmanager.handlers.PeriodicRotatingFileHandler.setFile(PeriodicRotatingFileHandler.java:102)
at org.jboss.logmanager.handlers.FileHandler.setFileName(FileHandler.java:189)
at org.jboss.logmanager.handlers.FileHandler.<init>(FileHandler.java:119)
at org.jboss.logmanager.handlers.PeriodicRotatingFileHandler.<init>(PeriodicRotatingFileHandler.java:70)
... 22 more
java.util.concurrent.ExecutionException: Operation failed
at org.jboss.threads.AsyncFutureTask.operationFailed(AsyncFutureTask.java:74)
at org.jboss.threads.AsyncFutureTask.get(AsyncFutureTask.java:268)
at org.jboss.as.server.Main.main(Main.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jboss.modules.Module.run(Module.java:329)
at org.jboss.modules.Main.main(Main.java:507)
Caused by: org.jboss.msc.service.StartException in service jboss.as: Failed to start service
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1904)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: WFLYDR0006: Directory /opt/wildfly/standalone/data/content is not writable
at org.jboss.as.repository.ContentRepository$Factory$ContentRepositoryImpl.<init>(ContentRepository.java:188)
at org.jboss.as.repository.ContentRepository$Factory.addService(ContentRepository.java:154)
at org.jboss.as.server.ApplicationServerService.start(ApplicationServerService.java:146)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
... 3 more
This is exactly the same output I get in IntelliJ.
Thanks to this topic, I found that it may be a matter of user permissions.
However, when I try to add a user to WildFly with the add-user.sh script, I get this error: ./add-user.sh: 1: eval: /usr/lib/jvm/jdk1.8.0_60/bin/java: not found
It is looking for the wrong JDK path. I tried to change it following different solutions, but none of them worked.
My JAVA_HOME is set to /usr/lib/jvm/java-8-oracle.
Does anyone have an idea of what to do? Thank you in advance :)
Forget the add-user.sh script; that is for adding management users to WildFly. Your issue is with your Linux users.
The directory has to be writable by whatever user WildFly is running as.
If you are running it as a user named wildfly, then you have to change the ownership of those directories to that user. From your question, it looks like you are running as some other user that does not have permissions on those directories.
If you want a quick, easy fix and you aren't worried about other users on the system, you could just change the permissions like:
sudo chmod -R 766 /opt/wildfly/standalone/
This gives the owner all permissions, and other users read/write permissions on those directories. (Note that a directory also needs the execute bit before it can be entered, so if WildFly runs as a different user than the owner, you may need 777 here.)
This is not best practice, though. Best practice is to give ownership of that directory to the wildfly user with restrictive permissions (for example 700 on the directories), and run WildFly as the wildfly user on Linux. Any startup script you find will likely do this for you.
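A sketch of that best-practice route, assuming the install lives under /opt/wildfly and the service user is named wildfly:

# create a dedicated system user (skip if it already exists)
sudo useradd --system --home /opt/wildfly --shell /sbin/nologin wildfly

# hand the writable runtime tree to that user; X sets execute on directories only
sudo chown -R wildfly:wildfly /opt/wildfly/standalone
sudo chmod -R u+rwX,go-rwx /opt/wildfly/standalone

# start the server as the wildfly user
sudo -u wildfly /opt/wildfly-10.0.0.Final/bin/standalone.sh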