I have a jar which runs fine on my host; specifically, when I run
java -jar myjar.jar
I get the expected output:
[2018-12-05 16:46:53.917] boot - 21252 INFO [main] --- Application: No active profile set, falling back to default profiles: default
[2018-12-05 16:47:00.855] boot - 21252 INFO [main] --- Application: Started Application in 8.176 seconds (JVM running for 9.106)
This is the Core Data Micro Service.
[2018-12-05 16:47:00.856] boot - 21252 INFO [main] --- Application: Registering to queue for events
[2018-12-05 16:47:00.857] boot - 21252 INFO [main] --- ZeroMQEventSubscriber: Getting subscriber, listening to tcp://localhost:5565
[2018-12-05 16:47:00.915] boot - 21252 INFO [main] --- ZeroMQEventSubscriber: Watching for new Event messages...
But then, I try to run the same jar inside a docker container. So I create the image like this:
FROM openjdk:8-jdk-alpine
COPY myjar.jar /opt/spring-cloud/lib/
ENTRYPOINT ["/usr/bin/java"]
CMD ["-jar", "/opt/spring-cloud/lib/myjar.jar"]
EXPOSE 48080
and run it:
sudo docker run [ID]
but this time I get this exception from the container logs (this is only part of the exception because it is too long, but I can post all of it if needed):
[2018-12-07 08:30:31.447] boot - 1 INFO [main] --- Application: No active profile set, falling back to default profiles: default
[2018-12-07 08:32:35.423] boot - 1 ERROR [main] --- SpringApplication: Application startup failed
...
...
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'readingControllerImpl': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: org.edgexfoundry.dao.ValueDescriptorRepository org.edgexfoundry.controller.impl.ReadingControllerImpl.valDescRepos; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'valueDescriptorRepository': Invocation of init method failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 120000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=localhost:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 120000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=localhost:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:334)
...
...
Caused by: com.mongodb.MongoTimeoutException: Timed out after 120000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=localhost:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
at com.mongodb.BaseCluster.getServer(BaseCluster.java:82)
at com.mongodb.DBTCPConnector.getServer(DBTCPConnector.java:664)
at com.mongodb.DBTCPConnector.access$500(DBTCPConnector.java:40)
at com.mongodb.DBTCPConnector$MyPort.getConnection(DBTCPConnector.java:513)
at com.mongodb.DBTCPConnector$MyPort.get(DBTCPConnector.java:456)
at com.mongodb.DBTCPConnector.getPrimaryPort(DBTCPConnector.java:415)
at com.mongodb.DBCollectionImpl.createIndex(DBCollectionImpl.java:378)
at com.mongodb.DBCollection.createIndex(DBCollection.java:597)
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.createIndex(MongoPersistentEntityIndexCreator.java:142)
... 57 more
Mongo has been started in another container through docker-compose (together with other services in other containers):
ps aux | grep mongo
root 16226 0.0 0.0 4340 768 ? Ss 10:27 0:00 /bin/sh -c /edgex/mongo/config/launch-edgex-mongo.sh
root 16292 0.0 0.0 4340 764 ? S 10:27 0:00 /bin/sh /edgex/mongo/config/launch-edgex-mongo.sh
root 16293 0.5 0.3 961168 61400 ? SLl 10:27 0:05 mongod --smallfiles
This is the docker-compose file:
version: '3'
services:
  volume:
    image: edgexfoundry/docker-edgex-volume:0.6.0
    container_name: edgex-files
    networks:
      - edgex-network
    volumes:
      - db-data:/data/db
      - log-data:/edgex/logs
      - consul-config:/consul/config
      - consul-data:/consul/data
  mongo:
    image: edgexfoundry/docker-edgex-mongo:0.6.0
    ports:
      - "27017:27017"
    container_name: edgex-mongo
    hostname: edgex-mongo
    networks:
      - edgex-network
    volumes:
      - db-data:/data/db
      - log-data:/edgex/logs
      - consul-config:/consul/config
      - consul-data:/consul/data
    depends_on:
      - volume
  .... more services...
networks:
  edgex-network:
    driver: "bridge"
And the mongo db configuration properties:
spring.data.mongodb.username=core
spring.data.mongodb.password=password
spring.data.mongodb.database=coredata
#change to localhost when running locally during development
# (or set hosts to point edgex-mongo to the mongo host)
spring.data.mongodb.host=localhost
#spring.data.mongodb.host=edgex-mongo
spring.data.mongodb.port=27017
spring.data.mongodb.connectTimeout=120000
spring.data.mongodb.socketTimeout=60000
spring.data.mongodb.maxWaitTime=120000
spring.data.mongodb.socketKeepAlive=true
Any ideas what may be going wrong?
There are two things going wrong here. First, Spring tries to connect to your MongoDB on localhost; inside Docker this does not work, because localhost refers to the current container, where of course no MongoDB is running. To fix this, comment out that line and uncomment the next one, which sets the host to edgex-mongo, the hostname of your MongoDB container, so Spring knows to connect to that container.
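Concretely, with the properties shown in the question, that means swapping which of the two host lines is commented out:
#spring.data.mongodb.host=localhost
spring.data.mongodb.host=edgex-mongo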
However, once you do this you will run into a second issue: the name edgex-mongo will not resolve, because your container has no connection to the MongoDB container. edgex-mongo sits inside a bridged network, so you have to attach the Spring container to that network, for example with the following command:
docker run --network edgex-network [image]
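Note that Docker Compose normally prefixes the network name with the project name, so the actual name on your host may differ; check it first. If the Spring container is already running, you can also attach it to the network without recreating it. Both commands below are standard Docker CLI; the bracketed names are placeholders:
docker network ls
docker network connect [edgex-network-name] [spring-container-name]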
I hope this helps you
Related
I get a timeout exception when trying to connect from a Docker container to a local MongoDB.
The error is:
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'getBuilder' defined in io.mongock.runner.springboot.config.MongockContext: Unsatisfied dependency expressed through method 'getBuilder' parameter 0: Error creating bean with name 'connectionDriver' defined in class path resource [io/mongock/driver/mongodb/springdata/v4/config/SpringDataMongoV4Context.class]: Failed to instantiate [io.mongock.driver.api.driver.ConnectionDriver]: Factory method 'connectionDriver' threw exception with message: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]
Here are my Dockerfile and the entrypoint script for my image.
dockerfile
FROM openjdk:17
COPY /build/libs/demo-0.0.2-SNAPSHOT.jar Demo-0.0.2.jar
COPY entrypoint.sh entrypoint.sh
RUN chmod +x entrypoint.sh
EXPOSE 3000
ENTRYPOINT ./entrypoint.sh
entrypoint.sh
java -jar -Dspring.data.mongodb.uri="$MONGODB_URI" Demo-0.0.2.jar
Here is the docker compose file for mongodb
version: '3.8'
services:
  mongodb:
    image: mongo:latest
    container_name: dev_mongo
    restart: always
    environment:
      - MONGO_INITDB_DATABASE=admin
      - MONGO_INITDB_ROOT_USERNAME=dev_mongo_username
      - MONGO_INITDB_ROOT_PASSWORD=dev_mongo_pwd
    ports:
      - "27018:27017"
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
      - type: volume
        source: mongodb_data
        target: /data/db
    networks:
      - mongo_network
networks:
  mongo_network:
    driver: bridge
volumes:
  mongodb_data:
And here is the command line I used to run the docker image
docker run --env MONGODB_URI="mongodb://local_mongo_username:local_mongo_pwd#localhost:27017/?authSource=admin&tls=false" --name test_app1 -d -p 3000:3000 demo:latest
How can I solve this problem and connect to MongoDB successfully?
I developed a Spring Boot application to store login information via Redis. I have the following docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- "8082:8082"
links:
- redis
redis:
image: redis
container_name: redis
hostname: redis-db
ports:
- "6379:6379"
command: redis-server --port 6379 --bind 0.0.0.0 --protected-mode no
Dockerfile:
FROM openjdk:8-jre-alpine
VOLUME /tmp
COPY app.jar app.jar
EXPOSE 8082
ENTRYPOINT ["java", "-jar", "app.jar"]
In the Spring Boot application, I just use the following application.properties to set up the Redis connection:
server.port = 8082
spring.redis.host=redis-db
spring.redis.port=6379
I access the repository via CrudRepository and QueryByExampleExecutor. Every time I try to access the data, I get the following error:
sgartner-web-1 | 2021-11-25 01:22:53.600 ERROR 1 --- [nio-8082-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to localhost:6379] with root cause
sgartner-web-1 |
sgartner-web-1 | java.net.ConnectException: Connection refused
sgartner-web-1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_212]
sgartner-web-1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_212]
sgartner-web-1 | at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[netty-transport-4.1.69.Final.jar!/:4.1.69.Final]
sgartner-web-1 | at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[netty-transport-4.1.69.Final.jar!/:4.1.69.Final]
sgartner-web-1 | at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:707) ~[netty-transport-4.1.69.Final.jar!/:4.1.69.Final]
sgartner-web-1 | at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-transport-4.1.69.Final.jar!/:4.1.69.Final]
sgartner-web-1 | at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-transport-4.1.69.Final.jar!/:4.1.69.Final]
sgartner-web-1 | at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.69.Final.jar!/:4.1.69.Final]
sgartner-web-1 | at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) ~[netty-common-4.1.69.Final.jar!/:4.1.69.Final]
sgartner-web-1 | at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.69.Final.jar!/:4.1.69.Final]
sgartner-web-1 | at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.69.Final.jar!/:4.1.69.Final]
sgartner-web-1 | at java.lang.Thread.run(Thread.java:748) [na:1.8.0_212]
I have already tested the application against "normal" (non-containerized) instances of Redis, and it works. What seems odd to me is the "Unable to connect to localhost:6379", even though I'm not trying to connect to localhost.
Running the application outside of a container shows that it picks up the changed hostname and doesn't try to connect to localhost.
The specified redis-server command also doesn't seem to be the problem: I tested it by running Redis outside of a container with the same command and connecting to it from a VM.
I would really appreciate a solution or another possible cause to look into. Thanks to whoever replies to this!
As per the logs, the web application is clearly not able to connect to Redis on port 6379. Can you try exposing 6379 in the Dockerfile? Also check first whether you are able to reach port 6379 from your local machine. It seems like a network issue.
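A couple of quick ways to check that yourself (assuming redis-cli is installed on the host; ping is available in the alpine-based web image via busybox):
# from the host, through the published 6379 port
redis-cli -h 127.0.0.1 -p 6379 ping
# from inside the web container, resolving the compose service name
docker-compose exec web ping redis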
See if this helps:
Docker can't connect to redis from another service
I am trying to use https://github.com/testcontainers/testcontainers-scala, which builds on https://www.testcontainers.org/, as follows:
final class MessageSpec extends BddSpec
  with ForAllTestContainer
  with BeforeAndAfterAll {

  override val container = GenericContainer("sweetsoft/sapmock").configure { c =>
    c.addExposedPort(8080)
    c.withNetwork(Network.newNetwork())
  }

  override def beforeAll() {
  }

  feature("Process incoming messages") {
When I run the test with the command sbt test, I get the following exception:
15:22:23.171 [pool-7-thread-2] ERROR 🐳 [sweetsoft/sapmock:latest] - Could not start container
org.testcontainers.containers.ContainerLaunchException: Timed out waiting for container port to open (localhost ports: [32775] should be listening)
at org.testcontainers.containers.wait.strategy.HostPortWaitStrategy.waitUntilReady(HostPortWaitStrategy.java:47)
at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:35)
at org.testcontainers.containers.wait.HostPortWaitStrategy.waitUntilReady(HostPortWaitStrategy.java:23)
at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:35)
at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:582)
The image is a local image:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
sweetsoft/sapmock latest f02be90356e7 3 hours ago 664MB
openjdk 8 bec43387959a 11 days ago 625MB
quay.io/testcontainers/ryuk 0.2.3 64849fd2d464 3 months ago 10.7MB
The question is: why is it waiting for port 32775, and what is that port used for?
Update
Maybe this log will help:
15:47:47.274 [pool-7-thread-4] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with Environment variables, system properties and defaults. Resolved:
dockerHost=unix:///var/run/docker.sock
apiVersion='{UNKNOWN_VERSION}'
registryUrl='https://index.docker.io/v1/'
registryUsername='developer'
registryPassword='null'
registryEmail='null'
dockerConfig='DefaultDockerClientConfig[dockerHost=unix:///var/run/docker.sock,registryUsername=developer,registryPassword=<null>,registryEmail=<null>,registryUrl=https://index.docker.io/v1/,dockerConfigPath=/home/developer/.docker,sslConfig=<null>,apiVersion={UNKNOWN_VERSION},dockerConfig=<null>]'
15:47:47.275 [pool-7-thread-4] INFO org.testcontainers.DockerClientFactory - Docker host IP address is localhost
15:47:47.277 [pool-7-thread-4] DEBUG com.github.dockerjava.core.command.AbstrDockerCmd - Cmd: com.github.dockerjava.core.exec.InfoCmdExec#51a07bb5
15:47:47.389 [pool-7-thread-4] DEBUG com.github.dockerjava.core.command.AbstrDockerCmd - Cmd: com.github.dockerjava.core.exec.VersionCmdExec#70fc9b37
15:47:47.392 [pool-7-thread-4] INFO org.testcontainers.DockerClientFactory - Connected to docker:
Server Version: 18.09.6
API Version: 1.39
Operating System: Ubuntu 18.04.2 LTS
Total Memory: 7976 MB
15:47:47.395 [pool-7-thread-4] DEBUG com.github.dockerjava.core.command.AbstrDockerCmd - Cmd: ListImagesCmdImpl[imageNameFilter=quay.io/testcontainers/ryuk:0.2.3,showAll=false,filters=com.github.dockerjava.core.util.FiltersBuilder#0,execution=com.github.dockerjava.core.exec.ListImagesCmdExec#562a343]
15:47:47.417 [pool-7-thread-4] DEBUG org.testcontainers.utility.RegistryAuthLocator - Looking up auth config for image: quay.io/testcontainers/ryuk:0.2.3
15:47:47.417 [pool-7-thread-4] DEBUG org.testcontainers.utility.RegistryAuthLocator - RegistryAuthLocator has configFile: /home/developer/.docker/config.json (does not exist) and commandPathPrefix:
15:47:47.418 [pool-7-thread-4] WARN org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: quay.io/testcontainers/ryuk:0.2.3, configFile: /home/developer/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /home/developer/.docker/config.json (No such file or directory)
15:47:47.418 [pool-7-thread-4] DEBUG org.testcontainers.dockerclient.auth.AuthDelegatingDockerClientConfig - Effective auth config [null]
The original Java library's documentation answers your port question:
https://www.testcontainers.org/features/networking/
Note that this exposed port number is from the perspective of the container.
From the host's perspective Testcontainers actually exposes this on a random free port. This is by design, to avoid port collisions that may arise with locally running software or in between parallel test runs.
Because there is this layer of indirection, it is necessary to ask Testcontainers for the actual mapped port at runtime. This can be done using the getMappedPort method, which takes the original (container) port as an argument.
In the Scala library you can get this mapped port by calling:
container.mappedPort(yourExposedPort)
The error is most likely related to this concept: you need to expose that port in advance, inside your Docker image. Make sure that you either have an EXPOSE 8080 instruction somewhere in your Dockerfile, or that one of the images yours is built from has it.
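As a rough sketch only (this assumes the sweetsoft/sapmock image really does start a service listening on 8080; parameter and method names below are from testcontainers-scala and may differ slightly between versions, and BddSpec is the base spec from the question):
import com.dimafeng.testcontainers.{ForAllTestContainer, GenericContainer}
import org.testcontainers.containers.wait.strategy.Wait

final class MessageSpec extends BddSpec with ForAllTestContainer {

  // declare the container-side port and wait until something is actually listening on it
  override val container: GenericContainer = GenericContainer(
    "sweetsoft/sapmock",
    exposedPorts = Seq(8080),
    waitStrategy = Wait.forListeningPort()
  )

  feature("Process incoming messages") {
    scenario("talk to the SAP mock") {
      // the host-side port is random, so always ask Testcontainers for it at runtime
      val baseUrl = s"http://${container.containerIpAddress}:${container.mappedPort(8080)}"
      // ... exercise the mock through baseUrl ...
    }
  }
}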
My Docker image was built using their official repo:
https://github.com/wso2/docker-apim/tree/master/dockerfiles/apim
I followed their documentation and had the files required to build it:
init.sh jdk1.8.0_171 postgresql-42.2.0.jar wso2am-2.2.0
I used the following guide to configure master-datasources.xml:
http://yasassriratnayake.blogspot.com/2014/07/changing-default-db-of-wso2-api-manger.html
and configured metrics-datasources.xml in a similar way.
When I run the container, it gives the following logs:
ubuntu#ip-172-31-0-166:~/docker-apim-2/dockerfiles/apim$ docker run -it -p 9999:9443 wso2am:2.2.0
JAVA_HOME environment variable is set to /home/wso2carbon/java
CARBON_HOME environment variable is set to /home/wso2carbon/wso2am-2.2.0
Using Java memory options: -Xms256m -Xmx1024m
[2018-06-27 13:17:12,698] INFO - QpidBundleActivator Setting BundleContext in PluginManager
[2018-06-27 13:17:13,945] INFO - CarbonCoreActivator Starting WSO2 Carbon...
[2018-06-27 13:17:13,945] INFO - CarbonCoreActivator Operating System : Linux 4.4.0-1061-aws, amd64
[2018-06-27 13:17:13,946] INFO - CarbonCoreActivator Java Home : /home/wso2carbon/java/jre
[2018-06-27 13:17:13,946] INFO - CarbonCoreActivator Java Version : 1.8.0_171
[2018-06-27 13:17:13,946] INFO - CarbonCoreActivator Java VM : Java HotSpot(TM) 64-Bit Server VM 25.171-b11,Oracle Corporation
[2018-06-27 13:17:13,947] INFO - CarbonCoreActivator Carbon Home : /home/wso2carbon/wso2am-2.2.0
[2018-06-27 13:17:13,947] INFO - CarbonCoreActivator Java Temp Dir : /home/wso2carbon/wso2am-2.2.0/tmp
[2018-06-27 13:17:13,947] INFO - CarbonCoreActivator User : wso2carbon, en-US, Etc/UTC
[2018-06-27 13:17:14,252] INFO - KafkaEventAdapterServiceDS Successfully deployed the Kafka output event adaptor service
[2018-06-27 13:17:14,383] INFO - TemplateDeployerServiceTrackerDS Successfully deployed the execution manager tracker service
[2018-06-27 13:17:16,127] WARN - ConnectionFactoryImpl ConnectException occurred while connecting to localhost:5432
java.net.ConnectException: Connection refused (Connection refused)
[2018-06-27 13:17:16,141] ERROR - Driver Connection error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Caused by: java.net.ConnectException: Connection refused (Connection refused)
[2018-06-27 13:17:16,160] ERROR - DefaultRealm nullType class java.lang.reflect.InvocationTargetException
org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
Caused by: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
[2018-06-27 13:17:16,185] ERROR - Activator Cannot start User Manager Core bundle
org.wso2.carbon.user.core.UserStoreException: Cannot initialize the realm.
Caused by: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
[2018-06-27 13:17:25,767] INFO - TaglibUriRule TLD skipped. URI: http://tiles.apache.org/tags-tiles is already defined
My questions are:
You have built your Docker image using MySQL. Is there any way to build an image that is compatible with PostgreSQL?
What changes are required, and which files need to be changed, to build a PostgreSQL-compatible API Manager image?
Please suggest the steps if you have worked through something similar to what I'm troubleshooting.
Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Have you just copied and pasted the configuration, or did you try to understand what's inside? The system is trying to connect to localhost, but you need to configure a separate database server; see the docs.
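For reference, a WSO2AM_DB entry in master-datasources.xml pointing at a separate PostgreSQL server looks roughly like this (the host apim-db, database name and credentials are placeholders, not values from the question):
<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for the API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://apim-db:5432/wso2am_db</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>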
You have built your Docker image using MySQL. Is there any way to build an image that will be compatible with PostgreSQL?
Indeed: read the Dockerfile. Instead of copying the MySQL driver you can provide a PostgreSQL driver, update the datasource config, and you are good to go.
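In Dockerfile terms the change is small. The exact lines depend on the version of the wso2/docker-apim Dockerfile you build from, but conceptually it is something like this (the paths below follow the CARBON_HOME shown in your logs and are illustrative):
# copy the PostgreSQL JDBC driver instead of the MySQL one
COPY postgresql-42.2.0.jar /home/wso2carbon/wso2am-2.2.0/repository/components/lib/
# overwrite the default datasource configs with your PostgreSQL-specific versions
COPY master-datasources.xml /home/wso2carbon/wso2am-2.2.0/repository/conf/datasources/
COPY metrics-datasources.xml /home/wso2carbon/wso2am-2.2.0/repository/conf/datasources/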
I have a 3-node Spark cluster: node1, node2 and node3.
I run the command below on node1 to deploy the driver:
/usr/local/spark-1.2.1-bin-hadoop2.4/bin/spark-submit --class com.fst.firststep.aggregator.FirstStepMessageProcessor --master spark://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:7077 --deploy-mode cluster --supervise file:///home/xyz/sparkstreaming-0.0.1-SNAPSHOT.jar /home/xyz/config.properties
The driver gets launched on node2 in the cluster, but I get an exception on node2 that it is trying to bind to node1's IP:
2015-02-26 08:47:32 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off
2015-02-26 08:47:32 INFO Slf4jLogger:80 - Slf4jLogger started
2015-02-26 08:47:33 ERROR NettyTransport:65 - failed to bind to ec2-xx.xx.xx.xx.compute-1.amazonaws.com/xx.xx.xx.xx:0, shutting down Netty transport
2015-02-26 08:47:33 WARN Utils:71 - Service 'Driver' could not bind on port 0. Attempting port 1.
2015-02-26 08:47:33 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off
2015-02-26 08:47:33 ERROR Remoting:65 - Remoting error: [Startup failed] [
akka.remote.RemoteTransportException: Startup failed
at akka.remote.Remoting.akka$remote$Remoting$$notifyError(Remoting.scala:136)
at akka.remote.Remoting.start(Remoting.scala:201)
at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:618)
at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:615)
at akka.actor.ActorSystemImpl._start(ActorSystem.scala:615)
at akka.actor.ActorSystemImpl.start(ActorSystem.scala:632)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:141)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:118)
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:54)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1765)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1756)
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:56)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:33)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: ec2-xx-xx-xx.compute-1.amazonaws.com/xx.xx.xx.xx:0
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
Kindly suggest.
Thanks
After spending a lot more time on this, I found the answer. I made the following changes:
- removed the SPARK_LOCAL_IP and SPARK_MASTER_IP entries (typically set in conf/spark-env.sh)
- added the host name and private IP address of every other node to /etc/hosts on each node (see the sketch below)
- used --deploy-mode cluster --supervise
That's all, and it works perfectly with fully HA components (master, slaves and driver).
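For the /etc/hosts part, the idea is that every node can resolve every other node's name to its private address; a minimal sketch (host names and addresses are placeholders):
# /etc/hosts on each of the three nodes
10.0.0.101   node1
10.0.0.102   node2
10.0.0.103   node3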
Thanks
Cluster deploy mode is not supported on EC2 instances with Spark 1.2, where a standalone cluster is created. Hence you can try removing
--deploy-mode cluster --supervise
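That leaves the original command from the question minus those two flags; the driver then runs in client mode on the machine where spark-submit is invoked:
/usr/local/spark-1.2.1-bin-hadoop2.4/bin/spark-submit --class com.fst.firststep.aggregator.FirstStepMessageProcessor --master spark://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:7077 file:///home/xyz/sparkstreaming-0.0.1-SNAPSHOT.jar /home/xyz/config.properties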