ERROR org.apache.activemq.broker.BrokerService - Failed to start Apache ActiveMQ ([localhost, null], java.io.IOException: org/apache/activemq/store/NoLocalSubscriptionAware)
INFO org.apache.activemq.broker.BrokerService - Apache ActiveMQ 5.9.1 (localhost, null) is shutting down
INFO org.apache.activemq.broker.TransportConnector - Connector tcp://localhost:61616 stopped
WARN org.apache.activemq.broker.jmx.ManagementContext - Failed to start JMX connector Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]. Will restart management to re-create JMX connector, trying to remedy this issue.
The code I am trying to use is
import java.net.URI;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.TransportConnector;

BrokerService broker = new BrokerService();
TransportConnector connector = new TransportConnector();
connector.setUri(new URI("tcp://localhost:61616"));
broker.addConnector(connector);
broker.start();
I am getting the exception at the start() method. I am deploying this on a server, not on my computer.
It's quite hard to say what is wrong given the limited information, but one thing I'd check is that there isn't already a broker running on that server, as it looks like something is at least sitting on the JMX port already. You could check the broker log to see if it records any additional information about the error.
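If you want to rule out a port clash quickly from code, a simple check (just a sketch, nothing ActiveMQ-specific) is to try binding the ports the broker needs before starting it:

import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {
    // Returns true if we can bind the port, i.e. nothing else is listening on it.
    static boolean isFree(int port) {
        try (ServerSocket socket = new ServerSocket(port)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 61616 is the broker transport port, 1099 the default JMX/RMI port.
        System.out.println("61616 free: " + isFree(61616));
        System.out.println("1099 free: " + isFree(1099));
    }
}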
I am running ActiveMQ 5.15.5 as a standalone broker and my Spring application is connecting to it.
I want to know whether I can log the Task-ID that the broker logs in the client application's logs.
Currently the application logs look like:
[INFO ] 2018-11-29 09:52:19,144 [ActiveMQ Session Task] ....
[INFO ] 2018-11-29 09:52:19,168 [ActiveMQ Session Task] ...
[INFO ] 2018-11-29 09:52:19,199 [ActiveMQ Session Task] ....
I believe if I had embedded ActiveMQ the logs would look like:
[INFO ] 2018-11-29 09:52:19,144 [ActiveMQ Session Task-9] ....
[INFO ] 2018-11-29 09:52:19,168 [ActiveMQ Session Task-9] ...
Looking at the client application logs, I do not have a way to categorize transactions by multiple users, as they are all logged as "ActiveMQ Session Task".
Is there a way to log the Task-ID from the broker (I do see the Task-ID in the broker's activemq.log) in the client logs?
I tried setting the ActiveMQ loggers in the client log4j.xml to INFO, with no luck.
Thanks
The "Task-ID", as you call it, which is logged here is actually just the name of the thread on the broker which is performing the work. The client has no idea about the thread name on the broker and there is no way to communicate that information with the client. Those threads are pooled and re-used over & over so using their names to identify a unique transaction almost certainly wouldn't work anyway.
I am having problems running an app I have developed on an EC2 instance. When I execute the .jar (java -jar app.jar), the Spring Boot app starts but fails when trying to connect to my MySQL RDS database. The thing is, when I run the app locally on my machine, it has no issues with the DB connection.
I have opened the port where the app is running (8090) and the MySQL port as well (3306) for inbound and outbound traffic:
This is the error I get:
2016-09-23 17:46:38.132 INFO 10161 --- [main] .t.TomcatEmbeddedServletContainerFactory : Server initialized with port: 8090
2016-09-23 17:46:38.604 INFO 10161 --- [main] o.apache.catalina.core.StandardService : Starting service Tomcat
2016-09-23 17:46:38.605 INFO 10161 --- [main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/7.0.54
2016-09-23 17:46:38.724 INFO 10161 --- [ost startStop 1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2016-09-23 17:46:38.725 INFO 10161 --- [ost startStop 1] o.s.web.context.ContextLoader: Root WebApplicationContext: initialization completed in 5028 ms
2016-09-23 17:48:48.476 ERROR 10161 --- [ost startStop 1] o.a.tomcat.jdbc.pool.ConnectionPool: Unable to create initial connections of pool.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
Any ideas how I can solve this problem?
Thank you very much for your help
Regards
Andres
From your description and log file, it's likely that network configuration is the cause here.
You might want to draw the network topology of your instances (region/availability zone, VPC, subnet, network ACL, security group). This will be very helpful when you do more complex development work.
There are good references: VPC Introduction, Security in your VPC, and Scenarios for Accessing a DB Instance in a VPC.
I suggest the following actions for your troubleshooting:
Check the security group (SG) configuration of your EC2 instance and RDS instance.
You can check this by going to the EC2 Dashboard/RDS Dashboard -> clicking on an instance and looking at the "Security Group" description, or you can click on the Settings icon (Show/Hide columns) and tick "Security Groups".
In the RDS instance's SG configuration, make sure you have enabled access from the EC2 instance's SG to port 3306. You can do this by putting the EC2 instance's SG ID into the Source field of the rule, as a "Custom IP" value. See the 1st scenario in the above reference for more detail.
Use the mysql command line to test the connection between the EC2 instance and RDS (a JDBC version of the same check is sketched below).
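If the mysql client is not installed on the EC2 instance, a minimal JDBC check gives you the same information (a sketch; the endpoint, user, and password below are placeholders you must replace):

import java.sql.Connection;
import java.sql.DriverManager;

public class RdsConnectivityCheck {
    public static void main(String[] args) {
        // Placeholder endpoint and credentials - replace with your RDS values.
        // Requires the MySQL JDBC driver (mysql-connector-java) on the classpath.
        String url = "jdbc:mysql://your-db.xxxxxxxx.us-east-1.rds.amazonaws.com:3306/yourdb";
        try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpassword")) {
            System.out.println("Connected: " + conn.isValid(5));
        } catch (Exception e) {
            // A CommunicationsException here usually points at security groups or routing.
            e.printStackTrace();
        }
    }
}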
Hope it helps.
You need to perform the following steps:
1) Go to your EC2 instance and find the security group that you want to allow access to in RDS.
2) Now go to your RDS security group and select inbound rules.
Select ALL TCP and add your sg-xxx (security group).
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html
I start an HBase cluster for my test class. I use this helper class:
HBaseClusterSingleton.java
and use it like this:
private static final HBaseClusterSingleton cluster = HBaseClusterSingleton.build(1);
I retrieve the configuration object as follows:
cluster.getConf()
and I use it in Spark as follows:
sparkContext.newAPIHadoopRDD(conf, MyInputFormat.class, clazzK, clazzV);
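For reference, the full wiring in my test looks roughly like this (simplified; TableInputFormat, ImmutableBytesWritable, and Result here just stand in for my actual MyInputFormat, clazzK, and clazzV, and the table name is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HBaseSparkWiring {
    public static void main(String[] args) {
        // Configuration taken from the in-process mini cluster (HBaseClusterSingleton is the helper class above).
        Configuration conf = HBaseClusterSingleton.build(1).getConf();
        conf.set(TableInputFormat.INPUT_TABLE, "my-table"); // placeholder table name

        JavaSparkContext sparkContext = new JavaSparkContext(
                new SparkConf().setMaster("local[2]").setAppName("hbase-test"));

        // TableInputFormat / ImmutableBytesWritable / Result stand in for MyInputFormat / clazzK / clazzV.
        JavaPairRDD<ImmutableBytesWritable, Result> rdd = sparkContext.newAPIHadoopRDD(
                conf, TableInputFormat.class, ImmutableBytesWritable.class, Result.class);

        System.out.println("rows: " + rdd.count());
    }
}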
When I run my test there is no need to start up a separate HBase cluster because Spark will connect to my dummy cluster. However, when I run my test method it throws an error:
2015-08-26 01:19:59,558 INFO [Executor task launch worker-0-SendThread(localhost:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2015-08-26 01:19:59,559 WARN [Executor task launch worker-0-SendThread(localhost:2181)] zookeeper.ClientCnxn (ClientCnxn.java:run(1089)) - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
HBase tests that do not run on Spark work well. When I check the logs I see that the cluster and Spark are started up correctly:
2015-08-26 01:35:21,791 INFO [main] hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2055)) - Cluster is active
2015-08-26 01:35:40,334 INFO [main] util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'sparkDriver' on port 56941.
I realized that when I start up HBase from the command line, my Spark test method connects to it!
So, does it mean that it doesn't care about the conf I passed to it? Any ideas on how to solve this?
I have configured two servers and one ActiveMQ server.
One server will send JMS messages and the other server will receive the JMS messages from the ActiveMQ server.
Usually we start the ActiveMQ server and then the servers one by one.
Now one of the servers starts successfully, whereas the other throws a bind exception saying port 1099 is already bound.
I have verified that no other process uses port 1099.
I need a workaround if a solution is not possible.
Exception stack trace
[Apr 10 09:58:37] [/] WARN org.apache.activemq.broker.jmx.ManagementContext (JCLLoggerAdapter.java:359) - Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]
[Apr 10 09:58:37] [/] WARN org.apache.activemq.broker.jmx.ManagementContext (JCLLoggerAdapter.java:359) - Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]
[Apr 10 09:58:37] [/] DEBUG org.apache.activemq.broker.jmx.ManagementContext (JCLLoggerAdapter.java:245) - Reason for failed jms connector start
java.io.IOException: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]
    at
Thanks.
As shown by the provided stack trace, both servers have Remote JMX enabled on the same port. Use the -Dcom.sun.management.jmxremote.port=portNum option at the JVM level to tune the JMX port, or simply disable Remote JMX by removing the -Dcom.sun.management.jmxremote option. These options are usually located in the ActiveMQ startup scripts.
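If the brokers are embedded in your own applications rather than started from the ActiveMQ scripts, another option (a sketch using ActiveMQ's programmatic API, not covered by the startup-script options above) is to give each broker its own JMX connector port, or to disable JMX for one of them:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.jmx.ManagementContext;

public class BrokerWithCustomJmxPort {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.addConnector("tcp://localhost:61616");

        // Put this broker's JMX connector on a port other than the default 1099.
        ManagementContext management = new ManagementContext();
        management.setConnectorPort(1100); // any free port
        broker.setManagementContext(management);

        // Or switch JMX off entirely for this broker:
        // broker.setUseJmx(false);

        broker.start();
    }
}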
I am trying to test Amazon's new Memcached client with AutoDiscovery. I have one memcached node which I am able to connect to using XMemcached 1.3.5 as well as a standard SpyMemcached library.
I am following the instructions here: http://docs.amazonwebservices.com/AmazonElastiCache/latest/UserGuide/AutoDiscovery.html
The code is almost identical to the example and is:
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

String configEndpoint = "<server name>.rgcl8z.cfg.use1.cache.amazonaws.com";
Integer clusterPort = 11211;
MemcachedClient client = new MemcachedClient(new InetSocketAddress(configEndpoint, clusterPort));
client.set("theKey", 3600, "This is the data value");
I see the following in the logs when I create the connection. The error happens when I try to set a value:
2013-01-04 22:05:30.445 INFO net.spy.memcached.MemcachedConnection: Added {QA sa=/<ip>:11211, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2013-01-04 22:05:32.861 INFO net.spy.memcached.ConfigurationPoller: Starting configuration poller.
2013-01-04 22:05:32.861 INFO net.spy.memcached.ConfigurationPoller: Endpoint to use for configuration access in this poll NodeEndPoint - HostName:<our-server>.rgcl8z.cfg.use1.cache.amazonaws.com IpAddress:<ip> Port:11211
2013-01-04 22:05:32.950 WARN net.spy.memcached.MemcachedClient: Configuration endpoint timed out for config call. Leaving the initialization work to configuration poller.
Exception in thread "main" java.lang.IllegalStateException: Client is not initialized
at net.spy.memcached.MemcachedClient.checkState(MemcachedClient.java:1623)
at net.spy.memcached.MemcachedClient.enqueueOperation(MemcachedClient.java:1617)
at net.spy.memcached.MemcachedClient.asyncStore(MemcachedClient.java:474)
at net.spy.memcached.MemcachedClient.set(MemcachedClient.java:905)
at com.thinknear.venice.initializers.VeniceAssets.main(VeniceAssets.java:227)
I've tried this both locally and on an EC2 instance (I can connect to the nodes using other libraries)
I've tried using both 1.4.5 and 1.4.14 Memcached engines
I relaxed the security group constraints as well just in case
Any thoughts on why the config endpoint would be timing out?
Client is not initialized:
You cannot directly connect to an Amazon ElastiCache node from your local machine; you can only access it from your EC2 machine. If you want to check, you can telnet from your local machine and it will not connect. I also suffered from the same problem. You can telnet to it from your EC2 machine, so try your code on the EC2 machine and it will work.
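If you want to verify this from code rather than telnet, a plain socket connect against the configuration endpoint behaves the same way (a sketch; the endpoint below is a placeholder): it times out from a local machine but succeeds from an EC2 instance in the right security group.

import java.net.InetSocketAddress;
import java.net.Socket;

public class ElastiCacheReachability {
    public static void main(String[] args) {
        // Placeholder - use your cluster's configuration endpoint.
        String host = "<server name>.rgcl8z.cfg.use1.cache.amazonaws.com";
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, 11211), 5000); // 5 second timeout
            System.out.println("Reachable");
        } catch (Exception e) {
            System.out.println("Not reachable: " + e.getMessage());
        }
    }
}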
Do a telnet to the memcached server to check connectivity. In my case my server was not listed, so it was not able to make a connection; the problem was solved by listing my server with memcached.