Elasticsearch, Kibana, and apm-server are installed on an EC2 server.
I have installed the automatic Java agent attach on another server to monitor a Jenkins app.
The agent is getting attached to the process, but the dynamic configuration options are not working.
APM agent directory (ls output):
apm-agent-attach-standalone.jar elasticapm.properties
elasticapm.properties file
service_name="jenkins-dev"
server_url="http://x.x.x.x:8200"
recording=true
enabled=true
log_level="DEBUG"
log_file=_AGENT_HOME_/logs/elastic-apm.log
Attach Command:
sudo java -jar apm-agent-attach-standalone.jar --include '.*jenkins.*'
-> This doesn't pick up the configuration file, but it does attach the agent.
So I used the commands below to update the configuration:
sudo java -jar apm-agent-attach-standalone.jar --include '.*jenkins.*' --config recording=false,enabled=false
sudo java -jar apm-agent-attach-standalone.jar --include '.*jenkins.*' --config config_file=elasticapm.properties log_file=/etc/apmagents/apm.log
Log:
2021-04-12 10:47:20,338 [elastic-apm-server-reporter] ERROR co.elastic.apm.agent.report.IntakeV2ReportingEventHandler - Error trying to connect to APM Server. Some details about SSL configurations corresponding the current connection are logged at INFO level.
2021-04-12 10:47:20,339 [elastic-apm-server-reporter] ERROR co.elastic.apm.agent.report.IntakeV2ReportingEventHandler - Failed to handle event of type JSON_WRITER with this error: Connection refused (Connection refused)
2021-04-12 10:47:20,339 [elastic-apm-server-reporter] INFO co.elastic.apm.agent.report.IntakeV2ReportingEventHandler - Backing off for 36 seconds (+/-10%)
2021-04-12 10:47:24,345 [elastic-apm-remote-config-poller] ERROR co.elastic.apm.agent.configuration.ApmServerConfigurationSource - Connection refused (Connection refused)
Questions:
1. What is the right way to pass these configuration options on the command line?
2. Do we need to create the log file ourselves, or will it be created automatically when log_file is set? Right now the agent output is polluting the application log.
Try to specify the config_file using the following notation:
-Delastic.apm.config_file=elasticapm.properties
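For example, one way to try this on the attach command line (a sketch only, not verified; it assumes the attacher picks up elastic.apm.* system properties and that elasticapm.properties sits in the working directory):
sudo java -Delastic.apm.config_file=elasticapm.properties -jar apm-agent-attach-standalone.jar --include '.*jenkins.*'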
The attacher can create the log file, depending on the settings configured during startup. See the current code [1] for a better understanding.
[1] https://github.com/elastic/apm-agent-java/blob/0465d479430172c3e745afd2ef5b62a3da6b60aa/apm-agent-attach-cli/src/main/java/co/elastic/apm/attach/AgentAttacher.java#L79
Related
My Docker image was built using their official repo:
https://github.com/wso2/docker-apim/tree/master/dockerfiles/apim
I followed their documentation and had the files required to build it:
init.sh jdk1.8.0_171 postgresql-42.2.0.jar wso2am-2.2.0
I used the following config for master-datasources.xml
http://yasassriratnayake.blogspot.com/2014/07/changing-default-db-of-wso2-api-manger.html
And I configured metrics-datasources.xml in a similar way.
When I run the Docker container, it gives the following logs:
ubuntu@ip-172-31-0-166:~/docker-apim-2/dockerfiles/apim$ docker run -it -p 9999:9443 wso2am:2.2.0
JAVA_HOME environment variable is set to /home/wso2carbon/java
CARBON_HOME environment variable is set to /home/wso2carbon/wso2am-2.2.0
Using Java memory options: -Xms256m -Xmx1024m
[2018-06-27 13:17:12,698] INFO - QpidBundleActivator Setting BundleContext in PluginManager
[2018-06-27 13:17:13,945] INFO - CarbonCoreActivator Starting WSO2 Carbon...
[2018-06-27 13:17:13,945] INFO - CarbonCoreActivator Operating System : Linux 4.4.0-1061-aws, amd64
[2018-06-27 13:17:13,946] INFO - CarbonCoreActivator Java Home : /home/wso2carbon/java/jre
[2018-06-27 13:17:13,946] INFO - CarbonCoreActivator Java Version : 1.8.0_171
[2018-06-27 13:17:13,946] INFO - CarbonCoreActivator Java VM : Java HotSpot(TM) 64-Bit Server VM 25.171-b11,Oracle Corporation
[2018-06-27 13:17:13,947] INFO - CarbonCoreActivator Carbon Home : /home/wso2carbon/wso2am-2.2.0
[2018-06-27 13:17:13,947] INFO - CarbonCoreActivator Java Temp Dir : /home/wso2carbon/wso2am-2.2.0/tmp
[2018-06-27 13:17:13,947] INFO - CarbonCoreActivator User : wso2carbon, en-US, Etc/UTC
[2018-06-27 13:17:14,252] INFO - KafkaEventAdapterServiceDS Successfully deployed the Kafka output event adaptor service
[2018-06-27 13:17:14,383] INFO - TemplateDeployerServiceTrackerDS Successfully deployed the execution manager tracker service
[2018-06-27 13:17:16,127] WARN - ConnectionFactoryImpl ConnectException occurred while connecting to localhost:5432
java.net.ConnectException: Connection refused (Connection refused)
[2018-06-27 13:17:16,141] ERROR - Driver Connection error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Caused by: java.net.ConnectException: Connection refused (Connection refused)
[2018-06-27 13:17:16,160] ERROR - DefaultRealm nullType class java.lang.reflect.InvocationTargetException
org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
Caused by: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
[2018-06-27 13:17:16,185] ERROR - Activator Cannot start User Manager Core bundle
org.wso2.carbon.user.core.UserStoreException: Cannot initialize the realm.
Caused by: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
[2018-06-27 13:17:25,767] INFO - TaglibUriRule TLD skipped. URI: http://tiles.apache.org/tags-tiles is already defined
My questions are:
You have built your Docker image using MySQL. Is there any way to build an image that is compatible with PostgreSQL?
What changes are required, and which files need to be changed, to build a PostgreSQL-compatible API Manager image?
Please suggest steps if you have overcome something like what I'm troubleshooting.
Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Have you just copy/pasted the configuration, or did you try to understand what's inside? The system is trying to connect to localhost, but you need to configure a separate database server; see the docs.
You have built your Docker image using MySQL. Is there any way to build an image that is compatible with PostgreSQL?
Indeed. Read the Dockerfile: instead of copying the MySQL driver, you can provide a PostgreSQL driver, update the datasource config, and you are good to go.
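For example, the WSO2_CARBON_DB entry in master-datasources.xml could be pointed at PostgreSQL roughly like this (a sketch; the host, database name, and credentials are assumptions to be replaced with your own values):
<datasource>
    <name>WSO2_CARBON_DB</name>
    <jndiConfig><name>jdbc/WSO2CarbonDB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- assumed host/db/credentials: replace with your PostgreSQL server details -->
            <url>jdbc:postgresql://your-db-host:5432/wso2carbon</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
In the Dockerfile, copy the postgresql-42.2.0.jar you already have into the product's repository/components/lib directory in place of the MySQL connector.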
I have just installed WildFly and I tried to connect to it:
\wildfly-11.0.0.Final\bin>jboss-cli.bat -c
But it gives me the following error:
Failed to connect to the controller: The controller is not available
at localhost:9990: java.net.ConnectException: WFLYPRT0053: Could not
connect to remote+http://localhost:9990. The connection failed:
WFLYPRT0053: Could not connect to remote+http://localhost:9990. The
connection failed: Connection refused: no further information
I tried a lot of solutions but it's not working for me.
With WildFly running (i.e. standalone.bat), use the --controller option to define where it is:
jboss-cli.bat -c --controller=localhost:9990
I got the same error on WildFly version 16.
Error
Failed to connect to the controller: The controller is not available at localhost:: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:. The connection failed: WFLYPRT0053: Could not connect to remote+http://localhost:. The connection failed: Connection refused
The following steps resolved it successfully.
Step 01
Comment out (or remove) the following line from bin/jboss-cli.xml:
<default-protocol use-legacy-override="true">remote+https</default-protocol>
Correct protocol, for example:
<default-protocol use-legacy-override="true">remote+http</default-protocol>
<!-- The default controller to connect to when 'connect' command is executed w/o arguments -->
<default-controller>
<protocol>remote+http</protocol>
<host>localhost</host>
<port>9990</port>
</default-controller>
Step 02
In my case I had already created an administrative user, so I started up the CLI with the following command:
./jboss-cli.sh --user="<user>" --password="<password>" --controller=remote+http://<your IP>:<port> --connect
Example :
./jboss-cli.sh --user="Admin" --password="Password" --controller=remote+http://19.199.115.172:9990 --connect
Make sure your WildFly is up and running. If you have used a different port for the admin console, it should be specified.
JBoss must be running while doing this. I was trying to do this while my JBoss was not up and running, and got this error message. So boot up JBoss and try again.
The most common case is a mismatch between the default controller defined in jboss-cli.xml and the management port configured in standalone.xml/domain.xml. Out of the box, they should converge on localhost:9990. Therefore, verify whether you changed either of the two files. Other than that, it could be a firewall/network issue.
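For reference, in a default installation the management port in standalone.xml is defined by a socket binding along these lines (a sketch of the stock configuration; check your own file):
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
This port must match the <port> in the <default-controller> section of jboss-cli.xml shown above.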
See also: Cannot connect to WildFly with CLI
Using Win7 64-bit, JDK 8, Spark 1.6.2.
I have Spark set up, along with winutils, HADOOP_HOME, etc.
Per the documentation: "Note: The launch scripts do not currently support Windows. To run a Spark cluster on Windows, start the master and workers by hand." But it does not say how.
How do I launch spark master on windows?
I tried running sh start-master.sh through Git Bash: failed to launch org.apache.spark.deploy.master.Master, even though it prints out Master --ip Sam-Toshiba --port 7077 --webui-port 8080, so I don't know what all this means.
But when I try spark-submit --class " " --master spark://Sam-Toshiba:7077 target/ .jar,
I get errors:
WARN AbstractLifeCycle: FAILED SelectChannelConnector#0.0.0.0:
4040: java.net.BindException: Address already in use: bind
java.net.BindException: Address already in use
WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/01/12 14:44:29 WARN AppClient$ClientEndpoint: Failed to connect to master Sam-Toshiba:7077
java.io.IOException: Failed to connect to Sam-Toshiba/192.168.137.1:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
I also tried spark://localhost:7077, with the same errors.
On Windows you can launch the master using the command below. Open a command prompt, go to the Spark bin folder, and execute:
spark-class.cmd org.apache.spark.deploy.master.Master
The above command will print something like Master: Starting Spark master at spark://192.168.99.1:7077 in the console, using the IP of your machine. You can check the UI at http://192.168.99.1:8080/
If you want to launch a worker once your master is up, you can use the command below. This will use all the available cores of your machine:
spark-class.cmd org.apache.spark.deploy.worker.Worker spark://192.168.99.1:7077
If you want to use only 2 of your machine's 4 cores, then use:
spark-class.cmd org.apache.spark.deploy.worker.Worker -c 2 spark://192.168.99.1:7077
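Once the master and a worker are running, you can point spark-submit at the master URL printed in the console. For example (the class and JAR names here are placeholders for your own application):
spark-submit --class com.example.MyApp --master spark://192.168.99.1:7077 target/myapp.jar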
I want to connect to HBase running standalone in a Docker container, using Java and the HBase API.
I use this code to connect:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "163.172.142.199");
config.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin.checkHBaseAvailable(config);
Here is my /etc/hosts file
127.0.0.1 localhost
XXX.XXX.XXX.XXX hbase-srv
Here is the /etc/hosts file from my Docker container (named hbase-srv):
XXX.XXX.XXX.XXX hbase-srv
With this configuration, I get a connection refused error:
INFO | Initiating client connection, connectString=163.172.142.199:2181 sessionTimeout=90000 watcher=hconnection-0x6aba2b860x0, quorum=163.172.142.199:2181, baseZNode=/hbase
INFO | Opening socket connection to server 163.172.142.199/163.172.142.199:2181. Will not attempt to authenticate using SASL (unknown error)
INFO | Socket connection established to 163.172.142.199/163.172.142.199:2181, initiating session
INFO | Session establishment complete on server 163.172.142.199/163.172.142.199:2181, sessionid = 0x15602f8d8dc0002, negotiated timeout = 40000
INFO | Closing zookeeper sessionid=0x15602f8d8dc0002
INFO | Session: 0x15602f8d8dc0002 closed
INFO | EventThread shut down
org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1560)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1580)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1737)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:948)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3159)
at hbase.Benchmark.main(Benchmark.java:26)
However, if I remove the line XXX.XXX.XXX.XXX hbase-srv from both /etc/hosts files, I get the error unknown host: hbase-srv.
I have also checked that I can successfully telnet to my HBase Docker container on the client port.
On the Docker container, all the ports used by HBase are opened and bound to the same numbers (60000 to 60000, 2181 to 2181, etc.).
I also wanted to add that all was fine when I used this configuration on localhost.
If you can't give me an answer to my problem, could you at least give me a procedure to deploy a standalone HBase in Docker?
UPDATE: Here is my Dockerfile:
FROM java:openjdk-8
ADD hbase-1.2.1 /hbase-1.2.1
WORKDIR /hbase-1.2.1
# ZooKeeper
EXPOSE 2181
# HMaster
EXPOSE 60000
# HMaster Web
EXPOSE 60010
# RegionServer
EXPOSE 60020
# RegionServer Web
EXPOSE 60030
EXPOSE 16010
RUN chmod 755 /hbase-1.2.1/bin/start-hbase.sh
CMD ["/hbase-1.2.1/bin/start-hbase.sh"]
My HBase shell is working. I also tried to open the ports using iptables for TCP and UDP, but I still have the same problem.
There are two problems with your Dockerfile:
use hbase master start instead of start-hbase.sh (see the sketch below)
the regionserver is actually not running on 60020
The second problem is not so easy to solve. If you run HBase standalone with version >= 1.2.0 (not sure exactly; I'm running 1.2.0), HBase will use an ephemeral port instead of the default port or the port you provide in hbase-site.xml, which makes it very hard to provide the HBase service in Docker using the original version.
I added a property named hbase.localcluster.port.ephemeral and managed to build a standalone HBase in Docker, which you can reference here.
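For the first point, a minimal change to the CMD in the Dockerfile above could look like this (paths as in the original Dockerfile); start-hbase.sh launches HBase as a background daemon, so the container's main process exits, whereas hbase master start keeps the master in the foreground:
CMD ["/hbase-1.2.1/bin/hbase", "master", "start"]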
I've been following this tutorial on how to set up remote debugging.
I have my JAR running in debug mode, listening on port 6065 on my server, with the following setting:
-Xrunjdwp\:transport\=dt_socket,server\=y,address\=6065,suspend\=n
I start the JAR on my server in debug mode with:
myApplicationThatContainsJar.exe -debug "my application"
> Now Starting JVM
> Listening for transport dt_socket at address: 6065
I have the psping tool installed, which I use to ping (IPaddress:port). I am able to ping my IP address, for example 44.66.33.66:6065, from my dev box and get a reply. But when I try to initiate a remote debug session in Eclipse I get:
Failed to connect to remote VM. Connection refused. Connection
refused: connect
I have verified that I've allowed both inbound and outbound traffic for that port on both my dev box and the server.
When I start my JAR on my server, and before I try to connect with Eclipse, I run:
psping 44.66.33.66:6065
And I get a response stating that it sent and received with 0% loss, meaning I am indeed getting a response.
In the Eclipse 'Debug Configurations' window I have the host and port listed properly and the connection type set to 'Standard (Socket Attach)'.
As soon as I try to connect with Eclipse and get the connection refused error, I try psping again from the command line, but now Eclipse has done something to the connection and I get on the command line:
The remote computer refused the network connection.
Any suggestions on where else I should check or troubleshoot? I'm trying to remote-debug my JAR from the dev box to my server.
-Xrunjdwp\:transport\=dt_socket,server\=y,address\=6065,suspend\=n
should be:
-Xrunjdwp\:transport\=dt_socket,server\=y,address\=6065,suspend\=y
Notice the suspend\=y. Keep in mind that I need to escape the equals signs; others may not need to. Once I set suspend\=y, the application waited for me to connect from Eclipse, and I was able to begin remote debugging.
Don't forget to allow inbound and outbound traffic for the port you used in address\=6065.
Check the network setting in the VM and set 'Attached to: Bridged Adapter'.
Check the VM IP using the ipconfig command.
Run the Java application using the following command:
> java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=n myapp
In the above command, replace myapp with your application name (see the example after these steps).
In Eclipse go to Debug Configurations -> Remote Java Application -> New.
Host: the remote VM IP
Port: the port given in the command
Apply -> Debug
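As an example of the run command from the earlier step, if your application is packaged as a runnable JAR (the JAR name here is just a placeholder):
java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=n -jar myapplication.jar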