I have a custom source for my Flume (version 1.5.0) agent and I want to debug it. It's actually a custom Twitter source, from Cloudera's example here. I have a number of questions:
(1) Is it possible to remote debug the Flume source (written in Java) when I run the Flume agent?
In addition, when I run the agent, I have this option
-Dflume.root.logger=DEBUG,console
but it seems that the logger.debug calls I have in the Java source are not appearing in the terminal.
(2) How do I make my logs appear? What's missing in my Flume or logging configuration?
(3) If I'm able to make the logs appear, how do I write only the logger.debug output of my Flume source to a file, excluding the Flume agent's own logs?
Thanks.
Use the following arguments for the JVM running the Flume agent, as specified in http://stackoverflow.com/a/22631355/1660002 .
For example, with a newer JDK (1.8 in my case):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6006
Then you can connect to that remote port (entered in the address field) using IntelliJ or any other IDE's remote debugging.
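A minimal sketch of wiring this into a Flume agent launch, assuming flume-ng picks up the exported JAVA_OPTS (the port number, config file name, and agent name below are placeholders):

```shell
# Sketch: enable JDWP for the Flume agent's JVM. flume-ng reads JAVA_OPTS
# (exported here, or set in conf/flume-env.sh); port 6006 is arbitrary.
JDWP_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6006"
export JAVA_OPTS="$JAVA_OPTS $JDWP_OPTS"
echo "Debug options: $JAVA_OPTS"

# Then start the agent as usual, e.g.:
# bin/flume-ng agent --conf conf --conf-file twitter.conf --name TwitterAgent
```

With suspend=y instead, the JVM waits for the debugger to attach before starting, which helps if you need to break inside source initialization.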
I'm trying to use Logback to centralize the logging of 2 microservices.
Reading the documentation, I understood that I should set up a SimpleSocketServer to which my other 2 Java microservices send their logging events.
https://logback.qos.ch/manual/appenders.html#serverSocketAppender
Assuming you are in the logback-examples/ directory, start SimpleSocketServer with the following command:
java ch.qos.logback.classic.net.SimpleSocketServer 6000 src/main/java/chapters/appenders/socket/server1.xml
So I've cloned the GitHub repo https://github.com/qos-ch/logback and from inside the logback-examples/ directory I've run the command, only to get:
Error: Could not find or load main class ch.qos.logback.classic.net.SimpleSocketServer
I can't find an example of what I'm trying to achieve, and I also don't understand how I can start a SimpleSocketServer on my prod server just by launching a java command, without installing anything.
Can someone clarify that for me? Thanks
I am new to Service Fabric and trying to deploy a Java application to a local Service Fabric cluster with 5 nodes. I am using an Ubuntu VM and following the steps below to build and deploy it in the ASF cluster. While deploying I am getting the error below. I tried to deploy to an ASF remote cluster as well and got the same issue. Can you please help me with this?
Link: Java Application deployment to ASF cluster
Error code:
Just tried this out and it worked for me, so I'm just going to ask some questions to make sure we didn't miss anything from the documents.
Under DhrumilSpringServiceFabric->DhrumilSpringGettingStartedPkg -> code, do you have two files?
gs-spring-boot-0.1.0.jar
entryPoint.sh
The entryPoint.sh file should have the following contents:
#!/bin/bash
BASEDIR=$(dirname $0)
cd $BASEDIR
java -jar gs-spring-boot-0.1.0.jar
Additionally, in the ServiceManifest.xml (located in DhrumilSpringServiceFabric->DhrumilSpringGettingStartedPkg), there should be the following snippet:
<CodePackage Name="code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <Program>entryPoint.sh</Program>
      <Arguments></Arguments>
      <WorkingFolder>CodePackage</WorkingFolder>
    </ExeHost>
  </EntryPoint>
</CodePackage>
The Program property value "entryPoint.sh" has to be identical including casing with what's in your "code" folder.
If the above all check out, then please respond and happy to dive deeper into this.
@Dhrumil Shah, I replicated the steps provided in the document and was able to achieve the desired results successfully.
Can you let me know if your Java application works fine without Service Fabric, and whether you are using the CLI for your deployment?
Also, please check that Java is installed properly on your VM. Check the link below for more information:
Java Webapp Deployment in Azure Service fabric explorer
I found the issue after spending some time in the ASF logs. The issue was that my Yeoman (yo) generator was not working properly; the Yo JSON file was corrupted. I ran yo doctor and corrected it. It works now.
I'm very new to debugging code cloned from GitHub. However, so far I have done the following:
cloned the repo to my local machine (git clone) as well as using the "sourcetree" software
built the code (mvn clean install)
imported the Maven project into an IDE (Eclipse, IntelliJ)
After the build completed, I was able to start the application (e.g. start.sh) from the target/bin directory that was created by the build
Logged into the application's UI successfully
Questions:
- Now I am not sure which is the application's main class, or in which .java file I should set a breakpoint
- Once the breakpoint is set, how should I debug while navigating through the UI?
Can someone please give me a pointer. Thanks in advance!
Eg: I'm testing all this on "Apache/NiFi-Registry" project.
ref: https://github.com/apache/nifi-registry
You're going to need to edit this line in the nifi-registry.sh script to enable remote debugging:
run_nifi_registry_cmd="'${JAVA}' -cp '${BOOTSTRAP_CLASSPATH}' -Xms12m -Xmx24m ${BOOTSTRAP_DIR_PARAMS} org.apache.nifi.registry.bootstrap.RunNiFiRegistry $@"
Is it just me, or is that memory footprint really small?
For example, in Kafka, there is this section of the startup script
# Set Debug options if enabled
if [ "x$KAFKA_DEBUG" != "x" ]; then
    # Use default ports
    DEFAULT_JAVA_DEBUG_PORT="5005"
    if [ -z "$JAVA_DEBUG_PORT" ]; then
        JAVA_DEBUG_PORT="$DEFAULT_JAVA_DEBUG_PORT"
    fi
    # Use the defaults if JAVA_DEBUG_OPTS was not set
    DEFAULT_JAVA_DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=${DEBUG_SUSPEND_FLAG:-n},address=$JAVA_DEBUG_PORT"
    if [ -z "$JAVA_DEBUG_OPTS" ]; then
        JAVA_DEBUG_OPTS="$DEFAULT_JAVA_DEBUG_OPTS"
    fi
    echo "Enabling Java debug options: $JAVA_DEBUG_OPTS"
    KAFKA_OPTS="$JAVA_DEBUG_OPTS $KAFKA_OPTS"
fi
Then it eventually runs ${JAVA} ... ${KAFKA_OPTS}, and if you stop the Kafka server and start it with
export KAFKA_DEBUG=y; kafka-server-start ...
Then you can attach a remote debugger on port 5005 by default.
I understand you're using NiFi Registry, not Kafka, but basically, you need to add arguments to the JVM and reboot it. You can't just attach to the running Registry and walk through the source code.
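Applying the same pattern to that line in nifi-registry.sh could look like the sketch below (port 5005 and the placement of the options are assumptions; the surrounding variables come from the script itself):

```shell
# Sketch: prepend JDWP options to the bootstrap command in nifi-registry.sh.
# Note this debugs the bootstrap JVM; the options must land before the main class.
DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
run_nifi_registry_cmd="'${JAVA}' -cp '${BOOTSTRAP_CLASSPATH}' ${DEBUG_OPTS} -Xms12m -Xmx24m ${BOOTSTRAP_DIR_PARAMS} org.apache.nifi.registry.bootstrap.RunNiFiRegistry $@"
echo "$run_nifi_registry_cmd"
```

After restarting the Registry with this change, point your IDE's remote debug configuration at port 5005.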
When I run the sample IBM Bluemix Liberty for Java application https://github.com/ibmjstart/bluemix-java-postgresql-uploader.git I get the following error:
-----> Downloaded app package (1.9M)
-----> Downloaded app buildpack cache (4.0K)
OK
/var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:101:in `build_pack': Unable to detect a supported application type (RuntimeError)
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:74:in `block in compile_with_timeout'
from /usr/lib/ruby/1.9.1/timeout.rb:68:in `timeout'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:73:in `compile_with_timeout'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:54:in `block in stage_application'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:50:in `chdir'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:50:in `stage_application'
from /var/vcap/packages/dea_next/buildpacks/bin/run:10:in `<main>'
FAILED
Server error, status code: 400, error code: 170001, message: Staging error: cannot get instances since staging failed
TIP: use 'cf logs jpu-henryhan --recent' for more information
The top error looks like you left off the -p <path_to_war> parameter when doing a push. If you just push a directory containing a WAR file, it will not be detected by the Java buildpack.
The tip provided in the output of your cf push request is relevant.
TIP: use 'cf logs jpu-henryhan --recent' for more information
Running that command will tail the log files produced during the staging process and let you see what error may have been raised. Often, it can be a missing dependency or a transient failure of some sort.
I just successfully deployed the sample using the "deploy to Bluemix" button and manually via the cf command line tool. Unless you changed the code, it is most likely that this error is a transient failure.
Run the following command:
$ cf push jpu- -b https://github.com/cloudfoundry/java-buildpack --no-manifest --no-start -p PostgreSQLUpload.war
Add the parameter "-b https://github.com/cloudfoundry/java-buildpack" to set the buildpack.
Could someone give me a detailed description of the Flume command below, used to execute a conf file?
bin/flume-ng agent --conf-file netcat_flume.conf --name a1
-Dflume.root.logger=INFO,console
To my knowledge:
--conf-file -> specifies the configuration file, i.e. tells Flume which file to run.
--name -> specifies the agent name.
But what does the option below do?
-Dflume.root.logger=INFO,console
Thanks in advance for your help.
It's the log4j property, which is explained in detail below.
INFO means output only informational messages that highlight the progress of the application at a coarse-grained level.
console means output the log4j logs to the console. Other options available are writing to a database or writing to a file.
-Dflume.root.logger=INFO,console
The statement above writes coarse-grained logs of the Flume execution to the console.
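For reference, the INFO,console pair maps onto log4j configuration roughly as in this minimal sketch (the appender names follow Flume's stock conf/log4j.properties; the file path is an assumption):

```properties
# log4j reads the level/appender pair from the flume.root.logger property;
# -Dflume.root.logger=INFO,console overrides the default at JVM startup
flume.root.logger=INFO,LOGFILE
log4j.rootLogger=${flume.root.logger}

# 'console' appender: writes log events to stdout
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout

# 'LOGFILE' appender: writes log events to a rolling file
log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.File=./logs/flume.log
log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
```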
The shell script flume-ng accepts the arguments and finally runs a command like:
java -Xmx20m -Dflume.root.logger=INFO,console -cp '=:/home/scy/apache-flume-1.4.0-bin/lib/*:/home/scy/apache-flume-1.4.0-bin/conf:/home/scy/jdk1.6.0_45/lib/tools.jar' -Djava.library.path= org.apache.flume.node.Application --conf-file conf/example.conf --name agent1 conf org.apache.flume.node
Let's look at the source code in org.apache.flume.node.Application.main(String[] args):
PropertiesFileConfigurationProvider configurationProvider =
new PropertiesFileConfigurationProvider(agentName,
configurationFile);
Here the class PropertiesFileConfigurationProvider accepts agentName and configurationFile, which are specified by "--name" and "--conf-file" respectively.
Then application.start() runs all the sources, channels and sinks.
As for -Dflume.root.logger=INFO,console, let's look at flume's log4j.properties:
flume.root.logger=INFO,LOGFILE
flume.root.logger will be overridden by -Dflume.root.logger=INFO,console, which means all INFO-level logs are written to the console.