We're trying to debug the behaviour of SVNKit during checkout on Linux (Debian), which is related to a JDK/JVM bug described here:
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8172578
https://bugs.openjdk.java.net/browse/JDK-8075484
We followed the steps described here
https://wiki.svnkit.com/Troubleshooting
by renaming the file conf/logging.properties.disabled to logging.properties, and also by replacing /usr/lib/jvm/java-1.8.0-openjdk-amd64/jre/lib/logging.properties (which is a symlink to /etc/java-8-openjdk/logging.properties).
This produces a log file under Windows (in the bin folder of the svnkit standalone), but has no effect under Linux.
Calling
./jsvn checkout --username XYZ --password ABC http://SVN_SERVER/svn/project/trunk/
makes ps aux report:
/usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.util.logging.config.file=/tmp/svnkit-1.8.15/conf/logging.properties -Dsun.io.useCanonCaches=false -classpath /tmp/svnkit-1.8.15/lib/svnkit-1.8.15.jar:/tmp/svnkit-1.8.15/lib/sequence-library-1.0.3.jar:/tmp/svnkit-1.8.15/lib/sqljet-1.1.10.jar:/tmp/svnkit-1.8.15/lib/jna-4.1.0.jar:/tmp/svnkit-1.8.15/lib/jna-platform-4.1.0.jar:/tmp/svnkit-1.8.15/lib/trilead-ssh2-1.0.0-build221.jar:/tmp/svnkit-1.8.15/lib/jsch.agentproxy.connector-factory-0.0.7.jar:/tmp/svnkit-1.8.15/lib/jsch.agentproxy.svnkit-trilead-ssh2-0.0.7.jar:/tmp/svnkit-1.8.15/lib/antlr-runtime-3.4.jar:/tmp/svnkit-1.8.15/lib/jsch.agentproxy.core-0.0.7.jar:/tmp/svnkit-1.8.15/lib/jsch.agentproxy.usocket-jna-0.0.7.jar:/tmp/svnkit-1.8.15/lib/jsch.agentproxy.usocket-nc-0.0.7.jar:/tmp/svnkit-1.8.15/lib/jsch.agentproxy.sshagent-0.0.7.jar:/tmp/svnkit-1.8.15/lib/jsch.agentproxy.pageant-0.0.7.jar:/tmp/svnkit-1.8.15/lib/svnkit-cli-1.8.15.jar org.tmatesoft.svn.cli.SVN checkout --username XYZ --password ABC http://SVN_SERVER/svn/project/trunk/
where
-Djava.util.logging.config.file=/tmp/svnkit-1.8.15/conf/logging.properties
is the part that tells us the renamed logging.properties file is being used to configure the logging.
The content of the logging.properties is
svnkit.level=FINEST
svnkit-network.level=FINEST
svnkit-wc.level=FINEST
svnkit-cli.level=FINEST
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = svnkit.%u.log
java.util.logging.FileHandler.limit = 0
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.append = true
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
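For what it's worth, the FileHandler pattern above is relative, so the log file ends up in whatever the JVM's current working directory happens to be (which is presumably why it shows up in the bin folder on Windows). One untested variation is to pin the log to the home directory with the standard %h placeholder:

```properties
# %h expands to the value of the user.home system property
java.util.logging.FileHandler.pattern = %h/svnkit.%u.log
```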
Any ideas what we are doing wrong?
Related
With regard to the Log4j JNDI remote code execution vulnerability that has been identified, CVE-2021-44228 (also see references), I wondered whether Log4j 1.2 is also impacted, but the closest I got from a source-code review is the JMSAppender.
The question is: while posts on the Internet indicate that Log4j 1.2 is also vulnerable, I am not able to find the relevant source code for it.
Am I missing something that others have identified?
Log4j 1.2 appears to have a vulnerability in the SocketServer class, but my understanding is that it needs to be enabled in the first place to be applicable, and hence it is not a passive threat, unlike the JNDI lookup vulnerability that was identified.
Is my understanding correct that Log4j 1.2 is not vulnerable to the JNDI remote code execution bug?
References
Apache Log4j Security Vulnerabilities
Zero-day in ubiquitous Log4j tool poses a grave threat to the Internet
Worst Apache Log4j RCE Zero day Dropped on Internet
‘Log4Shell’ vulnerability poses critical threat to applications using ‘ubiquitous’ Java logging package Apache Log4j
This blog post from Cloudflare also makes the same point as AKX: the vulnerability was introduced in Log4j 2.
Update #1 - A fork of the (now-retired) apache-log4j-1.2.x, with patches for a few vulnerabilities identified in the older library, is now available from the original Log4j author at https://reload4j.qos.ch/. As of 21-Jan-2022, version 1.2.18.2 has been released. Vulnerabilities addressed to date include those pertaining to JMSAppender, SocketServer and Chainsaw. Note that I am simply relaying this information and have not verified the fixes myself. Please refer to the link for additional details.
The JNDI feature was added into Log4j 2.0-beta9.
Log4j 1.x thus does not have the vulnerable code.
While not affected by the exact same Log4Shell issue, the Apache Log4j team recommends removing JMSAppender and SocketServer (the latter is affected by CVE-2019-17571) from your JAR files.
You can use the zip command to remove the affected classes. Replace the filename/version with yours:
zip -d log4j-1.2.16.jar org/apache/log4j/net/JMSAppender.class
zip -d log4j-1.2.16.jar org/apache/log4j/net/SocketServer.class
You can look through the files in the JAR using less and grep, e.g. less log4j-1.2.16.jar | grep JMSAppender
That being said, Apache recommends that you upgrade to the 2.x version if possible. According to their security page:
Please note that Log4j 1.x has reached end of life and is no longer supported. Vulnerabilities reported after August 2015 against Log4j 1.x were not checked and will not be fixed. Users should upgrade to Log4j 2 to obtain security fixes.
In addition to giraffesyo's answer, and in case it helps anyone - I wrote this Bash script, which removes classes identified as vulnerable (link here to the Log4j dev thread) and sets properties files read-only, as suggested on a Red Hat Bugzilla thread.
Note 1 - it does not check for any usage of these classes; it is purely a way to find and remove them - use at your own risk!
Note 2 - it depends on zip and unzip being installed
#!/bin/bash
DIR=$1
APPLY=$2
# Classes to be searched for/removed
CLASSES="org/apache/log4j/net/SimpleSocketServer.class
org/apache/log4j/net/SocketServer.class
org/apache/log4j/net/JMSAppender.class"
PROGNAME=`basename $0`
PROGPATH=`echo $0 | sed -e 's,[\\/][^\\/][^\\/]*$,,'`
usage () {
    echo >&2 "Usage: ${PROGNAME} DIR [APPLY]"
    echo >&2 "Where DIR is the starting directory for find"
    echo >&2 "and APPLY = \"Y\" - to perform purification"
    exit 1
}
# Force upper case on APPLY
APPLY=$(echo "${APPLY}" | tr '[:lower:]' '[:upper:]')
# Default APPLY to N
if [ "$APPLY" == "" ] ; then
    APPLY="N"
fi
# Check parameters
if [ "$DIR" == "" ] ; then
    usage
fi
echo $APPLY | grep -q -i -e '^Y$' -e '^N$' || usage
# Search for log4j jar files - for class file removal
# (quote the -name pattern so the shell does not expand it prematurely)
FILES=$(find "$DIR" -name '*log4j*jar')
for f in $FILES
do
    echo "Checking Jar [$f]"
    for jf in $CLASSES
    do
        unzip -v "$f" | grep -e "$jf"
        if [ "$APPLY" = "Y" ]
        then
            echo "Deleting $jf from $f"
            zip -d "$f" "$jf"
        fi
    done
done
# Search for Log4j properties files - for read-only setting
PFILES=$(find "$DIR" -name '*log4j*properties')
for f in $PFILES
do
    echo "Checking permissions [$f]"
    if [ "$APPLY" = "Y" ]
    then
        echo "Changing permissions on $f"
        chmod 444 "$f"
    fi
    ls -l "$f"
done
I am trying to run the typical Flume first example to fetch tweets and store them in HDFS using Apache Flume.
[Hadoop version 3.1.3; Apache Flume 1.9.0]
I have configured flume-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/
export CLASSPATH=$CLASSPATH:/FLUME_HOME/lib/*
Configured the agent as shown in the TwitterStream.properties config file:
# Naming the components on the current agent.
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
# Describing/Configuring the source
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.consumerKey = **********************
TwitterAgent.sources.Twitter.consumerSecret = **********************
TwitterAgent.sources.Twitter.accessToken = **********************
TwitterAgent.sources.Twitter.accessTokenSecret = **********************
TwitterAgent.sources.Twitter.keywords = tutorials point, java, bigdata, mapreduce, mahout, hbase, nosql
# Describing/Configuring the sink
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/twitter_data/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
# Describing/Configuring the channel
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100
# Binding the source and sink to the channel
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sinks.HDFS.channel = MemChannel
And then running the command:
bin/flume-ng agent -c /home/jiglesia/hadoop/flume/conf/ -f TwitterStream.properties -n TwitterAgent -Dflume.root.logger=INFO, console -n TwitterAgent
Getting the following ERROR during the execution:
Info: Sourcing environment configuration script /home/jiglesia/hadoop/flume/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/jiglesia/hadoop/bin/hadoop) for HDFS access
/home/jiglesia/hadoop/libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.FLUME.TOOLS.GETJAVAPROPERTY_USER: bad substitution
/home/jiglesia/hadoop/libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.FLUME.TOOLS.GETJAVAPROPERTY_OPTS: bad substitution
Info: Including Hive libraries found via () for Hive access
I don't know why it says Bad Substitution.
Finally, I attach the entire log in case it tells you anything:
Info: Sourcing environment configuration script /home/jiglesia/hadoop/flume/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/jiglesia/hadoop/bin/hadoop) for HDFS access
/home/jiglesia/hadoop/libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.FLUME.TOOLS.GETJAVAPROPERTY_USER: bad substitution
/home/jiglesia/hadoop/libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.FLUME.TOOLS.GETJAVAPROPERTY_OPTS: bad substitution
Info: Including Hive libraries found via () for Hive access
+ exec /usr/lib/jvm/java-8-openjdk-amd64/jre//bin/java -Xmx20m -Dflume.root.logger=INFO, -cp '/home/jiglesia/hadoop/flume/conf:/home/jiglesia/hadoop/flume/lib/*:/home/jiglesia/hadoop/etc/hadoop:/home/jiglesia/hadoop/share/hadoop/common/lib/*:/home/jiglesia/hadoop/share/hadoop/common/*:/home/jiglesia/hadoop/share/hadoop/hdfs:/home/jiglesia/hadoop/share/hadoop/hdfs/lib/*:/home/jiglesia/hadoop/share/hadoop/hdfs/*:/home/jiglesia/hadoop/share/hadoop/mapreduce/lib/*:/home/jiglesia/hadoop/share/hadoop/mapreduce/*:/home/jiglesia/hadoop/share/hadoop/yarn:/home/jiglesia/hadoop/share/hadoop/yarn/lib/*:/home/jiglesia/hadoop/share/hadoop/yarn/*:/lib/*' -Djava.library.path=:/home/jiglesia/hadoop/lib/native org.apache.flume.node.Application -f TwitterStream.properties -n TwitterAgent console -n TwitterAgent
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/jiglesia/hadoop/flume/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/jiglesia/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (org.apache.flume.node.Application).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
The environment variables configured in the bashrc file:
# HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/
export HADOOP_HOME=/home/jiglesia/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
# HADOOP VARIABLES END
# FLUME VARIABLES START
FLUME_HOME=/home/jiglesia/hadoop/flume
PATH=$PATH:/FLUME_HOME/bin
CLASSPATH=$CLASSPATH:/FLUME_HOME/lib/*
# FLUME VARIABLES END
Thanks for your help!
You've referenced bash variables incorrectly.
Try this
FLUME_HOME=$HADOOP_HOME/flume
PATH=$PATH:$FLUME_HOME/bin
CLASSPATH=$CLASSPATH:$FLUME_HOME/lib/*.jar
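The difference is that without the $ sigil the shell treats /FLUME_HOME as a literal directory name rather than expanding the variable - easy to confirm:

```shell
FLUME_HOME=/home/jiglesia/hadoop/flume
# No $ sigil: the shell treats /FLUME_HOME/bin as a literal path
echo /FLUME_HOME/bin      # → /FLUME_HOME/bin
# With $ the variable is expanded
echo $FLUME_HOME/bin      # → /home/jiglesia/hadoop/flume/bin
```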
Note: I'd suggest not putting flume as a subdirectory of Hadoop.
I recommend using Apache Ambari to install and configure Hadoop and Flume processes
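As a side note (an observation from the exec line in the question, not a verified fix): the space in -Dflume.root.logger=INFO, console makes the shell split the option into two separate arguments, which is why the stray console -n TwitterAgent tokens appear at the end of the launched java command. A quick demonstration of the word splitting:

```shell
# With a space after the comma the option becomes two shell words
set -- -Dflume.root.logger=INFO, console
echo $#      # → 2
# Without the space it is a single argument, as flume-ng expects
set -- -Dflume.root.logger=INFO,console
echo $#      # → 1
```

Writing -Dflume.root.logger=INFO,console (no space) in the flume-ng invocation should keep the logger setting intact.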
In .bashrc and .profile I have my QHOME variable set to the directory that contains k4.lic, l64, q.k (validated by echo $QHOME).
When I startup q from the login shell all works fine, the license file is found.
When I start a q process programmatically from Java, I get the following output:
[13:43:48][Step 1/2] WARN [main] (QSession.java:78) - Q Session not alive
[13:43:48][Step 1/2] INFO [main] (QSession.java:97) - QHOME: null
[13:43:48][Step 1/2] INFO [main] (QSession.java:98) - QLIC: null
[13:43:48][Step 1/2] ERROR [main] (QSession.java:101) - Error output
[13:43:48][Step 1/2] '2018.02.06T13:43:46.597 k4.lic
i.e. the license is not found because the QHOME env variable is undefined. This problem is described here: ".bashrc is only sourced in a login shell". The proposed solution is
"If you want a variable to be set in all Bourne shell derivatives regardless of whether they are interactive or not, put it in both .profile and .bashrc."
But I have already copied the content of .bashrc into .profile and still get the same error.
Unfortunately, there is no way to pass the path to the license as a command line argument for the q binary, so I have to work with QHOME.
What I could do is put a 32-bit version in my java project but obviously it is advantageous to use the 64-bit version.
Suggestions much appreciated!
Thanks
Thanks to #Jonathan McMurray!
The exact solution is to use
Runtime.getRuntime().exec(command, envp);
where command is, for example, q -p 5000, and envp is, for example,
String[] envp = {"QHOME=" + qHomePath};
Note that passing a non-null envp replaces the child process's entire environment, so include any other variables the q process needs.
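The same per-process environment passing can be checked from a shell, where a VAR=value prefix sets the variable only for the child command (the path here is a placeholder):

```shell
# The prefix assignment is visible only inside the child process
QHOME=/opt/q sh -c 'echo $QHOME'      # → /opt/q
```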
I'm trying to create a Wildfly docker image with a postgres datasource.
When I build the dockerfile it always fails with Permission Denied when I try to install the postgres module.
My dockerfile looks look this:
FROM wildflyext/wildfly-camel
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
ADD postgresql-9.4-1201.jdbc41.jar /tmp/
ADD config.sh /tmp/
ADD batch.cli /tmp/
RUN /tmp/config.sh
Which calls the following:
#!/bin/bash
JBOSS_HOME=/opt/jboss/wildfly
JBOSS_CLI=$JBOSS_HOME/bin/jboss-cli.sh
JBOSS_MODE=${1:-"standalone"}
JBOSS_CONFIG=${2:-"$JBOSS_MODE.xml"}
function wait_for_wildfly() {
until `$JBOSS_CLI -c "ls /deployment" &> /dev/null`; do
sleep 10
done
}
echo "==> Starting WildFly..."
$JBOSS_HOME/bin/$JBOSS_MODE.sh -c $JBOSS_CONFIG > /dev/null &
echo "==> Waiting..."
wait_for_wildfly
echo "==> Executing..."
$JBOSS_CLI -c --file=`dirname "$0"`/batch.cli --connect
echo "==> Shutting down WildFly..."
if [ "$JBOSS_MODE" = "standalone" ]; then
$JBOSS_CLI -c ":shutdown"
else
$JBOSS_CLI -c "/host=*:shutdown"
fi
And
batch
module add --name=org.postgresql --resources=/tmp/postgresql-9.4-1201.jdbc41.jar --dependencies=javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
run-batch
The output when building is:
==> Starting WildFly...
==> Waiting...
==> Executing... Failed to locate the file on the filesystem copying /tmp/postgresql-9.4-1201.jdbc41.jar to
/opt/jboss/wildfly/modules/org/postgresql/main/postgresql-9.4-1201.jdbc41.jar:
/tmp/postgresql-9.4-1201.jdbc41.jar (Permission denied)
What permissions are required, and where do I set the permission(s)?
Thanks
It seems the JAR file is not readable by the jboss user (the user coming from the parent image). The postgresql-9.4-1201.jdbc41.jar is added under the root user - find details in this GitHub discussion.
You could
either add permissions to JAR file before adding it to the image
or add permissions to JAR file in the image after the adding
or change ownership of the file in the image
The simplest solution is probably the first one. The other two also require switching the user to root (USER root in the Dockerfile) and then back to jboss.
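For the third option (changing ownership inside the image), a sketch of what the Dockerfile could look like (the jboss user/group name is an assumption based on the parent image, not verified):

```dockerfile
FROM wildflyext/wildfly-camel
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
# Switch to root so the copied files can be chown'd
USER root
ADD postgresql-9.4-1201.jdbc41.jar /tmp/
ADD config.sh /tmp/
ADD batch.cli /tmp/
RUN chown jboss:jboss /tmp/postgresql-9.4-1201.jdbc41.jar /tmp/config.sh /tmp/batch.cli
# Drop back to the unprivileged user before running the config script
USER jboss
RUN /tmp/config.sh
```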
Here's a piece of advice: create a CLI file like this:
connect
module add --name=sqlserver.jdbc --resources=#INSTALL_FOLDER#/libext/jtds-1.3.1.jar --dependencies=javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=sqlserver:add(driver-module-name=sqlserver.jdbc,driver-name=sqlserver,driver-class-name=#JDBC_DRIVER#)
/subsystem=datasources/data-source=#DATASOURCENAME#:add(jndi-name=java:jboss/#JNDI_NAME#,enabled="true",use-java-context="true",driver-name=sqlserver,connection-url="#JDBC_URL#",user-name=#JDBC_USER#,password=#JDBC_PASSWORD#,validate-on-match=true,background-validation=true)
Replace each #VAR# placeholder with your own value, and it should work!
Be careful: JBoss/WildFly 10 resolves the jar path given to --resources relatively by default, whereas WildFly 8 expects an absolute path - this can trip you up! ;-)
cheers!
Google's Closure Compiler has this flag for logging:
--logging_level VAL
The logging level (standard java.util.logging.Level values)
for Compiler progress. Does not control errors or warnings
for the JavaScript code under compilation.
So I can set the logging level to one of: SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST.
But how do I enable logging, and where is the log file?
Create a file called logging.properties with the following contents:
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = closure-compiler.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
Then set the java.util.logging.config.file Java system property to logging.properties when running the application:
java -Djava.util.logging.config.file=logging.properties \
-jar compiler.jar \
--js script.js \
--logging_level FINEST
The log will be written to closure-compiler.log in the current working directory.