Arquillian test does not start - Java

The Arquillian test fails to start with the following messages:
Error: The LogManager accessed before the java.util.logging.manager system property was set to org.jboss.logmanager.LogManager. Results may be unexpected
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder"
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details
Unfortunately, I don't understand these messages. I've researched them but haven't found a solution.
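A common first step (an assumption on my part, since the question doesn't show the test setup) is to make sure the java.util.logging.manager property is set before java.util.logging.LogManager is first touched, for example in a static initializer of the test class or via the surefire plugin's system properties. A minimal sketch, assuming a JBoss/WildFly-managed Arquillian container:

// Minimal sketch, assuming an Arquillian test run against a JBoss/WildFly container.
// The property must be set before the JDK's LogManager class is initialized, so a
// static initializer in the test class is one of the earliest places to do it.
public class MyArquillianTest {  // hypothetical test class name
    static {
        System.setProperty("java.util.logging.manager",
                "org.jboss.logmanager.LogManager");
    }
    // ... @Deployment method and @Test methods go here ...
}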

Related

Spring Tool Suite SLF4J: Class path contains multiple SLF4J bindings

Running mvn clean install throws the error below in the console:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Applications/SpringToolSuite4.app/Contents/Eclipse/plugins/org.eclipse.m2e.maven.runtime.slf4j.simple_1.16.0.20200610-1735/jars/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [file:/Applications/SpringToolSuite4.app/Contents/Eclipse/configuration/org.eclipse.osgi/1965/0/.cp/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
The command below solved the issue; it renames one of the duplicate StaticLoggerBinder class files so that only a single binding remains on the classpath:
mv /Applications/SpringToolSuite4.app/Contents/Eclipse/configuration/org.eclipse.osgi/1965/0/.cp/org/slf4j/impl/StaticLoggerBinder.class /Applications/SpringToolSuite4.app/Contents/Eclipse/configuration/org.eclipse.osgi/1965/0/.cp/org/slf4j/impl/StaticLoggerBinder.class1
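If you want to confirm which binding actually wins before renaming anything, here is a small diagnostic sketch (SLF4J 1.7.x API; only the class name BindingCheck is made up):

// Diagnostic sketch for SLF4J 1.7.x: print the binding that was selected
// and the jar it was loaded from.
import org.slf4j.LoggerFactory;
import org.slf4j.impl.StaticLoggerBinder;

public class BindingCheck {
    public static void main(String[] args) {
        // The logger factory class the binding resolved to
        System.out.println(StaticLoggerBinder.getSingleton().getLoggerFactoryClassStr());
        // Where the winning StaticLoggerBinder.class came from
        System.out.println(StaticLoggerBinder.class.getProtectionDomain()
                .getCodeSource().getLocation());
        // Sanity check via the public API
        System.out.println(LoggerFactory.getILoggerFactory().getClass().getName());
    }
}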

HBase error while executing ./bin/start-hbase.sh (Windows)

I installed Hadoop with this tutorial https://www.youtube.com/watch?v=g7Qpnmi0Q-s and it's working. I installed it in C:/hadoop.
I installed Hadoop only because I read that it is a prerequisite for running HBase in non-standalone mode, and because the error messages refer to some Hadoop configuration. But it didn't help.
I tried to install HBase with this tutorial: https://ics.upjs.sk/~novotnyr/blog/334/setting-up-hbase-on-windows. But I'm getting this error while executing ./bin/start-hbase.sh.
Output in cygwin terminal:
$ ./bin/start-hbase.sh
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
: Name or service not knownstname laptop-l6543teb
running master, logging to /cygdrive/c/java/hbase-2.2.4-bin/hbase-2.2.4//logs/hbase-maiwa-master-LAPTOP-L6543TEB.out
: running regionserver, logging to /cygdrive/c/java/hbase-2.2.4-bin/hbase-2.2.4//logs/hbase-maiwa-regionserver-LAPTOP-L6543TEB.out
hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///C:/cygwin/root/tmp/hbase/data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>C:\Java\hbase-2.2.4-bin\hbase-2.2.4\logs</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>
Environment variables and Path entries: (screenshots omitted)
The error output produced by start-hbase.sh has three different errors.
1. Issue with HADOOP_HOME variable
WARNING: DEFAULT_LIBEXEC_DIR ignored. It has been replaced by HADOOP_DEFAULT_LIBEXEC_DIR.
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ERROR: Invalid HADOOP_COMMON_HOME
Update the Environment variables with HADOOP_HOME pointing to the Hadoop installation folder (not the bin folder within the installation folder).
As per your setting,
HADOOP_HOME=C:\hadoop\
Additionally, set the location of the configuration files
HADOOP_CONF_DIR=C:\hadoop\etc\hadoop\
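To confirm these variables are actually visible to a JVM launched from your shell, here is a quick sanity-check sketch (the variable names are the ones from this answer; the class name EnvCheck is made up):

// Sanity-check sketch: print the Hadoop/HBase variables as the JVM sees them.
public class EnvCheck {
    public static void main(String[] args) {
        for (String name : new String[] {"HADOOP_HOME", "HADOOP_CONF_DIR", "HBASE_HOME"}) {
            System.out.println(name + " = " + System.getenv(name));
        }
    }
}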
2. Issue with interpreting Linux style path or Invalid path
cygpath: can't convert empty path
In hbase-env.sh (under C:\Java\hbase-2.2.4-bin\hbase-2.2.4\conf\), update the values for HBASE_HOME and HBASE_CLASSPATH
As per your installation,
export HBASE_HOME=/cygdrive/c/Java/hbase-2.2.4-bin/hbase-2.2.4/
export HBASE_CLASSPATH=/cygdrive/c/Java/hbase-2.2.4-bin/hbase-2.2.4/lib/
And in your environment variables, make sure HBASE_HOME is configured similar to HADOOP_HOME.
3. Unable to resolve hostname
: Name or service not knownstname laptop-l6543teb
Update your hosts file with correct IP - Hostname mapping.
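To verify the mapping took effect, a small lookup sketch (the hostname is the one from the error above; the class name HostCheck is made up):

// Lookup sketch: check that the local hostname resolves before starting HBase.
import java.net.InetAddress;

public class HostCheck {
    public static void main(String[] args) throws Exception {
        // Resolves the machine's own name; throws UnknownHostException if the
        // hosts file has no matching entry.
        InetAddress local = InetAddress.getLocalHost();
        System.out.println(local.getHostName() + " -> " + local.getHostAddress());
        System.out.println(InetAddress.getByName("laptop-l6543teb"));
    }
}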

Google Maps Services Java Client Freezing WebApp

I would like to get the latitude and longitude of a location name entered by a customer from the Google Places API. The execution freezes at the second line.
import com.google.maps.GeoApiContext;
import com.google.maps.PlacesApi;
import com.google.maps.errors.ApiException;
import com.google.maps.model.PlaceDetails;
import java.io.IOException;

try {
    GeoApiContext context = new GeoApiContext.Builder().apiKey("MY_API_KEY").build();
    PlaceDetails placeDetails = PlacesApi.placeDetails(context, "Nairobi").await();
    double lat = placeDetails.geometry.location.lat;
    double lng = placeDetails.geometry.location.lng;
} catch (ApiException | InterruptedException | IOException apiException) {
    apiException.printStackTrace(System.out);
}
I get the following stack trace, but I can't quite pinpoint the root cause, as the example I am running is the simplest of the tests provided.
Libraries included in my application are:
google-maps-services-0.2.1.jar
gson-2.8.1.jar
okhttp-3.8.1.jar
okio-1.13.0.jar
slf4j-api-1.7.25.jar
There is also a similar issue on GitHub.
This is the stack trace I get:
Info: apartments was successfully deployed in 2,737 milliseconds.
Severe: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
Severe: SLF4J: Defaulting to no-operation (NOP) logger implementation
Severe: SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Severe: Exception in thread "RateLimitExecutorDelayThread"
Severe: java.lang.NoSuchMethodError: com.google.common.util.concurrent.RateLimiter.acquire()D
at com.google.maps.internal.RateLimitExecutorService.run(RateLimitExecutorService.java:75)
at java.lang.Thread.run(Thread.java:745)
Change the google-maps-services version to 0.2.0 in pom.xml, as below:
<dependency>
    <groupId>com.google.maps</groupId>
    <artifactId>google-maps-services</artifactId>
    <version>0.2.0</version>
</dependency>
This will fix your problem.
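The root cause is worth spelling out: the ()D descriptor in the NoSuchMethodError means the client library expects com.google.common.util.concurrent.RateLimiter.acquire() to return a double, which Guava only does from version 16.0 onward, so an older Guava on the classpath breaks google-maps-services 0.2.1. Upgrading Guava instead of downgrading the client should work as well. A small diagnostic sketch to see which Guava jar wins (only the class name GuavaCheck is made up):

// Diagnostic sketch: report where Guava's RateLimiter is loaded from and
// whether its no-arg acquire() returns double (Guava >= 16.0) or void.
import com.google.common.util.concurrent.RateLimiter;

public class GuavaCheck {
    public static void main(String[] args) throws Exception {
        System.out.println(RateLimiter.class.getProtectionDomain()
                .getCodeSource().getLocation());
        System.out.println(RateLimiter.class.getMethod("acquire").getReturnType());
    }
}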

com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed

I'm facing the issue below when trying to connect to a Cassandra cluster and display table contents:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /175.14.3.164:9042 (com.datastax.driver.core.TransportException: [/172.16.3.163:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)
at com.datastax.driver.core.Cluster.init(Cluster.java:158)
at com.datastax.driver.core.Cluster.connect(Cluster.java:248)
at com.datastax.driver.core.Cluster.connect(Cluster.java:281)
at com.X.Y.App.main(App.java:27)
In my cassandra.yaml file:
native_transport_port: 9042
listen_address: 172.14.3.164
seeds: 172.14.3.164
rpc_address: 172.14.3.164
And the code:
cluster = Cluster.builder().addContactPoint("172.14.3.164").build();
I have seen other links related to this and followed them, but still couldn't fix it. Kindly help.
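For reference, a minimal self-contained connection sketch for the DataStax 3.x driver (the contact point and port are the values from the question; a NoHostAvailableException at this stage usually means the node is not reachable on that address/port from the client machine, or rpc_address does not match what the client dials):

// Minimal connectivity sketch for the DataStax Java driver 3.x.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CassandraCheck {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("172.14.3.164") // must be the node's rpc_address
                .withPort(9042)                  // must match native_transport_port
                .build();
             Session session = cluster.connect()) {
            System.out.println("Connected to: " + cluster.getClusterName());
        }
    }
}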

Hadoop CDH4 Error: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder"

I am using Ubuntu 14.04 and CDH 4.7.
I am installing as per the procedure given in the link below:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Quick-Start/cdh4qs_topic_3_2.html
The problem is I am not able to start the datanode. I am getting the error:
naveensrikanthd@ubuntu:/$ for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done
[sudo] password for naveensrikanthd:
* Starting Hadoop datanode:
starting datanode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-datanode-ubuntu.out
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
* Starting Hadoop namenode:
namenode running as process 15437. Stop it first.
* Starting Hadoop secondarynamenode:
secondarynamenode running as process 3061. Stop it first.
naveensrikanthd@ubuntu:/$ jps
7467 RunJar
8048 RunJar
18363 Jps
No Hadoop process is running, and the three SLF4J statements shown above keep alternating between the namenode and datanode output.
Below is the log file at this path:
/var/log/hadoop-hdfs/hadoop-hdfs-datanode-ubuntu.out
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
ulimit -a for user hdfs
What should I do to get rid of this error? Can anyone please help me get past it?
The output shows that in fact the namenodes are already running. You should double-check where you think they are supposed to run and what your config says, because it's saying you already succeeded.
The SLF4J warning has nothing to do with Hadoop functionality.
