Java mail AWS SMTP: javax.mail.AuthenticationFailedException: 220 Ready to start TLS

I have configured the properties below for the SMTP appender. The same configuration works fine locally, but I get this error on AWS:
log4j.appender.email=org.apache.log4j.net.SMTPAppender
log4j.appender.email.BufferSize=20
log4j.appender.email.SMTPHost=email-smtp.us-east-1.amazonaws.com
log4j.appender.email.SMTPUsername=<username>
log4j.appender.email.SMTPPassword=<password>
log4j.appender.email.SMTPPort=587
log4j.appender.email.From=<from#email>
log4j.appender.email.To=<to#email>
log4j.appender.email.Subject=<Subject>
log4j.appender.email.layout=org.apache.log4j.PatternLayout
log4j.appender.email.layout.ConversionPattern=%d %-5p [%t] (%c{1}:%L) - %m%n
log4j.appender.email.SMTPDebug=true
log4j.appender.email.EnableSsl=true
log4j.appender.email.smtp.starttls.enable=true
log4j.appender.email.smtp.auth=true
log4j.appender.email.TLS=true

Replace
<dependency>
<groupId>javax.mail</groupId>
<artifactId>javax.mail-api</artifactId>
<version>1.6.2</version>
</dependency>
with
<dependency>
<groupId>com.sun.mail</groupId>
<artifactId>jakarta.mail</artifactId>
<version>1.6.7</version>
</dependency>

Solution: I used version 1.6.4 of the mailapi and smtp jars.
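To rule out the appender configuration itself, a minimal plain-JavaMail sketch can verify the SES SMTP connection directly. This assumes the javax.mail API (which jakarta.mail 1.x still exposes under the javax.mail package); the host is taken from the question, and the credentials and addresses are placeholders:

import java.util.Properties;
import javax.mail.*;
import javax.mail.internet.*;

public class SesSmtpTest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "email-smtp.us-east-1.amazonaws.com");
        props.put("mail.smtp.port", "587");
        props.put("mail.smtp.auth", "true");
        // SES on port 587 expects STARTTLS; this property is the usual fix
        // for the "220 Ready to start TLS" authentication failure.
        props.put("mail.smtp.starttls.enable", "true");

        Session session = Session.getInstance(props, new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("<username>", "<password>");
            }
        });
        session.setDebug(true);

        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("<from-email>"));
        msg.setRecipients(Message.RecipientType.TO, InternetAddress.parse("<to-email>"));
        msg.setSubject("SES SMTP test");
        msg.setText("Test body");
        Transport.send(msg);
    }
}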

Selenium - Exception in thread "main" java.lang.NoClassDefFoundError: org/reactivestreams/Publisher

I am creating a Maven project for Selenium in Eclipse. I don't know why it throws a log4j error now (it didn't before I upgraded Eclipse). The error is the java.lang.NoClassDefFoundError: org/reactivestreams/Publisher shown in the title.
I have already added a "log4j.properties" file under src/main/resources as -
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
I also added the following dependency in pom.xml -
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.26</version>
</dependency>
Artifacts used -
Eclipse - Version: 2020-06 (4.16.0)
Maven artifact id - maven-archetype-quickstart - v1.4
Selenium version - 3.141.59
The error is not related to log4j; it is about the missing class org.reactivestreams.Publisher. Add the following Maven dependency to get it:
<!-- https://mvnrepository.com/artifact/org.reactivestreams/reactive-streams -->
<dependency>
<groupId>org.reactivestreams</groupId>
<artifactId>reactive-streams</artifactId>
<version>1.0.3</version>
</dependency>
Make sure to update the project after adding the dependency.
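If in doubt whether the jar actually landed on the classpath after updating the project, a tiny check class (hypothetical, just for verification) can confirm it:

public class ClasspathCheck {
    public static void main(String[] args) {
        try {
            // Resolves only if reactive-streams is on the classpath
            Class.forName("org.reactivestreams.Publisher");
            System.out.println("org.reactivestreams.Publisher found");
        } catch (ClassNotFoundException e) {
            System.out.println("reactive-streams is still missing: " + e);
        }
    }
}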

In what version of log4j did %c{1.} become valid?

I would like to use the following Conversion Pattern
%d{yyyy-MM-dd HH:mm:ss} [%t] %-5p %c{1.}:%L - %m%n
Which produces output like
2016-06-08 10:29:40 [http-nio-8080-exec-8] DEBUG h.d.h.l.l.s.w.f.MyClass:27 - This is a debug message.
2016-06-08 10:29:40 [http-nio-8080-exec-8] INFO h.d.h.l.l.s.w.f.MyClass:22 - This is an info message.
2016-06-08 10:29:40 [http-nio-8080-exec-8] WARN h.d.h.l.l.s.w.f.MyClass:33 - This is a warn message.
2016-06-08 10:29:40 [http-nio-8080-exec-8] ERROR h.d.h.l.l.s.w.f.MyClass:39 - This is an error message.
2016-06-08 10:29:40 [http-nio-8080-exec-8] FATAL h.d.h.l.l.s.w.f.MyClass:45 - This is a fatal message.
However, when I run my tests and log4j loads my configuration file, I get the error message
log4j:ERROR Category option "1." not a decimal integer.
Log4j and slf4j are set up in my POM with
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.19</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.19</version>
<scope>runtime</scope>
</dependency>
What version of log4j do I need for 1. to be a valid Category option?
I was using PatternLayout, not EnhancedPatternLayout.
%c{1.} is only available in EnhancedPatternLayout.
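For illustration, a minimal log4j.properties sketch of the switch (assuming log4j 1.2.16+, where org.apache.log4j.EnhancedPatternLayout is part of the core jar):

log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
# EnhancedPatternLayout understands %c{1.}; plain PatternLayout rejects it
log4j.appender.stdout.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%t] %-5p %c{1.}:%L - %m%n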

Error while trying to connect to a Cassandra database using Spark Streaming

I'm working on a project that uses Spark Streaming, Apache Kafka and Cassandra.
I use the Spark Streaming Kafka integration. In Kafka I have a producer that sends data using this configuration:
props.put("metadata.broker.list", KafkaProperties.ZOOKEEPER);
props.put("bootstrap.servers", KafkaProperties.SERVER);
props.put("client.id", "DemoProducer");
where ZOOKEEPER = localhost:2181, and SERVER = localhost:9092.
Once I send data I can receive and consume it with Spark. My Spark configuration is:
SparkConf sparkConf = new SparkConf().setAppName("org.kakfa.spark.ConsumerData").setMaster("local[4]");
sparkConf.set("spark.cassandra.connection.host", "localhost");
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));
After that I try to store this data in the Cassandra database, but when I open a session with:
CassandraConnector connector = CassandraConnector.apply(jssc.sparkContext().getConf());
Session session = connector.openSession();
I get the following error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:220)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1231)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:334)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:182)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:70)
at org.kakfa.spark.ConsumerData.main(ConsumerData.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
As for Cassandra, I'm using the default configuration:
start_native_transport: true
native_transport_port: 9042
- seeds: "127.0.0.1"
cluster_name: 'Test Cluster'
rpc_address: localhost
rpc_port: 9160
start_rpc: true
I can connect to Cassandra from the command line using cqlsh localhost and get the following message:
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.5 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.
cqlsh>
I used nodetool status too, which shows me this:
http://pastebin.com/ZQ5YyDyB
To run Cassandra I invoke bin/cassandra -f.
What I am trying to run is this:
try (Session session = connector.openSession()) {
System.out.println("dentro del try");
session.execute("DROP KEYSPACE IF EXISTS test");
System.out.println("dentro del try - 1");
session.execute("CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
System.out.println("dentro del try - 2");
session.execute("CREATE TABLE test.users (id TEXT PRIMARY KEY, name TEXT)");
System.out.println("dentro del try - 3");
}
My pom.xml file looks like this:
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.6.0-M1</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.6.0-M2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
<version>20160212</version>
</dependency>
</dependencies>
I have no idea why I can't connect to Cassandra from Spark. Is my configuration bad, or what am I doing wrong?
Thank you!
com.datastax.driver.core.exceptions.InvalidQueryException:
unconfigured table schema_keyspaces)
That error indicates an old driver with a new Cassandra version. Looking at the POM file, we find the spark-cassandra-connector dependency declared twice.
One uses version 1.6.0-M2 (good) and the other 1.1.0-alpha2 (old).
Remove the old 1.1.0-alpha2 dependencies from your POM:
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.1.0-alpha2</version>
</dependency>
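After removing them, the connector section of the POM keeps only the newer entries already present in the question:

<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.6.0-M2</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.6.0-M1</version>
</dependency>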

MongoDB driver - how to hide debug logging with log4j?

The MongoDB Java driver has been verbose since v3.x.x, and it's not optimal to store a lot of logs like this. I would like to hide some of them.
Here are my dependencies (Maven / pom.xml):
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongo-java-driver</artifactId>
<version>3.0.1</version>
</dependency>
<!-- LOGS -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>jcl-over-slf4j</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>jul-to-slf4j</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.5</version>
</dependency>
I want to hide MongoDB Java driver log lines like these:
2015-05-16 10:43:22 023 DEBUG command:56 - Sending command {count : BsonString{value='skydatas'}} to database rizze on connection [connectionId{localValue:2, serverValue:9}] to server 127.0.0.1:27017
2015-05-16 10:43:22 041 DEBUG command:56 - Command execution completed
2015-05-16 10:43:22 042 DEBUG command:56 - Sending command {count : BsonString{value='skydatas'}} to database rizze on connection [connectionId{localValue:2, serverValue:9}] to server 127.0.0.1:27017
2015-05-16 10:43:22 060 DEBUG command:56 - Command execution completed
2015-05-16T10:49:34.448Z DEBUG Checking status of 127.0.0.1:27017 | | cluster:56
2015-05-16T10:49:34.452Z DEBUG Updating cluster description to {type=STANDALONE, servers=[{address=127.0.0.1:27017, type=STANDALONE, roundTripTime=1,3 ms, state=CONNECTED}] | | cluster:56
Thanks
You should be able to do this from your log4j configuration file.
The first thing I would do is temporarily add the fully-qualified class name to your log4j appender format (%C if you're using a formatter based on PatternLayout). That gives you the package and class names of the source of the logging statements you want to exclude.
Then follow the example from the log4j manual to raise the logging level of the specific packages/classes you don't want to see (search that page for text from the snippet below for more context):
# Print only messages of level WARN or above in the package com.foo.
log4j.logger.com.foo=WARN
Don't forget to take the %C out of your pattern when you've excluded all the packages you don't want to see.
Here is the log4j configuration I'm using:
# Log4J logger file
log = ./log4j
log4j.rootLogger = WARN, CONSOLE
log4j.logger.com.rizze.db=INFO
# Direct log messages to stdout
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Target=System.out
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss SSS} %-5p %m / %c{1}:%L %n
What this workaround does:
- shows logs with level >= INFO for the package com.rizze.db
- shows all other logs with level >= WARN
It works for me.
Add this line to your log4j configuration to fix the problem:
log4j.logger.org.mongodb.driver=ERROR
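If you prefer to set this in code rather than in the properties file, the log4j 1.x API offers an equivalent (a sketch to run once at startup; the class name is hypothetical):

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class MongoLogSilencer {
    public static void main(String[] args) {
        // Same effect as log4j.logger.org.mongodb.driver=ERROR
        Logger.getLogger("org.mongodb.driver").setLevel(Level.ERROR);
    }
}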

Control myBatis log destination file and level

I'm working on a Spring-based application that communicates with a SQL database through myBatis. Everything works except the log destination.
For some reason myBatis logs to the wrong file. Could you help me figure out why? Here's my configuration:
log4j.properties:
### Appenders
# Console appender
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.Threshold=WARN
log4j.appender.console.layout=org.apache.log4j.PatternLayout
# Application file appender
log4j.appender.main=org.apache.log4j.RollingFileAppender
log4j.appender.main.File=logs/app.log
log4j.appender.main.layout=org.apache.log4j.PatternLayout
log4j.appender.main.MaxFileSize=10MB
log4j.appender.main.MaxBackupIndex=15
# Libs file appender
log4j.appender.libs=org.apache.log4j.RollingFileAppender
log4j.appender.libs.File=logs/app_libs.log
log4j.appender.libs.layout=org.apache.log4j.PatternLayout
log4j.appender.libs.MaxFileSize=10MB
log4j.appender.libs.MaxBackupIndex=15
### Loggers & additivity
# Application
log4j.additivity.our.company.basepackage=false
log4j.logger.our.company.basepackage=TRACE,main,console
# Root logger
log4j.rootLogger=INFO,libs
pom.xml snippet
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>jcl-over-slf4j</artifactId>
<version>1.7.5</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
<scope>runtime</scope>
</dependency>
I find TRACE-level rows from mybatis ("org.apache.ibatis.logging.jdbc.BaseJdbcLogger.trace(BaseJdbcLogger.java:145)") in the file "app.log".
I excluded commons-logging from spring-core, and the dependency tree shows no commons-logging. Why isn't mybatis logging to the file "app_libs.log"? Why does mybatis not respect the specified level?
Thank you.
Edit 1
The code with which the database gets queried has been generated with mybatis-generator, and the generated code lives somewhere under the package "our.company.basepackage".
Since posting the question I hadn't stopped to think about this until now: I found the reason for this behaviour.
The decisive hint is that "the code has been generated with mybatis-generator", and it was generated in a subpackage of the application: this means that the *Mapper classes used to query the database effectively live in the application package, so their logs are treated as logs of "our.company.basepackage" and not as logs of "org.apache.ibatis".
The "org.apache.ibatis" in the log rows was misleading me.
After this small insight, I added the following to my log4j.properties:
log4j.additivity.our.company.basepackage.persistence.mybatis=false
log4j.logger.our.company.basepackage.persistence.mybatis=INFO,libs
With these 2 more lines, everything works properly, i.e. no more "org.apache.ibatis" rows in app.log.
I hope this can be useful to someone else using mybatis-generator.
