is there any easy way to turn on query logging on cassandra through xml configuration? I'm using namespace:
xmlns:cassandra="http://www.springframework.org/schema/data/cassandra"
but I can't find any suitable solution. I tried to turn on tracing through cqlsh, but it doesn't work for my app.
I also tried adding the line:
<logger name="com.datastax.driver.core.QueryLogger.NORMAL" level="TRACE" />
But that doesn't work either.
My versions:
spring-data-cassandra-1.4.0
cassandra: 2.1.5
Add a QueryLogger @Bean and get the Cluster @Autowired in:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.QueryLogger;
import org.springframework.context.annotation.Bean;

@Bean
public QueryLogger queryLogger(Cluster cluster) {
    // Build a QueryLogger with default settings and register it on the cluster
    QueryLogger queryLogger = QueryLogger.builder().build();
    cluster.register(queryLogger);
    return queryLogger;
}
(+ obviously configure QueryLogger.Builder as required).
Don't forget to set log levels to DEBUG/TRACE in your application.yml:
logging.level.com.datastax.driver.core.QueryLogger.NORMAL: DEBUG
logging.level.com.datastax.driver.core.QueryLogger.SLOW: TRACE
Voilà!
Please check out this link and verify that you added the query logger to your cluster definition as stated:
Cluster cluster = ...
QueryLogger queryLogger = QueryLogger.builder(cluster)
        .withConstantThreshold(...)
        .withMaxQueryStringLength(...)
        .build();
cluster.register(queryLogger);
Let me know if it helped.
If you are using Spring Data Cassandra 2.4+, QueryLogger is no longer available; it was replaced with RequestTracker, which can be configured in application.yml or overridden depending on your needs.
The Java driver provides a RequestTracker interface. You can specify an implementation of your own or use the provided RequestLogger implementation by configuring the properties in the datastax-java-driver.advanced.request-tracker namespace. The RequestLogger tracks every query your application executes and has options to enable logging for successful, failed, and slow queries. Use the slow query logger to identify queries that do not meet your defined performance expectations.
Configuration:
datastax-java-driver.advanced.request-tracker {
  class = RequestLogger
  logs {
    # Whether to log successful requests.
    success.enabled = true
    slow {
      # The threshold to classify a successful request as "slow". If this is unset, all
      # successful requests will be considered as normal.
      threshold = 1 second
      # Whether to log slow requests.
      enabled = true
    }
    # Whether to log failed requests.
    error.enabled = true
    # The maximum length of the query string in the log message. If it is longer than that, it
    # will be truncated.
    max-query-length = 500
    # Whether to log bound values in addition to the query string.
    show-values = true
    # The maximum length for bound values in the log message. If the formatted representation of
    # a value is longer than that, it will be truncated.
    max-value-length = 50
    # The maximum number of bound values to log. If a request has more values, the list of
    # values will be truncated.
    max-values = 50
    # Whether to log stack traces for failed queries. If this is disabled, the log will just
    # include the exception's string representation (generally the class name and message).
    show-stack-traces = true
  }
}
More details.
If you're using Spring Data for Apache Cassandra version 2.0 or higher, then you can use your logging configuration to activate CQL logging. Set the log level of org.springframework.data.cassandra.core.cql.CqlTemplate to DEBUG, no need to mess with QueryLogger:
-Dlogging.level.org.springframework.data.cassandra.core.cql.CqlTemplate=DEBUG
This can, of course, be permanently done in the application.properties.
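For reference, the equivalent permanent entry in application.properties is:
# Same setting as the -D system property above, made permanent
logging.level.org.springframework.data.cassandra.core.cql.CqlTemplate=DEBUG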
In my application's Java backend, I call a service with SoapUI to retrieve values (a SELECT) and receive the results.
Then, I perform an update that affects the previous result.
Then, in SoapUI, I call my service again with the same SELECT request in order to view my modifications, but the result is the same as during the first call: my modifications are not returned. I checked the database, and my modifications are present.
My application is built on JHipster 7.1 and my database is Postgres. I am using Ehcache with Hibernate, and I suspect the cache is the cause of this problem, because as soon as I restart my service the query returns the correct values.
Do you know why I am seeing this problem and how to solve it?
EDIT 1
Ehcache configuration
jpa:
  open-in-view: false
  properties:
    ...
    hibernate.cache.use_second_level_cache: false # <- changed from true (KO) to false (OK)
    hibernate.cache.use_query_cache: false
And here are the profiles:
Dev (current profile, with the problem):
jhipster:
  cache: # Cache configuration
    ehcache: # Ehcache configuration
      time-to-live-seconds: 3600 # By default objects stay 1 hour in the cache
      max-entries: 100 # Number of objects in each cache entry
Prod
jhipster:
  http:
    cache: # Used by the CachingHttpHeadersFilter
      timeToLiveInDays: 1461
  cache: # Cache configuration
    ehcache: # Ehcache configuration
      time-to-live-seconds: 3600 # By default objects stay 1 hour in the cache
      max-entries: 1000 # Number of objects in each cache entry
All my requests go through the REST API. I'm new to Ehcache, and I haven't put any cache-related code in my different services ...
EDIT 2
Example of my repository using JPA:
How do you update the database? Using a Spring Data repository? Normally, when a table is updated, Hibernate will clear the cache for that table, so everything should be in sync.
That won't be the case if you update in an out-of-Hibernate way. Also, look at your caching strategy for the entity: a READ_ONLY entity is not expected to change, so Hibernate won't evict it.
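For illustration, this is roughly what the entity-level strategy looks like (the entity name here is hypothetical): a READ_WRITE strategy lets Hibernate keep cached entries in sync with the updates it performs, while READ_ONLY assumes the data never changes.
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
// READ_WRITE evicts/updates cached entries when Hibernate modifies the row;
// with READ_ONLY, Hibernate assumes immutability and performs no eviction.
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Order {

    @Id
    private Long id;

    // ... other fields, getters and setters
}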
{
"mdc":{
},
"timestamp":"2021-05-11 11:48:04.055",
"level":"ERROR",
"logger":"org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner",
"message":"Failed to create topics",
"exception":"\"\norg.apache.kafka.common.errors.UnsupportedVersionException: Creating topics with default partitions/replication factor are only supported in CreateTopicRequest version 4+. The following topics need values for partitions and replicas:"
Please suggest what changes are required, as I am getting this error.
I see you are new here. You should always include version information and full stack trace for questions like this.
Upgrade your broker to >= 2.4 or set the binder replication factor property.
See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/commit/4161f875ede0446ab1d485730c51e6a2c5baa37a
Change default replication factor to -1
Binder now uses a default value of -1 for replication factor signaling the
broker to use defaults. Users who are on Kafka brokers older than 2.4,
need to set this to the previous default value of 1 used in the binder.
In either case, if there is an admin policy that requires replication factor > 1,
then that value must be used instead.
Overriding the default replication factor (-1) with a non-negative value fixed the issue for me.
spring.cloud.stream.kafka.binder.replication-factor=1
For application.yaml file use:
spring.cloud.stream.kafka.binder.replicationFactor: 1
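Or, equivalently, in nested YAML form:
spring:
  cloud:
    stream:
      kafka:
        binder:
          replicationFactor: 1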
Change the code to the following. The fluent API allows you to specify partitions and replicas.
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.TopicBuilder;

@Bean
NewTopic topicBytes() {
    return TopicBuilder.name("reflectoring-bytes").partitions(1).replicas(1).build();
}
In a Spring Boot application, or in general, does Tomcat have a default thread pool configured?
If we do not configure anything, will Tomcat start a new thread for each request, and will the thread be destroyed once the request finishes?
And if a thread pool is configured, will a particular thread serve many requests, whenever the container picks that thread from the pool?
Here are the configs of the embedded Tomcat in Spring Boot:
server.tomcat.accept-count=100 # Maximum queue length for incoming connection requests when all possible request processing threads are in use.
server.tomcat.accesslog.buffered=true # Whether to buffer output such that it is flushed only periodically.
server.tomcat.accesslog.directory=logs # Directory in which log files are created. Can be absolute or relative to the Tomcat base dir.
server.tomcat.accesslog.enabled=false # Enable access log.
server.tomcat.accesslog.file-date-format=.yyyy-MM-dd # Date format to place in the log file name.
server.tomcat.accesslog.pattern=common # Format pattern for access logs.
server.tomcat.accesslog.prefix=access_log # Log file name prefix.
server.tomcat.accesslog.rename-on-rotate=false # Whether to defer inclusion of the date stamp in the file name until rotate time.
server.tomcat.accesslog.request-attributes-enabled=false # Set request attributes for the IP address, Hostname, protocol, and port used for the request.
server.tomcat.accesslog.rotate=true # Whether to enable access log rotation.
server.tomcat.accesslog.suffix=.log # Log file name suffix.
server.tomcat.additional-tld-skip-patterns= # Comma-separated list of additional patterns that match jars to ignore for TLD scanning.
server.tomcat.background-processor-delay=10s # Delay between the invocation of backgroundProcess methods. If a duration suffix is not specified, seconds will be used.
server.tomcat.basedir= # Tomcat base directory. If not specified, a temporary directory is used.
server.tomcat.internal-proxies=10\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|\\
        192\\.168\\.\\d{1,3}\\.\\d{1,3}|\\
        169\\.254\\.\\d{1,3}\\.\\d{1,3}|\\
        127\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|\\
        172\\.1[6-9]{1}\\.\\d{1,3}\\.\\d{1,3}|\\
        172\\.2[0-9]{1}\\.\\d{1,3}\\.\\d{1,3}|\\
        172\\.3[0-1]{1}\\.\\d{1,3}\\.\\d{1,3}|\\
        0:0:0:0:0:0:0:1|\\
        ::1 # Regular expression that matches proxies that are to be trusted.
server.tomcat.max-connections=10000 # Maximum number of connections that the server accepts and processes at any given time.
server.tomcat.max-http-header-size=0 # Maximum size in bytes of the HTTP message header.
server.tomcat.max-http-post-size=2097152 # Maximum size in bytes of the HTTP post content.
server.tomcat.max-threads=200 # Maximum amount of worker threads.
server.tomcat.min-spare-threads=10 # Minimum amount of worker threads.
server.tomcat.port-header=X-Forwarded-Port # Name of the HTTP header used to override the original port value.
server.tomcat.protocol-header= # Header that holds the incoming protocol, usually named "X-Forwarded-Proto".
server.tomcat.protocol-header-https-value=https # Value of the protocol header indicating whether the incoming request uses SSL.
server.tomcat.redirect-context-root=true # Whether requests to the context root should be redirected by appending a / to the path.
server.tomcat.remote-ip-header= # Name of the HTTP header from which the remote IP is extracted. For instance, `X-FORWARDED-FOR`.
server.tomcat.resource.cache-ttl= # Time-to-live of the static resource cache.
server.tomcat.uri-encoding=UTF-8 # Character encoding to use to decode the URI.
server.tomcat.use-relative-redirects= # Whether HTTP 1.1 and later location headers generated by a call to sendRedirect will use relative or absolute redirects.
As you can see from the default values, the minimum number of worker threads is 10, the maximum number of worker threads is 200, and the maximum queue length for incoming connection requests when all request processing threads are in use is 100.
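If you need different limits, these can be overridden in application.properties; the numbers below are purely illustrative:
# Grow the worker pool and the accept queue (illustrative values only)
server.tomcat.max-threads=400
server.tomcat.min-spare-threads=20
server.tomcat.accept-count=200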
Yes, Spring Boot uses an embedded Tomcat server, and you can modify some of its configs in application.yml or application.properties. By default it has 200 threads (see the Spring docs):
# EMBEDDED SERVER CONFIGURATION (ServerProperties)
server.port=8080
server.address= # bind to a specific NIC
server.session-timeout= # session timeout in seconds
server.context-path= # the context path, defaults to '/'
server.servlet-path= # the servlet path, defaults to '/'
server.tomcat.access-log-pattern= # log pattern of the access log
server.tomcat.access-log-enabled=false # is access logging enabled
server.tomcat.protocol-header=x-forwarded-proto # ssl forward headers
server.tomcat.remote-ip-header=x-forwarded-for
server.tomcat.basedir=/tmp # base dir (usually not needed, defaults to tmp)
server.tomcat.background-processor-delay=30; # in seconds
server.tomcat.max-threads = 0 # number of threads in protocol handler
server.tomcat.uri-encoding = UTF-8 # character encoding to use for URL decoding
We have a webserver and multiple users log in to it. We generally set the log level to ERROR or INFO. But sometimes, for debugging purposes, we need to see logs. There is a way to change the level at runtime, but this approach is not good under heavy traffic: important logs will be missed, and we also don't know for how long we would need to keep it that way. I have written a wrapper in log4j v1.2 which simply skips the level check if the user id belongs to some TestUsersList. So, it opens all logs for a particular user [a thread] only. A snippet is below:
public void trace(Object message) {
    Object diagValue = MDC.get(LoggerConstants.IS_ANALYZER_NUMBER);
    if (valueToMatch.equals(diagValue)) { // Some condition to check test number
        forcedLog(FQCN, Level.TRACE, message, null);
        return;
    }
    if (repository.isDisabled(Level.TRACE_INT))
        return;
    if (Level.TRACE.isGreaterOrEqual(this.getEffectiveLevel()))
        forcedLog(FQCN, Level.TRACE, message, null);
}
But now that I have moved to log4j2, I don't want to write this wrapper again. Is there any built-in functionality that log4j2 provides for this?
This can be done with filters. Add a logger to the configuration that logs all the messages you want, then add a ThreadContextMapFilter that has a KeyValuePair for each user you want to log.
Then put the user ids in the Thread Context within the code.
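A rough sketch of what that configuration could look like (the logger name, context key, and user ids here are hypothetical; the filter can also be attached to an appender instead):
<Configuration>
  <Appenders>
    <Console name="Console">
      <PatternLayout pattern="%d %p %X{userId} %c - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <!-- Logs at TRACE, but the filter only lets events through when the
         thread context contains one of the listed test user ids -->
    <Logger name="com.example.app" level="TRACE" additivity="false">
      <ThreadContextMapFilter onMatch="ACCEPT" onMismatch="DENY" operator="or">
        <KeyValuePair key="userId" value="testUser1"/>
        <KeyValuePair key="userId" value="testUser2"/>
      </ThreadContextMapFilter>
      <AppenderRef ref="Console"/>
    </Logger>
    <Root level="ERROR">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
In code, call ThreadContext.put("userId", userId) when the request starts and clear it (ThreadContext.clearMap()) when it ends.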
I want to debug a few of my application JDBC queries so I wanted to configure java.util.logging to dump the actual SELECT statements that were run against the database and the data bound to their parameters.
I already have java.util.logging configured to log other messages to a file. The setup code is as follows:
Handler fh = new FileHandler("file.log", true);
Logger logger = Logger.getLogger("");
fh.setFormatter(new GruposLogFormatter());
logger.addHandler(fh);
logger.setLevel(Level.ALL);
logger.info("==================================");
So, how can I configure java.util.logging to log JDBC queries to a file or sysout?
This answer barely touches the subject, but didn't help me.
If you are using Hibernate, the following configuration can be done to get the SQL queries printed:
How to print a query string with parameter values when using Hibernate
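In essence (a minimal sketch using Hibernate's standard settings; how you set logger levels depends on your logging framework):
# Hibernate configuration properties
hibernate.show_sql=true    # echo executed SQL
hibernate.format_sql=true  # pretty-print the SQL
To also see the values bound to the ? placeholders, raise the org.hibernate.type logger category to TRACE in your logging configuration.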
You can also use p6spy (http://sourceforge.net/projects/p6spy/) to intercept all queries and log them to a file.
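With recent p6spy versions, the usual setup looks roughly like this (a sketch; the database URL is a placeholder, adjust for your own datasource):
# Use the p6spy driver and prefix the real JDBC URL with p6spy:
jdbc.driverClassName=com.p6spy.engine.spy.P6SpyDriver
jdbc.url=jdbc:p6spy:postgresql://localhost:5432/mydb
p6spy then delegates to the real driver and writes every executed statement, with its bound parameters, to its log file (configurable in spy.properties).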