I want to see what queries the MongoDB Java driver produces, but I'm not able to do that.
Using information from the official documentation, I can only see in the log that an update operation executes, but I don't see the actual query for that operation.
You can set the logger level for org.mongodb to DEBUG and your Java driver will emit detailed logging like this:
2018-01-18 16:51:07|[main]|[NA]|INFO |org.mongodb.driver.connection|Opened connection [connectionId{localValue:2, serverValue:39}] to localhost:27017
2018-01-18 16:51:07|[main]|[NA]|DEBUG|org.mongodb.driver.protocol.insert|Inserting 1 documents into namespace stackoverflow.sample on connection [connectionId{localValue:2, serverValue:39}] to server localhost:27017
2018-01-18 16:51:07|[main]|[NA]|DEBUG|org.mongodb.driver.protocol.insert|Insert completed
2018-01-18 16:51:07|[main]|[NA]|DEBUG|org.mongodb.driver.protocol.command|Sending command {find : BsonString{value='sample'}} to database stackoverflow on connection [connectionId{localValue:2, serverValue:39}] to server localhost:27017
2018-01-18 16:51:07|[main]|[NA]|DEBUG|org.mongodb.driver.protocol.command|Command execution completed
2018-01-18 16:51:07|[main]|[NA]|DEBUG|org.mongodb.driver.protocol.command|Sending command {findandmodify : BsonString{value='sample'}} to database stackoverflow on connection [connectionId{localValue:2, serverValue:39}] to server localhost:27017
2018-01-18 16:51:07|[main]|[NA]|DEBUG|org.mongodb.driver.protocol.command|Command execution completed
In the above log output you can see the details of a query submitted by the client:
org.mongodb.driver.protocol.command|Sending command {find : BsonString{value='sample'}}
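For example, assuming Logback is the SLF4J backend the driver logs through (an assumption; adapt this for Log4j or java.util.logging), you could raise that level programmatically with a sketch like this:
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class MongoDriverLogging {
    public static void main(String[] args) {
        // The cast is only valid when Logback is the bound SLF4J implementation
        Logger mongoLogger = (Logger) LoggerFactory.getLogger("org.mongodb");
        mongoLogger.setLevel(Level.DEBUG);
        // ... any driver operations run after this point are logged at DEBUG
    }
}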
Alternatively, you can enable profiling on the server side ...
db.setProfilingLevel(2)
... causes the MongoDB profiler to collect data for all operations against that database.
The profiler output (which includes the query submitted by the client) is written to the system.profile collection in whichever database profiling has been enabled.
More details are in the docs, but the short summary is:
// turn up the logging
db.setProfilingLevel(2)
// ... run some commands
// find all profiler documents, most recent first
db.system.profile.find().sort( { ts : -1 } )
// turn down the logging
db.setProfilingLevel(0)
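The same profiler documents can also be read from Java; here is a minimal sketch, assuming driver 3.7+ and the stackoverflow database from the log output above:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Sorts;
import org.bson.Document;

public class ProfilerDump {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("stackoverflow");
            MongoCollection<Document> profile = db.getCollection("system.profile");
            // Most recent profiler documents first, mirroring the shell query above
            for (Document doc : profile.find().sort(Sorts.descending("ts")).limit(5)) {
                System.out.println(doc.toJson());
            }
        }
    }
}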
If you're using Spring Boot 1.5.x (I'm on 1.5.19) you'll need to override the version of org.mongodb:mongodb-driver to at least version 3.7.0 to get the additional info in the logs.
See this ticket for more details: https://jira.mongodb.org/browse/JAVA-2698
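Assuming a Maven build (an assumption; Gradle has an equivalent mechanism), the override is a one-property change, since mongodb.version is the property the Spring Boot 1.5.x BOM uses for the driver:
<properties>
    <!-- Override the MongoDB driver version managed by Spring Boot 1.5.x -->
    <mongodb.version>3.7.0</mongodb.version>
</properties>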
Currently I'm struggling with Neo4j/GrapheneDB (Dev free plan) on the Heroku platform.
Launching my app locally via "heroku local" works fine; it connects (Neo4j Java Driver 4) to a Neo4j 3.5.18 (running from the Docker image "neo4j:3.5").
My app is built using the Micronaut framework, using its Neo4j support. Launching my app on the Heroku platform succeeds; I'm using the Gradle Heroku plugin for this task.
But accessing the database with business operations (and health checks) fails with an exception like this:
INFO Driver - Direct driver instance 1523082263 created for server address hobby-[...]ldel.dbs.graphenedb.com:24787
WARN RetryLogic - Transaction failed and will be retried in 1032ms
org.neo4j.driver.exceptions.ServiceUnavailableException: Connection to the database terminated. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0.
at org.neo4j.driver.internal.util.Futures.blockingGet(Futures.java:143)
at org.neo4j.driver.internal.InternalSession.beginTransaction(InternalSession.java:163)
at org.neo4j.driver.internal.InternalSession.lambda$transaction$4(InternalSession.java:147)
at org.neo4j.driver.internal.retry.ExponentialBackoffRetryLogic.retry(ExponentialBackoffRetryLogic.java:101)
at org.neo4j.driver.internal.InternalSession.transaction(InternalSession.java:146)
at org.neo4j.driver.internal.InternalSession.readTransaction(InternalSession.java:112)
at org.neo4j.driver.internal.InternalSession.readTransaction(InternalSession.java:106)
at PersonController.logInfoOf(PersonController.java:57)
at PersonController.<init>(PersonController.java:50)
at $PersonControllerDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1814)
[...]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
Suppressed: org.neo4j.driver.internal.util.ErrorUtil$InternalExceptionCause: null
at org.neo4j.driver.internal.util.ErrorUtil.newConnectionTerminatedError(ErrorUtil.java:52)
at org.neo4j.driver.internal.async.connection.HandshakeHandler.channelInactive(HandshakeHandler.java:81)
[...]
at org.neo4j.driver.internal.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 common frames omitted
I'm sure the login credentials from the OS environment variables GRAPHENEDB_BOLT_URL, GRAPHENEDB_BOLT_USER, and GRAPHENEDB_BOLT_PASSWORD are injected into the app correctly; I've verified it with some debug log statements:
State changed from starting to up
INFO io.micronaut.runtime.Micronaut - Startup completed in 2360ms. Server Running: http://localhost:7382
INFO Application - Neo4j Bolt URIs: [bolt://hobby-[...]ldel.dbs.graphenedb.com:24787]
INFO Application - Neo4j Bolt encrypted? false
INFO Application - Neo4j Bolt trust strategy: TRUST_SYSTEM_CA_SIGNED_CERTIFICATES
INFO Application - Changed trust strategy to: TRUST_ALL_CERTIFICATES
INFO Application - Env.: GRAPHENEDB_BOLT_URL='bolt://hobby-[...]ldel.dbs.graphenedb.com:24787'
INFO Application - Env.: GRAPHENEDB_BOLT_USER='app1[...]hdai'
INFO Application - Env.: GRAPHENEDB_BOLT_PASSWORD of length 31
I've also tried restarting the GrapheneDB instance via the Heroku plugin website, but with the same negative result.
What's going wrong here? Are there any ways to further nail down the root cause?
Thanks
Christian
I had a closer look at this and it seems that you need the driver encryption turned on for the GrapheneDB instances. This can be configured in application.yml as below:
neo4j:
  encryption: true
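If you construct the driver yourself rather than through Micronaut's configuration, the equivalent with the driver 4.x API would look roughly like this (reading the same environment variables as the question):
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Config;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;

public class Neo4jConnect {
    public static void main(String[] args) {
        String uri = System.getenv("GRAPHENEDB_BOLT_URL");
        String user = System.getenv("GRAPHENEDB_BOLT_USER");
        String password = System.getenv("GRAPHENEDB_BOLT_PASSWORD");
        // withEncryption() is the setting that was missing above
        Config config = Config.builder().withEncryption().build();
        try (Driver driver = GraphDatabase.driver(uri, AuthTokens.basic(user, password), config)) {
            driver.verifyConnectivity();
            System.out.println("Connected successfully");
        }
    }
}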
For reference, here is a sample project https://github.com/aldrinm/micronaut-neo4j-heroku
I am connecting to a HANA DB via ngdbc.jar. The connection is made properly, but after running queries 3-4 times the connection to the HANA DB is lost. When I restart my Java server it works again for another 3-4 queries. Can anyone help?
Error message:
WARN [org.hibernate.util.JDBCExceptionReporter] (http--0.0.0.0-8080-6) SQL Error: -708, SQLState: 08006
ERROR [org.hibernate.util.JDBCExceptionReporter] (http--0.0.0.0-8080-6) Data receive failed [Connection reset].
INFO [com.ultimatix.controller.MetricsController] (http--0.0.0.0-8080-6) context setMonthFreezeDateorg.hibernate.exception.JDBCConnectionException: could not execute query
ERROR [org.hibernate.transaction.JDBCTransaction] (http--0.0.0.0-8080-6) JDBC rollback failed: com.sap.db.jdbc.exceptions.jdbc40.SQLNonTransientConnectionException: Connection to database server lost; check server and network status [System error: Socket closed]
I can see you are using Hibernate, based on your log.
Can you elaborate a little bit on your stack?
As @RC said, you should consider connection pooling instead of opening direct connections, if that is what you are doing in your Java server.
Maybe you are keeping the connection open for too long and it times out.
These are all guesses until you can share logs or sample code.
One more thing, related to the ngdbc driver only: there is a "reconnect" connection property, which by default is set to false.
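A sketch of enabling it when opening a connection (host, port, and credentials here are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class HanaReconnectDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "MYUSER");
        props.setProperty("password", "MyPassword");
        // ngdbc-specific property; defaults to false
        props.setProperty("reconnect", "true");
        try (Connection conn = DriverManager.getConnection("jdbc:sap://hanahost:30015", props)) {
            System.out.println("Connected: " + conn.isValid(5));
        }
    }
}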
Regards
I want to test pulling data from Apache HBase with a Java application. The application will use SQL-like queries via JDBC to Apache Phoenix.
I've set up my Hadoop "cluster" on one machine using Ambari and the HortonWorks HDP 2.5 platform. I've also Kerberized the environment using Ambari's wizard, where my KDC is a separate machine running Windows Active Directory.
Ambari shows no errors, and I am able to use sqlline.py to successfully make SQL-like calls to HBase through Phoenix. I set up some example tables this way (cf. HortonWorks Phoenix & ODBC tutorial, although I had to kinit etc. first).
However, I am having problems creating a JDBC datasource to be used by the Java application. In my case, I am planning to host the webapp on WildFly 10.1 and I am developing with Eclipse JEE with the JBoss Tools plugin.
These are the steps I used to create the datasource:
Datasource Explorer > Database Connections > New...
Connection Profile: Generic JDBC
URL: jdbc:phoenix:hdfs.eaa.local:2181/hbase-secure:HTTP/hbase.eaa.local@EAA.LOCAL:jboss.server.temp.dir/spnego.service.keytab
Username: hbase (I'm unsure what to put here)
Driver: I've created a new driver of the type "Generic JDBC Driver", and I had to add JAR files for all of the dependencies of phoenix-core-[version].jar. The Driver Class is org.apache.phoenix.jdbc.PhoenixDriver.
I got the connection string from an extant post in the HortonWorks community, which is why it includes the Kerberos principal and keytab used for the connection.
When I try to test the datasource connection, it churns for about 5 minutes before spitting out an error message (after something like 35 attempts). The client returns Java exceptions saying that the sockets are in a "closing state", and the ZooKeeper logs are less helpful:
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560217 with negotiated timeout 40000 for client /192.168.40.3:52674
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43860
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:43860
INFO [Thread-1448:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43860 (no session established for client)
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /192.168.40.41:43922
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560218 with negotiated timeout 40000 for client /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@118] - Successfully authenticated client: authenticationID=hbase/hdfs.eaa.local@EAA.LOCAL; authorizationID=hbase/hdfs.eaa.local@EAA.LOCAL.
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@134] - Setting authorizedID: hbase
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@964] - adding SASL authorization for authorizationID: hbase
INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43922 which had sessionid 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:44008
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:44008
INFO [Thread-1449:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:44008 (no session established for client)
NB. 192.168.40.3 is the VPN server, which my host machine is using to tunnel into the environment with the Hadoop cluster. 192.168.40.41 is the machine running the cluster, hdfs.eaa.local.
There are plenty of accepted socket connections which are then immediately closed. Occasionally the client authenticates successfully (so I'm confident in my Kerberos settings) but then there is a session termination immediately afterward.
I've also tried to deploy the Datasource directly in WildFly with jboss-cli and standalone.xml and module.xml modifications. But I get lots of problems with missing dependencies that I'm not sure how to resolve without creating a new module for each required JAR (and there are a lot) for phoenix-core-[version].jar. I followed this guide.
What can I do to fix the issue or diagnose further? I've been pulling my hair out for a couple of days now.
You need to add hbase-site.xml and core-site.xml to your classpath.
See How to connect to a Kerberos-secured Apache Phoenix data source with WildFly? for more information.
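Once those files are on the classpath, a standalone smoke test outside WildFly can help isolate the problem. This is only a sketch: the principal mirrors the question, the keytab path is a placeholder, and it assumes the Phoenix client JAR is also on the classpath:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixSmokeTest {
    public static void main(String[] args) throws Exception {
        // URL parts: ZK quorum : ZK port : ZK parent node : principal : keytab path
        String url = "jdbc:phoenix:hdfs.eaa.local:2181:/hbase-secure:"
                + "HTTP/hbase.eaa.local@EAA.LOCAL:/path/to/spnego.service.keytab";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}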
I am having problems running an app I have developed on an EC2 instance. When I execute the .jar (java -jar app.jar), the Spring Boot app starts, but it fails when trying to connect to my MySQL RDS database. The thing is, when I run the app locally on my machine, it has no issues with the DB connection.
I have opened the port where the app is running (8090) and the MySQL port as well (3306) for inbound and outbound traffic.
This is the error I get:
2016-09-23 17:46:38.132 INFO 10161 --- [main] .t.TomcatEmbeddedServletContainerFactory : Server initialized with port: 8090
2016-09-23 17:46:38.604 INFO 10161 --- [main] o.apache.catalina.core.StandardService : Starting service Tomcat
2016-09-23 17:46:38.605 INFO 10161 --- [main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/7.0.54
2016-09-23 17:46:38.724 INFO 10161 --- [ost startStop 1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2016-09-23 17:46:38.725 INFO 10161 --- [ost startStop 1] o.s.web.context.ContextLoader: Root WebApplicationContext: initialization completed in 5028 ms
2016-09-23 17:48:48.476 ERROR 10161 --- [ost startStop 1] o.a.tomcat.jdbc.pool.ConnectionPool: Unable to create initial connections of pool.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
Any ideas how I can solve this problem?
Thank you very much for your help
Regards
Andres
From your description and log file, it's likely that network configuration is the cause here.
You might want to draw the network topology of your instances (region/availability zone, VPC, subnet, network acl, security group). This will be very helpful when you do more complex development work.
Here are some good references: VPC Introduction, Security in your VPC, and Scenarios for Accessing a DB Instance in a VPC.
I suggest the following actions for your troubleshooting:
Check the security group (SG) configuration of your EC2 instance and RDS instance.
You can check this by going to the EC2 Dashboard/RDS Dashboard -> click on an instance and look at the "Security Group" description, or click on the Settings icon (Show/Hide columns) and tick "Security Groups".
In the RDS SG configuration, make sure you have enabled access from the EC2 instance's SG to port 3306. You can do this by putting the EC2 instance's SG ID into the Source field of the rule, as a "Custom IP" value. See the first scenario in the above reference for more detail.
Use the mysql command line client to test the connection between the EC2 instance and RDS.
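For the same check in plain JDBC from the EC2 instance (endpoint, database, and credentials here are placeholders), a minimal sketch:
import java.sql.Connection;
import java.sql.DriverManager;

public class RdsConnectTest {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://mydb.xxxxxxxx.us-east-1.rds.amazonaws.com:3306/mydatabase";
        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword")) {
            // A timeout here (rather than "access denied") points to SG/network rules
            System.out.println("Connected: " + conn.isValid(5));
        }
    }
}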
Hope it helps.
You need to perform the following steps:
1) Go to your EC2 instance and find the security group you want to grant access to in RDS.
2) Go to your RDS security group and select inbound rules.
Select All TCP and add your sg-xxx (security group).
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html
I recently started learning Cassandra and am going through online tutorials for Cassandra with the DataStax Java driver. I am simply trying to connect to a localhost node on my laptop.
Setup details:
OS: Windows 7
Cassandra version: 2.1-SNAPSHOT
DataStax Java driver version: 3.1.0
I was able to connect to the local node using the cqlsh and cassandra-cli clients.
I can also see the default keyspaces system and system_traces.
Below is the Cassandra server log:
INFO 12:12:51 Node localhost/127.0.0.1 state jump to normal
INFO 12:12:51 Startup completed! Now serving reads.
INFO 12:12:51 Starting listening for CQL clients on /0.0.0.0:9042...
INFO 12:12:51 Binding thrift service to /0.0.0.0:9160
INFO 12:12:51 Using TFramedTransport with a max frame size of 15728640 bytes.
INFO 12:12:51 Using synchronous/threadpool thrift server on 0.0.0.0 : 9160
INFO 12:12:51 Listening for thrift clients...
I am trying the simple code below:
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Metadata metadata = cluster.getMetadata();
This code throws the exception below (part of the trace):
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured columnfamily schema_usertypes))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
I have been through all the previously asked questions. Most of the answers suggest changing the configuration in the cassandra.yaml file.
My cassandra.yaml configuration is:
start_native_transport: true
native_transport_port: 9042
listen_address: localhost
rpc_address: 0.0.0.0
rpc_port: 9160
Most of the answers suggest using the actual IP address of the machine for rpc_address, which I tried, but it did not work.
This page lists the compatibility of the DataStax Java drivers with Cassandra versions, so I changed the driver version to 2.1.1 (as I am using Cassandra 2.1), but it did not work.
Please suggest what could be wrong.
The error with schema_usertypes looks like the driver is trying to query a table that isn't there, maybe related to this Jira ticket.
You say you are running a 2.1-SNAPSHOT of Cassandra? Try Cassandra 2.1.15. Something seems off on your Cassandra node; the driver is able to talk to your cluster, since it tries to look up data inside schema_usertypes.
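Once the driver and server versions line up, a minimal sanity check with the 2.1-era driver API (a sketch of the same code from the question, with cleanup added) is:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;

public class CassandraConnectTest {
    public static void main(String[] args) {
        // Cluster implements Closeable, so try-with-resources releases the connections
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build()) {
            Metadata metadata = cluster.getMetadata();
            System.out.printf("Connected to cluster: %s%n", metadata.getClusterName());
            for (Host host : metadata.getAllHosts()) {
                System.out.printf("Host %s is in datacenter %s%n", host.getAddress(), host.getDatacenter());
            }
        }
    }
}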