Authentication with MS SQL Server from Mac OS X - java

Based on this question I've built the following function, which seems to connect to the SQL Server but is unable to authenticate:
connect <- function(server, database, user, password){
  # Auto-install if necessary
  packages <- c("RJDBC")
  new.packages <- packages[!(packages %in% installed.packages()[, "Package"])]
  if (length(new.packages)) {
    install.packages(new.packages, dependencies = TRUE)
  }
  library(RJDBC)
  sql.java <- paste(find.package(package = 'mypackage'), '/sqljdbc4.jar', sep = "")
  drv <- JDBC("com.microsoft.sqlserver.jdbc.SQLServerDriver",
              sql.java,
              identifier.quote = "`")
  con.string <- paste("jdbc:sqlserver://", server, ";databaseName=", database, ";integratedSecurity=true;", sep = "")
  con <- dbConnect(drv, con.string, user, password)
}
# Feb 24, 2015 9:57:56 AM com.microsoft.sqlserver.jdbc.AuthenticationJNI <clinit>
# WARNING: Failed to load the sqljdbc_auth.dll cause : no sqljdbc_auth in java.library.path
# Error in .jcall(drv#jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1], :
# com.microsoft.sqlserver.jdbc.SQLServerException: This driver is not configured for integrated authentication. ClientConnectionId:5ac4ed9c-a4a8-4318-ba77-7f3fb2ed01b0
If I take ;integratedSecurity=true out of the connection string:
# Error in .jcall(drv#jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1], :
# com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for user 'domain\user'. ClientConnectionId:29035eb6-7b9f-4d9c-86b4-6d6760c61399
Does anyone know how I can handle authentication? It seems to work with RODBC on my Windows machine.
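The sqljdbc_auth.dll warning is the key: integrated authentication relies on a native library that Microsoft only ships for Windows, so on OS X the usual fallback is SQL Server authentication with a login created on the server. A minimal sketch of that approach (host, database, and credential values are placeholders, not values from the question):

```java
// Sketch: connect with SQL Server authentication instead of integrated
// security, since the native sqljdbc_auth library only exists for Windows.
public class SqlServerUrl {

    // Same URL as the R function builds, but without integratedSecurity=true,
    // so the driver uses the user/password passed to getConnection().
    public static String build(String server, String database) {
        return "jdbc:sqlserver://" + server + ";databaseName=" + database + ";";
    }

    public static void main(String[] args) {
        String url = build("myserver.example.com", "mydb");
        System.out.println(url);
        // With the sqljdbc4 driver on the classpath:
        // java.sql.Connection con =
        //     java.sql.DriverManager.getConnection(url, "sqlLogin", "sqlPassword");
        // Note: a 'domain\\user' login is a Windows account; SQL authentication
        // needs a SQL Server login created on the server instead.
    }
}
```

The "Login failed for user 'domain\user'" error after removing integratedSecurity=true fits this: a Windows domain account cannot be used as a SQL login.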

org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host Linux error that works on Windows

I have looked at many (too many) similar questions here and tried over and over, but I can't quite get this one resolved.
I have Java code that connects to a PostgreSQL server on a RHEL 8.2 box to query and return a value. When I run it on my Windows 10 laptop, it works with no issues. However, when I run it locally on the Linux box, it fails with the dreaded org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host.
The Java code is generated by an ETL tool called Talend, and it is run on Linux by RunDeck.
To make matters more confusing, I can log in on the Linux box as the postgres user and use the psql interface to connect to the db with the same host, port, db, and user I'm using in Java, without error:
psql -h [server ip] -p 5432 -d my_special_db -U my_db_user
I have checked the server settings, the pg_hba.conf file, the user permissions, and the log files, and tried changing all of the above to get some sort of result; all to no avail.
I have added to pg_hba.conf:
host my_special_db my_db_user [server ip]/32 md5
host my_special_db my_db_user [server ip with .0 as the last octet]/24 md5
host my_special_db root ::1/128 trust
host all all [server ip]/32 trust
I have tried multiple connection modifiers like:
sslfactory=org.postgresql.ssl.NonValidatingFactory&sslmode=prefer
or
sslmode=prefer
or
sslmode=require
or
no modifier at all
Please advise.
Details:
OS
NAME : Linux RHEL 8.2
VERSION : 4.18.0-372.16.1.el8_6.x86_64
JVM
IMPLEMENTATIONVERSION : 11.0.15+9-LTS
NAME : OpenJDK 64-Bit Server VM
VENDOR : Amazon.com Inc.
VERSION : 11.0.15
PostgreSQL
postgresql11-libs-11.7-1PGDG.rhel8.x86_64.rpm \
postgresql11-11.7-1PGDG.rhel8.x86_64.rpm \
postgresql11-server-11.7-1PGDG.rhel8.x86_64.rpm
Under server > properties > SSL
All Defaults:
SSL Mode: Prefer
Client Cert: [blank]
Client Cert Key: [blank]
Root Cert: [blank]
Cert Rev List: [blank]
SSL compression?: no
//////////////////
Under /etc/systemd/system/postgresql-11.service
[Unit]
Description=PostgreSQL 11 database server
Documentation=https://www.postgresql.org/docs/11/static/
After=syslog.target
After=network.target
[Service]
Type=notify
User=postgres
Group=postgres
# Note: avoid inserting whitespace in these Environment= lines, or you may
# break postgresql-setup.
# Location of database directory
Environment=PGDATA=[my data directory path]
# Where to send early-startup messages from the server (before the logging
# options of postgresql.conf take effect)
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog
# Disable OOM kill on the postmaster
OOMScoreAdjust=-1000
Environment=PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
Environment=PG_OOM_ADJUST_VALUE=0
ExecStartPre=[path to server]/pgsql-11/bin/postgresql-11-check-db-dir ${PGDATA}
ExecStart=[path to server]/pgsql-11/bin/postgres -D ${PGDATA}
#ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGINT
# Do not set any timeout value, so that systemd will not kill postmaster
# during crash recovery.
TimeoutSec=0
[Install]
WantedBy=multi-user.target
[Service]
Environment=PGDATA=[my data directory path]
//////////////////
//////////////////
Top of the pg_hba.conf file:
# TYPE DATABASE USER ADDRESS METHOD
# linux requires a 'local' entry
#local all all md5
local all all md5
# IPv4 local connections:
host all all 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::0/0 md5
# no ssl connection
hostnossl all all 0.0.0.0/0 trust
# IPv4 local connections:
hostnossl all all 127.0.0.1/32 md5
# IPv6 local connections:
hostnossl all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
# This Servers IP and Ranges as Talend doesn't use localhost
host postgres postgres [server ip]/32 md5
host postgres all [server ip with .0 as the last octet]/24 md5
host all postgres [server ip]/32 md5
host all all [server ip with .0 as the last octet]/24 md5
host my_special_db my_db_user [server ip]/32 md5
host my_special_db my_db_user [server ip with .0 as the last octet]/24 md5
host my_special_db root ::1/128 trust
host all all [server ip]/32 trust
//////////////////
//////////////////
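One thing worth keeping in mind with the file above: pg_hba.conf is evaluated top to bottom, and the first line whose type, database, user, and address all match decides the authentication method, so the broad `host all all 0.0.0.0/0 md5` near the top shadows every more specific line below it. A rough sketch of that first-match semantics (simplified: addresses are compared literally rather than by CIDR, and the hostssl/hostnossl vs. SSL-state interaction is ignored):

```java
public class HbaMatch {

    // Each rule is {type, database, user, address, method}, in file order.
    public static String methodFor(String[][] rules, String type, String db,
                                   String user, String addr) {
        for (String[] r : rules) {
            boolean dbOk   = r[1].equals("all") || r[1].equals(db);
            boolean userOk = r[2].equals("all") || r[2].equals(user);
            boolean addrOk = r[3].equals("0.0.0.0/0") || r[3].equals(addr);
            if (r[0].equals(type) && dbOk && userOk && addrOk) {
                return r[4]; // first match wins; later lines are never read
            }
        }
        return "no entry"; // what produces the "no pg_hba.conf entry" FATAL
    }

    public static void main(String[] args) {
        String[][] rules = {
            {"host", "all", "all", "0.0.0.0/0", "md5"},
            {"host", "my_special_db", "my_db_user", "10.0.0.5/32", "trust"},
        };
        // The specific trust line below the catch-all never takes effect:
        System.out.println(methodFor(rules, "host", "my_special_db",
                                     "my_db_user", "10.0.0.5/32")); // md5
    }
}
```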
Part of My Java Code:
conn_tDBInput_1 = (java.sql.Connection) globalMap.get("conn_tDBConnection_2");
java.sql.Statement stmt_tDBInput_1 = conn_tDBInput_1.createStatement();
String dbquery_tDBInput_1 = "\n(SELECT ID AS \"id\"\n, ATTRIBUTE AS \"attribute\"\n, DEFINITION AS \"definition\"\n, DESCRIPTION AS \"description\""
        + "\nFROM " + context.SEC_DB_SCHEMA + "." + context.SEC_DB_TABLE
        + "\nWHERE \"description\" = 'my_filter_here'\nLIMIT 1);\n\n";
globalMap.put("tDBInput_1_QUERY", dbquery_tDBInput_1);
java.sql.ResultSet rs_tDBInput_1 = null;
try {
    rs_tDBInput_1 = stmt_tDBInput_1.executeQuery(dbquery_tDBInput_1);
    java.sql.ResultSetMetaData rsmd_tDBInput_1 = rs_tDBInput_1.getMetaData();
    int colQtyInRs_tDBInput_1 = rsmd_tDBInput_1.getColumnCount();
    String tmpContent_tDBInput_1 = null;
    while (rs_tDBInput_1.next()) {
        nb_line_tDBInput_1++;
        if (colQtyInRs_tDBInput_1 < 1) {
            row1.ID = 0;
        } else {
            row1.ID = rs_tDBInput_1.getInt(1);
            if (rs_tDBInput_1.wasNull()) {
                throw new RuntimeException("Null value in non-Nullable column");
            }
        }
        if (colQtyInRs_tDBInput_1 < 2) {
            row1.ATTRIBUTE = null;
        } else {
            row1.ATTRIBUTE = routines.system.JDBCUtil.getString(rs_tDBInput_1, 2, false);
        }
        if (colQtyInRs_tDBInput_1 < 3) {
            row1.DEFINITION = null;
        } else {
            row1.DEFINITION = routines.system.JDBCUtil.getString(rs_tDBInput_1, 3, false);
        }
        if (colQtyInRs_tDBInput_1 < 4) {
            row1.DESCRIPTION = null;
        } else {
            row1.DESCRIPTION = routines.system.JDBCUtil.getString(rs_tDBInput_1, 4, false);
        }
        /**
         * [tDBInput_1 begin ] stop
         */
        /**
         * [tDBInput_1 main ] start
         */
        currentComponent = "tDBInput_1";
        tos_count_tDBInput_1++;
        /**
         * [tDBInput_1 main ] stop
         */
        /**
         * [tDBInput_1 process_data_begin ] start
         */
        currentComponent = "tDBInput_1";
        /**
         * [tDBInput_1 process_data_begin ] stop
         */
        /**
         * [tReplicate_1 main ] start
         */
        currentComponent = "tReplicate_1";
        if (execStat) {
            runStat.updateStatOnConnection(iterateId, 1, 1, "row1");
        }
        row5 = new row5Struct();
        row5.ID = row1.ID;
        row5.ATTRIBUTE = row1.ATTRIBUTE;
        row5.DEFINITION = row1.DEFINITION;
        row5.DESCRIPTION = row1.DESCRIPTION;
        globalMap.put("row5.ATTRIBUTE", row5.ATTRIBUTE);
        globalMap.put("row5.DEFINITION", row5.DEFINITION);
        // Set Reports DB values
        context.DB_HOST = [my_server_ip];
        context.DB_NAME = [my_db];
        context.DB_PORT = 5432;
        // This will have to be changed, as each tower will have its own 'reports' table
        context.DB_SCHEMA = "my_schema";
        context.DB_TABLE = "my_table";
        context.DB_USER = (String) (globalMap.get("row5.ATTRIBUTE"));
        context.DB_PWD = (String) (globalMap.get("row5.DEFINITION"));
        //------------
        currentComponent = "tDBConnection_1";
        int tos_count_tDBConnection_1 = 0;
        String dbProperties_tDBConnection_1 = "sslfactory=org.postgresql.ssl.NonValidatingFactory&sslmode=prefer";
        String url_tDBConnection_1 = "jdbc:postgresql://" + context.DB_HOST + ":" + context.DB_PORT + "/" + context.DB_NAME;
        if (dbProperties_tDBConnection_1 != null && !"".equals(dbProperties_tDBConnection_1.trim())) {
            url_tDBConnection_1 = url_tDBConnection_1 + "?" + dbProperties_tDBConnection_1;
        }
        String dbUser_tDBConnection_1 = context.DB_USER;
//////////////////
//////////////////
Log entries from the time of trying to connect both locally and remotely:
2022-08-23 15:20:45.452 EDT [67687] LOG: database system is ready to accept connections
2022-08-23 15:22:00.719 EDT [67687] LOG: received fast shutdown request
2022-08-23 15:22:00.720 EDT [67687] LOG: aborting any active transactions
2022-08-23 15:22:00.722 EDT [67687] LOG: background worker "logical replication launcher" (PID 67696) exited with exit code 1
2022-08-23 15:22:00.722 EDT [67691] LOG: shutting down
2022-08-23 15:22:00.754 EDT [67687] LOG: database system is shut down
2022-08-23 15:22:00.814 EDT [67739] LOG: database system was shut down at 2022-08-23 15:22:00 EDT
2022-08-23 15:22:00.818 EDT [67735] LOG: database system is ready to accept connections
2022-08-23 15:52:25.666 EDT [67735] LOG: received fast shutdown request
2022-08-23 15:52:25.668 EDT [67735] LOG: aborting any active transactions
2022-08-23 15:52:25.669 EDT [67735] LOG: background worker "logical replication launcher" (PID 67745) exited with exit code 1
2022-08-23 15:52:25.669 EDT [67740] LOG: shutting down
2022-08-23 15:52:25.692 EDT [67735] LOG: database system is shut down
2022-08-23 15:52:34.188 EDT [79803] LOG: database system was shut down at 2022-08-23 15:52:25 EDT
2022-08-23 15:52:34.194 EDT [79801] LOG: database system is ready to accept connections
//////////////////////////////////////
UPDATE:
Here is as much of the Java error message as I can share, given some restrictions on what I can post:
error message :: FATAL: no pg_hba.conf entry for host "[my_server_ip]", user "my_db_user", database "my_db", SSL off|4
Exception in component tDBConnection_2 (Generic_Talend_Job)
org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "[my_server_ip]", user "my_db_user", database "my_db", SSL off
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:613)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:161)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tDBConnection_2Process(Generic_Talend_Job.java:5752)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tSystem_1Process(Generic_Talend_Job.java:5614)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tJava_3Process(Generic_Talend_Job.java:5342)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tWarn_1Process(Generic_Talend_Job.java:5094)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tFileInputDelimited_1Process(Generic_Talend_Job.java:4851)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tLibraryLoad_1Process(Generic_Talend_Job.java:4089)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tPrejob_1Process(Generic_Talend_Job.java:3856)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.runJobInTOS(Generic_Talend_Job.java:11916)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.main(Generic_Talend_Job.java:11245)
Suppressed: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "[my_server_ip]", user "my_db_user", database "my_db", SSL off
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:613)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:161)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
... 15 more
sec db connection failure, error message :: FATAL: no pg_hba.conf entry for host "[my_server_ip]", user "my_db_user", database "my_db", SSL off
Update 2:
Entry from this morning's logs at /data_folder/log after trying again and getting the same error:
[postgres@my_server_name log]$ vim postgresql-Wed.log
2022-08-24 10:00:17.714 EDT [79801] LOG: received fast shutdown request
2022-08-24 10:00:17.717 EDT [79801] LOG: aborting any active transactions
2022-08-24 10:00:17.717 EDT [508177] FATAL: terminating connection due to administrator command
2022-08-24 10:00:17.717 EDT [508176] FATAL: terminating connection due to administrator command
2022-08-24 10:00:17.718 EDT [79801] LOG: background worker "logical replication launcher" (PID 79809) exited with exit code 1
2022-08-24 10:00:17.718 EDT [79804] LOG: shutting down
2022-08-24 10:00:17.740 EDT [79801] LOG: database system is shut down
2022-08-24 10:00:35.780 EDT [513920] LOG: database system was shut down at 2022-08-24 10:00:17 EDT
2022-08-24 10:00:35.788 EDT [513918] LOG: database system is ready to accept connections
Update 3:
From the postgresql.conf file:
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/var/run/postgresql, /tmp' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = md5 # md5 or scram-sha-256
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off
# - SSL -
#ssl = off
#ssl_ca_file = ''
#ssl_cert_file = 'server.crt'
#ssl_crl_file = ''
#ssl_key_file = 'server.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
Update 4:
After doing some more searching, I found that the default Postgres logging isn't verbose, so this is the verbose part:
[636130] [630676b4.9b4e2] [2022-08-24 15:06:28.850 EDT] [] [F0000] [] [] [] [] [] [7] [0]: LOG: hostssl record cannot match because SSL is disabled
[636130] [630676b4.9b4e2] [2022-08-24 15:06:28.850 EDT] [] [F0000] [] [] [] [] [] [8] [0]: HINT: Set ssl = on in postgresql.conf.
[636130] [630676b4.9b4e2] [2022-08-24 15:06:28.850 EDT] [] [F0000] [] [] [] [] [] [9] [0]: CONTEXT: line 103 of configuration file "[data folder path]/pg_hba.conf"
[636132] [630676b4.9b4e4] [2022-08-24 15:06:28.854 EDT] [] [00000] [] [] [] [] [] [1] [0]: LOG: database system was shut down at 2022-08-24 15:06:18 EDT
[636130] [630676b4.9b4e2] [2022-08-24 15:06:28.859 EDT] [] [00000] [] [] [] [] [] [10] [0]: LOG: database system is ready to accept connections
I have gotten this error even after the following updates:
update to the postgresql.conf file:
edit 1
ssl = off
edit 2
ssl = on
edit 3
#ssl = off
updated the log level in the postgresql.conf file to
log_line_prefix = '[%p] [%c] [%m] [%v] [%e] [%i] [%d] [%u] [%a] [%r] [%l] [%x]: '
...
log_statement = 'all'
updated the pg_hba.conf file to start with:
# Allow any user on the local system to connect to any database with
# any database user name using Unix-domain sockets (the default for local
# connections).
#
# TYPE DATABASE USER ADDRESS METHOD
local all all trust
# The same using local loopback TCP/IP connections.
#
# TYPE DATABASE USER ADDRESS METHOD
host all all 127.0.0.1/32 trust
# The same as the previous line, but using a separate netmask column
#
# TYPE DATABASE USER IP-ADDRESS IP-MASK METHOD
host all all 127.0.0.1 255.255.255.255 trust
# The same over IPv6.
#
# TYPE DATABASE USER ADDRESS METHOD
host all all ::1/128 trust
# The same using a host name (would typically cover both IPv4 and IPv6).
#
# TYPE DATABASE USER ADDRESS METHOD
host all all localhost trust
hostssl all all 0.0.0.0/0 md5
I also updated the connection params with:
sslfactory=org.postgresql.ssl.NonValidatingFactory
According to the Postgres JDBC documentation, this is supposed to turn off all SSL validation and just let the connections through:
https://jdbc.postgresql.org/documentation/head/ssl-client.html
"A non-validating connection is established via a custom SSLSocketFactory class that is provided with the driver. Setting the connection URL parameter sslfactory=org.postgresql.ssl.NonValidatingFactory will turn off all SSL validation."
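The way those modifiers reach the driver mirrors what the generated Talend code already does: the parameter string is simply appended to the JDBC URL after a `?`. A small sketch of that URL assembly (host and db names are placeholders):

```java
public class PgUrl {

    // Mirrors the generated tDBConnection code: append the property string
    // to the base URL only when it is non-empty.
    public static String build(String host, int port, String db, String params) {
        String url = "jdbc:postgresql://" + host + ":" + port + "/" + db;
        if (params != null && !params.trim().isEmpty()) {
            url = url + "?" + params;
        }
        return url;
    }

    public static void main(String[] args) {
        System.out.println(build("localhost", 5432, "my_db",
                "sslfactory=org.postgresql.ssl.NonValidatingFactory&sslmode=prefer"));
    }
}
```

Note that sslfactory only disables certificate validation on the client; it does not change which pg_hba.conf line the server matches, which depends on whether SSL is actually in use for the connection.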
Update 5:
Logging settings from postgresql.conf file:
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%a.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
log_truncate_on_rotation = on # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
log_rotation_size = 0 # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '[%p] [%c] [%m] [%v] [%e] [%i] [%d] [%u] [%a] [%r] [%l] [%x]: ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
log_statement = 'all' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'America/New_York'
Update 6:
I made edits to the logging part of the config, and now get a lot of output in the log.
new logging settings:
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%a.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
log_truncate_on_rotation = on # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
log_rotation_size = 0 # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
log_min_messages = debug5 # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
log_min_error_statement = debug5 # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
log_min_duration_statement = 0 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
log_connections = on
log_disconnections = on
#log_duration = off
log_error_verbosity = verbose # terse, default, or verbose messages
log_hostname = on
log_line_prefix = '[%p] [%c] [%m] [%v] [%e] [%i] [%d] [%u] [%a] [%r] [%l] [%x]: ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
log_statement = 'all' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'America/New_York'
The error log was too long to paste here after making it verbose, so I made a Gist.
Last update:
I removed the Gist because the folks here helped me find my error. The issue was three things:
1. We have a 'properties' file that I thought was pointed at this server and this db, and it wasn't.
2. The default logging was set to 'warning' and didn't have enough information; after setting it to debug level 5 (the most detail) on 'all' events, I finally saw an error message that made me realize it was pointed at the wrong db.
3. In the many (too many) iterations of my pg_hba.conf file, I had added and then removed the IP address.
So now it's working. Sometimes you are too close to the tracks to see the train.

/usr/bin/java not found when dumping assetic

I ran php app/console assetic:dump to publish assets for a Symfony2 application, but I got this error:
[Assetic\Exception\FilterException]
An error occurred while running:
'/usr/bin/java' '-jar' '/home/devx/public_html/mutualcontratos/app/Resources/java/yuicompressor-2.4.7.jar' '--charset' 'UTF-8' '-o' '/tmp/YUI-OUT-3E8eNl' '--type' 'js' '/tmp/YUI-IN-DmkGnm'
Error Output:
sh: /usr/bin/java: No existe el fichero o el directorio
That means the java executable could not be found (the error output is Spanish for "No such file or directory").
The fact is that I have another Symfony2 app on the same server, and it works there. I compared both config.yml files and they are identical. What is missing in this other app?
This is the full config.yml file:
imports:
    - { resource: parameters.yml }
    - { resource: security.yml }

framework:
    #esi: ~
    translator: { fallback: es_CL }
    secret: %secret%
    router:
        resource: "%kernel.root_dir%/config/routing.yml"
        strict_requirements: ~
    form: ~
    csrf_protection: ~
    validation: { enable_annotations: true }
    templating:
        engines: ['twig']
        #assets_version: SomeVersionScheme
    default_locale: "%locale%"
    trusted_proxies: ~
    session: ~
    fragments: ~
    http_method_override: true

# Twig Configuration
twig:
    debug: %kernel.debug%
    strict_variables: %kernel.debug%

# Assetic Configuration
assetic:
    debug: %kernel.debug%
    use_controller: false
    bundles: [ FOSUserBundle, MutualContratosBundle ]
    #java: /usr/bin/java
    filters:
        cssrewrite: ~
        #closure:
        #    jar: %kernel.root_dir%/Resources/java/compiler.jar
        yui_css:
            jar: %kernel.root_dir%/Resources/java/yuicompressor-2.4.7.jar
        yui_js:
            jar: %kernel.root_dir%/Resources/java/yuicompressor-2.4.7.jar

# Doctrine Configuration
doctrine:
    dbal:
        driver: %database_driver%
        host: %database_host%
        port: %database_port%
        dbname: %database_name%
        user: %database_user%
        password: %database_password%
        charset: UTF8
        # if using pdo_sqlite as your database driver, add the path in parameters.yml
        # e.g. database_path: %kernel.root_dir%/data/data.db3
        # path: %database_path%
    orm:
        auto_generate_proxy_classes: %kernel.debug%
        auto_mapping: true

# Swiftmailer Configuration
swiftmailer:
    transport: %mailer_transport%
    host: %mailer_host%
    username: %mailer_user%
    password: %mailer_password%
    spool: { type: memory }

fos_user:
    db_driver: orm # other valid values are 'mongodb', 'couchdb'
    firewall_name: main
    user_class: Mutual\ContratosBundle\Entity\Usuario
That is strange... java does not exist in that folder. However, I have another Symfony2 site on the same server and it works. I really don't know what Assetic is doing to publish the assets.
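Since the error comes down to `/usr/bin/java` not existing, the first step is to find where the java binary actually lives so the commented-out `java:` option under `assetic:` in the config above can point at it. A hypothetical sketch of the PATH scan that `which java` performs:

```java
import java.io.File;

// Sketch: resolve a binary name against a PATH-style variable, the way
// `which` does, to find the real location of java for assetic's `java:` option.
public class WhichJava {

    public static String which(String name, String pathVar) {
        for (String dir : pathVar.split(File.pathSeparator)) {
            File candidate = new File(dir, name);
            if (candidate.isFile() && candidate.canExecute()) {
                return candidate.getAbsolutePath();
            }
        }
        return null; // same situation as the error: binary not found on PATH
    }

    public static void main(String[] args) {
        System.out.println(which("java", System.getenv("PATH")));
    }
}
```

If this prints a path different from /usr/bin/java, setting `java: <that path>` in the assetic section should let the yui filters run.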

Connecting to local instance of PostgreSql with JDBC for tuffy

I can log in to Postgres using "psql -h localhost -U myuser mydatabase".
But my Tuffy config file is like this:
# JDBC connection string; must be PostgreSQL
db_url = jdbc:postgresql://localhost:5432/mydb
# Database username; must be a superuser
db_username = myuser
# The password for db_username
db_password = mypass
# The working directory; Tuffy may write sizable temporary data here
dir_working = /var/postgres/data
But when I run Tuffy commands, it shows an error like this:
Connecting to RDBMS at jdbc:postgresql://localhost:5432/tuffydb Failed to connect to PostgreSQL!
FATAL: password authentication failed for user "tuffer"
Exception in thread "main" java.lang.NullPointerException
    at tuffy.db.RDB.query(RDB.java:438)
    at tuffy.db.RDB.schemaExists(RDB.java:676)
    at tuffy.db.RDB.resetSchema(RDB.java:717)
    at tuffy.main.Infer.setUp(Infer.java:77)
    at tuffy.main.PartInfer.run(PartInfer.java:18)
    at tuffy.main.Main.main(Main.java:29)
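One thing worth checking: the error reports database "tuffydb" and user "tuffer", while the config shown says mydb/myuser, which suggests Tuffy is picking up a different config file than the one being edited. The conf uses `key = value` lines with `#` comments, so a quick hypothetical sanity check is to load the file you think is in effect and print what it actually contains:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ConfCheck {

    // "key = value" lines with '#' comments parse directly with
    // java.util.Properties.
    public static Properties load(String text) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(text));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return props;
    }

    public static void main(String[] args) {
        String conf = "# JDBC connection string; must be PostgreSQL\n"
                + "db_url = jdbc:postgresql://localhost:5432/mydb\n"
                + "db_username = myuser\n"
                + "db_password = mypass\n";
        Properties p = load(conf);
        // If these don't match what the error reports (tuffydb / tuffer),
        // Tuffy is reading some other file.
        System.out.println(p.getProperty("db_url"));
        System.out.println(p.getProperty("db_username"));
    }
}
```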

Jython script to create oracle datasource in Websphere

The code below creates the datasource successfully, but there is an exception when Test Connection is called. When I restart the server and use Test Connection in the console, it works.
How can I avoid the server restart and make the test connection work from the script?
I get this error after the script is executed and before the restart:
J2CA0130I: Data Source [nMINEDB] has Component-managed Authentication Alias [test/nMINEDBUser] but no corresponding J2C Authentication Data Entry is defined in security.xml. This may have undesirable effects.
Code:
#
# Jython Script to create MINE web data-source and JDBC provider.
#
#Import Statements
import os
import re
import sys
# Create JDBC provider for MINE oracle database.
def createMINEJDBCProvider():
    server = '/Server:server1'
    # Set the node ID
    serverID = AdminConfig.getid(server)
    print 'Server ID:' + serverID
    # Configure the J2C authentication alias
    userAlias = 'test/nMINEDBUser'
    alias = ['alias', userAlias]
    userid = ['userId', 'MINEDB']
    password = ['password', 'MINEpass']
    jaasAttrs = [alias, userid, password]
    security = AdminConfig.getid('/Security:/')
    print 'security:' + security
    j2cUser = AdminConfig.create('JAASAuthData', security, jaasAttrs)
    AdminConfig.save()
    print 'Creating MINE user successful'
    # Test whether the provider has already been created.
    MINEJDBCprovider = AdminConfig.getid('/JDBCProvider:nOracle JDBC Driver/')
    if len(MINEJDBCprovider) == 0:
        providerName = 'nOracle JDBC Driver'
        print 'Creating Oracle JDBC provider on server:' + serverID
        print 'JDBC provider name:' + providerName
        MINEJDBCprop1 = ['name', providerName]
        MINEJDBCprop2 = ['description', 'Oracle JDBC Driver for MINE Application']
        MINEJDBCprop3 = ['implementationClassName', 'oracle.jdbc.pool.OracleConnectionPoolDataSource']
        MINEJDBCprop4 = ['classpath', 'C:/jars/ojdbc/ojdbc6.jar']
        MINEJDBCprops = [MINEJDBCprop1, MINEJDBCprop2, MINEJDBCprop3, MINEJDBCprop4]
        providerID = AdminConfig.create('JDBCProvider', serverID, MINEJDBCprops)
        AdminConfig.save()
        print 'Creating Oracle JDBC provider on server successful with provider:' + providerID
        createMINEDataSource()
    else:
        print 'Oracle provider exists:' + MINEJDBCprovider

def createMINEDataSource():
    providerName = 'nOracle JDBC Driver'
    userAlias = 'test/nMINEDBUser'
    MINEJDBCprovider = AdminConfig.getid('/JDBCProvider:nOracle JDBC Driver/')
    MINEDataSource = AdminConfig.getid('/JDBCProvider:' + providerName + '/DataSource:MINEDB/')
    if len(MINEDataSource) == 0:
        # Set the datasource attributes
        MINEDSprop1 = ['name', 'nMINEDB']
        MINEDSprop2 = ['jndiName', 'jdbc/nMINEdb']
        MINEDSprop3 = ['description', 'MINE database']
        MINEDSprop4 = ['datasourceHelperClassname', 'com.ibm.websphere.rsadapter.Oracle11gDataStoreHelper']
        MINEDSprop5 = ['authDataAlias', userAlias]
        mapConfigprop = ["mappingConfigAlias", "DefaultPrincipalMapping"]
        mapConfigs = [MINEDSprop5, mapConfigprop]
        mappingConfig = ["mapping", mapConfigs]
        MINEDSprops = [MINEDSprop1, MINEDSprop2, MINEDSprop3, MINEDSprop4, MINEDSprop5, mappingConfig]
        MINEDataSource = AdminConfig.create('DataSource', MINEJDBCprovider, MINEDSprops)
        # Set the DB URL
        propSet = AdminConfig.create('J2EEResourcePropertySet', MINEDataSource, [])
        AdminConfig.create('J2EEResourceProperty', propSet, [["name", "URL"], ["value", "jdbc:oracle:thin:#myserver:1523:MINED2"]])
        AdminConfig.save()
        print 'Creating MINE JDBC datasource on server successful with datasource:' + MINEDataSource
        # Mapping module: not trying right now
        # AdminConfig.create('MappingModule', MINEDataSource, mappingConfig)
    else:
        print 'MINE datasource already exists in the server:' + MINEDataSource
    print 'Testing datasource connection'
    print AdminControl.testConnection(MINEDataSource)

try:
    print 'start'
    createMINEJDBCProvider()
    createMINEDataSource()
    print 'end'
except:
    print "***** Unexpected error while creating JDBC datasource:", sys.exc_info(), " *****"
    raise
The J2C authentication data entry isn't initialized until the server is recycled, which is why the test connection only succeeds after a restart.
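Given that the alias only becomes usable after a recycle, one pragmatic workaround is to make the test-connection step non-fatal: wrap it in a retry helper so the script logs the failure and finishes, and the connection can be verified after the restart. Below is a minimal sketch of such a helper in plain Python (callable from a Jython 2.x wsadmin script); the `FlakyConnection` class is purely a hypothetical stand-in for `AdminControl.testConnection(...)`, not WebSphere API.

```python
import time

def retry(operation, attempts=3, delay_seconds=0):
    """Call operation() up to `attempts` times.

    Returns (True, result) on the first success, or (False, last_exception)
    if every attempt raised, so the surrounding admin script can report the
    failure and keep going instead of aborting."""
    last_error = None
    for _ in range(attempts):
        try:
            return (True, operation())
        except Exception as e:
            last_error = e
            if delay_seconds:
                time.sleep(delay_seconds)
    return (False, last_error)

# Hypothetical stand-in for AdminControl.testConnection: fails until some
# state (here a counter; in WebSphere, the J2C alias) becomes available.
class FlakyConnection:
    def __init__(self, fail_times):
        self.remaining_failures = fail_times

    def test(self):
        if self.remaining_failures > 0:
            self.remaining_failures -= 1
            raise RuntimeError("J2C alias not yet initialized")
        return "connection ok"
```

In the real script the call would look like `ok, result = retry(lambda: AdminControl.testConnection(MINEDataSource))`, printing `result` either way instead of letting the exception propagate into the top-level `except` block.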

Sonar stops working every few days: jdbc connection error

I have a Sonar server that is used once a day from Maven/Jenkins, and every few days, say every 4 or 5 days, it crashes and shows the message "We're sorry, but something went wrong".
In the log, the error is always a JDBC connection problem. I thought it was a problem with the database, but if I restart the Sonar server everything works fine again.
So it looks like a memory leak or something in the Sonar server that makes it crash every few days until someone restarts it. Does that make sense? This is the configuration I have:
sonar.jdbc.username: xxxx
sonar.jdbc.password: xxxx
sonar.jdbc.url: jdbc:mysql://x.x.x.x:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
#----- Connection pool settings
sonar.jdbc.maxActive: 20
sonar.jdbc.maxIdle: 5
sonar.jdbc.minIdle: 2
sonar.jdbc.maxWait: 5000
sonar.jdbc.minEvictableIdleTimeMillis: 600000
sonar.jdbc.timeBetweenEvictionRunsMillis: 30000
sonar.updatecenter.activate=true
http.proxyHost=xxxx
http.proxyPort=3128
sonar.notifications.delay=60
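As a side note on the pool settings above: SonarQube 3.x pools connections with Commons DBCP, and none of the properties shown enable connection validation, so a connection killed server-side sits in the pool until it is handed out and fails. A hedged fragment of the kind of validation settings DBCP supports is below; whether your SonarQube version forwards these exact `sonar.jdbc.*` names to the pool is an assumption to verify against the commented examples in your own sonar.properties.

```properties
# Assumed property names (DBCP validationQuery / testOnBorrow passthrough);
# check your sonar.properties template before relying on them.
sonar.jdbc.validationQuery: SELECT 1
sonar.jdbc.testOnBorrow: true
```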
That's it. And this is the error log:
INFO o.s.s.p.ServerImpl SonarQube Server / 3.7.3 /
INFO o.s.c.p.Database Create JDBC datasource for jdbc:mysql://x.x.x.x:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
ERROR o.s.c.p.Database Can not connect to database.
Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
org.apache.commons.dbcp.SQLNestedException:
Cannot create PoolableConnectionFactory (Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
.
.
.
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
.
.
.
Caused by: java.net.ConnectException: Connection refused
.
.
.
INFO jruby.rack An exception happened during JRuby-Rack startup
no connection available
--- System
jruby 1.6.8 (ruby-1.8.7-p357) (2012-09-18 1772b40) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_43) [linux-amd64-java]
Time: Thu Jan 02 08:04:08 -0500 2014
Server: jetty/7.6.11.v20130520
jruby.home: file:/opt/sonar/war/sonar-server/WEB-INF/lib/jruby-complete-1.6.8.jar!/META-INF/jruby.home
--- Context Init Parameters:
jruby.compat.version = 1.8
jruby.max.runtimes = 1
jruby.min.runtimes = 1
jruby.rack.logging = slf4j
public.root = /
rails.env = production
--- Backtrace
ActiveRecord::ConnectionNotEstablished: no connection available
set_native_database_types at arjdbc/jdbc/RubyJdbcConnection.java:517
initialize at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/connection.rb:61
initialize at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/adapter.rb:31
jdbc_connection at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/jdbc/connection_methods.rb:6
send at org/jruby/RubyKernel.java:2109
new_connection at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/connection_pool.rb:223
checkout_new_connection at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/connection_pool.rb:245
checkout at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/connection_pool.rb:188
loop at org/jruby/RubyKernel.java:1439
checkout at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/connection_pool.rb:184
mon_synchronize at file:/opt/sonar/war/sonar-server/WEB-INF/lib/jruby-complete-1.6.8.jar!/META-INF/jruby.home/lib/ruby/1.8/monitor.rb:191
checkout at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/connection_pool.rb:183
connection at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/connection_pool.rb:98
retrieve_connection at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/connection_pool.rb:326
retrieve_connection at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/connection_specification.rb:123
connection at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/connection_specification.rb:115
initialize at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:440
up at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:401
migrate at /opt/sonar/war/sonar-server/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:383
upgrade_and_start at /opt/sonar/war/sonar-server/WEB-INF/lib/database_version.rb:62
automatic_setup at /opt/sonar/war/sonar-server/WEB-INF/lib/database_version.rb:74
(root) at /opt/sonar/war/sonar-server/WEB-INF/config/environment.rb:213
load at org/jruby/RubyKernel.java:1087
load_environment at /opt/sonar/war/sonar-server/WEB-INF/config/environment.rb:23
load_environment at file:/opt/sonar/war/sonar-server/WEB-INF/lib/jruby-rack-1.1.10.jar!/jruby/rack/rails_booter.rb:65
(root) at <script>:1
--- RubyGems
Gem.dir: /opt/sonar/war/sonar-server/WEB-INF/gems
Gem.path:
/opt/sonar/war/sonar-server/WEB-INF/gems
Activated gems:
rake-0.9.2.2
activesupport-2.3.15
activerecord-2.3.15
rack-1.1.3
actionpack-2.3.15
actionmailer-2.3.15
activeresource-2.3.15
rails-2.3.15
color-tools-1.3.0
i18n-0.4.2
json-jruby-1.2.3-universal-java-1.6
activerecord-jdbc-adapter-1.1.3
fastercsv-1.4.0
--- Bundler
undefined method `bundle_path' for Bundler:Module
--- JRuby-Rack Config
compat_version = RUBY1_8
default_logger = org.jruby.rack.logging.StandardOutLogger#4fbbe4e1
err = java.io.PrintStream#d2284af
filter_adds_html = true
filter_verifies_resource = false
ignore_environment = false
initial_memory_buffer_size =
initial_runtimes = 1
jms_connection_factory =
jms_jndi_properties =
logger = org.jruby.rack.logging.Slf4jLogger#566dc8f0
logger_class_name = slf4j
logger_name = jruby.rack
maximum_memory_buffer_size =
maximum_runtimes = 1
num_initializer_threads =
out = java.io.PrintStream#6aeeefcf
rackup =
rackup_path =
rewindable = true
runtime_arguments =
runtime_timeout_seconds =
serial_initialization = false
servlet_context = ServletContext#o.e.j.w.WebAppContext{/,file:/opt/sonar/war/sonar-server/},file:/opt/sonar/war/sonar-server
ERROR jruby.rack unable to create shared application instance
org.jruby.rack.RackInitializationException: no connection available
.
.
.
org.jruby.exceptions.RaiseException:
(ConnectionNotEstablished) no connection available
.
.
.
ERROR jruby.rack Error: application initialization failed
org.jruby.rack.RackInitializationException: no connection available
.
.
.
org.jruby.exceptions.RaiseException:
(ConnectionNotEstablished) no connection available
Any help will be appreciated :)
This sounds familiar to me :P Try asking your operators whether they have some automated 'cleansing' job that periodically kills open database connections in order to prevent leaked connections to the database.
It happened to me with a Windows 2012 Server and a SQL Server 2012 database on a different machine. It seems that the Sonar service keeps a connection open, created at startup time, so any disconnection (networking, database restart, etc.) causes this unrecoverable connection problem. Restarting the Sonar (SonarQube) Windows service solved the problem for me. But if this problem is frequent, as in your case, it would be a good idea to schedule a service restart task, or to find out what is taking the connection down.
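The takeaway from this answer is a general pattern: a long-lived server process must assume its database connections can be killed underneath it (by a firewall, a database restart, or an idle-connection 'cleansing' job) and revalidate each connection before use instead of trusting one opened at startup. A minimal sketch of that pattern follows, using sqlite3 purely as a stand-in driver; the `connect_fn` factory and the `SELECT 1` ping query are assumptions, not anything SonarQube itself exposes.

```python
import sqlite3

class SelfHealingConnection:
    """Wraps a DB-API connection; pings it before use and reconnects if dead."""

    def __init__(self, connect_fn):
        self.connect_fn = connect_fn   # zero-arg factory returning a new connection
        self.conn = connect_fn()

    def _is_alive(self):
        # Cheap ping; any exception means the connection is unusable.
        try:
            self.conn.execute("SELECT 1")
            return True
        except Exception:
            return False

    def execute(self, sql, params=()):
        if not self._is_alive():
            # Connection was dropped (e.g. killed server-side): open a fresh one.
            self.conn = self.connect_fn()
        return self.conn.execute(sql, params)

# Demo: simulate a server-side kill by closing the connection out from under us.
db = SelfHealingConnection(lambda: sqlite3.connect(":memory:"))
db.conn.close()                        # the 'cleansing' job strikes
row = db.execute("SELECT 41 + 1").fetchone()   # row == (42,)
```

Production pools get the same effect declaratively (DBCP's `validationQuery`/`testOnBorrow`), which is usually preferable to hand-rolled reconnect logic like this sketch.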
