Java Runtime Exception: USB to RS232 serial comm using jssc maven package - java

I developed a desktop application using JavaFX, with Maven as the dependency manager. I used Java 8 and the jSSC package to communicate with a serial port over USB. At the time, it worked as I expected. But now, when I try to run the project, it shows me the following exception and shuts down the app.
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000000007110b5db, pid=9452, tid=0x0000000000002384
#
# JRE version: OpenJDK Runtime Environment (8.0_332-b08) (build 1.8.0_332-b08)
# Java VM: OpenJDK 64-Bit Server VM (25.332-b08 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C [jSSC-2.8_x86_64.dll+0xb5db]
#
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# C:\Users\Sincos\Desktop\HomeOffice\Java\FDH-Relay\hs_err_pid9452.log
#
# If you would like to submit a bug report, please visit:
# https://github.com/corretto/corretto-8/issues/
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Process finished with exit code 1
When I start the process, the following function is triggered.
public void open() throws SerialPortException {
    port = new SerialPort(comPort);
    port.openPort(); // Open serial port
    port.setParams(Integer.parseInt(baudRate), Integer.parseInt(dataSize),
            Integer.parseInt(stopBit), Integer.parseInt(parity));
    port.addEventListener(new SerialPortEventListener() {
        public void serialEvent(SerialPortEvent serialPortEvent) {
            try {
                int length = 0;
                buffer = port.readString();
                if (buffer != null) {
                    length = buffer.length();
                }
                for (int i = 0; i < length; i++) {
                    queue.add((int) buffer.charAt(i));
                }
            } catch (SerialPortException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    });
}
I set up the port, baudRate, dataSize, stopBit, and parity.
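For completeness, the matching close path would look roughly like this (a sketch based on the jSSC SerialPort API; removeEventListener() and closePort() exist on SerialPort, but this is not copied verbatim from my project):

public void close() throws SerialPortException {
    if (port != null && port.isOpened()) {
        port.removeEventListener(); // detach the listener registered in open()
        port.closePort();           // release the native handle held by jSSC
    }
}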
Here is the Maven dependency that I used in the project.
<!-- https://mvnrepository.com/artifact/org.scream3r/jssc -->
<dependency>
    <groupId>org.scream3r</groupId>
    <artifactId>jssc</artifactId>
    <version>2.8.0</version>
</dependency>
Here are the other variables and the constructor where I initialize the data.
String comPort;
public static Queue<Integer> queue = new LinkedList<>();
SerialPort port;
String buffer;
String baudRate, dataSize, stopBit, parity;

public ExternalSerialConnection(String comport, String baudRate, String dataSize, String stopBit, String parity) {
    this.comPort = comport;
    this.baudRate = baudRate;
    this.stopBit = stopBit;
    this.dataSize = dataSize;
    this.parity = parity;
}
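For reference, the class is meant to be used roughly like this (the port name and settings are made-up examples; jSSC's setParams() takes plain ints, so parity "0" means no parity):

ExternalSerialConnection connection =
        new ExternalSerialConnection("COM3", "9600", "8", "1", "0"); // hypothetical port and settings
connection.open(); // opens the port and registers the listener shown above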
Is there anyone who can help me to solve this issue?

Related

org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host - error on Linux, but the same code works on Windows

I have looked at many (too many) similar questions here and tried over and over to solve this one, but I can't quite get it resolved.
I have Java code that connects to a PostgreSQL server on a RHEL 8.2 box to query and return a value. When I run it on my Windows 10 laptop, it works with no issues. However, when I run it locally on the Linux box, it fails with the dreaded org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host.
The Java code is generated by an ETL tool called Talend, and the code is run on Linux by RunDeck.
To make matters more confusing, I can log in on Linux as the postgres user and use the psql interface to connect to the db with the same host, port, db, and user I'm using in Java without error:
psql -h [server ip] -p 5432 -d my_special_db -U my_db_user
I have checked the server settings, the pg_hba.conf file, the user permissions, the log files and tried changing all of the above to get some sort of return; all to no avail.
I have added to pg_hba.conf:
host my_special_db my_db_user [server ip]/32 md5
host my_special_db my_db_user [server ip with .0 as the last octet]/24 md5
host my_special_db root ::1/128 trust
host all all [server ip]/32 trust
I have tried multiple connection modifiers like:
sslfactory=org.postgresql.ssl.NonValidatingFactory&sslmode=prefer
or
sslmode=prefer
or
sslmode=Require
or
no modifier at all
Please advise.
Details:
OS
NAME : Linux RHEL 8.2
VERSION : 4.18.0-372.16.1.el8_6.x86_64
JVM
IMPLEMENTATIONVERSION : 11.0.15+9-LTS
NAME : OpenJDK 64-Bit Server VM
VENDOR : Amazon.com Inc.
VERSION : 11.0.15
PostGreSQL
postgresql11-libs-11.7-1PGDG.rhel8.x86_64.rpm \
postgresql11-11.7-1PGDG.rhel8.x86_64.rpm \
postgresql11-server-11.7-1PGDG.rhel8.x86_64.rpm
Under server > properties > SSL
All Defaults:
SSL Mode: Prefer
Client Cert: [blank]
Client Cert Key: [blank]
Root Cert: [blank]
Cert Rev List: [blank]
SSL compression?: no
//////////////////
Under /etc/systemd/system/postgresql-11.service
[Unit]
Description=PostgreSQL 11 database server
Documentation=https://www.postgresql.org/docs/11/static/
After=syslog.target
After=network.target
[Service]
Type=notify
User=postgres
Group=postgres
# Note: avoid inserting whitespace in these Environment= lines, or you may
# break postgresql-setup.
# Location of database directory
Environment=PGDATA=[my data directory path]
# Where to send early-startup messages from the server (before the logging
# options of postgresql.conf take effect)
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog
# Disable OOM kill on the postmaster
OOMScoreAdjust=-1000
Environment=PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
Environment=PG_OOM_ADJUST_VALUE=0
ExecStartPre=[path to server]/pgsql-11/bin/postgresql-11-check-db-dir ${PGDATA}
ExecStart=[path to server]/pgsql-11/bin/postgres -D ${PGDATA}
#ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGINT
# Do not set any timeout value, so that systemd will not kill postmaster
# during crash recovery.
TimeoutSec=0
[Install]
WantedBy=multi-user.target
[Service]
Environment=PGDATA=[my data directory path]
//////////////////
//////////////////
Top of the pg_hba.conf file:
# TYPE DATABASE USER ADDRESS METHOD
# linux requires a 'local' entry
#local all all md5
local all all md5
# IPv4 local connections:
host all all 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::0/0 md5
# no ssl connection
hostnossl all all 0.0.0.0/0 trust
# IPv4 local connections:
hostnossl all all 127.0.0.1/32 md5
# IPv6 local connections:
hostnossl all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
# This Servers IP and Ranges as Talend doesn't use localhost
host postgres postgres [server ip]/32 md5
host postgres all [server ip with .0 as the last octet]/24 md5
host all postgres [server ip]/32 md5
host all all [server ip with .0 as the last octet]/24 md5
host my_special_db my_db_user [server ip]/32 md5
host my_special_db my_db_user [server ip with .0 as the last octet]/24 md5
host my_special_db root ::1/128 trust
host all all [server ip]/32 trust
//////////////////
//////////////////
Part of My Java Code:
conn_tDBInput_1 = (java.sql.Connection) globalMap.get("conn_tDBConnection_2");
java.sql.Statement stmt_tDBInput_1 = conn_tDBInput_1.createStatement();
String dbquery_tDBInput_1 = "\n(SELECT ID AS \"id\"\n, ATTRIBUTE AS \"attribute\"\n, DEFINITION AS \"definition\"\n, DESCRIPTION AS \"description\""
+ "\nFROM " + context.SEC_DB_SCHEMA + "." + context.SEC_DB_TABLE
+ "\nWHERE \"description\" = 'my_filter_here'\nLIMIT 1);\n\n";
globalMap.put("tDBInput_1_QUERY", dbquery_tDBInput_1);
java.sql.ResultSet rs_tDBInput_1 = null;
try {
rs_tDBInput_1 = stmt_tDBInput_1.executeQuery(dbquery_tDBInput_1);
java.sql.ResultSetMetaData rsmd_tDBInput_1 = rs_tDBInput_1.getMetaData();
int colQtyInRs_tDBInput_1 = rsmd_tDBInput_1.getColumnCount();
String tmpContent_tDBInput_1 = null;
while (rs_tDBInput_1.next()) {
nb_line_tDBInput_1++;
if (colQtyInRs_tDBInput_1 < 1) {
row1.ID = 0;
} else {
row1.ID = rs_tDBInput_1.getInt(1);
if (rs_tDBInput_1.wasNull()) {
throw new RuntimeException("Null value in non-Nullable column");
}
}
if (colQtyInRs_tDBInput_1 < 2) {
row1.ATTRIBUTE = null;
} else {
row1.ATTRIBUTE = routines.system.JDBCUtil.getString(rs_tDBInput_1, 2, false);
}
if (colQtyInRs_tDBInput_1 < 3) {
row1.DEFINITION = null;
} else {
row1.DEFINITION = routines.system.JDBCUtil.getString(rs_tDBInput_1, 3, false);
}
if (colQtyInRs_tDBInput_1 < 4) {
row1.DESCRIPTION = null;
} else {
row1.DESCRIPTION = routines.system.JDBCUtil.getString(rs_tDBInput_1, 4, false);
}
/**
* [tDBInput_1 begin ] stop
*/
/**
* [tDBInput_1 main ] start
*/
currentComponent = "tDBInput_1";
tos_count_tDBInput_1++;
/**
* [tDBInput_1 main ] stop
*/
/**
* [tDBInput_1 process_data_begin ] start
*/
currentComponent = "tDBInput_1";
/**
* [tDBInput_1 process_data_begin ] stop
*/
/**
* [tReplicate_1 main ] start
*/
currentComponent = "tReplicate_1";
if (execStat) {
runStat.updateStatOnConnection(iterateId, 1, 1
, "row1"
);
}
row5 = new row5Struct();
row5.ID = row1.ID;
row5.ATTRIBUTE = row1.ATTRIBUTE;
row5.DEFINITION = row1.DEFINITION;
row5.DESCRIPTION = row1.DESCRIPTION;
globalMap.put("row5.ATTRIBUTE", row5.ATTRIBUTE);
globalMap.put("row5.DEFINITION", row5.DEFINITION);
// Set Reports DB values
context.DB_HOST = [my_server_ip];
context.DB_NAME = [my_db];
context.DB_PORT = 5432;
// This will have to be changed for each tower will have it's own 'reports' table
context.DB_SCHEMA = "my_schema";
context.DB_TABLE = "my_table";
context.DB_USER = (String)(globalMap.get("row5.ATTRIBUTE"));
context.DB_PWD = (String)(globalMap.get("row5.DEFINITION"));
//------------
currentComponent="tDBConnection_1";
int tos_count_tDBConnection_1 = 0;
String dbProperties_tDBConnection_1 = "sslfactory=org.postgresql.ssl.NonValidatingFactory&sslmode=prefer";
String url_tDBConnection_1 = "jdbc:postgresql://"+context.DB_HOST+":"+context.DB_PORT+"/"+context.DB_NAME;
if(dbProperties_tDBConnection_1 != null && !"".equals(dbProperties_tDBConnection_1.trim())) {
url_tDBConnection_1 = url_tDBConnection_1 + "?" + dbProperties_tDBConnection_1;
}
String dbUser_tDBConnection_1 = context.DB_USER;
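(For reference, a minimal standalone connection test outside of Talend would look roughly like the sketch below; the host, database, user, and password are placeholders, and the URL parameters mirror the ones built by the job above.)

import java.sql.Connection;
import java.sql.DriverManager;

public class PgConnectionTest {
    public static void main(String[] args) throws Exception {
        // Placeholders stand in for the real values; the parameter string matches dbProperties_tDBConnection_1.
        String url = "jdbc:postgresql://<server_ip>:5432/my_special_db"
                + "?sslfactory=org.postgresql.ssl.NonValidatingFactory&sslmode=prefer";
        try (Connection conn = DriverManager.getConnection(url, "my_db_user", "<password>")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}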
//////////////////
//////////////////
Log entry from the time of trying to connect both locally and remotely-
2022-08-23 15:20:45.452 EDT [67687] LOG: database system is ready to accept connections
2022-08-23 15:22:00.719 EDT [67687] LOG: received fast shutdown request
2022-08-23 15:22:00.720 EDT [67687] LOG: aborting any active transactions
2022-08-23 15:22:00.722 EDT [67687] LOG: background worker "logical replication launcher" (PID 67696) exited with exit code 1
2022-08-23 15:22:00.722 EDT [67691] LOG: shutting down
2022-08-23 15:22:00.754 EDT [67687] LOG: database system is shut down
2022-08-23 15:22:00.814 EDT [67739] LOG: database system was shut down at 2022-08-23 15:22:00 EDT
2022-08-23 15:22:00.818 EDT [67735] LOG: database system is ready to accept connections
2022-08-23 15:52:25.666 EDT [67735] LOG: received fast shutdown request
2022-08-23 15:52:25.668 EDT [67735] LOG: aborting any active transactions
2022-08-23 15:52:25.669 EDT [67735] LOG: background worker "logical replication launcher" (PID 67745) exited with exit code 1
2022-08-23 15:52:25.669 EDT [67740] LOG: shutting down
2022-08-23 15:52:25.692 EDT [67735] LOG: database system is shut down
2022-08-23 15:52:34.188 EDT [79803] LOG: database system was shut down at 2022-08-23 15:52:25 EDT
2022-08-23 15:52:34.194 EDT [79801] LOG: database system is ready to accept connections
//////////////////////////////////////
UPDATE:
Here is as much of the Java error message as I can share, given some of the restrictions on what I can post:
error message :: FATAL: no pg_hba.conf entry for host "[my_server_ip]", user "my_db_user", database "my_db", SSL off|4
Exception in component tDBConnection_2 (Generic_Talend_Job)
org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "[my_server_ip]", user "my_db_user", database "my_db", SSL off
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:613)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:161)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tDBConnection_2Process(Generic_Talend_Job.java:5752)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tSystem_1Process(Generic_Talend_Job.java:5614)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tJava_3Process(Generic_Talend_Job.java:5342)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tWarn_1Process(Generic_Talend_Job.java:5094)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tFileInputDelimited_1Process(Generic_Talend_Job.java:4851)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tLibraryLoad_1Process(Generic_Talend_Job.java:4089)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.tPrejob_1Process(Generic_Talend_Job.java:3856)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.runJobInTOS(Generic_Talend_Job.java:11916)
at local_project.generic_talend_job_0_1.Generic_Talend_Job.main(Generic_Talend_Job.java:11245)
Suppressed: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "[my_server_ip]", user "my_db_user", database "my_db", SSL off
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:613)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:161)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
... 15 more
sec db connection failure, error message :: FATAL: no pg_hba.conf entry for host "[my_server_ip]", user "my_db_user", database "my_db", SSL off
Update 2:
Entry from this morning's logs at /data_folder/log after trying again and getting the same error:
[postgres@my_server_name log]$ vim postgresql-Wed.log
2022-08-24 10:00:17.714 EDT [79801] LOG: received fast shutdown request
2022-08-24 10:00:17.717 EDT [79801] LOG: aborting any active transactions
2022-08-24 10:00:17.717 EDT [508177] FATAL: terminating connection due to administrator command
2022-08-24 10:00:17.717 EDT [508176] FATAL: terminating connection due to administrator command
2022-08-24 10:00:17.718 EDT [79801] LOG: background worker "logical replication launcher" (PID 79809) exited with exit code 1
2022-08-24 10:00:17.718 EDT [79804] LOG: shutting down
2022-08-24 10:00:17.740 EDT [79801] LOG: database system is shut down
2022-08-24 10:00:35.780 EDT [513920] LOG: database system was shut down at 2022-08-24 10:00:17 EDT
2022-08-24 10:00:35.788 EDT [513918] LOG: database system is ready to accept connections
Update 3:
From the postgresql.conf file:
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/var/run/postgresql, /tmp' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = md5 # md5 or scram-sha-256
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off
# - SSL -
#ssl = off
#ssl_ca_file = ''
#ssl_cert_file = 'server.crt'
#ssl_crl_file = ''
#ssl_key_file = 'server.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
Update 4:
After doing some more searching, I found that the default Postgres logging isn't verbose, so this is the verbose part:
[636130] [630676b4.9b4e2] [2022-08-24 15:06:28.850 EDT] [] [F0000] [] [] [] [] [] [7] [0]: LOG: hostssl record cannot match because SSL is disabled
[636130] [630676b4.9b4e2] [2022-08-24 15:06:28.850 EDT] [] [F0000] [] [] [] [] [] [8] [0]: HINT: Set ssl = on in postgresql.conf.
[636130] [630676b4.9b4e2] [2022-08-24 15:06:28.850 EDT] [] [F0000] [] [] [] [] [] [9] [0]: CONTEXT: line 103 of configuration file "[data folder path]/pg_hba.conf"
[636132] [630676b4.9b4e4] [2022-08-24 15:06:28.854 EDT] [] [00000] [] [] [] [] [] [1] [0]: LOG: database system was shut down at 2022-08-24 15:06:18 EDT
[636130] [630676b4.9b4e2] [2022-08-24 15:06:28.859 EDT] [] [00000] [] [] [] [] [] [10] [0]: LOG: database system is ready to accept connections
I have gotten this error even after the following updates-
update to the postgresql.conf file:
edit 1
ssl = off
edit 2
ssl = on
edit 3
#ssl = off
updated the log level in the postgresql.conf file to
log_line_prefix = '[%p] [%c] [%m] [%v] [%e] [%i] [%d] [%u] [%a] [%r] [%l] [%x]: '
...
log_statement = 'all'
updated the pg_hba.conf file to start with:
# Allow any user on the local system to connect to any database with
# any database user name using Unix-domain sockets (the default for local
# connections).
#
# TYPE DATABASE USER ADDRESS METHOD
local all all trust
# The same using local loopback TCP/IP connections.
#
# TYPE DATABASE USER ADDRESS METHOD
host all all 127.0.0.1/32 trust
# The same as the previous line, but using a separate netmask column
#
# TYPE DATABASE USER IP-ADDRESS IP-MASK METHOD
host all all 127.0.0.1 255.255.255.255 trust
# The same over IPv6.
#
# TYPE DATABASE USER ADDRESS METHOD
host all all ::1/128 trust
# The same using a host name (would typically cover both IPv4 and IPv6).
#
# TYPE DATABASE USER ADDRESS METHOD
host all all localhost trust
hostssl all all 0.0.0.0/0 md5
also updated the connection params with-
sslfactory=org.postgresql.ssl.NonValidatingFactory
According to the Postgres documentation, this is supposed to stop any and all SSL issues and just let the connections through:
https://jdbc.postgresql.org/documentation/head/ssl-client.html
"A non-validating connection is established via a custom SSLSocketFactory class that is provided with the driver. Setting the connection URL parameter sslfactory=org.postgresql.ssl.NonValidatingFactory will turn off all SSL validation."
Update 5:
Logging settings from postgresql.conf file:
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%a.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
log_truncate_on_rotation = on # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
log_rotation_size = 0 # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '[%p] [%c] [%m] [%v] [%e] [%i] [%d] [%u] [%a] [%r] [%l] [%x]: ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
log_statement = 'all' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'America/New_York'
Update 6:
I made edits to the logging part of the config, and now have a lot of output from the log
new logging settings:
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%a.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
log_truncate_on_rotation = on # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
log_rotation_size = 0 # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
log_min_messages = debug5 # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
log_min_error_statement = debug5 # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
log_min_duration_statement = 0 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
log_connections = on
log_disconnections = on
#log_duration = off
log_error_verbosity = verbose # terse, default, or verbose messages
log_hostname = on
log_line_prefix = '[%p] [%c] [%m] [%v] [%e] [%i] [%d] [%u] [%a] [%r] [%l] [%x]: ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
log_statement = 'all' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'America/New_York'
I couldn't paste the error log here because, after making it verbose, it was very long, so I made a Gist.
Last update:
I removed the Gist because the peeps here helped me find my error:
Okay, so the issue was 3 things:
1. We have a 'properties' file that I thought was set to this server and this db, and it wasn't.
2. The default logging was set to 'warning' and didn't have enough information, so after setting it to debug level 5 (the most data) and to 'all' events, I finally saw an error message that made me realize it was pointed at the wrong db.
3. In the many (too many) iterations of my pg_hba.conf file, I had added and then removed the IP address.
so now it's working, lol
Sometimes you are too close to the tracks to see the train, lol

How to configure email functionality in Play Java 2.5.4

I wrote the following code for email functionality:
import org.apache.commons.mail.DefaultAuthenticator;
import org.apache.commons.mail.EmailException;
import org.apache.commons.mail.HtmlEmail;
import play.mvc.Controller;
import play.mvc.Result;
public class MailController extends Controller {
    public Result sendEmail() throws EmailException {
        HtmlEmail email = new HtmlEmail();
        String authuser = ".......@gmail.com";
        String authpwd = "XXXXXXX";
        email.setSmtpPort(587);
        email.setAuthenticator(new DefaultAuthenticator(authuser, authpwd));
        email.setDebug(true);
        email.setHostName("smtp.gmail.com");
        email.setFrom(".........@gmail.com", "SenderName");
        email.setSubject("TestMail");
        email.setHtmlMsg("<html><body><h1>welcome to u</h1></body></html>");
        //email.addTo(".......@gmail.com", "receiver name");
        email.setTLS(true);
        email.send();
        return play.mvc.Results.ok("Success");
    }
}
However, I'm facing problems (such as exceptions getting caught in Netty).
I added the dependency in build.sbt:
libraryDependencies ++= Seq(
"com.typesafe.play" %% "play-mailer" % "5.0.0-M1"
)
application.conf:
# Mailer
# ~~~~~
play.mailer {
  host=smtpout.secureserver.net
  port=587
  ssl=false
  tls=false
  user=my username
  password=my password
  debug=false
  mock=false
}
This is one of the errors that I'm facing:
[error] p.c.s.n.PlayRequestHandler - Exception caught in Netty
java.lang.NoClassDefFoundError: Could not initialize class play.api.http.DefaultHttpErrorHandler$
at play.core.server.Server$class.logExceptionAndGetResult$1(Server.scala:45)
at play.core.server.Server$class.getHandlerFor(Server.scala:65)
at play.core.server.NettyServer.getHandlerFor(NettyServer.scala:47)
at play.core.server.netty.PlayRequestHandler.handle(PlayRequestHandler.scala:82)
at play.core.server.netty.PlayRequestHandler.channelRead(PlayRequestHandler.scala:163)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
at com.typesafe.netty.http.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:129)
at com.typesafe.netty.http.HttpStreamsServerHandler.channelRead(HttpStreamsServerHandler.java:96)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
How can I fix this?
My full application.conf file is as below:
# This is the main configuration file for the application.
# https://www.playframework.com/documentation/latest/ConfigFile
# ~~~~~
# Play uses HOCON as its configuration file format. HOCON has a number
# of advantages over other config formats, but there are two things that
# can be used when modifying settings.
#
# You can include other configuration files in this main application.conf file:
#include "extra-config.conf"
#
# You can declare variables and substitute for them:
#mykey = ${some.value}
#
# And if an environment variable exists when there is no other subsitution, then
# HOCON will fall back to substituting environment variable:
#mykey = ${JAVA_HOME}
## Akka
# https://www.playframework.com/documentation/latest/ScalaAkka#Configuration
# https://www.playframework.com/documentation/latest/JavaAkka#Configuration
# ~~~~~
# Play uses Akka internally and exposes Akka Streams and actors in Websockets and
# other streaming HTTP responses.
akka {
# "akka.log-config-on-start" is extraordinarly useful because it log the complete
# configuration at INFO level, including defaults and overrides, so it s worth
# putting at the very top.
#
# Put the following in your conf/logback.xml file:
#
# <logger name="akka.actor" level="INFO" />
#
# And then uncomment this line to debug the configuration.
#
#log-config-on-start = true
}
## Secret key
# http://www.playframework.com/documentation/latest/ApplicationSecret
# ~~~~~
# The secret key is used to sign Play's session cookie.
# This must be changed for production, but we don't recommend you change it in this file.
play.crypto.secret = "changeme"
## Modules
# https://www.playframework.com/documentation/latest/Modules
# ~~~~~
# Control which modules are loaded when Play starts. Note that modules are
# the replacement for "GlobalSettings", which are deprecated in 2.5.x.
# Please see https://www.playframework.com/documentation/latest/GlobalSettings
# for more information.
#
# You can also extend Play functionality by using one of the publically available
# Play modules: https://playframework.com/documentation/latest/ModuleDirectory
play.modules {
# By default, Play will load any class called Module that is defined
# in the root package (the "app" directory), or you can define them
# explicitly below.
# If there are any built-in modules that you want to disable, you can list them here.
#enabled += my.application.Module
# If there are any built-in modules that you want to disable, you can list them here.
#disabled += ""
}
## IDE
# https://www.playframework.com/documentation/latest/IDE
# ~~~~~
# Depending on your IDE, you can add a hyperlink for errors that will jump you
# directly to the code location in the IDE in dev mode. The following line makes
# use of the IntelliJ IDEA REST interface:
#play.editor=http://localhost:63342/api/file/?file=%s&line=%s
## Internationalisation
# https://www.playframework.com/documentation/latest/JavaI18N
# https://www.playframework.com/documentation/latest/ScalaI18N
# ~~~~~
# Play comes with its own i18n settings, which allow the user's preferred language
# to map through to internal messages, or allow the language to be stored in a cookie.
play.i18n {
# The application languages
langs = [ "en" ]
# Whether the language cookie should be secure or not
#langCookieSecure = true
# Whether the HTTP only attribute of the cookie should be set to true
#langCookieHttpOnly = true
}
## Play HTTP settings
# ~~~~~
play.http {
## Router
# https://www.playframework.com/documentation/latest/JavaRouting
# https://www.playframework.com/documentation/latest/ScalaRouting
# ~~~~~
# Define the Router object to use for this application.
# This router will be looked up first when the application is starting up,
# so make sure this is the entry point.
# Furthermore, it's assumed your route file is named properly.
# So for an application router like `my.application.Router`,
# you may need to define a router file `conf/my.application.routes`.
# Default to Routes in the root package (aka "apps" folder) (and conf/routes)
#router = my.application.Router
## Action Creator
# https://www.playframework.com/documentation/latest/JavaActionCreator
# ~~~~~
#actionCreator = null
## ErrorHandler
# https://www.playframework.com/documentation/latest/JavaRouting
# https://www.playframework.com/documentation/latest/ScalaRouting
# ~~~~~
# If null, will attempt to load a class called ErrorHandler in the root package,
#errorHandler = null
## Filters
# https://www.playframework.com/documentation/latest/ScalaHttpFilters
# https://www.playframework.com/documentation/latest/JavaHttpFilters
# ~~~~~
# Filters run code on every request. They can be used to perform
# common logic for all your actions, e.g. adding common headers.
# Defaults to "Filters" in the root package (aka "apps" folder)
# Alternatively you can explicitly register a class here.
#filters = my.application.Filters
## Session & Flash
# https://www.playframework.com/documentation/latest/JavaSessionFlash
# https://www.playframework.com/documentation/latest/ScalaSessionFlash
# ~~~~~
session {
# Sets the cookie to be sent only over HTTPS.
#secure = true
# Sets the cookie to be accessed only by the server.
#httpOnly = true
# Sets the max-age field of the cookie to 5 minutes.
# NOTE: this only sets when the browser will discard the cookie. Play will consider any
# cookie value with a valid signature to be a valid session forever. To implement a server side session timeout,
# you need to put a timestamp in the session and check it at regular intervals to possibly expire it.
#maxAge = 300
# Sets the domain on the session cookie.
#domain = "example.com"
}
flash {
# Sets the cookie to be sent only over HTTPS.
#secure = true
# Sets the cookie to be accessed only by the server.
#httpOnly = true
}
}
## Netty Provider
# https://www.playframework.com/documentation/latest/SettingsNetty
# ~~~~~
play.server.netty {
# Whether the Netty wire should be logged
#log.wire = true
# If you run Play on Linux, you can use Netty's native socket transport
# for higher performance with less garbage.
#transport = "native"
}
## WS (HTTP Client)
# https://www.playframework.com/documentation/latest/ScalaWS#Configuring-WS
# ~~~~~
# The HTTP client primarily used for REST APIs. The default client can be
# configured directly, but you can also create different client instances
# with customized settings. You must enable this by adding to build.sbt:
#
# libraryDependencies += ws // or javaWs if using java
#
play.ws {
# Sets HTTP requests not to follow 302 requests
#followRedirects = false
# Sets the maximum number of open HTTP connections for the client.
#ahc.maxConnectionsTotal = 50
## WS SSL
# https://www.playframework.com/documentation/latest/WsSSL
# ~~~~~
ssl {
# Configuring HTTPS with Play WS does not require programming. You can
# set up both trustManager and keyManager for mutual authentication, and
# turn on JSSE debugging in development with a reload.
#debug.handshake = true
#trustManager = {
# stores = [
# { type = "JKS", path = "exampletrust.jks" }
# ]
#}
}
}
## Cache
# https://www.playframework.com/documentation/latest/JavaCache
# https://www.playframework.com/documentation/latest/ScalaCache
# ~~~~~
# Play comes with an integrated cache API that can reduce the operational
# overhead of repeated requests. You must enable this by adding to build.sbt:
#
# libraryDependencies += cache
#
play.cache {
# If you want to bind several caches, you can bind the individually
#bindCaches = ["db-cache", "user-cache", "session-cache"]
}
## Filters
# https://www.playframework.com/documentation/latest/Filters
# ~~~~~
# There are a number of built-in filters that can be enabled and configured
# to give Play greater security. You must enable this by adding to build.sbt:
#
# libraryDependencies += filters
#
play.filters {
## CORS filter configuration
# https://www.playframework.com/documentation/latest/CorsFilter
# ~~~~~
# CORS is a protocol that allows web applications to make requests from the browser
# across different domains.
# NOTE: You MUST apply the CORS configuration before the CSRF filter, as CSRF has
# dependencies on CORS settings.
cors {
# Filter paths by a whitelist of path prefixes
#pathPrefixes = ["/some/path", ...]
# The allowed origins. If null, all origins are allowed.
#allowedOrigins = ["http://www.example.com"]
# The allowed HTTP methods. If null, all methods are allowed
#allowedHttpMethods = ["GET", "POST"]
}
## CSRF Filter
# https://www.playframework.com/documentation/latest/ScalaCsrf#Applying-a-global-CSRF-filter
# https://www.playframework.com/documentation/latest/JavaCsrf#Applying-a-global-CSRF-filter
# ~~~~~
# Play supports multiple methods for verifying that a request is not a CSRF request.
# The primary mechanism is a CSRF token. This token gets placed either in the query string
# or body of every form submitted, and also gets placed in the users session.
# Play then verifies that both tokens are present and match.
csrf {
# Sets the cookie to be sent only over HTTPS
#cookie.secure = true
# Defaults to CSRFErrorHandler in the root package.
#errorHandler = MyCSRFErrorHandler
}
## Security headers filter configuration
# https://www.playframework.com/documentation/latest/SecurityHeaders
# ~~~~~
# Defines security headers that prevent XSS attacks.
# If enabled, then all options are set to the below configuration by default:
headers {
# The X-Frame-Options header. If null, the header is not set.
#frameOptions = "DENY"
# The X-XSS-Protection header. If null, the header is not set.
#xssProtection = "1; mode=block"
# The X-Content-Type-Options header. If null, the header is not set.
#contentTypeOptions = "nosniff"
# The X-Permitted-Cross-Domain-Policies header. If null, the header is not set.
#permittedCrossDomainPolicies = "master-only"
# The Content-Security-Policy header. If null, the header is not set.
#contentSecurityPolicy = "default-src 'self'"
}
## Allowed hosts filter configuration
# https://www.playframework.com/documentation/latest/AllowedHostsFilter
# ~~~~~
# Play provides a filter that lets you configure which hosts can access your application.
# This is useful to prevent cache poisoning attacks.
hosts {
# Allow requests to example.com, its subdomains, and localhost:9000.
#allowed = [".example.com", "localhost:9000"]
}
}
## Evolutions
# https://www.playframework.com/documentation/latest/Evolutions
# ~~~~~
# Evolutions allows database scripts to be automatically run on startup in dev mode
# for database migrations. You must enable this by adding to build.sbt:
#
# libraryDependencies += evolutions
#
play.evolutions {
# You can disable evolutions for a specific datasource if necessary
#db.default.enabled = false
}
## Database Connection Pool
# https://www.playframework.com/documentation/latest/SettingsJDBC
# ~~~~~
# Play doesn't require a JDBC database to run, but you can easily enable one.
#
# libraryDependencies += jdbc
#
play.db {
# The combination of these two settings results in "db.default" as the
# default JDBC pool:
#config = "db"
#default = "default"
# Play uses HikariCP as the default connection pool. You can override
# settings by changing the prototype:
prototype {
# Sets a fixed JDBC connection pool size of 50
#hikaricp.minimumIdle = 50
#hikaricp.maximumPoolSize = 50
}
}
## JDBC Datasource
# https://www.playframework.com/documentation/latest/JavaDatabase
# https://www.playframework.com/documentation/latest/ScalaDatabase
# ~~~~~
# Once JDBC datasource is set up, you can work with several different
# database options:
#
# Slick (Scala preferred option): https://www.playframework.com/documentation/latest/PlaySlick
# JPA (Java preferred option): https://playframework.com/documentation/latest/JavaJPA
# EBean: https://playframework.com/documentation/latest/JavaEbean
# Anorm: https://www.playframework.com/documentation/latest/ScalaAnorm
#
db {
# You can declare as many datasources as you want.
# By convention, the default datasource is named `default`
# https://www.playframework.com/documentation/latest/Developing-with-the-H2-Database
#default.driver = org.h2.Driver
#default.url = "jdbc:h2:mem:play"
#default.username = sa
#default.password = ""
# You can turn on SQL logging for any datasource
# https://www.playframework.com/documentation/latest/Highlights25#Logging-SQL-statements
#default.logSql=true
}
#play.mailer {
# default.host=smtp.gmail.com
# default.port=587
# ssl=false
# default.tls=true
# default.user=.......gmail.com
# default.password=mypassword
# default.debug=true
# default.mock=false
#}
And this is my build.sbt file:
name := """play"""

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayJava)

scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  javaJdbc,
  cache,
  javaWs,
  "com.typesafe.play" %% "play-mailer" % "5.0.0-M1"
)
I added a dependency for the mailer:
libraryDependencies ++= Seq(
"com.typesafe.play" %% "play-mailer" % "5.0.0-M1"
)
And my Java code:
import org.apache.commons.mail.DefaultAuthenticator;
import org.apache.commons.mail.EmailException;
import org.apache.commons.mail.HtmlEmail;
import play.mvc.Controller;
import play.mvc.Result;
public class MailController extends Controller {
    public Result sendEmail() throws EmailException {
        HtmlEmail email = new HtmlEmail();
        String authuser = "..........@gmail.com";
        String authpwd = "XXXXXX";
        email.setSmtpPort(587);
        email.setAuthenticator(new DefaultAuthenticator(authuser, authpwd));
        email.setDebug(true);
        email.setHostName("smtp.gmail.com");
        email.setFrom("from@gmail.com", "SenderName");
        email.setSubject("TestMail");
        email.setHtmlMsg("<html><body><h1>welcome to u</h1></body></html>");
        email.addTo("to@gmail.com", "receiver name");
        email.setTLS(true);
        email.send();
        return play.mvc.Results.ok("Success");
    }
}
Like this, I currently have to write the SMTP configuration in every class. I want to configure SMTP only once, in the application.conf file. If you have any suggestions, please share them.
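What I am hoping for is something along these lines, so that the SMTP settings live only in application.conf (a rough sketch assuming play-mailer's Java API in play.libs.mailer; the addresses are placeholders and I have not verified this in my project):

import javax.inject.Inject;
import play.libs.mailer.Email;
import play.libs.mailer.MailerClient;
import play.mvc.Controller;
import play.mvc.Result;

public class MailController extends Controller {

    private final MailerClient mailerClient;

    @Inject
    public MailController(MailerClient mailerClient) {
        // host/port/tls/user/password come from the play.mailer block in application.conf
        this.mailerClient = mailerClient;
    }

    public Result sendEmail() {
        Email email = new Email();
        email.setSubject("TestMail");
        email.setFrom("SenderName <sender@example.com>");        // placeholder address
        email.addTo("receiver name <receiver@example.com>");     // placeholder address
        email.setBodyHtml("<html><body><h1>welcome to u</h1></body></html>");
        mailerClient.send(email);
        return ok("Success");
    }
}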

java.rmi.NoSuchObjectException: no such object exception

There have been a couple of questions about this already, but their answers suggest that the exported object has been GC'd on the server side and that this is causing the problems. However, it seems that this is not the issue here.
The mentioned exception is thrown only on a single machine:
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
With java:
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
This happened on the same machine with OpenJDK 7 as well.
According to other answers, I am supposed to keep a strong reference to the handling objects. I am doing that now, so what more can be done?
The same code works on Windows as well as on a different remote Linux machine with Java 7.
Any ideas why?
I have added finalizers to the classes involved, but none of them are called.
As suggested, I am using static references. As far as I can tell, there is no way the exported object can become eligible for GC. The exception is thrown on remote method invocation, right after the object lookup.
A piece of the class:
public class Client {

    // some fields
    private final int RMI_PORT;
    private static SearchTestServiceImpl searchTestService;
    private static Remote stub;
    private Registry registry;

    // and starting the service
    public void startService() throws RemoteException {
        createRegistry();
        searchTestService = new SearchTestServiceImpl(name);
        stub = UnicastRemoteObject.exportObject(searchTestService, RMI_PORT + 1);
        registry.rebind(SearchTestService.class.getName(), stub);
        log.info("Binding {} to port {}", SearchTestService.class.getName(), RMI_PORT + 1);
    }

    private void createRegistry() throws RemoteException {
        log.info("Starting RMI registry on port {}", RMI_PORT);
        registry = LocateRegistry.createRegistry(RMI_PORT);
    }

    (...)
}
And the bootstrapping code:
public class Bootstrap {

    private Logger log = LoggerFactory.getLogger(Bootstrap.class);
    private static Client c;

    public static void main(final String[] args) throws NumberFormatException, InterruptedException {
        // some preparations
        c = new Client(Integer.valueOf(port), name);
        c.startService();
        System.gc();
        System.runFinalization();
        synchronized (c) {
            c.wait();
        }
    }
}
And the stack trace:
java.rmi.NoSuchObjectException: no such object in table
at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(Unknown Source) ~[na:1.7.0_65]
at sun.rmi.transport.StreamRemoteCall.executeCall(Unknown Source) ~[na:1.7.0_65]
at sun.rmi.server.UnicastRef.invoke(Unknown Source) ~[na:1.7.0_65]
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(Unknown Source) ~[na:1.7.0_65]
at java.rmi.server.RemoteObjectInvocationHandler.invoke(Unknown Source) ~[na:1.7.0_65]
at com.sun.proxy.$Proxy0.getName(Unknown Source) ~[na:na]
(The exception occurs at the call to a method of the remote looked-up object, #getName in this example.)
The requested piece of code - the lookup and the call that throws the exception:
// somewhere
SearchTestService c = getClient(address); // this returns a nice stub
String name = c.getName();                // this is throwing the exception

private SearchTestService getClient(String string) throws NumberFormatException, RemoteException, NotBoundException {
    String[] parts = string.split(":");
    Registry registry = LocateRegistry.getRegistry(parts[0], Integer.parseInt(parts[1]));
    SearchTestService client = (SearchTestService) registry.lookup(SearchTestService.class.getName());
    return client;
}
Console output after running the "listening" client-side code (with the RMI registry):
10:17:55.915 [main] INFO pl.breeze.searchtest.client.Client - Starting RMI registry on port 12097
10:17:55.936 [main] INFO p.b.s.client.SearchTestServiceImpl - Test agent Breeze Dev staging is up and running
10:17:55.952 [main] INFO pl.breeze.searchtest.client.Client - Binding pl.choina.searchtest.remote.SearchTestService to port 12098
And this waits until manual shutdown - tested.
NoSuchObjectException
Javadoc:
A NoSuchObjectException is thrown if an attempt is made to invoke a method on an object that no longer exists in the remote virtual machine.
This means that the remote object referred to by the stub you are calling methods on has been unexported, i.e. the stub is 'stale'. The only way that can happen is by unexporting the object, either manually or as a result of GC.
As suggested I am using static references.
No you're not. You need to make the Registry reference static. Otherwise you just form a cycle between Client and Registry that can be garbage-collected all at once.
Why you're calling your server Client is another mystery.
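In other words, something like this (only the registry field changes; the rest of the class stays as posted):

public class Client {
    // ...
    private static SearchTestServiceImpl searchTestService;
    private static Remote stub;
    private static Registry registry; // static, so it cannot be collected along with the Client instance
    // ...
}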
EDIT: A few comments:
stub = UnicastRemoteObject.exportObject(searchTestService, RMI_PORT + 1);
There's no need to use a second port here. Just re-use the Registry port.
log.info("Binding {} to port {}", SearchTestService.class.getName(), RMI_PORT + 1);
This is misleading. You've already done both, but what you have done is:
Exported the object on the port, and
Bound the object to a name
in two separate steps.
System.gc();
System.runFinalization();
Strange things to be doing here, or indeed anywhere.
synchronized (c) {
c.wait();
}
This isn't reliable. You don't really need anything here, as RMI should keep the JVM open as long as it has exported remote objects, but you could do what the Registry does:
while (true)
{
Thread.sleep(Integer.MAX_VALUE);
}
with the appropriate exception handling.
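For example (a sketch):

try {
    while (true) {
        Thread.sleep(Integer.MAX_VALUE);
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt status and let the JVM exit
}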
I can't reproduce your problem, but then I'm on Windows 7.

Netty 4, Java 7 JVM SIGSEGV crash under load

I have a binary protocol implemented with Netty that is being performance tested, and the JVM is crashing with the below report. I do not know how to repeat the crash, but it does happen regularly and only under heavy load. I have the following dependencies:
java 7.0_51-b13
netty 4.0.18_Final
fedora 20
It appears that the array copy is occurring in the nioEventLoopGroup thread. The performance test I am running sends a large number of small messages over ~50 TCP connections, where a large number is about 1 million 200-byte messages per connection. Each message has 2 response messages sent back.
This is what I am doing to create Netty:
Bootstrap:
m_serverBootstrap.group(m_eventLoopGroup)
        .channel(NioServerSocketChannel.class)
        .localAddress(m_config.getSmppPort())
        .childAttr(InternalAttributeKeys.METRICS, m_metricRegistry)
        .childHandler(new CustomServerChannelInitializer());
m_serverBindChannelFuture = m_serverBootstrap.bind().sync();
CustomServerChannelInitializer:
protected void initChannel(SocketChannel ch) throws Exception {
    log.info("initChannel(SocketChannel ch) {} {} ", ch, this);
    ch.pipeline()
            .addLast(new IpFilterHandler())
            .addLast(new ProtocolEncoder())
            .addLast(new LengthFieldBasedFrameDecoder(4 * 1024, 0, 4, -4, 0))
            .addLast(new ProtocolDecoder())
            .addLast(new WindowingHandler())
            .addLast(new SequenceNumberAssignmentHandler())
            .addLast("idleState", new IdleStateHandler(idleTime, idleTime, idleTime))
            .addLast("idleDisconnect", m_idleDisconnectHandler)
            .addLast("auth", m_authHandler)
            .addLast("catchall", new CatchallHandler(false));
    ch.config().setAllocator(PooledByteBufAllocator.DEFAULT);
    ch.config().setAutoRead(true);
    log.info("finished initChannel(SocketChannel ch) {} {} ", ch, this);
}
After the initial connection, the pipeline is altered again in the authHandler:
@Override
protected void channelRead0(ChannelHandlerContext ctx, CustomMessage msg) throws Exception {
    ResponseMessage response = auth(msg, ctx);
    ctx.pipeline().replace("auth", "msghandler", new MessageHandler());
    ctx.pipeline().replace("idleState", "inactivityPeriod", new IdleStateHandler());
    ctx.pipeline().addAfter("msghandler", "responsehandler", new ResponseHandler());
    ctx.pipeline().addAfter("responsehandler", "heartbeat", new HeartbeatHandler());
    ctx.pipeline().addAfter("heartbeat", "disconnect", new DisconnectHandler());
    ctx.channel().closeFuture().addListener(new CleanupChannelFutureListener(ctx));
    ctx.writeAndFlush(response);
}
JVM report. I have a more detailed report if it helps: http://pastebin.com/RV0KqPMf
If the JMX threads in the detailed report are bothering you, I can and have reproduced the issue without them.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007ffa9eb18eaa, pid=1731, tid=140710808540928
#
# JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# v ~StubRoutines::jbyte_disjoint_arraycopy
#
# Core dump written. Default location: /home/user/dir/core or core.1731
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread (0x00007ff9fc06f800): JavaThread "nioEventLoopGroup-2-12" [_thread_in_Java, id=1912, stack(0x00007ff9c9b25000,0x00007ff9c9c26000)]
siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x00007ff987df7715
What is the best way to find out what is causing this SIGSEGV in the JVM?
This is definitely a Netty bug.
Netty 4.x heavily uses Unsafe API - Oracle JDK internal API that allows raw memory access.
See PlatformDependent0.java from Netty sources.
The crash log tells that the problem happens inside Unsafe.copyMemory call where the target is a byte[] array in Java Heap young generation, and the source points to an unmapped memory region. Most likely this is caused by an attempt to get bytes from a native buffer that has been previously released. There are no sanity checks inside Unsafe API, so any misuse typically ends up with a JVM crash.
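In application code, this kind of use-after-release normally surfaces as an IllegalReferenceCountException, because the checked ByteBuf accessors verify the reference count first; the hard crash happens when the copy goes through the unchecked Unsafe path inside Netty. A sketch of the lifecycle being violated (illustrative only, not the actual buggy code path in 4.0.18):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class UseAfterReleaseSketch {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(256); // pooled native memory
        buf.writeBytes(new byte[200]);
        buf.release(); // refCnt drops to 0; the underlying region may be reused or unmapped

        byte[] dst = new byte[200];
        buf.getBytes(0, dst); // checked accessor: throws IllegalReferenceCountException here;
                              // an unchecked Unsafe copy of the same region is what crashes the JVM
    }
}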
Upgrading from Netty 4.0.18.Final to 4.0.20.Final fixed this issue.

How to initialize JRockit MBean tree

I have the following code that just lists all MBean names found in platform MBean server:
public static void main(final String[] args) throws Exception {
    initJMX();
}

@SuppressWarnings("unchecked")
private static void initJMX() throws IOException, MalformedURLException, AttributeNotFoundException,
        InstanceNotFoundException, MalformedObjectNameException, MBeanException, ReflectionException,
        NullPointerException {
    JMXConnector jmxc = null;
    final Map<String, String> map = new HashMap<String, String>();
    jmxc = JMXConnectorFactory.newJMXConnector(createConnectionURL("localhost", 7788), map);
    jmxc.connect();
    final MBeanServerConnection connection = jmxc.getMBeanServerConnection();
    final String[] domains = connection.getDomains();
    for (final String domain : domains) {
        final Set<ObjectName> mBeans = connection.queryNames(new ObjectName(domain + ":*"), null);
        for (final ObjectName name : mBeans) {
            System.out.println(name);
        }
    }
    jmxc.close();
}
When I try to run this code with JRockit 1.5.0_4.0.1 with the following parameters:
-Xmanagement:ssl=false,authenticate=false,autodiscovery=false,port=7788
And it prints the following list:
[INFO ][mgmnt ] Remote JMX connector started at address localhost:7788
[INFO ][mgmnt ] Local JMX connector started
com.oracle.jrockit:type=FlightRecorder
java.util.logging:type=Logging
JMImplementation:type=MBeanServerDelegate
java.lang:type=Compilation
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Young Collector
java.lang:type=MemoryManager,name=Class Manager
java.lang:type=MemoryPool,name=ClassBlock Memory
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Old Collector
java.lang:type=Runtime
java.lang:type=MemoryPool,name=Nursery
java.lang:type=ClassLoading
java.lang:type=Threading
java.lang:type=MemoryPool,name=Class Memory
java.lang:type=OperatingSystem
java.lang:type=Memory
java.lang:type=MemoryPool,name=Old Space
But if I put a breakpoint before the call to the initJMX method and at that point connect to that JVM with JRMC, then JRMC displays many more MBeans, and after I continue program execution it also prints a different list which contains more JRockit-related MBeans:
[INFO ][mgmnt ] Remote JMX connector started at address T500W7AAD:7788
[INFO ][mgmnt ] Local JMX connector started
com.oracle.jrockit:type=FlightRecorder
oracle.jrockit.management:type=PerfCounters
oracle.jrockit.management:type=Compilation
oracle.jrockit.management:type=Log
oracle.jrockit.management:type=Profiler
oracle.jrockit.management:type=MemLeak
oracle.jrockit.management:type=JRockitConsole
oracle.jrockit.management:type=GarbageCollector
oracle.jrockit.management:type=Runtime
oracle.jrockit.management:type=Threading
oracle.jrockit.management:type=DiagnosticCommand
oracle.jrockit.management:type=Memory
java.util.logging:type=Logging
JMImplementation:type=MBeanServerDelegate
java.lang:type=Compilation
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Young Collector
java.lang:type=MemoryManager,name=Class Manager
java.lang:type=MemoryPool,name=ClassBlock Memory
java.lang:type=GarbageCollector,name=Garbage collection optimized for throughput Old Collector
java.lang:type=Runtime
java.lang:type=MemoryPool,name=Nursery
java.lang:type=ClassLoading
java.lang:type=Threading
java.lang:type=MemoryPool,name=Class Memory
java.lang:type=OperatingSystem
java.lang:type=Memory
java.lang:type=MemoryPool,name=Old Space
Is there a way to tell JRockit to initialize those beans automatically on JVM startup, without needing an explicit JRMC connection? The problem is that I'm trying to write code that reuses some of those MBeans, but they are not available until I connect with JRMC.
UPDATE: This seems to be a JRockit jdk1.5.0_4.0.1 problem, as the same code works as expected on JRockit jdk6.0_4.1.0.
This appears to be a problem with the Windows version of JRockit that I use:
java version "1.5.0_24"
Java(TM) Platform, Standard Edition for Business (build 1.5.0_24-b02)
Oracle JRockit(R) (build R28.0.1-21-133393-1.5.0_24-20100512-2131-windows-x86_64, compiled mode)
Same code works as expected on latest JRockit for JDK 1.6.0 on Windows:
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Oracle JRockit(R) (build R28.2.2-7-148152-1.6.0_29-20111221-2104-windows-x86_64, compiled mode)
and on the same JRockit version, but for Linux:
java version "1.5.0_24"
Java(TM) Platform, Standard Edition for Business (build 1.5.0_24-b02)
Oracle JRockit(R) (build R28.1.0-123-138454-1.5.0_24-20101014-1350-linux-x86_64, compiled mode)
Try your query with an object name of *:* :
final Set<ObjectName> mBeans = connection.queryNames(new ObjectName("*:*"), null);
Maybe there is more than one MBeanServer in JRockit, and JRMC finds all of the MBeanServers.
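To check whether more than one MBeanServer exists, you can enumerate them from inside the target JVM (a sketch; findMBeanServer(null) returns every server registered in that JVM):

import java.util.List;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

public class ListMBeanServers {
    public static void main(String[] args) throws Exception {
        List<MBeanServer> servers = MBeanServerFactory.findMBeanServer(null); // null agentId = all servers
        for (MBeanServer server : servers) {
            int count = server.queryNames(new ObjectName("*:*"), null).size();
            System.out.println(server + " -> " + count + " MBeans");
        }
    }
}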
