I connect to an external host successfully:
void createConnection() throws IOException
{
    logger.info("Connecting to " + host_ + ":" + port_);
    Socket sock_ = new Socket(host_, port_);
}
The connection is established successfully; however, I need to implement a reconnection mechanism that is triggered when the host goes down or is killed, and that then reconnects to a new host and port.
Is there such a mechanism in the JDK, something like a trigger event or an Observer?
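As far as I know, the plain JDK does not expose a connection-lost event or observer for a Socket; the failure surfaces as an IOException on connect/read/write, and the reconnect logic has to be written by hand. A minimal sketch of that idea, with a made-up list of fallback endpoints and retry delay:

import java.io.IOException;
import java.net.Socket;
import java.util.logging.Logger;

// Illustrative only: cycle through candidate endpoints and retry with a fixed
// delay until a connection succeeds. Hosts, ports and the delay are placeholders.
class ReconnectingClient {
    private static final Logger logger = Logger.getLogger(ReconnectingClient.class.getName());
    private static final String[] HOSTS = { "hostA.example.com", "hostB.example.com" };
    private static final int[] PORTS = { 9000, 9001 };
    private static final long RETRY_DELAY_MS = 5000;

    Socket connectWithRetry() throws InterruptedException {
        int attempt = 0;
        while (true) {
            int i = attempt % HOSTS.length;
            try {
                logger.info("Connecting to " + HOSTS[i] + ":" + PORTS[i]);
                return new Socket(HOSTS[i], PORTS[i]); // success: caller owns the socket
            } catch (IOException e) {
                logger.warning("Connect failed: " + e.getMessage() + ", retrying...");
                attempt++;                    // fail over to the next endpoint
                Thread.sleep(RETRY_DELAY_MS); // back off before the next attempt
            }
        }
    }
}

The same loop can be triggered again from the read/write path whenever an IOException indicates that the current connection has died.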
I am looking at the documentation of PoolingHttpClientConnectionManager https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.html
There is a method, setValidateAfterInactivity, but validateAfterInactivity is not very clear to me. The documentation says: "Defines period of inactivity in milliseconds after which persistent connections must be re-validated prior to being leased to the consumer."
How exactly does it re-validate the connection? I want to understand the process. Does it send an HTTP request to the server to re-validate, or is it something else?
What criteria/mechanism does it use to revalidate the connection? How does it all work?
It uses the managed HTTP connection itself to do the validation:
final ManagedHttpClientConnection conn = poolEntry.getConnection();
if (conn != null) {
    conn.activate();
} else {
    poolEntry.assignConnection(connFactory.createConnection(null));
}
if (log.isDebugEnabled()) {
    log.debug("Connection leased: " + ConnPoolSupport.formatStats(
            poolEntry.getConnection(), route, state, pool));
}
source code here
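For context, the setting itself is applied on the manager; a minimal configuration sketch (the pool size and the 5-second value are chosen purely for illustration):

// Connections idle in the pool for more than 5 s are checked before being
// leased again; values below are illustrative.
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(20);
cm.setValidateAfterInactivity(5000); // milliseconds of inactivity before re-validation

CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(cm)
        .build();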
I expect getConnectionState() to report connected/disconnected depending on the device: if it is sending messages I should see Connected, and if it is not I should see Disconnected. But each time I run the Java program below I get Disconnected, regardless of whether the device is sending messages or not.
RegistryManager registryManager = RegistryManager.createFromConnectionString(connectionString);
System.out.println(registryManager.getDevices(1000));
while (true) {
    ArrayList<Device> devicesList = registryManager.getDevices(1000);
    for (Device device : devicesList) {
        /*System.out.println(device.getDeviceId());
        System.out.println(device.getPrimaryKey());
        System.out.println(device.getSecondaryKey());*/
        System.out.println(device.getDeviceId());
        System.out.println(device.getConnectionState());
        /*System.out.println(device.getConnectionStateUpdatedTime());
        System.out.println(device.getLastActivityTime());
        System.out.println(device.getStatusReason());
        System.out.println(device.getStatusUpdatedTime());
        System.out.println(device.getSymmetricKey());
        System.out.println(device.geteTag());*/
    }
}
I definitely am seeing otherwise.
I created a simple C# console application using the code below:
static async void QueryDevices()
{
    RegistryManager manager = RegistryManager.CreateFromConnectionString(connectionString);
    while (true)
    {
        var devices = await manager.GetDevicesAsync(100);
        foreach (var item in devices)
        {
            Console.WriteLine(DateTime.Now + ": " + item.Id + ", " + item.ConnectionState);
            System.Threading.Thread.Sleep(100);
        }
    }
}
The trick here is to always re-query the whole device list, because the ConnectionState property behaves like a snapshot taken when the Device instance was fetched; it does not change on an already-retrieved instance even when the actual state changes.
My output looks like the one below; the Connected state appears while I'm using a Java client sample to send messages to the IoT Hub.
I'm experiencing java.net.ConnectException in random ways.
My servlet runs in Tomcat 6.0 (JDK 1.6).
The servlet periodically fetches data from 4-5 third-party web servers.
The servlet uses a ScheduledExecutorService to fetch the data.
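For reference, the scheduling part looks roughly like this; this is a sketch, not the actual servlet code, and the class name, fetch interval and wiring are made up:

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the periodic fetch (JDK 1.6 style, so no lambdas).
public class FeedPoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(final List<String> feedUrls) {
        scheduler.scheduleWithFixedDelay(new Runnable() {
            public void run() {
                for (String url : feedUrls) {
                    // call the fetch(aURL, aEncoding) method shown further down
                    // and hand the resulting XML to the RSS parser
                }
            }
        }, 0, 15, TimeUnit.MINUTES); // interval chosen purely for illustration
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}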
Run locally, all is fine and dandy. Run on my prod server, I see semi-random failures to fetch data from 1 of the third parties (Canadian weather data).
These are the URLs that are failing (plain RSS feeds):
http://weather.gc.ca/rss/city/pe-1_e.xml
http://weather.gc.ca/rss/city/pe-2_e.xml
http://weather.gc.ca/rss/city/pe-3_e.xml
http://weather.gc.ca/rss/city/pe-4_e.xml
http://weather.gc.ca/rss/city/pe-5_e.xml
http://weather.gc.ca/rss/city/pe-6_e.xml
http://meteo.gc.ca/rss/city/pe-1_f.xml
http://meteo.gc.ca/rss/city/pe-2_f.xml
http://meteo.gc.ca/rss/city/pe-3_f.xml
http://meteo.gc.ca/rss/city/pe-4_f.xml
http://meteo.gc.ca/rss/city/pe-5_f.xml
http://meteo.gc.ca/rss/city/pe-6_f.xml
Strange: each cycle, when I periodically fetch this data, the success/fail is all over the map: some succeed, some fail, but it never seems to be the same twice. So, I'm not completely blocked, just randomly blocked.
I slowed down my fetches, by introducing a 61s pause between each one. That had no effect.
The guts of the code that does the actual fetch:
private static final int TIMEOUT = 60*1000; //msecs
public String fetch(String aURL, String aEncoding /*UTF-8*/) {
    String result = "";
    long start = System.currentTimeMillis();
    Scanner scanner = null;
    URLConnection connection = null;
    try {
        URL url = new URL(aURL);
        connection = url.openConnection(); //this doesn't talk to the network yet
        connection.setConnectTimeout(TIMEOUT);
        connection.setReadTimeout(TIMEOUT);
        connection.connect(); //actually connects; this shouldn't be needed here
        scanner = new Scanner(connection.getInputStream(), aEncoding);
        scanner.useDelimiter(END_OF_INPUT);
        result = scanner.next();
    }
    catch (IOException ex) {
        long end = System.currentTimeMillis();
        long time = end - start;
        // connection can still be null here if new URL(...) or openConnection() failed
        String timeouts = (connection == null) ? "" :
            " Connection Timeout: " + connection.getConnectTimeout() + "msecs. Read timeout:" +
            connection.getReadTimeout() + "msecs.";
        fLogger.severe(
            "Problem connecting to " + aURL + " Encoding:" + aEncoding +
            ". Exception: " + ex.getMessage() + " " + ex.toString() + " Cause:" + ex.getCause() +
            timeouts + " Time taken to fail: " + time + " msecs."
        );
    }
    finally {
        if (scanner != null) scanner.close();
    }
    return result;
}
Example log entry showing a failure:
SEVERE: Problem connecting to http://weather.gc.ca/rss/city/pe-5_e.xml Encoding:UTF-8.
Exception: Connection timed out java.net.ConnectException: Connection timed out
Cause:null
Connection Timeout: 60000msecs.
Read timeout:60000msecs.
Time taken to fail: 15028 msecs.
Note that the time to fail is always 15 s plus a tiny amount.
Also note that it fails well before reaching the configured 60 s connect timeout.
The host-server admins (Environment Canada) state that they don't have any kind of a blacklist for the IP address of misbehaving clients.
Also important: the code had been running for several months without this happening.
Someone suggested that instead I should use curl, a bash script, and cron. I implemented that, and it works fine.
I'm not able to solve this problem using Java.
In a non-blocking connect on the client side, it can happen that the server is not up and the connection cannot be established. I use a selector to wait for OP_CONNECT in order to figure out whether the connection can be established, as follows:
connection = SocketChannel.open();
connection.configureBlocking(false);
// Kick off connection establishment
connection.connect(hostAddress);
connection.register(selector, SelectionKey.OP_CONNECT);
this.selector.select(2000);
// Iterate over the set of keys for which events are available
Iterator<SelectionKey> selectedKeys = this.selector.selectedKeys().iterator();
if (!selectedKeys.hasNext()) {
    throw new IllegalStateException("Could not connect to " + hostAddress);
}
SelectionKey key = selectedKeys.next();
boolean valid = key.isValid();
if (!key.isConnectable()) {
    throw new IllegalStateException("Could not connect to " + hostAddress);
}
finishConnection(key);
However, even if I do not start the server, key.isConnectable() returns true... I don't understand why that is the case, or how to make sure that I only call selector.select() again once I am actually connected.
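For reference, OP_CONNECT firing (and key.isConnectable() returning true) only means the pending connect has an outcome to report, not that it succeeded; the result is only known when finishConnect() is called on the channel. The finishConnection(key) helper is not shown in the question, but it typically looks something like this (an assumed sketch):

// Assumed shape of the helper; the real finishConnection(key) is not shown above.
private void finishConnection(SelectionKey key) throws IOException {
    SocketChannel channel = (SocketChannel) key.channel();
    // finishConnect() reports the real outcome of the non-blocking connect:
    // it throws an IOException (e.g. ConnectException) if the connect failed,
    // returns true once the connection is fully established,
    // and returns false if the connect is still in progress.
    if (channel.finishConnect()) {
        // Connected: switch interest to reads before selecting again.
        key.interestOps(SelectionKey.OP_READ);
    }
}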
I have a system with a db4o server that has two clients. One client is hosted in process, the other is a web server hosting a number of servlets that need to query the database.
In the web server's connection code, I have registered for the commit event, and use it to refresh objects as suggested by the db4o documentation at http://community.versant.com/documentation/reference/db4o-8.0/java/reference/Content/advanced_topics/callbacks/possible_usecases/committed_event_example.htm :
client = Db4oClientServer.openClient (context.getBean ("db4oClientConfiguration", ClientConfiguration.class),
        arg0.getServletContext ().getInitParameter ("databasehost"),
        Integer.parseInt (arg0.getServletContext ().getInitParameter ("databaseport")),
        arg0.getServletContext ().getInitParameter ("databaseuser"),
        arg0.getServletContext ().getInitParameter ("databasepassword"));
System.out.println ("DB4O connection established");

EventRegistry events = EventRegistryFactory.forObjectContainer (client);
events.committed ().addListener (new EventListener4<CommitEventArgs> () {
    public void onEvent (Event4<CommitEventArgs> commitEvent, CommitEventArgs commitEventArgs)
    {
        for (Iterator4<?> it = commitEventArgs.updated ().iterator (); it.moveNext ();)
        {
            LazyObjectReference reference = (LazyObjectReference) it.current ();
            System.out.println ("Updated object: " + reference.getClass () + ":" + reference.getInternalID ());
            //if (trackedClasses.contains (reference.getClass ()))
            {
                Object obj = reference.getObject ();
                commitEventArgs.objectContainer ().ext ().refresh (obj, 1);
                System.out.println (" => updated (" + obj + ")");
            }
        }
    }
});
In the in-process client, the following code is then executed:
try {
    PlayerCharacter pc = new PlayerCharacter (player, name);
    pc.setBio(bio);
    pc.setArchetype(archetype);
    player.getCharacters ().add (pc);
    database.store (pc);
    database.store (player.getCharacters ());
    database.store (player);
    database.commit ();
    con.sendEvent (id, "CHARACTER_CREATED".getBytes (Constants.CHARSET));
}
catch (Exception e)
{
    con.sendEvent (id, EventNames.ERROR, e.toString ());
}
The 'CHARACTER_CREATED' event gets sent successfully, so I know that commit isn't throwing an exception, but nothing shows up on the other client. It continues to use the old versions of the objects, and the 'updated object' messages I'm expecting don't show up on the server console.
Any ideas what I'm doing wrong?
Apparently the .committed() event only fires on a client when the commit comes from another TCP client.
So you would need to turn your internal .openClient() / .openSession() clients into full-blown TCP clients to see the events.
The .openClient() / .openSession() object containers are much more lightweight and bypass all the code related to network communication, and apparently also the event distribution across the network.
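A minimal sketch of that change, assuming the in-process client currently comes from the embedded ObjectServer (e.g. server.openClient()) and that connecting back over localhost is acceptable; the port and credentials below are placeholders:

// Before (embedded, lightweight; commit events are not distributed):
// ObjectContainer database = server.openClient();

// After (full TCP client; the server must have called grantAccess(user, password)):
ObjectContainer database = Db4oClientServer.openClient(
        Db4oClientServer.newClientConfiguration(),
        "localhost",
        4488,
        "databaseuser",
        "databasepassword");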