I want to follow the best practice of releasing resources at the end to prevent any connection leaks. Here is my code in HelperClass.
public static DynamoDB getDynamoDBConnection()
{
try
{
dynamoDB = new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
}
catch(AmazonServiceException ase)
{
//ase.printStackTrace();
// SLF4J has no error(Object) overload; log the message and the stack trace together
slf4jLogger.error(ase.getMessage(), ase);
}
catch (Exception e)
{
slf4jLogger.error(e.getMessage(), e);
}
finally
{
dynamoDB.shutdown();
}
return dynamoDB;
}
My doubt is: since the finally block will be executed no matter what, will dynamoDB return an already shut-down connection, because it is closed in the finally block before the return statement executes? TIA.
Your understanding is correct. dynamoDB.shutdown() will always execute before return dynamoDB.
I'm not familiar with the framework you're working with, but I would probably organize the code as follows:
public static DynamoDB getDynamoDBConnection()
throws ApplicationSpecificException {
try {
return new DynamoDB(new AmazonDynamoDBClient(
new ProfileCredentialsProvider()));
} catch(AmazonServiceException ase) {
slf4jLogger.error(ase.getMessage(), ase); // message plus stack trace
throw new ApplicationSpecificException("some good message", ase);
}
}
and use it as
DynamoDB con = null;
try {
con = getDynamoDBConnection();
// Do whatever you need to do with con
} catch (ApplicationSpecificException e) {
// deal with it gracefully
} finally {
if (con != null)
con.shutdown();
}
You could also create an AutoCloseable wrapper for your dynamoDB connection (that calls shutdown inside close) and do
try (DynamoDB con = getDynamoDBConnection()) {
// Do whatever you need to do with con
} catch (ApplicationSpecificException e) {
// deal with it gracefully
}
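As a rough sketch, such a wrapper could look like this (ManagedDynamoDB is a made-up name used purely for illustration, not an AWS SDK class); the try-with-resources above would then declare the wrapper type instead of DynamoDB directly:
public class ManagedDynamoDB implements AutoCloseable {
    private final DynamoDB dynamoDB;

    public ManagedDynamoDB(DynamoDB dynamoDB) {
        this.dynamoDB = dynamoDB;
    }

    public DynamoDB get() {
        return dynamoDB;
    }

    @Override
    public void close() {
        // Invoked automatically at the end of a try-with-resources block
        dynamoDB.shutdown();
    }
}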
Yes, dynamoDB will return an already shut-down connection, as dynamoDB.shutdown() will be executed before the return statement, always.
Although I am not answering your question about the finally block being executed always (there are several answers to that question already), I would like to share some information about how DynamoDB clients are expected to be used.
The DynamoDB client is a thread-safe object and is intended to be shared between multiple threads - you can create a global one for your application and re-use the object wherever you need it. Generally, the client creation is managed by some sort of IoC container (the Spring IoC container, for example) and then provided by the container to whatever code needs it through dependency injection.
Underneath the hood, the DynamoDB client maintains a pool of HTTP connections for communicating with the DynamoDB endpoint and uses connections from within this pool. The various parameters of the pool can be configured by passing an instance of the ClientConfiguration object when constructing the client. For example, one of the parameters is the maximum number of open HTTP connections allowed.
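For example, a shared client with a tuned pool could be constructed roughly like this (the pool values are arbitrary illustrations, not recommendations):
ClientConfiguration config = new ClientConfiguration()
        .withMaxConnections(50)         // maximum open HTTP connections in the pool
        .withConnectionTimeout(10000);  // milliseconds to wait when establishing a connection
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new ProfileCredentialsProvider(), config);
DynamoDB dynamoDB = new DynamoDB(client); // share this single instance across the application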
With the above understanding, I would say that since the DynamoDB client manages the lifecycle of HTTP connections, resource leaks shouldn't really be a concern of code that uses the DynamoDB client.
How about we "imitate" the error and see what happens? This is what I mean:
___Case 1___
try{
// dynamoDB = new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
throw new AmazonServiceException("Whatever parameters required to instantiate this exception");
} catch(AmazonServiceException ase)
{
//ase.printStackTrace();
slf4jLogger.error(ase.getMessage(), ase);
}
catch (Exception e)
{
slf4jLogger.error(e.getMessage(), e);
}
finally
{
//dynamoDB.shutdown();
slf4jLogger.info("Database gracefully shutdowned");
}
___Case 2___
try{
// dynamoDB = new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
throw new Exception("Whatever parameters required to instantiate this exception");
} catch(AmazonServiceException ase)
{
//ase.printStackTrace();
slf4jLogger.error(ase.getMessage(), ase);
}
catch (Exception e)
{
slf4jLogger.error(e.getMessage(), e);
}
finally
{
//dynamoDB.shutdown();
slf4jLogger.info("Database gracefully shutdowned");
}
These exercises would be a perfect place to use unit tests, and more specifically mock tests. I suggest you take a close look at JMockit, which will help you write such tests much more easily.
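As a rough sketch of such a test (the test name and the asserted behaviour are assumptions; adjust them to whatever getDynamoDBConnection() is supposed to do on failure):
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import mockit.Expectations;
import mockit.Mocked;
import org.junit.Test;

public class HelperClassTest {

    @Mocked
    AmazonDynamoDBClient client; // with @Mocked, every instantiation of this class is faked

    @Test
    public void serviceExceptionIsHandled() {
        new Expectations() {{
            // Make the constructor used inside getDynamoDBConnection() throw
            new AmazonDynamoDBClient((ProfileCredentialsProvider) any);
            result = new AmazonServiceException("simulated service failure");
        }};

        // Exercise the helper and assert on the behaviour you expect
        // (a thrown ApplicationSpecificException, a logged error, ...)
        HelperClass.getDynamoDBConnection();
    }
}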
Related
I have a server that can receive multiple requests at the same time.
In my server, I have to do some processing and wait for a response. This processing is done by an external library, so I don't know how long I should wait.
So the Server looks like :
public class MyServer{
@Override
//method from the library
public void workonRequest(){
//---
response=[...]
}
public void listenRequest() {
new Thread(() -> {
while (true) {
try {
socket = server.accept();
ObjectInputStream input = new ObjectInputStream(socket.getInputStream());
ObjectOutputStream output = new ObjectOutputStream(socket.getOutputStream());
socket.setTcpNoDelay(true); //TODO : Not sure !
new Thread(() -> {
try {
handleRequest(input, output);
} catch (IOException e) {
throw new RuntimeException(e);
}
}).start();
} catch (IOException ex) {
System.out.println(ex.getMessage());
}
}
}).start();
}
And the handle request method is :
public void handleRequest(ObjectInputStream input, ObjectOutputStream output) throws IOException {
    while (true) {
        //forward the request to the library
        //work on it [means using the library and waiting]
        // return response
    }
}
The response object is the result that I want to return to the client.
How to deal with the problem of waiting for the answer?
How can I make sure that there will be no problems when more than 2 clients send requests at the same time.
Thanks in advance
How to deal with the problem of waiting for the answer?
Using while(true) can create issues because you are blocking the thread, and opening sub-threads and multiple streams makes it more complex. There is an easier way, called reactive programming, which handles this kind of multi-threaded issue easily; Quarkus and Spring both offer async/reactive solutions. If you still want to manage your sockets from Java code, you can use Akka.
How can I make sure that there will be no problems when more than 2 clients send requests at the same time.
That can be done by not blocking the main thread. If you manage to use a reactive and/or async approach, you will not have that problem.
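If you do stay with plain sockets, a minimal sketch of the "don't block the accept loop" idea is to hand each accepted socket to a bounded thread pool instead of creating an unbounded number of threads (this is only an illustration, not the reactive approach recommended above; class and method names are made up):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedServer {
    // Caps how many requests are processed concurrently
    private final ExecutorService pool = Executors.newFixedThreadPool(10);

    public void listenRequest(ServerSocket server) {
        new Thread(() -> {
            while (true) {
                try {
                    Socket socket = server.accept();
                    // The accept loop never blocks on request processing
                    pool.submit(() -> handleRequest(socket));
                } catch (IOException ex) {
                    System.out.println(ex.getMessage());
                }
            }
        }).start();
    }

    private void handleRequest(Socket socket) {
        // call the external library here and write the response back on the socket
    }
}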
Reference
https://quarkus.io/guides/getting-started-reactive
https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html
My current Lambda function calls a 3rd party web service synchronously. This function occasionally times out (the current timeout is set to 25s and cannot be increased further).
My code is something like:
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
try{
response = calling 3rd party REST service
}catch(Exception e){
//handle exceptions
}
}
1) I want to handle the timeout myself (tracking the time and reacting a few milliseconds before the actual timeout) within my Lambda function, sending a custom error message back to the client.
How can I effectively use the
context.getRemainingTimeInMillis()
method to track the time remaining while my synchronous call is running? I am planning to call context.getRemainingTimeInMillis() asynchronously. Is that the right approach?
2) What is a good way to test the custom timeout functionality?
I solved my problem by increasing the Lambda timeout, invoking my process in a new thread, and timing the thread out after n seconds.
ExecutorService service = Executors.newSingleThreadExecutor();
try {
    Runnable r = () -> {
        try {
            myFunction();
        } catch (Exception e) {
            e.printStackTrace();
        }
    };
    Future<?> f = service.submit(r);
    f.get(n, TimeUnit.MILLISECONDS); // attempt the task for n milliseconds
} catch (TimeoutException toe) {
    // custom logic
} catch (InterruptedException | ExecutionException e) {
    // handle the other checked exceptions thrown by Future.get()
} finally {
    service.shutdownNow(); // stop the worker thread so the Lambda can finish cleanly
}
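If the handler has access to the Lambda Context object (as in the question), the same pattern can use the remaining execution time, minus a safety margin, as the wait budget instead of a fixed n. A sketch, where f is the Future from the snippet above and the 2000 ms margin is an arbitrary illustrative value:
long budget = context.getRemainingTimeInMillis() - 2000; // leave time to build the custom error response
try {
    f.get(budget, TimeUnit.MILLISECONDS);
} catch (TimeoutException toe) {
    // return the custom error message to the client here
} catch (InterruptedException | ExecutionException e) {
    // handle failures of the call itself
}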
Another option is to use the
readTimeOut
property of the RestClient (in my case Jersey) to set the timeout. But I see that this property is not working consistently within the Lambda code. Not sure if it's an issue with the Jersey client or the Lambda.
You can try a cancellation token to return a custom exception from the Lambda before it times out.
try
{
var tokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(1)); // set timeout value
var taskResult = ApiCall(); // call web service method
while (!taskResult.IsCompleted)
{
if (tokenSource.IsCancellationRequested)
{
throw new OperationCanceledException("time out for lambda"); // throw custom exceptions eg : OperationCanceledException
}
}
return taskResult.Result;
}
catch (OperationCanceledException ex)
{
// handle exception
}
I'm new to vert.x and would like to know if it's possible to configure the EventBus somehow to make it work sequentially?
I mean I need to send the requests one by one using vert.x.
At the moment I have this code, which uses the event loop and waits until all handlers have finished, but I don't need it done that fast; the idea is to free the server from lots of simultaneous requests. Here eb_send() uses the default EventBus.send() method. In other words, I want to execute all requests in a blocking fashion, waiting for each answer before sending the next request.
List<Future> queue = new ArrayList<>();
files.forEach(fileInfo -> {
Future<JsonObject> trashStatusHandler = Future.future();
queue.add(trashStatusHandler);
eb_send(segment, StorageType.getAddress(StorageType.getStorageType(fileInfo.getString("storageType"))) + ".getTrashStatus", fileInfo, reply -> {
Entity dummy = createDummySegment();
try {
if (reply.succeeded()) {
//succeded
}
} catch (Exception ex) {
log.error(ex);
}
trashStatusHandler.complete();
});
});
The basic idea is to extract this into a function, which you would invoke recursively.
public void sendFile(List<JsonObject> files, AtomicInteger c) {
    // Take the current element (the original snippet referenced fileInfo without defining it)
    JsonObject fileInfo = files.get(c.get());
    eb_send(segment, StorageType.getAddress(StorageType.getStorageType(fileInfo.getString("storageType"))) + ".getTrashStatus", fileInfo, reply -> {
        Entity dummy = createDummySegment();
        try {
            if (reply.succeeded()) {
                //succeeded
            }
            // Recursion: only send the next request once this reply has arrived
            if (c.incrementAndGet() < files.size()) {
                sendFile(files, c);
            }
        } catch (Exception ex) {
            log.error(ex);
        }
    });
}
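You would start the chain with a fresh counter, for example (assuming files is the same list as in the question):
sendFile(files, new AtomicInteger(0));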
I’m writing in order to get some help.
In short, I'm trying to use com.unboundid.ldap.sdk (though it is not essential; I get the same problem if I use Oracle's javax.naming.ldap.*) to handle LDAP transactions, and I get the following error:
Exception in thread "Main Thread" java.lang.AssertionError: Result EndTransactionExtendedResult(resultCode=2 (protocol error), diagnosticMessage='protocol error') did not have the expected result code of '0 (success)'.
at com.unboundid.util.LDAPTestUtils.assertResultCodeEquals(LDAPTestUtils.java:1484)
at pkg.Main.main(Main.java:116)
My program is the following (I'm using the simple example from https://www.unboundid.com/products/ldap-sdk/docs/javadoc/com/unboundid/ldap/sdk/extensions/StartTransactionExtendedRequest.html):
public class Main {
public static void main( String[] args ) throws LDAPException {
LDAPConnection connection = null;
try {
connection = new LDAPConnection("***", ***, "***", "***");
} catch (LDAPException e1) {
e1.printStackTrace();
}
// Use the start transaction extended operation to begin a transaction.
StartTransactionExtendedResult startTxnResult;
try
{
startTxnResult = (StartTransactionExtendedResult)
connection.processExtendedOperation(
new StartTransactionExtendedRequest());
// This doesn't necessarily mean that the operation was successful, since
// some kinds of extended operations return non-success results under
// normal conditions.
}
catch (LDAPException le)
{
// For an extended operation, this generally means that a problem was
// encountered while trying to send the request or read the result.
startTxnResult = new StartTransactionExtendedResult(
new ExtendedResult(le));
}
LDAPTestUtils.assertResultCodeEquals(startTxnResult, ResultCode.SUCCESS);
ASN1OctetString txnID = startTxnResult.getTransactionID();
// At this point, we have a transaction available for use. If any problem
// arises, we want to ensure that the transaction is aborted, so create a
// try block to process the operations and a finally block to commit or
// abort the transaction.
boolean commit = false;
try
{
// do nothing
}
finally
{
// Commit or abort the transaction.
EndTransactionExtendedResult endTxnResult;
try
{
endTxnResult = (EndTransactionExtendedResult)
connection.processExtendedOperation(
new EndTransactionExtendedRequest(txnID, commit));
}
catch (LDAPException le)
{
endTxnResult = new EndTransactionExtendedResult(new ExtendedResult(le));
}
LDAPTestUtils.assertResultCodeEquals(endTxnResult, ResultCode.SUCCESS);
}
}
}
As you can see, I do nothing with the transaction: just starting it and rolling it back, but it is still not working.
The connection is OK, and I receive transaction id = F10285501E20C32AE040A8C0070F7502, BUT IT IS ALWAYS THE SAME - is that all right???
If I replace “// do nothing” with some actual operation, I get the exception: unwilling to perform.
I’m starting to think that it is an OID problem, but I just can’t figure out what is wrong…
OID is on a WebLogic server and it’s version is :
Version Information
ODSM 11.1.1.6.0
OID 11.1.1.6.0
DB 11.2.0.2.0
All ideas will be appreciated.
I'm using a variation of the example at http://svn.apache.org/repos/asf/activemq/trunk/assembly/src/release/example/src/StompExample.java to receive messages from a queue. What I'm trying to do is keep listening to a queue and perform some action upon reception of a new message. The problem is that I couldn't find a way to register a listener on any of the related objects. I've tried something like:
public static void main(String args[]) throws Exception {
StompConnection connection = null;
try {
connection = new StompConnection();
connection.open("localhost", 61613);
connection.connect("admin", "activemq");
connection.subscribe("/queue/worker", Subscribe.AckModeValues.AUTO);
while (true) {
StompFrame message = connection.receive();
System.out.println(message.getBody());
}
} catch (UnknownHostException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
} finally {
if (connection != null) {
connection.disconnect();
}
}
}
but this doesn't work, as a timeout occurs after a few seconds (java.net.SocketTimeoutException: Read timed out). Is there anything I can do to listen to this queue indefinitely?
ActiveMQ's StompConnection class is a relatively primitive STOMP client. It's not capable of async callbacks on message receipt or of indefinite waits. You can pass a timeout to receive, but depending on whether you are using STOMP v1.1 it could still time out early if a heart-beat isn't received in time. You can of course always catch the timeout exception and try again.
For STOMP via Java you're better off using StompJMS or the like which behaves like a real JMS client and allows for async Message receipt.
@Tim Bish: I tried StompJMS, but couldn't find any example that I could use (maybe you can provide a link). I 'fixed' the problem by setting the timeout to 0, which seems to block indefinitely.
Even I was facing the same issue. You can fix this by adding a timeout to your receive() method.
Declare a long variable:
long waitTimeOut = 5000; //this is 5 seconds
Now modify your receive call like below:
StompFrame message = connection.receive(waitTimeOut);
This will definitely work.
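If you prefer polling in slices rather than blocking forever, a sketch of the loop (placed inside the original main, which already declares throws Exception) could catch the read timeout and simply poll again; this assumes receive(waitTimeOut) surfaces the timeout as the java.net.SocketTimeoutException seen in the question:
long waitTimeOut = 5000; // poll in 5-second slices (arbitrary value)
while (true) {
    try {
        StompFrame message = connection.receive(waitTimeOut);
        System.out.println(message.getBody());
    } catch (java.net.SocketTimeoutException expected) {
        // nothing arrived within this window; poll again
    }
}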