I’m writing to ask for some help.
In short, I’m trying to use com.unboundid.ldap.sdk (though it is not essential; I get the same problem with Oracle's javax.naming.ldap.*) to work with LDAP transactions, and I get the following error:
Exception in thread "Main Thread" java.lang.AssertionError: Result EndTransactionExtendedResult(resultCode=2 (protocol error), diagnosticMessage='protocol error') did not have the expected result code of '0 (success)'.
at com.unboundid.util.LDAPTestUtils.assertResultCodeEquals(LDAPTestUtils.java:1484)
at pkg.Main.main(Main.java:116)
My program is the following (I’m using the simple example from https://www.unboundid.com/products/ldap-sdk/docs/javadoc/com/unboundid/ldap/sdk/extensions/StartTransactionExtendedRequest.html):
public class Main {
public static void main( String[] args ) throws LDAPException {
LDAPConnection connection = null;
try {
connection = new LDAPConnection("***", ***, "***", "***");
} catch (LDAPException e1) {
e1.printStackTrace();
}
// Use the start transaction extended operation to begin a transaction.
StartTransactionExtendedResult startTxnResult;
try
{
startTxnResult = (StartTransactionExtendedResult)
connection.processExtendedOperation(
new StartTransactionExtendedRequest());
// This doesn't necessarily mean that the operation was successful, since
// some kinds of extended operations return non-success results under
// normal conditions.
}
catch (LDAPException le)
{
// For an extended operation, this generally means that a problem was
// encountered while trying to send the request or read the result.
startTxnResult = new StartTransactionExtendedResult(
new ExtendedResult(le));
}
LDAPTestUtils.assertResultCodeEquals(startTxnResult, ResultCode.SUCCESS);
ASN1OctetString txnID = startTxnResult.getTransactionID();
// At this point, we have a transaction available for use. If any problem
// arises, we want to ensure that the transaction is aborted, so create a
// try block to process the operations and a finally block to commit or
// abort the transaction.
boolean commit = false;
try
{
// do nothing
}
finally
{
// Commit or abort the transaction.
EndTransactionExtendedResult endTxnResult;
try
{
endTxnResult = (EndTransactionExtendedResult)
connection.processExtendedOperation(
new EndTransactionExtendedRequest(txnID, commit));
}
catch (LDAPException le)
{
endTxnResult = new EndTransactionExtendedResult(new ExtendedResult(le));
}
LDAPTestUtils.assertResultCodeEquals(endTxnResult, ResultCode.SUCCESS);
}
}
}
As you can see, I do nothing inside the transaction: I just start it and roll it back, but it still does not work.
The connection is fine, and I receive a transaction id = F10285501E20C32AE040A8C0070F7502, BUT IT IS ALWAYS THE SAME - is that right?
If I replace "// do nothing" with some actual operation, I get the exception: unwilling to perform.
I’m starting to think it is an OID problem, but I just can’t figure out what is wrong…
OID is on a WebLogic server and its version is:
Version Information
ODSM 11.1.1.6.0
OID 11.1.1.6.0
DB 11.2.0.2.0
All ideas will be appreciated.
Related
My current Lambda function calls a 3rd party web service synchronously. This function occasionally times out (the current timeout is set to 25s and cannot be increased further).
My code is something like:
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
    try {
        // response = call to the 3rd party REST service
    } catch (Exception e) {
        // handle exceptions
    }
}
1) I want to handle the timeout myself (tracking the time and reacting a few milliseconds before the actual timeout) within my Lambda function, by sending a custom error message back to the client.
How can I effectively use the
context.getRemainingTimeInMillis()
method to track the time remaining while my synchronous call is running? I'm planning to call context.getRemainingTimeInMillis() asynchronously. Is that the right approach? A rough sketch of what I have in mind follows after the questions.
2) What is a good way to test the custom timeout functionality?
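Something like this (a rough sketch only; myBlockingCall is a placeholder for the 3rd party call, and the 500 ms safety buffer is an arbitrary value):
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Inside handleRequest(input, output, context):
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<String> future = executor.submit(() -> myBlockingCall()); // placeholder for the REST call

// Poll the Lambda clock while the blocking call runs on another thread.
while (!future.isDone() && context.getRemainingTimeInMillis() > 500) {
    try {
        future.get(50, TimeUnit.MILLISECONDS); // short wait, then re-check the clock
    } catch (TimeoutException notDoneYet) {
        // not finished yet; loop around and look at getRemainingTimeInMillis() again
    } catch (InterruptedException | ExecutionException e) {
        // handle/log as appropriate
        break;
    }
}
if (!future.isDone()) {
    future.cancel(true);
    // write a custom error message to 'output' here instead of letting Lambda kill the invocation
}
executor.shutdown();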
I solved my problem by increasing the Lambda timeout and invoking my process in a new thread and timing out the Thread after n seconds.
ExecutorService service = Executors.newSingleThreadExecutor();
try {
    Runnable r = () -> {
        try {
            myFunction();
        } catch (Exception e) {
            e.printStackTrace();
        }
    };
    Future<?> f = service.submit(r);
    f.get(n, TimeUnit.MILLISECONDS); // attempt the task for n milliseconds
} catch (TimeoutException toe) {
    // custom logic
}
Another option is to use the
readTimeOut
property of the REST client (Jersey in my case) to set the timeout. But I have seen that this property does not work consistently within the Lambda code. I'm not sure whether it's an issue with the Jersey client or with Lambda.
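For reference, a sketch of how such a read timeout can be set on a Jersey 2.x client (the property names come from org.glassfish.jersey.client.ClientProperties; the millisecond values are only illustrative):
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import org.glassfish.jersey.client.ClientProperties;

Client client = ClientBuilder.newClient();
client.property(ClientProperties.CONNECT_TIMEOUT, 2000);  // ms to establish the connection
client.property(ClientProperties.READ_TIMEOUT, 20000);    // ms to wait for the response

// A request made through this client should fail with a ProcessingException
// once the read timeout elapses, which can then be mapped to a custom error message.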
You can also try using a cancellation token to return a custom exception from the Lambda before the timeout (C# example):
try
{
var tokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(1)); // set timeout value
var taskResult = ApiCall(); // call web service method
while (!taskResult.IsCompleted)
{
if (tokenSource.IsCancellationRequested)
{
throw new OperationCanceledException("time out for lambda"); // throw custom exceptions eg : OperationCanceledException
}
}
return taskResult.Result;
}
catch (OperationCanceledException ex)
{
// handle exception
}
As I understand it, I want to follow the best practice of releasing resources at the end to prevent any connection leaks. Here is my code in HelperClass.
public static DynamoDB getDynamoDBConnection()
{
try
{
dynamoDB = new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
}
catch(AmazonServiceException ase)
{
//ase.printStackTrace();
slf4jLogger.error(ase.getMessage());
slf4jLogger.error(ase.getStackTrace());
slf4jLogger.error(ase);
}
catch (Exception e)
{
slf4jLogger.error(e);
slf4jLogger.error(e.getStackTrace());
slf4jLogger.error(e.getMessage());
}
finally
{
dynamoDB.shutdown();
}
return dynamoDB;
}
My doubt is: since the finally block will be executed no matter what, will dynamoDB be returned as a closed connection, because it is shut down in the finally block before the return statement executes? TIA.
Your understanding is correct. dynamoDB.shutdown() will always execute before return dynamoDB.
I'm not familiar with the framework you're working with, but I would probably organize the code as follows:
public static DynamoDB getDynamoDBConnection()
throws ApplicationSpecificException {
try {
return new DynamoDB(new AmazonDynamoDBClient(
new ProfileCredentialsProvider()));
} catch(AmazonServiceException ase) {
slf4jLogger.error(ase.getMessage());
slf4jLogger.error(ase.getStackTrace());
slf4jLogger.error(ase);
throw new ApplicationSpecificException("some good message", ase);
}
}
and use it as
DynamoDB con = null;
try {
con = getDynamoDBConnection();
// Do whatever you need to do with con
} catch (ApplicationSpecificException e) {
// deal with it gracefully
} finally {
if (con != null)
con.shutdown();
}
You could also create an AutoCloseable wrapper for your dynamoDB connection (that calls shutdown inside close) and do
try (DynamoDB con = getDynamoDBConnection()) {
// Do whatever you need to do with con
} catch (ApplicationSpecificException e) {
// deal with it gracefully
}
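For illustration, a minimal sketch of such a wrapper (the class name ClosableDynamoDB is made up; it simply delegates close() to shutdown(), as described above):
public class ClosableDynamoDB implements AutoCloseable {

    private final DynamoDB delegate;

    public ClosableDynamoDB(DynamoDB delegate) {
        this.delegate = delegate;
    }

    public DynamoDB get() {
        return delegate;
    }

    @Override
    public void close() {
        delegate.shutdown(); // release the underlying client when the try block exits
    }
}
With the wrapper, the try-with-resources block becomes:
try (ClosableDynamoDB wrapper = new ClosableDynamoDB(getDynamoDBConnection())) {
    DynamoDB con = wrapper.get();
    // Do whatever you need to do with con
} catch (ApplicationSpecificException e) {
    // deal with it gracefully
}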
Yes, dynamoDB will be returned as an already-shut-down connection, because dynamoDB.shutdown() will be executed before the return statement, always.
Although I am not answering your question about the finally block being executed always (there are several answers to that question already), I would like to share some information about how DynamoDB clients are expected to be used.
The DynamoDB client is a thread-safe object and is intended to be shared between multiple threads - you can create a global one for your application and re-use the object where ever you need it. Generally, the client creation is managed by some sort of IoC container (Spring IoC container for example) and then provided by the container to whatever code needs it through dependency injection.
Underneath the hood, the DynamoDB client maintains a pool of HTTP connections for communicating with the DynamoDB endpoint and uses connections from this pool. The various parameters of the pool can be configured by passing an instance of the ClientConfiguration object when constructing the client. For example, one of the parameters is the maximum number of open HTTP connections allowed.
With the above understanding, I would say that since the DynamoDB client manages the lifecycle of HTTP connections, resource leaks shouldn't really be concern of code that uses the DynamoDB client.
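As an illustration, here is roughly how such a shared, pre-configured client could be wired up (the class and bean names and the parameter values below are made up; it uses the v1 SDK classes from the question and Spring Java config as one example of an IoC container):
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DynamoDbConfig {

    // One shared, thread-safe client for the whole application;
    // Spring injects it wherever it is needed.
    @Bean
    public DynamoDB dynamoDB() {
        ClientConfiguration config = new ClientConfiguration()
                .withMaxConnections(50)        // cap on pooled HTTP connections (illustrative)
                .withConnectionTimeout(2000)   // ms to establish a connection (illustrative)
                .withSocketTimeout(10000);     // ms to wait for data (illustrative)

        AmazonDynamoDBClient client =
                new AmazonDynamoDBClient(new ProfileCredentialsProvider(), config);
        return new DynamoDB(client);
    }
}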
How about we "imitate" the error and see what happens? This is what I mean:
___Case 1___
try{
// dynamoDB = new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
throw new AmazonServiceException("Whatever parameters required to instantiate this exception");
} catch(AmazonServiceException ase)
{
//ase.printStackTrace();
slf4jLogger.error(ase.getMessage());
slf4jLogger.error(ase.getStackTrace());
slf4jLogger.error(ase);
}
catch (Exception e)
{
slf4jLogger.error(e);
slf4jLogger.error(e.getStackTrace());
slf4jLogger.error(e.getMessage());
}
finally
{
//dynamoDB.shutdown();
slf4jLogger.info("Database gracefully shut down");
}
___Case 2___
try{
// dynamoDB = new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
throw new Exception("Whatever parameters required to instantiate this exception");
} catch(AmazonServiceException ase)
{
//ase.printStackTrace();
slf4jLogger.error(ase.getMessage());
slf4jLogger.error(ase.getStackTrace());
slf4jLogger.error(ase);
}
catch (Exception e)
{
slf4jLogger.error(e);
slf4jLogger.error(e.getStackTrace());
slf4jLogger.error(e.getMessage());
}
finally
{
//dynamoDB.shutdown();
slf4jLogger.info("Database gracefully shut down");
}
These exercises would be a perfect place to use unit tests, and more specifically mock tests. I suggest you take a close look at JMockit, which will help you write such tests much more easily.
I am using Spring and JPA for a financial data handling project. I have to handle scheduling transactions for future dates. A Quartz cron job runs daily, executes all scheduled transactions, and persists them to the real table.
My problem is that when the trigger executes for a given time and one record fails for some reason, all the other records are not executed either.
I need all the other transactions to execute, and only the failed transactions should roll back.
Is there a way to handle this?
The following code block gets all the scheduled jobs:
public void bankingPaymentSchedullerRun(){
// get all pending job
List<BankScheduler> bankSchedulers = bankSchedulerDao.findByStatus(ScheduleStatus.PENDING);
Calendar currentDate = DateUtil.getFormatedCalenderDate(DateUtil.currentDate(), "yyyy-MM-dd");
if (bankSchedulers != null) {
for (BankScheduler bankScheduler : bankSchedulers) {
LOGGER.info("bankingPaymentSchedullerRun " + bankScheduler.toString());
//compare from date and to date
if ((bankScheduler.getFromDate().compareTo(currentDate) <= 0)
&& (bankScheduler.getToDate().compareTo(currentDate)) >= 0) {
if (bankScheduler.getSchedulerType().equals(SchedulerType.FUND_TRANSFER.toString())) {
scheduleFundTransfer(bankScheduler);
}else {
scheduleUtilityPayment(bankScheduler);
}
}
}
}
}
The fund transfer execution code block:
public PaymentResponse scheduleFundTransfer(final BankScheduler bankScheduler){
FundTransfer fundTransfer = new FundTransfer();
fundTransfer.setBranchName(bankScheduler.getBankSchedulerFundTransfer().getBeneficiaryBranchName());
fundTransfer.setBeneficiaryBankName(bankScheduler.getBankSchedulerFundTransfer().getBeneficiaryBankName());
fundTransfer.setBeneficiaryType(bankScheduler.getBankSchedulerFundTransfer().getBeneficiaryType());
fundTransfer.setBeneficiaryName(bankScheduler.getBankSchedulerFundTransfer().getBeneficiaryName());
fundTransfer.setCurrency(bankScheduler.getBankSchedulerFundTransfer().getCurrency());
fundTransfer.setNarration(bankScheduler.getDetail());
fundTransfer.setThirdPartyType(bankScheduler.getBankSchedulerFundTransfer().getThirdPartyType());
fundTransfer.setTransferedAmount(bankScheduler.getBankSchedulerFundTransfer().getAmount());
fundTransfer.setUserAccountNumber(bankScheduler.getBankSchedulerFundTransfer().getAccountNumber());
fundTransfer.setUserName(bankScheduler.getBankSchedulerFundTransfer().getUserName());
FundTransfer fundTransferUpdate = null;
//commit to fundtransfer real table
try {
fundTransferUpdate = fundTransferDao.create(fundTransfer);
} catch (Exception e) {
LOGGER.error("Exception occur when call update fund transfer in fundTransfer()", e);
}
// getting from currecy Decimals
BankCurrecyInfor currecyInfo =
bankCurrecyInforDao.getDecimalpointsByCurrecy(fundTransfer.getUserAccount().getCurrencyCode());
fundTransferUpdate.setDecimalAmt(String.valueOf(currecyInfo.getNoOfDecimal()));
//call to bank back-end to update
PaymentResponse response = accountServiceInvoker.fundTransfer(fundTransferUpdate);
//for sending sms
SmsCriteria smsCriteria = new SmsCriteria();
smsCriteria.setBank_name("ABC");
messageServiceInvoker.sentIbSms(smsCriteria);
return response;
}
Yes there is: use proper exception handling so that when a single transaction fails, the others will still (try to) execute.
Pseudocode
for(scheduled task from all scheduled tasks) {
try{
begin transaction
do your stuff with jpa
commit transaction
}catch(Exception e){
rollback transaction, log error and stuff
}finally{
release resources
}
}
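In Spring/JPA terms, one way to get that per-record behaviour is to run each scheduled item in its own transaction, for example with REQUIRES_NEW propagation. A sketch under that assumption (ScheduledItemProcessor and processOne are made-up names, and the call must come from a different bean so it goes through the Spring proxy):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ScheduledItemProcessor {

    // Each record runs in its own transaction, so a failure rolls back
    // only this record, never the whole batch.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void processOne(BankScheduler bankScheduler) {
        // persist to the real table, call the bank back-end, send the SMS, etc.
    }
}
The scheduler loop then catches per item, so a single failure does not stop the run:
for (BankScheduler bankScheduler : bankSchedulers) {
    try {
        scheduledItemProcessor.processOne(bankScheduler);
    } catch (Exception e) {
        LOGGER.error("Scheduled item failed; only this record was rolled back", e);
    }
}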
I am using MS SQL Server, and my program recently started losing the DB connection randomly. I am using a non-XA driver.
The most likely suspect is the asynchronous database logging I added.
The sneaky thing is, I have used a thread pool:
ExecutorService ruleLoggingExecutor = Executors.newFixedThreadPool(10);
and in the finally block of my process, I start off a new thread that calls down to the addLogs() method.
The code works for hours, days, and then during a totally unrelated query, it will lose the DB connection. I have an inkling that the problem is that two concurrent inserts are being attempted. But I don't know if putting 'synchronized' on the addLogs method would fix it, or if I need transactional code, or what. Any advice?
In the DAO:
private EntityManager getEntityManager(InitialContext context) {
try {
if (emf == null) {
emf = (EntityManagerFactory) context
.lookup("java:jboss/persistence/db");
}
return emf.createEntityManager();
} catch (Exception e) {
logger.error(
"Error finding EntityManagerFactory in JNDI: "
+ e.getMessage(), e);
return null;
}
}
public void addLogs(InitialContext context, String key, String logs,
String responseXml) {
EntityManager em = getEntityManager(context);
try {
TblRuleLog log = new TblRuleLog();
log.setAuthKey(key);
log.setLogMessage(logs);
log.setDateTime(new Timestamp(new Date().getTime()));
log.setResponseXml(responseXml);
em.persist(log);
em.flush();
} catch (Exception e) {
logger.error(e.getMessage(), e);
} finally {
em.close();
}
}
It seems the connection is closed after a timeout, perhaps because a transaction is not being committed/rolled back (so locks are not released on the tables/rows).
The manual flushing looks suspicious. I'd use entityManager.getTransaction().begin()/commit() and remove em.flush().
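A sketch of what that change to addLogs() could look like, assuming a resource-local (non-JTA) persistence unit where getTransaction() is available on the EntityManager:
public void addLogs(InitialContext context, String key, String logs,
        String responseXml) {
    EntityManager em = getEntityManager(context);
    try {
        em.getTransaction().begin();
        TblRuleLog log = new TblRuleLog();
        log.setAuthKey(key);
        log.setLogMessage(logs);
        log.setDateTime(new Timestamp(new Date().getTime()));
        log.setResponseXml(responseXml);
        em.persist(log);
        em.getTransaction().commit(); // the flush happens as part of the commit
    } catch (Exception e) {
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback(); // release locks instead of leaving them to time out
        }
        logger.error(e.getMessage(), e);
    } finally {
        em.close();
    }
}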
I'm using a variation of the example at http://svn.apache.org/repos/asf/activemq/trunk/assembly/src/release/example/src/StompExample.java to receive message from a queue. What I'm trying to do is to keep listening to a queue and perform some action upon reception of a new message. The problem is that I couldn't find a way to register a listener to any of the related objects. I've tried something like:
public static void main(String args[]) throws Exception {
StompConnection connection = null;
try {
connection = new StompConnection();
connection.open("localhost", 61613);
connection.connect("admin", "activemq");
connection.subscribe("/queue/worker", Subscribe.AckModeValues.AUTO);
while (true) {
StompFrame message = connection.receive();
System.out.println(message.getBody());
}
} catch (UnknownHostException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
} finally {
if (connection != null) {
connection.disconnect();
}
}
}
but this doesn't work, as a timeout occurs after a few seconds (java.net.SocketTimeoutException: Read timed out). Is there anything I can do to listen to this queue indefinitely?
ActiveMQ's StompConnection class is a relatively primitive STOMP client. It's not capable of async callbacks on message delivery or of indefinite waits. You can pass a timeout to receive, but depending on whether you are using STOMP v1.1 it could still time out early if a heart-beat isn't received in time. You can of course always catch the timeout exception and try again.
For STOMP via Java you're better off using StompJMS or the like, which behaves like a real JMS client and allows for async message receipt.
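To illustrate the catch-and-retry option with the primitive client, here is a sketch based on the setup from the question (the 10-second timeout is arbitrary, and it assumes the read timeout surfaces as the SocketTimeoutException reported above):
import java.net.SocketTimeoutException;
import org.apache.activemq.transport.stomp.Stomp.Headers.Subscribe;
import org.apache.activemq.transport.stomp.StompConnection;
import org.apache.activemq.transport.stomp.StompFrame;

public class StompPollingExample {
    public static void main(String[] args) throws Exception {
        StompConnection connection = new StompConnection();
        connection.open("localhost", 61613);
        connection.connect("admin", "activemq");
        connection.subscribe("/queue/worker", Subscribe.AckModeValues.AUTO);
        while (true) {
            try {
                StompFrame message = connection.receive(10000); // wait up to 10 s for a frame
                System.out.println(message.getBody());
            } catch (SocketTimeoutException expected) {
                // no frame arrived within the window; loop around and wait again
            }
        }
    }
}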
#Tim Bish: I tried StompJMS, but couldn't find any example that I could use (maybe you can provide a link). I 'fixed' the problem by setting the timeout to 0 which seems to be blocking.
I was facing the same issue as well. You can fix this by adding a timeout to your receive() method.
Declare a long variable:
long waitTimeOut = 5000; // this is 5 seconds
Now modify your receive call as below:
StompFrame message = connection.receive(waitTimeOut);
This will definitely work.