How do I test, using Mockito, that the following code performs the logging statement when an Exception is thrown?
public void cleanUp() {
    for (Map.Entry<String, Connection> connection : dbConnectionMap.entrySet()) {
        try {
            if (connection.getValue() != null) {
                connection.getValue().close();
            }
        } catch (Exception e) {
            LOGGER.log(Level.WARNING, "Exception when closing database connection: ", e);
        }
    }
    reset();
}
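One possible way to verify this with Mockito (a minimal sketch; the ConnectionCleaner class name, the getDbConnectionMap() accessor, and the visibility of LOGGER are assumptions, not part of the original code): mock a Connection whose close() throws, and attach a mocked java.util.logging.Handler to the logger so the published WARNING record can be checked.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.atLeastOnce;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.sql.Connection;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import org.junit.Test;
import org.mockito.ArgumentCaptor;

public class ConnectionCleanerTest {

    @Test
    public void cleanUpLogsWarningWhenCloseThrows() throws Exception {
        // Hypothetical class under test; assumes the map and LOGGER are reachable from the test.
        ConnectionCleaner cleaner = new ConnectionCleaner();

        // A connection that fails on close() so the catch block is entered.
        Connection failing = mock(Connection.class);
        doThrow(new RuntimeException("close failed")).when(failing).close();
        cleaner.getDbConnectionMap().put("db1", failing);

        // Capture what the java.util.logging LOGGER publishes.
        Handler handler = mock(Handler.class);
        ConnectionCleaner.LOGGER.addHandler(handler);
        ConnectionCleaner.LOGGER.setLevel(Level.ALL);

        cleaner.cleanUp();

        verify(failing).close();
        ArgumentCaptor<LogRecord> captor = ArgumentCaptor.forClass(LogRecord.class);
        verify(handler, atLeastOnce()).publish(captor.capture());
        assertEquals(Level.WARNING, captor.getValue().getLevel());
    }
}

If the map and logger are not directly reachable, the same idea works by injecting them through the constructor or a package-private setter for the test.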
Related
We have a Spark Streaming program which pulls messages from Kafka and processes each individual message using a foreachPartition transformation.
If there is a specific error in the processing function, we would like to throw the exception back and halt the program. That does not seem to happen. Below is the code we are trying to execute.
JavaInputDStream<KafkaDTO> stream = KafkaUtils.createDirectStream( ...);

stream.foreachRDD(new Function<JavaRDD<KafkaDTO>, Void>() {
    public Void call(JavaRDD<KafkaDTO> rdd) throws PropertiesLoadException, Exception {
        rdd.foreachPartition(new VoidFunction<Iterator<KafkaDTO>>() {
            @Override
            public void call(Iterator<KafkaDTO> itr) throws PropertiesLoadException, Exception {
                while (itr.hasNext()) {
                    KafkaDTO dto = itr.next();
                    try {
                        // process the message here.
                    } catch (PropertiesLoadException e) {
                        // throw Exception if property file is not found
                        throw new PropertiesLoadException(" PropertiesLoadException: " + e.getMessage());
                    } catch (Exception e) {
                        throw new Exception(" Exception : " + e.getMessage());
                    }
                }
            }
        });
        return null;
    }
});
In the above code, even if we throw a PropertiesLoadException the program doesn't halt and streaming continues. The max retries we set in the Spark configuration is only 4, yet the streaming program continues even after 4 failures. How should the exception be thrown to stop the program?
I am not sure if this is the best approach, but we surround the main batch with try/catch, and when I get an exception I just close the context. In addition, you need to make sure that graceful stop is off (false).
Example code:
try {
    process(dataframe);
} catch (Exception e) {
    logger.error("Failed on write - will stop spark context immediately!!" + e.getMessage());
    closeContext(jssc);
    if (e instanceof InterruptedException) {
        Thread.currentThread().interrupt();
    }
    throw e;
}
And close function:
private void closeContext(JavaStreamingContext jssc) {
    logger.warn("stopping the context");
    jssc.stop(false, jssc.sparkContext().getConf().getBoolean("spark.streaming.stopGracefullyOnShutdown", false));
    logger.error("Context was stopped");
}
In config :
spark.streaming.stopGracefullyOnShutdown false
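The same flag can also be set programmatically on the SparkConf before the streaming context is created (a small sketch; sparkConf matches the name used in the snippet below, the app name is made up):

SparkConf sparkConf = new SparkConf()
        .setAppName("kafka-streaming-app") // hypothetical app name
        .set("spark.streaming.stopGracefullyOnShutdown", "false");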
I think that with your code it should look like this:
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, streamBatch);
JavaInputDStream<KafkaDTO> stream = KafkaUtils.createDirectStream( jssc, ...);

stream.foreachRDD(new Function<JavaRDD<KafkaDTO>, Void>() {
    public Void call(JavaRDD<KafkaDTO> rdd) throws PropertiesLoadException, Exception {
        try {
            rdd.foreachPartition(new VoidFunction<Iterator<KafkaDTO>>() {
                @Override
                public void call(Iterator<KafkaDTO> itr) throws PropertiesLoadException, Exception {
                    while (itr.hasNext()) {
                        KafkaDTO dto = itr.next();
                        try {
                            // process the message here.
                        } catch (PropertiesLoadException e) {
                            // throw Exception if property file is not found
                            throw new PropertiesLoadException(" PropertiesLoadException: " + e.getMessage());
                        } catch (Exception e) {
                            throw new Exception(" Exception : " + e.getMessage());
                        }
                    }
                }
            });
        } catch (Exception e) {
            logger.error("Failed on write - will stop spark context immediately!!" + e.getMessage());
            closeContext(jssc);
            if (e instanceof InterruptedException) {
                Thread.currentThread().interrupt();
            }
            throw e;
        }
        return null;
    }
});
In addition, please note that my stream is running on Spark 2.1 standalone (not YARN/Mesos) in client mode. I also implement the graceful stop myself using ZK.
I have a class that shows HTTP error messages.
Depending on the throwable, it shows a message.
But sometimes I get a NullPointerException.
public static void showGeneralErrors(Throwable throwable) {
    String message = "";
    AppInitialization appInitialization = AppInitialization.getInstance();
    if (appInitialization == null) {
        return;
    }
    try {
        if (throwable instanceof HttpException) {
            if (((HttpException) throwable).code() == 500) {
                message = appInitialization.getString(R.string.server_error);
            } else {
                message = appInitialization.getString(R.string.parsing_problem);
            }
        } else if (throwable instanceof IOException) {
            message = appInitialization.getString(R.string.internet_error);
        } else if (throwable instanceof SSLHandshakeException) {
            message = appInitialization.getString(R.string.internet_error);
        }
        if (!TextUtils.isEmpty(message)) {
            Toast.makeText(appInitialization, message, Toast.LENGTH_SHORT).show();
        }
    } catch (Exception e) {
        Log.e(">>>>>", "Exception network error handler " + e.getMessage());
    } catch (IllegalStateException e) {
        Log.e(">>>>>", "IllegalStateException network error handler " + e.getMessage());
    } catch (NullPointerException e) {
        Log.e(">>>>>", "NullPointerException network error handler " + e.getMessage());
    }
}
And the error message is:
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'android.content.res.Resources android.content.Context.getResources()' on a null object reference
at android.widget.Toast.makeText(Toast.java:298)
And the AppInitialization class is:
public class AppInitialization extends Application {

    private static AppInitialization mInstance;

    public static synchronized AppInitialization getInstance() {
        return mInstance;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        mInstance = this;
    }
}
And it comes from the Retrofit onFailure (error) callback:
GeneralRepo.getCountryFromIp(getContext())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribeOn(Schedulers.io())
        .subscribe(countryFromIPResponse -> {
            // do something
        }, throwable -> {
            // where I got the error
            NetworkErrorHandler.showGeneralErrors(throwable);
        });
Why do I get this error, and why doesn't the try/catch work?
Put your try/catch block in the else portion, because the NullPointerException occurs when appInitialization comes back null, so write:
public static void showGeneralErrors(Throwable throwable) {
    String message = "";
    AppInitialization appInitialization = AppInitialization.getInstance();
    if (appInitialization == null) {
        return;
    } else {
        try {
            if (throwable instanceof HttpException) {
                if (((HttpException) throwable).code() == 500) {
                    message = appInitialization.getString(R.string.server_error);
                } else {
                    message = appInitialization.getString(R.string.parsing_problem);
                }
            } else if (throwable instanceof IOException) {
                message = appInitialization.getString(R.string.internet_error);
            } else if (throwable instanceof SSLHandshakeException) {
                message = appInitialization.getString(R.string.internet_error);
            }
            if (!TextUtils.isEmpty(message)) {
                Toast.makeText(appInitialization, message, Toast.LENGTH_SHORT).show();
            }
        } catch (Exception e) {
            Log.e(">>>>>", "Exception network error handler " + e.getMessage());
        } catch (IllegalStateException e) {
            Log.e(">>>>>", "IllegalStateException network error handler " + e.getMessage());
        } catch (NullPointerException e) {
            Log.e(">>>>>", "NullPointerException network error handler " + e.getMessage());
        }
    }
}
I am trying to roll back my DB changes.
The rollback code runs with no exception, and yet my DB is dirty with changes.
Am I missing something?
final Connection dbConnection = rulesUiRepository.getConnection();
dbConnection.setAutoCommit(false);
try {
    if (rulesUiRepository.updateRulesUiSnapshot(this.nonSplittedRulesSnapshot) == -1)
        throw new RuntimeException("cannot save ui snapshot to DB");
    // ... more code
} catch (Exception e) {
    logger.error("transaction to update db and cofman failed", e);
    //did work
    //dbConnection.rollback();
    throw new Exception("transaction to update db and cofman failed", e);
} finally {
    //or
    if (dbConnection != null) {
        dbConnection.close();
    }
}
with that code:
public synchronized void rollback() throws SQLException {
    try {
        this.txn_known_resolved = true;
        this.inner.rollback();
    } catch (NullPointerException var2) {
        if (this.isDetached()) {
            throw SqlUtils.toSQLException("You can't operate on a closed Connection!!!", var2);
        } else {
            throw var2;
        }
    } catch (Exception var3) {
        if (!this.isDetached()) {
            throw this.parentPooledConnection.handleThrowable(var3);
        } else {
            throw SqlUtils.toSQLException(var3);
        }
    }
}
Rollback only rolls back changes made since the last commit. You may have code you didn't show that commits implicitly (by using @Transactional, for example) or explicitly, and the rollback will only be effective for database operations performed after that commit.
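For illustration, a minimal JDBC sketch of why this matters (the rules_ui table, the dataSource, and the values are made up for the example): rollback() only undoes work performed after the most recent commit on the same connection.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class RollbackScopeDemo {

    static void demo(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             Statement st = conn.createStatement()) {
            conn.setAutoCommit(false);

            st.executeUpdate("UPDATE rules_ui SET snapshot = 'v1' WHERE id = 1");
            conn.commit();   // 'v1' is now permanent

            st.executeUpdate("UPDATE rules_ui SET snapshot = 'v2' WHERE id = 1");
            conn.rollback(); // undoes only the 'v2' update; 'v1' remains in the database
        }
    }
}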
To my understanding, with this code it is possible that while one user is changing a role, another user changes the same role, and the last write wins. It would even be possible to end up storing parts of one change and parts of the other, because of the 3 queries in the DAO. I would like to make this "thread safe" so that during a change no other user can make a change, or it is at least detected that someone changed it before.
My idea was to change the method in the RoleManager.
Idea:
public interface RoleManager {
    static synchronized void EditRole(UserRoleBO editedObjet, UserRoleBO nonEditedObject);
This does not work with this type of design (with an interface).
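For illustration only: the synchronized modifier is not allowed on an interface method, so the best I could do is put it on the implementing class, e.g.:

public class RoleManagerImpl implements RoleManager {

    @Override
    public synchronized void editRole(UserRoleBO editedObjet, UserRoleBO nonEditedObject) {
        // same body as in the Manager code below
    }
}

Even then it would only serialize callers that share this one manager instance inside a single JVM, so it would not detect changes made through other instances or directly in the database.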
My Question:
Is there an elegant way to solve the problem without changing the design?
Additional Note:
Tell me if I have made big mistakes in my code.
Manager:
public class RoleManagerImpl implements RoleManager {

    @Override
    public void editRole(UserRoleBO editedObjet, UserRoleBO nonEditedObject) {
        EditUserRole editUserRole = EditUserRole.Factory.createEditUserRole(nonEditedObject);
        boolean hasChangedBeforeInDB = editUserRole.detectChanges();
        if (hasChangedBeforeInDB) {
            throw new ManagerException(ManagerException.TYPE.HASCHANGEDBEFOREINDB, null);
        }
        RoleDAO roleDAO = new RoleDAOImpl();
        roleDAO.editRole(editedObjet);
    }
}
DAO:
@Override
public int editRole(UserRoleBO role) {
    Connection conn = null;
    int status;
    try {
        // Set up connection
        conn = ConnectionPool.getInstance().acquire();
        DSLContext create = DSL.using(conn, SQLDialect.MARIADB);
        // SQL processing and return
        status = create.executeUpdate(role.getRole());
        EditUserRole editUserRole = EditUserRole.Factory.createEditUserRole(role);
        editUserRole.detectChanges();
        addPermission(editUserRole.getAddlist(), role.getRole());
        deletePermissions(editUserRole.getDeleteList(), role.getRole());
    }
    // Error handling SQL
    catch (MappingException e) {
        throw new DAOException(DAOException.TYPE.MAPPINGEXCEPTION, e);
    }
    catch (DataAccessException e) {
        throw new DAOException(DAOException.TYPE.DATAACCESSEXECPTION, e);
    }
    catch (Exception e) {
        throw new DAOException(DAOException.TYPE.UNKOWNEXCEPTION, e);
    } finally {
        // Connection release handling
        try {
            if (conn != null) {
                ConnectionPool.getInstance().release(conn);
            }
        }
        // Error handling connection
        catch (DataAccessException e) {
            throw new DAOException(DAOException.TYPE.RELEASECONNECTIONEXCEPTION, e);
        }
        catch (Exception e) {
            throw new DAOException(DAOException.TYPE.UNKOWNRELEASECONNECTIONEXCEPTION, e);
        }
    }
    // Return result
    return status;
}
Thanks for helping.
This is just a possible answer. In my case, I use jOOQ and MariaDB.
With the assumption that we only have one central database, this solution works. In a cluster there is always the problem of split brain.
What happens is that I lock the rows, so if the next thread tries to lock them it must wait. If it is then allowed to continue, the HASCHANGEDBEFOREINDB exception is thrown.
Take care: you have to commit or roll back to release the lock.
EditRole:
@Override
public int editRole(UserRoleBO editedRole, UserRoleBO nonEditedRole) throws SQLException {
    Connection conn = null;
    int status;
    try {
        // Set up connection
        conn = ConnectionPool.getInstance().acquire();
        conn.setAutoCommit(false);
        DSLContext create = DSL.using(conn, SQLDialect.MARIADB);
        // Lock rows
        lockRowsOf(editedRole, conn);
        EditUserRole editUserRole = EditUserRole.Factory.createEditUserRole(nonEditedRole);
        boolean hasChangedBeforeInDB = editUserRole.detectChanges();
        if (hasChangedBeforeInDB) {
            throw new DAOException(DAOException.TYPE.HASCHANGEDBEFOREINDB, null);
        }
        EditUserRole editUserRole2 = EditUserRole.Factory.createEditUserRole(editedRole);
        editUserRole2.detectChanges();
        // SQL processing and return
        status = create.executeUpdate(editedRole.getRole());
        addPermission(editUserRole2.getAddlist(), editedRole.getRole().getId(), conn);
        deletePermissions(editUserRole2.getDeleteList(), editedRole.getRole(), conn);
        conn.commit();
    }
    // Error handling SQL
    catch (MappingException e) {
        conn.rollback();
        throw new DAOException(DAOException.TYPE.MAPPINGEXCEPTION, e);
    }
    catch (DataAccessException e) {
        conn.rollback();
        throw new DAOException(DAOException.TYPE.DATAACCESSEXECPTION, e);
    }
    catch (Exception e) {
        conn.rollback();
        throw new DAOException(DAOException.TYPE.UNKOWNEXCEPTION, e);
    } finally {
        // Connection release handling
        try {
            if (conn != null) {
                conn.setAutoCommit(true);
                ConnectionPool.getInstance().release(conn);
            }
        }
        // Error handling connection
        catch (DataAccessException e) {
            throw new DAOException(DAOException.TYPE.RELEASECONNECTIONEXCEPTION, e);
        }
        catch (Exception e) {
            throw new DAOException(DAOException.TYPE.UNKOWNRELEASECONNECTIONEXCEPTION, e);
        }
    }
    // Return result
    return status;
}
Lock:
@Override
public void lockRowsOf(UserRoleBO role, Connection conn) {
    int status;
    try {
        DSLContext create = DSL.using(conn, SQLDialect.MARIADB);
        // SQL processing and return
        status = create.select()
                .from(AUTH_ROLE)
                .where(AUTH_ROLE.ID.eq(role.getRole().getId()))
                .forUpdate().execute();
        status = create.select()
                .from(AUTH_ROLE_PERMISSION)
                .where(AUTH_ROLE_PERMISSION.ROLE_ID.eq(role.getRole().getId()))
                .forUpdate().execute();
    }
    // Error handling SQL
    catch (MappingException e) {
        throw new DAOException(DAOException.TYPE.MAPPINGEXCEPTION, e);
    }
    catch (DataAccessException e) {
        throw new DAOException(DAOException.TYPE.DATAACCESSEXECPTION, e);
    }
    catch (Exception e) {
        throw new DAOException(DAOException.TYPE.UNKOWNEXCEPTION, e);
    } finally {
        // Connection is still needed to buffer the delete and insert
    }
}
I am a bit curious to know whether, in the below code snippet, there is any chance of the database connection not being closed. I am getting an issue in SonarQube telling me "Method may fail to close database resource".
try {
    con = OracleUtil.getConnection();
    pstmtInsert = con.prepareStatement(insertUpdateQuery);
    pstmtInsert.setString(++k, categoryCode);
    pstmtInsert.clearParameters();
    pstmtInsert = con.prepareStatement(updateQuery);
    for (i = 0; i < userList.size(); i++) {
        pstmtInsert.setString(1, p_setId);
        addCount = pstmtInsert.executeUpdate();
        if (addCount == 1) {
            con.commit();
            usercount++;
        } else {
            con.rollback();
        }
    }
}
catch (SQLException sqle) {
    _log.error(methodName, "SQLException " + sqle.getMessage());
    sqle.printStackTrace();
    EventHandler.handle(); // calling event handler
    throw new BTSLBaseException(this, "addInterfaceDetails", "error.general.sql.processing");
}
catch (Exception e) {
    _log.error(methodName, " Exception " + e.getMessage());
    e.printStackTrace();
    EventHandler.handle(); // calling event handler
    throw new BTSLBaseException(this, "addInterfaceDetails", "error.general.processing");
}
finally {
    try {
        if (pstmtInsert != null) {
            pstmtInsert.close();
        }
    } catch (Exception e) {
        _log.errorTrace(methodName, e);
    }
    try {
        if (con != null) {
            con.close();
        }
    } catch (Exception e) {
        _log.errorTrace(methodName, e);
    }
    if (_log.isDebugEnabled()) {
        _log.debug("addRewardDetails", " Exiting addCount " + addCount);
    }
}
Thanks in advance
If you are using Java 7+, I suggest you use try-with-resources. It ensures the resources are closed after the operation is completed.
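For example, the update loop from the question could look roughly like this with try-with-resources (a sketch reusing the names from the question, not a drop-in replacement; the insert statement handling and the generic catch block are left out):

try (Connection con = OracleUtil.getConnection();
     PreparedStatement pstmtUpdate = con.prepareStatement(updateQuery)) {
    for (int i = 0; i < userList.size(); i++) {
        pstmtUpdate.setString(1, p_setId);
        int addCount = pstmtUpdate.executeUpdate();
        if (addCount == 1) {
            con.commit();
            usercount++;
        } else {
            con.rollback();
        }
    }
    // pstmtUpdate and con are closed automatically when this block exits, even on exception
} catch (SQLException sqle) {
    _log.error(methodName, "SQLException " + sqle.getMessage());
    EventHandler.handle();
    throw new BTSLBaseException(this, "addInterfaceDetails", "error.general.sql.processing");
}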
The issue was resolved when I closed the first prepared statement before starting the other one.
I added the code snippet below after the line pstmtInsert.clearParameters():
try {
    if (pstmtInsert != null) {
        pstmtInsert.close();
    }
} catch (Exception e) {
    _log.errorTrace(methodName, e);
}