I'm in the process of reviving my OptaPlanner code that I wrote about 6 months ago, and while trying to figure out why some of my hard constraints are being broken, I found that a filter I wrote to filter out illegal moves is never invoked. I put breakpoints in all of the methods of the move factory, the move's methods, and the filter, and none of them are being called. I'm pretty sure that wasn't the case before I updated to the latest version, but I might be wrong.
Update: the factory is used when I run OptaPlanner in my test case but not in production, so I guess this is not due to my configuration but rather the scenario; I just don't know what could affect whether it is used or not.
my solver config:
<?xml version="1.0" encoding="UTF-8"?>
<solver>
  <environmentMode>FULL_ASSERT</environmentMode>
  <!-- Domain model configuration -->
  <solutionClass>com.rdthree.plenty.services.activities.planner.ActivitySolution</solutionClass>
  <entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskPlannerDto</entityClass>
  <entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskResourceAllocationPlannerDto</entityClass>
  <!-- Score configuration -->
  <scoreDirectorFactory>
    <scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
    <scoreDrl>com/rdthree/plenty/services/activities/planner/activity-scoring.drl</scoreDrl>
    <initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
  </scoreDirectorFactory>
  <!-- Optimization algorithms configuration -->
  <termination>
    <terminationCompositionStyle>OR</terminationCompositionStyle>
    <bestScoreLimit>0hard/0soft</bestScoreLimit>
    <secondsSpentLimit>60</secondsSpentLimit>
  </termination>
  <constructionHeuristic>
    <queuedEntityPlacer>
      <entitySelector id="resourceAllocationSelector">
        <entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskResourceAllocationPlannerDto</entityClass>
        <cacheType>PHASE</cacheType>
        <selectionOrder>SORTED</selectionOrder>
        <sorterManner>DECREASING_DIFFICULTY_IF_AVAILABLE</sorterManner>
      </entitySelector>
      <changeMoveSelector>
        <entitySelector mimicSelectorRef="resourceAllocationSelector" />
        <valueSelector>
          <variableName>resource</variableName>
          <cacheType>PHASE</cacheType>
        </valueSelector>
      </changeMoveSelector>
    </queuedEntityPlacer>
  </constructionHeuristic>
  <constructionHeuristic>
    <queuedEntityPlacer>
      <entitySelector id="taskSelector">
        <entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskPlannerDto</entityClass>
        <cacheType>PHASE</cacheType>
        <selectionOrder>SORTED</selectionOrder>
        <sorterManner>DECREASING_DIFFICULTY_IF_AVAILABLE</sorterManner>
      </entitySelector>
      <changeMoveSelector>
        <entitySelector mimicSelectorRef="taskSelector" />
        <filterClass>com.rdthree.plenty.services.activities.planner.filters.TaskLengthChnageFilter</filterClass>
        <valueSelector>
          <variableName>interval</variableName>
          <cacheType>PHASE</cacheType>
        </valueSelector>
      </changeMoveSelector>
    </queuedEntityPlacer>
  </constructionHeuristic>
  <localSearch>
    <unionMoveSelector>
      <moveListFactory>
        <moveListFactoryClass>com.rdthree.plenty.services.activities.planner.MoveResourceAllocationMoveFactory</moveListFactoryClass>
      </moveListFactory>
      <changeMoveSelector>
        <fixedProbabilityWeight>1.0</fixedProbabilityWeight>
        <filterClass>com.rdthree.plenty.services.activities.planner.filters.TaskLengthChnageFilter</filterClass>
        <entitySelector id="taskMoveSelector">
          <entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskPlannerDto</entityClass>
        </entitySelector>
        <valueSelector>
          <variableName>interval</variableName>
        </valueSelector>
      </changeMoveSelector>
    </unionMoveSelector>
    <acceptor>
      <valueTabuSize>7</valueTabuSize>
    </acceptor>
    <forager>
      <acceptedCountLimit>2000</acceptedCountLimit>
    </forager>
  </localSearch>
</solver>
my custom move factory:
public class MoveResourceAllocationMoveFactory implements MoveListFactory<ActivitySolution> {
@Override
public List<? extends Move> createMoveList(ActivitySolution solution) {
List<Move> moveList = new ArrayList<Move>();
for (TaskResourceAllocationPlannerDto allocation : solution.getResourceAllocations()) {
for (TaskResourcePlannerDto resource : solution.getResources()) {
moveList.add(new MoveResourceAllocations(allocation, resource));
}
}
return moveList;
}
}
my custom move:
public class MoveResourceAllocations extends AbstractMove {
private TaskResourceAllocationPlannerDto allocation;
private TaskResourcePlannerDto newResource;
@Getter
@Setter
boolean doMove;
public MoveResourceAllocations(TaskResourceAllocationPlannerDto allocation, TaskResourcePlannerDto newResource) {
super();
this.allocation = allocation;
this.newResource = newResource;
}
@Override
public boolean isMoveDoable(ScoreDirector scoreDirector) {
if (allocation.getResource().equals(newResource)) {
return false;
}
return new ResourceTypeMismatchFilter().acceptCustomMove(scoreDirector, this);
}
@Override
public Move createUndoMove(ScoreDirector scoreDirector) {
return new MoveResourceAllocations(allocation, allocation.getResource());
}
@Override
public void doMoveOnGenuineVariables(ScoreDirector scoreDirector) {
scoreDirector.beforeVariableChanged(allocation, "resource");
updateOnHandAmounts(scoreDirector);
allocation.setResource(newResource);
scoreDirector.afterVariableChanged(allocation, "resource");
}
private void updateOnHandAmounts(ScoreDirector scoreDirector) {
ActivitySolution solution = (ActivitySolution) scoreDirector.getWorkingSolution();
List<OnHandForProduct> onHandForProducts = solution.getOnHandForProducts();
List<ProductInventoryTransactionPlannerDto> transactions = solution.getTransactions();
boolean transactionFoundForTask = false;
if ((newResource.getClass().getSimpleName().contains(Product.class.getSimpleName()))
&& allocation.getResourceClass().equals(Product.class)) {
// find the transaction caused by the task and product in question and replace the product in the
// transaction with the newly assigned product and revert this for an undo move
for (ProductInventoryTransactionPlannerDto transaction : transactions) {
if (transaction.getCauseId().equals(allocation.getTaskId())
&& transaction.getProductId().equals(allocation.getResource().getId())
&& transaction.getTransactionTypeName().equals(InventoryTransactionType.SUBTRACT)) {
transaction.setProductId(newResource.getId());
transactionFoundForTask = true;
break;
}
}
if (!transactionFoundForTask) {
throw new EmptyResultDataAccessException(
"Internal scheduler fail: no product transaction found for the product-requiring task with id: "
+ allocation.getTaskId() + " for product : " + allocation.getResource(), 1);
}
TaskPlannerDto thisTask = null;
for (TaskPlannerDto task : solution.getTasks()) {
if (task.getId().equals(allocation.getTaskId())) {
thisTask = task;
}
}
Long oldProductId = allocation.getResource().getId();
Long newProductId = newResource.getId();
for (OnHandForProduct onHandForProduct : onHandForProducts) {
if (onHandForProduct.getProductId().equals(oldProductId)
&& onHandForProduct.getDate().isAfter(
thisTask.getInterval().getStart().withTimeAtStartOfDay()
.plusDays(0/* - GeneralPrefs.PRODUCT_PRESENCE_SAFETY_BUFFER*/))) {
onHandForProduct.setAmount(onHandForProduct.getAmount() + allocation.getAmount());
}
if (onHandForProduct.getProductId().equals(newProductId)
&& onHandForProduct.getDate().isAfter(
thisTask.getInterval().getStart().withTimeAtStartOfDay()
.plusDays(0/* - GeneralPrefs.PRODUCT_PRESENCE_SAFETY_BUFFER*/))) {
onHandForProduct.setAmount(onHandForProduct.getAmount() - allocation.getAmount());
}
}
}
}
@Override
public Collection<? extends Object> getPlanningEntities() {
return Collections.singletonList(allocation);
}
@Override
public Collection<? extends Object> getPlanningValues() {
return Collections.singletonList(newResource);
}
@Override
public String toString() {
return "replacing resource " + allocation.getResource() + " for task with id " + allocation.getId() + " with "
+ newResource;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((allocation == null) ? 0 : allocation.hashCode());
result = prime * result + ((newResource == null) ? 0 : newResource.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
MoveResourceAllocations other = (MoveResourceAllocations) obj;
if (allocation == null) {
if (other.allocation != null)
return false;
} else if (!allocation.equals(other.allocation))
return false;
if (newResource == null) {
if (other.newResource != null)
return false;
} else if (!newResource.equals(other.newResource))
return false;
return true;
}
}
The config looks good.
1) Maybe the 2 Construction Heuristic phases never completely finish.
Turn on INFO logging (or better yet, DEBUG). It will log when each of the 2 Construction Heuristic phases ends.
2) Maybe Local Search starts with a ChangeMoveSelector (it's a union, so either of the 2 selectors can go first) and it hangs somehow in the filter. Turn on TRACE logging to see the selected moves.
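For reference, with a Logback setup the OptaPlanner log level can be raised with a fragment like this (a sketch; adapt it to whatever logging framework your application uses):

```xml
<configuration>
  <!-- DEBUG logs when each phase (e.g. each Construction Heuristic) ends;
       level="trace" additionally logs every selected move -->
  <logger name="org.optaplanner" level="debug"/>
</configuration>
```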
I found a memory leak in my Spring Batch code, just when I run the code below. Some people seem to say that JobExplorer causes a memory leak. Should I not use JobExplorer? Thanks for the help.
At boot:
Just 5 minutes later: 5 GB more memory consumed.
And an hour later, the OOM killer kills some processes.
I use:
Java 11
Spring Boot 2.7.1
spring-boot-starter-batch 2.4.0
This is my code: the Spring Batch process configuration and some classes.
- BlockProcessConfiguration
- JobValidator
BlockProcessConfiguration
@Configuration
@RequiredArgsConstructor
@Slf4j
@Profile("block")
public class BlockProcessConfiguration {
@Value("${isStanby:false}")
private Boolean isStanby;
private final JobValidator jobValidator; // injected via @RequiredArgsConstructor
@Scheduled(fixedDelay = 500)
public String launch() throws JobInstanceAlreadyCompleteException, JobExecutionAlreadyRunningException, JobParametersInvalidException, JobRestartException {
if (isStanby != null && isStanby) {
Boolean isRunningJob = jobValidator.isExistLatestRunningJob(JOB_NAME, 5000);
if (isRunningJob) {
return "skip";
}
}
return "completed";
}
JobValidator
import java.util.*;
@RequiredArgsConstructor
@Slf4j
@Component
public class JobValidator {
public enum batchMode {
RECOVER, FORWARD
}
private final JobExplorer jobExplorer;
public Boolean isExistLatestRunningJob(String jobName, long jobTTL) {
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 10000);
if (jobInstances.size() > 0) {
List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstances.get(0));
jobInstances.clear();
if (jobExecutions.size() > 0) {
JobExecution jobExecution = jobExecutions.get(0);
jobExecutions.clear();
// boolean isRunning = jobExecution.isRunning();
Date createTime = jobExecution.getCreateTime();
long now = new Date().getTime();
long timeFrame = now - createTime.getTime();
log.info("createTime.getTime() : {}", createTime.getTime());
log.info("isExistLatestRunningJob found jobExecution : id, status, timeFrame, jobTTL : {}, {}, {}, {}", jobExecution.getJobId(), jobExecution.getStatus(), timeFrame, jobTTL);
// if (jobExecution.isRunning() && (now.getTime() - createTime.getTime()) < jobTTL) {
if ( timeFrame < jobTTL ) {
log.info("isExistLatestRunningJob result : {}", true);
log.info("Job is already running, skip this job, job name : {}", jobName);
return true;
}
}
}
return false;
}
public Boolean isExecutableJob(String jobName, String paramKey, Long paramValue) {
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 1);
if (jobInstances.size() > 0) {
List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstances.get(0));
if (jobExecutions.size() > 0) {
JobExecution jobExecution = jobExecutions.get(0);
JobParameters jobParameters = jobExecution.getJobParameters();
Optional<Long> blockNumber = Optional.ofNullable(jobParameters.getLong(paramKey));
if (blockNumber.isPresent() && blockNumber.get().equals(paramValue)) {
if (jobExecution.getStatus().equals(BatchStatus.STARTED)) {
// throw new RuntimeException("waiting until previous job done");
log.info("waiting until previous job done ... : {}", jobName);
return false;
}
}
}
}
return true;
}
public Long getStartNumberFromBatch(String jobName, String batchMode, String paramKey1, String paramKey2, long defaultValue) {
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 20);
ArrayList<Long> failExecutionNumbers = new ArrayList<>();
ArrayList<Long> successExecutionNumbers = new ArrayList<>();
ArrayList<Long> successEndExecutionNumbers = new ArrayList<>();
ArrayList<JobExecution> executions = new ArrayList<>();
jobInstances.stream().map(jobInstance -> jobExplorer.getJobExecutions(jobInstance)).forEach(jobExecution -> {
JobParameters jobParameters = jobExecution.get(0).getJobParameters();
Optional<Long> param1 = Optional.ofNullable(jobParameters.getLong(paramKey1));
Optional<Long> param2 = Optional.ofNullable(jobParameters.getLong(paramKey2));
if (param1.isPresent() && param2.isPresent()) {
if (jobExecution.get(0).getExitStatus().getExitCode().equals("FAILED")) {
failExecutionNumbers.add(param1.get());
} else {
successExecutionNumbers.add(param1.get());
successEndExecutionNumbers.add(param2.get());
}
}
});
if (failExecutionNumbers.size() == 0 && successExecutionNumbers.size() == 0) {
return defaultValue;
}
long successMax = defaultValue;
long failMin = defaultValue;
if (successEndExecutionNumbers.size() > 0) {
successMax = Collections.max(successEndExecutionNumbers);
}
if (failExecutionNumbers.size() > 0) {
failExecutionNumbers.removeIf(successExecutionNumbers::contains);
if (failExecutionNumbers.size() > 0) {
failMin = Collections.min(failExecutionNumbers);
} else {
return successMax;
}
}
if (Objects.equals(batchMode, JobValidator.batchMode.RECOVER.toString())) {
return Math.min(failMin, successMax);
} else {
return Math.max(failMin, successMax);
}
}
}
I would not consider that a memory leak (and definitely not one in Spring Batch's code). The way you are checking things like isExistLatestRunningJob() involves retrieving a lot of data that is not really needed. For example, isExistLatestRunningJob() could be implemented with a single database query instead of retrieving up to 10,000 job instances with:
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 10000);
A query like the following should work:
SELECT E.JOB_EXECUTION_ID from BATCH_JOB_EXECUTION E, BATCH_JOB_INSTANCE I where E.JOB_INSTANCE_ID = I.JOB_INSTANCE_ID and I.JOB_NAME=? and E.START_TIME is not NULL and E.END_TIME is NULL
Add to that the fact that your method is called every 500 ms. Clearing the lists does not necessarily free memory at the time you might expect.
So I think you should optimize the way you retrieve data by doing the filtering on the database side instead of the application side.
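For illustration, that query could be wrapped in plain JDBC along these lines (a sketch, not a Spring Batch API; in a Spring application you would more likely run it through JdbcTemplate, and the connection handling is assumed to come from your data source):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RunningJobCheck {

    // Same idea as the query above: executions that started but have not ended yet.
    static final String RUNNING_EXECUTIONS_SQL =
            "SELECT E.JOB_EXECUTION_ID FROM BATCH_JOB_EXECUTION E, BATCH_JOB_INSTANCE I "
          + "WHERE E.JOB_INSTANCE_ID = I.JOB_INSTANCE_ID AND I.JOB_NAME = ? "
          + "AND E.START_TIME IS NOT NULL AND E.END_TIME IS NULL";

    /** Returns true if at least one execution of the given job is still running. */
    public static boolean isJobRunning(Connection connection, String jobName) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(RUNNING_EXECUTIONS_SQL)) {
            ps.setString(1, jobName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next(); // any row means an unfinished execution exists
            }
        }
    }
}
```

This keeps the filtering entirely on the database side, so the application never materializes thousands of JobInstance/JobExecution objects just to inspect one of them.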
I debugged your code.
The problem is that your enum field is defined as public and not static.
I think it will cause serious problems if you reference this field from another class.
Define the enum as static or private.
I hope this helps you find a solution.
I'm trying to write a custom Java rule to check whether the debug/trace log levels are enabled. If the log-level check has been forgotten, the rule should report an issue.
import org.apache.juli.logging.Log;
import org.apache.juli.logging.LogFactory;
public class CheckDebugAndTraceLevel {
private static final Log LOG = LogFactory.getLog(CheckDebugAndTraceLevel.class);
void foo()
{
if(LOG.isDebugEnabled())
{
LOG.debug("some debug text..");
}
LOG.debug("some debug text.."); // Noncompliant {{ check LOG.debug with an if statement}}
if(LOG.isTraceEnabled())
{
LOG.trace("some debug text..");
}
LOG.trace("some text.."); // Noncompliant {{ check LOG.trace with an if statement}}
}
}
I tested my rule with loggers and received "LOG.debug" and "LOG.trace" from my example class. Still, I get an assertion error while executing the JUnit test.
public class DebugAndTraceRule extends BaseTreeVisitor implements JavaFileScanner {
private JavaFileScannerContext context;
private boolean logFlag = false;
@Override
public void scanFile(JavaFileScannerContext context) {
this.context = context;
scan(context.getTree());
}
@Override
public void visitMethod(MethodTree tree) {
super.visitMethod(tree);
String logOption;
for (StatementTree statement : tree.block().body()) {
if (statement.is(Kind.EXPRESSION_STATEMENT)) {
logOption = statement.firstToken().text() + ".";
//System.out.println(statement);
ExpressionStatementTreeImpl eStatement = (ExpressionStatementTreeImpl) statement;
MethodInvocationTreeImpl methodInvoc = (MethodInvocationTreeImpl) eStatement.expression();
MemberSelectExpressionTreeImpl memberSel = (MemberSelectExpressionTreeImpl) methodInvoc.methodSelect();
logOption += memberSel.identifier();
// check logOption
if (logFlag && (logOption.equals("LOG.debug") || logOption.equals("LOG.trace"))) {
//System.out.println(logOption + " - line:" + statement.firstToken().line());
context.reportIssue(this, tree, "debug/trace levels of your logger must be enabled!");
}
}
}
}
@Override
public void visitVariable(VariableTree tree) {
super.visitVariable(tree);
if (tree.type().symbolType().toString().equals("Log")) {
logFlag = true;
}
}
}
If someone could help me out, I would really appreciate it!
Need a guideline:
How do I do a hard delete when no reference is available and a soft delete when a reference is available, with the operation performed in a single method?
E.g.
I have 1 master table and 3 transactional tables, and the master reference is available in all 3 transactional tables.
Now, while deleting a master row, I have to do the following: if a master reference is available, update the master table row; if no master reference is available, delete the row.
I tried following so far.
Service Implementation -
public Response doHardOrSoftDelete(Employee emp) {
boolean referenced = iMasterDao.isDataExist(emp);
boolean result;
if (referenced) {
result = iMasterDao.doSoftDelete(emp);
} else {
result = iMasterDao.doHardDelete(emp);
}
// build and return a Response based on 'result'
}
Second Approach:
As we know, while deleting a record, if a reference is available then a ConstraintViolationException is thrown, so we can simply catch it and check whether the caught exception is of type ConstraintViolationException; if yes, call doSoftDelete() and return the response. With this approach you don't need to write a method to check the references. But I'm not sure whether it is the right approach or not. Please help me with it.
Here is what I tried again -
public Response deleteEmployee(Employee emp) {
Response response = null;
try{
String status= iMasterDao.deleteEmployeeDetails(emp);
if(status.equals("SUCCESS")) {
response = new Response();
response.setStatus("Success");
response.setStatusCode("200");
response.setResult("True");
response.setReason("Record deleted successfully");
return response;
}else {
response = new Response();
response.setStatus("Fail");
response.setStatusCode("200");
response.setResult("False");
}
}catch(Exception e){
response = new Response();
Throwable t =e.getCause();
while ((t != null) && !(t instanceof ConstraintViolationException)) {
t = t.getCause();
}
if(t instanceof ConstraintViolationException){
boolean flag = iMasterDao.setEmployeeIsDeactive(emp);
if(flag) {
response.setStatus("Success");
response.setStatusCode("200");
response.setResult("True");
response.setReason("Record deleted successfully");
}else{
response.setStatus("Fail");
response.setStatusCode("200");
response.setResult("False");
}
}else {
response.setStatus("Fail");
response.setStatusCode("500");
response.setResult("False");
response.setReason("# EXCEPTION : " + e.getMessage());
}
}
return response;
}
Dao Implementation -
public boolean isDataExist(Employee emp) {
boolean flag = false;
List<Object[]> tbl1 = session.createQuery("FROM Table1 where emp_id=:id")
.setParameter("id",emp.getId())
.getResultList();
if(!tbl1.isEmpty() && tbl1.size() > 0) {
flag = true;
}
List<Object[]> tbl2 = session.createQuery("FROM Table2 where emp_id=:id")
.setParameter("id",emp.getId())
.getResultList();
if(!tbl2.isEmpty() && tbl2.size() > 0) {
flag = true;
}
List<Object[]> tbl3 = session.createQuery("FROM Table3 where emp_id=:id")
.setParameter("id",emp.getId())
.getResultList();
if(!tbl3.isEmpty() && tbl3.size() > 0) {
flag = true;
}
return flag;
}
public boolean doSoftDelete(Employee emp) {
Employee empDet = session.get(Employee.class, emp.getId());
empDet.setIsActive("N");
session.update(empDet);
return true;
}
public boolean doHardDelete(Employee emp) {
Employee empDet = session.get(Employee.class, emp.getId());
session.delete(empDet);
return true;
}
No matter how many transactional tables are added with a master table reference, my code should perform the operations (soft/hard delete) accordingly.
In my case, every time a new transactional table is added with a master reference, I have to update the checks. So I simply want to avoid maintaining the isDataExist() method by hand and do the deletions accordingly. How can I do this in a better way?
Please help me with the right approach.
There's a lot of repeated code in the body of the isDataExist() method, which is both hard to maintain and hard to extend (if you have to add 3 more tables, the code will double in size).
On top of that, the logic is not optimal, as it will query all tables even when the result from the first one is enough to return true.
Here is a simplified version (please note that I haven't tested the code and there could be errors, but it should be enough to explain the concept):
public boolean isDataExist(Employee emp) {
List<String> tableNames = List.of("Table1", "Table2", "Table3");
for (String tableName : tableNames) {
if (existsInTable(tableName, emp.getId())) {
return true;
}
}
return false;
}
private boolean existsInTable(String tableName, Long employeeId) {
String query = String.format("SELECT count(*) FROM %s WHERE emp_id=:id", tableName);
long count = (long)session
.createQuery(query)
.setParameter("id", employeeId)
.getSingleResult();
return count > 0;
}
isDataExist() contains a list of all table names and iterates over these until the first successful encounter of the required Employee id in which case it returns true. If not found in any table the method returns false.
private boolean existsInTable(String tableName, Long employeeId) is a helper method that does the actual search for employeeId in the specified tableName.
I changed the query to just return the count (0 or more) instead of the actual entity objects, as these are not required and there's no point in fetching them.
EDIT in response to the "Second approach"
Is the Second Approach meeting the requirements?
If so, then it is a "right approach" to the problem. :)
I would refactor the deleteEmployeeDetails method to either return a boolean (if just two possible outcomes are expected) or a custom enum, as using a String here doesn't seem appropriate.
There is repeated code in deleteEmployee(), and that is never a good thing. You should separate the logic that decides the type of the response from the code that builds it, thus making your code easier to follow, debug, and extend when required.
Let me know if you need a code example of the ideas above.
EDIT #2
Here is the sample code as requested.
First we define a Status enum which should be used as return type from MasterDao's methods:
public enum Status {
DELETE_SUCCESS("Success", "200", "True", "Record deleted successfully"),
DELETE_FAIL("Fail", "200", "False", ""),
DEACTIVATE_SUCCESS("Success", "200", "True", "Record deactivated successfully"),
DEACTIVATE_FAIL("Fail", "200", "False", ""),
ERROR("Fail", "500", "False", "");
private String status;
private String statusCode;
private String result;
private String reason;
Status(String status, String statusCode, String result, String reason) {
this.status = status;
this.statusCode = statusCode;
this.result = result;
this.reason = reason;
}
// Getters
}
MasterDao methods changed to return Status instead of String or boolean:
public Status deleteEmployeeDetails(Employee employee) {
return Status.DELETE_SUCCESS; // or Status.DELETE_FAIL
}
public Status deactivateEmployee(Employee employee) {
return Status.DEACTIVATE_SUCCESS; // or Status.DEACTIVATE_FAIL
}
Here is the new deleteEmployee() method:
public Response deleteEmployee(Employee employee) {
Status status;
String reason = null;
try {
status = masterDao.deleteEmployeeDetails(employee);
} catch (Exception e) {
if (isConstraintViolationException(e)) {
status = masterDao.deactivateEmployee(employee);
} else {
status = Status.ERROR;
reason = "# EXCEPTION : " + e.getMessage();
}
}
return buildResponse(status, reason);
}
It uses two simple utility methods (you can make these static or export to utility class as they do not depend on the internal state).
First checks if the root cause of the thrown exception is ConstraintViolationException:
private boolean isConstraintViolationException(Throwable throwable) {
Throwable root = throwable;
while (root != null && !(root instanceof ConstraintViolationException)) {
root = root.getCause();
}
return root != null;
}
And the second one builds the Response out of the Status and a reason:
private Response buildResponse(Status status, String reason) {
Response response = new Response();
response.setStatus(status.getStatus());
response.setStatusCode(status.getStatusCode());
response.setResult(status.getResult());
if (reason != null) {
response.setReason(reason);
} else {
response.setReason(status.getReason());
}
return response;
}
If you do not like to have the Status enum loaded with default Response messages, you could strip it from the extra info:
public enum Status {
DELETE_SUCCESS, DELETE_FAIL, DEACTIVATE_SUCCESS, DEACTIVATE_FAIL, ERROR;
}
And use switch or if-else statements in buildResponse(Status status, String reason) method to build the response based on the Status type.
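As a rough sketch of that last variant, buildResponse() could switch over the slim Status enum like this (the Response type here is a hypothetical stand-in with public fields for brevity; the message strings mirror the ones used earlier in the answer):

```java
// Sketch: mapping a plain Status enum to a Response via a switch statement.
public class ResponseBuilder {

    public enum Status { DELETE_SUCCESS, DELETE_FAIL, DEACTIVATE_SUCCESS, DEACTIVATE_FAIL, ERROR }

    // Stand-in for the Response class from the question (public fields for brevity).
    public static class Response {
        public String status;
        public String statusCode;
        public String result;
        public String reason;
    }

    public static Response buildResponse(Status status, String reason) {
        Response response = new Response();
        switch (status) {
            case DELETE_SUCCESS:
                fill(response, "Success", "200", "True", "Record deleted successfully");
                break;
            case DEACTIVATE_SUCCESS:
                fill(response, "Success", "200", "True", "Record deactivated successfully");
                break;
            case DELETE_FAIL:
            case DEACTIVATE_FAIL:
                fill(response, "Fail", "200", "False", "");
                break;
            case ERROR:
                fill(response, "Fail", "500", "False", "");
                break;
        }
        // An explicit reason (e.g. an exception message) overrides the default one.
        if (reason != null) {
            response.reason = reason;
        }
        return response;
    }

    private static void fill(Response r, String status, String code, String result, String reason) {
        r.status = status;
        r.statusCode = code;
        r.result = result;
        r.reason = reason;
    }
}
```

This keeps all the response-building knowledge in one place, so adding a new outcome means adding one enum constant and one switch branch.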
I'm working on a cache implementation with RxJava 2. What I need is: when the network request fails, my repository should emit the stale data and then show the error message. While I'm able to emit the cached item with .onErrorReturnItem(cachedItem), the error gets lost. I'm also able to concat cached data with the network request, but it is a bit cumbersome:
public Observable<Dashboard> getDashboard(String phoneNum, boolean getNewData) {
if (getNewData) invalidateDashboardCache();//just set dashboardCacheValid = false
Observable<Dashboard> observableToCache = Observable.fromCallable(
() -> {
Dashboard cached = mCache.getDashboard(phoneNum);
if (cached != null) {
if (!cached.cacheValid()) {
dashboardCacheValid = false;
}
return cached;
}
dashboardCacheValid = false;
return Dashboard.EMPTY;
})
.concatMap(cachedDashboard -> Observable.concat(Observable.just(cachedDashboard),
Observable.fromCallable(() -> !dashboardCacheValid)
.filter(Boolean::booleanValue)
.flatMap(cacheNotValid -> mNetworkHelper.getDashboardRaw(phoneNum))
.doOnNext(dashboard -> {
mCache.putDashboard(phoneNum, dashboard);
dashboardCacheValid = true;
})));
return cacheObservable(CACHE_PREFIX_GET_DASHBOARD + phoneNum, observableToCache); //this is for multiple calls
}
Is there a way to modify .onErrorReturnItem(cachedDashboard) to achieve this?
Thanks to @akarnokd I was able to solve it properly and with much cleaner code:
public Observable<Dashboard> getDashboardNew(String phoneNum, boolean getNewData) {
Dashboard fromCache = mCache.getDashboard(phoneNum, getNewData);
dashboardCacheValid = fromCache.cacheValid();
if (getNewData) invalidateDashboardCache();
if (dashboardCacheValid) {
return Observable.just(fromCache);
} else {
final Dashboard cached = fromCache;
Observable<Dashboard> observableToCache = mNetworkHelper.getDashboardRaw(phoneNum)
.doOnNext(dashboard -> mCache.putDashboard(phoneNum, dashboard))
.onErrorResumeNext(throwable -> {
return Observable.concat(Observable.just(cached), Observable.error(throwable));
});
return cacheObservable(CACHE_PREFIX_GET_DASHBOARD + phoneNum, observableToCache);
}
}
I have two Observables, let's call them PeanutButter and Jelly. I'd like to combine them to a Sandwich Observable. I can do that using:
Observable<PeanutButter> peanutButterObservable = ...;
Observable<Jelly> jellyObservable = ...;
Observable<Sandwich> sandwichObservable = Observable.combineLatest(
peanutButterObservable,
jellyObservable,
(pb, j) -> makeSandwich(pb, j))
The problem is that RX waits for the first PeanutButter and the first Jelly to be emitted before emitting the first combined Sandwich, but Jelly may never be emitted, which means I never get the first Sandwich.
I'd like to combine the two feeds such that a combined item is emitted as soon as the first item from either feed arrives, regardless of whether the other feed has emitted anything yet. How do I do that in RxJava?
One possible approach would be to use the startWith operator to trigger an emission of a known value from each stream upon subscription. This way combineLatest() will trigger if either stream emits a value. You'd just have to be mindful of looking out for the initial/signal values in the onNext consumer.
Something like this:
@Test
public void sandwiches() {
final Observable<String> peanutButters = Observable.just("chunky", "smooth")
.startWith("--initial--");
final Observable<String> jellies = Observable.just("strawberry", "blackberry", "raspberry")
.startWith("--initial--");
Observable.combineLatest(peanutButters, jellies, (peanutButter, jelly) -> {
return new Pair<>(peanutButter, jelly);
})
.subscribe(
next -> {
final String peanutButter = next.getFirst();
final String jelly = next.getSecond();
if(peanutButter.equals("--initial--") && jelly.equals("--initial--")) {
// initial emissions
} else if(peanutButter.equals("--initial--")) {
// jelly emission
} else if(jelly.equals("--initial--")) {
// peanut butter emission
} else {
// peanut butter + jelly emissions
}
},
error -> {
System.err.println("## onError(" + error.getMessage() + ")");
},
() -> {
System.out.println("## onComplete()");
}
);
}
I think this problem can be approached by using merge and scan operators:
public class RxJavaUnitTestJava {
public Observable<Sandwich> getSandwich(Observable<Jelly> jelly, Observable<PeanutButter> peanutButter) {
return Observable.merge(jelly, peanutButter)
.scan(new Sandwich(null, null), (BiFunction<Object, Object, Object>) (prevResult, newItem) -> {
Sandwich prevSandwich = (Sandwich) prevResult;
if (newItem instanceof Jelly) {
System.out.println("emitted: " + ((Jelly) newItem).tag);
return new Sandwich((Jelly) newItem, prevSandwich.peanutButter);
} else {
System.out.println("emitted: " + ((PeanutButter) newItem).tag);
return new Sandwich(prevSandwich.jelly, (PeanutButter) newItem);
}
})
.skip(1) // skip emitting scan's default item
.cast(Sandwich.class);
}
@Test
public void testGetSandwich() {
PublishSubject<Jelly> jelly = PublishSubject.create();
PublishSubject<PeanutButter> peanutButter = PublishSubject.create();
getSandwich(jelly, peanutButter).subscribe(new Observer<Sandwich>() {
@Override
public void onSubscribe(Disposable d) {
System.out.println("onSubscribe");
}
@Override
public void onNext(Sandwich sandwich) {
System.out.println("onNext: Sandwich: " + sandwich.toString());
}
@Override
public void onError(Throwable e) {
System.out.println("onError: " + e.toString());
}
@Override
public void onComplete() {
System.out.println("onComplete");
}
});
jelly.onNext(new Jelly("jelly1"));
jelly.onNext(new Jelly("jelly2"));
peanutButter.onNext(new PeanutButter("peanutButter1"));
jelly.onNext(new Jelly("jelly3"));
peanutButter.onNext(new PeanutButter("peanutButter2"));
}
class Jelly {
String tag;
public Jelly(String tag) {
this.tag = tag;
}
}
class PeanutButter {
String tag;
public PeanutButter(String tag) {
this.tag = tag;
}
}
class Sandwich {
Jelly jelly;
PeanutButter peanutButter;
public Sandwich(Jelly jelly, PeanutButter peanutButter) {
this.jelly = jelly;
this.peanutButter = peanutButter;
}
@Override
public String toString() {
String jellyResult = (jelly != null) ? jelly.tag : "no jelly";
String peanutButterResult = (peanutButter != null) ? peanutButter.tag : "no peanutButter";
return jellyResult + " | " + peanutButterResult;
}
}
}
Output:
onSubscribe
emitted: jelly1
onNext: Sandwich: jelly1 | no peanutButter
emitted: jelly2
onNext: Sandwich: jelly2 | no peanutButter
emitted: peanutButter1
onNext: Sandwich: jelly2 | peanutButter1
emitted: jelly3
onNext: Sandwich: jelly3 | peanutButter1
emitted: peanutButter2
onNext: Sandwich: jelly3 | peanutButter2
The fact that Jelly, PeanutButter, and Sandwich are all independent types makes the scan a bit more complex around casting and nullability. If you have control over these types, this solution can be further improved.
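One way that improvement could look (a hypothetical sketch: the tags are simplified to Strings, and the `withJelly`/`withPeanutButter` copy methods are names introduced here, not part of the original code): give Sandwich immutable copy-style methods, so the scan accumulator can stay fully typed and simply call `prev.withJelly(...)` or `prev.withPeanutButter(...)` instead of casting.

```java
// Immutable Sandwich with copy-style "with" methods, avoiding the casts in scan.
public class SandwichSketch {

    public static final class Sandwich {
        public final String jelly;         // tag of the last Jelly, or null
        public final String peanutButter;  // tag of the last PeanutButter, or null

        public Sandwich(String jelly, String peanutButter) {
            this.jelly = jelly;
            this.peanutButter = peanutButter;
        }

        // Each "with" method returns a new Sandwich with one ingredient replaced.
        public Sandwich withJelly(String newJelly) {
            return new Sandwich(newJelly, peanutButter);
        }

        public Sandwich withPeanutButter(String newPeanutButter) {
            return new Sandwich(jelly, newPeanutButter);
        }

        @Override
        public String toString() {
            return (jelly != null ? jelly : "no jelly") + " | "
                 + (peanutButter != null ? peanutButter : "no peanutButter");
        }
    }
}
```

In the scan, the accumulator would then be `(prev, item) -> item instanceof Jelly ? prev.withJelly(...) : prev.withPeanutButter(...)`, with no casts of the accumulated Sandwich itself.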