I found a memory leak in my Spring Batch code that appears as soon as I run the code below. Some people seem to say that JobExplorer causes a memory leak. Should I avoid using JobExplorer? Thanks for the help.
At boot:
Just 5 minutes later: 5 GB more memory consumed.
And an hour later, the OOM killer kills some processes.
I'm using:
java 11
spring boot 2.7.1
spring-boot-starter-batch 2.4.0
This is my code.
The Spring Batch process configuration and the related classes:
- BlockProcessConfiguration
- jobValidator
BlockProcessConfiguration
@Configuration
@RequiredArgsConstructor
@Slf4j
@Profile("block")
public class BlockProcessConfiguration {
@Value("${isStanby:false}")
private Boolean isStanby;
private final JobValidator jobValidator;
@Scheduled(fixedDelay = 500)
public String launch() throws JobInstanceAlreadyCompleteException, JobExecutionAlreadyRunningException, JobParametersInvalidException, JobRestartException {
if (isStanby != null && isStanby) {
Boolean isRunningJob = jobValidator.isExistLatestRunningJob(JOB_NAME, 5000);
if (isRunningJob) {
return "skip";
}
}
return "completed";
}
}
jobValidator
import java.util.*;
@RequiredArgsConstructor
@Slf4j
@Component
public class JobValidator {
public enum batchMode {
RECOVER, FORWARD
}
private final JobExplorer jobExplorer;
public Boolean isExistLatestRunningJob(String jobName, long jobTTL) {
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 10000);
if (jobInstances.size() > 0) {
List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstances.get(0));
jobInstances.clear();
if (jobExecutions.size() > 0) {
JobExecution jobExecution = jobExecutions.get(0);
jobExecutions.clear();
// boolean isRunning = jobExecution.isRunning();
Date createTime = jobExecution.getCreateTime();
long now = new Date().getTime();
long timeFrame = now - createTime.getTime();
log.info("createTime.getTime() : {}", createTime.getTime());
log.info("isExistLatestRunningJob found jobExecution : id, status, timeFrame, jobTTL : {}, {}, {}, {}", jobExecution.getJobId(), jobExecution.getStatus(), timeFrame, jobTTL);
// if (jobExecution.isRunning() && (now.getTime() - createTime.getTime()) < jobTTL) {
if ( timeFrame < jobTTL ) {
log.info("isExistLatestRunningJob result : {}", true);
log.info("Job is already running, skip this job, job name : {}", jobName);
return true;
}
}
}
return false;
}
public Boolean isExecutableJob(String jobName, String paramKey, Long paramValue) {
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 1);
if (jobInstances.size() > 0) {
List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstances.get(0));
if (jobExecutions.size() > 0) {
JobExecution jobExecution = jobExecutions.get(0);
JobParameters jobParameters = jobExecution.getJobParameters();
Optional<Long> blockNumber = Optional.ofNullable(jobParameters.getLong(paramKey));
if (blockNumber.isPresent() && blockNumber.get().equals(paramValue)) {
if (jobExecution.getStatus().equals(BatchStatus.STARTED)) {
// throw new RuntimeException("waiting until previous job done");
log.info("waiting until previous job done ... : {}", jobName);
return false;
}
}
}
}
return true;
}
public Long getStartNumberFromBatch(String jobName, String batchMode, String paramKey1, String paramKey2, long defaultValue) {
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 20);
ArrayList<Long> failExecutionNumbers = new ArrayList<>();
ArrayList<Long> successExecutionNumbers = new ArrayList<>();
ArrayList<Long> successEndExecutionNumbers = new ArrayList<>();
ArrayList<JobExecution> executions = new ArrayList<>();
jobInstances.stream().map(jobInstance -> jobExplorer.getJobExecutions(jobInstance)).forEach(jobExecution -> {
JobParameters jobParameters = jobExecution.get(0).getJobParameters();
Optional<Long> param1 = Optional.ofNullable(jobParameters.getLong(paramKey1));
Optional<Long> param2 = Optional.ofNullable(jobParameters.getLong(paramKey2));
if (param1.isPresent() && param2.isPresent()) {
if (jobExecution.get(0).getExitStatus().getExitCode().equals("FAILED")) {
failExecutionNumbers.add(param1.get());
} else {
successExecutionNumbers.add(param1.get());
successEndExecutionNumbers.add(param2.get());
}
}
});
if (failExecutionNumbers.size() == 0 && successExecutionNumbers.size() == 0) {
return defaultValue;
}
long successMax = defaultValue;
long failMin = defaultValue;
if (successEndExecutionNumbers.size() > 0) {
successMax = Collections.max(successEndExecutionNumbers);
}
if (failExecutionNumbers.size() > 0) {
failExecutionNumbers.removeIf(successExecutionNumbers::contains);
if (failExecutionNumbers.size() > 0) {
failMin = Collections.min(failExecutionNumbers);
} else {
return successMax;
}
}
if (Objects.equals(batchMode, JobValidator.batchMode.RECOVER.toString())) {
return Math.min(failMin, successMax);
} else {
return Math.max(failMin, successMax);
}
}
}
I would not consider that a memory leak (and definitely not one in Spring Batch's code). The way you are checking things like isExistLatestRunningJob involves retrieving a lot of data that is not really needed. For example, the method isExistLatestRunningJob() could be implemented with a single database query instead of retrieving 10000 job instances with:
List<JobInstance> jobInstances = jobExplorer.findJobInstancesByJobName(jobName, 0, 10000);
A query like the following should work:
SELECT E.JOB_EXECUTION_ID from BATCH_JOB_EXECUTION E, BATCH_JOB_INSTANCE I where E.JOB_INSTANCE_ID = I.JOB_INSTANCE_ID and I.JOB_NAME=? and E.START_TIME is not NULL and E.END_TIME is NULL
Add to that the fact that your method is called every 500ms. Also, clearing the lists does not necessarily free memory at the time you might expect.
So I think you should find a way to optimize the way you retrieve data by doing the filtering on the database side instead of the application side.
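As a sketch of that idea, here is how the query above could be run directly with plain JDBC against the default Spring Batch metadata tables (the `RunningJobCheck` class, its method names, and the `BATCH_` table prefix assumption are illustrative, not part of the original code):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RunningJobCheck {

    // Counts executions that have started but not yet ended, i.e. running jobs.
    static final String RUNNING_JOB_SQL =
        "SELECT COUNT(E.JOB_EXECUTION_ID) "
      + "FROM BATCH_JOB_EXECUTION E, BATCH_JOB_INSTANCE I "
      + "WHERE E.JOB_INSTANCE_ID = I.JOB_INSTANCE_ID "
      + "AND I.JOB_NAME = ? "
      + "AND E.START_TIME IS NOT NULL AND E.END_TIME IS NULL";

    // Returns true if at least one execution of the given job is still running.
    static boolean isJobRunning(Connection connection, String jobName) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(RUNNING_JOB_SQL)) {
            ps.setString(1, jobName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() && rs.getLong(1) > 0;
            }
        }
    }
}
```

This pushes the filtering to the database, so the scheduler loop never materializes thousands of JobInstance/JobExecution objects every 500ms.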
I debugged your code.
The problem is that your enum field is declared public and not static.
I think it will cause a serious problem if you reference this field from another class.
Declare the enum as static or private.
I hope this helps you find a solution.
Related
I am using JPA to store data and faced two problems during implementation. I have two entities (Station and Commodity) with a many-to-many relationship through an intermediate table, so I had to create a third entity. When the app receives a message it converts its data to entities and saves them, but sometimes the app throws a ConstraintViolationException because there is a null value in the foreign key field referencing the Commodity entity.
I've tried a simple approach: selecting the needed commodity from the database and saving it if there is none. Then I switched to bulk-searching all commodities of a message and putting them where needed. Neither did the trick.
In my opinion the problem could be caused by multi-threaded reads/inserts.
The second problem is that the service stops running when the exception is thrown. Losing some transactions is not a big deal, but the app simply stops after the rollback.
How can I resolve these conflicts?
Here is the code of the data-handling class and a diagram of the entities:
@Service
@AllArgsConstructor
@Slf4j
public class ZeromqCommoditiesServiceImpl implements ZeromqCommoditesService {
private final CategoryTransactionHandler categoryHandler;
private final CommodityTransactionHandler commodityHandler;
private final EconomyTransactionHandler economyHandler;
private final StationTransactionHandler stationHandler;
private final SystemTransactionHandler systemHandler;
@Override
@Transactional(
isolation = Isolation.READ_COMMITTED,
propagation = Propagation.REQUIRES_NEW,
rollbackFor = Throwable.class)
@Modifying
public void saveData(ZeromqCommodityPayload payload) {
CommodityContent content = payload.getContent();
var station = stationHandler.createOrFindStation(content.getStationName());
var system = systemHandler.createOrFindSystem(content.getSystemName());
var commodityReferences = getMapOfCommodities(content);
station.setSystem(system);
updateEconomies(station, content);
updateProhibited(station, content, commodityReferences);
updateStationCommodities(station, content, commodityReferences);
try {
saveStation(station);
} catch (ConstraintViolationException | PersistentObjectException | DataAccessException e) {
log.error("Error saving commodity info \n" + content, e);
}
}
public void saveStation(StationEntity station) {
stationHandler.saveStation(station);
if (station.getId() != null) {
log.debug(String.format("Updated \"%s\" station info", station.getName()));
} else {
log.debug(String.format("Saved new \"%s\" station info", station.getName()));
}
}
private void updateEconomies(StationEntity station, CommodityContent content) {
station.getEconomies().clear();
if (content.getEconomies() != null) {
var economies = content.getEconomies()
.stream()
.map(economy -> {
var stationEconomyEntity = economyHandler.createOrFindEconomy(economy.getName());
Double proportion = economy.getProportion();
stationEconomyEntity.setProportion(proportion != null ? proportion : 1.0);
return stationEconomyEntity;
})
.peek(economy -> economy.setStation(station))
.toList();
station.getEconomies().addAll(economies);
}
}
private void updateProhibited(
StationEntity station,
CommodityContent content,
Map<String, CommodityEntity> commodityEntityMap) {
station.getProhibited().clear();
if (content.getProhibited() != null) {
var prohibitedCommodityEntities = content.getProhibited()
.stream()
.map(prohibited -> {
String eddnName = prohibited.toLowerCase(Locale.ROOT);
CommodityEntity commodityReference = getCommodityEntity(commodityEntityMap, eddnName);
return new ProhibitedCommodityEntity(station, commodityReference);
}
)
.toList();
station.getProhibited().addAll(prohibitedCommodityEntities);
}
}
private void updateStationCommodities(
StationEntity station,
CommodityContent content,
Map<String, CommodityEntity> commodityEntityMap) {
station.getCommodities().clear();
if (content.getCommodities() != null) {
var commodities = content.getCommodities()
.stream()
.map(commodity -> {
CommodityEntity commodityReference = getCommodityEntity(
commodityEntityMap,
commodity.getEddnName());
return StationCommodityEntity.builder()
.commodity(commodityReference)
.buyPrice(commodity.getBuyPrice())
.sellPrice(commodity.getSellPrice())
.demand(commodity.getDemand())
.stock(commodity.getStock())
.station(station)
.build();
})
.toList();
station.getCommodities().addAll(commodities);
}
}
private CommodityEntity getCommodityEntity(Map<String, CommodityEntity> commodityEntityMap, String eddnName) {
return commodityEntityMap.get(eddnName);
}
private Map<String, CommodityEntity> getMapOfCommodities(@NotNull CommodityContent content) {
Set<String> commodities = content.getCommodities()
.stream()
.map(Commodity::getEddnName)
.collect(Collectors.toSet());
if (content.getProhibited() != null && content.getProhibited().size() > 0) {
commodities.addAll(content.getProhibited().
stream()
.map(item -> item.toLowerCase(Locale.ROOT))
.collect(Collectors.toSet()));
}
var commodityReferencesMap = commodityHandler.findAllByEddnName(commodities)
.stream()
.collect(Collectors.toMap(
CommodityEntity::getEddnName,
item -> item
));
commodities.forEach(commodity -> {
if (commodityReferencesMap.get(commodity.toLowerCase()) == null) {
CommodityCategoryEntity category = categoryHandler.createOrFindCategory("Unknown");
CommodityEntity newCommodity = new CommodityEntity(commodity, commodity, category);
CommodityEntity managedCommodity = commodityHandler.saveCommodity(newCommodity);
commodityReferencesMap.put(managedCommodity.getEddnName(), managedCommodity);
}
});
return commodityReferencesMap;
}
}
Thanks in advance
I need some guidance:
How do I do a hard delete when no reference is available and a soft delete when a reference is available? This operation should be performed in a single method.
E.g.
I have 1 master table and 3 transactional tables, and the master reference is available in all 3 transactional tables.
Now, while deleting a master row, I have to do the following: if a master reference is available, update the master table row; if no master reference is available, delete the row.
I tried following so far.
Service Implementation -
public Response doHardOrSoftDelete(Employee emp) {
boolean flag = iMasterDao.isDataExist(emp);
boolean result;
if (flag) {
result = iMasterDao.doSoftDelete(emp);
} else {
result = iMasterDao.doHardDelete(emp);
}
// building and returning the Response from result is omitted in this snippet
}
Second Approach:
As we know, deleting a record whose reference is still in use throws a ConstraintViolationException, so we can simply catch it and check whether the caught exception is of type ConstraintViolationException; if yes, call the doSoftDelete() method and return the response. That way you don't need to write a method to check the references. But I'm not sure whether this is the right approach or not. Please help me with it.
Here is what I tried again -
public Response deleteEmployee(Employee emp) {
Response response = null;
try{
String status= iMasterDao.deleteEmployeeDetails(emp);
if(status.equals("SUCCESS")) {
response = new Response();
response.setStatus("Success");
response.setStatusCode("200");
response.setResult("True");
response.setReason("Record deleted successfully");
return response;
}else {
response = new Response();
response.setStatus("Fail");
response.setStatusCode("200");
response.setResult("False");
}
}catch(Exception e){
response = new Response();
Throwable t =e.getCause();
while ((t != null) && !(t instanceof ConstraintViolationException)) {
t = t.getCause();
}
if(t instanceof ConstraintViolationException){
boolean flag = iMasterDao.setEmployeeIsDeactive(emp);
if(flag) {
response.setStatus("Success");
response.setStatusCode("200");
response.setResult("True");
response.setReason("Record deleted successfully");
}else{
response.setStatus("Fail");
response.setStatusCode("200");
response.setResult("False");
}
}else {
response.setStatus("Fail");
response.setStatusCode("500");
response.setResult("False");
response.setReason("# EXCEPTION : " + e.getMessage());
}
}
return response;
}
Dao Implementation -
public boolean isDataExist(Employee emp) {
boolean flag = false;
List<Object[]> tbl1 = session.createQuery("FROM Table1 where emp_id=:id")
.setParameter("id",emp.getId())
.getResultList();
if(!tbl1.isEmpty() && tbl1.size() > 0) {
flag = true;
}
List<Object[]> tbl2 = session.createQuery("FROM Table2 where emp_id=:id")
.setParameter("id",emp.getId())
.getResultList();
if(!tbl2.isEmpty() && tbl2.size() > 0) {
flag = true;
}
List<Object[]> tbl3 = session.createQuery("FROM Table3 where emp_id=:id")
.setParameter("id",emp.getId())
.getResultList();
if(!tbl3.isEmpty() && tbl3.size() > 0) {
flag = true;
}
return flag;
}
public boolean doSoftDelete(Employee emp) {
empDet = session.get(Employee.class, emp.getId());
empDet.setIsActive("N");
session.update(empDet);
return true;
}
public boolean doHardDelete(Employee emp) {
empDet = session.get(Employee.class, emp.getId());
session.delete(empDet);
return true;
}
No matter how many transactional tables with a master table reference are added, my code should perform the operations (soft/hard delete) accordingly.
In my case, every time a new transactional table with a master reference is added I have to extend the checks, so I simply want to skip the isDataExist() method and do the deletion accordingly. How can I do this in a better way?
Please help me with the right approach to do the same.
There's a lot of repeated code in the body of the isDataExist() method, which is both hard to maintain and hard to extend (if you have to add 3 more tables, the code will double in size).
On top of that the logic is not optimal as it will go over all tables even if the result from the first one is enough to return true.
Here is a simplified version (please note that I haven't tested the code and there could be errors, but it should be enough to explain the concept):
public boolean isDataExist(Employee emp) {
List<String> tableNames = List.of("Table1", "Table2", "Table3");
for (String tableName : tableNames) {
if (existsInTable(tableName, emp.getId())) {
return true;
}
}
return false;
}
private boolean existsInTable(String tableName, Long employeeId) {
String query = String.format("SELECT count(*) FROM %s WHERE emp_id=:id", tableName);
long count = (long)session
.createQuery(query)
.setParameter("id", employeeId)
.getSingleResult();
return count > 0;
}
isDataExist() contains a list of all table names and iterates over these until the first successful encounter of the required Employee id in which case it returns true. If not found in any table the method returns false.
private boolean existsInTable(String tableName, Long employeeId) is a helper method that does the actual search for employeeId in the specified tableName.
I changed the query to just return the count (0 or more) instead of the actual entity objects, as these are not required and there's no point in fetching them.
EDIT in response to the "Second approach"
Is the Second Approach meeting the requirements?
If so, then it is a "right approach" to the problem. :)
I would refactor the deleteEmployeeDetails method to either return a boolean (if just two possible outcomes are expected) or a custom enum, as using a String here doesn't seem appropriate.
There is repeated code in deleteEmployeeDetails and this is never a good thing. You should separate the logic which decides the type of the response from the code that builds it, thus making your code easier to follow, debug and extend when required.
Let me know if you need a code example of the ideas above.
EDIT #2
Here is the sample code as requested.
First we define a Status enum which should be used as return type from MasterDao's methods:
public enum Status {
DELETE_SUCCESS("Success", "200", "True", "Record deleted successfully"),
DELETE_FAIL("Fail", "200", "False", ""),
DEACTIVATE_SUCCESS("Success", "200", "True", "Record deactivated successfully"),
DEACTIVATE_FAIL("Fail", "200", "False", ""),
ERROR("Fail", "500", "False", "");
private String status;
private String statusCode;
private String result;
private String reason;
Status(String status, String statusCode, String result, String reason) {
this.status = status;
this.statusCode = statusCode;
this.result = result;
this.reason = reason;
}
// Getters
}
MasterDao methods changed to return Status instead of String or boolean:
public Status deleteEmployeeDetails(Employee employee) {
return Status.DELETE_SUCCESS; // or Status.DELETE_FAIL
}
public Status deactivateEmployee(Employee employee) {
return Status.DEACTIVATE_SUCCESS; // or Status.DEACTIVATE_FAIL
}
Here is the new deleteEmployee() method:
public Response deleteEmployee(Employee employee) {
Status status;
String reason = null;
try {
status = masterDao.deleteEmployeeDetails(employee);
} catch (Exception e) {
if (isConstraintViolationException(e)) {
status = masterDao.deactivateEmployee(employee);
} else {
status = Status.ERROR;
reason = "# EXCEPTION : " + e.getMessage();
}
}
return buildResponse(status, reason);
}
It uses two simple utility methods (you can make these static or extract them to a utility class, as they do not depend on internal state).
First checks if the root cause of the thrown exception is ConstraintViolationException:
private boolean isConstraintViolationException(Throwable throwable) {
Throwable root = throwable;
while (root != null && !(root instanceof ConstraintViolationException)) {
root = root.getCause();
}
return root != null;
}
And the second one builds the Response out of the Status and a reason:
private Response buildResponse(Status status, String reason) {
Response response = new Response();
response.setStatus(status.getStatus());
response.setStatusCode(status.getStatusCode());
response.setResult(status.getResult());
if (reason != null) {
response.setReason(reason);
} else {
response.setReason(status.getReason());
}
return response;
}
If you do not like having the Status enum loaded with default Response messages, you could strip it of the extra info:
public enum Status {
DELETE_SUCCESS, DELETE_FAIL, DEACTIVATE_SUCCESS, DEACTIVATE_FAIL, ERROR;
}
And use a switch or if-else statement in the buildResponse(Status status, String reason) method to build the response based on the Status type.
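A minimal sketch of that switch-based buildResponse() with the slim Status enum (the Response class here is a stand-in for the POJO in the original code, with its fields inlined for brevity):

```java
public class ResponseBuilderSketch {

    enum Status { DELETE_SUCCESS, DELETE_FAIL, DEACTIVATE_SUCCESS, DEACTIVATE_FAIL, ERROR }

    // Stand-in for the original Response POJO.
    static class Response {
        String status, statusCode, result, reason;
    }

    static Response buildResponse(Status status, String reason) {
        Response response = new Response();
        switch (status) {
            case DELETE_SUCCESS:
                response.status = "Success";
                response.statusCode = "200";
                response.result = "True";
                response.reason = "Record deleted successfully";
                break;
            case DEACTIVATE_SUCCESS:
                response.status = "Success";
                response.statusCode = "200";
                response.result = "True";
                response.reason = "Record deactivated successfully";
                break;
            case DELETE_FAIL:
            case DEACTIVATE_FAIL:
                response.status = "Fail";
                response.statusCode = "200";
                response.result = "False";
                break;
            case ERROR:
                response.status = "Fail";
                response.statusCode = "500";
                response.result = "False";
                break;
        }
        if (reason != null) {
            response.reason = reason; // an explicit reason overrides the default
        }
        return response;
    }
}
```

This keeps all response-building logic in one place, so adding a new outcome means adding one enum constant and one case.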
I've built a cache that returns a value as a list when you pass in the parameters. If the value is not in the cache, it goes to the database, retrieves it, and puts it in the cache for future reference:
private ProfileDAO profileDAO;
private String[] temp;
private LoadingCache<String, List<Profile>> loadingCache = CacheBuilder.newBuilder()
.refreshAfterWrite(5, TimeUnit.MINUTES)
.expireAfterWrite(5, TimeUnit.MINUTES)
.build(
new CacheLoader<String, List<Profile>>() {
@Override
public List<Profile> load(String key) throws Exception {
logger.info("Running method to retrieve from database");
temp = key.split("\\|");
String id = temp[0];
String name = temp[1];
List<Profile> profiles = profileDAO.getProfileByFields(id, name);
if (profiles.isEmpty()) {
List<Profile> nullValue = new ArrayList<Profile>();
logger.info("Unable to find a value.");
return nullValue;
}
logger.info("Found a value");
return profiles;
}
}
);
public List<Profile> getProfileByFields(String id, String name) throws Exception {
String key = id.toLowerCase() + "|" + name.toLowerCase();
return loadingCache.get(key);
}
This seems to work fine, but it does not account for null values. If I look up an entry that does not exist, I get an exception:
com.google.common.cache.CacheLoader$InvalidCacheLoadException: CacheLoader returned null for key A01|Peter
I'd like to simply return an empty List<Profile> if there is no match in the database, but my if statement has failed. Is there any way around this error for this particular use case?
Though this feels a bit hacky, I think it's a more complete solution (Suresh's answer only really applies to collections).
Define a singleton object that will represent null, and insert that value into the cache instead of null (converting to null at retrieval time):
class MyDAO
{
static final Object NULL = new Object();
LoadingCache<String,Object> cache = CacheBuilder.newBuilder()
.build( new CacheLoader<String,Object>()
{
public Object load( String key )
{
Object value = database.get( key );
if( value == null )
return NULL;
return value;
}
});
Object get( String key )
{
Object value = cache.getUnchecked( key ); // get(key) would require handling ExecutionException
if( value == NULL ) // use '==' to compare object references
return null;
return value;
}
}
I believe this approach is preferable, in terms of efficiency, to any involving the use of exceptions.
Using Optional<Object> as the cache value type is the easiest and cleanest way to do it.
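In other words, have the loader return Optional.empty() instead of null. A minimal stdlib sketch of the pattern, using a plain ConcurrentHashMap in place of the Guava cache (class and names are illustrative):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class OptionalCacheSketch {

    private final Map<String, Optional<String>> cache = new ConcurrentHashMap<>();
    private final Function<String, String> database; // may return null on a miss

    public OptionalCacheSketch(Function<String, String> database) {
        this.database = database;
    }

    // The loader never yields null: a database miss is cached as Optional.empty(),
    // so repeated lookups of a missing key do not hit the database again.
    public Optional<String> get(String key) {
        return cache.computeIfAbsent(key, k -> Optional.ofNullable(database.apply(k)));
    }
}
```

With Guava, the equivalent is declaring a LoadingCache<String, Optional<Profile>> and returning Optional.ofNullable(...) from load(), which avoids the InvalidCacheLoadException entirely.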
Change your code to first check whether profiles is null or empty (using profiles == null ...):
private ProfileDAO profileDAO;
private String[] temp;
private LoadingCache<String, List<Profile>> loadingCache = CacheBuilder.newBuilder()
.refreshAfterWrite(5, TimeUnit.MINUTES)
.expireAfterWrite(5, TimeUnit.MINUTES)
.build(
new CacheLoader<String, List<Profile>>() {
@Override
public List<Profile> load(String key) throws Exception {
logger.info("Running method to retrieve from database");
temp = key.split("\\|");
String id = temp[0];
String name = temp[1];
List<Profile> profiles = profileDAO.getProfileByFields(id, name);
if (profiles == null || profiles.isEmpty()) {
List<Profile> nullValue = new ArrayList<Profile>();
logger.info("Unable to find a value.");
return nullValue;
}
logger.info("Found a value");
return profiles;
}
}
);
public List<Profile> getProfileByFields(String id, String name) throws Exception {
String key = id.toLowerCase() + "|" + name.toLowerCase();
return loadingCache.get(key);
}
Please check whether this code works for your null values.
I'm working on a cache implementation with RxJava 2. What I need is: when the network request fails, my repository should emit the stale data and then the error message. While I'm able to emit the cached item with .onErrorReturnItem(cachedItem), the error gets lost. I'm also able to concat the cached data with the network request, but it is a bit cumbersome:
public Observable<Dashboard> getDashboard(String phoneNum, boolean getNewData) {
if (getNewData) invalidateDashboardCache();//just set dashboardCacheValid = false
Observable<Dashboard> observableToCache = Observable.fromCallable(
() -> {
Dashboard cached = mCache.getDashboard(phoneNum);
if (cached != null) {
if (!cached.cacheValid()) {
dashboardCacheValid = false;
}
return cached;
}
dashboardCacheValid = false;
return Dashboard.EMPTY;
})
.concatMap(cachedDashboard -> Observable.concat(Observable.just(cachedDashboard),
Observable.fromCallable(() -> !dashboardCacheValid)
.filter(Boolean::booleanValue)
.flatMap(cacheNotValid -> mNetworkHelper.getDashboardRaw(phoneNum))
.doOnNext(dashboard -> {
mCache.putDashboard(phoneNum, dashboard);
dashboardCacheValid = true;
})));
return cacheObservable(CACHE_PREFIX_GET_DASHBOARD + phoneNum, observableToCache); //this is for multiple calls
}
Is there a way to modify .onErrorReturnItem(cachedDashboard) to achieve something like this?
Thanks to @akarnokd I was able to solve it properly and with much cleaner code:
public Observable<Dashboard> getDashboardNew(String phoneNum, boolean getNewData) {
Dashboard fromCache = mCache.getDashboard(phoneNum, getNewData);
dashboardCacheValid = fromCache.cacheValid();
if (getNewData) invalidateDashboardCache();
if (dashboardCacheValid) {
return Observable.just(fromCache);
} else {
final Dashboard cached = fromCache;
Observable<Dashboard> observableToCache = mNetworkHelper.getDashboardRaw(phoneNum)
.doOnNext(dashboard -> mCache.putDashboard(phoneNum, dashboard))
.onErrorResumeNext(throwable -> {
return Observable.concat(Observable.just(cached), Observable.error(throwable));
});
return cacheObservable(CACHE_PREFIX_GET_DASHBOARD + phoneNum, observableToCache);
}
}
I'm in the process of reviving my OptaPlanner code from about 6 months ago, and while trying to figure out why some of my hard constraints are being broken, I found that a filter I wrote to screen out illegal moves is never invoked. I put breakpoints on all of the methods of the move factory, the move's methods, and the filter, and none are being called. I'm fairly sure that wasn't the case before I updated to the latest version, but I might be wrong.
Update: the factory is used when I run OptaPlanner in my test case but not in production, so I guess this is not down to my configuration but rather to the scenario; however, I don't know what may affect whether it is used or not.
my solver config:
<?xml version="1.0" encoding="UTF-8"?>
<solver>
<environmentMode>FULL_ASSERT</environmentMode>
<!-- Domain model configuration -->
<solutionClass>com.rdthree.plenty.services.activities.planner.ActivitySolution</solutionClass>
<entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskPlannerDto</entityClass>
<entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskResourceAllocationPlannerDto</entityClass>
<!-- Score configuration -->
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
<scoreDrl>com/rdthree/plenty/services/activities/planner/activity-scoring.drl</scoreDrl>
<initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
</scoreDirectorFactory>
<!-- Optimization algorithms configuration -->
<termination>
<terminationCompositionStyle>OR</terminationCompositionStyle>
<bestScoreLimit>0hard/0soft</bestScoreLimit>
<secondsSpentLimit>60</secondsSpentLimit>
</termination>
<constructionHeuristic>
<queuedEntityPlacer>
<entitySelector id="resourceAllocationSelector">
<entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskResourceAllocationPlannerDto</entityClass>
<cacheType>PHASE</cacheType>
<selectionOrder>SORTED</selectionOrder>
<sorterManner>DECREASING_DIFFICULTY_IF_AVAILABLE</sorterManner>
</entitySelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="resourceAllocationSelector" />
<valueSelector>
<variableName>resource</variableName>
<cacheType>PHASE</cacheType>
</valueSelector>
</changeMoveSelector>
</queuedEntityPlacer>
</constructionHeuristic>
<constructionHeuristic>
<queuedEntityPlacer>
<entitySelector id="taskSelector">
<entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskPlannerDto</entityClass>
<cacheType>PHASE</cacheType>
<selectionOrder>SORTED</selectionOrder>
<sorterManner>DECREASING_DIFFICULTY_IF_AVAILABLE</sorterManner>
</entitySelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="taskSelector" />
<filterClass>com.rdthree.plenty.services.activities.planner.filters.TaskLengthChnageFilter</filterClass>
<valueSelector>
<variableName>interval</variableName>
<cacheType>PHASE</cacheType>
</valueSelector>
</changeMoveSelector>
</queuedEntityPlacer>
</constructionHeuristic>
<localSearch>
<unionMoveSelector>
<moveListFactory>
<moveListFactoryClass>com.rdthree.plenty.services.activities.planner.MoveResourceAllocationMoveFactory</moveListFactoryClass>
</moveListFactory>
<changeMoveSelector>
<fixedProbabilityWeight>1.0</fixedProbabilityWeight>
<filterClass>com.rdthree.plenty.services.activities.planner.filters.TaskLengthChnageFilter</filterClass>
<entitySelector id="taskMoveSelector">
<entityClass>com.rdthree.plenty.services.activities.helpers.dtos.TaskPlannerDto</entityClass>
</entitySelector>
<valueSelector>
<variableName>interval</variableName>
</valueSelector>
</changeMoveSelector>
</unionMoveSelector>
<acceptor>
<valueTabuSize>7</valueTabuSize>
</acceptor>
<forager>
<acceptedCountLimit>2000</acceptedCountLimit>
</forager>
</localSearch>
</solver>
my custom move factory:
public class MoveResourceAllocationMoveFactory implements MoveListFactory<ActivitySolution> {
@Override
public List<? extends Move> createMoveList(ActivitySolution solution) {
List<Move> moveList = new ArrayList<Move>();
for (TaskResourceAllocationPlannerDto allocation : solution.getResourceAllocations()) {
for (TaskResourcePlannerDto resource : solution.getResources()) {
moveList.add(new MoveResourceAllocations(allocation, resource));
}
}
return moveList;
}
}
my custom move:
public class MoveResourceAllocations extends AbstractMove {
private TaskResourceAllocationPlannerDto allocation;
private TaskResourcePlannerDto newResource;
@Getter
@Setter
boolean doMove;
public MoveResourceAllocations(TaskResourceAllocationPlannerDto allocation, TaskResourcePlannerDto newResource) {
super();
this.allocation = allocation;
this.newResource = newResource;
}
@Override
public boolean isMoveDoable(ScoreDirector scoreDirector) {
if (allocation.getResource().equals(newResource)) {
return false;
}
return new ResourceTypeMismatchFilter().acceptCustomMove(scoreDirector, this);
}
@Override
public Move createUndoMove(ScoreDirector scoreDirector) {
return new MoveResourceAllocations(allocation, allocation.getResource());
}
@Override
public void doMoveOnGenuineVariables(ScoreDirector scoreDirector) {
scoreDirector.beforeVariableChanged(allocation, "resource");
updateOnHandAmounts(scoreDirector);
allocation.setResource(newResource);
scoreDirector.afterVariableChanged(allocation, "resource");
}
private void updateOnHandAmounts(ScoreDirector scoreDirector) {
ActivitySolution solution = (ActivitySolution) scoreDirector.getWorkingSolution();
List<OnHandForProduct> onHandForProducts = solution.getOnHandForProducts();
List<ProductInventoryTransactionPlannerDto> transactions = solution.getTransactions();
boolean transactionFoundForTask = false;
if ((newResource.getClass().getSimpleName().contains(Product.class.getSimpleName()))
&& allocation.getResourceClass().equals(Product.class)) {
// find the transaction caused by the task and product in question and replace the product in the
// transaction with the newly assigned product and revert this for an undo move
for (ProductInventoryTransactionPlannerDto transaction : transactions) {
if (transaction.getCauseId().equals(allocation.getTaskId())
&& transaction.getProductId().equals(allocation.getResource().getId())
&& transaction.getTransactionTypeName().equals(InventoryTransactionType.SUBTRACT)) {
transaction.setProductId(newResource.getId());
transactionFoundForTask = true;
break;
}
}
if (!transactionFoundForTask) {
throw new EmptyResultDataAccessException(
"Internal scheduler fail: no product transaction found for the product-requiring task with id: "
+ allocation.getTaskId() + " for product : " + allocation.getResource(), 1);
}
TaskPlannerDto thisTask = null;
for (TaskPlannerDto task : solution.getTasks()) {
if (task.getId().equals(allocation.getTaskId())) {
thisTask = task;
}
}
Long oldProductId = allocation.getResource().getId();
Long newProductId = newResource.getId();
for (OnHandForProduct onHandForProduct : onHandForProducts) {
if (onHandForProduct.getProductId().equals(oldProductId)
&& onHandForProduct.getDate().isAfter(
thisTask.getInterval().getStart().withTimeAtStartOfDay()
.plusDays(0/* - GeneralPrefs.PRODUCT_PRESENCE_SAFETY_BUFFER*/))) {
onHandForProduct.setAmount(onHandForProduct.getAmount() + allocation.getAmount());
}
if (onHandForProduct.getProductId().equals(newProductId)
&& onHandForProduct.getDate().isAfter(
thisTask.getInterval().getStart().withTimeAtStartOfDay()
.plusDays(0/* - GeneralPrefs.PRODUCT_PRESENCE_SAFETY_BUFFER*/))) {
onHandForProduct.setAmount(onHandForProduct.getAmount() - allocation.getAmount());
}
}
}
}
@Override
public Collection<? extends Object> getPlanningEntities() {
return Collections.singletonList(allocation);
}
@Override
public Collection<? extends Object> getPlanningValues() {
return Collections.singletonList(newResource);
}
@Override
public String toString() {
return "replacing resource " + allocation.getResource() + " for task with id " + allocation.getId() + " with "
+ newResource;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((allocation == null) ? 0 : allocation.hashCode());
result = prime * result + ((newResource == null) ? 0 : newResource.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
MoveResourceAllocations other = (MoveResourceAllocations) obj;
if (allocation == null) {
if (other.allocation != null)
return false;
} else if (!allocation.equals(other.allocation))
return false;
if (newResource == null) {
if (other.newResource != null)
return false;
} else if (!newResource.equals(other.newResource))
return false;
return true;
}
}
The config looks good.
1) Maybe the 2 Construction Heuristic phases never completely finish.
Turn on INFO logging (or better yet, DEBUG). It will log when each of the 2 Construction Heuristic phases ends.
2) Maybe Local Search starts with a ChangeMoveSelector (it's a union, so either of the 2 selectors can go first) and it hangs somehow in the filter. Turn on TRACE logging to see the selected moves.
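For reference, assuming Logback is your logging backend, raising the OptaPlanner log level is a small fragment in logback.xml:

```xml
<configuration>
  <!-- debug logs when each phase (CH, LS) starts/ends; trace also logs every selected move -->
  <logger name="org.optaplanner" level="trace"/>
</configuration>
```

Start with debug to confirm both Construction Heuristic phases end, then switch to trace only while investigating the filter, as trace output is very verbose.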