I am using a Spring application and my GigaSpaces client connects at startup. I am not getting any exception if GigaSpaces is down.
@Override
public void onContextRefreshed(ContextRefreshedEvent event) {
String gigaSpaceURL = null;
LOGGER.info("({}) initializing gigaspaces client", getName());
try {
initGSProxy();
Iterator<Map.Entry<ConfiguredSpace, Space>> entries = spaces.entrySet().iterator();
while (entries.hasNext()) {
Map.Entry<ConfiguredSpace, Space> entry = entries.next();
LOGGER.info("({}) initialing space- key=" +
entry.getKey() + ", value = " + entry.getValue(),
getName());
// TODO : Need to verify Boolean Value Input
gigaspace.createSpace(entry.getKey().name(),
entry.getValue().getURL(), false);
gigaSpaceURL = entry.getValue().getURL();
}
} catch (Exception e) {
// the exception is swallowed here, which is why nothing surfaces when GigaSpaces is down
LOGGER.error("({}) failed to initialize gigaspaces client", getName(), e);
return;
}
GenericUtil.updateLogLevel("INFO",
"com.renovite.ripps.ap.gs.Spaces");
LOGGER.info("\n************************************\nConnected with Gigaspace successfully:URL:" + gigaSpaceURL
+ "\n************************************\n");
GenericUtil.updateLogLevel("ERROR",
"com.renovite.ripps.ap.gs.Spaces");
}
Take a reference to the GigaSpace by using the getGigaSpace() method, which takes a space key as an argument. If it throws an exception at run time, it means the application is not able to connect to the specified GigaSpaces URL.
Or, more elegantly, in your GigaSpaces proxy class (which actually implements IGigaspace), override the getGigaSpace() method so that it returns null if a connection is not possible.
/** The spaces. */
private transient Map<String, Space> spaces = new HashMap<>();
@Override
public GigaSpace getGigaSpace(String spaceKey) {
Space space = spaces.get(spaceKey); // value type assumed to expose getGigaSpace()
if (space != null) {
return space.getGigaSpace();
}
return null;
}
spaces is a Map of all URLs that are registered with GigaSpaces. If none is registered, we return null in the above method.
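A caller can then treat a null return as "not connected" instead of relying on a runtime exception. A minimal usage sketch (the space key "mySpace" and the gigaspaceProxy variable are assumptions for illustration):
GigaSpace gigaSpace = gigaspaceProxy.getGigaSpace("mySpace"); // hypothetical space key
if (gigaSpace == null) {
LOGGER.error("Unable to connect to GigaSpaces for space key {}", "mySpace");
// fail fast, schedule a retry, or mark the component as degraded here
}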
I have written code which fetches S3 objects from AWS S3 using the S3 SDK and stores them in our DB. The only problem is that the task is repeated for three different services; the only thing that changes is the instance of the service class.
I have copy-pasted the code into each service layer, just changing the instance.
The task is repeated for the service classes VehicleImageService, MsilLayoutService and NonMsilLayoutService, and every layer has its own repository.
I am trying to identify a way to accomplish this by placing that snippet in one place and, at runtime, using the Reflection API to pass the correct instance and invoke the method. However, I want to achieve this using best industry practices and patterns, i.e. I want to refactor it into a generic method for the other services, so the instance can be passed at runtime (see the sketch after the code below).
So kindly assist me on the same.
public void persistImageDetails() {
log.info("MsilVehicleLayoutServiceImpl::persistImageDetails::START");
String bucketKey = null; //common param
String modelCode = null;//common param
List<S3Object> objList = new ArrayList<>(); //common param
String bucketName = s3BucketDetails.getBucketName();//common param
String bucketPath = s3BucketDetails.getBucketPrefix();//common param
try {
//the layoutRepository object can be MSILRepository,NonMSILRepository and VehilceImageRepository
List<ModelCode> modelCodes = layoutRepository.findDistinctAllBy(); // this line need to take care of
List<String> modelCodePresent = modelCodes.stream().map(ModelCode::getModelCode)
.collect(Collectors.toList());
List<CommonPrefix> allKeysInDesiredBucket = listAllKeysInsideBucket(bucketName, bucketPath);//common param
synchDB(modelCodePresent, allKeysInDesiredBucket);
if (null != allKeysInDesiredBucket && !allKeysInDesiredBucket.isEmpty()) {
for (CommonPrefix commonPrefix : allKeysInDesiredBucket) {
bucketKey = commonPrefix.prefix();
modelCode = new File(bucketKey).getName();
if (modelCodePresent.contains(modelCode)) {
log.info("skipping iteration for {} model code", modelCode);
continue;
}
objList = s3Service.getBucketObjects(bucketName, bucketKey);
if (null != objList && !objList.isEmpty()) {
for (S3Object object : AppUtil.skipFirst(objList)) {
saveLayout(bucketName, modelCode, object);
}
}
}
}
log.info("MSIL Vehicle Layout entries has been successfully saved");
} catch (Exception e) {
log.error("Error occured", e);
e.printStackTrace();
}
log.info("MsilVehicleLayoutServiceImpl::persistImageDetails::END");
}
private void saveLayout(String bucketName, String modelCode, S3Object object) {
log.info("Inside saveLayout::Start preparing entity to persist");
String resourceUri = null;
MsilVehicleLayout vehicleLayout = new MsilVehicleLayout();// this can be MsilVehicleLayout. NonMsilVehicleLayout, VehicleImage
vehicleLayout.setFileName(FilenameUtils.removeExtension(FilenameUtils.getName(object.key())));
vehicleLayout.setModelCode(modelCode);
vehicleLayout.setS3BucketKey(object.key());
resourceUri = getS3ObjectURI(bucketName, object.key());
vehicleLayout.setS3ObjectUri(resourceUri);
vehicleLayout.setS3PresignedUri(null);
vehicleLayout.setS3PresignedExpDate(null);
layoutRepository.save(vehicleLayout); //the layoutRepository object can be MSILRepository,NonMSILRepository and VehilceImageRepository
log.info("Exiting saveLayout::End entity saved");
}
TL;DR: When running tests with different @ResourceArgs, the configurations of the different tests get thrown around and override each other, breaking tests meant to run with specific configurations.
So, I have a service that has tests that run in different configuration setups. The main difference at the moment is the service can either manage its own authentication or get it from an external source (Keycloak).
I first control this using test profiles, which seem to work fine. Unfortunately, in order to support both cases, the ResourceLifecycleManager I have set up supports setting up a Keycloak instance and returns config values that break the config for self authentication. (This is due primarily to the fact that I have not found out how to get the lifecycle manager to determine on its own what profile or config is currently running. If I could do this, I think I would be much better off than using @ResourceArg, so I would love to know if I missed something here.)
To remedy this shortcoming, I have attempted to use @ResourceArgs to convey to the lifecycle manager when to set up for external auth. However, I have noticed some really odd execution timings, and the config that ends up at my test/service isn't what I intend based on the test class's annotations; it is obvious the lifecycle manager has set up for external auth.
Additionally, it should be noted that I have my tests ordered such that the profiles and configs shouldn't run out of order: all the tests that don't care run first, then the 'normal' tests with self auth, then the tests with the external auth profile. I can see this working appropriately when I run in IntelliJ, and I can tell from the time taken that a new service instance is started up between the test profiles.
Looking at the logs when I throw a breakpoint in places, some odd things are obvious:
When breakpointing on an erring test (before the external-configured tests run):
- The start() method of my TestResourceLifecycleManager has been called twice.
- The first run started Keycloak, which would override/break the config (though the time I would expect Keycloak to need to start up did not elapse; a little confused here).
- The second run is correct, not starting Keycloak.
- The profile config is what is expected, except for what the Keycloak setup would override.
When breakpointing on an external-configured test (after all self-configured tests run):
- The start() method has now been called 4 times; it appears that things were started in the same order as before again for the new run of the app.
There could be some weirdness in how IntelliJ/Gradle shows logs, but I am interpreting this as:
- Quarkus initializing the two instances of the LifecycleManager when starting the app for some reason, with one's config overriding the other's, causing my woes.
- The lifecycle manager working as expected; it appropriately starts / doesn't start Keycloak when configured either way.
At this point I can't tell if I'm doing something wrong, or if there's a bug.
Test class example for self-auth test (same annotations for all tests in this (test) profile):
@Slf4j
@QuarkusTest
@QuarkusTestResource(TestResourceLifecycleManager.class)
@TestHTTPEndpoint(Auth.class)
class AuthTest extends RunningServerTest {
Test class example for external auth test (same annotations for all tests in this (externalAuth) profile):
@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(value = TestResourceLifecycleManager.class, initArgs = @ResourceArg(name=TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value="true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
ExternalAuthTestProfile extends this, providing the appropriate profile name:
public class NonDefaultTestProfile implements QuarkusTestProfile {
private final String testProfile;
private final Map<String, String> overrides = new HashMap<>();
protected NonDefaultTestProfile(String testProfile) {
this.testProfile = testProfile;
}
protected NonDefaultTestProfile(String testProfile, Map<String, String> configOverrides) {
this(testProfile);
this.overrides.putAll(configOverrides);
}
@Override
public Map<String, String> getConfigOverrides() {
return new HashMap<>(this.overrides);
}
@Override
public String getConfigProfile() {
return testProfile;
}
@Override
public List<TestResourceEntry> testResources() {
return QuarkusTestProfile.super.testResources();
}
}
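For completeness, the concrete profile referenced above then just passes its name up. A sketch, assuming the profile name "externalAuth" used by the test annotations:
public class ExternalAuthTestProfile extends NonDefaultTestProfile {
public ExternalAuthTestProfile() {
super("externalAuth"); // profile name assumed from the test class annotations above
}
}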
Lifecycle manager:
@Slf4j
public class TestResourceLifecycleManager implements QuarkusTestResourceLifecycleManager {
public static final String EXTERNAL_AUTH_ARG = "externalAuth";
private static volatile MongodExecutable MONGO_EXE = null;
private static volatile KeycloakContainer KEYCLOAK_CONTAINER = null;
private boolean externalAuth = false;
public synchronized Map<String, String> startKeycloakTestServer() {
if(!this.externalAuth){
log.info("No need for keycloak.");
return Map.of();
}
if (KEYCLOAK_CONTAINER != null) {
log.info("Keycloak already started.");
} else {
KEYCLOAK_CONTAINER = new KeycloakContainer()
// .withEnv("hello","world")
.withRealmImportFile("keycloak-realm.json");
KEYCLOAK_CONTAINER.start();
log.info(
"Test keycloak started at endpoint: {}\tAdmin creds: {}:{}",
KEYCLOAK_CONTAINER.getAuthServerUrl(),
KEYCLOAK_CONTAINER.getAdminUsername(),
KEYCLOAK_CONTAINER.getAdminPassword()
);
}
String clientId;
String clientSecret;
String publicKey = "";
try (
Keycloak keycloak = KeycloakBuilder.builder()
.serverUrl(KEYCLOAK_CONTAINER.getAuthServerUrl())
.realm("master")
.grantType(OAuth2Constants.PASSWORD)
.clientId("admin-cli")
.username(KEYCLOAK_CONTAINER.getAdminUsername())
.password(KEYCLOAK_CONTAINER.getAdminPassword())
.build();
) {
RealmResource appsRealmResource = keycloak.realms().realm("apps");
ClientRepresentation qmClientResource = appsRealmResource.clients().findByClientId("quartermaster").get(0);
clientSecret = qmClientResource.getSecret();
log.info("Got client id \"{}\" with secret: {}", "quartermaster", clientSecret);
//get private key
for (KeysMetadataRepresentation.KeyMetadataRepresentation curKey : appsRealmResource.keys().getKeyMetadata().getKeys()) {
if (!SIG.equals(curKey.getUse())) {
continue;
}
if (!"RSA".equals(curKey.getType())) {
continue;
}
String publicKeyTemp = curKey.getPublicKey();
if (publicKeyTemp == null || publicKeyTemp.isBlank()) {
continue;
}
publicKey = publicKeyTemp;
log.info("Found a relevant key for public key use: {} / {}", curKey.getKid(), publicKey);
}
}
// write public key
// = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString() + "/security/testKeycloakPublicKey.pem");
File publicKeyFile;
try {
publicKeyFile = File.createTempFile("oqmTestKeycloakPublicKey",".pem");
// publicKeyFile = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString().replace("/classes/java/", "/resources/") + "/security/testKeycloakPublicKey.pem");
log.info("path of public key: {}", publicKeyFile);
// if(publicKeyFile.createNewFile()){
// log.info("created new public key file");
//
// } else {
// log.info("Public file already exists");
// }
try (
FileOutputStream os = new FileOutputStream(
publicKeyFile
);
) {
IOUtils.write(publicKey, os, UTF_8);
} catch (IOException e) {
log.error("Failed to write out public key of keycloak: ", e);
throw new IllegalStateException("Failed to write out public key of keycloak.", e);
}
} catch (IOException e) {
log.error("Failed to create public key file: ", e);
throw new IllegalStateException("Failed to create public key file", e);
}
String keycloakUrl = KEYCLOAK_CONTAINER.getAuthServerUrl().replace("/auth", "");
return Map.of(
"test.keycloak.url", keycloakUrl,
"test.keycloak.authUrl", KEYCLOAK_CONTAINER.getAuthServerUrl(),
"test.keycloak.adminName", KEYCLOAK_CONTAINER.getAdminUsername(),
"test.keycloak.adminPass", KEYCLOAK_CONTAINER.getAdminPassword(),
//TODO:: add config for server to talk to
"service.externalAuth.url", keycloakUrl,
"mp.jwt.verify.publickey.location", publicKeyFile.getAbsolutePath()
);
}
public static synchronized void startMongoTestServer() throws IOException {
if (MONGO_EXE != null) {
log.info("Flapdoodle Mongo already started.");
return;
}
Version.Main version = Version.Main.V4_0;
int port = 27018;
log.info("Starting Flapdoodle Test Mongo {} on port {}", version, port);
IMongodConfig config = new MongodConfigBuilder()
.version(version)
.net(new Net(port, Network.localhostIsIPv6()))
.build();
try {
MONGO_EXE = MongodStarter.getDefaultInstance().prepare(config);
MongodProcess process = MONGO_EXE.start();
if (!process.isProcessRunning()) {
throw new IOException();
}
} catch (Throwable e) {
log.error("FAILED to start test mongo server: ", e);
MONGO_EXE = null;
throw e;
}
}
public static synchronized void stopMongoTestServer() {
if (MONGO_EXE == null) {
log.warn("Mongo was not started.");
return;
}
MONGO_EXE.stop();
MONGO_EXE = null;
}
public synchronized static void cleanMongo() throws IOException {
if (MONGO_EXE == null) {
log.warn("Mongo was not started.");
return;
}
log.info("Cleaning Mongo of all entries.");
}
@Override
public void init(Map<String, String> initArgs) {
this.externalAuth = Boolean.parseBoolean(initArgs.getOrDefault(EXTERNAL_AUTH_ARG, Boolean.toString(this.externalAuth)));
}
@Override
public Map<String, String> start() {
log.info("STARTING test lifecycle resources.");
Map<String, String> configOverride = new HashMap<>();
try {
startMongoTestServer();
} catch (IOException e) {
log.error("Unable to start Flapdoodle Mongo server", e);
}
configOverride.putAll(startKeycloakTestServer());
return configOverride;
}
@Override
public void stop() {
log.info("STOPPING test lifecycle resources.");
stopMongoTestServer();
}
}
The app can be found here: https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/open-qm-base-station
The tests are currently failing in the ways I am describing, so feel free to look around.
Note that to run this, you will need to run ./gradlew build publishToMavenLocal in https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/libs/open-qm-core to install a dependency locally.
Github issue also tracking this: https://github.com/quarkusio/quarkus/issues/22025
Any use of @QuarkusTestResource() without restrictToAnnotatedClass set to true means that the QuarkusTestResourceLifecycleManager will be applied to all tests, no matter where the annotation is placed.
Hopefully restrictToAnnotatedClass will solve the problem.
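For example, the external-auth test class above could scope its lifecycle manager (and its initArgs) to itself only. A sketch based on the annotations already shown:
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(
value = TestResourceLifecycleManager.class,
initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"),
restrictToAnnotatedClass = true // apply this resource only to this annotated test class
)
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
// test methods unchanged
}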
First, I want to say thanks to everyone who took the time to help me figure this out, because I have been searching for more than a week for a solution to my problem. Here it is:
My goal is to start a custom workflow in Alfresco Community 5.2 and to set some custom properties in the first task through a web script, using only the public Java API. My class extends AbstractWebScript. Currently I have success with starting the workflow and setting properties like bpm:workflowDescription, but I'm not able to set my custom properties in the tasks.
Here is the code:
public class StartWorkflow extends AbstractWebScript {
/**
* The Alfresco Service Registry that gives access to all public content services in Alfresco.
*/
private ServiceRegistry serviceRegistry;
public void setServiceRegistry(ServiceRegistry serviceRegistry) {
this.serviceRegistry = serviceRegistry;
}
@Override
public void execute(WebScriptRequest req, WebScriptResponse res) throws IOException {
// Create JSON object for the response
JSONObject obj = new JSONObject();
try {
// Check if parameter defName is present in the request
String wfDefFromReq = req.getParameter("defName");
if (wfDefFromReq == null) {
obj.put("resultCode", "1 (Error)");
obj.put("errorMessage", "Parameter defName not found.");
return;
}
// Get the WFL Service
WorkflowService workflowService = serviceRegistry.getWorkflowService();
// Build WFL Definition name
String wfDefName = "activiti$" + wfDefFromReq;
// Get WorkflowDefinition object
WorkflowDefinition wfDef = workflowService.getDefinitionByName(wfDefName);
// Check if such WorkflowDefinition exists
if (wfDef == null) {
obj.put("resultCode", "1 (Error)");
obj.put("errorMessage", "No workflow definition found for defName = " + wfDefName);
return;
}
// Get parameters from the request
Content reqContent = req.getContent();
if (reqContent == null) {
throw new WebScriptException(Status.STATUS_BAD_REQUEST, "Missing request body.");
}
String content;
content = reqContent.getContent();
if (content.isEmpty()) {
throw new WebScriptException(Status.STATUS_BAD_REQUEST, "Content is empty");
}
JSONTokener jsonTokener = new JSONTokener(content);
JSONObject json = new JSONObject(jsonTokener);
// Set the workflow description
Map<QName, Serializable> params = new HashMap<>();
params.put(WorkflowModel.PROP_WORKFLOW_DESCRIPTION, "Workflow started from JAVA API");
// Start the workflow
WorkflowPath wfPath = workflowService.startWorkflow(wfDef.getId(), params);
// Get params from the POST request
Map<QName, Serializable> reqParams = new HashMap<>();
Iterator<String> i = json.keys();
while (i.hasNext()) {
String paramName = i.next();
QName qName = QName.createQName(paramName);
String value = json.getString(qName.getLocalName());
reqParams.put(qName, value);
}
// Try to update the task properties
// Get the next active task which contains the properties to update
WorkflowTask wfTask = workflowService.getTasksForWorkflowPath(wfPath.getId()).get(0);
// Update properties
WorkflowTask updatedTask = workflowService.updateTask(wfTask.getId(), reqParams, null, null);
obj.put("resultCode", "0 (Success)");
obj.put("workflowId", wfPath.getId());
} catch (JSONException e) {
throw new WebScriptException(Status.STATUS_BAD_REQUEST,
e.getLocalizedMessage());
} catch (IOException ioe) {
throw new WebScriptException(Status.STATUS_BAD_REQUEST,
"Error when parsing the request.",
ioe);
} finally {
// build a JSON string and send it back
String jsonString = obj.toString();
res.getWriter().write(jsonString);
}
}
}
Here is how I call the webscript:
curl -v -uadmin:admin -X POST -d @postParams.json localhost:8080/alfresco/s/workflow/startJava?defName=nameOfTheWFLDefinition -H "Content-Type:application/json"
In postParams.json file I have the required pairs for property/value which I need to update:
{
"cmprop:propOne" : "Value 1",
"cmprop:propTwo" : "Value 2",
"cmprop:propThree" : "Value 3"
}
The workflow is started and bpm:workflowDescription is set correctly, but the custom properties are not visible on the task.
I made a JS script which I call when the workflow is started:
execution.setVariable('bpm_workflowDescription', 'Some String ' + execution.getVariable('cmprop:propOne'));
And the value for cmprop:propOne is actually used and the description is properly updated, which means that those properties are stored somewhere (at the execution level maybe?), but I cannot figure out why they are not visible when I open the task.
I had success with starting the workflow and updating the properties using the JavaScript API with:
if (wfdef) {
// Get the params
wfparams = {};
if (jsonRequest) {
for ( var prop in jsonRequest) {
wfparams[prop] = jsonRequest[prop];
}
}
wfpackage = workflow.createPackage();
wfpath = wfdef.startWorkflow(wfpackage, wfparams);
The problem is that I want to use only the public Java API. Please help.
Thanks!
Do you set your variables locally in your tasks? From what I see, it seems that you define your variables at the execution level, but not at the task level. If you take a look at the OOTB adhoc.bpmn20.xml file (https://github.com/Activiti/Activiti-Designer/blob/master/org.activiti.designer.eclipse/src/main/resources/templates/adhoc.bpmn20.xml), you can notice an event listener that sets the variable locally:
<extensionElements>
<activiti:taskListener event="create" class="org.alfresco.repo.workflow.activiti.tasklistener.ScriptTaskListener">
<activiti:field name="script">
<activiti:string>
if (typeof bpm_workflowDueDate != 'undefined') task.setVariableLocal('bpm_dueDate', bpm_workflowDueDate);
if (typeof bpm_workflowPriority != 'undefined') task.priority = bpm_workflowPriority;
</activiti:string>
</activiti:field>
</activiti:taskListener>
</extensionElements>
Usually, I just import all the variables with my custom model prefix into the task. So for you, it could look like this:
import java.util.Set;
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.DelegateTask;
import org.apache.log4j.Logger;
public class ImportVariables extends AbstractTaskListener {
private Logger logger = Logger.getLogger(ImportVariables.class);
@Override
public void notify(DelegateTask task) {
logger.debug("Inside ImportVariables.notify()");
logger.debug("Task ID:" + task.getId());
logger.debug("Task name:" + task.getName());
logger.debug("Task proc ID:" + task.getProcessInstanceId());
logger.debug("Task def key:" + task.getTaskDefinitionKey());
DelegateExecution execution = task.getExecution();
Set<String> executionVariables = execution.getVariableNamesLocal();
for (String variableName : executionVariables) {
// If the variable starts by "cmprop_"
if (variableName.startsWith("cmprop_")) {
// Publish it at the task level
task.setVariableLocal(variableName, execution.getVariableLocal(variableName));
}
}
}
}
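The listener then has to be attached to the user task in the process definition, just like the OOTB snippet above does with the ScriptTaskListener (the class name com.example.ImportVariables is a placeholder for wherever your listener lives):
<userTask id="someTask" name="Some Task">
<extensionElements>
<activiti:taskListener event="create" class="com.example.ImportVariables" />
</extensionElements>
</userTask>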
I'm looking for a way to share a non-parcelable item between my application and my current Service. This is the situation:
I have a Service to store all the media data from a camera application: photos, videos, etc. The mission of this service is to continue saving the media when the user goes to the background. When I first did this, I had a lot of SIGSEGV errors:
08-22 10:15:49.377 15784-15818/com.bq.camerabq A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x8c3f4000 in tid 15818 (CameraModuleBac)
This was because the Image items that I recover from my ImageReaders are not parcelable; I fixed this by saving the ByteBuffers from the image instead of the whole Image item.
But now I'm getting the same problem with DNG captures, because from my ImageReader I get a CaptureResult item that I need in order to create a DngCreator item to write the DNG image.
CaptureResult and DngCreator are not Parcelable or Serializable, so I can't find a way to save my data from the application and recover it in the service when I'm in the background.
I have tried to copy the reference when calling the Service and it didn't work. I also saw in other posts, such as Object Sharing Between Activities in Android, that I can save the item in a static reference in my application context to be able to recover it in different activities. So finally I tried this:
public class DngManager extends Application {
public static DngManager sDngManagerInstance;
protected Hashtable<String, CaptureResult> dngCaptureResults;
private static String DNG_KEY_PREFIX = "dngResult_";
public DngManager(){
super();
createDNGCaptureResults();
}
public void createDNGCaptureResults() {
dngCaptureResults = new Hashtable<String, CaptureResult>();
}
public boolean addDNGCaptureResultToSharedMem(long dateKey, CaptureResult value) {
dngCaptureResults.put(DNG_KEY_PREFIX + dateKey, value);
return true;
}
public CaptureResult getFromDNGCaptureResults(long dateKey) {
return dngCaptureResults.get(DNG_KEY_PREFIX + dateKey);
}
private boolean containsDNGCaptureResults(long dateKey) {
return dngCaptureResults.containsKey(DNG_KEY_PREFIX + dateKey);
}
public void clearDNGCaptureResults(long dateKey) {
String partKey = String.valueOf(dateKey);
Enumeration<String> e2 = dngCaptureResults.keys();
while (e2.hasMoreElements()) {
String i = (String) e2.nextElement();
if (i.contains(partKey))
dngCaptureResults.remove(i);
}
}
public static DngManager getInstance(){
if (sDngManagerInstance == null){
sDngManagerInstance = new DngManager();
}
return sDngManagerInstance;
}
}
And later I recover it in my service:
CaptureResult dngResult = ((DngManager)getApplication()).getFromDNGCaptureResults(mDngPicture.getDateTaken());
if (dngResult == null) {
return;
}
DngCreator dngCreator = new DngCreator(mCameraCharacteristics, dngResult);
path = Storage.generateFilepath(title, "dng");
file = new File(Uri.decode(path));
try {
Log.e(TAG, "[DngSaveTask|run] WriteByteBuffer: Height " + mDngPicture.getSize().getHeight() + " Width " + mDngPicture.getSize().getWidth());
OutputStream os = new FileOutputStream(file);
dngCreator.writeByteBuffer(os, mDngPicture.getSize(), mDngPicture.getDngByteBuffer(), 0);
} catch (IOException e) {
Log.d(TAG, "[DngSaveTask|run] " + e);
e.printStackTrace();
}
dngCreator.close();
Log.e(TAG, "[DngSaveTask|run] Cleaning Result from shared memory");
DngManager.getInstance().clearDNGCaptureResults(mDngPicture.getDateTaken());
MediaScannerConnection.scanFile(getApplicationContext(), new String[]{file.getAbsolutePath()}, null, null);
Anyway, it still gives me back a SIGSEGV error. What else can I try?
In the Javadoc of the Map interface's entrySet() method I found this statement, and I really do not understand it:
The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. If the map is modified while an iteration over the set is in progress, the results of the iteration are undefined.
What is meant by undefined here?
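The first sentence of that quote is easy to demonstrate: the entry set is a live view of the map, so removing an entry through the set also removes it from the map. A minimal sketch:
import java.util.HashMap;
import java.util.Map;
public class EntrySetViewDemo {
public static void main(String[] args) {
Map<String, Integer> map = new HashMap<>();
map.put("a", 1);
map.put("b", 2);
// The set is a view backed by the map: removing from it removes from the map.
map.entrySet().removeIf(e -> e.getValue() == 1);
System.out.println(map); // prints {b=2}
}
}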
For more clarification, this is my situation.
I have a web application based on Spring & Hibernate.
Our team implemented a custom caching class called CachedIntegrationClients.
We are using RabbitMQ as a messaging server.
Instead of fetching our clients each time we want to send a message to the server, we cache the clients using the previous caching class.
The problem is that messages are sent to the messaging server twice.
Viewing the logs, we found that the method that gets the cached clients returned a client twice, although this is (theoretically) impossible, as we store the clients in a map, and a map does not allow duplicate keys.
After skimming the code, I found that the method that iterates over the cached clients gets a set of the clients from the cached clients map.
So I suspected that while iterating over this set, another request is made by another client, and this client may not be cached yet, so it modifies the map.
Anyway, this is the CachedIntegrationClients class:
public class CachedIntegrationClientServiceImpl {
private IntegrationDao integrationDao;
private IntegrationService integrationService;
Map<String, IntegrationClient> cachedIntegrationClients = null;
@Override
public void setBaseDAO(BaseDao baseDao) {
super.setBaseDAO(integrationDao);
}
@Override
public void refreshCache() {
cachedIntegrationClients = null;
}
synchronized private void putOneIntegrationClientOnCache(IntegrationClient integrationClient){
fillCachedIntegrationClients(); // only fill cache if it is null; it will never refill the cache
if (! cachedIntegrationClients.containsValue(integrationClient)) {
cachedIntegrationClients.put(integrationClient.getClientSlug(),integrationClient);
}
}
/**
* only fill cache if it is null; it will never refill the cache
*/
private void fillCachedIntegrationClients() {
if (cachedIntegrationClients != null) {
return ;
}
log.debug("filling cache of cachedClients");
cachedIntegrationClients = new HashMap<String, IntegrationClient>(); // initialize cache Map
List<IntegrationClient> allCachedIntegrationClients= integrationDao.getAllIntegrationClients();
if (allCachedIntegrationClients != null) {
for (IntegrationClient integrationClient : allCachedIntegrationClients) {
integrationService
.injectCssFileForIntegrationClient(integrationClient);
fetchClientServiceRelations(integrationClient
.getIntegrationClientServiceList());
}
for (IntegrationClient integrationClient : allCachedIntegrationClients) {
putOneIntegrationClientOnCache(integrationClient);
}
}
}
/**
* fetch all client service
* @param integrationClientServiceList
*/
private void fetchClientServiceRelations(
List<IntegrationClientService> integrationClientServiceList) {
for (IntegrationClientService integrationClientService : integrationClientServiceList) {
fetchClientServiceRelations(integrationClientService);
}
}
private void fetchClientServiceRelations(IntegrationClientService clientService) {
for (Exchange exchange : clientService.getExchangeList()) {
exchange.getId();
}
for (Company company : clientService.getCompanyList()) {
company.getId();
}
}
/**
* Get a client given its slug.
*
* If the client was not found, an exception will be thrown.
*
* @throws ClientNotFoundIntegrationException
* @return IntegrationClient
*/
@Override
public IntegrationClient getIntegrationClient(String clientSlug) throws ClientNotFoundIntegrationException {
if (cachedIntegrationClients == null) {
fillCachedIntegrationClients();
}
if (!cachedIntegrationClients.containsKey(clientSlug)) {
IntegrationClient integrationClient = integrationDao.getIntegrationClient(clientSlug);
if (integrationClient != null) {
this.fetchClientServiceRelations(integrationClient.getIntegrationClientServiceList());
integrationService.injectCssFileForIntegrationClient(integrationClient);
cachedIntegrationClients.put(clientSlug, integrationClient);
}
}
IntegrationClient client = cachedIntegrationClients.get(clientSlug);
if (client == null) {
throw ClientNotFoundIntegrationException.forClientSlug(clientSlug);
}
return client;
}
public void setIntegrationDao(IntegrationDao integrationDao) {
this.integrationDao = integrationDao;
}
public IntegrationDao getIntegrationDao() {
return integrationDao;
}
public Map<String, IntegrationClient> getCachedIntegrationClients() {
if (cachedIntegrationClients == null) {
fillCachedIntegrationClients();
}
return cachedIntegrationClients;
}
public IntegrationService getIntegrationService() {
return integrationService;
}
public void setIntegrationService(IntegrationService integrationService) {
this.integrationService = integrationService;
}
}
And here is the method that iterates over the set:
public List<IntegrationClientService> getIntegrationClientServicesForService(IntegrationServiceModel service) {
List<IntegrationClientService> integrationClientServices = new ArrayList<IntegrationClientService>();
for (Entry<String, IntegrationClient> entry : cachedIntegrationClientService.getCachedIntegrationClients().entrySet()) {
IntegrationClientService integrationClientService = getIntegrationClientService(entry.getValue(), service);
if (integrationClientService != null) {
integrationClientServices.add(integrationClientService);
}
}
return integrationClientServices;
}
And here is the method that calls the previous one:
List<IntegrationClientService> clients = integrationService.getIntegrationClientServicesForService(service);
System.out.println(clients.size());
if (clients.size() > 0) {
log.info("Inbound service message [" + messageType.getKey() + "] to be sent to " + clients.size()
+ " registered clients: [" + StringUtils.arrayToDelimitedString(clients.toArray(), ", ") + "]");
for (IntegrationClientService integrationClientService : clients) {
Message<T> message = integrationMessageBuilder.build(messageType, payload, integrationClientService);
try {
channel.send(message);
} catch (RuntimeException e) {
messagingIntegrationService.handleException(e, messageType, integrationClientService, payload);
}
}
} else {
log.info("Inbound service message [" + messageType.getKey() + "] but no registered clients, not taking any further action.");
}
And here are the logs that appear on the server:
BaseIntegrationGateway.createAndSendToSubscribers(65) | Inbound service message [news.create] to be sent to 3 registered clients: [Id=126, Service=IntegrationService.MESSAGE_NEWS, Client=MDC, Id=125, Service=IntegrationService.MESSAGE_NEWS, Client=CNBC, Id=125, Service=IntegrationService.MESSAGE_NEWS, Client=CNBC]
Undefined means there is no requirement for any specific behavior. The implementation is free to start WWIII, re-hang all your toilet rolls by the overhand method, sully your grandmother, etc.
The only permitted modification with a specified behaviour is via the iterator.
Have you looked at java.util.concurrent.ConcurrentHashMap?
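To make the contrast concrete, here is a minimal sketch: modifying the map directly while iterating over its entry set is undefined (HashMap usually fails fast with a ConcurrentModificationException, but nothing is guaranteed), whereas removal through the iterator is specified:
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
public class UndefinedIterationDemo {
public static void main(String[] args) {
Map<String, Integer> map = new HashMap<>();
map.put("a", 1);
map.put("b", 2);
Iterator<Map.Entry<String, Integer>> it = map.entrySet().iterator();
while (it.hasNext()) {
if (it.next().getValue() == 1) {
// Defined: removal via the iterator. Calling map.remove(...) here
// instead would be the undefined modification the Javadoc warns about.
it.remove();
}
}
System.out.println(map); // prints {b=2}
}
}
A java.util.concurrent.ConcurrentHashMap, by contrast, has weakly consistent iterators that tolerate concurrent modification, which is why it is worth a look for a cache that is read and written by multiple request threads.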
EDIT: I looked over your code again and this strikes me as odd:
In fillCachedIntegrationClients() you have the following loop:
for (IntegrationClient integrationClient : allCachedIntegrationClients) {
putOneIntegrationClientOnCache(integrationClient);
}
But the putOneIntegrationClientOnCache method itself directly calls fillCachedIntegrationClients();
synchronized private void putOneIntegrationClientOnCache(IntegrationClient integrationClient){
fillCachedIntegrationClients(); // only fill cache if it is null , it will never refill cache
...
}
Something there must go wrong. You are calling fillCachedIntegrationClients() twice. Actually, if I am not mistaken, this should be a never-ending loop, since one method calls the other and vice versa. The != null condition is never met during the initialization. Of course, you are modifying and iterating in an undefined way, so maybe that saves you from an infinite loop.