Performance handling over synchronized block - java

In the code below, many threads in my system end up BLOCKED when adding a new conference (the addConference() API) to the collection, because of a system-level lock (addConfLock). Every thread adding a new conference is BLOCKED, and the time to add a conference grows because each conference object has to be created/built from the DB by executing complex SQL. So thread BLOCKING time is proportional to the DB transaction.
I would like to separate the conference object creation from the SYNC block that adds a conference to the collection.
I tried a solution; please suggest any other solution, or explain what is bad about mine.
Below is the original code.
class Conferences extends OurCollectionImpl {
    // Contains all ongoing conferences
}

// Single conference instance
class Conf {
    String confId;
    Date startTime;
    Date participants;

    public void load() {
        // Load conference details from the DB and set its instance members
    }
}

class ConfMgr {
    Conferences confs = new Conferences();
    Object addConfLock = new Object();

    public boolean addConference(DBConn conn, String confID) {
        synchronized (addConfLock) {
            Conf conf = confs.get(confID);
            if (conf != null) {
                return true;
            }
            conf = new Conf();
            conf.setConfID(confID);
            conf.load(); // This is the BIG JOB inside the SYNC BLOCK that needs to be separated
            confs.add(conf);
            return true;
        }
    }
}
// My solution:
public boolean addConference(DBConn conn, String confID) {
    Conf conf = null;
    synchronized (addConfLock) {
        conf = confs.get(confID);
        if (conf != null) {
            return true;
        }
        conf = Conf.getInstance(conn, confID);
    }
    synchronized (conf.builded) { // SYNC is moved down to the individual conf object level
        if (conf.builded.equals("T")) {
            return true;
        }
        conf.load();
        synchronized (addConfLock) {
            confs.add(conf);
        }
        conf.builded = "T";
        return true;
    }
}
// Single conference instance
class Conf {
    String confId;
    Date startTime;
    Date participants;
    String builded = "F"; // This is to avoid building the object again.
    private static HashMap<String, Conf> instanceMap = new HashMap<String, Conf>();

    /*
     * The code below avoids two threads both requesting
     * to create a conference with the same confID.
     */
    public static Conf getInstance(DBConn conn, String confID) {
        // This synchronization ensures a singleton is created per confID
        synchronized (Conf.class) {
            Conf conf = instanceMap.get(confID);
            if (conf == null) {
                conf = new Conf();
                instanceMap.put(confID, conf);
            }
            return conf;
        }
    }

    public void load() {
        // Load conference details from the DB and set its instance members
    }
}

The problem, as you stated, is that you are locking everything for each conf id.
How about changing the design of the locking mechanism so that there is a separate lock for each conf id? Then you will only lock when transactions are for the same conf id; otherwise execution will be parallel.
Here is how you can achieve it:
Your Conferences should use a ConcurrentHashMap to store the different confs.
Take the lock on the conf object you retrieve from conf = confs.get(confID), i.e. synchronized(conf).
Your application should perform much better then.
EDIT:
The problem is in this code:
synchronized (addConfLock) {
    conf = confs.get(confID);
    if (conf != null) {
        return true;
    }
    conf = Conf.getInstance(conn, confID);
}
If you use a ConcurrentHashMap as the collection you can remove this: it provides the API putIfAbsent, so you don't need to lock it yourself.
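To make that concrete, here is a minimal sketch combining both suggestions: a ConcurrentHashMap for atomic registration, plus a per-conference lock for the expensive load. The isLoaded()/setLoaded() flag is an assumption standing in for the asker's builded field.

import java.util.concurrent.ConcurrentHashMap;

class ConfMgr {
    private final ConcurrentHashMap<String, Conf> confs = new ConcurrentHashMap<>();

    public boolean addConference(DBConn conn, String confID) {
        Conf fresh = new Conf();
        fresh.setConfID(confID);
        // Atomic insert: exactly one Conf instance gets registered per id, with
        // no global lock. putIfAbsent returns the previously mapped value, if any.
        Conf conf = confs.putIfAbsent(confID, fresh);
        if (conf == null) {
            conf = fresh; // this thread won the race and owns the insert
        }
        // Per-conference lock: threads asking for the same id wait for the load;
        // threads asking for different ids proceed in parallel.
        synchronized (conf) {
            if (!conf.isLoaded()) { // hypothetical flag replacing 'builded'
                conf.load();        // the expensive DB work, outside any global lock
                conf.setLoaded(true);
            }
        }
        return true;
    }
}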


Mixed up Test configuration when using @ResourceArg

TL;DR: When running tests with different @ResourceArgs, the configurations of the different tests get thrown around and override others, breaking tests meant to run with specific configurations.
So, I have a service that has tests that run in different configuration setups. The main difference at the moment is that the service can either manage its own authentication or get it from an external source (Keycloak).
I first control this using test profiles, which seem to work fine. Unfortunately, in order to support both cases, the ResourceLifecycleManager I have set up supports starting a Keycloak instance and returns config values that break the config for self authentication (this is due primarily to the fact that I have not found out how to get the lifecycle manager to determine on its own what profile or config is currently running; if I could do this, I think I would be much better off than using @ResourceArg, so I would love to know if I missed something here).
To remedy this shortcoming, I have attempted to use @ResourceArg to convey to the lifecycle manager when to set up for external auth. However, I have noticed some really odd execution timings, and the config that ends up at my test/service isn't what I intend based on the test class's annotations; it is obvious the lifecycle manager has set up for external auth.
Additionally, it should be noted that I have my tests ordered such that the profiles and configs shouldn't be running out of order: all the tests that don't care run first, then the 'normal' tests with self auth, then the tests with the external auth profile. I can see this working appropriately when I run in IntelliJ, and I can tell from the time taken that a new service instance is started up between the test profiles.
Looking at the logs with breakpoints in a few places, some odd things are obvious:
When breaking on an erring test (before the externally-configured tests run):
The start() method of my TestResourceLifecycleManager has been called twice
The first run started Keycloak, which would override/break the config
(though the time I would expect Keycloak to need to start up doesn't seem to elapse; a little confused here)
The second run is correct, not starting Keycloak
The profile config is what is expected, except for what the Keycloak setup would override
When breaking on an externally-configured test (after all self-configured tests run):
The start() method has now been called 4 times; it appears things were started in the same order as before, again for the new run of the app
There could be some weirdness in how IntelliJ/Gradle shows logs, but I am interpreting this as:
Quarkus initializing two instances of the lifecycle manager when starting the app for some reason, with one's config overriding the other's, causing my woes.
The lifecycle manager itself is working as expected; it appropriately starts or doesn't start Keycloak when configured either way.
At this point I can't tell if I'm doing something wrong, or if there's a bug.
Test class example for self-auth test (same annotations for all tests in this (test) profile):
@Slf4j
@QuarkusTest
@QuarkusTestResource(TestResourceLifecycleManager.class)
@TestHTTPEndpoint(Auth.class)
class AuthTest extends RunningServerTest {
Test class example for external auth test (same annotations for all tests in this (externalAuth) profile):
@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(value = TestResourceLifecycleManager.class, initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
ExternalAuthTestProfile extends this, providing the appropriate profile name:
public class NonDefaultTestProfile implements QuarkusTestProfile {
    private final String testProfile;
    private final Map<String, String> overrides = new HashMap<>();

    protected NonDefaultTestProfile(String testProfile) {
        this.testProfile = testProfile;
    }

    protected NonDefaultTestProfile(String testProfile, Map<String, String> configOverrides) {
        this(testProfile);
        this.overrides.putAll(configOverrides);
    }

    @Override
    public Map<String, String> getConfigOverrides() {
        return new HashMap<>(this.overrides);
    }

    @Override
    public String getConfigProfile() {
        return testProfile;
    }

    @Override
    public List<TestResourceEntry> testResources() {
        return QuarkusTestProfile.super.testResources();
    }
}
Lifecycle manager:
@Slf4j
public class TestResourceLifecycleManager implements QuarkusTestResourceLifecycleManager {
    public static final String EXTERNAL_AUTH_ARG = "externalAuth";

    private static volatile MongodExecutable MONGO_EXE = null;
    private static volatile KeycloakContainer KEYCLOAK_CONTAINER = null;

    private boolean externalAuth = false;

    public synchronized Map<String, String> startKeycloakTestServer() {
        if (!this.externalAuth) {
            log.info("No need for keycloak.");
            return Map.of();
        }
        if (KEYCLOAK_CONTAINER != null) {
            log.info("Keycloak already started.");
        } else {
            KEYCLOAK_CONTAINER = new KeycloakContainer()
                // .withEnv("hello","world")
                .withRealmImportFile("keycloak-realm.json");
            KEYCLOAK_CONTAINER.start();
            log.info(
                "Test keycloak started at endpoint: {}\tAdmin creds: {}:{}",
                KEYCLOAK_CONTAINER.getAuthServerUrl(),
                KEYCLOAK_CONTAINER.getAdminUsername(),
                KEYCLOAK_CONTAINER.getAdminPassword()
            );
        }
        String clientId;
        String clientSecret;
        String publicKey = "";
        try (
            Keycloak keycloak = KeycloakBuilder.builder()
                .serverUrl(KEYCLOAK_CONTAINER.getAuthServerUrl())
                .realm("master")
                .grantType(OAuth2Constants.PASSWORD)
                .clientId("admin-cli")
                .username(KEYCLOAK_CONTAINER.getAdminUsername())
                .password(KEYCLOAK_CONTAINER.getAdminPassword())
                .build()
        ) {
            RealmResource appsRealmResource = keycloak.realms().realm("apps");
            ClientRepresentation qmClientResource = appsRealmResource.clients().findByClientId("quartermaster").get(0);
            clientSecret = qmClientResource.getSecret();
            log.info("Got client id \"{}\" with secret: {}", "quartermaster", clientSecret);
            // get public key
            for (KeysMetadataRepresentation.KeyMetadataRepresentation curKey : appsRealmResource.keys().getKeyMetadata().getKeys()) {
                if (!SIG.equals(curKey.getUse())) {
                    continue;
                }
                if (!"RSA".equals(curKey.getType())) {
                    continue;
                }
                String publicKeyTemp = curKey.getPublicKey();
                if (publicKeyTemp == null || publicKeyTemp.isBlank()) {
                    continue;
                }
                publicKey = publicKeyTemp;
                log.info("Found a relevant key for public key use: {} / {}", curKey.getKid(), publicKey);
            }
        }
        // write public key
        // = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString() + "/security/testKeycloakPublicKey.pem");
        File publicKeyFile;
        try {
            publicKeyFile = File.createTempFile("oqmTestKeycloakPublicKey", ".pem");
            // publicKeyFile = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString().replace("/classes/java/", "/resources/") + "/security/testKeycloakPublicKey.pem");
            log.info("path of public key: {}", publicKeyFile);
            // if(publicKeyFile.createNewFile()){
            //     log.info("created new public key file");
            // } else {
            //     log.info("Public file already exists");
            // }
            try (
                FileOutputStream os = new FileOutputStream(publicKeyFile)
            ) {
                IOUtils.write(publicKey, os, UTF_8);
            } catch (IOException e) {
                log.error("Failed to write out public key of keycloak: ", e);
                throw new IllegalStateException("Failed to write out public key of keycloak.", e);
            }
        } catch (IOException e) {
            log.error("Failed to create public key file: ", e);
            throw new IllegalStateException("Failed to create public key file", e);
        }
        String keycloakUrl = KEYCLOAK_CONTAINER.getAuthServerUrl().replace("/auth", "");
        return Map.of(
            "test.keycloak.url", keycloakUrl,
            "test.keycloak.authUrl", KEYCLOAK_CONTAINER.getAuthServerUrl(),
            "test.keycloak.adminName", KEYCLOAK_CONTAINER.getAdminUsername(),
            "test.keycloak.adminPass", KEYCLOAK_CONTAINER.getAdminPassword(),
            //TODO:: add config for server to talk to
            "service.externalAuth.url", keycloakUrl,
            "mp.jwt.verify.publickey.location", publicKeyFile.getAbsolutePath()
        );
    }

    public static synchronized void startMongoTestServer() throws IOException {
        if (MONGO_EXE != null) {
            log.info("Flapdoodle Mongo already started.");
            return;
        }
        Version.Main version = Version.Main.V4_0;
        int port = 27018;
        log.info("Starting Flapdoodle Test Mongo {} on port {}", version, port);
        IMongodConfig config = new MongodConfigBuilder()
            .version(version)
            .net(new Net(port, Network.localhostIsIPv6()))
            .build();
        try {
            MONGO_EXE = MongodStarter.getDefaultInstance().prepare(config);
            MongodProcess process = MONGO_EXE.start();
            if (!process.isProcessRunning()) {
                throw new IOException();
            }
        } catch (Throwable e) {
            log.error("FAILED to start test mongo server: ", e);
            MONGO_EXE = null;
            throw e;
        }
    }

    public static synchronized void stopMongoTestServer() {
        if (MONGO_EXE == null) {
            log.warn("Mongo was not started.");
            return;
        }
        MONGO_EXE.stop();
        MONGO_EXE = null;
    }

    public synchronized static void cleanMongo() throws IOException {
        if (MONGO_EXE == null) {
            log.warn("Mongo was not started.");
            return;
        }
        log.info("Cleaning Mongo of all entries.");
    }

    @Override
    public void init(Map<String, String> initArgs) {
        this.externalAuth = Boolean.parseBoolean(initArgs.getOrDefault(EXTERNAL_AUTH_ARG, Boolean.toString(this.externalAuth)));
    }

    @Override
    public Map<String, String> start() {
        log.info("STARTING test lifecycle resources.");
        Map<String, String> configOverride = new HashMap<>();
        try {
            startMongoTestServer();
        } catch (IOException e) {
            log.error("Unable to start Flapdoodle Mongo server");
        }
        configOverride.putAll(startKeycloakTestServer());
        return configOverride;
    }

    @Override
    public void stop() {
        log.info("STOPPING test lifecycle resources.");
        stopMongoTestServer();
    }
}
The app can be found here: https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/open-qm-base-station
The tests are currently failing in the ways I am describing, so feel free to look around.
Note that to run this, you will need to run ./gradlew build publishToMavenLocal in https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/libs/open-qm-core to install a dependency locally.
GitHub issue also tracking this: https://github.com/quarkusio/quarkus/issues/22025
Any use of @QuarkusTestResource() without restrictToAnnotatedClass set to true means that the QuarkusTestResourceLifecycleManager will be applied to all tests, no matter where the annotation is placed.
Hopefully restrictToAnnotatedClass will solve the problem.
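For example, here is a sketch of the externally-authenticated test class from the question with the flag added (restrictToAnnotatedClass is the attribute of @QuarkusTestResource the answer names; everything else is unchanged):

@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(
    value = TestResourceLifecycleManager.class,
    initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"),
    restrictToAnnotatedClass = true // scope this manager (and its initArgs) to this test class only
)
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
}

With the flag set on each annotated test class, each class should get the lifecycle manager configured with its own initArgs instead of sharing one application-wide instance.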

How to reset a variable in Flume's custom sink class for every batch

I have a Flume process that reads data from files in a spooldir and loads the data into a MySQL database. Multiple types of files can be processed by the same Flume process.
I have created a custom sink Java class (extending AbstractSink) that updates a local variable (sInterfaceType) after an initial/first read, to decide the data format in the file.
I have to reset it once the file processing completes, so that it starts over by identifying the next batch/interface file.
I tried to do it in stop(), but that doesn't help. Has anybody done this?
My sink class looks like this:
public class MyFlumeSink2 extends AbstractSink implements Configurable {
    private String sInterfaceType; // tells file format of current load

    public MyFlumeSink2() {
        // my initialization of variables
    }

    public void configure(Context context) {
        // read context variables
    }

    public void start() {
        // create db connection
    }

    @Override
    public void stop() {
        // destroy connection
        sInterfaceType = ""; // This doesn't help me
        super.stop();
    }

    public Status process() throws EventDeliveryException {
        Channel channel = getChannel();
        Transaction transaction = channel.getTransaction();
        if (sInterfaceType == null || sInterfaceType.isEmpty()) {
            // Read first line & set sInterfaceType
        } else {
            // Insert data in MySQL
        }
        transaction.commit();
        return Status.READY;
    }
}
We have to decide manually which kind of event it is; there is no specialized method called for every new file.
I revised my code to read the event line and set sInterfaceType based on the first element. My code looks like this:
public Status process() throws EventDeliveryException {
    // ....other code...
    sEvtBody = new String(event.getBody());
    sFields = sEvtBody.split(",");
    // check first field to know record type
    enumRec = RecordType.valueOf(checkRecordType(sFields[0].toUpperCase()));
    switch (enumRec) {
        case CUST_ID:
            sInterfaceType = "T_CUST";
            bHeader = true;
            break;
        case TXN_ID:
            sInterfaceType = "T_CUST_TXNS";
            bHeader = true;
            break;
        default:
            bHeader = false;
    }
    // insert if not header
    if (!bHeader) {
        if ("T_CUST".equals(sInterfaceType)) {
            if (sFields.length == 14)
                this.bInsertStatus = daoClass.insertHeader(sFields);
            else
                throw new EventDeliveryException("INCORRECT_COLUMN_COUNT");
        } else if ("T_CUST_TXNS".equals(sInterfaceType)) {
            if (sFields.length == 10)
                this.bInsertStatus = daoClass.insertData(sFields);
            else
                throw new EventDeliveryException("INCORRECT_COLUMN_COUNT");
        }
        // if (!bInsertStatus)
        //     logTransaction(sFields);
    }
    // ....Other code....
}
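Note that stop() is only invoked when the agent (or the sink) shuts down, not between files, so per-event detection as above is the workable way to reset the state. For reference, here is a minimal sketch of a sink's process() with full transaction handling; handleEvent() is a placeholder for the record-type logic above:

@Override
public Status process() throws EventDeliveryException {
    Channel channel = getChannel();
    Transaction transaction = channel.getTransaction();
    try {
        transaction.begin();
        Event event = channel.take();
        if (event == null) {
            transaction.commit();
            return Status.BACKOFF; // nothing to take this round
        }
        handleEvent(event); // placeholder: detect record type / insert into MySQL as above
        transaction.commit();
        return Status.READY;
    } catch (Throwable t) {
        transaction.rollback();
        throw new EventDeliveryException("Failed to process event", t);
    } finally {
        transaction.close();
    }
}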

Using AWS Java's SDKs, how can I terminate the CloudFormation stack of the current instance?

Using online documentation, I came up with the following code to terminate the current EC2 instance:
public class Ec2Utility {
    static private final String LOCAL_META_DATA_ENDPOINT = "http://169.254.169.254/latest/meta-data/";
    static private final String LOCAL_INSTANCE_ID_SERVICE = "instance-id";

    static public void terminateMe() throws Exception {
        TerminateInstancesRequest terminateRequest = new TerminateInstancesRequest().withInstanceIds(getInstanceId());
        AmazonEC2 ec2 = new AmazonEC2Client();
        ec2.terminateInstances(terminateRequest);
    }

    static public String getInstanceId() throws Exception {
        // SimpleRestClient is an internal wrapper on http client.
        SimpleRestClient client = new SimpleRestClient(LOCAL_META_DATA_ENDPOINT);
        HttpResponse response = client.makeRequest(METHOD.GET, LOCAL_INSTANCE_ID_SERVICE);
        return IOUtils.toString(response.getEntity().getContent(), "UTF-8");
    }
}
My issue is that my EC2 instance is under an AutoScalingGroup, which is under a CloudFormation stack; that is because of my organisation's deployment standards, though this single EC2 instance is all there is for this feature.
So, I want to terminate the entire CloudFormation stack from the Java SDK. Keep in mind that I don't have the CloudFormation stack name in advance, just as I didn't have the EC2 instance id, so I will have to get it from the code using API calls.
How can I do that, if it can be done?
You should be able to use the deleteStack method from the CloudFormation SDK:
DeleteStackRequest request = new DeleteStackRequest();
request.setStackName(<stack_name_to_be_deleted>);
AmazonCloudFormationClient client = new AmazonCloudFormationClient(<credentials>);
client.deleteStack(request);
If you don't have the stack name, you should be able to retrieve it from the tags of your instance:
DescribeInstancesRequest request = new DescribeInstancesRequest();
request.setInstanceIds(instancesList);
DescribeInstancesResult disresult = ec2.describeInstances(request);
List<Reservation> list = disresult.getReservations();
for (Reservation res : list) {
    List<Instance> instancelist = res.getInstances();
    for (Instance instance : instancelist) {
        List<Tag> tags = instance.getTags();
        for (Tag tag : tags) {
            if (tag.getKey().equals("aws:cloudformation:stack-name")) {
                tag.getValue(); // name of the stack
            }
        }
    }
}
In the end I achieved the desired behaviour using the following util functions I wrote:
/**
 * Delete the CloudFormationStack with the given name.
 *
 * @param stackName
 * @throws Exception
 */
static public void deleteCloudFormationStack(String stackName) throws Exception {
    AmazonCloudFormationClient client = new AmazonCloudFormationClient();
    DeleteStackRequest deleteStackRequest = new DeleteStackRequest().withStackName(stackName);
    client.deleteStack(deleteStackRequest);
}

// TAG_KEY_STACK_NAME is "aws:cloudformation:stack-name", the tag CloudFormation
// puts on the instances it creates (see the answer above).
static public String getCloudFormationStackName() throws Exception {
    AmazonEC2 ec2 = new AmazonEC2Client();
    String instanceId = getInstanceId();
    List<Tag> tags = getEc2Tags(ec2, instanceId);
    for (Tag t : tags) {
        if (t.getKey().equalsIgnoreCase(TAG_KEY_STACK_NAME)) {
            return t.getValue();
        }
    }
    throw new Exception("Couldn't find stack name for instanceId:" + instanceId);
}

static private List<Tag> getEc2Tags(AmazonEC2 ec2, String instanceId) throws Exception {
    DescribeInstancesRequest describeInstancesRequest = new DescribeInstancesRequest().withInstanceIds(instanceId);
    DescribeInstancesResult describeInstances = ec2.describeInstances(describeInstancesRequest);
    List<Reservation> reservations = describeInstances.getReservations();
    if (reservations.isEmpty()) {
        throw new Exception("DescribeInstances didn't return a reservation for instanceId:" + instanceId);
    }
    List<Instance> instances = reservations.get(0).getInstances();
    if (instances.isEmpty()) {
        throw new Exception("DescribeInstances didn't return an instance for instanceId:" + instanceId);
    }
    return instances.get(0).getTags();
}

// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
// Example of usage from the code:
deleteCloudFormationStack(getCloudFormationStackName());
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Threads and Hibernate with Spring MVC

I am currently working on a web application which is basically a portfolio site for different vendors.
I was working on a thread which copies the details of one vendor and puts them against a new vendor; pretty straightforward.
The thread works as intended, but when selecting a particular Catalog object (this catalog object contains a Velocity template), execution stops and goes nowhere. Invoking the thread once again just hangs the whole application.
Here is my code.
public class CopySiteThread extends Thread {

    public CopySiteThread(ComponentDTO componentDTO, long vendorid, int admin_id) {
        /** Application specific business logic not exposed **/
    }

    public void run() {
        /** Application based business logic not exposed **/
        // Copy catalog first
        List<Catalog> catalog = catalogDAO.getCatalog(vendorid);
        System.out.println(catalog);
        List<Catalog> newCat = new ArrayList<Catalog>();
        HashMap<String, Integer> catIdMapList = new HashMap<String, Integer>();
        Iterator<Catalog> catIterator = catalog.iterator();
        while (catIterator.hasNext()) {
            Catalog cat = catIterator.next();
            System.out.println(cat);
            int catId = catalogDAO.addTemplate(admin_id, cat.getHtml(), cat.getName(), cat.getNickname(), cat.getTemplategroup(), vendor.getVendorid());
            catIdMapList.put(cat.getName(), catId);
            cat = null;
        }
    }
}
And the thread is invoked like this.
CopySiteThread thread = new CopySiteThread(componentDTO, baseVendor, admin_id);
thread.start();
After a certain number of iterations, it gets stuck on line Catalog cat = catIterator.next();
This issue is rather strange because I've developed many applications like this without any problem.
Any help appreciated.
The actual problem was in the addCatalog method in CatalogDAO:
Session session = sf.openSession();
Transaction tx = null;
Integer templateID = null;
Date date = new Date();
try {
    tx = session.beginTransaction();
    Catalog catalog = new Catalog();
    // Business logic
    templateID = (Integer) session.save(catalog);
    tx.commit();
} catch (HibernateException ex) {
    if (tx != null) tx.rollback();
} finally {
    session.close();
}
return templateID;
Fixed by adding a finally clause and closing all sessions; the sessions leaked on each call were presumably exhausting the connection pool, which is why the iteration eventually blocked.

How to change .properties file using apache.commons.configuration

I have an app where I filter messages according to some rules (the presence of certain keywords or regexps). These rules have to be stored in a .properties file (as they must be persistent). I've figured out how to read data from this file. Here is the relevant part of the code:
public class Config {
    private static final Config ourInstance = new Config();
    private static final CompositeConfiguration prop = new CompositeConfiguration();

    public static Config getInstance() {
        return ourInstance;
    }

    public Config() {
    }

    public synchronized void load() {
        try {
            prop.addConfiguration(new SystemConfiguration());
            System.out.println("Loading /rules.properties");
            final PropertiesConfiguration p = new PropertiesConfiguration();
            p.setPath("/home/mikhail/bzrrep/DLP/DLPServer/src/main/resources/rules.properties");
            p.load();
            prop.addConfiguration(p);
        } catch (ConfigurationException e) {
            e.printStackTrace();
        }
        final int processors = prop.getInt("server.processors", 1);
        // If you don't see this line, the config name is likely wrong
        System.out.println("Using processors:" + processors);
    }

    public void setKeyword(String customerId, String keyword) {
    }

    public void setRegexp(String customerId, String regexp) {
    }
}
As you can see, I'm going to add values to some properties. Here is the .properties file itself:
users = admin, root, guest
users.admin.keywords = admin
users.admin.regexps = test-5, test-7
users.root.keywords = root
users.root.regexps = *
users.guest.keywords = guest
users.guest.regexps =
I have a GUI for the user to add keywords and regexps to this config. So, how do I implement the methods setKeyword and setRegexp?
The easiest way I found is to read the current values of the property into an array, add the new value there, and set the property back:
props.setProperty(fieldName, values);
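For instance, a minimal sketch of setKeyword along those lines, assuming the PropertiesConfiguration p created in load() is kept in a field (the field name p is an assumption; getList, setProperty, and save are the Commons Configuration APIs):

public synchronized void setKeyword(String customerId, String keyword) {
    String key = "users." + customerId + ".keywords";
    // Read the current values, append the new one, and write the property back.
    List<Object> values = new ArrayList<>(p.getList(key));
    values.add(keyword);
    p.setProperty(key, values);
    try {
        p.save(); // persist the change back to rules.properties
    } catch (ConfigurationException e) {
        e.printStackTrace();
    }
}

setRegexp would be identical, using the users.<customerId>.regexps key.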
