Mixed up test configuration when using @ResourceArg - java

TL;DR: When running tests with different @ResourceArgs, the configuration from the different tests gets mixed up and overrides others, breaking tests meant to run with specific configurations.
So, I have a service whose tests run under different configuration setups. The main difference at the moment is that the service can either manage its own authentication or get it from an external source (Keycloak).
I primarily control this using test profiles, which seem to work fine. Unfortunately, in order to support both cases, the ResourceLifecycleManager I have set up supports starting a Keycloak instance and returns config values that break the config for self authentication. (This is primarily because I have not found out how the lifecycle manager can determine on its own which profile or config is currently running. If I could do this, I think I would be much better off than using @ResourceArg, so I would love to know if I missed something here.)
To remedy this shortcoming, I have attempted to use @ResourceArgs to convey to the lifecycle manager when to set up for external auth. However, I have noticed some really odd execution timings, and the config that ends up at my test/service isn't what I intend based on the test class's annotations; it is obvious the lifecycle manager has set up for external auth.
Additionally, it should be noted that I have my tests ordered such that the profiles and configs shouldn't be running out of order: all the tests that don't care run first, then the 'normal' tests with self auth, then the tests with the external auth profile. I can see this working appropriately when I run in IntelliJ, and I can tell that the time to start up a new service instance is being taken between the test profiles.
Looking at the logs when I throw a breakpoint in places, some odd things are obvious:
When breaking on a failing test (before the externally-configured tests run):
- The start() method of my TestResourceLifecycleManager has been called twice.
- The first run started Keycloak, which would override/break the config (though the time I would expect Keycloak to take to start up isn't happening, so I'm a little confused here).
- The second run is correct and does not start Keycloak.
- The profile config is what is expected, except for what the Keycloak setup would override.
When breaking on an externally-configured test (after all self-configured tests have run):
- The start() method has now been called four times; it appears that things were started in the same order as before for the new run of the app.
There could be some weirdness in how IntelliJ/Gradle shows logs, but I am interpreting this as:
- Quarkus initializes the two instances of the lifecycle manager when starting the app for some reason, and one's config overrides the other, causing my woes.
- The lifecycle manager itself is working as expected; it appropriately starts (or doesn't start) Keycloak when configured either way.
At this point I can't tell if I'm doing something wrong, or if there's a bug.
Test class example for self-auth test (same annotations for all tests in this (test) profile):
@Slf4j
@QuarkusTest
@QuarkusTestResource(TestResourceLifecycleManager.class)
@TestHTTPEndpoint(Auth.class)
class AuthTest extends RunningServerTest {
Test class example for external auth test (same annotations for all tests in this (externalAuth) profile):
@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(value = TestResourceLifecycleManager.class, initArgs = @ResourceArg(name=TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value="true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
ExternalAuthTestProfile extends this, providing the appropriate profile name:
public class NonDefaultTestProfile implements QuarkusTestProfile {
private final String testProfile;
private final Map<String, String> overrides = new HashMap<>();
protected NonDefaultTestProfile(String testProfile) {
this.testProfile = testProfile;
}
protected NonDefaultTestProfile(String testProfile, Map<String, String> configOverrides) {
this(testProfile);
this.overrides.putAll(configOverrides);
}
@Override
public Map<String, String> getConfigOverrides() {
return new HashMap<>(this.overrides);
}
@Override
public String getConfigProfile() {
return testProfile;
}
@Override
public List<TestResourceEntry> testResources() {
return QuarkusTestProfile.super.testResources();
}
}
Lifecycle manager:
@Slf4j
public class TestResourceLifecycleManager implements QuarkusTestResourceLifecycleManager {
public static final String EXTERNAL_AUTH_ARG = "externalAuth";
private static volatile MongodExecutable MONGO_EXE = null;
private static volatile KeycloakContainer KEYCLOAK_CONTAINER = null;
private boolean externalAuth = false;
public synchronized Map<String, String> startKeycloakTestServer() {
if(!this.externalAuth){
log.info("No need for keycloak.");
return Map.of();
}
if (KEYCLOAK_CONTAINER != null) {
log.info("Keycloak already started.");
} else {
KEYCLOAK_CONTAINER = new KeycloakContainer()
// .withEnv("hello","world")
.withRealmImportFile("keycloak-realm.json");
KEYCLOAK_CONTAINER.start();
log.info(
"Test keycloak started at endpoint: {}\tAdmin creds: {}:{}",
KEYCLOAK_CONTAINER.getAuthServerUrl(),
KEYCLOAK_CONTAINER.getAdminUsername(),
KEYCLOAK_CONTAINER.getAdminPassword()
);
}
String clientId;
String clientSecret;
String publicKey = "";
try (
Keycloak keycloak = KeycloakBuilder.builder()
.serverUrl(KEYCLOAK_CONTAINER.getAuthServerUrl())
.realm("master")
.grantType(OAuth2Constants.PASSWORD)
.clientId("admin-cli")
.username(KEYCLOAK_CONTAINER.getAdminUsername())
.password(KEYCLOAK_CONTAINER.getAdminPassword())
.build();
) {
RealmResource appsRealmResource = keycloak.realms().realm("apps");
ClientRepresentation qmClientResource = appsRealmResource.clients().findByClientId("quartermaster").get(0);
clientSecret = qmClientResource.getSecret();
log.info("Got client id \"{}\" with secret: {}", "quartermaster", clientSecret);
//get private key
for (KeysMetadataRepresentation.KeyMetadataRepresentation curKey : appsRealmResource.keys().getKeyMetadata().getKeys()) {
if (!SIG.equals(curKey.getUse())) {
continue;
}
if (!"RSA".equals(curKey.getType())) {
continue;
}
String publicKeyTemp = curKey.getPublicKey();
if (publicKeyTemp == null || publicKeyTemp.isBlank()) {
continue;
}
publicKey = publicKeyTemp;
log.info("Found a relevant key for public key use: {} / {}", curKey.getKid(), publicKey);
}
}
// write public key
// = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString() + "/security/testKeycloakPublicKey.pem");
File publicKeyFile;
try {
publicKeyFile = File.createTempFile("oqmTestKeycloakPublicKey",".pem");
// publicKeyFile = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString().replace("/classes/java/", "/resources/") + "/security/testKeycloakPublicKey.pem");
log.info("path of public key: {}", publicKeyFile);
// if(publicKeyFile.createNewFile()){
// log.info("created new public key file");
//
// } else {
// log.info("Public file already exists");
// }
try (
FileOutputStream os = new FileOutputStream(
publicKeyFile
);
) {
IOUtils.write(publicKey, os, UTF_8);
} catch (IOException e) {
log.error("Failed to write out public key of keycloak: ", e);
throw new IllegalStateException("Failed to write out public key of keycloak.", e);
}
} catch (IOException e) {
log.error("Failed to create public key file: ", e);
throw new IllegalStateException("Failed to create public key file", e);
}
String keycloakUrl = KEYCLOAK_CONTAINER.getAuthServerUrl().replace("/auth", "");
return Map.of(
"test.keycloak.url", keycloakUrl,
"test.keycloak.authUrl", KEYCLOAK_CONTAINER.getAuthServerUrl(),
"test.keycloak.adminName", KEYCLOAK_CONTAINER.getAdminUsername(),
"test.keycloak.adminPass", KEYCLOAK_CONTAINER.getAdminPassword(),
//TODO:: add config for server to talk to
"service.externalAuth.url", keycloakUrl,
"mp.jwt.verify.publickey.location", publicKeyFile.getAbsolutePath()
);
}
public static synchronized void startMongoTestServer() throws IOException {
if (MONGO_EXE != null) {
log.info("Flapdoodle Mongo already started.");
return;
}
Version.Main version = Version.Main.V4_0;
int port = 27018;
log.info("Starting Flapdoodle Test Mongo {} on port {}", version, port);
IMongodConfig config = new MongodConfigBuilder()
.version(version)
.net(new Net(port, Network.localhostIsIPv6()))
.build();
try {
MONGO_EXE = MongodStarter.getDefaultInstance().prepare(config);
MongodProcess process = MONGO_EXE.start();
if (!process.isProcessRunning()) {
throw new IOException();
}
} catch (Throwable e) {
log.error("FAILED to start test mongo server: ", e);
MONGO_EXE = null;
throw e;
}
}
public static synchronized void stopMongoTestServer() {
if (MONGO_EXE == null) {
log.warn("Mongo was not started.");
return;
}
MONGO_EXE.stop();
MONGO_EXE = null;
}
public synchronized static void cleanMongo() throws IOException {
if (MONGO_EXE == null) {
log.warn("Mongo was not started.");
return;
}
log.info("Cleaning Mongo of all entries.");
}
@Override
public void init(Map<String, String> initArgs) {
this.externalAuth = Boolean.parseBoolean(initArgs.getOrDefault(EXTERNAL_AUTH_ARG, Boolean.toString(this.externalAuth)));
}
@Override
public Map<String, String> start() {
log.info("STARTING test lifecycle resources.");
Map<String, String> configOverride = new HashMap<>();
try {
startMongoTestServer();
} catch (IOException e) {
log.error("Unable to start Flapdoodle Mongo server");
}
configOverride.putAll(startKeycloakTestServer());
return configOverride;
}
@Override
public void stop() {
log.info("STOPPING test lifecycle resources.");
stopMongoTestServer();
}
}
The app can be found here: https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/open-qm-base-station
The tests are currently failing in the ways I am describing, so feel free to look around.
Note that to run this, you will need to run ./gradlew build publishToMavenLocal in https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/libs/open-qm-core to install a dependency locally.
Github issue also tracking this: https://github.com/quarkusio/quarkus/issues/22025

Any use of @QuarkusTestResource without restrictToAnnotatedClass set to true means that the QuarkusTestResourceLifecycleManager will be applied to all tests, no matter where the annotation is placed.
Hopefully restrictToAnnotatedClass will solve the problem.
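For illustration, here is a minimal sketch of how the external-auth test class from the question could restrict its resource to itself; restrictToAnnotatedClass is the only addition, the rest is taken from the annotations shown above:
@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(
    value = TestResourceLifecycleManager.class,
    initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"),
    restrictToAnnotatedClass = true
)
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
    // the lifecycle manager configured here now only applies to this annotated class
}
With this in place, the self-auth test classes would keep their own @QuarkusTestResource annotation (also with restrictToAnnotatedClass = true), so each class gets a lifecycle manager configured only for it.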

Related

Spring-Redis connection not re-connecting between retries

I'm working on a Spring Batch application which uses a Redis connection to populate data.
Here are some relevant dependencies:
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
implementation 'io.lettuce:lettuce-core:5.3.3.RELEASE'
RedisConnection is imported from org.springframework.data.redis.connection
PROBLEM STATEMENT:
There might be a case where the RedisConnection is active when we start the application, but while the application is running we lose the Redis connection. In that case, when it enters the method below, the method will throw an error that the Redis connection is lost. Hence, we retry using the @Retryable logic.
But let's say that during the second retry the Redis connection is re-established; we want the retry to be able to detect that, re-connect to Redis, and go through the normal flow. However, THE REDIS RE-CONNECTION IS NOT GETTING DETECTED.
TRIED: I tried following https://github.com/lettuce-io/lettuce-core/issues/338 and added lettuceConnectionFactory.validateConnection(); to the defaultRedisConnection as below, but to no avail:
#Qualifier("defaultRedisConnection")
#Bean
public RedisConnection defaultRedisConnectionDockerCluster() {
RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration();
redisStandaloneConfiguration.setHostName("redis");
LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(redisStandaloneConfiguration);
lettuceConnectionFactory.validateConnection();
lettuceConnectionFactory.afterPropertiesSet();
return lettuceConnectionFactory.getConnection();
}
Here is the class:
@Slf4j
@Service
public class PopulateRedisDataService {
@Qualifier("defaultRedisConnection")
private final RedisConnection redisConnection;
private RedisClientData redisClientData = new RedisClientData();
public PopulateRedisDataService(
#Qualifier("defaultRedisConnection") RedisConnection redisConnection,
RedisDataUtils redisDataUtils) {
this.redisConnection = redisConnection;
}
@Retryable(maxAttemptsExpression = "3", backoff = @Backoff(delayExpression = "20_000",
multiplierExpression = "100_000", maxDelayExpression = "100_000"))
public RedisClientData populateData() {
try {
byte[] serObj = Objects.requireNonNull(redisConnection.get("SOME_KEY".getBytes()));
RedisClientData redisClientData = new RedisClientData();
// Some operations to load data from Redis/serObj into redisClientData object.
} catch (Exception e) {
// If Redis doesn't have the key, return empty redisClientData
redisClientData = new RedisClientData();
log.error("Failed to get ClientRegList", e);
}
return redisClientData;
}
@Recover
public void recover(Exception e) {
// Some operations
}
}
Any suggestions to handle this case would be much appreciated.

JUnit4 - How to test for readonly/write-protected directory if run with docker

We have an integration test setup for testing the behavior of missing but required configuration properties. One of these properties is a directory where failed uploads should be written for later retries. The general behavior for this property should be that the application doesn't even start up and fails immediately when certain constraints are violated.
The properties are managed by Spring via certain ConfigurationProperties classes; among these we have a simple S3MessageUploadSettings class:
@Getter
@Setter
@ConfigurationProperties(prefix = "s3")
@Validated
public class S3MessageUploadSettings {
@NotNull
private String bucketName;
@NotNull
private String uploadErrorPath;
...
}
In the respective Spring configuration we now perform certain validation checks, like whether the path exists, is writable and a directory, and throw respective RuntimeExceptions when certain assertions aren't met:
@Slf4j
@Import({ S3Config.class })
@Configuration
@EnableConfigurationProperties(S3MessageUploadSettings.class)
public class S3MessageUploadSpringConfig {
@Resource
private S3MessageUploadSettings settings;
...
@PostConstruct
public void checkConstraints() {
String sPath = settings.getUploadErrorPath();
Path path = Paths.get(sPath);
...
log.debug("Probing path '{}' for existence', path);
if (!Files.exists(path)) {
throw new RuntimeException("Required error upload directory '" + path + "' does not exist");
}
log.debug("Probig path '{}' for being a directory", path);
if (!Files.isDirectory(path)) {
throw new RuntimeException("Upload directory '" + path + "' is not a directoy");
}
log.debug("Probing path '{}' for write permissions", path);
if (!Files.isWritable(path)) {
throw new RuntimeException("Error upload path '" + path +"' is not writable);
}
}
}
Our test setup now looks like this:
public class StartupTest {
@ClassRule
public static TemporaryFolder testFolder = new TemporaryFolder();
private static File BASE_FOLDER;
private static File ACCESSIBLE;
private static File WRITE_PROTECTED;
private static File NON_DIRECTORY;
@BeforeClass
public static void initFolderSetup() throws IOException {
BASE_FOLDER = testFolder.getRoot();
ACCESSIBLE = testFolder.newFolder("accessible");
WRITE_PROTECTED = testFolder.newFolder("writeProtected");
if (!WRITE_PROTECTED.setReadOnly()) {
fail("Could not change directory permissions to readonly")
}
if (!WRITE_PROTECTED.setWritable(false)) {
fail("Could not change directory permissions to writable(false)");
}
NON_DIRECTORY = testFolder.newFile("nonDirectory");
}
@Configuration
@Import({
S3MessageUploadSpringConfig.class,
S3MockConfig.class,
...
})
static class BaseContextConfig {
// common bean definitions
...
}
@Configuration
@Import(BaseContextConfig.class)
@PropertySource("classpath:ci.properties")
static class NotExistingPathContextConfig {
@Resource
private S3MessageUploadSettings settings;
@PostConstruct
public void updateSettings() {
settings.setUploadErrorPath(BASE_FOLDER.getPath() + "/foo/bar");
}
}
@Configuration
@Import(BaseContextConfig.class)
@PropertySource("classpath:ci.properties")
static class NotWritablePathContextConfig {
@Resource
private S3MessageUploadSettings settings;
@PostConstruct
public void updateSettings() {
settings.setUploadErrorPath(WRITE_PROTECTED.getPath());
}
}
...
@Configuration
@Import(BaseContextConfig.class)
@PropertySource("classpath:ci.properties")
static class StartableContextConfig {
@Resource
private S3MessageUploadSettings settings;
@PostConstruct
public void updateSettings() {
settings.setUploadErrorPath(ACCESSIBLE.getPath());
}
}
@Test
public void shouldFailStartupDueToNonExistingErrorPathDirectory() {
ApplicationContext context = null;
try {
context = new AnnotationConfigApplicationContext(StartupTest.NotExistingPathContextConfig.class);
fail("Should not have started the context");
} catch (Exception e) {
e.printStackTrace();
assertThat(e, instanceOf(BeanCreationException.class));
assertThat(e.getMessage(), containsString("Required error upload directory '" + BASE_FOLDER + "/foo/bar' does not exist"));
} finally {
closeContext(context);
}
}
@Test
public void shouldFailStartupDueToNonWritablePathDirectory() {
ApplicationContext context = null;
try {
context = new AnnotationConfigApplicationContext(StartupTest.NotWritablePathContextConfig.class);
fail("Should not have started the context");
} catch (Exception e) {
assertThat(e, instanceOf(BeanCreationException.class));
assertThat(e.getMessage(), containsString("Error upload path '" + WRITE_PROTECTED + "' is not writable"));
} finally {
closeContext(context);
}
}
...
@Test
public void shouldStartUpSuccessfully() {
ApplicationContext context = null;
try {
context = new AnnotationConfigApplicationContext(StartableContextConfig.class);
} catch (Exception e) {
e.printStackTrace();
fail("Should not have thrown an exception of type " + e.getClass().getSimpleName() + " with message " + e.getMessage());
} finally {
closeContext(context);
}
}
private void closeContext(ApplicationContext context) {
if (context != null) {
// check and close any running S3 mock as this may have negative impact on the startup of a further context
closeS3Mock(context);
// stop a running Spring context manually as this might interfere with a starting context of an other test
((ConfigurableApplicationContext) context).stop();
}
}
private void closeS3Mock(ApplicationContext context) {
S3Mock s3Mock = null;
try {
if (context != null) {
s3Mock = context.getBean("s3Mock", S3Mock.class);
}
} catch (Exception e) {
e.printStackTrace();
} finally {
if (null != s3Mock) {
s3Mock.stop();
}
}
}
}
When run locally, everything looks fine and all tests pass. However, our CI runs these tests inside a Docker container, and for some reason changing file permissions seems to end up as a no-op: the method invocation returns true but nothing changes with regard to the file permissions themselves.
Neither File.setReadOnly(), File.setWritable(false) nor Files.setPosixFilePermissions(Path, Set<PosixFilePermission>) seems to have an effect on the actual file permissions in the Docker container.
I've also tried pointing the tests at real directories that are write-protected, i.e. /root or /dev/pts, though as the CI runs the tests as root these directories are writable by the application and the tests fail again.
I also considered using an in-memory file system (such as JimFS), though here I'm not sure how to convince the test to make use of the custom filesystem. AFAIK JimFS does not support the constructor needed for declaring it as the default filesystem.
What other possibilities exist from within Java to change a directory's permissions to readonly/write-protected when running inside a Docker container, or to test successfully for such a directory?
I assume this is due to the permissions and policies of the JVM; you cannot do anything from your code if the OS has blocked some permissions for your JVM.
You can try to edit the java.policy file and set appropriate file permissions.
For example, these could be specific files to which write privileges are granted:
grant {
permission java.io.FilePermission "/dev/pts/*", "read,write,delete";
};
More examples in docs: https://docs.oracle.com/javase/8/docs/technotes/guides/security/spec/security-spec.doc3.html.

How to monitor a folder/directory in Spring?

I want to write a Spring Boot application which will monitor a directory in Windows, and when I change a subfolder, add a new one, or delete an existing one, I want to get information about that.
How can I do that?
I have read this one:
http://docs.spring.io/spring-integration/reference/html/files.html
and every result under 'spring file watcher' on Google,
but I can't find a solution...
Do you have a good article or example with something like this?
I want it to look like this:
@SpringBootApplication
@EnableIntegration
public class SpringApp{
public static void main(String[] args) {
SpringApplication.run(SpringApp.class, args);
}
@Bean
public WatchService watcherService() {
...//define WatchService here
}
}
Regards
spring-boot-devtools has FileSystemWatcher
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
</dependency>
FileWatcherConfig
@Configuration
public class FileWatcherConfig {
@Bean
public FileSystemWatcher fileSystemWatcher() {
FileSystemWatcher fileSystemWatcher = new FileSystemWatcher(true, Duration.ofMillis(5000L), Duration.ofMillis(3000L));
fileSystemWatcher.addSourceFolder(new File("/path/to/folder"));
fileSystemWatcher.addListener(new MyFileChangeListener());
fileSystemWatcher.start();
System.out.println("started fileSystemWatcher");
return fileSystemWatcher;
}
@PreDestroy
public void onDestroy() throws Exception {
fileSystemWatcher().stop();
}
}
MyFileChangeListener
@Component
public class MyFileChangeListener implements FileChangeListener {
@Override
public void onChange(Set<ChangedFiles> changeSet) {
for(ChangedFiles cfiles : changeSet) {
for(ChangedFile cfile: cfiles.getFiles()) {
if( /* (cfile.getType().equals(Type.MODIFY)
|| cfile.getType().equals(Type.ADD)
|| cfile.getType().equals(Type.DELETE) ) && */ !isLocked(cfile.getFile().toPath())) {
System.out.println("Operation: " + cfile.getType()
+ " On file: "+ cfile.getFile().getName() + " is done");
}
}
}
}
private boolean isLocked(Path path) {
try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE); FileLock lock = ch.tryLock()) {
return lock == null;
} catch (IOException e) {
return true;
}
}
}
From Java 7 there is WatchService - it will be the best solution.
Spring configuration could be like the following:
@Slf4j
@Configuration
public class MonitoringConfig {
@Value("${monitoring-folder}")
private String folderPath;
@Bean
public WatchService watchService() {
log.debug("MONITORING_FOLDER: {}", folderPath);
WatchService watchService = null;
try {
watchService = FileSystems.getDefault().newWatchService();
Path path = Paths.get(folderPath);
if (!Files.isDirectory(path)) {
throw new RuntimeException("incorrect monitoring folder: " + path);
}
path.register(
watchService,
StandardWatchEventKinds.ENTRY_DELETE,
StandardWatchEventKinds.ENTRY_MODIFY,
StandardWatchEventKinds.ENTRY_CREATE
);
} catch (IOException e) {
log.error("exception for watch service creation:", e);
}
return watchService;
}
}
And Bean for launching monitoring itself:
@Slf4j
@Service
@AllArgsConstructor
public class MonitoringServiceImpl {
private final WatchService watchService;
@Async
@PostConstruct
public void launchMonitoring() {
log.info("START_MONITORING");
try {
WatchKey key;
while ((key = watchService.take()) != null) {
for (WatchEvent<?> event : key.pollEvents()) {
log.debug("Event kind: {}; File affected: {}", event.kind(), event.context());
}
key.reset();
}
} catch (InterruptedException e) {
log.warn("interrupted exception for monitoring service");
}
}
@PreDestroy
public void stopMonitoring() {
log.info("STOP_MONITORING");
if (watchService != null) {
try {
watchService.close();
} catch (IOException e) {
log.error("exception while closing the monitoring service");
}
}
}
}
Also, you have to set @EnableAsync for your application class (or its configuration).
And a snippet from application.yml:
monitoring-folder: C:\Users\nazar_art
Tested with Spring Boot 2.3.1.
Also used configuration for Async pool:
@Slf4j
@EnableAsync
@Configuration
@AllArgsConstructor
@EnableConfigurationProperties(AsyncProperties.class)
public class AsyncConfiguration implements AsyncConfigurer {
private final AsyncProperties properties;
@Override
@Bean(name = "taskExecutor")
public Executor getAsyncExecutor() {
log.debug("Creating Async Task Executor");
ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setCorePoolSize(properties.getCorePoolSize());
taskExecutor.setMaxPoolSize(properties.getMaxPoolSize());
taskExecutor.setQueueCapacity(properties.getQueueCapacity());
taskExecutor.setThreadNamePrefix(properties.getThreadName());
taskExecutor.initialize();
return taskExecutor;
}
@Bean
public TaskScheduler taskScheduler() {
return new ConcurrentTaskScheduler();
}
@Override
public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
return new CustomAsyncExceptionHandler();
}
}
Where the custom async exception handler is:
@Slf4j
public class CustomAsyncExceptionHandler implements AsyncUncaughtExceptionHandler {
@Override
public void handleUncaughtException(Throwable throwable, Method method, Object... objects) {
log.error("Exception for Async execution: ", throwable);
log.error("Method name - {}", method.getName());
for (Object param : objects) {
log.error("Parameter value - {}", param);
}
}
}
Configuration at properties file:
async-monitoring:
core-pool-size: 10
max-pool-size: 20
queue-capacity: 1024
thread-name: 'async-ex-'
Where AsyncProperties:
@Getter
@Setter
@ConfigurationProperties("async-monitoring")
public class AsyncProperties {
@NonNull
private Integer corePoolSize;
@NonNull
private Integer maxPoolSize;
@NonNull
private Integer queueCapacity;
@NonNull
private String threadName;
}
For using asynchronous execution I am processing an event like the following:
validatorService.processRecord(recordANPR, zipFullPath);
Where validator service has a look like:
@Async
public void processRecord(EvidentialRecordANPR record, String fullFileName) {
The main idea is that you set up the async configuration, call it from the MonitoringService, and put the @Async annotation on the method of the other service that you call (it should be a method of another bean, since the call goes through a proxy).
You can use pure Java for this, no need for Spring: https://docs.oracle.com/javase/tutorial/essential/io/notification.html
See the Spring Integration Samples repo; there's a file sample under 'basic'.
There's a more recent and more sophisticated sample under applications/file-split-ftp - it uses Spring Boot and Java configuration vs. the XML used in the older sample.
Found a workaround: you can annotate your task with @Scheduled(fixedDelay = Long.MAX_VALUE).
You can check the code:
@Scheduled(fixedDelay = Long.MAX_VALUE)
public void watchTask() {
this.loadOnStartup();
try {
WatchService watcher = FileSystems.getDefault().newWatchService();
Path file = Paths.get(propertyFile);
Path dir = Paths.get(file.getParent().toUri());
dir.register(watcher, ENTRY_MODIFY);
logger.info("Watch Service registered for dir: " + dir.getFileName());
while (true) {
WatchKey key;
try {
key = watcher.take();
} catch (InterruptedException ex) {
return;
}
for (WatchEvent<?> event : key.pollEvents()) {
WatchEvent.Kind<?> kind = event.kind();
#SuppressWarnings("unchecked")
WatchEvent<Path> ev = (WatchEvent<Path>) event;
Path fileName = ev.context();
logger.debug(kind.name() + ": " + fileName);
if (kind == ENTRY_MODIFY &&
fileName.toString().equals(file.getFileName().toString())) {
//publish event here
}
}
boolean valid = key.reset();
if (!valid) {
break;
}
}
} catch (Exception ex) {
logger.error(ex.getMessage(), ex);
}
}
}
Without giving all the details, here are a few pointers which might help you out.
You can take the directory WatchService code from Sławomir Czaja's answer:
You can use pure java for this no need for spring https://docs.oracle.com/javase/tutorial/essential/io/notification.html
and wrap that code into a runnable task. This task can notify your clients of a directory change using the SimpMessagingTemplate, as described here:
Websocket STOMP handle send
Then you can create a scheduler as described here:
Scheduling, which handles the start and recurrence of your task.
Don't forget to configure scheduling and WebSocket support in your MVC config, as well as STOMP support on the client side (further reading here: STOMP over WebSocket).
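Putting those pointers together, a minimal sketch could look like the following (the watched path, the "/topic/folder-events" STOMP destination, and the class name are my own assumptions, not part of the answers above):
import java.io.IOException;
import java.nio.file.*;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
@Component
public class DirectoryWatchTask {
    private final SimpMessagingTemplate messagingTemplate;
    public DirectoryWatchTask(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }
    // Runs on the scheduler and blocks while waiting for file system events.
    @Scheduled(fixedDelay = Long.MAX_VALUE)
    public void watchDirectory() {
        try (WatchService watchService = FileSystems.getDefault().newWatchService()) {
            Path dir = Paths.get("C:/Users/nazar_art"); // assumed folder to monitor
            dir.register(watchService,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY,
                    StandardWatchEventKinds.ENTRY_DELETE);
            WatchKey key;
            while ((key = watchService.take()) != null) {
                for (WatchEvent<?> event : key.pollEvents()) {
                    // Push each change to subscribed STOMP clients.
                    messagingTemplate.convertAndSend("/topic/folder-events",
                            event.kind().name() + ": " + event.context());
                }
                key.reset();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // stop watching when the scheduler shuts down
        } catch (IOException e) {
            throw new IllegalStateException("Could not watch directory", e);
        }
    }
}
This assumes @EnableScheduling and the WebSocket/STOMP configuration mentioned above are already in place.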
Apache commons-io is another good alternative to watch changes to files/directories.
You can see the overview of pros and cons of using it in this answer:
https://stackoverflow.com/a/41013350/16470819
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.11.0</version>
</dependency>
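A minimal usage sketch of the commons-io monitor API, in case it helps (the polling interval and the watched path are arbitrary assumptions):
import java.io.File;
import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;
import org.apache.commons.io.monitor.FileAlterationMonitor;
import org.apache.commons.io.monitor.FileAlterationObserver;
public class CommonsIoWatcher {
    public static void main(String[] args) throws Exception {
        // Poll the folder every 5 seconds and report created/deleted sub-folders and changed files.
        FileAlterationObserver observer = new FileAlterationObserver(new File("C:/Users/nazar_art"));
        observer.addListener(new FileAlterationListenerAdaptor() {
            @Override
            public void onDirectoryCreate(File dir) { System.out.println("Created: " + dir); }
            @Override
            public void onDirectoryDelete(File dir) { System.out.println("Deleted: " + dir); }
            @Override
            public void onFileChange(File file) { System.out.println("Changed: " + file); }
        });
        FileAlterationMonitor monitor = new FileAlterationMonitor(5000, observer);
        monitor.start();
    }
}
Unlike WatchService, this approach polls the directory, which trades immediacy for simplicity and portability.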
Just in case, if somebody is looking for recursive sub-folder watcher, this link may help: How to watch a folder and subfolders for changes

How to log in to multiple SAP systems using SAP JCo

I am new to SAP JCo. I have a requirement to call multiple SAP systems using SAP JCo, but I am unable to connect to multiple SAP systems at the same time.
Here is the code:
package com.sap.test;
import java.util.Properties;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoRepository;
import com.sap.conn.jco.ext.DestinationDataProvider;
import com.sap.conn.jco.ext.Environment;
import com.sap.utils.MyDestinationDataProvider;
import com.sap.utils.SapSystem;
public class TestMultipleSAPConnection {
public static Properties properties;
public static JCoDestination dest = null;
public static JCoRepository repos = null;
public static SapSystem system = null;
String SAP_SERVER = "SAP_SERVER";
MyDestinationDataProvider myProvider = null;
public static void main(String[] args) throws JCoException {
getConnection_CRM();
getConnection_R3();
}
public static JCoDestination getConnection_R3() {
boolean connR3_flag = true;
JCoDestination dest = null;
JCoRepository repos = null;
String SAP_SERVER = "SAP_SERVER";
Properties properties = new Properties();
SapSystem system = new SapSystem();
system.setClient("100");
system.setHost("r3devsvr.myweb.com");
system.setLanguage("en");
system.setSystemNumber("00");
system.setUser("SAP-R3-USER");
system.setPassword("init1234");
properties.setProperty("jco.client.ashost", system.getHost());
properties.setProperty("jco.client.sysnr", system.getSystemNumber());
properties.setProperty("jco.client.client", system.getClient());
properties.setProperty("jco.client.user", system.getUser());
properties.setProperty("jco.client.passwd", system.getPassword());
properties.setProperty("jco.client.lang", system.getLanguage());
System.out.println("******* Connection Parameter Set *******");
MyDestinationDataProvider myProvider = new MyDestinationDataProvider();
System.out.println("******* Destination Provider Set *******");
myProvider.changePropertiesForABAP_AS(properties);
if (!Environment.isDestinationDataProviderRegistered()) {
System.out.println("Registering Destination Provider R3");
Environment.registerDestinationDataProvider((DestinationDataProvider) myProvider);
}else{
System.out.println("Destination Provider already set..R3");
connR3_flag = false;
}
try {
dest = JCoDestinationManager.getDestination((String) SAP_SERVER);
repos = dest.getRepository();
if (repos == null) {
System.out.println("Repos is null.....");
} else {
System.out.println("Repos is not null.....");
}
System.out.println("After getting repos...");
if(connR3_flag){
System.out.println("R3 Connection Successfull...");
}
} catch (Exception ex) {
System.out.println(ex);
}
return dest;
}
public static JCoDestination getConnection_CRM() {
boolean connCRM_flag = true;
JCoDestination dest = null;
JCoRepository repos = null;
String SAP_SERVER = "SAP_SERVER";
Properties properties = new Properties();
SapSystem system = new SapSystem();
system.setClient("200");
system.setHost("crmdevsvr.myweb.com");
system.setLanguage("en");
system.setSystemNumber("00");
system.setUser("SAP-CRM-USER");
system.setPassword("init1234");
properties.setProperty("jco.client.ashost", system.getHost());
properties.setProperty("jco.client.sysnr", system.getSystemNumber());
properties.setProperty("jco.client.client", system.getClient());
properties.setProperty("jco.client.user", system.getUser());
properties.setProperty("jco.client.passwd", system.getPassword());
properties.setProperty("jco.client.lang", system.getLanguage());
System.out.println("******* Connection Parameter Set *******");
MyDestinationDataProvider myProvider = new MyDestinationDataProvider();
System.out.println("******* Destination Provider Set *******");
myProvider.changePropertiesForABAP_AS(properties);
if (!Environment.isDestinationDataProviderRegistered()) {
System.out.println("Registering Destination Provider CRM");
Environment.registerDestinationDataProvider((DestinationDataProvider) myProvider);
}else{
System.out.println("Destination Provider already set..CRM");
connCRM_flag = false;
}
try {
dest = JCoDestinationManager.getDestination((String) SAP_SERVER);
repos = dest.getRepository();
if (repos == null) {
System.out.println("Repos is null.....");
} else {
System.out.println("Repos is not null.....");
}
System.out.println("After getting repos...");
if(connCRM_flag){
System.out.println("CRM Connection Successfull...");
}
} catch (Exception ex) {
System.out.println(ex);
}
return dest;
}
}
The JCo JavaDoc documentation says:
Only one implementation of DestinationDataProvider can be registered.
For registering another implementation the infrastructure has first to
unregister the implementation that is currently registered. It is not
recommended to permanently exchange DestinationDataProvider
registrations. The one registered instance should globally manage all
destination configurations for the whole infrastructure environment.
So you have to register ONE instance of the DestinationDataProvider, this is an instance of your class MyDestinationDataProvider.
This implementation needs to manage and store ALL the different logon configurations for all your SAP systems, accessible via a distinct destination name string. A simple HashMap<String, Properties> would be a sufficient storage form for this. Add the two Properties instances with distinct destination name strings to the HashMap and return the Properties instance associated to the passed destinationName from method MyDestinationDataProvider.getDestinationProperties(String destinationName).
So you can access the desired JCoDestination instance targeting any SAP system via its specific destination name. In your example you used "SAP_SERVER" for both destination configurations which won't work. For example, use the SAP System ID (SID) as the destination name (key) instead.
If using the names from your example, then JCoDestinationManager.getDestination("CRM") would return the JCoDestination instance for system "CRM" and JCoDestinationManager.getDestination("R3") the JCoDestination for system "R3". And both can be used independently and simultaneously. That's it.
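To illustrate that idea, here is a minimal sketch of a single registered provider holding both configurations (the class name and helper method are my own; the connection values are the ones from the question):
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.ext.DestinationDataEventListener;
import com.sap.conn.jco.ext.DestinationDataProvider;
import com.sap.conn.jco.ext.Environment;
public class MultiSystemDestinationDataProvider implements DestinationDataProvider {
    // One property set per destination name (e.g. the SID).
    private final Map<String, Properties> configs = new HashMap<>();
    public void addDestination(String destinationName, Properties props) {
        configs.put(destinationName, props);
    }
    @Override
    public Properties getDestinationProperties(String destinationName) {
        return configs.get(destinationName);
    }
    @Override
    public void setDestinationDataEventListener(DestinationDataEventListener listener) {
        // not needed for this static example
    }
    @Override
    public boolean supportsEvents() {
        return false;
    }
    private static Properties buildProps(String host, String sysnr, String client, String user, String passwd) {
        Properties p = new Properties();
        p.setProperty("jco.client.ashost", host);
        p.setProperty("jco.client.sysnr", sysnr);
        p.setProperty("jco.client.client", client);
        p.setProperty("jco.client.user", user);
        p.setProperty("jco.client.passwd", passwd);
        p.setProperty("jco.client.lang", "en");
        return p;
    }
    public static void main(String[] args) throws JCoException {
        MultiSystemDestinationDataProvider provider = new MultiSystemDestinationDataProvider();
        provider.addDestination("R3", buildProps("r3devsvr.myweb.com", "00", "100", "SAP-R3-USER", "init1234"));
        provider.addDestination("CRM", buildProps("crmdevsvr.myweb.com", "00", "200", "SAP-CRM-USER", "init1234"));
        Environment.registerDestinationDataProvider(provider); // register exactly once
        // Both destinations can now be used independently and simultaneously.
        JCoDestination r3 = JCoDestinationManager.getDestination("R3");
        JCoDestination crm = JCoDestinationManager.getDestination("CRM");
        r3.ping();  // verifies connectivity to the R3 system
        crm.ping(); // verifies connectivity to the CRM system
    }
}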
Sharing with you a solution I recently came up with for this exact problem. I discovered two different ways to implement CustomDestinationDataProvider so that I could use multiple destinations.
Something that helped out with both of my solutions was changing the method in CustomDestinationDataProvider that instantiates the MyDestinationDataProvider inner class so that, instead of returning an ArrayList, it returns a JCoDestination. I changed the name of this method from executeSAPCall to getDestination.
The first way that I discovered that allowed me to use multiple destinations, successfully changing out destinations, was to introduce a class variable for MyDestinationDataProvider so that I could keep my instantiated version. Please note that for this solution, the CustomDestinationDataProvider class is still embedded within my java application code.
I found that this solution only worked for one application. I was not able to use this mechanism in multiple applications on the same tomcat server, but at least I was finally able to successfully switch destinations. Here is the code for CustomDestinationDataProvider.java for this first solution:
public class CustomDestinationDataProvider {
private MyDestinationDataProvider gProvider; // class version of MyDestinationDataProvider
public class MyDestinationDataProvider implements DestinationDataProvider {
private DestinationDataEventListener eL;
private HashMap<String, Properties> secureDBStorage = new HashMap<String, Properties>();
public Properties getDestinationProperties(String destinationName) {
try {
Properties p = secureDBStorage.get(destinationName);
if(p!=null) {
if(p.isEmpty())
throw new DataProviderException(DataProviderException.Reason.INVALID_CONFIGURATION, "destination configuration is incorrect", null);
return p;
}
return null;
} catch(RuntimeException re) {
System.out.println("getDestinationProperties: Exception detected!!! message = " + re.getMessage());
throw new DataProviderException(DataProviderException.Reason.INTERNAL_ERROR, re);
}
}
public void setDestinationDataEventListener(DestinationDataEventListener eventListener) {
this.eL = eventListener;
}
public boolean supportsEvents() {
return true;
}
public void changeProperties(String destName, Properties properties) {
synchronized(secureDBStorage) {
if(properties==null) {
if(secureDBStorage.remove(destName)!=null) {
eL.deleted(destName);
}
} else {
secureDBStorage.put(destName, properties);
eL.updated(destName); // create or updated
}
}
}
}
public JCoDestination getDestination(String destName, Properties connectProperties) {
MyDestinationDataProvider myProvider = new MyDestinationDataProvider();
boolean destinationDataProviderRegistered = com.sap.conn.jco.ext.Environment.isDestinationDataProviderRegistered();
if (!destinationDataProviderRegistered) {
try {
com.sap.conn.jco.ext.Environment.registerDestinationDataProvider(myProvider);
gProvider = myProvider; // save our destination data provider in the class var
} catch(IllegalStateException providerAlreadyRegisteredException) {
throw new Error(providerAlreadyRegisteredException);
}
} else {
myProvider = gProvider; // get the destination data provider from the class var.
}
myProvider.changeProperties(destName, connectProperties);
JCoDestination dest = null;
try {
dest = JCoDestinationManager.getDestination(destName);
} catch(JCoException e) {
e.printStackTrace();
} catch (Exception e) {
// TODO Auto-generated catch block
}
return dest;
}
}
This is the code in my servlet class that I use to instantiate and call CustomDestinationDataProvider within my application code:
CustomDestinationDataProvider cddp = new CustomDestinationDataProvider();
SAPDAO sapDAO = new SAPDAO();
Properties p1 = getProperties("SAPSystem01");
Properties p2 = getProperties("SAPSystem02");
try {
JCoDestination dest = cddp.getDestination("SAP_R3_USERID_01", p1); // establish the first destination
sapDAO.searchEmployees(dest, searchCriteria); // call the first BAPI
dest = cddp.getDestination("SAP_R3_USERID_02", p2); // establish the second destination
sapDAO.searchAvailability(dest); // call the second BAPI
} catch (Exception e) {
e.printStackTrace();
}
Again, this solution only works within one application. If you implement this code directly into more than one application, the first app that calls this code gets the resource and the other one will error out.
The second solution that I came up with allows multiple java applications to use the CustomDestinationDataProvider class at the same time. I broke the CustomDestinationDataProvider class out of my application code and created a separate java spring application for it (not a web application) for the purpose of creating a jar. I then transformed the MyDestinationDataProvider inner class into a singleton. Here's the code for the singleton version of CustomDestinationDataProvider:
public class CustomDestinationDataProvider {
public static class MyDestinationDataProvider implements DestinationDataProvider {
////////////////////////////////////////////////////////////////////
// The following lines convert MyDestinationDataProvider into a singleton. Notice
// that the MyDestinationDataProvider class has now been declared as static.
private static MyDestinationDataProvider myDestinationDataProvider = null;
private MyDestinationDataProvider() {
}
public static MyDestinationDataProvider getInstance() {
if (myDestinationDataProvider == null) {
myDestinationDataProvider = new MyDestinationDataProvider();
}
return myDestinationDataProvider;
}
////////////////////////////////////////////////////////////////////
private DestinationDataEventListener eL;
private HashMap<String, Properties> secureDBStorage = new HashMap<String, Properties>();
public Properties getDestinationProperties(String destinationName) {
try {
Properties p = secureDBStorage.get(destinationName);
if(p!=null) {
if(p.isEmpty())
throw new DataProviderException(DataProviderException.Reason.INVALID_CONFIGURATION, "destination configuration is incorrect", null);
return p;
}
return null;
} catch(RuntimeException re) {
throw new DataProviderException(DataProviderException.Reason.INTERNAL_ERROR, re);
}
}
public void setDestinationDataEventListener(DestinationDataEventListener eventListener) {
this.eL = eventListener;
}
public boolean supportsEvents() {
return true;
}
public void changeProperties(String destName, Properties properties) {
synchronized(secureDBStorage) {
if(properties==null) {
if(secureDBStorage.remove(destName)!=null) {
eL.deleted(destName);
}
} else {
secureDBStorage.put(destName, properties);
eL.updated(destName); // create or updated
}
}
}
}
public JCoDestination getDestination(String destName, Properties connectProperties) throws Exception {
MyDestinationDataProvider myProvider = MyDestinationDataProvider.getInstance();
boolean destinationDataProviderRegistered = com.sap.conn.jco.ext.Environment.isDestinationDataProviderRegistered();
if (!destinationDataProviderRegistered) {
try {
com.sap.conn.jco.ext.Environment.registerDestinationDataProvider(myProvider);
} catch(IllegalStateException providerAlreadyRegisteredException) {
throw new Error(providerAlreadyRegisteredException);
}
}
myProvider.changeProperties(destName, connectProperties);
JCoDestination dest = null;
try {
dest = JCoDestinationManager.getDestination(destName);
} catch(JCoException ex) {
ex.printStackTrace();
throw ex;
} catch (Exception ex) {
ex.printStackTrace();
throw ex;
}
return dest;
}
}
After putting this code into the jar file application and creating the jar file (I call it JCOConnector.jar), I put the jar file on the shared library classpath of my tomcat server and restarted the tomcat server. In my case, this was /opt/tomcat/shared/lib. Check your /opt/tomcat/conf/catalina.properties file for the shared.loader line for the location of your shared library classpath. Mine looks like this:
shared.loader=\
${catalina.home}/shared/lib/*.jar,${catalina.home}/shared/lib
I also put a copy of this jar file in the "C:\Users\userid\Documents\jars" folder on my workstation so that the test application code could see the code in the jar and compile. I then referenced this copy of the jar file in my pom.xml file in both of my test applications:
<dependency>
<groupId>com.mycompany</groupId>
<artifactId>jcoconnector</artifactId>
<version>1.0</version>
<scope>system</scope>
<systemPath>C:\Users\userid\Documents\jars\JCOConnector.jar</systemPath>
</dependency>
After adding this to the pom.xml file, I right clicked on each project, selected Maven -> Update Project..., and I then right clicked again on each project and selected 'Refresh'. Something very important that I learned was to not add a copy of JCOConnector.jar directly to either of my test projects. The reason for this is because I want the code from the jar file in /opt/tomcat/shared/lib/JCOConnector.jar to be used. I then built and deployed each of my test apps to the tomcat server.
The code that calls my JCOConnector.jar shared library in my first test application looks like this:
CustomDestinationDataProvider cddp = new CustomDestinationDataProvider();
JCoDestination dest = null;
SAPDAO sapDAO = new SAPDAO();
Properties p1 = getProperties("SAPSystem01");
try {
dest = cddp.getDestination("SAP_R3_USERID_01", p1);
sapDAO.searchEmployees(dest);
} catch (Exception ex) {
ex.printStackTrace();
}
The code in my second test application that calls my JCOConnector.jar shared library looks like this:
CustomDestinationDataProvider cddp = new CustomDestinationDataProvider();
JCoDestination dest = null;
SAPDAO sapDAO = new SAPDAO();
Properties p2 = getProperties("SAPSystem02");
try {
dest = cddp.getDestination("SAP_R3_USERID_02", p2);
sapDAO.searchAvailability(dest);
} catch (Exception ex) {
ex.printStackTrace();
}
I know that I've left out a lot of the steps involved in first getting the SAP JCo 3 library installed on your workstation and server. I do hope that this helps out at least one other person in getting over the hill of trying to get multiple Spring MVC Java applications talking to SAP on the same server.

Not able to load application.conf from cron job in play framework 2.4

I have created a cron job that starts during application restart, but when I try to create a DB connection I am getting a NullPointerException. I am able to create and use the DB from another module using the same configuration.
Below is my application.conf:
db.abc.driver=com.mysql.jdbc.Driver
db.abc.url="jdbc:mysql://localhost:3306/db_name?useSSL=false"
db.abc.username=root
db.abc.password=""
db.abc.autocommit=false
db.abc.isolation=READ_COMMITTED
And the code that tries to access the DB is:
public class SchduleJob extends AbstractModule{
@Override
protected void configure() {
bind(JobOne.class)
.to(JobOneImpl.class)
.asEagerSingleton();
} }
@ImplementedBy(JobOneImpl.class)
public interface JobOne {}
@Singleton
public class JobOneImpl implements JobOne {
final ActorSystem actorSystem = ActorSystem.create("name");
final ActorRef alertActor = actorSystem.actorOf(AlertActor.props);
public JobOneImpl() {
scheduleJobs();
}
private Cancellable scheduleJobs() {
return actorSystem.scheduler().schedule(
Duration.create(0, TimeUnit.MILLISECONDS), //Initial delay 0 milliseconds
Duration.create(6, TimeUnit.MINUTES), //Frequency 6 minutes
alertActor,
"alert",
actorSystem.dispatcher(),
null
);
}
}
public class AlertActor extends UntypedActor{
public static Props props = Props.create(AlertActor.class);
final ActorSystem actorSystem = ActorSystem.create("name");
final ActorRef messageActor = actorSystem.actorOf(MessageActor.props());
@Override
public void onReceive(Object message) throws Exception {
if(message != null && message instanceof String) {
RequestDAO requestDAO = new RequestDAO();
try {
List<DBRow> rows = requestDAO.getAllRow();
} catch(Exception exception) {
exception.printStackTrace();
}
}
}
}
public class RequestDAO {
public List<DBRow> getAllRow() throws Exception {
List<DBRow> rows = new ArrayList<DBRow>();
Connection connection = null;
try {
connection = DB.getDataSource("abc").getConnection();
connection.setAutoCommit(false);
} catch(Exception exception) {
exception.printStackTrace();
if(connection != null) {
connection.rollback();
} else {
System.out.println("in else***********");
}
return null;
} finally {
if(connection != null)
connection.close();
}
return rows;
}
When I call the getAllRow() method of the RequestDAO class, it throws:
java.lang.NullPointerException
at play.api.Application$$anonfun$instanceCache$1.apply(Application.scala:235)
at play.api.Application$$anonfun$instanceCache$1.apply(Application.scala:235)
at play.utils.InlineCache.fresh(InlineCache.scala:69)
at play.utils.InlineCache.apply(InlineCache.scala:55)
at play.api.db.DB$.db(DB.scala:22)
at play.api.db.DB$.getDataSource(DB.scala:41)
at play.api.db.DB.getDataSource(DB.scala)
at play.db.DB.getDataSource(DB.java:33)
But the same code works without the cron job. What should I do to remove this error?
Play uses the Typesafe config library for configuration.
I suspect your current working directory from the cron script isn't set, so it's probably not finding your application.conf (application.properties) file.
However, Config is nice in that it allows you to specify where to look for the file, either by its base name (to choose among .conf | .properties | .json extensions) or the filename including the extension on the java command line:
To specify the base name, use -Dconfig.resource=/path/to/application
To specify the full name, use -Dconfig.file=/path/to/application.properties
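For example, a cron entry along these lines would point the JVM at the config file explicitly (the paths and the start script name are assumptions; adjust them to your deployment):
# cron runs with its own working directory, so pass an absolute path to the config
0 3 * * * /opt/myapp/bin/myapp -Dconfig.file=/opt/myapp/conf/application.conf
The same -Dconfig.file (or -Dconfig.resource) option can be appended to whatever java command or start script launches the application.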
