How to monitor a folder/directory in Spring? - java

I want to write a Spring Boot application that monitors a directory on Windows, so that when I change a sub-folder, add a new one, or delete an existing one, I get information about it.
How can I do that?
I have read this one:
http://docs.spring.io/spring-integration/reference/html/files.html
and every result under 'spring file watcher' on Google,
but I can't find a solution...
Do you have a good article or example for something like this?
I want it to look like this:
@SpringBootApplication
@EnableIntegration
public class SpringApp {

    public static void main(String[] args) {
        SpringApplication.run(SpringApp.class, args);
    }

    @Bean
    public WatchService watcherService() {
        ... // define WatchService here
    }
}
Regards

spring-boot-devtools has a FileSystemWatcher
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
</dependency>
FileWatcherConfig
@Configuration
public class FileWatcherConfig {

    @Bean
    public FileSystemWatcher fileSystemWatcher() {
        FileSystemWatcher fileSystemWatcher = new FileSystemWatcher(true, Duration.ofMillis(5000L), Duration.ofMillis(3000L));
        fileSystemWatcher.addSourceFolder(new File("/path/to/folder"));
        fileSystemWatcher.addListener(new MyFileChangeListener());
        fileSystemWatcher.start();
        System.out.println("started fileSystemWatcher");
        return fileSystemWatcher;
    }

    @PreDestroy
    public void onDestroy() throws Exception {
        fileSystemWatcher().stop();
    }
}
MyFileChangeListener
@Component
public class MyFileChangeListener implements FileChangeListener {

    @Override
    public void onChange(Set<ChangedFiles> changeSet) {
        for (ChangedFiles cfiles : changeSet) {
            for (ChangedFile cfile : cfiles.getFiles()) {
                if ( /* (cfile.getType().equals(Type.MODIFY)
                        || cfile.getType().equals(Type.ADD)
                        || cfile.getType().equals(Type.DELETE)) && */ !isLocked(cfile.getFile().toPath())) {
                    System.out.println("Operation: " + cfile.getType()
                            + " On file: " + cfile.getFile().getName() + " is done");
                }
            }
        }
    }

    private boolean isLocked(Path path) {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE); FileLock lock = ch.tryLock()) {
            return lock == null;
        } catch (IOException e) {
            return true;
        }
    }
}

Since Java 7 there is WatchService - it is arguably the best solution here.
The Spring configuration could look like the following:
@Slf4j
@Configuration
public class MonitoringConfig {

    @Value("${monitoring-folder}")
    private String folderPath;

    @Bean
    public WatchService watchService() {
        log.debug("MONITORING_FOLDER: {}", folderPath);
        WatchService watchService = null;
        try {
            watchService = FileSystems.getDefault().newWatchService();
            Path path = Paths.get(folderPath);
            if (!Files.isDirectory(path)) {
                throw new RuntimeException("incorrect monitoring folder: " + path);
            }
            path.register(
                    watchService,
                    StandardWatchEventKinds.ENTRY_DELETE,
                    StandardWatchEventKinds.ENTRY_MODIFY,
                    StandardWatchEventKinds.ENTRY_CREATE
            );
        } catch (IOException e) {
            log.error("exception for watch service creation:", e);
        }
        return watchService;
    }
}
And the bean for launching the monitoring itself:
@Slf4j
@Service
@AllArgsConstructor
public class MonitoringServiceImpl {

    private final WatchService watchService;

    @Async
    @PostConstruct
    public void launchMonitoring() {
        log.info("START_MONITORING");
        try {
            WatchKey key;
            while ((key = watchService.take()) != null) {
                for (WatchEvent<?> event : key.pollEvents()) {
                    log.debug("Event kind: {}; File affected: {}", event.kind(), event.context());
                }
                key.reset();
            }
        } catch (InterruptedException e) {
            log.warn("interrupted exception for monitoring service");
        }
    }

    @PreDestroy
    public void stopMonitoring() {
        log.info("STOP_MONITORING");
        if (watchService != null) {
            try {
                watchService.close();
            } catch (IOException e) {
                log.error("exception while closing the monitoring service");
            }
        }
    }
}
Also, you have to set @EnableAsync on your application class (or a configuration class).
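For reference, a minimal sketch of such an application class (the class name here is illustrative):

@SpringBootApplication
@EnableAsync
public class MonitoringApplication {

    public static void main(String[] args) {
        SpringApplication.run(MonitoringApplication.class, args);
    }
}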
And a snippet from application.yml:
monitoring-folder: C:\Users\nazar_art
Tested with Spring Boot 2.3.1.
I also used this configuration for the async pool:
@Slf4j
@EnableAsync
@Configuration
@AllArgsConstructor
@EnableConfigurationProperties(AsyncProperties.class)
public class AsyncConfiguration implements AsyncConfigurer {

    private final AsyncProperties properties;

    @Override
    @Bean(name = "taskExecutor")
    public Executor getAsyncExecutor() {
        log.debug("Creating Async Task Executor");
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setCorePoolSize(properties.getCorePoolSize());
        taskExecutor.setMaxPoolSize(properties.getMaxPoolSize());
        taskExecutor.setQueueCapacity(properties.getQueueCapacity());
        taskExecutor.setThreadNamePrefix(properties.getThreadName());
        taskExecutor.initialize();
        return taskExecutor;
    }

    @Bean
    public TaskScheduler taskScheduler() {
        return new ConcurrentTaskScheduler();
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new CustomAsyncExceptionHandler();
    }
}
Where the custom async exception handler is:
@Slf4j
public class CustomAsyncExceptionHandler implements AsyncUncaughtExceptionHandler {

    @Override
    public void handleUncaughtException(Throwable throwable, Method method, Object... objects) {
        log.error("Exception for Async execution: ", throwable);
        log.error("Method name - {}", method.getName());
        for (Object param : objects) {
            log.error("Parameter value - {}", param);
        }
    }
}
Configuration in the properties file:
async-monitoring:
  core-pool-size: 10
  max-pool-size: 20
  queue-capacity: 1024
  thread-name: 'async-ex-'
Where AsyncProperties:
@Getter
@Setter
@ConfigurationProperties("async-monitoring")
public class AsyncProperties {

    @NonNull
    private Integer corePoolSize;
    @NonNull
    private Integer maxPoolSize;
    @NonNull
    private Integer queueCapacity;
    @NonNull
    private String threadName;
}
To use asynchronous execution I process an event like the following:
validatorService.processRecord(recordANPR, zipFullPath);
where the validator service looks like:
@Async
public void processRecord(EvidentialRecordANPR record, String fullFileName) {
The main idea is: you set up the async configuration, call the validator from the MonitoringService, and put the @Async annotation on a method of another service that you call (it must be a method of another bean - the call has to go through a proxy).
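For illustration, a minimal sketch of that split across two beans (names are illustrative, and the payload is simplified to a String instead of EvidentialRecordANPR):

@Service
public class ValidatorService {

    @Async
    public void processRecord(String record, String fullFileName) {
        // long-running work runs on the "taskExecutor" pool configured above
    }
}

@Service
@AllArgsConstructor
public class MonitoringServiceImpl {

    private final WatchService watchService;
    private final ValidatorService validatorService;

    public void launchMonitoring() throws InterruptedException {
        WatchKey key;
        while ((key = watchService.take()) != null) {
            for (WatchEvent<?> event : key.pollEvents()) {
                // the call crosses a bean boundary, so Spring's async proxy kicks in
                // and this returns immediately
                validatorService.processRecord(event.context().toString(), "/path/to/zip");
            }
            key.reset();
        }
    }
}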

You can use pure Java for this - no need for Spring: https://docs.oracle.com/javase/tutorial/essential/io/notification.html
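For illustration, a minimal self-contained sketch of the plain WatchService approach from that tutorial (the folder path is illustrative):

import java.nio.file.*;

public class DirectoryWatcher {

    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("C:/tmp/watched"); // folder to monitor
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY,
                StandardWatchEventKinds.ENTRY_DELETE);
        while (true) {
            WatchKey key = watcher.take(); // blocks until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                System.out.println(event.kind() + ": " + event.context());
            }
            if (!key.reset()) { // key no longer valid, e.g. directory deleted
                break;
            }
        }
    }
}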

See the Spring Integration Samples repo - there's a file sample under 'basic'.
There's a more recent and more sophisticated sample under applications, file-split-ftp - it uses Spring Boot and Java configuration vs. the XML used in the older sample.

Found a workaround:
you can annotate your task with @Scheduled(fixedDelay = Long.MAX_VALUE)
You could check this code:
@Scheduled(fixedDelay = Long.MAX_VALUE)
public void watchTask() {
    this.loadOnStartup();
    try {
        WatchService watcher = FileSystems.getDefault().newWatchService();
        Path file = Paths.get(propertyFile);
        Path dir = Paths.get(file.getParent().toUri());
        dir.register(watcher, ENTRY_MODIFY);
        logger.info("Watch Service registered for dir: " + dir.getFileName());
        while (true) {
            WatchKey key;
            try {
                key = watcher.take();
            } catch (InterruptedException ex) {
                return;
            }
            for (WatchEvent<?> event : key.pollEvents()) {
                WatchEvent.Kind<?> kind = event.kind();
                @SuppressWarnings("unchecked")
                WatchEvent<Path> ev = (WatchEvent<Path>) event;
                Path fileName = ev.context();
                logger.debug(kind.name() + ": " + fileName);
                if (kind == ENTRY_MODIFY &&
                        fileName.toString().equals(file.getFileName().toString())) {
                    // publish event here
                }
            }
            boolean valid = key.reset();
            if (!valid) {
                break;
            }
        }
    } catch (Exception ex) {
        logger.error(ex.getMessage(), ex);
    }
}

Without going into detail, here are a few pointers which might help you out.
You can take the directory WatchService code from Sławomir Czaja's answer:

    You can use pure Java for this, no need for Spring: https://docs.oracle.com/javase/tutorial/essential/io/notification.html

and wrap that code into a runnable task. This task can notify your clients of directory changes using the SimpMessagingTemplate, as described here: Websocket STOMP handle send
Then you can create a scheduler like described here: Scheduling, which handles the start and recurrence of your task.
Don't forget to configure scheduling and websocket support in your mvc-config, as well as STOMP support on the client side (further reading here: STOMP over Websocket).
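A rough sketch of that wiring, assuming a hypothetical /topic/folder-events STOMP destination and a WatchService already registered on the monitored folder (all names are illustrative):

@Component
public class DirectoryChangeNotifier {

    private final SimpMessagingTemplate messagingTemplate;
    private final WatchService watchService; // registered on the monitored folder elsewhere

    public DirectoryChangeNotifier(SimpMessagingTemplate messagingTemplate, WatchService watchService) {
        this.messagingTemplate = messagingTemplate;
        this.watchService = watchService;
    }

    @Scheduled(fixedDelay = 1000)
    public void pollAndNotify() {
        WatchKey key = watchService.poll(); // non-blocking, fits a scheduled task
        if (key == null) {
            return;
        }
        for (WatchEvent<?> event : key.pollEvents()) {
            // push each change to subscribed STOMP clients
            messagingTemplate.convertAndSend("/topic/folder-events",
                    event.kind().name() + ": " + event.context());
        }
        key.reset();
    }
}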

Apache commons-io is another good alternative to watch changes to files/directories.
You can see an overview of the pros and cons of using it in this answer:
https://stackoverflow.com/a/41013350/16470819
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.11.0</version>
</dependency>
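A minimal sketch using commons-io's monitor package (folder path and polling interval are illustrative):

import java.io.File;
import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;
import org.apache.commons.io.monitor.FileAlterationMonitor;
import org.apache.commons.io.monitor.FileAlterationObserver;

public class CommonsIoWatcher {

    public static void main(String[] args) throws Exception {
        FileAlterationObserver observer = new FileAlterationObserver(new File("/path/to/folder"));
        observer.addListener(new FileAlterationListenerAdaptor() {
            @Override
            public void onDirectoryCreate(File dir) {
                System.out.println("created: " + dir);
            }

            @Override
            public void onDirectoryDelete(File dir) {
                System.out.println("deleted: " + dir);
            }

            @Override
            public void onFileChange(File file) {
                System.out.println("changed: " + file);
            }
        });
        // poll the folder every 5 seconds
        FileAlterationMonitor monitor = new FileAlterationMonitor(5000, observer);
        monitor.start();
    }
}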

Just in case somebody is looking for a recursive sub-folder watcher, this link may help: How to watch a folder and subfolders for changes
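The core idea there is to register every sub-directory with the same WatchService; a minimal sketch of that registration step (newly created sub-directories must be registered the same way when their ENTRY_CREATE events arrive):

private static void registerAll(final Path start, final WatchService watchService) throws IOException {
    // walk the tree once and register each directory with the watch service
    Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
        @Override
        public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
            dir.register(watchService,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY,
                    StandardWatchEventKinds.ENTRY_DELETE);
            return FileVisitResult.CONTINUE;
        }
    });
}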

Related

Not able to stop an IBM MQ JMS consumer in Spring Boot

I am NOT able to stop a JMS consumer dynamically using a Spring Boot REST endpoint.
The number of consumers stays as is. No exceptions either.
IBM MQ Version: 9.2.0.5
pom.xml
<dependency>
    <groupId>com.ibm.mq</groupId>
    <artifactId>mq-jms-spring-boot-starter</artifactId>
    <version>2.0.8</version>
</dependency>
JmsConfig.java
@Configuration
@EnableJms
@Log4j2
public class JmsConfig {

    @Bean
    public MQQueueConnectionFactory mqQueueConnectionFactory() {
        MQQueueConnectionFactory mqQueueConnectionFactory = new MQQueueConnectionFactory();
        mqQueueConnectionFactory.setHostName("my-ibm-mq-host.com");
        try {
            mqQueueConnectionFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            mqQueueConnectionFactory.setCCSID(1208);
            mqQueueConnectionFactory.setChannel("my-channel");
            mqQueueConnectionFactory.setPort(1234);
            mqQueueConnectionFactory.setQueueManager("my-QM");
        } catch (Exception e) {
            log.error("Exception while creating JMS connection...", e);
        }
        return mqQueueConnectionFactory;
    }
}
JmsListenerConfig.java
@Configuration
@Log4j2
public class JmsListenerConfig implements JmsListenerConfigurer {

    @Autowired
    private JmsConfig jmsConfig;

    private Map<String, String> queueMap = new HashMap<>();

    @Bean
    public DefaultJmsListenerContainerFactory mqJmsListenerContainerFactory() throws JMSException {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(jmsConfig.mqQueueConnectionFactory());
        factory.setDestinationResolver(new DynamicDestinationResolver());
        factory.setSessionTransacted(true);
        factory.setConcurrency("5");
        return factory;
    }

    @Override
    public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
        queueMap.put("my-queue-101", "101");
        log.info("queueMap: " + queueMap);
        queueMap.entrySet().forEach(e -> {
            SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
            endpoint.setDestination(e.getKey());
            endpoint.setId(e.getValue());
            try {
                log.info("Reading message....");
                endpoint.setMessageListener(message -> {
                    try {
                        log.info("Received ID: {} Destination {}", message.getJMSMessageID(), message.getJMSDestination());
                    } catch (JMSException ex) {
                        log.error("Exception while reading message - " + ex.getMessage());
                    }
                });
                registrar.setContainerFactory(mqJmsListenerContainerFactory());
            } catch (JMSException ex) {
                log.error("Exception while reading message - " + ex.getMessage());
            }
            registrar.registerEndpoint(endpoint);
        });
    }
}
JmsController.java
@RestController
@RequestMapping("/jms")
@Log4j2
public class JmsController {

    @Autowired
    ApplicationContext context;

    @RequestMapping(value = "/stop", method = RequestMethod.GET)
    public @ResponseBody String haltJmsListener() {
        JmsListenerEndpointRegistry listenerEndpointRegistry = context.getBean(JmsListenerEndpointRegistry.class);
        Set<String> containerIds = listenerEndpointRegistry.getListenerContainerIds();
        log.info("containerIds: " + containerIds);
        // stops all consumers
        listenerEndpointRegistry.stop(); // DOESN'T WORK :(
        // stops a consumer by id, used when there are multiple consumers and we want to stop them individually
        //listenerEndpointRegistry.getListenerContainer("101").stop(); // DOESN'T WORK EITHER :(
        return "Jms Listener stopped";
    }
}
Here is the behavior I noticed:
Initial # of consumers: 0 (as expected)
After server startup and queue connection, total # of consumers: 1 (as expected)
After hitting the http://localhost:8080/jms/stop endpoint, total # of consumers: 1 (NOT as expected, should go back to 0)
Am I missing any configuration?
You need to also call shutDown on the container; see my comment on this answer DefaultMessageListenerContainer's "isActive" vs "isRunning"
start()/stop() set/reset running; initialize()/shutDown() set/reset active. It depends on what your requirements are. stop() just stops the consumers from getting new messages, but the consumers still exist. shutDown() closes the consumers. Most people call stop + shutdown and then initialize + start to restart. But if you just want to stop consuming for a short time, stop/start is all you need.
You will need to iterate over the containers and cast them to call shutDown().
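For illustration, a sketch of that iteration (assuming the containers are DefaultMessageListenerContainers, as produced by the factory above):

listenerEndpointRegistry.getListenerContainers().forEach(container -> {
    container.stop(); // stops consuming; the consumers still exist
    if (container instanceof DefaultMessageListenerContainer) {
        ((DefaultMessageListenerContainer) container).shutdown(); // closes the consumers
    }
});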

Mixed up Test configuration when using @ResourceArg

TL;DR: When running tests with different @ResourceArgs, the configurations of the different tests get thrown around and override others, breaking tests meant to run with specific configurations.
So, I have a service that has tests that run in different configuration setups. The main difference at the moment is the service can either manage its own authentication or get it from an external source (Keycloak).
I firstly control this using test profiles, which seem to work fine. Unfortunately, in order to support both cases, the ResourceLifecycleManager I have set up supports setting up a Keycloak instance and returns config values that break the config for self authentication. (This is due primarily to the fact that I have not found out how to get the lifecycle manager to determine on its own what profile or config is currently running. If I could do this, I think I would be much better off than using @ResourceArg, so I would love to know if I missed something here.)
To remedy this shortcoming, I have attempted to use @ResourceArgs to convey to the lifecycle manager when to set up for external auth. However, I have noticed some really odd execution timings, and the config that ends up at my test/service isn't what I intend based on the test class's annotations; it is obvious the lifecycle manager has set up for external auth.
Additionally, it should be noted that I have my tests ordered such that the profiles and configs shouldn't be running out of order: all the tests that don't care run first, then the 'normal' tests with self auth, then the tests with the external auth profile. I can see this working appropriately when I run in IntelliJ, and I can tell from the time taken that a new service instance is started up between the test profiles.
Looking at the logs when I throw a breakpoint in places, some odd things are obvious:
When breakpointing on an erring test (before the external-configured tests run):
The start() method of my TestResourceLifecycleManager has been called twice
The first run ran with Keycloak starting, which would override/break the config
though the time I would expect Keycloak to need for startup doesn't seem to elapse - a little confused here
The second run is correct, not starting Keycloak
The profile config is what is expected, except for what the Keycloak setup would override
When breakpointing on an external-configured test (after all self-configured tests run):
The start() method has now been called 4 times; it appears that things were started in the same order as before, again for the new run of the app
There could be some weirdness in how IntelliJ/Gradle shows logs, but I am interpreting this as:
Quarkus initializing the two instances of the LifecycleManager when starting the app for some reason, with one's config overriding the other's, causing my woes.
The lifecycle manager is working as expected; it appropriately starts/doesn't start Keycloak when configured either way
At this point I can't tell if I'm doing something wrong, or if there's a bug.
Test class example for a self-auth test (same annotations for all tests in this (test) profile):

@Slf4j
@QuarkusTest
@QuarkusTestResource(TestResourceLifecycleManager.class)
@TestHTTPEndpoint(Auth.class)
class AuthTest extends RunningServerTest {
Test class example for an external auth test (same annotations for all tests in this (externalAuth) profile):

@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(value = TestResourceLifecycleManager.class, initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
ExternalAuthTestProfile extends this, providing the appropriate profile name:
public class NonDefaultTestProfile implements QuarkusTestProfile {

    private final String testProfile;
    private final Map<String, String> overrides = new HashMap<>();

    protected NonDefaultTestProfile(String testProfile) {
        this.testProfile = testProfile;
    }

    protected NonDefaultTestProfile(String testProfile, Map<String, String> configOverrides) {
        this(testProfile);
        this.overrides.putAll(configOverrides);
    }

    @Override
    public Map<String, String> getConfigOverrides() {
        return new HashMap<>(this.overrides);
    }

    @Override
    public String getConfigProfile() {
        return testProfile;
    }

    @Override
    public List<TestResourceEntry> testResources() {
        return QuarkusTestProfile.super.testResources();
    }
}
Lifecycle manager:
@Slf4j
public class TestResourceLifecycleManager implements QuarkusTestResourceLifecycleManager {

    public static final String EXTERNAL_AUTH_ARG = "externalAuth";

    private static volatile MongodExecutable MONGO_EXE = null;
    private static volatile KeycloakContainer KEYCLOAK_CONTAINER = null;

    private boolean externalAuth = false;

    public synchronized Map<String, String> startKeycloakTestServer() {
        if (!this.externalAuth) {
            log.info("No need for keycloak.");
            return Map.of();
        }
        if (KEYCLOAK_CONTAINER != null) {
            log.info("Keycloak already started.");
        } else {
            KEYCLOAK_CONTAINER = new KeycloakContainer()
//                .withEnv("hello","world")
                .withRealmImportFile("keycloak-realm.json");
            KEYCLOAK_CONTAINER.start();
            log.info(
                "Test keycloak started at endpoint: {}\tAdmin creds: {}:{}",
                KEYCLOAK_CONTAINER.getAuthServerUrl(),
                KEYCLOAK_CONTAINER.getAdminUsername(),
                KEYCLOAK_CONTAINER.getAdminPassword()
            );
        }
        String clientId;
        String clientSecret;
        String publicKey = "";
        try (
            Keycloak keycloak = KeycloakBuilder.builder()
                .serverUrl(KEYCLOAK_CONTAINER.getAuthServerUrl())
                .realm("master")
                .grantType(OAuth2Constants.PASSWORD)
                .clientId("admin-cli")
                .username(KEYCLOAK_CONTAINER.getAdminUsername())
                .password(KEYCLOAK_CONTAINER.getAdminPassword())
                .build();
        ) {
            RealmResource appsRealmResource = keycloak.realms().realm("apps");
            ClientRepresentation qmClientResource = appsRealmResource.clients().findByClientId("quartermaster").get(0);
            clientSecret = qmClientResource.getSecret();
            log.info("Got client id \"{}\" with secret: {}", "quartermaster", clientSecret);
            //get private key
            for (KeysMetadataRepresentation.KeyMetadataRepresentation curKey : appsRealmResource.keys().getKeyMetadata().getKeys()) {
                if (!SIG.equals(curKey.getUse())) {
                    continue;
                }
                if (!"RSA".equals(curKey.getType())) {
                    continue;
                }
                String publicKeyTemp = curKey.getPublicKey();
                if (publicKeyTemp == null || publicKeyTemp.isBlank()) {
                    continue;
                }
                publicKey = publicKeyTemp;
                log.info("Found a relevant key for public key use: {} / {}", curKey.getKid(), publicKey);
            }
        }
        // write public key
        // = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString() + "/security/testKeycloakPublicKey.pem");
        File publicKeyFile;
        try {
            publicKeyFile = File.createTempFile("oqmTestKeycloakPublicKey", ".pem");
            // publicKeyFile = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString().replace("/classes/java/", "/resources/") + "/security/testKeycloakPublicKey.pem");
            log.info("path of public key: {}", publicKeyFile);
            // if(publicKeyFile.createNewFile()){
            //     log.info("created new public key file");
            // } else {
            //     log.info("Public file already exists");
            // }
            try (
                FileOutputStream os = new FileOutputStream(publicKeyFile);
            ) {
                IOUtils.write(publicKey, os, UTF_8);
            } catch (IOException e) {
                log.error("Failed to write out public key of keycloak: ", e);
                throw new IllegalStateException("Failed to write out public key of keycloak.", e);
            }
        } catch (IOException e) {
            log.error("Failed to create public key file: ", e);
            throw new IllegalStateException("Failed to create public key file", e);
        }
        String keycloakUrl = KEYCLOAK_CONTAINER.getAuthServerUrl().replace("/auth", "");
        return Map.of(
            "test.keycloak.url", keycloakUrl,
            "test.keycloak.authUrl", KEYCLOAK_CONTAINER.getAuthServerUrl(),
            "test.keycloak.adminName", KEYCLOAK_CONTAINER.getAdminUsername(),
            "test.keycloak.adminPass", KEYCLOAK_CONTAINER.getAdminPassword(),
            //TODO:: add config for server to talk to
            "service.externalAuth.url", keycloakUrl,
            "mp.jwt.verify.publickey.location", publicKeyFile.getAbsolutePath()
        );
    }

    public static synchronized void startMongoTestServer() throws IOException {
        if (MONGO_EXE != null) {
            log.info("Flapdoodle Mongo already started.");
            return;
        }
        Version.Main version = Version.Main.V4_0;
        int port = 27018;
        log.info("Starting Flapdoodle Test Mongo {} on port {}", version, port);
        IMongodConfig config = new MongodConfigBuilder()
            .version(version)
            .net(new Net(port, Network.localhostIsIPv6()))
            .build();
        try {
            MONGO_EXE = MongodStarter.getDefaultInstance().prepare(config);
            MongodProcess process = MONGO_EXE.start();
            if (!process.isProcessRunning()) {
                throw new IOException();
            }
        } catch (Throwable e) {
            log.error("FAILED to start test mongo server: ", e);
            MONGO_EXE = null;
            throw e;
        }
    }

    public static synchronized void stopMongoTestServer() {
        if (MONGO_EXE == null) {
            log.warn("Mongo was not started.");
            return;
        }
        MONGO_EXE.stop();
        MONGO_EXE = null;
    }

    public synchronized static void cleanMongo() throws IOException {
        if (MONGO_EXE == null) {
            log.warn("Mongo was not started.");
            return;
        }
        log.info("Cleaning Mongo of all entries.");
    }

    @Override
    public void init(Map<String, String> initArgs) {
        this.externalAuth = Boolean.parseBoolean(initArgs.getOrDefault(EXTERNAL_AUTH_ARG, Boolean.toString(this.externalAuth)));
    }

    @Override
    public Map<String, String> start() {
        log.info("STARTING test lifecycle resources.");
        Map<String, String> configOverride = new HashMap<>();
        try {
            startMongoTestServer();
        } catch (IOException e) {
            log.error("Unable to start Flapdoodle Mongo server");
        }
        configOverride.putAll(startKeycloakTestServer());
        return configOverride;
    }

    @Override
    public void stop() {
        log.info("STOPPING test lifecycle resources.");
        stopMongoTestServer();
    }
}
The app can be found here: https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/open-qm-base-station
The tests are currently failing in the ways I am describing, so feel free to look around.
Note that to run this, you will need to run ./gradlew build publishToMavenLocal in https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/libs/open-qm-core to install a dependency locally.
Github issue also tracking this: https://github.com/quarkusio/quarkus/issues/22025
Any use of @QuarkusTestResource() without restrictToAnnotatedClass set to true means that the QuarkusTestResourceLifecycleManager will be applied to all tests, no matter where the annotation is placed.
Hopefully restrictToAnnotatedClass will solve the problem.
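Applied to the external-auth test class above, the annotation would look something like this:

@QuarkusTestResource(
    value = TestResourceLifecycleManager.class,
    initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"),
    restrictToAnnotatedClass = true // apply this manager only to the annotated test class
)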

JUnit4 - How to test for readonly/write-protected directory if run with docker

We have an integration test setup for testing the behavior of missing but required configuration properties. Among these properties is a directory where failed uploads should be written to for later retries. The general behavior for this property should be that the application doesn't even start up, failing immediately when certain constraints are violated.
The properties are managed by Spring via certain ConfigurationProperties classes; among these we have a simple S3MessageUploadSettings class:
@Getter
@Setter
@ConfigurationProperties(prefix = "s3")
@Validated
public class S3MessageUploadSettings {

    @NotNull
    private String bucketName;

    @NotNull
    private String uploadErrorPath;

    ...
}
In the respective Spring configuration we now perform certain validation checks, like whether the path exists, is writable and a directory, and throw respective RuntimeExceptions when certain assertions aren't met:
@Slf4j
@Import({ S3Config.class })
@Configuration
@EnableConfigurationProperties(S3MessageUploadSettings.class)
public class S3MessageUploadSpringConfig {

    @Resource
    private S3MessageUploadSettings settings;

    ...

    @PostConstruct
    public void checkConstraints() {
        String sPath = settings.getUploadErrorPath();
        Path path = Paths.get(sPath);
        ...
        log.debug("Probing path '{}' for existence", path);
        if (!Files.exists(path)) {
            throw new RuntimeException("Required error upload directory '" + path + "' does not exist");
        }
        log.debug("Probing path '{}' for being a directory", path);
        if (!Files.isDirectory(path)) {
            throw new RuntimeException("Upload directory '" + path + "' is not a directory");
        }
        log.debug("Probing path '{}' for write permissions", path);
        if (!Files.isWritable(path)) {
            throw new RuntimeException("Error upload path '" + path + "' is not writable");
        }
    }
}
Our test setup now looks like this:
public class StartupTest {

    @ClassRule
    public static TemporaryFolder testFolder = new TemporaryFolder();

    private static File BASE_FOLDER;
    private static File ACCESSIBLE;
    private static File WRITE_PROTECTED;
    private static File NON_DIRECTORY;

    @BeforeClass
    public static void initFolderSetup() throws IOException {
        BASE_FOLDER = testFolder.getRoot();
        ACCESSIBLE = testFolder.newFolder("accessible");
        WRITE_PROTECTED = testFolder.newFolder("writeProtected");
        if (!WRITE_PROTECTED.setReadOnly()) {
            fail("Could not change directory permissions to readonly");
        }
        if (!WRITE_PROTECTED.setWritable(false)) {
            fail("Could not change directory permissions to writable(false)");
        }
        NON_DIRECTORY = testFolder.newFile("nonDirectory");
    }

    @Configuration
    @Import({
        S3MessageUploadSpringConfig.class,
        S3MockConfig.class,
        ...
    })
    static class BaseContextConfig {
        // common bean definitions
        ...
    }

    @Configuration
    @Import(BaseContextConfig.class)
    @PropertySource("classpath:ci.properties")
    static class NotExistingPathContextConfig {

        @Resource
        private S3MessageUploadSettings settings;

        @PostConstruct
        public void updateSettings() {
            settings.setUploadErrorPath(BASE_FOLDER.getPath() + "/foo/bar");
        }
    }

    @Configuration
    @Import(BaseContextConfig.class)
    @PropertySource("classpath:ci.properties")
    static class NotWritablePathContextConfig {

        @Resource
        private S3MessageUploadSettings settings;

        @PostConstruct
        public void updateSettings() {
            settings.setUploadErrorPath(WRITE_PROTECTED.getPath());
        }
    }

    ...

    @Configuration
    @Import(BaseContextConfig.class)
    @PropertySource("classpath:ci.properties")
    static class StartableContextConfig {

        @Resource
        private S3MessageUploadSettings settings;

        @PostConstruct
        public void updateSettings() {
            settings.setUploadErrorPath(ACCESSIBLE.getPath());
        }
    }

    @Test
    public void shouldFailStartupDueToNonExistingErrorPathDirectory() {
        ApplicationContext context = null;
        try {
            context = new AnnotationConfigApplicationContext(StartupTest.NotExistingPathContextConfig.class);
            fail("Should not have started the context");
        } catch (Exception e) {
            e.printStackTrace();
            assertThat(e, instanceOf(BeanCreationException.class));
            assertThat(e.getMessage(), containsString("Required error upload directory '" + BASE_FOLDER + "/foo/bar' does not exist"));
        } finally {
            closeContext(context);
        }
    }

    @Test
    public void shouldFailStartupDueToNonWritablePathDirectory() {
        ApplicationContext context = null;
        try {
            context = new AnnotationConfigApplicationContext(StartupTest.NotWritablePathContextConfig.class);
            fail("Should not have started the context");
        } catch (Exception e) {
            assertThat(e, instanceOf(BeanCreationException.class));
            assertThat(e.getMessage(), containsString("Error upload path '" + WRITE_PROTECTED + "' is not writable"));
        } finally {
            closeContext(context);
        }
    }

    ...

    @Test
    public void shouldStartUpSuccessfully() {
        ApplicationContext context = null;
        try {
            context = new AnnotationConfigApplicationContext(StartableContextConfig.class);
        } catch (Exception e) {
            e.printStackTrace();
            fail("Should not have thrown an exception of type " + e.getClass().getSimpleName() + " with message " + e.getMessage());
        } finally {
            closeContext(context);
        }
    }

    private void closeContext(ApplicationContext context) {
        if (context != null) {
            // check and close any running S3 mock as this may have negative impact on the startup of a further context
            closeS3Mock(context);
            // stop a running Spring context manually as this might interfere with a starting context of an other test
            ((ConfigurableApplicationContext) context).stop();
        }
    }

    private void closeS3Mock(ApplicationContext context) {
        S3Mock s3Mock = null;
        try {
            if (context != null) {
                s3Mock = context.getBean("s3Mock", S3Mock.class);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (null != s3Mock) {
                s3Mock.stop();
            }
        }
    }
}
When run locally, everything looks fine and all tests pass. However, our CI runs these tests inside a docker container, and for some reason changing file permissions seems to end up as a NOOP: the method invocation returns true while not changing anything with regard to the actual file permissions.
Neither File.setReadOnly(), File.setWritable(false) nor Files.setPosixFilePermissions(Path, Set<PosixFilePermission>) seem to have an effect on the actual file permissions in the docker container.
I've also tried changing the directories to real directories that are write protected, i.e. /root or /dev/pts, though as the CI runs the tests as root, these directories are writable by the application and the test fails again.
I also considered using an in-memory file system (such as JimFS), though here I'm not sure how to convince the test to make use of the custom filesystem. AFAIK JimFS does not support the constructor needed for declaring it as the default filesystem.
Which other possibilities exist from within Java to change a directory's permissions to readonly/write-protected when run inside a docker container, or to test successfully for such a directory?
I assume this is due to the permissions and policies of the JVM; you cannot do anything from your code if the OS has blocked some permissions for your JVM.
You can try to edit the java.policy file and set appropriate file permissions.
Perhaps there will be some given files to which write privileges are set, for example:

grant {
    permission java.io.FilePermission "/dev/pts/*", "read,write,delete";
};
More examples in docs: https://docs.oracle.com/javase/8/docs/technotes/guides/security/spec/security-spec.doc3.html.

Not able to load application.conf from cron job in play framework 2.4

I have created a cron job that starts during application restart, but when I try to create a db connection I am getting a NullPointerException. I am able to create and use the db from another module using the same configuration.
Below is my application.conf:
db.abc.driver=com.mysql.jdbc.Driver
db.abc.url="jdbc:mysql://localhost:3306/db_name?useSSL=false"
db.abc.username=root
db.abc.password=""
db.abc.autocommit=false
db.abc.isolation=READ_COMMITTED
And the code that tries to access the db is:
public class SchduleJob extends AbstractModule {

    @Override
    protected void configure() {
        bind(JobOne.class)
            .to(JobOneImpl.class)
            .asEagerSingleton();
    }
}
@ImplementedBy(JobOneImpl.class)
public interface JobOne {}

@Singleton
public class JobOneImpl implements JobOne {

    final ActorSystem actorSystem = ActorSystem.create("name");
    final ActorRef alertActor = actorSystem.actorOf(AlertActor.props);

    public JobOneImpl() {
        scheduleJobs();
    }

    private Cancellable scheduleJobs() {
        return actorSystem.scheduler().schedule(
            Duration.create(0, TimeUnit.MILLISECONDS), // initial delay: 0 milliseconds
            Duration.create(6, TimeUnit.MINUTES), // frequency: 6 minutes
            alertActor,
            "alert",
            actorSystem.dispatcher(),
            null
        );
    }
}
public class AlertActor extends UntypedActor {

    public static Props props = Props.create(AlertActor.class);

    final ActorSystem actorSystem = ActorSystem.create("name");
    final ActorRef messageActor = actorSystem.actorOf(MessageActor.props());

    @Override
    public void onReceive(Object message) throws Exception {
        if (message != null && message instanceof String) {
            RequestDAO requestDAO = new RequestDAO();
            try {
                List<DBRow> rows = requestDAO.getAllRow();
            } catch (Exception exception) {
                exception.printStackTrace();
            }
        }
    }
}
public class RequestDAO {

    public List<DBRow> getAllRow() throws Exception {
        List<DBRow> rows = new ArrayList<DBRow>();
        Connection connection = null;
        try {
            connection = DB.getDataSource("abc").getConnection();
            connection.setAutoCommit(false);
        } catch (Exception exception) {
            exception.printStackTrace();
            if (connection != null) {
                connection.rollback();
            } else {
                System.out.println("in else***********");
            }
            return null;
        } finally {
            if (connection != null)
                connection.close();
        }
        return rows;
    }
}
When I call the getAllRow() method of the RequestDAO class it throws:
java.lang.NullPointerException
at play.api.Application$$anonfun$instanceCache$1.apply(Application.scala:235)
at play.api.Application$$anonfun$instanceCache$1.apply(Application.scala:235)
at play.utils.InlineCache.fresh(InlineCache.scala:69)
at play.utils.InlineCache.apply(InlineCache.scala:55)
at play.api.db.DB$.db(DB.scala:22)
at play.api.db.DB$.getDataSource(DB.scala:41)
at play.api.db.DB.getDataSource(DB.scala)
at play.db.DB.getDataSource(DB.java:33)
But the same code works without the cron job. What should I do to remove this error?
Play uses the Typesafe config library for configuration.
I suspect your current working directory from the cron script isn't set, so it's probably not finding your application.conf (application.properties) file.
However, Config is nice in that it allows you to specify where to look for the file, either by its base name (to choose among .conf | .properties | .json extensions) or the filename including the extension on the java command line:
To specify the base name, use -Dconfig.resource=/path/to/application
To specify the full name, use -Dconfig.file=/path/to/application.properties
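For example, with a distribution built by Play's dist task (whose start script passes -D options through to the JVM), the crontab entry could make the config location explicit - the paths and schedule here are illustrative:

0 3 * * * /opt/myapp/bin/myapp -Dconfig.file=/opt/myapp/conf/application.conf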

How can I use Spring Batch with OSGI

I want to use Spring Batch with OSGi to run a job daily.
Here is what I did:
@Component
@EnableBatchProcessing
public class BatchConfiguration {

    private JobBuilderFactory jobs;

    public JobBuilderFactory getJobs() {
        return jobs;
    }

    public void setJobs(JobBuilderFactory jobs) {
        this.jobs = jobs;
    }

    private StepBuilderFactory steps;

    private EmployeeRepository employeeRepository; // spring data repository

    public EmployeeRepository getEmployeeRepository() {
        return employeeRepository;
    }

    @Reference
    public void setEmployeeRepository(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    public Step syncEmployeesStep() throws Exception {
        RepositoryItemWriter writer = new RepositoryItemWriter();
        writer.setRepository(employeeRepository);
        writer.setMethodName("save");
        return steps.get("syncEmployeesStep")
            .<Employee, Employee> chunk(10)
            .reader(reader())
            .writer(writer)
            .build();
    }

    public Job importEmpJob() throws Exception {
        return jobs.get("importEmpJob")
            .incrementer(new RunIdIncrementer())
            .start(syncEmployeesStep())
            .next(syncEmployeesStep())
            .build();
    }

    public ItemReader<Employee> reader() throws Exception {
        String jpqlQuery = "select a from Employee a";
        ServerEMF entityManager = new ServerEMF();
        JpaPagingItemReader<Employee> reader = new JpaPagingItemReader<Employee>();
        reader.setQueryString(jpqlQuery);
        reader.setEntityManagerFactory(entityManager.getEntityManagerFactory());
        reader.setPageSize(3);
        reader.afterPropertiesSet();
        reader.setSaveState(true);
        return reader;
    }
}
Here I want to run this job to sync between two databases; my problem is how to run this job inside OSGi.
@EnableScheduling
@Component
public class JobRunner {

    private JobLauncher jobLauncher;
    private Job job;
    private BatchConfiguration batchConfig;
    //private JobBuilderFactory jobs;
    //private JobRepository jobrepo;

    final static Logger logger = LoggerFactory.getLogger(BatchConfiguration.class);

    BundleContext ctx;

    @SuppressWarnings("rawtypes")
    ServiceTracker servicetracker;

    @Activate
    public void start(BundleContext context) {
        batchConfig = new BatchConfiguration();
        //jobs = new JobBuilderFactory(jobRepository)
        try {
            job = batchConfig.importEmpJob(); // job is null because I don't know how to use it
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        ctx = context;
        servicetracker = new ServiceTracker(ctx, BatchConfiguration.class, null);
        servicetracker.open();
        new Thread() {
            public void run() { findAndRunJob(); }
        }.start();
    }

    @Deactivate
    public void stop() {
        configAdminTracker.close();
    }

    @Scheduled(fixedRate = 5000)
    protected void findAndRunJob() {
        logger.info("job created.");
        try {
            String dateParam = new Date().toString();
            JobParameters param = new JobParametersBuilder().addString("date", dateParam).toJobParameters();
            System.out.println(dateParam);
            JobExecution execution = jobLauncher.run(job, param);
            System.out.println("Exit Status : " + execution.getStatus());
        } catch (Exception e) {
            //e.printStackTrace();
        }
    }
}
For sure, I got a java.lang.NullPointerException because the job is null.
Could anyone help me with that?
After updates:
@Component
@EnableBatchProcessing
public class BatchConfiguration {

    private EmployeeRepository employeeRepository; // spring data repository

    public EmployeeRepository getEmployeeRepository() {
        return employeeRepository;
    }

    @Reference
    public void setEmployeeRepository(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    public Step syncEmployeesStep() throws Exception {
        RepositoryItemWriter writer = new RepositoryItemWriter();
        writer.setRepository(employeeRepository);
        writer.setMethodName("save");
        return steps.get("syncEmployeesStep")
            .<Employee, Employee> chunk(10)
            .reader(reader())
            .writer(writer)
            .build();
    }

    public Job importEmpJob(JobRepository jobRepository, PlatformTransactionManager transactionManager) throws Exception {
        JobBuilderFactory jobs = new JobBuilderFactory(jobRepository);
        StepBuilderFactory stepBuilderFactory = new StepBuilderFactory(jobRepository, transactionManager);
        return jobs.get("importEmpJob")
            .incrementer(new RunIdIncrementer())
            .start(syncEmployeesStep())
            .next(syncEmployeesStep())
            .build();
    }

    public ItemReader<Employee> reader() throws Exception {
        String jpqlQuery = "select a from Employee a";
        ServerEMF entityManager = new ServerEMF();
        JpaPagingItemReader<Employee> reader = new JpaPagingItemReader<Employee>();
        reader.setQueryString(jpqlQuery);
        reader.setEntityManagerFactory(entityManager.getEntityManagerFactory());
        reader.setPageSize(3);
        reader.afterPropertiesSet();
        reader.setSaveState(true);
        return reader;
    }
}
The job runner class:
private JobLauncher jobLauncher;
private PlatformTransactionManager transactionManager;
private JobRepository jobRepository;
Job importEmpJob;
private BatchConfiguration batchConfig;

@SuppressWarnings("deprecation")
@Activate
public void start(BundleContext context) {
    try {
        batchConfig = new BatchConfiguration();
        this.transactionManager = new ResourcelessTransactionManager();
        MapJobRepositoryFactoryBean repositorybean = new MapJobRepositoryFactoryBean();
        repositorybean.setTransactionManager(transactionManager);
        this.jobRepository = repositorybean.getJobRepository(); // error after executing this statement
        // setup job launcher
        SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
        simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
        simpleJobLauncher.setJobRepository(jobRepository);
        this.jobLauncher = simpleJobLauncher;
        //System.out.println(job);
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    ctx = context;
    configAdminTracker = new ServiceTracker(ctx, BatchConfiguration.class.getName(), null);
    configAdminTracker.open();
    new Thread() {
        public void run() { findAndRunJob(); }
    }.start();
}

@Deactivate
public void stop() {
    configAdminTracker.close();
}

protected void findAndRunJob() {
    logger.info("job created.");
    try {
        String dateParam = new Date().toString();
        // creating the job
        Job job = batchConfig.importEmpJob(jobRepository, transactionManager);
        // running the job
        JobExecution execution = this.jobLauncher.run(job, new JobParameters());
        System.out.println("Exit Status : " + execution.getStatus());
    } catch (Exception e) {
        //e.printStackTrace();
    }
}
What I am getting after running is "java.lang.IllegalArgumentException: interface org.springframework.batch.core.repository.JobRepository is not visible from class loader". Could anyone help me with that error?
In Short
If you're just trying to kick off something simple and don't need all the Spring Batch goodness, I would look into the Apache Sling Commons Scheduler, which has a simple job processor on top of Quartz for scheduling [1].
In General
There are a couple of considerations here depending on what you are trying to do. Are you deploying the Spring Batch jars to the OSGi container with the assumption that the code written for the jobs (steps, tasks, etc.) will live in separate bundles? OSGi's purpose is to develop modular code, so my answer assumes that this is your end goal.

The folks at Pivotal have dropped OSGi support on their artifacts, so to make it work you'll need to determine what you need to export from the Batch jar files. This can be done with BND. I would recommend checking out the new BND Maven plugin [2]. I would configure the Export-Package to export the interfaces you need to write the jobs, so that you can write the jobs in separate modular bundles.

Then I would probably embed the Spring Batch jars in a bundle and write a small wrapper around the JobLauncher. This should confine all the actual batch code to a single classloader, so that you don't have to worry about OSGi trying to pull in classes dynamically. The downside is that this will prevent you from using many of the batch annotations outside of the Spring Batch bundle you created, but it will provide the modularity that you'd be looking for by implementing this type of solution with OSGi.
[1] https://sling.apache.org/documentation/bundles/apache-sling-eventing-and-job-handling.html
[2] http://njbartlett.name/2015/03/27/announcing-bnd-maven-plugin.html
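As a rough illustration of the BND side, a hypothetical bnd.bnd fragment for such a wrapper bundle might look like this (the symbolic name, jar file name and package list are assumptions; the actual Export-Package depends on which Batch interfaces your job bundles need):

# bnd.bnd - wrap the Spring Batch jar and expose only the job-authoring API
Bundle-SymbolicName: com.example.spring-batch-wrapper
-includeresource: lib/spring-batch-core.jar=spring-batch-core-3.0.7.RELEASE.jar;lib:=true
Export-Package: org.springframework.batch.core.*, org.springframework.batch.item.*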
