Method for deleting a file (IDEA warns that problems may occur):
private Mono<Boolean> remove(String path, String fileName) {
    return Mono.fromRunnable(() -> {
        try {
            // Possibly blocking call in non-blocking context could lead to thread starvation
            Files.delete(Path.of(path + fileName));
        } catch (IOException e) {
            throw new RuntimeException();
        }
    });
}
Is there a correct way to delete a file?
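One commonly suggested approach (a sketch only, assuming a recent Reactor version where Schedulers.boundedElastic() is available; not necessarily the one correct answer to this question) is to wrap the blocking call in Mono.fromCallable and subscribe it on a scheduler intended for blocking work:
// Sketch: the delete is still blocking, but it is confined to a pool meant for blocking tasks,
// so it no longer starves the non-blocking event-loop threads.
private Mono<Boolean> remove(String path, String fileName) {
    return Mono.fromCallable(() -> {
                Files.delete(Path.of(path + fileName)); // Callable may throw IOException directly
                return true;
            })
            .subscribeOn(Schedulers.boundedElastic());
}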
This is my code:
public void deletevendor(VendorEntity vendorEntity) throws Exception {
    CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
        try {
            dwService.deleteVendor(vendorEntity);
        } catch (Exception ex) {
            log.info("Error occurred ", ex);
        }
    });
    Boolean isCancelled = future.isCancelled();
    Boolean isDone = future.isDone();
    Boolean isCompletedExceptionally = future.isCompletedExceptionally();
    System.out.println("Asynchronous CompletableFuture: " + isCancelled + " " + isDone + " " + isCompletedExceptionally);
}
My code inside the try block works fine, but I want to trigger the catch block. How can I do that? What inputs could trigger the exception for my CompletableFuture?
For the sake of experimenting with the Future, this is how you can trigger the catch clause:
CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
    try {
        throw new Exception("");
    } catch (Exception ex) {
        log.info("Error occurred ", ex);
    }
});
As already said, according to your code snippet the entire exception-throwing logic sits inside the dwService.deleteVendor(vendorEntity) call. That means you have to pass a vendorEntity that makes this call fail into the public void deletevendor(VendorEntity vendorEntity) method in order for an exception to be caught inside the CompletableFuture.
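Note also that because the exception is caught and logged inside the lambda, the future itself still completes normally, so future.isCompletedExceptionally() stays false. A minimal sketch (an addition of mine, assuming you want the future itself to reflect the failure) would rethrow the exception as an unchecked one:
CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
    try {
        dwService.deleteVendor(vendorEntity);
    } catch (Exception ex) {
        log.info("Error occurred ", ex);
        // rethrow so the CompletableFuture completes exceptionally
        throw new CompletionException(ex);
    }
});
// isCompletedExceptionally() can now become true, but only after the async task has actually run;
// checking it immediately after runAsync may still report false.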
I've written a Spring Boot application that performs ETL from a data source to a data lake every 15 minutes. I've scheduled the execution by putting the @Scheduled annotation on a function.
I had built a JAR and was running it directly with java -jar ingest.jar. It works fine for some days (3-4 days) and then just pauses without any exception. To resume it, I have to go to the console and press a key to make it active again.
@Scheduled(initialDelayString = "${ingest.initialdelay.in.seconds}000", fixedRateString = "${ingest.rate.in.seconds}000")
public void ingestData() {
    // Ingestion logic
}
Because the problem persisted, I built a WAR and deployed it to a Tomcat server, but the problem remains.
Can somebody point out what I am missing here? The same application works fine when I deploy it to Cloud Foundry.
IO Streams - FileInputStream and FileOutputStream
Helper Functions for IO
public static void saveLastSuccessfulDate(String filepath, String propertyName, Date dateTime) {
    Properties prop = new Properties();
    OutputStream output = null;
    try {
        String lastDate = getDateInFormat(dateTime);
        log.info("Saving: " + lastDate);
        output = new FileOutputStream(filepath);
        prop.setProperty(propertyName, lastDate);
        prop.store(output, null);
    } catch (IOException io) {
        io.printStackTrace();
    } finally {
        if (output != null) {
            try {
                output.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
// Helper to read from the properties file
public static String checkLastSuccessfulDateAsString(String filepath, String propName) {
    Properties prop = new Properties();
    InputStream input = null;
    try {
        input = new FileInputStream(filepath);
        // load a properties file
        prop.load(input);
        String lastSuccesfulDate = prop.getProperty(propName);
        log.info("Last Successful Date: " + lastSuccesfulDate);
        return lastSuccesfulDate;
    } catch (FileNotFoundException f) {
        log.error("checkLastSuccessfulDateAsString: File Not Found: " + f.getMessage());
    } catch (IOException ex) {
        log.error(ex.getMessage());
        ex.printStackTrace();
    } finally {
        if (input != null) {
            try {
                input.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return null;
}
By default, Spring's @Scheduled tasks run on a single thread, so if one task blocks, the next task won't run.
You can make your task class implement SchedulingConfigurer. It will then use multiple threads to run the tasks and avoid blocking. The code looks like this:
@Component
public class TaskService implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar scheduledTaskRegistrar) {
        scheduledTaskRegistrar.setScheduler(taskExecutor());
    }

    @Bean(destroyMethod = "shutdown")
    public ScheduledExecutorService taskExecutor() {
        return Executors.newScheduledThreadPool(100);
    }

    // your code
    @Scheduled(initialDelayString = "${ingest.initialdelay.in.seconds}000", fixedRateString = "${ingest.rate.in.seconds}000")
    public void ingestData() {
        // Ingestion logic
    }
}
Hope this helps.
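As an alternative sketch (assuming you simply want a larger scheduler pool rather than wiring a SchedulingConfigurer; the class name, thread-name prefix, and pool size below are illustrative values, not from the original answer), you can expose a TaskScheduler bean that Spring's scheduling support will pick up:
@Configuration
public class SchedulerConfig {

    // Sketch: a dedicated scheduler pool so one long-running @Scheduled task
    // cannot starve the others. The pool size of 5 is an arbitrary example.
    @Bean
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(5);
        scheduler.setThreadNamePrefix("ingest-scheduler-");
        return scheduler;
    }
}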
I am trying to get a very basic RxJava-based application to work. I have defined the following method, which returns an Observable that reads and emits lines from a file:
public Observable<String> getObservable() throws IOException {
    return Observable.create(subscribe -> {
        InputStream in = getClass().getResourceAsStream("/trial.txt");
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        String line = null;
        try {
            while ((line = reader.readLine()) != null) {
                subscribe.onNext(line);
            }
        } catch (IOException e) {
            subscribe.onError(e);
        } finally {
            subscribe.onCompleted();
        }
    });
}
Next I have defined the subscriber code:
public static void main(String[] args) throws IOException, InterruptedException {
    Thread thread = new Thread(() -> {
        RxObserver observer = new RxObserver();
        try {
            observer.getObservable()
                    .observeOn(Schedulers.io())
                    .subscribe(x -> System.out.println(x),
                               t -> System.out.println(t),
                               () -> System.out.println("Completed"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
    thread.start();
    thread.join();
}
The file has close to 50,000 records. When running the app I get "rx.exceptions.MissingBackpressureException". I have gone through some of the documentation and, as suggested, tried adding .onBackpressureBuffer() to the call chain. With that I no longer get the exception, but the completed callback isn't fired either.
What is the right way to handle a scenario where we have a fast-producing Observable?
The first problem is that your readLine logic ignores backpressure. You can apply onBackpressureBuffer() just before observeOn to start with, but there is a recent addition, SyncOnSubscribe, that lets you generate values one by one and takes care of backpressure:
SyncOnSubscribe.createSingleState(() -> {
    try {
        InputStream in = getClass().getResourceAsStream("/trial.txt");
        return new BufferedReader(new InputStreamReader(in));
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }
},
(s, o) -> {
    try {
        String line = s.readLine();
        if (line == null) {
            o.onCompleted();
        } else {
            o.onNext(line);
        }
    } catch (IOException ex) {
        o.onError(ex);
    }
},
s -> {
    try {
        s.close();
    } catch (IOException ex) {
        // ignore failure on close
    }
});
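To actually use it, pass the SyncOnSubscribe to Observable.create (a small usage sketch of mine; linesOnSubscribe is an illustrative name for the SyncOnSubscribe built above):
// Usage sketch: SyncOnSubscribe implements Observable.OnSubscribe, so create() accepts it.
Observable<String> fileLines = Observable.create(linesOnSubscribe);
fileLines
    .observeOn(Schedulers.io())
    .subscribe(System.out::println,
               Throwable::printStackTrace,
               () -> System.out.println("Completed"));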
The second problem is that your Thread will complete way before all elements on the io thread have been delivered, and thus the main program exits. Either remove the observeOn, add .toBlocking(), or use a CountDownLatch.
RxObserver observer = new RxObserver();
try {
    CountDownLatch cdl = new CountDownLatch(1);
    observer.getObservable()
            .observeOn(Schedulers.io())
            .subscribe(x -> System.out.println(x),
                       t -> { System.out.println(t); cdl.countDown(); },
                       () -> { System.out.println("Completed"); cdl.countDown(); });
    cdl.await();
} catch (IOException | InterruptedException e) {
    e.printStackTrace();
}
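For completeness, a minimal sketch of the toBlocking() variant mentioned above (my addition, reusing the question's RxObserver and getObservable names): it blocks the calling thread until the Observable terminates, so no latch or thread.join() is needed.
RxObserver observer = new RxObserver();
try {
    observer.getObservable()
            .toBlocking()
            .forEach(System.out::println); // returns only after onCompleted or onError
    System.out.println("Completed");
} catch (IOException e) {
    e.printStackTrace();
}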
The problem here is the observeOn operator: since each Observer onNext() call is scheduled to run on a separate thread, your Observable keeps producing those scheduled calls in a loop regardless of the subscriber's (observeOn's) capacity.
If you keep this synchronous, the Observable will not emit the next element until the subscriber is done with the previous one, since it's all done on one thread, and you will not have backpressure problems anymore.
If you still want to use observeOn, you will have to implement backpressure logic in your Observable's OnSubscribe#call method.
I have source files in Cp1250 encoding. All of those files are in the dirName directory or its subdirectories. I would like to merge them into one UTF-8 file by concatenating their contents. Unfortunately I get an empty line at the beginning of the result file.
public static void processDir(String dirName, String resultFileName) {
    try {
        File resultFile = new File(resultFileName);
        BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(resultFile), "utf-8"));
        Files.walk(Paths.get(dirName)).filter(Files::isRegularFile).forEach((path) -> {
            try {
                Files.readAllLines(path, Charset.forName("Windows-1250")).stream().forEach((line) -> {
                    try {
                        bw.newLine();
                        bw.write(line);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        bw.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The reason is that I don't know how to detect the first file in my stream.
I came up with an extremely crude solution which does not rely on the stream itself, so it is unsatisfactory:
public static void processDir(String dirName, String resultFileName) {
    try {
        File resultFile = new File(resultFileName);
        BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(resultFile), "utf-8"));
        Files.walk(Paths.get(dirName)).filter(Files::isRegularFile).forEach((path) -> {
            try {
                Files.readAllLines(path, Charset.forName("Windows-1250")).stream().forEach((line) -> {
                    try {
                        if (resultFile.length() != 0) {
                            bw.newLine();
                        }
                        bw.write(line);
                        if (resultFile.length() == 0) {
                            bw.flush();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        bw.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Also, I could use a static boolean, but that is plain ugly.
You can use flatMap to create a stream of all lines of all files, then use flatMap again to interleave it with the line separator, then use skip(1) to drop the leading separator, like this:
public static void processDir(String dirName, String resultFileName) {
    try (BufferedWriter bw = Files.newBufferedWriter(Paths.get(resultFileName))) {
        Files.walk(Paths.get(dirName)).filter(Files::isRegularFile)
             .flatMap(path -> {
                 try {
                     return Files.lines(path, Charset.forName("Windows-1250"));
                 } catch (IOException e) {
                     throw new UncheckedIOException(e);
                 }
             })
             .flatMap(line -> Stream.of(System.lineSeparator(), line))
             .skip(1)
             .forEach(line -> {
                 try {
                     bw.write(line);
                 } catch (IOException e) {
                     throw new UncheckedIOException(e);
                 }
             });
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}
In general, the flatMap+skip combination can help to solve many similar problems.
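As a tiny illustration of that idiom (my example, not from the original answer), the same flatMap + skip trick can interleave a separator between the elements of any stream:
// Sketch: emit (separator, element) pairs, then drop the leading separator.
String joined = Stream.of("a", "b", "c")
        .flatMap(s -> Stream.of(", ", s))
        .skip(1)
        .collect(Collectors.joining());
// joined is "a, b, c"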
Also note the Files.newBufferedWriter method, which is a simpler way to create a BufferedWriter. And don't forget about try-with-resources.
Rethink your strategy. If you want to join files without removing or converting line terminators, there is no reason to process lines at all. It seems the only reason you wrote line-processing code is the urge to bring lambda expressions and streams into the solution, and the only possibility offered by the current API is to process streams of lines. But obviously, they are not the right tool for this job:
public static void processDir(String dirName, String resultFileName) throws IOException {
    Charset cp1250 = Charset.forName("Windows-1250");
    CharBuffer buffer = CharBuffer.allocate(8192);
    try (BufferedWriter bw
             = Files.newBufferedWriter(Paths.get(resultFileName), CREATE, TRUNCATE_EXISTING)) {
        Files.walkFileTree(Paths.get(dirName), new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path path, BasicFileAttributes attrs) throws IOException {
                try (BufferedReader r = Files.newBufferedReader(path, cp1250)) {
                    while (r.read(buffer) > 0) {
                        bw.write(buffer.array(), buffer.arrayOffset(), buffer.position());
                        buffer.clear();
                    }
                }
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
Note how this solution solves the problems of your first attempt. You don't have to deal with line terminators here; the code doesn't even waste resources trying to find them in the input. All it does is perform the charset conversion on chunks of input data and write them to the target. The performance difference can be significant.
Further, the code isn't cluttered with catching exceptions that you can't handle. If an IOException occurs at any place in the operation, all pending resources are properly closed and the exception is relayed to the caller.
Granted, it uses a good old inner class instead of a lambda expression, but that doesn't reduce readability compared to your attempt. If it really bothers you that there is no lambda expression involved, you may check this question & answer for a way to bring them back in.
I need to read a properties file containing some configuration data in a JSF web application.
Right now the code looks like this:
private Properties getConfig() {
    Properties properties = new Properties();
    InputStream inputStream = null;
    try {
        inputStream = this.getClass().getResourceAsStream("/config.properties");
        try {
            properties.load(inputStream);
        } catch (IOException e) {
            logger.error("Error while reading config properties", e);
        }
    } finally {
        if (inputStream != null) {
            try {
                inputStream.close();
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
            }
        }
    }
    return properties;
}
Is it safe to do it this way or can I run into concurrency issues when multiple threads are calling getConfig()?
No, that should be perfectly safe. I can't see any concurrency issues in there.
However, the exception handling might not be ideal - is it valid to return an empty properties object if you fail to load the config, or should you propagate the exception out of getConfig()? Up to you, really....
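If you do decide to propagate the failure instead of returning an empty Properties object, a minimal sketch (my illustration, using try-with-resources; the exact exception types are a choice, not a requirement) could look like this:
// Sketch: fail fast if the config cannot be read, and let try-with-resources handle closing.
private Properties getConfig() {
    Properties properties = new Properties();
    try (InputStream inputStream = getClass().getResourceAsStream("/config.properties")) {
        if (inputStream == null) {
            throw new IllegalStateException("config.properties not found on the classpath");
        }
        properties.load(inputStream);
    } catch (IOException e) {
        throw new UncheckedIOException("Error while reading config properties", e);
    }
    return properties;
}
Since the file's contents don't change at runtime, you could also load it once and cache the resulting Properties in a field to avoid re-reading it on every call.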