How to process two paired files with Apache Camel - java

I am building an application using Apache Camel.
In this application, pairs of files that share the same base name, one with an .xml extension and one with a .jpg extension, are placed in a specific directory.
These files are processed with Apache Camel's file2 component.
I am using Apache Camel version 2.19.0.
I would like to meet the following specifications:
1. When processing completes, move the xml file and the paired jpg file to the done directory.
2. When processing fails, move the xml file and the paired jpg file to the error directory.
The directory structure is as follows.
main/ftp/20170605-110000.xml
main/ftp/20170605-110000.jpg
main/done/20170604-090000.xml
main/done/20170604-090000.jpg
main/error/20170604-090000.xml
main/error/20170604-090000.jpg
I have achieved the desired behavior with the following code.
public class ExampleRoute extends RouteBuilder {

    private final File ftpDir;
    private final File doneDir;
    private final File errorDir;

    public ExampleRoute() {
        this.ftpDir = new File("work/main/ftp");
        this.doneDir = new File("work/main/done");
        this.errorDir = new File("work/main/error");
    }

    @Override
    public void configure() throws Exception {
        String format = "file://%s?include=.*.xml&move=%s&moveFailed=%s";
        String from = String.format(format,
                ftpDir.getAbsolutePath(),
                doneDir.getAbsolutePath(),
                errorDir.getAbsolutePath());

        // @formatter:off
        onException(Exception.class)
            .handled(false)
            .process(new MoveResourceProcessor(errorDir))
            .stop();
        // @formatter:on

        // @formatter:off
        from(from)
            .process(exchange -> {
                // Nothing to do...
            })
            .process(new MoveResourceProcessor(doneDir))
            .end();
        // @formatter:on
    }

    private class MoveResourceProcessor implements Processor {

        private final File dir;

        public MoveResourceProcessor(File dir) {
            this.dir = dir;
        }

        @Override
        public void process(Exchange exchange) throws Exception {
            String parent = (String) exchange.getIn().getHeader(Exchange.FILE_PARENT);
            File parentDir = new File(parent);
            String filename = (String) exchange.getIn().getHeader(Exchange.FILE_NAME_ONLY);
            String baseName = FilenameUtils.getBaseName(filename);
            // Move the .jpg that shares the .xml file's base name.
            File source = new File(parentDir, baseName + ".jpg");
            if (source.exists()) {
                File dest = new File(dir, source.getName());
                FileUtils.moveFile(source, dest);
            }
        }
    }
}
But is there a better way to handle multiple files that are related to a target file like this?

Related

How can I print the error message into a txt or json file in my directory?

I have a function that saves all the errors in an errorMessages list:
public class Util {
private List<String> errorMessages = new ArrayList<>();
public void outputResult(String content) {
logger.error(content);
errorMessages.add(content);
}
}
and my compare function adds all the error messages to the list:
public void compare(Config source, Config target) {
if (source.getId() != target.getId()) {
util.outputResult("id not equal");
}
// ...
}
And in my main function, I call this compare function and want to save all the error messages in a txt or some other file in my current directory.
public class MyClass {
public void main() {
compare();
// writeToFile
}
}
This is what I'm doing right now: I convert a ByteArrayOutputStream to a string and write it out. A txt file is generated, but it is empty. I don't want a single string; I want each error message in the list to be printed. How can I do that?
ByteArrayOutputStream errorMessages = new ByteArrayOutputStream();
try (FileWriter w = new FileWriter(pathToReport)) {
w.write(errorMessages.toString());
}
File errorMessagesFile = new File(pathToReport);
errorMessagesFile.writeText(errorMessages.toString());
What logger library are you using? If you use slf4j, you can couple it with log4j and configure it to log just the error messages into the file you specify in the configuration. I did some looking and found this Stack Overflow question: where-does-the-slf4j-log-file-get-saved; the answer there provides a template you can follow for this setup.
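Alternatively, if you just want to dump the list itself to a text file, here is a minimal sketch. It assumes you can get hold of the List<String>, for example via a hypothetical getErrorMessages() accessor on Util; the class and method names are illustrative only.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ErrorReportWriter {
    // Writes each message on its own line; Files.write(Path, Iterable, Charset)
    // appends a line separator after every element.
    public static void writeReport(String pathToReport, List<String> errorMessages) throws IOException {
        Files.write(Paths.get(pathToReport), errorMessages, StandardCharsets.UTF_8);
    }
}
In your main flow you would then call something like writeReport(pathToReport, util.getErrorMessages()) after compare() has run.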

Invalid character \u0000 in Spring PropertiesPersistingMetadataStore file

As shown in the code below, we have an FtpInboundFileSynchronizingMessageSource with a FileSystemPersistentAcceptOnceFileListFilter using PropertiesPersistingMetadataStore.
@Bean
public PropertiesPersistingMetadataStore getMetadataStore() {
final PropertiesPersistingMetadataStore metadataStore = new PropertiesPersistingMetadataStore() {
@Override
public String putIfAbsent(final String key, final String value) {
try {
super.afterPropertiesSet();
} catch (final Exception e) {
e.printStackTrace();
}
return super.putIfAbsent(key, value);
}
};
metadataStore.setBaseDirectory(getRegistryValue("LOCALMETASTOREDIRECTORY"));
return metadataStore;
}
@Bean
@InboundChannelAdapter(value = "CSVChannel", poller = @Poller(fixedRate = "30000", maxMessagesPerPoll = "1"))
public MessageSource<File> ftpMessageSource() {
final String METHODNAME = "ftpMessageSource()";
if (LoggingHelper.isEntryExitTraceEnabled(LOGGER)) {
LOGGER.entering(CLASSNAME, METHODNAME);
}
final Comparator<File> fileLastModifiedDateComparator = new Comparator<File>() {
@Override
public int compare(final File f1, final File f2) {
return Long.valueOf(f1.lastModified())
.compareTo(f2.lastModified());
}
};
final FtpInboundFileSynchronizingMessageSource source = new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer(), fileLastModifiedDateComparator);
source.setLocalDirectory(new File(getRegistryValue("LOCALDIRECTORY")));
final FileSystemPersistentAcceptOnceFileListFilter fileSystemPersistentAcceptOnceFileListFilter = new FileSystemPersistentAcceptOnceFileListFilter(getMetadataStore(),
getRegistryValue("REMOTEFILENAMEPATTERN_ANAG_CLI"));
fileSystemPersistentAcceptOnceFileListFilter.setFlushOnUpdate(true);
source.setLocalFilter(fileSystemPersistentAcceptOnceFileListFilter);
if (LoggingHelper.isEntryExitTraceEnabled(LOGGER)) {
LOGGER.exiting(CLASSNAME, METHODNAME);
}
return source;
}
We have 4 instances of the application running in production, and the local directory and the metadata store directory are on a location shared by all 4 instances.
The problem we are facing now is that we are seeing invalid characters written to the metadata-store.properties file, and sometimes some process keeps writing the character \u0000 continuously, which causes the file to grow very large (for example 1 GB in a few minutes). And since the metadata is read into memory by the framework, this causes an OutOfMemoryError when the file is very big.
Please see some entries from the metadata-store.properties file below.
ANAG_CLI_*.CSV/opt/user-integration/anagcli/input/20200609113855907_ANAG_CLI_20200609113846.CSV.a=1591695480000
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000=
ANAG_CLI_*.CSV/opt/user-integration/anagcli/input/20200610105125916_ANAG_CLI_20200610105118.CSV.a.writing=1591779085951
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000=
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000=
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000=
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000=
ANAG_CLI_*.CSV/opt/user-integration/anagcli/input/20200609133155929_ANAG_CLI_20200609133146.CSV.a=1591702315917
Is it safe to use the PropertiesPersistingMetadataStore like this, in a location shared between more than one application instance? How can I work out what is causing this invalid-character issue, and how can I avoid it?
Any help would be appreciated!

COPY_ATTRIBUTES not working with Zip filesystem

The Zip filesystem doesn't copy file attributes when using the Java nio Files.copy method with StandardCopyOption.COPY_ATTRIBUTES. Is it supposed to?
The following fully working example code demonstrates the issue. It copies two files into a zip file: one normal, the other read-only. If you then list the zip file (e.g. using 7-zip) you'll see they are both normal, not read-only.
public static void main(String[] args) throws Exception {
Path tmpdir = Files.createTempDirectory(null);
createFiles(tmpdir);
createZip(tmpdir);
}
private static void createFiles(Path tmpdir) throws IOException {
Files.write(tmpdir.resolve("a.txt"), Collections.singleton("Hello, world! (a)"));
Files.write(tmpdir.resolve("b.txt"), Collections.singleton("Hello, world! (b)"));
Files.setAttribute(tmpdir.resolve("b.txt"), "dos:readonly", true);
}
private static void createZip(Path dir) throws IOException
{
Path zip = dir.resolve("data.zip");
URI uri = URI.create("jar:" + zip.toUri());
try (FileSystem fs = FileSystems.newFileSystem(uri, Collections.singletonMap("create", "true"))) {
for (Path path : Files.newDirectoryStream(dir))
if (!path.equals(zip)) {
String name = path.getFileName().toString();
Files.copy(path, fs.getPath(name), StandardCopyOption.COPY_ATTRIBUTES);
}
}
}

Java & Apache-Camel: From direct-endpoint to file-endpoint

I've tried to build a route to copy files from one directory to another directory. But instead of using:
from(file://source-directory).to(file://destination-directory)
I want to do something like this:
from(direct:start)
.to(direct:doStuff)
.to(direct:readDirectory)
.to(file://destination-folder)
I've done the following stuff:
Route
@Component
public class Route extends AbstractRouteBuilder {

    @Override
    public void configure() throws Exception {
        from("direct:start")
            .bean(lookup(ReadDirectory.class))
            .split(body())
            .setHeader("FILENAME", method(lookup(CreateFilename.class)))
            .to("file:///path/to/my/output/directory/?fileName=${header.FILENAME}");
    }
}
Processor
@Component
public class ReadDirectory implements CamelProcessorBean {

    @Handler
    public ImmutableList<File> apply(@Header("SOURCE_DIR") final String sourceDir) {
        final File directory = new File(sourceDir);
        final File[] files = directory.listFiles();
        if (files == null) {
            return ImmutableList.copyOf(Lists.<File>newArrayList());
        }
        return ImmutableList.copyOf(files);
    }
}
I can start my route using the following pseudo-test (the point is that I can manually start my route with producer.sendBodyAndHeader(..)):
public class RouteIT extends StandardIT {

    @Produce
    private ProducerTemplate producer;

    @Test
    public void testRoute() throws Exception {
        final String uri = "direct:start";
        producer.sendBodyAndHeaders(uri, InOut, null, header());
    }

    private Map<String, Object> header() {
        final Map<String, Object> header = Maps.newHashMap();
        header.put("SOURCE_DIR", "/path/to/my/input/directory/");
        return header;
    }
}
AbstractRouteBuilder extends SpringRouteBuilder
CamelProcessorBean is only a marker interface
StandardIT loads the Spring context and related setup
The problem is that I must set the filename. I've read that Camel sets the header CamelFileNameProduced (at the file endpoint). It is a generic string with a timestamp, and if I don't set the filename, the written files get this generic string as their filename.
My question is: is there a more elegant solution for copying files (starting with a direct endpoint and reading the directory in the middle of the route) while keeping the original filename for the destination? (I don't have to set the filename when I use from("file:source").to("file:destination"), so why must I do it now?)
You can set the file name when you send the message using the producer template; as long as the header is propagated during routing between the routes (which Camel does by default), you are fine.
For example
@Test
public void testRoute() throws Exception {
    final String uri = "direct:start";
    Map headers = ...
    headers.put(Exchange.FILE_NAME, "myfile.txt");
    producer.sendBodyAndHeaders(uri, InOut, null, headers);
}
The file component documentation explains in more detail how to control the file name:
http://camel.apache.org/file2

How to list the files inside a JAR file?

I have this code which reads all the files from a directory.
File textFolder = new File("text_directory");
File [] texFiles = textFolder.listFiles( new FileFilter() {
public boolean accept( File file ) {
return file.getName().endsWith(".txt");
}
});
It works great. It fills the array with all the files that end with ".txt" from directory "text_directory".
How can I read the contents of a directory in a similar fashion within a JAR file?
So what I really want to do is, to list all the images inside my JAR file, so I can load them with:
ImageIO.read(this.getClass().getResource("CompanyLogo.png"));
(That one works because "CompanyLogo" is hardcoded, but the number of images inside the JAR file could be anywhere from 10 to 200.)
EDIT
So I guess my main problem would be: How to know the name of the JAR file where my main class lives?
Granted, I could read it using java.util.zip.
My structure is like this:
my.jar!/Main.class
my.jar!/Aux.class
my.jar!/Other.class
my.jar!/images/image01.png
my.jar!/images/image02a.png
my.jar!/images/imwge034.png
my.jar!/images/imagAe01q.png
my.jar!/META-INF/manifest
Right now I'm able to load for instance "images/image01.png" using:
ImageIO.read(this.getClass().getResource("images/image01.png"));
But that's only because I know the file name; the rest I have to load dynamically.
CodeSource src = MyClass.class.getProtectionDomain().getCodeSource();
if (src != null) {
URL jar = src.getLocation();
ZipInputStream zip = new ZipInputStream(jar.openStream());
while(true) {
ZipEntry e = zip.getNextEntry();
if (e == null)
break;
String name = e.getName();
if (name.startsWith("path/to/your/dir/")) {
/* Do something with this entry. */
...
}
}
}
else {
/* Fail... */
}
Note that in Java 7, you can create a FileSystem from the JAR (zip) file, and then use NIO's directory walking and filtering mechanisms to search through it. This would make it easier to write code that handles JARs and "exploded" directories.
Code that works both in IDEs and in .jar files:
import java.io.*;
import java.net.*;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;
public class ResourceWalker {
public static void main(String[] args) throws URISyntaxException, IOException {
URI uri = ResourceWalker.class.getResource("/resources").toURI();
Path myPath;
if (uri.getScheme().equals("jar")) {
FileSystem fileSystem = FileSystems.newFileSystem(uri, Collections.<String, Object>emptyMap());
myPath = fileSystem.getPath("/resources");
} else {
myPath = Paths.get(uri);
}
Stream<Path> walk = Files.walk(myPath, 1);
for (Iterator<Path> it = walk.iterator(); it.hasNext();){
System.out.println(it.next());
}
}
}
erickson's answer worked perfectly:
Here's the working code.
CodeSource src = MyClass.class.getProtectionDomain().getCodeSource();
List<String> list = new ArrayList<String>();
if( src != null ) {
URL jar = src.getLocation();
ZipInputStream zip = new ZipInputStream( jar.openStream());
ZipEntry ze = null;
while( ( ze = zip.getNextEntry() ) != null ) {
String entryName = ze.getName();
if( entryName.startsWith("images") && entryName.endsWith(".png") ) {
list.add( entryName );
}
}
}
webimages = list.toArray( new String[ list.size() ] );
And I have just modified my load method from this:
File[] webimages = ...
BufferedImage image = ImageIO.read(this.getClass().getResource(webimages[nextIndex].getName() ));
To this:
String [] webimages = ...
BufferedImage image = ImageIO.read(this.getClass().getResource(webimages[nextIndex]));
I would like to expand on acheron55's answer, since it is not a safe solution, for several reasons:
It doesn't close the FileSystem object.
It doesn't check if the FileSystem object already exists.
It isn't thread-safe.
This is a somewhat safer solution:
private static ConcurrentMap<String, Object> locks = new ConcurrentHashMap<>();
public void walk(String path) throws Exception {
URI uri = getClass().getResource(path).toURI();
if ("jar".equals(uri.getScheme()) {
safeWalkJar(path, uri);
} else {
Files.walk(Paths.get(uri));
}
}
private void safeWalkJar(String path, URI uri) throws Exception {
synchronized (getLock(uri)) {
// this'll close the FileSystem object at the end
try (FileSystem fs = getFileSystem(uri)) {
Files.walk(fs.getPath(path));
}
}
}
private Object getLock(URI uri) {
String fileName = parseFileName(uri);
locks.computeIfAbsent(fileName, s -> new Object());
return locks.get(fileName);
}
private String parseFileName(URI uri) {
String schemeSpecificPart = uri.getSchemeSpecificPart();
return schemeSpecificPart.substring(0, schemeSpecificPart.indexOf("!"));
}
private FileSystem getFileSystem(URI uri) throws IOException {
try {
return FileSystems.getFileSystem(uri);
} catch (FileSystemNotFoundException e) {
return FileSystems.newFileSystem(uri, Collections.<String, String>emptyMap());
}
}
There's no real need to synchronize over the file name; one could simply synchronize on the same object every time (or make the method synchronized), it's purely an optimization.
I would say that this is still a problematic solution, since there might be other parts in the code that use the FileSystem interface over the same files, and it could interfere with them (even in a single threaded application).
Also, it doesn't check for nulls (for instance, on getClass().getResource()).
This particular Java NIO interface is kind of horrible, since it introduces a global/singleton non thread-safe resource, and its documentation is extremely vague (a lot of unknowns due to provider specific implementations). Results may vary for other FileSystem providers (not JAR). Maybe there's a good reason for it being that way; I don't know, I haven't researched the implementations.
So I guess my main problem would be, how to know the name of the jar where my main class lives.
Assuming that your project is packed in a Jar (not necessarily true!), you can use ClassLoader.getResource() or findResource() with the class name (followed by .class) to get the jar that contains a given class. You'll have to parse the jar name from the URL that gets returned (not that tough), which I will leave as an exercise for the reader :-)
Be sure to test for the case where the class is not part of a jar.
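As a rough sketch of that parsing step (assuming a top-level class that was loaded from a plain jar: URL of the form jar:file:/path/to/my.jar!/com/example/MyClass.class; the JarLocator name is just for illustration):
import java.net.URL;

public class JarLocator {
    // Returns something like "file:/path/to/my.jar", or null if the class was not
    // loaded from a "jar:" URL (e.g. it lives in an exploded classes directory).
    public static String jarPathOf(Class<?> clazz) {
        // For a top-level class, the .class resource sits next to the class itself.
        URL url = clazz.getResource(clazz.getSimpleName() + ".class");
        if (url == null || !"jar".equals(url.getProtocol())) {
            return null;
        }
        // A jar URL looks like jar:file:/path/to/my.jar!/com/example/MyClass.class;
        // getFile() strips the "jar:" prefix, and everything before "!/" is the jar location.
        String spec = url.getFile();
        int separator = spec.indexOf("!/");
        return separator < 0 ? null : spec.substring(0, separator);
    }
}
Be aware that the returned location may be URL-encoded (spaces as %20 and so on), so decode it before handing it to File.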
I've ported acheron55's answer to Java 7 and closed the FileSystem object. This code works in IDE's, in jar files and in a jar inside a war on Tomcat 7; but note that it does not work in a jar inside a war on JBoss 7 (it gives FileSystemNotFoundException: Provider "vfs" not installed, see also this post). Furthermore, like the original code, it is not thread safe, as suggested by errr. For these reasons I have abandoned this solution; however, if you can accept these issues, here is my ready-made code:
import java.io.IOException;
import java.net.*;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Collections;
public class ResourceWalker {
public static void main(String[] args) throws URISyntaxException, IOException {
URI uri = ResourceWalker.class.getResource("/resources").toURI();
System.out.println("Starting from: " + uri);
try (FileSystem fileSystem = (uri.getScheme().equals("jar") ? FileSystems.newFileSystem(uri, Collections.<String, Object>emptyMap()) : null)) {
Path myPath = Paths.get(uri);
Files.walkFileTree(myPath, new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
System.out.println(file);
return FileVisitResult.CONTINUE;
}
});
}
}
}
Here is an example of using the Reflections library to recursively scan the classpath by a regex name pattern, augmented with a couple of Guava perks to fetch resource contents:
Reflections reflections = new Reflections("com.example.package", new ResourcesScanner());
Set<String> paths = reflections.getResources(Pattern.compile(".*\\.template$"));
Map<String, String> templates = new LinkedHashMap<>();
for (String path : paths) {
log.info("Found " + path);
String templateName = Files.getNameWithoutExtension(path);
URL resource = getClass().getClassLoader().getResource(path);
String text = Resources.toString(resource, StandardCharsets.UTF_8);
templates.put(templateName, text);
}
This works with both jars and exploded classes.
Here's a method I wrote for a "run all JUnits under a package". You should be able to adapt it to your needs.
private static void findClassesInJar(List<String> classFiles, String path) throws IOException {
final String[] parts = path.split("\\Q.jar\\\\E");
if (parts.length == 2) {
String jarFilename = parts[0] + ".jar";
String relativePath = parts[1].replace(File.separatorChar, '/');
JarFile jarFile = new JarFile(jarFilename);
final Enumeration<JarEntry> entries = jarFile.entries();
while (entries.hasMoreElements()) {
final JarEntry entry = entries.nextElement();
final String entryName = entry.getName();
if (entryName.startsWith(relativePath)) {
classFiles.add(entryName.replace('/', File.separatorChar));
}
}
}
}
Edit:
Ah, in that case, you might want this snippet as well (same use case :) )
private static File findClassesDir(Class<?> clazz) {
    try {
        String path = clazz.getProtectionDomain().getCodeSource().getLocation().getFile();
        final String codeSourcePath = URLDecoder.decode(path, "UTF-8");
        // Resolve the package directory of the given class relative to the code source.
        return new File(codeSourcePath, clazz.getPackage().getName().replace('.', File.separatorChar));
    } catch (UnsupportedEncodingException e) {
        throw new AssertionError("impossible", e);
    }
}
Just to mention that if you are already using Spring, you can take advantage of the PathMatchingResourcePatternResolver.
For instance, to get all the PNG files from an images folder in the resources:
ClassLoader cl = this.getClass().getClassLoader();
ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver(cl);
Resource[] resources = resolver.getResources("images/*.png");
for (Resource r: resources){
logger.info(r.getFilename());
// From your example
// ImageIO.read(cl.getResource("images/" + r.getFilename()));
}
A jar file is just a zip file with a structured manifest. You can open the jar file with the usual java zip tools and scan the file contents that way, inflate streams, etc. Then use that in a getResourceAsStream call, and it should be all hunky dory.
EDIT / after clarification
It took me a minute to remember all the bits and pieces and I'm sure there are cleaner ways to do it, but I wanted to see that I wasn't crazy. In my project image.jpg is a file in some part of the main jar file. I get the class loader of the main class (SomeClass is the entry point) and use it to discover the image.jpg resource. Then some stream magic to get it into this ImageInputStream thing and everything is fine.
InputStream inputStream = SomeClass.class.getClassLoader().getResourceAsStream("image.jpg");
JPEGImageReaderSpi imageReaderSpi = new JPEGImageReaderSpi();
ImageReader ir = imageReaderSpi.createReaderInstance();
ImageInputStream iis = new MemoryCacheImageInputStream(inputStream);
ir.setInput(iis);
....
ir.read(0); //will hand us a buffered image
Given an actual JAR file, you can list the contents using JarFile.entries(). You will need to know the location of the JAR file though - you can't just ask the classloader to list everything it could get at.
You should be able to work out the location of the JAR file based on the URL returned from ThisClassName.class.getResource("ThisClassName.class"), but it may be a tiny bit fiddly.
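Putting those two pieces together, a minimal sketch (assuming you have already worked out the jar's path on disk, for example from the getResource URL as described above; JarLister and the "images/" prefix are only illustrative):
import java.io.IOException;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarLister {
    // Prints every non-directory entry under the given prefix, e.g. "images/".
    public static void listEntries(String jarPath, String prefix) throws IOException {
        try (JarFile jar = new JarFile(jarPath)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (!entry.isDirectory() && entry.getName().startsWith(prefix)) {
                    System.out.println(entry.getName());
                }
            }
        }
    }
}
Each entry name (e.g. images/image01.png) can then be passed straight to getResource() or getResourceAsStream().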
Some time ago I made a function that gets classes from inside a JAR:
public static Class[] getClasses(String packageName)
throws ClassNotFoundException{
ArrayList<Class> classes = new ArrayList<Class> ();
packageName = packageName.replaceAll("\\." , "/");
File f = new File(jarName);
if(f.exists()){
try{
JarInputStream jarFile = new JarInputStream(
new FileInputStream (jarName));
JarEntry jarEntry;
while(true) {
jarEntry=jarFile.getNextJarEntry ();
if(jarEntry == null){
break;
}
if((jarEntry.getName ().startsWith (packageName)) &&
(jarEntry.getName ().endsWith (".class")) ) {
classes.add(Class.forName(jarEntry.getName().
replaceAll("/", "\\.").
substring(0, jarEntry.getName().length() - 6)));
}
}
}
catch( Exception e){
e.printStackTrace ();
}
Class[] classesA = new Class[classes.size()];
classes.toArray(classesA);
return classesA;
}else
return null;
}
public static ArrayList<String> listItems(String path) throws Exception{
InputStream in = ClassLoader.getSystemClassLoader().getResourceAsStream(path);
byte[] b = new byte[in.available()];
in.read(b);
String data = new String(b);
String[] s = data.split("\n");
List<String> a = Arrays.asList(s);
ArrayList<String> m = new ArrayList<>(a);
return m;
}
There are two very useful utilities both called JarScan:
www.inetfeedback.com/jarscan
jarscan.dev.java.net
See also this question: JarScan, scan all JAR files in all subfolders for specific class
The most robust mechanism for listing all resources in the classpath is currently to use this pattern with ClassGraph, because it handles the widest possible array of classpath specification mechanisms, including the new JPMS module system. (I am the author of ClassGraph.)
How to know the name of the JAR file where my main class lives?
URI mainClasspathElementURI;
try (ScanResult scanResult = new ClassGraph().whitelistPackages("x.y.z")
.enableClassInfo().scan()) {
mainClasspathElementURI =
scanResult.getClassInfo("x.y.z.MainClass").getClasspathElementURI();
}
How can I read the contents of a directory in a similar fashion within a JAR file?
List<String> classpathElementResourcePaths;
try (ScanResult scanResult = new ClassGraph().overrideClasspath(mainClasspathElementURI)
.scan()) {
classpathElementResourcePaths = scanResult.getAllResources().getPaths();
}
There are lots of other ways to deal with resources too.
One more for the road that's a bit more flexible for matching specific filenames because it uses wildcard globbing. In a functional style this could resemble:
import java.io.IOException;
import java.net.URISyntaxException;
import java.nio.file.FileSystem;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.function.Consumer;
import static java.nio.file.FileSystems.getDefault;
import static java.nio.file.FileSystems.newFileSystem;
import static java.util.Collections.emptyMap;
/**
* Responsible for finding file resources.
*/
public class ResourceWalker {
/**
* Globbing pattern to match font names.
*/
public static final String GLOB_FONTS = "**.{ttf,otf}";
/**
 * @param directory The root directory to scan for files matching the glob.
 * @param glob      The globbing pattern to match each path against (e.g. GLOB_FONTS).
 * @param c         The consumer function to call for each matching path found.
 * @throws URISyntaxException Could not convert the resource to a URI.
 * @throws IOException        Could not walk the tree.
 */
public static void walk(
final String directory, final String glob, final Consumer<Path> c )
throws URISyntaxException, IOException {
final var resource = ResourceWalker.class.getResource( directory );
final var matcher = getDefault().getPathMatcher( "glob:" + glob );
if( resource != null ) {
final var uri = resource.toURI();
final Path path;
FileSystem fs = null;
if( "jar".equals( uri.getScheme() ) ) {
fs = newFileSystem( uri, emptyMap() );
path = fs.getPath( directory );
}
else {
path = Paths.get( uri );
}
try( final var walk = Files.walk( path, 10 ) ) {
for( final var it = walk.iterator(); it.hasNext(); ) {
final Path p = it.next();
if( matcher.matches( p ) ) {
c.accept( p );
}
}
} finally {
if( fs != null ) { fs.close(); }
}
}
}
}
Consider parameterizing the file extensions; that is left as an exercise for the reader.
Be careful with Files.walk. According to the documentation:
This method must be used within a try-with-resources statement or similar control structure to ensure that the stream's open directories are closed promptly after the stream's operations have completed.
Likewise, newFileSystem must be closed, but not before the walker has had a chance to visit the file system paths.
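A minimal, standalone illustration of that pattern (not specific to the zip file system):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class WalkExample {
    public static void main(String[] args) throws IOException {
        // try-with-resources ensures the stream and its open directories are closed promptly.
        try (Stream<Path> walk = Files.walk(Paths.get("."))) {
            walk.filter(Files::isRegularFile).forEach(System.out::println);
        }
    }
}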
Just a different way of listing/reading files from a jar URL; it does it recursively for nested jars:
https://gist.github.com/trung/2cd90faab7f75b3bcbaa
URL urlResource = Thread.currentThread().getContextClassLoader().getResource("foo");
JarReader.read(urlResource, new InputStreamCallback() {
    @Override
    public void onFile(String name, InputStream is) throws IOException {
        // got file name and content stream
    }
});
