Batch, Parse, and Convert Meta-Data from .1sc Files - java

TLDR: Questions are after the break.
I am looking to convert and store information from a large (3TB) set of *.1sc images (Bio-Rad, Quantity One). In addition to the actual image, each file contains a good deal of information regarding where/how the image was taken (meta-data). All of this seems to be held in the Intel Hex format (or at least they all open with "Stable File Version 2.0 Intel Format" in hex).
The ImageJ plugin Bio-Formats can handle the image, and includes functionality in MetadataTools. To capture just the images in batch, I had great success using the batchTiffconvert plugin. The meta-data that seems to be available in ImageJ is incomplete for this format, and I'm not certain how to use MetadataTools (any good guide references would be appreciated; I'm currently going over the API).
My real problem isn't parsing the hex to find what I'm looking for; where I'm failing is converting the hex into something meaningful. Example:
I can parse the hex for scan_area, but I haven't been able to convert 00 10 00 16 00 EC B5 86 00 into something meaningful.
Approaching this from the same direction as a similar DM3 question, I was able to make an XML file, but even when I wrote out the whole XML file, much of the meta-data wasn't included (it did have things like the date-stamp, which is good). I think this is because of the information passed to GelReader.java from BioRadReader.java, in particular this section:
if (getMetadataOptions().getMetadataLevel() != MetadataLevel.MINIMUM) {
  String units = firstIFD.getIFDStringValue(MD_FILE_UNITS);
  String lab = firstIFD.getIFDStringValue(MD_LAB_NAME);
  addGlobalMeta("Scale factor", scale);
  addGlobalMeta("Lab name", lab);
  addGlobalMeta("Sample info", info);
  addGlobalMeta("Date prepared", prepDate);
  addGlobalMeta("Time prepared", prepTime);
  addGlobalMeta("File units", units);
  addGlobalMeta("Data format",
    fmt == SQUARE_ROOT ? "square root" : "linear");
}
This is because the MetadataLevel set in all the Bio-Rad readers is MetadataLevel.MINIMUM. I tried adding the additional metadata I wanted here, but again it couldn't be converted/decoded usefully.
Is it possible to retrieve more of the metadata using this system? If so, am I working in the right section of code? The source for Bio-Formats is quite large, and I won't even pretend to have a good grasp on it (though I'm trying). Am I just running into a proprietary-format problem? Can anyone tell me how to convert the hex values, or point me to a resource that explains it?

First of all: note that neither of the sources you linked above actually corresponds to the .1sc file format reader of Bio-Formats. You want the BioRadGelReader.
The Bio-Formats library parses three types of metadata. From the About Bio-Formats page:
There are three types of metadata in Bio-Formats, which we call core metadata, original metadata, and OME metadata.
Core metadata only includes things necessary to understand the basic structure of the pixels: image resolution; number of focal planes, time points, channels, and other dimensional axes; byte order; dimension order; color arrangement (RGB, indexed color or separate channels); and thumbnail resolution.
Original metadata is information specific to a particular file format. These fields are key/value pairs in the original format, with no guarantee of cross-format naming consistency or compatibility. Nomenclature often differs between formats, as each vendor is free to use their own terminology.
OME metadata is information from #1 and #2 converted by Bio-Formats into the OME data model. Performing this conversion is the primary purpose of Bio-Formats. Bio-Formats uses its ability to convert proprietary metadata into OME-XML as part of its integration with the OME and OMERO servers—essentially, they are able to populate their databases in a structured way because Bio-Formats sorts the metadata into the proper places. This conversion is nowhere near complete or bug free, but we are constantly working to improve it. We would greatly appreciate any and all input from users concerning missing or improperly converted metadata fields.
The Bio-Formats command line tools are capable of dumping all original metadata key/value pairs for a given dataset, as well as the converted OME-XML.
In your case, if what you want is quantity over quality, you probably want to record all the original metadata somehow. The showinf command line tool does that automatically (you actually have to pass the -nometa flag to suppress it).
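For reference, roughly the same dump can be produced from Java instead of the command line. The sketch below is only an outline against the loci.formats API as I understand it (ImageReader, DefaultMetadataOptions, MetadataLevel, getGlobalMetadata); check the names against the Bio-Formats version you actually have:

import java.util.Map;

import loci.formats.ImageReader;
import loci.formats.in.DefaultMetadataOptions;
import loci.formats.in.MetadataLevel;

public class DumpOriginalMetadata {
  public static void main(String[] args) throws Exception {
    ImageReader reader = new ImageReader();
    // Ask for everything the reader knows how to parse, not just MINIMUM.
    reader.setMetadataOptions(new DefaultMetadataOptions(MetadataLevel.ALL));
    reader.setId(args[0]); // path to the .1sc file

    // The original (format-specific) key/value pairs -- the same set showinf prints.
    for (Map.Entry<String, Object> entry : reader.getGlobalMetadata().entrySet()) {
      System.out.println(entry.getKey() + " = " + entry.getValue());
    }
    reader.close();
  }
}

For the .1sc reader specifically, expect that table to be nearly empty, for the reason given below.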
If you look over the complete list of original metadata key/value pairs and the information you seek is still not there, then we'd have to go to the next level and improve the BioRadGelReader to parse more metadata.
Unfortunately, inspecting the source code, it looks like essentially nothing is parsed into the original metadata table for that file format. It was likely reverse engineered, since the Bio-Rad Gel format page says that we do not have a specification document for it.
So what that means is that the Bio-Formats developers are as clueless about the file structure as you are, and would do the same thing you are doing: stare at a hex editor and try to figure things out. Some tricks include:
Look up metadata values using the official Bio-Rad software, then search for those values in various encodings using your hex editor.
Edit one metadata value (if possible) using the official Bio-Rad software—or by doing multiple acquisitions as similarly as possible except for one variable—then diff the output files to see what effect changing that value had.
Check whether the first few hundred bytes match a known pattern for container formats such as Microsoft OLE-based data, TIFF-based data, or HDF-based data, since many formats reuse these general container structures (a quick magic-byte check is sketched below).
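A quick, hedged sketch of that last check in plain Java; the magic numbers are the published signatures for OLE2, TIFF, and HDF5, and nothing here is specific to .1sc:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;

public class MagicByteCheck {
  private static final byte[] OLE2 = {(byte) 0xD0, (byte) 0xCF, 0x11, (byte) 0xE0,
                                      (byte) 0xA1, (byte) 0xB1, 0x1A, (byte) 0xE1};
  private static final byte[] TIFF_LE = {'I', 'I', 0x2A, 0x00};
  private static final byte[] TIFF_BE = {'M', 'M', 0x00, 0x2A};
  private static final byte[] HDF5 = {(byte) 0x89, 'H', 'D', 'F', '\r', '\n', 0x1A, '\n'};

  public static void main(String[] args) throws IOException {
    byte[] head = new byte[8];
    try (FileInputStream in = new FileInputStream(args[0])) {
      in.read(head);  // read the first few bytes of the file
    }
    System.out.println("OLE2 container:       " + startsWith(head, OLE2));
    System.out.println("TIFF (little-endian): " + startsWith(head, TIFF_LE));
    System.out.println("TIFF (big-endian):    " + startsWith(head, TIFF_BE));
    System.out.println("HDF5:                 " + startsWith(head, HDF5));
  }

  private static boolean startsWith(byte[] data, byte[] magic) {
    return Arrays.equals(Arrays.copyOf(data, magic.length), magic);
  }
}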
You could also email Bio-Rad to ask whether they are willing to send a spec, and if so, use it to improve the file format reader, and/or forward it on to the Bio-Formats developers.

Related

Is it possible to do this type of search in Java

I am stuck on a project at work that I do not think is really possible and I am wondering if someone can confirm my belief that it isn't possible or at least give me new options to look at.
We are doing a project for a client that involved a mass download of files from a server (easily done with ftp4j and a document name list), but now we need to sort through the data from the server. The client is doing work in contracts and wants us to pull out relevant information such as: Licensor, Licensee, Product, Agreement date, termination date, royalties, restrictions.
Since the documents are completely unstandardized, is that even possible to do? I can imagine loading in the files and searching them, but I would have no idea how to pull out information from a paragraph, such as the licensor and the restrictions on the agreement. These are not hashes but just long contracts. Even if I were to search for 'Licensor', it will come up in the document multiple times. The documents aren't even in a consistent file format. Some are PDF, some are text, some are HTML, and I've even seen some that were as bad as a scanned image inside a PDF.
My boss keeps pushing for me to work on this project but I feel as if I am out of options. I primarily do web and mobile so big data is really not my strong area. Does this sound possible to do in a reasonable amount of time? (We're talking about at the very minimum 1000 documents). I have been working on this in Java.
I'll do my best to give you some information, as this is not my area of expertise. I would strongly consider writing a script that identifies the type of file you are dealing with, and then calls the appropriate parsing methods to handle what you are looking for.
Since you are dealing with big data, Python could be pretty useful; JavaScript would be my next choice.
If your overall code is written in Java, it should be very portable and flexible no matter which one you choose. Using a regex or a specific string search would be a good way to approach this:
If you are concerned only with 'Licensor' followed by a name, you could identify the format of that particular instance and search for something similar using the regex you create. This can be extrapolated to other kinds of searches.
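To make that concrete, here is a rough, hedged sketch of such a regex pass; the label variants and the capture group are guesses to tune against your real contracts:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LicensorGrep {
  public static void main(String[] args) {
    String contractText =
        "This Agreement is made between Acme Corp (the \"Licensor\") and ...\n"
      + "Licensor: Acme Corp\nLicensee: Example Ltd\n";

    // Case-insensitive: "Licensor" optionally followed by ':' or "is",
    // then capture the rest of the line as a candidate value.
    Pattern p = Pattern.compile("(?i)\\bLicensor\\b\\s*(?::|is)?\\s*(.+)");
    Matcher m = p.matcher(contractText);
    while (m.find()) {
      System.out.println("Candidate licensor: " + m.group(1).trim());
    }
  }
}

You would still need to rank or manually review the candidates, since the word appears many times per document.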
For getting text from an image, try using the APIs on this page:
How to read images using Java API?
Scanned Image to Readable Text
For text from a PDF:
https://www.idrsolutions.com/how-to-search-a-pdf-file-for-text/
Also, note that PDF content streams are usually compressed, so you will generally need a library to extract the text first; once you have plain text, a regex is most likely the way to search through it. That would be my method of attack, or possibly using String.split() and building a StringBuilder that you can append to.
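For the extraction step, one option (an assumption on my part, not something the linked page uses) is Apache PDFBox. A minimal sketch against the 2.x API; in 3.x the loading call is Loader.loadPDF instead:

import java.io.File;
import java.io.IOException;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

public class PdfTextSearch {
  public static void main(String[] args) throws IOException {
    try (PDDocument doc = PDDocument.load(new File(args[0]))) {
      String text = new PDFTextStripper().getText(doc);
      // Reuse whatever keyword/regex logic you apply to the plain-text files.
      if (text.matches("(?is).*\\blicensor\\b.*")) {
        System.out.println(args[0] + " mentions a licensor");
      }
    }
  }
}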
For text from HTML doc:
Here is a cool HTML parser library: http://jericho.htmlparser.net/docs/index.html
A resource that teaches how to remove HTML tags and get the good stuff: http://www.rgagnon.com/javadetails/java-0424.html
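A minimal sketch of tag stripping with the Jericho library linked above; the class names are from its net.htmlparser.jericho package as I recall them, so double-check against the current docs:

import java.io.FileReader;
import java.io.IOException;

import net.htmlparser.jericho.Source;

public class HtmlToText {
  public static void main(String[] args) throws IOException {
    Source source = new Source(new FileReader(args[0]));
    source.fullSequentialParse();                         // parse the whole document once
    String text = source.getTextExtractor().toString();   // tags removed, readable text kept
    System.out.println(text);
  }
}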
If you need anything else, let me know. I'll do my best to find it!
Apache Tika can extract plain text from almost any commonly used file format.
But with the situation you describe, you would still need to analyze the text, as in natural language recognition. That's a field where, despite some advances (made by dedicated research teams spending many person-years!), computers still fail pretty badly (heck, even humans fail at it sometimes).
With the number of documents you mention (thousands), hire a temp worker and have the documents sorted/tagged by human brain power. It will be cheaper and you will have fewer misclassifications.
You can use Tika for text extraction. If there is a fixed pattern, you can extract information using regex or XPath queries. Another solution is to use Solr, as shown in this video. You don't need Solr, but watch the video to get the idea.
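A minimal sketch of the Tika approach both answers mention, using Tika's simple facade; once the text is out, the same regex/keyword logic applies regardless of the original file type:

import java.io.File;

import org.apache.tika.Tika;

public class TikaExtract {
  public static void main(String[] args) throws Exception {
    Tika tika = new Tika();
    String text = tika.parseToString(new File(args[0]));  // format is auto-detected
    System.out.println(text);
  }
}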

Is there a clean way to transform text files that are not the same into a standard format

I'm pretty sure the answer I'm going to get is: "why don't you just have the text files all be the same or follow some set format?" Unfortunately I do not have this option, but I was wondering if there is a way to take any text file and translate it over to another text or XML file that will always look the same?
The text files pretty much have the same data, just arranged differently.
The closest I can come up with is to have an XSLT sheet for each text file, but then I have to turn around and read the file that was just created, delete it, and repeat for each text file.
So, is there a way to grab the data off text files that essentially have the same data, just stored differently, and store this data in an object that I could then re-use later on in some process?
If it were up to me, I would push for every text file to follow some predefined format since they all pretty much contain the same data, but it's not up to me.
Odd question... You say they are text files yet mention XSLT as a possible solution. XSLT will only work if the source is XML; if that is so, please redefine the question. If you really do mean text files, I assume delimiter-separated (e.g. CSV), fixed-length, ...
There are some parsers (like Smooks) out there that allow you to parse multiple formats, but it will still require you to perform the "mapping" yourself, of course.
This is a typical problem in the integration world, so any integration tool should offer you a solution (e.g. WSO2, Fuse, ...).
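To illustrate the mapping idea without pulling in a full integration tool, here is a minimal sketch: one common record type plus a small parser interface with one implementation per incoming layout. The field names and the semicolon layout are invented placeholders:

import java.util.ArrayList;
import java.util.List;

public class CommonFormatExample {

  // The single in-memory representation every file gets mapped onto.
  public static class Record {
    public String id;
    public String name;
    public String value;
  }

  // One implementation per source layout (CSV, fixed-length, ...).
  public interface RecordParser {
    List<Record> parse(List<String> lines);
  }

  // Example: a semicolon-delimited layout "id;name;value".
  public static class SemicolonParser implements RecordParser {
    @Override
    public List<Record> parse(List<String> lines) {
      List<Record> records = new ArrayList<>();
      for (String line : lines) {
        String[] parts = line.split(";", -1); // -1 keeps trailing empty fields
        Record r = new Record();
        r.id = parts[0];
        r.name = parts[1];
        r.value = parts[2];
        records.add(r);
      }
      return records;
    }
  }
}

Whatever later process needs the data then works only with Record objects and never sees the source layout.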

PDF Handling in Java

I have created a program that should one day become a PDF editor.
Its purpose will be saving the GUI's textual content to a PDF and loading it back from it. The GUI resembles a text editor, but it only has certain fields (JTextAreas, actually).
It can look like this (this is only one page; it can have many more, and the upper and lower margins are cut out of the picture). It should actually resemble A4 in pixel size.
I have looked around a bit for PDF libraries and found out that iText could suit my PDF-creating needs. However, if I understood it correctly, it retrieves text from a whole page as a string, which won't work for me, because I will need to detect different fields/paragraphs/or something to be able to load them back into the program.
Now, I'm a bit lazy, but I don't want to spend hours going through numerous PDF libraries just to find out that they won't work for me.
Instead, I'm asking someone with a bit more Java PDF handling experience to recommend one according to my needs.
Or maybe recommend how to add invisible parts to the PDF which will help my program determine where exactly it is situated inside the PDF file...
Just to be clear (I formed my question wrong before), the only thing I need to put in my PDF is text, and that's all I need to be able to get out later. My program should be able to read PDFs which it created itself...
Also, because of the designated use of files created with this program, they need to be in the PDF format.
Short Answer: Use an intermediate format like JSON or XML.
Long Answer: You're using PDFs in a manner they weren't designed for. PDFs were not designed to store data; they were designed to present and format data in a portable form. Furthermore, a PDF is a very "heavy" way to store data. I suggest storing your data in another manner, perhaps in a format like JSON or XML.
The advantage now is that you are not tied to a specific output-format like PDF. This can come in handy later on if you decide that you want to export your data into another format (like a Word document, or an image) because you now have a common representation.
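As a minimal sketch of that intermediate-format idea using only the JDK, the text of each field could be written to and read back from a small XML file via java.util.Properties; the field names here are invented placeholders:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class FieldStore {
  // Save the text of each GUI field under a named key.
  public static void save(String path, String title, String body) throws IOException {
    Properties p = new Properties();
    p.setProperty("title", title);
    p.setProperty("body", body);
    try (FileOutputStream out = new FileOutputStream(path)) {
      p.storeToXML(out, "document fields");
    }
  }

  // Load the fields back for editing.
  public static Properties load(String path) throws IOException {
    Properties p = new Properties();
    try (FileInputStream in = new FileInputStream(path)) {
      p.loadFromXML(in);
    }
    return p;
  }
}

The same data can then be handed to iText whenever a PDF rendering is actually needed.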
I found this link and another link that provide examples showing how to store and read back metadata in your PDF. This might be what you're looking for, but again, I don't recommend it.
If you really insist on using PDF to store data, I suggest that you store the actual data in either XML or RDF and then attach that to the PDF file when you generate it. Then you can read the XML back for the data.
Assuming that your application will only consume PDF files generated by the same application, there is one part of the PDF specification called Marked Content, that was introduced precisely for this purpose. Using Marked Content you can specify the structure of the text in your document (chapter, paragraph, etc).
Read Chapter 14 - Document Interchange of the PDF Reference Document for more details.

Java: ArrayList, String manipulation and Parsing

I am writing an application that stores references for books, journals, sites and so on. I mean, I have already done most of it.
What I want is a suggestion regarding the best way to implement the above specs.
What text file format should I use to store the library? Not the file type, but the format. I am using a simple text file at the moment, but I am planning to implement a format like the one below.
<book><Art of Human Hacking><New York><2011><1>
<journal><Achieving Maximum Speed In 802.11n><London><2009><260-265>
The first tags, <book> and <journal>, are type identifiers. I have used an ArrayList. Should I use a multi-dimensional ArrayList and store each item like below?
[[Art of Human Hacking,New York,2011,1][Achieving Maximum Speed In 802.11n,London,2009,260-265]]
I have used StringTokenizer, and I cannot differentiate spaces. How do I fix this?
I have already implemented all the features, including listing all, listing unique, searching, editing, removing, and adding. But everything only works for content without spaces.
You should use an existing serializer instead of writing your own, unless the project forbids it.
For compatibility and human readability, CSV would be your best bet. Use an existing CSV parser to get your escaping correct (not that hard to write yourself, but difficult enough to warrant using an existing parser to avoid bugs). Google turned up: http://sourceforge.net/projects/javacsv/
If human editing is not a priority, then JSON is also a good format (it is human readable, but not quite as simple as CSV, and won't display in Excel, for instance).
Then you have binary protocols, such as native Java serialization, or even Google protocol buffers. These may be faster, but obviously lose the ability to view the data store and would probably complicate your debugging.
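As a small illustration of the delimiter point (and of why StringTokenizer's default whitespace splitting, mentioned in the question, loses titles that contain spaces), here is a plain-Java sketch with invented field names; for real CSV with quoting and escaping, use the parser library linked above instead of hand-rolling it:

public class Reference {
  // One record per line, fields separated by a tab so spaces inside a title survive.
  public final String type;    // "book" or "journal"
  public final String title;
  public final String place;
  public final String year;
  public final String pages;

  public Reference(String type, String title, String place, String year, String pages) {
    this.type = type;
    this.title = title;
    this.place = place;
    this.year = year;
    this.pages = pages;
  }

  public String toLine() {
    return String.join("\t", type, title, place, year, pages);
  }

  public static Reference fromLine(String line) {
    String[] f = line.split("\t", -1);   // -1 keeps empty trailing fields
    return new Reference(f[0], f[1], f[2], f[3], f[4]);
  }
}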

Palm Database (PDB) files in Java?

Has anybody written any classes for reading and writing Palm Database (PDB) files in Java? (I mean on a server, not on the Palm device itself.) I tried to google, but all I got were Protein Data Bank references.
I wrote a Perl program that does it using Palm::PDB.pm, but I want to turn it into a servlet for a GWT app.
The jSyncManager project at http://www.jsyncmanager.org/ is under the LGPL and includes classes to read and write PDB files -- look in jSyncManager/API/Protocol/Util/DLPDatabase.java in its source code. It looks like the core code you need from this could be isolated from the rest of the library with a little effort.
There are a few ways that you can go about this:
Easiest but slowest: find a Perl-to-Java bridge. This will not be quick, but it will work and it should involve the least amount of work.
Find a C++/C# implementation that you have the source to and convert it (this should be the fastest solution).
Find a Java reader... there seem to be a few listed on Google; however, I do not have any experience with these.
Depending on what your intended usage is, you might look into writing a simple reader yourself. The format is pretty simple and you only need to handle a couple of simple fields to parse it.
Basically there is a header for the entire file which has a 2-byte integer at the end that specifies the number of records. So just skip your way through the bytes for all the other fields in the header and then read the last field, which is the number of records in the file. Be aware that the PDB format writes integers with the most significant byte first.
Following this, there will be a record header for each record, the first field of which is the actual offset into the file for the record itself. Again, be aware of the byte order.
So, now you have the offsets into the file for each record in the file, which should make it very easy to read the actual records as long as you know the format of these for the type of PDB file you are trying to read.
Wikipedia has a nice overview of the header formats.
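Putting the description above into a hedged sketch (the 78-byte header size, the record-count position, and the 8-byte record-list entries follow the layout described here and on the Wikipedia page; verify them against the spec before relying on this):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class PdbRecordOffsets {
  public static void main(String[] args) throws IOException {
    try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
      in.skipBytes(76);                        // name, attributes, dates, type, creator, ...
      int numRecords = in.readUnsignedShort(); // last header field; big-endian, as DataInputStream reads it

      int[] offsets = new int[numRecords];
      for (int i = 0; i < numRecords; i++) {
        offsets[i] = in.readInt();             // record offset into the file (big-endian)
        in.skipBytes(4);                       // record attributes byte + 3-byte unique ID
      }

      for (int i = 0; i < numRecords; i++) {
        System.out.printf("record %d starts at byte %d%n", i, offsets[i]);
      }
    }
  }
}

From there you seek to each offset and decode the record payload according to the specific PDB application's format.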
Maybe JPilot can help? They must have a lot of Java code dealing with Palm OS data.
