I need to communicate with a third-party application through INI files, and I'm using the ini4j library for this.
Everything was going well until I needed to use a key longer than 80 characters.
The library is throwing:
Exception in thread "main" java.lang.IllegalArgumentException: Key too long: 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
    at java.util.prefs.AbstractPreferences.put(AbstractPreferences.java:243)
This limit is defined by the JDK in java.util.prefs.Preferences:
public static final int MAX_KEY_LENGTH = 80;
Is there any clean way around this?
I found something related here, but I'm not sure how to use it:
http://ini4j.sourceforge.net/apidocs/index.html?org/ini4j/addon/StrictPreferences.html
This is the sample code:
try {
    Wini ini = new Wini(new File("test.ini"));
    ini.getConfig().setStrictOperator(true);
    ini.getConfig().setEscape(false);
    java.util.prefs.Preferences prefs = new IniPreferences(ini);
    prefs.node("Section").put("0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789", "Test");
    ini.store();
} catch (IOException e) {
    e.printStackTrace();
}
I was able to fix my problem by using the JIniFile library (https://github.com/SubZane/JIniFile) instead of ini4j.
Everything is working fine now.
I have a (maybe simple) question. In my code I want to know whether one or more folders exist on the classpath. First things first: my code works when run from the IDE, but not when it runs from a jar file. I understood that there is a way, especially in Spring Boot, to get the resources with the PathMatchingResourcePatternResolver class. I have done this, but I can't get my code working inside a *.jar. The situation is as follows:
I want to look at a certain path and check whether certain folders exist, to get their names. If there are such folders, their names will be saved in a List. I need this List for further processing. But how do I get the folders from inside a *.jar?
Here is what I have done so far:
public List<String> loadSupportedRecords(Meta metaFromTaxCase) {
    List<String> vastRecordTypes = new ArrayList<>();
    Map<String, String> values = new HashMap<>();
    values.put(TAX_TYPE, metaFromTaxCase.getTaxonomie().getKey());
    values.put(FISCAL_YEAR, metaFromTaxCase.getVeranlagungszeitraum());
    values.put(TAXONOMIE_VERSION, metaFromTaxCase.getVersionDerTaxonomie());
    var path = replaceVariablesInPathWithMapValues(vastConfigTemplatePath, values);
    var indexOfRepo = path.lastIndexOf("{record-type}/");
    var lengthOfVar = "{record-type}".length();
    var subPath = path.substring(0, indexOfRepo + lengthOfVar);
    try {
        for (Resource resource : resourcePatternResolver.getResources(subPath)) {
            vastRecordTypes.add(resource.getFilename()); // <-- This does not work
            // I know that this does not work, but this is my newest commit.
            // I tried with File Object and used the "listFiles()" method,
            // but also failed.
        }
        return vastRecordTypes;
    } catch (IOException e) {
        throw new ResourceNotFoundException(
                String.format(CLASSPATH_ERROR_MESSAGE, path, e.getMessage()));
    }
}
Is there any way to get the folder names both in the IDE AND when running from a *.jar?
Thank you all for helping me as best you can.
Finally I got things to work. My code now runs as follows:
public Set<String> loadSupportedRecords(Meta metaFromTaxCase) {
    Set<String> vastRecordTypes = new HashSet<>();
    Map<String, String> values = new HashMap<>();
    values.put(TAX_TYPE, metaFromTaxCase.getTaxonomie().getKey());
    values.put(FISCAL_YEAR, metaFromTaxCase.getVeranlagungszeitraum());
    values.put(TAXONOMIE_VERSION, metaFromTaxCase.getVersionDerTaxonomie());
    var path = replaceVariablesInPathWithMapValues(vastConfigTemplatePath, values);
    try {
        for (Resource resource : resourcePatternResolver.getResources(path)) {
            vastRecordTypes.add(((ClassPathResource) resource).getPath().split("/")[8]);
        }
        return vastRecordTypes;
    } catch (IOException e) {
        throw new ResourceNotFoundException(
                String.format(CLASSPATH_ERROR_MESSAGE, path, e.getMessage()));
    }
}
As I learned, it is not possible to get "folders" from inside a jar; everything is treated as a file. So I walk the file tree and use the split call above to pick out the individual folder name. In my code I can be sure that the first 8 elements of the array are always the same, so the index can be hard-coded. I also use a Set now instead of a List, to avoid duplicate entries (because there are a lot of files underneath each folder). Thanks to all.
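If the hard-coded index ever becomes fragile, an alternative is to locate the folder name relative to the fixed part of the path instead of counting segments. A minimal sketch, assuming a hypothetical prefix such as "config/vast/" rather than the project's real path:

// Sketch only: basePath is a placeholder for the fixed prefix that precedes
// the record-type folder in the classpath resource path, e.g. "config/vast/".
private static String recordTypeFromResourcePath(String resourcePath, String basePath) {
    int start = resourcePath.indexOf(basePath);
    if (start < 0) {
        throw new IllegalArgumentException("Unexpected resource path: " + resourcePath);
    }
    // Return the first path segment that follows the known prefix.
    return resourcePath.substring(start + basePath.length()).split("/")[0];
}

Called with ((ClassPathResource) resource).getPath() as the first argument, it returns the same folder name without relying on the segment count.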
I want to convert a file with any RDF extension to .ttl (Turtle), and I need to use Apache Jena. I am aware of how it can be accomplished using RDF4J, but the output isn't as accurate as it is with Jena. I want to know how I can auto-detect the extension, or rather the file type, when I don't know the extension of a file read from a directory. The code below works when I hardcode the file name; I just need help with auto-detecting the file type. My code is as follows:
public class Converter {
    public static void main(String[] args) throws FileNotFoundException {
        String fileName = "./abc.rdf";
        Model model = ModelFactory.createDefaultModel();
        // I know this is how it is done with RDF4J, but I need to use Apache Jena.
        /*
        RDFParser rdfParser = Rio.createParser(Rio.getWriterFormatForFileName(fileName).orElse(RDFFormat.RDFXML));
        RDFWriter rdfWriter = Rio.createWriter(RDFFormat.TURTLE,
                new FileOutputStream("./" + stripExtension(fileName) + ".ttl"));
        */
        InputStream is = FileManager.get().open(fileName);
        if (is != null) {
            model.read(is, null, "RDF/XML");
            model.write(new FileOutputStream("./converted.ttl"), "TURTLE");
        } else {
            System.err.println("cannot read " + fileName);
        }
    }
}
All help and advice will be highly appreciated.
There is functionality that handles reading from a file using the extension to determine the syntax:
RDFDataMgr.read(model, fileName);
It also handles compressed files e.g. "file.ttl.gz".
There is a registry of languages:
RDFLanguages.fileExtToLang(...)
RDFLanguages.filenameToLang(...)
For more control, see RDFParser:
RDFParser.create()
    .source(fileName)
    // ... many options, including forcing the language ...
    .parse(model);
https://jena.apache.org/documentation/io/rdf-input.html
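Putting those pieces together, a minimal sketch of the conversion with auto-detection (reusing the file names from the question; imports are from org.apache.jena.rdf.model and org.apache.jena.riot):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFLanguages;

public class Converter {
    public static void main(String[] args) throws IOException {
        String fileName = "./abc.rdf"; // any syntax Jena recognises by extension

        // RDFDataMgr picks the parser from the file extension,
        // so no hard-coded "RDF/XML" is needed.
        Model model = ModelFactory.createDefaultModel();
        RDFDataMgr.read(model, fileName);

        // Optional: see which language was guessed from the file name.
        Lang lang = RDFLanguages.filenameToLang(fileName);
        System.out.println("Detected language: " + lang);

        try (OutputStream out = new FileOutputStream("./converted.ttl")) {
            RDFDataMgr.write(out, model, Lang.TURTLE);
        }
    }
}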
I am doing a conversion from docx to pdf format. I successfully did the variable replacement and have a WordprocessingMLPackage template.
I have tried both approaches, the old deprecated way of converting to PDF and the newer method. Both fail with this exception:
Don't know how to handle "application/pdf" as an output format.
Neither an FOEventHandler, nor a Renderer could be found for this
output format. Error: UnsupportedOperationException
I have tried everything I can. This works on my local machine but not at my workplace. I think I have all the necessary jars. Can you please advise what course of action I should take.
Code:
Method 1:
Docx4J.toPDF(template, new FileOutputStream("newPdf.pdf"));
Method 2:
public static void createPDF(WordprocessingMLPackage template, String outputPath) {
    try {
        // 2) Prepare Pdf settings
        PdfSettings pdfSettings = new PdfSettings();

        // 3) Convert WordprocessingMLPackage to Pdf
        OutputStream out = new FileOutputStream(new File(outputPath));
        PdfConversion converter =
                new org.docx4j.convert.out.pdf.viaXSLFO.Conversion(template);
        converter.output(out, pdfSettings);
    } catch (Throwable e) {
        e.printStackTrace();
    }
}
Both are giving the same error. Any help is appreciated!
My issue is resolved. The problem was that the required fop-1.1.jar was on my Eclipse classpath but not on the server's classpath. I added it there and it worked like a charm.
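If you need to verify the same thing on another environment, a quick sanity check (just a sketch, not part of the original fix) is to try loading one of FOP's classes at runtime:

// If this reports a failure, FOP (e.g. fop-1.1.jar) is not on the runtime
// classpath and docx4j cannot find a renderer for "application/pdf".
public class FopClasspathCheck {
    public static void main(String[] args) {
        try {
            Class.forName("org.apache.fop.apps.FopFactory");
            System.out.println("FOP is on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("FOP is missing from the classpath: " + e.getMessage());
        }
    }
}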
Novice in HDFS and Hadoop:
I am developing a program which should get all the files of a specific directory, where there are several small files of any type.
It should take every file and append it to a compressed SequenceFile, where the key must be the path of the file and the value must be the file's content. For now my code is:
import java.net.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.io.compress.BZip2Codec;

public class Compact {
    public static void main(String[] args) throws Exception {
        try {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(new URI("hdfs://quickstart.cloudera:8020"), conf);
            Path destino = new Path("/user/cloudera/data/testPractice.seq"); // test args[1]
            if (fs.exists(destino)) {
                System.out.println("exist : " + destino);
                return;
            }
            BZip2Codec codec = new BZip2Codec();
            SequenceFile.Writer outSeq = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(fs.makeQualified(destino)),
                    SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK, codec),
                    SequenceFile.Writer.keyClass(Text.class),
                    SequenceFile.Writer.valueClass(FSDataInputStream.class));

            FileStatus[] status = fs.globStatus(new Path("/user/cloudera/data/*.txt")); // args[0]
            for (int i = 0; i < status.length; i++) {
                FSDataInputStream in = fs.open(status[i].getPath());
                outSeq.append(new org.apache.hadoop.io.Text(status[i].getPath().toString()), new FSDataInputStream(in));
                fs.close();
            }
            outSeq.close();
            System.out.println("End Program");
        } catch (Exception e) {
            System.out.println(e.toString());
            System.out.println("File not found");
        }
    }
}
But after every execution I receive this exception:
java.io.IOException: Could not find a serializer for the Value class: 'org.apache.hadoop.fs.FSDataInputStream'. Please ensure that the configuration 'io.serializations' is properly configured, if you're using custom serialization.
File not found
I understand the error must be in the type of value I am creating and the type of object I add to the SequenceFile, but I don't know which one I should use. Can anyone help me?
FSDataInputStream, like any other InputStream, is not meant to be serialized. What would serializing an "iterator" over a stream of bytes even mean?
What you most likely want to do is store the content of the file as the value. For example, you can switch the value type from FSDataInputStream to BytesWritable and just read all the bytes out of the FSDataInputStream. One drawback of using a key/value SequenceFile for such a purpose is that the content of each file has to fit in memory. That is fine for small files, but you have to be aware of this limitation.
I am not sure what you are really trying to achieve, but perhaps you could avoid reinventing the wheel by using something like Hadoop Archives?
Thanks a lot for your comments; the problem was the serializer, as you say, and in the end I used BytesWritable:
FileStatus[] status = fs.globStatus(new Path("/user/cloudera/data/*.txt")); // args[0]
for (int i = 0; i < status.length; i++) {
    FSDataInputStream in = fs.open(status[i].getPath());
    byte[] content = new byte[(int) fs.getFileStatus(status[i].getPath()).getLen()];
    in.readFully(content); // read the whole file into the buffer
    in.close();
    // Note: the writer is now created with SequenceFile.Writer.valueClass(BytesWritable.class).
    outSeq.append(new org.apache.hadoop.io.Text(status[i].getPath().toString()),
            new org.apache.hadoop.io.BytesWritable(content));
}
outSeq.close();
There are probably better solutions in the Hadoop ecosystem, but this problem was an exercise for a degree I am doing, and for now we are reinventing the wheel to understand the concepts ;-).
I read the claims from the Sun people about the wonderful space economy of not only using FastInfoset, but using it with an external vocabulary. The code for this purpose is included in the most recent version (1.2.8), but it is not exactly fully documented.
For many files, this works just great for me. However, we've come up with an XML file which, when serialized from DOM with the vocabulary I created (using the generator in the FI library) and then read back into DOM, no longer matches the original. The mismatches are all in PCDATA.
I just call setVocabulary on the serializer and setExternalVocabulary, with a map from URI to vocabulary, on the reader.
I had to invent my own mechanism to actually serialize a vocabulary; there didn't seem to be one anywhere in the FI library.
One fiddly bit of business is that the org.jvnet.fastinfoset.Vocabulary class is what the generator gives you, but it's not what the parsers and serializers consume. I made arrangements to serialize these, and then use the code below to turn them into the needed objects:
private static void initializeAnalysis() {
    InputStream is = FastInfosetUtils.class.getResourceAsStream(ANALYSIS_VOCAB_CLASSPATH);
    try {
        ObjectInputStream ois = new ObjectInputStream(is);
        analysisJvnetVocab = (SerializableVocabulary) ois.readObject();
        ois.close();
    } catch (IOException e) {
        throw new RuntimeException(e);
    } catch (ClassNotFoundException e) {
        throw new RuntimeException(e);
    }
    analysisSerializerVocab = new SerializerVocabulary(analysisJvnetVocab.getVocabulary(), false);
    analysisParserVocab = new ParserVocabulary(analysisJvnetVocab.getVocabulary());
}
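The vocabulary resource read above was written beforehand with plain Java serialization. Roughly like this (a sketch, not the exact helper from my code, assuming the same SerializableVocabulary wrapper that initializeAnalysis() reads back):

// Sketch only: writes the SerializableVocabulary wrapper that
// initializeAnalysis() deserializes above.
private static void writeVocabulary(SerializableVocabulary vocab, File target) throws IOException {
    ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(target));
    try {
        oos.writeObject(vocab);
    } finally {
        oos.close();
    }
}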
and then, to actually write a document:
SerializerVocabulary fullVocab = new SerializerVocabulary();
fullVocab.setExternalVocabulary(ANALYSIS_VOCAB_URI, analysisSerializerVocab, false);
// pass fullVocab to setVocabulary.
and to read:
Map<Object, Object> vocabMap = new HashMap<Object, Object>();
vocabMap.put(ANALYSIS_VOCAB_URI, analysisParserVocab);
// pass map into setExternalVocabulary
I could easily imagine that my recipe for creating serialization vocabularies is not right; it's not as if I was working from a tutorial. Does anyone happen to know?
UPDATE
Since no one around here had anything to add to this question, I made a test case and filed a bug report. Somewhat to my surprise, it turned out that it was, in fact, a bug, and a fix has been made.