I am working with Solr and I'm interested in understanding the nitty-gritty details of the Solr index. I am using SolrCloud, and the index folder contains several files, including:
_k.fdt -> field data
_k.fnm -> fields
segments_5
_k.fdx -> field index
_k.si -> segment info
...
They all look like binary/serialized objects. I tried to follow this code to read the index files but failed with the following error. Can anyone help me with that?
public class Readfdt {
    public static void main(String[] args) throws IOException {
        Path indexpath = Paths.get(
                "<solrhome>/example/cloud/node1/solr/gettingstarted_shard1_replica1/data/indexbackup");
        String indexfile = "_k.fdt";
        Codec codec = new Lucene54Codec();
        Directory dir = FSDirectory.open(indexpath);
        String segmentName = "_k";
        final byte[] segmentID = new byte[StringHelper.ID_LENGTH];
        SegmentInfo segmentInfos = codec.segmentInfoFormat().read(dir, segmentName, segmentID, IOContext.READ);
        System.out.println(segmentInfos);
    }
}
And the error message is:
Exception in thread "main" org.apache.lucene.index.CorruptIndexException: file mismatch, expected id=0, got=2umd1rtwuv6lu48qbzywr533s (resource=BufferedChecksumIndexInput(MMapIndexInput(path="<solrhome>/example/cloud/node1/solr/gettingstarted_shard1_replica1/data/indexbackup/_k.si")))
at org.apache.lucene.codecs.CodecUtil.checkIndexHeaderID(CodecUtil.java:266)
at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256)
at org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:86)
at com.datafireball.Readfdt.main(Readfdt.java:29)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum passed (13f6e228). possibly transient resource issue, or a Lucene or JVM bug (resource=BufferedChecksumIndexInput(MMapIndexInput(path="<solrhome>/example/cloud/node1/solr/gettingstarted_shard1_replica1/data/indexbackup/_k.si")))
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:379)
at org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:117)
... 1 more
Last, but not least, I am new to Java in general and wonder what the best practice is for quickly locating the right class/code to deserialize any given serialized file.
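For what it's worth, a possible alternative (a sketch, not tested against this exact setup): the "file mismatch, expected id=0" error occurs because the hand-built segmentID array is all zeros rather than the segment's real ID. Instead of calling the codec's SegmentInfoFormat directly, Lucene's SegmentInfos.readLatestCommit reads the segments_N file and resolves each segment's ID itself:

```java
import java.nio.file.Paths;
import org.apache.lucene.index.SegmentCommitInfo;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class ReadSegments {
    public static void main(String[] args) throws Exception {
        // Hypothetical path; substitute the real index directory.
        try (Directory dir = FSDirectory.open(Paths.get("/path/to/index"))) {
            // Reads the latest segments_N file; each SegmentCommitInfo
            // carries its segment's real ID, so no hand-built byte[] is needed.
            SegmentInfos infos = SegmentInfos.readLatestCommit(dir);
            for (SegmentCommitInfo sci : infos) {
                System.out.println(sci.info.name + " docs=" + sci.info.maxDoc());
            }
        }
    }
}
```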
I'm using XAdES4j 2.2.0 for signing XML documents.
Previously I used version 1.5.1, and I adapted that code to the new version.
While XAdES-BES, XAdES-T and XAdES-C work flawlessly, there is a problem with XAdES-XL (applied after XAdES-C), i.e. an exception is caught:
HttpTsaConfiguration must be configured in the profile in order to use an HTTP-based time-stamp token provider
My code looks like:
// Certificate for signing is stored in PKCS12 keystore in file.
KeyingDataProvider keyingDataProvider = FileSystemKeyStoreKeyingDataProvider
        .builder("PKCS12", sKeyStorePath, SigningCertificateSelector.single())
        .storePassword(new DirectPasswordProvider(sKeyStorePassword))
        .entryPassword(new DirectPasswordProvider(sKeyStorePassword))
        .fullChain(true)
        .build();
// KeyStore and CertStore for validating signatures and timestamps.
CertificateValidationProvider validationProvider = PKIXCertificateValidationProvider
        .builder(keyStore)
        .checkRevocation(true)
        .intermediateCertStores(certStore)
        .build();
// Read document from file on disk.
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(true);
DocumentBuilder db = dbf.newDocumentBuilder();
doc = db.parse(new File(sFilePath));
// Sign document with XAdES-C.
signDocumentC(keyingDataProvider, doc, validationProvider);
// Sign document with XAdES-XL afterwards.
signDocumentXL(keyingDataProvider, doc, validationProvider);
...
/** Method for signing with XAdES-C. */
private void signDocumentC(KeyingDataProvider keyingDataProvider, Document doc, CertificateValidationProvider validationProvider) throws Exception
{
    Element elemToSign = doc.getDocumentElement();
    DOMHelper.useIdAsXmlId(elemToSign);
    ValidationDataProvider vdp = new ValidationDataFromCertValidationProvider(validationProvider);
    XadesSigner signer = new XadesCSigningProfile(keyingDataProvider, vdp)
            .with(new HttpTimeStampTokenProvider(new DefaultMessageDigestProvider(), new HttpTsaConfiguration("http://freetsa.org/tsr")))
            .newSigner();
    // Skip check for countersigning, just sign.
    new Enveloped(signer).sign(elemToSign);
}
/** Method for signing with XAdES-XL. */
private void signDocumentXL(KeyingDataProvider keyingProvider, Document doc, CertificateValidationProvider validationProvider) throws Exception
{
    Element elemToSign = doc.getDocumentElement();
    DOMHelper.useIdAsXmlId(elemToSign);
    NodeList signatures = doc.getElementsByTagNameNS(Constants.SignatureSpecNS, Constants._TAG_SIGNATURE);
    Element signatureNode = (Element) signatures.item(signatures.getLength() - 1);
    XadesVerificationProfile nistVerificationProfile = new XadesVerificationProfile(validationProvider);
    XadesSignatureFormatExtender formExt = new XadesFormatExtenderProfile().getFormatExtender();
    XAdESVerificationResult res = nistVerificationProfile.newVerifier().verify(signatureNode, null, formExt, XAdESForm.X_L);
}
However, there is always an exception thrown at the last line (nistVerificationProfile.newVerifier().verify(...)), that looks like:
[2023-01-23 18:22:29] INFO: [] ESignatureXML.signDocumentXL: Signing xml document XAdES-XL...
xades4j.production.PropertyDataGeneratorNotAvailableException: Property data generation failed for SigAndRefsTimeStamp: data object generator cannot be created
at xades4j.production.PropertyDataGeneratorsMapperImpl.getGenerator(PropertyDataGeneratorsMapperImpl.java:51)
at xades4j.production.PropertiesDataObjectsGeneratorImpl.doGenPropsData(PropertiesDataObjectsGeneratorImpl.java:87)
at xades4j.production.PropertiesDataObjectsGeneratorImpl.genPropsData(PropertiesDataObjectsGeneratorImpl.java:73)
at xades4j.production.PropertiesDataObjectsGeneratorImpl.generateUnsignedPropertiesData(PropertiesDataObjectsGeneratorImpl.java:64)
at xades4j.production.XadesSignatureFormatExtenderImpl.enrichSignature(XadesSignatureFormatExtenderImpl.java:79)
at xades4j.verification.XadesVerifierImpl.verify(XadesVerifierImpl.java:497)
at com.my.ESignatureXML.signDocumentXL(ESignatureXML.java:709)
at com.my.ESignatureXML.signDocument(ESignatureXML.java:482)
at com.my.ESignatureXML.performSigning(ESignatureXML.java:236)
at com.my.ESignatureXML.sign(ESignatureXML.java:187)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) [Guice/ErrorInCustomProvider]: IllegalStateException: HttpTsaConfiguration must be configured in the profile in order to use an HTTP-based time-stamp token provider.
at DefaultProductionBindingsModule.configure(DefaultProductionBindingsModule.java:80)
\_ installed by: Modules$OverrideModule -> DefaultProductionBindingsModule
at HttpTimeStampTokenProvider.<init>(HttpTimeStampTokenProvider.java:44)
\_ for 2nd parameter
while locating HttpTimeStampTokenProvider
at DataGenSigAndRefsTimeStamp.<init>(DataGenSigAndRefsTimeStamp.java:51)
\_ for 2nd parameter
while locating DataGenSigAndRefsTimeStamp
while locating PropertyDataObjectGenerator<SigAndRefsTimeStampProperty>
Learn more:
https://github.com/google/guice/wiki/ERROR_IN_CUSTOM_PROVIDER
1 error
======================
Full classname legend:
======================
DataGenSigAndRefsTimeStamp: "xades4j.production.DataGenSigAndRefsTimeStamp"
DefaultProductionBindingsModule: "xades4j.production.DefaultProductionBindingsModule"
HttpTimeStampTokenProvider: "xades4j.providers.impl.HttpTimeStampTokenProvider"
Modules$OverrideModule: "com.google.inject.util.Modules$OverrideModule"
PropertyDataObjectGenerator: "xades4j.production.PropertyDataObjectGenerator"
SigAndRefsTimeStampProperty: "xades4j.properties.SigAndRefsTimeStampProperty"
========================
End of classname legend:
========================
at com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:251)
at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1104)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1134)
at xades4j.production.PropertyDataGeneratorsMapperImpl.getGenerator(PropertyDataGeneratorsMapperImpl.java:48)
... 58 more
Caused by: java.lang.IllegalStateException: HttpTsaConfiguration must be configured in the profile in order to use an HTTP-based time-stamp token provider.
at xades4j.production.DefaultProductionBindingsModule.lambda$configure$0(DefaultProductionBindingsModule.java:81)
at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86)
at com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:57)
at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:60)
at com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:47)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300)
at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:60)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300)
at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:60)
at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1101)
... 60 more
I have found that in class PropertiesDataObjectsGeneratorImpl, in method doGenPropsData, while traversing the properties of the signature, one property, xades4j.properties.SigAndRefsTimeStampProperty, has time = null, and the exception is actually thrown while trying to add this property to the collection of signature properties used in further validation.
This happens only when signing XAdES-XL; there is no exception in this method with the other forms (i.e. XAdES-T and XAdES-C, where the timestamp details can also be seen in the resulting XML file).
How can I overcome this issue? Am I missing something when declaring the signer, validation provider, or verification profile?
As you mentioned in your response, extending a signature to one of those formats requires adding new timestamp properties. As such, xades4j needs to know how to obtain those timestamps.
By default, xades4j includes a TimeStampTokenProvider that knows how to get timestamps via HTTP. You only need to configure HttpTsaConfiguration in the profiles (both for production and extending). This is documented here: https://github.com/luisgoncalves/xades4j/wiki/Timestamping.
So, in your code you don't need to register HttpTimeStampTokenProvider, only HttpTsaConfiguration. Note that this is what the tests (example code) do:
new XadesFormatExtenderProfile().with(DEFAULT_TEST_TSA)...
The same applies to the configuration of XadesCSigningProfile.
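Putting that together, the two profiles would presumably be configured like this (a sketch based on the wiki page and the test code above; the TSA URL is the one from the question):

```java
// Register only the TSA configuration; xades4j's built-in HTTP
// time-stamp token provider picks it up from the profile.
XadesSignatureFormatExtender formExt = new XadesFormatExtenderProfile()
        .with(new HttpTsaConfiguration("http://freetsa.org/tsr"))
        .getFormatExtender();

XadesSigner signer = new XadesCSigningProfile(keyingDataProvider, vdp)
        .with(new HttpTsaConfiguration("http://freetsa.org/tsr"))
        .newSigner();
```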
After some more hours/days of exploring, I think I've found the reason for this behavior.
Sample code for signing XAdES-XL after XAdES-C (i.e. for enriching the signature) exists in class xades4j.verification.XadesVerifierImplTest, in the following method:
@Test
public void testVerifyCEnrichXL() throws Exception
{
    System.out.println("verifyCEnrichXL");
    Document doc = getDocument("document.signed.c.xml");
    Element signatureNode = getSigElement(doc);
    XadesSignatureFormatExtender formExt = new XadesFormatExtenderProfile().with(DEFAULT_TEST_TSA).getFormatExtender();
    XAdESVerificationResult res = nistVerificationProfile.newVerifier().verify(signatureNode, null, formExt, XAdESForm.X_L);
    assertEquals(XAdESForm.C, res.getSignatureForm());
    assertPropElementPresent(signatureNode, SigAndRefsTimeStampProperty.PROP_NAME);
    assertPropElementPresent(signatureNode, CertificateValuesProperty.PROP_NAME);
    assertPropElementPresent(signatureNode, RevocationValuesProperty.PROP_NAME);
    outputDocument(doc, "document.verified.c.xl.xml");
}
The same case would be for signing XAdES-A after XAdES-XL, as shown in class xades4j.production.XadesSignatureFormatExtenderImplTest, in the following method:
@Test
public void testEnrichSignatureWithA() throws Exception
{
    System.out.println("enrichSignatureWithA");
    Document doc = getDocument("document.verified.c.xl.xml");
    Element signatureNode = (Element) doc.getElementsByTagNameNS(Constants.SignatureSpecNS, "Signature").item(0);
    XadesSignatureFormatExtender instance = new XadesFormatExtenderProfile().with(DEFAULT_TEST_TSA).getFormatExtender();
    XMLSignature sig = new XMLSignature(signatureNode, "");
    Collection<UnsignedSignatureProperty> usp = new ArrayList<UnsignedSignatureProperty>(1);
    usp.add(new ArchiveTimeStampProperty());
    instance.enrichSignature(sig, new UnsignedProperties(usp));
    outputDocument(doc, "document.verified.c.xl.a.xml");
}
Comparing this code to mine, it looks like the problem is that a TimeStampTokenProvider should be supplied to the XadesFormatExtenderProfile, as in:
XadesSignatureFormatExtender formExt = new XadesFormatExtenderProfile()
.with(new HttpTimeStampTokenProvider(new DefaultMessageDigestProvider(), new HttpTsaConfiguration("http://freetsa.org/tsr")))
.getFormatExtender();
or:
XadesSignatureFormatExtender formExt = new XadesFormatExtenderProfile()
.withTimeStampTokenProvider(new HttpTimeStampTokenProvider(new DefaultMessageDigestProvider(), new HttpTsaConfiguration("http://freetsa.org/tsr")))
.getFormatExtender();
In theory, this is needed because each enrichment to a higher XAdES form requires an additional timestamp-related property:
XAdES-C form has property xades:SignatureTimeStamp
XAdES-XL form has property xades:SigAndRefsTimeStamp
XAdES-A form has property xades141:ArchiveTimeStampV2
I thought it would be sufficient to have timestamp applied only once (in XAdES-C), and then the other forms (XAdES-XL and XAdES-A) would afterwards just copy that info into their respective properties; however it proved that this is not the case.
I need to get a single GridFS file using the Java driver 3.7+.
I have two collections for files in a database: photo.files and photo.chunks.
The photo.chunks collection contains the binary content of the file, and the photo.files collection contains the metadata of the document.
To find a document in a regular collection, I wrote:
Document doc = collection_messages.find(eq("flag", true)).first();
String messageText = (String) Objects.requireNonNull(doc).get("message");
I tried to find the file in the same way as in the example above, according to my collections:
MongoDatabase database_photos = mongoClient.getDatabase("database_photos");
GridFSBucket photos_fs = GridFSBuckets.create(database_photos, "photos");
...
...
GridFSFindIterable gridFSFile = photos_fs.find(eq("_id", new ObjectId()));
String file = Objects.requireNonNull(gridFSFile.first()).getMD5();
And like:
GridFSFindIterable gridFSFile = photos_fs.find(eq("_id", new ObjectId()));
String file = Objects.requireNonNull(gridFSFile.first()).getFilename();
But I get an error:
java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:203)
at project.Bot.onUpdateReceived(Bot.java:832)
at java.util.ArrayList.forEach(ArrayList.java:1249)
I also checked the docs of the 3.7 driver, but the example there shows how to iterate over several files, whereas I need a single one:
gridFSBucket.find().forEach(
        new Block<GridFSFile>() {
            public void apply(final GridFSFile gridFSFile) {
                System.out.println(gridFSFile.getFilename());
            }
        });
Can someone show me an example of how to do this properly?
I mean getting data from the chunks collection by its ObjectId, and the md5 field, also by ObjectId, from the metadata collection.
Thanks in advance.
To find and use specific files:
photos_fs.find(eq("_id", objectId)).forEach(
        (Block<GridFSFile>) gridFSFile -> {
            // to do something
        });
Alternatively, I can read a specific field of the file.
This can be done by first obtaining the ObjectId of the first file, then passing it to a GridFSFindIterable to fetch the matching GridFSFile from the database, and finally reading the desired field as a String.
MongoDatabase database_photos = mongoClient.getDatabase("database_photos");
GridFSBucket photos_fs = GridFSBuckets.create(database_photos, "photos");
...
...
ObjectId objectId = Objects.requireNonNull(photos_fs.find().first()).getObjectId();
GridFSFindIterable gridFSFindIterable = photos_fs.find(eq("_id", objectId));
GridFSFile gridFSFile = Objects.requireNonNull(gridFSFindIterable.first());
String file = Objects.requireNonNull(gridFSFile).getMD5();
But this reads from photo.files, not from the photo.chunks collection.
I'm also not sure this approach is safe, because the IDE shows the following warning, although the code works despite it:
Inconvertible types; cannot cast 'com.mongodb.client.gridfs.model.GridFSFile' to 'com.mongodb.client.gridfs.GridFSFindIterableImpl'
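If the goal is the actual binary content (the photo.chunks side), a download stream is probably the right tool. A sketch assuming the same photos_fs bucket as above; GridFSBucket.openDownloadStream reassembles the chunks for you, while GridFSFile only exposes the photo.files metadata:

```java
// photos_fs is the GridFSBucket from the question.
ObjectId objectId = Objects.requireNonNull(photos_fs.find().first()).getObjectId();
try (GridFSDownloadStream in = photos_fs.openDownloadStream(objectId)) {
    GridFSFile meta = in.getGridFSFile();               // metadata from photo.files
    byte[] content = new byte[(int) meta.getLength()];  // bytes from photo.chunks
    int off = 0;
    while (off < content.length) {
        off += in.read(content, off, content.length - off);
    }
    // content now holds the file's binary data
}
```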
I have the following dependency added:
<dependency>
<groupId>net.sf.supercsv</groupId>
<artifactId>super-csv</artifactId>
<version>2.4.0</version>
</dependency>
private final static String[] COLS = { "col1", "col2", "col3", "col4", "col5",
"col6", "col7", "col8", "col9", "col10", "col11",
"col12", "col13", "col14" };
private final static String[] TEMP_COLS = {"col1", "col2", "col3", "col4", "col5",
"col6", "col7", "col8", "col9", "col10", "col11",
"col12", "col13"};
Below is how I build my reader:
protected CsvPreference csvPref = CsvPreference.STANDARD_PREFERENCE;
protected String encoding = "US-ASCII";
InputStream is = fs.open(path);
BufferedReader br = new BufferedReader(new InputStreamReader(is, encoding));
ICsvBeanReader csvReader = new CsvBeanReader(br, csvPref);
As part of bean reader, I have the following code:
Selections bean = null;
try {
    bean = reader.read(Selections.class, Selections.getCols());
} catch (Exception e) {
    // bean = reader.read(Selections.class, Selections.getTempCols());
    // slf4j.error(bean.getEventCode() + bean.getProgramId());
    slf4j.error("Error Logged for bean because of COLUMNS MISMATCH");
}
The above code throws this exception:
java.lang.IllegalArgumentException: the nameMapping array and the number of columns read should be the same size (nameMapping length = 14, columns = 13)
I am not sure what is causing this exception. It is thrown for some of the records even though all the records have 14 columns (I have verified this with a script, and I have even created a schema and uploaded the file with 14 columns). Out of 7,000,000 records, 2,100,000 have this issue.
To debug which record is causing this problem, I made the changes below to the code.
Selections bean = null;
try {
    bean = reader.read(Selections.class, Selections.getCols());
} catch (Exception e) {
    bean = reader.read(Selections.class, Selections.getTempCols());
    slf4j.error(bean.getEventCode() + bean.getProgramId());
    slf4j.error("Error Logged for bean because of COLUMNS MISMATCH");
}
Now the above changes throw: java.lang.IllegalArgumentException: the nameMapping array and the number of columns read should be the same size (nameMapping length = 13, columns = 14)
I have no idea why the Super CSV reader is behaving so strangely. When the column count is not 14 it throws an exception, and then, in the exception handler, when trying to read the record to print the details, it says the column count is 14.
Please help me debug this issue. I shall update more details about the issue if needed. Please let me know.
After a dive into the Super CSV source, and given your confirmation that you can upload the file with 14 columns correctly, I'd suggest you look for a replacement for Super CSV.
My recommendation: check out Apache Commons CSV.
That library also supports an iterative approach, so you wouldn't need to hold 7,000,000 records in memory.
Finally I resolved the problem: it was caused by the quote character I had specified in my CSV preferences.
new CsvPreference.Builder('"', '\u0001', "\r\n").build()
My incoming data has " as part of the data. The issue was resolved when I replaced the quote character with a character that will never be part of the incoming data.
I am not an expert at this; it was my own oversight, and Super CSV is not at fault. I believe Super CSV is a decent API to explore and use.
To learn more about column quote mode, please refer to their API docs:
https://super-csv.github.io/super-csv/apidocs/org/supercsv/quote/ColumnQuoteMode.html
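For reference, the fix described above amounts to choosing a quote character that cannot occur in the data. A sketch of the corrected preference ('\u0000' here is purely an illustration; any character guaranteed absent from the input works):

```java
// '\u0001' is the delimiter, as in the original preference. The quote
// character is swapped from '"' to a value that never appears in the data,
// so literal double quotes in fields are read as ordinary characters.
CsvPreference pref = new CsvPreference.Builder('\u0000', '\u0001', "\r\n").build();
ICsvBeanReader csvReader = new CsvBeanReader(
        new BufferedReader(new InputStreamReader(is, "US-ASCII")), pref);
```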
I need to normalize a CSV file. I followed this article written by Jeff Heaton. This is (some) of my code:
File sourceFile = new File("Book1.csv");
File targetFile = new File("Book1_norm.csv");
EncogAnalyst analyst = new EncogAnalyst();
AnalystWizard wizard = new AnalystWizard(analyst);
wizard.wizard(sourceFile, true, AnalystFileFormat.DECPNT_COMMA);
final AnalystNormalizeCSV norm = new AnalystNormalizeCSV();
norm.analyze(sourceFile, false, CSVFormat.ENGLISH, analyst);
norm.setProduceOutputHeaders(false);
norm.normalize(targetFile);
The only difference between my code and the one of the article is this line:
norm.setOutputFormat(CSVFormat.ENGLISH);
I tried to use it, but it seems that in Encog 3.1.0 that method doesn't exist. The error I get is this one (it looks like the problem is with the line norm.normalize(targetFile)):
Exception in thread "main" org.encog.app.analyst.AnalystError: Can't find column: 11700
at org.encog.app.analyst.util.CSVHeaders.find(CSVHeaders.java:187)
at org.encog.app.analyst.csv.normalize.AnalystNormalizeCSV.extractFields(AnalystNormalizeCSV.java:77)
at org.encog.app.analyst.csv.normalize.AnalystNormalizeCSV.normalize(AnalystNormalizeCSV.java:192)
at IEinSoftware.main(IEinSoftware.java:55)
I added a FAQ that shows how to normalize a CSV file. http://www.heatonresearch.com/faq/4/2
Here's a function to do it (this is C#, from the .NET version of Encog); of course, you need to create an analyst first.
private EncogAnalyst _analyst;

public void NormalizeFile(FileInfo SourceDataFile, FileInfo NormalizedDataFile)
{
    var wizard = new AnalystWizard(_analyst);
    wizard.Wizard(SourceDataFile, _useHeaders, AnalystFileFormat.DecpntComma);
    var norm = new AnalystNormalizeCSV();
    norm.Analyze(SourceDataFile, _useHeaders, CSVFormat.English, _analyst);
    norm.ProduceOutputHeaders = _useHeaders;
    norm.Normalize(NormalizedDataFile);
}
I have been assigned to clean up a project for a client that uses BIRT reporting. I have fixed most of the issues, but I still have one report that is not working and returns an error. The error is:
Row (id = 1467):
+ There are errors evaluating script "var fileName = row["Attached_File"];
params["HyperlinkParameter"].value = ImageDecoder.decodeDocs(row["Ecrash_Attach"],fileName);":
Wrapped java.lang.NullPointerException (/report/body/table[#id="61"]/detail/row[#id="70"]/cell[#id="71"]/grid[#id="1460"]/row[#id="1462"]/cell[#id="1463"]/table[#id="1464"]/detail/row[#id="1467"]/method[#name="onCreate"]#2)
I can post the full stack trace if someone wants it but for now I will omit it since it is very long.
Here is the source of the decodeDocs method:
public static String decodeDocs(byte[] source, String fileName) {
    String randName = "";
    byte[] docSource = null;
    if (Base64.isArrayByteBase64(source)) {
        docSource = Base64.decodeBase64(source);
    }
    documentZipPath = writeByteStreamToFile(source);
    randName = writeByteStreamToFile(docSource, fileName);
    return randName;
}
I am pretty well lost on this one. The error seems to be telling me there is a problem on line two of the script, which is:
var fileName = row["Attached_File"];
params["HyperlinkParameter"].value = ImageDecoder.decodeDocs(row["Ecrash_Attach"],fileName);
This is written in the onCreate method of the report. Any help, even clues, would be greatly appreciated. If you would like to see the report, just ask and I will post its XML.
A common mistake I make in BIRT is to access the value of a null report parameter.
In your case, could params["HyperlinkParameter"] be null?
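Another candidate for the NPE is row["Ecrash_Attach"] being null for some rows, since decodeDocs passes source straight into Base64.isArrayByteBase64. A defensive sketch of the decoding step (using java.util.Base64 for illustration instead of the commons-codec class in the original; the method name mirrors the original but the signature is simplified):

```java
import java.util.Base64;

public class DecodeSketch {
    // Returns the decoded bytes, the input unchanged if it is not Base64,
    // or null if the input itself is null (instead of throwing an NPE).
    public static byte[] decodeDocs(byte[] source) {
        if (source == null) {
            return null;
        }
        try {
            return Base64.getDecoder().decode(source);
        } catch (IllegalArgumentException e) {
            return source; // not valid Base64: pass through unchanged
        }
    }
}
```

With a guard like this in place, a null attachment column would surface as a missing file rather than a wrapped NullPointerException in the onCreate script.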