I'm using iText 5.5.10 to validate a timestamp in a PDF file. Can someone explain to me why calling the pkcs7.verifyTimestampImprint() method returns false?
The Java code is from the iText 5 examples site (see C5_02_SignatureInfo.java); as the input file I use testpdf_timestamp.pdf.
Code:
public static void main(String[] args) throws IOException, GeneralSecurityException {
    BouncyCastleProvider provider = new BouncyCastleProvider();
    Security.addProvider(provider);
    PdfReader reader = new PdfReader("testpdf_timestamp.pdf");
    AcroFields fields = reader.getAcroFields();
    ArrayList<String> names = fields.getSignatureNames();
    for (String name : names) {
        System.out.println("===== " + name + " =====");
        System.out.println("Signature covers whole document: " + fields.signatureCoversWholeDocument(name));
        System.out.println("Document revision: " + fields.getRevision(name) + " of " + fields.getTotalRevisions());
        PdfPKCS7 pkcs7 = fields.verifySignature(name);
        System.out.println("Integrity check OK? " + pkcs7.verify());
        SimpleDateFormat date_format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SS");
        System.out.println("Signed on: " + date_format.format(pkcs7.getSignDate().getTime()));
        if (pkcs7.getTimeStampDate() != null) {
            System.out.println("TimeStamp: " + date_format.format(pkcs7.getTimeStampDate().getTime()));
            TimeStampToken ts = pkcs7.getTimeStampToken();
            System.out.println("TimeStamp service: " + ts.getTimeStampInfo().getTsa());
            // Why does pkcs7.verifyTimestampImprint() return FALSE?
            System.out.println("Timestamp verified? " + pkcs7.verifyTimestampImprint());
        }
    }
}
Maven dependencies:
<!-- https://mvnrepository.com/artifact/com.itextpdf/itextpdf -->
<dependency>
    <groupId>com.itextpdf</groupId>
    <artifactId>itextpdf</artifactId>
    <version>5.5.10</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.bouncycastle/bcpkix-jdk15on -->
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>1.49</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15on -->
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk15on</artifactId>
    <version>1.49</version>
</dependency>
Thanks for any response.
The reason why calling pkcs7.verifyTimestampImprint() returns false is that this method is meant for verifying the imprint of a signature time stamp; it is not meant for verifying the imprint of a document time stamp.
PdfPKCS7 is a class for creating and verifying signatures, for requesting time stamps (for use as signature time stamps), and for verifying time stamps (both signature and document time stamps). Thus, not all of its methods make sense in every use case. Unfortunately, the PdfPKCS7 JavaDocs don't make it particularly clear which methods to use when.
In the case of a document time stamp, the verification of the timestamp imprint already takes place during the verify() call.
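As a minimal sketch of how to tell the two cases apart (assuming PdfPKCS7.isTsp(), which iText 5.5.x uses to flag document-level time stamps; fields and name are as in your code):
PdfPKCS7 pkcs7 = fields.verifySignature(name);
if (pkcs7.isTsp()) {
    // Document time stamp: the imprint is checked as part of verify().
    System.out.println("Document timestamp valid? " + pkcs7.verify());
} else if (pkcs7.getTimeStampDate() != null) {
    // Signature time stamp: check the imprint explicitly.
    System.out.println("Signature timestamp imprint OK? " + pkcs7.verifyTimestampImprint());
}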
You reference the code from C5_02_SignatureInfo.java on this site, which clearly indicates that "these examples were written in the context of the white paper Digital Signatures for PDF documents."
In that white paper you can see that the code in C5_02_SignatureInfo.java comes from chapter 5.2, "Retrieving information from a signature", and that the time stamp verified by verifyTimestampImprint() is a signature time stamp.
For document time stamp validation, look at the code in the later chapters.
I downloaded a lot of blockchain data using https://bitcoin.org. I took one of the files and I am trying to analyse it with the bitcoinj library.
I would like to get the following information from every transaction:
- who sends bitcoins,
- how much,
- who receives bitcoins.
I use:
<dependency>
    <groupId>org.bitcoinj</groupId>
    <artifactId>bitcoinj-core</artifactId>
    <version>0.15.10</version>
</dependency>
I have this code:
NetworkParameters np = new MainNetParams();
Context.getOrCreate(MainNetParams.get());
BlockFileLoader loader = new BlockFileLoader(np, List.of(new File("test/resources/blk00450.dat")));
for (Block block : loader) {
    for (Transaction tx : block.getTransactions()) {
        System.out.println("Transaction ID: " + tx.getTxId().toString());
        for (TransactionInput ti : tx.getInputs()) {
            // how to get the wallet addresses of the inputs?
        }
        // this code works for 99% of transactions, but for some it throws exceptions
        for (TransactionOutput to : tx.getOutputs()) {
            // sometimes this line throws: org.bitcoinj.script.ScriptException: Cannot cast this script to an address
            System.out.println("out address: " + to.getScriptPubKey().getToAddress(np));
            System.out.println("out value: " + to.getValue().toString());
        }
    }
}
Can you share some snippet that will work for all transactions in the blockchain?
There are at least two types of transactions, P2PKH and P2SH.
Your code works well for P2PKH, but it does not work for P2SH.
You can change the line from:
System.out.println("out address:" + to.getScriptPubKey().getToAddress(np));
to (note the parentheses around the ternary, otherwise the string concatenation is evaluated first):
System.out.println("out address:" + (to.getAddressFromP2PKHScript(np) != null ? to.getAddressFromP2PKHScript(np) : to.getAddressFromP2SH(np)));
The bitcoinj API marks the methods getAddressFromP2PKHScript() and getAddressFromP2SH() as deprecated, and I have not found a suitable replacement.
However, P2SH means "Pay to Script Hash", which means the output can involve two or more public keys to support multi-signature. Moreover, getAddressFromP2SH() returns only one address; perhaps this is the reason why it is deprecated.
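Until there is a stable replacement, a defensive sketch along these lines should handle most outputs (assumptions: bitcoinj 0.15's ScriptPattern, LegacyAddress, and ECKey classes; np and tx come from your loop):
for (TransactionOutput out : tx.getOutputs()) {
    Script script = out.getScriptPubKey();
    try {
        // Covers the standard output types that map to an address (P2PKH, P2SH, segwit).
        System.out.println("out address: " + script.getToAddress(np));
    } catch (ScriptException e) {
        if (ScriptPattern.isP2PK(script)) {
            // Bare pay-to-pubkey has no address; derive one from the public key itself.
            byte[] pubKey = ScriptPattern.extractKeyFromP2PK(script);
            System.out.println("out address (from pubkey): " + LegacyAddress.fromKey(np, ECKey.fromPublicOnly(pubKey)));
        } else {
            System.out.println("out address: <non-standard script>");
        }
    }
    System.out.println("out value: " + out.getValue().toString());
}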
I also wrote a convenient method to check the inputs and outputs of a block:
private void printCoinValueInOut(Block block) {
    Coin blockInputSum = Coin.ZERO;
    Coin blockOutputSum = Coin.ZERO;
    System.out.println("--------------------Block[" + block.getHashAsString() + "]------" + block.getPrevBlockHash() + "------------------------");
    for (Transaction tx : block.getTransactions()) {
        Coin txInputSum = tx.getInputSum();
        Coin txOutputSum = tx.getOutputSum();
        blockInputSum = blockInputSum.add(txInputSum);
        blockOutputSum = blockOutputSum.add(txOutputSum);
        System.out.println("Tx[" + tx.getTxId() + "]:\t" + txInputSum + "(satoshi) IN, " + txOutputSum + "(satoshi) OUT.");
    }
    System.out.println("Block total:\t" + blockInputSum + "(satoshi) IN, " + blockOutputSum + "(satoshi) OUT. \n");
}
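Hypothetical usage, reusing the loader from the question:
for (Block block : loader) {
    printCoinValueInOut(block);
}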
I have written my own in-memory Elasticsearch server using version 5.1.1. It works correctly for adding documents but fails on removing them.
Maven dependencies:
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>5.1.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.elasticsearch.plugin</groupId>
    <artifactId>transport-netty4-client</artifactId>
    <version>5.1.1</version>
    <scope>test</scope>
</dependency>
Map of settings for node:
settingsMap.put("node.name", nodeName);
settingsMap.put("path.conf", "target");
settingsMap.put("path.data", "target");
settingsMap.put("path.logs", "target");
settingsMap.put("path.home", "target");
settingsMap.put("http.type", "netty4");
settingsMap.put("http.port", httpPort);
settingsMap.put("transport.tcp.port", httpTransportPort);
settingsMap.put("transport.type", "netty4");
settingsMap.put("action.auto_create_index", "false");
Method for deleting many documents at once:
public boolean deleteType() throws IOException, CustomResponseException {
    String query = "{\n" + " \"query\": {\n" + " \"match_all\": {}\n" + " }\n" + "}";
    HttpEntity entity = new NStringEntity(query, ContentType.APPLICATION_JSON);
    Response indexResponse = restClient.performRequest("POST",
            "/" + this.getIndex() + "/" + this.getType() + "/_delete_by_query?conflicts=proceed",
            Collections.<String, String>emptyMap(), entity);
    return processStatusCode(indexResponse.getStatusLine()) == 200;
}
When I run the tests, I get this error:
org.elasticsearch.client.ResponseException: POST http://localhost:9205/testindexer/indexer/_delete_by_query?conflicts=proceed: HTTP/1.1 400 Bad Request
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/testindexer/indexer/_delete_by_query] contains unrecognized parameter: [conflicts]"}],"type":"illegal_argument_exception","reason":"request [/testindexer/indexer/_delete_by_query] contains unrecognized parameter: [conflicts]"},"status":400}
Additionally, when I run it without the conflicts parameter, the response says that a document was created. Why does it behave differently than the documentation for this version describes?
Here is the status of my node:
{
  "name" : "indexernode",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "trP6UQg1SMKVyfR0qTEjYw",
  "version" : {
    "number" : "5.1.1",
    "build_hash" : "5395e21",
    "build_date" : "2016-12-06T12:36:15.409Z",
    "build_snapshot" : false,
    "lucene_version" : "6.3.0"
  },
  "tagline" : "You Know, for Search"
}
Side note: this is not the correct way to delete all data in an index, but I guess you are aware of that.
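For completeness, the usual way to drop all data of an index is to delete the index itself; a sketch using the same low-level REST client:
Response deleteResponse = restClient.performRequest("DELETE", "/" + this.getIndex());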
Can you try to supply the parameter to the REST client as a request parameter and not as part of the URL, just to make sure that this is not the issue?
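Something like this, using the same performRequest overload, with entity as in your deleteType() method:
Map<String, String> params = Collections.singletonMap("conflicts", "proceed");
Response indexResponse = restClient.performRequest("POST",
        "/" + this.getIndex() + "/" + this.getType() + "/_delete_by_query",
        params, entity);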
I think the main issue here is that you are running your own embedded version of Elasticsearch (is this what you mean by in-memory?). Have you checked that the reindex module, which provides _delete_by_query, is loaded in that version? The fact that a document gets created when you drop the conflicts parameter suggests the request falls through to the index API, with _delete_by_query treated as a document id, which is exactly what happens when the module providing that endpoint is not loaded.
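A sketch of the usual workaround (untested against your setup): start the embedded node with the plugin classes on the classpath, using Node's protected constructor. It assumes the reindex classes (artifact org.elasticsearch.plugin:reindex-client in 5.x, if I remember correctly) are on the test classpath.
// Build Settings from the map shown in the question.
Settings.Builder builder = Settings.builder();
settingsMap.forEach(builder::put);

// Node's protected constructor accepts classpath plugins, so subclass it:
class PluginConfigurableNode extends Node {
    PluginConfigurableNode(Settings settings, Collection<Class<? extends Plugin>> plugins) {
        super(InternalSettingsPreparer.prepareEnvironment(settings, null), plugins);
    }
}

// Start the embedded node with both the transport and the reindex plugin.
Node node = new PluginConfigurableNode(builder.build(), Arrays.asList(Netty4Plugin.class, ReindexPlugin.class));
node.start(); // throws NodeValidationException in 5.x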
I have included LanguageTool in my Java Maven project as below.
Java Code
List<Language> realLanguages = Languages.get();
for (Language language : realLanguages) {
    System.out.println(language.getName() + " ==> " + language.getShortName());
    if (language.getName().startsWith("English (US)")) {
        JLanguageTool langTool = new JLanguageTool(language);
        PatternRuleLoader patternRuleLoader = new PatternRuleLoader();
        List<PatternRule> abstractPatternRuleList = new ArrayList<PatternRule>();
        abstractPatternRuleList = patternRuleLoader.getRules(new File(LTPath + "/CustomGrammar.xml"));
        System.out.println("\n\nDefault Active Rules: " + langTool.getAllActiveRules().size());
        <-- More coding goes here -->
It works absolutely fine when the module's jar is invoked from one project (on server 'A'), but it throws the exception below, "Could not initialize English chunker", when invoked from another project (on server 'B').
Dependency
<dependency>
    <groupId>org.languagetool</groupId>
    <artifactId>language-en</artifactId>
    <version>3.1</version>
</dependency>
Exception
Please help!
I have downloaded the indexes generated for Maven Central from http://mirrors.ibiblio.org/pub/mirrors/maven2/dot-index/nexus-maven-repository-index.gz
I would like to list the artifact information (groupId, artifactId, and version, for example) from these index files. I have read that there is a high-level API for that, and it seems that I have to use the following Maven dependency. However, I don't know what entry point to use (which class?) or how to use it to access those files:
<dependency>
<groupId>org.sonatype.nexus</groupId>
<artifactId>nexus-indexer</artifactId>
<version>3.0.4</version>
</dependency>
Take a peek at the https://github.com/cstamas/maven-indexer-examples project.
In short: you don't need to download the GZ/ZIP (new/legacy format) manually; the indexer will take care of that for you (moreover, it will handle incremental updates too, where possible).
GZ is the "new" format, independent of the Lucene index format (and hence of the Lucene version), containing data only, while ZIP is the "old" format, which is actually a plain Lucene 2.4.x index zipped up. No data content change happens currently, but one is planned for the future.
As I said, there is no data content difference between the two, but some fields (as you noticed) are indexed but not stored in the index; hence, if you consume the ZIP format, you will have them searchable, but not retrievable.
The https://github.com/cstamas/maven-indexer-examples project is obsolete, and its build fails (the tests do not pass).
The indexer (now Apache Maven Indexer) has moved along and includes the examples too:
https://github.com/apache/maven-indexer/tree/master/indexer-examples
That builds, and the code works.
Here is a simplified version if you want to roll your own:
Maven:
<dependencies>
    <dependency>
        <groupId>org.apache.maven.indexer</groupId>
        <artifactId>indexer-core</artifactId>
        <version>6.0-SNAPSHOT</version>
        <scope>compile</scope>
    </dependency>
    <!-- For the ResourceFetcher implementation, if used -->
    <dependency>
        <groupId>org.apache.maven.wagon</groupId>
        <artifactId>wagon-http-lightweight</artifactId>
        <version>2.3</version>
        <scope>compile</scope>
    </dependency>
    <!-- Runtime: DI, but using the Plexus shim as we use Wagon -->
    <dependency>
        <groupId>org.eclipse.sisu</groupId>
        <artifactId>org.eclipse.sisu.plexus</artifactId>
        <version>0.2.1</version>
    </dependency>
    <dependency>
        <groupId>org.sonatype.sisu</groupId>
        <artifactId>sisu-guice</artifactId>
        <version>3.2.4</version>
    </dependency>
</dependencies>
Java:
public IndexToGavMappingConverter(File dataDir, String id, String url)
        throws PlexusContainerException, ComponentLookupException, IOException
{
    this.dataDir = dataDir;

    // Create the Plexus container, the Maven default IoC container.
    final DefaultContainerConfiguration config = new DefaultContainerConfiguration();
    config.setClassPathScanning( PlexusConstants.SCANNING_INDEX );
    this.plexusContainer = new DefaultPlexusContainer( config );

    // Look up the indexer components from Plexus.
    this.indexer = plexusContainer.lookup( Indexer.class );
    this.indexUpdater = plexusContainer.lookup( IndexUpdater.class );
    // Look up the Wagon used to remotely fetch the index.
    this.httpWagon = plexusContainer.lookup( Wagon.class, "http" );

    // Files where the local cache (if any) and the Lucene index should be located.
    this.centralLocalCache = new File( this.dataDir, id + "-cache" );
    this.centralIndexDir = new File( this.dataDir, id + "-index" );

    // Creators we want to use (search for the fields they define).
    // See https://maven.apache.org/maven-indexer/indexer-core/apidocs/index.html?constant-values.html
    List<IndexCreator> indexers = new ArrayList<>();
    // https://maven.apache.org/maven-indexer/apidocs/org/apache/maven/index/creator/MinimalArtifactInfoIndexCreator.html
    indexers.add( plexusContainer.lookup( IndexCreator.class, "min" ) );
    // https://maven.apache.org/maven-indexer/apidocs/org/apache/maven/index/creator/JarFileContentsIndexCreator.html
    //indexers.add( plexusContainer.lookup( IndexCreator.class, "jarContent" ) );
    // https://maven.apache.org/maven-indexer/apidocs/org/apache/maven/index/creator/MavenPluginArtifactInfoIndexCreator.html
    //indexers.add( plexusContainer.lookup( IndexCreator.class, "maven-plugin" ) );

    // Create the context for the central repository index.
    this.centralContext = this.indexer.createIndexingContext(
            id + "Context", id, this.centralLocalCache, this.centralIndexDir,
            url, null, true, true, indexers );
}
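The constructor above looks up httpWagon and indexUpdater, but the update step itself is not shown; a sketch of it, modeled on the maven-indexer examples (the method name updateIndex is mine):
public void updateIndex() throws IOException {
    // Report download progress; the Wagon fetcher calls back into this listener.
    TransferListener listener = new AbstractTransferListener() {
        @Override
        public void transferStarted(TransferEvent transferEvent) {
            System.out.println("Downloading " + transferEvent.getResource().getName());
        }
    };
    ResourceFetcher fetcher = new WagonHelper.WagonFetcher(this.httpWagon, listener, null, null);
    // Incremental update: only new chunks are fetched if the local index is recent enough.
    IndexUpdateRequest updateRequest = new IndexUpdateRequest(this.centralContext, fetcher);
    IndexUpdateResult updateResult = this.indexUpdater.fetchAndUpdateIndex(updateRequest);
    if (updateResult.isFullUpdate()) {
        System.out.println("Full index update done.");
    }
}
With the index populated, the snippet below iterates over it: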
final IndexSearcher searcher = this.centralContext.acquireIndexSearcher();
try
{
    final IndexReader ir = searcher.getIndexReader();
    Bits liveDocs = MultiFields.getLiveDocs( ir );
    for ( int i = 0; i < ir.maxDoc(); i++ )
    {
        if ( liveDocs == null || liveDocs.get( i ) )
        {
            final Document doc = ir.document( i );
            final ArtifactInfo ai = IndexUtils.constructArtifactInfo( doc, this.centralContext );
            if ( ai == null )
                continue;
            if ( ai.getSha1() == null )
                continue;
            if ( ai.getSha1().length() != 40 )
                continue;
            if ( "javadoc".equals( ai.getClassifier() ) )
                continue;
            if ( "sources".equals( ai.getClassifier() ) )
                continue;
            out.append( StringUtils.lowerCase( ai.getSha1() ) ).append( ' ' );
            out.append( ai.getGroupId() ).append( ":" );
            out.append( ai.getArtifactId() ).append( ":" );
            out.append( ai.getVersion() ).append( ":" );
            out.append( StringUtils.defaultString( ai.getClassifier() ) );
            out.append( '\n' );
        }
    }
}
finally
{
    this.centralContext.releaseIndexSearcher( searcher );
}
We use this in the Windup project, a JBoss migration tool.
The legacy ZIP index is a simple Lucene index. I was able to open it with Luke
and write some simple Lucene code to dump out the fields of interest ("u" in this case):
import org.apache.lucene.document.Document;
import org.apache.lucene.search.IndexSearcher;

// Note: this uses the old Lucene 2.4.x API, matching the legacy ZIP index format.
public class Dumper {

    public static void main(String[] args) throws Exception {
        IndexSearcher searcher = new IndexSearcher("c:/PROJECTS/Test/index");
        for (int i = 0; i < searcher.maxDoc(); i++) {
            Document doc = searcher.doc(i);
            String metadata = doc.get("u");
            if (metadata != null) {
                System.out.println(metadata);
            }
        }
    }
}
Sample output ...
org.ioke|ioke-lang-lib|P-0.4.0-p11|NA
org.jboss.weld.archetypes|jboss-javaee6-webapp|1.0.1.CR2|sources|jar
org.jboss.weld.archetypes|jboss-javaee6-webapp|1.0.1.CR2|NA
org.nutz|nutz|1.b.37|javadoc|jar
org.nutz|nutz|1.b.37|sources|jar
org.nutz|nutz|1.b.37|NA
org.openengsb.wrapped|com.google.gdata|1.41.5.w1|NA
org.openengsb.wrapped|openengsb-wrapped-parent|6|NA
There may be better ways to achieve this though ...
For the record, there is now a tool to extract and export Maven indexes as text files: the Maven index exporter. It's available as a Docker image, and no code is required.
It basically downloads all the .gz index files, extracts the indexes using the maven-indexer CLI, and exports them to a text file with clue. It has been tested on Maven Central and works on many other Maven repositories.
I'm trying to read the certificates from an smime.p7s file. The certificate chain is:
Baltimora Cyber Trust --> DigitPA --> Aruba PEC
When I try to extract them, I retrieve only the last two certificates: the last as subject and the first as its issuer.
What am I doing wrong?
The code:
private List<CertificateInfo> reading(ASN1InputStream asn1Stream) throws IOException, CMSException, CertificateException {
    ArrayList<CertificateInfo> infos = new ArrayList<CertificateInfo>();
    ASN1Primitive obj = asn1Stream.readObject();
    ContentInfo contentInfo = ContentInfo.getInstance(obj);
    CMSSignedData cms = new CMSSignedData(contentInfo);
    JcaX509CertificateConverter converter = new JcaX509CertificateConverter().setProvider(BouncyCastleProvider.PROVIDER_NAME);
    Store store = cms.getCertificates();
    SignerInformationStore signersInfoStore = cms.getSignerInfos();
    Collection<SignerInformation> signers = signersInfoStore.getSigners();
    logger.debug("signers num [" + signers.size() + "]");
    for (SignerInformation si : signers) {
        SignerId sid = si.getSID();
        Collection<X509CertificateHolder> holders = store.getMatches(sid);
        logger.debug("holders num [" + holders.size() + "]");
        for (X509CertificateHolder certholder : holders) {
            X509Certificate cert = converter.getCertificate(certholder);
            logger.debug("Issuer [" + cert.getIssuerDN() + "]");
            CertificateInfo certInfo = util.parse(cert);
            infos.add(certInfo);
        }
    }
    return infos;
}
I'm using these Bouncy Castle jars as dependencies:
<dependency>
    <groupId>bouncycastle</groupId>
    <artifactId>bcprov-jdk15</artifactId>
    <version>150</version>
</dependency>
<dependency>
    <groupId>bouncycastle</groupId>
    <artifactId>bcmail-jdk15</artifactId>
    <version>150</version>
</dependency>
<dependency>
    <groupId>bouncycastle</groupId>
    <artifactId>bcpg-jdk15</artifactId>
    <version>150</version>
</dependency>
<dependency>
    <groupId>bouncycastle</groupId>
    <artifactId>bcpkix-jdk15</artifactId>
    <version>150</version>
</dependency>
Thanks in advance.
Probably nothing is wrong. PKI works with a tree-like structure. It is possible to trust Aruba PEC via DigitPA. But how can you trust DigitPA? The most common method is to store the root certificate in a trust store, which is e.g. distributed with the application (like the trust store within web browsers).
Now, if the Baltimora Cyber Trust root is already in the trust store, there is no need to send it within the PKCS#7 container. The certificate chain can be constructed up to the trusted root without it.
So you either read the root cert from the trust store directly, or you retrieve it from the certificate chain created during verification.
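A minimal sketch of the second option using the JDK's PKIX classes (assumptions: certList holds the certificates extracted above, ordered leaf first, and the JRE's default cacerts file with its default password serves as the trust store):
CertificateFactory cf = CertificateFactory.getInstance("X.509");
// The path must not contain the trust anchor itself.
CertPath path = cf.generateCertPath(certList);
KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
try (InputStream in = new FileInputStream(System.getProperty("java.home") + "/lib/security/cacerts")) {
    trustStore.load(in, "changeit".toCharArray());
}
PKIXParameters params = new PKIXParameters(trustStore);
params.setRevocationEnabled(false); // keep the example self-contained
CertPathValidator validator = CertPathValidator.getInstance("PKIX");
PKIXCertPathValidatorResult result = (PKIXCertPathValidatorResult) validator.validate(path, params);
// The trust anchor is the root certificate that was not in the p7s file:
X509Certificate root = result.getTrustAnchor().getTrustedCert();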