I downloaded a lot of blockchain data using https://bitcoin.org, took one of the block files, and I'm trying to analyse it with the bitcoinj library.
I would like to get the following information from every transaction:
-who sends bitcoins,
-how much,
-who receives bitcoins.
I use:
<dependency>
    <groupId>org.bitcoinj</groupId>
    <artifactId>bitcoinj-core</artifactId>
    <version>0.15.10</version>
</dependency>
Here is my code:
NetworkParameters np = new MainNetParams();
Context.getOrCreate(MainNetParams.get());
BlockFileLoader loader = new BlockFileLoader(np, List.of(new File("test/resources/blk00450.dat")));
for (Block block : loader) {
    for (Transaction tx : block.getTransactions()) {
        System.out.println("Transaction ID" + tx.getTxId().toString());
        for (TransactionInput ti : tx.getInputs()) {
            // how to get wallet addresses of inputs?
        }
        // this code works for 99% of transactions but for some throws exceptions
        for (TransactionOutput to : tx.getOutputs()) {
            // sometimes this line throws: org.bitcoinj.script.ScriptException: Cannot cast this script to an address
            System.out.println("out address:" + to.getScriptPubKey().getToAddress(np));
            System.out.println("out value:" + to.getValue().toString());
        }
    }
}
Can you share some snippet that will work for all transactions in the blockchain?
There are at least two types of transactions, P2PKH and P2SH.
Your code works for P2PKH but would not work for P2SH.
You can change the line from:
System.out.println("out address:" + to.getScriptPubKey().getToAddress(np));
to:
System.out.println("out address:" + to.getAddressFromP2PKHScript(np)!=null?to.getAddressFromP2PKHScript(np):to.getAddressFromP2SH(np));
The bitcoinj API docs say the methods getAddressFromP2PKHScript() and getAddressFromP2SH() are deprecated, and I have not found a suitable replacement.
However, P2SH means "Pay to Script Hash", which means the redeem script behind such an output may contain two or more public keys to support multi-signature. Moreover, getAddressFromP2SH() returns only one address; perhaps this is the reason why it is deprecated.
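If you want to avoid the deprecated methods entirely, one non-deprecated route in bitcoinj 0.15.x is to inspect the output script with ScriptPattern and build the address yourself. A minimal sketch, assuming the 0.15.10 artifact from the question (treat it as a starting point, not a definitive implementation):

import org.bitcoinj.core.Address;
import org.bitcoinj.core.LegacyAddress;
import org.bitcoinj.core.NetworkParameters;
import org.bitcoinj.core.SegwitAddress;
import org.bitcoinj.core.TransactionOutput;
import org.bitcoinj.script.Script;
import org.bitcoinj.script.ScriptPattern;

// Returns the output's address for the common script types, or null for
// scripts that simply have no address form (OP_RETURN, bare multisig, P2PK, ...).
static Address outputAddress(TransactionOutput to, NetworkParameters np) {
    Script script = to.getScriptPubKey();
    if (ScriptPattern.isP2PKH(script))
        return LegacyAddress.fromPubKeyHash(np, ScriptPattern.extractHashFromP2PKH(script));
    if (ScriptPattern.isP2SH(script))
        return LegacyAddress.fromScriptHash(np, ScriptPattern.extractHashFromP2SH(script));
    if (ScriptPattern.isP2WPKH(script) || ScriptPattern.isP2WSH(script))
        return SegwitAddress.fromHash(np, ScriptPattern.extractHashFromP2WH(script));
    return null; // no standard address for this script
}

In the output loop you would then print outputAddress(to, np) and handle the null case, instead of catching the ScriptException.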
I also wrote a convenient method to check the inputs and outputs of a block:
private void printCoinValueInOut(Block block) {
    Coin blockInputSum = Coin.ZERO;
    Coin blockOutputSum = Coin.ZERO;
    System.out.println("--------------------Block[" + block.getHashAsString() + "]------" + block.getPrevBlockHash() + "------------------------");
    for (Transaction tx : block.getTransactions()) {
        Coin txInputSum = tx.getInputSum();
        Coin txOutputSum = tx.getOutputSum();
        blockInputSum = blockInputSum.add(txInputSum);
        blockOutputSum = blockOutputSum.add(txOutputSum);
        System.out.println("Tx[" + tx.getTxId() + "]:\t" + txInputSum + "(satoshi) IN, " + txOutputSum + "(satoshi) OUT.");
    }
    System.out.println("Block total:\t" + blockInputSum + "(satoshi) IN, " + blockOutputSum + "(satoshi) OUT. \n");
}
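It can be called directly from the block-file loop in the question, e.g.:

for (Block block : loader) {
    printCoinValueInOut(block);
}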
I am reading and processing 2 files from 2 different file locations and comparing the content.
If the 2nd file is not available, the rest of the process executes with the 1st file. If the 2nd file is available, the comparison process should happen. For this I am using Camel pollEnrich, but the problem is that Camel picks up the 2nd file only the first time. Without restarting the Camel route, the 2nd file is not picked up again even if it is present.
After restarting the Camel route it works fine, but after that it again stops picking up the 2nd file.
I am moving the files to different locations after processing them.
Below is my piece of code:
from("sftp:" + firstFileLocation + "?privateKeyFile=" + ppkFileLocation + "&username=" + sftpUsername
+ "&readLock=changed&idempotent=true&move=" + firstFileArchiveLocation)
.pollEnrich("sftp:" + secondFileLocation + "?privateKeyFile=" + ppkFileLocation + "&username=" + sftpUsername
+ "&readLock=changed&idempotent=true&fileExist=Ignore&move="+ secondFileLocationArchive ,10000,new FileAggregationStrategy())
.routeId("READ_INPUT_FILE_ROUTE")
Need help.
You're setting idempotent=true in the SFTP consumer, which means Camel will not process the same file name twice. Since you're moving the files, it would make sense to set idempotent=false.
Quoted from the Camel documentation:
Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again.
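With the variables from the question, the consumer URI would then look like this (a sketch; only the idempotent option changes):

from("sftp:" + firstFileLocation + "?privateKeyFile=" + ppkFileLocation + "&username=" + sftpUsername
        + "&readLock=changed&idempotent=false&move=" + firstFileArchiveLocation)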
I'm adding an alternative solution based on the comments on the answer posted by Jeremy Ross. My answer is based on the following code example; I've only included the configure() method of the test route for brevity.
@Override
public void configure() throws Exception {
    String firstFileLocation = "//127.0.0.1/Folder1";
    String secondFileLocation = "//127.0.0.1/Folder2";
    String ppkFileLocation = "./key.pem";
    String sftpUsername = "user";
    String sftpPassword = "xxxxxx";
    String firstFileArchiveLocation = "./Archive1";
    String secondFileLocationArchive = "./Archive2";

    IdempotentRepository repository1 = MemoryIdempotentRepository.memoryIdempotentRepository(1000);
    IdempotentRepository repository2 = MemoryIdempotentRepository.memoryIdempotentRepository(1000);
    getCamelContext().getRegistry().bind("REPO1", repository1);
    getCamelContext().getRegistry().bind("REPO2", repository2);

    from("sftp:" + firstFileLocation
            + "?password=" + sftpPassword + "&username=" + sftpUsername
            + "&readLock=idempotent&idempotent=true&idempotentKey=${file:name}-${file:size}-${file:modified}"
            + "&idempotentRepository=#REPO1&stepwise=true&download=true&delay=10&move=" + firstFileArchiveLocation)
        .to("direct:combined");

    from("sftp:" + secondFileLocation
            + "?password=" + sftpPassword + "&username=" + sftpUsername
            + "&readLock=idempotent&idempotent=true&idempotentKey=${file:name}-${file:size}-${file:modified}"
            + "&idempotentRepository=#REPO2"
            + "&stepwise=true&delay=10&move=" + secondFileLocationArchive)
        .to("direct:combined");

    from("direct:combined")
        .aggregate(constant(true), (oldExchange, newExchange) -> {
            if (oldExchange == null) {
                oldExchange = newExchange;
            }
            String fileName = newExchange.getIn().getHeader("CamelFileName", String.class);
            String filePath = newExchange.getIn().getHeader("CamelFileAbsolutePath", String.class);
            // remember which of the two locations this file came from
            if (filePath.contains("Folder1")) {
                oldExchange.getIn().setHeader("File1", fileName);
            } else {
                oldExchange.getIn().setHeader("File2", fileName);
            }
            String file1Name = oldExchange.getIn().getHeader("File1", String.class);
            String file2Name = oldExchange.getIn().getHeader("File2", String.class);
            if (file1Name != null && file2Name != null) {
                // Both files are available: compare them
                oldExchange.getIn().setHeader("PROCEED", true);
            } else if (file1Name != null) {
                // No comparison, proceed with file 1 only
                oldExchange.getIn().setHeader("PROCEED", true);
            } else {
                // Do not proceed, keep file 2 data and wait for file 1
                oldExchange.getIn().setHeader("PROCEED", false);
            }
            oldExchange.getIn().setBody("File1: " + file1Name + " File2: " + file2Name);
            System.out.println(oldExchange);
            return oldExchange;
        })
        .completion(exchange -> {
            if (exchange.getIn().getHeader("PROCEED", Boolean.class)) {
                exchange.getIn().removeHeader("File1");
                exchange.getIn().removeHeader("File2");
                return true;
            }
            return false;
        })
        .to("log:Test");
}
In this solution, two SFTP consumers are used instead of pollEnrich, since we need to capture file changes in both SFTP locations. I have used an idempotent repository and an idempotent key to ignore duplicates. Further, I've used the same idempotent repository as the lock store, assuming only Camel routes access the files.
After receiving the files from the SFTP consumers, they are sent to the direct:combined producer, which routes the exchange to an aggregator.
In the example aggregation strategy I have provided, you can see that the file names are stored in exchange headers. From the file information in those headers, the aggregator can decide how to process the file and whether or not to proceed with the exchange. (If only file 2 has been received, the exchange should not proceed to the next stages/routes.)
Finally, the completion predicate decides, based on the headers set by the aggregator, whether to proceed with the exchange and log its body. I have added an example clean-up step in the predicate as well.
I hope this example gives you the basic idea of my suggestion to use an aggregator.
I'm trying to use the VMware SDK for Java to collect the performance data of each entity (cluster/datastore/host/VM) in the VMware environment.
The idea is to get the available PerfMetricIds for the target entity with queryAvailablePerfMetric, query those, and report the details of the counter, the timestamp, and the value.
However, when I get the PerfMetricIds for an entity, not every returned PerfMetricId reports data. For example, for each datastore I get at least 4 IDs that return no data when queried; these IDs represent the counters for the average number of read and write operations. For a cluster I'm missing the CPU usage, and so on.
So I was wondering: when does this happen? Shouldn't every metric returned by queryAvailablePerfMetric report data? What am I missing here?
Minimal code snippet:
// VMware credentials
String vmwareUrl = args[0];
String vmwareUsername = args[1];
String vmwarePassword = args[2];
// connect to vCenter
ServiceInstance si = new ServiceInstance(new URL(vmwareUrl), vmwareUsername, vmwarePassword, true);
// get the performance manager
PerformanceManager perfMgr = si.getPerformanceManager();
// define the time window (the last hour)
Calendar calTo = Calendar.getInstance();
Calendar calFrom = Calendar.getInstance();
calFrom.setTime(calTo.getTime());
calFrom.add(Calendar.HOUR, -1);
// get any datastore for testing purposes
Folder rootFolder = si.getRootFolder();
ManagedEntity[] datastores = new InventoryNavigator(rootFolder).searchManagedEntities("Datastore");
ManagedEntity me = datastores[1];
// query all available metrics for the entity
PerfMetricId[] availablePmis = perfMgr.queryAvailablePerfMetric(me, calFrom, calTo, perfMgr.getHistoricalInterval()[0].getSamplingPeriod());
// create the PerfQuerySpec
PerfQuerySpec qSpec = new PerfQuerySpec();
qSpec.setEntity(me.getMOR());
qSpec.setMetricId(availablePmis);
qSpec.setFormat("csv");
qSpec.setStartTime(calFrom);
qSpec.setEndTime(calTo);
// query perf
PerfEntityMetricBase[] perfValues = perfMgr.queryPerf(new PerfQuerySpec[]{qSpec});
// printing
System.out.println("Found pmis (CounterIDs only): ");
for (PerfMetricId pmi : availablePmis) {
    System.out.print(pmi.getCounterId() + ", ");
}
System.out.print("\nPmis with values:");
int pmisCount = 0;
for (PerfEntityMetricBase value : perfValues) {
    PerfMetricSeriesCSV[] csvValues = ((PerfEntityMetricCSV) value).getValue();
    pmisCount += csvValues.length;
    for (PerfMetricSeriesCSV csv : csvValues) {
        System.out.println("Counter ID: " + csv.getId().getCounterId() + " ---- Metric instance: " + csv.getId().getInstance());
        System.out.println("\tInfo: " + ((PerfEntityMetricCSV) value).getSampleInfoCSV());
        System.out.println("\tValues: " + csv.getValue());
    }
}
System.out.println("---------------");
System.out.println("Detected PMIs: " + availablePmis.length);
System.out.println("PMIs with values: " + pmisCount);
Any help (or discussion) would be appreciated.
Does anyone have an example of retrieving data using Actian's JCL from a loosely coupled Pervasive database in Java? The database I am connecting to only has DAT files. My goal is to create a link between Pervasive and MS SQL.
I am not looking for a freebie, but for someone to point me in the right direction so I can learn and grow.
Thank you in advance!
Found this in my archives. I don't know when it was written, whether it works, or if this interface is still supported. You don't say what version of PSQL you're using, so I don't even know whether it will work with your version.
import pervasive.database.*;

public class VersionTest implements Consts
{
    public VersionTest()
    {
        try
        {
            Session session = Driver.establishSession();
            Database db = session.connectToDatabase("PMKE:");
            XCursor xcursor = db.createXCursor(57000);
            // Using local TABL.DAT (length 255 assures no leftovers!)
            xcursor.setKZString(0, 255, "plsetup\\tabl.dat");
            // Open the file to load the local MKDE
            int status = xcursor.BTRV(BTR_OPEN);
            System.out.println("Local Open status: " + status);
            // Using remote TABL.DAT (length 255 assures no leftovers!)
            xcursor.setKZString(0, 255, "h:\\basic2c\\develop\\tabl.dat");
            // set the buffer size
            xcursor.setDataSize(15);
            // get version
            status = xcursor.BTRV(BTR_VERSION);
            System.out.println("Version status: " + status);
            // should be 15, always prints 5
            System.out.println("Version length: " + xcursor.getRecLength());
            System.out.println("Version: " + xcursor.getDString(0, 15));
            // try with an open file on a server
            XCursor xcursor2 = db.createXCursor(57000);
            // Using remote TABL.DAT (length 255 assures no leftovers!)
            xcursor2.setKZString(0, 255, "h:\\basic2c\\develop\\tabl.dat");
            // Open the file
            status = xcursor2.BTRV(BTR_OPEN);
            System.out.println("Remote Open status: " + status);
            // set the buffer size
            xcursor2.setDataSize(15);
            // get version
            status = xcursor2.BTRV(BTR_VERSION);
            System.out.println("Version status: " + status);
            // should be 15, always prints 5
            System.out.println("Version length: " + xcursor2.getRecLength());
            System.out.println("Version: " + xcursor2.getDString(0, 15));
            // clean up resources
            Driver.killAllSessions();
        } catch (Exception exp)
        {
            exp.printStackTrace();
        }
    }

    public static void main(String[] args)
    {
        new VersionTest();
    }
}
JCL APIs are still supported in Actian PSQL v12 and v13.
You can find more documentation on retrieving data using Actian JCL at
http://docs.pervasive.com/products/database/psqlv12/wwhelp/wwhimpl/js/html/wwhelp.htm#href=jcl/java_api.2.2.html
To link to MS SQL Server, you would need to create the data dictionary files (DDFs) for the PSQL data files so they can be used with the relational interfaces.
I am really new to SOAP web services and to the NetSuite ERP, and I am trying to generate a report for my company where I need to obtain all the Clients and their Invoices using the data available in the NetSuite ERP. I followed the Java and Axis tutorial that ships with their ERP sample app, and I successfully created a Java project in Eclipse that consumes the WSDL for NetSuite 2015-2 and compiles the classes needed to run the sample app. I then followed an example from their CRM sample app to obtain a Client's information, but the problem is that their example method requires you to enter each Client's ID. Here is the sample code:
public int getCustomerList() throws RemoteException,
        ExceededUsageLimitFault, UnexpectedErrorFault, InvalidSessionFault,
        ExceededRecordCountFault, UnsupportedEncodingException {
    // This operation requires a valid session
    this.login(true);
    // Prompt for a list of internalIds and put them in an array
    _console.write("\ninternalIds for records to be retrieved (separated by commas): ");
    String reqKeys = _console.readLn();
    String[] internalIds = reqKeys.split(",");
    return getCustomerList(internalIds, false);
}

private int getCustomerList(String[] internalIds, boolean isExternal)
        throws RemoteException, ExceededUsageLimitFault,
        UnexpectedErrorFault, InvalidSessionFault, ExceededRecordCountFault {
    // Build an array of RecordRef objects and invoke the getList()
    // operation to retrieve these records
    RecordRef[] recordRefs = new RecordRef[internalIds.length];
    for (int i = 0; i < internalIds.length; i++) {
        RecordRef recordRef = new RecordRef();
        recordRef.setInternalId(internalIds[i]);
        recordRefs[i] = recordRef;
        recordRefs[i].setType(RecordType.customer);
    }
    // Invoke the getList() operation
    ReadResponseList getResponseList = _port.getList(recordRefs);
    // Process the response from the getList() operation
    if (!isExternal)
        _console.info("\nRecords returned from getList() operation: \n");
    int numRecords = 0;
    ReadResponse[] getResponses = getResponseList.getReadResponse();
    for (int i = 0; i < getResponses.length; i++) {
        _console.info("\n Record[" + i + "]: ");
        if (!getResponses[i].getStatus().isIsSuccess()) {
            _console.errorForRecord(getStatusDetails(getResponses[i].getStatus()));
        } else {
            numRecords++;
            Customer customer = (Customer) getResponses[i].getRecord();
            _console.info("   internalId=" + customer.getInternalId()
                    + "\n   entityId=" + customer.getEntityId()
                    + (customer.getCompanyName() == null ? ""
                            : ("\n   companyName=" + customer.getCompanyName()))
                    + (customer.getEntityStatus() == null ? ""
                            : ("\n   status=" + customer.getEntityStatus().getName()))
                    + (customer.getEmail() == null ? ""
                            : ("\n   email=" + customer.getEmail()))
                    + (customer.getPhone() == null ? ""
                            : ("\n   phone=" + customer.getPhone()))
                    + "\n   isInactive=" + customer.getIsInactive()
                    + (customer.getDateCreated() == null ? ""
                            : ("\n   dateCreated=" + customer.getDateCreated().toString())));
        }
    }
    return numRecords;
}
So, as you can see, this method needs the internal ID of each Customer, which is not useful in my case, as I have many Customers and I don't want to pass in each Customer's ID. I read their API docs (which I find hard to navigate and not very helpful) and found an operation called getAll() that returns all the records of a given getAllRecordType. However, getAllRecordType does not support Customer entities, so I can't obtain all the customers of the ERP this way.
Is there an easy way to obtain all the Customers in my NetSuite ERP (maybe using something other than the SOAP web services they offer)? I am desperate about this situation, as understanding NetSuite's web services API has been really troublesome.
Thanks!
You would normally use a search to select a list of customers. On a large account you would not normally get all customers on any regular basis. If you are trying to get the invoices you might just find it more practical to get those with a search.
You wrote "in your company". Are you trying to write an application of some sort? If this is an internal project (and even if it's not) you'll probably find using SuiteScripts much more efficient in terms of your time and frustration level.
I made it work using the following code in my getCustomerList method:
CustomerSearch customerSrch = new CustomerSearch();
SearchResult searchResult = _port.search(customerSrch);
System.out.println(searchResult.getTotalRecords());
RecordList rl = searchResult.getRecordList();
for (int i = 0; i < rl.getRecord().length; i++) {
    Record r = rl.getRecord(i);
    System.out.println("Customer # " + i);
    Customer testcust = (Customer) r;
    System.out.println("First Name: " + testcust.getFirstName());
}
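One caveat worth checking (this is an assumption about your account's search page size, not something from the original post): search results come back in pages, so searchResult may hold only the first page even though getTotalRecords() reports the full count. A sketch of walking the remaining pages with searchMoreWithId, using the same _port stub:

SearchResult page = _port.search(new CustomerSearch());
while (true) {
    // process the records of the current page
    for (Record r : page.getRecordList().getRecord()) {
        Customer customer = (Customer) r;
        System.out.println("First Name: " + customer.getFirstName());
    }
    if (page.getPageIndex() >= page.getTotalPages()) {
        break; // last page reached
    }
    // fetch the next page of the same search
    page = _port.searchMoreWithId(page.getSearchId(), page.getPageIndex() + 1);
}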
// PRE-SET VARIABLES: symbolsToCheck, time
for (String s : symbolsToCheck) {
    String fileName = "daylogs-" + time + "/" + s + ".txt";
    File daylog = new File(fileName);
    if (!daylog.exists()) {
        if (!daylog.createNewFile()) {
            System.out.println("ERROR creating day log for " + s);
        } else {
            System.out.println("Day log created: " + daylog.getCanonicalPath());
        }
    } else {
        System.out.println("ERROR day log already exists for " + s);
    }
}
Nothing is output from this, and I've confirmed that symbolsToCheck is populated (roughly a dozen strings). I can also confirm that time is set (an integer timestamp) well before this snippet is called. I've been scratching my head for quite some time now; any ideas?
I've found the solution. From a related post and Tom's suggestion, I determined that the file creation was breaking because I was trying to create a new folder and file at the same time, which does not work with createNewFile(). I followed the suggestion in the related post, and file creation now works as expected.
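For anyone landing here later, a sketch of that fix with the question's variables: create the missing parent folder with mkdirs() first, because createNewFile() does not create directories (it throws an IOException when the parent folder does not exist).

String fileName = "daylogs-" + time + "/" + s + ".txt";
File daylog = new File(fileName);
File parent = daylog.getParentFile();
// create the "daylogs-<time>" folder first; createNewFile() will not do this
if (parent != null && !parent.exists() && !parent.mkdirs()) {
    System.out.println("ERROR creating day log directory for " + s);
} else if (daylog.exists()) {
    System.out.println("ERROR day log already exists for " + s);
} else if (daylog.createNewFile()) {
    System.out.println("Day log created: " + daylog.getCanonicalPath());
} else {
    System.out.println("ERROR creating day log for " + s);
}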