I am creating a JmDNS service from another machine running Windows XP by:
JmDNS dns = JmDNS.create("localhost");
dns.registerService(ServiceInfo.create("_sreviceTest._tcp.local.", "Test-Service", 8765, "Description"));
If I run another client that resolves services created by JmDNS, it finds the service, irrespective of the machine. But if I try to find the same service through avahi-browse, it cannot be resolved, and I get the following output:
avahi-browse -rv _sreviceTest._tcp
Server version: avahi 0.6.30; Host name: ubuntu.local
E Ifce Prot Name Type Domain
+ eth0 IPv4 Test-Service _sreviceTest._tcp local
Failed to resolve service 'Test-Service' of type '_sreviceTest._tcp' in domain 'local': Timeout reached
It is a bug in the JmDNS 3.4.1 library; see BUG for details. The workaround is to register the service with a properties map rather than a plain text description:
this.service_type = "_ros-master._tcp.local.";
this.service_name = "RosMaster";
this.service_port = 8888;
String service_key = "description"; // Max 9 chars
String service_text = "Hypothetical ros master";
HashMap<String, byte[]> properties = new HashMap<String, byte[]>();
properties.put(service_key, service_text.getBytes());
// service_info = ServiceInfo.create(service_type, service_name, service_port); // case 1, no text
// service_info = ServiceInfo.create(service_type, service_name, service_port, 0, 0, true, service_text); // case 2, textual description
service_info = ServiceInfo.create(service_type, service_name, service_port, 0, 0, true, properties); // case 3, properties assigned textual description
jmdns.registerService(service_info);
Case 1 and case 2 create services detectable by avahi, but they fail to resolve.
Case 3 works fine.
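For completeness, here is a minimal, self-contained sketch of the working case-3 registration (assuming the JmDNS 3.4.1 API; the service type, name, port, and property values are the illustrative ones from the snippet above):

import java.net.InetAddress;
import java.util.HashMap;
import javax.jmdns.JmDNS;
import javax.jmdns.ServiceInfo;

public class RegisterWithProperties {
    public static void main(String[] args) throws Exception {
        JmDNS jmdns = JmDNS.create(InetAddress.getLocalHost());
        // Registering with a properties map (case 3) produces a TXT record
        // that avahi-browse can resolve, working around the 3.4.1 bug.
        HashMap<String, byte[]> properties = new HashMap<String, byte[]>();
        properties.put("description", "Hypothetical ros master".getBytes());
        ServiceInfo serviceInfo = ServiceInfo.create("_ros-master._tcp.local.",
                "RosMaster", 8888, 0, 0, true, properties);
        jmdns.registerService(serviceInfo);
        Thread.sleep(60000); // keep the JVM alive so the service stays visible
        jmdns.unregisterAllServices();
        jmdns.close();
    }
}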
I'm trying to decode a testnet transaction using bitcoinj 0.14.7.
This is HEX of the transaction I'm trying to decode:
02000000000101ef4c1c3c60028b050a5798265c0b37719418dd71b0621e76a68a47d5d9eef55f000000001716001467d7c32c0dad98bf8dd3dea81cbb1dbd4ea4afb5feffffff02b7df02000000000017a914ebb55a85454dc15589b8a87bab4b438892b54c0c87b0ad0100000000001976a914140b85a1c430c8f4fd1f91ae1c7451902b8ce76c88ac024730440220359623836f97e9e4c04917455ed2f9fb2343b0bf96853d47313b0c96d828c889022046da37bfda03cf3481e3ef5c5b5e655d6fd280c2aec2831b6494a8207b76655b01210224c44e1af98b5c28ebf822b65e4a2872d0780b4b6935b3100f60d3ac3b78cb00b2f81500
When I go to BlockCypher (https://live.blockcypher.com/btc/decodetx/) and decode the transaction there, it is decoded without a problem. But when I try to do this:
Transaction tx = new Transaction(params, HexUtils.hexToBytes(txHex));
LOGGER.info(tx.toString());
it prints
4c67f1e1b10b063210e59400466383fb18634c05430d4f53795a16216dd34ffd
version 2
INCOMPLETE: No inputs!
out [exception: Push of data element that is larger than remaining data]
prps UNKNOWN
Also, I checked my code against master and it worked like a charm! Here is the output:
65b47da760fb781a80e8607e19c82454f12c4ee8dd699045d4e96c869e07bf25
version 2
time locked until block 1439922
in PUSHDATA(22)[001467d7c32c0dad98bf8dd3dea81cbb1dbd4ea4afb5]
witness:30440220359623836f97e9e4c04917455ed2f9fb2343b0bf96853d47313b0c96d828c889022046da37bfda03cf3481e3ef5c5b5e655d6fd280c2aec2831b6494a8207b76655b01 0224c44e1af98b5c28ebf822b65e4a2872d0780b4b6935b3100f60d3ac3b78cb00
outpoint:5ff5eed9d5478aa6761e62b071dd189471370b5c2698570a058b02603c1c4cef:0
sequence:fffffffe
out HASH160 PUSHDATA(20)[ebb55a85454dc15589b8a87bab4b438892b54c0c] EQUAL 0.00188343 BTC
P2SH addr:2NEjY32rnrCdi8Cve6yJ4RaPanugBnJ8fme
out DUP HASH160 PUSHDATA(20)[140b85a1c430c8f4fd1f91ae1c7451902b8ce76c] EQUALVERIFY CHECKSIG 0.0011 BTC
P2PKH addr:mhLwcTEoquZcAjT34fD4uPySUAdK77uqvL
prps UNKNOWN
Please, help!
Your code is just creating a transaction object, and that's not enough. Your requirement is to fetch the transaction from the network, so you need to connect to a testnet node. Just create a PeerGroup and connect to a local or remote Bitcoin testnet node.
Check the example below.
import org.bitcoinj.core.*;
import org.bitcoinj.params.TestNet3Params;
import org.bitcoinj.store.BlockStore;
import org.bitcoinj.store.MemoryBlockStore;
import com.google.common.util.concurrent.ListenableFuture;

final NetworkParameters params = TestNet3Params.get();
BlockStore blockStore = new MemoryBlockStore(params);
BlockChain chain = new BlockChain(params, blockStore);
PeerGroup peerGroup = new PeerGroup(params, chain);
peerGroup.start();
// Alternatively you can connect to localhost or another working testnet node
peerGroup.addAddress(new PeerAddress(params, "testnet-seed.bitcoin.jonasschnelli.ch", 18333));
peerGroup.waitForPeers(1).get();
Peer peer = peerGroup.getConnectedPeers().get(0);
Sha256Hash txHash = Sha256Hash.wrap(hexString); // hexString is the transaction id (hash), not the raw tx hex
ListenableFuture<Transaction> future = peer.getPeerMempoolTransaction(txHash);
System.out.println("Waiting for node to send us the requested transaction: " + txHash);
Transaction tx = future.get();
System.out.println(tx);
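Two caveats worth noting about this approach: getPeerMempoolTransaction asks the connected peer for the transaction from its memory pool, so it will generally only return transactions that are still unconfirmed; once the transaction is buried in a block you would have to fetch the containing block instead (e.g. via Peer.getBlock). Separately, the raw hex in the question is a segwit transaction (the 0001 marker/flag bytes right after the 02000000 version), and a parser without segwit support reads that marker as an empty input list, which matches the "INCOMPLETE: No inputs!" output from 0.14.7; segwit parsing landed in bitcoinj after that release, which is why master decodes it correctly.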
I'm writing an app that consists of 4 chained MapReduce jobs, which run on Amazon EMR. I'm using the JobFlow interface to chain the jobs. Each job is contained in its own class and has its own main method. All of these are packed into a .jar which is saved in S3, and the cluster is initialized from a small local app on my laptop, which configures the JobFlowRequest and submits it to EMR.
Most of my attempts to start the cluster fail with the error message Terminated with errors On the master instance (i-<cluster number>), bootstrap action 1 timed out executing. I looked up information on this issue, and all I could find is that this exception is thrown when the combined bootstrap time of the cluster exceeds 45 minutes. However, the failure occurs only ~15 minutes after the request is submitted to EMR, regardless of the requested cluster size, be it 4 EC2 instances, 10, or even 20. This makes no sense to me at all; what am I missing?
Some tech specs:
- The project is compiled with Java 1.7.0_79
- The requested EMR image is 4.6.0, which uses Hadoop 2.7.2
- I'm using the AWS SDK for Java v1.10.64
This is my local main method, which sets up and submits the JobFlowRequest:
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.ec2.model.InstanceType;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient;
import com.amazonaws.services.elasticmapreduce.model.*;
public class ExtractRelatedPairs {
public static void main(String[] args) throws Exception {
if (args.length != 1) {
System.err.println("Usage: ExtractRelatedPairs: <k>");
System.exit(1);
}
int outputSize = Integer.parseInt(args[0]);
if (outputSize <= 0) {
System.err.println("k should be positive");
System.exit(1);
}
AWSCredentials credentials = null;
try {
credentials = new ProfileCredentialsProvider().getCredentials();
} catch (Exception e) {
throw new AmazonClientException(
"Cannot load the credentials from the credential profiles file. " +
"Please make sure that your credentials file is at the correct " +
"location (~/.aws/credentials), and is in valid format.",
e);
}
AmazonElasticMapReduce mapReduce = new AmazonElasticMapReduceClient(credentials);
HadoopJarStepConfig jarStep1 = new HadoopJarStepConfig()
.withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
.withMainClass("Phase1")
.withArgs("s3://datasets.elasticmapreduce/ngrams/books/20090715/eng-gb-all/5gram/data/", "hdfs:///output1/");
StepConfig step1Config = new StepConfig()
.withName("Phase 1")
.withHadoopJarStep(jarStep1)
.withActionOnFailure("TERMINATE_JOB_FLOW");
HadoopJarStepConfig jarStep2 = new HadoopJarStepConfig()
.withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
.withMainClass("Phase2")
.withArgs("shdfs:///output1/", "hdfs:///output2/");
StepConfig step2Config = new StepConfig()
.withName("Phase 2")
.withHadoopJarStep(jarStep2)
.withActionOnFailure("TERMINATE_JOB_FLOW");
HadoopJarStepConfig jarStep3 = new HadoopJarStepConfig()
.withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
.withMainClass("Phase3")
.withArgs("hdfs:///output2/", "hdfs:///output3/", args[0]);
StepConfig step3Config = new StepConfig()
.withName("Phase 3")
.withHadoopJarStep(jarStep3)
.withActionOnFailure("TERMINATE_JOB_FLOW");
HadoopJarStepConfig jarStep4 = new HadoopJarStepConfig()
.withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
.withMainClass("Phase4")
.withArgs("hdfs:///output3/", "s3n://dsps162assignment2benasaf/output4");
StepConfig step4Config = new StepConfig()
.withName("Phase 4")
.withHadoopJarStep(jarStep4)
.withActionOnFailure("TERMINATE_JOB_FLOW");
JobFlowInstancesConfig instances = new JobFlowInstancesConfig()
.withInstanceCount(10)
.withMasterInstanceType(InstanceType.M1Small.toString())
.withSlaveInstanceType(InstanceType.M1Small.toString())
.withHadoopVersion("2.7.2")
.withEc2KeyName("AWS")
.withKeepJobFlowAliveWhenNoSteps(false)
.withPlacement(new PlacementType("us-east-1a"));
RunJobFlowRequest runFlowRequest = new RunJobFlowRequest()
.withName("extract-related-word-pairs")
.withInstances(instances)
.withSteps(step1Config, step2Config, step3Config, step4Config)
.withJobFlowRole("EMR_EC2_DefaultRole")
.withServiceRole("EMR_DefaultRole")
.withReleaseLabel("emr-4.6.0")
.withLogUri("s3n://dsps162assignment2benasaf/logs/");
System.out.println("Submitting the JobFlow Request to Amazon EMR and running it...");
RunJobFlowResult runJobFlowResult = mapReduce.runJobFlow(runFlowRequest);
String jobFlowId = runJobFlowResult.getJobFlowId();
System.out.println("Ran job flow with id: " + jobFlowId);
}
}
A while back, I encountered a similar issue, where even a vanilla EMR cluster of 4.6.0 was failing to get past startup and was therefore throwing a timeout error on the bootstrap step.
I ended up creating a cluster on a different/new VPC in a different region, and it worked fine, which led me to believe there may be a problem with either the original VPC itself or the software in 4.6.0.
Also, regarding the VPC: it was specifically having an issue setting and resolving DNS names for the newly created cluster nodes, even though older versions of EMR did not have this problem.
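If you would rather fix the original VPC than create a new one, the DNS attributes the cluster nodes depend on can be enabled programmatically. A minimal sketch using the same AWS SDK for Java as above (the VPC ID is a placeholder; note that EC2 accepts only one attribute per ModifyVpcAttribute call):

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.ModifyVpcAttributeRequest;

public class EnableVpcDns {
    public static void main(String[] args) {
        AmazonEC2 ec2 = new AmazonEC2Client(); // uses the default credential chain
        // EC2 allows only one attribute per call, so DNS support and
        // DNS hostnames are enabled in two separate requests.
        ec2.modifyVpcAttribute(new ModifyVpcAttributeRequest()
                .withVpcId("vpc-12345678") // placeholder: your VPC's ID
                .withEnableDnsSupport(true));
        ec2.modifyVpcAttribute(new ModifyVpcAttributeRequest()
                .withVpcId("vpc-12345678")
                .withEnableDnsHostnames(true));
    }
}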
I am using the P4Java library in my build.gradle file to sync a large zip file (>200 MB) residing in a remote Perforce repository, but I am encountering a "java.net.SocketTimeoutException: Read timed out" error either during the sync process or (mostly) while deleting the temporary client created for the sync operation. I am following http://razgulyaev.blogspot.in/2011/08/p4-java-api-how-to-work-with-temporary.html for working with temporary clients using the P4Java API.
I tried increasing the socket read timeout from the default 30 sec as suggested in http://answers.perforce.com/articles/KB/8044 and also introducing sleeps, but neither approach solved the problem. Probing the server to verify the connection using getServerInfo() right before performing the sync or delete operations results in a successful connection check. Can someone please point me to where I should look for answers?
Thank you.
Providing the code snippet:
void perforceSync(String srcPath, String destPath, String server) {
// Generating the file(s) to sync-up
String[] pathUnderDepot = [
srcPath + "*"
]
// Increasing timeout from default 30 sec to 60 sec
Properties defaultProps = new Properties()
defaultProps.put(PropertyDefs.PROG_NAME_KEY, "CustomBuildApp")
defaultProps.put(PropertyDefs.PROG_VERSION_KEY, "tv_1.0")
defaultProps.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000")
// Instantiating the server
IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
p4Server.connect()
// Authorizing
p4Server.setUserName("perforceUserName")
p4Server.login("perforcePassword")
// Just check if connected successfully
IServerInfo serverInfo = p4Server.getServerInfo()
println 'Server info: ' + serverInfo.getServerLicense()
// Creating new client
IClient tempClient = new Client()
// Setting up the name and the root folder
tempClient.setName("tempClient" + UUID.randomUUID().toString().replace("-", ""))
tempClient.setRoot(destPath)
tempClient.setServer(p4Server)
// Setting the client as the current one for the server
p4Server.setCurrentClient(tempClient)
// Creating Client View entry
ClientViewMapping tempMappingEntry = new ClientViewMapping()
// Setting up the mapping properties
tempMappingEntry.setLeft(srcPath + "...")
tempMappingEntry.setRight("//" + tempClient.getName() + "/...")
tempMappingEntry.setType(EntryType.INCLUDE)
// Creating Client view
ClientView tempClientView = new ClientView()
// Attaching client view entry to client view
tempClientView.addEntry(tempMappingEntry)
tempClient.setClientView(tempClientView)
// Registering the new client on the server
println p4Server.createClient(tempClient)
// Forming the FileSpec collection to be synced-up
List<IFileSpec> fileSpecsSet = FileSpecBuilder.makeFileSpecList(pathUnderDepot)
// Surrounding the underlying block with try as we want some action
// (namely client removal) to be performed in any case;
// the list is declared above the try so the catch block can also see it
try {
// Syncing up the client
println "Syncing..."
tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
}
catch (Exception e) {
println "Sync failed. Trying again..."
sleep(60 * 1000)
tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
}
finally {
println "Done syncing."
try {
p4Server.connect()
IServerInfo serverInfo2 = p4Server.getServerInfo()
println '\nServer info: ' + serverInfo2.getServerLicense()
// Removing the temporary client from the server
println p4Server.deleteClient(tempClient.getName(), false)
}
catch(Exception e) {
println 'Ignoring exception caught while deleting tempClient!'
/*sleep(60 * 1000)
p4Server.connect()
IServerInfo serverInfo3 = p4Server.getServerInfo()
println '\nServer info: ' + serverInfo3.getServerLicense()
sleep(60 * 1000)
println p4Server.deleteClient(tempClient.getName(), false)*/
}
}
}
One unusual thing I observed while deleting tempClient: it was actually deleting the client, yet still throwing "java.net.SocketTimeoutException: Read timed out", which is why I ended up commenting out the second delete attempt in the second catch block.
Which version of P4Java are you using? Have you tried this with the newest P4Java? There are notable fixes dealing with RPC sockets from the 2013.2 version forward, as can be seen in the release notes:
http://www.perforce.com/perforce/doc.current/user/p4javanotes.txt
Here are some variations that you can try in the part of your code that increases the timeout and instantiates the server:
a] Have you tried passing props as its own argument? For example:
Properties prop = new Properties();
prop.setProperty(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "300000");
UsageOptions uop = new UsageOptions(prop);
server = ServerFactory.getOptionsServer(ServerFactory.DEFAULT_PROTOCOL_NAME + "://" + serverPort, prop, uop);
Or something like the following:
IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
You can also set the timeout to "0" to give it no timeout.
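For instance, the same property with "0" (no timeout):
prop.setProperty(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "0");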
b]
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
props.put(RpcPropertyDefs.RPC_SOCKET_POOL_SIZE_NICK, "5");
c]
Properties props = System.getProperties();
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
IOptionsServer server =
ServerFactory.getOptionsServer("p4java://perforce:1666", props, null);
d] In case you have Eclipse users using our P4Eclipse plugin, the property can be set in the plugin preferences (Team->Perforce->Advanced) under the Custom P4Java Properties.
"sockSoTimeout" : "3000000"
REFERENCES
Class RpcPropertyDefs
http://perforce.com/perforce/doc.current/manuals/p4java-javadoc/com/perforce/p4java/impl/mapbased/rpc/RpcPropertyDefs.html
P4Eclipse or P4Java: SocketTimeoutException: Read timed out
http://answers.perforce.com/articles/KB/8044
I am trying to attach a new volume to my instance using JClouds. But I can't find a way to do it.
final String POLL_PERIOD_TWENTY_SECONDS = String.valueOf(SECONDS.toMillis(20));
Properties overrides = new Properties();
overrides.setProperty(ComputeServiceProperties.POLL_INITIAL_PERIOD, POLL_PERIOD_TWENTY_SECONDS);
overrides.setProperty(ComputeServiceProperties.POLL_MAX_PERIOD, POLL_PERIOD_TWENTY_SECONDS);
Iterable<Module> modules = ImmutableSet.<Module> of(new SshjSshClientModule(), new SLF4JLoggingModule());
//Iterable<Module> modules = ImmutableSet.<Module> of(new SshjSshClientModule());
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
.credentials("valid user", "valid password")
.modules(modules)
.overrides(overrides)
.buildView(ComputeServiceContext.class);
ComputeService computeService = context.getComputeService();
// Ubuntu AMI
Template template = computeService.templateBuilder()
.locationId("us-east-1")
.imageId("us-east-1/ami-7c807d14")
.hardwareId("t1.micro")
.build();
// This line creates the volume but does not attach it
template.getOptions().as(EC2TemplateOptions.class).mapNewVolumeToDeviceName("/dev/sdm", 100, true);
Set<? extends NodeMetadata> nodes = computeService.createNodesInGroup("m456", 1, template);
for (NodeMetadata nodeMetadata : nodes) {
String publicAddress = nodeMetadata.getPublicAddresses().iterator().next();
//String privateAddress = nodeMetadata.getPrivateAddresses().iterator().next();
String username = nodeMetadata.getCredentials().getUser();
String password = nodeMetadata.getCredentials().getPassword();
// [...]
System.out.println(String.format("ssh %s#%s %s", username, publicAddress, password));
System.out.println(nodeMetadata.getCredentials().getPrivateKey());
}
How can I create and attach a volume to the directory "/var"?
How can I create the instance with more hard disk space?
You can use the ElasticBlockStoreApi in jclouds for this. You have already created the instance, so create and attach the volume like this:
// Get node
NodeMetadata node = Iterables.getOnlyElement(nodes);
// Get AWS EC2 API
EC2Api ec2Api = computeServiceContext.unwrapApi(EC2Api.class);
// Create 100 GiB Volume
Volume volume = ec2Api.getElasticBlockStoreApi().get()
.createVolumeInAvailabilityZone(zoneId, 100);
// Attach to instance
Attachment attachment = ec2Api.getElasticBlockStoreApi().get()
.attachVolumeInRegion(region, volume.getId(), node.getId(), "/dev/sdx");
Now you have an EBS volume attached, and the VM has a block device created for it; you just need to run the right commands to mount it on the /var directory, which will depend on your particular operating system. You can run a script like this:
// Run script on instance
computeService.runScriptOnNode(node.getId(),
Statements.exec("mount /dev/sdx /var"));
I'm experimenting with Java-flavored ZeroMQ (jzmq) to test the benefits of using PGM over TCP in my project. So I changed the weather example from the zmq guide to use the epgm transport.
Everything compiles and runs, but nothing is being sent or received. If I change the transport back to TCP, the server receives the messages sent from the client and I get the console output I'm expecting.
So, what are the requirements for using PGM? I changed the string that I'm passing to the bind and connect methods to follow the zmq API for zmq_pgm: "transport://interface;multicast address:port". That didn't work: I get an invalid argument error whenever I attempt to use this format. So I simplified it by dropping the interface and semicolon, which "works", but I'm not getting any results.
I haven't been able to find a jzmq example that uses pgm/epgm, and the API documentation for the Java binding does not define the appropriate string format for an endpoint passed to bind or connect. So what am I missing here? Do I have to use different hosts for the client and the server?
One thing of note is that I'm running my code on a VirtualBox VM (Ubuntu 14.04/OSX Mavericks host). I'm not sure if that has anything to do with the issue I'm currently facing.
Server:
import java.util.Random;
import org.zeromq.ZMQ;

public class wuserver {
public static void main (String[] args) throws Exception {
// Prepare our context and publisher
ZMQ.Context context = ZMQ.context(1);
ZMQ.Socket publisher = context.socket(ZMQ.PUB);
publisher.bind("epgm://xx.x.x.xx:5556");
publisher.bind("ipc://weather");
// Initialize random number generator
Random srandom = new Random(System.currentTimeMillis());
while (!Thread.currentThread ().isInterrupted ()) {
// Get values that will fool the boss
int zipcode, temperature, relhumidity;
zipcode = 10000 + srandom.nextInt(10000) ;
temperature = srandom.nextInt(215) - 80 + 1;
relhumidity = srandom.nextInt(50) + 10 + 1;
// Send message to all subscribers
String update = String.format("%05d %d %d", zipcode, temperature, relhumidity);
publisher.send(update, 0);
}
publisher.close ();
context.term ();
}
}
Client:
import java.util.StringTokenizer;
import org.zeromq.ZMQ;

public class wuclient {
public static void main (String[] args) {
ZMQ.Context context = ZMQ.context(1);
// Socket to talk to server
System.out.println("Collecting updates from weather server");
ZMQ.Socket subscriber = context.socket(ZMQ.SUB);
//subscriber.connect("tcp://localhost:5556");
subscriber.connect("epgm://xx.x.x.xx:5556");
// Subscribe to zipcode, default is NYC, 10001
String filter = (args.length > 0) ? args[0] : "10001 ";
subscriber.subscribe(filter.getBytes());
// Process 100 updates
int update_nbr;
long total_temp = 0;
for (update_nbr = 0; update_nbr < 100; update_nbr++) {
// Use trim to remove the trailing '0' character
String string = subscriber.recvStr(0).trim();
StringTokenizer sscanf = new StringTokenizer(string, " ");
int zipcode = Integer.valueOf(sscanf.nextToken());
int temperature = Integer.valueOf(sscanf.nextToken());
int relhumidity = Integer.valueOf(sscanf.nextToken());
total_temp += temperature;
}
System.out.println("Average temperature for zipcode '"
+ filter + "' was " + (int) (total_temp / update_nbr));
subscriber.close();
context.term();
}
}
There are a couple of possibilities:
You need to make sure ZMQ is compiled with the --with-pgm option: see here. But this doesn't appear to be your issue, since you're not seeing "protocol not supported".
Using raw pgm requires root privileges, because it requires the ability to create raw sockets... but epgm doesn't require that, so it shouldn't be your issue either (I only bring it up because you use the term "pgm/epgm", and you should be aware that they are not equally available in all situations).
What actually appears to be the problem in your case is that pgm/epgm requires support along the network path. In theory it requires support out to your router, so that your application can send a single message and have your router send out multiple messages to each client; but if your server is aware enough, it can probably send out multiple messages immediately and bypass this router support. The problem is that, as you correctly guessed, trying to do this all on one host is not supported.
So, you need different hosts for the client and the server.
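For reference, once the hosts are separated, a bind/connect pair using the full zmq_pgm endpoint format would look something like this (a sketch; the interface name eth0 and the multicast group 239.192.1.1 are illustrative and must match your network, and both peers must use the same group and port):

publisher.bind("epgm://eth0;239.192.1.1:5556");      // on the server host
subscriber.connect("epgm://eth0;239.192.1.1:5556");  // on the client host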
Another bit to be aware of: some virtualization environments (RHEV/oVirt, and libvirt/KVM with the mac_filter option enabled, come to mind) by default neuter one's ability, via (eb|ip)tables, to use multicast between guests. With libvirt, of course, the solution is simply to set the option to '0' and restart libvirtd. RHEV/oVirt requires a custom plugin.
At any rate, I would suggest putting a sniffer on the network devices of each system you are using and verifying that traffic exiting the one host is actually visible on the other.
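For example, with the illustrative multicast group above, running something like tcpdump -i eth0 host 239.192.1.1 on both hosts would confirm whether the traffic is actually making it across.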