I am currently trying to read a certain cookie from my Java desktop application. I have used a Chrome extension to create the cookie, and using the Chrome console I can see that it has been created and has the correct value.
The problem arises when I try to read it from my Java application: my current code gets 3 seemingly random cookies, which wouldn't be a problem if mine were among them. I have ensured my cookie is not host-only and is not secure; it actually has the same properties as the other 3, but it just is not being returned.
Here's my current code:
Java Application Code:
import java.io.IOException;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.CookieStore;
import java.net.URL;
import java.net.URLConnection;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
*
* @author Spud
*/
public class Reading_My_Created_Cookie
{
/**
* @param args the command line arguments
*/
private static final String SEARCH_TERM = "Tab";
private static final String ADDRESS = "http://www.youtube.com/";
private static final CookieManager COOKIEMANAGER = new CookieManager();
private static Object myCookie;
public static void main(String[] args)
{
try
{
COOKIEMANAGER.setCookiePolicy(CookiePolicy.ACCEPT_ALL);
CookieHandler.setDefault(COOKIEMANAGER);
URL url = new URL(ADDRESS);
URLConnection connection = url.openConnection();
connection.getContent();
CookieStore cookieStore = COOKIEMANAGER.getCookieStore();
Object[] cookieJar = cookieStore.getCookies().toArray();
for (int i = 0; i < cookieJar.length; ++i)
{
System.out.println(cookieJar[i]);
}
//myCookie = checkForCookie(cookieJar);
//System.out.println(myCookie);
}
catch (IOException ex)
{
Logger.getLogger(Reading_My_Created_Cookie.class.getName()).log(Level.SEVERE, null, ex);
}
}
private static Object checkForCookie(Object[] cookieJar)
{
for (int i = 0; i < cookieJar.length; ++i)
{
if (cookieJar[i].equals(SEARCH_TERM))
{
return cookieJar[i];
}
}
return "Cookie not found";
}
}
This currently outputs the following:
run:
YSC=6EIDrzvf02s
PREF=f1=50000000
VISITOR_INFO1_LIVE=qKCoNVZQMi8
BUILD SUCCESSFUL (total time: 4 seconds)
For anyone who wants it, this is my current extension JavaScript code:
chrome.browserAction.onClicked.addListener(run);
function run()
{
var cookieName, cookieValue, cookieURL;
cookieName = "Tab";
chrome.tabs.getSelected(null, function(tab)
{
cookieValue = tab.title;
cookieURL = tab.url;
createCookie(cookieName, cookieValue, cookieURL);
});
}
function createCookie(cookieName, cookieValue, cookieURL)
{
chrome.cookies.set({name: cookieName, value: cookieValue, domain: ".youtube.com", url: cookieURL});
alert("cookieName = " + cookieName + " cookieValue/tabTitle = " + cookieValue + " tabURL = " + cookieURL);
}
Thank you for your time; I really hope I can get this sorted, as it is currently a key part of my A-level computing project.
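As a side note on the commented-out checkForCookie lookup: CookieStore.getCookies() returns java.net.HttpCookie objects, so comparing a whole cookie to the SEARCH_TERM string will never match. A minimal sketch of a name-based lookup (the class and method names here are only illustrative) could look like this:
import java.net.CookieStore;
import java.net.HttpCookie;
class CookieLookup
{
    // Sketch: match on the cookie's name instead of comparing the HttpCookie object to a String.
    static HttpCookie findByName(CookieStore store, String name)
    {
        for (HttpCookie cookie : store.getCookies())
        {
            if (name.equals(cookie.getName()))
            {
                return cookie; // cookie.getValue() would then hold the value set by the extension
            }
        }
        return null; // not found
    }
}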
Related
I am trying to implement S3 Select in a Spring Boot app to query a Parquet file in an S3 bucket, but I am only getting a partial result from the S3 Select output. Please help me identify the issue; I have used AWS Java SDK v2.
Upon checking the JSON output (printed in the console), the output is about 65k characters in total.
I am using Eclipse and tried unchecking "Limit console output" in the console preferences, which did not help.
Code is here:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.async.SdkPublisher;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.CompressionType;
import software.amazon.awssdk.services.s3.model.EndEvent;
import software.amazon.awssdk.services.s3.model.ExpressionType;
import software.amazon.awssdk.services.s3.model.InputSerialization;
import software.amazon.awssdk.services.s3.model.JSONOutput;
import software.amazon.awssdk.services.s3.model.OutputSerialization;
import software.amazon.awssdk.services.s3.model.ParquetInput;
import software.amazon.awssdk.services.s3.model.RecordsEvent;
import software.amazon.awssdk.services.s3.model.SelectObjectContentEventStream;
import software.amazon.awssdk.services.s3.model.SelectObjectContentEventStream.EventType;
import software.amazon.awssdk.services.s3.model.SelectObjectContentRequest;
import software.amazon.awssdk.services.s3.model.SelectObjectContentResponse;
import software.amazon.awssdk.services.s3.model.SelectObjectContentResponseHandler;
public class ParquetSelect {
private static final String BUCKET_NAME = "<bucket-name>";
private static final String KEY = "<object-key>";
private static final String QUERY = "select * from S3Object s";
public static S3AsyncClient s3;
public static void selectObjectContent() {
Handler handler = new Handler();
SelectQueryWithHandler(handler).join();
RecordsEvent recordsEvent = (RecordsEvent) handler.receivedEvents.stream()
.filter(e -> e.sdkEventType() == EventType.RECORDS)
.findFirst()
.orElse(null);
System.out.println(recordsEvent.payload().asUtf8String());
}
private static CompletableFuture<Void> SelectQueryWithHandler(SelectObjectContentResponseHandler handler) {
InputSerialization inputSerialization = InputSerialization.builder()
.parquet(ParquetInput.builder().build())
.compressionType(CompressionType.NONE)
.build();
OutputSerialization outputSerialization = OutputSerialization.builder()
.json(JSONOutput.builder().build())
.build();
SelectObjectContentRequest select = SelectObjectContentRequest.builder()
.bucket(BUCKET_NAME)
.key(KEY)
.expression(QUERY)
.expressionType(ExpressionType.SQL)
.inputSerialization(inputSerialization)
.outputSerialization(outputSerialization)
.build();
return s3.selectObjectContent(select, handler);
}
private static class Handler implements SelectObjectContentResponseHandler {
private SelectObjectContentResponse response;
private List<SelectObjectContentEventStream> receivedEvents = new ArrayList<>();
private Throwable exception;
@Override
public void responseReceived(SelectObjectContentResponse response) {
this.response = response;
}
@Override
public void onEventStream(SdkPublisher<SelectObjectContentEventStream> publisher) {
publisher.subscribe(receivedEvents::add);
}
@Override
public void exceptionOccurred(Throwable throwable) {
exception = throwable;
}
@Override
public void complete() {
}
}
}
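One thing worth checking in the code above, which may relate to the partial output: the handler collects every event from the stream, but only the first RECORDS event is printed. A minimal sketch that concatenates all RECORDS events from the same handler (placed inside selectObjectContent(), after the query future completes) would be:
// Sketch: the select response can arrive as several RECORDS events; append them all.
StringBuilder json = new StringBuilder();
for (SelectObjectContentEventStream event : handler.receivedEvents) {
    if (event.sdkEventType() == EventType.RECORDS) {
        json.append(((RecordsEvent) event).payload().asUtf8String());
    }
}
System.out.println(json);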
I see you are using selectObjectContent(). Have you tried calling the s3AsyncClient.getObject() method? Does that work for you?
For example, here is a code example that gets a PDF file from an Amazon S3 bucket and writes the PDF file to a local file.
package com.example.s3.async;
// snippet-start:[s3.java2.async_stream_ops.complete]
// snippet-start:[s3.java2.async_stream_ops.import]
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.async.AsyncResponseTransformer;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import java.nio.file.Paths;
import java.util.concurrent.CompletableFuture;
// snippet-end:[s3.java2.async_stream_ops.import]
// snippet-start:[s3.java2.async_stream_ops.main]
/**
* Before running this Java V2 code example, set up your development environment, including your credentials.
*
* For more information, see the following documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class S3AsyncStreamOps {
public static void main(String[] args) {
final String usage = "\n" +
"Usage:\n" +
" <bucketName> <objectKey> <path>\n\n" +
"Where:\n" +
" bucketName - The name of the Amazon S3 bucket (for example, bucket1). \n\n" +
" objectKey - The name of the object (for example, book.pdf). \n" +
" path - The local path to the file (for example, C:/AWS/book.pdf). \n" ;
if (args.length != 3) {
System.out.println(usage);
System.exit(1);
}
String bucketName = args[0];
String objectKey = args[1];
String path = args[2];
ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
Region region = Region.US_EAST_1;
S3AsyncClient s3AsyncClient = S3AsyncClient.builder()
.region(region)
.credentialsProvider(credentialsProvider)
.build();
GetObjectRequest objectRequest = GetObjectRequest.builder()
.bucket(bucketName)
.key(objectKey)
.build();
CompletableFuture<GetObjectResponse> futureGet = s3AsyncClient.getObject(objectRequest,
AsyncResponseTransformer.toFile(Paths.get(path)));
futureGet.whenComplete((resp, err) -> {
try {
if (resp != null) {
System.out.println("Object downloaded. Details: "+resp);
} else {
err.printStackTrace();
}
} finally {
// Only close the client when you are completely done with it.
s3AsyncClient.close();
}
});
futureGet.join();
}
}
I'm not able to find a way to read messages from Pub/Sub using Java.
I'm using this Maven dependency in my POM:
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-pubsub</artifactId>
<version>0.17.2-alpha</version>
</dependency>
I implemented this main method to create a new topic:
public static void main(String... args) throws Exception {
// Your Google Cloud Platform project ID
String projectId = ServiceOptions.getDefaultProjectId();
// Your topic ID
String topicId = "my-new-topic-1";
// Create a new topic
TopicName topic = TopicName.create(projectId, topicId);
try (TopicAdminClient topicAdminClient = TopicAdminClient.create()) {
topicAdminClient.createTopic(topic);
}
}
The above code works well and, indeed, I can see the new topic I created using the Google Cloud console.
I implemented the following main method to write a message to my topic:
public static void main(String a[]) throws InterruptedException, ExecutionException{
String projectId = ServiceOptions.getDefaultProjectId();
String topicId = "my-new-topic-1";
String payload = "Hellooooo!!!";
PubsubMessage pubsubMessage =
PubsubMessage.newBuilder().setData(ByteString.copyFromUtf8(payload)).build();
TopicName topic = TopicName.create(projectId, topicId);
Publisher publisher;
try {
publisher = Publisher.defaultBuilder(
topic)
.build();
publisher.publish(pubsubMessage);
System.out.println("Sent!");
} catch (IOException e) {
System.out.println("Not Sended!");
e.printStackTrace();
}
}
Now I'm not able to verify if this message was really sent.
I would like to implement a message reader using a subscription to my topic.
Could someone show me a correct and working Java example of reading messages from a topic?
Can anyone help me?
Thanks in advance!
Here is the version using the Google Cloud client libraries.
package com.techm.data.client;
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.common.util.concurrent.MoreExecutors;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.ProjectTopicName;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.PushConfig;
/**
* A snippet for Google Cloud Pub/Sub showing how to create a Pub/Sub pull
* subscription and asynchronously pull messages from it.
*/
public class CreateSubscriptionAndConsumeMessages {
private static String projectId = "projectId";
private static String topicId = "topicName";
private static String subscriptionId = "subscriptionName";
public static void createSubscription() throws Exception {
ProjectTopicName topic = ProjectTopicName.of(projectId, topicId);
ProjectSubscriptionName subscription = ProjectSubscriptionName.of(projectId, subscriptionId);
try (SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create()) {
subscriptionAdminClient.createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 0);
}
}
public static void main(String... args) throws Exception {
ProjectSubscriptionName subscription = ProjectSubscriptionName.of(projectId, subscriptionId);
createSubscription();
MessageReceiver receiver = new MessageReceiver() {
@Override
public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
System.out.println("Received message: " + message.getData().toStringUtf8());
consumer.ack();
}
};
Subscriber subscriber = null;
try {
subscriber = Subscriber.newBuilder(subscription, receiver).build();
subscriber.addListener(new Subscriber.Listener() {
@Override
public void failed(Subscriber.State from, Throwable failure) {
// Handle failure. This is called when the Subscriber encountered a fatal error
// and is
// shutting down.
System.err.println(failure);
}
}, MoreExecutors.directExecutor());
subscriber.startAsync().awaitRunning();
// In this example, we will pull messages for one minute (60,000ms) then stop.
// In a real application, this sleep-then-stop is not necessary.
// Simply call stopAsync().awaitTerminated() when the server is shutting down,
// etc.
Thread.sleep(60000);
} finally {
if (subscriber != null) {
subscriber.stopAsync().awaitTerminated();
}
}
}
}
This is working fine for me.
The Cloud Pub/Sub Pull Subscriber Guide has sample code for reading messages from a topic.
I haven't used the Google Cloud client libraries, but I have used the API client libraries. Here is how I created a subscription.
package com.techm.datapipeline.client;
import java.io.IOException;
import java.security.GeneralSecurityException;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.client.http.HttpStatusCodes;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Create;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Get;
import com.google.api.services.pubsub.Pubsub.Projects.Topics;
import com.google.api.services.pubsub.model.ExpirationPolicy;
import com.google.api.services.pubsub.model.Subscription;
import com.google.api.services.pubsub.model.Topic;
import com.techm.datapipeline.factory.PubsubFactory;
public class CreatePullSubscriberClient {
private final static String PROJECT_NAME = "yourProjectId";
private final static String TOPIC_NAME = "yourTopicName";
private final static String SUBSCRIPTION_NAME = "yourSubscriptionName";
public static void main(String[] args) throws IOException, GeneralSecurityException {
Pubsub pubSub = PubsubFactory.getService();
String topicName = String.format("projects/%s/topics/%s", PROJECT_NAME, TOPIC_NAME);
String subscriptionName = String.format("projects/%s/subscriptions/%s", PROJECT_NAME, SUBSCRIPTION_NAME);
Topics.Get listReq = pubSub.projects().topics().get(topicName);
Topic topic = listReq.execute();
if (topic == null) {
System.err.println("Topic doesn't exist...run CreateTopicClient...to create the topic");
System.exit(0);
}
Subscription subscription = null;
try {
Get getReq = pubSub.projects().subscriptions().get(subscriptionName);
subscription = getReq.execute();
} catch (GoogleJsonResponseException e) {
if (e.getStatusCode() == HttpStatusCodes.STATUS_CODE_NOT_FOUND) {
System.out.println("Subscription " + subscriptionName + " does not exist...will create it");
}
}
if (subscription != null) {
System.out.println("Subscription already exists ==> " + subscription.toPrettyString());
System.exit(0);
}
subscription = new Subscription();
subscription.setTopic(topicName);
subscription.setPushConfig(null); // indicating a pull
ExpirationPolicy expirationPolicy = new ExpirationPolicy();
expirationPolicy.setTtl(null); // never expires;
subscription.setExpirationPolicy(expirationPolicy);
subscription.setAckDeadlineSeconds(null); // so defaults to 10 sec
subscription.setRetainAckedMessages(true);
Long _week = 7L * 24 * 60 * 60;
subscription.setMessageRetentionDuration(String.valueOf(_week)+"s");
subscription.setName(subscriptionName);
Create createReq = pubSub.projects().subscriptions().create(subscriptionName, subscription);
Subscription createdSubscription = createReq.execute();
System.out.println("Subscription created ==> " + createdSubscription.toPrettyString());
}
}
And once you create the subscription (pull type)...this is how you pull the messages from the topic.
package com.techm.datapipeline.client;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.List;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.client.http.HttpStatusCodes;
import com.google.api.client.util.Base64;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Acknowledge;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Get;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Pull;
import com.google.api.services.pubsub.model.AcknowledgeRequest;
import com.google.api.services.pubsub.model.Empty;
import com.google.api.services.pubsub.model.PullRequest;
import com.google.api.services.pubsub.model.PullResponse;
import com.google.api.services.pubsub.model.ReceivedMessage;
import com.techm.datapipeline.factory.PubsubFactory;
public class PullSubscriptionsClient {
private final static String PROJECT_NAME = "yourProjectId";
private final static String SUBSCRIPTION_NAME = "yourSubscriptionName";
private final static String SUBSCRIPTION_NYC_NAME = "test";
public static void main(String[] args) throws IOException, GeneralSecurityException {
Pubsub pubSub = PubsubFactory.getService();
String subscriptionName = String.format("projects/%s/subscriptions/%s", PROJECT_NAME, SUBSCRIPTION_NAME);
//String subscriptionName = String.format("projects/%s/subscriptions/%s", PROJECT_NAME, SUBSCRIPTION_NYC_NAME);
try {
Get getReq = pubSub.projects().subscriptions().get(subscriptionName);
getReq.execute();
} catch (GoogleJsonResponseException e) {
if (e.getStatusCode() == HttpStatusCodes.STATUS_CODE_NOT_FOUND) {
System.out.println("Subscription " + subscriptionName
+ " does not exist...run CreatePullSubscriberClient to create");
}
}
PullRequest pullRequest = new PullRequest();
pullRequest.setReturnImmediately(false); // wait until you get a message
pullRequest.setMaxMessages(1000);
Pull pullReq = pubSub.projects().subscriptions().pull(subscriptionName, pullRequest);
PullResponse pullResponse = pullReq.execute();
List<ReceivedMessage> msgs = pullResponse.getReceivedMessages();
List<String> ackIds = new ArrayList<String>();
int i = 0;
if (msgs != null) {
for (ReceivedMessage msg : msgs) {
ackIds.add(msg.getAckId());
//System.out.println(i++ + ":===:" + msg.getAckId());
String object = new String(Base64.decodeBase64(msg.getMessage().getData()));
System.out.println("Decoded object String ==> " + object );
}
//acknowledge all the received messages
AcknowledgeRequest content = new AcknowledgeRequest();
content.setAckIds(ackIds);
Acknowledge ackReq = pubSub.projects().subscriptions().acknowledge(subscriptionName, content);
Empty empty = ackReq.execute();
}
}
}
Note: this client only waits until it receives at least one message, and it terminates once it has received messages (up to the maximum set in MaxMessages) in a single pull.
Let me know if this helps. I'm going to try the cloud client libraries soon and will post an update once I get my hands on them.
And here's the missing factory class ...if you plan to run it...
package com.techm.datapipeline.factory;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.logging.Level;
import java.util.logging.Logger;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.PubsubScopes;
public class PubsubFactory {
private static Pubsub instance = null;
private static final Logger logger = Logger.getLogger(PubsubFactory.class.getName());
public static synchronized Pubsub getService() throws IOException, GeneralSecurityException {
if (instance == null) {
instance = buildService();
}
return instance;
}
private static Pubsub buildService() throws IOException, GeneralSecurityException {
logger.log(Level.FINER, "Start of buildService");
HttpTransport transport = GoogleNetHttpTransport.newTrustedTransport();
JsonFactory jsonFactory = new JacksonFactory();
GoogleCredential credential = GoogleCredential.getApplicationDefault(transport, jsonFactory);
// Depending on the environment that provides the default credentials (for
// example: Compute Engine, App Engine), the credentials may require us to
// specify the scopes we need explicitly.
if (credential.createScopedRequired()) {
Collection<String> scopes = new ArrayList<>();
scopes.add(PubsubScopes.PUBSUB);
credential = credential.createScoped(scopes);
}
logger.log(Level.FINER, "End of buildService");
// TODO - Get the application name from outside.
return new Pubsub.Builder(transport, jsonFactory, credential).setApplicationName("Your Application Name/Version")
.build();
}
}
The message reader (a MessageReceiver) is injected into the subscriber. This part of the code handles the messages:
MessageReceiver receiver =
new MessageReceiver() {
@Override
public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
// handle incoming message, then ack/nack the received message
System.out.println("Id : " + message.getMessageId());
System.out.println("Data : " + message.getData().toStringUtf8());
consumer.ack();
}
};
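For context, this receiver is what gets passed to the Subscriber builder, following the same pattern as the longer example above; a minimal sketch:
// Sketch: bind the receiver to a subscription and start pulling (same API as the example above).
ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(projectId, subscriptionId);
Subscriber subscriber = Subscriber.newBuilder(subscriptionName, receiver).build();
subscriber.startAsync().awaitRunning();
// ... later, on shutdown:
subscriber.stopAsync().awaitTerminated();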
I upload an audio file to an audio & video bucket, called demo, using the AcrCloud RESTful services. I am getting a 500 Internal Server Error. This indicates that my signature is correct (I was getting a 422 when the signature was incorrect). The part that I suspect is incorrect is the construction of the multipart POST request.
My Code:
import com.xperiel.common.logging.Loggers;
import com.google.api.client.http.ByteArrayContent;
import com.google.api.client.http.GenericUrl;
import com.google.api.client.http.HttpContent;
import com.google.api.client.http.HttpHeaders;
import com.google.api.client.http.HttpMediaType;
import com.google.api.client.http.HttpRequestFactory;
import com.google.api.client.http.HttpResponse;
import com.google.api.client.http.MultipartContent;
import com.google.api.client.http.MultipartContent.Part;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.common.collect.ImmutableMap;
import com.google.common.io.BaseEncoding;
import com.google.common.io.CharStreams;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import java.util.Map.Entry;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
public class TestAcrCloudSignature {
private static final String ACCESS_KEY = "xxxx"; // confidential
private static final String SECRET_KEY = "yyyy"; // confidential
private static final String URL = "https://api.acrcloud.com/v1/audios";
private static HttpRequestFactory requestFactory = new NetHttpTransport().createRequestFactory();
private static final Logger logger = Loggers.getLogger();
public static void main(String [] args) {
String filePath = "/Users/serena/Desktop/ArcCloudMusic/Fernando.m4a";
String httpMethod = HttpMethod.POST.toString();
String httpUri = "/v1/audios";
String signatureVersion = "1";
long timestamp = System.currentTimeMillis();
String stringToSign = getStringToSign(httpMethod, httpUri, signatureVersion, timestamp);
String signature = getSignature(stringToSign);
logger.log(Level.INFO, "Timestamp:\t" + timestamp);
HttpResponse response = null;
try {
ImmutableMap<String, String> params = ImmutableMap.of(
"title", "fernando",
"audio_id", "1",
"bucket_name", "demo",
"data_type", "audio");
byte[] audio = getAudioFileTo(filePath);
String strResponse = sendMultiPartPostRequest(
"",
params,
ImmutableMap.of("audio-file", new Pair<>("Fernando.m4a", audio)),
signatureVersion,
signature,
timestamp);
logger.log(Level.INFO, "RESPONSE:" + strResponse);
} catch (Exception e) {
logger.log(Level.WARNING, "Response: " + response);
logger.log(Level.WARNING, "Exception: " + e.getMessage());
e.printStackTrace();
}
}
private static String getStringToSign(String method, String httpUri, String signatureVersion, long timestamp) {
String stringToSign = method+"\n"+httpUri+"\n"+ACCESS_KEY+"\n"+signatureVersion+"\n"+timestamp;
logger.log(Level.INFO, "String to Sign:\t" + stringToSign);
return stringToSign;
}
private static String getSignature(String stringToSign) {
String signature = BaseEncoding.base64().encode(hmacSha1(stringToSign));
logger.log(Level.INFO, "Signature:\t" + signature);
return signature;
}
private static byte[] hmacSha1(String toSign) {
try {
Mac mac = Mac.getInstance("HmacSHA1");
mac.init(new SecretKeySpec(SECRET_KEY.getBytes(), "HmacSHA1"));
return mac.doFinal(toSign.getBytes());
} catch (NoSuchAlgorithmException | InvalidKeyException e) {
throw new RuntimeException(e);
}
}
private enum HttpMethod {
GET, POST, PUT, DELETE,
}
private static byte[] getAudioFileTo(String filePath){
File file = new File(filePath);
byte[] buffer = null;
try {
InputStream fis = new FileInputStream(file);
buffer = new byte[(int) file.length()];
fis.read(buffer, 0, buffer.length);
fis.close();
} catch (IOException e) {
logger.log(Level.WARNING, "IOException: " + e.getMessage());
}
return buffer;
}
private static String sendMultiPartPostRequest(
String path,
ImmutableMap<String, String> parameters,
ImmutableMap<String, Pair<String, byte[]>> blobData,
String signatureVersion,
String signature,
long timestamp) {
try {
MultipartContent multipartContent = new MultipartContent();
multipartContent.setMediaType(new HttpMediaType("multipart/form-data"));
multipartContent.setBoundary("--------------------------0e94e468d6023641");
for (Entry<String, String> currentParameter : parameters.entrySet()) {
HttpHeaders headers = new HttpHeaders();
headers.clear();
headers.setAcceptEncoding(null);
headers.set("Content-Disposition", "form-data; name=\"" + currentParameter.getKey() + '\"');
HttpContent content = new ByteArrayContent(null, currentParameter.getValue().getBytes());
Part part = new Part(content);
part.setHeaders(headers);
multipartContent.addPart(part);
}
for (Entry<String, Pair<String, byte[]>> current : blobData.entrySet()) {
ByteArrayContent currentContent = new ByteArrayContent("application/octet-stream", current.getValue().second);
HttpHeaders headers = new HttpHeaders();
headers.clear();
headers.setAcceptEncoding(null);
headers.set("Content-Disposition", "form-data; name=\"" + current.getKey() + "\"; filename=\"" + current.getValue().first + '\"');
headers.setContentType("application/octet-stream");
multipartContent.addPart(new Part(headers, currentContent));
}
ByteArrayOutputStream out = new ByteArrayOutputStream();
multipartContent.writeTo(out);
HttpResponse response = requestFactory
.buildPostRequest(new GenericUrl(URL + path), multipartContent)
.setHeaders(new HttpHeaders()
.set("access-key", ACCESS_KEY)
.set("signature-version", signatureVersion)
.set("signature", signature)
.set("timestamp", timestamp))
.execute();
String responseString = CharStreams.toString(new InputStreamReader(response.getContent()));
return responseString;
} catch (IOException e) {
throw new RuntimeException(e);
}
}
private static class Pair<A, B> {
final A first;
final B second;
Pair(A first, B second) {
this.first = first;
this.second = second;
}
}
}
The error message I am getting from AcrCloud is:
500
{"name":"Internal Server Error","message":"There was an error at the server.","code":0,"status":500}
I am able to upload an audio file using this cUrl command:
Command: $ curl -H "access-key: xxxx" -H "signature-version: 1" -H
"timestamp: 1439958502089" -H "signature:
Nom6oajEzon260F2WzLpK3PE9e0=" -F "title=fernando" -F "audio_id=100" -F
"bucket_name=demo" -F "data_type=audio" -F
"audio_file=#/Users/serena/Desktop/ArcCloudMusic/Fernando.m4a"
https://api.acrcloud.com/v1/audios
Does anyone have any tips on how to debug this? Or has anyone had success using this service programmatically with Java? Or can someone show me how to print the contents of the HTTP POST request?
UPDATE: I have also tried using their Java example on GitHub, found here:
https://github.com/acrcloud/webapi_example/blob/master/RESTful%20service/UploadAudios.java
I get the same 500 error.
UPDATE: I no longer get the 500 error when I run their code. I fiddled with the Apache jar versions and can now successfully use the Java code found on GitHub. For the record, the versions that work with their GitHub code are apache-http-codec-1.10, apache-http-client-4.5, apache-http-core-4.4.1, and apache-http-mime-4.5. When I used apache-http-core-4.5 it did not work.
UPDATE: I have written a file that prints out the signatures generated by the Java code on GitHub referenced above and by my own code. The signatures match, so I am convinced the issue is in the way I am constructing the multipart POST request. I have also written the contents of both POST requests to a file, and the headers contain different information in a few spots.
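For anyone who wants to dump the multipart body built by the google-http-client classes above for comparison, one way is to serialize it to a ByteArrayOutputStream and print it. A minimal sketch, placed inside the existing try block of sendMultiPartPostRequest (right after the parts are added):
// Sketch: print the exact multipart body and content type that will be sent.
// Note the audio part is binary, so the tail of the output will not be readable text.
ByteArrayOutputStream debugOut = new ByteArrayOutputStream();
multipartContent.writeTo(debugOut);
System.out.println("Content-Type: " + multipartContent.getType());
System.out.println(debugOut.toString("UTF-8"));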
Thanks, Serena, for your patience; our team is doing a detailed analysis of the code and the Apache jars now. Hopefully we will have an update soon.
For now, if anyone has the same problem, please use the following jars, as mentioned in https://github.com/acrcloud/webapi_example/blob/master/RESTful%20service/UploadAudios.java
// import commons-codec-<version>.jar, download from http://commons.apache.org/proper/commons-codec/download_codec.cgi
import org.apache.commons.codec.binary.Base64;
// import HttpClient, download from http://hc.apache.org/downloads.cgi
/**
*
* commons-codec-1.1*.jar
* commons-logging-1.*.jar
* httpclient-4.*.jar
* httpcore-4.4.1.jar
* httpmime-4.*.jar
*
* */
I have a Grizzly HTTP server with async processing added. It is queuing my requests and processing only one request at a time, despite the async support I added.
The path the HttpHandler is bound to is: "/"
Port number: 7777
The behavior observed when I hit http://localhost:7777 from two browsers simultaneously is that the second call waits until the first one has completed. I want my second HTTP call to work simultaneously, in tandem with the first HTTP call.
EDIT: GitHub link to my project
Here are the classes:
GrizzlyMain.java
package com.grizzly;
import java.io.IOException;
import java.net.URI;
import javax.ws.rs.core.UriBuilder;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.strategies.WorkerThreadIOStrategy;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import com.grizzly.http.IHttpHandler;
import com.grizzly.http.IHttpServerFactory;
public class GrizzlyMain {
private static HttpServer httpServer;
private static void startHttpServer(int port) throws IOException {
URI uri = getBaseURI(port);
httpServer = IHttpServerFactory.createHttpServer(uri,
new IHttpHandler(null));
TCPNIOTransport transport = getListener(httpServer).getTransport();
ThreadPoolConfig config = ThreadPoolConfig.defaultConfig()
.setPoolName("worker-thread-").setCorePoolSize(6).setMaxPoolSize(6)
.setQueueLimit(-1)/* same as default */;
transport.configureBlocking(false);
transport.setSelectorRunnersCount(3);
transport.setWorkerThreadPoolConfig(config);
transport.setIOStrategy(WorkerThreadIOStrategy.getInstance());
transport.setTcpNoDelay(true);
System.out.println("Blocking Transport(T/F): " + transport.isBlocking());
System.out.println("Num SelectorRunners: "
+ transport.getSelectorRunnersCount());
System.out.println("Num WorkerThreads: "
+ transport.getWorkerThreadPoolConfig().getCorePoolSize());
httpServer.start();
System.out.println("Server Started #" + uri.toString());
}
public static void main(String[] args) throws InterruptedException,
IOException, InstantiationException, IllegalAccessException,
ClassNotFoundException {
startHttpServer(7777);
System.out.println("Press any key to stop the server...");
System.in.read();
}
private static NetworkListener getListener(HttpServer httpServer) {
return httpServer.getListeners().iterator().next();
}
private static URI getBaseURI(int port) {
return UriBuilder.fromUri("https://0.0.0.0/").port(port).build();
}
}
HttpHandler (with async support built in)
package com.grizzly.http;
import java.io.IOException;
import java.util.Date;
import java.util.concurrent.ExecutorService;
import javax.ws.rs.core.Application;
import org.glassfish.grizzly.http.server.HttpHandler;
import org.glassfish.grizzly.http.server.Request;
import org.glassfish.grizzly.http.server.Response;
import org.glassfish.grizzly.http.util.HttpStatus;
import org.glassfish.grizzly.threadpool.GrizzlyExecutorService;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import org.glassfish.jersey.server.ApplicationHandler;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.spi.Container;
import com.grizzly.Utils;
/**
* Jersey {@code Container} implementation based on Grizzly
* {@link org.glassfish.grizzly.http.server.HttpHandler}.
*
* @author Jakub Podlesak (jakub.podlesak at oracle.com)
* @author Libor Kramolis (libor.kramolis at oracle.com)
* @author Marek Potociar (marek.potociar at oracle.com)
*/
public final class IHttpHandler extends HttpHandler implements Container {
private static int reqNum = 0;
final ExecutorService executorService = GrizzlyExecutorService
.createInstance(ThreadPoolConfig.defaultConfig().copy()
.setCorePoolSize(4).setMaxPoolSize(4));
private volatile ApplicationHandler appHandler;
/**
* Create a new Grizzly HTTP container.
*
* @param application
* JAX-RS / Jersey application to be deployed on Grizzly HTTP
* container.
*/
public IHttpHandler(final Application application) {
}
@Override
public void start() {
super.start();
}
@Override
public void service(final Request request, final Response response) {
System.out.println("\nREQ_ID: " + reqNum++);
System.out.println("THREAD_ID: " + Utils.getThreadName());
response.suspend();
// Instruct Grizzly to not flush response, once we exit service(...) method
executorService.execute(new Runnable() {
@Override
public void run() {
try {
System.out.println("Executor Service Current THREAD_ID: "
+ Utils.getThreadName());
Thread.sleep(25 * 1000);
} catch (Exception e) {
response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR_500);
} finally {
String content = updateResponse(response);
System.out.println("Response resumed > " + content);
response.resume();
}
}
});
}
@Override
public ApplicationHandler getApplicationHandler() {
return appHandler;
}
@Override
public void destroy() {
super.destroy();
appHandler = null;
}
// Auto-generated stuff
@Override
public ResourceConfig getConfiguration() {
return null;
}
@Override
public void reload() {
}
@Override
public void reload(ResourceConfig configuration) {
}
private String updateResponse(final Response response) {
String data = null;
try {
data = new Date().toLocaleString();
response.getWriter().write(data);
} catch (IOException e) {
data = "Unknown error from our server";
response.setStatus(500, data);
}
return data;
}
}
IHttpServerFactory.java
package com.grizzly.http;
import java.net.URI;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.http.server.ServerConfiguration;
/**
* @author smc
*/
public class IHttpServerFactory {
private static final int DEFAULT_HTTP_PORT = 80;
public static HttpServer createHttpServer(URI uri, IHttpHandler handler) {
final String host = uri.getHost() == null ? NetworkListener.DEFAULT_NETWORK_HOST
: uri.getHost();
final int port = uri.getPort() == -1 ? DEFAULT_HTTP_PORT : uri.getPort();
final NetworkListener listener = new NetworkListener("IGrizzly", host, port);
listener.setSecure(false);
final HttpServer server = new HttpServer();
server.addListener(listener);
final ServerConfiguration config = server.getServerConfiguration();
if (handler != null) {
config.addHttpHandler(handler, uri.getPath());
}
config.setPassTraceRequest(true);
return server;
}
}
It seems the problem is the browser waiting for the first request to complete, and thus it is more a client-side than a server-side issue. It disappears if you test with two different browser processes, or even if you open two distinct paths (let's say localhost:7777/foo and localhost:7777/bar) in the same browser process (note: the query string participates in making up the path in the HTTP request line).
How I understood it
Connections in HTTP/1.1 are persistent by default, i.e. browsers recycle the same TCP connection over and over again to speed things up. However, this doesn't mean that all requests to the same domain will be serialized: in fact, a connection pool is allocated on a per-hostname basis (source). Unfortunately, requests with the same path are effectively enqueued (at least on Firefox and Chrome); I guess it's a device that browsers employ to protect server resources (and thus user experience).
Real-world applications don't suffer from this because different resources are deployed to different URLs.
DISCLAIMER: I wrote this answer based on my observations and some educated guess. I think things may actually be like this, however a tool like Wireshark should be used to follow the TCP stream and definitely assert this is what happens.
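If you want to rule the server out without involving a browser at all, a quick check is to fire two requests concurrently from a small standalone client. A minimal sketch (host and port taken from the question; everything else is assumed):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
public class ConcurrentRequestTest {
    public static void main(String[] args) throws Exception {
        Runnable call = new Runnable() {
            @Override
            public void run() {
                try {
                    long start = System.currentTimeMillis();
                    // Each thread opens its own connection, so no browser-side pooling is involved
                    HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:7777/").openConnection();
                    BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                    String body = in.readLine();
                    in.close();
                    System.out.println(Thread.currentThread().getName() + " got \"" + body + "\" after "
                            + (System.currentTimeMillis() - start) + " ms");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        Thread t1 = new Thread(call, "client-1");
        Thread t2 = new Thread(call, "client-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
If both threads report roughly the same elapsed time (around the 25-second sleep in the handler), the server is processing the requests in parallel and the queuing really is on the browser side.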
I'm trying to load map tiles from an internal SSL server. The SSL certificate's root of trust is not recognized by the Android system.
W/o*.o*.t*.m*.MapTileDow*(2837): IOException downloading MapTile: /8/37/4 :
javax.net.ssl.SSLPeerUnverifiedException: No peer certificate
I'm already familiar with the problem and have solved it in the rest of the application based on this excellent SO answer. Essentially, I extended my own SSLSocketFactory and X509TrustManager which load my SSL certificate's root of trust from a .bks file bundled with the app. To create a secure connection, I call ((HttpsURLConnection) connection).setSSLSocketFactory(mySSLSocketFactory) and the certificate is verified using my classes with my root of trust.
My question is how do I do the same thing for osmdroid? I'm creating my own XYTileSource where I set the URL, file extension, size, etc. of my map tiles. I see that osmdroid creates its connections to download map tile images in MapTileDownloader. I can write my own replacement class that will address the SSL issue in the same manner, but how do I tell osmdroid to use my custom downloader instead of the default?
It turns out this is possible without changing the source of osmdroid, due to the public MapView(Context context, int tileSizePixels, ResourceProxy resourceProxy, MapTileProviderBase aTileProvider) constructor.
Assuming you already have a custom class like MySSLSocketFactory (which extends javax.net.ssl.SSLSocketFactory), the basic process looks like this:
Create a drop-in replacement class for MapTileDownloader to perform the download in a way that makes use of MySSLSocketFactory. Let's call this MyTileDownloader.
Create a drop-in replacement class for MapTileProviderBasic that instantiates your custom MyTileDownloader. Let's call this MyTileProvider.
Instantiate your tile source as a new XYTileSource (no need to write a custom class).
Instantiate MyTileProvider with your tile source instance.
Instantiate MapView with your tile provider instance.
MySSLSocketFactory is left as an exercise for the reader. See this post.
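For readers who just want something to pass into the downloader, a minimal alternative to subclassing SSLSocketFactory (as the linked post does) is to build a standard javax.net.ssl.SSLSocketFactory from an SSLContext initialized with the bundled BKS trust store. A sketch, where the class name, resource name, and password are all assumptions:
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;
import android.content.Context;
public class MySslContextFactory {
    /**
     * Builds an SSLSocketFactory that trusts only the certificates in the bundled BKS trust store.
     * R.raw.my_truststore and the password are placeholders for your own bundled file.
     */
    public static SSLSocketFactory createSocketFactory(Context context) throws Exception {
        KeyStore trustStore = KeyStore.getInstance("BKS");
        InputStream in = context.getResources().openRawResource(R.raw.my_truststore);
        try {
            trustStore.load(in, "changeit".toCharArray());
        } finally {
            in.close();
        }
        TrustManagerFactory tmf = TrustManagerFactory
                .getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);
        return sslContext.getSocketFactory();
    }
}
Either approach works, because MyTileDownloader below only needs the resulting SSLSocketFactory instance.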
MyTileDownloader looks something like this:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.UnknownHostException;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLSocketFactory;
import org.osmdroid.tileprovider.MapTile;
import org.osmdroid.tileprovider.MapTileRequestState;
import org.osmdroid.tileprovider.modules.IFilesystemCache;
import org.osmdroid.tileprovider.modules.INetworkAvailablityCheck;
import org.osmdroid.tileprovider.modules.MapTileDownloader;
import org.osmdroid.tileprovider.modules.MapTileModuleProviderBase;
import org.osmdroid.tileprovider.tilesource.BitmapTileSourceBase.LowMemoryException;
import org.osmdroid.tileprovider.tilesource.ITileSource;
import org.osmdroid.tileprovider.tilesource.OnlineTileSourceBase;
import org.osmdroid.tileprovider.util.StreamUtils;
import android.graphics.drawable.Drawable;
import android.text.TextUtils;
import android.util.Log;
/**
* A drop-in replacement for {@link MapTileDownloader}. This loads tiles from an
* HTTP or HTTPS server, making use of a custom {@link SSLSocketFactory} for SSL
* peer verification.
*/
public class MyTileDownloader extends MapTileModuleProviderBase {
private static final String TAG = "MyMapTileDownloader";
protected OnlineTileSourceBase mTileSource;
protected final IFilesystemCache mFilesystemCache;
protected final INetworkAvailablityCheck mNetworkAvailablityCheck;
protected final SSLSocketFactory mSSLSocketFactory;
public MyTileDownloader(ITileSource pTileSource,
IFilesystemCache pFilesystemCache,
INetworkAvailablityCheck pNetworkAvailablityCheck,
SSLSocketFactory pSSLSocketFactory) {
super(4, TILE_DOWNLOAD_MAXIMUM_QUEUE_SIZE);
setTileSource(pTileSource);
mFilesystemCache = pFilesystemCache;
mNetworkAvailablityCheck = pNetworkAvailablityCheck;
mSSLSocketFactory = pSSLSocketFactory;
}
public ITileSource getTileSource() {
return mTileSource;
}
@Override
public void setTileSource(final ITileSource tileSource) {
// We are only interested in OnlineTileSourceBase tile sources
if (tileSource instanceof OnlineTileSourceBase)
mTileSource = (OnlineTileSourceBase) tileSource;
else
mTileSource = null;
}
@Override
public boolean getUsesDataConnection() {
return true;
}
@Override
protected String getName() {
return "Online Tile Download Provider";
}
@Override
protected String getThreadGroupName() {
return "downloader";
}
@Override
public int getMinimumZoomLevel() {
return (mTileSource != null ? mTileSource.getMinimumZoomLevel()
: MINIMUM_ZOOMLEVEL);
}
@Override
public int getMaximumZoomLevel() {
return (mTileSource != null ? mTileSource.getMaximumZoomLevel()
: MAXIMUM_ZOOMLEVEL);
}
@Override
protected Runnable getTileLoader() {
return new TileLoader();
};
private class TileLoader extends MapTileModuleProviderBase.TileLoader {
@Override
public Drawable loadTile(final MapTileRequestState aState)
throws CantContinueException {
if (mTileSource == null)
return null;
InputStream in = null;
OutputStream out = null;
final MapTile tile = aState.getMapTile();
try {
if (mNetworkAvailablityCheck != null
&& !mNetworkAvailablityCheck.getNetworkAvailable()) {
if (DEBUGMODE)
Log.d(TAG, "Skipping " + getName()
+ " due to NetworkAvailabliltyCheck.");
return null;
}
final String tileURLString = mTileSource.getTileURLString(tile);
if (DEBUGMODE)
Log.d(TAG, "Downloading Maptile from url: " + tileURLString);
if (TextUtils.isEmpty(tileURLString))
return null;
// Create an HttpURLConnection to download the tile
URL url = new URL(tileURLString);
HttpURLConnection connection = (HttpURLConnection) url
.openConnection();
connection.setConnectTimeout(30000);
connection.setReadTimeout(30000);
// Use our custom SSLSocketFactory for secure connections
if ("https".equalsIgnoreCase(url.getProtocol()))
((HttpsURLConnection) connection)
.setSSLSocketFactory(mSSLSocketFactory);
// Open the input stream
in = new BufferedInputStream(connection.getInputStream(),
StreamUtils.IO_BUFFER_SIZE);
// Check to see if we got success
if (connection.getResponseCode() != 200) {
Log.w(TAG, "Problem downloading MapTile: " + tile
+ " HTTP response: " + connection.getHeaderField(0));
return null;
}
// Read the tile into an in-memory byte array
final ByteArrayOutputStream dataStream = new ByteArrayOutputStream();
out = new BufferedOutputStream(dataStream,
StreamUtils.IO_BUFFER_SIZE);
StreamUtils.copy(in, out);
out.flush();
final byte[] data = dataStream.toByteArray();
final ByteArrayInputStream byteStream = new ByteArrayInputStream(
data);
// Save the data to the filesystem cache
if (mFilesystemCache != null) {
mFilesystemCache.saveFile(mTileSource, tile, byteStream);
byteStream.reset();
}
final Drawable result = mTileSource.getDrawable(byteStream);
return result;
} catch (final UnknownHostException e) {
Log.w(TAG, "UnknownHostException downloading MapTile: " + tile
+ " : " + e);
throw new CantContinueException(e);
} catch (final LowMemoryException e) {
Log.w(TAG, "LowMemoryException downloading MapTile: " + tile
+ " : " + e);
throw new CantContinueException(e);
} catch (final FileNotFoundException e) {
Log.w(TAG, "Tile not found: " + tile + " : " + e);
} catch (final IOException e) {
Log.w(TAG, "IOException downloading MapTile: " + tile + " : "
+ e);
} catch (final Throwable e) {
Log.e(TAG, "Error downloading MapTile: " + tile, e);
} finally {
StreamUtils.closeStream(in);
StreamUtils.closeStream(out);
}
return null;
}
@Override
protected void tileLoaded(final MapTileRequestState pState,
final Drawable pDrawable) {
// Don't return the tile Drawable because we'll wait for the fs
// provider to ask for it. This prevent flickering when a load
// of delayed downloads complete for tiles that we might not
// even be interested in any more.
super.tileLoaded(pState, null);
}
}
}
MyTileProvider looks something like this.
Note that you'll need a way to get access to your instance of MySSLSocketFactory inside this class. This is left as an exercise for the reader. I did this using app.getSSLSocketFactory(), where app is an instance of a custom class that extends Application, but your mileage may vary (a sketch of one such Application subclass follows the class below).
import javax.net.ssl.SSLSocketFactory;
import org.osmdroid.tileprovider.IMapTileProviderCallback;
import org.osmdroid.tileprovider.IRegisterReceiver;
import org.osmdroid.tileprovider.MapTileProviderArray;
import org.osmdroid.tileprovider.MapTileProviderBasic;
import org.osmdroid.tileprovider.modules.INetworkAvailablityCheck;
import org.osmdroid.tileprovider.modules.MapTileFileArchiveProvider;
import org.osmdroid.tileprovider.modules.MapTileFilesystemProvider;
import org.osmdroid.tileprovider.modules.NetworkAvailabliltyCheck;
import org.osmdroid.tileprovider.modules.TileWriter;
import org.osmdroid.tileprovider.tilesource.ITileSource;
import org.osmdroid.tileprovider.util.SimpleRegisterReceiver;
import android.content.Context;
/**
* A drop-in replacement for {@link MapTileProviderBasic}. This top-level tile
* provider implements a basic tile request chain which includes a
* {@link MapTileFilesystemProvider} (a file-system cache), a
* {@link MapTileFileArchiveProvider} (archive provider), and a
* {@link MyTileDownloader} (downloads map tiles via tile source).
*/
public class MyTileProvider extends MapTileProviderArray implements
IMapTileProviderCallback {
public MyTileProvider(final Context pContext, final ITileSource pTileSource) {
this(new SimpleRegisterReceiver(pContext),
new NetworkAvailabliltyCheck(pContext), pTileSource, app
.getSSLSocketFactory());
}
protected MyTileProvider(final IRegisterReceiver pRegisterReceiver,
final INetworkAvailablityCheck aNetworkAvailablityCheck,
final ITileSource pTileSource,
final SSLSocketFactory pSSLSocketFactory) {
super(pTileSource, pRegisterReceiver);
// Look for raw tiles on the file system
final MapTileFilesystemProvider fileSystemProvider = new MapTileFilesystemProvider(
pRegisterReceiver, pTileSource);
mTileProviderList.add(fileSystemProvider);
// Look for tile archives on the file system
final MapTileFileArchiveProvider archiveProvider = new MapTileFileArchiveProvider(
pRegisterReceiver, pTileSource);
mTileProviderList.add(archiveProvider);
// Look for raw tiles on the Internet
final TileWriter tileWriter = new TileWriter();
final MyTileDownloader downloaderProvider = new MyTileDownloader(
pTileSource, tileWriter, aNetworkAvailablityCheck,
pSSLSocketFactory);
mTileProviderList.add(downloaderProvider);
}
}
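As noted above, MyTileProvider needs a way to reach the SSL socket factory; the app.getSSLSocketFactory() call in its constructor assumes a custom Application subclass. A sketch of what that could look like (all names are assumptions; the class must also be registered via android:name in the manifest, and the provider would obtain it via something like (MyApplication) pContext.getApplicationContext()):
import javax.net.ssl.SSLSocketFactory;
import android.app.Application;
public class MyApplication extends Application {
    private SSLSocketFactory sslSocketFactory;
    @Override
    public void onCreate() {
        super.onCreate();
        try {
            // Build the factory once, up front, from the bundled trust store.
            // MySslContextFactory is the helper sketched earlier; substitute your own MySSLSocketFactory.
            sslSocketFactory = MySslContextFactory.createSocketFactory(this);
        } catch (Exception e) {
            throw new RuntimeException("Could not initialise the SSL socket factory", e);
        }
    }
    public SSLSocketFactory getSSLSocketFactory() {
        return sslSocketFactory;
    }
}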
Finally, the instantiation looks something like this:
XYTileSource tileSource = new XYTileSource("MapQuest", null, 3, 8, 256, ".jpg",
"https://10.0.0.1/path/to/your/map/tiles/");
MapTileProviderBase tileProvider = new MyTileProvider(context, tileSource);
ResourceProxy resourceProxy = new DefaultResourceProxyImpl(context);
MapView mapView = new MapView(context, 256, resourceProxy, tileProvider);
I don't use osmdroid, but unless it has a public interface to replace the downloader class(es), your best bet is to get the source and patch it to make it configurable, or to use your own downloader class. If MapTileDownloader implements some interface, you could probably do some reflection voodoo to replace it at runtime, but that might have unknown side effects.