I'm using the ParSeq framework for asynchronous computation.
Consider the following code. It first fetches the content of google.com and then maps the content to its length. Finally, the length is printed.
The problem is that only the first task runs. Why?
public class Main {
public static void main(String[] args) throws Exception {
OkHttpClient okHttpClient = new OkHttpClient();
final int numCores = Runtime.getRuntime().availableProcessors();
final ExecutorService taskScheduler = Executors.newFixedThreadPool(numCores + 1);
final ScheduledExecutorService timerScheduler = Executors.newScheduledThreadPool(numCores + 1);
final Engine engine = new EngineBuilder()
.setTaskExecutor(taskScheduler)
.setTimerScheduler(timerScheduler)
.build();
Task<Integer> task = Task.async(() -> {
SettablePromise<String> promise = Promises.settable();
Request request = new Request.Builder()
.url("http://google.com")
.build();
okHttpClient.newCall(request).enqueue(new Callback() {
@Override
public void onFailure(Call call, IOException e) {
System.out.println("error");
}
@Override
public void onResponse(Call call, Response response) throws IOException {
promise.done(response.body().string());
}
});
return promise;
}).map("map content to length", content -> content.length())
.andThen(System.out::println);
engine.blockingRun(task);
engine.blockingRun(task);
}
}
I was able to solve your problem by using ParSeq's HttpClient instead of OkHttp.
Below are the overall Maven dependencies that I used for this code:
<dependency>
<groupId>com.linkedin.parseq</groupId>
<artifactId>parseq</artifactId>
<version>3.0.11</version>
</dependency>
<dependency>
<groupId>com.linkedin.parseq</groupId>
<artifactId>parseq-http-client</artifactId>
<version>3.0.11</version>
</dependency>
import com.linkedin.parseq.Engine;
import com.linkedin.parseq.EngineBuilder;
import com.linkedin.parseq.Task;
import com.linkedin.parseq.httpclient.HttpClient;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
public class Main {
private static Task<Integer> fetchBody(String url) {
Task<Integer> map = HttpClient.get(url).task().map("map content to length", content -> content.getResponseBody().length());
return map;
}
public static void main(String[] args) {
final int numCores = Runtime.getRuntime().availableProcessors();
final ExecutorService taskScheduler = Executors.newFixedThreadPool(numCores + 1);
final ScheduledExecutorService timerScheduler = Executors.newScheduledThreadPool(numCores + 1);
final Engine engine = new EngineBuilder()
.setTaskExecutor(taskScheduler)
.setTimerScheduler(timerScheduler)
.build();
final Task<Integer> stackOverFlow = fetchBody("http://www.stackoverflow.com");
final Task<Integer> google = fetchBody("http://www.google.com");
final Task<Integer> ethereum = fetchBody("http://ethereum.stackexchange.com");
final Task<String> plan = Task.par(stackOverFlow, google, ethereum)
.map((s, g, e) -> "StackOverFlow Page: " + s + " \n" +
"Google Page: " + g + "\n" +
"Ethereum Page: " + e + "\n")
.andThen(System.out::println);
engine.run(plan);
}
}
Output:
StackOverFlow Page: 149
Google Page: 13097
Ethereum Page: 152
This example is fully asynchronous. The home pages for StackOverflow,
Google, and Ethereum are all fetched in parallel while the original
thread has returned to the calling code. We used Task.par to tell the
engine to parallelize these HTTP requests. Once all of the responses
have been retrieved, they are transformed into an int (the string
length) that is finally printed out.
Gist: https://gist.github.com/vishwaratna/26417f7467a4e827eadeee6923ddf3ae
Because you run the same task instance twice.
Task is an interface, and the abstract class BaseTask contains the field "_stateRef", which maintains the task's status.
A task starts in the INIT status; once it has executed, the status changes to RUN, and that is what prevents the task from executing a second time.
com.linkedin.parseq.BaseTask#contextRun contains a check, transitionRun(traceBuilder), which only lets a task run if the INIT-to-RUN transition succeeds.
So the right way to execute the code is to create a fresh task for each run, as follows:
private void replayOkHttpNotExecuteSecondTask() {
try {
log.info("begin task");
engine.blockingRun(okHttpTask());
engine.blockingRun(okHttpTask());
} catch (Exception e) {
e.printStackTrace();
}
}
private Task<Integer> okHttpTask() {
OkHttpClient okHttpClient = new OkHttpClient();
return Task.async(() -> {
SettablePromise<String> settablePromise = Promises.settable();
Request request = new Request.Builder().url("http://baidu.com").build();
okHttpClient.newCall(request).enqueue(new Callback() {
@Override
public void onFailure(Call call, IOException e) {
System.out.println("error");
}
@Override
public void onResponse(Call call, okhttp3.Response response) throws IOException {
settablePromise.done(response.body().string());
}
});
return settablePromise;
}).map("map to length", content -> content.length())
.andThen(System.out::println);
}
I am developing a client-server application where I wanted a persistent connection between the client and the server, and I chose the CometD framework for this.
I successfully created the CometD application.
Client -
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.cometd.bayeux.Channel;
import org.cometd.bayeux.Message;
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.util.ssl.SslContextFactory;
import com.synacor.idm.auth.LdapAuthenticator;
import com.synacor.idm.resources.LdapResource;
public class CometDClient {
private volatile BayeuxClient client;
private final AuthListner authListner = new AuthListner();
private LdapResource ldapResource;
public static void main(String[] args) throws Exception {
org.eclipse.jetty.util.log.Log.getProperties().setProperty("org.eclipse.jetty.LEVEL", "ERROR");
org.eclipse.jetty.util.log.Log.getProperties().setProperty("org.eclipse.jetty.util.log.announce", "false");
org.eclipse.jetty.util.log.Log.getRootLogger().setDebugEnabled(false);
CometDClient client = new CometDClient();
client.run();
}
public void run() {
String url = "http://localhost:1010/cometd";
HttpClient httpClient = new HttpClient();
try {
httpClient.start();
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
client = new BayeuxClient(url, new LongPollingTransport(null, httpClient));
client.getChannel(Channel.META_HANDSHAKE).addListener(new InitializerListener());
client.getChannel(Channel.META_CONNECT).addListener(new ConnectionListener());
client.getChannel("/ldapAuth").addListener(new AuthListner());
client.handshake();
boolean success = client.waitFor(1000, BayeuxClient.State.CONNECTED);
if (!success) {
System.err.printf("Could not handshake with server at %s%n", url);
return;
}
}
private void initialize() {
client.batch(() -> {
ClientSessionChannel authChannel = client.getChannel("/ldapAuth");
authChannel.subscribe(authListner);
});
}
private class InitializerListener implements ClientSessionChannel.MessageListener {
@Override
public void onMessage(ClientSessionChannel channel, Message message) {
if (message.isSuccessful()) {
initialize();
}
}
}
private class ConnectionListener implements ClientSessionChannel.MessageListener {
private boolean wasConnected;
private boolean connected;
@Override
public void onMessage(ClientSessionChannel channel, Message message) {
if (client.isDisconnected()) {
connected = false;
connectionClosed();
return;
}
wasConnected = connected;
connected = message.isSuccessful();
if (!wasConnected && connected) {
connectionEstablished();
} else if (wasConnected && !connected) {
connectionBroken();
}
}
}
private void connectionEstablished() {
System.err.printf("system: Connection to Server Opened%n");
}
private void connectionClosed() {
System.err.printf("system: Connection to Server Closed%n");
}
private void connectionBroken() {
System.err.printf("system: Connection to Server Broken%n");
}
private class AuthListner implements ClientSessionChannel.MessageListener{
@Override
public void onMessage(ClientSessionChannel channel, Message message) {
Object data2 = message.getData();
System.err.println("Authentication String " + data2 );
if(data2 != null && data2.toString().indexOf("=")>0) {
String[] split = data2.toString().split(",");
String userString = split[0];
String passString = split[1];
String[] splitUser = userString.split("=");
String[] splitPass = passString.split("=");
LdapAuthenticator authenticator = new LdapAuthenticator(ldapResource);
if(authenticator.authenticateToLdap(splitUser[1], splitPass[1])) {
// client.getChannel("/ldapAuth").publish("200:success from client "+user);
// channel.publish("200:Success "+user);
Map<String, Object> data = new HashMap<>();
// Fill in the structure, for example:
data.put(splitUser[1], "Authenticated");
channel.publish(data, publishReply -> {
if (publishReply.isSuccessful()) {
System.out.print("message sent successfully on server");
}
});
}
}
}
}
}
Server - Service Class
import java.util.List;
import java.util.concurrent.BlockingQueue;
import org.cometd.bayeux.MarkedReference;
import org.cometd.bayeux.Promise;
import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ConfigurableServerChannel;
import org.cometd.bayeux.server.ServerChannel;
import org.cometd.bayeux.server.ServerMessage;
import org.cometd.bayeux.server.ServerSession;
import org.cometd.server.AbstractService;
import org.cometd.server.ServerMessageImpl;
import com.synacor.idm.resources.AuthenticationResource;
import com.synacor.idm.resources.AuthenticationResource.AuthC;
public class AuthenticationService extends AbstractService implements AuthenticationResource.Listener {
String authParam;
BayeuxServer bayeux;
BlockingQueue<String> sharedResponseQueue;
public AuthenticationService(BayeuxServer bayeux) {
super(bayeux, "ldapagentauth");
addService("/ldapAuth", "ldapAuthentication");
this.bayeux = bayeux;
}
public void ldapAuthentication(ServerSession session, ServerMessage message) {
System.err.println("********* inside auth service ***********");
Object data = message.getData();
System.err.println("****** got data back from client " +data.toString());
sharedResponseQueue.add(data.toString());
}
@Override
public void onUpdates(List<AuthC> updates) {
System.err.println("********* inside auth service listner ***********");
MarkedReference<ServerChannel> createChannelIfAbsent = bayeux.createChannelIfAbsent("/ldapAuth", new ConfigurableServerChannel.Initializer() {
public void configureChannel(ConfigurableServerChannel channel)
{
channel.setPersistent(true);
channel.setLazy(true);
}
});
ServerChannel reference = createChannelIfAbsent.getReference();
for (AuthC authC : updates) {
authParam = authC.getAuthStr();
this.sharedResponseQueue= authC.getsharedResponseQueue();
ServerChannel channel = bayeux.getChannel("/ldapAuth");
ServerMessageImpl serverMessageImpl = new ServerMessageImpl();
serverMessageImpl.setData(authParam);
reference.setBroadcastToPublisher(false);
reference.publish(getServerSession(), authParam, Promise.noop());
}
}
}
Event trigger class-
public class AuthenticationResource implements Runnable{
private final JerseyClientBuilder clientBuilder;
private final BlockingQueue<String> sharedQueue;
private final BlockingQueue<String> sharedResponseQueue;
private boolean isAuthCall = false;
private String userAuth;
private final List<Listener> listeners = new CopyOnWriteArrayList<Listener>();
Thread runner;
public AuthenticationResource(JerseyClientBuilder clientBuilder,BlockingQueue<String> sharedQueue,BlockingQueue<String> sharedResponseQueue) {
super();
this.clientBuilder = clientBuilder;
this.sharedQueue = sharedQueue;
this.sharedResponseQueue= sharedResponseQueue;
this.runner = new Thread(this);
this.runner.start();
}
public List<Listener> getListeners()
{
return listeners;
}
@Override
public void run() {
List<AuthC> updates = new ArrayList<AuthC>();
// boolean is = true;
while(true){
if(sharedQueue.size()<=0) {
continue;
}
try {
userAuth = sharedQueue.take();
// Notify the listeners
for (Listener listener : listeners)
{
updates.add(new AuthC(userAuth,sharedResponseQueue));
listener.onUpdates(updates);
}
updates.add(new AuthC(userAuth,sharedResponseQueue));
System.out.println("****** Auth consume ******** " + userAuth);
if(userAuth != null) {
isAuthCall = true;
}
} catch (Exception err) {
err.printStackTrace();
break;
}
// if (sharedQueue.size()>0) {
// is = false;
// }
}
}
public static class AuthC
{
private final String authStr;
private final BlockingQueue<String> sharedResponseQueue;
public AuthC(String authStr,BlockingQueue<String> sharedResponseQueue)
{
this.authStr = authStr;
this.sharedResponseQueue=sharedResponseQueue;
}
public String getAuthStr()
{
return authStr;
}
public BlockingQueue<String> getsharedResponseQueue()
{
return sharedResponseQueue;
}
}
public interface Listener extends EventListener
{
void onUpdates(List<AuthC> updates);
}
}
I have successfully established a connection between client and server.
Problems -
1- When I send a message from the server to the client, the same message is sent out multiple times. I expect only a single request-response exchange.
In my case the server sends user credentials and I expect a result back: whether the user is authenticated or not.
You can see in the image how the client side is flooded with the same string -
2- There was another problem of messages looping between client and server, which I was mostly able to resolve by adding the line below, but some looping still happens from time to time.
serverChannel.setBroadcastToPublisher(false);
3- If I change the auth string on the server, the client still sees the old one.
For example -
1st request from server - auth string -> user=foo,pass=bar -> at client side - user=foo,pass=bar
2nd request from server - auth string -> user=myuser,pass=mypass -> at client side - user=foo,pass=bar
These are the three problems; please guide me and help me resolve them.
CometD offers a request/response style of messaging using remote calls, both on the client and on the server (on the server you want to use annotated services).
Channel /ldapAuth has 2 subscribers: the remote client (which subscribes with authChannel.subscribe(...)), and the server-side AuthenticationService (which subscribes with addService("/ldapAuth", "ldapAuthentication")).
Therefore, every time you publish to that channel from AuthenticationService.onUpdates(...), you publish to the remote client, and then back to AuthenticationService, and that is why calling setBroadcastToPublisher(false) helps.
For authentication messages, it's probably best that you stick with remote calls, because they have a natural request/response semantic, rather than a broadcasting semantic.
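For illustration, here is a minimal sketch of what a remote call could look like, assuming a CometD 3.1+ annotated service on the server (the class name, service name, and "ldapAuth" target are made up for the example, and the service still has to be registered with a ServerAnnotationProcessor):
import org.cometd.annotation.RemoteCall;
import org.cometd.annotation.Service;

@Service("ldapAuthService")
public class LdapAuthRemoteService {
    @RemoteCall("ldapAuth")
    public void authenticate(RemoteCall.Caller caller, Object credentials) {
        boolean ok = true; // perform the real LDAP check here
        if (ok) {
            caller.result("authenticated"); // the response goes only to the caller
        } else {
            caller.failure("invalid credentials");
        }
    }
}
On the client, the call then has plain request/response semantics, with no broadcast channel involved:
Map<String, Object> credentials = new HashMap<>();
credentials.put("user", "foo");
credentials.put("pass", "bar");
client.remoteCall("ldapAuth", credentials, message -> {
    if (message.isSuccessful()) {
        System.out.println("auth result: " + message.getData());
    } else {
        System.err.println("auth failed: " + message.getData());
    }
});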
Please read about how applications should interact with CometD.
As for the other looping, there are no loops triggered by CometD.
You have loops in your application (in AuthenticationService.onUpdates(...)), and you take from a queue that may contain the same information multiple times (in AuthenticationResource.run(), which, by the way, is a spin loop that will likely drive a CPU core to 100% utilization; you should fix that).
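As a sketch of that fix: BlockingQueue.take() already blocks until an element is available, so the size() check and the continue busy-wait in AuthenticationResource.run() can simply be removed, for example:
@Override
public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            String userAuth = sharedQueue.take(); // blocks instead of spinning
            List<AuthC> updates = Collections.singletonList(new AuthC(userAuth, sharedResponseQueue));
            for (Listener listener : listeners) {
                listener.onUpdates(updates); // one fresh update per event
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit cleanly when interrupted
            break;
        }
    }
}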
The fact that you see stale data is likely not a CometD issue either: CometD does not store messages anywhere, so it cannot make up user-specific data.
I recommend that you clean up your code using remote calls and annotated services.
Also, clean up your own code from spin loops.
If you still have the problem after the suggestions above, look harder for application mistakes; it is unlikely that this is a CometD issue.
I am trying to use the KinesisAsyncClient as described in https://docs.aws.amazon.com/code-samples/latest/catalog/javav2-kinesis-src-main-java-com-example-kinesis-KinesisStreamRxJavaEx.java.html
I am on macOS and I have configured the following dependencies for the async HTTP client:
'software.amazon.awssdk:netty-nio-client:2.16.101'
'software.amazon.awssdk:kinesis:2.16.99'
When I run the async example, it fails with:
Caused by: java.lang.NoClassDefFoundError: io/netty/internal/tcnative/SSLPrivateKeyMethod
at software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap.newPool(AwaitCloseChannelPoolMap.java:119)
at software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap.newPool(AwaitCloseChannelPoolMap.java:49)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
at software.amazon.awssdk.http.nio.netty.internal.SdkChannelPoolMap.get(SdkChannelPoolMap.java:44)
at software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient.createRequestContext(NettyNioAsyncHttpClient.java:140)
at software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient.execute(NettyNioAsyncHttpClient.java:121)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder$NonManagedSdkAsyncHttpClient.execute(SdkDefaultClientBuilder.java:463)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.doExecuteHttpRequest(MakeAsyncHttpRequestStage.java:219)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.executeHttpRequest(MakeAsyncHttpRequestStage.java:191)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$execute$1(MakeAsyncHttpRequestStage.java:100)
at java.base/java.util.concurrent.CompletableFuture.uniAcceptNow(CompletableFuture.java:753)
at java.base/java.util.concurrent.CompletableFuture.uniAcceptStage(CompletableFuture.java:731)
at java.base/java.util.concurrent.CompletableFuture.thenAccept(CompletableFuture.java:2108)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:96)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:61)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:55)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.attemptExecute(AsyncRetryableStage.java:110)
In the same example folder, the sync client works well and connects to Kinesis on AWS. Does anyone know what can be done to fix this?
I just tested this code:
package com.example.kinesis.asny;
import java.util.concurrent.CompletableFuture;
import io.reactivex.Flowable;
import software.amazon.awssdk.core.async.SdkPublisher;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.awssdk.services.kinesis.model.ShardIteratorType;
import software.amazon.awssdk.services.kinesis.model.StartingPosition;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardEvent;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardRequest;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardResponseHandler;
import software.amazon.awssdk.utils.AttributeMap;
public class KinesisStreamRxJavaEx {
private static final String CONSUMER_ARN = "arn:aws:kinesis:us-east-1:814548xxxxxx:stream/LamDataStream/consumer/MyConsumer:162645xxxx";
public static void main(String[] args) {
KinesisAsyncClient client = KinesisAsyncClient.create();
SubscribeToShardRequest request = SubscribeToShardRequest.builder()
.consumerARN(CONSUMER_ARN)
.shardId("shardId-000000000000")
.startingPosition(StartingPosition.builder().type(ShardIteratorType.LATEST).build())
.build();
responseHandlerBuilder_RxJava(client, request).join();
System.out.println("Done");
client.close();
}
/**
* Uses RxJava via the onEventStream lifecycle method. This gives you full access to the publisher, which can be used
* to create an Rx Flowable.
*/
private static CompletableFuture<Void> responseHandlerBuilder_RxJava(KinesisAsyncClient client, SubscribeToShardRequest request) {
// snippet-start:[kinesis.java2.stream_rx_example.event_stream]
SubscribeToShardResponseHandler responseHandler = SubscribeToShardResponseHandler
.builder()
.onError(t -> System.err.println("Error during stream - " + t.getMessage()))
.onEventStream(p -> Flowable.fromPublisher(p)
.ofType(SubscribeToShardEvent.class)
.flatMapIterable(SubscribeToShardEvent::records)
.limit(1000)
.buffer(25)
.subscribe(e -> System.out.println("Record batch = " + e)))
.build();
// snippet-end:[kinesis.java2.stream_rx_example.event_stream]
return client.subscribeToShard(request, responseHandler);
}
/**
* Because a Flowable is also a publisher, the publisherTransformer method integrates nicely with RxJava. Notice that
* you must adapt to an SdkPublisher.
*/
private static CompletableFuture<Void> responseHandlerBuilder_OnEventStream_RxJava(KinesisAsyncClient client, SubscribeToShardRequest request) {
// snippet-start:[kinesis.java2.stream_rx_example.publish_transform]
SubscribeToShardResponseHandler responseHandler = SubscribeToShardResponseHandler
.builder()
.onError(t -> System.err.println("Error during stream - " + t.getMessage()))
.publisherTransformer(p -> SdkPublisher.adapt(Flowable.fromPublisher(p).limit(100)))
.build();
// snippet-end:[kinesis.java2.stream_rx_example.publish_transform]
return client.subscribeToShard(request, responseHandler);
}
}
It successfully completed:
Make sure that you specify a valid consumer ARN value; otherwise, the code does not work.
You can get a valid consumer ARN using this code:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.KinesisException;
import software.amazon.awssdk.services.kinesis.model.RegisterStreamConsumerRequest;
import software.amazon.awssdk.services.kinesis.model.RegisterStreamConsumerResponse;
public class RegisterStreamConsumer {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" ListShards <streamName>\n\n" +
"Where:\n" +
" streamName - The Amazon Kinesis data stream (for example, StockTradeStream)\n\n" +
"Example:\n" +
" ListShards StockTradeStream\n";
// if (args.length != 1) {
// System.out.println(USAGE);
// System.exit(1);
// }
String streamARN = "arn:aws:kinesis:us-east-1:8145480xxxxx:stream/LamDataStream" ; //args[0];
Region region = Region.US_EAST_1;
KinesisClient kinesisClient = KinesisClient.builder()
.region(region)
.build();
String arnValue = regConsumer(kinesisClient, streamARN);
System.out.println(arnValue);
kinesisClient.close();
}
public static String regConsumer(KinesisClient kinesisClient, String streamARN) {
try {
RegisterStreamConsumerRequest regCon = RegisterStreamConsumerRequest.builder()
.consumerName("MyConsumer")
.streamARN(streamARN)
.build();
RegisterStreamConsumerResponse resp = kinesisClient.registerStreamConsumer(regCon);
return resp.consumer().consumerARN();
} catch (KinesisException e) {
System.err.println(e.getMessage());
System.exit(1);
}
return "";
}
}
This worked for me after adding a couple of netty-tcnative dependencies:
implementation 'io.netty:netty-tcnative:2.0.40.Final'
implementation 'io.netty:netty-tcnative-boringssl-static:2.0.40.Final'
I followed https://docs.hazelcast.com/imdg/4.2/security/integrating-openssl.html to understand what was going on and why netty-nio was having issues.
public class MainActivity extends AppCompatActivity {
public static final int TRANSMIT_DATA = 1;
public static String string0;
public String temp; // global field; I want to pass string0's value into it
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
loadData();
System.out.println("Main output:ID="+Thread.currentThread().getId());
anotherThread();
}
public void anotherThread(){
new Thread(){
public void run() {
System.out.println("anotherThread :ID="+Thread.currentThread().getId());
System.out.println("anotherThread output: Content="+temp);
}
}.start(); // start a thread
}
private Handler dataHandler =new Handler(){
@Override
public void handleMessage(Message msg) {
switch (msg.what) {
case TRANSMIT_DATA:
System.out.println("handleMessage output:ID="+Thread.currentThread().getId());
System.out.println("handleMessage output: Content="+msg.obj);
temp=msg.obj.toString();
break;
default:
break;
}
}
};
public void loadData() {
OkHttpClient okHttpClient = new OkHttpClient();
// Build the Request:
// builder.get() makes this a GET request; the parameter of the url() method is a web address
Request.Builder builder = new Request.Builder();
final Map params = new LinkedHashMap(); // request parameters (unused here)
Request request = builder.get()
.url("https://api.avatardata.cn/Jztk/Query?key=15f9ceafeeb94a2492fd84b8c68a554c&subject=4&model=c1&testType=rand")
.build();
// 3. Wrap the Request into a Call
Call call = okHttpClient.newCall(request);
// 4. Execute the call; enqueue() requests the data asynchronously
call.enqueue(new Callback() {
@Override
public void onFailure(Call call, IOException e) {
// called on failure
Log.e("MainActivity", "onFailure: " );
}
@Override
public void onResponse(Call call, final Response response) throws IOException {
// called on success
Log.e("MainActivity", "onResponse: " );
// get the string returned by the network call
string0 = response.body().string();
System.out.println("Asynchronous Request Output:ID="+Thread.currentThread().getId());
Message message = new Message();
message.obj = string0;
message.what =TRANSMIT_DATA;
dataHandler.sendMessage(message);
}
});
}
}
The picture shows the System.out.println output.
As the picture above shows, the log prints "anotherThread output: Content=null". I want to pass information from the main thread to the child thread (in its run method). How can I do that? Please try to avoid changing the code of other methods if possible.
Given that you want minimal code changes, you could use an InheritableThreadLocal: a value set in the parent thread's InheritableThreadLocal is available to child threads created afterwards.
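A minimal, self-contained sketch of that idea (note the caveat: the value is copied into the child thread when the thread is created, so it must be set before the thread starts, which does not match the case where the data only arrives later):
public class InheritableThreadLocalDemo {
    // a value set by the parent thread is copied into child threads created afterwards
    private static final InheritableThreadLocal<String> CONTENT = new InheritableThreadLocal<>();

    public static void main(String[] args) {
        CONTENT.set("response body"); // must happen before the child thread is created
        new Thread(() -> System.out.println("child sees: " + CONTENT.get())).start();
    }
}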
I think your "anotherThread" starts and ends before the data is available in the temp variable, hence it prints null.
You can do something like:
a. Either start your "anotherThread" only after you fill the temp variable in the handleMessage function.
b. Or, if you insist on starting the "anotherThread" before you have the data, have the thread wait in a synchronized way, checking the temp variable for being non-null at some interval. Also use some sort of boolean flag to let the thread know to exit in case no data ever arrives.
My 2 cents.
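A runnable sketch of option b using a CountDownLatch (plain Java standing in for the activity: temp and the latch would be fields, and handleMessage would call countDown() right after setting temp):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {
    private static volatile String temp; // stands in for the activity's field
    private static final CountDownLatch dataReady = new CountDownLatch(1);

    public static void main(String[] args) throws Exception {
        // the "anotherThread": waits (with a timeout) instead of reading temp immediately
        new Thread(() -> {
            try {
                if (dataReady.await(10, TimeUnit.SECONDS)) {
                    System.out.println("anotherThread output: Content=" + temp);
                } else {
                    System.out.println("anotherThread output: no data received");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        Thread.sleep(500); // simulate the network delay
        temp = "response body"; // what handleMessage would do
        dataReady.countDown(); // wake the waiting thread
    }
}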
How can I send a Pub/Sub message manually (that is to say, without using a PubsubIO) in Dataflow?
Importing (via Maven) google-cloud-dataflow-java-sdk-all 2.5.0 already imports a version of com.google.pubsub.v1 for which I was unable to find an easy way to send messages to a Pub/Sub topic (this version doesn't, for instance, allow manipulating Publisher instances, which is the approach described in the official documentation).
Would you consider using PubsubUnboundedSink? Quick example:
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider.StaticValueProvider;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubClient;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubJsonClient;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubUnboundedSink;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubClient.TopicPath;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubMessage;
public class PubsubTest {
public static void main(String[] args) {
DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
.as(DataflowPipelineOptions.class);
// writes message to "output_topic"
TopicPath topic = PubsubClient.topicPathFromName(options.getProject(), "output_topic");
Pipeline p = Pipeline.create(options);
p
.apply("input string", Create.of("This is just a message"))
.apply("convert to Pub/Sub message", ParDo.of(new DoFn<String, PubsubMessage>() {
@ProcessElement
public void processElement(ProcessContext c) {
c.output(new PubsubMessage(c.element().getBytes(), null));
}
}))
.apply("write to topic", new PubsubUnboundedSink(
PubsubJsonClient.FACTORY,
StaticValueProvider.of(topic), // topic
"timestamp", // timestamp attribute
"id", // ID attribute
5 // number of shards
));
p.run();
}
}
Here's a way I found browsing https://github.com/GoogleCloudPlatform/cloud-pubsub-samples-java/blob/master/dataflow/src/main/java/com/google/cloud/dataflow/examples/StockInjector.java:
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpBackOffIOExceptionHandler;
import com.google.api.client.http.HttpBackOffUnsuccessfulResponseHandler;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.http.HttpResponse;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.HttpUnsuccessfulResponseHandler;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.client.util.ExponentialBackOff;
import com.google.api.client.util.Preconditions;
import com.google.api.client.util.Sleeper;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.model.PublishRequest;
import com.google.api.services.pubsub.model.PubsubMessage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Collections;
public class PubsubManager {
private static final Logger logger = LoggerFactory.getLogger(PubsubManager.class);
private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
private static final Pubsub pubsub = createPubsubClient();
public static class RetryHttpInitializerWrapper implements HttpRequestInitializer {
// Intercepts the request for filling in the "Authorization"
// header field, as well as recovering from certain unsuccessful
// error codes wherein the Credential must refresh its token for a
// retry.
private final GoogleCredential wrappedCredential;
// A sleeper; you can replace it with a mock in your test.
private final Sleeper sleeper;
private RetryHttpInitializerWrapper(GoogleCredential wrappedCredential) {
this(wrappedCredential, Sleeper.DEFAULT);
}
// Use only for testing.
RetryHttpInitializerWrapper(
GoogleCredential wrappedCredential, Sleeper sleeper) {
this.wrappedCredential = Preconditions.checkNotNull(wrappedCredential);
this.sleeper = sleeper;
}
@Override
public void initialize(HttpRequest request) {
final HttpUnsuccessfulResponseHandler backoffHandler =
new HttpBackOffUnsuccessfulResponseHandler(
new ExponentialBackOff())
.setSleeper(sleeper);
request.setInterceptor(wrappedCredential);
request.setUnsuccessfulResponseHandler(
new HttpUnsuccessfulResponseHandler() {
@Override
public boolean handleResponse(HttpRequest request,
HttpResponse response,
boolean supportsRetry)
throws IOException {
if (wrappedCredential.handleResponse(request,
response,
supportsRetry)) {
// If credential decides it can handle it, the
// return code or message indicated something
// specific to authentication, and no backoff is
// desired.
return true;
} else if (backoffHandler.handleResponse(request,
response,
supportsRetry)) {
// Otherwise, we defer to the judgement of our
// internal backoff handler.
logger.info("Retrying " + request.getUrl());
return true;
} else {
return false;
}
}
});
request.setIOExceptionHandler(new HttpBackOffIOExceptionHandler(
new ExponentialBackOff()).setSleeper(sleeper));
}
}
/**
* Creates a Cloud Pub/Sub client.
*/
private static Pubsub createPubsubClient() {
try {
HttpTransport transport = GoogleNetHttpTransport.newTrustedTransport();
GoogleCredential credential = GoogleCredential.getApplicationDefault();
HttpRequestInitializer initializer =
new RetryHttpInitializerWrapper(credential);
return new Pubsub.Builder(transport, JSON_FACTORY, initializer).build();
} catch (IOException | GeneralSecurityException e) {
logger.error("Could not create Pubsub client: " + e);
}
return null;
}
/**
* Publishes the given message to a Cloud Pub/Sub topic.
*/
public static void publishMessage(String message, String outputTopic) {
int maxLogMessageLength = 200;
if (message.length() < maxLogMessageLength) {
maxLogMessageLength = message.length();
}
logger.info("Received ...." + message.substring(0, maxLogMessageLength));
// Publish message to Pubsub.
PubsubMessage pubsubMessage = new PubsubMessage();
pubsubMessage.encodeData(message.getBytes());
PublishRequest publishRequest = new PublishRequest();
publishRequest.setMessages(Collections.singletonList(pubsubMessage));
try {
pubsub.projects().topics().publish(outputTopic, publishRequest).execute();
} catch (java.io.IOException e) {
logger.error("Stuff happened in pubsub: " + e);
}
}
}
You can send a Pub/Sub message using the PubsubIO.writeMessages() method.
Dataflow pipeline steps:
Pipeline p = Pipeline.create(options);
p.apply("Transformer1", ParDo.of(new Transformer1Fn())) // Transformer1Fn / Transformer2Fn are placeholder DoFns
.apply("Transformer2", ParDo.of(new Transformer2Fn())) // whose output elements are PubsubMessages
.apply("PubsubMessageSend", PubsubIO.writeMessages().to(PubSubConfig.getTopic(options.getProject(), options.getPubsubTopic())));
Define the project name and the Pub/Sub topic to which you want to send the message in your PipelineOptions.
When I run the following piece of code:
import io.vertx.core.*;
import io.vertx.core.eventbus.MessageConsumer;
import io.vertx.core.json.JsonObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
class TestVerticle extends AbstractVerticle {
final Logger logger = LoggerFactory.getLogger(TestVerticle.class.getName());
public static final String ADDRESS = "oot.test";
public void start(Future<Void> startFuture) {
logger.info("starting test verticle");
MessageConsumer<JsonObject> consumer = vertx.eventBus().consumer(ADDRESS);
consumer.handler(message -> {
final JsonObject body = message.body();
logger.info("received: " + body);
JsonObject replyMessage = body.copy();
replyMessage.put("status", "processed");
message.reply(replyMessage);
});
logger.info("started test verticle");
}
}
public class Scratchpad {
private final static Logger logger = LoggerFactory.getLogger(Scratchpad.class);
public static void main(String[] args) throws InterruptedException {
Vertx vertx = Vertx.vertx();
logger.info("deploying test verticle");
Handler<AsyncResult<String>> completionHandler = result -> {
System.out.println("done");
if (result.succeeded()) {
logger.info("deployment result: " + result.result());
} else {
logger.error("failed to deploy: " + result);
}
};
TestVerticle testVerticle = new TestVerticle();
vertx.deployVerticle(testVerticle, completionHandler);
logger.info("deployment completed");
}
}
I expect the completionHandler content to be executed, and therefore I should get something sent to stdout (at least "done", although the logging should work as well), but nothing happens. All the other logging information shows up correctly on my screen. What am I doing wrong?
After:
logger.info("started test verticle");
Add:
startFuture.complete();
See the Asynchronous Verticle start and stop section of the docs about the asynchronous start method:
This version of the method takes a Future as a parameter. When the method returns the verticle will not be considered deployed.
Some time later, after you've done everything you need to do (e.g. start other verticles), you can call complete on the Future (or fail) to signal that you're done.
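Applied to the code in the question, the start method becomes (only the last line is new):
public void start(Future<Void> startFuture) {
    logger.info("starting test verticle");
    MessageConsumer<JsonObject> consumer = vertx.eventBus().consumer(ADDRESS);
    consumer.handler(message -> {
        final JsonObject body = message.body();
        logger.info("received: " + body);
        JsonObject replyMessage = body.copy();
        replyMessage.put("status", "processed");
        message.reply(replyMessage);
    });
    logger.info("started test verticle");
    startFuture.complete(); // marks the verticle as deployed and fires the completionHandler
}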