I am trying to get a GraphQL subscription working with Java/Vert.x and to have the results shown in GraphiQL. I see all the System.out.println statements in the console, but GraphiQL is not displaying any results because the server is generating an 'Internal Server Error' message.
Schema:
type Subscription {
test: String
}
Vert.x Verticle
private RuntimeWiring getRuntimeWiring() {
return new RuntimeWiring()
.type("Subscription", builder -> builder
.dataFetcher("test", getTestDataFetcher()))
.build();
}
private VertxDataFetcher<Publisher<String>> getTestDataFetcher() {
return new VertxDataFetcher<>((env, future) -> future.complete(doTest()));
}
private Publisher<String> doTest() {
AtomicReference<Subscription> ar = new AtomicReference<>();
Observable<String> obs = Observable.just("Hello");
Publisher<String> pub = obs.toFlowable(BackpressureStrategy.BUFFER);
pub.subscribe(new Subscriber<String>() {
@Override
public void onSubscribe(Subscription s) {
System.out.println("SUBSCRIBE");
ar.set(s);
s.request(1);
}
@Override
public void onNext(String s) {
System.out.println("NEXT="+s);
ar.get().request(1);
}
@Override
public void onError(Throwable t) {
System.out.println("ERROR");
}
@Override
public void onComplete() {
System.out.println("COMPLETE");
}
});
return pub;
}
If I run the subscription from GraphiQL and look at my Vert.x server's console, the output is:
SUBSCRIBE
NEXT=Hello
COMPLETE
The GraphiQL output window says "Internal Server Error", and the server sends a 500 status code.
If I modify the DataFetcher to match exactly what is shown at the bottom of the first link, I also receive "Internal Server Error".
private DataFetcher<Publisher<String>> getTestDataFetcher() {
return env -> doTest();
}
I do not see any stack traces for the 500 error in the Vert.x console, so maybe this is a bug?
Sidenote - if I try using a CompletionStage as shown below (based on the example at the bottom of the 2nd link), I get an error message saying 'You data fetcher must return a publisher of events when using graphql subscriptions'
private DataFetcher<CompletionStage<String>> getTestDataFetcher() {
Single<String> single = Single.create(emitter -> {
new Thread(()-> {
try {
emitter.onSuccess("Hello");
} catch(Exception e) {
emitter.onError(e);
}
}).start();
});
return environment -> single.to(SingleInterop.get());
}
I have used the following sources as references to get this far:
https://www.graphql-java.com/documentation/v9/subscriptions/
https://vertx.io/docs/vertx-web-graphql/java/
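For reference, the pattern described in the first link above is a subscription data fetcher that returns the Publisher directly, without subscribing to it first (graphql-java subscribes once the client starts the subscription). A minimal sketch of that pattern, reusing the Flowable from the code above (not verified to fix the 500 in this setup):
private DataFetcher<Publisher<String>> getTestDataFetcher() {
// Hand the Publisher itself to graphql-java; it subscribes when the
// subscription operation starts, so no manual subscribe() call is needed.
return env -> Observable.just("Hello").toFlowable(BackpressureStrategy.BUFFER);
}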
I am developing an Android security app and have decided to implement the Play Integrity API as an alternative to the SafetyNet API. I have already completed the necessary setup steps, such as enabling the API in the Play Console and Cloud Console. However, I am getting a 'GOOGLE_SERVER_UNAVAILABLE' error when trying to obtain a token. Can anyone provide any insight into why this might be happening and possible solutions? Any help would be greatly appreciated.
Please see the code below:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
// playIntegritySetup.lol();
getToken();
}
private void getToken() {
String nonce = Base64.encodeToString(generateNonce(50).getBytes(), Base64.URL_SAFE | Base64.NO_WRAP | Base64.NO_PADDING);
// Create an instance of a manager.
IntegrityManager integrityManager = IntegrityManagerFactory.create(getApplicationContext());
// Request the integrity token by providing a nonce.
Task<IntegrityTokenResponse> integrityTokenResponse = integrityManager.requestIntegrityToken(
IntegrityTokenRequest.builder()
.setNonce(nonce)
.build());
integrityTokenResponse.addOnSuccessListener(new OnSuccessListener<IntegrityTokenResponse>() {
@Override
public void onSuccess(IntegrityTokenResponse integrityTokenResponse) {
String integrityToken = integrityTokenResponse.token();
SplashActivity.this.doIntegrityCheck(integrityToken);
Log.e("Integrity Token", "integrity token from the app" + integrityToken);
}
});
integrityTokenResponse.addOnFailureListener(e -> showErrorDialog("Error getting token from Google. Google said: " + getErrorText(e)));
}
private void doIntegrityCheck(String token) {
AtomicBoolean hasError = new AtomicBoolean(false);
Observable.fromCallable(() -> {
OkHttpClient okHttpClient = new OkHttpClient();
Response response = okHttpClient.newCall(new Request.Builder().url("money control url" + "token from backend server" + token).build()).execute();
Log.e("Token", "token from the app" + token);
if (!response.isSuccessful()) {
hasError.set(true);
return "Api request error. Code: " + response.code();
}
ResponseBody responseBody = response.body();
if (responseBody == null) {
hasError.set(true);
return "Api request error. Empty response";
}
JSONObject responseJson = new JSONObject(responseBody.string());
if (responseJson.has("error")) {
hasError.set(true);
return "Api request error: " + responseJson.getString("error");
}
if (!responseJson.has("deviceIntegrity")) {
hasError.set(true);
}
return responseJson.getJSONObject("deviceIntegrity").toString();
}) // Execute in IO thread, i.e. background thread.
.subscribeOn(Schedulers.io())
// report or post the result to main thread.
.observeOn(AndroidSchedulers.mainThread())
// execute this RxJava
.subscribe(new Observer<String>() {
@Override
public void onSubscribe(Disposable d) {
}
@Override
public void onNext(String result) {
if (hasError.get()) {
if (result.contains("MEETS_DEVICE_INTEGRITY") && result.contains("MEETS_BASIC_INTEGRITY")) {
//Here goes my other code
}
}
}
@Override
public void onError(Throwable e) {
}
@Override
public void onComplete() {
}
});
}
private String getErrorText(Exception e) {
String msg = e.getMessage();
if (msg == null) {
return "Unknown Error";
}
//the error code
int errorCode = Integer.parseInt(msg.replaceAll("\n", "").replaceAll(":(.*)", ""));
switch (errorCode) {
case IntegrityErrorCode.API_NOT_AVAILABLE:
return "API_NOT_AVAILABLE";
case IntegrityErrorCode.NO_ERROR:
return "NO_ERROR";
case IntegrityErrorCode.INTERNAL_ERROR:
return "INTERNAL_ERROR";
case IntegrityErrorCode.NETWORK_ERROR:
return "NETWORK_ERROR";
case IntegrityErrorCode.PLAY_STORE_NOT_FOUND:
return "PLAY_STORE_NOT_FOUND";
case IntegrityErrorCode.PLAY_STORE_ACCOUNT_NOT_FOUND:
return "PLAY_STORE_ACCOUNT_NOT_FOUND";
case IntegrityErrorCode.APP_NOT_INSTALLED:
return "APP_NOT_INSTALLED";
case IntegrityErrorCode.PLAY_SERVICES_NOT_FOUND:
return "PLAY_SERVICES_NOT_FOUND";
case IntegrityErrorCode.APP_UID_MISMATCH:
return "APP_UID_MISMATCH";
case IntegrityErrorCode.TOO_MANY_REQUESTS:
return "TOO_MANY_REQUESTS";
case IntegrityErrorCode.CANNOT_BIND_TO_SERVICE:
return "CANNOT_BIND_TO_SERVICE";
case IntegrityErrorCode.NONCE_TOO_SHORT:
return "NONCE_TOO_SHORT";
case IntegrityErrorCode.NONCE_TOO_LONG:
return "NONCE_TOO_LONG";
case IntegrityErrorCode.GOOGLE_SERVER_UNAVAILABLE:
return "GOOGLE_SERVER_UNAVAILABLE";
case IntegrityErrorCode.NONCE_IS_NOT_BASE64:
return "NONCE_IS_NOT_BASE64";
default:
return "Unknown Error";
}
}
private String generateNonce(int length) {
String nonce = "";
String allowed = getNonce();
for (int i = 0; i < length; i++) {
nonce = nonce.concat(String.valueOf(allowed.charAt((int) Math.floor(Math.random() * allowed.length()))));
}
return nonce;
}
public native String getNonce();
static {
System.loadLibrary("all-keys");
}
I ran into the same problem and I found a solution for this.
You need to specify cloudProjectNumber() when your app is distributed outside of Google Play; the project number can be found in the Google Cloud Console.
Quote from the doc:
Important: In order to receive and decrypt Integrity API responses,
apps not available on Google Play need to include their Cloud project
number in their requests. You can find this in Project info in the
Google Cloud Console.
So the code should be like this:
IntegrityTokenRequest.builder()
.setNonce(nonce)
.setCloudProjectNumber(100004676) // your cloud project number here for dev build
.build());
I am new to Vert.x and async programming.
I have 2 verticles communicating via an event bus as follows:
//API Verticle
public class SearchAPIVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private Integer defaultPort;
private void sendSearchRequest(RoutingContext routingContext) {
final JsonObject requestMessage = routingContext.getBodyAsJson();
final EventBus eventBus = vertx.eventBus();
eventBus.request(GET_USEARCH_DOCS, requestMessage, reply -> {
if (reply.succeeded()) {
Logger.info("Search Result = " + reply.result().body());
routingContext.response()
.putHeader("content-type", "application/json")
.setStatusCode(200)
.end((String) reply.result().body());
} else {
Logger.info("Document Search Request cannot be processed");
routingContext.response()
.setStatusCode(500)
.end();
}
});
}
@Override
public void start() throws Exception {
Logger.info("Starting the Gateway service (Event Sender) verticle");
// Create a Router
Router router = Router.router(vertx);
//Added bodyhandler so we can process json messages via the event bus
router.route().handler(BodyHandler.create());
// Mount the handler for incoming requests
// Find documents
router.post("/api/search/docs/*").handler(this::sendSearchRequest);
// Create an HTTP Server using default options
HttpServer server = vertx.createHttpServer();
// Handle every request using the router
server.requestHandler(router)
//start listening on port 8083
.listen(config().getInteger("http.port", 8083)).onSuccess(msg -> {
Logger.info("*************** Search Gateway Server started on "
+ server.actualPort() + " *************");
});
}
@Override
public void stop(){
//house keeping
}
}
//Below is the target verticle that should be making the multiple web client calls and merging the responses
@Component
public class SolrCloudVerticle extends AbstractVerticle {
public static final String GET_USEARCH_DOCS = "get.usearch.docs";
@Autowired
private SearchRepository searchRepositoryService;
@Override
public void start() throws Exception {
Logger.info("Starting the Solr Cloud Search Service (Event Consumer) verticle");
super.start();
ConfigStoreOptions fileStore = new ConfigStoreOptions().setType("file")
.setConfig(new JsonObject().put("path", "conf/config.json"));
ConfigRetrieverOptions configRetrieverOptions = new ConfigRetrieverOptions()
.addStore(fileStore);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, configRetrieverOptions);
configRetriever.getConfig(ar -> {
if (ar.succeeded()) {
JsonObject configJson = ar.result();
EventBus eventBus = vertx.eventBus();
eventBus.<JsonObject>consumer(GET_USEARCH_DOCS).handler(getDocumentService(searchRepositoryService, configJson));
Logger.info("Completed search service event processing");
} else {
Logger.error("Failed to retrieve the config");
}
});
}
private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService, JsonObject configJson) {
return requestMessage -> vertx.<String>executeBlocking(future -> {
try {
//I need to incorporate the logic here that adds futures to list and composes the compositefuture
/*
//Below is my logic to populate the future list
WebClient client = WebClient.create(vertx);
List<Future> futureList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
Future<String> future1 = client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
.expect(ResponsePredicate.SC_OK)
.sendJsonObject(requestMessage.body())
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
futureList.add(future1);
}
//Below is the CompositeFuture logic, but the logic and construct does not make sense to me. What goes as first and second argument of executeBlocking method
/*CompositeFuture.join(futureList)
.onSuccess(result -> {
result.list().forEach( x -> {
if(x != null){
requestMessage.reply(result.result());
}
}
);
})
.onFailure(error -> {
System.out.println("We should not fail");
})
*/
future.complete("DAO returns a Json String");
} catch (Exception e) {
future.fail(e);
}
}, result -> {
if (result.succeeded()) {
requestMessage.reply(result.result());
} else {
requestMessage.reply(result.cause()
.toString());
}
});
}
}
I was able to use org.springframework.web.reactive.function.client.WebClient calls to compose my search result from multiple web client calls, as opposed to using io.vertx.ext.web.client.WebClient futures with CompositeFuture.
I was trying to avoid mixing Spring Boot and Vert.x, but unfortunately Vert.x CompositeFuture did not work for me here:
//This method supplies the parameter for the future.complete(..) line in getDocumentService(SearchRepository,JsonObject)
private List<JsonObject> findByQueryParamsAndDataSources(SearchRepository searchRepositoryService,
JsonObject configJson,
JsonObject requestMessage)
throws SolrServerException, IOException {
List<JsonObject> searchResultList = new ArrayList<>();
for (Object collection : searchRepositoryService.findAllCollections(configJson).getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
searchResultList.add(new JsonObject(doSearchPerCollection(collection.toString(), requestMessage.toString())));
}
return aggregateMultiCollectionSearchResults(searchResultList);
}
public String doSearchPerCollection(String collection, String message) {
org.springframework.web.reactive.function.client.WebClient client =
org.springframework.web.reactive.function.client.WebClient.create();
return client.post()
.uri("http://127.0.0.1:8983/solr/" + collection + "/query")
.contentType(MediaType.APPLICATION_JSON)
.accept(MediaType.APPLICATION_JSON)
.body(BodyInserters.fromValue(message.toString()))
.retrieve()
.bodyToMono(String.class)
.block();
}
private List<JsonObject> aggregateMultiCollectionSearchResults(List<JsonObject> searchList){
//TODO: Search result aggregation
return searchList;
}
My use case is that the second verticle should make multiple Vert.x web client calls and combine the responses.
If an API call fails, I want to log the error and still continue processing and merging the responses from the other calls.
Please, any help on how my code above could be adapted to handle this use case?
I am looking at Vert.x CompositeFuture, but have made no headway and have not found a useful example yet!
What you are looking for can be done with Future coordination and a little bit of additional handling:
CompositeFuture.join(future1, future2, future3).onComplete(ar -> {
if (ar.succeeded()) {
// All succeeded
} else {
// All completed and at least one failed
}
});
The join composition waits until all futures are completed, either with a success or a failure.
CompositeFuture.join takes several futures arguments (up to 6) and returns a future that is succeeded when all the futures are succeeded, and failed when all the futures are completed and at least one of them is failed.
Using join you wait for all futures to complete; the issue is that if one of them fails you will not be able to obtain the responses from the others, because the CompositeFuture itself will be failed. To avoid this, add Future<T> recover(Function<Throwable, Future<T>> mapper) to each of your futures, log the error there, and return an empty result so that the future does not fail.
Here is a short example:
Future<String> response1 = client.post(8887, "localhost", "work").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
Future<String> response2 = client.post(8887, "localhost", "error").expect(ResponsePredicate.SC_OK).send()
.map(HttpResponse::bodyAsString).recover(error -> {
System.out.println(error.getMessage());
return Future.succeededFuture();
});
CompositeFuture.join(response2, response1)
.onSuccess(result -> {
result.list().forEach(x -> {
if(x != null) {
System.out.println(x);
}
});
})
.onFailure(error -> {
System.out.println("We should not fail");
});
Edit 1:
The limit for CompositeFuture.join(Future...) is 6 futures; in case you need more, you can use CompositeFuture.join(Arrays.asList(future1, future2, future3)), which accepts a list with any number of futures.
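Applied to the futureList from the question, a sketch of the list-based join (assuming each future already has the recover handler shown above, so failed calls show up as null entries; requestMessage is the event-bus message from the question's handler):
CompositeFuture.join(futureList).onComplete(ar -> {
JsonArray merged = new JsonArray();
if (ar.succeeded()) {
ar.result().list().forEach(body -> {
if (body != null) {
// each non-null entry is a Solr response body returned as a String
merged.add(new JsonObject((String) body));
}
});
}
requestMessage.reply(merged.encode());
});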
Caller of the method:
for (String name : controllerToPartitionModels.keySet())
{
List<PartitionModel> partitionsList = controllerToPartitionModels.get(name);
refreshPartition(partitionsList,false);
}
Method
private void refreshPartition(List<PartitionModel> partitionModels, boolean isSyncAll) {
ITModule.getITService()
.refreshPartitionStatus(new ArrayList<>(partitionModels), isSyncAll)
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.io())
.subscribe(new Action() {
@Override
public void run() throws Exception {
Logger.get().d(ATTActionManager.this, "Refreshing request sent successfully for list of size : " + partitionModels.size());
}
}, (@NonNull Throwable throwable) -> {
Logger.get().d(ATTActionManager.this, "Error on Refresh request");
});
}
Problem
If there are 2 requests that have to be sent, I sometimes see only one request being sent. Meaning, even though the for loop executes twice for the 2 (HTTP) requests, I see only one request reaching the server.
What is it that I am doing wrong here?
RxJava version in use: 2.2.19
You can merge the above 2 methods to solve your problem by using flatMapIterable.
Merged Solution:
private void refreshPartition(Map<String, ?> controllerToPartitionModels) {
Observable.just(controllerToPartitionModels)
.map(m -> m.keySet())
.flatMapIterable((Function<Set<String>, Iterable<String>>) names -> names)
.map(name -> {
boolean isSyncAll = false; // You can customise as per requirement
return new Pair<List<PartitionModel>, Boolean>(controllerToPartitionModels.get(name), isSyncAll);
})
.flatMap((Function<Pair<List<PartitionModel>, Boolean>, ObservableSource<?>>) pair -> {
List<PartitionModel> partitionModels = pair.first;
boolean isSyncAll = pair.second;
return ITModule.getITService()
.refreshPartitionStatus(new ArrayList<>(partitionModels), isSyncAll);
})
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.io())
.subscribe(result -> Logger.get().d(ATTActionManager.this, "Refreshing request sent successfully"),
(@NonNull Throwable throwable) -> Logger.get().d(ATTActionManager.this, "Error on Refresh request"));
}
*Kindly replace ? with the valid object type.
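Once the ? is replaced (here it would be List<PartitionModel>), the caller's loop from the question collapses to a single call:
refreshPartition(controllerToPartitionModels);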
I'm trying to experiment with Akka and to use actors on different PCs. To start, I'm trying to connect to actors in the same JVM and in the same ActorSystem, but using a remote selection. However, I'm failing even at this simple task. The following is the minimized code that shows my problem. I believe I'm programmatically adding all the needed configuration. When I run the code as it is, using the line marked with /*works*/, I get B received dd; if I swap /*works*/ with /*fails*/, I get [INFO]..[akka://N1/deadLetters]..was not delivered.
What am I doing wrong? How can I access B using the remote selector?
class A extends AbstractActor {
public Receive createReceive() {
return receiveBuilder()
.match(String.class, s -> {
ActorSelection selection = context().actorSelection(
/*fails*/ //"akka.tcp://N1#127.0.0.1:2500/user/B"
/*works*/ "akka://N1/user/B"
);
selection.tell("dd", self());
})
.build();
}
}
class B extends AbstractActor {
public Receive createReceive() {
return receiveBuilder()
.match(String.class, s -> {
System.out.println("B received " + s);
})
.build();
}
}
public class AkkaS1 {
public static void main(String[] args) {
Config config =
ConfigFactory
.parseString("akka.remote.netty.tcp.port = 2500")
.withFallback(
ConfigFactory.parseString("akka.remote.netty.hostname = 127.0.0.1"))
.withFallback(ConfigFactory.load());
ActorSystem s = ActorSystem.create("N1", config);
ActorRef a = s.actorOf(Props.create(A.class, () -> new A()), "A");
s.actorOf(Props.create(B.class, () -> new B()), "B");
a.tell("Please discover b", ActorRef.noSender());
System.out.println(">>> Press ENTER to exit <<<");
try {
System.in.read();
} catch (IOException ioe) {
} finally {
s.terminate();
}
}
}
I believe I'm programmatically adding all the needed configuration.
You appear to be missing a couple of settings: akka.actor.provider = remote and akka.remote.enabled-transports = ["akka.remote.netty.tcp"]. Also, change akka.remote.netty.hostname to akka.remote.netty.tcp.hostname.
According to the documentation, the minimum configuration is the following:
akka {
actor {
provider = remote
}
remote {
enabled-transports = ["akka.remote.netty.tcp"]
netty.tcp {
hostname = "127.0.0.1"
port = 2500
}
}
}
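If you want to keep the programmatic style from the question, the same minimum configuration can be built with ConfigFactory.parseString — a sketch, assuming classic remoting over the Netty TCP transport:
Config config = ConfigFactory.parseString(
"akka.actor.provider = remote \n" +
"akka.remote.enabled-transports = [\"akka.remote.netty.tcp\"] \n" +
"akka.remote.netty.tcp.hostname = \"127.0.0.1\" \n" +
"akka.remote.netty.tcp.port = 2500")
.withFallback(ConfigFactory.load());
ActorSystem s = ActorSystem.create("N1", config);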
I'm always getting an unhandled exception when Google+ responds with an error JSON
retrofit2.HttpException: HTTP 404
at retrofit2.RxJavaCallAdapterFactory$SimpleCallAdapter$1.call(RxJavaCallAdapterFactory.java:159)
at retrofit2.RxJavaCallAdapterFactory$SimpleCallAdapter$1.call(RxJavaCallAdapterFactory.java:154)
at rx.internal.operators.OperatorMap$1.onNext(OperatorMap.java:54)
at retrofit2.RxJavaCallAdapterFactory$CallOnSubscribe.call(RxJavaCallAdapterFactory.java:109)
at retrofit2.RxJavaCallAdapterFactory$CallOnSubscribe.call(RxJavaCallAdapterFactory.java:88)
at rx.Observable$2.call(Observable.java:162)
at rx.Observable$2.call(Observable.java:154)
at rx.Observable$2.call(Observable.java:162)
at rx.Observable$2.call(Observable.java:154)
....
In that code:
Observable.create(new Observable.OnSubscribe<String>() {
@Override
public void call(Subscriber<? super String> strSub) {
// Getting ID
strSub.onNext(AccountUtils.getAccountId(appContext));
strSub.onCompleted();
}
})
.subscribeOn(Schedulers.io())
// Get Google+ Image through Retrofit2
.flatMap(str -> createGPlusUserObservable(str, AccountUtils.ANDROID_API_KEY))
.map(this::setprofileImage) // I don't see Timber.d message inside that method!
.compose(binder)
.observeOn(AndroidSchedulers.mainThread())
.subscribe(subscriber);
In createGPlusUserObservable I use Retrofit 2 to get the Google+ image:
private Observable<GPlusUser> createGPlusUserObservable(String userId, String apiKey) {
//try {
GoogleApiService service = ServiceFactory.getInstance().createJsonRetrofitService(
GoogleApiService.class,
GoogleApiService.SERVICE_ENDPOINT
);
Observable<GPlusUser> result = service.getGPlusUserInfo(userId, apiKey);
Timber.d("Here1!"); // I see that in console!
return result; // It always returns result!
/*} catch (Throwable e) { - it doesn't catch anything!
Timber.d("Here!");
}*/
}
And subscriber is:
new Subscriber<GPlusUser>() {
@Override
public void onCompleted() {
Timber.d("GPlusUserSubscriber ON COMPLETED");
}
@Override
public void onError(Throwable e) {
if (e instanceof HttpException) {
Timber.d("RETROFIT!"); // I see that in console!
}
}
@Override
public void onNext(GPlusUser gPlusUser) {
setupAccountBox();
}
};
UPDATE: setprofileImage method
private GPlusUser setprofileImage(GPlusUser gPlusUser) {
Timber.d("FOUR"); // As I've said, it doesn't appear in console
AccountUtils.setProfileImage(appContext, gPlusUser.image.url);
Timber.d("Setting profile image: %s", gPlusUser.image.url);
return gPlusUser;
}
So the question is: why am I getting an unhandled exception if I handle it in the subscriber's onError(Throwable e)?
Thanks!
I think it is because the error happens in the Retrofit adapter/converter logic, while converting the raw HTML string into my GPlusUser class.
I've eliminated that annoying exception in the console log by working with the raw response through Observable<Response<ResponseBody>> and its response.isSuccess().
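For illustration, a sketch of that workaround; the getGPlusUserRaw method, its path, and the parseGPlusUser helper are hypothetical names, not the real GoogleApiService definition. The idea is to declare the call as Observable<Response<ResponseBody>> and check the status inside the stream instead of letting the call adapter raise HttpException:
// Hypothetical raw variant of the service method
@GET("/plus/v1/people/{userId}")
Observable<Response<ResponseBody>> getGPlusUserRaw(@Path("userId") String userId, @Query("key") String apiKey);

// Hypothetical replacement for createGPlusUserObservable
private Observable<GPlusUser> createGPlusUserObservableRaw(String userId, String apiKey) {
GoogleApiService service = ServiceFactory.getInstance().createJsonRetrofitService(
GoogleApiService.class, GoogleApiService.SERVICE_ENDPOINT);
return service.getGPlusUserRaw(userId, apiKey)
.map(response -> {
if (!response.isSuccess()) { // isSuccessful() on newer Retrofit 2 releases
throw new RuntimeException("Google+ error, HTTP " + response.code());
}
// parse response.body() into GPlusUser here (e.g. with Gson) instead of
// relying on the converter inside Retrofit's call adapter
return parseGPlusUser(response.body());
});
}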