I'm building a Spring Boot application that's deployed on Google App Engine. To find out the cause of some weird latency issues, I figured I'd enable Google Cloud Trace to view detailed latency reports.
Adding basic tracing was simple enough since this is supported natively by GAE. However, I am having a hard time adding details to any individual trace.
For example, take the following code:
public MediaContent getMediaContent(long contentId) {
    Optional<MediaContent> found;
    try (Scope ss = tracer.spanBuilder("databaseSubSpan")
            .setSampler(Samplers.alwaysSample())
            .startScopedSpan()) {
        tracer.getCurrentSpan().addAnnotation("Retrieving MediaContent " + contentId + " from repository");
        found = mediaContentRepository.findById(contentId);
    }
    if (found.isEmpty()) {
        tracer.getCurrentSpan().setStatus(Status.NOT_FOUND);
        throw new NoSuchContentException(contentId);
    }
    return found.get();
}
I figured that in the Cloud Trace UI this would show up as a separate little latency line, so I could see which part of the total request time is spent on database communication. However, no such information is visible to me:
I have made sure that this exact method is invoked by adding a few log entries around it. All the log entries (even the one inside the try block) show up in my logs.
I set the sampling rate to 100% by adding the following to my application.yml: spring.sleuth.sampler.probability: 1.0
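For reference, the equivalent nested YAML form of that property is:

spring:
  sleuth:
    sampler:
      probability: 1.0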
Basically, what I expected to see in the Cloud Trace UI is a second bar underneath the primary request, like so:

Is this even possible in Cloud Trace? I expect that it is, since the chart is tall enough to leave room for extra bars. If so, what am I doing wrong?
Thanks.
This needs to be investigated by the Google product team.

I would suggest raising it through the Public Issue Tracker for further support, or you can open a ticket with Google Cloud support (for free users, this link) so that the issue can be investigated further.
Related
I have a Java azure function (runtime 3.2.0) where I try to send some custom telemetry. I use the following code
import com.microsoft.applicationinsights.TelemetryClient;
import com.microsoft.applicationinsights.TelemetryConfiguration;
import com.microsoft.applicationinsights.telemetry.SeverityLevel;

TelemetryConfiguration config = TelemetryConfiguration.getActive();
TelemetryClient telemetry = new TelemetryClient(config);
telemetry.trackEvent("Test Event");
telemetry.trackTrace("Test Trace", SeverityLevel.Warning);
telemetry.flush(); // Not sure it is needed
When I check config.getInstrumentationKey(), it is the correct key for the Application Insights instance where I want the custom telemetry to show up. However, I never receive the custom event and trace in my Insights. Also, config.isTrackingDisabled() is false and config.getChannel() seems to make sense.

All the code examples I have found, as well as the official documentation, suggest this is all the code I need. When I use the logger from the ExecutionContext, logs appear in Application Insights, so my function has access to it. So I suspect I have overlooked some small but important fact, or some configuration of my function is not set correctly.
Can anybody help me get custom telemetry to work on my java function?
I'm quite new to Apache Camel and trying to bring some routes into action.
I have a TCP server which serves large JSON messages (up to ~30-50 kB in size; I have no control over the source) that contain lots of measurement data, which I want to process using certain additional routes that work fine.

I'm using Camel 2.20 within a Spring Boot 1.5.7 environment.
The problem occurs even when I comment out every other route and keep only a reduced incoming netty4 route (just a from and a counter), see below:
@Bean
public RouteBuilder getRoute() {
    String fromSource = String.format(
            "netty4:tcp://%s:%d?clientMode=true&textline=true&receiveBufferSize=64000&decoderMaxLineLength=64000",
            sourceIp, sourcePort);
    return new RouteBuilder() {
        @Override
        public void configure() {
            from(fromSource)
                .to("metrics:counter:incomingCounter");
        }
    };
}
The route works nearly fine, but it consumes more and more heap space (around 2 MB every second, with messages arriving at a frequency of around 20-30 Hz) until Java throws java.lang.OutOfMemoryError: Java heap space.

Without any routes no memory leak occurs, so I can narrow the problem down to the netty route.
Any help will be appreciated.
Thanks in advance.
I found the resolution myself by debugging the code.
I forgot to set the property sync=false on the netty4-camel endpoint, as I don't want to process a message and send an answer back to the server afterwards, just consume it. With sync=true (the default setting), Camel buffers all incoming data for a later response, which caused my "memory leak".
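For illustration, a minimal sketch of the corrected endpoint URI, assuming the same sourceIp/sourcePort fields as in the question:

// sync=false marks the consumer as one-way (InOnly), so Camel no longer
// buffers incoming messages while waiting to send a reply back.
String fromSource = String.format(
        "netty4:tcp://%s:%d?clientMode=true&sync=false&textline=true"
        + "&receiveBufferSize=64000&decoderMaxLineLength=64000",
        sourceIp, sourcePort);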
The behavior of "sync" was not totally clear to me from the netty4-camel documentation (http://camel.apache.org/netty4.html) - I'll suggest an improvement to the documentation (will write a mail with a proposal) to make the usage a little clearer.
Maybe this helps someone else with a similar problem.
Best
Is there a Java wrapper for the current version of the Stack Overflow API? I have been looking here and here, but they seem to be outdated for the current API version. I keep getting "connection refused" when running their example code. I do have an API key.
StackExchangeApiQueryFactory queryFactory = StackExchangeApiQueryFactory.newInstance("MyApplicationKey");
QuestionApiQuery query = queryFactory.newQuestionApiQuery();
List<Question> questions = query.withSort(Question.SortOrder.HOT)
        .withPaging(new Paging(1, 20))
        .withTimePeriod(new TimePeriod(new Date(), new Date()))
        .withFetchOptions(EnumSet.of(FilterOption.INCLUDE_BODY, FilterOption.INCLUDE_COMMENTS))
        .list();
This was an internal proxy mixup. Make sure your proxy is set properly. Also, if you are using an IDE, make sure that Automatic Proxy Configuration is chosen in its settings.
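If the JVM itself has to go through the proxy, one option is to set the standard Java networking properties before any HTTP calls are made (the host and port below are placeholders):

// Standard JVM proxy settings; replace host/port with your proxy's values.
System.setProperty("http.proxyHost", "proxy.example.com");
System.setProperty("http.proxyPort", "8080");
System.setProperty("https.proxyHost", "proxy.example.com");
System.setProperty("https.proxyPort", "8080");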
Can someone help me with the name of the API which enables real-time prediction for a model? Please note that I am not asking about the RealtimeEndpointRequest object. I have gone through the entire documentation of the AWS Machine Learning SDK but haven't found anything.
Edit 1:

This is the code that I have used:
CreateRealTimePrediction createRealTimePrediction;
CreateRealtimeEndpointRequest createRealtimeEndPointReq = new CreateRealtimeEndpointRequest();
CreateRealtimeEndpointResult createRealtimeEndPointRes;
PredictRequest predReq;
String mlModelId = "ml-Lkqmcs8cM2W";
createRealtimeEndPointReq.setMLModelId(mlModelId);
PredictResult predRes = null;
Map<String, String> record = new HashMap<>();
// assume I have set a record in the Map.
createRealtimeEndPointRes = amlClient.createRealtimeEndpoint(createRealtimeEndPointReq);
String predictEndpoint = createRealtimeEndPointRes.getRealtimeEndpointInfo().getEndpointUrl();
predReq = new PredictRequest();
predReq.setMLModelId(mlModelId);
for (int i = 0; i < recordKeys.length; i++) {
    record.put(recordKeys[i], recordValues[i]);
}
predReq.setRecord(record);
predReq.setPredictEndpoint(predictEndpoint);
predRes = amlClient.predict(predReq);
return predRes;
}
Now what is happening is: if I enable real-time prediction manually using the AWS Management Console and then run this segment of code, the results are generated as expected, but when real-time prediction is disabled, I get this error -
Exception in thread "main" com.amazonaws.services.machinelearning.model.PredictorNotMountedException: Either ML Model with id ml-Lkqmcs8cM2W is not enabled for real-time predictions or the MLModelId is invalid. (Service: AmazonMachineLearning; Status Code: 400; Error Code: PredictorNotMountedException; Request ID: 2dc70e58-07d0-11e5-a0c7-bb93f17d1b2e)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1160)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:748)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:467)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:302)
at com.amazonaws.services.machinelearning.AmazonMachineLearningClient.invoke(AmazonMachineLearningClient.java:1995)
at com.amazonaws.services.machinelearning.AmazonMachineLearningClient.predict(AmazonMachineLearningClient.java:637)
at com.nrift.aml.prediction.realtime.CreateRealTimePrediction.createRealTimePrediction(CreateRealTimePrediction.java:61)
at RealTimePrediction.main(RealTimePrediction.java:53)
which effectively means that this segment of code is not enabling the real-time prediction, even though I have used the CreateRealtimeEndpoint API in it.
P.S. - the code segment I have posted is not complete; the complete code works correctly, so you can make assumptions about the correctness of the code.
The API you are looking for is CreateRealtimeEndpoint. Creating a real-time endpoint is the mechanism for enabling the model to be used for real-time predictions. When you no longer need to use this model, you can destroy the endpoint with the DeleteRealtimeEndpoint API. The model always stays intact, so you can create/delete endpoints when needed.
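For illustration, a minimal sketch reusing the amlClient and model id from the question. Note that a freshly created endpoint is not usable immediately; its status transitions from UPDATING to READY:

// Enable real-time predictions for the model by creating an endpoint.
CreateRealtimeEndpointResult created = amlClient.createRealtimeEndpoint(
        new CreateRealtimeEndpointRequest().withMLModelId("ml-Lkqmcs8cM2W"));
// Poll this status until it is READY before calling predict().
System.out.println(created.getRealtimeEndpointInfo().getEndpointStatus());

// Later, when the endpoint is no longer needed (the model itself stays intact):
amlClient.deleteRealtimeEndpoint(
        new DeleteRealtimeEndpointRequest().withMLModelId("ml-Lkqmcs8cM2W"));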
I'm working on the OAuth 1 Sparklr and Tonr sample apps, and I'm trying to create a two-legged call. Hypothetically, the only thing you're supposed to do is change the Consumer Details Service from (I'm omitting the igoogle consumer info to simplify):
<oauth:consumer-details-service id="consumerDetails">
<oauth:consumer name="Tonr.com" key="tonr-consumer-key" secret="SHHHHH!!!!!!!!!!"
resourceName="Your Photos" resourceDescription="Your photos that you have uploaded to sparklr.com."/>
</oauth:consumer-details-service>
to:
<oauth:consumer-details-service id="consumerDetails">
<oauth:consumer name="Tonr.com" key="tonr-consumer-key" secret="SHHHHH!!!!!!!!!!"
resourceName="Your Photos" resourceDescription="Your photos that you have uploaded to sparklr.com."
requiredToObtainAuthenticatedToken="false" authorities="ROLE_CONSUMER"/>
</oauth:consumer-details-service>
That is, adding requiredToObtainAuthenticatedToken and authorities, which should cause the consumer to be trusted and therefore the whole validation process to be skipped.
However, I still get the login and confirmation screen from the Sparklr app. The current state of the official documentation is pretty precarious, considering that the project is being absorbed by Spring, so it's filled with broken links and ambiguous instructions. As far as I've understood, no changes are required in the client code, so I'm basically running out of ideas. I have even found people claiming that Spring OAuth clients don't support 2-legged access (which I find hard to believe).
The only way I have found to do it was by creating my own ConsumerSupport:
private OAuthConsumerSupport createConsumerSupport() {
    CoreOAuthConsumerSupport consumerSupport = new CoreOAuthConsumerSupport();
    consumerSupport.setStreamHandlerFactory(new DefaultOAuthURLStreamHandlerFactory());
    consumerSupport.setProtectedResourceDetailsService(new ProtectedResourceDetailsService() {
        public ProtectedResourceDetails loadProtectedResourceDetailsById(String id)
                throws IllegalArgumentException {
            SignatureSecret secret = new SharedConsumerSecret(CONSUMER_SECRET);
            BaseProtectedResourceDetails result = new BaseProtectedResourceDetails();
            result.setConsumerKey(CONSUMER_KEY);
            result.setSharedSecret(secret);
            result.setSignatureMethod(SIGNATURE_METHOD);
            result.setUse10a(true);
            result.setRequestTokenURL(SERVER_URL_OAUTH_REQUEST);
            result.setAccessTokenURL(SERVER_URL_OAUTH_ACCESS);
            return result;
        }
    });
    return consumerSupport;
}
and then reading the protected resource:
consumerSupport.readProtectedResource(url, accessToken, "GET");
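For context, the surrounding call looks roughly like this. Passing an empty, unauthenticated OAuthConsumerToken (so only the consumer key/secret sign the request) is my assumption of how the 2-legged case should behave; the resource id and URL are placeholders:

// Assumption: an empty token (no value/secret) yields a consumer-only signature.
OAuthConsumerToken accessToken = new OAuthConsumerToken();
accessToken.setResourceId("protectedResourceId"); // placeholder; must match the details service id
URL url = new URL("http://localhost:8080/sparklr/photos?format=json");
InputStream data = consumerSupport.readProtectedResource(url, accessToken, "GET");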
Has someone actually managed to make this work without boilerplate code?