GAE Datastore Admin Backup failing with 404 on mapreduce - java

I have a Java GAE application with two modules, and I am having problems using the backup/restore Datastore Admin functionality. The tasks get created properly but fail and retry endlessly in the default queue. From the logs of my non-default ("engine") module it looks like the tasks are being processed there (rather than in the default module for the app). I also don't have anything explicitly mapping /_ah/mapreduce in my web.xml for either module, which seems to be the usual cause reported for this symptom. I don't see any documentation suggesting I need to manually configure appengine-mapreduce.jar, so I haven't gone down that road yet.
0.1.0.2 - - [07/Jan/2015:08:11:19 -0800] "POST /_ah/mapreduce/kickoffjob_callback/15759222115551DD09797 HTTP/1.1" 404 234 "https://ah-builtin-python-bundle-dot-MYAPP.appspot.com/_ah/datastore_admin/backup.do" "AppEngine-Google; (+http://code.google.com/appengine)" "engine.MYAPP.appspot.com" ms=10713 cpu_ms=22 cpm_usd=0.000026 queue_name=default task_name=50337988952552890461 pending_ms=10702 instance=0 app_engine_release=1.9.17
This did work at one point but I've upgraded quite a bit (moved from backends to modules, moved to HRD, upgraded GAE version to 1.9.2, etc.).
Thanks in advance for any hints or suggestions!
Edit:
So I figured this out. I have two modules in my app (named default and engine). The default task queue is routed to the engine module (formerly a backend) in my queue.xml, rather than ah-builtin-python-bundle.
Adding a new queue in my queue.xml routed to ah-builtin-python-bundle and using that for Datastore Admin fixed the problem.

If you make any changes to the default task queue in queue.xml, in particular the target element, you'll want to create a dedicated queue for Datastore Admin to use. Something like:
<queue>
  <name>backup</name>
  <rate>10/s</rate>
  <bucket-size>40</bucket-size>
  <max-concurrent-requests>10</max-concurrent-requests>
  <target>ah-builtin-python-bundle</target>
</queue>

Related

Configuring open telemetry for tracing service to service calls ONLY

I am experimenting with different instrumentation libraries, but spring-cloud-sleuth and OpenTelemetry (OT) are the ones I liked the most. Spring Cloud Sleuth is simple, but it will not work for a non-Spring (JAX-RS) project, so I diverted my attention to OpenTelemetry.
I am able to export the metrics using OT, but there is just too much data that I do not need. Spring Sleuth gave the perfect solution, wherein it just traces the call across microservices and links all the spans with one traceId.
My question is: how do I configure OT to get an output similar to Spring Sleuth's? I tried various configurations and a few worked, but the amount of information is still huge.
My configuration
-Dotel.traces.exporter=zipkin -Dotel.instrumentation.[jdbc].enabled=false -Dotel.instrumentation.[methods].enabled=false -Dotel.instrumentation.[jdbc-datasource].enabled=false
However, this still gives me method calls and other data. Also, one big pain point is that I am not able to shut down the metrics data.
I get an error like the one below:
ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Failed to connect to localhost/0:0:0:0:0:0:0:1:4317
Any help will be appreciated.
There are two ways to configure the OpenTelemetry (OTel) agent:
Environment variable
Java system property
You can either set
export OTEL_METRICS_EXPORTER=none
or
java -Dotel.metrics.exporter=none -jar app.jar
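Putting the two together, a full agent run that keeps only trace data might look like the sketch below (the -javaagent path is a placeholder, and otel.logs.exporter is only recognized by newer agent versions):
java -javaagent:opentelemetry-javaagent.jar \
  -Dotel.traces.exporter=zipkin \
  -Dotel.metrics.exporter=none \
  -Dotel.logs.exporter=none \
  -jar app.jar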
Reference
https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md

(GAE-Standard+Java11) Sessions with multiple instances running

I have deployed my Spring Boot application on GAE, Java 11, Standard Environment. As per the documentation, for Java 11 we need to use app.yaml for configuring the instances.
I wanted to know how I can enable sharing of sessions between instances. As per my research, we could previously solve this simply by setting sessions-enabled and async-session-persistence in appengine-web.xml. With appengine-web.xml gone, what is the equivalent way of doing this in app.yaml?
The use case I am trying to achieve:
Using Spring Security (unfortunately, I get logged out when, as far as I can tell, a request from the same user goes to another instance).
Storing the user retrieved from the DB in a @SessionScoped variable so as to avoid multiple DB calls.
Any help here would be really appreciated. Thanks!
I went through a lot of documentation, but I believe this is not part of the app.yaml configuration reference.
Alternatively, I found that you can use session affinity so that the same instance always replies to the requests of a given user. You can enable this with the following setting in your app.yaml, according to this documentation:
network:
  session_affinity: true
Hope this works for you.

com.google.api.config.ServiceConfigSupplier - Failed to fetch default config version for service (only on localhost)

I'm using Cloud Endpoints Frameworks (2.0.1) for Java as part of my final year project and have been relatively successful with it so far.
I don't have any problems when deploying to my appspot.com domain, however, I am running into some problems when deploying locally.
(Any references to my-project-id in the following code blocks are aliases for my actual google cloud project id)
I have a valid OpenAPI descriptor (openapi.json) of an annotated @Api class which I am deploying to Cloud Endpoints using "gcloud service-management deploy openapi.json".
The command returns successfully:
Service Configuration [2017-02-23r0] uploaded for service [api.endpoints.<my-project-id>.cloud.goog]
I then map the returned config_id to the correct endpoints_api_service in my app.yaml
endpoints_api_service:
  name: api.endpoints.<my-project-id>.cloud.goog
  config_id: 2017-02-23r0
This service is listed by the gcloud cli tool using "gcloud service-management list"
NAME TITLE
storage-component.googleapis.com Google Cloud Storage
api.endpoints.<my-project-id>.cloud.goog api.endpoints.<my-project-id>.cloud.goog
etc...
and "gcloud service-management configs list --service api.endpoints.my-project-id.cloud.goog"
CONFIG_ID SERVICE_NAME
2017-02-23r0 api.endpoints.<my-project-id>.cloud.goog
... other version configs
and is accessible on my appspot.com domain (I can call the endpoint and receive the correct response)
I am trying to deploy my project on localhost using the Maven App Engine plugin for Java (mvn appengine:devserver), but upon Jetty startup I'm hit with the following exception:
WARNING: Failed startup of context com.google.appengine.tools.development.DevAppEngineWebAppContext...
com.google.api.config.ServiceConfigException: Failed to fetch default config version for service 'api.endpoints.<my-project-id>.cloud.goog'. No versions exist!
at com.google.api.config.ServiceConfigSupplier.fetchLatestServiceVersion(ServiceConfigSupplier.java:155)
....
The deployment then gets stuck in an endless cycle of trying to start Jetty, being hit with that error message, restarting, and so on. Any attempt to access localhost:8080 results in a "503: Service not found" error.
I assumed that the local deployment of my app would be able to access the service config that was deployed using "gcloud service-management deploy", in the same way that the appspot.com deployment can, but is this not the case?
Looking at the source for ServiceConfigSupplier.fetchLatestServiceVersion(), I gather that serviceManagement.services().configs().list(my-service-name).execute().getServiceConfigs() is returning an empty list, but why is this only occurring locally?
Extra Information
my ENDPOINTS_SERVICE_NAME environment variable matches 'api.endpoints.my-project-id.cloud.goog'
I noticed that there was an update (1.0.2) to com.google.api.config a few days ago, and it has a dependency on an older version of com.google.api.services.servicemanagement (dependent on v1-rev14-1.22.0 with the newest version being v1-rev340-1.22.0)
I doubt this is the problem, but I thought I would mention it, as it contains classes relevant to the exception (ServiceManagement is used by ServiceConfigSupplier, which is throwing the exception). Perhaps there is an inconsistency in where they are looking for the service configs?
I'm quite stumped, to be honest; it's a bit over my head. I would hate to have to remove Endpoints, as I'm starting to like it, but we can't really lose the use of our devserver either. I hope someone can shed a little light on this issue.
It's not a fix, but I was able to work around the problem by using the advice in https://stackoverflow.com/a/41493548/1410035.
Namely, commenting out the ServiceManagementConfigFilter:
b) Comment out the ServiceManagementConfigFilter from web.xml, i.e.,
<!--
<filter>
  <filter-name>endpoints-api-configuration</filter-name>
  <filter-class>com.google.api.control.ServiceManagementConfigFilter</filter-class>
</filter>
-->
<!--
<filter-mapping>
  <filter-name>endpoints-api-configuration</filter-name>
  <servlet-name>EndpointsServlet</servlet-name>
</filter-mapping>
-->
Note that you have to comment out both the filter and the filter-mapping, and they aren't right next to each other in the file.
I found that I didn't need to remove the scaling block as mentioned in point 'a' in the linked answer.
This may be related to a permission issue. Make sure you have pulled all recent updates (git pull), and check that your Cloud SDK is up to date by running: gcloud components update.
I am assuming you followed the instructions listed at https://cloud.google.com/endpoints/docs/frameworks/java/quickstart-frameworks-java. To get around this issue, you can create a service account with the necessary permissions, or use the command gcloud auth application-default login.
You can set up a service account using the Cloud SDK (gcloud); see https://cloud.google.com/sdk/docs/authorizing.
Please let me know if you have any more questions.
As for the command gcloud auth application-default login, according to its help description:
Obtains user access credentials via a web flow and puts them in the
well-known location for Application Default Credentials to use them as
a proxy for a service account.
When you use this command, it obtains credentials for your Gmail account (something@gmail.com) and then stores them in a location known to contain application credentials.
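If you prefer a dedicated service account over your user credentials, a rough sketch of the gcloud steps follows (the account name endpoints-dev and the key path are placeholders; substitute your own project ID for my-project-id):
gcloud iam service-accounts create endpoints-dev --display-name "Endpoints dev"
gcloud projects add-iam-policy-binding my-project-id \
  --member serviceAccount:endpoints-dev@my-project-id.iam.gserviceaccount.com \
  --role roles/servicemanagement.serviceController
gcloud iam service-accounts keys create key.json \
  --iam-account endpoints-dev@my-project-id.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=$PWD/key.json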
It worked with "gradle appengineRun", but for an IntelliJ IDEA project I had to replace all the ${endpoints.project.id} placeholders in web.xml and appengine-web.xml to run/debug on localhost from IntelliJ (project imported from Gradle sources, Google Cloud Tools plugin installed, and a run/debug configuration set up via Tools > Google Cloud Tools > Run on a local App Engine Standard dev server).
My error was:
Failed to fetch default config version for service 'echo-api.endpoints.${endpoints.project.id}.cloud.goog'. No versions exist!
The cloud.google.com docs only have a Maven build example; the Gradle build is at github.com.
Another thing to bear in mind is whether your service account has the right permissions. Your service account looks something like the following:
[project ID]@appspot.gserviceaccount.com
By default it has the Project Editor role; at a minimum, you need to grant it the Service Controller role.
If that role is gone, you can follow these instructions to add it back.
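For reference, a hedged one-liner for granting the role back with gcloud (substitute your own project ID):
gcloud projects add-iam-policy-binding my-project-id \
  --member serviceAccount:my-project-id@appspot.gserviceaccount.com \
  --role roles/servicemanagement.serviceController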
This can happen if you've changed the Google Cloud project you're trying to authenticate to (if someone else has changed the project, this can happen when you pull changes from source control). In this case, the service account credentials that you were using for the old project will no longer be valid, and you can authenticate to the new project by running:
gcloud auth application-default login
I was having this fairly similar error,
endpoints.repackaged.com.google.api.config.ServiceConfigException:
Failed to fetch service config (status code 404): The service config
name and config id could not be found. Double check that filter
initialization parameters endpoints.projectId and
endpoints.serviceName are correctly set.
and the issue for me was having the ENDPOINTS_SERVICE_VERSION environment variable specified in my appengine-web.xml. So basically, deleting this line was enough in my case (since Endpoints uses the most recent service version if none is provided):
<env-var name="ENDPOINTS_SERVICE_VERSION" value="1" />
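For context, that line sits inside the env-variables block of appengine-web.xml; a sketch of the block with the offending line marked (the service name is a placeholder):
<env-variables>
  <env-var name="ENDPOINTS_SERVICE_NAME" value="api.endpoints.my-project-id.cloud.goog" />
  <!-- delete this one: -->
  <env-var name="ENDPOINTS_SERVICE_VERSION" value="1" />
</env-variables>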
For me, the problem was that I hadn't deployed the open API.
So running the below fixed the issue:
gcloud endpoints services deploy openapi.json

Component-based logging with logback (or: intercept foreign log messages)

I'm looking for a way to define transitive log message routing. Let's say we have an application called poly with these packages:
com.mycompany.server-common
com.mycompany.communication
com.mycompany.webservice
server-common is used by both of the other two. All three use org.hibernate as well.
Now, I would like to have one logfile for the webservice component with all messages from com.mycompany.webservice, plus those messages from com.mycompany.server-common and org.hibernate that were initiated by the webservice. And then another corresponding file for the communication package.
My application is a war file running in Tomcat, where all components run in one context (it comes in one war file). I have already defined the multiple log files, but they naturally only log what I defined statically; there is no transitive inclusion.
I would be very interested in ideas on how I could achieve the desired behaviour. I have already thought about using the MDC for that, but I'm not sure whether that's a good idea.
Another idea was to separate the contexts, but I think in the current project state this will be hard and it does not offer the flexibility I hope for.
Any hints or discussions are appreciated.
If you set an MDC key when webservice starts serving a request and clear the MDC key at the end of the request, SiftingAppender will do what you are asking. Shout on the logback-user mailing list if you run into difficulties.
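To make that concrete, here is a minimal sketch, assuming a servlet filter is an acceptable place to set the key (the key name "component" and the filter class are made up for illustration):
// Hypothetical filter: tags all log output produced while handling a
// webservice request with an MDC key named "component".
import java.io.IOException;
import javax.servlet.*;
import org.slf4j.MDC;

public class ComponentMdcFilter implements Filter {
    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        MDC.put("component", "webservice");
        try {
            chain.doFilter(req, res);
        } finally {
            MDC.remove("component"); // always clear: container threads are pooled
        }
    }

    public void destroy() {}
}
A matching SiftingAppender in logback.xml then splits files on that key, so Hibernate and server-common messages land in the file of whichever component initiated the request:
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>component</key>
    <defaultValue>other</defaultValue>
  </discriminator>
  <sift>
    <appender name="FILE-${component}" class="ch.qos.logback.core.FileAppender">
      <file>${component}.log</file>
      <encoder>
        <pattern>%d %-5level %logger{35} - %msg%n</pattern>
      </encoder>
    </appender>
  </sift>
</appender>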

What causes duplicate requests to occur using Spring, Tomcat and Hibernate

I'm working on a project in Java using the Spring Framework, Hibernate and Tomcat.
Background:
I have a form page which takes data, validates it, processes it and ultimately persists it using Hibernate. In processing the data I do some special command (model) manipulation prior to persisting with Hibernate.
Problem:
For some reason my onSubmit method is being called twice. The first time through, things are processed properly; the second time through they are not, and incorrect information is persisted.
I've also noticed that on other pages, which simply pull information from the database and display it on screen, double requests are happening too.
Is there something misconfigured? Am I not using Spring properly? Any help on this would be great!
Additional Information:
The app is still being developed, and I'm running into this problem while testing it. I'm using the app as I would expect it to be used (single clicks, valid data, etc.).
If you are testing in IE, make note that in some versions of IE it sometimes submits two requests. What browsers are you testing the app in?
There is also the JavaScript issue: if an onclick handler associated with the submit button calls submit() and does not return false to cancel the event bubble, the form is submitted twice.
Could be as simple as users clicking on a link twice, re-submitting a form while the server is still processing the first request, or hitting refresh on a POST-ed page.
Are you doing anything on the server side to account for duplicate requests such as these from your users?
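One common server-side guard is a synchronizer token: issue a one-time token when rendering the form and consume it on the first submit, so a replayed or duplicated request no longer matches. A minimal sketch (the class, key, and parameter names are made up; assumes the servlet API is on the classpath):
// Hypothetical one-time-token guard against duplicate form submissions.
import java.util.UUID;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class DuplicateSubmitGuard {
    private static final String KEY = "form.token";

    // Call while rendering the form; embed the result as a hidden field.
    public static String issueToken(HttpSession session) {
        String token = UUID.randomUUID().toString();
        session.setAttribute(KEY, token);
        return token;
    }

    // Call at the top of onSubmit; only the first matching submit wins.
    public static boolean consumeToken(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) return false;
        Object expected = session.getAttribute(KEY);
        String actual = request.getParameter("token");
        if (expected != null && expected.equals(actual)) {
            session.removeAttribute(KEY); // consume so a duplicate no longer matches
            return true;
        }
        return false;
    }
}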
This is a very common problem for someone who is starting off and is not yet sure about the application ecosystem.
To deploy a Spring app, we build the war file.
Then we put it inside the 'webapps' folder of Tomcat.
Then we run the Tomcat instance from a terminal (I am presuming a Linux system).
Now, we set up the environment in that terminal.
The problem arises when we set up our environment for the Spring application and there is more than one war file to be deployed.
Then we must ensure that each environment is exclusive to a specific war file.
To achieve this, we can create exclusive env files for every war (e.g. war_1.sh, war_2.sh, ..., war_n.sh) and so on.
Now we can source the particular env file whose corresponding war we have to deploy. This way we can segregate the multiple wars (applications) and their environments.
