How to Authenticate CloudTasksClient in App Engine - Java

I am migrating my existing Java application to Google Cloud App Engine. The application creates threads on a periodic basis to perform certain background tasks. Since App Engine does not support threads, I have to use "Tasks" instead.
I could not find any sample code in which an application (running in App Engine) creates and sends a task to a task handler (also running in App Engine).
The sample code available on the internet uses client code (the task creator) running on a local machine, authenticating via a key JSON path set in an environment variable. In my case I want the task creator and the task handler both to run on App Engine.
My question is: where can I find sample code that programmatically authenticates and creates tasks? Basically, CloudTasksClient needs to be authenticated programmatically.

Here are some documentation links that can help with creating and handling tasks using App Engine.
Please find an example of creating a task with authentication. Additionally, this Stack Overflow answer may help.
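To illustrate, here is a minimal sketch of creating a task from code already running on App Engine. When the client runs on App Engine, CloudTasksClient.create() picks up the runtime's default service account through Application Default Credentials, so no key file is needed. The project, location, queue, and handler URI below are placeholders, not values from the question.

import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;
import com.google.protobuf.ByteString;

public class TaskCreator {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your own project, location, and queue.
        String projectId = "my-project";
        String locationId = "us-central1";
        String queueId = "my-queue";

        // On App Engine, create() authenticates automatically via
        // Application Default Credentials (the app's service account).
        try (CloudTasksClient client = CloudTasksClient.create()) {
            String queuePath = QueueName.of(projectId, locationId, queueId).toString();

            Task task = Task.newBuilder()
                    .setAppEngineHttpRequest(
                            AppEngineHttpRequest.newBuilder()
                                    .setRelativeUri("/tasks/handle") // handler in the same app
                                    .setHttpMethod(HttpMethod.POST)
                                    .setBody(ByteString.copyFromUtf8("payload"))
                                    .build())
                    .build();

            Task created = client.createTask(queuePath, task);
            System.out.println("Created task: " + created.getName());
        }
    }
}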

Related

Google Cloud scheduler Java job containerized with Selenium

I've got some Java code that performs interactions with web pages, using Selenium.
Now I'd like to get this code executed every hour, and I thought it was a great occasion to discover the cloud world.
I've created an account on Google Cloud.
Because my app needs a driver to use Selenium (geckodriver for Firefox), I'll have to create a Docker image with everything it needs inside.
Among the Google Cloud services, there is "Cloud Scheduler", which allows me to run code on a schedule.
But here are my questions:
What kind of target should I configure (HTTP, Pub/Sub, HTTP App Engine)?
Because I'm not using Google Cloud Functions, my container will always be up, which doesn't seem like a great idea for pricing reasons. I would like my container to be up only for the duration of the execution.
Also, I was thinking of using the Quarkus framework to wrap my application, since I've heard it was made for the cloud and is very quick to start. Is that the best option for me?
I'd be very glad if someone could help me see this a little better. I'm not a total beginner: I've worked as a Java/JavaScript developer for 5 years and have dockerized some applications, but the cloud is a big topic and it's not easy to know where to start.
So you:
- are using Docker images
- run your workload occasionally
- aren't willing to use Cloud Functions
==> Cloud Run is your best bet. Here is the Google Cloud Run quickstart: https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy
Keep in mind that your containerised application needs to be listening for HTTP requests, so take a look at the Cloud Run container runtime contract.
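For illustration, a minimal container entry point satisfying that contract could look like the sketch below, using only the JDK's built-in HTTP server. Cloud Run injects the listening port through the PORT environment variable; the /run path and the response body are placeholders.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class App {
    public static void main(String[] args) throws Exception {
        // Cloud Run tells the container which port to listen on via PORT.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/run", exchange -> {
            // Placeholder: run the Selenium job here, then respond.
            byte[] body = "Job executed".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}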
Finally, you can indeed trigger Cloud Run from Cloud Scheduler; here is detailed documentation on how to do it: https://cloud.google.com/run/docs/triggering/using-scheduler
As #MBHAPhoenix says, Cloud Run is your best option. You can then trigger the job from Cloud Scheduler. We have this exact scenario currently running for one of our projects but our container is Python. We wrote an article about it here
You should note that to trigger your Cloud Run job from Cloud Scheduler, you'll have to 'secure it'. This means you won't be able to just type the URL into a web browser. A service account will be responsible for running the Cloud Run job, and you'll then need to grant your Cloud Scheduler service access to this service account so it can invoke the Cloud Run job. I've been meaning to put up a post about the exact steps for doing this (will try to get it done this weekend).
In terms of cost, we have this snippet from our article
...Cloud Run only runs when it receives an HTTP request. It plays dead and comes alive to execute your code when an HTTP request comes in. When it is done executing the request, it goes 'dead' again till the next request comes in. This means you're not paying for time spent idling i.e. when it is not doing anything.....

Manage running Java apps remotely

We have several standalone Java applications (in the form of JAR files) running on multiple servers. These applications mainly read and stream data between systems. We mainly use Java 8 in our development. I was put in charge recently; my main function is to manage and maintain these apps.
Currently, I check these apps manually by accessing the servers, checking whether each app is running, and sometimes running database queries to see whether the app has started pulling data. My problem is that in many cases some of these apps fail and shut down due to data issues or edge cases without anyone noticing. We need some monitoring and application recovery in place.
We don't have Docker infrastructure in place. We plan to adopt Docker in the future, but for now it is not an option.
After some research, here are the options I have thought of or tried:
Have the apps create a socket client that sends a heartbeat to a monitoring app (which would need to be developed). I am keeping this as my last option.
I tried to use Eclipse Vert.x to wrap the apps into verticles and then create a web view that shows their status and other info. After several tries, the apps failed to parse the data correctly (which might be due to my lack of understanding of the Vert.x library).
Use a third-party solution that does this, but I have no idea what solutions are out there. I am open to suggestions.
My requirements are:
Proper monitoring of the apps running and their status.
In case of failure, the app should start again while notifying the admin/developer.
I am willing to develop a solution or implement a third-party one. I need your guidance on this.
Thank you.
You could use spring-boot-actuator (see health). It comes with a built-in endpoint that performs some health checks (depending on your Spring Boot project), but you can create your own as well.
Then, by making an HTTP request to http://{host}:{port}/{context}/actuator/health (replace with your values), you can see the status of those health checks and also use the response status code to monitor your application.
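As a sketch of what a custom check could look like for these streaming apps: the idea of tracking the last successful pull and the 15-minute staleness threshold are assumptions for illustration, not part of Actuator itself.

import java.time.Duration;
import java.time.Instant;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Hypothetical check: report DOWN if the app has not pulled data recently.
@Component
public class DataPullHealthIndicator implements HealthIndicator {

    private volatile Instant lastSuccessfulPull = Instant.now();

    // Call this from the streaming loop after each successful pull.
    public void recordPull() {
        lastSuccessfulPull = Instant.now();
    }

    @Override
    public Health health() {
        Duration age = Duration.between(lastSuccessfulPull, Instant.now());
        if (age.toMinutes() > 15) {
            return Health.down()
                    .withDetail("lastSuccessfulPull", lastSuccessfulPull)
                    .build();
        }
        return Health.up().build();
    }
}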
Have you heard of Java Service Wrappers? They don't provide full management functionality, but they will monitor for JVM crashes and out-of-memory conditions and restart your application. Alerting should also be possible.
There is a small comparison table here: https://yajsw.sourceforge.io/#mozTocId284533
So some basic monitoring and management is included already. If you need more, I suggest using JMX (https://www.oracle.com/java/technologies/javase/javamanagement.html) or Prometheus (https://prometheus.io/ and https://github.com/prometheus/client_java)
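For instance, a minimal sketch with the Prometheus Java client (simpleclient): the metric name, the port, and the idea of counting streamed records are illustrative assumptions.

import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;

public class Monitoring {
    // Hypothetical metric: one increment per record streamed between systems.
    static final Counter recordsProcessed = Counter.build()
            .name("records_processed_total")
            .help("Number of records streamed between systems.")
            .register();

    public static void main(String[] args) throws Exception {
        // Expose metrics on :9400/metrics for a Prometheus server to scrape.
        HTTPServer metricsServer = new HTTPServer(9400);
        // ... main application loop would call recordsProcessed.inc() ...
    }
}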

Admin Process for 12-factor in Java

The 12-Factor blog suggests an app should 'Run admin/management tasks as one-off processes'.
What does that mean in the context of a Java/Spring Boot application? Can I get an example?
https://12factor.net/admin-processes
The site does not suggest this. It says that developers may want to do this, and that if they do so, they should apply the same standards as other code:
One-off admin processes should be run in an identical environment as the regular long-running processes of the app. They run against a release, using the same codebase and config as any process run against that release. Admin code must ship with application code to avoid synchronization issues.
As an example from my application: Users may send invitations, to which the recipient must respond within 7 days or the invitation expires. This is implemented by having a timestamp on the invitation and executing a database query equivalent to DELETE FROM Invitations WHERE expiration < NOW().
Now, we could have someone log in to the database and execute this query periodically. Instead, however, this "cleanup" operation is built into the application at a URL like /internal/admin/cleanInvitations, and that endpoint is executed by an external cron job. The scheduling is outside the main application, but all database configuration, connectivity, and logic are contained within it alongside our main business logic.
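In Spring Boot terms, that endpoint could be sketched roughly like this. The controller, path, and SQL mirror the description above; the JdbcTemplate wiring is an assumption for illustration, not the actual code behind my application.

import java.sql.Timestamp;
import java.time.Instant;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical admin endpoint invoked by an external cron job.
// It ships with the app and reuses the app's own datasource config.
@RestController
public class InvitationCleanupController {

    private final JdbcTemplate jdbc;

    public InvitationCleanupController(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @PostMapping("/internal/admin/cleanInvitations")
    public String cleanInvitations() {
        int removed = jdbc.update(
                "DELETE FROM Invitations WHERE expiration < ?",
                Timestamp.from(Instant.now()));
        return "Removed " + removed + " expired invitations";
    }
}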

How to Run Cron Jobs in Kotlin Ktor?

Is there a way to run Cron Jobs with Ktor? My end objective is to host a Cron Job written with Kotlin for the Coinverse app's backend service to populate data.
I'm currently hosting multiple Java .jar apps written in Kotlin on AppEngine. I'm looking to refactor these apps into Ktor apps on AppEngine with a Cron Job for scheduled tasks, as the .jar apps have more issues with dependencies.
I'm looking for Ktor's equivalent to Cloud Functions' built-in implementation for Cron Jobs with JavaScript.
functions.pubsub.schedule
Back-up option: If Ktor does not have this feature and I want to keep the code in Kotlin, Google has an alpha, Using Kotlin with Google Cloud Functions. It appears Kotlin + Cloud Functions' built-in implementation could be used with this approach.
Sergey Mashkov from the JetBrains team suggests in the kotlinlang Slack group launching a Kotlin coroutine in the Application scope with an infinite loop and delay.
Then, the Ktor app can be deployed to AppEngine.
fun Application.main() {
    // Application is a CoroutineScope, so the loop is tied to the app lifecycle.
    launch {
        while (true) {
            delay(600000) // wait 10 minutes between runs
            // Populate data here.
        }
    }
}
In my experience, this will not work; the app will stop after 20 minutes or so.
The only solution I've found is to use a regular cron.yaml together with a Ktor app, and it works without complaint (the Ktor app implements a GET endpoint, which is called according to the cron file).
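For reference, such an App Engine cron.yaml might look like the sketch below; the URL and schedule are placeholders, and the Ktor app is assumed to handle GET requests at that path.

cron:
- description: "populate data"
  url: /tasks/populate
  schedule: every 10 minutes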

Deploy Apache Spark application from another application in Java, best practice

I am a new user of Spark. I have a web service that allows a user to request that the server perform a complex data analysis, reading from a database and pushing the results back to it. I have moved those analyses into various Spark applications. Currently I use spark-submit to deploy these applications.
However, I am curious: when my web server (written in Java) receives a user request, what is considered the "best practice" way to initiate the corresponding Spark application? Spark's documentation seems to suggest using spark-submit, but I would rather not shell out to a terminal to perform this action. I saw an alternative, Spark JobServer, which provides a RESTful interface to do exactly this, but my Spark applications are written in either Java or R, which seems not to interface well with Spark JobServer.
Is there another best practice to kick off a Spark application from a web server (in Java) and wait for a status result, whether the job succeeded or failed?
Any ideas of what other people are doing to accomplish this would be very helpful! Thanks!
I've had a similar requirement. Here's what I did:
To submit apps, I use the hidden Spark REST submission API: http://arturmkrtchyan.com/apache-spark-hidden-rest-api
Using this same API you can query the status of a driver, or kill your job later.
There's also another hidden UI JSON API: http://[master-node]:[master-ui-port]/json/ which exposes all the information available on the master UI in JSON format.
Using the submission API I submit a driver, and using the master UI API I wait until my driver and app state are RUNNING.
The web server can also act as the Spark driver. So it would have a SparkContext instance and contain the code for working with RDDs.
The advantage of this is that the Spark executors are long-lived. You save time by not having to start/stop them all the time. You can cache RDDs between operations.
A disadvantage is that since the executors are running all the time, they take up memory that other processes in the cluster could otherwise use. Another is that you cannot have more than one instance of the web server, since you cannot have more than one SparkContext for the same Spark application.
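A minimal sketch of that arrangement: the master URL and the toy computation are placeholders, and in a real web server this service would be created once and shared across request handlers.

import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkService {
    private final JavaSparkContext sc;

    public SparkService() {
        // One long-lived context: executors stay warm between web requests.
        SparkConf conf = new SparkConf()
                .setAppName("web-analytics")
                .setMaster("spark://spark-master:7077"); // placeholder master URL
        sc = new JavaSparkContext(conf);
    }

    // Called from a request handler; a toy stand-in for the real analysis.
    public long countPositives(List<Integer> data) {
        return sc.parallelize(data).filter(x -> x > 0).count();
    }
}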
We are using Spark Job-Server and it is working fine with Java as well: just build a JAR of the Java code and wrap it with Scala to work with Spark Job-Server.
