How to disable live reloading in Quarkus? - java

I have a Quarkus application which I run in dev mode:
./mvnw compile quarkus:dev
My issue is that Quarkus reloads classes even when I don't want it to (e.g. when a dependent project is updated), which takes quite a lot of time.
Question: Is there a way to disable live reloading in dev mode?
I went through the Quarkus documentation, but couldn't find if there is such an option.

You can disable it since Quarkus 2.0.0.Alpha3 by toggling the corresponding option in the dev-mode console (Quarkus tells you which key you need to press to do that; it's l).
See https://github.com/quarkusio/quarkus/pull/17035 for the pull request that introduced this feature.
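For reference, a rough sketch of the interaction (the exact console wording varies between Quarkus versions; the key bindings are listed under the help key):
./mvnw compile quarkus:dev
# once dev mode is running, the terminal is interactive:
#   press 'h' to list the available key bindings
#   press 'l' to toggle live reload on/off (Quarkus prints the new state)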

Related

Configuring open telemetry for tracing service to service calls ONLY

I am experimenting with different instrumentation libraries; spring-cloud-sleuth and OpenTelemetry (OTel) are the ones I liked the most. Spring-cloud-sleuth is simple, but it will not work for a non-Spring (JAX-RS) project, so I turned my attention to OpenTelemetry.
I am able to export metrics using OTel, but there is just too much data that I do not need. Spring Sleuth gave the perfect solution: it just traces the calls across microservices and links all the spans with one traceId.
My question is: how do I configure OTel to get output similar to Spring Sleuth? I tried various configurations and a few worked, but the amount of information is still huge.
My configuration:
-Dotel.traces.exporter=zipkin
-Dotel.instrumentation.[jdbc].enabled=false
-Dotel.instrumentation.[methods].enabled=false
-Dotel.instrumentation.[jdbc-datasource].enabled=false
However, this still gives me method calls and other data. Also, one big pain is that I am not able to shut down the metrics export.
I get an error like the one below:
ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Failed to connect to localhost/0:0:0:0:0:0:0:1:4317
Any help will be appreciated.
There are two ways to configure the OpenTelemetry (OTel) agent:
Environment variable
Java system property
To disable the metrics exporter, you can either set
export OTEL_METRICS_EXPORTER=none
or
java -Dotel.metrics.exporter=none app.jar
Reference
https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md
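Putting the pieces together, a minimal sketch of a launch that keeps Zipkin traces but disables the metrics exporter (jar names are placeholders; note that the square brackets in the suppression docs are placeholders for the instrumentation name, so double-check the exact property names against your agent version):
java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.traces.exporter=zipkin \
     -Dotel.metrics.exporter=none \
     -Dotel.instrumentation.jdbc.enabled=false \
     -jar app.jar
# or the same via environment variables
export OTEL_TRACES_EXPORTER=zipkin
export OTEL_METRICS_EXPORTER=none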

How to solve AttributeNotSupportedException in Hybris

Every time we add a new attribute to items.xml, we have to execute a Hybris update, otherwise we get an error like: JaloItemNotFoundException: no attribute Cart.newAttribute
But sometimes, after executing an update, instead of getting JaloItemNotFoundException, we get something like:
de.hybris.platform.servicelayer.exceptions.AttributeNotSupportedException: cannot find attribute newAttribute
In this second case, it always works if we restart the server after the update.
Is there any other way to fix this besides restarting the server after the update?
I worked for a company years ago that added this restart as a "deploy step" after the update. I am trying to avoid that here.
I tried executing several updates and cleaning the type cache, but no luck.
Platform Update with "Update Running System" is usually enough. If you have localization, ImpEx, or other changes, you might need to include the other options or extensions.
If you have a clustered environment, make sure all nodes have been updated / refreshed as well.
Make sure that your build and deploy process is something like:
Build
Deploy
Restart the server. Stop/start it manually (or by script), or let Hybris restart itself when it detects changes from the deployment.
Run Platform Update
You can try updating the platform directly after the build, from the command line (i.e. "ant updatesystem"), before starting the server.
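A rough sketch of that ordering, assuming a standard ant-based Hybris build (script names and paths vary per installation):
# from the hybris/bin/platform directory
. ./setantenv.sh
ant clean all            # build
ant updatesystem         # update the type system before the server is started
./hybrisserver.sh start  # Spring beans are then created against the new type system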
The restart after deploy is a pretty common step (in case the system update is performed with the server started).
I believe one of the reasons the restart is needed is that the Spring context has to be reinitialized, since some of the beans need the new type system information.
For example, let's say you need to create a new type and an interceptor for that newly created type. When deploying this change you do the following:
Change the binaries and start the server
Perform an update system in order for the database to get the latest columns and so on
Now if you check whether the interceptor is working, you will see that it does not, because when its Spring bean was instantiated (during server startup) the type it is supposed to handle was not present in the database.
Because of that, after a restart the interceptor works as expected.
PS: The interceptor problem described above might have been fixed in the latest Hybris versions.
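For illustration, a minimal sketch of the kind of bean involved, assuming a recent Hybris/SAP Commerce version where ValidateInterceptor is generic; NewThingModel and its getCode() accessor are hypothetical stand-ins for the newly added type from items.xml:
import de.hybris.platform.servicelayer.interceptor.InterceptorContext;
import de.hybris.platform.servicelayer.interceptor.InterceptorException;
import de.hybris.platform.servicelayer.interceptor.ValidateInterceptor;

// Registered as a Spring bean, so it is created at server startup, i.e. before
// "Update Running System" has created the NewThing type in the database, which
// is why the restart after the update makes it start working.
public class NewThingValidateInterceptor implements ValidateInterceptor<NewThingModel>
{
    @Override
    public void onValidate(final NewThingModel model, final InterceptorContext ctx) throws InterceptorException
    {
        // hypothetical validation of the newly added attribute
        if (model.getCode() == null || model.getCode().trim().isEmpty())
        {
            throw new InterceptorException("code must not be blank");
        }
    }
}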

com.google.api.config.ServiceConfigSupplier - Failed to fetch default config version for service (only on localhost)

I'm using Cloud Endpoints Frameworks (2.0.1) for Java as part of my final year project and have been relatively successful with it so far.
I don't have any problems when deploying to my appspot.com domain, however, I am running into some problems when deploying locally.
(Any references to my-project-id in the following code blocks are placeholders for my actual Google Cloud project id.)
I have a valid OpenAPI descriptor (openapi.json) for an annotated @Api class, which I am deploying to Cloud Endpoints using "gcloud service-management deploy openapi.json".
The command returns successfully:
Service Configuration [2017-02-23r0] uploaded for service [api.endpoints.<my-project-id>.cloud.goog]
I then map the returned config_id to the correct endpoints_api_service in my app.yaml
endpoints_api_service:
name: api.endpoints.<my-project-id>.cloud.goog
config_id: 2017-02-23r0
This service is listed by the gcloud cli tool using "gcloud service-management list"
NAME TITLE
storage-component.googleapis.com Google Cloud Storage
api.endpoints.<my-project-id>.cloud.goog api.endpoints.<my-project-id>.cloud.goog
etc...
and "gcloud service-management configs list --service api.endpoints.my-project-id.cloud.goog"
CONFIG_ID SERVICE_NAME
2017-02-23r0 api.endpoints.<my-project-id>.cloud.goog
... other version configs
and is accessible on my appspot.com domain (I can call the endpoint and receive the correct response)
I am trying to deploy my project on localhost using the Maven App Engine plugin for Java (mvn appengine:devserver), but on Jetty startup I'm hit with the following exception:
WARNING: Failed startup of context com.google.appengine.tools.development.DevAppEngineWebAppContext...
com.google.api.config.ServiceConfigException: Failed to fetch default config version for service 'api.endpoints.<my-project-id>.cloud.goog'. No versions exist!
at com.google.api.config.ServiceConfigSupplier.fetchLatestServiceVersion(ServiceConfigSupplier.java:155)
....
The deployment then gets stuck in an endless cycle of trying to start Jetty, being hit with that error message, restarting, and so on. Any attempt to access localhost:8080 results in a "503: Service not found" error.
I assumed that the local deployment of my app would be able to access the service config that was deployed using "gcloud service-management deploy", in the same way that the appspot.com deployment can, but is this not the case?
Looking at the source for ServiceConfigSupplier.fetchLatestServiceVersion(), I gather that serviceManagement.services().configs().list(my-service-name).execute().getServiceConfigs() is returning an empty list, but why does this only occur locally?
Extra Information
my ENDPOINTS_SERVICE_NAME environment variable matches 'api.endpoints.my-project-id.cloud.goog'
I noticed that there was an update (1.0.2) to com.google.api.config a few days ago, and it depends on an older version of com.google.api.services.servicemanagement (v1-rev14-1.22.0, while the newest version is v1-rev340-1.22.0).
I doubt this is the problem, but I thought I would mention it, as it contains classes relevant to the exception (ServiceManagement is used by ServiceConfigSupplier, which is throwing the exception). Perhaps there is an inconsistency in where they look for the service configs?
I'm quite stumped, to be honest; it's a bit over my head. I would hate to remove Endpoints, as I'm starting to like it, but we also can't really lose our dev server either. I hope someone can shed a little light on this issue.
It's not a fix but I was able to work around the problem by using the advice in https://stackoverflow.com/a/41493548/1410035.
Namely, commenting out the ServiceManagementConfigFilter:
b) Comment out the ServiceManagementConfigFilter from web.xml, i.e.,
<!--
<filter>
<filter-name>endpoints-api-configuration</filter-name>
<filter-class>com.google.api.control.ServiceManagementConfigFilter</filter-class>
</filter>
-->
<!--
<filter-mapping>
<filter-name>endpoints-api-configuration</filter-name>
<servlet-name>EndpointsServlet</servlet-name>
</filter-mapping>
-->
Note that you have to comment out the filter and the filter-mapping and they aren't right next to each other in the file.
I found that I didn't need to remove the scaling block as mentioned in point 'a' in the linked answer.
This may be related to a permission issue if you have pulled all recent updates (git pull). Also, check that your Cloud SDK is up to date by running: gcloud components update.
Assuming you followed the instructions listed at https://cloud.google.com/endpoints/docs/frameworks/java/quickstart-frameworks-java: to get around this issue you can create a service account with the necessary permissions, or use the command gcloud auth application-default login.
You can set up a service account using the Cloud SDK (gcloud); see https://cloud.google.com/sdk/docs/authorizing
Please let me know if you have any more questions.
As for the command gcloud auth application-default login, according to its help description:
Obtains user access credentials via a web flow and puts them in the well-known location for Application Default Credentials to use them as a proxy for a service account.
When you use this command, it obtains credentials for your Gmail account (something@gmail.com) and stores them in a location known to contain application credentials.
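In practice, either of the following on the machine running the dev server should be enough (the service account name and key path are placeholders; see the authorizing doc linked above):
# option 1: use your own user credentials as Application Default Credentials
gcloud auth application-default login
# option 2: use a dedicated service account key
gcloud iam service-accounts keys create key.json \
    --iam-account my-service-account@<my-project-id>.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=$PWD/key.json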
It worked with "gradle appengineRun", but for an IntelliJ IDEA project I had to replace all the ${endpoints.project.id} placeholders in web.xml and appengine-web.xml in order to run/debug on localhost from IntelliJ (project imported from Gradle sources, Google Cloud Tools plugin installed, and a run/debug configuration set up via Tools > Google Cloud Tools > Run on a local App Engine Standard dev server).
My error was:
Failed to fetch default config version for service 'echo-api.endpoints.${endpoints.project.id}.cloud.goog'. No versions exist!
The cloud.google.com docs only have a Maven build example; a Gradle build is at github.com.
Another thing to bear in mind is that your service account needs the right permissions. Your service account looks something like the following:
[project ID]@appspot.gserviceaccount.com
By default it has the Project Editor role; at a minimum you need to grant it the Service Controller role.
If it is gone, you can follow these instructions to add it back.
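If you need to grant that role from the command line, a sketch (the project id is a placeholder; the role is Service Management's "Service Controller"):
gcloud projects add-iam-policy-binding <my-project-id> \
    --member serviceAccount:<my-project-id>@appspot.gserviceaccount.com \
    --role roles/servicemanagement.serviceController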
This can happen if you've changed the Google Cloud project you're trying to authenticate to (if someone else has changed the project, this can happen when you pull changes from source control). In this case, the service account credentials that you were using for the old project will no longer be valid, and you can authenticate to the new project by running:
gcloud auth application-default login
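If the project id itself changed, re-pointing gcloud at the new project first avoids authenticating against the old one (the project id is a placeholder):
gcloud config set project <new-project-id>
gcloud auth application-default login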
I was having this fairly similar error,
endpoints.repackaged.com.google.api.config.ServiceConfigException: Failed to fetch service config (status code 404): The service config name and config id could not be found. Double check that filter initialization parameters endpoints.projectId and endpoints.serviceName are correctly set.
and the issue for me was having the ENDPOINTS_SERVICE_VERSION environment variable specified in my appengine-web.xml. So basically, deleting that line was enough in my case (since Endpoints uses the most recent service version if none is provided):
<env-var name="ENDPOINTS_SERVICE_VERSION" value="1" />
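For reference, a sketch of what the relevant appengine-web.xml section might look like afterwards (the service name is a placeholder):
<env-variables>
  <!-- ENDPOINTS_SERVICE_VERSION removed so the latest deployed config is used -->
  <env-var name="ENDPOINTS_SERVICE_NAME" value="api.endpoints.<my-project-id>.cloud.goog" />
</env-variables>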
For me, the problem was that I hadn't deployed the OpenAPI definition.
So running the command below fixed the issue:
gcloud endpoints services deploy openapi.json
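If you want to confirm the config actually landed before starting the dev server, something like this should list it (the service name is a placeholder):
gcloud endpoints configs list --service=api.endpoints.<my-project-id>.cloud.goog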

Eclipse: no shown variables in debugging Java EE

Platform I am using:
Fedora 20;
mariadb-5.5.34-2.fc20.x86_64;
Eclipse Kepler Service Release from www.eclipse.org
I am implementing the example (see here) and I am trying to get the login interface to work.
I am configuring TomEE to use JAAS authentication.
Since I am having some trouble, I would like to solve it with the help of Eclipse's debug mode. To do that, I:
set a breakpoint at line 79 of LoginController.java;
started TomEE in debug mode;
ran login.xhtml in debug mode too.
My problem is that I see nothing in debug mode: no variables, etc.
How is this possible? I have been using debug mode for a long time, but this is my first time doing web development.
Project archive
The web page's bean has not been instantiated, for an unknown reason. I opened a new question to fix it:
Bean not instantiated

Spring Tool Suite: Changes to static resources triggers redeploy

I recently ran into a problem with STS. It redeploys my application on all kinds of changes (JSP, CSS, JS). It was only triggered by Java changes before I upgraded to 3.4.0.
Here is what I tried:
Enable/Disable JMX-Reloading
Tried both "Automatically publish when resources change" and "Automatically publish after a build event"
I turned ON/OFF "Auto reloading" for the web module.
But I can only get it to either not publish at all or publish on everything.
This slows down my development process.
How do I get my Spring application to only redeploy on Java changes?
Edit:
If I turn off "Auto reloading", my JSPs do not even refresh on change. This is very frustrating.
I solved it by simply removing the server and then adding a new one:
Right click in the Servers window.
Add -> server
VMware -> VMware vFabric tc Server v2.7-2.9
Existing instance -> base-instance
Finish
I have no clue what the problem was with the first one. The settings were not changed and I couldn't find anything suspect.
