Configuring OpenTelemetry for tracing service-to-service calls ONLY - Java

I am experimenting with different instrumentation libraries; Spring Cloud Sleuth and OpenTelemetry (OTel) are the two I liked the most. Spring Cloud Sleuth is simple, but it will not work for a non-Spring (JAX-RS) project, so I turned my attention to OpenTelemetry.
I am able to export data using OTel, but there is just too much of it that I do not need. Spring Cloud Sleuth gave the perfect solution: it traces only the calls across microservices and links all the spans with one traceId.
My question is: how do I configure OTel to get output similar to Spring Cloud Sleuth? I tried various configurations and a few worked, but the amount of information is still huge.
My configuration:
-Dotel.traces.exporter=zipkin -Dotel.instrumentation.jdbc.enabled=false -Dotel.instrumentation.methods.enabled=false -Dotel.instrumentation.jdbc-datasource.enabled=false
However, this still gives me method calls and other data. Also, one big pain point is that I am not able to SHUT DOWN the metrics data.
I get an error like the one below:
ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Failed to connect to localhost/0:0:0:0:0:0:0:1:4317
Any help will be appreciated.

There are two ways to configure the OpenTelemetry agent (OTel):
Environment variables
Java system properties
You can either set
export OTEL_METRICS_EXPORTER=none
or run with
java -Dotel.metrics.exporter=none -jar app.jar
Reference
https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md
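To get Sleuth-like output (service-to-service spans only, nothing else), one approach is to turn off the metrics and logs exporters and disable all instrumentation by default, re-enabling just the HTTP-level pieces. A sketch, not a drop-in config: the instrumentation names below (servlet, jaxrs, http-url-connection) are assumptions that depend on which libraries the app actually uses, so check the agent's suppression documentation for the names that apply to your stack.
java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.traces.exporter=zipkin \
     -Dotel.metrics.exporter=none \
     -Dotel.logs.exporter=none \
     -Dotel.instrumentation.common.default-enabled=false \
     -Dotel.instrumentation.servlet.enabled=true \
     -Dotel.instrumentation.jaxrs.enabled=true \
     -Dotel.instrumentation.http-url-connection.enabled=true \
     -jar app.jar
With the metrics exporter set to none, the agent also stops trying to reach localhost:4317, which is what produced the OkHttpGrpcExporter error above.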

Related

Azure release pipeline not picking up variable from nestedStack.yml file

I'm configuring our Release pipeline so that Integration Tests are automatically run after pull requests are merged to master and deployed to Dev environment.
I'm currently getting a connection error, specifically java.net.UnknownHostException:, and it looks like one of my output variables from my nestedStack.yml code is not being imported/read properly:
my-repo/cloud-formation/nestedStack.yml
You can see there is a property there "ApiGatewayInvokeUrl" which is marked as an Output. It is used on Azure DevOps for the "Integration Testing" task in my "Deploy to Dev" stage. It is written as $(ApiGatewayInvokeUrl) as that's how variables on Azure DevOps are used.
This Deploy to Dev will "succeed"; however, when I inspect the Integration Tests further, I see none actually ran and there was a connection error immediately. I can see it is outputting the variable as $(ApiGatewayInvokeUrl), so it looks like it just reads it as a string and never substitutes the correct URL value:
I was going off the way another team set up their Integration Tests on a similar pipeline, but I might have missed something. Do I need to define $(ApiGatewayInvokeUrl) somewhere in my codebase, or somewhere on Azure? Or am I missing something else? I checked the other team's code and didn't see them define it anywhere, which is why I am ultra confused.
Update: I went into AWS API Gateway, copied the invoke URL and hard-coded it into the Azure DevOps Maven (integration testing) goal path, and now it's connecting. So it's 100% just not importing that variable for some reason.
You need to define/create the variable to use it (unless it's an automatic variable, and this one is definitely not an automatic variable). That variable isn't getting substituted because it doesn't exist (AFAIK).
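If the value comes from a CloudFormation output, one way to define it is to fetch the output in a script step and publish it as a pipeline variable with a logging command. A rough sketch for a Bash/AWS CLI task; my-stack is a placeholder for whatever stack your nestedStack.yml actually deploys:
INVOKE_URL=$(aws cloudformation describe-stacks \
  --stack-name my-stack \
  --query "Stacks[0].Outputs[?OutputKey=='ApiGatewayInvokeUrl'].OutputValue" \
  --output text)
echo "##vso[task.setvariable variable=ApiGatewayInvokeUrl]$INVOKE_URL"
Later tasks in the same job can then reference $(ApiGatewayInvokeUrl) and get the real URL instead of the literal string.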

Spring Boot 2: how to change startup behaviour

Please do not judge this question :) I want to implement a web installer for my Spring Boot application, and the interesting part is that my application plays 2 roles: Installer and Backend (Application). When I first run my app I need to tell Spring not to initialize particular beans (Hibernate, because startup must not fail while the database does not exist yet; ActiveMQ; and other beans that will be added during the installation process) and to show some HTML pages with an installation guide. I also need to prevent access to endpoints where logic touching the database occurs. When installation is finished I will create a new application.properties or some other file with settings, and I will tell Spring to initialize all the required beans: Hibernate, ActiveMQ and the others. Maybe I will restart the application so that the new behaviour based on the installation takes effect. On subsequent starts my application will not show the installation guide.
To simplify the question: I need to change the startup behaviour of a Spring Boot application. For fun, an analogy with a human: I need to make a human live with no organs, and this human will live very well, and if I want I can add organs to the human and he will keep living very well :)
You can use the @Profile annotation. Check this link: https://www.mkyong.com/spring/spring-profiles-example/
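A minimal sketch of the idea, assuming a custom profile name "installer" invented for this answer: guard the database beans so they are only created when the installer profile is not active.
import javax.sql.DataSource;

import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Loaded only when the (hypothetical) "installer" profile is NOT active,
// so a first run cannot fail on a missing database.
@Configuration
@Profile("!installer")
public class PersistenceConfig {

    @Bean
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:postgresql://localhost:5432/app") // placeholder URL
                .build();
    }
}
Start the first run with --spring.profiles.active=installer to get only the installation screens; once the installer has written the real application.properties, restart without that profile and the guarded beans come up.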

AWS Elastic Beanstalk "Impaired services on all instances."

I have a Spring service that I'm trying to load into AWS Elastic Beanstalk. When I create the environment and upload my .war file, it just stays stuck on degraded. When I look through the logs for errors I cannot see any. Also, when I try to connect to my URL, for example http://something.us-east-1.elasticbeanstalk.com/, I get a 502 error. I've already looked at the documentation provided by Amazon, which states that the red degraded message means all or most of the requests to your page are failing. Any idea how I can find the issue? See the screenshot below for the Enhanced Health Overview.
So, it turned out that I was getting an error in my logs but I was not able to see it. I had to ignore all of the eb-something log files and look at web-1.log instead. This file may be named differently depending on your instance and your environment, but this is where I could actually see my error.
People trying to find their actual application log can look for this section in their EB log file:
/var/log/web.stdout.log
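For example, assuming the EB CLI is configured for the environment, one way to get at that file:
eb ssh                                 # open a shell on the instance
tail -n 100 /var/log/web.stdout.log    # inspect the application's stdout log

# or, without SSH, pull the recent log bundle:
eb logs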
I had the same problem, with "Impaired services on all instances."
As suggested above, I SSH'd into the Elastic Beanstalk instance and looked at /var/log/web.stdout.log.
As I found there, my problem was that I was following a Django tutorial and had created a Django config file with the WSGIPath pointing to the project name from the tutorial, while my actual project had a different name.
I corrected the mistake, wiped the Elastic Beanstalk instance and set up the environment again from scratch.
No problems at all this time, everything turned green immediately.
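For reference, the file in question is usually something like .ebextensions/django.config. A sketch with a placeholder project name mysite; the exact WSGIPath format differs between the Amazon Linux 1 and Amazon Linux 2 platforms, so match it to yours:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: mysite.wsgi:application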

Elastic Beanstalk host specific application configuration

I have a Java web application that I'm trying to refactor to work with the Elastic Beanstalk way of doing things. The application will be load balanced across (for the moment) 2 hosts without taking any advantage of auto-scaling. The issue is that there are slight configuration differences between the nodes; in particular, authenticating to certain web services is done with different credentials to effectively double throughput, as there are per-account throttling restrictions.
Currently my application treats configuration separately from the archive, so it's relatively simple on fixed hosts, where the configuration lives at a static file path and deploying the war files is all that is required.
Going down the Elastic Beanstalk path, I think I'll have to include all the configuration options inside the deployable artifact and somehow get the application to load the relevant host-specific configuration. The problem I have is deciding which configuration to load inside the application. I could use a physical aspect of the host, i.e. an IP address or instance ID, that would effectively select the relevant config:
/config-<InstanceID-1>.properties
/config-<InstanceID-2>.properties
This approach is totally flawed, given that if I create an entirely new environment in Beanstalk, it would require me to update all the configuration files in the project to reflect the new instance IDs.
Has anyone come up with a good way of doing this in Beanstalk?
If you have to have two different types of nodes, then you should consider an SOA (service-oriented) approach for your application.
Create two environments, environment-a and environment-b. Either set all properties for the environments through the AWS web console, or reuse your existing configuration files and just set the specific configuration file name for each environment.
#environment-a
PARAM1 = config-environment-a.properties
#environment-b
PARAM1 = config-environment-b.properties
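On the application side, the code can read PARAM1 and load the matching file. A minimal sketch, assuming the per-environment .properties files are bundled at the root of the classpath; on the Tomcat platform EB passes environment properties to the JVM as system properties, while on other platforms they arrive as environment variables, so the sketch checks both:
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class EnvironmentConfig {

    public static Properties load() throws IOException {
        // PARAM1 holds the per-environment config file name set above.
        String fileName = System.getProperty("PARAM1", System.getenv("PARAM1"));
        if (fileName == null) {
            throw new IllegalStateException("PARAM1 is not set for this environment");
        }
        Properties props = new Properties();
        try (InputStream in = EnvironmentConfig.class.getResourceAsStream("/" + fileName)) {
            if (in == null) {
                throw new IOException("config file not found on classpath: " + fileName);
            }
            props.load(in);
        }
        return props;
    }
}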
You share the same code base and push to either environment with the -e modifier.
#push to environment-a
$ git aws.push -e environment-a
#push to environment-b
$ git aws.push -e environment-b
You can also create a git alias to push to both environments at the same time :-)
Now, the major benefit of SOA approach is that you can scale and manage those environments separately. It is simple and elegant.
If you want something more complex and less elegant, use a simple token-distribution service. On every environment initialization, send two messages to Amazon SQS, each containing a configuration name. Then pull those messages from SQS; each instance will get exactly one from the queue. Whichever configuration name the message contains, configure your node with that configuration (see the sketch below). :-)
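A rough sketch of the claim side using the AWS SDK for Java v2; the queue URL is a placeholder, and real code would retry when the receive comes back empty:
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public final class ConfigToken {

    // Placeholder queue URL; each message body is a configuration name
    // such as "config-environment-a.properties".
    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/config-tokens";

    public static String claimConfigName() {
        try (SqsClient sqs = SqsClient.create()) {
            Message message = sqs.receiveMessage(ReceiveMessageRequest.builder()
                            .queueUrl(QUEUE_URL)
                            .maxNumberOfMessages(1)
                            .waitTimeSeconds(20) // long poll so two nodes don't race
                            .build())
                    .messages()
                    .get(0); // sketch only: a real node would retry if the list is empty
            // Delete the message so no other instance claims the same config.
            sqs.deleteMessage(DeleteMessageRequest.builder()
                    .queueUrl(QUEUE_URL)
                    .receiptHandle(message.receiptHandle())
                    .build());
            return message.body();
        }
    }
}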
Hope it helps.
Update after @vcetinick's comment:
All still seems rather complex for what should be pretty simple.
That's why I suggested separate environments. You can make your own registration service: when a node comes up, it registers with the service and in return gets its configuration params. You keep the available configurations in a persistent DB. If a node dies and the service gets another registration request, the registration service can quickly check all registered nodes (because they all left their info during registration), and if any node is not responding, its configuration data is reassigned to the new node. And now you have a single point of failure on your hands :-)
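Sketched as an interface (all names hypothetical), the contract just described would look roughly like:
// Hypothetical contract for the registration service described above.
public interface RegistrationService {

    /** Called when a node boots; returns a config name no live node holds. */
    String register(String instanceId);

    /** Called periodically; prolonged silence lets the service reassign
        the node's configuration to a replacement instance. */
    void heartbeat(String instanceId);
}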
Again, there might be other ways to approach that problem.

What causes duplicate requests to occur using Spring, Tomcat and Hibernate

I'm working on a project in Java using the Spring Framework, Hibernate and Tomcat.
Background:
I have a form page which takes data, validates it, processes it and ultimately persists it using Hibernate. In processing the data I do some special command (model) manipulation prior to persisting with Hibernate.
Problem:
For some reason my onSubmit method is being called twice. The first time through, things are processed properly; the second time through they are not, and incorrect information is being persisted.
I've also noticed that on other pages, which simply pull information from the database and display it on screen, double requests are happening too.
Is there something misconfigured, or am I not using Spring properly? Any help on this would be great!
Additional Information:
The app is still being developed. In testing the app I'm running into this problem. I'm using the app as I would expect it to be used (single clicks, valid data, etc...)
If you are testing in IE, note that some versions of IE sometimes submit two requests. What browsers are you testing the app in?
There is also the JavaScript issue: if an onclick handler attached to the submit button calls submit() and does not return false to cancel the event, the form ends up submitted twice.
It could be as simple as users clicking a link twice, re-submitting a form while the server is still processing the first request, or hitting refresh on a POST-ed page.
Are you doing anything on the server side to account for duplicate requests such as these from your users? A common guard is the Post/Redirect/Get pattern, sketched below.
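A minimal Post/Redirect/Get sketch using annotation-style Spring MVC (the endpoint and parameter names are made up): the POST persists once and redirects, so a browser refresh replays a harmless GET instead of re-submitting the form.
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
public class OrderController {

    @PostMapping("/orders")
    public String onSubmit(@RequestParam String item) {
        // validate and persist exactly once here
        return "redirect:/orders/confirmation";
    }

    @GetMapping("/orders/confirmation")
    public String confirmation() {
        return "confirmation"; // view name; refresh re-runs only this GET
    }
}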
This is a very common problem faced by someone who is starting off and is not yet sure about the application ecosystem.
To deploy a Spring app, we build the war file and put it inside the 'webapps' folder of Tomcat. Then we run the Tomcat instance from a terminal (I am presuming a Linux system) and set up the environment in that terminal.
The problem arises when we set up our environment for the Spring application and there is more than one war file to be deployed. We must then cater to the fact that the environment must be exclusive to a specific war file.
To achieve this, we can create an exclusive env file for every war (e.g. war_1.sh, war_2.sh, ..., war_n.sh) and so on. Then we source the particular env file for the war we are about to deploy, as sketched below. This way we can segregate the multiple wars (applications) and their environments.
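For instance, a hypothetical war_1.sh and deployment sequence (all names and values are placeholders):
# war_1.sh -- environment exclusive to the first application
export DB_URL="jdbc:mysql://localhost:3306/app1"
export QUEUE_HOST="localhost"

# deploy war_1 with its own environment
source war_1.sh
cp app1.war "$CATALINA_HOME/webapps/"
"$CATALINA_HOME/bin/startup.sh"
Note that this only separates the environments if each war runs in its own Tomcat instance; wars deployed into a single running Tomcat share one JVM environment.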
