I'm configuring our Release pipeline so that integration tests are automatically run after pull requests are merged to master and deployed to the Dev environment.
I'm currently getting a connection error, specifically java.net.UnknownHostException, and it looks like one of the output variables from my nestedStack.yml code is not being imported/read properly:
my-repo/cloud-formation/nestedStack.yml
You can see there is a property there, "ApiGatewayInvokeUrl", which is marked as an Output. It is used in Azure DevOps for the "Integration Testing" task in my "Deploy to Dev" stage. It is written as $(ApiGatewayInvokeUrl), as that's how variables are referenced in Azure DevOps.
This Deploy to Dev stage will "succeed"; however, when I inspect the Integration Tests further, I see none actually ran and there was a connection error immediately. I can see it is outputting the variable literally as $(ApiGatewayInvokeUrl), so it looks like it just reads it as a string and never substitutes the correct URL value:
I was going off the way another team set up their Integration Tests on a similar pipeline, but I might have missed something. Do I need to define $(ApiGatewayInvokeUrl) somewhere in my codebase, or somewhere in Azure? Or am I missing something? I checked the other team's code and didn't see them define it anywhere else, which is why I am so confused.
Update: I went into AWS API Gateway, copied the invoke URL, and hard-coded it into the Azure DevOps Maven (integration testing) goal path, and now it's connecting. So it's 100% just not importing that variable for some reason.
You need to define/create the variable in order to use it (unless it's an automatic variable, and this one is definitely not an automatic variable). That variable isn't getting substituted because it doesn't exist (afaik).
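Since the value you need is a CloudFormation stack output, one way to create that pipeline variable (a sketch, not necessarily how the other team did it) is to read the output with the AWS CLI in a script step that runs before the Maven task and publish it with the task.setvariable logging command. The stack name below is a placeholder; in a classic release pipeline the same script can go into a Bash task inside the Deploy to Dev stage:

- bash: |
    # Look up the stack output; "my-nested-stack" is a placeholder stack name
    INVOKE_URL=$(aws cloudformation describe-stacks \
      --stack-name my-nested-stack \
      --query "Stacks[0].Outputs[?OutputKey=='ApiGatewayInvokeUrl'].OutputValue" \
      --output text)
    # Make the value available to later tasks as $(ApiGatewayInvokeUrl)
    echo "##vso[task.setvariable variable=ApiGatewayInvokeUrl]$INVOKE_URL"
  displayName: Export ApiGatewayInvokeUrl from stack outputs

With a step like that in place, $(ApiGatewayInvokeUrl) should resolve in the Integration Testing task instead of being passed through as a literal string.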
I am experimenting with different instrumentation libraries, but spring-cloud-sleuth and OpenTelemetry (OTel) are the ones I liked the most. Spring-cloud-sleuth is simple, but it will not work for a non-Spring (JAX-RS) project, so I diverted my attention to OpenTelemetry.
I am able to export the metrics using OTel, but there is just too much data that I do not need. Spring Sleuth gave the perfect solution, wherein it just traces the call across microservices and links all the spans with one traceId.
My question is: how do I configure OTel to get output similar to Spring Sleuth's? I tried various configurations and a few worked, but the amount of information is still huge.
My configuration
-Dotel.traces.exporter=zipkin -Dotel.instrumentation.[jdbc].enabled=false -Dotel.instrumentation.[methods].enabled=false -Dotel.instrumentation.[jdbc-datasource].enabled=false
However, this still gives me method calls and other data. Also, one big pain point is that I am not able to SHUT DOWN the metrics data.
I get an error like the one below:
ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export metrics. The request could not be executed. Full error message: Failed to connect to localhost/0:0:0:0:0:0:0:1:4317
Any help will be appreciated.
There are two ways to configure the OpenTelemetry (OTel) agent:
Environment variable
Java system property
To disable the metrics exporter, you can either set
export OTEL_METRICS_EXPORTER=none
or
java -Dotel.metrics.exporter=none app.jar
Reference
https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md
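Putting that together for this question (traces to Zipkin, metrics off), a full agent invocation might look roughly like the following. Note that in the docs the [name] brackets are just placeholders; the actual suppression properties are written without brackets, and otel.logs.exporter only exists on newer agent versions:

java -javaagent:/path/to/opentelemetry-javaagent.jar \
     -Dotel.traces.exporter=zipkin \
     -Dotel.metrics.exporter=none \
     -Dotel.logs.exporter=none \
     -Dotel.instrumentation.jdbc.enabled=false \
     -Dotel.instrumentation.jdbc-datasource.enabled=false \
     -jar app.jar

Disabling the metrics exporter should also make the "Failed to export metrics ... localhost:4317" error go away, since nothing will try to reach the default OTLP endpoint any more.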
We are using k8s for deploying our application and it works great. But there is a small issue. We have moved from HTTP-layer communication to TCP-layer communication, and the communication between different microservices goes through the (k8s) service name. That works great, but a developer can't test the same code locally, as the service name will only resolve inside the cluster. So here are some solutions I have:
1. Provide them a separate namespace where they can test the app with small changes.
The issue with this is that the developers use breakpoints to test and debug small code changes, and that will be hard with this method.
2. They can run minikube locally, but that doesn't sound good even to me.
3. They can run the container for the microservice locally and enter the container's IP in /etc/hosts against the corresponding k8s service name. With this, the same code will work.
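For example, the /etc/hosts entry would look something like this (IP and service name are just examples):

# /etc/hosts
172.17.0.2   my-backend-service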
Any other or better solutions are welcome.
Did you consider using Spring Boot profiles for this purpose? We have been using them effectively across our teams for a long time. For this, you'll have to extract the service host(s) as separate properties in application.yml (or application.properties) and use that host in the rest of the properties as a variable. The following snippet illustrates this:
application.yml
----------------
serviceA:
  host: service-A-Name
  api-one-endpoint: http://${serviceA.host}/api/v1/one
  api-two-endpoint: http://${serviceA.host}/api/v1/two
  api-three-endpoint: http://${serviceA.host}/api/v1/three
  api-four-endpoint: http://${serviceA.host}/api/v1/four
In production (or any hosted/managed environment, for that matter), you provide an appropriate value for the Spring property serviceA.host. In your use case, you would use this value as-is and provide the k8s service name as the binding instead.
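For illustration, here is roughly how one of those endpoint properties might be consumed (the class and method names below are made up, not from your code):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

// Hypothetical client: the endpoint resolves against application.yml in the cluster
// and against application-dev.yml when the dev profile is active.
@Service
public class ServiceAClient {

    private final RestTemplate restTemplate = new RestTemplate();

    @Value("${serviceA.api-one-endpoint}")
    private String apiOneEndpoint;

    public String fetchOne() {
        // Calls http://service-A-Name/... in the cluster, or the overridden host locally
        return restTemplate.getForObject(apiOneEndpoint, String.class);
    }
}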
For the local dev environment, you only need to override a single property. For a simple use case (say you need to override only a single property), you can pass it as an argument to your Spring Boot launcher (e.g. "--serviceA.host=localhost"). If you have many services (you likely do), even then you'll only need to override a few well-known host name properties. Using a dedicated dev profile is much better in this case. The following example illustrates the same scenario:
application-dev.yml
-------------------
serviceA:
  host: mylocalhost:9090
Then you use this profile in your Eclipse/IntelliJ launcher configuration for execution or debugging by adding "--spring.profiles.active=dev" as an additional argument, and Spring Boot will use the updated host from the dev profile. In fact, combining these two approaches gives you even more flexibility for advanced cases. If you agree on a common port convention across the team, then you can even check in application-dev.yml for use by everyone pretty much as-is.
Spring Boot profiles are a much more powerful feature; I strongly recommend going through the documentation and a few tutorials (like this one) to understand them fully and exploit them effectively for use cases like this one.
Every time we add a new attribute to items.xml, we have to execute a Hybris update; otherwise we will get an error like: JaloItemNotFoundException: no attribute Cart.newAttribute
But, sometimes after executing an update, instead of getting JaloItemNotFoundException, we get something like:
de.hybris.platform.servicelayer.exceptions.AttributeNotSupportedException: cannot find attribute newAttribute
For this second case, it always works if we restart the server after the update.
Is there any other way to fix that besides restarting the server after the update?
I worked for a company years ago that added this restart as a "deploy step" after the update. I am trying to avoid that here.
I tried executing several updates and cleaning the type cache, but no luck.
Platform Update with "Update Running System" is usually enough. If you have localization, impex, or some other changes, you might need to include the other options or extensions.
If you have a clustered environment, make sure all nodes have been updated / refreshed as well.
Make sure that your build and deploy process is something like:
Build
Deploy
Restart the server. You can stop/start it manually (or by script), or let Hybris restart itself when it detects changes from the deployment.
Run Platform Update
You can try updating the platform directly after the build from the command line (i.e. "ant updatesystem") before starting the server.
The restart after deploy is a pretty common step (in case the system update is performed with the server started).
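As a sketch, such a command-line deploy sequence could look like the following (the scripts are the standard ones shipped with the platform; paths depend on your installation):

cd hybris/bin/platform
. ./setantenv.sh          # set up ant for the platform
./hybrisserver.sh stop    # stop the running server
ant clean all             # build the new binaries
ant updatesystem          # run the system update while the server is down
./hybrisserver.sh start   # start with the new binaries and updated type system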
I believe one of the reasons the restart is needed is that the Spring context needs to be reinitialized, since some of the beans need the new type system information.
For example, let's say you need to create a new type and an interceptor for that newly created type. When deploying this change, you do the following:
Change the binaries and start the server
Perform an update system in order for the database to get the latest columns and so on
Now if you check whether the interceptor is working, you will see it does not, because when its Spring bean was instantiated (during server startup), the type it is supposed to handle was not yet present in the database.
Because of that, after a restart the Interceptor works as expected.
PS: The above-described interceptor problem might have been fixed in the latest Hybris versions.
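For reference, such an interceptor is just a Spring bean implementing one of the interceptor interfaces; the item type, class, and attribute below are made up for illustration:

import de.hybris.platform.servicelayer.interceptor.InterceptorContext;
import de.hybris.platform.servicelayer.interceptor.InterceptorException;
import de.hybris.platform.servicelayer.interceptor.ValidateInterceptor;

// Hypothetical interceptor for a newly added item type. It is instantiated as a
// Spring bean at server startup, which is why it can misbehave until the type
// system update has run and the server has been restarted.
public class MyNewItemValidateInterceptor implements ValidateInterceptor<MyNewItemModel>
{
    @Override
    public void onValidate(final MyNewItemModel model, final InterceptorContext ctx) throws InterceptorException
    {
        if (model.getCode() == null || model.getCode().isEmpty())
        {
            throw new InterceptorException("code must not be empty");
        }
    }
}

It is typically registered in the extension's *-spring.xml via an InterceptorMapping bean pointing at the target type code.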
I have a Spring service that I'm trying to load into AWS Elastic Beanstalk. When I create the environment and upload my .war file, it just stays stuck on degraded. When I look through the logs for errors, I cannot see any. Also, when I try to connect to my URL, for example http://something.us-east-1.elasticbeanstalk.com/, I get a 502 error. I've already looked at the documentation provided by Amazon, which states the red degraded message means that all/most of the requests to your page are failing. Any idea how I can find the issue? See the screenshot below for the Enhanced Health Overview.
So, it turned out that I was getting an error in my logs, but I was not able to see it. I had to ignore all of the eb-something log files; I needed to be looking at web-1.log. This file may be named differently depending on your instance and your environment, but this is where I could actually see my error.
People trying to find their actual application log can look for this section in their EB log file:
/var/log/web.stdout.log
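If the EB CLI is configured for the environment, you can also ssh in and tail that file directly (the environment name below is a placeholder):

eb ssh my-env                        # open a shell on the instance
tail -f /var/log/web.stdout.log      # run on the instance to follow the app log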
I had the same problem, with "Impaired services on all instances."
As suggested above, I ssh'd into the Elastic Beanstalk instance and looked at /var/log/web.stdout.log.
As I found there, my problem was that I was following a Django tutorial and had created a Django config file with the WSGIPath pointing to the project name from the tutorial, but my actual project had a different name.
I corrected the mistake, wiped the elastic beanstalk instance and set up the environment again from scratch.
No problems at all this time, everything turned green immediately.
There is a team that develops an enterprise application with a web interface: Java, Tomcat, Struts, MySQL, REST and LDAP calls to external services, and so on.
All configuration is stored in context.xml -- a Tomcat-specific file that contains variables available via the servlet context and objects available via JNDI resources.
Developers have no access to the production and QA platforms (as it should be), so context.xml is managed by the support/sysadmin team.
Each release has config-notes.txt with instructions like:
please add "userLimit" variable to context.xml with value "123", rename "DB" resource to "fooDB" and add new database connection to our new server (you should know url and credentials) named "barDb"
That is not good.
Here is my idea how to solve it.
Each release has a special config file with required variable names, descriptions, and default values (if any); even web.xml could be used.
Here is a pseudo example:
foo=bar
userLimit=123
barDb=SET_MANUAL(connection to our new server)
And there is a special tool that the support team runs against the deployment artifact.
Here is how it looks (text after ">" is typed by the support person):
Config for version 123 of artifact "myServer".
Enter your config file location> /opt/tomcat/context/myServer.xml
+"foo" value "bar" -- already exists and would not be changed
+"userLimit" value "123" -- adding new
+"barDb"(connection to our new server) please type> jdbc:mysql:host/db
Saving your file as /opt/tomcat/context/myServer.xml
Your environment is now configured to run myServer-123.
That will give us the ability to deploy the application to any environment and update the configuration if needed.
Do you like my idea? What do you use for environment configuration management? Are there ready-to-use tools for that?
There are plenty of different strategies. All of them are good; it depends on what suits you best.
Build a single artifact and deploy configs to a separate location. The artifact could have placeholder variables and, on deployment, the config could be read in. Have a look at Spring's property placeholder (see the sketch after this list). It works fantastically for webapps that use Spring and doesn't require getting ops involved.
Have an externalised property config that lives outside of the webapp. Keep the location constant and always read from the property config. Update the config at any stage, and a restart will pick up the new values.
If you are modifying the environment (e.g. the application server being used, or user/group permissions), look at using the above methods with Puppet or Chef. Also have a look at managing your config files with these tools.
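A minimal sketch of the property placeholder approach from the first point, assuming the values live in a file such as /opt/tomcat/conf/myServer.properties that ops controls (the path and property names are illustrative):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

// Hypothetical configuration: the webapp ships with placeholders only,
// while the actual values live in a file managed by the support/sysadmin team.
@Configuration
@PropertySource("file:/opt/tomcat/conf/myServer.properties")
public class ExternalConfig
{
    // Required so that ${...} placeholders in @Value annotations are resolved
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholder()
    {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Value("${userLimit}")
    private int userLimit;
}

The webapp then carries only the placeholders, and the support team edits the properties file instead of context.xml.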
As for the whole question of whether devs should be given access to prod, it really depends on the company. For smaller companies, where the dev is called every time there is a problem, regardless of whether that problem is server- or application-related, devs obviously require access to the box.
DevOps is not about giving devs access to the box; it's about giving devs the ability to use infrastructure as a service, the ability to spawn new instances with application X and config Y, and to push their applications into environments without ops. In a large company like ours, what it allows is the ability for devs to manage the application they put on a server. Operations shouldn't care what version is on there, that's our job; their job is all about keeping the server up and running.
I strongly disagree with your remark that devs shouldn't have access to prod or staging environments. It's this kind of attitude that leads to teams working against each other instead of with each other.
But to answer your question: you are thinking about what is typically called continuous integration (http://en.wikipedia.org/wiki/Continuous_integration) and moving towards DevOps. Ideally you should aim for the magic "1-click automated deployment". The guys from Flickr wrote a lot of blogs (and books) about how they achieved that.
Anyhow, there are a lot of tools around that sector. You may want to have a look at things like Hudson/Jenkins or Puppet/Chef.