I'm new to Spring and built a simple Spring Boot app that has a couple of endpoints and stores data in a MySQL database. It is now time for me to deploy it to AWS. I set up an Elastic Beanstalk instance, uploaded my WAR file, and also set up an RDS instance running MySQL. When I try to visit one of my endpoints at my environment URL/(the endpoint), I get a 404 error. I know the endpoints work because I tested them when running the project locally.
I suspect the issue is the Spring datasource URL configured in my project. When running locally, I had it set to the following: jdbc:mysql://127.0.0.1/exampleDb
I tried changing the URL to jdbc:mysql://(my aws environment url):3306/ebdb, but I still get the same error.
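For reference, when the RDS instance is created through the Beanstalk environment itself, Beanstalk exposes the connection details to the app as RDS_* environment variables, so the datasource is commonly configured like this (a sketch, assuming an environment-attached RDS instance, not this app's actual setup):

```properties
# These RDS_* variables are set by Elastic Beanstalk for an attached RDS instance;
# a standalone RDS instance would instead need its own endpoint hostname here.
spring.datasource.url=jdbc:mysql://${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}
spring.datasource.username=${RDS_USERNAME}
spring.datasource.password=${RDS_PASSWORD}
```

Note the host is the database endpoint, not the Beanstalk environment URL, which points at the web tier.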
What should I change the datasource URL to? Any ideas? I'm new to Spring and AWS, so this has been a real roadblock.
Any help is appreciated.
Related
I'm setting up Active Directory on Windows Server, deployed on AWS EC2, and I need to connect to it from Spring Boot via LDAP, but I have no idea how to do that. Does anyone have a tutorial or documentation on this?
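For reference, Spring Boot's LDAP starter is typically configured through properties like these (a sketch with placeholder values; the host, base DN, and credentials below are hypothetical and would need to match the EC2-hosted domain controller):

```properties
# Hypothetical values; point these at the Active Directory domain controller on EC2.
spring.ldap.urls=ldap://ad.example.internal:389
spring.ldap.base=dc=example,dc=internal
spring.ldap.username=cn=admin,dc=example,dc=internal
spring.ldap.password=secret
```

The EC2 security group must also allow the LDAP port (389, or 636 for LDAPS) from wherever the Spring Boot app runs.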
I'm getting "Resource not found" when I try to reach http://SOME_URL/q/swagger-ui while running the app on Kubernetes.
I have set the property
quarkus.swagger-ui.always-include=true
http://SOME_URL/q/openapi gives me the API definition.
When the app is deployed locally, it works fine.
Any ideas?
I have deployed a single fat .jar, containing a Spring Boot app and a React app in Openshift.
The Spring Boot app exposes a REST API, and the React app is the frontend that makes calls to that API.
Problem
Both apps are accessible externally just fine (with the url generated by Openshift), but the React app cannot communicate with the Spring Boot app through http://localhost:8080/... calls.
Attempts so Far
I have tried using 127.0.0.0 instead of localhost, but to no avail.
I also tried performing curl -v http://localhost:8080/... from inside the pod where the 2 apps are deployed and it works fine.
Is this a configuration issue? Do I need to set up routes? Or use something other than localhost/127.0.0.1?
To answer my own question: I finally managed to solve this by following Boris Chistov's suggestion. I simply removed all the http://localhost:8080 URL parts and used relative URLs instead.
I am using Terraform scripts to create AWS resources, and AWS ElastiCache (memcached) for caching some data.
output "configuration_endpoint" {
  value = "${aws_elasticache_cluster.memcache.configuration_endpoint}"
}
I want to configure the memcached configuration endpoint in the Spring Boot application.properties file dynamically instead of hard-coding it. Currently it is set as follows:
memcached.addresses=xyz.cache.amazonaws.com:11211
I am unable to find any good references online for this. Is there any way to set this dynamically once the resources are created in AWS? I use Jenkins to run the Terraform script and deploy the Spring Boot application to AWS.
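One common approach (a sketch, with a hypothetical variable name, not something from the question) is to keep a placeholder in application.properties and have the Jenkins pipeline supply the value from `terraform output` as an environment variable:

```properties
# MEMCACHED_ADDRESSES is a hypothetical env var set by the pipeline, e.g.
#   export MEMCACHED_ADDRESSES="$(terraform output configuration_endpoint)"
# Spring Boot resolves ${...} placeholders from the environment at startup.
memcached.addresses=${MEMCACHED_ADDRESSES}
```

This keeps the properties file static while the actual endpoint flows from Terraform through Jenkins to the running app.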
I have an application deployed to Heroku that uses Spring Security, and, by extension, HttpSessionSecurityContextRepository. Realizing that HttpSession will cause problems when scaling up to multiple dynos, I am trying to configure webapp-runner (https://github.com/jsimone/webapp-runner) with the --session_manager memcache flag (with the Heroku memcache addon).
A local configuration using Apache and mod_proxy, two Tomcat instances, and memcached 1.4.13 works fine. When deployed to Heroku, however, it fails, even with a single dyno - randomly redirecting to the login page as if unauthenticated, indicating that the session store is not working. Same Procfile, verified the MEMCACHE_* variables via heroku config, etc.
Does anyone have experience with a similar configuration?
Update: the configuration works as designed.
The issue was caused by a Spring Security misconfiguration. A bad image URL buried in the app triggered the redirect. While this should have simply been a 404, there was also a catch-all intercept-url pattern in context-security.xml with access set to IS_AUTHENTICATED_FULLY, so any page containing the bad URL redirected to the login page. Correcting those URLs fixed the problem on Heroku, though I can't explain why it never manifested locally.
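The catch-all pattern in question looks something like this (reconstructed for illustration, not the actual context-security.xml):

```xml
<http>
    <!-- Any request not matched by an earlier rule falls through to this
         catch-all, so even a 404 for a broken image URL can trigger the
         login redirect instead of a plain "not found" response. -->
    <intercept-url pattern="/**" access="IS_AUTHENTICATED_FULLY" />
</http>
```

Such a pattern is a reasonable default, but it makes every unmatched request, including requests for non-existent resources, subject to authentication.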