I have a multi-project Gradle configuration: a build.gradle for each subproject and another one where I define general tasks.
Basically, in the general build.gradle file I define the deployment environments, one each for production, pre-production, development, and so on.
I describe the containers with a class:
class RemoteContainer {
    String name
    String container
    String hostname
    Integer port
    String username
    String password
    String purpose
}
The purpose field marks each container as 'production', 'pre-production' or 'development'.
Then I'm able to create several containers:
def developmentRemoteContainers = [
    new RemoteContainer(
        name: 'wildfly8',
        container: 'wildfly8x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'development'
    ),
    new RemoteContainer(
        name: 'glassfish4',
        container: 'glassfish4x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'development'
    )
]
def preproductionRemoteContainers = [
    new RemoteContainer(
        name: 'wildfly8',
        container: 'wildfly8x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'pre-production'
    ),
    new RemoteContainer(
        name: 'glassfish4',
        container: 'glassfish4x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'pre-production'
    )
]
def productionUserRemoteContainers = [
    new RemoteContainer(
        name: 'wildfly8',
        container: 'wildfly8x',
        hostname: '---',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'production'
    ),
    new RemoteContainer(
        name: 'glassfish4',
        container: 'glassfish4x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'production'
    )
]
After that, I create tasks according to the content of each remote container:
Example tasks:
remoteContainers.each { config ->
    task "deployRemote${config.name.capitalize()}"(type: com.bmuschko.gradle.cargo.tasks.remote.CargoDeployRemote) {
        description = "Deploys WAR to remote Web Application Server: '${config.name}'."
        containerId = config.container
        hostname = config.hostname
        port = config.port
        username = config.username
        password = config.password
        dependsOn war
    }
    task "undeployRemote${config.name.capitalize()}"(type: com.bmuschko.gradle.cargo.tasks.remote.CargoUndeployRemote) {
        description = "Undeploys WAR from remote Web Application Server: '${config.name}'."
        containerId = config.container
        hostname = config.hostname
        port = config.port
        username = config.username
        password = config.password
    }
}
That's how I create my deploy and undeploy tasks for each container and environment.
As you can see, each task depends on the war task. My projects contain a file with a placeholder like ${stringKey}, which I need to replace according to each container's purpose.
So ${stringKey} must be replaced by config.purpose.
EDIT
There are basically two files:
Under /src/main/resources/META-INF/persistence.xml: This file contains the database server location. Depending on the server environment, the database is at a different IP/PORT/DATABASE... For example:
<property name="hibernate.ogm.datastore.host" value="${ip}"/>
<property name="hibernate.ogm.datastore.port" value="${port}"/>
Under /src/main/resources/configuration.settings.environtment: This file contains only this line: scope = ${scope}.
The replacement must be made at WAR generation time.
I've absolutely no idea how to do that.
Any ideas?
You can use ant.replace to do this:
task replaceTokens {
    doLast {
        ant.replace(
            file: "path/to/your/file",
            token: "stringtoreplace",
            value: config.purpose
        )
    }
}
war.dependsOn replaceTokens
I faced a somewhat similar issue. I found it easier to keep separate environment-specific (e.g. dev, qa, staging, prod) properties/settings/config and then load/apply a specific one at a suitable time in the build life-cycle. The following links were helpful:
https://blog.gradle.org/maven-pom-profiles
Gradle : Copy different properties file depending on the environment and create jar
PS: I'm responding to a bit older question, but hopefully these pointer may be of help to someone facing a similar issue.
You can try something like this, if all you need is to replace your placeholders:
tasks.taskName.doFirst {
    exec {
        // single-quoted so Groovy doesn't interpolate; perl receives the literal ${stringKey} token
        commandLine "perl", "-pi", "-w", "-e", 's/\\$\\{stringKey\\}/' + config.purpose + '/g', "filePath"
    }
}
I want to apply the following yaml multiple times with the fabric8 kubernetes-client
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: my-storage-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
I apply the yaml using createOrReplace()
Config config = new ConfigBuilder()
        .withMasterUrl("https://my-kubernetes-root:6443")
        .withNamespace("my-namespace")
        .withOauthToken(token)
        .withTrustCerts(true)
        .build();
KubernetesClient client = new DefaultKubernetesClient(config);
ClasspathResource resource = new ClasspathResource("my-pvc.yaml");
client.load(resource.getInputStream()).createOrReplace(); // this works
TimeUnit.MINUTES.sleep(1); // volumeName is dynamically assigned during this period
client.load(resource.getInputStream()).createOrReplace(); // this fails
This works the first time (when the PVC does not exist in the namespace) but fails the second time that createOrReplace() is called for the same yaml with the following error
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PUT at: https://my-kubernetes-root:6443/api/v1/namespaces/my-namespace/persistentvolumeclaims/my-pvc. Message: PersistentVolumeClaim "my-pvc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteMany"},
Selector: nil,
Resources: core.ResourceRequirements{Requests: core.ResourceList{s"storage": {i: resource.int64Amount{value: 1073741824}, s: "1Gi", Format: "BinarySI"}}},
- VolumeName: "",
+ VolumeName: "pvc-b79ebfcb-d5cb-4450-9f17-d69ec10b8712",
StorageClassName: &"my-storage-class",
VolumeMode: &"Filesystem",
DataSource: nil,
}
Notice how "volumeName" is not present in the yaml (nil) but in the error message "volumeName" is changing from empty string to the dynamically assigned volumeName.
I can reproduce this exact same behavior using kubectl and empty string for volumeName
I can kubectl apply the following yaml as many times as I like
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: my-storage-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
But if I kubectl apply a yaml with volumeName of empty string it works the first time and fails the second time (The error message is the same as above)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: my-storage-class
  volumeName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
How can I get KubernetesClient to behave the same as kubectl apply? Is there any way that I can apply the same PersistentVolumeClaim yaml multiple times with KubernetesClient?
As a workaround, I have switched to using a StatefulSet to manage my Pod which allows me to specify volumeClaimTemplates
e.g. the following yaml can be applied multiple times:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "my-storage-class"
        resources:
          requests:
            storage: 1Gi
The Kubernetes API doesn't allow you to change some fields of a resource once it is in a certain state. For a PVC, after its state is Bound, the volumeName field in its spec is immutable; you can't change the volume your PVC is referencing.
When you apply a bare PersistentVolumeClaim manifest like the one in your question (a manifest that has no volumeName key, or whose volumeName is nil), and the cluster has a default StorageClass, Kubernetes automatically creates a corresponding PersistentVolume object (the abstraction of your volume over the underlying storage provider) and assigns its name to your PersistentVolumeClaim via the volumeName key.
After a PersistentVolumeClaim gets bound to a PersistentVolume and its volumeName field gets populated, you can no longer edit or patch its volumeName field; that's why your kubectl apply command fails the second time.
It's now clear that a Replace verb is not going to work for a PVC in the Bound state. You can Create one and edit or Patch some of its fields (like increasing the size). I'm not familiar with the Java SDK, but it should include APIs for creating, getting, patching, and deleting a PVC.
You can first check for the existence of your PVC using the GetNamespacedPersistentVolumeClaim API; then, if it's not in the cluster, create it using the CreateNamespacedPersistentVolumeClaim API. Remember the axiom: "Explicit is better than implicit."
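A sketch of that check-then-create flow with the fabric8 client might look like the following. This is my own sketch, not code from the question: the method names follow fabric8's fluent Java API rather than the raw Kubernetes API verbs above, and the namespace and claim names are the ones used earlier.

```java
import io.fabric8.kubernetes.api.model.PersistentVolumeClaim;
import io.fabric8.kubernetes.client.KubernetesClient;

import java.io.InputStream;

public class PvcCreateIfAbsent {
    // Create the PVC only when it is not already in the cluster; by never
    // issuing a PUT over a Bound claim, the immutable volumeName field is
    // never touched.
    static void ensurePvc(KubernetesClient client, InputStream manifest) {
        PersistentVolumeClaim existing = client.persistentVolumeClaims()
                .inNamespace("my-namespace")
                .withName("my-pvc")
                .get(); // returns null when the claim does not exist
        if (existing == null) {
            client.load(manifest).inNamespace("my-namespace").create();
        }
    }
}
```

This needs a live cluster to run, so treat it as a starting point rather than a drop-in replacement for createOrReplace.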
I raised this issue on the fabric8 kubernetes-client codebase and got an interesting response which may do what I want. Please note that I haven't tried it yet since I have a workaround using StatefulSet and volumeClaimTemplates (see my other answer).
The problem is the replace operation. It is not a substitute for apply / patch. Instead of createOrReplace you can use the new support for server side apply
And the relevant section from the link
Server Side Apply: basic usage of server side apply is available via
Patchable. At its simplest you just need to call:
client.services().withName("name").patch(PatchContext.of(PatchType.SERVER_SIDE_APPLY), service);
For any create or update. This can be a good alternative to
using createOrReplace as it is always a single api call and does not
issue a replace/PUT which can be problematic.
If the resources may be created or modified by something other than a
fabric8 patch, you will need to force your modifications:
client.services().withName("name").patch(new PatchContext.Builder().withPatchType(PatchType.SERVER_SIDE_APPLY).withForce(true).build(), service);
I am trying to get my second project(prm) into the cloud.
Both projects (pyp and prm) access the same database with the same credentials.
The first project succeeds, and the second gets "Access denied for user root".
Some excerpts from my definitions:
apiVersion: v1
kind: Secret
metadata:
  name: pyp-secret
data:
  mysql_password: "<password>"
apiVersion: v1
kind: Service
metadata:
  name: pyp-db
spec:
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: pyp
    service: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pyp-db
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: pyp
      service: mysql
  template:
    metadata:
      labels:
        app: pyp
        service: mysql
    spec:
      containers:
        - image: mysql:8
          name: pyp-db
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pyp-secret
                  key: mysql_password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prm
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: prm
      service: cron
  template:
    metadata:
      labels:
        app: prm
        service: cron
    spec:
      containers:
        - image: prm-image-name
          imagePullPolicy: Always
          name: prm
          env:
            - name: MYSQL_HOST
              value: pyp-db
            - name: MYSQL_USER
              value: root
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pyp-secret
                  key: mysql_password
These are excerpts from the log, where you can see the URL used to connect to the database, and the error I get:
This is from my java-application:
static Connection init(String host, String user, String password) {
    Connection con = null;
    try {
        if (con == null) {
            Class.forName("com.mysql.cj.jdbc.Driver").newInstance();
            String url = "jdbc:mysql://" + host + ":3306/PP_Master?user=" + user + "&password=" + password;
            logger.trace("DB url:" + url);
            con = DriverManager.getConnection(url);
        }
    } catch (Exception ex) {
        logger.error("init: ", ex);
    }
    return con;
}
My cloud is hosted on Minikube and the database is MySQL 8.0.27. It is accessible from my localhost with the same credentials. My other project (pyp) runs in Tomcat and connects with credentials from a connection pool defined in context.xml. Both use the same driver to connect to the database, and that project accesses the database just fine.
This is the users defined in the database:
I've also counted the number of characters in the url with url.length(). This gave 72 characters, which matches the expected size, so there are no extra invisible characters in the password.
About the project (pyp) that succeeds in accessing the database: some days ago I got an SqlSyntaxError from the first statement against the database, even though it was only "USE PP_Master" and it had worked before. There were no errors in the logs.
I had to delete the Minikube container and start a new one.
That gave the pyp project access to the database again.
I wonder if one project using a connection pool could reserve access to the database, so that no other project could access it?
I've now tried switching from a connection pool to a single connection at a time in the pyp project, but that didn't solve the problems with the prm project. I also tried simply removing the deployment and the pyp pod, but that didn't help the prm project either. So that hypothesis seems to be wrong.
I looked at the pyp-db log (this pod contains the database).
I don't know whether some of this information could have an impact on my problem.
I have also tried deleting Minikube again. This time I only deployed the pyp-db and prm pods, to avoid a possible conflict with the pyp pod, but to no avail: the error message from the prm pod persists.
So there must be something wrong between prm and pyp-db that has nothing to do with the pyp pod; I've verified that it is not due to a conflict with the pyp pod.
I really hope someone is able to help me. I've been stuck for several days with this problem. If there is more information that could help, just ask.
Eventually, I managed to get rid of the "access denied" problem.
I just changed the Java code to this:
String url = "jdbc:mysql://" + host + ":3306/PP_Master";
con = DriverManager.getConnection(url, user, password);
Before it was:
String url = "jdbc:mysql://" + host + ":3306/PP_Master?user=" + user + "&password=" + password;
con = DriverManager.getConnection(url);
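A plausible explanation (my assumption; the question doesn't confirm it): when the credentials are concatenated into the URL, any URL metacharacter in the password corrupts the query string, whereas the three-argument getConnection passes them through untouched. A self-contained illustration with a hypothetical password:

```java
public class JdbcUrlPitfall {
    // Tokenize the query string on '&', the way connection properties
    // embedded in a JDBC URL are separated.
    static int paramCount(String url) {
        String query = url.substring(url.indexOf('?') + 1);
        return query.split("&").length;
    }

    public static void main(String[] args) {
        String password = "p&ssword"; // hypothetical password containing '&'
        String url = "jdbc:mysql://pyp-db:3306/PP_Master?user=root&password=" + password;
        // Three tokens instead of two: user=root, password=p, ssword
        System.out.println(paramCount(url));
    }
}
```

Passing user and password as separate arguments sidesteps the problem entirely, which is consistent with the fix above working.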
I'm facing a problem with Spring Boot configuration:
My configuration is not loaded correctly for all of my parameters.
I'm using a profile (in my case 'qualif') with an application-qualif.yml configuration file; its values are not overriding the application.yml values, which is the expected behaviour.
I'm also using Lombok.
My application.yml here:
mail:
  login: 'login'
  address: 'login#email.com'
  password: 'myRealPassword'
My application-qualif.yml here:
spring:
  otherValues: ...
My configuration here:
@Configuration
@Getter
public class SmtpConfiguration {
    @Value("${mail.login}")
    private String login;
    @Value("${mail.address}")
    private String address;
    @Value("${mail.password}")
    private String password;
}
When I try to display the password it always returns '123456', but I can't find any reference to '123456' in my project. It's expected to be myRealPassword here.
log.printf(Level.INFO, "Mail password: " + smtpConfiguration.getPassword());
The other values from smtpConfiguration correctly return the values from application.yml.
2021-08-19 10:55:26.102 INFO 192494 --- [ scheduling-1] x.x.x.x.x.x.XXXScheduledTask:
Attempt to send mail using login#email.com with password 123456
Console output:
2021-08-19 10:48:49.572 INFO 192412 --- [ scheduling-1] x.x.x.x.x.x.XXXService:
Mail password: 123456
This code runs under a Linux environment; it works on my local computer (Windows) using no profiles.
Thanks for reply.
I found the issue.
It turns out a Linux environment variable named MAIL_PASSWORD exists and equals '123456'.
I didn't know this would override the YAML values when the configuration does not specify it explicitly, like:
mail:
  password: ${mail.password}
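The mechanism at work here is Spring Boot's relaxed binding: a property name is matched against environment variables by dropping dashes, turning dots into underscores, and upper-casing. A simplified sketch of that convention (my own helper, not a Spring API):

```java
public class RelaxedBinding {
    // Simplified model of Spring Boot's property-name -> env-var mapping:
    // remove dashes, replace dots with underscores, upper-case the result.
    static String toEnvVar(String propertyName) {
        return propertyName.replace("-", "").replace(".", "_").toUpperCase();
    }

    public static void main(String[] args) {
        // mail.password is therefore shadowed by an exported MAIL_PASSWORD
        System.out.println(toEnvVar("mail.password"));
    }
}
```

So any exported MAIL_PASSWORD silently wins over the value in application.yml.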
I implemented the SAP xsuaa service with Java and deployed it; accessing the application URL I faced the issue below.
Posting this problem and its solution, as I faced and resolved it.
Case 1: If you are using BAS (Business Application Studio) and deploying through an mta file, make sure you have added forwardAuthToken: true for the application instance associated with the xsuaa service.
In the mta file, add the properties below:
requires:
  - name: my_xsuaa
  - name: MY_API
    group: destinations
    properties:
      name: MY_API
      url: '~{url}'
      forwardAuthToken: true
If it is still not working after that, then check your xs-security.json file:
"oauth2-configuration": {
"token-validity": 604800,
"refresh-token-validity": 2592000,
"redirect-uris":[
"https://*.<your_uri>/login/callback"
]
}
Case 2: If deploying a Java application, the properties below should be set in the manifest.yml file.
env:
  SESSION_TIMEOUT: 30
  destinations: >
    [{
      "name": "MY_API",
      "url": "<your_java_application_uri>",
      "forwardAuthToken": true
    }]
I had the same problem while doing this SAP tutorial (https://developers.sap.com/tutorials/btp-cf-buildpacks-node-create.html) and it was solved by adding the oauth2-configuration to the xs-security.json:
"oauth2-configuration": {
"token-validity": 604800,
"refresh-token-validity": 2592000,
"redirect-uris": [
"https://<App-router Url>/**"
]
},
I have a problem with https services in my Gateway application (which uses Zuul).
It works great when proxying http services, but I have problems proxying https services.
I get this exception:
java.lang.IllegalStateException: Could not create URI object: Expected scheme-specific part at index 6: https:
at org.springframework.web.util.HierarchicalUriComponents.toUri(HierarchicalUriComponents.java:430) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.cloud.netflix.ribbon.RibbonClientConfiguration$OverrideRestClient.reconstructURIWithServer(RibbonClientConfiguration.java:184) ~[spring-cloud-netflix-core-1
at com.netflix.client.AbstractLoadBalancerAwareClient$1.call(AbstractLoadBalancerAwareClient.java:106) ~[ribbon-loadbalancer-2.1.5.jar:2.1.5]
at com.netflix.loadbalancer.reactive.LoadBalancerCommand$3$1.call(LoadBalancerCommand.java:303) ~[ribbon-loadbalancer-2.1.5.jar:2.1.5]
at com.netflix.loadbalancer.reactive.LoadBalancerCommand$3$1.call(LoadBalancerCommand.java:287) ~[ribbon-loadbalancer-2.1.5.jar:2.1.5]
at rx.internal.util.ScalarSynchronousObservable$4.call(ScalarSynchronousObservable.java:223) ~[rxjava-1.1.5.jar:1.1.5]
at rx.internal.util.ScalarSynchronousObservable$4.call(ScalarSynchronousObservable.java:220) ~[rxjava-1.1.5.jar:1.1.5]
at rx.Observable.unsafeSubscribe(Observable.java:8460) ~[rxjava-1.1.5.jar:1.1.5]
... 150 common frames omitted
Caused by: java.net.URISyntaxException: Expected scheme-specific part at index 6: https:
at java.net.URI$Parser.fail(URI.java:2848) ~[na:1.8.0_92]
at java.net.URI$Parser.failExpecting(URI.java:2854) ~[na:1.8.0_92]
at java.net.URI$Parser.parse(URI.java:3057) ~[na:1.8.0_92]
at java.net.URI.<init>(URI.java:673) ~[na:1.8.0_92]
My Gateway config
server:
  ssl:
    key-store: classpath:my.jks
    key-store-password: secret
    key-password: secret
spring:
  application:
    name: mille-gateway
  cloud:
    config:
      discovery:
        enabled: true
        serviceId: mille-config-server
eureka:
  client:
    healthcheck:
      enabled: true
ribbon:
  IsSecure: true
zuul:
  ignoredServices: '*'
  routes:
    test:
      path: /test/**
      serviceId: mille-test2
test:
  ribbon:
    ReadTimeout: 5000
    MaxAutoRetries: 2
    IsSecure: true
My Registry (Eureka) server config:
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
  server:
    enableSelfPreservation: false
My client configuration
spring:
  application:
    name: mille-test2
  cloud:
    config:
      discovery:
        enabled: true
        serviceId: mille-config-server
eureka:
  client:
    healthcheck:
      enabled: true
server:
  port: 50000
  ssl:
    key-store: classpath:my.jks
    key-store-password: secret
    key-password: secret
eureka:
  client:
    enabled: true
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    nonSecurePortEnabled: false
    securePortEnabled: true
    securePort: ${server.port}
    instanceId: ${spring.application.name}:${spring.application.instance_id:${random.value}}
    statusPageUrl: https://${eureka.hostname}:${server.port}/info
    healthCheckUrl: https://${eureka.hostname}:${server.port}/health
    homePageUrl: https://${eureka.instance.hostname}:${server.port}/
    secureVirtualHostName: ${spring.application.name}
    metadataMap:
      hostname: ${eureka.instance.hostname}
      securePort: ${server.port}
What could be the problem?
I have traced this problem in the debugger in Brixton.SR6.
The FeignLoadBalancer.reconstructURIWithServer(...) calls RibbonUtils.updateToHttpsIfNeeded(...) before calling the superclass method that actually adds the server to the uri.
The uri passed in is "", which is the normal case when the incoming url matches the zuul mapping exactly and using service discovery to get the server/port.
updateToHttpsIfNeeded() adds "https" and calls UriComponentsBuilder via build(true) to create an instance of HierarchicalUriComponents.
It then calls toUri() on that instance, which, because of the true passed in to build(), follows the encode == true path and calls new URI(toString()) (line 415). That toString() returns "https:" since we have set the scheme, but not the server or port.
The URI does not like that at all and throws java.net.URISyntaxException: Expected scheme-specific part at index 6: https:.
As to solving the problem, perhaps FeignLoadBalancer should not ensure https until after the server and port are set in the URI? Or perhaps RibbonUtils.updateToHttpsIfNeeded should set the server and port in the UriComponentsBuilder before calling toUri() - it has the Server instance.
This accounts for why it only happens for secure connections. I have not yet found any workaround when using exact url matches in the mapping, other than reverting to Brixton.SR5, which works fine, because the call to build() does not pass the true flag, so new URI() is not called. HTH
Edit: see also spring-cloud-netflix issue 1099.
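The failure is easy to reproduce outside Spring; a scheme with nothing after it is rejected by java.net.URI with exactly the message in the stack trace above:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class SchemeOnlyUri {
    // Returns "ok" for a parseable URI, otherwise the parser's error message.
    static String tryParse(String s) {
        try {
            new URI(s);
            return "ok";
        } catch (URISyntaxException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Ribbon builds "https:" before the server and port are filled in,
        // and URI refuses a scheme with an empty scheme-specific part.
        System.out.println(tryParse("https:"));          // Expected scheme-specific part at index 6: https:
        System.out.println(tryParse("https://host:443")); // ok
    }
}
```

That is why the fix has to make sure the server and port are set before toUri() is ever called.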
Not sure if you were able to resolve this but I just ran into the same thing. It appears to only occur though when Zuul attempts to proxy a request via https (as you've indicated) to a service endpoint "root".
For instance, taking your Zuul config above: hitting the endpoint /test will likely cause that exception; however, hitting any other endpoint proxied to the mille-test2 service that isn't mapped to the root (e.g. /test/something) will probably work. At least it has for me. So as a temporary workaround I've simply created different endpoints that extend the root, such as /test/new. Kludgy, yes, but workable.
I'm currently using Spring Cloud Brixton.SR4.