I am trying to get my second project (prm) into the cloud.
Both projects (pyp and prm) access the same database with the same credentials.
The first project succeeds, but the second gets Access denied for user 'root'.
Some excerpts from my definitions:
apiVersion: v1
kind: Secret
metadata:
  name: pyp-secret
data:
  mysql_password: "<password>"
apiVersion: v1
kind: Service
metadata:
  name: pyp-db
spec:
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: pyp
    service: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pyp-db
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: pyp
      service: mysql
  template:
    metadata:
      labels:
        app: pyp
        service: mysql
    spec:
      containers:
        - image: mysql:8
          name: pyp-db
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pyp-secret
                  key: mysql_password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prm
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: prm
      service: cron
  template:
    metadata:
      labels:
        app: prm
        service: cron
    spec:
      containers:
        - image: prm-image-name
          imagePullPolicy: Always
          name: prm
          env:
            - name: MYSQL_HOST
              value: pyp-db
            - name: MYSQL_USER
              value: root
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pyp-secret
                  key: mysql_password
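One note on the Secret excerpt above: values under a Secret's data key must be base64-encoded, so a raw password belongs under stringData instead. To rule out a mangled password, the stored value can be checked directly (an illustrative command, assuming the secret above):

kubectl get secret pyp-secret -o jsonpath='{.data.mysql_password}' | base64 --decode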
These are excerpts from the log, where you can see the URL used to connect to the database and the error I get:
This is from my Java application:
static Connection init(String host, String user, String password) {
    Connection con = null;
    try {
        if (con == null) {
            Class.forName("com.mysql.cj.jdbc.Driver").newInstance();
            String url = "jdbc:mysql://" + host + ":3306/PP_Master?user=" + user + "&password=" + password;
            logger.trace("DB url:" + url);
            con = DriverManager.getConnection(url);
        }
    } catch (Exception ex) {
        logger.error("init: ", ex);
    }
    return con;
}
My cloud is hosted on Minikube and the database is MySQL 8.0.27. It is accessible from my localhost with the same credentials. My other project (pyp) runs in Tomcat and connects with credentials from a connection pool defined in context.xml. Both projects use the same driver, and pyp accesses the database just fine.
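For reference, a hypothetical sketch of the kind of Tomcat context.xml resource described above; the resource name and pool settings are invented for illustration, and only the driver, URL, and user follow from the question:

<Context>
  <!-- hypothetical JNDI DataSource; name and pool sizes are illustrative -->
  <Resource name="jdbc/PPMaster"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="com.mysql.cj.jdbc.Driver"
            url="jdbc:mysql://pyp-db:3306/PP_Master"
            username="root"
            password="<password>"
            maxTotal="20"
            maxIdle="10"/>
</Context>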
These are the users defined in the database:
I've also counted the number of characters in the URL with url.length(). That gave 72 characters, which matches the expected size, so there are no extra invisible characters in the password.
About the project (pyp) that succeeds in accessing the database: some days ago I got an SQL syntax error from the first statement against the database, even though it was only "USE PP_Master" and it had worked before. There were no errors in the logs.
I had to delete the Minikube container and start a new one.
That gave me access to the database from the pyp project again.
I wonder if one project using a connection pool could reserve access to the database, so that no other project could access it?
I've now tried switching from a connection pool to a single connection at a time in the pyp project, but that didn't solve the problems with the prm project. I also tried simply removing the pyp deployment and pod, but that didn't help the prm project either. So that hypothesis seems to be wrong.
I looked at the pyp-db log (this is the pod containing the database).
I don't know whether any of this information has an impact on my problem.
I have also tried deleting the Minikube cluster again. This time I only deployed the pyp-db and prm pods, to avoid a possible conflict with the pyp pod. But to no avail: the error connected to the prm pod persists.
So it must be something between prm and pyp-db that has nothing to do with the pyp pod; I've verified that it is not due to a conflict with that pod.
I really hope someone is able to help me; I've been stuck on this problem for several days. If more information would help, just ask.
Eventually, I managed to get rid of the "access denied" problem.
I just changed the Java code to this:

String url = "jdbc:mysql://" + host + ":3306/PP_Master";
con = DriverManager.getConnection(url, user, password);

Before, it was:

String url = "jdbc:mysql://" + host + ":3306/PP_Master?user=" + user + "&password=" + password;
con = DriverManager.getConnection(url);

Passing the user and password as separate arguments keeps them out of the URL, where they would otherwise be parsed as part of the query string.
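A plausible explanation, though not confirmed in the thread: in the old form the password was part of the URL's query string, so any character with special meaning in a URL (&, %, +, #, and so on) was parsed as URL syntax rather than as part of the password. If the credentials had to stay in the URL, they would need to be percent-encoded first; a minimal sketch, assuming Java 10+ for the Charset overload:

// requires java.net.URLEncoder and java.nio.charset.StandardCharsets
String encodedUser = URLEncoder.encode(user, StandardCharsets.UTF_8);
String encodedPassword = URLEncoder.encode(password, StandardCharsets.UTF_8);
String url = "jdbc:mysql://" + host + ":3306/PP_Master?user=" + encodedUser
        + "&password=" + encodedPassword;
con = DriverManager.getConnection(url);

Passing the credentials as separate arguments, as in the fix above, sidesteps the encoding question entirely.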
Related
I'm developing a Spring Boot application, a group of microservices that I run as Docker containers, and I'm using MongoDB as my database. I create a root user and a regular user when creating the Mongo container, using the init-mongo.sh and stage_mongo.env files, and then try to connect to the database from the other microservices using the stage_mongo_auth.env file. When I connect as the root user everything goes fine, but when I connect as the regular user I get an authentication error.
Error:
com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' on server mongodb:27017. The full response is {"ok": 0.0, "errmsg": "Authentication failed.", "code": 18, "codeName": "AuthenticationFailed"}
    at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:198) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:413) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:337) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:101) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:45) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.SaslAuthenticator.sendSaslStart(SaslAuthenticator.java:230) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.SaslAuthenticator.getNextSaslResponse(SaslAuthenticator.java:137) ~[mongodb-driver-core-4.6.0.jar!/:na]
docker-compose.yaml
version: '3.3'
services:
  mongodb:
    image: mongo:6.0.2
    restart: unless-stopped
    env_file:
      - ../config/stage_mongo.env
    volumes:
      - ../mongodb/db:/data/db
      - ./init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh
    ports:
      - 30430:27017
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 2GB
    logging:
      driver: "json-file"
      options:
        tag: "mongodb"
        max-size: 256m
  api:
    image: amazoncorretto:17.0.3-alpine
    depends_on:
      - mongodb
    restart: unless-stopped
    env_file:
      - ../config/stage_mongo_auth.env
    volumes:
      - ./java/api-0.0.1-SNAPSHOT.jar:/gjava/java.jar
      - ../files:/files
    environment:
      spring_data_mongodb_host: mongodb
    command: /bin/sh -c "cd /gjava && chmod +x /gjava/*.jar && java -Xmx2g -Dspring.profiles.active=dev -jar /gjava/java.jar"
    ports:
      - 30429:30329
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2GB
    logging:
      driver: "json-file"
      options:
        tag: "api"
        max-size: 256m
init-mongo.sh
mongo -- "$MONGO_INITDB_DATABASE" <<EOF
var rootUser = '$MONGO_INITDB_ROOT_USERNAME';
var rootPassword = '$MONGO_INITDB_ROOT_PASSWORD';
var admin = db.getSiblingDB('admin');
admin.auth(rootUser, rootPassword);
var user = '$MONGO_INITDB_USERNAME';
var passwd = '$MONGO_INITDB_PASSWORD';
db.createUser({user: user, pwd: passwd, roles: ["readWrite"]});
EOF
stage_mongo.env
MONGO_INITDB_ROOT_USERNAME=someRootName
MONGO_INITDB_ROOT_PASSWORD=someRootPassword
MONGO_INITDB_USERNAME=someName
MONGO_INITDB_PASSWORD=somePassword
MONGO_INITDB_DATABASE=someDatabaseName
stage_mongo_auth.env
spring_data_mongodb_username=someName
spring_data_mongodb_password=somePassword
I've looked through my code several times, but I can't find the reason for this error. I've also tried searching the internet for answers, but I haven't found anything either.
I will be grateful for any help.
Update 1
I found the reason why some login credentials work and others don't: the commands in init-mongo.sh never run. I removed the file entirely and authentication behaved exactly the same, so the script has no effect.
I've tried different ways of entering the commands, like this:
mongo <<EOF
var rootUser = "${MONGO_INITDB_ROOT_USERNAME}";
var rootPassword = "${MONGO_INITDB_ROOT_PASSWORD}";
db.getSiblingDB('admin').auth(rootUser, rootPassword);
use ${MONGO_INITDB_DATABASE}
db.createCollection("someCollectionName")
use admin
db.createUser(
  {
    user: "${MONGO_INITDB_USERNAME}",
    pwd: "${MONGO_INITDB_PASSWORD}",
    roles: [ { role: "readWrite", db: "${MONGO_INITDB_DATABASE}" } ]
  }
)
EOF
I've tried adding the :ro suffix in docker-compose:

volumes:
  - ../mongodb/db:/data/db
  - ./init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh:ro
but it still doesn't work.
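One likely cause, offered as an assumption since the thread doesn't confirm it: the mongo:6.x images ship only the new mongosh shell; the legacy mongo binary was removed in MongoDB 6.0, so an init script that invokes mongo fails with "command not found" inside docker-entrypoint-initdb.d and never creates the user. A sketch of the same init script using mongosh:

mongosh <<EOF
use admin
db.auth('$MONGO_INITDB_ROOT_USERNAME', '$MONGO_INITDB_ROOT_PASSWORD')
use $MONGO_INITDB_DATABASE
db.createUser({
  user: '$MONGO_INITDB_USERNAME',
  pwd: '$MONGO_INITDB_PASSWORD',
  roles: [ { role: 'readWrite', db: '$MONGO_INITDB_DATABASE' } ]
})
EOF

Since the user is created in someDatabaseName rather than admin, it is also worth checking that the Spring client's spring.data.mongodb.authentication-database points at the database where the user actually lives.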
I want to apply the following yaml multiple times with the fabric8 kubernetes-client:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: my-storage-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
I apply the yaml using createOrReplace()
Config config = new ConfigBuilder()
    .withMasterUrl("https://my-kubernetes-root:6443")
    .withNamespace("my-namespace")
    .withOauthToken(token)
    .withTrustCerts(true)
    .build();
KubernetesClient client = new DefaultKubernetesClient(config);

ClasspathResource resource = new ClasspathResource("my-pvc.yaml");
client.load(resource.getInputStream()).createOrReplace(); // this works
TimeUnit.MINUTES.sleep(1); // volumeName is dynamically assigned during this period
client.load(resource.getInputStream()).createOrReplace(); // this fails
This works the first time (when the PVC does not exist in the namespace) but fails the second time createOrReplace() is called for the same yaml, with the following error:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PUT at: https://my-kubernetes-root:6443/api/v1/namespaces/my-namespace/persistentvolumeclaims/my-pvc. Message: PersistentVolumeClaim "my-pvc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims

core.PersistentVolumeClaimSpec{
    AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteMany"},
    Selector: nil,
    Resources: core.ResourceRequirements{Requests: core.ResourceList{s"storage": {i: resource.int64Amount{value: 1073741824}, s: "1Gi", Format: "BinarySI"}}},
-   VolumeName: "",
+   VolumeName: "pvc-b79ebfcb-d5cb-4450-9f17-d69ec10b8712",
    StorageClassName: &"my-storage-class",
    VolumeMode: &"Filesystem",
    DataSource: nil,
}
Notice how "volumeName" is not present in the yaml (nil) but in the error message "volumeName" is changing from empty string to the dynamically assigned volumeName.
I can reproduce this exact same behavior using kubectl and empty string for volumeName
I can kubectl apply the following yaml as many times as I like
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: my-storage-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
But if I kubectl apply a yaml with a volumeName of empty string, it works the first time and fails the second time (the error message is the same as above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: my-storage-class
  volumeName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
How can I get KubernetesClient to behave the same as kubectl apply? Is there any way that I can apply the same PersistentVolumeClaim yaml multiple times with KubernetesClient?
As a workaround, I have switched to using a StatefulSet to manage my Pod, which allows me to specify volumeClaimTemplates.
For example, the following yaml can be applied multiple times:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "my-storage-class"
        resources:
          requests:
            storage: 1Gi
The Kubernetes API doesn't allow you to change some fields of a resource once it is in a certain state. For a PVC, after its state is Bound, the volumeName field in its spec is immutable; you can't change the volume your PVC references.
When you apply a bare PersistentVolumeClaim manifest like the one in your question (a manifest that has no volumeName key, or whose volumeName is nil), and the cluster has a default StorageClass, Kubernetes automatically creates a corresponding PersistentVolume object (the abstraction of your volume from the underlying storage provider) and assigns its name to your PersistentVolumeClaim via the volumeName key.
After a PersistentVolumeClaim gets bound to a PersistentVolume and its volumeName field is populated, you can no longer edit or patch the volumeName field; that's why your kubectl apply command fails the second time.
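A quick way to see the populated field on a bound claim (illustrative command, using the PVC name from the question):

kubectl get pvc my-pvc -o jsonpath='{.spec.volumeName}'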
It's now clear that a Replace verb is not going to work for a PVC in the Bound state. You can Create one, and you can edit or Patch some of its fields (like increasing the size). I'm not familiar with the Java SDK, but it should include APIs for creating, getting, patching, and deleting a PVC.
You can first check for the existence of your PVC using the GetNamespacedPersistentVolumeClaim API; then, if it's not in the cluster, create it using the CreateNamespacedPersistentVolumeClaim API. Referencing the axiom: "Explicit is better than implicit."
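In fabric8 terms, a minimal sketch of that explicit check-then-create approach, assuming the client and resource variables from the question (untested; load(...).get() parses the manifest without contacting the server):

// fetch returns null when the PVC does not exist yet
PersistentVolumeClaim existing = client.persistentVolumeClaims()
        .inNamespace("my-namespace")
        .withName("my-pvc")
        .get();
if (existing == null) {
    // parse the YAML into a PVC object, then create it
    PersistentVolumeClaim pvc = client.persistentVolumeClaims()
            .load(resource.getInputStream())
            .get();
    client.persistentVolumeClaims()
            .inNamespace("my-namespace")
            .create(pvc);
}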
I raised this issue on the fabric8 kubernetes-client codebase and got an interesting response which may do what I want. Note that I haven't tried it yet, since I have a workaround using StatefulSet and volumeClaimTemplates (see my other answer):

The problem is the replace operation. It is not a substitute for apply / patch. Instead of createOrReplace you can use the new support for server side apply.
And the relevant section from the link:

Server Side Apply: basic usage of server side apply is available via Patchable. At its simplest you just need to call:
client.services().withName("name").patch(PatchContext.of(PatchType.SERVER_SIDE_APPLY), service);
for any create or update. This can be a good alternative to using createOrReplace, as it is always a single API call and does not issue a replace/PUT, which can be problematic.
If the resources may be created or modified by something other than a
fabric8 patch, you will need to force your modifications:
client.services().withName("name").patch(new PatchContext.Builder().withPatchType(PatchType.SERVER_SIDE_APPLY).withForce(true).build(), service);
I have Elasticsearch 8.1 running in Docker with this docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.1.0
    container_name: es-node
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    volumes:
      - ./elastic-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  kibana:
    image: docker.elastic.co/kibana/kibana:8.1.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOST=http://localhost:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
I'm trying to make a simple GET request to the ES cluster using org.elasticsearch.client.RestClient.
Request:
Request request = new Request("GET", "_cluster/health");
try {
    return restClient.performRequest(request).toString();
} catch (IOException e) {
    throw new RuntimeException(e);
}
Rest client initialisation:
var hosts = buildClusterHosts(transportAddresses);
restClient = RestClient.builder(hosts).build();
if (isElasticSniffEnabled) {
    sniffer = Sniffer.builder(restClient).build();
}
var esTransport = new RestClientTransport(restClient, new JacksonJsonpMapper());
elasticsearchClient = new ElasticsearchClient(esTransport);
Main method:
var es = ElasticEightClient.builder()
    .transportAddresses("localhost:9200")
    .isElasticSniffEnabled(true)
    .build();
System.out.println("Started elasticsearch with health: " + es.getHealth());
The buildClusterHosts() method correctly builds an array of HttpHost (in this case only one) and passes it to the REST client builder.
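For context, a hypothetical sketch of what such a buildClusterHosts() method could look like; the method body is not shown in the question, so this is purely illustrative:

// requires java.util.Arrays and org.apache.http.HttpHost
private static HttpHost[] buildClusterHosts(String transportAddresses) {
    // split "host1:port1,host2:port2" into HttpHost entries
    return Arrays.stream(transportAddresses.split(","))
            .map(String::trim)
            .map(address -> {
                String[] parts = address.split(":");
                return new HttpHost(parts[0], Integer.parseInt(parts[1]), "http");
            })
            .toArray(HttpHost[]::new);
}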
In theory this should be enough, but I keep getting Caused by: java.net.ConnectException: Timeout connecting to [/172.20.0.2:9200], and I'm not sure why.
Tldr;
It seems you are confusing the transport port and the REST API port of Elasticsearch.
To fix it, you will first need to expose the transport layer's port, which is 9300 by default:
services:
  elasticsearch:
    ports:
      - 9300:9300
Then update the main method to use it:

var es = ElasticEightClient.builder()
    .transportAddresses("localhost:9300")
    .isElasticSniffEnabled(true)
    .build();
System.out.println("Started elasticsearch with health: " + es.getHealth());
Figured out what the problem was: to use the Sniffer, you need to add http.publish_host=localhost as an environment variable in the docker-compose file.
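In the compose file from the question, that would look like this:

services:
  elasticsearch:
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
      - http.publish_host=localhost   # the address the Sniffer hands back to clients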
I am trying to do the simple task of creating a microservice with Java and MySQL.
I am using docker-compose on Windows 10 with Docker Desktop.
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
My docker-compose.yml is:
version: '3.1'
services:
  db:
    #image: mysql:5.7.22
    image: mysql:latest
    ports: ["3306:3306"]
    hostname: db
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=Users
    container_name: mysqldatabase
  web:
    build: docker-mysql-connector
    image: docker-mysql-connector
    hostname: web
    tty: true
    depends_on:
      - db
    links:
      - db:db
My Java code to check the connectivity is:
package com.prasad.docker.mysql;

import java.net.InetAddress;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Map;

public class MySQLConnection {
    public static void main(String[] args) throws Exception {
        String ipAddr = InetAddress.getLocalHost().getHostName();
        System.out.println("Printing IP address of the host " + ipAddr);
        Map<String, String> env = System.getenv();
        for (String envName : env.keySet()) {
            System.out.format("%s=%s%n", envName, env.get(envName));
        }
        Thread.sleep(10000);

        boolean connected = false;
        while (!connected) {
            try {
                String url = "jdbc:mysql://db:3306/Users?autoReconnect=false&useSSL=false";
                String user = "root";
                String password = "root";
                System.out.println("Connecting to URL " + url);
                Class.forName("com.mysql.cj.jdbc.Driver").newInstance();
                Connection conn = DriverManager.getConnection(url, user, password);
                System.out.println("Connection was successful");
                connected = true;
            } catch (Exception e) {
                System.err.println("Error connecting to database");
                e.printStackTrace();
                Thread.sleep(5000);
            }
        }
    }
}
I get the following error in the log of the web container where my Java code is running:
Connecting to URL jdbc:mysql://db:3306/Users?autoReconnect=false&useSSL=false
Error connecting to database
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:590)
    at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:57)
    at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1606)
    at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:633)
    at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:347)
    at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:219)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:247)
    at com.prasad.docker.mysql.MySQLConnection.main(MySQLConnection.java:34)
Caused by: com.mysql.cj.core.exceptions.CJCommunicationsException: Communications link failure
I am able to test the connection successfully from MySQL Workbench and from the MySQL client inside the container; I get the connectivity error only from the Java code. I tried both the latest version and v5.7.22 of the MySQL image, with the same error in both cases. Any help appreciated.
Have you mentioned that the instance is for localhost?
version: '3.1'
services:
  db:
    #image: mysql:5.7.22
    image: mysql:latest
    ports: ["3306:3306"]
    hostname: db
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=Users
    container_name: mysqldatabase
  web:
    build: docker-mysql-connector
    image: docker-mysql-connector
    hostname: web
    tty: true
    depends_on:
      - db
    links:
      - db:db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://docker-mysql:3306/database?autoReconnect=true&useSSL=false
Adding network_mode: "host" may also help.
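A different avenue worth trying, offered as a suggestion rather than a confirmed fix: mysqld needs several seconds to initialize on first start, and depends_on only waits for the container to start, not for the server to accept connections. With a compose version that supports healthcheck conditions, the readiness wait can be moved out of the Java retry loop:

services:
  db:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=Users
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-proot"]
      interval: 5s
      retries: 10
  web:
    build: docker-mysql-connector
    depends_on:
      db:
        condition: service_healthy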
I have a multi-project Gradle configuration, so I've got a build.gradle for each subproject and another one where I define general tasks.
Basically, in the general build.gradle file I set up deployment environments, one each for production, pre-production, development, and so on.
I define several containers using a class:
class RemoteContainer {
    String name
    String container
    String hostname
    Integer port
    String username
    String password
    String purpose
}
So I set the purpose of a container by setting its purpose field to 'production', 'pre-production', or 'development'.
Then I'm able to create several containers:
def developmentRemoteContainers = [
    new RemoteContainer(
        name: 'wildfly8',
        container: 'wildfly8x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'development'
    ),
    new RemoteContainer(
        name: 'glassfish4',
        container: 'glassfish4x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'development'
    )
]
def preproductionRemoteContainers = [
    new RemoteContainer(
        name: 'wildfly8',
        container: 'wildfly8x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'pre-production'
    ),
    new RemoteContainer(
        name: 'glassfish4',
        container: 'glassfish4x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'pre-production'
    )
]
def productionUserRemoteContainers = [
    new RemoteContainer(
        name: 'wildfly8',
        container: 'wildfly8x',
        hostname: '---',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'production'
    ),
    new RemoteContainer(
        name: 'glassfish4',
        container: 'glassfish4x',
        hostname: '----',
        port: ----,
        username: '----',
        password: '----',
        purpose: 'production'
    )
]
After that, I create tasks according to the content of each remote container.
Example tasks:
remoteContainers.each { config ->
    task "deployRemote${config.name.capitalize()}"(type: com.bmuschko.gradle.cargo.tasks.remote.CargoDeployRemote) {
        description = "Deploys WAR to remote Web Application Server: '${config.name}'."
        containerId = config.container
        hostname = config.hostname
        port = config.port
        username = config.username
        password = config.password
        dependsOn war
    }

    task "undeployRemote${config.name.capitalize()}"(type: com.bmuschko.gradle.cargo.tasks.remote.CargoUndeployRemote) {
        description = "Undeploys WAR from remote Web Application Server: '${config.name}'."
        containerId = config.container
        hostname = config.hostname
        port = config.port
        username = config.username
        password = config.password
    }
}
This is how I create my deploy and undeploy tasks for each container and target environment.
As you can see, each deploy task depends on the war task. My projects contain a file with a string like ${stringKey}, which I need to replace according to each container's purpose.
So ${stringKey} must be replaced by config.purpose.
EDIT
There are basically two files:
Under /src/main/resources/META-INF/persistence.xml: this file contains the database server location. Depending on the environment, the database lives at a different IP/port/database. For example:

<property name="hibernate.ogm.datastore.host" value="${ip}"/>
<property name="hibernate.ogm.datastore.port" value="${port}"/>

Under /src/main/resources/configuration.settings.environtment: this file contains only the line scope = ${scope}.
The replacement must be made at war generation.
I have absolutely no idea how to do that.
Any ideas?
You can use ant.replace to do this:

task replaceTokens {
    doLast {
        ant.replace(
            file: "path/to/your/file",
            token: "stringtoreplace",
            value: config.purpose
        )
    }
}
war.dependsOn replaceTokens
I faced a somewhat similar issue. I found it easier to keep separate environment-specific (e.g. dev, qa, staging, prod) properties/settings/config and then load/apply the right one at a suitable time in the build life-cycle. The following links were helpful:
https://blog.gradle.org/maven-pom-profiles
Gradle: Copy different properties file depending on the environment and create jar
PS: I'm responding to a somewhat older question, but hopefully these pointers may help someone facing a similar issue.
You can try something like this if all you need is to replace your placeholders:

tasks.taskName.doFirst {
    exec {
        // single-quoted so Groovy does not interpolate ${stringKey}; perl sees the literal pattern
        commandLine "perl", "-pi", "-w", "-e", 's/\\$\\{stringKey\\}/' + config.purpose + '/g', "filePath"
    }
}
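For completeness, a sketch of the same replacement done with Gradle's built-in resource filtering instead of an external tool. expand() uses Groovy's SimpleTemplateEngine, which understands exactly the ${ip}, ${port}, and ${scope} placeholders shown in the question; the values below are illustrative, and config.purpose is assumed to be in scope:

processResources {
    filesMatching('**/persistence.xml') {
        // illustrative values; wire in the real host/port per environment
        expand(ip: '127.0.0.1', port: '3306', scope: config.purpose)
    }
    filesMatching('**/configuration.settings.environtment') {
        expand(scope: config.purpose)
    }
}

One caveat: expand() throws on any ${...} token it has no value for, so every placeholder in the filtered files needs an entry in the map.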