Spring Cloud Config throws NPE for empty array in the file - java

We used Spring Cloud Config version 2.1 and it worked.
We updated to Spring Cloud Config 2.2, and now it does not work.
More details are here:
https://github.com/spring-cloud/spring-cloud-config/issues/1599
I reported the issue to speed things up, but maybe it is not a bug at all. I do not know, so I am asking for help here.
Our config file: python-service.yml
resources:
- resource1
- resource2
newResources: []
As I learned, the Spring Cloud Config client makes requests to fetch configuration, passing the header
Accept: application/vnd.spring-cloud.config-server.v2+json.
(Note: the Spring Cloud Config 2.1 client does not send this header; it sends Accept: application/json instead.)
In Spring Cloud Config v2.1, the request
GET http://localhost:8888/python-service/dev
Accept: application/vnd.spring-cloud.config-server.v2+json
returns:
{
"name": "python-service",
"profiles": [
"dev"
],
"label": null,
"version": null,
"state": null,
"propertySources": [
{
"name": "file:/configuration/python-service.yml",
"source": {
"resources[0]": "resource1",
"resources[1]": "resource2",
"newResources": []
}
}
]
}
However, in Spring Cloud Config v2.2, it fails:
{
"timestamp": "2020-04-24T08:38:19.803+0000",
"status": 500,
"error": "Internal Server Error",
"message": "Could not construct context for config=python-service profile=dev label=null includeOrigin=true; nested exception is java.lang.NullPointerException",
"path": "/python-service/dev"
}
The funny thing is that there is no exception in the config-service logs!
If I remove the Accept header, I get this instead (version 2.2):
{
"name": "python-service",
"profiles": [
"dev"
],
"label": null,
"version": null,
"state": null,
"propertySources": [
{
"name": "file:/configuration/python-service.yml",
"source": {
"resources[0]": "resource1",
"resources[1]": "resource2",
"newResources": ""
}
}
]
}
Here, why "newResources": "" became an empty string, when an empty array is expected, is another question.
To sum up:
1) How do I use an empty array in Spring Cloud Config?
2) Why is there no log message about the NPE in the Spring config-service logs?
3) Without the Accept header, why did "newResources": "" become an empty string when I expected an empty array?
For now, I can remove the empty array from my config, but that is risky because our config is used by many services, and removing it breaks backward compatibility.

It turns out this is a bug in Spring Boot:
https://github.com/spring-cloud/spring-cloud-config/issues/1572#issuecomment-620496235
https://github.com/spring-projects/spring-boot/issues/20506
Possible options:
1) Wait until it is fixed and update the libraries.
2) What we did: replace the empty array with an empty element.
newResources:
anotherField: value
Alternatively, use null; just make sure your code can handle it. Also note that an empty array can end up being treated as an empty string, which I discovered in the debugger.
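Until the fix lands, a defensive workaround on the consuming side is to normalize whatever the binder hands you before using it. A minimal plain-Java sketch, assuming the value arrives untyped (`normalizeToList` is a hypothetical helper, not part of Spring):

```java
import java.util.Collections;
import java.util.List;

public class ConfigNormalizer {

    // Hypothetical helper: a config value that should be a list can arrive
    // as null or as "" (empty string) depending on the client version;
    // coerce both to an empty list so downstream code never sees a
    // surprising type.
    @SuppressWarnings("unchecked")
    public static List<String> normalizeToList(Object raw) {
        if (raw == null) {
            return Collections.emptyList();
        }
        if (raw instanceof String && ((String) raw).isEmpty()) {
            return Collections.emptyList();
        }
        if (raw instanceof List) {
            return (List<String>) raw;
        }
        throw new IllegalArgumentException("Unsupported config value: " + raw);
    }
}
```

This keeps the rest of the code agnostic to whether the server returned [], "", or nothing at all.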


How to translate the messages from the micronaut-problem-json library?

What: I would like to know how to translate the messages for the micronaut-problem-json library. Does it support src/main/resources/i18n/messages.properties? This information is not documented on the GitHub project's page.
The project: https://github.com/micronaut-projects/micronaut-problem-json/
Why: the motivation is to support internationalization of the messages.
This is a typical payload generated by the library in case of a constraint validation error:
{
"type": "https://zalando.github.io/problem/constraint-violation",
"title": "Constraint Violation",
"status": 400,
"violations": [
{
"field": "create.signup.username",
"message": "size must be between 3 and 30"
}
]
}
I would like to add support to other languages, like Portuguese, Spanish, etc. When an HTTP request from the client includes the HTTP header Accept-Language: pt the server should return the following payload:
{
"type": "https://zalando.github.io/problem/constraint-violation",
"title": "Violação de integridade",
"status": 400,
"violations": [
{
"field": "create.signup.username",
"message": "o tamanho deve ser entre 3 e 30"
}
]
}
Out of the box, neither micronaut-problem-json nor the underlying problem library uses Java internationalization.
There are customizations you can do within Micronaut; see the Micronaut Problem JSON user documentation.
You can always request the feature or submit a pull request.
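If you decide to wire up translation yourself, the standard Java mechanism to hook into is ResourceBundle. A minimal sketch of the lookup; the inlined bundle classes below are stand-ins for hypothetical src/main/resources/i18n/messages*.properties files (the library does not document support for them), and the key name is an assumption:

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

// Stand-in for a default messages.properties
class Messages extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {
            { "constraint.title", "Constraint Violation" },
        };
    }
}

// Stand-in for messages_pt.properties
class Messages_pt extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {
            { "constraint.title", "Violação de integridade" },
        };
    }
}

public class ProblemTitles {
    // Resolve a translated problem title for the locale parsed from the
    // Accept-Language header; falls back to the default bundle.
    public static String title(Locale locale) {
        ResourceBundle bundle = "pt".equals(locale.getLanguage())
                ? new Messages_pt()
                : new Messages();
        return bundle.getString("constraint.title");
    }
}
```

With real .properties files you would call ResourceBundle.getBundle("i18n.messages", locale) instead of instantiating bundles directly; the per-locale resolution logic is the same.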

JMeter POST response body is null

The response code is 200 with error count 0, but the response body is null.
Postman request: I hit the API from Postman with this request body.
{"body": {
"distance": 3466567.8,
"latitude": 45.7,
"longitude": 80.7}}
Postman response: I got this response in Postman.
"requestId": "LME2206071048390000193004",
"msgId": "LME2206071048390000191004",
"accDate": null,
"startDateTime": [
2022,
6,
7,
10,
48,
39,
100000000
],
"locale": "zh_CN",
"routeInfo": "LME"
.......................
Now if I hit the same API with the same request body from JMeter, I get the response below.
{
"msgId": null,
"source": null,
"locale": null,
"body": null,
"userId": null,
"uri": null,
"accDate": null,
"startDateTime": null,
"requestId": null,
"msgCd": "SYS00001",
"msgInfo": null}
................
Can anyone help me resolve this issue?
We cannot "help me how to resolve this issue" unless you share the Postman and JMeter HTTP Request sampler configurations.
It might be the case that you forgot something obvious, e.g. sending the relevant Content-Type header.
In general, if your request works fine in Postman, you can just record it using JMeter's HTTP(S) Test Script Recorder:
Start HTTP(S) Test Script Recorder (it's better to use Recording Template for this)
Import JMeter's certificate into Postman
Configure Postman to use JMeter as the proxy
Run your request in Postman
That's it, JMeter should intercept the request and generate the relevant HTTP Request sampler and friends so you can replay it with increased load
More information: How to Convert Your Postman API Tests to JMeter for Scaling
The above-mentioned issue is resolved.
The main issue was the order of the fields in the request body.
I was sending this request from Postman and it was working fine:
{ "body": {
"distance": 3466567.8,
"latitude": 45.7,
"longitude": 80.7
}}
And when I sent the same request from JMeter, it was not working; I was getting null values in the fields of the response body. Then I realized and changed the order of the fields in the request body according to my business logic, and it worked:
{
"body": {
"latitude": 45.7,
"longitude": 80.7,
"distance": 3466567.8
}}
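Note that JSON object members are unordered per the specification, so a server that depends on field order is arguably buggy; still, if you need to force a stable order when building a body by hand, an insertion-ordered map works. A minimal plain-Java sketch (field names mirror the request above; a real project would use Jackson, where LinkedHashMap also preserves order):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class OrderedBody {
    // Serialize a flat map of numeric fields to JSON, preserving the
    // insertion order of the map.
    public static String toJson(Map<String, Object> fields) {
        return fields.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\": " + e.getValue())
                .collect(Collectors.joining(", ", "{", "}"));
    }

    public static String build() {
        Map<String, Object> body = new LinkedHashMap<>();
        body.put("latitude", 45.7);   // this backend expects latitude first
        body.put("longitude", 80.7);
        body.put("distance", 3466567.8);
        return toJson(body);
    }
}
```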

How do I set up S3 and IAM properly to upload files to a S3 bucket?

I want to use S3 for hosting files which I upload via a Kotlin Spring Boot application. I followed the instructions, used various other documentation, and tried a few solutions for similar issues found on Stack Overflow. I always receive a 403 error. How do I set up S3 and IAM so I can upload the file? And how do I find out what's wrong? Any help would be appreciated.
I have activated access logging, which takes ages and hasn't helped me much yet, especially because it takes like 45 minutes to generate the logs. Ignoring the responses with status 200, the following messages appear in the logs (bucket represents the name of my bucket):
GET /bucket?encryption= HTTP/1.1" 404 ServerSideEncryptionConfigurationNotFoundError
GET /bucket?cors= HTTP/1.1" 404 NoSuchCORSConfiguration
GET /bucket?policy= HTTP/1.1" 404 NoSuchBucketPolicy
PUT /bucket?policy= HTTP/1.1" 400 MalformedPolicy
GET /bucket/?policyStatus HTTP/1.1" 404 NoSuchBucketPolicy
PUT /bucket?policy= HTTP/1.1" 403 AccessDenied
I build an AmazonS3 instance with
AmazonS3ClientBuilder.defaultClient()
I've checked the implementation, and it retrieves the credentials from the environment variables I've set up.
To submit the file, I use the following method in my S3Service implementation:
private fun uploadFileToBucket(fileName: String, file: File) {
s3client.putObject(
PutObjectRequest(bucketName, fileName, file)
.withCannedAcl(CannedAccessControlList.PublicRead)
)
}
This is my policy for the IAM user (the user inherits the policy from a group):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutAccountPublicAccessBlock",
"s3:GetAccountPublicAccessBlock",
"s3:ListAllMyBuckets",
"s3:HeadBucket"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket",
"arn:aws:s3:::bucket/*"
]
}
]
}
And this is the bucket policy:
{
"Version": "2012-10-17",
"Id": "PolicyId",
"Statement": [
{
"Sid": "StmtId",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::account:user/username"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucket"
}
]
}
In the end, I want to be able to put files onto the bucket and want to provide public access to those. For example I want to upload images from an Angular app, uploading them via my Spring Boot application and display them on the Angular app. Right now I can't even upload them via Postman without a 403 error.
The IAM policy could be shortened to this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket",
"arn:aws:s3:::bucket/*"
]
}
]
}
In other words, that second statement gives full S3 access, so the first statement in your IAM policy is pointless.
Your bucket policy probably has something wrong with it. It's hard to tell because, I think, you've replaced several values with placeholders. However, you don't need a bucket policy at all in this instance; I would just delete it.
As Mark B pointed out, the IAM policy can be shortened, and the bucket policy isn't needed anyway. However, uploading a file worked both with those settings and with the shortened ones. The problem in my code and with the S3 configuration was that I tried to modify the ACL without allowing that on my bucket. As I marked in the screenshot below, blocking new public ACLs must be disabled; otherwise the server responds with a 403 by design.
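If you manage the bucket from the AWS CLI rather than the console, the equivalent of unticking that checkbox is a put-public-access-block call. A sketch ("bucket" is a placeholder for your bucket name); be deliberate about this, since it re-enables public ACLs:

```shell
# Allow public ACLs (and public policies) on this bucket again.
# Requires the s3:PutBucketPublicAccessBlock permission.
aws s3api put-public-access-block \
  --bucket bucket \
  --public-access-block-configuration \
  BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
```

After this, the putObject call with CannedAccessControlList.PublicRead should no longer be rejected with 403 for ACL reasons.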

Can AWS Lambda expose only one Spring Boot API?

I have developed 4 APIs using Spring Boot. Now I am trying to enable AWS Lambda for serverless. Is it possible to expose the 4 APIs with a single Lambda?
Is it possible to expose 4 APIs with a single Lambda?
AWS Lambda is FaaS (Functions as a Service): one function per Lambda.
However, you can arguably achieve the intended functionality with a wrapper/proxy function as the entry point, routing requests to upstream methods/functions as needed.
It is described here: aws api gateway & lambda: multiple endpoint/functions vs single endpoint
Take a look at the following documentation on creating an API Gateway => Lambda proxy integration:
http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-set-up-simple-proxy.html
The following is merely a reworded explanation of what's given here: aws api gateway & lambda: multiple endpoint/functions vs single endpoint.
The AWS example has a good explanation. A Lambda request like the following:
POST /testStage/hello/world?name=me HTTP/1.1
Host: gy415nuibc.execute-api.us-east-1.amazonaws.com
Content-Type: application/json
headerName: headerValue
{
"a": 1
}
Will end up sending the following event data to your AWS Lambda function:
{
"message": "Hello me!",
"input": {
"resource": "/{proxy+}",
"path": "/hello/world",
"httpMethod": "POST",
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"cache-control": "no-cache",
"CloudFront-Forwarded-Proto": "https",
"CloudFront-Is-Desktop-Viewer": "true",
"CloudFront-Is-Mobile-Viewer": "false",
"CloudFront-Is-SmartTV-Viewer": "false",
"CloudFront-Is-Tablet-Viewer": "false",
"CloudFront-Viewer-Country": "US",
"Content-Type": "application/json",
"headerName": "headerValue",
"Host": "gy415nuibc.execute-api.us-east-1.amazonaws.com",
"Postman-Token": "9f583ef0-ed83-4a38-aef3-eb9ce3f7a57f",
"User-Agent": "PostmanRuntime/2.4.5",
"Via": "1.1 d98420743a69852491bbdea73f7680bd.cloudfront.net (CloudFront)",
"X-Amz-Cf-Id": "pn-PWIJc6thYnZm5P0NMgOUglL1DYtl0gdeJky8tqsg8iS_sgsKD1A==",
"X-Forwarded-For": "54.240.196.186, 54.182.214.83",
"X-Forwarded-Port": "443",
"X-Forwarded-Proto": "https"
},
"queryStringParameters": {
"name": "me"
},
"pathParameters": {
"proxy": "hello/world"
},
"stageVariables": {
"stageVariableName": "stageVariableValue"
},
"requestContext": {
"accountId": "12345678912",
"resourceId": "roq9wj",
"stage": "testStage",
"requestId": "deef4878-7910-11e6-8f14-25afc3e9ae33",
"identity": {
"cognitoIdentityPoolId": null,
"accountId": null,
"cognitoIdentityId": null,
"caller": null,
"apiKey": null,
"sourceIp": "192.168.196.186",
"cognitoAuthenticationType": null,
"cognitoAuthenticationProvider": null,
"userArn": null,
"userAgent": "PostmanRuntime/2.4.5",
"user": null
},
"resourcePath": "/{proxy+}",
"httpMethod": "POST",
"apiId": "gy415nuibc"
},
"body": "{\r\n\t\"a\": 1\r\n}",
"isBase64Encoded": false
}
}
Now you have access to all headers, url params, body etc. So you could use that to handle requests differently in your wrapper/proxy Lambda function and route to upstream functions as per your routing needs.
Many people use this methodology today, as opposed to creating a Lambda function for each method and an API Gateway resource for each endpoint.
There are pros and cons to this approach:
Deployment: if each lambda function is discrete then you can deploy them independently, which might reduce the risk from code changes (microservices strategy). Conversely you may find that needing to deploy functions separately adds complexity and is burdensome.
Self Description: API Gateway's interface makes it extremely intuitive to see the layout of your RESTful endpoints -- the nouns and verbs are all visible at a glance. Implementing your own routing could come at the expense of this visibility.
Lambda sizing and limits: if you proxy everything, you'll need to choose a memory size, timeout, etc. that accommodate all of your RESTful endpoints. If you create discrete functions, you can more carefully choose the memory footprint, timeout, dead-letter behavior, etc. that best meet the needs of the specific invocation.
More on monolithic vs. micro Lambdas here: https://hackernoon.com/aws-lambda-should-you-have-few-monolithic-functions-or-many-single-purposed-functions-8c3872d4338f
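To make the proxy idea concrete, here is a minimal plain-Java sketch of the routing step inside a single handler. The httpMethod and path keys mirror the proxy event above; the route table and its entries are hypothetical, not part of any AWS SDK:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ProxyRouter {
    // One entry per logical API, keyed by "METHOD path" as found in the
    // httpMethod and path fields of the API Gateway proxy event.
    private final Map<String, Function<String, String>> routes = new HashMap<>();

    public ProxyRouter() {
        routes.put("POST /hello/world", body -> "hello handler got: " + body);
        routes.put("GET /orders", body -> "order list");
    }

    // Dispatch a single proxy event to the matching upstream function,
    // returning a 404-style message when no route matches.
    public String handle(String httpMethod, String path, String body) {
        Function<String, String> route = routes.get(httpMethod + " " + path);
        if (route == null) {
            return "404 no route for " + httpMethod + " " + path;
        }
        return route.apply(body);
    }
}
```

In a real deployment the handler would parse these fields out of the event map API Gateway delivers, and each route function would be one of your four Spring-backed operations.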

RestTemplate and a variable number of variables

I have to POST JSON to api_url for login.
{
"username":"testre","password":"password"
}
When I use Postman to check this API, it replies with a successful authentication like below.
{
"status": "success",
"code": 200,
"message": "username, password validated.",
"data": [
{
"password": "password",
"username": "testre"
}
],
"links": [
{
"rel": "self",
"link": "http://localhost:2222/pizza-shefu/api/v1.0/customers/login/"
},
{
"rel": "profile",
"link": "http://localhost:2222/pizza-shefu/api/v1.0/customers/testre"
}
]
}
For an unauthorized attempt, the JSON is like below.
{
"status": "unauthorized",
"code": 401,
"errorMessage": "HTTP_UNAUTHORIZED",
"description": "credentials provided are not authorized."
}
Previously I coded the retrieval in plain Java, but now I want to refactor it using RestTemplate in Spring. The problem is that every example I have read is written for a fixed set of fields (https://spring.io/guides/gs/consuming-rest/), whereas here I get different fields depending on the login success status. I am new to Spring, so I am confused about how to create the class for the login reply that RestTemplate returns. (The example uses Quote quote = restTemplate.getForObject("http://gturnquist-quoters.cfapps.io/api/random", Quote.class); but I need to map a JSON object whose shape varies.) I couldn't figure out how to write the RestTemplate part.
As suggested by @Andreas:
Add the superset of all fields for all possible responses.
Identify the fields that are mandatory in every response and make them required.
Make the rest of the fields optional.
Upon receiving a response, check the status code and implement your logic accordingly.
If you are using Jackson for deserialization, all fields are optional by default (see this question).
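A minimal plain-Java sketch of such a superset response class; the class and field names are assumptions derived from the two payloads above. With Jackson behind RestTemplate, only the fields present in a given response get populated, and the rest stay null:

```java
import java.util.List;
import java.util.Map;

// Superset of the success and failure payloads shown above: a field
// absent from a given response simply remains null after deserialization.
public class LoginResponse {
    public String status;                    // present in both responses
    public int code;                         // present in both responses
    public String message;                   // success only
    public String errorMessage;              // failure only
    public String description;               // failure only
    public List<Map<String, String>> data;   // success only
    public List<Map<String, String>> links;  // success only

    // Branch on the status code, as suggested above.
    public boolean isSuccess() {
        return code == 200;
    }
}
```

The call would then look roughly like restTemplate.postForObject(apiUrl, credentials, LoginResponse.class), followed by a check on isSuccess() before reading the success-only fields.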
