How to change collection format when using Swagger and Spring Fox? - java

I am using Spring Fox to generate the OpenApi 3.0 document for Swagger-ui.
With springfox-boot-starter:3.0.0
@GetMapping("/test")
public void test(TestParams testParams) {
}
public class TestParams {
List<String> list1;
}
When I get the JSON api-docs, I see this:
{
// ...
"parameters": [
{
"name": "list1",
"in": "query",
"required": false,
"style": "pipeDelimited", << How to modify this to simple?
"schema": {
"type": "array",
"items": {
"type": "string"
}
}
}
]
// ...
}
Now I need a comma separator for this query field, not a pipe separator.
See https://swagger.io/docs/specification/serialization/
So how can I change the style from 'pipeDelimited' to 'simple'?
Thanks for the help!
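For reference, the serialization page linked above lists 'simple' only for path and header parameters; for a query-string array, comma separation corresponds to style 'form' with explode set to false. So the parameter object being asked about would presumably need to look roughly like this (a target sketch based on that page, not actual SpringFox output):
{
"name": "list1",
"in": "query",
"required": false,
"style": "form",
"explode": false,
"schema": {
"type": "array",
"items": {
"type": "string"
}
}
}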

Related

OpenApi specification generator - Supply values from multiple Enum classes for a String field

I'm writing a Spring Boot application in Kotlin, and I'm currently struggling to generate a specification for a DTO class that has a backing field of type String, which I want to later parse into one of two enum classes in the adapter layer.
I've tried the following approach using the oneOf annotation value, which seemed like it does what I want:
data class MyDto(
@Schema(
type = "string",
oneOf = [MyFirstEnum::class, MySecondEnum::class]
)
val identifier: String,
val someOtherField: String
) {
fun transform() { ... } // this will use the string identifier to pick the correct enum type later
}
Which results in the following OpenApi Spec:
"MyDto": {
"required": [
"someOtherField",
"identifier"
],
"type": "object",
"properties": {
"identifier": {
"type": "object", // <--- this should be string
"oneOf": [{
"type": "string",
"enum": [
"FirstEnumValue1",
"FirstEnumValue2",
"FirstEnumValue3"
]
}, {
"type": "string",
"enum": [
"SecondEnumValue1",
"SecondEnumValue2",
"SecondEnumValue3"
]
}
]
},
"someOtherField": {
"type": "string"
}
}
}
As you can see, the enum constants are (I think) correctly inlined into the specification, but the type annotation on the field, which I set to string, is bypassed, resulting in an object type, which I suppose is incorrect in this case.
My questions are:
Are my current code and the resulting spec valid with the object declaration instead of string?
Is there a better way to embed the enum values into the spec?
Edited to add: I'm using Spring Boot v2.7.8 in combination with springdoc-openapi v1.6.13 to automatically generate the OpenApi Spec.
The annotation-based approach that I showed in my question does not seem to generate a valid OpenApi spec with springdoc-openapi 1.6.13. The type of the field identifier needs to be String, as Helen mentioned in the comments.
I was able to solve the issue by creating the Schema for this particular class manually, using a GlobalOpenApiCustomizer Bean:
@Bean
fun myDtoCustomizer(): GlobalOpenApiCustomizer {
val firstEnum = StringSchema()
firstEnum.description = "First Enum"
MyFirstEnum.values().forEach { firstEnum.addEnumItem(it.name) }
val secondEnum = StringSchema()
secondEnum.description = "Second Enum"
MySecondEnum.values().forEach { secondEnum.addEnumItem(it.name) }
return GlobalOpenApiCustomizer {
it.components.schemas[MyDto::class.simpleName] = ObjectSchema()
.addProperty(
MyDto::identifier.name,
StringSchema().oneOf(
listOf(
firstEnum,
secondEnum
)
)
)
.addProperty(MyDto::someOtherField.name, StringSchema())
}
}
Which in turn produces the following Spec:
"MyDto": {
"type": "object",
"properties": {
"identifier": {
"type": "string",
"oneOf": [{
"type": "string",
"description": "First Enum",
"enum": [
"FirstEnumValue1",
"FirstEnumValue2",
"FirstEnumValue3"
]
}, {
"type": "string",
"description": "Second Enum",
"enum": [
"SecondEnumValue1",
"SecondEnumValue2",
"SecondEnumValue3"
]
}
]
},
"someOtherField": {
"type": "string"
}
}
}

Query Elastic DSL - Search query using spring boot data

I have the following mapping, generated via Java and Spring Boot Data Elasticsearch. It is generated from a User.java class, and the property "friends" is a List<Friends>, where Friends is defined in Friends.java; both classes act as the model. Essentially I want to produce a SELECT statement, but in Query DSL, using Spring Boot Data. The index is called user.
So I am trying to achieve the following: SELECT * FROM User WHERE (userName = "Tom" OR nickname = "Tom" OR friendsNickname = "Tom") AND userID = "3793"
or (verbose-dsl)
match where (userName="Tom" OR nickname="Tom" OR friendsNickname="Tom") AND userID="3793"
"mappings": {
"properties": {
"_class": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"userName": {
"type": "text"
},
"userId": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"friends": {
"type": "nested",
"properties": {
"firstName": {
"type": "text"
},
"lastName": {
"type": "text"
},
"age": {
"type": "text"
},
"friendsNickname": {
"type": "text"
}
}
},
"nickname": {
"type": "text"
}
}
}
I have tried the following code, but it returns no hits from Elasticsearch:
BoolQueryBuilder query =
QueryBuilders.boolQuery()
.must(
QueryBuilders.boolQuery()
.should(QueryBuilders.matchQuery("userName", "Tom"))
.should(QueryBuilders.matchQuery("nickname", "Tom"))
.should(
QueryBuilders.nestedQuery(
"friends",
QueryBuilders.matchQuery("friendsNickname", "Tom"),
ScoreMode.None)))
.must(QueryBuilders.boolQuery().must(QueryBuilders.matchQuery("userID", "3793")));
Apologies if this seems like a simple question; my knowledge of ES is quite thin, so sorry if the answer is obvious.
Great start!!
You just have a tiny mistake on the following line: you need to prefix the field name with the nested path, i.e. friends.friendsNickname
...
QueryBuilders.matchQuery("friends.friendsNickname", "Tom"), // note the "friends." prefix
...
Also, you have another typo: userID should read userId, according to your mapping.
Use friends.friendsNickname and also use termsQuery on userId.keyword:
BoolQueryBuilder query = QueryBuilders.boolQuery()
    .must(QueryBuilders.boolQuery()
        .should(QueryBuilders.matchQuery("userName", "Tom"))
        .should(QueryBuilders.matchQuery("nickname", "Tom"))
        .should(QueryBuilders.matchQuery("friends.friendsNickname", "Tom"))
    )
    .must(QueryBuilders.termsQuery("userId.keyword", "3793"));
Although I recommend changing userName and userId to keyword:
"userId": {
"type": "keyword",
"ignore_above": 256,
"fields": {
"text": {
"type": "text"
}
}
}
Then you don't have to append .keyword: you can just use userId instead of userId.keyword, and if you want full-text search on the field you can use userId.text. The disadvantage of a text type is that you can't use the field to sort your results, which is why I encourage ID fields to be of type keyword.
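To actually run this with Spring Data Elasticsearch, as the question mentions, here is a minimal sketch assuming spring-data-elasticsearch 4.x, the BoolQueryBuilder named query assembled above, and an ElasticsearchOperations bean injected as operations (the User document class comes from the question's own setup and is not shown):
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.data.elasticsearch.core.query.NativeSearchQuery;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;

// "query" is the BoolQueryBuilder built above; note that since "friends" is a
// nested type, the friendsNickname clause may still need the nestedQuery
// wrapper shown in the first answer.
NativeSearchQuery searchQuery = new NativeSearchQueryBuilder()
    .withQuery(query)
    .build();

SearchHits<User> hits = operations.search(searchQuery, User.class);
hits.forEach(hit -> System.out.println(hit.getContent()));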

MapStruct - Create Mapper 2 objects (simple and complex object)

I want to create two objects using MapStruct. One is a simple POJO, but the other is a complex POJO that has a Java Map-like structure:
Complex object:
{
"property1": {
"type": "boolean",
"value": true,
"valueInfo": {
"info": "Hello"
}
},
"property2": {
"type": "string",
"value": "string234",
"valueInfo": {
"info": "World"
}
}
}
Simple object:
{
"id" : 123,
"name" : "Jon Doe"
}
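Here is a rough sketch of how both targets could be produced with MapStruct, assuming MapStruct 1.5+ (which understands Java records). The UserSource, SimpleDto, and PropertyValue types below are made-up placeholders for illustration, not taken from the question:
import java.util.LinkedHashMap;
import java.util.Map;
import org.mapstruct.Mapper;
import org.mapstruct.Mapping;

// Hypothetical types, for illustration only.
record UserSource(long userId, String fullName, boolean active, String nickname) {}
record SimpleDto(long id, String name) {}
record PropertyValue(String type, Object value, Map<String, String> valueInfo) {}

@Mapper
public interface UserMapper {

    // Simple POJO: plain field-to-field mapping handled by MapStruct.
    @Mapping(target = "id", source = "userId")
    @Mapping(target = "name", source = "fullName")
    SimpleDto toSimpleDto(UserSource source);

    // Map-like structure: MapStruct cannot infer dynamic keys such as
    // "property1"/"property2", so build that part in a default method.
    default Map<String, PropertyValue> toComplexDto(UserSource source) {
        Map<String, PropertyValue> properties = new LinkedHashMap<>();
        properties.put("property1",
            new PropertyValue("boolean", source.active(), Map.of("info", "Hello")));
        properties.put("property2",
            new PropertyValue("string", source.nickname(), Map.of("info", "World")));
        return properties;
    }
}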

How to aggregate a 'non - keyword' field in elasticsearch?

I am trying to write an Elasticsearch query that should list all distinct values held by various fields in a document. When the fields are of type keyword, the terms aggregation query works fine and I can see the values with their counts listed in the buckets. But I don't get any result when I query for the distinct citrus fruit types; the mapping is shown below:
{
"vegetables":{
"type": "text",
"fields": {
"keyword" : {
"type" : "keyword",
"ignore_above": 256
}
}
},
"fruits": {
"properties": {
"citrus": {
"properties": {
"orange": {
"type": "long"
},
"lemon": {
"type": "long"
},
"kiwi": {
"type": "long"
}
}
}
}
}
}
and the result I am expecting is :
"aggregations": {
"distinct_citrusy_fruits"{
"buckets" : [
{
"key":"oranges",
"doc_count": 23
},
{
"key":"lemon",
"doc_count": 21
},
{
"key":"kiwi",
"doc_count": 23
}
]
}
}
When I make a terms aggregation on the "vegetables" field (which has a keyword sub-field), I am able to get the buckets as above.
How do I get the distinct counts in this case? Also, I don't have the option to change the document format.
EDIT: the only workaround I have found so far is to call the mappings API and then parse the nested JSON in my code to get the keys; if there is a better solution, please add an answer here.
I think you cannot query or run aggregations on the field names, only on values.
For the fruits, I would expect a mapping like the following:
{
"fruits": {
"properties": {
"citrus": {
"properties": {
"kind": {
"type": "keyword"
},
"count": {
"type": "long"
}
}
}
}
}
}
Maybe you can use the _field_names field, which contains every field name that has a value (https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-field-names-field.html).
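If the data can be reindexed with the kind/count mapping suggested above, the distinct values become a plain terms aggregation. A rough sketch with the Elasticsearch high-level REST client follows; the index name "fruits" and the field path fruits.citrus.kind are assumptions based on that mapping:
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public SearchRequest distinctCitrusFruitsRequest() {
    SearchSourceBuilder source = new SearchSourceBuilder()
        .size(0) // only the aggregation buckets are needed, not the hits
        .aggregation(AggregationBuilders.terms("distinct_citrusy_fruits")
            .field("fruits.citrus.kind"));
    return new SearchRequest("fruits").source(source);
}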

How to make a JSON Parser to parse elasticsearch mapping in java?

I wanted to parse this structure which is an elasticsearch filter:
{
"filter": {
"name_synonyms_filter": {
"synonym_path": "sample.txt",
"type": "abc_synonym_filter"
},
"name_formatter": {
"name": "name_formatter",
"type": "abc_token_filter"
}
}
}
My question is: how can I access the individual filters without using the key ("name_synonyms_filter", etc.) in Java?
Your JSON was improperly formatted.
Here it is fixed:
{
"abc": [{
"name": "somename"
},
{
"name": "somename"
}
]
}
How to parse it:
let x = JSON.parse(`{
"abc": [{
"name": "somename"
},
{
"name": "somename"
}
]
}`);
console.log(x);
Let me know if you have any questions.
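Since the question asks about Java, here is a rough sketch using Jackson (the ObjectMapper/JsonNode approach is an assumption; the question does not say which JSON library is in use). It iterates over the filter entries without knowing their names up front:
import java.util.Iterator;
import java.util.Map;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class FilterParser {
    public static void main(String[] args) throws Exception {
        String json = "{\"filter\":{"
            + "\"name_synonyms_filter\":{\"synonym_path\":\"sample.txt\",\"type\":\"abc_synonym_filter\"},"
            + "\"name_formatter\":{\"name\":\"name_formatter\",\"type\":\"abc_token_filter\"}}}";

        JsonNode filters = new ObjectMapper().readTree(json).path("filter");

        // Each entry is (filterName -> filter definition); no hard-coded keys needed.
        Iterator<Map.Entry<String, JsonNode>> it = filters.fields();
        while (it.hasNext()) {
            Map.Entry<String, JsonNode> entry = it.next();
            System.out.println(entry.getKey() + " -> type = " + entry.getValue().path("type").asText());
        }
    }
}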
