There is a Java REST API (PUT request) that I want to hit with values from a CSV file.
The CSV file is:
The JMeter CSV Data Set configuration is:
This is how I have set up the JMeter configuration to hit the API:
The deserialisation on the Java side is not happening correctly. From Postman, the following works:
{
  "productId": "ABC",
  "score": 4.42489
}
Why is the JMeter PUT configuration not working correctly?
Error: Received Unknown exception com.fasterxml.jackson.databind.exc.InvalidFormatException: Can not construct instance of double from String value '$score': not a valid Double value
Update: On doing this in the JMeter configuration:
{
  "productId": "${product}",
  "score": "${score}"
}
I got the following error:
Received Unknown exception com.fasterxml.jackson.core.JsonParseException: Unexpected character ('T' (code 84)): was expecting comma to separate OBJECT entries at [Source: java.io.PushbackInputStream@4063ce7d; line: 2, column: 18]. Similarly for 'M', 'M', and 'R', so four errors in total.
Update 2: [Solved] The following CSV Data Set configuration worked without any change to the actual CSV file!
A big thank you to @user7294900 for the help!
I do wonder if you might also have a problem with product_id being the parameter name, versus the productId that works directly in Postman.
You forgot the curly braces (this is not Velocity/Postman).
You need to send HTTP Request Body Data:
{
  "productId": "${product}",
  "score": "${score}"
}
Check the manual for more details:
Variables are referenced as follows:
${VARIABLE}
Also, the quotes in your CSV file are redundant: either set Allow quoted data to true or, better yet, remove all quotes (") from the CSV file. Otherwise each variable expands with its own embedded quotes, which is most likely what produced the Unexpected character errors in your update.
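For illustration only (the actual file was posted as a screenshot, so these values are hypothetical), a quote-free CSV matching the variable names used above would look like:

product,score
ABC,4.42489
DEF,3.14159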
You don't need quotation marks around ${product} and ${score}, because:
Your CSV file already has quotation marks around the ProductId values, so you don't need duplicate ones.
Your application expects a Double, and you would otherwise be sending a String.
So change your payload to look like:
{
  "productId": ${product},
  "score": ${score}
}
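For context, here is a minimal sketch of what the Java side presumably looks like (the class and field names are assumptions, not the actual application code). With score declared as a double, a bare number deserializes cleanly, while the brace-less $score literal fails:

import com.fasterxml.jackson.databind.ObjectMapper;

public class ProductScore {
    public String productId;
    public double score;

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // A bare numeric JSON value deserializes cleanly into the double field:
        ProductScore ok = mapper.readValue(
                "{\"productId\": \"ABC\", \"score\": 4.42489}", ProductScore.class);
        System.out.println(ok.score); // 4.42489

        // The literal text $score (what JMeter sends while the braces are missing)
        // cannot be parsed as a number; that is the InvalidFormatException above:
        // mapper.readValue("{\"productId\": \"ABC\", \"score\": \"$score\"}", ProductScore.class);
    }
}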
More information:
JSON Data Types
REST API Testing - How to Do it Right
The return from an external application is an InputStream whose contents look like this:
JSONObj = {
"output":
[
{
"box":[0, 44, 43, 189],
"text":"~9 000 -"
}
]
}
I'm having trouble parsing the JSON in Java
The 'JSONObj' keeps coming back as an invalid token.
Is there a way to simply begin parsing at the '['?
I would advise against manipulating the JSON prior to parsing it. If you need to cut out parts, it's a sign that your target data structure (the Java class you intend to map to) does not match the data you're receiving. The two should be in sync.
Throwing this into any decent JSON parser will tell you that this is invalid JSON.
Particularly, you should remove the manual line breaks inside the string value on line 6, since JSON strings only permit line breaks as the escape sequence \n:
{
  "output": [
    {
      "box": [
        0,
        44,
        43,
        189
      ],
      "text": "~9 000 -"
    }
  ]
}
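In that spirit, here is a sketch of a Java target structure that matches the data, for example for Gson (the class and field names are my own illustration):

import com.google.gson.Gson;
import java.util.List;

class Entry {
    int[] box;   // e.g. [0, 44, 43, 189]
    String text; // e.g. "~9 000 -"
}

class Response {
    List<Entry> output;
}

// Usage: Response r = new Gson().fromJson(json, Response.class);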
The easiest way might be:
json = json.substring(json.indexOf('{'));
And I actually disagree with Nicktar's statement, since the external program isn't even returning valid JSON (and, to be frank, I'd consider that a very poor implementation). There is no need to say "hey, that's an object", because that is already implicit: if a JSON document starts with {, it's an object; if it starts with [, it's an array.
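For completeness, a minimal runnable sketch of that approach (assuming org.json on the classpath; the sample string stands in for the real stream contents):

import org.json.JSONArray;
import org.json.JSONObject;

public class StripPrefix {
    public static void main(String[] args) {
        String raw = "JSONObj = {\"output\":[{\"box\":[0,44,43,189],\"text\":\"~9 000 -\"}]}";
        // Drop everything before the first '{' so the remainder is valid JSON.
        String json = raw.substring(raw.indexOf('{'));
        JSONObject obj = new JSONObject(json);
        JSONArray output = obj.getJSONArray("output");
        System.out.println(output.getJSONObject(0).getString("text")); // ~9 000 -
    }
}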
I am trying to use Rest Assured in the Serenity framework to validate an endpoint response. I send an xml body to the endpoint and expect a JSON response back like so:
{"Entry ID" : "654123"}
I want to send the XML and verify in the JSON response that the value of the key "Entry ID" is not empty or null. The problem is, the key has a space in it, and I believe it is causing an error. Here is what I have so far:
SerenityRest.given().contentType(ContentType.XML)
    .body(xmlBody)
    .when().accept(ContentType.JSON).post(endpoint)
    .then().body("Entry ID", not(isEmptyOrNullString()))
    .and().statusCode(200);
This produces the error:
java.lang.IllegalArgumentException: Invalid JSON expression:
Script1.groovy: 1: unable to resolve class Entry
@ line 1, column 33.
Entry ID
^
1 error
I have tried wrapping the "Entry ID" term in different ways to no avail:
.body("'Entry ID'", not(isEmptyOrNullString()))
.body("''Entry ID''", not(isEmptyOrNullString()))
.body("\"Entry ID\"", not(isEmptyOrNullString()))
.body("['Entry ID']", not(isEmptyOrNullString()))
.body("$.['Entry ID']", not(isEmptyOrNullString()))
Is it possible to get the value of a key that contains a space in Rest Assured?
You just need to escape the key with single quotes:
then().body("'Entry ID'", not(isEmptyOrNullString()))
Here's an example (tested in version 3.0.6):
import io.restassured.path.json.JsonPath;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.isEmptyOrNullString;
import static org.hamcrest.Matchers.not;

// Given
String json = "{\"Entry ID\" : \"654123\"}";
// When
JsonPath jsonPath = JsonPath.from(json);
// Then
assertThat(jsonPath.getString("'Entry ID'"), not(isEmptyOrNullString()));
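Applied to the Serenity call from the question, the only change is the pair of single quotes around the key (untested sketch):

SerenityRest.given().contentType(ContentType.XML)
    .body(xmlBody)
    .when().accept(ContentType.JSON).post(endpoint)
    .then().body("'Entry ID'", not(isEmptyOrNullString())) // key escaped with single quotes
    .and().statusCode(200);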
I have a Java application that writes to a log file in JSON format.
The fields that appear in the logs are variable.
Logstash reads this log file and sends it to Kibana.
I've configured Logstash with the following file:
input {
  file {
    path => ["[log_path]"]
    codec => "json"
  }
}
filter {
  json {
    source => "message"
  }
  date {
    match => [ "data", "dd-MM-yyyy HH:mm:ss.SSS" ]
    timezone => "America/Sao_Paulo"
  }
}
output {
  elasticsearch_http {
    flush_size => 1
    host => "[host]"
    index => "application-%{+YYYY.MM.dd}"
  }
}
I've managed to show everything correctly in Kibana without any mapping.
But when I try to create a terms panel to show a count of the servers that sent those messages, I have a problem.
I have a field called server in my JSON that holds the server name (like a1-name-server1), but the terms panel splits the server name on the "-".
I would also like to count the number of times an error message appears, but the same problem occurs: the terms panel splits the error message on the spaces.
I'm using Kibana 3 and Logstash 1.4.
I've searched a lot on the web and couldn't find any solution.
I also tried using the .raw from logstash, but it didn't work.
How can I manage this?
Thanks for the help.
Your problem here is that your data is being tokenized, which is what makes it searchable. By default, ES splits the message field into different parts (tokens) so that they can be searched individually. For example, you may want to search for the word ERROR in your logs, so you probably want messages like "There was an error in your cluster" or "Error processing whatever" to match. If the data in that field were not analyzed with tokenizers, you wouldn't be able to search like this.
This analyzed behaviour is helpful when you want to search, but it doesn't let you group different messages that have the same content, which is your use case. The solution is to update your mapping, setting not_analyzed for the specific field you don't want split into tokens. That will work for your server field, but it would break full-text search if applied to the message field.
What I usually do in these kinds of situations is use index templates and multi-fields. An index template lets me set a mapping for every index whose name matches a pattern, and multi-fields give me both the analyzed and not_analyzed behaviour on the same field.
Using the following query would do the job for your problem:
curl -XPUT https://example.org/_template/name_of_index_template -d '
{
  "template": "indexname*",
  "mappings": {
    "type": {
      "properties": {
        "field_name": {
          "type": "multi_field",
          "fields": {
            "field_name": {
              "type": "string",
              "index": "analyzed"
            },
            "untouched": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      }
    }
  }
}'
Then, in your terms panel, you can use field_name.untouched to consider the entire content of the field when counting the distinct elements.
If you don't want to use index templates (maybe your data is in a single index), setting the mapping with the Put Mapping API would do the job too. If you use multi-fields, there is no need to reindex the data: from the moment you set the new mapping for the index, new data will be duplicated into the two subfields (field_name and field_name.untouched). If you just change the mapping from analyzed to not_analyzed, however, you won't see any change until you reindex all your data.
Since you didn't define a mapping in Elasticsearch, the default settings take effect for every field in your type in your index. The default for string fields (like your server field) is to analyze the field, meaning that Elasticsearch will tokenize its contents. That is why it's splitting your server names into parts.
You can overcome this issue by defining a mapping. You don't have to define all your fields, only the ones you don't want Elasticsearch to analyze. In your particular case, sending the following PUT request will do the trick:
curl -XPUT 'http://[host]:9200/[index_name]/_mapping/[type]' -d '
{
  "type" : {
    "properties" : {
      "server" : {"type" : "string", "index" : "not_analyzed"}
    }
  }
}'
You can't do this on an already existing index, because switching from analyzed to not_analyzed is a breaking change to the mapping.
I'm calling a web service that is supposed to return JSON. Within that JSON there is a property that holds a URL, and the colon (:) within that URL makes Gson throw a com.google.gson.stream.MalformedJsonException.
JSON returned by web service:
{
  ID=15;
  Code=ZPFgNr;
  UserName=https://www.google.com/accounts/o8/id?id=xxxxxx; //<--problem
  FirstName=Joe
}
My Java:
resultData=((SoapObject) result).getProperty(0).toString();
User response = gson.fromJson(resultData, User.class);
I know these keys and values should be wrapped in double quotes. But they are not, and that seems to be the problem.
So my question is:
Should I be encoding this JSON before deserializing it somehow? If so, how?
or
Should I do a find-and-replace on https: and escape the colon? If so, how would I escape the colon?
JSON uses commas to separate attributes, colons to separate attribute names from attribute values, and double quotes around both names and values. What you are receiving is not valid JSON.
Here's valid JSON:
{
  "ID" : "15",
  "Code" : "ZPFgNr",
  "UserName" : "https://www.google.com/accounts/o8/id?id=xxxxxx",
  "FirstName" : "Joe"
}
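Once the response is in that shape, the original gson.fromJson call works as-is; a minimal sketch (assuming a User POJO whose field names match the JSON keys):

import com.google.gson.Gson;

public class User {
    String ID;
    String Code;
    String UserName;
    String FirstName;

    public static void main(String[] args) {
        String validJson = "{\"ID\":\"15\",\"Code\":\"ZPFgNr\","
                + "\"UserName\":\"https://www.google.com/accounts/o8/id?id=xxxxxx\","
                + "\"FirstName\":\"Joe\"}";
        User user = new Gson().fromJson(validJson, User.class);
        // The colon inside the URL is harmless once the value is quoted:
        System.out.println(user.UserName); // https://www.google.com/accounts/o8/id?id=xxxxxx
    }
}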
I've worked with several different APIs where I needed to parse JSON. And in all cases the Response is constructed a bit differently.
I now need to expose some data via a JSON API and want to know the proper way to deliver that Response.
Here is an example of what I have now; however, some users (one using Java) are having difficulty parsing it.
{"status": "200 OK",
"code": "\/api\/status\/ok",
"result": {
"publishers": ["Asmodee", "HOBBITY.eu", "Kaissa Chess & Games"],
"playing_time": 30, "description": "2010 Spiel des Jahres WinnerOne player is the storyteller for the turn. He looks at the 6 images in his hand. From one of these, he makes up a sentence and says it out loud (without showing the card to the other players).The other players select amongst their 6 images the one that best matches the sentence made up by the storyteller.Then, each of them gives their selected card to the storyteller, without showing it to the others. The storyteller shuffles his card with all the received cards. ",
"expansions": ["Dixit 2", "Dixit 2: \"Gift\" Promo Card", "Dixit 2: The American Promo Card", "Dixit Odyssey"],
"age": 8,
"min_players": 3,
"mid": "\/m\/0cn_gq3",
"max_players": null,
"designers": ["Jean-Louis Roubira"],
"year_published": 2008,
"name": "Dixit"
}
}
The Java user in particular is complaining that they get the error:
org.json.JSONException:[json string] of type org.json.JSONObject cannot be converted to JSONArray
But in Python I am able to take in this Response, fetch "result" and then parse as I would any other JSON data.
* UPDATE *
I passed both my JSON and Twitter's timeline JSON to JSONLint. Both are valid. The Java user can parse Twitter's JSON but not mine. What I noticed about Twitter's JSON is that it's wrapped in brackets [], signifying an array, and the error this user is getting with my JSON is that it cannot be converted to a JSON array. I didn't think I needed to wrap mine in brackets.
It looks valid according to http://json.parser.online.fr/ (a random JSON parser). It's in the other code, I'd say ;)
How exactly are you generating this response? Are you doing it yourself?
I see you have a dangling comma at the end of the last element in publishers (i.e. after the value Kaissa Chess & Games).
I'd recommend using JSONLint to ensure your JSON is valid.
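As for the exception in the update: org.json chooses the container type from the first character of the document, so an object response (starting with {) must be read with JSONObject, not JSONArray. A minimal sketch of the consumer side (the sample string is a trimmed-down stand-in for the response above):

import org.json.JSONObject;

public class ParseResponse {
    public static void main(String[] args) {
        String response = "{\"status\": \"200 OK\", \"result\": {\"name\": \"Dixit\"}}";
        // The document starts with '{', so parse it as an object:
        JSONObject obj = new JSONObject(response);
        String name = obj.getJSONObject("result").getString("name");
        System.out.println(name); // Dixit
        // new JSONArray(response) would throw the reported JSONException,
        // since the top-level value is an object, not an array.
    }
}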