I have a simple Java Lambda function with the following code:
public String handleRequest(Map<String, Object> input, Context context) {
    Map<String, String> result = new HashMap<String, String>() {{
        put("status", "success");
    }};
    String resultStr = new GsonBuilder().create().toJson(result, HashMap.class);
    logger.info("ended function successfully " + resultStr);
    return resultStr;
}
I can see the following lines in CloudWatch:
2020-07-10T17:52:26.198-07:00
START RequestId: 1b0ff049-3a61-4874-9172-9bee142dc076 Version: $LATEST
2020-07-10T17:52:26.203-07:00
2020-07-11 00:52:26 INFO KVSTriggerLamda:53 - ended function successfully {"result":"Success"}
2020-07-10T17:52:26.204-07:00
END RequestId: 1b0ff049-3a61-4874-9172-9bee142dc076
My Amazon Connect call triggers this function and plays a simple prompt of "Success" or "Error" depending on the state. I always get "Error".
What should the correct return value be? I have followed the AWS documentation, which specifies that I need to provide a simple flat JSON return value.
I finally got it working by just returning the Map itself.
Map<String, String> result = new HashMap<String, String>() {{
    put("status", "success");
}};
return result;
Thanks to @tgdavies' comments -
The output type can be an object or void.
https://docs.aws.amazon.com/lambda/latest/dg/java-handler.html#java-handler-types
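For completeness, here is a minimal sketch of the corrected handler; the class name is hypothetical and the use of the RequestHandler interface is an assumption based on the linked docs, not part of the original post. Returning the Map lets the Lambda runtime serialize it to a flat JSON object that Amazon Connect can read.
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical class name; the important part is the Map return type,
// which the Lambda runtime serializes to flat JSON for Amazon Connect.
public class ConnectStatusHandler implements RequestHandler<Map<String, Object>, Map<String, String>> {

    @Override
    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Map<String, String> result = new HashMap<>();
        result.put("status", "success");
        return result;
    }
}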
I have a Kafka stream that receives records, and I want to concatenate messages based on a particular field.
A message in a stream looks like following:
Key: 2099
Payload{
email: tom@example.com
eventCode: 2099
}
Expected Output:
key: 2099
Payload{
emails: tom@example.com, bill@acme.com, jane@example.com
}
I can get the stream to run fine; I'm just not sure what the lambda should contain.
This is what I have done so far. I am not sure whether I should use map, aggregate, or reduce, or a combination of those operations.
final StreamsBuilder builder = new StreamsBuilder();
KStream<String, Payload> inputStream = builder.stream(INPUT_TOPIC);

inputStream
    .groupByKey()
    .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(300000)))
    // Not sure what to do here …..
    }).to(OUTPUT_TOPIC);
It could be something like this
inputStream.groupByKey().windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(300000)))
    .aggregate(PayloadAggr::new, new Aggregator<String, Payload, PayloadAggr>() {
        @Override
        public PayloadAggr apply(String key, Payload newValue, PayloadAggr result) {
            result.setKey(key);
            if (result.getEmails() == null) {
                result.setEmails(newValue.getEmail());
            } else {
                result.setEmails(result.getEmails() + "," + newValue.getEmail());
            }
            return result;
        }
    }, ... /* your serdes and store */).toStream().to(OUTPUT_TOPIC);
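For reference, a more complete sketch under some assumptions: a recent Kafka Streams version (2.1+), a PayloadAggr bean with key/emails fields, and JSON serdes named payloadSerde and payloadAggrSerde that you already have for those classes (these names are hypothetical); the 5-minute window is only an example.
// assumed imports: org.apache.kafka.streams.kstream.*, org.apache.kafka.streams.KeyValue,
// org.apache.kafka.common.serialization.Serdes, java.time.Duration
KStream<String, Payload> inputStream = builder.stream(
        INPUT_TOPIC, Consumed.with(Serdes.String(), payloadSerde));

inputStream
    .groupByKey(Grouped.with(Serdes.String(), payloadSerde))
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
    .aggregate(
        PayloadAggr::new,                                      // initializer: empty aggregate
        (key, newValue, aggr) -> {                             // aggregator: append each email
            aggr.setKey(key);
            if (aggr.getEmails() == null) {
                aggr.setEmails(newValue.getEmail());
            } else {
                aggr.setEmails(aggr.getEmails() + "," + newValue.getEmail());
            }
            return aggr;
        },
        Materialized.with(Serdes.String(), payloadAggrSerde))  // serdes for the window store
    .toStream()
    .map((windowedKey, aggr) -> KeyValue.pair(windowedKey.key(), aggr))  // drop the window from the key
    .to(OUTPUT_TOPIC, Produced.with(Serdes.String(), payloadAggrSerde));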
I want to pass an argument to a Karate GraphQL test from a Selenium Java test.
I tried to do it this way, but it didn't work:
HashMap<String, Object> args = new HashMap<String, Object>();
args.put("argument1", "value1");
Map<String, Object> result = CucumberRunner.runFeature(featureFile, args, true);
I tried to put that value in the Karate file in ways like
<argument1>
or
#(argument1)
but this text was passed literally to the query in the Karate test. Has anyone done this with Karate?
Here is a fragment of my feature file:
Given text query =
"""
{
element(name:"<argument1>") {
name
}
}
"""
And request {query: '#(query)'}
When method post
Then status 200
* print response
I think you missed a replace, try this:
Given text query =
"""
{
element(name:"<argument1>") {
name
}
}
"""
And replace query.argument1 = argument1
And request {query: '#(query)'}
When method post
Then status 200
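To tie it together, a minimal sketch of the Java side, using the same CucumberRunner API as in the question; the feature-file path is hypothetical. The value placed in the args map becomes the argument1 variable that the replace step substitutes into <argument1>.
// assumed imports: java.io.File, java.util.*, com.intuit.karate.cucumber.CucumberRunner
File featureFile = new File("src/test/java/graphql/element.feature");  // hypothetical path

HashMap<String, Object> args = new HashMap<String, Object>();
args.put("argument1", "value1");  // becomes the `argument1` variable used by the replace step

Map<String, Object> result = CucumberRunner.runFeature(featureFile, args, true);

// Karate's built-in `response` variable should be available in the returned map
System.out.println(result.get("response"));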
I am messing around with RapidAPI and I don't understand the code they give.
Could someone give me a teardown? It says that in order to access the API I have to write the following code:
Map<String, Argument> body = new HashMap<String, Argument>();
body.put("ParameterKey1", new Argument("data", "ParameterValue1"));
body.put("ParameterKey2", new Argument("data", "ParameterValue2"));

try {
    Map<String, Object> response = connect.call("APIName", "FunctionName", body);
    if (response.get("success") != null) { }
What are parameter keys 1 and 2, the data, and the parameter values?
Edit 1: this is the code snippet I want to use in Android Studio:
HttpResponse<JsonNode> response = Unirest.get("https://spoonacular-recipe-food-nutrition-v1.p.mashape.com/recipes/search?diet=vegetarian&excludeIngredients=coconut&instructionsRequired=false&intolerances=egg%2C+gluten&limitLicense=false&number=10&offset=0&query=burger&type=main+course")
    .header("X-Mashape-Key", "Xxxxxx")
    .header("X-Mashape-Host", "spoonacular-recipe-food-nutrition-v1.p.mashape.com")
    .asJson();
Your example code is using https://github.com/zeeshanejaz/unirest-android. If you want to use that snippet, you could start with that library.
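As a starting point, here is a minimal sketch using Unirest's asynchronous call, since Android does not allow network requests on the main thread. It assumes unirest-android exposes the same API as Unirest-Java; the key placeholder comes from the snippet above and the query string is trimmed for brevity.
// assumed imports: com.mashape.unirest.http.*, com.mashape.unirest.http.async.Callback,
// com.mashape.unirest.http.exceptions.UnirestException, org.json.JSONObject, android.util.Log
Unirest.get("https://spoonacular-recipe-food-nutrition-v1.p.mashape.com/recipes/search?query=burger&number=10")
    .header("X-Mashape-Key", "Xxxxxx")
    .header("X-Mashape-Host", "spoonacular-recipe-food-nutrition-v1.p.mashape.com")
    .asJsonAsync(new Callback<JsonNode>() {
        @Override
        public void completed(HttpResponse<JsonNode> response) {
            JSONObject json = response.getBody().getObject();  // parse the JSON body
            Log.d("Spoonacular", json.toString());
        }

        @Override
        public void failed(UnirestException e) {
            Log.e("Spoonacular", "request failed", e);
        }

        @Override
        public void cancelled() {
            Log.w("Spoonacular", "request cancelled");
        }
    });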
I want to implement an express checkout link with the 2CO payment gateway; I tested the following code:
public void testExpressCheckoutF()
{
    Twocheckout.apiusername = "sonoratestw";
    Twocheckout.apipassword = "sonorasonora";
    Twocheckout.privatekey = "81DBF3R3-04B3-47DB-8068-ED3DAB20BC5A";
    Twocheckout.mode = "sandbox";

    HashMap<String, String> params = new HashMap<>();
    params.put("sid", "901328163");
    params.put("mode", "sandbox");
    params.put("currency_code", "USD");
    params.put("x_receipt_link_url", "http://www.test.com/summary_twocheckoutl_payment.xhtml");
    params.put("comment", "some description");
    params.put("li_0_product_id", "assdsdcas");
    params.put("li_0_type", "product");
    params.put("li_0_name", "test name");
    params.put("li_0_quantity", String.valueOf(1));
    params.put("li_0_price", String.valueOf(33));
    params.put("li_0_description", "some description");

    String expressCheckout = TwocheckoutCharge.url(params);
    System.out.println("\n " + expressCheckout);
}
But when I run the code I always get ERROR CODE: PE104. I found this post as a possible solution: http://help.2checkout.com/articles/Knowledge_Article/Error-Code-PE104/?l=en_US&fs=RelatedArticle
But I am still getting the same error. Can you propose a solution?
Please try removing the li_0_product_id param and trying again, or change the value to an integer rather than a string. The article you linked to gives you general areas to start with (no line items, invalid product_id, etc.), so it's just a process of elimination at this point; a sketch of both options is below.
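A minimal sketch of both options, reusing the params map from the code above (the numeric product id is only an illustrative value):
// Option A: drop the product id entirely and let 2CO build the line item from the other li_0_* fields
params.remove("li_0_product_id");

// Option B (alternative): send a numeric product id instead of a random string
// params.put("li_0_product_id", "12345");  // hypothetical id of a product defined in your sandbox account

String expressCheckout = TwocheckoutCharge.url(params);
System.out.println(expressCheckout);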
I have a simple program that tries to receive data using Kafka. When I start a Kafka producer and send data, for example "Hello", I see (null, Hello) when I print the message, and I don't know why this null appears. Is there any way to avoid it? I think it comes from the first element of the Tuple2<String, String>, but I only want to print the second element. Another thing: when I print with System.out.println("inside map " + message); no message appears at all. Does anyone know why? Thanks.
public static void main(String[] args) {
    SparkConf sparkConf = new SparkConf().setAppName("org.kakfa.spark.ConsumerData").setMaster("local[4]");
    // Substitute 127.0.0.1 with the actual address of your Spark Master (or use "local" to run in local mode)
    sparkConf.set("spark.cassandra.connection.host", "127.0.0.1");

    // Create the context with 2 seconds batch size
    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));

    Map<String, Integer> topicMap = new HashMap<>();
    String[] topics = KafkaProperties.TOPIC.split(",");
    for (String topic : topics) {
        topicMap.put(topic, KafkaProperties.NUM_THREADS);
    }

    /* connection to cassandra */
    CassandraConnector connector = CassandraConnector.apply(sparkConf);
    System.out.println("+++++++++++ cassandra connector created ++++++++++++++++++++++++++++");

    /* Receive kafka inputs */
    JavaPairReceiverInputDStream<String, String> messages =
            KafkaUtils.createStream(jssc, KafkaProperties.ZOOKEEPER, KafkaProperties.GROUP_CONSUMER, topicMap);
    System.out.println("+++++++++++++ streaming-kafka connection done +++++++++++++++++++++++++++");

    JavaDStream<String> lines = messages.map(
            new Function<Tuple2<String, String>, String>() {
                public String call(Tuple2<String, String> message) {
                    System.out.println("inside map " + message);
                    return message._2();
                }
            }
    );

    messages.print();

    jssc.start();
    jssc.awaitTermination();
}
Q1) Null values:
Messages in Kafka are keyed, which means they all have a (key, value) structure.
You see (null, Hello) because the producer published a (null, "Hello") record to the topic.
If you want to omit the key in your processing, map the original DStream to remove it: kafkaDStream.map(new Function<Tuple2<String, String>, String>() {...})
Q2) System.out.println("inside map " + message); does not print. A couple of classic reasons:
Transformations are applied in the executors, so when running in a cluster, that output will appear in the executors and not on the master.
Operations are lazy and DStreams need to be materialized for operations to be applied.
In this specific case, the JavaDStream<String> lines is never materialized, i.e. it is not used in an output operation, so the map is never executed. A sketch of the fix is below.
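A minimal sketch of that fix, reusing the messages stream from the question: print lines (the mapped stream) instead of messages, so the map becomes part of an output operation and only the values are printed, without the null key.
JavaDStream<String> lines = messages.map(
        new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> message) {
                return message._2();  // keep only the value, dropping the (possibly null) key
            }
        }
);

lines.print();  // an output operation on `lines` forces the map to run each batch

jssc.start();
jssc.awaitTermination();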