I have three components: REST, Cassandra, and Kafka, and I am using Apache Camel. When a request arrives, I want to add a record to Cassandra, then add that record to Kafka, and finally generate the REST response. Maybe pipeline is not a good solution for me, because the Cassandra part is InOnly and has no out exchange!
I wrote this route:
rest().path("/addData")
.consumes("text/plain")
.produces("application/json")
.post()
.to("direct:requestData");
from("direct:requestData")
.pipeline("direct:init",
"cql://localhost/myTable?cql=" + CQL,
"direct:addToKafka"
)
.process(exchange -> {
var currentBody = (List<?>) exchange.getIn().getBody();
var body = new Data((String) currentBody.get(0), (Long) currentBody.get(1), (String) currentBody.get(2));
exchange.getIn().setBody(body.toJsonString());
});
from("direct:init")
.process(exchange -> {
var currentBody = exchange.getIn().getBody();
var body = Arrays.asList(generateId(), generateTimeStamp(), currentBody);
exchange.getIn().setBody(body);
});
from("direct:addToKafka")
.process(exchange -> {
    // do something here to add the record to Kafka
});
I tried things such as setting the exchange pattern to InOut for Cassandra, but I finally understood that this is impossible, because that option is used for consumers, and I use Cassandra in this route as a producer.
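For what it's worth, since pipeline() is Camel's default behavior anyway, here is a hedged workaround sketch (not a confirmed solution; the property name "savedBody" is an assumption): stash the body in an exchange property before the Cassandra step and restore it afterwards, so the cql producer's result does not overwrite it:

from("direct:requestData")
    .to("direct:init")
    // keep the original List body; the cql producer may replace it with its own result
    .setProperty("savedBody", body())
    .to("cql://localhost/myTable?cql=" + CQL)
    // restore the saved body before the Kafka step and the response processor
    .setBody(exchangeProperty("savedBody"))
    .to("direct:addToKafka")
    .process(exchange -> { /* build the JSON response as in the original route */ });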
I am trying to create a Pub/Sub topic with customer-managed encryption keys (CMEK) in Java.
In Python we can create a topic using the CMEK location as a parameter, as below:
topic = client.create_topic(
topic_path,
kms_key_name=cmek_location,
message_storage_policy=get_allowed_region()
)
In Java I am using the following:
TopicAdminClient topicAdminClient = TopicAdminClient.create(topicAdminSettings);
topicAdminClient.createTopic(topic);
How can we use the CMEK location in java code?
For that purpose you can use the following code, extracted from the createTopic method documentation:
try (TopicAdminClient topicAdminClient = TopicAdminClient.create()) {
Topic request =
Topic.newBuilder()
.setName(TopicName.ofProjectTopicName("[PROJECT]", "[TOPIC]").toString())
.putAllLabels(new HashMap<String, String>())
.setMessageStoragePolicy(MessageStoragePolicy.newBuilder().build())
.setKmsKeyName("kmsKeyName412586233")
.setSchemaSettings(SchemaSettings.newBuilder().build())
.setSatisfiesPzs(true)
.setMessageRetentionDuration(Duration.newBuilder().build())
.build();
Topic response = topicAdminClient.createTopic(request);
}
Basically you provide a template of the Topic you want to create.
In your use case I suppose it will look similar to this:
try (TopicAdminClient topicAdminClient = TopicAdminClient.create()) {
Topic request =
Topic.newBuilder()
.setName(TopicName.ofProjectTopicName("[PROJECT]", "[TOPIC]").toString())
.setKmsKeyName("kmsKeyName412586233") //cmek location
.setMessageStoragePolicy(
MessageStoragePolicy.newBuilder()
.addAllowedPersistenceRegions("us-central1") // get_allowed_region
.build()
)
.build();
Topic response = topicAdminClient.createTopic(request);
}
Please pay attention to the setKmsKeyName method.
The API is described in this GCP documentation.
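To double-check that the key was applied, one option is to fetch the topic back and inspect its KMS key name. A hedged sketch, reusing the placeholders from the snippets above:

try (TopicAdminClient topicAdminClient = TopicAdminClient.create()) {
    // fetch the topic and print its configured CMEK key
    Topic topic = topicAdminClient.getTopic(TopicName.of("[PROJECT]", "[TOPIC]"));
    System.out.println("KMS key: " + topic.getKmsKeyName());
}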
I have a problem trying to run a DRPC topology containing one single bolt and query it through a local cluster. After debugging with IntelliJ, I can see that the bolt is indeed executed, but the JCQueue gets stuck in an infinite loop after the bolt has executed, until a timeout is sent to the server.
Here is the code used to build the topology builder:
public static LinearDRPCTopologyBuilder createBuilder()
{
var bolt = new MRedisLookupBolt(createRedisConfiguration(), new RedisTurnoverMapper());
var builder = new LinearDRPCTopologyBuilder("sales");
builder.addBolt(bolt, 1).localOrShuffleGrouping();
return builder;
}
The MRedisLookupBolt is just a very simple implementation of IBasicBolt executing a hget command against Jedis. The execute method of the MRedisLookupBolt is just emitting an instance of Values containing the value for two fields that are declared like this:
declarer.declare(new Fields("id", "Value"));
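For context, here is a minimal sketch of what such a bolt could look like; the class body below is an assumption based on the description above (plain Jedis, placeholder connection settings and key layout), not the actual implementation:

import java.util.Map;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class MRedisLookupBolt extends BaseBasicBolt {
    private transient JedisPool pool;

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context) {
        pool = new JedisPool("localhost", 6379); // placeholder connection settings
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // with LinearDRPCTopologyBuilder, field 0 is the request id and field 1 the arguments
        Object requestId = input.getValue(0);
        String key = input.getString(1);
        try (Jedis jedis = pool.getResource()) {
            collector.emit(new Values(requestId, jedis.hget("sales", key)));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("id", "Value"));
    }
}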
The topology is built and queried in a unit test like this:
Config conf = new Config();
conf.setDebug(true);
conf.setNumWorkers(1);
try(LocalDRPC drpc = new LocalDRPC())
{
LocalCluster cluster = new LocalCluster();
var builder = BasicRedisRPCTopology.createBuilder();
LocalCluster.LocalTopology topo = cluster.submitTopology(
"Sales-fetch", conf, builder.createLocalTopology(drpc));
var result = drpc.execute("sales", "XXXXX");
System.out.println("################ Result: " + result);
}
catch (Exception e)
{
e.printStackTrace();
}
When reading the logs, I am sure that the data is correctly read by the bolt and that everything is emitted.
But at the end, I have this stack trace gently printed out by my test method. Of course, no value is assigned to the result variable and the process never reaches the last print instruction:
There is something that I am missing here. What I understand: the JCQueue used by BoltExecutor to retrieve the id of the bolt to execute never ends, although there is only one parameter sent to the local DRPC and only one bolt declared in the topology. I have already tried adding more bolts to the topology or changing the builder implementation used to create it, but with no success.
I found a solution suitable for my use case using Apache Storm 2.1.0.
It seems that invoking the submitTopology method of the local cluster as proposed by the documentation does not end the executor correctly with version 2.1.0 when using the LinearDRPCTopologyBuilder to build the topology.
By looking closer to the source code, it was possible to understand how to apply the LinearDRPCTopologyBuilder logic to the TopologyBuilder directly.
Here is the change applied to the createBuilder method:
public static TopologyBuilder createBuilder(ILocalDRPC localDRPC)
{
var spout = Optional.ofNullable(localDRPC)
.map(drpc -> new DRPCSpout("sales", drpc))
.orElse(new DRPCSpout("sales"));
var bolt = new MRedisLookupBolt(createRedisConfiguration(), new RedisTurnoverMapper());
var builder = new TopologyBuilder();
builder.setSpout("drpc", spout);
builder.setBolt("redisLookup", bolt, 1)
.shuffleGrouping("drpc");
builder.setBolt("return", new ReturnResults())
.shuffleGrouping("redisLookup");
return builder;
}
And here is an example of execution:
Config conf = new Config();
conf.setDebug(true);
conf.setNumWorkers(1);
try(LocalDRPC drpc = new LocalDRPC())
{
LocalCluster cluster = new LocalCluster();
var builder = BasicRedisRPCTopology.createBuilder(drpc);
cluster.submitTopology("Sales-fetch", conf, builder.createTopology());
var result = drpc.execute("sales", "XXXXX");
System.out.println("################ Result: " + result);
}
catch (Exception e)
{
e.printStackTrace();
}
Unfortunately, this solution does not allow using all the embedded tools of the LinearDRPCTopologyBuilder and implies building the whole topology flow 'by hand'. It is also necessary to change the mapper behavior, as the fields are not exposed in the same order as before.
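To illustrate that field-order change: DRPCSpout emits tuples as ("args", "return-info") and ReturnResults expects ("result", "return-info"), whereas LinearDRPCTopologyBuilder delivered (request id, args) to the bolt. A hedged sketch of the adjusted emit logic, where lookup() is a hypothetical helper wrapping the Redis read:

@Override
public void execute(Tuple input, BasicOutputCollector collector) {
    String args = input.getString(0);      // was at index 1 with LinearDRPCTopologyBuilder
    Object returnInfo = input.getValue(1); // routing info consumed by ReturnResults
    collector.emit(new Values(lookup(args), returnInfo));
}

@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("result", "return-info"));
}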
I'm using Amazon SQS. My goal is to read the ApproximateReceiveCount attribute from the ReceiveMessage API action using the Java SDK (v2.10.4, Java 11).
I tried the following code, but message.attributes() doesn't contain the required key:
String getApproximateReceiveCount() {
var receiveMessageRequest = ReceiveMessageRequest.builder()
.queueUrl("https://sqs.eu-west-1.amazonaws.com/012345678910/my-example-queue")
.build();
var sqsClient = SqsClient.builder().endpointOverride(URI.create("http://localhost:4576")).build();
var response = sqsClient.receiveMessage(receiveMessageRequest);
var message = response.messages().get(0);
return message.attributes().get(MessageSystemAttributeName.APPROXIMATE_RECEIVE_COUNT);
}
How do I go about receiving an entry for the MessageSystemAttributeName.APPROXIMATE_RECEIVE_COUNT key in this map?
As per the ReceiveMessage documentation page you linked, there is a parameter called AttributeName.N, described as:
A list of attributes that need to be returned along with each message. These attributes include:
[...]
ApproximateReceiveCount – Returns the number of times a message has been received from the queue but not deleted.
Therefore you need to ask for the attribute in the request for it to be available in the response. To do that, use the ReceiveMessageRequest.Builder.attributeNamesWithStrings() method like so:
String getApproximateReceiveCount() {
var receiveMessageRequest = ReceiveMessageRequest.builder()
.queueUrl("https://sqs.eu-west-1.amazonaws.com/012345678910/my-example-queue")
.attributeNamesWithStrings(MessageSystemAttributeName.APPROXIMATE_RECEIVE_COUNT.toString())
.build();
var sqsClient = SqsClient.builder().endpointOverride(URI.create("http://localhost:4576")).build();
var response = sqsClient.receiveMessage(receiveMessageRequest);
var message = response.messages().get(0);
return message.attributes().get(MessageSystemAttributeName.APPROXIMATE_RECEIVE_COUNT);
}
Note that there are two similarly named methods, which you can't use:
.attributeNames() - the parameter enum doesn't list the required key,
.messageAttributeNames() - corresponds to attributes sent along with the message body.
I want to return a Flux to the browser, but when I hit the endpoint it gives me a "406 Not Acceptable" error.
This is for an Apache Tomcat server running Spring Boot (Spring 5) and Java 8, in the STS (Spring Tool Suite) IDE.
@RestController
public class CloudFoundry {
    @GetMapping(value = "/LogApplication", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> logApplication() throws Throwable {
return Flux.just("a", "b", "c", "d");
}
}
When I hit the endpoint on localhost, it should give me a stream of strings, but instead it's giving me a "406 Not Acceptable" error.
MediaType.TEXT_EVENT_STREAM_VALUE is used for Server-Sent Events, which need to be consumed appropriately.
This is what you need to have on front-end side:
// Declare an EventSource
const eventSource = new EventSource('http://server.url/LogApplication');
// Handler for events without an event type specified
eventSource.onmessage = (e) => {
// Do something - event data etc will be in e.data
};
// Handler for events of type 'eventType' only
eventSource.addEventListener('eventType', (e) => {
// Do something - event data will be in e.data,
// message will be of type 'eventType'
});
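If the consumer is another Java service rather than a browser, here is a hedged sketch using Spring WebFlux's WebClient (assuming spring-webflux is on the classpath; the base URL and port are placeholders):

// subscribe to the SSE endpoint from Java and print each event
WebClient client = WebClient.create("http://localhost:8080");
Flux<String> events = client.get()
        .uri("/LogApplication")
        .accept(MediaType.TEXT_EVENT_STREAM)
        .retrieve()
        .bodyToFlux(String.class);
events.subscribe(System.out::println);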
You can find a good explanation of Server-Sent-Events in the following blog post:
A Look at Server-Sent Events
I have a Java application running on Spring and defining multiple gRPC endpoints. These endpoints are meant to be queried from multiple clients, one of which is in PHP, so I used the PHP lib for gRPC. Now I wonder how to properly get the metadata from the server in case of an invalid request; this metadata contains mostly constraint violations built by the Java validator and transformed into a collection of gRPC FieldViolation objects. In this example, the server is supposed to return one single field violation as metadata, with the key "violationKey" and the description "violationDescription":
try {
// doStuff
} catch (ConstraintViolationException e) {
Metadata trailers = new Metadata();
trailers.put(ProtoUtils.keyForProto(BadRequest.getDefaultInstance()), BadRequest
.newBuilder()
.addFieldViolations(FieldViolation
.newBuilder()
.setField("violationKey")
.setDescription("violationDescription")
.build()
)
.build()
);
responseObserver.onError(Status.INVALID_ARGUMENT.asRuntimeException(trailers));
}
On the PHP side, this is the implementation to retrieve the metadata:
class Client extends \Grpc\BaseStub
{
public function callService()
{
$call = $this->_simpleRequest(
'MyService/MyAction',
$argument,
['MyActionResponse', 'decode'],
$metadata, $options
);
list($response, $status) = $call->wait();
var_dump($status->metadata); // A
var_dump($call->getMetadata()); // B
}
}
Result: "A" outputs an empty array, "B" outputs the proper metadata, formatted as follows:
array(1) {
["google.rpc.badrequest-bin"]=>
array(1) {
[0]=>
string(75) "
I
testALicense plate number is not in a valid format for country code FR"
}
}
Why is the metadata in the status empty, and why is the metadata retrieved by $call->getMetadata() formatted that way ("I" followed by the violation key, then "A" and finally the violation description)? How can I avoid making potentially tedious transformations of this metadata client-side?
Can you please log an issue on our grpc/grpc GitHub repo so that we can better follow up there? Thanks.