How to create and send a PacketOut? - java

I'm trying to build a latency-monitoring system based on the OpenNetMon idea. What I want to do is inject a packet into a switch so that this node forwards the packet to another switch, which then sends it back to the controller. Finally, the controller measures the latency.
To distinguish these probe packets for the measurements, I will modify the DSCP field of the IPv4 packet.
What I had in mind is that when the OpenDaylight controller receives a PacketIn, the packet is copied with the DSCP field modified, and the copied packet (the probe packet) is then sent to the data plane.
I can extract the RawPacket, EthernetPacket and Ipv4Packet from a PacketChain:
RawPacket rawPacket = null;
EthernetPacket ethernetPacket = null;
Ipv4Packet ipv4Packet = null;
for (PacketChain packetChain : packetReceived.getPacketChain()) {
    if (packetChain.getPacket() instanceof RawPacket) {
        rawPacket = (RawPacket) packetChain.getPacket();
    } else if (packetChain.getPacket() instanceof EthernetPacket) {
        ethernetPacket = (EthernetPacket) packetChain.getPacket();
    } else if (packetChain.getPacket() instanceof Ipv4Packet) {
        ipv4Packet = (Ipv4Packet) packetChain.getPacket();
    }
}
How can I send these packets?

You want something along the lines of the following, using PacketProcessingService:
NodeConnectorId egress = TABLE_PORT;
TransmitPacketInput input = new TransmitPacketInputBuilder()
.setNode(nodeRef(linkDef.srcNodeId))
.setEgress(nodeConnectorRef(linkDef.srcNodeId, egress))
.setPayload(STUFF)
.build();
packetProcessingService.transmitPacket(input);
With the following utilities:
// reserved ports
public final static NodeConnectorId INGRESS_PORT = new NodeConnectorId("0xfffffff8");
public final static NodeConnectorId TABLE_PORT = new NodeConnectorId("0xfffffff9");
public final static NodeConnectorId NORMAL_PORT = new NodeConnectorId("0xfffffffa"); // optional functionality
public final static NodeConnectorId FLOOD_PORT = new NodeConnectorId("0xfffffffb"); // optional functionality
public final static NodeConnectorId ALL_PORT = new NodeConnectorId("0xfffffffc");
public final static NodeConnectorId CONTROLLER_PORT = new NodeConnectorId("0xfffffffd");
public final static NodeConnectorId LOCAL_PORT = new NodeConnectorId("0xfffffffe");
public final static NodeConnectorId ANY_PORT = new NodeConnectorId("0xffffffff");
public static final InstanceIdentifier<Nodes> NODES_IID = InstanceIdentifier.builder(Nodes.class).build();
public static InstanceIdentifier<Node> nodeIId(NodeId nodeId) {
return NODES_IID.child(Node.class, new NodeKey(nodeId));
}
public static NodeRef nodeRef(NodeId nodeId) {
return new NodeRef(nodeIId(nodeId));
}
public static InstanceIdentifier<NodeConnector> nodeConnectorIId(NodeId nodeId, NodeConnectorId ncId) {
return NODES_IID.child(Node.class, new NodeKey(nodeId)).child(NodeConnector.class, new NodeConnectorKey(ncId));
}
public static NodeConnectorRef nodeConnectorRef(NodeId nodeId, NodeConnectorId ncId) {
return new NodeConnectorRef(nodeConnectorIId(nodeId, ncId));
}
Additionally, you can also set the ingress, which is the port the packet 'is coming from'; it is used by in-port matchers.
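To tie this back to the probe idea in the question, here is a rough sketch (under a few assumptions, not a drop-in implementation) of copying the received payload, rewriting its DSCP field and transmitting it. It reuses the nodeRef/nodeConnectorRef/TABLE_PORT helpers above and assumes you still have the original byte[] payload of the PacketIn (called payload here), that the frame is untagged Ethernet carrying IPv4, and that srcNodeId identifies the switch to inject the probe into; the DSCP value 46 is just an example marker.
// Sketch only: payload, srcNodeId and the DSCP value are assumptions, and the IPv4
// header checksum must also be recomputed after the ToS byte is changed (omitted here).
byte[] probe = java.util.Arrays.copyOf(payload, payload.length);

int ipHeaderStart = 14;            // Ethernet header length, assuming no VLAN tag
int tosOffset = ipHeaderStart + 1; // DSCP/ECN live in the second byte of the IPv4 header
byte dscp = 46;                    // example marker value for probe packets
probe[tosOffset] = (byte) ((dscp << 2) | (probe[tosOffset] & 0x03)); // keep the ECN bits

TransmitPacketInput probeOut = new TransmitPacketInputBuilder()
        .setNode(nodeRef(srcNodeId))
        .setEgress(nodeConnectorRef(srcNodeId, TABLE_PORT)) // let the switch's flow tables forward it
        .setPayload(probe)
        .build();
packetProcessingService.transmitPacket(probeOut);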

Related

Java - Flink sending empty object on kafka sink

In my Flink script I have a stream that I read from one Kafka topic, manipulate, and send back to Kafka using a sink.
public static void main(String[] args) throws Exception {
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(1);
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    Properties p = new Properties();
    p.setProperty("bootstrap.servers", servers_ip_list);
    p.setProperty("group.id", "Flink");

    FlinkKafkaConsumer<Event_N> kafkaData_N =
            new FlinkKafkaConsumer<>("CorID_0", new Ev_Des_Sch_N(), p);

    WatermarkStrategy<Event_N> wmStrategy =
            WatermarkStrategy
                    .<Event_N>forMonotonousTimestamps()
                    .withIdleness(Duration.ofMinutes(1))
                    .withTimestampAssigner((Event, timestamp) -> {
                        return Event.get_Time();
                    });

    DataStream<Event_N> stream_N = env.addSource(
            kafkaData_N.assignTimestampsAndWatermarks(wmStrategy));
The part above works fine, no problems at all; the part below is where I'm getting the issue.
String ProducerTopic = "CorID_0_f1";
DataStream<Stream_Blocker_Pojo.block> box_stream_p= stream_N
.keyBy((Event_N CorrID) -> CorrID.get_CorrID())
.map(new Stream_Blocker_Pojo());
FlinkKafkaProducer<Stream_Blocker_Pojo.block> myProducer = new FlinkKafkaProducer<>(
ProducerTopic,
new ObjSerializationSchema(ProducerTopic),
p,
FlinkKafkaProducer.Semantic.EXACTLY_ONCE); // fault-tolerance
box_stream_p.addSink(myProducer);
There are no errors and everything runs; this is the Stream_Blocker_Pojo where I map the stream, manipulate it and send out a new one. (I have simplified my code, keeping just 4 variables and removing all the math and data processing.)
public class Stream_Blocker_Pojo extends RichMapFunction<Event_N, Stream_Blocker_Pojo.block> {

    public class block {
        public Double block_id;
        public Double block_var2;
        public Double block_var3;
        public Double block_var4;
    }

    private transient ValueState<block> state_a;

    @Override
    public void open(Configuration parameters) throws Exception {
        state_a = getRuntimeContext().getState(new ValueStateDescriptor<>("BoxState_a", block.class));
    }

    public block map(Event_N input) throws Exception {
        p1.Stream_Blocker_Pojo.block current_a = state_a.value();
        if (current_a == null) {
            current_a = new p1.Stream_Blocker_Pojo.block();
            current_a.block_id = 0.0;
            current_a.block_var2 = 0.0;
            current_a.block_var3 = 0.0;
            current_a.block_var4 = 0.0;
        }
        current_a.block_id = input.f_num_id;
        current_a.block_var2 = input.f_num_2;
        current_a.block_var3 = input.f_num_3;
        current_a.block_var4 = input.f_num_4;
        state_a.update(current_a);
        return new block();
    }
}
This is the implementation of the Kafka Serialization schema.
public class ObjSerializationSchema implements KafkaSerializationSchema<Stream_Blocker_Pojo.block> {

    private String topic;
    private ObjectMapper mapper;

    public ObjSerializationSchema(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(Stream_Blocker_Pojo.block obj, Long timestamp) {
        byte[] b = null;
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        try {
            b = mapper.writeValueAsBytes(obj);
        } catch (JsonProcessingException e) {
        }
        return new ProducerRecord<byte[], byte[]>(topic, b);
    }
}
When I open the messages that I sent from my Flink script in Kafka, I find that all the variables are "null":
CorrID b'{"block_id":null,"block_var1":null,"block_var2":null,"block_var3":null,"block_var4":null}
It looks like I'm sending out an empty object with no values, but I'm struggling to understand what I'm doing wrong. I think the problem could be in my implementation of Stream_Blocker_Pojo or maybe in ObjSerializationSchema. Any help would be really appreciated. Thanks.
There are two probable issues here:
Are you sure the block object you are passing doesn't have null fields? You may want to debug that part to be sure.
The reason may also be in the ObjectMapper: you should have getters and setters available for your block, otherwise Jackson may not be able to access the fields.
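For the first point, note that map() in the question fills in current_a but then returns new block(), so the sink serializes a freshly constructed object whose fields are still null; that alone would explain the all-null JSON. A minimal sketch of the corrected map, keeping the question's field names (if the fields were private, you would additionally need the getters and setters from the second point, since Jackson cannot see non-public fields by default):
@Override
public block map(Event_N input) throws Exception {
    block current_a = state_a.value();
    if (current_a == null) {
        current_a = new block();
        current_a.block_id = 0.0;
        current_a.block_var2 = 0.0;
        current_a.block_var3 = 0.0;
        current_a.block_var4 = 0.0;
    }
    current_a.block_id = input.f_num_id;
    current_a.block_var2 = input.f_num_2;
    current_a.block_var3 = input.f_num_3;
    current_a.block_var4 = input.f_num_4;
    state_a.update(current_a);
    return current_a; // return the populated state object, not a new empty block
}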

Handling dead-letter queue message-broker independent way

I have a project that currently uses Spring Cloud Stream with RabbitMQ underneath. I've implemented logic based on the documentation; see below:
@Component
public class ReRouteDlq {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";
    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";
    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";
    private static final String X_RETRIES_HEADER = "x-retries";
    private static final String X_ORIGINAL_EXCHANGE_HEADER = RepublishMessageRecoverer.X_ORIGINAL_EXCHANGE;
    private static final String X_ORIGINAL_ROUTING_KEY_HEADER = RepublishMessageRecoverer.X_ORIGINAL_ROUTING_KEY;

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
            String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
            this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
        } else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }
}
It does what is expected; however, it is bound to RabbitMQ, and my company is planning to stop using this message broker in a year or two (I don't know why; it must be some crazy business decision). So, I want to implement the same thing, but detached from any specific message broker.
I tried changing the rePublish method this way, but it does not work:
@StreamListener(Sync.DLQ)
public void rePublish(Message failedMessage) {
    Map<String, Object> headers = failedMessage.getHeaders();
    Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
    if (retriesHeader == null) {
        retriesHeader = Integer.valueOf(0);
    }
    if (retriesHeader < 3) {
        headers.put(X_RETRIES_HEADER, retriesHeader + 1);
        String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
        String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
        this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
    } else {
        this.rabbitTemplate.send(PARKING_LOT, failedMessage);
    }
}
It fails because the Message class has immutable headers: the put attempt throws an exception saying you can't change its values (this uses the org.springframework.messaging.Message class).
Is there a way to implement this dead-letter queue handler in a message broker independent way?
Use
MessageBuilder.fromMessage(message)
.setHeader("foo", "bar")
...
.build();
Note that the message in @StreamListener is a spring-messaging Message<?>, not a spring-amqp Message, and it can't be sent using the template that way; you need an output binding to send the message to.
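Putting those two points together, a minimal broker-independent sketch could look like the following. The DlqBindings interface, the channel names and the retry limit are illustrative; the actual destinations (DLQ in, retry out, parking lot out) would be mapped to the broker in the binder configuration, and MessageBuilder is used because the incoming headers are immutable.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.support.MessageBuilder;

// Illustrative binding interface; the channel names and their broker destinations
// are configured per binder in application properties.
interface DlqBindings {
    String DLQ_IN = "dlqIn";
    String RETRY_OUT = "retryOut";
    String PARKING_LOT_OUT = "parkingLotOut";

    @Input(DLQ_IN)
    SubscribableChannel dlqIn();

    @Output(RETRY_OUT)
    MessageChannel retryOut();

    @Output(PARKING_LOT_OUT)
    MessageChannel parkingLotOut();
}

@EnableBinding(DlqBindings.class)
public class ReRouteDlq {

    private static final String X_RETRIES_HEADER = "x-retries";

    @Autowired
    private DlqBindings bindings;

    @StreamListener(DlqBindings.DLQ_IN)
    public void rePublish(Message<?> failedMessage) {
        Integer retriesHeader = failedMessage.getHeaders().get(X_RETRIES_HEADER, Integer.class);
        int retries = (retriesHeader == null) ? 0 : retriesHeader;

        // Headers are immutable, so copy the message and set the new counter on the copy.
        Message<?> outbound = MessageBuilder.fromMessage(failedMessage)
                .setHeader(X_RETRIES_HEADER, retries + 1)
                .build();

        if (retries < 3) {
            bindings.retryOut().send(outbound);       // re-deliver via the retry binding
        } else {
            bindings.parkingLotOut().send(outbound);  // give up and park the message
        }
    }
}
With this shape, swapping RabbitMQ for another broker only changes the binder dependency and the binding properties, not the handler code.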

upwork-api returns 503 IOException

I created an app for getting info from upwork.com. I use the Java lib and Upwork OAuth 1.0. The problem is that local requests to the API work fine, but when I deploy to Google Cloud, my code does not work. I get {"error":{"code":"503","message":"Exception: IOException"}}.
I create an UpworkAuthClient to return an OAuthClient, which is then used for requests in JobClient.
public void run() throws JSONException {
    UpworkAuthClient upworkClient = new UpworkAuthClient();
    upworkClient.setTokenWithSecret("USER TOKEN", "USER SECRET");
    OAuthClient client = upworkClient.getOAuthClient();
    // set the query
    JobQuery jobQuery = new JobQuery();
    jobQuery.setQuery("query");
    List<JobQuery> jobQueries = new ArrayList<>();
    jobQueries.add(jobQuery);
    // Get request of job
    JobClient jobClient = new JobClient(client, jobQuery);
    List<Job> result = jobClient.getJob();
}
public class UpworkAuthClient {

    public static final String CONSUMERKEY = "UPWORK KEY";
    public static final String CONSUMERSECRET = "UPWORK SECRET";
    public static final String OAUTH_CALLBACK = "https://my-app.com/main";

    OAuthClient client;

    public UpworkAuthClient() {
        Properties keys = new Properties();
        keys.setProperty("consumerKey", CONSUMERKEY);
        keys.setProperty("consumerSecret", CONSUMERSECRET);
        Config config = new Config(keys);
        client = new OAuthClient(config);
    }

    public void setTokenWithSecret(String token, String secret) {
        client.setTokenWithSecret(token, secret);
    }

    public OAuthClient getOAuthClient() {
        return client;
    }

    public String getAuthorizationUrl() {
        return this.client.getAuthorizationUrl(OAUTH_CALLBACK);
    }
}
public class JobClient {

    private JobQuery jobQuery;
    private Search jobs;

    public JobClient(OAuthClient oAuthClient, JobQuery jobQuery) {
        jobs = new Search(oAuthClient);
        this.jobQuery = jobQuery;
    }

    public List<Job> getJob() throws JSONException {
        JSONObject job = jobs.find(jobQuery.getQueryParam());
        List<Job> jobList = parseResponse(job);
        return jobList;
    }
}
The local dev server works fine and I get results on my local machine, but not in the cloud.
I will be glad for any ideas, thanks!
{"error":{"code":"503","message":"Exception: IOException"}}
This doesn't seem like a response returned by the Upwork API. Could you please provide the full response, including the returned headers? Then we can take a more precise look into it.

Can I send information to TestRail from Selenium when my test runs its course?

I am trying to send my test results to TestRail from Selenium, but I'm not using an assert to end the test; I just want it to pass if it runs to completion. Is this possible? Also, are there any examples of this working in code? I currently have:
public class login_errors extends ConditionsWebDriverFactory {

    public static String TEST_RUN_ID = "R1713";
    public static String TESTRAIL_USERNAME = "f2009@hotmail.com";
    public static String TESTRAIL_PASSWORD = "Password100";
    public static String RAILS_ENGINE_URL = "https://testdec.testrail.com/";
    public static final int TEST_CASE_PASSED_STATUS = 1;
    public static final int TEST_CASE_FAILED_STATUS = 5;

    @Test
    public void login_errors() throws IOException, APIException {
        Header header = new Header();
        header.guest_select_login();
        Pages.Login login = new Pages.Login();
        login.login_with_empty_fields();
        login.login_with_invalid_email();
        login.email_or_password_incorrect();
        login.login_open_and_close();
        login_errors.addResultForTestCase("T65013", TEST_CASE_PASSED_STATUS, " ");
    }

    public static void addResultForTestCase(String testCaseId, int status, String error) throws IOException, APIException {
        String testRunId = TEST_RUN_ID;
        APIClient client = new APIClient(RAILS_ENGINE_URL);
        client.setUser(TESTRAIL_USERNAME);
        client.setPassword(TESTRAIL_PASSWORD);
        Map<String, Object> data = new HashMap<>();
        data.put("status_id", status);
        data.put("comment", "Test Executed - Status updated automatically from Selenium test automation.");
        client.sendPost("add_result_for_case/" + testRunId + "/" + testCaseId + "", data);
    }
}
I am getting a 401 status from this code.
Simply place the addResultForTestCase call at the end of the run. Ensure the test case ID is used rather than the run ID; you are currently using the incorrect ID.
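A sketch of what the corrected call might look like, assuming the API expects the bare numeric run and case IDs (the "R"/"T" prefixes are only display prefixes in the TestRail UI); the 401 itself means the username/password (or API key) is being rejected, which needs to be fixed separately:
// Sketch only: the numeric IDs below are the question's R1713/T65013 with the UI
// prefixes stripped; RAILS_ENGINE_URL and the other constants are from the question.
public static void addResultForTestCase(String testRunId, String testCaseId, int status)
        throws IOException, APIException {
    APIClient client = new APIClient(RAILS_ENGINE_URL);
    client.setUser(TESTRAIL_USERNAME);     // a 401 means TestRail rejects these credentials,
    client.setPassword(TESTRAIL_PASSWORD); // so verify the account or API key as well
    Map<String, Object> data = new HashMap<>();
    data.put("status_id", status);
    data.put("comment", "Test executed - status updated automatically from Selenium.");
    client.sendPost("add_result_for_case/" + testRunId + "/" + testCaseId, data);
}

// Called once everything in the test has run to completion:
// addResultForTestCase("1713", "65013", TEST_CASE_PASSED_STATUS);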

Using AWS Java's SDKs, how can I terminate the CloudFormation stack of the current instance?

Using online documentation I came up with the following code to terminate the current EC2 instance:
public class Ec2Utility {

    static private final String LOCAL_META_DATA_ENDPOINT = "http://169.254.169.254/latest/meta-data/";
    static private final String LOCAL_INSTANCE_ID_SERVICE = "instance-id";

    static public void terminateMe() throws Exception {
        TerminateInstancesRequest terminateRequest = new TerminateInstancesRequest().withInstanceIds(getInstanceId());
        AmazonEC2 ec2 = new AmazonEC2Client();
        ec2.terminateInstances(terminateRequest);
    }

    static public String getInstanceId() throws Exception {
        // SimpleRestClient is an internal wrapper on top of an HTTP client.
        SimpleRestClient client = new SimpleRestClient(LOCAL_META_DATA_ENDPOINT);
        HttpResponse response = client.makeRequest(METHOD.GET, LOCAL_INSTANCE_ID_SERVICE);
        return IOUtils.toString(response.getEntity().getContent(), "UTF-8");
    }
}
My issue is that my EC2 instance is under an AutoScalingGroup, which is under a CloudFormation stack; that is because of my organisation's deployment standards, even though this single EC2 instance is all there is for this feature.
So, I want to terminate the entire CloudFormation stack from the Java SDK. Keep in mind that I don't have the CloudFormation stack name in advance, just as I didn't have the EC2 instance ID, so I will have to get it from the code using API calls.
How can I do that, if I can do it?
You should be able to use the deleteStack method from the CloudFormation SDK:
DeleteStackRequest request = new DeleteStackRequest();
request.setStackName(<stack_name_to_be_deleted>);
AmazonCloudFormationClient client = new AmazonCloudFormationClient (<credentials>);
client.deleteStack(request);
If you don't have the stack name, you should be able to retrieve it from the tags of your instance:
DescribeInstancesRequest request = new DescribeInstancesRequest();
request.setInstanceIds(instancesList);
DescribeInstancesResult disresult = ec2.describeInstances(request);
List<Reservation> list = disresult.getReservations();
for (Reservation res : list) {
    List<Instance> instancelist = res.getInstances();
    for (Instance instance : instancelist) {
        List<Tag> tags = instance.getTags();
        for (Tag tag : tags) {
            if (tag.getKey().equals("aws:cloudformation:stack-name")) {
                tag.getValue(); // name of the stack
            }
        }
    }
}
In the end I achieved the desired behaviour using the following util functions I wrote:
/**
 * Delete the CloudFormationStack with the given name.
 *
 * @param stackName
 * @throws Exception
 */
static public void deleteCloudFormationStack(String stackName) throws Exception {
    AmazonCloudFormationClient client = new AmazonCloudFormationClient();
    DeleteStackRequest deleteStackRequest = new DeleteStackRequest().withStackName(stackName);
    client.deleteStack(deleteStackRequest);
}
static public String getCloudFormationStackName() throws Exception {
    AmazonEC2 ec2 = new AmazonEC2Client();
    String instanceId = getInstanceId();
    List<Tag> tags = getEc2Tags(ec2, instanceId);
    for (Tag t : tags) {
        // TAG_KEY_STACK_NAME is the "aws:cloudformation:stack-name" tag key used above.
        if (t.getKey().equalsIgnoreCase(TAG_KEY_STACK_NAME)) {
            return t.getValue();
        }
    }
    throw new Exception("Couldn't find stack name for instanceId:" + instanceId);
}

static private List<Tag> getEc2Tags(AmazonEC2 ec2, String instanceId) throws Exception {
    DescribeInstancesRequest describeInstancesRequest = new DescribeInstancesRequest().withInstanceIds(instanceId);
    DescribeInstancesResult describeInstances = ec2.describeInstances(describeInstancesRequest);
    List<Reservation> reservations = describeInstances.getReservations();
    if (reservations.isEmpty()) {
        throw new Exception("DescribeInstances didn't return a reservation for instanceId:" + instanceId);
    }
    List<Instance> instances = reservations.get(0).getInstances();
    if (instances.isEmpty()) {
        throw new Exception("DescribeInstances didn't return an instance for instanceId:" + instanceId);
    }
    return instances.get(0).getTags();
}
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
// Example of usage from the code:
deleteCloudFormationStack(getCloudFormationStackName());
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
