HTTP Request limit with jnetpcap - java

I'm developing a sniffer in Java with Eclipse. I can sniff HTTP, TCP, and UDP packets, but I need to detect (or "flag") when there are more than 10 requests in one second. I know that with jnetpcap we can't block traffic; I just want to know whether this is possible. Thank you. My code is below:
PcapPacketHandler<String> jpacketHandler = new PcapPacketHandler<String>() {
    private final Http h = new Http();
    private final Tcp t = new Tcp();

    @Override
    public void nextPacket(PcapPacket packet, String user) {
        if (packet.hasHeader(h)) {
            final JCaptureHeader header = packet.getCaptureHeader();
            System.out.printf("---------1111--------" + header.toString() + "-------1111--------");
            System.out.printf("packet caplen= %d wiredlen = %d \n",
                    header.caplen(), header.wirelen());
            System.out.println(packet.toString());
            // Find out whether the given packet is a request/response packet: first get the TCP header
            packet.getHeader(t);
        }
    }
};
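To the actual question (flagging more than 10 requests in one second): jnetpcap can't block traffic, but you can keep a sliding one-second window of capture timestamps and warn when it grows past 10. A minimal sketch, assuming timestampInMillis() on the capture header, with a requestTimes field added next to h and t in the handler:

// Needs java.util.Deque and java.util.ArrayDeque
private final Deque<Long> requestTimes = new ArrayDeque<Long>();

// Inside nextPacket(), once packet.hasHeader(h) is true:
long now = packet.getCaptureHeader().timestampInMillis();
requestTimes.addLast(now);
// Drop timestamps older than one second to keep the window sliding
while (!requestTimes.isEmpty() && now - requestTimes.getFirst() > 1000) {
    requestTimes.removeFirst();
}
if (requestTimes.size() > 10) {
    System.out.println("More than 10 HTTP requests in the last second");
}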

Related

Java: Azure Service Bus Queue receiving messages with sessions

I'm writing code in Java (using the Azure SDK for Java). I have a Service Bus queue that contains sessionful messages, and I want to receive those messages and process them elsewhere.
I connect to the queue using QueueClient, and then use registerSessionHandler to process the messages (code below).
The problem is that whenever a message is received, I can print all of its details, including the content, but it is printed 10 times, and after each time an Exception is printed.
(Printing 10 times: I understand this is because there is a 10-attempt retry policy before the message is moved to the dead-letter queue and the receiver goes on to the next message.)
The Exception says
> USERCALLBACK-Receiver not created. Registering a MessageHandler creates a receiver.
The output with the Exception
I'm sure that the SessionHandler does the same thing as the MessageHandler but with added support for sessions, so it should create a receiver, since it receives messages. I have tried using the MessageHandler, but it doesn't work at all and stops the whole program, because it doesn't support sessionful messages, and the messages I receive have sessions.
My problem is understanding what the Exception wants me to do, and how I can fix the code so it doesn't throw any exceptions. Does anyone have suggestions on how to improve the code, or other methods that do the same thing?
QueueClient qc = new QueueClient(
        new ConnectionStringBuilder(connectionString),
        ReceiveMode.PEEKLOCK);

qc.registerSessionHandler(
        new ISessionHandler() {
            @Override
            public CompletableFuture<Void> onMessageAsync(IMessageSession messageSession, IMessage message) {
                System.out.printf(
                        "\nMessage received: " +
                        "\n --> MessageId = %s " +
                        "\n --> SessionId = %s" +
                        "\n --> Content Type = %s" +
                        "\n --> Content = \n\t\t %s",
                        message.getMessageId(),
                        messageSession.getSessionId(),
                        message.getContentType(),
                        getMessageContent(message)
                );
                return qc.completeAsync(message.getLockToken());
            }

            @Override
            public CompletableFuture<Void> OnCloseSessionAsync(IMessageSession iMessageSession) {
                return CompletableFuture.completedFuture(null);
            }

            @Override
            public void notifyException(Throwable throwable, ExceptionPhase exceptionPhase) {
                System.out.println("\n Exception " + exceptionPhase + "-" + throwable.getMessage());
            }
        },
        new SessionHandlerOptions(1, true, Duration.ofMinutes(1)),
        Executors.newSingleThreadExecutor()
);
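One thing worth checking, offered as an assumption rather than a confirmed fix: the completion call above goes through qc, but with registerSessionHandler the message lock is held by the session's receiver, so completing through the session object instead may avoid the "Receiver not created" error:

// Assumption: complete via the session that delivered the message,
// since the session receiver (not the QueueClient) holds the lock
return messageSession.completeAsync(message.getLockToken());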
(The getMessageContent(message) method is a separate method, for those interested:)
public String getMessageContent(IMessage message) {
    List<byte[]> content = message.getMessageBody().getBinaryData();
    StringBuilder sb = new StringBuilder();
    for (byte[] b : content) {
        sb.append(new String(b));
    }
    return sb.toString();
}
For those who wonder, I managed to solve the problem!
It was simply done by using an Azure Functions ServiceBusQueueTrigger, which listens to the Service Bus queue and processes the messages. Setting isSessionsEnabled to true makes it accept sessionful messages, just as I wanted :)
So instead of writing more than 100 lines of code, the code now looks like this:
public class Function {
    @FunctionName("QueueFunction")
    public void run(
            @ServiceBusQueueTrigger(
                    name = "TriggerName",            // Any name you choose
                    queueName = "queueName",         // Queue name from the portal
                    connection = "ConnectionString", // Connection string from the portal
                    isSessionsEnabled = true
            ) String message,
            ExecutionContext context
    ) {
        // Write the code you want to run on the message here,
        // using the variable message, which contains the message content, messageId, sessionId, etc.
    }
}

SMPP Server CloudHopper - how should I receive multipart messages?

I have a CloudHopper SMPP server; at the moment I can receive simple short messages.
if (pduRequest.getCommandId() == SmppConstants.CMD_ID_SUBMIT_SM) {
    SubmitSm request = (SubmitSm) pduRequest;
    request.getShortMessage();
    ....
}
But what should I do to receive a long (multipart) message?
I don't know which object I have to use...
Help me, please.
Many thanks.
The following processes a multipart long message PDU that you would get when receiving a long message that has been split into multiple PDUs:
import com.cloudhopper.commons.charset.CharsetUtil;
import com.cloudhopper.commons.gsm.GsmUtil;
import com.cloudhopper.smpp.pdu.DeliverSm;
import com.cloudhopper.smpp.util.SmppUtil;
...
DeliverSm mobileOriginatedMessage = (DeliverSm) pduRequest;
byte[] messageBytes = mobileOriginatedMessage.getShortMessage();
boolean isUdh = SmppUtil.isUserDataHeaderIndicatorEnabled(mobileOriginatedMessage.getEsmClass());
if (isUdh) {
    // Concatenation UDH layout: [length, IEI, IE length, reference, total parts, part number]
    byte[] userDataHeader = GsmUtil.getShortMessageUserDataHeader(messageBytes);
    int thisMessageId = userDataHeader[3] & 0xff;   // unique to the message, same across all parts
    int totalMessages = userDataHeader[4] & 0xff;
    int currentMessageNum = userDataHeader[5] & 0xff;
    messageBytes = GsmUtil.getShortMessageUserData(messageBytes); // strip the UDH
    String message = CharsetUtil.decode(messageBytes, CharsetUtil.CHARSET_GSM); // example decoding, depends on the charset used
    System.out.println("thisMessageId: " + thisMessageId);
    System.out.println("totalMessages: " + totalMessages);
    System.out.println("currentMessageNum: " + currentMessageNum);
    System.out.println("Message: " + message);
}
...
...
The above shows how to:
- determine whether a PDU is a multipart long (UDH) message
- get all the UDH header information, so you know:
  - which message the part belongs to
  - which part number was received, so you can put the message back together in the right order
  - the total number of parts you are expecting
- get the actual message text of each part
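Rebuilding the full text still requires buffering parts until all of them have arrived. A minimal reassembly sketch of my own (not CloudHopper API), assuming a single sender; in practice you would key on source address plus reference number and expire stale entries:

// Collect parts per reference number until the message is complete.
// Needs java.util.Map and java.util.HashMap.
private final Map<Integer, String[]> pending = new HashMap<Integer, String[]>();

// Call for each decoded part; returns the full message, or null if parts are still missing.
String addPart(int refNum, int totalParts, int partNum, String text) {
    String[] parts = pending.get(refNum);
    if (parts == null) {
        parts = new String[totalParts];
        pending.put(refNum, parts);
    }
    parts[partNum - 1] = text; // part numbers are 1-based
    for (String p : parts) {
        if (p == null) {
            return null; // still waiting for more parts
        }
    }
    pending.remove(refNum);
    StringBuilder full = new StringBuilder();
    for (String p : parts) {
        full.append(p);
    }
    return full.toString();
}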

Bluetooth LE data JSON in 20 bytes

I have a Bluetooth LE module on Arduino which sends a JSON string to an Android application.
The JSON string look like this:
{'d_stats':[{'t':'26.62','h':'59.64','p':'755.23','a':'109.02','hrm':'0.00'}]}
The Android app receives packets of 20 bytes (a 20-character limit), and I can't find a way to put all the packets back together once the last packet has been received.
Is there a way to know when the last packet is received?
Edit: the Bluetooth module sends data at a constant time interval. There is a button connected to the Arduino board which, when pushed, sends other data via Bluetooth. The problem is that this overlaps with the timed transmission.
I found a solution, although it's not very elegant.
Instead of sending the whole JSON string, BLE sends a single key/value pair per packet.
First, in C:
// akey = object key, must be 4 characters long
// origMsg + akey must be shorter than 20 characters
// origMsg must point to a buffer with room for at least 16 bytes
void passMsg(String akey, char* origMsg) {
    char* newmsg = origMsg;
    size_t prevlen = strlen(newmsg);
    memset(newmsg + prevlen, ' ', 15 - prevlen); // pad the value to a fixed width
    *(newmsg + 15) = '\0';
    String bleMsg = akey + ":" + newmsg;
    ble.print("AT+BLEUARTTX=");
    ble.println(bleMsg);
}
This way I pass a string like this: temp:20.45
Then in Android/Java:
String[] rawString = data.replace(" ", "").split(":");
if (rawString.length > 1) {
    String apiCallKey = rawString[0];
    String apiCallVal = rawString[1];
    callAPI(apiCallKey, apiCallVal);
}
Where data is raw data from Bluetooth.
Phew...
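For the original question ("is there a way to know when the last packet is received?"), another common approach, sketched here as an untested assumption, is to append a terminator character that cannot occur in the payload and buffer the 20-byte chunks on the Android side until it shows up:

// Sketch: accumulate BLE chunks until a terminator arrives.
// '\n' is an assumed terminator; any byte that can't occur in the JSON works.
private final StringBuilder buffer = new StringBuilder();

void onPacket(String chunk) {
    buffer.append(chunk);
    int end = buffer.indexOf("\n");
    while (end >= 0) {
        String json = buffer.substring(0, end); // one complete JSON string
        buffer.delete(0, end + 1);
        handleJson(json); // hypothetical handler for the reassembled message
        end = buffer.indexOf("\n");
    }
}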

Using JZMQ with EPGM Transport Is Not Sending or Receiving Data

I'm experimenting with Java-flavored ZMQ to test the benefits of using PGM over TCP in my project. So I changed the weather example from the zmq guide to use the epgm transport.
Everything compiles and runs, but nothing is being sent or received. If I change the transport back to TCP, the server receives the messages sent from the client and I get the console output I'm expecting.
So, what are the requirements for using PGM? I changed the string that I pass to the bind and connect methods to follow the zmq API for zmq_pgm: "transport://interface;multicast address:port". That didn't work: I get an invalid argument error whenever I attempt to use this format. So I simplified it by dropping the interface and the semicolon, which "works", but I'm not getting any results.
I haven't been able to find a jzmq example that uses pgm/epgm, and the API documentation for the Java binding does not define the appropriate string format for an endpoint passed to bind or connect. So what am I missing here? Do I have to use different hosts for the client and the server?
One thing of note is that I'm running my code on a VirtualBox VM (Ubuntu 14.04/OSX Mavericks host). I'm not sure if that has anything to do with the issue I'm currently facing.
Server:
public class wuserver {
    public static void main(String[] args) throws Exception {
        // Prepare our context and publisher
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket publisher = context.socket(ZMQ.PUB);
        publisher.bind("epgm://xx.x.x.xx:5556");
        publisher.bind("ipc://weather");

        // Initialize random number generator
        Random srandom = new Random(System.currentTimeMillis());
        while (!Thread.currentThread().isInterrupted()) {
            // Get values that will fool the boss
            int zipcode, temperature, relhumidity;
            zipcode = 10000 + srandom.nextInt(10000);
            temperature = srandom.nextInt(215) - 80 + 1;
            relhumidity = srandom.nextInt(50) + 10 + 1;

            // Send message to all subscribers
            String update = String.format("%05d %d %d", zipcode, temperature, relhumidity);
            publisher.send(update, 0);
        }
        publisher.close();
        context.term();
    }
}
Client:
public class wuclient {
    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);

        // Socket to talk to server
        System.out.println("Collecting updates from weather server");
        ZMQ.Socket subscriber = context.socket(ZMQ.SUB);
        //subscriber.connect("tcp://localhost:5556");
        subscriber.connect("epgm://xx.x.x.xx:5556");

        // Subscribe to zipcode, default is NYC, 10001
        String filter = (args.length > 0) ? args[0] : "10001 ";
        subscriber.subscribe(filter.getBytes());

        // Process 100 updates
        int update_nbr;
        long total_temp = 0;
        for (update_nbr = 0; update_nbr < 100; update_nbr++) {
            // Use trim to remove the trailing '\0' character
            String string = subscriber.recvStr(0).trim();
            StringTokenizer sscanf = new StringTokenizer(string, " ");
            int zipcode = Integer.valueOf(sscanf.nextToken());
            int temperature = Integer.valueOf(sscanf.nextToken());
            int relhumidity = Integer.valueOf(sscanf.nextToken());
            total_temp += temperature;
        }
        System.out.println("Average temperature for zipcode '"
                + filter + "' was " + (int) (total_temp / update_nbr));

        subscriber.close();
        context.term();
    }
}
There are a couple possibilities:
You need to make sure ZMQ is compiled with the --with-pgm option: see here - but this doesn't appear to be your issue if you're not seeing "protocol not supported"
Using raw pgm requires root privileges because it requires the ability to create raw sockets... but epgm doesn't require that, so it shouldn't be your issue either (I only bring it up because you use the term "pgm/epgm", and you should be aware that they are not equally available in all situations)
What actually appears to be the problem in your case is that pgm/epgm requires support along the network path. In theory, it requires support out to your router, so your application can send a single message and have your router send out multiple messages to each client, but if your server is aware enough, it can probably send out multiple messages immediately and bypass this router support. The problem is, as you correctly guessed, trying to do this all on one host is not supported.
So, you need different hosts for client and server.
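For the endpoint string itself, the zmq_pgm format is "epgm://interface;multicastaddress:port", so a working pair could look like the sketch below (eth0 and 239.192.1.1 are placeholders for your interface and multicast group):

// "epgm://<interface>;<multicast address>:<port>" -- both sides must agree
publisher.bind("epgm://eth0;239.192.1.1:5556");
subscriber.connect("epgm://eth0;239.192.1.1:5556");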
Another thing to be aware of: some virtualization environments (RHEV/oVirt, and libvirt/KVM with the mac_filter option enabled, come to mind) by default neuter a guest's ability to use multicast with other guests via (eb|ip)tables. With libvirt, of course, the solution is simply to set the option to '0' and restart libvirtd; RHEV/oVirt require a custom plugin.
At any rate, I would suggest putting a sniffer on the network devices on each system you are using and watching to be sure traffic that is exiting the one host is actually visible on the other.

elasticsearch java bulk batch size

I want to use the Elasticsearch bulk API from Java and am wondering how I can set the batch size.
Currently I am using it as:
BulkRequestBuilder bulkRequest = getClient().prepareBulk();
while (hasMore) {
    bulkRequest.add(getClient().prepareIndex(indexName, indexType, artist.getDocId()).setSource(json));
    hasMore = checkHasMore();
}
BulkResponse bResp = bulkRequest.execute().actionGet();
// To check failures
log.info("Has failures? {}", bResp.hasFailures());
Any idea how I can set the bulk/batch size?
It mainly depends on the size of your documents, available resources on the client and the type of client (transport client or node client).
The node client is aware of the shards over the cluster and sends the documents directly to the nodes that hold the shards where they are supposed to be indexed. On the other hand the transport client is a normal client that sends its requests to a list of nodes in a round-robin fashion. The bulk request would be sent to one node then, which would become your gateway when indexing.
Since you're using the Java API, I would suggest you have a look at the BulkProcessor, which makes it much easier and more flexible to index in bulk. You can define a maximum number of actions, a maximum size, and a maximum time interval since the last bulk execution. It's going to execute the bulk automatically for you when needed. You can also set a maximum number of concurrent bulk requests.
After you have created the BulkProcessor like this:
BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {
        logger.info("Going to execute new bulk composed of {} actions", request.numberOfActions());
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        logger.info("Executed bulk composed of {} actions", request.numberOfActions());
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        logger.warn("Error executing bulk", failure);
    }
}).setBulkActions(bulkSize).setConcurrentRequests(maxConcurrentBulk).build();
You just have to add your requests to it:
bulkProcessor.add(indexRequest);
and close it at the end to flush any remaining requests that might not have been executed yet:
bulkProcessor.close();
To finally answer your question: the nice thing about the BulkProcessor is also that it has sensible defaults: 5 MB in size, 1000 actions, 1 concurrent request, and no flush interval (which might be useful to set).
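For example, if you also want time-based flushing, the builder exposes a flush interval; the values below are just an illustration, and listener stands for the BulkProcessor.Listener shown above:

BulkProcessor bulkProcessor = BulkProcessor.builder(client, listener)
        .setBulkActions(1000)                               // flush every 1000 actions
        .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB)) // or every 5 MB
        .setFlushInterval(TimeValue.timeValueSeconds(5))    // or every 5 seconds
        .setConcurrentRequests(1)                           // one bulk request in flight at a time
        .build();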
You need to count the requests in your bulk request builder; when the count hits your batch size limit, execute the bulk and start a fresh builder.
Here is an example:
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "MyClusterName").build();

TransportClient client = new TransportClient(settings);
String hostname = "myhost ip";
int port = 9300;
client.addTransportAddress(new InetSocketTransportAddress(hostname, port));

BulkRequestBuilder bulkBuilder = client.prepareBulk();
BufferedReader br = new BufferedReader(new InputStreamReader(new DataInputStream(new FileInputStream("my_file_path"))));
long bulkBuilderLength = 0;
String readLine = "";
String index = "my_index_name";
String type = "my_type_name";
String id = "";

while ((readLine = br.readLine()) != null) {
    id = somefunction(readLine);
    String json = new ObjectMapper().writeValueAsString(readLine);
    bulkBuilder.add(client.prepareIndex(index, type, id).setSource(json));
    bulkBuilderLength++;
    if (bulkBuilderLength % 1000 == 0) {
        logger.info("##### " + bulkBuilderLength + " data indexed.");
        BulkResponse bulkRes = bulkBuilder.execute().actionGet();
        if (bulkRes.hasFailures()) {
            logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
        }
        // Start a fresh bulk builder for the next batch
        bulkBuilder = client.prepareBulk();
    }
}
br.close();

// Flush whatever is left over after the loop
if (bulkBuilder.numberOfActions() > 0) {
    logger.info("##### " + bulkBuilderLength + " data indexed.");
    BulkResponse bulkRes = bulkBuilder.execute().actionGet();
    if (bulkRes.hasFailures()) {
        logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
    }
    bulkBuilder = client.prepareBulk();
}
Hope this helps you.
Thanks.
