I have implemented a basic publisher and subscriber in Java. The publisher reads a 5 MB file in 1 MB chunks and publishes each 1 MB chunk to the subscriber. The data is published successfully, but I am having trouble appending the chunks to the existing file on the subscriber side: in the end the file contains only the last 1 MB of data. How can I solve this? I have attached the source code for the publisher and subscriber.
Publisher:
public class MessageDataPublisher {
    static StringBuffer fileContent;
    static RandomAccessFile randomAccessFile;

    public static void main(String[] args) throws IOException {
        MessageDataPublisher msgObj = new MessageDataPublisher();
        String fileToWrite = "test.txt";
        msgObj.towriteDDS(fileToWrite);
    }

    public void towriteDDS(String fileName) throws IOException {
        DDSEntityManager mgr = new DDSEntityManager();
        String partitionName = "PARTICIPANT";

        // create Domain Participant
        mgr.createParticipant(partitionName);

        // create Type
        BinaryFileTypeSupport binary = new BinaryFileTypeSupport();
        mgr.registerType(binary);

        // create Topic
        mgr.createTopic("Serials");

        // create Publisher
        mgr.createPublisher();

        // create DataWriter
        mgr.createWriter();

        // Publish Events
        DataWriter dwriter = mgr.getWriter();
        BinaryFileDataWriter binaryWriter = BinaryFileDataWriterHelper.narrow(dwriter);

        int bufferSize = 1024 * 1024;
        File readfile = new File(fileName);
        FileInputStream is = new FileInputStream(readfile);
        byte[] totalbytes = new byte[is.available()];
        is.read(totalbytes);

        byte[] readbyte = new byte[bufferSize];
        BinaryFile binaryInstance;
        int k = 0;
        for (int i = 0; i < totalbytes.length; i++) {
            readbyte[k] = totalbytes[i];
            k++;
            if (k > (bufferSize - 1)) {
                binaryInstance = new BinaryFile();
                binaryInstance.name = "sendpublisher.txt";
                binaryInstance.contents = readbyte;
                int status = binaryWriter.write(binaryInstance, HANDLE_NIL.value);
                ErrorHandler.checkStatus(status, "MsgDataWriter.write");
                k = 0;
            }
        }
        if (k < (bufferSize - 1)) {
            byte[] remaingbyte = new byte[k];
            for (int j = 0; j < (k - 1); j++) {
                remaingbyte[j] = readbyte[j];
            }
            binaryInstance = new BinaryFile();
            binaryInstance.name = "sendpublisher.txt";
            binaryInstance.contents = remaingbyte;
            int status = binaryWriter.write(binaryInstance, HANDLE_NIL.value);
            ErrorHandler.checkStatus(status, "MsgDataWriter.write");
        }
        is.close();
        try {
            Thread.sleep(4000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        // clean up
        mgr.getPublisher().delete_datawriter(binaryWriter);
        mgr.deletePublisher();
        mgr.deleteTopic();
        mgr.deleteParticipant();
    }
}
Subscriber:
public class MessageDataSubscriber {
    static RandomAccessFile randomAccessFile;

    public static void main(String[] args) throws IOException {
        DDSEntityManager mgr = new DDSEntityManager();
        String partitionName = "PARTICIPANT";

        // create Domain Participant
        mgr.createParticipant(partitionName);

        // create Type
        BinaryFileTypeSupport msgTS = new BinaryFileTypeSupport();
        mgr.registerType(msgTS);

        // create Topic
        mgr.createTopic("Serials");

        // create Subscriber
        mgr.createSubscriber();

        // create DataReader
        mgr.createReader();

        // Read Events
        DataReader dreader = mgr.getReader();
        BinaryFileDataReader binaryReader = BinaryFileDataReaderHelper.narrow(dreader);
        BinaryFileSeqHolder binaryseq = new BinaryFileSeqHolder();
        SampleInfoSeqHolder infoSeq = new SampleInfoSeqHolder();

        boolean terminate = false;
        int count = 0;
        while (!terminate && count < 1500) {
            // To run indefinitely
            binaryReader.take(binaryseq, infoSeq, 10,
                    ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value, ANY_INSTANCE_STATE.value);
            for (int i = 0; i < binaryseq.value.length; i++) {
                toWrtieXML(binaryseq.value[i].contents);
                terminate = true;
            }
            try {
                Thread.sleep(200);
            } catch (InterruptedException ie) {
            }
            ++count;
        }
        binaryReader.return_loan(binaryseq, infoSeq);

        // clean up
        mgr.getSubscriber().delete_datareader(binaryReader);
        mgr.deleteSubscriber();
        mgr.deleteTopic();
        mgr.deleteParticipant();
    }

    private static void toWrtieXML(byte[] bytes) throws IOException {
        File Writefile = new File("samplesubscriber.txt");
        if (!Writefile.exists()) {
            randomAccessFile = new RandomAccessFile(Writefile, "rw");
            randomAccessFile.write(bytes, 0, bytes.length);
            randomAccessFile.close();
        } else {
            randomAccessFile = new RandomAccessFile(Writefile, "rw");
            long i = Writefile.length();
            randomAccessFile.seek(i);
            randomAccessFile.write(bytes, 0, bytes.length);
            randomAccessFile.close();
        }
    }
}
Thanks in advance
It is hard to give a conclusive answer to your question, because your issue could be the result of several different causes. Also, once the cause of the problem has been identified, you will probably have multiple options to mitigate it.
The first place to look is at the reader side. The code does a take() in a loop with a 200 millisecond pause between each take. Depending on your QoS settings on the DataReader, you might be facing a situation where your samples get overwritten in the DataReader while your application is sleeping for 200 milliseconds. If you are doing this over a gigabit ethernet, then a typical DDS product would be able to do those 5 chunks of 1 megabyte within that sleep period, meaning that your default, one-place buffer will get overwritten 4 times during your sleep.
This scenario would be likely if you used the default history QoS settings for your BinaryFileDataReader, which means history.kind = KEEP_LAST and history.depth = 1. Increasing the latter to a larger value, for example to 20, would result in a queue capable of holding 20 chunks of your file while you are sleeping. That should be sufficient for now.
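If your DDSEntityManager wraps the standard OMG DDS Java API (as the OpenSplice examples do), setting that history depth when the DataReader is created might look roughly like the sketch below. This is only an illustration: it assumes the classic DDS Java PSM types (DataReaderQosHolder, HistoryQosPolicyKind) and that the Subscriber and Topic created by your helper class are accessible as subscriber and topic; the exact calls depend on your DDS vendor and on how DDSEntityManager builds its reader.

// Hypothetical sketch: create the BinaryFile DataReader with KEEP_LAST history, depth 20
DataReaderQosHolder drQosHolder = new DataReaderQosHolder();
subscriber.get_default_datareader_qos(drQosHolder);
drQosHolder.value.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
drQosHolder.value.history.depth = 20; // room for 20 one-megabyte chunks between take() calls
DataReader dreader = subscriber.create_datareader(
        topic, drQosHolder.value, null, STATUS_MASK_NONE.value);
BinaryFileDataReader binaryReader = BinaryFileDataReaderHelper.narrow(dreader);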
If this does not resolve your issue, other possible causes can be explored.
Related
I have a file A.txt with 100,000,000 records, the numbers 1 to 100,000,000, one record per line. I have to read file A and write to files B and C, such that even-valued lines go to file B and odd-valued lines go to file C.
The required read-and-write time must be less than 40 seconds.
Below is the code I already have, but its runtime is more than 50 seconds.
Does anyone have any other solution to reduce runtime?
Threading.java
import java.io.*;
import java.util.concurrent.LinkedBlockingQueue;
public class Threading implements Runnable {
    LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
    String file;
    Boolean stop = false;

    public Threading(String file) {
        this.file = file;
    }

    public void addQueue(String row) {
        queue.add(row);
    }

    public void Stop() {
        stop = true;
    }

    public void run() {
        try {
            BufferedWriter bw = new BufferedWriter(new FileWriter(file));
            while (!stop) {
                try {
                    String row = queue.take();
                    bw.write(row + "\n");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            bw.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
ThreadCreate.java
// I used 2 threads to write to 2 files B and C
import java.io.*;
import java.util.List;
public class ThreadCreate {
    public void startThread(File file) {
        Threading t1 = new Threading("B.txt");
        Threading t2 = new Threading("C.txt");
        Thread td1 = new Thread(t1);
        Thread td2 = new Thread(t2);
        td1.start();
        td2.start();
        try {
            BufferedReader br = new BufferedReader(new FileReader(file));
            String line;
            long start = System.currentTimeMillis();
            while ((line = br.readLine()) != null) {
                if (Integer.parseInt(line) % 2 == 0) {
                    t1.addQueue(line);
                } else {
                    t2.addQueue(line);
                }
            }
            t1.Stop();
            t2.Stop();
            br.close();
            long end = System.currentTimeMillis();
            System.out.println("Time to read file A and write file B, C: " + ((end - start) / 1000) + "s");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Main.java
import java.io.*;
public class Main {
    public static void main(String[] args) throws IOException {
        File file = new File("A.txt");
        // Write files B and C
        ThreadCreate t = new ThreadCreate();
        t.startThread(file);
    }
}
Why are you making threads? That just slows things down. Threads help when the bottleneck is the calculation itself or when you can usefully overlap blocking operations; otherwise they only hurt. Here the CPU is mostly idle and the bottleneck is the disk, so multithreading cannot help: telling a single SSD to write two boatloads of bytes in parallel is no faster (and likely slower, as it has to bounce back and forth between the two files). If the target disk is a spinning disk, it is far slower still - the write head cannot make clones of itself to go any faster, and by making this multithreaded you waste a ton of time asking the write head to seek back and forth between the different write locations.
There's nothing that immediately strikes me as ripe for significant speedups.
Sometimes, writing a ton of data to a disk just takes 50 seconds. If that's not acceptable, buy a faster disk.
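For comparison, a plain single-threaded pass with buffered streams is roughly as fast as this job can go on one disk. The sketch below is only an illustration under the question's assumptions (one integer per line in A.txt, even values to B.txt, odd values to C.txt); tune the buffer sizes to taste.

import java.io.*;

public class SplitFile {
    public static void main(String[] args) throws IOException {
        long start = System.currentTimeMillis();
        try (BufferedReader br = new BufferedReader(new FileReader("A.txt"), 1 << 20);
             BufferedWriter even = new BufferedWriter(new FileWriter("B.txt"), 1 << 20);
             BufferedWriter odd = new BufferedWriter(new FileWriter("C.txt"), 1 << 20)) {
            String line;
            while ((line = br.readLine()) != null) {
                // Decide parity from the last digit instead of parsing the whole number
                char last = line.charAt(line.length() - 1);
                BufferedWriter out = ((last - '0') % 2 == 0) ? even : odd;
                out.write(line);
                out.newLine();
            }
        }
        System.out.println("Took " + (System.currentTimeMillis() - start) + " ms");
    }
}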
Try memory-mapped files:
byte[] buffer = "foo bar foo bar text\n".getBytes();
int number_of_lines = 100000000;
FileChannel file = new RandomAccessFile("writeFIle.txt", "rw").getChannel();
ByteBuffer wrBuf = file.map(FileChannel.MapMode.READ_WRITE, 0, buffer.length * number_of_lines);
for (int i = 0; i < number_of_lines; i++)
{
wrBuf.put(buffer);
}
file.close();
On my computer (Dell, i7 processor, SSD, 32 GB RAM) this code took a little over half a minute to run.
I implemented a TCP client-server model to test my bandwidth to the server by sending a number of packets of different sizes, measuring the RTT for each, and then estimating the bandwidth via linear regression.
Here is the server code:
import java.io.*;
import java.net.*;
public class Server implements Runnable {
    ServerSocket welcomeSocket;
    String clientSentence;
    Thread thread;
    Socket connectionSocket;
    BufferedReader inFromClient;
    DataOutputStream outToClient;

    public Server() throws IOException {
        welcomeSocket = new ServerSocket(6588);
        connectionSocket = welcomeSocket.accept();
        inFromClient = new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
        outToClient = new DataOutputStream(connectionSocket.getOutputStream());
        thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        while (true) {
            try {
                clientSentence = inFromClient.readLine();
                if (clientSentence != null) {
                    System.out.println("Received: " + clientSentence);
                    outToClient.writeBytes(clientSentence + '\n');
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        new Server();
    }
}
And this is the method in the Client class that return an array of the RTT by each packet
public int[] getResponseTime() throws UnknownHostException, IOException {
    timeArray = new int[sizes.length];
    for (int i = 0; i < sizes.length; i++) {
        sentence = StringUtils.leftPad("", sizes[i], '*');
        long start = System.nanoTime();
        outToServer.writeBytes(sentence + '\n');
        modifiedSentence = inFromServer.readLine();
        long end = System.nanoTime();
        System.out.println("FROM SERVER: " + modifiedSentence);
        timeArray[i] = (int) (end - start);
        simpleReg.addData(timeArray[i] * Math.pow(10, -9), sizes[i] * 2); // each char is 2 bytes
    }
    return timeArray;
}
When I compute the slope, the estimated bandwidth comes out in the order of kilobytes per second, even though client and server are on the same network and the bandwidth should be much higher. What am I doing wrong?
Are you obliged to use linear regression, or could it be a different estimator? I am actually not sure linear regression is the best approach here. I am curious: do you happen to know any sources that suggest using it in this kind of situation?
Note that the initial BW measurements in particular are much smaller than the real maximal goodput (due to TCP slow-start), so it is important to use an estimator that copes with large outliers.
In previous work I have used the harmonic mean to monitor the bandwidth over a longer period of time, and it worked pretty well (also on links with a large bandwidth). The advantage of the harmonic mean over other means is that, while it is still very easy to compute, it mitigates the impact of large outliers, so the estimate is not as easily skewed.
Given a series of bandwidth measurements R_1, R_2, ..., the harmonic mean can be maintained incrementally. If R_total is the harmonic mean of the first n measurements and R_(n+1) is the newest measurement, the updated mean is:
R_total' = (n + 1) / (n / R_total + 1 / R_(n+1))
It is also good practice to skip the first few measurement values (depending on how often you measure), e.g. R_0 .. R_5, since initial bursts caused by setup work in the different layers, and the slow-start phase itself, would otherwise skew the estimate.
Here is an example implementation in Java. Even though in this case the measurement is done through a file download, it can easily be applied to your environment too - simply use your echo server instead of the file download:
import java.io.InputStream;
import java.io.PrintWriter;
import java.net.Socket;

public class Estimator {
    private static double R;      // harmonic mean of all bandwidth measurements
    private static int n = 0;     // number of measurements
    private static int skips = 5; // skip measurements for first 5 socket.read() operations

    // size in bytes, start/end in ns
    public static double harmonicMean(long start, long end, double size) {
        // check if we need to skip this initial value, since it might falsify our estimate
        if (skips-- > 0) return 0;
        // get current value of R
        double curR = (size / (1024 * 1024)) / ((end - start) * Math.pow(10, -9));
        System.out.println(curR);
        if (n == 0) {
            // initial value
            R = curR;
        } else {
            // use harmonic mean
            R = (n + 1) / ((n / R) + (1 / curR));
        }
        n++;
        return R;
    }

    public static void main(String[] args) {
        // temporary buffer to hold bytes
        byte[] buffer = new byte[1024 * 1024 * 10]; // 10MB buffer - just in case ...
        Socket socket = null;
        try {
            // measurement done through file download from server
            // prepare request
            socket = new Socket("yourserver.com", 80);
            PrintWriter pw = new PrintWriter(socket.getOutputStream());
            InputStream is = socket.getInputStream();
            pw.println("GET /test_blob HTTP/1.1"); // a test file, e.g., 1MB big
            pw.println("Host: yourserver.com");
            pw.println("");
            pw.flush();

            // prepare measurement
            long start, end;
            double bytes = 0;
            double totalBytes = 0;
            start = System.nanoTime();
            while ((bytes = is.read(buffer)) != -1) {
                // socket.read() occurred -> calculate harmonic mean
                end = System.nanoTime();
                totalBytes += bytes;
                harmonicMean(start, end, totalBytes);
            }

            // clean up
            is.close();
            pw.close();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (socket != null) {
                try {
                    socket.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
        System.out.println(R + " MB/s");
    }
}
Additionally, for the sake of completeness: as I already mentioned in the comments, it is important that the test messages/files are big enough for TCP to reach the full goodput potential of the link.
Please also note that this is a simplified way to estimate the bandwidth. In this example we start measuring (take the first timestamp) when the request is sent, which means the link propagation and server processing delays are included and will reduce the overall estimate. Anyway, since you seem to be on a local network, I expect the sum of these delays to be rather small, so they should not skew the final estimate too much.
I wrote a small blog post concerning measuring TCP connection metrics inside an application layer. Everything is described in more detail there (though the code examples are in C).
Info
I'm trying to find a way to read blocks of data from an incoming socket stream at a set interval, but ignoring the rest of the data and not closing the connection between reads. I was wondering if anyone had some advice?
The reason I ask is that I have been given a network-connected analogue-to-digital converter (ADC) and I want to write a simple oscilloscope application.
Basically, once I connect to the ADC and send a few initialisation commands, it takes a few minutes to stabilise, at which point it starts throwing out measurements as a byte stream.
I want to read 1 MB of data every few seconds and discard the rest. If I don't discard the rest, the ADC buffers 512 kB of readings and then pauses, so any subsequent reads return old data. If I close the connection between reads, the ADC takes a while before it sends data again.
Problem
As a test I wrote a simple Python script that used a continuously running thread to read bytes into an unused buffer whenever a flag was set, and that seemed to work fine.
When I tried the same approach on Android I ran into problems: only some of the data seems to be discarded, and the ADC still pauses if the update interval is too long.
Where have I made the mistake(s)? My first guess is synchronisation, as I'm not sure it's working as intended (see the ThreadBucket class). I'll admit to spending many hours playing with this, trying different sync permutations, buffer sizes, BufferedInputStream and NIO, but with no luck.
Any input on this would be appreciated, I'm not sure if using a thread like this is the right way to go in Java.
Code
The Reader class sets up the thread, connects to the ADC, reads data on request and in between activates the bit bucket thread (I've omitted the initialisation and closing for clarity).
class Reader {
    private static final int READ_SIZE = 1024 * 1024;

    private String mServer;
    private int mPort;
    private Socket mSocket;
    private InputStream mIn;
    private ThreadBucket mThreadBucket;
    private byte[] mData = new byte[1];
    private final byte[] mBuffer = new byte[READ_SIZE];

    Reader(String server, int port) {
        mServer = server;
        mPort = port;
    }

    void setup() throws IOException {
        mSocket = new Socket(mServer, mPort);
        mIn = mSocket.getInputStream();
        mThreadBucket = new ThreadBucket(mIn);
        mThreadBucket.start();

        // Omitted: send a few init commands and look at the response

        // Start discarding data
        mThreadBucket.bucket(true);
    }

    private int readRaw(int samples) throws IOException {
        int current = 0;

        // Probably fixed size but may change
        if (mData.length != samples)
            mData = new byte[samples];

        // Stop discarding data
        mThreadBucket.bucket(false);

        // Read in number of samples to mData
        while (current < samples) {
            int len = mIn.read(mBuffer);
            if (current > samples)
                current = samples;
            if (current + len > samples)
                len = samples - current;
            System.arraycopy(mBuffer, 0, mData, current, len);
            current += mBuffer.length;
        }

        // Discard data again until the next read
        mThreadBucket.bucket(true);
        return current;
    }
}
The ThreadBucket class runs continuously, slurping data into the bit bucket while mBucket is true.
The synchronisation is meant to stop either thread from reading data whilst the other one is.
public class ThreadBucket extends Thread {
    private static final int BUFFER_SIZE = 1024;

    private final InputStream mIn;
    private Boolean mBucket = false;
    private boolean mCancel = false;

    public ThreadBucket(final InputStream in) throws IOException {
        mIn = in;
    }

    @Override
    public void run() {
        while (!mCancel && !Thread.currentThread().isInterrupted()) {
            synchronized (this) {
                if (mBucket) {
                    try {
                        mIn.skip(BUFFER_SIZE);
                    } catch (final IOException e) {
                        break;
                    }
                }
            }
        }
    }

    public synchronized void bucket(final boolean on) {
        mBucket = on;
    }

    public void cancel() {
        mCancel = true;
    }
}
Thank you.
You need to read continuously, period, as fast as you can code it, and then manage what you do with the data separately. Don't mix the two up.
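As a sketch of that idea (not tested against your ADC, and the class and method names here are invented for illustration): one thread always drains the socket as fast as it can into fixed 1 MB blocks, publishes only the most recent complete block, and the sampling code grabs whatever block is current every few seconds; everything else is effectively discarded because it gets overwritten.

import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: drain the stream continuously; consumers grab the latest block on demand.
class SnapshotReader extends Thread {
    private static final int BLOCK_SIZE = 1024 * 1024; // 1 MB blocks, as in the question
    private final InputStream in;
    private final AtomicReference<byte[]> latest = new AtomicReference<>();
    private volatile boolean cancelled = false;

    SnapshotReader(InputStream in) {
        this.in = in;
    }

    @Override
    public void run() {
        byte[] block = new byte[BLOCK_SIZE];
        int filled = 0;
        try {
            while (!cancelled) {
                int len = in.read(block, filled, BLOCK_SIZE - filled);
                if (len < 0) break;            // stream ended
                filled += len;
                if (filled == BLOCK_SIZE) {    // a full block is ready: publish it, start a new one
                    latest.set(block);
                    block = new byte[BLOCK_SIZE];
                    filled = 0;
                }
            }
        } catch (IOException e) {
            // connection dropped; let the thread end
        }
    }

    // Called from the UI/processing thread every few seconds; may return null before the first block.
    byte[] snapshot() {
        return latest.getAndSet(null);
    }

    void cancel() {
        cancelled = true;
    }
}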
I want to play streaming media received from an internet service. The media player works fine, but playback is sometimes interrupted due to a poor download rate.
On receiving media data I run a thread that does decoding and other manipulations; the abstract code looks like this:
private void startConsuming(final InputStream input) {
consumingThread = new Thread() {
public void run() {
runConsumingThread(input);
}
};
consumingThread.start();
}
My idea is to calculate the buffer size needed to prevent interruption, and to start media playback once the buffer is filled (or, of course, once the stream ends).
private void startConsuming(final InputStream input) {
consumingThread = new Thread() {
public void run() {
runConsumingThread(input);
}
};
Thread fillBufferThread = new Thread() {
public void run() {
try {
while(input.available() < RECEIVING_BUFFER_SIZE_BYTES) {
log.debug("available bytes: " + input.available());
sleep(20);
}
} catch (Exception ex) {
// ignore
}
consumingThread.start();
}
};
fillBufferThread.start();
}
While debugging I continuously get "available bytes: 0" while the stream is arriving, and the while loop never breaks. I have already realised that an EOFException will of course not occur, since I never read from the InputStream.
How can I handle this? I thought input.available() would increase as data arrives.
Why does runConsumingThread(input) work correctly in nearly the same manner, while my while loop in fillBufferThread does not?
EDIT: The following code nearly works (except that it consumes the input stream, which then cannot be played in consumingThread; that should be easy to solve), but there must be a smarter solution.
[...]
Thread fillBufferThread = new Thread() {
    public void run() {
        final DataInputStream dataInput = new DataInputStream(input);
        try {
            int bufferSize = 0;
            byte[] localBuffer = new byte[RECEIVING_BUFFER_SIZE_BYTES];
            while (bufferSize < RECEIVING_BUFFER_SIZE_BYTES) {
                int len = dataInput.readInt();
                if (len > localBuffer.length) {
                    if (D) log.debug("increasing buffer length: " + len);
                    localBuffer = new byte[len];
                }
                bufferSize += len;
                log.debug("available bytes: " + bufferSize);
                dataInput.readFully(localBuffer, 0, len);
            }
            consumingThread.start();
        } catch (Exception ex) {
            // ignore
        }
    }
};
[...]
It can't be efficient to have to read from the stream just to find out whether enough bytes have arrived, can it?
I am working on a TFTP server application. I managed a successful file transfer from server to client; however, the other direction is bugged.
Instead of transmitting the entire file, the client simply terminates, with the compiler reporting no errors. The debugger shows an IndexOutOfBoundsException (IOBE) on the marked code, indicating that the array access is out of range.
The whole transfer process goes like so:
Client transmits a file name and the requested operation, WRQ - Write Request.
Server receives the packet and determines the operation; if it is WRQ, it gives the new file an appropriate name.
Server now executes receiveData() until it gets a packet < 512 bytes, indicating EOT.
Client keeps transferring data it read from the file.
Key code:
Client:
private void sendWRQ() throws Exception {
    String rrq = "WRQ-" + data;
    outgoingData = rrq.getBytes();
    DatagramPacket output = new DatagramPacket(outgoingData, outgoingData.length, serverAddress, serverPort);
    clientSocket.send(output);
    //Thread.sleep(50);
    sendData();
}

byte[] outgoingData = new byte[512];

private void sendData() throws Exception {
    DatagramPacket dataTransfer = new DatagramPacket(outgoingData, outgoingData.length, serverAddress, serverPort);
    InputStream fis = new FileInputStream(new File(data));
    int x;
    while ((x = fis.read(outgoingData, 0, 512)) != -1) // << debugger reports the IOBE here
    {
        dataTransfer.setLength(x);
        clientSocket.send(dataTransfer);
        Thread.sleep(5);
    }
    fis.close();
}
Server:
private void listen() throws Exception {
    DatagramPacket incTransfer = new DatagramPacket(incomingData, incomingData.length);
    serverSocket.receive(incTransfer);
    clientAddress = incTransfer.getAddress();
    clientPort = incTransfer.getPort();
    String output = new String(incTransfer.getData());
    if (output.substring(0, 3).equals("RRQ")) {
        File test = new File(output.substring(4));
        responseData = output.substring(4);
        if (test.exists()) {
            sendResponse("Y");
        } else {
            sendResponse("N");
        }
    } else if (output.substring(0, 3).equals("WRQ")) {
        File test = new File(output.substring(4));
        if (test.exists()) {
            Calendar cal = Calendar.getInstance();
            SimpleDateFormat prefix = new SimpleDateFormat(date_format);
            String date = prefix.format(cal.getTime()).toString();
            responseData = date + output.substring(4);
            receiveData();
        } else {
            responseData = output.substring(4);
            receiveData();
        }
    }
}

private void receiveData() throws Exception {
    DatagramPacket receiveData = new DatagramPacket(incomingData, incomingData.length);
    OutputStream fos = new FileOutputStream(new File(responseData));
    while (true) {
        serverSocket.receive(receiveData);
        if (receiveData.getLength() == 512) {
            fos.write(receiveData.getData());
        } else {
            fos.write(receiveData.getData(), receiveData.getOffset(), receiveData.getLength());
            break;
        }
    }
    fos.close();
}
The only way that can happen is if the offset or length parameters violate the constraints specified for InputStream.read(byte[], int, int); in this case the buffer probably isn't 512 bytes long. There's no need to specify the second and third parameters here, just omit them: the call then becomes read(buffer, 0, buffer.length) internally, which can't be wrong.
Okay, the way this is coded, the 'outgoingData' field is:
1) Initialized to a length of 512
2) Then, in sendWRQ(), 'outgoingData' is re-initialized to whatever rrq.getBytes() sends back.
3) Then, in sendData(), 'outgoingData' is used as the intermediate buffer to read data from file and put it in the dataTransfer object.
However, since 'outgoingData' is re-initialized in step #2, the assumption in step #3 that 'outgoingData' is still 512 bytes in length is false.
So while EJP was correct in saying that using read(outgoingData, 0, outgoingData.length) will work, there are some architecture issues that, if you address them, will clean up a lot of potential errors.
For instance:
With the code provided, there is seemingly no reason to have outgoingData declared at the class level and shared between two methods. Depending on the rest of the app, this could end up being a threading issue.
Perhaps byte[] buffer = rrq.getBytes(); in sendWRQ() and byte[] buffer = new byte[1024]; in sendData().
Also, the 'data' field is at the class level... for what reason? It might be easier to control if it were passed in as a parameter.
Lastly, I've had good luck using a do {} while () loop in network situations. It ensures that send() gets at least one chance to send data, and it keeps the code a bit more readable; see the sketch below.
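Putting those suggestions together, a reworked sendData() might look roughly like the sketch below. It is only an illustration of the local-buffer and do/while ideas under the question's assumptions (the data, serverAddress, serverPort and clientSocket fields exist on the class); it is not a drop-in TFTP implementation.

private void sendData() throws Exception {
    byte[] buffer = new byte[512];               // local buffer, no shared class-level state
    try (InputStream fis = new FileInputStream(new File(data))) {
        int x;
        do {
            x = fis.read(buffer);                // read(buffer, 0, buffer.length) internally
            if (x == -1) {
                x = 0;                           // empty final packet still signals EOT (< 512 bytes)
            }
            DatagramPacket dataTransfer =
                    new DatagramPacket(buffer, x, serverAddress, serverPort);
            clientSocket.send(dataTransfer);
        } while (x == 512);                      // a packet shorter than 512 bytes ends the transfer
    }
}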