Reuse threads to download the next segment of a file - Java

I am looking for possible methods that can increase download speed and improve CPU and memory performance. Currently I am downloading the file in segments and transferring data using the Java NIO transferFrom function.
public void startDownload() {
threadService.execute(() -> {
double currentBytes = bytesDone.doubleValue();
//Download each segment independently.
for (int i = 0; i < segments; i++) {
if (intialState[i] != -1) {
threadService.execute(new Segment((i * sizeOfEachSegment)
+ intialState[i], (i + 1) * sizeOfEachSegment, i));
}
}
if (intialState[segments] != -1) {
threadService.execute(new Segment((segments * sizeOfEachSegment)
+ intialState[segments], sizeofFile, segments));
}
// Keep saving states of threads. And updating speed.
while (bytesDone.get() < sizeofFile) {
for (int i = 0; i < 1; i++) {
try {
Thread.sleep(5000);
} catch (InterruptedException ex) {
System.out.println("thread interupted while sleeping");
}
System.out.println(speed
= (int) ((bytesDone.doubleValue() - currentBytes) / 5120));
currentBytes = bytesDone.doubleValue();
avgSpeed[0] += speed;
avgSpeed[1]++;
}
states.saveState(stateArray, currentState);
}
// Download Complete.
try {
fileChannel.close();
file.close();
} catch (IOException ex) {
System.out.println("failed to close file");
}
currentState.set(2);
states.saveState(stateArray, currentState);
System.out.println("Alhamdullilah Done :)");
System.out.println("Average Speed : " + avgSpeed[0] / avgSpeed[1]);
});
}
public class Segment implements Runnable {
long start;
long end;
long delta;
int name;
public Segment(long start, long end, int name) {
this.start = start;
this.end = end;
this.name = name;
}
@Override
public void run() {
try {
HttpGet get = new HttpGet(uri);
// Range header for defining which segment of file we want to receive.
String byteRange = start + "-" + end;
get.setHeader("Range", "bytes=" + byteRange);
try (CloseableHttpResponse response = client.execute(get)) {
ReadableByteChannel inputChannel = Channels.newChannel(
response.getEntity().getContent());
while (start < end && currentState.get() == 1) {
delta = fileChannel.transferFrom(inputChannel, start, 8192);
start += delta;
bytesDone.addAndGet(delta);
stateArray.set(name, start);
}
stateArray.set(name, -1);
}
System.out.println("Thread done: " + name);
} catch (IOException ex) {
System.out.println("thread " + name + " failed to download");
}
}
}
This implementation gives 400+ KB/s, but Internet Download Manager downloads the same file at 500+ KB/s.
Are there any resources I can reuse? I noticed that every connection initially takes time to reach its maximum speed, so is there any way I can reuse the same thread (and connection) to download the next portion of the file as soon as it finishes the previous one?
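For illustration, one direction would be to share a pooled Apache HttpClient across all segments, so a worker thread that picks up the next Segment can reuse an already warmed-up keep-alive connection instead of opening a new one. This is only a rough sketch, not the code above; the field names client and threadService are assumptions mirroring the snippet:
// Rough sketch: one shared, pooled HttpClient plus a fixed thread pool.
// Keep-alive connections are returned to the pool when a response is
// closed and handed to the next request on the same route, so the
// per-segment TCP "warm-up" cost is mostly avoided.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class DownloadContext {
    static final int WORKERS = 8; // number of parallel segments (assumed)

    static final PoolingHttpClientConnectionManager connectionPool =
            new PoolingHttpClientConnectionManager();
    static final CloseableHttpClient client;
    static {
        connectionPool.setMaxTotal(WORKERS);
        connectionPool.setDefaultMaxPerRoute(WORKERS);
        client = HttpClients.custom().setConnectionManager(connectionPool).build();
    }

    // A fixed pool already reuses threads: as soon as one Segment finishes,
    // the same thread takes the next queued Segment.
    static final ExecutorService threadService = Executors.newFixedThreadPool(WORKERS);
}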

Related

Is there a way to create a player-specific countdown that stops when the player leaves, in Java?

I'm looking for a way to create a player-specific countdown for my BankSystem Plugin in Java.
Currently everybody gets their interest at the same time, because I'm using a Bukkit scheduler.
Bukkit.getScheduler().scheduleSyncRepeatingTask(Main.getPlugin(), new Runnable() {
@Override
public void run() {
try {
Statement stmt = DatabaseManager.getCon().createStatement();
String sql = ("SELECT uuid, money FROM Accounts");
stmt.executeUpdate("USE " + ConfigManager.getConf().getString("Database.DBName"));
ResultSet rs = stmt.executeQuery(sql);
while (rs.next()) {
uids.add(rs.getString(1));
money.put(rs.getString(1), rs.getInt(2));
}
if (!ConfigManager.getConf().getBoolean("Settings.PayInterestOffline")) {
try {
for (String uid : uids) {
Player pl = Bukkit.getPlayer(UUID.fromString(uid));
if (pl == null) {
uids.remove(IndexIdentifier.getIndex(uid, uids));
money.remove(uid);
}
}
} catch (Exception e) {
}
}
for (int i = 0; i < uids.size(); i++) {
try {
String puid = uids.get(i);
double doubleMoney = money.get(puid);
if (doubleMoney > ConfigManager.getConf().getInt("Settings.MaximumMoney")) {
continue;
} else {
doubleMoney = (((doubleMoney / 100) * percent) + doubleMoney);
int intMoney = (int) Math.ceil(doubleMoney);
stmt.executeUpdate("UPDATE Accounts SET money = " + intMoney + " WHERE uuid = '" + puid + "';");
Player p = Bukkit.getPlayer(UUID.fromString(puid));
if (p.isOnline() && p != null) {
p.sendMessage(
"§aYou've credited an interest of §6" + (int) Math.ceil((intMoney / 100) * percent)
+ ".0 " + ConfigManager.getConf().getString("Settings.currency"));
}
}
money.remove(puid);
uids.remove(i);
} catch (NullPointerException e) {
}
}
} catch (SQLException e) {
e.printStackTrace();
}
}
}, 0, period);
Is there a way to create a countdown for every online player, such that the countdown stops when the player leaves the server and resumes after rejoining?
You can associate an integer with a player in a hashmap:
HashMap<UUID, Integer> playersAndTimes = new HashMap<>();
To add players to the HashMap when you want to start the countdown:
playersAndTimes.put(player.getUniqueId(), time)
Now you just need to run this task when the plugin enables. It loops through every online player; if they are in the HashMap (i.e. have a countdown on them), it subtracts 1 from their value every second:
Bukkit.getScheduler().scheduleSyncRepeatingTask(Main.getPlugin(Main.class), new Runnable() {
@Override
public void run() {
for (Player player : Bukkit.getOnlinePlayers()) {
if (playersAndTimes.containsKey(player.getUniqueId())) {
if (playersAndTimes.get(player.getUniqueId()) >= 1) {
playersAndTimes.put(player.getUniqueId(), playersAndTimes.get(player.getUniqueId()) - 1);
} else {
//The Player's Time Has Expired As The Number Associated With Their UUID In The Hashmap Is Now Equal To 0.
//DO SOMETHING
}
}
}
}
}, 0, 20);
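One detail to watch: once a countdown reaches 0, the else branch above fires on every tick unless the entry is removed. A small sketch of that branch, where payInterest is a placeholder for whatever the payout logic is, not an existing method:
if (playersAndTimes.get(player.getUniqueId()) >= 1) {
    playersAndTimes.put(player.getUniqueId(), playersAndTimes.get(player.getUniqueId()) - 1);
} else {
    // Remove the entry first so the expiry only triggers once.
    playersAndTimes.remove(player.getUniqueId());
    payInterest(player); // placeholder for the actual payout
}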

How to get the current CPU temperature programmatically on all Android versions?

I am using this code to get the current CPU temperature:
private float getCurrentCPUTemperature() {
String file = readFile("/sys/devices/virtual/thermal/thermal_zone0/temp", '\n');
if (file != null) {
return Long.parseLong(file);
} else {
return Long.parseLong(batteryTemp + " " + (char) 0x00B0 + "C");
}
}
private byte[] mBuffer = new byte[4096];
@SuppressLint("NewApi")
private String readFile(String file, char endChar) {
// Permit disk reads here, as /proc/meminfo isn't really "on
// disk" and should be fast. TODO: make BlockGuard ignore
// /proc/ and /sys/ files perhaps?
StrictMode.ThreadPolicy savedPolicy = StrictMode.allowThreadDiskReads();
FileInputStream is = null;
try {
is = new FileInputStream(file);
int len = is.read(mBuffer);
is.close();
if (len > 0) {
int i;
for (i = 0; i < len; i++) {
if (mBuffer[i] == endChar) {
break;
}
}
return new String(mBuffer, 0, i);
}
} catch (java.io.FileNotFoundException e) {
} catch (java.io.IOException e) {
} finally {
if (is != null) {
try {
is.close();
} catch (java.io.IOException e) {
}
}
StrictMode.setThreadPolicy(savedPolicy);
}
return null;
}
and I use it like this:
float cpu_temp = getCurrentCPUTemperature();
txtCpuTemp.setText(cpu_temp + " " + (char) 0x00B0 + "C");
It works like a charm on Android M and below, but on Android N and above (7, 8, 9) it does not work and shows the temperature like this:
57.0 on Android 6 and below (6, 5, 4)
57000.0 on Android 7 and above (7, 8, 9)
I tried this code too:
if (Build.VERSION.SDK_INT > Build.VERSION_CODES.M) {
txtCpuTemp.setText((cpu_temp / 1000) + " " + (char) 0x00B0 + "C");
}
but it does not work :(
How can I get the temperature on all Android versions?
UPDATE:
I changed the code like this, and it works on some devices except Samsung:
float cpu_temp = getCurrentCPUTemperature();
txtCpuTemp.setText(cpu_temp + " " + (char) 0x00B0 + "C");
if (Build.VERSION.SDK_INT > Build.VERSION_CODES.M) {
txtCpuTemp.setText(cpu_temp / 1000 + " " + (char) 0x00B0 + "C");
}
Divide the value by 1000 on newer API levels:
float cpu_temp = getCurrentCPUTemperature();
if(Build.VERSION.SDK_INT > Build.VERSION_CODES.M) {
cpu_temp = cpu_temp / 1000;
}
I'd just wonder where batteryTemp comes from and how it is supposed to be related to the CPU.
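Alternatively, since many devices report the /sys thermal zone value in millidegrees regardless of Android version, one option is to normalize by magnitude rather than by SDK level. This is an assumption about the driver, not something every vendor guarantees:
// Sketch: treat any reading above 1000 as millidegrees Celsius.
// This matches common thermal_zone drivers but is an assumption.
private float normalizeCpuTemp(float raw) {
    return raw >= 1000f ? raw / 1000f : raw;
}

// usage
float cpuTemp = normalizeCpuTemp(getCurrentCPUTemperature());
txtCpuTemp.setText(cpuTemp + " " + (char) 0x00B0 + "C");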

How can I improve the execution time? And is there any better way to read this file?

I am trying to split a text file with multiple threads. The file is 1 GB and I am reading it character by character. The execution time is 24 min 54 s. Is there a better way to read the file than char by char, so I can reduce the execution time?
I'm having a hard time figuring out an approach that will reduce the execution time. Please also suggest any other, better way to split a file with multiple threads. I am very new to Java.
Any help will be appreciated. :)
public static void main(String[] args) throws Exception {
RandomAccessFile raf = new RandomAccessFile("D:\\sample\\file.txt", "r");
long numSplits = 10;
long sourceSize = raf.length();
System.out.println("file length:" + sourceSize);
long bytesPerSplit = sourceSize / numSplits;
long remainingBytes = sourceSize % numSplits;
int maxReadBufferSize = 9 * 1024;
List<String> filePositionList = new ArrayList<String>();
long startPosition = 0;
long endPosition = bytesPerSplit;
for (int i = 0; i < numSplits; i++) {
raf.seek(endPosition);
String strData = raf.readLine();
if (strData != null) {
endPosition = endPosition + strData.length();
}
String str = startPosition + "|" + endPosition;
if (sourceSize > endPosition) {
startPosition = endPosition;
endPosition = startPosition + bytesPerSplit;
} else {
break;
}
filePositionList.add(str);
}
for (int i = 0; i < filePositionList.size(); i++) {
String str = filePositionList.get(i);
String[] strArr = str.split("\\|");
String strStartPosition = strArr[0];
String strEndPosition = strArr[1];
long startPositionFile = Long.parseLong(strStartPosition);
long endPositionFile = Long.parseLong(strEndPosition);
MultithreadedSplit objMultithreadedSplit = new MultithreadedSplit(startPositionFile, endPositionFile);
objMultithreadedSplit.start();
}
long endTime = System.currentTimeMillis();
System.out.println("It took " + (endTime - startTime) + " milliseconds");
}
}
public class MultithreadedSplit extends Thread {
public static String filePath = "D:\\tenlakh\\file.txt";
private int localCounter = 0;
private long start;
private long end;
public static String outPath;
List<String> result = new ArrayList<String>();
public MultithreadedSplit(long startPos, long endPos) {
start = startPos;
end = endPos;
}
@Override
public void run() {
try {
String threadName = Thread.currentThread().getName();
long currentTime = System.currentTimeMillis();
RandomAccessFile file = new RandomAccessFile("D:\\sample\\file.txt", "r");
String outFile = "out_" + threadName + ".txt";
System.out.println("Thread Reading started for start:" + start + ";End:" + end+";threadname:"+threadName);
FileOutputStream out2 = new FileOutputStream("D:\\sample\\" + outFile);
file.seek(start);
int nRecordCount = 0;
char c = (char) file.read();
StringBuilder objBuilder = new StringBuilder();
int nCounter = 1;
while (c != -1) {
objBuilder.append(c);
// System.out.println("char-->" + c);
if (c == '\n') {
nRecordCount++;
out2.write(objBuilder.toString().getBytes());
objBuilder.delete(0, objBuilder.length());
//System.out.println("--->" + nRecordCount);
// break;
}
c = (char) file.read();
nCounter++;
if (nCounter > end) {
break;
}
}
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
The fastest way would be to map the file into memory segment by segment (mapping a large file as a whole may cause undesired side effects). This skips a few relatively expensive copy operations: the operating system loads the file into RAM and the JRE exposes it to your application as a view into an off-heap memory area in the form of a ByteBuffer. It will usually let you squeeze out the last 2x-3x of performance.
The memory-mapped way requires quite a bit of helper code (see the fragment at the bottom), so it's not always the best tactical choice. Instead, if your input is line-based and you just need reasonable performance (what you have now probably isn't), just do something like:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
...
Files.lines(Paths.get("/path/to/the/file"), StandardCharsets.ISO_8859_1)
// .parallel() // parallel processing is still possible
.forEach(line -> { /* your code goes here */ });
For contrast, a working example of code that processes the file via memory mapping would look something like the fragment below. In the case of fixed-size records (when segments can be chosen to match record boundaries precisely), subsequent segments can be processed in parallel.
static ByteBuffer mapFileSegment(FileChannel fileChannel, long fileSize, long regionOffset, long segmentSize) throws IOException {
long regionSize = Math.min(segmentSize, fileSize - regionOffset);
// small last region prevention
final long remainingSize = fileSize - (regionOffset + regionSize);
if (remainingSize < segmentSize / 2) {
regionSize += remainingSize;
}
return fileChannel.map(FileChannel.MapMode.READ_ONLY, regionOffset, regionSize);
}
...
final ToIntFunction<ByteBuffer> consumer = ...
try (FileChannel fileChannel = FileChannel.open(Paths.get("/path/to/file"), StandardOpenOption.READ)) {
final long fileSize = fileChannel.size();
long regionOffset = 0;
while (regionOffset < fileSize) {
final ByteBuffer regionBuffer = mapFileSegment(fileChannel, fileSize, regionOffset, segmentSize);
while (regionBuffer.hasRemaining()) {
final int usedBytes = consumer.applyAsInt(regionBuffer);
if (usedBytes == 0)
break;
}
regionOffset += regionBuffer.position();
}
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}

Using Kafka low-level API, should I commit the offset when finished fetching data?

public void run() {
// find the meta data about the topic and partition we are interested in
PartitionMetadata metadata = findLeader(a_seedBrokers, a_port, a_topic, a_partition);
if (metadata == null) {
System.out.println("Can't find metadata for Topic and Partition. Exiting");
return;
}
if (metadata.leader() == null) {
System.out.println("Can't find Leader for Topic and Partition. Exiting");
return;
}
String leadBroker = metadata.leader().host();
String clientName = "Client_" + a_topic + "_" + a_partition;
SimpleConsumer consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
long readOffset = getLastOffset(consumer,a_topic, a_partition, kafka.api.OffsetRequest.EarliestTime(), clientName);
//long readOffset = getLastOffset(consumer,a_topic, a_partition, kafka.api.OffsetRequest.LatestTime(), clientName);
int numErrors = 0;
while (a_maxReads > 0) {
if (consumer == null) {
consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
}
FetchRequest req = new FetchRequestBuilder()
.clientId(clientName)
.addFetch(a_topic, a_partition, readOffset, 100000) // Note: this fetchSize of 100000 might need to be increased if large batches are written to Kafka
.build();
FetchResponse fetchResponse = consumer.fetch(req);
if (fetchResponse.hasError()) {
numErrors++;
// Something went wrong!
short code = fetchResponse.errorCode(a_topic, a_partition);
System.out.println("Error fetching data from the Broker:" + leadBroker + " Reason: " + code);
if (numErrors > 5) break;
if (code == ErrorMapping.OffsetOutOfRangeCode()) {
// We asked for an invalid offset. For simple case ask for the last element to reset
readOffset = getLastOffset(consumer,a_topic, a_partition, kafka.api.OffsetRequest.LatestTime(), clientName);
continue;
}
consumer.close();
consumer = null;
try {
leadBroker = findNewLeader(leadBroker, a_topic, a_partition, a_port);
} catch (Exception e) {
e.printStackTrace();
}
continue;
}
numErrors = 0;
long numRead = 0;
for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(a_topic, a_partition)) {
long currentOffset = messageAndOffset.offset();
if (currentOffset < readOffset) {
System.out.println("Found an old offset: " + currentOffset + " Expecting: " + readOffset);
continue;
}
readOffset = messageAndOffset.nextOffset();
ByteBuffer payload = messageAndOffset.message().payload();
byte[] bytes = new byte[payload.limit()];
payload.get(bytes);
try {
dataPoints.add(simpleAPIConsumer.parse(simpleAPIConsumer.deserializing(bytes)));//add data to List
} catch (Exception e) {
e.printStackTrace();
}
numRead++;
a_maxReads--;
}
if (numRead == 0) {
try {
Thread.sleep(1000);
} catch (InterruptedException ie) {
}
}
}
simpleAPIConsumer.dataHandle(dataPoints); // handle the collected data
if (consumer != null) consumer.close();
}
I found this method in Kafka source. Should I use it?
/**
* Commit offsets for a topic to Zookeeper
* @param request a [[kafka.javaapi.OffsetCommitRequest]] object.
* @return a [[kafka.javaapi.OffsetCommitResponse]] object.
*/
def commitOffsets(request: kafka.javaapi.OffsetCommitRequest):kafka.javaapi.OffsetCommitResponse = {
import kafka.javaapi.Implicits._
underlying.commitOffsets(request.underlying)
}
The purpose of committing an offset after every fetch is to achieve exactly-once message processing.
You need to make sure that you commit the offset once you have processed the message (where "process" means whatever you do with a message after you pull it out of Kafka), as if you were wrapping message processing and the offset commit into a transaction where either both succeed or both fail.
This way, if your client crashes, you'll be able to start from the correct offset after you restart.
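For reference, if the newer org.apache.kafka.clients.consumer.KafkaConsumer API is an option instead of SimpleConsumer, the process-then-commit ordering looks roughly like the sketch below; process() is a placeholder for your handling, not an existing method:
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

void pollLoop(KafkaConsumer<byte[], byte[]> consumer) {
    while (true) {
        ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<byte[], byte[]> record : records) {
            process(record); // placeholder: whatever you do with the message
        }
        // Commit only after the batch was processed successfully, so a crash
        // before this line means the batch is re-read on restart.
        consumer.commitSync();
    }
}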

Download a Large Number of Files Using the Java SDK for Amazon S3 Bucket

I have a large number of files that need to be downloaded from an S3 bucket. My problem is similar to this article except I am trying to run it in Java.
public static void main(String args[]) {
AWSCredentials myCredentials = new BasicAWSCredentials("key","secret");
TransferManager tx = new TransferManager(myCredentials);
File file = <thefile>
try{
MultipleFileDownload myDownload = tx.downloadDirectory("<bucket>", null, file);
System.out.println("Transfer: " + myDownload.getDescription());
System.out.println(" - State: " + myDownload.getState());
System.out.println(" - Progress: " + myDownload.getProgress().getBytesTransfered());
while (myDownload.isDone() == false) {
System.out.println("Transfer: " + myDownload.getDescription());
System.out.println(" - State: " + myDownload.getState());
System.out.println(" - Progress: " + myDownload.getProgress().getBytesTransfered());
try {
// Do work while we wait for our upload to complete...
Thread.sleep(500);
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
} catch(Exception e){
e.printStackTrace();
}
}
This was adapted from the TransferManager class example for multiple-file upload. There are well over 100,000 objects in this bucket. Any help would be great.
Please use the list() method to get a list of your files, then use the get() method to get each file.
class S3 extends AmazonS3Client {
final String bucket;
S3(String u, String p, String Bucket) {
super(new BasicAWSCredentials(u, p));
bucket = Bucket;
}
String get(String k) {
try {
final S3Object f = getObject(bucket, k);
final BufferedInputStream i = new BufferedInputStream(f.getObjectContent());
final StringBuilder s = new StringBuilder();
final byte[] b = new byte[1024];
for (int n = i.read(b); n != -1; n = i.read(b)) {
s.append(new String(b, 0, n));
}
return s.toString();
} catch (Exception e) {
log("Cannot get " + bucket + "/" + k + " from S3 because " + e);
}
return null;
}
String[] list(String d) {
try {
final ObjectListing l = listObjects(bucket, d);
final List<S3ObjectSummary> L = l.getObjectSummaries();
final int n = L.size();
final String[] s = new String[n];
for (int i = 0; i < n; ++i) {
final S3ObjectSummary k = L.get(i);
s[i] = k.getKey();
}
return s;
} catch (Exception e) {
log("Cannot list " + bucket + "/" + d + " on S3 because " + e);
}
return new String[]{};
}
}
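One caveat with the list() helper above: listObjects() returns at most 1000 keys per call, so with 100,000+ objects the listing has to be paged. A sketch with the same SDK, where s3 is an instance of the class above and bucket/prefix are placeholders:
// Page through a truncated listing; each call returns up to 1000 keys.
List<String> keys = new ArrayList<>();
ObjectListing listing = s3.listObjects(bucket, prefix);
while (true) {
    for (S3ObjectSummary summary : listing.getObjectSummaries()) {
        keys.add(summary.getKey());
    }
    if (!listing.isTruncated()) {
        break;
    }
    listing = s3.listNextBatchOfObjects(listing);
}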
TransferManager internally uses a CountDownLatch, which makes me believe it does download concurrently (which seems like the right way to do it). Does it make sense to use it rather than getting one file after another sequentially?
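If you do keep TransferManager, it handles the concurrency itself, and rather than polling isDone() you can simply block until the whole directory download finishes. A minimal sketch (same myDownload object as in the question):
try {
    // Blocks the calling thread until every file in the directory download
    // has finished (or the transfer fails).
    myDownload.waitForCompletion();
} catch (InterruptedException ex) {
    Thread.currentThread().interrupt();
}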
