Java multithreading without main method

I am new to Java. I have a function that I want to execute with multithreaded behaviour. The problem is that I will be building the jar without a main method inside it. I just wanted to know: can we have a multithreaded function in Java without a class having a main method?
I have the following code, and I want this "myHandler" function to have multithreaded behaviour, so that whenever this function gets called, different threads execute it. Can you please help me make this code execute with multithreaded behaviour? Thank you
public String myHandler(KinesisEvent kinesisEvent, Context context)
{
    int singleRecord = 0;
    long starttime = System.currentTimeMillis();
    //LambdaLogger lambdaLogger=context.getLogger();
    for (KinesisEventRecord rec : kinesisEvent.getRecords())
    {
        singleRecord = 0;
        System.out.println("Kinesis Record inside is:" + new String(rec.getKinesis().getData().array()));
        //count++;
        singleRecord++;
        // System.out.println(new String(rec.getKinesis().getData().array()));
    }
    count = count + singleRecord;
    long endtime = System.currentTimeMillis();
    long totaltime = endtime - starttime;
    time = time + totaltime;
    System.out.println("Time required to execute single Lambda function for " + singleRecord + " records is" + " :: " + totaltime + " milliseconds");
    System.out.println("Total time required to execute Lambda function for " + count + " records is" + " :: " + time + " milliseconds");
    return null;
}

I'm not sure if that is exactly what you want, but you could do something like this:
public String myHandler(final KinesisEvent kinesisEvent, final Context context)
{
    Thread thread = new Thread(new Runnable() {
        @Override
        public void run() {
            int singleRecord = 0;
            long starttime = System.currentTimeMillis();
            //LambdaLogger lambdaLogger=context.getLogger();
            for (KinesisEventRecord rec : kinesisEvent.getRecords())
            {
                singleRecord = 0;
                System.out.println("Kinesis Record inside is:" + new String(rec.getKinesis().getData().array()));
                //count++;
                singleRecord++;
                // System.out.println(new String(rec.getKinesis().getData().array()));
            }
            count = count + singleRecord;
            long endtime = System.currentTimeMillis();
            long totaltime = endtime - starttime;
            time = time + totaltime;
            System.out.println("Time required to execute single Lambda function for " + singleRecord + " records is" + " :: " + totaltime + " milliseconds");
            System.out.println("Total time required to execute Lambda function for " + count + " records is" + " :: " + time + " milliseconds");
        }
    });
    thread.start();
    return null; // the method declares String as return type, so it still needs a return value
}
This code will start your code in a new thread once you call the method, but any parameters you want to use in the thread have to be declared final so that they are visible inside the anonymous implementation of Runnable.
Another solution would be to create a new class and extend the Thread class:
public class MyHandlerThread extends Thread {

    KinesisEvent kinesisEvent;
    Context context;

    public MyHandlerThread(KinesisEvent kinesisEvent, Context context) {
        super();
        this.kinesisEvent = kinesisEvent;
        this.context = context;
    }

    @Override
    public void run() {
        int singleRecord = 0;
        long starttime = System.currentTimeMillis();
        //LambdaLogger lambdaLogger=context.getLogger();
        for (KinesisEventRecord rec : kinesisEvent.getRecords()) {
            singleRecord = 0;
            System.out.println("Kinesis Record inside is:" + new String(rec.getKinesis().getData().array()));
            //count++;
            singleRecord++;
            // System.out.println(new String(rec.getKinesis().getData().array()));
        }
        count = count + singleRecord;
        long endtime = System.currentTimeMillis();
        long totaltime = endtime - starttime;
        time = time + totaltime;
        System.out.println("Time required to execute single Lambda function for " + singleRecord + " records is" + " :: " + totaltime + " milliseconds");
        System.out.println("Total time required to execute Lambda function for " + count + " records is" + " :: " + time + " milliseconds");
    }
}
In order to start this as a thread you have to create an instance of the object and call its start method.
MyHandlerThread thread = new MyHandlerThread(param1, param2);
thread.start();
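As a side note, if this handler is called frequently you might not want a brand-new Thread per call. Purely as a sketch (assuming Java 8+; the class name and the pool size of 4 are arbitrary illustrative choices), the same idea with a reusable ExecutorService looks like this:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HandlerWithPool {
    // reuse a small pool instead of spawning a new Thread on every call
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    public String myHandler(final KinesisEvent kinesisEvent, final Context context) {
        POOL.submit(() -> {
            // same record-processing loop as in the original handler
            for (KinesisEventRecord rec : kinesisEvent.getRecords()) {
                System.out.println("Kinesis Record inside is:" + new String(rec.getKinesis().getData().array()));
            }
        });
        return null;
    }
}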
Hope this helps (:

If the method should always be executed in a separate thread, you can create a Thread and call your code from that thread in the following way:
public String myHandler(final KinesisEvent kinesisEvent, final Context context) {
    new Thread(new Runnable() {
        public void run() {
            int singleRecord = 0;
            long starttime = System.currentTimeMillis();
            //LambdaLogger lambdaLogger=context.getLogger();
            for (KinesisEventRecord rec : kinesisEvent.getRecords()) {
                singleRecord = 0;
                System.out.println("Kinesis Record inside is:" + new String(rec.getKinesis().getData().array()));
                //count++;
                singleRecord++;
                // System.out.println(new String(rec.getKinesis().getData().array()));
            }
            count = count + singleRecord;
            long endtime = System.currentTimeMillis();
            long totaltime = endtime - starttime;
            time = time + totaltime;
            System.out.println("Time required to execute single Lambda function for " + singleRecord + " records is" + " :: " + totaltime + " milliseconds");
            System.out.println("Total time required to execute Lambda function for " + count + " records is" + " :: " + time + " milliseconds");
            // note: run() is void, so it must not return a value
        }
    }).start();
    return null; // the return belongs to myHandler itself, not to run()
}
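One caveat on both approaches above: the handler returns right after start(), so nothing waits for the background work. In an AWS Lambda handler in particular, the execution environment may be frozen once the handler returns, so a fire-and-forget thread is not guaranteed to finish. If the work must complete before returning, a minimal sketch is to join the thread:
Thread worker = new Thread(new Runnable() {
    public void run() {
        // record-processing loop goes here
    }
});
worker.start();
try {
    worker.join(); // block until the worker finishes before the handler returns
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}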

Related

Have I implemented deltatime correctly?

I am making a game in Java and wanted to implement a deltatime system. However, I am not sure if I have implemented it correctly. Is the way I have done it correct, or should I change it?
My code looks like this:
long oldtime = System.nanoTime();
while (true) {
    long newtime = System.nanoTime();
    long deltatime = (newtime - oldtime) / 1000000;
    System.out.println(deltatime);
    oldtime = newtime;
    // render code
    try {
        Thread.sleep(Math.max(0, 32 - deltatime));
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
It looks like you want to measure how long the rendering took. Therefore, I suggest a cleaner approach: store the starting time in a variable (start) and calculate the difference to the current time after the rendering has taken place. This also lets you measure sub-steps easily, by just adding another comparison to the current time in between.
Always be careful with the units (ms, µs, ns) and make them obvious, either by naming the variable accordingly (e.g. deltaMs) or by using a comment. It's also a good idea to protect the reference by declaring it final.
Here is a simple example:
while (true) {
    final long start = System.nanoTime(); // initial reference
    // simulate render code
    try { Thread.sleep(32); } catch (InterruptedException e) { e.printStackTrace(); }
    final long deltaMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("Render took " + deltaMs + "ms");
}
Here is a nested example:
while (true) {
    final long start = System.nanoTime();
    /* A */ try { Thread.sleep(20); } catch (InterruptedException e) { e.printStackTrace(); }
    final long deltaMsPartA = (System.nanoTime() - start) / 1_000_000;
    System.out.println("Render part A took " + deltaMsPartA + "ms");
    final long startPartB = System.nanoTime();
    /* B */ try { Thread.sleep(30); } catch (InterruptedException e) { e.printStackTrace(); }
    final long deltaMsPartB = (System.nanoTime() - startPartB) / 1_000_000;
    System.out.println("Render part B took " + deltaMsPartB + "ms");
    final long deltaMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("Overall render took " + deltaMs + "ms");
}
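As a small readability tweak (just a sketch), java.util.concurrent.TimeUnit can do the nanoseconds-to-milliseconds conversion instead of the magic 1_000_000 divisor:
import java.util.concurrent.TimeUnit;

final long start = System.nanoTime();
// render code
final long deltaMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
System.out.println("Render took " + deltaMs + "ms");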

Using superfasthash in Java

I am testing the speed comparison between superfasthash and the default hashing algorithm used in Java.
But I'm not sure if I'm using the superfasthash algorithm correctly, as I can't really find any documentation on what the params mean.
I got the Java implementation of the algorithm here.
This is my code:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class Main
{
    static List<String> dataArray = new ArrayList<>();

    public static void main(String[] args) throws IOException
    {
        writeToArray();
        System.out.println("Finished writing to the array.");

        // hashing using default java
        long startTime = System.nanoTime();
        dataArray.stream()
                .forEach(s -> s.hashCode());
        long endTime = System.nanoTime();
        long duration = (endTime - startTime); // divide by 1000000 to get milliseconds
        System.out.println("Finished hashing the file using default java (nanoseconds): " + duration);

        // hashing using superfasthash algo
        startTime = System.nanoTime();
        dataArray.stream()
                .forEach(s -> {
                    SuperFastHash.calculate(s.getBytes(), 1, 1, 1);
                });
        endTime = System.nanoTime();
        duration = (endTime - startTime); // divide by 1000000 to get milliseconds
        System.out.println("Finished hashing superfasthash algo (nanoseconds): " + duration);
    }

    private static void writeToArray()
    {
        String abc = "abcdefghijklmnopqrstuvwxyz";
        String toHash;
        Random r = new Random();
        for (int i = 0; i <= 1000000; i++)
        {
            toHash = "";
            for (int ii = 0; ii < 10; ii++)
            {
                int low = 0;
                int high = 26;
                int result = r.nextInt(high - low) + low;
                toHash = toHash + abc.charAt(result);
            }
            dataArray.add(toHash);
            System.out.println("Writing index = " + i + ". String: " + toHash);
        }
    }
}
but I'm not sure what to pass for the parameters when calling the calculate function from the SuperFastHash class.
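One caveat with timing it this way, independent of the parameter question: the JIT may skip hash computations whose results are never used, which can make either measurement misleading. A minimal sketch of a safer variant accumulates the hashes into a sink value so the work stays observable:
// accumulate results so the JIT cannot discard the hashing work
long sink = 0;
long start = System.nanoTime();
for (String s : dataArray) {
    sink += s.hashCode();
}
long duration = System.nanoTime() - start;
System.out.println("Default hashCode (nanoseconds): " + duration + " (sink=" + sink + ")");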

Different performance for one creating data task in Spring Boot, Postgres

I tried to reload data after creating it, but the reloading task takes much longer than the initial creation.
Performance:
CSV file: 1.2k records.
Insert data into the table for the first time: 15.413022393 seconds
Then I delete all data: 1.196959342 seconds
Then I insert data into the table for the second time, with the same function and the same CSV file: 52.934162753 seconds
Summary: 1st time: 15.4 seconds, 2nd time: 52.9 seconds.
When I switch to a CSV file with 66k records, the result is even worse:
1st time: 15 mins, 2nd time: around 2 hours.
Do you know why the same task performs so differently? And what should I do to make the 2nd run perform the same as the 1st?
Here is my source code:
public class EtlApplication implements CommandLineRunner {

    public static boolean acessDB = true;

    @Autowired
    ProcessDataController processDataController;

    public static void main(String[] args) {
        SpringApplication.run(EtlApplication.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        acessDB = false;
        processDataController.createData();
        acessDB = true;
    }
}
ProcessData and test
public class ProcessDataController {

    public static final String CSV_URL = "C:\\abc.csv";

    List<CSVSales> data = new ArrayList<>();

    @Autowired
    private ARepository aRepository;
    @Autowired
    private BRepository bRepository;
    @Autowired
    private CRepository cRepository;
    @Autowired
    private DRepository dRepository;
    @Autowired
    private ERepository eRepository;
    @Autowired
    private FRepository fRepository;
    @Autowired
    private GRepository gRepository;
    @Autowired
    private HRepository hRepository;
    @Autowired
    private IRepository iRepository;
    @Autowired
    private KRepository kRepository;
    @Autowired
    private LRepository lRepository;
    public void createData() throws IOException, ParseException {
        // BEGIN - For TESTING
        long step[] = new long[20];
        double timer[] = new double[20];
        ArrayList<String> table = new ArrayList<>();
        table.add("A");
        table.add("B");
        table.add("C");
        table.add("D");
        table.add("E");
        table.add("F");
        table.add("G");
        table.add("H");
        table.add("I");
        table.add("K");
        table.add("L");
        // END - For TESTING
        CSVReadAndParse readAndParse = new CSVReadAndParse();
        readAndParse.setUrl(CSV_URL);
        step[0] = System.nanoTime();
        data = readAndParse.getResult();
        step[1] = System.nanoTime();
        timer[0] = step[1] - step[0];
        for (int num = 0; num < data.size(); num++) {
            step[0] = System.nanoTime();
            A a = new A(data.get(num).getACode(), data.get(num).getAName());
            aRepository.save(a);
            step[1] = System.nanoTime();
            timer[1] += step[1] - step[0];
            B b = new B(data.get(num).getBCode(), data.get(num).getBName());
            bRepository.save(b);
            step[2] = System.nanoTime();
            timer[2] += step[2] - step[1];
            C c = new C(data.get(num).getC());
            cRepository.save(c);
            step[3] = System.nanoTime();
            timer[3] += step[3] - step[2];
            D d = new D(data.get(num).getDCode(), data.get(num).getDName());
            dRepository.save(d);
            step[4] = System.nanoTime();
            timer[4] += step[4] - step[3];
            E e = new E(data.get(num).getECode(), data.get(num).getEName());
            eRepository.save(e);
            step[5] = System.nanoTime();
            timer[5] += step[5] - step[4];
            F f = new F(data.get(num).getF());
            fRepository.save(f);
            step[6] = System.nanoTime();
            timer[6] += step[6] - step[5];
            G g = new G(data.get(num).getGCode(), data.get(num).getGName());
            gRepository.save(g);
            step[7] = System.nanoTime();
            timer[7] += step[7] - step[6];
            H h = new H(data.get(num).getHCode(), data.get(num).getHName());
            hRepository.save(h);
            step[8] = System.nanoTime();
            timer[8] += step[8] - step[7];
            I i = new I(data.get(num).getICode(), data.get(num).getIName());
            iRepository.save(i);
            step[9] = System.nanoTime();
            timer[9] += step[9] - step[8];
            K k = new K(data.get(num).getK());
            kRepository.save(k);
            step[10] = System.nanoTime();
            timer[10] += step[10] - step[9];
            L l = new L();
            l.setA(data.get(num).getNumberOfSale());
            l.setB(data.get(num).getSalesAmount());
            l.setC(a);
            l.setC(b);
            l.setD(c);
            l.setE(d);
            l.setF(e);
            l.setG(f);
            l.setH(g);
            l.setI(h);
            l.setK(i);
            lRepository.save(l);
            step[11] = System.nanoTime();
            timer[11] += step[11] - step[10];
        }
        double sum = 0;
        for (int i = 1; i <= 11; i++) {
            System.out.println(table.get(i - 1) + " time: " + new DecimalFormat("#.##########").format(timer[i] / 1000000000) + " seconds");
            sum += timer[i];
        }
        System.out.println("Reading data time: " + new DecimalFormat("#.##########").format(timer[0] / 1000000000) + " seconds");
        System.out.println("Total creating table time: " + new DecimalFormat("#.##########").format(sum / 1000000000) + " seconds");
    }
    public void deleteAllData() {
        lRepository.deleteAll();
        aRepository.deleteAll();
        bRepository.deleteAll();
        cRepository.deleteAll();
        eRepository.deleteAll();
        fRepository.deleteAll();
        gRepository.deleteAll();
        hRepository.deleteAll();
        iRepository.deleteAll();
        kRepository.deleteAll();
    }
    @RequestMapping(value = "/api/reloadData", method = RequestMethod.GET)
    public String reloadData() throws IOException, ParseException {
        System.out.println("------------------------" + acessDB);
        if (acessDB) {
            acessDB = false;
            long step1 = System.nanoTime();
            deleteAllData();
            long step2 = System.nanoTime();
            createData();
            long step3 = System.nanoTime();
            double time1 = ((double) (step2 - step1) / 1000000000);
            double time2 = ((double) (step3 - step2) / 1000000000);
            double time3 = ((double) (step3 - step1) / 1000000000);
            System.out.println("Delete time: " + new DecimalFormat("#.##########").format(time1) + " Seconds");
            System.out.println("Create time: " + new DecimalFormat("#.##########").format(time2) + " Seconds");
            System.out.println("Total time: " + new DecimalFormat("#.##########").format(time3) + " Seconds");
            acessDB = true;
            return "Done";
        } else {
            return "Busy";
        }
    }
}
If you have any idea, please help me. Thank you all for your support.
result for 1.2k records
1st time
A time: 1.447862537 seconds
B time: 1.255404293 seconds
C time: 1.394218887 seconds
D time: 1.187494522 seconds
E time: 1.181336583 seconds
F time: 1.259357541 seconds
G time: 1.2722146 seconds
H time: 1.276657592 seconds
I time: 1.238350482 seconds
K time: 1.132834423 seconds
L time: 2.767290933 seconds
Reading data time: 0.017714579 seconds
Total creating table time: 15.413022393 seconds
2nd time
A time: 4.452199036 seconds
B time: 4.602505654 seconds
C time: 4.847908167 seconds
D time: 4.424638278 seconds
E time: 4.820910787 seconds
F time: 5.235425021 seconds
G time: 5.069998945 seconds
H time: 5.022227053 seconds
I time: 4.918734423 seconds
K time: 4.483681708 seconds
L time: 5.04199453 seconds
Reading data time: 0.008831614 seconds
Total creating table time: 52.920223602 seconds
Delete time: 1.196959342 Seconds
Create time: 52.934162753 Seconds
Total time: 54.131122095 Seconds
Before saving the entities, I tried autowiring the EntityManager:
@Autowired
private EntityManager entityManager;
Then, instead of using the save functions, I used the saveAndFlush functions, and after that called entityManager.clear();
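A minimal sketch of that flush-and-clear batching pattern (assuming standard Spring Data JPA; the method name createDataBatched and the batch size are illustrative, not taken from the project above):
@Autowired
private EntityManager entityManager;

public void createDataBatched() {
    int batchSize = 50; // illustrative batch size
    for (int num = 0; num < data.size(); num++) {
        // ... build and save the entities exactly as in createData() ...
        if (num % batchSize == 0) {
            entityManager.flush(); // push pending inserts to the database
            entityManager.clear(); // detach entities so the persistence context stays small
        }
    }
}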

Why Cassandra UDF performance is worse than java code

I have a use case where I need to fetch all the records from Cassandra for a given time range, divide them into 30 chunks, and then aggregate each chunk. For example, suppose I'm fetching 60 records for a time range of 30 minutes. I then need to divide them into 30 chunks, which works out to 2 records per minute. If I'm fetching 600 records for a time range of 1 hour, then each of the 30 chunks holds 20 records covering 2 minutes. If I'm fetching 600 records for a time range of 1 week, then each of the 30 chunks holds 20 records covering 5.6 hours, and so on.
To implement this I wrote Java code which produces the result in 3 seconds for 100k records. I thought implementing the same logic as a Cassandra UDF would bring a performance benefit, but the UDF takes 6-7 seconds (double the time taken by the Java code), which is shocking to me. Can somebody please point out where I'm off track? Below are my table structure and the Java as well as the UDF code.
Cassandra table schema
CREATE TABLE transactions_data (
app_name text,
api_name text,
app_id text,
start_time timestamp,
duration int,
end_time timestamp,
node_id text,
request_body text,
request_parameter_name1 text,
request_parameter_name2 text,
request_parameter_name3 text,
request_parameter_name4 text,
request_parameter_name5 text,
request_parameter_value1 text,
request_parameter_value2 text,
request_parameter_value3 text,
request_parameter_value4 text,
request_parameter_value5 text,
response_body text,
response_parameter_name1 text,
response_parameter_name2 text,
response_parameter_name3 text,
response_parameter_name4 text,
response_parameter_name5 text,
response_parameter_value1 text,
response_parameter_value2 text,
response_parameter_value3 text,
response_parameter_value4 text,
response_parameter_value5 text,
responsestatus text,
responsestatuscode text,
transaction_id text,
PRIMARY KEY ((app_name, api_name, app_id), start_time)
);
Java code
public class SamplingDataJava {

    private static Logger logger = LoggerFactory.getLogger(SamplingDataJava.class);

    private static String startTime = "2017-03-21 00:00:00.000";
    private static String endTime = "2017-04-25 00:00:00.000";

    private final String SELECT_STATEMENT = "select start_time,duration from transactions_data "
            + " where app_name='app_name-abc' and api_name='api_name-1' "
            + " and app_id='app_id-xyz' " + " AND start_time>='"
            + startTime + "' AND start_time<='" + endTime + "' ";

    private Cluster cluster;
    private Session session;
    private String Host = "localhost";

    public SamplingDataJava() throws IOException {
        // this.query=query;
        logger.info("Using CQL3 Writer");
        cluster = Cluster.builder().addContactPoints(Host)
                .withSocketOptions(new SocketOptions().setConnectTimeoutMillis(2000000)).build();
        session = cluster.connect();
    }
    private class Result {
        double duration;
        int count;

        Result(double duration, int count) {
            this.duration = duration;
            this.count = count;
        }

        @Override
        public String toString() {
            return "Result [duration=" + duration + ", count=" + count + "]";
        }
    }
    public void hashSampling(long interval, long initTime) throws IOException {
        HashMap<Long, Result> agg = new HashMap<>();
        ResultSet rs = session.execute(SELECT_STATEMENT);
        int i = 0;
        for (com.datastax.driver.core.Row row : rs) {
            i++;
            Long hashcode = Math.abs((row.getTimestamp("start_time").getTime() - initTime) / interval);
            Result hasResult = agg.get(hashcode);
            if (hasResult == null) {
                hasResult = new Result(row.getInt("duration"), 1);
            } else {
                hasResult.duration = (hasResult.duration + row.getInt("duration"));
                hasResult.count++;
            }
            agg.put(hashcode, hasResult);
        }
        System.out.println("total number of records " + i);
        Long code = 0L;
        while (code < 30) {
            System.out.println(" code " + agg.get(code));
            code++;
        }
    }
    public void close() {
        session.close(); // close the session before its cluster
        cluster.close();
    }
    public static void main(String[] args) throws IOException {
        long beginTime = System.currentTimeMillis();
        SamplingDataJava cqp = new SamplingDataJava();
        long onlyQueryTime = System.currentTimeMillis();
        DateTimeFormatter readPattern = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss.SSS");
        DateTime sTime = readPattern.parseDateTime(startTime);
        DateTime eTime = readPattern.parseDateTime(endTime);
        long interval = (eTime.getMillis() - sTime.getMillis()) / 30;
        System.out.println("start end time :" + eTime.getMillis() + " " + sTime.getMillis());
        cqp.hashSampling(interval, sTime.getMillis());
        System.out.println("total time without open close " + (System.currentTimeMillis() - onlyQueryTime));
        cqp.close();
        System.out.println("total time " + (System.currentTimeMillis() - beginTime));
    }
}
UDF code
CREATE OR REPLACE FUNCTION txn_group_count_and_sum(
    txn map<bigint, frozen<tuple<int,int>>>,
    start_time bigint, duration int, sample_size bigint, begin_time bigint )
RETURNS NULL ON NULL INPUT
RETURNS map<bigint, frozen<tuple<int,int>>>
LANGUAGE java AS '
    Long hashcode = (start_time - begin_time) / sample_size;
    TupleValue tupleValue = txn.get(hashcode);
    if (tupleValue == null) {
        com.datastax.driver.core.TupleType tupleType =
            com.datastax.driver.core.TupleType.of(
                com.datastax.driver.core.ProtocolVersion.NEWEST_SUPPORTED,
                com.datastax.driver.core.CodecRegistry.DEFAULT_INSTANCE,
                com.datastax.driver.core.DataType.cint(),
                com.datastax.driver.core.DataType.cint());
        tupleValue = tupleType.newValue(1, duration);
    } else {
        tupleValue.setInt(0, tupleValue.getInt(0) + 1);
        tupleValue.setInt(1, tupleValue.getInt(1) + duration);
    }
    txn.put(hashcode, tupleValue);
    return txn; ';
CREATE OR REPLACE AGGREGATE group_count_and_sum(bigint, int, bigint, bigint)
SFUNC txn_group_count_and_sum
STYPE map<bigint, frozen<tuple<int,int>>>
INITCOND {};
query
select group_count_and_sum(toUnixTimestamp(start_time),duration,100800000,1490054400000) from transactions_data
where app_name='app_name-abc' and api_name='api_name-1'
and app_id='app_id-xyz'
AND start_time>='2017-03-21 00:00:00.000' AND start_time<='2017-04-25 00:00:00.000';
Note:-
100800000 = (end_time - start_time)/30
1490054400000 = millisecond of 2017-03-21 00:00:00.000
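(Checking that arithmetic: 2017-03-21 to 2017-04-25 is 35 days = 3,024,000,000 ms, and 3,024,000,000 / 30 = 100,800,000 ms, so each of the 30 chunks covers 28 hours.)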

How to measure time taken by websocket to respond

We are using websockets in our project and there is a requirement to evaluate their speed. How do we measure the time taken by a websocket to respond?
Given that I am new to Stack Overflow and can't write comments yet, I will try to give you an answer with the info that you posted.
If you search Google you will find many examples of "how to calculate elapsed time" or "execution time in Java". The following examples were extracted from mkyong.
Date().getTime():
long lStartTime = new Date().getTime();
//some tasks
long lEndTime = new Date().getTime();
long difference = lEndTime - lStartTime;
System.out.println("Elapsed milliseconds: " + difference);
System.currentTimeMillis()
long lStartTime = System.currentTimeMillis();
//some tasks
long lEndTime = System.currentTimeMillis();
long difference = lEndTime - lStartTime;
System.out.println("Elapsed milliseconds: " + difference);
System.nanoTime()
long lStartTime = System.nanoTime();
//some tasks
long lEndTime = System.nanoTime();
long difference = lEndTime - lStartTime;
System.out.println("Elapsed milliseconds: " + difference/1000000);
Full example
import java.util.Date;

public class TimeApp {

    public static void main(String[] argv) {
        long lStartTime = new Date().getTime(); // start time
        createArray(); // some tasks to eat time
        long lEndTime = new Date().getTime(); // end time
        long difference = lEndTime - lStartTime; // check difference
        System.out.println("Elapsed milliseconds: " + difference);
    }

    public static void createArray() {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        String sArray[] = new String[1000000];
        for (int i = 0; i < 1000000; i++)
            sArray[i] = "Array " + i;
    }
}
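To apply this to a websocket specifically, record the time when the request is sent and compute the difference when the reply arrives. Here is a minimal sketch assuming the standard javax.websocket client API (the endpoint and method names are illustrative):
import java.io.IOException;
import javax.websocket.ClientEndpoint;
import javax.websocket.OnMessage;
import javax.websocket.Session;

@ClientEndpoint
public class TimingEndpoint {
    private volatile long sentAt; // nanoTime captured when the request goes out

    public void send(Session session, String message) throws IOException {
        sentAt = System.nanoTime();
        session.getBasicRemote().sendText(message);
    }

    @OnMessage
    public void onMessage(String reply) {
        long elapsedMs = (System.nanoTime() - sentAt) / 1000000;
        System.out.println("Round trip took " + elapsedMs + " ms");
    }
}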
Hopefully this will help you!
