reproduce OutOfMemoryError with -Xss - java

I'm trying to reproduce java.lang.OutOfMemoryError: unable to create new native thread, but by using the -Xss VM argument.
My assumption is that if we have a large number of threads and every thread takes X stack space, the exception will be thrown once threads * X exceeds the total available stack space.
But nothing happens.
My test program:
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Tester
{
    public static void main(String[] args) throws Exception
    {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(1000, 15000, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        int i = 0;
        try
        {
            Thread.sleep(100);
            for (; i <= 10; i++)
            {
                Runnable t = new Runnable()
                {
                    List<Object> objects = new LinkedList<>();

                    public void run()
                    {
                        while (true)
                        {
                            objects.add(new Object());
                        }
                    }
                };
                executor.submit(t);
            }
        }
        catch (Throwable e)
        {
            System.err.println("stop with " + i + " threads, " + e);
            System.err.println("task count " + executor.getTaskCount());
            System.out.println("get active thread count " + executor.getActiveCount());
            executor.shutdownNow();
        }
    }
}
And my VM args are:
-Xms512m -Xmx512m -Xss1g
Any idea why I'm not getting the exception, and how can I reproduce it?
Thanks.

On most OSes, the stack is allocated lazily, i.e. only the pages you actually use turn into real memory. A process is limited to roughly 128 to 256 TB of virtual memory, depending on the OS you are using, so at 1 GB of stack per thread you need at least 128k threads before virtual address space runs out. I would try a much larger stack, e.g. -Xss256g.
EDIT: Trying this myself, it looks like the JVM ignores stack sizes of 4g and above; the largest accepted size is -Xss4000m on Windows.
Trying to reproduce this on Windows, it appears to overload the machine before any exception is thrown.
This is what I tried. Run with -Xss4000m, it got to over 20 threads (a total of 80 GB of stack) before my Windows laptop stopped responding.
You might find that on Linux it hits a ulimit before overloading the machine (see the sketch after the code below).
import java.util.concurrent.*;

class A {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                60L, TimeUnit.SECONDS,
                new SynchronousQueue<>());
        try {
            for (int i = 0; i < 100; i++) {
                System.out.println(i);
                pool.submit(() -> {
                    try {
                        System.out.println(recurse() + " size " + pool.getPoolSize());
                    } catch (Throwable t) {
                        t.printStackTrace();
                    }
                    return null;
                });
                Thread.sleep(1000);
            }
        } finally {
            pool.shutdown();
        }
    }

    static long recurse() {
        try {
            return 1 + recurse();
        } catch (Error e) {
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }
            return 1;
        }
    }
}
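For reference, the error named in the question is usually easier to hit without any -Xss tuning at all: just keep starting raw threads that stay alive until the OS refuses to create another one. A minimal sketch (on Linux the first limit you hit is typically ulimit -u or a kernel thread/memory-map limit, so the count reached is machine-dependent; be aware this can make a desktop session sluggish):
public class NativeThreadExhaustion {
    public static void main(String[] args) {
        int created = 0;
        try {
            while (true) {
                Thread t = new Thread(() -> {
                    try {
                        Thread.sleep(Long.MAX_VALUE);   // park the thread so it stays alive
                    } catch (InterruptedException ignored) {
                    }
                });
                t.setDaemon(true);
                t.start();
                created++;
            }
        } catch (OutOfMemoryError e) {
            // typically: java.lang.OutOfMemoryError: unable to create new native thread
            System.err.println("Failed after " + created + " threads: " + e);
        }
    }
}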

Related

java multi threading executor never shutdown threads

Could someone take a look at the program below?
It works fine for a small process, but the program does not exit after completing a large process.
Note: for a small query, about 50 records (retrieving and updating), the program exits normally.
The purpose of this program is to get data from the database, go to the cloud to read JSON, validate the data, and update the record in the database with the result.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ThreadLauncher
{
    public static void main(String args[])
    {
        final ExecutorService service = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors()); // or hardcode a number
        List<Future<Runnable>> futures = new ArrayList<Future<Runnable>>();
        for (int n = 0; n < 10; n++)
        {
            Future f = service.submit(new Task(n));
            futures.add(f);
        }
        // wait for all tasks to complete before continuing
        for (Future<Runnable> f : futures)
        {
            try {
                f.get();
                // shut down the executor service so that this thread can exit
            } catch (InterruptedException e) {
                System.out.println("Exiting with InterruptedException : " + e.getMessage());
                e.printStackTrace();
            } catch (ExecutionException e) {
                System.out.println("Exiting with ExecutionException : " + e.getMessage());
                e.printStackTrace();
            }
        }
        service.shutdownNow();
        System.out.println("Exiting normally...");
    }
}

final class Task implements Runnable
{
    private int loopCounter;
    private int totalLoops = 5;

    public Task(int counter)
    {
        this.loopCounter = counter;
    }

    @Override
    public void run()
    {
        try {
            GCPJSON.getInstance().getGCPDataFromJSON(PRODDataAccess.getInstance().getDataToProcess(loopCounter, totalLoops));
            System.out.println("Task ID : " + this.loopCounter + " performed by " +
                    Thread.currentThread().getName());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Here is my updated code. I have changed it from Future to FutureTask and added a few lines. I am hoping all these 10 tasks run in parallel.
List<FutureTask<Runnable>> futures = new ArrayList<FutureTask<Runnable>>();
for (int n = 0; n < 10; n++)
{
    FutureTask f = (FutureTask) service.submit(new Task(n));
    futures.add(f);
}
// wait for all tasks to complete before continuing
// for (FutureTask<Runnable> f : futures)
for (int i = 0; i < futures.size(); i++)
{
    FutureTask f = (FutureTask) futures.get(i);
    //System.out.println("Number of futureTasks: " + i);
    try {
        if (!f.isDone()) {
            // wait indefinitely for the future task to complete
            f.get();
            //System.out.println("FutureTask output=" + f.get());
        } else {
            System.out.println("Task number :" + i + " Done.");
        }
    } catch (InterruptedException | ExecutionException e) {
        System.out.println("Exiting with InterruptedException : " + e.getMessage());
        e.printStackTrace();
    }
}
// If we come out of the loop, we must have completed all the tasks, i.e. in the above case, 10 tasks (10 loop submits)
try {
    if (!service.awaitTermination(10000000, TimeUnit.MICROSECONDS)) {
        System.out.println("Exiting normally...");
        service.shutdownNow();
        System.exit(0);
    }
} catch (InterruptedException e) {
    e.printStackTrace();
}
if (!service.isShutdown()) {
    System.exit(0);
}
It's because calling shutdown or shutdownNow on the ExecutorService does not wait for the tasks to finish; shutdownNow only attempts to stop the actively executing tasks and returns the list of tasks that were still awaiting execution. From the Java documentation:
Attempts to stop all actively executing tasks, halts the processing of
waiting tasks, and returns a list of the tasks that were awaiting
execution.
This method does not wait for actively executing tasks to terminate.
Use awaitTermination to do that.
As the documentation says, you need to call awaitTermination to block until every task has finished executing (or the timeout elapses).
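A minimal sketch of the usual shutdown sequence, along the lines of what the ExecutorService Javadoc recommends (the 60-second timeout is a placeholder):
service.shutdown();                          // stop accepting new tasks
try {
    // block until the submitted tasks finish, or the timeout elapses
    if (!service.awaitTermination(60, TimeUnit.SECONDS)) {
        service.shutdownNow();               // cancel tasks that are still running
    }
} catch (InterruptedException e) {
    service.shutdownNow();
    Thread.currentThread().interrupt();
}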
UPDATE:
If you have no idea how long the work will take, you can add the following lines to make sure all tasks have finished successfully.
int filesCount = getFileCount(); // you know the files count, right?
AtomicInteger finishedFiles = new AtomicInteger(0);
ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
for (int i = 0; i < threadCount; i++)
    executorService.submit(() -> {
        // do your work
        // at the end of each file process:
        finishedFiles.incrementAndGet();
    });
while (finishedFiles.get() < filesCount) { // let's wait until all files have been processed
    Thread.sleep(100);
}
executorService.shutdown();
executorService.awaitTermination(1, TimeUnit.MINUTES); // anyway, they already should have finished
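If you would rather not poll with Thread.sleep, a CountDownLatch gives the same guarantee without the busy-wait. A sketch under the same assumptions (filesCount is known up front, the task body is a placeholder, and, like the snippet above, the surrounding method handles InterruptedException):
CountDownLatch done = new CountDownLatch(filesCount);
ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
for (int i = 0; i < threadCount; i++)
    executorService.submit(() -> {
        // do your work; after each processed file:
        done.countDown();
    });
done.await();                // blocks until the latch has been counted down filesCount times
executorService.shutdown();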

How to schedule concurrent Runnables according to the number of threads, where each thread has to wait a delay after completing its task

I am scheduling tasks with a specific delay to process my items:
while (currentPosition < count) {
    ExtractItemsProcessor extractItemsProcessor =
            getExtractItemsProcessor(currentPosition, currentPositionLogger);
    executor.schedule(extractItemsProcessor, waitBetweenProzesses, TimeUnit.SECONDS);
    waitBetweenProzesses += sleepTime;
    currentPosition += chunkSize;
}
How can I schedule, for example, 3 tasks (with an executor that has 3 threads) so that each thread waits 10 seconds after finishing its task?
You can use Executors.newFixedThreadPool(NB_THREADS), which returns an ExecutorService. Then with this ExecutorService you can submit tasks. Example:
private static final int NB_THREADS = 3;

public static void main(String[] args) {
    ExecutorService executorService = Executors.newFixedThreadPool(NB_THREADS);
    for (int i = 0; i < NB_THREADS; i++) {
        final int nb = i + 1;
        Runnable task = new Runnable() {
            public void run() {
                System.out.println("Task " + nb);
                try {
                    TimeUnit.SECONDS.sleep(10);
                    System.out.println("Task " + nb + " terminated");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    System.out.println("Error during thread await " + e); // Logging framework should be here
                }
            }
        };
        executorService.submit(task);
    }
    executorService.shutdown();
    try {
        executorService.awaitTermination(1, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        System.out.println("Error during thread await " + e);
    }
}
It will run the 3 tasks in parallel and the output looks like this:
Task 1
Task 3
Task 2
Task1 terminated
Task2 terminated
Task3 terminated
In your case you could do something like:
ExecutorService executorService = Executors.newFixedThreadPool(NB_THREADS);
while (currentPosition < count) {
    ExtractItemsProcessor extractItemsProcessor =
            getExtractItemsProcessor(currentPosition, currentPositionLogger);
    executorService.submit(extractItemsProcessor); // In the processor you should add the sleep
    waitBetweenProzesses += sleepTime;
    currentPosition += chunkSize;
}
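If you prefer not to touch ExtractItemsProcessor itself, one possible sketch is to wrap the pause around it at submission time (assuming the processor is a Runnable; the lambda returns a value, so it is submitted as a Callable, which is allowed to throw the checked exception from sleep):
executorService.submit(() -> {
    extractItemsProcessor.run();   // do the actual work
    TimeUnit.SECONDS.sleep(10);    // keep this worker thread busy for another 10 seconds
    return null;                   // makes the lambda a Callable<Object>
});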

How to optimize Java thread concurrency with thousands of SSH connections per thread?

So I have this piece of code I want to optimize. The idea of the program is to iterate over a list of hosts, create one thread per host, connect to each via SSH to validate some things, and then save the results in an object.
Right now the code takes a long time (20 minutes) to run through 8000 hosts. I want to optimize this code to fully utilize CPU, cores, and memory, and hopefully finish faster.
ConcurrentLinkedQueue<Host> hosts = new ConcurrentLinkedQueue<Host>();
List<FutureTask<Host>> taskList = new ArrayList<FutureTask<Host>>();

final int cpus = Runtime.getRuntime().availableProcessors();
int threadCount = 1000;
if (cpus <= 2) {
    threadCount = cpus * 4;
}
ExecutorService executor = Executors.newFixedThreadPool(threadCount);

for (String resource : resources) {
    FutureTask<Host> task = new FutureTask<Host>(new Worker(resource));
    taskList.add(task);
    executor.submit(task);
}

System.out.println("Processing... Waiting for all threads to finish...");
for (int j = 0; j < taskList.size(); j++) {
    FutureTask<Host> futureTask = taskList.get(j);
    try {
        hosts.add(futureTask.get());
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (ExecutionException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
executor.shutdown();

Locking access to another class, from run method

Fairly complex code, but it's a very simple issue.
I have a thread, this is its run method:
public void run() // gets pages and writes to them
{   // i printed the pageId of every process to check they are running at the same time and competing for resources
    for (ProcessCycle currentCycle : processCycles.getProcessCycles())
    {
        Long[] longArray = new Long[currentCycle.getPages().size()];
        try {
            Page<byte[]>[] newPages = mmu.getPages(currentCycle.getPages().toArray(longArray));
            for (int i = 0; i < newPages.length; i++)
            {
                MMULogger.getInstance().write("GP:P" + id + " " + currentCycle.getPages().get(i) + " " + Arrays.toString(currentCycle.getData().get(i)), Level.INFO);
            }
            List<byte[]> currentPageData = currentCycle.getData();
            System.out.println("process id " + id);
            for (int i = 0; i < newPages.length; i++)
            {
                byte[] currentData = currentPageData.get(i);
                newPages[i].setContent(currentData);
            }
            Thread.sleep(currentCycle.getSleepMs());
        } catch (ClassNotFoundException | IOException | InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
Don't bother reading all of it; just notice that after mmu.getPages there is a for loop.
While a process is inside that for loop, I want to lock access to mmu.getPages for all other threads. synchronized is no good, since my original process is no longer inside mmu but in the for loop, and a ReentrantLock might be a good idea, but I'm unfamiliar with the syntax and ran into some issues.
Long story short: how do I make all other threads wait while some thread is inside the for loop that follows mmu.getPages?
Usually I choose an approach like this:
private final Object lock = new Object();

public void run() // gets pages and writes to them
{   // i printed the pageId of every process to check they are running at the same time and competing for resources
    for (ProcessCycle currentCycle : processCycles.getProcessCycles())
    {
        Long[] longArray = new Long[currentCycle.getPages().size()];
        try {
            Page<byte[]>[] newPages;
            synchronized (lock) {
                newPages = mmu.getPages(currentCycle.getPages().toArray(longArray));
                for (int i = 0; i < newPages.length; i++)
                {
                    MMULogger.getInstance().write("GP:P" + id + " " + currentCycle.getPages().get(i) + " " + Arrays.toString(currentCycle.getData().get(i)), Level.INFO);
                }
            }
            List<byte[]> currentPageData = currentCycle.getData();
            System.out.println("process id " + id);
            for (int i = 0; i < newPages.length; i++)
            {
                byte[] currentData = currentPageData.get(i);
                newPages[i].setContent(currentData);
            }
            Thread.sleep(currentCycle.getSleepMs());
        } catch (ClassNotFoundException | IOException | InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
Not sure if there is a better way. This will only work as expected when all threads share the same instance of this object; otherwise you have to make lock a static member variable.
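For that case, the shared monitor would simply be declared once for the class, for example:
// one monitor shared by all instances, so every thread synchronizes on the same lock
private static final Object lock = new Object();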
In my opinion a ReadWriteLock might be a better approach.
Something like this:
public class MmuClass {

    private final ReadWriteLock blockGetPages = new ReentrantReadWriteLock();

    public byte[] getPages(...) {
        try {
            blockGetPages.readLock().lock();
            // ...
            // ...
            // ...
            return result;
        } finally {
            blockGetPages.readLock().unlock();
        }
    }

    public void lockAccessToGetPages() {
        blockGetPages.writeLock().lock();
    }

    public void unlockAccessToGetPages() {
        blockGetPages.writeLock().unlock();
    }
}
and
Page<byte[]>[] newPages = mmu.getPages(currentCycle.getPages().toArray(longArray));
try {
    mmu.lockAccessToGetPages();
    for (int i = 0; i < newPages.length; i++) {
        MMULogger.getInstance().write("GP:P" + id + " " + currentCycle.getPages().get(i) + " " + Arrays.toString(currentCycle.getData().get(i)), Level.INFO);
    }
} finally {
    mmu.unlockAccessToGetPages();
}
In this solution all "readers" can call getPages() simultaneously; access is blocked after calling lockAccessToGetPages() and unblocked after calling unlockAccessToGetPages(). If one thread locks the object in write mode, only that thread has access to the method. A thread that tries to lock it in write mode must wait until all readers currently "inside" the method finish their work and leave it.
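Since the question also mentions ReentrantLock: the plain (non read/write) variant simply serializes the whole getPages-plus-logging section. A rough sketch, reusing the names from the question (the lock must be shared by all the threads, e.g. declared static or owned by the shared mmu):
private final ReentrantLock pagesLock = new ReentrantLock();   // java.util.concurrent.locks

// inside run():
Page<byte[]>[] newPages;
pagesLock.lock();                      // only one thread at a time past this point
try {
    newPages = mmu.getPages(currentCycle.getPages().toArray(longArray));
    for (int i = 0; i < newPages.length; i++) {
        MMULogger.getInstance().write("GP:P" + id + " " + currentCycle.getPages().get(i)
                + " " + Arrays.toString(currentCycle.getData().get(i)), Level.INFO);
    }
} finally {
    pagesLock.unlock();                // always released, even if getPages throws
}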

Multithreading in java having array of threads [duplicate]

This question already has answers here:
How to use an ExecutorCompletionService
(2 answers)
Closed 7 years ago.
public static void getTestData() {
    try {
        filename = "InventoryData_" + form_id;
        PrintWriter writer = new PrintWriter("/Users/pnroy/Documents/" + filename + ".txt");
        pids = new ArrayList<ProductId>();
        GetData productList = new GetData();
        System.out.println("Getting productId");
        pids = productList.GetProductIds(form_id);
        int perThreadSize = pids.size() / numberOfCrawlers;
        ArrayList<ArrayList<ProductId>> perThreadData = new ArrayList<ArrayList<ProductId>>(numberOfCrawlers);
        for (int i = 1; i <= numberOfCrawlers; i++) {
            perThreadData.add(new ArrayList<ProductId>(perThreadSize));
            for (int j = 0; j < perThreadSize; j++) {
                ProductId ids = new ProductId();
                ids.setEbProductID((pids.get(((i - 1) * perThreadSize + j))).getEbProductID());
                ids.setECProductID((pids.get(((i - 1) * perThreadSize + j))).getECProductID());
                perThreadData.get(i - 1).add(ids);
            }
        }
        BlockingQueue<String> q = new LinkedBlockingQueue<String>();
        Consumer c1 = new Consumer(q);
        Thread[] thread = new Thread[numberOfCrawlers];
        for (int k = 0; k < numberOfCrawlers; k++) {   // note: <= would run past the end of the array
            // System.out.println(k);
            GetCombinedData data = new GetCombinedData();
            thread[k] = new Thread(data);
            thread[k].setDaemon(true);
            data.setVal(perThreadData.get(k), filename, q);
            thread[k].start();
            // writer.println(data.getResult());
        }
        new Thread(c1).start();
        for (int l = 0; l < numberOfCrawlers; l++) {
            thread[l].join();
        }
    } catch (Exception e) {
    }
}
Here numberOfCrawlers is the number of threads.
The run method of the GetCombinedData class has the following code.
The pids list is the per-thread chunk passed in from the main method via setVal.
The CassController class queries an API, and I get a string result after some processing.
public void run() {
    try {
        for (int i = 0; i < pids.size(); i++) {
            // System.out.println("before cassini");
            CassController cass = new CassController();
            String result = cass.getPaginationDetails(pids.get(i));
            queue.put(result);
            // System.out.println(result);
            Thread.sleep(1000);
        }
        writer.close();
    } catch (Exception ex) {
    }
}
Consumer.java has the following code:
public class Consumer implements Runnable {
    private final BlockingQueue queue;

    Consumer(BlockingQueue q) { queue = q; }

    public void run() {
        try {
            while (queue.size() > 0) {
                consume(queue.take());
            }
        } catch (InterruptedException ex) {
        }
    }

    void consume(Object x) {
        try {
            PrintWriter writer = new PrintWriter(new FileWriter("/Users/pnroy/Documents/Inventory", true));
            writer.println(x.toString());
            writer.close();
        } catch (IOException ex) {
        }
    }
}
So if I set the number of crawlers to 10 and there are 500 records, each thread will process 50 records. I need to write the results into a file, and I'm confused about how to achieve this since it's an array of threads and each thread is doing a bunch of operations.
I tried using a blocking queue, but that prints repetitive results. I am new to multithreading and not sure how to handle this case.
Can you please suggest an approach?
With the introduction of many useful high-level concurrency classes, it is now recommended not to use the Thread class directly anymore. Even BlockingQueue is rather low-level.
Instead, this is a nice application for a CompletionService, which builds on the ExecutorService. The example below shows how to use it.
You want to replace the code in PartialResultTask (that's where the main processing happens) and the System.out.println call (that's where you probably want to write your result to a file).
public class ParallelProcessing {

    public static void main(String[] args) {
        ExecutorService executionService = Executors.newFixedThreadPool(10);
        CompletionService<String> completionService = new ExecutorCompletionService<>(executionService);

        // submit tasks
        for (int i = 0; i < 500; i++) {
            completionService.submit(new PartialResultTask(i));
        }

        // collect results
        for (int i = 0; i < 500; i++) {
            String result = getNextResult(completionService);
            if (result != null)
                System.out.println(result);
        }

        executionService.shutdown();
    }

    private static String getNextResult(CompletionService<String> completionService) {
        Future<String> result = null;
        while (result == null) {
            try {
                result = completionService.take();
            } catch (InterruptedException e) {
                // ignore and retry
            }
        }
        try {
            return result.get();
        } catch (ExecutionException e) {
            e.printStackTrace();
            return null;
        } catch (InterruptedException e) {
            e.printStackTrace();
            return null;
        }
    }

    static class PartialResultTask implements Callable<String> {
        private int n;

        public PartialResultTask(int n) {
            this.n = n;
        }

        @Override
        public String call() {
            return String.format("Partial result %d", n);
        }
    }
}
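Since the goal is a file rather than the console, the collection loop can write through a single PrintWriter opened once. A sketch (the output path is a placeholder based on the one in the question):
// Open the writer once, outside the loop, so results from all tasks land in one file
try (PrintWriter writer = new PrintWriter(new FileWriter("/Users/pnroy/Documents/Inventory.txt"))) {
    for (int i = 0; i < 500; i++) {
        String result = getNextResult(completionService);
        if (result != null)
            writer.println(result);
    }
} catch (IOException e) {
    e.printStackTrace();
}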
