What is the difference between them? I know that:
A queue is designed to have elements inserted at the end of the queue and removed from the beginning.
A deque, by contrast, is a queue where you can insert and remove elements at both ends.
But which is more efficient? I have the basic knowledge stated above, but I would like to understand the differences between the two in more depth.
Deque is short for "double ended queue". With an ordinary queue, you add things to one end and take them from the other. With a double ended queue, you can add things to either end, and take them from either end. That makes it a bit more versatile; for example, you could use it as a stack if you wanted to.
In terms of efficiency, it really depends on the implementation. But generally speaking, you wouldn't expect a deque to outperform a queue, because a (single ended) queue could be implemented in a way that doesn't allow objects to be added or removed at the "wrong" end. Whereas any implementation of a deque would also work as an implementation of a queue.
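To make this concrete, here is a minimal sketch using `java.util.ArrayDeque`, which implements the `Deque` interface and can serve as either a FIFO queue or a LIFO stack:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeDemo {
    public static void main(String[] args) {
        Deque<Integer> deque = new ArrayDeque<>();

        // Used as a FIFO queue: add at the tail, remove from the head.
        deque.offerLast(1);
        deque.offerLast(2);
        System.out.println(deque.pollFirst()); // 1

        // Used as a LIFO stack: push and pop at the same end (the head).
        deque.push(10);
        deque.push(20);
        System.out.println(deque.pop()); // 20
    }
}
```

Note that `ArrayDeque` is generally recommended over the legacy `Stack` class for stack usage, precisely because a deque covers both access patterns.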
Deque and queue are abstract data types that can be implemented in different ways. To talk about performance you have to specify which implementations you want to compare and which operation(s) you're interested in. Even better, do the benchmark yourself with the workload your application has and in the environment you're going to use (hardware, operating system, JVM version).
Since every deque is also a queue, in general you can say that a deque can be at most as good as a queue.
I'm developing a network proxy application in Java 8. For ingress, the main logic is the data-processing loop: get a packet from the inbound queue, process its content (e.g. protocol adaptation), and put it in the send queue.
Multiple virtual TCP channels are allowed in the design, so each data-processing thread (from a list of such threads) handles a group of channels for a given time duration, as a part of the whole job (e.g. the channels with channel.channelId % NUM_DATA_PROCESSING_THREADS == 0, as determined by a load-balancing scheduler). Channels are stored in an array and accessed using the channelId as the index of the cell. The array is wrapped by a class that provides methods like register, deregister, getById, size, etc., and its instance is called CHANNEL_STORE in the program.
I need to call these methods in the main logic (the data-processing loop) from different threads (at least the dispatcher thread, the data-processing threads, and the control thread that destroys a channel from the GUI), so I need to consider concurrency among these threads. I have several candidate approaches:
Use synchronized or reentrant locks around register, deregister, getById, etc. This is the simplest approach and it's thread-safe, but I have performance concerns about the locking (CAS) mechanisms, since I need to perform operations on CHANNEL_STORE (especially getById) at a very high frequency.
Delegate the operations on CHANNEL_STORE to a SingleThreadExecutor via executor.execute(runnable) and/or executor.submit(callable). My concern is the cost of creating a Runnable/Callable at each such call site in the data-processing loop: I have no idea whether creating the instance and calling execute will be even more expensive than synchronized or reentrant locks. In practice (so far) there is no post-operation, so the data-processing loop only submits runnables and never has to wait on a callable's result, although the control loop does need post-operations.
Delegate the operations on CHANNEL_STORE to a dedicated thread via a pair of ArrayBlockingQueues instead of an Executor. For each access to CHANNEL_STORE, put a task indicator together with its parameters on the first queue; the dedicated thread loops on this queue with the blocking take method and operates on CHANNEL_STORE. It then puts the result on the second queue so the delegating thread can continue with its post-operation (currently not needed, however). I suspect this is the fastest, assuming the JVM's blocking queue is lock-free. My concern is that the code becomes very messy and error-prone.
I think the second and third approaches might be called "serialization".
The reason I cannot simply hand tasks to a thread pool for data processing and forget them is that the TCP stream packets of each channel must not be reordered; they have to be processed serially on a per-channel basis.
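The third approach above can be sketched roughly like this; the class name, task encoding, and null sentinel are my own illustration, not actual code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Rough sketch of the blocking-queue approach: every store operation is
// funneled through one dedicated thread via a request queue; results come
// back on a response queue.
public class SerializedStore {
    enum Op { REGISTER, DEREGISTER, GET }

    static final class Task {
        final Op op; final int channelId; final Object attachment;
        Task(Op op, int channelId, Object attachment) {
            this.op = op; this.channelId = channelId; this.attachment = attachment;
        }
    }

    private static final Object NULL = new Object(); // BlockingQueue rejects real nulls

    private final BlockingQueue<Task> requests = new ArrayBlockingQueue<>(1024);
    private final BlockingQueue<Object> responses = new ArrayBlockingQueue<>(1024);
    private final Object[] channels;

    public SerializedStore(int capacity) {
        channels = new Object[capacity];
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Task t = requests.take();   // block until a task arrives
                    switch (t.op) {
                        case REGISTER:   channels[t.channelId] = t.attachment; break;
                        case DEREGISTER: channels[t.channelId] = null; break;
                        case GET:
                            Object c = channels[t.channelId];
                            responses.put(c == null ? NULL : c);
                            break;
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shut down quietly
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public void register(int id, Object channel) throws InterruptedException {
        requests.put(new Task(Op.REGISTER, id, channel));
    }

    public void deregister(int id) throws InterruptedException {
        requests.put(new Task(Op.DEREGISTER, id, null));
    }

    public Object getById(int id) throws InterruptedException {
        requests.put(new Task(Op.GET, id, null));
        Object r = responses.take();            // wait for the worker's reply
        return r == NULL ? null : r;
    }
}
```

Because requests from a single caller pass through one FIFO queue, a register followed by a getById from the same thread is always observed in order.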
Questions:
What is the performance of the second approach compared to the first?
What would you suggest for my situation?
I'm currently using stream I/O for LAN reads/writes. If I used NIO, the coordination between the NIO thread and the data-processing threads might add further complexity (e.g. post-operations). So I think this question is relevant for time-critical, stream-based, multi-channel network applications like mine.
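For reference, here is roughly what I mean by the first approach; the store API is simplified and Channel is replaced by Object for brevity:

```java
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of approach 1: a single ReentrantLock guards every
// operation on the channel store.
class ChannelStore {
    private final Object[] channels;
    private final ReentrantLock lock = new ReentrantLock();
    private int size = 0;

    ChannelStore(int capacity) {
        channels = new Object[capacity];
    }

    void register(int channelId, Object channel) {
        lock.lock();
        try {
            if (channels[channelId] == null) size++;
            channels[channelId] = channel;
        } finally {
            lock.unlock();
        }
    }

    void deregister(int channelId) {
        lock.lock();
        try {
            if (channels[channelId] != null) size--;
            channels[channelId] = null;
        } finally {
            lock.unlock();
        }
    }

    Object getById(int channelId) {
        lock.lock();
        try {
            return channels[channelId];
        } finally {
            lock.unlock();
        }
    }

    int size() {
        lock.lock();
        try {
            return size;
        } finally {
            lock.unlock();
        }
    }
}
```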
If I understand your use case correctly, this is a common problem in concurrent programming. One solution is the ring-buffer approach, which usually addresses both the synchronization problem and the problem of excessive object creation.
You can find a good implementation of this in the LMAX Disruptor library. See https://lmax-exchange.github.io/disruptor/ to learn more about it. But keep in mind that it is not magic and must be adapted to your use case.
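The core of the ring-buffer idea can be sketched with plain JDK types. This is a simplified single-producer/single-consumer buffer, not the Disruptor's actual implementation: slots are pre-allocated, and two sequence counters replace locks.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal single-producer/single-consumer ring buffer sketch.
// Safe only when exactly one thread calls offer() and one calls poll().
// Capacity must be a power of two so index masking works.
public class SpscRingBuffer<T> {
    private final Object[] slots;
    private final int mask;
    private final AtomicLong head = new AtomicLong(0); // next slot to read
    private final AtomicLong tail = new AtomicLong(0); // next slot to write

    public SpscRingBuffer(int capacityPowerOfTwo) {
        slots = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    // Called by the single producer; returns false when the buffer is full.
    public boolean offer(T value) {
        long t = tail.get();
        if (t - head.get() == slots.length) return false; // full
        slots[(int) (t & mask)] = value;
        tail.set(t + 1); // volatile write publishes the slot to the consumer
        return true;
    }

    // Called by the single consumer; returns null when the buffer is empty.
    @SuppressWarnings("unchecked")
    public T poll() {
        long h = head.get();
        if (h == tail.get()) return null; // empty
        T value = (T) slots[(int) (h & mask)];
        slots[(int) (h & mask)] = null;   // let the slot be garbage-collected
        head.set(h + 1);
        return value;
    }
}
```

The Disruptor generalizes this pattern with pre-allocated event objects, batching, and configurable wait strategies, which is where most of its performance advantage comes from.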
I am still learning Java and need a recommendation on best practices.
Here is the scenario:
An encrypted file comes in
A Java app picks up the file when it arrives.
In its main method, the class that listens for the file creates 5 blocking queues (one per consumer) and starts a producer thread and 5 consumer threads.
The producer thread reads the file and creates 1 big object that consists of 5 smaller objects.
The producer thread then puts the big object into each of the blocking queues.
Each consumer thread takes the big object from its own blocking queue, retrieves 1 of the 5 smaller objects, and writes a file with the information related to that small object.
My problem:
If anything goes wrong in the producer thread while it's reading the file, I want the listening class (the one that starts everything up) to know about it so that it can change the extension of the encrypted file to .err.
I also want the 5 consumer threads to know that something went wrong in the producer thread, so that each of them can change the extension of the file it creates to .err.
I'm not sure whether it would be better to pass a wrapper class through the blocking queues, or to use a static variable in the listening or producer class that all the threads can check to know whether an error occurred. Or, if there is a better solution, please let me know. Thank you for your help.
What if instead of having each child thread write out their results to a file, the results were aggregated back to a result handler? This way if there was an error, the result handler can handle it appropriately (by adding the .err extension).
Most of the performance advantages of concurrency have to do with better CPU usage, but since you're writing to a single piece of hardware (disk, probably) there really isn't an advantage to doing that concurrently anyway.
The main disadvantage to this approach would be that your memory overhead would be a little bigger, since you would have to keep the outputs from each consumer in memory until all five had finished writing, instead of being able to have them each finish and persist separately. Honestly, you'll have to do that anyway, since an error in one consumer could happen after some other consumer had already finished and persisted.
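Whichever structure you choose, the error signal itself can be as simple as a shared flag plus a poison pill to unblock waiting consumers. The following is a hypothetical sketch (all names are mine, not from the question), reduced to one producer and one consumer:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a shared AtomicBoolean lets the listening class and
// the consumers see that the producer failed, so each can rename its
// output file to .err. A poison pill unblocks a consumer stuck in take().
public class ErrorFlagDemo {
    static final String POISON = "POISON";

    public static boolean run() throws InterruptedException {
        AtomicBoolean producerFailed = new AtomicBoolean(false);
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                throw new RuntimeException("simulated read failure");
            } catch (RuntimeException e) {
                producerFailed.set(true);   // signal listener and consumers
            } finally {
                queue.offer(POISON);        // wake up the blocked consumer
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                String item = queue.take();
                if (POISON.equals(item) || producerFailed.get()) {
                    System.out.println("consumer: writing .err file");
                } else {
                    System.out.println("consumer: writing normal file");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return producerFailed.get();        // the listening class checks this
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("listener sees failure: " + run());
    }
}
```

An `AtomicBoolean` avoids the visibility pitfalls of a plain static field, which is why it is usually preferred over the static-variable option mentioned in the question.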
I have an interesting scenario-based question about Java threads.
Sam has designed an application. It segregates tasks that are critical and executed frequently from tasks that are non-critical and executed less frequently. He has prioritized these tasks based on their criticality and frequency of execution. After close scrutiny, he finds that the tasks designed to be non-critical are rarely getting executed. From what kind of problem is the application suffering?
I have figured it out as "starvation", but I am still confused about whether I am right or wrong.
Starvation is a reasonable term for what is going on here. However, your lecturer might have something more specific in mind (... I'm not sure what ...) so check your lecture notes and text books.
... I am still confused about whether I am right or wrong.
If you stated why you are confused, we might be able to sort out your confusion.
Is it possible to achieve multithreading with a single processor?
Multiprocessing: several jobs run at the same time (so it requires more than one processor).
Multitasking: the processor is shared among various tasks; scheduling algorithms come into play to context-switch between tasks (this does not necessarily need multiple processors).
Multithreading: a single process is broken into subtasks (threads), which can execute as in multitasking or multiprocessing, and their results can be combined at the end (this does not necessarily need multiple processors).
Links:
http://en.wikipedia.org/wiki/Computer_multitasking#Multithreading
http://en.wikipedia.org/wiki/Multiprocessing
http://en.wikipedia.org/wiki/Multiprogramming#Multiprogramming
Edit: To answer your question, multithreading is quite possible with one processor.
Yes, it is possible.
With a single processor, the threads will take turns executing. Exactly how this is implemented is up to the operating system.
If the work done is computation-heavy, you will probably lose more than you gain because of the added scheduling overhead. On the other hand, if there is a lot of waiting, for example for network resources, you can gain a lot from using several threads on a single processor.
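The waiting case is easy to demonstrate: two threads that each wait 100 ms finish in roughly 100 ms total, not 200 ms, regardless of the number of processors, because neither thread needs the CPU while it waits. `Thread.sleep` stands in for network I/O here, and timings are approximate:

```java
public class SingleCoreThreads {
    // Two "I/O-bound" tasks that just wait; their waits overlap even on
    // one core, so the total elapsed time is ~100 ms rather than ~200 ms.
    public static long runBoth() throws InterruptedException {
        long start = System.nanoTime();
        Thread t1 = new Thread(() -> sleepQuietly(100));
        Thread t2 = new Thread(() -> sleepQuietly(100));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return (System.nanoTime() - start) / 1_000_000; // elapsed ms
    }

    static void sleepQuietly(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("elapsed ~" + runBoth() + " ms");
    }
}
```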
Yes it is possible.
The threads get their turns in time slices, i.e. each thread executes for some particular interval and then another gets its turn.
For more info.
Time-slicing
Preemption
The thread concept is mainly used to achieve multitasking on a single processor: to minimize the idle time of the processor, we use multithreading in Java.
We know that if we execute Thread.currentThread().getPriority() we get a thread priority of 5. I am not able to find an answer about threads that have a priority higher than the main thread.
I assume you mean that this is somehow the index of the thread in a priority queue, and that therefore at least 4 other threads must exist.
Well, this isn't the case, it's not an index but a value used to compare its priority with other concurrent threads, not only in your VM but also on your system. In fact, threads can have the same priority.
Side note: when setting priorities on threads, always use the constants MIN_PRIORITY, NORM_PRIORITY and MAX_PRIORITY. If you need intermediate values, calculate them from the constants:
int mediumHighPriority = (Thread.NORM_PRIORITY+Thread.MAX_PRIORITY)/2;
The constant values may change in the future (they could have a wider range, or even be reversed so that a lower number means a higher priority, or NORM_PRIORITY could become lower or higher). If you use the constants rather than their values, you're on the safe side and the code is more legible.
You can enumerate all threads in the current program via ThreadGroup, see for example this answer.
When a Java application is started, the JVM creates the main ThreadGroup as a member of the system ThreadGroup and sets its priority to the default (i.e. NORM_PRIORITY).
This only applies to the threads in the current Java program though and will not provide any information about all the other threads running in other processes in the OS.
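A short sketch of that enumeration, walking up to the root ThreadGroup and listing every live thread with its priority:

```java
public class ListThreads {
    public static int printThreads() {
        // Walk up from the current group to the root ("system") group.
        ThreadGroup root = Thread.currentThread().getThreadGroup();
        while (root.getParent() != null) {
            root = root.getParent();
        }

        // Size the array with headroom: the thread count can change
        // between activeCount() and enumerate().
        Thread[] threads = new Thread[root.activeCount() * 2 + 1];
        int count = root.enumerate(threads, true); // true = recurse subgroups

        for (int i = 0; i < count; i++) {
            System.out.println(threads[i].getName()
                    + " priority=" + threads[i].getPriority());
        }
        return count;
    }

    public static void main(String[] args) {
        printThreads();
    }
}
```

Running this in a plain application typically shows the main thread at priority 5 alongside JVM housekeeping threads, some of which run at higher priorities.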