Modular Design Patterns - Java

I've started drawing plugs in Java (connectors rendered with Bezier curves), but only the visual part so far.
Now I'm wondering about building some kind of modular system with inputs and outputs, but I'm unsure how to implement it. Think of a modular synthesizer, or the concepts behind Pure Data / Max/MSP, where you have modules and every module has attributes, inputs and outputs.
What keywords should I search for to read up on this? I need some basic examples or abstract ideas for this kind of interface. Is there a design pattern that fits this idea?

Since you're asking for keywords: real-time design patterns. Heavily object-oriented designs are often a performance bottleneck in real-time applications, since all the objects (and, I guess, polymorphism to some extent) add overhead.
Why a real-time application? The graph you provided looks quite sophisticated:
you process the incoming data multiple times in parallel, split it up, merge it and so on.
Every node in the graph adds different effects and performs different computations, and some computations may take longer than others. The conclusion is that, in order to keep the data (sound) uniform, you have to keep it in sync, which is no trivial task.
I guess some other keywords would be: sound processing, filters. Or you could ask companies that work in that area for literature.
Leaving the time sensitivity aside, I put together a little OOP example; maybe an approach like that is sufficient for less complex scenarios:
import java.io.Closeable;
import java.io.IOException;
import java.util.Arrays;

public class ConnectionCable implements Runnable, Closeable {
private final InputLine in;
private final OutputLine out;
public ConnectionCable(InputLine in, OutputLine out) {
this.in = in;
this.out = out;
// cable connects open lines and closes them upon connection
if (in.isOpen() && out.isOpen()) {
in.close();
out.close();
}
}
@Override
public void run() {
byte[] data = new byte[1024];
// cable pumps data from the output line into the input line,
// forwarding only the bytes that were actually read
int n;
while ((n = out.read(data)) > 0)
in.write(Arrays.copyOf(data, n));
}
@Override
public void close() throws IOException {
in.open();
out.open();
}
}
interface Line {
void open();
void close();
boolean isOpen();
boolean isClosed();
}
interface InputLine extends Line {
int write(byte[] data);
}
interface OutputLine extends Line {
int read(byte[] data);
}
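To get a feel for how these pieces could be wired together, here is a minimal usage sketch. The BufferOutputLine, ConsoleInputLine and CableDemo classes are invented for illustration only; they are not part of the example above.

import java.io.ByteArrayInputStream;

// Hypothetical OutputLine backed by an in-memory byte array.
class BufferOutputLine implements OutputLine {
    private final ByteArrayInputStream source;
    private boolean open = true;

    BufferOutputLine(byte[] data) { this.source = new ByteArrayInputStream(data); }

    @Override public int read(byte[] data) { return source.read(data, 0, data.length); }
    @Override public void open() { open = true; }
    @Override public void close() { open = false; }
    @Override public boolean isOpen() { return open; }
    @Override public boolean isClosed() { return !open; }
}

// Hypothetical InputLine that just prints whatever is written to it.
class ConsoleInputLine implements InputLine {
    private boolean open = true;

    @Override public int write(byte[] data) {
        System.out.print(new String(data));
        return data.length;
    }
    @Override public void open() { open = true; }
    @Override public void close() { open = false; }
    @Override public boolean isOpen() { return open; }
    @Override public boolean isClosed() { return !open; }
}

class CableDemo {
    public static void main(String[] args) {
        OutputLine out = new BufferOutputLine("hello, patch".getBytes());
        InputLine in = new ConsoleInputLine();
        new ConnectionCable(in, out).run(); // pumps the bytes from out to in
    }
}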

Related

Why doesn't Java give a bit-read API?

I am using Java's ByteBuffer to save some basic data into streams. One situation is that I must transfer a "boolean list" from one machine to another over the internet, so I want the buffer to be as small as possible.
I know the normal way of doing this is to use the buffer like this:
public final void writeBool(boolean b) throws IOException {
writeByte(b ? 1 : 0);
}
public final void writeByte(int b) throws IOException {
if (buffer.remaining() < Byte.BYTES) {
flush();
}
buffer.put((byte) b);
}
public boolean readBool(long pos) {
return readByte(pos) == 1;
}
public int readByte(long pos) {
return buffer.get((int)pos) & 0xff;
}
This is a way of converting a boolean into a byte and storing it in the buffer.
But I'm wondering: why not just put a bit into the buffer, so that one byte can represent eight booleans?
The code might look like this, but Java doesn't have a writeBit function:
public final void writeBool(boolean b) throws IOException {
// java doesn't have it.
buffer.writeBit(b ? 0x1 : 0x0);
}
public final boolean readBool(long pos) throws IOException {
// java doesn't have it
return buffer.getBit(pos) == 0x01;
}
So I think the only way of doing that is to store eight booleans in a byte and write that, using something like ((0x01f >>> 4) & 0x01) == 1 to check whether the fifth boolean is true. But if I can get a byte, why not just let me get a bit?
Is there some other reason why Java can't let us operate on bits?
Yeah, so what I mean is: why not create a BitBuffer?
That would be a question for the Java / OpenJDK development team, if you want a definitive answer. However I expect they would make these points:
Such a class would have extremely limited utility in real applications.
Such a class is unnecessary, given that an application doing (notionally) bit-oriented I/O can be implemented using ByteBuffer and a small amount of "bit-twiddling".
There is the technical issue that mainstream operating systems and mainstream network protocols only support I/O down to the granularity of a byte1. So, for example, file lengths are recorded in bytes, and creating a file containing precisely 42 bits of data (for example) is problematic.
Anyway, there is nothing stopping you from designing and writing your own BitBuffer class; e.g. as a wrapper for ByteBuffer. And sharing it with other people who need such a thing.
Or looking on (say) Github for a Java class called BitBuffer.
1 - Indeed I don't know of any operating system, file system or network protocol that has a smaller granularity than this.
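For example, a bare-bones wrapper along those lines might look like the sketch below. The BitBuffer name and its API are made up for illustration; bits are packed most-significant-first into the bytes of an ordinary ByteBuffer, which is exactly the kind of "bit-twiddling" mentioned above.

import java.nio.ByteBuffer;

public class BitBuffer {
    private final ByteBuffer buffer;

    public BitBuffer(int byteCapacity) {
        // allocate() zero-fills, so every bit starts out as false
        this.buffer = ByteBuffer.allocate(byteCapacity);
    }

    public void writeBit(int bitIndex, boolean value) {
        int byteIndex = bitIndex / 8;
        int mask = 1 << (7 - (bitIndex % 8));
        byte b = buffer.get(byteIndex);
        buffer.put(byteIndex, (byte) (value ? (b | mask) : (b & ~mask)));
    }

    public boolean readBit(int bitIndex) {
        int byteIndex = bitIndex / 8;
        int mask = 1 << (7 - (bitIndex % 8));
        return (buffer.get(byteIndex) & mask) != 0;
    }

    public ByteBuffer asByteBuffer() {
        // the underlying storage is still byte-granular, as discussed above
        return buffer;
    }
}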

Designing classes for other developers to use in Java

class CSVReader {
private List<String> output;
private InputStream input;
public CSVReader(InputStream input) {
this.input = input;
}
public void read() throws Exception{
//do something with the inputstream
// create output list.
}
public List<String> getOutput() {
return Collections.unmodifiableList(output);
}
}
I am trying to create a simple class which will be part of a library. I would like to create code that satisfies the following conditions:
handles all potential errors or wraps them into library errors and
throws them.
creates meaningful and complete object states (no incomplete object structures).
easy to utilize by developers using the library
Now, when I evaluated the code above against these goals, I realized that I had failed badly. A developer using this code would have to write something like this:
CSVReader reader = new CSVReader(new FileInputStream("test.csv"));
reader.read();
reader.getOutput();
I see the following issues straight away:
- the developer has to call read() before getOutput(). There is no way to know this intuitively, which is probably bad design.
So, I decided to fix the code and write something like this:
public List<String> getOutput() throws IOException{
if(output==null)
read();
return Collections.unmodifiableList(output);
}
OR this
public List<String> getOutput() {
if(output==null)
throw new IncompleteStateException("invoke read before getoutput()");
return Collections.unmodifiableList(output);
}
OR this
public CSVReader(InputStream input) {
read(); //throw runtime exception
}
OR this
public List<String> read() throws IOException {
//read and create output list.
// return list
}
What is a good way to achieve my goals? Should the object state always be well defined, so that there is never a state where "output" is not defined, meaning I should create the output as part of the constructor? Or should the class ensure that a created instance is always valid by calling "read" whenever it finds that "output" is not defined, and just throw a runtime exception? What is a good approach / best practice here?
I would make read() private and have getOutput() call it as an implementation detail. If the point of exposing read() is to lazy-load the file, you can do that while exposing getOutput() only:
public List<String> getOutput() {
if (output == null) {
try {
output = read();
} catch (IOException e) {
// here you either wrap it in your own exception and declare that in the signature of getOutput, or don't catch it at all and make getOutput `throws IOException`
}
}
return Collections.unmodifiableList(output);
}
The advantage of this is that the interface of your class is very simple: you give me an input (via the constructor), I give you an output (via getOutput()), with no magic order of calls, while still preserving lazy loading, which is nice if the file is big.
Another advantage of removing read from the public API is that you can go from lazy loading to eager loading and vice versa without affecting your clients. If you expose read you have to account for it being called in all possible states of your object (before it has loaded, while it is already loading, after it has already loaded). In short, always expose as little as possible.
So to address your specific questions:
Yes, the object state should always be well defined. Your point that the client has no way of knowing it must call read() externally is indeed a design smell.
Yes, you could call read in the constructor and eagerly load everything up front. Deciding whether to lazy-load is an implementation detail dependent on your context; it should not matter to a client of your class.
Throwing an exception if read has not been called again puts the burden of calling things in the right, implicit order on the client, which is unnecessary: as you note, output is never really undefined, so the implementation itself can make the risk-free decision of when to call read.
I would suggest you make your class as small as possible, dropping the getOutput() method altogether.
The idea is to have a class that reads a CSV file and returns a list, representing the result. To achieve this, you can expose a single read() method, that will return a List<String>.
Something like:
public class CSVReader {
private final InputStream input;
public CSVReader(String filename) throws FileNotFoundException {
this.input = new FileInputStream(filename);
}
public List<String> read() {
// perform the actual reading here
}
}
You get a well-defined class, a small interface to maintain, and instances of CSVReader that are immutable.
Have getOutput check whether the output is null (or out of date) and load it automatically if it is. That way a user of your class does not have to care about the internal state of the class's file handling.
However, you may also want to expose a read function so that the user can choose to load the file when it is convenient. If you design the class for a concurrent environment, I would recommend doing so.
The first approach takes away some flexibility from the API: before the change the user could call read() in a context where an exception is expected, and then call getOutput() exception-free as many times as he pleases. Your change forces the user to catch a checked exception in contexts where it wasn't necessary before.
The second approach is how it should have been done in the first place: since calling read() is a prerequisite of calling getOutput(), it is a responsibility of your class to "catch" your users when they "forget" to make a call to read().
The third approach hides IOException, which may be a legitimate exception to catch. There is no way to let the user know if the exception is going to be thrown or not, which is a bad practice when designing runtime exceptions.
The root cause of your problem is that the class has two orthogonal responsibilities:
Reading a CSV, and
Storing the result of a read for later use.
If you separate these two responsibilities from each other, you would end up with a cleaner design, in which the users would have no confusion over what they must call, and in what order:
interface CSVData {
List<String> getOutput();
}
class CSVReader {
public static CSVData read(InputStream input) throws IOException {
...
}
}
You could combine the two into a single class with a factory method:
class CSVData {
private CSVData() { // No user instantiation
}
// Getting data is exception-free
public List<String> getOutput() {
...
}
// Creating instances requires a factory call
public static CSVData read(InputStream input) throws IOException {
...
}
}
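With the combined factory-method design, client code stays short and there is no call-order trap. A minimal usage sketch, assuming the CSVData class above ("test.csv" is just a placeholder path):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;

class CsvClientExample {
    public static void main(String[] args) throws IOException {
        try (InputStream input = new FileInputStream("test.csv")) {
            // the only place an IOException can surface is the factory call
            CSVData data = CSVData.read(input);
            // reading the parsed data afterwards is exception-free and repeatable
            List<String> lines = data.getOutput();
            lines.forEach(System.out::println);
        }
    }
}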

How to read and display large text files in Swing?

This might sound a bit complicated, I'll try to simplify what I am asking.
A program I am developing can read and write from/to files using a JTextArea. When files are rather large, it takes a cumbersome amount of time to read the data from the file into the text area. As an example, I have a file that currently has 40,000 lines of text, roughly 50 characters per line; some lines also wrap. There is quite a lot of text, and it takes much more time to read from that file than I would like.
Currently, I am using the standard read method of the JTextArea component with a BufferedReader instance. What I would like to do is load the JTextArea with only the amount of text that fits on screen, and load the rest of the off-screen text in a separate thread in the background.
Would using an InputStream, writing each character to an array and then writing the characters to the JTextArea be sufficient? Or should there be a different approach? I'm trying to achieve a fast and efficient read method.
There are two immediate issues at hand.
First, the need to read a file in such a way that it can progressively update the UI without causing unacceptable delays.
Second, the ability of the JTextArea to actually deal with this amount of data...
The first issue is relatively simple to fix. What you need to make sure of is that you are not blocking the Event Dispatching Thread while you read the file, and that you are only updating the JTextArea from within the context of the Event Dispatching Thread. To this end a SwingWorker is an excellent choice, for example...
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import javax.swing.JTextArea;
import javax.swing.SwingWorker;

public class FileReaderWorker extends SwingWorker<List<String>, String> {
private File file;
private JTextArea ta;
public FileReaderWorker(File file, JTextArea ta) {
this.file = file;
this.ta = ta;
}
public File getFile() {
return file;
}
public JTextArea getTextArea() {
return ta;
}
@Override
protected List<String> doInBackground() throws Exception {
List<String> contents = new ArrayList<>(256);
try (BufferedReader br = new BufferedReader(new FileReader(getFile()))) {
String text = null;
while ((text = br.readLine()) != null) {
// You will want to deal with adding back in the new line characters
// here if that is important to you...
contents.add(text);
publish(text);
}
}
return contents;
}
@Override
protected void done() {
try {
get();
} catch (InterruptedException | ExecutionException ex) {
ex.printStackTrace();
// Handle exception here...
}
}
@Override
protected void process(List<String> chunks) {
JTextArea ta = getTextArea();
for (String text : chunks) {
ta.append(text);
}
}
}
Take a look at Concurrency in Swing and Worker Threads and SwingWorker for more details
ps- You don't need to use the List to store the contents, I just did it as an example...
The second problem is far more complicated and would need some additional testing to ensure that it is actually a problem, but generally speaking, contents over about 1 MB tend to lead to issues...
To this end, you would need to be able to manage the JScrollPane, be able to request chunks of text from the file both in backwards and forwards direction and try and effectively "fudge" the process (so that you only have the text you need loaded, but can still make it look like you have all the text loaded within the JTextArea)...
You could also take a look at FileChannel, which provides more functionality than the standard java.io classes, including memory mapping; for starters, have a look at Reading, Writing, and Creating Files.
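As a rough sketch of that FileChannel route (the file name, chunk size and class name are placeholders, not part of the answer above), memory mapping lets you pull an arbitrary window of a large file without reading the whole thing up front:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

class MappedReadExample {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get("big.txt"), StandardOpenOption.READ)) {
            // map the whole file read-only (files over 2 GB would need several mapped regions)
            MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            // pull out, say, the first 8 KB as text for the visible portion of the view
            byte[] firstChunk = new byte[(int) Math.min(8192, channel.size())];
            map.get(firstChunk);
            System.out.println(new String(firstChunk, StandardCharsets.UTF_8));
        }
    }
}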
You also might consider using a JList or JTable which are highly optimised for displaying large quantities of data. There are limitations to this, as there is an expectation of fixed row heights, which when changed (to dynamic row heights) can affect the performance, but might be a suitable alternative...

Replacing text in a PrintStream

Is it possible to have a regexp replace in a printstream?
I have a piece of code that logs all text that is shown in my console windows but it also logs ANSI escape codes.
I have found this regexp "s:\x1B\[[0-9;]*[mK]::g" to remove them but that only works with strings.
Is there a way to apply a regex replace to a constant stream of strings and filter out the ANSI escape codes?
If possible, dumb it down as much as possible; I am still a newbie when it comes to programming, and I am just building on an already existing program.
EDIT:
I have this code, which I found somewhere else on Stack Overflow; it allows me to stream to a log file and to the console at the same time.
This is what I use, and I then set System.out to tee afterwards:
Logging tee = new Logging(file, System.out);
package com.md_5.mc.chat;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;
public class Logging extends PrintStream
{
private final PrintStream second;
public Logging(OutputStream main, PrintStream second)
{
super(main);
this.second = second;
}
public void close()
{
super.close();
}
public void flush()
{
super.flush();
this.second.flush();
}
public void write(byte[] buf, int off, int len)
{
super.write(buf, off, len);
this.second.write(buf, off, len);
}
public void write(int b)
{
super.write(b);
this.second.write(b);
}
public void write(byte[] b) throws IOException
{
super.write(b);
this.second.write(b);
}
}
Create a subclass of FilterOutputStream, say RegexOutputStream. This class should buffer all data written to it (from the different write(...) methods). In the flush() method, it should apply the regex and then write the result to the underlying OutputStream.
Next, instantiate the PrintStream to write to the RegexOutputStream. This way you don't need to alter the behaviour of the PrintStream class. In case you don't want the filtering any more, you can just take the RegexOutputStream out of the chain and everything will work again.
Note that, depending on how you use the PrintStream, this might cause the RegexOutputStream's buffer to get quite big. If you create the PrintStream with autoflush enabled, it will flush after every line and after every byte array write. See its JavaDoc for details.
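A minimal sketch of that idea might look like the following. The RegexOutputStream class and the ANSI_PATTERN constant are invented for illustration; only the FilterOutputStream and ByteArrayOutputStream APIs are standard.

import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class RegexOutputStream extends FilterOutputStream {
    // same kind of pattern as in the question, written as a Java regex
    private static final String ANSI_PATTERN = "\\x1B\\[[0-9;]*[mK]";
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    public RegexOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        buffer.write(b);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        buffer.write(b, off, len);
    }

    @Override
    public void flush() throws IOException {
        // apply the regex to everything buffered so far, then pass it on
        String text = buffer.toString(); // platform default charset
        out.write(text.replaceAll(ANSI_PATTERN, "").getBytes());
        buffer.reset();
        out.flush();
    }
}

You would then build the tee as something like new Logging(new RegexOutputStream(new FileOutputStream(file)), System.out), so the log file gets the filtered text while the console output stays untouched.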
You could subclass the print stream in question and perform your regexp replacement prior to calling the appropriate super method, e.g.:
public class ExampleStream extends PrintStream {
public ExampleStream(OutputStream out) {
super(out);
}
@Override
public void print(String s) {
super.print(s.replaceAll(ANSI_PATTERN, ""));
}
}
I think the code in the Logging class is not a good approach (at least as it is):
If you look at the PrintStream source code, you will find that the methods currently redefined might not be used at all: the PrintStream#print(...) methods delegate to textOut#write(...) (not to the redefined OutputStream#write(...)).
Therefore, you should redefine the print(String) and print(char[]) methods in order to effectively filter the output.
There are a few examples of redefined methods in the answers (including further down on this one).
Alternatively, if you just want a PrintStream that filters out the ANSI codes (as I originally understood), then it would be more convenient to implement it on a FilterOutputStream (as mthmulders suggests), since you will have to redefine less and it will be easier to re-use:
Make a copy of the BufferedOutputStream class and name it however you prefer (e.g. TrimAnsiBufferedStream).
Then redefine the flushBuffer() method:
private void flushBuffer() throws IOException {
if (count > 0) {
String s = new String(buf, 0, count); // Uses system encoding.
s = s.replaceAll(ANSI_PATTERN, "");
out.write(s.getBytes());
count = 0;
}
}
When you need to instantiate a PrintStream that replaces ANSI, invoke new PrintStream(new TrimAnsiBufferedStream(nestedStream)).
This is probably not bullet-proof (e.g. with regard to the encoding configuration, the buffer size not being big enough, or the flushing options in PrintStream), but I won't overcomplicate it.
By the way. Welcome kukelekuuk00. Just be sure to read the FAQ and feedback on the answers (we care about you, please reciprocate).

Java: Framework for thread shared data

I've written a few multithreaded hobby programs, and some during my previous (engineering/physics) studies as well, so I consider myself to have above-beginner knowledge in the area of synchronization/thread safety and primitives, and of what the average user finds challenging about the JMM and multiple threads, etc.
What I find I need, and what there is no proper method for, is marking instance or static members of classes as shared by different threads. Think about it: we have access rules such as private/protected/public, conventions on how to name getters/setters, and a lot of other things.
But what about threading? What if I want to mark a variable as thread-shared and have it follow certain rules? Volatile fields or atomic references might do the job, but sometimes you just do need to use mutexes. And when you have to remember to use something manually... you will forget about it at some point :)
So I had an idea, and I see I am not the first; I also checked out http://checkthread.org/example-threadsafe.html - they seem to have a pretty decent code analyzer, which I might try later, that sort of lets me do some of the things I want.
But coming back to the initial problem. Let's say we need something a little more low-level than a message-passing framework and a little more high-level than primitive mutexes... What do we have... well... nothing?
So basically, what I've made is a sort of pure-Java, super-simple framework for threading that lets you declare class members as shared or non-shared... well, sort of :).
Below is an example of how it could be used:
public class SimClient extends AbstractLooper {
private static final int DEFAULT_HEARTBEAT_TIMEOUT_MILLIS = 2000;
// Accessed by single threads only
private final SocketAddress socketAddress;
private final Parser parser;
private final Callback cb;
private final Heart heart;
private boolean lookingForFirstMsg = true;
private BufferedInputStream is;
// May be accessed by several threads (T*)
private final Shared<AllThreadsVars> shared = new Shared<>(new AllThreadsVars());
.
.
.
.
static class AllThreadsVars {
public boolean connected = false;
public Socket socket = new Socket();
public BufferedOutputStream os = null;
public long lastMessageAt = 0;
}
And to access the variables marked as thread shared you must send a runnable-like functor to the Shared object:
public final void transmit(final byte[] data) {
shared.run(new SharedRunnable<AllThreadsVars, Object, Object>() {
@Override
public Object run(final AllThreadsVars sharedVariable, final Object input) {
try {
if (sharedVariable.socket.isConnected() && sharedVariable.os != null) {
sharedVariable.os.write(data);
sharedVariable.os.flush();
}
} catch (final Exception e) { // Disconnected
setLastMessageAt(0);
}
return null;
}
}, null);
}
Where a shared runnable is defined like:
public interface SharedRunnable<SHARED_TYPE, INPUT, OUTPUT> {
OUTPUT run(final SHARED_TYPE s, final INPUT input);
}
Where is this going?
Well, this gives me some help (yes, you can leak things out and break it, but it's far less likely): I can mark variable sets (not just single variables) as thread-shared and, once that is done, have it guaranteed at compile time (I cannot forget to synchronize some method). It also allows me to standardize and run tests looking for possible deadlocks at compile time (though at the moment I have only implemented that at runtime, because doing it at compile time with the above framework will probably require more than just the Java compiler).
Basically this is extremely useful to me, and I'm wondering whether I'm just reinventing the wheel here or whether this could be some anti-pattern I don't know of. And I really don't know who to ask. (Oh yeah, and Shared.run(SharedRunnable r, INPUT input) works just like this:
private final <OUTPUT, INPUT> OUTPUT run(final SharedRunnable<SHARED_TYPE, INPUT, OUTPUT> r, final INPUT input) {
try {
lock.lock();
return r.run(sharedVariable, input);
} finally {
lock.unlock();
}
}
This is just my own experimentation, so it's not really finished by any means, but I have one decent project using it right now and it's really helping out a lot.)
You mean something like this? (Which can be enforced by tools like FindBugs.)
If you have values which should be shared, the best approach is to encapsulate this within the class. That way the caller does not need to know what threading model you are using. If you want to know what model is used internally, you can read the source; however, the caller cannot forget to access a ConcurrentMap (for example) correctly, because all of its methods are thread-safe.
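As a small sketch of that encapsulation idea (the class name and fields are invented for illustration), the caller never sees a lock or a threading model, only methods that are safe to call from any thread:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConnectionRegistry {
    // the shared state is an internal detail; callers only use the public methods
    private final ConcurrentMap<String, Long> lastMessageAt = new ConcurrentHashMap<>();

    public void touch(String connectionId) {
        lastMessageAt.put(connectionId, System.currentTimeMillis());
    }

    public boolean isStale(String connectionId, long timeoutMillis) {
        Long last = lastMessageAt.get(connectionId);
        return last == null || System.currentTimeMillis() - last > timeoutMillis;
    }
}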
