I was recently asked in a coding interview to write a simple Java console app that does some file I/O and displays the data. I was going to go to town with a DAO, but since I never manipulate the data past a read, the entire idea of a DAO seems like overkill.
Does anyone know a clean way to ensure separation of concerns without the weight of full CRUD when you don't need it?
This looks like the standard MVC pattern: your console is the view, the code that reads the file is the controller, and the code that holds a file line (or the whole file content) is your model.
You can simplify it further to just View and Model, where the model encapsulates both reading the file and wrapping its content in a Java class.
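For example, a minimal sketch of that View/Model split; the class names ReportModel and ConsoleView are just illustrative, and the file is read with java.nio:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Model: encapsulates reading the file and exposing its content as Java objects.
class ReportModel {
    private final String path;

    ReportModel(String path) {
        this.path = path;
    }

    List<String> lines() throws IOException {
        return Files.readAllLines(Paths.get(path));
    }
}

// View: only knows how to render what the model gives it.
class ConsoleView {
    void render(List<String> lines) {
        lines.forEach(System.out::println);
    }
}

public class App {
    public static void main(String[] args) throws IOException {
        new ConsoleView().render(new ReportModel("data.txt").lines());
    }
}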
How about Martin Fowler's Table Data Gateway pattern, explained here? Just include the find (read) methods and omit create, insert, and update.
You could also look at the Command/Query separation pattern, where commands perform the create, update, and delete operations and queries exist purely for reading.
You then implement only what you need and leave out the rest.
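For example, a minimal sketch of the query side only (the interface and class names are illustrative, not from any particular library):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Query side only: read operations, no create/update/delete.
interface RecordQueries {
    List<String> findAll() throws IOException;
}

// A RecordCommands interface with save/delete methods would exist only if writes were needed.
class FileRecordQueries implements RecordQueries {
    private final String path;

    FileRecordQueries(String path) {
        this.path = path;
    }

    @Override
    public List<String> findAll() throws IOException {
        return Files.readAllLines(Paths.get(path));
    }
}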
This question was asked in an interview, so there wasn't much time for detailed design. As a minimal answer to the concerns above, the following structure provides some flexibility; the details can be filled in as the requirements dictate.
public interface IODevice {
    String read();

    void write(String data);
}

class FileIO implements IODevice {
    @Override
    public String read() {
        // read and return the file content
        return null;
    }

    @Override
    public void write(String data) {
        // write data to the file
    }
}

class ConsoleIO implements IODevice {
    @Override
    public String read() {
        // read a line from the console
        return null;
    }

    @Override
    public void write(String data) {
        // print data to the console
    }
}

public class DataConverter {
    public static void main(String[] args) {
        FileIO fData1 = null;    // ... appropriately obtained instance
        FileIO fData2 = null;    // ... appropriately obtained instance
        ConsoleIO cData = null;  // ... appropriately obtained instance
        cData.write(fData2.read());
        fData1.write(cData.read());
    }
}
The client class uses only the devices' APIs. This keeps open the option of extending the interface with new device wrappers (e.g. XML, streams, etc.).
I have a "legacy" code that I want to refactor.
The code basically does a remote call to a server and gets back a reply. Then according to the reply executes accordingly.
Example of skeleton of the code:
public Object processResponse(String responseType, Object response) {
    if (responseType.equals(CLIENT_REGISTERED)) {
        //code
        //code ...
    }
    else if (responseType.equals(CLIENT_ABORTED)) {
        //code
        //code....
    }
    else if (responseType.equals(DATA_SPLIT)) {
        //code
        //code...
    }
    // etc.
The problem is that there are many, many if/else branches and the code inside each branch is not trivial,
so it becomes hard to maintain.
I was wondering: what is the best pattern for this?
One thought I had was to create a single object with method names matching the responseType values, and then inside processResponse use reflection to call the method with the same name as the responseType.
This would clean up processResponse, but it just moves the code into a single object with many, many methods, and I think reflection would cause performance issues.
Is there a nice design approach/pattern to clean this up?
Two approaches:
Strategy pattern http://www.dofactory.com/javascript/strategy-design-pattern
Create a dictionary where the key is metadata (in your case, the metadata is the responseType) and the value is a function.
For example:
Put this in the constructor:
responses = new HashMap<String, SomeAbstraction>();
responses.put(CLIENT_REGISTERED, new ImplementationForRegisteredClient());
responses.put(CLIENT_ABORTED, new ImplementationForAbortedClient());
where ImplementationForRegisteredClient and ImplementationForAbortedClient implement SomeAbstraction
and call this dictionary via
responses.get(responseType).methodOfYourAbstraction(someParams);
If you want to follow the principle of DI, you can inject this dictionary into your client class.
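A slightly fuller, self-contained sketch of that second approach, with the map injected through the constructor as suggested; ResponseHandler, ResponseDispatcher, and the handler bodies are my illustrative names, not from your code:
import java.util.HashMap;
import java.util.Map;

interface ResponseHandler {
    Object handle(Object response);
}

class ResponseDispatcher {

    private final Map<String, ResponseHandler> handlers;

    // The map can be built by hand, as below, or injected by a DI container.
    ResponseDispatcher(Map<String, ResponseHandler> handlers) {
        this.handlers = handlers;
    }

    Object processResponse(String responseType, Object response) {
        ResponseHandler handler = handlers.get(responseType);
        if (handler == null) {
            throw new IllegalArgumentException("Unknown response type: " + responseType);
        }
        return handler.handle(response);
    }
}

class Demo {
    public static void main(String[] args) {
        Map<String, ResponseHandler> handlers = new HashMap<>();
        handlers.put("CLIENT_REGISTERED", response -> "handled registration of " + response);
        handlers.put("CLIENT_ABORTED", response -> "handled abort of " + response);

        ResponseDispatcher dispatcher = new ResponseDispatcher(handlers);
        System.out.println(dispatcher.processResponse("CLIENT_REGISTERED", "client-42"));
    }
}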
My first cut would be to replace the if/else if structures with switch/case:
public Object processResponse(String responseType, Object response) {
    switch (responseType) {
        case CLIENT_REGISTERED: {
            //code ...
            break;
        }
        case CLIENT_ABORTED: {
            //code....
            break;
        }
        case DATA_SPLIT: {
            //code...
            break;
        }
    }
}
From there I'd probably extract each block as a method, and from there apply the Strategy pattern. Stop at whatever point feels right.
The case you've described seems to fit the Strategy pattern perfectly. In particular, you have many variants of an algorithm, i.e. the code executed according to the response of the remote server call.
Implementing the Strategy pattern means defining a class hierarchy, such as the following:
public interface ResponseProcessor {
    void execute(Context ctx);
}

class ClientRegistered implements ResponseProcessor {
    public void execute(Context ctx) {
        // Actions corresponding to a client that is registered
        // ...
    }
}

class ClientAborted implements ResponseProcessor {
    public void execute(Context ctx) {
        // Actions corresponding to a client aborted
        // ...
    }
}
// and so on...
The Context type should contain all the information needed to execute each strategy. Note that if different strategies share some algorithm pieces, you could also use the Template Method pattern among them.
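For example, a small Template Method sketch building on the ResponseProcessor interface and Context type above; the step names here are purely illustrative:
abstract class AbstractResponseProcessor implements ResponseProcessor {
    @Override
    public final void execute(Context ctx) {
        logResponse(ctx);   // shared step
        handle(ctx);        // variant step implemented by each concrete strategy
        acknowledge(ctx);   // shared step
    }

    protected abstract void handle(Context ctx);

    private void logResponse(Context ctx) { /* shared logging */ }

    private void acknowledge(Context ctx) { /* shared acknowledgement */ }
}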
You need a factory to create the particular strategy at runtime. The factory builds a strategy from the response received. A possible implementation is the one suggested by @Sattar Imamov; the factory will contain the if .. else code.
If the strategy classes are not too heavy to build and they don't need any external information at build time, you can also map each strategy to an enumeration value.
public enum ResponseType {
    CLIENT_REGISTERED(new ClientRegistered()),
    CLIENT_ABORTED(new ClientAborted()),
    DATA_SPLIT(new DataSplit());

    // Processor associated with a response
    private ResponseProcessor processor;

    private ResponseType(ResponseProcessor processor) {
        this.processor = processor;
    }

    public ResponseProcessor getProcessor() {
        return this.processor;
    }
}
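With the enum in place, processResponse collapses to a lookup. This is just a minimal sketch: it assumes the incoming responseType strings match the enum constant names, and the Context constructor and getResult accessor below are hypothetical:
public Object processResponse(String responseType, Object response) {
    Context ctx = new Context(response);  // hypothetical constructor wrapping the raw response
    ResponseType.valueOf(responseType).getProcessor().execute(ctx);
    return ctx.getResult();               // hypothetical accessor for whatever the strategy produced
}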
I have a class with a few methods advised by an input-validation aspect (it validates that all input parameters are non-null/non-empty strings).
I am facing an issue while writing test cases for them and want to verify whether this is indeed a bad design.
Here's a very simplified version of my class:
public class A {
    public String one(String word) {
        // Some actions
        String val = two(word);
        // Some more actions
        return val;
    }

    protected String two(String word) {
        // Some actions
        return word;
    }
}
Now while writing test cases for one() I use Mockito and want to mock calls to two(). So I use:
@Spy
A a;

@Test
void test() {
    doReturn("Bye").when(a).two(Mockito.anyString());
    a.one("hello");
    // Some validations
}
This test fails: the doReturn() line fails because the input passed to two() is empty.
Should I not mock two(), or can I make this work somehow?
Edit:
Adding a more specific example, as requested, where the two methods are in two different classes:
Create a page through a web service. This builds a PUT request, executes it, and returns a response.
public class AUtility implements BaseUtility {
    public Response create(Params params) {
        try {
            PutMethod putRequest = buildPUTRequest(params.getAttr1(), params.getAttr2());
            return Utils.buildResponse(client.executeMethod(putRequest),
                    params.getAttr3(),
                    params.getAttr4());
        } catch (Exception e) {
            throw new AppException(e);
        }
    }
}
The PUT request marshals the data into a file in order to write it through the HttpClient:
private PutMethod buildPUTRequest(final String url, final Object obj) throws IOException, JAXBException {
    // Create a temp file to store the stream
    File tempFile = File.createTempFile(APPLICATION_LABEL, XML_LABEL);
    decoder.marshal(obj, tempFile);

    // Build the put method
    return putMethod;
}
XMLMarshaller
public interface XMLDecoder extends Decoder {
    // Implementations perform the marshalling operations
    void marshal(Object obj, File tempFile) throws IOException, JAXBException;
}
The test fails on its second line (the marshal stubbing), with the inputs being null.
@Test
public void createPageParamsHttpException() throws HttpException, IOException, JAXBException {
    expectedException.expect(AppException.class);
    doNothing().when(decoder).marshal(Mockito.anyString(), Mockito.any(File.class));
    doThrow(HttpException.class).when(client).executeMethod(Mockito.any(HttpMethod.class));
    Params params = new Params(new Application(),
            APPLICATION_URL_LABEL,
            SITE_NAME_LABEL,
            URL_WITHOUT_HTTP_N_HTML);
    utility.createPage(params);
}
Any idea how I should proceed?
You don't want to do this.
You are inherently changing the behavior of the class. If you change what two() does, how do you know that one() will do what it's supposed to do in production?
If you truly want to do this, you should extract the behavior of two() into another top level class, and then inject the dependency into A. Then you can mock this dependency and you don't have to worry about going to the trouble of creating a partial mock for A.
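A rough sketch of that extraction and of how the test then avoids the partial mock; the WordService class, the method bodies, and the use of JUnit 5 with a plain Mockito.mock are my assumptions for illustration:
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.mockito.Mockito;

// Hypothetical extraction: the behaviour of two() moves into its own collaborator.
class WordService {
    String two(String word) {
        // the original logic of two() would live here
        return word.toUpperCase();
    }
}

class A {
    private final WordService wordService;

    A(WordService wordService) {   // dependency injected through the constructor
        this.wordService = wordService;
    }

    String one(String word) {
        // some actions
        return wordService.two(word);
    }
}

class ATest {
    @Test
    void oneUsesTheCollaborator() {
        WordService wordService = Mockito.mock(WordService.class);
        Mockito.when(wordService.two(Mockito.anyString())).thenReturn("Bye");

        assertEquals("Bye", new A(wordService).one("hello"));
    }
}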
In a similar vein, if you must keep two() in the same class (because its behavior is part of the same responsibility that is assigned to A; see the Single Responsibility Principle), why is it public?
The reason you are having trouble is that you are violating the SRP; see my note above. You said this:
This builds a putRequest, executes it and returns a response.
You should not be trying to test the behavior of all three of those things at the same time. Ultimately, this method does not really do anything itself. The buildPUTRequest method does, and it shouldn't be in a class called AUtility; it should be in something like a RequestFactory. Then you would want to test the Utils.buildResponse method, except that shouldn't be in a class called Utils either; it should be in a class called Responder or something similar, and that method absolutely should not be static.
Work on giving your classes better names; if you can't come up with a good name, that probably means the class does too much and should be refactored. And a method that merely wraps the work of two other methods doesn't need to be unit tested. Integration tested, perhaps, but that's another story.
How does one go about finding all subclasses of a given class (or all implementors of a given interface) in Java?
As of now, I have a method to do this, but I find it quite inefficient (to say the least).
The method is:
Get a list of all class names that exist on the class path
Load each class and test to see if it is a subclass or implementor of the desired class or interface
In Eclipse, there is a nice feature called the Type Hierarchy that manages to show this quite efficiently.
How does one go about doing this programmatically?
Scanning for classes is not easy with pure Java.
The Spring Framework offers a class called ClassPathScanningCandidateComponentProvider that can do what you need. The following example finds all subclasses of MyClass in the package org.example.package:
ClassPathScanningCandidateComponentProvider provider = new ClassPathScanningCandidateComponentProvider(false);
provider.addIncludeFilter(new AssignableTypeFilter(MyClass.class));
// scan in org.example.package
Set<BeanDefinition> components = provider.findCandidateComponents("org/example/package");
for (BeanDefinition component : components) {
    Class cls = Class.forName(component.getBeanClassName());
    // use class cls found
}
This method has the additional benefit of using a bytecode analyzer to find the candidates which means it will not load all classes it scans.
There is no other way to do it than what you described. Think about it: how could anyone know what classes extend ClassX without scanning every class on the classpath?
Eclipse can only tell you about the super and subclasses in what seems to be an "efficient" amount of time because it already has all of the type data loaded at the point where you press the "Display in Type Hierarchy" button (since it is constantly compiling your classes, knows about everything on the classpath, etc).
This is not possible to do using only the built-in Java Reflection API.
A project exists that does the necessary scanning and indexing of your classpath so you can get access to this information...
Reflections
A Java runtime metadata analysis, in the spirit of Scannotations
Reflections scans your classpath, indexes the metadata, allows you to query it on runtime and may save and collect that information for many modules within your project.
Using Reflections you can query your metadata for:
get all subtypes of some type
get all types annotated with some annotation
get all types annotated with some annotation, including annotation parameters matching
get all methods annotated with some annotation
(disclaimer: I have not used it, but the project's description seems to be an exact fit for your needs.)
Try ClassGraph. (Disclaimer, I am the author). ClassGraph supports scanning for subclasses of a given class, either at runtime or at build time, but also much more. ClassGraph can build an abstract representation of the entire class graph (all classes, annotations, methods, method parameters, and fields) in memory, for all classes on the classpath, or for classes in selected packages, and you can query this class graph however you want. ClassGraph supports more classpath specification mechanisms and classloaders than any other scanner, and also works seamlessly with the new JPMS module system, so if you base your code on ClassGraph, your code will be maximally portable. See the API here.
Don't forget that the generated Javadoc for a class will include a list of known subclasses (and for interfaces, known implementing classes).
I know I'm a few years late to this party, but I came across this question while trying to solve the same problem. If you're writing an Eclipse plugin (and can thus take advantage of its caching, etc.), you can use Eclipse's internal search programmatically to find classes which implement an interface. Here's my (very rough) first cut:
protected void listImplementingClasses(String iface) throws CoreException {
    final IJavaProject project = <get your project here>;
    try {
        final IType ifaceType = project.findType(iface);
        final SearchPattern ifacePattern =
                SearchPattern.createPattern(ifaceType, IJavaSearchConstants.IMPLEMENTORS);
        final IJavaSearchScope scope = SearchEngine.createWorkspaceScope();
        final SearchEngine searchEngine = new SearchEngine();
        final LinkedList<SearchMatch> results = new LinkedList<SearchMatch>();
        searchEngine.search(ifacePattern,
                new SearchParticipant[]{ SearchEngine.getDefaultSearchParticipant() },
                scope,
                new SearchRequestor() {
                    @Override
                    public void acceptSearchMatch(SearchMatch match) throws CoreException {
                        results.add(match);
                    }
                },
                new IProgressMonitor() {
                    @Override
                    public void beginTask(String name, int totalWork) {
                    }

                    @Override
                    public void done() {
                        System.out.println(results);
                    }

                    @Override
                    public void internalWorked(double work) {
                    }

                    @Override
                    public boolean isCanceled() {
                        return false;
                    }

                    @Override
                    public void setCanceled(boolean value) {
                    }

                    @Override
                    public void setTaskName(String name) {
                    }

                    @Override
                    public void subTask(String name) {
                    }

                    @Override
                    public void worked(int work) {
                    }
                });
    } catch (JavaModelException e) {
        e.printStackTrace();
    }
}
The first problem I see so far is that I'm only catching classes which directly implement the interface, not all their subclasses - but a little recursion never hurt anyone.
I did this several years ago. The most reliable way to do this (i.e. with official Java APIs and no external dependencies) is to write a custom doclet to produce a list that can be read at runtime.
You can run it from the command line like this:
javadoc -d build -doclet com.example.ObjectListDoclet -sourcepath java/src -subpackages com.example
or run it from ant like this:
<javadoc sourcepath="${src}" packagenames="*" >
<doclet name="com.example.ObjectListDoclet" path="${build}"/>
</javadoc>
Here's the basic code:
public final class ObjectListDoclet {
    public static final String TOP_CLASS_NAME = "com.example.MyClass";

    /** Doclet entry point. */
    public static boolean start(RootDoc root) throws Exception {
        try {
            ClassDoc topClassDoc = root.classNamed(TOP_CLASS_NAME);
            for (ClassDoc classDoc : root.classes()) {
                if (classDoc.subclassOf(topClassDoc)) {
                    System.out.println(classDoc);
                }
            }
            return true;
        }
        catch (Exception ex) {
            ex.printStackTrace();
            return false;
        }
    }
}
For simplicity, I've removed command line argument parsing and I'm writing to System.out rather than a file.
Keeping in mind the limitations mentioned in the other answers, you can also use openpojo's PojoClassFactory (available on Maven) in the following manner:
for (PojoClass pojoClass : PojoClassFactory.enumerateClassesByExtendingType(packageRoot, Superclass.class, null)) {
    System.out.println(pojoClass.getClazz());
}
Where packageRoot is the root String of the packages you wish to search in (e.g. "com.mycompany" or even just "com"), and Superclass is your supertype (this works on interfaces as well).
Depending on your particular requirements, in some cases Java's service loader mechanism might achieve what you're after.
In short, it allows developers to explicitly declare that a class subclasses some other class (or implements some interface) by listing it in a file in the JAR/WAR file's META-INF/services directory. It can then be discovered using the java.util.ServiceLoader class which, when given a Class object, will generate instances of all the declared subclasses of that class (or, if the Class represents an interface, all the classes implementing that interface).
The main advantage of this approach is that there is no need to manually scan the entire classpath for subclasses - all the discovery logic is contained within the ServiceLoader class, and it only loads the classes explicitly declared in the META-INF/services directory (not every class on the classpath).
There are, however, some disadvantages:
It won't find all subclasses, only those that are explicitly declared. As such, if you need to truly find all subclasses, this approach may be insufficient.
It requires the developer to explicitly declare the class under the META-INF/services directory. This is an additional burden on the developer, and can be error-prone.
The ServiceLoader.iterator() generates subclass instances, not their Class objects. This causes two issues:
You don't get any say on how the subclasses are constructed - the no-arg constructor is used to create the instances.
As such, the subclasses must have a default constructor, or must explicitly declare a no-arg constructor.
Apparently Java 9 will be addressing some of these shortcomings (in particular, the ones regarding instantiation of subclasses).
An Example
Suppose you're interested in finding classes that implement an interface com.example.Example:
package com.example;
public interface Example {
public String getStr();
}
The class com.example.ExampleImpl implements that interface:
package com.example;
public class ExampleImpl implements Example {
public String getStr() {
return "ExampleImpl's string.";
}
}
You would declare that the class ExampleImpl is an implementation of Example by creating a file META-INF/services/com.example.Example containing the text com.example.ExampleImpl.
Then, you could obtain an instance of each implementation of Example (including an instance of ExampleImpl) as follows:
ServiceLoader<Example> loader = ServiceLoader.load(Example.class);
for (Example example : loader) {
System.out.println(example.getStr());
}
// Prints "ExampleImpl's string.", plus whatever is returned
// by other declared implementations of com.example.Example.
It should be noted as well that this will, of course, only find subclasses that exist on your current classpath. Presumably this is OK for what you are currently looking at, and chances are you did consider this; but if you have at any point released a non-final class into the wild (for varying levels of "wild"), then it is entirely feasible that someone else has written their own subclass that you will not know about.
So if you want to see all subclasses because you intend to make a change and want to see how it affects subclasses' behaviour, bear in mind the subclasses you can't see. Ideally, all of your non-private methods, and the class itself, should be well documented; make changes according to that documentation without changing the semantics of methods or non-private fields, and your changes should be backwards compatible, at least for any subclass that followed your definition of the superclass.
The reason you see a difference between your implementation and Eclipse is that you scan each time, whereas Eclipse (and other tools) scan only once (usually during project load) and build an index. The next time you ask for the data, it doesn't scan again but looks at the index.
I'm using a reflection lib, which scans your classpath for all subclasses: https://github.com/ronmamo/reflections
This is how it would be done:
Reflections reflections = new Reflections("my.project");
Set<Class<? extends SomeType>> subTypes = reflections.getSubTypesOf(SomeType.class);
You can use the org.reflections library and then create an object of the Reflections class. Using this object, you can get a list of all subclasses of a given class.
https://www.javadoc.io/doc/org.reflections/reflections/0.9.10/org/reflections/Reflections.html
Reflections reflections = new Reflections("my.project.prefix");
System.out.println(reflections.getSubTypesOf(A.class));
You can add them to a static map inside the parent class's constructor, keyed by this.getClass().getName() (or create a default constructor that does this), but the map will only be populated at runtime as instances are created. If lazy initialization is an option, you can try this approach.
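A rough sketch of that self-registration idea (class names are illustrative); note that only subclasses that have actually been instantiated will show up in the map:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SuperClass {
    // Filled at runtime as instances are created; this is not a classpath scan.
    static final Map<String, Class<?>> REGISTRY = new ConcurrentHashMap<>();

    SuperClass() {
        REGISTRY.put(this.getClass().getName(), this.getClass());
    }
}

class SubA extends SuperClass { }

class SubB extends SuperClass { }

class RegistryDemo {
    public static void main(String[] args) {
        new SubA();
        new SubB();
        // Prints the fully qualified names of the subclasses seen so far.
        System.out.println(SuperClass.REGISTRY.keySet());
    }
}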
I just wrote a simple demo that uses org.reflections.Reflections to get the subclasses of an abstract class:
https://github.com/xmeng1/ReflectionsDemo
I needed to do this as a test case, to see if new classes had been added to the code. This is what I did
final static File rootFolder = new File(SuperClass.class.getProtectionDomain().getCodeSource().getLocation().getPath());
private static ArrayList<String> files = new ArrayList<String>();

static {
    listFilesForFolder(rootFolder);
}

@Test(timeout = 1000)
public void testNumberOfSubclasses() {
    ArrayList<String> listSubclasses = new ArrayList<>(files);
    listSubclasses.removeIf(s -> !s.contains("Superclass.class"));
    for (String subclass : listSubclasses) {
        System.out.println(subclass);
    }
    assertTrue("You did not create a new subclass!", listSubclasses.size() > 1);
}

public static void listFilesForFolder(final File folder) {
    for (final File fileEntry : folder.listFiles()) {
        if (fileEntry.isDirectory()) {
            listFilesForFolder(fileEntry);
        } else {
            files.add(fileEntry.getName());
        }
    }
}
If you intend to load all subclasses of a given class which are in the same package, you can do so:
public static List<Class<?>> loadAllSubClasses(Class<?> pClazz) throws IOException, ClassNotFoundException {
    ClassLoader classLoader = pClazz.getClassLoader();
    assert classLoader != null;

    String packageName = pClazz.getPackage().getName();
    String dirPath = packageName.replace(".", "/");
    Enumeration<URL> srcList = classLoader.getResources(dirPath);

    List<Class<?>> subClassList = new ArrayList<>();
    while (srcList.hasMoreElements()) {
        File dirFile = new File(srcList.nextElement().getFile());
        File[] files = dirFile.listFiles();
        if (files != null) {
            for (File file : files) {
                // strip the ".class" suffix to get the class name
                String subClassName = packageName + '.' + file.getName().substring(0, file.getName().length() - 6);
                if (!subClassName.equals(pClazz.getName())) {
                    Class<?> candidate = Class.forName(subClassName);
                    // keep only actual subclasses/implementors of pClazz
                    if (pClazz.isAssignableFrom(candidate)) {
                        subClassList.add(candidate);
                    }
                }
            }
        }
    }

    return subClassList;
}
Find all classes in the classpath:
public static List<String> getClasses() {
    // Note: casting the context class loader to URLClassLoader only works up to Java 8.
    URLClassLoader urlClassLoader = (URLClassLoader) Thread.currentThread().getContextClassLoader();
    List<String> classes = new ArrayList<>();
    for (URL url : urlClassLoader.getURLs()) {
        try {
            if (url.toURI().getScheme().equals("file")) {
                File file = new File(url.toURI());
                if (file.exists()) {
                    try {
                        if (file.isDirectory()) {
                            // FileUtils is from Apache Commons IO
                            for (File listFile : FileUtils.listFiles(file, new String[]{"class"}, true)) {
                                String classFile = listFile.getAbsolutePath().replace(file.getAbsolutePath(), "").replace(".class", "");
                                if (classFile.startsWith(File.separator)) {
                                    classFile = classFile.substring(1);
                                }
                                classes.add(classFile.replace(File.separator, "."));
                            }
                        } else if (url.getFile().endsWith(".jar")) {
                            try (JarFile jarFile = new JarFile(file)) {
                                Enumeration<JarEntry> entries = jarFile.entries();
                                while (entries.hasMoreElements()) {
                                    JarEntry jarEntry = entries.nextElement();
                                    if (jarEntry.getName().endsWith(".class")) {
                                        classes.add(jarEntry.getName().replace(".class", "").replace("/", "."));
                                    }
                                }
                            }
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        } catch (URISyntaxException e) {
            e.printStackTrace();
        }
    }
    return classes;
}
Java's ServiceLoader will get all registered implementing classes for an interface in Java.
I inherited an application which uses a java properties file to define configuration parameters such as database name.
There is a class called MyAppProps that looks like this:
public class MyAppProps {
    protected static final String PROP_FILENAME = "myapp.properties";
    protected static Properties myAppProps = null;

    public static final String DATABASE_NAME = "database_name";
    public static final String DATABASE_USER = "database_user";
    // etc...

    protected static void init() throws MyAppException {
        try {
            ClassLoader loader = MyAppException.class.getClassLoader();
            InputStream is = loader.getResourceAsStream(PROP_FILENAME);
            myAppProps = new Properties();
            myAppProps.load(is);
        } catch (Exception e) {
            throw new MyAppException(e.getMessage());
        }
    }

    protected static String getProperty(String name) throws MyAppException {
        if (myAppProps == null) {
            throw new MyAppException("Properties was not initialized properly.");
        }
        return myAppProps.getProperty(name);
    }
}
Other classes which need to get property values contain code such as:
String dbname = MyAppProps.getProperty(MyAppProps.DATABASE_NAME);
Of course, before the first call to MyAppProps.getProperty, MyAppProps needs to be initialized like this:
MyAppProps.init();
I don't like the fact that init() needs to be called. Shouldn't the initialization take place in a static initialization block or in a private constructor?
Besides for that, something else seems wrong with the code, and I can't quite put my finger on it. Are properties instances typically wrapped in a customized class? Is there anything else here that is wrong?
When I make my own wrapper class like this, I always prefer to provide strongly typed getters for the values, instead of exposing all the inner workings through the static final variables.
private static final String DATABASE_NAME = "database_name";
private static final String DATABASE_USER = "database_user";

public String getDatabaseName() {
    return getProperty(MyAppProps.DATABASE_NAME);
}

public String getDatabaseUser() {
    return getProperty(MyAppProps.DATABASE_USER);
}
A static initializer looks like this;
static {
init();
}
This being said, I will readily say that I am no big fan of static initializers.
You may consider looking into dependency injection (DI) frameworks like Spring or Guice. These let you inject the appropriate value directly into the places where you need it, instead of going through the indirection of the additional class. Many people find that using these frameworks reduces the focus on this kind of plumbing code, but only after you've climbed the framework's learning curve. (DI frameworks are quick to learn but take quite some time to master, so this may be a bigger hammer than you really want.)
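For example, with Spring the property can be injected right where it is used. This is only a sketch: it assumes the properties file has been registered with Spring's Environment (e.g. via @PropertySource on a configuration class), and the class and method names are illustrative:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
class DatabaseClient {

    private final String databaseName;

    // Assumes myapp.properties is exposed to the Environment, e.g. via
    // @PropertySource("classpath:myapp.properties") on a @Configuration class.
    DatabaseClient(@Value("${database_name}") String databaseName) {
        this.databaseName = databaseName;
    }

    String describe() {
        return "Connecting to " + databaseName;
    }
}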
Reasons to use static initializer:
Can't forget to call it
Reasons to use an init() function:
You can pass parameters to it
Easier to handle errors
I've created property wrappers in the past to good effect. For a class like the example, the important thing to ensure is that the properties are truly global, i.e. a singleton really makes sense. With that in mind a custom property class can have type-safe getters. You can also do cool things like variable expansion in your custom getters, e.g.:
myapp.data.path=${myapp.home}/data
Furthermore, in your initializer, you can take advantage of property file overloading:
Load in "myapp.properties" from the classpath
Load in "myapp.user.properties" from the current directory using the Properties override constructor
Finally, load System.getProperties() as a final override
The "user" properties file doesn't go in version control, which is nice. It avoids the problem of people customizing the properties file and accidentally checking it in with hard-coded paths, etc.
Good times.
You can use either a static block or a constructor. The only advice I have is to use ResourceBundle instead; that might better suit your requirements. For more, please follow the link below.
Edit:
ResourceBundles vs Properties
The problem with static methods and classes is that you can't override them with test doubles, which makes unit testing much harder. I have all variables declared final and initialized in the constructor. Whatever is needed is passed in as constructor parameters (dependency injection). That way you can substitute test doubles for some of the parameters during unit tests.
For example:
public class MyAppProps {
    protected static final String PROP_FILENAME = "myapp.properties";
    protected Properties props = null;

    public String DATABASE_NAME = "database_name";
    public String DATABASE_USER = "database_user";
    // etc...

    public MyAppProps(InputStream is) throws MyAppException {
        try {
            props = new Properties();
            props.load(is);
        } catch (Exception e) {
            throw new MyAppException(e.getMessage());
        }
    }

    public String getProperty(String name) {
        return props.getProperty(name);
    }

    // This needs to be static so that client objects can
    // locate the file before an instance of this class is created.
    public static String getFileName() {
        return PROP_FILENAME;
    }
}
Now, call it from production code like this:
String fileName = MyAppProps.getFileName();
ClassLoader loader = MyAppException.class.getClassLoader();
InputStream is = loader.getResourceAsStream(fileName);
MyAppProps p = new MyAppProps(is);
The dependency injection is when you include the input stream in the constructor parameters. While this is slightly more of a pain than just using the static class / Singleton, things go from impossible to simple when doing unit tests.
For unit testing, it might go something like:
@Test
public void testStuff() {
    // Setup
    InputStreamTestDouble isTD = new InputStreamTestDouble();
    MyAppProps instance = new MyAppProps(isTD);

    // Exercise
    String actualNum = instance.getProperty("foo");

    // Verify
    String expectedNum = "42";
    assertEquals("MyAppProps didn't get the right number!", expectedNum, actualNum);
}
The dependency injection made it really easy to substitute a test double for the input stream. Now, just load whatever stuff you want into the test double before giving it to the MyAppProps constructor. This way you can test how the properties are loaded very easily.