Does it make sense to write unit/integration tests that verify properties or configuration, given that any moderately to highly complex application includes a lot of configuration (via YAML or properties files)?
Many of these configurations drive the runtime behaviour, even when they are consumed by the underlying libraries or frameworks. Would it be sensible to verify that the configurations are correctly used at runtime?
One argument in favour: since there is no compiler safety, we need some way to verify that the configuration is dictating the behaviour correctly.
The argument against: aren't we just verifying the underlying framework's implementation?
Testing just the configuration file might not be enough, as it does not guarantee that the configuration will be correctly applied at runtime (there could be typos or similar mistakes).
No. Unit tests will tell you what works in the test harness, which is—and should be—different from what is in production.
You hit the nail on the head when you said you want to verify the configuration. Testing and verification are two entirely different things. If you have a way to verify the runtime configuration in production, it will also help you diagnose runtime misbehavior.
There are lots of ways to verify the runtime configuration. The simplest and best is logging (e.g. "2016-09-24 10:13:00 connecting to http://my-configured-server.example.com to get user token"). Don't just dump the configuration to the log file-- that's not end-to-end verification-- sprinkle configuration details into log messages.
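For instance, a minimal sketch of what that can look like with SLF4J (the class, property names and values here are invented for illustration):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class TokenClient {

        private static final Logger log = LoggerFactory.getLogger(TokenClient.class);

        private final String tokenServerUrl;  // value read from a (hypothetical) app.properties
        private final int timeoutMillis;

        public TokenClient(String tokenServerUrl, int timeoutMillis) {
            this.tokenServerUrl = tokenServerUrl;
            this.timeoutMillis = timeoutMillis;
        }

        public String fetchUserToken(String userId) {
            // The configured values appear in the message describing the actual work,
            // so the log verifies what the code really used, not just what the file says.
            log.info("connecting to {} (timeout {} ms) to get token for user {}",
                    tokenServerUrl, timeoutMillis, userId);
            // ... actual call to the token server would go here ...
            return "token";
        }
    }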
Configuration issues are often all-or-nothing; if you don't get the configuration just right, nothing happens and you don't know why. (This is especially true with functional programming.) Logging can tell you not only what the configuration is, but at what moment the configuration fails.
There are other clever ways to sprinkle configuration details into the runtime. For example, attaching verbose runtime details to error messages, especially email where you have more room than in logs. Or a debug mode where hovering over a UI element tells you the class and other facts about that element.
Integration tests (the components work when plugged together) and smoke tests (some complete configuration does something right) can be important-- I would say necessary if you deploy without manual testing-- but they are no substitute for runtime verification.
This is pretty reasonable IMHO, especially given ever-growing virtualization. In one of my previous projects - a micro-SOA platform which, back then, supported hundreds of web sites with ease - we found out that most of the issues were due to exactly this:
configurations are correctly used at runtime
The Orchard recipes were messed up a lot of the time as well. So, a point for Docker containers: it is pretty straightforward to test the infrastructure and the configuration your product/service is using. I've used Serverspec successfully for that.
I'm so worried about people logging confidential information to server logs.
I have seen server logs in production, and some developers are accidentally logging security-related information like passwords, clientId, clientSecret, etc.
Is there any way, like an Eclipse plugin or some other tool, to warn developers while they are writing their code?
Example: `log.info("username = " + username + " password = " + password);` should trigger a warning that confidential info is getting logged.
I have done some research and have seen tools like SonarLint and FindBugs, but those plugins are unable to solve my problem.
SonarLint offers the rule S2068: Credentials should not be hard-coded, which targets the use of hard-coded credentials. It seems close to what you are trying to achieve, though it may not be enough for your needs.
As stated in other answers, however, identifying such security holes can ultimately be hard, and strong code reviews are certainly a good move to reduce the risks.
Now, if you are really worried about usages of loggers, already know the potential issues, and know what data could leak, I would suggest writing your own custom Java rule for SonarQube.
Custom rules are supported by SonarLint and can be applied at enterprise level once the custom plugin containing them is deployed on a SonarQube server. This solution would allow you to explicitly define what you want to target, and to fine-tune a rule depending on your needs and enterprise specifics. Writing such rules is not hard and is documented in the tutorial Custom rules for Java.
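As a rough, hedged sketch of what such a custom rule could look like (the rule key, class name and check are invented, and the exact visitor API depends on your SonarQube Java plugin version):

    import java.util.Collections;
    import java.util.List;
    import org.sonar.check.Rule;
    import org.sonar.plugins.java.api.IssuableSubscriptionVisitor;
    import org.sonar.plugins.java.api.tree.ExpressionTree;
    import org.sonar.plugins.java.api.tree.LiteralTree;
    import org.sonar.plugins.java.api.tree.MethodInvocationTree;
    import org.sonar.plugins.java.api.tree.Tree;

    @Rule(key = "NoSensitiveLogging")  // hypothetical rule key
    public class NoSensitiveLoggingRule extends IssuableSubscriptionVisitor {

        @Override
        public List<Tree.Kind> nodesToVisit() {
            return Collections.singletonList(Tree.Kind.METHOD_INVOCATION);
        }

        @Override
        public void visitNode(Tree tree) {
            MethodInvocationTree invocation = (MethodInvocationTree) tree;
            // Deliberately naive: flag any string literal argument mentioning "password".
            // A real rule would also check that the receiver is actually a logger.
            for (ExpressionTree arg : invocation.arguments()) {
                if (arg.is(Tree.Kind.STRING_LITERAL)
                        && ((LiteralTree) arg).value().toLowerCase().contains("password")) {
                    reportIssue(arg, "Make sure no confidential information is logged here.");
                }
            }
        }
    }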
There are many different ways security holes can appear. Logging data to the browser console is only one of them.
And to my knowledge, there is no tool that can detect those security issues automatically. It is the responsibility of the programmer to not expose private user information on a page.
In this case the advice is: never log passwords (especially plain-text ones) to the browser console! Instead, store only hashed passwords in the database, using a one-way algorithm that cannot be reversed.
Another approach is to create a custom log appender that looks for certain tell-tale patterns (e.g. words like "password" and "passwd") and obliterates the messages, or throws an error.
However, this could be dangerous. If the bad guys knew you were doing this, they might try to exploit it to cover their tracks or even crash your server.
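As an illustration only, here is a minimal sketch using a Logback filter; the word list and the decision to drop rather than mask the message are arbitrary, and the same idea works with Log4J filters or appenders:

    import ch.qos.logback.classic.spi.ILoggingEvent;
    import ch.qos.logback.core.filter.Filter;
    import ch.qos.logback.core.spi.FilterReply;

    // Drops any log event whose rendered message looks like it contains credentials.
    public class CredentialFilter extends Filter<ILoggingEvent> {

        private static final String[] SUSPICIOUS = {"password", "passwd", "clientsecret"};

        @Override
        public FilterReply decide(ILoggingEvent event) {
            String message = event.getFormattedMessage().toLowerCase();
            for (String word : SUSPICIOUS) {
                if (message.contains(word)) {
                    return FilterReply.DENY;  // or mask the message instead of dropping it
                }
            }
            return FilterReply.NEUTRAL;
        }
    }

The filter would then be attached to the relevant appender in logback.xml.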
I wouldn't hold my breath for some out-of-the-box solution on this one. Beyond your own logging, you also have to be concerned about the logging done by your dependencies. That said, you have two areas to work on: what goes into the logs and who has access to the logs.
As far as what goes into the logs, your best tools to combat this problem are education and collaboration (including the aforementioned code reviews). Start by writing a list of non-functional requirements for logging that addresses security: what to log and how to log it (markers, levels, sensitive parameters, etc.). I recommend working with colleagues on defining this list so it doesn't become known as "Ravi's logging crusade" instead of "something we really need to do".
Once that list is defined and you get your colleague's and/or management's buy-in, you can write wrappers for logging implementations that support the list of non-functional logging requirements that you assembled. If it is really necessary to log sensitive parameters, provide a way for the parameters to be asymmetrically encrypted for later retrieval by a root account: such as the encryption key stored in a file only accessible by root/container. For management, you might have to spend some time writing up value propositions describing why your initiative is valuable to your company.
Next, work with whoever defines your SDLC - make sure the change to your SDLC is outwardly communicated. Have them create a Secure Coding checklist for your company to implement, with one item on it that says: all logging is implemented using OurCompanySecureLogger. Now you can start working on enforcing the initiative. I recommend writing a check on the build server that looks at dependencies and fails the build if it finds a direct reference to log4j, slf4j, logback, etc.
Regarding the other half of the problem, work with your SysOps team to define rules of Segregation of Duties. That is, software engineers shouldn't have access to the servers where logging is being performed. If you're not staffed well enough at this point to support this notion, you might have to get creative.
Maybe you should try the Contrast tool. It's a good one and we have been using it for a long time.
It takes care of all the updated OWASP Top 10 issues.
It is quite good for finding security holes in enterprise applications.
Their support is also good.
I have an application that consists of approximately 20 Java components.
About half of the components are servers and the other half batch programs.
Almost all of them talk directly to an Oracle database (JDBC via some of our infrastructure code jars); the remaining couple of components talk to some of the servers, which in turn talk to the database.
Anyway, each component is configured with numerous XML configuration files.
These are becoming almost impossible to maintain.
Some of the configuration is specific to a component; other parts are similar across components (database URLs, connectors, etc.).
What is worse, the application is not even installed in that many environments - in fact only about 10 (QA, dev, production, etc.).
But the people who own these environments don't seem able to maintain the configs correctly.
In particular, whenever there is an upgrade there are invariably configuration errors.
I have even started checking some of the environments' configurations into SVN along with the code.
I tried an XML schema validator at one point (it consisted of defining the valid XML in .xsd files and throwing an error if the schema rules were breached), but that didn't work.
I'm thinking I am missing something basic here - perhaps there is a tool to manage this or perhaps I should be storing the configuration in the database.
The application was largely designed by a colleague, but I feel it's overly configurable - in fact much of the config actually refers to classes, i.e. one can choose handlers, parsers, etc. - the XML config almost looks like code.
Any advice greatly appreciated
Peter
Substituting XML for code is usually a bad idea; things that are declarative are probably OK, but things that are procedural probably aren't.
If all that configuration was defined in Java code, a lot of the upgrade issues would turn into compilation issues. The compiler would pick them out for you, and you could correct them.
So you've got a multi-part problem. You need to rationalize your configuration information into a set of partitions (per-component, per-installation, global). You need to try to verify configuration information at compile-time, where possible. And you need to write validation for the loaded configurations, to sanity check them.
To the extent possible, shift relatively static configuration into Guice (at least, that's what I prefer). A lot of things happen in a nice, type-safe way with it.
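A minimal sketch of what that could look like (the handler types, binding names and JDBC URL are placeholders, not your real classes):

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Injector;
    import com.google.inject.name.Names;

    // Placeholder types standing in for the handlers/parsers the XML currently names.
    interface TradeHandler { }
    class DefaultTradeHandler implements TradeHandler { }

    public class BatchComponentModule extends AbstractModule {
        @Override
        protected void configure() {
            // A typo here fails at compile time or at injector creation,
            // instead of surfacing as a mysterious runtime error.
            bindConstant()
                .annotatedWith(Names.named("jdbc.url"))
                .to("jdbc:oracle:thin:@dbhost:1521:ORCL");
            bind(TradeHandler.class).to(DefaultTradeHandler.class);
        }
    }

    // Usage:
    // Injector injector = Guice.createInjector(new BatchComponentModule());
    // TradeHandler handler = injector.getInstance(TradeHandler.class);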
Consider running a WebDAV server for each instance of the app, and storing configuration into it. Each can hit a simple URL to pull the current versions of the configuration files.
Or, stand up a lightweight XML database like BaseX with its REST capability, then store and load your configuration information there. Use JSLP or something like it to have your components find the central configuration repository.
An additional advantage to using an XML DB is that you'll be able to do a lot of sanity checking and updating by querying across the set of all configuration files. For example, if a given instance of the application should have the same JDBC parameters in each configuration file, a simple XQuery will tell you whether that's true.
If you don't have the ability to modify the applications that read the configuration files (the config file format is fixed), then consider writing query servlets for the XML database that assemble the required configuration information from nested blocks or templates. That will allow you to figure out what's common between the configuration files and dynamically generate parameterized versions of those blocks.
Sounds like the key here is making incremental improvements. Allow the old way to configure, but have the configuration load look for a central config source first.
I don't think that the syntax of the configuration files is at the heart of the problem: using Java properties files instead of XML would leave you with exactly the same issues. There may be an issue that the configuration information is too dispersed - it's hard to tell. The main issue seems to be that the whole thing is too fragile - the application is too dependent on manual configuration, and it seems that the configuration for each environment needs to be different. You should try to focus on reducing the number of configuration parameters that need to be set to make the system work (without necessarily reducing the options available for diagnostics etc. when they are really needed), on having intelligent defaults, and on self-configuration. Perhaps even invest in creating an installation wizard.
As you have some Oracle databases handy, why not store your configuration in there?
Then you only need one or two configuration parameters to point to an Oracle database suitable for that environment and download the rest of the configuration from the database.
The contents of the configuration table should be pretty static for any given environment so there should be no need to amend anything except the jdbc connection when you migrate your software through its life cycle.
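A hedged sketch of what the bootstrap side could look like (the table and column names are invented):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Properties;

    public final class DbConfigLoader {

        // Loads key/value pairs for one component from a (hypothetical) APP_CONFIG table.
        public static Properties load(String jdbcUrl, String user, String password,
                                      String componentName) throws Exception {
            Properties config = new Properties();
            try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT cfg_key, cfg_value FROM app_config WHERE component = ?")) {
                stmt.setString(1, componentName);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        config.setProperty(rs.getString("cfg_key"), rs.getString("cfg_value"));
                    }
                }
            }
            return config;
        }
    }

The only thing left in the local config file is then the JDBC URL and credentials for that environment.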
At work we are trying to simplify an application that was coded with an overkill use of Spring Remoting. This is how it works today:
(Controllers) Spring MVC -> Spring Remoting -> Hibernate
Everything is deployed in a single machine, Spring Remoting is not needed (never will be needed) and adds complexity to code maintenance. We want it out.
How do we ensure everything works after our changes? Today we have 0% code coverage! We thought of creating integration tests for our controllers, so that when we remove Spring Remoting they behave exactly the same. We thought of using the Spring Test framework in conjunction with DBUnit to bring Oracle up to a known state every test cycle.
Has anyone tried a similar solution? What are the drawbacks? Would you suggest any better alternative?
It always depends on the effort/benefit ratio you are willing to accept. You can get almost 100% code coverage if you are really diligent and thorough. But that might be overkill too, especially when it comes to maintaining those tests. But your idea is good. I've done this a couple of times before with medium-sized applications. This is what you should do:
Be sure that you have a well known test data set in the database at the beginning of every test in the test suite (you mentioned that yourself)
Since you're using Hibernate, you might also try using HSQLDB as a substitute for Oracle. That way, your tests will run a lot faster.
Create lots of independent little test cases covering most of your most valued functionality. You can always allow yourself to have some minor bugs in some remote and unimportant corners of the application.
Make sure those test cases all run before the refactoring.
Make sure you have a reference system that will not be touched by the refactoring, in order to be able to add new test cases, in case you think of something only later
Start refactoring, and while refactoring run all relevant tests that could be broken by the current refactoring step. Run the complete test suite once a night using tools such as Jenkins.
That should work. If your application is a web application, then I can only recommend Selenium. It has a nice Jenkins integration and you can create hundreds of test cases by just clicking through your application in the browser (those clicks are recorded and a Java/Groovy/other-language test script is generated).
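Coming back to the controller integration tests mentioned in the question: a minimal sketch of one such test with Spring's MockMvc (available from spring-test 3.2 onward; the context file, controller and URL are placeholders) could look like this:

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    import org.junit.Before;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    import org.springframework.test.context.web.WebAppConfiguration;
    import org.springframework.test.web.servlet.MockMvc;
    import org.springframework.test.web.servlet.setup.MockMvcBuilders;
    import org.springframework.web.context.WebApplicationContext;

    @RunWith(SpringJUnit4ClassRunner.class)
    @WebAppConfiguration
    @ContextConfiguration("classpath:test-applicationContext.xml")  // hypothetical test context
    public class CustomerControllerIT {

        @Autowired
        private WebApplicationContext context;

        private MockMvc mockMvc;

        @Before
        public void setUp() {
            // Exercises the full bean wiring, with or without Spring Remoting in the middle,
            // so the same assertions should hold before and after the refactoring.
            mockMvc = MockMvcBuilders.webAppContextSetup(context).build();
        }

        @Test
        public void listCustomersReturnsOk() throws Exception {
            mockMvc.perform(get("/customers")).andExpect(status().isOk());
        }
    }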
In our Spring MVC / Hibernate (using v3.4) web app we use an Oracle database for integration testing.
To ensure that our database is in a known state each time the test suites are run, we set the following property in our test suite's persistence.xml:
<property name="hibernate.hbm2ddl.auto" value="create"/>
This ensures that the db schema is created each time our tests are run, based on the Hibernate annotations in our classes. To populate our database with a known data set, we added a file named import.sql to our classpath with the appropriate SQL inserts. If you have the above property set, Hibernate will run the statements in import.sql against your database after creating the schema.
I have a couple of design/architectural questions that always come up in our shop. I said "our", as opposed to "me" personally. Some of the decisions were made when J2EE was first introduced, so there are some bad design choices and some good ones.
In a web environment, how do you work with filters? When should you use J2EE filters and when shouldn't you? Is it OK to have many filters, especially if you have a lot of logic in them? For example, there is a lot of logic in our authentication process: if you are this user, go to this site; if not, go to another one. It is difficult to debug because one URL path could end up rendering different target pages.
Property resource bundle files for replacement values in JSP files: the consensus in the Java community seems to be to use bundle files that contain the labels and titles used in JSPs. I can see the benefit if you are developing for many different languages and switching label values based on locale. But what if you aren't working with multiple languages? Should every piece of static text in a JSP file or other template file really have to be put into a property file? Once again, we run into issues with debugging, where text may not show up due to misspelled property keys or corrupt property files. Also, we have a process where graphic designers send us HTML templates and we convert them to JSPs. It seems more confusing to then remove the static text, add a key, add the key/value pair to a property file, etc.
E.g. a labels.properties file may contain the "Username:" label; the JSP references it by key and the value gets rendered to the user.
Unit testing for all J2EE development: we don't encourage unit testing. Some people do, but I have never worked at a shop that uses extensive unit testing. One place did, and then when crunch time hit we stopped writing unit tests, and after a while the unit tests were useless and wouldn't even compile. Most of the development I have done has been with servers, web application development, and database connectivity. I can see how unit testing can be cumbersome, because you need an environment to test against. Unit testing manifestos encourage developers not to actually connect to external sources, but it seems like a major portion of the testing should be connecting to a database and running all of the code, not just a particular unit. So that is my question: for all types of development (like CRUD-oriented J2EE development), should we write unit tests in all cases? And if we don't write unit tests, what other developer testing mechanisms could we use?
Edited: Here are some good resources on some of these topics.
http://www.ibm.com/developerworks/java/library/j-diag1105.html
Redirection is a simpler way to handle different pages depending on role. The filter could be used simply for authentication, to get the User object and any associated Roles into the session.
As James Black said, if you had a central controller you could obviate the need to put this logic in the filters. To do this you'd map the central controller to all urls (or all non-static urls). Then the filter passes a User and Roles to the central controller which decides where to send the user. If the user tries to access a URL he doesn't have permission for, this controller can decide what to do about it.
Most major MVC web frameworks follow this pattern, so just check them out for a better understanding of this.
I agree with James here, too - you don't have to move everything there, but it can make things simpler in the future. Personally, I think you often have to trade this one off in order to work efficiently with designers. I've often put the infrastructure and logic in place to make it work, but then littered my templates with static text while working with designers. Finally, I went back and pulled all the static text out into the external files. Sure enough, I found some spelling mistakes that way!
Testing - this is the big one. In my experience, a highly disciplined test-first approach can eliminate 90% of the stress in developing these apps. But unit tests are not quite enough.
I use three kinds of tests, as indicated by the Agile community:
acceptance/functional tests - customer defines these with each requirement and we don't ship til they all pass (look at FitNesse, Selenium, Mercury)
integration tests - ensure that the logic is correct and that issues don't come up across tiers or with realistic data (look at Cactus, DBUnit, Canoo WebTest)
unit tests - both defines the usage and expectations of a class and provides assurance that breaking changes will be caught quickly (look at JUnit, TestNG)
So you see that unit testing is really for the benefit of the developers... if there are five of us working on the project, not writing unit tests leads to one of two things:
an explosion of necessary communication as developers try and figure out how to use (or how somebody broke) each other's classes
no communication and increased risk due to "silos" - areas where only one developer touches the code and in which the company is entirely reliant on that developer
Even if it's just me, it's too easy to forget why I put that little piece of special case logic in the class six months ago. Then I break my own code and have to figure out how... it's a big waste of time and does nothing to reduce my stress level! Also, if you force yourself to think through (and type) the test for each significant function in your class, and figure out how to isolate any external resources so you can pass in a mock version, your design improves immeasurably. So I tend to work test-first regardless.
Arguably the most useful, but least often done, is automated acceptance testing. This is what ensures that the developers have understood what the customer was asking for. Sometimes this is left to QA, and I think that's fine, but the ideal situation is one in which these are an integral part of the development process.
The way this works is: for each requirement the test plan is turned into a script which is added to the test suite. Then you watch it fail. Then you write code to make it pass. Thus, if a coder is working on changes and is ready to check in, they have to do a clean build and run all the acceptance tests. If any fail, fix before you can check in.
"Continuous integration" is simply the process of automating this step - when anyone checks code in, a separate server checks out the code and runs all the tests. If any are broken it spams the last developer to check in until they are fixed.
I once consulted with a team that had a single tester. This guy was working through the test plans manually, all day long. When a change took place, however minor, he would have to start over. I built them a spreadsheet indicating that there were over 16 million possible paths through just a single screen, and they ponied up the $10k for Mercury Test Director in a hurry! Now he makes spreadsheets and automates the test plans that use them, so they have pretty thorough regression testing without ever-increasing QA time demands.
Once you've begun automating tests at every layer of your app (especially if you work test-first) a remarkable thing happens. Worry disappears!
So, no, it's not necessary. But if you find yourself worrying about technical debt, about the big deployment this weekend, or about whether you're going to break things while trying to quickly change to meet the suddenly-urgent customer requirements, you may want to more deeply investigate test-first development.
Filters are useful for moving logic such as "is the user authenticated?" out of individual pages, since you don't want that logic in every page.
Since you don't have a central controller it sounds like your filters are serving this function, which is fine, but, as you mentioned, it does make debugging harder.
This is where unit tests can come in handy, as you can test different situations, with each filter individually, then with all the filters in a chain, outside of your container, to ensure it works properly.
Unit testing does require discipline, but if the rule is that nothing goes to QA without a unit test, then it may help, and there are many tools that help generate test skeletons so you just have to fill in the test. Before you debug, write or update the unit test and show that it fails, so the problem is reproduced.
This will ensure that the error won't return, that you actually fixed it, and that you have an updated unit test.
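A tiny illustration of that workflow (the class under test and the defect are made up):

    import static org.junit.Assert.assertEquals;

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    import org.junit.Test;

    // Hypothetical class under test, shown here only so the example is self-contained.
    class PriceCalculator {
        double applyDiscount(double price, double discountRate) {
            BigDecimal result = BigDecimal.valueOf(price * (1 - discountRate))
                    .setScale(2, RoundingMode.HALF_UP);
            return result.doubleValue();
        }
    }

    public class PriceCalculatorTest {

        // Written while the bug report is still open: the test reproduces the complaint
        // (wrong rounding) first, and only passes once the calculator is fixed.
        @Test
        public void discountedPriceIsRoundedToTwoDecimals() {
            assertEquals(9.99, new PriceCalculator().applyDiscount(10.00, 0.001), 0.0001);
        }
    }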
As for resource bundles: if you are certain you will never support another language, then as you refactor you can remove the need for the bundles; but I think it is easier to make spelling/grammar corrections when the text is all in one place.
Filters in general are expected to perform smaller units of functionality and filter-chaining would be used to apply the filters as needed. In your case, maybe a refactoring can help to move out some of the logic to additional filters and the redirecting logic can be somewhat centralized through a controller to be easier to debug and understand.
Resource bundles are necessary to maintain flexibility, but if you know absolutely that the site is going to be used in a single locale, then you might skip them. Maybe you can move some of the work of maintaining the bundles to the designers, i.e. let them have access to the resource bundles, so that you get the HTML with the keys already in place.
Unit testing is much easier to implement at the beginning of a project than to build into an existing product. For existing software, you may still implement unit tests for new features. However, it requires a certain amount of insistence from team leads, and the team needs to buy into the necessity of having unit tests. Code review for unit tests helps, and a decision on what parts of the code absolutely need to be covered can help developers. Tools/plugins like Coverlipse can indicate the unit testing coverage, but they tend to look at every possible code path, some of which may be trivial.
At one of my earlier projects, unit tests were just compulsory and unit tests would be automatically kicked off after each check-in. However, this was not Test-driven development, as the tests were mostly written after the small chunks of code were written. TDD can result in developers writing code to just work with the unit tests and as a result, developers can lose the big picture of the component they are developing.
In a web environment, how do you work with filters. When should you use J2EE filters and when shouldn't you?
Filters are meant to steer/modify/intercept the actual requests/responses/sessions. For example: setting the request encoding, determining the logged-in user, wrapping/replacing the request or response, determining which servlet it should forward the request to, and so on.
To control the actual user input (parameters) and output (the results and the destination) and to execute actual business logic, you should use a servlet.
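A small sketch of a filter that does only the cross-cutting check and leaves the routing to a servlet/controller (the session attribute and login URL are placeholders):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class AuthenticationFilter implements Filter {

        @Override
        public void init(FilterConfig config) { }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // Only the cross-cutting concern lives here: is there a logged-in user?
            if (request.getSession().getAttribute("user") == null) {
                response.sendRedirect(request.getContextPath() + "/login.jsp");
                return;
            }
            // Role-based page selection is left to the servlet/controller down the chain.
            chain.doFilter(req, res);
        }

        @Override
        public void destroy() { }
    }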
Property resource bundle files for replacement values in JSP files.
If you don't do i18n, just don't use them. But if you ever grow and the customer/users want i18n, then you'll be happy that you're already prepared. And not only that, it also simplifies the use of a CMS to edit the content by just using the java.util.Properties API.
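For completeness, a minimal sketch of reading such a bundle from Java (the bundle and key names are invented; in a JSP you would typically go through the JSTL fmt tags instead):

    import java.util.Locale;
    import java.util.ResourceBundle;

    public class LabelDemo {
        public static void main(String[] args) {
            // Looks up labels.properties (or labels_de.properties, etc.) on the classpath.
            ResourceBundle bundle = ResourceBundle.getBundle("labels", Locale.getDefault());
            // Assumes an entry like: username.label=Username:
            System.out.println(bundle.getString("username.label"));
        }
    }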
Unit Testing for all J2EE development
JUnit can take care of that. You could also consider "officially" doing user tests only: create several use cases and test them.
I have a new puzzle for you :-).
I was thinking about how an application should handle its own start-up: checking for required libraries, correct versions, database connectivity, database compatibility, etc. To be specific, here is the test case: I use SWT and Log4J, for obvious reasons. Now, the questions:
Should the app check itself for the required dependencies? If yes, should the user be given specific details of what is missing? Or just a message, with the details going to the logs?
What if the Log4J library is unavailable?
What is the best way to do the test? Verifying file existence (using file.exists() at a specified path), or loading a class, say Class.forName("org.apache.log4j.Logger")? What would be the proper order of the checks? For instance, if I test for SWT first, I have no idea whether the logger is available, and the error will occur when I try to access it. Conversely, if I test for the logger first: a) the lib could be unavailable - I cannot log the error; b) SWT could be unavailable - I am unable to display the user message.
I discovered the Apache Commons Lang library today, and I find the method org.apache.commons.lang.SystemUtils.isJavaVersionAtLeast(float value) very useful, and many others, I am sure. However, doesn't importing too many libraries into your project make it hard to maintain? Versions change, compatibilities are lost; e.g. one cannot control a third party's development style or direction.
Thank you for your answers.
I agree with your need. Checking for the required runtime environment provides:
immediate feedback, instead of randomly breaking when accessing some functionality
a hopefully more skilled audience, as the immediate feedback goes to the person installing the software, who is hopefully more skilled than the average user, or at least less confident (installing is always a special operation). A more skilled user is less disturbed if the error comes up in the console; he doesn't depend on a graphical interface.
improved reporting: the error message can be explicit (you're in charge), while default error messages come in many flavours (they are not always that helpful in saying 1. what's wrong and 2. how to fix it).
But please note that the runtime requirements could be checked in two situations:
when installing: long verifications are always acceptable; if a library is missing, or a required database or web service is not accessible, it won't be there at runtime either, so you can complain immediately.
when starting the execution: you can verify again (and some verifications may only be possible at that point)
This suggests creating an installer for your application.
Potentially, errors would not all be blocking for the installation. Some would rather accumulate as a list of tasks to be done after installation, maybe nicely formatted in a file with all reference information.
Here, we once again hit the notion of error levels in validation (similar to what happens with Log4J): some validation errors are fatal, others are errors, and possibly also warnings...
In our projects, we have some sort of initialization and validation going on on startup. Based on our day-to-day experience, I would suggest the following:
When the application gets big, you don't want to have all init centralized in one class, so we have a modular structure.
A small kernel is configured with a list of module classes. Its whole init sequence is under strict control, ready for any exception (translating them into appropriate messages, while memorizing the stack traces that are so useful to developers), making no assumptions about the available libraries, and so on. CheckStyle can be configured specially for this code.
The interface (of course, an abstract class is possible too) that the modules implement typically has several initialization methods (there is a rough sketch in code right after this list). They could be:
getDependencies: returns a list of the modules that this one depends on.
startup: called when the whole application is starting. This will be called only once during startup, and cannot be called again.
start: called when the module gets ready for regular operation.
stop: the reverse of start.
shutdown: the reverse of startup.
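In code, a rough sketch of such a module contract could be (this is illustrative, not a specific framework; error levels are omitted):

    import java.util.List;

    public interface Module {

        // Modules that must be initialized before this one.
        List<Class<? extends Module>> getDependencies();

        // Called exactly once while the whole application is starting.
        void startup() throws ModuleInitException;

        // Called when the module should become ready for regular operation.
        void start() throws ModuleInitException;

        // Reverse of start().
        void stop();

        // Reverse of startup(); called exactly once during shutdown.
        void shutdown();
    }

    // Hypothetical checked exception that the kernel catches and turns into an error condition.
    class ModuleInitException extends Exception {
        public ModuleInitException(String message, Throwable cause) {
            super(message, cause);
        }
    }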
The kernel instantiates each of the modules in turn. Then it calls one init method on all of them, then another init method, and so on as needed. Each init method can:
signal error conditions (using levels, like Log4J).
throw an exception, which would be caught by the kernel and translated into an error condition
consult another module for its status (because dependencies are the general case) and react accordingly. If needed, the dependencies could be made declarative.
The kernel takes care of module dependencies generically:
It sorts the modules so that dependencies are respected.
It doesn't initialize a module if one of its dependencies couldn't make it.
If asked to stop a module, it will first stop the modules that depend on it.
A nice feature of this kernel approach is that it is easy to aggregate the errors at various levels (although a fatal one could stop it) and report all of them at the end, using whatever means is available (SWT or not, Log4J or not...). So instead of discovering the problems one after the other, and having to start again each time, you can deliver them all in one go (nicely prioritized, of course).
Concerning your precise questions:
Should the app check itself for the required dependencies?
Yes (see above).
If yes, should the user be given specific details of what it's missing? Or just a message, and details to the logs?
As said above, when installing, the user is better prepared to deal with this.
When starting, we use a simple message for the end user, but give the developer access to the full stack traces (we have a button that copies the application environment, the stack traces, and so on to the clipboard).
What if the log4J library is unavailable?
Log without it (see above).
What is the best way to do the test? Verifying file existence (using file.exists() at a specified path), or loading a class, say Class.forName("org.apache.log4j.Logger")?
I would load a class. But if that failed, I might check the file existence on disk to give an improved message, including "how to fix it".
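A small sketch of that check (the jar path and the "how to fix" text are placeholders):

    import java.io.File;

    public final class DependencyCheck {

        // Returns null if Log4J is loadable, otherwise a message suitable for the user/console.
        public static String checkLog4j(String expectedJarPath) {
            try {
                Class.forName("org.apache.log4j.Logger");
                return null;
            } catch (ClassNotFoundException e) {
                StringBuilder msg = new StringBuilder("Log4J is not on the classpath.");
                if (!new File(expectedJarPath).exists()) {
                    msg.append(" Expected jar not found at ").append(expectedJarPath)
                       .append(". How to fix: copy log4j.jar into the application's lib directory.");
                }
                return msg.toString();
            }
        }
    }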
What would be the proper order of the checks? For instance, if I test for SWT first, I have no idea whether the logger is available, and the error will occur when I try to access it. Conversely, if I test for the logger first: a) the lib could be unavailable - I cannot log the error; b) SWT could be unavailable - I am unable to display the user message.
As I said above, I suggest these low-level errors be accumulated in a small area of code (the kernel), where you can use whatever is available to display them. If nothing is available, you can simply log to the console without Log4J.
The short answer is no. The JVM handles this for you at class-loading time or at runtime. If a required class is not found on the classpath, a ClassNotFoundException (or NoClassDefFoundError) is thrown. If the class was found but a required method was not, a NoSuchMethodError is thrown.
Regarding 1 through 3, there are two main use cases here:
Application packaging is under your control, and you can make sure that all required dependencies are packaged properly. Run-time validations are not useful here.
Application packaging is not under your control, and you deliver the main jar and instructions on what the requirements are. Run-time validations might be useful, but someone who wants to package your application usually has enough skill to understand what a ClassNotFoundException: org.apache.log4j.LogManager means.
Regarding 4, as long as you keep the same version of the dependency included in your project, you will have no problems in keeping control. Upgrading to a newer version is a conscious decision, which requires thought and testing.