[u-u] On testing Re: jobs @ rakuten/kobo
D'Arcy Cain
darcy at druid.net
Sun Aug 6 10:35:35 EDT 2017
On 08/06/2017 09:20 AM, Ken Burtch wrote:
> Hi Folks,
>
> In general regards to automated testing, there are a lot of myths that
> don't seem to be supported.
As someone who very successfully built a major system using agile
methods, including unit testing, I would like to clarify some ideas here.
> * As Dave suggests, unit tests are often written to get the test
> coverage green, rather than to ensure the program works as intended.
> This is one of the risks of hiring someone to write tests separate from
> the programmer, as the tester may not know what the program is
> supposed to do.
Tests are not supposed to be written by someone else. The proper
process is for the programmer (actually a pair of programmers in agile)
to write the test. The order is:
1. Write the test for a new feature or function
2. Run the test to prove that it fails
3. Write code until the test passes
4. Stop programming
Steps 2 and 4 are very important. See below.
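To make the cycle concrete, here is a minimal sketch in Python's
unittest; the "titlecase" feature is hypothetical, invented only for
illustration.

```python
import unittest

# Step 1: write the test for the new feature first.
class TestTitlecase(unittest.TestCase):
    def test_capitalizes_each_word(self):
        self.assertEqual(titlecase("hello world"), "Hello World")

# Step 2: run it now and it fails with NameError, proving the test
# really exercises something that does not exist yet.

# Step 3: write just enough code to make the test pass.
def titlecase(s):
    return " ".join(word.capitalize() for word in s.split())

# Step 4: stop. Any code written past this point is untested code.
```

Running the file with `python -m unittest` executes the suite; the
point is that the failing run in step 2 is what proves the test works.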
> * Unit testing 100% of the program is often seen as a good thing, when
> it can be a liability. There are parts of a program, for example, that
> may be there for future features or as good style to catch bugs during
> development, which are not meant to normally run. Yet they show up as
> dead code in coverage tests.
If it is a future feature then write it in the future. Requirements
change. Write what you need when you need it. I don't know what this
"good style" is but if it catches bugs then put it into the unit tests.
Don't clog up working code with it. However, once a test passes, stop
programming; otherwise you risk adding code that never gets tested. If
you think that a feature is missing then write the test that proves that
it is missing.
> * Some types of programming are cost-prohibitive to unit test, such as
> features that are temporary or are in constant flux. Such things may
Ask seasoned programmers about "temporary features" that are actually
temporary. I think I saw one in the wild once but it may have been a
mirage. And unit testing is way cheaper than debugging.
It is also important to run the unit tests all the time. We used to run
them from a cron job and, if there was any output, we emailed the result
to all the programmers. A regression never lasted more than 24 hours.
> not be worth unit testing. Other features, like handling out-of-memory
> or out-of-disk-space errors, may not be worth simulating because the
> operating environment itself becomes unstable and unpredictable under
> these conditions, and the test results may not mean much.
If there is a real possibility of these errors then there should be code
to handle them, and that code should have unit tests. It's way too
expensive to have the client discover these problems for you.
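You don't need to destabilize the machine to test these paths: the
error can be injected. A sketch, where `save_report` is a hypothetical
routine invented just to show the idea:

```python
import errno
import unittest
from unittest import mock

# Hypothetical save routine: the error path is real code,
# so it gets a real test.
def save_report(path, data):
    try:
        with open(path, "w") as f:
            f.write(data)
    except OSError:
        return False  # e.g. disk full; caller decides how to recover
    return True

class TestDiskFull(unittest.TestCase):
    def test_disk_full_is_handled(self):
        # Simulate ENOSPC without actually filling the disk.
        boom = OSError(errno.ENOSPC, "No space left on device")
        with mock.patch("builtins.open", side_effect=boom):
            self.assertFalse(save_report("report.txt", "data"))
```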
> * Too many managers assume 100% unit test coverage means a flawless
> program, and they don't consider other forms of testing, such as team
> walkthroughs (e.g. to ensure all exceptions are caught, all
> possibilities are identified and handled).
Nothing prevents secondary testing in this environment, and if it finds
problems then a unit test should be written for any code meant to fix
them. Also, no one expects 100% unit test coverage, or at least there
is no way to prove that you have achieved it. One of the most important benefits
of unit testing is to prevent regressions. When a problem is found then
a new unit test is written and once fixed all the tests are run to make
sure that the fix didn't break something that was already fixed before.
Finding a bug in any program is pretty much to be expected. Finding a
bug *again* should not be.
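A regression test is just a test that reproduces the bug report before
the fix goes in. The `word_count` function and its reported bug here
are hypothetical, purely for illustration:

```python
import unittest

# Hypothetical bug report: word_count("") returned 1.
# The fixed function and the test that pins the bug down forever:
def word_count(text):
    # split() with no argument treats an empty or all-whitespace
    # string as having zero words, which is the fixed behaviour.
    return len(text.split())

class TestWordCountRegression(unittest.TestCase):
    def test_empty_string_bug(self):
        # Written first, while the bug was live, so it failed then.
        self.assertEqual(word_count(""), 0)

    def test_normal_input_unbroken(self):
        # Rerunning the whole suite proves the fix broke nothing else.
        self.assertEqual(word_count("two words"), 2)
```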
> * Integration testing (i.e. requirements testing, black-box testing) is
> often neglected. Even if program components test successfully
> individually, it doesn't mean that they function correctly when
> assembled together. Ideally, integration testing should be separately
> evaluated with its own test coverage but seldom is.
Agreed but that's not an argument against unit testing.
> * TDD itself has many claims such as better design, faster development,
> eliminating documentation (since the tests themselves theoretically
> describe the project requirements) but I have not found these to be true.
My experience is absolutely the opposite.
> In regards to documentation, in my last couple of jobs, the developers
> argued that documentation slowed them down (deflected them from
> development) and was pointless when they were the only programmers on a
> project. They had little interest in having their projects ready to
> hand off to another developer should they leave.
That's why unit tests as well as self documenting programs are so
important. If you mean user documentation then that needs to be written
anyway and probably shouldn't be written by the programmers although
they should be asked to review it.
> When I introduced unit testing, the developers took offense, saying that
> testing questioned their skills and slowed them down. Their concerns
Perhaps you didn't describe the process or the benefits properly. I
found that after a short, initial skepticism the programmers loved the
idea and were the biggest proponents.
> about speed were not entirely unjustified, as management often measures
> job performance by short-term speed in closing work tickets, not
> quality, meeting industry standards or ease of maintenance, which can be
> bigger performance factors in the long-term.
Management sometimes needs a bit of training as well but again, I found
that after a few weeks they saw the amazing progress that the team made,
and over time the benefits just became more obvious. In fact, pair
programming was the harder sell, but if you can get them to let go of
the reins for a while they will see how well it works.
> Robert L. Glass, in "Facts and Fallacies of Software Engineering",
> cautions that there is no single good approach to testing, as each
> method has different strengths and weaknesses: a person must examine a
> project from different angles, using different approaches, to ensure
Sure. That's kind of applehood and mother pie.
> good quality. Unit testing, unfortunately, makes pretty Jenkins graphs
> that impress management, so developers often use that as an excuse to
> stop testing early and get back to development.
Never used Jenkins graphs. My reports to management were just the sheer
volume of completed tasks.
> Being a test engineer is a special personality, as many would find
> debugging other people's stuff all day long immensely boring.
That's why testing should be done by the programmers themselves.
--
D'Arcy J.M. Cain <darcy at druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 788 2246 (DoD#0082) (eNTP) | what's for dinner.
IM: darcy at Vex.Net, VoIP: sip:darcy at druid.net