From: Robert Ramey (ramey_at_[hidden])
Date: 2024-01-12 07:02:11
On 1/11/24 9:26 PM, Alan de Freitas via Boost wrote:
>>
>> a) Treating Boost "as a unit" and testing on this basis results in an
>> amount of work which increases with the square of the number of libraries.
>>
> 
> Sorry. Why exactly the square of the number of libraries?
Suppose you've got one library with 10 cases you want to test, and each 
test takes 1 second to run.  Now suppose you've got 2 libraries, each 
with 10 cases, and you're concerned about one library provoking a 
failure in the other.  Then for each test in the first library, there 
might be 10 conditions in the second library which you would want to 
test against: 100 combinations in all.  etc...
Actually, a better analysis might conclude that the number of possible 
cross-failure modes grows with the following sum (the number of ways to 
pick 1, 2, 3, ... libraries out of n):
n + n(n-1)/2 + n(n-1)(n-2)/6 + n(n-1)(n-2)(n-3)/24 + ...
Of course it's a crude measure (and argument).  But it illustrates that 
if you're trying to test the cross impacts of libraries, the number of 
possible failure modes increases disproportionately to the number of 
libraries being tested.
Actually, when we think we're "cross testing" we're really not, because 
we aren't writing tests designed to catch these kinds of failures.  So 
the whole idea that we're actually testing anything when we test 
everything all at once is very misleading.
A related situation occurs when conducting a scientific experiment. 
Typically such an experiment has a control case and a test case which 
varies from the control case in only one variable.  So if the two cases 
produce different results, we know that that one variable is the 
source of the difference.  Trying to test "all at once" is exactly the 
opposite of the scientific method.  The whole idea of unit testing is an 
attempt to make our testing more useful and scientific.
In the "old days", we would write the whole program from start to finish 
before we did any testing. This is comparable to the "cross testing" 
argument from earlier in this post. This wasn't called "testing"; it was 
called "debugging".  It proved to be a very inefficient and 
time-consuming process.  In light of the above, consider how much more 
time it takes to "debug" the whole program as opposed to testing each 
function/type individually.
As yet another aside, I worked for years as a freelance 
developer/consultant.  I only got called when things were stuck, they 
needed someone to take the blame, and they had no other choice.  Part of 
this was likely due to my annoying and pedantic personality.  I have 
never had a customer who wrote unit tests.  When I asked why, the 
answer was always "we haven't got time".
Historically, the idea of unit testing only really became a "thing" 
around the year 2000.  Imagine - 30-40 years of software development 
with the build and crash method.
Another historical note that I believe I'm repeating correctly: when 
the first stored-program computer was fired up, they tried a program 
like factoring a number or something.  They (including John von 
Neumann) were astonished that it didn't work the first time!!!  Given 
the mindset of my colleagues, this doesn't amaze me.
Another interesting note from the past was that up until ~1960 
programmers were almost all female.  It didn't take long (~10 years) 
before most of them were men.  I have no idea why this is/was. Make of 
this whatever you want. I'm sure someone will have a theory.
Robert Ramey