Can You Hear Me Now?: Real Testing

For about a year and a half, I owned a Motorola E815 mobile phone. I loved the thing. It worked flawlessly until the Bluetooth feature decided to stop working one day and I could no longer pair a headset with it. I called Verizon Wireless, which agreed there was a physical malfunction and offered to replace it with a refurbished unit. I took them up on their offer and received a replacement unit within three days.

Along with the replacement unit came a two-page printout of very cryptic test results. From what I could tell, they had hooked the refurbished unit up to a computer and run a battery of unit tests on the phone to prove to me and to themselves that I would receive a functioning unit. The tests came in two flavors:

  1. Happy Path
    “A well-defined test case that uses known input, that executes without exception, and that produces an expected output.” In other words, the computer testing my phone made phone calls, used the built-in contact list, and exercised other common functionality in ordinary ways.
  2. Boundary Condition
    Read any of the Pragmatic Unit Testing books (available in both Java and C# flavors) and you will learn that software often fails on unexpected input and boundary conditions–really large numbers, really large negative numbers, zero, null values, full hard disks, or anything else the developer wasn’t expecting when s/he was writing code.
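To make the two flavors concrete, here is a minimal sketch in Python. The `normalize_phone_number` function is hypothetical, invented purely for illustration: the happy path test feeds it known-good input and expects a known output, while the boundary tests throw null, empty, and absurdly large values at it.

```python
import unittest

def normalize_phone_number(raw):
    """Hypothetical helper: reduce a US phone number to its ten digits."""
    if raw is None:
        raise ValueError("phone number is required")
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop the leading country code
    if len(digits) != 10:
        raise ValueError("expected a 10-digit number, got %r" % raw)
    return digits

class PhoneNumberTests(unittest.TestCase):
    # Happy path: known input, no exception, expected output.
    def test_formatted_number_is_normalized(self):
        self.assertEqual(normalize_phone_number("(555) 123-4567"),
                         "5551234567")

    # Boundary conditions: the inputs the developer wasn't expecting.
    def test_null_input_is_rejected(self):
        with self.assertRaises(ValueError):
            normalize_phone_number(None)

    def test_empty_string_is_rejected(self):
        with self.assertRaises(ValueError):
            normalize_phone_number("")

    def test_absurdly_long_input_is_rejected(self):
        with self.assertRaises(ValueError):
            normalize_phone_number("9" * 100000)
```

Run it with `python -m unittest` and every test goes green–which, as the rest of this story shows, proves less than you might hope.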

I clearly remember thinking “Wow, yet another reason to like Verizon Wireless. They really tested this replacement phone.”

The funny thing was that the number two (2) button on the phone didn’t work all the time. After trying to live with the inconvenience of a fickle button, I called Verizon to get another replacement. Again I received a refurbished phone along with the same two-page printout with slightly different but successful test results. All the buttons worked this time, but the speaker buzzed like it was overdriving whenever someone would talk to me, even if the volume was all the way down at its lowest setting. After trying to live with that inconvenience, I again called to get a replacement. Another refurbished phone, accompanying test results, and this time one out of every three attempts to flip the phone open resulted in a phone power reset.

And then it dawned on me: Verizon (or Motorola, not quite sure) probably spends much time, effort, and money creating well thought-out and automated happy path and boundary condition tests to run on phones before shipping them out. However, I have a high degree of confidence that a human never tried to actually make a phone call with any of the phones I received. I noticed all three replacements were broken during the first calls I tried to make with them. All that time, effort, and money was wasted (in my situation at least). Once I realized the testing process for refurbished units was broken, I decided to just cough up the money and buy a totally new phone. (Which I just dropped the other day and shattered the external screen on. We’ll see how long I can live with that nuisance.)

The moral of this long story is not to bash Verizon. (Their network truly is what it’s hyped up to be.) The moral of the story is that real testing needs to be done. Verizon should be making real phone calls using real humans–or at least a robotic device that simulates a human’s interaction with its phones.

Integrated test suites that know the guts of an implementation and execute at lightning speed are great–let’s not discount those. However, we must ensure that real testing takes place from the deepest parts of the system all the way out to the point of human touch. Obviously, requiring humans to test every aspect of a product by hand is inhumane and grossly cost-inefficient. (This is particularly true in the case of multiple iterations of regression testing–don’t laugh, I’ve seen it happen.) Testers should strike a balance. They should use automated, but realistic, simulated-interaction tests against software, web sites, and product interfaces: GUI test suites that actually click an application’s buttons, and tools like Sahi, Selenium, or Watir to follow web-based hyperlinks and check checkboxes. This type of testing provides a nice balance of both automation and human-interaction simulation.

In short, testing should involve traditional, automated happy path and boundary condition tests; automated human-touch simulations; and, finally, real human-touch. The order of importance will depend on what exactly is being tested; just make sure all three happen on your project or else I might be blogging about you too.

3 thoughts on “Can You Hear Me Now?: Real Testing”

  1. You should join a real man’s network, like AT&T, so you wouldn’t have to worry about crappy phones. I’m gonna love the iPhone… Sucks for you guys though, the iPhone will be archaic when it finally is available for Verizon.


  2. Don’t want to get into a network war here, but as far as I and my friends who have/had AT&T can tell, Verizon had better coverage and more land-line-like quality…at least on the East Coast.

    I’m paying a monthly fee for good, quality mobility not necessarily the next cool phone. Although, I won’t lie. I will definitely be suffering from iPhone-envy in a few weeks. When you get yours, make sure to blog about it.

  3. I’ll be suffering from the same iPhone envy on my never-drop-a-call (and I used to constantly with AT&T) network. I’ll find solace in the fact that I can’t afford one.

    Excellent blog Ian.
