Analogies of a Parking Violation, Part One: Security Enforcement

A few days ago I couldn’t find my car. After five minutes of pressing the panic button on my keychain, I realized that I had accidentally left it at the community pool behind my house. My homeowner’s association had placed a large, 5.5×4.25″ sticker on one of its windows, informing me that I had broken a community rule by leaving the car there overnight and that I had 72 hours to move it. Exhibit A:

Parking Violation

The pool’s parking lot never fills up during the times I park in it, and it has even more open spaces at night when no one is swimming. Regardless, I implicitly agreed not to leave my car there overnight when I bought my house, and I broke that agreement. I deserved the sticker.

As I moved my vehicle from the pool parking lot to a space in front of my house, I noticed four vehicles parked illegally in front of three clearly visible no-parking signs. The signs are there because of a fire hydrant and a pedestrian crosswalk. None of the vehicles had stickers on them.

In my year of living in the neighborhood, we have not had any fires; however, I have seen two people and a dog almost get clobbered by drivers because illegally parked vehicles were obstructing the view of the crosswalk and those trying to cross it.

If you have to choose between protecting a nearly empty pool parking lot and protecting a fire hydrant and a busy crosswalk, you protect the fire hydrant and crosswalk, for obvious reasons. There is real danger in an inaccessible fire hydrant and in playing chicken with pedestrians.

You would not believe the hours I have wasted arguing with other software engineers, security engineers, and server administrators about non-existent security issues in my software. Numerous individuals have insisted that HTTP GET query strings should never be used because users can change data values before sending them to the server, and they conclude that HTTP POST is “more secure.” I quickly disprove these claims by demonstrating how to muck with POST data using a local web proxy, and then help them understand when to use GET and when to use POST. (There are times when you should use POST over GET, but not for security’s sake.)
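
Here is the disproof in miniature. The sketch below uses Python’s requests library; the URL and field names are hypothetical. It forges a POST without any browser, form, or proxy involved, which is exactly why POST offers no security advantage over GET:

    import requests

    # The "normal" form submission the developer expects:
    requests.post("https://example.com/transfer", data={"amount": "10.00"})

    # The same endpoint with tampered values. No browser, form, or web
    # proxy required; the server sees an ordinary POST either way.
    requests.post("https://example.com/transfer", data={"amount": "-9999.00"})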

I have also been directed in the past to store database connection strings or service account credentials in the Windows registry instead of a configuration file. Access Control Lists (ACLs), the mechanism Windows uses to restrict access to objects, treat both registry keys and folders as logical containers and handle them in exactly the same way. There is no security difference between the two when they are protected by the same ACLs.
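
To make that concrete, here is a minimal sketch, assuming Windows and the pywin32 package; the folder and registry paths are hypothetical. One and the same API reads the DACL off a folder and off a registry key, because both are simply securable objects:

    import win32security

    # Read the DACL from a folder and a registry key with the same call.
    for name, obj_type in [
        (r"C:\MyApp\config", win32security.SE_FILE_OBJECT),
        (r"MACHINE\SOFTWARE\MyApp", win32security.SE_REGISTRY_KEY),
    ]:
        sd = win32security.GetNamedSecurityInfo(
            name, obj_type, win32security.DACL_SECURITY_INFORMATION)
        dacl = sd.GetSecurityDescriptorDacl()
        print(name, "has", dacl.GetAceCount(), "access control entries")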

While these individuals were distracted by these non-security “issues,” they were overlooking real security vulnerabilities: publicly accessible application log files and database build scripts, weak database usernames and passwords, and inconsistent access control matrices. They were focused on what appeared to be insecure while missing what truly was insecure. In just about every case, the misapplication of software security came down to not understanding why something was or was not insecure. They didn’t really understand what they were looking at.

To ensure that you and your colleagues apply your energy and acumen to that which truly is a security issue, seek to be educated by Bruce Schneier. He’s the expert on the topic and has written wonderful material on it.

There is a huge difference between that which appears to be a security risk and that which is a security risk: one is imagined, the other is real. We should all focus on the latter.

Helvetica and Software

I saw the documentary Helvetica Sunday night at AFI SILVERDOCS 2007. You have to see it. (Screenings are sparse and have been selling out, so you might have to wait until October, when the DVD comes out.)

Helvetica and its director

My friend Matt pointed out that the film interviews three groups of designers: those of the modern design camp, who saw the typeface’s birth in 1957 and love it to this day; those of the grunge design camp, who rejected structure in the late 60s and 70s and hate it for its lack of emotion; and cutting-edge designers who love it and are bringing the design community back to it.

This contrast between camps becomes clear in the middle of the film, when the interviewer asks designers why they like or dislike the typeface. Erik Spiekermann (of the modern design camp) summarizes its beauty when he states that Helvetica strikes a perfect balance between foreground and background: it does not distract the reader from the content of the message being communicated. Soon afterwards, David Carson (of the grunge design camp) explains that he, and other grunge designers by extension, believes graphic design should express the artist’s feelings as he reads the content. Carson illustrates with a personal experience: at one point in his career, he set an entire article in ITC Zapf Dingbats, rendering it unreadable, because he thought the writing was dry and boring.

Do you see what’s happening here? Grunge designers are forcing their personal impressions upon their audiences. I have no problem with this technique in art; art is often supposed to express an artist’s subjectivity and evoke a similar reaction in the audience. When presenting text, however, especially text written by another, designer expressiveness distracts. Readers are drawn to the way a chunk of text looks rather than to what it says. They are impressed but not convinced.

As I was watching this dichotomy unfold, I realized that the same mistake happens in software design. Developers can create subjectively “cool” functionality or user interface components and then force them on their users, not realizing that they are distracting users from the real reason they are using the software in the first place: to manage data. Unless users are trying to be wowed (video games, for example), developer expression is going to sidetrack or confuse them at best. This is why there are detailed user interface standards, guidelines, and best practices, and this is why software engineers and user interface designers should follow them.

Don’t try to be an artist unless you’re creating art.

Can You Hear Me Now?: Real Testing

For about a year and a half, I owned a Motorola E815 mobile phone. I loved the thing. It worked flawlessly until the Bluetooth feature stopped working one day and I could no longer pair a headset with it. I called Verizon Wireless, which agreed there was a physical malfunction and offered to replace the phone with a refurbished unit. I took them up on the offer and received a replacement within three days.

Along with the replacement unit came a two-page printout of very cryptic test results. From what I could tell, they had hooked the refurbished unit up to a computer and run a battery of unit tests on it to prove to me and to themselves that I would receive a functioning phone. The tests came in two flavors:

  1. Happy Path
    “A well-defined test case that uses known input, that executes without exception and that produces an expected output” (http://en.wikipedia.org/wiki/Happy_path). In other words, the computer testing my phone made phone calls, used the built-in contact list, and exercised other common functionality in ordinary ways.
  2. Boundary Condition
    Read any of the Pragmatic Unit Testing books (available in both Java and C# flavors) and you will learn that software often fails on unexpected input and boundary conditions: really large numbers, really large negative numbers, zero, null values, full hard disks, or anything else the developer wasn’t expecting when s/he was writing the code. (A sketch of both flavors follows this list.)
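
To illustrate the two flavors, here is a minimal sketch using Python’s unittest, written against a hypothetical dial() function; the mydialer module is made up for illustration:

    import unittest

    from mydialer import dial  # hypothetical module under test

    class DialTests(unittest.TestCase):
        def test_happy_path(self):
            # Known input, no exceptions, expected output.
            self.assertTrue(dial("555-867-5309").connected)

        def test_boundary_conditions(self):
            # Inputs the developer probably wasn't expecting.
            for bad in ["", None, "0" * 10000, "-1"]:
                with self.assertRaises(ValueError):
                    dial(bad)

    if __name__ == "__main__":
        unittest.main()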

I clearly remember thinking “Wow, yet another reason to like Verizon Wireless. They really tested this replacement phone.”

The funny thing was that the number two (2) button on the phone didn’t work all the time. After trying to live with the inconvenience of a fickle button, I called Verizon to get another replacement. Again I received a refurbished phone along with the same two-page printout, with slightly different but successful test results. All the buttons worked this time, but the speaker buzzed like it was overdriving whenever someone talked to me, even at the lowest volume setting. After trying to live with that inconvenience, I again called for a replacement. Another refurbished phone arrived, accompanied by test results, and this time one out of every three attempts to flip the phone open resulted in a power reset.

And then it dawned on me: Verizon (or Motorola, I’m not quite sure which) probably spends a great deal of time, effort, and money creating well thought-out, automated happy path and boundary condition tests to run on phones before shipping them out. However, I have a high degree of confidence that a human never actually tried to make a phone call with any of the phones I received. I noticed that all three replacements were broken during the first calls I tried to make with them. All that time, effort, and money was wasted (in my situation, at least). Once I realized the testing process for refurbished units was broken, I decided to cough up the money and buy a totally new phone. (Which I just dropped the other day, shattering the external screen. We’ll see how long I can live with that nuisance.)

The moral of this long story is not to bash Verizon. (Their network truly is everything it’s hyped up to be.) The moral is that real testing needs to be done. Verizon should be making real phone calls using real humans, or at least a robotic device that simulates a human’s interaction with its phones.

Integrated test suites that know the guts of an implementation and execute at lightning speed are great; let’s not discount those. However, we must ensure that real testing takes place from the deepest parts of the system all the way out to the point of human touch. Obviously, making humans test every part of a product by hand is inhumane and grossly cost-inefficient. (This is particularly true for multiple iterations of regression testing. Don’t laugh, I’ve seen it happen.) Testers should strike a balance: automated but realistic simulated-interaction tests against software, web sites, and product interfaces. They should use application test suites that actually click software buttons, and tools such as Sahi, Selenium, or Watir to click web-based hyperlinks and check checkboxes. This type of testing provides a nice balance of automation and human-interaction simulation.
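
As a taste of what that looks like in practice, here is a small sketch using Selenium’s Python bindings; the URL and element IDs are hypothetical. It drives a real browser the way a person would:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com/signup")
        # Click a real hyperlink, exactly as a person would.
        driver.find_element(By.LINK_TEXT, "Create an account").click()
        # Check a real checkbox and submit the form.
        driver.find_element(By.ID, "agree-to-terms").click()
        driver.find_element(By.ID, "submit").click()
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()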

In short, testing should involve traditional, automated happy path and boundary condition tests; automated human-touch simulations; and, finally, real human-touch. The order of importance will depend on what exactly is being tested; just make sure all three happen on your project or else I might be blogging about you too.

Usable Trash Cans and Metro Lines

My design life has been altered by three really good books:

  1. Donald Norman’s The Design of Everyday Things
  2. Edward Tufte’s Visual Explanations
  3. Steve Krug’s Don’t Make Me Think

After reading them, I can’t help but regularly see how I might go about fixing broken designs or simply improving ones that already work.

Last night was no exception.

Exhibit A: The Unusable Trash Can

While looking for a place to discard the remains of my dinner, I passed a row of recycling bins twice. Patrick, being a more intelligent individual, actually read the print on the recycling bins and noticed that one was really a trash can:

Unusable Trash Can

I’m all for reading and intelligent thinking, but whoever designed this fleet of waste bins could have done two things to aid their usability:

  1. Use a different color. Gestalt psychology teaches us that our brains tend to be holistic. When we see things that look the same, we at first believe they actually are the same, or at least highly similar. I saw three blue bins and assumed all three were for recycling. I was wrong.
  2. Remove the conflicting text. I don’t know about yours, but my mind treats recycling and trash as opposites. (I think it’s because of all the positive “marketing” I’ve heard over the years about the benefits of recycling over simply throwing things away.) I read “recycling” and stopped reading, because I wasn’t looking for a recycling bin; I was looking for a trash can. It was right there in front of me.

Exhibit B: The Red Line

Patrick and I had two options for which Metro station to start our trip from. He picked Grosvenor-Strathmore over White Flint because he knew that more trains visit Grosvenor and that we would be on our way more quickly if it was our starting point.

The Red Line

Both stations are on the Red Line, and no other lines intersect Grosvenor. So why and how can more trains visit it? Naturally, demand for the Metro increases the closer you get to the heart of DC, and Metro handles this demand by turning some trains around at this particular station rather than running them to the end of the line. Stations inside the turnback point, like Grosvenor, therefore see more trains than stations outside it, like White Flint.

How are ignorant people like me supposed to know this helpful information? As I was asking myself this question, my mind jumped to Minard’s map of Napoleon’s march, and I thought it would be nice if the thickness of the Metro lines on signs and printed material were proportional to the frequency of train service. In short, a thin line would mean few trains and a thick one would mean many.

Obviously, this idea breaks down if the train schedule is dynamic (which it isn’t) or if a train breaks down on the tracks and blocks traffic, which, unfortunately, my sister can attest to. Under normal conditions, however, it reflects reality and would probably prove useful to people planning their trips, sparing them a daunting, six-page train schedule table.
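
For the curious, here is a toy sketch of the idea using matplotlib; the station names are real, but the trains-per-hour figures are made up for illustration:

    import matplotlib.pyplot as plt

    # (from station, to station, trains per hour) -- illustrative numbers
    segments = [
        ("Shady Grove", "Grosvenor", 6),
        ("Grosvenor", "Silver Spring", 12),
        ("Silver Spring", "Glenmont", 6),
    ]

    fig, ax = plt.subplots()
    for i, (a, b, tph) in enumerate(segments):
        # Line width encodes service frequency, a la Minard.
        ax.plot([i, i + 1], [0, 0], color="red", linewidth=tph,
                solid_capstyle="butt")
        ax.annotate(a, (i, 0.02), rotation=45)
    ax.annotate(segments[-1][1], (len(segments), 0.02), rotation=45)
    ax.set_yticks([])
    ax.set_title("Line thickness = trains per hour (illustrative)")
    plt.show()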

Although neither thought is mind-blowing, they struck me as nice ones to reflect on and share.

(See Patrick’s post on Subway Maps and Scope Creep.)

Amazon Web Services

Tonight I went with my friend Patrick to a presentation by Jeff Barr, Senior Web Services Evangelist for Amazon. I was interested in learning more about the storage, computing, and queuing services I have been reading and hearing a lot about lately, and in checking out DCRUG, which hosted the event. (Thanks for the dinner.)

I’ll leave the explanation of the services to Amazon, as they do a fine job of it. However, I thought I would highlight some of Jeff’s points that may not be immediately apparent or published but are quite interesting and helpful to understand:

Groups often optimistically expect their web sites, applications, and services to grow in demand as time goes on. If they build a robust infrastructure up front, they typically waste precious time and money on an underutilized system. If they start in a shared environment instead, they imagine a magical transformation from a shared to a dedicated host without considering how they will move their system seamlessly, and they risk watching their creation cut off its own revenue stream as it grinds to a halt. Amazon Web Services are built with both robustness and low cost in mind.

Amazon is clearly building a framework, and its licensing agreement shows that they want people building businesses around their services. They’re providing the tools and want to see people build with them.

In addition to a base agreement, each web service has its own sublicense. I like this because it cuts down on legalese and makes the terms more digestible.

One of the team’s primary goals is to build simple, self-service offerings for developers. They don’t want to throw money at marketing, sales, or support when they don’t have to, and the savings in overhead translate into cheaper fees for the customers who use the services. They have invested quite a bit of time in making their APIs insanely simple and in building a positive, self-supporting community of developers through blogs, forums, and the like.

SQS is queue-like but does not guarantee strict FIFO ordering (exact ordering is not a trivial problem to solve at such a massive scale).
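
Here is a sketch of what that means in practice, using the modern boto3 SDK, which postdates this talk; the queue URL is hypothetical. Messages sent in a known order may come back in a different one, and occasionally more than once, so consumers must tolerate both:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

    # Send five messages in a known order.
    for i in range(5):
        sqs.send_message(QueueUrl=queue_url, MessageBody="message %d" % i)

    # Receive: the order may differ from the send order, and a message
    # may be delivered more than once. Design consumers to tolerate both.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
    for msg in resp.get("Messages", []):
        print(msg["Body"])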

Amazon doesn’t have as many points of presence as Akamai, so they do not see themselves in the content delivery market. They are still fast, however.

They are hoping to provide a way for vendors to dynamically charge for use of their software inside EC2. One of the audience members pointed out that vendors are currently clueless as to how to appropriately charge in an elastic computing environment.

Overall, I was quite pleased. Jeff did an excellent job evangelizing the services and answered a load of really good questions during and after his presentation. He really knew the services, so it wasn’t just a bunch of high-level garbage.

I’ve been converted.