Day Three

A few days ago I left my day job to take a serious stab at a startup with Patrick Joyce.

Day one: Patrick helped me get up-to-speed on Mac OS X; we defined our tasks for the week; we looked at the problem domain and designed the core of our web app; and we set up our Rails development instance.

Day two: we developed with Patrick in the driver’s seat and me in the passenger’s. I played back-seat driver by helping to make design decisions, catching typos and logic bugs, and learning from the driver.

Day three: we developed with me in the driver’s seat and Patrick in the passenger’s. Patrick played back-seat driver.

Affirmations

  1. Having two complementary co-founders makes for good, quick, and concrete decisions. Good, quick, and concrete decisions lead to appropriate and quick actions.
  2. Mac OS X, Rails, and TextMate make web development super-efficient.
  3. Pair programming results in the development of correct, readable, and maintainable code.
  4. Working a reasonable number of hours, eating healthy food, and sleeping well keep you sharp.

Controlled Vocabulary on a Budget

I called a DSL provider recently to cancel a 30-day trial. The first action the system demanded of me was to enter my “ten digit account number.” I frantically started looking for my shipping statement to see if it had my account number on it. It didn’t. The automated voice told me I hadn’t entered my account number yet and asked for it again–time was ticking. I darted for my filing cabinet in search of my last phone bill; maybe they used the same account number, since both services were provided by the same company. But no, that couldn’t be it; my telephone account number was longer than ten digits. Then it dawned on me: the system was asking for my plain old phone number. I entered it, and the system recognized me.

Why didn’t it just ask me for my phone number in the first place?

Usability engineers promote the use of controlled vocabularies–consistent naming of items within products and services so that users quickly recognize the content being referenced. If there are two or more ways to name something, content authors should pick an authoritative descriptor and stick with it. Doing otherwise confuses readers and users. The goal is recognizable terms that lead to quick formation of concepts.

Several information architecture books cover the topic. My favorites are Information Architecture by Christina Wodtke, Hot Text, and Web Bloopers. All three suggest developing lexicons for site authors to use as the de facto list of approved descriptors; use of competing terms is prohibited.

Developing a lexicon upfront can be time-consuming, largely because you have to guess which terms will be necessary. It’s also expensive, because you have to decide which terms win among the competitors. There are three ways to make the construction of a controlled vocabulary inexpensive and simple:

  1. Use a wiki or something else that doesn’t involve a lot of administrative overhead to store and disseminate the list of terms and the synonyms that lost the naming war.
  2. Debate and add words only when authors and editors need to use them. This comes from a programming paradigm called lazy loading: don’t do the heavy lifting unless or until you absolutely have to. You can experiment with letting authors decide which terms win when concepts are first encountered, or with having authors get help from the senior editor responsible for defining the lexicon. Obviously, whoever is making the decision needs to be informed and logical. Experiment until you identify the method that works best for your team.
  3. Save intra- and interpersonal debate time by using authoritative sources. Use websites on the topic or pull out your old college textbooks to get the wording right. Minimize the amount of time you spend determining which words your team will be using.
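As a side note for the programmers: the lazy-loading idea in point 2 looks something like this in Ruby (the stack mentioned earlier on this blog). A term is resolved only the first time an author asks for it and is cached afterward; the resolver block is a hypothetical stand-in for whatever debate or editorial process your team uses.

```ruby
# A lazily-built lexicon: no term is decided until someone needs it.
class Lexicon
  def initialize(&resolver)
    @resolver = resolver # called once per concept, on first use
    @approved = {}       # concept => approved term, filled on demand
  end

  def term_for(concept)
    @approved[concept] ||= @resolver.call(concept)
  end
end

# Trivial illustrative resolver; in practice this is where the debate
# (or the senior editor) would happen.
lexicon = Lexicon.new { |concept| concept.to_s.tr("_", " ") }
lexicon.term_for(:phone_number) # resolved once, then cached
```

The point of the sketch is the `||=`: the expensive decision happens at most once per concept, and every later lookup is free.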

So that I can be as lazy and cheap as possible, and yet construct the appropriate lexicons for what I develop, I simply use Wikipedia whenever I can. The Wikipedia community is doing an excellent job creating a controlled vocabulary in a methodical way.

It’s okay to be lazy and cheap if you arrive at the right conclusion. It’s smarter than brute force.

(Speaking of laziness, the system could have just used Caller ID to identify me. But then I wouldn’t have anything to write about today.)

Closed-Ended Feedback

One of eBay’s strengths is its Feedback Forum, where users can give open and honest feedback to each other after transactions are completed. The feedback is open to all who want to view it, so users can decide whether they should buy from or sell to others based on prior feedback from the community. Feedback is recorded as a score that can be set to “positive,” “neutral,” or “negative,” along with an open-ended text entry so that users can articulate the specifics behind their score selection.

eBay assumes that a user is inherently neutral–that is, he is not good or evil. His lifetime Feedback Score starts off at zero (0). Whenever he receives a positive feedback post, his score increases by one; when he receives a neutral feedback post, it remains unchanged; and when he receives a negative feedback post, it decreases by one. All the while, a Positive Feedback percentage is calculated–much like a test grade in school.
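The scoring rules above fit in a few lines of Ruby. The +1/0/−1 deltas come straight from the description; the percentage formula (positives divided by positive-plus-negative posts, neutrals excluded) is my assumption about how the “test grade” is computed, not eBay’s documented algorithm.

```ruby
# Score delta per feedback post, as described above.
SCORE_DELTA = { positive: 1, neutral: 0, negative: -1 }

def feedback_score(posts)
  posts.sum { |post| SCORE_DELTA[post] }
end

# Assumed formula: neutrals don't count toward the percentage.
def positive_feedback_pct(posts)
  rated = posts.count { |post| post != :neutral }
  return 100.0 if rated.zero?
  100.0 * posts.count(:positive) / rated
end

posts = [:positive, :positive, :positive, :neutral, :negative]
feedback_score(posts)        # => 2
positive_feedback_pct(posts) # => 75.0
```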

[Image: eBay Feedback Profile]

The beauty of the eBay feedback model is its ability to convey whether a user can be trusted and to what degree the entire community agrees in that trust. All things being equal, given a choice between purchasing from a user with a Feedback Score of 2 and another with 157, most will choose the latter. The only problem with the Feedback Score display is the star graphic that is somehow tied to it. I have been using eBay for almost a decade, and the star’s variations still mean nothing to me; it only adds noise.

The more serious flaw is the open-ended text feedback. Users have to manually skim textual entries to get a feel for why someone has been given a particular score, and many entries add little or no value to the scores they describe. When every seller and buyer on eBay has “A+++++++++++++!!!!” entries, the playing field is leveled inappropriately. Good textual feedback typically falls into one of four categories: customer service, promptness of delivery, quality of the good sold, and whether the purchaser would buy from the seller again.

[Image: eBay Positive Comments]

If the textual responses were closed-ended instead, the feedback system could provide a clearer picture of why a user is getting the Feedback Score he is getting by calculating totals in each category. For example, this particular user had a history of sending imitation products, yet most users still gave positive feedback because everything else was stellar, including situations where products were returned. If the quality-of-good-sold category had a low score, those interested only in genuine products would steer away from this seller. Feedback would be specifically aggregated and useful.
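To make that concrete, here is a minimal Ruby sketch of closed-ended aggregation. The category names (drawn from the four kinds of comments above) and the 1–5 rating scale are hypothetical; eBay offers nothing of the sort.

```ruby
# Each closed-ended post rates fixed categories instead of free text.
CATEGORIES = %i[customer_service delivery_promptness item_quality would_buy_again]

# Average rating per category across all of a seller's feedback posts.
def category_averages(posts)
  CATEGORIES.map do |cat|
    ratings = posts.map { |post| post[cat] }
    [cat, ratings.sum.to_f / ratings.size]
  end.to_h
end

posts = [
  { customer_service: 5, delivery_promptness: 5, item_quality: 2, would_buy_again: 4 },
  { customer_service: 5, delivery_promptness: 4, item_quality: 1, would_buy_again: 3 }
]
category_averages(posts)[:item_quality] # => 1.5
```

A seller like the one described above would show stellar averages everywhere except `item_quality`, which is exactly the signal free-text “A+++” comments bury.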

Another benefit of closed-ended feedback is the prevention of flame wars, in which users trade verbal attacks on each other’s character. Flame wars are blindly subjective and heated by emotion rather than reason.

[Image: eBay Negative Comment]

They divide communities and make them unappealing to outsiders. Closed-ended feedback options avoid flame wars by keeping discussions objective.

Good metrics are devoid of emotion, and good metrics result in better decisions.

Analogies of a Parking Violation, Part Two: Governing Communities

Community governance was the second nerd thought that came to mind as I was soaking and scraping the parking violation sticker off my vehicle. Rules and guidelines exist within any reasonable community; the fun is in how strict they are and how they are enforced. My community’s homeowners association enforces rules centrally–it alone calls the shots and levies punishments, and the community doesn’t have much say in individual cases. Online communities, however, can handle individual cases collectively, which results in better monitoring, better decision making, and better enforcement.

Many sites have terms of use. Sites assume that typical uses are valid but provide a way for users to report misuses. Facebook, for example, has a “Report This Photo” link whenever you view an album image. If an image is reported, it is inspected by a Facebook team member who makes a final decision on whether the photo stays or goes (source: http://www.facebook.com/help.php?tab=safety#ansj7).

[Image: Facebook Report This Photo]

This technique was first popularized by the dating site HOTorNOT around 2000. One of the site’s founders, James Hong, originally hired his parents to screen flagged photos so he could continue coding. James quickly realized that this enforcement model had two problems. First, it didn’t scale as the number of photos on the site increased exponentially–he needed to hire more people. Second, his parents were looking at inappropriate pictures eight hours a day.

Over the past seven years, the site has slowly matured from a centralized moderation system to a decentralized one run by volunteers. Wikipedia has a nice explanation of the site’s implementation of the principles found in The Wisdom of Crowds–a book that argues that decentralized decision making results in better decisions. Although effective, the system requires volunteers who are willing to subject themselves to potentially vile images. In addition, as The Wisdom of Crowds points out, judgments rendered by appointed individuals do not accurately reflect the values of the community. We need a solution that relies on the community itself to make judgments.

Digg is a popular news and media aggregation site that thrives on democracy. Readers vote for or against published content; higher-ranked content gets more exposure, while the rest gets buried. One of its weaknesses, though, is its susceptibility to the mob effect.

A community-based alternative to the “Report This Photo” feature would be a voting mechanism that kicks in once an image has been flagged as a violation of the terms of use. When community members stumbled upon a flagged image, they would be given the option to vote for or against it. Once a certain threshold had been met (albeit a relatively low one), the image would be flagged as appropriate or inappropriate permanently. Inappropriate images would be blurred beyond human recognition or removed completely. Over time, those who voted in line with the community’s final decisions could be given weighted votes to expedite future judgment calls. Such weighted voters would have more influence on future cases not because they are considered experts on morality but because their judgments best reflect the entire community.

In order to provide a truly decentralized judgment system and avoid the mob effect, the vote tally would need to remain hidden. Taking the idea further, “Report This Photo” could simply be a facade over the voting system so that flagging and voting are truly blind. Viewing a photo without clicking the link would count as a vote for the photo to remain on the site; viewing it and clicking “Report This Photo” would count as a vote to remove it. Obviously, views would be counted once per user. (Maybe Facebook is doing all of this already but just hasn’t updated its help documentation.)
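The whole scheme fits in a small Ruby sketch. The threshold, the weight values, and the tie-breaking rule are all invented for illustration; nothing here reflects Facebook’s actual implementation.

```ruby
# Blind community voting on a flagged photo: plain views count as "keep"
# votes, "Report This Photo" clicks count as "remove" votes, and trusted
# voters carry extra weight. All numbers are illustrative.
class FlaggedPhoto
  THRESHOLD = 10.0 # total vote weight required before the verdict is permanent

  def initialize
    @keep = 0.0   # weight of views without a report click
    @remove = 0.0 # weight of "Report This Photo" clicks
    @voters = {}  # tracks users so each is counted only once
  end

  # weight > 1.0 models users whose past votes matched the community.
  def vote(user_id, report:, weight: 1.0)
    return if @voters.key?(user_id) || decided?
    @voters[user_id] = true
    if report
      @remove += weight
    else
      @keep += weight
    end
  end

  def decided?
    @keep + @remove >= THRESHOLD
  end

  def verdict
    return :pending unless decided?
    @remove > @keep ? :inappropriate : :appropriate
  end
end
```

In use, every page view of the flagged photo becomes `vote(user, report: false)` and every report click becomes `vote(user, report: true)`; the viewer never sees the tally, so the mob effect has nothing to feed on.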

If my real-life housing community were self-governed (like my parents’ neighborhood), monitoring and reporting would be handled by the community, and if what I was doing were truly an inconvenience, the community would act appropriately. Punishment would still need to be levied by the association, but it would be more in line with what the community deems appropriate.