Design bugs in everyday life

A long time ago, I think it was in the book "Information Rules", I read a great example of a pure inefficiency, a problem where everyone involved is worse off than they could be under a simple alternative scenario. At a deli or coffee shop, where coffee is served in disposable cups, it's better if the covers are the same size for different size cups (i.e. the circumference of the top of the cups is the same whether the cup is small, medium or large). Otherwise, people will waste time because they picked the wrong size cover and have to pick again, others will also waste time as they wait for them to get out of the way, some lids get wasted, the counter gets messier, etc. Many small problems arise from having different size lids.

It's a striking example because the difference between the right and wrong solution is so trivial. It's a design error, and if you realize it early on, the fix is virtually free, but later it's very costly. Nothing new in that; that's exactly the nature of software bugs, and it's not surprising that similar "bugs" exist elsewhere. But what really fascinated me in this example is that the bug doesn't get fixed even in subsequent versions! You still have many coffee places with different size cup covers, for decades, for no good reason. What a fascinating bug! It should have been crushed years ago and yet it continues to hang around generation after generation.

Since then, I've noticed other examples of design bugs in everyday life that are surprisingly resilient. Some, like the one above, probably survive because the inefficiency occurs in such tiny increments that we don't appreciate the cumulative cost. Others survive because all they need is one chance to get into the system, and then they are locked in forever. Here are a few random ones that I can think of right now:
  • Fonts where l (lower case L) and I (capital i) are hard to distinguish... It seems like if you are designing a font, making letters distinct would be one of your first requirements, so how do these fonts survive, and even thrive? Imagine all the damage that has been done throughout history because someone misread an "l" as an "I"... it's hard to estimate, but it must be huge. Maybe it caused a shipwreck at some point!
  • Alphanumeric key mapping on phones: a neat old idea which allows you to make memorable words out of phone numbers, like 1-800-FLOWERS. But let's look at that mapping on our phones. We have 2: ABC, 3: DEF, 4: GHI, 5: JKL, 6: MNO, 7: PQRS, 8: TUV, 9: WXYZ, and 1 and 0 have no letters. This leaves a small doubt in some cases: when you see an O, is it really an O, which makes it a 6, or is it a zero? Similarly, if I see an I, I'm not sure if I should dial a 4 or a 1. That's a bug in the design. The fix would obviously have been to assign I to 1 and O to 0, and then assign all the others alphabetically: 4: GHJ, 5: KLM, 6: NPQ, ... with a nice side effect that now all the keys would have exactly three letters on them (instead of two of them having four letters, or omitting Q and Z like they did on some old phones). Again a trivial fix, but the design bug got locked in, became the standard, and now it will never be fixed. Imagine... maybe some lives were lost because someone wasted precious seconds by dialing a 4 instead of a 1!
  • Bank ATMs that give you the cash before returning the card: it seems obvious that this will cause a lot of people to leave their card behind, which in turn is a huge cost for the bank and the customer! The fix is to design the machine to always return the card first and then proceed with the withdrawal or deposit. Fortunately this last bug seems more prevalent among older machines than newer ones, which hopefully means it's on its way to extinction...
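For fun, the keypad bug and its proposed fix are easy to play with in a few lines of code. Here's a sketch in Python; STANDARD is the real keypad layout shown above, while FIXED (and its alphabetic reshuffle) is just the hypothetical re-mapping proposed in the text, not any real standard:

```python
# Standard keypad mapping, as printed on the phone faceplate.
STANDARD = {
    '2': 'ABC', '3': 'DEF', '4': 'GHI', '5': 'JKL',
    '6': 'MNO', '7': 'PQRS', '8': 'TUV', '9': 'WXYZ',
}

# The hypothetical repaired layout: I goes to 1, O goes to 0, and the
# remaining 24 letters fall out alphabetically, exactly 3 per key.
FIXED = {
    '1': 'I', '0': 'O',
    '2': 'ABC', '3': 'DEF', '4': 'GHJ', '5': 'KLM',
    '6': 'NPQ', '7': 'RST', '8': 'UVW', '9': 'XYZ',
}

def to_digits(word, mapping):
    """Translate a vanity number into the digits you'd actually dial."""
    letter_to_digit = {ch: d for d, letters in mapping.items()
                       for ch in letters}
    return ''.join(letter_to_digit.get(c, c) for c in word.upper())

print(to_digits('1-800-FLOWERS', STANDARD))  # 1-800-3569377
print(to_digits('1-800-FLOWERS', FIXED))     # 1-800-3508377
```

Note how the O in FLOWERS dials differently under the two layouts: a 6 today, a 0 under the fix, which is precisely the ambiguity the bullet complains about.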


On Optics (channeling Safire)

William Safire recently died. Even though I disagreed with most of his political opinions, I loved and will miss his columns, especially "On Language". So I want to note his passing here, with my own little post on language. Now of course I don't aspire to be as entertaining or educational as he was... so I'll just rant about something I find annoying.

Politicians and journalists in the US have started using the word "optics" when they mean "perception" or "appearance". E.g. instead of "this looks bad" they say "the optics of this are not good"... ugh! An extremely annoying fad! Here are two quick examples I just found using my favorite search engine. The first one is from a politician in 2008:

Rep. Steve King, R-Iowa [...] said that terrorists would dance in the streets if Sen. Barack Obama, D-Illinois, is elected president [...] because Obama's middle name is "Hussein," his father's Muslim roots, and his appearance -- or "optics," as King put it. "I'll just say this that when you think about the optics of a Barack Obama potentially getting elected President of the United States -- and I mean, what does this look like to the rest of the world? What does it look like to the world of Islam? "

Well Mr. Congressman, the optics of this are that you are a pretentious buffoon who thinks that borrowing scientific sounding words makes you look smart. (Oh and it seems you are also a bigot... but that's off-topic here.) The second one is a more recent example from an actual writer this time:

"In response to the leak, the White House kicks into high damage-control mode [...], but even here shows some clumsiness, at least regarding civil-military optics: the 25 hours for the Olympics vs. 25 minutes for McChrystal optic..."

The "McChrystal optic"? To me those two words evoke a beam of light going through a solid material whose constituent atoms are arranged in an orderly repeating pattern. Which of course has nothing to do with what (I think) the writer meant to communicate -- something about Gen. McChrystal and perceptions. The faddish metaphor failed, the sentence is ugly and borderline incomprehensible. Way to go, Mr. Professional Writer.

I think this usage of "optics" right now, in 2009, is just at the point where it's a perfect indicator of a certain kind of pomposity. Normal people haven't started using it (and hopefully never will), but it seems to be trendy with hacks who either can't come up with better metaphors or fear that simple words would expose their paucity of meaning. Am I being too harsh? OK, let's give that last writer the benefit of the doubt, and see what else he's written... in an even more recent post, the following:

"President Obama and his advisors seem to be wrestling with this fundamental issue in Afghanistan and the optics and the body language...."

Bingo! Optics and body language.... just horrible isn't it?

It's not that I am just a cranky conservative when it comes to language -- far from it, I love its constant evolution -- slang, jargon, lingo.. it's all great! But that doesn't mean that all neologisms are good. It doesn't mean that "any struggle against the abuse of language is a sentimental archaism, like preferring candles to electric light or hansom cabs to aeroplanes", to quote George Orwell. For a new word, usage, phrase, or expression to work, for it to be cool, in any language "what is above all needed is to let the meaning choose the word, and not the other way around."

R.I.P. William Safire


Do you feel lucky... punk?

Here are two reasons why humanity might soon go extinct, and why it wouldn't be such a big loss. As you can see, I am in a cheerful mood today. 

Big rock from outer space 

Last year, using the example of the asteroid Apophis that might destroy the world in 27 years, I made the point that human beings are sometimes astonishingly stupid when it comes to making decisions that involve low probability events. If we were rational mathematical creatures, humanity as a whole should be willing to spend billions of dollars to insure against that 0.0023% chance that we will all be wiped out. If you don't like my argument based on the present value of future GDP, here's another way of arriving at the same point. If you are willing to spend, say, a trillion dollars on nuclear weapons to defend against other humans, and say there's a 1 in 50 chance that you actually need them, then logically you should be willing to spend a billion dollars on threats that have a 1 in 50,000 chance of happening. (I am using conservative orders of magnitude here; a nuclear war obviously has less than a 1 in 50 chance of happening, which makes my point even stronger.) Today, in this article from Ars Technica, I found out just how stupid we are.
Congress awarded NASA a $1.6 million grant in 1999 to put towards the NEO discovery program. Unfortunately, this was the only funding Congress gave to NASA to pursue this goal.
Yup, the US government allocated $1.6 million to save all of human life from extinction... Total! And just in case you are inclined to blame "the Americans" for being so short-sighted, consider that all the other countries in the world are allocating... ZERO! (Ok, maybe they have a couple of telescopes pointed at the sky, but we need giant laser beams or something...) At this point, I am almost rooting for the asteroid to kick human ass. We deserve it.
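The spending argument above is just proportional scaling. As a sanity check of the arithmetic (the dollar figures and probabilities are the same illustrative assumptions as in the text, not real budget numbers):

```python
# Back-of-the-envelope version of the argument: if we spend X against a
# threat of probability p, consistency says we should spend X * (q / p)
# against a threat of probability q.
nuclear_budget = 1e12       # $1 trillion on nuclear defense (assumed)
p_nuclear      = 1 / 50     # assumed chance that threat materializes
p_asteroid     = 1 / 50_000 # assumed chance of a catastrophic impact

implied_asteroid_budget = nuclear_budget * (p_asteroid / p_nuclear)
print(implied_asteroid_budget)  # about 1e9, i.e. a billion dollars
```

Against that implied billion, the actual $1.6 million allocation is off by roughly a factor of 600.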

Small germs from inner space 

And of course, a big stone falling from the sky is not the only threat we face. Tiny germs are threatening us too. Take the H1N1 virus -- the swine flu of recent fame. You'd think that at least when it comes to human health, humanity can be rational, right? Not so fast. Let's see how our favorite mammal is dealing with this problem. Consider the following article from the Guardian (great newspaper btw): "Experts warned dispersal of Tamiflu would do more harm than good", about the debate on antiviral treatments for H1N1. Here's the scientific view, summarized by one expert quoted in the article:
"Some people wanted to take a long-term view of the risk of resistance developing and to seek to preserve the effectiveness of antivirals for the next pandemic, which may be more severe."
"If you get a resistant strain that becomes dominant in the autumn, Tamiflu will then be useless."
And here's another scientist:
"I am concerned about the vast amount of Tamiflu that is going out almost unregulated," he told the Guardian. "We are increasing the possibility that the flu will become resistant sooner or later. At the moment there is no desperate need for Tamiflu. We should be reconsidering its issue, rather than encouraging its use. I think we should stop the national pandemic flu service. It was put there for an outbreak of far higher mortality than we have. If you get a resistant strain that becomes dominant in the autumn, Tamiflu will then be useless."
Ok, thank God for all these smart scientists who have thought it through! The politicians should logically follow their advice, right? Well, actually:
"It was felt ... it would simply be unacceptable to the UK population to tell them we had a huge stockpile of drugs but they were not going to be made available"
So they just decided to go ahead and do the wrong thing! It's like a parent saying: "If I told my 5 year old not to play with this loaded gun, he would have been upset, so I decided to let him play with it." Mind you, we're not talking about some distant threat here. The next mutation of the virus could come this autumn. Granted, there's a low probability that it will mutate into a real killer, but that's my whole point: it's a low probability but high impact threat. And faced with that, the British government is willingly increasing the probability of a pandemic that could kill hundreds of millions of people, because they are afraid of being unpopular for the next two months! Seriously! If this were a movie, whose side would you be on? I would be like: Humans suck! Go H1, Go N1, it's your birthday!

No rare events in the savanna 

None of this is original of course. Evolutionary biologists will say it's because our brain evolved in an environment where we just never had to consider small probabilities. We have no problem dealing with quantities like "if I go left, I get 1 potato, if I turn right I get 12 eggs"... Our brain can compute those things even as a toddler. But things like "1 in 50,000 chance" just don't compute in ye olde wetware. It's only after years of formal schooling, around the high-school level, that we start to get intuition about really small numbers. Because until the modern age, we didn't need to! Sure, there were rare events like being hit by lightning or caught in an earthquake, but since there wasn't anything we could do about them, there was no evolutionary advantage to being able to reason logically about really small probabilities. Good old superstition worked just as well. You could say "I got hit by lightning because Zeus is angry at me because I didn't offer an animal sacrifice". If you are a hunter-gatherer living in the bush, that explanation is, practically speaking, just as good as the scientific one. But now, by our own hands, we have a world where we do need to reason about small probabilities... Problem is, the brain hasn't caught up! Global warming is another example. Twenty years ago, it was a low probability but high impact threat, just like our two examples above. Scientists were running around screaming "There's a 1 in 100 chance that the polar ice caps will melt! That's huge!" But humanity just couldn't deal with it. People were like: "One in a hundred chance of extinction? Pffft. I'm feeling lucky. Let me go buy a lottery ticket."
Well now global warming is in the same range of probability as 1 potato and 12 eggs, so people are dealing with it, but it may be too late. Is this the end-game of evolution? Is this what the epitaph will say:
Here lies humanity. They became really good at reproduction -- 6 billion individuals! But not quite good enough at probability.
Maybe it's all part of a master plan. A conspiracy! Apophis contains some organic molecules which are distant relatives of the H1N1 virus. Together the asteroid and the swine flu are collaborating to take us out, and recolonize the planet with a new dominant species that they like better. After all, that could be how we got here too!


Gell-Mann Amnesia

Last November, I came across this piece by Michael Crichton. I found the following bit brilliant:
Media carries with it a credibility that is totally undeserved. You have all experienced this, in what I call the Murray Gell-Mann Amnesia effect. (I call it by this name because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have.)

Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward -- reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

In any case, you read with exasperation or amusement the multiple errors in a story -- and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.

That is the Gell-Mann Amnesia effect. I'd point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all.
But when it comes to the media, we believe against evidence that it is probably worth our time to read other parts of the paper. When, in fact, it almost certainly isn't. The only possible explanation for our behavior is amnesia.
Brilliant! It reminds me of my rants about the NY Times. Yet I still buy it most days.... Amnesia.

Note: when I wrote that post, in November 2007, the Times' newsstand price had just increased from $1 to $1.25. Now it's $2. 100% increase in less than two years!


Mark Cuban's advice to Myspace

I made what turned out to be a rather lengthy comment on the latest post at blogmaverick.com wherein Mark Cuban gives advice to Rupert Murdoch. A quick survey of my readers (hey me!) indicated that close to 100% would like to have that insightful comment right here on their favorite blog. Hence this post.

The first part of Cuban's advice is kind of crazy. He wants news sites to block incoming links from aggregators. Block links! That's a surprising level of cluelessness from our good friend, who is getting all the flak he deserves for that idea from other people, so I won't add to it.

The more interesting part of the post is on Myspace's potential future business model... I really think he's on to something. Here's what I had to say about it (Since Wave is not integrated with Blogger yet, I can only cut & paste):

Excellent advice for Myspace, Mark! I think being a music platform is the best business plan for them. They have the audience with the right demographics, and the artists. For now… But they can’t pull it off with the website they have today. So the big question is, do they have the technical capability to support that business plan?

It would take a significant breakthrough, a next generation web application. It would have streaming, download and playback, and syncing with devices, all better than or at least as good as today's iTunes client/server combo. It would also have to be a great authoring/publication tool for artists to easily create a good looking online presence, perhaps even with some actual post-production music features to create special samples and mixes…

In short, they need a site that is as different from today's Myspace pages as, let's say, Gmail in 2009 is different from Hotmail of 1999. The ingredients are available and ripe: after years of stagnation, browsers and web languages are in a period of intense innovation. But can Myspace pull them together to create a cool and, as Steve Jobs would say, "insanely great" technology for the new web-based music universe? I doubt it. I just don't see any evidence whatsoever, at Myspace or anywhere else at News Corp, of the level of technical depth required to lead the world into this new -- dare I say it? -- "web 3.0" music world. Still, you are right IMHO, it's their best bet and they should at least try rather than wither away.


Chronicle of a death foretold

Sometimes little things are very telling. In the case of Facebook, one of those things is "Reply-to:"... or rather three of those things. To wit:
  • When you receive a message on Facebook, it sends you a notification by email. Great. But then you can't reply! Why? Why don't they just set the reply-to in the email header to an address that will send it back to the person's inbox in Facebook? That way you and your friend are still communicating through Facebook but with the added convenience of email e.g. on your mobile. But no, they force you to login to the Facebook website to reply.
  • Similarly, suppose you are logged in to Facebook, and you want to email your friend. You go to your friend's profile, and guess what, you can't click on the email address to send them email! Why? Why can't they just make it a mailto link?
  • So ok, you decide to just copy and paste the address, of course. But you can't -- it's an image! Why? Whyyyyy? Why can't they just leave it in plain text, why do they want to go through the extra expense of converting everyone's email into an image?
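That first fix really is a one-header change. Here's a minimal sketch of the notification email with a Reply-To pointing at a relay address that could route the reply back into the sender's Facebook inbox; all the addresses are made up for illustration, not Facebook's actual domains:

```python
# Sketch: a notification email that replies would route back through
# the site's relay, instead of a dead-end no-reply address.
from email.message import EmailMessage

msg = EmailMessage()
msg['From'] = 'notification@facebookmail.example'
msg['To'] = 'user@example.com'
msg['Subject'] = 'New message from Alice'
# The one header that would fix the first complaint above: any mail
# client (e.g. on your phone) would reply through the relay.
msg['Reply-To'] = 'msg-reply+abc123@facebookmail.example'
msg.set_content('Alice wrote: "Lunch tomorrow?"')

print(msg['Reply-To'])
```

The `+abc123` token in the relay address is the standard trick for tying the reply back to a specific conversation thread.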
In short, they are really really going out of their way to discourage you from using your friend's email address. Why? Fear of spam is not the reason, we're talking about authenticated contacts. Obviously the reason is that their business model is such that when you visit their website, they make (or at least hope to make) money from advertising.

The technical ideal here is obviously flexibility: let users exchange emails, SMSes, IMs, everything they want with their friends, with Facebook being the hub of their online universe. Instead of re-inventing separate and more primitive versions of email and IM inside their closed world, they could inter-connect and inter-operate. They could also for example enable you to chat with your Facebook contacts directly even if only one of you is logged in to Facebook and the other is on AIM, Yahoo Messenger, MSN messenger, or Google Talk... If Gaim and Trillian could do that years ago, surely Facebook can. They could effectively unify all the existing message systems into a grand Facebook Open Overlay IM ("FOO IM"). It would be a great service to their users, and a manifestation of the core raison d'être of a social network. And of course, they already have plenty of employees there who are very smart and experienced with this kind of stuff, so they definitely could. But, instead of doing the right thing, their business model is forcing them to instead handicap their users' communications!

Every company must have a way to make money, of course. Through some combination of good ideas, timing, environment, luck, etc., companies end up with very different business models. Here it looks like Facebook is trending toward one which requires an "adversarial" relationship with the user. We're seeing hints that their need to reach profitability is starting to go against the best interest of their users. Sure, you can still make money that way. But that road is ugly. Down that road you end up with health insurance companies whose profits rely on denying coverage to people who thought they had paid for it. Shady calling cards that put obstacles in your way so you can't fully use the advertised number of minutes. Sleazy subscription schemes that generate profits by making it difficult to cancel even when you are entitled to. Everyone knows that world, those businesses you just hate, the ones you complain about. Those are simply businesses where the company's incentives are not aligned with the users'.

In that sense, Facebook today is eerily reminiscent of AOL in the late 1990s. Facebook is the king of social networks with something like 300 million users. AOL was the king of Internet access providers, with 30 million users paying $20/month! (Here's an interesting side question: how do Facebook users as a percentage of total Internet users today compare to AOL subscribers as a percentage of the total Internet population in 1999? I wouldn't be surprised if it's roughly the same.) And at the very peak of its dominance, AOL was showing the same signs. Instead of letting their users just go to any website directly, they had a limited proprietary system with "rooms", "keywords", "channels", their own content, their own applications, etc. The reason was that they were stuck in the business model of a closed online service from the 1980s. So even though they knew the open network was infinitely better, they were devoted to the doomed goal of keeping users inside their own closed world. Inevitably their users realized they could get more for less: pay $10/month to a no-name ISP, use a free browser and just surf the web... ("surf the web" sounds so quaint, doesn't it?) And they started leaving AOL in droves. Even after merging with Time Warner, AOL couldn't capitalize on the shift to broadband. They remained desperately focused on trying to keep subscribers from leaving the old "America On Line", and became a monster that took adversarial customer relations to a whole new level, before finally giving up in 2006. (By the way, all this has little to do with what AOL is today in 2009.)

To be sure.... Wow for a long time, I've wanted to start a paragraph with "To be sure ...", and this is the first! But I digress.

To be sure, despite the dramatic title of this post, and despite the fact that I've picked on them once before, it's far from over for Facebook. They may yet decide to give users the obvious flexibility, and make enough money with higher quality ad targeting when users naturally come to the site anyway. Maybe they will find new ways to advertise as messages flow openly in and out of their network, or maybe they will figure out brand new business models. Whatever the case, they do have one great thing going for them: execution. They know how to get things done. You don't get to 300 million users by being stupid or lazy. They just need to make sure they are not smartly and expertly marching off a cliff.

Ornella Muti, isn't she beautiful?
The title of this post, by the way, is from a novel by one of my favorite authors. Not his best novel, but a great title. And a pretty good movie too.


$135B: pros and cons

Last week, a friend pointed me to the following story:

"Italy’s financial police (Guardia italiana di Finanza) has seized US bonds worth US 134.5 billion from two Japanese nationals ...."

My first thought .... well not my first, the first was of course: "WTF?!!", but the second or third thought was "this could be terrorism!" A deliberate attack on the ability of the US govt to finance itself, by shaking confidence in the debt instruments.

And of course I expected it to be huge news. The biggest case of counterfeiting in history and a new kind of terrorism etc, etc. Yet, I searched and searched, and there was barely any mention of it anywhere else that day and the next day! No follow-ups, no debunking, nothing. In fact even the initial story is completely absent from mainstream news. Why the silence, what's going on? Where are all the experts and the pros? Then when you think about it... it makes sense. If it really is an attack, an attack on the very essence of money -- confidence, that's exactly how you would want to respond isn't it? Is it possible that say all the reporters who called the US Federal Reserve for comment got a quiet very high level response saying: "please bury this story", and did so? After all it is well known that major US news organizations have in the recent past complied when the US government asked them not to reveal national security secrets that they knew.

Today a week later, there are still very few stories about it on the web, and zero from the major US news organizations, nothing from the New York Times, Washington Post, CNN et al.

Here's a good blog post theorizing that the bonds must be counterfeit, and likely designed to be caught.


100% wireless

Continuing with the persotechnomobilephoto upload theme:

This past weekend, I finally completed the last step of a very gradual evolution... to H. s. sapiens radionsis, i.e. the all-wireless man. It started with giving up the telephone landline years ago and just using the cell phone. Then, after a recent move, I could get many high-def digital TV channels completely free... via good old fashioned broadcast.

So I decided to take the leap with Internet too. I had a Sprint broadband wireless modem I'd been using on a laptop, and one day plugged it into my 4-year-old Mac mini. That worked, no problem (after I realized I had to put in the "phone number" #777). So now, after procrastinating for weeks, I was finally going to look into how to enable IP forwarding in OS X, and what do you know, it's right there in System Preferences under Network: there's an icon for sharing, et voila! Next I needed a DHCP server, and I was just getting ready to download and install one when, lo and behold, I saw there's already one in OS X. Oh cool, home network done. Right now I am writing this post from my laptop, which is connected to the Internet wirelessly via the Mac, which is connected to the Sprint cellular network. It all just works! Sometimes you just have to say: cool! This is another one of those times.
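For the curious, here's roughly what that sharing checkbox turns on under the hood. This is a sketch based on how that era of OS X (via its FreeBSD underpinnings) did forwarding and NAT; the exact mechanics vary by version, the interface name is a guess, and all of it needs root:

```shell
# Enable kernel IP forwarding (the part the System Preferences
# "Internet Sharing" toggle flips for you):
sysctl -w net.inet.ip.forwarding=1

# Older OS X releases used natd + ipfw for the address translation;
# here ppp0 stands in for the Sprint modem's interface:
natd -interface ppp0
ipfw add divert natd all from any to any via ppp0

# A DHCP server (bootpd on OS X, the one already "in the box") then
# hands out private addresses to machines on the shared side.
```

None of this is needed when the checkbox works, which is exactly the point: the GUI wraps this whole stack.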

The same little Mac is also the host of my music collection, and an old-fashioned CD player too. It's also plugged into the TV to serve as a DVD player. And of course it's a computer connected to the Internet, so with a bluetooth mouse and keyboard, I can kick back and surf the net on the big screen. That goes very well with boxee.tv, which makes it really hard to miss cable TV (I never really cared much for it and barely ever had it anyway).

Two cool things. First, the little Mac mini has totally replaced the roles normally played by a DSL/cable modem, a WiFi router, a home computer, a set-top cable/satellite box, and a DVD/CD player. Not only has it replaced them all, but for the home environment, it actually does a better job in many of those cases! All in one very compact, well designed little box where all the details just worked so easily I probably spent more time writing this post than setting it all up. What can I say but: I love OS X and the Mac mini.

The second cool thing is that I now enjoy home Internet, TV, and phone service all wirelessly, untethered! I could just as well be living on a boat... with a big battery.

Always remember batteries. In this wonderfully convenient wirelessness, the weak link is the battery. Was it Napoleon who said something about how a whole battle could turn on a simple horseshoe? Today the same can be said about batteries.


Meta social network

I'm forced to recognize that despite making fun of the social networking hype, I do use a number of web things which feed other things and involve people I know. So, jokes aside, here's an attempt at making sense of the feed topology, my personal feedological graphic if you will:

This is also my first attempt at a blog post from a mobile photo... I took the above a few days ago and synced it to Picasa over the air. But you can't rotate!


Trouble in (AAPL) paradise?

Henri Rousseau, Le reve (de Yadwiga?)
Last year my beloved blackberry was stolen... at gunpoint! That was the single most lopsided cost and benefit equation (for all involved) I have ever been a part of in my life... but that's not today's story. The story is, I decided to replace it with a second generation iPhone (with 3G and GPS), which had just come out. I'm not about to write a product review, God knows enough has been written about the iPhone. I'll just say it's a really cool device.

But there's one aspect that doesn't seem to be talked about at all. A few weeks later I went to Ethiopia, and coincidentally, I got one of the very first 3G SIM cards in the country. Amazing: the coolest phone and the fastest wireless network, woohoo. Except... there was no crack to unlock the 3G iPhone! So I had to carry two phones: one to make calls, and the iPhone for my address book etc. Second, my MacBook Pro doesn't have a modem! And of course, who remembers to take an extra modem with them? Thankfully I had an old IBM Thinkpad, which has a built-in modem, so I could get on the Internet. The point is that the two Apple products I had were unusable in the third world, whereas their competitors' products (IBM and Blackberry in this case) are perfectly usable in those same conditions.

When I came back, a few weeks later, I was given an HTC G1 Android. Again many people have written comparing the two, but the thing that immediately struck me as the most important in comparing the two is a very basic point. The Android doesn't assume you have a computer. Everything is over the air, your contacts, applications, OS updates etc. are all updated/synced wirelessly. Whereas the iPhone requires that you have a computer, and a pretty powerful one at that (it has to be able to run iTunes on Windows or Mac OS X). To use an iPhone, you have to not only buy the phone, you must also already have a $1,000-$2,000 computer at home. If you live in the first world, that's a perfectly valid assumption, no problem. But it means that Apple's total market is a few hundred million people in the first world. This is true of Apple products in general, but is even more true of the iPhone which is a hugely important piece of that company's future. For comparison, Android's market is those people, plus the other 3 billion people in the world who can afford a $200 phone but not a $2,000 computer.

Then a few months ago I read a great blog post (unfortunately I can't find the URL to link) which argued that because Apple's marketing has been based on "coolness" and "exclusivity", once a device reaches a critical mass of users, the marketing starts defeating itself. It's the same psychology that limits the lifespan of new fashions or of "hip" nightclubs: exclusivity is key to success, and eventually, when the B & T crowd can get in, it's no longer cool.

Add to that the phenomenal success of iPhone sales so far, and you can only conclude that pretty soon it might, just might, saturate its potential market, much sooner than you would expect. There's some evidence this is already happening with the iPod. And the iPhone has more formidable competitors and more complicated market dynamics than the iPod.

Recall what happened with personal computers: Apple invented the category and dominated it with a unique approach until the mid 80s. But as the overall market grew from millions to billions of users, they peaked and ended up stuck at well under 5% market share, as cheaper and uglier IBM PC clones took the other 95+%. On the other hand, in the last 10 years, Apple has pulled off several bet-the-farm miracles: not just the invention of the iPod and the iPhone, but also two earlier huge gambles, switching the Mac from PowerPC to Intel CPUs and switching from the old Mac OS to the Unix-based OS X. Both were incredible successes of business and engineering that completely defied the conventional wisdom.

So this is a tricky one. It could go either way. But I'm going to go out on a limb and predict that 2008 was the year of Apple's peak. Short AAPL.


Florida 2000

I was reading Super Crunchers a little while ago. I got to this passage which is one of those things that are deceptively low-key but then make you go WTF? A big WTF?!! Such a big one in fact that I am quoting it here. At the end of a section about data mashing the author adds the following cautionary tale (pp. 138-139):
Yet the art of indirect matching can also be prone to error. Database Technologies (DBT), a company that was ultimately purchased by ChoicePoint, got in a lot of trouble for indirectly identifying felons before the 2000 Florida elections. The state of Florida hired DBT to create a list of potential people to remove from the list of registered voters. DBT matched the database of registered voters to lists of convicted felons not just from Florida but from every state in the union. The most direct and conservative means to match would have been to use the voter's name and date of birth as necessary identifiers. But DBT, possibly under direction from Florida's Division of Elections, cast a much broader net [...] Its matching algorithm required only a 90 percent match between the name of the registered voter and the name of the convict. In practice this meant that there were lots of false positives [....] For example the Rev. Willie D. Whiting, Jr., a registered voter was initially told that he could not vote because someone named Willie J. Whiting, born two days later, had a felony conviction. The Division of Elections also required DBT to perform "nickname matches" for first names and to match on first and last names regardless of their order -- so that the name Deborah Ann would also match the name Ann Deborah, for example.

The combination of these low matching requirements together with the broad universe of all state felonies produced a staggeringly large list of 57,746 registered Floridians who were identified as convicted felons. The concern was not just with the likely large number of false positives, but also with the likelihood that a disproportionate number of the so-called purged registrations would be for African-American voters. This is especially true because the algorithm was not relaxed when it came to race. Only registered voters who exactly matched the race of the convict were subject to exclusion from the voting rolls.

[...] What makes the DBT story so troubling is that the convict/voter data seemed so poorly matched relative to the standards of modern-day merging and mashing.
Note that this book is all about number crunching, not politics, and overall very optimistic, gung-ho even. But this brief passage, specifically the things that I have highlighted in bold above, gave me pause... It's been bothering me for a couple of weeks.

Clearly, technically, DBT made a blatant mistake, as the author concludes. But how come? Why did the government of Florida give directions that led directly and predictably to the "mistakes"? First of all, why allow any false positives at all? It's perfectly possible to get almost zero false positives if you tolerate more false negatives, i.e. err on the safe side. In fact, in legal terms, that's the rule: "innocent until proven guilty" -- not 90 percent, but beyond a reasonable doubt. How could they accidentally forget this principle when it came to denying basic rights like voting? Second, in addition to the bias mentioned in the passage, it seems obvious to me that the same names occur with higher frequency among African Americans. So not only did they err on the unsafe side, but the way in which the error expanded happened to be doubly targeted at a particular demographic group -- how come? Oh, and who ran the state government of Florida at the time, and who benefited from those errors? Those are rhetorical questions, by the way. But it's still surprising. Anyway, it's history now...
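To see how forgiving a "90 percent match" really is, here's a toy sketch using Python's difflib. To be clear, DBT's actual algorithm isn't described in the passage; this just illustrates how a 0.9 string-similarity threshold behaves on names like the ones quoted above:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Two different people, one middle initial apart -- scores
# well above 0.9, i.e. a false positive under that threshold:
r1 = name_similarity("Willie D. Whiting", "Willie J. Whiting")
print(r1 > 0.9)  # True

# A swapped-order name scores low on plain similarity, which is
# presumably why order-insensitive matching had to be added as a
# separate rule on top of the threshold:
r2 = name_similarity("Deborah Ann", "Ann Deborah")
print(r2 > 0.9)  # False
```

The asymmetry is the point: a single-character difference between two distinct people sails through, while the threshold alone misses the variants they actually wanted, so the net only ever gets broader.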

Speaking of history, the title of this post comes from the name of a disco in Nairobi way back in the day. The first time I ever heard about the concept of a nightclub was when we drove by Florida 2000 one day, and I asked what's that place? I was too young to even think about going in but it was a fascinating thing -- it actually looked like a flying saucer.

And speaking of flying saucers, I wonder what it must be like for kids today not to have the "Year 2000" in the future... Which reminds me of a song by Fela and Roy Ayers.


What happened to Kavo, Tizaa?

I want my
I want my
I want my DVRP2P

From the first time I heard about Tivo, I expected the obvious next step, which would be gargantuan, the biggest thing since the web browser. It was obvious: Tivo + Napster! But Napster was already dead, so I started saying: Tivo + Kazaa! Kazaa just happened to be the hot P2P file-sharing network at the time (circa 2002). It's not necessarily Tivo anymore either; DVR is now a whole category. So to update the idea, let's call it DVR+P2P.

It's obvious. The DVR is basically a computer with a big hard drive and a fancy video decoder/tuner card. Tivo is essentially an application that runs on Linux, I believe. Moreover, DVRs connect to the Internet. So if they just added a P2P client to it, boom! Suddenly not only can you record your own TV programs, you can also search every other user's recorded programs. This means almost anything that has ever been on TV, on any channel, is accessible for viewing on demand by everyone! The benefit to users would be... I can't find a strong enough superlative. It'd obviously be HUGE. And incredibly easy to do.
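The whole idea fits in a few lines. Here's a toy, in-memory sketch of it; every name here (DVR, Recording, search_peers) is made up for illustration, and a real system would of course need an actual network protocol, not local objects:

```python
# Each DVR publishes a catalog of its recordings; a search simply
# fans out across all peers and collects matches.
from dataclasses import dataclass, field

@dataclass
class Recording:
    title: str
    channel: str

@dataclass
class DVR:
    owner: str
    catalog: list = field(default_factory=list)

    def record(self, title: str, channel: str) -> None:
        self.catalog.append(Recording(title, channel))

def search_peers(peers: list, query: str) -> list:
    """Return (owner, recording) pairs matching the query across all DVRs."""
    q = query.lower()
    return [(dvr.owner, rec)
            for dvr in peers
            for rec in dvr.catalog
            if q in rec.title.lower()]

alice, bob = DVR("alice"), DVR("bob")
alice.record("Nova: Origins", "PBS")
bob.record("Nova: The Elegant Universe", "PBS")
hits = search_peers([alice, bob], "nova")
print(len(hits))  # 2 -- one hit from each peer's catalog
```

That's the entire technical content of DVR+P2P: a shared index over everyone's hard drives. The hard part was never the engineering.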

So why hasn't it happened yet?

  • Copyright infringement? This is running on a closed device, so they could easily restrict the software to search only "legal" videos from the same cable or satellite provider.
  • Advertising? They already allow fast-forwarding through commercials, it doesn't seem to have killed the ad revenue. In any case they could disable ffwd if they wanted to.
  • Revenue? DVR+P2P would be so great they could charge any price for the service and everyone would still sign up for it.

Are they just extremely paranoid?

Here's what, AFAIK, is the conventional theory about this: traditional laws relied on the physical form of books, records, etc. to control the amount of copying, and now, with digital media + data networks making copying exponentially easier, the laws just don't fit anymore, so there will be some major adjustments in the coming decades. In the meantime, content owners are paranoid and are just blocking every new distribution method even if it's beneficial to them, like they tried to do when VCRs first came about. Scrounging through some links on my old homepage, I found a link to the first article I read on this: "Who will own your next good idea?".

Today, I stumbled across a brilliant presentation by Lawrence Lessig from 2002 entitled "Free Culture". In fact, this post was supposed to be a quick link to that preso, but it released years of pent-up frustration on this subject in me. Anyway, "Free Culture" augurs a much darker cloud over the same field. He makes the point that digitization is dramatically expanding the scope of regulated use, to the point of suffocating unregulated use. Which seems upside down, because we are so conditioned to think of digitization as threatening regulated use. But when you think about it, it's absolutely true! Brilliant!

That link on the word Brilliant (which I've used before) was to the hilarious Guinness commercial where they keep saying "Brilliant!" Now it says "This video has been removed due to terms of use violation." What a perfect example of legal protection of creativity!

The more I think about it, the more amazed I am by the truth, simplicity, and importance of that fact: digitization is expanding the scope of regulated use. Unregulated use, which used to be 90% of the activity -- like simply reading a book or lending it to a friend -- is being replaced by regulated use: reading a web page is technically a regulated activity; there are restrictions on what you can or can't do with those bits of content, whether they are in your computer's RAM or HD, or pixels on your screen. "Fair use" is just a minor sideshow; unregulated use is the 800lb gorilla. I don't think most people realize that, and they really should. Lessig is a giant.

"Free societies enable the future by limiting the past" -- Lawrence Lessig.


Will the circle be unbroken

For the first time ever, the IMF is borrowing.

When the lender
of last resort

Becomes a borrower,
Will the circle be unbroken?

(Couldn't find the John Lee Hooker version...)


Nemo, Zen and the art of 20%

One of the cool things about my employer is the concept of 20% time. Basically, it's a license to spend a chunk of your time working on things that you think are good, useful, interesting, etc., but that otherwise might not get done. Recently I finished a small 20% project, and it was officially announced today. Read all about it, and then go and try it!

Note: If you can't see Ethiopic fonts on your computer, here are some links to Ethiopic fonts to download and install.



Today, I ain't about economics, or politics. Not technology, nor internet. Not even epistemology. Not even wow! It could be "De Gainsbarre à Biggie". But no. Today, it's "Lui, naguère si beau, qu'il est comique et laid!" ("He, lately so handsome, how comic and ugly!").... beau de l'air, as Charles would say:

Souvent, pour s'amuser, les hommes d'équipage
Prennent des albatros, vastes oiseaux des mers,
Qui suivent, indolents compagnons de voyage,
Le navire glissant sur les gouffres amers.

A peine les ont-ils déposés sur les planches,
Que ces rois de l'azur, maladroits et honteux,
Laissent piteusement leurs grandes ailes blanches
Comme des avirons traîner à côté d'eux.

Ce voyageur ailé, comme il est gauche et veule!
Lui, naguère si beau, qu'il est comique et laid!
L'un agace son bec avec un brûle-gueule,
L'autre mime, en boitant, l'infirme qui volait!

Le Poète est semblable au prince des nuées
Qui hante la tempête et se rit de l'archer;
Exilé sur le sol au milieu des huées,
Ses ailes de géant l'empêchent de marcher.

Exilé sur le sol au milieu des huées....