Sunday, May 15, 2011

Myths and Misunderstandings of Chromebooks

When the iPod was launched, pundits immediately compared it to the existing mp3 player market. It was too big, too expensive, not powerful enough, and lacked the features other players had.

Then, when the iPhone was announced, a similar-sounding chorus declared its deficiencies, some of which I detailed here in 2007. It had no MMS, no swappable battery, no 3G, etc. Oh sure, there were other smartphones on the market that had a much longer list of features and were even cheaper, but it didn't matter. Paul Buchheit covers this "more features = better" philosophy here.

Now, Paul is somewhat pessimistic about Chrome OS, which I disagree with, but what's interesting is that some of his arguments against Chromebooks I actually used myself when the iPad came around -- If I already have an iPhone and a MacBook Air, why do I need this tablet thingy? It sucks for content creation, and if I already lug around my notebook, why would I lug this thing around in addition? I was wrong, because there are times of the day when you need to create content, and times when you simply need to consume it, and tablets are perfect for the latter. More on that divide later.

The parallels today are appearing again. At $349, Chromebooks are too expensive compared to netbooks at the same price level! Netbooks had been tried before, and failed! With Windows 7 Basic or whatever on a netbook, you have far more flexibility and features! If someone already has a tablet and a notebook, why would they want a Chromebook! And so on. There may be some merit to those arguments, but at the iPad launch, the idea that a third kind of device wasn't needed also had merit. After all, tablets had been tried before and failed. We now know there was a market need.

Content creation vs Content Consumption

Where I disagree with Paul and others is that even a souped-up tablet is not a good stand-in for a work device. Perhaps with a large external monitor, an external keyboard, and a mouse, but as tablets are constructed today, I would not want to write code on them, or even long blog articles like this. Perhaps it's my generation, and young people won't have such hangups, but my generation is still a large market.

Thus, I assert, the need for a traditional WIMP device: physical keyboard, mouse, connection for a monitor. I don't think enterprise IT would really find this controversial. So the next question is, why would a Web-only device be better than one with "native apps"?

Myths about Chrome

In Paul's message on Chrome OS, he rightly sees that Android is becoming more "webby", but does not comment on the fact that the Web is becoming more "native". What is the difference between an offline, cached Chrome Web Application that uses file system APIs and WebGL, and an Android app, other than the fact that one is written in JavaScript and uses browser APIs while the other is written in Java and uses Java APIs? Is Angry Birds Chrome really that much different than Angry Birds Android?

The principal difference between Web, Flash, and Android is merely the difference in virtual machine, programming language, and API flavor. With Chrome, you have V8 as the VM, JavaScript as the source language, and HTML5 as the API to the VM. With Flash, you've got the Flash VM + ActionScript 3 + Flex APIs. With Android, you have Dalvik, Java, and Android APIs. All three have APIs for UI, 2D/3D, sound, network, etc. All three platforms have "web-like" install/update models, and all three have webby sandbox security models.

Well, you say, Android has NDK. To that I answer, Chrome has Native Client.

Well, you say, Android is Linux. To that I answer, so is Chrome OS.

Well, you say, Android doesn't force every app to be a "document". To that, I point to Bespin and Angry Birds.

The conceptual gulf just isn't that big, folks.

Other Myths overheard

  • If I'm not online, I can't do anything!

    Not true: Web apps can work offline. Chromebooks have local storage which can act as a sync repository/cache. It depends on how apps are written; it's really no different from iOS/Android.

  • Everything is in the cloud, Google can read everything.

    Conditionally true: depending on the app, data can be encrypted on the server with only the client app able to view it.

  • I can't access some critical Windows apps

    True in some cases, false in others; see Citrix, VNC, or X.
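To make the offline point concrete: in 2011-era HTML5 the standard mechanism is the Application Cache, where a manifest file tells the browser which resources to keep locally. The filenames and the /sync endpoint below are made up for illustration:

```
CACHE MANIFEST
# v3 -- change this comment to force clients to re-fetch

CACHE:
/index.html
/app.js
/style.css

NETWORK:
/sync
```

A page opts in with <html manifest="app.appcache">; once cached, the app loads with no network at all, and user data can sit in local storage until the sync endpoint is reachable again.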

So if they're the same, why not just merge the platforms?

That may very well happen someday, who knows. But I do want to take issue with a point Paul made about the webby-ness of iOS/Android apps, because I think it is only true in theory, much like (though not as severe as) Web apps being offline capable.

Assertion: Web apps encourage linkability and searchability more than native apps

While it is true that you can deep link into iOS or Android apps, how many times have you followed a link from Flipboard directly into a story in the HuffingtonPost app? Web apps, by their nature and by years of convention, make their location and state known. By contrast, when native apps have URL schemes to trigger deep links, those schemes are not omnipresent in an address bar, but typically custom, hidden, and not easy to find. Web apps are composable by links because of their relative transparency, and that is more difficult to achieve in a native app ecosystem. Such conventions may evolve later, but it requires vigilance by developers, for the gradient does not run toward such transparency.

This concerns me, because if every website becomes a custom app, the web-of-links will be broken, and even PageRank will become less relevant. Granted, this is not an argument against merging, but it is an argument for preserving the current model of publishing information via a standardized document format, as opposed to proprietary per-website native applications.

So why then, do we need Chrome OS?

Because 90% of what people seem to do these days, outside of games, can be done on the Web. If a user is spending almost all of their time running Web apps and NOT running native apps, why not construct a device that is stripped down and simplified to streamline exactly what they need?

I hear you say, "but why not include Android as well?" But this gets back to "more features == better". Remember, the postulated user is one who spends most of their time browsing. Sure, it sounds good to have all of these extra features and access to the wealth of Android apps, but non-iPod MP3 players also sounded like a better value proposition compared to the iPod originally.

In particular, for old fuddy-duddy Enterprise users, a locked-down device, centrally managed, with cloud backup and no expensive IT department needed, sounds very viable.

The idea of producing a netbook, which boots up with Android OS and contains a *full, not chopped down* version of the latest Chrome, also sounds like a viable SKU for content-creation activities. It may very well be the winning formula as Paul indicates!

But Google's trying Chrome OS first, and experiments are good. I don't think power users who want to run Crysis on high-powered Windows notebooks can dismiss it, nor can it be dismissed just because it doesn't include every feature and the kitchen sink.

Sometimes I just need a fast, convenient browser.

p.s. I'm also not sure that, in a future in which every website is a native app (all newspapers, all blogs, etc.), I will be able to find information as easily, or to extract, mash up, and link information as easily. The Web has its problems, but let's not throw it out because of smooth animations and sexy devices.

p.p.s. The separate evolution of Chrome as an OS in which you can do everything and to which apps can be targeted will be good for a merged Chrome OS/Android OS too, because it ensures the Chrome team will have to make Chrome as great as possible before that merge happens. If the merge happened too early (say, Chrome 1.0), then Dalvik would have been a crutch, and they probably wouldn't have improved it as much as they have. Right now, the idea of Web as an app platform is a forcing function for improvement.

Friday, May 13, 2011

The problems with HTML5 <Audio>

I've been having a back-and-forth Twitter discussion about <audio> with Giorgio Sardo, Microsoft's HTML5/IE evangelist, but I find 140 characters too limiting to explain the issues, and Giorgio seems more interested in snark and attacking Chrome than in the root of the problem.


This all started because, after the release of Angry Birds at Google I/O, people noticed that it was requesting Flash. Angry Birds is written in GWT and uses a GWT library written by Fred Sauer called gwt-voices. This library not only supports HTML5 audio, but has fallbacks to Flash and even <bgsound> on IE6!

There was speculation that the Flash requirement was done for nefarious purposes (to block iOS) or because Chrome includes Flash, but the reality is, it was done *both* because Chrome has some audio bugs *and* because HTML5 <audio> just isn't good for games or professional music applications.

I first noticed the shortcomings of the audio tag last year when we ported Quake2 to HTML5 (GwtQuake), shown at last year's I/O, where I also demoed a Commodore 64 SID music emulator. There are two issues with using HTML5 Audio, which was originally designed to support applications like streaming music players.

Problem #1: It is missing functionality

The HTML5 audio element permits operations like seeking, looping, and volume control, which are great for jukebox applications, but you cannot synthesize sound on the fly, retrieve sound samples, process sound samples, apply environmental effects, or even do basic stereo panning. Quake2 required 3D sound based on OpenAL's inverse distance damping method as well as stereo panning. I did my best and implemented distance damping with the volume control, but had no ability to position sounds left or right.

For sound synthesis, there is no official way to play back dynamically created buffers. The workaround is to use JavaScript to encode sample buffers into PCM or OGG in realtime, convert them to data URLs, and use those as the source for an audio element, which is very computationally expensive and chews up browser memory. For developers wishing to create even basic music visualizers, it creates huge difficulties.
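To make the workaround concrete, here is a minimal sketch of my own (not the actual code from gwt-voices): synthesized samples get wrapped in a WAV container so an <audio> element can play them via a data URL.

```javascript
// Build a 44-byte WAV header followed by 16-bit mono PCM samples.
function samplesToWav(samples, sampleRate) {
  var dataLen = samples.length * 2;
  var buf = new ArrayBuffer(44 + dataLen);
  var view = new DataView(buf);
  function writeStr(off, s) {
    for (var i = 0; i < s.length; i++) view.setUint8(off + i, s.charCodeAt(i));
  }
  writeStr(0, 'RIFF');
  view.setUint32(4, 36 + dataLen, true);    // RIFF chunk size
  writeStr(8, 'WAVE');
  writeStr(12, 'fmt ');
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // format = uncompressed PCM
  view.setUint16(22, 1, true);              // channels = mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate (mono, 16-bit)
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeStr(36, 'data');
  view.setUint32(40, dataLen, true);
  for (var i = 0; i < samples.length; i++) {
    // clamp [-1, 1] floats and convert to signed 16-bit
    var s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(44 + i * 2, s * 32767, true);
  }
  return buf;
}

// Synthesize 100ms of a 440Hz sine wave...
var rate = 44100;
var samples = [];
for (var i = 0; i < rate / 10; i++) {
  samples.push(Math.sin(2 * Math.PI * 440 * i / rate));
}
var wav = samplesToWav(samples, rate);
// ...then, in the browser, base64 the bytes and hand them to <audio>:
//   audio.src = 'data:audio/wav;base64,' + base64Encode(wav);
```

Every frame of synthesized audio pays this full encode-plus-base64 cost, which is exactly why the approach chews CPU and memory.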

Problem #2: Latency

Audio applications require low latency. Studies have shown human beings can perceive audio latency down to the millisecond, but in general, lower than 7ms is considered good enough. This means in some circumstances you need to schedule sounds within 7ms of one another -- for example, if you need to simultaneously start two sounds, one in the left ear and one in the right, or if you need to concatenate several sounds together in series.

Giorgio has a neat demo here of playing piano notes in sequence, and hats off to Microsoft for providing a great <audio> implementation. It's a cool demo, but I still hear latency variation in playback between notes, and occasional glitches. No one's going to build something even 1/10th as good as Garage Band on iPad using this technique. That's because the only way you can schedule audio in HTML5 is via the browser's event loop using setInterval or setTimeout, and that's problematic for several reasons.

First, it's unreliable. Over the years, setInterval/setTimeout has been clamped to different minimal resolutions, depending on the browser and operating system. On some systems, it was tied to vertical refresh and would clamp to 16ms; then vendors started clamping to 10ms, and now they clamp as low as 4ms. But 4ms isn't a guarantee, it's a request. Many things can stand in the way of that request: merely mousing over the page can fire user-interface events that trigger JavaScript handlers, CSS rules can force a relayout, and excessive JavaScript work can trigger garbage collection.

Secondly, aggressive setInterval periods can delay response to user input, making the browser feel sluggish. If the user tabs to another window, the browser must decide whether or not to clamp timeouts to a much higher value (say, 1 second) to avoid needlessly burning CPU, which in turn could harm background audio playback. Unlike requestAnimationFrame, which solves this problem for graphics, there's no "requestSoundEvent".

Music apps and games sometimes require playback of short buffers

Some of the sounds in Quake2, for example the hyperblaster's, are sample buffers as small as 300 bytes. At 44kHz, this is a hard deadline of roughly 8ms to schedule the playback of the next sound in the sequence. With all of the other stuff going on within a frame (processing physics, AI, rendering), it is highly unlikely to be consistent. And do we really want JavaScript performing this scheduling task?
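A quick back-of-the-envelope check on that deadline, assuming the 300-byte buffer holds uncompressed 8-bit mono samples at 44.1kHz:

```javascript
// How long does an N-byte PCM buffer last before the next one is due?
function bufferMillis(bytes, bytesPerSample, sampleRate) {
  return (bytes / bytesPerSample / sampleRate) * 1000;
}

// 300 bytes of 8-bit mono at 44.1kHz:
var deadlineMs = bufferMillis(300, 1, 44100); // about 6.8ms
```

Call it roughly 7ms: to play such sounds back-to-back without an audible gap, the next buffer must be queued within that window, which is exactly the resolution JavaScript timers cannot guarantee.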

Especially on mobile

Remember, mobile devices are HTML5 devices as well, and are continually getting better at HTML5, but they are much more resource constrained, and JavaScript is even slower. Here, native scheduling is even more beneficial; intensive JavaScript scheduling of playback would be difficult and would waste battery.

That's why the Web Audio API is important: it permits complex audio scheduling tasks, application of environmental effects, convolutions, etc., to be natively accelerated without involving the JavaScript engine in many cases. This takes pressure off the CPU, off memory and the garbage collector, and makes timing overall more consistent. Here's a neat demo recently shown at Google I/O.
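A sketch of the difference: instead of firing a JavaScript timer per note, you compute all the start times up front and hand them to the audio thread, which honors them with sample accuracy. The note pattern and buffer names below are hypothetical; the noteOn call is the Web Audio API as it exists in Chrome today.

```javascript
// Compute start times for a sequence of notes (pure JavaScript,
// independent of any browser API).
function noteStartTimes(startAt, noteDurations) {
  var times = [], t = startAt;
  for (var i = 0; i < noteDurations.length; i++) {
    times.push(t);
    t += noteDurations[i];
  }
  return times;
}

var starts = noteStartTimes(0.1, [0.25, 0.25, 0.5]); // seconds

// In the browser (prefixed as webkitAudioContext in current Chrome),
// each note is scheduled natively -- no setTimeout involved:
//
//   var ctx = new webkitAudioContext();
//   starts.forEach(function (t, i) {
//     var src = ctx.createBufferSource();
//     src.buffer = noteBuffers[i];       // decoded elsewhere (hypothetical)
//     src.connect(ctx.destination);
//     src.noteOn(ctx.currentTime + t);   // sample-accurate start time
//   });
```

Once queued, playback timing is the audio thread's problem, so a GC pause or relayout on the main thread no longer shifts the notes.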

Microsoft deserves credit

They made massive improvements in HTML5 support from IE8 to IE9, especially in <canvas> and <audio>, and they deserve the right to feel proud and evangelize them. We celebrate that. It's why Angry Birds works, to some people's shock, on other browsers, and it's not by accident. We built fallbacks into our core library for 2D canvas, and tested on non-WebGL-capable browsers like IE9, which have excellent GPU-accelerated 2D support.

Angry Birds was not an attempt to make non-Chrome browsers look bad, but to make HTML5 look good, because when developers start realizing that professionally developed and polished games and applications can be done in HTML5, we all win.

But now is not the time to rest on our laurels. HTML5 is not done. There are many things incomplete and broken in the spec. I am sad to see Microsoft trying to talk down the experimentation that is going on in Firefox and Chrome, vis-a-vis WebGL and new Audio APIs, just because they are on a slower release cycle and do not have these bleeding edge features.

Giorgio seems to be suggesting in his tweets that the basic HTML5 <audio> tag is "good enough" and that the current IE9 implementation covers use cases sufficiently, and I disagree with that strongly.

We need 3d on the web. We need high quality, low latency, audio. We need to be able to do the things that OpenAL and DirectX can do with sound on the Web. And we're not going to get there by sticking our head in the sand and declaring premature victory.


Thursday, January 11, 2007

Top In-Denial Comments Overheard about the iPhone

1. What's the big deal?! It's just a touchscreen phone, like numerous PocketPC phones!
2. My Nokia already has WiFi and an Opera browser, where's the innovation.
3. Way too expensive
4. Non-replaceable batteries kill any chance of success
5. But my phone already has a camera, photo management, and email
6. Either you have a great music player or great phone, a combination device can't possibly be good at both.
7. Battery life is too short!
8. No UMTS/3G! No EVDO.
9. Multitouch is nothing more than a gimmick
10. You can't install any third party apps
11. It doesn't support Exchange!
12. Won't view MS Office attachments!
13. Cingular! Wahh!

Those are just a few. Yes, another year, another MacWorld, another flurry of contrarian naysayers trying to bash holes in the reality distortion field. Yes, some of the comments have some merit, but many of them are plainly based on unproven assumptions for which no real facts exist.

I used to be a big Apple basher. Macs are for non-technical, clueless people. Apple sells inferior, overpriced hardware. I mean, are iPod users brainwashed zombies? You go to COMDEX or CES, and you'll see a bazillion Chinese no-name MP3 players offering similar technical specs to the iPod, far cheaper and smaller. What's so special about the iPod that justifies its price? It's obviously going to be a failure and offers no innovation. That was before they sold 100 million of them. I couldn't understand it either, until I started using one. When the iPod arrived on the scene, it simply offered a better UI and desktop integration experience compared to the myriad of PC market players (go back to the year it was introduced and try to find something as easy to use as iPod + iTunes). Later, a huge add-on market emerged around it that increased the value of having one. I wouldn't deny that there is an "in-crowd" fashion/sexiness aspect to them, but it can't explain away the iPod's success.

Is the iPhone different? Yes, because the iPhone actually has a lot more innovative hardware features that no one else has at the moment, which I'll get into later.

Let's talk about UI. People who compare Macs to PCs often exclaim that the Mac WIMP interface is easier to use and more consistent than Windows, which it may be, but it's hard to prove, and subject to aesthetics. On mobile devices, however, UI and ergonomics are the single most important issue, given the really poor screen size, pointing, and data entry capabilities.

A good way to visualize your interaction with a mobile device is to visualize the interaction of a disabled person with a real PC using accessibility tools, like huge fonts, a screen reader, magnifying glass window, or perhaps a menu based input system for the paralyzed. The speed at which you can navigate menus using a 1-9 keyboard and softkeys is not much better than the speed at which Stephen Hawking can navigate word lists, in fact, Stephen would probably kick your ass.

And that's exactly how I feel using the vast majority of smartphones today -- handicapped. The applications are typically poorly designed for numeric keyboards or joysticks, the information display makes you feel like you have tunnel vision, and to top it all off, many of the devices have serious UI latency issues. You know, you go to your address book, photos, or calendar, and about 500ms to a full second passes before you see the result. One of my favorite things to do when I pick up a new phone is to rapidly jump around the menus just to see how bad the UI latency is.

I don't have any inside information, but from the keynote, let's try to see how the iPhone attempts to address many of these issues:

1. UI latency - From the keynote video, the iPhone appeared not only capable of keeping up with UI events, but it appeared to render UI transitions at at least 30Hz. It looked super smooth and didn't appear to have many issues. Jobs is a stickler for perfection, and I doubt he would tolerate a laggy device.

2. Tunnel Vision (Small Screen) - Some data is best viewed in portrait format, some in landscape. The iPhone first and foremost appears to offer the user a choice. It also appears to have designed many of the common apps, like contacts, to be optimized for the most common display attributes and operations. The high-resolution screen helps smaller fonts pack more information on the screen and remain readable. The fast rendering speed allows rapid switching between different screens and zoomable user interface techniques. Zooming only works if it can be done in real time. Fast screen switching provides a sort of "virtual desktop" which expands how much information can be displayed. If it can't keep up, however, you'll start lagging behind user inputs. Using zoom/scroll when it takes a second or more to get where you want is frustrating.

3. Input - Different applications call for different inputs. I think we can all agree that a keyboard is best for text entry, that a mouse rocks at first-person shooters, or that a pen tablet is better for architectural drawing/painting, but on small devices, a mapping application works best if you've got absolute pointing capability. (Try mobile Google Maps on any non-touchscreen phone to see what I mean.) I think the touchscreen is a nice compromise, since you get a reconfigurable input, but of course you lose tactile feedback. Maybe the next-gen iPhone will have some kind of haptic feedback, or the ability to raise parts of the touchscreen at will :) (maybe using smart materials). Apple appears to have done something smart. The multitouch sensor opens the door to much better accuracy (yes, if they are using FTIR, it provides multitouch as well as better accuracy), as well as the ability to discard accidental touches of other fingers more easily.

Moreover, Apple appears to be using contextual clues to enhance touch recognition. So, for example, if you're typing the word "PHONE" and, when you're about to type the 'N', you press the J and the N simultaneously, it uses statistical models to predict the more likely intended key. This differs considerably from the way stylus input keyboards on PPC and other smartphones work.

4. Context - Many phones don't have all their little applets integrated very well, so when inside one of these applets, you can't see what's happening elsewhere. What I liked about the iPhone was how, when you were on a call, the caller-ID information was omnipresent and one click away. Some phones do better than others. Can we trust Apple to be consistent here?

5. WiFi and Browsing - Kick ass. I've used Opera Mobile and PocketIE on smartphones. Scrolling around and zooming is painful (as well as laggy and slow). The iPhone's "double click to zoom" feature was extremely nice, and it's amazing no one thought of this before! The browser's live DOM/CSS information has an already-calculated tree of bounding boxes for each screen element. All they have to do is start at the DOM node the user touched, and walk up the tree to find the biggest box that fits. Analog scrolling and zooming with the touchscreen is, IMHO, far better than pressing digital scroll up/down buttons on most phones, in the same way that a mouse is better than a joystick. If I know I want to jump really far, I flick my finger faster or further, rather than trying to work those horizontal/vertical digital scroll buttons.

6. Seamless WiFi<->EDGE roaming - Um, yeah, if they can make this work seamlessly, it will be incredible. I've tried PocketPC phones and Nokia phones with WiFi, which are supposed to offer automatic failover, but it never worked properly.

7. PIM (Address Book, Calendar, etc) - One of the problems with the PocketPC and Palm phones that I've used is that they still assume people want their PDAs to be miniaturized versions of desktop apps. Microsoft is the biggest offender. Their contacts app on PPC is atrocious, and impossible to dial from with one hand. There wasn't any integration between the phone dialer and Contacts, so if you entered a number on the dialer, there was no 1-click "add as new contact" option. Maybe that's changed in recent revisions. Apple appears not to have followed this route. The iPhone address book is not simply a hacked port of the OS X Address Book, but apparently a totally new app optimized for touch navigation.

8. Photos. Nuff Said. Who's kidding who, you think any of the smartphones in the iPhone class will have better UI for doing this?
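The zoom heuristic from point 5 can be sketched without a browser. Modeling elements as boxes with parents, the walk-up looks something like this (the node structure and fitting rule are my reading of the described behavior, not Apple's actual code):

```javascript
// Each node: { width, height, parent }. Starting at the touched node,
// walk upward, remembering the biggest ancestor box that still fits
// the viewport; stop at the first ancestor that overflows it.
function zoomTarget(touchedNode, viewportW, viewportH) {
  var best = touchedNode;
  for (var n = touchedNode; n !== null; n = n.parent) {
    if (n.width <= viewportW && n.height <= viewportH) {
      best = n; // keep climbing: prefer the largest box that fits
    } else {
      break;    // this ancestor overflows the viewport; stop here
    }
  }
  return best;
}

// Tiny mock tree: body > article > p, with the touch landing on the <p>.
var body = { width: 2000, height: 3000, parent: null };
var article = { width: 600, height: 900, parent: body };
var p = { width: 580, height: 40, parent: article };

var target = zoomTarget(p, 640, 960); // iPhone-ish viewport dimensions
// target is the article: zoom to the column of text, not the whole page
```

Because the boxes are already computed during layout, this walk is essentially free, which is why the gesture can feel instantaneous.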

Future Directions

The multitouch screen's true potential seems barely tapped on the iPhone when you consider its pedigree. I think Apple should also look at Zoomable User Interfaces. In any case, this feature is most definitely not a gimmick, and one that sets the iPhone apart from any other device on the market.

The accelerometer potential can be further exploited as well. See Smackbook for example. Games could take advantage of the yaw axis detection, such as steering vehicles, for example.

The WiFi inclusion seems to offer the potential for iChat and VOIP, as well as seamless switching between SMS and Jabber/Bonjour. Since in some countries both the receiver and sender pay, this would cut costs for many.

Summing up my opinion:

1. It really is innovative hardware, period. The best smartphones on the market don't have everything this device has.
2. The user interface seems more reactive, faster, and easier to use
3. The browser especially seems nice
4. Jobs is right, interplay of hardware and software design is important. Many uber phones make the mistake of designing a good hardware platform and filling it up with crappy non-integrated applications and saddling it with bad ergonomics.
5. It's only too expensive if you don't want a video iPod. It's not really that much more expensive than other high-end smartphones.
6. Talk time is comparable to other smartphones
7. fixed battery - legitimate issue for some, but probably impractical given the design of their case/screen
8. Doesn't support Exchange? Talk to your system administrator about supporting internet standards like everyone else: IMAP, SMTP.
9. Cingular. Yeah, sucks for people who are not Cingular customers. Apple is clearly trying to grab the largest market, and that means GSM. In the US, the largest GSM carrier is Cingular.

What is it missing?
1. AGPS would have been AWESOME in combination with Google Maps
2. Non-adjustable camera/non-forward facing means no video conferencing
3. No 3G UMTS/EVDO (IMHO, not a big issue given WiFi. I imagine I'll be browsing most of the time at the airport or at a cafe which has a WiFi hotspot. 3G data services are often congested and expensive anyway.) Would be nice to have, though.

As for the reports of it not displaying PDF or Word attachments, and Apple not allowing any third party app development, none of this has been officially confirmed by Apple at the top level, and I suspect that Apple is still evaluating how they're going to do it, but I don't doubt that they will eventually allow it. If not Cocoa apps, then Java J2ME and JavaScript widgets.

Thursday, October 19, 2006

18 Mistakes That Kill Startups...

I always get a chuckle out of Paul Graham's essays, the latest one 18 Mistakes That Kill Startups being no exception. As with most business essays, it is full of obvious truisms, cliches, and of course, the mythical great hacker hero. One of the problems of autobiographical business writers who write from their own experience is selection bias, the inability of people to be objective about their own success and see it within the big picture.

One's own success, of course, is always due to one's great ideas, work, foresight, and lack of mistakes, and never due to serendipitous events, suddenly favorable market conditions, or just plain old personal connections. Why was Viaweb a "success"? Surely it's because it was written in Lisp! And not because the internet boom was a full-swing seller's market, in which larger companies were practically buying anything with a half dozen users and a website. As a programmer, I'd love to believe that the biggest influence on the success or failure of a business idea is programming ability; it strokes the narcissistic ego of anyone who considers themselves a good programmer. I mean, for every socially isolated geek growing up in school, what could be better than finding out that the universe is optimized for nerds to succeed?

This is not to say that one's success can't be because of merit, just that one can't determine which factors lead to success by measuring only one's own successes in isolation. Would you take a new medicine just because it happened to cure someone else's disease the last couple of times? Business advice essays are the financial equivalent of diet pill and herbal remedy testimonials.

Many people who invest in the stock market love to believe that their successful stock picks are due to some special insight or technique. The reality is, most people are unable to beat a random dart board or an index fund, but in isolation and over small time windows, their successful choices look to be correlated with their behavior -- until they lose spectacularly on some trade. And so it goes: in sports, in politics, in competitions, many winning competitors attribute their success to lucky charms, daily superstitious rituals, herbal supplements, or religious prayers. Do athletic training and genetics play a part? Certainly the answer is yes, but at high levels of competition, an athlete can also win because another competitor fell on bad luck -- a gust of wind, a slippery track, a misplaced foot, an equipment malfunction, a bad starting-position draw. That's why writing a book on "18 mistakes that stop you from winning a Gold Medal at the Olympics" is practically worthless.

There may be a list of "do's and don'ts" that are necessary (but not sufficient) for successful startups, but I believe most of them are probably conditional on situation, and probably not discoverable in practice; there are just too many variables. Instead, I would view the market as a memetic ecosystem, and startups as organisms competing within the environment. Which organism is the fittest? The only way to know is to release it into the environment and see. The system is too complex for a simple set of rules of thumb to determine optimal fitness. Sometimes organisms which appear poorly designed or weak end up surviving because of an odd confluence of external factors.

I'm often stumped by stories of accidental success, of people who created something on a whim and overnight had it spread like wildfire, while people who had created almost exactly the same ideas in the past failed miserably. We'd like to attribute their success to design or intent or "something they did right", and the failures to "something they did wrong". Did they use the wrong programming language? Did they raise too little money? Too much? Hire the wrong people? All the while ignoring the fact that sometimes the result is due to the nebulous, continually shifting aspects of human desire.

In the end, probably the most worthwhile advice that business writers give, regardless of what you believe determines success, is this: you're more likely to succeed if you try to do something than if you do nothing. You won't know if your organism will survive in the memetic environment if you don't release it.

Graham's 18 point list is full of truisms and hedged statements.

Rule #3. Don't choose niche ideas because other people are already doing the other great ideas
Rule #4. Don't choose ideas other people are already doing

Rule #8. Don't launch your site too late.
Rule #9. Don't launch your site too early.

Rule #11. Don't raise too little money.
Rule #12. Don't spend too much money.
Rule #13. Don't raise too much money.

Which I reduce to the following:
1. A successful startup does exactly what is required for success, and not too little or too much.

And of course, the implicit rules, like: Don't solve other people's problems (unless, of course, you're ViaWeb). Spending time doing business development is bad (especially by hiring a business guy), unless of course it's done by a hacker who takes time out of hacking to make calls to other executives, who of course love getting calls from hackers. Oh, and don't hire Bad Programmers, except that no one can tell what makes a Good Programmer, except for Good Programmers(*). Apparently, it's impossible for any Good Programmer to want to work with a Business Guy(tm) also. That's why so many startups failed during the dot-com boom: bad programmers, not writing in LISP on Unix machines, and the inability of Business Guy to inspire Good Programmer Guy to join him.

Oh, and if you want to know which programming language to use on your next startup project, go find out what PhD students are using on research projects at elite colleges. (Of course, an elite college by definition won't be using Java for such projects.)


(*) This reminds me of the way immortals in the Highlander series "sense" other immortals. Hmm, maybe a YouTube spoof is in order? Hacker gods walk among us, undetectable to all but themselves! (Cue weird "sixth sense" sound effects as Paul Graham enters a room where Eric S. Raymond is sitting.)


Saturday, September 30, 2006

Geeking out: Quad Core Tower of Power Yumminess

I've needed to upgrade my P4 3 GHz Dell Dimension desktop for a while -- it's been 2+ years since I got a new computer. Once Apple went x86, I knew my next computer would be a Mac. I've been using Unix as a development environment for the last 16 years, but I never really liked any of the Unix desktops (except NeXT), so I always kept a PC around as my main terminal and connected remotely to Unix boxes -- not so much for applications per se, but mostly for games, multimedia, and Java performance (remember the bad ole days of no HotSpot for Linux and old JDKs?).

Mac OS X gives me the environment I'm most comfortable in: Unix and the Unix command line, as well as a fabulous desktop. My only dilemma was: MacBook Pro or Mac Pro? Last year I traveled a lot for work, so it would have been a no-brainer, but given that I rarely travel now, and I'm replacing a PC desktop that also acts as a gaming rig, the Mac Pro became a lot more desirable.

The only worry was what's on the horizon: Merom-based notebooks and Clovertown-based Mac Pros, supposedly with a new case design for the Mac Pro by an internationally famous designer. Should I wait until January or longer? The next-gen NVIDIA and ATI DirectX 10 video cards would be out by then as well. The problem with waiting is that there is *always* something new and great in the pipeline 6 months later. If you wait for the next big thing, you'll be paralyzed forever.

I decided that the Mac Pro was so much better than my current setup that it doesn't matter what's coming 6 months from now. The Mac Pro case, even before a redesign, is already very, very nice, especially the RAM and SATA connections. And once I saw how AnandTech was able to just plug Clovertown engineering samples into a Mac Pro and turn it into an octo-core monster, that clinched it -- this baby's CPU is upgradable too. In any case, I need to do lots of video editing, and 4 easily accessible SATA drive bays + 4 cores are good for the task.

So last Friday, I pulled the trigger. Mac Pro, 2.66 GHz, minimum-sized HD, ATI X1900 XT, 2 GB FB-DIMM RAM, Bluetooth = $3003. I did not get the 3 GHz version because I think a 12% gain in serial performance is not worth $800. I expect that when Clovertown is released, Xeon 5160 prices will drop radically, and I'll upgrade later if I need to, far more cheaply. Moreover, Apple charges $400 for 500 GB HDs when I can buy them online for $170. I'll use the 160 GB drive that comes with it as my Windows XP Boot Camp drive for gaming. I'll also have 4 empty FB-DIMM slots to upgrade RAM later if I need more, when prices are cheaper.

Now goddamn it, hurry up and deliver my machine, Apple!

P.S. Parallels announced that they will support DirectX acceleration near the end of the year, which would effectively eliminate the need for Boot Camp for gaming in most cases. Yay!

As for my old Dell PC, I've got another 1TB RAID sitting in the house and I've been itching to run ZFS, so I'm gonna try running OpenSolaris on it. The Linux and Apple guys need to get their butts into gear and get ZFS ported ASAP!


Wednesday, September 27, 2006

Balanced Incomplete Block Designs, Difference Sets, Klein's Quartic Curve, Projective Planes, and lots more!

I've been busy lately and unable to provide the answer to the Extreme Programming Code Review problem posted last time. I chose this problem because, when I was taking a graduate combinatorics class in college, the underlying structure had an almost mystic relation to a large number of other areas of combinatorics and geometry, while also being simple, beautiful, and symmetric.

Mathematical history has many examples of similar, but harder, arrangement problems. Catherine the Great supposedly kicked things off by posing the 36 Officers Problem to Euler. Later, Kirkman proposed the 15 Schoolgirl Problem (my oh my, a Reverend and thoughts of schoolgirls, eh?). These seemingly simple problems acted as catalysts and resulted in great advances in combinatorics, which has enormous applications in many fields. One application, of course, is solving the problem I posed.

The problem asks: given 7 people, arrange them into 7 teams of 3 each, such that no two people are ever on the same team twice. It is probably possible to construct this design by trial and error, but there are many other ways to do it.

Let us number the programmers 0 through 6 and start off with the team {0,1,3}. If we add 1 to each number (mod 7), it is easy to see that the new team, {1,2,4}, introduces no repeated pair: no two of its members were together on the first team. The same holds for every team {x, x+1, x+3} for x = 0 to 6, because within each team the first two positions always differ by 1, the second and third by 2, and the first and third by 3 (mod 7). If you read down the three columns, each column is just the numbers 0 through 6, shifted by 1 or 2 places relative to the previous column. Here's our list of teams, which you can also verify by visual inspection:

{0,1,3} {1,2,4} {2,3,5} {3,4,6} {4,5,0} {5,6,1} {6,0,2}
It turns out that this solution is also unique, up to relabeling the programmers.
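For the skeptical, the cyclic construction and its defining property can be checked mechanically. Here is a minimal sketch (my code, not part of the original argument) in Python:

```python
from itertools import combinations

# Cyclic construction: start from the base team {0, 1, 3} and
# add x (mod 7) to every member, for x = 0..6.
teams = [tuple(sorted((x % 7, (x + 1) % 7, (x + 3) % 7))) for x in range(7)]

# Defining property: every pair of programmers serves together exactly once.
pair_counts = {}
for team in teams:
    for pair in combinations(team, 2):
        pair_counts[pair] = pair_counts.get(pair, 0) + 1

assert len(pair_counts) == 21                        # C(7,2) distinct pairs
assert all(count == 1 for count in pair_counts.values())
print(teams)
```

The counting here is the heart of the uniqueness claim: 7 teams cover 7 × 3 = 21 pairs, and there are exactly C(7,2) = 21 pairs available, so nothing is wasted and nothing is repeated.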

Combinatorics describes this as a t-(v,k,λ) design: a set P of v points; a set B of subsets of P called blocks (also called lines), each containing k points; and an incidence structure I ⊆ P × B describing which points of P lie in which blocks. For every set of t points, exactly λ blocks are incident with all of them.

In our problem there are v=7 points, any t=2 points are contained in exactly λ=1 block (team), and each block consists of k=3 points. Therefore, our problem is to construct a 2-(7,3,1) design. Such 2-designs are called Steiner triple systems, and ours is denoted STS(7). One result of combinatorics is that STS(v) exists if and only if v ≡ 1 or 3 (mod 6).
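As a quick illustration (my addition, not part of the original argument), the existence condition is easy to tabulate, and 7 is one of the first sizes for which a Steiner triple system exists:

```python
# STS(v) exists iff v ≡ 1 or 3 (mod 6); list the feasible sizes up to 21.
def sts_exists(v):
    return v % 6 in (1, 3)

feasible = [v for v in range(3, 22) if sts_exists(v)]
print(feasible)  # [3, 7, 9, 13, 15, 19, 21]
```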

So what's so special about this problem to get so worked up about? It turns out there are many connections between STS(7) and other areas of mathematics: in addition to those I mentioned in the title, there are also connections to coding theory, Latin squares, octonions, simple groups, and geometry.

And now I will show you how this problem is connected to the Balsa Wood problem, as well as to very beautiful geometry.

Recall that there are 7 teams of 3 programmers each, 7 programmers total, and no two programmers on the same team twice. Remembering the definition of a 2-(7,3,1) design, the programmers are called "points" and the teams are called "lines". If no two programmers are on the same team twice, then 2 programmers (points) uniquely determine a line, and also any two teams share at most one programmer — and a counting argument shows they must share one — which says that any two lines intersect in exactly one point. Hmm...

  • 7 points
  • 7 lines
  • 2 points determine a line
  • 2 lines determine a point

What would happen if we wrote down those 7 programmers and connected the dots according to teams?

We form what's called a projective plane. This plane, the projective plane of order 2, known as the Fano plane, has the interesting property that there are no parallel lines. It also has a 168-fold symmetry. (Another interesting property: since each line contains 3 points and each point is incident with 3 lines, you can interchange points and lines with no effect! The Fano plane is its own dual.)
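The plane is small enough to verify these properties by brute force. A sketch (mine, assuming the {x, x+1, x+3} mod 7 line set from the team construction above):

```python
from itertools import combinations

# The 7 lines of the Fano plane, via the cyclic construction.
lines = [frozenset((x % 7, (x + 1) % 7, (x + 3) % 7)) for x in range(7)]

# No parallel lines: every pair of lines meets in exactly one point.
assert all(len(a & b) == 1 for a, b in combinations(lines, 2))

# Self-duality hint: each point lies on exactly 3 lines,
# mirroring the fact that each line contains exactly 3 points.
assert all(sum(p in line for line in lines) == 3 for p in range(7))
print("Fano plane checks pass")
```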

How can we use this to solve the Balsa Wood problem? I will give a partial solution, by showing the solution for 3 successive writes.

  • if you are writing X and the RAM already reads as X, do nothing
  • if the RAM is empty and you're writing I, place a 1 in bit position I
  • if the RAM contains 1 bit in position I and you are writing the number J, find the line (I,J,K) and place a 1 in bit position K
  • if the RAM contains 2 bits in positions I and K, and you are writing value X, write 2 more bits such that 3 of the bits are collinear and X is a point NOT on that line. That is, if X != I and X != K, find the line (I,J,K) and place a 1 bit in position J as well as in position X. If X == I, write two bits on another line containing K (but not I), and if X == K, write two bits on another line containing I (but not K).
The third and fourth rules work because of the inherent properties of projective planes. The third works because 2 points uniquely determine a line. The fourth works because three lines pass through every point, so even when X coincides with I or K, there is always another line through the remaining point on which to place 2 1-bits.

The rules for reading are as follows:

  • if the memory is blank, nothing has been written yet
  • if exactly one bit is written, in position I, then the value in memory is I
  • if exactly two bits are written, in positions I and K, then the value in memory is J, the third point on the line containing I and K
  • if exactly 4 bits are written, in positions X, I, J, K, find the 3 positions that form a collinear line; the one "left out" by itself is the value in memory

I leave the construction of the final rule for writing and reading, as well as an argument for its correctness, to the reader. There is an alternate solution to this problem using WOM-code (write-once memory code) techniques, which have applications to both error-correcting codes and cryptography. Sadly, Rivest and Shamir seem to have patented it, and the solution to the Balsa Wood problem actually violates the patent.

Another related problem to look up is the Transylvanian Lottery Problem, also solved with the Fano Plane.



Wednesday, September 20, 2006

The Extreme Programming Code Review Problem

You manage a team of 7 programmers and have become a recent convert to extreme programming. You have decided to use a style of pair programming that you call the weekly triplet review.

Each day of the week, 3 programmers will be picked from the 7 available to conduct code reviews together, such that no two programmers will ever be on the same team more than once in one week. (Note: these programmers work on weekends too, so you need 7 teams of 3.)

If the programmers are numbered 1 through 7, can you enumerate all the possible teams?


P.S. It is no coincidence that this problem features 7 objects like the previous problem. Believe it or not, this is a hint: if you solve this problem, you may have some inspiration for how to solve the Balsa Wood RAM problem.