Saturday, October 29, 2011

Big milestone for the new Bay Bridge

This week, a major milestone in the construction of the new Bay Bridge: the last section of deck for the suspension segment is now in place. I like this article in the Contra Costa Times because it does a good job of explaining what's going on, and has some nice pictures of the Left Coast Lifter in action. As the CCTimes article notes:
The deck piece installed Friday is key.

"It's the place where the cable actually comes in and locks into the bridge," Caltrans spokesman Bart Ney said.

"It's self-anchored. That means the cable doesn't go into the ground, but into this piece. There's a big hole in the deck where will the cable will go down and anchor into this section."

Unlike the Golden Gate Bridge, where cables were strung before the deck was installed, the new Bay Bridge suspension span requires that the deck be installed before the cable. Temporary supports hold up the deck until the cable is tensioned.

I guess that means that the Lifter is likely to move on now ... is there another bridge somewhere on the planet awaiting its services? Check out this great picture of the Lifter setting that massive piece of roadway into the final slot just as easy as pie!

Here's a pretty computer-generated video showing what it will (soon) look like.

Next step: string the cables! I believe that starts in earnest after Thanksgiving. It should be something to see; if you look carefully at the various pictures, you can see the catwalks that the crew will walk during cable installation.

Wednesday, October 26, 2011

KFOG Live from the Archive -- online or in the store?

The Peet's Coffee website wants $8 for shipping the KFOG Live from the Archives CD.

For that price, it's worth waiting until Tuesday and walking two blocks down to Peet's to get my copy.

And maybe get a cup of coffee, too :)

Derby 10.8.2.2 is released

A new maintenance release of Derby has been released, with the typical collection of bug fixes and small features.

What was more interesting about this release was the way that the community handled the decision-making surrounding a relatively minor performance feature: concurrent generation of new values for identity columns.
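
In case you haven't run into them, here's a minimal sketch of what an identity column looks like from the application side, using Derby's embedded JDBC driver (the table and column names are invented for illustration, and you need derby.jar on the classpath). The 10.8.2.2 work was about how the engine hands out these generated values under concurrent load; the SQL surface itself is unchanged:

    import java.sql.*;

    public class IdentityDemo {
        public static void main(String[] args) throws Exception {
            // Embedded Derby database; ";create=true" creates it if it doesn't exist.
            Connection conn =
                DriverManager.getConnection("jdbc:derby:demoDB;create=true");

            try (Statement s = conn.createStatement()) {
                // Values for the id column are generated by Derby itself.
                s.executeUpdate(
                    "CREATE TABLE customers (" +
                    " id INT NOT NULL GENERATED ALWAYS AS IDENTITY," +
                    " name VARCHAR(64) NOT NULL," +
                    " PRIMARY KEY (id))");
            }

            // Insert a row without supplying an id; Derby assigns the next value.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO customers (name) VALUES (?)",
                    Statement.RETURN_GENERATED_KEYS)) {
                ps.setString(1, "Alice");
                ps.executeUpdate();
                try (ResultSet keys = ps.getGeneratedKeys()) {
                    while (keys.next()) {
                        System.out.println("generated id: " + keys.getInt(1));
                    }
                }
            }
            conn.close();
        }
    }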

The feature went through the typical Derby development cycle: a developer, interested in the problem, created a proposal for fixing it; the patch was reviewed and discussed; feedback was addressed; the patch was committed.

Somewhat later, as the release approached, other developers in the community ran certain complex test suites that aren't routinely exercised, and those suites were not performing well.

It wasn't obvious what was causing the test suites to misbehave. When test suites get sufficiently complicated, diagnosing their behaviors can be as hard as, or harder than, diagnosing the behavior of the actual product. We have lots of practice diagnosing the behaviors of the main product code, and over time we've built all sorts of debugging and diagnosis hooks into that code to assist with analysis. But tests are generally much leaner in this area, and it's hard to tell what's going on when a complex, unfamiliar, and rarely-run test fails: is there a real problem in the product? Or is this a weak test, which was accidentally passing in the past because it wasn't written quite carefully enough?

The Derby community responded very well to these challenges, I feel.

There was no rush to judgement, no finger-pointing, no blaming. Instead, different members of the community studied the new behaviors, constructed theories, ran experiments, offered suggestions, and considered the ideas of others.

As time passed, the first release candidate missed its schedule, then was withdrawn from consideration. The suspect feature was backed out from the codeline, and testing continued. It was very hard to tell: was the new behavior more stable? Or about the same?

After considerable time, the product was felt to be ready for release, and the second release candidate was released this week.

Even though I had only a minor part in this process, I was fascinated to be involved. At a Derby community luncheon during the Oracle Java conference earlier this month, I attempted to sum up, for the group at large, my observations about what had occurred:

  • Software is hard
  • We made a change
  • Something new and interesting happened in the software
  • We talked about it as a community
  • Lots of people learned new things about the software
  • We addressed the problem, and moved on

I know that it was quite painful for some of the participants who were more closely involved in the effort, but from my point of view, this was software engineering at its finest: a team of dedicated and talented people collaborate on an extremely complex undertaking and, together, overcome obstacles and deliver a strong new release.

Congratulations to the Derby community on Derby 10.8.2.2!

Tuesday, October 25, 2011

Ubuntu 11.10 nits

Since upgrading my Dell Latitude D610 to Ubuntu 11.10, two nits:
  1. My computer seems to drop the wireless connection without warning: the system is suddenly offline, and fiddling with the Network widgets doesn't bring it back online, nor does taking the interface down and up with ifconfig. Rebooting, however, gets me back online with 100% reliability.
  2. The screen dims when the computer is idle for a certain period of time. Then, after waking it back up from screen-dim, the screen lights up, but then slowly darkens again to an almost-dim state. If I then go into the Displays control panel and fiddle with the screen resolution (e.g., set it to a different resolution but then revert to the current resolution), the screen becomes bright again.

I'm not sure what's causing either of these problems, and I'm not sure how to diagnose them further. For the time being, Ubuntu 11.10 on this computer is a bit of an annoyance.

Oh, by the way, three other computers of mine that are running 11.10 are not displaying these symptoms. Just this one.

Monday, October 24, 2011

Hot Wheels!

Mmmm... Hot Wheels ... I love Hot Wheels! So I completely loved this video.

The analysis of the track seems pretty reasonable to me; I think the killer question is the cost. Those booster motors aren't cheap!

I wonder if the author of the video has some special access, like he maybe works for Mattel's marketing department or something like that.

I had a childhood friend whose father worked for Mattel (he was a car designer/artist). So it's possible :)

Another possibility is that the video was spliced together out of multiple shots, and the person making the video only had to have enough track supplies for any single shot. That would make the overall resource consumption more reasonable, since no single shot had an unreasonable amount of track/turns/loops/boosters in it.

I wish the video had had more shots of the kids admiring the car running around the track. The first scene was great!

I thought the single least-likely part of the video was where the car jumped into that funnel-like thing and then got going again on the track. I suspect that particular part of the video took multiple shots...

Saturday, October 22, 2011

Mojave Air and Space Port

Here's a great photo-essay on the Mojave Air and Space Port. I used to pass this pretty routinely in the 1990's, when I would go visit my parents in Ridgecrest.

It's been 20 years now, but the memories are still strong, and this nice short photo-essay was fun to browse through.

Friday, October 21, 2011

Personal Bests

It is no secret that my two favorite writers in The New Yorker are (in either order): Atul Gawande, and Adam Gopnik. So it was with some pleasure that I noticed that two recent issues featured articles from them: Personal Best, by Gawande, and Broken Kingdom, by Gopnik. Each essay is different, and their topics are completely unrelated, yet I find them intriguingly inter-connected.

Gopnik's article celebrates the fiftieth anniversary of a very under-appreciated classic: Norton Juster's The Phantom Tollbooth (published the year I was born!). As Gopnik says, it really means something when a children's book stands the test of time like this:

it means that the book hasn't been passed just from parent to child but from parent to child and on to child again.

Somehow, Gopnik tracks down not just Juster, but also Jules Feiffer, the illustrator, and spends several hours with them ("a pair of wryly benevolent uncles") talking about the book, about how it came to be, and about what they think it means that it has survived to the present day: "The book is made magical by Juster's and Feiffer's gift for transforming abstract philosophical ideas into unforgettable images."

Gopnik proposes that the "enduring magic" of The Phantom Tollbooth is in fact quite fundamental:

As with every classic of children's literature, its real subject is education. The distinctive quality of modern civilization, after all, is that children are subjected to year after year after year of schooling. In the best-loved kids' books, the choice is often between the true education presented in the book -- say, Arthur's through animals at the hands of Merlyn, in The Sword in the Stone -- and the false education of the world and school. The child being read to (and the adult reading) is persuaded that self-reliance is a better model for learning than slavish obedience.

The Phantom Tollbooth, claims Gopnik, represents the ultimate paean to curiosity and a love of all knowledge, wherever it might lie. It is:

not just a manifesto for learning; it is a manifesto for the liberal arts, for a liberal education, and even for the liberal-arts college.

Meanwhile, Gawande is thinking about learning, too, but from a quite different angle:

I've been a surgeon for eight years. For the past couple of them, my performance in the operating room has reached a plateau. I'd like to think it's a good thing -- I've arrived at my professional peak. But mainly it seems as if I've just stopped getting better.

Not surprisingly, this worries Gawande, and he tries various techniques to figure out what is wrong, but then inspiration arrives from a somewhat unlikely corner; while at a medical conference, he takes a break to practice his tennis, and works out with the club's "house pro". Afterwards he happens to be watching a tennis match, and:

I watched Rafael Nadal play a tournament match on the Tennis Channel. The camera flashed to his coach, and the obvious struck me as interesting: even Rafael Nadal has a coach. Nearly every elite tennis player in the world does. Professional athletes use coaches to make sure they are as good as they can be.

But doctors don't. I'd paid to have a kid just out of college look at my serve. So why did I find it inconceivable to pay someone to come into my operating room and coach me on my surgical technique?

The wonderful thing about Gawande, the reason that he is so incredibly inspirational and motivational, is that he doesn't just stop with this insight, he acts upon it:

I decided to try a coach. I called Robert Osteen, a retired general surgeon, whom I trained under during my residency, to see if he might consider the idea. He's one of the surgeons I most hoped to emulate in my career. His operations were swift without seeming hurried and elegant without seeming showy. He was calm. I never once saw him lose his temper. He had a plan for every circumstance. He had impeccable judgement. And his patients had unusually few complications.

Sounds like the dream coach!

Osteen agrees to the request, and the coaching begins:

He came to my operating room one morning and stood silently observing from a step stool set back a few feet from the table. He scribbled in a notepad and changed position once in a while, looking over the anesthesia drape or watching from behind me.

Afterward, Gawande worries that it was all a waste of time:

The case went beautifully. The cancer had not spread beyond the thyroid, and, in eighty-six minutes, we removed the fleshy, butterfly-shaped organ, carefully detaching it from the trachea and from the nerves to the vocal cords. Osteen had rarely done this operation when he was practicing, and I wondered whether he would find anything useful to tell me.

Gawande need not have worried:

I'd positioned and draped the patient perfectly for me [...] but not for anyone else. [...] At one point, we found ourselves struggling to see [...] I should have made more room [...] my right elbow rose to the level of my shoulder [...] I operate with magnifying loupes and wasn't aware how much this restricted my peripheral vision [...] the operating light drifted out of the wound.

In fact, there are plenty of opportunities for improvement:

That one twenty-minute discussion gave me more to consider and work on than I'd had in the past five years.

How wonderful!

They repeat the process, multiple times, verifying that earlier mistakes are corrected, and moving on to other areas for improvement. Gawande is thrilled:

Since I have taken on a coach, my complication rate has gone down. It's too soon to know for sure whether that's not random, but it seems real. I know that I'm learning again. I can't say that every surgeon needs a coach to do his or her best work, but I've discovered that I do.

Oh, how deeply that one sentence resonates with me: "I know that I'm learning again." Is there any sensation more wonderful? Perhaps there are one or two, but this is near to the pinnacle of what it means to be a human. Will Gawande's observations lead others to find coaches? In my own field of software engineering, one of the big breakthroughs of this century is something called "pair programming", in which engineers are challenged to work side-by-side, thinking out loud, sharing observations and ideas, listening and learning from each other constantly. It's fatiguing, but oh-so-helpful: when I get stuck, the first thing I do is call over my cube wall:

Hey Cal, are you there? Can you come be another pair of eyes? I'm just not seeing this...

And now we come full circle, back to Gopnik and Norton Juster. Thinking and thinking about The Phantom Tollbooth, Gopnik finally zeroes in on the specific insight that the book conveys, the reason that, fifty years later, it still thrills reader after reader.

"Many of the things I'm supposed to know seem so useless that I can't see the purpose in learning them at all," Milo complains to Rhyme and Reason. They don't tell him to listen to his inner spirit, or trust the Force. Instead, Reason says, "You may not see it now, but whatever we learn has a purpose and whatever we do affects everything and everyone else. ... Whenever you learn something new, the whole world becomes that much richer."

Indeed, says Gopnik:

Learning isn't a set of things that we know but a world that we enter.

Oh, dear reader: write that down and place it somewhere important, and look at it every day.

Wednesday, October 19, 2011

Yay yay yay yay! The malaria vaccine works (though not perfectly)

Thank you Gates Foundation! Thank you GlaxoSmithKline!

It's nice to have good news like this. Keep it coming!

Flooding in Thailand could affect hard disk drive manufacturing

Here's an interesting article in InfoWorld about how the severe flooding in Thailand will probably have a significant effect on the world's supply of hard disk drives.

Western Digital, the largest hard disk manufacturer, makes more than 30 percent of all hard drives in the world. Its plants in Ayutthaya's Bang Pa-In Industrial Estate and Pathum Thani's Navanakorn Industrial Estate together produce about 60 percent of the company's disks. Both were shut down last Wednesday.

I didn't realize how significant Thailand had become in the manufacturing of electronic components:

Key disk component suppliers have also been hit. Nidec, which makes more than 70 percent of all hard drive motors, has temporarily suspended operations at all three of its plants in Thailand, affecting 30 percent of its production capacity. Hutchinson Technologies, which makes drive suspension assemblies, has also suspended operations due to power outages.

The Register offers some additional details, describing the extent of the flooding as "worse than feared":

"Over the weekend, rising water penetrated the Bang Pa-in Industrial Park flood defences, inundating the company’s manufacturing facilities there and submerging some equipment," WD said in a statement.

CNN has video, and Yahoo news is covering the question of whether the government initially under-reacted to the extent of the floods.

The world is very interconnected; events in one area affect other areas. We are all one planet, one people.

Tuesday, October 18, 2011

Correctly handling RENAME in SCM merge processing

Here's an interesting and detailed article by the team at Atlassian about the complexities of merging changes when the changes involve renamed files. In their case, they found that Git was substantially better than the older Subversion releases at handling this task, and migrating one of their projects to Git has resulted in improved handling of their merge and rename scenarios.
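
Part of what makes this hard is that some version control tools (Git, notably) don't record a rename as a first-class event at all; at merge time they have to infer that a file deleted under one name and added under another is really the same file, typically by comparing content. Here's a toy sketch of that kind of inference, in Java -- purely my own illustration of the idea, not the actual algorithm used by Git, Subversion, or Perforce:

    import java.util.*;

    public class RenameGuesser {
        // Similarity: fraction of lines the two versions share (crude, for illustration).
        static double similarity(List<String> a, List<String> b) {
            Set<String> common = new HashSet<>(a);
            common.retainAll(new HashSet<>(b));
            int larger = Math.max(a.size(), b.size());
            return larger == 0 ? 1.0 : (double) common.size() / larger;
        }

        // Pair each deleted path with the most similar added path, if similar enough.
        static Map<String, String> guessRenames(Map<String, List<String>> deleted,
                                                Map<String, List<String>> added,
                                                double threshold) {
            Map<String, String> renames = new HashMap<>();
            for (Map.Entry<String, List<String>> d : deleted.entrySet()) {
                String bestPath = null;
                double bestScore = threshold;
                for (Map.Entry<String, List<String>> a : added.entrySet()) {
                    double score = similarity(d.getValue(), a.getValue());
                    if (score >= bestScore) {
                        bestScore = score;
                        bestPath = a.getKey();
                    }
                }
                if (bestPath != null) {
                    renames.put(d.getKey(), bestPath);  // treat this delete+add as a rename
                }
            }
            return renames;
        }

        public static void main(String[] args) {
            Map<String, List<String>> deleted = Map.of("src/Util.java",
                List.of("class Util {", "  int add(int a, int b) { return a + b; }", "}"));
            Map<String, List<String>> added = Map.of("src/MathUtil.java",
                List.of("class MathUtil {", "  int add(int a, int b) { return a + b; }", "}"));
            System.out.println(guessRenames(deleted, added, 0.5));
        }
    }

Once the tool has decided that src/Util.java and src/MathUtil.java are the same file, it can replay the other branch's edits onto the new path, instead of reporting a confusing delete/add conflict.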

Perforce once suffered from similar problems, and the handling of renames has been a major focus over the past few years.

Happily, the implementation has gone well: the new Perforce integration engine, now in its final testing, has received great feedback from our early users, and I'm excited about the upcoming release.

This will be the second major release of the Perforce server since I joined, and although I didn't work directly on the integration engine, I was glad to have contributed to the release in a number of other areas.

If you're struggling with how to correctly handle complicated rename and merge scenarios in your SCM procedures, check into Perforce; I think you'll be pleased with the sophistication of the Perforce server in this area!

Saturday, October 15, 2011

The bubble is back!

I think that this article is mostly rot and rubbish, but it's getting a bit of attention on the Netz right now, particularly because of passages like this:
I recently had a sit-down chat with Ping Li, a venture capitalist at Accel Partners who does investments across the layers of the cloud stack. Over the course of our conversation, Ping expressed frustration about the difficulty of hiring and maintaining talent right now. “It’s this heated funding environment,” he said, going on to explain that all of the money sloshing around in the Valley had created a market for talent that’s just as tight as it was during the dotcom boom. What’s worse, he explained, is that the talent shortage is stifling fundamental innovation in the cloud space.

Sour grapes, I'd say.

Though it's definitely true that if Google continues adding 800 new employees a month, that will create significant competition for employees.

However, the article goes on to say things like:

We’re written in Ruby and hosted on Heroku, a pair of technical decisions that we made so that we could easily and painlessly scale, and so that we wouldn’t have to waste resources on any sort of sysadmin work. Back in the depth of the last downturn, we were fortunate to have found a team of contract developers ...

This is the sort of rot and rubbish that drives good engineers crazy. Sure, Heroku is super-trendy and just the bee's knees right now, but any entrepreneur with any sense ought to understand that smart engineers have the ability to learn all sorts of different technologies, and make technology transitions all the time.

None of us emerged from the womb with five years' experience in iOS application design, after all :)

It's simple: if you hire for buzzwords, you get people who have learned how to shape their career around buzzwords: shallow thinkers who are adept at jumping on board the latest trend and can market themselves as experts in whatever new gewgaw is flitting about.

If you hire for intelligence, adaptability, communication skills, ability to think abstractly, experience working in a team, and, of course most important of all, whether they Get Things Done, you're building a team that matters, will last, and will actually succeed in making something new.

So I'm neither the slightest bit surprised nor the slightest bit sympathetic that the business leaders who haven't learned these lessons are failing. There's a reason that those lessons are important; it's because if you don't understand them and realize why they matter, you too will find yourself whining that "top engineers are being enticed with truckloads of money to break off and form two- and three-person startups," and not comprehending what's really wrong with your company and your hiring and retention practices.

Sorry about the mini-rant, but when I see buzzword-driven headhunters bemoaning the scarcity of some trendy technology fad or another, I know that what I'm reading is not a serious discussion about whether or not our education system is encouraging the sort of academic disciplines which lead to long-term systems thinkers who can work effectively in a world of constantly changing technology, but rather is a self-serving attention grab to pump up their own visibility by getting their name in the paper.

Now, we've spent too much time on that particular article :)

Wednesday, October 12, 2011

Dennis Ritchie has died?

Oh, dear. Rob Pike is reporting that Dennis Ritchie has died. This is very sad news. He certainly was one of the top ten figures in Computer Science in my lifetime.

Three unrelated topics

Sorry for the "link bait" aspect of this post, but I've been falling a bit behind recently and I wanted to get these items out there before I forgot about them:
  1. There's been a fair amount of discussion over the past 4 weeks about Google's proposed new replacement for JavaScript, the programming language Dart. JavaScript is neat, but I'm all in favor of improving programming languages, so I'm pleased to see Google pushing the discussion forward. If you're interested, you'll want to spend some time looking at the Technical Overview, and browsing the other online docs. If you'd like to see what others think, try starting with this comment thread at Lambda the Ultimate; or, for a contrarian perspective, try Peter-Paul Koch's take.
  2. The always fascinating Jeff Atwood has a fantastic essay online about Gamification. Atwood isn't just interested in Gamification, he's (attempting to) live it:
    Stack Overflow is in many ways my personal Counter-Strike. It is a programmer in Brazil learning alongside a programmer in New Jersey. Not because they're friends -- but because they both love programming. The design of Stack Overflow makes helping your fellow programmers the most effective way to "win" and advance the craft of software development together.
    There are great comments at the end of the Atwood article, and also don't miss the presentation he links to, Sebastian Deterding's presentation Meaningful Play: Getting Gamification Right
  3. Lastly, while I've generally avoided the flood of mostly melodramatic and mawkish writing about the tenth anniversary of 9/11, let me draw your attention to this stellar piece by Ken Regan about Danny Lewin, MIT PhD student, co-inventor of consistent hashing, co-founder of Akamai Technologies, and passenger on American Airlines Flight 11. Regan's article has many links to chase and many ideas to pursue, but make sure (if you're even the slightest bit interested in Computer Science) that you follow the most important link, to the paper that made the web work.

Tuesday, October 11, 2011

You can't cowboy this through

I'm not exactly sure why, but something really fascinated me about this great story that Geoff Manaugh carried on his BLDGBLOG web site the other day, following up on a more detailed story in the New York Times.
How do you do it? The rock has already been raised off the ground by hydraulic lifts and put in a cradle; steel trusses were built around the cradle, all part of a modular tractor with 22 axles, each with its own set of brakes, and 196 wheels. It will weigh 1,210,900 pounds, including the rock. “That’s a lot,” Mr. Albrecht said. “But the weight per axle should be about 349,950 pounds. That’s not so bad. You’ll get more on some of these rock trips coming out of this quarry every day. We’re not worried.”

The rig will be about 295 feet long and 27 feet wide and require a crew of 12 people to operate it. The modular assembly means it should be able to turn, like a caterpillar, and thus navigate corners in Los Angeles that can challenge more conventional rigs.

The efforts involved in moving objects like this around remind me of the current work underway at the Altamont Pass Wind Farm. The current wind turbines are quite dangerous to the local birds, and so a massive project is underway to upgrade all the turbines to new models that are more bird-safe.

Nearly 2,000 of the 4,000 wind turbines in operation, many of which are nearly 30 years old, will be replaced over the next four years with about 100 huge state-of-the-art turbines that, at 430 feet, stand taller than the tallest coast redwood trees. For every new turbine installed, 23 of the old ones will be removed -- a dramatic drop expected to significantly reduce the number of birds killed each year.

Of course, this involves building these turbines, bringing them to the area, and installing them:

On a recent morning, a construction crew used a 315-foot-tall crane to lift a 180,000-pound unit that contains the gear box and generator to the top of the tower. Each of the three blades on a turbine is 150 feet long, nearly the width of a football field. In high winds, the tips of the blades spin at 180 mph.

But this, too, is no simple task. Although this isn't the first time they've done this, and even though the movers try to plan ahead, life happens, and complications ensue. (Do you suppose it's just a coincidence that the staging area for the new turbines is a location in Solano County named "Birds Landing"???)

Climbing a tree

Here's a nice video I hadn't seen before. It's 5 years old, but still quite enjoyable.

Monday, October 10, 2011

A Day on the SS Jeremiah O'Brien

While it isn't much to look at, the SS Jeremiah O'Brien is something of a national treasure. It is one of just a handful of surviving "Liberty Ships", World War II cargo vessels that were built at an incredible pace to serve the shipping needs of the war. Nearly three thousand Liberty ships were built, but only a few survive (given that they were originally designed for only a five-year service life, it's astonishing that any survive!). Additionally, only two Liberty Ships are still operational; the O'Brien is one of those two, rescued after decades at rest in the "mothball fleet", restored and returned to service as a floating museum and memorial.

Although most Liberty ships served as cargo transports, some played multiple roles on successive missions, and the O'Brien had a storied career, including a role as one of the seven thousand ships that comprised the D-Day armada, delivering jeeps, trucks, and other supplies to Omaha Beach off the Normandy coast. That mission was memorialized 50 years later, when an all-volunteer crew of retired sailors sailed the O'Brien back to Normandy for the invasion's 50th anniversary.

This weekend, my father treated me to a special event, the 2011 Fleet Week cruise on the SS Jeremiah O'Brien. This is the 30th anniversary of San Francisco Fleet Week, as well as the 100th anniversary of Naval Aviation. Until this weekend, I hadn't realized that the first-ever landing of an airplane on a ship occurred right here in San Francisco Bay!

For the Fleet Week cruise, the O'Brien puts on a very special show:

  • The ship sets to sea, and steams out under the Golden Gate Bridge
  • She views the parade of ships as they pass under the bridge and into the bay
  • She joins the parade as its final ship; though she is a civilian cargo ship, she's granted this position of honor due to her long service and dedicated crew
  • She takes up station just off Alcatraz Island, and treats her guests to "the best seats in town" for the airshow
  • Then she returns to the dock, resuming her role as a dockside museum until the next special event
This, of course, is not something that comes along very often, so I was pleased as punch to get the invite!

The cruise boards early; we were at the dock by 8:00 AM.

Boarding involves climbing the steep gangplank up to the main deck. After you board, there are donuts and coffee to get your day started.

We pulled away from dock shortly after 9:00 AM and headed out toward the Golden Gate. This seemed like a good time to explore the ship, so we did. Unlike on some restored naval vessels, the volunteers on the O'Brien have done an incredible job of making nearly the entire ship available for you to wander about and see, including:

  • The Flying Bridge, where the Captain and Pilot oversee the ship's activities
  • The gun tubs, with various guns of various types from the original ship's weaponry
  • The Radio Room, Mess Hall, Galley, crew quarters, etc.
  • The main cargo hold, which has been turned into a museum, gift shop, and exhibit hall
  • The engine room

Of all these, the engine room is the most exciting; it is a descent into a world of yesteryear. You climb down four stories of steep stairs into the very bottom of the ship, which houses a single enormous 2,500-horsepower triple-expansion steam engine, designed by the North East Marine Engineering Co. of Sunderland, England, nearly 100 years ago.

The engine room, as a whole, is divided into three parts:

  • The boilers, where eight huge furnaces boil enormous tanks of water to produce steam.
  • The expansion cylinders and their pistons. Since the three cylinders of the triple-expansion engine work with steam at progressively lower pressures, the cylinders and pistons must be progressively larger so that each stage can push with a similar force (see the short calculation below).
  • The propeller shaft itself, an immense shaft some 300 feet long, which extends out to the stern, where it drives the single enormous propeller that powers the O'Brien.
The engine room is hot, loud, large, smelly, and oily, and is thrilling beyond description. It was definitely worth the short wait in line to go visit!
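
As an aside, the reason the cylinders have to grow is just arithmetic: the push on a piston is roughly the steam pressure times the piston area. With made-up round numbers (not the O'Brien's actual figures), if the high-pressure cylinder admits steam at about 200 psi and the low-pressure cylinder sees only about 25 psi:

    F = p \cdot A,
    \qquad
    \frac{A_{\text{LP}}}{A_{\text{HP}}} \approx \frac{p_{\text{HP}}}{p_{\text{LP}}} = \frac{200\ \text{psi}}{25\ \text{psi}} = 8,
    \qquad
    \frac{d_{\text{LP}}}{d_{\text{HP}}} = \sqrt{8} \approx 2.8

so the low-pressure piston needs roughly eight times the area, or a bit under three times the diameter, of the high-pressure piston in order to push with a similar force.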

When we finished touring the ship and climbed back up to the main deck, we found that we were already well past the Golden Gate Bridge, and the parade of naval ships had begun. The first ship in the line was one of the jewels of the U.S. Navy fleet, and a ship with a long San Francisco history: the USS Carl Vinson. The Carl Vinson was followed by a number of other ships, including a naval minesweeper, an Aegis cruiser (the USS Antietam), several Canadian Navy vessels, and the Coast Guard's newest cutter, the USCGC Bertholf.

After the naval vessels passed by, the O'Brien swung around and took up her place at the rear of the line, and we steamed back under the Golden Gate Bridge into the brilliant sunshine of a perfect October afternoon. We had lunch on the main deck and found good viewing positions on the rail for the airshow.

The airshow lasted more than 3 hours, and included demonstrations by a USAF F-15 Eagle, a USMC V-22 Osprey, the Patriots jet team, and the stunt plane flown by Sean D. Tucker (as part of his show, he flew a circle around the USA 76 America's Cup yacht). But these were all just side-shows compared to the real stars of the show:

  • Canada's 9-plane 431 squadron, the Snowbirds
  • A surprise appearance by one of the most distinctive planes you'll ever see aloft, the USAF B-2 Bomber
  • And, of course, the best-known flight team of them all, the Navy's Blue Angels

It is an unbelievable thrill to be out in the middle of San Francisco Bay, on a beautiful autumn afternoon, as a squadron of F-18 jets pass barely 100 feet overhead, in perfect unison, barely inches apart from each other, two planes inverted and the other two standard, in the renowned Blue Angels Inverted Diamond formation.

Once the show was over, the hundreds (thousands?) of pleasure craft on the bay headed for home, and so did the O'Brien. Just another wonderful day on the bay!

InfoCide

I don't have much to say about this, but if you're interested in online culture, you might find it intriguing to browse some of the discussion underway about the server response formerly known as DiveIntoMark.

For myself, I've enjoyed reading many of Mark Pilgrim's online books, and think it's a shame that they are no longer online.

But I also understand that it must have been a huge amount of effort to build and maintain those works, and it's his right to do with them what he wishes.

Wednesday, October 5, 2011

Stay hungry. Stay foolish.

56 is Too Damned Young.

A short history of the BTree

My latest article is online at the Perforce blog, so I thought I'd post a link to it here in case you find it interesting (Christopher did, and sent me a nice note!).

Meanwhile, you might find some of the other articles on the Perforce blog interesting; I particularly thought that Zig's latest was the bee's knees!

Tuesday, October 4, 2011

Security analysis of the modern automobile

I thoroughly enjoyed this paper from the 20th Usenix Security Symposium: Comprehensive Experimental Analyses of Automotive Attack Surfaces.

The authors study the security aspects of "late model mass-production sedans", with respect to:

  • Threat modelling
  • Vulnerability analysis
  • Threat assessment

After a quick review of modern automotive computer technology, the authors get right down to brass tacks and start exploring the risks in your car.

Here are some of the wonderful and creative attacks they come up with:

  • Service personnel routinely connect Windows-based computers to the OBD-II port on your engine's computer during service and maintenance. So compromising those Windows-based systems can allow attacks on cars during service.
  • Electric vehicles communicate through their charging cables.
  • Car stereos nowadays contain CD players, USB ports, iPod connectors, and other digital multimedia ports, which are a rich pathway for attacks to travel.
  • Hands-free telephone support in cars is usually based on Bluetooth connections.
  • Cars usually have RF-based Remote Keyless Entry systems to unlock doors, activate alarms, flash lights, etc.
  • Since 2007, cars have had Tire Pressure Monitoring systems that use similar radio communications.
  • The very latest cars are capable of becoming mobile WiFi hotspots themselves.
This is a vast number of potential vectors! "But wait, there's more!"
  • Cars have GPS systems, Satellite Radio receivers, Digital Radio receivers, Radio Data Systems, Traffic Message Channel devices
  • Cars have remote telematics, such as OnStar, Sync, BMW Assist, etc.
  • Cars have anti-theft devices, hands-free driving directions, etc.
And the list isn't getting any shorter over time...

So, what's the real, current threat? Well, as the paper states, the authors developed nearly a dozen demonstrated attacks, and:

Combining these ECU control and bridging components, we constructed a general "payload" that we attempted to deliver in our subsequent experiments with the external attack surface. To be clear, for every vulnerability we demonstrate, we are able to obtain complete control over the vehicle's systems. We did not explore weaker attacks.

The specific attack vectors they used included:

  • A CD containing malware, which they convince the automobile owner to put into their CD player via social engineering
  • A Windows laptop containing malware which they plug directly into the OBD-II port
  • WiFi network attacks to the car's PassThru devices
  • An Android phone containing malware which they convince the car owner to allow into the car via social engineering
  • Calling the car's telematics system and exploiting it
  • Loading an iPod with malware and convincing the car owner to plug it into the iPod dock
None of these are even the slightest bit ridiculous; these are real, solid, critical vulnerabilities.

As computers become more sophisticated, as the appliances in our life become more automated, and as the world becomes more networked, what once might have seemed far-fetched is now startlingly immediate. We all have to become security engineers; we all have to understand how to build secure and reliable systems, and work like this is crucial in helping us understand where we are and where we need to be.

As I said, I thoroughly enjoyed the paper, and hope you do, too!

Monday, October 3, 2011

The Architecture of Open Source Applications

Over the summer, I read The Architecture of Open Source Applications, a rather unusual book.

The term "architecture", when it comes to software engineering, is a somewhat soft and fuzzy concept; the editors of AOSA define it as follows:

Each chapter describes the architecture of an open source application: how it is structured, how its parts interact, why it's built that way, and what lessons have been learned that can be applied to other big design problems.

Sometimes I get very frustrated when the term "architecture" is used, because it often feels like "title inflation": software engineers who want a bit of an ego boost describe themselves as "architects", a problem made vivid by Joel Spolsky in his wonderful essay Don't Let Architecture Astronauts Scare You, one of the greatest eviscerations ever committed to the World Wide Web:

When great thinkers think about problems, they start to see patterns. They look at the problem of people sending each other word-processor files, and then they look at the problem of people sending each other spreadsheets, and they realize that there's a general pattern: sending files. That's one level of abstraction already. Then they go up one more level: people send files, but web browsers also "send" requests for web pages. And when you think about it, calling a method on an object is like sending a message to an object! It's the same thing again! Those are all sending operations, so our clever thinker invents a new, higher, broader abstraction called messaging, but now it's getting really vague and nobody really knows what they're talking about any more.

When you go too far up, abstraction-wise, you run out of oxygen. Sometimes smart thinkers just don't know when to stop, and they create these absurd, all-encompassing, high-level pictures of the universe that are all good and fine, but don't actually mean anything at all.

So it was with rather a fair amount of trepidation that I wandered over to Lulu late last spring, and plunked down some money for my own personal copy of The Architecture of Open Source Applications: was I going to find insight? Or Architecture Astronauts?

I am pleased to say that this book is, for the most part, happily free of vague descriptions and hand-waving, and enjoyably packed with concrete thought and hard-earned wisdom.

AOSA is a compilation of 25 essays, by 25 different authors. Each author writes about a particular Open Source application, one which they know intimately and thoroughly. The authors, for the most part, are the original creators or the primary current maintainers of the applications in question. Better, the applications are chosen wisely and represent some of the best-written, most well-proven, most widely-used software on the planet. Let's look at the applications they picked:

  1. Asterisk
  2. Audacity
  3. The Bourne-Again Shell
  4. Berkeley DB
  5. CMake
  6. Continuous Integration
  7. Eclipse
  8. Graphite
  9. The Hadoop Distributed File System
  10. Jitsi
  11. LLVM
  12. Mercurial
  13. The NoSQL Ecosystem
  14. Python Packaging
  15. Riak and Erlang/OTP
  16. Selenium WebDriver
  17. Sendmail
  18. SnowFlock
  19. SocialCalc
  20. Telepathy
  21. Thousand Parsec
  22. Violet
  23. VisTrails
  24. VTK
  25. Battle for Wesnoth

You could quibble with some of these picks, maybe, but you'd be beaten down by your friends: this is a serious list of substantial and important applications, and if you can't find something here that both (a) interests you and (b) has something to teach you about how software is structured, designed, and written, then the software field is not for you.

In a book this large and varied, it's hard to pick out individual passages, since different aspects will appeal to different people. But in an attempt to give you a feel for the book, here are a handful of observations that should allow you to understand what sort of book this is:

  • Talking about the development of sendmail, Eric Allman shares a laundry list of wisdom developed over the years, including principles such as:
    • Make Sendmail Adapt to the World, Not the Other Way Around
    • Change as Little as Possible
    • Think About Reliability Early
    He also describes how they evolved an approach that, decades later, came to be known as one of the tenets of Extreme Programming, Do the simplest thing that could possibly work:
    There were many things that were not done in the early versions. I did not try to re-architect the mail system or build a completely general solution: functionality could be added as the need arose. Very early versions were not even intended to be completely configurable without access to the source code and a compiler (although this changed fairly early on). In general, the modus operandi for sendmail was to get something working quickly and then enhance working code as needed and as the problem was better understood.

    Note how Allman's approach echoes many of the principles of the Agile Manifesto.

    Another chapter, about the space-based strategy game Thousand Parsec, talks about the value of incremental development:

    A major key to the development of Thousand Parsec was the decision to define and build a subset of the framework, followed by the implementation. This iterative and incremental design process allowed the framework to grow organically, with new features added seamlessly. This led directly to the decision to version the Thousand Parsec protocol, which is credited with a number of major successes of the framework.

    A similar approach is described by Chris Davis in the chapter on Graphite:

    By and large Graphite evolved gradually, hurdle by hurdle, as problems arose. Many times the hurdles were foreseeable and various pre-emptive solutions seemed natural. However it can be useful to avoid solving problems you do not actually have yet, even if it seems likely that you soon will. The reason is that you can learn much more from closely studying actual failures than from theorizing about superior strategies.
    "Avoid solving problems you do not actually have yet" -- I wonder how many thousands of failed software projects would have succeeded if their teams had just been able to comprehend and follow this simple rule of thumb?
  • Different authors reveal differing approaches to similar problems:
    • Describing the construction of the fantasy strategy game Battle for Wesnoth, the authors talk about the temptation to use object-oriented inheritance techniques to model the various types of units that can appear in the game, and why those approaches don't work:
      It is tempting to make a base unit class in C++, with different types of units derived from it. For instance, a wose_unit class could derive from unit, and unit could have a virtual function, bool is_invisible() const, which returns false, which the wose_unit overrides, returning true if the unit happens to be in a forest.

      ...

      Wesnoth's unit system doesn't use inheritance at all to accomplish this task.

      Why did they make this choice? Well, you'll need to read their essay :) (I've put a rough sketch of the non-inheritance idea just after this list.)
    • Another fun thing about the Thousand Parsec chapter is to see the contrast with the Battle of Wesnoth chapter:
      In a Thousand Parsec universe, every physical thing is an object. In fact, the universe itself is also an object. This design allows for a virtually unlimited set of elements in a game, while remaining simple for rulesets which require only a few types of objects.
    That's the wonderful thing about software: two different groups can look at things and take very different approaches, and both approaches are worth understanding and learning from.
  • Given the work I do in my day job, I found Chet Ramey's observation about his work on bash particularly worth noting:
    I have spent over twenty years working on bash, and I'd like to think I have discovered a few things. The most important -- one that I can't stress enough -- is that it's vital to have detailed change logs. It's good when you can go back to your change logs and remind yourself about why a particular change was made. It's even better when you can tie that change to a particular bug report, complete with a reproducible test case, or a suggestion.
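
Going back to the Battle for Wesnoth item above: I won't spoil the authors' reasoning, but here's a rough sketch of the general shape of a non-inheritance design -- in Java rather than Wesnoth's C++, with ability and terrain names invented for illustration, and in no way Wesnoth's actual code. The idea is that a unit's special abilities live in data on a single Unit class, and questions like "is this unit invisible?" are answered from that data:

    import java.util.*;

    // One concrete class for every unit; the differences live in data, not in subclasses.
    class Unit {
        final String type;
        final Set<String> abilities;   // e.g. "ambush" -- names are hypothetical
        String terrain = "grassland";

        Unit(String type, Set<String> abilities) {
            this.type = type;
            this.abilities = abilities;
        }

        // Instead of a virtual is_invisible() overridden by a wose subclass,
        // the answer is computed from the unit's data and its current situation.
        boolean isInvisible() {
            return abilities.contains("ambush") && terrain.equals("forest");
        }
    }

    public class UnitDemo {
        public static void main(String[] args) {
            Unit wose = new Unit("Wose", Set.of("ambush", "regenerates"));
            Unit spearman = new Unit("Spearman", Set.of());

            wose.terrain = "forest";
            spearman.terrain = "forest";

            System.out.println(wose.type + " invisible? " + wose.isInvisible());         // true
            System.out.println(spearman.type + " invisible? " + spearman.isInvisible()); // false
        }
    }

New unit types then become new data (a type name plus a set of abilities) rather than new subclasses, which fits nicely with games that load their unit definitions from configuration files.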

I'll close with an observation about open source in the context of this book: the book is much more about architecture than it is about open source. That is, I didn't find a lot of discussion about topics such as: building your community, establishing roles and relationships in an open source organization, learning to deal with uninvited feedback, or finding value from unexpected contributions, all of which are part of the open source process, but don't have a lot to do with the architecture of software.

So in that respect, the book sticks to its knitting, and concentrates on what it sets out to do. But note: this wouldn't be possible unless these were open source applications! That is, by definition we can't have this sort of public discussion about the architecture of closed source applications, because there is no open discussion of architecture without open design, and open source. Consider books such as VAX/VMS Internals and Data Structures, or the more modern Understanding the Linux Kernel; the only reason you see books like these is that the systems they describe allow access to their source code, and so we can all profit from studying how these systems are built.

While I find the open source development process intriguing in its own right, it is nice to see a book such as The Architecture of Open Source Applications, because studying the software itself is incredibly important. It is vital that we study real systems, not just toy applications built as exercises in programming courses, in order to learn the lessons and techniques that come from building systems that have to solve real-world problems.

This is not the best book ever written: the varied style of the authors results in a somewhat choppy experience, and some authors are better than others at sharing what they know and what they've learned. But it is a fascinating set of essays, and if you are (or want to be) a practicing software engineer, you will find much to learn by digging into and reading this material closely.

Saturday, October 1, 2011

Academic publishing issues in Computer Science

As the new school year gets underway, a few snapshots from the current state of affairs with respect to scholarly research and academic publishing in the Computer Science field:
  • At Princeton University, a faculty committee studying the issues of Open Access have issued a report detailing their recommendations for the university's open access policy. In the report, they say:
    We recommend a revision to the Rules and Procedures of the Faculty that will give the University a nonexclusive right to make available copies of scholarly articles written by its faculty, unless a professor specifically requests a waiver for particular articles. The University authorizes professors to post copies of their articles on their own web sites or on University web sites, or in other not-for-a-fee venues. Of course, the faculty already had exclusive rights in the scholarly articles they write; the main effect of this new policy is to prevent them from giving away all their rights when they publish in a journal.
    The Princeton policy is based on an earlier one issued by Harvard, which states:
    The intention of the policy is to promote the broadest possible access to the university’s research
    That policy, in turn, is based on earlier policies adopted by MIT, Stanford, and Duke. At the Freedom to Tinker blog, Professor Andrew Appel of Princeton discusses the new policy in more detail, saying:
    Basically, this means that when professors publish their academic work in the form of articles in journals or conferences, they should not sign a publication contract that prevents the authors from also putting a copy of their paper on their own web page or in their university's public-access repository.
  • Meanwhile, Matt Welsh points us to a recent article in the Communications of the ACM: Rebooting the CS Publications Process. If you have trouble getting to the article because it's behind a paywall (see the previous item!), try this link.

    The article details a litany of problems, including low-quality reviews, long lag times for article publication, and low acceptance rates, and proposes a solution named CSPub:

    CSPub is, at its core, a mashup of conference and journal submission and review management software, such as HotCRP [26], with technical report archiving services like arXiv and with bibliographic management and tracking and search services like DBLP, Google Scholar, and CiteULike (see Section 4 for extended discussion on how these systems work).
    As Welsh notes, however, it's not clear that the problem here is a technology problem:
    The fact is that we cling to our publication model because we perceive -- rightly or wrongly -- that there is value in the exclusivity of having a paper accepted by a conference. There is value for authors (being one of 20 papers or so in SOSP in a given year is a big deal, especially for grad students on the job market); value for readers (the papers in such a competitive conference have been hand-picked by the greatest minds in the field for your reading pleasure, saving you the trouble of slogging through all of the other crap that got submitted that year); and value for program committee members (you get to be one of the aforementioned greatest minds on the PC in a given year, and wear a fancy ribbon on your name badge when you are at the conference so everybody knows it).

These problems have been around for a long time. I'm not sure whether the Computer Science field has a worse case of these problems than other fields such as Physics or Mathematics. Clearly, the existence of the problems is not for lack of effort by the community; it's just not entirely obvious what to do.