Wednesday, March 30, 2011

The complexities of interpreting a benchmark

Benchmark wars are no fun, and I hate them as much as the next engineer. So it's a breath of fresh air to read, over at the Real World Technologies web site, David Kanter's thorough and detailed analysis of the recent benchmarks of the new AMD server micro-architecture code-named "bulldozer".

It's worth reading Kanter's article just to get some perspective into how hard it is to conduct a detailed and accurate analysis of a benchmark of a modern server. Kanter identifies dozens of variables that can affect the overall results, from procedural aspects such as compiler settings and runtime options to engineering decisions such as system bus speeds, memory subsystem design, and power management.

Kudos to Kanter for taking the time to study the issues in such detail, and for sharing with us the thought process he went through as he analyzed the data that has emerged so far. For my part, I'm much less interested in the results than in the process, as it's always good to consider how one can become a better benchmarker, and reading a benchmark critique is a great way to do that.

Tuesday, March 29, 2011

Amazon CloudDrive debuts

Amazon has announced its CloudDrive service, which is kind of a competitor to systems like DropBox, or perhaps to online backup services like Carbonite or Mozy.

The Amazon pricing model appears quite simple: $1 / GB / year. The service is positioned as a place to store your music and books and other digital media that you purchase from Amazon, and for those purposes: "Any Amazon MP3 purchases that you elect to store on your Cloud Drive at the time of purchase do not count against your storage quota."
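The arithmetic is easy at that rate: storing 20 GB would run $20 a year, and 100 GB would run $100 a year.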

Note that a raw storage device is not a complete backup solution, of course; a backup solution knows how to keep multiple versions of your files on the backup device, can take full and incremental backups, can restore you to a point in time, etc. So if you wanted to use CloudDrive for backup, you'd need to combine it with some of those procedures. Of course, maybe Amazon will add to the service in the future.

Monday, March 28, 2011

Up and Down California

I just happened upon a wonderful blog that I hadn't seen before: Up and Down California. The blog is, one post at a time, publishing and illustrating the travels of the 1860 Whitney Survey of California.

Although, like most blogs, the website loads with the newest article at the top, the authors have kindly provided an index and you can start reading from the beginning by going here.

This is great work, and the pictures really bring the stories to life. Enjoy!

Gosling joins Google

The net is abuzz with the news that James Gosling has joined Google. Congratulations Google! I'm sure he'll be an interesting person to have around.

I tend to doubt he'll work on Java though; I suspect he's over that.

I wonder if he'll work on Go?

Sunday, March 27, 2011

The information is available, if you are a subscriber

There's been a lot of discussion about access to information lately, brought particularly to the forefront by the New York Times's decision to join the Wall Street Journal, the Financial Times, the New Yorker, and other major publications in restricting access to its online articles to those who have paid to subscribe.

Readers of my blog will know that I have long been frustrated by the computer science research community's practice of publishing its research in academic journals which charge enormous fees to access the information, even years or decades later, despite the fact that the work is largely (though not totally) funded by public taxpayer grants. This hurts everyone. However, the publishing industry is very wealthy and can afford Washington lobbyists.

So I was interested to see a recent article discussing an effort by a group of librarians to try to organize an effort to address this problem. It sounded like a promising idea, so I thought I'd link to the article.

But it's only available to subscribers willing to pay for access.

Wednesday, March 23, 2011

Comodo fraud incident

This short incident report from Comodo is fascinating; in particular, check out the last line!

This nice summary report over on Freedom to Tinker does a good job of summarizing what is currently known about the event.

The Freedom to Tinker report links to this awesome post by Jacob Appelbaum over at the Tor project discussing more details.

There are also short posts from Microsoft, Mozilla, and Google.

This is all very interesting, and I suspect we'll learn more about this in the coming weeks.

For now, though, I instantly wonder (and presumably others do as well): is this the first salvo of a Stuxnet counter-attack?

Is there more information about Google Snappy?

I see that Google have released their internal compression library, Snappy, as open source.

Unfortunately, besides the code itself, there doesn't appear to be a lot of additional information about the library. The README is clear and well written, but not tremendously detailed.

The README, and the project's home page, basically say: "read the code".

Which is fine, of course!

But I was hoping to find some design documents or other discussions about why one might choose to use or not use this library, how and when to use it, how to inter-operate (or not) with other compression schemes, etc.
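For what it's worth, a quick skim of snappy.h suggests that basic usage looks roughly like this. Treat this as a sketch based on my reading of the header, not as official documentation:


#include <stdio.h>
#include <string>

#include <snappy.h>   // from the Snappy distribution

int main()
{
    std::string original(100000, 'x');   // a highly compressible input

    // Compress into a std::string.
    std::string compressed;
    snappy::Compress(original.data(), original.size(), &compressed);

    // Validate, then uncompress back into another std::string.
    std::string restored;
    if (snappy::IsValidCompressedBuffer(compressed.data(), compressed.size()) &&
        snappy::Uncompress(compressed.data(), compressed.size(), &restored))
    {
        printf("%d bytes -> %d compressed bytes -> %d bytes restored\n",
               (int) original.size(), (int) compressed.size(), (int) restored.size());
    }
    return 0;
}


That much is easy enough to work out from the header; what I was really hoping to find is a discussion of when Snappy's speed-over-compression-ratio trade-off is the right one, which is exactly the sort of design document that doesn't seem to be there.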

Google seem to have a lot of interesting libraries that fall into this state: the Google collection classes, the Google enhanced networking code, all the fancy Google web tools. It's amazing stuff, but it's kind of hard to deal with when it just arrives on the scene, without much of the backstory included.

Tuesday, March 22, 2011

Kseniya Simonova's delightful sand art performances

I only just learned about Kseniya Simonova's spectacularly beautiful sand art performances.

Do yourself a favor: block out 9 minutes of time, and go watch her amazing performance at the season finale of the 2009 "Ukraine's Got Talent" competition.

I came across this video after seeing another very nice performance of hers, at this year's Melody Amber chess tournament, where she entertained the hosts and players.

The chess art is very nice, but really, that wartime romance is superb. Enjoy.

Les Valiant has been awarded the Turing Award

I see on the ACM's website that Professor Leslie Valiant has been awarded this year's Turing Award, and will deliver the annual Turing Award lecture this June.

Here's the ACM release describing the award, and here's a nice document from 25 years ago summing up many of the fascinating things he'd done even then.

I never met Professor Valiant, but I studied a number of his papers in the early 1980's in my undergraduate computational complexity class. At the time that I got my copy of Garey and Johnson, the references section listed The complexity of computing the permanent as "to appear". I remember in those days we used to get photocopies of review pre-prints of interesting articles from our instructor, though I can't recall if any of his articles were among those.

Congratulations on an award long deserved, Professor Valiant!

Saturday, March 19, 2011

Writing for another blog

Recently, I've started learning how to write for the Perforce corporate blog. I'm still learning how the process works, but as I get the hang of it, you'll be able to see my posts at this link.

There are different tools to learn, and a whole different process for writing, reviewing, and publishing. I'm enjoying the process, and I hope you enjoy reading what I (and my co-workers) write over at the Perforce blog!

Friday, March 18, 2011

GSoC 11 organizations list is published

Google have announced the list of mentoring organizations that have been accepted for this year's Google Summer of Code program.

Assuming I read the list correctly, the Apache Software Foundation is on the list, and has been selected once again to be a participant in the program.

You can read more about Apache's participation in the program at this Apache page.

Stuff I'm reading recently

I don't usually favor that "linkbait" type of blog post, but recently I've been a bit frazzled, and my attention has been wandering over a collection of completely unrelated things.

So, just for yucks, here's a bunch of what I've been reading this week, in no particular order:



As I said, it's a pretty random collection of stuff, and ranges all over the place. But maybe there's something in there you'll find interesting, and some new ideas to take you someplace you've not been thinking about recently!

Tuesday, March 15, 2011

Book Review: Zeller's Why Programs Fail

Andreas Zeller's Why Programs Fail is a practical book for practical programmers. Professor Zeller has written a book that I wish I could have read 25 years ago, but am happy to have read even now.

Why Programs Fail is a straightforward collection of hard-earned knowledge about the task of figuring out what's wrong with a computer program and how to fix it; as the subtitle notes, it is "A Guide to Systematic Debugging".

Many programmers simply never develop their debugging skills, and hence they are at best inefficient and at worst ineffective at debugging. Zeller sets out to help you past these problems, by showing you tools and techniques that will make you a more efficient and more effective finder and fixer of bugs in programs.

Professor Zeller is well-known as the author of the ddd graphical debugger on Linux, and the delta debugging algorithm, which later became the delta debugging tool, so it is no surprise that some of the strongest sections of the book are the ones which describe how to use debuggers effectively. Chapter 8 describes, in detail, how to use the gdb debugger to step through programs, examine state, interpret the debugger displays, control the program's behavior, etc.

I liked several other sections of the book as well. For example, chapters 3, 4, and 5 do a good job of presenting lots of different ideas and strategies for reproducing problems, writing tests to demonstrate problems, narrowing down test cases, making unreproducible problems easier to reproduce, etc.
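Just to make the "narrowing down test cases" idea concrete, here's a toy sketch of the general approach. This is my own simplified illustration, not Zeller's actual ddmin algorithm, and the failure predicate is obviously invented:


#include <stdio.h>
#include <string>

// Stand-in for "run the test": here the (invented) failure is triggered
// whenever the input still contains the character 'X'.
static bool fails(const std::string &input)
{
    return input.find('X') != std::string::npos;
}

// Toy reduction loop: repeatedly try deleting chunks of the input,
// keeping any deletion after which the test still fails.
static std::string reduce(std::string input)
{
    size_t chunk = input.size() / 2;
    while (chunk >= 1)
    {
        bool shrunk = false;
        for (size_t pos = 0; pos + chunk <= input.size(); )
        {
            std::string candidate = input;
            candidate.erase(pos, chunk);
            if (fails(candidate))
            {
                input = candidate;      // smaller input still fails; keep it
                shrunk = true;
            }
            else
            {
                pos += chunk;           // this chunk was needed; move on
            }
        }
        if (!shrunk)
            chunk /= 2;                 // no progress; try finer-grained deletions
    }
    return input;
}

int main()
{
    std::string failing = "lots of irrelevant text X more irrelevant text";

    // Should reduce the input down to just the failure-triggering "X".
    printf("Reduced failing input: \"%s\"\n", reduce(failing).c_str());
    return 0;
}


The real delta debugging algorithm is considerably more clever about which subsets it tries, but the spirit is the same: let the computer grind away at shrinking the failing case for as long as the test keeps failing.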

And there are some nice little tidbits sprinkled throughout the book that show that the author knows whereof he writes. For example, when discussing the need to re-run the bug script after fixing the bug (to demonstrate that you've actually fixed the bug), he notes that you may find that the script still fails even after your fix, in which case:

Being wrong about a correction should:

- Leave you astonished.

- Cause self-doubt, personal re-evaluation, and deep soul searching.

- Happen rarely.

Not everybody cares this much about fixing bugs, but it's wonderful to feel the passion of somebody who does. And then, later, when Zeller reflects on what you should do after you've fixed the bug, in a section titled Learning from Mistakes, he suggests:

  • Improve your test suite.

  • Test early, test often.

  • Review your code.

  • Improve your analysis tools.

  • Calibrate coverage metrics.

  • Consider mutation testing.


It's a great list, and it's even better to see people pushing the point that bug fixing doesn't stop with fixing the bug; you need to take that opportunity to try to figure out (a) how the bug got there in the first place, and (b) how come it took so long to find it.

I probably won't go forcing this book on people who haven't asked me about it, but I definitely didn't consider it to be wasted time. If you've got some time and energy, and you're interested in improving your practical programming skills, go check it out!

Monday, March 14, 2011

Vanity Fair profiles moot

This month's issue of Vanity Fair hits it out of the park! In addition to the Stuxnet article that I was reading over the weekend, there's a short article about Jack Dorsey, formerly of Twitter and now of Square.

And, then, tucked in-between a memoir from JFK Jr's ex-girlfriend and a memoir from William Styron's daughter, we find this: 4chan's Chaos Theory, all about moot, a.k.a. Christopher Poole, and his creation: 4chan.

Covering 4chan is venturing far, far afield for Vanity Fair, which usually prefers to discuss subjects such as celebrities, politics, or fashion (ideally, all three; cf. the article on JFK Jr mentioned above).

But here, in the pages of stately Vanity Fair, we find an article containing phrases such as:

Don't feed the troll.

/b/

IRL

Low-orbit ion cannon.

rick-rolling

Rule 1. Rule 2. and Rule 3.


This is strange and alien stuff for Vanity Fair to be covering, indeed! I'll venture that I'm one of a very short list of VF subscribers who even knew what these things were prior to taking this issue.

But it's interesting and topical stuff, covering issues of the day such as Wikileaks, the attack on Gawker, Anonymous vs. HBGary, and so on. The dark side of the Internet has finally met up with modern society; everyone has a Facebook nowadays and we're all part of whatever thing this is that the online world has become. In every society there is a range of participants, from boring citizens like me to the pranksters, clowns, cut-ups and outright dissenters and rebels. Bravo to Vanity Fair for dipping a toe into the chilly waters of 2011; what wonders will occur next?

Sunday, March 13, 2011

Bear died

Owsley "Bear" Stanley died this weekend in Australia. He was 76. Here's a brief story.

Bear's Choice is a wonderful album, and I've always thought it was under-appreciated. It came out a few years after the Grateful Dead's greatest work: Workingman's Dead and American Beauty, both of which should be on anyone's list of the top albums of all time, but Bear's Choice is a wonderful, alternate look at their music. It's more bluesy, but most importantly Bear's Choice is your best introduction to Ron "Pigpen" McKernan.

RIP, Bear, here's someone thinking of you.

Stuxnet article in April Vanity Fair

This month's issue of Vanity Fair magazine (yes, Vanity Fair!) has a long retrospective surveying what we know, and don't know, about the mysterious Stuxnet worm.

Although it's been 8 months since the worm became widely known, and although it has been extensively studied since that time, there is still a large amount of uncertainty about the source of the worm, the forces behind the worm, and what this all means for the future of malware.

In its breathless fashion, Vanity Fair tries to make the case. The opening paragraph claims that

Stuxnet is the new face of 21st-century warfare: invisible, anonymous, and devastating

and the article concludes by painting a bleak picture of war-by-computer:

The wars would often be secret, waged by members of anonymous, elite brain trusts, none of whom would ever have to look an enemy in the eye. For people whose lives are connected to the targets, the results could be as catastrophic as a bombing raid, but would be even more disorienting. People would suffer, but would never be certain about whom to blame.


Unfortunately, after all this time, Stuxnet is still better described by what we don't know than by what we do. I can't fault the author of the Vanity Fair article, Michael Gross, for lack of effort in trying to understand what's behind the worm. Gross travels to Moscow to meet with Eugene Kaspersky, travels to Hamburg to meet with Ralph Langner, travels to Berlin to meet with Frank Rieger, and gets stonewalled by all sorts of other people: "Mossad could not be reached ... C.I.A. spokesman declined to comment ... National Security Agency representative wrote 'I don't have any information' ... U.S. Cyber Command has nothing further." Gross's strongest quasi-government source is Richard Clarke, who has been out of government for almost a decade.

As Gross observes, there are still two crucial aspects to this story:

  1. Who developed this worm?

  2. Did the worm actually work as intended by its authors?



And there are other, less vital, but still fascinating aspects to the story, such as whether and how the worm was actually uncovered:

From the beginning, many have found it odd that, of all the security companies in the world, an obscure Belarusian firm should be the one to find this threat -- and odder still that the serial rebooting that gave Stuxnet away has been reported nowhere else, as far as most of the worm's top analysts have heard.


All in all, the Stuxnet story continues to be quite intriguing, and the Vanity Fair article does a good job of keeping us up to date with the overall progress of the story.

Saturday, March 12, 2011

Sometimes a rumor is just a rumor

Apple has now released the latest version of their Mac OS X developer tools: Xcode 4. It looks like there are some very interesting new features in this version:

  • Apple have switched from GCC to LLVM as their base compiler technology.

  • In a related move, the debugger infrastructure switches from GDB to LLDB.


There are a variety of other improvements; it looks like a major, powerful upgrade.

The Apple "what's new" pages are kind of brief, however, so I'll need to dig deeper as I get a chance. For starters, it's not obvious whether "Apple LLVM Compiler 2.0" is the same thing as Clang -- anybody know?

Secondly, the Clang project says that it achieves "GCC compatibility"; does this mean that we can mix-and-match GCC-compiled modules and Clang-compiled modules? Can GDB debug a Clang executable? Can Xcode 4 debug a GCC executable?

Lots to learn!

Meanwhile, however, all the coverage seems to want to talk about is Apple's decision to charge $4.99 for Xcode 4 through the Mac App Store.

Mac developers are an odd bunch. They had this same discussion last month when it came to Apple's decision to charge $0.99 for FaceTime.

In fact, they've been having this same discussion for years; here it is in 2007, regarding a $4.99 charge for 802.11n features in certain MacBook models.

So, over 5 years or so, there has grown this persistent rumor, which goes something like this: Apple doesn't really want to charge their devoted and loyal Apple fans these prices for their software; they really want to give it away for free; but the evil federal government has tied sweet Apple's hands and so they are forced to charge us in order to satisfy the Sarbanes-Oxley auditors.

The source of most of these rumors appears to date back to this post, where the author describes how he came up with the story:

From an Apple representative on the show floor ... which really just makes no sense to me at all, but the claim Apple’s making is that it _can’t_ give you the 802.11n-unlocking software for free. The reason: the Core 2 Duo Macs weren’t advertised as 802.11n-ready, and a little law called the Sarbanes-Oxley Act supposedly prohibits Apple from giving away an unadvertised new feature for one of its products. Hence, said the Apple rep, the company’s not distributing new _features_ in Software Update any more, just _bug fixes._ Because of Sarbanes-Oxley. If this is an accurate statement of Apple’s position, which as an attorney (but not one with any Sarbanes background) I find at least plausible, this is really crazy.


That's it; that's the complete source of this rumor, which has now made it to thousands of web sites in the typical Internet style of picking it up and passing it along.

It looks as though CNet made a bit of an attempt to verify this rumor; their page states:

Apple said it is required under generally accepted accounting principles to charge customers for the software upgrade. "The nominal distribution fee for the 802.11n software is required in order for Apple to comply with generally accepted accounting principles for revenue recognition, which generally require that we charge for significant feature enhancements, such as 802.11n, when added to previously purchased products," Fox said in a statement.


But where is this statement? CNet doesn't link to it, doesn't give us any hard citations, just says that they got this from "Lynn Fox, an Apple spokeswoman."

All my attempts to find such an actual statement, either as a press release, or as an actual page on Apple's own web site, were failures.

I think that there is another, much simpler explanation: Apple charges for their software because they can. Apple are a very large, very successful company, and they've got that way by building products that people want to buy, and selling those people additional applications to run on those products. It's a good business model, and I applaud them for it: build good products and people will buy them. You don't have to fabricate these bizarre stories to cast Apple as some sort of benevolent entity who is being forced by evil overlords to abuse the undeserving peasants.

Sometimes, a rumor is just a rumor.

Meanwhile, back to studying those LLVM compiler internals!

Friday, March 11, 2011

Do you like my caricature?

It's my new profile picture. Here's a larger version.

Inferring that coincidence may indeed be causality

Google's latest Chrome 10 browser apparently can ask the mothership whether my sluggish response from the server is related to a possible problem on the server side.

This morning I received the following error message while pressing "reload":

Oops! Google Chrome could not connect to news.yahoo.com

Other users are also experiencing difficulties connecting to this site, so you may have to wait a few minutes.


The Google knoweth all (or at least more than you might think!) ...

Wednesday, March 9, 2011

Language ambiguities and open source licensing

A discussion such as this one is the sort of thing that makes organizations run away in horror from open source software projects.

I've met a number of the people in this discussion, and they are genuinely trying to work within the framework provided to them by legal and social obligations, but the struggle to adhere to those rules is vividly apparent in the discussion.

Any time you take plain old ordinary Java engineers, and ask them what "implements" means, disaster awaits, for you may find yourself in a debate such as this:

Part of the confusion here is over the use of the word "implements" in the Java language vs. the use of the word "implements" in the Java compatibility rules. These two uses do *not* have the same meaning.


And if you tell those engineers that "compatible" does not have to do with "compatibility", they will just stare at you, aghast, when you write:

A JDBC device driver that meets certain additional requirements may be labeled as JDBC 4.0 compatible, but it's not required that all drivers do that, and such requirements have nothing to do with the Java compatibility requirements in the JDBC spec license.


As best I can understand it, the underlying issue is whether code which implements an interface defined in a JCP specification is or is not an implementation of the specification, and what it means to state, publicly, that your software is or is not "compatible" and does or does not "implement" a JSR.

The Apache Software Foundation has a page which tries to explain how they see some of these issues. The important sentence is:

Projects are free to implement whatever JSR a project community desires, as long as the specification license that you agree to allows open source implementations.

The Apache site also includes this related set of explanations on their JCP FAQ, including dense sections of jargon such as:

The JSPA requires expert group members to license their necessary IP to the spec lead, who in turn is obligated to license all necessary IP to any compatible implementation that passes the TCK.


Why look! It's those words again! "Compatible implementation"...

Sheesh.

It's frustrating to watch these sorts of discussions, and I feel powerless to understand what the real problem is and how to solve it. What I do know, however, is that nowadays this sort of thing is everywhere; you don't have to look very far to see immense amounts of effort being consumed in arguing over these sorts of licensing and intellectual property ownership details; couldn't we be putting that effort to more productive use?

Monday, March 7, 2011

Tail-padding reuse in GCC

Today, let me take you on a very deep dive into a corner of the C++ language.

First, have a look at the following program. Think about it before you read the rest of this article. You might even want to load it up in your compiler, but before you do so: what do you think it will print?


#include <stdio.h>

class Super {
    short s;
    char c1;
};

struct Sub : public Super
{
    char c2;
};

int main()
{
    printf("Size of Super is %d\n", (int) sizeof(Super));
    printf("Size of Sub is %d\n", (int) sizeof(Sub));
    return 0;
}



Do you see what the program is doing? It's declaring a sub-class which extends the super-class, and adds some additional state, and it is computing the size of the memory footprint that will be used for an instance of the super-class, and for an instance of the sub-class.

So, armed with that knowledge, and armed with the knowledge that a short generally requires 2 bytes, and a char generally requires 1 byte, what do you think the program prints?

The answer, at least on the various versions of GCC that I've tried on various platforms, is that the program prints:

Size of Super is 4
Size of Sub is 4


You might find this a surprising result; at least, I did. The sub-class Sub adds additional state to its super-class Super, so how can the two classes have the same sizeof?

The mystery may start to clear up in your mind a little bit if we add a couple more lines to the program, so that it now looks like this:


#include <stdio.h>

class Super {
    short s;
    char c1;
};

struct Sub : public Super
{
    char c2;
};

int main()
{
    Sub sub;

    printf("Size of Super is %d\n", (int) sizeof(Super));
    printf("Size of Sub is %d\n", (int) sizeof(Sub));
    printf("Offset of c2 in Sub is %d\n", (int) ((char *)&sub.c2 - (char *)&sub));
    return 0;
}


Now what do you think it prints?

For me, it prints:

Size of Super is 4
Size of Sub is 4
Offset of c2 in Sub is 3


This, too, is quite startling behavior! How can the offset of a field in the sub-class be a smaller value than the size of the super-class? Doesn't the first field in the sub-class always have to be laid out in memory strictly after the memory space used by the super-class?

It turns out that this behavior is something called "tail-padding reuse", I believe, and it dates back to GCC Version 3.2, and the adoption of a specification called the C++ Application Binary Interface, which specifies

the Application Binary Interface for C++ programs, that is, the object code interfaces between user C++ code and the implementation-provided system and libraries. This includes the memory layout for C++ data objects, including both predefined and user-defined data types, as well as internal compiler generated objects such as virtual tables.


The C++ ABI is a document intended for authors of compilers. Unfortunately, even for them, reading and understanding it is tricky business.

But the bottom line is that, for reasons of memory efficiency, in cases such as the code that I show in this sample program, the compiler is allowed to (in fact, is actually encouraged to) pack the data members of the sub-class closely together with the data members of the super class, eliminating the tail-padding that would otherwise have occurred, and shrinking the memory footprint of the resulting code.

Sounds great, doesn't it?

I think that, in general, it is great. Unless, that is, your code

  • assumes that when a sub-class adds state to a super-class, the size of the sub-class will always exceed the size of the super-class, or

  • assumes that the offset of the first field in the sub-class will be no less than the size of the super-class, or

  • assumes that you could safely write a bit of code such as this (a complete, runnable version appears just after this list):

    sub.c2 = 'b';
    memset(&sub, '\0', sizeof(Super));
    if( sub.c2 == 'b' ) { ... }

    and expect that the body of the if statement would be executed.
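
Here's that third example fleshed out into a complete program. On a compiler that packs c2 into the base class's tail padding, as the GCC builds discussed above do, the memset reaches right into c2:


#include <stdio.h>
#include <string.h>

class Super {
    short s;
    char c1;
};

struct Sub : public Super
{
    char c2;
};

int main()
{
    Sub sub;

    sub.c2 = 'b';

    // Intending to clear "just the Super portion" of sub:
    memset(&sub, '\0', sizeof(Super));

    // With tail-padding reuse, c2 lives within the first sizeof(Super)
    // bytes of sub, so the memset above can wipe it out.
    if (sub.c2 == 'b')
        printf("c2 survived the memset\n");
    else
        printf("c2 was clobbered by the memset\n");
    return 0;
}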



For most C/C++ programmers that I know, these are reasonable and widely-held assumptions.

But, clearly, they are not correct assumptions.

So, be careful out there, now that you know about tail-padding reuse in GCC!

Sunday, March 6, 2011

Droid

I've finally joined the smartphone crowd; this weekend I upgraded to a Droid X. It appears to be a beautiful device, very powerful and sophisticated. As with any such device, it is going to take me a while to become comfortable with it and learn how to use it.

The device is larger and heavier than my old phone, but not so much so as to be unwieldy, just noticeable.

As with most new gadgets nowadays, it comes with almost no instructions. There is a tiny leaflet that guides you through turning it on and getting to the home screen, and then there is a small built-in tutorial that takes you through the next few steps. In this modern world, it seems, the expectation is that you will learn the gadget yourself, by trial and error, operating it until you become comfortable with it.

Let's see if I still possess that level of patience...

Saturday, March 5, 2011

How can I get GCC to be much more verbose?

I've got a very strange situation, in which my GCC compiler (v. 4.2.1 I think) is acting oddly: it is apparently computing the wrong offset for a particular field of a particular structure.

Now, I'm sure this is actually not the compiler's fault, but is my own; somehow, I have made some exceedingly subtle mistake in my header file declarations, and the compiler is simply doing what I told it to. Remember: the last explanation that you choose should be "it's a bug in the compiler". It's vastly more likely that you have just given it invalid code to compile.

But, GCC isn't telling me that my code is invalid; it is compiling my code into something that produces a strange result.

And my attempts to reduce this to a smaller standalone example have not produced a similar behavior.

So how can I get GCC to tell me more about what it's doing, and why? Ideally, I'd like to have some sort of tracing capability where I can get GCC to dump out the exact code that it's actually compiling (after all the various preprocessor includes and macro definitions have been performed, etc.), together with a listing that shows how it is translating that source code into object code.

Then maybe I could see what's wrong.

Is there a GCC feature that does this?
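
In the meantime, a couple of partial measures I do know about: gcc -E (or -save-temps) will at least show me the preprocessed source that the compiler actually sees, and gcc -S will show the generated assembly. I can also ask the program itself what offsets the compiler chose, using offsetof. Here's a minimal sketch, with a made-up structure standing in for my real one:


#include <stdio.h>
#include <stddef.h>

// A made-up stand-in for the real structure from my header files;
// substitute the actual struct and the field whose offset looks wrong.
struct Example {
    short s;
    char c1;
    int i;
};

int main()
{
    printf("sizeof(struct Example)       = %d\n", (int) sizeof(struct Example));
    printf("offsetof(struct Example, s)  = %d\n", (int) offsetof(struct Example, s));
    printf("offsetof(struct Example, c1) = %d\n", (int) offsetof(struct Example, c1));
    printf("offsetof(struct Example, i)  = %d\n", (int) offsetof(struct Example, i));
    return 0;
}


If the offsets printed for the real structure don't match what the header file appears to declare, that's a strong hint that a conflicting declaration, a stray #pragma pack, or an unexpected #define is sneaking in ahead of mine.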

Here comes the saddle!

It's been a big week for the Oakland-San Francisco Bay Bridge replacement project. Yesterday, at long last, the final sections of the main support tower were put in place.

This SF Chronicle article describes the work, and has some great pictures:

Crews carefully lifted and then lowered the 105.6-foot-long, 500-ton tower leg into place about 6:30 p.m. and began bolting it to the third level. All told, it takes about 30 hours to install each leg.

When all four are bolted in place, by Friday, if weather cooperates, the tower will stand at 480 feet of its ultimate 525 feet.



The weather has indeed cooperated, and the fourth-segment work has indeed finished as planned: Major Milestone Reached in Bay Bridge Construction proclaims the local ABC news station, with some nice video of the tower's construction.

At the end of the ABC video, you can see some pictures of "the saddle", which is the last piece of the tower. The saddle sits on top of the tower and the suspension cables rest in the saddle.

Over the next two months, two major events will follow:

  1. The saddle is to be raised and mounted atop the tower, completing the main tower construction.

  2. The eastbound connection of the old bridge will be re-routed somewhat, so that the construction crews can start attaching the new bridge to the Oakland waterfront.



Up to this time, the new bridge has been standing disconnected, neither end joined to its final destination. Connecting the new bridge to the waterfront will be a big event. I suspect that Caltrans will schedule the big re-route operation over the long Memorial Day weekend, as it will be quite disruptive.

It's an astounding construction project. Engineers involved in the project seriously discuss topics such as the proposed 150-year lifetime of the bridge. Can they achieve this? Well, I won't be around to know, but if they do, that would be quite impressive, as the current bridge has lasted 75 years so far.

Thursday, March 3, 2011

Earthcaching: a geological variation on Geocaching

You probably already know about GeoCaching, which is a brilliant idea and a lot of fun.

Now, via Andrew Alden's marvelous Oakland Geology blog, I learned about a variation on GeoCaching, called EarthCaching.

EarthCaching is GeoCaching specifically focused on geology in the natural environment. Andrew Alden highlights several local earth cache sites, while the main EarthCache site has a short list of the Top Ten Earth Cache Sites In The World!

Say, I've been to a number of them: the Green Sands beach on the Big Island, the Thunder Eggs in Alabama, and the Merced River in Yosemite! I've flown over the Great Salt Lake, and I've been to Tuolumne Meadows. Haven't made it to New Zealand, Germany, or Portugal yet, though...

Atlassian's Room To Read philanthropy

Congratulations to the folks over at Atlassian on reaching a significant milestone with their Room To Read charity program.

I've used a variety of Atlassian software over the last 5 years, and it's very good. I think it's great that they've figured out this way to support a worthy charity with some of their success. Good work!

The Narwhal and the Banshee

As Ubuntu moves toward next month's Natty Narwhal release, an interesting discussion has been occurring regarding the Banshee music player.

The discussion gained a lot of attention with this essay in NetworkWorld: Banshee Amazon Store disabled in Ubuntu 11.04 by Canonical, pointing to a post by one of the Banshee developers, Gabriel Burt, where we read that:

As maintainers of the Banshee project, we have opted unanimously to decline Canonical's revenue sharing proposal, so that our users who choose the Amazon store will continue supporting GNOME to the fullest extent.


It turns out the discussion is about money, and Luis Villa offered some ideas about how to handle the money:

A sliding scale for revenue sharing can address this by giving one party a lot of the early, smaller revenue, and the other party a lot of the later, larger revenue.


Phil Bull pointed out that there are other frustrations, not just the money:

For starters, some people in the GNOME community moan about how Ubuntu doesn't pull its weight upstream. They then make it difficult for Ubuntu-y folks to contribute things upstream. People within the Ubuntu community, Canonical employees included, have tried to make significant contributions and have been knocked back on several occasions, in most cases not for any particularly good reason I would judge. I've even heard stories about Canonical having to upstream patches via a third party because a GNOME maintainer wouldn't accept (identical) patches from them! (I know; citation needed.) There is an anti-Ubuntu (or at least anti-Canonical) sentiment in parts of the GNOME community.


Bull's essay points to Greg DeKoenigsberg's essay from last summer about the problematic relationship between Canonical and other parts of the Linux community, where he says that:

One of the most irritating things about working at Red Hat was watching Canonical take credit for code that Red Hat engineers wrote.


Meanwhile, Mark Shuttleworth acknowledges that it is, indeed, at least partly about the money, in his essay Mistakes made, lessons learned, a principle clarified and upheld:

Money is particularly contentious in a community that mixes volunteer and paid effort, we should have anticipated and been extra careful to have the difficult conversations that were inevitable up front and in public, at UDS, when we were talking about the possibility of Banshee being the default media player in Ubuntu. We didn’t, and I apologise for the consequential confusion and upset caused.


And Gabriel Burt confirms that, indeed, the discussion continues:

Canonical asked the Banshee maintainers to join a conference call about an hour ago. They announced their new plan, calling past proposals mistakes


As for myself, I've been a Ubuntu user for four years, and I've been a member of the Apache open source community for closer to 6 years. There are indeed complicated issues here, and it's no surprise that the various communities are struggling with what it all means and how to work together. I was lucky that my interactions with my open source community were entirely free of money concerns, as I was neither paid nor did I pay for the open source software that I was contributing to.

And, I'd love to have a new music tool in Ubuntu, as the only reason that I keep my old Windows XP system around is that I haven't found a Ubuntu package that can successfully handle podcast subscriptions for my iPod. Can Banshee do that well?

It's an interesting debate, with lots of well-thought-out and well-presented ideas about the problems.

Wednesday, March 2, 2011

Did Yahoo Mail change its opinion of Chrome recently?

I typically read my Yahoo email via Google Chrome on Ubuntu Linux, which is my primary home browser platform of late.

Until recently, Yahoo email would give me a warning, but then offered an "if you wish to proceed to use the all-new Yahoo email anyway, press this button" option.

And pressing the button would indeed take me through to the Yahoo email screen, which seemed to work fine with Google's Chrome browser.

However, starting today, I get a screen that says:

Sorry, the all-new Yahoo! Mail does not support your browser.

You can either download a compatible browser or proceed to Yahoo! Mail Classic.


And it flat-out refuses to let me use the new Yahoo email. Instead, the only choice it gives me is to use Yahoo email classic.

Which is fine; Yahoo email classic works just fine for reading and writing mail.

But still, I wonder what changed? I was able to use the new Yahoo email with Google Chrome as recently as yesterday morning, and I don't recall changing anything relevant since then.

Was this a change on Yahoo's part? Or on Google's? Anyone know?

Update: As a helpful commenter pointed out, I appear to have recently upgraded to Chrome 10 without paying much attention to it, and this may be what caused Yahoo to fail to recognize my browser.

Mac OS X 10.7 "Lion" is coming soon

Now that my primary work machine is a Mac, I'm (slowly) starting to be more aware of the features and power of Mac OS X, which is a very sophisticated operating system nowadays.

Here's a nice preview of a number of the new features that will be coming with Mac OS X 10.7 "Lion", in beta testing now and with an anticipated general release this summer: The 10 Best Things About OS X Lion 10.7 Developer Preview

Tuesday, March 1, 2011

Matt Blaze takes a brave stand

I think that Professor Blaze makes some excellent points, and has the issue exactly right when he says:


These organizations, rooted in a rapidly disappearing print-based publishing economy, believe that they naturally "own" the writings that (unpaid) authors, editors and reviewers produce. They insist on copyright control as a condition of publication, arguing that the sale of conference proceedings and journal subscriptions provides an essential revenue stream that subsidizes their other good works. But this income, however well it might be used, has evolved into an ill-gotten entitlement. We write scientific papers first and last because we want them read. When papers were disseminated solely in print form it might have been reasonable to expect authors to donate the copyright in exchange for production and distribution. Today, of course, this model seems, at best, quaintly out of touch with the needs of researchers and academics who no longer desire or tolerate the delay and expense of seeking out printed copies of far-flung documents. We expect to find it on the open web, and not hidden behind a paywall, either.


I've enjoyed studying Professor Blaze's work for many years, and I'm hoping to continue to do so in the future, so I hope that others hear his well-written criticisms and act on them.

Update: Professor Steve Bellovin has written a great essay about some recent experiences he had doing research, and how he experiences policies like these.

It drives me absolutely crazy that there are seminal publications in the computer science field which are now 35 or more years old, and yet they cannot be read by young students in the field without paying ridiculous fees to these societies. Here's one of my favorite examples: The design and implementation of INGRES, by Professors Stonebraker, Held, and Wong of U.C. Berkeley, was published in 1976. This work was done at a public university, funded by public money (yes, my own taxes), and yet the crummy Association for Computing Machinery demands $15 to allow anyone to read this article.

Grrr...