Sunday, September 25, 2016

Stuff I'm reading, Harvest Festival edition

Take a week or two off from the Internet, and look what happens...

  • Postmortem of the Firefox (and Tor Browser) Certificate Pinning Vulnerability Rabbit Hole
    Certificate Pinning is the process of forcing a browser to only use certain certificates in the validation of a TLS connection to a certain domain. This is done either by a static certificate pin list included with the browser or by using a standard called HTTP Public Key Pinning (HPKP), which allows a site to push down its own pins on the first connection to it.

    Mozilla uses Certificate Pinning to protect connections to addons.mozilla.org (AMO), which is used for the updates of most Firefox extensions. The purpose of pinning this domain is to prevent a rogue CA from being able to generate a certificate for AMO that could then be used to perform a man-in-the-middle (MiTM) attack on the extension update process.

    ...

    The vulnerability here is that Mozilla failed to set the expiration date for the static pins and HPKP pre-load list long enough into the future to last until the next release of Firefox.
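
    As an aside, a pin (whether shipped statically or delivered via HPKP) is just the base64-encoded SHA-256 hash of a certificate's Subject Public Key Info, and every pin carries an expiry (the header's max-age, or a build-time expiration date for the static list), which is exactly the knob that lapsed here. A minimal sketch of computing a pin with the Python cryptography package (illustrative only, not Mozilla's tooling; the file name is made up):

      # Derive an HPKP "pin-sha256" value from a PEM certificate.
      import base64
      import hashlib

      from cryptography import x509
      from cryptography.hazmat.primitives import serialization

      with open("addons.mozilla.org.pem", "rb") as f:  # hypothetical file
          cert = x509.load_pem_x509_certificate(f.read())

      # Pins hash the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
      spki = cert.public_key().public_bytes(
          encoding=serialization.Encoding.DER,
          format=serialization.PublicFormat.SubjectPublicKeyInfo,
      )
      pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()

      # A site serving HPKP would then send something like:
      #   Public-Key-Pins: pin-sha256="<pin>"; pin-sha256="<backup>"; max-age=5184000
      print(f'pin-sha256="{pin}"')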

  • Someone Is Learning How to Take Down the Internet
    Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet. These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. We don't know who is doing this, but it feels like a large nation state.
  • KrebsOnSecurity Hit With Record DDoS
    The attack began around 8 p.m. ET on Sept. 20, and initial reports put it at approximately 665 Gigabits of traffic per second. Additional analysis on the attack traffic suggests the assault was closer to 620 Gbps in size, but in any case this is many orders of magnitude more traffic than is typically needed to knock most sites offline.

    Martin McKeay, Akamai’s senior security advocate, said the largest attack the company had seen previously clocked in earlier this year at 363 Gbps. But he said there was a major difference between last night’s DDoS and the previous record holder: The 363 Gbps attack is thought to have been generated by a botnet of compromised systems using well-known techniques allowing them to “amplify” a relatively small attack into a much larger one.

    In contrast, the huge assault this week on my site appears to have been launched almost exclusively by a very large botnet of hacked devices.
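
    Rough arithmetic shows why that difference matters. The numbers below are purely illustrative (a hypothetical 50x amplification factor and 5 Mbps of upload per device), not figures from the article:

      # Illustrative arithmetic only: the amplification factor and per-device
      # upload rate are hypothetical round numbers.
      attack_gbps = 620

      # Reflection/amplification: a small spoofed request elicits a much larger
      # response, so the attacker's own bandwidth is a fraction of the flood.
      amplification_factor = 50
      attacker_gbps = attack_gbps / amplification_factor
      print(f"Amplified: ~{attacker_gbps:.1f} Gbps of spoofed requests suffices")

      # Direct botnet traffic: every bit is pushed by a compromised device, so
      # the attack size is bounded by devices x per-device upload rate.
      per_device_mbps = 5
      devices = attack_gbps * 1000 / per_device_mbps
      print(f"Direct: ~{devices:,.0f} devices at {per_device_mbps} Mbps each")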

  • DDoS Mitigation Firm Has History of Hijacks
    In my follow-up report on their arrests, I noted that vDOS itself had gone offline, and that automated Twitter feeds which report on large-scale changes to the global Internet routing tables observed that vDOS’s provider — a Bulgarian host named Verdina[dot]net — had been briefly relieved of control over 255 Internet addresses (including those assigned to vDOS) as the direct result of an unusual counterattack by BackConnect.
  • "Defensive" BGP hijacking?
    After the DDoS attacks subsided, the attackers started to harass us by calling in using spoofed phone numbers. Curious as to what this was all about, we fielded various calls which allowed us to ascertain who was behind the attacks by correlating e-mails with the information they provided over the phone. Throughout the day and late into the night, these calls and threats continued to increase in number. During these calls we noticed an increasing trend of them bringing up personal information about me and my employees. At this point I personally filed a police report in preparation for a possible SWATing attempt. As they continued to harass our company, more and more red flags indicated that I would soon be targeted. This was the point where I decided I needed to go on the offensive to protect myself, my partner, visiting family, and my employees. The actions proved to be extremely effective, as all forms of harassment and threats from the attackers immediately stopped. In addition to our main objective, we were able to collect intelligence on the actors behind the botnet as well as identify the attack servers used by the booter service.
  • An Important Message About Yahoo User Security
    Based on the ongoing investigation, Yahoo believes that information associated with at least 500 million user accounts was stolen and the investigation has found no evidence that the state-sponsored actor is currently in Yahoo’s network.
  • How Dropbox securely stores your passwords
    We rely on bcrypt as our core hashing algorithm with a per-user salt and an encryption key (or global pepper), stored separately. Our approach differs from basic bcrypt in a few significant ways.

    First, the plaintext password is transformed into a hash value using SHA512. This addresses two particular issues with bcrypt. Some implementations of bcrypt truncate the input to 72 bytes, which reduces the entropy of the passwords. Other implementations don’t truncate the input and are therefore vulnerable to DoS attacks because they allow the input of arbitrarily long passwords. By applying SHA, we can quickly convert really long passwords into a fixed length 512 bit value, solving both problems.

    Next, this SHA512 hash is hashed again using bcrypt with a cost of 10, and a unique, per-user salt. Unlike cryptographic hash functions like SHA, bcrypt is designed to be slow and hard to speed up via custom hardware and GPUs. A work factor of 10 translates into roughly 100ms for all these steps on our servers.
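
    A rough sketch of that layering in Python, using the bcrypt and cryptography packages. The base64 encoding step, the key handling, and the AES-GCM wrap for the global pepper are my assumptions for illustration, not Dropbox's production code:

      # Sketch of SHA-512 -> bcrypt(cost=10, per-user salt) -> global pepper.
      import base64
      import hashlib
      import os

      import bcrypt                                                   # pip install bcrypt
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

      # Hypothetical global pepper; in practice it is stored away from the database.
      PEPPER = AESGCM.generate_key(bit_length=256)

      def hash_password(plaintext: str) -> bytes:
          # 1. SHA-512 pre-hash collapses arbitrarily long input to a fixed size,
          #    sidestepping bcrypt's 72-byte truncation and long-password DoS.
          #    Base64 keeps the value free of NUL bytes, which some bcrypt
          #    implementations reject; bcrypt still reads only the first 72 of
          #    the 88 encoded bytes, i.e. 432 bits of the digest.
          prehash = base64.b64encode(hashlib.sha512(plaintext.encode()).digest())
          # 2. bcrypt with a unique per-user salt and a work factor (cost) of 10.
          hashed = bcrypt.hashpw(prehash, bcrypt.gensalt(rounds=10))
          # 3. Wrap the bcrypt output with the global pepper so a database-only
          #    leak is not enough to start cracking (AES-GCM here is illustrative).
          nonce = os.urandom(12)
          return nonce + AESGCM(PEPPER).encrypt(nonce, hashed, None)

      def check_password(plaintext: str, stored: bytes) -> bool:
          nonce, ciphertext = stored[:12], stored[12:]
          hashed = AESGCM(PEPPER).decrypt(nonce, ciphertext, None)
          prehash = base64.b64encode(hashlib.sha512(plaintext.encode()).digest())
          return bcrypt.checkpw(prehash, hashed)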

  • Introducing the GitHub Load Balancer
    We set out to design a new director tier that was stateless and allowed both director and proxy nodes to be gracefully removed from rotation without disruption to users wherever possible. Some users live in countries with less-than-ideal internet connectivity, and it was important to us that long-running clones of reasonably sized repositories, ones that would complete within a reasonable time limit, would not fail during planned maintenance.

    The design we settled on, and now use in production, is a variant of Rendezvous hashing that supports constant-time lookups. We start by storing each proxy host and assigning it a state. These states handle the connection draining aspect of our design goals and will be discussed further in a future post. We then generate a single, fixed-size forwarding table and fill each row with a set of proxy servers using the ordering component of Rendezvous hashing. This table, along with the proxy states, is sent to all director servers and kept in sync as proxies come and go. When a TCP packet arrives on the director, we hash the source IP to generate a consistent index into the forwarding table. We then encapsulate the packet inside another IP packet (actually Foo-over-UDP) destined for the internal IP of the proxy server, and send it over the network. The proxy server receives the encapsulated packet, decapsulates it, and processes the original packet locally. Any outgoing packets use Direct Server Return, meaning packets destined for the client egress directly to the client, completely bypassing the director tier.
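
    The post doesn't publish the exact table construction, but the shape of the lookup is easy to model. Here is a toy version in Python; the table size, hash function, and names are my own assumptions, not GitHub's implementation:

      # Toy model of a rendezvous-hashed forwarding table; constants are made up.
      import hashlib

      TABLE_SIZE = 256  # hypothetical fixed-size forwarding table

      def score(key, proxy):
          # Rendezvous ("highest random weight") hashing: score each
          # (key, proxy) pair and order proxies by that score.
          return hashlib.sha256(f"{key}|{proxy}".encode()).digest()

      def build_table(proxies):
          # Each row holds a full ordering of the proxy set, so every director
          # holding the same table (plus proxy states) agrees on routing.
          return [
              sorted(proxies, key=lambda p: score(row, p), reverse=True)
              for row in range(TABLE_SIZE)
          ]

      def pick_proxy(table, src_ip):
          # Constant-time lookup on the director: hash the client source IP into
          # a row and take that row's primary proxy; the packet is then
          # encapsulated and forwarded to it.
          row = int.from_bytes(hashlib.sha256(src_ip.encode()).digest()[:4], "big") % TABLE_SIZE
          return table[row][0]

      table = build_table(["proxy-a", "proxy-b", "proxy-c"])
      print(pick_proxy(table, "203.0.113.7"))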

  • Oracle's Cloudy Future
    Consider your typical Chief Information Officer in the pre-Cloud era: for various reasons she has bought in to some aspect of the Microsoft stack (likely Exchange). So, in order to support Exchange, the CIO must obviously buy Windows Server. And Windows Server includes Active Directory, so obviously that will be the identity service. However, now that the CIO has parts of the Microsoft stack in place, she is likely to be much more inclined to go with other Microsoft products as well, whether that be SQL Server, Dynamics CRM, SharePoint, etc. True, the Microsoft product may not always be the best in a vacuum, but no CIO operates in a vacuum: maintenance and service costs are a huge concern, and there is a lot to be gained by buying from fewer vendors rather than more. In fact, much of Microsoft’s growth over the last 15 years can be traced to Ballmer’s cleverness in exploiting this advantage through both new products and also new pricing and licensing agreements that heavily incentivized Microsoft customers to buy ever more from the company.

    As noted above, this was the exact same strategy as Oracle. However, enterprise IT decision-making is undergoing dramatic changes: first, without the need for significant up-front investment, there is much less risk in working with another vendor, particularly since trials usually happen at the team or department level. Second, without ongoing support and maintenance costs there is much less of a variable cost argument for going with one vendor as well. True, that leaves the potential hassle of incorporating those fifty different vendors Ellison warned about, but it also means that things like the actual quality of the software and the user experience figure much more prominently in the decision-making — and the point about team-based decision-making makes this even more important, because the buyer is also the user.

    Oracle’s lock on its existing customers, including the vast majority of the largest companies and governments in the world, remains very strong. And to that end its strategy of basically replicating its on-premise business in the cloud (or even moving its cloud hardware on-premise) makes total sense; it’s the same sort of hybrid strategy that Microsoft is banking on. Give their similarly old-fashioned customers the benefit of reducing their capital expenditures (increasing their return on invested capital) and hopefully buy enough time to adapt to a new world where users actually matter and flexible and focused clouds are the best way to serve them.

  • Neither Uber nor Lyft believe sharing is the future
    Last January, Lyft announced a partnership with General Motors to launch an on-demand network of autonomous vehicles. If you live in San Francisco or Phoenix, you may have seen these cars on the road, and within five years a fully autonomous fleet of cars will provide the majority of Lyft rides across the country.

    Tesla CEO Elon Musk believes the transition to autonomous vehicles will happen through a network of autonomous car owners renting their vehicles to others. Elon is right that a network of vehicles is critical, but the transition to an autonomous future will not occur primarily through individually owned cars. It will be both more practical and appealing to access autonomous vehicles when they are part of Lyft’s networked fleet.

    See that? No individual ownership. No sharing. Lyft’s vision is for large fleet ownership. Explicitly corporate. Explicitly non-sharing.

  • The Art of a Pull Request
    Code review is almost always performed by a couple (or more) of Kibana core engineers, but it’s important to have our review process out there so that there are no surprises. We follow these guidelines both when creating a Pull Request ourselves, as well as when someone external to the organization submits a PR. Having a single process for everyone results in better quality and more consistency across all Pull Requests. Sometimes this can be a problem when you’re trying to be friendly with an external PR, as you may feel inclined to lower the bar just for that one PR so that the contributor is happier, but being clear on the rules benefits everyone in the end.
  • Oracle Announces Jigsaw Delays Push Java 9 Launch Date to 2017
    it’s still on the roadmap for Java 9. The bad news is that we’ll have to wait until 2017. The target date for general availability, originally September 2016, is now set for March 2017.

    Project Jigsaw’s goal is to make Java modular and break the JRE into interoperable components. Once it’s finished, it would allow creating a scaled-down runtime Jar (rt.jar) customised to the components a project actually needs. The JDK 7 and JDK 8 rt.jars have about 20,000 classes that are part of the JDK even if many of them aren’t really being used in a specific environment. The motivation behind this is to make Java easily scalable to small computing devices, improve security and performance, and mainly make it easier for developers to construct and maintain libraries.

  • IMF: An Open Standard with Open Tools
    A few years ago we discovered the Interoperable Master Format (IMF), a standard created by the Society of Motion Picture and Television Engineers (SMPTE). The IMF framework is based on the Digital Cinema standard of component based elements in a standard container with assets being mapped together via metadata instructions. By using this standard, Netflix is able to hold a single set of core assets and the unique elements needed to make those assets relevant in a local territory. So for a title like Narcos, where the video is largely the same in all territories, we can hold the Primary AV and the specific frames that are different for, say, the Japanese title sequence version. This reduces duplication of assets that are 95% the same and allows us to hold that 95% once and piece it together with the 5% differences needed for a specific use case.
  • The GitHub GraphQL API
    GraphQL is a querying language developed by Facebook over the course of several years. In essence, you construct your request by defining the resources you want. You send this via a POST to a server, and the response matches the format of your request.

    ...

    You can see that the keys and values in the JSON response match right up with the terms in the query string.
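
    Mechanically it is just an HTTP POST of a JSON body containing the query. A small sketch against GitHub's endpoint with the requests package (GITHUB_TOKEN is a placeholder, and the fields queried are only an example):

      # Minimal GraphQL request against the GitHub API.
      import os

      import requests

      query = """
      query {
        viewer {
          login
          repositories(last: 3) {
            nodes { name }
          }
        }
      }
      """

      resp = requests.post(
          "https://api.github.com/graphql",
          json={"query": query},
          headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
      )

      # The response mirrors the shape of the request, e.g.:
      # {"data": {"viewer": {"login": "...", "repositories": {"nodes": [...]}}}}
      print(resp.json())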

  • Fine-grained Language Composition
    Programming languages therefore resemble islands: each language defines its own community and culture, and implements its own software. Even if one wants to travel to another island, we often find ourselves trapped, and unable to do so. The only real exceptions to this are languages which run on a single Virtual Machine (VM), most commonly a Java VM. These days, many languages have JVM implementations, but it can sometimes seem that they've simply swapped their expectations of an FFI from C to Java: non-Java JVM languages don't seem to talk to each other very much.

    We're so used to this state of affairs that it's difficult for us to see the problems it creates. Perhaps the most obvious is the huge effort that new languages need to expend on creating libraries which already exist in other languages, an effort far greater than the core language or compiler requires. Less obvious is that the initial language used to write a system becomes a strait-jacket: we almost always stick with our original choice, even if a better language comes along, even if no-one is trained in the original language, or even if it simply becomes difficult to run the original language.

  • Zero-Knowledge: Definitions and Theory
    You can’t understand where the following definitions come from without the crucial distinction between information and knowledge from the computer scientist’s perspective. Information concerns how many essential bits are encoded in a message, and nothing more. In particular, information is not the same as computational complexity, the amount of computational resources required to actually do something. Knowledge, on the other hand, refers to the computational abilities you gain with the information provided.

    Here’s an example in layman’s terms: say I give you a zero-knowledge proof that cancer can be cured using a treatment that takes only five days. Even though I might thoroughly convince you my cure works by exhibiting patients with vanishing tumors, you’ll still struggle to find a cure. This is despite the fact that there might be more bits of information relayed in the messages sent during my “zero-knowledge proof” than the number of bits needed to describe the cure! On the other hand, every proof that 1+1=2 is a zero-knowledge proof, because it’s not computationally difficult to prove this on your own in the first place. You don’t gain any new computational powers even if I tell you flat out what the proof is.

  • How I learned to program
    When I look at the bad career-related stuff I’ve experienced, almost all of it falls into one of two categories: something obviously bad that was basically unavoidable, or something obviously bad that I don’t know how to reasonably avoid, given limited resources. I don’t see much to learn from that. That’s not to say that I haven’t made and learned from mistakes. I’ve made a lot of mistakes and do a lot of things differently as a result of mistakes! But my worst experiences have come out of things that I don’t know how to prevent in any reasonable way.

    This also seems to be true for most people I know. For example, something I’ve seen a lot is that a friend of mine will end up with a manager whose view is that managers are people who dole out rewards and punishments (as opposed to someone who believes that managers should make the team as effective as possible, or someone who believes that managers should help people grow). When you have a manager like that, a common failure mode is that you’re given work that’s a bad fit, and then maybe you don’t do a great job because the work is a bad fit. If you ask for something that’s a better fit, that’s refused (why should you be rewarded with doing something you want when you’re not doing good work; instead you should be punished by having to do more of this thing you don’t like), which causes a spiral that ends in the person leaving or getting fired. In the most recent case I saw, the firing was a surprise to both the person getting fired and their closest co-workers: my friend had managed to find a role that was a good fit despite the best efforts of management; when management decided to fire my friend, they didn’t bother to consult the co-workers on the new project, who thought that my friend was doing great and had been doing great for months!

    I hear a lot of stories like that, and I’m happy to listen because I like stories, but I don’t know that there’s anything actionable here. Avoid managers who prefer doling out punishments to helping their employees? Obvious but not actionable.

  • The MIT License, Line by Line
    If you’re involved in open-source software and haven’t taken the time to read the license from top to bottom—it’s only 171 words—you need to do so now. Especially if licenses aren’t your day-to-day. Make a mental note of anything that seems off or unclear, and keep trucking. I’ll repeat every word again, in chunks and in order, with context and commentary. But it’s important to have the whole in mind.

    ...

    To fill the gap between legally effective, well-documented grants of rights in contributions and no paper trail at all, some projects have adopted the Developer Certificate of Origin, a standard statement contributors allude to using Signed-Off-By metadata tags in their Git commits. The Developer Certificate of Origin was developed for Linux kernel development in the wake of the infamous SCO lawsuits, which alleged that chunks of Linux’ code derived from SCO-owned Unix source. As a means of creating a paper trail showing that each line of Linux came from a contributor, the Developer Certificate of Origin functions nicely. While the Developer Certificate of Origin isn’t a license, it does provide lots of good evidence that those submitting code expected the project to distribute their code, and for others to use it under the kernel’s existing license terms.

  • Palmer Luckey denies writing blog posts slamming Clinton, says he's not voting for Trump
    The posts by "NimbleRichMan" — which Luckey now says he didn't write, but he specifically confirmed with The Daily Beast as his — were mostly found on Reddit's meme-centric unofficial Donald Trump subreddit, dubbed "The Donald." Many have been deleted (which Luckey also claims to not have done), but can still be found archived elsewhere.
  • The Era of Consumer Deception: Why Do We Tolerate Such Price Opacity?
    just the other day, while I was in the midst of congratulating myself for avoiding the Hertz $10/gallon refueling fee, I looked on the receipt and saw a per-mile fee that nearly doubled the cost of my rental — when was the last time a rental car didn’t have unlimited miles?

    It’s a cat-and-mouse game and companies keep getting better at playing it.

  • A conversation with Aston Motes, Dropbox’s first employee.
    Drew and Arash are super, super smart and great engineers. To this day, I’ll hold Drew as one of the best Windows programmers I have ever met. And Arash is just sick at all things backend. They were this perfect pairing. So, as far as the team went, I was certain that these guys were going to be great people to work with.

    For the longest time I had seen product demos that were just video. But once I played with the product I was like, “Oh, this thing works. I really like this product and it would be awesome to get a chance to work with these guys, on this thing.” It was a product that, as an MIT student, matched my expectations for how something should work.

  • The Clean Architecture
    The overriding rule that makes this architecture work is The Dependency Rule. This rule says that source code dependencies can only point inwards. Nothing in an inner circle can know anything at all about something in an outer circle. In particular, the name of something declared in an outer circle must not be mentioned by the code in an inner circle. That includes functions, classes, variables, and any other named software entity.

    By the same token, data formats used in an outer circle should not be used by an inner circle, especially if those formats are generated by a framework in an outer circle. We don’t want anything in an outer circle to impact the inner circles.
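
    A tiny Python illustration of the Dependency Rule (all names invented): the entity and use case in the inner circle depend only on an abstraction they own, and the outer circle plugs in the concrete detail.

      # Inner circle: entities and use cases know nothing about outer layers.
      from abc import ABC, abstractmethod
      from dataclasses import dataclass

      @dataclass
      class Order:                      # entity
          order_id: int
          total_cents: int

      class OrderRepository(ABC):       # boundary interface owned by the inner circle
          @abstractmethod
          def get(self, order_id: int) -> Order: ...

      class GetOrderTotal:              # use case: depends only on the abstraction
          def __init__(self, repo: OrderRepository):
              self.repo = repo

          def execute(self, order_id: int) -> int:
              return self.repo.get(order_id).total_cents

      # Outer circle: a concrete adapter (an in-memory stand-in for a database or
      # web framework) implements the inner interface; dependencies point inwards.
      class InMemoryOrderRepository(OrderRepository):
          def __init__(self, rows):
              self.rows = rows

          def get(self, order_id: int) -> Order:
              return self.rows[order_id]

      use_case = GetOrderTotal(InMemoryOrderRepository({1: Order(1, 4200)}))
      print(use_case.execute(1))        # 4200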

  • The cypherpunk revolution: How the tech vanguard turned public-key cryptography into one of the most potent political ideas of the 21st century.
    Public-key cryptography made it possible to keep a message private: The sender would scramble the clear text with a key that the recipient had “publicly revealed.” Then the recipient, and only the recipient, could use the matching private key to unscramble the message’s ciphertext. But the new technique could do even more. Public-key cryptography made it possible to “sign” a message electronically, by doing exactly the opposite: having the sender encipher a signature with a privately held encryption key, thus enabling the recipient to verify the message’s origin by deciphering that signature with the sender’s publicly revealed key, thereby proving that only one party, the legitimate sender, could have scrambled the message’s signature. Everybody could decipher and read the signature, but in only one way: with the sender’s public key.
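
    That is the classic textbook framing; modern libraries expose signing and verification as dedicated operations rather than a literal "encrypt with the private key". A short sketch with the Python cryptography package:

      # RSA sign/verify: only the private key can sign, anyone can verify.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import padding, rsa

      private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      public_key = private_key.public_key()

      message = b"attack at dawn"

      # Only the holder of the private key can produce this signature...
      signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

      # ...but anyone holding the public key can check it.
      try:
          public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
          print("signature valid")
      except InvalidSignature:
          print("signature invalid")
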
  • N+1 queries are hardly a feature
    In a word, the idea that having a larger number of simpler queries is better is nonsense. In particular, it completely ignores the cost of going to the database. Sure, a more complex query may require the database to do additional work, and if you are using caching, then you’ll not have the data in the cache in a neat “cache entry per row” form. But in practice, this leads to applications doing hundreds of queries per page view, absolute reliance on the cache, and tremendous cost at startup.
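
    The pattern is easy to see with plain SQL. A self-contained sqlite3 sketch (schema and data made up) showing N+1 round trips versus a single join:

      # N+1 vs. a single join, using an in-memory sqlite3 database.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
          CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
          CREATE TABLE comments (id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT);
          INSERT INTO posts VALUES (1, 'first'), (2, 'second');
          INSERT INTO comments VALUES (1, 1, 'hi'), (2, 1, 'hello'), (3, 2, 'hey');
      """)

      # N+1: one query for the posts, then one more round trip per post.
      posts = db.execute("SELECT id, title FROM posts").fetchall()
      for post_id, title in posts:
          comments = db.execute(
              "SELECT body FROM comments WHERE post_id = ?", (post_id,)
          ).fetchall()

      # Single query: one round trip fetches the same data with a join.
      rows = db.execute("""
          SELECT p.id, p.title, c.body
          FROM posts p LEFT JOIN comments c ON c.post_id = p.id
          ORDER BY p.id
      """).fetchall()
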
  • Building Sourcegraph, a large-scale code search & cross-reference engine in Go
    Our goal, based on our experience with similar systems, was to avoid complexity and repetition. Large web apps can easily become complex because they almost always need to twist the abstractions of whatever framework you’re using. And “service-oriented” architectures can require lots of repetitive code because not only do you have a client and server implementation for each service, but you often find yourself representing the same concepts at multiple levels of abstraction.
  • p-values in software engineering
    A commonly encountered cut-off value is 0.05 (sometimes written as 5%).

    Where did this 0.05 come from? It was first proposed in the 1920s by Ronald Fisher. Fisher’s Statistical Methods for Research Workers and later Statistical Tables for Biological, Agricultural, and Medical Research had a huge impact, and a p-value cut-off of 0.05 became enshrined as the magic number.

    To quote Fisher: “Either there is something in the treatment, or a coincidence has occurred such as does not occur more than once in twenty trials.”

    Once in twenty was a reasonable level for an event occurring by chance (rather than as a result of some new fertilizer or drug) in an experiment in biological, agricultural, or medical research in the 1900s. Is it a reasonable level for chance events in software engineering?
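
    To make that concrete for software engineering: a p-value for a benchmark comparison answers how often a difference in means at least this large would show up if the two builds were actually identical. A permutation-test sketch with made-up timings:

      # Permutation test: how often would a mean difference at least this large
      # arise by chance? The timings below are invented for illustration.
      import random
      from statistics import mean

      old = [103, 98, 101, 107, 99, 104, 102, 100]   # ms per run, old build
      new = [97, 95, 99, 96, 101, 94, 98, 96]        # ms per run, new build

      observed = mean(old) - mean(new)
      pooled = old + new

      random.seed(0)
      trials = 10_000
      extreme = 0
      for _ in range(trials):
          random.shuffle(pooled)
          a, b = pooled[:len(old)], pooled[len(old):]
          if mean(a) - mean(b) >= observed:
              extreme += 1

      p_value = extreme / trials
      print(f"observed speedup {observed:.1f} ms, p = {p_value:.4f}")
      # A cut-off of 0.05 just says: accept being fooled by chance one time in twenty.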

  • Converting between IFPUG & COSMIC function point counts
    Replication, repeating an experiment to confirm the results of previous experiments, is not a common activity in software engineering. Everybody wants to write about their own ideas and academic journals want to publish what is new (they are fashion driven).

    Conversion between ways of counting function points, a software effort estimating technique, is one area where there has been a lot of replications (eight studies is a lot in software engineering, while a couple of hundred is a lot in psychology).

  • The wind is not yet blowing in software engineering research
    An article by Andrew Gelman is getting a lot of well-deserved publicity at the moment. The topic of discussion is sloppy research practices in psychology and how researchers are responding to criticism (head in the sand and blame the messenger).

    I imagine that most software developers think this is an intrinsic problem in the ‘soft’ sciences that does not apply to the ‘hard’ sciences, such as software; I certainly thought this until around 2000 or so. Writing a book containing a detailed analysis of C convinced me that software engineering was mostly opinion, with a tiny smattering of knowledge in places.

    The C book tried to apply results from cognitive psychology to what software developers do. After reading lots of books and papers on cognitive psychology I was impressed with how much more advanced, and rigorous, their experimental methods were, compared to software engineering.

    Writing a book on empirical software engineering has moved my views on to the point where I think software engineering is the ideal topic for the academic fraudster.

  • I Used to Be a Human Being
    If the internet killed you, I used to joke, then I would be the first to find out. Years later, the joke was running thin. In the last year of my blogging life, my health began to give out. Four bronchial infections in 12 months had become progressively harder to kick. Vacations, such as they were, had become mere opportunities for sleep. My dreams were filled with the snippets of code I used each day to update the site. My friendships had atrophied as my time away from the web dwindled. My doctor, dispensing one more course of antibiotics, finally laid it on the line: “Did you really survive HIV to die of the web?”

    But the rewards were many: an audience of up to 100,000 people a day; a new-media business that was actually profitable; a constant stream of things to annoy, enlighten, or infuriate me; a niche in the nerve center of the exploding global conversation; and a way to measure success — in big and beautiful data — that was a constant dopamine bath for the writerly ego. If you had to reinvent yourself as a writer in the internet age, I reassured myself, then I was ahead of the curve. The problem was that I hadn’t been able to reinvent myself as a human being.

  • The Free-Time Paradox in America
    Erik Hurst, an economist at the University of Chicago, was delivering a speech at the Booth School of Business this June about the rise in leisure among young men who didn’t go to college. He told students that one “staggering” statistic stood above the rest. “In 2015, 22 percent of lower-skilled men [those without a college degree] aged 21 to 30 had not worked at all during the prior twelve months,” he said.

    "Think about that for a second,” he went on. Twentysomething male high-school grads used to be the most dependable working cohort in America. Today one in five are now essentially idle. The employment rate of this group has fallen 10 percentage points just this century, and it has triggered a cultural, economic, and social decline. "These younger, lower-skilled men are now less likely to work, less likely to marry, and more likely to live with parents or close relatives,” he said.

  • There was a bomb on my block
    I grew up on the internet. I grew up with the mantra “don’t feed the trolls.” I always saw this as a healthy meditation for navigating the internet, for focusing on the parts of the internet that are empowering and delightful. Increasingly, I keep thinking that this is a meditation that needs to be injected into the news ecosystem. We all know that the whole concept of terrorism is to provoke fear in the public. So why are we not holding news media accountable for opportunistically aiding and abetting terroristic acts? Our cultural obsession with reading news that makes us afraid parallels our cultural obsession with crises.

    There’s a reason that hate is growing in this country. And, in moments like this, I’m painfully reminded that we’re all contributing to the culture of hate. When we turn events like what happened this weekend in NY/NJ into spectacle, when we encourage media to write stories about how afraid people are, when we read the stories of how the suspect was an average person until something changed, we give the news media license to stoke up fear. And when they are encouraged to stoke fear, they help turn our election cycle into reality TV and enable candidates to spew hate for public entertainment. We need to stop blaming what’s happening on other people and start taking responsibility.
