Links for 2013-07-25

  • Twilio Billing Incident Post-Mortem

    At 1:35 AM PDT on July 18, a loss of network connectivity caused all billing redis-slaves to simultaneously disconnect from the master. This caused all redis-slaves to reconnect and request full synchronization with the master at the same time. Receiving full sync requests from each redis-slave caused the master to suffer extreme load, resulting in performance degradation of the master and timeouts from redis-slaves to redis-master. By 2:39 AM PDT the host’s load became so extreme, services relying on redis-master began to fail. At 2:42 AM PDT, our monitoring system alerted our on-call engineering team of a failure in the Redis cluster. Observing extreme load on the host, the redis process on redis-master was misdiagnosed as requiring a restart to recover. This caused redis-master to read an incorrect configuration file, which in turn caused Redis to attempt to recover from a non-existent AOF file, instead of the binary snapshot. As a result of that failed recovery, redis-master dropped all balance data. In addition to forcing recovery from a non-existent AOF, an incorrect configuration also caused redis-master to boot as a slave of itself, putting it in read-only mode and preventing the billing system from updating account balances.
    See also http://antirez.com/news/60 for antirez's response. Here are the takeaways I'm getting from it:
    1. Network partitions happen in production, and they cause cascading failures. This is a great demo of that.
    2. Don't store critical data in Redis. This was the case for Twilio (as far as I can tell they were using Redis as a front-line cache for billing data), but it's worth saying anyway. ;)
    3. Even though Twilio were just using Redis as a cache, a bug in their code meant that the writes to the backing SQL store were not being *read*, resulting in repeated billing and customer impact. In other words, it turned a (fragile) cache into the authoritative store. (There's a rough sketch of the read-through pattern that avoids this below the tags.)
    4. They should probably have designed their code so that write failures would not result in repeated billing for customers; that's a bad failure path.
    Good post-mortem anyway, and I'd say their customers are a good deal happier to see this published, even if it contains details of the mistakes they made along the way.

    (tags: redis caching storage networking network-partitions twilio postmortems ops billing replication)
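
    To make takeaway 3 concrete, here's a minimal read-through cache sketch. It is not Twilio's code: the class and method names are invented, a ConcurrentHashMap stands in for Redis, and BalanceStore stands in for the backing SQL database. The point is only that reads fall back to the durable store on a miss, so wiping the cache can't silently change balances.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical sketch: keep the SQL store authoritative, treat the cache
        // as disposable. A ConcurrentHashMap stands in for Redis here.
        public class ReadThroughBalanceCache {
            public interface BalanceStore {
                long readBalanceCents(String accountId);   // authoritative read (SQL)
                void writeBalanceCents(String accountId, long cents);
            }

            private final Map<String, Long> cache = new ConcurrentHashMap<String, Long>();
            private final BalanceStore store;

            public ReadThroughBalanceCache(BalanceStore store) {
                this.store = store;
            }

            public long getBalanceCents(String accountId) {
                Long cached = cache.get(accountId);
                if (cached != null) {
                    return cached;
                }
                // Cache miss (or a wiped cache, as in the incident): fall back to
                // the authoritative store and repopulate, rather than trusting the
                // cache alone.
                long fromStore = store.readBalanceCents(accountId);
                cache.put(accountId, fromStore);
                return fromStore;
            }

            public void setBalanceCents(String accountId, long cents) {
                store.writeBalanceCents(accountId, cents);  // durable store first
                cache.put(accountId, cents);                // then refresh the cache
            }
        }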

  • Tuning and benchmarking Java 7’s Garbage Collectors: Default, CMS and G1

    Rüdiger Möller runs through a typical GC-tuning session, in exhaustive detail. (A minimal workload and the usual collector-selection flags are sketched below the tags.)

    (tags: java gc tuning jvm cms g1 ops)
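
    For flavour, here's the sort of setup such a session starts from: a trivial allocation workload plus the standard HotSpot flags for picking a collector and logging what it does. The workload, class name and heap sizes are made up for illustration; the flags are stock Java 7 options.

        import java.util.ArrayList;
        import java.util.List;

        // A trivial allocation workload for comparing collectors. Typical Java 7
        // invocations (standard HotSpot flags):
        //   java -Xms512m -Xmx512m -XX:+PrintGCDetails -Xloggc:gc-default.log GcWorkload
        //   java -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -Xloggc:gc-cms.log GcWorkload
        //   java -Xms512m -Xmx512m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-g1.log GcWorkload
        public class GcWorkload {
            public static void main(String[] args) {
                List<byte[]> retained = new ArrayList<byte[]>();
                for (int i = 0; i < 1_000_000; i++) {
                    byte[] chunk = new byte[1024];   // short-lived garbage
                    if (i % 100 == 0) {
                        retained.add(chunk);         // a slice survives into the old gen
                    }
                    if (retained.size() > 50_000) {
                        retained.clear();            // periodically release the retained set
                    }
                }
                System.out.println("done");
            }
        }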

  • Censum

    [JVM] GC is a difficult, specialised area that can be very frustrating for busy developers or devops folks to deal with. The JVM has a number of Garbage Collectors and a bewildering array of switches that can alter the behaviour of each collector. Censum does all of the parsing, number crunching and statistical analysis for you, so you don’t have to go and get that PhD in Computer Science in order to solve your GC performance problem. Censum gives you straight answers as opposed to a ton of raw data, can eat any GC log you care to throw at it, and is easy to install and use.
    Commercial software, UKP 495 per license. (A toy example of the kind of log parsing it automates appears below the tags.)

    (tags: censum gc tuning ops java jvm commercial)
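
    As a toy illustration of the parsing and number crunching such a tool automates, here's a tiny pause-time extractor. The sample line is only the general shape of a -XX:+PrintGCDetails line from the Java 7 parallel collector, and the class name and regex are my own invention; real log formats vary a lot by collector and flags, which is exactly why a dedicated analyser is attractive.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // Toy sketch of GC-log parsing: pull the stop-the-world pause length
        // (in seconds) out of a single PrintGCDetails-style line.
        public class GcPauseExtractor {
            private static final Pattern PAUSE = Pattern.compile("(\\d+\\.\\d+) secs\\]");

            public static void main(String[] args) {
                String sample = "2013-07-25T10:15:30.123+0100: 1.234: "
                        + "[GC [PSYoungGen: 65536K->10748K(76288K)] "
                        + "65536K->10756K(251392K), 0.0123456 secs]";
                Matcher m = PAUSE.matcher(sample);
                if (m.find()) {
                    System.out.println("pause: " + m.group(1) + "s");
                }
            }
        }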
