Cloudflare are running a 40-machine, 50TB Kafka cluster, ingesting at 15 Gbps, for log processing. Also: Go producers/consumers, capnproto as wire format, and CitusDB/Postgres to store rolled-up analytics output. Also using the Space-Saving (top-k) and HyperLogLog (cardinality) estimation algorithms.
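The Space-Saving top-k estimator mentioned above fits in a few lines; this is an illustrative toy sketch, not Cloudflare's code:

```python
# Toy sketch of the Space-Saving top-k algorithm: keep at most k counters;
# when the table is full, a new item steals the smallest counter and
# inherits its count + 1 (so counts are over-estimates, never under).
class SpaceSaving:
    def __init__(self, k):
        self.k = k
        self.counts = {}

    def offer(self, item):
        if item in self.counts:
            self.counts[item] += 1
        elif len(self.counts) < self.k:
            self.counts[item] = 1
        else:
            # evict the minimum-count entry; newcomer inherits count + 1
            victim = min(self.counts, key=self.counts.get)
            self.counts[item] = self.counts.pop(victim) + 1

    def top(self, n):
        return sorted(self.counts.items(), key=lambda kv: -kv[1])[:n]
```

Heavy hitters survive eviction pressure, which is what makes it usable for top-k over a firehose in bounded memory.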
a command line tool for JVM diagnostic troubleshooting and profiling.
‘Simplistic interactive filtering tool’ — live incremental-search filtering in a terminal window
Yay. Sensible Docker registry pricing at last. Given the high prices, rough edges and slow performance of the other registry offerings, I’m quite happy to see this.
Google Container Registry helps make it easy for you to store your container images in a private and encrypted registry, built on Cloud Platform. Pricing for storing images in Container Registry is simple: you only pay Google Cloud Storage costs. Pushing images is free, and pulling Docker images within a Google Cloud Platform region is free (Cloud Storage egress cost when outside of a region). Container Registry is now ready for production use:
* Encrypted and Authenticated – Your container images are encrypted at rest, and access is authenticated using Cloud Platform OAuth and transmitted over SSL
* Fast – Container Registry is fast and can handle the demands of your application, because it is built on Cloud Storage and Cloud Networking.
* Simple – If you’re using Docker, just tag your image with a gcr.io tag and push it to the registry to get started. Manage your images in the Google Developers Console.
* Local – If your cluster runs in Asia or Europe, you can now store your images in ASIA or EU specific repositories using asia.gcr.io and eu.gcr.io tags.
Pragmatic evolution story, adding Docker as a packaging/deploy format for an existing production Capistrano/Rails fleet
Hystrix-style Circuit Breakers and Bulkheads for Ruby/Rails, from Shopify
GitHub’s statsd replacement in C
‘Want to learn how Facebook scales their load balancing infrastructure to support more than 1.3 billion users? We will be revealing the technologies and methods we use to route and balance Facebook’s traffic. The Traffic team at Facebook has built several systems for managing and balancing our site traffic, including both a DNS load balancer and a software load balancer capable of handling several protocols. This talk will focus on these technologies and how they have helped improve user performance, manage capacity, and increase reliability.’ Can’t find the standalone slides, unfortunately.
a good collection of coding fonts (via Tony Finch)
Finagle Futures ported to C++11
So then they reach inside to one of the layers and spin the knob randomly to fuck it up. Lower layers are edges and curves. Higher layers are faces, eyes and shoggoth ovipositors. [....] But the best part is not when they just glitch an image — which is a fun kind of embossing at one end, and the “extra eyes” filter at the other — but is when they take a net trained on some particular set of objects and feed it static, then zoom in, and feed the output back in repeatedly. That’s when you converge upon the platonic ideal of those objects, which — it turns out — tend to be Giger nightmare landscapes. Who knew. (I knew.) This stuff is still boggling my mind. All those doggy faces! That is one dog-obsessed ANN.
The paper describing the innards of Spark Streaming and its RDD-based recomputation algorithm:
we use a data structure called Resilient Distributed Datasets (RDDs), which keeps data in memory and can recover it without replication by tracking the lineage graph of operations that were used to build it. With RDDs, we show that we can attain sub-second end-to-end latencies. We believe that this is sufficient for many real-world big data applications, where the timescale of the events tracked (e.g., trends in social media) is much higher.
Gor, a very nice-looking tool to log and replay HTTP traffic, specifically designed to “tee” live traffic from production to staging for pre-release testing
Well-written description of the pros and cons. I’m a rebaser, fwiw. (via Darrell)
To sum up, if you want perfect performance you need to: Ensure traffic is distributed evenly across many RX queues and SO_REUSEPORT processes. In practice, the load usually is well distributed as long as there are a large number of connections (or flows). You need to have enough spare CPU capacity to actually pick up the packets from the kernel. To make things harder, both RX queues and receiver processes should be on a single NUMA node.
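The SO_REUSEPORT fan-out mentioned above is easy to demonstrate on Linux (3.9+); a minimal sketch, not the article's code — several sockets bind the same UDP port and the kernel spreads incoming flows across them:

```python
import socket

# Linux >= 3.9: multiple sockets may bind the same addr:port as long as
# each sets SO_REUSEPORT; the kernel then load-balances flows among them.
def reuseport_socket(host, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind((host, port))
    return s
```

In a real receiver you would open one such socket per worker process and pin each worker (and its RX queue) to the same NUMA node, per the advice above.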
This is amazing, and a little scary.
If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge. Again, we just start with an existing image and give it to our neural net. We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere. An enlightening comment from the G+ thread:
This is the most fun we’ve had in the office in a while. We’ve even made some of those ‘Inceptionistic’ art pieces into giant posters. Beyond the eye candy, there is actually something deeply interesting in this line of work: neural networks have a bad reputation for being strange black boxes that are opaque to inspection. I have never understood those charges: any other model (GMM, SVM, Random Forests) of any sufficient complexity for a real task is completely opaque for very fundamental reasons: their non-linear structure makes it hard to project back the function they represent into their input space and make sense of it. Not so with backprop, as this blog post shows eloquently: you can query the model and ask what it believes it is seeing or ‘wants’ to see simply by following gradients. This ‘guided hallucination’ technique is very powerful and the gorgeous visualizations it generates are very evocative of what’s really going on in the network.
This is great news — the current protocol is a binary, proprietary horrorshow, particularly around error reporting. Available “later this year” in production, and Pushy plan to support it.
tl;dr: Code review trumps TDD alone for finding bugs. (Via Mark Dennehy)
Objective: Our objective is to describe how software engineering might benefit from an evidence-based approach and to identify the potential difficulties associated with the approach. Method: We compared the organisation and technical infrastructure supporting evidence-based medicine (EBM) with the situation in software engineering. We considered the impact that factors peculiar to software engineering (i.e. the skill factor and the lifecycle factor) would have on our ability to practice evidence-based software engineering (EBSE). Results: EBSE promises a number of benefits by encouraging integration of research results with a view to supporting the needs of many different stakeholder groups. However, we do not currently have the infrastructure needed for widespread adoption of EBSE. The skill factor means software engineering experiments are vulnerable to subject and experimenter bias. The lifecycle factor means it is difficult to determine how technologies will behave once deployed. Conclusions: Software engineering would benefit from adopting what it can of the evidence approach provided that it deals with the specific problems that arise from the nature of software engineering. (via Mark Dennehy)
curl -s http://checkip.amazonaws.com/
The ruling is terrible through and through. First off, it insists that the comments on the news story were clearly “hate speech” and that, as such, “did not require any linguistic or legal analysis since the remarks were on their face manifestly unlawful.” To the court, this means that it’s obvious such comments should have been censored straight out. That’s troubling for a whole host of reasons at the outset, and highlights the problematic views of expressive freedom in Europe. Even worse, however, the Court then notes that freedom of expression is “interfered with” by this ruling, but it doesn’t seem to care — saying that it is deemed “necessary in a democratic society.” This is going to have massive chilling effects. Terrible ruling from the ECHR.
In the wake of this judgment, the legal situation is complicated. In an e-mail to Ars, T J McIntyre, who is a lecturer in law and Chairman of Digital Rights Ireland, the lead organization that won an important victory against EU data retention in the Court of Justice of the European Union last year, explained where things now stand. “Today’s decision doesn’t have any direct legal effect. It simply finds that Estonia’s laws on site liability aren’t incompatible with the ECHR. It doesn’t directly require any change in national or EU law. Indirectly, however, it may be influential in further development of the law in a way which undermines freedom of expression. As a decision of the Grand Chamber of the ECHR it will be given weight by other courts and by legislative bodies.”
We used three key principles in designing our datacenter networks:
* We arrange our network around a Clos topology, a network configuration where a collection of smaller (cheaper) switches are arranged to provide the properties of a much larger logical switch.
* We use a centralized software control stack to manage thousands of switches within the data center, making them effectively act as one large fabric.
* We build our own software and hardware using silicon from vendors, relying less on standard Internet protocols and more on custom protocols tailored to the data center.
Nice hack. An automated nginx reverse proxy which regenerates as the Docker containers update
Parse on their ditching-Rails story. I haven’t heard a nice thing about Ruby or Rails as an operational, production-quality platform in a long time :(
we are introducing Flow Logs for the Amazon Virtual Private Cloud. Once enabled for a particular VPC, VPC subnet, or Elastic Network Interface (ENI), relevant network traffic will be logged to CloudWatch Logs for storage and analysis by your own applications or third-party tools. You can create alarms that will fire if certain types of traffic are detected; you can also create metrics to help you to identify trends and patterns. The information captured includes information about allowed and denied traffic (based on security group and network ACL rules). It also includes source and destination IP addresses, ports, the IANA protocol number, packet and byte counts, a time interval during which the flow was observed, and an action (ACCEPT or REJECT).
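Each flow log record is a flat, space-separated line, so consuming them from CloudWatch Logs is mostly string-splitting. A sketch; the field order below is my recollection of the default format, not taken from this post, so check the AWS docs before relying on it:

```python
# Split one (assumed) default-format VPC Flow Log record into named fields.
# Field names/order are an assumption based on the AWS default format.
FLOW_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    return dict(zip(FLOW_FIELDS, line.split()))
```

From there it's straightforward to alarm on, say, REJECT records to a sensitive port.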
‘[Tim Hunt] said that while he meant to be ironic, he did think it was hard to collaborate with women because they are too emotional – that he was trying to be honest about the problems.’ So much for the “nasty twitter took my jokes seriously” claims then.
Good post, and hard to disagree.
One of the “features” of systemd is that it allows you to boot a system without needing a shell at all. This seems like such a senseless manoeuvre that I can’t help but think of it as a knee-jerk reaction to the perception of Too Much Shell in sysv init scripts. In exactly which universe is it reasonable to assume that you have a running D-Bus service (or kdbus) and a filesystem containing unit files, all the binaries they refer to, all the libraries they link against, and all the configuration files any of them reference, but that you lack that most ubiquitous of UNIX binaries, /bin/sh?
ouch, really sounds like Storm didn’t cut the mustard. ‘It’s hard to imagine something more damaging to Apache Storm than this. Having read it through, I’m left with the impression that the paper might as well have been titled “Why Storm Sucks”, which coming from Twitter themselves is quite a statement.’ If I were to summarise the lessons learned, it sounds like: backpressure is required; and multi-tenant architectures suck.
Allied Irish Banks’ web and mobile banking portals are ludicrously insecure. Vast numbers of accounts have easily-guessable registration numbers and are thus ‘protected’ by a level of security that is twice as easy to crack as would be provided by a single password containing only two lowercase letters. A person of malicious intent could easily gain access to hundreds, possibly thousands, of accounts, as well as completely overwhelm the branch network by locking an estimated several hundred thousand people out of their online banking. Both AIB and the Irish Financial Services Ombudsman have refused to respond meaningfully to multiple communications each in which these concerns were raised privately.
Nice detailed description of an auto-scaled SQS worker pool
The Daily Beast is scathing re the OPM hack:
Here’s where things start to get scary. Whoever has OPM’s records knows an astonishing amount about millions of federal workers, members of the military, and security clearance holders. They can now target those Americans for recruitment or influence. After all, they know their vices, every last one—the gambling habit, the inability to pay bills on time, the spats with former spouses, the taste for something sexual on the side—since all that is recorded in security clearance paperwork. (To get an idea of how detailed this gets, you can see the form, called an SF86, here.) Speaking as a former counterintelligence officer, it really doesn’t get much worse than this.
‘Easy Skeezy Ruby Date/Time Formatting’ — or indeed anywhere else strftime() is supported
‘a fully-automated solution to build auto-scaling etcd clusters in AWS’
New static-analysis good news, freshly open-sourced by Facebook:
Facebook Infer uses logic to do reasoning about a program’s execution, but reasoning at this scale — for large applications built from millions of lines of source code — is hard. Theoretically, the number of possibilities that need to be checked is more than the number of estimated atoms in the observable universe. Furthermore, at Facebook our code is not a fixed artifact but an evolving system, updated frequently and concurrently by many developers. It is not unusual to see more than a thousand modifications to our mobile code submitted for review in a given day. The requirements on the program analyzer then become even more challenging because we expect a tool to report quickly on these code modifications — in the region of 10 minutes — to fit in with developers’ workflow. Coping with this scale and velocity requires advanced mathematical techniques. Facebook Infer uses two such techniques: separation logic and bi-abduction. Separation logic is a theory that allows Facebook Infer’s analysis to reason about small, independent parts of the application storage, rather than having to consider the entirety of the memory potentially at every step. That would be a daunting task on modern processors with their large addressable virtual memories. Bi-abduction is a logical inference technique that allows Facebook Infer to discover properties about the behavior of independent parts of the application code. By storing these properties between runs, Facebook Infer needs to analyze only the parts of the software that have changed, reusing the results of its previous analysis where it can. By combining these approaches, our analyzer is able to find complex problems in modifications to an application built from millions of lines of code, in minutes. (via Bryan O’Sullivan)
This is great: the story of how the cheesy funk carioca tune “A Minha Amiga Fran” managed to become “Kawo Kawo” and a massive hit in Thailand
oh dear. I can see how this happened — in many cases they may not still have samples to derive new sums from :(
what’s the export policy for Google’s new Photos service? pretty good, it turns out
the bottom line is as follows: if the time it takes to create a dump, δ, is less than M/2, then use τ_opt = √(2δM) − δ; otherwise (it takes longer than M/2 to create a dump), just use τ_opt = M.
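This reads like Young's approximation for optimal checkpoint intervals; a quick sketch under that assumption, with δ (delta) the time to create a dump and M the mean time between failures:

```python
import math

# Young's approximation for the optimal dump/checkpoint interval
# (my reading of the garbled symbols in the quoted rule above):
#   if delta < M/2:  tau_opt = sqrt(2 * delta * M) - delta
#   otherwise:       tau_opt = M
def optimal_dump_interval(delta, M):
    if delta < M / 2:
        return math.sqrt(2 * delta * M) - delta
    return M
```

e.g. a 2-minute dump against a 100-minute MTBF suggests dumping roughly every 18 minutes rather than back-to-back.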
Creating a user with our DSL looks like: registrationAPI.createUser(“user”); You might expect this to create a user with the username ‘user’, but then we’d get conflicts between every test that wanted to call their user ‘user’, which would prevent tests from running safely against the same deployment of the exchange. Instead, ‘user’ is just an alias that is only meaningful while this one test is running. The DSL creates a unique username that it uses when talking to the actual system. Typically this is done by adding a postfix so the real username is still reasonably understandable, e.g. user-fhoai42lfkf. Nice approach — makes sense.
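A hypothetical Python analogue of the aliasing trick described above (class and method names are mine, not LMAX's):

```python
import uuid

# Per-test alias table: 'user' maps to a unique real username like
# 'user-fhoai42lfkf'; the mapping is stable within one test run.
class AliasRegistry:
    def __init__(self):
        self._real = {}

    def resolve(self, alias):
        if alias not in self._real:
            self._real[alias] = "%s-%s" % (alias, uuid.uuid4().hex[:11])
        return self._real[alias]
```

Each test gets its own registry, so two tests can both say "user" and never collide on the shared deployment.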
Orbit Async implements async-await methods in the JVM. It allows programmers to write asynchronous code in a sequential fashion. It was developed by BioWare, a division of Electronic Arts. Open source, BSD-licensed.
Well, well — looks like AWS is about to disrupt PKI, and about time too. If they come up with a Plex-style “provision a cert” API, it’ll be revolutionary
including a lovely set from Tove Jansson
large-scale automated TLS certificate deployment. very impressive and not easy to reproduce, good work Plex! (via Nelson)
Airbnb’s workflow management system; works off a DAG defined in Python code (ugh). Nice UI, but I think Pinboard’s take is neater
Introducing the chumbox
A high-performance Java build tool, from Facebook. Make-like
in favour of a proprietary ground-up rewrite called Heron. Reading between the lines it sounds like Storm had problems with latency, reliability, data loss, and supporting back pressure.
neat substitute for physical-time clocks in synchronization and ordering in a distributed system, based on Lamport’s Logical Clocks and Google’s TrueTime. ‘HLC captures the causality relationship like LC, and enables easy identification of consistent snapshots in distributed systems. Dually, HLC can be used in lieu of PT clocks since it maintains its logical clock to be always close to the PT clock.’
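A minimal sketch of the HLC update rules (my reading of the Kulkarni et al. algorithm, not the authors' code): a timestamp is a pair (l, c), where l tracks the largest physical time seen and c is a logical counter that breaks ties when l doesn't advance.

```python
import time

# Hybrid Logical Clock sketch. On a local/send event, l is bumped to the
# physical clock if it moved forward, else c increments. On receive, l
# takes the max of local l, the message's l, and physical time; c is set
# so that the new timestamp dominates both causes.
class HLC:
    def __init__(self, now=time.time):
        self.now = now
        self.l = 0.0
        self.c = 0

    def send(self):
        pt = self.now()
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1
        return (self.l, self.c)

    def recv(self, ml, mc):
        pt = self.now()
        l_new = max(self.l, ml, pt)
        if l_new == self.l == ml:
            self.c = max(self.c, mc) + 1
        elif l_new == self.l:
            self.c += 1
        elif l_new == ml:
            self.c = mc + 1
        else:
            self.c = 0
        self.l = l_new
        return (self.l, self.c)
```

Comparing (l, c) pairs lexicographically gives the LC-style causal ordering, while l stays within clock-skew distance of physical time.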
Increasingly bizarre postal address obfuscation with An Post, the Irish postal service. Example:
I have decided to see what you can post [....] My first experiment was a dice [sic] with one line of the address on each side. An Post delivered two days later. They win this round. (Via JG)
‘Can do ~1M queries to ~3K public DNS servers within ~3 minutes with just a few threads.’ via Trustin Lee. Netty is the business
Good series of blog posts on the LMAX trading platform’s performance testing strategy — they capture live traffic off the wire, then build statistical models simulating its features. See also http://epickrram.blogspot.co.uk/2014/07/performance-testing-at-lmax-part-two.html and http://epickrram.blogspot.co.uk/2014/08/performance-testing-at-lmax-part-three.html .
The modern state system is built on a bargain between governments and citizens. States provide collective social goods, and in turn, via a system of norms, institutions, regulations, and ethics to hold this power accountable, citizens give states legitimacy. This bargain created order and stability out of what was an increasingly chaotic global system. If algorithms represent a new ungoverned space, a hidden and potentially ever-evolving unknowable public good, then they are an affront to our democratic system, one that requires transparency and accountability in order to function. A node of power that exists outside of these bounds is a threat to the notion of collective governance itself. This, at its core, is a profoundly undemocratic notion—one that states will have to engage with seriously if they are going to remain relevant and legitimate to their digital citizenry who give them their power.
great example of how Minecraft solved the problem the easy way — by simply not making an MMO, the whole problem effectively goes away
Changes which server-side developers will need to start considering as HTTP/2 rolls out. Remove domain sharding; stop concatenating resources; stop inlining resources; use server push.
Short of switching to chronyd, ntpd -x sounds like a reasonable workaround:
With ntpd, the kernel backward step is used by default. With ntpd versions before 4.2.6, or 4.2.6 and later patched for this bug, the -x option (added to /etc/sysconfig/ntpd) can be used to disable the kernel leap second correction and ignore the leap second as far as the local clock is concerned. The one-second error gained after the leap second will be measured and corrected later by slewing in normal operation using NTP servers which already corrected their local clocks. It’s all pretty messy though :(
Russia’s troll farms. Ladies and gentlemen — the future
this is appalling. And of course we can only find out about it from overseas media because our own media is quaking in their boots :(
nice clear data there
I was in the middle of writing a breakdown of what went wrong, but you’ve beat me to it. Basically, they have a LinuxSecureRandom class that’s supposed to override the standard SecureRandom. This class reads from /dev/urandom and should provide cryptographically secure random values. They also seed the generator using SecureRandom#setSeed with data pulled from random.org. With their custom SecureRandom, this is safe because it mixes the entropy using XOR, so even if the random.org data is dodgy it won’t reduce security. It’s just an added bonus. BUT! On some devices under some circumstances, the LinuxSecureRandom class doesn’t get registered. This is likely because /dev/urandom doesn’t exist or can’t be accessed for some reason. Instead of screaming bloody murder like any sensible implementation would, they just ignore that and fall back to using the standard SecureRandom. If the above happens, there’s a problem because the default implementation of SecureRandom#setSeed doesn’t mix. If you set the seed, it replaces the entropy entirely. So now the entropy is coming solely from random.org. And the final mistake: They were using HTTP instead of HTTPS to make the webservice call to random.org. On Jan 4, random.org started enforcing HTTPS and returning a 301 Moved Permanently error for HTTP – see https://www.random.org/news/. So since that date, the entropy has actually been the error message (turned into bytes) instead of the expected 256-bit number. Using that seed, SecureRandom will generate the private key for address 1Bn9ReEocMG1WEW1qYjuDrdFzEFFDCq43F 100% of the time. Ouch. This is around the time that address first appears, so the timeline matches. I haven’t had a thorough look at what they’ve replaced it with in the latest version, but initial impressions are that it’s not ideal. Not disastrous, but not good. Always check return values; always check HTTP status codes.
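A toy illustration (mine, not the wallet's code) of the crucial difference described above: XOR-mixing a possibly attacker-known seed into the pool versus replacing the pool with it.

```python
import hashlib
import os

# XOR-mixing: even if `seed` is attacker-known (say, an HTTP error page),
# the pool keeps its original entropy -- XOR with a fixed value is a
# bijection, so no randomness is lost.
def xor_mix(pool, seed):
    digest = hashlib.sha256(seed).digest()
    return bytes(a ^ b for a, b in zip(pool, digest))

# Replacement: the pool is discarded and the output depends only on the
# seed -- if the seed is predictable, so is every key derived from it.
def replace(pool, seed):
    return hashlib.sha256(seed).digest()
```

With `replace`, every device seeded from the same error body derives the same "random" bytes (hence everyone landing on one Bitcoin address); with `xor_mix`, a bad seed costs nothing.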
A strongly specified, highly compatible implementation of Markdown
‘A Decentralized GitHub’. nifty
“Slim by Chocolate!” the headlines blared. A team of German researchers had found that people on a low-carb diet lost weight 10 percent faster if they ate a chocolate bar every day. It made the front page of Bild, Europe’s largest daily newspaper, just beneath their update about the Germanwings crash. From there, it ricocheted around the internet and beyond, making news in more than 20 countries and half a dozen languages. It was discussed on television news shows. It appeared in glossy print, most recently in the June issue of Shape magazine (“Why You Must Eat Chocolate Daily”, page 128). Not only does chocolate accelerate weight loss, the study found, but it leads to healthier cholesterol levels and overall increased well-being. The Bild story quotes the study’s lead author, Johannes Bohannon, Ph.D., research director of the Institute of Diet and Health: “The best part is you can buy chocolate everywhere.” I am Johannes Bohannon, Ph.D. Well, actually my name is John, and I’m a journalist. I do have a Ph.D., but it’s in the molecular biology of bacteria, not humans. The Institute of Diet and Health? That’s nothing more than a website. Other than those fibs, the study was 100 percent authentic. My colleagues and I recruited actual human subjects in Germany. We ran an actual clinical trial, with subjects randomly assigned to different diet regimes. And the statistically significant benefits of chocolate that we reported are based on the actual data. It was, in fact, a fairly typical study for the field of diet research. Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded. Interesting bit: the online commenters commenting on the published stories quickly saw through the bullshit. Why can’t the churnalising journos do that?
mainly interesting for the dataviz and the Google-Doc-driven backend. wish they published the script though
Very long, but tl;dr:
the trick to creating an effective error message is to answer the 3 Questions within your message: What is the error? What was the probable cause of the error? What is the probable remedy?
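For instance, a hedged sketch of what a three-questions error message might look like in code (the function, CLI name, and wording are all mine, purely for illustration):

```python
# Hypothetical example of an error message that answers the three
# questions: what went wrong, why it probably happened, how to fix it.
def load_config(path, available):
    if path not in available:
        raise FileNotFoundError(
            "Cannot load config '%s' (the error). " % path +
            "The file probably was never created on this host (the cause). " +
            "Run 'myapp init' or pass --config <path> (the remedy)."
        )
    return available[path]
```

Compare that with the bare "file not found" most tools emit: same failure, but the reader is left to reconstruct cause and remedy themselves.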
Grim meathook future, courtesy of Volvo:
“The Volvo XC60 comes with City Safety as a standard feature however this does not include the Pedestrian detection functionality [...] The pedestrian detection feature [...] costs approximately $3,000.” However, there’s another lesson here, in crappy car UX and the risks thereof:
But even if it did have the feature, Larsson says the driver would have interfered with it by the way they were driving and “accelerating heavily towards the people in the video.” “The pedestrian detection would likely have been inactivated due to the driver inactivating it by intentionally and actively accelerating,” said Larsson. “Hence, the auto braking function is overrided by the driver and deactivated.” Meanwhile, the people in the video seem to ignore their instincts and trust that the car assumed to be endowed with artificial intelligence knows not to hurt them. It is a sign of our incredible faith in the power of technology, but also, it’s a reminder that companies making AI-assisted vehicles need to make safety features standard and communicate clearly when they aren’t.
‘Due to how the banner notifications process the Unicode text. The banner briefly attempts to present the incoming text and then “gives up” thus the crash’. Apparently the entire Springboard launcher crashes.
The Unreliable Town Clock (UTC) is a new, free, public SNS Topic (Amazon Simple Notification Service) that broadcasts a “chime” message every quarter hour to all subscribers. It can send the chimes to AWS Lambda functions, SQS queues, and email addresses. You can use the chime attributes to run your code every fifteen minutes, or only run your code once an hour (e.g., when minute == "00") or once a day (e.g., when hour == "00" and minute == "00") or any other series of intervals. You can even subscribe a function you want to run only once at a specific time in the future: Have the function ignore all invocations until it’s after the time it wants. When it is time, it can perform its job, then unsubscribe itself from the SNS Topic.
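The once-an-hour filtering described above is a one-liner in the subscriber; a sketch, assuming the chime message carries string "hour" and "minute" attributes as the post implies:

```python
# Lambda-style handler sketch: ignore every chime except the top of the
# hour. (Message shape with string 'hour'/'minute' keys is assumed.)
def handle_chime(message):
    if message["minute"] != "00":
        return None   # skip the other three chimes each hour
    return "hourly job ran at %s:%s UTC" % (message["hour"], message["minute"])
```

The "run once in the future" trick is the same pattern plus an unsubscribe call after the first non-None return.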
Soylent’s not purchased by the Mark Zuckerbergs or the Larry Pages or the other tech aristocrats [...] Rather, it’s been taken up by white-collar workers and students destined for perpetual toil in the digital mills. Their embrace of life hacking represents the internalisation of management practices by the managed themselves.
some good Spark optimization tips
Great, encyclopedic blog post rounding up common architectural and algorithmic patterns used in scalable data platforms. Cut out and keep!
Morehshin Allahyari, an Iranian born artist, educator, and activist [....] is working on digitally fabricating [the] sculptures [ISIS destroyed] for a series called “Material Speculation” as part of a residency in Autodesk’s Pier 9 program. The first in the series is “Material Speculation: ISIS,” which, through intense research, is modeling and reproducing statues destroyed by ISIS in 2015. Allahyari isn’t just interested in replicating lost objects but making it possible for anyone to do the same: Embedded within each semi-translucent copy is a flash drive with Allahyari’s research about the artifacts, and an online version is coming. In this way, “Material Speculation: ISIS,” is not purely a metaphorical affront to ISIS, but a practical one as well. Allahyari’s work is similar to conservation efforts, including web-based Project Mosul, a small team and group of volunteers that are three-dimensionally modeling ISIS-destroyed artifacts based on crowd-sourced photographs. “Thinking about 3D printers as poetic and practical tools for digital and physical archiving and documenting has been a concept that I’ve been interested in for the last three years,” Allahyari says. Once she began exploring the works, she discovered a thorough lack of documentation. Her research snowballed. “It became extremely important for me to think about ways to gather this information and save them for both current and future civilizations.”
Lovely description of cracking (ie. copy-protection removal) in the Apple-II era. Very reminiscent of the equivalent in the C=64 scene, from my experience. ;)
‘Early gravity mapping efforts in the 1960s revealed that the Hudson Bay area in particular exerts a weaker gravitational force. Since less mass equals less gravity, there must be less mass underneath these areas.’ informed!
Interesting product line — I didn’t know this existed, but it makes good sense as a “network flight recorder”. Big in finance.
SolarCapture is a powerful packet capture product family that can transform every server into a precision network monitoring device, increasing network visibility, network instrumentation, and performance analysis. SolarCapture products optimize network monitoring and security, while eliminating the need for specialized appliances, expensive adapters relying on exotic protocols, proprietary hardware, and dedicated networking equipment. See also Corvil (based in Dublin!): ‘I’m using a Corvil at the moment and it’s awesome — nanosecond precision latency measurements on the wire.’ (via mechanical sympathy list)
This is a phenomenally useful ML/data-mining resource post — ‘the top 10 most influential data mining algorithms as voted on by 3 separate panels in [ICDM '06's] survey paper’, but with a nice clear intro and description for each one. Here are the algorithms covered:
1. C4.5
2. k-means
3. Support vector machines
4. Apriori
5. EM
6. PageRank
7. AdaBoost
8. kNN
9. Naive Bayes
10. CART
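As a taster, here is a toy, dependency-free sketch of one entry from the list: k-means with k=2 on one-dimensional points (initialised at the min and max, which is my simplification, not the canonical random init):

```python
# Toy k-means, k=2, one dimension: alternate between assigning each point
# to its nearest centre and moving each centre to the mean of its points.
def kmeans_1d(points, iters=20):
    c0, c1 = min(points), max(points)
    for _ in range(iters):
        a = [p for p in points if abs(p - c0) <= abs(p - c1)]
        b = [p for p in points if abs(p - c0) > abs(p - c1)]
        if a:
            c0 = sum(a) / len(a)
        if b:
            c1 = sum(b) / len(b)
    return c0, c1
```

The real thing generalises to d dimensions and k clusters, but the assign/update loop is exactly this.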
g’wan the Colm!
a fuzzy matching library. Given a byte stream with a minimum length of 512 bytes, TLSH generates a hash value which can be used for similarity comparisons. Similar objects will have similar hash values which allows for the detection of similar objects by comparing their hash values. Note that the byte stream should have a sufficient amount of complexity. For example, a byte stream of identical bytes will not generate a hash value. Paper here: https://drive.google.com/file/d/0B6FS3SVQ1i0GTXk5eDl3Y29QWlk/edit (via adulau)
What is the relationship between Kubernetes, Borg and Omega (the two internal resource-orchestration systems Google has built)? I would say, kind of by definition, there’s no shared code but there are shared people. You can think of Kubernetes — especially some of the elements around pods and labels — as being lessons learned from Borg and Omega that are, frankly, significantly better in Kubernetes. There are things that are going to end up being the same as Borg — like the way we use IP addresses is very similar — but other things, like labels, are actually much better than what we did internally. I would say that’s a lesson we learned the hard way.
Considering the similarity of its ingredients, canned dog food could be a suitable and inexpensive substitute for pâté or processed blended meat products such as Spam or liverwurst. However, the social stigma associated with the human consumption of pet food makes an unbiased comparison challenging. To prevent bias, Newman’s Own dog food was prepared with a food processor to have the texture and appearance of a liver mousse. In a double-blind test, subjects were presented with five unlabeled blended meat products, one of which was the prepared dog food. After ranking the samples on the basis of taste, subjects were challenged to identify which of the five was dog food. Although 72% of subjects ranked the dog food as the worst of the five samples in terms of taste (Newell and MacFarlane multiple comparison, P<0.05), subjects were not better than random at correctly identifying the dog food.
when you run it with Python 2.7 you get the following: ISBN = 9780199226559, which is Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. And so now I have a lot more respect for the Director.
via the Dublin Ladies Beer Society ;)
‘The multiple repository tool’. How Google kludged around the split-repo problem when you don’t have a monorepo.
I really wish Docker/CoreOS would look at copying some of the deterministic-build ideas from Nix; see also http://gregoryszorc.com/blog/2014/10/13/deterministic-and-minimal-docker-images/
In his excellent blog post [...] Jeff Hodges recommends that you use the CAP theorem to critique systems. A lot of people have taken that advice to heart, describing their systems as “CP” (consistent but not available under network partitions), “AP” (available but not consistent under network partitions), or sometimes “CA” (meaning “I still haven’t read Coda’s post from almost 5 years ago”). I agree with all of Jeff’s other points, but with regard to the CAP theorem, I must disagree. The CAP theorem is too simplistic and too widely misunderstood to be of much use for characterizing systems. Therefore I ask that we retire all references to the CAP theorem, stop talking about the CAP theorem, and put the poor thing to rest. Instead, we should use more precise terminology to reason about our trade-offs.
Non-monospaced coding fonts! I’m all in favour…
As writing and managing code becomes more complex, today’s sophisticated coding environments are evolving to include everything from breakpoint markers to code folding and syntax highlighting. The typography of code should evolve as well, to explore possibilities beyond one font style, one size, and one character width.
a Zipkin-compatible distributed-system tracing framework in Java, in the Apache Incubator
To reduce the latency impact of storing to disk, Weaver’s team looked to buffering as a means to absorb the writes and sync them to disk periodically, rather than for each entry. The tradeoff? They knew memory buffers would help, but smaller clusters could run into trouble if they violated Raft’s stable-storage requirement. Instead, they asked Intel’s silicon architects about features available in the Xeon line. After describing the core problem, they found out it had already been solved in other areas with ADR, which can take a designated section of memory, persist it to disk when power is lost, and restore it to the buffer on recovery. After some work to prove out Linux OS support for this, they were confident they had a best-of-both-worlds angle. And it worked: as Weaver detailed in his CoreOS Fest discussion, the response time proved stable. In short, ADR: * makes small (<100MB) segments of memory “stable” enough for Raft log entries, with no need for battery-backed memory * can be orchestrated using Linux or Windows OS libraries * lets you define the target memory region and determine where to recover to * can be exposed directly into libs for runtimes like Golang * uses silicon features that are accessible on current Intel servers.
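The buffering tradeoff described above is easy to sketch. This toy (a hypothetical `BufferedLog` class in Python, nothing from etcd itself) shows why batched fsync is tempting and exactly where the stable-storage gap opens up:

```python
import os

class BufferedLog:
    # Sketch of the buffering idea: absorb Raft-style log appends and
    # fsync periodically rather than per entry. Illustrative only --
    # acknowledging an entry before sync() completes is precisely the
    # stable-storage violation that ADR papers over in hardware.

    def __init__(self, path: str, flush_every: int = 64):
        self.f = open(path, "ab")
        self.flush_every = flush_every
        self.pending = 0

    def append(self, entry: bytes) -> None:
        # Length-prefixed framing so entries can be replayed on recovery.
        self.f.write(len(entry).to_bytes(4, "big") + entry)
        self.pending += 1
        if self.pending >= self.flush_every:
            self.sync()

    def sync(self) -> None:
        # Entries buffered since the last sync() would be lost on power
        # failure; ADR's persisted memory segment closes that window.
        self.f.flush()
        os.fsync(self.f.fileno())
        self.pending = 0
```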
Visual programming, Yahoo! Pipes style, back again:
we have created streamtools – a new, open source project by The New York Times R&D Lab which provides a general purpose, graphical tool for dealing with streams of data. It provides a vocabulary of operations that can be connected together to create live data processing systems without the need for programming or complicated infrastructure. These systems are assembled using a visual interface that affords both immediate understanding and live manipulation of the system. (via Aman)
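The connected-operations model is much older than the visual interface; stripped of the GUI, it's just composable stream transforms. A minimal sketch (nothing to do with streamtools' actual implementation) using Python generators as the "blocks":

```python
# Each block is a generator that consumes an upstream block's output;
# wiring blocks together is just function composition.
def source(items):
    for item in items:
        yield item

def mapper(stream, fn):
    for item in stream:
        yield fn(item)

def filt(stream, pred):
    for item in stream:
        if pred(item):
            yield item

# Wire three blocks together: emit 0..9, square each, keep the evens.
pipeline = filt(mapper(source(range(10)), lambda x: x * x),
                lambda x: x % 2 == 0)
print(list(pipeline))  # [0, 4, 16, 36, 64]
```

What the visual tool adds on top of this is live rewiring and inspection of a running graph, which is much harder to get from plain code.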
a Java-based low-latency, high-throughput message bus, built on top of a memory-mapped file; inspired by Java Chronicle, with the main difference that it’s designed to efficiently support multiple writers – enabling use cases where the order of messages produced by multiple processes is important. MappedBus can also be described as an efficient IPC mechanism which enables several Java programs to communicate by exchanging messages.
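The core mechanism — processes exchanging length-prefixed records through a shared memory-mapped file — can be sketched in Python. This is a single-writer toy, not MappedBus's Java API; the hard part MappedBus actually solves (safely coordinating multiple concurrent writers claiming space in the file) is omitted here:

```python
import mmap
import struct

BUS_SIZE = 4096  # pre-sized backing file; mmap needs a fixed length

def create_bus(path: str) -> None:
    with open(path, "wb") as f:
        f.write(b"\x00" * BUS_SIZE)

def append_msg(path: str, offset: int, msg: bytes) -> int:
    # Single-writer, length-prefixed append; returns the next free offset.
    with open(path, "r+b") as f:
        mm = mmap.mmap(f.fileno(), BUS_SIZE)
        mm[offset:offset + 4] = struct.pack(">I", len(msg))
        mm[offset + 4:offset + 4 + len(msg)] = msg
        mm.flush()
        mm.close()
    return offset + 4 + len(msg)

def read_msg(path: str, offset: int):
    # Returns (message, next offset), or (None, offset) if nothing yet:
    # a zero length prefix means the writer hasn't reached this slot.
    with open(path, "r+b") as f:
        mm = mmap.mmap(f.fileno(), BUS_SIZE)
        (n,) = struct.unpack(">I", mm[offset:offset + 4])
        if n == 0:
            mm.close()
            return None, offset
        msg = bytes(mm[offset + 4:offset + 4 + n])
        mm.close()
        return msg, offset + 4 + n
```

Because readers only ever advance past fully written length prefixes, a reader in another process can tail the file without any locking — the same basic trick the Java Chronicle family uses.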
Oh dear, Amazon.
These aren’t actual technologies yet. [...] All of which underscores that Amazon might never ever ever ever actually implement delivery drones. The patent paperwork was filed nearly a year after Amazon’s splashy drone program reveal on 60 Minutes. At the time we called it revolutionary marketing because, you know, delivery drones are technical and logistical madness, not to mention that commercial drone use is illegal right now. Although, in fairness, the FAA did just relax some rules so that Amazon could test drones. At this point it feels like Amazon is just trolling. It’s trolling us with public relations BS about its future drones, and it’s trolling future competitors — Google is also apparently working on this — so that if somebody ever somehow does anything relating to drone delivery, Amazon can sue them. If I’m wrong, I’ll deliver my apology via Airmail.
This is like watching a train-wreck in slow motion on Groundhog Day. We, in the broader Linux and open source community, have been down this path multiple times over the past fifteen years, specifically with package formats. While there needs to be room for experimentation, having two incompatible specs driven by two startups trying to differentiate and in direct competition is *not* a good thing. It would be better for the community and for everyone who depends on our collective efforts if CoreOS and Docker collaborated on a standardized common spec, image format, and distribution protocol. To this end, we at Red Hat will continue to contribute to both initiatives with the goal of driving convergence.
Bookmarking for future reference. Recommended by one of the GC experts, I can’t recall exactly who ;)
Looks like it’s pretty feasible to run a private Docker registry on every host, backed by S3 (according to the ECS team’s AMA). SPOF-free — handy
$GRADLE_USER_HOME, basically — it may also be possible to set it from the Gradle script itself
“An archive of the former Internet of Things”
Nearly universally, B-trees win when the data gets big enough.
Good advice. See also the official Best Practices doc at https://docs.docker.com/articles/dockerfile_best-practices/
Interesting that GOOG are still doing these big-bang releases — I guess crunching the data to come up with new weights/rules is a heavyweight, time-consuming process
Dublin Cycling Campaign’s survey results: an estimated 20,000 bikes stolen per year in Dublin; only 1% of thefts result in a conviction
The Hospira drug pump vulnerabilities described here sound pretty horrific
+1 to ALL of this. We are doing exactly the same in Swrve and it has radically improved our release quality
Good explanation of this NLP tokenization/feature-extraction technique. Example result: “Jimi/B-PER Hendrix/I-PER played/O at/O Woodstock/B-LOC ./O”
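The tag scheme in that example (BIO/IOB encoding: B- begins an entity, I- continues it, O is outside any entity) decodes into entity spans with a few lines. A minimal sketch, using a hypothetical `bio_to_entities` helper:

```python
def bio_to_entities(tagged: str):
    # Collapse token/tag pairs like "Jimi/B-PER" into (text, type) spans.
    entities, cur_toks, cur_type = [], [], None
    for pair in tagged.split():
        token, tag = pair.rsplit("/", 1)
        if tag.startswith("B-"):
            if cur_toks:  # close the previous entity
                entities.append((" ".join(cur_toks), cur_type))
            cur_toks, cur_type = [token], tag[2:]
        elif tag.startswith("I-") and cur_toks:
            cur_toks.append(token)  # continue the current entity
        else:  # "O" tag: close any open entity
            if cur_toks:
                entities.append((" ".join(cur_toks), cur_type))
            cur_toks, cur_type = [], None
    if cur_toks:
        entities.append((" ".join(cur_toks), cur_type))
    return entities
```

Running it on the example sentence yields `[("Jimi Hendrix", "PER"), ("Woodstock", "LOC")]` — the B-/I- distinction is what keeps two adjacent entities of the same type from merging.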
Excellent deep dive into a production issue. Root causes: crappy error-handling code in ZooKeeper; lack of bounds checking in ZK; and a nasty kernel bug.
This honestly fits a narrow niche, but one that is gaining in popularity. If your messages take > 100µs to process, or your worker threads are consistently saturated, the standard ThreadPoolExecutor is likely perfectly adequate for your needs. If, on the other hand, you’re able to engineer your system to operate with one application thread per physical core, you are probably better off looking at an approach like the LMAX Disruptor. However, if you fall in the crack in between these two scenarios, or are seeing a significant portion of time spent in futex calls and need a drop-in ExecutorService to take the edge off, the injector may well be worth a look.