Category: worth reading
Worth Reading: entr: Rerun Your Build when Files Change
Julia Evans recently described another awesome Linux tool: entr allows you to run a bash command every time a watched file changes (and it works on Linux and OSX).
I wish I had found it years ago…
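In case you want to give it a try, here's roughly how it's used (file names and commands are made up; check the entr man page for the details):

```
# Rerun make whenever any C source file changes (-c clears the screen first)
ls *.c | entr -c make

# Restart a long-running process (-r) whenever the watched files change
ls *.py | entr -r python3 app.py
```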
Worth Reading: Written Communication Is a Remote Work Superpower
Snir David wrote a great article explaining why you should focus on documenting stuff you do instead of solving other people’s challenges (or putting out fires) on Slack/Zoom/whatever. Enjoy ;)
Worth Reading: Working with TC on Linux systems
Here’s one of the weirdest ideas I’ve found recently: patch together two dangling ends of virtual Ethernet cables with PBR.
To be fair, Jon Langemak used that example to demonstrate how powerful tc can be. It’s always fun to see a totally unexpected aspect of Linux networking… even though it looks like the creators of those tools believed in the Perl mentality of creating a gazillion variants of line noise to get the job done.
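If you're wondering what such patching might look like, here's a rough sketch using tc's mirred redirect action (interface names are made up, and this isn't necessarily the exact recipe from the article):

```
# Create two veth pairs; veth1 and veth3 are the "dangling ends" we want to patch together
ip link add veth0 type veth peer name veth1
ip link add veth2 type veth peer name veth3
ip link set veth1 up
ip link set veth3 up

# Attach ingress qdiscs and redirect whatever arrives on one end to the other
tc qdisc add dev veth1 ingress
tc qdisc add dev veth3 ingress
tc filter add dev veth1 parent ffff: protocol all matchall \
  action mirred egress redirect dev veth3
tc filter add dev veth3 parent ffff: protocol all matchall \
  action mirred egress redirect dev veth1
```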
Worth Reading: Lies, Damned Lies, and Keynotes
Got sick and tired of conference keynotes? You might love the Lies, Damned Lies, and Keynotes rant by Corey Quinn. Here are just two snippets:
They’re selling a fantasy, and you’ve been buying it all along.
We’re lying to ourselves. But it feels better than the unvarnished truth.
Enjoy!
Worth Reading: When Security Takes a Backseat to Productivity
Brian Krebs wrote an interesting analysis of the CIA’s internal report on the WikiLeaks data breach. In a nutshell, the agency was a victim of “move fast to get the mission done” shadow IT.
It could have been worse. Someone with a credit card could have started deploying stuff in AWS ;))
Not that anyone would learn anything from the PR nightmare that followed.
Worth Reading: Emerging Communications Technologies
Every few years someone within the ITU-T (the standards organization that mattered when we were still dealing with phones, virtual circuits, and modems) realizes how obsolete they are and tries to hijack and/or fork Internet protocol development. Their latest attempt is the “New IP” framework, and Geoff Huston did a great job completely tearing that stupidity apart in his May 2020 ISP column. My favorite quote:
It’s really not up to some crusty international committee to dictate future consumer preferences. Time and time again these committees with their lofty titles, such as “the Focus Group on Technologies for Network 2030” have been distinguished by their innate ability to see their considered prognostications comprehensively contradicted by reality! Their forebears in similar committees missed computer mainframes, then they failed to see the personal computer revolution, and were then totally surprised by the smartphone.
Enjoy!
Worth Reading: The Burning Bag of Dung
Loved the article from Philip Laplante about environmental antipatterns. I’ve seen plenty of founderitis and shoeless children in my life, but it was worshipping the golden calf that made me LOL:
In any environment where there is poor vision or leadership, it is often convenient to lay one’s hopes on a technology or a methodology about which little is known, thereby providing a hope for some miracle. Since no one really understands the technology, methodology, or practice, it is difficult to dismiss. This is an environmental antipattern because it is based on a collective suspension of disbelief and greed, which couldn’t be sustained by one or a few individuals embracing the ridiculous.
That paragraph totally describes the belief in the magical powers of long-distance vMotion, SDN (I published a whole book debunking its magical powers), building networks like Google does it, intent-based whatever, machine learning…
Worth Reading: 10 Optimizations on Linear Search
Stumbled upon an article by Tom Limoncelli. He starts with a programming question (skip that) but then goes into an interesting discussion of what’s really important.
Being focused primarily on networking, I liked this bit most (another case of Latency Matters):
I once observed a situation where a developer was complaining that an operation was very slow. His solution was to demand a faster machine. The sysadmin who investigated the issue found that the code was downloading millions of data points from a database on another continent. The network between the two hosts was very slow. A faster computer would not improve performance.
The solution, however, was not to build a faster network, either. Instead, we moved the calculation to be closer to the data.
Lesson learned: always figure out what the real problem is and what the most effective way of solving it would be, as opposed to pushing the problem down the stack or into the cloud.
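A back-of-the-envelope calculation (with numbers I made up for the sake of the argument) shows why no amount of local CPU power would have helped:

```
# 1,000,000 data points fetched one round trip at a time over a 150 ms RTT link
# means spending almost two days doing nothing but waiting on the network.
echo "1000000 * 0.150 / 3600" | bc -l    # ~41.7 hours of pure round-trip time
```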
Worth Reading: Do We Need Regulation for IoT Security?
A pretty good summary of the topic by Drew Conry-Murray: the market is not going to correct itself, it’s very hard to hold manufacturers or developers accountable for security defects in their products, and nothing much will change until someone dies.
And just in case you wonder how "innovative forward-looking disruptive knowledge-focused" companies could produce such ****, I can highly recommend The Stupidity Paradox.
Worth Reading: Why Must Systems Be Operated?
Every now and then I find an IT professional claiming we should not worry about split-brain scenarios because we have redundant links.
I might understand that sentiment coming from software developers, but I also encountered it when discussing stretched clusters or even SDN controllers deployed across multiple data centers.
Finally I found a great analogy you might find useful: a reader of my blog pointed me to the awesome Why Must Systems Be Operated blog post, which explains the same problem from the storage perspective. The next time this discussion comes up, you might want to use this one: “so you’re saying you don’t need backups because you have RAID disks?” If someone agrees with that, don’t walk away… RUN!
Worth Reading: SD-WAN Scalability Challenges
In January 2020 Doug Heckaman documented his experience with VeloCloud SD-WAN. He tried to be positive, but for whatever reason this particular bit caught my interest:
Edge Gateways have a limited number of tunnels they can support […]
WTF? Wasn’t x86-based software packet forwarding supposed to bring infinite resources and nirvana? How badly written must your solution be to have a limited number of IPsec tunnels on a decent x86 CPU?
Worth Reading: Machine Learning Explained
I hope you're familiar with Clarke's third law (I'll leave it to your imagination to figure out how it relates to SDN ;). In case you want to look beyond the Machine Learning curtain, you might find the Machine Learning Explained article highly interesting. Spoiler: it all started in the 1960s with over 2000 matchboxes.
Worth Reading: Seven Deadly Sins of Predicting the Future of AI
The next time the sales system engineer working for your beloved $vendor drops by with a glitzy unicorn-based slide deck full of AI/ML goodies, read this article to get a slightly better understanding of where we are... from the perspective of someone who has actual experience doing that stuff.
Worth Reading: Apple II Had the Lowest Input Latency Ever
It's amazing how heaping layers of complexity (see also: SDN or intent-based whatever) manages to destroy performance faster than Moore's law can deliver it. The computer with the lowest keyboard-to-screen latency was (supposedly) the Apple IIe built in 1983, while modern Linux systems have keyboard-to-screen round-trip times comparable to transatlantic links.
No surprise there: Linux has been getting slower with every kernel release, and it takes an enormous effort to fine-tune it (assuming you know what to tune). Keep that in mind the next time someone with a hefty PPT slide deck tells you to build a "provider cloud" with packet forwarding done on Linux VMs. You can make that work, and smart people have made it work, but you might not have the resources to replicate their feat.
Worth Reading: Understanding Scale Computing HC3 Edge Fabric
A long while ago someone told me about a "great" idea of using multi-port server NICs to build ad-hoc (or hypercube or whatever) server-only networks. It's pretty easy to prove that the approach doesn't make sense if you try to build generic any-to-any connectivity infrastructure: a full mesh of N servers needs N-1 ports per server, and anything less than a full mesh turns your servers into transit routers... but it makes perfect sense in a small environment.
One can only hope Scale Computing keeps their marketing closer to reality than some major vendors (which I will not name because I'm sick and tired of emails from their employees telling me how unjustly I'm picking on the stupidities their marketing is evangelizing) and doesn't start selling this approach as a save-the-world panacea... but we can be pretty sure there will be people out there using it in way-too-large environments, and then blaming everything but their own ignorance or stubbornness when the whole thing blows up in their faces.