Complex Systems are not always Common Sense: Some Computer Science Thoughts on COVID-19

Since the start of the pandemic, many people have questioned the lockdowns, the emergency measures and overall government response. Especially after the curve was flattened in Canada, even more people question whether the measures are still necessary, or whether the goalposts are shifting. This defies common sense, the critique goes.

Well, I don’t think common sense is very helpful here. While I think there are important questions to be asked about what specific public health measures are needed and how to balance the direct risk of COVID-19 with mental health or economic risks, what I see lacking in so much of the critique is complex systems thinking. And I can’t help but see this through a computer science lens.

A Complex Systems Perspective

When the pandemic was first declared in March, one of the best articles that I read early on was from the always insightful Zeynep Tufekci, taking a look at COVID-19 from a complex systems perspective. Tufekci laments “widespread asystemic thinking: the inability to think about complex systems and their dynamics… when multiple things can go wrong together.” A lack of systems thinking is why COVID-19 seemed like a distant, faraway threat to most people in February, she writes:

Without using systemic thinking, in isolation, the case-fatality rate may not have seemed that alarming, especially because the virus seemed to disproportionately affect the elderly. But viewed through a systemic lens, even a small fatality rate foretold a disaster. It is true that the flu kills tens of thousands annually, but the choice here wasn’t between worrying about this coronavirus or seasonal influenza. It was about assessing what adding a COVID-19 pandemic on top of a flu season would mean—and how it would overwhelm health-care systems.

In complex systems, one can think about linear interactions and complex or nonlinear interactions. In linear interactions, we can add numbers to guess at combined impact. If the flu kills about 40,000 people annually in the U.S., and car accidents kill another 40,000 people annually, their combined impact is pretty much just that. They are both predictable events for which we have built infrastructure and expectations; our system anticipates both. But adding one more flu-like illness (as COVID-19 was presented) isn’t a linear event. Tipping points, phase transitions (water boiling or freezing), and cascades and avalanches (when a few small changes end up triggering massive shifts) are all examples of nonlinear dynamics in which the event doesn’t follow simple addition in its impacts…

This is why the case-fatality rate for COVID-19 was never a sufficient indicator of its threat. If emergency rooms and ICUs are overloaded from COVID-19, we will see more deaths from everything else: from traffic accidents, heart attacks, infections, seasonal influenza, falls and traumas—basically anything that requires an emergency-room response to survive.

So many people didn’t — and still don’t — see what the big deal is. I think that’s because the threat is not intuitive, it’s not common sense or easy to comprehend — it’s not a linear dynamic, and the danger isn’t immediately visible.

Over the Christmas break, I watched the TV miniseries Chernobyl. Despite some historical and scientific inaccuracies, I’ve often compared COVID-19 to a nuclear disaster because it’s one of the rare examples of a disaster involving complex non-linear systems, and threats that are not intuitive or visible. Nuclear radiation is not something that we can see, or that non-physicists really understand. I can’t help but think of the way in which so many people in the miniseries are depicted as not recognizing the threat, because everything appears to be okay in the moment. Radiation poisoning takes weeks to materialize, yet the radiation spreads before anyone can see that anything is wrong.

A nuclear disaster like Chernobyl is to physics what a pandemic like COVID-19 is to biology. In January 2020, everyone in Ontario received an emergency alert about a potential disaster at the Pickering nuclear power plant. Everything was fine, but many people very quickly looked into exactly how far away from the Pickering power plant they were (I’m pretty close…). Would people fault the government for taking emergency measures to protect the public against a nuclear disaster? Would that be anathema in a free and democratic society? What if only 40,000 people had been affected in Ontario, and most of them had a small enough dosage that they were alright? Does that mean the danger is over, and there’s no longer a need for precautions?

Now, the COVID-19 pandemic is not a nuclear disaster, and there are lots of things that are different about a pandemic, but I think the nuclear disaster thought experiment is helpful to encourage some humility and caution. Is there something about the danger that I don’t understand? That isn’t immediately visible or intuitive? Is common sense enough to solve this complex of a problem?

There are two areas in which I find a computer science lens helps me to understand the pandemic:

  1. Exponential math — exponential growth is radically different from linear addition
  2. The importance of simple and accessible security layers in achieving protection for the average person in the real world

Exponential Math

People don’t get exponential math. When you begin studying computer science, it takes a while until you can think exponentially. One of the most dangerous things about COVID-19 is the exponential, asymptomatic spread — without precautions, this virus can spread out of control and we wouldn’t realize it until it’s too late.

Pop quiz: If a number is doubling daily and reaches X by Day 60, on which day are we at 50%? And on which day are we at 1%?

Answer: 50% is Day 59 (maybe you got that, maybe you didn’t – but it makes sense when you think about it for a second). But get this: 1% isn’t reached until Day 54.
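The quiz arithmetic is easy to check in a few lines of Python (a quick sketch; the function name and structure are my own, not from the original):

```python
def first_day_reaching(fraction, final_day=60):
    """With daily doubling, return the earliest day whose value is at
    least `fraction` of the Day-`final_day` total (counting backwards)."""
    day = final_day
    value = 1.0  # fraction of the final total
    while value / 2 >= fraction:
        value /= 2
        day -= 1
    return day

print(first_day_reaching(0.50))  # 59 -- half of the final total
print(first_day_reaching(0.01))  # 54 -- Day 54 is ~1.6%; Day 53 is still below 1%
```

In other words, 99% of the entire outbreak happens in the final six days of the sixty.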

Now think of COVID-19 and the asymptomatic spread for up to a week or two. Yes, we’ve flattened the curve now (in Canada). Why are public health authorities still demanding vigilance and special measures? Because in a couple weeks, it could spread wildly and we wouldn’t know until it’s already happened.

For easy math, let’s look at what a daily doubling rate would look like:

  • Day 1: 1
  • Day 2: 2
  • Day 3: 4
  • Day 4: 8
  • Day 5: 16
  • Day 6: 32
  • Day 7: 64
  • Day 8: 128
  • Day 9: 256
  • Day 10: 512
  • Day 11: 1024
  • Day 12: 2048
  • Day 13: 4096
  • Day 14: ~8,000
  • Day 15: ~16,000
  • Day 16: ~32,000
  • Day 17: ~64,000
  • Day 18: ~128,000
  • Day 19: ~256,000
  • Day 20: ~512,000
  • Day 21: ~1,000,000

If something is doubling daily, that’s 3 weeks to a million. The first week ends below 100, and the second week barely reaches 10,000.

Now, COVID-19 was only doubling every 3-5 days in Canada in March (with an extreme lockdown implemented part way through the month). So we’re not talking one to a million in 3 weeks, but the point is the radical acceleration. Exponential growth isn’t just faster linear growth — it’s approaching infinity. The numbers grow very slowly at first, and all of a sudden become astronomic. Add a 1-2 week delay for symptoms to appear, and this is why public health measures are still necessary, even though things seem to be under control now. We’re a month away from a New York City or Italy scenario at any point, without continued vigilance.
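The difference a doubling time makes can also be computed directly. This is a toy calculation under the clean-exponential assumption used above, ignoring lockdowns and everything else that bends the curve:

```python
import math

def days_to_reach(target, doubling_days, start=1):
    """Days of growth until `start` cases reach `target`,
    doubling every `doubling_days` days (idealized exponential)."""
    return math.ceil(math.log2(target / start) * doubling_days)

for d in (1, 3, 5):
    print(f"doubling every {d} day(s): {days_to_reach(1_000_000, d)} days to a million")
# doubling every 1 day(s): 20 days to a million
# doubling every 3 day(s): 60 days to a million
# doubling every 5 day(s): 100 days to a million
```

Stretching the doubling time from one day to five buys about three extra months before the same milestone, which is exactly what flattening the curve means in practice.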

Simple and Accessible Security Measures

A lot of people are critical of the public health measures because they say they don’t make sense. Why do I have to wear my mask when standing in a restaurant, but not when sitting? Why do I even have to wear a mask when public health authorities are telling me that masks won’t protect me from COVID-19 and I need to maintain physical distancing with or without a mask? How come only 50 people are allowed in a movie theatre, but you can have 400 kids in a school?

A physician friend of mine explained this in terms of layered security, and it makes perfect sense to me from an IT security perspective.

I remember seeing a photo going around on social media saying something like, “this is what virologists wear to stay safe from COVID-19, and you think a mask is going to protect you?”

Yeah, well, this is how Edward Snowden types his password to be safe from the NSA.

Sure, if you wanted complete and total protection from the virus, everyone would be in full-body PPE. Just like if you want full security on a computer, you’d be using an airgapped machine with Tails GNU/Linux and a whole bunch of specialized software, typing your password in under a blanket. In both cases, basically no one except specialists with a high level of expertise will follow those kinds of practices.

So, in the real world, you need simple and accessible measures — measures that add some level of protection, and that people will actually follow in practice.

Also, the value of any security measure is not simply the level of protection offered, but also adoption. For example, if a system has password rules that are too strict or onerous and users end up with hard-to-remember passwords… they’re just going to put them on sticky notes on their computer screen. Security measures for the public aren’t about what offers the best protection, but what offers the best protection in the way that it’s actually used in the real world by the average person.

This is where layers come in. There’s only so much that can be done to encourage strong passwords among users. However, a really effective way to increase security is to add a second layer. Two-factor authentication, which uses something you have (a phone) and something you know (a password), is way stronger than single-factor authentication. It’s not that it’s fully secure — there are many ways in which the password can be compromised, or SMS messages can be intercepted — but it takes a lot more work for an attacker to defeat both layers at the same time than just one.

A specialist is going to use all sorts of fancy tools, but the average person won’t. So for IT security in the real world, it’s a combination of simple and accessible security layers: password strength, two-factor authentication, anti-virus software, browser warnings, etc.

It’s the same for public health. Of course a mask isn’t going to help in the way full-body PPE would, but it’s a question of what’s practical — and yes, that means different rules for different contexts, so that people will actually be able to follow them. It’s a few simple layers of public health measures: hand hygiene, physical distancing, and mask usage in some contexts. These measures independently only provide a certain amount of protection, but in combination decrease the risk of transmission immensely — and they’re simple enough for people to understand and to follow, which is what makes them the most effective public health option. The effectiveness isn’t just the level of protection offered by any given measure in isolation, but the level of protection when layered together and used in the real world. And, thus, the rules are adapted for real world scenarios as well.
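The layering argument can be made concrete with a toy probability model. The effectiveness numbers below are entirely hypothetical, and the model assumes the layers are independent, which real measures (and real attacks) only approximate:

```python
def combined_protection(layer_effectivenesses):
    """Probability that an attempt (a transmission, or an attack) is
    blocked by at least one of several independent layers, where each
    layer blocks the given fraction of attempts on its own."""
    risk = 1.0
    for e in layer_effectivenesses:
        risk *= (1.0 - e)  # each layer lets through a (1 - e) fraction
    return 1.0 - risk

# Hypothetical numbers, for illustration only -- not measured effectiveness.
print(combined_protection([0.5]))            # one 50% layer blocks 50%
print(combined_protection([0.5, 0.5, 0.5]))  # three 50% layers block 87.5%
```

Each individual layer can be mediocre on its own, yet the residual risk shrinks multiplicatively as layers stack, which is why a handful of imperfect, easy-to-follow measures can outperform one heroic measure nobody actually uses.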

Complex Systems are not always Common Sense

This is not to say there isn’t a lot to criticize or adapt in any government’s response. There are no perfect solutions to many of the challenges faced during the pandemic. But when it comes to being critical of the overall public health response, as though it defies common sense, my experience with other complex systems suggests that maybe common sense is not what’s called for. Maybe some humility is called for, in recognizing that with complex systems, sometimes the challenges and the answers are complex, and the most effective responses don’t always make sense to us right away.
