Part one of a two-part series exploring the human relationship with technology
A white hat approach to the information economy
In Part one, I introduce some basic concepts in the sociology of technology and economy to examine the current state of information technology. In Part two, I introduce humanity’s historical and evolutionary relationship with technology to explore the implications for developing an ethical relationship with technology.
https://www.engadget.com/2019/05/13/whatsapp-call-exploit-allowed-spyware/
The recent revelation that an Israeli hacking company (NSO Group) secretly sold security exploits to governments explains why closed-source, publicly-held companies often don’t like the open source ethos. We can examine this situation by looking at the two classical types or intentions of “hacking” from the perspective of proprietary, for-profit (especially publicly-held) software and IT solutions development. WhatsApp is a closed-source messaging app owned by Facebook, which is in turn publicly held.
Hacking itself is merely figuring out how to gain access to something. In the process, it involves learning how something works, what might break it, and what could fix or improve it. What lessons someone learns from a hacking project, and how they use that knowledge, depend largely on the type of hacking they do.
“Black hat” hackers are traditionally known as the “bad guys.” Their work encompasses three primary objectives:
1. Find weaknesses, vulnerabilities or bugs in current design
2. Figure out how to exploit them
3. Make money or gain prestige
a. through direct exploitation
b. through the “black hat” marketplace
This is reportedly what the NSO Group did: it found exploits and, rather than responsibly reporting them to WhatsApp, sold them to government agencies interested in spying on people’s communications. Social conscience and implications be damned.
In contrast, “white hat” hackers pursue the same three objectives:
1. Find weaknesses, vulnerabilities or bugs in current design
2. Figure out how to exploit them
3. Make money or gain prestige
a. by reporting the bugs directly to the company, and/or when the company doesn’t respond,
b. by making the bugs public, often to put pressure on the company to fix its problems and resecure the data for which it is responsible
White hat hackers are classically the “good guys”: they concern themselves with others’ vulnerabilities, and work to identify and fix them. Open source software developers make up a large subset of white hats. They don’t act out of pure altruism (they benefit from social prestige and income), but they attempt to integrate their self-interest with the common good. Often, hacking falls into a grey area: black hats sometimes do white hat things (like making an exploit public, though only after playing with it or benefiting from it in some other way), and white hats sometimes break laws that conflict with their ethical intentions.
I associate the white hat/black hat terms with Mad magazine’s Spy vs Spy comic, a critique of the Cold War in which two nearly-identical spies were locked in endless and often-deadly competition with one another, identical in appearance and behavior and differing only in the color of their clothing. In contrast, the white and black hats of the hacking world actually derive from the Western movie genre, where bad-guy archetypes wore dark hats and clothing and good-guy archetypes wore light hats and clothing, in a simple moral dualism that our culture finds comforting. Spy vs Spy serves as an analogy for the general confusion created by the surface-level similarities of the two categories of hacking: all three objectives are the same. Yet the devil is in the details and intentions. White hat hackers seek to solve problems, whereas black hat hackers seek to exploit them. So why do we struggle so much to distinguish between them? Our society has a tendency to “shoot the messenger,” and white hat hackers by definition tend to position themselves as messengers. Black hats tend to turn vulnerabilities into greater problems by exploiting them. In contrast, poor socioeconomic reactions to well-intentioned white hats tend to make mountains out of molehills. In either case, white and black hats get lumped together as troublemakers, which in turn creates a negative general opinion of “hacking” as if it were synonymous with troublemaking.
The ethics of making software exploits public knowledge stem from the collision between closed-source (proprietary, secretive) and open-source models in our current economic context. Black hat work has immediate financial incentives attached to it. White hat work doesn’t: you have to gain notoriety and become a security contractor before your contributions to security are recognized as legitimate and legal. And even then, your accomplishments and the skills they represent lead many to treat you like a liability. So immediately, the scales are tipped against the white hat. At worst, white hat hackers act like whistleblowers. Nominally, they simply operate with an open source philosophy: shine a light into darkness. With the transparency of open source values comes accountability.
However, accountability is expensive: it takes time, expertise and money to track down, fix and verify vulnerabilities and other flaws. It takes less to not look for them, or to ignore them entirely. When a company successfully avoids accountability, it can externalize those costs (such as security vulnerabilities) onto its users. This externality often takes the form of what economists call “intertemporal discounting,” where you reap benefits now and deal with the snowballing consequences somewhere “in the future.” If that “someone else’s problem somewhere else at some other time” mentality sounds familiar, it describes a lot of how our society functions, and helps us understand some of the hidden costs of goods and services offered to us for “free.” Previous and current generations regularly make choices without considering how those choices will impact their own future, let alone the lives of subsequent generations. We often end up paying for our short-sightedness later, somehow. It would do us well to assume a cost exists, and ask who, what, when, where and how, rather than “whether.” Put another way: free now, but you’ll typically pay for it later, one way or another.
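To see how discounting tips the scales, here is a minimal Python sketch with made-up numbers (the fix cost, breach cost and discount rate are all hypothetical):

```python
# Toy illustration of intertemporal discounting (all figures invented).
# A company weighs a known fix cost today against a larger breach cost
# several years out. Discounting shrinks the future cost until ignoring
# the vulnerability looks "rational" on paper.

def present_value(future_cost: float, annual_rate: float, years: int) -> float:
    """Standard discounting: what a cost paid `years` from now is worth today."""
    return future_cost / ((1 + annual_rate) ** years)

fix_now = 250_000         # hypothetical cost to patch and audit today
breach_later = 2_000_000  # hypothetical cost of an eventual breach
discount_rate = 0.30      # aggressive rate implied by quarterly thinking

for years in (1, 3, 5, 10):
    pv = present_value(breach_later, discount_rate, years)
    verdict = "fix now" if fix_now < pv else "ignore (on paper)"
    print(f"breach in {years:2d} years -> discounted cost ${pv:,.0f} -> {verdict}")
```

At a high enough discount rate, the distant breach costs less “on paper” than the immediate fix, and ignoring the flaw becomes the locally rational choice. That is the externalization described above, reduced to arithmetic.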
Sooner or later, an unaddressed vulnerability or other flaw in software design becomes public. But when, where, how and to whom it becomes public matters. When a company sits on an unknown security vulnerability (or ignores a known one) for long enough, black hat hackers eventually get ahold of the information they need to pursue an opportunity. They typically get there first, simply because they are actively looking for such opportunities. When they reach the vulnerability first, they are off to the races to exploit it before a white hat finds and reports or (in an open source circumstance) fixes it. Thus, fixing vulnerabilities is a race between white hat and black hat hackers.
Ideally, white hats find vulnerabilities first, and no opportunity for black hat exploitation occurs. But publicly-held companies operate with an eye to quarterly profits. To maximize current or even next-quarter profit, it is rationally better for a company to suppress or even ignore vulnerabilities rather than allow them to become public, even though this has major repercussions further into the future. Although we, the users of such technologies (and the primary victims of their flaws), all want vulnerabilities and other flaws discovered by the “good guys” and patched or otherwise corrected ASAP, that doesn’t necessarily make sense to a publicly-held, closed-source company. Such a company (a) simply can’t see that far into the future and (b) is far more concerned about immediate profitability, anyway. Once a consumer purchases and “owns” a product, providing warranty and support is only expensive. It positively impacts profit only inasmuch as it contributes to company reputation and customer loyalty. While black hats are a bigger threat to a publicly-held, closed-source company, white hats keep the company on its toes, increase its (short-term) expenses (although often by helping it avoid the black hat threat), and even publicly embarrass it. To such companies, this results in the worst of both worlds: all the extra work (to fix exposed flaws) with none of the positive PR. As long as such corporations think they can avoid accountability, all hackers (black or white) are liabilities, annoyances, or threats…with one important exception that I will discuss later.
In many cases, closed source proprietary philosophies and practices make a self-fulfilling prophecy out of white hat intentions. In closed source circumstances, white hats can only identify problems; they don’t have access to the source code to even propose or collaborate on solutions. And the closed source proprietary perspective may view anything they do submit as evidence of a violation of proprietary secrets. Thus, in an economic milieu that values and protects secrecy of information, white hats get lumped with the “bad” black hats. It’s not that competition doesn’t exist in open source circumstances. It’s just that competition and collaboration behave very differently, and often coexist in the same time and space. In contrast, closed source circumstances tend to dichotomize competition and collaboration as mutually exclusive: you are either a competitor or a collaborator. In open source, you can actually be both at the same time, thanks to upstream/downstream project contribution and forking. Many people — even those heavily involved in open source projects — suffer from cognitive dissonance around this blending. As a result, it often becomes difficult to structure effective collaborations, and I have seen many people react inappropriately, dismissing the presence of competition as “bad” or collaborative gestures as “disingenuous.”
Coming back to the moral ambiguity of Spy vs Spy, the view of black hats as “bad” is itself a bit more complex. Closed source philosophies and practices evolved out of, and depend upon, a paradigm of scarcity that leans heavily on a zero-sum analysis: if I have more, you have less, and the value of what I have increases. That is how money currently operates: its value comes not only from collective belief in it as a medium of exchange, but also from its limited presence and unequal distribution, which create and maintain artificial gradients of supply and demand. If only a few people have a million dollars, that million dollars is worth a lot. However, if everyone has a million dollars, then they might as well have only one dollar, because it is all worth the same. In a scarcity-based economy, distribution of the resource matters. In other words, it’s not so much how big the pie is (economic growth), but how the pieces get divided and distributed.
However, technology development — especially information technology — deals primarily with ideas and concepts (information). And the rules of scarcity do not apply to information, which is more like the pie recipe than the pie itself. My possession of information neither devalues nor excludes your possession of the same or different information. In fact, our mutual possession of information (same or complementary) can actually add value to what we each have. However, the context of the scarcity economy tends to infiltrate and redefine our thinking about information according to its rules. My recipe quickly becomes a guarded secret. Thus a major conflict exists between the “economic” and “informational” aspects of our modern information economy, preventing us from exploring and engaging with the full potential the information economy promises, which would likely be based on open source principles that embrace or even depend upon a pre-analytic vision of abundance rather than scarcity. We may need to develop new monetary systems and related institutions to replace outdated ones and overcome this conflict. At the same time, pursuit of a fully-empowered (open source) information economy also provides us an opportunity to adopt and apply an abundance-based paradigm to other aspects of socioeconomic organization. We can just as easily design an economy that ties its value to the health rather than the exploitation of the commons: the commonweal. Doing so uncouples economic production from environmental destruction, and recouples it with environmental regeneration and other forms of net good as byproducts of economic activity.
Black hats exist primarily as an outgrowth of artificial concentrations and gradients of information, money and power created by scarcity-based models of distribution and social dynamics. Nature abhors a gradient. Another way to look at it: bank robbers only make sense in a society where banking represents the accumulated financial interests of an elite few. In this sense, the problem of black hat hacking is a vestige of an elite minority wanting to “have their cake and eat it, too”: to accumulate power, money or information without incurring the inherent security risks of doing so. So we can’t accurately blame the black hat problem on the “immoral” choices or “moral weakness” of individual black hats. Likewise, many white hats in the same economy often have grey ethics with regard to their motivations and methods. Like many of the corporations that black hats antagonize, they simply see opportunity. A capitalist society does not make moral (let alone ethical) distinctions between “good” and “bad” economic opportunity. The legality of pursuing “black hat” opportunities often arises more from how they impact the interests of a politically well-connected elite than from ethical considerations. If that elite sees such pursuits as opportunity, they tend to become legal. If it sees them as liability, they tend to become illegal. Morality, ethics and actual impact be damned.
In the best of circumstances, white hats become appreciated and valued security consultants or contractors, and find a socioeconomic niche rewarding (rather than punishing) their ethical focus. And that’s part of the problem: we exist in a cultural, social and economic milieu that, for all our complex laws and rhetoric about accountability, heavily rewards unethical behavior, and often punishes or discourages ethical behavior. The two paradigms of open and closed source are completely alien to one another, and this confuses people to no end. In an open source context, competition and cooperation coexist and converge. Self-interest and altruism also tend to converge. This has a tendency to lower the stakes from “fight for life” to “play,” the same way wolf pups will “play” ferociously with one another, but will break it up or take a break before someone gets seriously hurt. Through this process, individuals explore and negotiate the limits and qualities of their strengths and weaknesses, their relationships with one another and ultimately their social role or niche. They clarify and develop their identities as individuals and as members of the pack, they find their optimal niche, and they contribute to the overall viability of the pack by increasing its strengths and mitigating its weaknesses. That, by way of analogy, is the goal for our economy.
So, to change this dynamic in the software field, we have a few (and potentially overlapping) options in social design:
1. Enforce decoupling of closed source and public holding: a company can be open source and publicly-held, or closed source and privately-held, but not closed source and publicly-held.
2. Somehow change the stock market time frame to prioritize long-term outcomes over short-term (quarterly) outcomes. The longer the time frame, the more universal interests become: “self-interest” and altruism tend to blend seamlessly into one another. This change in scale of focus also leads to a substantial difference in accountability and behavior choices — not just the choices made, but even the range of choices that seem available and viable in the first place.
3. We might consider the implications of requiring an open source ethic for all technology development, as doing so completely changes the ethics of social and market dynamics by inherently supporting transparency and accountability.
4. Incorporate broader consideration of liabilities and assets beyond profit and loss analysis into economic function and valuation. For example, the Center for Humane Technology’s ledger of harms seeks to render visible the previously-ignored externalities of technology design and use. What impact does the technology have on its users? On society? On ecosystems?
The second and fourth options have positive implications far beyond the design and development of technology, extending into all investment activities.
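As a rough illustration of the fourth option, here is a hypothetical sketch of what carrying externalities on the books might look like. The class name, categories and figures are all invented, loosely echoing the ledger of harms idea:

```python
# Hypothetical sketch: accounting that carries externalized harms
# alongside profit. Categories and figures are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TripleBottomLine:
    profit: float = 0.0                                    # conventional bottom line
    harms: dict[str, float] = field(default_factory=dict)  # externalized costs

    def record_harm(self, category: str, cost: float) -> None:
        self.harms[category] = self.harms.get(category, 0.0) + cost

    def net_welfare(self) -> float:
        """Profit minus the externalities a normal P&L statement hides."""
        return self.profit - sum(self.harms.values())

ledger = TripleBottomLine(profit=1_000_000)
ledger.record_harm("user attention and health", 400_000)  # invented figures
ledger.record_harm("social fabric", 250_000)
ledger.record_harm("ecosystem impact", 150_000)
print(f"reported profit: ${ledger.profit:,.0f}")
print(f"net welfare:     ${ledger.net_welfare():,.0f}")   # far less rosy
```

Once such a ledger exists, valuation, taxation or currency design can hang incentives on net welfare instead of profit alone, which is the move discussed later in connection with the Regen Network.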
Extending this analysis further: in today’s climate, social media and related app development have largely functioned like “black hats” with regards to the social psychology of software and telecommunications development. They hack and exploit vulnerabilities in human hardware and software for profit. This is the basis of the “hooked” model for user experience design: find the vulnerabilities in the human social psyche and exploit them to reap greater attention, screen time, advertising and therefore profits. You “hook” users by designing software to maximize addiction, lowering the work threshold to receive a positive reward (dopamine response) until the user internalizes the motivation and develops dependency or even addiction.
Tristan Harris of the Center for Humane Technology identified five core vulnerabilities in human social psychology that companies like Facebook exploit to maximize the time people spend in their interface (a toy simulation of the second follows the list):
1. Inaccurate forecasting (how long will this task, project or distraction actually take?)
2. Intermittent variable rewards (aka the infinite scroll, “click next” vortex)
3. Loss-aversion (fear of missing out (FOMO) with regards to something important; but even unimportant things gain importance when they come via a trusted connection)
4. Fast vs slow thought (mindless reaction to quick and easy stimuli vs mindful behavior)
5. Stress and altered states (we enter fight or flight easily, and make impulsive decisions)
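The second vulnerability, intermittent variable rewards, is easy to see in a toy simulation. Here is a minimal Python sketch (all numbers invented): under a variable-ratio schedule, the wait for the next reward is memoryless, so no dry streak ever signals that it’s time to stop scrolling.

```python
# Toy model of intermittent variable rewards (all numbers invented).
# Each pull of the feed pays off with fixed probability, so the wait
# for the next reward is geometric, and geometric waits are memoryless:
# past misses never change the expected wait for the next hit.

import random

random.seed(1)
P = 0.1  # each scroll/pull "pays off" 10% of the time

def pulls_until_reward() -> int:
    pulls = 1
    while random.random() >= P:
        pulls += 1
    return pulls

def extra_pulls_after_streak(streak: int) -> int:
    """Sample a wait conditioned on `streak` misses already endured,
    and return how many MORE pulls the reward took (rejection sampling)."""
    while True:
        total = pulls_until_reward()
        if total > streak:
            return total - streak

for streak in (0, 5, 20):
    avg = sum(extra_pulls_after_streak(streak) for _ in range(5000)) / 5000
    print(f"after {streak:2d} misses, average extra pulls to a reward: {avg:.1f}")
# Every line prints roughly 10 (= 1/P): the next reward always feels
# one more scroll away, no matter how long the drought has lasted.
```

That memorylessness is the point: there is never a rational stopping point, so the reward schedule itself supplies the “one more scroll” urge.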
While a company like Facebook (and by extension, WhatsApp) may fear and hate both white and black hat hackers, it does so while failing to acknowledge its own role in society as a black hat biohacker of the human psyche, keeping us plugged into its machines for the maximum time possible. Facebook doesn’t hate hacking generally or even black hat hacking specifically. Facebook simply doesn’t care for any hacking that doesn’t help it produce greater short-term profit. Does this make Facebook evil? I don’t think so.
Black hats compete all the time for the same scarce resource: access to a potentially-lucrative vulnerability. Their opportunity lies in access to vulnerabilities they can exploit. But they don’t despise white hats merely for making that scarce resource even scarcer and more tenuous. White hats represent an ethical system (e.g., open source values) completely alien and unintelligible to a black hat economy. Black hats at least make sense to each other: they know what they do and why they do it. But white hats don’t make sense to black hats. Hacking is all about understanding and access; if something is alien and unintelligible, it remains inaccessible and unavailable to us, for better or worse. Black hats not only hate what white hats do (ruin opportunity), they also don’t understand why white hats do it. In this sense, widespread comprehension of the motivations and ethics of white hats may serve as an indicator of a significant shift in social affect toward open source principles.
Unfortunately, Facebook is not an exceptional case, but merely a case study in the rule of an extractive economy. This same pattern, for example, describes the infamous Opium Wars between the British Empire and China. A staggering share of the Chinese population had become addicted to opium, simply because British merchant corporations found it profitable. Nothing personal, just business. They didn’t want to create widespread misery. They just didn’t care about (or have to deal with) the consequences of profiteering off of opium sales. They only had to deal with a drop in sales when China tried to intervene in self-defense, which kicked off the First Opium War. China lost, ceded Hong Kong and begrudgingly made opium legal, a tragedy from which it is still trying to recover. The British merchants and shareholders responsible for the Opium Wars were not evil people. They were simply investors seeking to maximize their profit, much like many of us today who hold stock in the stock market.
We all operate with the same vulnerabilities that make us susceptible, e.g., to social media manipulation. Hackers rarely concern themselves with strengths. Rather, they focus on weaknesses — a difficult topic for our society (perhaps even our species) to address. But we must accustom ourselves to acknowledging and addressing weakness and vulnerability. What are the weakest links that render us most vulnerable to manipulation and hijacking? Likewise, what are the weakest links of the manipulation and hijacking chain of production?
The unfortunate pattern of the extractive economy will continue until society develops and enforces clear values that consider not only long-term implications of a profit scheme (to prevent intertemporal discounting), but also non-monetary externalities, such as the impact on public health, social fabric, or ecosystems. That describes the basis for the triple bottom line: not just profit, but also people and planet. Not just economics, but also ethics and environment.
Who or what will bring about such change? Again, the white hat/black hat distinction applies at this level of hacking. The Center for Humane Technology (CHT), in contrast to an outfit like Facebook, represents the first real example of what I would consider a “white hat lab” for the same innate weaknesses and vulnerabilities that Facebook exploits: for example, it created a “ledger of harms” to track the true cost of our current and historical relationship with technology. Beyond a simple accounting process, such information could become embedded in monetary systems pegged to public welfare rather than reductionist profit. Imagine economic growth measured not just by sales, but by actual, measurable growth in general public welfare, in a holistic monetary scheme that accounts for and discourages externalities through an embedded accounting process. Under such a scheme, only ethical projects would prove profitable. Sound far-fetched? Check out the Regen Network. They are creating a currency whose value is tied to the verification of regenerative outcomes. Both the Regen Network and CHT give me hope that we (or they, if you see yourself as a passive bystander) might positively shift the role and impact of technology in human life.
In Part two, I will explore the deeper origins of the human relationship with technology, and how that might inform our way forward.
Beyond technological morality and the tip of the iceberg, part two
May 19, 2019
Part two of a two-part series exploring the human relationship with technology.
From moral dualism to contextual ethics
In part one, I introduced the concept of vulnerabilities, the hackers who try to understand them, and the economic and social context in which those hackers operate. I ended with a transition from software/hardware hacking to how corporations like Facebook have successfully hacked and exploited vulnerabilities in the human psyche for profit.
In part two, I will explore the deeper history of the human relationship with technology, and what it means for the development of a more humane outcome that acknowledges, respects and appreciates humans holistically: mind, body, spirit, relationships, environment.
The innate human vulnerabilities I mentioned in part one didn’t come out of nowhere. Likewise, they aren’t as simple as software to patch. Lastly, we need to be careful about what we label “buggy behavior”: it is highly situation-dependent, and we can quickly end up in a victim-blaming, social-Darwinist tailspin of naively labeling more sensitive or susceptible people as “weak.” Traits we often consider weaknesses or vulnerabilities in today’s social climate have, for most of our species’ existence, served us as strengths.
For example, consider the breathing process and fight or flight (the sympathetic nervous response). When we perceive a threat or stressor, our body prepares to fight or flee, flooding itself with oxygen and adrenaline that it expects to use in an amazing feat of confrontation or escape. This maximized our chance of surviving physically-dangerous encounters. But in a modern context, both chronic and acute stressors are often socially-constructed rather than directly physical. In this context, fight or flight fails us in two ways: 1. physical fight or flight often gets us into even more trouble, and therefore causes more stress damage from the social consequences we were trying to avoid in the first place, and 2. the flood of oxygen in a “sedentary stress” situation (such as checking email) becomes excessive. Instead of helping us get out of a bad situation, it often perpetuates, extends or even creates the bad situation by translating into oxidative stress and inflammation, which can lead to depression, anxiety and irritability. Where do depression, anxiety and irritability get us? It quickly becomes a vicious cycle. In the past, our evolutionary stress response helped us avoid further stress. Currently, it contributes to greater stress, provoking more frequent and severe stress responses. Our health and relationships suffer.
I call this vicious cycle of previously-good characteristics becoming liabilities in today’s modern world “evolutionary dysregulation.” In this particular case, physical stress responses have become virtual, decoupling the stress from any actual need for more oxygen. Our evolved response no longer matches our modern sociocultural circumstances. We are simply not well-adapted to the circumstances we have created.
Breath is a drug. It is not good or bad. How we breathe matters. Likewise, technology is also a drug. How we “technify” matters. In the same way that breath work is important, we can mitigate bad outcomes in part through education and training. But while this helps us respond more effectively to bad situations, it doesn’t address the bad situation itself. What other leverage points do we have? If we created the circumstances in which we suffer, can’t we also create circumstances that help us flourish?
1. We can’t really mess with our evolutionary heritage. It is what it is. We are a long way from gene hacking, and can barely wrap our minds around the ethics of software hacking. So we have to work with it. That means we need to focus more on other opportunities, such as…
2. Environmental factors (context). Contextual factors determine whether something is beneficial or detrimental. In other words, nothing is inherently “good” or “bad.” This occurs in two ways with regards to genetics:
a. The behavior or gene or other characteristic itself doesn’t change, but becomes an asset or liability purely due to a change in context. Example: in a malaria-prone context, the sickle-cell trait becomes beneficial due to its innate resistance to malaria infection, even though it comes at the cost of decreased red blood cell function otherwise.
b. The behavior or gene expression actually changes (epigenetically) based on context. This describes epigenetics: not only do the genes you have matter, but also whether/when and where/how they express themselves. Different circumstances can turn certain genes on or off, or change when or how they activate. Thus, while we can’t directly hack our genes, we can hack how the genes we do have express themselves, by working with the circumstances in which those genes exist. That means our diet, lifestyle, and social and ecological context.
A significant part of our lifestyle includes the type, amount, frequency and duration of our exposure to various technologies throughout our lives, especially during heavy developmental phases (childhood and adolescence). We (as individuals and as a species) can have an immensely positive impact on our lives by changing when, where, how and to what extent we engage with a given type of technology, and also the type of technology we engage with. What do technologies that engage us only on our terms, to our benefit, look and act like? In what socioeconomic context do they make sense?
We can ascribe two major labels to technology: informational (IT) and mechanical (MT). A chainsaw is mechanical. A computer, smartphone or TV is obviously informational. But these are not hard distinctions: IT involves MT, and MT involves IT. A chainsaw mediates our relationship with the rest of the world, as does Facebook. Each acts as an “informational server and filter”: it takes in information, rejects certain types of information as invalid (which therefore become non-existent), formats the remaining information for compatibility and distribution, and then redistributes that information to interested parties. Facebook takes information from users and clients (advertisers), then filters, formats (transforms) and redistributes it to users and clients. A chainsaw takes in information from our senses, then filters it and redistributes it to our hands to operate the chainsaw.
Thanks to the leadership of the Center for Humane Technology (CHT), we can now clearly sub-categorize the IT label based on how it interacts with our sociobiological programming: does it attempt to hijack our vulnerabilities, as a proxy for black hat work, or does it attempt to expose and “patch” our vulnerabilities with good habits, as a proxy for white hat work? Does it distract us from life, or help us develop and share our gifts? The lines aren’t always so clear. I wouldn’t call a chainsaw (or Facebook) inherently bad, but each has a tendency to hijack our perception to make itself useful. I have seen this over and over again with both MT and IT. My permaculture instructor, Toby Hemenway (RIP 2016), first made me aware of this phenomenon in 2006. In the same way that “if all you have is a hammer, all your problems start looking like nails,” likewise, “spend enough time behind a chainsaw, and a forest starts to look like a collection of trees (with some other ‘unimportant’ stuff thrown in), and then the trees start looking like board-feet.” Before you know it, you are clear-cutting forests.
Technology enables the extractive economy. But does technology determine extraction? If so, it seems we would be doomed as a species to self-extinguish…We need a better understanding of our vulnerabilities with technology, precisely because we evolved to depend on technology for our survival. Our bodies are designed to develop and make use of technology. We are mostly hairless — we need clothes and foot protection. Our big brains and dextrous hands imagine, build and wield tools that enhance our strengths and mitigate our weaknesses. To naively dismiss technology as inherently “bad” would call into question the basis for the existence of our species. Likewise, to naively embrace all technology as inherently or innately “good” has gotten us into our current situation. A simple moral philosophy of bad/good ignores and distracts from the actual depth and complexity of the issue. Other species can make and use tools, but they don’t depend on them to the level we do. Who, for example, disputes “clothing” as a basic human need alongside food and shelter? What other species also uses clothing to protect itself from environmental hazards?
As we depend on technology, we must reconcile our relationship with it. To accomplish that, we must also understand our historical relationship with it. (what follows is a very dense condensation of a much more complex topic that ignores interesting nuances and does not apply to all humans everywhere at all times, but certainly applies to the species as a whole)
Humans have oversized brains relative to body weight compared to most other species. We are also weaker and slower, per body weight or size, than almost every other species. At our best, we can’t climb, run, jump, bite, scratch or hit well enough to compete with most other large species, prey or predator. So how do we survive? With the same brain that makes us so vulnerable. We invent — and heavily rely on — tools of defense and assault.
Our large brain also happens to require an incredible number of calories relative to body weight. We evolved to meet this caloric need through animal fats. While vulnerable to more adept predators, we may have used rocks to smash open bones and eat the nutritious marrow left over from a kill, after a succession of more dangerous predators took what they wanted. Predators kept our population and behavior regulated, until we began using weapons. First blunt weapons such as rocks; then the edge, which became knives and spear tips and bows and arrows that gave us ranged attacks and made us deadly predators and competitors almost overnight. Through these inventions, we escaped predator regulation, and our ecological niche changed from prey/scavenger to apex predator. We could not only hunt and kill our own prey, but also effectively defend the food from competitors. Yet we maintained a prey-animal/scavenger mentality inappropriate to our new role. Whereas a predator asks, “What do we need, and when do we need it?”, a scavenger asks, “What can I get?” Technologically empowered with an answer of “anything we want,” that question becomes pathological. Similarly, our fear of apex predators became problematic when we ourselves became an apex, and often the top, predator.
Free from regulation, our population grew exponentially, and bands of hunters split and migrated. We invented clothing to protect ourselves from increasingly extreme conditions as we searched for more food. We began a phase of migratory overkill lasting thousands of years, following and decimating herds of prey species and the other predator species dependent on them. Everywhere we went, massive extinctions and loss of biodiversity followed, without exception. The Bible and the anthropological record both echo this story. While technology freed us, we could not embrace the responsibility that came with such freedom. We still struggle with it today.
From population growth, overkill and extinction we entered a period of prolonged food scarcity and desperation that still marks our psyche. We design and operate entire societies based on the presumption of scarcity. With few or no large animals left, we began eating lower on the food chain, making use of foods with increasingly less balanced and bioavailable nutrition and greater toxicity. We began to eat more of what our disappearing prey used to eat: the parts and children (seeds) of plants. Our control and use of fire allowed us to hunt remaining prey more efficiently and cook increasingly difficult-to-digest food substitutes, even as our use of fire in hunting and ecosystem management contributed to more overkill and extinction, creating fire-adapted and desertifying landscapes. Our dependence on these new carbohydrate-rich food sources enabled sedentary, permanent human settlement through agricultural development, with intensified ecological degradation. Sedentary, carbohydrate-fed populations exploded. We began converting land and biodiversity into more humans and technomass, building the constructs we now call “cities” — the basis of civilization.
Settlements grew in social complexity, developing institutions and hierarchies. Populations in excess of Dunbar’s Number (the social carrying capacity of the human brain) became common, and the increased social alienation and stress required dedicated administrative attention and tools to retain social order. Humans evolved with linguistic capacity and a capacity for symbolic thought and understanding. We combined these two capacities to create the written language, the predecessor and foundation of all modern IT. As an administrative and accounting tool, writing has in its origins an association with sociopolitical power. It still holds that same sway today. We say, “knowledge is power,” and we tend to associate literacy with (access to) knowledge and power.
Literacy became linked to power and prestige in the first race to the bottom, as literate people of influence soon found their eyesight suffering due to atrophy of the focal muscles. Eye problems and other chronic illnesses in civilization stem from two main causes: malnutrition and technological ergonomics (how often we blink and change focal points). IT, starting with written language, decreased blinking (lubrication and moisture) and initiated the atrophy of the optical focal apparatus by putting us in front of non-fluctuating, near focal points for increasing durations at increasing frequencies. A secondary impact of decreased moisture and increased focal muscle atrophy is enhanced stiffening of the focal membrane with age. Modern multimedia technology intensified the problem, with attention-grabbing features like strategically-designed flicker that have demonstrably lowered our blink rate, causing the eye itself to dry out and become brittle.
In a physical sense, and as an example of fundamentally-unergonomic technology design, reading and writing are not really good for us. Literacy is literally bad for our eyes, locking us into a near-sighted state of increasing blindness. It can also be bad for our posture, neck and shoulders, and is strongly correlated with chronic illness linked to physical inactivity. We also suffer regular and widespread hearing loss from loud noises of technological origin. Today’s typical civilized humans enjoy a fraction of the physical and sensory capacities that our ancestors and indigenous cousins enjoy(ed). Some part of our collective consciousness must understand this, as the popularized depiction of aliens as bald, little, pot-bellied figures with very large heads and advanced technology caricatures and predicts the relationship between technological development and physical degeneration.
Our (mis)use of technology in our role as a brand-new apex predator helped our species play a central role in the global extinction of most of the world’s modern megafauna, both predators and prey (about 70-90% extinction rates wherever we migrated). It did not seem to concern us that the speed of this transition into severely-augmented capacities gave most other species very little chance to adapt, until it was too late and entire populations and even species disappeared. This pattern of extinction occurred with simple technologies such as blades and spears and clothing and shelter (which allowed us to inhabit harsh climates and biomes away from the tropics), long before the appearance of the first civilization. If a piece of technology is inherently good, you needn’t consider when, where or how to use it. You just use it.
It soon became common for humans to have relatively little direct interaction with and feel alienated or independent from the rest of the biosphere. From this alienation, we developed the erroneous belief that technology is inherently “good,” even to the point of being capable of providing salvation: “Technology will save us.” More technology. New technology. Better technology. Technology will deliver us from scarcity, rather than reproduce it. And to some extent, as this account attests, it has. Technological advancement has opened new resources for us to exploit. But our relationship with it up to this point also entraps and threatens to destroy us and everything we love.
Our struggle to design and use technology ethically goes back thousands of years, at least. You might even say it is baked into our DNA. By extension, the modern problems with technology we face today evolved out of our struggling relationship with the world and our place in it. The problem of Facebook manipulating our perspective has had thousands of years to develop; it is neither recent nor original. While humans are a fundamentally technological (and tech-dependent) species, we are not fundamentally literate. We created a new way to communicate when what we really craved was greater connection, a tragic pattern that prototyped all the social media to follow. We became a highly communicative, though not necessarily more connected, species. The written language has had a profound impact on our cultural psychology and evolution. The rest, as they say, is history…written history, to be exact.
Do not mistake me for arguing against or hating technology. I write this essay on a modern Core i5 laptop running Linux kernel version 5.0. I am not a Luddite. Nor do I think our species can afford or achieve Luddism without millions of years of evolutionary development toward that capacity. The point of this brief history was to highlight how deeply our struggles with technology are integrated into our identity and even our existence as a species. The modern problems of technological addiction and the negative impacts of technology on our development are outgrowths of this more fundamental issue that runs through the history of our species. This is a problem embedded within the developmental journey of our species, a species-wide crisis of identity bigger than any government or non-profit, or certainly any single person, no matter how much breath work or change in diet and lifestyle we (try to) do, no matter what positions of socioeconomic privilege anyone holds. These are structural issues that will require highly coordinated, collaborative attention to change.
I believe the root problem of technology is fundamentally relational. We can think of technology as an amplifier of impact, for better or worse. We have often suffered unintended consequences, both unexpected and excessive outcomes, whenever we engaged with technology naively. Blaming or crediting technology itself has rendered us blind to the relationship between technology and context, unable to assess when, where and how a given technology may prove appropriate. This context-blindness has enabled the naive “more is better” mentality: specifically, more power and more technology. If an axe works, a chainsaw must work better. If “chopping down as many trees as quickly as possible” is the only value reflected in the use scenario, then that is true. If the use scenario also represents our values of safety, health and fitness, resilient thriving ecosystems and renewable energy sources, then our answer might change.
We, humans, can create, modify and destroy those use scenarios. Ironically, we have failed to address the issue of an ethical relationship with our technical capacities only to the extent that we haven’t yet turned those capacities toward the task. I see that beginning to change, which gives me hope. We are beginning to hack into the barriers and threats to our development and existence as a species. We are beginning to hook ourselves on solving the problem of technological addiction and abuse. The main components of this paradigm of struggle involve our innate characteristics, our beliefs about technology, and the circumstances in which technological design and use occur. Those circumstances include the state of ecosystems, culture, laws, economy, socioeconomic psychology and the extant sum of technomass with which we currently exist, along with a holistic and accurate understanding of the consequences of making and using that technology in a given context.
As we cannot ethically or effectively modify our genetic evolutionary heritage, our first barrier is moving past the falsely-dualistic “technology good/bad” belief paradigm. We must acknowledge our technological dependence and end our worship/fear of technology. In doing so, we shift from a falsely-dualistic moral philosophy of technology to an ethical philosophy capable of considering both use and context in addition to inherent design. Instead of blaming ourselves or the technology, we must accept responsibility to pursue the opportunities available to us from this point. Once we transcend the dualism, we can begin to engage differently with our understanding of the current and appropriate role(s) of technology in human existence. We are neither victim nor perpetrator, nor is this a question of individual willpower.
We also need to consider the context of the technology itself. Just as every tool has a technique, a way in which it was designed to be used, it also has an appropriate context, which determines whether and to what extent it can contribute to general welfare. The context interacts with the design and use to determine appropriateness. For example, during the Cold War, a nuclear stockpile made sense in the context of an arms race, to get us from “assured destruction” (hegemony) to mutually-assured destruction (stalemate). The stalemate gave way to détente and, eventually, the demand for nuclear non-proliferation. Our ongoing struggle to implement nuclear non-proliferation and decommission nuclear weapons indicates the extent to which the Cold War persists.
It also made sense to develop nuclear weapon capabilities in a context where several warring nations were attempting to gain strategic superiority. In this context, all outcomes except one affirm the existence of nuclear development. All nations would have to back out of it together for backing out to matter, which is impossible in the context of a power struggle. Unilaterally backing out would affirm the superiority of any nations that didn’t. And participating only further spurred the arms race. In this context, it becomes a bit more understandable why nations such as Iran, Pakistan and North Korea pursue nukes when the US, Russia, UK, France, China and Israel all have them. Our ongoing struggle to decommission nuclear weapons indicates the extent to which nations still struggle and compete for power and control. Until the relationships between nations change away from power struggles, nuclear weapons will persist.
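To make the logic concrete, here is a toy payoff matrix with invented values; it is a sketch of the standard arms-race dilemma, not a claim about actual strategic calculations:

```python
# Toy two-nation arms-race payoff matrix (values invented for illustration).
# Each cell holds (payoff_A, payoff_B); higher is better.
payoffs = {
    ("disarm", "disarm"): (3, 3),  # shared best: resources freed for welfare
    ("disarm", "arm"):    (0, 4),  # unilateral disarmament cedes superiority
    ("arm",    "disarm"): (4, 0),
    ("arm",    "arm"):    (1, 1),  # costly stalemate of mutual deterrence
}

# Whatever B does, A scores higher by arming (4 > 3, 1 > 0), and the game
# is symmetric, so both "rationally" arm and land in (1, 1), which is worse
# for both than mutual disarmament. Only a joint, enforceable agreement
# changes the context in which arming makes sense.
for b_choice in ("disarm", "arm"):
    best = max(("disarm", "arm"), key=lambda a: payoffs[(a, b_choice)][0])
    print(f"If B chooses {b_choice!r}, A's best reply is {best!r}")
```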
The example of nuclear proliferation exemplifies the parable of the tribes: one tribe mobilizes for power, and surrounding tribes must either do the same or perish. Nuclear proliferation remains one of the most universally-held examples of “inherent technological evil” and a “race to the bottom” from the old morally-dualistic paradigm. This example demonstrates how a technology can (re)create and affirm the context in which it makes sense for it to exist and operate. We can say the same of assault weapons, cars, literacy and social media. As a young teenager, I saw an overnight transition in when, where and how all my friends and cohort socialized, brought about by a new invention: AOL Instant Messenger. Friends used to hang out in the neighborhood after school. Then some of them stopped, and the rest of us asked what they were doing. They were going home immediately after school and getting on the computer to chat with each other over AOL IM. It offered some substantive benefits: it made chatting with multiple people from multiple cliques at the same time very easy, I felt I could take time and space to compose my interactions on my terms, and it was easier to say “goodbye” online than on the phone or in person. In reflection, I made the transition simply because everyone else had, not because I felt it was a superior way to socialize. I wonder whether everyone else felt the same.
All history is naturalized, but it is not deterministic. While the development and use of technology itself may be baked into our DNA, what we develop, how we use it, and to what effect are not. But we have some catching up to do. As entomologist E.O. Wilson reflected, our modern struggle emerges from the fact that we have paleolithic emotions (and bodies and genes), medieval socioeconomic institutions, and god-like technology. We need to redirect the capacities we have used to create that god-like technology toward the redesign of our institutions to respect and affirm, rather than manipulate, our emotions (and bodies and genes). We are a clever species. We just aren’t very wise, yet. That is our struggle, and it helps describe the struggle and mission of the CHT: the pursuit of technological wisdom and ethics to steer us away from our historic tendencies.
What, fundamentally, were we trying to accomplish with developing and using AOL IM or nuclear weapons? To what extent did we actually accomplish it? How do we feel about the outcomes? By understanding the context in which our current technological struggles make sense, we can begin to explore context shifts and the work necessary to bring them about. That means clarifying our values and matching our education (structure and process as well as content), economy and institutions (such as government, law and organizational structure) to our values.
The comparison between AOL IM and nuclear weapons may feel jarring, but I wonder whether we could use something like a “social media non-proliferation treaty” in a context where everyone is trying to “out-hook” or “out-proliferate” everyone else. Nuclear proliferation in this example serves as an analogy for how the “race to the bottom” occurs as a social construction. To undo that social construction and change our behavior, we need to move away from the context in which such behavior makes logical sense. This partly comes in the form of a completely novel psychoeconomic model of an IT marketplace, transforming competition for attention and dependency to competition (and even competitive collaboration) for demonstrative positive (humane) outcomes.
Context shift differs from merely writing new apps or designing new technologies, such as Loop Habit Tracker, that aim to use the “hooked” model of addiction for a higher purpose. Doing so helps contribute to that context shift, but it has severe limits if it ignores the context of an overall race to the bottom of the brain stem. It is not enough to add some good into a harmful context. We must at some point also directly address the contextual factors that lead to undesired tendencies, in order to continue past harm reduction toward positive outcomes that address our core existential and developmental needs.
This means creating an economic context that seeks, values and rewards (socially, financially, psychologically, spiritually) humane tech design. Part of this transition can involve “hooking” Silicon Valley with Humane Technology habits using the hooked model itself to reinforce more traditional efforts to empower well-meaning allies with education and training, policy and design work and other transformative tactics and strategies. It also involves a transformation of economic production, uncoupling it from socioecological degeneration and recoupling it to socioecological regeneration. The Regen Network exemplifies this sort of work.
Likewise, policy in large part forms the legal context in which markets operate. Populist responses to the issue of accountability typically involve more targeted regulation of markets (e.g., https://www.npr.org/2019/05/09/721685211/facebook-co-founder-hughes-says-zuckerberg-must-be-held-accountable).
I do not oppose regulation. However, micromanaging markets through highly-specific regulation is a futile game of whack-a-mole. What are the conditions we need to bring about to create and use IT (and other technology) in ways that enhance the pursuit of human quality of life rather than exploiting it? The Community Rights movement has made strange bedfellows of the political left and right as it explores a very similar question about the relationship between liberty, rights, prosperity, sovereignty and quality of life. The Community Rights movement works to create a legal context based completely on recognizing and protecting the inalienable rights of natural entities to exist and flourish, especially wherever they conflict with legal fictions (such as governments and corporations). Likewise, a discussion of regulation ignores the extent to which economic transformation may address issues of ethical behavior. Economic transformation does not serve as an alternative to policy work, but depends upon it. As the Community Rights movement demonstrates, we must use laws and other policy process far beyond targeted regulation to change the fundamental behavior of markets.
Finally, we must clarify our values to find our sense of belonging in nature and with the land and planet that ultimately birthed us and supports our existence more fundamentally than any technology. Only then will we be able to ask questions about what our technology needs to accomplish to support our existence and values and affirm our sense of belonging and connection. Once we start asking those questions, we can start looking at how design interacts with intended outcomes appropriate to context. In this phase of human transition and development, we have a contextually-appropriate need for technologies of transition and development. We need to make sure our proposed solutions are not band-aids sitting on top of poorly-designed and poorly-targeted reward systems. Understanding this, we can use principles of biomimicry to design technological and other contexts that better meet our evolved needs and support our individual and collective development. Permaculture and other ecological design sciences have much to offer in this respect, by way of pattern languages and design principles for appropriate technology and human habitat.
For example, oral traditions were optimal in uncivilized societies of relatively small groups with high levels of trust, as they affirmed connection to and between land and people using innate capacities. Literacy evolved as a means to the greater end of storing and transmitting knowledge and experience in a manner that builds and maintains trust (i.e., through the “written record”) in the context of complex social structures where people interact regularly with others whom they know only in very shallow ways. However, our modern technological capacity has given us the opportunity to turn our focus back to the more fundamental problems of humane social structure and knowledge systems appropriate to our needs and the needs of the planet. Combined with modern understandings of biomimicry (using nature as inspiration for solving complex human problems), we can guide information technology to honor our evolutionary roots in oral tradition, and also create new ways of distributing and engaging with information that help rather than harm our bodies, relationships and landscapes, and that create sociobiologically-appropriate conditions for lives of connection, belonging, purpose and fulfillment — a switch from dopamine-centric responses to serotonin-centric responses; a paradigm shift from scarcity to abundance.
Our relationship with technology is a never-ending, iterative process requiring constant monitoring, planning and intervention, because it represents a type II chaotic system: not only is it inherently difficult to predict outcomes and trends (like weather reports), but our understanding of the system changes the way it behaves. Imagine if accurately predicting the weather actually caused the weather itself to change. That is the amount of complexity we are dealing with. The only framework I have encountered up to this point that can help us ethically and effectively engage with such unpredictable and chaotic systems is Holistic Management, a decision-making framework for managing complexity. At its most general, managing holistically involves four components (a toy illustration of the type II twist follows the list):
1. Understand the most important elements of our context (Whole Under Management, WUM) with regards to our decision-making, and “checking” our decisions against that understanding.
2. Maintain access to a full set of management tools (keep your options open)
3. Explore and understand the appropriate application of tools to context
4. Create a feedback loop through ecological, social and financial planning and monitoring.
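Here is a toy Python loop (the dynamics and overreaction factor are invented, not a real model) showing the type II twist: a forecast that would be perfect in a type I system invalidates itself the moment the system reacts to it.

```python
# Toy "type II chaos" loop (dynamics invented for illustration). The managed
# quantity reacts to whatever forecast gets published, so even a forecast
# that is perfectly accurate at publication time keeps missing.

state = 0.8         # some managed quantity on a 0..1 scale
OVERREACTION = 2.4  # how strongly behavior swings against a published forecast

def react(s: float, published: float) -> float:
    """People hear the forecast and adjust against it (clipped to 0..1)."""
    return max(0.0, min(1.0, s + OVERREACTION * (0.5 - published)))

for step in range(6):
    forecast = state                # a "perfect" reading of current behavior
    state = react(state, forecast)  # ...which shifts behavior once published
    print(f"step {step}: forecast {forecast:.2f}, actual {state:.2f}")
# The forecasts never settle: each publication moves the target. A standing
# feedback loop (plan, act, monitor, replan) is the only strategy that keeps
# up, which is what the fourth component above institutionalizes.
```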
Without any one of these components, management becomes completely reductionist. Reductionism simplifies complexity and destroys life processes, including ecosystems and quality of life. Therefore, the management process itself also requires monitoring as to whether it trends holistic or reductionist (per the four criteria above). Given the complexity of the Whole Under Management, we will likely never achieve even a momentarily perfect implementation of HM. Monitoring management itself not only allows us to move toward more holistic approaches, but also allows us to assess the quality and rate of adoption among those of us claiming to manage holistically. Thus monitoring management itself supports higher rates and quality of adoption and implementation, ensuring the integrity of management.
The recent WhatsApp debacle, and even the bigger question of Facebook’s role and impact in society, represents merely the tip of the iceberg of the challenges we face as a species in our relationship with technology. There is much more to do to bring about a revolution of humane technology, but the current work of the CHT gives me hope that enough people are interested and willing to try to bridge the gap between our ability to design and create technology and our ability to use it appropriately.