Part one of a two-part series exploring the human relationship with technology
A white hat approach to the information economy
In Part one, I introduce some basic concepts in the sociology of technology and economy to examine the current state of information technology. In Part two, I introduce humanity’s historical and evolutionary relationship with technology to explore the implications for developing an ethical relationship with technology.
The recent revelation that an Israeli hacking company, NSO Group, had been secretly selling security exploits to governments illustrates why closed-source, publicly-held companies often dislike the open source ethos. We can examine this situation by looking at the two classical types, or intentions, of “hacking” from the perspective of proprietary, for-profit (especially publicly-held) software and IT development. WhatsApp, the target of those exploits, is a closed-source messaging app owned by Facebook, which is in turn publicly held.
Hacking itself is merely figuring out how to gain access to something. In the process, it involves learning how something works, and what might break it, or what could fix or improve it. What lessons someone learns from a hacking project, and how they use that knowledge, depends largely on the type of hacking they do.
“Black hat” hackers are traditionally known as the “bad guys.” Their work encompasses three primary objectives:
1. Find weaknesses, vulnerabilities or bugs in current design
2. Figure out how to exploit them
3. Make money or gain prestige
a. through direct exploitation
b. through the “black hat” marketplace
This is reportedly what the NSO Group did: it found exploits and, rather than responsibly reporting them to WhatsApp, sold them to government agencies interested in spying on people’s communications. Social conscience and implications be damned.
In contrast, “white hat” hackers pursue the same three objectives, but by different means:
1. Find weaknesses, vulnerabilities or bugs in current design
2. Figure out how to exploit them
3. Make money or gain prestige
a. by reporting the bugs directly to the company, and/or when the company doesn’t respond,
b. by making the bugs public, often to put pressure on the company to fix its problems and resecure the data for which it is responsible
White hat hackers are classically the “good guys”: they concern themselves with others’ vulnerabilities and work to identify and fix them. Open source software developers make up a large subset of white hats. Their motives are not purely altruistic (they benefit from social prestige and income), but they attempt to integrate that self-interest with the common good. Often, hacking falls into a grey area: black hats sometimes do white hat things (like making an exploit public, but only after playing around with it or benefiting from it in some other way), and white hats sometimes break laws that conflict with their ethical intentions.
I associate the white hat/black hat terms with Mad magazine’s Spy vs. Spy comic, a Cold War critique in which two nearly identical spies were locked in endless, often deadly competition with one another, identical in appearance and behavior and differing only in the color of their clothing. In fact, the white and black hats of the hacking world derive from the Western movie genre, where bad-guy archetypes wore dark hats and clothing and good-guy archetypes wore light ones, in a simple moral dualism that our culture finds comforting. Spy vs. Spy serves as an analogy for the general confusion between the surface-level similarities of the two categories of hacking: all three objectives are the same. Yet the devil is in the details and intentions. White hat hackers seek to solve problems, whereas black hat hackers seek to exploit them. So why do we struggle so much to distinguish between them? Our society has a tendency to “shoot the messenger,” and white hat hackers by definition position themselves as messengers. Black hats turn vulnerabilities into greater problems by exploiting them; poor socioeconomic reactions to well-intentioned white hats make mountains out of molehills. In either case, white and black hats get lumped together as troublemakers, which in turn creates a negative general opinion of “hacking,” as if it were synonymous with troublemaking.
The ethics of making software exploits public knowledge stem from the collision between closed-source (proprietary, secretive) and open-source models in our current economic context. Black hat hacking has immediate financial incentives attached to it; white hat hacking doesn’t. You have to gain notoriety and become a security contractor before your contributions to security are recognized as legitimate and legal. And even then, your accomplishments and the skills they represent lead many to treat you like a liability. From the start, then, the scales are tipped against the white hat. At their most disruptive, white hat hackers act like whistleblowers. Nominally, they simply operate with an open source philosophy: shine a light into darkness. And with the transparency of open source values comes accountability.
However, accountability is expensive: it takes time, expertise and money to track down, fix and verify vulnerabilities and other flaws. It takes far less to not look for them, or to ignore them entirely. When a company successfully avoids accountability, it can externalize those costs (such as security vulnerabilities) onto its users. This externality often takes the form of what economists call “intertemporal discounting”: you reap benefits now and deal with the snowballing consequences somewhere “in the future.” If that “someone else’s problem somewhere else at some other time” mentality sounds familiar, it describes a lot of how our society functions, and helps us understand some of the hidden costs of goods and services offered to us for “free.” Previous and current generations regularly make choices without considering how they will impact their own future, let alone the lives of subsequent generations. We often end up paying for our short-sightedness later, somehow. It would do us well to assume a cost exists, and ask who, what, when, where and how, rather than “whether.” Put another way: free now, but you’ll typically pay for it later, one way or another.
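To make the discounting logic concrete, here is a minimal sketch of how a future cost shrinks when viewed through a discount rate. The dollar figure and the 10% rate are illustrative assumptions, not data from any real breach:

```python
def present_value(future_cost, annual_rate, years):
    """Exponentially discount a future cost back to today's terms."""
    return future_cost / (1 + annual_rate) ** years

# A hypothetical $1,000,000 breach cost ten years out, discounted at
# 10% per year, "feels" like roughly $385,543 today -- small enough
# for a quarterly-minded company to rationalize ignoring.
deferred = present_value(1_000_000, 0.10, 10)
print(round(deferred))  # 385543
```

The higher the discount rate (or the shorter the planning horizon), the cheaper it appears to defer a fix, which is exactly the incentive structure the paragraph above describes.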
Sooner or later, an unaddressed vulnerability or other flaw in software design becomes public. But when, where, how and to whom it becomes public matters. When a company sits on an unknown security vulnerability (or ignores a known one) for long enough, black hat hackers eventually get ahold of the information they need to pursue an opportunity. They typically get ahold of it first, simply because they are actively looking for such opportunities. When they get to the vulnerability first, they are off to the races to exploit it before a white hat finds and reports or (in an open source circumstance) fixes it. Thus, fixing vulnerabilities is a race between white hat and black hat hackers.
Ideally, white hats find vulnerabilities first, and no opportunity for black hat exploitation ever opens. But publicly-held companies operate with an eye to quarterly profits. To maximize current or even next-quarter profit, it is rationally better for a company to suppress or even ignore vulnerabilities than to allow them to become public, even though this has major repercussions further into the future. Although we, the users of such technologies (and the primary victims of their flaws), all want vulnerabilities and other flaws discovered by the “good guys” and patched or otherwise corrected ASAP, that doesn’t necessarily make sense to a publicly-held, closed-source company, which (a) simply can’t see that far into the future and (b) is far more concerned with immediate profitability anyway. Once a consumer purchases and “owns” a product, providing warranty and support is pure expense; it positively impacts profit only inasmuch as it contributes to company reputation and customer loyalty. While black hats are the bigger threat to a publicly-held closed-source company, white hats keep the company on its toes, increase its short-term expenses (often, ironically, by helping it avoid the black hat threat), and even publicly embarrass it. To such companies, this is the worst of both worlds: all the extra work (to fix exposed flaws) with none of the positive PR. As long as such corporations think they can avoid accountability, all hackers (black or white) are liabilities, annoyances, or threats…with one important exception that I will discuss later.
In many cases, closed source proprietary philosophies and practices make a self-fulfilling prophecy out of white hat intentions. In closed source circumstances, white hats can only identify problems; they don’t have access to the source code to propose or collaborate on solutions. And the closed source proprietary perspective may view anything they do submit as evidence of a violation of proprietary secrets. Thus, in an economic milieu that values and protects secrecy of information, white hats get lumped in with the “bad” black hats. It’s not that competition doesn’t exist in open source circumstances; it’s that competition and collaboration behave very differently there, and often coexist in the same time and space. Closed source circumstances tend to dichotomize competition and collaboration as mutually exclusive: you are either a competitor or a collaborator. In open source, you can be both at the same time, thanks to upstream/downstream project contribution and forking. Many people — even those heavily involved in open source projects — suffer from cognitive dissonance when these two paradigms collide. As a result, it often becomes difficult to structure effective collaborations, and I have seen many react inappropriately to the presence of competition as “bad,” or to collaborative gestures as “disingenuous.”
Coming back to the moral ambiguity of Spy vs. Spy, the view of black hats as “bad” is itself a bit more complex. Closed source philosophies and practices evolved out of, and depend upon, a paradigm of scarcity that leans heavily on a zero-sum analysis: if I have more, you have less, and the value of what I have increases. That is currently how money operates: its value comes not only from collective belief in it as a medium of exchange, but also from its limited presence and unequal distribution, which create and maintain artificial gradients of supply and demand. If only a few people have a million dollars, that million dollars is worth a lot. But if everyone has a million dollars, they might as well each have one dollar, because it is all worth the same. In a scarcity-based economy, distribution of the resource matters. In other words, it’s not so much how big the pie is (economic growth), but how the pieces get divided and distributed.
However, technology development — especially information technology — deals primarily with ideas and concepts (information). And the rules of scarcity do not apply to information, which is more like the pie recipe than the pie itself. My possession of information neither devalues nor excludes your possession of the same or different information. In fact, our mutual possession of information (same or complementary) can actually add value to what we each already have. However, the context of the scarcity economy tends to infiltrate and redefine our thinking about information according to its rules: my recipe quickly becomes a guarded secret. Thus a major conflict exists between the “economic” and “informational” aspects of our modern information economy, preventing us from exploring and engaging with the full potential the information economy promises, a potential likely based on open source principles that embrace, or even depend upon, a pre-analytic vision of abundance rather than scarcity. We may need to develop new monetary systems and related institutions to replace outdated ones and overcome this conflict. At the same time, pursuit of a fully-empowered (open source) information economy also provides us an opportunity to apply an abundance-based paradigm to other aspects of socioeconomic organization. We can just as easily design an economy that ties its value to the health, rather than the exploitation, of the commons: the commonweal. Doing so uncouples economic production from environmental destruction, and recouples it with environmental regeneration and other forms of net good as byproducts of economic activity.
Black hats exist primarily as an outgrowth of the artificial concentrations and gradients of information, money and power created by scarcity-based models of distribution and social dynamics. Nature abhors a gradient. Another way to look at it: bank robbers only make sense in a society where banking represents the accumulated financial interests of an elite few. In this sense, the problem of black hat hacking is a vestige of an elite minority wanting to “have their cake and eat it, too”: to accumulate power, money or information without incurring the inherent security risks of doing so. So we can’t accurately blame the black hat problem on the “immoral” choices or “moral weakness” of individual black hats. Likewise, many white hats in the same economy have grey ethics with regard to their motivations and methods. Like many of the corporations that black hats antagonize, they simply see opportunity. A capitalist society does not make moral (let alone ethical) distinctions between “good” and “bad” economic opportunity. Whether pursuing “black hat” opportunities is legal often depends more on how it impacts the interests of a politically well-connected elite than on ethical considerations. If that elite sees such pursuits as opportunity, they tend to become legal; if it sees them as liability, they tend to become illegal. Morality, ethics and actual impact be damned.
In the best of circumstances, white hats become appreciated and valued security consultants or contractors, and find a socioeconomic niche that rewards (rather than punishes) their ethical focus. And that’s part of the problem: we exist in a cultural, social and economic milieu that, for all our complex laws and rhetoric about accountability, heavily rewards unethical behavior and often punishes or discourages ethical behavior. The two paradigms of open and closed source are completely alien to one another, and this confuses people to no end. In an open source context, competition and cooperation coexist and converge; self-interest and altruism also tend to converge. This tends to lower the stakes from “fight for life” to “play,” the same way wolf pups will “play” ferociously with one another, but will break it up or take a break before someone gets seriously hurt. Through this process, individuals explore and negotiate the limits and qualities of their strengths and weaknesses, their relationships with one another and ultimately their social role or niche. They clarify and develop their identities as individuals and as members of the pack, they find their optimal niche, and they contribute to the overall viability of the pack by increasing its strengths and mitigating its weaknesses. That, by way of analogy, is the goal for our economy.
So, to change this dynamic in the software field, we have a few (potentially complementary) options in social design:
1. Enforce decoupling of closed-source and public holding (you can be one but not the other). In other words, you can be open source and publicly-held, or closed source and privately-held, but not both.
2. Somehow change the stock market time frame to prioritize long-term outcomes over short-term (quarterly) outcomes. The longer the timeframe, the more universal interests become: “self-interest” and altruism tend to blend seamlessly into one another. This change in scale of focus also leads to a substantial difference in accountability and behavior — not just the choices made, but even the range of choices that seem available and viable in the first place.
3. We might consider the implications of requiring an open source ethic for all technology development, as doing so completely changes the ethics of social and market dynamics by inherently supporting transparency and accountability.
4. Incorporate broader consideration of liabilities and assets beyond profit and loss analysis into economic function and valuation. For example, the Center for Humane Technology’s ledger of harms seeks to render visible the previously-ignored externalities of technology design and use. What impact does the technology have on its users? On society? On ecosystems?
The second and fourth options have positive implications far beyond the design and development of technology, extending into all investment activities.
Extending this analysis further: in today’s climate, social media and related app development have largely functioned as “black hats” with regard to the social psychology of software and telecommunications. They hack and exploit vulnerabilities in human hardware and software for profit. This is the basis of the “hooked” model for user experience design: find the vulnerabilities in the human social psyche and exploit them to reap greater attention, screen time, advertising and therefore profits. You “hook” users by designing software to maximize addiction, lowering the work threshold required to receive a positive reward (a dopamine response) until the user internalizes the motivation and achieves dependency or even addiction.
Tristan Harris of the Center for Humane Technology identified five core vulnerabilities in human social psychology that companies like Facebook exploit to maximize the time people spend in their interface:
1. Inaccurate forecasting (how long will this task, project or distraction actually take?)
2. Intermittent variable rewards (aka the infinite scroll, “click next” vortex)
3. Loss-aversion (fear of missing out (FOMO) with regards to something important; but even unimportant things gain importance when they come via a trusted connection)
4. Fast vs slow thought (mindless reaction to quick and easy stimuli vs mindful behavior)
5. Stress and altered states (we enter fight or flight easily, and make impulsive decisions)
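The second item, intermittent variable rewards, is the reinforcement schedule that slot machines and infinite feeds share. A toy sketch of the difference between a fixed and a variable schedule (the function names and the probability value are illustrative assumptions of mine, not drawn from Harris’s work):

```python
import random

def fixed_reward(action_count):
    """Predictable schedule: every 5th action pays off.
    Users learn the pattern, satiate, and stop."""
    return action_count % 5 == 0

def variable_reward(p=0.2):
    """Unpredictable schedule: each action pays off with probability p.
    This is the pattern the infinite scroll exploits -- the next
    swipe might always be the one that rewards."""
    return random.random() < p
```

Behavioral research on reinforcement schedules has long found that variable-ratio schedules produce the most persistent responding, which is precisely what “engagement” metrics end up optimizing for.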
While a company like Facebook (and by extension, WhatsApp) may fear and hate both white and black hat hackers, it does so while failing to acknowledge its own role in society as a black hat biohacker of the human psyche, keeping us plugged into its machines for the maximum time possible. Facebook doesn’t hate hacking generally, or even black hat hacking specifically; Facebook simply doesn’t care for any hacking that doesn’t help it produce greater short-term profit. Does this make Facebook evil? I don’t think so.
Black hats compete constantly for the same scarce resource: access to a potentially lucrative vulnerability. Their opportunity lies in vulnerabilities they can exploit. But black hats don’t merely despise white hats for making a scarce resource even scarcer and more tenuous. White hats represent an ethical system (e.g., open source values) that is completely alien and unintelligible to a black hat economy. Black hats at least make sense to each other: they know what they do and why they do it. White hats don’t make sense to black hats. Hacking is all about understanding and access; whatever is alien and unintelligible remains inaccessible and unavailable to us, for better or worse. Black hats not only hate what white hats do (ruin opportunity), they also don’t understand why white hats do it. In this sense, widespread comprehension of the motivations and ethics of white hats may indicate a significant shift in social affect toward open source principles.
Unfortunately, Facebook is not an exceptional case, but merely a case study in the rule of an extractive economy. The same pattern describes, for example, the infamous Opium Wars between the British Empire and China. A substantial share of the Chinese population had become addicted to opium, simply because British merchant corporations found it profitable. Nothing personal, just business. They didn’t want to create widespread misery; they just didn’t care about (or have to deal with) the consequences of profiteering off of opium sales. They only had to deal with a drop in sales when China tried to intervene in self-defense, which kicked off the war. China lost, ceded Hong Kong and begrudgingly legalized opium, a tragedy from which it is still recovering. The British merchants and shareholders responsible for the Opium Wars were not evil people. They were simply investors seeking to maximize their profit, much like many of us today who hold stock in the market.
We all operate with the same vulnerabilities that make us susceptible, e.g., to social media manipulation. Hackers rarely concern themselves with strengths. Rather, they focus on weaknesses — a difficult topic for our society (perhaps even our species) to address. But we must accustom ourselves to acknowledging and addressing weakness and vulnerability. What are the weakest links that render us most vulnerable to manipulation and hijacking? Likewise, what are the weakest links of the manipulation and hijacking chain of production?
The unfortunate pattern of the extractive economy will continue until society develops and enforces clear values that consider not only long-term implications of a profit scheme (to prevent intertemporal discounting), but also non-monetary externalities, such as the impact on public health, social fabric, or ecosystems. That describes the basis for the triple bottom line: not just profit, but also people and planet. Not just economics, but also ethics and environment.
Who or what will bring about such change? Again, the white hat/black hat distinction applies at this level of hacking. The Center for Humane Technology (CHT), in contrast to an outfit like Facebook, represents the first real example of what I would consider a “white hat lab” for those same innate weaknesses and vulnerabilities that Facebook exploits, for example creating a “ledger of harms” to track the true cost of our current and historical relationship with technology. Beyond a simple accounting process, such information could become embedded in monetary systems pegged to public welfare rather than reductionist profit. Imagine economic growth measured not just by sales, but by actual, measurable growth in general public welfare, in a holistic monetary scheme whose embedded accounting discourages externalities. Under such a scheme, only ethical projects would prove profitable. Sound far-fetched? Check out the Regen Network, which is creating a currency whose value is tied to the verification of regenerative outcomes. Both the Regen Network and CHT give me hope that we (or they, if you see yourself as a passive bystander) might positively shift the role and impact of technology in human life.
In Part two, I will explore the deeper origins of the human relationship with technology, and how that might inform our way forward.