In the 20th century, politicians’ views of human nature shaped societies. But now, creators of new technologies increasingly drive societal change. Their view of human nature may shape the 21st century. We must know what technologists see in humanity’s heart.

The economist Thomas Sowell proposed two visions of human nature. The utopian vision sees people as naturally good. The world corrupts us, but the wise can perfect us.
The tragic vision sees us as inherently flawed. Our sickness is selfishness. We cannot be trusted with power over others. There are no perfect solutions, only imperfect trade-offs.
Science supports the tragic vision. So does history. The French, Russian and Chinese revolutions were driven by utopian visions. They paved their paths to paradise with 50 million dead.
The USA’s founding fathers held the tragic vision. They created checks and balances to constrain political leaders’ worst impulses.
Technologists’ visions
Yet when Americans founded online social networks, the tragic vision was forgotten. Founders were trusted to juggle their self-interest and the public interest when designing these networks and amassing vast troves of data.
Users, companies and countries were trusted not to abuse their new social-networked power. Mobs were not constrained. This led to abuse and manipulation.
Belatedly, social networks have adopted tragic visions. Facebook now acknowledges regulation is needed to get the best from social media.
Tech billionaire Elon Musk dabbles in both the tragic and utopian visions. He thinks most people are actually pretty good. But he supports market, not government, control; wants competition to keep us honest; and sees evil in individuals.
Musk’s tragic vision propels us to Mars in case short-sighted selfishness destroys Earth. Yet his utopian vision assumes people on Mars could be entrusted with the direct democracy that America’s founding fathers feared. His utopian vision also assumes giving us tools to think better won’t simply enhance our Machiavellianism.
Bill Gates leans to the tragic and tries to create a better world within humanity’s constraints. Gates recognises our self-interest and supports market-based rewards to help us behave better. Yet he believes creative capitalism can tie self-interest to our inbuilt desire to help others, benefiting all.
[Image: Peter Thiel considers the code of human nature. Credit: Heisenberg Media/Flickr, CC BY-SA]
A different tragic vision lies in the writings of Peter Thiel. This billionaire tech investor was influenced by philosophers Leo Strauss and Carl Schmitt. Both believed evil, in the form of a drive for dominance, is part of our nature.
Thiel dismisses the Enlightenment view of the natural goodness of humanity. Instead, he approvingly cites the view that humans are potentially evil or at least dangerous beings.
The consequences of seeing evil
The German philosopher Friedrich Nietzsche warned that those who fight monsters must beware of becoming monsters themselves. He was right.
People who believe in evil are more likely to demonise, dehumanise, and punish wrongdoers. They are more likely to support violence before and after another’s transgression. They feel that redemptive violence can eradicate evil and save the world. Americans who believe in evil are more likely to support torture, killing terrorists and America’s possession of nuclear weapons.
Technologists who see evil risk creating coercive solutions. Those who believe in evil are less likely to think deeply about why people act as they do. They are also less likely to see how situations influence people’s actions.
Two years after 9/11, Peter Thiel founded Palantir. This company creates software to analyse big data sets, helping businesses fight fraud and the US government combat crime.
Thiel is a Republican-supporting libertarian. Yet, he appointed a Democrat-supporting neo-Marxist, Alex Karp, as Palantir’s CEO. Beneath their differences lies a shared belief in the inherent dangerousness of humans. Karp’s PhD thesis argued that we have a fundamental aggressive drive towards death and destruction.
Just as believing in evil is associated with supporting pre-emptive aggression, Palantir doesn’t just wait for people to commit crimes. It has patented a crime risk forecasting system to predict crimes and has trialled predictive policing. This has raised concerns.
Karp’s tragic vision acknowledges that Palantir needs constraints. He stresses the judiciary must put checks and balances on the implementation of Palantir’s technology. He says the use of Palantir’s software should be decided by society in an open debate, rather than by Silicon Valley engineers.
Yet, Thiel cites philosopher Leo Strauss’s suggestion that America partly owes her greatness to her occasional deviation from principles of freedom and justice. Strauss recommended hiding such deviations under a veil.
Thiel introduces the Straussian argument that only the secret coordination of the world’s intelligence services can support a US-led international peace. This recalls Colonel Jessop in the film A Few Good Men, who felt he should deal with dangerous truths in darkness.
Seeing evil after 9/11 led technologists and governments to overreach in their surveillance. This included XKEYSCORE, a formerly secret computer system used by the US National Security Agency to collect people’s internet data, which is linked to Palantir. The American people rejected this approach, and democratic processes increased oversight and limited surveillance.
Facing the abyss
Tragic visions pose risks. Freedom may be unnecessarily and coercively limited. External roots of violence, like scarcity and exclusion, may be overlooked. Yet if technology creates economic growth it will address many external causes of conflict.
Utopian visions ignore the dangers within. Technology that only changes the world is insufficient to save us from our selfishness and, as I argue in a forthcoming book, our spite.
Technology must change the world working within the constraints of human nature. Crucially, as Karp notes, democratic institutions, not technologists, must ultimately decide society’s shape. Technology’s outputs must be democracy’s inputs.
This may involve us acknowledging hard truths about our nature. But what if society does not wish to face these? Those who cannot handle truth make others fear to speak it.
Straussian technologists, who believe but dare not speak dangerous truths, may feel compelled to protect society in undemocratic darkness. They overstep, yet are encouraged to do so by those who see more harm in speech than in its suppression.
The ancient Greeks had a name for someone with the courage to tell truths that could put them in danger: the parrhesiast. But the parrhesiast needed a listener who promised not to react with anger. This parrhesiastic contract allowed dangerous truth-telling.
We have shredded this contract. We must renew it. Armed with the truth, the Greeks felt they could take care of themselves and others. Armed with both truth and technology we can move closer to fulfilling this promise.
This article is republished from The Conversation by Simon McCarthy-Jones, Associate Professor in Clinical Psychology and Neuropsychology, Trinity College Dublin, under a Creative Commons license. Read the original article.
Published September 14, 2020 — 10:00 UTC