Issue 1

nonfiction

take back the future! the progressive case for techno-optimism

written by Jasmine Sun

I came into college as a baby “social justice warrior” planning to major in ethnic studies and become a professor. I was deeply skeptical of capitalism and the state, so academia seemed like a refuge — the only career path that was intellectually and politically pure. But when I arrived at Stanford in the wake of the 2016 election and the start of the “techlash,” I was shocked to see how much money, power, and attention still poured into the pockets of 20-something engineers who clearly had little idea of what to do with it. Whether or not I liked it, tech was having a huge impact on the communities and issues I cared about, so I wanted to get up to speed on the industry, its builders, and its underlying philosophies.

I began by reading countless books about Silicon Valley: The Know-It-Alls by Noam Cohen, Hackers by Steven Levy, even Ben Horowitz’s The Hard Thing About Hard Things and Peter Thiel’s Zero to One. I talked to venture capitalists and reported on startups for a campus publication. I found myself surprisingly enthralled by the experimental, fast-moving, and forward-looking culture of the startup scene. Tech leaders possessed an infectious optimism: an unfailing belief in the possibility of making the world better than they found it.

Still, I feared selling out, and I was dubious about leaning into an industry that had caused such harm. I sought out more critical, diverse, and interdisciplinary perspectives in science, technology, and society (STS) studies, where scholars were already thinking about the intersection of technology and social justice. While STS offers useful frameworks for describing problems, many articles concluded with a blanket rejection of a technology, or posited no alternative at all. This cynicism didn’t resonate with my experience. Like many others my age, I grew up on the internet. I learned to write by publishing on WordPress, pursued niche research passions from late nights on Google Scholar, and found community in online forums and Twitter group chats when I felt loneliest as a teen. I knew Google and Facebook didn’t have my best interests in mind, but quitting them felt equally untenable.

In theory, it’s easy to reject technology wholesale, but in practice, that means ceding its power to myopic megacorporations. Extreme techno-pessimism excuses technologists from using their skills for good, and excuses the left from using technology’s speed and scale to empower people. As science fiction writer Ted Chiang said in an interview with journalist Ezra Klein, “Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us.” The challenge, then, is how to disentangle the two.

My goal is to make the case for a progressive techno-optimism: not blind faith in the powers that control technology today, but faith in our ability as thinkers, builders, and organizers to join together and deploy technology for a better collective future. Technology is neither inherently good nor bad; rather, it can be part of a broader political project of securing the “good life” for everyone.


To imagine a feasible future for progressive technology, we have to first recognize the gains of the past. The soaring optimism of the 2000s and early 2010s had some basis in reality. The internet presented us with radical new ways of connecting with each other, sharing resources, and creating knowledge outside of institutional and geographic bounds.

As faith in financial, political, and media institutions collapsed, networked technologies formed the foundation for grassroots movements like the Arab Spring, #MeToo, and Black Lives Matter. On Twitter, people spread protest messages beyond their immediate social networks, turning viral tweets into demonstrations thousands strong, from Zuccotti Park to Tahrir Square. Smartphones enabled people to countersurveil police brutality, share the evidence, and hold the state accountable. Although Twitter, Apple, and Google might not have designed their products for activist uses, the ways people repurposed these tools highlighted their progressive potential.

People discovered infinite channels to tell their own stories and raise concerns about neglected issues from net neutrality to prison abolition, trans rights to unsafe workplace conditions. In an everyday context, free educational resources on Wikipedia, Khan Academy, and even YouTube allow individual volunteers to teach millions. Those with marginalized identities, such as LGBTQ+ people and people facing domestic violence, have found both community and crisis support online when it was not safe to do so in their physical environments. Internet platforms might not themselves be altruistic actors, but they expand the capabilities of those who are.

Dismissing these benefits would be just as ahistorical as dismissing technology’s harms. Many of these outcomes were only possible because software is cheap and scalable. The goal is not to prove that these technologies are perfect or that they cannot be co-opted. Rather, technology is one tool in the toolbox — an immensely and undeniably powerful one — in fighting for the future we want.


In light of these potentials, where did tech go wrong? Past techno-utopian movements have often assumed that innovators and titans of industry drive progress for everyone else, and any attempts to slow down their plans — protests, regulation, ethics committees, and more — are blockers to opportunity.

For example, the early cypherpunks of the late 1980s and early 1990s, including crypto-anarchists like Timothy C. May, emphasized the importance of untraceable online identities for evading “governmental regulation, taxation, or interference.” By the turn of the millennium, WIRED Magazine was publishing gushing profiles of young founders next to Newt Gingrich’s anti-regulation tirades. Critical coverage was all but absent throughout the 2000s. Instead, a credulous tech press framed tech founders as boy geniuses changing the world by “democratizing” access to knowledge, jobs, or enterprise software. In 2020, venture capitalist Marc Andreessen published his pro-progress manifesto “It’s Time To Build,” blaming partisan gridlock and bureaucratic inefficiency for the United States’ inability to scale access to the resources — housing supply, ventilators, university education — that innovators created.

This kind of thinking suggests that new tech should be built first — usually by a small, elite, homogenous group — and social questions should get punted to policymakers, who often end up lagging decades behind. By the time questions of diversity, ethics, and access enter the conversation, large corporations will have already locked in their technological dominance and naturalized their harms. For example, as Amazon built an empire on the promise of super-fast shipping, consumers grew to accept inhumane warehouse conditions as a necessary cost of convenience.

Furthermore, tech companies have not only profited from phone addiction and teen anxiety but also aided and abetted governments in violating human rights. Facebook’s lack of moderation in Myanmar enabled Burmese military personnel to orchestrate a genocide of the Muslim Rohingya minority group. In the U.S., judges use biased risk assessment software like COMPAS to keep Black defendants in prison for longer than their white counterparts, and Palantir develops tools for Immigration and Customs Enforcement to surveil, detain, and deport undocumented immigrants. While early tech leaders hoped to liberate humanity from hierarchical governments, their products instead multiplied the available channels for exerting control.


Systemic problems require systemic solutions. We can recognize that technology is critical to our future without replicating the narrow techno-utopianism of the past. From the beginning, a new left techno-optimism must be willing to engage with social justice, politics, and history in a way that past techno-utopians have not.

We cannot shy from the social. Algorithmic bias, workplace harassment, and viral hate speech show how structural inequality — sexism, racism, and classism — seeps and scales through our technological systems. In response, our analysis must be intersectional, and our new centers of power diverse. Equity must be baked into every stage of technological development, from problem identification to team-building to design to distribution. Most importantly, social justice is a social effort. Beyond just augmenting individual ability, we must build technologies for cooperation, redistribution, and collective action.

We cannot shy from the political. In the past, tech leaders placed new tools in the hands of old hegemons, and were surprised when oppression persisted or people resisted. Yet power shapes what gets built, by whom, how, and for whom. Decisions such as a company’s organizational structure or whether to cooperate with government censorship are embedded in politics. Additionally, as tech companies approach new domains like healthcare, cities, and education, they will need to learn from policymakers, advocates, and the public rather than running roughshod over local laws and livelihoods.

We cannot shy from the historical. Many technologists seem to fear history, instead pursuing escapist visions — seasteading, charter cities, Mars colonization — that offer them blank slates to design from. Even well-intentioned founders can end up replicating historical power dynamics, such as coding bootcamps with tuition models eerily similar to for-profit colleges or emotion-scanning artificial intelligence that amounts to 21st-century phrenology. Only by knowing our history can we pattern-match for risks and opportunities, distinguishing the broken and stale from the truly inventive.


These principles call for a dramatically different tech industry and society from what we have today. Yet power demands counterpower; our efforts can be both generative and adversarial. When prison abolitionist Mariame Kaba describes hope as a “discipline” and philosopher Ernst Bloch calls for “militant optimism,” both emphasize the centrality of ongoing, conscious effort in bringing about justice.

In going from ideals to action, one useful framework is the “utopian demand” as defined by feminist theorist Kathi Weeks. She describes it as “a political demand that takes the form not of a narrowly pragmatic reform but of a more substantial transformation of the present configuration of social relations; it is a demand that raises eyebrows, one for which we would probably not expect immediate success.” Examples of utopian demands might include prison abolition or a universal basic income, but what’s most important is that the utopian demand is not informed by political feasibility alone but by its end vision. Weeks continues, “To make a demand is to affirm the present desires of existing subjects: this is what we want now. At the same time, the utopian demand also points in the direction of a different future and the possibility of desires and subjects yet to come.”

What utopian demands might we dream of for technology’s future? Who is working on bringing them about?


One set of demands must involve thwarting dangerous and oppressive technologies. Reclaiming technology for a just society means being accountable for what we decide to build. Drawing clear ethical boundaries sets a precedent against the use of technology to control and police people, and toward a world where civil society plays a larger role in deciding where and how technology is deployed.

Researchers, like those at the Algorithmic Justice League, study tech’s current and potential harms. Movements like Mijente’s #NoTechForICE and the Campaign To Stop Killer Robots rally public opinion against harmful uses of technology. Meanwhile, policymakers have imposed regulations locally, like San Francisco’s ban on police use of facial recognition; federally, like the Biden administration’s push for stronger antitrust action; and internationally, like the European Union’s General Data Protection Regulation (GDPR).

Resistance has also spanned tech worker activism, such as Google workers’ refusal of Project Maven, a Department of Defense contract to build computer vision for drones; Gig Workers Rising’s “No on 22” campaign to secure a living wage; and unionization efforts at Alphabet, Kickstarter, and digital media outlets, such as the Times Tech Guild at The New York Times. While campaigns often react to particular ethical oversights, each builds workers’ collective capacity to assert decision-making power over their working conditions and the outputs they produce.

Finally, technologists can use their skills to identify abuses of power and build defensive infrastructures in their place. For example, whistleblower Edward Snowden could not have learned about the National Security Agency’s privacy violations without first being a high-level system administrator, and he could not have safely leaked what he knew without understanding the intricate tripwires of the NSA surveillance network. As Snowden posits in his memoir Permanent Record, “As long as legal innovation lags behind technological innovation, institutions will seek to abuse that disparity… It falls to… developers to close that gap by providing the vital civil liberties protections that the law may be unable, or unwilling, to guarantee.”


In addition to resistance, we need parallel efforts around imagining better worlds and building the tools, resources, and infrastructure that will get us there. Unlike those of past techno-utopians, these projects are promising because they start with an understanding of our social, political, and historical conditions before scoping tech’s role.

We should start with visioning: telling stories about who will be included and how; rallying people around technology’s human potentials rather than its abstract technical features. Early science fiction, from Russian cosmists’ writing on spaceflight to Vernor Vinge’s pseudonymous web, catalyzed decades of research and development that turned ideas into reality. Yet unlike these past sci-fi scenes, our visions for a progressive techno-optimism must be shaped by diverse voices. We can draw inspiration from Afrofuturism, the cultural movement that tells alternate histories and speculative futures by and for Black people. From the blockbuster movie Black Panther to author Octavia Butler’s novel Kindred, Afrofuturists weave narratives based in Afro-diasporic history and modern technology. Speculative fiction thus becomes an instrument for opening up new frontiers of investigation, creating new solutions, and giving marginalized people the opportunity to design the worlds they want to live in.

The government can play a major role in funding transformative technology research. The famed Defense Advanced Research Projects Agency (DARPA) is a federal agency granted wide latitude to fund moonshot-style R&D on defense-related technologies, an approach that has led to historic breakthroughs like the laser and the internet. We should push policymakers to invest in similarly ambitious efforts for other domains, from climate modeling to pharmaceutical development. Federal funding also offers a sorely needed alternative to industry-dominated research. Much of the most resource-intensive AI research today happens inside Google’s, Facebook’s, and Microsoft’s thick-walled research arms, making it hard for universities to compete on salaries or compute. Instead, projects like a National Research Cloud or funding for massive open datasets would democratize access to AI resources, and could be coupled with policies requiring that the resulting research be released into the public domain.

Next, we’ll need to create tools that help people call out and circumvent inaccessible, hostile, or corrupt systems. The messaging app Signal and the Tor network are encrypted technologies that protect the safety of dissidents, journalists, sex workers, and other groups at risk of suppression and censorship. Upsolve became the largest bankruptcy nonprofit in the U.S. within three years, empowering people who can’t afford lawyers to file for bankruptcy for free online. Raheem developed a chatbot for anonymously reporting experiences with police, while Callisto offers a similar recording, reporting, and matching system for survivors of sexual assault. The Ameelio app connects incarcerated people with loved ones, offering a free alternative to exorbitantly priced prison phone calls and helping reduce recidivism in the long run. The publication The Markup accompanies its tech accountability journalism with apps such as a promotion-free search engine and a website privacy inspector.

Many of the most interesting innovations revolve around novel production, funding, and ownership models that shift the balance of power toward workers and ordinary people. Commons-based peer production (CBPP), the system that produced critical open infrastructure such as Wikipedia and Linux, involves a large volunteer community supported by smaller revenue-generating and governance arms. While CBPP broadens participation in shaping technology, the newer platform cooperative model goes further, giving workers equity to reap the rewards. For example, The Drivers Cooperative is a ridesharing platform that employs over 2,500 New York City drivers as worker-owners and pays them eight to ten percent more per ride than their venture-backed competitors. In the future, blockchains may provide the technical foundation for new forms of social, political, and economic organization built on principles of openness, transparency, and participation. These new structures, called decentralized autonomous organizations (DAOs), can be used for anonymous voting systems, creator cooperatives, and community-run treasuries. One example is MakerDAO, which governs Dai, a price-stable global cryptocurrency managed by a protocol and stakeholder community rather than a central bank. These models are all exciting testing grounds for wider participation in the economy.

Power is sticky, and systemic change is hard. We need people working on all these projects in tandem. We have to diversify our bets from technology to organizing, from government to grassroots change. We need to talk to and learn from each other; we need to see ourselves as part of a shared mission. And instead of enforcing alignment to a rigid ideological consensus, we need to accept the possibility — or probability — of disagreement, conflict, and failure. These iterative processes make our movement more robust. They are essential, inevitable components of pluralism itself.


Where does Reboot fit into this constellation of efforts? Reboot is a publication and a community, a knowledge ecosystem and a convening space, a classroom and a network and a digital playground all dedicated to reclaiming techno-optimism for a better collective future.

While we avoid prescribing particular tactics or policy positions, Reboot’s work embodies a set of shared frameworks. Our analysis starts with power, both technological and political. First, we analyze technology in the context of the institutions and ideologies it shapes and is shaped by. Our discussions span history, theory, and current events, from job hunting to Marxism to accessible design. Second, we see theory and practice as mutually informative. We support technologists in being thinkers, writers, and advocates through programs like our undergraduate fellowship and open pitch process. Finally, we view optimism as an action, not a belief. We move between systemic analysis and our personal theories of change. We insist on embracing our agency to build a better world than the one we inherited.

Reboot’s programs form a flywheel between content and community, beginning with our public-facing media. With the ivory tower of academia looming on one side and uncritical startup hype on the other, we insist on publishing critical yet generative content that is written by and for technologists. For example, our volunteer-run newsletter spotlights the work of activists, founders, artists, and researchers; every month, we host authors who write science fiction, startup journalism, and cultural criticism. By learning in public, we bring new members with varied backgrounds into the community, spark partnerships with existing organizations working on related issues, and move closer to our long-term goal of facilitating a culture shift in technology.

Finally, our internal community coordinates our volunteer task forces and acts as a gathering space for a wide-ranging group of curious, open-minded, and mission-driven technologists. In an industry where there is rarely space to read and reflect, our members self-organize book clubs to learn about our role in broader systems of power. In a political environment where polarization runs deep, we insist on the discussion norms of assuming good faith and calling each other in instead of out. In a society where communities are increasingly privatized and commodified, we emphasize openness through remote-first programs and a norm of resource-sharing. And in a generation that has struggled to maintain hope in the future, we challenge each other to write about the version of the world we want to live in and the steps we’ll take to get there.


In the shadow of the Cold War, mass anxiety quickly turned into rage at the military-industrial institutions that had brought society so close to the brink. From this ferment, two main youth movements emerged. One was the New Communalists, who believed that a cocktail of personal computing and psychedelic drugs could offer refuge from government power and the struggle in the streets. The other was the New Left, a campus-centric movement encompassing direct action campaigns for feminism, pacifism, free speech, and civil rights. Long after they dissipated, these movements had major downstream effects on the cultures of technology and politics, respectively.

Today, we find ourselves in a similar historical moment. The triple whammy of the 2008 financial crisis, the 2016 U.S. presidential election, and the COVID-19 pandemic destroyed whatever faith we had in corporate and political elites to save us. Tech companies, once hailed as fearless changemakers, turned out to be just as susceptible to greed, callousness, and corruption. Faced with a slew of big life decisions amid this uncertain environment, I’ve found myself asking: where do we go from here?

One option is to absolve ourselves of social responsibility: work at Facebook for the free lunches, or join a low-tech commune in the woods. Another is uprising: shouting grandiose calls to burn society down, lighting symbolic bonfires to make the revolution seem real. But a third option, the most difficult, is to lean into the challenge ahead of us. After all, our work is not separate from our personal identities, our political beliefs, and our duties as thinkers, citizens, and advocates. As technologists, we have been granted disproportionate leverage over the industry by the labor market. This privilege is a duty. We should situate ourselves in broader systems of power; we should consider how we can either help or harm. We cannot use code to abstract ourselves from consequences. We see progress not as a guarantee, but as something to chip away at, day by day.

Reclaiming techno-optimism means reclaiming human optimism. We must seek a shared vision of the world that is just, equitable, and abundant. We have the resources to bring it about. Utopianism implies a state of unattainable perfection, but optimism requires the simple yet daring belief that we have agency over our future.


Acknowledgements

This essay weaves together an incredible number of people and ideas that have influenced my thinking around technology over the last five years — in particular, those that keep me optimistic and inspired in the face of grand challenges.

I want to extend a particular thank-you to my editors — Lucas Gelfond, Jasmine Wang, Chris Painter, and Emily Liu — for balancing patient support with rigorous feedback; to my early readers — Lessley Hernandez, Swetabh Changkakoti, Joel Gustafson, Spencer Chang, and Ethan Reeder — for making time to share their personal impressions; to the 49 of you who sponsored this essay through the Ghost Knowledge crowdfund; and to the broader Reboot network — community members, volunteers, advisors, guest authors — who have informed and inspired this essay by sharing their knowledge, time, and conversations.

Bibliography

This bibliography lists the works that most directly influenced this essay, whether through explicit reference or borrowed concepts. Hundreds more people and pieces have shaped my thinking in less perceptible but equally important ways.

  1. Algorithmic Justice League. https://www.ajl.org
  2. Ameelio. https://ameelio.org
  3. Andreessen, M. (2020, April 18). It’s time to build. Andreessen Horowitz. https://a16z.com/2020/04/18/its-time-to-build/
  4. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  5. Bruce, D. (2020, September 3). Afrofuturism: From the past to the living present. UCLA Newsroom. https://newsroom.ucla.edu/magazine/afrofuturism
  6. Campaign To Stop Killer Robots. https://www.stopkillerrobots.org
  7. Conger, K., Fausset, R., & Kovaleski, S. F. (2019, May 14). San Francisco bans facial recognition technology. The New York Times. https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html
  8. Dai, J., & Kim, A. (2021, May 13). Why I’m unionizing. Reboot. https://reboothq.substack.com/p/unionize
  9. Daub, A. (2020). What tech calls thinking: An inquiry into the intellectual bedrock of Silicon Valley. Farrar, Straus and Giroux.
  10. Fisher, M. (2009). Capitalist realism: Is there no alternative? Zero Books.
  11. Gelfond, L. (2021, April 30). The creator’s dilemma. Reboot. https://reboothq.substack.com/p/cbpp
  12. Gig Workers Rising. Pledge to vote no on 22. https://act.gigworkersrising.org/no_on_prop22
  13. Gurri, M. (2018). The revolt of the public and the crisis of authority in the new millennium. Stripe Press.
  14. Klein, E. (Host). (2021, June 22). Sarah Schulman’s radical approach to conflict, communication, and change [Audio podcast episode]. In The Ezra Klein Show. The New York Times. https://www.nytimes.com/2021/06/22/podcasts/transcript-ezra-klein-interviews-sarah-schulman.html
  15. Klein, E. (Host). (2021, March 30). Why sci-fi legend Ted Chiang fears capitalism, not A.I. [Audio podcast episode]. In The Ezra Klein Show. The New York Times. https://www.nytimes.com/2021/03/30/podcasts/ezra-klein-podcast-ted-chiang-transcript.html
  16. MakerDAO. (2019). MakerDAO documentation. https://docs.makerdao.com
  17. McCabe, D., & Kang, C. (2021, June 16). One of Big Tech’s biggest critics is now its regulator. The New York Times. https://www.nytimes.com/2021/06/16/technology/lina-khan-big-tech.html
  18. Menozzi, F. (2020, July 20). Militant optimism: A state of mind that can help us find hope in dark times. The Conversation. https://theconversation.com/militant-optimism-a-state-of-mind-that-can-help-us-find-hope-in-dark-times-141165
  19. Mijente. #NoTechForICE. https://notechforice.com
  20. Mitchell, A., Sun, J., Mak, J., & Rose, N. (2021, January). A national AI for good initiative. Day One Project. https://www.dayoneproject.org/post/a-national-ai-for-good-initiative
  21. Mozur, P. (2018, October 15). A genocide incited on Facebook, with posts from Myanmar’s military. The New York Times. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
  22. Nedzhvetskaya, N., & Tan, J. S. (2019, December 23). What we learned from over a decade of tech activism. The Guardian. https://www.theguardian.com/commentisfree/2019/dec/22/tech-worker-activism-2019-what-we-learned
  23. Raheem. https://www.raheem.ai
  24. Reich, R. (2021, May 20). Why AI needs academia. Boston Review. https://bostonreview.net/forum/redesigning-ai/rob-reich-why-ai-needs-academia
  25. Reinhardt, B. (2020, June). Why does DARPA work? https://benjaminreinhardt.com/wddw
  26. Scahill, J. (2021, March 17). Hope is a discipline: Mariame Kaba on dismantling the carceral state. The Intercept. https://theintercept.com/2021/03/17/intercepted-mariame-kaba-abolitionist-organizing
  27. Snowden, E. (2019). Permanent record. Macmillan.
  28. Srnicek, N., & Williams, A. (2015). Inventing the future: Postcapitalism and a world without work. Verso Books.
  29. Sun, J. (2021). Reboot. https://reboothq.substack.com/about
  30. Tarnoff, B. (2018, June 6). Tech workers versus the Pentagon. Jacobin. https://jacobinmag.com/2018/06/google-project-maven-military-tech-workers
  31. The Drivers Cooperative. https://drivers.coop
  32. The Markup. https://themarkup.org
  33. Tufekci, Z. (2017). Twitter and tear gas. Yale University Press.
  34. Turner, F. (2010). From counterculture to cyberculture. University of Chicago Press.
  35. Upsolve. https://upsolve.org
  36. Vinge, V. (2015). True names and the opening of the cyberspace frontier. Tor Books.
  37. Weeks, K. (2011). The problem with work. Duke University Press.
  38. Woodman, S. (2017, March 2). Palantir provides the engine for Donald Trump’s deportation machine. The Intercept. https://theintercept.com/2017/03/02/palantir-provides-the-engine-for-donald-trumps-deportation-machine
  39. Xie, L. (2021, March 12). A beginner’s guide to DAOs. Mirror. https://linda.mirror.xyz/Vh8K4leCGEO06_qSGx-vS5lvgUqhqkCz9ut81WwCP2o


Author

Jasmine Sun

*Inventing the Future* by Alex Williams and Nick Srnicek got me wildly fired up about reclaiming technology for progressive goals, and about embracing ambitious idealism without compromising on ideals. In many ways, it inspired my piece!