Designers hold considerable power over AI, but it remains a tool built to serve people; communities must help decide how it is used
Superhuman. Disastrous. Game-changing. Negligent. Productivity-enhancing. Perilous. These are just some of the words used to describe AI in recent months. ChatGPT’s public launch has thrust AI into the spotlight, prompting questions about what makes it different from other technologies and what profound shifts in how we work and live might follow.
First, it is important to recognize what AI actually is. As we emphasize in our book, “The Tech That Comes Next,” technology is a tool created by humans and shaped by human beliefs and limitations. AI is often portrayed as an entirely independent, self-learning entity, but in reality it operates within the boundaries of its design. When I ask ChatGPT, “Which country makes the best jollof rice?” it replies, “As an AI language model, I don’t have personal opinions, but I can provide information. Determining which country makes the best jollof rice is subjective and depends on cultural background, personal taste, and experiences.”
This reflects a conscious decision by AI developers not to give definitive answers on questions of cultural opinion. ChatGPT users may ask for opinions on subjects far more contentious than rice, and this design choice will produce a similar response. ChatGPT’s developers have also recently adjusted the system to address claims of bias, such as sexism and racism, in its responses. We should hold developers to high expectations, insisting that they monitor and correct their AI tools, and we should demand inclusivity and a meaningful level of transparency in how these limits are set.
While designers wield significant influence over how AI functions, leaders in industry, government, and nonprofits also hold power in deciding how it is applied. Generative AI can produce images, plan trips, draft presentations, and write code, but it is not a universal problem solver. Amid the enthusiasm, those deciding how to use AI should ask the affected communities, “What do you need?” and “What do you want?” Their answers should inform the constraints developers build in and guide decisions about whether and how AI is deployed.
In early 2023, Koko, a mental health app, experimented with using GPT-3 to help counsel about 4,000 people, but discontinued the test because the responses felt “clinical.” It quickly became clear that the people affected preferred human therapists to an AI program. Despite the volume of discourse around AI, its use is neither inevitable nor required. Rushing to rely solely on AI to determine access to medical care, prioritize housing assistance, or screen job candidates can have grave consequences; such systems can amplify exclusion and harm. Those considering AI must recognize that choosing not to use it is as significant a decision as choosing to use it.
Beneath these challenges lie fundamental concerns about the quality of the data that drives AI and about who has access to the technology. AI works by performing mathematical operations on existing data to predict or generate content. If that data is biased, unrepresentative, or limited in the languages it covers, then a chatbot’s replies, its suggested activities, and the images it generates can inherit those same flaws.
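To make that dynamic concrete, here is a minimal, purely hypothetical sketch, not how any production chatbot is built: a toy “model” that simply counts which pronouns appear alongside each occupation in its training text. The sample sentences and the function names are invented for illustration only.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed "training" text for illustration.
training_data = [
    "the engineer said he would fix the bug",
    "the engineer said he had shipped the patch",
    "the nurse said she had updated the charts",
]

PRONOUNS = {"he", "she", "they"}

def train(sentences):
    """Count which pronouns appear in the same sentence as each other word."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        pronouns = [w for w in words if w in PRONOUNS]
        for word in words:
            if word not in PRONOUNS:
                counts[word].update(pronouns)
    return counts

def most_likely_pronoun(model, word):
    """Return the pronoun most often seen with this word in the training text."""
    seen = model.get(word.lower())
    return seen.most_common(1)[0][0] if seen else None

model = train(training_data)
print(most_likely_pronoun(model, "engineer"))  # -> "he", inherited from the skewed data
print(most_likely_pronoun(model, "nurse"))     # -> "she", inherited from the skewed data
```

Real systems are vastly more sophisticated, but the underlying pattern is the same: whatever imbalances exist in the data tend to reappear in the output.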
To counter this, responsible technology development should draw on the work of researchers and advocates working at the intersection of technology, society, race, and gender. Safiya Noble, for instance, exposed bias in Google search results for terms like “professional hairstyles” and “unprofessional hairstyles for work”: the former returned images of white women, while the latter returned images of Black women with natural hairstyles. Research-driven awareness and advocacy prompted Google to change its system.
There have also been efforts to shape AI systems before they are finalized and deployed. Researchers from Carnegie Mellon University and the University of Pittsburgh used AI lifecycle comic boarding, which translates AI reports and tools into accessible descriptions and visuals, to engage frontline workers and homeless individuals in discussions about an AI-based decision-support system used in local homeless services. Participants were able to understand how the system worked and offer specific feedback to its developers. The lesson is that AI is used by people, so shaping its development requires situating the technology within its social context.
So where do we go from here as a society? Who is responsible for balancing the design of AI tools, decisions about their use, and the mitigation of their harms? Everyone has a role to play. As discussed above, technologists and organizational leaders have distinct responsibilities in building and deploying AI systems. Policymakers can set guidelines for AI development and use, not to stifle innovation but to steer it toward reducing harm. Funders and investors can back human-centered AI systems and support timelines that make room for community input and analysis. These groups must work together to create more equitable AI systems.
This interdisciplinary, cross-sector approach can produce better outcomes, and promising examples already exist. Farmer.chat uses Gooey.AI to deliver agricultural knowledge in local languages over WhatsApp to farmers in India, Ethiopia, and Kenya. The African Center for Economic Transformation is developing a multi-country, multi-year initiative to test AI sandboxes in economic policymaking. Researchers are also exploring how AI can support the revitalization of Indigenous languages, such as Cheyenne in the western United States.
These examples show AI’s potential to serve society equitably. Past experience has shown that technology’s unequal impacts compound over time, and addressing those disparities is not the responsibility of the tech community alone. Together, we can raise the standards for the AI systems that are built and used in our lives.