Amidst AI’s evolution, Microsoft’s report offers trustworthy governance insights
In the realm of AI, trusting governments to act without guidance is unwise. Government efforts to address AI risks can easily cause more harm than good. For example, penalizing companies like Shell Oil shifted control of oil to unfriendly regions, and correcting RCA’s dominance in consumer electronics handed that market to Japan. Without guidance, U.S. tech leadership could likewise be transferred to China. Hence, Microsoft’s “Governing AI: A Blueprint for the Future” report holds great significance.
Microsoft’s report identifies the issue, presents a viable approach that preserves U.S. competitiveness, and tackles AI-related concerns.
Now, let’s delve into Microsoft’s AI governance blueprint, concluding with my Product of the Week—a series of trackers aiding in locating misplaced items.
Requesting regulation without providing proper context is imprudent. Uninformed government reactions tend to do more harm than good. I mentioned a few antitrust examples earlier, but the Equal Employment Opportunity Commission (EEOC) offers a particularly distressing illustration.
Established by Congress in 1964, the EEOC was created to address the pressing problem of racial discrimination in the workforce. It did address workplace discrimination, but it failed to tackle a more significant underlying problem: discrimination in education.
Businesses that had hired employees based on qualifications, and that used established industry methodologies to award positions, salary increases, and promotions based on education and accomplishments, were compelled to abandon those practices in favor of improving company diversity. Unfortunately, this often resulted in inexperienced minority employees being placed in roles for which they were ill-prepared.
By assigning inexperienced minority employees to jobs they lacked proper training for, the system set them up for failure, reinforcing the misguided notion that minorities were somehow inadequate. In reality, they had been denied equal opportunities for education and mentorship. This disparity was not limited to people of color; it also affected women, irrespective of their racial background.
Brad Smith, Microsoft’s President, stands out among technology leaders for his strategic thinking rather than focusing solely on tactical responses. His approach is evident in the Microsoft blueprint, where instead of demanding government action without considering long-term consequences, Smith presents a well-defined solution in a five-point plan.
Smith’s opening statement, “Don’t ask what computers can do, ask what they should do,” echoes John F. Kennedy’s iconic line, “Ask not what your country can do for you; ask what you can do for your country.” Smith’s statement originates from a book he co-authored in 2019, addressing one of this generation’s defining questions.
Smith’s initial recommendation is to adopt and expand existing government-led AI safety frameworks, as governments often overlook the tools already at their disposal and waste time reinventing them.
The U.S. National Institute of Standards and Technology (NIST) has made commendable progress with the AI Risk Management Framework (AI RMF), though that framework remains incomplete. Smith’s first point emphasizes using and building upon this existing work.
His second point underscores the necessity of implementing effective safety mechanisms for AI systems that control critical infrastructure. If such a system malfunctions, the potential for significant harm, even loss of life, is alarming.
To mitigate these risks, it is crucial to subject such systems to rigorous testing, incorporate substantial human oversight, and evaluate them against a wide range of scenarios, both probable and improbable, to ensure the AI does not make a bad situation worse.