How Artificial Intelligence Impacts Underprivileged Communities

By R.H.M. Cruz, Executive Director of Inspire One Billion People | Posted on July 22, 2023 at 10:07 am

In the rapidly advancing world of technology, artificial intelligence (AI) has emerged as a powerful tool with the potential to stimulate economic growth and revolutionize various industries. However, it is crucial to acknowledge that AI is not immune to bias, and in a society riddled with systemic inequality and racism, these biases can exacerbate existing issues faced by marginalized communities. As the Executive Director of Inspire One Billion People (IOBP), I feel it is essential to shed light on the impact of AI on underprivileged communities, the consequences it brings, and the potential solutions that can mitigate harm.

AI’s potential benefits are undeniable, but its implementation in various industries, such as education, housing, employment, and credit, can be problematic if the systems perpetuate existing biases. As Olga Akselrod points out in her article, “How Artificial Intelligence Can Deepen Racial and Economic Inequities,” AI has the potential to worsen discriminatory practices, particularly affecting marginalized groups that have long faced systemic discrimination. For instance, algorithms used to screen apartment renters and mortgage applicants can disproportionately disadvantage Black people due to historical patterns of segregation embedded in the data on which these algorithms are built.

The consequences of biased AI in healthcare can further exacerbate disparities for people of color, deepening the economic and racial divide in our society. As an organization committed to serving communities and advocating for social justice, IOBP recognizes the urgency of addressing these biases and taking concrete steps to mitigate their impact.

To tackle the issue of bias in AI, experts and thought leaders in the field propose innovative solutions. A crucial aspect of addressing bias lies in acknowledging the lack of diversity in the technology industry. Rashida Richardson emphasizes the need for transformative justice principles in AI development, which includes involving victims and impacted communities in conversations about building and designing AI models. IOBP strongly supports this approach and encourages the active participation of marginalized people as co-creators and co-implementers of AI technology.

Moreover, Andrew Burt offers a potential legal and statistical framework for measuring and ensuring algorithmic fairness in AI technology. Drawing from U.S. laws in areas like civil rights, housing, healthcare, and employment, businesses can adopt strategies that have proven effective in tackling discrimination in other sectors. The Federal Trade Commission (FTC) also plays a crucial role in holding businesses accountable. Through laws such as Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, the FTC reinforces the importance of fair and unbiased AI implementations.

However, to truly foster equitable AI models, IOBP believes that the most impactful solution is Richardson’s proposal to include victims and impacted communities in the AI design process. By involving those directly affected by AI’s biases, experts can gain valuable insights that lead to creative and thoughtful solutions that minimize harm to marginalized groups.

As an organization that serves the needs of underprivileged communities, IOBP will continue to advocate for inclusive AI development and promote initiatives that empower marginalized individuals. Our commitment to creating a just and equitable society extends to technology, and we will work tirelessly to ensure AI benefits everyone, regardless of race or background.

Together, let’s reimagine the future of AI, one that truly inspires and empowers one billion people.

#AI #ArtificialIntelligence #Technology #SocialJustice #Equality #UnderprivilegedCommunities #Inclusion #InspireOneBillionPeople #IOBP #TransformativeJustice #Fairness #Equity #FutureOfAI #Empowerment

*Note: The information and perspectives presented in this blog are based on the collective understanding of AI’s impact on marginalized communities and the potential solutions proposed by experts in the field. For more in-depth information, please refer to the cited articles below.

Citations:

  1. Akselrod, Olga. “How Artificial Intelligence Can Deepen Racial and Economic Inequities.” The Network – UC Berkeley School of Law, January 26, 2022, https://sites.law.berkeley.edu/thenetwork/2022/01/26/how-artificial-intelligence-can-deepen-racial-and-economic-inequities/
  2. Johnson, Khari. “A Move for ‘Algorithmic Reparation’ Calls for Racial Justice in AI.” Wired, https://www.wired.com/story/a-move-for-algorithmic-reparation-calls-for-racial-justice-in-ai/
  3. Burt, Andrew. “Measuring and Ensuring Algorithmic Fairness in AI.” AI Journal.
  4. Federal Trade Commission (FTC). “Laws Addressing Discrimination in AI Implementations.” FTC.
  5. Vladeck, Steve. “The Legal Fight Over College Admissions and AI Technology.” CNN.
