
RightsCon 2025 and Beyond: Key Trends in AI, Human Rights, and Corporate Responsibility


By Sarah Ryan

After an exciting, stimulating, and rights-filled week at RightsCon 2025 in Taipei, it’s clear that the conversation around AI and human rights is evolving rapidly. This year’s discussions highlighted both progress and persistent challenges in ensuring AI systems are developed and deployed responsibly. With generative AI continuing to reshape industries, the urgency of embedding human rights considerations into AI governance has never been greater. 

Across this year’s sessions, five major themes stood out that will shape the road ahead for AI, human rights, and corporate responsibility.

1. Moving from Principles to Practice  

The past few years have seen a proliferation of Responsible AI principles from governments, companies, and civil society. Yet a common theme at RightsCon was the struggle to translate high-level commitments into concrete action. Many organizations recognize the need for transparency, accountability, and fairness in AI, but there remains a gap in operationalizing these values at scale. 

Panelists across multiple sessions emphasized the need for better tools, governance structures, and regulatory clarity to help companies navigate this complex landscape. RightsCon 2025 made it clear that the conversation must now shift toward implementation: how can we ensure that AI governance delivers measurable impact, not just aspirational principles?

2. Assessing Trade-Offs is Key

A crucial theme that emerged throughout RightsCon 2025 was the importance of assessing the trade-offs of AI mitigations and guardrails themselves. While efforts to mitigate AI risks—such as content moderation filters, bias-reduction techniques, or stricter data governance rules—are necessary, they can also create unintended human rights consequences. For example, overly restrictive content moderation can suppress freedom of expression, while aggressive bias mitigation techniques might impact the accuracy of decision-making systems in ways that disadvantage certain groups. 

Effective AI governance and due diligence require a holistic approach that considers not just the risks AI poses but also the potential downsides of the safeguards put in place to address them. This means conducting rigorous human rights impact assessments for interventions, engaging affected communities in decision-making, and continuously refining approaches to ensure that AI mitigations do not inadvertently create new harms.

3. Expanding Due Diligence Beyond Development to Deployment 

Human rights due diligence in AI has traditionally focused on the development phase, ensuring that data collection, model training, algorithm design, and auditing align with ethical principles. However, discussions at RightsCon underscored the growing need for due diligence across the full AI lifecycle, especially at the deployment stage.

As AI systems are integrated into real-world decision-making processes—whether in hiring, law enforcement, financial services, or human resources—new risks emerge that were not always anticipated at the development stage. Companies must adopt ongoing due diligence approaches that account for evolving risks post-deployment. This means monitoring AI in use, ensuring proper grievance mechanisms for affected communities, and engaging stakeholders beyond just the initial design phases. 

Expanding to deployment also means bringing new players into the fold. Companies that do not consider themselves technology companies, most notably consumer products companies, will need to learn quickly from AI developers to ensure their use of AI respects human rights.

4. Addressing the Hidden Labor Behind AI 

One of the most pressing but often overlooked issues in AI governance is the treatment of the workers who power AI systems: the data enrichment workers, content moderators, and annotators who play a crucial role in training AI models. Several sessions at RightsCon, including the panel I moderated, highlighted ongoing labor rights concerns such as low wages, lack of job security, and exposure to harmful content.

This year, discussions moved beyond just identifying these challenges to pushing for concrete solutions. There was a strong call for better labor protections, fair compensation, and ethical sourcing guidelines for AI supply chains. Organizations like the Partnership on AI have already started creating standards to address these issues, but sessions and conversations at RightsCon made it clear that sustained pressure and accountability from both the private sector and civil society are needed to drive real change.  

5. Collaboration is Fundamental 

One of the clearest takeaways from RightsCon is that no single company, government, or civil society group can tackle AI’s human rights challenges alone. Ensuring AI respects human rights requires a truly multistakeholder approach, where private sector actors, regulators, civil society organizations, academics, and—most importantly—impacted communities work together to shape policies and practices. 

Successful AI governance must be built on transparency, shared responsibility, and open channels of communication. RightsCon sessions underscored the need for better collaboration across industries, particularly in areas like standard-setting, shared taxonomies, human rights due diligence, and accountability mechanisms. Moreover, engaging directly with those most affected by AI—whether they are marginalized communities, data workers, or consumers—ensures that AI governance is not only robust but also equitable and just.

Looking Ahead  

As we look ahead, one thing is clear: the conversation around AI and human rights is gaining momentum—and rightly so. The next year will be critical in determining whether companies and policymakers can move beyond discussions and take meaningful action to align AI development and deployment with human rights principles. 

The challenge ahead is not just about mitigating AI’s risks—it’s about shaping a future where AI actively upholds and advances human rights. That will require cross-sector collaboration, stronger enforcement mechanisms, and a commitment to ensuring AI serves all communities, not just those in positions of power. 


