The Road Ahead for Identity Management in Open Source: Humans, Bots, AI Agents—and the Future of Attribution

Open source has always been built on trust—trust in code, in contributors, and in the collaborative process itself. But the ground is shifting. As everything from financial systems to AI infrastructure leans more heavily on open source, one issue in particular is moving to the center of the conversation: contributor identity. It’s no longer just a metadata problem. It’s a governance issue, a compliance issue, and—most urgently—a trust issue.

At Bitergia, we’ve spent years helping foundations, communities, and OSPOs understand participation. Who’s contributing, how they’re contributing, and where those contributions are coming from. Our analytics stack—SortingHat and the rest—resolves identities across platforms and affiliations. We’ve been good at catching automation bots and recognizing organizational footprints. But we’re entering a new phase. Bots were just the beginning. Now we have AI agents writing code, reviewing PRs, and even initiating architectural decisions.

And that’s where things get messy.

Where Attribution Gets Complicated

Traditional bots are relatively easy to handle. They’re predictable. They follow rules. They do what you tell them to do. But AI agents? That’s different. They generate novel content. They adapt, learn, and can initiate pull requests without direct human oversight. And more and more, they’re quietly entering our workflows.

We’re already seeing this phenomenon in practice. Developers are merging pull requests that were partially or entirely written using tools like GitHub Copilot, ChatGPT plugins, and other LLM-powered coding assistants—often without disclosing authorship. Sometimes the suggestion is a single line; other times, it’s a whole function or file. And unless you’re paying very close attention, you wouldn’t know the difference.

All this creates real challenges for projects that depend on accurate attribution. Who’s responsible for the code? Who gets credit? Who reviews it—and how? If we can’t answer these questions clearly, contributor metrics lose their meaning, and governance starts to slip.

A Look Back: What the Linux Kernel Taught Us

This isn’t entirely new. The Linux kernel community—arguably the most influential open source project in history—has long struggled with contributor identity. Shared corporate accounts, inconsistent aliases, and ghost authorship all made it difficult to figure out who was really behind the code. That created friction for maintainers, legal teams, and community stewards alike.

To address the issue, the Linux Foundation (under Jim Zemlin’s leadership) invested in mechanisms like DCO enforcement, contributor license agreements, and more structured affiliation tracking. These efforts helped bring more visibility and accountability to one of the most complex collaborative codebases in the world.

But now, as AI becomes part of the contributor pool, those systems need to evolve again.

The Emerging Answer: First-Person Credentials (FPCs)

To navigate this new terrain, we need a stronger, more expressive identity layer—one that can account for different types of contributors: humans, bots, and AI agents. One idea that’s gaining traction is First-Person Credentials (FPCs).

Still in development, FPCs are designed to let contributors cryptographically assert their identity and the nature of their work. Think of it as a privacy-preserving way to prove authorship—human or otherwise—without relying on centralized gatekeepers. These credentials could be portable, verifiable, and interoperable across systems. And if adopted responsibly, they could help communities reintroduce trust into a more complex attribution landscape.

This isn’t about banning AI-generated code. It’s about disclosing when and how non-human agents contribute—so communities can make informed decisions and maintain the integrity of their processes.

So Who Gets to Issue These Credentials?

That brings us to the next challenge: who issues trust?

If we agree that credentials are useful—or even necessary—to distinguish between humans and machines, we still have to decide who should issue them. That’s a big, unresolved question, and the answer has major implications for openness, privacy, and autonomy.

Government-Issued Credentials

Governments are experimenting with digital identity systems. British Columbia has the BC Services Card and Wallet. Estonia and India have their own models. But state-issued credentials don’t always inspire confidence—especially when they feel tied to surveillance or centralized control. Switzerland, for example, rejected a national digital identity platform by referendum in 2021 over exactly those concerns.

Government credentials might be appropriate for regulated sectors—but they aren’t a universal fit for open source.

Foundation-Issued Credentials

Could organizations like the Linux Foundation or OpenSSF manage credentialing infrastructure? In theory, yes—they have the trust and governance models. But building a global, flexible, and transparent credential system is a heavy lift. Enforcement, revocation, and interoperability introduce serious complexity.

Platform-Issued Credentials

GitHub, GitLab, and similar platforms already sit at the center of developer workflows. They manage user accounts, control access, and could enforce metadata standards around how contributions are authored—human, bot, or AI. But if identity credentials are issued solely at the platform level, we risk fragmenting trust across proprietary silos. Portability suffers. So does community oversight.

Where Bitergia Fits: Identity-Aware Vendors

Vendors like Bitergia occupy an adjacent space. We’re not a platform in the GitHub sense, but we do maintain deep insight into identity across systems. Our tooling resolves contributor aliases, detects bots, and analyzes behavior at scale. We’ve spent years building trust with open source communities, and we understand the nuances of attribution better than most.

We’re not positioning Bitergia as a credential issuer. Frankly, it would be a stretch without significant community alignment, external support, and funding. But in a future where the ecosystem demands portable, cryptographically verifiable contributor credentials, it’s possible that a consortium we’re part of could play a meaningful role.

This approach would only make sense if we did it the right way: grounded in transparency, governed by the community, and aligned with the principles of contributor sovereignty, privacy, and openness.

Design Principles for Trustworthy Credentialing

If any third party—including us—wants to issue credentials responsibly, here’s what it should look like:

  • Voluntary, opt-in, and transparent—no surveillance, no coercion.
  • Aligned with open standards like DIDs and Verifiable Credentials.
  • Governed by the community, not by a single vendor or company.
  • Cryptographically verifiable, but privacy-respecting.
  • Non-exclusive and interoperable, so other issuers can participate.

Credential issuance isn’t just a tech task. It’s a governance decision. And getting it wrong could do more harm than good.

What Bitergia Can Do Next

We’re not trying to become the identity authority for open source. But we do believe Bitergia can help shape how this plays out—through infrastructure, transparency, and open collaboration.

Here’s what’s on our radar:

  • Add richer contributor classifications: human, AI-assisted, AI-generated, bot.
  • Integrate support for verifiable credentials in contributor profiles.
  • Work with communities and foundations to define open metadata schemas.
  • Build visual dashboards that surface changes in contributor composition over time.
  • Collaborate with standards bodies (CHAOSS, OpenWallet, OpenSSF, DIF, TOIP, Arya) to co-create trusted metrics and practices.

Because at the end of the day, open source doesn’t just need better tools—it needs better ways to understand and trust who (or what) is doing the work.

Final Word

We’re at a turning point. If we don’t create open, auditable ways to track attribution in a world of AI and automation, someone else will—and they might not care about open source community values, transparency, or user control.

At Bitergia, we’re committed to helping build what’s next: not just tooling, but trust infrastructure for open source communities.

Because in the end, this isn’t about controlling who contributes. It’s about keeping the playing field fair—and making sure we can still believe in the data that tells us where our software comes from.

This blog post was written by Diane Mueller.

Diane Mueller

Managing Director, Bitergia Research
