We Found Our .env File in 47 Public Forks After a Junior Dev's First Open Source PR
March 16, 2026 · Security · 9 min read



Six days after our junior developer submitted their first open source contribution, our security scanner flagged an anomaly: our production .env file was publicly accessible in 47 GitHub forks. Every key. Every secret. The database password, the payment provider webhook secret, the internal API tokens. Sitting in plain text, indexed by GitHub search, for six days.

How It Happened

The story starts with the best of intentions. A junior developer — let's call him Arjun, three months into his first engineering job — spotted a bug in one of our internal tools that we had open-sourced. He wanted to contribute a fix. He'd never submitted a pull request to a public repo before. So he did what the GitHub UI suggests: he forked the repository to his personal account.

What Arjun didn't know, and what nobody had told him: this tool lived in a mono-repo. The public-facing library code was in /packages/ui-kit/. The repository root also contained .env, .env.local, and a /config/ directory with production service credentials — because three years ago, someone had consolidated "all config" into a single repo, and nobody had audited what that meant when the repo went public.

Arjun forked the entire repo. He committed his fix to his fork. He pushed. GitHub automatically made his fork public. And with it, the root-level .env file containing our production secrets.

  • 47 public forks containing our .env file
  • 6 days of public credential exposure
  • 12 production secrets in the file
  • 0 alerts from our secret scanner

Why Our Secret Scanning Missed It

We had GitHub Advanced Security enabled. We had a secret scanning policy. We even got an email from GitHub congratulating us on "0 secret scanning alerts" that quarter. So how did this get through?

Secret scanning scans your repository — the canonical one, at github.com/ourorg/our-repo. It does not scan forks. A fork is a separate repository under a different owner. GitHub's secret scanning does not propagate to forks made by external contributors unless those forks are internal to your organisation.

Our standard dev setup installed a pre-commit hook that checked for secrets before each commit. But that hook only existed in our organisation's provisioned dev environment. Arjun was working from his personal laptop, cloning from his own fork, using his own Git config. The hook never ran on his machine.

We had a false sense of security. Our scanning covered our repo. Our contributor's fork was invisible to us — until the secrets were already out.

The 47 forks weren't 47 malicious actors. Most were automated mirroring bots — tools that clone popular repos to build training datasets, search indices, or developer analytics. By the time we found the exposure, several of those mirrors had already scraped and stored the content independently of GitHub. Deleting the files from Arjun's fork didn't mean we'd deleted them from everywhere.

The Incident Response

We found out on a Friday afternoon — of course — via a routine security audit, not an automated alert. The next four hours looked like this:

  • Hour 0–1: Triage. Pull the exact list of exposed secrets. Determine which are still active. Cross-reference with access logs for anomalous usage. Our database showed no unusual query patterns. Payment provider showed no webhook replays from unexpected IPs. That was the only good news.
  • Hour 1–2: Rotate everything. Database passwords. API tokens. Webhook secrets. OAuth client secrets. Every credential in that file, regardless of whether we saw evidence of misuse. Rotation order mattered — rotating the DB password before updating the connection string on all 14 pod replicas would cause a rolling outage.
  • Hour 2–3: Audit the forks. Use GitHub's API to enumerate all forks of the repository. File DMCA takedown requests for the most visible public mirrors. Contact GitHub support to have the sensitive commits scrubbed from fork history (this takes days, not hours).
  • Hour 3–4: Customer notification decision. Legal said: if there's no evidence of misuse and we can confirm rotation within the exposure window, we may not have a mandatory disclosure obligation. We notified our enterprise customers anyway. Silence is worse than transparency.
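The fork-enumeration step from hour 2–3 is scriptable. The sketch below parses a captured sample response so the jq extraction is visible; in a real run the JSON comes from the commented curl call. OWNER, REPO, GITHUB_TOKEN, and the fork names shown are placeholders.

```shell
# Real call (paginate via the Link header when a repo has >100 forks):
# curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
#   "https://api.github.com/repos/OWNER/REPO/forks?per_page=100&page=1"

# Captured sample response, stubbed here so the extraction is runnable:
cat > /tmp/forks-sample.json <<'EOF'
[
  {"full_name": "mirrorbot/our-repo", "pushed_at": "2026-03-12T08:00:00Z", "private": false},
  {"full_name": "arjun-dev/our-repo", "pushed_at": "2026-03-10T14:30:00Z", "private": false}
]
EOF

# One line per public fork: owner/name and when it was last pushed to.
jq -r '.[] | select(.private == false) | "\(.full_name)\t\(.pushed_at)"' \
  /tmp/forks-sample.json
```

The `pushed_at` timestamp is what tells you whether a fork is a stale one-off or an actively synced mirror, which matters when you prioritise takedown requests.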

The Real Problem Wasn't Arjun

The post-mortem had one critical rule: no blame. Arjun did exactly what a first-time contributor is supposed to do. He found a bug, he forked, he fixed it, he submitted a PR. The failure was systemic, not individual.

Here's what actually went wrong:

  • The .env file was committed to the repo in the first place. This is the original sin. Your secrets should never be in version control, even in a private repo. Private repos get leaked, forked, archived, and cloned. Treat every committed secret as already compromised.
  • The .gitignore didn't exclude .env at the root. It excluded .env.local — the file developers were told to create locally. But the committed .env served as a "template" that someone had filled with real values and pushed three years prior. Nobody noticed because it was private.
  • We had no contributor onboarding that covered secrets. Our CONTRIBUTING.md explained how to run tests and format code. It said nothing about never including environment files in your fork.
  • We had no tooling to detect secrets in incoming PRs from forks. A fork-aware secret scanning step in our PR review workflow would have caught this before merge — or even before Arjun's push hit GitHub's servers.

What We Changed

The fixes fell into three categories: eliminate, detect, and contain.

Eliminate: Remove secrets from version control entirely

# Audit every file in git history for secrets
git log --all --full-history -- "**/.env" "**/*.env" "**/config/secrets*"

# Use BFG Repo Cleaner to purge .env from every historical commit
# (BFG recommends running against a fresh `git clone --mirror` copy)
java -jar bfg.jar --delete-files .env --no-blob-protection .

# Force push the cleaned history
git reflog expire --expire=now --all && git gc --prune=now --aggressive
git push origin --force --all

This is painful. Force-pushing rewrites history for every contributor. We coordinated it as a maintenance window, notified all contributors, and required a fresh clone. Worth it.

After the purge: .env files of any kind added to .gitignore at the root level. A .env.template with placeholder values committed instead. Any real secrets only in the secrets manager (AWS Secrets Manager in our case) and injected at deploy time — never stored in the filesystem at all.
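The resulting root-level ignore rules look roughly like this (.env.template is the name we picked; any placeholder filename works):

```gitignore
# Ignore every .env variant, anywhere in the repo
.env
.env.*
*.env

# ...but keep the committed placeholder that documents required variables
!.env.template
```

The negation on the last line matters: without it, the `.env.*` pattern would silently exclude the template too, and new contributors would have nothing to copy from.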

Detect: Secret scanning that covers forks

# .github/workflows/pr-secret-scan.yml
name: Secret Scan on PR

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Run Gitleaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITLEAKS_LICENSE: ${{ secrets.GITLEAKS_LICENSE }}

Gitleaks runs on the PR diff, not just the base branch. It catches secrets introduced in the contributor's fork before they're merged. We set it as a required status check — PR cannot merge if Gitleaks fails.
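Gitleaks also accepts a repo-level config for token formats its built-in rules don't know about. A minimal sketch, where the `svc_` prefix is a made-up stand-in for an internal token format:

```toml
# .gitleaks.toml -- extends the built-in ruleset rather than replacing it
title = "org secret-scanning config"

[extend]
useDefault = true

[[rules]]
id = "internal-service-token"
description = "Internal service tokens (hypothetical svc_ prefix)"
regex = '''svc_[a-z0-9]{32}'''
```

Custom rules are worth the ten minutes: the default patterns cover well-known providers, but your own internal tokens are exactly the secrets no public ruleset has ever seen.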

Contain: Assume breach, rotate on a schedule

Even if a secret is never exposed, we now rotate on a schedule. Database passwords: every 90 days. API tokens: every 60 days. Webhook secrets: every 30 days. The rotation itself is automated — a cron job updates the secret in AWS Secrets Manager, triggers a rolling restart of the relevant pods, and posts a notification to the security Slack channel. Human in the loop only to confirm the rotation succeeded.

The logic: if rotation is painful and manual, it doesn't happen. If it's automated and routine, it's just another Tuesday.
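The due-check that kicks off each scheduled rotation can be sketched in a few lines of shell. The secret names, dates, and the pinned "today" are illustrative; the real job reads rotation state from AWS Secrets Manager.

```shell
# Pin "today" to a fixed date so the example is reproducible.
today=$(date -u -d "2026-03-16" +%s)

# Each line: secret name, last rotation date, max age in days.
while IFS=, read -r name last_rotated max_age_days; do
  age_days=$(( (today - $(date -u -d "$last_rotated" +%s)) / 86400 ))
  if [ "$age_days" -ge "$max_age_days" ]; then
    echo "ROTATE $name (age ${age_days}d >= ${max_age_days}d)"
  fi
done <<'EOF' > /tmp/rotation-due.txt
db_password,2025-11-01,90
api_token,2026-02-01,60
webhook_secret,2026-01-01,30
EOF

cat /tmp/rotation-due.txt
# prints:
# ROTATE db_password (age 135d >= 90d)
# ROTATE webhook_secret (age 74d >= 30d)
```

Anything this check flags gets rotated automatically; the Slack notification and human confirmation happen after, not before.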

The Conversation With Arjun

After the incident, I had a one-on-one with Arjun. Not a performance conversation — a learning one. He was mortified. He'd been at the company three months, he'd tried to do something good, and he'd accidentally caused a security incident. That's a brutal way to start a career.

I told him two things. First: you did nothing wrong. You followed the normal contributor workflow. The system failed you. Second: you just learned something it takes most engineers five years to learn — that "private" on a hosting platform is not the same as "secret" in a security sense. Private repos get forked. Private repos get leaked. Private repos get acquired and their new owners make decisions about visibility you never anticipated.

"Private" is an access control, not a security guarantee. Treat every file in version control as potentially public. If it can't be public, it shouldn't be in version control.

He's now one of the team's strongest advocates for secrets hygiene. He reviews every new repository setup and checks the .gitignore before anything else. The best security engineers I've worked with all have a story like this. Now he has his.

What You Should Do Today

You don't need to have had this incident to protect against it. Here's a 30-minute audit that covers the most common failure modes:

  • Check if .env is in your git history, even if it's gitignored now. Run git log --all --full-history -- .env. If it appears in any commit, it's still accessible via git show regardless of your current .gitignore.
  • Add Gitleaks (or truffleHog) as a required PR check. Takes 15 minutes to set up. Catches secrets before they merge. Configure it to fail on any token/key pattern, not just well-known formats.
  • Audit your CONTRIBUTING.md. Does it mention secrets? It should. Add a clear section: never commit .env files, never include credentials in a PR, always check your fork's file list before pushing.
  • List all external contributors who have forked your repo. GitHub API: GET /repos/{owner}/{repo}/forks. You may be surprised how many exist and when they were last pushed to.
  • Run git-secrets or detect-secrets as a pre-commit hook in your standard dev setup docs. If you document how to set up the dev environment, include the hook installation. Make it the default, not the opt-in.
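The first bullet is easy to underestimate, so here is a self-contained demonstration in a scratch repo: deleting and gitignoring a file does nothing to the commit that introduced it.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Commit a secret, then "clean up" the way people usually do:
echo 'DB_PASSWORD=hunter2' > .env
git add .env && git commit -qm 'add env file'
git rm -q .env
echo '.env' > .gitignore
git add .gitignore && git commit -qm 'remove env file and gitignore it'

# Gone from the working tree, still one command away in history:
git show HEAD~1:.env
# prints: DB_PASSWORD=hunter2
```

This is why history rewriting (BFG, `git filter-repo`) plus rotation is the only real fix; deleting the file in a new commit just adds a second copy of the evidence.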

The worst part of this incident wasn't the six days of exposure. It was finding out that the underlying cause — a real secret committed to a semi-public repo — had existed for three years. The fork just finally made it visible.

Somewhere in your codebase right now, there's probably a secret that's been committed and forgotten. Find it before someone else does.
