# How to Capture and Share AI Discoveries Across Your Development Team (2025)

Your team is drowning in AI tokens. Everyone's using ChatGPT, Claude, or Copilot. Productivity metrics look great on paper. But ask a teammate about a problem someone solved with AI last week, and you get blank stares. The organization isn't learning; individual developers are, and the moment they context-switch, that knowledge vanishes.

This is the real cost of unrestricted AI adoption: dispersed intelligence that never crystallizes into organizational capability.

## The Problem: Individual Wins, Zero Organizational Memory

When developers use AI tools in isolation, three things happen:

  1. Repeated token spending: Team member A spends 50 tokens figuring out how to optimize a React query. Team member B spends 60 tokens solving the identical problem two weeks later.
  2. No institutional knowledge: The solution exists only in Slack history or a closed chat window. It's invisible to code review, documentation, or future hires.
  3. Lost context: The "why" behind a decision (edge cases, performance trade-offs, tested alternatives) never leaves the individual's mental model.

This happens because AI tools are conversational by design—great for solving problems in the moment, terrible for propagating discoveries.

## Step 1: Establish a Discovery Capture Protocol

Before tools, establish a lightweight process that developers actually follow.

### Create a Structured Template

Set up a GitHub Discussions forum or a lightweight wiki (Notion, Obsidian vault with Git sync) where developers post discoveries within 24 hours of solving something non-trivial:

# [Discovery]: Optimizing PostgreSQL Query Performance in Prisma ORM

**Date**: 2025-01-15
**Author**: @jane-dev
**Time Invested**: 30 minutes
**Tokens Used**: ~40 (Claude 3.5 Sonnet)

## Problem
N+1 queries in relationship loading causing P99 latency spikes.

## AI Prompt That Worked
"I'm using Prisma with PostgreSQL. I have a Post model with comments. When I fetch posts with `.include({ comments: true })`, I see separate queries per post. How do I batch this?"

## Solution
Use `select()` with nested relations instead of `include()`:

```typescript
const posts = await prisma.post.findMany({
  select: {
    id: true,
    title: true,
    comments: {
      select: { id: true, text: true }
    }
  }
});
```

## Why This Matters

  • Reduced query count from O(n) to O(1) batch query
  • Single round-trip to database
  • 60% latency reduction in staging

## Tested On

  • Prisma 5.8+
  • PostgreSQL 14+
  • Node 18+

## Caveats

  • Only works if you explicitly select fields (no wildcard)
  • Doesn't help with circular relationships

The template captures:
- **What was solved** (concrete problem)
- **How AI helped** (the actual prompt)
- **The answer** (code + explanation)
- **Context** (versions, constraints, performance impact)
- **Caveats** (where this breaks)

## Step 2: Integrate Discovery Capture into Code Review

Capture isn't enough if it's optional. Make it part of your workflow.

### GitHub Actions Workflow

Add a bot comment to PRs that touch unfamiliar or optimized code:

```yaml
name: AI Discovery Prompt
on:
  pull_request:
    types: [opened]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Add Comment
        if: contains(github.event.pull_request.title, 'optimization') || contains(github.event.pull_request.title, 'fix')
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '🤖 **Did AI help solve this?** If yes, add a discovery entry to `docs/ai-discoveries/` so the team learns from this.'
            })
```

This gentle reminder increases capture rate from ~5% to ~40% without being heavy-handed.

## Step 3: Central Index and Cross-Linking

Discoveries scattered across docs are still invisible.

### Create a Searchable Discovery Index

Use a simple `DISCOVERIES.md` at the repo root:

```markdown
# Team AI Discoveries Index

## Database Optimization
- [Prisma Query Batching with select()](./docs/ai-discoveries/prisma-batching.md)
- [PostgreSQL JSON Aggregation for Reports](./docs/ai-discoveries/postgres-json-agg.md)

## React Performance
- [useCallback vs useMemo: When to Use Each](./docs/ai-discoveries/react-callback-memo.md)
- [Fixing Hydration Mismatches in Next.js](./docs/ai-discoveries/nextjs-hydration.md)

## DevOps & Infrastructure
- [Docker Multi-Stage Build Size Reduction](./docs/ai-discoveries/docker-multistage.md)
```

Link to this index in:

  • Onboarding docs
  • Team Slack/Discord channel topic
  • GitHub org README
  • CI/CD documentation
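To keep the index trustworthy (a stale index is the fastest way to lose the team's confidence in it), a link check can run in CI. A minimal TypeScript sketch, assuming the index lives at `DISCOVERIES.md` and its links are repo-relative paths:

```typescript
// check-index.ts — flag index entries whose target file no longer exists.
// The filename and repo layout are assumptions based on the structure above.
import * as fs from "node:fs";

// Pull the target out of every markdown link: [text](target)
function extractLinks(markdown: string): string[] {
  return [...markdown.matchAll(/\[[^\]]*\]\(([^)]+)\)/g)].map((m) => m[1]);
}

// Relative links whose file is missing on disk are "broken"; external URLs are skipped.
function findBrokenLinks(markdown: string): string[] {
  return extractLinks(markdown).filter(
    (target) => !target.startsWith("http") && !fs.existsSync(target)
  );
}

const sampleIndex =
  "- [Prisma Query Batching with select()](./docs/ai-discoveries/prisma-batching.md)";
console.log(extractLinks(sampleIndex)[0]); // ./docs/ai-discoveries/prisma-batching.md
```

In CI, read `DISCOVERIES.md` with `fs.readFileSync` and fail the job whenever `findBrokenLinks` returns anything.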

### Add Searchable Metadata

If using a Git-backed wiki (Obsidian, GitBook, or MkDocs), tag each discovery:

```yaml
---
title: Prisma Query Batching with select()
tags:
  - database
  - prisma
  - performance
  - n+1-queries
techs:
  - prisma
  - postgresql
date: 2025-01-15
author: jane-dev
---
```

This enables filter/search tools to surface relevant discoveries during code review or architecture decisions.
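Those tags are also queryable without any wiki tooling. A minimal frontmatter tag extractor, as a sketch (a real setup would use a proper YAML parser such as `js-yaml`; this handles only the simple list form shown above):

```typescript
// Extract the `tags:` list from a discovery file's frontmatter.
// Handles only the flat "- item" list style shown above — an assumption, not a spec.
function extractTags(markdown: string): string[] {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return []; // no frontmatter block
  const tags: string[] = [];
  let inTags = false;
  for (const line of match[1].split("\n")) {
    if (/^tags:/.test(line)) { inTags = true; continue; }
    if (inTags && /^\s+-\s+/.test(line)) tags.push(line.replace(/^\s+-\s+/, "").trim());
    else if (inTags && !/^\s/.test(line)) inTags = false; // next top-level key ends the list
  }
  return tags;
}

const entry =
  "---\ntitle: Prisma Query Batching\ntags:\n  - database\n  - prisma\ndate: 2025-01-15\n---\n\n## Problem";
console.log(extractTags(entry)); // → ["database", "prisma"]
```

Run this over every file in `docs/ai-discoveries/` to build a tag-to-discovery map that review tooling can query.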

## Step 4: Pick One Lightweight System for Tool Integration

### Option A: GitHub Discussions + GitHub Pages

Free, integrated, discoverable.

  1. Enable Discussions in repository settings
  2. Create a "Category" called "AI Discoveries"
  3. Link from README and docs
  4. Use GitHub's search and filter

  • **Pros**: No extra tools, built into workflow, searchable
  • **Cons**: Limited formatting, not ideal for long-form content

### Option B: MkDocs + Git

Structured, version-controlled, professional.

```bash
mkdocs new ai-discoveries
cd ai-discoveries
# Edit mkdocs.yml
# Add markdown files to docs/
mkdocs serve  # Local preview
```
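A minimal `mkdocs.yml` to match, as a sketch (the site name and nav entries are placeholders, not a required layout):

```yaml
site_name: Team AI Discoveries
theme:
  name: material   # assumes the mkdocs-material package is installed; the default theme also works
nav:
  - Home: index.md
  - Database Optimization:
      - Prisma Query Batching: ai-discoveries/prisma-batching.md
```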

  • **Pros**: Beautiful output, full version control, offline-capable
  • **Cons**: Requires setup, minor maintenance

### Option C: Obsidian Vault + GitHub Sync

Maximum flexibility and personal knowledge management.

  1. Create shared Obsidian vault in GitHub repo
  2. Use `git pull` on team machines
  3. Each dev contributes locally, commits discoveries
  4. Configure backlinks for cross-discovery navigation

  • **Pros**: Powerful linking, local-first, low friction
  • **Cons**: Steeper learning curve for non-Obsidian users

## Step 5: Monthly Synthesis and Team Sync

Capture alone doesn't create learning. You need intentional synthesis.

### Monthly Discovery Review

Schedule 30 minutes every month (Friday morning works):

  1. One person reviews all new discoveries from the past month (5 min)
  2. Team discusses: Which discoveries affect our architecture? Which should inform standards? (15 min)
  3. Document decisions: If a discovery should become a team standard, link it from your coding standards or architecture decision log (10 min)

### Link to Real Decisions

When you codify a discovery, reference it explicitly:

```markdown
# Coding Standards: Query Optimization

All ORM queries must use explicit `select()` to prevent N+1 queries.

**See also**: [AI Discovery: Prisma Query Batching](../ai-discoveries/prisma-batching.md)
```

This creates a feedback loop: individual discovery → team learning → organizational standard.

## Common Pitfalls to Avoid

| Pitfall | Impact | Solution |
|---------|--------|----------|
| No template, freeform posts | Inconsistent, hard to reference | Use the structured template above |
| Capture buried in Slack/email | Knowledge is lost within 48 hours | Require a GitHub or wiki entry within 24 hours |
| Discovery index never updated | Index becomes stale, team stops trusting it | Assign one person monthly to refresh links |
| No time allocated for synthesis | Discoveries pile up, never inform decisions | Block 30 min/month on the calendar as non-negotiable |
| Discoveries only cover solutions, not failures | Team repeats mistakes | Encourage "AI led me down the wrong path" posts too |

## Measuring Success

After 2-3 months, you should see:

  • Reduced token spend: Developers reference discoveries instead of re-prompting (10-20% reduction typical)
  • Faster onboarding: New hires find solutions without asking
  • Better code review: "Have you checked the discovery for this pattern?" becomes common feedback
  • Informed architecture: Decisions reference actual experiment results, not opinion

The goal isn't perfect documentation. It's converting ephemeral AI conversations into organizational capital.

## Next: Automate Discovery Extraction

If you want to go further, consider tools that extract AI insights automatically:

  • LangChain callbacks: log all LLM calls with metadata via a callback handler
  • OpenAI API logging: Use temperature, top_p, and system prompt versioning to track what works
  • Custom middleware: Intercept API calls and surface novel results to a team channel
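The middleware idea can be sketched in a few lines (the names and the logging destination here are hypothetical; swap in your actual LLM client):

```typescript
// Hypothetical middleware: wrap any prompt-in/string-out LLM call and record
// every exchange so notable results can be surfaced to a team channel later.
type LlmCall = (prompt: string) => Promise<string>;
type LogEntry = { prompt: string; response: string; at: string };

function withDiscoveryLog(call: LlmCall, log: (entry: LogEntry) => void): LlmCall {
  return async (prompt) => {
    const response = await call(prompt);
    log({ prompt, response, at: new Date().toISOString() }); // e.g. append to a file or post to Slack
    return response;
  };
}

// Usage with a stubbed model call:
const entries: LogEntry[] = [];
const ask = withDiscoveryLog(
  async () => "use select() instead of include()",
  (e) => entries.push(e)
);
await ask("How do I batch Prisma queries?");
console.log(entries.length); // 1
```

Because the wrapper has the same signature as the original call, it can be dropped in front of any client without changing call sites.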

But start simple. The process matters more than the tool. A shared GitHub Discussions forum with a template and monthly 30-minute sync will outperform a fancy platform that nobody uses.

The question isn't "How much AI is the team using?" It's "What did we learn that actually changed how we work?" Build capture mechanisms around that.
