I paste slow SQL queries into Claude and get optimized versions back instantly
April 4, 2026 · Claude · 6 min read


I was staring at an 8-second query in our analytics dashboard. I had already tried adding an index and rewriting the WHERE clause. On a whim, I pasted the query plus the EXPLAIN ANALYZE output into Claude. Two minutes later I had a rewritten query using a CTE and a partial index suggestion that brought it down to 180ms. I have been using this workflow ever since.

The prompt that works

The key is providing Claude with the full context: the slow query, the EXPLAIN ANALYZE output, and your table schemas. Without the explain plan, Claude will guess at the problem. With it, Claude can see exactly what the query planner is doing wrong.

text
You are a PostgreSQL performance expert.

I have a slow query. I will provide:
1. The current query
2. EXPLAIN ANALYZE output
3. Relevant table schemas

Please:
1. Identify the specific performance problem (seq scan, bad join order, missing index, etc.)
2. Provide an optimized rewrite
3. Suggest any indexes that would help
4. Explain WHY the rewrite is faster

Current query:
[PASTE QUERY HERE]

EXPLAIN ANALYZE output:
[PASTE EXPLAIN ANALYZE OUTPUT HERE]

Table schemas:
[PASTE CREATE TABLE statements]

A real example

Here is a slow query I ran this month:

sql
-- Slow query: finds top 10 users by order value in the last 30 days
-- Runtime: 8.2 seconds

SELECT 
    u.id,
    u.email,
    COUNT(o.id) as order_count,
    SUM(o.total_cents) as total_value
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE o.created_at > NOW() - INTERVAL '30 days'
    AND o.status = 'completed'
GROUP BY u.id, u.email
ORDER BY total_value DESC
LIMIT 10;

EXPLAIN ANALYZE showed a sequential scan on orders (2.1M rows) and a hash join. Claude identified three issues and produced:

sql
-- Optimized: 180ms

WITH recent_orders AS (
    SELECT 
        user_id,
        COUNT(*) as order_count,
        SUM(total_cents) as total_value
    FROM orders
    WHERE created_at > NOW() - INTERVAL '30 days'
        AND status = 'completed'
    GROUP BY user_id
    ORDER BY total_value DESC
    LIMIT 10
)
SELECT 
    u.id,
    u.email,
    ro.order_count,
    ro.total_value
FROM recent_orders ro
JOIN users u ON u.id = ro.user_id
ORDER BY ro.total_value DESC;

-- Required index (add this):
CREATE INDEX idx_orders_status_created_user 
ON orders (status, created_at DESC, user_id, total_cents)
WHERE status = 'completed';

Claude's explanation: The original query joined all users with all recent completed orders before aggregating. The CTE aggregates orders first (using the new covering index), produces just 10 rows, and only then joins to users. The partial index (WHERE status = 'completed') indexes only completed orders, keeping it small and avoiding the status filter on the majority of queries, which only look at completed orders.
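After deploying a suggested index, it is worth confirming the planner actually uses it. One quick check (a sketch; the index name matches the one above) is PostgreSQL's per-index statistics view:

```sql
-- idx_scan increments each time the planner chooses this index.
-- If it stays at 0 under real traffic, the index is dead weight.
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE indexrelname = 'idx_orders_status_created_user';
```

Re-running EXPLAIN ANALYZE on the rewritten query is the other half of the verification: the Seq Scan node should be replaced by an Index Only Scan on the new index.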

The helper script

bash
#!/bin/bash
# sql-optimize.sh — save this somewhere on your PATH and make it executable

QUERY_FILE="$1"
EXPLAIN_FILE="$2"

if [ -z "$QUERY_FILE" ]; then
  echo "Usage: sql-optimize.sh <query.sql> [explain-output.txt]"
  exit 1
fi

CONTEXT=""
CONTEXT+="Current slow query:
```sql
$(cat "$QUERY_FILE")
```

"

if [ -n "$EXPLAIN_FILE" ]; then
  CONTEXT+="EXPLAIN ANALYZE output:
```
$(cat "$EXPLAIN_FILE")
```

"
fi

echo -e "$CONTEXT" | claude --print \
  --system "You are a PostgreSQL performance expert. Analyze slow queries and provide optimized rewrites with index suggestions. Explain your reasoning." \
  "Optimize this query for performance:"
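A hypothetical invocation, assuming the query and plan were saved to files in the current directory:

```shell
# explain.txt is optional, but Claude's diagnosis is far better with it.
./sql-optimize.sh slow_query.sql explain.txt
```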

Getting the EXPLAIN ANALYZE output

sql
-- Run this to get the full explain output
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT
    u.id,
    u.email
    -- ...rest of your slow query here
FROM ...;

The BUFFERS option shows cache hit rates, which helps Claude identify whether the problem is I/O bound (needs better indexes) or compute bound (needs query restructuring).
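In the plan output, the buffers information appears on lines like this one (numbers illustrative): `shared hit` counts pages served from PostgreSQL's buffer cache, while `read` counts pages fetched from the OS or disk.

```text
Buffers: shared hit=1520 read=48211
```

A high `read` share on a frequently run query suggests the working set is not staying in cache, which is exactly the situation a tighter index tends to fix.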

Types of optimizations Claude reliably suggests

  • CTE push-down — aggregating before joining instead of after
  • Partial indexes — indexes with WHERE clauses for selective queries
  • Covering indexes — including all columns needed by the query to avoid heap access
  • JOIN order rewriting — filtering smaller tables first
  • EXISTS vs IN — switching from IN (subquery) to EXISTS or JOIN for correlated subqueries
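As a sketch of the last pattern, using the same users and orders tables from above (the 'refunded' status value is hypothetical):

```sql
-- IN (subquery): the planner may materialize the whole subquery result
SELECT u.id, u.email
FROM users u
WHERE u.id IN (SELECT user_id FROM orders WHERE status = 'refunded');

-- EXISTS: can stop probing at the first matching order per user
SELECT u.id, u.email
FROM users u
WHERE EXISTS (
    SELECT 1 FROM orders o
    WHERE o.user_id = u.id AND o.status = 'refunded'
);
```

On recent PostgreSQL versions the planner often produces the same semi-join plan for both forms, so verify with EXPLAIN before assuming the rewrite helped.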

Claude does not replace understanding query plans, but it dramatically accelerates the diagnostic cycle. What used to take 30-60 minutes of trial and error now takes 5 minutes.
