Idempotency

Handle duplicate webhook deliveries safely with idempotent event processing.

What is Idempotency?

Idempotency means that processing the same webhook event multiple times has the same effect as processing it once. Because Hook Mesh provides at-least-once delivery guarantees, your webhook endpoints must be designed to handle duplicate deliveries.

Duplicate Deliveries Are Normal

Network failures, timeouts, and retries can cause the same webhook to be delivered more than once. Your application must handle this gracefully.

Why Idempotency Matters

Without idempotent processing, duplicate webhooks can cause serious problems:

💰 Financial Errors

A payment.succeeded event processed twice could credit a user's account twice.

📧 Duplicate Notifications

An order.shipped event processed twice could send the same shipping notification twice.

🔢 Data Corruption

A subscription.canceled event processed twice could decrement a counter twice.

Implementation Strategies

There are two common approaches to implementing idempotent webhook processing:

1️⃣

Track Processed Events (Recommended)

Store the event_id of each processed webhook in your database. Before processing, check if the event has already been handled.

Simple and reliable
Works for all event types
Audit trail of processed events
2️⃣

Naturally Idempotent Operations

Design operations that are inherently idempotent. For example, setting a user's email to a specific value is idempotent—doing it twice has the same result.

No extra storage needed
Not suitable for all operations
Requires careful design
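
As a minimal sketch of the email example above (the function names are illustrative, not part of Hook Mesh), compare an idempotent set with a non-idempotent increment:

```javascript
// Idempotent: setting a field to an absolute value.
// Applying it once or many times produces the same state.
function setEmail(user, email) {
  return { ...user, email };
}

// Not idempotent: the effect accumulates with each application.
function addCredit(account, amount) {
  return { ...account, balance: account.balance + amount };
}

const user = { id: 'user_1', email: 'old@example.com' };
const once = setEmail(user, 'new@example.com');
const twice = setEmail(once, 'new@example.com');
// once.email === twice.email, so a duplicate delivery is harmless

const account = { balance: 100 };
const credited = addCredit(addCredit(account, 50), 50);
// credited.balance === 200, not 150: a duplicate delivery corrupts state
```

Operations like `addCredit` must be guarded by event tracking; operations like `setEmail` are safe to replay as-is.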

Complete Implementation Example

Here's a production-ready implementation that tracks processed events in a database:

// Node.js + PostgreSQL example
import express from 'express';
import { Pool } from 'pg';
import crypto from 'crypto';

const app = express();
app.use(express.json()); // parse JSON bodies so req.body is populated
const db = new Pool({ connectionString: process.env.DATABASE_URL });

// Webhook endpoint
app.post('/webhooks/hookmesh', async (req, res) => {
  const { event_id, event_type, payload, timestamp } = req.body;

  // 1. Verify webhook signature first
  const signature = req.headers['x-hookmesh-signature'];
  if (!signature || !verifySignature(req.body, signature, process.env.WEBHOOK_SECRET)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  try {
    // 2. Start database transaction
    const client = await db.connect();

    try {
      await client.query('BEGIN');

      // 3. Check if event already processed
      const existing = await client.query(
        `SELECT id FROM processed_webhooks
         WHERE event_id = $1 FOR UPDATE`,
        [event_id]
      );

      if (existing.rows.length > 0) {
        // Event already processed - return success
        await client.query('COMMIT');
        console.log(`Duplicate event ${event_id} ignored`);
        return res.status(200).json({
          status: 'already_processed',
          event_id
        });
      }

      // 4. Process the webhook event
      await processEvent(client, event_type, payload);

      // 5. Mark event as processed
      await client.query(
        `INSERT INTO processed_webhooks
         (event_id, event_type, processed_at, payload)
         VALUES ($1, $2, NOW(), $3)`,
        [event_id, event_type, JSON.stringify(payload)]
      );

      // 6. Commit transaction
      await client.query('COMMIT');

      console.log(`Successfully processed event ${event_id}`);
      res.status(200).json({
        status: 'processed',
        event_id
      });

    } catch (error) {
      // Rollback on any error
      await client.query('ROLLBACK');
      throw error;
    } finally {
      client.release();
    }

  } catch (error) {
    console.error(`Webhook processing error:`, error);

    // Return 500 to trigger retry
    res.status(500).json({
      error: 'Processing failed',
      message: error.message
    });
  }
});

// Process event based on type
async function processEvent(client, eventType, payload) {
  switch (eventType) {
    case 'user.created':
      await client.query(
        'INSERT INTO users (id, email, name) VALUES ($1, $2, $3)',
        [payload.user_id, payload.email, payload.name]
      );
      break;

    case 'subscription.created':
      await client.query(
        `UPDATE users
         SET subscription_tier = $1, subscription_status = 'active'
         WHERE id = $2`,
        [payload.tier, payload.user_id]
      );
      break;

    case 'payment.succeeded':
      await client.query(
        `INSERT INTO payments (id, user_id, amount, status)
         VALUES ($1, $2, $3, 'completed')`,
        [payload.payment_id, payload.user_id, payload.amount]
      );
      break;

    default:
      console.warn(`Unknown event type: ${eventType}`);
  }
}

// Verify HMAC signature
function verifySignature(body, signature, secret) {
  const hmac = crypto.createHmac('sha256', secret);
  const digest = hmac.update(JSON.stringify(body)).digest('hex');
  const sigBuffer = Buffer.from(signature);
  const digestBuffer = Buffer.from(digest);

  // timingSafeEqual throws if the buffers differ in length, so check first
  if (sigBuffer.length !== digestBuffer.length) return false;
  return crypto.timingSafeEqual(sigBuffer, digestBuffer);
}

app.listen(3000, () => {
  console.log('Webhook server running on port 3000');
});

Database Table Schema

You'll need a table to track processed events. See the schema below.

Database Schema

Create a table to track processed webhook events:

-- PostgreSQL schema
CREATE TABLE processed_webhooks (
  id SERIAL PRIMARY KEY,
  event_id VARCHAR(255) UNIQUE NOT NULL,
  event_type VARCHAR(100) NOT NULL,
  processed_at TIMESTAMP NOT NULL DEFAULT NOW(),
  payload JSONB
);

-- The UNIQUE constraint on event_id already creates the index
-- used for deduplication lookups.

-- Index to support cleanup and auditing queries by date
CREATE INDEX idx_processed_at ON processed_webhooks (processed_at);

Column Purposes:

event_id

Unique identifier from Hook Mesh. Primary deduplication key.

event_type

Type of event (e.g., user.created). Useful for debugging and analytics.

processed_at

When the event was processed. Used for cleanup and auditing.

payload

Optional: Store full payload for debugging and replay.

Handling Race Conditions

When multiple deliveries of the same event arrive simultaneously, both handlers can pass the duplicate check before either has recorded the event (SELECT ... FOR UPDATE only locks rows that already exist). Instead, insert the event_id first, inside a transaction, and let the UNIQUE constraint arbitrate:

// PostgreSQL: claim the event atomically via the UNIQUE constraint
async function ensureIdempotent(client, eventId, eventType) {
  // ON CONFLICT DO NOTHING returns no row if another transaction
  // has already claimed (or is concurrently claiming) this event_id
  const result = await client.query(
    `INSERT INTO processed_webhooks (event_id, event_type, processed_at)
     VALUES ($1, $2, NOW())
     ON CONFLICT (event_id) DO NOTHING
     RETURNING id`,
    [eventId, eventType]
  );

  return result.rows.length > 0; // true = we won the race, safe to process
}

// Usage in webhook handler
await client.query('BEGIN');

if (await ensureIdempotent(client, event_id, event_type)) {
  // This process won the race - process the event
  await processEvent(client, event_type, payload);
  await client.query('COMMIT');
  res.status(200).json({ status: 'processed' });
} else {
  // Another process got here first
  await client.query('COMMIT');
  res.status(200).json({ status: 'already_processed' });
}

Always Use Transactions

Without transactions, you could mark an event as processed but fail to actually process it. Use BEGIN/COMMIT to ensure atomicity.

Cleanup Strategy

The processed_webhooks table will grow over time. Implement a cleanup strategy to manage storage:

Option 1: Automated Cleanup Job

Run a daily cron job to delete old records:

-- Delete events older than 90 days
DELETE FROM processed_webhooks
WHERE processed_at < NOW() - INTERVAL '90 days';

-- Or use partitioning for large tables
-- (PostgreSQL 10+)
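
One way to schedule that DELETE is a crontab entry; the psql invocation and the DATABASE_URL environment variable here are assumptions about your deployment:

```shell
# Run the cleanup daily at 03:00 (add via crontab -e)
0 3 * * * psql "$DATABASE_URL" -c "DELETE FROM processed_webhooks WHERE processed_at < NOW() - INTERVAL '90 days';"
```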

Option 2: Partitioning

For high-volume applications, partition the table by date:

-- Create partitioned table (PostgreSQL)
CREATE TABLE processed_webhooks (
  id SERIAL,
  event_id VARCHAR(255) NOT NULL,
  event_type VARCHAR(100) NOT NULL,
  processed_at TIMESTAMP NOT NULL,
  payload JSONB
) PARTITION BY RANGE (processed_at);

-- Create monthly partitions
CREATE TABLE processed_webhooks_2026_01
  PARTITION OF processed_webhooks
  FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');

-- Drop old partitions instead of deleting rows
DROP TABLE processed_webhooks_2025_01;

Option 3: Archive to Cold Storage

Move old records to cheaper storage (S3, BigQuery) before deletion for long-term compliance and debugging.

💡

Retention Recommendation

Keep processed events for at least 48 hours (Hook Mesh's retry window). Most applications keep 30-90 days for debugging.

Alternative: Idempotency Keys

Instead of tracking event IDs, you can use the resource ID from the payload as an idempotency key for certain operations:

// Example: Use user_id as idempotency key
async function handleUserCreated(payload) {
  const { user_id, email, name } = payload;

  // Use INSERT ... ON CONFLICT to handle duplicates
  const result = await db.query(
    `INSERT INTO users (id, email, name, created_at)
     VALUES ($1, $2, $3, NOW())
     ON CONFLICT (id) DO NOTHING
     RETURNING id`,
    [user_id, email, name]
  );

  if (result.rows.length === 0) {
    console.log(`User ${user_id} already exists - duplicate ignored`);
    return { status: 'already_processed' };
  }

  console.log(`User ${user_id} created successfully`);
  return { status: 'processed' };
}

When to Use Resource IDs:

Creating Records

Use user_id, order_id, etc. as natural idempotency keys.

Incrementing Counters

Don't use for operations whose effect accumulates. A duplicate delivery could increment the same counter twice.

Sending Notifications

Don't use for side effects. You could send the same email twice.

Combine Both Approaches

For maximum safety, use resource IDs for natural idempotency AND track processed event IDs. This provides defense in depth.
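
The two layers can be sketched with in-memory stand-ins (a real handler would use the processed_webhooks table and an ON CONFLICT upsert as shown earlier; the names below are illustrative):

```javascript
const processedEvents = new Set(); // layer 1: dedup by event_id
const users = new Map();           // layer 2: keyed upserts are naturally idempotent

function handleUserCreated(event) {
  if (processedEvents.has(event.event_id)) {
    return 'already_processed'; // layer 1 caught the duplicate
  }
  // Keyed write: replaying the same payload cannot create a second record
  users.set(event.payload.user_id, { email: event.payload.email });
  processedEvents.add(event.event_id);
  return 'processed';
}

const event = {
  event_id: 'evt_1',
  payload: { user_id: 'user_1', email: 'a@example.com' }
};
handleUserCreated(event); // 'processed'
handleUserCreated(event); // 'already_processed'; users.size stays 1
```

Even if layer 1 is bypassed (for example, a replayed event with a fresh event_id), layer 2's keyed write still prevents a duplicate record.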

Testing Idempotency

Always test that your webhook handler is truly idempotent:

// Test: Send the same webhook twice
import { describe, test, expect } from 'vitest';

describe('Webhook Idempotency', () => {
  test('processes webhook once when sent twice', async () => {
    const webhook = {
      event_id: 'evt_test_123',
      event_type: 'user.created',
      payload: {
        user_id: 'user_test_456',
        email: 'test@example.com',
        name: 'Test User'
      }
    };

    // Send webhook first time (this test assumes signature verification
    // is stubbed out or the request is signed in the test environment)
    const response1 = await fetch('http://localhost:3000/webhooks/hookmesh', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(webhook)
    });

    expect(response1.status).toBe(200);
    const result1 = await response1.json();
    expect(result1.status).toBe('processed');

    // Send same webhook again (duplicate)
    const response2 = await fetch('http://localhost:3000/webhooks/hookmesh', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(webhook)
    });

    expect(response2.status).toBe(200);
    const result2 = await response2.json();
    expect(result2.status).toBe('already_processed');

    // Verify user was only created once
    const user = await db.query(
      'SELECT * FROM users WHERE id = $1',
      ['user_test_456']
    );
    expect(user.rows).toHaveLength(1);

    // Verify only one processed_webhooks record
    const processed = await db.query(
      'SELECT * FROM processed_webhooks WHERE event_id = $1',
      ['evt_test_123']
    );
    expect(processed.rows).toHaveLength(1);
  });

  test('handles concurrent duplicate webhooks', async () => {
    const webhook = {
      event_id: 'evt_test_789',
      event_type: 'payment.succeeded',
      payload: {
        payment_id: 'pay_test_101',
        amount: 5000,
        user_id: 'user_test_202'
      }
    };

    // Send same webhook 10 times concurrently
    const promises = Array(10).fill(null).map(() =>
      fetch('http://localhost:3000/webhooks/hookmesh', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(webhook)
      })
    );

    const responses = await Promise.all(promises);

    // All should return 200
    responses.forEach(response => {
      expect(response.status).toBe(200);
    });

    // Verify payment was only recorded once
    const payments = await db.query(
      'SELECT * FROM payments WHERE id = $1',
      ['pay_test_101']
    );
    expect(payments.rows).toHaveLength(1);
    expect(payments.rows[0].amount).toBe(5000);
  });
});

Best Practices

1️⃣

Always Track event_id

The event_id field is guaranteed unique by Hook Mesh. Use it as your primary deduplication key.

2️⃣

Use Database Transactions

Wrap both the idempotency check and business logic in a single transaction. This ensures atomicity.

3️⃣

Return 200 for Duplicates

When you detect a duplicate, return HTTP 200 with status: 'already_processed'. This tells Hook Mesh the delivery was successful.

4️⃣

Test Race Conditions

Send the same webhook 10+ times concurrently in your tests. Verify only one operation completed.

5️⃣

Clean Up Old Records

Implement a cleanup strategy to prevent unbounded table growth. 30-90 days is typical.

6️⃣

Store Full Payload (Optional)

Storing the full webhook payload in processed_webhooks helps with debugging and allows you to replay events if needed.

Common Pitfalls

Checking After Processing

Don't check for duplicates after processing the event. Check FIRST, then process within the same transaction.

Using Timestamps as Keys

Don't use timestamp or created_at for deduplication. Clock skew and retries make this unreliable.

Relying on In-Memory Cache

Don't use Redis or in-memory caches as your only idempotency mechanism. Cache eviction or server restarts can cause duplicates.

Marking Processed on Failure

Don't mark an event as processed if the business logic fails. Let the error propagate so Hook Mesh retries.

Related Documentation