How we store API keys (and why we don't)
A short tour through the API key flow: SHA-256 hashing, constant-time comparison, atomic credit deduction, and why your plaintext key is shown exactly once.
This is a quick tour through how SocialRouter handles API keys end-to-end. We thought about showing it as a system diagram, but writing it out is more honest about the tradeoffs we made.
The shape of a key
SocialRouter API keys look like this:
```
sr_prod_aF93kn28dnQpMzKxQw7Tn8kd
```

That's three parts: `sr` (brand prefix), `prod` (environment — also `dev`, `test`, `stg`), and 24 URL-safe base64 characters of randomness. That's 144 bits of entropy, plenty for anything we'd do with it.
The environment tag exists so a key accidentally pasted into a Slack channel or a stack trace is immediately recognizable as a SocialRouter key, and so we can do prefix-based scanning if needed in the future. GitHub does this with ghp_, Stripe with sk_live_. The pattern is well-trodden.
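Recognizing the shape can be made mechanical. A minimal sketch — the regex, helper name, and example key are ours for illustration, not a shipped SocialRouter API:

```typescript
// Hypothetical parser for the sr_<env>_<random> key shape described above.
// URL-safe base64 alphabet: A-Z, a-z, 0-9, "-" and "_"; suffix is 24 chars.
const KEY_RE = /^sr_(prod|dev|test|stg)_([A-Za-z0-9_-]{24})$/;

function parseKey(key: string): { env: string; suffix: string } | null {
  const m = KEY_RE.exec(key);
  return m ? { env: m[1], suffix: m[2] } : null;
}
```

A scanner looking for leaked keys in logs or pasted text only needs the `sr_` prefix; the full regex is for strict validation at the API edge.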
What we store, and what we don't
We never store the plaintext key. When you create one, we:
- Generate the random suffix with `crypto.randomBytes(18).toString("base64url")`
- SHA-256 hash the full string
- Insert `(key_hash, key_prefix, key_suffix)` into the `api_keys` table
- Return the plaintext exactly once in the HTTP response
That's it. The plaintext exists in your browser, in our response, in your clipboard if you copy it, and on whatever you paste it into. After that, it's only in your environment. We literally cannot recover it because we never had it past the response.
The dashboard shows you sr_prod_••••••••n8kd — the prefix and the last 4 chars, enough to disambiguate which key is which without revealing anything useful to a shoulder-surfer.
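The create-time flow above can be sketched in a few lines. This is our illustration under the stated rules — the helper names and return shape are ours, not the actual module:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Generate a key, derive the row that gets stored, and never keep the plaintext.
function createKey(env: string) {
  const suffix = randomBytes(18).toString("base64url"); // 24 chars, 144 bits
  const plaintext = `sr_${env}_${suffix}`;
  return {
    plaintext, // returned exactly once in the HTTP response, never persisted
    row: {
      key_hash: createHash("sha256").update(plaintext).digest("hex"),
      key_prefix: `sr_${env}`,
      key_suffix: suffix.slice(-4), // last 4 chars, for the dashboard mask
    },
  };
}

// What the dashboard renders: prefix plus last 4, nothing recoverable.
function maskKey(prefix: string, last4: string): string {
  return `${prefix}_${"•".repeat(8)}${last4}`;
}
```

Note that the hash covers the full string including the prefix, so the same random suffix under two environments would still produce distinct rows.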
Validating an incoming request
When a request comes in to /v1/social/read, the server:
- Extracts the key from `Authorization: Bearer sr_prod_xxx`
- SHA-256 hashes it
- Looks up `api_keys WHERE key_hash = $1 AND status = 'active'` via the service-role Supabase client
- If miss → 401 with `invalid_credentials`
The lookup is indexed (`create index api_keys_hash_idx on public.api_keys(key_hash) where status = 'active'`) so it's O(1) for practical purposes. We don't iterate, we don't string-compare. The hash is the primary lookup key, and because we hash whatever the client presents, a leaked table of hashes is far less dangerous than it sounds: submitting a stolen hash as a key just gets hashed again and misses, and there's no rainbow table for SHA-256 of 24-character random strings, so the stored hashes don't reverse into usable keys. We'd still revoke everything after a breach, but the window is nothing like a plaintext leak.
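The validation path is small enough to sketch end to end. Here the database query is injected as a function so the sketch stays self-contained; in our handler that role is played by the service-role Supabase client query shown above. The header regex and row shape are illustrative assumptions:

```typescript
import { createHash } from "node:crypto";

function hashKey(plaintext: string): string {
  return createHash("sha256").update(plaintext).digest("hex");
}

type KeyRow = { id: string; user_id: string };
// Stand-in for: from("api_keys").select(...).eq("key_hash", h).eq("status", "active")
type Lookup = (keyHash: string) => Promise<KeyRow | null>;

// Returns the key row on a hit, null otherwise; callers map null to a
// 401 with invalid_credentials.
async function validateKey(
  authorization: string | undefined,
  lookup: Lookup,
): Promise<KeyRow | null> {
  const bearer = authorization?.match(/^Bearer (sr_[A-Za-z0-9_-]+)$/)?.[1];
  if (!bearer) return null; // missing or malformed header
  return lookup(hashKey(bearer));
}
```

Nothing in this path ever compares the plaintext key against stored data; only digests meet the database.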
The constant-time wrinkle
For string comparison anywhere in the auth path, we use Node's `crypto.timingSafeEqual()` rather than `===`. Standard timing-attack hardening: a naive `===` can leak how many leading characters matched if an attacker is patient enough to measure microsecond-scale differences in response time. `timingSafeEqual` takes the same amount of time for equal-length inputs regardless of where the mismatch is.
For a hashed lookup like ours this is mostly belt-and-suspenders, since we use the database index rather than in-process comparison. But the helper is in `lib/api-keys.ts`, exported as `safeEquals()`, and we use it anywhere we compare key-related strings.
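A sketch of what such a helper can look like — our illustration, not necessarily the shipped `lib/api-keys.ts` body. The wrinkle worth knowing: `timingSafeEqual` throws on buffers of different lengths, so the wrapper has to handle that case without reintroducing an early exit:

```typescript
import { timingSafeEqual } from "node:crypto";

// Constant-time string comparison. On a length mismatch we still run a
// comparison (a buffer against itself) before returning false, so the
// work done doesn't depend on where or whether the strings differ.
function safeEquals(a: string, b: string): boolean {
  const bufA = Buffer.from(a);
  const bufB = Buffer.from(b);
  if (bufA.length !== bufB.length) {
    timingSafeEqual(bufA, bufA); // burn comparable time, then fail
    return false;
  }
  return timingSafeEqual(bufA, bufB);
}
```

Length itself still leaks through this pattern; that's acceptable here because key lengths are public anyway.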
Charging for the request
Once we've validated the key and looked up the user, we charge for the request. This is where the millicents come in. Money in our database is bigint millicents: 1 cent = 1000 millicents. That gives us sub-cent precision where we need it without floating point (a Reddit read is 1000 millicents — exactly one cent, $0.01).
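The unit conversions are plain integer math. A sketch in TypeScript `bigint` terms — constant and helper names are ours, and the formatter assumes non-negative amounts:

```typescript
// Money as bigint millicents: 1 cent = 1_000 millicents, $1 = 100_000.
const MILLICENTS_PER_CENT = 1_000n;
const MILLICENTS_PER_DOLLAR = 100_000n;

// Convert a dollar amount with cent granularity; rounding via cents on the
// way in avoids float drift (0.01 * 100 is not exactly 1 in IEEE 754).
function dollarsToMillicents(dollars: number): bigint {
  return BigInt(Math.round(dollars * 100)) * MILLICENTS_PER_CENT;
}

// Render a non-negative millicent amount as dollars with 5 decimal places.
function formatMillicents(m: bigint): string {
  const whole = m / MILLICENTS_PER_DOLLAR;
  const frac = (m % MILLICENTS_PER_DOLLAR).toString().padStart(5, "0");
  return `$${whole}.${frac}`;
}
```

Everything that touches a balance stays in `bigint`; floats only appear at the display edge, if at all.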
Deduction is atomic via a Postgres function:
```sql
select public.deduct_credits(
  p_user_id => 'uuid-here',
  p_cost_millicents => 1000
);
-- returns true if charged, false if insufficient
```

The function locks the user's credits row, decrements the free tier first if any remains, otherwise decrements the paid balance, and returns a boolean. All in one transaction. No race condition where two requests both see "100 credits remaining" and both succeed.
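On the caller side, the boolean is the whole contract. A sketch of how a handler can consume it — in our code the call is `supabase.rpc("deduct_credits", ...)`; here the RPC is injected as a function so the shape is testable without a database, and the error handling policy is our illustration:

```typescript
type RpcResult = { data: boolean | null; error: unknown };
type Rpc = (fn: string, args: Record<string, unknown>) => Promise<RpcResult>;

// Charge before doing any upstream work. A DB failure is surfaced (we'd
// rather fail the request than skip billing); `false` means insufficient
// credits and the handler should reject without calling the platform.
async function chargeRequest(
  rpc: Rpc,
  userId: string,
  costMillicents: number,
): Promise<boolean> {
  const { data, error } = await rpc("deduct_credits", {
    p_user_id: userId,
    p_cost_millicents: costMillicents,
  });
  if (error) throw error;
  return data === true;
}
```

Because the decrement and the check live in one locked transaction server-side, the caller never needs a read-then-write sequence of its own.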
Logging the request
After the actual work happens (Reddit fetch, response normalization), we insert into usage_logs:
```js
{
  user_id, api_key_id,
  endpoint: "read",
  platform: "reddit",
  status_code: 200,
  latency_ms: 142,
  cost_millicents: 1000,
  metadata: { source: "r/programming", limit: 25 }
}
```

We log status, latency, cost, and a small metadata blob. We don't log the response body, ever. Logs are retained for 30 days for billing reconciliation and abuse detection, then deleted.
The dashboard's usage page is just SQL aggregations against this table grouped by day, endpoint, and platform. Cheap, accurate, no caching layer needed at our current volume.
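For a feel of what that aggregation does, here is the same grouping sketched over in-memory rows. The field names follow the log shape above except `created_at`, which is an assumed insert timestamp column; the bucketing mirrors the SQL `group by` on day, endpoint, and platform:

```typescript
type UsageRow = {
  endpoint: string;
  platform: string;
  cost_millicents: number;
  created_at: string; // ISO 8601, assumed column
};

// Group rows by (day, endpoint, platform), summing requests and cost —
// the in-memory analogue of the dashboard's SQL aggregation.
function aggregateUsage(rows: UsageRow[]) {
  const buckets = new Map<string, { requests: number; cost_millicents: number }>();
  for (const r of rows) {
    const day = r.created_at.slice(0, 10); // YYYY-MM-DD
    const key = `${day}|${r.endpoint}|${r.platform}`;
    const b = buckets.get(key) ?? { requests: 0, cost_millicents: 0 };
    b.requests += 1;
    b.cost_millicents += r.cost_millicents;
    buckets.set(key, b);
  }
  return buckets;
}
```

In production the database does this directly, which is exactly why no caching layer is needed at this volume.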
Refunds on failure
If Reddit returns a 5xx or we time out hitting them, we refund the credit. The deduction function takes a negative cost to undo:
```js
await supabase.rpc("deduct_credits", {
  p_user_id: userId,
  p_cost_millicents: -cost,
});
```

Best-effort — if the refund itself fails for some reason (DB hiccup), we log it and move on rather than block the response. Worst case: you're out a cent and we'll see the anomaly in the logs.
What's next
A few things on the roadmap that aren't shipped yet:
- Per-key rate limits. Currently rate-limited at the account level. Per-key (so a leaked staging key can't burn through prod quota) is coming.
- Key rotation API. Right now you revoke and create. We want a one-shot rotate that returns a new key while leaving the old one valid for 5 minutes.
- Scopes. A key that can only read from `r/specific-sub`, or only write to a single platform. Useful for multi-tenant apps embedding SocialRouter.
Questions or spotting a mistake? Email security@socialrouter.ai. We answer.