Designing a High-Performance ACM Service: Cache-First Authorization

How we built an enterprise-grade Access Control Management (ACM) service using FastAPI, Redis, and PostgreSQL to eliminate repeated database permission checks while preserving strong security guarantees.

Shahid Malik · 7 Jan, 2026 · 12 min read

Introduction

Authorization is not just another feature; it is foundational infrastructure.

As systems scale, the number of users, roles, permissions, and APIs grows rapidly. Every incoming request needs an authorization decision, often before any meaningful business logic runs. If this layer is slow, unclear, or inconsistent, the entire system suffers.

In many backend systems, authorization logic evolves organically. Permission checks creep into services, repositories, and controllers. Database queries multiply. Latency increases. Debugging becomes painful.

We hit that exact wall.

This blog explains how we solved the multiple-database-query problem for permission checks by designing a cache-first Access Control Management (ACM) service that centralizes authorization and treats permissions as what they really are: read-heavy, rarely-changing data.

The Problem with Naive RBAC Implementations

Before ACM, permission checks lived directly inside business services.

For every API request:

  • The database was queried to verify permissions
  • The same permission data was fetched repeatedly
  • Authorization logic was scattered across services
  • Each service implemented checks slightly differently

This approach was functionally correct — but architecturally flawed.

The Resulting Problems

  • Increased database load
  • Higher request latency
  • Poor scalability under traffic spikes
  • Difficult auditing and debugging
  • Tight coupling between business logic and authorization

The core issue was clear:

Permissions are read-heavy and rarely updated, yet they were treated like transactional data.

That mismatch became a systemic bottleneck.

Introducing ACM: Centralized, Cache-First Authorization

To fix this, we introduced ACM (Access Control Management) as a dedicated authorization layer.

ACM acts as a single, authoritative place to answer one question:

"Can this user perform this action on this resource?"

Instead of every service querying the database, all permission checks flow through ACM, which aggressively caches permissions and minimizes database access.

Key Objectives

  • Eliminate repeated permission database queries
  • Centralize authorization logic
  • Decouple authorization from business services
  • Guarantee consistent enforcement across the system
  • Preserve strong security guarantees

ACM is implemented as an internal module today for performance reasons, but its design allows it to be extracted into a standalone service later without refactoring.

Core ACM Model

ACM is intentionally opinionated to keep authorization predictable and secure.

Design Constraints

  • A user belongs to exactly one organization at a time
  • Superadmin users operate outside organization scope
  • Permissions are exact-match only (no wildcards)
  • Authorization is fail-closed by default

These constraints reduce ambiguity and make authorization decisions deterministic.

Permission Structure

Permissions follow a strict, hierarchical naming convention:

text
<module_group>.<module_name>.<action>

Examples

  • users.user.read_all
  • prospects.prospect.delete
  • organizations.organization.update

Why This Matters

  • Consistency across teams
  • Easy auditing and governance
  • Deterministic permission checks
  • Efficient caching and lookup

Every permission resolves to a single string comparison: no rule engines, no runtime evaluation.
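The naming convention can be captured in a small helper. This is an illustrative sketch, not code from the actual service; the names `make_permission` and `PERMISSION_PATTERN` are assumptions:

```python
import re

# Hypothetical validator mirroring the <module_group>.<module_name>.<action> convention.
PERMISSION_PATTERN = re.compile(r"^[a-z_]+\.[a-z_]+\.[a-z_]+$")

def make_permission(module_group: str, module_name: str, action: str) -> str:
    """Compose a permission string and validate it against the naming convention."""
    permission = f"{module_group}.{module_name}.{action}"
    if not PERMISSION_PATTERN.match(permission):
        raise ValueError(f"Invalid permission: {permission}")
    return permission

# Exact-match check: a single string comparison against the user's permission set.
granted = {"users.user.read_all", "prospects.prospect.delete"}
assert make_permission("users", "user", "read_all") in granted
assert make_permission("users", "user", "delete") not in granted
```

Because every permission is a flat string, the check never needs pattern matching or rule evaluation, only set membership.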

Why Cache-First Authorization Is Required

Permission checks happen on almost every request.

Permission updates happen rarely.

ACM explicitly optimizes for this reality.

Cache-First Strategy

  • Check permissions in Redis
  • If cache hit → return immediately
  • If cache miss → fall back to the database
  • Populate cache
  • Serve all subsequent requests from cache

PostgreSQL remains the source of truth, but Redis handles the hot path.
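The strategy above can be sketched in a few lines. Here a plain dict of sets stands in for Redis and a callable stands in for the PostgreSQL query; class and method names are illustrative, not the service's actual API:

```python
from typing import Callable, Set

class PermissionCache:
    """Cache-first permission lookup. A dict of sets stands in for Redis here;
    in production the same logic runs against Redis SETs (SADD / SISMEMBER)."""

    def __init__(self, fetch_from_db: Callable[[str], Set[str]]):
        self._store = {}                    # stand-in for Redis
        self._fetch_from_db = fetch_from_db  # PostgreSQL remains the source of truth

    def has_permission(self, cache_key: str, permission: str) -> bool:
        cached = self._store.get(cache_key)
        if cached is not None:              # cache hit: O(1) membership check
            return permission in cached
        # Cache miss: fall back to the database, then populate the cache
        permissions = self._fetch_from_db(cache_key)
        self._store[cache_key] = permissions
        return permission in permissions

# Usage: the database is hit once; every later check is served from cache.
calls = []
def fake_db(key):
    calls.append(key)
    return {"users.user.read_all"}

cache = PermissionCache(fake_db)
cache.has_permission("user:permissions:org1:u1", "users.user.read_all")
cache.has_permission("user:permissions:org1:u1", "users.user.delete")
assert len(calls) == 1
```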

Why Redis SET + SISMEMBER Was Chosen

We evaluated multiple caching approaches and chose Redis SETs with SISMEMBER.

Why This Works So Well

  • O(1) membership checks
  • No payload deserialization
  • Minimal memory overhead
  • Extremely fast under high concurrency

Since every request triggers a permission check, even micro-optimizations matter at scale.
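The difference is easy to see in miniature. Below, a JSON string models the rejected approach (cache the permission list as one blob, e.g. via Redis GET) and a native set models the chosen one (mirroring Redis SADD + SISMEMBER); this is an in-memory illustration, not real Redis calls:

```python
import json

# Rejected: one JSON payload per user, which forces full deserialization
# and a linear scan on every single check.
json_payload = json.dumps(["users.user.read_all", "prospects.prospect.delete"])

def check_via_json(payload: str, permission: str) -> bool:
    return permission in json.loads(payload)  # parse everything, then O(n) scan

# Chosen: a set, like a Redis SET queried with SISMEMBER.
permission_set = {"users.user.read_all", "prospects.prospect.delete"}

def check_via_set(perms: set, permission: str) -> bool:
    return permission in perms  # O(1), no deserialization
```

Both return the same answer; the set version just skips the parse-and-scan cost that every request would otherwise pay.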

Cache Key Strategy

Each user's permissions are cached as a Redis SET.

Regular Users

text
user:permissions:{org_id}:{user_id}

Superadmin Users

text
user:permissions:superadmin:{user_id}

Cache Characteristics

  • TTL: Infinite
  • Invalidation: Event-driven
  • Lookup: SISMEMBER

Permissions remain cached until something meaningful changes.
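A key-builder for the two patterns above might look like the following; the function name is an assumption, but the key formats match the ones shown:

```python
from typing import Optional

def permission_cache_key(user_id: str, org_id: Optional[str] = None,
                         *, superadmin: bool = False) -> str:
    """Build the Redis key holding a user's permission SET.
    Superadmins operate outside organization scope, so their key omits org_id."""
    if superadmin:
        return f"user:permissions:superadmin:{user_id}"
    if org_id is None:
        raise ValueError("org_id is required for non-superadmin users")
    return f"user:permissions:{org_id}:{user_id}"
```

Scoping regular users' keys by organization means an organization-level change can target exactly the affected keys during invalidation.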

Complete Permission Check Flow

Below is the end-to-end permission resolution flow, from request entry to authorization decision:

text
1. User makes API request with JWT token
   ↓
2. FastAPI extracts token → get_auth_context_from_token()
   ↓
3. Router dependency: require_permission("users.user", "read_all")
   ↓
4. PermissionClient.has_permission() called
   ↓
5. LocalPermissionClient → acm_require_permission()
   ↓
6. PermissionCache.has_permission() checks Redis:
   ├─ Redis SISMEMBER check (O(1) lookup)
   ├─ If found → Return True/False
   └─ If cache miss → Continue to DB fallback
   ↓
7. Database fallback:
   ├─ Fetch user.permissions_json from DB
   ├─ Flatten permissions to Set[str]
   ├─ Cache in Redis (SET with SADD)
   └─ Check permission in flattened set
   ↓
8. Audit logging (if enabled):
   ├─ Denied permissions: Always logged
   └─ Granted permissions: Sampled (configurable rate)
   ↓
9. Result:
   ├─ Permission granted → Return AuthContext, continue request
   └─ Permission denied → Raise PermissionDeniedError (403)
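Step 8's sampling rule (always log denials, sample grants) reduces to one small decision function. This is a sketch; the name `should_log` and the default rate are assumptions:

```python
import random

def should_log(granted: bool, sample_rate: float = 0.01) -> bool:
    """Audit-log decision: denied permissions are always logged,
    granted permissions are sampled at a configurable rate."""
    if not granted:
        return True                      # denials are always audited
    return random.random() < sample_rate  # grants are sampled
```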

From the API developer's perspective, authorization becomes a single declarative line:

text
Depends(require_permission("prospects.prospect", "delete"))

No database logic. No conditionals. No duplication.
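A stripped-down version of that dependency factory is sketched below. To stay self-contained it omits FastAPI itself; in the real service the returned callable is wrapped in `Depends(...)` and the error maps to an HTTP 403:

```python
class PermissionDeniedError(Exception):
    """Raised on a failed check; maps to HTTP 403 in the real service."""

def require_permission(resource: str, action: str):
    """Factory producing a checker for '<resource>.<action>'.
    In the service this is a FastAPI dependency; a plain callable here."""
    permission = f"{resource}.{action}"

    def checker(user_permissions: set) -> None:
        if permission not in user_permissions:  # fail closed, exact match only
            raise PermissionDeniedError(permission)
    return checker

check = require_permission("prospects.prospect", "delete")
check({"prospects.prospect.delete"})  # passes silently when granted
```

The factory closes over the composed permission string once, so each request pays only a set lookup.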

Cache Invalidation Strategy

Permissions are cached indefinitely, but never trusted blindly.

Invalidation Triggers

  • User deactivation
  • Role assignment changes
  • Permission updates
  • Organization membership changes

Invalidation runs asynchronously in the background, keeping request latency low.
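Event-driven invalidation amounts to deleting the affected user's cached SET so the next check repopulates it from PostgreSQL. A minimal sketch, with a dict standing in for Redis DEL and an illustrative class name:

```python
class PermissionInvalidator:
    """Drops a user's cached permission SET when a trigger fires
    (deactivation, role change, permission update, org change)."""

    def __init__(self, store: dict):
        self._store = store  # stand-in for a Redis client

    def invalidate_user(self, cache_key: str) -> None:
        # In production this runs as a background task, so the request
        # that triggered the change does not pay the invalidation cost.
        self._store.pop(cache_key, None)
```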

Fail-Closed Security Model

  • Redis failure → Database fallback
  • Database failure → Access denied
  • Permission resolution failure → Access denied

No permission is ever granted unless it is explicitly verified.

Consistency is eventual, but security is never compromised.
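The fallback ladder above can be expressed directly. This sketch takes the cache and database lookups as callables (an assumption for illustration) so the failure paths are explicit:

```python
def resolve_permission(cache_check, db_check, permission: str) -> bool:
    """Fail-closed resolution: Redis errors fall back to the database;
    database errors deny access."""
    try:
        return cache_check(permission)
    except Exception:
        pass  # cache unavailable: fall through to the source of truth
    try:
        return db_check(permission)
    except Exception:
        return False  # fail closed: never grant on resolution failure
```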

Role Ownership & Governance

To prevent permission drift, ACM enforces strict governance rules:

  • Superadmin defines role templates
  • Organization Admin assigns roles to users

This balance allows flexibility without losing control.

Conclusion

Authorization should be invisible to developers and predictable for systems.

By centralizing RBAC, eliminating repeated database queries, adopting cache-first authorization, enforcing exact-match permissions, and failing closed by design, ACM provides a secure and scalable foundation for enterprise access control.

Good authorization systems are boring, and that's exactly what you want.
