Security Infrastructure

Building a tamper-proof audit trail.


"We have audit logs" is almost never the complete picture. The meaningful question is: what properties do those logs actually have? Can they be modified without detection? Who has the ability to delete them? If a privileged administrator were compromised, could that attacker erase evidence of their access? Audit logs that can be edited are just notes.

Tamper-evident audit logging, where modification is detectable even if it can't be fully prevented, is an achievable goal in most stacks. Tamper-proof logging, where modification is architecturally impossible, requires write-once storage or an external anchor. The distinction matters for compliance, incident response, and anyone reviewing the logs after a breach.

This post covers the properties a production audit trail needs, the cryptographic chaining mechanism that makes logs tamper-evident, the storage patterns that make them tamper-resistant, and how this works in practice inside Sven Agent and the 47Network platform.

What audit logs actually need to be

Not all logging is audit logging. Application logs are for debugging. Metrics are for dashboards. An audit trail has a different set of requirements:

REQ 01: Complete

Every relevant action is recorded. Missing events are the most dangerous audit log failure: you can't detect absence from within the log itself. The system must enforce logging at the point of action, not as a best-effort side-effect.

REQ 02: Accurate

Each event record must contain enough context to reconstruct what happened: who (identity, not just username), what (action and parameters), when (timestamp with timezone, server-side not client-reported), from where (IP, session ID, request ID), and result (success, failure, partial).

REQ 03: Tamper-evident

Any modification to a past event, or any deletion, must be detectable. This is the property that standard database tables and log files don't have by default. It requires additional cryptographic structure on top of the records themselves.

REQ 04: Append-only

Writers can add new events. Nobody should be able to modify or delete existing ones at the application level. This is a permission and architecture constraint, not just a policy. "Please don't modify the logs" is not append-only enforcement.

REQ 05: Exportable and queryable

Logs locked inside a proprietary system are a liability during incidents. The audit trail must be exportable in a structured format (JSON, CSV, structured syslog), and queryable by time range, actor, resource, and action type without requiring specialist tooling.

REQ 06: Separated from operational data

Audit logs stored in the same database they're auditing can be modified by anyone with write access to that database. The audit trail should live in a different store, written to by a dedicated pipeline, with permissions that application code cannot acquire at runtime.

Cryptographic chaining: making modification detectable

The mechanism is conceptually similar to a blockchain without the distributed consensus: each log event includes a cryptographic hash of the previous event. Modifying any past event changes its hash, which breaks the chain: anyone re-computing the chain will detect the inconsistency.

Event #1
id: evt_001
actor: user:alice
action: vault.secret.read
ts: 2026-02-24T09:12:04Z
prev_hash: 0000...0000 (genesis)
hash: a3f8...b12c
→
Event #2
id: evt_002
actor: svc:sven-agent
action: mission.submit
ts: 2026-02-24T09:12:19Z
prev_hash: a3f8...b12c
hash: 7d2e...f41a
→
Event #3
id: evt_003
actor: user:bob
action: user.role.grant
ts: 2026-02-24T09:14:02Z
prev_hash: 7d2e...f41a
hash: c91b...3d78

If an attacker modifies Event #1 to change the actor or action, its hash changes. Event #2's prev_hash field no longer matches the (now different) hash of Event #1. The chain is broken from that point forward. Anyone running a verification pass over the log, even days later, will detect the inconsistency at exactly the point of modification.

The hash function matters. We use SHA-256. The input to each hash includes all event fields plus the previous hash:

// Chained hash computation (Node.js). JSON.stringify emits keys in
// insertion order, so this fixed field order is a stable canonical form.
const { createHash } = require('crypto');

const GENESIS_HASH = '0'.repeat(64); // 64 hex zeros

function sha256(input) {
  return createHash('sha256').update(input).digest('hex');
}

function computeEventHash(event, prevHash) {
  const canonical = JSON.stringify({
    id:        event.id,
    actor:     event.actor,
    action:    event.action,
    resource:  event.resource,
    params:    event.params,
    result:    event.result,
    timestamp: event.timestamp,
    session:   event.session_id,
    prev_hash: prevHash,
  });
  return sha256(canonical);
}

// On append: link the new event to the current chain head.
// getLatestEvent and insertEvent stand for the storage layer.
const prevEvent = await getLatestEvent();
const prevHash  = prevEvent ? prevEvent.hash : GENESIS_HASH;
const newHash   = computeEventHash(newEvent, prevHash);
await insertEvent({ ...newEvent, prev_hash: prevHash, hash: newHash });

Important: Chaining makes modification detectable, not impossible. A sophisticated attacker with full database access could re-compute the entire chain after modifying events. The chain is tamper-evident, not tamper-proof. To achieve tamper-proof properties, you need an external anchor: a hash published at regular intervals to an immutable location that an attacker cannot retroactively modify.

External anchoring

The simplest external anchor is periodic publication of the chain's current head hash to a location the attacker cannot modify. Options in increasing strength:

  • Email digest: Email the current chain head hash to a mailbox that the application cannot write to. Primitive, but effective if the mailbox is genuinely independent.
  • Signed timestamps (RFC 3161): A trusted timestamp authority signs a hash of the chain head. The signature is a timestamped commitment that the chain existed in a specific state at a specific moment. Retroactive modification of events before that point would have to invalidate the signed timestamp.
  • Append-only object storage: Write each period's chain summary to S3/GCS with object lock enabled. Object lock prevents overwrite and deletion for a configured retention period β€” even by the object storage account owner using most APIs.
  • Certificate Transparency–style logs: For the highest assurance, publish to a public append-only log where inclusion proofs can be independently verified. This is the same mechanism used to verify TLS certificate issuance.

For most production deployments, hourly anchoring to write-once object storage (S3 Object Lock, GCS Retention Policy) provides a practical balance of assurance and operational complexity.

Storage architecture

The audit pipeline should be structurally separated from operational data:

  • Dedicated PostgreSQL schema or database β€” separate from application data, with role-based permissions that prevent application users from running UPDATE or DELETE on audit tables. The application role should have INSERT and SELECT only.
  • Row-level security β€” enforce append-only at the database level, not just the application level. A BEFORE UPDATE and BEFORE DELETE trigger that raises an exception, or a PostgreSQL row security policy, makes it structurally impossible for application code to modify rows even if a bug or compromise exposes the audit writer credentials.
  • Async write pipeline β€” the audit write should be on a separate path from the main request handling. If the audit write blocks or fails, the request should succeed but raise an alert β€” not silently miss the audit event. A queue-based pipeline (Redis XADD β†’ consumer β†’ audit DB) decouples write latency from request latency while ensuring at-least-once delivery.
  • Column-level encryption for sensitive params β€” audit records often contain sensitive parameters (secret names accessed, document IDs, personal data in search queries). Encrypt sensitive fields at the application level before writing; store the encryption key in Vault with an audit-specific policy.

What to audit

Completeness requires a deliberate decision about what constitutes an auditable event. A useful framework is to audit any action that:

  • Changes access control (role grants, permission changes, account creation/deletion)
  • Reads or modifies sensitive data (secrets, personal data, financial records, health data)
  • Changes system configuration (infrastructure changes, policy changes, integration configuration)
  • Represents authentication or authorization decisions (login, token issue, access grant, access denial)
  • Results in a privileged operation (admin panel access, bulk export, impersonation)

Everything else (read-only non-sensitive data access, routine health checks, static asset requests) generally doesn't belong in the audit trail. High-volume noise degrades the signal quality of audit logs and makes them harder to query under forensic pressure.
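One way to make that decision enforceable rather than ad hoc is a single predicate at the logging choke point. The dotted prefixes below are hypothetical action namespaces, not a real taxonomy:

```javascript
// Categories from the framework above, as action-name prefixes (assumed
// namespacing; adapt to however your deployment names its actions).
const AUDITED_PREFIXES = [
  'role.', 'permission.', 'account.',   // access-control changes
  'secret.', 'pii.',                    // sensitive data access
  'config.', 'policy.',                 // system configuration
  'auth.',                              // authn/authz decisions
  'admin.', 'export.', 'impersonate.',  // privileged operations
];

function shouldAudit(action) {
  return AUDITED_PREFIXES.some((prefix) => action.startsWith(prefix));
}
```

Routing every action through one predicate also gives reviewers a single place to check what is, and is not, being recorded.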

Audit logging in Sven Agent

Every Sven Agent mission, skill invocation, approval decision, and telemetry event is written to a cryptographically chained audit log. The chain is per-workspace; each workspace has its own genesis block.

The audit trail is append-only at the application level (INSERT-only database role) and enforced at the PostgreSQL level via triggers. The chain head hash is published hourly to a dedicated S3 bucket with object lock enabled; this is the external anchor. Chain verification runs as a daily background job; any detected inconsistency triggers a P0 alert and automatic workspace isolation.

The entire audit trail is exportable via the REST API in JSON or CSV format. Exported records include the chain hash for each event, allowing recipients to verify chain integrity independently. This matters for enterprise clients who need to present audit evidence to external auditors: the log is self-verifying.

