ZERO-TRUST INFRASTRUCTURE

Full-stack sovereign infrastructure for a Romanian fintech

Confidential client · Bucharest, Romania · Q3 2025


0
vendor lock-in after migration
<80ms
internal service latency
100%
MFA coverage across all users
0
hardcoded secrets remaining
GitOps
all config in version control

Related from the blog.

Feb 23, 2026 · 13 min read

HashiCorp Vault: getting off environment variables

The exact Vault setup deployed in this engagement — KV v2, AppRole auth, dynamic database credentials, and the Kubernetes Agent Injector.

Feb 10, 2026 · 7 min read

Zero-trust for SMEs: the architecture behind this engagement

The identity-first principles and Pomerium + Keycloak pattern used in this fintech deployment — and how to apply them at any scale.

Feb 24, 2026 · 9 min read

SSH hardening checklist for Linux servers

Key-only auth, AllowUsers, fail2ban, and the sshd_config changes applied to every server in this engagement — the last mile of zero-trust access control before your network perimeter.

Feb 25, 2026 · 12 min read

Tailscale and Headscale: zero-config mesh VPN

Headscale on the management server, ACLs restricting admin access to port 22/Vault API — how all privileged access in this engagement flows through WireGuard tunnels.

Feb 25, 2026 · 14 min read

GDPR compliance engineering: a developer's practical guide

GDPR was a central driver of this engagement — hardcoded credentials in version control, lack of access audit logs, and unencrypted internal traffic all had GDPR implications.


0
Vendor lock-in
<80ms
Internal latency
100%
GDPR compliant

The situation.

A Romanian fintech processing sensitive financial data for retail investors had reached a critical inflection point. Their existing cloud setup — spread across three providers with no unified identity layer — was becoming both operationally brittle and legally untenable under tightening GDPR enforcement. Secrets lived in plain-text environment variables committed to private repos. Admin access required no second factor. There was no audit trail for who accessed what, when.

The technical debt compounded the compliance problem: microservices communicated over unencrypted internal HTTP, database credentials were static and shared across teams, and there was no mechanism to enforce least-privilege access. The CTO estimated they had 60–90 days before their next compliance audit would flag these issues as blockers for a planned Series A raise.

The migration also had to be completed without a single hour of production downtime — they were processing live transactions 24/7.

What we built.

We started with a two-week discovery phase: full inventory of all 23 services, dependency graph construction, and risk matrix. We identified 14 high-priority items — hardcoded credentials, unencrypted service-to-service traffic, and three services with direct production database access from developer laptops.

The target architecture was a GitOps-managed Kubernetes cluster on dedicated bare-metal hardware co-located in a Bucharest Tier-III data centre. Every deployment is tracked as a Git commit in Argo CD. No kubectl apply commands in production — ever.
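
To make the GitOps constraint concrete, here is a minimal sketch of what promoting a release looks like when the Git commit is the deployment. It is an illustration only: the repository URL, directory layout, and image registry names are hypothetical placeholders, not the client's actual setup, and a real pipeline would typically edit manifests with Kustomize rather than a regex.

```python
import re
import subprocess
from pathlib import Path

# Hypothetical manifest repository that Argo CD continuously syncs to the cluster.
DEPLOY_REPO = "git@git.internal:platform/k8s-manifests.git"


def promote_image(service: str, new_tag: str, workdir: str = "/tmp/k8s-manifests") -> None:
    """Deploy the GitOps way: change the manifest in Git and let Argo CD apply it.

    Nothing talks to the production cluster directly; the commit itself is the
    deployment, and the Git history doubles as the rollout audit trail.
    """
    subprocess.run(["git", "clone", "--depth=1", DEPLOY_REPO, workdir], check=True)
    manifest = Path(workdir) / "apps" / service / "deployment.yaml"

    # Naive image-tag bump; real pipelines usually use `kustomize edit set image`.
    bumped = re.sub(
        rf"(image: registry\.internal/{service}):\S+",
        rf"\g<1>:{new_tag}",
        manifest.read_text(),
    )
    manifest.write_text(bumped)

    subprocess.run(["git", "-C", workdir, "commit", "-am", f"deploy: {service} {new_tag}"], check=True)
    subprocess.run(["git", "-C", workdir, "push"], check=True)
```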

Phase 1: Identity and Secrets (Weeks 3–4)

We deployed Keycloak (branded internally as 47ID) as the centralized IdP. MFA became mandatory for all admin access on day one. We migrated all service credentials to HashiCorp Vault with dynamic secrets — database passwords now rotate automatically every two hours and are never the same twice.
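
To show what the dynamic-secrets flow looks like from a service's point of view, here is a minimal sketch against Vault's documented HTTP API (GET /v1/database/creds/<role>). The Vault address and the payments-ro role name are placeholders for illustration, not the client's real configuration.

```python
import os

import requests

# Placeholder address; in the real deployment services reach Vault internally.
VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.internal:8200")


def fetch_dynamic_db_creds(vault_token: str, role: str = "payments-ro") -> dict:
    """Ask Vault's database secrets engine for a freshly generated credential pair.

    Vault creates a unique username/password on every call; the lease TTL
    (two hours in this engagement) controls when the credential expires.
    """
    resp = requests.get(
        f"{VAULT_ADDR}/v1/database/creds/{role}",
        headers={"X-Vault-Token": vault_token},
        timeout=5,
    )
    resp.raise_for_status()
    body = resp.json()
    return {
        "username": body["data"]["username"],
        "password": body["data"]["password"],
        "ttl_seconds": body["lease_duration"],  # when to fetch a new credential
    }
```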

Phase 2: Network Zero-Trust (Weeks 5–6)

Pomerium replaced the VPN entirely. All internal services are now identity-aware proxy endpoints — you cannot reach the internal network without a valid JWT from Keycloak. mTLS is enforced on all service-to-service communication via the Linkerd service mesh. The internet-facing perimeter is a hardened HAProxy cluster with rate limiting and WAF rules.
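
As a sketch of what the identity-aware pattern means for a backend service, the snippet below validates the signed assertion Pomerium attaches to proxied requests via its documented X-Pomerium-Jwt-Assertion header and JWKS endpoint. The hostnames and the audience value are assumptions for illustration, not the client's actual route names.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Pomerium publishes the keys it signs assertions with at this well-known path.
JWKS_URL = "https://authenticate.example.internal/.well-known/pomerium/jwks.json"
_jwks = PyJWKClient(JWKS_URL)


def verify_pomerium_assertion(headers: dict) -> dict:
    """Reject any request that did not come through the identity-aware proxy."""
    token = headers.get("X-Pomerium-Jwt-Assertion")
    if not token:
        raise PermissionError("missing proxy identity assertion")

    signing_key = _jwks.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["ES256"],                       # Pomerium's default signing algorithm
        audience="internal-api.example.internal",   # assumed route name
    )
    return claims  # includes the authenticated user's email and groups
```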

Phase 3: Migration and Handover (Weeks 7–9)

Services were migrated one by one using blue-green deployments — the old service stays live while the new one warms up, traffic shifts 10% at a time, and we maintain instant rollback capability throughout. Kyverno enforces pod security standards and prevents privilege escalation at the cluster level. Prometheus + Loki + Grafana provide full observability. The final week was on-site team training and documentation handover — 47 pages of runbooks, architecture diagrams, and incident response playbooks.
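
The traffic-shifting loop is easier to see in code. The sketch below is a simplified illustration rather than the client's actual tooling: it assumes a Prometheus error-rate query for a hypothetical payments service and leaves the routing mechanism (ingress weight, Linkerd TrafficSplit, HAProxy weight) to a caller-supplied function.

```python
import time

import requests

PROMETHEUS = "http://prometheus.monitoring:9090"  # in-cluster address, illustrative
ERROR_RATE_QUERY = (
    'sum(rate(http_requests_total{app="payments",code=~"5.."}[5m]))'
    ' / sum(rate(http_requests_total{app="payments"}[5m]))'
)


def error_rate() -> float:
    """Current 5xx ratio for the service, from the Prometheus HTTP query API."""
    r = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": ERROR_RATE_QUERY}, timeout=5)
    r.raise_for_status()
    result = r.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


def blue_green_rollout(set_green_weight, threshold: float = 0.01, soak_seconds: int = 300) -> bool:
    """Shift traffic to the new (green) deployment 10% at a time.

    `set_green_weight` is whatever routes traffic in the stack at hand; if the
    error rate crosses the threshold, all traffic snaps back to blue instantly.
    """
    for weight in range(10, 101, 10):
        set_green_weight(weight)
        time.sleep(soak_seconds)      # let real traffic exercise the new version
        if error_rate() > threshold:
            set_green_weight(0)       # instant rollback
            return False
    return True
```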

"The 47Network Studio didn't just deploy infrastructure β€” they transferred knowledge so our team could own and evolve it independently. Three months in, we've deployed six new services without any outside help."

— CTO, Confidential Fintech Client
01
Discovery & Risk Assessment — Week 1–2
Full service inventory (23 services), dependency graph, risk matrix with 14 high-priority items identified. Architecture proposal delivered end of week 2.
02
Identity & Secrets Baseline — Week 3–4
Keycloak deployment, MFA enforcement, Vault integration with dynamic secrets, all hardcoded credentials eliminated.
03
Network Zero-Trust — Week 5–6
Pomerium VPN replacement, Linkerd mTLS mesh, Kyverno policies, Argo CD GitOps pipeline operational.
04
Service Migration — Week 7–8
Blue-green migration of all 23 services. Zero production incidents. Zero downtime. Prometheus + Loki observability stack live.
05
Handover & Training — Week 9
On-site team training (2 days), full runbook delivery, 30-day support window, architecture diagrams and incident response playbooks.

Delivery timeline

W1–2
Discovery & risk assessment
Full inventory of 23 services, dependency graph construction, risk matrix with 14 high-priority items. Architecture proposal for Vault, SPIFFE/SPIRE mTLS, and Pomerium proxy delivered end of week 2.
W3–5
Vault deployment & AppRole configuration
Vault cluster provisioned on primary datacenter nodes. Kubernetes Agent Injector configured. AppRole auth set up for each service (see the login sketch after this timeline). Dynamic database credential rotation enabled — 2-hour rotation cycle.
W6–11
Service-by-service migration
23 services migrated from environment-variable secrets to Vault-injected secrets. Each migration followed a standardised playbook: code change, manifest update, coordinated deployment with rollback path. Zero production incidents.
W12–13
mTLS & Pomerium deployment
SPIFFE/SPIRE deployed for workload identity. mTLS enforced between all microservices. Pomerium configured as the identity-aware proxy for all external admin access. Keycloak SSO integration for developer access.
W14
Handover & compliance sign-off
47 pages of runbooks delivered. Client engineering team trained. Compliance audit evidence package prepared — Vault audit log, access control matrix, and penetration test report. Series A readiness confirmed.
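
For reference, the AppRole login mentioned in weeks 3–5 is a single exchange against Vault's documented HTTP API: a service trades its role_id and secret_id for a short-lived token, which it then uses for calls like the dynamic-credential fetch shown earlier. The address below is a placeholder.

```python
import os

import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.internal:8200")  # placeholder


def approle_login(role_id: str, secret_id: str) -> str:
    """Exchange a service's AppRole credentials for a short-lived Vault token.

    Every service has its own role_id/secret_id pair, so access is scoped and
    revocable per service instead of relying on shared long-lived credentials.
    """
    resp = requests.post(
        f"{VAULT_ADDR}/v1/auth/approle/login",
        json={"role_id": role_id, "secret_id": secret_id},
        timeout=5,
    )
    resp.raise_for_status()
    auth = resp.json()["auth"]
    return auth["client_token"]  # expires after auth["lease_duration"] seconds
```
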
23
services migrated with zero downtime
<80ms
internal service latency (down from 340ms)
2h
automatic database credential rotation cycle
47
pages of runbooks and incident playbooks delivered

Three months after handover, the client's engineering team had independently deployed six new services into the GitOps pipeline without any outside assistance — exactly the outcome a knowledge-transfer engagement should produce.

The infrastructure we delivered

The core of the engagement was a HashiCorp Vault cluster deployed on their primary datacenter nodes, with the Kubernetes Agent Injector pushing dynamic secrets into pods at startup. No service retained long-lived credentials — each pod received a short-lived database token valid for two hours, automatically rotated. The Vault audit log, with cryptographic chaining, became the compliance evidence for every credential access event.
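
From the application side, consuming an injected secret is just reading a file. The sketch below assumes the injector's default /vault/secrets/ target directory and an agent template configured to render the credential as JSON; the file name and connection string are illustrative, not the client's actual values.

```python
import json
from pathlib import Path

# The Vault Agent Injector renders secrets into the pod filesystem; /vault/secrets/
# is its default target directory. We assume a template that emits JSON here.
SECRET_FILE = Path("/vault/secrets/db-creds")


def load_db_credentials() -> dict:
    """Read the short-lived database credential the Vault agent wrote into the pod.

    When the agent runs as a sidecar it keeps this file current as leases are
    renewed, so the application re-reads it instead of caching credentials forever.
    """
    return json.loads(SECRET_FILE.read_text())


creds = load_db_credentials()
dsn = f"postgresql://{creds['username']}:{creds['password']}@db.internal:5432/payments"
```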

Service-to-service authentication used mTLS via SPIFFE/SPIRE — each workload received a cryptographic identity certificate valid for four hours, eliminating the need for shared secrets between microservices entirely. Pomerium served as the identity-aware proxy for all external developer and admin access, enforcing SSO via their Keycloak deployment (branded as 47ID for their internal users). Every access decision was logged and exportable for the compliance audit.

The migration itself was the hardest part. Twenty-three services had to move from environment-variable secrets to Vault-injected secrets — each one required a code change, a Kubernetes manifest update, and a coordinated deployment window. We built a migration playbook that let their team execute each service migration independently, in any order, with a rollback path that took under five minutes. The full migration ran over six weeks with zero production incidents.

Kubernetes · Argo CD · Pomerium · Keycloak · HashiCorp Vault · Kyverno · Linkerd · Prometheus · Loki · Grafana · HAProxy · mTLS · GitOps · Bare-metal
Client: Confidential
Service: ZERO-TRUST INFRASTRUCTURE
Location: Bucharest, RO
Duration: 9 weeks
Year: 2025

Zero-trust Kubernetes infrastructure: GitOps, identity-aware proxying, dynamic secrets, and full observability.


Tell us about your infrastructure challenge.

studio@the47network.com
