HARDWARE INFRASTRUCTURE

On-premises infrastructure overhaul for a Bucharest law firm

Confidential client · Bucharest, Romania · Q3 2025


4h · hardware incident SLA
100% · on-premises data, no cloud
IPMI · remote access configured
UPS · battery backup installed
12mo · hardware maintenance retainer

Related from the blog.

Feb 23, 2026 · 11 min read

Docker Compose vs Kubernetes for self-hosted infrastructure

The on-premise server runs Docker Compose for the law firm's self-hosted tools. This guide covers exactly when that's the right call.

Feb 24, 2026 · 10 min read

Backup strategy for self-hosted infrastructure: Restic + object storage

The Restic backup setup deployed alongside the law firm's TrueNAS cluster — encrypted offsite snapshots with automated restore testing and 3-year retention.

Feb 24, 2026 · 14 min read

Proxmox VE for production: clustering, migration, and storage

The Proxmox VE cluster configuration used in this law firm engagement — ZFS RAIDZ2, live VM migration, IPMI fencing, and the backup strategy that protects client data.

Feb 25, 2026 · 13 min read

Ansible for infrastructure automation

Every server in this engagement is managed by Ansible — from initial rack provisioning to ongoing configuration drift correction and rolling OS updates.

Feb 25, 2026 · 13 min read

ZFS for self-hosted infrastructure: pools, datasets, and data integrity

The 6-disk RAIDZ2 pool, daily snapshots retained 30 days, and weekly ZFS send/receive to the off-site backup server that protects all client data in this engagement.


100% · data on-premises
2 days · to deploy
0 · cloud dependencies

Delivery timeline

W1 · Site survey & hardware spec. On-site visit to the Bucharest office: network topology assessment, server room inspection, power and cooling review. Hardware specification agreed: 2× Proxmox nodes, NAS, UPS, managed switches, IPMI for remote access.
W2–3 · Hardware procurement & racking. Hardware procured from EU suppliers (no extra-EU data-transfer risk). Equipment racked, cabled, and powered. IPMI configured for out-of-band management. UPS commissioned with a battery runtime test.
W4–5 · Proxmox & storage configuration. Proxmox VE installed on both nodes. ZFS RAIDZ2 pool configured on the NAS. Shared storage mounted. VM templates created. Ansible playbooks written for all server configuration: idempotent, repeatable, version-controlled.
W6 · Data migration & VM standup. Existing workloads migrated from the legacy on-premises servers to the new Proxmox cluster. File server, internal wiki, and backup VMs live. All data confirmed on-premises, with no cloud touch.
W7 · Monitoring & handover. Prometheus + Grafana deployed. Hardware health alerts (disk SMART, temperature, UPS battery) configured. 4-hour-SLA incident response contract signed. Staff trained on the Proxmox web interface and basic troubleshooting runbooks.
Measurable results
0 · data incidents in the 12-month retainer period
100% · data on-premises, attorney-client privilege intact
14TB · migrated with zero data loss, verified
4h · hardware incident SLA, 24/7 remote IPMI

During the retainer period one drive in the TrueNAS array showed early SMART failure indicators — detected proactively by our monitoring, replaced within the 4-hour SLA before any data was at risk.
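Proactive detection of this kind comes down to watching SMART attributes and alerting before raw error counts grow. The sketch below parses `smartctl`-style attribute output for reallocated sectors; the sample line and the zero threshold are illustrative assumptions, not the firm's actual alert rules (the handover section describes those as Prometheus-based).

```shell
#!/bin/sh
# Sketch: flag a drive whose Reallocated_Sector_Ct raw value is non-zero.
# SAMPLE is stand-in `smartctl -A` output; in production you would pipe
# in real smartctl output instead. Format and threshold are assumptions.
SAMPLE="  5 Reallocated_Sector_Ct   0x0033   099   099   010    Pre-fail  Always       -       12"

check_realloc() {
  # $1: smartctl attribute table text; prints a WARN/OK verdict.
  raw=$(printf '%s\n' "$1" | awk '$2 == "Reallocated_Sector_Ct" {print $NF}')
  if [ "${raw:-0}" -gt 0 ]; then
    echo "WARN: $raw reallocated sectors"
  else
    echo "OK"
  fi
}

check_realloc "$SAMPLE"
```

A non-zero reallocated-sector count is an early wear signal, which is why a drive can be swapped inside the SLA window before the array degrades.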

The situation.

A twelve-attorney Bucharest law firm handling M&A transactions and litigation for large corporate clients had attorney-client privileged data distributed across three consumer-grade NAS devices in a server cupboard under a staircase. Power came from a standard wall socket shared with the photocopier. There was no UPS. The "backup strategy" was an attorney manually copying files to an external drive once a week — if they remembered.

Two weeks before our engagement, a power fluctuation had corrupted one of the NAS units, taking two years of case files offline for four days while a local IT shop attempted data recovery. The firm's managing partner decided that day that the status quo was unacceptable. They needed everything on-premises — cloud storage for attorney-client privileged material was not an option under their professional obligations — but properly engineered.

The constraint was time: they had a two-week window before a major transaction kicked off that would require all hands on deck. We had to plan, procure, cable, configure, and hand over in that window.

What we built.

We started with a site visit to assess the physical space, existing network topology, and power capacity. The server cupboard was salvageable with reinforcement — we specified a two-rack layout (19" open frame racks) that could fit in the existing space with proper ventilation.

Hardware Layer

We specified and procured two refurbished Dell PowerEdge R740 servers for the Proxmox HA cluster, a 48-port managed PoE switch (MikroTik CRS354), a 24-bay ZFS NAS (TrueNAS Scale on dedicated hardware), a 3kVA double-conversion UPS, and a dedicated 32A circuit from the firm's electrical panel. Cat6A cabling throughout — future-proofed for 10GbE when they're ready.

Software Stack

Proxmox VE manages the two-node HA cluster. VMs run on shared ZFS storage — if one physical node fails, VMs migrate to the other within 30 seconds. TrueNAS provides file storage with RAIDZ2 redundancy (survives two simultaneous drive failures) and automated snapshots every four hours. Nightly replication to a secondary ZFS pool on a separate physical disk array provides a second recovery layer.
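The snapshot-plus-replication layer amounts to a small scheduled job. The sketch below is a dry run (it prints each ZFS command instead of executing it), and the dataset and replica pool names are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the 4-hourly snapshot + nightly replication pattern.
# Dry run: `run` prints each command rather than executing it.
# Dataset and replica names are illustrative assumptions.
DATASET="tank/casefiles"
REPLICA="backup-pool/casefiles"
STAMP=$(date +%Y-%m-%d_%H%M)

run() { echo "+ $*"; }

# Every four hours: take a timestamped snapshot.
run zfs snapshot "${DATASET}@auto-${STAMP}"

# Nightly: incrementally replicate to the second pool. In practice the
# previous snapshot name is discovered via `zfs list -t snapshot`.
PREV="auto-previous"
run "zfs send -i @${PREV} ${DATASET}@auto-${STAMP} | zfs receive -F ${REPLICA}"
```

Because `zfs send -i` ships only the blocks changed since the previous snapshot, the nightly replication stays cheap even on a multi-terabyte dataset.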

Access control was a key requirement. Pomerium provides zero-trust access to the file server and internal applications — attorneys can access case files remotely without a VPN, but every access attempt is authenticated against Keycloak with MFA. Network segments are isolated: the server network, the staff workstation network, and the guest Wi-Fi are on separate VLANs with firewall rules preventing lateral movement.
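A zero-trust route in this pattern looks roughly like the Pomerium configuration fragment below. All hostnames, the Keycloak realm URL, the backend address, and the group claim are placeholders assumed for illustration, not the firm's real configuration.

```yaml
# Hypothetical Pomerium config fragment: expose the internal file server
# behind Keycloak-backed SSO. Names and addresses are assumptions.
authenticate_service_url: https://auth.example-firm.ro
idp_provider: oidc
idp_provider_url: https://keycloak.example-firm.ro/realms/firm  # assumed realm
idp_client_id: pomerium
idp_client_secret: "<redacted>"

routes:
  - from: https://files.example-firm.ro
    to: http://10.10.30.10:8080        # file server VM on the server VLAN
    policy:
      - allow:
          and:
            - claim/groups: attorneys   # group claim issued by Keycloak
```

Every request hits Pomerium first, is authenticated against Keycloak (with MFA enforced there), and is only proxied through to the backend if the policy matches, which is what removes the need for a VPN.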

The Two-Day Install

We pre-configured everything in our workshop before the site visit — all servers rack-mounted, OS installed, basic network config done. The site day was physical installation, cabling, UPS commissioning, and integration testing. We finished in 11 hours on day one. Day two was data migration from the old NAS units (14TB), validation, and a three-hour training session with the office manager who would be the first-line administrator.

"Two days. I was skeptical it was even possible. Everything works, our data is ours, and for the first time I actually understand what's happening in our server room."

— Managing Partner, Confidential Law Firm
01 · Site Assessment (Day 1, remote): Physical space audit, power capacity assessment, network topology review, hardware specification and procurement.
02 · Pre-config in Workshop (Days 2–7): Hardware rack assembly, OS installation, Proxmox cluster setup, TrueNAS configuration, network pre-config — all done before the site visit.
03 · On-site Install (Day 8): Physical installation, Cat6A cabling, UPS commissioning, integration testing. An 11-hour day; everything operational by end of day.
04 · Data Migration & Training (Day 9): 14TB migrated from the old NAS units, data integrity validation, 3-hour admin training session, runbook delivery.
05 · 30-Day Support (Weeks 2–6): Remote monitoring, firmware updates, answering questions. Zero incidents during the support period.
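The data integrity validation in step 04 can be done with checksum manifests: hash every file on the source, hash every file on the destination, and diff the two lists. A minimal sketch, using throwaway demo directories in place of the real NAS mounts:

```shell
#!/bin/sh
# Sketch: verify a migrated tree matches its source via sha256 manifests.
# Demo directories stand in for the old NAS mount and the new TrueNAS
# share; the real validation ran over those paths instead.
set -e
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "case-file-contents" > "$SRC/brief.txt"
cp "$SRC/brief.txt" "$DST/brief.txt"

manifest() {
  # Hash every file under $1, paths relative to $1, sorted for a stable diff.
  ( cd "$1" && find . -type f -exec sha256sum {} + | sort -k2 )
}

if [ "$(manifest "$SRC")" = "$(manifest "$DST")" ]; then
  VERDICT="match"
else
  VERDICT="MISMATCH"
fi
echo "$VERDICT"
```

Comparing full manifests rather than file counts or sizes is what lets "zero data loss" be stated as a verified claim instead of an assumption.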
Proxmox VE · TrueNAS Scale · ZFS RAIDZ2 · Pomerium · Keycloak · MikroTik · Cat6A · Dell PowerEdge · VLAN · UPS · HA Cluster
Client: Confidential
Service: Hardware Infrastructure
Location: Bucharest, RO
Duration: 9 days
Year: 2025
Similar Project? →

On-premises server rooms, Proxmox HA clusters, ZFS storage, and network infrastructure — physically installed and handed over.

Service Details →

Tell us about your infrastructure challenge.

studio@the47network.com

Related Case Studies