Lenovo IBM System X3100 M4 Homelab Test Environment
Executive Summary
This project documents the setup of a refurbished Lenovo IBM System X3100 M4 tower server as a dedicated DevOps test environment, separate from the existing primary homelab.
The current homelab (documented in the project “Proxmox Homelab: Your Private Cloud—Architecture, Services, and Setup” and the blog “Homelab: Why Every DevOps Engineer Should Have One—And What to Run There”) is deliberately kept stable and production-like. It runs long‑lived services, documented architectures, and repeatable tutorials.
This X3100 M4 environment has a different purpose:
- It is a sandbox for breaking things on purpose.
- It will host Kubernetes experiments, CI/CD pipelines, IaC demos, DNS and networking labs, reverse proxy trials, observability stacks, security tooling, and cloud-integration prototypes.
- It is expected to be rebuilt, reinstalled, and reconfigured frequently.
This page is not a step-by-step tutorial. It is a structured engineering case study and an architecture anchor for future tutorials and blogs that will reference this hardware and its evolution.
Why Refurbished Enterprise Hardware?
Choosing a refurbished Lenovo IBM System X3100 M4 instead of a new mini PC or white-box build was intentional.
Reasons:
Cost efficiency
- Refurbished 1U/2U/tower servers from earlier generations (Sandy/Ivy Bridge) are widely available at reasonable prices in INR, especially when buying barebones and upgrading RAM and storage yourself.
- The goal was a full test node (CPU, ECC RAM, NVMe storage) for less than the cost of a mid‑range new mini PC.
ECC memory stability
- The platform supports ECC DDR3 (unbuffered; the Xeon E3 series does not support registered DIMMs), which reduces the risk of silent memory corruption during long test runs, Kubernetes control-plane experiments, and CI pipelines.
- For infrastructure experiments that may run for days, ECC is preferable to consumer non‑ECC memory.
Expandability
- Full‑size PCIe slots allow:
- NVMe via PCIe x16 adapter
- Additional NICs (10 GbE in the future)
- HBA cards if this node is ever repurposed for storage.
- Multiple drive bays give room for future SSD or HDD additions.
Reliability vs consumer hardware
- Enterprise servers are designed for 24x7 duty cycles, better cooling, and known firmware behavior.
- The X3100 M4 platform is well-documented, with a long history in small office deployments.
Tradeoffs (Power and Noise)
- Higher power draw than modern low‑power mini PCs, especially at idle.
- Fan noise is noticeable compared to fanless/mini machines and may require physical placement away from a living space.
- Older hardware lacks cutting-edge CPU efficiency, so workload density is lower than modern platforms.
Given the goal—a serious test environment that can be broken and rebuilt repeatedly—the tradeoffs are acceptable.
Hardware Specification
System Overview
| Component | Detail |
|---|---|
| Server | Lenovo IBM System X3100 M4 (tower form factor) |
| CPU | Intel Xeon E3-1220 v2 (Ivy Bridge, 4 cores / 4 threads) |
| Memory | 32 GB DDR3 ECC unbuffered (4 × 8 GB) |
| Storage | 1 TB NVMe SSD, PCIe Gen3, via x16 NVMe adapter |
| Storage Bus | NVMe over PCIe x16 (adapter), onboard SATA available for future disks |
| Networking | Onboard Gigabit Ethernet NIC |
| Expansion | PCIe x16 slot (occupied by NVMe adapter), additional PCIe/PCI slots |
| Form Factor | Tower server |
| Power | Single PSU (standard for X3100 M4), 230V AC (India) |
This configuration is sufficient for:
- A Proxmox base installation.
- One or more Kubernetes clusters (single-node control plane with multiple worker VMs or LXCs).
- CI/CD runners, observability stack, and DNS / reverse proxy experiments in parallel, within reasonable resource limits.
Cost Breakdown (INR)
The goal of this section is pricing transparency, so other engineers can estimate what a similar build would cost in India. Values reflect the actual purchase, rounded in line with typical refurbished market pricing.
Component-Level Cost (Actual Spend)
| Item | Description | Cost (INR) | Notes |
|---|---|---|---|
| Server chassis + CPU | Lenovo IBM System X3100 M4 with Xeon E3‑1220 v2 | 10,000 | Refurbished unit with base configuration |
| RAM | 32 GB DDR3 ECC unbuffered (4 × 8 GB) | 8,000 | Refurbished, matched modules |
| NVMe SSD (1 TB) | PCIe Gen3 NVMe SSD | 6,000 | Purchased ~1 year ago during low SSD pricing |
| PCIe x16 NVMe adapter | PCIe → NVMe adapter card | 800 | Simple single‑drive adapter |
| Cables & misc | SATA/Power cables, minor accessories | 1,000 | Includes spares, labeling, and small hardware |
| Total (actual) | | 25,800 | Approximate total spend for this build |
Current Market Adjustment (2026)
As of 2026, 1 TB NVMe SSDs are closer to 21,000–25,000 INR in the local market. Priced at 22,000 INR instead of the historical 6,000 INR:
- Adjusted total ≈ 41,800 INR.
This difference is called out deliberately to show how timing and hardware reuse affect homelab build costs. Future revisions of this project page will update costs as hardware changes.
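The adjustment is simple arithmetic, recomputed here from the component table (the 22,000 INR NVMe figure is the assumed 2026 market price):

```shell
# Recompute build cost from the component table (all values in INR).
chassis_cpu=10000; ram=8000; nvme=6000; adapter=800; misc=1000
total=$((chassis_cpu + ram + nvme + adapter + misc))
echo "Actual spend: ${total} INR"        # 25800

# Swap the historical NVMe price for an assumed 2026 market price.
nvme_2026=22000
adjusted=$((total - nvme + nvme_2026))
echo "Adjusted total: ${adjusted} INR"   # 41800
```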
Design Goals
The Lenovo IBM System X3100 M4 environment is guided by a few clear goals:
Isolation from production homelab
- No shared control plane or shared storage with the main Proxmox homelab.
- Safe to wipe, reinstall, and reconfigure without touching “stable” services.
Reproducible DevOps labs
- Rebuildable from documentation and IaC where possible.
- Used to back future tutorials and blogs, which will reference this project page.
Version testing for Kubernetes & Terraform
- Run multiple Kubernetes versions (e.g. 1.29, 1.30) and Terraform/OpenTofu versions side-by-side.
- Test upgrade paths, provider changes, and cluster lifecycle workflows.
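One way to pin versions per lab VM is apt version pinning for the Kubernetes packages, plus side-by-side binaries for Terraform/OpenTofu. A minimal sketch, assuming Debian/Ubuntu VMs with the pkgs.k8s.io repository for the target minor version already configured:

```shell
# Install and hold a specific Kubernetes minor version on one lab VM.
sudo apt-get update
sudo apt-get install -y kubeadm=1.29.* kubelet=1.29.* kubectl=1.29.*
sudo apt-mark hold kubeadm kubelet kubectl   # block accidental upgrades

# Keep Terraform/OpenTofu versions side by side as renamed binaries.
# (The paths and names are just a convention, not a required layout.)
mkdir -p "$HOME/bin"
# e.g. ~/bin/terraform-1.7, ~/bin/tofu-1.6, selected per experiment:
#   ~/bin/terraform-1.7 init && ~/bin/terraform-1.7 plan
```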
CI/CD sandbox
- Host GitHub Actions self‑hosted runners or other CI agents.
- Validate pipelines and runners against a controlled but realistic environment.
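Registering a self-hosted runner follows GitHub's standard flow; a sketch with placeholder values (REPO_URL, RUNNER_TOKEN, and the runner release version all depend on the repository and the current release):

```shell
# Download and register a GitHub Actions self-hosted runner (sketch).
# RUNNER_TOKEN comes from the repo's Settings > Actions > Runners page.
mkdir -p ~/actions-runner && cd ~/actions-runner
curl -L -o runner.tar.gz \
  "https://github.com/actions/runner/releases/download/v2.316.0/actions-runner-linux-x64-2.316.0.tar.gz"
tar xzf runner.tar.gz
./config.sh --url "$REPO_URL" --token "$RUNNER_TOKEN" --labels x3100-lab
sudo ./svc.sh install && sudo ./svc.sh start   # run as a systemd service
```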
Architecture experimentation
- Try different ingress controllers, DNS setups, monitoring stacks, and service topologies.
- Validate reverse proxy, DNS, and security designs before they are proposed for the main homelab.
High-Level Architecture Plan
At a high level, the X3100 M4 will be structured as follows:
```mermaid
flowchart TB
    X3100["Lenovo IBM System X3100 M4<br/>Tower Server"]
    OS["Proxmox VE (Base OS)"]
    K8s["Kubernetes Cluster (kubeadm)"]
    Traefik["Traefik Ingress / Reverse Proxy"]
    DNS["DNS Services (CoreDNS / Lab DNS)"]
    CICD["CI/CD Runner (e.g. GitHub Self-Hosted Runner)"]
    Observability["Monitoring & Observability Stack<br/>(Prometheus, Grafana, Logs)"]

    X3100 --> OS
    OS --> K8s
    OS --> CICD
    OS --> Observability
    K8s --> Traefik
    K8s --> DNS
    K8s --> Observability
```
This diagram is intentionally simplified. Future pages will introduce:
- Network segments and VLANs.
- Storage layout (local vs network-attached).
- External integrations with the primary homelab and cloud services.
Planned Software Stack (Initial Roadmap)
The initial roadmap focuses on a clean, Proxmox‑centric base with Kubernetes and supporting services:
Base OS: Proxmox VE
- Chosen as the hypervisor for:
- Simple VM lifecycle management.
- Snapshotting and rollback during experiments.
- Consistency with the existing Proxmox homelab project.
Kubernetes cluster (kubeadm-based)
- Single or multi‑VM cluster bootstrapped with `kubeadm`.
- Control plane and workers hosted as Proxmox VMs.
- Variants for different Kubernetes versions.
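Bootstrapping the control plane on a Proxmox VM would look roughly like this (the pod CIDR and version are example values; workers join with the token printed by `kubeadm init`):

```shell
# Initialize a single control-plane node (sketch).
sudo kubeadm init \
  --kubernetes-version v1.29.0 \
  --pod-network-cidr 10.244.0.0/16

# Make kubectl usable for the regular user.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Each worker VM then joins with the command kubeadm init printed:
#   sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```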
Container runtime
- containerd as the Kubernetes runtime, or Docker via cri-dockerd (recent Kubernetes versions dropped dockershim), depending on the experiment.
- Additional standalone Docker usage for utility containers and experiments.
Traefik Ingress
- Acts as the edge router for Kubernetes services.
- Used to test routing, TLS, and domain/path-based rules.
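Routing rules can be expressed as Traefik IngressRoute objects; a minimal sketch (hostname, namespace, and service name are placeholders for whatever the lab runs):

```shell
# Apply a minimal Traefik IngressRoute (host + path based rule).
kubectl apply -f - <<'EOF'
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo-route
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.lab.local`) && PathPrefix(`/api`)
      kind: Rule
      services:
        - name: demo-service
          port: 80
EOF
```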
DNS
- CoreDNS inside the cluster for service discovery.
- Additional lab DNS experiments (e.g. forward/reverse zones, split DNS) as part of networking labs.
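A lab DNS experiment might start from a Corefile like the one below, serving an internal zone and forwarding everything else upstream (zone name, file path, and upstream resolvers are placeholder choices):

```shell
# Write a minimal CoreDNS Corefile for a split lab zone (sketch).
cat > Corefile <<'EOF'
lab.local {
    file /etc/coredns/db.lab.local   # authoritative zone file
    log
}
. {
    forward . 1.1.1.1 8.8.8.8        # everything else goes upstream
}
EOF
```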
Observability stack
- Prometheus for metrics.
- Grafana for dashboards.
- Room for log aggregation and tracing in later stages.
CI/CD Runner
- GitHub self-hosted runner (and/or other CI systems) pinned to this node.
- Used for testing infrastructure pipelines (Terraform/OpenTofu), Kubernetes manifests, Helm charts, and GitOps workflows.
Future tutorials will reference this stack explicitly and call back to this project page for context.
Upgrade & Evolution Plan
This Stage 1 setup is a baseline, not a final state. The project is expected to evolve in several dimensions:
Hardware upgrades
- Possibility of:
- More RAM (up to platform limits) if Kubernetes or CI workloads grow.
- Additional SSDs or HDDs for local registry, logs, or backup targets.
- Additional NICs for dedicated storage or Kubernetes traffic.
Software version upgrades
- Regular updates of:
- Proxmox VE releases.
- Kubernetes versions (cluster upgrades via `kubeadm`).
- Traefik, Prometheus, Grafana, and related tooling.
- Each upgrade cycle will be treated as a small engineering case study.
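For the Kubernetes side, each cycle can follow the standard `kubeadm` upgrade flow (the target version shown is an example):

```shell
# Control plane first: check available targets, then apply (sketch).
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.30.0

# Then each worker, one at a time:
#   kubectl drain <node> --ignore-daemonsets
#   sudo apt-get install -y kubeadm=1.30.* kubelet=1.30.*
#   sudo kubeadm upgrade node
#   sudo systemctl restart kubelet
#   kubectl uncordon <node>
```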
Cost tracking updates
- This page will be updated as new components are added or replaced, including INR cost deltas.
- Helps readers understand the real long‑term cost of maintaining a serious DevOps lab.
Performance benchmarks
- Future posts may include:
- Baseline CPU, memory, and storage benchmarks.
- Impact of adding more VMs or containers.
- Comparative notes vs newer hardware (e.g. mini PCs or NUCs).
Integration with primary homelab and cloud
- Controlled connectivity between this test environment and the main homelab.
- Experiments with hybrid topologies (on‑prem Kubernetes ↔ cloud services).
Related Content (Placeholders)
This project page is meant to be the anchor for a series of tutorials and blogs. The items below are placeholders for future content that will link back here.
Upcoming Tutorials
- Initial OS Installation on the X3100 M4: Installing Proxmox VE on the refurbished server and validating hardware health.
- BIOS Configuration for Virtualization: Enabling VT‑x, VT‑d, power profiles, and fan behavior suitable for a homelab.
- Installing a Kubernetes Cluster on Proxmox VMs: Using `kubeadm` to build a small but realistic Kubernetes cluster on this node.
- Setting up Traefik with Domain Routing: Designing ingress rules, TLS termination, and paths for multiple test services.
Upcoming Blogs
- Why I Bought a Refurbished Server for My Homelab: Deep dive into the decision‑making process, costs, and lessons learned.
- Enterprise Server vs Mini PC for DevOps Labs: A practical comparison of power, noise, performance, expandability, and cost, using this X3100 M4 and the existing homelab as reference points.
This page will be updated as the “Lenovo IBM System X3100 M4 Homelab Test Environment” evolves. Tutorials and blogs will reference this project for hardware context, design goals, and architectural intent, keeping the documentation coherent across the BeingDevOps platform.