NVIDIA
NVIDIA’s Deep Learning Frameworks (DLFW) Infrastructure team is looking for a deeply technical Senior HPC Cluster Administrator to lead the design, deployment, and reliability of our large-scale GPU compute clusters. These systems run the most demanding deep learning training, inference, and high-performance computing workloads in the industry — from DGX/HGX platforms to groundbreaking Grace Blackwell systems. You will drive architectural decisions across compute, networking, and storage, and partner closely with software, research, and product teams to keep our infrastructure ahead of the workloads it supports.
What you’ll be doing:
• Own the full lifecycle of GPU compute clusters — procurement, provisioning, configuration management, monitoring, and decommissioning — across heterogeneous Linux environments (DGX, HGX, embedded systems)
• Design and scale storage solutions (NFS, Lustre, WekaFS, or equivalent) with a clear roadmap for capacity and performance growth
• Lead automation of infrastructure using modern IaC tools (Ansible, Terraform) and CI/CD pipelines (GitLab)
• Manage and optimize job scheduling via Slurm, including fair-share policies, reservation management, and MIG/GPU partitioning strategies
• Maintain and improve observability stacks (Prometheus, Grafana, DCGM) and drive proactive resolution of hardware and software incidents
• Collaborate with ML engineers and software teams to tune cluster configurations for large-scale distributed training workloads
• Evaluate and introduce new technologies — networking fabrics (InfiniBand, NVLink, EFA/RDMA), storage tiers, container runtimes — to improve performance and reliability
• Mentor junior engineers and contribute to team-wide engineering standards
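To give a flavor of the Slurm work described above, here is a minimal sketch of fair-share priority and GPU generic-resource (GRES) configuration. All node names, partition names, GPU counts, and limits are illustrative assumptions, not details from this posting:

```
# slurm.conf (illustrative fragment — names and counts are hypothetical)

# Fair-share scheduling via the multifactor priority plugin
PriorityType=priority/multifactor
PriorityWeightFairshare=10000
PriorityDecayHalfLife=7-0          # recorded usage decays with a 7-day half-life

# Track GPUs (including MIG instances) as schedulable generic resources
GresTypes=gpu
NodeName=dgx[01-08] Gres=gpu:a100:8 CPUs=128 RealMemory=1000000 State=UNKNOWN

# Default partition for training jobs, capped at 3 days of wall time
PartitionName=train Nodes=dgx[01-08] Default=YES MaxTime=3-0 State=UP
```

On each node, a matching gres.conf maps the GRES entries to GPU (or MIG slice) device files; jobs then request GPUs with, e.g., `sbatch --gres=gpu:a100:4`.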
What we need to see:
• BS/MS in CS, EE, CE, or equivalent hands-on experience
• 5+ years of experience deploying and administering large-scale HPC or ML training clusters
• Deep expertise in Linux systems administration at scale
• Strong scripting and automation skills in Python and/or bash
• Hands-on experience with Slurm (scheduling, accounting, cgroup configuration)
• Proficiency with configuration management and IaC (Ansible required; Terraform a plus)
• Experience with container technologies (Docker, Apptainer/Singularity, Kubernetes)
• Solid understanding of high-speed networking (InfiniBand, RoCE, RDMA, EFA)
• Experience with distributed/parallel filesystems and storage architecture
• Ability to own problems end-to-end and communicate clearly with engineering and management stakeholders
Ways to stand out from the crowd:
• Experience with NVIDIA GPU infrastructure tools (DCGM, nvidia-smi, MIG, NVSwitch diagnostics)
• Familiarity with cluster management platforms (Colossus, Bright Cluster Manager, xCAT, or similar)
• Experience supporting large-scale distributed deep learning workloads (PyTorch, JAX, Megatron)
• Knowledge of BMC/IPMI/Redfish for out-of-band management and hardware lifecycle
• Background in MLOps tooling or ML platform engineering
Join our team of world-class engineers and be part of the groundbreaking work we do at NVIDIA. We are committed to fostering a collaborative and inclusive environment, where every team member has the opportunity to thrive and make a significant impact!
Your base salary will be determined by your location, experience, and the pay of employees in similar positions. For Poland: the base salary range is 221,250 PLN – 383,500 PLN for Level 3, and 292,500 PLN – 507,000 PLN for Level 4.
To apply for this job, please visit jobicy.com.

