Penguin Computing is a global leader in high-performance computing (HPC), delivering complete, integrated HPC solutions, from the workstation to the cloud.
With a focus on cutting-edge technology, ease of use, and exceptional customer service, Penguin cost-effectively meets the needs of the world's most demanding HPC users, including Caterpillar, Lockheed Martin, the U.S. Department of Defense, and dozens of higher education and federally funded research and development centers.
Today, Penguin delivers a range of solutions, from massive Linux clusters to Penguin on Demand (POD), a new service that provides a complete HPC solution in the cloud.
Penguin has been an innovator in HPC solutions for over a decade, and one of the company's founders is recognized as the Father of Linux Clustering.
About the Job:
This is a technical position that requires hands-on HPC cluster configuration experience and a breadth of skills ranging from complex HPC cluster and application integration to on-site service and support. You will interact with and support some of the top engineering, research, academic, and IT professionals in the HPC community.
Candidates should have proven implementation skills with a number of HPC technologies. These include:
Cluster management software (Scyld, xCAT, ROCKS),
Job/resource schedulers (TORQUE, Moab, SGE),
Network configuration (TCP/IP administration; application-layer protocols and services such as HTTP, DNS, NFS, TFTP, NIS, and PXE; high-performance interconnects such as 10GbE and InfiniBand; and switch configuration),
Data storage (DAS, NAS, and SAN configurations; hardware and software RAID implementations; SAS expanders; and parallel and scale-out file systems such as Lustre, Panasas, and Hadoop),
Compiler toolchains (the GNU toolchain as well as the Intel Fortran, C, and C++ compilers),
MPI libraries (MPICH, MVAPICH, and OpenMPI),
General-purpose GPU and accelerator computing (NVIDIA GPUs, Intel Xeon Phi, AMD Fusion)
Familiarity with parallel applications used across industry verticals is desirable. These applications include Abaqus, Fluent, ANSYS, WRF, MATLAB, BLAST, LS-DYNA, Gaussian, and others.
Responsibilities:
Install, configure, and maintain Linux HPC and HA clusters.
Test and integrate third-party applications in Linux HPC and HA clusters.
Troubleshoot software, hardware, OS, and application compatibility and configuration issues.
Perform onsite installation, integration, and verification of customer systems.
Requirements:
Bachelor's Degree in Computer Science or Electrical Engineering.
Experience with Red Hat and CentOS is required.
5 years of hands-on experience with parallel programming technologies, application optimization in Linux cluster environments, software installation across a variety of cluster environments, and cluster set-up and configuration.
Ability to travel as needed, up to 50%, throughout the assigned region.
Experience with SUSE is a plus.
Strong knowledge of High Performance Computing (HPC) application development.
Security clearance is a plus.
Outstanding verbal, written, and interpersonal communication skills.
Ability to help grow a vibrant, leading-edge professional services organization is a plus.
Penguin Computing is an Affirmative Action/Equal Opportunity Employer and is strongly committed to policies that afford equal employment opportunity to all qualified persons without regard to age, national origin, race, ethnicity, creed, gender, disability, veteran status, or any other characteristic protected by law.
Penguin Computing will consider applicants with criminal histories in a manner consistent with the requirements of the City of San Francisco Fair Chance Ordinance.
This job is no longer active.