Job Description
Title: Hadoop Platform Engineer
Location: Remote (U.S.-based)
Job Type: Contract (6 months)
Industry: Retail
Compensation: $80–$150/hour (W2)
Required Skills: Java, Hadoop, Hive, Spark, Hadoop Distributed File System (HDFS), Linux
---
About the Role
We are seeking a highly skilled Hadoop Platform Engineer to support a leading enterprise in the retail and data analytics sector. This organization operates at massive scale, processing petabytes of data daily to drive business insights and customer experiences. As part of their data infrastructure team, you’ll play a critical role in building and maintaining a robust, scalable Hadoop ecosystem that powers advanced analytics and decision-making across the company.
Job Description
As a Hadoop Platform Engineer, you will be responsible for the design, development, and maintenance of a high-performance Hadoop platform. You’ll leverage deep expertise in the Apache Hadoop ecosystem and Java to optimize cluster performance, enhance observability, and automate operational workflows. Your work will directly impact the reliability and efficiency of data processing pipelines used across the organization.
Key Responsibilities
- Architect and maintain a scalable Hadoop platform supporting large-scale data workloads.
- Diagnose and resolve performance issues across HDFS, YARN, Hive, Spark, and other core components.
- Implement and standardize monitoring frameworks and health checks to ensure platform stability.
- Optimize Linux-based systems (especially Ubuntu) for Hadoop workloads through kernel tuning and resource management.
- Develop automated alerting systems to proactively identify and resolve issues.
- Collaborate with cross-functional teams to improve job efficiency and cluster utilization.
- Establish operational standards for performance tuning, capacity planning, and upgrades.
- Automate routine tasks using scripting languages such as Python, Bash, or Scala.
- Ensure compliance with data governance and security protocols.
- Conduct root cause analysis and implement preventive measures for recurring incidents.
- Document best practices and maintain operational runbooks for platform management.
Qualifications
Required Qualifications
- Proven experience with the Apache Hadoop ecosystem, including HDFS, Hive, and Spark
- Strong programming skills in Java
- Proficiency in Linux system administration and performance tuning
- Experience diagnosing and optimizing distributed systems at scale
Preferred Qualifications
- Experience with observability tools such as Grafana
- Scripting experience in Python
- Familiarity with Trino or other distributed SQL engines
Benefits
Dahl Consulting is proud to offer a comprehensive benefits package to eligible employees, allowing you to choose the coverage that best meets your family’s needs. For details, please review the DAHL Benefits Summary.
How to Apply
Take the first step on your new career path!
To be considered for this role, simply click the apply button and complete our mobile-friendly online application. Once we’ve reviewed your application, a recruiter will reach out to you with next steps! For questions or more information about this role, please call our office at (651) 772-9225.
Equal Opportunity Statement
As an equal opportunity employer, Dahl Consulting welcomes candidates of all backgrounds and experiences to apply. If this position sounds like the right opportunity for you, we encourage you to take the next step and connect with us. We look forward to meeting you!
Job Tags
Contract work, Work at office, Remote work