Big Data Administrator (Hadoop, SRE)

Location: East Lansing, MI

Job Type: Perm

Our client in East Lansing is hiring a permanent Big Data Administrator to join their growing team. 

Senior Engineer/Team Lead responsible for the design, implementation, and ongoing support of the Big Data platforms (Hadoop, HBase, Hive, Spark) in an ad tech environment. Our Big Data environment handles more than 40 PB of data across large clusters with 1,000+ nodes.

Responsibilities:
  • Focus on maintaining and operating the cluster rather than developing on it
  • Serve in a hands-on role with leadership responsibilities on a large multi-tenant analytics Hadoop cluster
  • Provide technical leadership and collaborate with the engineering organization; develop key deliverables for the Data Platform strategy: scalability, optimization, operations, availability, and roadmap
  • Own the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Manage and mentor Hadoop engineers and administrators
  • Tune performance and operational efficiency
  • Formulate methods to enable consistent Data loading and optimize Data operations
  • Evaluate and introduce tools and processes to streamline operation of the data platform and use case execution
  • Meet performance and SLA requirements of the Hadoop clusters
  • Perform remote monitoring and event handling using Nagios
  • Design metrics and monitor real-time performance of clusters
  • Evaluate and recommend systems software and hardware for the enterprise platform, including capacity modeling

Qualifications

  • 10+ years' experience as a technical lead with a focus on open source and petabyte-scale data implementations, including 5+ years as technical lead for a Hadoop/Big Data implementation
  • Expertise with YARN, Spark, Kafka, Storm, ZooKeeper, Hive, HBase
  • Experience in the optimization, capacity planning, and architecture of a large multi-tenant cluster
  • Expertise with one or more of the following languages: Python, Scala, Java, Perl, Ruby, or Bash.
  • Experience with multiple open source tool sets in the Big Data space
  • Experience with tool integration, automation, and configuration management using Git and Puppet
  • Excellent verbal and written communication, presentation, analytical, and problem-solving skills

Apply Now
