Data Engineering Lead


How to Apply

A cover letter and resume are required so the hiring team can understand your experience. In your cover letter, in one page or less, please explain how this role aligns with your career aspirations and skills. Submit the cover letter and resume as a single file.

A salary above the posted range may be offered based on the selected candidate's qualifications, experience, and education.

Job Summary

The University of Michigan's Data Science Practice team is seeking a Data Engineering Lead for our data engineering and architecture teams. As stewards of the University's enterprise data assets, we are reimagining our data infrastructure to support advanced analytics, data science, artificial intelligence, and institutional decision-making. This role works directly with Information and Technology Services (ITS) and campus partners to enable a flexible data ecosystem that meets today's users' needs and scales for future growth. The position directly leads our data engineers and architects and reports to the Associate Director of the Data Science Practice team.

Who You Are

You are motivated, innovative, and creative, and you see opportunities for improvement in both technologies and processes. You have a background in data and can see the big picture. You are a leader and a team player.

Who We Are

We are moving to a new cloud-based, scalable, and secure data ecosystem. We manage our enterprise data warehouse and have a group of data engineers and data architects who will be working in a new environment, implementing new processes and technologies, and migrating data from a legacy system.

Why Work at Michigan?

  • Be part of a collaborative, forward-thinking team with a mission to modernize data infrastructure in service of education, research, and institutional excellence.
  • Access professional development opportunities, conferences, and cutting-edge tools.
  • Competitive salary and benefits, including generous time off and tuition support.

Responsibilities*

  • Mentor and collaborate with data engineers, data architects, and other high-performing team members.
  • Design, implement, and optimize data pipelines that support the university's modernization strategy.
  • Contribute to the design of our future cloud-native data architecture and the selection and implementation of our new toolset.
  • Collaborate with data scientists, analysts, and stakeholders to deliver high-quality, well-documented data products.
  • Develop scalable ETL/ELT processes that integrate data from enterprise systems (e.g., PeopleSoft, Salesforce, Blackbaud).
  • Establish and promote best practices for data modeling, engineering, and governance in a modern data platform environment.
  • Support agile delivery and CI/CD practices to improve efficiency and quality across the data lifecycle.

Required Qualifications*

  • Bachelor's degree in a related field and/or equivalent combination of education, certification, and experience.  
  • 5+ years of professional experience in data engineering, software development, or a related discipline.
  • Strong proficiency in SQL and experience with both relational and non-relational database systems.
  • Experience developing in Python and/or Scala, particularly for data pipeline creation and automation.
  • Hands-on experience with cloud environments (AWS, Azure, or GCP) and associated data services.
  • Experience with big data frameworks such as Apache Spark or Databricks.
  • Knowledge of workflow orchestration tools (e.g., Apache Airflow, Prefect, or similar).
  • Familiarity with modern CI/CD and infrastructure-as-code practices (e.g., Terraform, GitHub Actions).
  • Strong understanding of data architecture, integration, and governance principles.
  • Excellent communication skills and ability to collaborate in a team-oriented environment.

Desired Qualifications*

  • Master's degree in a relevant field (e.g., Data Science, Computer Science, Engineering).
  • Experience supporting data and analytics workflows (BI, data science, machine learning, etc.).
  • Familiarity with modern data stack tools and concepts (platforms such as Snowflake, Databricks, BigQuery, or Redshift; transformation tools such as dbt and Spark).
  • Understanding of data privacy and compliance frameworks (e.g., FERPA, HIPAA, GDPR).
  • Experience modernizing legacy data systems or migrating to cloud-native architectures.
  • Knowledge of MLOps, feature engineering, or model deployment tools.
  • Experience mentoring team members or leading technical projects.
  • Background in higher education or research environments is a plus.

You may not check every box, or your experience may look a little different from what we've outlined, but if you think you can bring value to our team, we encourage you to apply!

Modes of Work

Positions that are eligible for hybrid or mobile/remote work mode are at the discretion of the hiring department. Work agreements are reviewed annually at a minimum and are subject to change at any time, and for any reason, throughout the course of employment. Learn more about the work modes.

Application Deadline

Job openings are posted for a minimum of seven calendar days. The review and selection process may begin as early as the eighth day after posting. This opening may be removed from posting boards and filled at any time after the minimum posting period has ended.

U-M EEO Statement

The University of Michigan is an equal employment opportunity employer.