About the role
We’re looking for a skilled Data Engineer with deep Snowflake expertise to help modernize and scale our data platform. If you thrive in a fast-moving environment, can wrangle messy pipelines, and want to build the backbone of a cloud-first data strategy, this role is for you. You’ll work across legacy and modern systems to deliver reliable, high-quality data to customers and colleagues who depend on it every day.
Responsibilities
- Design, build, and maintain scalable and efficient data pipelines to support analytics, reporting, and operational use cases
- Collaborate closely with product owners, analysts, and data consumers to translate business requirements into reliable data solutions
- Develop and maintain data integration workflows across both cloud-native and on-premise systems
- Champion best practices in data architecture, modeling, and quality assurance to ensure accuracy and performance
- Participate in sprint planning, daily stand-ups, and retrospectives as an active member of a cross-functional agile team
- Identify and remediate technical debt across legacy pipelines and contribute to the modernization of the data platform
- Implement robust monitoring and alerting for pipeline health, data quality, and SLA adherence
- Write and maintain documentation for data flows, transformations, and system dependencies
- Contribute to code reviews and peer development to foster a collaborative and high-quality engineering culture
- Ensure adherence to security, privacy, and compliance standards in all data engineering practices
Skills & Qualifications
- 5+ years of professional experience in data engineering, analytics engineering, or related fields
- Bachelor’s Degree in Computer Science or an equivalent field and 2+ years of experience
- Advanced SQL skills, including performance tuning and query optimization
- Expertise in Snowflake, including data warehousing concepts, architecture, and best practices
- Experience with modern data transformation tools (e.g., dbt)
- Experience building and maintaining automated ETL/ELT pipelines, with a focus on performance, scalability, and reliability
- Proficiency with version control systems (e.g., Git), working within CI/CD pipelines, and experience with environments that depend on infrastructure-as-code
- Experience writing unit and integration tests for data pipelines
- Familiarity with data modeling techniques (e.g., dimensional modeling, star/snowflake schemas)
- Experience with legacy, on-premise databases such as Microsoft SQL Server is preferred
- Exposure to cloud platforms (e.g., AWS, Azure, GCP), cloud-native data tools, and data federation tools is a plus
- Experience with SQL Server Reporting Services (SSRS) is beneficial
Soft Skills & Collaboration
- Ability to work effectively in a hybrid technology environment (legacy and modern tools)
- Able to collaborate and communicate with both technical and non-technical stakeholders
- Problem-solving mindset, especially when working with incomplete documentation or legacy systems
- A pragmatic, iterative approach to balancing short-term delivery with long-term platform improvement
- Tenacity and drive to reduce technical debt and lead modernization efforts
- Passion for continuous improvement and advocating for best practices in data engineering