We are a forward-thinking organization assisting clients with modern data solutions. We are currently seeking a skilled Mid-Level Data Engineer to support a critical data platform migration from Databricks to Snowflake. This is a short-term contract with potential for extension based on project needs and performance.
Job Summary:
The Data Engineer will play a key role in delivering a smooth migration from Databricks to Snowflake. You will work closely with senior engineers and cross-functional teams to develop ETL pipelines, ensure data integrity, and optimize performance. The role requires hands-on experience with Python and Snowflake; prior Databricks experience is a strong plus.
Key Responsibilities:
- Support the migration of data from Databricks to Snowflake, ensuring data is transferred accurately and with minimal disruption (an illustrative example of this kind of work follows this list).
- Develop and maintain ETL pipelines using Python and Spark for data processing.
- Optimize Snowflake query performance and storage for efficiency and cost-effectiveness.
- Collaborate with cross-functional teams to gather requirements and support data modeling efforts.
- Troubleshoot and resolve data-related issues during the migration process.
- Ensure data quality and integrity, and follow data engineering best practices throughout the migration.
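For illustration only, the sketch below shows one way a single migration step might look in PySpark: reading a table in Databricks and writing it to Snowflake via the Snowflake Spark connector. The table names and connection values are placeholders, not details of this project, and real credentials would be supplied through a secrets manager rather than hard-coded.

```python
# Illustrative sketch only -- table names and connection values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical source table in Databricks
source_df = spark.read.table("analytics.orders")

# Placeholder Snowflake connection options; in practice these come from secrets
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "MIGRATION_WH",
}

# Write the DataFrame to a Snowflake table, replacing any existing data
(
    source_df.write
    .format("snowflake")  # "net.snowflake.spark.snowflake" outside Databricks
    .options(**sf_options)
    .option("dbtable", "ORDERS")
    .mode("overwrite")
    .save()
)
```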
Required Skills:
- Strong proficiency in Python for data engineering tasks.
- Hands-on experience with Snowflake for data warehousing and storage.
- Solid understanding of ETL pipeline development and data processing concepts.
- Experience with cloud-based data storage and platforms.
- Ability to work collaboratively in a fast-paced, team-oriented environment.
Nice-to-Have Skills:
- Experience with Databricks, along with Spark or Snowpark.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.