Data Engineer
We are seeking a data enthusiast who thrives on working with both structured and unstructured data. In this role, you will be involved in cutting-edge data fabric and data mesh projects, working on complex initiatives that integrate multiple data sources and formats.
Key Responsibilities:
Develop ETL pipelines using Python and Azure Data Factory, along with DevOps CI/CD pipelines.
Engage in software engineering and systems integration through REST APIs and other standard interfaces.
Collaborate with a professional team to develop data pipelines, automate processes, deploy infrastructure as code, and manage multicloud solutions.
Participate in agile ceremonies and weekly demos, and communicate daily commitments effectively.
Configure and connect various data sources, particularly SQL databases.
Ideal Candidate:
Holds a degree in Computer Science (BSc and/or MSc desired).
Has 3+ years of practical experience in similar roles.
Proficient with ETL tools and data platforms such as Spark/PySpark, Databricks, Snowflake, and Azure Data Factory.
Experienced with Azure DevOps Classic and YAML pipelines.
Advanced SQL knowledge and experience with relational databases such as Microsoft SQL Server, Oracle, MySQL, and PostgreSQL.
Understanding of data architecture concepts, data modeling, and advanced programming skills in Python.
Familiar with CI/CD principles and best practices in an agile environment (Scrum, Kanban).