Job Description
This is a remote position.
Job Title: Data Platform Engineer
Location: Portugal
Work Regime: Full-time & Remote
Overview / Summary:
We are looking for a Data Platform Engineer to join our team!
Responsibilities and Tasks:
- Design, build, and maintain production-grade data pipelines using SQL, Apache Spark, and AWS services (see the sketch after this list);
- Operate and troubleshoot data orchestration and streaming tools (dbt, Kafka, NiFi, Ignite, Airflow);
- Develop data engineering solutions in Python or Scala with proper testing and CI/CD practices;
- Ensure data governance, security, and compliance across cloud environments;
- Monitor and optimize workflows with Grafana.
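By way of illustration, here is a minimal PySpark sketch of the kind of batch pipeline this role involves. The bucket paths, column names, and aggregation are assumptions made up for the example, not details from this posting:

```python
# Minimal PySpark batch-pipeline sketch. The S3 paths, column names, and
# table layout are illustrative assumptions, not details from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-orders-pipeline").getOrCreate()

# Read raw events landed on S3 (hypothetical path and schema).
orders = spark.read.parquet("s3://example-raw-bucket/orders/")

# Basic cleansing plus a daily aggregate: routine work for a production pipeline.
daily_revenue = (
    orders
    .where(F.col("status") == "completed")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Write back partitioned by date for downstream consumers (e.g., dbt models).
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_revenue/"
)
```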
Requirements
Mandatory Requirements:
- Experience with data pipelines in a production environment;
- Strong hands-on knowledge of SQL and experience with distributed data processing (e.g., Apache Spark);
- Proficiency in troubleshooting dbt, Apache Kafka, Apache Ignite, Apache NiFi, and Apache Airflow;
- Proficiency in a programming language such as Python or Scala for data engineering tasks;
- Experience with cloud data services (AWS);
- Familiarity with monitoring tools such as Grafana;
- Knowledge of data governance, security, and compliance in a cloud environment;
- Experience implementing CI/CD and testing frameworks for data workflows (see the sketch after this list);
- Strong problem-solving skills, communication skills, and a passion for collaboration;
- Fluency in written and spoken Portuguese and English.
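To make the testing requirement concrete, here is a small pytest-style sketch of the kind of check a CI stage might run before deploying a pipeline. The `normalize_order` helper and its fields are hypothetical, invented for the example:

```python
# Sketch of a CI-stage unit test for a data transform, in pytest style.
# normalize_order is a hypothetical helper; the field names are assumptions.
import pytest

def normalize_order(record: dict) -> dict:
    """Lower-case the status field and coerce the amount to float."""
    return {
        "order_id": record["order_id"],
        "status": record["status"].strip().lower(),
        "amount": float(record["amount"]),
    }

def test_normalize_order_cleans_fields():
    raw = {"order_id": "A-1", "status": " Completed ", "amount": "19.90"}
    assert normalize_order(raw) == {
        "order_id": "A-1",
        "status": "completed",
        "amount": 19.9,
    }

def test_normalize_order_rejects_bad_amount():
    # A non-numeric amount should fail loudly rather than pass through.
    with pytest.raises(ValueError):
        normalize_order({"order_id": "A-2", "status": "x", "amount": "n/a"})
```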
Complementary Requirements:
- Familiarity with Dremio and data lakehouse architectures and concepts;
- Experience provisioning data services with Terraform;
- Familiarity with the Power BI on-premises data gateway;
- Hands-on experience with data catalogue and governance tools (e.g., Collibra, Alation);
- Experience with machine learning pipelines and feature engineering (see the sketch below).
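Finally, a minimal pandas sketch of per-customer feature engineering, assuming a small illustrative orders table; the feature choices (spend, frequency, recency) are examples only:

```python
# Minimal feature-engineering sketch with pandas; the dataset and feature
# choices are illustrative assumptions, not details from this posting.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "order_date": pd.to_datetime(
        ["2024-01-02", "2024-01-20", "2024-01-05", "2024-02-01", "2024-02-10"]
    ),
    "amount": [20.0, 35.0, 10.0, 12.5, 40.0],
})

# Per-customer features a model might consume: spend, frequency, recency.
features = (
    orders.groupby("customer_id")
    .agg(
        total_spend=("amount", "sum"),
        order_count=("amount", "count"),
        last_order=("order_date", "max"),
    )
    .assign(
        days_since_last=lambda df: (
            pd.Timestamp("2024-03-01") - df["last_order"]
        ).dt.days
    )
    .drop(columns="last_order")
)
print(features)
```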
Important:
Our company does not sponsor work visas or work permits. All applicants must have the legal right to work in the country where the position is based.
Only candidates who meet the required qualifications and match the profile requested by our clients will be contacted.
#VisionaryFuture - Build the future, join our living ecosystem!