Data Engineer – Azure & Databricks

Adeva is hiring a Data Engineer – Azure & Databricks.

Job description

Adeva is a global talent network that enables work without boundaries by connecting tech professionals with top companies worldwide. 

This role is for a highly skilled Data Engineer specializing in building, optimizing, and managing data pipelines for ingestion, transformation, and publication using Azure Databricks and related Azure services. The ideal candidate has strong experience with Databricks, Spark (SQL, PySpark), Databricks Workflows, and Azure data services such as Azure Data Lake, with a focus on scalability and performance. You will build and maintain the pipelines that feed integrations and downstream analytics, ensuring their performance, scalability, and reliability.

Responsibilities

  • Design and implement end-to-end data pipelines in Azure Databricks, handling ingestion from various data sources, performing complex transformations, and publishing data to Azure Data Lake or other storage services.
  • Write efficient and standardized Spark SQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline.
  • Automate pipeline orchestration using Databricks Workflows or integration with external tools (e.g., Apache Airflow, Azure Data Factory).
  • Build scalable data ingestion processes to handle structured, semi-structured, and unstructured data from various sources (APIs, databases, file systems).
  • Implement data transformation logic using Spark, ensuring data is cleaned, transformed, and enriched according to business requirements.
  • Leverage Databricks features such as Delta Lake to manage and track changes to data, enabling better versioning and performance for incremental data loads.
  • Publish clean, transformed data to Azure Data Lake or other cloud storage solutions for consumption by analytics and reporting tools.
  • Define and document best practices for managing and maintaining robust, scalable data pipelines.
  • Implement and maintain data governance policies using Unity Catalog, ensuring proper organization, access control, and metadata management across data assets.
  • Ensure data security best practices, such as encryption at rest and in transit, and role-based access control (RBAC) within Azure Databricks and Azure services.
  • Optimize Spark jobs for performance by tuning configurations, partitioning data, and caching intermediate results to minimize processing time and resource consumption.
  • Continuously monitor and improve pipeline performance, addressing bottlenecks and optimizing for cost efficiency in Azure.
  • Automate data pipeline deployment and management using tools like Terraform, ensuring consistency across environments.
  • Set up monitoring and alerting mechanisms for pipelines using Databricks built-in features and Azure Monitor to detect and resolve issues proactively.

Requirements

  • Extensive experience in designing and implementing scalable ETL/ELT data pipelines in Azure Databricks, transforming raw data into usable datasets for analysis.
  • Strong knowledge of Spark (SQL, PySpark) for data transformation and processing within Databricks, along with experience building workflows and automation using Databricks Workflows.
  • Hands-on experience with Azure services like Azure Data Lake, Azure Blob Storage, and Azure Synapse for data storage, processing, and publication.
  • Familiarity with managing data governance and security using Databricks Unity Catalog, ensuring data is appropriately organized, secured, and accessible to authorized users.
  • Proven experience in optimizing data pipelines for performance, cost-efficiency, and scalability, including partitioning, caching, and tuning Spark jobs.
  • Strong understanding of Azure cloud architecture, including best practices for infrastructure-as-code, automation, and monitoring in data environments.
  • Fluent English and excellent communication skills.
  • Ability to work as part of an international, distributed team and resolve potential issues and challenges that come with remote work.

About Adeva

Adeva is an exclusive network of engineers, product, and data professionals that connects consultants with leading enterprise organizations and startups. Our network is distributed all over the world, with engineers in more than 35 countries. Our company culture builds connections, careers, and employee growth. We are creating a workplace of the future that values flexibility, autonomy, and transparency. If that sounds like something you’d like to be part of, we’d love to hear from you.
