
January 22, 2025

Data Architect

Job Description:

We are seeking a visionary Data Architect to lead the data engineering practice within our
product engineering services. This role involves defining best practices for building data lakes
and warehouses, identifying cutting-edge tools and platforms for exploration and adoption, and
driving architectural decisions for data engineering projects. The ideal candidate will shape our
strategy and guide our teams in delivering scalable, future-ready data solutions.

Responsibilities:

  • Design and implement the overall data architecture strategy, including data models, data
    integration patterns, and data governance frameworks.
  • Develop and maintain enterprise data standards, policies, and procedures to ensure data
    consistency, quality, and compliance across the organization.
  • Design and oversee the implementation of data integration solutions, including ETL/ELT
    processes, data pipelines, and real-time data streaming architectures.
  • Collaborate with data engineers to implement and optimize data storage solutions, including data warehouses, data lakes, and data marts.
  • Work with security teams to implement data security measures, including data
    encryption, access controls, and data masking techniques.
  • Evaluate and recommend new data technologies and tools to enhance the organization’s
    data capabilities.
  • Provide technical leadership and mentorship to data engineers and other technical team
    members.
  • Collaborate with business stakeholders to understand data requirements and translate
    them into technical specifications.
  • Develop and maintain documentation of data architecture, including data flow diagrams, entity-relationship diagrams, and system integration maps.
  • Ensure compliance with data privacy regulations (e.g., GDPR, CCPA) and industry standards.

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related
    field
  • 7+ years of experience in data architecture or related roles.
  • Strong understanding of data modeling techniques, including dimensional modeling and data
    vault modeling.
  • Extensive experience with relational databases (e.g., Oracle, SQL Server, PostgreSQL) and
    NoSQL databases (e.g., MongoDB, Cassandra).
  • Experience in designing and managing data pipelines that move data across different
    layers within Databricks. Experience with Azure ADLS Gen2 SDK for monitoring data
    ingestion and managing large data sets.
  • Proficiency in data warehousing concepts and technologies (e.g., Snowflake, Amazon
    Redshift, Google BigQuery, or a Databricks-hosted data warehouse following a medallion
    architecture).
  • Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and their data services.
  • Familiarity with data integration tools and ETL/ELT processes.
  • Understanding of data governance principles and experience implementing data governance
    frameworks.
  • Strong skills in SQL and at least one programming language (e.g., Python, Java, Scala).
  • Experience with data visualization tools (e.g., Tableau, Power BI) and their architectural requirements.
  • Excellent communication skills and ability to translate complex technical concepts to non-technical stakeholders.
  • Strong analytical and problem-solving skills.

Preferred Qualifications:

  • Experience with machine learning and AI architectures.
  • Knowledge of graph databases and their applications.
  • Familiarity with data mesh and data fabric concepts.
  • Experience with real-time data streaming technologies (e.g., Kafka, Apache Flink).