Summary
Your role in our mission
Essential Job Functions
-Design, develop, and deploy data pipelines, including ETL processes, using the Apache Spark framework.
-Monitor, manage, validate, and perform synthetic testing of data extraction, movement, transformation, loading, normalization, cleansing, and update processes in product development.
-Coordinate with stakeholders to understand their needs and deliver with a focus on quality, reuse, consistency, and security.
-Collaborate with team members on various models and schemas.
-Collaborate with team members on documenting source-to-target mappings.
-Conceptualize and visualize frameworks.
-Communicate effectively with various stakeholders.
What we're looking for
Basic Qualifications
-Bachelor's degree in computer science or a related field
-3 years of relevant experience in ETL processing/data architecture, or an equivalent combination of education and experience
-3+ years of experience working with big data technologies on AWS/Azure/GCP
-2+ years of experience with the Apache Spark/Databricks framework (Python/Scala)
-Databricks and AWS developer/architect certifications are a big plus
Other Qualifications
-Strong project planning and estimation skills related to your area of expertise
-Strong communication skills
-Good leadership skills to guide and mentor less experienced personnel
-Ability to be a high-impact player on multiple simultaneous engagements
-Ability to think strategically, balancing long- and short-term priorities
What you should expect in this role
Working environment: Remote