Data Engineer

Athena

Building one-to-one partnerships between best-in-class remote Executive Assistants and high-achieving professionals.

The Role

The Data Engineer will be responsible for the following:


High-level Responsibilities

  • Create and maintain optimal, company-wide data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements, including:
      • automating manual processes
      • optimizing data delivery
      • re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Python, SQL, etc. (a brief sketch follows this list)
  • Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other business KPIs
  • Work with the Executive, Product, Operations, and Design teams to assist with data-related technical & infrastructure needs
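
For a sense of what this extract-transform-load work can look like in practice, here is a minimal Python sketch; the file name, connection string, and table name are hypothetical placeholders rather than a description of Athena's actual pipelines.

    # Minimal ETL sketch: extract a CSV, clean it with pandas, load it
    # into PostgreSQL. All names below (orders.csv, the connection
    # string, the orders_clean table) are hypothetical placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    def extract(path: str) -> pd.DataFrame:
        # Extract: read raw records from a source file.
        return pd.read_csv(path)

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        # Transform: drop incomplete rows and normalize the schema.
        df = df.dropna(subset=["order_id", "amount"])
        df.columns = [c.strip().lower() for c in df.columns]
        df["amount"] = df["amount"].astype(float)
        return df

    def load(df: pd.DataFrame, dsn: str, table: str) -> None:
        # Load: append the cleaned frame to a relational target.
        engine = create_engine(dsn)
        df.to_sql(table, engine, if_exists="append", index=False)

    if __name__ == "__main__":
        frame = transform(extract("orders.csv"))
        load(frame, "postgresql://user:pass@localhost/analytics", "orders_clean")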

Specific Responsibilities

The Data Engineer will support our data analysts, data operations associates, database architects, and various business teams on data initiatives, ensuring that optimal data-delivery architecture remains consistent across ongoing projects.

Some Of The Things You Will Work On

  • Build data pipelines that clean, transform, and aggregate disparate data (see the sketch after this list)
  • Use software development principles to improve our back-end systems
  • Model front- & back-end data sources to draw a comprehensive picture of data flow throughout the organization and enable powerful data analysis
  • Develop models that can be used to answer questions for the business
  • Communicate with other teams to understand their data needs
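
As a rough illustration of cleaning and aggregating disparate data, the following pandas sketch joins two hypothetical sources (a CRM export and a billing feed) into a per-customer summary; every file and column name here is an assumption, not part of the role description.

    # Sketch: combine two disparate sources and aggregate them into a
    # per-customer summary. File and column names are hypothetical.
    import pandas as pd

    crm = pd.read_csv("crm_contacts.csv")          # customer_id, segment
    billing = pd.read_json("billing_events.json")  # customer_id, amount

    summary = (
        billing.merge(crm, on="customer_id", how="left")
               .groupby(["customer_id", "segment"], dropna=False)
               .agg(total_spend=("amount", "sum"),
                    events=("amount", "count"))
               .reset_index()
    )
    print(summary.head())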

Your Metrics

  • Uptime for data dashboards
  • Response time for data dashboards
  • Internal user satisfaction with data services

Qualifications

To be successful you will need a combination of problem-solving, technical, and communication skills. The following qualifications are important success factors:

  • Degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, or related experience
  • Experience with ETL tools (e.g., Fivetran, dbt, etc.)
  • Experience with relational SQL and NoSQL databases (e.g., PostgreSQL, Airtable, etc.)
  • Experience with data pipeline and workflow management tools (e.g., Airflow, Luigi, Azkaban, etc.)
  • Experience with GCP services (e.g., BigQuery, Cloud Composer, Cloud SQL for PostgreSQL, etc.); a short sketch follows this list
  • Experience with object-oriented/functional scripting languages (e.g., Python, Java, etc.) & querying languages (e.g., SQL, etc.)
  • Experience performing root cause analysis on data & processes to answer business questions and identify opportunities for improvement
  • Experience building processes that support data transformation, data structures, metadata, dependency, and workload management
  • Strong analytic skills related to working with structured & unstructured datasets
  • A successful history of manipulating, processing, and extracting value from large, disconnected data systems
  • Strong project management and organizational skills
  • Experience supporting and working with cross-functional teams
  • Great problem-solving and communication skills
  • Ambitious, quick to learn, and ready to take ownership
  • An advocate for best practices and continued learning
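
To make the workflow-management and GCP items above concrete, here is a hedged sketch of a daily Airflow DAG that loads a Cloud Storage export into BigQuery; the bucket, project, dataset, and table names are hypothetical, and the schedule argument assumes Airflow 2.4 or later.

    # Sketch of a daily Airflow DAG loading a GCS export into BigQuery.
    # Bucket, project, dataset, and table names are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from google.cloud import bigquery

    def load_to_bigquery():
        # Load a newline-delimited JSON export from GCS into BigQuery.
        client = bigquery.Client()
        job = client.load_table_from_uri(
            "gs://example-bucket/exports/events.json",  # hypothetical path
            "example-project.analytics.events",         # hypothetical table
            job_config=bigquery.LoadJobConfig(
                source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
                write_disposition="WRITE_APPEND",
            ),
        )
        job.result()  # block until the load job finishes

    with DAG(
        dag_id="daily_events_load",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="load_events", python_callable=load_to_bigquery)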

To apply for this job, please visit www.linkedin.com.
