Python
AWS
SQL
ETL Developer/Architect
Overview
Clarasight is seeking an exceptional data and engineering talent who is deeply committed to reinventing how enterprises close the gap between climate-related intent and action by building our Carbon Planning and Analytics platform.
Job Description
Clarasight provides global enterprises with a Carbon Planning platform to power emissions reduction. Some of the world's most recognizable companies rely on Clarasight's software to reduce risks, save costs, protect revenues, and manage emissions aligned with business objectives and sustainability goals.
Responsibilities
- Develop and maintain Clarasight's data architecture strategy
- Own Clarasight's data warehouse – the backend of our B2B SaaS platform
- Work directly with customers on data ingest and reporting requirements
- Manage the process of data ingestion into Clarasight's data warehouse
- Establish an enterprise ETL technical architecture for ingestion of all third-party and customer data feeds
- Perform data cataloging analytics on source data, including data provided by clients, and determine business rules for data ingestion
- Write requirements for, and then implement, ETL and data pipelines in Python, including exception-handling protocols and processes
- Continuously improve ETL processes
- Build core datasets
- Implement manual and automated data quality management
- Define and uphold SLAs
- Become a subject matter expert in travel and expense data
- Collaborate with Software Development, Data Scientists, Product Management, and Customer Success
- Democratize compliant and secure access to data through self-serve channels
- Establish data governance policies and procedures
- Monitor and troubleshoot data infrastructure issues
- Manage the planning, implementation, and testing of business continuity and disaster recovery for Clarasight's data warehouse
- Provide technical leadership and share knowledge, for instance through code reviews, pairing, ADRs, and presentations
Required Skills
- 5+ years of experience with data architecture, data modeling, and data warehousing concepts, methodologies, and best practices
- Experience working directly with customers
- Experience scaling and re-architecting data platforms and infrastructure through orders of magnitude of growth in data volume
- Experience on data infrastructure-focused engineering teams building a data lake or data warehouse from scratch
- Experience with state-of-the-art ETL tools, techniques, and processes
- Experience with AWS and cloud-based data services
- RDBMS, SQL, query optimization, and performance troubleshooting
- 3+ years of Python programming experience
- Familiarity with dbt or SQLMesh
- Familiarity with several data warehousing platforms, such as Google BigQuery, Snowflake, MemSQL (SingleStore), Apache Ignite, Redshift, and Databricks
- 0-to-1 startup experience
- Exceptional common sense and the ability to manage ambiguity
Benefits
- Generous options in our early-stage startup for key members of the team
- Working among a diverse, expert team at the forefront of climate tech, behavioral change, and people technology
- Opportunity to make a foundational contribution to our mission: close the gap between climate-related intention and action
- Remote and hybrid work environments
- Robust benefits, including full medical, dental, and vision insurance, 401k (or similar retirement programs), and a sustainability benefit ($500 annually to spend on increasing personal sustainability)