About the Role:
We are seeking a highly skilled Analytics Engineer to join our dynamic team. As a key member of our Customer Care data analytics department, you will play a pivotal role in transforming raw data into actionable insights that drive business decisions. You will leverage your expertise in data engineering, data warehousing, and data modeling to develop and maintain robust data pipelines, ensuring the seamless flow of data across our organization.
Responsibilities:
- Data Modeling Architecture: Design, develop, and maintain a robust data model using dbt Core / dbt Cloud, adhering to a medallion architecture (bronze, silver, gold layers). This includes creating and managing data pipelines to transform raw data into foundational and modular analytical datasets.
- Data Availability Optimization: Identify and implement strategies to improve query performance and reduce data load times. Determine optimal refresh schedules (full refreshes, incremental loads, or near real-time) for different data layers to balance data freshness and system performance.
- Data Exposure: Ensure the availability, quality, and security of silver and gold layer datasets. Implement data governance policies and procedures to maintain data integrity and consistency. Integrate these datasets into BI tools such as Tableau and Streamlit, as well as text-to-SQL interfaces, to empower data-driven decision-making.
- Data Quality and Performance Optimization: Implement robust data quality checks and validation procedures to ensure data integrity and accuracy. Proactively monitor data pipelines and data warehouses to identify and address performance bottlenecks, such as slow query response times and data latency. Establish efficient alerting mechanisms to promptly detect data quality issues and system failures, minimizing mean time to resolution.
- Analytical Support: Collaborate with data analysts and business stakeholders to understand their requirements and provide data-driven insights and appropriate data models in production and lower environments.
- Technology Evaluation: Stay up to date with the latest data analytics technologies and tools, and recommend their adoption as needed.
Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, Statistics, or a related field.
- 7+ years of experience in data engineering, data warehousing, or a similar role.
- Strong proficiency in SQL and data modeling techniques.
- Strong background and hands-on experience with dbt Core and/or dbt Cloud.
- Experience with data warehousing and ETL tools (e.g., Snowflake (preferred), Redshift, BigQuery, or Vertica).
- Familiarity with cloud platforms (e.g., AWS (preferred), GCP, or Azure) and cloud-based data services.
- Knowledge of data visualization tools (e.g., Tableau, Streamlit, ThoughtSpot, Power BI).
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
Preferred Qualifications:
- Experience with data pipeline orchestration tools (e.g., Apache Airflow, AWS-native orchestration services) and open table formats (e.g., Apache Iceberg).
- Knowledge of Python or other scripting languages.
- Experience with data science and machine learning frameworks (e.g., TensorFlow, PyTorch).
- Certification in cloud platforms or data engineering.
Chewy is committed to equal opportunity. We value and embrace diversity and inclusion of all Team Members. If you have a disability under the Americans with Disabilities Act or similar law, and you need an accommodation during the application process or to perform these job requirements, or if you need a religious accommodation, please contact CAAR@chewy.com.
If you have a question regarding your application, please contact HR@chewy.com.
To access Chewy's Customer Privacy Policy, please click here. To access Chewy's California CPRA Job Applicant Privacy Policy, please click here.