The Role
GM Energy is a subsidiary of General Motors that provides home energy management products and services related to GM electric vehicles (EVs). The GM Energy Data Platform, known as the GM Energy Cloud (GMEC), serves as the central backend service. This domain encompasses IoT integrations with GM Energy products installed at customer locations and with utility companies. Additionally, GMEC integrates with EV telemetry data to support GM Energy's smart charging and home energy management initiatives.
The GM Energy Data Engineering team is looking for a Staff Software Engineer: a highly skilled, hands-on Cloud Data Warehouse Architect with expertise in IoT data integration, Snowflake, Databricks, and Apache Spark. The ideal candidate will design, implement, and optimize scalable cloud-based data warehouse solutions, including an Operational Data Store (ODS) that serves as a speed layer for advanced analytics and business intelligence initiatives.
What You’ll Do
- Architect and develop cloud data warehouse solutions leveraging Databricks and Kubernetes.
- Design and implement robust data pipelines to ingest, process, and store IoT data from diverse sources.
- Integrate and manage large-scale datasets using Apache Iceberg for efficient, reliable, and scalable data lake operations.
- Design and build an Operational Data Store (ODS) to function as a speed layer, enabling rapid access for near real-time use cases.
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and deliver scalable solutions.
- Optimize data models, storage, and compute resources for performance and cost efficiency.
- Ensure data quality, security, and compliance across all cloud data platforms.
- Develop and maintain documentation for architecture, processes, and best practices.
- Set up monitoring tools and dashboards to track pipeline health, diagnose issues, and optimize performance.
- Mentor and support junior engineers through guidance, coaching, and learning opportunities.
- Stay current with industry trends and best practices in data management, ODS technologies, and API development.
Your Skills & Abilities (Required Qualifications)
- Bachelor's degree in Software Engineering, Computer Science, Information Technology, or a related field, or equivalent experience.
- 10+ years of experience designing, developing, and supporting enterprise applications.
- 5+ years of experience as a development lead or solution architect.
- Broad software project delivery experience leading technical efforts to develop applications using a variety of tools, languages, frameworks, and technologies.
- Proven experience architecting and implementing cloud data warehouses using Databricks or a comparable data platform.
- Hands-on experience with Apache Spark and/or Apache Flink for data lake pipelines.
- Experience designing and building Operational Data Stores (ODS) as speed layers for analytic environments.
- Experience with IoT data ingestion, processing, and analytics.
- Strong proficiency in SQL, Python, Java, and ETL tools.
- Proven cloud experience and strong familiarity with at least one cloud platform (Microsoft Azure preferred; AWS, GCP).
- Experience with data visualization tools to effectively communicate insights is preferred.
- Experience with proactive issue detection using anomaly detection techniques, with a strong focus on monitoring pipeline performance, ensuring system reliability, and identifying bottlenecks through observability metrics.
- Solid understanding of and experience with CI/CD practices.
What Will Give You a Competitive Edge (Preferred Qualifications)
- Experience with the Databricks platform and Databricks certifications.
- Experience with real-time data streaming technologies (e.g., Kafka, Spark Streaming, Flink).
- Knowledge of data governance and security best practices.
- OCPP (Open Charge Point Protocol) expertise.