What does a successful Senior Data Engineer do at Fiserv?
You will take ownership of the design and development of large-scale data engineering, integration, and warehousing projects, building custom integrations between cloud-based systems using APIs. You will write complex, efficient queries that transform raw data sources into easily accessible models, using data integration tools and coding in languages such as Java, Python, and SQL. You will also architect, build, and launch new data models that provide intuitive analytics to the team, build data expertise, and own data quality for the pipelines you create.
What you will do:
- Collaborate with cross-functional teams to design scalable data architecture and create robust data processing pipelines
- Design and implement data models that align with business requirements, enabling seamless data access and analytics
- Identify opportunities to enhance data processing efficiency and implement performance optimizations for our data pipelines
- Implement data quality checks and validation processes to ensure the accuracy and integrity of our data
- Collaborate with the security team to enforce data privacy standards, ensuring compliance with relevant regulations
- Work closely with data scientists, analysts, and software engineers to understand data needs, provide technical support, and troubleshoot data-related issues
- Maintain comprehensive documentation of data engineering processes, data flows, and system configurations
What you will need to have:
- 10+ years of overall IT experience
- 6+ years' experience building large-scale big data applications
- 3+ years of technical leadership experience, demonstrating expertise in developing data solutions, building frameworks, and designing solutions for processing large volumes of data using data processing tools and Big Data platforms
- 3+ years' experience building Data Lakes, EDWs, and data applications on the Azure cloud
- 2+ years’ experience with major programming/scripting languages such as Java and/or Python; with cluster and parallel architectures, high-scale databases, and SQL; and exposure to NoSQL databases such as Cassandra, HBase, DynamoDB, and Elasticsearch
- 2+ years’ experience with real-time data processing and streaming technologies such as Kafka, Apache Beam, and Spark; working with PCI data; and collaborating with data scientists on data governance, security, and privacy principles
- Bachelor’s degree in Data Science, Computer Science, Engineering, Mathematics, or an equivalent combination of education, work, and/or military experience
What would be great to have:
- 1+ years’ experience with machine learning frameworks and data science workflows
- 1+ years’ experience in containerization technologies like Docker and orchestration tools like Kubernetes
#LI-RM1
R-10355722