What does a successful Event Streaming DevOps Engineer (Kafka/Flink) do at Fiserv?
You will support enterprise solutions, drawing on your extensive experience with event-driven platforms and automation, to transform Fiserv’s Enterprise Data Streaming Services. You will contribute to the engineering and automation of Fiserv’s global Kafka Event Streaming Services Platform and assist with new application builds, scaled deployments, and operations for all of our customers. The ideal candidate will design, implement, and maintain infrastructure that supports real-time data streaming with Apache Flink, while ensuring the scalability, security, and high availability of mission-critical systems.
What you will do:
- Design, deploy, manage, and troubleshoot Kafka and Flink clusters in cloud or on-prem environments (see the provisioning sketch after this list)
- Build and maintain CI/CD pipelines for seamless integration and automation of Event Streaming services
- Automate infrastructure provisioning using tools like Terraform, Ansible, or CloudFormation
- Monitor and optimize Kafka and Flink infrastructure performance
- Collaborate with developers and architects to build and support Event Streaming solutions, leveraging tools such as Kafka Connect, Schema Registry, Flink, Cruise Control, Spark, Snowflake, and Streams Replication Manager or MirrorMaker 2.0
- Ensure observability through monitoring tools such as Grafana, Moogsoft, and Dynatrace to visualize system health and performance metrics
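To make the automation side concrete, here is a minimal Python sketch of the kind of day-to-day cluster provisioning this role involves, using the confluent-kafka AdminClient to create a replicated topic. The broker address, topic name, and partition settings are placeholders, not values from this posting.

```python
# Minimal provisioning sketch, assuming the confluent-kafka Python client.
# Broker address and topic settings below are illustrative placeholders.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Provision a replicated topic as one automated step in a larger pipeline.
topic = NewTopic("payments.events", num_partitions=6, replication_factor=3)
futures = admin.create_topics([topic])

for name, future in futures.items():
    try:
        future.result()  # Block until the broker confirms creation.
        print(f"Created topic {name}")
    except Exception as exc:
        print(f"Failed to create topic {name}: {exc}")
```

In practice a step like this would run from a CI/CD pipeline or a Terraform/Ansible-driven workflow rather than being executed ad hoc.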
What you will need to have:
- 3+ years of experience in a DevOps, Site Reliability Engineering (SRE), or related role in an Agile environment
- 3+ years of experience working with messaging systems such as Apache Kafka and stream-processing frameworks like Apache Flink
- 3+ years of experience designing, deploying, and supporting production cloud environments such as Amazon Web Services (AWS), Azure, and private clouds
- Hands-on experience with configuration management and infrastructure-as-code (IaC) tools such as Ansible, Terraform, or Chef, plus strong scripting skills in Python, Bash, or similar languages (see the scripting sketch after this list)
- Knowledge of monitoring and logging frameworks such as Splunk, Dynatrace, and Grafana
- Experience with GitLab or equivalent CI/CD tools like Jenkins/Rundeck
- Bachelor’s degree in a relevant field, or an equivalent combination of work, education, and/or military experience
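As a flavor of the scripting and monitoring skills listed above, the following Python sketch pulls cluster metadata and flags under-replicated partitions, a common Kafka health signal. The bootstrap address is a placeholder, and a real check would feed a tool such as Grafana, Splunk, or Dynatrace rather than printing to stdout.

```python
# Minimal health-check sketch, assuming the confluent-kafka Python client.
# The bootstrap address below is a placeholder.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Fetch cluster metadata: brokers, topics, and per-partition replica state.
metadata = admin.list_topics(timeout=10)
print(f"Brokers online: {len(metadata.brokers)}")

for topic_name, topic in metadata.topics.items():
    for pid, partition in topic.partitions.items():
        # Flag partitions whose in-sync replica set is smaller than the
        # full replica set (i.e., under-replicated partitions).
        if len(partition.isrs) < len(partition.replicas):
            print(f"Under-replicated: {topic_name}[{pid}] "
                  f"ISR={partition.isrs} replicas={partition.replicas}")
```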
What would be great to have:
- Experience managing Kafka streams or Flink jobs in cloud or on-prem production environments (see the Flink sketch after this list)
- OS management experience for Linux file systems and HDFS paths
- Experience integrating databases such as Oracle, MS SQL Server, and Postgres
- Solid understanding of networking, security, and system administration (both Linux and Windows)
- Experience with containerization technologies such as Docker and orchestration platforms like Kubernetes
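To ground the Flink side of the role, here is a minimal PyFlink Table API sketch that reads a Kafka topic and runs a continuous aggregation. It assumes PyFlink is installed and the Flink Kafka SQL connector jar is available on the classpath; the topic, broker address, and field names are placeholders.

```python
# Minimal PyFlink streaming sketch over a Kafka topic. Assumes the
# flink-sql-connector-kafka jar is available; all names are placeholders.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Declare a Kafka-backed source table.
t_env.execute_sql("""
    CREATE TABLE payments (
        account_id STRING,
        amount     DOUBLE,
        ts         TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'payments.events',
        'properties.bootstrap.servers' = 'localhost:9092',
        'properties.group.id' = 'flink-demo',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# A simple continuous aggregation; in production this job would be packaged
# and submitted to a Flink cluster (standalone, YARN, or Kubernetes).
result = t_env.execute_sql(
    "SELECT account_id, SUM(amount) AS total FROM payments GROUP BY account_id"
)
result.print()
```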
#LI-CD1
#ApacheFlink
#StreamProcessing
#RealTimeData
#FlinkCommunity
#DataStreaming
#DistributedSystems
R-10342576