What does a successful Security Analytics Data Engineer do at Fiserv?
You will work with security big data environments, aiding in their design and configuration, and analyze and present findings from them. You will work independently with internal clients and management to expand and optimize our data pipeline architecture, optimize data flow and collection for cross-functional teams, manage new and existing requirements, and fully document processes and solutions.
What you will do:
- Create and maintain optimal data pipeline architecture, and assemble large, complex data sets that meet functional and non-functional business requirements
- Identify, design, and implement internal process improvements; automate manual processes, optimize data delivery, and re-design infrastructure for greater scalability
- Build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources using technologies such as Talend and BigQuery
- Work with stakeholders, including Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
- Lead the end-to-end implementation of Axonius or a similar Cybersecurity Asset Management tool and SaaS Management solutions, including planning, setup, configuration, and support
- Create data tools for the analytics and data science teams, and work with them to build greater functionality into our data systems
What you will need to have:
- 6+ years of IT experience
- 5+ years of experience working with relational and non-relational databases and authoring queries (SQL)
- 3+ years of experience with Axonius, Google Cloud, BigQuery, and Cloud Storage
- 3+ years of experience with data pipeline and workflow management tools such as Talend or Informatica, and experience working with data warehousing or data lake platforms
- Experience building and optimizing ‘big data’ pipelines, architectures, and data sets; performing root cause analysis on data and processes to answer specific business questions and identify opportunities for improvement; and building processes that support data transformation, data structures, metadata, and dependency/workload management
- Experience manipulating, processing, and extracting value from large, disconnected datasets, as well as with message queuing, stream processing, and highly scalable ‘big data’ data stores
- Fundamental knowledge of firewalls, networking, operating systems, databases, and storage
- Bachelor’s degree in Data Science, Computer Science, Engineering, or Mathematics, or an equivalent combination of education, work, or military experience
What would be great to have:
- 3+ years of Big Data platform experience
- 1+ year of experience with:
  - Programming and scripting languages such as Python, Java, C++, or Scala
  - Relational SQL and NoSQL databases, including PostgreSQL and AlloyDB
  - Repository management systems such as Git, and SIEM systems
- Experience with reporting tools such as Power BI or Cognos
- Experience with Agile methodologies and Jira or other task/project tracking tools
R-10358761