Senior Big Data Engineer
As a software developer, you are responsible for delivering data to reporting systems, operational systems and the Rabobank app for all departments within Rabobank. Data is key in our communication with clients, in our marketing communication and for management information. You develop complete and scalable solutions in Spark on a Cloudera platform, using Azure DevOps pipelines.
As a developer you can make a difference
The Data and Content teams within the organization are responsible for helping teams with data for all IT systems and applications. The goal is to deliver data in a secure, compliant and automated way, as fast as possible, following Security and DevOps principles and working together with all data teams.
These teams are responsible for data storage, data processing, data flows and data provisioning. We work in DevOps teams on data modelling, data logistics, data quality, the data lake and data warehousing, adjusting and maintaining the data flows and data services we provide for systems and departments within Rabobank.
For one of our teams working on the data lake, we are looking for a Senior Big Data Engineer.
Growing a better world together
You are already aware that Rabobank is a financial services provider for millions of customers in 40 countries. But did you know that we aim to contribute to real change with our “Growing a better world together” mission? We do so in countless ways, such as:
- A third of all the food we purchase is thrown away. Together with Nature & Environment, we’re working to increase awareness among consumers of how to reduce food waste.
- As part of a project with Humanitas, we are helping people who are experiencing financial difficulties to get their household finances in order.
- Together with Vluchtelingenwerk Nederland (the Dutch Council for Refugees), we are helping 1,500 refugees find a suitable job.
You will be responsible for
- Development and maintenance of data pipelines, mainly in Spark.
- Setting up and maintaining CI/CD pipelines in Azure DevOps.
- Setting up monitoring.
- Development of scalable and performant solutions based on open-source technologies (Spark, Sqoop, Hive, HBase, Oozie).
- Streaming flow development with HDF (Hortonworks) NiFi and Spark Streaming.
- Advising on and involvement in the realization of the data lake.
- Working in a multi-disciplinary scrum team and contributing to the T-shaping in the team.
- Providing input and contributing to experiments.
- Keeping informed on the developments in the market.
The competences/expertise we would like you to have:
- Willingness to learn and openness to new techniques.
- Willingness and capability to teach and share knowledge.
- Good communication skills.
- Strong collaboration skills.
- Showing entrepreneurship.
- Knowledge of financial products and business processes.
- Being able to work in a structured manner.
- Able to advise and communicate on new solutions.
The knowledge we would like you to have:
- Knowledge of (big) data environments and tooling (Hadoop, Cloudera, Hortonworks, Microsoft Azure, NiFi).
- A relevant HBO or WO degree (Computer Science, Information Science or Business Administration).
- 4-6 years of experience in an ICT data environment.
- Extensive experience in software development in an agile environment
- Cloudera 5.12.1
- Streaming (Kafka)
- Spark / Scala and/or Java
- Cloud platform (preferably Azure)
- CI/CD pipelines (preferably Azure DevOps)
- SQL + NoSQL (at least 2 years of experience)
- Scripting languages (Python/Bash)
What we offer you:
A challenging environment for talent in data. The environment is dynamic and we work with new technologies. We encourage personal growth and there is room to learn from others. You get a lot of personal responsibility and the freedom to explore ideas. We value hard work, but also having fun together.
Do you want to become the ideal version of yourself? We would love to help you achieve this by focusing firmly on your growth, personal development, and investing in an environment where you keep learning every day. We provide the space to innovate and initiate. In this way, we offer you numerous opportunities to grow and help you exceed your expectations, to do the right things exceptionally well, and to therefore grow as a professional. In addition, you can also expect:
- A gross monthly salary between € 3.677,26 and € 5.252,05;
- A thirteenth month and holiday pay;
- An Employee Benefit Budget (9% or 10% of your monthly salary). You decide how to spend this budget, for example on purchasing extra leave days, making extra pension contributions or even receiving a monthly cash payout;
- A personal budget that you can spend on activities related to your personal development and career;
- 100% reimbursement of commuting costs if you travel by public transport. Do you still prefer to travel by car or motorbike? Then choose a home/work travel allowance;
- A pension scheme, to which your contribution is only 5%.
Are you ready to join Rabobank as a Senior Big Data Engineer and make a difference for yourself, for our customers and for our society? Then do not hesitate: send us your CV and motivation letter so we can invite you for a personal meeting.
Good to know:
- If you have any questions about specific details of this position, please contact Lois Hageman (Recruiter) via email@example.com.
- The onboarding process includes screening.
- Everyone is different, and it is exactly those differences that help us become an even better bank.
- We look forward to meeting you for this vacancy in Utrecht!