Azure Specialist (Data Engineer Enterprise Data Lake)
Leverage your passion for data to support Rabobank’s data-driven ambitions by implementing an Enterprise Data Lake platform in the cloud, supporting all business initiatives that need data (structured and unstructured, high-volume and/or streaming).
As we are still at the start of our data lake journey, you can help shape the Enterprise Data Lake by bringing in your broad experience in the data domain. You can help further refine the building blocks of the data lake and advise the organisation in becoming a Data-as-a-Service organisation.
One of the key characteristics of this Enterprise Data Lake platform is the concept of self-service: both producers and consumers of data should be able to use the platform in a self-service way.
As a Data Engineer Enterprise Data Lake you make a difference by:
- bringing in experience in data engineering and data pipelines in the cloud for both batch and streaming data processing
- translating requirements into technical solutions using the data lake platform; you understand the concepts and solutions that form the Enterprise Data Lake
As a Data Engineer Enterprise Data Lake you will be part of the Data & Content Services tribe, which empowers Rabobank to maximize the value of data and content. Every day the tribe’s 27 squads work on innovative solutions to help Rabobank reach this goal. The members of the tribe continuously improve themselves and their squads. They have an open culture and an urge to get things done. Collaboration is at the heart of everything we do. All squads you interact with, both within the tribe and outside it, work in scrum.
You are responsible for creating the building blocks that make this self-service concept possible. Furthermore, you advise other teams on how to make efficient use of the platform, for example by creating a Proof of Concept for them. You are the point of contact for teams using the EDL platform and advise on topics such as data processing, data quality and data governance.
You are experienced in building and deploying (big) data pipelines in the cloud, preferably on Azure, where you have become familiar with cloud-native components and the separation of storage and compute. Dataframes are not a new concept to you, and you have preferably written Spark code in either PySpark or Scala for your ETL needs. All your deployments were automated, and you are able to quickly prototype solutions using one of the more popular scripting languages.
You have experience with four or more of the concepts/components below:
- the traditional Apache Hadoop ecosystem and/or HDInsight/AWS EMR
- Azure Data Factory / Apache Airflow / ...
- Databricks or Jupyter notebooks using either Python or Scala
- Apache Kafka/Event Hub/Amazon Kinesis
- Azure functions / AWS Lambda
- Azure DevOps / AWS CloudFormation
Do you want to become the ideal version of yourself? We would love to help you achieve this by focusing firmly on your growth, personal development, and investing in an environment where you keep learning every day. We provide the space to innovate and initiate. In this way, we offer you numerous opportunities to grow and help you exceed your expectations, to do the right things exceptionally well, and to therefore grow as a professional. In addition, you can also expect:
A gross monthly salary between € 3.677,26 and € 5.252,05;
A thirteenth month and holiday pay;
An Employee Benefit Budget (9% or 10% of your monthly salary). You decide how to spend this budget. This may include purchasing extra leave days, making extra pension contributions or even receiving a monthly cash payout;
A personal budget that you can spend on activities related to your personal development and career;
100 % reimbursement of commuting costs if you travel by public transport! Do you still prefer to travel by car or motorbike? Then choose a home/work travel allowance;
A pension scheme, to which your contribution is only 5%.
The onboarding process includes screening.
Are you ready to join Rabobank as a Data Engineer and make a difference for yourself, for our customers and for our society? Then do not hesitate and send us your CV and motivation letter so we can invite you for a personal meeting.
Good to know:
If you have any questions about specific details of this position, please contact Lois Hageman (Recruiter) via email@example.com.
Everyone is different, and it is exactly those differences that help us become an even better bank.
We are looking forward to meeting you for this vacancy in Utrecht!