Data Engineer

Job description

Are you up for building the data infrastructure of one of the fastest growing software scale-ups in Europe? Do you have the engineering chops and data savviness to set up the data platform that propels us to world domination? Join Zivver as a data engineer!

Zivver is a technology scale-up that provides secure communication software to reduce the risk of data leaks for businesses. We are one of the fastest growing businesses in Europe, having taken less than five years to go from a mere idea to hundreds of enterprise customers, over $30 million raised and 130+ employees. We are shooting for world domination, and you can be a part of this!

As a data engineer, you sit at the intersection of engineering and data. You enable both teams to deliver great analytical insights that drive our decisions and dazzling data products that delight our customers. To do this, you build data pipelines that fuel our data warehouse and our applications. You also work closely with the data team to build out our data models based on sound engineering practices, and cooperate with engineering to ensure that high-quality data is collected and used as the foundation for data-intensive features. Your work forms the foundation upon which a data cathedral is built. An awesome role in an awesome company.

Hot takes

  • Data levels all arguments.
  • You’d heard of Bobby Tables before it was cool.
  • Data quality keeps you awake at night. At least, it would if we had fewer tests.

What will you do?

  • Build and maintain scalable data pipelines fueling our data warehouse and product.
  • Work with the data team to build out our Snowflake data warehouse, built using dbt.
  • Work with the engineering team to build out our Snowplow-based data collection.
  • Create interfaces to help product teams build data-intensive features.

What do we offer?

  • Ownership of data infrastructure: you get the space to identify, prioritize and execute, supported by your data team and engineering colleagues;
  • An exciting, fast-growing environment: we are going for world domination, and you can be a part of it;
  • A top-of-market salary and pension plan: we pay well for talent;
  • A remote-friendly environment: we had people working from sunny places around the world before it was cool;
  • When COVID is over: free lunch, a great office, and awesome people from all over the world to enjoy them with;
  • Flexible working hours;
  • A personal development budget of at least €1.000 per year.

A day at Zivver

Your day obviously starts with coffee (or tea/water/milk/kerosine: we’re beverage-inclusive). As you gradually enter the world of the living, you check if any alerts came in during the night. There are no urgent problems, but you do notice that the number of Snowplow events that fail validation has been on the rise. You have a moment to look into it now and quickly discover that this is due to a bug in a recent version of our Gmail Extension. You flag the issue to the team that owns this feature and then go to the data team stand-up.
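Spotting that kind of creeping validation failure can be as simple as tracking the daily failure ratio and flagging a sustained climb. A minimal sketch of that idea (the counts, function names, and three-day window are illustrative assumptions, not our actual alerting setup):

```python
# Hypothetical alerting sketch: flag a sustained rise in the share of
# Snowplow events that fail validation. Counts below are made up.

def failure_rates(daily_counts):
    """Return the failed/total ratio per day from (failed, total) tuples."""
    return [failed / total for failed, total in daily_counts]

def is_rising(rates, days=3):
    """True if the failure rate increased on each of the last `days` days."""
    recent = rates[-(days + 1):]
    return len(recent) == days + 1 and all(a < b for a, b in zip(recent, recent[1:]))

counts = [(12, 10_000), (15, 10_200), (40, 9_900), (90, 10_100), (210, 10_050)]
rates = failure_rates(counts)
print(is_rising(rates, days=3))  # → True: four days of steady climb
```

A ratio rather than a raw count keeps the signal meaningful when overall traffic fluctuates.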

At the stand-up you talk the team through your progress and your plan for the day. You spend an extra 15 minutes after the stand-up giving a data analyst advice on how to better structure some SQL data modeling they are working on, and refer them to a neat article on keeping data warehouse models modular. You are happy that they asked for your advice early on and look forward to seeing them use your ideas to build a better model.

Next up, it is focus time. You are working on a project to convert a proof of concept for data leak detection to a production-grade system. A particularly interesting nut to crack revolves around processing sensitive information that shouldn’t be written to disk unencrypted, while minimizing risk of data loss under load. After forming your initial ideas, you hop onto a call with one of your peers in the back-end engineering team to check if the ideas hold up under scrutiny. You subsequently implement a minimal version of your idea and note some experimenting that you’d like to do on it tomorrow.
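One common shape for that trade-off is a bounded in-memory buffer with backpressure: sensitive records stay in RAM only (never spooled to disk), and when the buffer fills, the producer blocks or errors loudly instead of silently dropping data. A minimal sketch of the pattern, with illustrative names and sizes that are not our actual design:

```python
# Hypothetical sketch: process sensitive payloads in memory only, with a
# bounded queue so that under load the producer gets backpressure (blocks,
# then raises) rather than losing records or spilling them to disk.
import queue
import threading

buf = queue.Queue(maxsize=100)   # bounded: caps memory use, forces backpressure
processed = []

def worker():
    while True:
        record = buf.get()
        if record is None:        # sentinel value: shut down cleanly
            break
        processed.append(record.upper())  # stand-in for real detection logic
        buf.task_done()

t = threading.Thread(target=worker)
t.start()
for payload in ["alice@example.com", "secret-token"]:
    buf.put(payload, timeout=5)   # blocks while full; raises queue.Full on timeout
buf.put(None)                     # signal the worker to stop
t.join()
print(processed)                  # → ['ALICE@EXAMPLE.COM', 'SECRET-TOKEN']
```

The `timeout` on `put` is the key design choice: a loud failure under sustained overload is recoverable upstream, whereas a silent drop of sensitive data is not.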

As the end of the day nears, you do a final review of the model that you helped the analyst with in the morning. Truth be told, the analyst already did a thorough review and you are happy with the quality of the work. Still, you identify an opportunity to improve performance by using a new feature of Snowflake and suggest using it. With this last change implemented, you almost start preparing for a release out of habit. Just as you are checking the diffs, you remember that you recently enabled the analyst team to do their own releases. You wish them the best of luck, close your laptop and disappear into the sunset.


Skills / Experience

  • Master's degree in computer science or something even more complicated.
  • Strong SQL and NoSQL skills.
  • Solid experience with Python.
  • Experience with DevOps practices (CI/CD, deployments, AWS infrastructure).
  • Experience with big data frameworks such as Spark.
  • Experience with building data products.
  • And preferably:
    • Experience with Snowplow, Snowflake or dbt.
    • Experience with the JVM ecosystem in general and Scala in particular.
    • An understanding of data warehouse architecture.
    • A fair understanding of statistics and machine learning.

Personality / attitude

  • You push yourself and others to the highest possible quality bar.
  • You are looking to grow every day by being curious and coachable.
  • You are pragmatic: nothing needs to be perfect from day one (or day zero?).
  • You are a strong collaborator: straightforward and direct, but respectful with a big smile.