A.k.a. Hadoop magician

Can you tame the tiny elephant that hides a lot of data?

Yes, I’m talking about the Hadoop ecosystem, so if you know it like the back of your hand, keep reading and challenge yourself with an exciting job!

Your tasks will be to:

  • Scope and design Big Data solutions independently, based on high-level architecture designs
  • Select and integrate Big Data tools to provide an analytics back end for the Risk Analytics project
  • Design and implement ETL processes that extract, transform, and load massive amounts of data into the Hadoop clusters (a small sketch follows this list)
  • Provide analytics solutions on the Hadoop clusters by implementing complex, high-performance queries that run in parallel
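
For a taste of the day-to-day work, here is a minimal PySpark ETL sketch. It assumes PySpark (Python and Spark both appear in the technology lists below); the paths, column names, and application name are hypothetical, not the project’s actual setup.

    # Minimal ETL sketch: extract raw CSV data from HDFS, transform it,
    # and load the result back as Parquet. Paths and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("risk-analytics-etl").getOrCreate()

    # Extract: read raw risk events from HDFS.
    raw = spark.read.option("header", True).csv("hdfs:///data/raw/risk_events")

    # Transform: drop rows with missing amounts and aggregate exposure
    # per counterparty.
    clean = (
        raw.filter(F.col("amount").isNotNull())
           .withColumn("amount", F.col("amount").cast("double"))
           .groupBy("counterparty_id")
           .agg(F.sum("amount").alias("total_exposure"))
    )

    # Load: write the result back to HDFS as Parquet for the analytics back end.
    clean.write.mode("overwrite").parquet("hdfs:///data/curated/exposure")

    spark.stop()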

Must-have skills:

  • Solid understanding of HDFS, the Hadoop Distributed File System
  • Previous experience with data engineering projects and solutions in the Hadoop environment
  • Knowledge of Big Data-related ETL tools and messaging systems
  • Experience implementing components and queries using the various Spark modules (see the sketch after this list)
  • Knowledge of Big Data querying tools
  • Ability to implement functional modules in Python and/or R using Hadoop ecosystem frameworks
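
As a small illustration of using a Spark module together with a Big Data querying tool, here is a Spark SQL sketch over a Hive table. The database, table, and column names are hypothetical; on the project, a similar query could just as well run through Hive or Impala.

    # Minimal sketch of a distributed analytics query via Spark SQL.
    # Table and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("risk-analytics-query")
        .enableHiveSupport()  # read tables registered in the Hive metastore
        .getOrCreate()
    )

    # Spark parallelizes this aggregation across the cluster's executors.
    top_regions = spark.sql("""
        SELECT region, percentile_approx(exposure, 0.99) AS p99_exposure
        FROM curated.daily_exposure
        GROUP BY region
        ORDER BY p99_exposure DESC
        LIMIT 10
    """)
    top_regions.show()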

Must-have technologies we need:

  • Hadoop Ecosystem
  • Cloudera Hadoop
  • Big Data-related ETL tools

Nice to have technologies:

  • Flume
  • Python or R programming
  • HBase
  • HDFS
  • Hive
  • Kafka
  • Impala
  • MapReduce
  • MongoDB
  • NoSQL
  • Pig
  • Spark

What can we offer?

  • Internal training sessions to get to know individual technology solutions and applications
  • Participation in international projects in collaboration with experts from abroad
  • Competitive salary, cafeteria plan, and other benefits
  • Friendly and modern work environment

Place of work: Budapest

If you find the job exciting enough and you know the proverb “The early bird catches the worm,” don’t hesitate to apply!

We can’t wait to get to know you!