Engineer data pipelines (Spark, Kafka Streams, Flink, Scala) and build data-driven products.
Design and implement solutions using open-source data engineering tools.
DevOps work to keep the data platform secure, reliable, and fast.
Design and develop solutions using data science techniques ranging from statistics to machine learning and deep learning.
Fork open-source projects and enhance them to suit our needs.
o 2+ years' experience with any JVM functional programming language (Scala/Clojure).
o Very strong computer science and distributed computing fundamentals.
o Experience building real-time systems with Flink/Kafka Streams is a must.
o Experience with high-performance Spark batch applications is a must.
o Understanding of Lambda architecture (connecting real-time with batch) is a must.
Nice to have
Hands-on experience with any OLAP store (Redshift/Druid)
Experience with functional programming
Hands-on experience with big-data technologies (Spark, HDFS, S3, DynamoDB, HBase/Cassandra, ZooKeeper, Kafka, Kafka Connect, Kafka Streams, SQS)
Experience with at least one native language (C/C++/Go/Rust)
Work with a top-notch data team and cutting-edge technologies.
Open leave policy