Experience with Google Cloud Platform (GCP) and Google Analytics 360 (GA360) is required.
60-70% of the job will be measurement, analysis, and insights, particularly focused on website/content personalization.
30-40% of the role will be data engineering projects/activities, such as writing stored procedures and building data extract/transform/load (ETL) processes.
Hands-on experience with Hadoop or similar ecosystems (HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build data pipelines).
Working knowledge of real-time data pipelines is an added advantage.
Experience in a programming language such as Java, Scala, or Python.
Knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL Data Warehouse, GCP BigQuery, etc.
Familiarity with content taxonomy, content schemas such as News, the semantic web, and similar concepts from a data/application perspective.