Technology

Data Engineer

Pasadena, California, United States

The Opportunity:

OpenX is looking for a talented and highly motivated Data Engineer to help us innovate and improve our data products.

We are looking for a Data Engineer to join the rapidly growing OpenX data team, which is responsible for building the best big data ad tech platform on GCP (Google Cloud). We serve billions of ad requests and process hundreds of terabytes of data a day. We are one of the largest ad tech platforms in the world, serving more than 800 publishers worldwide, including 65% of the comScore 100, with 200 million unique visitors and 100 billion ads auctioned per month through our exchange.

If you get excited by any of these nerdy stats and want to be part of a transformative team, read on! Join our collaborative team of brainstorming developers and enjoy a role where you can own your initiatives, dig deep into the latest technologies, understand how systems inter-operate, and think creatively to tackle hard engineering problems.

Some of the challenges you'll help us tackle include:

  • Design, build, and launch extremely efficient and reliable non-ETL data processing pipelines to process large volumes of data using Java and Scala.
  • Design and develop new data pipeline systems and tools that enable people to consume and understand data faster.
  • Provide a consultative, solutions-oriented approach to business partners such as analysts, management, end users, and developers to clarify objectives, determine scope, drive consensus, identify problems, and recommend solutions.
  • Support end users on ad hoc data usage and be a subject matter expert on the functional side of the business.
  • Create strong internal relationships, train others and evangelize your findings and implications.
  • Use your expert coding skills across a number of languages, including Java, Scala, and other modern programming languages.

Job requirements:

  • 5+ years of hands-on development experience with large scale Hadoop environments, performance tuning, and monitoring
  • Solid programming experience in modern programming languages, specifically Java and Scala, along with Apache Spark
  • Fluently speak algorithms and data structures, have a strong command of design patterns and software architecture, and be able to whiteboard elegant pseudocode at will
  • Experience architecting and building high-volume data platforms with either Java MapReduce or Spark, grounded in rock-solid computer science fundamentals
  • Expertise using an appropriate mix of applications in the big data ecosystem (Kafka, Spark, Hadoop MapReduce, Hive, YARN, ZooKeeper, and NoSQL products)
  • Solid understanding of running applications on Linux variants and familiarity with bash scripting
  • Soft skills such as professionalism, excellent communication, teamwork, and documentation