Technology

Data Engineer

Pasadena, California, United States

The Opportunity:

OpenX is looking for a talented and highly motivated Data Engineer to help us innovate and improve our products.

We are looking for a top-notch Data Engineer to join the rapidly growing OpenX data team and help us build the best big data platform through the entire stack. Our platform serves billions of ad requests and processes hundreds of terabytes a day; it is one of the largest, most scalable ad tech platforms in the world, servicing more than 800 publishers worldwide, including 65% of the comScore 100, with 200 million unique visitors and 100 billion ads auctioned per month through its exchange.

OpenX also contributes to open source tools and technology (https://www.openx.com/blog); it makes billions of real-time bidding decisions a day; and it is one of the fastest-growing companies in the country.

If you get excited by any of these nerdy stats, read on! Join our collaborative team of brainstorming developers and enjoy a role where you can own your initiatives, dig deep into the latest technologies, understand how systems interoperate, and think creatively to tackle hard engineering problems.

Some of the challenges you'll help us tackle include:

  • Design, build, and launch extremely efficient and reliable data processing pipelines to process large volumes of data
  • Design and develop new systems and tools that enable people to consume and understand data faster
  • Provide a consultative solutions approach to business partners such as analysts, management, end users, and developers to clarify objectives, determine scope, drive consensus, identify problems, and recommend solutions
  • Support end users on ad hoc data usage and be a subject matter expert on the functional side of the business
  • Build strong internal relationships, train others, and evangelize your findings and their implications
  • Use your expert coding skills across a number of languages such as Java, Scala, or other modern programming languages

Job requirements:

  • 5+ years of hands-on experience with large-scale Hadoop environments, including performance tuning and monitoring
  • Solid understanding of running applications on Linux variants; familiarity with bash scripting
  • Fluency in algorithms and data structures, strong knowledge of design patterns and software architecture, and the ability to whiteboard elegant pseudocode at will
  • Experience architecting and building high-volume data platforms using either Java MapReduce or Spark, grounded in rock-solid computer science fundamentals
  • Soft skills such as professionalism, excellent communication, teamwork, and documentation.
  • Expertise using an appropriate mix of applications in the big data ecosystem (Kafka, Spark, Hadoop MapReduce, Hive, YARN, ZooKeeper, and other NoSQL products)
  • Strong programming skills in Spark or MapReduce