Big Data System Administrator // Motionlogic

November 17, 2016

Would you like to concurrently scale up and out your career as a Systems Administrator?

Do you want to keep petabyte-scale systems running smoothly?

Then read on!


Job Description

Motionlogic answers big questions, and it uses really big data to do so. We are building data products and the infrastructure that can handle hundreds of terabytes and billions of events daily, and provide actionable insights in near real time.

Does that sound fun and challenging? If yes, here's what you'll do when you show up at the office:

  • Support Motionlogic's complex internal and customer-based infrastructure.
  • Partner with internal and customer teams to drive project goals and prioritization, and apply improvements from lessons learned.
  • Develop, coordinate, and drive small projects and service levels.
  • Solve problems occurring in critical services and build automation to prevent their recurrence.
  • Ensure compliance with very strict security and privacy policies.
  • Employ monitoring tools to collect KPIs, and interpret them to optimize the operational profile and keep problems from recurring.
  • Build big-data appliances that will operate in secure data centers around the planet.
  • Select which new technologies to incorporate in the existing tool chain.
  • Automate the operation of internal systems.
  • Improve continuously — you will be encouraged and rewarded for it.

Expected skills and experience

If you are up for the challenge, we think you should bring some of the following skills and experience:

Minimum qualifications

  • Self-driven personality.
  • BS degree in Computer Science, another quantitative field, or equivalent practical experience.
  • Experience with algorithms, data structures and software design.
  • Experience in at least one high-level language such as Ruby or Python, or in shell scripting.

Preferred qualifications

  • Experience with production Linux environments.
  • Proficient with configuration management for YARN-based systems and web services.
  • Knowledge of TCP/IP networking, load balancing, workflow automation, security practices, and LAN/WAN environments.
  • Familiarity with running data pipelines and on-premise installations.
  • Experience with scripting in Python and bash.
  • Exposure to virtualization engines (libvirt and QEMU) and automating their operational aspects.
  • Knowledge of container technologies, like LXC and Docker.
  • Knowledge of at least one monitoring solution like Prometheus, Nagios, Munin or Zabbix.
  • Knowledge of operating big data technologies (YARN, Mesos, ZooKeeper, etc.).
  • Analytical skills and passion for measuring everything and improving systems based on the insights gained from the measured data.

If this sounds like something you'd like to know more about, don't hesitate to contact us!

Visit the company website
