At SpringML, we are all about empowering the ‘doers’ in companies to make smarter decisions with their data. Our predictive analytics products and solutions apply machine learning to today’s most pressing business problems so customers get insights they can trust to drive business growth.
We are a tight-knit, friendly team of passionate and driven people who are dedicated to learning, excited to solve tough problems, and like seeing results fast. Our core values include putting our customers first, empathy and transparency, and innovation. We are a team with a focus on individual responsibility, rapid personal growth and execution. If you share similar traits, we want you on our team.
What’s the opportunity?
SpringML is looking to hire a top-notch Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets.
As a Data Engineer, your primary role will be to design and build data pipelines. You will focus on client projects involving data integration, data preparation, and applying machine learning to datasets. In this role, you will work with some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company.
The chosen team member will be part of the core team and play a critical role in scaling up our emerging practice.
- Work as a member of a team designing and implementing data integration solutions.
- Build data pipelines using standard frameworks such as Hadoop, Apache Beam, and other open-source solutions.
- Learn quickly – the ability to rapidly comprehend new areas, both functional and technical, and apply detailed, critical thinking to customer solutions.
- Propose design solutions and recommend best practices for large-scale data analysis.
Desired Skills and Experience
- B.S. or equivalent degree in computer science, mathematics, or another relevant field.
- 5-10 years of experience in ETL, data warehousing, visualization, and building data pipelines.
- Strong programming skills – experience and expertise in at least one of the following: Java, Python, Scala, or C.
- Proficiency in big data/distributed computing frameworks such as Apache Spark and Kafka.
- Experience with Agile implementation methodologies.