GCP Data Architect

Remote
Contracted
Experienced

At SpringML, we are all about empowering the “doers” in companies to make smarter decisions with their data. We are a tight-knit, friendly team of passionate and driven people who are dedicated to learning, excited to solve tough problems, and eager to see results, fast.

Our core values include placing our customers first, empathy and transparency, and innovation. We are a team with a focus on individual responsibility, rapid personal growth, and execution. If you share similar traits, we want you on our team.

SpringML is looking for a contract GCP Data Architect who is passionate about working with data and using the latest distributed frameworks to process large datasets.

Your primary role will be to design and build data pipelines. You will focus on client projects involving data integration, data preparation, and implementing machine learning on datasets. In this role, you will work with some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company.

This role is open to candidates in the United States and Canada only. Remote option available with potential for minor travel, if needed.

Responsibilities:

  • Work as a member of a team designing and implementing data integration solutions.
  • Build data pipelines using standard frameworks in Hadoop, Apache Beam, and other open-source solutions.
  • Learn quickly: rapidly comprehend new functional and technical areas and apply detailed, critical thinking to customer solutions.
  • Propose design solutions and recommend best practices for large-scale data analysis.

Required Qualifications:

  • Minimum of 10 years of professional experience in data engineering and/or architecture
  • Minimum of 3 years of professional experience working with Google Cloud Platform (GCP)
  • Minimum of 2 years of professional experience working with BigQuery

Preferred Qualifications:

  • Experience in ETL, data warehousing, visualization, and building data pipelines
  • Strong programming skills in one of the following: Java, Python, or Scala
  • Proficiency in big data frameworks such as Apache Spark and Kafka
  • Experience with Agile implementation methodologies

To comply with OFCCP regulations, the target hourly range for this position is $50.00-$200.00/hr.

 
