Job ID: 26123
Company: Internal Postings
Location: Hillsboro, OR
Type: Contract
Duration: 6 Months
Salary: DOE
Status: Active
Openings: 1
Posted: 23 Jan 2020
Job seekers, please send resumes to resumes@hireitpeople.com

Must Have Skills:

  1. Spark
  2. Python

Detailed Job Description:

Design and implement distributed data processing pipelines using Spark, Hive, Python, and other tools and languages prevalent in the Hadoop ecosystem, with the ability to design and implement end-to-end solutions. Experience publishing RESTful APIs to enable real-time data consumption using OpenAPI specifications. Experience with open-source NoSQL technologies such as HBase, DynamoDB, and Cassandra. Familiarity with distributed stream processing frameworks for fast big data, such as Apache Spark, Flink, and Kafka Streams.
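
For illustration, a minimal PySpark sketch of the kind of batch pipeline this role describes. The Hive table name, columns, and output path are hypothetical placeholders, not the employer's actual systems.

    # Sketch of a Spark batch pipeline reading from Hive and writing Parquet.
    # Table name, columns, and output path are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("event-aggregation")   # hypothetical job name
        .enableHiveSupport()            # read tables via the Hive metastore
        .getOrCreate()
    )

    # Read a (hypothetical) Hive table of raw events.
    events = spark.table("events")

    # Aggregate events per user per day -- a typical batch transform.
    daily_counts = (
        events
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("user_id", "event_date")
        .agg(F.count("*").alias("event_count"))
    )

    # Persist results back to the warehouse as partitioned Parquet.
    (
        daily_counts.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("/warehouse/daily_event_counts")  # hypothetical path
    )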

Minimum years of experience*: 5+

Certifications Needed: No

Top 3 responsibilities you would expect the Subcon to shoulder and execute*:

  1. Design and implement distributed data processing pipelines using Spark, Hive, Python, and other tools and languages prevalent in the Hadoop ecosystem, including end-to-end solution design and implementation
  2. Publish RESTful APIs to enable real-time data consumption using OpenAPI specifications (see the sketch after this list)
  3. Work with open-source NoSQL technologies such as HBase, DynamoDB, and Cassandra
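
As referenced in responsibility 2, a minimal sketch of publishing a REST endpoint whose OpenAPI specification is generated automatically. FastAPI is used here as one common option; the framework, endpoint, and data model are assumptions, and the in-memory dict merely stands in for a NoSQL store such as HBase or Cassandra.

    # Sketch of a REST API with an auto-generated OpenAPI spec.
    # Service name, route, and model are hypothetical.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="Event Metrics API")  # hypothetical service name

    class DailyCount(BaseModel):
        user_id: str
        event_date: str
        event_count: int

    # Stand-in for a NoSQL store (HBase/Cassandra/DynamoDB in production).
    _store = {("u1", "2020-01-23"): 42}

    @app.get("/users/{user_id}/counts/{event_date}", response_model=DailyCount)
    def get_daily_count(user_id: str, event_date: str) -> DailyCount:
        count = _store.get((user_id, event_date))
        if count is None:
            raise HTTPException(status_code=404, detail="no data for that day")
        return DailyCount(user_id=user_id, event_date=event_date,
                          event_count=count)

    # FastAPI serves the OpenAPI document at /openapi.json and interactive
    # docs at /docs out of the box.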

Interview Process (Is face-to-face required?): No

Does this position require Visa independent candidates only? No