Job Seekers, please send resumes to resumes@hireitpeople.com
Must Have Skills:
- Spark
Nice to have:
- Building data processing pipelines using Scala and Spark
- Hands-on experience with data visualization tools (Tableau, Datameer, etc.)
- Experience with Data Migration projects and methodology
Detailed Job Description:
- Intake developers (EPDB data SME-level knowledge; outbound file development & support for Alliances)
- Building distributed data processing pipelines using Spark, Hive, Python and other tools/languages prevalent in the Hadoop ecosystem
- Building a data ingestion framework using Sqoop
- Strong understanding of Big Data enterprise architecture (Hortonworks preferred)
- UNIX/Linux skills: CLI usage and shell scripting
- Parsing structured and semi-structured data (XML, JSON) using Hadoop stack tools
- Building Spark components for data manipulation, preparation, and cleansing (see the illustrative sketch after this list)
- Strong understanding of best practices in the Hadoop ecosystem; good troubleshooting and performance tuning skills
- Using source code and version control systems such as Git
- EPDB & Provider data knowledge
- Healthcare experience
- Ability to work independently with minimal oversight
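To illustrate the kind of Spark development described above (parsing semi-structured JSON, cleansing it, and preparing it for downstream use), here is a minimal Scala sketch. All paths, column names, and the job name are hypothetical placeholders, not details taken from this role.

```scala
// Illustrative sketch only: a minimal Spark (Scala) data preparation pipeline.
// Paths and column names ("event_id", "event_type") are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventCleansingJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EventCleansingJob")
      .getOrCreate()

    // Parse semi-structured JSON into a DataFrame; Spark infers the schema.
    val raw = spark.read.json("hdfs:///data/raw/events/")

    // Basic cleansing: drop records missing a key field, normalize a string column.
    val cleansed = raw
      .filter(col("event_id").isNotNull)
      .withColumn("event_type", lower(trim(col("event_type"))))

    // Write the prepared data as Parquet for downstream consumers.
    cleansed.write.mode("overwrite").parquet("hdfs:///data/curated/events/")

    spark.stop()
  }
}
```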
Minimum years of experience*: 5+
Certifications Needed: No
Top 3 responsibilities you would expect the Subcon to shoulder and execute*:
- Building data processing pipelines using Scala and Spark
- Hands-on experience with data visualization tools (Tableau, Datameer, etc.)
- Experience with Data Migration projects and methodology
Interview Process (Is face-to-face required?): No
Does this position require Visa independent candidates only? No