Job Seekers: please send resumes to resumes@hireitpeople.com
Must Have Skills:
- Spark
- Python
- Unix scripting
Nice to Have Skills:
- Oozie, cron job scheduling
Detailed Job Description:
- Teradata DB: convert existing data engineering code to PySpark for parallel processing
- Develop a generic framework to retrieve data from Hive and convert RDDs to DataFrames; expose a webservice
- Provide ETL support as needed
- Ingest model output into the target DB
- Help dashboard developers consume model outputs from Hadoop systems
- Schedule jobs
Minimum years of experience*: 4