Senior Big Data Consultant Resume
Plano, TX
SUMMARY:
- 14 years of IT experience with strong credentials in designing, implementing, and supporting technical solutions for Hadoop and CRM applications.
- 3+ years of experience in implementing Hadoop applications
- Hortonworks Certified Spark Developer (HDPCD: Spark)
- 5+ years of experience working in an Agile software development lifecycle
- Working knowledge of big data management in Microsoft Azure HDInsight using Hortonworks
- Hands-on experience with Microsoft Azure: HDInsight clusters and Azure Data Lake
- Experience working with HDFS, YARN, Sqoop, Hive, Spark, Flume, Kafka, MapReduce, etc.
- Working knowledge in AWS and Microsoft Azure Cloud computing platforms
- Knowledge of Hadoop and Spark architectures and their components
- Experience working with Spark data abstractions such as DataFrames and RDDs
- Experience in managing and reviewing log files for Hive, Sqoop, and Spark applications
- Experience importing and exporting data between relational databases and HDFS using Sqoop.
- Experience implementing partitioning and bucketing techniques in Hive.
- Experience writing HQL/SQL queries for Hive and Spark SQL
- Completed a POC on real-time streaming using Spark Streaming (a minimal sketch follows this list)
- Experience using Flume to read weblogs and store the files in HDFS
- Managing and monitoring Hadoop clusters using Ambari
- Experience with different file formats like CSV, Text, Sequence, JSON, Parquet, ORC, Avro
- Active member of the technical leadership team that decides on tools and technologies.
- Implemented a Continuous Delivery pipeline with Jenkins and Git
- Worked in several data migration projects between Siebel CRM and other applications
- Handled several roles as required: Onsite Coordinator, Technical Lead, and Architect
- Experience in the Airline, Healthcare, Retail, Marketing, and Customer Care business areas
- Excellent team player with good oral and written communication skills
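
As a concrete illustration of the streaming POC above, here is a minimal Spark Streaming sketch in Scala. The socket source, batch interval, and log-field positions are illustrative assumptions, not the actual POC configuration.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Minimal Spark Streaming sketch: count events per HTTP status code in
// 10-second micro-batches. The host/port and the space-delimited log
// layout are placeholders, not the actual POC setup.
object StreamingPoc {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamingPoc")
    val ssc  = new StreamingContext(conf, Seconds(10))

    val lines = ssc.socketTextStream("localhost", 9999)
    val statusCounts = lines
      .map(_.split(" "))
      .filter(_.length > 8)              // skip malformed records
      .map(fields => (fields(8), 1L))    // status code field in combined log format
      .reduceByKey(_ + _)

    statusCounts.print()                 // a real pipeline would write to HDFS/Hive
    ssc.start()
    ssc.awaitTermination()
  }
}
```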
TECHNICAL SKILLS:
- Big Data: Spark, HDFS, Kafka, MapReduce, Hive, Sqoop, Flume, YARN, Zookeeper, DataFrames, Jenkins, Microsoft Azure, NiFi, Azure Data Lake, Scala
- SQL, PL/SQL, BI Publisher, HTML5, CSS, JavaScript, Java, TOAD, Informatica, SQL Developer, JIRA, IntelliJ IDEA, Rally, AgileCraft, MS Visio, MS Project, MS Office
PROFESSIONAL EXPERIENCE:
Confidential, Plano, TX
Senior Big Data Consultant
Responsibilities:
- Extensively worked on the Azure HDInsight platform and its components
- Deployed virtual machines and managed disks in Azure
- Deployed different types of HDInsight clusters in Azure
- Performed a POT (proof of technology) on the Apache NiFi product and compared its capabilities with Informatica
- Ingested data from Teradata to HDFS using NiFi (a read-back sketch follows this list)
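
NiFi flows are configured in the UI rather than in code, so as a stand-in illustration, here is a hedged Scala sketch of spot-checking the Teradata extracts after NiFi lands them in HDFS. The landing path, delimiter, and header option are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

// Read back files that the NiFi flow landed in HDFS and report basic
// row/schema information. Path and CSV options are assumed placeholders.
object IngestCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("IngestCheck").getOrCreate()

    val extracts = spark.read
      .option("header", "true")
      .option("delimiter", "|")
      .csv("hdfs:///data/landing/teradata/")   // assumed PutHDFS target directory

    println(s"rows ingested: ${extracts.count()}")
    extracts.printSchema()
    spark.stop()
  }
}
```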
Confidential, Fort Worth, TX
Big Data Consultant
Responsibilities:
- Part of the team that performed a POC for the business requirement and mapped it to a Big Data solution
- Extracted structured data from reservation and Loyalty applications using Sqoop
- Configured and worked with Flume to load data from aa.com weblogs into Kafka (POC)
- Configured Kafka queues to receive data from Flume
- Developed ETL scripts to import data from RDBMS into Hive and HDFS using different file formats
- Created Hive Internal/External Tables using HiveQL
- Developed Hive DDLs to create, alter, and drop Hive tables, partitions, and buckets
- Expertise in partitioning and bucketing techniques in Hive
- Handled different file formats: ORC, TXT, CSV, Avro, Sequence, JSON, and XML.
- Implemented Spark-shell applications using Scala to process weblogs and reservation data
- Worked with SparkContext and SQLContext to process data using transformations and actions
- Developed Scala programs to apply business rules, cleanse the raw data, and transform it into meaningful business information, then loaded it into Hive.
- Applied partitioning and bucketing to improve performance in Hive
- Created accumulators and broadcast variables in Spark (Scala)
- Created RDDs and DataFrames using SparkContext and SQLContext respectively (see the sketch after this list)
- Used IntelliJ IDEA to build Scala-based Spark applications as JARs and shipped the JARs to the cluster
- Part of Agile software development team, worked in parallel development environment
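
The sketch below ties several of the bullets above together under stated assumptions: a partitioned Hive table created through Spark SQL, a broadcast variable for a small reference lookup, and an accumulator counting rejected rows. The table, columns, airport codes, and paths are illustrative, not the actual production objects.

```scala
import org.apache.spark.sql.SparkSession

// Hedged sketch: broadcast lookup + accumulator + load into a partitioned
// Hive table. All object names below are illustrative placeholders.
object ReservationLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ReservationLoad")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Partitioned Hive table, with the DDL issued through Spark SQL
    spark.sql("""CREATE TABLE IF NOT EXISTS reservations_clean (
                   pnr STRING, origin STRING, dest STRING)
                 PARTITIONED BY (flight_date STRING)
                 STORED AS ORC""")
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

    // Small reference set shipped to executors once via a broadcast variable
    val airports = spark.sparkContext.broadcast(Set("DFW", "ORD", "LAX"))
    // Accumulator as a cheap side channel for data-quality counts
    val badRows = spark.sparkContext.longAccumulator("badRows")

    val cleaned = spark.read.option("header", "true")
      .csv("hdfs:///data/raw/reservations/")   // assumed landing path
      .filter { row =>
        val ok = airports.value.contains(row.getAs[String]("origin"))
        if (!ok) badRows.add(1)                // note: task retries can over-count
        ok
      }
      .select($"pnr", $"origin", $"dest", $"flight_date")

    // Position-based insert; the trailing column feeds the partition
    cleaned.write.mode("append").insertInto("reservations_clean")
    println(s"rejected rows: ${badRows.value}")
    spark.stop()
  }
}
```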
Confidential, Fort Worth, TX
Senior Big Data Consultant
Responsibilities:
- Migrated data from Microsoft Dynamics to Siebel system using Informatica and Siebel EIM
- Worked with the Siebel Data Model and optimized Siebel SQL queries by analyzing Oracle execution plans and creating/optimizing database indexes.
- Created Informatica Workflows to load data from Siebel to OBIEE for Analytics reporting
- Part of Agile team to deliver the customizations in monthly Sprint
Confidential, Rochester, NY
Senior Big Data Consultant
Responsibilities:
- Developed an iFacets sync with the Siebel application as a daily batch using the Siebel EIM utility
- Created Informatica mapping between Siebel and iFacets entities
- Used Informatica and DAC to pull data from Siebel and load it into the OLAP database
Confidential, Easton, PA
Senior Big Data Consultant
Responsibilities:
- Designed a Web Service to create a Service Request from the Victolic e-commerce system.
- Configured an External Business Component to display data from another database in the Siebel application
- Loaded Account data (from Confidential ) into Siebel through the EIM process using Informatica.
- Provided end-to-end support for monitoring production applications
Confidential, Albuquerque, NM
Senior Big Data Consultant
Responsibilities:
- Actively participated in the requirements and process mapping workshops for the project.
- Designed a task-based UI for Billing, Payments, Payment Disputes, PCP Change, and Address Change.
- Developed custom business services to implement business logic for different processes.
Confidential
Senior Big Data Consultant
Responsibilities:
- Created Business Components, Business Objects, Applets, Views, and Screens.
- Configured User Properties, static and dynamic picklists, Joins, Links, drilldowns, and MVGs.
- Developed custom Business Services, Applet scripts, and BC scripts.