
Hadoop Solution/Operations Big Data Architect Resume


Sunnyvale, CA

PROFESSIONAL SUMMARY:

  • Big Data and Cloud architect/developer/data scientist with over 18 years of professional experience spanning data warehousing, data pipelines, web, client/server applications, and advanced business analytics solutions. Providing thought leadership and ownership of Hadoop ecosystem design and implementation, I have researched, evaluated, and implemented tools, technologies, designs, and processes, ensuring quality products that are responsive to market needs. Expertise in Hadoop, MapReduce, Elasticsearch, Spark, YARN, Spark Streaming, Hive, Pig, HBase, Kafka, Cassandra, Oracle, Teradata, AWS (EC2, S3, SNS, SQS, CloudWatch), RESTful APIs, Machine Learning, Tableau, Android SDK/NDK/JNI, and iPhone SDK.
  • Expertise in C, C++, Java, J2EE, Scala, Python, Go, CDH, HDP, EMR, TCP/IP, Bluetooth, Android SDK, communication drivers, and application software.
  • Hands-on experience with major Hadoop ecosystem components including Hive, HBase, Pig, Sqoop, Flume, Kafka, Avro, Oozie, Zookeeper, MapReduce, Cassandra, and Teradata.
  • Strong knowledge of data mining and machine-learning techniques using Spark MLlib and Mahout, as well as MongoDB.

TECHNICAL SKILLS:

Languages: Java, Scala, Python, C++

Big Data: Hadoop, Elasticsearch, Storm, Spark, Kafka, Zookeeper

Databases: HBase, Cassandra, MongoDB, Oracle, MS SQL, MySQL

Cloud: AWS (EC2, S3), OpenStack, Private Cloud

Tools: REST APIs, Chef, Salt

Practices: Project Management, Infrastructure Planning, Data Warehousing, Product Development, Data Architecture, Data Governance, Data Analytics, Data Modeling, Machine Learning, Agile Methodologies, Leadership, Team Building

PROFESSIONAL EXPERIENCE:

Confidential, Sunnyvale, CA

Hadoop Solution/Operations BigData Architect

Responsibilities:

  • Designed the application architecture and the product roadmap aligned to business objectives.
  • Installed and configured an Elasticsearch cluster in production environments.
  • Worked on Elasticsearch queries and filtering.
  • Worked on indexing data in Elasticsearch.
  • Set up the Kafka cluster and implemented and automated the Da-vinci message producers and consumers.
  • Developed a data pipeline using Kafka, Spark Streaming, and the lambda architecture.
  • Worked with business stakeholders to translate business objectives and requirements into technical requirements and designs.
  • Operationalized Elasticsearch and indexed all the Da-vinci procedures.
  • Imported historical data from RDBMS into HDFS using Sqoop.
  • Performed data transformations in Hive and Spark SQL.
  • Developed analytical components using Scala, Spark, and Spark Streaming.
  • Responsible for designing the data architecture for a Big Data initiative to perform data analytics in the healthcare domain.
  • Worked on HBase architecture design with Hadoop to develop a database design in HDFS.
  • Set up and configured Nagios for monitoring.
  • Provided guidance to the development team.
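The lambda-architecture pipeline above combines a precomputed batch view with a real-time speed-layer view at serving time. A minimal, framework-free sketch of that merge step (all names and data here are illustrative, not from the actual project):

```python
# Hypothetical sketch of a lambda-architecture serving-layer merge:
# a batch view (e.g. from a nightly Spark job) is combined with a
# speed-layer view (e.g. from a Spark Streaming window).

def merge_views(batch_view, speed_view):
    """Merge batch and speed-layer counts into one serving view."""
    merged = dict(batch_view)
    for key, count in speed_view.items():
        merged[key] = merged.get(key, 0) + count
    return merged

batch_view = {"login": 120, "search": 300}   # precomputed batch layer
speed_view = {"search": 5, "checkout": 2}    # recent streaming increments
serving_view = merge_views(batch_view, speed_view)
print(serving_view)  # {'login': 120, 'search': 305, 'checkout': 2}
```

In a real deployment the batch view would be recomputed from the full HDFS history, which lets the speed layer stay small and discardable.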

Languages: Java, Scala, Python, J2EE, Hadoop, Elasticsearch, Kafka, Spark, Spark Streaming, HBase, Hive, Pig, Sqoop, MySQL, MS SQL Server, Tableau 8.x

Confidential, Palo Alto, CA

BigData/Hadoop Solution Architect

Responsibilities:

  • Designed and developed MapReduce/YARN programs.
  • Performed data analysis through Pig, MapReduce, and Hive.
  • Architected and developed data pipelines for various business requirements.
  • Performed POCs on OpenStack technologies.
  • Imported and exported data using Sqoop between HDFS and relational databases (Teradata, DB2).
  • Implemented Flume, Spark, and Spark Streaming frameworks for real-time data processing.
  • Hands-on experience installing, configuring, and using ecosystem components such as Hadoop MapReduce, HDFS, HBase, Pig, Flume, Hive, and Sqoop.
  • Developed analytical components using Scala, Spark, and Spark Streaming.
  • Developed BI reports and dashboards using Tableau 8.x.
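The Pig/MapReduce/Hive analysis work above follows the classic map-then-reduce pattern: emit key/value pairs, group by key, aggregate. A framework-free Python sketch of that pattern (the record fields and values are invented for illustration; the real jobs ran on the Hadoop cluster):

```python
# Minimal map/reduce sketch: group sales records by region and sum them.
from collections import defaultdict

def map_phase(records):
    """Map step: emit (key, value) pairs, one per input record."""
    for record in records:
        yield (record["region"], record["sales"])

def reduce_phase(pairs):
    """Reduce step: aggregate values per key (shuffle/sort is implicit)."""
    grouped = defaultdict(int)
    for key, value in pairs:
        grouped[key] += value
    return dict(grouped)

records = [{"region": "west", "sales": 10},
           {"region": "east", "sales": 7},
           {"region": "west", "sales": 5}]
print(reduce_phase(map_phase(records)))  # {'west': 15, 'east': 7}
```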

Languages: Java, Scala, Python, J2EE, Hadoop, Spark, Cassandra, HBase, Hive, Pig, Sqoop, MySQL, Teradata, DB2, Tableau 8.x

Confidential, Las Vegas, NV

Responsibilities:

  • Designed and developed MapReduce/YARN programs.
  • Migrated Solr to Elasticsearch.
  • Implemented Kerberos security to safeguard the cluster.
  • Performed data analysis through Pig, MapReduce, and Hive.
  • Provided cluster coordination services through Zookeeper.
  • Developed a POS analytics component.
  • Imported and exported data using Sqoop between HDFS and relational databases (Teradata, DB2).
  • Implemented Flume, Spark, and Spark Streaming frameworks for real-time data processing.
  • Hands-on experience ingesting data from various sources and installing, configuring, and using ecosystem components such as the Hadoop MapReduce framework, HDFS, HBase, Pig, Flume, Hive, and Sqoop.
  • Developed a social-media component to store data in Cassandra.
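Storing social-media data in Cassandra typically means a wide-row layout: one partition per user, with events clustered by timestamp so recent activity is cheap to read. A toy in-memory sketch of that layout (names and data are hypothetical, and a Python dict stands in for the actual Cassandra table):

```python
# Illustrative wide-row model: partition key -> rows ordered by
# clustering column (timestamp), mimicking Cassandra's storage layout.
from collections import defaultdict

table = defaultdict(list)  # user_id -> [(timestamp, payload), ...]

def insert_event(user_id, ts, payload):
    """Append an event and keep the partition in clustering order."""
    table[user_id].append((ts, payload))
    table[user_id].sort()  # Cassandra maintains this order on disk

def latest_events(user_id, limit=2):
    """Read the most recent events for one user (a single-partition read)."""
    return table[user_id][-limit:]

insert_event("u1", 1, "post")
insert_event("u1", 3, "like")
insert_event("u1", 2, "share")
print(latest_events("u1"))  # [(2, 'share'), (3, 'like')]
```

The design choice matters: because all of a user's events share one partition, "latest N events for user X" never touches more than one node.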

Confidential, Sunnyvale, CA

Lead Architect/Hadoop Architect

Responsibilities:

  • Designed and developed the data-ingestion component.
  • Designed and developed MapReduce applications for data analysis.
  • Implemented a Flume-based framework for data ingestion.
  • Integrated Hadoop with BI tools such as Alpine Data and Tableau.
  • Involved in writing Java APIs for running Apache Spark.
  • Imported bulk data into HBase using MapReduce programs.
  • Implemented Elasticsearch indexing on movie titles.
  • Set up and monitored the Hadoop development environment.
  • Set up a Hadoop cluster on Amazon EC2.
  • Set up a Kafka cluster.
  • Performed data refinement through Pig and Hive.
  • Performed data migration between RDBMS and Hadoop through Sqoop.
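When bulk-importing into HBase with MapReduce, a common pitfall is region hot-spotting from monotonically increasing keys; a standard remedy is to prefix each row key with a salt derived from a stable hash. A sketch of that pattern (the key scheme and bucket count are illustrative, not the project's actual design):

```python
# Salted row-key generation for HBase bulk loads: a deterministic
# hash prefix spreads sequential writes across regions.

def salted_row_key(natural_key, num_buckets=8):
    """Prefix the natural key with hash(key) % num_buckets."""
    # sum of the key's bytes is a simple stand-in for a real hash
    salt = sum(natural_key.encode()) % num_buckets
    return f"{salt}-{natural_key}"

print(salted_row_key("a"))  # 1-a  (97 % 8 == 1)
```

Scans by natural key then need `num_buckets` parallel range scans, one per salt prefix, which is the usual trade-off of this technique.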

Confidential, San Jose, CA

Lead Architect/Hadoop Architect

Responsibilities:

  • Designed the Samsung analytics component in the Hadoop ecosystem.
  • Designed and developed MapReduce applications for data analysis.
  • Developed MapReduce programs.
  • Set up MySQL for the Hive Metastore.
  • Developed Java RESTful APIs.
  • Integrated RDBMS with HDFS through Sqoop.
  • Developed a Location Services component to record/play location-specific voice notes.
  • Developed web services and the voice-notes search component.
  • Performed query optimization through views, indexing, partitioning, and bucketing.
  • Developed custom mappers and reducers for Hive.
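Of the query optimizations listed above, bucketing is the least obvious: Hive assigns each row to a bucket by hashing the bucketing column modulo the bucket count, while partitioning routes rows by the literal partition-column value. A small sketch of the bucketing rule (using CRC32 as a deterministic stand-in for Hive's internal hash):

```python
# Hive-style bucket assignment: bucket = hash(column value) % num_buckets.
import zlib

def bucket_for(value, num_buckets=4):
    """Return the bucket index for a value, deterministically."""
    return zlib.crc32(str(value).encode()) % num_buckets

rows = ["user-1", "user-2", "user-3"]
assignments = {r: bucket_for(r) for r in rows}
```

Because the assignment is deterministic, two tables bucketed the same way on the join key can be joined bucket-by-bucket, which is what makes bucket map joins possible.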

Confidential, Mountain view, CA

Lead Architect/Sr Hadoop, Mobile Software developer

Responsibilities:

  • Designed and developed Curvone.
  • Worked with different Google Android SDKs.
  • Interfaced with native code using the NDK and OpenGL ES.
  • Developed the navigation bar and integrated it with the cloud service.
  • Developed Activities, content providers, and Intents.
  • Developed a connectivity component using USB.
  • Integrated with the Facebook API.

Confidential, San Jose, CA

Lead Sr. Software Engineer

Responsibilities:

  • Developed a USB communication driver using C++ and STL.
  • Developed Confidential Communicator/SoundStation using C++, MFC, and SIP.
  • Integrated Confidential Communicator with Skype.
  • Used Agile development and the Scrum model.
  • Developed UI automation using Python.

Confidential, Mountain View, CA

Sr. Software Engineer and Team Lead

Responsibilities:

  • Developed the content management UI using WPF.
  • Developed a file-transfer component.
  • Developed a shell-extension component using C++ and COM.
  • Used Agile development and the Scrum model.

Confidential, Foster City, CA

Sr. Software Engineer

Responsibilities:

  • Developed the TinyBird player UI using MFC, Win32, and C#.
  • Developed a multithreaded application and the Timeshift bar using C++.
  • Developed the next-generation player using Silverlight and C#.
  • Used Agile development and the Scrum model.

Confidential, Mountain View, CA

Sr. Software Engineer

Responsibilities:

  • Confidential Edition is an integrated and comprehensive software platform developed specifically to deliver broadcast-quality video and new, integrated TV services over broadband networks.

Confidential, San Francisco, CA

Sr. Software Engineer

Responsibilities:

  • Project: eProcurement, a complete solution for automating the entire purchasing process, from purchase-order creation on the front end, to decision-making and reporting tools that provide management control, to integration with the back office.

Confidential, Santa Clara, CA

Sr. Software Engineer

Responsibilities:

  • FAB300mm is a next-generation MES solution of software and services that controls and automates real-time fab operations in the semiconductor industry. The Engineering Data Collection module collects real-time engineering data used for analysis and process improvement.
