
Big Data Consultant Resume


Cincinnati, Ohio

SUMMARY:

  • 5 years of total IT experience as a Senior Technical Developer.
  • 2+ years of experience in Hadoop Big Data implementation.
  • 2+ years of experience in detail-oriented data modeling, analysis, reporting, and facilitating design sessions and brainstorming meetings.
  • Experience in the Insurance, Financial Services, Retail, and Healthcare domains.
  • Experience communicating with international clients and handling crises. Hands-on experience managing several sprints in parallel. Experience in coaching, mentoring, and finding new opportunities for team members to achieve project goals.
  • Strong hands-on experience in NoSQL databases such as Cassandra and MongoDB.
  • Experience delivering the complete project life cycle (Waterfall, Agile), i.e. end-to-end implementation.
  • Worked with BI tools such as Tableau and Power BI for report creation and further analysis from the front end.
  • Experience in Big Data technologies such as the Hadoop framework, MapReduce, HiveQL, HBase, and ZooKeeper.
  • Experience in setting up Hive, Pig, HBase, and Sqoop on the Linux operating system.
  • Expertise in database design, creation and management of schemas, and writing stored procedures, functions, and DDL/DML SQL queries.
  • Worked on HBase to load and retrieve data for real-time processing using its REST API.
  • Good working experience using Sqoop to import data from RDBMS into HDFS or Hive, and to export data from HDFS or Hive back to RDBMS.
  • Worked on extracting data from Cassandra through Sqoop, placing it in HDFS, and processing it there.
  • Extended Hive core functionality with custom User Defined Functions (UDFs), User Defined Table-Generating Functions (UDTFs), and User Defined Aggregate Functions (UDAFs).
  • Designed and implemented Hive UDFs using Python for evaluating, filtering, loading, and storing data.
  • Explored Spark to improve the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN (a minimal sketch follows this list).
  • Developed simple to complex MapReduce streaming jobs in Python and Scala, integrated with Hive.
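
A minimal sketch, in Scala, of the Spark pattern referenced above (pair RDD aggregation handed off to a DataFrame and Spark SQL). The input path, field layout, and column names are hypothetical placeholders, not details of any specific engagement:

```scala
import org.apache.spark.sql.SparkSession

object ClaimCountsJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ClaimCountsJob").getOrCreate()
    import spark.implicits._

    // Pair RDD: count records per claim type from a delimited file (hypothetical layout).
    val counts = spark.sparkContext
      .textFile("/data/claims/input.csv")
      .map(_.split(","))
      .filter(_.length >= 2)
      .map(fields => (fields(1), 1L))   // (claimType, 1)
      .reduceByKey(_ + _)

    // Hand the result to the DataFrame / Spark SQL side for further analysis.
    counts.toDF("claim_type", "total").createOrReplaceTempView("claim_counts")
    spark.sql("SELECT claim_type, total FROM claim_counts ORDER BY total DESC LIMIT 10").show()

    spark.stop()
  }
}
```

Submitted via spark-submit on YARN, this illustrates the combination of RDD-level aggregation with DataFrame/Spark SQL queries mentioned in the bullet above.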

TECHNICAL SKILLS:

Tools: Eclipse, Maven

Programming Languages: Scala, Shell scripting, Spark (Scala API)

Databases and Query Languages: DB2, Oracle, MSSQL, SQL

Big Data: Hadoop framework, HDFS, MapReduce, Hive, Oozie, Spark, Scala, Phoenix, HBase, Impala, Cassandra

PROFESSIONAL EXPERIENCE:

Confidential, Cincinnati, Ohio

Big Data Consultant

Roles and Responsibilities

  • Responsible for creating technical design documents.
  • Develop and deploy service-oriented applications leveraging Apache Spark and the Scala language.
  • Use Git, TeamCity, and SonarQube for continuous integration and continuous delivery (CI/CD) of code.
  • Test features and functions for Confidential and upgrade projects according to specifications and requirements from business users.
  • Utilize knowledge of modern software development processes, methodologies, and software tools, applying extensive experience to project development.
  • Design and develop different program modules and classes, ensuring efficiency, appropriate simplicity, robustness, and scalability of code.
  • Perform Solution Requirements Analysis (SRA) & design using Hadoop technologies.
  • Automate batch processes on TWS (Tivoli Workload Scheduler) to run programs on a specific daily schedule to perform data updates.
  • Utilize TeamCity to deploy code to various upper environments and maintain its technical integrity.
  • Conduct research and integrate industry best practices into systems processes and potential solutions.
  • Develop and administer Design Service Architecture Document, which includes technical dependencies, technical risks, technical assumptions, and conceptual, logical, and physical architecture diagrams.
  • Integrate DB2 applications with a Big Data platform using IBM Big SQL.
  • Utilize Apache Hadoop components such as HBase, Hive, and HDFS for storing and processing data (a minimal sketch follows this list).
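
As a hedged illustration of the HBase load-and-retrieve pattern mentioned above, the sketch below uses the standard HBase Java client from Scala; the table name, column family, and row key are hypothetical rather than taken from any project:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseLoadRetrieve {
  def main(args: Array[String]): Unit = {
    val connection = ConnectionFactory.createConnection(HBaseConfiguration.create())
    val table = connection.getTable(TableName.valueOf("policy_events")) // hypothetical table

    // Load: write one cell into the "d" column family.
    val put = new Put(Bytes.toBytes("policy-001"))
    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("status"), Bytes.toBytes("ACTIVE"))
    table.put(put)

    // Retrieve: read the same cell back for near-real-time use.
    val result = table.get(new Get(Bytes.toBytes("policy-001")))
    val status = Bytes.toString(result.getValue(Bytes.toBytes("d"), Bytes.toBytes("status")))
    println(s"policy-001 status = $status")

    table.close()
    connection.close()
  }
}
```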

Confidential, Connecticut

Big Data Analyst

Specification: Cloudera, Hive, MSSQL, QlikView, MapReduce, Sqoop, Flume, Spark, Scala

  • Gathered business requirements to come up with an efficient and cost-effective plan for Business analysts.
  • Created and assisted with documents such as the BRD, FRD, TRD, and RTM, and participated in JAD sessions to engage teams in interactive knowledge sharing.
  • Outlined acceptance standards for all requirements and translated those standards into acceptance tests.
  • Tested newly developed models and validated them at an early phase to avoid defects.
  • Wrote SQL queries to pull data from the databases (NoSQL, T-SQL) and used Sqoop to move data from Hive to MSSQL.
  • Communicated with the UI team to assist them with data modeling, mining, and analytics.
  • Analyzed risks and requirements for each release, considering cost, time, and budget.
  • Communicated with business partners to track progress of requirements against deadlines and ensured that project requirements were met.
  • Identified and updated the risks and issues associated with the project on the daily scrum call.
  • Initiated and led the team on the continuous improvement and process maturity project. Constructed and maintained process flow diagrams to provide a visual representation of activities and relationships.
  • Introduced organizational methods to improve accuracy and effectiveness of the testing models.
  • Communicated with clients to troubleshoot issues related to the current project's claim models.
  • Analyzed web logs, including XML and other unstructured data.
  • Handled the unstructured data, processed it, stored it in Hive, and created Hive views (a minimal sketch follows this list).
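
A minimal sketch of the unstructured-data-to-Hive flow described in the last two bullets, assuming a Spark/Scala pipeline with Hive support; the input path, database, table, and field split are illustrative placeholders only:

```scala
import org.apache.spark.sql.SparkSession

object WeblogToHive {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WeblogToHive")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Parse raw web-log lines into a few columns (hypothetical whitespace-delimited layout).
    val logs = spark.sparkContext
      .textFile("/data/raw/weblogs")
      .map(_.split("\\s+"))
      .filter(_.length >= 3)
      .map(f => (f(0), f(1), f(2)))
      .toDF("host", "event_time", "request")

    // Persist as a Hive table (assumes the "staging" database exists),
    // then expose a filtered Hive view on top of it.
    logs.write.mode("overwrite").saveAsTable("staging.weblogs")
    spark.sql(
      """CREATE OR REPLACE VIEW staging.weblogs_get AS
        |SELECT host, event_time, request
        |FROM staging.weblogs
        |WHERE request LIKE 'GET%'""".stripMargin)

    spark.stop()
  }
}
```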

Confidential

Sr. Associate Engineer

Specification: Hortonworks, HDFS framework, Hive, HBase, Oozie, Sqoop, Perl

  • Involved in understanding and translating the requirements into application and system design.
  • Prepared and maintained Architectural documents, non-functional requirement documents for various project modules.
  • Involved in designing the Dream Oracle database with the help of the DBA.
  • Involved in testing of design attributes while coordinating with team members in accomplishing overall objectives.
  • Extensively involved in producing complex reports for Confidential's business users on the status of enrollment and eligibility of Argus and non-Argus members.
  • Implemented automation tool for report generation of frequently used repetitive SQL queries in the database.
  • Involved in design and implementation of the backend and middle-tier layers using Spring technologies.
  • Handled huge volumes of data in the application during year end as a result of new customers and renewals of existing customers.
  • Worked with release management to provide the status of release work on their weekly calls, including the move-up plan and OOA activities.
  • Involved in design walkthroughs, functional requirement reviews, and code reviews.
  • Worked with Confidential's Disaster Recovery Team for Confidential's Enrollment and Eligibility applications.
  • Extensively worked on NDM scripts to FTP files to the vendors.
  • Integrated the web services with Mainframes.
  • Worked in an Agile Scrum methodology with 2-week sprints; attended milestone and planning meetings.

Environment: JDK 1.6, Spring MVC, Spring Integration, Hibernate, JAXB, XML, XSD, WebLogic, Oracle 11g (DML, DDL, PL/SQL), Oracle Data Modeler, JUnit, RESTful Web Services, Maven.

Confidential

Sr. Software Developer

Specification: HDFS framework, MapReduce, Hive, HBase, Oozie, Sqoop, ZooKeeper, Spark

  • Fair knowledge of big data technologies such as Hadoop, MapReduce, and Hive.
  • Experienced in installing and configuring Hadoop in standalone mode.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Developed and tested MapReduce jobs in local and pseudo-distributed modes.
  • Hands-on experience importing data into and exporting data out of HDFS using Sqoop.
  • Involved in migrating the OCD (Other Carrier Data) process to Hadoop technology.
  • Exported data from MySQL and DB2 to the HDFS file system.
  • Handled bad data and failures of DataNodes and daemon processes.
  • Enhancement and support for a Java web application.
  • Involved in team lead activities, including preparation of the Monthly Progress Report, which provides details on team information, production issues, incident and change tickets, and effort spent across teams.
  • Created the KCD document.
  • Communicated with the customer base to explain complex analyses and to look for opportunities to improve business decision making.
  • Analyzed large datasets and stored them in HDFS using different serialization formats, such as SequenceFiles and Avro, and performed web log analysis for the EPDS application (a minimal sketch follows this list).
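
A minimal sketch of writing the same aggregate in the two serialization formats named above, assuming a Spark/Scala job; the paths and log layout are hypothetical, and the Avro write assumes the spark-avro package is on the classpath:

```scala
import org.apache.spark.sql.SparkSession

object SerializationFormatsJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SerializationFormatsJob").getOrCreate()
    import spark.implicits._

    // Aggregate hits per host from raw web logs (hypothetical input path and layout).
    val counts = spark.sparkContext
      .textFile("/data/epds/weblogs")
      .map(line => (line.split(" ")(0), 1L))
      .reduceByKey(_ + _)

    // A pair RDD of (String, Long) can be written directly as a Hadoop SequenceFile.
    counts.saveAsSequenceFile("/data/epds/host_counts_seq")

    // The same data written as Avro (requires the spark-avro package).
    counts.toDF("host", "hits")
      .write.format("avro")
      .save("/data/epds/host_counts_avro")

    spark.stop()
  }
}
```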
