Hadoop Developer Resume
Dallas, Texas
SUMMARY
- 9+ years of IT experience in software development, with proficiency in the design and development of Apache Hadoop and Hive applications, MapReduce jobs, and web-based applications using Java/J2EE technologies within the SDLC process, including over 8 years of big data (Hadoop and Java) implementation.
- Experience in the implementation and application support of Apache Hadoop and Spark projects.
- Extensively worked on Hadoop, Spark SQL, Hive (data warehouse), the MapReduce framework and HDFS architecture, Sqoop, Spark, Scala, Oozie, shell scripting, Python, Cassandra, Kafka, Talend, Tableau, Java, and JavaScript.
- Extensively worked on Spark SQL with Scala scripts and scheduled the Spark jobs in Oozie workflows (see the sketch after this summary).
- Experience in implementing Hive queries with shell scripts to load data from various sources and automate Hadoop applications for daily loads; worked on UDFs in Apache Hive.
- Experience in implementing Oozie workflow scripts and scheduling Spark jobs and HQL scripts for daily runs.
- Experience in migrating data using Sqoop between HDFS and relational database systems, and between Teradata and Hadoop, in both directions. Knowledge of the Apache Hadoop ecosystem.
- Experience in migrating data from one Hadoop cluster to another using DistCp.
- Experience in optimizing Hive queries and Spark SQL jobs when they run longer than expected.
- Experience in developing applications using Scala to compare the performance of Spark with Hadoop.
- Experience with Talend, a big data ETL and analytics tool, for data processing and SQL.
- Experience in software development across all phases of the SDLC using Agile methods, including code reviews, unit/functional/integration testing, and continuous integration/deployment.
- Experience in web technologies such as HTML, HTML5, XML, Ajax, and CSS, and in JavaScript frameworks including AngularJS and jQuery. Experience developing with NoSQL databases such as HBase and Cassandra.
- Experience in client-side design and validation using JSP, HTML, JavaScript, and AngularJS.
- Knowledge of cloud services including Amazon EC2, DynamoDB, API Gateway, S3, Athena, and GCP.
- Experience with Agile tools such as JIRA and Pivotal Tracker, and with error logging and debugging using Log4j.
- Experience in the CI/CD process for Hadoop project deployments to QA/PR environments using Jenkins with Puppet.
- Extensive knowledge of application development using Java, J2EE, Spring, Hibernate, Servlets, JSP, JDBC, web services (SOAP), microservices, EJB, JUnit, JAXB, and Tomcat. Knowledge of AWS as well.
- Extensive experience in developing enterprise business applications using IDEs such as Eclipse and IntelliJ.
- Worked with operating systems including Linux, UNIX, and Windows 98/NT/2000/XP/Vista/7/10.
- Experience working with version control tools such as SVN, Bitbucket, and GitHub.
- Experience in developing data communication protocols between different Indian space research organizations.
- Capable of quickly learning and delivering solutions as a part of a team.
- Good communication and interpersonal skills, coupled with strong debugging and problem-solving skills and an excellent understanding of system development methodologies, techniques, and tools.
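
As a minimal sketch of the Spark SQL and Oozie pattern referenced above (assuming Spark 2.x with Hive support; the job, database, and table names here are hypothetical, not taken from any project on this resume), a Scala job like the following would be packaged as a JAR and invoked from an Oozie workflow's spark action on a daily coordinator:

```scala
import org.apache.spark.sql.SparkSession

// Minimal Spark SQL job of the kind scheduled as an Oozie <spark> action.
// All database/table/column names below are illustrative placeholders.
object DailyLoadJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-load")
      .enableHiveSupport() // read and write Hive-managed tables
      .getOrCreate()

    // Run HiveQL through Spark SQL: aggregate the day's delta records.
    val daily = spark.sql(
      """SELECT customer_id, SUM(amount) AS total_amount
        |FROM staging.sales_delta
        |GROUP BY customer_id""".stripMargin)

    // Append the result to a warehouse table for downstream jobs.
    daily.write.mode("append").saveAsTable("warehouse.daily_sales")

    spark.stop()
  }
}
```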
PROFESSIONAL EXPERIENCE
Hadoop Developer
Confidential, Dallas, Texas
Responsibilities:
- Extensively worked on customer data integration using Hadoop, Spark SQL, Hive, MapReduce, Sqoop, UDFs, Scala, shell scripts, and Oozie workflows.
- Implemented Spark SQL with Scala scripts to process and analyze data per business requirements and to load data into the Hadoop environment from various sources.
- Built Spark application JAR files using IntelliJ (sbt shell) and scheduled the Spark jobs in Oozie workflows; developed the Oozie scripts to execute and automate the Spark applications.
- Wrote Hive queries with shell scripts and MR jobs to load delta data files from the bastion server into the Hadoop environment.
- Implemented a MapReduce program to parse XML data per business requirements.
- Worked on end-to-end data migration from RDBMS to Hadoop staging tables, and from Hadoop to EDW Teradata systems, using Talend and Sqoop.
- Worked on a Spark streaming process to parse XML data files into Hadoop Hive tables per requirements (see the sketch after this list).
- Worked on Hadoop enhancements (CRs) for customer data applications per client requirements.
- Worked on the project deployment process to QA/PR environments using Jenkins with Puppet.
- Worked on Talend ETL jobs to push data from HDFS to Teradata systems.
- Good knowledge of Informatica, the Kafka distributed messaging system, and the Cassandra data flow in this application.
- Responsible for migrating long-running customer marketing applications from Hadoop to Teradata.
- Validated application data against business logic.
- Used technologies including Hadoop, Teradata, Talend, SQL, Hive, MR, ActiveMQ, Kafka, JIRA, and Pivotal Tracker.
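
As a minimal sketch of the XML-to-Hive parsing described above (assuming Spark with Hive support and the scala-xml module on the classpath; the paths, tags, and table names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: parse whole XML feed files and land the records in a Hive
// staging table. Illustrative only; names are not from the project.
object XmlToHive {
  case class Customer(id: String, name: String)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("xml-to-hive")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Each input file holds one XML document, so read files whole.
    val records = spark.sparkContext
      .wholeTextFiles("/data/inbound/customers/*.xml")
      .flatMap { case (_, content) =>
        val doc = scala.xml.XML.loadString(content)
        (doc \\ "customer").map { node =>
          Customer((node \ "@id").text, (node \ "name").text)
        }
      }

    // Land the parsed records in a Hive staging table.
    records.toDS().write.mode("overwrite").saveAsTable("staging.customers_xml")
    spark.stop()
  }
}
```

In production, a dedicated XML data source (for example the spark-xml package) could replace the hand-rolled parsing; the structure of the job stays the same.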
Hadoop Developer
Confidential
Responsibilities:
- Extensively worked on customer data integration using Hadoop, Hive, MapReduce, Sqoop, Spark Streaming, Scala, shell scripts, and Java. Loaded sales data into Hadoop tables using Spark Streaming (see the sketch after this list).
- Wrote Hive queries with shell scripts and MR jobs to load delta data files from the bastion server into the Hadoop environment, and loaded data from various sources into production per business requirements.
- Worked on end-to-end data migration from RDBMS to Hadoop staging tables, and from Hadoop to EDW Teradata systems, using Talend and Sqoop.
- Worked on a Spark streaming process to parse XML data files into Hadoop Hive tables per requirements.
- Worked on Hadoop enhancements (CRs) for customer data applications per client requirements.
- Worked on the project deployment process to QA/PR environments using Jenkins with Puppet.
- Worked on Talend ETL jobs to push data from HDFS to Teradata systems.
- Good knowledge of the Kafka distributed messaging system and the Cassandra data flow in this application.
- Responsible for migrating long-running customer marketing applications from Hadoop to Teradata.
- Validated application data against business logic.
- Used technologies including Hadoop, Teradata, Talend, SQL, Hive, MR, ActiveMQ, Kafka, JIRA, and Pivotal Tracker.
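
As a minimal sketch of the Spark Streaming load described above (a DStream job watching an HDFS landing directory; the paths, record layout, batch interval, and table names are hypothetical assumptions):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Sketch: watch a landing folder for new delimited sales files and
// append each micro-batch to a Hive table. Names are illustrative.
object SalesStream {
  case class Sale(storeId: String, amount: Double)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sales-stream")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    val ssc = new StreamingContext(spark.sparkContext, Seconds(60))

    // Pick up files that arrive in the landing directory each minute.
    ssc.textFileStream("/data/landing/sales").foreachRDD { rdd =>
      if (!rdd.isEmpty()) {
        val sales = rdd.map(_.split('|')).collect {
          case Array(store, amount) => Sale(store, amount.toDouble)
        }
        sales.toDS().write.mode("append").saveAsTable("sales.daily_feed")
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```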
Hadoop Developer
Confidential, Atlanta, Georgia
Responsibilities:
- Involved in developing code and shell scripts per business logic, creating Hive queries per client requirements, and implementing business logic and data manipulation with the database. Fixed bugs and provided live production support to the client. Validated data against business logic.
Developer
Confidential, Atlanta, Georgia
Responsibilities:
- Involved in developing code and shell scripts per business logic, creating Hive queries per client requirements, and implementing business logic and data manipulation with the database. Fixed bugs and provided live production support to the client. Validated data against business logic.
Developer
Confidential
Responsibilities:
- Designed screens using JSP with Spring libraries, HTML, and Ajax. Involved in developing Spring controller and service classes. Participated in creating Hibernate data transfer objects and XML files. Validated forms using JavaScript and Ajax. Coordinated team members' work. Involved in writing back-end database queries and in bug fixing.
Developer
Confidential
Responsibilities:
- Involved in designing page layouts using JSPs and in client-side validation using JavaScript. Implemented the data access layer using JDBC.
- Involved in end-to-end testing.