
Informatica Developer Resume


PROFESSIONAL SUMMARY:

  • Around ten years of IT experience, including the development and implementation of Hadoop and data warehousing solutions.
  • Experience with Apache Hadoop ecosystem components such as HDFS, MapReduce, Hive, HBase, Pig, Sqoop, and Impala for big data analytics.
  • Extensive experience importing and exporting data between RDBMS and HDFS using Apache Sqoop.
  • Expertise with the Hive data warehouse infrastructure: creating tables, distributing data through partitioning and bucketing, and developing and tuning HQL queries (see the Hive sketch after this list).
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally on MapReduce and Tez.
  • Experience developing Kafka producers and consumers that stream millions of events per second (a producer/consumer sketch follows this list).
  • Experience manipulating and analyzing large datasets to find patterns and insights in structured and unstructured data.
  • Experience working with NoSQL database technologies, including MongoDB, Cassandra, and HBase.
  • Knowledge of job workflow management and coordination tools such as Oozie.
  • Strong experience productionizing end-to-end data pipelines on the Hadoop platform.
  • Significant experience writing custom UDFs in Hive and custom InputFormats in MapReduce.
  • Good knowledge of writing Spark applications in Scala and PySpark.
  • Hands-on experience designing and developing Spark applications in Scala and PySpark and benchmarking Spark against Hive and SQL/Oracle (see the DataFrame sketch after this list).
  • Good knowledge of Spark DataFrames and Datasets.
  • Expertise in writing real-time processing applications using spouts and bolts in Apache Storm.
  • In-depth knowledge of the Cloudera and Hortonworks Hadoop distributions.
  • Software developer for core Java, client/server, and internet/intranet database applications; developed, tested, and implemented application environments using J2EE, JDBC, JSP, Servlets, web services, Oracle, PL/SQL, and relational databases.
  • Working experience building RESTful web services and RESTful APIs.
  • Solid design skills using Java design patterns and the Unified Modeling Language (UML).
  • Experience working with the Windows (98/NT/2000/XP/7/Server 2008), UNIX, and Linux operating systems.
  • Strong understanding of the real-time streaming technologies Spark and Kafka.
  • Strong understanding of logical and physical database models and entity-relationship modeling.
  • Replaced existing MapReduce jobs and Hive scripts with Spark SQL and Spark data transformations for more efficient data processing.
  • Snowflake schema modeling with fact and dimension tables.
  • Strong understanding of the Java Virtual Machine and multithreading.
  • Experience writing complex SQL queries and creating reports and dashboards.
  • Expertise with ETL tools such as Informatica and Talend.
  • Excellent analytical, communication, and interpersonal skills, along with a positive attitude.
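To make the Hive bullets above concrete, here is a minimal sketch of a partitioned, bucketed table and a partition-pruned query, issued through a Hive-enabled Spark session. The table name (sales), its columns, the partition key (region), and the bucket count are hypothetical illustrations, not details from the roles below.

```python
from pyspark.sql import SparkSession

# Hive-enabled session so Spark can run HQL DDL and queries.
spark = (SparkSession.builder
         .appName("hive-partitioning-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Partitioning prunes whole directories at query time; bucketing hashes
# rows by customer_id into a fixed number of files for faster joins.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales (
        customer_id BIGINT,
        amount      DOUBLE
    )
    PARTITIONED BY (region STRING)
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
    STORED AS ORC
""")

# A tuned query names its partition, so only the EMEA directory is scanned.
spark.sql("""
    SELECT customer_id, SUM(amount) AS total
    FROM sales
    WHERE region = 'EMEA'
    GROUP BY customer_id
""").show()
```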
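The Kafka producer/consumer work can be sketched with the kafka-python client. The broker address, topic name (events), and payloads below are placeholder assumptions.

```python
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
for i in range(1000):
    # send() is asynchronous; the client batches records for throughput.
    producer.send("events", value=f"event-{i}".encode("utf-8"))
producer.flush()  # block until every buffered record is delivered

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",   # start from the beginning of the topic
    consumer_timeout_ms=5000,       # stop iterating after 5 s of silence
)
for record in consumer:
    print(record.offset, record.value)
```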
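Replacing MapReduce jobs and Hive scripts with Spark SQL, as the summary describes, typically reduces to reading the Hive table as a DataFrame and expressing the transformation directly. The table web_logs and its columns are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Read the existing Hive table as a DataFrame instead of launching
# a MapReduce job through a Hive script.
logs = spark.table("web_logs")

daily_hits = (logs
              .filter(F.col("status") == 200)
              .groupBy("log_date")
              .agg(F.count("*").alias("hits")))

# Persist the aggregate back to Hive, overwriting the previous run.
daily_hits.write.mode("overwrite").saveAsTable("web_logs_daily_hits")
```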

PROFESSIONAL EXPERIENCE:

Confidential

Informatica Developer

Responsibilities:

  • Imported and exported data between RDBMS and HDFS using Apache Sqoop (a sample invocation follows this list).
  • Worked with the Hive data warehouse infrastructure: created tables, distributed data through partitioning and bucketing, and developed and tuned HQL queries.
  • Created Hive tables, loaded them with data, and wrote Hive queries that run internally on MapReduce and Tez.
  • Developed Kafka producers and consumers to stream millions of events per second.
  • Manipulated and analyzed large datasets to find patterns and insights in structured and unstructured data.
  • Worked with NoSQL database technologies, including MongoDB, Cassandra, and HBase.
  • Managed and coordinated job workflows with tools such as Oozie.
  • Productionized end-to-end data pipelines on the Hadoop platform.
  • Wrote custom UDFs in Hive and custom InputFormats in MapReduce (a UDF registration sketch follows this list).
  • Wrote Spark applications in Scala and PySpark.
  • Designed and developed Spark applications in Scala and PySpark and benchmarked Spark against Hive and SQL/Oracle.
  • Worked with Spark DataFrames and Datasets.
  • Wrote real-time processing applications using spouts and bolts in Apache Storm.
  • Worked on the Cloudera and Hortonworks Hadoop distributions.
  • Developed core Java, client/server, and internet/intranet database applications; developed, tested, and implemented application environments using J2EE, JDBC, JSP, Servlets, web services, Oracle, PL/SQL, and relational databases.
  • Built RESTful web services and RESTful APIs.
  • Designed with Java design patterns and the Unified Modeling Language (UML).
  • Worked across the Windows (98/NT/2000/XP/7/Server 2008), UNIX, and Linux operating systems.
  • Applied the real-time streaming technologies Spark and Kafka.
  • Modeled logical and physical databases using entity-relationship modeling.
  • Replaced existing MapReduce jobs and Hive scripts with Spark SQL and Spark data transformations for more efficient data processing.
  • Modeled snowflake schemas with fact and dimension tables (a fact/dimension DDL sketch follows this list).
  • Applied knowledge of the Java Virtual Machine and multithreading.
  • Wrote complex SQL queries and created reports and dashboards.
  • Handled ETL tools such as Informatica and Talend.
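A representative Sqoop import, as referenced in the first bullet above, is sketched below by shelling out from Python. The JDBC URL, credentials file, source table, and HDFS target directory are all placeholders.

```python
import subprocess

# Pull the orders table from MySQL into HDFS with four parallel mappers.
subprocess.run(
    [
        "sqoop", "import",
        "--connect", "jdbc:mysql://db-host:3306/sales",
        "--username", "etl_user",
        "--password-file", "/user/etl/.db_password",
        "--table", "orders",
        "--target-dir", "/data/raw/orders",
        "--num-mappers", "4",
    ],
    check=True,  # raise if Sqoop exits non-zero
)
```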
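The custom Hive UDFs mentioned above would normally be Java classes; the PySpark analogue below conveys the same idea by registering a Python function for use in HQL-style queries. The function mask_email and the customers table are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

def mask_email(addr):
    """Hide the local part of an e-mail address, keeping the domain."""
    if addr is None:
        return None
    _, _, domain = addr.partition("@")
    return "***@" + domain if domain else "***"

# Register the function so SQL/HQL queries can call it by name.
spark.udf.register("mask_email", mask_email, StringType())
spark.sql("SELECT mask_email(email) AS email FROM customers LIMIT 10").show()
```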
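The snowflake-schema bullet can be illustrated with a pair of Hive DDL statements issued through Spark: a dimension whose category attribute is normalized onto its own sub-dimension, and a fact table keyed to it. All table and column names are illustrative only.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Dimension table; category_key would point to a separate dim_category
# table, and that normalization is what makes the schema a snowflake.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_product (
        product_key  BIGINT,
        name         STRING,
        category_key BIGINT
    ) STORED AS ORC
""")

# Fact table holding measures, keyed to the dimension.
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        product_key BIGINT,
        sale_date   DATE,
        amount      DOUBLE
    ) STORED AS ORC
""")
```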
