
Lead Software Engineer Resume


Kansas

SUMMARY

  • 10 years of IT experience in architecture, analysis, design, development, implementation, maintenance, and support, with experience developing strategic methods for deploying big data technologies to efficiently solve big data processing requirements.
  • Experience in data warehouse, test, and production environments across business domains such as consumer, insurance, and health care.
  • Architected, developed, and deployed pipelines in AWS, Azure HDInsight, Databricks, Data Factory, Azure Functions, Power BI, and Power Apps.
  • Over 6 years of experience in big data using the Hadoop framework and related technologies such as HDFS, HBase, MapReduce, Hive, Pig, Flume, Oozie, Sqoop, ZooKeeper, Kafka, Storm, Spark, and Elasticsearch.
  • Excellent understanding of building, deploying, and managing applications on the cloud (Azure and AWS).
  • Worked with different file formats such as flat files, SequenceFile, RCFile, ORC, Avro, and Parquet.
  • Well versed in schema design and performance tuning.
  • Experience working with Flume, Oracle GoldenGate, and Kafka to load incremental data from multiple sources directly into data lakes, HDFS, Kafka, Storm, and S3.
  • Experience importing and exporting data with Sqoop between HDFS and relational database systems (RDBMS).
  • Good experience working with Azure, MapR, and Cloudera distributions, and with AWS EMR, Lambda, S3, Redshift, Kinesis, Route 53, RDS, and DynamoDB.
  • End-to-end application monitoring, debugging, and reporting using Splunk, Datadog, JMX, Tableau, and New Relic.
  • Migrated applications into Docker containers managed with Kubernetes.
  • Experience designing both time-driven and data-driven automated workflows using Oozie.
  • Experience in Object-Oriented Analysis and Design (OOAD) and software development using UML methodology; good knowledge of J2EE and Core Java design patterns.
  • Experience building RESTful web services with CSS, JavaScript, and XML/JSON.
  • Extensive experience working with Oracle, DB2, SQL Server, and MySQL databases.
  • Experience writing UNIX shell scripts and Python scripts.

TECHNICAL SKILLS

Hadoop Ecosystem: MapReduce, HDFS, Hive, Impala, Pig, Kafka, Spark, Storm, Sqoop, Splunk, Oozie, Flume and HBase.

Languages: C, C++, Java, J2EE, Scala, Spark SQL, Python, shell scripting, RESTful web services, XML/JSON.

Methodologies: Agile/Scrum.

Databases: Oracle, MySQL; NoSQL databases: HBase, Cassandra, Redshift and Elasticsearch.

IDEs: Eclipse, IntelliJ

ETL & Reporting Tools: Talend, Tableau and Platfora

Version Control: SVN, CVS and Git.

Network Protocols: TCP/IP, UDP, HTTP, DNS, FTP and L2/L3 layer protocols

PROFESSIONAL EXPERIENCE

Confidential, Kansas

Lead Software Engineer

Responsibilities:

  • As Lead Big Data Software Engineer on the Data Platform team, responsible for introducing big data tools and leading other teams with demos, POCs, recommended approaches, and guidelines, and for re-architecting to minimize cost.
  • Architected the platform's migration to the cloud, using big data tools for ETL operations.
  • Created POCs for data lake standards, HDInsight Kafka, Confluent Schema Registry, Hadoop/Sqoop clusters, MongoDB, Azure SQL, Data Factory, Elasticsearch (ELK stack), Snowflake, and Databricks.
  • Transferred real-time change data from Oracle to Kafka topics using Oracle GoldenGate Replicat.
  • Created use cases for Kafka consumers/producers using Kafka Replicator, Kafka Connect, and Kafka Streams in Spark, Python, Java, Ruby, and .NET.
  • Implemented CI/CD for all applications using Git, Docker, Kubernetes, and DevOps tools such as Terraform, Jenkins, and Chef.
  • Used Spark, Spark SQL, and KSQL to process data; used Java Spring Boot to create simple UI applications to manage Kafka ACLs.
  • Customized Schema Registry with custom deserializers; used Burrow and Kafka Manager, and built a custom UI for creating and managing alerts.
  • Designed and automated monitoring alerts and dashboards for cluster and application monitoring using Datadog, Kafka Manager, Burrow, and New Relic.
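The GoldenGate-to-Kafka change-data consumers above boil down to a poll/parse/apply/commit loop. A minimal stand-alone sketch in Python (the broker client is stubbed out here; a real consumer would use a library such as kafka-python, and the record fields are illustrative):

```python
import json

# Stand-in for a real Kafka client (e.g. kafka-python's KafkaConsumer):
# yields (offset, value) pairs the way poll() would.
def fake_poll(records):
    for offset, value in enumerate(records):
        yield offset, value

def process_change_events(raw_records, sink):
    """Parse JSON change-data records and apply them to a sink table,
    committing the offset only after each record is processed
    (at-least-once semantics)."""
    committed = -1
    for offset, value in fake_poll(raw_records):
        event = json.loads(value)
        key = event["id"]
        if event["op"] == "DELETE":
            sink.pop(key, None)
        else:  # INSERT / UPDATE: upsert into the sink
            sink[key] = event["row"]
        committed = offset  # commit after processing, not before
    return committed

records = [
    '{"op": "INSERT", "id": 1, "row": {"name": "a"}}',
    '{"op": "UPDATE", "id": 1, "row": {"name": "b"}}',
    '{"op": "DELETE", "id": 1}',
]
sink = {}
last = process_change_events(records, sink)
# last -> 2; sink -> {} (the row was inserted, updated, then deleted)
```

Committing only after the apply step trades possible duplicate processing for no data loss, which is the usual choice for replication pipelines.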

Confidential, Philadelphia, PA

Sr Hadoop Consultant

Responsibilities:

  • Created and configured custom Flume sources and sinks.
  • Built Spark applications using Spark Streaming and Spark SQL.
  • Developed Apache Storm topologies that pull logs from Flume servers, parse and aggregate them in real time, and store results in MemSQL and Elasticsearch.
  • Worked with Elasticsearch to create indices and resolve performance issues.
  • Developed RESTful web services to pull data from Elasticsearch.
  • Fine-tuned configuration to improve performance and curb recurring log-drop issues.
  • Developed Java MapReduce jobs to parse and aggregate data, store it in an Oracle database, and send it to external systems such as Cassandra via REST APIs.
  • Configured Oozie workflows to schedule the MapReduce jobs.
  • Developed maintenance and monitoring scripts in Pig and Bash for higher visibility and better debugging.
  • Created Splunk alerts and queries to generate reports and aid debugging.
  • Created various visualization reports in Tableau.
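The real-time parse-and-aggregate step in the Storm topologies above can be sketched as a windowed counter. A minimal Python stand-in (the event shape and window size are illustrative; a real bolt would flush these counts to MemSQL/Elasticsearch):

```python
from collections import Counter, defaultdict

def windowed_counts(events, window_secs=60):
    """Aggregate (timestamp, status) log events into per-window status
    counts -- the kind of rolling aggregate a Storm bolt would emit.
    Timestamps are epoch seconds; each event falls into the window
    starting at the nearest multiple of window_secs below it."""
    windows = defaultdict(Counter)
    for ts, status in events:
        windows[ts - ts % window_secs][status] += 1
    return dict(windows)

events = [(0, "200"), (30, "404"), (61, "200"), (65, "200")]
# windowed_counts(events) -> {0: {"200": 1, "404": 1}, 60: {"200": 2}}
```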

Environment: Cloudera Hadoop framework, Flume, Storm, Elasticsearch, MemSQL, MapReduce, Pig, Splunk, Spark, Tableau, Java, UNIX shell scripting.

Confidential, Richardson, TX

Sr Hadoop Consultant

Responsibilities:

  • Involved in architecting the data model to store claims data in a format suited to efficient analysis.
  • Designed the solution to perform ETL tasks such as data acquisition, transformation, cleaning, and efficient storage on HDFS.
  • Developed Sqoop jobs for data acquisition and for exporting reports to SQL Server.
  • Developed Java MapReduce jobs, including a custom parser for EDI-format data.
  • Designed Hive tables to efficiently store incremental data in columnar format.
  • Developed proofs of concept and documented the pros and cons of different tools in the Hadoop space for easy evaluation.
  • Developed Java MapReduce programs and Hive scripts for analytics; built a scoring model based on a number of metrics and built dashboards in Tableau and Platfora for business teams.
  • Developed shell scripts triggered by the TWS scheduler; designed a system that emails job status.
  • Designed and developed a data governance system managing volume usage and data retention/archive/purge; responsible for ongoing support.
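The custom EDI parsing above comes down to splitting a message on its segment and element delimiters before emitting delimited records. A minimal Python sketch (X12 default delimiters assumed for illustration; a real interchange declares them in its ISA segment, and the sample message is hypothetical):

```python
def parse_edi(message, seg_term="~", elem_sep="*"):
    """Split a raw X12-style EDI message into segments and elements --
    a minimal version of the parsing step a MapReduce mapper would run.
    Delimiters default to the common X12 conventions."""
    segments = [s.strip() for s in message.split(seg_term) if s.strip()]
    return [seg.split(elem_sep) for seg in segments]

msg = "ST*837*0001~NM1*85*2*ACME CLINIC~SE*2*0001~"
# parse_edi(msg) ->
# [["ST", "837", "0001"], ["NM1", "85", "2", "ACME CLINIC"], ["SE", "2", "0001"]]
```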

Environment: MapR Hadoop framework, MapReduce, Hive, Pig, HBase, Impala, Spark, Talend, Tableau, Platfora, HParser, Java, Python, UNIX shell scripting.

Confidential, Englewood, CO

Big Data/Hadoop Developer

Responsibilities:

  • Developed Pig UDFs to perform data cleansing and transformation for ETL activities.
  • Developed data pipelines using Flume, Sqoop, Pig, and Java MapReduce to ingest data into HDFS for analysis.
  • Created MapReduce jobs to parse raw web-log data into delimited records.
  • Developed Oozie workflows to automate Hadoop jobs such as Java MapReduce, Pig, Hive, Sqoop, DistCp, and shell scripts.
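Per record, the web-log parsing above is a regex match that re-emits fields as a delimited line. A minimal Python sketch of that mapper logic (Common Log Format assumed for illustration; the actual log layout may differ):

```python
import re

# Common Log Format, for illustration only; real logs may add fields.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\S+)'
)

def to_delimited(line, sep="\t"):
    """Map one raw access-log line to a tab-delimited record, or None
    for malformed input -- the per-record work of the mapper."""
    m = LOG_RE.match(line)
    if not m:
        return None
    return sep.join(m.group("ip", "ts", "method", "path", "status", "bytes"))

line = '10.0.0.1 - - [01/Jan/2015:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512'
# to_delimited(line) ->
# "10.0.0.1\t01/Jan/2015:10:00:00 +0000\tGET\t/index.html\t200\t512"
```

Returning None for malformed lines lets the job count and skip bad records instead of failing the task.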

Confidential

Java/Hadoop Developer

Responsibilities:

  • Moved data from SQL Server to Hadoop HDFS.
  • Worked with Hive and Pig to perform deep analysis.
  • Created and maintained automated scripts for data movement between SQL Server and HDFS.
  • Worked on a project to build a communication model that moves logs generated in flight to the ground station over a secure channel.
  • Used Netfilter, iptables, and tc to achieve the requirements.
  • Implemented applications for bandwidth management and traffic prioritization for the in-flight passenger entertainment system.
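Bandwidth management of the kind described above typically rests on a shaping primitive such as the token bucket used by tc. A minimal Python sketch (rates and capacities are illustrative, not the production values):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter -- the classic traffic-shaping
    primitive. A sketch for illustration, not the production code."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.last = 0.0

    def allow(self, now, cost=1):
        """Return True if a packet costing `cost` tokens may be sent at
        time `now` (seconds); refills tokens for the elapsed time first."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
# Burst of 2 allowed at t=0; the third packet must wait for a refill.
results = [bucket.allow(0), bucket.allow(0), bucket.allow(0), bucket.allow(1)]
# results -> [True, True, False, True]
```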

Confidential

Sr Systems Engineer

Responsibilities:

  • Set up a cross-compilation development environment.
  • Wrote Makefiles and shell scripts.
  • Implemented a firewall using the iptables module; provided support for NAT and routing (RIP implementation).
  • Implemented DDNS service, a telnet feature, and IPv4/IPv6 functionality.
  • Worked on LAN, WAN, IPv4, IPv6, dual-stack, load balancing, DHCP server/client, PPPoE, PPTP, L2TP, firewall, DMZ, SSL VPN, IPsec VPN, NTP, SNMP, authentication, security, and logging.
