Senior Software Developer / Big Data Technologist Resume

Houston, TX

SUMMARY:

Senior software engineer/analyst with 20 years of experience developing Unix/Oracle-based enterprise systems for Fortune 100 companies. Proven track record of delivering and supporting server-side software that streamlines business processes, using Big Data, SQL, scripting, and object-oriented technology with high quality and low production impact. Recognized expertise in data models, ETL data flows, and data migration. Noted for tenacity, teamwork, and attention to detail.

KEY COMPETENCIES:

  • Big Data system design & integration
  • Large-scale distributed systems
  • Data feeds & ETL workflow
  • End-to-end lifecycle versatility
  • Data analysis & reconciliation
  • Production admin & support

TECHNICAL SKILLS:

Big Data: Hadoop administration (Cloudera & Hortonworks), HDFS, YARN, Flume, Kafka, Hive, NoSQL (HBase/Phoenix, Splice Machine), Spark SQL & Streaming, Oozie, Hue

Enterprise Development: Linux (RHEL/Debian), shell scripts, Perl, Oracle 11g PL/SQL, Pro*C, PostgreSQL, ETL workflow, Ganglia, SiteScope, Perforce, Subversion, Git, Maven

Object-Oriented Technology: Java, Scala, C++, servlet, JDBC, XML, UML, refactoring, test-driven development

EXPERIENCE:

Confidential, Houston, TX

Senior Software Developer / Big Data Technologist

Responsibilities:

  • Created the original Hadoop POC cluster after evaluating NoSQL products (including Splice Machine and Cassandra). Ported the data model from the existing RDBMS schema. Co-developed load test suites to benchmark file and DB ingestion. Performed load/capacity estimates to determine cluster layout and Flume configuration.
  • Designed most of the production IoT data flow: two-tier Flume ingestion from 30+ T1 aggregator nodes (standalone agents replaced by embedded Flume SDK). Oozie daily batches clean and load ingested HDFS files into Hive for R and Spark analytics. Set up auxiliary Sqoop reference data loaders, reporting, and de-duplication batches in Oozie. Implemented the design on both Cloudera and Hortonworks clusters. All processing monitored in Hue/Cloudera Manager/Ambari, with reports and alerts via email.
  • Championed HBase/Phoenix for real-time OLTP storage. Co-developed several Scala Spark SQL batch applications to aggregate job-run metrics for training downstream predictive models. Navigated subtle differences between CDH and HDP to create best practices for building and executing Oozie-compatible Spark-Phoenix applications cross-platform from a single code base. Submitted own patch to the Cloudera Phoenix Git repo for CDH 5.12 compatibility.
  • Continuously monitored and optimized performance parameters such as Flume file-roll thresholds, Hive partition size, Oozie/Spark YARN memory allocation, de-duplication logic, and automated archive compaction.
  • Managed and upgraded Cloudera and Hortonworks distributions through multiple releases, and migrated production data between them. Active contributor to the corporate Big Data platform standardization initiative. Continually investigated emerging trends such as hybrid cloud and containerization.
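The two-tier Flume ingestion described above could be sketched with an agent configuration along these lines (agent name, port, and HDFS path are illustrative assumptions, not the production settings):

```properties
# Illustrative T2 collector agent: receives Avro events from T1
# aggregators and lands them in date-partitioned HDFS directories.
t2agent.sources = avroIn
t2agent.channels = fileCh
t2agent.sinks = hdfsOut

t2agent.sources.avroIn.type = avro
t2agent.sources.avroIn.bind = 0.0.0.0
t2agent.sources.avroIn.port = 4545
t2agent.sources.avroIn.channels = fileCh

# Durable file channel so events survive an agent restart.
t2agent.channels.fileCh.type = file

t2agent.sinks.hdfsOut.type = hdfs
t2agent.sinks.hdfsOut.channel = fileCh
t2agent.sinks.hdfsOut.hdfs.path = /data/iot/raw/%Y%m%d
t2agent.sinks.hdfsOut.hdfs.fileType = DataStream
# File-roll thresholds like these are typical tuning knobs
# (roll every 5 minutes or at ~128 MB, never by event count).
t2agent.sinks.hdfsOut.hdfs.rollInterval = 300
t2agent.sinks.hdfsOut.hdfs.rollSize = 134217728
t2agent.sinks.hdfsOut.hdfs.rollCount = 0
```

Rolled files under the dated HDFS path would then be picked up by the daily Oozie clean-and-load batches feeding Hive.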

Confidential, Houston, TX

Database Developer / Feed Specialist

Responsibilities:

  • Analyzed, coded, and integration-tested DB schema and outgoing feed scripts to support federally mandated Central Clearing for Derivatives across the Credit Risk division, ahead of competitors
  • Wrote server scripts (ksh, SQL) to upload derivative trade data to TriOptima on demand via SFTP, enabling self-service third-party collateral portfolio reconciliation for customers
  • Collaborated in a major 2011 re-engineering of the core trade data model and feed loaders: planned, implemented, tested, and deployed a parallel data flow in phases for four data suppliers without disruption
  • Successfully delivered new loaders to onboard 110 million trades from two major upstream systems
  • Perfected a generic ETL testing workflow to support 10 major feed migration projects and regular maintenance, using automated compares and Excel report generation
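The automated-compare step of an ETL testing workflow like the one above can be sketched as a small shell script; the sample data and file names here are invented for illustration:

```shell
#!/bin/sh
# Minimal sketch: compare a legacy feed extract against a migrated one
# and report rows present on only one side.

# Tiny sample extracts (trade_id|amount) standing in for real feeds.
printf 'T1|100\nT2|200\nT3|300\n' > legacy_feed.txt
printf 'T1|100\nT3|301\nT4|400\n' > migrated_feed.txt

# comm needs sorted input to line rows up.
sort legacy_feed.txt   > legacy.sorted
sort migrated_feed.txt > migrated.sorted

# comm -23: rows only in the legacy extract (dropped by migration).
# comm -13: rows only in the migrated extract (added or changed).
comm -23 legacy.sorted migrated.sorted > only_legacy.txt
comm -13 legacy.sorted migrated.sorted > only_migrated.txt

printf 'only_in_legacy=%s only_in_migrated=%s\n' \
  "$(wc -l < only_legacy.txt)" "$(wc -l < only_migrated.txt)"
```

A changed row (here T3's amount) shows up on both sides of the compare, which is what makes the diff useful for reconciliation reports.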

Application Support Analyst

Confidential

Responsibilities:

  • Performed production support for five of Confidential's main Credit Risk and Collateral Management applications, with a WebLogic/WebSphere-hosted Java front end and an Oracle9i/C++/Unix-script back end, with automated monitoring & alerting by Tivoli and SiteScope.
  • Monitored daily processing of over 2,500 system jobs scheduled by Autosys, timely arrival of 1,200 global data feeds, and nightly batch runs that calculated the Bank’s credit-risk exposure to all clients. Managed the migration of a 10 million-trade reconciliation feed off the Business Objects platform to improve system stability.
  • Deployed software releases from multiple teams into production.
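An Autosys job of the kind monitored above is defined in JIL; the job name, machine, and paths below are illustrative assumptions:

```
/* Illustrative JIL: nightly check that credit-risk feeds have arrived. */
insert_job: crd_feed_check   job_type: CMD
command: /opt/creditrisk/bin/check_feed_arrival.sh
machine: prod_batch01
owner: appsupport
start_times: "03:00"
alarm_if_fail: 1
std_out_file: /var/log/autosys/crd_feed_check.out
```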

Senior Technical Staff Member

Confidential, Middletown, NJ

Responsibilities:

  • End-to-end development of large telecom service-ordering and maintenance OLTP systems, including requirements negotiation, platform evaluation, field support, and training, using best-in-class object-oriented software engineering processes based on C++, Java, and web technologies.

Senior Technical Staff Member

Confidential

Responsibilities:

  • Co-designed and maintained the ObjectQ-based server API / data dictionary and client library
  • Served as feature lead in 8 of 20 releases, managing and mentoring junior developers
  • Maintained and enhanced more than five external interfaces (including ObjectQ, MQSeries, LibE, SQL*Net, and Unix batch feed)
