
System Engineer Resume


SUMMARY

  • Over 6 years of IT experience in RDBMS & enterprise data warehousing. Subject matter expertise in design, development, testing, implementation and support of Teradata & Hadoop technologies. Trained on AWS cloud computing with strong knowledge of the Spark framework. Keen to learn emerging technologies, with strong analytical skills.
  • Experienced in designing and developing ETL data pipelines. Proficient in writing advanced PL/SQL and in SQL performance tuning.
  • Strong knowledge of Teradata architecture and Teradata concepts.
  • Extensive knowledge of Teradata utilities such as BTEQ, FLOAD, MLOAD, TPUMP and TPT Export/TPT Load.
  • Proficient in performance analysis, monitoring and SQL query tuning in Teradata using EXPLAIN plans, COLLECT STATISTICS and SQL trace.
  • Well versed in writing macros and stored procedures with multiple loops.
  • Expertise in the Teradata Administrator tool for creating databases, tables and users and granting user access; monitored Teradata performance using Viewpoint.
  • Strong in DWH concepts: dimension tables, fact tables, dimensional data modeling, and star schema and snowflake schema methodologies.
  • Experienced in data analysis and in debugging data issues in batch scheduling and processing systems.
  • Extensively used PL/SQL queries for data verification and validation as part of unit testing.
  • Experienced in the Hadoop ecosystem and in implementing solutions for big data applications, with excellent knowledge of Hadoop architecture (Hive, Sqoop, Spark, MapReduce, Oozie, YARN).
  • Hands-on experience with the Spark framework for batch and real-time data processing.
  • Proficient in developing data transformation and other analytical applications in Spark and Spark SQL.
  • Good at performance tuning of Hive & Spark jobs.
  • Very strong at transformations and actions on Spark RDDs, memory and custom partitioning, with a good understanding of Spark architecture (illustrated in a sketch after this list).
  • Working experience with fixed-width and delimited text files, JSON, ORC, Avro and Parquet files in Hive, Spark and Sqoop.
  • Imported and exported large sets of data from RDBMS to HDFS and vice versa using Sqoop.
  • Worked on Hive partitioning and bucketing concepts, creating partitioned external and internal Hive tables (illustrated in a sketch after this list).
  • Experienced in writing Oozie workflows to schedule shell script jobs in the Hadoop environment.
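
As a minimal illustration of the Spark RDD work described above, the following Spark-with-Scala sketch applies transformations and actions on a pair RDD with a custom HashPartitioner; the input path, field layout and partition count are hypothetical.

    import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

    object RddSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("rdd-sketch"))

        // Hypothetical pipe-delimited feed: accountId|txnAmount
        val txns = sc.textFile("hdfs:///data/txns")

        // Transformations are lazy: parse, drop malformed records, key by account,
        // and repartition explicitly before the wide operation
        val totals = txns
          .map(_.split("\\|"))
          .filter(_.length == 2)
          .map(f => (f(0), f(1).toDouble))
          .partitionBy(new HashPartitioner(64))
          .reduceByKey(_ + _)

        // Actions trigger execution
        println(s"accounts: ${totals.count()}")
        totals.saveAsTextFile("hdfs:///out/txn_totals")

        sc.stop()
      }
    }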
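
As a sketch of the Hive partitioning work mentioned in the list above (assuming Spark 2.x with Hive support enabled; the table, columns and paths are hypothetical), a staged JSON feed is landed into a partitioned external ORC table through Spark SQL:

    import org.apache.spark.sql.SparkSession

    object HivePartitionSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("hive-partition-sketch")
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical partitioned external Hive table stored as ORC
        spark.sql(
          """CREATE EXTERNAL TABLE IF NOT EXISTS sales_ext (
            |  order_id STRING, amount DOUBLE)
            |PARTITIONED BY (load_dt STRING)
            |STORED AS ORC
            |LOCATION 'hdfs:///warehouse/sales_ext'""".stripMargin)

        // Read a staged JSON feed and write it into one static partition
        spark.read.json("hdfs:///staging/sales/2020-01-01")
          .createOrReplaceTempView("staged_sales")

        spark.sql(
          """INSERT OVERWRITE TABLE sales_ext PARTITION (load_dt = '2020-01-01')
            |SELECT order_id, amount FROM staged_sales""".stripMargin)

        spark.stop()
      }
    }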

TECHNICAL SKILLS

  • Technologies: Teradata 14, Hadoop (HDFS, MapReduce, Spark with Scala, Hive, Sqoop, Oozie)
  • Cloud Computing: AWS (RDS, S3, EMR, IAM)
  • Languages: JCL, SQL, C, UNIX shell scripting, Excel macros
  • Additional Packages: TSO/ISPF, CA7, Changeman, NDM, Teradata SQL Assistant, PODS

PROFESSIONAL EXPERIENCE

System Engineer

Confidential

Responsibilities:

  • Analyzed source data coming from different systems and worked with business users and developers to design the DW model.
  • Translated requirements into business rules and made recommendations for innovative IT solutions.
  • Implemented the DW tables in a flexible way to cater to future business needs.
  • Extensively used environment SQL commands in workflows prior to extracting the data in the ETL tool.
  • Removed bottlenecks at the source, transformation and target levels for optimum usage of sources, transformations and target loads.
  • Captured erroneous data records, corrected them and loaded them into the target system.
  • Interfaced with and supported QA/UAT groups to validate functionality.
  • Used Autosys and Oozie as scheduling tools in the project.
  • Scheduled and monitored the workflows; provided proactive production support after go-live.
  • Extracted data from legacy systems, placed it in HDFS and processed it.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Experienced in managing and reviewing Hadoop log files.
  • Loaded and transformed large sets of structured, semi-structured and unstructured data.
  • Involved in loading data from UNIX file system to HDFS.
  • Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs (see the sketch after this list).
  • Used Oozie as an automation tool for running the jobs.
  • Coordinated with the client and the offshore team for the project, including status meetings.
  • Worked with business users to create reports and engaged them for testing.
  • Analyzed file arrival timings for scheduling mainframe jobs and Autosys jobs.
  • Tested and validated the code by executing test scripts to ensure that all client requirements were satisfied, and monitored system performance by collecting and analyzing metrics.
  • Annotated documents for code development.
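
As a rough sketch of the load path described in the bullets above (UNIX file system to HDFS, then a Hive table over the landed data), written in Spark with Scala: the paths and column layout are hypothetical, and the same HiveQL, when run through the Hive CLI, would execute as MapReduce jobs.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.sql.SparkSession

    object LandAndLoadSketch {
      def main(args: Array[String]): Unit = {
        // Copy a feed file landed on the UNIX file system into HDFS (hypothetical paths)
        val fs = FileSystem.get(new Configuration())
        fs.copyFromLocalFile(new Path("/landing/feed/accounts.dat"),
                             new Path("hdfs:///raw/accounts/accounts.dat"))

        val spark = SparkSession.builder()
          .appName("land-and-load-sketch")
          .enableHiveSupport()
          .getOrCreate()

        // Hive table over the landed pipe-delimited data, then a quick validation query
        spark.sql(
          """CREATE EXTERNAL TABLE IF NOT EXISTS raw_accounts (
            |  acct_id STRING, balance DOUBLE)
            |ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
            |LOCATION 'hdfs:///raw/accounts'""".stripMargin)

        spark.sql("SELECT COUNT(*) AS rec_count FROM raw_accounts").show()

        spark.stop()
      }
    }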

Environment: Teradata SQL Assistant, File-Aid, File-Manager, CA7, Changeman, UNIX, Hadoop - HDFS, MapReduce, Oozie, Sqoop, Hive, Scheduler - Autosys, Operating System - OS/390.

Teradata Developer

Confidential

Responsibilities:

  • Analyzed Collect Statistics definitions, either for removal or for applying sampling.
  • Analyzed jobs for changing the frequency at which statistics are collected.
  • Creating weekly and monthly reports on the CPU cycles saved.
  • Generating dashboard & performance metrics.
  • Converted utilities, for example MLOAD to TPUMP, to free up load slots for job runs.
  • Provide users assistance with access issues and data questions on all the Teradata platforms.
  • Assisted users with tuning complex queries that adversely affect the Teradata platform: efficient use of PI/SI indexes, join indexes and PPI; using EXPLAIN to analyze data distribution among AMPs and index usage; collecting statistics; defining indexes; revising correlated subqueries; etc.
  • Provide users assistance with access issues and data questions on all the Hadoop platforms.
  • Analyzed various SESS tools and provided enhancement requests for development.
  • Coordinated with new users from different Lines of Business (LOBs) in onboarding to the Teradata and Hadoop platforms and educated them.
  • Conduct daily calls with new users and assist them with their issues.
  • Experienced in generating reports using PDCR, DBQL and DBC tables for senior management at the bank; these reports cover requirements such as assessing CPU/IO/space usage for different applications.
  • Spearheaded enhancement and automation of the traditional access review process to save cost and effort.
  • Identified high-CPU-impact and heavy-hitter queries and suggested tuning of poorly performing queries to the load teams (see the sketch after this list).
  • Developed business architecture such as scope, processes, alternatives and risks to cover new threat reports.
  • Created control procedure documents along with end-to-end process flow diagrams using ARIS, a business process modelling & analysis tool.
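
A minimal sketch, in Scala over JDBC, of the kind of DBQL-based heavy-hitter report referenced above; it assumes the Teradata JDBC driver and the standard DBC.QryLogV query-log view, and the host, credentials and row limit are placeholders rather than the actual process used.

    import java.sql.DriverManager

    object HeavyHitterSketch {
      def main(args: Array[String]): Unit = {
        Class.forName("com.teradata.jdbc.TeraDriver")
        val conn = DriverManager.getConnection(
          "jdbc:teradata://tdprod.example.com/DATABASE=dbc", "svc_user", "********")

        // Yesterday's top CPU consumers from the query log
        val rs = conn.createStatement().executeQuery(
          """SELECT TOP 20 UserName, QueryID, AMPCPUTime, TotalIOCount
            |FROM DBC.QryLogV
            |WHERE CAST(StartTime AS DATE) = CURRENT_DATE - 1
            |ORDER BY AMPCPUTime DESC""".stripMargin)

        while (rs.next()) {
          val user = rs.getString("UserName")
          val qid  = rs.getString("QueryID")
          val cpu  = rs.getDouble("AMPCPUTime")
          val io   = rs.getDouble("TotalIOCount")
          println(f"$user%-20s qid=$qid%-20s cpu=$cpu%12.2f io=$io%14.0f")
        }

        conn.close()
      }
    }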

Environment: Teradata SQL Assistant, Teradata Viewpoint, File-Aid, File-Manager, CA7, Changeman, Excel Macros, Operating System - OS/390

Teradata Developer

Confidential

Responsibilities:

  • Participated in client discussions to gather requirements and perform initial analysis to plan for the extensive disaster recovery analysis.
  • Interacted with project stakeholders from other technology teams such as Sourcing, Testing and DBAs (TDMs, PMs, BAs, etc.).
  • Provided inputs for the overall implementation plan, led deployment of applications/infrastructure and handled post-production support activities.
  • Developed complex modules and delivered defect-free and highly optimized deliverables.
  • Triggered daily emails with data from the SQL Server database to the desired recipients through an SSIS package.
  • Modified existing SSIS packages to meet current needs.
  • Performed detailed analysis of data according to the mapping & transformation rules, which helped catch many sourcing issues upfront.
  • Developed MS Visio flows to illustrate the mapping from source to target of multiple applications.
  • Mentored other team members on important aspects of the project and the technology for overall team building and to consistently raise the quality of deliverables.
  • Handled critical testing timelines with end to end processing of daily cycle files and validating the data.
  • As the Disaster Recovery lead, coordinated the annual DR exercise for Mainframe, Teradata and Hadoop platform applications.
  • Ensure that different technical groups create Disaster Recovery processes and procedures that can be executed.
  • Reported on post-DR activities to a wide audience, and tracked and resolved DR defects after the DR exercise, fixing them prior to the next exercise.

Environment: Teradata, Tableau & SQL Server
