
Data Architect Resume

Carrollton, TX

SUMMARY:

  • Overall 10+ years of professional IT experience in implementation, support, and testing, including 1.5+ years implementing Spark, Spark Streaming, and Kafka; 4+ years with the Big Data/Hadoop ecosystem; and 6+ years implementing Siebel/OBIEE/Informatica/PL SQL.
  • Self-starter and excellent team player with strong consulting, interpersonal, writing, communication, and techno-functional skills.
  • Experience with Hadoop clusters using major Hadoop distributions: AWS and CDH5.
  • Experience with major Hadoop ecosystem components such as Spark Streaming, Spark, Kafka, ZooKeeper, YARN, AWS, S3 buckets, Sqoop, HDFS, MapReduce, Hive, Pig, and HBase, and job/workflow scheduling tools like Oozie, monitored through Cloudera Manager.
  • Experience working with Spark using Scala; worked on streaming structured and unstructured data and processing it using Spark SQL.
  • Experience working with the NoSQL database HBase and the columnar database MemSQL.
  • Created HBase tables and Hive tables to store large sets of structured, semi-structured, and unstructured data from various sources.
  • Expertise in importing/exporting data into S3 buckets from existing relational databases and vice versa using Sqoop.
  • Worked on converting stored procedures into Spark transformations using Scala.
  • Imported data from AWS S3 and converted, transformed, and performed actions on RDDs, DataFrames, and Datasets using Scala (a minimal sketch follows this list).
  • Constantly monitored the software configuration/development/testing process to assure a quality deliverable with minimal defects.
  • Researched technical requests and other issues raised throughout all phases of client projects.
  • Ensured Quality Assurance (QA) standards were met as part of ongoing development, in accordance with client standards and the QA methodology.
  • Acted as the client's representative and faced the customer for all technical, database, and infrastructure-related issues.
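
As a minimal illustration of the S3/DataFrame/Dataset work referenced above (the bucket path, record fields, and class names are hypothetical, not taken from any specific project):

    import org.apache.spark.sql.SparkSession

    // Hypothetical record type used to obtain a typed Dataset
    case class Account(accountId: String, region: String, balance: Double)

    object S3AccountsJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("s3-accounts-job")
          .getOrCreate()
        import spark.implicits._

        // DataFrame API: read JSON objects from an S3 bucket (placeholder path)
        val accountsDf = spark.read.json("s3a://example-bucket/accounts/")

        // Dataset API: bind the rows to the case class for compile-time typing
        val accounts = accountsDf.as[Account]

        // Transformation followed by an action
        val highValue = accounts.filter(_.balance > 10000.0)
        println(s"High-value accounts: ${highValue.count()}")

        spark.stop()
      }
    }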

TECHNICAL SKILLS:

Applications: Communications, Income Tax, Finance, Life Sciences, Automotive, Call Center, eSales, eService, eConsumer, Production Support

Big Data: Spark Streaming, Spark, Spark SQL, Kafka, ZooKeeper, YARN, AWS, S3, EMR, Pipelines, Sqoop, HDFS, MapReduce, Hive, Pig, HBase, Flume, Oozie

Databases: Oracle, MemSQL, MySQL, HBase, SQL Server

Programming Languages: Python, Scala, Java (Core Java), Shell scripting

Analytics: OBIEE, Tableau

Integration: PL/SQL, JMS, MSMQ, EBC, VBC, Web Services, IBM MQ Series, TIBCO

ETL: Informatica, Spark

Siebel Areas: Installation, Tools Configuration, Scripting, Workflows, EAI, Application Deployment Manager, Asset Management, Order Management, Quote Management, Pricing

Scripting: Visual Basic, COM, XML, Siebel VB, eScript

PROFESSIONAL EXPERIENCE:

Confidential, Carrollton, TX

Data Architect

Responsibilities:

  • Responsible for building scalable distributed solutions using Spark Streaming and Kafka.
  • Participated in business requirement gathering and translating requirements into technical specifications, development and testing.
  • Installed Kafka on 10 AWS EC2 instances: 6 brokers, 3 ZooKeeper nodes, and 1 EC2 instance running Yahoo Kafka Manager to manage Kafka.
  • Installed a Spark cluster on AWS on 6 EC2 m3.xlarge instances (1 master and 5 slaves).
  • Developed simple and complex Spark Streaming jobs using Scala to consume data from Kafka topics, process the JSON objects received from different streams using Spark SQL, and store the results in MemSQL and S3 buckets (see the sketch after this list).
  • Worked with structured streams at the RDD level and joined RDDs to produce golden records.
  • Developed nightly batch jobs using both Spark and Spark SQL.
  • Worked on different file formats such as XML, JSON, TXT, and CSV.
  • Worked on DataFrames and Datasets, used case classes, and defined schemas where required.
  • Designed Kafka topics and partitions, and used GZIP compression.
  • Used Kryo serialization and string serialization in different business use cases.
  • Manually managed offsets in ZooKeeper using the Kafka direct stream API and the zkClient API.
  • Continuously monitored and managed both the Kafka cluster and the Spark cluster.
  • Used Kafka Manager to manage Kafka and managed ZooKeeper using shell commands.
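
A minimal sketch of the kind of Spark Structured Streaming job described above, assuming placeholder broker addresses, a hypothetical topic and JSON schema, and example S3 paths (it needs the spark-sql-kafka-0-10 package on the classpath); the MemSQL sink and the manual ZooKeeper offset management mentioned above are not shown.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.from_json
    import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

    object KafkaToS3Stream {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-to-s3-stream")
          // Kryo serialization, as used in several of the business use cases above
          .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .getOrCreate()
        import spark.implicits._

        // Hypothetical schema for the incoming JSON payload
        val eventSchema = new StructType()
          .add("eventId", StringType)
          .add("customerId", StringType)
          .add("eventTime", TimestampType)

        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092") // placeholder brokers
          .option("subscribe", "customer-events")                         // placeholder topic
          .option("startingOffsets", "latest")
          .load()

        // Kafka delivers the value as bytes: cast it to a string and parse the JSON
        val events = raw
          .selectExpr("CAST(value AS STRING) AS json")
          .select(from_json($"json", eventSchema).as("event"))
          .select("event.*")

        // Write to S3 as Parquet; the checkpoint directory tracks stream progress
        val query = events.writeStream
          .format("parquet")
          .option("path", "s3a://example-bucket/events/")                     // placeholder bucket
          .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
          .outputMode("append")
          .start()

        query.awaitTermination()
      }
    }

The checkpoint location is what lets the sink resume from the correct Kafka offsets after a restart.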

Environment: Hadoop, AWS, Spark, Spark Streaming, EMR, Redis, Kafka, ZooKeeper, YARN, Scala, MemSQL, Hive, Snowflake, Oracle PL/SQL, SQL Server.

Confidential, Plano, TX

Sr. Data Engineer

Responsibilities:

  • Participated in business requirement gathering and translating requirements into technical specifications, development and testing.
  • Developed solutions to process data into HDFS (Hadoop Distributed File System), process within Hadoop and emit the summary results from Hadoop to downstream systems.
  • Worked on analyzing data in Hadoop cluster using different big data analytic tools including Pig, Hive, Spark and Sqoop.
  • Involved in importing and exporting data (SQL Server, Oracle, csv and other formats) from local/external file system and RDBMS to HDFS and vice versa using Sqoop.
  • Developed Spark applications using Scala and Python for data extraction, transformation, and loading (see the sketch after this list).
  • Worked on Spark Streaming using Scala for real-time reporting and monitoring.
  • Knowledgeable in developing custom UDFs to extend Hive functionality.
  • Responsible for managing data coming from different sources.
  • Developed workflows in Oozie to automate loading data into HDFS and pre-processing it with Pig.
  • Performed text mining using Scala scripts to extract relevant BI information.
  • Used collections in Scala for manipulating and looping through different user-defined objects.
  • Designed the logical and physical data models, generated DDL scripts, and wrote DML scripts for an Oracle 10g database.
  • Integrated data from multiple sources (SQL Server, DB2, Oracle) into the Hadoop cluster and analyzed it using Spark.
  • Worked on creating a live stream of data from a traditional RDBMS using Kafka Connect so it could be consumed by Spark Streaming.
  • Deep understanding of how Kafka brokers are managed by ZooKeeper and how offset management plays a critical role in delivering messages to consumers.
  • Understood complex data structures of different types (structured, semi-structured) and de-normalized them for storage in Hadoop.
  • Worked as a production support member resolving production issues.
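
A minimal sketch of the Spark ETL pattern described above, assuming hypothetical HDFS paths, column names, and a Hive target table; the actual jobs, sources, and schemas varied by use case.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, sum}

    object OrdersEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("orders-etl")
          .enableHiveSupport()   // allows the job to write managed Hive tables
          .getOrCreate()

        // Raw extract landed in HDFS by a Sqoop import job (hypothetical path/layout)
        val orders = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///data/raw/orders/")

        // Simple cleanup and aggregation
        val summary = orders
          .filter(col("status") === "COMPLETED")
          .groupBy(col("region"), col("order_date"))
          .agg(sum(col("amount")).as("total_amount"))

        // Publish the summary for downstream reporting (hypothetical Hive table)
        summary.write
          .mode("overwrite")
          .saveAsTable("analytics.daily_order_summary")

        spark.stop()
      }
    }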

Environment: Hadoop, HDFS, Pig, Hive, MapReduce, Sqoop, Spark, Scala, Oozie, Flume, Kafka Connect, Shell Scripting.

Confidential, Kansas City, MO

Hadoop Developer

Responsibilities:

  • Involved in all phases of the project: gathering business requirements, translating business requirements into technical specifications, development, and testing.
  • Designed Technical Design Documents and Functional Design Documents.
  • Installed and configured Hive, Pig, Sqoop, Flume and Oozie on the Hadoop cluster.
  • Used Sqoop to extract data from Oracle server and MySQL databases to HDFS.
  • Involved in loading data from the UNIX file system to HDFS and managing incoming data.
  • Developed workflows in Oozie for business requirements to extract the data using Sqoop.
  • Developed MapReduce (YARN) jobs for cleaning, accessing and validating the data.
  • Used Pig scripts and Hive queries to prepare source data for specific use cases and loaded the data into specific data marts.
  • Optimized the existing Hive and Pig scripts as new strategies were learned.
  • Automated the workflows to export data from databases into Hadoop.
  • Designed workflows by scheduling Hive processes for log file data, which was streamed into HDFS using Flume.
  • Worked on Different File Formats such as XMLs, JSON and CSV.
  • Used HDFS as a data staging area and then loaded the data into the enterprise data warehouse.
  • Integrated Hadoop into traditional ETL, accelerating the extraction, transformation, and loading of massive structured and unstructured data.
  • Involved in migrating some ETL processes from Oracle 10g to Hadoop, utilizing Hive as a SQL interface for easy data manipulation.

Environment: Hadoop, MapReduce, HDFS, Hive, Pig, Sqoop, Flume, Oozie, HBase, and Oracle.

Confidential, Kansas City, MO

Sr. Siebel Developer/OBIEE/ETL Developer

Responsibilities:

  • Created Siebel screens and business components with visibility modes and pick lists.
  • Created new joins, links, MVGs, pick applets, and business component user properties.
  • Created and modified applet-level server and browser scripts.
  • Created business services for creating activities and assigning them based on certain conditions, calculating the next available business day using the holiday calendar in Siebel.
  • Created indexes, predefined queries, and symbolic URLs.
  • Created a new applet to display custom buttons throughout the Siebel application so that users have a cleaner screen and larger buttons.
  • Implemented Siebel Task-Based UI solutions for business requirements.
  • Developed Siebel workflows for customer-specific requirements; created new user properties and workflow policies to trigger workflow processes.
  • Created lists of values, state models, business services, BCs, applets, views, and screens.
  • Developed PL/SQL procedures to load data from different interfaces into Siebel using EIM.
  • Responsible for compiling the repository in Siebel Tools and replacing the SRF on the Siebel servers.
  • Upgraded OBIEE 10g to 11g and set up new development servers with the help of a third-party vendor.
  • Developed many OBIEE reports for users to carry out their day-to-day business.
  • Created new iBots and dashboards, and migrated reports between environments.
  • Migrated repositories from one environment to other environments.
  • Scheduled reports to be delivered to users using Delivers and the email delivery option.
  • Applied several patches to the OBIEE 11g suite to resolve critical errors.
  • Created new ETL processes to load new data into the data warehouse.
  • Used the DAC client to manage, configure, and monitor the ETL process.
  • Troubleshot report failures, iBot failures, and caching failures in production.

Environment: Siebel 8.1.1 Siebel Finance, Siebel Analytics, Configuration, Scripting, Workflows, EAI, Task UI, EIM, Excel Reports, Oracle 11g, Informatica, WebLogic Server, and Testing

Confidential, Akron, OH

Sr. Siebel Developer/OBIEE/ETL Developer

Responsibilities:

  • Involved in all phases of the project: understanding business requirements, translating business requirements into technical specifications, development, and testing.
  • Worked extensively on configuration for heavy customization; created new Siebel objects in the UI, BO, and DO layers to achieve business functionality.
  • Created joins, links, MVGs, pick lists, pick applets, BC user properties, calculated fields, applets, BOs, integration objects, new business services, and server and browser scripts.
  • Developed workflows for customer-specific requirements; created new user properties to invoke workflow processes based on user changes from BCs, and created custom components.
  • Implemented an integration methodology of posting messages to queue tables using Java Messaging System (JMS) middleware.
  • Created and modified existing workflows to post messages into the Java Messaging System, replacing synchronous integration.
  • Used standard data transformation business services such as EAI XML Read from File, EAI XML Write to File, and EAI JMS Transport to enable transformation of data in Siebel EAI.
  • Debugged and maintained Siebel workflows.
  • Created EBCs and VBCs to access data from the Oracle Trade Management application database.
  • Created new inbound/outbound web services and tested them with SoapUI.
  • Worked on the Product, Accounts, Promotions, Deductions, Claims, Tactics, Sales Volume Planning, Funds Creation, and Administration of Funds modules.
  • Created new profile configurations for the JMS queue connection and the EBC connection.
  • Developed ETL packages to transform data using SQL stored procedures and Informatica.
  • Created IFB and KSH files for EIM, and database views and queue tables for the JMS queue.
  • Responsible for compiling the repository in Siebel Tools and replacing the SRF on the Siebel servers.
  • Created OBIEE reports, BIP reports, and OBIEE dashboards and integrated them into Siebel.

Environment: Siebel 8.0.10 Consumer Goods, Configuration, Scripting, Workflows, EAI, Java Messaging System, EIM, Excel Reports, Oracle 10g, PL/SQL, Informatica, OBIEE 10g, and Testing

Confidential, Minneapolis, MN

Sr. Siebel Developer

Responsibilities:

  • Involved in all phases of the project life cycle, including understanding business requirements, translating business requirements into technical specifications, development, and testing.
  • Involved in configuring many requirements involving joins (including join constraints), links, server scripts, browser scripts, and business services.
  • Created run-time events to trigger actions based on data conditions.
  • Developed both inbound and outbound workflows and workflow policies.
  • Created job templates for repeating component requests for integration needs.
  • Tested internal and external integration objects as per interface requirements.
  • Worked on integration between Siebel and SAP applications.
  • Developed an interface between Siebel and the external systems of FedEx and DHL for tracking package delivery status, using MQ Series as the middleware for communication.
  • Involved in EAI integration with SAP for placing orders.
  • Extended the error and exception handling framework (Oracle AIA PIP Extension Service).
  • Tested and validated the PIP extension using end-to-end integration testing scenarios.
  • Worked in production support and was the primary go-to person for the business users.
  • Prepared technical design documents and test cases, and performed unit testing and regression testing for the developed requirements.
  • Worked with the offshore team, assigning work and assisting in the development process.
  • Designed and developed interfaces for EIM based on client requirements.

Environment: Siebel 7.7.2.12 Tools, Workflows, Java Scripting, EAI, EIM, Workflow Policies, Excel Reports and Actuate Reports, Oracle 10g.

Confidential, Hershey, PA

Siebel Developer

Responsibilities:

  • Involved in all phases of the project life cycle, including understanding business requirements, translating business requirements into technical specifications, development, and testing.
  • Involved in the design and development of user interface layers such as applets, views, BCs, BOs, IOs, ICs, joins, links, MVGs, picklists, pick applets, BC user properties, and screens.
  • Extended Siebel base tables to meet business requirements.
  • Developed new Siebel workflows for customer-specific requirements; created new RCRs and workflow policies for run-time events, and debugged existing workflows.
  • Used standard data transformation business services such as EAI XML Read from File and EAI XML Write to File to enable transformation of data in Siebel EAI.
  • Extensively involved in creating functional and technical design documents.
  • Worked extensively on Siebel EAI and configuration for heavy customization; created new Siebel objects in the UI, BO, and DO layers to achieve business functionality.
  • Worked on workflows, Assignment Manager, and Siebel scripting.
  • Configured new integration objects and access control mechanisms for visibility.
  • Developed integration solutions using the EAI web methods transport service and eScript.
  • Configured internal and external integration objects.
  • Worked on inbound and outbound integration using web services and HTTP.
  • Developed Siebel workflows for customer-specific requirements.
  • Performed server-side and browser-side scripting at the applet, BC, and business service levels.
  • Performed unit, integration, and regression testing, and supported the production environment.

Environment: Siebel 8.0, Configuration, Java Scripting, Workflows, EAI, EIM, and Oracle PL/SQL.
