Big Data/Hadoop Developer Resume
Cleveland, OH
PROFESSIONAL SUMMARY:
- 10+ years of total IT experience in analyzing, designing, administering, tuning, and developing client/server applications, including 3+ years in Big Data/Hadoop development, ecosystem analytics, and the design and development of Java-based enterprise applications.
- Experience with Big Data using the Hadoop framework and related technologies such as HDFS, HBase, MapReduce, Hive, Pig, Impala, Flume, Oozie, Sqoop, Spark, and ZooKeeper.
- Experience working with the Cloudera CDH3, CDH4, and CDH5 distributions, and with Flume to load log data from multiple sources directly into HDFS.
- Experience in importing and exporting data using Sqoop from HDFS to Relational Database Systems (RDBMS) and from RDBMS to HDFS.
- Experience in data analysis using Hive, Pig Latin, Impala, HBase, and custom MapReduce programs in Java.
- Experience in writing custom UDFs in Java for Hive and Pig to extend the functionality.
- Around 6 months of experience with Spark, Scala, and DataFrames.
- Developed analytical components using Kafka, Scala, Spark SQL, and Spark Streaming.
- Worked extensively on the Spark SQL and Spark Streaming modules and used Scala to write code for all Spark use cases.
- Experience in load balancing and stress testing.
- Experience with SequenceFile and Avro file formats and compression.
- Experience in working with Amazon Web Services EC2 instances and S3 buckets.
- Experience in designing both time driven and data driven automated workflows using Oozie.
- Implemented Hadoop-based data warehouses and integrated Hadoop with enterprise data warehouse systems.
- Extensive experience in Data Ingestion, In-Stream data processing, Batch Analytics and Data Persistence Strategy.
- Worked on Windows and Unix/Linux platforms.
- Experience working with Oracle and MySQL databases.
- Experience in creating Spark contexts, Spark SQL contexts, and Spark Streaming contexts to process huge data sets.
- Promote a full-cycle approach including request analysis, creating/pulling datasets, report creation and implementation, and providing final analysis to the requestor.
- Very good understanding of SQL, ETL, and data warehousing technologies.
- Experience with Business Intelligence (BI) database applications and various segments of the SDLC, using MS SQL Server 2014/2012/2008/2005/2000, DTS/SSIS, and Reporting & Analysis Services.
- Extensive experience with all phases of a project (project initiation, requirement and specification gathering, system design, administration, coding, testing, and debugging of new and existing client/server applications).
- Expertise in database optimization using tools such as Database Engine Tuning Advisor, SQL Profiler, DBCC utilities, and Windows Performance Monitor for monitoring and tuning MS SQL Server performance.
- A well-organized, goal-oriented, highly motivated and effective team leader/member with excellent analytical, troubleshooting, and problem-solving skills.
- Excellent verbal and written communication skills; strong in documentation.
- Flexible, enthusiastic, and project-oriented team player with solid communication and leadership skills to develop creative solutions for challenging client needs.
TECHNICAL SKILLS:
Databases: MS SQL Server 2014/2012/2008/2005/2000/7.0, Oracle 8i/9i/10g, MySQL, Toad
Big Data/Hadoop Framework: HDFS, MapReduce, Pig, Hive, Sqoop, Oozie, ZooKeeper, Flume, HBase, Spark (Spark-Scala, PySpark), Kafka, Hue, Impala, Cloudera Hadoop CDH5, Cloudera Manager CM5, Amazon Web Services
Languages: C, C++, C#, Java, Scala, Python, SQL, T-SQL, PL/SQL, Pig Latin, HiveQL, HTML, XML, ASP.NET, VB.NET
Operating System: CentOS, Linux, Windows 98/2000/2003/XP/NT/Vista, Windows 2000 Advanced Server, Windows 2003 Enterprise Server
Development tools: Microsoft SQL Server Management Studio, Visual Studio 2013/2010, Eclipse, NetBeans
Development methodologies: Agile/Scrum, Waterfall
Other tools: Git, MS SQL Server Reporting Services 2008/2005/2000 (SSRS), MS SQL Server Analysis Services 2008/2005 (SSAS), MS SQL Server Integration Services 2014/2012/2008/2005/2000 (SSIS), Data Transformation Services (DTS), ODBC, SQL Server Management Studio (SSMS), Erwin 7.2/7.1/4.1, MS Visio 2007/2003, BCP, Active Directory, RS Utility, MS Office
PROFESSIONAL EXPERIENCE:
Confidential, Cleveland, OH
BigData/Hadoop Developer
Responsibilities:
- Imported data from MySQL into HDFS and exported data from HDFS back to MySQL using Apache Sqoop
- Modified and optimized source databases to speed up imports into HDFS
- Performed analysis of Twitter data by importing it into HDFS using Apache Flume
- Used Sqoop to import Teradata data into HDFS
- Cleaned and preprocessed data using MapReduce for efficient data analysis
- Used Scala and Java to develop MapReduce programs for data cleansing and analysis
- Developed custom Hive UDFs to manipulate data sets
- Created Hive compact/bitmap indexes to speed up data processing
- Created/inserted/updated tables in Hive using DDL and DML commands
- Improved query performance by partitioning and bucketing datasets
- Worked with Hive file formats such as ORC, SequenceFile, and text file to load data into tables and run queries
- Migrated 100+ TB of data from different databases (e.g., Oracle, SQL Server) to Hadoop, writing code across Hadoop ecosystem applications to achieve the required output within sprint time periods
- Worked with various file formats (Avro, Parquet, text) and SerDes, using Snappy compression
- Used custom Pig loaders to load different file types such as XML, JSON, and CSV
- Developed Pig Latin scripts to extract data from web server output files and load it into HDFS
- Scheduled workflows of jobs using Oozie to perform sequential and parallel processing
- Worked on the NoSQL database HBase to perform operations on sparse data sets
- Developed shell and Python scripts to check the health of Hadoop daemons and schedule jobs
- Integrated Hive with HBase to upload data and perform row level operations
- Experienced in creating SparkContexts and performing RDD transformations and actions using the Python API
- Used SparkContext to create RDDs from incoming data and perform Spark transformations and actions
- Created a Spark SQLContext to load data from Parquet and JSON files and perform SQL queries
- Created DataFrames out of text files to execute Spark SQL queries
- Used Spark's enableHiveSupport to execute Hive queries in Spark (a minimal Scala sketch follows this role)
- Created DStreams on incoming data using createStream
- Developed Spark streaming applications to work with data generated by sensors in real time
- Linked Kafka and Flume to Spark by adding the required dependencies for data ingestion
- Performed data extraction, aggregation, and log analysis on real-time data using Spark Streaming
- Created Broadcast and Accumulator variables to share data across nodes
- Used Scala case classes, higher-order functions, and collections to apply map transformations on RDDs
- Used sbt to build Scala-based Spark projects and executed them using spark-submit
- Leveraged Scala's Option monad (Some and None) to avoid null pointer exceptions
- Implemented pattern matching in Scala to identify the desired sensor type for performing analysis
- Developed Scala traits to reuse code in other classes
Cluster Size: 200 Nodes
Environment: HDFS, MapReduce, Hive, HBase, Pig, Java, Oozie, Scala, Kafka, Spark, Git, CentOS 6.4, SBT, RDBMS
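A minimal Scala sketch of the Spark SQL pattern described in this role, using the Spark 2.x SparkSession entry point (the successor to the SQLContext mentioned above) with enableHiveSupport; the application name, HDFS paths, view names, and query columns are hypothetical placeholders:

```scala
import org.apache.spark.sql.SparkSession

object SensorQuery {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets Spark SQL read and query Hive tables
    val spark = SparkSession.builder()
      .appName("SensorQuery")
      .enableHiveSupport()
      .getOrCreate()

    // Load Parquet and JSON sources into DataFrames (paths are placeholders)
    val parquetDf = spark.read.parquet("hdfs:///data/events.parquet")
    val jsonDf = spark.read.json("hdfs:///data/events.json")

    // Register temporary views and run a SQL query against one of them
    parquetDf.createOrReplaceTempView("events")
    jsonDf.createOrReplaceTempView("events_json")
    val counts = spark.sql(
      "SELECT sensor_id, COUNT(*) AS readings FROM events GROUP BY sensor_id")
    counts.show()

    spark.stop()
  }
}
```

A project like this would be built with sbt and launched with spark-submit, matching the workflow in the bullets above.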
Confidential, Cleveland, OH
Hadoop Developer
Responsibilities:
- Collected the logs from the physical machines and the OpenStack controller and integrated into HDFS using Flume
- Created Sqoop jobs and Pig and Hive scripts for data ingestion from relational databases to compare with historical data
- Responsible for building scalable distributed data solutions using Hadoop
- Transformed incoming data with Hive & Pig to make data available to internal users
- Performed extensive data mining applications using Hive
- Migrated HiveQL queries to Impala to minimize query response time
- Worked on Sequence files, RC files, map-side joins, bucketing, and partitioning for Hive performance enhancement and storage improvement
- Worked with the Hive Avro data format to compress data and speed up processing
- Implemented business logic by writing Pig UDFs in Java and used various UDFs from Piggybank and other sources
- Developed multiple MapReduce programs in Java for Data Analysis
- Performed performance tuning and troubleshooting of MapReduce jobs by analyzing and reviewing Hadoop log files
- Used Pig as ETL tool to do transformations, event joins, filter and some pre-aggregations
- Designed and presented a plan for a POC on Impala
- Implemented daily cron jobs that automate parallel data-loading tasks into HDFS using AutoSys and Oozie coordinator jobs
- Responsible for performing extensive data validation using Hive
- Created HBase tables to load large sets of structured, semi-structured and unstructured data coming from UNIX, NoSQL and a variety of portfolios
- Involved in submitting and tracking Map Reduce jobs using JobTracker
- Involved in creating Oozie workflow and Coordinator jobs to kick off the jobs on time for data availability
- Created Spark Streaming jobs, built with sbt and launched with spark-submit, to process incoming data (see the streaming sketch following this role)
- Knowledge of handling Hive queries using Spark SQL integrated with the Spark environment
- Configured Kafka brokers and ZooKeeper to increase node utilization
- Integrated Kafka with Flume to send data to the Spark Streaming context and HDFS
Cluster Size: 45 Nodes
Environment: HDFS, MapReduce, Hive, HBase, Pig, Java, Scala, Kafka, Spark, Git, CentOS 6.4
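A minimal Scala sketch of the Kafka-to-Spark Streaming wiring described in this role, using the receiver-based KafkaUtils.createStream API from the spark-streaming-kafka module; the ZooKeeper quorum, consumer group, topic name, batch interval, and output path are hypothetical placeholders:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object LogStream {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("LogStream")
    // 10-second micro-batch interval (illustrative choice)
    val ssc = new StreamingContext(conf, Seconds(10))

    // Receiver-based Kafka stream: ZooKeeper quorum, consumer group, and
    // topic map (topic -> receiver thread count) are placeholders; the
    // stream yields (key, message) pairs, so keep only the message value
    val lines = KafkaUtils.createStream(
      ssc, "zk1:2181", "log-consumers", Map("weblogs" -> 1)).map(_._2)

    // Simple log analysis: count ERROR lines per batch and persist to HDFS
    val errors = lines.filter(_.contains("ERROR"))
    errors.count().print()
    errors.saveAsTextFiles("hdfs:///logs/errors")

    ssc.start()
    ssc.awaitTermination()
  }
}
```

The receiver-based form matches the ZooKeeper-centric Kafka setup listed above; on newer Spark/Kafka versions a direct (receiverless) stream would be a reasonable alternative.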
Confidential, Cleveland, OH
Hadoop Developer
Responsibilities:
- Conducted vendor analysis and proofs of concept for new Hadoop technologies and solutions.
- Worked on implementation and maintenance of Cloudera Hadoop cluster.
- Assisted in upgrading, configuring, and maintaining various Hadoop ecosystem components such as Pig, Hive, and HBase.
- Developed and executed custom MapReduce programs, Pig Latin scripts, and HQL queries.
- Used Hadoop FS scripts for HDFS (Hadoop Distributed File System) data loading and manipulation.
- Performed Hive test queries on local sample files and HDFS files.
- Developed and optimized Pig and Hive UDFs (user-defined functions) to implement functionality from external languages as and when required (see the UDF sketch following this role).
- Extensively used Pig for data cleaning and optimization.
- Developed Hive queries to analyze data and generate results.
- Exported data from HDFS to RDBMS via Sqoop for Business Intelligence, visualization and user report generation.
- Managed, reviewed and interpreted Hadoop log files. Involved with the application teams to install Hadoop updates, patches and version upgrades as required.
- Developed Map Reduce jobs for data cleaning and transformation.
- Developed Hive queries and Pig scripts to analyze large datasets.
- Collected log data from the web servers and integrated it into HDFS using Flume.
- Involved in importing and exporting data between RDBMS and HDFS using Sqoop.
- Created Hive tables and analyzed the loaded data using Hive queries.
- Integrated Hive with HBase to run MapReduce algorithms more effectively.
- Loaded large sets of structured, semi-structured, and unstructured data.
- Used Oozie job scheduler to automate the job flows.
- Developed Pig UDFs in Java for preprocessing the data.
- Provided cluster coordination services using ZooKeeper.
- Created and maintained Technical documentation for all the tasks performed like executing Pig scripts and Hive queries.
Cluster Size: 30 Nodes
Environment: Hadoop Ecosystem, HDFS, Map Reduce, Pig, Hive, Sqoop, Eclipse, Shell Scripting, RDBMS
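A minimal sketch of the kind of preprocessing Hive UDF described in this role. UDFs of this era were typically written in Java; Scala (also a JVM language) is used here for consistency with this resume's other examples, and the class name and normalization logic are hypothetical:

```scala
import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.hadoop.io.Text

// Hypothetical UDF that normalizes a free-text column before analysis:
// trims whitespace and lower-cases the value; NULL input yields NULL output.
class NormalizeText extends UDF {
  def evaluate(input: Text): Text = {
    if (input == null) null
    else new Text(input.toString.trim.toLowerCase)
  }
}
```

Packaged into a JAR, a class like this would be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being called from a query.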
Confidential, Cleveland, OH
Sr. SQL DBA/ Sr. Database Consultant
Responsibilities:
- Involved in T-SQL coding using temporary tables, CTEs, cursors, and INSERT/UPDATE commands to update target tables, and created views to reduce database complexity.
- Established and maintained databases, managed storage capacity on server drives, and backed up and restored databases across environments such as the Development, QA (Test), and Staging servers.
- Worked in an Active-Active cluster environment as a high-availability solution.
- Worked on setting up Transactional Replication (Push and Pull) and Merge Replication.
- Checked database health using DBCC commands.
- Implemented high availability with Always On Availability Groups and failover clustering.
- Developed ETL processes using SQL Server 2008/2005 Integration Services (SSIS) to migrate data from various sources per project requirements.
- Developed and modified a number of SSIS packages using Data Flow transformations including Derived Column, Conditional Split, Data Conversion, Sort, Union All, Merge Join, Multicast, and Lookup.
- Developed a number of iterative builds for different projects by comparing T-SQL scripts, functions, stored procedures, and tables.
- Provided instructions and development support for maintenance releases, updated versions, and patches as DBA, and worked with the development team on migrating data from the projects' various sources.
- Upgraded SSIS packages, SSRS reports, and databases from SQL Server 2005/2008 to SQL Server 2012 and 2014.
- Working knowledge of Crystal Reports.
- Participated in code review meetings and design sessions and was mentored by tech leads to understand business requirements in greater detail.
- Performed T-SQL tuning and optimization of queries for reports that take longer execution time using MS SQL Profiler, SQL Query Analyzer in MS SQL Server, Execution Plans, Index Tuning Wizard, and Database Engine Tuning Advisor.
Hardware/Software: MS SQL Server 2014/2012/2008/2005, T-SQL, SQL Server 2014/2012 Integration Services (SSIS), TFS, SQL Server 2008 Reporting Services (SSRS), VS 2013/2010, MS Excel, Visio 2007, Erwin 7.2, SQL Server 2008 Analysis Services, Windows Server 2008, BCP, Active Directory
Confidential
Sr. Database Administrator (DBA) / Sr. SQL Database Consultant
Responsibilities:
- Created and modified a number of stored procedures based on the normalization and constraints of the associated tables.
- Created custom tables to provide an interface between the source and target tables, generating custom messages based on the constraints provided, which helps the target database tables receive data in a refined manner.
- Involved in a large amount of coding using temporary tables, CTEs, cursors, and INSERT/UPDATE commands to update target tables, and created views to reduce database complexity.
- Developed ETL processes using SQL Server 2008 Integration Services to migrate data from a large number of tables from SQL Server to SQL Server and to Excel files.
- Developed ETL processes using tasks such as File System Task, Execute SQL Task, and Script Task.
- Created packages to test and clean standardized data using Data Flow transformations (Data Conversion, Sort, Union All, Conditional Split, Merge Join, and more), producing a large XLS file containing the overall data for the different input and target tables.
- Involved in creating variables, manifest files, and logging (to SQL Server databases and Windows event logs), and created dtsConfig files using package configurations.
- Set up Test, Dev, Staging, and Production environments.
- Created users and assigned permissions based on the level of database access each user needed.
- Maintained database security, performance, space allocation, and integrity checks, and planned for future changes and workloads.
- Created and scheduled Backup jobs, index maintenance jobs, consistency checks and update statistics.
- Used log shipping for database synchronization.
- Participated in code review meetings and design sessions and was mentored by tech leads.
- Created sub-reports, drilldown-reports, parameterized reports in SSRS.
- Created standard and data-driven report subscriptions, and created report snapshots and caching for better performance.
- Performed T-SQL tuning and optimization of queries for reports that take longer execution time using MS SQL Profiler and SQL Query Analyzer in MS SQL Server 2008.
- Unit tested all stored procedures and queries to verify correct, matching results.
- Documented all phases of the project, as well as the stored procedures and scripts, for future reference.
Hardware/Software: MS SQL Server 2008, T-SQL, SQL Server 2008 Integration Services(SSIS), TFS, SQL Server 2008 Reporting Services(SSRS), MS Excel, Visio 2007, Erwin 7.2, SQL Server 2008 Analysis Services, Windows server 2008
Confidential, Newark, DE
Database Administrator (DBA) /Sr. MS SQL Server/SSIS/SSRS Developer
Responsibilities:
- Developed ETL Processes using SQL Server 2008 Integration Services to migrate data from Oracle data source into a SQL Server 2008 database
- Created SSIS packages using different kinds of transformations (e.g., Pivot, Derived Column, Conditional Split, Term Extraction, Aggregate, Multicast).
- Created packages to test and clean standardized data using Data Flow transformations (Data Conversion, Export Column, Merge Join, Sort, Union All, Conditional Split, and more) for existing and new packages, imported huge CSV files into Salesforce from different data sources, and handled other ongoing mapping tasks.
- Created and scheduled Backup jobs, index maintenance jobs, consistency checks and update statistics.
- Involved in installing SQL Server 2008 and creating users and groups.
- Created & modified a number of existing stored procedures based on the changes made to some existing tables.
- Established and maintained sound backup and recovery policies and procedures, determined user needs, and monitored user access and security.
- Worked as part of a team and provided 24×7 support when required.
- Designed and deployed reports with drill-down, drill-through, and drop-down menu options, as well as parameterized and linked reports.
- Deployed and scheduled Reports using SQL Server 2008 Reporting Services to generate all daily, weekly, monthly and quarterly Reports including current status.
- Created standard & Data driven report subscriptions. Created report snapshots and cache for better performance.
- Configured Report Builder and trained business users to create reports from the report models I created.
- Prepared report layouts, placing fields appropriately according to the requirements of the final report.
- Performed general technical troubleshooting and provided consultation to development teams.
- Used SQL Profiler for troubleshooting, monitoring, and optimization of SQL Server and non-production database code as well as T-SQL code from developers and QA.
Hardware/Software: MS SQL Server 2008, T-SQL, SQL Server 2008 Integration Services(SSIS), TFS, SQL Server 2008 Reporting Services(SSRS), MS Excel, Visio 2007, Erwin 7.2, SQL Server 2008 Analysis Services, Windows server 2008
Confidential, Burlington, Massachusetts
Database Administrator (DBA) / Sr. MS SQL Server/SSIS/SSRS Developer
Responsibilities:
- Responsible for creating logical and physical model using Erwin Data Modeler.
- Created reports from prototype Analysis Services Cube using SQL Server 2008 Reporting Services.
- Extracted data from SharePoint Server to build SSIS packages.
- Created report snapshots and caching for better performance, and created standard and data-driven report subscriptions.
- Established and maintained databases, covering database design and implementation, tuning, performance monitoring, error log review, storage capacity on server drives, and backup/restore across environments such as the Development, Test, and Staging servers.
- Developed SSIS packages to extract data from OLTP to OLAP systems and Scheduled Jobs to call the packages and Stored Procedures.
- Created Alerts for successful or unsuccessful completion of Scheduled Jobs.
- Used various SSIS transformations such as Conditional Split and Derived Column for data scrubbing and data validation checks during staging, before loading the data into the data warehouse.
- Used ETL to implement the Slowly Changing Dimension transformation to maintain historical data in the data warehouse.
- Worked on database design and architecture based on business user requirements and performed normalization (3NF) and de-normalization.
- Created Error and Performance reports on SSIS Packages, Jobs, Stored procedures and Triggers.
- Created Views to reduce database complexities for the end users.
- Created sub-reports, drilldown-reports, summary reports, and parameterized reports in SSRS.
- Generated ad hoc reports using MS Reporting services.
- Performed T-SQL tuning and optimization of queries for reports that take longer execution time using MS SQL Profiler, index tuning wizard and SQL Query Analyzer in MS SQL Server 2008.
- Generated reports using SSRS that could be used to send information to different primary vendors, clients, and managers.
- Involved in documentation for all kinds of SSRS report and SSIS packages.
- Designed and implemented stored procedures and triggers for automating tasks.
Hardware/Software: MS SQL Server 2008, T-SQL, SSIS packages, VSS, SSRS, MS Excel, Visio2007, Erwin 7.1, SQL Server Analysis Services, Windows server 2008
Confidential, Baton Rouge, LA
Database Administrator (DBA) /SQL Developer/SSIS/SSAS/SSRS Developer
Responsibilities:
- Actively participated in interaction with users, the team lead, DBAs, and the technical manager to fully understand the requirements of the new system.
- Generated complex stored procedures and functions for better performance and flexibility.
- Created indexes for the quick retrieval of the data from the database.
- Defined constraints, rules, indexes and views based on business requirements.
- Used Execution Plan, SQL Profiler and database engine tuning advisor to optimize queries and enhance the performance of databases.
- Successfully migrated data from Microsoft SQL Server 2000 to Microsoft SQL Server 2005 using SSIS.
- Created SSIS packages using Pivot, Fuzzy Lookup, Derived Column, Conditional Split, Term Extraction, and Aggregate transformations, along with Execute SQL, Data Flow, and Execute Package tasks, to generate underlying data for reports and to export data from Excel spreadsheets, text files, MS Access, and CSV files.
- Involved in designing and deploying cubes in the SSAS environment using snowflake and star schema designs for the operational data.
- Involved in analyzing, designing, building, and testing OLAP cubes with SSAS 2005, and in adding calculations using MDX.
- Experience programming advanced OLAP objects such as KPIs and perspectives.
- Experience using the XMLA protocol with SQL Server Analysis Services 2005 for interaction between client applications and an Analysis Services instance.
- Extensive knowledge of SSAS storage, partitions, and aggregations; query calculations with MDX; data mining models; and developing reports using MDX and SQL.
- Created static reports using MDX queries and created MDX-based report definitions.
- Implemented OLAP cubes, facts, and dimensions to provide summarized and aggregate views of large data sets, and used Multi-Dimensional Expressions (MDX) to access them.
- Created business-critical KPIs using SQL Server Analysis Services (SSAS), representing aggregations in several different ways (hierarchically and using custom groupings) for the company to analyze performance.
- Developed and designed cubes with dimensions, measures, and calculated members, and used MDX expressions for the various calculated members, actions, and KPIs.
Hardware/Software: Windows Server 2003, MS SQL Server 2005, Business Intelligence Development Studio (SSAS, SSRS & SSIS), TFS, MS Office 2007, Erwin 4.1
Confidential - San Mateo, California
SQL/SSIS/SSRS/SSAS Developer
Responsibilities:
- Extracted required mapping fields from different sources into the warehouse staging tables.
- Performed all conversion goals and objectives, including identifying data for processing sales and validating historical sales available in the Central Data Warehouse (CDW) for data migration.
- Analyzed and performed data mapping, which involved identifying source data fields, target entities and their lookup table IDs, and translation rules; evaluated data groups and data criticality and developed an automated conversion process.
- Developed and modified many scalar-valued functions and stored procedures for parsing data required for mapping; tuned low-performing functions.
- Developed stored procedures and staging tables for data pushes from the CDW.
- Developed SSIS packages not only for particular tables' fields and the data types required for lookup tables in staging, but also for the CDW's data warehouse requirements.
- Increased query performance needed for statistical reporting by monitoring, tuning, and optimizing indexes using Performance Monitor and SQL Profiler; reduced and eliminated unnecessary joins and indexes.
- Created and scheduled SSIS packages for running AM and PM feeds from various departments and multiple servers and resources to Development Servers.
- Created SSIS packages using advanced transformations (e.g., Pivot, Derived Column, Conditional Split, Term Extraction, Aggregate, Multicast).
- Created packages to test and cleanse standardized data using Data Flow Task transformations (Data Conversion, Export Column, Merge Join, Sort, Union All, Conditional Split, and more) for existing and new packages, imported huge CSV files into the data warehouse from different data sources, and handled other ongoing mapping tasks.
- Developed and modified existing reports from basic chart and tabular formats to parameterized reports, including single-/multi-/Boolean-valued parameters and linked reports based on new business logic.
- Created SQL Server configurations for SSIS packages and XML and Windows event logging.
- Used sub-report functionality for complex report calculations.
- Created parameterized reports, passing parameters through the report screen.
- Rendered reports in different formats (PDF, Excel, etc.) to be executed automatically on a weekly or monthly basis, and managed subscriptions.
- Used MDX to create calculated members for customized calculated measures.
- Analyzed cube performance by partitioning cubes and creating perspective cubes.
- Documented all database objects, procedures, views, functions, and packages for future reference.
- Documented all old and migrated stored procedures and scripts for future reference.
Hardware/Software: Windows XP/2003 Enterprise edition, MS SQL Server 2005, Excel 2003, TFS, VBScript, Database Engine Tuning Advisor, SQL Profiler, SQL Server Business Intelligence Development Studio 2005, Reporting Services Configuration, SSIS
Confidential
Jr. SQL Developer
Responsibilities:
- Created new tables, wrote static/dynamic stored procedures and some user defined functions for Application Developers.
- Performed data conversions from flat files into a normalized database structure.
- Created triggers, check constraints, and primary/foreign keys to enforce data and referential integrity and business rules.
- Worked on DTS packages and DTS Import/Export for transferring data between SQL Server instances.
- Generated summary reports like Tabular, Matrix, Drill down and Drill through using Report builder.
- Performance tuning of SQL queries and stored procedures using SQL Profiler, Query Plan (Graphical/Text) and Statistics I/O.
- Involved in development, support, document reviews, testing and integration of the system.
- Created and maintained various additional indexes and intermediate tables for faster, more efficient report retrieval.
- Managed historical data from various heterogeneous data sources like Excel, Fixed Width Flat Files, and Access.
- Worked on log shipping and replication to restore database backups.
- Used linked tables, pass-through queries, and many other MS Access features such as macros and reports.
Hardware/Software: SQL Server 2000, VSS, VB Script, DTS, MS Access 2000/2003, Visual Studio.Net 2003