Data Engineer Resume
Des Moines, IA
SUMMARY
- Overall, 8+ years of technical IT experience in all phases of the Software Development Life Cycle (SDLC), with skills in data analysis, design, development, testing, and deployment of software systems.
- 6+ years of industrial experience in Big Data analytics and data manipulation using Hadoop ecosystem tools: MapReduce, HDFS, YARN/MRv2, Pig, Hive, HBase, Spark, Kafka, Flume, Sqoop, Oozie, Avro, AWS, Spring Boot, Spark integration with Cassandra, Solr, and Zookeeper.
- Experience in developing data pipelines using AWS services including EC2, S3, Redshift, Glue, Lambda, Step Functions, CloudWatch, SNS, DynamoDB, and SQS.
- Proficiency in multiple databases including MongoDB, Cassandra, MySQL, Oracle, and MS SQL Server. Worked on different file formats such as delimited files, Avro, JSON, and Parquet. Docker container orchestration using ECS, ALB, and Lambda.
- Extensive knowledge of QlikView Enterprise Management Console (QEMC), QlikView Publisher, and QlikView Web Server.
- Implemented a batch process for heavy-volume data loading using the Apache NiFi dataflow framework within an Agile development methodology.
- Worked as team JIRA administrator providing access, working assigned tickets, and teaming with project developers to test product requirements/bugs/new improvements.
- Created Snowflake Schemas by normalizing the dimension tables as appropriate and creating a Sub Dimension named Demographic as a subset of the Customer Dimension.
- Experienced in Pivotal Cloud Foundry (PCF) on Azure VMs to manage the containers created by PCF.
- Hands-on experience in test-driven development (TDD), behavior-driven development (BDD), and acceptance test-driven development (ATDD) approaches.
- Managing databases and Azure Data Platform services (Azure Data Lake (ADLS), Data Factory (ADF), Data Lake Analytics, Stream Analytics, Azure SQL DW, HDInsight/Databricks, NoSQL DB), SQL Server, Oracle, Data Warehouse, etc. Built multiple Data Lakes.
- Extensive experience in Text Analytics, generating data visualizations using R, Python and creating dashboards using tools like Tableau, PowerBI.
- Worked with Google Cloud Dataflow and BigQuery to manage and move data within a 200-petabyte cloud data lake for GDPR compliance. Also designed a star schema in BigQuery.
- Extensive programming expertise in designing and developing web-based applications using Spring Boot, Spring MVC, Java servlets, JSP, JTS, JTA, JDBC and JNDI.
- Experience in MVC and Microservices Architecture with Spring Boot and Docker Swarm.
- Expertise in Java programming and a good understanding of OOP, I/O, Collections, Exception Handling, Lambda Expressions, and Annotations.
- Provided full life cycle support to logical/physical database design, schema management and deployment. Adept at database deployment phase with strict configuration management and controlled coordination with different teams.
- Experience in Spring Frameworks like Spring Boot, Spring LDAP, Spring JDBC, Spring Data JPA, Spring Data REST
- Experience in writing code in R and Python to manipulate data for data loads, extracts, statistical analysis, modeling, and data munging.
- Familiar with latest software development practices such as Agile Software Development, Scrum, Test Driven Development (TDD) and Continuous Integration (CI).
- Utilized Kubernetes and Docker for the runtime environment for the CI/CD system to build, test, and deploy. Experience in working on creating and running docker images with multiple microservices.
- Utilized analytical applications like R, SPSS, Rattle and Python to identify trends and relationships between different pieces of data, draw appropriate conclusions and translate analytical findings into risk management and marketing strategies that drive value.
- Extensive hands-on experience in using distributed computing architectures such as AWS products (e.g. EC2, Redshift, EMR, and Elasticsearch), Hadoop, Python, Spark, and effective use of Azure SQL Database, MapReduce, Hive, SQL, and PySpark to solve big data problems.
- Strong experience in Microsoft Azure Machine Learning Studio for data import/export, data preparation, exploratory data analysis, summary statistics, feature engineering, machine learning model development, and model deployment to server systems.
- Proficient in statistical methodologies including Hypothesis Testing, ANOVA, Time Series, Principal Component Analysis, Factor Analysis, Cluster Analysis, and Discriminant Analysis.
- Expertise in transforming business resources and requirements into manageable data formats and analytical models, designing algorithms, building models, and developing data mining and reporting solutions that scale across massive volumes of structured and unstructured data.
- Worked with various text analytics libraries like Word2Vec, GloVe, and LDA and experienced with Hyper Parameter Tuning techniques like Grid Search, Random Search, model performance tuning using Ensembles and Deep Learning.
- Skilled in System Analysis, E-R/Dimensional Data Modeling, Database Design and implementing RDBMS specific features.
- Knowledge of working with Proofs of Concept (PoCs) and gap analysis; gathered data for analysis from different sources and prepared it for exploration using data manipulation techniques and Teradata.
- Well experienced in normalization and de-normalization techniques for optimum performance in relational and dimensional database environments.
- Experience in developing customized UDFs in Python to extend Hive and Pig Latin functionality.
- Expertise in designing complex mappings, with expertise in performance tuning of Slowly Changing Dimension tables and Fact tables.
- Extensively worked with Teradata utilities FastExport and MultiLoad to export and load data to/from different source systems, including flat files.
- Experienced in building automated regression scripts for validation of ETL processes between multiple databases such as Oracle, SQL Server, Hive, and MongoDB using Python.
- Proficiency in SQL across several dialects, including MySQL, PostgreSQL, Redshift, SQL Server, and Oracle.
- Developed JSON scripts for deploying the pipeline in Azure Data Factory (ADF) that processes the data using the SQL Activity. Built an ETL process that uses a Spark JAR to execute the business analytical model.
- Excellent communication skills. Successfully works in a fast-paced, multitasking environment both independently and in a collaborative team; a self-motivated, enthusiastic learner.
- Skilled in performing data parsing, data ingestion, data manipulation, data architecture, data modeling, and data preparation, with methods including describing data contents, computing descriptive statistics, regex, split and combine, remap, merge, subset, reindex, melt, and reshape (a minimal sketch follows this list).
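A minimal pandas sketch of the preparation steps listed above (descriptive statistics, regex parsing, merge, and melt); the file, column, and key names are hypothetical.

```python
import pandas as pd

# Load a hypothetical delimited extract and compute descriptive statistics
df = pd.read_csv("customers.csv")
print(df.describe(include="all"))

# Regex parsing: split a combined "city, ST" field into two columns
df[["city", "state"]] = df["location"].str.extract(r"^(.*),\s*(\w{2})$")

# Merge with a second (hypothetical) source and melt wide columns into long form
orders = pd.read_csv("orders.csv")
merged = df.merge(orders, on="customer_id", how="left")
long_form = merged.melt(id_vars=["customer_id"],
                        value_vars=["q1_sales", "q2_sales"],
                        var_name="quarter", value_name="sales")
print(long_form.head())
```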
TECHNICAL SKILLS
Big Data Ecosystem: HDFS, MapReduce, HBase, Pig, Hive, Sqoop, Kafka, Flume, Cassandra, Impala, Oozie, Zookeeper, MapR, Amazon Web Services (AWS), EMR
Machine Learning Classification Algorithms: Logistic Regression, Decision Tree, Random Forest, K-Nearest Neighbor (KNN), Gradient Boosting Classifier, Extreme Gradient Boosting Classifier, Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayes Classifier, Extra Trees Classifier, Stochastic Gradient Descent, etc.
Cloud Technologies: AWS, Azure, Google Cloud Platform (GCP)
IDEs: IntelliJ, Eclipse, Spyder, Jupyter
Ensemble and Stacking: Averaged Ensembles, Weighted Averaging, Base Learning, Meta Learning, Majority Voting, Stacked Ensemble, AutoML - Scikit-Learn, MLjar, etc.
Databases: Oracle 11g/10g/9i, MySQL, DB2, MS SQL Server, HBase
Programming / Query Languages: Java, SQL, Python Programming (Pandas, NumPy, SciPy, Scikit-Learn, Seaborn, Matplotlib, NLTK), NoSQL, PySpark, PySpark SQL, SAS, R Programming (Caret, Glmnet, XGBoost, rpart, ggplot2, sqldf), RStudio, PL/SQL, Linux shell scripts, Scala.
Data Engineer/Big Data Tools / Cloud / Visualization / Other Tools: Databricks, Hadoop Distributed File System (HDFS), Hive, Pig, Sqoop, MapReduce, Spring Boot, Flume, YARN, Hortonworks, Cloudera, Mahout, MLlib, Oozie, Zookeeper, etc.; AWS, Azure Databricks, Azure Data Explorer, Azure HDInsight, Salesforce, NiFi, GCP, Google Cloud Shell, Linux, BigQuery, Bash Shell, Unix, Tableau, Power BI, SAS, Web Intelligence, Crystal Reports, Dashboard Design.
PROFESSIONAL EXPERIENCE
Data Engineer
Confidential, Des Moines, IA
Responsibilities:
- Performed data analysis and developed analytic solutions; investigated data to discover correlations and trends and the ability to explain them.
- Worked with Data Engineers and Data Architects to define back-end requirements for data products (aggregations, materialized views, tables, visualizations).
- Developed frameworks and processes to analyze unstructured information. Assisted in Azure Power BI architecture design
- Experienced with machine learning algorithms such as logistic regression, random forest, XGBoost, KNN, SVM, neural networks, linear regression, lasso regression, and k-means.
- Implemented Statistical model and Deep Learning Model (Logistic Regression, XGboost, Random Forest, SVM, RNN, and CNN).
- Designing and Developing Oracle PL/SQL and Shell Scripts, Data Import/Export, Data Conversions and Data Cleansing.
- Architect & implement medium to large scale BI solutions on Azure using Azure Data Platform services (Azure Data Lake, Data Factory, Data Lake Analytics, Stream Analytics
- Performing data analysis, statistical analysis, generated reports, listings and graphs using SAS tools, SAS/Graph, SAS/SQL, SAS/Connect and SAS/Access.
- Developing Spark applications using Scala and Spark-SQL for data extraction, transformation, and aggregation from multiple file formats. Using Kafka and integrating with the Spark Streaming. Developed data analysis tools using SQL andPythoncode.
- Authoring Python (PySpark) Scripts for custom UDF’s for Row/ Column manipulations, merges, aggregations, stacking, data labeling and for all Cleaning and conforming tasks. Migrate data from on-premises to AWS storage buckets.
- Followed Agile methodology, including test-driven development and pair-programming concepts.
- Created functions and assigned roles in AWS Lambda to run Python scripts, and AWS Lambda functions using Java to perform event-driven processing.
- Involved in installing QlikView 12.0 SR5 and NPrinting 16/17 on both Publisher and Server.
- Involved in testing dashboards on QlikView 11.2 to migrate them to QlikView 12.1. Extensive experience with the Extraction, Transformation, Loading (ETL) process using Ascential DataStage EE/8.0.
- Developed a Python script to transfer data from on-premises sources to AWS S3 and extract data via REST APIs. Implemented a microservices-based cloud architecture using Spring Boot.
- Worked on ingesting data through cleansing and transformations, leveraging AWS Lambda, AWS Glue, and Step Functions.
- Created YAML files for each data source, including Glue table stack creation. Worked on a Python script to extract data from Netezza databases and transfer it to AWS S3.
- Developed Lambda functions and assigned IAM roles to run Python scripts along with various triggers (SQS, EventBridge, SNS).
- Wrote UNIX shell scripts to automate jobs and scheduled cron jobs for job automation using crontab. Created a Lambda deployment function and configured it to receive events from S3 buckets.
- Built machine learning models, including SVM, random forest, and XGBoost, to score and identify potential new business cases with Python scikit-learn.
- Experience in converting existing AWS infrastructure to serverless architecture (AWS Lambda, Kinesis), deploying via Terraform and AWS CloudFormation templates.
- Worked on Docker container snapshots, attaching to a running container, removing images, managing directory structures, and managing containers.
- Experienced in day-to-day DBA activities including schema management, user management (creating users, synonyms, privileges, roles, quotas, tables, indexes, sequences), space management (tablespaces, rollback segments), monitoring (alert log, memory, disk I/O, CPU, database connectivity), scheduling jobs, and UNIX shell scripting.
- Developed normalized Logical and Physical database models to design an OLTP system for insurance applications.
- Coordinated with different data providers to source the data and build the Extraction, Transformation, and Loading (ETL) modules based on the requirements to load the data from source to stage and performed Source Data Analysis.
- Analyzed existing Data Model and accommodated changes according to the business requirements.
- Decommissioned two data marts and expanded the data model of an existing Oracle DW used as their Data Mining and Metrics repository.
- Implemented a top-down/bottom-up analysis approach to assess the current warehouse environment and extend the to-be data models.
- Created a dimensional model for the reporting system by identifying required dimensions and facts using Erwin.
- Used forward engineering to create a Physical Data Model with DDL that best suits the requirements from the Logical Data Model.
- Developed complex Talend ETL jobs to migrate the data from flat files to the database. Pulled files from the mainframe into the Talend execution server using multiple FTP components.
- Developed Talend ESB services and deployed them on ESB servers on different instances.
- Architected and designed serverless application CI/CD using the AWS Serverless Application Model (Lambda).
- Developed stored procedures/views in Snowflake and used them in Talend for loading Dimensions and Facts.
- Developed merge scripts to UPSERT data into Snowflake from an ETL source (a minimal sketch follows this list).
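A minimal sketch of the Snowflake UPSERT pattern described above, using the snowflake-connector-python driver; the connection parameters, table, and column names are hypothetical.

```python
import snowflake.connector

# Hypothetical connection details; in practice these come from secured configuration
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="EDW", schema="DIM",
)

# MERGE (UPSERT) rows landed by the ETL job in a staging table into the dimension table
merge_sql = """
MERGE INTO dim_customer AS tgt
USING stg_customer AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
  tgt.name = src.name, tgt.segment = src.segment, tgt.updated_at = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN INSERT (customer_id, name, segment, updated_at)
  VALUES (src.customer_id, src.name, src.segment, CURRENT_TIMESTAMP())
"""

try:
    conn.cursor().execute(merge_sql)
finally:
    conn.close()
```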
Environment: Hadoop, MapReduce, HDFS, Hive, NiFi, Spring Boot, Cassandra, Docker Swarm, Data Lake, Sqoop, Oozie, SQL, Kafka, Spark, Scala, Java, AWS, GitHub, Talend Big Data Integration, Solr, Impala.
Data Engineer
Confidential, Seattle, WA
Responsibilities:
- Transforming business problems into Big Data solutions and defining the Big Data strategy and roadmap. Installing, configuring, and maintaining data pipelines.
- Developed the features, scenarios, and step definitions for BDD (Behavior Driven Development) and TDD (Test Driven Development) using Cucumber, Gherkin, and Ruby.
- Designing the business requirement collection approach based on the project scope and SDLC methodology.
- Creating pipelines in ADF using Linked Services, Datasets, and Pipelines to extract, transform, and load data from different sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, and to write data back.
- Files extracted from Hadoop and dropped on a daily/hourly basis into S3. Working with data governance and data quality teams to design various models and processes.
- Experience in deploying Spring Boot microservices to Pivotal Cloud Foundry (PCF) using buildpacks and Jenkins for continuous integration; handled deployments and binding of services in PCF, and installed PCF on Azure to manage the containers created by PCF.
- Analyzed clickstream data from Google Analytics with BigQuery. Designed APIs to load data from Omniture, Google Analytics, and Google BigQuery.
- Maintained JIRA team and program management review dashboards and maintained COP account and JIRA team sprint metrics reportable to customer and SAIC division management
- Maintained JIRA team Confluence System Engineering pages that included: Process Flow Management, Team Requirements, Roles and Responsibilities, and COP User Metrics.
- Involved in all the steps and scope of the project reference data approach to MDM, have created a Data Dictionary and Mapping from Sources to the Target in MDM Data Model.
- Experience managing Azure Data Lake Storage (ADLS) and Data Lake Analytics and an understanding of how to integrate with other Azure services. Knowledge of U-SQL.
- Responsible for working with various teams on a project to develop analytics-based solution to target customer subscribers specifically.
- Created functions and assigned roles in AWS Lambda to run Python scripts, and AWS Lambda functions using Java to perform event-driven processing. Created Lambda jobs and configured roles using the AWS CLI.
- Built a new CI pipeline with testing and deployment automation using Docker, Docker Swarm, Jenkins, and Puppet. Utilized continuous integration and automated deployments with Jenkins and Docker.
- Data visualization: Pentaho, Tableau, D3. Knowledge of numerical optimization, anomaly detection and estimation, A/B testing, statistics, and Maple. Big data analysis using Hadoop, MapReduce, NoSQL, Pig/Hive, Spark/Shark, MLlib, and Scala, plus NumPy, SciPy, Pandas, and scikit-learn.
- Utilized Spark, Scala, Hadoop, HBase, Cassandra, MongoDB, Kafka, Spark Streaming, MLlib, and Python, and used the engine to increase user lifetime by 45% and triple user conversions for target categories.
- Used Apache Spark DataFrames, Spark SQL, and Spark MLlib extensively, developing and designing POCs using Scala, Spark SQL, and MLlib libraries.
- Data integration: ingested, transformed, and integrated structured data and delivered it to a scalable data warehouse platform, using traditional ETL (Extract, Transform, and Load) tools and methodologies to collect data from various sources into a single data warehouse.
- Applied various machine learning algorithms and statistical modeling such as decision trees, text analytics, natural language processing (NLP), supervised and unsupervised learning, regression models, social network analysis, neural networks, deep learning, SVM, and clustering to identify volume, using the scikit-learn package in Python, R, and Matlab. Collaborated with Data Engineers and Software Developers to develop experiments and deploy solutions to production.
- Created and published multiple dashboards and reports using Tableau Server and worked on text analytics, Naive Bayes, sentiment analysis, and creating word clouds, retrieving data from Twitter and other social networking platforms.
- Worked on data that was a combination of unstructured and structured data from multiple sources and automated the cleaning using Python scripts.
- Created User manual on using Atlassian Products (Jira/Confluence) and trained end users project wise.
- Implemented the Atlassian Stash application as the SCM tool of choice for central repository management
- Designed and developed architecture for data services ecosystem spanning Relational, NoSQL, and Big Data technologies.
- Used SQL Server Integration Services (SSIS) for extraction, transformation, and loading of data into the target system from multiple sources.
- Involved in unit testing the code and provided feedback to the developers. Performed unit testing of the application using NUnit.
- Designed both 3NF data models for OLTP systems and dimensional data models using star and snowflake Schemas.
- Created and maintained SQL Server scheduled jobs, executing stored procedures for the purpose of extracting data from Oracle into SQL Server. Extensively used Tableau for customer marketing data visualization
- Optimized algorithms with stochastic gradient descent; fine-tuned algorithm parameters with manual tuning and automated tuning such as Bayesian Optimization.
- Strong understanding of enterprise data warehouse architecture and big data. Responsible for data model design using Erwin/PowerDesigner/Cast.
- Built strategic relationships with vendors and reduced customization and implementation cost by 50%.
- Communicated with CxOs to align business strategy and increased customer base.
- Architected data processes and reduced latency to close to real time. Processing time was reduced from 50 minutes to 10 seconds.
- Developed Data Mapping, Data Governance, Transformation, and Cleansing rules for the Master Data Management architecture involving OLTP, ODS, and OLAP.
- Migrated databases from SQL databases (Oracle and SQL Server) to NoSQL databases (Cassandra/MongoDB).
- Studied the existing OLTP systems (3NF models) and created facts and dimensions in the data mart. Worked with different cloud-based data warehouses such as SQL and Redshift.
- Wrote research reports describing the experiments conducted, results, and findings, and made strategic recommendations to technology, product, and senior management. Worked closely with regulatory delivery leads to ensure robustness in prop trading control frameworks using Hadoop, Python Jupyter Notebook, Hive, and NoSQL.
- Wrote production-level machine learning classification models and ensemble classification models from scratch using Python and PySpark to predict binary values for certain attributes in a given time frame (a minimal sketch follows this list).
- Performed all necessary day-to-day GIT support for different projects, Responsible for design and maintenance of the GIT Repositories, and the access control strategies.
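A minimal PySpark ML sketch of the binary-classification modeling described above; the input path, feature columns, and label column are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("binary-classifier").getOrCreate()

# Hypothetical curated feature table with a 0/1 label column
df = spark.read.parquet("s3://bucket/curated/features/")
train, test = df.randomSplit([0.8, 0.2], seed=42)

# Assemble raw feature columns into a vector and fit a logistic regression model
assembler = VectorAssembler(inputCols=["tenure", "usage", "spend"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(train)

# Evaluate area under ROC on the held-out split
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
print(f"AUC = {auc:.3f}")
```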
Environment: Hadoop, Kafka, Spark, Sqoop, Docker, Docker Swarm, BigQuery, Spark SQL, TDD, Spark Streaming, Hive, Scala, Pig, NoSQL, Impala, Oozie, HBase, Data Lake, Zookeeper.
Big data Developer/ Data Engineer
Confidential, St. Louis, MO
Responsibilities:
- Responsible for developing applications on the Data Lake per client requirements and exposing that data to the client.
- Developing the code to move data from one zone to another within the Data Fabric platform.
- Created applications such as Claims Sweep WGS for transforming data per client requirements using Spark, Hive, and Python.
- Developing automation scripts to perform validations such as record counts and schema checks, and loading the data into the corresponding partitions (see the sketch after this list).
- Implemented Spark using Scala, utilizing DataFrames and the Spark SQL API for faster processing of data.
- Used PySpark to write the code for all the Spark use cases; extensive experience with Scala for data analytics on the Spark cluster, and performed map-side joins on RDDs.
- Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala, with good experience using the Spark shell and Spark Streaming.
- Built near real-time pipelines that operate efficiently to handle huge volumes of incoming business activity.
- Developing programs to validate the data after ingesting it into the Data Lake using UNIX.
- Developing scripts to generate reconciliation reports using Python.
- Involved in moving data from different source systems such as Oracle, SQL, and DB2 to the Data Lake.
- Identifying the layout of COBOL copybooks, cleaning up the copybooks, and ingesting the data as per the layout.
- Loaded the data from Teradata to HDFS using Teradata Hadoop connectors.
- Ingested data from the Oracle database through Oracle GoldenGate into the Hadoop Data Lake with the help of Kafka.
- Responsible for providing technical solutions when the team faces issues.
- Responsible for guiding the team when they have issues.
- Responsible for providing design and architecture guidance to the team for developing applications.
- Responsible for reviewing code and bringing it in line with client standards.
- Creating data model for the data to be ingested for each table.
- Identifying the appropriate file formats for the tables to retrieve the data faster.
- Deciding column data types carefully so that no data is lost or missed.
- Responsible for identifying the user stories or work items for the initiatives in PI planning.
- Responsible for developing masking algorithms for PHI columns so that data exposed to offshore teams does not reveal the actual values.
- Set up GCP firewall rules to allow or deny traffic to and from the VM instances based on specified configuration, and used GCP Cloud CDN (content delivery network) to deliver content from GCP cache locations, drastically improving user experience and latency.
- Supported site reliability engineering efforts.
- Created projects, VPCs, subnetworks, and GKE clusters for the QA3, QA9, and prod environments using Terraform.
- Worked on a Jenkinsfile with multiple stages: checking out a branch, building the application, testing, pushing the image to GCR, deploying to QA3, deploying to QA9, acceptance testing, and finally deploying to production.
- Identifying the user stories for the requirements/EPICS.
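A minimal sketch of the record-count and schema validation mentioned earlier in this list, assuming hypothetical source/target paths, an expected layout, and a source-reported count.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("ingest-validation").getOrCreate()

# Expected layout of the incoming claims feed (hypothetical columns)
expected = StructType([
    StructField("claim_id", StringType()),
    StructField("member_id", StringType()),
    StructField("amount", LongType()),
    StructField("ingest_date", StringType()),
])

df = spark.read.parquet("/datafabric/raw/claims/")

# Schema check: fail fast if the landed columns differ from the expected layout
if set(df.columns) != {f.name for f in expected.fields}:
    raise ValueError(f"Schema mismatch: {df.columns}")

# Record-count check against the count reported by the source system (hypothetical value)
source_count = 1_250_000
if df.count() != source_count:
    raise ValueError("Record count mismatch between source and landed data")

# Load into the conformed zone, partitioned by ingest date
df.write.mode("overwrite").partitionBy("ingest_date").parquet("/datafabric/conformed/claims/")
```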
Environment: Kafka, RESTful services, Amazon Web Services, Scala, Hive, Jira, StreamSets, HDFS, Control-M, GCP, Spark, Teradata, Hortonworks, Scrum, Pig, Tez, Oozie, HBase, PySpark, Spark SQL, Python, Linux, PuTTY, Cassandra
Data Engineer/Big data Developer
Confidential, Washington, DC
Responsibilities:
- Worked on analyzing the Hadoop cluster and different Big Data analytic tools including Pig, Hive, the HBase database, and Sqoop.
- Installed Hadoop, MapReduce, and HDFS, and developed multiple MapReduce jobs in Pig and Hive for data cleaning and pre-processing.
- Coordinated with business customers to gather business requirements, interacted with technical peers to derive technical requirements, and delivered the BRD and TDD documents.
- Extensively involved in Design phase and delivered Design documents.
- Involved in Testing and coordination with business in User testing.
- Importing and exporting data into HDFS and Hive using Sqoop.
- Wrote Hive jobs to parse the logs and structure them in tabular format to facilitate effective querying of the log data.
- Experience building data and/or ETL pipelines
- Designed and Implemented Partitioning (static, Dynamic) and Bucketing in HIVE, AWS
- Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally in MapReduce.
- Imported data from AWS S3 into Spark RDDs and performed transformations and actions on the RDDs (a minimal sketch follows this list).
- Experienced in defining job flows.
- Worked on creating pipelines to migrate data from Hadoop to Azure Data Factory.
- Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
- Experienced in managing and reviewing the Hadoop log files.
- Used Pig as ETL tool to do Transformations, even joins and some pre-aggregations before storing the data onto HDFS.
- Load and Transform large sets of structured and semi structured data.
- Responsible for managing data coming from different sources.
- Involved in creating Hive Tables, loading data and writing Hive queries.
- Utilized Apache Hadoop environment by Cloudera.
- Created Data model for Hive tables.
- Involved in Unit testing and delivered Unit test plans and results documents.
- Exported data from HDFS environment into RDBMS using Sqoop for report generation and visualization purpose.
- Worked on Oozie workflow engine for job scheduling.
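A minimal sketch of loading data from AWS S3 into a Spark RDD and applying transformations and actions, as referenced in this list; the bucket name and record layout are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-rdd-example").getOrCreate()
sc = spark.sparkContext

# Read tab-delimited log records from a hypothetical S3 prefix into an RDD
lines = sc.textFile("s3a://my-bucket/logs/2020/")

# Transformations: parse each line, drop malformed rows, key by status code
parsed = lines.map(lambda line: line.split("\t"))
valid = parsed.filter(lambda fields: len(fields) == 3)
by_status = valid.map(lambda f: (f[2], 1)).reduceByKey(lambda a, b: a + b)

# Action: collect the aggregated counts back to the driver
for status, count in by_status.collect():
    print(status, count)
```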
Environment: Hadoop, Cloudera, HDFS, Hive, AWS, Azure Data Factory, Azure Storage, HBase, Sqoop, Kafka, Agile, SQL, Teradata, XML, UNIX, Shell Scripting, WINSQL
Tableau Developer
Confidential
Responsibilities:
- Worked as a Tableau admin, in charge of the report development and maintenance
- Independently worked on owning IT support tasks related to Tableau Reports on Server.
- Built and maintained many dashboards.
- Conducted training classes for employees on Tableau software.
- Created dataflow diagrams and documented the whole data movement process.
- Responsible for writing the complete requirement document by interacting with the business directly to ascertain the business rules and logic; wrote high-level and detailed design documents.
- Created backup scripts to take periodic backups of the content on the Tableau server (a minimal sketch follows this list).
- Worked on the requirement gathering of the reports
- Analyzed the database tables and created database views based on the columns needed.
- Created dashboards displaying sales, variance of sales between planned and actual values.
- Implemented calculations and parameters wherever needed.
- Published dashboards to Tableau server.
- Created users; added users to a site and to groups; viewed, edited, and deleted users and activated their licenses; worked on distributed environments, including installing worker servers, maintaining a distributed environment, and high availability.
- Worked on SQL Server impersonation: impersonation requirements, how impersonation works, impersonating with a Run As User account, and impersonating with embedded SQL credentials.
- Good knowledge of and hands-on work with tabcmd and tabadmin, database maintenance, and troubleshooting.
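A minimal sketch of the kind of periodic backup script described above. It simply shells out to the tabadmin backup command of older (pre-TSM) Tableau Server releases, so the exact command and paths are assumptions.

```python
import subprocess
from datetime import datetime

# Hypothetical backup destination; pre-TSM Tableau Server's tabadmin CLI is assumed here
backup_file = r"D:\backups\tabserver_{}.tsbak".format(datetime.now().strftime("%Y%m%d"))

# Run the backup and raise if tabadmin returns a non-zero exit code
subprocess.run(["tabadmin", "backup", backup_file], check=True)
```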
