
Big Data/Hadoop Engineer Resume


CA

SUMMARY

  • 7 years of professional experience in IT software and Confidential, spanning the design, development, and testing of applications in the telecom, financial, and healthcare domains.
  • Expertise in HDFS, MapReduce, YARN, Hive, HBase, Pig, Phoenix, Sqoop, Flume, NoSQL, Spark, cloud services, Zookeeper, Oozie, and various other ecosystem components, with knowledge of Hadoop cluster setup, integration, and installation.
  • Experience working with OpenStack (Icehouse, Liberty), Ansible, Kafka, Elasticsearch, Hadoop, StreamSets, MySQL, Cloudera, MongoDB, UNIX shell scripting, Pig scripting, Hive, Flume, Zookeeper, Sqoop, Oozie, Python, Spark, Git, and a variety of RDBMS in UNIX and Windows environments, following Agile methodology.
  • Extensive experience in IT data analytics projects; hands-on experience migrating on-premises ETL workloads to Google Cloud Platform (GCP) using cloud-native tools such as BigQuery, Cloud Dataproc, Google Cloud Storage, and Composer.
  • Proficient in various Big Data technologies such as Hadoop, Apache NiFi, Hive Query Language, the HBase NoSQL database, Sqoop, Spark, Scala, Oozie, and Pig, as well as Oracle Database and UNIX shell scripting.
  • Strong experience in real-time data analytics using Spark Streaming, Kafka, and NiFi (see the streaming sketch after this summary).
  • Competent with Chef, Puppet, and Ansible configuration and automation tools; configured and administered CI tools like Jenkins, Hudson, and Bamboo for automated builds.
  • Strong technical knowledge of Java, SQL, the ETL (Extract, Transform, Load) process, and scripting (Shell/Python).
  • In-depth knowledge of testing methodologies, concepts, phases, and types of testing; developed Test Plans, Test Scenarios, Test Cases, Test Procedures, and Test Reports, and documented test results after analyzing Business Requirements Documents (BRD) and Functional Requirement Specifications (FRS).
  • Hands-on experience in installation, configuration, support, and management of the Cloudera and Hortonworks Hadoop platforms, including CDH3 and CDH4 clusters.
  • Good understanding of Business Intelligence, ETL transformations, and Hadoop clusters.
  • Experience building data ingestion, extraction, and transformation for various datasets onto HDFS and Hive for data processing.
  • Expertise in implementing HBase schemas with optimized row-key design to avoid hot-spotting.
  • Experience populating HBase tables via Phoenix to expose the data for Spotfire analytics.
  • Involved in designing Hive schemas using performance-tuning techniques like partitioning and bucketing.
  • Optimized HiveQL by using execution engines like Tez.
  • Imported and exported data between DBMS and HDFS/Hive using Sqoop.
  • Analyzed data using Hive queries and ran Pig scripts to study customer behavior.
  • Developed Pig UDFs to pre-process the data for analysis.
  • Used SFTP to transfer files to the server.
  • Experience in Talend Big Data Platform Studio; implemented financial-audit ETL transformation flows.
  • Implemented a Splunk Enterprise environment in the HDP cluster for log aggregation to analyze ecosystem issues.
  • Implemented AppDynamics in HDP to understand cluster behavior for alerting and monitoring.
  • Good understanding of Java Object Oriented Concepts and development of multi-tier enterprise web applications.
  • Strong knowledge of distributed systems architecture and parallel processing; in-depth understanding of the MapReduce programming paradigm and the Spark execution framework.
  • Knowledge of cloud Confidential like Azure and AWS EC2 instances.
  • Experience with Operating Systems like Windows, Linux, and Mac
  • Involved in designing and deploying multi-tier applications using AWS services (EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, IAM), focusing on high availability, fault tolerance, and auto-scaling with AWS CloudFormation.
  • Experience with real-time query processing using Cloudera Impala.
  • Strong working experience in planning and carrying out Teradata system extraction using Informatica, loading processes, data warehousing, large-scale database management, and re-engineering.
  • Highly experienced in creating complex Informatica mappings and workflows working with major transformations.
  • In-depth understanding of Apache Spark job-execution components such as the DAG, lineage graph, DAG scheduler, task scheduler, and stages; worked on NoSQL databases including HBase and MongoDB.
  • Experienced in performing CRUD operations using the HBase Java client API and the Solr API.
  • Good experience in working with cloud environment like Amazon Web Services (AWS) EC2 and S3.
  • Experience in Implementing Continuous Delivery pipeline with Maven, Ant, Jenkins, and AWS.
  • Good understanding of Spark architecture with Databricks and Structured Streaming; set up Databricks on AWS and Microsoft Azure, including Databricks Workspace for business analytics, managing clusters in Databricks, and managing the machine learning lifecycle.
  • Experience creating Docker containers leveraging existing Linux containers and AMIs, in addition to creating Docker containers from scratch.
  • Managed Docker orchestration and Docker containerization using Kubernetes.
  • Experience writing Shell scripts in Linux OS and integrating them with other solutions.
  • Strong Experience in working with Databases like Oracle 10g, DB2, SQL Server 2008 and MySQL and proficiency in writing complex SQL queries.
  • Experience in using PL/SQL to write Stored Procedures, Functions and Triggers.
  • Experience in automation and building CI/CD pipelines by using Jenkins and Chef.
  • Experience with Agile methodologies such as Scrum.
  • Keen interest in the newer technology stack that Google Cloud Platform (GCP) adds.
  • Researched and resolved issues related to Scrum/Kanban methodologies and process improvement.
  • Strong knowledge of Data Warehousing implementation concept in Redshift.
  • Very good exposure and rich experience in ETL tools like Alteryx, Matillion & SSIS
  • Experienced in migrating from on-premises to AWS using AWS Data Pipeline and AWS Firehose.
  • Experience writing Python scripts to spin up EMR clusters, along with shell scripting.
  • Experience in using SDLC methodologies like Waterfall, Agile Scrum for design and development.
  • Experience implementing Azure data solutions: provisioning storage accounts, Azure Data Factory, SQL Server, SQL Databases, SQL Data Warehouse, Azure Databricks, and Azure Cosmos DB.
  • Experience developing iterative algorithms using Spark Streaming in Scala and Python to build near-real-time dashboards.
  • Work experience in web-based application development, database programming, server-side programming, and client-server computing in multi-threaded software systems using C#, .NET 3.5, and .NET 4.5, with client-side programming using AngularJS 1.4/1.6, HTML5, and CSS3.
  • Experience designing web applications using HTML, HTML5, XML, XHTML, JavaScript, CSS, CSS3, and jQuery.
  • Experienced with R and Python (pandas, NumPy, scikit-learn) for statistical computing; also experienced with MLlib (Spark), MATLAB, Excel, Minitab, SPSS, and SAS.
  • Experienced in implementing Service-Oriented Architecture (SOA) using Web Services and JMS (Java Message Service).
  • Extensive experience loading and analyzing large datasets with the Hadoop framework (MapReduce, HDFS, Pig, Hive, Flume, Sqoop, Spark, Impala, Scala) and NoSQL databases such as MongoDB, HBase, and Cassandra.
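A minimal PySpark sketch of the kind of real-time pipeline referenced in the summary above: reading a Kafka topic with Structured Streaming and landing the stream on HDFS. The broker address, topic name, and paths are hypothetical placeholders, and the Kafka source assumes the spark-sql-kafka package is on the classpath.

```python
# Sketch: Kafka -> HDFS streaming ingest with PySpark Structured Streaming.
# Broker, topic, and paths are placeholders, not values from the resume.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-hdfs-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder broker
    .option("subscribe", "customer-events")              # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), col("timestamp"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/raw/customer_events")              # placeholder path
    .option("checkpointLocation", "hdfs:///checkpoints/customer_events")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```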

TECHNICAL SKILLS

Hadoop/BigData: HDFS, MapReduce, Hive, HBase, Pig, Sqoop, Flume, Oozie, Cassandra, YARN, Zookeeper, Spark SQL, Apache Spark, Impala, Apache Drill, Kafka, Elastic MapReduce Hadoop Frameworks Cloudera CDHs, Hortonworks HDPs, MAPR

Cloud Environment: AWS, Google Cloud

Java & J2EE Technologies: Core Java, Servlets, Java API, JDBC, Java Beans

IDE and Tools: Eclipse, Net beans, Maven, ANT, Hue (Cloudera Specific), Toad, Sonar, JDeveloper

Frameworks: MVC, Struts, Hibernate, Spring

Programming Languages: C, C++, Java, Scala, Python, Shell Scripting

Web Technologies: HTML, XML, DHTML, HTML5, CSS, JavaScript

Databases: MYSQL, DB2, MS-SQL Server, Oracle

NO SQL Databases: HBase, Cassandra, Mongo DB

Methodologies: Agile Software Development, Waterfall

Version Control Systems: GitHub, SVN, CVS, ClearCase

Operating Systems: Linux, Unix, Windows XP/Vista/7/8/10

PROFESSIONAL EXPERIENCE

Big Data/Hadoop Engineer

Confidential, CA

Responsibilities:

  • Implemented solutions for ingesting data from various sources and processing the Data-at-Rest utilizing Big Data technologies such as Hadoop, Map Reduce Frameworks, HBase, Hive.
  • Explored Spark to improve the performance and optimization of existing algorithms in Hadoop, using Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Experienced developing and maintaining ETL jobs.
  • Utilized Apache Spark with Python to develop and execute Big Data analytics and machine learning applications; executed machine learning use cases with Spark ML and MLlib.
  • Used Spark Streaming to receive real-time data from Kafka and store the streamed data to HDFS and NoSQL databases such as HBase and Cassandra using Scala.
  • Worked on Cassandra Data modelling, NoSQL Architecture, DSE Cassandra Database administration, Key space creation, Table creation, Secondary and Solr index creation, User creation & access administration.
  • Developed Spark code and Spark-SQL/Streaming for faster testing and processing of data.
  • Performed data profiling and transformation on the raw data using Pig, Python, and oracle
  • Experienced with batch processing of data sources using Apache Spark.
  • Developed predictive analytics using Apache Spark Scala APIs.
  • Created Hive external tables, loaded data into them, and queried the data using HQL (see the DDL sketch after this list).
  • Used Sqoop to efficiently transfer data between databases and HDFS and used Flume to stream the log data from servers.
  • Developed Spark code using Scala and Spark-SQL for faster testing and data processing.
  • Imported millions of structured records from relational databases using Sqoop to process them with Spark, and stored the data in HDFS in CSV format.
  • Developed Spark streaming application to pull data from cloud to Hive table.
  • Used Spark SQL to process the huge amount of structured data.
  • Developed Scala scripts, UDF's using both Data frames/SQL and RDD/MapReduce in Spark for Data Aggregation, queries and writing data back into RDBMS through Sqoop.
  • Developed Spark code using Scala and Spark-SQL/Streaming for faster processing of data.
  • Loaded data from different sources such as HDFS or HBase into Spark RDDs and implemented in-memory data computation to generate the output response.
  • Extracted files from MongoDB through Sqoop, placed them in HDFS, and processed them.
  • Developed complete end to end Big-data processing in Hadoop eco system.
  • Created automated python scripts to convert the data from different sources and to generate the ETL pipelines.
  • Configured StreamSets to store the converted data in SQL Server using JDBC drivers.
  • Converted existing Hive scripts to Spark applications using RDDs for transforming data and loading it into HDFS.
  • Extensively worked on Text, ORC, Avro, and Parquet file formats and compression techniques like Snappy, Gzip, and zlib.
  • Extensively used Hive optimization techniques like partitioning, bucketing, map-side joins, and parallel execution.
  • Worked with different tools to verify the Quality of the data transformations.
  • Worked with Spark-SQL context to create data frames to filter input data for model execution.
  • Configured the setup of the development and production environments.
  • Worked extensively on the Linux platform to set up the servers.
  • Extensively Worked on Amazon S3 for data storage and retrieval purposes.
  • Experienced in running query using Impala and used BI tools to run ad-hoc queries directly on Hadoop.
  • Worked with Alteryx, a data analytics tool, to develop workflows for the ETL jobs.
  • Implemented daily workflow for extraction, processing and analysis of data with Oozie.
  • Designed and implemented importing data to HDFS using Sqoop from different RDBMS servers.
  • Participated in requirement gathering for the project and documented the business requirements.
  • Experienced using Ansible scripts to deploy Cloudera CDH 5.4.1 to setup Hadoop Cluster.
  • Experienced in working with Cloud Computing Services, Networking between different Tenants.
  • Installed and Worked with Hive, Pig, Sqoop on the Hadoop cluster.
  • Developed HIVE queries to analyze the data imported to HDFS.
  • Extensive experience in Amazon Web Services (AWS) Cloud services such as EC2, VPC, S3, IAM, EBS, RDS, ELB, VPC, Route53, Ops Works, DynamoDB, Autoscaling, CloudFront, CloudTrail, CloudWatch, CloudFormation, Elastic Beanstalk, AWS SNS, AWS SQS, AWS SES, AWS SWF & AWS Direct Connect.
  • Worked with Sqoop commands to import the data from different databases.
  • Experience building the Kafka cluster setup required for the environment.
  • Dry-ran Ansible playbooks to provision the OpenStack cluster and deploy CDH parcels.
  • Experience using benchmarking tools such as TeraSort, TestDFSIO, and HiBench.
  • Supported MapReduce programs running on the cluster.
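As referenced above, a hedged sketch of the external-table pattern from this project: a partitioned Hive external table defined over HDFS data and queried with HQL through Spark. The database, table, columns, and HDFS location are illustrative assumptions, not values from the engagement.

```python
# Sketch: partitioned Hive external table plus an ad-hoc HQL query via Spark.
# Schema, table name, and HDFS location are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-external-table-sketch")
    .enableHiveSupport()          # assumes a configured Hive metastore
    .getOrCreate()
)

spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS analytics.customer_txn (
        txn_id     STRING,
        account_id STRING,
        amount     DOUBLE
    )
    PARTITIONED BY (txn_date STRING)
    STORED AS ORC
    LOCATION 'hdfs:///data/curated/customer_txn'
""")

# Register any partitions already present under the table location.
spark.sql("MSCK REPAIR TABLE analytics.customer_txn")

daily_totals = spark.sql("""
    SELECT txn_date, COUNT(*) AS txns, SUM(amount) AS total_amount
    FROM analytics.customer_txn
    GROUP BY txn_date
""")
daily_totals.show()
```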

Environment: Hadoop, HBase, HDFS, MapReduce, Teradata, SQL, Cloudera, Ganglia, Pig Latin, Sqoop, Hive, Pig, MySQL, Oozie, Flume, Informatica, Zookeeper, R, and Python

Sr Hadoop Engineer

Confidential

Responsibilities:

  • The framework runs on Python and uses Hive queries for data-related operations in Hadoop. The whole framework runs on a UNIX server with the help of shell scripts and is designed to accomplish the above requirements with some important additional features.
  • Sourcing and processing of data for any particular campaign can be done in one go, as the framework allows Hive queries to be executed one after another on the same dataset to produce the required output.
  • Allows business rules to be applied in the form of Hive queries.
  • Worked on AWS cloud environment on S3 storage and EC2 instances.
  • Assisted in upgrading, configuration and maintenance of various Hadoop infrastructures like Pig, Hive, and HBase.
  • Configured Flume to capture the news from various sources for testing the classifier.
  • Developed MapReduce jobs using various Input and output formats.
  • Developed workflow in Oozie to automate the tasks of loading the data into HDFS and pre-processing, analyzing and training the classifier using MapReduce jobs, Pig jobs and Hive jobs.
  • Developed Spark scripts by using Scala shell commands as per the requirement.
  • Involved in loading data into Cassandra NoSQL Database.
  • Developed Spark applications to move data into Cassandra tables from various sources such as relational databases or Hive (see the Cassandra sketch after this list).
  • Worked on Spark Streaming that collects data from Kafka in near real time, performs the necessary transformations and aggregations on the fly to build the common learner data model, and persists the data in Cassandra.
  • Worked on Cassandra Data modelling, NoSQL Architecture, DSE Cassandra Database administration, Key space creation, Table creation, Secondary and Solr index creation, User creation & access administration.
  • Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics); ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
  • Worked on performance tuning Cassandra clusters to optimize writes and reads.
  • Developed Python scripts and UDFs using both DataFrames/SQL and RDD/MapReduce in Spark for data aggregation and queries, and wrote data back into RDBMS through Sqoop.
  • Used Pig and Hive in the analysis of data.
  • Loaded the data into SparkRDD and performed in-memory data computation to generate the output response.
  • Developed Spark code using Scala and Spark-SQL/Streaming for faster testing and processing of data.
  • Performed integration and dataflow automation using NiFi, which allows a user to send, receive, route, transform, and sort data as needed.
  • Delivered the data from source to the analytical platform using NiFi.
  • Performed Sqoop transfers of various files through HBase tables to process data into several NoSQL DBs: Cassandra and MongoDB.
  • Managed the data flow from source to Kafka with NiFi.
  • Implemented Spark Storm builder topologies to perform cleansing operations before moving data into Cassandra.
  • Developed ETL workflow which pushes webserver logs to an Amazon S3 bucket.
  • Implemented Cassandra connection with the Resilient Distributed Datasets (local and cloud).
  • Importing and exporting data into HDFS and Hive.
  • Implemented ETL code to load data from multiple sources into HDFS using Pig Scripts.
  • Implemented Pig as ETL tool to do Transformations, event joins and some pre-aggregations before storing the data onto HDFS.
  • Worked on Talend ETL scripts to pull data from TSV files and Oracle Database into HDFS.
  • Worked extensively on the design, development, and deployment of Talend jobs to extract data, filter the data, and load it into the Data Lake.
  • Extracted data from source systems and transformed it into newer systems using Talend DI components.
  • Worked on Storm to handle parallelization, partitioning, and retrying on failures, and developed a data pipeline using Kafka and Storm to store data into HDFS.
  • Improved Spark performance and optimized existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Developed Spark code and Spark SQL/Streaming for faster testing and processing of data.
  • Supported MapReduce programs running on the cluster.
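A hedged sketch of the Hive-to-Cassandra movement mentioned above, assuming the DataStax Spark Cassandra Connector is available on the classpath; the host, keyspace, table, and column names are placeholders.

```python
# Sketch: copy a curated Hive table into a Cassandra table with the
# Spark Cassandra Connector. All names here are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-to-cassandra-sketch")
    .config("spark.cassandra.connection.host", "cassandra-node1")  # placeholder host
    .enableHiveSupport()
    .getOrCreate()
)

learner_df = spark.sql("""
    SELECT learner_id, course_id, event_time, score
    FROM curated.learner_activity
""")

(
    learner_df.write
    .format("org.apache.spark.sql.cassandra")
    .options(keyspace="learning", table="learner_activity")   # placeholder keyspace/table
    .mode("append")
    .save()
)
```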

Environment: Hadoop, HDFS, Cloudera, Python, AWS, Spark, YARN, MapReduce, Hive, Teradata SQL, PL/SQL, Pig, Talend, Data Lake, Data Integration 6.1/5.5.1 (ETL), Kafka, Sqoop, Oozie, HBase, Cassandra, Java, Scala, UNIX Shell Scripting

Data Engineer

Confidential, NY

Responsibilities:

  • Worked on the end-to-end machine learning workflow: wrote Python code for gathering data from AWS Snowflake, data pre-processing, feature extraction, feature engineering, modeling, model evaluation, and deployment.
  • Wrote Python code for exploratory data analysis using scikit-learn and other machine learning Python packages: NumPy, Pandas, Matplotlib, Seaborn, statsmodels, and pandas-profiling.
  • Trained a random forest algorithm on customer web-activity data from media applications to predict potential customers; worked with Google TensorFlow and the Keras API (convolutional neural networks) for classification problems.
  • Followed Agile methodology and participated in daily Scrum meetings.
  • Implemented Spark Scala and PySpark using DataFrames, RDDs, Datasets, and Spark SQL for data processing.
  • Worked in Azure Databricks to develop PySpark and Scala notebooks for Spark transformations.
  • Implemented PySpark jobs for batch analysis.
  • Worked on YAML scripting for orchestration.
  • Worked on stored procedures to retrieve data from the database.
  • Worked with XML and JSON content.
  • Worked on database stored procedures, functions, triggers, and views.
  • Used Git to track and maintain the different versions of the project.
  • Used Jenkins as the primary tool for implementing CI/CD during code releases.
  • Wrote code for feature engineering, principal component analysis (PCA), and hyperparameter tuning to improve the accuracy of the model.
  • Worked on various machine learning algorithms such as linear regression, logistic regression, decision trees, random forests, K-means clustering, support vector machines, and XGBoost, per client requirements.
  • Developed machine learning models using recurrent neural networks (LSTM) for time series and predictive analytics.
  • Developed machine learning models using the Google TensorFlow Keras API (convolutional neural networks) for classification problems; fine-tuned model performance by adjusting the epochs, batch size, and Adam optimizer.
  • Good knowledge of image-classification problems using Keras models with weights trained on ImageNet, such as VGG16, VGG19, ResNet, ResNetV2, and InceptionV3; knowledge of OpenCV for real-time computer vision.
  • Worked on natural language processing for document classification and text processing using NLTK, spaCy, and TextBlob to find sensitive information in electronically stored files and for text summarization.
  • Developed a Python automation script to consume data-subject requests from AWS Snowflake tables and post the data to the Adobe Analytics Privacy API.
  • Developed a Python script to automate data cataloging in the Alation data catalog tool; tagged all Personally Identifiable Information (PII) data in the Alation enterprise data catalog to identify sensitive consumer information.
  • Consumed the Adobe Analytics web API and wrote a Python script to bring Adobe consumer information for digital marketing into Snowflake; worked on Adobe Analytics ETL jobs.
  • Wrote stored procedures in AWS Snowflake to look for sensitive information across all data sources and hash the sensitive data with a salt value to anonymize it and comply with the CCPA.
  • Worked with the AWS boto3 API to make calls to Amazon Web Services such as S3, AWS Secrets Manager, and AWS SQS.
  • Created an integration to consume HBO consumer subscription information posted to AWS SQS (Simple Queue Service) and load it into Snowflake tables for data processing, storing the metadata in Postgres tables.
  • Worked on generating reports to provide WarnerMedia brands' consumer information to data subjects through Python automation jobs.
  • Implemented AWS Lambda functions and a Python script that pulls privacy files from AWS S3 buckets and posts them to the Malibu data privacy endpoints.
  • Involved in different phases of data acquisition, data collection, data cleaning, model development, model validation, and visualization to deliver solutions.
  • Worked with Python NumPy, SciPy, Pandas, Matplotlib, and stats packages to perform dataset manipulation, data mapping, data cleansing, and feature engineering; built and analyzed datasets using R and Python.
  • Extracted the data required for building models from the AWS Snowflake database; performed data cleaning, including transforming variables and dealing with missing values, and ensured data quality, consistency, and integrity using Pandas and NumPy.
  • Tackled a highly imbalanced fraud dataset using sampling techniques such as under-sampling and over-sampling with SMOTE in Python scikit-learn (see the sketch after this list).
  • Utilized PCA and other feature engineering techniques to reduce the high dimensional data, applied feature scaling, handled categorical attributes using one hot encoder of Scikit-learn library.
  • Developed various machine learning models such as Logistic regression, KNN, and Gradient Boosting with Pandas, NumPy, Seaborn, Matplotlib, Scikit-learn in Python.
  • Elucidated continuous-improvement opportunities for the current predictive modeling algorithms; proactively collaborated with business partners to determine identified population segments and develop actionable plans enabling the identification of patterns related to quality, use, cost, and other variables.
  • Experimented with ensemble methods to increase the accuracy of the training model with different bagging and boosting methods and deployed the model on AWS.
  • Created and maintained reports to display the status and performance of deployed model and algorithm with Tableau.
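A minimal sketch of the imbalanced-data workflow described above, assuming scikit-learn plus the imbalanced-learn package for SMOTE; the input file and the feature/label columns are illustrative placeholders.

```python
# Sketch: over-sample an imbalanced fraud dataset with SMOTE, then train and
# evaluate a random forest. File and column names are placeholders.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("fraud_sample.csv")          # placeholder input file
X = df.drop(columns=["is_fraud"])             # placeholder label column
y = df["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Over-sample only the training split so the test set stays untouched.
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X_train, y_train)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_resampled, y_resampled)

print(classification_report(y_test, model.predict(X_test)))
```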

Environment: Python, Postgres, AWS snowflake, Alation data catalog tool, snowsql, AWS EC2, S3, AWS lambda, AWS secrets manager, AWS SQS, Adobe analytics, Linux, Scikit-learn, SciPy, NumPy, Pandas, Matplotlib, Seaborn, JIRA, GitHub, Agile/ SCRUM.

Big Data Engineer

Confidential

Responsibilities:

  • Used Sqoop to import and export data between Oracle/PostgreSQL and HDFS for analysis.
  • Migrated existing MapReduce programs to Spark models using Python.
  • Migrated data from the Data Lake (Hive) into an S3 bucket.
  • Performed data validation between the data present in the Data Lake and the S3 bucket.
  • Used Spark Data Frame API over Cloudera platform to perform analytics on hive data.
  • Designed batch processing jobs using Apache Spark to increase speeds by ten-fold compared to that of MR jobs.
  • Used Kafka for real time data ingestion.
  • Created different topics for reading data in Kafka.
  • Read data from different topics in Kafka.
  • Involved in converting HQL queries into Spark transformations using Spark RDDs with Python and Scala.
  • Implemented Azure Data Factory operations and deployment into Azure for moving data from on-premises into cloud.
  • Moved data from S3 bucket to Snowflake Data Warehouse for generating the reports.
  • Written Hive queries for data analysis to meet the business requirements.
  • Migrated an existing on premises application to AWS.
  • Used AWS Cloud with Infrastructure Provisioning / Configuration.
  • Developed PIG Latin scripts to extract the data from the web server output files and to load into HDFS.
  • Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
  • Created many Spark UDFs and UDAFs in Hive for functions that did not already exist in Hive and Spark SQL (see the UDF sketch after this list).
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala.
  • Implemented different performance-optimization techniques such as using a distributed cache for small datasets, partitioning and bucketing in Hive, and map-side joins.
  • Good knowledge on Spark platform parameters like memory, cores, and executors
  • By using Zookeeper implementation in the cluster, provided concurrent access for Hive Tables with shared and exclusive locking.
  • Configured the monitoring solutions for the project using Data Dog for infrastructure, ELK for app logging.
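A hedged PySpark sketch of the custom-function pattern noted above: a Python UDF registered for use in Spark SQL over a Hive table. The normalization logic, table, and column names are illustrative assumptions.

```python
# Sketch: register a Python UDF for Spark SQL and use it in an HQL-style query.
# The cleanup logic and the raw.web_logs table are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = (
    SparkSession.builder
    .appName("spark-udf-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

def normalize_msisdn(raw):
    """Strip non-digits from a phone number and keep the last 10 digits."""
    if raw is None:
        return None
    digits = "".join(ch for ch in raw if ch.isdigit())
    return digits[-10:] if len(digits) >= 10 else digits

spark.udf.register("normalize_msisdn", normalize_msisdn, StringType())

cleaned = spark.sql("""
    SELECT normalize_msisdn(phone_number) AS msisdn, COUNT(*) AS events
    FROM raw.web_logs
    GROUP BY normalize_msisdn(phone_number)
""")
cleaned.show()
```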

Environment: Linux, Apache Hadoop Framework, HDFS, YARN, HIVE, HBASE, AWS (S3, EMR), Scala, Spark, SQOOP.

Data Analyst

Confidential

Responsibilities:

  • Documented the complete process flow to describe program development, logic, testing, implementation, application integration, and coding.
  • Recommended structural changes and enhancements to systems and Databases.
  • Conducted Design reviews and technical reviews with other project stakeholders.
  • Was a part of the complete life cycle of the project from the requirements to the production support.
  • Created test plan documents for all back-end database modules.
  • Used MS Excel, MS Access, and SQL to write and run various queries.
  • Worked extensively on creating tables, views, and SQL queries in MS SQL Server (see the sketch after this list).
  • Worked with internal architects, assisting in the development of current- and target-state data architectures.
  • Coordinated with business users to provide an appropriate, effective, and efficient way to design new reporting based on user needs and the existing functionality.
  • Remained knowledgeable in all areas of business operations to identify system needs and requirements.
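A small sketch of the kind of MS SQL Server view creation mentioned above, driven from Python via pyodbc; the connection string, view, and table names are hypothetical.

```python
# Sketch: create and query a reporting view in MS SQL Server through pyodbc.
# DSN, database, view, and table names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=reporting-db;DATABASE=Operations;Trusted_Connection=yes;"
)
cursor = conn.cursor()

cursor.execute("""
    CREATE VIEW dbo.vw_open_orders AS
    SELECT o.order_id, o.customer_id, o.order_date, o.status
    FROM dbo.orders AS o
    WHERE o.status = 'OPEN'
""")
conn.commit()

for row in cursor.execute("SELECT TOP 10 * FROM dbo.vw_open_orders"):
    print(row)
```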

Environment: SQL, SQL Server, MS Office, and MS Visio
