Snowflake Developer Resume

Shorthills, NJ

SUMMARY

  • Over 8 years of experience architecting and implementing very large-scale data intelligence solutions around Snowflake Data Warehouse, with solid experience in architecting, designing, and operationalizing large-scale data and analytics solutions on the Snowflake Cloud Data Warehouse.
  • Developed ETL pipelines in and out of the data warehouse using a combination of Python and Snowflake's SnowSQL, writing SQL queries against Snowflake.
  • Strong experience in migrating other databases to Snowflake.
  • Work with domain experts, engineers, and other data scientists to develop, implement, and improve upon existing systems.
  • Experience with Snowflake Multi-Cluster Warehouses.
  • Understanding of Snowflake cloud technology.
  • Developed entire frontend and backend modules using Python on Django Web Framework.
  • Experience with Snowflake cloud data warehouse and AWS S3 buckets for integrating data from multiple source systems, including loading nested JSON-formatted data into Snowflake tables.
  • Understanding of data pipelines and modern ways of automating them using cloud-based testing.
  • Understanding of Snowflake Internals and integration of Snowflake with other data processing and reporting technologies.
  • Experience in configuring Snowpipe (see the Snowpipe sketch after this list).
  • Experience with Snowflake SnowSQL and writing user-defined functions.
  • In-depth knowledge of Data Sharing in Snowflake.
  • Experience in using Snowflake Clone and Time Travel (a brief sketch follows this list).
  • Professional knowledge of AWS Redshift.
  • Participates in the development, improvement and maintenance of Snowflake database applications.
  • Experience in various methodologies like Waterfall and Agile.
  • In-depth knowledge of Snowflake Database, Schema, and Table structures.
  • Experience in developing SQL scripts and stored procedures that process data from databases.
  • Understanding of various data formats such as CSV, XML, JSON, etc.
  • Good knowledge of RDBMS concepts and the ability to write complex SQL and PL/SQL.
  • Design, develop, test, implement, and support Data Warehousing ETL using Talend.
  • Experience working in Snowflake database implementations.
  • Define roles and privileges required to access different database objects.
  • Define virtual warehouse sizing for Snowflake for different types of workloads.
  • Worked with cloud architect to set up the environment.
  • Familiar with data visualization tools (Tableau/Power BI).
  • Designs batch cycle procedures on major projects using scripting and control tools.
  • Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and Big Data modeling techniques using Python/Java.
  • Extensive experience in developing complex stored procedures/BTEQ queries.
  • Performance tuning of Big Data Workloads.
  • Working experience in Snowflake and data warehousing technologies (Hadoop HDFS, Hive, Sqoop, Spark, Scala, Python, Java, SQL, Impala, DataStage, Tableau).
  • Experience in designing, implementing, and maintaining reusable components for Extract, Transform, and Load (ETL) jobs.
  • Operationalize data ingestion, data transformation and data visualization for enterprise use.
  • Mentor and train junior team members and ensure coding standard is followed across the project.
  • Progressive experience in Big Data technologies and software programming and development, including design, integration, and maintenance.
  • Managed Amazon Web Services (AWS) projects while coaching the agile process and helping implement agile methodology.
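
For illustration only, a minimal SnowSQL sketch of the Snowpipe and nested-JSON loading pattern referenced above; the stage, bucket, table, and pipe names are hypothetical placeholders, not actual client objects.

    -- Hypothetical names; real buckets, formats, and tables vary by project.
    CREATE OR REPLACE FILE FORMAT json_ff TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE;

    CREATE OR REPLACE STAGE raw_json_stage
      URL = 's3://example-bucket/events/'
      CREDENTIALS = (AWS_KEY_ID = '<key>' AWS_SECRET_KEY = '<secret>')
      FILE_FORMAT = json_ff;

    CREATE OR REPLACE TABLE raw_events (
      payload   VARIANT,
      loaded_at TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
    );

    -- Snowpipe picks up new files automatically once AUTO_INGEST is wired to S3 event notifications.
    CREATE OR REPLACE PIPE raw_events_pipe AUTO_INGEST = TRUE AS
      COPY INTO raw_events (payload)
      FROM @raw_json_stage;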

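Likewise, a hedged sketch of the zero-copy Clone and Time Travel usage noted above; the database and table names are placeholders.

    -- Zero-copy clone of a production database for development/testing (hypothetical names).
    CREATE DATABASE analytics_dev CLONE analytics_prod;

    -- Time Travel: query a table as it existed 30 minutes ago, or restore a dropped table.
    SELECT * FROM orders AT (OFFSET => -60*30);
    UNDROP TABLE orders;
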
TECHNICAL SKILLS

Cloud Technologies: Snowflake, SnowSQL, Snowpipe, AWS.

Big Data Technologies: Spark, Hive, LLAP, Beeline, HDFS, MapReduce, Pig, Sqoop, HBase, Oozie, Flume

Reporting Systems: Splunk

Hadoop Distributions: Cloudera, Hortonworks

Programming Languages: Scala, Python, Perl, Shell scripting.

Data Warehousing: Snowflake, Redshift, Teradata

DBMS: Oracle, SQL Server, MySQL, DB2

Operating System: Windows, Linux, Solaris, CentOS, OS X

IDEs: Eclipse, NetBeans.

Servers: Apache Tomcat

PROFESSIONAL EXPERIENCE

Confidential, Shorthills, NJ

SNOWFLAKE DEVELOPER

Responsibilities:

  • Identified design flaws in the overall architecture and suggested modifications such as query optimization techniques.
  • Extensive hands-on expertise with SQL and SQL analytics.
  • Expertise in migrating Hive scripts to Snowflake.
  • Perform data entry, build, deployment, and project implementation activities.
  • Work closely with application developers and administrators to create and improve data flows between internal/external systems and the data warehouse.
  • Engage with partners in testing, release management, and operations to ensure quality of code development, deployment and post-production support.
  • Perform tuning, testing, and problem analysis.
  • Developing stored procedures, views, and adding/changing tables for data load, transformation, and data extraction (see the sketch after this list).
  • Led the team during the development phase, performing code reviews and analyzing programs to identify and fix application issues.
  • Developing Shell script programs for the job/process automation in UNIX environment.
  • Assisting the team with performance tuning for ETL and database processes.
  • Experience with a variety of database technologies, including Oracle RDBMS and MS SQL Server.
  • Provide data analysis and identify data-related issues within the Data warehouse environment and source systems as needed.
  • Participate in design reviews and provide input to the design recommendations.
  • Experience working with offshore teams (giving directions, doing code reviews, etc.).
  • Experience with Unix/Linux.
  • Develop views and queries for application use.
  • In-depth knowledge of Snowflake database, schema, and table structures.
  • Familiar with data visualization tools (Power BI).
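
A simplified sketch of the view and stored-procedure load pattern described above; the schema, table, and procedure names are illustrative assumptions rather than actual client objects.

    -- Illustrative reporting view over a raw table (hypothetical names).
    CREATE OR REPLACE VIEW stg.v_daily_sales AS
      SELECT order_date, region, SUM(amount) AS total_amount
      FROM raw.sales
      GROUP BY order_date, region;

    -- Illustrative Snowflake Scripting procedure that loads the view into a warehouse table.
    CREATE OR REPLACE PROCEDURE stg.load_daily_sales()
    RETURNS STRING
    LANGUAGE SQL
    AS
    $$
    BEGIN
      INSERT INTO dw.daily_sales (order_date, region, total_amount)
        SELECT order_date, region, total_amount FROM stg.v_daily_sales;
      RETURN 'load complete';
    END;
    $$;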

Confidential, Dayton, Ohio

SENIOR SNOWFLAKE DEVELOPER

Responsibilities:

  • Understanding data transformation and translation requirements, and which tools to leverage to get the job done.
  • Expertise in data privacy, security, and data streaming & consumption in a multi-cloud environment.
  • Experienced in importing data from various sources using StreamSets.
  • Experience implementing ETL pipelines using custom and packaged tools.
  • AWS EC2, EBS, Trusted Advisor, S3, CloudWatch, CloudFront, IAM, Security Groups, Auto Scaling.
  • Design and Develop Informatica processes to extract the data from Juris Feed on a daily basis.
  • Understanding of complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools.
  • Used DataStage director to run the jobs manually.
  • Used the DataStage Director and its run-time engine to schedule the jobs, testing and debugging its components, and monitoring the resulting executable versions.
  • Extensively worked on Spark using Scala on a cluster for computational analytics, making use of Spark with Hive and SQL/Oracle/Snowflake.
  • Wrote various data normalization jobs for new data ingested into Redshift.
  • Advanced knowledge on Confidential Redshift and MPP database concepts.
  • Worked on Letter generated programs using C and UNIX shell scripting.
  • Configured StreamSets to store the converted data to SQL Server using JDBC drivers.
  • Oversee the migration of data from legacy systems to new solutions.
  • Monitor the system performance by performing regular tests, troubleshooting and integrating new features.
  • Implemented Change Data Capture (CDC) in Talend to load deltas to a Data Warehouse (see the MERGE sketch after this list).
  • Collaborate with IT Security to ensure necessary controls to Cloud services are deployed and tested.
  • Experience with Agile and DevOps concepts.
  • Wrote ETL jobs to read from web APIs using REST and HTTP calls and loaded the data into HDFS using Java and Talend.
  • Understanding and hands on experience of operating system, e.g. Unix/ Linux.
  • Experience in PL/SQL development and performance tuning.
  • Worked on Oracle databases, Redshift, and Snowflake.
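
A hedged example of the delta-load (CDC) pattern referenced above, expressed as a warehouse-side MERGE; the staging table, target table, and key columns are hypothetical.

    -- Apply captured deltas from a staging table to the warehouse target (hypothetical names).
    MERGE INTO dw.customer AS tgt
    USING stg.customer_delta AS src
      ON tgt.customer_id = src.customer_id
    WHEN MATCHED AND src.op = 'D' THEN DELETE
    WHEN MATCHED THEN UPDATE SET
      tgt.name = src.name, tgt.email = src.email, tgt.updated_at = src.updated_at
    WHEN NOT MATCHED AND src.op <> 'D' THEN
      INSERT (customer_id, name, email, updated_at)
      VALUES (src.customer_id, src.name, src.email, src.updated_at);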

Confidential, Phoenix, AZ

SNOWFLAKE DEVELOPER

Responsibilities:

  • Drove the replacement of other data platform technologies with Snowflake at the lowest TCO, with no compromise on performance, quality, or scalability.
  • Worked on SnowSQL and Snowpipe.
  • Designed a cloud-based data warehouse on Snowflake architecture, including data loads using ELT tools like Snowpipe, used for data analytics and dashboarding.
  • Developed an end-to-end cloud-based data warehouse which includes loading data from various third-party applications.
  • Created SnowPipes, databases, schemas, objects, UDFs and Shares using Snowflake.
  • Designing and working on proof of concept for integrating new bolt on applications.
  • Supporting and ensuring smooth deployment of releases.
  • Converted Talend Joblets to support the Snowflake functionality.
  • Created data sharing between two Snowflake accounts (Prod to Dev); see the sketch after this list. Redesigned the views in Snowflake to increase performance.
  • Unit tested the data between S3 and Snowflake.
  • Wrote SQL queries against Snowflake and developed scripts in Unix, Python, etc.
  • Extract, Load, and Transform (ELT) experience in data management (e.g., DW/BI) solutions.
  • Consulted on Snowflake data platform solution architecture, design, development, and deployment, focused on bringing a data-driven culture across the enterprise.
  • Experience in various data ingestion patterns to Hadoop.
  • Created data access modules in python.
  • Troubleshooting DataStage user login issues.
  • Used Informatica to parse XML data into the data mart structures further utilized for reporting needs.
  • Installed DataStage clients/servers and maintained metadata in repositories.
  • Used DataStage Manager for importing metadata from the repository, creating new job categories, and creating new data elements.
  • Automated JIRA processes using Python and bash scripts.
  • Wrote Python routines to log into websites and fetch data for selected options.
  • Automated AWS S3 data upload / download using Python scripts.
  • Worked on AWS Data Pipeline to configure data loads from S3 into Redshift.
  • Used JSON schema to define table and column mapping from S3 data to Redshift.
  • Experience using Sqoop to ingest data from relational databases into Hive.
  • Worked on data streaming and ETL tools like StreamSets Data Collector and Attunity Replicate.
  • Strong hands-on experience with AWS cloud services like EC2, S3, RDS, ELB, and EBS for installation and configuration.
  • Heavily involved in testing Snowflake to understand best possible way to use the cloud resources.
  • Cloned Production data for code modifications and testing.
  • Used the FLATTEN table function to produce a lateral view of VARIANT, OBJECT, and ARRAY columns (illustrated after this list).
  • Created Talend Mappings to populate the data into dimensions and fact tables.
  • Used Temporary and Transient tables on different datasets.
  • Developed a data warehouse model in Snowflake for over 100 datasets using WhereScape.
  • Experience in designing Star and Snowflake schemas and database modeling using the Erwin tool.
  • Converted the SQL Server Stored procedures and views to work with Snowflake.
  • Migrated Data from SQL Server 2016 to Snowflake using ELT Maestro.
  • Experience with Snowflake cloud data warehouse and AWS S3 buckets for integrating data from multiple source systems, including loading nested JSON-formatted data into Snowflake tables.
  • Performed bulk loads of JSON data from S3 buckets to Snowflake.
  • Used Snowflake functions to perform semi-structured data parsing entirely with SQL statements.
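
For illustration, a minimal sketch of the S3 JSON bulk load and FLATTEN usage noted above; the stage, table, and path names are assumptions.

    -- Bulk load JSON from an S3 stage into a VARIANT column (hypothetical names).
    COPY INTO raw.orders_json (payload)
      FROM @raw_json_stage/orders/
      FILE_FORMAT = (TYPE = 'JSON');

    -- Lateral FLATTEN to expose nested array elements as rows.
    SELECT o.payload:order_id::STRING AS order_id,
           li.value:sku::STRING       AS sku,
           li.value:qty::NUMBER       AS qty
    FROM raw.orders_json o,
         LATERAL FLATTEN(INPUT => o.payload:line_items) li;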

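Similarly, a hedged sketch of the cross-account data sharing mentioned above; the share, database, and account identifiers are placeholders.

    -- On the producer (Prod) account: create and grant a share (hypothetical identifiers).
    CREATE SHARE analytics_share;
    GRANT USAGE ON DATABASE analytics TO SHARE analytics_share;
    GRANT USAGE ON SCHEMA analytics.public TO SHARE analytics_share;
    GRANT SELECT ON ALL TABLES IN SCHEMA analytics.public TO SHARE analytics_share;
    ALTER SHARE analytics_share ADD ACCOUNTS = myorg_dev;

    -- On the consumer (Dev) account: create a read-only database from the share.
    CREATE DATABASE analytics_shared FROM SHARE myorg_prod.analytics_share;
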
Confidential, Mountain View, CA

SENIOR DATA ENGINEER

Responsibilities:

  • Developed Spark jobs in Java to perform ETL from SQL Server to Hadoop.
  • Designed HBase schema based on data access patterns. Used Phoenix to add a SQL layer on HBase tables and created indexes on Phoenix tables for optimization (see the Phoenix sketch after this list).
  • Integrated Spark with Drools Engine to implement business rule management system.
  • Benchmarked compression algorithms and file formats (Avro, ORC, Parquet, and Sequence File) for Hive, MapReduce, Spark, and HBase.
  • Analyzed stored procedures to convert business logic into Hadoop jobs.
  • Used SSIS and SSRS for BI projects.
  • Worked on various POCs, including Apache NiFi as a data flow/orchestration engine, Talend as ESB and Big Data solutions, Elasticsearch as an indexing engine for MDM, and SMART on FHIR with MongoDB for a FHIR-based app interface.
  • Used Cassandra to build next generation storage platform. Employed CQL for data manipulation.
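
A brief, illustrative Phoenix SQL sketch of the HBase access pattern described above; the table, columns, and index names are hypothetical.

    -- Phoenix table over HBase, keyed for the dominant access pattern (hypothetical names).
    CREATE TABLE IF NOT EXISTS patient_events (
        patient_id  VARCHAR NOT NULL,
        event_ts    TIMESTAMP NOT NULL,
        event_type  VARCHAR,
        payload     VARCHAR
        CONSTRAINT pk PRIMARY KEY (patient_id, event_ts)
    );

    -- Secondary index to serve lookups by event_type without full table scans.
    CREATE INDEX IF NOT EXISTS idx_event_type ON patient_events (event_type) INCLUDE (payload);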
