Big Data Developer Resume
Minnesota, MN
CAREER SUMMARY:
- IBM MQ/MB and Hortonworks HDPCD certified Big Data Developer offering 8+ years of diverse IT experience across all stages of the Software Development Life Cycle (SDLC), with excellent working knowledge of Retail business processes.
- Good understanding of the Retail domain, with extensive involvement in requirements gathering, providing solutions for integration modules and translating requirements into technical designs for implementation.
- Good knowledge of major Retail functions such as Supply Chain, Inventory and Merchandise Management, Order Management and Point of Sale (POS).
- Working at a client location for more than 3 years and leading a team in an onsite-offshore model. A passionate developer, eager to learn new technologies and apply them in real-world scenarios.
- Expert in SOA and ESB implementations using multiple technologies such as Java and IBM Message Broker.
- Proficient in designing, developing and supporting applications built on IBM MQ/MB technologies.
- Developed custom SFTP programs in Java for enterprise-level file transfers with external systems (see the sketch at the end of this summary).
- Developed reusable Java integration utilities such as file-to-queue and file-to-database programs.
- Worked closely with MQ Administrators to set up queue topology across many projects.
- Good knowledge of UNIX scripting, Core Java, XML and JSON technologies.
- Developed a complex central hub application in IBM Message Broker that integrates with many downstream systems such as Contract Management, Tax, Commissions, Demand Planning and Supply Chain.
- 3 years of strong experience in developing Big Data solutions using technologies such as Apache Pig, Apache Nifi, Apache Spark, Apache Hive, MapReduce, Sqoop and Apache Flume.
- Experience importing data from relational databases into Hadoop HDFS using Sqoop.
- Experience in data analysis using Apache Pig, Hive and Spark.
- Experience in writing MapReduce programs and Pig UDFs.
- Experience in developing data ingestion and processing workflows using Apache Nifi.
- Experience in the design and development of analytics solutions using Apache Spark.
- Interact with clients and application stakeholders for requirements gathering and analysis, followed by creation of functional designs and process flow diagrams.
- Design and develop big data solutions for end-to-end data integration and analytics use cases using various big data tools (Apache Pig, Apache Hive, Apache Spark, Apache Nifi, Apache Flume, Sqoop, Hadoop etc.) and cloud platforms such as Amazon AWS and Microsoft Azure.
- Support installation, configuration and tuning of the various big data technologies comprising a solution.
- Identify customer needs and requirements at a detailed level and map those requirements back to the proposed solutions.
- Participate in solution analysis and design, including technology selection and level-of-effort estimation.
- Ensure the SDLC or Agile process is followed throughout the project life cycle.
- Understand all elements of a client's technical computing environment, and work with vendor IT teams to integrate the various solution components as required. Assist clients with troubleshooting of existing solutions related to performance, scalability and maintenance.
- Code development and deployments across all environments.
- Support the talent screening and acquisition process for the Big Data practice. Attend industry conferences and perform hands-on evaluation of new products/services.
- Suggest value additions to clients and address improvements in problem areas.
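The file-transfer utilities mentioned above were written in Java. The following is a minimal illustrative sketch of such an SFTP push, assuming the JSch library is used; the host, credentials and file paths are placeholders, not details from an actual engagement:

    import com.jcraft.jsch.ChannelSftp;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    public class SftpFilePush {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; a real utility would read these from configuration.
            JSch jsch = new JSch();
            Session session = jsch.getSession("integration_user", "sftp.partner.example.com", 22);
            session.setPassword("changeit");
            session.setConfig("StrictHostKeyChecking", "no");
            session.connect();

            // Open an SFTP channel and push the local file to the partner's inbound directory.
            ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
            sftp.connect();
            sftp.put("/data/outbound/inventory_feed.xml", "/inbound/inventory_feed.xml");

            sftp.exit();
            session.disconnect();
        }
    }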
CORE COMPETENCIES:
- Requirement analysis & scoping
- IBM Message Queue
- IBM Message Broker
- Java SE
- XML
- JSON
- UNIX
- Pig
- Hive
- Sqoop
- MapReduce
- Team Management
- Problem Solving & Troubleshooting Skills
- Excellent Communication Skills
- REST Services using Apache HttpClient
- Apache Nifi
- Apache Spark
- Agile Methodologies
PROFESSIONAL EXPERIENCE:
Confidential, Minnesota, MN
Big Data Developer
Environment: Spark, Hive, Sqoop, Unix, Cloudera
Responsibilities:
- Analyze the mapping documents of all the systems and extract the common fields along with their transformation logic.
- Built Spark jobs to extract data from Hive, perform transformations and load the results back into Hive (see the sketch after this list).
- Built Spark jobs to extract data from Hive and Splice Machine, perform transformations and load the results into Splice Machine.
- Built custom deployment shell scripts to deploy jars, configuration and SQL files into the Hadoop cluster.
- Extracted data from the Teradata source using Sqoop and loaded it into Hive tables/views.
- Created a standard ETL framework in Spark (Scala) that was used across the enterprise to perform data loads between Hive and Hive, Hive and Splice Machine, and Hive and Teradata, in both directions.
- Coordinated with all the systems in standardizing the transformations for implementation.
- Built Spark jobs to extract, transform and load the data into target Hive tables as per the business requirements.
- Created wrapper scripts to invoke the Spark jobs and pass parameters to them dynamically.
- Set up Control-M scheduling to run the Spark batch jobs.
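A minimal sketch of the kind of Hive-to-Hive Spark load described above. The actual framework was written in Scala; this simplified version uses Spark's Java API for consistency with the other examples in this resume, and the database, table and column names are placeholders:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class HiveToHiveLoad {
        public static void main(String[] args) {
            // enableHiveSupport() lets spark.sql() read and write Hive-managed tables.
            SparkSession spark = SparkSession.builder()
                    .appName("hive-to-hive-load")
                    .enableHiveSupport()
                    .getOrCreate();

            // Read from a source Hive table and apply a simple transformation
            // standing in for the mapping-document logic.
            Dataset<Row> source = spark.sql(
                    "SELECT order_id, store_id, sale_amt FROM stage_db.sales_orders");
            Dataset<Row> transformed = source.filter("sale_amt > 0");

            // Overwrite the target Hive table with the transformed result.
            transformed.write()
                    .mode("overwrite")
                    .saveAsTable("target_db.sales_orders_clean");

            spark.stop();
        }
    }

In practice a job like this would be launched through spark-submit by the wrapper scripts and Control-M schedule mentioned above.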
Confidential, Kirkland, WA
Azure Developer
Environment: .NET, Unix, Powershell, Azure Bot Framework, Azure Cloud, Azure Service Bus queues and topics
Responsibilities:
- Worked with Confidential and the client to convert their business requirements into a technical solution for implementation.
- Designed the chatbot architecture along with Confidential using the Azure Bot Framework.
- Built a custom WebJob component called DialogueManager in Azure to manage/restart conversations between the front-end application and the Bot service hosted in Azure.
- Participated in daily scrum, story planning and product backlog creation.
- Built a REST Web API component to access Azure Cognitive APIs, i.e. the Azure QA Framework, Azure LUIS service, Azure Bot service etc.
- Built Service Bus queues and topics in Azure to broker messages between the front-end app and the DialogueManager.
- Set up a Redis cache in Azure to persist conversation state for each user interacting with the bot, so that the conversation can be restarted if the connection is lost.
- Worked with DevOps team to deploy the solution in Azure.
Confidential, Kirkland, WA
Big Data Developer and Consultant
Environment: Pig, Hive, MapReduce, Sqoop, Flume, Spark, Unix, AWS, Nifi, Core Java, XML, JSON, Docker, Mongo DB
Responsibilities:
- Identify client requirements and convert them into a technical solution for implementation.
- Design an end-to-end solution using big data technologies.
- Used Sqoop to load the data from Oracle into Hive tables.
- Built Nifi workflows to read data from an FTP location, process it as per the transformation rules using multiple nodes, and load it into target MongoDB collections.
- Built custom Apache Nifi Docker images per client requirements and installed them on Amazon AWS.
- Worked with the DevOps architect to design a CI/CD solution for Big Data technologies.
- Created deployment scripts to automatically deploy Nifi workflows in the production environment.
- Participated in daily scrum, story planning and product backlog creation.
- Worked with the Test Lead to create test cases and scenarios to test the Big Data technologies.
- Worked with the DevOps architect and Infrastructure architect in creating production deployment strategies.
Confidential, Kirkland, WA
Big Data Developer and Consultant
Environment: Hive, MapReduce, Sqoop, Flume, Microsoft Azure, Nifi
Responsibilities:
- Worked with the client to identify requirements and convert them into a technical solution for implementation.
- Designed an end-to-end solution using big data technologies.
- Designed and developed Apache Nifi workflows.
- Wrote Hive queries and loaded data into target tables using Apache Nifi.
- Participated in daily scrum, story planning and product backlog creation.
- Provided consultation to the client on the big data technologies to be used for the new solution.
- Worked with the client to install and configure big data technologies in their cloud environment.
- Assisted the client by providing custom code for one of the Apache Nifi processors as and when a new version of Apache Nifi was released (see the processor sketch after this list).
- Worked with the client's IT team to lay down the test cases and scenarios to test the POC solution.
- Created deployment scripts to deploy the solution in the cloud.
- Deployed the Big Data solution in Azure.
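A minimal sketch of what a custom Apache Nifi processor like the one mentioned above can look like. This is an illustrative example built against Nifi's public processor API; the processor name, attribute and routing logic are placeholders rather than the client's actual code:

    import java.util.Collections;
    import java.util.Set;

    import org.apache.nifi.annotation.documentation.CapabilityDescription;
    import org.apache.nifi.annotation.documentation.Tags;
    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.Relationship;
    import org.apache.nifi.processor.exception.ProcessException;

    @Tags({"example", "custom"})
    @CapabilityDescription("Tags each FlowFile with a custom attribute before routing it downstream.")
    public class TagFlowFileProcessor extends AbstractProcessor {

        static final Relationship REL_SUCCESS = new Relationship.Builder()
                .name("success")
                .description("FlowFiles are routed here after tagging")
                .build();

        @Override
        public Set<Relationship> getRelationships() {
            return Collections.singleton(REL_SUCCESS);
        }

        @Override
        public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
            FlowFile flowFile = session.get();
            if (flowFile == null) {
                return;
            }
            // A real processor would apply the transformation rules here;
            // this sketch only stamps a marker attribute on the FlowFile.
            flowFile = session.putAttribute(flowFile, "processed.by", "custom-processor");
            session.transfer(flowFile, REL_SUCCESS);
        }
    }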
Confidential, Richfield, MN
Senior Software Engineer
Environment: Core Java, IBM MQ/MB, UNIX, XML, Manhattan, Oracle
Responsibilities:
- Understand the existing supply chain processes, i.e. order fulfilment, inventory management, transportation management and many more.
- Interact with users, architects, vendors and stakeholders to understand requirements and assimilate existing processes into the new system.
- Requirement Analysis and Functional Design creation.
- Delegating technical designs to offshore team for development.
- Engage the MQ Infrastructure team to set up a queue-based environment for supporting integration with the new vendor system.
- Assist Data Architect team in mapping integration components with new system.
- Participate in deployment planning meetings to decide the rollout strategy across multiple warehouses; also responsible for laying out the integration deployment plan.
- Development of feeds in Java and IBM MB along with bug fixes in QA.
- Prepare weekly status reports and communicate them to the Program Manager.
- Effort Estimation for all the project activities.
- Developed HTTP client programs to interact with RESTful services using the Apache HttpClient API (see the sketch after this list).
- Set up the ELK stack on the log files of every interface for better visibility to the Business team.
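A minimal sketch of the kind of REST call made with Apache HttpClient mentioned above, assuming the 4.x API; the endpoint URL is a placeholder:

    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.util.EntityUtils;

    public class RestClientExample {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint; real programs would read URLs and credentials from configuration.
            HttpGet request = new HttpGet("https://services.example.com/api/orders/12345");
            request.addHeader("Accept", "application/json");

            try (CloseableHttpClient client = HttpClients.createDefault();
                 CloseableHttpResponse response = client.execute(request)) {
                int status = response.getStatusLine().getStatusCode();
                String body = EntityUtils.toString(response.getEntity());
                System.out.println("HTTP " + status + ": " + body);
            }
        }
    }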