Solution Architect And Lead Developer Resume
PROFILE SUMMARY:
- Solution Architect/Lead Developer/IT Consultant with over 18 years of experience in the IT industry.
- Successfully architected/designed/led/developed enterprise projects covering various aspects of software development, processes, and methodology.
- Extensive experience in building and hosting highly scalable, enterprise-level BigData, J2EE middleware, and microservices applications in the cloud.
- Ability to envision, model, and deliver technical solutions for real-world business problems.
TECHNICAL SKILLS:
BigData: Apache Spark, Spark SQL, Spark Streaming, Scala, Kafka, HIVE, HBase, PIG, MapReduce, Hadoop, Flume, Sqoop, HDFS, YARN, Oozie, Zookeeper.
Java EE: Java, J2EE, EJB, Struts, Confidential, Servlet, JSP, XML, XSL, XSLT, JMS, JDBC, JNDI, and JAXB.
Cloud: Confidential Bluemix, SoftLayer.
SOA: Microservices, RESTful web services, JAX-RPC, JAX-WS, and SOAP web services.
ETL: Confidential Datastage, Confidential Cognos.
UX: HTML, CSS, JavaScript, jQuery, Ajax, AngularJS.
Build/CI Tools: DevOps, Jenkins, UCD, Apache Ant, Maven.
RIA : Adobe Flex, Action Script, MATE, GWT, Smart GWT.
IDE: RAD, RSA, WPF, Eclipse, Bootstrap, Flex Builder.
NOSQL: Confidential Cloudant, HBase, Cassandra, CouchDB
RDBMS: DB2, DB2/400, Oracle
Commerce: Confidential Websphere Commerce 7/6/5, Endeca Faceted Search.
Web/Application Servers: Websphere Liberty 16.x/8.x, TWAS 8.x/7.x/6.x, IHS, WPS, Weblogic, Apache Tomcat
Version Control: Git, Stash, RTC, RAM, CVS, PVCS.
OS: Windows, UNIX, Linux, AIX.
Test & Code Coverage: JUnit, MRUnit, SonarQube, Karma.
Project Management: Agile, Scrum, Jira, RTC.
PROFESSIONAL EXPERIENCE:
Confidential
Solution Architect and Lead Developer
Responsibilities:
- Involved in full software development life cycle including requirement analysis, solution architecture, development, writing coding standards, code reviews, performance tuning and build processes.
- Worked closely with business analysts to understand the requirements and analyze the data.
- Developed Sqoop jobs to load data from different sources such as DB2, Oracle, and flat files into partitioned Hive tables, and to export the analyzed data back for visualization and report generation by the BI team.
- Developed Spark Scala scripts and UDFs using Spark SQL, DataFrames, and RDDs for data cleansing and aggregation queries.
- Developed Spark Scala programs to process different file formats, e.g., Sequence Files, ORC, Avro, JSON, and Parquet.
- Optimized existing Hadoop algorithms using Spark Context, Spark SQL, DataFrames, and pair RDDs.
- Collected JSON data from an HTTP source and developed Spark APIs to perform inserts and updates on Hive tables.
- Extensively used big data analytics and processing tools (Hive, Spark Core, Spark SQL) for batch processing of large data sets on a Hadoop cluster hosted on the Confidential Cloud.
- Tuned the performance of Spark applications by right-sizing executor counts, memory, and CPU allocation.
- Developed a Flume ETL job handling data from an HTTP source through an Avro sink into HDFS.
- Extensively used HiveQL DDL statements to create partitioned and bucketed Hive tables in Avro and Parquet file formats with Snappy compression.
- Implemented optimized Hive joins to gather data from different sources and run ad-hoc queries on top of them.
- Wrote Hive generic UDFs in Java to perform business-logic operations at the record level.
- Involved in performance tuning of Hive from design, storage, and query perspectives.
- Developed Spark Scala applications using the Kafka Consumer API for data processing.
- Developed dashboards, visualizations, reports for the business users.
- Configured Oozie workflow jobs to run multiple Spark, HiveQL, MapReduce, and Pig jobs.
- Developed Kafka consumers in Java using the Consumer API to consume data from Kafka topics.
- Designed and implemented MapReduce jobs to support distributed processing using Java, Hive, and Apache Pig.
- Tested Hive queries that helped spot emerging trends by comparing fresh data with historical data ingested into the Hadoop ecosystem.
- Involved in POCs to build an efficient real-time data processing pipeline that ingests and processes streaming data using Spark Streaming, Kafka, and HDFS.
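The Sqoop-based ingestion described in the bullets above typically reduces to a single import command per source table. This is an illustrative sketch only; the connection string, credentials, and table/partition names are hypothetical placeholders, not the actual engagement's values:

```shell
# Illustrative only: import a DB2 table into a partitioned Hive table.
sqoop import \
  --connect jdbc:db2://db2host:50000/SALESDB \
  --username etl_user -P \
  --table ORDERS \
  --split-by ORDER_ID \
  --num-mappers 4 \
  --hive-import \
  --hive-table staging.orders \
  --hive-partition-key load_date \
  --hive-partition-value 2017-06-01
```

The reverse path (exporting analyzed Hive data back to the RDBMS for the BI team) uses `sqoop export` with `--export-dir` pointing at the table's warehouse files.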
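The record-level cleansing logic behind the Spark SQL and Hive UDF bullets above can be sketched as a plain Java method. Class, method, and field semantics here are hypothetical examples, kept free of Hadoop/Spark imports so the business logic itself is unit-testable; in practice this would sit inside a Hive GenericUDF's evaluate() or be registered as a Spark SQL UDF:

```java
// Hypothetical example of record-level cleansing logic for a UDF.
public class AmountCleanser {
    // Normalize a free-text amount field: trim whitespace, strip currency
    // symbols and thousands separators; return null for blank or
    // unparseable input so downstream aggregations can filter it out.
    public static Double clean(String raw) {
        if (raw == null) return null;
        String s = raw.trim().replaceAll("[$,]", "");
        if (s.isEmpty()) return null;
        try {
            return Double.parseDouble(s);
        } catch (NumberFormatException e) {
            return null;
        }
    }
}
```

Keeping the logic in a dependency-free method like this lets the same code back both a Hive UDF and a Spark UDF, and makes it easy to cover with JUnit.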
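The executor/memory tuning mentioned above usually comes down to spark-submit sizing flags. The numbers below are placeholders to be tuned per workload and cluster capacity, and the class and jar names are hypothetical:

```shell
# All values illustrative; tune per workload and cluster capacity.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 20 \
  --executor-cores 4 \
  --executor-memory 8g \
  --driver-memory 4g \
  --conf spark.sql.shuffle.partitions=200 \
  --class com.example.BatchJob \
  batch-job.jar
```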
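The partitioned/bucketed Hive DDL and the optimized-join bullets above can be sketched in HiveQL. The schema, table names, and bucket count are hypothetical illustrations:

```sql
-- Sketch with a hypothetical schema: a partitioned, bucketed table stored
-- as Parquet with Snappy compression.
CREATE TABLE IF NOT EXISTS orders_clean (
  order_id    BIGINT,
  customer_id BIGINT,
  region_id   INT,
  amount      DOUBLE
)
PARTITIONED BY (load_date STRING)
CLUSTERED BY (customer_id) INTO 32 BUCKETS
STORED AS PARQUET
TBLPROPERTIES ('parquet.compression'='SNAPPY');

-- Let Hive convert joins against small dimension tables into
-- map-side (broadcast) joins instead of shuffle joins.
SET hive.auto.convert.join=true;

SELECT o.order_id, r.region_name, o.amount
FROM   orders_clean o
JOIN   dim_region   r ON o.region_id = r.region_id
WHERE  o.load_date = '2017-06-01';
```

Partition pruning on `load_date` plus bucketing on the join/aggregation key keeps ad-hoc queries from scanning the full table.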
Environment: YARN, MapReduce, HDFS, Hive, Spark, Spark-Streaming, Spark SQL, Apache Kafka, Flume, Sqoop, OOZIE, Scala, Java, CDH5, Eclipse, DB2, Oracle, Git, and HBase.
Confidential
Lead Developer
Responsibilities:
- As a Lead Developer, I was responsible for the customization of Confidential Websphere Commerce Server for Hallmark; I received a Bravo award for delivering this customer engagement on time and on budget.
- As a Lead Developer, I was responsible for the development of B2B Stores, Store Flex Flow configuration, and the Demand Chain and Supply Chain modules in WebSphere Commerce Server.
- As an Confidential consultant, I worked closely with the on-site Sony team on the customization of several Confidential Websphere Commerce Server B2C and B2B Supply Chain storefront features.
- As a Lead Developer, I was responsible for the customization of Confidential Websphere Commerce Server and the rollout of the product for Gateway Computers.
- Responsible for L3 support for Confidential Websphere Commerce Product Storefront.
Confidential
Java developer
Responsibilities:
- Responsible for the design and development of Get-Services, Get-Resources, and Get-Answers modules.
- Responsible for the development of iWave links with SITA.
- Gained extensive familiarity with Trilogy (third-party middleware).
- Responsible for Analysis, Design, Coding & Implementation of Enhancements and Fault requests.
- Responsible for the development of EJBs and the user interface using JSP.