
CEO Resume


Rockville, MD

SUMMARY

  • 15+ years of experience in the architecture, design, and implementation of complex enterprise infrastructure and software solutions to meet big data challenges.
  • Worked as a Developer, Engineer, Administrator, Architect, Lead and Manager - Always hands-on.
  • Experience in the architecture and implementation of systems that handle massively parallel datasets on large clusters involving Hadoop, Spark, and related big data technologies. Hands-on Hadoop/Spark platform experience in both administration and development.
  • Hands-on experience in performing analytics on complex datasets: implementing statistical calculations, building models, applying algorithms, and training systems. Extensive experience in Java, Python, R, and shell scripting (Linux).
  • Expert in implementing the complete software life cycle, including design, architecture, development, and testing of many projects using object-oriented technologies (Java/EE, Python, .Net) and Service-Oriented Architecture. DevOps development.
  • Expert in extracting relevant features from large datasets available in a wide range of structured and unstructured formats. Specialized in translating business requirements into data workflows.
  • Experience in the design and development of corporate operational data stores and enterprise data warehouse environments, mainly Oracle, Netezza, Redshift, MySQL, and Hadoop. Mentor, team lead, and team player.

TECHNICAL SKILLS

  • Data Exploration and Machine Learning: Python, Scala, R, GATE, NLP, Mahout, AQL, JAQL, SystemT, SAS, supervised and unsupervised learning algorithms, neural networks, deep learning (CNNs, RNNs, LSTMs), XGBoost, Markov models, chatbot development, text analytics, computer vision, web scraping, site maps, Watson Explorer, Shiny, D3.
  • Blockchain: Ethereum, Solidity, Python, Node.js, Web3, Ganache, AWS Blockchain Templates, TestRPC, Truffle, MetaMask, Hyperledger, Tobalaba, ERC-20, Ocean Protocol, Parity, Energy Web consortium, chainspecs, validator nodes.
  • Big Data: Hadoop, Spark, HDFS, MapReduce, Apache Pig, Sqoop, HBase, Hive, Oozie, Flume, IBM BigInsights, PySpark/Scala, Hadoop Streaming, Athena, Glue, Delta Lake.
  • Data Streaming Services: Kafka, Kinesis streams, and Spark Streaming.
  • Analytics & Visualization: Python, R, GATE, NLP, Mahout, AQL, JAQL, SystemT, SAS, AIML, supervised and unsupervised learning algorithms, deep learning, reinforcement learning, Grafana, chatbot development, TensorFlow, web scraping, site maps, SOLR.
  • Visualization: D3, IBM Watson Explorer, Kibana, Banana, Tableau, Ansys (3D modeling)
  • Software Development: Python, Scala, Django, Core Java, Java Enterprise Edition, C/C++, SDLC, Spring, Struts, Hibernate, JDBC, SOA, web services, Anaconda, Oracle BPEL, ESB, XML, HTML, JSP, AJAX, JSON, AngularJS, jQuery, design patterns, UML, Visio, ANT, Maven, continuous integration, Jenkins, Splunk, Wily, application servers, application security, Business Objects, Windows, shell scripting, SharePoint, NiFi. AWS: Lambda, API Gateway, Terraform, CloudFormation. Azure: Sentinel, Logic Apps, monolithic apps, Blob Storage, HDInsight, Functions, Kusto, multitenancy, IaC, Event Hubs.
  • Databases: Oracle, PostgreSQL, MS SQL Server, MySQL, Netezza, Redshift, Cassandra.
  • Platforms: AWS, Azure, Linux, Windows
  • Management Methodologies: Waterfall, Agile (Scrum), Kanban, Test-driven development

PROFESSIONAL EXPERIENCE

Confidential, Rockville, MD

CEO

Responsibilities:

  • Big data development, system administration, enterprise software development, and support for the development of infrastructure and software solutions (short-term and long-term).
  • Support small and medium-sized projects.
  • Designing the enterprise data warehouse.
  • Data warehouse administration and support.
  • Developing architectural roadmaps and guidelines.
  • Building proof of concepts.
  • Facilitating effective staff handover.
  • Providing customized technical training to employees based on their current work environment.

Technologies: Big Data, Enterprise Software Development, and Enterprise Data Warehousing

Confidential, Washington, DC

Lead Big Data Administrator/Developer

Responsibilities:

  • The EDW team supports the consolidation and processing of relevant information in support of the SEC's mission. The Division of Economic and Risk Analysis (DERA) integrates financial economics and rigorous data analytics into the core mission of the SEC.
  • Led multiple short-term and long-term projects involving development efforts and environment support for unstructured big data platforms within the enterprise data warehouse. Always hands-on.
  • Managed coordination with clients, architects, managers, upper management, and various onsite, remote, and cross-functional teams.
  • Provided hands-on technical assistance working with developers, DBAs, data architects and data quality analysts, and other teams with the following tasks:
  • Full administration support, including Hadoop cluster design, implementation, maintenance, enhancement and troubleshooting.
  • Develop, maintain, and support projects using big data technologies, specifically Hadoop (MapReduce and Java), Spark and its ecosystem (Hive, Pig, HBase, Oozie, Sqoop), IBM Data Explorer, BigSQL, IBM BigInsights, Python, Scala, Netezza, and external databases such as Oracle. Support ad hoc analysis, data, and code requests that require immediate solutions.
  • Create project proposals, project plans, risk plans, and detailed designs; write code and assist in writing code; conduct code reviews; perform testing; deploy; and provide production support.
  • Support a multi-petabyte Hadoop cluster and IBM Data Explorer servers.
  • Develop complex jobs with custom record readers and complex regular expressions. Utilize R and BigSheets for ad hoc data analysis and reporting.
  • Design, develop, and implement big data analytic solutions, taking each solution through the full project life cycle, including analysis, design, unit testing, UAT, pre-production testing, Section 508 testing, security testing, and production promotion.
  • Data extraction from various structured, semi-structured and free-text filings.
  • Create custom analytic and data mining algorithms to help extract knowledge and meaning from vast stores of data.
  • Refine data processing pipelines focused on unstructured and semi-structured data.
  • Designed and developed a system to cleanse and analyze complex unstructured EDGAR filings and collect meaningful information in the form of tables, BigSheets, dashboards, and reports. Worked the complete life cycle, from data exploration, visualization, and cleansing to model fitting and presenting the findings to analysts and the business.
  • Develop web services to make cleansed data available.
  • Analyzed whistleblower complaints by utilizing natural language processing techniques (sentiment analysis, coreference resolution, etc.) in conjunction with Hadoop.
  • Cleansed, exchanged, and federated data received monthly (about 40 TB/month) among Netezza, Hadoop, and a remote filesystem.
  • Built and upgraded the BigInsights/Hadoop cluster (multiple times) and IBM Data Explorer servers.
  • Implemented security structure, authentication mechanism (LDAP/IDM) and user provisioning strategies for the Hadoop Cluster
  • Performed R integration with Hadoop to support statisticians and economists.
  • Reviewed, benchmarked, and tuned the performance of jobs implemented within the Hadoop cluster.
  • Automated testing of the ETL/ELT solution and performed quality assurance for the deliverables.
  • Contributed to writing technical proposals and responses to RFPs.

Technologies Used: IBM BigInsights/Hadoop, HDFS, MapReduce (map/reduce/custom record readers), Java, HBase, Oozie, Pig, Hive, Streaming, ZooKeeper, Sqoop, Flume, SystemT, JAQL, AQL, Spark (Scala/PySpark), Ambari, YARN, Knox, Kite, Slider, Watson Explorer, GATE, NLP, Mahout, PdfToHtml, BigSheets, Linux, shell scripting, xCAT (cluster management), IMM, LDAP/IDM, R, RStudio, Python, SAS, Netezza, InfoSphere Information Server, Oracle, Virtual Desktop Infrastructure, Visio, UML.

Confidential

Lead VDS Portal/Chatbot Developer

Responsibilities:

  • Database development and reporting for the Virtual Desktop Services Portal, which serves various infrastructure sources, including Citrix, Active Directory, and other in-house database platforms.
  • Development of an MS SQL Server-based data repository receiving feeds from Citrix, Active Directory, other databases, filesystems, and SharePoint.
  • Chatbot development to support Virtual Desktop Services’ operations. The chatbot utilizes AIML as well as decision trees/NLP to respond to its customers.
  • Web-scraped information and downloaded files from password-protected HTML and Angular/JavaScript-rendered websites so the information could be utilized by the chatbot in its responses.
  • Worked on the enhancement of the AI chatbot available to thousands of IT employees to help them automate manual storage-related tasks.
  • Performed quality assurance and testing for the project deliverables.

Technologies: Python, NLP, AIML, Machine Learning, .Net, Java, SQL Server, TSQL, SSIS, SSRS, GIT integration, Windows.

Confidential, Washington, DC

Data Scientist/Data Warehouse Developer

Responsibilities:

  • Supported the Enterprise Data Warehouse (EDW) contract for the client Confidential through the government contracting company Fulcrum. Responsible for end-to-end DW, data integration, and BI solutions, taking into account the business cases, objectives, and requirements.
  • Develop data science dashboards to understand students' performance in different subject areas throughout the US.
  • Develop statistical models for students' assessments and surveys, and for external datasets supporting the survey findings (survey analysis, jackknife, significance testing), using R and Python.
  • Perform data integration, data architecture, information delivery, infrastructure work, testing, and performance tuning for all components of the project.
  • Develop practical and workable solutions to clients' technical and business problems.
  • Analyze requirements and potential solutions for technical and economic feasibility.
  • Improve current methods for the automated processing and exploitation of large datasets.
  • Design, develop, and advance new methods to extract information from diverse data sources using flexible querying, innovative visualization, data aggregation/integration, data mining, and analytical techniques.
  • Extract information from a variety of sources, including spreadsheets, text and PDF documents, web pages, and images.

Technologies Used: C# .Net, web services, Node.js, Angular 2, R, shell scripting, Python, SOLR/Banana, Elastic MapReduce, Hadoop/Spark, Redshift, D3, Pentaho/Kettle, Amazon RDS, MySQL, SQL Server, IIS, AWS, Visual Studio.

Confidential, ALEXANDRIA, VA

Lead Software Architect

Responsibilities:

  • Planning and managing technology development activities for large-scale modernization projects.
  • Working as a hands-on team lead for the Enterprise Information Portal and PPA Search projects for USPTO. Architect, design, develop, lead, and assist other developers and testers on the team.
  • Leading the entire software development life cycle for EIP at USPTO, from prototyping to project approval and from requirements gathering to production deployment of the final product. Assisted the client in identifying opportunities within the system and building up the requirements through examples, working prototypes, and technology overviews.
  • Successfully developed a working prototype of EIP as a proof of concept within a limited timeframe and received approval for the first phase of the product.
  • Integrated Business Objects and Cassandra (NoSQL) with the Enterprise Information Portal (EIP) to provide a one-stop shop to analyze documents and reports from multiple sources, create personalized widgets and dashboards with dynamic role-based menus and page views, mark favorites and recents, perform federated search on Business Objects and Cassandra (through SOLR), establish announcements and alerts, and administer and schedule Business Objects jobs.
  • Developed a flexible, loosely coupled EIP solution utilizing AngularJS, REST web services, SOAP web services, and Hibernate with Oracle.
  • Implemented Single Sign On and developed authorization framework based on Enterprise Data Warehouse Security model.
  • Compared candidate technologies, established the architecture and design, built logic for all use cases, performed hands-on development, and assigned tasks to developers on the team to implement use cases, providing detailed steps to follow.
  • Led daily huddles, bi-weekly demos, client presentations, and meetings.
  • Assisted developers in implementing an enterprise PPA Search against the Oracle data warehouse, utilizing technologies such as AngularJS, jQuery, REST web services, MyBatis, and Oracle Text Search.
  • Implemented ARIA for AngularJS, and tweaked the default AngularJS and jQuery libraries to comply with Section 508.
  • Provided schedules and timelines, performed reviews, and analyzed code runtime.
  • Utilized Maven for compilation, SVN as the code repository, and Jenkins for continuous integration. Established a local Maven repository for the Business Objects libraries.
  • Actively involved in proposal writing for Big Data initiatives at USPTO.
  • Ensure tasks are completed within allocated time and budget.

Technologies Used: SOA, Java, JEE, Python, SOLR, Oracle 11g, Oracle 12c, Cloudera/Hadoop, Cassandra (NoSQL), Oracle Text Search, RESTful web services, SOAP web services, Hibernate, MyBatis, Business Objects SDK, JSON, XML, jQuery, AngularJS, Node.js, JSP, Maven, Jenkins, SVN, SQL Developer, Visio, Eclipse, JUnit, JAWS.
