Big Data Architect Resume
SUMMARY
- Big Data Architect with extensive experience building big data applications using the Hadoop ecosystem and related technologies, both on traditional clusters and cloud platforms, seeking to collaborate with internal product development teams constructing scalable and performant big data applications using cloud-based infrastructure.
PROFESSIONAL EXPERIENCE
Big Data Architect
Confidential
Responsibilities:
- Collaborating closely with application team architects and engineers to identify technologies and platforms suitable for their big data processing requirements, and then assisting those teams with onboarding, development, deployment, and debugging on those platforms
- Investigating new big data tools and technologies for their potential application to common use cases; establishing best practices, developing design patterns, and writing documentation to disseminate new capabilities to a broad technical audience; working with platform engineers and product managers to specify and deliver major new technology features
- Providing technical assistance to a broad community of big data infrastructure users, such as software application engineers and data scientists, through research, investigation, collaboration, and hands-on debugging, often driven by specific use case requirements
- Ensuring that application big data solutions adhere to best practices and enterprise standards for scalability, availability, efficiency, data lifecycle management, information security, fault tolerance, and disaster recovery
- Experience with software design process considerations throughout all stages, including requirements, architecture, design, development, quality assurance, deployment, and maintenance
- Strong understanding of big data concepts, Hadoop ecosystem components, and complementary technologies such as HDFS, Hive, Spark, and Kafka, as well as cloud technologies such as block storage, object storage, computational infrastructure services, and higher-level database services
- Experience with scripting in languages such as Python, Bash, and Perl, and with software development in Java and Scala
- Hands-on experience with writing, debugging, and optimizing big data processing applications using Hadoop Streaming, Hive, and Spark; ODBC/JDBC connectivity such as HiveServer2 and Thrift; and streaming data management using Kafka
- Familiarity with the strengths, weaknesses, and idiosyncrasies of big data solutions on cluster-based platforms (e.g., Hortonworks, Cloudera, and MapR) as compared to cloud resource providers (e.g., Google Cloud Platform, Microsoft Azure, and similar object storage services)
- Experience building and nurturing long-term technical advisory and consulting relationships with software engineering teams
- Demonstrable proficiency with Linux tools at a shell command line, and a basic understanding of the Java build processes used to compile and package Hadoop