Fidelity is seeking a Principal Hadoop Big Data Engineer who will work across the ECC product areas and lines of business. The Hadoop big data engineer will be the team's subject matter expert in all aspects of Hadoop frameworks, designs, development, implementation, administration, performance, management, and support for internal deployments and public cloud as-a-service deployments. They will work with the cloud database architects and developers on all aspects of Hadoop as-a-service automation that leverages a common deployment pipeline across multiple cloud service providers. The Hadoop database engineer will work collaboratively with other ECC architects, ECC developers, ECC security, ECC operations, and Business Unit cloud application and big data engineering teams to ensure that applications make efficient and appropriate use of Hadoop big data stores and follow the required Fidelity operational and security policies.
The Purpose of Your Role: Reporting to the Director of Cloud Database Development, the Principal Hadoop Big Data Engineer will collaborate with other administrators, architects, developers, and engineers across ECC and the Business Unit development teams to design, develop, and implement Hadoop big data automation services for deployments on Fidelity's private cloud and the public clouds (AWS, Azure, Google). This is a hands-on big data engineering role that requires strong technical skills for both physically deployed databases and cloud database service offerings.
The Value You Deliver:
Lead engineer for all Hadoop big data frameworks, designs, development, implementation, administration, performance, management, and support for internal deployments and cloud-based deployments.
Code and contribute Hadoop big data automation services to Fidelity's inner-source database community for use in the private cloud and public clouds (AWS, Azure, Google).
Code test harnesses for Hadoop big data performance and scalability testing; run tests, troubleshoot, resolve, monitor, and automate.
Collaborate with the cloud database architects and developers on all aspects of Hadoop big data as-a-service automation that leverages a common deployment pipeline across the cloud service providers.
Code prototypes for proof-of-concept and cloud lab validation of Hadoop big data automation technologies.
Build strong relationships with database development teams to ensure alignment and automation adoption.
Facilitate implementation of Hadoop big data automation in accordance with agreed-upon standards, policies, and design patterns.
Participate in the planning, definition, design, and integration of Hadoop big data usage patterns with the business unit big data developers to ensure consistency of product development and adoption across the firm.
Specify, scope, and guide Hadoop big data automation project implementations to ensure alignment and adoption.
Participate in governance bodies to approve changes to Hadoop big data design, development, and code reviews.
Participate in critical Hadoop big data automation problem solving and advanced technical troubleshooting to assist the organization; drive resolution of technical issues; lead and perform impact analysis.
Provide stewardship at critical junctures during implementation, ensure preservation of design intent in code, and lead Hadoop big data as-a-service automation code reviews.
Actively monitor and participate in external communities, sharing knowledge with cloud database communities.
Coach and mentor cloud big data database technical resources across the inner-source community.
How Your Work Impacts the Organization:
The Cloud Database Team, part of the Emerging Technology group within Enterprise Cloud Computing, is responsible for setting the technical strategy and vision for cloud data store technology, both for Fidelity's private cloud and for Fidelity's implementation of the public clouds (AWS, Azure, Google). This includes driving the research, design, development, incubation, configuration, usage, administration, management, and overall solutions architecture and development automation for in-memory, NoSQL, big data, and relational cloud data store technologies. Our mission is to enable application teams to deploy and manage their databases in the cloud using frictionless automation for provisioning, encryption, security, administration, and data protection.
The Expertise We’re Looking For:
Bachelor's degree required; Master's degree a plus
7+ years of enterprise-level experience in big data design, development, deployment, and administration
In-depth understanding of application deployment and cloud big data usage patterns
In-depth coding and engineering of Hadoop (Cloudera, Hortonworks, EMR) big data designs
Certification in Cloudera or Hortonworks required
Certifications in PostgreSQL, GemFire, Oracle, Cloudera, Pivotal Cloud Foundry, AWS, and Azure a plus
The Skills You Bring:
Expertise in supporting n-tiered applications built with Hadoop (Cloudera, Hortonworks, EMR) big data stores
Expertise with Hadoop big data design, implementation, performance tuning, and management
Expertise with identifying and addressing performance bottlenecks in supported Hadoop big data store clusters
Expertise with tuning parameters to optimize Hadoop (Cloudera, Hortonworks, EMR) big data store clusters
Experience with in-memory data grids, NoSQL, big data, and relational cloud database technologies
Experience with cloud technology, services (IaaS, PaaS, SaaS, DBaaS), and platforms (AWS, Azure, Google)
Experience coding POCs with different technologies (application, middleware, database, infrastructure)
Experience with Java, Spring, C++, Go, curl, Python, Bash, and the open-source tech stack
Experience with CI/CD pipelines; PCF, BOSH, Cloud Foundry, Concourse, Jenkins, and Artifactory a plus
Experience with Oracle, Hadoop, MariaDB, and in-memory DB migrations/upgrades, including cloud migrations
Experience with CloudFormation scripting and with Concourse, Jenkins, and Chef/Ansible for automated end-to-end deployments
Knowledge of 12-factor cloud principles, API/ARB documents, processes, and deployment patterns/strategies
Knowledge of core concepts in distributed systems design, the CAP theorem and its corollaries, enterprise integration patterns, EDA/SOA 2.0, microservices-based development, and domain-driven design
Ability to build support among key stakeholders across BUs for proposed strategies and solutions
Ability to provide technical leadership to the database development, QA testing, and support teams in preparing design artifacts and implementing database-as-a-service automation solutions
Ability to identify integration patterns and points between various design areas and to track the implementation of integrations for the automation of cloud database deployments
Ability to work on initiatives and projects that cut across business unit boundaries
Ability to work independently with peer (technical and non-technical) team members on POCs and projects to drive results and business value
Ability to identify measurable dimensions (ROI) of a business problem and present the options (pros/cons)
Excellent presentation, documentation, communication, and influencing skills, including the ability to present technology direction in a business context to stakeholders
Ability to coach and mentor members of the application and database cloud development teams
Additional Salary Information: Competitive salary plus bonus, great benefits
Internal Number: 1802882
About Fidelity Investments
At Fidelity, we are focused on making our financial expertise broadly accessible and effective in helping people live the lives they want. We are a privately held company that places a high degree of value in creating and nurturing a work environment that attracts the best talent and reflects our commitment to our associates. For information about working at Fidelity, visit FidelityCareers.com
Fidelity Investments is an equal opportunity employer.