Addison, Texas

Job Description:

Come join an exciting team within Global Information Security (GIS). Cyber Security Technology (CST) is a globally distributed team responsible for cyber security innovation and architecture, engineering, solutions and capabilities development, cyber resiliency, access management engineering, data strategy, deployment maintenance, technical project management and information technology security control support.

This role is responsible for leading efforts to develop and deliver complex data solutions to accomplish technology and business goals. Key responsibilities include code design and delivery tasks associated with the integration, cleaning, transformation, and control of data in operational and analytics data systems. They work with stakeholders, Product Owners, and Software Engineers to aid in the implementation of data requirements, analyze performance, conduct research, and troubleshoot any issues. These individuals are proficient in data engineering practices and have extensive experience using design and architectural patterns.

Primary Level of Engagement: Works as a team member under supervision from a more senior domain expert.

Primary Interactions:
  • Product Owner
  • Scrum Master
  • Development Team
  • Feature Lead
  • Architect
  • Domain Community of Practice
  • Data Scientists

Key Responsibilities:
  • Contribute to story refinement/defining requirements
  • Participate in estimating work necessary to realize a story/requirement through the delivery lifecycle.
  • Utilize multiple architectural components in design and development of client requirements.
  • Code complex solutions to integrate, clean, transform and control data in operational and/or analytics data systems per the defined acceptance criteria.
  • Assist team with resolving technical complexities involved in realizing story work.
  • Collaborate with development teams to understand data requirements, ensure the data architecture is feasible to implement, and verify that it is subsequently implemented accurately.
  • Assemble large, complex data sets that meet functional / non-functional requirements.
  • Build processes supporting data transformation, data structures, metadata, data quality controls, dependency and workload management.
  • Define and build data pipelines that enable faster, better, data-informed decision-making within the business (an illustrative sketch follows this list).
  • Contribute to existing test suites (integration, regression, and performance), analyze test reports, identify any test issues/errors, and triage the underlying cause.
  • Document and communicate required information for deployment, maintenance, support, and business functionality.
  • Adhere to team delivery/release process and cadence pertaining to code deployment and release
  • Identify gaps in data management standards adherence, and work with appropriate partners to develop plans to close gaps.
  • Lead concept and experimentation testing and synthesize the results to validate and improve the analytical solution.
  • Monitor key performance indicators and internal controls.
  • Mentor more junior Data Engineers and coach the team on CI/CD practices and automating the tool stack.
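
For illustration only, the following is a minimal sketch of the kind of integrate/clean/transform pipeline work described in this list, written in PySpark (Spark and Python both appear under Required Skills below). All database, table, path, and column names are hypothetical.

```python
# Illustrative sketch only; not part of the job description. All database, table,
# path, and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-data-curation").getOrCreate()

# Integrate: read a raw operational feed and a reference table (hypothetical sources).
raw_events = spark.read.parquet("/data/raw/access_events/")
users = spark.table("reference_db.users")

# Clean: drop malformed and duplicate records, derive a typed partition column.
cleaned = (
    raw_events
    .filter(F.col("event_ts").isNotNull() & F.col("user_id").isNotNull())
    .dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))
)

# Transform and control: join to reference data, aggregate, and write to a
# partitioned analytics table per the defined acceptance criteria.
daily_summary = (
    cleaned.join(users, on="user_id", how="left")
    .groupBy("event_date", "user_id", "department")
    .agg(F.count("*").alias("event_count"))
)

(daily_summary.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics_db.daily_access_summary"))
```

The same transformation could equally be expressed in Spark SQL or HiveQL; the specific tooling would follow the team's existing pipeline conventions.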

Required Skills:
  • Strong SQL skills in one or more of MySQL, Hive, Impala, or Spark SQL
  • Working experience with Spark, Sqoop, Kafka, MapReduce, NoSQL databases such as HBase, Solr, CDP or HDP (Cloudera or Hortonworks), Elasticsearch, Kibana, etc.
  • Hands-on programming experience in at least one of Scala, Python, PHP, or shell scripting, to name a few
  • Performance tuning experience with Spark/MapReduce and/or SQL jobs (see the illustrative tuning sketch after this list)
  • Experience working in an Agile development process and a deep understanding of the phases of the Software Development Life Cycle
  • Experience using source code and version control systems such as SVN, Git, Bitbucket, etc.
  • Experience working with Jenkins and JAR management
  • Experience and proficiency with the Linux operating system are a must
  • Experience working with the Cloudera Manager portal and YARN
  • Experience with complex resource contention issues in a shared cluster/cloud environment
  • Troubleshoot platform problems and connectivity issues
  • Performance tuning of Hadoop clusters and ecosystem components and jobs. This includes the management and review of Hadoop log files.
  • Bachelor's and/or Master's degree in Computer Science, Information Technology, or a related field, or an equivalent degree and professional experience
  • Provide support for production code migrations and lower-environment platform outages/service disruptions on a rotational and as-needed basis
  • Provide code deployment support for Test and Production environments
  • Diagnose and address database performance issues using performance monitors and various tuning techniques
  • Interact with Storage and Systems administrators on Linux/Unix/VM operating systems and Hadoop Ecosystems
  • Document programming problems and resolutions for future reference.
  • Ability to work well both within a team and individually with minimal supervision
  • Excellent communication and project management skills
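
For context on the Spark/SQL performance-tuning item above, here is a hedged, minimal sketch of common tuning levers (shuffle parallelism, broadcast joins, output partitioning). The configuration values, table names, and paths are hypothetical and would vary with cluster size and workload.

```python
# Illustrative sketch only; values and names are hypothetical and workload-dependent.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("example-tuned-job")
    # Size shuffle parallelism to the data volume instead of the default of 200.
    .config("spark.sql.shuffle.partitions", "400")
    # Allow small dimension tables (here up to 64 MB) to be broadcast automatically.
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate()
)

facts = spark.table("analytics_db.daily_access_summary")
departments = spark.table("reference_db.departments")

# Explicitly broadcast a small dimension table to avoid a shuffle-heavy join.
joined = facts.join(F.broadcast(departments), on="department", how="left")

# Repartition by the write key to limit small files and skewed tasks before output.
(joined.repartition("event_date")
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/access_by_department/"))
```

Comparable tuning applies to Hive/Impala SQL jobs (partitioning, file formats, statistics) and to YARN resource settings surfaced through Cloudera Manager; the specifics depend on the platform in use.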


Enterprise Role Overview:
Responsible for leading efforts to develop and deliver complex data solutions to accomplish technology and business goals. Codes design and delivery tasks associated with the integration, cleaning, transformation and control of data in operational and analytics data systems. Works with stakeholders, Product Owners, and Software Engineers to aid in the implementation of data requirements, analyze performance, conduct research and troubleshoot any issues. Proficient in data engineering practices, with extensive experience using design and architectural patterns. Contributes to story refinement/defining requirements. Participates in estimating work necessary to realize a story/requirement through the delivery lifecycle. Utilizes multiple architectural components in design and development of client requirements. Codes complex solutions to integrate, clean, transform and control data in operational and/or analytics data systems per the defined acceptance criteria. Assists the team with resolving technical complexities involved in realizing story work. Collaborates with development teams to understand data requirements, ensure the data architecture is feasible to implement, and verify that it is subsequently implemented accurately. Assembles large, complex data sets that meet functional/non-functional requirements. Builds processes supporting data transformation, data structures, metadata, data quality controls, dependency and workload management. Defines and builds data pipelines that enable faster, better, data-informed decision-making within the business. Contributes to existing test suites (integration, regression, performance), analyzes test reports, identifies any test issues/errors, and triages the underlying cause. Documents and communicates required information for deployment, maintenance, support, and business functionality. Adheres to team delivery/release process and cadence pertaining to code deployment and release. Identifies gaps in data management standards adherence and works with appropriate partners to develop plans to close gaps. Leads concept and experimentation testing and synthesizes the results to validate and improve the analytical solution. Monitors key performance indicators and internal controls. Mentors more junior Data Engineers and coaches the team on CI/CD practices and automating the tool stack. Individual contributor.

Job Band:
H5

Shift:
1st shift (United States of America)

Hours Per Week:
40

Weekly Schedule:

Referral Bonus Amount:
0