Impact and Influence:

This position will interact regularly with architects, leads, data engineers, and data scientists to support cutting-edge analytics in the development and design of data acquisitions, data ingestions, and data products. Developers will work on Big Data technologies and platforms, both on premises and in the cloud. The resources will follow and contribute to best practices for software development in the Big Data ecosystem. The roles and responsibilities are as follows:

Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
Build tools that utilize the data pipeline to provide actionable insights into data acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.

Technical Skills Required:

In-depth knowledge of and experience with Big Data technologies (Spark, Python, Java-based applications). Cloud-based experience is mandatory (AWS or Azure).
Good understanding of and experience with performance tuning for complex software projects, mainly around large scale and low latency.
Experience with data flow.
Exposure to NoSQL databases such as MongoDB, or to PostgreSQL.
Databricks certification (good to have).
AWS/Azure certification is a must.
Experience understanding and troubleshooting Java programs.
Able to understand MapReduce/Tez/Hive code and help convert it to Spark.
Excellent communication skills.
Ability to work in a fast-paced, team-oriented environment.

Qualifications - External
Mandatory Skills: 4-6 years of experience

Unix/Linux shell scripting
Spark (Python or Scala based)
Cloud experience and certification are a must.
Databricks certification (good to have).


AT&T is bringing it all together for our customers, from revolutionary smartphones to next-generation TV services and sophisticated solutions for multi-national businesses.

For more than a century, we have consistently provided innovative, reliable, high-quality products and services and excellent customer care. Today, our mission is to connect people with their world, everywhere they live and work, and do it better than anyone else. We're fulfilling this vision by creating new solutions for consumers and businesses and by driving innovation in the communications and entertainment industry.

We're recognized as one of the leading worldwide providers of IP-based communications services to businesses. We have the nation's most reliable 4G LTE network,* and the largest international coverage of any U.S. wireless carrier, offering the most phones that work in the most countries. AT&T operates the nation's largest Wi-Fi network,** including more than 32,000 AT&T Wi-Fi Hot Spots at popular restaurants, hotels, bookstores and retailers, and provides access to nearly 1 million hotspots globally through roaming agreements.

AT&T U-verse is TV inspired by you. It's TV the way you want it, with tons of cool features and capabilities. AT&T is the only national TV service provider to offer a 100-percent IP-based television service. It's part of our "three-screen" integration strategy to deliver services across the three screens people rely on most - the mobile device, the PC and the TV.

As we continue to break new ground and deliver new solutions, we're focused on delivering the high-quality customer service that is our heritage.