PALANTIR Senior Data Engineer - Remote
Posted 2025-03-14
About the position
The Senior Data Engineer position at Cognizant focuses on leveraging artificial intelligence and analytics to transform data into actionable insights for clients. This remote role involves building data pipelines using PySpark in AWS environments, collaborating with stakeholders, and enhancing data systems and architecture to meet project goals. The position requires a strong background in data engineering, particularly with Palantir Foundry, and emphasizes effective communication and agile methodologies.
Responsibilities
- Use PySpark to build data pipelines in AWS environments
- Write design documents and independently build data pipelines based on the defined source-to-target mappings
- Convert complex stored procedures, SQL triggers, and similar logic to PySpark on the cloud platform
- Be open to learning new technologies and implementing solutions quickly on the cloud platform
- Communicate with key program stakeholders to keep the project aligned with their goals
- Interact effectively with the QA and UAT teams for code testing and migration across regions
- Spearhead data engineering initiatives targeting moderately to highly complex data and analytics challenges, delivering impactful outcomes through thorough analysis and problem-solving
- Pioneer the identification, conceptualization, and execution of internal process enhancements, including scalable infrastructure redesign, optimized data distribution, and automation of manual workflows
- Address broad application programming and analysis problems within defined procedural guidelines, offering resolutions that span wide-ranging scopes
- Engage in agile/scrum methodologies, participating in ceremonies such as stand-ups, planning sessions, and retrospectives
- Orchestrate the development and execution of automated and user acceptance tests, integral to the iterative development lifecycle
- Foster the maturation of broader data systems and architecture, assessing individual data pipelines and suggesting and implementing enhancements to align with project and enterprise maturity objectives
- Envision and construct infrastructure that facilitates access to and analysis of vast datasets while ensuring data quality and metadata accuracy through systematic cataloging
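The source-to-target mapping work described above can be sketched in plain Python (a PySpark data pipe would apply the same logic at scale; all column names and transforms here are hypothetical examples, not taken from this posting):

```python
# Source-to-target mapping: source column -> (target column, transform).
# Column names and transforms are illustrative placeholders.
MAPPING = {
    "cust_nm": ("customer_name", str.strip),
    "ord_amt": ("order_amount", float),
    "ord_dt": ("order_date", str),  # pass-through
}

def apply_mapping(source_row: dict) -> dict:
    """Map one source record to the target schema per MAPPING."""
    return {tgt: fn(source_row[src]) for src, (tgt, fn) in MAPPING.items()}

# Apply the mapping to a small batch of source records.
source = [{"cust_nm": " Acme ", "ord_amt": "19.99", "ord_dt": "2025-03-14"}]
target = [apply_mapping(row) for row in source]
print(target[0]["customer_name"])  # -> Acme
print(target[0]["order_amount"])   # -> 19.99
```

In a real pipe, the same per-record mapping would typically be expressed as PySpark DataFrame column expressions so it runs distributed across the cluster.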
Requirements
- 10+ years of total experience, including 3+ years in data engineering/ETL ecosystems with Palantir Foundry, Python, PySpark, and Java
- Required skills: Palantir
- Expert in writing shell scripts to run jobs under various job schedulers
- Hands-on experience with Palantir and PySpark to build data pipelines in AWS environments
- Good knowledge of Palantir components
- Good exposure to RDBMS
- Basic understanding of data mappings and workflows
- Knowledge of the Palantir Foundry platform is a big plus
- Experience implementing projects in the Energy and Utility space is a plus
Nice-to-haves
- PySpark and Python
Benefits
- Medical/Dental/Vision/Life Insurance
- Paid holidays plus Paid Time Off
- 401(k) plan and contributions
- Long-term/Short-term Disability
- Paid Parental Leave
- Employee Stock Purchase Plan