About the role

Please note: this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based on their skills and experience.

The Netskope Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics, Streaming, and Digital Experience Management products.

What's in it for you

As part of the Digital Experience Management team, you will work on state-of-the-art, cloud-scale distributed systems at the intersection of networking, cloud security, and big data. You will help design and build systems that provide critical infrastructure for global Fortune 100 companies.

What you will be doing

- Designing and implementing planet-scale distributed data platforms, services, and frameworks, including solutions for high-volume, complex data collection, processing, transformation, and analytical reporting
- Working across the DEM stack, including backend services and APIs, OLAP and relational databases, data analysis / ML, the probing station and client, and cloud infrastructure
- Working with the application development team to implement data strategies, build data flows, and develop conceptual data models
- Understanding and translating business requirements into data models that support long-term solutions
- Analyzing data system integration challenges and proposing optimized solutions
- Researching effective data designs, new tools, and methodologies for data analysis
- Providing guidance and expertise to other developers on the effective implementation of data models, and building high-throughput data access services
- Providing technical leadership in all phases of a project, from discovery and planning through implementation and delivery

Required skills and experience

- 8 years of experience designing and coding scalable distributed systems
- The ability to conceptualize and articulate ideas clearly and concisely
- Experience building big data pipelines that manage petabytes of data and ingest billions of data items every day
- Experience with big data analytics
- Experience with monitoring or alerting technologies
- Excellent algorithm, data structure, and coding skills in async Python 3, Go, C++, and/or Rust
- Proficiency in networking protocols and network security, such as TCP/UDP/IP, TLS, HTTP, IPsec/GRE, PKI, and traceroutes
- Experience with relational SQL databases
- Experience with REST / OpenAPI
- Experience coding against open-source systems such as Kafka, Redis, and ClickHouse, as well as cloud infrastructure (ideally GCP), Kubernetes, etc.
- Excellent written and verbal communication skills
- Bonus points for contributions to the open source community

Education

BSCS or equivalent required; MSCS or equivalent strongly preferred