(FEQ425R126) As a Senior Specialist Solutions Engineer (Data Engineering), you will guide customers in building big data solutions on Databricks for a wide variety of use cases. You will be in a customer-facing role, working with and supporting Solution Architects. This role requires hands-on production experience with Apache Spark and expertise in other data technologies. Our Specialist team helps customers design and successfully implement essential workloads while aligning their technical roadmap for expanding usage of the Databricks Data Intelligence Platform.
As a go-to expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, peer delivery, and internal training programs, and establish yourself in an area of specialty - whether that be streaming, performance tuning, industry expertise, or another domain.
Notes on mandatory requirements:
Fluent in Italian and English (Business-level proficiency minimum)
Flexible to travel for customer meetings and internal events/training (up to 20-30%)
Based in the Milan region or within commutable distance for a hybrid schedule
The impact you will have:
Guide and empower strategic customers in their big data journey, from architectural design to complex data pipeline engineering.
Craft resilient data pipelines, including comprehensive performance testing and optimization.
Collaborate with customers on advanced technical aspects, developing tailored solutions and architectures.
Foster community growth through engaging tutorials and training sessions.
Enrich the Databricks Community with your unique perspective and expertise.
Integrate Databricks with diverse applications to support customer needs.
Build lasting relationships with customers, delivering both technical excellence and business value.
What we look for:
Experience in a technical role with expertise in at least one of the following:
Data Engineering: Proficiency in data ingestion, streaming technologies (e.g., Spark Streaming, Kafka), performance enhancement, and troubleshooting.
Data Applications: Ability to develop impactful use cases such as risk modeling or customer lifetime value analysis.
Hands-on experience with SQL and Apache Spark (Python or Scala).
Understanding of customer-centric pre-sales or consulting, with a focus on data engineering.
Enthusiasm for sharing technical knowledge through various mediums.
Collaborative spirit and dedication to supporting others in overcoming challenges.
Aptitude for designing and implementing sophisticated data processing systems in public cloud environments.
Bachelor's degree in Computer Science, Information Systems, Engineering, Data Science, or equivalent work experience.
Desirable:
Databricks Certification and experience in legacy system migration