We’re entering a new chapter for Information Technology at Water Corporation as we deliver bold, future-focussed solutions. With the launch of our new Information and Technology Group Operating Model, we’re transforming how we deliver technology services, making them faster, more responsive, and more aligned with the needs of our business and customers. As part of this transformation, we’re hiring multiple new positions. These roles are key to shaping a vibrant, forward-thinking Information Technology Group - one that adds real value, delivers better outcomes, and works in smarter, more agile ways. It’s a strategic shift aligned with our organisational strategies, designed to modernise systems, embrace innovation, and create a more flexible, collaborative, and customer-centric IT environment. If you're equally excited about innovation, transformation, and making a meaningful impact, now is the perfect time to join us.
About the role
We are seeking a Data Engineer to join our Data & Analytics team within the Information & Technology Group. This technical specialist role is responsible for the implementation, optimisation and support of reliable, performant and secure data platforms and pipelines that enable analytics, insights and data-driven decision making across the business.
In this role, you will work at scale with the Enterprise data and analytics platform, building production-grade frameworks and data pipelines that ingest, transform, model and serve high-quality data to analysts, data scientists and business stakeholders, using Databricks and AWS cloud-native technologies.
Real benefits that matter
- Real flexibility with options to work from home, 9-day fortnight, flexible work hours
- An additional 2 well-being days each year
- Access to long service leave pro rata after 3 years of service
- Generous co-contribution superannuation scheme, which offers up to 16%. This includes the standard 12% employer contribution, plus an additional 2% employer co-contribution that matches your own 2% contribution
- Purchase additional leave of up to 12 weeks or work 4 years at a reduced salary and take the fifth year off as paid leave
Discover more benefits we offer to support the unique and individual ways our employees live.
What the role will involve
- Design, build and optimise data pipelines, datasets and data products using Databricks and AWS services.
- Develop ELT pipelines using Spark, Delta Lake, Python and SQL, following approved patterns.
- Implement data modelling, data quality checks, validation and observability as part of production-ready solutions.
- Deploy and support data pipelines across Dev, Test and Prod environments using CI/CD practices.
- Execute testing and validation to ensure the accuracy, reliability and performance of data transformations.
- Monitor and support production data products, conducting incident investigation, root cause analysis and remediation.
- Participate in code reviews and apply software engineering standards, patterns and best practices.
- Work closely with senior data engineers, analysts and stakeholders to understand requirements and implement fit-for-purpose data solutions.
- Operate within Agile delivery, DataOps, DevOps and IT Service Management frameworks.
Key skills and experience
Qualifications & Experience
- Degree in Computer Science, Information Systems or a related discipline, or equivalent practical experience
- Considerable professional experience in data engineering roles
- Practical experience designing and building:
- Data ingestion and transformation pipelines
- Analytical data models
- Data quality checks and validation logic
- Considerable experience with cloud-based data and analytics platforms, particularly Databricks and AWS
- Strong experience using Python and SQL for data engineering workloads
- Experience working in Agile and DevOps environments, using Git-based version control
- Relevant AWS and Databricks certifications are highly regarded
Technical Skills
Experience (or developing capability) with the following technologies and practices:
Databricks Lakehouse
- Apache Spark
- Delta Lake
- Structured Streaming (desirable)
AWS Services
- S3, Glue, Lambda
- RDS, DynamoDB, API Gateway
Orchestration
- Databricks Lakeflow Jobs
- Airflow or similar orchestration tools
DataOps and CI/CD
- Git-based workflows
- CI/CD pipelines
- Automated testing
Strong Python, SQL and scripting skills for analytics and data transformations
Personal Attributes
- Strong analytical mindset with a structured approach to problem solving
- Eager to learn and grow as a data engineer in a modern Lakehouse environment
- Collaborative and able to work effectively within cross-functional teams
- Open to feedback and keen to continuously improve engineering practices
- Demonstrates behaviours aligned with a high-performance, values-driven culture
Apply: If you are interested in the above opportunity, please submit a covering letter and resume that best demonstrates your ability to meet the requirements of the role.
As part of the recruitment process you may be required to complete pre-employment screening which may include a medical, qualification check, police clearance and Australian working rights check.
Applications close Wednesday 6th May 2026
Our commitment to a diverse and inclusive workplace
Diversity and inclusion are more than words. They guide us on building a thriving workforce that reflects the diversity of our customers and our community.
We encourage applications from every background, including Aboriginal and Torres Strait Islander people, people with disability, women, youth, LGBTQIA+ people and people from culturally and linguistically diverse backgrounds.
Applicants with disability who require adjustments, or alternative methods of communication in the recruitment process, can contact a Recruitment Officer at [email protected] or on 9420 2000.
To read our diversity and inclusion statement, please visit our website.