Technical Lead Manager
About Mutinex
At Mutinex, we believe marketing deserves to be treated as a performance discipline — not a cost centre. We're building the growth decision engine that replaces slow, conflicted measurement with fast, independent, AI-powered intelligence. Trusted by brands like Samsung and Domino's, our platform empowers marketers to make confident, data-driven decisions that drive real business growth. We're an Australian-born, globally scaling B2B SaaS company — and we're just getting started.
Hybrid across Sydney, Melbourne, and New York. We highly value communication, open-mindedness, and a culture of feedback.
How We Work
The way software gets built is changing. We're changing with it — deliberately and quickly, but honestly: we're not done yet.
Our direction is clear. The hard work of engineering is shifting from implementation to planning, orchestration, and judgment. We want engineers who direct AI agents, review output, and focus their energy on the decisions that matter — architecture, design, product thinking. Writing code by hand is becoming the exception, not the default.
Here's where we are today: we use Claude Code as our primary AI development tool. We've built Forge — our internal tooling layer with shared skills, agents, and hooks that encode team knowledge. We run weekly AI Office Hours to pair on real problems. We track AI usage and change lead time to keep ourselves honest. Every engineer is expected to be actively building this skill.
Here's where we're heading: a workflow where AI handles ideation scaffolding, task breakdowns, implementation, test generation, code review prep, and release notes — and engineers focus on the thinking that AI can't do. Problem decomposition. Architectural trade-offs. Knowing when the AI is wrong and why. We're not there yet on every team, but the leaders joining now will shape how we get there.
If this is already how your team works — or it's clearly where you're taking them — read on.
The Role
This is a hybrid technical leadership and people management role. You'll lead a team building and scaling our core platform: GrowthOS / MAITE (our customer-facing growth co-pilot). You stay technical — you're in the code, directing agents, reviewing output, making architectural calls. But your primary impact is through your team: how they work, how fast they're learning, and the quality of what they ship.
What the work looks like:
You're managing humans who manage AI agents. That's a new kind of leadership. The old playbook — sprint planning, ticket estimation, velocity tracking — breaks down when agents compress two days of work into hours. You need to design the operating rhythm, review processes, and guardrails that keep quality high when your team is shipping faster than ever before.
Specifically:
Own the technical direction of your team's domain. Make architectural decisions that compound. Set the standard for how AI-directed work is planned, reviewed, and shipped. You're accountable for what goes to production.
Design the team's operating rhythm for AI-native delivery. Sprint cadences, review processes, and quality gates built for a world where implementation is fast and the bottleneck has shifted to planning and judgment. What worked in 2024 doesn't work now — you'll figure out what does.
Build your team's engineering capability — deliberately. You're hiring a mix of experienced engineers and hungry earlier-career talent who are AI-native. Your job is to turn AI velocity into AI quality across the team. That means designing code reviews that interrogate architectural reasoning, not just syntax. It means pairing sessions that build judgment. It means knowing when to let someone struggle productively and when to step in.
Combat the Silent Silo. When AI gives your team instant answers, the natural mentorship loop — struggle, ask a senior, learn the why — breaks. You need to build team rituals that keep learning in the open. Run reviews that ask why this approach and what alternatives were rejected. Create psychological safety to challenge AI output and share workflows openly.
Stay in the code. You direct agents, review AI output, and make hands-on architectural decisions. You can't lead an AI-native team if you're not working this way yourself. But you're selective about where you spend your technical time — high-leverage architectural calls, not every feature.
Measure what matters now. You track outcomes, impact, code health, and change lead time. You watch whether AI is making your team better or just faster.
Shape Forge and the broader engineering culture — alongside our Head of Engineering. You'll partner directly with engineering leadership to redefine how product is delivered in an AI-native world. Contribute skills, agents, and patterns to our shared tooling layer. Run or evolve AI Office Hours. What your team learns becomes part of how the whole company works.
Partner with product and design to ensure the team is building the right things. When engineering velocity outpaces the backlog, you help the organisation make sharper decisions about where to invest.
What We're Looking For
You've led engineering teams. You've built production systems. And you've genuinely adapted how you or your team work with AI. All three are required — the combination is what makes this role hard to fill.
Technical Leadership
You've led teams that shipped production software. You've made architectural decisions under real constraints, navigated technical debt, and built systems that scaled. You've hired, developed, and retained engineers. You know what good looks like — in code, in process, and in people.
You think in systems, not features. You design team structures, review processes, and delivery cadences — not just technical architectures. You understand that how a team works is as much an engineering problem as what they build.
You've managed the tension between speed and quality. You know that velocity without oversight creates haunted codebases. You've built the guardrails — testing strategies, review standards, deployment gates — that let teams move fast without accumulating problems they haven't found yet.
AI-Native Practice (Non-Negotiable)
You work this way yourself. You direct AI agents, review output, and have a genuine AI-driven development workflow. You can walk through how you approach a non-trivial feature from planning to shipping. You're opinionated about tools — Claude Code, Cursor, MCP servers — and you've built systematic guardrails based on real failure modes you've encountered.
You've started this journey — and you're ready to take it further with a team. Maybe you've already led a team through this transition. Maybe you've gone deep on AI-augmented development in your own work — nights, weekends, side projects — and you're itching to bring that to how your team operates. Either path is valid. What matters is that you've thought seriously about what changes in team rituals, code review, mentorship, and metrics when agents do most of the implementation. You have a point of view, even if you haven't had the environment to fully test it yet.
You're thinking about the mentorship challenge. You recognise that AI intercepts the learning loop for earlier-career engineers. Maybe you've already built approaches to keep learning visible — annotated reviews, architectural interrogation, deliberate pairing. Maybe you've just started thinking about how you'd tackle it. What matters is that you see it as a real problem, not a side concern. It's central to building a team that gets better, not just faster.
Engineering Depth
You've built and maintained production-grade systems. Distributed systems, data-intensive applications, cloud infrastructure — you've worked at this level. You understand SOLID principles, design patterns, concurrency, and the unique challenges of operating at scale. This depth is what lets you evaluate AI output and guide your team's architectural decisions.
Full-stack fluency. You've worked across backend and frontend. You can evaluate and direct work across the entire stack, even if you have a natural centre of gravity.
Our stack, for context: GCP, TypeScript, React, Python (in some places), Pulumi. Specific language experience matters less than architectural range and the ability to ramp fast.
What Matters Most
Judgment over velocity. You optimise for sustainable speed. You know that the fastest team is the one that doesn't have to stop and untangle what it shipped last month.
Builder who leads. You haven't left the code behind. You're selective about where you go deep, but you're credible with your team because you work the way you're asking them to work.
People development as craft. You take mentorship seriously — not as a checkbox, but as a skill you're deliberately building. You think about how your engineers grow, not just what they deliver.
Comfort with ambiguity. The playbook for managing an AI-native team is being written in real time. You're energised by that, not paralysed.
What We Screen Out
Engineering managers who've stopped being technical. If you can't review AI-generated code and make architectural calls, you can't lead this team.
Leaders who dismiss AI or treat it as a fad. If your team's workflow looks the same as it did in 2024, the velocity gap is already significant and widening.
Process-heavy managers who optimise for predictability over outcomes. We need operating rhythms, not bureaucracy. The cadence has to match the pace.
Anyone who hasn't thought about how management changes in an AI-native world. You don't need to have all the answers — but you need to have wrestled with the questions. If the mentorship model, review process, and metrics haven't crossed your mind, this probably isn't the right time.
Why This Role
You've led teams before. Here's what's different about leading one here.
You'll define how AI-native engineering management works — and you won't do it alone. The playbook doesn't exist yet. You'll write it alongside our Head of Engineering — the team rituals, the review cadences, the mentorship patterns, the metrics, how product gets delivered. If you've been experimenting with AI-augmented workflows and dreaming about what it would look like to run a whole team this way, this is the environment to make it real.
A team that's already moving. You're not dragging people toward AI adoption. You're leading engineers who've already made the shift — and developing hungry earlier-career talent who arrived AI-native. The challenge is quality, architecture, and growth at velocity — not convincing anyone to change.
Hard problems with real constraints. Complex data systems, cloud infrastructure at scale, a product that handles real investment decisions for global brands. This is worth building, and worth building well.
Autonomy over how your team works. We've bet on Claude Code and Forge as our foundation. How you build your team's operating rhythm on top of that is yours to design.
Speed of decision-making. Our product process is built to keep pace with engineering velocity. Your team won't ship in days and then wait weeks for the next decision.
Equity ownership. All team members receive equity (ESOP).
Generous time off. 20 days annual leave to start, plus 5 extra after your first year, and 1 additional day each year after that (up to 30).
Parental leave. 12 weeks paid for the primary carer and 6 weeks for the secondary carer, after 2 years of service.
Work from anywhere. Up to 6 weeks each year to work from anywhere in the world, provided you keep time-zone crossover with Australia.
Committed to inclusion. Mutinex is proud to be an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We welcome applicants of all backgrounds, experiences, identities, and abilities.
Ready?
This is a full-time permanent position.
If you've led engineering teams and you've been experimenting with AI-augmented ways of working — whether with a team or on your own — and you're excited about building the playbook for what engineering leadership looks like next, we want to hear from you.
Send us your resume and a short note. Tell us how you work today, how you'd want your team to work, and what excites you about building this at Mutinex.
We move fast on candidates we're excited about.
Department: Engineering
Locations: Sydney office
Remote status: Hybrid
About Mutinex
Mutinex is the end-to-end market mix solution that helps marketers, media leads, agencies, and analytics and finance teams identify and optimise the different factors that drive sustainable business growth.