Zammad has gone through an intensive few months as the first AI features have really started to come together. What was merely a concept in our last interview has since been tested, refined, and further developed based on real-world feedback. A perfect moment to sit down with Product Owner Gerrit Daute once again to discuss progress, challenges, and the road ahead.
Gerrit, what is the current status of Zammad's AI features? Has anything fundamentally changed since our last conversation?
The core priorities haven’t changed, but our understanding of the finished product has become much clearer. We now have a very concrete picture of what the AI release in Zammad 7.0 should look like. And we’re sticking to the guiding principle we outlined back then: we build AI features that support agents — not replace them.
The first features are defined. These include automatic ticket summaries, a secure text assistant, and the first AI agents that take over simple, repetitive tasks such as prioritizing, categorizing, and assigning tickets. They may not sound flashy at first, but they make a significant difference in day-to-day support work.
All of these components serve one goal: to relieve agents without taking control away from them or handing decisions over to a black box. That’s what makes AI truly useful in customer support in the first place.
📘 Read the First Interview
In our initial conversation, our Product Owner explained how Zammad’s AI strategy took shape and which principles guided the early decisions: AI that serves support agents
Which of these features turned out to be more challenging than expected?
Surprisingly, it wasn’t the features themselves but the environment they run in. Large AI models tend to produce good results quickly — most people know this from tools like ChatGPT. The real challenge begins when you aim for the same quality on much smaller models.
And that part is essential for us, because we want Zammad users to be able to run their own AI servers without sending sensitive data to major cloud providers.
Smaller models, however, are far more sensitive. They react to even minor inaccuracies in prompts, lose context more easily, and are more prone to errors. That meant we had to learn how to craft prompts that can handle a wide variety of content across industries and use cases — consistently and on the first try.
The biggest effort, therefore, wasn’t the features themselves but the fine-tuning: testing prompts, rewriting them, discarding them, rethinking them. This iterative process took significantly longer than the actual technical implementation.
You tested the new AI features in a beta program. What kind of feedback did you receive?
Overall, the feedback was very positive, but also very insightful. The AI ticket summary, in particular, received a lot of praise from our beta testers. At the same time, we became aware of how high expectations around AI have become, and how widely they vary — from simple assistive features all the way to fully automated support workflows.
What reassured us that we’re still on the right path was the strong support we received once we explained why we’re intentionally not aiming for full automation yet and are instead focusing on assistive functions. Many people have experienced firsthand what happens when an AI response completely misses the point. Our approach — keeping agents at the center and letting AI assist — resonated strongly with testers.
How do you collect, evaluate, and measure the quality of AI feedback?
At the beginning, most of the feedback we received from beta testers was subjective. We quickly realized that wouldn’t be enough — we needed measurable quality metrics. That’s why we decided to move up a feature originally planned for later: a built-in feedback mechanism directly in the interface.
Testers can now rate the AI output with a simple thumbs up or thumbs down. If they give a negative rating, they can also add a short comment explaining why. This gives us not just the evaluation, but valuable context behind it.
It’s incredibly helpful. We can see which prompts work well, which ones need refinement, and when a feature is stable enough to be released. And it will also benefit Zammad admins, who can write their own prompts for features like the text assistant.
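To make the mechanism more concrete, here is a minimal sketch of what such a feedback record could capture. All field names are illustrative assumptions, not Zammad’s actual data model:

```python
# Illustrative sketch only: field names are assumptions, not Zammad's data model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIFeedback:
    ticket_id: int                 # ticket the rated AI output belongs to
    feature: str                   # e.g. "summary" or "text_assistant"
    prompt_version: str            # prompt revision that produced the output
    thumbs_up: bool                # True = positive, False = negative rating
    comment: Optional[str] = None  # short explanation, mainly for negative ratings

# Aggregating such records per prompt_version turns subjective feedback into a
# measurable signal: a consistently high thumbs-up ratio suggests a prompt is
# stable enough to release.
```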
Which teams or use cases gain the most value from AI features?
It’s less about the industry and more about the underlying workflows. Wherever support processes become more complex, AI tends to create the greatest impact. Depending on the task, different teams benefit from efficiency gains, quality improvements, or greater security.
Ticket summaries, for example, are especially valuable for teams where tickets frequently move between departments or remain open over longer periods. They save agents a significant amount of context-switching and onboarding time.
A similar effect can be seen with automatic dispatching. Companies with central information or support addresses gain a lot of time when tickets are routed automatically and reliably to the right team.
The same goes for organizations with high ticket volumes. Automatic prioritization takes considerable pressure off agents and ensures that critical issues become visible more quickly.
And across all industries, AI addresses another often overlooked problem: shadow IT. Once we offer agents a secure, built-in text assistant within Zammad, there’s no need for anyone to rely on private ChatGPT accounts for work-related content. That’s a win for every organization — no matter the industry.
🌒 Shadow AI: What Happens in the Dark
Many employees resort to private AI services due to a lack of internal solutions. What that means and how to address it is explained in this blog article: How to address Shadow AI risks in your organization
How does the free choice of AI models work in practice?
It’s very straightforward. SaaS customers can activate the AI directly in Zammad without needing any additional infrastructure. On-premise users can either connect to our hosted AI service using an API key, or — if they prefer full control — run their own AI server and integrate it via the existing interfaces.
With that, we combine the openness and autonomy of open source with the convenience of a secure, reliable AI backend. Many companies don’t want (or aren’t able) to operate their own models, but are equally reluctant to entrust their sensitive data to a corporate cloud provider. Our solution is designed precisely for this middle ground.
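For readers curious what the self-hosted option might look like in practice, here is a minimal sketch that queries a locally running, OpenAI-compatible inference server (such as Ollama) for a ticket summary. The endpoint, model name, and prompt are assumptions for illustration, not Zammad’s actual integration code:

```python
# Illustrative only: querying a self-hosted, OpenAI-compatible inference
# server (e.g. Ollama) for a ticket summary. Endpoint, model, and prompt
# are assumptions, not Zammad's integration code.
import json
import urllib.request

AI_SERVER = "http://localhost:11434/v1/chat/completions"  # hypothetical local server

payload = {
    "model": "llama3.1:8b",  # a small, self-hostable model
    "messages": [
        {"role": "system",
         "content": "Summarize the following support ticket in three bullet points."},
        {"role": "user",
         "content": "Customer reports login failures since last week's update ..."},
    ],
}

request = urllib.request.Request(
    AI_SERVER,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())
    print(reply["choices"][0]["message"]["content"])
```

Because no data leaves the local network in such a setup, sensitive ticket content stays fully under the organization’s control.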
When can users expect the first AI features, and how will the roadmap evolve from here?
Zammad 7.0 will mark the starting point for our AI capabilities. We’ve now reached a stage where the architecture is in place, the models run reliably, beta feedback is flowing in, and we’re gathering meaningful usage data. Our focus is currently on quality assurance and finalizing the initial feature set.
We don’t develop features because they’re trending; we develop them based on usefulness, responsibility, and transparency. The first release will center on assistive capabilities such as summaries, intelligent routing, and secure text support.
In the long run, Zammad will continue expanding its AI — always with the goal of genuinely empowering agents rather than handing decisions blindly over to an AI. This is exactly what will make Zammad one of the most exciting open-source helpdesk solutions in the coming years.