Zonotopes are a promising abstract domain for machine-learning tasks thanks to their efficiency, but they cannot capture the complex non-linear transformations found in recent architectures, such as the softmax used in the attention mechanism of Large Language Models. Recent work has proposed more precise variants, such as Polynomial Zonotopes or Hybrid Constrained Zonotopes (HCZ), but these often require solving a mixed-integer linear program (MILP) at each step, which makes them impractical in high dimensions. This presentation, focused on the transformer architecture underlying LLMs, will show how to retain most of the precision of HCZs while preserving computational efficiency through dual optimisation.
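
To illustrate why zonotopes are efficient for the affine parts of a network but lose precision on non-linear operations such as softmax, here is a minimal Python sketch. It is not taken from the presentation; the class and function names are hypothetical, and the softmax handling shown is a crude interval fallback rather than the HCZ-based approach the talk describes.

```python
# Minimal sketch (assumed names, not from the talk): zonotopes propagate
# exactly through affine layers, but a non-linear op such as softmax forces
# a loss of precision -- here handled by a crude interval fallback.
import numpy as np

class Zonotope:
    """Set {c + G @ e : e in [-1, 1]^m}, with centre c (n,) and generators G (n, m)."""
    def __init__(self, c, G):
        self.c = np.asarray(c, dtype=float)
        self.G = np.asarray(G, dtype=float)

    def affine(self, W, b):
        # Affine maps are exact and cheap: W (c + G e) + b = (W c + b) + (W G) e
        return Zonotope(W @ self.c + b, W @ self.G)

    def interval(self):
        # Concretise to a box: c +/- sum of |generators|
        r = np.abs(self.G).sum(axis=1)
        return self.c - r, self.c + r

def softmax_interval(lo, hi):
    # Sound but coarse bounds on softmax over the box [lo, hi], using the fact
    # that softmax_i is increasing in x_i and decreasing in every x_j, j != i.
    n = len(lo)
    out_lo, out_hi = np.zeros(n), np.zeros(n)
    for i in range(n):
        others = np.delete(np.arange(n), i)
        out_lo[i] = np.exp(lo[i]) / (np.exp(lo[i]) + np.exp(hi[others]).sum())
        out_hi[i] = np.exp(hi[i]) / (np.exp(hi[i]) + np.exp(lo[others]).sum())
    return out_lo, out_hi

if __name__ == "__main__":
    z = Zonotope(c=[0.0, 1.0], G=[[0.1, 0.0], [0.0, 0.2]])
    z = z.affine(W=np.array([[1.0, -1.0], [0.5, 2.0]]), b=np.array([0.0, 0.1]))
    lo, hi = softmax_interval(*z.interval())
    print("softmax bounds:", lo, hi)
```

The affine step keeps the full generator structure, while the softmax step collapses the zonotope to a box, which is exactly the kind of precision loss that richer set representations such as HCZs aim to avoid.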