How We Integrated Downstream Processes and Workflows

Tags: google
DATE POSTED: March 18, 2025
Table of Links

Abstract and 1 Introduction

2 Survey with Industry Professionals

3 RQ1: Real-World Use Cases That Necessitate Output Constraints

4 RQ2: Benefits of Applying Constraints to LLM Outputs and 4.1 Increasing Prompt-Based Development Efficiency

4.2 Integrating with Downstream Processes and Workflows

4.3 Satisfying UI and Product Requirements and 4.4 Improving User Experience, Trust, and Adoption

5 How to Articulate Output Constraints to LLMs and 5.1 The Case for GUI: A Quick, Reliable, and Flexible Way of Prototyping Constraints

5.2 The Case for NL: More Intuitive and Expressive for Complex Constraints

6 The ConstraintMaker Tool and 6.1 Iterative Design and User Feedback

7 Conclusion and References

A. The Survey Instrument

4.2 Integrating with Downstream Processes and Workflows

Because LLMs often serve as sub-components in larger pipelines, respondents emphasized that guaranteed constraints are critical to ensuring their output remains compatible with downstream processes, such as modules that expect a specific format or functional code as input. For code generation specifically, they highlighted the necessity of constraining the output to ensure “executable” code that adheres only to “methods specified in the context” and avoids errors such as hallucinating “unsupported operators” or “SQL … in a different dialect.” Note that while the “function calling” features in the latest LLMs [8, 26] can “select” functions to call from a predefined list, users still have to implement these functions correctly themselves.
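To make this failure mode concrete, a downstream consumer can defensively validate a model's tool-call output before executing it. The sketch below is illustrative only: the method names and the JSON shape (`method`/`args` keys) are hypothetical assumptions, not part of any real function-calling API described in the paper.

```python
import json

# Hypothetical whitelist of methods the downstream system actually implements.
ALLOWED_METHODS = {"search_flights", "book_hotel"}

def validate_tool_call(raw: str) -> dict:
    """Reject model output that downstream code could not execute."""
    call = json.loads(raw)  # must at least be valid JSON
    if call.get("method") not in ALLOWED_METHODS:
        # Catches the "hallucinated unsupported operator" case.
        raise ValueError(f"unsupported method: {call.get('method')!r}")
    if not isinstance(call.get("args"), dict):
        raise ValueError("args must be a JSON object")
    return call

ok = validate_tool_call('{"method": "search_flights", "args": {"to": "PIT"}}')
```

A guaranteed output constraint would make this post-hoc check (and its retry loop) unnecessary, which is precisely the benefit respondents describe.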

Many studies indicate that LLMs are highly effective for creating synthetic datasets for AI training [9, 15, 38], and our survey respondents postulated that being able to impose constraints on LLMs could improve the datasets’ quality and integrity. For instance, one respondent wished that model-generated movie data would “not say a movie’s name when it describes its plot,” as they were going to train using this data for a “predictive model of the movie itself.” Any breach of such constraints could render the data “unusable.”
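Without generation-time constraints, the fallback is to filter the synthetic data afterwards and discard violating records. A minimal sketch of that filter for the movie example above, assuming a hypothetical `title`/`plot` record schema:

```python
def violates_constraint(record: dict) -> bool:
    """True if the generated plot leaks the movie's own title."""
    return record["title"].lower() in record["plot"].lower()

# Toy synthetic records; the second one breaches the constraint.
data = [
    {"title": "Heat", "plot": "A detective pursues a master thief."},
    {"title": "Jaws", "plot": "In Jaws, a shark terrorizes a beach town."},
]
clean = [r for r in data if not violates_constraint(r)]
```

Post-hoc filtering wastes generation cost on discarded records, which is why respondents would rather have the constraint enforced during decoding.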

Furthermore, given the industry trend of continuously migrating to newer, more cost-effective models, respondents highlighted the importance of “canonizing” constraints across models to avoid extra prompt-engineering after migration (e.g., “if I switch model, I get the formatting immediately”). This suggests that it could be more advantageous for models to accept output constraints independently of the prompt, leaving the prompt to contain only task instructions.
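One way to read this suggestion is an interface where the format constraint travels with the API call rather than living inside the prompt, so swapping the underlying model requires no re-prompting. The wrapper below is a hypothetical sketch (the `generate` signature, the regex-based constraint, and the retry loop are all assumptions, not a real API):

```python
import re

def generate(model, prompt: str, output_regex: str, max_tries: int = 3) -> str:
    """Call `model` (any callable str -> str) until its output matches
    the constraint passed alongside, not inside, the prompt."""
    for _ in range(max_tries):
        out = model(prompt)
        if re.fullmatch(output_regex, out):
            return out
    raise RuntimeError("model never satisfied the constraint")

# Stand-in for any model backend; the prompt contains only the task.
fake_model = lambda p: '{"rating": 5}'
result = generate(fake_model, "Rate this movie 1-5 as JSON.",
                  r'\{"rating": [1-5]\}')
```

Because the constraint is a separate argument, migrating to a different `model` callable changes nothing else in the call site, which matches the respondent's "if I switch model, I get the formatting immediately."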


:::info This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.

:::

:::info Authors:

(1) Michael Xieyang Liu, Google Research, Pittsburgh, PA, USA ([email protected]);

(2) Frederick Liu, Google Research, Seattle, Washington, USA ([email protected]);

(3) Alexander J. Fiannaca, Google Research, Seattle, Washington, USA ([email protected]);

(4) Terry Koo, Google, Indiana, USA ([email protected]);

(5) Lucas Dixon, Google Research, Paris, France ([email protected]);

(6) Michael Terry, Google Research, Cambridge, Massachusetts, USA ([email protected]);

(7) Carrie J. Cai, Google Research, Mountain View, California, USA ([email protected]).

:::

