The LLM Component processes the data it receives according to the prompt you provide, producing output in the format required by the subsequent components in the AI flow.
Write Prompt: Here you specify the data for the LLM to process, the actions you want performed on that data, and the required output format.
<aside> ❗
The prompt must follow the syntax of the Jinja2 template engine. For more information on Jinja2 usage, see the Jinja2 Documentation.
</aside>
Example Jinja2 Syntax Prompt:
Your task is to summarize the following text:
Text: {{ text_data }}
In this example, {{ text_data }} is a placeholder for the data coming into the component that will be processed by the LLM.

Define the Output Schema
str
: Choose this mode for a single entity, such as rewriting, summarizing, or classifying a single piece of data from the previous component. This mode allows you to configure multiple fields simultaneously.
📝 Example: If you have a text and you would like a brief summary of it, the output might look like this:
{
"summary": "This is a brief summary of the text."
}
list[str]
: Select this mode to handle multiple entities, such as identifying multiple classifications or summarizing several characteristics from the previous component’s output.
📝 Example: If you want to identify several topics from a text, the output might look like this:
{
"topics": [
"Technology",
"Innovation",
"AI"
]
}
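To make the two schema modes concrete, here is a minimal, hypothetical sketch of how a downstream consumer might check that the LLM's JSON output matches the chosen mode. The `validate_output` helper and its parameters are illustrative assumptions, not part of the component's actual API.

```python
import json

# Hypothetical helper: checks an LLM's JSON output against one of the two
# schema modes described above ("str" or "list[str]"). Illustrative only.
def validate_output(raw: str, field: str, mode: str):
    data = json.loads(raw)
    value = data[field]
    if mode == "str" and not isinstance(value, str):
        raise TypeError(f"{field!r} must be a string")
    if mode == "list[str]" and not (
        isinstance(value, list) and all(isinstance(v, str) for v in value)
    ):
        raise TypeError(f"{field!r} must be a list of strings")
    return value

# The two example outputs from above, one per mode.
summary = validate_output(
    '{"summary": "This is a brief summary of the text."}', "summary", "str"
)
topics = validate_output(
    '{"topics": ["Technology", "Innovation", "AI"]}', "topics", "list[str]"
)
```

Passing a `list[str]` payload to `"str"` mode (or vice versa) raises a `TypeError`, which is the kind of mismatch choosing the wrong mode would produce.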
Input Type: List of dictionaries
Description: The input is a list of dictionaries, where each dictionary contains identical fields and data types. When configuring the component's prompt, you can reference these fields by typing /.
Imagine you have a list of questions that need rephrasing to be more friendly and engaging. Each question is represented as a dictionary in the list.
[
{ "generated_questions": "What are some easy recipes for a quick dinner?" },
{ "generated_questions": "Can you suggest some fun activities for kids?" },
{ "generated_questions": "How can I improve my productivity at work?" }
]
Input: The LLM component receives the above list where each dictionary contains a question that needs to be rephrased.
Prompt: The LLM component uses a prompt like this:
Your task is to turn the Old_Question into a friendly and engaging question.
Old_Question: {{ generated_questions }}
When the LLM component runs, it replaces {{ generated_questions }} with the actual value of each generated_questions entry in the list. For example, the first entry produces:
Your task is to turn the "Old_Question" into a friendly and engaging
question.
Old_Question: What are some easy recipes for a quick dinner?
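The per-entry substitution above can be sketched with the Jinja2 library itself. This is an illustrative approximation of what the component does internally, assuming each dictionary in the list is rendered against the same template:

```python
from jinja2 import Template

# The example input list from above: one dictionary per question.
questions = [
    {"generated_questions": "What are some easy recipes for a quick dinner?"},
    {"generated_questions": "Can you suggest some fun activities for kids?"},
    {"generated_questions": "How can I improve my productivity at work?"},
]

# The example prompt, as a Jinja2 template.
template = Template(
    "Your task is to turn the Old_Question into a friendly and engaging "
    "question.\n"
    "Old_Question: {{ generated_questions }}"
)

# Render the template once per dictionary; each entry's field value
# replaces the {{ generated_questions }} placeholder.
prompts = [template.render(**entry) for entry in questions]
print(prompts[0])
```

The first rendered prompt ends with the line shown above: "Old_Question: What are some easy recipes for a quick dinner?"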