Get the LLM to reason step-by-step before answering.
Chain-of-Thought (CoT) prompting is a method where you guide a language model to solve a problem by thinking step-by-step, instead of jumping directly to an answer. This improves performance on tasks that require reasoning, such as math, logic, and multi-step decision making.
In Zedflows, you can implement CoT prompting with a single step-by-step prompt, or combine parallel LLM nodes with a critique step. The examples below show both approaches.
Input Question:
If Alice has 3 apples and buys 2 more, how many apples does she have?
Prompt to LLM:
Question: If Alice has 3 apples and buys 2 more, how many apples does she have?
Think step-by-step to find the answer.
LLM Output:
Alice starts with 3 apples.
She buys 2 more apples.
3 + 2 = 5
Answer: 5
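The single-prompt pattern above can be sketched in a few lines. Note that `call_llm` here is a hypothetical placeholder for whatever model client your workflow uses; it is not part of the Zedflows API.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a step-by-step instruction, as in the example above."""
    return f"Question: {question}\nThink step-by-step to find the answer."


def answer_with_cot(question: str, call_llm) -> str:
    """Send the CoT prompt to the model and return its reasoning plus answer.

    `call_llm` is any callable that takes a prompt string and returns the
    model's text output (a stand-in for your actual LLM client).
    """
    return call_llm(build_cot_prompt(question))
```

Because the model client is passed in as a callable, the same function works with any backend, and you can test the prompt construction without a live model.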
A more advanced workflow runs parallel LLM roles and then critiques their answers.
Input Question:
John has twice as many apples as Sarah. Together they have 18 apples.
How many apples does each person have?
Parallel LLMs (different roles):
LLM A (Math Teacher):
Let Sarah have x apples.
Then John has 2x apples.
Together: x + 2x = 18
3x = 18
x = 6 (Sarah), 2x = 12 (John)
LLM B (Logic Solver):
If Sarah has 6 apples, John would have 12.
6 + 12 = 18, so the numbers work.
Final Answer: Sarah has 6, John has 12.
Critique LLM:
All answers are correct and consistent.
LLM A provided the full equation-based breakdown.
LLM A's response is more thorough.
Selected Answer: Sarah has 6 apples, John has 12 apples.
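The parallel-roles-plus-critique workflow above can be sketched as follows. The role names come from the example; `call_llm` and the exact prompt wording are illustrative assumptions, not a Zedflows API.

```python
from concurrent.futures import ThreadPoolExecutor

# Role primers for the parallel LLM nodes (names taken from the example above).
ROLE_PROMPTS = {
    "Math Teacher": "You are a math teacher. Solve with equations, step-by-step.",
    "Logic Solver": "You are a logic solver. Verify the numbers are consistent.",
}


def run_parallel_roles(question: str, call_llm) -> dict:
    """Fan the same question out to several role-primed LLM calls in parallel.

    `call_llm` is any callable mapping a prompt string to the model's output.
    """
    def ask(role: str):
        prompt = f"{ROLE_PROMPTS[role]}\nQuestion: {question}\nThink step-by-step."
        return role, call_llm(prompt)

    with ThreadPoolExecutor() as pool:
        return dict(pool.map(ask, ROLE_PROMPTS))


def critique(question: str, answers: dict, call_llm) -> str:
    """Ask a critique LLM to compare the role answers and select the best one."""
    joined = "\n".join(f"{role}: {ans}" for role, ans in answers.items())
    return call_llm(
        f"Question: {question}\nCandidate answers:\n{joined}\n"
        "Compare the answers for correctness and select the best one."
    )
```

In a Zedflows graph, each role would typically be its own LLM node and the critique a downstream node; the sketch collapses that into plain functions so the data flow is visible.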
Chain-of-Thought prompting makes complex tasks more tractable for language models by having them reason explicitly before answering. With Zedflows, you can extend the method with parallel nodes and self-critiquing workflows for even better results.