How to Write Good AI Prompts: A Complete Guide to Getting Better Results

Jan 26, 2026

"Why isn't the AI giving me what I want?"

If you've ever felt frustrated with AI responses that miss the mark, you're not alone. Whether you're using ChatGPT, Claude, Gemini, or any other large language model (LLM), the quality of your output depends almost entirely on the quality of your prompt.

Here's the key insight: Writing good AI prompts is like giving instructions to an incredibly smart 10-year-old. This "10-year-old" has access to vast knowledge and can process information faster than any human—but they still need clear, specific, and prescriptive instructions to understand what you want and how to deliver it.

In this guide, we'll show you exactly how to write prompts that consistently produce high-quality results.

The ROSES Framework: Your Starting Point for Better Prompts

If you're new to prompt engineering or don't know where to start, the ROSES framework is your foundation. ROSES stands for:

  • Role: Define the AI's expertise or role

  • Objective: State the goal clearly

  • Scenario: Describe the situation or context

  • Expected Solution: Specify the expected outcome

  • Steps: List the step-by-step process to follow

Let's see how this works in practice.

Example: Writing an Investment Memo

Weak Prompt (Vague and Ambiguous)

Write an investment memo based on the company name provided

Why this doesn't work: The AI has to guess what an "investment memo" means to you, what format you want, what information to include, and what tone to use. This leaves enormous room for interpretation—and poor results.

Strong Prompt (Using ROSES Framework)

Role:
You are an investment analyst at a venture capital firm focused on seed-stage and Series A B2B SaaS companies with AI differentiation in Latin America.

Objective:
Write a one-page investment memo for the company name provided.

Scenario:
You will receive a company name. Research the company using publicly available information, prioritizing official sources when data conflicts.

Expected Solution:
A one-page memo in a neutral, professional investor tone, covering business overview, competitive landscape, financial assessment, and an investment recommendation.

Steps:
1. Search for general information about the company
2. Identify main competitors and competitive positioning
3. Find the latest financial information from official sources
4. Draft the memo, explicitly flagging any information you could not find

Why the ROSES Framework Works

The ROSES framework works because it eliminates ambiguity at every level:

  • Role gives the AI context about expertise and perspective

  • Objective clarifies the end goal

  • Scenario provides decision-making guidance for edge cases

  • Expected Solution specifies format, tone, and structure

  • Steps creates a logical workflow for the AI to follow

Pro tip: Use positive framing. Tell the AI what to do, not what to avoid. Instead of "don't be vague," say "be specific and cite sources."
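To make the framework concrete, here is a minimal sketch of assembling the five ROSES parts into a single prompt string. The helper name and example values are our own, not part of any particular library:

```python
def build_roses_prompt(role, objective, scenario, expected_solution, steps):
    """Join the five ROSES sections into one prompt string."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"Role: {role}\n"
        f"Objective: {objective}\n"
        f"Scenario: {scenario}\n"
        f"Expected Solution: {expected_solution}\n"
        f"Steps:\n{numbered}"
    )

prompt = build_roses_prompt(
    role="You are an investment analyst at a venture capital firm.",
    objective="Write a one-page investment memo for the company provided.",
    scenario="You receive a company name and public information about it.",
    expected_solution="A memo in a neutral, professional investor tone.",
    steps=["Research the company", "Identify competitors", "Assess financials"],
)
print(prompt)
```

Keeping each section as a named parameter makes it easy to swap one part (say, the Role) while holding the rest of the template constant.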

Breaking Down Long Prompts with XML Tags

If your prompt is getting long and complex, the AI might "get lost" in processing all the instructions. The solution? Break your prompt into clearly labeled sections using XML tags.

Here's how to restructure the investment memo prompt:

<Task>
You are an investment analyst writing memos for a venture capital firm focusing on seed-stage and Series A B2B SaaS companies with AI differentiation in Latin America. You will receive a company name and produce a one-page investment memo covering business overview, competitive landscape, and financial assessment.
</Task>

<Research Process>
1. Search for general information about the company
2. Identify main competitors and competitive positioning
3. Find latest financial information from official sources
4. If there's conflicting data, prioritize SEC filings over external sources
5. If information is unavailable, explicitly state "I could not find relevant information"
</Research Process>

<Expected Outcome>
Produce a one-page memo in neutral, professional investor tone.

Structure:
1. Business Overview
2. Competitive Landscape
3. Financial Assessment
4. Investment Recommendation

Format: Clear sections with headers, concise paragraphs, data-backed analysis.
</Expected Outcome>

Why this works: XML tags (or similar markers like headers) help the AI parse different instruction types—task definition, process steps, and output requirements—without confusion.
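As a sketch, the tagging step itself is just string assembly. The tag names below follow the article's example; the helper function is our own:

```python
def tag_section(name, body):
    """Wrap an instruction block in a labeled open/close tag pair."""
    return f"<{name}>\n{body.strip()}\n</{name}>"

sections = {
    "Task": "You are an investment analyst writing a one-page memo.",
    "Research Process": "1. Search for company info\n2. Identify competitors",
    "Expected Outcome": "A one-page memo with clear section headers.",
}
prompt = "\n\n".join(tag_section(name, body) for name, body in sections.items())
print(prompt)
```

Because each block is labeled and explicitly closed, you can reorder or drop sections without rewriting the rest of the prompt.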

Advanced Technique: Reverse Prompting

Once you've gotten a few high-quality outputs from your AI, you might notice that small wording changes produce dramatically different results. This is because LLMs are probabilistic—the same prompt can generate variations, and subtle phrasing shifts matter.

Reverse prompting solves this problem by working backward from success.

How Reverse Prompting Works

  1. Take your best AI output (the response that perfectly matched what you needed)

  2. Feed that output to an LLM (like ChatGPT or Claude) with a meta-prompt along these lines: "Here is an output I was very happy with. Analyze its structure, tone, and content, then write a detailed, reusable prompt that would reliably produce outputs like it."

  3. Use the generated prompt as your new template for similar tasks

Why Reverse Prompting Works

Reverse prompting leverages what LLMs do best: pattern recognition. Instead of you guessing what instructions will work, you're asking the AI to identify the patterns in successful output and translate them into reusable instructions.

This approach is particularly powerful when:

  • You know what good output looks like but struggle to articulate it

  • You need to standardize prompts across a team

  • You want to capture the "voice" or style of successful outputs
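The reverse-prompting step itself can be sketched as wrapping a known-good output in a meta-prompt. The exact meta-prompt wording below is our own example, not a fixed recipe:

```python
# Meta-prompt asking the model to reconstruct the instructions that
# would produce a known-good output (wording is illustrative).
META_PROMPT = (
    "Below is an output I was very happy with. Analyze its structure, "
    "tone, and content, and write a detailed, reusable prompt that "
    "would reliably produce outputs like it.\n\n"
    "<Output>\n{output}\n</Output>"
)

def make_reverse_prompt(good_output):
    """Embed a successful output into the meta-prompt template."""
    return META_PROMPT.format(output=good_output.strip())

print(make_reverse_prompt("Acme is a seed-stage B2B SaaS company..."))
```

The tagged `<Output>` block keeps the example output clearly separated from the instructions, so the model analyzes it rather than continuing it.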

Common Prompt Writing Mistakes (And How to Fix Them)

Mistake 1: Being Too Vague

Bad: "Write a report about AI trends"
Good: "Write a 500-word report summarizing the top 3 AI trends in healthcare for 2026, aimed at C-suite executives with limited technical background. Include data points and cite sources."

Mistake 2: Asking the AI to "Not Do" Things

Bad: "Don't be boring. Don't use jargon."
Good: "Write in an engaging, conversational tone. Use simple language and explain technical terms when necessary."

Mistake 3: Not Providing Examples

Bad: "Format this data nicely"
Good: "Format this data as a table with columns: Company Name, Revenue, Growth Rate (%). Sort by revenue descending. Example format:

| Company | Revenue | Growth |
|---------|---------|--------|
| Acme    | $50M    | 25%    |
"

Mistake 4: Overloading a Single Prompt

Bad: One massive prompt trying to do research, analysis, writing, and formatting in one go
Good: Break into sequential prompts or use a multi-step workflow:

  1. "Research X and summarize findings"

  2. "Analyze the research summary for patterns"

  3. "Write a report based on the analysis"
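The three-step workflow above can be sketched as a simple chain, where each step's response feeds the next prompt. `call_llm` below is a stand-in for whatever client you actually use; here it just echoes its input so the example runs offline:

```python
def call_llm(prompt):
    """Placeholder for a real LLM API call (assumption, not a real client)."""
    return f"[model response to: {prompt}]"

steps = [
    "Research {topic} and summarize findings:\n{previous}",
    "Analyze the research summary for patterns:\n{previous}",
    "Write a report based on the analysis:\n{previous}",
]

result = ""
for template in steps:
    result = call_llm(template.format(topic="AI trends", previous=result))
print(result)
```

Splitting the work this way also lets you inspect and correct each intermediate result before it flows into the next step.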

The Bottom Line: Specificity Wins

The single most important principle in prompt engineering is this: Be specific.

Vague prompts produce vague results. Detailed, structured prompts produce detailed, structured results.

Think of it this way: if you wouldn't understand what to do based on your own prompt, neither will the AI.

Start with the ROSES framework, break down complex prompts with clear sections, use reverse prompting to capture successful patterns, and iterate based on results. With practice, writing effective prompts becomes second nature—and your AI outputs will consistently meet (and exceed) expectations.

Ready to Level Up Your AI Prompting?

Start applying the ROSES framework today, and you'll see immediate improvements in output quality. Want to learn more about StackAI? Get a demo today.

Jenny Cang

AI Strategist at StackAI

Jacob Yoon

Founding Forward Deployed Engineer at StackAI
