How to start writing a prompt: Persona, task, context, and format
When writing a prompt, it helps to start with a simple structure. Most high-performing prompts follow a predictable blueprint: they define a persona, a task, the necessary context, and a target format. This structure gives the model enough direction to move from a generic response to a precise, usable output, and it makes your prompts clearer, more relevant, and better aligned with the outcome you want.
Keep in mind, you don't have to use every piece every time. For simple requests, just saying what you need plus a little context usually does the trick. When things get more complex, that's when the extra details really pay off. Once you get the hang of these building blocks, you'll have a solid starting point for any prompt, and you'll get better at tweaking and refining them as you go.
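To make the four building blocks concrete, here is a minimal Python sketch. The `build_prompt` helper and its argument names are illustrative, not part of any particular tool's API; the point is simply that each block is optional scaffolding around a required task:

```python
def build_prompt(task, persona=None, context=None, fmt=None):
    """Assemble a prompt from the four building blocks.

    Only the task is required; the other pieces are added when given,
    mirroring the advice that simple requests need less scaffolding.
    """
    parts = []
    if persona:
        parts.append(f"Act as {persona}.")
    parts.append(task)
    if context:
        parts.append(f"Context: {context}")
    if fmt:
        parts.append(f"Format: {fmt}")
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarize the top 3 security risks in these logs.",
    persona="a senior SRE explaining this to a junior hire",
    context="This is for a non-technical stakeholder; focus on impact.",
    fmt="a bulleted list",
)
```

Calling `build_prompt("Do X.")` with no extras returns the bare task, while the full call above yields a persona line, the task, a context line, and a format line, one per line.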
Persona
Persona defines the role or point of view the AI should take when responding. It helps shape tone, depth, and how the explanation is framed for a specific audience. Without a persona, most AI tools respond as a generic assistant, which might not fit your specific needs. Using a persona is especially useful when you’re writing a blog post, explaining a topic to students, or adapting content for users with different levels of experience. It’s not just roleplay; it gives the model a clear frame of reference that improves its output.
| Quality | Prompt example | Why it works / fails |
|---|---|---|
| Good | “Act as a senior SRE explaining this to a junior hire.” | Instantly sets technical depth and tone. |
| Bad | “Act as a genius and explain everything.” | Too vague. Can result in pompous filler text or hallucinations. |
Task
The task tells the AI what you want it to do. This is the most important part of any writing prompt. A clear task gives the model a specific goal, while a vague task forces it to guess how it should respond.
For creating effective prompts, focus on actions and outcomes. Use clear instructions that describe what a good answer looks like. This applies whether you’re asking the LLM to explain data, write code, or generate ideas. If your first prompt doesn’t achieve what you want, refining the task is often the fastest way to get better results. Specificity usually matters more than length.
| Quality | Prompt example | Why it works / fails |
|---|---|---|
| Good | “Summarize the top 3 security risks in these logs as a bulleted list.” | Defines action, quantity, and format. |
| Bad | “Look at this and write something.” | Forces the AI to guess; usually leads to irrelevant output. |
Context
Context provides the background information the AI needs to respond accurately. This can include who the audience is, what the content will be used for, or any limitations the AI should consider. Without enough context, AI responses tend to be generic or irrelevant.
Adding context doesn’t mean adding more words. The goal is to include only the details that help the AI understand your request. Context is especially important when working with data, complex topics, or AI tools that generate confident but incorrect answers. Providing some relevant context reduces assumptions and improves your LLM’s response quality.
| Quality | Prompt example | Why it works / fails |
|---|---|---|
| Good | “This is for a non-technical stakeholder. Focus on budget, not implementation.” | Sets clear boundaries on what to include and ignore. |
| Bad | “Here’s a report, explain it.” | No frame of reference; results in a generic summary. |
Format
Format defines how the response should be presented. It helps structure the output so it’s easier to read, reuse, or review. Common formats include bullet lists, tables, short paragraphs, or step-by-step instructions.
Format becomes increasingly important as prompts get more complex, or when the output needs to be complete and ready to use. Clear format instructions help the AI organize information instead of deciding the structure on its own. This is especially useful when you plan to reuse responses in a project, course, or workflow.
| Quality | Prompt example | Why it works / fails |
|---|---|---|
| Good | “Return as a Markdown table with columns: Service, Status, and Next Step.” | Direct and usable. |
| Bad | “Make it neat.” | Subjective; usually ends up as a plain list. |
Best practices for effective prompts
Building successful prompts is key to getting the most out of your AI tools. Here are some best practices to help you create more effective prompts:
Use ‘do’ and ‘don’t’ statements
One of the easiest ways to create good prompts is to explicitly state what the AI should do and what it should avoid. Clear instructions help reduce irrelevant responses and prevent the AI from making assumptions that don’t match your specific needs.
AI tools are designed to respond helpfully, but when instructions are vague, they tend to fill gaps with extra details or unnecessary explanations. “Do” and “don’t” statements set boundaries that guide how the response should be formed.
This approach is especially useful when:
- Writing a descriptive prompt for a broad audience
- Adapting the same prompt across different AI tools
- Trying to get consistent results from repeated requests
Using “do” and “don’t” instructions makes prompts easier to reuse, easier to adapt, and more likely to produce reliable answers.
| Quality | Prompt example | Why it works / fails |
|---|---|---|
| Good | “Explain X in simple terms. Do not use technical jargon or acronyms.” | Sets a hard limit that forces a specific vocabulary. |
| Bad | “Explain this clearly.” | “Clearly” is subjective; the AI will guess what that means. |
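Mechanically, boundaries like these are just lines appended to the task. The `with_boundaries` helper below is a hypothetical sketch of the pattern, not a real library function:

```python
def with_boundaries(task, do=(), dont=()):
    """Append explicit 'Do' / 'Don't' instructions to a task."""
    lines = [task]
    lines += [f"Do: {item}" for item in do]
    lines += [f"Don't: {item}" for item in dont]
    return "\n".join(lines)

bounded = with_boundaries(
    "Explain OAuth in simple terms.",
    do=["use a real-world analogy"],
    dont=["use technical jargon or acronyms"],
)
```

Keeping the do/don't lists as data makes the same boundaries easy to reuse across prompts and tools, which is the main payoff of this technique.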
Use the ‘act as if…’ approach
The “act as if” approach is a simple way to make writing prompts more precise by setting a clear perspective for the AI. Instead of letting the model respond in a generic way, you tell it how to frame the response by assigning a role, audience, or situation.
This technique works well when tone, level of detail, or decision-making style matters. It’s especially useful for descriptive prompts, educational content, or situations where you need the AI to explain something in a specific way.
When using the “act as if” approach, keep the role practical and tied to the task. The goal is to improve clarity and relevance, not to turn the prompt into role-playing. Used correctly, this method helps create effective prompts that are easier to reuse and adapt across different AI tools.
| Quality | Prompt example | Why it works / fails |
|---|---|---|
| Good | “Act as an infrastructure architect reviewing a new service design.” | Sets an analytical, high-level, and critical tone. |
| Bad | “Act as an expert.” | Too broad; “expert” doesn’t specify the field or the audience’s needs. |
Chain-of-thought prompting
Chain-of-thought prompting encourages AI to break a response into smaller, logical steps before reaching a final answer. Instead of jumping straight to a conclusion, the model explains its reasoning along the way. This is especially useful for tasks involving analysis, problem-solving, or multi-step decisions.
Chain-of-thought prompting works well when a simple prompt produces shallow or incorrect answers. By asking the AI to show how it arrived at a response, you make the reasoning process more transparent and easier to evaluate.
It’s especially useful when working with data, writing code, comparing options, or explaining complex topics. It also makes it easier to spot mistakes. You can see where the reasoning breaks down instead of only seeing the final result.
This technique isn't necessary for every task. For straightforward requests, asking for step-by-step reasoning can add unnecessary verbosity. Use it when understanding the process matters as much as the answer itself.
| Quality | Prompt example | Why it works / fails |
|---|---|---|
| Good | “Explain your reasoning step-by-step before suggesting the final fix.” | Exposes the logic, making it easier to spot errors. |
| Bad | “Give me the answer.” | Increases the risk of the model guessing incorrectly on complex tasks. |
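In practice, chain-of-thought is often just a suffix appended to the task, gated on whether the task is complex enough to warrant it. A minimal sketch (the helper name and wording are made up for illustration):

```python
REASONING_SUFFIX = (
    "Explain your reasoning step-by-step before giving the final answer."
)

def with_reasoning(task, complex_task=True):
    """Append a chain-of-thought instruction to a task.

    For straightforward requests, skip the suffix to avoid the
    unnecessary verbosity the article warns about.
    """
    return f"{task}\n\n{REASONING_SUFFIX}" if complex_task else task

cot_prompt = with_reasoning("Why is this deployment failing? Logs attached.")
```

The gate makes the trade-off explicit: the suffix is applied only where seeing the reasoning is worth the extra output.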
Meta prompting
Meta prompting is the practice of asking the AI to reflect on how it should respond before generating the actual answer. Instead of focusing only on the content, you guide the process by telling the model to think about structure, quality, or limitations first.
This technique is especially useful when you want more control over responses or when previous prompts have produced inconsistent results. Meta prompting helps turn vague requests into clearer instructions by making the AI evaluate its own output criteria.
Meta prompting works well when you’re:
- Refining your own prompts through an iterative process
- Working on complex projects that require structured thinking
- Trying to adapt the same prompt across different AI tools
It’s also helpful when accuracy matters, such as when summarizing data, writing explanations, or generating code. By asking the AI to reflect on its approach, you reduce hidden assumptions and improve the consistency of results.
Used sparingly, meta prompting can significantly improve prompt quality. Overused, it can slow things down. As with most prompt engineering techniques, the key is knowing when the extra guidance adds value and when a simple, direct request is enough.
| Quality | Prompt example | Why it works / fails |
|---|---|---|
| Good | “Before you finalize the article, audit your own claims. Identify any statistics or trends that lack a clear source and tell me where the logic might be weak.” | Forces the model to review its work through a critical, skeptical lens. |
| Bad | “Make sure this is all true.” | Too vague; the AI will just agree with itself and move on. |
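One way to apply meta prompting is to prepend an instruction that makes the model state its quality criteria before answering, then audit its own draft against them. The prefix wording below is illustrative, not a prescribed formula:

```python
META_PREFIX = (
    "Before answering: (1) list the criteria a high-quality answer must meet, "
    "(2) write the answer, (3) audit the answer against those criteria and "
    "flag any claim that lacks a clear source."
)

def meta_prompt(task):
    """Wrap a task in a self-review instruction (meta prompting)."""
    return f"{META_PREFIX}\n\nTask: {task}"

meta = meta_prompt("Write a short article on cloud cost trends.")
```

Because the criteria step comes first, the model commits to a standard before writing, which is what reduces hidden assumptions in the final output.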
How do I check the credibility of the AI-generated output?
AI tools are fluent, but fluent doesn't mean accurate. Even with a well-crafted prompt, AI-generated answers can still contain incorrect facts, missing context, or confident-sounding statements with nothing to back them up. That's why checking credibility is essential, especially when the output shapes decisions, analysis, or anything you publish.
Watch out for AI hallucinations: cases where the model generates plausible-sounding but false information to satisfy your prompt. Hallucinations happen most often when prompts lack context, ask the model to fill in gaps, or push it beyond what it can reliably know. Better prompts reduce the risk, but nothing eliminates it entirely. For enterprise use cases, techniques like retrieval-augmented generation (RAG) can ground responses in verified data sources, significantly improving accuracy.
Don't take AI responses at face value. Scrutinize every claim, especially numbers, dates, technical details, and bold conclusions. These are where AI is most likely to slip up.
If something feels off, push back. Ask the AI to explain its assumptions, walk through its reasoning, or admit where it's guessing. You can also run identical prompts through different models. Inconsistencies between them are a red flag that the information might be weak or outright fabricated. You're the fact-checker, not the AI.
Know the limits. Models don't verify facts unless you tell them to, and they often run on outdated training data. Treat AI responses as a starting point, not the final word. When accuracy matters, human review and external validation aren't optional; they're required.
Build verification into your workflow. The best results come from combining thoughtful prompts with consistent review. nexos.ai supports this by letting you compare AI models side by side, review outputs in one place, and refine prompts across your workspace. You catch issues early and improve over time.
Test: How to write a prompt
Put what you've learned into practice. Below are two prompts asking for the same result. Read both carefully and decide which one will produce a better response.
| Prompt A | Prompt B |
|---|---|
| "Our new hires feel lost in their first week. Can you suggest some onboarding improvements? Please be detailed and helpful." | "Act as an HR specialist. Suggest 5 onboarding improvements for a new hire's first week. For each, include: the problem it solves and how to implement it. Format as a numbered list." |
Which prompt would you choose?
Both prompts describe the same problem and ask for suggestions. One will consistently produce more useful results. Take a moment to think about why.
The winner: Prompt B.
Why?
| | Prompt A | Prompt B |
|---|---|---|
| Persona | None | HR Specialist |
| Task | Open-ended | "5 improvements" sets scope |
| Context | Mentions problem, no focus | Focuses on first week |
| Format | "Be detailed" is vague | Numbered list with structure |
The takeaway: Conversational prompts feel intuitive but often produce generic results. Adding structure gets you answers you can actually use.
Ultimately, writing effective prompts is less about finding a secret formula and more about developing a methodical approach. Test, review, refine, repeat. nexos.ai’s AI platform for business supports this workflow by giving you the tools to test, compare, and iterate, so you can move from guesswork to confident, repeatable outcomes.