Processed in your browser. Your data never leaves your device.
Build structured prompts with reusable templates, variables, and multi-format output for OpenAI, Anthropic, and Gemini APIs. All processing happens in your browser.
Prompt Sections
Variables (use {{name}} in prompts)
Output Format
Output Preview
~177 tokens · 708 chars
[SYSTEM]
You are an expert code reviewer. Analyze the provided code for bugs, performance issues, security vulnerabilities, and adherence to best practices. Be specific and actionable in your feedback.

[USER]
## Role
You are an expert TypeScript code reviewer with 10+ years of experience.

## Context
Review the following TypeScript code from a web application project.

## Task
Analyze this code for:
1. Bugs and logical errors
2. Performance issues
3. Security vulnerabilities
4. Best practice violations
5. Readability improvements

## Output Format
For each issue found, provide:
- **Severity**: Critical / Warning / Info
- **Line**: The affected code
- **Issue**: What's wrong
- **Fix**: How to fix it
Variable Quick-Fill
Frequently Asked Questions
How do I structure an effective AI prompt?
An effective AI prompt has four key sections: role or system context that defines the AI's persona and constraints, task description that clearly states what you want, input data or context the AI needs to work with, and output format specifying the expected response structure. The prompt builder provides templates for each section and lets you combine them into a complete prompt. Being specific about format (JSON, markdown, bullet points) dramatically improves output consistency. Including examples of desired output (few-shot prompting) further improves accuracy. Constraints like word limits, tone requirements, or topics to avoid help keep responses focused and useful.
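The four-section structure can be sketched as code. This is a minimal illustration, not the builder's actual implementation: the `PromptSections` interface and `buildPrompt` helper are hypothetical names, though the `{{name}}` variable syntax matches the builder's convention.

```typescript
// Hypothetical sketch of assembling a prompt from the four sections.
interface PromptSections {
  role: string;    // persona and constraints
  context: string; // input data the AI needs
  task: string;    // what you want done
  format: string;  // expected response structure
}

function buildPrompt(
  sections: PromptSections,
  vars: Record<string, string>
): string {
  const raw = [
    `## Role\n${sections.role}`,
    `## Context\n${sections.context}`,
    `## Task\n${sections.task}`,
    `## Output Format\n${sections.format}`,
  ].join("\n\n");
  // Substitute {{name}} variables; leave unknown names untouched.
  return raw.replace(/\{\{(\w+)\}\}/g, (m, name) => vars[name] ?? m);
}

const prompt = buildPrompt(
  {
    role: "You are an expert {{language}} code reviewer.",
    context: "Review code from a web application project.",
    task: "List bugs, performance issues, and security vulnerabilities.",
    format: "Markdown bullets with Severity, Line, Issue, and Fix.",
  },
  { language: "TypeScript" }
);
```

Keeping sections separate like this is what makes templates reusable: swap the `role` or `format` strings and the rest of the prompt stays intact.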
What is the difference between system prompts and user prompts?
System prompts set the AI's behavior, persona, and constraints for an entire conversation. They are processed before any user messages and establish rules the model follows throughout the session. User prompts are the individual messages containing questions, tasks, or data. In the OpenAI API, these map to the system and user roles in the messages array. Anthropic uses a dedicated system parameter separate from the messages. System prompts are ideal for persistent instructions like output format, tone, domain expertise, and safety guardrails. User prompts handle the specific request for each turn. Combining both effectively gives you consistent, well-structured AI responses.
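The split described above can be shown with OpenAI's messages-array shape. This is a structure-only sketch (no network call); the instruction strings are placeholders.

```typescript
// The system message sets persistent behavior for the whole session...
const messages = [
  {
    role: "system",
    content:
      "You are a concise technical writer. Always respond in markdown.",
  },
  // ...while each user message carries one turn's specific request.
  { role: "user", content: "Explain the difference between let and const." },
];
```

The model treats the system entry as standing instructions for every turn, so format rules and tone belong there; the per-turn question goes in the user message.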
How do I format prompts for the OpenAI and Anthropic APIs?
OpenAI's Chat Completions API expects a messages array with objects containing role (system, user, or assistant) and content fields. The system message goes first, followed by alternating user and assistant messages. Anthropic's Messages API uses a separate top-level system parameter for system instructions and a messages array with user and assistant roles only. Both APIs support multi-turn conversations by including message history. The prompt builder formats your content for either API, generating the correct JSON structure you can copy directly into your API call. Temperature, max_tokens, and model parameters are set alongside the messages.
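The two request shapes can be sketched side by side. The payload structures follow each API's documented format; the model IDs, temperature, and message strings are illustrative placeholders, and no request is actually sent.

```typescript
const system = "You are an expert code reviewer.";
const history = [
  { role: "user", content: "Review this function." },
  { role: "assistant", content: "Here are three issues I found." },
  { role: "user", content: "Suggest a fix for the second issue." },
];

// OpenAI Chat Completions: the system message leads the messages array.
const openaiBody = {
  model: "gpt-4o", // example model ID
  temperature: 0.2,
  max_tokens: 1024,
  messages: [{ role: "system", content: system }, ...history],
};

// Anthropic Messages API: `system` is a top-level parameter;
// `messages` holds only user and assistant turns.
const anthropicBody = {
  model: "claude-sonnet-4-20250514", // example model ID
  max_tokens: 1024,
  system,
  messages: history,
};
```

Note the asymmetry: moving the same prompt between providers means relocating the system text, not rewriting it.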
Related Generate Tools
Privacy Policy Generator
Generate a customized privacy policy with GDPR, CCPA, cookies, analytics, and payment sections
JSON Mock Data Generator
Generate realistic fake JSON data for API testing with 30+ field types, preset templates, and schema builder
README Generator
Generate professional GitHub README.md files with badges, installation steps, usage examples, and more
robots.txt Generator
Generate robots.txt files with crawl rules for Googlebot, Bingbot, AI bots, and more — presets included