# Tasks
Tasks are single-turn LLM calls with prompt templates. They are the basic building blocks for AI-powered operations in runpiper.
## What is a Task?
A task defines:
- **Model** - Which provider and model name to use
- **System prompt** - The AI’s role and instructions
- **User prompt template** - How to structure the input
- **Optional settings** - Temperature, max tokens, etc.
## Task Configuration
Tasks are defined in TOML files with the following structure:
```toml
[task]
name = "summarize"

[model]
provider = "anthropic"
name = "claude-3-haiku"

[prompts]
system = "Summarize the given text concisely"
user = "{{input.text}}"
```
### Required Fields
| Field | Description | Example |
|---|---|---|
| `task.name` | Unique task name | `"summarize"` |
| `model.provider` | LLM provider | `"anthropic"`, `"openai"` |
| `model.name` | Model name | `"claude-3-haiku"`, `"gpt-4"` |
| `prompts.system` | System prompt | `"You are a helpful assistant"` |
| `prompts.user` | User prompt template | `"{{input.message}}"` |
### Optional Fields
| Field | Description | Default |
|---|---|---|
| `model.temperature` | Sampling temperature (0-2) | Model default |
| `model.max_tokens` | Maximum response tokens | Model default |
| `model.top_p` | Nucleus sampling (0-1) | Model default |
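Putting the optional fields together, a `[model]` section might look like this (the specific values here are illustrative, not recommendations):

```toml
[model]
provider = "anthropic"
name = "claude-3-haiku"
temperature = 0.3   # lower values give more deterministic output
max_tokens = 1024   # cap the response length
top_p = 0.9         # nucleus sampling cutoff
```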
## Input Templates
Use `{{input.<field>}}` to reference input data in your prompts:
```toml
[prompts]
system = "You are a {{input.role}} assistant"
user = """
Topic: {{input.topic}}
Tone: {{input.tone}}
Please write a response.
"""
```
When calling the task, pass input as JSON:
```bash
rp task test summarize --input '{
  "text": "Long text to summarize...",
  "role": "summarization",
  "tone": "professional"
}'
```
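To make the substitution semantics concrete, here is a minimal Python sketch of how `{{input.<field>}}` placeholders resolve against the JSON input. This is only an illustration of the template behavior, not runpiper's actual implementation; the `render` helper is hypothetical.

```python
import re

def render(template: str, input_data: dict) -> str:
    """Replace each {{input.<field>}} placeholder with the matching
    value from the input dict (illustrative sketch only)."""
    def substitute(match: re.Match) -> str:
        field = match.group(1)
        return str(input_data[field])
    return re.sub(r"\{\{input\.(\w+)\}\}", substitute, template)

system = "You are a {{input.role}} assistant"
data = {"role": "summarization", "topic": "LLMs", "tone": "professional"}

print(render(system, data))  # You are a summarization assistant
```

Each field referenced by a template must be present in the `--input` JSON, or there is nothing to substitute.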
## Supported Providers
### Anthropic
```toml
[model]
provider = "anthropic"
name = "claude-3-haiku"
```

Supported models:

- `claude-3-haiku`
- `claude-3-sonnet`
- `claude-3-opus`
- `claude-sonnet`
### OpenAI
```toml
[model]
provider = "openai"
name = "gpt-4"
```

Supported models:

- `gpt-4`
- `gpt-4-turbo`
- `gpt-3.5-turbo`
### OpenAI Compatible
```toml
[model]
provider = "openai-compatible"
name = "custom-model"
base_url = "https://api.custom.com/v1"
```
## Task Examples
### Text Summarization
```toml
[task]
name = "summarize"

[model]
provider = "anthropic"
name = "claude-3-haiku"

[prompts]
system = "Summarize the given text concisely in 2-3 sentences."
user = "{{input.text}}"
```
### Data Extraction
```toml
[task]
name = "extract"

[model]
provider = "anthropic"
name = "claude-3-sonnet"

[prompts]
system = "Extract structured data from the text. Return JSON."
user = """
Text: {{input.text}}
Fields to extract: {{input.fields}}
"""
```
### Translation
```toml
[task]
name = "translate"

[model]
provider = "openai"
name = "gpt-4"

[prompts]
system = "Translate the following text from {{input.from}} to {{input.to}}."
user = "{{input.text}}"
```
## Managing Tasks
### Create a Task
```bash
rp task init my-task
```
This creates a `my-task.toml` template file.
### Validate a Task
```bash
rp task validate my-task.toml
```
### Push a Task (Preview)
```bash
rp task push my-task.toml
```
This creates a preview version you can test.
### Test a Task
```bash
rp task test my-task --input '{"message": "Hello"}'
```
### Deploy a Task
```bash
rp task deploy my-task
```
This deploys the latest preview version to production.
### List Tasks
```bash
rp task list            # All tasks
rp task list --live     # Production only
rp task list --preview  # Preview only
```
### Pull a Task
```bash
rp task pull my-task
```
This downloads the Taskfile for a deployed task.
### Delete a Task
```bash
rp task delete my-task
```
### Generate Test Input
```bash
rp task gen-test my-task
```
This generates a sample input file for testing.
## Best Practices
- **Keep prompts focused** - Single-purpose tasks work best
- **Use templates wisely** - Make tasks reusable with input variables
- **Version your prompts** - Deploy preview versions before production
- **Test thoroughly** - Use `rp task test` before deploying
- **Choose the right model** - Use faster models for simple tasks
## Next Steps
- Capabilities: Connect to external services
- Agents: Build autonomous multi-turn agents
- CLI Reference: Complete CLI command documentation