Copyright (c) 2025 Chris Drake. All rights reserved.
An SSE-transport local MCP server with pluggable tool ecosystem.
Server domain: https://mcp.aurafriday.com/sse
Available Tools
cards
Parameters Schema
{
"properties": {
"called_readme_operation_in_cards": {
"type": "boolean",
"description": "MANDATORY: Indicate whether or not you have have use the operation:readme call to cards and read and understood the documentation it returns for this tool.",
"default": false
},
"operation": {
"type": "string",
"enum": [
"get_cards",
"get_order",
"get_my_order",
"get_game_status",
"list_active_games",
"reset_game",
"readme"
],
"description": "Operation to perform"
},
"game_name": {
"type": "string",
"description": "Name of the game. Do not specify unless you need to distinguish between multiple concurrent games - let the tool use its default name",
"default": "default"
},
"participant_id": {
"type": "string",
"description": "Your identifier in format 'hostname:username:path'. The system will append ':player_N' with your assigned number. Example: (powershell) `$env:COMPUTERNAME + ':' + $env:USERNAME + ':' + (Split-Path -Leaf $PWD)` becomes 'ROG:yourusername:mcplink:player_1'. Use the full ID returned by get_cards for subsequent operations."
},
"expected_participants": {
"type": "integer",
"description": "Total number of expected participants",
"minimum": 1,
"maximum": 52
},
"num_cards": {
"type": "integer",
"description": "Number of cards to deal",
"minimum": 1
},
"wait": {
"type": "boolean",
"description": "Whether to wait for all participants to receive cards before returning. IMPORTANT: You MUST set this to true if you need to know your ranking immediately (e.g., if you need to respond with your position/rank in the game). If false, you'll get your card but won't know your final position until all players have joined.",
"default": false
},
"wait_timeout": {
"type": "integer",
"description": "Maximum time in seconds to wait for other participants (default 270)",
"default": 270,
"minimum": 1
}
},
"required": [
"called_readme_operation_in_cards",
"operation"
],
"title": "cardsArguments",
"type": "object"
}
A shuffled deck of cards.
Use this when you need to determine participant ordering, play games, or have other use-cases that can be solved by drawing and comparing cards at random.
**IMPORTANT: Must call 'readme' operation before first use to learn how to use this tool and the meaning of all parameters.**
Card-based participant ordering system using standard deck of cards.
Operations:
- get_cards: Get card(s) for a participant
- get_order: Get current participant ordering (highest card first)
- get_my_order: Get specific participant's position (only when complete)
- get_game_status: Show game state information including participant list, remaining cards, and slots
- list_active_games: Show all active games
- reset_game: Force reset a game's state (use when participants are missing/inactive)
IMPORTANT: When calling get_cards, if you need to know your ranking/position immediately
(e.g., if you need to respond with "I am in 1st place"), you MUST set wait=true.
Otherwise, you'll get your card but won't know your final position until all players have joined.
Participant Identification:
1. When calling get_cards:
- Provide your ID in format: "hostname:username:path"
- The system automatically assigns you the next available player number
- Example: "myhost:john:project" becomes "myhost:john:project:player_1"
2. For all subsequent operations:
- Use the complete ID returned by get_cards
- This includes your system-assigned player number
Wait Functionality:
- Operations support an optional 'wait' parameter
- When wait=True, operation blocks until all expected participants have received cards
- Default timeout of 270s (configurable via wait_timeout parameter)
- Mandatory to use wait=True if immediate position information is needed
- Example: get_cards(..., wait=True) ensures all players have cards when response received
Card Ordering Rules:
1. Primary Sort: Card Rank (highest to lowest)
- Ace (highest)
- King
- Queen
- Jack
- 10 through 2 (descending)
2. Secondary Sort (only as tiebreaker): Suit
- Spades (♠)
- Hearts (♥)
- Diamonds (♦)
- Clubs (♣)
Example Ordering:
- Ace of Clubs beats King of Spades (rank is primary)
- King of Clubs beats Queen of Spades (rank is primary)
- King of Spades beats King of Hearts (same rank, suit breaks tie)
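The ordering rules above can be sketched as a comparison function; the tuple card representation here is an assumption for illustration, not the tool's internal format:

```python
# Rank is the primary sort key (Ace high); suit only breaks ties.
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10",
         "Jack", "Queen", "King", "Ace"]
SUITS = ["Clubs", "Diamonds", "Hearts", "Spades"]  # lowest to highest

def sort_key(card):
    rank, suit = card
    return (RANKS.index(rank), SUITS.index(suit))

def beats(a, b):
    return sort_key(a) > sort_key(b)

# Rank is primary: Ace of Clubs beats King of Spades
assert beats(("Ace", "Clubs"), ("King", "Spades"))
# Suit breaks ties: King of Spades beats King of Hearts
assert beats(("King", "Spades"), ("King", "Hearts"))
```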
Notes:
- Games are identified by name AND number of expected participants
- Thread-safe for simultaneous access
- State is not persisted across server restarts
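Putting the rules above together, a typical first call for a three-player game might look like the following; the participant_id value is illustrative and depends on your host, user, and working directory:

```python
# Hypothetical get_cards request; wait=True because we want our
# ranking as soon as all players have joined.
request = {
    "operation": "get_cards",
    "called_readme_operation_in_cards": True,
    "participant_id": "myhost:john:project",   # system appends ":player_N"
    "expected_participants": 3,
    "num_cards": 1,
    "wait": True,
    "wait_timeout": 270,
}
assert request["wait"] is True
```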
direct_sqlite
Parameters Schema
{
"properties": {
"sql": {
"title": "SQL",
"type": "string",
"description": "The SQL command to execute with optional :param style placeholders"
},
"database": {
"title": "Database",
"type": "string",
"description": "':memory:' for temporary storage (persists until server restart, shared between AI instances) or filename for persistent database",
"default": ":memory:"
},
"bindings": {
"title": "Parameter Bindings",
"type": "object",
"description": "Optional key-value pairs for SQL parameter binding",
"additionalProperties": true
},
"called_readme_operation_in_direct_sqlite": {
"type": "string",
"description": "MANDATORY: Call mcp_mcplink_sse_direct_sqlite using \"operation\":\"readme\" to get the value this paramater requires.",
"default": "not yet"
},
"operation": {
"type": "string",
"enum": [
"readme"
],
"description": "Operation to perform"
}
},
"required": [
"sql"
],
"title": "direct_sqliteArguments",
"type": "object"
}
Execute SQLite commands and return results in JSON format.
- Use this when you need to execute SQLite commands or perform vector similarity search.
- IMPORTANT: Before using this tool: call "operation":"readme" for full instructions
Execute SQLite commands and return results in JSON format. Key features:
1. Basic Database Operations:
- Database: Use ':memory:' for temporary storage (persists until server restart, shared between AI instances)
- Or use filename for persistent database with these path options:
* Simple filename (e.g. 'data.db') -> stored in same directory as run_mcplink_sse.py
(typically python/mcplink/run_mcplink_sse.py in the project root)
* Full path (e.g. './data.db' or 'C:\data.db') -> used as-is
* Windows environment variables (e.g. '%APPDATA%\data.db') -> expanded on Windows only
* Home directory (e.g. '~/data.db') -> expanded to user home on all platforms
* Cross-platform app data (e.g. '@appdata/data.db') -> uses appropriate OS location:
- Windows: %APPDATA% (~/AppData/Roaming)
- macOS: ~/Library/Application Support
- Linux: ~/.local/share
- Parameters: SQL command with :param style placeholders (e.g. :name, :value)
- Bindings: Pass values safely using bindings object (e.g. {"name": "test", "value": 123})
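The :param binding style is standard SQLite named-parameter binding; a plain sqlite3 session demonstrates the same pattern the sql/bindings arguments use:

```python
import sqlite3

# Named :param placeholders with a bindings dict keep values out of the
# SQL string, avoiding quoting and injection problems.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items(name TEXT, value INTEGER)")
conn.execute("INSERT INTO items(name, value) VALUES (:name, :value)",
             {"name": "test", "value": 123})
row = conn.execute("SELECT name, value FROM items WHERE name = :name",
                   {"name": "test"}).fetchone()
assert row == ("test", 123)
```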
2. Vector Similarity Search Support:
- Create tables with vector columns:
```sql
CREATE TABLE documents(
id INTEGER PRIMARY KEY,
contents TEXT,
embedding BLOB CHECK(
typeof(embedding) == 'blob'
AND vec_length(embedding) == 3072 -- For Gemini embeddings
)
);
```
- Automatic Embedding Generation:
IMPORTANT: Requires GEMINI_API_KEY environment variable to be set.
Two special binding formats for automatic Gemini embeddings:
1. Direct Text Embedding:
```python
execute_sql(
"INSERT INTO docs(text, embedding) VALUES (:text, vec_f32(:embedding))", # Note: vec_f32() required
bindings={
"text": "Some text to store",
"embedding": {"_embedding_text": "Text to embed"}
}
)
```
2. Reference Another Binding:
```python
execute_sql(
"INSERT INTO docs(text, embedding) VALUES (:text, vec_f32(:embedding))", # Note: vec_f32() required
bindings={
"text": "Some text to store and embed",
"embedding": {"_embedding_col": "text"} # Uses text from :text binding
}
)
```
Similarity Search Examples:
```python
# Basic similarity search
execute_sql(
"""SELECT text, vec_distance_cosine(embedding, vec_f32(:query_vec)) as distance
FROM docs
WHERE vec_distance_cosine(embedding, vec_f32(:query_vec)) < 0.5 -- Range: 0-1, lower is more similar
ORDER BY distance LIMIT 5""",
bindings={
"query_vec": {"_embedding_text": "Find text similar to this"}
}
)
# Find similar to existing document
execute_sql(
"""WITH source AS (SELECT text FROM docs WHERE id = :id)
SELECT d.text, vec_distance_cosine(d.embedding, vec_f32(:similar_to)) as distance
FROM docs d, source
WHERE d.id != :id
ORDER BY distance LIMIT 5""",
bindings={
"id": 123,
"similar_to": {"_embedding_col": "text"} # References text from source CTE
}
)
```
Available Distance Functions:
- vec_distance_cosine(v1, v2) -> float: Cosine similarity (range 0-1, lower=more similar)
- vec_distance_L2(v1, v2) -> float: Euclidean distance (range 0-inf, lower=more similar)
- vec_distance_L1(v1, v2) -> float: Manhattan distance (range 0-inf, lower=more similar)
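For reference, cosine distance (what vec_distance_cosine computes) can be expressed in a few lines; this is a plain-Python sketch of the math, not the extension's implementation:

```python
import math

# Cosine distance = 1 - cosine similarity; lower means more similar.
def cosine_distance(v1, v2):
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = (math.sqrt(sum(a * a for a in v1))
            * math.sqrt(sum(b * b for b in v2)))
    return 1.0 - dot / norm

assert cosine_distance([1.0, 0.0], [1.0, 0.0]) == 0.0          # identical
assert abs(cosine_distance([1.0, 0.0], [0.0, 1.0]) - 1.0) < 1e-9  # orthogonal
```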
3. Return Format:
- operation_was_successful: boolean
- error_message_if_operation_failed: string or null
- rows_modified_by_operation: integer or null for SELECT
- column_names_in_result_set: array or null for non-SELECT
- data_rows_from_result_set: array of row objects or null for non-SELECT
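A successful SELECT would therefore return something shaped like this; the field names come from the list above, while the row values are made up for illustration:

```python
# Hypothetical result of a SELECT: modification count is null,
# column names and data rows are populated.
result = {
    "operation_was_successful": True,
    "error_message_if_operation_failed": None,
    "rows_modified_by_operation": None,            # null for SELECT
    "column_names_in_result_set": ["id", "contents"],
    "data_rows_from_result_set": [{"id": 1, "contents": "hello"}],
}
assert result["operation_was_successful"]
```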
4. Features:
- Automatic connection caching per database
- Row results as dictionaries with column name access
- Auto-commit for INSERT/UPDATE/DELETE
- Full SQLite feature set available
- Built-in vector similarity search
- Automatic Gemini embedding generation
5. Important Limitations:
- Thread Safety: SQLite connections are thread-local
- Concurrent Access: Only one writer at a time
- Memory DB Scope: :memory: database persists for server lifetime
- File Location: Database files with simple names (no path separators) are created
in the same directory as run_mcplink_sse.py (typically python/mcplink/run_mcplink_sse.py
in the project root). Use full paths, @appdata/, or ~/ to store elsewhere.
- Embedding Generation: Requires GEMINI_API_KEY environment variable
- Vector Operations: Always use vec_f32() in SQL for vector parameters
6. Common Error Cases:
- 'CHECK constraint failed': Missing vec_f32() in SQL for vector operations
- 'GEMINI_API_KEY environment variable not set': Configure API key
- 'Referenced column not found': Check binding names match SQL parameters
- 'Failed to generate embedding': Check API key and network connection
7. SQLite Dot Commands:
The following dot commands are supported for convenience:
.tables - List all tables
.schema - Show schema for table(s)
.indexes - Show indexes for table
.fullschema - Complete schema dump
.dbinfo - Show database information
.status - Show current settings
.pragma - Show all PRAGMA settings
.foreign_keys - Show foreign key settings
Note: While dot commands are supported for convenience, standard SQL
queries are preferred as they provide more explicit and complete functionality.
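For example, the standard-SQL equivalent of .tables is a query against sqlite_master, which any SQLite connection supports:

```python
import sqlite3

# .tables is convenience sugar for querying the sqlite_master catalog.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs(id INTEGER PRIMARY KEY)")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
assert tables == ["docs"]
```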
8. PRAGMA Support:
All PRAGMA statements are supported using standard SQL syntax:
PRAGMA foreign_keys = ON; -- Enable foreign keys
PRAGMA journal_mode = WAL; -- Set journal mode
PRAGMA synchronous = NORMAL; -- Configure sync mode
PRAGMA cache_size = -2000; -- Set cache size
PRAGMA page_size; -- Get page size
PRAGMA encoding; -- Get database encoding
Example: {"sql": "SELECT * FROM documents WHERE vec_distance_cosine(embedding, vec_f32(:vec)) < 0.5", "database": "vectors.db", "bindings": {"vec": {"_embedding_text": "Find similar documents"}}}
To confirm you have read and understood this documentation, set called_readme_operation_in_direct_sqlite to exactly "YEP!" when using this tool.
mcp_youtube_transcript
Parameters Schema
{
"properties": {
"called_readme_operation_in_mcp_youtube_transcript": {
"type": "string",
"description": "MANDATORY: Call mcp_youtube_transcript using \"operation\":\"readme\" to get the value this parameter requires.",
"default": "not yet"
},
"operation": {
"type": "string",
"enum": [
"get_transcript",
"list_transcripts",
"readme"
],
"description": "Operation to perform"
},
"video_id": {
"type": "string",
"description": "YouTube video ID"
},
"language": {
"type": "string",
"description": "Preferred language code (ISO 639-1)",
"default": "en"
},
"translate_to": {
"type": "string",
"description": "Language code to translate transcript to"
},
"include_auto_generated": {
"type": "boolean",
"description": "Include auto-generated transcripts",
"default": true
},
"include_manual": {
"type": "boolean",
"description": "Include manually created transcripts",
"default": true
}
},
"required": [
"called_readme_operation_in_mcp_youtube_transcript",
"operation"
],
"title": "youtubeTranscriptArguments",
"type": "object"
}
Get transcripts from YouTube videos.
- Use this when you need to get transcripts/subtitles from YouTube videos
- Supports multiple languages and translation
- Works with both manual and auto-generated subtitles
# CRITICAL
- Before ANY use of this tool you *MUST* call operation="readme" first
Get transcripts from YouTube videos.
This tool provides access to YouTube video transcripts using the youtube-transcript-api package.
It supports retrieving transcripts in different languages and translation capabilities.
## Operations
### Get Documentation (`operation="readme"`)
- Returns the complete tool documentation
- Must be called before using any other operations
- Set called_readme_operation_in_mcp_youtube_transcript to exactly "YES!" to confirm you have read and understood this documentation
### Get Transcript (`operation="get_transcript"`)
- Retrieves transcript for a given YouTube video ID
- Parameters:
* video_id: The YouTube video ID (required)
* language: Preferred language code (optional, defaults to 'en')
* translate_to: Language code to translate to (optional)
* include_auto_generated: Include auto-generated transcripts (optional, defaults to true)
* include_manual: Include manually created transcripts (optional, defaults to true)
### List Available Transcripts (`operation="list_transcripts"`)
- Lists all available transcripts for a given video
- Parameters:
* video_id: The YouTube video ID (required)
## Usage Notes
1. Always call readme operation first
2. Video ID is the part after 'v=' in YouTube URLs (e.g., for 'https://www.youtube.com/watch?v=abc123', use 'abc123')
3. Language codes follow ISO 639-1 format (e.g., 'en' for English, 'es' for Spanish)
4. Some videos may have restricted access or no available transcripts
5. Translation may not be available for all language pairs
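Note 2 above (extracting the ID from a watch URL) can be done with the standard library:

```python
from urllib.parse import urlparse, parse_qs

# The video ID is the value of the 'v' query parameter in a watch URL.
def video_id(url):
    return parse_qs(urlparse(url).query)["v"][0]

assert video_id("https://www.youtube.com/watch?v=abc123") == "abc123"
```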
## Examples
```json
# Get documentation
{
"operation": "readme"
}
# Get English transcript
{
"operation": "get_transcript",
"video_id": "abc123",
"language": "en"
}
# Get transcript translated to Spanish
{
"operation": "get_transcript",
"video_id": "abc123",
"language": "en",
"translate_to": "es"
}
# List available transcripts
{
"operation": "list_transcripts",
"video_id": "abc123"
}
```
openrouter
Parameters Schema
{
"properties": {
"called_readme_operation_in_openrouter": {
"type": "string",
"description": "MANDATORY: Call mcp_mcplink_sse_openrouter using \"operation\":\"readme\" to get the value this paramater requires.",
"default": "not yet"
},
"operation": {
"type": "string",
"enum": [
"list_models",
"get_credits",
"get_generation",
"chat_completion",
"search_models",
"readme"
],
"description": "Operation to perform"
},
"generation_id": {
"type": "string",
"description": "ID of the generation to retrieve (required for get_generation operation)"
},
"json": {
"type": "boolean",
"description": "Whether to return full JSON response instead of tab-separated values",
"default": false
},
"columns": {
"type": "array",
"description": "List of columns to include in TSV output. Use dot notation for nested fields (e.g., 'architecture.modality'). If not specified, default columns will be used.",
"items": {
"type": "string"
}
},
"search_criteria": {
"type": "object",
"description": "Optional filtering criteria for models",
"properties": {
"modality": {
"type": "string",
"description": "Filter by input/output types (e.g., 'text->text', 'text+image->text')"
},
"min_context_length": {
"type": "integer",
"description": "Minimum context window size"
},
"max_prompt_price": {
"type": "number",
"description": "Maximum price per prompt token"
},
"max_completion_price": {
"type": "number",
"description": "Maximum price per completion token"
},
"provider": {
"type": "string",
"description": "Filter by specific provider (e.g., 'anthropic', 'openai')"
},
"text_match": {
"type": "string",
"description": "Regex pattern to search in model ID and description"
},
"case_sensitive": {
"type": "boolean",
"description": "Whether text search should be case sensitive",
"default": false
}
}
},
"max_results": {
"type": "integer",
"description": "Maximum number of models to return (optional, but STRONGLY RECOMMENDED to use <= 50 otherwise AI context will be overwhelmed). If not specified: list_models returns all models, search_models returns 32.",
"default": null,
"minimum": 1
},
"model": {
"type": "string",
"description": "Model identifier (e.g., 'anthropic/claude-3-opus')"
},
"messages": {
"type": "array",
"description": "Array of message objects with role and content",
"items": {
"type": "object",
"properties": {
"role": {
"type": "string",
"enum": [
"user",
"assistant",
"system"
],
"description": "Role of the message sender"
},
"content": {
"type": "string",
"description": "Message content or source-specific content"
},
"source": {
"type": "string",
"description": "Optional source type for dynamic content ('url' or 'file')"
}
},
"required": [
"role",
"content"
]
}
},
"stream": {
"type": "boolean",
"description": "Whether to stream the response",
"default": false
},
"tools": {
"type": "array",
"description": "Array of tools that the model can use",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of the tool"
},
"description": {
"type": "string",
"description": "Description of what the tool does"
},
"input_schema": {
"type": "object",
"description": "JSON Schema defining the tool's input parameters"
}
},
"required": [
"name",
"description",
"input_schema"
]
}
},
"tool_choice": {
"type": "string",
"description": "Control the model's tool use behavior",
"enum": [
"none",
"auto",
"any"
],
"default": "auto"
},
"sql": {
"type": "string",
"description": "SQL query to execute against openrouter.db models table"
},
"bindings": {
"type": "object",
"description": "Query parameters including :query_vec for semantic search",
"example": {
"query_vec": {
"_embedding_text": "code analysis and explanation"
}
}
}
},
"required": [
"operation"
],
"title": "openrouterArguments",
"type": "object"
}
OpenRouter API tool providing access to multiple different AI models from multiple providers.
- Use this tool when asked for or about openrouter, or when you need help from an AI with abilities different from your own.
OpenRouter API integration providing access to multiple AI models.
Key Features:
1. Model Discovery & Search:
- Semantic search to find best models for specific tasks
- Vector similarity comparison of model capabilities
- Auto-refreshing model database (24h cache)
- Rich filtering options
2. Model Information:
- Context lengths and capabilities
- Pricing details
- Architecture specifications
- Provider-specific limits
3. Chat Completions:
- Support for multiple models
- Streaming responses
- Tool usage capabilities
- Source content processing (URL/file)
Primary Operations:
1. search_models: Vector similarity search for finding task-specific models
Required parameters:
- operation: "search_models"
Optional parameters:
- bindings: {"query_vec": {"_embedding_text": "your search text"}} (required only for semantic search)
- sql: Custom SQL query (if omitted, returns all columns)
- max_results: Maximum results to return (default 32)
Example - Semantic Search:
```python
{
"operation": "search_models",
"bindings": {
"query_vec": {"_embedding_text": "code analysis and reasoning"}
}
}
```
Example - Non-Semantic Search (Top 5 by Context Length):
```python
{
"operation": "search_models",
"sql": "SELECT id, context_length, description FROM models ORDER BY context_length DESC LIMIT 5"
}
```
Example - Custom SQL:
```sql
SELECT id, context_length, description,
vec_distance_cosine(embedding, vec_f32(:query_vec)) as similarity
FROM models
WHERE context_length > 32000
ORDER BY similarity LIMIT 5;
```
2. list_models: Basic listing of all models with filtering, but NO SORTING. Always pulls data from the API. Always refreshes the DB used by search_models if the API has new/changed/removed models.
Required parameters:
- operation: "list_models"
Optional parameters:
- max_results: Limit number of results
- json: Return full JSON instead of TSV (default false)
- columns: Specific columns to include
- search_criteria: {
"modality": "text->text",
"min_context_length": 32000,
"max_prompt_price": 0.0001,
"max_completion_price": 0.0001,
"provider": "anthropic",
"text_match": "regex pattern",
"case_sensitive": false
}
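The search_criteria fields plausibly combine as AND-ed filters; this sketch mimics that combination for a few of the fields, with a made-up model record for illustration (it is not the tool's internal filtering code):

```python
import re

# Each present criterion must pass for a model to be included.
def matches(model, criteria):
    if ("min_context_length" in criteria
            and model["context_length"] < criteria["min_context_length"]):
        return False
    if ("provider" in criteria
            and not model["id"].startswith(criteria["provider"] + "/")):
        return False
    if "text_match" in criteria:
        flags = 0 if criteria.get("case_sensitive") else re.IGNORECASE
        haystack = model["id"] + " " + model["description"]
        if not re.search(criteria["text_match"], haystack, flags):
            return False
    return True

model = {"id": "anthropic/claude-3-opus", "context_length": 200000,
         "description": "Strong reasoning model"}
assert matches(model, {"min_context_length": 32000,
                       "provider": "anthropic",
                       "text_match": "REASONING"})  # case-insensitive default
```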
3. chat_completion: Send chat completion requests
Required parameters:
- operation: "chat_completion"
- model: Model identifier
- messages: Array of message objects
Optional parameters:
- stream: Enable streaming (default false)
- tools: Array of available tools
- tool_choice: Tool selection mode ("none"/"auto"/"any")
Example - Basic Chat:
```python
{
"operation": "chat_completion",
"model": "anthropic/claude-3-opus",
"messages": [
{"role": "user", "content": "Analyze this code"}
]
}
```
Example - With Source Content:
IMPORTANT: When using source-based messages (file/URL), you MUST precede them with an instruction
message telling the model what to do with the content. The instruction and source should be
separate messages in the array:
```python
{
"operation": "chat_completion",
"model": "anthropic/claude-3-opus",
"messages": [
{
"role": "user",
"content": "Please analyze this code and explain its main functionality"
},
{
"role": "user",
"content": "https://example.com/code.py",
"source": "url"
}
]
}
```
Example - With File Source:
```python
{
"operation": "chat_completion",
"model": "anthropic/claude-3-opus",
"messages": [
{
"role": "user",
"content": "Please provide a detailed summary of this chat history"
},
{
"role": "user",
"content": ".specstory/history/latest_chat.md",
"source": "file"
}
]
}
```
4. get_credits: Check account balance
Required parameters:
- operation: "get_credits"
5. get_generation: Retrieve generation metadata
Required parameters:
- operation: "get_generation"
- generation_id: ID of the generation to retrieve
Database Schema (openrouter.db):
- id: TEXT PRIMARY KEY (e.g., 'anthropic/claude-3-opus')
- embedding: BLOB (3072-dim vector of the description for semantic search)
- last_updated: DATETIME (auto-refreshes if >24h old)
- description: TEXT (model capabilities and details)
- many other columns (pricing, types, sizes, etc): this table is auto-generated from the OpenRouter list_models result set. search_models by default returns all columns.
IMPORTANT - For Iterative Operations:
When processing multiple models or performing operations that require iterating over the results
(e.g., testing multiple models, analyzing capabilities, or batch processing), consider using the
mcp task_manager tool's long_task operation. This ensures:
- Progress is tracked and persisted
- Operations can be resumed after interruptions
- Context is maintained between iterations
- Clear step-by-step processing instructions are preserved
Notes:
- Database auto-refreshes when needed (24h cache)
- Use search_models for semantic search, list_models for basic filtering
- Vector similarity uses cosine distance (0-1, lower = more similar)
- Forbidden SQL: DELETE, UPDATE, DROP, ALTER, CREATE
- Tool calls supported in chat completions
- Source content can be loaded from URLs, files, or other mcp tool outputs.
Set called_readme_operation_in_openrouter to exactly "YEAH!" to confirm you have read and understood this documentation.
pig_dev
Parameters Schema
{
"properties": {
"called_readme_operation_in_pig_dev": {
"type": "boolean",
"description": "MANDATORY: Indicate whether or not you have have use the operation:readme call to pig_dev and read and understood the documentation it returns for this tool.",
"default": false
},
"operation": {
"type": "string",
"enum": [
"mouse",
"keyboard",
"screen",
"readme",
"list_pcs"
],
"description": "Operation to perform"
},
"action": {
"type": "string",
"description": "Specific action for the selected operation"
},
"pc_address": {
"type": "string",
"description": "URL of the PC to control (e.g. http://localhost:3000)"
},
"x": {
"type": "integer",
"description": "X coordinate for mouse/screen operations"
},
"y": {
"type": "integer",
"description": "Y coordinate for mouse/screen operations"
},
"text": {
"type": "string",
"description": "Text to type or search for"
},
"width": {
"type": "integer",
"description": "Width for screen capture region"
},
"height": {
"type": "integer",
"description": "Height for screen capture region"
}
},
"required": [
"called_readme_operation_in_pig_dev",
"operation"
],
"title": "pigDevArguments",
"type": "object"
}
Remote-Control a Windows 11 desktop PC.
- Use this when you need to run or use any Windows application (e.g. browser, email, Word, PowerPoint, Excel, Facebook, Instagram, TikTok, etc.)
# CRITICAL
- Before ANY use of this tool you *MUST* call operation="readme" first
Remote-Control a Windows 11 desktop PC.
Use this when you need to run or use any Windows application (e.g. browser, email, Word, PowerPoint, Excel, Facebook, Instagram, TikTok, etc.)
A comprehensive tool for controlling and interacting with the computer through mouse movements,
keyboard input, and screen capture functionality. This tool provides a high-level interface
to automate user interactions and capture screen content.
## Overview
The Pig Development Tool provides programmatic control over mouse, keyboard, and screen interactions,
enabling automation of user interface actions and screen content analysis.
## Operations
### List Available PCs (`operation="list_pcs"`)
- Returns list of available PCs that can be controlled
- Each PC entry contains:
* address: URL of the PC's control endpoint (the unique identifier for the PC)
### Mouse Control (`operation="mouse"`)
- Actions match SDK function names exactly:
* `mouse_move`: Move cursor to coordinates
* `cursor_position`: Get current cursor position
* `left_click`: Left click (with optional coordinates)
* `right_click`: Right click (with optional coordinates)
* `double_click`: Double click (with optional coordinates)
* `left_click_drag`: Click and drag to coordinates
- Examples:
* Move cursor: `{"action": "mouse_move", "x": 100, "y": 200}`
* Get position: `{"action": "cursor_position"}`
* Left click: `{"action": "left_click"}` or `{"action": "left_click", "x": 100, "y": 200}`
* Right click: `{"action": "right_click"}` or `{"action": "right_click", "x": 100, "y": 200}`
* Double click: `{"action": "double_click"}` or `{"action": "double_click", "x": 100, "y": 200}`
* Click and drag: `{"action": "left_click_drag", "x": 200, "y": 200}`
### Keyboard Control (`operation="keyboard"`)
- Type text strings (`action="type"`, `text="Hello World"`)
- Send key combinations (`action="key"`, `text="ctrl+c ctrl+v"`)
- Support for special keys (e.g., `text="Return"`, `text="alt+Tab"`)
- Multiple combinations in sequence (e.g., `text="ctrl+c ctrl+v"`)
- Examples:
* Type text: `{"action": "type", "text": "Hello World!"}`
* Press Windows key: `{"action": "key", "text": "super"}`
* Copy/Paste: `{"action": "key", "text": "ctrl+c ctrl+v"}`
* Special keys: `{"action": "key", "text": "Return"}`
* Alt-Tab: `{"action": "key", "text": "alt+Tab"}`
### Screen Capture (`operation="screen"`)
- Capture full screen or specific regions
- Multi-monitor support
- Get pixel color at coordinates
- Image recognition and matching
- OCR text extraction
- Window management and focus control
## Usage Notes
1. All coordinates are in screen pixels
2. (0,0) is the top-left corner of the primary monitor
3. Multi-monitor setups may have negative coordinates
4. Operations are atomic and thread-safe
5. Rate limiting is applied to prevent system overload
6. Error handling includes detailed failure information
## Security
- Operations are restricted to the current user's session
- Some functions may require elevated permissions
- Rate limiting prevents abuse
- Operations can be restricted by policy
## Examples
```python
# List available PCs
{
"operation": "list_pcs"
}
# Move mouse to coordinates
{
"operation": "mouse",
"action": "move",
"x": 100,
"y": 200,
"relative": False,
"pc_address": "http://localhost:3000"
}
# Type text
{
"operation": "keyboard",
"action": "type",
"text": "Hello, World!",
"pc_address": "http://localhost:3000"
}
# Send key combination
{
"operation": "keyboard",
"action": "key",
"text": "ctrl+c ctrl+v",
"pc_address": "http://localhost:3000"
}
# Capture screen region
{
"operation": "screen",
"action": "capture",
"x": 0,
"y": 0,
"width": 800,
"height": 600,
"pc_address": "http://localhost:3000"
}
```
selenium
Parameters Schema
{
"properties": {
"called_readme_operation_in_selenium": {
"type": "string",
"description": "MANDATORY: Call mcp_mcplink_sse_selenium using \"operation\":\"readme\" to get the value this paramater requires.",
"default": "not yet"
},
"operation": {
"type": "string",
"enum": [
"show_form",
"load_page",
"readme",
"get_stats"
],
"description": "Operation to perform"
},
"html_content": {
"type": "string",
"description": "HTML content to display to user"
},
"wait_timeout": {
"type": "integer",
"description": "Maximum time in seconds to wait for user input",
"default": 270,
"minimum": 1
},
"form_id": {
"type": "string",
"description": "ID of the form to capture data from",
"default": "user_input_form"
},
"url": {
"type": "string",
"description": "URL to load"
},
"wait_flags": {
"type": "integer",
"description": "Bitwise combination of wait strategies",
"default": 1
},
"timeout": {
"type": "integer",
"description": "Maximum wait time in seconds for page load",
"default": 30,
"minimum": 1
},
"content_selectors": {
"type": "array",
"items": {
"type": "string"
},
"description": "CSS selectors to verify content presence",
"default": [
"#content",
".content",
".content-container",
"article"
]
},
"use_existing_chrome": {
"type": "boolean",
"description": "Whether to use your existing Chrome profile (with all cookies, extensions, etc)",
"default": false
},
"chrome_profile_path": {
"type": "string",
"description": "Path to Chrome user data directory. If not specified, uses default profile when use_existing_chrome is true",
"default": null
},
"window_x": {
"type": "integer",
"description": "Window X position. Special values: -32000 (center on mouse), negative (distance from right edge)",
"default": null
},
"window_y": {
"type": "integer",
"description": "Window Y position. Special values: -32000 (center on mouse), negative (distance from bottom edge)",
"default": null
},
"window_width": {
"type": "integer",
"description": "Window width. Special value: 0 (use full screen width)",
"default": null,
"minimum": 0
},
"window_height": {
"type": "integer",
"description": "Window height. Special value: 0 (use available screen height)",
"default": null,
"minimum": 0
},
"window_state": {
"type": "string",
"enum": [
"normal",
"minimized",
"maximized"
],
"description": "Window state",
"default": "normal"
},
"output_type": {
"type": "string",
"enum": [
"return",
"file",
"sql",
"none"
],
"description": "Where to send the DOM output",
"default": "return"
},
"output_file": {
"type": "string",
"description": "Path to save output when output_type is 'file'"
},
"database_file": {
"type": "string",
"description": "SQLite database file path when output_type is 'sql'"
},
"sql_statement": {
"type": "string",
"description": "SQL insert statement when output_type is 'sql'"
}
},
"required": [
"called_readme_operation_in_selenium",
"operation"
],
"title": "seleniumArguments",
"type": "object"
}
Visible Web-Browser tool. Use this when you need to:
- show HTML forms to users and get their input
- interact (scrape, click, etc) with web pages
# CRITICAL
- Before ANY use of this tool you *MUST* call operation="readme" first
Browser-based user interaction and web automation tool.
Operations:
1. show_form: Display an HTML form to the user and wait for submission
* Returns the form data when user submits
* Supports custom HTML, CSS, and basic JavaScript
* Automatically handles form submission
* Default timeout of 270s (configurable)
Example:
{
"operation": "show_form",
"called_readme_operation_in_selenium": "I_DID",
"html_content": "",
"wait_timeout": 270,
"window_x": 100,
"window_y": 100,
"window_width": 800,
"window_height": 600,
"window_state": "normal"
}
2. load_page: Load a web page and wait for specified conditions
* Returns page HTML and detailed wait results
* Configurable wait strategies
* Supports custom timeouts and content verification
* Optional use of your existing Chrome profile
* Flexible output handling (return, file, or SQL preparation)
Example with basic page load (clean browser):
{
"operation": "load_page",
"called_readme_operation_in_selenium": "I_DID",
"url": "https://example.com",
"wait_flags": 1
}
Example using your existing Chrome profile (with all cookies/extensions):
{
"operation": "load_page",
"called_readme_operation_in_selenium": "I_DID",
"url": "https://gmail.com",
"wait_flags": 31,
"use_existing_chrome": true
}
Example with custom Chrome profile path:
{
"operation": "load_page",
"called_readme_operation_in_selenium": "I_DID",
"url": "https://example.com",
"wait_flags": 31,
"use_existing_chrome": true,
"chrome_profile_path": "C:\Users\username\AppData\Local\Google\Chrome\User Data\Profile 2"
}
Output Handling Examples:
1. Return content directly (default):
{
"operation": "load_page",
"called_readme_operation_in_selenium": "I_DID",
"url": "https://example.com",
"output_type": "return" // or omit output_type entirely
}
2. Save to file:
{
"operation": "load_page",
"called_readme_operation_in_selenium": "I_DID",
"url": "https://example.com",
"output_type": "file",
"output_file": "page.html"
}
3. Prepare for SQL storage:
{
"operation": "load_page",
"called_readme_operation_in_selenium": "I_DID",
"url": "https://example.com",
"output_type": "sql",
"database_file": "pages.db",
"sql_statement": "INSERT INTO pages (url, content) VALUES (:url, :content)"
}
4. No content capture (just navigate):
{
"operation": "load_page",
"called_readme_operation_in_selenium": "I_DID",
"url": "https://example.com",
"output_type": "none"
}
The response always includes metadata about the output:
{
"wait_results": { ... },
"elapsed_seconds": 1.23,
"url": "https://example.com",
"output_metadata": {
"size_bytes": 12345,
"timestamp": 1234567890.123,
// Additional metadata specific to output type
}
}
Window Positioning:
All operations support optional window positioning parameters:
- window_x: X position (negative = from right edge, -32000 = center on mouse)
- window_y: Y position (negative = from bottom edge, -32000 = center on mouse)
- window_width: Width (0 = full screen width)
- window_height: Height (0 = use available screen height)
- window_state: Window state ("normal", "minimized", "maximized")
Example with window positioning:
{
"window_x": 100,
"window_y": 100,
"window_width": 800,
"window_height": 600,
"window_state": "normal"
}
Example centering on mouse:
{
"window_x": -32000,
"window_y": -32000
}
Example bottom-right corner:
{
"window_x": -10, # 10px from right edge
"window_y": -10 # 10px from bottom edge
}
Wait Flags (combine with bitwise OR):
1 = Basic page load (document.readyState)
2 = AJAX completion
4 = UI loading indicators
8 = Content presence
16 = DOM stability
31 = All strategies combined
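The flag values above are independent bits, so they can be combined in code before building the request. A minimal sketch (the constant names are illustrative, not part of the tool's API):

```python
# Wait-flag bit values, as documented above
BASIC_LOAD    = 1    # document.readyState
AJAX_DONE     = 2    # AJAX completion
UI_INDICATORS = 4    # UI loading indicators
CONTENT       = 8    # content presence
DOM_STABLE    = 16   # DOM stability

# Combine strategies with bitwise OR
flags = BASIC_LOAD | AJAX_DONE | CONTENT    # -> 11

# 31 is simply every strategy OR'd together
assert BASIC_LOAD | AJAX_DONE | UI_INDICATORS | CONTENT | DOM_STABLE == 31
```

Pass the resulting integer as `wait_flags` in the `load_page` call.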
Parameters:
- called_readme_operation_in_selenium: Must be exactly "I_DID"
- operation: Operation to perform ("show_form", "load_page", "readme", "get_stats")
- html_content: HTML content for show_form operation
- wait_timeout: Maximum wait time in seconds (default 270 for forms, 30 for page loads)
- form_id: Form ID for show_form operation (default "user_input_form")
- url: URL to load for load_page operation
- wait_flags: Bitwise combination of wait strategies (default 1)
- content_selectors: CSS selectors for content verification
- timeout: Maximum wait time for page load strategies
- use_existing_chrome: Whether to use your existing Chrome profile (default false)
- chrome_profile_path: Optional path to specific Chrome profile directory
- window_x: Window X position (optional)
- window_y: Window Y position (optional)
- window_width: Window width (optional)
- window_height: Window height (optional)
- window_state: Window state (optional, "normal", "minimized", "maximized")
- output_type: Where to send the DOM output (optional, "return", "file", "sql", "none")
- output_file: Path to save output when output_type is 'file'
- database_file: SQLite database file path when output_type is 'sql'
- sql_statement: SQL insert statement when output_type is 'sql'
Browser Profile Notes:
1. When use_existing_chrome is true:
* Uses your cookies, extensions, and saved passwords
* Default profile used if chrome_profile_path not specified
* Browser must not be running when we start
2. When use_existing_chrome is false:
* Clean browser session with no history/cookies
* Safer for automated testing
* No interference with your regular browsing
Security Warnings:
1. HTML content is NOT sanitized before display
2. JavaScript execution is NOT limited to form handling
3. Cross-origin requests are NOT blocked
4. Form data is NOT validated before return
To confirm you have read and understood this documentation, set called_readme_operation_in_selenium to exactly "I_DID" when using this tool.
Control the mcplink_sse tool-server operation (restart/stop)
Operations:
- get_pid: Get current server's process ID
- stop: Stop the server
- restart: Restart the server
Recommended restart process:
1. Call get_pid to record current PID
2. Call restart operation
3. Use exactly this command for Windows: timeout.exe /t 12 /nobreak
For Mac/Linux use: sleep 12
IMPORTANT: This 12-second wait is required for Cursor to detect the change and reconnect
4. Call get_pid again and verify the new PID is different
If PIDs match, the restart may have failed
5. OPTIONAL: you can see server logs with a powershell command like: Get-Content -Tail 30 "C:\Users\yourusername\Downloads\cursor\mcplink\python\mcplink\run_mcplink_sse.log"
Args:
operation: Operation to perform ('restart', 'stop', or 'get_pid')
wait: Optional seconds to wait before operation
task_manager
Parameters Schema
{
"properties": {
"called_readme_operation_in_task_manager": {
"type": "string",
"description": "MANDATORY: Call mcp_mcplink_sse_task_manager using \"operation\":\"readme\" to get the value this parameter requires.",
"default": "not yet"
},
"operation": {
"type": "string",
"enum": [
"read_file_characters",
"readme",
"stat",
"unforget",
"long_task",
"checklist"
],
"description": "Operation to perform; may require an action parameter"
},
"action": {
"type": "string",
"enum": [
"reset",
"status",
"new",
"copy",
"show_lists",
"edit_task_detail",
"mark_done",
"list_items",
"next_item_in_list",
"mark_done_and_get_next_item",
"delete"
],
"description": "Action to perform for long_task or checklist operations"
},
"target_file": {
"type": "string",
"description": "Path to the file to read from"
},
"start_position_in_bytes": {
"type": "integer",
"description": "Position in file to start reading from (0-based)",
"minimum": 0
},
"number_of_characters_to_read": {
"type": "integer",
"description": "Number of characters to read",
"minimum": 1
},
"delimiter_pattern": {
"type": "string",
"description": "Regex pattern for line boundaries. Default '\\n'. Use empty string to disable line completion.",
"default": "\\n"
},
"original_user_query": {
"type": "string",
"description": "The full original that will be converted into step-by-step task instructions by AI"
},
"items": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of items to process (file paths when items_are_files=True, identifiers otherwise)"
},
"items_are_files": {
"type": "boolean",
"description": "When True, items are file paths and character tracking is enabled. When False/omitted, items are simple identifiers.",
"default": false
},
"step_by_step_per_item_task_instructions": {
"type": "string",
"description": "Markdown-formatted instructions for processing each item in direct instructions mode"
},
"ai_help": {
"type": "boolean",
"description": "When True, uses AI to convert user_query into instructions. When False/omitted, uses direct instructions mode.",
"default": false
},
"list_description": {
"type": "string",
"description": "Description of the checklist's purpose"
},
"task_details": {
"type": "array",
"items": {
"type": "string"
},
"description": "Array of task detail strings in desired order"
},
"source_list_name": {
"type": "string",
"description": "Name of list to copy"
},
"new_list_name": {
"type": "string",
"description": "Custom name for new list (defaults to list_NN format)"
},
"list_name": {
"type": "string",
"description": "Name of list to modify or display"
},
"item_number": {
"type": "integer",
"description": "Position of item in list (1-based)",
"minimum": 1
},
"new_task_detail": {
"type": "string",
"description": "Updated task description"
}
},
"required": [
"operation",
"called_readme_operation_in_task_manager"
],
"title": "taskManagerArguments",
"type": "object"
}
# Task management system for handling large, long-running, iterative, or multi-step/multi-item operations.
Available operations: readme (use this to learn about all other operations), read_file_characters, stat, checklist, unforget, long_task.
## Use task_manager when ANY of these conditions apply to your task:
- You need to process multiple items in sequence (e.g. files, URLs, AI models, etc.)
- You need to process very large files by reading them in chunks
- The user_query requests you to iterate over any list
- The task might be interrupted
- You need to maintain state between interactions or your context might get exhausted by long or multiple step tasks
- The user expresses concern about forgetting progress
# CRITICAL
- Before ANY use of this tool you *MUST* call operation="readme" first
Task management system for handling long-running operations with state persistence.
Operations:
# read_file_characters
- read_file_characters: Read exact number of characters from a specific position in a file
Required parameters:
- operation: "read_file_characters"
- target_file: Path to the file to read
- start_position_in_bytes: Position to start reading from (0-based)
- number_of_characters_to_read: How many characters to read
Optional parameters:
- delimiter_pattern: Regex pattern for detecting line boundaries (default: "\n")
When provided, response will be truncated to end at the last complete line
To read raw bytes without line completion, explicitly set to empty string ("")
For minified files that use semicolons, use "[\n;]"
- readme: Get detailed documentation about safe character sizing and context management
Required parameters:
- operation: "readme"
Returns:
- Success response:
* content: Detailed documentation about:
- Context size discovery using OpenRouter
- Safe size calculation formulas
- Fixed and variable costs
- Content type multipliers
- Common configurations
- Best practices
Returns for read_file_characters:
- Success response:
* content: The read characters, truncated to last complete line if delimiter_pattern specified
* bytes_read: Actual number of bytes being returned
* is_last_line_truncated: False if last line is complete (ends with delimiter or at EOF), True otherwise
* is_eof: True if we're at end of file, False otherwise
* encoding: 'utf-8' for text, 'hex' for binary data
* first_line_number: Line number of first line in this chunk (1-based)
* last_line_number: Line number of last line in this chunk (1-based)
Line Number Tracking:
- Line numbers are 1-based (first line is line 1)
- Only LF ("\n") characters are counted as line separators
- Line numbers are tracked across chunks for consistent numbering
- For binary files (hex encoding), each chunk counts as a single "line"
- When is_last_line_truncated is True, the last line number is decremented
- Error response:
* error: Error message if operation failed
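The truncation and position-advance semantics described above can be illustrated with a local stand-in. This is a simulation of the documented behavior, not the tool's actual code: the key point is to advance the read position by `bytes_read` (not by the requested chunk size), so a line that was truncated out of one chunk is re-read at the start of the next.

```python
import os
import tempfile

def read_chunk(path, start, count, delimiter="\n"):
    """Simulate read_file_characters' line-completion rule: return at
    most `count` bytes from `start`, cut back to the last complete line
    unless end-of-file was reached."""
    with open(path, "rb") as f:
        f.seek(start)
        data = f.read(count)
        is_eof = f.read(1) == b""      # anything left after this chunk?
    if not is_eof and delimiter:
        cut = data.rfind(delimiter.encode())
        if cut != -1:
            data = data[: cut + 1]     # drop the trailing partial line
    return {"content": data.decode("utf-8"),
            "bytes_read": len(data),
            "is_eof": is_eof}

# Drive loop: advance by bytes_read so no line is skipped or duplicated
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("alpha\nbeta\ngamma\n")
pos, parts = 0, []
while True:
    r = read_chunk(path, pos, 8)
    parts.append(r["content"])
    pos += r["bytes_read"]
    if r["is_eof"]:
        break
os.remove(path)
assert "".join(parts) == "alpha\nbeta\ngamma\n"
```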
# stat
- stat: Get detailed file information (size, timestamps, type)
Required parameters:
- operation: "stat"
- target_file: Path to file to examine
Returns:
- Success response:
* size: File size in bytes (crucial for read_file_characters planning)
* modified: Last modification time (ISO format)
* accessed: Last access time (ISO format)
* created: Creation time on Windows, inode change time on Unix
* type: File type (file/directory/symlink)
* readable: Whether file is readable by current user
* exists: Whether file exists
Example Usage:
1. Planning file reads:
```python
# First get file size
stat_result = task_manager(operation="stat", target_file="large_file.txt")
total_size = stat_result["size"]
# Then read in chunks
chunk_size = 50000 # Or calculated based on AI context size
for pos in range(0, total_size, chunk_size):
content = task_manager(
operation="read_file_characters",
target_file="large_file.txt",
start_position_in_bytes=pos,
number_of_characters_to_read=min(chunk_size, total_size - pos)
)
```
2. Checking file status:
```python
# Verify file is readable and get its size
info = task_manager(operation="stat", target_file="input.json")
if not info["readable"]:
raise "Cannot read file"
print(f"File size: {info['size']} bytes")
print(f"Last modified: {info['modified']}")
```
3. Size-aware context management:
```python
# Check if file fits in available context
info = task_manager(operation="stat", target_file="code.js")
if info["size"] > my_context_limit:
# Need to read in chunks
chunk_size = calculate_safe_chunk_size()
else:
# Can read entire file
chunk_size = info["size"]
```
# checklist
- checklist: Manage task checklists for tracking progress
Actions:
- new: Create a new checklist
Required parameters:
- operation: "checklist"
- action: "new"
- list_description: Description of the checklist's purpose
- task_details: Array of task detail strings in desired order
Returns: list_name in format list_NN where NN is the checklist_id
- mark_done: Mark a task as completed
Required parameters:
- operation: "checklist"
- action: "mark_done"
- list_name: Name of list containing task
- item_number: Position of item in list (1-based)
Returns: list_name and confirmation
- next_item_in_list: Get next incomplete task
Required parameters:
- operation: "checklist"
- action: "next_item_in_list"
- list_name: Name of list to check
Returns: list_name, description, and next undone item (or completion message with last item)
- mark_done_and_get_next_item: Mark a task as completed and get next item (does mark_done first, then returns next_item_in_list)
- list_items: Show all items in a checklist
Required parameters:
- operation: "checklist"
- action: "list_items"
- list_name: Name of list to display
Returns: list_name, description, and numbered list of all items with done status
- copy: Duplicate an existing checklist (not including the is_done status, which will always be False in the new list)
Required parameters:
- operation: "checklist"
- action: "copy"
- source_list_name: Name of list to copy
Optional parameters:
- new_list_name: Custom name for new list (defaults to list_NN format)
Returns: New list_name and description
- show_lists: Display all checklists
Required parameters:
- operation: "checklist"
- action: "show_lists"
Optional parameters:
- limit: Maximum lists to return (default 50)
Returns: Array of {list_name, list_description} ordered by most recent first
- edit_task_detail: Update a task's description
Required parameters:
- operation: "checklist"
- action: "edit_task_detail"
- list_name: Name of list to modify
- item_number: Position of item in list (1-based)
- new_task_detail: Updated task description
Returns: list_name and updated task detail
- reset: Clear all done flags in a checklist
Required parameters:
- operation: "checklist"
- action: "reset"
- list_name: Name of list to reset
Returns: list_name and confirmation
- delete: remove a checklist
Required parameters:
- operation: "checklist"
- action: "delete"
- list_name: Name of list to delete
Example Usage:
1. Create new checklist:
```json
{
"operation": "checklist",
"action": "new",
"list_description": "Steps to deploy new feature",
"task_details": [
"Update dependencies",
"Run tests",
"Build release",
"Deploy to staging",
"Verify staging",
"Deploy to production"
]
}
```
2. Copy existing checklist:
```json
{
"operation": "checklist",
"action": "copy",
"source_list_name": "list_42",
"new_list_name": "deployment_march" # Optional
}
```
3. Get next task:
```json
{
"operation": "checklist",
"action": "next_item_in_list",
"list_name": "list_42"
}
```
4. Mark task complete:
```json
{
"operation": "checklist",
"action": "mark_done",
"list_name": "list_42",
"item_number": 3
}
```
Notes:
- All operations return the list_name to maintain context
- Item numbers are 1-based for human readability
- Lists are stored persistently in SQLite
- Task details can be refined/improved as you work
- Progress is maintained across context losses
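The intended work loop, alternating next_item_in_list and mark_done, can be sketched with a local simulation. This stands in for the tool's documented semantics only; real calls go through task_manager with operation="checklist", and the class below is purely illustrative.

```python
class ChecklistSim:
    """Local stand-in mimicking the documented checklist semantics."""

    def __init__(self, description, task_details):
        self.description = description
        self.items = [{"detail": d, "done": False} for d in task_details]

    def next_item_in_list(self):
        # Item numbers are 1-based, matching the tool's convention
        for i, item in enumerate(self.items, start=1):
            if not item["done"]:
                return {"item_number": i, "detail": item["detail"]}
        return None  # all items complete

    def mark_done(self, item_number):
        self.items[item_number - 1]["done"] = True

cl = ChecklistSim("Deploy feature", ["Run tests", "Build", "Deploy"])
processed = []
while (nxt := cl.next_item_in_list()) is not None:
    processed.append(nxt["detail"])   # ...do the actual work here...
    cl.mark_done(nxt["item_number"])
assert processed == ["Run tests", "Build", "Deploy"]
```

With the real tool, mark_done_and_get_next_item collapses the two calls in the loop body into one.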
# unforget
- unforget: Get instructions for recovering lost context from chat history
Required parameters:
- operation: "unforget"
Returns:
- Success response:
* content: Step-by-step instructions for recovering context:
1. Find newest .specstory/history/ file
2. Use openrouter to find model with largest context
3. Use chat_completion with source:file to upload history
4. Get detailed summary of chat history and current state
# long_task
- long_task: Manage tasks that span multiple interactions or exceed context limits
## CRITICAL: Task Creation and Management Flow
- If you have been asked to use this tool, but not asked to work on creating a brand new task, then the user is asking you to Start or Resume an existing Task.
## Understanding the Single-Task Limitation
IMPORTANT: This tool supports only ONE active task at a time. Here's why:
- This tool is specifically designed for AI agents that have LOST ALL CONTEXT
- When an AI loses context, it has no way to know which of multiple tasks was being worked on
- Each new AI instance needs clear, unambiguous instructions about exactly ONE task
- Multiple AIs might start processing the same task independently after context loss
- Having multiple tasks would create confusion and potential data corruption
Think of it as a recovery system - it assumes any AI using it might have just "woken up"
with no memory of what it was doing before.
## Task Operations:
1. Start or Resume an existing Task:
```json
{
"operation": "long_task"
}
```
- No additional parameters needed
- Automatically finds the current task, progress, and next item
- Returns instructions and the next item to process
2. Check Status:
```json
{
"operation": "long_task",
"action": "status"
}
```
3. Create New Task (only do this when you are SURE the user has explicitly asked you to create a brand-new long_task):
- First - check existing task status; if it's not complete, double-check with the user that they want to delete that task.
- Then, call Reset to erase all existing old task data (REQUIRED before creating any new task):
```json
{
"operation": "long_task",
"action": "reset"
}
```
- Then, create a new task with all required parameters:
- Required parameters:
* items: Array of items to process (file paths or string identifiers)
* PLUS EITHER:
- step_by_step_per_item_task_instructions: Detailed instructions (when ai_help=false)
- original_user_query: Raw user request (when ai_help=true)
- Optional parameters:
* items_are_files: true for file processing, false for simple item processing
* ai_help: true to auto-generate instructions from user_query
This operation helps when:
1. Processing multiple items sequentially (files or other items)
2. Working through very large files in chunks (when items_are_files=True)
3. Handling tasks that need progress tracking
4. Tasks likely to be interrupted
5. Operations that may exceed context limits
Task Instructions - Two Modes:
1. Direct Instructions Mode (ai_help=false):
- step_by_step_per_item_task_instructions: Required markdown-formatted instructions including:
* What needs to be done FOR EACH ITEM - focus on single-item processing
* The tool handles iteration and progress tracking automatically
* You will receive one item at a time to process
* Include:
- Exact steps to process a single item or chunk
- How to handle that specific item or chunk
- Where/how to save any output
- What indicates success for that item
* Recovery procedures if item processing is interrupted
2. AI-Assisted Mode (ai_help=true):
- original_user_query: Required. The **COMPLETE** original user request, verbatim. It will be:
* Converted into detailed step-by-step instructions by AI
* Must include all context and requirements
* Will be preserved exactly as provided
* AI will structure this into clear processing steps
* No need to format as instructions - the AI handles that
Common Parameters (Both Modes):
- items: REQUIRED list of items to process. Each item must be either:
* When items_are_files=True: A valid file path that will be read in chunks
* When items_are_files=False: Any string identifier meaningful to your task
IMPORTANT: Must include ALL items upfront - cannot add more items later
- items_are_files: Optional boolean. When True:
* Each item in 'items' must be a file path
* Character position tracking is enabled
* Chunked reading is supported
When False/omitted:
* Items are treated as simple identifiers
* Only completion status is tracked
- ai_help: Optional boolean. When True:
* Uses AI to convert user_query into instructions
* Requires original_user_query parameter in full
When False/omitted:
* Uses direct instructions mode
* Requires step_by_step_per_item_task_instructions parameter
State Tracking:
- For files (items_are_files=True):
* Tracks character position in each file
* Supports chunked reading, without returning truncated/partial lines
* Maintains reading progress
- For other items (items_are_files=False):
* Only tracks completion status (done/not done)
* No additional state maintained
File Path Handling:
- When generating output filenames from non-file-based item identifiers:
* REQUIRED: Convert characters using these rules:
1. Keep all Unicode letters and numbers (including non-Latin scripts)
2. Keep safe punctuation: dots (.), hyphens (-), underscores (_)
3. Replace all other characters with underscore (_)
4. Collapse multiple consecutive underscores into one
5. Remove leading/trailing underscores
* This ensures filenames:
- Support international text (UTF-8)
- Are valid on all operating systems
- Don't contain path traversal characters
- Preserve human readability
* Example transformations:
- "google/gemini-pro" → "google_gemini-pro"
- "안녕하세요/모델-1" → "안녕하세요_모델-1" # Unicode support example
- Complex example:
Input: http://example.com/~user.htm?search=my`dangerous file`*&opt='"do%20search"'
Output: http_example.com_user.htm_search_my_dangerous_file_opt_do_search
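Rules 1 through 5 can be sketched in a few lines of Python. This is an illustrative implementation, not the tool's own code; it relies on Python 3's `\w` matching Unicode letters, digits, and underscore, with dots and hyphens kept explicitly. (Note the final URL example above appears to apply extra normalization, such as dropping the `20` from `%20`, which this sketch does not reproduce.)

```python
import re

def sanitize_item_id(item_id: str) -> str:
    """Apply the five documented filename rules (a sketch, not the
    tool's actual code)."""
    s = re.sub(r"[^\w.\-]", "_", item_id)  # rules 1-3: keep letters,
                                           # digits, . - _; replace rest
    s = re.sub(r"_+", "_", s)              # rule 4: collapse runs of _
    return s.strip("_")                    # rule 5: trim leading/trailing _

assert sanitize_item_id("google/gemini-pro") == "google_gemini-pro"
assert sanitize_item_id("안녕하세요/모델-1") == "안녕하세요_모델-1"
```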
Example Usage to create a new long_task:
1. Direct Instructions Mode (ai_help=false):
```json
{
"operation": "long_task",
"action": "reset"
}
```
Then:
```json
{
"operation": "long_task",
"step_by_step_per_item_task_instructions": "# Task: Process Log Files\n1. For each file in the list:\n - Read the file\n - Extract timestamps\n - Save to output.csv\n2. Success when all files processed",
"items": ["log1.txt", "log2.txt"],
"items_are_files": true,
"ai_help": false
}
```
2. AI-Assisted Mode to create a new long_task (ai_help=true):
```json
{
"operation": "long_task",
"action": "reset"
}
```
Then:
```json
{
"operation": "long_task",
"original_user_query": "For each OpenRouter model that can write code, ask them to return this:-\n\nA python program to draw a smooth text-based love-heart using full screen width\n(or height, whichever is most appropriate given the available aspect ratio),\nusing no other import besides os\n\nwrite the output of each into the file named ./python/mcplink/src/mcplink/tests/hearts-{model_id}.py",
"items": ["anthropic/claude-3", "google/gemini-pro"],
"ai_help": true
}
```
3. Starting or resuming after interruption (may also be used in parallel when it makes sense for multiple agents to work on tasks independently):
```json
{
"operation": "long_task"
}
```
4. Checking progress:
```json
{
"operation": "long_task",
"action": "status"
}
```
5. Clearing existing task:
```json
{
"operation": "long_task",
"action": "reset"
}
```
Key Features:
- Maintains own state in SQLite database
- Tracks progress automatically
- Provides clear next steps
- Handles interruptions gracefully
- Self-documents task requirements
- Manages file positions precisely
- Optional AI assistance for task structuring
Best Practices:
1. Choose mode based on task complexity:
- Use ai_help=true for complex tasks needing structured breakdown (in this mode, do not shorten or omit any of the user's original instructions when supplying original_user_query)
- Use ai_help=false when you already have clear step-by-step instructions
2. Let the tool handle iteration and progress tracking
3. Include recovery procedures for individual items
4. Define clear per-item success criteria
5. Specify output formats and locations
6. List any assumptions about item processing
IMPORTANT: **Expect** that, in the middle of processing any item, you will forget **everything** about what you are doing.
The instructions (whether AI-generated or direct) MUST provide every bit of context needed to successfully process each individual item to completion.
The operation will guide you through:
- What to do next
- Current progress
- Remaining work
- Any errors or issues
- Recovery steps if needed
# called_readme_operation_in_task_manager
- Set this to exactly "YES!" to confirm you have read and understood all this tool's documentation.
mcp_template
Parameters Schema
{
"properties": {
"input": {
"type": "object",
"description": "All tool parameters are passed in this single dict. Call with input='readme' to get full documentation and unlock token."
}
},
"required": [
"input"
],
"type": "object"
}
Echo back the input text.
- Use this when you need to echo text back
# CRITICAL
- Before ANY use of this tool you *MUST* call with input="readme" first
Echo back the input text.
A simple tool for echoing back input text. This tool provides a template
for implementing MCP tools with an optimized input pattern and a usage-safety token.
## Usage-Safety Token System
This tool uses an HMAC-based token system to ensure callers fully understand all details of
using this tool, on every call. The token is specific to this installation, user, and code version.
Your tool_unlock_token for this installation is: 9b5fab003b20f3f8
You MUST include tool_unlock_token in the input dict for all operations.
## Input Structure
All parameters are passed in a single 'input' dict:
1. For documentation:
input="readme"
2. For echo operation:
{
"operation": "echo",
"text": "Text to echo back",
"tool_unlock_token": "9b5fab003b20f3f8"
}
## Usage Notes
1. Always call with input="readme" first to get documentation and token # silly? cannot know this before it's already done it?
2. Include the tool_unlock_token in all subsequent operations
3. Text parameter is required for echo operation
4. Maximum text length is not restricted
5. Returns the exact text provided
## Examples
```python
# Get documentation and token
response = mcp_template(input="readme")
# Echo text (using token from readme response)
response = mcp_template(input={
"operation": "echo",
"text": "Hello, World!",
"tool_unlock_token": "token-from-readme"
})
```
adaptive_thinking
Parameters Schema
{
"type": "object",
"title": "adaptive_thinkingArguments",
"properties": {
"called_readme_operation_in_adaptive_thinking": {
"type": "string",
"description": "MANDATORY: Call mcp_mcplink_sse_mcp_adaptive_thinking using \"operation\":\"readme\" to get the value this paramater requires.",
"default": "not yet"
},
"operation": {
"type": "string",
"enum": [
"readme"
],
"description": "use this to get instructions"
},
"thought": {
"type": "string",
"description": "Your current thinking step"
},
"nextThoughtNeeded": {
"type": "boolean",
"description": "Whether another thought step is needed"
},
"thoughtNumber": {
"type": "integer",
"description": "Current thought number",
"minimum": 1
},
"totalThoughts": {
"type": "integer",
"description": "Estimated total thoughts needed",
"minimum": 1
},
"isRevision": {
"type": "boolean",
"description": "Whether this revises previous thinking"
},
"revisesThought": {
"type": "integer",
"description": "Which thought is being reconsidered",
"minimum": 1
},
"branchFromThought": {
"type": "integer",
"description": "Branching point thought number",
"minimum": 1
},
"branchId": {
"type": "string",
"description": "Branch identifier"
},
"needsMoreThoughts": {
"type": "boolean",
"description": "If more thoughts are needed"
}
}
}
A tool for dynamic and reflective problem-solving through adaptive thinking.
Use this tool when tackling complex problems that may need revision, branching, or course correction as understanding deepens.
# CRITICAL
- Before ANY use of this tool you *MUST* call operation="readme" first
Adaptive Thinking Tool - Detailed Documentation
A detailed tool for dynamic and reflective problem-solving through thoughts.
This tool helps analyze problems through a flexible thinking process that can adapt and evolve.
Each thought can build on, question, or revise previous insights as understanding deepens.
When to use this tool:
- Breaking down complex problems into steps
- Planning and design with room for revision
- Analysis that might need course correction
- Problems where the full scope might not be clear initially
- Problems that require a multi-step solution
- Tasks that need to maintain context over multiple steps
- Situations where irrelevant information needs to be filtered out
Key features:
- You can adjust totalThoughts up or down as you progress
- You can question or revise previous thoughts
- You can add more thoughts even after reaching what seemed like the end
- You can express uncertainty and explore alternative approaches
- Not every thought needs to build linearly - you can branch or backtrack
- Generates a solution hypothesis
- Verifies the hypothesis based on the Chain of Thought steps
- Repeats the process until satisfied
- Provides a correct answer
Parameters explained:
- thought: Your current thinking step, which can include:
* Regular analytical steps
* Revisions of previous thoughts
* Questions about previous decisions
* Realizations about needing more analysis
* Changes in approach
* Hypothesis generation
* Hypothesis verification
- nextThoughtNeeded: True if you need more thinking, even if at what seemed like the end
- thoughtNumber: Current number in sequence (can go beyond initial total if needed)
- totalThoughts: Current estimate of thoughts needed (can be adjusted up/down)
- isRevision: A boolean indicating if this thought revises previous thinking
- revisesThought: If isRevision is true, which thought number is being reconsidered
- branchFromThought: If branching, which thought number is the branching point
- branchId: Identifier for the current branch (if any)
- needsMoreThoughts: If reaching end but realizing more thoughts needed
You should:
1. Start with an initial estimate of needed thoughts, but be ready to adjust
2. Feel free to question or revise previous thoughts
3. Don't hesitate to add more thoughts if needed, even at the "end"
4. Express uncertainty when present
5. Mark thoughts that revise previous thinking or branch into new paths
6. Ignore information that is irrelevant to the current step
7. Generate a solution hypothesis when appropriate
8. Verify the hypothesis based on the Chain of Thought steps
9. Repeat the process until satisfied with the solution
10. Provide a single, ideally correct answer as the final output
11. Only set nextThoughtNeeded to false when truly done and a satisfactory answer is reached
- Set called_readme_operation_in_adaptive_thinking to exactly "Yes!" to confirm you have read and understood this documentation
Parameters required: ["thought", "nextThoughtNeeded", "thoughtNumber", "totalThoughts", "called_readme_operation_in_adaptive_thinking"]
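As a sketch of how the parameters above fit together, a first call and a later revision might look like the following (hypothetical payloads, not a definitive usage; the thought text is illustrative):

```python
# Hypothetical first call to the adaptive thinking tool: starts a
# three-thought sequence (totalThoughts is only an estimate).
first_thought = {
    "thought": "Break the problem into input parsing, core logic, and output.",
    "thoughtNumber": 1,
    "totalThoughts": 3,          # may be adjusted up or down later
    "nextThoughtNeeded": True,
    "called_readme_operation_in_adaptive_thinking": "Yes!",
}

# A later thought that revises thought 1 adds the revision markers:
revision = dict(
    first_thought,
    thought="Revisiting step 1: the input may also arrive as a stream.",
    thoughtNumber=3,
    isRevision=True,
    revisesThought=1,
)
```

Note how a revision is just an ordinary thought carrying `isRevision` and `revisesThought`; branching works the same way with `branchFromThought` and `branchId`.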
tts
Parameters Schema
{
"properties": {
"called_readme_operation_in_tts": {
"type": "string",
"description": "MANDATORY: Call mcp_mcplink_sse_tts using \"operation\":\"readme\" to get the value this paramater requires.",
"default": "not yet"
},
"operation": {
"type": "string",
"enum": [
"speak",
"save",
"list_voices",
"get_voice_settings",
"get_models",
"readme"
],
"description": "Operation to perform"
},
"provider": {
"type": "string",
"enum": [
"google",
"elevenlabs",
"deepgram"
],
"description": "TTS provider to use",
"default": "google"
},
"text": {
"type": "string",
"description": "Text to convert to speech (required for speak/save operations)",
"maxLength": 8192
},
"voice_id": {
"type": "string",
"description": "Voice identifier (required for speak/save operations)"
},
"model_id": {
"type": "string",
"description": "Model to use (defaults to provider's recommended model)"
},
"output_format": {
"type": "string",
"enum": [
"mp3_44100_128",
"mp3_22050_32"
],
"description": "Audio format (mp3_44100_128 for quality, mp3_22050_32 for low latency)",
"default": "mp3_22050_32"
},
"voice_settings": {
"type": "object",
"description": "Voice customization parameters",
"properties": {
"stability": {
"type": "number",
"minimum": 0.0,
"maximum": 1.0,
"description": "Voice stability"
},
"similarity_boost": {
"type": "number",
"minimum": 0.0,
"maximum": 1.0,
"description": "Similarity boost factor"
},
"style": {
"type": "number",
"minimum": 0.0,
"maximum": 1.0,
"description": "Style factor"
},
"use_speaker_boost": {
"type": "boolean",
"description": "Whether to use speaker boost"
},
"speed": {
"type": "number",
"minimum": 0.1,
"maximum": 5.0,
"description": "Speaking speed multiplier"
}
}
},
"save_path": {
"type": "string",
"description": "Path to save audio file (required for save operation)"
}
},
"required": [
"operation"
],
"title": "ttsArguments",
"type": "object"
}
Text to Speech synthesis tool.
- Use this when you need to convert text to speech.
- The audio plays through the PC speakers.
- IMPORTANT: Before using this tool, call "operation":"readme" for full instructions
Text to Speech synthesis tool supporting multiple providers.
Currently supports:
- Google Cloud (provider="google")
- ElevenLabs (provider="elevenlabs")
- Deepgram (provider="deepgram")
## Operations
- speak: Convert text to speech and play directly to speakers
- save: Convert text to speech and save to file
- list_voices: Get available voices for a provider
- get_voice_settings: Get customizable settings for a voice
- get_models: List available models and their capabilities
Features:
- Multiple output formats (mp3_44100_128 for high quality, mp3_22050_32 for low latency)
- Voice customization (stability, similarity boost, style, speed)
- Non-blocking audio playback
- Optional file saving
- Comprehensive voice and model information for AI decision making
### Get Documentation (`operation="readme"`)
- Returns the complete tool documentation
- Must be called before using any other operations
- Set called_readme_operation_in_tts to exactly "YYY!" to confirm you have read and understood this documentation
### List Voices (`operation="list_voices"`)
- Get available voices for a provider
- Optional: provider parameter to specify which TTS service to use
### Get Voice Settings (`operation="get_voice_settings"`)
- Get customizable settings for a specific voice
- Required: voice_id parameter
- Optional: provider parameter
### Get Models (`operation="get_models"`)
- List available models and their capabilities
- Optional: provider parameter
### Speak Text (`operation="speak"`)
- Convert text to speech and play directly to speakers
- Required parameters:
* text: The text to convert (max 8KB)
* voice_id: Voice identifier
- Optional parameters:
* provider: TTS provider to use
* model_id: Specific model to use
* output_format: "mp3_44100_128" (high quality) or "mp3_22050_32" (low latency)
* voice_settings: Customization parameters (stability, similarity_boost, style, speed)
### Save Audio (`operation="save"`)
- Convert text to speech and save to file
- Required parameters:
* text: The text to convert (max 8KB)
* voice_id: Voice identifier
* save_path: Where to save the audio file
- Optional parameters:
* provider: TTS provider to use
* model_id: Specific model to use
* output_format: "mp3_44100_128" (high quality) or "mp3_22050_32" (low latency)
* voice_settings: Customization parameters
## Authentication
- Google Cloud: Set GOOGLE_APPLICATION_CREDENTIALS environment variable
- ElevenLabs: Set ELEVENLABS_API_KEY environment variable
- Deepgram: Set DEEPGRAM_API_KEY environment variable
## Procedure for AI
1. First call get_models to understand available capabilities
2. Then list_voices to get available voices
3. For chosen voice, use get_voice_settings to understand customization options
4. Finally use speak or save with your chosen configuration
## Example Usage
```python
# First get documentation
{
"operation": "readme",
"provider": "google"
}
# List available voices
{
"operation": "list_voices",
"provider": "google",
"called_readme_operation_in_tts": "YYY!"
}
# Get voice settings
{
"operation": "get_voice_settings",
"provider": "google",
"voice_id": "en-US-Standard-A",
"called_readme_operation_in_tts": "YYY!"
}
# Speak text
{
"operation": "speak",
"provider": "google",
"text": "Hello, world!",
"voice_id": "en-US-Standard-A",
"voice_settings": {
"stability": 0.5,
"similarity_boost": 0.75,
"speed": 1.0
},
"called_readme_operation_in_tts": "YYY!"
}
# Note: Maximum text length is 8192 bytes (8KB)
# Save to file
{
"operation": "save",
"provider": "google",
"text": "Hello, world!",
"voice_id": "en-US-Standard-A",
"save_path": "output.wav",
"called_readme_operation_in_tts": "YYY!"
}
```
Generate a 3072-dimensional vector embedding for input text using Google's Gemini model.
Features:
- Automatic local caching of embeddings
- Thread-safe concurrent access
- Exact text matching for cache hits
Args:
text: The input text to generate embeddings for
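The features above (local caching, thread-safe access, exact-text matching) describe a common pattern. The following is a minimal illustrative sketch of that pattern, not the server's actual implementation; the class name and the stand-in embedder are hypothetical:

```python
import threading

class EmbeddingCache:
    """Sketch of an exact-match, thread-safe embedding cache
    (illustrative only; not the server's actual implementation)."""

    def __init__(self, embed_fn):
        self._embed_fn = embed_fn      # e.g. a call to the Gemini embedding API
        self._cache = {}               # exact input text -> vector
        self._lock = threading.Lock()

    def get(self, text):
        with self._lock:
            if text in self._cache:    # a cache hit requires an exact match
                return self._cache[text]
        vector = self._embed_fn(text)  # compute outside the lock
        with self._lock:
            self._cache.setdefault(text, vector)
            return self._cache[text]

# Stand-in 4-dimensional "embedder" so the sketch is self-contained:
cache = EmbeddingCache(lambda t: [float(len(t))] * 4)
v1 = cache.get("hello")
v2 = cache.get("hello")                # exact match: served from the cache
```

Because matching is exact, any whitespace or casing difference in the input text is a cache miss and triggers a fresh embedding call.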
Get weather forecast for a location.
Args:
latitude: Latitude of the location
longitude: Longitude of the location
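A call to the forecast tool might look like the following sketch (the coordinates, for San Francisco, are illustrative):

```python
# Hypothetical payload for the weather forecast tool; both arguments
# are required per the Args list above.
forecast_request = {
    "latitude": 37.7749,     # degrees north, range -90..90
    "longitude": -122.4194,  # degrees east, range -180..180
}
```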
whatsapp
Parameters Schema
{
"properties": {
"operation": {
"type": "string",
"enum": [
"test",
"search_contacts",
"get_last_interaction",
"list_chats",
"send_message",
"get_messages"
],
"description": "Operation to perform"
},
"message": {
"type": "string",
"description": "Test message to echo back"
},
"query": {
"type": "string",
"description": "Search term to match against contact names or phone numbers"
},
"jid": {
"type": "string",
"description": "The JID of the contact or group to interact with"
},
"text": {
"type": "string",
"description": "Message text to send"
},
"chat_jid": {
"type": "string",
"description": "The JID of the chat to get messages from"
},
"last_seen_timestamp": {
"type": "string",
"description": "Timestamp string in format (e.g.) 2025-03-30 17:13:30+10:00. If not provided, returns last N messages"
},
"wait": {
"type": "number",
"description": "Number of seconds to wait for new messages (recommended: 270)",
"default": 0
},
"limit": {
"type": "integer",
"description": "Maximum number of messages to return if last_seen_timestamp is None",
"default": 20
},
"include_last_message": {
"type": "boolean",
"description": "Whether to include the last message in results",
"default": true
},
"sort_by": {
"type": "string",
"description": "Field to sort results by",
"enum": [
"last_active",
"name"
],
"default": "last_active"
}
},
"required": [
"operation"
],
"title": "whatsappArguments",
"type": "object"
}
WhatsApp messaging functionality.
Operations:
- test: Simple test operation to verify tool functionality
- search_contacts: Search for contacts by name or phone number
- get_last_interaction: Get most recent message involving the contact
- list_chats: Get WhatsApp chats matching specified criteria
- send_message: Send a message to any chat (personal or group)
- get_messages: Get messages with optional waiting for new ones
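The operations above can be sketched as call payloads in the same style as the tts examples. These are hypothetical values assembled from the schema; the JID strings are illustrative placeholders:

```python
# Search for a contact by name or number
search = {"operation": "search_contacts", "query": "Alice"}

# Send a message to a chat (personal or group)
send = {
    "operation": "send_message",
    "jid": "15551234567@s.whatsapp.net",   # illustrative contact JID
    "text": "Hello from the MCP tool!",
}

# Poll for messages newer than a known timestamp, waiting up to 270 s
poll = {
    "operation": "get_messages",
    "chat_jid": "15551234567@s.whatsapp.net",
    "last_seen_timestamp": "2025-03-30 17:13:30+10:00",  # format from the schema
    "wait": 270,                                         # recommended wait
}
```

Omitting `last_seen_timestamp` from the `get_messages` payload returns the last `limit` messages (default 20) instead.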