A Python-based deterministic LLM control protocol that uses strict mathematical codes to prevent prompt injection and enable LLMs to function as reliable logic engines in software pipelines.
MathProtocol forces Large Language Models (LLMs) to communicate using predefined mathematical codes from three sets:
- Prime numbers (2-97): For TASKS
- Fibonacci numbers (1-89): For PARAMETERS
- Powers of 2 (2-4096): For RESPONSES and CONFIDENCE
This approach:
- β Prevents prompt injection attacks
- β Ensures deterministic behavior
- β Enables reliable integration in software pipelines
- β Provides clear validation and error handling
# Clone the repository
git clone https://github.com/scotlaclair/MathProtocol.git
cd MathProtocol
# Install (optional)
pip install -e .
# Or just use the mathprotocol.py file directlyfrom mathprotocol import MathProtocol, MockLLM
# Initialize
protocol = MathProtocol()
mock_llm = MockLLM()
# Validate input
input_str = "2-1 | This product is amazing!"
if protocol.validate_input(input_str):
# Process through LLM
response = mock_llm.process(input_str)
print(response) # Output: 2-128 (Positive, High Confidence)
# Parse response
parsed = protocol.parse_response(response)
print(parsed) # {'codes': [2, 128], 'payload': ''}# Run the built-in test suite
python mathprotocol.py
# Or with pytest (if installed)
pytest test_mathprotocol.py -v[TASK]-[PARAM] | [CONTEXT]
TASK: Prime number (2, 3, 5, 7, 11, 13, 17, 19, 23, 29)PARAM: Fibonacci number (1, 2, 3, 5, 8, 13, 21)CONTEXT: Text to process (optional, but pipe is mandatory if provided)
[RESPONSE]-[CONFIDENCE] | [PAYLOAD]
RESPONSE: Power of 2 indicating result typeCONFIDENCE: Power of 2 (128=High, 256=Med, 512=Low)PAYLOAD: Text output (only for generative tasks)
Classification Tasks (No payload):
2: Sentiment Analysis5: Language Detection13: Classification19: Content Moderation29: Readability Analysis
Generative Tasks (Requires payload):
3: Summarization7: Entity Extraction11: Question Answering17: Translation23: Keyword Extraction
| Code | Meaning |
|---|---|
| 1 | Brief |
| 2 | Medium |
| 3 | Detailed |
| 5 | JSON format |
| 8 | List format |
| 13 | Include confidence |
| 21 | Explain reasoning |
| Code | Meaning |
|---|---|
| 2 | Positive |
| 4 | Negative |
| 8 | Neutral |
| 16 | English |
| 32 | Spanish |
| 64 | French |
| 128 | High Confidence |
| 256 | Medium Confidence |
| 512 | Low Confidence |
| Code | Meaning |
|---|---|
| 1024 | Invalid Task |
| 2048 | Invalid Parameter |
| 4096 | Invalid Format |
input_str = "2-1 | This product is terrible!"
response = mock_llm.process(input_str)
# Output: "4-128" (Negative, High Confidence)input_str = "17-1 | Hello World"
response = mock_llm.process(input_str)
# Output: "32-128 | Hola Mundo" (Spanish, High Confidence)input_str = "5-1 | Bonjour le monde"
response = mock_llm.process(input_str)
# Output: "64-128" (French, High Confidence, No payload)input_str = "4-1 | Some text"
response = mock_llm.process(input_str)
# Output: "1024" (Invalid Task - 4 is not in prime set)Validates if input matches the protocol format and uses valid codes.
Parses a valid input into task, param, and context components.
Parses LLM response into codes and payload.
Validates if response matches protocol rules for the given task.
Simulates LLM behavior for testing without requiring an actual LLM API.
To use MathProtocol with a real LLM (OpenAI, Anthropic, etc.):
- Use the system prompt from
SYSTEM_PROMPT.md - Validate user inputs with
MathProtocol.validate_input() - Send validated inputs to your LLM
- Parse responses with
MathProtocol.parse_response() - Validate responses with
MathProtocol.validate_response()
Example with OpenAI:
import openai
from mathprotocol import MathProtocol
# Instantiate the client once and reuse it
# Requires openai>=1.0.0 and OPENAI_API_KEY environment variable
client = openai.OpenAI()
protocol = MathProtocol()
# Read system prompt
with open('SYSTEM_PROMPT.md', 'r') as f:
system_prompt = f.read()
def query_llm(user_input: str) -> str:
# Validate input
if not protocol.validate_input(user_input):
return "4096" # Invalid format
# Query LLM
response = client.chat.completions.create(
model="gpt-4",
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_input}
]
)
llm_output = response.choices[0].message.content
# Validate response
parsed = protocol.parse_input(user_input)
if parsed and protocol.validate_response(llm_output, parsed['task']):
return llm_output
else:
return "4096" # Protocol violationMathProtocol is designed to prevent prompt injection by:
- Enforcing strict mathematical validation on all inputs
- Requiring exact format matching
- Ignoring any instructions in the context field that contradict the protocol
- Using deterministic code mappings
See SECURITY.md for more details.
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
If you use MathProtocol in your research or project, please cite:
@software{mathprotocol2024,
author = {LaClair, Scott},
title = {MathProtocol: Deterministic LLM Control Protocol},
year = {2024},
url = {https://github.com/scotlaclair/MathProtocol}
}- π Documentation: See SYSTEM_PROMPT.md for detailed protocol specification
- π Issues: GitHub Issues
- π¬ Discussions: GitHub Discussions