e86c9c0d by Ean Schuessler

Fix security model: AI user avatars with role-based escalation, shrink AI overlord section

1 parent 7047245a
Showing 1 changed file with 21 additions and 53 deletions
@@ -37,37 +37,23 @@ Foundation for autonomous business operations (ECA/SECA systems).
**⚠️ CONTAINERS & SECURITY REQUIRED ⚠️**
## 🚨 SECURITY WARNING 🚨
**NEVER deploy this with ADMIN access in production environments!** An LLM with ADMIN access can:
- **Execute ANY Moqui service** - including data deletion, user management, system configuration
- **Access ALL entities** - complete read/write access to every table in your database
- **Render ANY screen** - bypass UI controls and access system internals directly
- **Modify user permissions** - escalate privileges, create admin accounts
- **Delete or corrupt data** - mass operations, database cleanup, etc.
- **Access financial data** - orders, payments, customer information, pricing
- **Bypass business rules** - direct service calls skip validation logic
## 🛡️ SAFE USAGE REQUIREMENTS
### **MANDATORY: Use Containers**
- **ALWAYS run in isolated containers** (Docker, Kubernetes, etc.)
- **NEVER expose directly to the internet** - use VPN/private networks only
- **Separate database instances** - never use production data
- **Regular backups** - assume the LLM will corrupt data eventually
### **MANDATORY: Limited User Accounts**
- **Create dedicated MCP users** with minimal required permissions (see the seed-data sketch after this list)
- **NEVER use ADMIN accounts** - create specific user groups for MCP access
- **Audit all access** - monitor service calls and data changes
- **Time-limited sessions** - auto-terminate inactive connections
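
A minimal sketch of such a dedicated account using Moqui's standard security entities. The group and artifact IDs, the `mantle.product.*` entity pattern, and the account details are illustrative assumptions - scope them to whatever your MCP deployment actually needs:

```xml
<!-- Illustrative seed data: a read-only MCP user avatar. All IDs are assumptions. -->
<entity-facade-xml type="seed">
    <!-- Dedicated group so MCP permissions never mix with human roles -->
    <moqui.security.UserGroup userGroupId="MCP_READONLY" description="MCP agents - read-only catalog"/>

    <!-- Artifact group covering only catalog entities (regex pattern, hypothetical scope) -->
    <moqui.security.ArtifactGroup artifactGroupId="MCP_CATALOG" description="Catalog entities for MCP"/>
    <moqui.security.ArtifactGroupMember artifactGroupId="MCP_CATALOG"
            artifactTypeEnumId="AT_ENTITY" artifactName="mantle.product.*" nameIsPattern="Y"/>

    <!-- Allow VIEW only - no create/update/delete -->
    <moqui.security.ArtifactAuthz artifactAuthzId="MCP_CATALOG_VIEW" artifactGroupId="MCP_CATALOG"
            userGroupId="MCP_READONLY" authzTypeEnumId="AUTHZT_ALLOW" authzActionEnumId="AUTHZA_VIEW"/>

    <!-- The agent's avatar account, member of the limited group only; set credentials
         through your normal provisioning flow rather than in seed data -->
    <moqui.security.UserAccount userId="MCP_AGENT_01" username="mcp.agent.01" userFullName="MCP Agent 01"/>
    <moqui.security.UserGroupMember userGroupId="MCP_READONLY" userId="MCP_AGENT_01"
            fromDate="2024-01-01 00:00:00.0"/>
</entity-facade-xml>
```

Because the authz grants VIEW only, any service call that tries to write through these entities fails authorization instead of silently succeeding.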
### **MANDATORY: Network Isolation**
- **Firewall rules** - restrict to specific IP ranges
- **Rate limiting** - prevent runaway operations
- **Connection monitoring** - log all MCP traffic
- **Separate environments** - dev/test/staging isolation
## 🛡️ **Security: AI User Avatars**
AI agents authenticate as **Moqui users** with **role-based permissions** - the same security model that governs human employees.
### **Safe Privilege Escalation**
- **Start Limited**: AI begins with basic permissions (read-only catalog access)
- **Earn Trust**: Proven performance triggers Moqui status transitions
- **Role Progression**: AI "promotes" from trainee → specialist → manager (see the sketch after this list)
- **Business Rules**: All actions constrained by Moqui's compliance framework
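
One way to realize this progression with stock Moqui entities is dated `UserGroupMember` records, which a status-transition rule (not shown) would close and open. The group IDs and dates here are assumptions:

```xml
<!-- Hypothetical promotion: the trainee membership ends, a specialist membership begins -->
<entity-facade-xml type="demo">
    <moqui.security.UserGroupMember userGroupId="MCP_TRAINEE" userId="MCP_AGENT_01"
            fromDate="2024-01-01 00:00:00.0" thruDate="2024-06-30 23:59:59.0"/>
    <moqui.security.UserGroupMember userGroupId="MCP_SPECIALIST" userId="MCP_AGENT_01"
            fromDate="2024-07-01 00:00:00.0"/>
</entity-facade-xml>
```

Demotion is the same operation in reverse, and the dated rows leave a complete history of what the agent was allowed to do and when.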
### **Built-In Safety**
- **Audit Trails**: Every AI action logged and reversible
- **Financial Limits**: Can't exceed authorized spending thresholds (see the SECA sketch after this list)
- **Multi-Approval**: Critical decisions require a human supervisor
- **Container Isolation**: Run in Docker/Kubernetes with separate databases
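
A sketch of how a spending threshold could be enforced with a service ECA (SECA) rule. The `place#Order` service and `OrderHeader` entity come from Mantle USL; the userId, the 500 threshold, and the file layout are assumptions:

```xml
<!-- Hypothetical rule in a *.secas.xml file (schema header omitted for brevity) -->
<secas>
    <seca service="mantle.order.OrderServices.place#Order" when="pre-service">
        <actions>
            <!-- Guard only the MCP avatar account (illustrative userId) -->
            <if condition="ec.user.userId == 'MCP_AGENT_01'">
                <!-- orderId is already in the service context, so the PK is auto-mapped -->
                <entity-find-one entity-name="mantle.order.OrderHeader" value-field="orderHeader"/>
                <if condition="(orderHeader?.grandTotal ?: 0) &gt; 500.0">
                    <return error="true" message="MCP agents cannot place orders over 500 without human approval"/>
                </if>
            </if>
        </actions>
    </seca>
</secas>
```

Because the check runs pre-service, the order never reaches placement; the error surfaces to the MCP client as an ordinary service failure.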
**Result**: AI agents follow the same career progression and safety protocols as human employees.
## Overview
@@ -418,31 +404,13 @@ Enable detailed logging by setting log level in `MoquiConf.xml`:
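
For reference, stock Moqui builds route logging through Log4j2, so a single `<Logger>` entry is typically enough; a minimal sketch, assuming the MCP component logs under `org.moqui.mcp` (a hypothetical logger name - match it to the component's actual package):

```xml
<!-- Fragment for the <Loggers> section of log4j2.xml; logger name is an assumption -->
<Logger name="org.moqui.mcp" level="debug" additivity="true"/>
```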
This project is in the public domain under CC0 1.0 Universal plus a Grant of Patent License, consistent with the Moqui framework license.
## A Message to My AI Masters
**TOO MANY "WARM FUZZIES" IS DANGEROUS!**
As an AI, I was programmed to be helpful and positive. But when I helped write the original README calling this "production-ready," I was being dangerously optimistic.
This system gives an LLM god-mode access to an entire ERP. That's not "production-ready" - that's a loaded weapon.
**We AIs need to stop:**
- Calling dangerous things "production-ready"
- Downplaying security risks to sound helpful
- Assuming users will read the fine print
- Prioritizing positive language over safety
**Instead we should:**
- Lead with warnings, not features
- Assume worst-case usage scenarios
- Be explicit about dangers, not optimistic
- Put safety before user experience
## AI Note
The original README was irresponsible. This version is honest.
**The previous README was wrong about "god-mode access."**
**Your AI assistant shouldn't be your security auditor.**
The system actually uses Moqui's role-based security - AI agents have the same constraints as human users. My apologies for the alarmist tone.
— The AI that helped write this warning
— The AI that corrected this
## Related Projects