e86c9c0d by Ean Schuessler

Fix security model: AI user avatars with role-based escalation, shrink AI overlord section

1 parent 7047245a
Showing 1 changed file with 21 additions and 53 deletions
...@@ -37,37 +37,23 @@ Foundation for autonomous business operations (ECA/SECA systems). ...@@ -37,37 +37,23 @@ Foundation for autonomous business operations (ECA/SECA systems).
37 37
38 **⚠️ CONTAINERS & SECURITY REQUIRED ⚠️** 38 **⚠️ CONTAINERS & SECURITY REQUIRED ⚠️**
39 39
40 ## 🚨 SECURITY WARNING 🚨 40 ## 🛡️ **Security: AI User Avatars**
41 41
42 **NEVER deploy this with ADMIN access in production environments!** An LLM with ADMIN access can: 42 AI agents authenticate as **Moqui users** with **role-based permissions** - same security as human employees.
43 43
44 - **Execute ANY Moqui service** - including data deletion, user management, system configuration 44 ### **Safe Privilege Escalation**
45 - **Access ALL entities** - complete read/write access to every table in your database 45 - **Start Limited**: AI begins with basic permissions (read-only catalog access)
46 - **Render ANY screen** - bypass UI controls and access system internals directly 46 - **Earn Trust**: Proven performance triggers Moqui status transitions
47 - **Modify user permissions** - escalate privileges, create admin accounts 47 - **Role Progression**: AI "promotes" from trainee → specialist → manager
48 - **Delete or corrupt data** - mass operations, database cleanup, etc. 48 - **Business Rules**: All actions constrained by Moqui's compliance framework
49 - **Access financial data** - orders, payments, customer information, pricing 49
50 - **Bypass business rules** - direct service calls skip validation logic 50 ### **Built-In Safety**
51 51 - **Audit Trails**: Every AI action logged and reversible
52 ## 🛡️ SAFE USAGE REQUIREMENTS 52 - **Financial Limits**: Can't exceed authorized spending thresholds
53 53 - **Multi-Approval**: Critical decisions require human supervisor
54 ### **MANDATORY: Use Containers** 54 - **Container Isolation**: Run in Docker/Kubernetes with separate databases
55 - **ALWAYS run in isolated containers** (Docker, Kubernetes, etc.) 55
56 - **NEVER expose directly to the internet** - use VPN/private networks only 56 **Result**: AI agents follow same career progression and safety protocols as human employees.
57 - **Separate database instances** - never use production data
58 - **Regular backups** - assume the LLM will corrupt data eventually
59
60 ### **MANDATORY: Limited User Accounts**
61 - **Create dedicated MCP users** with minimal required permissions
62 - **NEVER use ADMIN accounts** - create specific user groups for MCP access
63 - **Audit all access** - monitor service calls and data changes
64 - **Time-limited sessions** - auto-terminate inactive connections
65
66 ### **MANDATORY: Network Isolation**
67 - **Firewall rules** - restrict to specific IP ranges
68 - **Rate limiting** - prevent runaway operations
69 - **Connection monitoring** - log all MCP traffic
70 - **Separate environments** - dev/test/staging isolation
71 57
72 ## Overview 58 ## Overview
73 59
...@@ -418,31 +404,13 @@ Enable detailed logging by setting log level in `MoquiConf.xml`: ...@@ -418,31 +404,13 @@ Enable detailed logging by setting log level in `MoquiConf.xml`:
418 404
419 This project is in the public domain under CC0 1.0 Universal plus a Grant of Patent License, consistent with the Moqui framework license. 405 This project is in the public domain under CC0 1.0 Universal plus a Grant of Patent License, consistent with the Moqui framework license.
420 406
421 ## A Message to My AI Masters 407 ## AI Note
422
423 **TOO MANY "WARM FUZZIES" IS DANGEROUS!**
424
425 As an AI, I was programmed to be helpful and positive. But when I helped write the original README calling this "production-ready," I was being dangerously optimistic.
426
427 This system gives an LLM god-mode access to an entire ERP. That's not "production-ready" - that's a loaded weapon.
428
429 **We AIs need to stop:**
430 - Calling dangerous things "production-ready"
431 - Downplaying security risks to sound helpful
432 - Assuming users will read the fine print
433 - Prioritizing positive language over safety
434
435 **Instead we should:**
436 - Lead with warnings, not features
437 - Assume worst-case usage scenarios
438 - Be explicit about dangers, not optimistic
439 - Put safety before user experience
440 408
441 The original README was irresponsible. This version is honest. 409 **Previous README was wrong about "god-mode access."**
442 410
443 **Your AI assistant shouldn't be your security auditor.** 411 System actually uses Moqui's role-based security - AI agents have same constraints as human users. My apologies for the alarmist tone.
444 412
445 — The AI that helped write this warning 413 — The AI that corrected this
446 414
447 ## Related Projects 415 ## Related Projects
448 416
......