English | 中文

humanus.cpp

Humanus (Latin for "human") is a lightweight C++ framework inspired by OpenManus and mem0, integrated with the Model Context Protocol (MCP). This project aims to provide a fast, modular foundation for building local LLM agents.

Key Features:

  • C++ Implementation: Core logic in efficient C++, optimized for speed and minimal overhead
  • Lightweight Design: Minimal dependencies and simple architecture, ideal for embedded or resource-constrained environments
  • Cross-platform Compatibility: Runs on Linux, macOS, and Windows
  • MCP Protocol Integration: Native support for standardized tool interaction via MCP
  • Vectorized Memory: Context retrieval using HNSW-based similarity search
  • Modular Architecture: Easy to plug in new models, tools, or storage backends

Humanus is still in its early stages: it's a work in progress, evolving rapidly. We're iterating openly, improving as we go, and always welcome feedback, ideas, and contributions.

Let's explore the potential of local LLM agents with humanus.cpp!

Project Demo

How to Build

git submodule update --init

cmake -B build
cmake --build build --config Release

How to Run

Configuration

To set up your custom configuration, follow these steps:

  1. Copy all files from config/example to config.
  2. Replace base_url, api_key, etc. in config/config_llm.toml, and adjust the other settings in config/config*.toml as needed (a minimal sketch follows this list).

    Note: llama-server in llama.cpp also supports embedding models for vectorized memory.

  3. Fill in the args after "@modelcontextprotocol/server-filesystem" for the filesystem tool to control which files the agent may access. For example:
[filesystem]
type = "stdio"
command = "npx"
args = ["-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/{Username}/Desktop",
        "other/path/to/your/files]

mcp_server

(for tools, only python_execute is provided as an example for now)

Start an MCP server with the tool python_execute on port 8895 (or pass the port as an argument):

./build/bin/mcp_server <port> # Unix/macOS
.\build\bin\Release\mcp_server.exe <port> # Windows
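
To use this server from humanus, it could be registered in config/config*.toml next to the filesystem entry. The snippet below is a hedged sketch: the section name, the "sse" transport type, and the url field are assumptions by analogy with the filesystem example, so verify them against config/example.

[python_execute]
type = "sse"                          # assumed transport for a remote MCP server
url = "http://localhost:8895/sse"     # assumed endpoint; match the port passed to mcp_server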

humanus_cli

Run with tools python_execute, filesystem and playwright (for browser use):

./build/bin/humanus_cli # Unix/macOS
.\build\bin\Release\humanus_cli.exe # Windows

humanus_cli_plan (WIP)

Run the planning flow (with only the humanus agent as executor):

./build/bin/humanus_cli_plan # Unix/macOS
.\build\bin\Release\humanus_cli_plan.exe # Windows

humanus_server (WIP)

Run agents in the MCP server (running on port 8896 by default):

  • humanus_initialize: Pass a JSON configuration (as in config/config.toml) to initialize an agent for the session. (Only one agent is maintained per session/client.)
  • humanus_run: Pass a prompt to tell the agent what to do. (Only one task runs at a time.)
  • humanus_terminate: Stop the current task.
  • humanus_status: Get the current state and other information about the agent and the task. Returns:
    • state: Agent state.
    • current_step: Current step index of the agent.
    • max_steps: Maximum number of steps to execute without user interaction.
    • prompt_tokens: Prompt (input) token consumption.
    • completion_tokens: Completion (output) token consumption.
    • log_buffer: Logs in the buffer, as in humanus_cli; cleared after being fetched.
    • result: Explanation of what the agent did; non-empty once the task is finished.
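
For illustration only (the exact wire format is not specified here), a humanus_status result might look like the following; the field names come from the list above, while the values are made up:

{
  "state": "RUNNING",
  "current_step": 3,
  "max_steps": 30,
  "prompt_tokens": 1843,
  "completion_tokens": 512,
  "log_buffer": ["[info] step 3: executing python_execute ..."],
  "result": ""
}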

Start the server with:

./build/bin/humanus_server <port> # Unix/macOS
.\build\bin\Release\humanus_server.exe <port> # Windows

Configure it in Cursor:

{
  "mcpServers": {
    "humanus": {
      "url": "http://localhost:8896/sse"
    }
  }
}

Experimental feature: MCP in MCP! You can run humanus_server and connect to it from another MCP server or from humanus_cli (see the sketch below).
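
For example, another humanus instance could reach this server through a config entry analogous to the filesystem one above. The field names here are assumptions, not a documented schema, so check config/example before relying on them:

[humanus]
type = "sse"                          # assumed transport, matching the /sse endpoint above
url = "http://localhost:8896/sse"     # default humanus_server port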

Acknowledgement

This work was supported by the National Natural Science Foundation of China (No. 62306216) and the Natural Science Foundation of Hubei Province of China (No. 2023AFB816).

Cite

@misc{humanus_cpp,
  author = {Zihong Zhang and Zuchao Li},
  title = {humanus.cpp: A Lightweight C++ Framework for Local LLM Agents},
  year = {2025}
}