humanus.cpp

Humanus (meaning "human" in Latin) is a lightweight framework inspired by OpenManus and mem0, integrated with the Model Context Protocol (MCP). humanus.cpp enables more flexible tool choices, and provides a foundation for building powerful local LLM agents.

Let's embrace local LLM agents w/ humanus.cpp!

Project Demo

How to Build

git submodule update --init --recursive

cmake -B build
cmake --build build --config Release
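
Optionally, the build can be parallelized with CMake's generic jobs flag (available since CMake 3.12):

cmake --build build --config Release -j 8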

How to Run

Configuration

Switch to your own configuration first:

  1. Copy configuration files from config/example to config.
  2. Set base_url, api_key, etc. in config/config_llm.toml and adjust the other settings in config/config*.toml as needed (an illustrative config_llm.toml sketch follows this list).

    Note: llama-server in llama.cpp also supports embedding models.

  3. Fill in the args after "@modelcontextprotocol/server-filesystem" for filesystem to control which paths it may access. For example:
[filesystem]
type = "stdio"
command = "npx"
args = ["-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/{Username}/Desktop",
        "other/path/to/your/files]

mcp_server

(currently only python_execute is provided as an example tool)

Start an MCP server with the python_execute tool on port 8895 (or pass a different port as an argument):

./build/bin/mcp_server <port> # Unix/MacOS
.\build\bin\Release\mcp_server.exe  <port> # Windows
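
As a quick sanity check (assuming the server exposes an SSE endpoint at /sse, as humanus_server does below), a streaming connection can be opened with curl:

curl -N http://localhost:8895/sse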

humanus_cli

Run with tools python_execute, filesystem and playwright (for browser use):

./build/bin/humanus_cli # Unix/MacOS
.\build\bin\Release\humanus_cli.exe # Windows
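
For playwright, a hypothetical stdio entry in the same TOML shape as the filesystem example might look like this; the package name is an assumption, so check config/example for the actual entry:

[playwright]
type = "stdio"
command = "npx"
args = ["-y", "@playwright/mcp@latest"]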

humanus_cli_plan (WIP)

Run the planning flow (only the humanus agent acts as executor):

./build/bin/humanus_cli_plan # Unix/MacOS
.\build\bin\Release\humanus_cli_plan.exe # Windows

humanus_server (WIP)

Run agents inside an MCP server (listening on port 8896 by default):

  • humanus_initialze: Pass a JSON configuration (like the one in config/config.toml) to initialize an agent for the session. (Only one agent is maintained per session/client.)
  • humanus_run: Pass a prompt to tell the agent what to do. (Only one task runs at a time.)
  • humanus_terminate: Stop the current task.
  • humanus_status: Get the current state and other information about the agent and the task. Returns:
    • state: Agent state.
    • current_step: Current step index of the agent.
    • max_steps: Maximum number of steps to execute without user interaction.
    • prompt_tokens: Prompt (input) token consumption.
    • completion_tokens: Completion (output) token consumption.
    • log_buffer: Logs in the buffer, as in humanus_cli; cleared after being fetched.
    • result: A summary of what the agent did. Non-empty once the task has finished.
./build/bin/humanus_server <port> # Unix/MacOS
.\build\bin\Release\humanus_server.exe <port> # Windows
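
For illustration, a humanus_status response assembled from the fields above might look like this (the values and exact serialization are hypothetical):

{
  "state": "RUNNING",
  "current_step": 3,
  "max_steps": 30,
  "prompt_tokens": 10240,
  "completion_tokens": 512,
  "log_buffer": ["[info] Executing step 3/30 ..."],
  "result": ""
}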

Configure it in Cursor:

{
  "mcpServers": {
    "humanus": {
      "url": "http://localhost:8896/sse"
    }
  }
}

What if humanus itself were added to mcp_servers? It might be interesting.

Acknowledgement

Cite

@misc{humanus_cpp,
  author = {Zihong Zhang and Zuchao Li},
  title = {humanus.cpp: A Lightweight C++ Framework for Local LLM Agents},
  year = {2025}
}