# humanus.cpp
Humanus (meaning "human" in Latin) is a lightweight framework inspired by OpenManus and mem0, integrated with the Model Context Protocol (MCP). humanus.cpp enables more flexible tool choices and provides a foundation for building powerful local LLM agents.

Let's embrace local LLM agents with humanus.cpp!
## Project Demo
## How to Build

```bash
git submodule update --init --recursive

cmake -B build
cmake --build build --config Release
```
## How to Run

### Configuration
Switch to your own configuration first:

- Copy the configuration files from `config/example` to `config`.
- Replace `base_url`, `api_key`, etc. in `config/config_llm.toml` and other settings in `config/config*.toml` according to your needs.

  > Note: `llama-server` in llama.cpp also supports embedding models.

- Fill in `args` after `"@modelcontextprotocol/server-filesystem"` for `filesystem` to control access to files. For example:
```toml
[filesystem]
type = "stdio"
command = "npx"
args = ["-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/{Username}/Desktop",
        "other/path/to/your/files"]
```
### `mcp_server` (for tools, currently only `python_execute` as an example)

Start an MCP server with the tool `python_execute` on port 8895 (or pass the port as an argument):
```bash
./build/bin/mcp_server <port>                  # Unix/MacOS
.\build\bin\Release\mcp_server.exe <port>      # Windows
```
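If you want the agent to reach this running server over SSE instead of launching the tool over stdio, a config entry along these lines may work; the SSE field names below are assumptions inferred from the stdio example above, not taken from this README:

```toml
# Hypothetical SSE entry -- field names are assumptions
[python_execute]
type = "sse"
host = "localhost"
port = 8895
```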
### `humanus_cli`

Run with the tools `python_execute`, `filesystem`, and `playwright` (for browser use):
```bash
./build/bin/humanus_cli                  # Unix/MacOS
.\build\bin\Release\humanus_cli.exe      # Windows
```
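The `playwright` tool is another MCP server launched over stdio. Below is a sketch of its config entry, mirroring the `filesystem` example above; the package name `@playwright/mcp` and the `--headless` flag are assumptions, not taken from this README:

```toml
# Hypothetical entry for the playwright MCP server
[playwright]
type = "stdio"
command = "npx"
args = ["-y", "@playwright/mcp@latest", "--headless"]
```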
### `humanus_cli_plan` (WIP)

Run the planning flow (only the agent `humanus` as executor):
```bash
./build/bin/humanus_cli_plan                  # Unix/MacOS
.\build\bin\Release\humanus_cli_plan.exe      # Windows
```
### `humanus_server` (WIP)

Run agents in the MCP server (running on port 8896 by default):
- `humanus_initialize`: Pass a JSON configuration (like in `config/config.toml`) to initialize an agent for a session. (Only one agent is maintained per session/client.)
- `humanus_run`: Pass a `prompt` to tell the agent what to do. (Only one task at a time.)
- `humanus_terminate`: Stop the current task.
- `humanus_status`: Get the current state and other information about the agent and the task. Returns:
  - `state`: Agent state.
  - `current_step`: Current step index of the agent.
  - `max_steps`: Maximum number of steps to execute without interaction with the user.
  - `prompt_tokens`: Prompt (input) token consumption.
  - `completion_tokens`: Completion (output) token consumption.
  - `log_buffer`: Logs in the buffer, like `humanus_cli`. Cleared after being fetched.
  - `result`: Explanation of what the agent did. Not empty if the task is finished.
```bash
./build/bin/humanus_server <port>                  # Unix/MacOS
.\build\bin\Release\humanus_server.exe <port>      # Windows
```
Configure it in Cursor:

```json
{
  "mcpServers": {
    "humanus": {
      "url": "http://localhost:8896/sse"
    }
  }
}
```
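Once connected, a client invokes the tools above through standard MCP `tools/call` requests. A sketch of what a `humanus_run` call might look like on the wire; the argument schema is assumed from the `prompt` parameter mentioned above:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "humanus_run",
    "arguments": {
      "prompt": "List the files on my Desktop and summarize them"
    }
  }
}
```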
What if we add `humanus` to `mcp_servers`? It might be interesting.
## Acknowledgement
## Cite
```bibtex
@misc{humanus_cpp,
  author = {Zihong Zhang and Zuchao Li},
  title  = {humanus.cpp: A Lightweight C++ Framework for Local LLM Agents},
  year   = {2025}
}
```