# humanus.cpp
Humanus (meaning "human" in Latin) is a lightweight C++ framework inspired by OpenManus and mem0, integrated with the Model Context Protocol (MCP).
## Key Features
- C++ Implementation: Core functionality written in efficient C++ for optimal performance and resource utilization
- Lightweight Design: Minimalist architecture with minimal dependencies, suitable for resource-constrained environments
- Cross-platform Compatibility: Full support for Unix, macOS, and Windows systems
- MCP Protocol Integration: Seamless integration with Model Context Protocol for standardized tool interactions
- Vectorized Memory Storage: Efficient similarity search based on HNSW algorithm for intelligent context retrieval
- Modular Architecture: Easy to extend and customize, supporting various LLM models and tool integrations
Let's embrace local LLM agents with humanus.cpp!
## Project Demo
## How to Build

```bash
git submodule update --init
cmake -B build
cmake --build build --config Release
```
## How to Run
### Configuration
To set up your custom configuration, follow these steps:
- Copy all files from `config/example` to `config`.
- Replace `base_url`, `api_key`, etc. in `config/config_llm.toml` and the other settings in `config/config*.toml` according to your needs (a sketch of `config/config_llm.toml` follows the example below). Note: `llama-server` in llama.cpp also supports embedding models for vectorized memory.
- Fill in `args` after `"@modelcontextprotocol/server-filesystem"` for `filesystem` to control which files can be accessed. For example:
```toml
[filesystem]
type = "stdio"
command = "npx"
args = ["-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/{Username}/Desktop",
        "other/path/to/your/files"]
```
### mcp_server

(for tools; currently only `python_execute` as an example)

Start an MCP server with the `python_execute` tool on port 8895 (or pass a different port as an argument):
```bash
./build/bin/mcp_server <port>             # Unix/MacOS
.\build\bin\Release\mcp_server.exe <port> # Windows
```
### humanus_cli

Run with the tools `python_execute`, `filesystem` and `playwright` (for browser use):
```bash
./build/bin/humanus_cli                # Unix/MacOS
.\build\bin\Release\humanus_cli.exe    # Windows
```
### humanus_cli_plan (WIP)

Run the planning flow (with only the `humanus` agent as executor):
```bash
./build/bin/humanus_cli_plan                # Unix/MacOS
.\build\bin\Release\humanus_cli_plan.exe    # Windows
```
### humanus_server (WIP)

Run agents as an MCP server (listening on port 8896 by default), which exposes the following tools:

- `humanus_initialze`: Pass a JSON configuration (like the one in `config/config.toml`) to initialize an agent for a session. (Only one agent is maintained per session/client.)
- `humanus_run`: Pass a `prompt` to tell the agent what to do. (Only one task at a time.)
- `humanus_terminate`: Stop the current task.
- `humanus_status`: Get the current state and other information about the agent and the task. Returns:
  - `state`: Agent state.
  - `current_step`: Current step index of the agent.
  - `max_steps`: Maximum number of steps to execute without user interaction.
  - `prompt_tokens`: Prompt (input) token consumption.
  - `completion_tokens`: Completion (output) token consumption.
  - `log_buffer`: Logs in the buffer, like `humanus_cli`. Cleared after being fetched.
  - `result`: Explains what the agent did. Not empty when the task is finished.
```bash
./build/bin/humanus_server <port>             # Unix/MacOS
.\build\bin\Release\humanus_server.exe <port> # Windows
```
Configure it in Cursor:
```json
{
  "mcpServers": {
    "humanus": {
      "url": "http://localhost:8896/sse"
    }
  }
}
```
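Any MCP client can drive the agent the same way. As a sketch, a `tools/call` request for `humanus_run` over the MCP protocol might look like the following; the `prompt` argument is the only one documented above, the prompt text is just an illustration, and your client library handles the SSE transport framing:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "humanus_run",
    "arguments": {
      "prompt": "List the files on my Desktop and summarize what they are."
    }
  }
}
```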
What if we add `humanus` itself to `mcp_servers`? It might be interesting.
## Acknowledgement
## Cite
```bibtex
@misc{humanus_cpp,
  author = {Zihong Zhang and Zuchao Li},
  title  = {humanus.cpp: A Lightweight C++ Framework for Local LLM Agents},
  year   = {2025}
}
```