**Humanus** (Latin for "human") is a **lightweight C++ framework** inspired by [OpenManus](https://github.com/mannaandpoem/OpenManus) and [mem0](https://github.com/mem0ai/mem0), integrated with the Model Context Protocol (MCP). This project aims to provide a fast, modular foundation for building local LLM agents.
- **C++ Implementation**: Core logic in efficient C++, optimized for speed and minimal overhead
- **Lightweight Design**: Minimal dependencies and simple architecture, ideal for embedded or resource-constrained environments
- **Cross-platform Compatibility**: Runs on Linux, macOS, and Windows
- **MCP Protocol Integration**: Native support for standardized tool interaction via MCP
- **Vectorized Memory**: Context retrieval using HNSW-based similarity search
- **Modular Architecture**: Easy to plug in new models, tools, or storage backends
**Humanus is still in its early stages** — it's a work in progress, evolving rapidly. We’re iterating openly, improving as we go, and always welcome feedback, ideas, and contributions.
Run agents as an MCP server (listening on port 8896 by default):
- `humanus_initialze`: Pass a JSON configuration (like the one in `config/config.toml`) to initialize an agent for the session. (Only one agent is maintained per session/client.)
- `humanus_run`: Pass a `prompt` telling the agent what to do. (Only one task runs at a time.)
- `humanus_terminate`: Stop the current task.
- `humanus_status`: Get the current state and other information about the agent and the task. Returns:
  - `state`: Agent state.
  - `current_step`: Current step index of the agent.
  - `max_steps`: Maximum number of steps to execute without user interaction.
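Since these tools are exposed over MCP, a client invokes them with standard JSON-RPC 2.0 `tools/call` requests. A minimal sketch of starting a task (the prompt text is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "humanus_run",
    "arguments": { "prompt": "Summarize the files in the current directory" }
  }
}
```

While the task runs, the client can poll with a `tools/call` request for `humanus_status` to read `state`, `current_step`, and `max_steps`, or cancel it via `humanus_terminate`.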
This work was supported by the National Natural Science Foundation of China (No. 62306216) and the Natural Science Foundation of Hubei Province of China (No. 2023AFB816).