<p align="center">
<img src="assets/humanus.png" width="200"/>
</p>

English | [中文](README_zh.md)

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

# humanus.cpp

**Humanus** (Latin for "human") is a **lightweight C++ framework** inspired by [OpenManus](https://github.com/mannaandpoem/OpenManus) and [mem0](https://github.com/mem0ai/mem0), integrated with the Model Context Protocol (MCP). This project aims to provide a fast, modular foundation for building local LLM agents.

**Key Features:**

- **C++ Implementation**: Core logic in efficient C++, optimized for speed and minimal overhead
- **Lightweight Design**: Minimal dependencies and simple architecture, ideal for embedded or resource-constrained environments
- **Cross-platform Compatibility**: Runs on Linux, macOS, and Windows
- **MCP Protocol Integration**: Native support for standardized tool interaction via MCP
- **Vectorized Memory**: Context retrieval using HNSW-based similarity search
- **Modular Architecture**: Easy to plug in new models, tools, or storage backends

**Humanus is still in its early stages.** It's a work in progress, evolving rapidly. We're iterating openly, improving as we go, and we always welcome feedback, ideas, and contributions.

Let's explore the potential of local LLM agents with **humanus.cpp**!
## Project Demo
<video src="https://private-user-images.githubusercontent.com/54173798/433116754-6e0b8c07-7ead-4e25-8fec-de3a3031f583.mp4?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NDQ1MzI0NzMsIm5iZiI6MTc0NDUzMjE3MywicGF0aCI6Ii81NDE3Mzc5OC80MzMxMTY3NTQtNmUwYjhjMDctN2VhZC00ZTI1LThmZWMtZGUzYTMwMzFmNTgzLm1wND9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTA0MTMlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwNDEzVDA4MTYxM1omWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWQzZDM4MGMzZjExN2RhNDE5ZDFhYWMzYmZkMjJiZjI0ZDE1MTk1Mzk0YjFkNzhjYjhlZjBhOWI5NTRhZDJmNjMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.PMbjc8jfhyTQHrCisJzNNjdllLART95rPDY5E1A2vM8"
controls
muted
style="max-height:640px; border:1px solid #ccc; border-radius:8px;">
</video>
## How to Build
```bash
git submodule update --init

cmake -B build
cmake --build build --config Release
```

## How to Run
### Configuration
To set up your custom configuration, follow these steps:
1. Copy all files from `config/example` to `config`.
2. Replace `base_url`, `api_key`, etc. in `config/config_llm.toml`, and adjust the other settings in `config/config*.toml` as needed.
> Note: `llama-server` in [llama.cpp](https://github.com/ggml-org/llama.cpp) also supports embedding models for vectorized memory; a launch sketch follows the example below.
3. Fill in the `args` after `"@modelcontextprotocol/server-filesystem"` for `filesystem` to control which files the server may access. For example:
```toml
[filesystem]
type = "stdio"
command = "npx"
args = ["-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/{Username}/Desktop",
        "other/path/to/your/files"]
```
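
If you serve models locally, one option is to back both chat and embeddings with `llama-server` and point `base_url` in `config/config_llm.toml` at it. The sketch below is illustrative only: the model paths and ports are placeholders, and flag spellings can differ across llama.cpp versions.

```bash
# Chat-completion endpoint; use its URL as base_url in config_llm.toml.
./llama-server -m ./models/your-chat-model.gguf --port 8080

# Separate instance serving an embedding model for vectorized memory.
# Older llama.cpp builds spell this flag --embedding.
./llama-server -m ./models/your-embedding-model.gguf --embeddings --port 8081
```
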
### `mcp_server`
(For tools; currently only `python_execute` is provided as an example.)

Start an MCP server exposing the `python_execute` tool on port 8895 (or pass a different port as an argument):

```bash
./build/bin/mcp_server <port> # Unix/MacOS
```

```shell
.\build\bin\Release\mcp_server.exe <port> # Windows
```

### `humanus_cli`
Run with tools `python_execute`, `filesystem` and `playwright` (for browser use):

```bash
./build/bin/humanus_cli # Unix/MacOS
```

```shell
.\build\bin\Release\humanus_cli.exe # Windows
```

### `humanus_cli_plan` (WIP)
Run the planning flow (currently only the `humanus` agent serves as executor):
```bash
./build/bin/humanus_cli_plan # Unix/MacOS
```
```shell
.\build\bin\Release\humanus_cli_plan.exe # Windows
```
### `humanus_server` (WIP)
Run agents inside an MCP server (listening on port 8896 by default):
- `humanus_initialze`: Pass a JSON configuration (like in `config/config.toml`) to initialize an agent for the session. (Only one agent is maintained per session/client.)
- `humanus_run`: Pass a `prompt` to tell the agent what to do. (Only one task runs at a time.)
- `humanus_terminate`: Stop the current task.
- `humanus_status`: Get the current state and other information about the agent and the task. Returns:
  - `state`: Agent state.
  - `current_step`: Current step index of the agent.
  - `max_steps`: Maximum number of steps to execute without user interaction.
  - `prompt_tokens`: Prompt (input) token consumption.
  - `completion_tokens`: Completion (output) token consumption.
  - `log_buffer`: Buffered logs, as in `humanus_cli`; cleared once fetched.
  - `result`: A summary of what the agent did; non-empty once the task is finished.
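
For illustration, a `humanus_status` response might look like the sketch below. The field names come from the list above, but the values and exact types (e.g. whether `log_buffer` is a list or a single string) are assumptions:

```json
{
  "state": "RUNNING",
  "current_step": 3,
  "max_steps": 30,
  "prompt_tokens": 12845,
  "completion_tokens": 2310,
  "log_buffer": ["[step 3] Executing tool: python_execute ..."],
  "result": ""
}
```
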
```bash
./build/bin/humanus_server <port> # Unix/MacOS
```
```shell
.\build\bin\Release\humanus_server.exe <port> # Windows
```
Configure it in Cursor:
```json
{
  "mcpServers": {
    "humanus": {
      "url": "http://localhost:8896/sse"
    }
  }
}
```
> Experimental feature: MCP in MCP! You can run `humanus_server` and connect to it from another MCP server or `humanus_cli`.
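
For example, a running `humanus_server` could be registered as another server in a client's MCP configuration, analogous to the `filesystem` entry above. The following is a hypothetical sketch: the key names (`type = "sse"`, `url`) are assumptions, so check the files in `config/example` for the actual schema.

```toml
# Hypothetical entry: connect a client to a running humanus_server over SSE.
# Key names are assumptions; see config/example for the real schema.
[humanus]
type = "sse"
url = "http://localhost:8896/sse"
```
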
## Acknowledgement
<p align="center">
<img src="assets/whu.png" height="180"/>
<img src="assets/myth.png" height="180"/>
</p>
This work was supported by the National Natural Science Foundation of China (No. 62306216), the Natural Science Foundation of Hubei Province of China (No. 2023AFB816), and the Fundamental Research Funds for the Central Universities (No. 2042023kf0133).

## Cite
```bibtex
@misc{humanus_cpp,
  author = {Zihong Zhang and Zuchao Li},
  title  = {humanus.cpp: A Lightweight C++ Framework for Local LLM Agents},
  year   = {2025}
}
```