update README and mcp

main
hkr04 2025-04-13 14:25:53 +08:00
parent b751398493
commit 16e34c4eee
3 changed files with 32 additions and 8 deletions

README.md

@@ -4,7 +4,15 @@
# humanus.cpp
-Humanus (meaning "human" in Latin) is a lightweight framework inspired by [OpenManus](https://github.com/mannaandpoem/OpenManus) and [mem0](https://github.com/mem0ai/mem0), integrated with the Model Context Protocol (MCP). `humanus.cpp` enables more flexible tool choices, and provides a foundation for building powerful local LLM agents.
+Humanus (meaning "human" in Latin) is a **lightweight C++ framework** inspired by [OpenManus](https://github.com/mannaandpoem/OpenManus) and [mem0](https://github.com/mem0ai/mem0), integrated with the Model Context Protocol (MCP).
+**Key Features:**
+- **C++ Implementation**: Core functionality written in efficient C++ for optimal performance and resource utilization
+- **Lightweight Design**: Minimalist architecture with minimal dependencies, suitable for resource-constrained environments
+- **Cross-platform Compatibility**: Full support for Unix, macOS, and Windows systems
+- **MCP Protocol Integration**: Seamless integration with the Model Context Protocol for standardized tool interactions
+- **Vectorized Memory Storage**: Efficient similarity search based on the HNSW algorithm for intelligent context retrieval
+- **Modular Architecture**: Easy to extend and customize, supporting various LLM models and tool integrations
Let's embrace local LLM agents **w/** humanus.cpp!
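The feature list above attributes context retrieval to HNSW-based similarity search. The repository does not say which HNSW implementation backs it, so the following toy example uses the hnswlib library purely as an assumption to show the shape of such a search; the dimension, labels, and vectors are made up.

```cpp
// Sketch of HNSW similarity search with hnswlib (https://github.com/nmslib/hnswlib).
// Assumption: humanus.cpp's vectorized memory behaves roughly like this; its
// actual backend and parameters may differ.
#include "hnswlib/hnswlib.h"
#include <cstdio>
#include <vector>

int main() {
    const int dim = 4;               // toy embedding dimension
    const size_t max_elements = 100; // capacity of the index

    hnswlib::L2Space space(dim);     // L2 metric; InnerProductSpace is the cosine-style option
    hnswlib::HierarchicalNSW<float> index(&space, max_elements);

    // Insert two toy "memory" embeddings, keyed by integer labels.
    std::vector<float> a = {0.1f, 0.2f, 0.3f, 0.4f};
    std::vector<float> b = {0.9f, 0.8f, 0.7f, 0.6f};
    index.addPoint(a.data(), 0);
    index.addPoint(b.data(), 1);

    // Query: fetch the single nearest stored embedding.
    std::vector<float> query = {0.1f, 0.2f, 0.25f, 0.4f};
    auto result = index.searchKnn(query.data(), 1);
    std::printf("nearest label: %zu (L2 distance %.3f)\n",
                result.top().second, result.top().first);
    return 0;
}
```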
@@ -27,7 +35,7 @@ To set up your custom configuration, follow these steps:
1. Copy all files from `config/example` to `config`.
2. Replace `base_url`, `api_key`, etc. in `config/config_llm.toml` and other configurations in `config/config*.toml` according to your needs (a hypothetical example follows this list).
-> Note: `llama-server` in [llama.cpp](https://github.com/ggml-org/llama.cpp) also supports embedding models.
+> Note: `llama-server` in [llama.cpp](https://github.com/ggml-org/llama.cpp) also supports embedding models for vectorized memory.
3. Fill in `args` after `"@modelcontextprotocol/server-filesystem"` for `filesystem` to control access to files. For example:
```
[filesystem]
@@ -38,7 +46,6 @@ args = ["-y",
"/Users/{Username}/Desktop", "/Users/{Username}/Desktop",
"other/path/to/your/files] "other/path/to/your/files]
``` ```
-4. Ensure all requirements for the MCP servers are installed. For example, run `npx playwright install` first for `playwright`.
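As a rough illustration of step 2, a `config/config_llm.toml` pointing at a local `llama-server` might look like the sketch below. Only `base_url` and `api_key` are named above; the section name, the `model` key, and all values are assumptions, so defer to the files in `config/example` for the real schema.

```toml
# Hypothetical sketch of config/config_llm.toml. Only base_url and api_key are
# mentioned in the README; everything else here is an illustrative assumption.
[llm]
base_url = "http://localhost:8080"  # e.g. a local llama-server endpoint
api_key  = "sk-no-key-required"     # placeholder for local deployments
model    = "qwen2.5-7b-instruct"    # assumed key and value
```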
### `mcp_server`
@@ -50,7 +57,7 @@ Start an MCP server with tool `python_execute` on port 8895 (or pass the port as
```
```shell
.\build\bin\Release\mcp_server.exe <port> # Windows
```
### `humanus_cli`
@@ -112,7 +119,6 @@ Configure it in Cursor:
> What if we add `humanus` to `mcp_servers`? It might be interesting.
## Acknowledgement
<p align="center">

mcp (submodule)

@@ -1 +1 @@
-Subproject commit 88237f2eaae1dc89b32d2a693ac2bd15fb6ad269
+Subproject commit 3c0a2a730ad9da3da61cd7a35ab8bbe5ff078c11


@@ -315,8 +315,8 @@ json LLM::ask_tool(
    }
    // If the logger has a file sink, log the request body
-   if (logger->sinks().size() > 1) {
-       auto file_sink = std::dynamic_pointer_cast<spdlog::sinks::basic_file_sink_mt>(logger->sinks()[1]);
+   for (const auto& sink : logger->sinks()) {
+       auto file_sink = std::dynamic_pointer_cast<spdlog::sinks::basic_file_sink_mt>(sink);
        if (file_sink) {
            file_sink->log(spdlog::details::log_msg(
                spdlog::source_loc{},
@@ -325,6 +325,24 @@ json LLM::ask_tool(
"Failed to get response from LLM. Full request body: " + body_str "Failed to get response from LLM. Full request body: " + body_str
)); ));
} }
auto stderr_sink = std::dynamic_pointer_cast<spdlog::sinks::stderr_color_sink_mt>(sink);
if (stderr_sink) {
stderr_sink->log(spdlog::details::log_msg(
spdlog::source_loc{},
logger->name(),
spdlog::level::debug,
"Failed to get response from LLM. See log file for full request body."
));
}
auto session_sink = std::dynamic_pointer_cast<SessionSink>(sink);
if (session_sink) {
session_sink->log(spdlog::details::log_msg(
spdlog::source_loc{},
logger->name(),
spdlog::level::debug,
"Failed to get response from LLM. See log file for full request body."
));
}
} }
throw std::runtime_error("Failed to get response from LLM"); throw std::runtime_error("Failed to get response from LLM");
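The change above replaces an index-based lookup of a single file sink with a walk over all of the logger's sinks, downcasting each one and routing the full request body only to file sinks while console and session sinks get a short notice. `SessionSink` is project-specific, so the standalone sketch below, which assumes only stock spdlog sinks, covers just the file and stderr cases.

```cpp
// Standalone sketch of per-sink message routing with spdlog. SessionSink from
// the diff is project-specific and omitted; only stock sinks are used here.
#include <spdlog/spdlog.h>
#include <spdlog/sinks/basic_file_sink.h>
#include <spdlog/sinks/stdout_color_sinks.h>
#include <memory>
#include <string>

void log_request_failure(const std::shared_ptr<spdlog::logger>& logger,
                         const std::string& body_str) {
    for (const auto& sink : logger->sinks()) {
        // File sinks get the full (potentially huge) request body.
        if (auto file_sink =
                std::dynamic_pointer_cast<spdlog::sinks::basic_file_sink_mt>(sink)) {
            file_sink->log(spdlog::details::log_msg(
                spdlog::source_loc{}, logger->name(), spdlog::level::debug,
                "Failed to get response from LLM. Full request body: " + body_str));
        }
        // Console sinks get a short pointer to the log file instead.
        if (auto stderr_sink =
                std::dynamic_pointer_cast<spdlog::sinks::stderr_color_sink_mt>(sink)) {
            stderr_sink->log(spdlog::details::log_msg(
                spdlog::source_loc{}, logger->name(), spdlog::level::debug,
                "Failed to get response from LLM. See log file for full request body."));
        }
    }
}

int main() {
    auto file = std::make_shared<spdlog::sinks::basic_file_sink_mt>("agent.log", true);
    auto console = std::make_shared<spdlog::sinks::stderr_color_sink_mt>();
    auto logger = std::make_shared<spdlog::logger>(
        "humanus", spdlog::sinks_init_list{file, console});
    logger->set_level(spdlog::level::debug);

    log_request_failure(logger, R"({"model":"...","messages":[]})");
    return 0;
}
```

Calling `sink->log(...)` directly is what lets each sink receive different text for the same event, which the regular `logger->debug(...)` path cannot do.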