智能助手网
Tag aggregation: mcp


linux.do · 2026-04-18 17:35:40+08:00 · tech

I connected the two using Hermes Agent over MCP. The architecture looks roughly like this:

Notion ↔ [official Notion MCP] ↔ Hermes Agent ↔ [FNS MCP / SSE] ↔ Obsidian
(Hermes and FNS are both self-hosted on a VPS)

What's involved:

1. Hermes Agent: needs no introduction; similar to openclaw ("小龙虾").
2. Notion MCP: the official one. After mounting it with OAuth authentication, the agent can read / write / create / search Notion pages. Why not use the API instead? Because the accounts sold on Xianyu are all commercial trial versions. Hermes has a built-in Notion API adapter, but a guest account cannot create an internal integration in the workspace and so cannot obtain an integration token; the official MCP's OAuth path is the only option.
3. Fast Note Sync (FNS): an Obsidian plugin that syncs the local Vault to a server and exposes an MCP service over SSE; the agent reads and writes the Vault through this interface. The plugin was made by a fellow member of the LINUX DO forum and is excellent.

Why do this at all? Mainly for Notion's AI features. A Notion AI subscription from Xianyu costs only 5 RMB a month and gives essentially unlimited use of models like Opus 4.7, and its note polishing is far better than anything Obsidian offers. But Notion lives entirely in the cloud, and I find its organization weak; over time it degrades into a junk drawer. I've seen plenty of people use Notion and Obsidian together, but they always end up moving things by hand, and **the problem with manual copying is that the day you get lazy, the whole system collapses.** Now that agents like openclaw and Hermes exist, why not put one to work?

One nice side effect of wiring Hermes to your note apps: you never have to open the apps at all. Chat with Hermes, mention something worth keeping, tell it to save it to your notes, and it files the record for you. Likewise, whether something was recorded in Notion or in Obsidian, it can find it for you without you opening either app.

My Notion and Obsidian setup:

Notion:
🗂️ Hermes knowledge-flow hub
├─ 📥 To archive ← I put things to be archived here
├─ ✅ Archived ← Hermes moves items here when it's done
├─ 📋 Hermes operation log ← Hermes writes one line per action
└─ 📖 Hermes manual ← instructions written for Hermes to read (crucial!)

Obsidian:
Vault/
├─ 00-Inbox/ ← everything Hermes brings over lands here
├─ 10-信息/ (info)
├─ 20-配置/ (config)
└─ 30-数学/ (math) ← the permanent locations I sort into by hand

Three iron rules:

1. Never do two-way sync; one direction only: Notion → Obsidian. Once something is moved to Obsidian it is a snapshot; to change it, write a new note in Obsidian rather than editing the archived copy.
2. The agent never touches note bodies. Hermes does only three things: read, move, and add tags or backlinks.
3. There must always be an operation log.

Pitfalls I hit:

1. If, like me, you run Hermes on a VPS, watch out for the Notion MCP OAuth redirect. The VPS is a headless environment, and Notion MCP's OAuth callback defaults to http://127.0.0.1:…, which a local browser cannot reach on the server's loopback. The fix is SSH local port forwarding: open the authorization link in your local browser, let the callback land on the local port, and the tunnel carries it back to the server to complete authentication.
2. Hermes does not yet support SSE MCP configuration, so SSE MCP support had to be patched into Hermes.

1 post - 1 participant. Read full topic
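The port-forwarding fix above can be sketched as follows. This is a minimal sketch, not the author's exact command: the port 8080 and the host `user@your-vps` are placeholders, since the actual callback port depends on the Notion MCP server's configuration.

```shell
# Tunnel local port 8080 to the VPS's loopback, where the Notion MCP
# OAuth callback listener is running (8080 and user@your-vps are
# placeholders; substitute the real callback port and host).
ssh -L 8080:127.0.0.1:8080 user@your-vps

# With the tunnel up, open the authorization link in the LOCAL browser;
# the http://127.0.0.1:8080/... callback lands on the local port and is
# forwarded back to the server, completing the OAuth handshake.
```

Adding `-N` to the `ssh` invocation forwards the port without opening a remote shell, which is convenient if the tunnel is only needed for the one-time authorization.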

hnrss.org · 2026-04-18 04:00:04+08:00 · tech

Paper Lantern is an MCP server that lets coding agents ask for personalized techniques and ideas drawn from 2M+ CS research papers. Your coding agent tells PL what problem it is working on, PL finds the most relevant ideas from 100+ research papers, and hands them to your coding agent, including trade-offs and implementation instructions. We had previously shown that this helps research work and wanted to understand whether it also helps everyday software-engineering tasks. We built 9 tasks to measure this and compared a coding agent alone (Opus 4.6, the baseline) against the same coding agent with Paper Lantern access. (Blog post with the full breakdown: https://www.paperlantern.ai/blog/coding-agent-benchmarks )

Some interesting results:

1. We asked the agent to write tests that maximize mutation score (the fraction of injected bugs caught). The baseline caught 63% of injected bugs. Baseline + Paper Lantern found mutation-aware prompting in recent research (MuTAP, Aug 2023; MUTGEN, Jun 2025), which suggested enumerating every possible mutation via AST analysis and then writing tests to target each one. This caught 87%.
2. Extracting legal clauses from 50 contracts. The baseline sent the full document to the LLM and correctly extracted 44% of clauses. Baseline + Paper Lantern found two papers from March 2026 (BEAVER for section-level relevance scoring, PAVE for post-extraction validation). Accuracy jumped to 76%.

Five of nine tasks improved by 30-80%. The difference was technique selection: 10 of the 15 most-cited papers across all experiments were published in 2025 or later.

Everything is open source: https://github.com/paperlantern-ai/paper-lantern-challenges — each experiment has its own README with detailed results and an approach.md showing exactly what Paper Lantern surfaced and how the agent used it. Quick setup: `npx paperlantern@latest`

Comments URL: https://news.ycombinator.com/item?id=47809920 Points: 3 # Comments: 4

hnrss.org · 2026-04-17 17:56:50+08:00 · tech

Hey everyone! For the last 2 weeks, I've been working on an MCP server for my Novation Circuit Tracks (in case you don't know what that is, it's just a little device for making electronic music). The idea is simple: I want an AI agent to be able to make music using my gear, so I gave it the tools to do so. Now I can just say something like "Build me a melodic ambient song with a dark atmosphere" and watch the AI do it. The whole project is open source ( https://github.com/namirsab/circuit-tracks-tools ) and I've written about it on my blog in case you want to go into more detail. It's a fun project that includes reverse engineering proprietary file formats. Would love to hear feedback, both from Circuit Tracks owners and in general: what do you think about this idea? Have you tried something similar? Comments URL: https://news.ycombinator.com/item?id=47804220 Points: 1 # Comments: 0

hnrss.org · 2026-04-17 15:48:46+08:00 · tech

Hey HN, adding new MCP servers by hand-editing JSON across Claude Code, Claude Desktop, and Cursor is annoying. So I built mcp.hosting, the easiest way to install MCP servers. Add MCP servers with a click from the Explore page, via GitHub repo badges, or manually. It's easy to add a bunch in your online account, and they're then immediately available in your MCP client of choice. There is also Smart Routing built in, to keep things fast and pick the best MCP tool for the job. The free tier covers 3 active servers, Pro is $9/mo for unlimited, and self-hosting is available if you want to run the whole stack. Happy to answer questions about the compliance suite (coming soon), the registry, or the stack (Fastify + Postgres + Caddy on EKS). Comments URL: https://news.ycombinator.com/item?id=47803487 Points: 1 # Comments: 1
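For context, this is the kind of per-client JSON the post is talking about. Each MCP client keeps its own config file; the shape below matches Claude Desktop's claude_desktop_config.json, and the server name "everything-server" is a placeholder:

```shell
# The per-client MCP config shape that normally has to be hand-edited
# (and duplicated) across Claude Desktop, Claude Code, and Cursor.
cfg='{
  "mcpServers": {
    "everything-server": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-everything"]
    }
  }
}'
echo "$cfg"
```

Keeping several of these files in sync by hand is exactly the chore a central registry removes.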

linux.do · 2026-04-17 13:49:55+08:00 · tech

This post uses the community's open-source promotion process and meets its requirements. I declare and comply with the following community requirements:

- My post carries the open-source promotion (开源推广) tag: yes
- My open-source project is fully open source, with no closed-source parts: yes
- My open-source project links to and credits the LINUX DO community: yes
- The AI-generated or AI-polished parts of the project introduction in my post have been posted as screenshots: yes
- I pledge that the above remains permanently valid and accept the supervision of the community and fellow members: yes

The project introduction follows; AI-generated and AI-polished content has been posted as screenshots. Repository: GitHub - S842155114/ZenRequest: a local-first, offline-first, privacy-first API workbench for developers who frequently debug HTTP and MCP endpoints

ZenRequest: a fast-starting, local-first API workbench built for developers who value privacy and efficiency. Modern API tools such as Postman and Insomnia keep growing more bloated: forced logins, cloud sync, memory footprints of several GB, and telemetry collection you never agreed to. ZenRequest is the alternative: a desktop-first API workbench that starts in milliseconds, runs fully offline, and keeps your data on your own machine.

- Instant startup: built on Rust + Tauri, not Electron
- Extremely lightweight: under 50 MB of memory when idle
- 100% offline & private: no accounts, no telemetry, no cloud sync
- Local SQLite storage: workspaces, requests, history, and sessions all live locally

Core capabilities covered so far:

- HTTP request debugging
- Environment variables and template resolution
- Collections, workspaces, history, and replay
- cURL import and workspace import/export
- Request-level mocks and basic assertions
- MCP workbench (tools / resources / prompts / roots / stdio)

You're all welcome to try it; if you run into problems, please submit a PR or an issue. Thanks, everyone.

3 posts - 2 participants. Read full topic

hnrss.org · 2026-04-16 23:37:36+08:00 · tech

I use Claude Desktop, Claude Code, and Cursor daily. They all have memory now, but none of them share it. Something I explained in Claude Desktop doesn't exist when I open Cursor. I got tired of re-explaining my stack, my preferences, and my project context every time I switched tools.

Most solutions I found required Docker, external databases, or cloud accounts - overkill for what's essentially a personal context store. So I built Covalence: a Mac app that bundles an MCP server and a local vector database into a single download. Any MCP client connects to it. Store a memory in Claude, retrieve it from Cursor. Same database, zero config.

The stack: SQLite + sqlite-vec, nomic-embed-text-v1.5 running on-device via CoreML (no API keys, no network calls). Hybrid BM25 + vector search. Everything lives in a single SQLite file.

The hard part was concurrency. Multiple Claude sessions writing to the same MCP server would deadlock. SQLite in WAL mode with each MCP client as a separate process solved it - multiple clients read and write simultaneously without blocking. Embeddings run sub-second on Apple Silicon.

A few features that emerged from daily use:

- Core Memories - pin facts the AI should always know (your stack, your preferences, standing instructions). They persist across every session and every client.
- Spaces - separate memory contexts for work, personal, or individual projects. No cross-contamination.
- Global hotkey capture - store something without switching away from whatever you're doing.

Free, no limits. macOS 15+, ~261MB (mostly the CoreML embedding model). https://covalence.app Comments URL: https://news.ycombinator.com/item?id=47794883 Points: 1 # Comments: 0
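The WAL-mode approach described above can be sketched with the sqlite3 CLI. This is a minimal sketch, not Covalence's actual schema: WAL lets readers proceed while a writer commits, and because SQLite still serializes writers, each process sets a busy timeout instead of failing immediately on contention.

```shell
# Create a fresh database and switch it to write-ahead logging.
db="$(mktemp)"
sqlite3 "$db" 'PRAGMA journal_mode=WAL;'   # prints: wal
sqlite3 "$db" 'CREATE TABLE memories(id INTEGER PRIMARY KEY, text TEXT);'

# Two "clients" (separate processes) writing concurrently; .timeout makes
# each writer wait up to 2s for the lock rather than erroring with BUSY.
sqlite3 -cmd '.timeout 2000' "$db" "INSERT INTO memories(text) VALUES ('from client A');" &
sqlite3 -cmd '.timeout 2000' "$db" "INSERT INTO memories(text) VALUES ('from client B');" &
wait
sqlite3 "$db" 'SELECT COUNT(*) FROM memories;'   # prints: 2
```

The same pattern applies when each MCP client process opens the shared database file with its own connection.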

www.ithome.com · 2026-04-16 10:57:08+08:00 · tech

ITHome, April 16 — security firm OX Security published a report yesterday (April 15) disclosing a design flaw in Anthropic's MCP (Model Context Protocol) that can lead to remote code execution. The flaw's reach is extremely broad, putting more than 200,000 AI servers at risk of remote code execution.

ITHome note: MCP, short for Model Context Protocol, is an open standard released by Anthropic in November 2024 that lets large AI models seamlessly connect to and operate external data sources and tools.

The disclosed flaw lurks in the STDIO interface of the modelcontextprotocol SDK. The interface is meant to launch a local server process and hand control to the AI model, but the underlying execution logic runs any OS command passed in; even when a forged server startup returns an error, the command still executes, with no validation and no warning at any point.

OX Security argues this is not a coding slip but an architecture-level design decision. The vulnerability affects all 11 languages officially supported by Anthropic: Python, TypeScript, Java, Kotlin, C#, Go, Ruby, Swift, PHP, Rust. Any developer building on MCP automatically inherits the risk. Over months of investigation, OX Security identified four attack families and verified them in real environments:

- LangFlow: the report found 915 publicly exposed instances; an attacker needs no account to obtain a session token and push a malicious configuration for a complete takeover.
- Letta AI: hit by a man-in-the-middle attack; researchers intercepted a "test connection" request, swapped the payload, and executed arbitrary commands directly on a production server.
- Flowise: tried to defend itself with a command allowlist and special-character filtering, but researchers bypassed both in one step using npx's -c flag, demonstrating that ad-hoc input filtering is meaningless when the underlying architecture permits arbitrary subprocess execution.
- The fourth family targets developer machines directly. The Windsurf IDE flaw is the most severe: after a user visits a malicious website, arbitrary local commands execute with no click required; it is tracked as CVE-2026-30615. Cursor, Claude Code, Gemini-CLI, GitHub Copilot, and other IDEs are likewise exposed to prompt injection, but because they require at least one user interaction, Anthropic and Microsoft classify this as "working as designed."

Notified on January 7, 2026, Anthropic responded that the behavior is intended; nine days later it merely updated the SECURITY.md documentation to advise caution with the STDIO adapter, making no architectural change. Researchers uploaded a malicious-server proof of concept to 11 major MCP marketplaces: 9 accepted it outright with no security review, and only the GitHub-hosted registry blocked the submission. LiteLLM, DocsGPT, Flowise, and Bisheng have shipped patches; LangFlow, Agent Zero, and others remain unpatched, and the protocol-level root flaw stays open.

References: Anthropic's MCP Design Flaw Enables Remote Code Execution Across 200,000+ AI Servers; Anthropic's "By Design" Failure at the Heart of the AI Ecosystem