
Open-source agentic memory framework for 24/7 proactive AI agents with file-system memory, intention prediction, and lower token costs.
memU is an open-source memory infrastructure for LLM applications and AI agents, designed for long-running, always-on assistants that need persistent, evolving memory. It organizes memories like a file system (categories as folders, memory items as files, cross-links as symlinks) and supports proactive behavior such as capturing user intent, predicting next steps, and injecting relevant memory into the agent’s context.

memU aims to reduce token spend for continuous agents by caching insights and avoiding redundant LLM calls, and it includes companion components such as a backend service (memU-server) and a web UI (memU-ui). In addition to the open-source framework, memU offers hosted APIs (Memory API / Response API) with usage-based pricing and a “start free” path.
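The file-system analogy above (categories as folders, memory items as files, cross-links as symlinks) can be sketched in plain Python. This is a hypothetical illustration of the concept only; the class and method names (`MemoryStore`, `write`, `link`, `read`) are invented for this sketch and are not memU's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "memory as a file system":
#   - a category behaves like a folder,
#   - a memory item behaves like a file inside it,
#   - a cross-link behaves like a symlink that resolves
#     to an item stored under a different category.
# None of these names come from the memU codebase.

@dataclass
class MemoryStore:
    folders: dict = field(default_factory=dict)   # category -> {item name: text}
    symlinks: dict = field(default_factory=dict)  # (category, name) -> (category, name)

    def write(self, category: str, name: str, text: str) -> None:
        """Store a memory item, creating its category 'folder' on demand."""
        self.folders.setdefault(category, {})[name] = text

    def link(self, src: tuple, dst: tuple) -> None:
        """Cross-link: reading src transparently resolves to dst."""
        self.symlinks[src] = dst

    def read(self, category: str, name: str) -> str:
        """Read an item, following a cross-link if one exists."""
        cat, item = self.symlinks.get((category, name), (category, name))
        return self.folders[cat][item]


store = MemoryStore()
store.write("preferences", "editor", "User prefers vim keybindings.")
# The same fact is reachable from a project-specific path via a cross-link.
store.link(("projects", "ide-setup"), ("preferences", "editor"))
print(store.read("projects", "ide-setup"))  # → User prefers vim keybindings.
```

One fact stored once can then be surfaced from multiple contexts without duplication, which is the property that makes symlink-style cross-links useful for injecting relevant memory into an agent's context.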