Feng's Notes Isn't coding fun?

How SOTA Agent Systems Manage Sessions and Memory

“Agent memory” sounds like one feature, but in practice it is at least four different problems: session state, durable memory, project context, and recall strategy. The current generation of agent systems does not solve these in the same way. OpenClaw treats memory as file-backed knowledge plus retrieval tools; Hermes Agent separates bounded persistent memory from a searchable session archive; Codex CLI leans on local transcripts, layered project instructions, and skills; Claude Code combines persistent CLAUDE.md rules with auto memory and resumable sessions.

GPT Researcher Deep Dive

I recently discovered GPT Researcher, an impressive project that’s revolutionizing how we conduct online research. Its ability to generate comprehensive reports quickly and cost-effectively caught my attention, so I decided to dive deeper into its inner workings. In this article, I’ll explore the architecture behind GPT Researcher, why it’s so fast, discuss considerations for deploying it as a service, and look at potential future developments.

1. How GPT Researcher Works: The Architecture

GPT Researcher employs a multi-agent architecture that is both fast and cost-effective. Here’s a breakdown of its key components:

Multi-agents for long article generation

It’s not too hard these days to generate a research report or an article with the help of AI. You just need to pick a topic and prompt ChatGPT, and it will produce a draft for you, which you can then polish into a decent article. However, I found myself even lazier than that: I just want to give the AI a topic and some key points and let it generate a long, comprehensive article for me.
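The idea above can be sketched as a simple planner/worker pipeline: one agent turns the topic and key points into an outline, and worker agents draft each section independently before the drafts are stitched together. This is only a minimal illustration, not GPT Researcher’s actual code; `call_llm` is a stand-in for a real model API call, and all function names here are hypothetical.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call (e.g. a chat completion request).
    return f"[generated text for: {prompt}]"

def plan_outline(topic: str, key_points: list[str]) -> list[str]:
    # Planner agent: turn the topic and key points into section headings.
    return [f"{topic}: {point}" for point in key_points]

def write_section(heading: str) -> str:
    # Worker agent: draft one section. Because each section only depends on
    # its heading, these calls can run in parallel.
    return call_llm(f"Write a detailed section titled '{heading}'")

def generate_article(topic: str, key_points: list[str]) -> str:
    # Pipeline: plan, draft each section, then join into one long article.
    sections = plan_outline(topic, key_points)
    drafts = [write_section(heading) for heading in sections]
    return "\n\n".join(drafts)

article = generate_article("Agent memory", ["session state", "durable memory"])
print(article)
```

Splitting the work this way is what lets the article grow well past a single model call’s output limit: each section gets its own focused prompt, and length scales with the number of key points.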