Roman Neruda, Institute of Computer Science, Czech Academy of Sciences
Tuesday, June 17, 2025, 13:30–15:30 (1:30–3:30 PM) CEST
Meeting Room 318, Pod Vodárenskou věží 2, Prague 8
Traditional AI has long been built on the concept of agents: autonomous entities that perceive their environment, reason about it using symbolic models, and act to achieve goals. These classical agents rely on explicit state representations, logic-based planning, and modular architectures. In contrast, Large Language Models (LLMs) introduce a radically different approach: agents that reason, plan, and act through language alone. This survey lecture explores the ongoing shift from traditional AI agents to LLM-based agents, highlighting how LLMs can perform planning, tool use, and goal-directed behavior via prompt engineering and contextual memory. We will compare the symbolic and neural paradigms, examine the strengths and limitations of each, and discuss what it means for agency when cognition is embedded in a pretrained language model. The talk concludes with a look at emerging research on hybrid neural-symbolic architectures.