As autonomous agents continue to evolve, the integration of Large Language Models (LLMs) is reshaping their capabilities and impact. This new frontier, however, brings distinct challenges in ensuring the safety, reliability, and ethical deployment of these systems, alongside significant opportunities. This session examines the complexities of designing and managing autonomous agents in the age of LLMs, focusing on concerns such as robustness, explainability, bias, and accountability. Experts will discuss the latest advancements, potential risks, and strategies for fostering trust in these powerful technologies, while identifying key opportunities for innovation and responsible development.