(from Yeh course unless specified otherwise)

  • Frameworks
    • AutoGen – Simplifies complex AI workflows using pre-built templates.
    • MetaGPT & CrewAI – Enable highly customized multi-agent simulations, mimicking human roles.
    • Wikipedia on multi-agent systems
  • Apps / Tools

Background & Intro

  • Four agent behaviors (from Yeh course); a toy code sketch follows this list
    1. Reflection: Before responding, the agent assesses whether it needs more information.
    2. Tool Use: Agents access external resources, like checking live flight prices or retrieving updated data.
    3. Planning: They break down complex tasks into step-by-step solutions.
    4. Multi-Agent Coordination: Multiple agents work together like an efficient team, each handling different roles.
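
A minimal, hypothetical sketch of how these four behaviors might show up in a single agent loop. The function names, the stubbed `flight_search` tool, and the plan steps are all assumptions for illustration, not anything prescribed by the course.

```python
# Toy sketch of the four behaviors; all names and data here are made up.

def reflect(prompt: str, context: dict) -> bool:
    """Reflection: decide whether more information is needed before answering."""
    return "flight price" in prompt and "prices" not in context

def use_tool(tool_name: str, query: str) -> str:
    """Tool Use: call out to an external resource (stubbed with fake data)."""
    fake_results = {"flight_search": "NYC -> SFO, $312 round trip"}
    return fake_results.get(tool_name, "no data")

def plan(prompt: str) -> list[str]:
    """Planning: break the task into step-by-step sub-tasks."""
    return ["understand request", "gather live data", "compose answer"]

def run_agent(prompt: str) -> str:
    context: dict = {}
    for step in plan(prompt):                                         # Planning
        if step == "gather live data" and reflect(prompt, context):  # Reflection
            context["prices"] = use_tool("flight_search", prompt)    # Tool Use
    # Multi-Agent Coordination would hand sub-tasks to other agents here;
    # this toy version keeps everything in one agent.
    return f"Answer using: {context.get('prices', 'general knowledge')}"

print(run_agent("What is the flight price from NYC to SFO?"))
```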

Yeh’s Framework (or “Equation”)

  • Yeh breaks the Agent’s engagement and interaction down into the parts of a framework:
    • See: assume this is the ability for the Agent to see (process?) an initial prompt entered by the user
    • Think: assume this is a directive to the agent to engage in a particular way:
      1. Role: adopt a role as the Agent (“act as a helpful real estate agent”)
      2. Plan: step through a series of process steps to ultimately achieve the goal
    • Remember: use data to aid in the process; seems to come in a couple of flavors:
      • Historical Data: assume this is past interactions the Agent has had with this user (?)
      • Contextual Data: assume this is application data (e.g. from a database or system) or third party data (e.g. stock prices)
    • Can: assume this is the ability to execute particular tasks or invoke other entities to execute tasks on the Agent’s behalf (e.g. initiate a call, book an appointment)
  • That whole package above ⬆️ is assembled into one big prompt and executed by an LLM (see the sketch below)
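
A hedged sketch of how the See / Think / Remember / Can pieces might be assembled into that one big prompt. The field names, template wording, and example data are assumptions for illustration, not Yeh's exact framework.

```python
# Hypothetical assembly of See / Think / Remember / Can into a single LLM prompt.

def build_agent_prompt(user_input: str,
                       role: str,
                       plan_steps: list[str],
                       history: list[str],
                       context_data: dict,
                       tools: list[str]) -> str:
    sections = [
        f"SEE (user input): {user_input}",
        f"THINK - Role: {role}",
        "THINK - Plan:\n" + "\n".join(f"  {i+1}. {s}" for i, s in enumerate(plan_steps)),
        "REMEMBER - Historical: " + "; ".join(history),
        "REMEMBER - Contextual: " + "; ".join(f"{k}={v}" for k, v in context_data.items()),
        "CAN (available actions): " + ", ".join(tools),
    ]
    return "\n\n".join(sections)  # the whole package becomes one prompt for the LLM

prompt = build_agent_prompt(
    user_input="Find me a 2-bedroom condo under $500k",
    role="act as a helpful real estate agent",
    plan_steps=["clarify requirements", "search listings", "book a viewing"],
    history=["user previously asked about Seattle neighborhoods"],
    context_data={"mortgage_rate": "6.8%"},
    tools=["search_listings", "initiate_call", "book_appointment"],
)
print(prompt)
```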