• Hackworth@piefed.ca
    20 hours ago

    “Role-playing machine” is where it seems like the research is ending up. Language always has an implied communicator, and therefore an implied persona to adopt. LLMs are foremost maintaining a contextual role. Post-training is an attempt to keep them in the Assistant role, but (particularly as contexts get large) it’s trivial to push them into nearly any role imaginable. We made an improv bot that’s so good at playing a coder that it can actually code, kinda.

    • mojofrododojo@lemmy.world
      17 hours ago

      I wish there were some way to convince the idiots that LARGE LANGUAGE MODELS ARE NOT INTELLIGENCE.

      They’re a hotwired ELIZA with a shit-ton more computational grunt, but they aren’t intelligent, and the companies foisting them on people without proper warnings and guardrails are just asking for tragedies.