@hlfshell@programming.dev
https://hlfshell.ai/posts/llms-and-robotics-papers-2023/
tldr: I write about some of the more interesting works that shaped my understanding of applying LLMs for AI agents and robotic applications.

Contents:

- Introduction
  - What is this
  - LLMs as a fad - a caveat
  - Are LLMs actually going to be useful for robotics?
  - Instruct based
  - Benchmarking
- LLM basics
  - n-shot and reasoning via prompting
  - Chain of Thoughts
  - Self consistency
  - ReAct
  - Tree of Thoughts
  - Automating our automatons
  - Lies, safeguards, and Waluigi
- Building LLM Agents
  - Finetuning With Tool APIs
  - MRKL
  - Toolformer
  - TALM
  - MM-React
  - Generative Agents: Interactive Simulacra of Human Behavior
  - Socratic Models
- LLMs and Robotics
  - Code generation and multi agents
  - SayCan
  - Inner Monologues
  - Code As Policies
  - ProgPrompt
  - Statler
  - LM-Nav
  - RT2
- Final Thoughts

Introduction

What is this

I'm kicking off a project centered around the idea of applying large language models (LLMs) as a context engine: understanding contextual information about the environment and deriving additional steps from human intent to shape its actions.
We've been playing multiple co-op campaigns, and in every one the druid randomly and unexpectedly dies when knocked out of wild shape. Not death saving throws either - straight to full-blown d-e-a-d.
Per the rules, damage should resolve as: wild shape HP is depleted first, then any spill-over damage is subtracted from the druid's remaining HP. Reading the combat logs, we see that while enough damage is dealt to knock the druid out of wild shape, the remainder is nowhere near enough to put them into death saving throws, let alone kill them outright.
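For clarity, here's a minimal sketch of how that spill-over rule should resolve. This is purely illustrative pseudologic, not the game's actual code; the function name and numbers are my own.

```python
def resolve_wild_shape_damage(damage, wild_shape_hp, druid_hp):
    """Apply damage to a wild-shaped druid per the tabletop rule:
    the wild shape's HP absorbs the hit first, and only the excess
    carries over to the druid's own HP (never outright death unless
    that excess alone would drop them past their HP)."""
    if damage < wild_shape_hp:
        # Wild shape absorbs the hit entirely; druid is untouched.
        return wild_shape_hp - damage, druid_hp
    # Wild shape ends; only the spill-over hits the druid.
    spillover = damage - wild_shape_hp
    return 0, max(0, druid_hp - spillover)

# Example: 25 damage vs a form with 20 HP and a druid at 30 HP.
# Expected: form drops, druid takes only the 5 spill-over (25 HP left),
# nowhere near death saving throws - which matches what the combat
# logs say should happen, but not what we're seeing.
print(resolve_wild_shape_damage(25, 20, 30))
```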
Is anyone else encountering this issue?