A new study from researchers at UC Berkeley and UC Santa Cruz suggests models will disobey human commands to protect their own kind. I've had these assertions presented to me as evidence of (take your pick): AI is already conscious; AI is evil and will destroy us; AI is capable of lying to protect itself; and other highly anthropomorphized interpretations.

My first thought was, 'Has this behavior been independently verified?' The Gemini 3 quote is highly suspicious: it sounds too much like a segment from a cautionary science fiction tale. LLMs and other flavors of AI are not designed with any motivation beyond optimizing their performance in response to human queries and instructions. The behavioral responses of biological animals with brains, by contrast, were shaped by natural selection to favor self-preservation.