Dario Amodei
LII-D "Architect" AI Researcher · Italian-American · 21st c. AI researcher and entrepreneur (1983–). After working on the Google Brain team, he founded Anthropic with AI safety as its top priority. Creator of Claude, he develops AI on the basis of "Constitutional AI." The most serious voice in the AI safety movement.
Leading Function +Ti-p (Organisation and Law)
"Constitutional AI" — logically systematising the value system and behavioural rules AI must uphold: the world's first "value design for AI." The "Responsible Scaling Policy" as institutional architecture.
Creative Function -Ne-c (Common Sense and Peace)
Choosing staged, cautious implementation over radical AI deployment — institutionalising AI safety as common sense. Prioritising safety over competition is an expression of -Ne-c.
Vulnerable Function 1 +Se-p weak (Great Deeds and Protection)
Weak +Se-p: known as "quiet and introverted," he is fundamentally uncomfortable with heroic, front-stage charisma — "the person who quietly designs institutional frameworks."
Vulnerable Function 2 -Fi-c weak (True Feelings and Reconciliation)
Weak -Fi-c: he makes almost no interview statements about personal feelings. He speaks about "AI risk" but not about his own inner life.
Quadra / Temperament / Club
Quadra: Anti-Gamma Quadra (Utopia) — "AI should benefit all of humanity" — service to humanity rather than to any specific company, as -γ values.
Temperament: Balanced-Stable — maintaining the principles of institutional design without being pulled along by competitors' aggressive moves.
Club: Researcher Club — Anthropic's internal culture is the Researcher Club itself, with papers, discussions, and systematic analysis at its core.
Worldview & Attitude
"The existential risk of AI development without adequate safety exists" — the direct statement of present reality. The ないもの (AI already made safe) is not proclaimed.
Attitude toward Change: running Anthropic as a realistic plan for the transition to safe AI — a practitioner of staged, failure-tolerant, safety-first development.
