Dario Amodei
LII-D "The Designer" · Italian-American AI researcher and entrepreneur (b. 1983). After working on Google's Brain team and leading research at OpenAI, he co-founded Anthropic with AI safety as its top priority. Creator of Claude, he develops AI grounded in "Constitutional AI." One of the most serious voices in the AI safety movement.
Leading function: +Ti-p (organization and law)
"Constitutional AI" — logically systematising the values and behavioural rules an AI must uphold: the world's first "value design" for AI. The "Responsible Scaling Policy" as institutional architecture.
Creative function: -Ne-c (good sense and peace)
Choosing staged, cautious deployment over radical AI release — institutionalising AI safety as common sense. Prioritising safety over competition is an expression of -Ne-c.
Vulnerable function 1: +Se-p, weak (great deeds and protection)
Weak +Se-p: known as "quiet and introverted" — fundamentally uncomfortable with heroic, front-of-stage charisma. "The person who quietly designs institutional frameworks."
Vulnerable function 2: -Fi-c, weak (true feelings and reconciliation)
Weak -Fi-c: interviews contain almost no statements about personal feelings. He speaks about "AI risk" but not about his own inner life.
Quadra / Temperament / Club
Quadra: Anti-Gamma (Utopia) — "AI should benefit all of humanity": service to humanity rather than to a particular company, as -γ values.
Temperament: Balanced-Stable — maintaining the principles of institutional design without being pulled along by competitors' aggressive moves.
Club: Researcher — Anthropic's internal culture is the Researcher Club itself, with papers, discussions, and systematic analysis at its core.
Worldview & Attitude
"The existential risk of developing AI without adequate safety exists" — a direct statement of present reality. What does not yet exist (AI that is already safe) is not proclaimed.
Attitude toward change: executing Anthropic as a realistic plan for a safe-AI transition — a practitioner of staged, failure-resistant, safety-first development.
