Meet Dan Hendrycks, the AI researcher on a mission from Elon Musk to safeguard against the dangers of artificial intelligence.
In a role that's more about preventing risks than profiting from them, Hendrycks earns just $1 per year for his advisory work with xAI and $12 annually from Scale AI. His work involves assessing and mitigating potential threats - from bioweapons to cyber attacks - and ensuring AI systems remain below specific danger thresholds.
Hendrycks' focus is on measuring political bias in AI systems, tracking issues like "covert activism," where a chatbot presents facts in an overly positive or negative light. He believes that by identifying and measuring these biases, developers can steer AI systems toward greater neutrality.
Working with Elon Musk has its perks, according to Hendrycks. He describes Musk as "a very enjoyable person to work with," and praises his focus on AI safety, citing support for California's SB-1047 bill that aimed to establish safety standards for advanced AI systems. Musk's independence allows him to take public stances without worrying about pleasing investors.
Hendrycks has collaborated with other prominent tech figures, including Eric Schmidt and Alexandr Wang, to warn against the dangers of unchecked AI development. Together they coined the term "Mutual Assured A.I. Malfunction (MAIM)" to describe a deterrence dynamic in which rival states would sabotage each other's destabilizing AI projects, echoing the Cold War logic of mutual assured destruction.
One of Hendrycks' biggest concerns is how cyber attacks could target critical but outdated infrastructure. He warns that this infrastructure, which hasn't had software updates in decades, is vulnerable to attack and poses a significant threat to national security.
Hendrycks' work is a reminder that as AI becomes more advanced, it's essential to prioritize its safety and security. His efforts demonstrate that even the most unlikely partnerships can lead to meaningful progress in mitigating the risks of AI systems.