With artificial intelligence models having taken the world by storm in recent months, Defcon this year has invited hackers to hunt for bugs and biases in large language models (LLMs) from companies including OpenAI, Google, and Anthropic.
According to The Register, the gathering, the largest annual hacker event and held in Las Vegas, will host thousands of people this year, including hundreds of students from various educational institutions and communities, some of whom will be looking for flaws in large language models. These models are the same technology behind tools like ChatGPT and Google Bard.
Sven Cattell, founder of the AI Village group, which was responsible for inviting the hackers to the event, said in a statement: "Traditionally, companies have addressed this problem with specialized red teams, but that work has usually happened in private. The problems with these models will not be solved until more people know how to red team and assess them."

Which AI models will hackers explore at Defcon this year?
Defcon this year effectively assembles just such a red team. AI Village will provide event participants with laptops and limited access to language models from various companies, currently including Anthropic, Google, Hugging Face, Nvidia, OpenAI, and Stability AI.
Microsoft has also been announced as a participant, so access to the Bing model may be possible as well. Hackers will additionally get access to an evaluation platform from Scale AI so they can test the software more accurately and thoroughly.
The news comes after US Vice President Kamala Harris and several senior government officials recently met with the heads of OpenAI, Microsoft, Anthropic, and Google to discuss the threats posed by artificial intelligence.
The Defcon 2023 event will be held from August 10 to 13 this year in Las Vegas, USA.