Mark Zuckerberg’s Meta has this week released an open source version of an artificial intelligence model, Llama 2, for public use. The Large Language Model (LLM), which can be used to create a chatbot similar to ChatGPT, is available to startups, established companies, and independent operators. But why is Meta doing this, and what are the potential risks involved?
What does an open source LLM do?
LLMs underpin AI tools like chatbots. They are trained on vast data sets that allow them to mimic human language and even computer coding. If an LLM is made open source, that means its content is freely available for people to access, use, and modify for their own purposes.
Llama 2 will be released in three versions, including one that can be built into an AI chatbot. The idea is that startups or established companies can take the Llama 2 models and play with them to create their own products, including, potentially, rivals to ChatGPT or Google’s chatbot Bard, though by Meta’s own admission, Llama 2 is not at the level of GPT-4, the LLM behind OpenAI’s ChatGPT.
Nick Clegg, Meta’s president of global affairs, told BBC Radio 4’s Today program on Wednesday that making LLMs open source would make them “safer and better” by inviting external scrutiny.
“With the…wisdom of crowds, it actually makes these systems safer and better and, more importantly, gets them out of the…sweaty hands of big tech companies, who are currently the only companies that have the computing power or vast data stores to build these models in the first place.”
There’s also the possibility that by giving all comers the chance to pitch a rival to ChatGPT, Bard, or Microsoft’s Bing chatbot, Meta is potentially diluting the competitive edge of tech peers like Google.
Meta has admitted in research published alongside Llama 2 that it “lags behind” GPT-4, but the model is nonetheless a free competitor to OpenAI’s offering.
Microsoft is a key financial backer of OpenAI, but nonetheless supports the release of Llama 2. The LLM is available for download through the Microsoft Azure, Amazon Web Services, and Hugging Face platforms.
Are there concerns about open source AI?
Tech professionals, including Elon Musk, co-founder of OpenAI, have raised concerns about an AI arms race. Making models open source puts a powerful version of this technology in everyone’s hands.
Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton, told the Today program that there were questions about whether the tech industry could be trusted to self-regulate LLMs, and that the problem loomed even larger for open source models. “It’s a bit like giving people a template to build a nuclear bomb,” she said.
Dr Andrew Rogoyski, from the University of Surrey’s Human-Centered AI Institute, said open source models were difficult to regulate. “You can’t really regulate open source. You can regulate repositories, like GitHub or Hugging Face, based on local law,” he said.
“You can issue license terms on software that, if abused, could make the abusing company liable under various forms of legal redress. However, being open source means anyone can get their hands on it, so it doesn’t stop the wrong people from taking the software, or stop anyone from misusing it.”
To download Llama 2, you must agree to an “acceptable use” policy, which includes not using the LLMs to further or plot “violence or terrorism” or to create disinformation. However, LLMs like the one behind ChatGPT are prone to producing false information and can be persuaded to bypass security barriers to produce dangerous content. The release of Llama 2 also comes with a responsible use guide for developers.