What is Llama and how is it different from other generative AI tools?
Llama is Meta’s class of large language models (LLMs), a new technology that can essentially understand and generate language. These tools have taken the AI space by storm, and they are useful in a wide range of applications: chatbots, information retrieval, text summarization, writing code and so on. What differentiates Llama is that it is open source, meaning it provides access to the underlying models, which enables the community to adapt the technology to their own requirements. We have released a set of models of different sizes for different needs. Broadly speaking, the larger the model, the more accurate its responses, while the smaller the model, the cheaper it is to deploy. These are all open source, so the majority of users can access them for free, adapt them to their own infrastructure and use them with their own data.
Some are concerned that open source poses a risk that AI could get into the wrong hands. Why do you believe it is a responsible approach?
We are acutely conscious of safety. For every release, our role is to ask: what are the responsibility considerations? What are society's expectations? What are the risks? We recently released Purple Llama, which includes a set of tools to help developers build these applications safely and responsibly. Unlike tightly guarded proprietary AI systems, Llama is part of an ecosystem, which allows users around the world to improve the technology and contribute to making it safer in key areas: cyber security, data privacy, content safety, making AI-generated content transparent so users can identify it, and so on. An open approach can converge on standards faster and in a more democratized way. In Europe and around the world, there is a growing recognition that this technology should be distributed rather than centralized, and that its values should be determined through wide input rather than by a single country or company.
Who is involved in the Llama ecosystem?
Researchers, academics, industry leaders, start-ups, non-profits, developers, teachers and learning designers, to name just a few: so far we've had over 100 million downloads. That has exceeded even our most optimistic forecasts, and we've seen a lot of excitement from the community around democratizing this technology. It's this openness that amplifies innovation, making it more likely that unexpected use cases will be found. It also accelerates development, because we get faster feedback loops. By letting the community converge on the tool to improve it, we can continually fine tune it. Part of this effort is the AI Alliance, an international community bringing together leading technology developers, universities, research institutes and other collaborators to advance open, safe and responsible AI. We firmly believe that open innovation can lead to technologies that bring more benefit to people.
How are Meta and Orange working together?
We have a long history of partnering on various mobile and Internet projects. From the beginning, Orange has taken a sophisticated approach to AI, wanting to embrace it both for the benefit of its customers and internally to optimize operations. Orange has been using Llama 2 for various use cases: to improve customer service, to aid software development, to support training and so on. It's very useful to have Orange as a partner providing feedback on the performance of the tools. It's also a large company with a presence in emerging markets such as Africa, so it's very helpful to be exposed to use cases outside the Western world. Having Orange test these different models provides vital information that is fed back to the community to improve the performance and responsible deployment of this technology.
How should AI be regulated?
It's still early days, but we're starting to see the first regulations. AI is already covered by existing laws such as the GDPR, the FCRA, Section 5 of the FTC Act and civil-rights laws. We encourage an AI regulatory approach that does not create a multitude of conflicts of law and that focuses on high-risk end uses. We need to continuously assess the situation as these tools become more powerful and we see how they are used. The technology is so new that all the implications are not yet well understood, and the situation is still rapidly evolving. But it's positive that policymakers are engaging proactively and trying to understand how to navigate the risks to ensure that the technology is beneficial for society. Ultimately, regulation is an expression of society's expectations, and so it is essential input for how we build AI that reflects them.
Read more:
The EU AI Act: provisionally agreed on 8 December 2023, it is a legal framework for the development, marketing and use of AI in the European Union, a world first. In July 2023, the US White House convened leading AI developers to sign a pledge to foster safe AI development, alongside a blueprint for an AI Bill of Rights.
LLM (large language model): an AI algorithm that uses deep learning techniques to generate text by repeatedly predicting the next word in response to a prompt. These models are trained on massive amounts of data. This type of model powers tools such as Meta's AI assistant and characters, OpenAI's ChatGPT and Google's Bard.
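To make the "repeatedly predicting the next word" loop concrete, here is a minimal Python sketch using the Hugging Face transformers library with an openly available Llama checkpoint. The checkpoint name is illustrative (access to Llama weights must be requested separately), and this greedy, one-token-at-a-time loop is a simplified stand-in for the sampling strategies real assistants use.

```python
# Minimal sketch of autoregressive generation: the model scores every possible
# next token, we pick the most likely one, append it, and repeat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # illustrative, gated checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
model.eval()

prompt = "Open source AI is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                            # generate 20 tokens, one at a time
        logits = model(input_ids).logits           # scores over the whole vocabulary
        next_id = logits[:, -1, :].argmax(dim=-1)  # greedy choice: most likely next token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```

In practice, tools built on these models replace the greedy `argmax` with temperature-based sampling and stop when an end-of-sequence token is produced, but the underlying mechanism is the same repeated next-token prediction described above.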
The AI Alliance: an initiative launched by IBM and Meta on 5 December 2023 in collaboration with over 50 leading organizations across industry, academia, research and government, including AMD, CERN, Cleveland Clinic, Dell Technologies, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, MOC Alliance, NASA, NSF, Oracle, Partnership on AI, University of California Berkeley, Yale University and others. The AI Alliance aims to foster an open community to accelerate responsible innovation in AI while ensuring scientific rigor, trust, safety, security, diversity and economic competitiveness.