Decisions and knowledge

Research Domain

The GPT-3 language model, revolution or evolution?

The release of GPT-3 has opened up a new horizon of possibilities and much has been written about it: “GPT-3, Artificial Intelligence that (almost) writes articles on its own” [1], “GPT-3, Artificial Intelligence capable of competing with Ernest Hemingway” [2]. Indeed, this model seems to be capable of many things. For example, one student managed to populate a blog with articles automatically generated by GPT-3, and many people thought they were reading texts written by a real person [3]. The model was able to answer quiz questions on general knowledge [12] and even medicine [13]. It has also generated music tracks [14]. But this model can also be unreliable. It may tell you that your cranberry drink is poison, or advise a lawyer to go to work in a swimming costume to replace his stained suit [15].
For researchers in Natural Language Processing (NLP), this model is certainly new, but its underlying principles are already well known. More recently, Le Monde drew on the testimony of NLP experts to temper the hype: “GPT-3, artificial intelligence that has taught itself almost everything it knows” [4].
Let’s try to demystify what lies behind GPT-3, the third generation of “Generative Pre-Training” and the latest link in the evolution from language models to the Transformer architecture.
Read the article

When mental load reflects the effects of remote working during lockdown

Read the article

Better interactions with automatic emotion recognition

Read the article

Understanding FEARS, an approach to time series classification

Read the article

The use of social robots in mental health: what ethical issues?

Read the article

Benefits and challenges of a machine learning system in managing email overload at work

Read the article

DAGOBAH: Make Tabular Data Speak Great Again

Read the article