GPT-3 is a monster of an AI system, capable of responding to almost any text prompt with unique, original responses that are often surprisingly cogent. It’s an example of what incredibly talented developers can do with cutting-edge algorithms and software when given unfettered access to supercomputers. But it’s not very efficient. At least, not when compared to a new system developed by LMU researchers Timo Schick and Hinrich Schütze.

According to a recent pre-print paper on arXiv, the duo’s system outperforms GPT-3 on the SuperGLUE benchmark with only 223 million parameters.

Parameters are variables used to tune and tweak AI models. They’re estimated from data; in essence, the more parameters an AI model is trained with, the more robust we expect it to be. When a system using 99.9% fewer parameters is able to best the best at a benchmark task, it’s a pretty big deal.

This isn’t to say that the LMU system is better than GPT-3, nor that it’s capable of beating it in tests other than the SuperGLUE benchmark, which isn’t indicative of GPT-3’s overall capabilities.

The LMU system’s results come courtesy of a training method called pattern-exploiting training (PET).

OpenAI policy director Jack Clark, writing in the weekly ImportAI newsletter, points out that while the system won’t outperform GPT-3 in every task, it does open new avenues for researchers looking to push the boundaries of AI with more modest hardware.

For more information check out the duo’s paper here.

H/t: Jack Clark and ImportAI
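To put the "99.9% fewer" figure in context: GPT-3's largest variant has 175 billion parameters (per OpenAI's own paper; that number is not stated in this article), versus the 223 million reported here. A quick back-of-the-envelope check of the reduction:

```python
# Published model sizes: GPT-3's largest variant vs. the PET-trained system.
gpt3_params = 175_000_000_000   # 175 billion (from OpenAI's GPT-3 paper)
pet_params = 223_000_000        # 223 million (from Schick & Schütze's pre-print)

# Fractional reduction in parameter count.
reduction = 1 - pet_params / gpt3_params
print(f"{reduction:.1%}")  # → 99.9%
```

So the headline claim checks out: the LMU system uses roughly a thousandth of GPT-3's parameter budget.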
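PET's core idea, roughly, is to reformulate a classification task as a cloze (fill-in-the-blank) question that a pretrained masked language model can already answer, with a "verbalizer" mapping candidate fill-in words back to task labels. A minimal sketch of that reformulation; the pattern, verbalizer words, and scores below are illustrative stand-ins, not the ones from the paper:

```python
# Illustrative sketch of PET's pattern + verbalizer idea. The pattern and
# verbalizer here are hypothetical examples, not Schick & Schütze's actual ones.

def pattern(text: str) -> str:
    """Wrap an input in a cloze-style pattern for a masked language model."""
    return f"{text} All in all, it was [MASK]."

# Verbalizer: maps a single token the LM might predict to a task label.
VERBALIZER = {"great": "positive", "terrible": "negative"}

def classify(mask_scores: dict) -> str:
    """Pick the label whose verbalizer token the LM scores highest.

    `mask_scores` stands in for the masked LM's token probabilities at the
    [MASK] position; a real PET setup would query a model like BERT here.
    """
    best_token = max(VERBALIZER, key=lambda tok: mask_scores.get(tok, 0.0))
    return VERBALIZER[best_token]

# Toy usage with made-up LM scores for the [MASK] slot:
print(pattern("The film kept me hooked."))
print(classify({"great": 0.72, "terrible": 0.05}))  # → positive
```

Because the task is phrased in the language the model was pretrained on, far fewer labeled examples (and parameters) are needed than for training a giant model from scratch, which is the efficiency angle the article highlights.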