

Here’s the interesting part.
“We didn’t do any LLMs. There is significant interest in that. There are lots of people trying those ideas out, but I think they’re still in the exploratory phase,” Desai told El Reg.
As it turned out, the researchers didn't need them. "We used a simpler model called a variational autoencoder (VAE). This model was established in 2013. It's one of the early generative models," Desai said.
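
For readers unfamiliar with the architecture, below is a minimal sketch of what a VAE looks like in PyTorch. It is purely illustrative, not Sandia's model: the input size, latent size, and dummy data are placeholder assumptions. A VAE encodes an input into the parameters of a Gaussian distribution, samples a latent vector from it, and decodes that vector back into the input space, training against reconstruction error plus a KL-divergence penalty.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):  # placeholder sizes
        super().__init__()
        # Encoder maps the input to the mean and log-variance of a Gaussian.
        self.enc = nn.Linear(input_dim, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        # Decoder maps a latent sample back to the input space.
        self.dec1 = nn.Linear(latent_dim, 256)
        self.dec2 = nn.Linear(256, input_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, so gradients
        # flow through the sampling step.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence from the unit Gaussian prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

model = VAE()
x = torch.rand(8, 784)  # dummy batch standing in for real data
recon, mu, logvar = model(x)
print(vae_loss(recon, x, mu, logvar))

Because the model is trained only on data from one narrow domain, everything it can generate lives in that domain's learned distribution, which is part of why hallucination is less of a worry than with a general-purpose LLM.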
By sticking with domain-specific models based on more mature architectures, Sandia also avoided hallucinations – the errors that arise when AI makes stuff up – which have become one of the biggest headaches associated with deploying generative AI.
"Hallucinations were not that big a concern here because we built a generative model that is tailored for this very specific task," Desai explained.

