

I’m not talking about the quality of LLMs (they suck, in so many different ways…).
I’m criticizing the experimental setup: it isn’t really statistically sound. With 52 different models each given only 10 tests, you’re almost bound to have at least one model come out correct 100% of the time by pure chance, even if its true per-question accuracy is closer to 50%. Running 100 tests each might yield very different results, with none of the models at 100%. Put another way, the p-values of the tests performed are pretty high, not below 0.05, so the results don’t really say what they purport to say.
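To make the multiple-comparisons point concrete, here’s a quick sketch of the closed-form probability that at least one of 52 independent models aces a 10-question test purely by chance. The per-question accuracy `p` is an assumption varied for illustration; nothing here comes from the original experiment beyond the 52-model / 10-test setup.

```python
def p_any_perfect(n_models: int, n_tests: int, p: float) -> float:
    """Chance that at least one of n_models scores n_tests/n_tests,
    assuming each answer is an independent success with probability p."""
    p_perfect = p ** n_tests                  # one given model aces every test
    return 1 - (1 - p_perfect) ** n_models    # at least one of them does

# Vary the assumed true per-question accuracy:
for p in (0.5, 0.7, 0.8):
    print(f"p={p}: P(some model is perfect) = {p_any_perfect(52, 10, p):.3f}")
```

Even at a coin-flip accuracy of 0.5 the chance of some model looking perfect is around 5%, and at 0.8 it becomes near-certain, which is why a single 10/10 score across 52 models carries little evidential weight.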


Thai has some different words and accents used by male and female speakers. Best source I could find with a quick search, though I’d have liked a more detailed one.