Prompting LLMs with flattery and dialogue

Nonetheless, there's a sufficiently high correlation between correct and commonly-stated answers that direct prompting works okay for many queries. However, the better the model, the more faithfully it repeats common misconceptions. If you ask GPT-∞ "what's brown and sticky?", it will reply "a stick", even though a stick isn't actually sticky. That holds even in the limit of arbitrary compute, arbitrary data, and arbitrary algorithmic efficiency, because an LLM that perfectly models the internet will still return these commonly-stated incorrect answers. Note that direct queries to LLMs will therefore always produce some errors on Q-and-A benchmarks.
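The argument can be made concrete with a toy sketch: if a "perfect" model of a text corpus decodes greedily, it returns the most frequent answer in that corpus, whether or not it's literally true. The corpus counts below are invented for illustration, not real data.

```python
from collections import Counter

# Toy corpus of answers to "What's brown and sticky?" as they might
# appear on the internet (illustrative counts, not real data).
corpus_answers = (
    ["a stick"] * 95       # the common joke answer
    + ["tree sap"] * 3     # literally-sticky answers are rare
    + ["caramel"] * 2
)

def perfect_corpus_model(answers):
    """A 'perfect' model of the corpus under greedy decoding:
    returns the single most frequent answer, with no regard for
    whether that answer is actually correct."""
    return Counter(answers).most_common(1)[0][0]

print(perfect_corpus_model(corpus_answers))  # -> a stick
```

More compute or data only sharpens the model's estimate of the corpus distribution; it never changes which answer is most common, which is why the error persists in the limit.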