From ad26582751198b1af715df70ce97367bbe4482bc Mon Sep 17 00:00:00 2001
From: David Gasquez
Date: Wed, 5 Mar 2025 09:15:34 +0100
Subject: [PATCH] =?UTF-8?q?docs:=20=F0=9F=92=A1=20Add=20insight=20on=20LLM?=
 =?UTF-8?q?=20verification=20asymmetry?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 Artificial Intelligence Models.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Artificial Intelligence Models.md b/Artificial Intelligence Models.md
index 67f7af5..23a4966 100644
--- a/Artificial Intelligence Models.md
+++ b/Artificial Intelligence Models.md
@@ -5,6 +5,7 @@
 - Classic ML system where humans are designing how the information is organized (feature engineering, linking, graph building) scale poorly ([the bitter lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html)). LLMs are able to learn how to organize the information from the data itself.
 - [LLMs may not yet have human-level depth, but they already have vastly superhuman breadth](https://news.ycombinator.com/item?id=42625851).
 - Learning to prompt is similar to learning to search in a search engine (you have to develop a sense of how and what to search for).
+- [LLMs are useful when exploiting the asymmetry between coming up with an answer and verifying the answer](https://vitalik.eth.limo/general/2025/02/28/aihumans.html) (similar to how a sudoku is difficult to solve, but it's easy to verify that a solution is correct).
 
 ## Prompting
 