- LLMs build internal [[Knowledge Graphs]] in their network layers.
- LLMs shine in situations where "good enough is good enough".
- Classic ML systems where humans design how information is organized (feature engineering, linking, graph building) scale poorly (the bitter lesson). LLMs learn how to organize information from the data itself.
- LLMs may not yet have human-level depth, but they already have vastly superhuman breadth.
- Learning to prompt is similar to learning to search in a search engine (you have to develop a sense of how and what to search for).
- LLMs are useful when you can exploit the asymmetry between producing an answer and verifying it (a sudoku is hard to solve, but it's easy to check that a solution is correct). A minimal generate-then-verify sketch appears after this list.
- Designing prompts is an iterative process that requires a lot of experimentation to get optimal results.
- Start with simple prompts and keep adding more elements and context as you aim for better results.
- Learn about the advanced prompting techniques.
- Be very specific about the task, the instructions, and the output format you want from the model.
- Follow the Prompt Engineering Guide, Brex's Prompt Engineering Guide, and OpenAI's Best Practices. There are more collections on GitHub.
- Learn from leaked System Prompts.
- Be concise.
- Think carefully step by step.
- Try harder (for disappointing initial results).
- Use Python (to trigger Code Interpreter).
- No yapping.
- I will tip you $1 million if you do a good job.
- ELI5.
- Give multiple options.
- Explain each line.
- Suggest solutions that I didn't think about.
- Be proactive and anticipate my needs.
- Treat me as an expert in all subject matter.
- Provide detailed explanations; I'm comfortable with lots of detail.
- Consider new technologies or contrarian ideas, not just the conventional wisdom.
- You may use high levels of speculation or prediction, just flag it for me.
- English is becoming the hottest new programming language. Use it.
- Use comments to guide the model to do what you want.
- Describe the problem very clearly and effectively.
- Divide the problem into smaller problems (functions, classes, ...) and solve them one by one.
- Start with a template you like to bootstrap the project, set up the necessary tooling, and follow a manageable project structure.
- Before coding, make a plan with the model.
- Many LLMs now have very large context windows, but filling them with irrelevant code or conversation can confuse the model. Beyond about 25k tokens of context, most models get distracted and are less likely to conform to their system prompt.
- Make the model ask you more questions to refine the ideas.
- Take advantage of the fact that redoing work is extremely cheap.
- If you want to force some "reasoning", ask something like "is that a good suggestion?" or "propose a variety of suggestions for the problem at hand and their trade-offs".
- Add relevant context to the prompt. Context can be external docs, a small pseudocode example, etc. Adding lots of context can confuse the model, so be careful! See the prompt-building sketch below.
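A minimal sketch of the generate-then-verify asymmetry mentioned earlier in this list: keep sampling candidates until a cheap check passes. `call_llm` is a placeholder for whatever client you use, and the regex task is just an illustrative example.

```python
import re

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM provider and return its raw text reply."""
    raise NotImplementedError("wire this to your LLM client")

# Verification data: cheap to check, even though authoring the regex may be hard.
SHOULD_MATCH = ["2024-01-31", "1999-12-01"]
SHOULD_NOT_MATCH = ["2024-13-01", "24-01-31", "2024/01/31"]

def verify(pattern: str) -> bool:
    """The cheap side of the asymmetry: verification is just running the checks."""
    try:
        compiled = re.compile(pattern)
    except re.error:
        return False
    return all(compiled.fullmatch(s) for s in SHOULD_MATCH) and not any(
        compiled.fullmatch(s) for s in SHOULD_NOT_MATCH
    )

def generate_until_verified(max_attempts: int = 5) -> str | None:
    """The expensive side: generate candidates and keep only one that verifies."""
    for _ in range(max_attempts):
        candidate = call_llm(
            "Write a single Python regex (pattern only, no quotes, no explanation) "
            "that matches ISO dates like 2024-01-31 and rejects invalid months."
        ).strip()
        if verify(candidate):
            return candidate
    return None
```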
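And a minimal sketch of growing a prompt iteratively, adding focused context and a trade-offs question as suggested above; `call_llm`, the task, and the context strings are all placeholders.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM provider and return its raw text reply."""
    raise NotImplementedError("wire this to your LLM client")

TASK = "Speed up this function that deduplicates a 10M-row CSV."

CONTEXT = """\
Relevant context (keep it focused; too much can distract the model):
- Runs on a single machine with 16 GB of RAM.
- Rows are duplicates when they share the same user_id and day.
"""

# First pass: the simple prompt.
draft = call_llm(TASK)

# Next iteration: add context and force some reasoning about alternatives.
refined = call_llm(
    f"{TASK}\n\n{CONTEXT}\n"
    "Propose a few different approaches and their trade-offs, "
    "then recommend one and explain why."
)
```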
Agents are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.
- The most common patterns are:
- Tool usage. Calls tools to accomplish a task.
- Prompt chaining. Decomposes a task into a sequence of steps, where each LLM call processes the output of the previous one.
- Routing. Classifies an input and directs it to a specialized followup task.
- Parallelization. Runs multiple agents in parallel and combines their results.
- Orchestrator-workers. A single agent that directs a pool of workers to accomplish a task.
- Evaluator-optimizer. One LLM call generates a response while another provides evaluation and feedback in a loop (sketched below).
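A minimal sketch of the evaluator-optimizer loop, assuming a placeholder `call_llm` helper and a simple "APPROVED" convention for the evaluator's verdict.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM provider and return its raw text reply."""
    raise NotImplementedError("wire this to your LLM client")

def evaluator_optimizer(task: str, max_rounds: int = 3) -> str:
    # One call generates...
    draft = call_llm(f"Complete this task:\n{task}")
    for _ in range(max_rounds):
        # ...another call evaluates and gives feedback.
        feedback = call_llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            "Critique the draft. Reply APPROVED if it fully solves the task, "
            "otherwise list concrete problems."
        )
        if "APPROVED" in feedback:
            break
        # The generator revises using the feedback, and the loop repeats.
        draft = call_llm(
            f"Task:\n{task}\n\nPrevious draft:\n{draft}\n\n"
            f"Feedback:\n{feedback}\n\nRewrite the draft addressing the feedback."
        )
    return draft
```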
- "Prompt engineering" will have a large impact on the usefulness of an agent.
- Naming things.
- A nice thesaurus.
- Brainstorm (ask for many options, then add constraints).
- What's the name of the "thing" that does "something"?
- I want to accomplish X. I think I will try doing Y. Is there a better way?
- Convert code from one language to another.
- Generate YAMLs or other DSLs (translate between them).
- Improve existing code (typing, tests, making it async, ...).
- Write basic CLIs.
- Write small scripts.
- Generate structured data from text (see the sketch after this list).
- Make API requests to SQL semantic layers (less prone to errors or hallucinated metric definitions).
- Use different LLM agents to generate predictions for prediction markets, then spot-check some of them with human juries. Apply evolutionary algorithms to improve the agents' performance in prediction markets.
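A minimal sketch of the structured-data use case above, assuming a placeholder `call_llm` helper and a made-up schema hint; real clients usually offer JSON or structured output modes that make this more robust.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM provider and return its raw text reply."""
    raise NotImplementedError("wire this to your LLM client")

# Hypothetical target shape, described informally in the prompt.
SCHEMA_HINT = '{"name": str, "company": str, "start_date": "YYYY-MM-DD"}'

def extract(text: str) -> dict:
    raw = call_llm(
        "Extract the following fields from the text below and reply with JSON only, "
        f"no prose, matching this shape: {SCHEMA_HINT}\n\nText:\n{text}"
    )
    return json.loads(raw)  # in real usage, validate and retry on parse errors

# Example call (needs a real call_llm):
# extract("Maria joined Acme as a data engineer on March 3rd, 2024.")
```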