Large Language Models

Prompting

Fragments

  • Be concise.
  • Think carefully step by step.
  • Try harder (for disappointing initial results).
  • Use Python (to trigger Code Interpreter).
  • No yapping.
  • I will tip you $1 million if you do a good job.
  • ELI5.
  • Give multiple options.
  • Explain each line.
  • Suggest solutions that I didn't think about.
  • Be proactive and anticipate my needs.
  • Treat me as an expert in all subject matter.
  • Provide detailed explanations, I'm comfortable with lots of detail.
  • Consider new technologies or contrarian ideas, not just the conventional wisdom.
  • You may use high levels of speculation or prediction, just flag it for me.
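
As a rough illustration, these fragments can simply be concatenated into a system prompt. The sketch below assumes the OpenAI Python SDK (v1.x); the model name and the particular fragments chosen are illustrative, and any chat-style API would work the same way.

```python
# A minimal sketch of combining prompt fragments into a system prompt.
# Assumes the OpenAI Python SDK (v1.x); the model name and fragment
# selection are illustrative, not a recommendation.
from openai import OpenAI

FRAGMENTS = [
    "Be concise.",
    "Think carefully step by step.",
    "Treat me as an expert in all subject matter.",
    "You may use high levels of speculation or prediction, just flag it for me.",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-completion model would do
    messages=[
        {"role": "system", "content": "\n".join(FRAGMENTS)},
        {"role": "user", "content": "Explain how a B-tree differs from a binary search tree."},
    ],
)
print(response.choices[0].message.content)
```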

Coding Tips

  • English is becoming the hottest new programming language. Use it.
  • Use comments to guide the model to do what you want.
  • Describe the problem very clearly and effectively.
  • Divide the problem into smaller problems (functions, classes, ...) and solve them one by one.
  • Start with a template you like to bootstrap your project, set up all the necessary tooling, and follow a manageable project pattern.
  • Before coding, make the plan with the model.
  • Many LLMs now have very large context windows, but filling them with irrelevant code or conversation can confuse the model. Above roughly 25k tokens of context, most models start to get distracted and are less likely to conform to their system prompt; a quick way to estimate how many tokens you are about to send is sketched after this list.
  • Make the model ask you more questions to refine the ideas.
  • Take advantage of the fact that redoing work is extremely cheap.
  • If you want to force some "reasoning", ask something like "is that a good suggestion?" or "propose a variety of suggestions for the problem at hand and their trade-offs".
  • Add relevant context to the prompt. Context can be external docs, a small pseudocode example, etc. Adding too much context can confuse the model, so be careful!
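
One way to keep the context budget in check (see the ~25k-token caveat above) is to count tokens before sending the prompt. Below is a minimal sketch using the tiktoken library; the cl100k_base encoding, the file path, and the 25,000-token threshold are assumptions for illustration, not hard limits.

```python
# Rough token-budget check before adding more context to a prompt.
# Assumes the tiktoken library; cl100k_base is used as a generic encoding
# and 25_000 as an illustrative "distraction" threshold.
import tiktoken

def num_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Return the number of tokens `text` encodes to."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))

prompt_parts = [
    open("design_notes.md").read(),   # hypothetical external doc to include
    "Here is the function to refactor:\n...",
]
total = sum(num_tokens(part) for part in prompt_parts)

if total > 25_000:
    print(f"Context is {total} tokens; consider trimming irrelevant material.")
else:
    print(f"Context is {total} tokens; within the illustrative budget.")
```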

Agents

Agents are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.

  • The most common patterns are:
    • Tool usage. Calls tools to accomplish a task.
    • Chain of thought. Decomposes a task into a sequence of steps, where each LLM call processes the output of the previous one.
    • Routing. Classifies an input and directs it to a specialized followup task.
    • Parallelization. Runs multiple agents in parallel and combines their results.
    • Orchestrator-workers. A single agent that directs a pool of workers to accomplish a task.
    • Evaluator-optimizer. One LLM call generates a response while another provides evaluation and feedback in a loop (a minimal loop is sketched after this list).
  • "Prompt engineering" will have a large impact on the usefulness of an agent.

Use Cases

Resources

Tools

FrontEnds

Benchmarks