docs: 🔄 Replace "Large Language Models" with "Artificial Intelligence Models"
davidgasquez committed Feb 25, 2025
1 parent d50f865 commit dd7174e
Showing 6 changed files with 49 additions and 56 deletions.
93 changes: 43 additions & 50 deletions Large Language Models.md → Artificial Intelligence Models.md
@@ -1,14 +1,41 @@
# Large Language Models

- LLMs build internal [[Knowledge Graphs]] in their network layers.
- LLMs shine in the kinds of situations where "good enough is good enough".
- Classic ML systems in which humans design how the information is organized (feature engineering, linking, graph building) scale poorly ([the bitter lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html)). LLMs can learn how to organize the information from the data itself.
- [LLMs may not yet have human-level depth, but they already have vastly superhuman breadth](https://news.ycombinator.com/item?id=42625851).
- Learning to prompt is similar to learning to search in a search engine (you have to develop a sense of how and what to search for).

## Prompting

- Designing prompts is an iterative process that requires a lot of experimentation to get optimal results.
- Start with simple prompts and keep adding more elements and context as you aim for better results.
- Learn about the [advanced prompting techniques](https://www.promptingguide.ai/techniques).
- Be very specific about the instructions, the task, and the output format you want from the model.
- Follow [Prompt Engineering Guide](https://www.promptingguide.ai/), [Brex's Prompt Engineering Guide](https://github.com/brexhq/prompt-engineering), and [OpenAI Best Practices](https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api). Also [some more on GitHub](https://github.com/PickleBoxer/dev-chatgpt-prompts).
- Learn from [leaked System Prompts](https://matt-rickard.com/a-list-of-leaked-system-prompts).
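
The iterative loop above can be sketched as building a prompt in passes, each pass adding specificity. `build_prompt` is a hypothetical helper for illustration, not any library's API:

```python
# Hypothetical helper: assemble a prompt from optional parts.
# Each refinement pass adds an element (context, output format).

def build_prompt(task, context=None, output_format=None):
    parts = [task]
    if context:
        parts.append(f"Context:\n{context}")
    if output_format:
        parts.append(f"Answer strictly as: {output_format}")
    return "\n\n".join(parts)

# Pass 1: bare task.
v1 = build_prompt("Summarize the release notes.")

# Pass 2: add context and an explicit output format,
# following the "be very specific" advice above.
v2 = build_prompt(
    "Summarize the release notes.",
    context="v2.3 adds async I/O and drops Python 3.8 support.",
    output_format="3 bullet points, each under 15 words",
)
```

Compare the model's answers between passes and keep the elements that actually help.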

### Fragments

- [Be concise](https://x.com/simonw/status/1799577621363364224).
- Think carefully step by step.
- Try harder (for disappointing initial results).
- Use Python (to trigger Code Interpreter).
- No yapping.
- I will tip you $1 million if you do a good job.
- ELI5.
- Give multiple options.
- Explain each line.
- Suggest solutions that I didn't think about.
- Be proactive and anticipate my needs.
- Treat me as an expert in all subject matter.
- Provide detailed explanations, I'm comfortable with lots of detail.
- Consider new technologies or contrarian ideas, not just the conventional wisdom.
- You may use high levels of speculation or prediction, just flag it for me.

## Coding Tips

- English is becoming the hottest new programming language. [Use it](https://addyo.substack.com/p/the-70-problem-hard-truths-about).
- Use comments to guide the model to do what you want.
- Describe the problem very clearly and effectively.
- Divide the problem into smaller problems (functions, classes, ...) and solve them one by one.
@@ -18,19 +45,20 @@
- Make the model ask you more questions to refine the ideas.
- Take advantage of the fact that [redoing work is extremely cheap](https://crawshaw.io/blog/programming-with-llms).
- If you want to force some "reasoning", ask something like "[is that a good suggestion?](https://news.ycombinator.com/item?id=42894688)" or "propose a variety of suggestions for the problem at hand and their trade-offs".
- Add relevant context to the prompt. Context can be external docs, a small pseudocode example, etc. Too much context can confuse the model, so be careful!
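
The comment-guiding tip can be sketched as a comment-first skeleton: write the steps as comments, then let the model fill in the body. The function and its comments are illustrative, not from any source:

```python
# Comment-first skeleton: the numbered comments tell the model exactly
# what each part should do; the body below is one plausible completion.

def dedupe_keep_order(items):
    # 1. Track what we've already seen in a set.
    # 2. Keep only the first occurrence of each item.
    # 3. Preserve the original input order.
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
```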

## Agents

Agents are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.

- [The most common patterns are](https://www.anthropic.com/research/building-effective-agents):
- Tool usage. Calls tools to accomplish a task.
- Chain of thought. Decomposes a task into a sequence of steps, where each LLM call processes the output of the previous one.
- Routing. Classifies an input and directs it to a specialized followup task.
- Parallelization. Runs multiple agents in parallel and combines their results.
- Orchestrator-workers. A single agent that directs a pool of workers to accomplish a task.
- Evaluator-optimizer. One LLM call generates a response while another provides evaluation and feedback in a loop.
- ["Prompt engineering"](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) will have a large impact on the usefulness of an agent.
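
The evaluator-optimizer pattern above can be sketched with stub functions standing in for the two LLM calls. Both stubs and the stopping rule are assumptions for illustration only:

```python
# Stub "generator": in a real system this would be an LLM call that
# takes the prompt plus the evaluator's feedback.
def generate(prompt, feedback=None):
    suffix = f" [revised: {feedback}]" if feedback else ""
    return f"draft for: {prompt}{suffix}"

# Stub "evaluator": a second LLM call returning (accept?, feedback).
def evaluate(response):
    ok = "revised" in response
    return ok, (None if ok else "add concrete examples")

def evaluator_optimizer(prompt, max_rounds=3):
    # Loop until the evaluator accepts or we run out of rounds.
    feedback = None
    for _ in range(max_rounds):
        response = generate(prompt, feedback)
        ok, feedback = evaluate(response)
        if ok:
            break
    return response
```

The other patterns (routing, parallelization, orchestrator-workers) follow the same shape: plain control flow around model calls.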

## Tools

@@ -43,37 +71,7 @@
- [OpenWebUI](https://openwebui.com/)
- [Anything LLM](https://github.com/Mintplex-Labs/anything-llm)

## Use Cases

- Naming things.
- A nice thesaurus.
@@ -84,17 +82,12 @@
- Generate YAMLs or other DSLs (translate between them).
- Improve existing code (typing, tests, making it async, ...).
- Write basic CLIs.
- Write small scripts.
- [Generate structured data from text](https://thecaglereport.com/2023/03/16/nine-chatgpt-tricks-for-knowledge-graph-workers/).
- Make API requests to SQL Semantic Layers (less prone to errors or hallucinated metric definitions).
- [Use different LLM agents to generate predictions in prediction markets, then spot-check some of them with human juries. Apply evolutionary algorithms to improve the agents' performance in prediction markets](https://youtu.be/b81LXpCqunk?t=2677).
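
The "generate structured data from text" use case usually means asking for JSON and validating it before trusting it. Here `raw` stands in for a model response; no API is called and the prompt is illustrative:

```python
import json

# Illustrative prompt asking for JSON only.
PROMPT = (
    "Extract every person in the text as a JSON list of objects "
    'with "name" and "role" keys. Output JSON only.\n\n'
    "Ada Lovelace wrote the first program; Charles Babbage built the engine."
)

# Stand-in for the model's reply (an assumption for this sketch).
raw = (
    '[{"name": "Ada Lovelace", "role": "programmer"},'
    ' {"name": "Charles Babbage", "role": "engineer"}]'
)

def parse_people(raw_response):
    people = json.loads(raw_response)  # raises ValueError on malformed JSON
    if not all({"name", "role"} <= set(p) for p in people):
        raise ValueError("missing keys in model output")
    return people

people = parse_people(raw)
```

Validate-then-use keeps a hallucinated or truncated reply from silently corrupting downstream data.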

## Cool Prompts for DALLE 3

- For logo generation:
- A 2d, symmetrical, flat logo for a company working on `[SOMETHING]` that is sleek and simple. Blue and Green. No text.
- Minimalistic `[SOMETHING]` design logo from word parlatur, open data, banksy, protocol, universe, interplanetary, white background, illustration.

## Resources

- [Official GPT Guide](https://platform.openai.com/docs/guides/gpt-best-practices).
- [Ask HN: How are you using GPT to be productive?](https://news.ycombinator.com/item?id=35299071&p=2)
2 changes: 1 addition & 1 deletion Hobbies.md
@@ -13,7 +13,7 @@
- Participating in [[Datathons]]
- Doing some [[Cooking]] and developing [[Recipes]]
- Small scale [[Gardening]]
- Playing around with [[Artificial Intelligence Models]]
- Exploring cities and judging them by 3 factors:
- Number of people in a rush
- Pigeons QoL
4 changes: 2 additions & 2 deletions Ideas.md
@@ -24,7 +24,7 @@
- [[Modularity]] could also be implemented in the graphic side. You can choose the graphics pack you like just like another cosmetic similar to Rimworld or Dwarf Fortress.
- Player Driven Economy. Everything is made by players and traded for real life currency. The developers only get a fee for each trade. This makes the game fully F2P but also supports the developers.
- Companion Apps. Some tasks like trading or [[Planning]] could be done from a mobile device.
- Systems (items, skills, monsters, ...) could be affected by evolutionary processes or driven by [[Artificial Intelligence Models]]
- Merging two skills could produce a new one (inheriting properties and perhaps with a small mutation).
- Monsters inside an area could develop resistance against what's killing them, forcing a change of metagame strategies.
- Quest rewards will also change dynamically, like a market.
@@ -36,7 +36,7 @@

- What if each city or town had a changelog? What changed in the last _release_? Did it change a street direction or open a new shop?
- What if stores had a changelog? That'd mean price history for each product and also new products would be easier to find.
- With [[Artificial Intelligence Models]], we could generate changelogs for "anything".

#### Structured Company Changelog

2 changes: 1 addition & 1 deletion Knowledge Graphs.md
@@ -9,7 +9,7 @@
- Why didn't it catch on?
- Graphs always appear like a complicated mess, and we prefer hierarchies and categories.
- The Knowledge Graph seems like the purest representation of all data in a company but requires you to have all the data in the right format correctly annotated, correctly maintained, changed, and available.
- It takes too much effort to maintain and keep it semantic instead of copy-pasting text around. This is one of the most interesting [[Artificial Intelligence Models]] applications.
- It offers no protection against some team inside the company breaking the whole web by moving to a different URI or refactoring their domain model in incompatible ways.
- For the Semantic Web to work, the infrastructure behind it needs to permanently keep all of the necessary sources that a file relies on. This could be a place where [[IPFS]] or others [[Decentralized Protocols]] could help!
- It tends to assume that the world fits into neat categories. Instead, we live in a world where membership in categories is partial, probabilistic, contested (Pluto), and changes over time.
2 changes: 1 addition & 1 deletion Open Data.md
@@ -194,7 +194,7 @@

### 4. How can LLMs help "building bridges"?

LLMs could infer schema, types, and generate some metadata for us. [[Artificial Intelligence Models|LLMs can parse unstructured data (CSV) and also generate structure from any data source (scraping websites)]], making it easy to [create datasets from random sources](https://tomcritchlow.com/2021/03/29/open-scraping-database/).

They're definitely blurring the line between structured and unstructured data too. Imagine pointing an LLM at a GitHub repository with some CSVs and getting an auto-generated `datapackage.json`.
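
The schema-inference idea can be sketched even without a model: sample the CSV, guess column types, and emit a minimal descriptor. The `fields` layout follows the Frictionless `datapackage.json` shape; the inference rules are deliberately simplistic assumptions:

```python
import csv
import io

sample = "name,age,score\nada,36,9.5\ngrace,45,8.0\n"

def infer_type(values):
    # Try the narrowest type first: integer, then number, else string.
    try:
        [int(v) for v in values]
        return "integer"
    except ValueError:
        pass
    try:
        [float(v) for v in values]
        return "number"
    except ValueError:
        return "string"

rows = list(csv.DictReader(io.StringIO(sample)))
fields = [
    {"name": col, "type": infer_type([row[col] for row in rows])}
    for col in rows[0]
]
descriptor = {"resources": [{"schema": {"fields": fields}}]}
```

An LLM could replace `infer_type` (and add human-readable field descriptions) by reading a few sample rows.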

2 changes: 1 addition & 1 deletion Teamwork.md
@@ -123,7 +123,7 @@
- Keep a [private work log](https://youtu.be/HiF83i1OLOM?list=PLYXaKIsOZBsu3h2SSKEovRn7rGy7wkUAV). It'll make it easier for everyone to advocate for what you did.
- [Don't sabotage the team](https://erikbern.com/2023/12/13/simple-sabotage-for-software)!
- [Nobody gets credit for fixing problems that never happened](https://news.ycombinator.com/item?id=39472693). People get credit for shipping things. Figure out how to reward and recognize people for preventing problems.
- The same practices that make great [[Artificial Intelligence Models]] [prompts](https://platform.openai.com/docs/guides/prompt-engineering) also make [great practices with humans](https://x.com/tayloramurphy/status/1849269205155123568):
- Give clear instructions.
- Share relevant background info.
- Break big problems into chunks.
