diff --git a/content/c/concepts/pointers/terms/double-pointer/double-pointer.md b/content/c/concepts/pointers/terms/double-pointer/double-pointer.md
new file mode 100644
index 00000000000..019dcd18374
--- /dev/null
+++ b/content/c/concepts/pointers/terms/double-pointer/double-pointer.md
@@ -0,0 +1,71 @@
+---
+Title: 'Double Pointer'
+Description: 'Holds the memory address of a pointer.'
+Subjects:
+  - 'Code Foundations'
+  - 'Computer Science'
+Tags:
+  - 'Pointers'
+  - 'Memory'
+  - 'Variables'
+  - 'Arrays'
+CatalogContent:
+  - 'learn-c'
+  - 'paths/computer-science'
+---
+
+In C, a **double pointer** is a [pointer](https://www.codecademy.com/resources/docs/c/concepts/pointers/terms/pointer/pointer) that holds the memory address of another pointer. It allows indirect access to the value of a variable.
+
+## Syntax
+
+A double pointer is declared using two asterisks (`**`) before the pointer variable name:
+
+```pseudo
+type **name
+```
+
+- `type`: The type of data the double pointer will point to (e.g., `int`, `char`, etc.).
+- `name`: The identifier for the double pointer.
+
+## Example
+
+The following example demonstrates how a double pointer is declared and used:
+
+```c
+#include <stdio.h>
+
+int main() {
+  int value = 35;
+  int *pointer = &value;          // Pointer to an integer (stores the address of 'value')
+  int **doublePointer = &pointer; // Double pointer to an integer pointer (stores the address of 'pointer')
+
+  // Printing the values
+  printf("Value of value: %d\n", value);                     // Direct access to value
+  printf("Value of *pointer: %d\n", *pointer);               // Dereferencing pointer to access value
+  printf("Value of **doublePointer: %d\n", **doublePointer); // Dereferencing double pointer twice to access value
+
+  // Printing the addresses
+  printf("Address of value: %p\n", (void*)&value);                 // Address of the variable 'value'
+  printf("Address of pointer: %p\n", (void*)&pointer);             // Address of the pointer 'pointer'
+  printf("Address of doublePointer: %p\n", (void*)&doublePointer); // Address of the double pointer 'doublePointer'
+
+  return 0;
+}
+```
+
+The above code will give the following output:
+
+```shell
+Value of value: 35
+Value of *pointer: 35
+Value of **doublePointer: 35
+Address of value: 0x7ffcbffdcc14
+Address of pointer: 0x7ffcbffdcc18
+Address of doublePointer: 0x7ffcbffdcc20
+```
+
+In the example:
+
+- `value` is an integer variable.
+- `pointer` is a pointer that stores the address of `value`.
+- `doublePointer` is a double pointer that stores the address of `pointer`.
diff --git a/content/general/concepts/artificial-intelligence/artificial-intelligence.md b/content/general/concepts/artificial-intelligence/artificial-intelligence.md
new file mode 100644
index 00000000000..39c16b1ef09
--- /dev/null
+++ b/content/general/concepts/artificial-intelligence/artificial-intelligence.md
@@ -0,0 +1,33 @@
+---
+Title: 'Artificial Intelligence'
+Description: 'Simulates human intelligence in computers, enabling learning, reasoning, and problem-solving to provide solutions across various tasks.'
+Subjects:
+  - 'AI'
+  - 'Computer Science'
+Tags:
+  - 'AI'
+  - 'Algorithms'
+  - 'Automation'
+CatalogContent:
+  - 'paths/computer-science'
+  - 'paths/data-science'
+---
+
+**Artificial Intelligence (AI)** is described as the simulation of human intelligence in computer systems. AI technology can perform learning, reasoning, and problem-solving tasks, enabling solutions to a variety of complex problems.
AI systems can range from simple algorithms, such as those making product recommendations based on purchase history, to complex systems that power self-driving cars, planning routes, and avoiding obstacles without human intervention. The demand for AI-powered solutions is growing rapidly and intersecting almost every aspect of our daily lives. + +## Types of AI + +- **Reactive Machines AI**: These are the most basic AI systems. They do not have memory or use past data to form decisions or factor into solutions. They react to specific inputs with specific outputs. +- **Limited Memory AI**: These systems can use memory and stored data to make future decisions. However, they only have temporary memory, which is stored briefly. +- **Theory of Mind AI**: These advanced AI systems involve machines that can understand emotions, behaviors, and interactions, making them more human-like in their ability to interact with humans. +- **Self-Aware AI**: This would be the most advanced type of AI system and is currently hypothetical. In this type of system, a machine would have a defined consciousness and be self-aware. It would be able to make decisions, feel emotions, and act on them based on its own intentions and desires. + +## Applications of AI + +Artificial Intelligence plays a significant role in various fields of computer science and programming. The most popular applications include: + +- **Business**: AI is playing an increasingly important role in businesses. AI-powered tools help collect, analyze, and visualize data efficiently, leading to improved decision-making, productivity, and cost reduction. +- **Healthcare**: AI assists doctors in diagnosing diseases, developing treatments, and providing personalized care to patients. +- **Education**: AI can personalize learning, enhance student engagement, and automate administrative tasks for schools and organizations. +- **Finance**: AI aids financial institutions by personalizing services and products, managing risk and fraud, ensuring compliance, and automating operations to reduce costs. +- **Manufacturing**: AI is used in manufacturing including automating tasks, such as assembly and inspection, optimizing production processes, and can be used to detect defects and improve quality control. diff --git a/content/general/concepts/linux/linux.md b/content/general/concepts/linux/linux.md new file mode 100644 index 00000000000..b91314ecbf5 --- /dev/null +++ b/content/general/concepts/linux/linux.md @@ -0,0 +1,84 @@ +--- +Title: 'Linux' +Description: 'Free and open-source operating system kernel that forms the foundation of numerous operating systems (distributions).' +Subjects: + - 'Computer Science' + - 'Code Foundations' +Tags: + - 'Developer Tools' + - 'Linux' + - 'Operating Systems' + - 'Unix' +CatalogContent: + - 'paths/computer-science' + - 'paths/code-foundations' +--- + +**Linux** was created by Linus Torvalds in 1991 as an alternative to proprietary Unix systems. It has since grown into a powerful, secure, and highly customizable operating system that powers everything from personal computers to servers, smartphones (Android), and supercomputers. The Linux ecosystem is supported by a vast community of developers and users who contribute to its continuous development and improvement. + +## Working + +Linux operates on a kernel-based architecture, where the kernel manages hardware resources and provides essential services to higher-level software. 
It supports a multi-user, multi-tasking environment, allowing processes to run concurrently while securely sharing system resources among users. The system follows a hierarchical file structure, starting from the root directory (`/`), where all devices and resources are represented as files.
+
+## Architecture of Linux
+
+![Linux Architecture](https://raw.githubusercontent.com/Codecademy/docs/main/media/general-linux.png)
+
+- Hardware Layer (Core)
+
+  - Contains physical components like CPU, memory, and storage devices
+  - Forms the base layer, interfacing directly with hardware
+
+- Kernel Layer
+
+  - Core of the Linux operating system
+  - Manages hardware resources, memory, and processes
+  - Provides essential services and hardware abstraction
+  - Handles system calls and device drivers
+
+- Shell Layer
+
+  - A command-line interpreter that bridges the user and kernel
+  - Processes user commands and scripts
+  - Examples include Bash, Zsh, and Fish
+
+- Application Layer (Outermost)
+
+  - Includes user applications, GUI interfaces, and system utilities
+  - Supports third-party applications and system tools
+
+This layered architecture follows a hierarchical structure where each layer communicates with adjacent layers, with the kernel serving as the critical intermediary between hardware and software components. Each outer layer depends on the services provided by the inner layers, creating a robust and modular system design.
+
+## Linux Commands
+
+| Command   | Description                              | Example Usage                 |
+| --------- | ---------------------------------------- | ----------------------------- |
+| `ls`      | Lists files and directories              | `ls -l`                       |
+| `cd`      | Changes the current directory            | `cd /home/user`               |
+| `pwd`     | Displays the current directory path      | `pwd`                         |
+| `mkdir`   | Creates a new directory                  | `mkdir new_folder`            |
+| `rm`      | Deletes files or directories             | `rm file.txt`                 |
+| `rmdir`   | Removes empty directories                | `rmdir empty_folder`          |
+| `cp`      | Copies files or directories              | `cp source.txt destination/`  |
+| `mv`      | Moves or renames files and directories   | `mv old.txt new.txt`          |
+| `cat`     | Displays file contents                   | `cat file.txt`                |
+| `nano`    | Edits a file using the nano editor       | `nano file.txt`               |
+| `vim`     | Edits a file using the Vim editor        | `vim file.txt`                |
+| `touch`   | Creates an empty file                    | `touch newfile.txt`           |
+| `chmod`   | Modifies file permissions                | `chmod 755 script.sh`         |
+| `chown`   | Changes file ownership                   | `chown user:group file.txt`   |
+| `find`    | Searches for files in a directory        | `find /home -name "*.txt"`    |
+| `grep`    | Searches for a pattern inside files      | `grep "error" logfile.log`    |
+| `ps`      | Displays running processes               | `ps aux`                      |
+| `kill`    | Terminates a process by its ID           | `kill 1234`                   |
+| `top`     | Shows system resource usage in real time | `top`                         |
+| `df`      | Shows disk space usage                   | `df -h`                       |
+| `du`      | Shows directory size                     | `du -sh folder/`              |
+| `tar`     | Archives multiple files                  | `tar -cvf archive.tar files/` |
+| `unzip`   | Extracts files from a ZIP archive        | `unzip archive.zip`           |
+| `wget`    | Downloads a file from the internet       | `wget https://example.com`    |
+| `curl`    | Fetches data from a URL                  | `curl -O https://example.com` |
+| `echo`    | Prints text to the terminal              | `echo "Hello, World!"`        |
+| `whoami`  | Displays the current logged-in user      | `whoami`                      |
+| `uptime`  | Shows system uptime                      | `uptime`                      |
+| `history` | Displays command history                 | `history`                     |
+| `clear`   | Clears the terminal screen               | `clear`                       |
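+
+For example, a short, hypothetical terminal session might combine several of these commands (the directory and file names below are placeholders):
+
+```shell
+mkdir project            # Create a new directory
+cd project               # Move into it
+echo "hello" > notes.txt # Create a file containing some text
+chmod 644 notes.txt      # Give the owner read/write access
+grep "hello" notes.txt   # Search the file for a pattern
+cd ..                    # Go back up one level
+du -sh project/          # Check how much space the directory uses
+```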
diff --git a/content/general/concepts/processor/processor.md b/content/general/concepts/processor/processor.md
new file mode 100644
index 00000000000..eb3f408de1c
--- /dev/null
+++ b/content/general/concepts/processor/processor.md
@@ -0,0 +1,39 @@
+---
+Title: 'Processor'
+Description: 'A processor is a hardware component, typically a chip, that executes instructions and performs data processing tasks in a computer or electronic device.'
+Subjects:
+  - 'Computer Science'
+  - 'Information Technology'
+Tags:
+  - 'Memory'
+  - 'Components'
+CatalogContent:
+  - 'paths/computer-science'
+  - 'paths/front-end-engineer-career-path'
+---
+
+A **processor** is a hardware component that interprets and executes program instructions and processes data. It performs calculations and logic operations and manages data flow within a system. Processors are essential for a wide range of tasks in computing and electronics and are found in devices like computers, smartphones, and more.
+
+## History
+
+The history of processors spans from early vacuum tubes to modern multi-core chips capable of handling enormous workloads. Some key milestones include:
+
+- **1940s-1950s**: Early computers used bulky vacuum tubes for processing, which made them large and inefficient.
+- **1960s**: The invention of transistors led to smaller, faster, and more reliable processors.
+- **1970s**: The Intel 4004, the first microprocessor, integrated all processing functions into a single chip.
+- **1980s**: The x86 architecture, introduced with the Intel 8086, came to power personal computers like the IBM PC.
+- **1990s**: The Intel Pentium processors revolutionized personal computer performance.
+- **2000s-Present**: Multi-core processors enabled improved multitasking, while ARM chips became dominant in mobile devices.
+- **2010s-Present**: Processors became smaller and more powerful with nanometer technology, and AI-specific chips began to emerge.
+
+## Types of Processors
+
+Processors are divided into several types depending on their function. Some of the most common are:
+
+- **Central Processing Unit (CPU)**: The primary processor in a computer, responsible for executing the majority of instructions in a program. It handles tasks such as arithmetic, logic, and data management.
+- **Graphics Processing Unit (GPU)**: A specialized processor designed for graphics-related tasks, such as rendering images, video processing, and running complex simulations. GPUs are commonly found in gaming systems, workstations, and high-performance computing applications.
+- **Digital Signal Processor (DSP)**: A processor optimized for signal processing tasks, including sound, image, and video processing. DSPs are used in devices like smartphones, audio equipment, and telecommunications systems.
+- **Application-Specific Integrated Circuit (ASIC)**: A custom-designed processor for a specific application, often used in specialized systems such as cryptocurrency mining or network equipment.
+- **Microcontrollers**: Small, embedded processors found in everyday devices like microwaves, washing machines, and cars. They are often responsible for managing tasks and controls in these devices.
+
+Additionally, as artificial intelligence (AI) continues to grow, specialized processors called **Neural Processing Units (NPUs)**, also known as AI accelerators, are being developed to handle AI-specific workloads more efficiently.
diff --git a/content/python/concepts/statsmodels/terms/diagnostic-plots/diagnostic-plots.md b/content/python/concepts/statsmodels/terms/diagnostic-plots/diagnostic-plots.md
new file mode 100644
index 00000000000..603ee49a3a9
--- /dev/null
+++ b/content/python/concepts/statsmodels/terms/diagnostic-plots/diagnostic-plots.md
@@ -0,0 +1,90 @@
+---
+Title: 'Diagnostic Plots'
+Description: 'Diagnostic plots are visual tools used to assess the validity of regression model assumptions, detect anomalies, and evaluate model performance.'
+Subjects:
+  - 'Machine Learning'
+  - 'Data Science'
+Tags:
+  - 'Statistics'
+  - 'Properties'
+  - 'Models'
+  - 'Data'
+CatalogContent:
+  - 'learn-python-3'
+  - 'paths/computer-science'
+---
+
+**Diagnostic plots** are essential tools for evaluating the assumptions and performance of regression models. In the context of linear regression, these plots help identify potential issues such as non-linearity, non-constant variance, outliers, high leverage points, and collinearity. The `statsmodels` library in Python provides several functions to generate these diagnostic plots, aiding in assessing model fit and validity.
+
+Common diagnostic plots include:
+
+- **Residual plots**: Check for homoscedasticity and non-linearity.
+- **Q-Q plots**: Assess the normality of residuals.
+- **Leverage plots**: Identify influential points.
+- **Scale-location plots**: Detect patterns in residual variance.
+
+## Syntax
+
+`statsmodels` provides several functions for generating diagnostic plots. Two common ones are `plot_partregress_grid()` and `plot_regress_exog()`, available through `statsmodels.graphics` (typically accessed as `sm.graphics`). Both take a fitted regression results object.
+
+### `plot_partregress_grid()`
+
+The `plot_partregress_grid()` function generates partial regression plots for all explanatory variables in the model. It helps assess the relationship between the dependent variable and each explanatory variable while controlling for the other predictors.
+
+The syntax for using `plot_partregress_grid()` is:
+
+```pseudo
+sm.graphics.plot_partregress_grid(results)
+```
+
+- `results` refers to the fitted regression results object.
+
+### `plot_regress_exog()`
+
+The `plot_regress_exog()` function generates regression and residual plots for a specific independent variable. This can help check the assumption of linearity with respect to a particular predictor.
+
+The syntax for using `plot_regress_exog()` is:
+
+```pseudo
+sm.graphics.plot_regress_exog(results, exog_idx)
+```
+
+- `results` refers to the fitted regression results object.
+- `exog_idx` is the index of the explanatory variable whose relationship with the dependent variable you want to plot.
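+
+### Q-Q and Influence Plots
+
+The Q-Q and leverage plots listed earlier are produced by separate helpers rather than the two functions above: `sm.qqplot()` checks the normality of residuals, and `sm.graphics.influence_plot()` highlights high-leverage or influential observations. The snippet below is a minimal sketch; the synthetic data and variable names are illustrative assumptions, not part of a fixed recipe.
+
+```py
+import numpy as np
+import matplotlib.pyplot as plt
+import statsmodels.api as sm
+
+# Fit a small OLS model on synthetic data
+np.random.seed(1)
+X = sm.add_constant(np.random.rand(100, 2))
+y = X[:, 1] + X[:, 2] + np.random.randn(100)
+results = sm.OLS(y, X).fit()
+
+# Q-Q plot of the residuals against a normal distribution
+sm.qqplot(results.resid, line='s')
+plt.title('Q-Q Plot of Residuals')
+plt.show()
+
+# Influence plot: studentized residuals vs. leverage, sized by Cook's distance
+sm.graphics.influence_plot(results, criterion='cooks')
+plt.show()
+```
+
+A roughly straight line in the Q-Q plot suggests the residuals are approximately normal, while points that stand out in the influence plot are worth inspecting as potential outliers or high-leverage observations.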
+ +## Example + +Below is an example demonstrating how to generate diagnostic plots for a linear regression model using `statsmodels`: + +```py +import statsmodels.api as sm +import numpy as np +import pandas as pd +import matplotlib.pyplot as plt + +# Create synthetic data +np.random.seed(0) +X = np.random.rand(100, 2) +X = sm.add_constant(X) # Add constant (intercept) +y = X[:, 1] + X[:, 2] + np.random.randn(100) # Dependent variable with some noise + +# Fit linear regression model +model = sm.OLS(y, X) +results = model.fit() + +# Generate diagnostic plots for all variables +fig = plt.figure(figsize=(10, 8)) +fig = sm.graphics.plot_partregress_grid(results) +plt.show() + +# Alternatively, generate a residual plot for the first independent variable +fig = plt.figure(figsize=(10, 8)) +fig = sm.graphics.plot_regress_exog(results, 1) +plt.show() +``` + +The output will be as follows: + +![Diagnostic plots for all variables](https://raw.githubusercontent.com/Codecademy/docs/main/media/partial-regression-plot.png) + +![A residual plot for the first independent variable](https://raw.githubusercontent.com/Codecademy/docs/main/media/regression-plot-for-x1.png) diff --git a/content/pytorch/concepts/tensor-operations/terms/numel/numel.md b/content/pytorch/concepts/tensor-operations/terms/numel/numel.md new file mode 100644 index 00000000000..1cb136f0a0d --- /dev/null +++ b/content/pytorch/concepts/tensor-operations/terms/numel/numel.md @@ -0,0 +1,55 @@ +--- +Title: '.numel()' +Description: 'Returns the total number of elements in a tensor.' +Subjects: + - 'AI' + - 'Data Science' +Tags: + - 'AI' + - 'Data Types' + - 'Deep Learning' + - 'Functions' +CatalogContent: + - 'intro-to-py-torch-and-neural-networks' + - 'paths/data-science' +--- + +In PyTorch, the **`.numel()`** method calculates the product of all dimensions of the tensor to determine its total size. It's particularly useful to know the total count of elements regardless of the tensor's shape or dimensionality. + +## Syntax + +```pseudo +torch.numel(Tensor) +``` + +- `Tensor`: The input tensor whose total number of elements is to be computed. + +It returns an integer representing the total number of elements in the given tensor. + +## Example + +The following example creates a _2x3_ tensor and demonstrates how `.numel()` counts all elements across all dimensions: + +```py +import torch + +# Create a 2x3 tensor +x = torch.randn(2, 3) +print("Tensor x:") +print(x) + +y = torch.numel(x) +print("\nTotal number of elements:", y) +``` + +The above code produces the following output: + +```shell +Tensor x: +tensor([[-1.0727, 0.3469, -1.2021], + [ 0.0424, 0.1689, 2.6234]]) + +Total number of elements: 6 +``` + +> **Note:** The output varies on each run because `torch.randn(2, 3)` generates random values from a normal distribution. diff --git a/content/pytorch/concepts/tensor-operations/terms/unravel-index/unravel-index.md b/content/pytorch/concepts/tensor-operations/terms/unravel-index/unravel-index.md new file mode 100644 index 00000000000..19e5a26dd86 --- /dev/null +++ b/content/pytorch/concepts/tensor-operations/terms/unravel-index/unravel-index.md @@ -0,0 +1,93 @@ +--- +Title: '.unravel_index()' +Description: 'Converts flat indices into coordinate tuples based on the shape of a tensor, enabling multi-dimensional indexing.' 
+Subjects: + - 'Computer Science' + - 'Data Science' +Tags: + - 'Tensor' + - 'Index' + - 'PyTorch' +CatalogContent: + - 'learn-python-3' + - 'paths/data-science' +--- + +The **`.unravel_index()`** function in PyTorch maps flat (1D) indices to multi-dimensional coordinates using a specified [tensor](https://www.codecademy.com/resources/docs/pytorch/tensors) shape. This is particularly useful when working with operations that return linear indices and to find the positions in the original tensor’s dimensions. + +## Syntax + +```pseudo +torch.unravel_index(indices, shape) +``` + +- `indices` (Tensor): A 1D tensor containing flat indices to convert. +- `shape` (Tuple): The dimensions of the target tensor (e.g., `(rows, columns)`). + +Returns a tuple of tensors, where each tensor represents the coordinate values along a specific dimension of the target shape. + +## Example + +### Basic Usage + +Converting flat indices `[3, 1, 5]` into 2D coordinates for a tensor of shape `(2, 3)`: + +```py +import torch + +# Flat indices and target shape +indices = torch.tensor([3, 1, 5]) +shape = (2, 3) + +# Get multi-dimensional coordinates +coords = torch.unravel_index(indices, shape) + +print("Coordinates (row, column):") +for row, col in zip(*coords): + print(f"({row}, {col})") +``` + +The above code will return the following output: + +```shell +Coordinates (row, column): +(1, 0) +(0, 1) +(1, 2) +``` + +### 3D Tensor Example + +Convert flat indices to coordinates in a 3D tensor of shape `(2, 2, 3)`: + +```py +import torch + +indices_3d = torch.tensor([7, 2]) +shape_3d = (2, 2, 3) # Dimensions: (depth, rows, columns) + +coords_3d = torch.unravel_index(indices_3d, shape_3d) + +print("Coordinates (depth, row, column):") +for d, r, c in zip(*coords_3d): + print(f"({d}, {r}, {c})") +``` + +The above code returns the following output: + +```shell +Coordinates (depth, row, column): +(1, 0, 1) +(0, 0, 2) +``` + +For the 2D case (`shape = (2, 3)`) + +- Index 3 corresponds to row `1` (`3 // 3 = 1`), column `0` (`3 % 3 = 0`). +- Index 1 corresponds to row `0` (`1 // 3 = 0`), column `1` (`1 % 3 = 1`). +- Index 5 corresponds to row `1` (`5 // 3 = 1`), column `2` (`5 % 3 = 2`). + +For the 3D case (`shape = (2, 2, 3)`) + +- Index 7 is in depth `1` (`7 // (2 * 3) = 1`), row `0` (`(7 % 6) // 3 = 0`), column `1` (`(7 % 6) % 3 = 1`). +- Index 2 is in depth `0` (`2 // (2 * 3) = 0`), row `0` (`(2 % 6) // 3 = 0`), column `2` (`(2 % 6) % 3 = 2`). diff --git a/content/scipy/concepts/scipy-stats/terms/probability-distributions/probability-distributions.md b/content/scipy/concepts/scipy-stats/terms/probability-distributions/probability-distributions.md new file mode 100644 index 00000000000..7f09aa4f1b4 --- /dev/null +++ b/content/scipy/concepts/scipy-stats/terms/probability-distributions/probability-distributions.md @@ -0,0 +1,118 @@ +--- +Title: 'Probability Distributions' +Description: 'In SciPy, a probability distribution defines the likelihood of outcomes for a random variable, with functions for density, cumulative probability, and sampling.' +Subjects: + - 'Computer Science' + - 'Data Science' +Tags: + - 'Data' + - 'Functions' + - 'Math' + - 'Python' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +In statistics and probability theory, a probability distribution describes how the values of a random variable are spread or distributed. It gives the probabilities of the possible outcomes of an experiment or event. 
In the context of SciPy, the `scipy.stats` module provides a wide range of probability distributions that can be used for modeling, simulating, and analyzing random processes.
+
+SciPy's `scipy.stats` module includes continuous and discrete distributions. **Continuous distributions**, such as normal or exponential distributions, are used to model variables that can take any real value within a range. **Discrete distributions**, such as the binomial or Poisson distributions, model scenarios where outcomes are limited to specific, countable values.
+
+The `scipy.stats` module offers various functions for each distribution type, including:
+
+- **Probability Density Function (PDF)**: Describes the likelihood of a given value under a continuous distribution.
+- **Cumulative Distribution Function (CDF)**: Gives the probability that a random variable will take a value less than or equal to a specified value.
+- **Random Variates**: Functions to generate random samples from a specified distribution.
+- **Statistical Moments**: Functions for calculating the mean, variance, skewness, and kurtosis of the distribution.
+
+These distributions are essential tools for tasks such as hypothesis testing, statistical modeling, simulations, and generating synthetic data that follows known statistical properties.
+
+## Syntax
+
+Each distribution is represented by a corresponding class in `scipy.stats`, which provides methods for computing properties like probability density, cumulative distribution, and random sampling.
+
+```pseudo
+from scipy import stats
+
+# For continuous distributions
+dist = stats.norm(loc=0, scale=1)  # Normal distribution with mean 0 and std 1
+
+# For discrete distributions
+dist_binom = stats.binom(n=10, p=0.5)  # Binomial distribution with 10 trials and 0.5 probability of success
+```
+
+### Methods Available for Probability Distributions
+
+- `pdf(x)`: Probability Density Function (for continuous distributions)
+- `pmf(x)`: Probability Mass Function (for discrete distributions)
+- `cdf(x)`: Cumulative Distribution Function
+- `rvs(size)`: Random variates (sampling from the distribution)
+- `mean()`: Mean of the distribution
+- `std()`: Standard deviation of the distribution
+
+## Examples
+
+### Normal Distribution
+
+In this example, random samples are drawn from a normal distribution and plotted against its probability density function (PDF).
+
+```py
+import numpy as np
+import matplotlib.pyplot as plt
+from scipy import stats
+
+# Define a normal distribution with mean 0 and standard deviation 1
+dist = stats.norm(loc=0, scale=1)
+
+# Generate 1000 random samples from the distribution
+samples = dist.rvs(size=1000)
+
+# Plot the histogram of the samples
+plt.hist(samples, bins=30, density=True, alpha=0.6, color='g')
+
+# Plot the PDF of the normal distribution
+x = np.linspace(-4, 4, 100)
+pdf = dist.pdf(x)
+plt.plot(x, pdf, 'k', linewidth=2)
+plt.title('Normal Distribution: Mean = 0, Std = 1')
+plt.show()
+```
+
+This code generates random samples from a standard normal distribution and visualizes both the histogram of the samples and the probability density function (PDF).
+
+The output will be:
+
+![Normal Distribution](https://raw.githubusercontent.com/Codecademy/docs/main/media/normal-distribution.png)
+
+### Binomial Distribution
+
+In this example, a binomial distribution is used to compute the probability of a specific number of successes in a series of trials.
+
+```py
+import matplotlib.pyplot as plt
+from scipy import stats
+
+# Define a binomial distribution with 10 trials and probability of success 0.5
+dist_binom = stats.binom(n=10, p=0.5)
+
+# Probability of getting exactly 5 successes
+prob_5_successes = dist_binom.pmf(5)
+print(f"Probability of 5 successes: {prob_5_successes}")
+
+# Generate 1000 random samples
+samples_binom = dist_binom.rvs(size=1000)
+
+# Plot the histogram of the binomial samples
+plt.hist(samples_binom, bins=10, density=True, alpha=0.6, color='blue')
+plt.title('Binomial Distribution: n = 10, p = 0.5')
+plt.show()
+```
+
+In this case, a binomial distribution with 10 trials and a 50% chance of success is defined. The example computes the probability of obtaining exactly 5 successes and visualizes the distribution of random samples.
+
+The output will be:
+
+```shell
+Probability of 5 successes: 0.2460937500000002
+```
+
+![Binomial Distribution](https://raw.githubusercontent.com/Codecademy/docs/main/media/binomial-distribution.png)
diff --git a/media/binomial-distribution.png b/media/binomial-distribution.png
new file mode 100644
index 00000000000..ea672320c28
Binary files /dev/null and b/media/binomial-distribution.png differ
diff --git a/media/general-linux.png b/media/general-linux.png
new file mode 100644
index 00000000000..5805fcc2e85
Binary files /dev/null and b/media/general-linux.png differ
diff --git a/media/normal-distribution.png b/media/normal-distribution.png
new file mode 100644
index 00000000000..5f5b902d6bc
Binary files /dev/null and b/media/normal-distribution.png differ
diff --git a/media/partial-regression-plot.png b/media/partial-regression-plot.png
new file mode 100644
index 00000000000..5c67d7e4dca
Binary files /dev/null and b/media/partial-regression-plot.png differ
diff --git a/media/regression-plot-for-x1.png b/media/regression-plot-for-x1.png
new file mode 100644
index 00000000000..ed3b36d5a95
Binary files /dev/null and b/media/regression-plot-for-x1.png differ