Glossary
**GAN (Generative Adversarial Network)**: A neural network architecture that pits two models against each other: a generator that produces synthetic samples and a discriminator that tries to distinguish them from real data, training both until the generator yields realistic data such as images or text.
**Gaussian Mixture Model (GMM)**: A probabilistic model that represents a distribution as a mixture of multiple Gaussian distributions, often used in clustering and density estimation.
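A minimal sketch of fitting a GMM for clustering with scikit-learn; the two-cluster synthetic data and the parameter choices are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two synthetic clusters drawn from different Gaussians.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# Fit a 2-component GMM and assign each point to its most likely component.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
print(gmm.means_)  # estimated component means
```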
**Gaussian Process**: A non-parametric model used for regression and classification tasks that defines a distribution over functions, allowing for predictions with uncertainty estimates.
**Genetic Algorithm**: An optimization algorithm inspired by natural selection, where candidate solutions evolve over time through operations like mutation, crossover, and selection.
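A minimal sketch of selection, crossover, and mutation on a toy problem (maximizing f(x) = -(x - 3)^2; all parameter choices are illustrative):

```python
import random

def fitness(x):
    return -(x - 3) ** 2  # maximized at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(50):
    # Selection: keep the fittest half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Crossover: average two random parents; mutation: add small noise.
    children = [(random.choice(parents) + random.choice(parents)) / 2
                + random.gauss(0, 0.1) for _ in range(10)]
    population = parents + children

print(max(population, key=fitness))  # close to 3
```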
**Gibbs Sampling**: A Markov Chain Monte Carlo (MCMC) algorithm used to sample from a high-dimensional probability distribution by iteratively sampling from the conditional distributions of each variable.
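A minimal sketch for a standard bivariate Gaussian with correlation rho, where each conditional distribution is itself Gaussian (the value of rho is illustrative):

```python
import numpy as np

rho = 0.8          # correlation of the target bivariate normal
rng = np.random.default_rng(0)

x, y = 0.0, 0.0
samples = []
for _ in range(5000):
    # Each conditional of a standard bivariate normal is N(rho * other, 1 - rho^2).
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    samples.append((x, y))

print(np.corrcoef(np.array(samples).T))  # off-diagonal close to rho
```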
**Gradient Descent**: An optimization algorithm that minimizes a function by iteratively stepping in the direction of steepest descent (the negative gradient), commonly used to train machine learning models by minimizing the loss function.
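A minimal sketch on a one-dimensional quadratic, where the gradient can be written by hand (the learning rate and step count are illustrative):

```python
# Minimize f(w) = (w - 4)^2, whose gradient is 2 * (w - 4).
w = 0.0
learning_rate = 0.1
for step in range(100):
    grad = 2 * (w - 4)
    w -= learning_rate * grad  # step against the gradient

print(w)  # converges to 4
```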
**Gradient Clipping**: A technique used in training deep neural networks to prevent the exploding gradient problem by capping the gradients at a maximum value during backpropagation.
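A minimal sketch of clipping by global L2 norm, one common variant (the cap values are illustrative):

```python
import numpy as np

def clip_by_norm(grad, max_norm=1.0):
    """Rescale the gradient if its L2 norm exceeds max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([30.0, 40.0])            # norm 50, far above the cap
print(clip_by_norm(g, max_norm=5.0))  # rescaled to norm 5
```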
**Gradient Boosting**: An ensemble learning technique that builds models sequentially, with each new model correcting the errors of the previous ones; it is the approach behind high-performance libraries such as XGBoost.
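A minimal sketch using scikit-learn's implementation (the synthetic data and hyperparameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)
# Each of the 100 shallow trees fits the residual errors of the ensemble so far.
model = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
model.fit(X, y)
print(model.score(X, y))
```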
**Gradient Checking**: A technique used to verify the correctness of the implementation of backpropagation in neural networks by comparing the analytical gradient to the numerical gradient.
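A minimal sketch comparing a hand-derived gradient to a central-difference approximation (the test function is illustrative):

```python
import numpy as np

def f(x):
    return np.sum(x ** 2)

def analytic_grad(x):
    return 2 * x  # hand-derived gradient of f

def numerical_grad(x, eps=1e-6):
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        # Central difference approximates the partial derivative.
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

x = np.array([1.0, -2.0, 3.0])
print(np.allclose(analytic_grad(x), numerical_grad(x)))  # True if correct
```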
**Graph Neural Network (GNN)**: A type of neural network designed to operate on graph structures, where nodes represent entities and edges represent relationships, often used in tasks like social network analysis and recommendation systems.
**Graphical Model**: A probabilistic model where variables are represented as nodes and their dependencies are represented as edges, used to model complex distributions and relationships in data.
**Grid Search**: A hyperparameter tuning technique that exhaustively searches over a specified grid of hyperparameter values to find the best combination for a model.
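A minimal sketch using scikit-learn's `GridSearchCV` (the grid values and synthetic data are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
# Every C/gamma combination is evaluated with 5-fold cross-validation.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```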
**Ground Truth**: The actual or true value of a variable or label in a dataset, used as the reference for evaluating the accuracy of machine learning models.
**Group Normalization**: A normalization technique that divides the channels into groups and normalizes each group independently, often used in deep learning models to stabilize training.
**Gaussian Distribution**: A continuous probability distribution characterized by a symmetric, bell-shaped curve, also known as the normal distribution, widely used in statistics and machine learning.
**Gated Recurrent Unit (GRU)**: A type of recurrent neural network (RNN) architecture similar to LSTM but with a simpler structure, used to capture temporal dependencies in sequence data.
**Gini Index**: A measure of node impurity used in decision trees to evaluate the quality of a split; a lower Gini index indicates purer child nodes and therefore a better split.
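A minimal sketch of computing Gini impurity for a set of class labels:

```python
import numpy as np

def gini_impurity(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)

print(gini_impurity([0, 0, 0, 0]))  # 0.0, a pure node
print(gini_impurity([0, 0, 1, 1]))  # 0.5, maximally mixed for two classes
```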
**Graph Embedding**: A technique for representing nodes, edges, or entire subgraphs as vectors in a continuous vector space, enabling machine learning tasks on graph data such as node classification or link prediction.
**Global Average Pooling (GAP)**: A pooling operation commonly used in Convolutional Neural Networks (CNNs) that averages each feature map down to a single value, collapsing the spatial dimensions and often replacing fully connected layers at the end of the network.
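A minimal sketch with NumPy, assuming feature maps laid out as (batch, channels, height, width):

```python
import numpy as np

# A batch of feature maps with shape (batch, channels, height, width).
feature_maps = np.random.rand(8, 64, 7, 7)
# GAP averages over the spatial dimensions, leaving one value per channel.
gap = feature_maps.mean(axis=(2, 3))
print(gap.shape)  # (8, 64)
```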
**Generalization**: The ability of a machine learning model to perform well on new, unseen data, reflecting how well the model captures the underlying patterns rather than just memorizing the training data.
**Gaussian Blur**: An image processing technique that smooths an image by convolving it with a Gaussian kernel, i.e., replacing each pixel with a Gaussian-weighted average of its neighbors, often used to reduce noise and detail in an image.
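A minimal sketch using SciPy; the random array stands in for a grayscale image and the sigma value is illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(128, 128)  # stand-in for a grayscale image
# sigma controls the width of the Gaussian kernel: larger means blurrier.
blurred = gaussian_filter(image, sigma=2.0)
```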
**Gaussian Naive Bayes**: A variant of the Naive Bayes algorithm that assumes that the continuous features follow a Gaussian distribution, often used for classification tasks.
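A minimal sketch using scikit-learn (the synthetic data is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, random_state=0)
# Per class, each feature is modeled as an independent Gaussian.
model = GaussianNB().fit(X, y)
print(model.predict(X[:5]))
```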
**Grammatical Inference**: The process of learning grammars or formal languages from data, often used in natural language processing and computational linguistics.
**Graph Isomorphism**: The problem of determining whether two graphs are structurally identical, meaning they contain the same nodes and edges but possibly in a different arrangement.
**Greedy Algorithm**: An algorithmic approach that makes the locally optimal choice at each step in the hope of finding a global optimum, often used in optimization problems such as the fractional knapsack problem.
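A minimal sketch of the greedy solution to the fractional knapsack problem, where always taking the best remaining value-per-weight ratio is optimal (the item values are illustrative):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight); fractions of items are allowed."""
    # Greedy choice: always take the best remaining value-per-weight ratio.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        take = min(weight, capacity)
        total += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50))  # 240.0
```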
**Gaussian Kernel**: A kernel function, also known as the radial basis function (RBF) kernel, used in support vector machines and other algorithms; it computes the similarity between two points as a Gaussian function of their distance, commonly used in non-linear classification tasks.
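A minimal sketch of the kernel itself, k(x, y) = exp(-||x - y||^2 / (2 sigma^2)):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Similarity decays with squared distance between the points."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

print(gaussian_kernel(np.array([0.0, 0.0]), np.array([0.0, 0.0])))  # 1.0, identical points
print(gaussian_kernel(np.array([0.0, 0.0]), np.array([3.0, 4.0])))  # near 0, distant points
```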
**Generative Model**: A type of model that learns the joint probability distribution of the input data, allowing it to generate new data samples from the learned distribution.
**Gradient Penalty**: A regularization technique used in the training of Generative Adversarial Networks (GANs) to enforce the Lipschitz constraint, improving the stability of training.
**Gradient Tape**: A programming construct used in automatic differentiation frameworks to record operations on tensors, allowing for the computation of gradients in deep learning models.
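The term comes from TensorFlow's `tf.GradientTape` API; a minimal sketch of recording an operation and retrieving its gradient:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2          # operations on x are recorded on the tape
grad = tape.gradient(y, x)
print(grad)             # dy/dx = 2x = 6.0
```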
**Geometric Brownian Motion**: A stochastic process often used in financial modeling to represent the random evolution of stock prices or other financial variables over time.
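A minimal simulation sketch using the closed form S_t = S_0 · exp((mu - sigma²/2)·t + sigma·W_t); the drift, volatility, and initial price are illustrative:

```python
import numpy as np

s0, mu, sigma = 100.0, 0.05, 0.2   # initial price, drift, volatility
n_steps, dt = 252, 1 / 252         # one year of daily steps
rng = np.random.default_rng(0)

increments = rng.normal(0, np.sqrt(dt), n_steps)  # Brownian increments
w = np.cumsum(increments)                         # Brownian path W_t
t = np.arange(1, n_steps + 1) * dt
path = s0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * w)
print(path[-1])  # simulated price after one year
```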
**Global Optimization**: The task of finding the global minimum or maximum of an objective function, as opposed to a local minimum or maximum, often more challenging due to the presence of multiple local optima.
**Goal-Oriented Dialogue System**: A type of conversational AI system designed to achieve specific objectives, such as booking a flight or answering customer service queries, often using reinforcement learning to optimize interactions.
**Gradient Ascent**: The counterpart of gradient descent; an optimization algorithm that maximizes a function by iteratively moving in the direction of steepest ascent, for example when maximizing a log-likelihood or an expected reward.
**Grounding**: In natural language processing, the process of linking linguistic expressions to real-world entities or concepts, enabling the understanding and generation of meaningful language.
**Genetic Programming**: A type of evolutionary algorithm where computer programs are optimized to perform a specific task, often used to evolve algorithms or symbolic expressions.
**Generative Pre-trained Transformer (GPT)**: A type of transformer-based model pre-trained on large text corpora to generate coherent and contextually relevant text, often used in natural language processing tasks.
**Gaussian Elimination**: A mathematical algorithm for solving systems of linear equations by transforming the system's augmented matrix into row echelon form, often used in numerical linear algebra.
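A minimal sketch with partial pivoting followed by back substitution (the example system is illustrative):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b via forward elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for i in range(n):
        # Partial pivoting: swap in the row with the largest pivot for stability.
        p = i + np.argmax(np.abs(A[i:, i]))
        A[[i, p]], b[[i, p]] = A[[p, i]], b[[p, i]]
        for j in range(i + 1, n):
            factor = A[j, i] / A[i, i]
            A[j, i:] -= factor * A[i, i:]
            b[j] -= factor * b[i]
    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_elimination(A, b))  # [0.8, 1.4]
```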
**Gumbel Softmax**: A reparameterization trick used to sample from a categorical distribution in a differentiable manner, often used in reinforcement learning and generative models.
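A minimal sketch of the sampling step: add Gumbel noise to the logits, then apply a temperature-scaled softmax (the temperature value is illustrative):

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=np.random.default_rng()):
    """Draw a differentiable, near-one-hot sample from a categorical distribution."""
    # Gumbel noise: -log(-log(U)) with U ~ Uniform(0, 1).
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + gumbel) / temperature  # lower temperature -> closer to one-hot
    e = np.exp(z - z.max())              # numerically stable softmax
    return e / e.sum()

print(gumbel_softmax(np.log(np.array([0.7, 0.2, 0.1]))))
```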
**Gradient Noise**: Random perturbations added to the gradients during training, often used to prevent overfitting and improve the robustness of the model.
**Global Alignment**: A method in sequence alignment that aligns two sequences along their entire length, often used in bioinformatics for comparing DNA, RNA, or protein sequences.
**Graph Attention Network (GAT)**: A type of graph neural network that uses attention mechanisms to focus on the most relevant parts of the graph when making predictions.
**Grayscale**: An image processing technique where an image is converted to shades of gray, removing color information while retaining intensity information, often used in preprocessing for computer vision tasks.
**Generalized Linear Model (GLM)**: A flexible generalization of ordinary linear regression that allows for response variables to have error distribution models other than a normal distribution, often used in statistical modeling.
**Gradient Descent Variants**: Different forms of gradient descent algorithms, such as Stochastic Gradient Descent (SGD), Mini-Batch Gradient Descent, and Batch Gradient Descent, each offering different trade-offs in terms of speed and accuracy.
**Gradient Exploding**: A problem in training deep neural networks, also known as the exploding gradient problem, where gradients grow uncontrollably large, leading to unstable training and diverging parameters; gradient clipping is a common mitigation.
**Graph Matching**: The process of finding correspondences between the nodes and edges of two graphs, often used in computer vision, pattern recognition, and network analysis.
**Gradient Histogram**: A histogram that represents the distribution of gradient magnitudes in an image, often used in feature extraction techniques like Histogram of Oriented Gradients (HOG).
**Graph Partitioning**: The process of dividing a graph into smaller subgraphs while minimizing the number of edges cut, often used in parallel computing and network analysis.
**Gradient Flow**: The movement of gradients during the training of a neural network, which can be analyzed to understand issues like vanishing or exploding gradients.
**Goal-conditioned Reinforcement Learning**: A variant of reinforcement learning where the agent is trained to achieve specific goals, allowing for more flexible and targeted learning.
**Gaussian Copula**: A statistical model that describes the dependence between random variables by coupling their marginal distributions through a multivariate Gaussian dependence structure, often used in finance and risk management.