Hello World

Hi there! 👋 I'm Ruize Xia (xiaruize0911), a student at Nanjing Foreign Language School in Nanjing, Jiangsu, China. I am passionate about learning and exploring new technologies and ideas.

🌍 Location

📍 Nanjing, Jiangsu, China

📫 Contact Me

Email: xiaruize0911@gmail.com
QQ: 2188298460
WeChat: xiaruize0911

Thank you for visiting! Feel free to explore my blog and connect with me. 🚀

Gradient Descent

Gradient Descent is an iterative algorithm for optimizing objective functions, widely used in machine learning and deep learning to train models by minimizing loss functions. It is recommended to read the Machine Learning Basics article first for background.

Objective Function

Objective Function: sometimes also called the criterion, cost function, or loss function, this is the function we aim to minimize, usually denoted $J(\theta)$, where $\theta$ is the model's parameters. For example, the Mean Squared Error (MSE) used in linear regression is a common objective function. ...
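The excerpt above can be sketched in a few lines: minimize the MSE objective $J(\theta)$ for linear regression by repeatedly stepping against its gradient. The names (`X`, `y`, `theta`, `lr`) and the learning rate are illustrative assumptions, not taken from the post.

```python
import numpy as np

def mse(theta, X, y):
    """Objective J(theta) = mean((X @ theta - y)^2)."""
    return np.mean((X @ theta - y) ** 2)

def gradient_descent(X, y, lr=0.1, steps=500):
    """Iteratively update theta against the gradient of the MSE objective."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ theta - y) / len(y)  # dJ/dtheta for MSE
        theta -= lr * grad                           # step downhill
    return theta

# Toy data generated by y = 3x, so theta should converge near 3.0.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([3.0, 6.0, 9.0])
theta = gradient_descent(X, y)
```

Here each iteration moves $\theta$ a small step in the direction that decreases $J(\theta)$ fastest; the learning rate `lr` controls the step size.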

September 25, 2025

Machine Learning Basics

This article is based on Chapter 5 of Machine Learning Yearning and introduces some basic concepts and methods of machine learning.

Supervised Learning

This article mainly discusses supervised learning algorithms. First, let's introduce some basic concepts:

Training Set: the dataset used to train the model, containing input data (samples, $X$) and their corresponding labels ($y$).
Sample: each data point in the training set, usually represented as a vector $x^{(i)}$, where $i$ is the sample index.
Test Set: the dataset used to evaluate model performance, containing unseen input data and corresponding labels.
Input: the input data to the model, usually represented as a vector $x \in \mathbb{R}^n$, where $n$ is the input dimension.
Output: the model's output, usually a scalar $y$, which can be continuous (regression) or discrete (classification).
Feature: each dimension of the input data, written $x_1, x_2, \ldots, x_n$.
Label: the true output value corresponding to the input data, generally denoted $y$.

The main goal of a supervised learning algorithm is to learn an approximation $\hat{P}(y|x)$ of $P(y|x)$ from the training set, so that for a new input $x$ the corresponding output $y$ can be predicted. ...
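To make the notation above concrete, here is a toy training set in the same terms: each row of `X_train` is a sample $x^{(i)}$, `y_train` holds the discrete labels (so this is classification), and a crude 1-nearest-neighbour rule stands in for estimating $\hat{P}(y|x)$. The data and the 1-NN choice are illustrative assumptions, not from the article.

```python
import numpy as np

# Training set: four samples x^(i) with two features each, plus labels y.
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.1], [0.1, -0.1]])
y_train = np.array([0, 1, 1, 0])  # discrete labels -> classification

def predict(x, X, y):
    """1-nearest-neighbour: a crude proxy for argmax_y P(y|x)."""
    i = np.argmin(np.linalg.norm(X - x, axis=1))  # closest training sample
    return y[i]

# Unseen inputs (a stand-in test set); predict the label for each.
X_test = np.array([[0.95, 1.0], [0.05, 0.0]])
preds = [predict(x, X_train, y_train) for x in X_test]
```

Each test point inherits the label of its nearest training sample, which is enough to illustrate the train/test split and the input-to-output mapping.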

September 22, 2025

Deep Feedforward Neural Networks

This article is a study note for Chapter 6 of the book Deep Learning.

Introduction

Deep feedforward networks, also known as Multilayer Perceptrons (MLPs), are the most classic neural network models. From the perspective of graph theory, a deep feedforward network is a Directed Acyclic Graph (DAG): each node represents a neuron, and each edge represents a connection weight. Connections between nodes are directional, and information flows only along the direction of the edges, so deep feedforward networks have no cycles or feedback connections. ...
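The one-way, cycle-free flow described above can be sketched as a forward pass: the input passes through each layer in order and never loops back. The layer sizes, random weights, and the ReLU activation here are illustrative assumptions, not details from the note.

```python
import numpy as np

def relu(z):
    """Elementwise ReLU activation: max(0, z)."""
    return np.maximum(0.0, z)

def forward(x, layers):
    """Feed x through (W, b) layers in order; ReLU between hidden layers.

    Information only moves forward, mirroring the DAG structure:
    no cycles, no feedback connections.
    """
    h = x
    for W, b in layers[:-1]:
        h = relu(W @ h + b)
    W, b = layers[-1]
    return W @ h + b  # linear output layer

rng = np.random.default_rng(0)
# A 3 -> 4 -> 2 network: two weight matrices, two bias vectors.
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
out = forward(np.ones(3), layers)  # output vector of size 2
```

Each edge weight lives in one of the `W` matrices, and evaluating the network is just visiting the DAG's nodes in topological (layer) order.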

September 22, 2025

Deep Learning Study Notes

Preface

This article is a summary directory of study notes and project records related to Deep Learning (DL). Thank you for reading. If you have any questions, feel free to leave a comment or contact me by email.

Contents

Deep Feedforward Neural Networks

September 22, 2025