{
"cells": [
{
"cell_type": "code",
"execution_count": 14,
"outputs": [],
"source": [
"import numpy as np\n",
"import tensorflow as tf\n",
"\n",
"from d2l import tensorflow as d2l\n",
"\n",
"true_w = tf.constant([2, -3.4])\n",
"true_b = 4.2\n",
"features, labels = d2l.synthetic_data(true_w, true_b, 1000)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-05-13T19:02:06.299583Z",
"start_time": "2024-05-13T19:02:06.263839Z"
}
},
"id": "initial_id"
},
{
"cell_type": "markdown",
"source": [
"Reading the Dataset\n",
"\n",
"We can call upon the existing API in a framework to read data. We pass features and labels as arguments and specify batch_size when instantiating the data iterator. In addition, the boolean value is_train indicates whether or not we want the data iterator object to shuffle the data on each epoch (pass through the dataset)."
],
"metadata": {
"collapsed": false
},
"id": "e809277140e059b9"
},
{
"cell_type": "code",
"execution_count": 6,
"outputs": [],
"source": [
"def load_array(data_arrays, batch_size, is_train=True): #@save\n",
" \"\"\"Construct a TensorFlow data iterator\"\"\"\n",
" dataset = tf.data.Dataset.from_tensor_slices(data_arrays)\n",
" if is_train:\n",
" dataset = dataset.shuffle(buffer_size=1000)\n",
" dataset = dataset.batch(batch_size)\n",
" return dataset\n",
"\n",
"batch_size = 10\n",
"data_iter = load_array((features, labels), batch_size)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-05-13T18:26:55.253446Z",
"start_time": "2024-05-13T18:26:55.238851Z"
}
},
"id": "e252e1932db7d695"
},
{
"cell_type": "markdown",
"source": [
"We use data_iter in much the same way as we called the data_iter function in Section 3.2. To verify that it is working, we read and print the first minibatch of examples. Unlike in Section 3.2, here we use iter to construct a Python iterator and use next to obtain the first item from the iterator."
],
"metadata": {
"collapsed": false
},
"id": "988732fb6606bf66"
},
{
"cell_type": "code",
"execution_count": 7,
"outputs": [
{
"data": {
"text/plain": "(<tf.Tensor: shape=(10, 2), dtype=float32, numpy=\n array([[-0.32025504, -0.0979122 ],\n [-0.06632588, -0.09948308],\n [ 0.37965608, -0.4451669 ],\n [ 1.3547533 , -0.5283366 ],\n [ 0.93110466, 0.3636887 ],\n [ 0.10002718, 0.50685346],\n [-1.9711956 , 0.08630205],\n [ 0.537177 , 1.8008459 ],\n [-1.9760898 , -0.09219848],\n [-2.2803571 , -1.2965533 ]], dtype=float32)>,\n <tf.Tensor: shape=(10, 1), dtype=float32, numpy=\n array([[ 3.8989818 ],\n [ 4.4033084 ],\n [ 6.4690523 ],\n [ 8.722759 ],\n [ 4.818675 ],\n [ 2.6789532 ],\n [-0.02284066],\n [-0.8447736 ],\n [ 0.5562263 ],\n [ 4.0474586 ]], dtype=float32)>)"
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"next(iter(data_iter))"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-05-13T18:26:58.954492Z",
"start_time": "2024-05-13T18:26:58.888945Z"
}
},
"id": "2ca6f2d456922c25"
},
{
"cell_type": "markdown",
"source": [
"Defining the Model\n",
"\n",
"When we implemented linear regression from scratch in Section 3.2, we defined our model parameters explicitly and coded up the calculations to produce output using basic linear algebra operations. But once models get more complex, and once you have to implement them nearly every day, you will naturally want to simplify the process. The situation is similar to coding up your own blog from scratch: doing it once or twice is rewarding and instructive, but it would not be efficient if every new blog demanded that an engineer spend a month reinventing the wheel.\n",
"\n",
"For standard deep learning models, we can use a framework's predefined layers. This allows us to focus on which layers are used to construct the model rather than on the details of their implementation. We first define a model variable net, which is an instance of the Sequential class. The Sequential class chains multiple layers together: given input data, a Sequential instance passes the data through the first layer, then feeds the first layer's output to the second layer as input, and so on. In the example below, our model consists of only one layer, so we do not really need Sequential. But since nearly all of our future models will involve multiple layers, using Sequential here will familiarize you with the standard pipeline.\n",
"\n",
"Recall the architecture of the single-layer network shown in Fig. 3.1.2. The layer is called a fully-connected layer, because each of its inputs is connected to each of its outputs by means of a matrix-vector multiplication."
],
"metadata": {
"collapsed": false
},
"id": "410e67cebcf9d8b7"
},
{
"cell_type": "code",
"execution_count": 8,
"outputs": [],
"source": [
"# keras is the high-level API of TensorFlow\n",
"net = tf.keras.Sequential()\n",
"net.add(tf.keras.layers.Dense(1))"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-05-13T18:27:04.355480Z",
"start_time": "2024-05-13T18:27:04.289974Z"
}
},
"id": "24de7cc7da91fe11"
},
{
"cell_type": "markdown",
"source": [
"Initializing Model Parameters\n",
"\n",
"Before using net, we need to initialize the model parameters, such as the weights and bias in the linear regression model. Deep learning frameworks often have predefined methods to initialize the parameters. Here, we specify that each weight parameter should be randomly sampled from a normal distribution with mean 0 and standard deviation 0.01, and that the bias parameter should be initialized to zero.\n",
"\n",
"The initializers module in TensorFlow provides various methods for model parameter initialization. The easiest way to specify the initialization method in Keras is to set kernel_initializer when creating the layer. Here we recreate net."
],
"metadata": {
"collapsed": false
},
"id": "9205b0b556c6432c"
},
{
"cell_type": "code",
"execution_count": 9,
"outputs": [],
"source": [
"initializer = tf.initializers.RandomNormal(stddev=0.01)\n",
"net = tf.keras.Sequential()\n",
"net.add(tf.keras.layers.Dense(1, kernel_initializer=initializer))"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-05-13T18:27:14.035097Z",
"start_time": "2024-05-13T18:27:13.982645Z"
}
},
"id": "2fb5fbf4fabaee24"
},
{
"cell_type": "markdown",
"source": [
"The code above may look straightforward, but there is a subtlety worth noting: we are initializing parameters for a network even though Keras does not yet know how many dimensions the input will have! It might be 2, as in our example, or it might be 2000. Keras lets us get away with this because, behind the scenes, the initialization is actually deferred: the real initialization takes place only the first time we attempt to pass data through the network. Just be careful to remember that, since the parameters have not been initialized yet, we cannot access or manipulate them."
],
"metadata": {
"collapsed": false
},
"id": "9b84d89005435a4"
},
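{
"cell_type": "markdown",
"source": [
"As a minimal sketch of this deferred behavior (the demo_net variable and the dummy input shape below are illustrative assumptions, not part of the running example): before any data passes through the network, the list of weights is empty; a first forward pass triggers the actual initialization, and the kernel and bias appear with shapes inferred from the input.\n"
],
"metadata": {
"collapsed": false
},
"id": "deferred_init_sketch_md"
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"# Hypothetical sketch: observing deferred initialization in Keras\n",
"demo_net = tf.keras.Sequential()\n",
"demo_net.add(tf.keras.layers.Dense(1, kernel_initializer=initializer))\n",
"print(len(demo_net.get_weights()))  # no parameters have been created yet\n",
"_ = demo_net(tf.zeros((1, 2)))  # first forward pass triggers initialization\n",
"print([w.shape for w in demo_net.get_weights()])  # kernel and bias now exist\n"
],
"metadata": {
"collapsed": false
},
"id": "deferred_init_sketch"
},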
{
"cell_type": "markdown",
"source": [
"Defining the Loss Function"
],
"metadata": {
"collapsed": false
},
"id": "25cc1b368d8a8b15"
},
{
"cell_type": "markdown",
"source": [
"The MeanSquaredError class computes the mean squared error, also known as the squared $L_2$ norm. By default, it returns the average loss over the examples."
],
"metadata": {
"collapsed": false
},
"id": "d9824c0215f38fa0"
},
{
"cell_type": "code",
"execution_count": 10,
"outputs": [],
"source": [
"loss = tf.keras.losses.MeanSquaredError()"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-05-13T18:27:20.427456Z",
"start_time": "2024-05-13T18:27:20.413261Z"
}
},
"id": "e2d1727f98ed0d29"
},
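{
"cell_type": "markdown",
"source": [
"As a quick illustrative check of that averaging behavior (the toy tensors below are assumptions for demonstration, not drawn from our dataset): with per-example squared errors of 1 and 4, the returned loss is their mean.\n"
],
"metadata": {
"collapsed": false
},
"id": "mse_average_sketch_md"
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"# Hypothetical sketch: MeanSquaredError averages the per-example losses\n",
"demo_loss = tf.keras.losses.MeanSquaredError()\n",
"y_true = tf.constant([[1.0], [2.0]])\n",
"y_pred = tf.constant([[0.0], [0.0]])\n",
"print(float(demo_loss(y_true, y_pred)))  # (1 + 4) / 2 = 2.5\n"
],
"metadata": {
"collapsed": false
},
"id": "mse_average_sketch"
},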
{
"cell_type": "markdown",
"source": [
"Defining the Optimization Algorithm\n",
"\n",
"Minibatch stochastic gradient descent is a standard tool for optimizing neural networks, and Keras implements it alongside a number of variations of this algorithm in the optimizers module. Minibatch stochastic gradient descent just requires that we set the value of learning_rate, which is set to 0.03 here."
],
"metadata": {
"collapsed": false
},
"id": "a6b049fbffcd31e4"
},
{
"cell_type": "code",
"execution_count": 11,
"outputs": [],
"source": [
"trainer = tf.keras.optimizers.SGD(learning_rate=0.03)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-05-13T18:27:24.596378Z",
"start_time": "2024-05-13T18:27:24.580161Z"
}
},
"id": "e8e2f81d312161e7"
},
{
"cell_type": "markdown",
"source": [
"Expressing our model through the high-level APIs of a deep learning framework requires comparatively few lines of code: we did not have to allocate parameters individually, define our loss function, or implement minibatch stochastic gradient descent by hand. The advantages of high-level APIs will grow considerably once we work with more complex models. Once we have all the basic pieces in place, the training loop itself is strikingly similar to what we did when implementing everything from scratch.\n",
"\n",
"To refresh your memory: in each epoch, we make a complete pass over the dataset (train_data), iteratively grabbing one minibatch of inputs and the corresponding labels. For each minibatch, we go through the following steps:\n",
"\n",
"- Generate predictions by calling net(X) and calculate the loss l (the forward propagation).\n",
"- Calculate gradients by running the backpropagation.\n",
"- Update the model parameters by invoking our optimizer.\n",
"\n",
"For good measure, we compute the loss after each epoch and print it to monitor progress."
],
"metadata": {
"collapsed": false
},
"id": "5a806107151aa2e5"
},
{
"cell_type": "code",
"execution_count": 12,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 1, loss 0.000211\n",
"epoch 2, loss 0.000096\n",
"epoch 3, loss 0.000096\n"
]
}
],
"source": [
"num_epochs = 3\n",
"for epoch in range(num_epochs):\n",
" for X, y in data_iter:\n",
" with tf.GradientTape() as tape:\n",
" l = loss(net(X, training=True), y)\n",
" grads = tape.gradient(l, net.trainable_variables)\n",
" trainer.apply_gradients(zip(grads, net.trainable_variables))\n",
" l = loss(net(features), labels)\n",
" print(f'epoch {epoch + 1}, loss {l:f}')"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-05-13T18:27:34.792682Z",
"start_time": "2024-05-13T18:27:31.970985Z"
}
},
"id": "bca5a85df8f21248"
},
{
"cell_type": "markdown",
"source": [
"Below, we compare the model parameters learned by training on finite data and the actual parameters that generated our dataset. To access the parameters, we first access the layer that we need from net and then access that layer's weights and bias. As in our from-scratch implementation, note that our estimated parameters are close to their ground-truth counterparts."
],
"metadata": {
"collapsed": false
},
"id": "e0eb20fbc723511"
},
{
"cell_type": "code",
"execution_count": 13,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"estimation error of w: tf.Tensor([-0.00030255 0.00015068], shape=(2,), dtype=float32)\n",
"estimation error of b: [-0.00037432]\n"
]
}
],
"source": [
"w = net.get_weights()[0]\n",
"print('estimation error of w:', true_w - tf.reshape(w, true_w.shape))\n",
"b = net.get_weights()[1]\n",
"print('estimation error of b:', true_b - b)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-05-13T18:27:38.488671Z",
"start_time": "2024-05-13T18:27:38.448698Z"
}
},
"id": "1b30f0ba306ee16c"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}