Handwriting recognition revisited: the code
Let's implement some of the ideas we've discussed in this chapter. We'll develop a new program, network2.py, which is an improved version of the program network.py we developed in Chapter 1. If you haven't looked at network.py in a while, you may find it helpful to spend a few minutes quickly reviewing the earlier discussion. It's only 74 lines of code, and is easily understood.
As was the case in network.py, the core of network2.py is the Network class, which we use to represent our neural networks. We initialize an instance of Network with a list of sizes for the respective layers in the network, and a choice of cost function, defaulting to the cross-entropy:
class Network(object):

    def __init__(self, sizes, cost=CrossEntropyCost):
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.default_weight_initializer()
        self.cost=cost
The opening lines of the __init__ method are the same as in network.py, and are easy enough to understand. But the next two lines are new, and we need to understand them in detail.
Let's start by examining the default_weight_initializer method. This uses our new, improved approach to weight initialization. As we've seen, under that approach the weights input to a neuron are initialized as Gaussian random variables with mean 0 and standard deviation 1 divided by the square root of the number of connections input to the neuron. The method also initializes the biases, using Gaussian random variables with mean 0 and standard deviation 1. Here's the code:
    def default_weight_initializer(self):
        self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
        self.weights = [np.random.randn(y, x)/np.sqrt(x)
                        for x, y in zip(self.sizes[:-1], self.sizes[1:])]
To understand the code, it may help to recall that np is the Numpy library, which we use for linear algebra; we import Numpy at the beginning of the program. Note also that we don't initialize any biases for the first layer of neurons. We avoid doing this because the first layer is an input layer, and so any biases would have no effect. We did exactly the same thing in network.py.
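As an informal sanity check (this snippet is not part of network2.py, and the layer sizes are just hypothetical examples), we can sample a weight matrix the same way and confirm that its standard deviation comes out close to 1/sqrt(n_in):

import numpy as np

# Hypothetical layer sizes: 784 inputs feeding 30 neurons.
n_in, n_out = 784, 30
w = np.random.randn(n_out, n_in) / np.sqrt(n_in)

# The empirical standard deviation should be close to 1/sqrt(784), i.e. about 0.036.
print("std of weights: {0:.4f}".format(w.std()))
print("1/sqrt(n_in):   {0:.4f}".format(1.0/np.sqrt(n_in)))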
As well as default_weight_initializer, we also include a large_weight_initializer method. This method initializes the weights and biases using the old approach from Chapter 1, with both the weights and the biases initialized as Gaussian random variables with mean 0 and standard deviation 1. Here's the code, which differs, of course, only slightly from default_weight_initializer:
    def large_weight_initializer(self):
        self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(self.sizes[:-1], self.sizes[1:])]
I've included the large_weight_initializer method mostly as a convenience, to make it easier to compare the results in this chapter with those in Chapter 1. I can't think of many practical situations where I'd recommend using it!
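If you do want to run such a comparison, the old initialization can be restored after constructing a network. A minimal usage sketch:

>>> import network2
>>> net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
>>> net.large_weight_initializer()  # overwrite the default, scaled weights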
The second new thing in Network's __init__ is that we now initialize a cost attribute. To understand how that works, let's look at the class we use to represent the cross-entropy cost [if you're not familiar with Python's static methods you can ignore the @staticmethod decorators, and just treat fn and delta as ordinary methods; if you're curious about the details, all @staticmethod does is tell the Python interpreter that the method which follows doesn't depend on the object in any way, which is why self isn't passed as a parameter to the fn and delta methods]:
class CrossEntropyCost(object):

    @staticmethod
    def fn(a, y):
        return np.sum(np.nan_to_num(-y*np.log(a)-(1-y)*np.log(1-a)))

    @staticmethod
    def delta(z, a, y):
        return (a-y)
Let's break this down. The first thing to observe is that even though the cross-entropy is, mathematically speaking, a function, we've implemented it as a Python class, not a Python function. Why have I made that choice? The reason is that the cost plays two different roles in our network. The obvious role is that it's a measure of how well an output activation matches the desired output. That role is captured by the CrossEntropyCost.fn method. (Note, by the way, that the np.nan_to_num call inside CrossEntropyCost.fn ensures that Numpy deals correctly with the log of numbers very close to zero.) But there's also a second way the cost function enters our network. Recall from Chapter 2 that when running the backpropagation algorithm we need to compute the network's output error, $\delta^L$. The form of the output error depends on the choice of cost function: different cost function, different form for the output error. For the cross-entropy the output error is, as we saw in Equation (66), $\delta^L = a^L - y$.
For this reason we define a second method, CrossEntropyCost.delta, whose purpose is to tell our network how to compute the output error. We then bundle these two methods up into a single class containing everything our networks need to know about the cost function.
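As an informal cross-check (not part of network2.py; the helper functions and the scalar values below are purely illustrative), we can verify numerically that a - y really is the derivative of the cross-entropy cost with respect to z for a single sigmoid output neuron:

import numpy as np

def sigmoid(z):
    return 1.0/(1.0+np.exp(-z))

def cross_entropy(z, y):
    a = sigmoid(z)
    return -y*np.log(a)-(1-y)*np.log(1-a)

z, y, eps = 0.7, 1.0, 1e-6
numerical = (cross_entropy(z+eps, y)-cross_entropy(z-eps, y))/(2*eps)
analytic = sigmoid(z)-y  # what CrossEntropyCost.delta returns, with a = sigmoid(z)
print("numerical: {0:.6f}   analytic: {1:.6f}".format(numerical, analytic))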
In a similar way, network2.py also contains a class to represent the quadratic cost. This is included mainly for comparison with the results of Chapter 1, since going forward we'll mostly use the cross-entropy cost. The code is just below. The QuadraticCost.fn method is a straightforward computation of the quadratic cost associated with the actual output, a, and the desired output, y. The value returned by QuadraticCost.delta is based on Equation (30) for the output error for the quadratic cost, which we derived back in Chapter 2.
class QuadraticCost(object):

    @staticmethod
    def fn(a, y):
        return 0.5*np.linalg.norm(a-y)**2

    @staticmethod
    def delta(z, a, y):
        return (a-y) * sigmoid_prime(z)
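The extra sigmoid_prime(z) factor in QuadraticCost.delta is exactly what causes the learning slowdown discussed earlier in the chapter. A small standalone comparison (hypothetical values, not part of network2.py) makes the difference visible for a badly saturated output neuron:

import numpy as np

def sigmoid(z):
    return 1.0/(1.0+np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z)*(1-sigmoid(z))

# A saturated neuron: z is large and positive, so a is close to 1,
# even though the desired output is 0.
z, y = 5.0, 0.0
a = sigmoid(z)
quadratic_delta = (a-y)*sigmoid_prime(z)  # tiny, so learning is slow
cross_entropy_delta = (a-y)               # large, so learning stays fast
print("quadratic delta:     {0:.4f}".format(quadratic_delta))
print("cross-entropy delta: {0:.4f}".format(cross_entropy_delta))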
We've now understood the main differences between network2.py and network.py. It's all pretty simple stuff. There are a number of smaller changes, which I'll discuss below, including the implementation of L2 regularization. Before getting to those, let's look at the complete code for network2.py. You don't need to read all the code in detail, but it is worth understanding the broad structure, and in particular reading the documentation strings, so you understand what each piece of the program is doing. Of course, you're welcome to dig as deeply as you wish! If you get lost, you can keep reading the prose below and return to the code later. In any case, here's the code:
"""network2.py
~~~~~~~~~~~~~~
An improved version of network.py, implementing the stochastic
gradient descent learning algorithm for a feedforward neural network.
Improvements include the addition of the cross-entropy cost function,
regularization, and better initialization of network weights. Note
that I have focused on making the code simple, easily readable, and
easily modifiable. It is not optimized, and omits many desirable
features.
"""
#### Libraries
# Standard library
import json
import random
import sys
# Third-party libraries
import numpy as np
#### Define the quadratic and cross-entropy cost functions
class QuadraticCost(object):
@staticmethod
def fn(a, y):
"""Return the cost associated with an output ``a`` and desired output
``y``.
"""
return 0.5*np.linalg.norm(a-y)**2
@staticmethod
def delta(z, a, y):
"""Return the error delta from the output layer."""
return (a-y) * sigmoid_prime(z)
class CrossEntropyCost(object):
@staticmethod
def fn(a, y):
"""Return the cost associated with an output ``a`` and desired output
``y``. Note that np.nan_to_num is used to ensure numerical
stability. In particular, if both ``a`` and ``y`` have a 1.0
in the same slot, then the expression (1-y)*np.log(1-a)
returns nan. The np.nan_to_num ensures that that is converted
to the correct value (0.0).
"""
return np.sum(np.nan_to_num(-y*np.log(a)-(1-y)*np.log(1-a)))
@staticmethod
def delta(z, a, y):
"""Return the error delta from the output layer. Note that the
parameter ``z`` is not used by the method. It is included in
the method's parameters in order to make the interface
consistent with the delta method for other cost classes.
"""
return (a-y)
#### Main Network class
class Network(object):
def __init__(self, sizes, cost=CrossEntropyCost):
"""The list ``sizes`` contains the number of neurons in the respective
layers of the network. For example, if the list was [2, 3, 1]
then it would be a three-layer network, with the first layer
containing 2 neurons, the second layer 3 neurons, and the
third layer 1 neuron. The biases and weights for the network
are initialized randomly, using
``self.default_weight_initializer`` (see docstring for that
method).
"""
self.num_layers = len(sizes)
self.sizes = sizes
self.default_weight_initializer()
self.cost=cost
def default_weight_initializer(self):
"""Initialize each weight using a Gaussian distribution with mean 0
and standard deviation 1 over the square root of the number of
weights connecting to the same neuron. Initialize the biases
using a Gaussian distribution with mean 0 and standard
deviation 1.
Note that the first layer is assumed to be an input layer, and
by convention we won't set any biases for those neurons, since
biases are only ever used in computing the outputs from later
layers.
"""
self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
self.weights = [np.random.randn(y, x)/np.sqrt(x)
for x, y in zip(self.sizes[:-1], self.sizes[1:])]
def large_weight_initializer(self):
"""Initialize the weights using a Gaussian distribution with mean 0
and standard deviation 1. Initialize the biases using a
Gaussian distribution with mean 0 and standard deviation 1.
Note that the first layer is assumed to be an input layer, and
by convention we won't set any biases for those neurons, since
biases are only ever used in computing the outputs from later
layers.
This weight and bias initializer uses the same approach as in
Chapter 1, and is included for purposes of comparison. It
will usually be better to use the default weight initializer
instead.
"""
self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
self.weights = [np.random.randn(y, x)
for x, y in zip(self.sizes[:-1], self.sizes[1:])]
def feedforward(self, a):
"""Return the output of the network if ``a`` is input."""
for b, w in zip(self.biases, self.weights):
a = sigmoid(np.dot(w, a)+b)
return a
def SGD(self, training_data, epochs, mini_batch_size, eta,
lmbda = 0.0,
evaluation_data=None,
monitor_evaluation_cost=False,
monitor_evaluation_accuracy=False,
monitor_training_cost=False,
monitor_training_accuracy=False):
"""Train the neural network using mini-batch stochastic gradient
descent. The ``training_data`` is a list of tuples ``(x, y)``
representing the training inputs and the desired outputs. The
other non-optional parameters are self-explanatory, as is the
regularization parameter ``lmbda``. The method also accepts
``evaluation_data``, usually either the validation or test
data. We can monitor the cost and accuracy on either the
evaluation data or the training data, by setting the
appropriate flags. The method returns a tuple containing four
lists: the (per-epoch) costs on the evaluation data, the
accuracies on the evaluation data, the costs on the training
data, and the accuracies on the training data. All values are
evaluated at the end of each training epoch. So, for example,
if we train for 30 epochs, then the first element of the tuple
will be a 30-element list containing the cost on the
evaluation data at the end of each epoch. Note that the lists
are empty if the corresponding flag is not set.
"""
if evaluation_data: n_data = len(evaluation_data)
n = len(training_data)
evaluation_cost, evaluation_accuracy = [], []
training_cost, training_accuracy = [], []
for j in xrange(epochs):
random.shuffle(training_data)
mini_batches = [
training_data[k:k+mini_batch_size]
for k in xrange(0, n, mini_batch_size)]
for mini_batch in mini_batches:
self.update_mini_batch(
mini_batch, eta, lmbda, len(training_data))
print "Epoch %s training complete" % j
if monitor_training_cost:
cost = self.total_cost(training_data, lmbda)
training_cost.append(cost)
print "Cost on training data: {}".format(cost)
if monitor_training_accuracy:
accuracy = self.accuracy(training_data, convert=True)
training_accuracy.append(accuracy)
print "Accuracy on training data: {} / {}".format(
accuracy, n)
if monitor_evaluation_cost:
cost = self.total_cost(evaluation_data, lmbda, convert=True)
evaluation_cost.append(cost)
print "Cost on evaluation data: {}".format(cost)
if monitor_evaluation_accuracy:
accuracy = self.accuracy(evaluation_data)
evaluation_accuracy.append(accuracy)
print "Accuracy on evaluation data: {} / {}".format(
self.accuracy(evaluation_data), n_data)
print
return evaluation_cost, evaluation_accuracy, \
training_cost, training_accuracy
def update_mini_batch(self, mini_batch, eta, lmbda, n):
"""Update the network's weights and biases by applying gradient
descent using backpropagation to a single mini batch. The
``mini_batch`` is a list of tuples ``(x, y)``, ``eta`` is the
learning rate, ``lmbda`` is the regularization parameter, and
``n`` is the total size of the training data set.
"""
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
for x, y in mini_batch:
delta_nabla_b, delta_nabla_w = self.backprop(x, y)
nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw
for w, nw in zip(self.weights, nabla_w)]
self.biases = [b-(eta/len(mini_batch))*nb
for b, nb in zip(self.biases, nabla_b)]
def backprop(self, x, y):
"""Return a tuple ``(nabla_b, nabla_w)`` representing the
gradient for the cost function C_x. ``nabla_b`` and
``nabla_w`` are layer-by-layer lists of numpy arrays, similar
to ``self.biases`` and ``self.weights``."""
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
# feedforward
activation = x
activations = [x] # list to store all the activations, layer by layer
zs = [] # list to store all the z vectors, layer by layer
for b, w in zip(self.biases, self.weights):
z = np.dot(w, activation)+b
zs.append(z)
activation = sigmoid(z)
activations.append(activation)
# backward pass
delta = (self.cost).delta(zs[-1], activations[-1], y)
nabla_b[-1] = delta
nabla_w[-1] = np.dot(delta, activations[-2].transpose())
# Note that the variable l in the loop below is used a little
# differently to the notation in Chapter 2 of the book. Here,
# l = 1 means the last layer of neurons, l = 2 is the
# second-last layer, and so on. It's a renumbering of the
# scheme in the book, used here to take advantage of the fact
# that Python can use negative indices in lists.
for l in xrange(2, self.num_layers):
z = zs[-l]
sp = sigmoid_prime(z)
delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
nabla_b[-l] = delta
nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
return (nabla_b, nabla_w)
def accuracy(self, data, convert=False):
"""Return the number of inputs in ``data`` for which the neural
network outputs the correct result. The neural network's
output is assumed to be the index of whichever neuron in the
final layer has the highest activation.
The flag ``convert`` should be set to False if the data set is
validation or test data (the usual case), and to True if the
data set is the training data. The need for this flag arises
due to differences in the way the results ``y`` are
represented in the different data sets. In particular, it
flags whether we need to convert between the different
representations. It may seem strange to use different
representations for the different data sets. Why not use the
same representation for all three data sets? It's done for
efficiency reasons -- the program usually evaluates the cost
on the training data and the accuracy on other data sets.
These are different types of computations, and using different
representations speeds things up. More details on the
representations can be found in
mnist_loader.load_data_wrapper.
"""
if convert:
results = [(np.argmax(self.feedforward(x)), np.argmax(y))
for (x, y) in data]
else:
results = [(np.argmax(self.feedforward(x)), y)
for (x, y) in data]
return sum(int(x == y) for (x, y) in results)
def total_cost(self, data, lmbda, convert=False):
"""Return the total cost for the data set ``data``. The flag
``convert`` should be set to False if the data set is the
training data (the usual case), and to True if the data set is
the validation or test data. See comments on the similar (but
reversed) convention for the ``accuracy`` method, above.
"""
cost = 0.0
for x, y in data:
a = self.feedforward(x)
if convert: y = vectorized_result(y)
cost += self.cost.fn(a, y)/len(data)
cost += 0.5*(lmbda/len(data))*sum(
np.linalg.norm(w)**2 for w in self.weights)
return cost
def save(self, filename):
"""Save the neural network to the file ``filename``."""
data = {"sizes": self.sizes,
"weights": [w.tolist() for w in self.weights],
"biases": [b.tolist() for b in self.biases],
"cost": str(self.cost.__name__)}
f = open(filename, "w")
json.dump(data, f)
f.close()
#### Loading a Network
def load(filename):
"""Load a neural network from the file ``filename``. Returns an
instance of Network.
"""
f = open(filename, "r")
data = json.load(f)
f.close()
cost = getattr(sys.modules[__name__], data["cost"])
net = Network(data["sizes"], cost=cost)
net.weights = [np.array(w) for w in data["weights"]]
net.biases = [np.array(b) for b in data["biases"]]
return net
#### Miscellaneous functions
def vectorized_result(j):
"""Return a 10-dimensional unit vector with a 1.0 in the j'th position
and zeroes elsewhere. This is used to convert a digit (0...9)
into a corresponding desired output from the neural network.
"""
e = np.zeros((10, 1))
e[j] = 1.0
return e
def sigmoid(z):
"""The sigmoid function."""
return 1.0/(1.0+np.exp(-z))
def sigmoid_prime(z):
"""Derivative of the sigmoid function."""
return sigmoid(z)*(1-sigmoid(z))
One of the more interesting changes in the code is the inclusion of L2 regularization. Although this is a major conceptual change, it's so trivial to implement that it's easy to miss in the code. For the most part it just involves passing the parameter lmbda to various methods, notably Network.SGD. The real work is done in a single line of the program, the fourth-last line of the Network.update_mini_batch method. That's where we modify the gradient descent update rule to include weight decay. Although the modification is tiny, it has a big effect on results!
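To make the weight decay concrete, here's a tiny standalone sketch (with hypothetical values, not part of network2.py) of the regularized update rule used on that line, w -> (1 - eta*lmbda/n)*w - (eta/m)*nabla_w:

# Hypothetical hyper-parameters: learning rate, regularization parameter,
# training set size, and mini-batch size.
eta, lmbda, n, m = 0.5, 5.0, 50000, 10

w = 0.8        # a single weight
grad_w = 0.02  # the gradient for that weight, summed over the mini-batch

decay = 1 - eta*(lmbda/n)   # = 0.99995: every update shrinks the weight slightly
w_new = decay*w - (eta/m)*grad_w
print("decay factor:   {0}".format(decay))
print("updated weight: {0}".format(w_new))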
This is, by the way, common when implementing new techniques in neural networks. We've spent thousands of words discussing regularization. It's conceptually quite subtle and difficult to understand. And yet it was trivial to add to our program! It occurs surprisingly often that sophisticated techniques can be implemented with small changes to code.
Another small but important change to our code is the addition of several optional flags to the stochastic gradient descent method, Network.SGD. These flags make it possible to monitor the cost and accuracy on either the training_data or the evaluation_data which can be passed to Network.SGD. We've used these flags often earlier in the chapter, but let me give an example of how they work, just as a reminder:
>>> import mnist_loader
>>> training_data, validation_data, test_data = \
... mnist_loader.load_data_wrapper()
>>> import network2
>>> net = network2.Network([784, 30, 10], cost=network2.CrossEntropyCost)
>>> net.SGD(training_data, 30, 10, 0.5,
... lmbda = 5.0,
... evaluation_data=validation_data,
... monitor_evaluation_accuracy=True,
... monitor_evaluation_cost=True,
... monitor_training_accuracy=True,
... monitor_training_cost=True)
Here we've set the evaluation_data to be the validation_data. But we could also have monitored performance on the test_data or any other data set. The four flags tell Network.SGD to monitor the cost and accuracy on both the evaluation_data and the training_data. Those flags are False by default, but they've been turned on here in order to monitor our Network's performance. Furthermore, network2.py's Network.SGD method returns a four-element tuple representing the results of that monitoring. We can use this as follows:
>>> evaluation_cost, evaluation_accuracy,
... training_cost, training_accuracy = net.SGD(training_data, 30, 10, 0.5,
... lmbda = 5.0,
... evaluation_data=validation_data,
... monitor_evaluation_accuracy=True,
... monitor_evaluation_cost=True,
... monitor_training_accuracy=True,
... monitor_training_cost=True)
So, for example, evaluation_cost will be a 30-element list containing the cost on the evaluation data at the end of each epoch. This sort of information is extremely useful in understanding a network's behaviour. It can, for example, be used to draw graphs showing how the network learns over time. Indeed, that's exactly how the graphs earlier in this chapter were constructed. Note, however, that if any of the monitoring flags is not set, the corresponding element of the tuple will be an empty list.
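A minimal plotting sketch along those lines might look as follows (this assumes matplotlib is installed, and that there are 10,000 images in the evaluation data, so dividing the raw accuracy counts by 100 gives a percentage):

>>> import matplotlib.pyplot as plt
>>> plt.plot([a/100.0 for a in evaluation_accuracy])
>>> plt.xlabel("Epoch")
>>> plt.ylabel("Accuracy on the evaluation data (%)")
>>> plt.show()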
Other additions to the code include a Network.save method, to save Network objects to disk, and a function load to load them back in again later. Note that the saving and loading is done using JSON, rather than Python's pickle or cPickle modules, which are the usual way we save and load objects to and from disk in Python. Using JSON requires more code than pickle or cPickle would. To understand why I've used JSON, imagine that at some time in the future we decided to change our Network class to allow neurons other than sigmoid neurons. To implement that change we'd most likely change the attributes defined in the Network.__init__ method. If we had simply pickled the objects, that would cause our load function to fail. Using JSON to do the serialization explicitly makes it straightforward to ensure that old Networks will still load.
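A brief usage sketch (the filename is just an example):

>>> net.save("net.json")              # write the trained Network to disk as JSON
>>> net2 = network2.load("net.json")  # returns a fresh Network instance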
There are many other minor changes in the code for network2.py, but they're all simple variations on network.py. The net result is to expand our 74-line program to a far more capable 152 lines.
Problems
Modify the code above to implement L1 regularization, and use L1 regularization with a network of 30 hidden neurons to classify MNIST digits. Can you find a regularization parameter that lets you do better than running unregularized?
Take a look at the Network.cost_derivative method in network.py. That method was written for the quadratic cost. How would you rewrite it for the cross-entropy cost? Can you think of a problem that might arise in the cross-entropy version? In network2.py we've eliminated the Network.cost_derivative method entirely, instead incorporating its functionality into the CrossEntropyCost.delta method. How does this solve the problem you've just identified?