
The MNIST dataset

Our convolutional neural network model will be similar to the LeNet-5 architecture, consisting of convolutional layers, max-pooling layers, and non-linear activation layers.
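
As a quick orientation before the full listing, here is how the tensor shapes evolve through the network (a sketch derived from the weight shapes defined in the code below):

# Shape flow through the LeNet-5 inspired network (batch dimension first):
# input image:             [batch, 28, 28, 1]
# conv 5x5, 32 filters:    [batch, 28, 28, 32]   ('SAME' padding keeps 28x28)
# max-pool 2x2, stride 2:  [batch, 14, 14, 32]
# conv 5x5, 64 filters:    [batch, 14, 14, 64]
# max-pool 2x2, stride 2:  [batch, 7, 7, 64]
# flatten:                 [batch, 7*7*64] = [batch, 3136]
# fully connected + ReLU:  [batch, 1024]
# output logits:           [batch, 10]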


(Figure: 3D simulation of a convolutional neural network)

Code:

# Import the deep learning library
import tensorflow as tf
import time

# Import the MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Network inputs and outputs
# The network's input is a 28×28 dimensional input
n = 28
m = 28
num_input = n * m  # MNIST data input
num_classes = 10   # MNIST total classes (0-9 digits)

# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input])
Y = tf.placeholder(tf.float32, [None, num_classes])

# Storing the parameters of our LeNet-5 inspired Convolutional Neural Network
weights = {
    "W_ij": tf.Variable(tf.random_normal([5, 5, 1, 32])),
    "W_jk": tf.Variable(tf.random_normal([5, 5, 32, 64])),
    "W_kl": tf.Variable(tf.random_normal([7 * 7 * 64, 1024])),
    "W_lm": tf.Variable(tf.random_normal([1024, num_classes]))
}

biases = {
    "b_ij": tf.Variable(tf.random_normal([32])),
    "b_jk": tf.Variable(tf.random_normal([64])),
    "b_kl": tf.Variable(tf.random_normal([1024])),
    "b_lm": tf.Variable(tf.random_normal([num_classes]))
}

# The hyper-parameters of our Convolutional Neural Network
learning_rate = 1e-3
num_steps = 500
batch_size = 128
display_step = 10

def ConvolutionLayer(x, W, b, strides=1):
    # Convolution Layer
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return x

def ReLU(x):
    # ReLU activation function
    return tf.nn.relu(x)

def PoolingLayer(x, k=2, strides=2):
    # Max Pooling layer
    return tf.nn.max_pool(x, ksize=[1, k, k, 1],
                          strides=[1, strides, strides, 1], padding='SAME')

def Softmax(x):
    # Softmax activation function for the CNN's final output
    return tf.nn.softmax(x)

# Create model
def ConvolutionalNeuralNetwork(x, weights, biases):
    # MNIST data input is a 1-D row vector of 784 features (28×28 pixels)
    # Reshape to match picture format [Height x Width x Channel]
    # Tensor input becomes 4-D: [Batch Size, Height, Width, Channel]
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    # Convolution Layer
    Conv1 = ConvolutionLayer(x, weights["W_ij"], biases["b_ij"])
    # Non-Linearity
    ReLU1 = ReLU(Conv1)
    # Max Pooling (down-sampling)
    Pool1 = PoolingLayer(ReLU1, k=2)

    # Convolution Layer
    Conv2 = ConvolutionLayer(Pool1, weights["W_jk"], biases["b_jk"])
    # Non-Linearity
    ReLU2 = ReLU(Conv2)
    # Max Pooling (down-sampling)
    Pool2 = PoolingLayer(ReLU2, k=2)

    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    FC = tf.reshape(Pool2, [-1, weights["W_kl"].get_shape().as_list()[0]])
    FC = tf.add(tf.matmul(FC, weights["W_kl"]), biases["b_kl"])
    FC = ReLU(FC)

    # Output, class prediction
    output = tf.add(tf.matmul(FC, weights["W_lm"]), biases["b_lm"])
    return output

# Construct model
logits = ConvolutionalNeuralNetwork(X, weights, biases)
prediction = Softmax(logits)

# Softmax cross-entropy loss function
loss_function = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))

# Optimization using the Adam Gradient Descent optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_process = optimizer.minimize(loss_function)

# Evaluate model
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Recording how the loss function varies over time during training
cost = tf.summary.scalar("cost", loss_function)
training_accuracy = tf.summary.scalar("accuracy", accuracy)
train_summary_op = tf.summary.merge([cost, training_accuracy])
train_writer = tf.summary.FileWriter("./Desktop/logs",
                                     graph=tf.get_default_graph())

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)
    start_time = time.time()

    for step in range(1, num_steps + 1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Run optimization op (backprop)
        sess.run(training_process, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc, summary = sess.run([loss_function, accuracy, train_summary_op],
                                          feed_dict={X: batch_x, Y: batch_y})
            train_writer.add_summary(summary, step)
            print("Step " + str(step) + ", Minibatch Loss= " +
                  "{:.4f}".format(loss) + ", Training Accuracy= " +
                  "{:.3f}".format(acc))

    end_time = time.time()
    print("Time duration: " + str(int(end_time - start_time)) + " seconds")
    print("Optimization Finished!")

    # Calculate accuracy for 256 MNIST test images
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={X: mnist.test.images[:256],
                                        Y: mnist.test.labels[:256]}))

The code above may look a bit lengthy, but if you break it down piece by piece, it is not hard to follow.
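
One caveat worth adding (not from the original article): the listing uses the TensorFlow 1.x API (tf.placeholder, tf.Session, tf.train.AdamOptimizer), which is gone from the top-level namespace in TensorFlow 2.x. A common shim, assuming you only need graph-mode behavior, is:

# Run TF1-style graph code on a TensorFlow 2.x installation
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

The tensorflow.examples.tutorials.mnist module was also removed in TensorFlow 2, so the data-loading lines would need replacing as well (for example with tf.keras.datasets.mnist.load_data()).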

After running the program, the output should look like the following:

Step 1, Minibatch Loss= 74470.4844, Training Accuracy= 0.117
Step 10, Minibatch Loss= 20529.4141, Training Accuracy= 0.250
Step 20, Minibatch Loss= 14074.7539, Training Accuracy= 0.531
Step 30, Minibatch Loss= 7168.9839, Training Accuracy= 0.586
Step 40, Minibatch Loss= 4781.1060, Training Accuracy= 0.703
Step 50, Minibatch Loss= 3281.0979, Training Accuracy= 0.766
Step 60, Minibatch Loss= 2701.2451, Training Accuracy= 0.781
Step 70, Minibatch Loss= 2478.7153, Training Accuracy= 0.773
Step 80, Minibatch Loss= 2312.8320, Training Accuracy= 0.820
Step 90, Minibatch Loss= 2143.0774, Training Accuracy= 0.852
Step 100, Minibatch Loss= 1373.9169, Training Accuracy= 0.852
Step 110, Minibatch Loss= 1852.9535, Training Accuracy= 0.852
Step 120, Minibatch Loss= 1845.3500, Training Accuracy= 0.891
Step 130, Minibatch Loss= 1677.2566, Training Accuracy= 0.844
Step 140, Minibatch Loss= 1683.3661, Training Accuracy= 0.875
Step 150, Minibatch Loss= 1859.3821, Training Accuracy= 0.836
Step 160, Minibatch Loss= 1495.4796, Training Accuracy= 0.859
Step 170, Minibatch Loss= 609.3800, Training Accuracy= 0.914
Step 180, Minibatch Loss= 1376.5054, Training Accuracy= 0.891
Step 190, Minibatch Loss= 1085.0363, Training Accuracy= 0.891
Step 200, Minibatch Loss= 1129.7145, Training Accuracy= 0.914
Step 210, Minibatch Loss= 1488.5452, Training Accuracy= 0.906
Step 220, Minibatch Loss= 584.5027, Training Accuracy= 0.930
Step 230, Minibatch Loss= 619.9744, Training Accuracy= 0.914
Step 240, Minibatch Loss= 1575.8933, Training Accuracy= 0.891
Step 250, Minibatch Loss= 1558.5853, Training Accuracy= 0.891
Step 260, Minibatch Loss= 375.0371, Training Accuracy= 0.922
Step 270, Minibatch Loss= 1568.0758, Training Accuracy= 0.859
Step 280, Minibatch Loss= 1172.9205, Training Accuracy= 0.914
Step 290, Minibatch Loss= 1023.5415, Training Accuracy= 0.914
Step 300, Minibatch Loss= 475.9756, Training Accuracy= 0.945
Step 310, Minibatch Loss= 488.8930, Training Accuracy= 0.961
Step 320, Minibatch Loss= 1105.7720, Training Accuracy= 0.914
Step 330, Minibatch Loss= 1111.8589, Training Accuracy= 0.906
Step 340, Minibatch Loss= 842.7805, Training Accuracy= 0.930
Step 350, Minibatch Loss= 1514.0153, Training Accuracy= 0.914
Step 360, Minibatch Loss= 1722.1812, Training Accuracy= 0.875
Step 370, Minibatch Loss= 681.6041, Training Accuracy= 0.891
Step 380, Minibatch Loss= 902.8599, Training Accuracy= 0.930
Step 390, Minibatch Loss= 714.1541, Training Accuracy= 0.930
Step 400, Minibatch Loss= 1654.8883, Training Accuracy= 0.914
Step 410, Minibatch Loss= 696.6915, Training Accuracy= 0.906
Step 420, Minibatch Loss= 536.7183, Training Accuracy= 0.914
Step 430, Minibatch Loss= 1405.9148, Training Accuracy= 0.891
Step 440, Minibatch Loss= 199.4781, Training Accuracy= 0.953
Step 450, Minibatch Loss= 438.3784, Training Accuracy= 0.938
Step 460, Minibatch Loss= 409.6419, Training Accuracy= 0.969
Step 470, Minibatch Loss= 503.1216, Training Accuracy= 0.930
Step 480, Minibatch Loss= 482.6476, Training Accuracy= 0.922
Step 490, Minibatch Loss= 767.3893, Training Accuracy= 0.922
Step 500, Minibatch Loss= 626.8249, Training Accuracy= 0.930
Time duration: 657 seconds
Optimization Finished!
Testing Accuracy: 0.9453125

To sum up, we have just built our first convolutional neural network. As the results above show, the model's accuracy improved substantially from the first step to the last, but our CNN still has considerable room for improvement.
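
One concrete improvement, offered here as a hedged suggestion rather than as part of the original article: the very large initial loss (around 74470 at step 1) is what you would expect from tf.random_normal's default standard deviation of 1.0, which produces large initial logits. Initializing the weights with a smaller standard deviation usually makes early training much better behaved:

# Same weight shapes as above, but with a smaller initial scale
# (stddev=0.1 is a tunable assumption, not a value from the original article)
weights = {
    "W_ij": tf.Variable(tf.random_normal([5, 5, 1, 32], stddev=0.1)),
    "W_jk": tf.Variable(tf.random_normal([5, 5, 32, 64], stddev=0.1)),
    "W_kl": tf.Variable(tf.random_normal([7 * 7 * 64, 1024], stddev=0.1)),
    "W_lm": tf.Variable(tf.random_normal([1024, num_classes], stddev=0.1))
}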

Now let us visualize the constructed convolutional neural network model in TensorBoard:
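
The summaries were written by tf.summary.FileWriter to ./Desktop/logs, so TensorBoard can be pointed at that directory from a terminal (standard TensorBoard usage; the port is its default):

tensorboard --logdir=./Desktop/logs
# then open http://localhost:6006 in a browser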


(Figure: Visualizing the convolutional neural network)


(Figure: Accuracy and loss evaluation)

Conclusion

Convolutional neural networks are powerful deep learning models with broad applications and excellent performance. Using them well will only become more challenging as data grows larger and problems become more complex.

Note

The Jupyter notebook for this article can be found at:

  • https://github.com/AegeusZerium/DeepLearning/blob/master/Deep Learning/Demystifying Convolutional Neural Networks.ipynb

References

  • https://en.wikipedia.org/wiki/Convolutional_neural_network
  • https://en.wikipedia.org/wiki/Yann_LeCun
  • http://yann.lecun.com/exdb/mnist/
  • https://opensource.com/article/17/11/intro-tensorflow
  • https://en.wikipedia.org/wiki/Tensor
  • http://www.cs.columbia.edu/~mcollins/ff2.pdf
  • https://github.com/tensorflow/tensorboard
  • http://yann.lecun.com/exdb/lenet/


About the author

Lightning Blade, machine learning enthusiast

The Chinese translation of this article was organized by Alibaba Cloud's Yunqi Community.

Original title: "Demystifying Convolutional Neural Networks". Translator: 海棠. Reviewer: Uncle_LLD.

