-
Notifications
You must be signed in to change notification settings - Fork 0
/
Copy pathatom.xml
536 lines (286 loc) · 427 KB
/
atom.xml
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>浩翰Redamancy的博客</title>
<subtitle>文质彬彬 然后君子</subtitle>
<link href="/atom.xml" rel="self"/>
<link href="https://plutoacharon.github.io/"/>
<updated>2020-07-05T14:06:21.261Z</updated>
<id>https://plutoacharon.github.io/</id>
<author>
<name>浩翰</name>
</author>
<generator uri="http://hexo.io/">Hexo</generator>
<entry>
<title>使用Centos7基于Squid与Lvs搭建小型CDN</title>
<link href="https://plutoacharon.github.io/2020/07/05/%E4%BD%BF%E7%94%A8Centos7%E5%9F%BA%E4%BA%8ESquid%E4%B8%8ELvs%E6%90%AD%E5%BB%BA%E5%B0%8F%E5%9E%8BCDN/"/>
<id>https://plutoacharon.github.io/2020/07/05/使用Centos7基于Squid与Lvs搭建小型CDN/</id>
<published>2020-07-05T14:05:59.000Z</published>
<updated>2020-07-05T14:06:21.261Z</updated>
<content type="html"><![CDATA[<p>CDN详情查看我这篇文章:<a href="https://blog.csdn.net/qq_43442524/article/details/106924003" target="_blank" rel="noopener">https://blog.csdn.net/qq_43442524/article/details/106924003</a></p><h2 id="前期准备"><a href="#前期准备" class="headerlink" title="前期准备"></a>前期准备</h2><ul><li>Centos7 四台</li><li>Xshell</li></ul><p><img src="https://img-blog.csdnimg.cn/20200627164802904.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="1-Squid"><a href="#1-Squid" class="headerlink" title="1. Squid"></a>1. Squid</h2><p>Squid 常常被用作代理缓存服务器,在自建CDN中处于源站和客户端的中间位置,使得用户无需访问源站便可获取内容资源,提高了用户的访问速度。作为代理服务器,Squid 可以支持多种协议,如 HTTP 、 FTP , SSL 协议等,Squid 使用 的是单独的 I/O 驱动进程来获取并响应客户端的请求,这是 Squid 独特的地方。</p><p>Squid 作为代理服务器,可以获取并响应用户的访问请求 。当用户向 Squid 发出访 问某个内容的请求时,Squid 会将用户请求转发到需要的网站,然后,网站响应该请求并将内容返回给 Squid,最后 Squid 将内容返回给用户,同时也会在本地存放一份备份内 容,以后遇到同样的用户请求时则将备份传送给用户,以此提高用户的响应速度。</p><p>由于Squid 存在己久,导致其与近年来流行的系统特性有很多不兼容之处。所以,目前很多公司在引用 Squid 的时候都会对其核心功能进行修改,比如,修改 Squid 以使得它支持多进程等。对 CDN 的提供服务商而言,也需要根据不同需求对 Squid 进行特定的修改。<br>虽然 Squid 存在时间比较长,也有很多特性无法支持,但是作为代理缓存服务器, Squid仍然能为用户访问网站起到很好的加速作用,并且在提高访问速度的同时,也拥有身份验证以及流量管理等高级功能。基于此,流服务缓存节点采用 Squid 实现代理缓存功能 。</p><h3 id="1-1-安装Squid"><a href="#1-1-安装Squid" class="headerlink" title="1.1 安装Squid"></a>1.1 安装Squid</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># yum install -y squid</span></span><br><span class="line">[root@localhost ~]<span class="comment"># vim /etc/squid/squid.conf</span></span><br><span class="line">文件最后添加</span><br><span class="line"><span class="comment"># Httpd </span></span><br><span class="line">http_port 80 accel vhost vport</span><br><span class="line">cache_peer 192.168.0.100 parent 80 0 proxy-only</span><br><span class="line">http_access allow all</span><br></pre></td></tr></table></figure><p><img src="https://img-blog.csdnimg.cn/2020062716495327.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="1-2-启动Squid"><a href="#1-2-启动Squid" class="headerlink" title="1.2 启动Squid"></a>1.2 启动Squid</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># squid -k parse</span></span><br><span class="line">2020/06/27 15:35:35| Startup: Initializing Authentication Schemes ...</span><br><span class="line">2020/06/27 15:35:35| Startup: Initialized Authentication Scheme <span class="string">'basic'</span></span><br><span class="line">2020/06/27 15:35:35| Startup: Initialized Authentication Scheme <span class="string">'digest'</span></span><br><span class="line">2020/06/27 15:35:35| Startup: Initialized Authentication Scheme <span class="string">'negotiate'</span></span><br><span class="line">2020/06/27 15:35:35| Startup: Initialized Authentication Scheme <span class="string">'ntlm'</span></span><br><span class="line">2020/06/27 15:35:35| Startup: Initialized Authentication.</span><br><span class="line">2020/06/27 15:35:35| Processing Configuration File: /etc/squid/squid.conf (depth 0)</span><br><span class="line">2020/06/27 15:35:35| Processing: acl localnet src 10.0.0.0/8<span class="comment"># RFC1918 possible internal network</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl localnet src 172.16.0.0/12<span class="comment"># RFC1918 possible internal network</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl localnet src 192.168.0.0/16<span class="comment"># RFC1918 possible internal network</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl localnet src fc00::/7 <span class="comment"># RFC 4193 local private network range</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl localnet src fe80::/10 <span class="comment"># RFC 4291 link-local (directly plugged) machines</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl SSL_ports port 443</span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 80<span class="comment"># http</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 21<span class="comment"># ftp</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 443<span class="comment"># https</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 70<span class="comment"># gopher</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 210<span class="comment"># wais</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 1025-65535<span class="comment"># unregistered ports</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 280<span class="comment"># http-mgmt</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 488<span class="comment"># gss-http</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 591<span class="comment"># filemaker</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl Safe_ports port 777<span class="comment"># multiling http</span></span><br><span class="line">2020/06/27 15:35:35| Processing: acl CONNECT method CONNECT</span><br><span class="line">2020/06/27 15:35:35| Processing: http_access deny !Safe_ports</span><br><span class="line">2020/06/27 15:35:35| Processing: http_access deny CONNECT !SSL_ports</span><br><span class="line">2020/06/27 15:35:35| Processing: http_access allow localhost manager</span><br><span class="line">2020/06/27 15:35:35| Processing: http_access deny manager</span><br><span class="line">2020/06/27 15:35:35| Processing: http_access allow localnet</span><br><span class="line">2020/06/27 15:35:35| Processing: http_access allow localhost</span><br><span class="line">2020/06/27 15:35:35| Processing: http_access deny all</span><br><span class="line">2020/06/27 15:35:35| Processing: http_port 3128</span><br><span class="line">2020/06/27 15:35:35| Processing: coredump_dir /var/spool/squid</span><br><span class="line">2020/06/27 15:35:35| Processing: refresh_pattern ^ftp:144020%10080</span><br><span class="line">2020/06/27 15:35:35| Processing: refresh_pattern ^gopher:14400%1440</span><br><span class="line">2020/06/27 15:35:35| Processing: refresh_pattern -i (/cgi-bin/|\?) 00%0</span><br><span class="line">2020/06/27 15:35:35| Processing: refresh_pattern .020%4320</span><br><span class="line">2020/06/27 15:35:35| Processing: http_port 80 accel vhost vport</span><br><span class="line">2020/06/27 15:35:35| Processing: cache_peer 192.168.0.100 parent 80 0 proxy-only</span><br><span class="line">2020/06/27 15:35:35| Processing: http_access allow all</span><br><span class="line">2020/06/27 15:35:35| Initializing https proxy context</span><br><span class="line">[root@localhost ~]<span class="comment"># squid -k reconfigure</span></span><br><span class="line">[root@localhost ~]<span class="comment"># systemctl start squid</span></span><br><span class="line">[root@localhost ~]<span class="comment"># systemctl status squid</span></span><br><span class="line">● squid.service - Squid caching proxy</span><br><span class="line"> Loaded: loaded (/usr/lib/systemd/system/squid.service; disabled; vendor preset: disabled)</span><br><span class="line"> Active: active (running) since 六 2020-06-27 15:36:40 CST; 11s ago</span><br><span class="line"> Process: 2471 ExecStart=/usr/sbin/squid <span class="variable">$SQUID_OPTS</span> -f <span class="variable">$SQUID_CONF</span> (code=exited, status=0/SUCCESS)</span><br><span class="line"> Process: 2466 ExecStartPre=/usr/libexec/squid/cache_swap.sh (code=exited, status=0/SUCCESS)</span><br><span class="line"> Main PID: 2473 (squid)</span><br><span class="line"> CGroup: /system.slice/squid.service</span><br><span class="line"> ├─2473 /usr/sbin/squid -f /etc/squid/squid.conf</span><br><span class="line"> ├─2475 (squid-1) -f /etc/squid/squid.conf</span><br><span class="line"> └─2476 (logfile-daemon) /var/<span class="built_in">log</span>/squid/access.log</span><br><span class="line"></span><br><span class="line">6月 27 15:36:40 localhost.localdomain systemd[1]: Starting Squid caching proxy...</span><br><span class="line">6月 27 15:36:40 localhost.localdomain systemd[1]: Started Squid caching proxy.</span><br><span class="line">6月 27 15:36:40 localhost.localdomain squid[2473]: Squid Parent: will start 1 kids</span><br><span class="line">6月 27 15:36:40 localhost.localdomain squid[2473]: Squid Parent: (squid-1) process 2475 started</span><br></pre></td></tr></table></figure><h2 id="2-Apache"><a href="#2-Apache" class="headerlink" title="2. Apache"></a>2. Apache</h2><h3 id="2-1-安装Httpd服务"><a href="#2-1-安装Httpd服务" class="headerlink" title="2.1 安装Httpd服务"></a>2.1 安装Httpd服务</h3><p><code>[root@localhost ~]# yum install httpd -y</code></p><h3 id="2-2-编写首页"><a href="#2-2-编写首页" class="headerlink" title="2.2 编写首页"></a>2.2 编写首页</h3><p>#index.php<br><figure class="highlight php"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta"><?php</span></span><br><span class="line"><span class="function"><span class="keyword">function</span> <span class="title">serverIp</span><span class="params">()</span></span>{ <span class="comment">//获取服务器IP地址</span></span><br><span class="line"> <span class="keyword">if</span>(<span class="keyword">isset</span>($_SERVER)){</span><br><span class="line"> <span class="keyword">if</span>($_SERVER[<span class="string">'SERVER_ADDR'</span>]){</span><br><span class="line"> $server_ip=$_SERVER[<span class="string">'SERVER_ADDR'</span>];</span><br><span class="line"> }<span class="keyword">else</span>{</span><br><span class="line"> $server_ip=$_SERVER[<span class="string">'LOCAL_ADDR'</span>];</span><br><span class="line"> }</span><br><span class="line"> }<span class="keyword">else</span>{</span><br><span class="line"> $server_ip = getenv(<span class="string">'SERVER_ADDR'</span>);</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">return</span> $server_ip;</span><br><span class="line"> }</span><br><span class="line"> <span class="meta">?></span></span><br><span class="line"><!doctype html></span><br><span class="line"><html></span><br><span class="line"><head></span><br><span class="line"><meta charset=<span class="string">"utf-8"</span>></span><br><span class="line"><title>CDN测试</title></span><br><span class="line"></head></span><br><span class="line"><body></span><br><span class="line"> <div class="banner"></span><br><span class="line"> <ul></span><br><span class="line"> <li><img src=<span class="string">"1.jpg"</span> /></li></span><br><span class="line"> </ul></span><br><span class="line"> </div></span><br><span class="line"> <div class="main_list"></span><br><span class="line"> <ul></span><br><span class="line"> <li><a href=<span class="string">"#"</span>>CDN测试...</a></li></span><br><span class="line"> </ul></span><br><span class="line"> </div></span><br><span class="line"> <span><span class="meta"><?php</span> <span class="keyword">echo</span> serverIp(); <span class="meta">?></span></span></span><br><span class="line"></body></span><br><span class="line"></html></span><br></pre></td></tr></table></figure></p><h3 id="2-3-测试"><a href="#2-3-测试" class="headerlink" title="2.3 测试"></a>2.3 测试</h3><p><img src="https://img-blog.csdnimg.cn/20200627165308807.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>通过<code>192.168.0.101</code>访问到源站<code>192.168.0.100</code></p><p><strong>查看日志</strong>:<br><img src="https://img-blog.csdnimg.cn/20200627165426794.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>分两次访问,发现/<code>var/log/squid/access.log</code><br>第一次访问时是从源站(192.168.0.100)拉取资源,并且在本机缓存<br>第二次访问,直接访问本机(192.168.0.101)资源</p><h2 id="3-安装LVS实现负载均衡"><a href="#3-安装LVS实现负载均衡" class="headerlink" title="3. 安装LVS实现负载均衡"></a>3. 安装LVS实现负载均衡</h2><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># yum install -y ipvsadm</span></span><br><span class="line">[root@localhost ~]<span class="comment"># lsmod |grep ip_vs </span></span><br><span class="line">[root@localhost ~]<span class="comment"># modprobe ip_vs</span></span><br><span class="line">[root@localhost ~]<span class="comment"># lsmod |grep ip_vs </span></span><br><span class="line">ip_vs 145497 0 </span><br><span class="line">nf_conntrack 139224 1 ip_vs</span><br><span class="line">libcrc32c 12644 3 xfs,ip_vs,nf_conntrack</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure><h3 id="3-1-创建VIP调度地址"><a href="#3-1-创建VIP调度地址" class="headerlink" title="3.1 创建VIP调度地址"></a>3.1 创建VIP调度地址</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># ifconfig ens33:0 192.168.0.200 netmask 255.255.255.255</span></span><br><span class="line">[root@localhost ~]<span class="comment"># ipvsadm -At 192.168.0.200:80 -s rr</span></span><br><span class="line">[root@localhost ~]<span class="comment"># ipvsadm -at 192.168.0.200:80 -r 192.168.0.101:80 -g</span></span><br><span class="line">[root@localhost ~]<span class="comment"># ipvsadm -at 192.168.0.200:80 -r 192.168.0.102:80 -g</span></span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure><p>在squid1和squid2两台服务器节点,创建VIP应答地址<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># ifconfig lo:0 192.168.0.200 netmask 255.255.255.255</span></span><br></pre></td></tr></table></figure></p><p>在squid1和squid2两台服务器节点,屏蔽ARP请求<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore </span></span><br><span class="line">[root@localhost ~]<span class="comment"># echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore </span></span><br><span class="line">[root@localhost ~]<span class="comment"># echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce </span></span><br><span class="line">[root@localhost ~]<span class="comment"># echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce </span></span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>在LVS中,#ipvsadm -L 检查配置情况<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># ipvsadm -L </span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP localhost.localdomain:http rr</span><br><span class="line"> -> 192.168.0.101:http Route 1 0 0 </span><br><span class="line"> -> 192.168.0.102:http Route 1 0 0 </span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="3-2-测试"><a href="#3-2-测试" class="headerlink" title="3.2 测试"></a>3.2 测试</h3><p>在Windows10访问(192.168.0.200),可以看到从VIP地址通过负载均衡访问到了Squid资源地址<br><img src="https://img-blog.csdnimg.cn/20200627165745933.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>查看日志:<br>宿主机通过LVS-VIP(192.168.0.200)访问到了Squid2(192.168.0.102),并且Squid2从源站(192.168.0.100)缓存了资源<br><img src="https://img-blog.csdnimg.cn/20200627165803905.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="原理"><a href="#原理" class="headerlink" title="原理"></a>原理</h2><p>此CDN方案原理就是客户端通过访问LVS暴露在外的虚拟地址<code>192.168.0.200</code>,将流量负载均衡到Squid1<code>192.168.0.101</code>或者Squid2<code>192.168.0.102</code>机器上,并且Squid实现了从源站<code>192.168.0.100</code>缓存了资源,当以后的流量想要访问源站时,直接从Squid服务器缓存中得到,大幅度减少了源站的压力。</p>]]></content>
<summary type="html">
<p>CDN详情查看我这篇文章:<a href="https://blog.csdn.net/qq_43442524/article/details/106924003" target="_blank" rel="noopener">https://blog.csdn.net/q
</summary>
<category term="CDN" scheme="https://plutoacharon.github.io/categories/CDN/"/>
<category term="CDN" scheme="https://plutoacharon.github.io/tags/CDN/"/>
</entry>
<entry>
<title>直播技术原理:CDN技术详解</title>
<link href="https://plutoacharon.github.io/2020/07/05/%E7%9B%B4%E6%92%AD%E6%8A%80%E6%9C%AF%E5%8E%9F%E7%90%86%EF%BC%9ACDN%E6%8A%80%E6%9C%AF%E8%AF%A6%E8%A7%A3/"/>
<id>https://plutoacharon.github.io/2020/07/05/直播技术原理:CDN技术详解/</id>
<published>2020-07-05T14:05:21.000Z</published>
<updated>2020-07-05T14:05:38.135Z</updated>
<content type="html"><![CDATA[<h2 id="背景"><a href="#背景" class="headerlink" title="背景"></a>背景</h2><p>随着互联网应用的迅速发展与网络流量的大幅度激增,用户对网站的加速需求日益增长。由于 CDN 技术能够及时解决网站的响应速度问题,并对网站的稳定性起了较大的提升作用,因此受到了业界的很大关注。 </p><p>不同于网站镜像的单纯内容复制,CDN 技术更加智能,可以用这样一个式子来解释 CDN 与镜像的关系: CDN=更智能的镜像+缓存+流量调度。 从上面的关系式可以看出,CDN 能够明显提高网络中数据流动的效率,从而解决网络带宽不足、 用户访问量过大以及内容分布不均等问题,提升用户的网站访问体验。 许多我国国内的网站出于业务需要,将源站服务器放在欧美地区。 这样一来,物理距离距中国太远,普遍 Ping 所需的时间都在 100ms 以上,使网站的用户会感觉到访问速率比较慢,访问体验度方面有所下降。所以 CDN 技术首先要解决的就是物理距离远所导致的访问速率降低问题。</p><p> 通过 CDN 技术,在中国香港、中国台湾等地区和日本、韩国等国家部署 CDN 节点进行数据分发,即使源站放置在遥远的欧美地区,中国用户的访问速率也会得到明显的改善。 最初 CDN 的提出,就是为了通过就近提供服务来解决物理距离过远导致性能不好的问题。使用 CDN 后,网络的基本组织架构和内容传输情况发生了很大变化。从普通网站用户的角度上看, CDN 节点的作用就相当于把一个网站就近部署在用户周围。 CDN 服务器会像源站服务器一样,为用户提供需要的内容服务。但是,由于 CDN 节点更靠近用户,因而能够更快地响应用户的请求。</p><p>以视频网站为例, 使用CDN 服务后,对服务请求进行了优化调度,更加有效地利用了带宽资源,使得视频加载时间减少,性能提高。<br><img src="https://img-blog.csdnimg.cn/2020062315100033.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><p>总的来说, CDN 对互联网应用的优化作用主要体现在以下几个方面: </p><ul><li>缓解源站服务器访问压力,解决服务器端的“第一千米”问题</li><li>优化热点内容的分布,合理缓存,减轻骨干网传输的流量压力</li><li>提升用户的访问质量和体验,全面提高网站访问速度</li><li>增强网站服务的可靠性,解决网站突发峰值流量问题</li><li>解决不同电信运营商之间互联互通问题造成的影响</li><li>提高安全性,有效防止异常流量对源站的攻击</li></ul><h2 id="CDN-基本概念"><a href="#CDN-基本概念" class="headerlink" title="CDN 基本概念"></a>CDN 基本概念</h2><h3 id="1-CON-的定义"><a href="#1-CON-的定义" class="headerlink" title="1. CON 的定义"></a>1. CON 的定义</h3><p>内容分发网络(Content Delivery Network, CDN),是在现有网络中增加一层新的网络架 构。 CDN 将源站的内容发布和传送到最靠近用户的边缘地区,使用户可以就近访问想要的内 容,从而提高用户访问的响应速度。 CDN 的基本原理是依靠放置在各地的缓存服务器,通过全局调度以及内容分发等功能模 块,将用户需要的那部分内容部署到最贴近用户的地方,将原本低效、不可靠的四网络转变成高效、可靠的智能网络,满足用户对内容访问质量的更高要求, 改善互联网网络拥塞问题, 提高用户访问网站的响应速度。<br>从字面意义上可以看出, CDN 的构成元素为内容 (Content)、分发(Delivery) 以及网 络(Network)。</p><p><strong>(1) 内容</strong></p><p>CDN 的内容通常是以下两种: 静态内容以及动态内容。</p><ul><li>静态内容:内容不经常更改,并且一旦它在 CDN 缓存中,可以由许多用户进行访问,缓存性强。 </li><li>动态内容:内容用于特定的用户或组,并且更新频率较高,通常来自源服务器并实时发送到CDN 中,缓存性较弱。对于用户的每一次请求, CDN 都必须从源站服务器拉取动态内容,所以动态内容加速的常用方法就是降低源站服务器和用户终端之间的传输时延。</li></ul><p><strong>(2)分发</strong> </p><p>CDN 的分发是指利用一定传送策略,将用户请求的内容发布到距离该用户最近的节点。</p><p><strong>(3)网络</strong></p><p>CDN 由成千上万个分布式服务器组成,通过服务器的通信,把内容分发和传送给终端用 户。<br>CDN 各节点之间是通过电信运营商的宽带网络进行通信的,可以说 CDN 是在电信运营 商的网络之上的一层网络。</p><h3 id="2-CON-可承载的内容"><a href="#2-CON-可承载的内容" class="headerlink" title="2. CON 可承载的内容"></a>2. CON 可承载的内容</h3><p>用户在向网站发起访问请求时,如果等待一定时间网站还没有响 应,用户就会放弃访问,而镜像通常不适用于大规模商业网站加速,因此,CDN 加速需求应运而生。 </p><p>静态内容是最早出现的 CDN 承载的内容类型,以文字、图片、动画等更新频率低的内容为主。</p><p>因此,CDN 技术最初就是用来对这些静态内容网页进行加速的。 </p><p>后来,随着互联网的大幅度升温、宽带的普及,用户利用互联网下载所需文件已经成为一种习惯,因此, CDN 对下载业务的加速服务也是必不可少的。 </p><p>近年来,大量视频网站涌现,流媒体流量随之迅速攀升,从而驱动了 CDN 技术的应用重点也逐步转为流媒体加速服务。 随着互联网技术的发展,社交网络、在线支付以及网络游戏等实时性强、内容经常更新 的互联网应用逐渐产生,因此,CDN技术也从静态内容的加速发展到动态内容的加速。 </p><p>从互联网应用的角度看, 需要CDN承载的内容主要为静态内容和动态内容。</p><h3 id="3-CDN-的工作过程"><a href="#3-CDN-的工作过程" class="headerlink" title="3. CDN 的工作过程"></a>3. CDN 的工作过程</h3><p>CDN 服务与传统网络服务最大的差别在于访问方式。传统情况下,用户发起访问请求后, 对于同一个内容的所有用户请求,都集中在同一个目标服务器上。</p><p> 而利用 CDN 加速后,用户的内容请求解析权交给了 CDN 的调度系统,然后将用户请求引导到性能最佳的最靠近用户的 CDN 节点上, 最终该节点为用户请求提供服务。 </p><p>传统的访问方式,造成了在网络中传输的极大压力,并且还无法保证用户的良好访问体验。 而使用 CDN 服务后,用户的访问请求不会集中在相同的目标服务器上,而是会分散到不同节点,在这种情况下,用户请求就不会跨地区,并且骨干网也不需要承担过重的流量负担,进而使得用户访问质量得到保证。 </p><p>下面介绍 CDN 的基本工作过程,包括内容注入、用户请求调度、内容分发以 及内容服务这 4 个步骤。<br> <strong>(1)内容注入</strong></p><p>内容注入是 CDN 能为用户提供服务的第一步,是内容从源站注入 CDN 的过程,使得用 户能从 CDN 系统中获取源站的内容。</p><p><strong>(2)用户请求调度</strong></p><p>用户请求调度是用户向网站发起访问请求, 最终用户被引导到最佳的有内容的 CDN 节点的过程,具体如下:</p><p> (a)当用户向网站发起访问请求时,经由本地 DNS 系统解析,本地 DNS 会通过递归方式将域名的解析权最终交给 CDN 授权 DNS 服务器 CGSLB);</p><p> (b) CDNGSLB 可将 CDN 节点设备的回地址返回用户,也可以将另一个负责解析用户 终端 IP 地址的 GSLB 设备的 IP 地址返回用户 </p><p>(c)用户向 CDN 的 GSLB 设备发起内容访问请求(IP 调度方式)</p><p> (d) CDN 的 GSLB 设备根据用户 E 地址以及用户请求的内容 URL,选择一台用户所属地区的本地负载均衡 (SLB) 设备,并让用户向该 SLB 发起访问请求;</p><p> (e)该 SLB 设备通过决策选择一台最佳的服务器为用户提供服务,用户向该服务器发起访问请求;</p><p> (f) 若该服务器内容未命中,而 SLB 仍将该服务器分配给用户, 则该服务器需要向上级 节点请求内容,然后,由该服务器向用户提供“边拉边放”的服务或者由上级节点直接为用户提供服务。 </p><p><strong>(3)内容分发</strong></p><p>当用户发起请求时,对于用户想要的内容,一部分已经预先直接推送到靠近用户的节点;<br>但是,当下级节点上并没有用户想要的内容时,就要通过向上级节点拉取内容的方式,把用户想要的内容分发到下级节点。</p><p> <strong>(4)内容服务</strong> </p><p>把找到的最靠近用户的 CDN 节点中的内容交付给终端用户。</p><p><img src="https://img-blog.csdnimg.cn/20200623154953556.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="4-CON-内容接入"><a href="#4-CON-内容接入" class="headerlink" title="4. CON 内容接入"></a>4. CON 内容接入</h3><p>CDN 内容接入是指内容从内容源接入 CDN 的行为<br><img src="https://img-blog.csdnimg.cn/20200623155453673.png" alt="在这里插入图片描述"><br>当互联网应用希望 由集中式部署向分布式的 CDN 部署转变时,首先要考虑、通过对接 CDN 将现有集中式部署的 内容转移到 CDN 中。 CDN 内容接入有 3 种接入方式:内容存储接入方式、内容预注入方式、实时回源方式,<br>这 3 种内容接入方式的适用场景及业务流程均有较大不同。</p><p><strong>(1) 内容存储接入</strong> </p><p>内容存储接入指源站(互联网应用的内容源)在发布内容前,提前把内容注入 CDN。内 容存储接入方式下,业务系统需主动向 CDN 内容库发送操作指令, CDN 根据指令获得内容并存储在 CDN 内容库中,从而在终端访问 CDN 时直接由 CDN 向终端提供内容,无需再从源站获取,提升了终端用户体验。 采用内容存储接入方式接入的内容将永久存储在 CDN 中,直到通过内容接入操作指令对该内容显式删除。 CDN 的内容存储接入包括对注入内容的增加、删除和更新,能够通过业 务系统或手工方式主动发起内容删除操作并立即实现全网删除。</p><p><strong>(2)内容预注入</strong></p><p>内容预注入是指源站在内容发布之前将内容注入 CDN 中 。 内容预注入与内容存储接入方式类似,都是由业务系统主动向 CDN 发送操作指令, CDN 根据指令预先从内容源回源获取内容,是就近提供服务的接入方式。 但采用内容预注入方式接入的内容并不永久存储在 CDN 中,而仅仅是进行内容缓存, CDN 会根据内容访问的热度情况对缓存内容进行智能删除,预注入内容可以设定一段时间不被删除的内容保护期。采用内容预注入方式接入的内容当被缓存删除后, CDN 仍可以通过回源方式获取内容提供服务。</p><p> <strong>(3)实时回源</strong></p><p>实时回源 (未注入〉是指源站在内容发布之前不向 CDN 注入内容,但当用户内容访问请求时, CDN 实时地从源站拉取内容。 </p><p>内容回源是指对于非托管模式的内容接入,当 CDN 收到业务系统内容预注入指令或用户内容服务请求而本地没有内容时,向内容源请求并获取内容接入 CDN 的行为。</p><p>实时回源方式无需由业务系统主动向 CDN 预先注入内容,而是在终端访问 CDN 时,通过回源方式向内容源实时获取内容到 CDN 中,向终端提供后续就近缓存服务。 </p><p>内容存储接入方式对用户的服务质量保障最佳,但对 CDN 的资源消耗较大,成本较高,适用于 IPTV 等对质量要求极高的业务应用。 </p><p>实时回源获取方式对 CDN 资源消耗较小,成本较低, 但对用户的服务质量保障比不上内容存储接入方式, 一般在网站等业务应用上使用, 是目前 CDN 的最主要接入方式。</p><p>内容预注入方式介于内容存储接入与实时获取方式,互联网服务提供商可根据自有业务的需求选择合适的内容接入方式。</p><h3 id="5-CON-用户请求调度"><a href="#5-CON-用户请求调度" class="headerlink" title="5. CON 用户请求调度"></a>5. CON 用户请求调度</h3><p>通常情况下, CDN 用户的内容访问请求调度如图 2-4 所示,分为两个层次:全局调度和本地调度</p><p><strong>(1) 全局调度</strong></p><p>全局调度的主要目的是根据用户所在地理位置不同,在各个节点之间进行分析决策,将 用户请求转移到整个网络中最靠近用户的节点。</p><p>全局调度方式目前主要有基于 DNS 调度方式和基于应用层重定向调度两种方式。</p><p><strong>(2)本地调度</strong> </p><p>和全局调度系统相比,本地调度通常被限制在一定地区范围内,并且更加关注 CDN 服务器设备具体的健康状况与负载情况,根据实时响应时间,将任务分配给最适合的服务器设备进行处理,进行更精细粒度的调度决策,实现真正的智能通信和发挥服务器集群最佳性能。 本地调度的意义在于充分利用现有设备,有效地解决了用户访问请求过多引起的系统负载过重的问题。<br><img src="https://img-blog.csdnimg.cn/20200623155841888.png" alt="在这里插入图片描述"></p><h3 id="6-CON-内容分发"><a href="#6-CON-内容分发" class="headerlink" title="6. CON 内容分发"></a>6. CON 内容分发</h3><p>互联网应用的响应时间通常是由网络带宽、路由时延、网站处理能力以及物理距离等因素决定的。其中,物理距离过长对互联网应用的响应时间有最直接的影响,会使响应速度变得十分缓慢。</p><p>因此,利用 CDN 技术把最热的内容分发部署到各地的节点上。</p><p>内容为不同地区的用户提供就近服务,就能够有效地提高互联网应用的响应速度<br><img src="https://img-blog.csdnimg.cn/20200623160225538.png" alt="在这里插入图片描述"><br>内容的分发有 Push 和 Pull 以及混合分发共 3 种实现方式。</p><p> <strong>(1) Push 方式</strong></p><p>Push 是一种主动分发的方式。通常, Push 由 CDN 内容管理系统发起,将内容从源站或 者中心内容库主动分发到各边缘的 CDN 节点,分发的协议可以采用 HTTP、 FTP 等。通过 Push 分发的内容一般是比较热点的内容,这些内容通过 Push 方式预先主动分发到边缘CDN 节点,可以实现有针对性的内容提供。对于 Push 分发需要考虑的主要问题是分发策略,即在什么时候分发什么内容。 一般来说, Push 分发是一种智能的主动分发策略。 可以通过用户访问的统计信息(例如, 热度级别排序信息)和己经预先设定的内容分发的规则,智能地决定是否进行内容主动分发。 并且可以根据用户历史访问数据等,建立回归模型,对于智能预测用户可能会大量访问的内容,将其提前推送到边缘节点。 </p><p><strong>(2) Pull 方式</strong></p><p>Pull 是一种被动的分发方式, Pull 分发通常由用户请求驱动。 当用户请求的内容在本地 的边缘 CDN 节点上不存在(未命中)时, 该 CDN 节点启动 Pull 方式从内容源或者其他 CDN 节点实时拉取内容, 并且在 Pull 方式下,内容是按需分发的。 在实际的 CDN 系统中, 一般两种分发方式都支持,但是根据内容的类型和业务模式的 不同,在选择主要的内容分发方式时会有所不同。通常, Push 方式适合内容访问比较集中的情况,例如,热点的流媒体视频内容: Pull 方式比较适合内容访问分散的情况。</p><p> <strong>(3)混合分发方式</strong> </p><p>混合分发方式就是 Push 与 Pull 分发方式结合的一种机制。混合分发方式有多种方案, 最常见的混合分发机制,是利用 Push 方式进行内容预推,后续则使用 Pull 方式拉取。 混合分发方式能够根据当前内容分发系统中的内容服务状况,采用推拉的方式动态地调 整内容在内容分发系统中的分布,对于热点内容主动将其推送(缓存)到边缘节点。</p><h2 id="典型的CDN架构与组网"><a href="#典型的CDN架构与组网" class="headerlink" title="典型的CDN架构与组网"></a>典型的CDN架构与组网</h2><h3 id="1-CDN-功能平面"><a href="#1-CDN-功能平面" class="headerlink" title="1. CDN 功能平面"></a>1. CDN 功能平面</h3><p>CDN 从功能上可以划分为包含管理平面、调度平面和数据平面在内的 3 个逻辑平面。</p><p>其中,在管理平面的管理和控制下实现内容分发和推送,在调度平面完成用户请求的调度、控制以及各种内容调度策略,在数据平面实现内容分发与服务实体。 </p><p>CDN 管理平面主要用来完成业务管理、网络管理、分发策略管理、 内容接入管理以及回源分发管理等一系列管理功能,管理、监控并保障 CDN 承载业务的高效运营。 </p><p>CDN 调度平面主要实现用户服务请求调度(包括 DNS 调度、 HTTP 调度、 RTSP 调度)内容定位、内容路由等功能,通过控制用户服务请求的调度,实现对用户的就近及有保障的服务。 </p><p>CDN 数据平面主要是内容分发与服务实体,主要负责为用户提供内容分发与应用服务, 包括内容存储、 内容缓存、内容分发、内容转码、内容服务等功能。<br><img src="https://img-blog.csdnimg.cn/20200623162457464.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200623162517670.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="CDN-部署架构"><a href="#CDN-部署架构" class="headerlink" title="CDN 部署架构"></a>CDN 部署架构</h2><p>节点是指多台物理设备在某地理区域范围内作为一个整体对外提供内容服务。每个 CDN 节点通常包含多台服务器设备,节点是 CDN 系统最基本的组成单元。 CDN 系统设计的主要目标是尽量提高用户请求的响应速度,为达到这一目标, CDN节点部署的原则是尽量将内容存放在最靠近用户的位置。</p><p>通俗地说,就是将为用户提供实际内容服务的服务器部署在网络的边缘位置。 中心节点层保存着完整的内容副本,当用户请求的内容在边缘层未命中时, 中心层可能会为用户直接提供服务,也可能由下级边缘节点向中心节点请求拉取内容,再分发到边缘节点为用户提供直接服务。但是,当用户请求量过大时,若大量边缘节点都直接向中心节点请求拉取内容,会造成中心层压力过大,这时,就需要考虑在边缘节点层和中心节点层之间部署一个区域层。区域层保存了部分内容副本,使其能够分发内容并在边缘节点未命中时提供服务,以减轻中心节点的压力。这样,就形成了 CDN 的三级系统架构。<br><img src="https://img-blog.csdnimg.cn/20200623162707202.png" alt="在这里插入图片描述"><br>从节点构成的角度来说,无论是 CDN 区域层节点还是 CDN 边缘层节点,都是由缓存设 备和本地负载均衡设备( SLB)构成的。在一个 CDN 节点中,缓存设备和本地负载均衡设备 的连接方式有两种: 一种是旁路方式,另一种是穿越方式<br><img src="https://img-blog.csdnimg.cn/20200623162743125.png" alt="在这里插入图片描述"><br>在穿越方式下, SLB 一般由四七层交换机实现, SLB 向外提供可访问(公网〉的虚拟 IP (VIP)地址,每台缓存设备分配不同的私网 IP 地址,该 SLB 连接下挂的所有缓存设备构成一个服务单元。所有用户请求经由该 SLB 设备,再由该 SLB 向上向下进行转发。 SLB 实际上承担了网络地址转换(Network Address Translation, NAT)功能,向用户隐藏了各台缓存设备设备的IP地址。这种方式是 CDN 系统中应用较多的方式,优点是具有较高的安全性和可靠性,但是,当节点容量大时,四七层交换机容易形成性能瓶颈。</p><p>在旁路方式下, SLB 有两种实现方式。 在早期, SLB 一般由软件实现。 SLB 和缓存设备都具有公共的 IP 地址, SLB 和缓存是并联关系。用户需要先访问 SLB 设备,然后再以重定 向的方式访问特定的缓存。 这种实现方式简单灵活,扩展性好,缺点是安全性较差,而且需要依赖于应用层重定向 。 随着技术的发展,四七层交换机也可采用旁路部署方式,旁挂在路由交换设备上, 数据流量通过三角传输方式进行转发。</p><h2 id="CDN-间组网"><a href="#CDN-间组网" class="headerlink" title="CDN 间组网"></a>CDN 间组网</h2><p>当 CDN 覆盖范围或能力不足,或需要多厂商时, CDN 可以进行组网。不同 CDN 的共同组网目标是实现 CDN 分发与服务能力的共享,各 CDN 通过标准接口实现互联互通。 CDN 共同组网根据服务的场景及各 CDN 的功能与性能不同,可选择不同的组网架构, 典型的组网逻辑可分为以下两种。</p><p><strong>(1)并联组网</strong> </p><p>源站同时接入多个 CDN,边’过用户请求调度层面进行流量分配,不同 CDN 共同承载内容。<br><img src="https://img-blog.csdnimg.cn/20200623163424979.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>并联组网方式需要把用户流量通过 CNAME 引导到一个用户请求流量调度系统,由该调度系统把请求分配至不同 CDN。 不同 CDN 间不进行内容的分发与服务互联,均需与源站系统进行互联的实现内容注入,或分别回源获取内容,再独立进行分发服务。在一个区域内引 入多家 CDN 服务提供商向用户提供 CDN 服务时, 一般采用这种组网方式。</p><p><strong>(2)级联组网</strong> </p><p>源站接入上游 CDN,上游 CDN 再进一步和下游其他 CDN 对接,上游 CDN 和下游 CDN 除调度层面外, CDN 内容分发与服务层面也进行互联,共同组成一张统一的 CDN<br><img src="https://img-blog.csdnimg.cn/20200623163521175.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><p>级联组网方式需要 CDN 承载的业务系统只对接一个 CDN (上游 CDN),向该 CDN 注入 内容或由该 CDN 向源站获取内容,并由该 CDN 决定用户调度和内容分发策略,把用户请求调度到其他下游CDN, 再由下游 CDN 通过 CDN 间的分发服务或回源接口实现上下游 CDN 间的互联,向最终用户提供服务。 为保证服务质量,需要服务的内容也可以通过内容预注入的方式通过上游 CDN 提前注入下游 CDN 中。</p>]]></content>
<summary type="html">
<h2 id="背景"><a href="#背景" class="headerlink" title="背景"></a>背景</h2><p>随着互联网应用的迅速发展与网络流量的大幅度激增,用户对网站的加速需求日益增长。由于 CDN 技术能够及时解决网站的响应速度问题,并对网站的稳
</summary>
<category term="CDN" scheme="https://plutoacharon.github.io/categories/CDN/"/>
<category term="CDN" scheme="https://plutoacharon.github.io/tags/CDN/"/>
</entry>
<entry>
<title>Centos7防火墙与IPTABLES详解</title>
<link href="https://plutoacharon.github.io/2020/07/05/Centos7%E9%98%B2%E7%81%AB%E5%A2%99%E4%B8%8EIPTABLES%E8%AF%A6%E8%A7%A3/"/>
<id>https://plutoacharon.github.io/2020/07/05/Centos7防火墙与IPTABLES详解/</id>
<published>2020-07-05T14:04:14.000Z</published>
<updated>2020-07-05T14:04:36.213Z</updated>
<content type="html"><![CDATA[<h2 id="防火墙定义"><a href="#防火墙定义" class="headerlink" title="防火墙定义"></a>防火墙定义</h2><p><img src="https://img-blog.csdnimg.cn/20200613163654977.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="防火墙分类"><a href="#防火墙分类" class="headerlink" title="防火墙分类"></a>防火墙分类</h3><p><img src="https://img-blog.csdnimg.cn/20200613163718367.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="无状态包过滤防火墙"><a href="#无状态包过滤防火墙" class="headerlink" title="无状态包过滤防火墙"></a>无状态包过滤防火墙</h3><p>基于单个IP报文进行操作,每个报文都是独立分析 </p><ul><li>默认规则 <ul><li>一切未被允许的都是禁止的 </li><li>一切未被禁止的都是允许的 </li></ul></li><li>规则特征 <ul><li>协议类型,如TCP、UDP、ICMP、IGMP等 </li><li>源和目的IP地址和端口 </li><li>TCP标记,如SYN、ACK、FIN、RST等 </li><li>网络层协议选项,如ICMP ECHO、ICMP REPLY等 </li><li>报文的传递方向,如进入接口还是从接口发出 </li><li>报文流过的接口名,如eth0<br><img src="https://img-blog.csdnimg.cn/20200613163957487.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200613164134991.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><h3 id="有状态包过滤防火墙"><a href="#有状态包过滤防火墙" class="headerlink" title="有状态包过滤防火墙"></a>有状态包过滤防火墙</h3>自动归类属于同一个会话的所有报文,实现会话的跟踪功能</li></ul></li><li>建立报文的会话状态表,利用状态表跟踪每个会话状态对于内部主机对外部主机的连接请求,防火墙可以认为这是一个会话的开始</li><li>访问控制策略<ul><li>报文流动方向和所属服务</li><li>发起会话和接受会话的终端地址范围</li><li>会话各阶段的状态</li></ul></li></ul><p><img src="https://img-blog.csdnimg.cn/20200613164256609.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h4 id="会话状态表"><a href="#会话状态表" class="headerlink" title="会话状态表"></a>会话状态表</h4><p><img src="https://img-blog.csdnimg.cn/202006131643026.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="代理防火墙分类"><a href="#代理防火墙分类" class="headerlink" title="代理防火墙分类"></a>代理防火墙分类</h3><p>应用层代理</p><ul><li>为特定的应用服务提供代理服务,对应用层协议进行解析,也称为应用层网关</li><li>优点是实现用户控制、可以对应用层数据进行细粒度的控制,缺点是效率较</li></ul><p>低电路层代理</p><ul><li>工作在传输层,相当于传输层的中继,能够在两个TCP/UDP套接字之间复制数据</li><li>可以同时为不同的应用层协议提供支持</li><li>无法提供应用层协议的解析和安全性检查<h2 id="IPTABLES防火墙"><a href="#IPTABLES防火墙" class="headerlink" title="IPTABLES防火墙"></a>IPTABLES防火墙</h2><h3 id="IPTABLE的表、链结构"><a href="#IPTABLE的表、链结构" class="headerlink" title="IPTABLE的表、链结构"></a>IPTABLE的表、链结构</h3>规则链</li><li>规则的作用:对数据包进行过滤或处理</li><li>链的作用:容纳各种防火墙规则</li><li>链的分类依据:处理数据包的不同时机</li></ul><p>默认包括5种规则链</p><ul><li>INPUT:处理入站数据包</li><li>OUTPUT:处理出站数据包</li><li>FORWARD:处理转发数据包</li><li>POSTROUTING链:在进行路由选择后处理数据包</li><li>PREROUTING链:在进行路由选择前处理数据包</li></ul><p>规则表</p><ul><li>表的作用:容纳各种规则链</li><li>表的划分依据:防火墙规则的作用相似</li></ul><p>默认包括4个规则表</p><ul><li>raw表:确定是否对该数据包进行状态跟踪</li><li>mangle表:为数据包设置标记</li><li>nat表:修改数据包中的源、目标IP地址或端口</li><li>filter表:确定是否放行该数据包(过滤)<br><img src="https://img-blog.csdnimg.cn/20200613165657221.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>规则表之间的顺序</li><li>raw -> mangle -> nat -> filter</li></ul><p>规则链之间的顺序</p><ul><li>入站:PREROUTING -> INPUT</li><li>出站:OUTPUT -> POSTROUTING</li><li>转发:PREROUTING -> FORWARD -> POSTROUTING</li></ul><p>规则链内的匹配顺序</p><ul><li>按顺序依次检查,匹配即停止(LOG策略例外)</li><li>若找不到相匹配的规则,则按该链的默认策略处理</li></ul><p><img src="https://img-blog.csdnimg.cn/20200613170033575.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="语法构成"><a href="#语法构成" class="headerlink" title="语法构成"></a>语法构成</h3><p>iptables [-t 表名] 选项 [链名] [条件] [-j 控制类型]</p><p><code>[root@localhost ~]# iptables -t filter -I INPUT -p icmp -j REJECT</code></p><p>几个注意事项</p><ul><li>不指定表名时,默认指filter表</li><li>不指定链名时,默认指表内的所有链</li><li>除非设置链的默认策略,否则必须指定匹配条件</li><li>选项、链名、控制类型使用大写字母,其余均为小写</li></ul><p>数据包的常见控制类型</p><ul><li>ACCEPT:允许通过</li><li>DROP:直接丢弃,不给出任何回应</li><li>REJECT:拒绝通过,必要时会给出提示</li><li>LOG:记录日志信息,然后传给下一条规则继续匹配</li></ul><p>添加新的规则<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">-A:在链的末尾追加一条规则</span><br><span class="line">-I:在链的开头(或指定序号)插入一条规则</span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># iptables -t filter -A INPUT -p tcp -j ACCEPT </span></span><br><span class="line">[root@localhost ~]<span class="comment"># iptables -I INPUT -p udp -j ACCEPT </span></span><br><span class="line">[root@localhost ~]<span class="comment"># iptables -I INPUT 2 -p icmp -j ACCEPT</span></span><br></pre></td></tr></table></figure><p>查看规则列表<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">-L:列出所有的规则条目</span><br><span class="line">-n:以数字形式显示地址、端口等信息</span><br><span class="line">-v:以更详细的方式显示规则信息</span><br><span class="line">--line-numbers:查看规则时,显示规则的序号</span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># iptables -n -L INPUT </span></span><br><span class="line">Chain INPUT (policy ACCEPT)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 </span><br><span class="line">ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 </span><br><span class="line">REJECT icmp -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable ACCEPT </span><br><span class="line">tcp -- 0.0.0.0/0 0.0.0.0/0</span><br></pre></td></tr></table></figure><p>删除、清空规则<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">-D:删除链内指定序号(或内容)的一条规则</span><br><span class="line">-F:清空所有的规则</span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># iptables -D INPUT 3 </span></span><br><span class="line">[root@localhost ~]<span class="comment"># iptables -n -L INPUT Chain </span></span><br><span class="line">INPUT (policy ACCEPT) target prot opt <span class="built_in">source</span> destination </span><br><span class="line">ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 </span><br><span class="line">ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 </span><br><span class="line">ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0</span><br></pre></td></tr></table></figure><p><img src="https://img-blog.csdnimg.cn/20200613171401463.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<h2 id="防火墙定义"><a href="#防火墙定义" class="headerlink" title="防火墙定义"></a>防火墙定义</h2><p><img src="https://img-blog.csdnimg.cn/20200613163654977.pn
</summary>
<category term="Liunx" scheme="https://plutoacharon.github.io/categories/Liunx/"/>
<category term="Liunx" scheme="https://plutoacharon.github.io/tags/Liunx/"/>
</entry>
<entry>
<title>解决Centos系统设置静态ip时报错 ping: www.baidu.com: Name or service not known</title>
<link href="https://plutoacharon.github.io/2020/07/05/%E8%A7%A3%E5%86%B3Centos%E7%B3%BB%E7%BB%9F%E8%AE%BE%E7%BD%AE%E9%9D%99%E6%80%81ip%E6%97%B6%E6%8A%A5%E9%94%99-ping-www-baidu-com-Name-or-service-not-known/"/>
<id>https://plutoacharon.github.io/2020/07/05/解决Centos系统设置静态ip时报错-ping-www-baidu-com-Name-or-service-not-known/</id>
<published>2020-07-05T14:03:34.000Z</published>
<updated>2020-07-05T14:03:57.077Z</updated>
<content type="html"><![CDATA[<p>具体设置静态IP可以查看我这篇文章:<br><a href="https://blog.csdn.net/qq_43442524/article/details/100077107?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522159185994819725247601171%2522%252C%2522scm%2522%253A%252220140713.130102334.pc%255Fblog.%2522%257D&request_id=159185994819725247601171&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_v2~rank_blog_v1-1-100077107.pc_v2_rank_blog_v1&utm_term=%E9%9D%99%E6%80%81" target="_blank" rel="noopener">Centos7下NAT设置静态ip</a></p><h2 id="问题"><a href="#问题" class="headerlink" title="问题"></a>问题</h2><p>设置静态以后发现 ==ping: <a href="http://www.baidu.com" target="_blank" rel="noopener">www.baidu.com</a>: Name or service not known==</p><p>但是ping网关192.168.233.2,DNS服务器8.8.8.8与114.114.114.114都能ping通</p><p>并且设置完静态显示正常 Xshell也可以<strong>正常连接</strong><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># ip a</span></span><br><span class="line">1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000</span><br><span class="line"> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00</span><br><span class="line"> inet 127.0.0.1/8 scope host lo</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line"> inet6 ::1/128 scope host </span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000</span><br><span class="line"> link/ether 00:0c:29:15:b8:04 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 192.168.233.128/24 brd 192.168.233.255 scope global noprefixroute ens33</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">[root@localhost ~]<span class="comment"># ping 8.8.8.8</span></span><br><span class="line">PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.</span><br><span class="line">64 bytes from 8.8.8.8: icmp_seq=3 ttl=128 time=48.4 ms</span><br><span class="line">64 bytes from 8.8.8.8: icmp_seq=9 ttl=128 time=47.0 ms</span><br><span class="line">64 bytes from 8.8.8.8: icmp_seq=10 ttl=128 time=46.7 ms</span><br><span class="line">^C</span><br><span class="line">--- 8.8.8.8 ping statistics ---</span><br><span class="line">10 packets transmitted, 3 received, 70% packet loss, time 9006ms</span><br><span class="line">rtt min/avg/max/mdev = 46.738/47.412/48.467/0.776 ms</span><br><span class="line">[root@localhost ~]<span class="comment"># ping 114.114.114.114</span></span><br><span class="line">PING 114.114.114.114 (114.114.114.114) 56(84) bytes of data.</span><br><span class="line">64 bytes from 114.114.114.114: icmp_seq=1 ttl=128 time=26.7 ms</span><br><span class="line">64 bytes from 114.114.114.114: icmp_seq=2 ttl=128 time=26.4 ms</span><br><span class="line">64 bytes from 114.114.114.114: icmp_seq=3 ttl=128 time=24.9 ms</span><br></pre></td></tr></table></figure></p><p>修改<code>/etc/resolv.conf</code>文件也无果<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># cat /etc/resolv.conf </span></span><br><span class="line"><span class="comment"># Generated by NetworkManager</span></span><br><span class="line">nameserver 8.8.8.8</span><br><span class="line">nameserver 114.114.114.114</span><br><span class="line">nameserver 192.168.233.2</span><br><span class="line">[root@localhost ~]<span class="comment"># route -n</span></span><br><span class="line">Kernel IP routing table</span><br><span class="line">Destination Gateway Genmask Flags Metric Ref Use Iface</span><br><span class="line">0.0.0.0 192.168.233.2 0.0.0.0 UG 100 0 0 ens33</span><br><span class="line">192.168.233.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33</span><br></pre></td></tr></table></figure></p><h2 id="解决"><a href="#解决" class="headerlink" title="解决"></a>解决</h2><p>解决DNS解析错误问题无果后 尝试使用<code>dhclient</code>命令分配dhcp地址<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># dhclient</span></span><br><span class="line">[root@localhost ~]<span class="comment"># ip a</span></span><br><span class="line">1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000</span><br><span class="line"> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00</span><br><span class="line"> inet 127.0.0.1/8 scope host lo</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line"> inet6 ::1/128 scope host </span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000</span><br><span class="line"> link/ether 00:0c:29:15:b8:04 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 192.168.233.128/24 brd 192.168.233.255 scope global noprefixroute ens33</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line"> inet 192.168.233.129/24 brd 192.168.233.255 scope global secondary dynamic ens33</span><br><span class="line"> valid_lft 1770sec preferred_lft 1770sec</span><br></pre></td></tr></table></figure></p><p>可以发现运行完<code>dhclient</code>命令后出现了<br><code>inet 192.168.233.129/24 brd 192.168.233.255 scope global secondary dynamic ens33 valid_lft 1770sec preferred_lft 1770sec</code><br>这一行代表网卡被分配了额外的dhcp地址 现在进行ping <a href="http://www.baidu.com" target="_blank" rel="noopener">www.baidu.com</a><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># ping www.baidu.com</span></span><br><span class="line">PING www.a.shifen.com (220.181.38.150) 56(84) bytes of data.</span><br><span class="line">64 bytes from 220.181.38.150: icmp_seq=1 ttl=128 time=4.80 ms</span><br><span class="line">64 bytes from 220.181.38.150: icmp_seq=2 ttl=128 time=5.43 ms</span><br><span class="line">64 bytes from 220.181.38.150: icmp_seq=3 ttl=128 time=8.02 ms</span><br><span class="line">64 bytes from 220.181.38.150: icmp_seq=4 ttl=128 time=5.12 ms</span><br><span class="line">^C</span><br><span class="line">--- www.a.shifen.com ping statistics ---</span><br><span class="line">4 packets transmitted, 4 received, 0% packet loss, time 3005ms</span><br><span class="line">rtt min/avg/max/mdev = 4.809/5.849/8.026/1.276 ms</span><br></pre></td></tr></table></figure></p><p>推测dns目前由dhcp到的<code>192.168.233.129</code>地址解析</p>]]></content>
<summary type="html">
<p>具体设置静态IP可以查看我这篇文章:<br><a href="https://blog.csdn.net/qq_43442524/article/details/100077107?ops_request_misc=%257B%2522request%255Fid%2522
</summary>
<category term="Liunx" scheme="https://plutoacharon.github.io/categories/Liunx/"/>
<category term="Liunx" scheme="https://plutoacharon.github.io/tags/Liunx/"/>
</entry>
<entry>
<title>解决docker中修改docker.daemon文件后启动失败</title>
<link href="https://plutoacharon.github.io/2020/05/17/%E8%A7%A3%E5%86%B3docker%E4%B8%AD%E4%BF%AE%E6%94%B9docker-daemon%E6%96%87%E4%BB%B6%E5%90%8E%E5%90%AF%E5%8A%A8%E5%A4%B1%E8%B4%A5/"/>
<id>https://plutoacharon.github.io/2020/05/17/解决docker中修改docker-daemon文件后启动失败/</id>
<published>2020-05-17T14:18:14.000Z</published>
<updated>2020-05-17T14:18:58.021Z</updated>
<content type="html"><![CDATA[<h2 id="在-docker-配置文件中设置"><a href="#在-docker-配置文件中设置" class="headerlink" title="在 docker 配置文件中设置"></a>在 docker 配置文件中设置</h2><p>docker 1.12 版本之后, 建议在 docker 的 js 配置文件中配置, 路径为 /etc/docker/daemon.json 默认没有这个文件, 可以手动创建此文件, docker 启动时默认会读取此配置文件<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">{</span><br><span class="line"> "registry-mirrors": ["https://6y2639ye.mirror.aliyuncs.com"]</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>我这里配置的加速源</p><p>在一次误操作中 动了<code>/usr/lib/systemd/system/docker.service</code>下的文件 报错:<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]# systemctl status docker.service</span><br><span class="line">● docker.service - Docker Application Container Engine</span><br><span class="line"> Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)</span><br><span class="line"> Active: failed (Result: start-limit) since 四 2020-05-14 10:19:16 CST; 25s ago</span><br><span class="line"> Docs: https://docs.docker.com</span><br><span class="line"> Process: 2493 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)</span><br><span class="line"> Main PID: 2493 (code=exited, status=1/FAILURE)</span><br><span class="line"></span><br><span class="line">5月 14 10:19:14 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.</span><br><span class="line">5月 14 10:19:14 localhost.localdomain systemd[1]: Unit docker.service entered failed state.</span><br><span class="line">5月 14 10:19:14 localhost.localdomain systemd[1]: docker.service failed.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: docker.service holdoff time over, scheduling restart.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: Stopped Docker Application Container Engine.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: start request repeated too quickly for docker.service</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: Unit docker.service entered failed state.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: docker.service failed.</span><br></pre></td></tr></table></figure></p><h2 id="解决"><a href="#解决" class="headerlink" title="解决"></a>解决</h2><p> 是因为 docker 的 socket 配置出现了冲突, 接下来查看 docker 的启动入口文件<br> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">> vim /lib/systemd/system/docker.service # Ubuntu的路径; CentOS 的路径为: /usr/lib/systemd/system/docker.service</span><br><span class="line"></span><br><span class="line">ExecStart=/usr/bin/dockerd -H fd://</span><br><span class="line">修改为</span><br><span class="line">ExecStart=/usr/bin/dockerd</span><br></pre></td></tr></table></figure></p><p>从上面可以看出, 在 docker 的启动入口文件中配置了 host 相关的信息, 而在 docker 的配置文件中也配置了 host 的信息, 所以发生了冲突. 解决办法, 建议将 docker 启动入口文件中的 -H fd:// 删除, 再重启 docker 服务即可<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># systemctl daemon-reload</span></span><br><span class="line">[root@localhost ~]<span class="comment"># systemctl start docker</span></span><br></pre></td></tr></table></figure></p>]]></content>
<summary type="html">
<h2 id="在-docker-配置文件中设置"><a href="#在-docker-配置文件中设置" class="headerlink" title="在 docker 配置文件中设置"></a>在 docker 配置文件中设置</h2><p>docker 1.12 版本
</summary>
<category term="Docker" scheme="https://plutoacharon.github.io/categories/Docker/"/>
<category term="Dokcer" scheme="https://plutoacharon.github.io/tags/Dokcer/"/>
</entry>
<entry>
<title>Python垃圾回收与内存管理</title>
<link href="https://plutoacharon.github.io/2020/05/12/Python%E5%9E%83%E5%9C%BE%E5%9B%9E%E6%94%B6%E4%B8%8E%E5%86%85%E5%AD%98%E7%AE%A1%E7%90%86/"/>
<id>https://plutoacharon.github.io/2020/05/12/Python垃圾回收与内存管理/</id>
<published>2020-05-12T14:42:31.000Z</published>
<updated>2020-05-12T14:42:53.246Z</updated>
<content type="html"><![CDATA[<p>@[toc]</p><h1 id="Python垃圾回收"><a href="#Python垃圾回收" class="headerlink" title="Python垃圾回收"></a>Python垃圾回收</h1><p>引用计数器为主,标记清除和分代回收为辅+缓存机制</p><h2 id="1-引用计数器"><a href="#1-引用计数器" class="headerlink" title="1. 引用计数器"></a>1. 引用计数器</h2><h3 id="1-1-环状双向链表-refchain"><a href="#1-1-环状双向链表-refchain" class="headerlink" title="1.1 环状双向链表 refchain"></a>1.1 环状双向链表 refchain</h3><p>在Python程序中创建的任何对象都会放在<code>refchain</code>中</p><p><code>static PyObject refchain = {&refchain, &refchain}</code></p><p><img src="https://img-blog.csdnimg.cn/20200509213439801.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>在Python程序中创建的任何对象都会放在<code>refchain</code>链表中</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">str1 = <span class="string">"str"</span></span><br><span class="line">num1 = <span class="number">1</span></span><br><span class="line">list1 = [<span class="string">"1"</span>,<span class="string">"2"</span>]</span><br></pre></td></tr></table></figure><p>当进行上述操作时,Python内部会创建一些数据(上一个对象,下一个对象,类型,引用个数,元素个数)</p><p><code>include/object.h</code></p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">define</span> _PyObject_HEAD_EXTRA \</span></span><br><span class="line"> <span class="class"><span class="keyword">struct</span> _<span class="title">object</span> *_<span class="title">ob_next</span>;</span> \</span><br><span class="line"> <span class="class"><span class="keyword">struct</span> _<span class="title">object</span> *_<span class="title">ob_prev</span>;</span></span><br><span class="line"> </span><br><span class="line"><span class="meta">#<span class="meta-keyword">define</span> PyObject_HEAD PyObject ob_base;</span></span><br><span class="line"> </span><br><span class="line"><span class="meta">#<span class="meta-keyword">define</span> PyObject_VAR_HEAD PyVarObject ob_base;</span></span><br><span class="line"> </span><br><span class="line"> </span><br><span class="line"><span class="keyword">typedef</span> <span class="class"><span class="keyword">struct</span> _<span class="title">object</span> {</span></span><br><span class="line"> _PyObject_HEAD_EXTRA <span class="comment">// 用于构造双向链表</span></span><br><span class="line"> Py_ssize_t ob_refcnt; <span class="comment">// 引用计数器</span></span><br><span class="line"> <span class="class"><span class="keyword">struct</span> _<span class="title">typeobject</span> *<span class="title">ob_type</span>;</span> <span class="comment">// 数据类型</span></span><br><span class="line">} PyObject;</span><br><span class="line"> </span><br><span class="line"> </span><br><span class="line"><span class="keyword">typedef</span> <span class="class"><span class="keyword">struct</span> {</span></span><br><span class="line"> PyObject ob_base; <span class="comment">// PyObject对象</span></span><br><span class="line"> Py_ssize_t ob_size; <span class="comment">/* Number of items in variable part,即:元素个数 */</span></span><br><span class="line">} PyVarObject;</span><br></pre></td></tr></table></figure><p>2个结构体</p><ul><li><strong>PyObject</strong>,此结构体中包含3个元素。<ul><li>_PyObject_HEAD_EXTRA,用于构造双向链表。</li><li>ob_refcnt,引用计数器。</li><li>ob_type,数据类型。</li></ul></li><li><strong>PyVarObject</strong>,次结构体中包含4个元素(ob_base中包含3个元素)<ul><li>ob_base,PyObject结构体对象,即:包含PyObject结构体中的三个元素。</li><li>ob_size,内部元素个数。</li></ul></li></ul><p>3个宏定义</p><ul><li>PyObject_HEAD,代指PyObject结构体。</li><li>PyVarObject_HEAD,代指PyVarObject对象。</li><li>_PyObject_HEAD_EXTRA,代指前后指针,用于构造双向队列。</li></ul><p>Python中所有类型创建对象时,底层都是与PyObject和PyVarObject结构体实现,一般情况下由单个元素组成对象内部会使用PyObject结构体(float)、由多个元素组成的对象内部会使用PyVarObject结构体(str/int/list/dict/tuple/set/自定义类),因为由多个元素组成的话是需要为其维护一个 ob_size(内部元素个数)。</p><p><strong>PyObject:float</strong></p><p><strong>PyVarObject:list、dict、tuple、set、int、str、bool</strong></p><p>因为Python中的int是不限制长度的,所以底层实现是用的str,所以int也属于PyVarObject阵营。Python中的bool实际上是0和1,所以也是int,也属于PyVarObject阵营。</p><h3 id="1-2-类型封装结构体"><a href="#1-2-类型封装结构体" class="headerlink" title="1.2 类型封装结构体"></a>1.2 类型封装结构体</h3><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">// float类型</span></span><br><span class="line"><span class="keyword">typedef</span> <span class="class"><span class="keyword">struct</span> {</span></span><br><span class="line"> PyObject_HEAD</span><br><span class="line"> <span class="keyword">double</span> ob_fval;</span><br><span class="line">} PyFloatObject;</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">data = <span class="number">1.11</span></span><br><span class="line">内部会创建:</span><br><span class="line"> _ob_netx = refchain的上一个对象</span><br><span class="line"> _ob_prev = refchain的下一个对象</span><br><span class="line"> ob_refcnt = <span class="number">1</span> </span><br><span class="line"> ob_type = float</span><br><span class="line"> ob_fval = <span class="number">1.11</span></span><br></pre></td></tr></table></figure><h3 id="1-3-引用计数器"><a href="#1-3-引用计数器" class="headerlink" title="1.3 引用计数器"></a>1.3 引用计数器</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">v1 = <span class="number">1.11</span></span><br><span class="line">v2 = <span class="number">1</span></span><br><span class="line">v3 = (<span class="number">1</span>,<span class="number">2</span>,<span class="number">3</span>)</span><br></pre></td></tr></table></figure><p>当python程序运行时,会根据数据类型的不同找到对应的结构体,根据结构体中的字段来进行创建相关的数据,然后将对象添加到refchain双线链表中。</p><p>每个对象中有<code>ob_refcnt</code>引用计数器,值默认为1,当有其他变量引用对象时,引用计数器就会发生变化。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">a = <span class="number">1</span></span><br><span class="line">b = a</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">a = <span class="number">1</span></span><br><span class="line">b = a</span><br><span class="line"><span class="keyword">del</span> b <span class="comment"># b变量删除: b对应的对象引用器-1</span></span><br><span class="line"><span class="keyword">del</span> a <span class="comment"># a变量删除: a对用的对象引用其-1</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 当一个对象的引用计数器为0时,意味着没有人使用这个对象, 这个对象就是垃圾, 垃圾回收</span></span><br><span class="line"><span class="comment"># 回收: </span></span><br><span class="line">- 对象从refchain链表中移除</span><br><span class="line">- 将对象销毁, 内存归还</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 创建对象并初始化引用计数器为1</span></span><br><span class="line">num1 = <span class="number">1</span></span><br><span class="line">num2 = num1 <span class="comment"># 计数器+1</span></span><br><span class="line">num3 = num1 <span class="comment"># 计数器+1</span></span><br><span class="line">num4 = num1 <span class="comment"># 计数器+1</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 创建对象并初始化引用计数器为1</span></span><br><span class="line">str1 = <span class="string">"str"</span> <span class="comment"># 计数器+1</span></span><br><span class="line">str2 = str1 <span class="comment"># 计数器+1</span></span><br></pre></td></tr></table></figure><p><img src="https://img-blog.csdnimg.cn/20200509213502784.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="1-4-循环引用的问题"><a href="#1-4-循环引用的问题" class="headerlink" title="1.4 循环引用的问题"></a>1.4 循环引用的问题</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">list1 = [<span class="number">1</span>,<span class="number">2</span>,<span class="number">3</span>] </span><br><span class="line">list2 = [<span class="number">1</span>,<span class="number">2</span>,<span class="number">3</span>]</span><br><span class="line">list1.append(list2) <span class="comment"># 把v2追加到v1中, v2对应的引用计数器加1</span></span><br><span class="line">list2.append(list1) <span class="comment"># 把v1追加到v2中, v1对应的引用计数器加1</span></span><br></pre></td></tr></table></figure><p> list1与list2相互引用,如果不存在其他对象对它们的引用,list1与list2的引用计数也仍然为1,所占用的内存永远无法被回收,这将是致命的。</p><p> 对于如今的强大硬件,缺点1尚可接受,但是循环引用导致内存泄露,注定python还将引入新的回收机制。</p><h2 id="2-标记清除"><a href="#2-标记清除" class="headerlink" title="2. 标记清除"></a>2. 标记清除</h2><p>目的:为了解决引用计数器循环引用的不足</p><p>实现:在Python的底层再维护一个链表,链表中专门放可能存在循环引用的对象(list/tuple/dict/set)</p><p><img src="https://img-blog.csdnimg.cn/20200509213523566.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="\[外链图片转存失败,源站可能有防盗链机制,建议将图片保存下来直接上传(img-Y2sWHo9q-1589031264635)(../../Images/image-20200509205236478.png)\]"></p><p>在Python内部<code>某种情况</code>触发, 会去扫描<code>可能存在循环应用的链表</code>中的每个元素, 检查是否有循环引用, 如果有则让双方的引用计数器-1; 如果是0则进行垃圾回收</p><p>问题:</p><ul><li>什么时候扫描</li><li>可能存在循环引用的链表扫描代价大,每次扫描时间久</li></ul><h2 id="3-分代回收"><a href="#3-分代回收" class="headerlink" title="3. 分代回收"></a>3. 分代回收</h2><p>将可能存在循环应用的对象维护成3个链表:</p><ul><li>0代:0代中对象的个数达到700个扫描一次</li><li>1代:0代扫描10次,则1代扫描一次</li><li>2代:1代扫描10次,则2代扫描一次</li></ul><h2 id="4-小结"><a href="#4-小结" class="headerlink" title="4. 小结"></a>4. 小结</h2><p>在Python中维护了一个<code>refchain</code>的双向环状链表, 这个链表中存储程序创建的所有对象, 每种类型的对象中都有一个<code>ob_refcnt</code>引用计数器的值, 引用个数+1, -1 , 最后当引用计数器变成0时会进行垃圾回收(对象销毁, 从refchain中移除)</p><p>但是. 在Python中对于那些可以有多个元素组成的对象可能会存在循环引用的问题, 为了解决这个问题Python引入了标记清除和分带回收, 在其内部维护了4个链表</p><ul><li>refchain</li><li>0代</li><li>1代</li><li>2代</li></ul><p>在源码内部当达到各自的阈值时, 就会触发扫描链表进行标记清除的动作(有循环则各自-1)</p><h1 id="Python缓存"><a href="#Python缓存" class="headerlink" title="Python缓存"></a>Python缓存</h1><h2 id="1-池"><a href="#1-池" class="headerlink" title="1. 池"></a>1. 池</h2><p>为了避免重复创建和销毁一些常见对象, Python建立了维护池</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 启动解释器时, python内部帮我们创建: -5,-4...257</span></span><br><span class="line">v1 = <span class="number">7</span> <span class="comment"># 内部不会开辟内存, 直接去池中获取</span></span><br><span class="line">v2 = <span class="number">8</span> <span class="comment"># 内部不会开辟内存, 直接去池中获取</span></span><br></pre></td></tr></table></figure><h2 id="2-free-list"><a href="#2-free-list" class="headerlink" title="2. free_list"></a>2. free_list</h2><p>当一个对象的引用计数器为0时, 按理说应该回收, 但是内部不会直接回收, 而是将对象添加到<code>free_list</code>链表中当缓存。以后再去创建对象时,不再重新开辟内存,而是直接使用<code>free_list</code></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">v1 = <span class="number">1.11</span> <span class="comment"># 开辟内存, 内存存储结构体中定义那几个值, 并存到refchain中</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">del</span> v1 <span class="comment"># refchain中移除, 将对象添加到free_list中(80)个, free_list满了则销毁</span></span><br><span class="line"></span><br><span class="line">v2 = <span class="number">2.22</span> <span class="comment"># 不会重新开辟内存, 去free_list中获取对象, 对象内部数据初始化, 再放到refchain中</span></span><br></pre></td></tr></table></figure><ul><li><p>float类型,维护的free_list链表最多可缓存100个float对象。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">v1 = <span class="number">3.14</span> <span class="comment"># 开辟内存来存储float对象,并将对象添加到refchain链表。 </span></span><br><span class="line">print( id(v1) ) <span class="comment"># 内存地址:4436033488 </span></span><br><span class="line"><span class="keyword">del</span> v1 <span class="comment"># 引用计数器-1,如果为0则在rechain链表中移除,不销毁对象,而是将对象添加到float的free_list. </span></span><br><span class="line">v2 = <span class="number">9.999</span> <span class="comment"># 优先去free_list中获取对象,并重置为9.999,如果free_list为空才重新开辟内存。 </span></span><br><span class="line">print( id(v2) ) <span class="comment"># 内存地址:4436033488 </span></span><br><span class="line"><span class="comment"># 注意:引用计数器为0时,会先判断free_list中缓存个数是否满了,未满则将对象缓存,已满则直接将对象销毁。</span></span><br></pre></td></tr></table></figure></li><li><p>int类型,不是基于free_list,而是维护一个small_ints链表保存常见数据(小数据池),小数据池范围:<code>-5 <= value < 257</code>。即:重复使用这个范围的整数时,不会重新开辟内存。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">v1 = <span class="number">38</span> <span class="comment"># 去小数据池small_ints中获取38整数对象,将对象添加到refchain并让引用计数器+1。 </span></span><br><span class="line">print( id(v1)) <span class="comment">#内存地址:4514343712 </span></span><br><span class="line">v2 = <span class="number">38</span> <span class="comment"># 去小数据池small_ints中获取38整数对象,将refchain中的对象的引用计数器+1。 </span></span><br><span class="line">print( id(v2) ) <span class="comment">#内存地址:4514343712 </span></span><br><span class="line"><span class="comment"># 注意:在解释器启动时候-5~256就已经被加入到small_ints链表中且引用计数器初始化为1,代码中使用的值时直接去small_ints中拿来用并将引用计数器+1即可。另外,small_ints中的数据引用计数器永远不会为0(初始化时就设置为1了),所以也不会被销毁。</span></span><br></pre></td></tr></table></figure></li><li><p>str类型,维护<code>unicode_latin1[256]</code>链表,内部将所有的<code>ascii字符</code>缓存起来,以后使用时就不再反复创建。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">v1 = <span class="string">"A"</span> </span><br><span class="line">print( id(v1) ) <span class="comment"># 输出:4517720496 </span></span><br><span class="line"><span class="keyword">del</span> v1 v2 = <span class="string">"A"</span> </span><br><span class="line">print( id(v1) ) <span class="comment"># 输出:4517720496 # 除此之外,Python内部还对字符串做了驻留机制,针对那么只含有字母、数字、下划线的字符串(见源码Objects/codeobject.c),如果内存中已存在则不会重新在创建而是使用原来的地址里(不会像free_list那样一直在内存存活,只有内存中有才能被重复利用)。 </span></span><br><span class="line">v1 = <span class="string">"wupeiqi"</span> </span><br><span class="line">v2 = <span class="string">"wupeiqi"</span> </span><br><span class="line">print(id(v1) == id(v2)) <span class="comment"># 输出:True</span></span><br></pre></td></tr></table></figure></li><li><p>list类型,维护的free_list数组最多可缓存80个list对象。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">v1 = [<span class="number">11</span>,<span class="number">22</span>,<span class="number">33</span>] </span><br><span class="line">print( id(v1) ) <span class="comment"># 输出:4517628816 </span></span><br><span class="line"><span class="keyword">del</span> v1 v2 = [<span class="string">"武"</span>,<span class="string">"沛齐"</span>] </span><br><span class="line">print( id(v2) ) <span class="comment"># 输出:4517628816</span></span><br></pre></td></tr></table></figure></li><li><p>tuple类型,维护一个free_list数组且数组容量20,数组中元素可以是链表且每个链表最多可以容纳2000个元组对象。元组的free_list数组在存储数据时,是按照元组可以容纳的个数为索引找到free_list数组中对应的链表,并添加到链表中。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">v1 = (<span class="number">1</span>,<span class="number">2</span>) </span><br><span class="line">print( id(v1) ) </span><br><span class="line"><span class="keyword">del</span> v1 <span class="comment"># 因元组的数量为2,所以会把这个对象缓存到free_list[2]的链表中。 </span></span><br><span class="line">v2 = (<span class="string">"武沛齐"</span>,<span class="string">"Alex"</span>) <span class="comment"># 不会重新开辟内存,而是去free_list[2]对应的链表中拿到一个对象来使用。 </span></span><br><span class="line">print( id(v2) )</span><br></pre></td></tr></table></figure></li><li><p>dict类型,维护的free_list数组最多可缓存80个dict对象。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">v1 = {<span class="string">"k1"</span>:<span class="number">123</span>} </span><br><span class="line"> print( id(v1) ) <span class="comment"># 输出:4515998128 </span></span><br><span class="line"> <span class="keyword">del</span> v1 v2 = {<span class="string">"name"</span>:<span class="string">"武沛齐"</span>,<span class="string">"age"</span>:<span class="number">18</span>,<span class="string">"gender"</span>:<span class="string">"男"</span>} </span><br><span class="line"> print( id(v1) ) <span class="comment"># 输出:4515998128</span></span><br></pre></td></tr></table></figure></li></ul><p>这个老师讲的通俗易懂, 非常棒, 更多详细的解释:<code>https://pythonav.com/wiki/detail/6/88/</code></p><p>参考资料:</p><p><code>https://www.bilibili.com/video/BV1Ei4y1b7mo?p=2</code></p><p><code>https://my.oschina.net/hebianxizao/blog/57367</code></p><p><code>https://www.cnblogs.com/wupeiqi/articles/11507404.html</code></p>]]></content>
<summary type="html">
<p>@[toc]</p>
<h1 id="Python垃圾回收"><a href="#Python垃圾回收" class="headerlink" title="Python垃圾回收"></a>Python垃圾回收</h1><p>引用计数器为主,标记清除和分代回收为辅+缓存机制
</summary>
<category term="Python" scheme="https://plutoacharon.github.io/categories/Python/"/>
<category term="Python" scheme="https://plutoacharon.github.io/tags/Python/"/>
</entry>
<entry>
<title>git push文件夹时报错Fatal: HttpRequestException encountered.</title>
<link href="https://plutoacharon.github.io/2020/05/12/git-push%E6%96%87%E4%BB%B6%E5%A4%B9%E6%97%B6%E6%8A%A5%E9%94%99Fatal-HttpRequestException-encountered/"/>
<id>https://plutoacharon.github.io/2020/05/12/git-push文件夹时报错Fatal-HttpRequestException-encountered/</id>
<published>2020-05-12T14:41:22.000Z</published>
<updated>2020-05-12T14:42:16.458Z</updated>
<content type="html"><![CDATA[<p>在使用git push时报出如下的错误:<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">$ git push -u origin master</span><br><span class="line">fatal: HttpRequestException encountered.</span><br><span class="line"> 发送请求时出错。</span><br><span class="line">fatal: HttpRequestException encountered.</span><br><span class="line"> 发送请求时出错。</span><br><span class="line">Username <span class="keyword">for</span> <span class="string">'https://github.com'</span>:</span><br></pre></td></tr></table></figure></p><p>之前时不需要输入的,现在需要输入了,原因是git更新了一个证书,我们本地需要再更新以下:<br><a href="https://github.com/microsoft/Git-Credential-Manager-for-Windows/releases" target="_blank" rel="noopener">https://github.com/microsoft/Git-Credential-Manager-for-Windows/releases</a><br>进去后点击下载安装 GCMW最新版即可:<br><img src="https://img-blog.csdnimg.cn/20200506152021834.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<p>在使用git push时报出如下的错误:<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="lin
</summary>
<category term="GitHub" scheme="https://plutoacharon.github.io/categories/GitHub/"/>
<category term="GitHub" scheme="https://plutoacharon.github.io/tags/GitHub/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(八)---- 基于Docker配置NFS实现Nginx动静分离</title>
<link href="https://plutoacharon.github.io/2020/05/12/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E5%85%AB%EF%BC%89-%E5%9F%BA%E4%BA%8EDocker%E9%85%8D%E7%BD%AENFS%E5%AE%9E%E7%8E%B0Nginx%E5%8A%A8%E9%9D%99%E5%88%86%E7%A6%BB/"/>
<id>https://plutoacharon.github.io/2020/05/12/HA高可用与负载均衡入门到实战(八)-基于Docker配置NFS实现Nginx动静分离/</id>
<published>2020-05-12T14:40:37.000Z</published>
<updated>2020-05-12T14:40:53.177Z</updated>
<content type="html"><![CDATA[<h2 id="NFS介绍"><a href="#NFS介绍" class="headerlink" title="NFS介绍"></a>NFS介绍</h2><p>NFS 是Network File System的缩写,即网络文件系统。一种使用于分散式文件系统的协定,由Sun公司开发,于1984年向外公布。功能是通过网络让不同的机器、不同的操作系统能够彼此分享个别的数据,让应用程序在客户端通过网络访问位于服务器磁盘中的数据,是在类Unix系统间实现磁盘文件共享的一种方法。</p><p>NFS 的基本原则是“容许不同的客户端及服务端通过一组RPC分享相同的文件系统”,它是独立于操作系统,容许不同硬件及操作系统的系统共同进行文件的分享。</p><p>NFS在文件传送或信息传送过程中依赖于RPC协议。RPC,远程过程调用 (Remote Procedure Call) 是能使客户端执行其他系统中程序的一种机制。NFS本身是没有提供信息传输的协议和功能的,但NFS却能让我们通过网络进行资料的分享,这是因为NFS使用了一些其它的传输协议。而这些传输协议用到这个RPC功能的。可以说NFS本身就是使用RPC的一个程序。或者说NFS也是一个RPC SERVER。所以只要用到NFS的地方都要启动RPC服务,不论是NFS SERVER或者NFS CLIENT。这样SERVER和CLIENT才能通过RPC来实现PROGRAM PORT的对应。可以这么理解RPC和NFS的关系:NFS是一个文件系统,而RPC是负责负责信息的传输。</p><h2 id="什么是RPC"><a href="#什么是RPC" class="headerlink" title="什么是RPC"></a>什么是RPC</h2><p>由于NFS支持的功能相当多,而不同的功能都会使用不同的程序来启动,每启动一个功能就会启用一些端口来传输数据,因此,NFS的功能所对应的端口才无法固定,而是随机取用一些未使用的端口来作为传输之用,其中centos5.x随机端口为小于1024的,而centos6.x随机端口都是较大的。</p><p>因为端口不固定,这样一来就会造成客户端与NFS服务器端的通讯障碍,由于NFS客户端必须要知道NFS服务器端的数据传输端口才能进行通信交互数据。</p><p>解决以上问题,我们需要RPC服务来帮忙,NFS的RPC服务主要的功能是记录每个NFS功能所对应的端口号,并且在NFS客户端请求时将该端口和功能对应的信息传递给请求数据的NFS客户端,从而可以确保客户端连接正确的NFS端口上去,达到实现数据传输交互数据目的。RPC相当于NFS服务的中介。</p><p>如图所示:NFS工作流程简图</p><p><img src="https://img-blog.csdnimg.cn/20200430190602221.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><p>大致如以下几点:</p><p>1、首先用户访问网站程序,由程序在NFS客户端上发出NFS文件存取功能的询问请求,这时NFS客户端(即执行程序的服务器)RPC服务(portmap或rpcbind服务)就会通过网络向NFS服务端的RPC服务(portmap或rpcbind)的111端口发出NFS文件存取功能的询问请求。</p><p>2、NFS服务器端的RPC服务(即portmap或rpcbind)找到对应的已注册的NFS daemon端口后,通知NFS客户端的RPC服务(即portmap或rpcbind服务)</p><p>3、此时NFS客户端就可以获取到正确的端口,然后就直接与NFS daemon联机存取数据了。</p><p>4、NFS客户端把数据存取成功后,返回给当前访问程序,告知用户存取结果,作为网站用户,我们就完成了一次存取操作。 由于NFS的各项功能都需要想RPC服务注册,所以RPC服务才能获取到NFS服务的各项功能对应的端口、PID、NFS在主机所监听的IP等,NFS客户端才能够通过向RPC服务询问才找到正确的端口。也就是说,NFS需要有RPC服务的协助才能成功对外提供服务。由上面的描述,我们不难推出:无论是NFS客户端还是NFS服务器端,当要使用NFS时,都需要首先启动RPC服务,然后在启动NFS服务,客户端可以不启动NFS服务。</p><h2 id="安装配置NFS服务器"><a href="#安装配置NFS服务器" class="headerlink" title="安装配置NFS服务器"></a>安装配置NFS服务器</h2><h3 id="使用docker容器配置NFS服务器"><a href="#使用docker容器配置NFS服务器" class="headerlink" title="使用docker容器配置NFS服务器"></a>使用docker容器配置NFS服务器</h3><p>1) 启动centos容器并进入<br>docker run -d –privileged centos:v1 /usr/sbin/init<br>2) 在centos容器中使用yum方式安装nfs-utils<br><code>yum install nfs-utils</code><br>3) 保存容器为镜像</p><p>#docker commit 容器ID nfs<br>4) 启动容器nfs,设定地址为172.18.0.120</p><p>#docker run -d –privileged –net cluster –ip 172.18.0.120 –name nfs nfs /usr/sbin/init</p><p>5) 启动nfs服务,查看监听端口<br><code>systemctl start nfs-server</code></p><p>7) 新建共享目录/var/www/share,设置权限为777</p><p>8) 编辑/etc/exports文件<br><code>/var/www/share 172.18.0.*(rw,sync)</code></p><p>9) 导出nfs共享目录<br><code>exportfs -rv</code><br>10) 查看nfs上的共享目录</p><p>#showmount -e IP地址<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@c90e05748250 /]<span class="comment"># showmount -e 172.18.0.1</span></span><br><span class="line">Export list <span class="keyword">for</span> 172.18.0.1:</span><br><span class="line">/var/www/share 172.18.0.*</span><br></pre></td></tr></table></figure></p><h3 id="使用宿主机配置NFS服务器"><a href="#使用宿主机配置NFS服务器" class="headerlink" title="使用宿主机配置NFS服务器"></a>使用宿主机配置NFS服务器</h3><p>1) <code>yum install nfs-utils</code> //在宿主机安装nfs</p><p>2) 查看nfs配置文件<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">more /etc/nfs.onf </span><br><span class="line">more /etc/nfsmount.conf</span><br></pre></td></tr></table></figure></p><p>3) 启动nfs服务,查看监听端口</p><p><code>systemctl start nfs-server</code></p><p>4) 新建共享目录/var/www/share,设置权限为777</p><p>5) 编辑/etc/exports文件<br><code>/var/www/share 172.18.0.*(rw,sync)</code></p><p>6) 导出nfs共享目录<br><code>#exportfs -rv</code></p><p>7) 查看nfs上的共享目录</p><p>#showmount -e IP地址<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">showmount -e 172.18.0.1</span><br><span class="line">Export list <span class="keyword">for</span> 172.18.0.1:</span><br><span class="line">/var/www/share 172.18.0.*</span><br></pre></td></tr></table></figure></p><h3 id="启用APP1和APP2两个容器,挂载共享目录"><a href="#启用APP1和APP2两个容器,挂载共享目录" class="headerlink" title="启用APP1和APP2两个容器,挂载共享目录"></a>启用APP1和APP2两个容器,挂载共享目录</h3><p>1) 启动容器APP1,设定地址为172.18.0.111<br>docker run -d –privileged –net cluster –ip 172.18.0.111 –name APP1 php-apache /usr/sbin/init<br>2) 启动容器APP2,设定地址为172.18.0.112<br>docker run -d –privileged –net cluster –ip 172.18.0.112 –name APP2 php-apache /usr/sbin/init<br>3) <code>yum install nfs-utils</code> //进入容器并安装nfs<br>4) #showmount -e 172.18.0.1 //在APP1查看nfs上的共享目录<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">showmount -e 172.18.0.1</span><br><span class="line">Export list <span class="keyword">for</span> 172.18.0.1:</span><br><span class="line">/var/www/share 172.18.0.*</span><br></pre></td></tr></table></figure></p><p>5) 共享目录挂在到本地目录<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">mkdir /var/www/share</span><br><span class="line">mount 172.18.0.1:/var/www/share /var/www/share</span><br></pre></td></tr></table></figure></p><p>6) 在APP1的/var/www/share上读写文件,在nfs上查看</p><p>7) APP2按以上步骤配置</p><h2 id="配置nginx1、APP1实现动静分离"><a href="#配置nginx1、APP1实现动静分离" class="headerlink" title="配置nginx1、APP1实现动静分离"></a>配置nginx1、APP1实现动静分离</h2><h3 id="在APP1上编写PHP脚本,上传资源文件"><a href="#在APP1上编写PHP脚本,上传资源文件" class="headerlink" title="在APP1上编写PHP脚本,上传资源文件"></a>在APP1上编写PHP脚本,上传资源文件</h3><p>1) vim /var/www/index.php //在APP1上编辑php文件<br><figure class="highlight php"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta"><?php</span></span><br><span class="line"><span class="function"><span class="keyword">function</span> <span class="title">serverIp</span><span class="params">()</span></span>{ <span class="comment">//获取服务器IP地址</span></span><br><span class="line"> <span class="keyword">if</span>(<span class="keyword">isset</span>($_SERVER)){</span><br><span class="line"> <span class="keyword">if</span>($_SERVER[<span class="string">'SERVER_ADDR'</span>]){</span><br><span class="line"> $server_ip=$_SERVER[<span class="string">'SERVER_ADDR'</span>];</span><br><span class="line"> }<span class="keyword">else</span>{</span><br><span class="line"> $server_ip=$_SERVER[<span class="string">'LOCAL_ADDR'</span>];</span><br><span class="line"> }</span><br><span class="line"> }<span class="keyword">else</span>{</span><br><span class="line"> $server_ip = getenv(<span class="string">'SERVER_ADDR'</span>);</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">return</span> $server_ip;</span><br><span class="line"> }</span><br><span class="line"> <span class="meta">?></span></span><br><span class="line"><!doctype html></span><br><span class="line"><html></span><br><span class="line"><head></span><br><span class="line"><meta charset=<span class="string">"utf-8"</span>></span><br><span class="line"><title>动静分离测试</title></span><br><span class="line"><link rel=<span class="string">"stylesheet"</span> type=<span class="string">"text/css"</span> href=<span class="string">"share/banner.css"</span>></span><br><span class="line"><script type=<span class="string">"text/javascript"</span> src=<span class="string">"share/jquery-1.7.2.min.js"</span>></script></span><br><span class="line"></head></span><br><span class="line"><body></span><br><span class="line"> <div class="banner"></span><br><span class="line"> <ul></span><br><span class="line"> <li><img src=<span class="string">"share/banner_02.jpg"</span> /></li></span><br><span class="line"> <li><img src=<span class="string">"share/banner_01.gif"</span> /></li></span><br><span class="line"> </ul></span><br><span class="line"> </div></span><br><span class="line"> <div class="main_list"></span><br><span class="line"> <ul></span><br><span class="line"> <li><a href=<span class="string">"#"</span>>动静分离测试...</a></li></span><br><span class="line"> <li><a href=<span class="string">"#"</span>>动静分离测试...</a></li></span><br><span class="line"> </ul> </span><br><span class="line"> </div> </span><br><span class="line"> <span><span class="meta"><?php</span> <span class="keyword">echo</span> serverIp(); <span class="meta">?></span></span> </span><br><span class="line"></body></span><br><span class="line"></html></span><br></pre></td></tr></table></figure></p><p>4) 把图片资源文件上传到APP1服务器的 <code>/var/www/share</code>目录</p><p>5) 在宿主机nfs服务器的 /var/www/share目录中检查文件是否存在</p><p>6) 在宿主机使用curl访问<a href="http://172.18.0.111/index.php" target="_blank" rel="noopener">http://172.18.0.111/index.php</a></p><p><img src="https://img-blog.csdnimg.cn/20200430185740896.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="配置nginx反向代理,访问APP1"><a href="#配置nginx反向代理,访问APP1" class="headerlink" title="配置nginx反向代理,访问APP1"></a>配置nginx反向代理,访问APP1</h3><p>1) 启动容器nginx1,设定地址为172.18.0.11,把80端口映射到宿主机8080<br>docker run -d –privileged –net cluster –ip 172.18.0.11 -p 8080:80 –name nginx1 nginx-keep /usr/sbin/init<br>2) 在nginx1上编辑/etc/nginx/nginx.conf,重启nginx服务<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name localhost;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://172.18.0.111;</span><br><span class="line"> }</span><br></pre></td></tr></table></figure></p><p>3) 在主机使用浏览器访问<a href="http://192.168.*.100/index.php" target="_blank" rel="noopener">http://192.168.*.100/index.php</a> </p><p>这里肯定显示不了图片 因为网站的根目录为<code>/var/www/html</code>而share目录在<code>/var/www</code>下</p><p><img src="https://img-blog.csdnimg.cn/20200430185523405.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="配置nginx反向代理,宿主机nginx,支持动静分离"><a href="#配置nginx反向代理,宿主机nginx,支持动静分离" class="headerlink" title="配置nginx反向代理,宿主机nginx,支持动静分离"></a>配置nginx反向代理,宿主机nginx,支持动静分离</h3><p>1) 在nfs宿主机编辑/etc/nginx/conf.d/ default.conf,启用nginx服务<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name file.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> root /var/www;</span><br><span class="line"> index index.html index.htm;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 在nginx1上编辑/etc/nginx/nginx.conf,重启nginx服务<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name localhost;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://172.18.0.111;</span><br><span class="line"> }</span><br><span class="line"> location /share {</span><br><span class="line"> proxy_pass http://172.18.0.1/share;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>3) 在主机使用浏览器访问<a href="http://192.168.*.100/index.php" target="_blank" rel="noopener">http://192.168.*.100/index.php</a><br><img src="https://img-blog.csdnimg.cn/20200430185822475.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="配置nginx1、APP1、APP2、宿主机nfs和nginx,支持负载均衡动静分离"><a href="#配置nginx1、APP1、APP2、宿主机nfs和nginx,支持负载均衡动静分离" class="headerlink" title="配置nginx1、APP1、APP2、宿主机nfs和nginx,支持负载均衡动静分离"></a>配置nginx1、APP1、APP2、宿主机nfs和nginx,支持负载均衡动静分离</h3><p>1) 仿照步骤1,在APP2上编写PHP脚本,上传资源文件<br>3) 在nginx1上编辑/etc/nginx/nginx.conf,重启nginx服务<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name localhost;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://APP;</span><br><span class="line"> }</span><br><span class="line"> location /share {</span><br><span class="line"> proxy_pass http://172.18.0.1/share;</span><br><span class="line"> }</span><br><span class="line">upstream APP {</span><br><span class="line"> server 172.18.0.111;</span><br><span class="line"> server 172.18.0.112;</span><br><span class="line">}</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>4) 在主机使用浏览器访问<a href="http://192.168.*.100/index.php" target="_blank" rel="noopener">http://192.168.*.100/index.php</a><br><img src="https://img-blog.csdnimg.cn/20200430185827671.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<h2 id="NFS介绍"><a href="#NFS介绍" class="headerlink" title="NFS介绍"></a>NFS介绍</h2><p>NFS 是Network File System的缩写,即网络文件系统。一种使用于分散式文件系统的协定,由Sun公司
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>Docker 三剑客之Machine,Compose,Swarm</title>
<link href="https://plutoacharon.github.io/2020/05/12/Docker-%E4%B8%89%E5%89%91%E5%AE%A2%E4%B9%8BMachine%EF%BC%8CCompose%EF%BC%8CSwarm/"/>
<id>https://plutoacharon.github.io/2020/05/12/Docker-三剑客之Machine,Compose,Swarm/</id>
<published>2020-05-12T14:40:08.000Z</published>
<updated>2020-05-12T14:40:22.534Z</updated>
<content type="html"><![CDATA[<h1 id="Docker三剑客"><a href="#Docker三剑客" class="headerlink" title="Docker三剑客"></a>Docker三剑客</h1><p>为了把容器化技术的优点发挥到极致,docker公司先后推出了三大技术</p><ul><li>docker-machine</li><li>docker-compose</li><li>docker-swarm<br>它们可以说是几乎实现了容器化技术中所有可能需要的底层技术手段。<br><img src="https://img-blog.csdnimg.cn/20200426145546753.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70#pic_center" alt="在这里插入图片描述"><blockquote><p>图源: <a href="https://xiaoxiami.gitbook.io/docker/docker-ji-qun" target="_blank" rel="noopener">https://xiaoxiami.gitbook.io/docker/docker-ji-qun</a></p></blockquote></li><li>docker-machine - 提供容器服务</li><li>docker-compose - 提供脚本执行服务,不用在像以前把容器的启动命令写的非常的长,用compose编写脚本就能简化容器的启动</li><li>几条简单指令就可以创建一个docker集群,最终实现分布式的服务<h2 id="Docker-三剑客之-Machine"><a href="#Docker-三剑客之-Machine" class="headerlink" title="Docker 三剑客之 Machine"></a>Docker 三剑客之 Machine</h2>Docker Machine 是 Docker 官方三剑客项目之一 ,负责使用 Docker 容器的第一步 :在多<br>种平台上快速安装和维护 Docker 运行环境 。 它支持多种平 台 ,让用户可以在很短时间内在<br>本地或云环境中搭建一套 Docker 主机集群。</li></ul><h3 id="Machine-简介"><a href="#Machine-简介" class="headerlink" title="Machine 简介"></a>Machine 简介</h3><p>Machine 项目是 Docker 官方的开源项目 ,负责实现对 Docker 运行环境进行安装和管理,特别在管理多个 Docker 环境时,使用 Machine 要比手动管理高效得多。</p><p>Machine 的定位是“在本地或者云环境中创建 Docker 主机” </p><p>其代码在<code>https://github.com/docker/machine</code> 上开源,遵循 Apache-2.0 许可</p><p>Machine 项目主要由 Go 语言编写,用户可以在本地任意指定由 Machine 管理的 Docker主机,并对其进行操作。</p><p>其基本功能包括:</p><ul><li>在指定节点或平台上安装 Docker 引擎,配置其为可使用的 Docker 环境;</li><li>集中管理(包括启动 、查看等)所安装 的Docker 环境。</li></ul><p>Machine 连接不同类型的操作平台是通过对应驱动来实现 的,目前已经集成了包括AWS 、 IBM 、 Google ,以及 OpenStack 、 VirtualBox 、 vSphere 等多种云平台的支持。</p><h3 id="安装"><a href="#安装" class="headerlink" title="安装"></a>安装</h3><p>在 Linux 平台上的安装十分简单,推荐从官方 Release 库<code>https://github.corn/docker/machine/releases</code> 直接下载编译好的二进制文件即可</p><p>在 Linux 64 位系统上直接下载对应的二进制包<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">$ sudo curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine- <span class="string">' uname -s'</span>-<span class="string">'uname -m '</span> > docker-machine </span><br><span class="line">$ sudo mv docker-machine /usr/ <span class="built_in">local</span>/bin/docker-machine </span><br><span class="line">$ sudo chmod +x /usr/<span class="built_in">local</span>/bin/docker-machine</span><br><span class="line">安装完成后,查看版本信息,验证运行正常:</span><br><span class="line">$ docker-machine -v </span><br><span class="line">docker-machine version 0.13.0</span><br></pre></td></tr></table></figure></p><p>当要对多个 Docker 主机环境进行安装、配置和管理时,采用 Docker Machine 的方式将远比手动方式<br>快捷。 不仅提高了操作速度,更通过批量统一的管理减少了出错的可能。 尤其在大规模集群和云平台环境中推荐使用</p><h2 id="Docker-三剑客之-Compose"><a href="#Docker-三剑客之-Compose" class="headerlink" title="Docker 三剑客之 Compose"></a>Docker 三剑客之 Compose</h2><p>编排( Orchestration )功能,是复杂系统是否具有灵活可操作性的关键。 特别在 Docker应用场景中,编排意味着用户可以灵活地对各种容器资源实现定义和管理。</p><p>Compose 作为 Docker 官方编排工具,其重要性不言而喻,它可以让用户通过编写一个简单的模板文件,快速地创建和管理基于 Docker 容器的应用集群。</p><h3 id="Compose-简介"><a href="#Compose-简介" class="headerlink" title="Compose 简介"></a>Compose 简介</h3><p>Compose 项目是 Docker 官方的开源项目,负责实现对基于 Docker 容器的多应用服务的快速编排。 从功能上看,跟 Open Stack 中的 Heat 十分类似。 其代码目前在 <code>https://github .com/docker/compose</code> 巳上开源 。</p><p>Compose 定位是“定义和运行多个 Docker 容器的应用”,其前身是开源项目<code>Fig</code> ,目前仍然兼容 Fig 格式的模板文件。</p><p>在日常工作中,经常会碰到需要多个容器相互配合来完成某项任务的情况。 例如要实现一个 Web 项目,除了 Web 服务容器本身,往往还需要再加上后端的数据库服务容器,甚至还包括前端的负载均衡容器等。</p><p>Compose 恰好满足了这样的需求。 它允许用户通过一个单独的 <code>docker-compose.yml</code>模板文件( YAML 格式)来定义一组相关联的应用容器为一个服务樵( stack ) </p><p>Compose 中有几个重要的概念:</p><ul><li><p>任务( task ) : 一个容器被称为一个任务。 任务拥有独一无二的 ID ,在同一个服务中的多个任务序号依次递增 。</p></li><li><p>服务( service ):某个相同应用镜像的容器副本集合,一个服务可以横向扩展为多个容器实例 。</p></li><li><p>服务枝 ( stack ) :由 多个服务组成 ,相互配合完成特定业务 , 如 Web 应用服务、数据<br>库服务共同构成 Web 服务钱 ,一般由一个 docker-cornpose.yml 文件定义。</p></li></ul><p>Compose 的默认管理对象是服务钱,通过子命令对栈中的多个服务进行便捷的生命周期管理。</p><p>Compose 项目由 Python 编写 ,实现上调用了 Docker 服务提供的 API 来对容器进行管理。</p><p>因此,只要所操作的平台支持 Docker API,就可以在其上利用 Compose 来进行编排管理。</p><h3 id="Compose安装"><a href="#Compose安装" class="headerlink" title="Compose安装"></a>Compose安装</h3><p>二进制包安装</p><p>这些发布的二进制包可以在<code>https://github.com/docker/compose/releases</code> 页面找到 </p><p>将这些二进制文件下载后直接放到执行路径下,并添加执行权限即可。<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">$ sudo curl -L https : //github.com/docker/compose/releases/download/1.19.0/docker-compose- ’ uname -s ’- ’ uname -m’ > / usr/ <span class="built_in">local</span> / bin/ docker-compose </span><br><span class="line">$ sudo chmod a+x /usr/<span class="built_in">local</span>/bin/docker-cornpose</span><br><span class="line">可以使用 docker-compose version 命令来查看版本信息,以测试是否安装成功:</span><br><span class="line"></span><br><span class="line">$ docker-compose version</span><br><span class="line">docker compose version 1.19.0</span><br><span class="line">docker-py version : 2.7.0 </span><br><span class="line">CPython version : 2.7.12 </span><br><span class="line">OpenSSL version : OpenSSL l.0.2g</span><br></pre></td></tr></table></figure></p><p>在 Docker 三剑客中, Compose 掌管运行时的编排能力,位置十分关键。 使用 Compose模板文件,用户可以编写包括若干服务的一个模板文件快速启动服务栈;如果分发给他人,也可快速创建一套相同的服务栈。</p><h2 id="Docker-三剑客之-Swarm"><a href="#Docker-三剑客之-Swarm" class="headerlink" title="Docker 三剑客之 Swarm"></a>Docker 三剑客之 Swarm</h2><p>Docker Swarm 是 Docker 官方三剑客项目之一,提供 Docker 容器集群服务,是 Docker官方对容器云生态进行支持的核心方案。 使用它,用户可以将多个 Docker 主机抽象为大规模的虚拟 Docker 服务,快速打造一套容器云平台</p><h3 id="Swarm-简介"><a href="#Swarm-简介" class="headerlink" title="Swarm 简介"></a>Swarm 简介</h3><p>Docker Swarm 是 Docker 公司推出的官方容器集群平台 , 基于 Go 语言实现,代码开源在 <code>https:// github.com/ docker/swarm</code> </p><p>目前,包括 Rackspace 等平台都采用了 Swarm ,用户也很容易在 AWS 等公有云平台使用 Swarm 。</p><p>Swarm 的前身是 Beam 项目和 libswarm 项目,首个正式版本( Swarm Vl )在 2014 年 12 月初发布 。 为了提高可扩展性, 2016 年 2 月对架构进行重新设计,推出了 V2 版本,支持超过 lK 个节点 。最新的 Docker Engine ( 1.12 后)已经集成SwarmKit 内嵌了对 Swarm 模式的支持。</p><p>作为容器集群管理器, Swarm 最大的优势之一就是原生支持 Docker API ,给用户使用带来极大的便利 。 各种基于标准 A凹的工具比如 Compose 、 Docker SDK 、各种管理软件, 甚至Docker 本身等都可以很容易的与 Swarm 进行集成。 这大大方便了用户将原先基于单节点的系统移植到 Swarm 上。 同时 Swarm 内置了对 Docker 网络插件的支持,用户可以很容易地部署跨主机的容器集群服务。</p><p>Swarm 也采用了典型的“主从”结构<br><img src="https://img-blog.csdnimg.cn/20200426163912544.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="“主从”结构"></p><p>通过 Raft 协议来在多个管理节点( Manager )中实现共识。 工作节点( Worker )上运行 agent 接受管理节点的统一管理和任<br>务分配。 用户提交服务请求只需要发给管理节点即可,管理节点会按照调度策略在集群中分配节点来运行服务相关的任务</p><p>在 Swarm V2 中,集群中会自动通过 Raft 协议分布式选举出 Manager 节点,无须额外的发现服务支持,避免了单点瓶颈。 同时, V2 中内置了基于 DNS 的负载均衡和对外部负载均衡机制的集成支持。</p><h3 id="Swarm-基本概念"><a href="#Swarm-基本概念" class="headerlink" title="Swarm 基本概念"></a>Swarm 基本概念</h3><p>Swarm 在 Docker 基础上扩展了支持多节点的能力,同时兼容了大部分的 Docker 操作。Swarm 中以集群为单位进行管理,支持服务层面的操作。</p><h4 id="1-Swarm-集群"><a href="#1-Swarm-集群" class="headerlink" title="1. Swarm 集群"></a>1. Swarm 集群</h4><p>Swarm 集群( Cluster )为一组被统一管理起来的 Docker 主机。 集群是 Swarm 所管理的对象。 这些主机通过 Docker 引擎的 Swarm 模式相互沟通,其中部分主机可能作为管理节点(manager)响应外部的管理请求,其他主机作为工作节点( worker )来实际运行 Docker 容器。当然,同一个主机也可以即作为管理节点,同时作为工作节点 。</p><p>当用户使用 Swarm 集群时,首先定义一个服务(指定状态、复制个数、网络、存储 、 暴露端- 等),然后通过管理节点发出启动服务的指令,管理节点随后会按照指定的服务规则进行调度,在集群中启动起来整个服务,并确保它正常运行。</p><h4 id="2-节点"><a href="#2-节点" class="headerlink" title="2. 节点"></a>2. 节点</h4><p>节点(Node )是 Swarm 集群的最小资源单位。 每个节点实际上都是一台 Docker 主机。<br>Swarm 集群中节点分为两种:</p><ul><li>管理节点( manager node ): 负责响应外部对集群的操作请求,并维持集群中资源,分发任务给工作节点 。 同时,多个管理节点之间通过 Raft 协议构成共识。 一般推荐每个集群设置 5 个或 7 个管理节点;</li><li>工作节点( worker node ):负责执行管理节点安排的具体任务。 默认情况下,管理节点自身也同时是工作节点 。 每个工作节点上运行代理( agent )来汇报任务完成情况。用户可以通过 docker node promote 命令来提升一个工作节点为管理节点;或者通过docker node demote 命令来将一个管理节点降级为工作节点。<h4 id="3-服务"><a href="#3-服务" class="headerlink" title="3. 服务"></a>3. 服务</h4>服务( Service)是 Docker 支持复杂多容器协作场景的利器。一个服务可以由若干个任务组成,每个任务为某个具体的应用。 服务还包括对应的存储 、 网络 、 端- 映射、副本个数 、 访问配置 、 升级配置等附加参数。一般来说,服务需要面向特定的场景,例如一个典型的 Web 服务可能包括前端应用 、 后<br>端应用,以及数据库等。 这些应用都属于该服务的管理范畴。</li></ul><p>Swarm 集群中服务类型也分为两种(可以通过-mode 指定) :</p><ul><li>复制服务( replicated services )模式 : 默认模式,每个任务在集群中会存在若干副本,<br>这些副本会被管理节点按照调度策略分发到集群中的工作节点上。 此模式下可以使<br>用-replicas 参数设置副本数量 ;</li><li>全局服务( global services )模式 : 调度器将在每个可用节点都执行一个相同的任务。<br>该模式适合运行节点的检查,如监控应用等。<h4 id="4-任务"><a href="#4-任务" class="headerlink" title="4. 任务"></a>4. 任务</h4>任务是 Swarm 集群中最小的调度单位,即一个指定的应用容器。 例如仅仅运行前端业务的前端容器。 任务从生命周期上将可能处于创建( NEW ) 、 等待( PENDING ) 、 分配( ASSIGNED ) 、 接受( ACCEPTED ) 、 准备( PREPARING )、开始( STARTING ) 、 运行 (RUNING) 、 完成(COMPLETE )、失败(FAILED ) 、 关闭(SHUTDOWN) 、 拒绝(PEJECTED ) 、孤立( ORPHANED )等不同状态 。</li></ul><p>Swarm 集群中的管理节点会按照调度要求将任务分配到工作节点上。 例如指定副本为 2时,可能会被分配到两个不同的工作节点上。一旦当某个任务被分配到一个工作节点,将无法被转移到另外的工作节点,即 Swarm 中的任务不支持迁移。</p><h4 id="5-服务的外部访问"><a href="#5-服务的外部访问" class="headerlink" title="5 . 服务的外部访问"></a>5 . 服务的外部访问</h4><p>Swarm 集群中的服务要被集群外部访问,必须要能允许任务的响应端口映射出来。Swarm 中支持入口负载均衡(ingress load balancing )的映射模式。 该模式下,每个服务都会被分配一个公开端口( PublishedPort ),该端口在集群中任意节点上都可以访问到,并被保留给该服务。</p><p>当有请求发送到任意节点的公开端- 时,该节点若并没有实际执行服务相关的容器,则会通过路由机制将请求转发给实际执行了服务容器的工作节点 。</p><p>通过使用 Swarm ,用户可以将若干 Docker 主机节点组成的集群当作一个大的虚拟 Docker 主机使用 。 并且,原先基于单机的Docker 应用,可以无缝地迁移到 Swarm 上来。 通过使用服务, Swarm 集群可以支持多个应用构建的复杂业务,并很容易对其进行升级等操作 。</p><p>在生产环境中, Swarm 的管理节点要考虑高可用性和安全保护,一方面多个管理节点应该分配到不同的容灾区域,另一方面服务节点应该配合数字证书等手段限制访问 。Swarm 功能已 经被无缝嵌入Docker 1.12+版本中,用户今后可 以 直接使用 Docker命令来完成相关功能的配置,对 Swarm 集群的管理更加简便。</p>]]></content>
<summary type="html">
<h1 id="Docker三剑客"><a href="#Docker三剑客" class="headerlink" title="Docker三剑客"></a>Docker三剑客</h1><p>为了把容器化技术的优点发挥到极致,docker公司先后推出了三大技术</p>
<ul
</summary>
<category term="Docker" scheme="https://plutoacharon.github.io/categories/Docker/"/>
<category term="Dokcer" scheme="https://plutoacharon.github.io/tags/Dokcer/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(八)----Kubernetes1.15.1 部署Prometheus</title>
<link href="https://plutoacharon.github.io/2020/05/12/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E5%85%AB-Kubernetes1-15-1-%E9%83%A8%E7%BD%B2Prometheus/"/>
<id>https://plutoacharon.github.io/2020/05/12/Kubernetes-K8s-入门到实践-八-Kubernetes1-15-1-部署Prometheus/</id>
<published>2020-05-12T14:39:38.000Z</published>
<updated>2020-05-12T14:39:48.536Z</updated>
<content type="html"><![CDATA[<h2 id="Prometheus介绍"><a href="#Prometheus介绍" class="headerlink" title="Prometheus介绍"></a>Prometheus介绍</h2><p>随着容器技术的迅速发展,Kubernetes 已然成为大家追捧的容器集群管理系统。Prometheus 作为生态圈 Cloud Native Computing Foundation(简称:CNCF)中的重要一员,其活跃度仅次于 Kubernetes, 现已广泛用于 Kubernetes 集群的监控系统中。</p><p>本文将简要介绍 Prometheus 的组成和相关概念,并实例演示 Prometheus 的安装,配置及使用。</p><h3 id="Prometheus的特点:"><a href="#Prometheus的特点:" class="headerlink" title="Prometheus的特点:"></a>Prometheus的特点:</h3><ul><li>多维度数据模型。</li><li>灵活的查询语言。</li><li>不依赖分布式存储,单个服务器节点是自主的。</li><li>通过基于HTTP的pull方式采集时序数据。</li><li>可以通过中间网关进行时序列数据推送。</li><li>通过服务发现或者静态配置来发现目标服务对象。</li><li>支持多种多样的图表和界面展示,比如Grafana等</li></ul><p><strong>官方架构图</strong><br>官方网站:<a href="https://prometheus.io/" target="_blank" rel="noopener">https://prometheus.io/</a><br><img src="https://img-blog.csdnimg.cn/20200425101507273.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200425101711164.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70#pic_center" alt="在这里插入图片描述"></p><p>Prometheus 生态圈中包含了多个组件,其中许多组件是可选的:</p><ul><li>Prometheus Server: 用于收集和存储时间序列数据。</li><li>Client Library: 客户端库,为需要监控的服务生成相应的 metrics 并暴露给 Prometheus server。当 Prometheus server 来 pull 时,直接返回实时状态的 metrics。</li><li>Push Gateway: 主要用于短期的 jobs。由于这类 jobs 存在时间较短,可能在 Prometheus 来 pull 之前就消失了。为此,这次 jobs 可以直接向 Prometheus server 端推送它们的 metrics。这种方式主要用于服务层面的 metrics,对于机器层面的 metrices,需要使用 node exporter。</li><li>Exporters: 用于暴露已有的第三方服务的 metrics 给 Prometheus。</li><li>Alertmanager: 从 Prometheus server 端接收到 alerts 后,会进行去除重复数据,分组,并路由到对收的接受方式,发出报警。常见的接收方式有:电子邮件,pagerduty,OpsGenie, webhook 等一些其他的工具。</li></ul><h3 id="Prometheus的基本原理"><a href="#Prometheus的基本原理" class="headerlink" title="Prometheus的基本原理"></a>Prometheus的基本原理</h3><p>Prometheus的基本原理是通过HTTP协议周期性抓取被监控组件的状态,任意组件只要提供对应的HTTP接口就可以接入监控。不需要任何SDK或者其他的集成过程。这样做非常适合做虚拟化环境监控系统,比如VM、Docker、Kubernetes等。输出被监控组件信息的HTTP接口被叫做exporter 。目前互联网公司常用的组件大部分都有exporter可以直接使用,比如Varnish、Haproxy、Nginx、MySQL、Linux系统信息(包括磁盘、内存、CPU、网络等等)。</p><h2 id="Prometheus部署"><a href="#Prometheus部署" class="headerlink" title="Prometheus部署"></a>Prometheus部署</h2><h3 id="1-修改-grafana-service-yaml-文件"><a href="#1-修改-grafana-service-yaml-文件" class="headerlink" title="1. 修改 grafana-service.yaml 文件"></a>1. 修改 grafana-service.yaml 文件</h3><p>使用git下载Prometheus项目<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 plugin]<span class="comment"># mkdir prometheus</span></span><br><span class="line">[root@k8s-master01 plugin]<span class="comment"># cd prometheus/</span></span><br><span class="line">[root@k8s-master01 prometheus]<span class="comment"># git clone https://github.com/coreos/kube-prometheus.git</span></span><br><span class="line">正克隆到 <span class="string">'kube-prometheus'</span>...</span><br><span class="line">remote: Enumerating objects: 4, <span class="keyword">done</span>.</span><br><span class="line">remote: Counting objects: 100% (4/4), <span class="keyword">done</span>.</span><br><span class="line">remote: Compressing objects: 100% (4/4), <span class="keyword">done</span>.</span><br><span class="line">remote: Total 8171 (delta 0), reused 1 (delta 0), pack-reused 8167</span><br><span class="line">接收对象中: 100% (8171/8171), 4.56 MiB | 57.00 KiB/s, <span class="keyword">done</span>.</span><br><span class="line">处理 delta 中: 100% (4936/4936), <span class="keyword">done</span>.</span><br><span class="line">[root@k8s-master01 prometheus]<span class="comment"># cd kube-prometheus/manifests/</span></span><br><span class="line">[root@k8s-master01 manifests]<span class="comment"># vim grafana-service.yaml</span></span><br></pre></td></tr></table></figure></p><p>使用 nodepode 方式访问 grafana:<br><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> app:</span> <span class="string">grafana</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">grafana</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">monitoring</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> type:</span> <span class="string">NodePort</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">http</span></span><br><span class="line"><span class="attr"> port:</span> <span class="number">3000</span></span><br><span class="line"><span class="attr"> targetPort:</span> <span class="string">http</span></span><br><span class="line"><span class="attr"> nodePort:</span> <span class="number">30100</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> app:</span> <span class="string">grafana</span></span><br></pre></td></tr></table></figure></p><h3 id="2-修改-修改-prometheus-service-yaml"><a href="#2-修改-修改-prometheus-service-yaml" class="headerlink" title="2. 修改 修改 prometheus-service.yaml"></a>2. 修改 修改 prometheus-service.yaml</h3><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> prometheus:</span> <span class="string">k8s</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">prometheus-k8s</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">monitoring</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> type:</span> <span class="string">NodePort</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">web</span></span><br><span class="line"><span class="attr"> port:</span> <span class="number">9090</span></span><br><span class="line"><span class="attr"> targetPort:</span> <span class="string">web</span></span><br><span class="line"><span class="attr"> nodePort:</span> <span class="number">30200</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> app:</span> <span class="string">prometheus</span></span><br><span class="line"><span class="attr"> prometheus:</span> <span class="string">k8s</span></span><br><span class="line"><span class="attr"> sessionAffinity:</span> <span class="string">ClientIP</span></span><br></pre></td></tr></table></figure><h3 id="3-修改alertmanager-service-yaml"><a href="#3-修改alertmanager-service-yaml" class="headerlink" title="3. 修改alertmanager-service.yaml"></a>3. 修改alertmanager-service.yaml</h3><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> alertmanager:</span> <span class="string">main</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">alertmanager-main</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">monitoring</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> type:</span> <span class="string">NodePort</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">web</span></span><br><span class="line"><span class="attr"> port:</span> <span class="number">9093</span></span><br><span class="line"><span class="attr"> targetPort:</span> <span class="string">web</span></span><br><span class="line"><span class="attr"> nodePort:</span> <span class="number">30300</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> alertmanager:</span> <span class="string">main</span></span><br><span class="line"><span class="attr"> app:</span> <span class="string">alertmanager</span></span><br><span class="line"><span class="attr"> sessionAffinity:</span> <span class="string">ClientIP</span></span><br></pre></td></tr></table></figure><h3 id="4-kubectl-apply-部署"><a href="#4-kubectl-apply-部署" class="headerlink" title="4. kubectl apply 部署"></a>4. kubectl apply 部署</h3><p>进入目录<code>kube-prometheus</code>执行<code>kubectl apply -f manifests/</code><br>报错<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">unable to recognize <span class="string">"../manifests/alertmanager-alertmanager.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"Alertmanager"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/alertmanager-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/grafana-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/kube-state-metrics-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/node-exporter-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-operator-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-prometheus.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"Prometheus"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-rules.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"PrometheusRule"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitorApiserver.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitorCoreDNS.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitorKubeControllerManager.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitorKubeScheduler.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitorKubelet.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br></pre></td></tr></table></figure></p><p>网上查询得知:<a href="https://github.com/coreos/prometheus-operator/issues/1866" target="_blank" rel="noopener">As the QuickStart mentions, there is a race in Kubernetes that the CRD creation finished but the API is not actually available. You just have to run the command once again.</a> 需要运行多次</p><p>创建成功后查看<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 manifests]<span class="comment"># kubectl get svc -n monitoring</span></span><br><span class="line">NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE</span><br><span class="line">alertmanager-main NodePort 10.102.129.38 <none> 9093:30300/TCP 15s</span><br><span class="line">alertmanager-operated ClusterIP None <none> 9093/TCP,6783/TCP 8s</span><br><span class="line">grafana NodePort 10.103.207.222 <none> 3000:30100/TCP 14s</span><br><span class="line">kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 14s</span><br><span class="line">node-exporter ClusterIP None <none> 9100/TCP 14s</span><br><span class="line">prometheus-adapter ClusterIP 10.104.146.228 <none> 443/TCP 13s</span><br><span class="line">prometheus-k8s NodePort 10.100.247.74 <none> 9090:30200/TCP 12s</span><br><span class="line">prometheus-operator ClusterIP None <none> 8080/TCP 15s</span><br><span class="line">[root@k8s-master01 manifests]<span class="comment"># kubectl get pods -n monitoring</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">alertmanager-main-0 2/2 Running 1 111s</span><br><span class="line">grafana-7dc5f8f9f6-r9w78 1/1 Running 0 117s</span><br><span class="line">kube-state-metrics-5cbd67455c-q5hlh 4/4 Running 0 97s</span><br><span class="line">node-exporter-5bjhk 2/2 Running 0 116s</span><br><span class="line">node-exporter-n84tr 2/2 Running 0 115s</span><br><span class="line">node-exporter-xbz84 2/2 Running 0 115s</span><br><span class="line">prometheus-adapter-668748ddbd-c9ws6 1/1 Running 0 115s</span><br><span class="line">prometheus-k8s-0 3/3 Running 1 101s</span><br><span class="line">prometheus-k8s-1 3/3 Running 1 101s</span><br><span class="line">prometheus-operator-7447bf4dcb-jfmsn 1/1 Running 0 117s</span><br><span class="line">[root@k8s-master01 manifests]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="访问-prometheusprometheus"><a href="#访问-prometheusprometheus" class="headerlink" title="访问 prometheusprometheus"></a>访问 prometheusprometheus</h3><p>对应的 nodeport 端口为 30200,访问<a href="http://MasterIP:30200" target="_blank" rel="noopener">http://MasterIP:30200</a><br><img src="https://img-blog.csdnimg.cn/20200425121755673.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>通过访问<a href="http://MasterIP:30200/target可以看到" target="_blank" rel="noopener">http://MasterIP:30200/target可以看到</a> prometheus 已经成功连接上了 k8s 的 apiserver<br>节点全部健康<br><img src="https://img-blog.csdnimg.cn/20200425122703522.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>prometheus 的 WEB 界面上提供了基本的查询 K8S 集群中每个 POD 的 CPU 使用情况<br><code>sum by (pod_name)( rate(container_cpu_usage_seconds_total{image!="", pod_name!=""}[1m] ) )</code><br><img src="https://img-blog.csdnimg.cn/20200425130559647.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>上述的查询有出现数据,说明 node-exporter 往 prometheus 中写入数据正常</p><h3 id="访问-grafana查看"><a href="#访问-grafana查看" class="headerlink" title="访问 grafana查看"></a>访问 grafana查看</h3><p>grafana 服务暴露的端口号:<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl getservice-n monitoring | grep grafana</span><br><span class="line">grafana NodePort 10.107.56.143 <none> 3000:30100/TCP</span><br></pre></td></tr></table></figure></p><p>浏览器访问<a href="http://MasterIP:30100" target="_blank" rel="noopener">http://MasterIP:30100</a><br>用户名密码默认 admin/admin<br><img src="https://img-blog.csdnimg.cn/20200425130803860.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200425131004191.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>查看Kubernetes API server的数据<br><img src="https://img-blog.csdnimg.cn/20200425131017865.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<h2 id="Prometheus介绍"><a href="#Prometheus介绍" class="headerlink" title="Prometheus介绍"></a>Prometheus介绍</h2><p>随着容器技术的迅速发展,Kubernetes 已然成为大家追
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(七)----部署Helm 2.13.1</title>
<link href="https://plutoacharon.github.io/2020/05/12/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E4%B8%83-%E9%83%A8%E7%BD%B2Helm-2-13-1/"/>
<id>https://plutoacharon.github.io/2020/05/12/Kubernetes-K8s-入门到实践-七-部署Helm-2-13-1/</id>
<published>2020-05-12T14:38:55.000Z</published>
<updated>2020-05-12T14:39:19.644Z</updated>
<content type="html"><![CDATA[<h2 id="什么是-Helm"><a href="#什么是-Helm" class="headerlink" title="什么是 Helm"></a>什么是 Helm</h2><p><a href="https://helm.sh/" target="_blank" rel="noopener">Helm官方网站</a>:The package manager for Kubernetes</p><p>在没使用 helm 之前,向 kubernetes 部署应用,我们要依次部署 deployment、svc 等,步骤较繁琐。况且随着很多项目微服务化,复杂的应用在容器中部署以及管理显得较为复杂。</p><p><code>Helm</code> 通过打包的方式,支持发布的版本管理和控制,很大程度上简化了 Kubernetes 应用的部署和管理Helm 本质就是让 K8s 的应用管理(Deployment,Service 等 ) 可配置,能动态生成,通过动态生成 K8s 资源清单文件(deployment.yaml,service.yaml),然后调用 Kubectl 自动执行 K8s 资源部署</p><p>Helm 是官方提供的类似于 YUM 的包管理器,是部署环境的流程封装。</p><p>Helm 有两个重要的概念:<strong>chart 和releasechart</strong> </p><ul><li>chart 是创建一个应用的信息集合,包括各种 Kubernetes 对象的配置模板、参数定义、依赖关系、文档说明等。chart 是应用部署的自包含逻辑单元。可以将 chart 想象成 apt、yum 中的软件安装包</li><li>release 是 chart 的运行实例,代表了一个正在运行的应用。当 chart 被安装到 Kubernetes 集群,就生成一个 release。chart 能够多次安装到同一个集群,每次安装都是一个 release</li></ul><p>Helm 包含两个组件:<strong>Helm 客户端</strong>和 <strong>Tiller 服务器</strong><br><img src="https://img-blog.csdnimg.cn/20200424194822963.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>Helm 客户端负责 chart 和 release 的创建和管理以及和 Tiller 的交互。</p><p>Tiller 服务器运行在 Kubernetes 集群中,它会处理 Helm 客户端的请求,与 Kubernetes API Server 交互</p><h2 id="Helm-2-13-1-部署"><a href="#Helm-2-13-1-部署" class="headerlink" title="Helm 2.13. 1 部署"></a>Helm 2.13. 1 部署</h2><h3 id="1-下载安装包"><a href="#1-下载安装包" class="headerlink" title="1. 下载安装包"></a>1. 下载安装包</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz</span><br><span class="line">tar -zxvf helm-v2.13.1-linux-amd64.tar.gz</span><br><span class="line"><span class="built_in">cd</span> linux-amd64/</span><br><span class="line">cp helm /usr/<span class="built_in">local</span>/bin/</span><br><span class="line">chmod a+x /usr/<span class="built_in">local</span>/bin/helm</span><br></pre></td></tr></table></figure><h3 id="2-创建-rbac-config-yaml-文件"><a href="#2-创建-rbac-config-yaml-文件" class="headerlink" title="2. 创建 rbac-config.yaml 文件"></a>2. 创建 rbac-config.yaml 文件</h3><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ServiceAccount</span></span><br><span class="line"><span class="attr">metadata:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">tiller</span> </span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">rbac.authorization.k8s.io/v1beta1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ClusterRoleBinding</span></span><br><span class="line"><span class="attr">metadata:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">tiller</span></span><br><span class="line"><span class="attr">roleRef:</span> </span><br><span class="line"><span class="attr"> apiGroup:</span> <span class="string">rbac.authorization.k8s.io</span> </span><br><span class="line"><span class="attr"> kind:</span> <span class="string">ClusterRole</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">cluster-admin</span></span><br><span class="line"><span class="attr">subjects:</span> </span><br><span class="line"><span class="attr"> - kind:</span> <span class="string">ServiceAccount</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">tiller</span> </span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br></pre></td></tr></table></figure><p>将yaml文件部署下去后,使用<code>helm init --service-account tiller --skip-refresh</code>命令初始化Heml</p><blockquote><p>如果下载镜像失败 需要自己下载镜像导入到Docker中(三台节点)<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 helm]<span class="comment"># kubectl apply -f rbac-config.yaml </span></span><br><span class="line">serviceaccount/tiller unchanged</span><br><span class="line">clusterrolebinding.rbac.authorization.k8s.io/tiller created</span><br><span class="line">[root@k8s-master01 helm]<span class="comment"># docker load -i helm-tiller.tar </span></span><br><span class="line">3fc64803ca2d: Loading layer [==================================================>] 4.463MB/4.463MB</span><br><span class="line">79395a173ae6: Loading layer [==================================================>] 6.006MB/6.006MB</span><br><span class="line">c33cd2d4c63e: Loading layer [==================================================>] 37.16MB/37.16MB</span><br><span class="line">d727bd750bf2: Loading layer [==================================================>] 36.89MB/36.89MB</span><br><span class="line">Loaded image: gcr.io/kubernetes-helm/tiller:v2.13.1</span><br><span class="line">[root@k8s-master01 helm]<span class="comment"># helm init --service-account tiller --skip-refresh</span></span><br><span class="line">Creating /root/.helm </span><br><span class="line">Creating /root/.helm/repository </span><br><span class="line">Creating /root/.helm/repository/cache </span><br><span class="line">Creating /root/.helm/repository/<span class="built_in">local</span> </span><br><span class="line">Creating /root/.helm/plugins </span><br><span class="line">Creating /root/.helm/starters </span><br><span class="line">Creating /root/.helm/cache/archive </span><br><span class="line">Creating /root/.helm/repository/repositories.yaml </span><br><span class="line">Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com </span><br><span class="line">Adding <span class="built_in">local</span> repo with URL: http://127.0.0.1:8879/charts </span><br><span class="line"><span class="variable">$HELM_HOME</span> has been configured at /root/.helm.</span><br><span class="line"></span><br><span class="line">Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.</span><br><span class="line"></span><br><span class="line">Please note: by default, Tiller is deployed with an insecure <span class="string">'allow unauthenticated users'</span> policy.</span><br><span class="line">To prevent this, run `helm init` with the --tiller-tls-verify flag.</span><br><span class="line">For more information on securing your installation see: https://docs.helm.sh/using_helm/<span class="comment">#securing-your-helm-installation</span></span><br><span class="line">Happy Helming!</span><br><span class="line">root@k8s-master01 helm]<span class="comment"># helm version</span></span><br><span class="line">Client: &version.Version{SemVer:<span class="string">"v2.13.1"</span>, GitCommit:<span class="string">"618447cbf203d147601b4b9bd7f8c37a5d39fbb4"</span>, GitTreeState:<span class="string">"clean"</span>}</span><br><span class="line">Server: &version.Version{SemVer:<span class="string">"v2.13.1"</span>, GitCommit:<span class="string">"618447cbf203d147601b4b9bd7f8c37a5d39fbb4"</span>, GitTreeState:<span class="string">"clean"</span>}</span><br><span class="line">[root@k8s-master01 helm]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p></blockquote>]]></content>
<summary type="html">
<h2 id="什么是-Helm"><a href="#什么是-Helm" class="headerlink" title="什么是 Helm"></a>什么是 Helm</h2><p><a href="https://helm.sh/" target="_blank" rel
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(七)---- 基于Docker配置KeepAlive-LVS负载均衡</title>
<link href="https://plutoacharon.github.io/2020/05/05/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E4%B8%83%EF%BC%89-%E5%9F%BA%E4%BA%8EDocker%E9%85%8D%E7%BD%AEKeepAlive-LVS%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<id>https://plutoacharon.github.io/2020/05/05/HA高可用与负载均衡入门到实战(七)-基于Docker配置KeepAlive-LVS负载均衡/</id>
<published>2020-05-05T13:40:13.000Z</published>
<updated>2020-05-05T13:40:44.959Z</updated>
<content type="html"><![CDATA[<h2 id="实验要求"><a href="#实验要求" class="headerlink" title="实验要求"></a>实验要求</h2><p>1、 安装配置LVS负载均衡<br>2、 安装配置LVS高可用负载均衡</p><p>拓扑图:<br><img src="https://img-blog.csdnimg.cn/20200423164904605.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="LVS介绍"><a href="#LVS介绍" class="headerlink" title="LVS介绍"></a>LVS介绍</h2><h3 id="负载均衡工作模式"><a href="#负载均衡工作模式" class="headerlink" title="负载均衡工作模式"></a>负载均衡工作模式</h3><h4 id="1-NAT模式"><a href="#1-NAT模式" class="headerlink" title="1. NAT模式"></a>1. NAT模式</h4><p><code>Virtualserver via Network address translation(VS/NAT)</code> 这个是通过网络地址转换的方法来实现调度的。</p><p>首先调度器(LB)接收到客户的请求数据包时(请求的目的IP为VIP),根据调度算法决定将请求发送给哪个后端的真实服务器(RS)。然后调度就把客户端发送的请求数据包的目标IP地址及端口改成后端真实服务器的IP地址(RIP),这样真实服务器(RS)就能够接收到客户的请求数据包了。真实服务器响应完请求后,查看默认路由(NAT模式下我们需要把RS的默认路由设置为LB服务器。)把响应后的数据包发送给LB,LB再接收到响应包后,把包的源地址改成虚拟地址(VIP)然后发送回给客户端。 </p><p><strong>调度过程IP包详细图:</strong><br><img src="https://img-blog.csdnimg.cn/20200423165104167.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br> <strong>原理图简述:</strong> </p><ol><li><p>客户端请求数据,目标IP为VIP </p></li><li><p>请求数据到达LB服务器,LB根据调度算法将目的地址修改为RIP地址及对应端口(此RIP地址是根据调度算法得出的。)并在连接HASH表中记录下这个连接。</p></li><li>数据包从LB服务器到达RS服务器webserver,然后webserver进行响应。Webserver的网关必须是LB,然后将数据返回给LB服务器。</li><li>收到RS的返回后的数据,根据连接HASH表修改源地址VIP&目标地址CIP,及对应端口80.然后数据就从LB出发到达客户端。</li><li><p>客户端收到的就只能看到VIP\DIP信息。</p><p><strong>NAT模式优缺点:</strong> </p></li></ol><ul><li>NAT技术将请求的报文和响应的报文都需要通过LB进行地址改写,因此网站访问量比较大的时候LB负载均衡调度器有比较大的瓶颈,一般要求最多只能10-20台节点</li><li>只需要在LB上配置一个公网IP地址就可以</li><li>每台内部的节点服务器的网关地址必须是调度器LB的内网地址</li><li>NAT模式支持对IP地址和端口进行转换。即用户请求的端口和真实服务器的端口可以不一致</li></ul><p><img src="https://img-blog.csdnimg.cn/20200423165440179.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><ol><li><p>客户端将请求发往前端的负载均衡器,请求报文源地址是CIP(客户端IP),后面统称为CIP),目标地址为VIP(负载均衡器前端地址,后面统称为VIP)</p></li><li><p>负载均衡器收到报文后,发现请求的是在规则里面存在的地址,那么它将客户端请求报文的目标地址改为了后端服务器的RIP地址并将报文根据算法发送出去</p></li><li><p>报文送到Real Server后,由于报文的目标地址是自己,所以会响应该请求,并将响应报文返还给LVS。</p></li><li><p>然后lvs将此报文的源地址修改为本机并发送给客户端。</p></li></ol><p><strong>优点:</strong> 集群中的物理服务器可以使用任何支持TCP/IP操作系统,只有负载均衡器需要一个合法的IP地址。<br><strong>缺点:</strong> 扩展性有限。当服务器节点(普通PC服务器)增长过多时,负载均衡器将成为整个系统的瓶颈,因为所有的请求包和应答包的流向都经过负载均衡器。当服务器节点过多时,大量的数据包都交汇在负载均衡器那,速度就会变慢</p><h4 id="2-TUN-隧道-模式"><a href="#2-TUN-隧道-模式" class="headerlink" title="2. TUN(隧道)模式"></a>2. TUN(隧道)模式</h4><p>virtual server via ip tunneling模式:采用NAT模式时,由于请求和响应的报文必须通过调度器地址重写,当客户请求越来越多时,调度器处理能力将成为瓶颈。为了解决这个问题,调度器把请求的报文通过IP隧道转发到真实的服务器。真实的服务器将响应处理后的数据直接返回给客户端。这样调度器就只处理请求入站报文,由于一般网络服务应答数据比请求报文大很多,采用VS/TUN模式后,集群系统的最大吞吐量可以提高10倍。 VS/TUN的工作流程图如下所示,它和NAT模式不同的是,它在LB和RS之间的传输不用改写IP地址。而是把客户请求包封装在一个IP tunnel里面,然后发送给RS节点服务器,节点服务器接收到之后解开IP tunnel后,进行响应处理。并且直接把包通过自己的外网地址发送给客户不用经过LB服务器。</p><p><strong>Tunnel原理流程图:</strong><br><img src="https://img-blog.csdnimg.cn/20200423165704792.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><strong>原理图过程简述:</strong> </p><ol><li>客户请求数据包,目标地址VIP发送到LB上。</li><li>LB接收到客户请求包,进行IP Tunnel封装。即在原有的包头加上IP Tunnel的包头。然后发送出去。 </li><li>RS节点服务器根据IP Tunnel包头信息(此时就又一种逻辑上的隐形隧道,只有LB和RS之间懂)收到请求包,然后解开IP Tunnel包头信息,得到客户的请求包并进行响应处理。 </li><li>响应处理完毕之后,RS服务器使用自己的出公网的线路,将这个响应数据包发送给客户端。源IP地址还是VIP地址</li></ol><p><img src="https://img-blog.csdnimg.cn/20200423165736408.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><ol><li>客户端将请求发往前端的负载均衡器,请求报文源地址是CIP,目标地址为VIP。 </li><li>负载均衡器收到报文后,发现请求的是在规则里面存在的地址,那么它将在客户端请求报文的首部再封装一层IP报文,将源地址改为DIP,目标地址改为RIP,并将此包发送给RS。 </li><li>RS收到请求报文后,会首先拆开第一层封装,然后发现里面还有一层IP首部的目标地址是自己lo接口上的VIP,所以会处理次请求报文,并将响应报文通过lo接口送给eth0网卡直接发送给客户端。 </li></ol><blockquote><p>注意: 需要设置lo接口的VIP不能在共网上出现。</p></blockquote><p>总结: </p><ol><li>TUNNEL 模式必须在所有的 realserver 机器上面绑定 VIP 的 IP 地址 </li><li>TUNNEL 模式的 vip ——>realserver 的包通信通过 TUNNEL 模式,不管是内网和外网都能通信,所以不需要 lvs vip 跟 realserver 在同一个网段内 </li><li>TUNNEL 模式 realserver 会把 packet 直接发给 client 不会给 lvs 了</li><li>TUNNEL 模式走的隧道模式,所以运维起来比较难,所以一般不用。 </li></ol><p><strong>优点:</strong> 负载均衡器只负责将请求包分发给后端节点服务器,而RS将应答包直接发给用户。所以,减少了负载均衡器的大量数据流动,负载均衡器不再是系统的瓶颈,就能处理很巨大的请求量,这种方式,一台负载均衡器能够为很多RS进行分发。而且跑在公网上就能进行不同地域的分发。 </p><p><strong>缺点:</strong> 隧道模式的RS节点需要合法IP,这种方式需要所有的服务器支持”IP Tunneling”(IP Encapsulation)协议,服务器可能只局限在部分Linux系统上。</p><h4 id="3-DR模式(直接路由模式"><a href="#3-DR模式(直接路由模式" class="headerlink" title="3. DR模式(直接路由模式)"></a>3. DR模式(直接路由模式)</h4><p><code>Virtual server via direct routing (vs/dr) DR</code>模式是通过改写请求报文的目标MAC地址,将请求发给真实服务器的,而真实服务器响应后的处理结果直接返回给客户端用户。同TUN模式一样,DR模式可以极大的提高集群系统的伸缩性。而且DR模式没有IP隧道的开销,对集群中的真实服务器也没有必要必须支持IP隧道协议的要求。但是要求调度器LB与真实服务器RS都有一块网卡连接到同一物理网段上,必须在同一个局域网环境。 DR模式是互联网使用比较多的一种模式。 </p><p><strong>DR模式原理图:</strong><br><img src="https://img-blog.csdnimg.cn/20200423170032869.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><strong>DR模式原理过程简述:</strong> </p><p> VS/DR模式的工作流程图如上图所示,它的连接调度和管理与NAT和TUN中的一样,它的报文转发方法和前两种不同。DR模式将报文直接路由给目标真实服务器。在DR模式中,调度器根据各个真实服务器的负载情况,连接数多少等,动态地选择一台服务器,不修改目标IP地址和目标端口,也不封装IP报文,而是将请求报文的数据帧的目标MAC地址改为真实服务器的MAC地址。然后再将修改的数据帧在服务器组的局域网上发送。因为数据帧的MAC地址是真实服务器的MAC地址,并且又在同一个局域网。那么根据局域网的通讯原理,真实复位是一定能够收到由LB发出的数据包。真实服务器接收到请求数据包的时候,解开IP包头查看到的目标IP是VIP。(此时只有自己的IP符合目标IP才会接收进来,所以我们需要在本地的回环借口上面配置VIP。</p><blockquote><p>另:由于网络接口都会进行ARP广播响应,但集群的其他机器都有这个VIP的lo接口,都响应就会冲突。所以我们需要把真实服务器的lo接口的ARP响应关闭掉。)然后真实服务器做成请求响应,之后根据自己的路由信息将这个响应数据包发送回给客户,并且源IP地址还是VIP。 </p></blockquote><p><strong>DR模式小结:</strong> </p><ol><li>通过在调度器LB上修改数据包的目的MAC地址实现转发。注意源地址仍然是CIP,目的地址仍然是VIP地址。</li><li>请求的报文经过调度器,而RS响应处理后的报文无需经过调度器LB,因此并发访问量大时使用效率很高(和NAT模式比) </li><li>因为DR模式是通过MAC地址改写机制实现转发,因此所有RS节点和调度器LB只能在一个局域网里面</li><li>RS主机需要绑定VIP地址在LO接口上,并且需要配置ARP抑制。</li><li>RS节点的默认网关不需要配置成LB,而是直接配置为上级路由的网关,能让RS直接出网就可以。 </li><li>由于DR模式的调度器仅做MAC地址的改写,所以调度器LB就不能改写目标端口,那么RS服务器就得使用和VIP相同的端口提供服务</li></ol><p><img src="https://img-blog.csdnimg.cn/20200423170212314.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><ol><li>客户端将请求发往前端的负载均衡器,请求报文源地址是CIP,目标地址为VIP。 </li><li>负载均衡器收到报文后,发现请求的是在规则里面存在的地址,那么它将客户端请求报文的源MAC地址改为自己DIP的MAC地址,目标MAC改为了RIP的MAC地址,并将此包发送给RS。 </li><li>RS发现请求报文中的目的MAC是自己,就会将次报文接收下来,处理完请求报文后,将响应报文通过lo接口送给eth0网卡直接发送给客户端。 </li></ol><blockquote><p>注意: 需要设置lo接口的VIP不能响应本地网络内的arp请求。 </p></blockquote><p><strong>总结:</strong> </p><ol><li>通过在调度器 LB 上修改数据包的目的 MAC 地址实现转发。注意源地址仍然是 CIP,目的地址仍然是 VIP 地址。</li><li>请求的报文经过调度器,而 RS 响应处理后的报文无需经过调度器 LB,因此并发访问量大时使用效率很高(和 NAT 模式比)</li><li>因为 DR 模式是通过 MAC 地址改写机制实现转发,因此所有 RS 节点和调度器 LB 只能在一个局域网里面 </li><li>RS 主机需要绑定 VIP 地址在 LO 接口(掩码32 位)上,并且需要配置 ARP 抑制。</li><li>RS 节点的默认网关不需要配置成 LB,而是直接配置为上级路由的网关,能让 RS 直接出网就可以。 </li><li>由于 DR 模式的调度器仅做 MAC 地址的改写,所以调度器 LB 就不能改写目标端口,那么 RS 服务器就得使用和 VIP 相同的端口提供服务。</li><li>直接对外的业务比如WEB等,RS 的IP最好是使用公网IP。对外的服务,比如数据库等最好使用内网IP。 </li></ol><p><strong>优点:</strong><br>和TUN(隧道模式)一样,负载均衡器也只是分发请求,应答包通过单独的路由方法返回给客户端。与VS-TUN相比,VS-DR这种实现方式不需要隧道结构,因此可以使用大多数操作系统做为物理服务器。 DR模式的效率很高,但是配置稍微复杂一点,因此对于访问量不是特别大的公司可以用haproxy/nginx取代。日1000-2000W PV或者并发请求1万一下都可以考虑用haproxy/nginx。 </p><p><strong>缺点:</strong> 所有 RS 节点和调度器 LB 只能在一个局域网里面。</p><h2 id="在LVS1配置LVS负载均衡"><a href="#在LVS1配置LVS负载均衡" class="headerlink" title="在LVS1配置LVS负载均衡"></a>在LVS1配置LVS负载均衡</h2><h3 id="1-使用centos镜像生成lvs-keep镜像"><a href="#1-使用centos镜像生成lvs-keep镜像" class="headerlink" title="1. 使用centos镜像生成lvs-keep镜像"></a>1. 使用centos镜像生成lvs-keep镜像</h3><ol><li>启动centos容器并进入<br><code>docker run -d --privileged centos:v1 /usr/sbin/init</code><br>2) 在centos容器中使用yum方式安装lvs和keepalived<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">yum install ipvsadm</span><br><span class="line">yum install keepalived</span><br></pre></td></tr></table></figure></li></ol><p>3) 保存容器为镜像<br><code>docker commit 容器ID lvs-keep</code></p><h3 id="2-使用nginx镜像启动nginx1和nginx2两个容器"><a href="#2-使用nginx镜像启动nginx1和nginx2两个容器" class="headerlink" title="2. 使用nginx镜像启动nginx1和nginx2两个容器"></a>2. 使用nginx镜像启动nginx1和nginx2两个容器</h3><p>1) 创建docker网络<br><code>docker network create --subnet=172.18.0.0/16 cluster</code><br>2) 查看宿主机上的docker网络类型种类<br><code>docker network ls</code><br>3) 启动容器nginx1,nginx2 设定地址为172.18.0.11, 172.18.0.12<br><code>docker run -d --privileged --net cluster --ip 172.18.0.11 --name nginx1 nginx /usr/sbin/init</code><br><code>docker run -d --privileged --net cluster --ip 172.18.0.12 --name nginx2 nginx /usr/sbin/init</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.11 --name nginx1 nginx /usr/sbin/init</span></span><br><span class="line">8deb9befa966726e16bee8fb4a8eb63ef0c47d66f507092b3bad63e11a348ffd</span><br><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.12 --name nginx2 nginx /usr/sbin/init</span></span><br><span class="line">f2fbc74a948461060345899ffd5d0e4e82b7012e2fff793daca3aa78fa4e90b9</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="3-使用lvs-keep镜像启用LVS1容器,配置LVS负载均衡"><a href="#3-使用lvs-keep镜像启用LVS1容器,配置LVS负载均衡" class="headerlink" title="3. 使用lvs-keep镜像启用LVS1容器,配置LVS负载均衡"></a>3. 使用lvs-keep镜像启用LVS1容器,配置LVS负载均衡</h3><blockquote><p>在宿主机上安装ipvsadm <code>yum install ipvsadm</code> # modprobe ip_vs //装入ip_vs模块<br>1) 启动容器LVS1,设定地址为172.18.0.8<br><code>docker run -d --privileged --net cluster --ip 172.18.0.8 --name LVS1 lvs-keep /usr/sbin/init</code><br>2) 进入LVS1容器<br><code>lsmod |grep ip_vs</code> 列出装载的模块<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@58a00cfe8c9d /]<span class="comment"># lsmod | grep ip_vs</span></span><br><span class="line">ip_vs 145497 0</span><br><span class="line">nf_conntrack 139224 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4</span><br><span class="line">libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack</span><br></pre></td></tr></table></figure></p></blockquote><p>3) 在LVS1创建VIP调度地址<br><code>ifconfig eth0:0 172.18.0.10 netmask 255.255.255.255</code><br>4) 在LVS1创建虚拟服务器,使用轮询方式:<br><code>ipvsadm -At 172.18.0.10:80 -s rr</code><br>5) 在LVS1添加nginx1和nginx2两台服务器节点,采用DR直接路由模式<br><code>ipvsadm -at 172.18.0.10:80 -r 172.18.0.11:80 -g</code><br><code>ipvsadm -at 172.18.0.10:80 -r 172.18.0.12:80 -g</code></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ifconfig eth0:0 172.18.0.10 netmask 255.255.255.255</span></span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -At 172.18.0.10:80 -s rr</span></span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -at 172.18.0.10:80 -r 172.18.0.11:80 -g</span></span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -at 172.18.0.10:80 -r 172.18.0.12:80 -g </span></span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -ln</span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP 172.18.0.10:80 rr</span><br><span class="line"> -> 172.18.0.11:80 Route 1 0 0 </span><br><span class="line"> -> 172.18.0.12:80 Route 1 0 0 </span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure><p>6) 在nginx1和nginx2两台服务器节点,创建VIP应答地址<br><code>ifconfig lo:0 172.18.0.10 netmask 255.255.255.255</code><br>7) 在nginx1和nginx2两台服务器节点,屏蔽ARP请求<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="built_in">echo</span> <span class="string">"1"</span> > /proc/sys/net/ipv4/conf/lo/arp_ignore </span><br><span class="line"><span class="built_in">echo</span> <span class="string">"1"</span> > /proc/sys/net/ipv4/conf/all/arp_ignore </span><br><span class="line"><span class="built_in">echo</span> <span class="string">"2"</span> > /proc/sys/net/ipv4/conf/lo/arp_announce </span><br><span class="line"><span class="built_in">echo</span> <span class="string">"2"</span> > /proc/sys/net/ipv4/conf/all/arp_announce</span><br></pre></td></tr></table></figure></p><p>8) 在LVS1中,<code>ipvsadm -L</code> 检查配置情况<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -L </span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP 58a00cfe8c9d:http rr</span><br><span class="line"> -> nginx1.cluster:http Route 1 0 0 </span><br><span class="line"> -> nginx2.cluster:http Route 1 0 0 </span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>9) 在宿主机中访问<a href="http://172.18.0.10,刷新时轮流访问两台节点服务器" target="_blank" rel="noopener">http://172.18.0.10,刷新时轮流访问两台节点服务器</a><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx2</span><br><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx1</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h2 id="使用KeepAlive配置LVS高可用"><a href="#使用KeepAlive配置LVS高可用" class="headerlink" title="使用KeepAlive配置LVS高可用"></a>使用KeepAlive配置LVS高可用</h2><blockquote><p>在两台LVS服务器安装配置KeepAlive,使得两台服务器互为备份并支持负载均衡<br>保持任务一中nginx1和nginx2两台服务器节点不变,重新启动容器LVS1和LVS2</p></blockquote><h3 id="1-使用lvs-keep镜像启用LVS1和LVS2容器,配置LVS负载均衡"><a href="#1-使用lvs-keep镜像启用LVS1和LVS2容器,配置LVS负载均衡" class="headerlink" title="1. 使用lvs-keep镜像启用LVS1和LVS2容器,配置LVS负载均衡"></a>1. 使用lvs-keep镜像启用LVS1和LVS2容器,配置LVS负载均衡</h3><blockquote><p>注意:需要在宿主机安装ipvsadm,# modprobe ip_vs //装入ip_vs模块<br>1) 启动容器LVS1,设定地址为172.18.0.8<br><code>docker run -d --privileged --net cluster --ip 172.18.0.8 --name LVS1 lvs-keep /usr/sbin/init</code><br>2) 启动容器LVS2,设定地址为172.18.0.9<br><code>docker run -d --privileged --net cluster --ip 172.18.0.9 --name LVS2 lvs-keep /usr/sbin/init</code><br>3) 编辑LVS1和LVS2中/etc/ keepalived /keepalived.conf文件<br> <figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br></pre></td><td class="code"><pre><span class="line">! Configuration File for keepalived</span><br><span class="line">global_defs {</span><br><span class="line"> notification_email {</span><br><span class="line"> [email protected]</span><br><span class="line"> [email protected]</span><br><span class="line"> [email protected]</span><br><span class="line"> }</span><br><span class="line"> notification_email_from [email protected]</span><br><span class="line"> smtp_server 192.168.200.1</span><br><span class="line"> smtp_connect_timeout 30</span><br><span class="line"> router_id LVS1</span><br><span class="line"> vrrp_skip_check_adv_addr #跳过vrrp报文地址检查</span><br><span class="line"> #vrrp_strict #严格遵守vrrp协议</span><br><span class="line"> vrrp_garp_interval 3 #在一个网卡上每组gratuitous arp消息之间的延迟时间,默认为0</span><br><span class="line"> vrrp_gna_interval 3 #在一个网卡上每组na消息之间的延迟时间,默认为0</span><br><span class="line">}</span><br><span class="line">vrrp_instance VI_1 {</span><br><span class="line"> state MASTER #LVS2设置为BACKUP</span><br><span class="line"> interface eth0</span><br><span class="line"> virtual_router_id 51</span><br><span class="line"> priority 100 #L 设置权重</span><br><span class="line"> advert_int 1</span><br><span class="line"> authentication {</span><br><span class="line"> auth_type PASS</span><br><span class="line"> auth_pass 1111</span><br><span class="line"> }</span><br><span class="line"> virtual_ipaddress {</span><br><span class="line"> 172.18.0.10</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line">virtual_server 172.18.0.10 80 { #配置虚拟服务器</span><br><span class="line"> delay_loop 6 #设置健康检查时间,单位是秒</span><br><span class="line"> lb_algo rr #设置负载调度算法,默认为rr即轮询算法</span><br><span class="line"> lb_kind DR #设置LVS实现LB机制,有NAT、TUNN和DR三个模式可选</span><br><span class="line"> persistence_timeout 0 #会话保持时间,单位为秒,设为0可以看到刷新效果</span><br><span class="line"> protocol TCP #指定转发协议类型,有TCP和UDP两种</span><br><span class="line"> real_server 172.18.0.11 80 { #配置服务器节点</span><br><span class="line"> weight 1</span><br><span class="line"> TCP_CHECK { #配置节点权值,数字越大权值越高</span><br><span class="line"> connect_timeout 3 #超时时间</span><br><span class="line"> retry 3 #重试次数</span><br><span class="line"> delay_before_retry 3 #重试间隔</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> real_server 172.18.0.12 80 {</span><br><span class="line"> weight 1</span><br><span class="line"> TCP_CHECK {</span><br><span class="line"> connect_timeout 3</span><br><span class="line"> retry 3</span><br><span class="line"> delay_before_retry 3</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p></blockquote><h3 id="2-验证KeepAlive配置LVS高可用集群"><a href="#2-验证KeepAlive配置LVS高可用集群" class="headerlink" title="2. 验证KeepAlive配置LVS高可用集群"></a>2. 验证KeepAlive配置LVS高可用集群</h3><p>1) 在两台服务器重启keepalived服务,i<code>pvsadm -L</code>检查配置情况<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@ef99a927fc2d /]<span class="comment"># ipvsadm -L</span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP 172.18.0.10:http rr</span><br><span class="line"> -> nginx1.cluster:http Route 1 0 0 </span><br><span class="line"> -> nginx2.cluster:http Route 1 0 0 </span><br><span class="line">[root@ef99a927fc2d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@a033e26a1fd8 /]<span class="comment"># ipvsadm -L</span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP a033e26a1fd8:http rr</span><br><span class="line"> -> nginx1.cluster:http Route 1 0 0 </span><br><span class="line"> -> nginx2.cluster:http Route 1 0 0</span><br></pre></td></tr></table></figure><p>2) 在宿主机中访问<a href="http://172.18.0.10,刷新时轮流访问两台节点服务器" target="_blank" rel="noopener">http://172.18.0.10,刷新时轮流访问两台节点服务器</a><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx2</span><br><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx1</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>3) 在LVS1服务器#ifconfig eth0 down //当掉服务器网卡</p><p>4) 在宿主机中访问<a href="http://172.18.0.10,刷新时轮流访问两台节点服务器" target="_blank" rel="noopener">http://172.18.0.10,刷新时轮流访问两台节点服务器</a></p><p>5) 在LVS2中,#ipvsadm -L //检查配置和连接情况<br>lvs2中可以看到<code>InActConn</code>增加</p><p>因为lvs1将eth0关闭以后, 有lvs2接管服务<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@ef99a927fc2d /]<span class="comment"># ipvsadm -L</span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP ef99a927fc2d:http rr</span><br><span class="line"> -> nginx1.cluster:http Route 1 0 3 </span><br><span class="line"> -> nginx2.cluster:http Route 1 0 3 </span><br><span class="line">[root@ef99a927fc2d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p>]]></content>
<summary type="html">
<h2 id="实验要求"><a href="#实验要求" class="headerlink" title="实验要求"></a>实验要求</h2><p>1、 安装配置LVS负载均衡<br>2、 安装配置LVS高可用负载均衡</p>
<p>拓扑图:<br><img
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(六)----深入掌握Pod</title>
<link href="https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E5%85%AD-%E6%B7%B1%E5%85%A5%E6%8E%8C%E6%8F%A1Pod/"/>
<id>https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-入门到实践-六-深入掌握Pod/</id>
<published>2020-04-21T09:55:33.000Z</published>
<updated>2020-04-21T09:55:47.368Z</updated>
<content type="html"><![CDATA[<p>上几章写了Kubernetes的基本概念与集群搭建<br>接下来将深入探索Pod的应用、配置、调度、升级及扩缩容,讲述Kubernetes容器编排。</p><p>本章将对Kubernetes如何发布与管理容器应用进行详细说明和示例,主要包括Pod和容器的使用、应用配置管理、Pod的控制和调度管理、Pod的升级和回滚,以及Pod的扩缩容机制等内容</p><h2 id="深入掌握Pod"><a href="#深入掌握Pod" class="headerlink" title="深入掌握Pod"></a>深入掌握Pod</h2><h3 id="Pod定义"><a href="#Pod定义" class="headerlink" title="Pod定义"></a>Pod定义</h3><p>Pod定义文件的yaml格式完整版<br><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span> <span class="comment">#必选,版本号,例如v1,版本号必须可以用 kubectl api-versions 查询到 .</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Pod</span> <span class="comment">#必选,Pod</span></span><br><span class="line"><span class="attr">metadata:</span> <span class="comment">#必选,元数据</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">string</span> <span class="comment">#必选,Pod名称</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">string</span> <span class="comment">#必选,Pod所属的命名空间,默认为"default"</span></span><br><span class="line"><span class="attr"> labels:</span> <span class="comment">#自定义标签</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#自定义标签名字</span></span><br><span class="line"><span class="attr"> annotations:</span> <span class="comment">#自定义注释列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span></span><br><span class="line"><span class="attr">spec:</span> <span class="comment">#必选,Pod中容器的详细定义</span></span><br><span class="line"><span class="attr"> containers:</span> <span class="comment">#必选,Pod中容器列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#必选,容器名称,需符合RFC 1035规范</span></span><br><span class="line"><span class="attr"> image:</span> <span class="string">string</span> <span class="comment">#必选,容器的镜像名称</span></span><br><span class="line"><span class="attr"> imagePullPolicy:</span> <span class="string">[</span> <span class="string">Always|Never|IfNotPresent</span> <span class="string">]</span> <span class="comment">#获取镜像的策略 Alawys表示下载镜像 IfnotPresent表示优先使用本地镜像,否则下载镜像,Nerver表示仅使用本地镜像</span></span><br><span class="line"><span class="attr"> command:</span> <span class="string">[string]</span> <span class="comment">#容器的启动命令列表,如不指定,使用打包时使用的启动命令</span></span><br><span class="line"><span class="attr"> args:</span> <span class="string">[string]</span> <span class="comment">#容器的启动命令参数列表</span></span><br><span class="line"><span class="attr"> workingDir:</span> <span class="string">string</span> <span class="comment">#容器的工作目录</span></span><br><span class="line"><span class="attr"> volumeMounts:</span> <span class="comment">#挂载到容器内部的存储卷配置</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#引用pod定义的共享存储卷的名称,需用volumes[]部分定义的的卷名</span></span><br><span class="line"><span class="attr"> mountPath:</span> <span class="string">string</span> <span class="comment">#存储卷在容器内mount的绝对路径,应少于512字符</span></span><br><span class="line"><span class="attr"> readOnly:</span> <span class="string">boolean</span> <span class="comment">#是否为只读模式</span></span><br><span class="line"><span class="attr"> ports:</span> <span class="comment">#需要暴露的端口库号列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#端口的名称</span></span><br><span class="line"><span class="attr"> containerPort:</span> <span class="string">int</span> <span class="comment">#容器需要监听的端口号</span></span><br><span class="line"><span class="attr"> hostPort:</span> <span class="string">int</span> <span class="comment">#容器所在主机需要监听的端口号,默认与Container相同</span></span><br><span class="line"><span class="attr"> protocol:</span> <span class="string">string</span> <span class="comment">#端口协议,支持TCP和UDP,默认TCP</span></span><br><span class="line"><span class="attr"> env:</span> <span class="comment">#容器运行前需设置的环境变量列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#环境变量名称</span></span><br><span class="line"><span class="attr"> value:</span> <span class="string">string</span> <span class="comment">#环境变量的值</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="comment">#资源限制和请求的设置</span></span><br><span class="line"><span class="attr"> limits:</span> <span class="comment">#资源限制的设置</span></span><br><span class="line"><span class="attr"> cpu:</span> <span class="string">string</span> <span class="comment">#Cpu的限制,单位为core数,将用于docker run --cpu-shares参数</span></span><br><span class="line"><span class="attr"> memory:</span> <span class="string">string</span> <span class="comment">#内存限制,单位可以为Mib/Gib,将用于docker run --memory参数</span></span><br><span class="line"><span class="attr"> requests:</span> <span class="comment">#资源请求的设置</span></span><br><span class="line"><span class="attr"> cpu:</span> <span class="string">string</span> <span class="comment">#Cpu请求,容器启动的初始可用数量</span></span><br><span class="line"><span class="attr"> memory:</span> <span class="string">string</span> <span class="comment">#内存请求,容器启动的初始可用数量</span></span><br><span class="line"><span class="attr"> livenessProbe:</span> <span class="comment">#对Pod内各容器健康检查的设置,当探测无响应几次后将自动重启该容器,检查方法有exec、httpGet和tcpSocket,对一个容器只需设置其中一种方法即可</span></span><br><span class="line"><span class="attr"> exec:</span> <span class="comment">#对Pod容器内检查方式设置为exec方式</span></span><br><span class="line"><span class="attr"> command:</span> <span class="string">[string]</span> <span class="comment">#exec方式需要制定的命令或脚本</span></span><br><span class="line"><span class="attr"> httpGet:</span> <span class="comment">#对Pod内个容器健康检查方法设置为HttpGet,需要制定Path、port</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> port:</span> <span class="string">number</span></span><br><span class="line"><span class="attr"> host:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> scheme:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> HttpHeaders:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> value:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> tcpSocket:</span> <span class="comment">#对Pod内个容器健康检查方式设置为tcpSocket方式</span></span><br><span class="line"><span class="attr"> port:</span> <span class="string">number</span></span><br><span class="line"><span class="attr"> initialDelaySeconds:</span> <span class="number">0</span> <span class="comment">#容器启动完成后首次探测的时间,单位为秒</span></span><br><span class="line"><span class="attr"> timeoutSeconds:</span> <span class="number">0</span> <span class="comment">#对容器健康检查探测等待响应的超时时间,单位秒,默认1秒</span></span><br><span class="line"><span class="attr"> periodSeconds:</span> <span class="number">0</span> <span class="comment">#对容器监控检查的定期探测时间设置,单位秒,默认10秒一次</span></span><br><span class="line"><span class="attr"> successThreshold:</span> <span class="number">0</span></span><br><span class="line"><span class="attr"> failureThreshold:</span> <span class="number">0</span></span><br><span class="line"><span class="attr"> securityContext:</span></span><br><span class="line"><span class="attr"> privileged:</span> <span class="literal">false</span></span><br><span class="line"><span class="attr"> restartPolicy:</span> <span class="string">[Always</span> <span class="string">| Never | OnFailure] #Pod的重启策略,Always表示一旦不管以何种方式终止运行,kubelet都将重启,OnFailure表示只有Pod以非0退出码退出才重启,Nerver表示不再重启该Pod</span></span><br><span class="line"><span class="string"></span><span class="attr"> nodeSelector:</span> <span class="string">obeject</span> <span class="comment">#设置NodeSelector表示将该Pod调度到包含这个label的node上,以key:value的格式指定</span></span><br><span class="line"><span class="attr"> imagePullSecrets:</span> <span class="comment">#Pull镜像时使用的secret名称,以key:secretkey格式指定</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> hostNetwork:</span> <span class="literal">false</span> <span class="comment">#是否使用主机网络模式,默认为false,如果设置为true,表示使用宿主机网络</span></span><br><span class="line"><span class="attr"> volumes:</span> <span class="comment">#在该pod上定义共享存储卷列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#共享存储卷名称 (volumes类型有很多种)</span></span><br><span class="line"><span class="attr"> emptyDir:</span> <span class="string">{}</span> <span class="comment">#类型为emtyDir的存储卷,与Pod同生命周期的一个临时目录。为空值</span></span><br><span class="line"><span class="attr"> hostPath:</span> <span class="string">string</span> <span class="comment">#类型为hostPath的存储卷,表示挂载Pod所在宿主机的目录</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">string</span> <span class="comment">#Pod所在宿主机的目录,将被用于同期中mount的目录</span></span><br><span class="line"><span class="attr"> secret:</span> <span class="comment">#类型为secret的存储卷,挂载集群与定义的secre对象到容器内部</span></span><br><span class="line"><span class="attr"> scretname:</span> <span class="string">string</span> </span><br><span class="line"><span class="attr"> items:</span> </span><br><span class="line"><span class="attr"> - key:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> configMap:</span> <span class="comment">#类型为configMap的存储卷,挂载预定义的configMap对象到容器内部</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> items:</span></span><br><span class="line"><span class="attr"> - key:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">string</span></span><br></pre></td></tr></table></figure></p><h3 id="静态Pod"><a href="#静态Pod" class="headerlink" title="静态Pod"></a>静态Pod</h3><p>静态Pod是由kubelet进行管理的仅存在于特定Node上的Pod。</p><p>它们不能通过API Server进行管理,无法与ReplicationController、Deployment或者DaemonSet进行关联,并且kubelet无法对它们进行健康检查。</p><p>静态Pod总是由kubelet创建的,并且总在kubelet所在的Node上运行。创建静态Pod有两种方式:</p><ul><li>配置文件方式</li><li>HTTP方式<figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Pod</span></span><br><span class="line"><span class="attr">metadata:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">pod-demo</span> </span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">default</span> </span><br><span class="line"><span class="attr"> labels:</span> </span><br><span class="line"><span class="attr"> app:</span> <span class="string">myapp</span></span><br><span class="line"><span class="attr">spec:</span> </span><br><span class="line"><span class="attr"> containers:</span> </span><br><span class="line"><span class="attr"> - name:</span> <span class="string">myapp-1</span> </span><br><span class="line"><span class="attr"> image:</span> <span class="string">plutoacharon/myapp:v1</span> </span><br><span class="line"><span class="attr"> - name:</span> <span class="string">busybox-1</span> </span><br><span class="line"><span class="attr"> image:</span> <span class="attr">busybox:latest</span> </span><br><span class="line"><span class="attr"> command:</span> <span class="bullet">-</span> <span class="string">"/bin/sh"</span> <span class="bullet">-</span> <span class="string">"-c"</span> <span class="bullet">-</span> <span class="string">"sleep 3600"</span></span><br></pre></td></tr></table></figure></li></ul><h3 id="Pod容器共享Volume"><a href="#Pod容器共享Volume" class="headerlink" title="Pod容器共享Volume"></a>Pod容器共享Volume</h3><p>同一个Pod中的多个容器能够共享Pod级别的存储卷Volume。</p><p>Volume可以被定义为各种类型,多个容器各自进行挂载操作,将一个Volume挂载为容器内部需要的目录<br><img src="https://img-blog.csdnimg.cn/20200420212747115.png" alt="在这里插入图片描述"><br><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Pod</span></span><br><span class="line"><span class="attr">metadata:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">test-pd</span></span><br><span class="line"><span class="attr">spec:</span> </span><br><span class="line"><span class="attr"> containers:</span> </span><br><span class="line"><span class="attr"> - image:</span> <span class="string">k8s.gcr.io/test-webserver</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">test-container</span> </span><br><span class="line"><span class="attr"> volumeMounts:</span> </span><br><span class="line"><span class="attr"> - mountPath:</span> <span class="string">/cache</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">cache-volume</span> </span><br><span class="line"><span class="attr"> volumes:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">cache-volume</span> </span><br><span class="line"><span class="attr"> emptyDir:</span> <span class="string">{}</span></span><br></pre></td></tr></table></figure></p><h4 id="emptyDir"><a href="#emptyDir" class="headerlink" title="emptyDir"></a>emptyDir</h4><p>当 Pod 被分配给节点时,首先创建emptyDir卷,并且只要该 Pod 在该节点上运行,该卷就会存在。</p><p>正如卷的名字所述,它最初是空的。Pod 中的容器可以读取和写入emptyDir卷中的相同文件,尽管该卷可以挂载到每个容器中的相同或不同路径上。</p><p>当出于任何原因从节点中删除 Pod 时,emptyDir中的数据将被永久删除</p><p>emptyDir的用法有:</p><ul><li><p>暂存空间,例如用于基于磁盘的合并排序</p></li><li><p>用作长时间计算崩溃恢复时的检查点</p></li><li><p>Web服务器容器提供数据时,保存内容管理器容器提取的文件</p></li></ul><h3 id="ConfigMap概述"><a href="#ConfigMap概述" class="headerlink" title="ConfigMap概述"></a>ConfigMap概述</h3><p>ConfigMap 功能在 Kubernetes1.2 版本中引入,许多应用程序会从配置文件、命令行参数或环境变量中读取配置信息。</p><p>ConfigMap API 给我们提供了向容器中注入配置信息的机制,ConfigMap 可以被用来保存单个属性,也可以用来保存整个配置文件或者 JSON 二进制大对象</p><p>ConfigMap供容器使用的典型用法如下。</p><ul><li>生成为容器内的环境变量。</li><li>设置容器启动命令的启动参数(需设置为环境变量)</li><li>以Volume的形式挂载为容器内部的文件或目录。</li></ul><p>ConfigMap以一个或多个key:value的形式保存在Kubernetes系统中供应用使用,既可以用于表示一个变量的值(例如apploglevel=info),也可以用于表示一个完整配置文件的内容(例如server.xml=<?xml…>…)</p><p>可以通过YAML配置文件或者直接使用kubectl create configmap命令行的方式来创建ConfigMap。</p><p>使用ConfigMap的限制条件使用ConfigMap的限制条件如下。</p><ul><li>ConfigMap必须在Pod之前创建。</li><li>ConfigMap受Namespace限制,只有处于相同Namespace中的Pod才可以引用它。</li><li>ConfigMap中的配额管理还未能实现。</li><li>kubelet只支持可以被API Server管理的Pod使用ConfigMap。kubelet在本Node上通过 –manifest-url或–config自动创建的静态Pod将无法引用ConfigMap。</li><li>在Pod对ConfigMap进行挂载(volumeMount)操作时,在容器内部只能挂载为“目录”,无法挂载为“文件”。在挂载到容器内部后,在目录下将包含ConfigMap定义的每个item,如果在该目录下原来还有其他文件,则容器内的该目录将被挂载的ConfigMap覆盖。如果应用程序需要保留原来的其他文件,则需要进行额外的处理。可以将ConfigMap挂载到容器内部的临时目录,再通过启动脚本将配置文件复制或者链接到(cp或link命令)应用所用的实际配置目录下</li></ul><h3 id="容器内获取Pod信息(DownwardAPI)"><a href="#容器内获取Pod信息(DownwardAPI)" class="headerlink" title="容器内获取Pod信息(DownwardAPI)"></a>容器内获取Pod信息(DownwardAPI)</h3><p>我们知道,每个Pod在被成功创建出来之后,都会被系统分配唯一的名字、IP地址,并且处于某个Namespace中,那么我们如何在Pod的容器内获取Pod的这些重要信息呢?答案就是使用Downward API。</p><p>Downward API可以通过以下两种方式将Pod信息注入容器内部。</p><ul><li>环境变量:用于单个变量,可以将Pod信息和Container信息注入容器内部。</li><li>Volume挂载:将数组类信息生成为文件并挂载到容器内部。</li></ul><h3 id="Pod生命周期和重启策略"><a href="#Pod生命周期和重启策略" class="headerlink" title="Pod生命周期和重启策略"></a>Pod生命周期和重启策略</h3><p>挂起(Pending):Pod已被Kubernetes系统接受,但有一个或者多个容器镜像尚未创建。等待时间包括调度Pod的时间和通过网络下载镜像的时间,这可能需要花点时间</p><p>运行中(Running):该Pod已经绑定到了一个节点上,Pod中所有的容器都已被创建。至少有一个容器正在运行,或者正处于启动或重启状态成功(Succeeded):Pod中的所有容器都被成功终止,并且不会再重启</p><p>失败(Failed):Pod中的所有容器都已终止了,并且至少有一个容器是因为失败终止。也就是说,容器以非0状态退出或者被系统终止</p><p>未知(Unknown):因为某些原因无法取得Pod的状态,通常是因为与Pod所在主机通信失败</p><p>Pod的重启策略(RestartPolicy)应用于Pod内的所有容器,并且仅在Pod所处的Node上由kubelet进行判断和重启操作。当某个容器异常退出或者健康检查失败时,kubelet将根据RestartPolicy的设置来进行相应的操作。Pod的重启策略包括Always、OnFailure和Never,默认值为Always。</p><ul><li>Always:当容器失效时,由kubelet自动重启该容器。</li><li>OnFailure:当容器终止运行且退出码不为0时,由kubelet自动重启该容器。</li><li>Never:不论容器运行状态如何,kubelet都不会重启该容器。</li></ul><p>kubelet重启失效容器的时间间隔以sync-frequency乘以2n来计算,例如1、2、4、8倍等,最长延时5min,并且在成功重启后的10min后重置该时间。</p><p>Pod的重启策略与控制方式息息相关,当前可用于管理Pod的控制器包ReplicationController、Job、DaemonSet及直接通过kubelet管理(静态Pod)。每种控制器对Pod的重启策略要求如下</p><ul><li>RC和DaemonSet:必须设置为Always,需要保证该容器持续运行。</li><li>Job:OnFailure或Never,确保容器执行完成后不再重启。</li><li>kubelet:在Pod失效时自动重启它,不论将RestartPolicy设置为什么值,也不会对Pod进行健康检查</li></ul><h3 id="Pod健康检查和服务可用性检查"><a href="#Pod健康检查和服务可用性检查" class="headerlink" title="Pod健康检查和服务可用性检查"></a>Pod健康检查和服务可用性检查</h3><p>Kubernetes 对 Pod 的健康状态可以通过两类探针来检查:LivenessProbe 和ReadinessProbe,kubelet定期执行这两类探针来诊断容器的健康状况。</p><ul><li>LivenessProbe探针:用于判断容器是否存活(Running状态),如果LivenessProbe探针探测到容器不健康,则kubelet将杀掉该容器,并根据容器的重启策略做相应的处理。如果一个容器不包含LivenessProbe探针,那么kubelet认为该容器的LivenessProbe探针返回的值永远是Success。</li><li>ReadinessProbe探针:用于判断容器服务是否可用(Ready状态),达到Ready状态的Pod才可以接收请求。对于被Service管理的Pod,Service与Pod Endpoint的关联关系也将基于Pod是否Ready进行设置。如果在运行过程中Ready状态变为False,则系统自动将其从Service的后端Endpoint列表中隔离出去,后续再把恢复到Ready状态的Pod加回后端Endpoint列表。这样就能保证客户端在访问Service时不会被转发到服务不可用的Pod实例上。</li></ul>]]></content>
<summary type="html">
<p>上几章写了Kubernetes的基本概念与集群搭建<br>接下来将深入探索Pod的应用、配置、调度、升级及扩缩容,讲述Kubernetes容器编排。</p>
<p>本章将对Kubernetes如何发布与管理容器应用进行详细说明和示例,主要包括Pod和容器的使用、应用配置管理
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>Python算法学习: 2020年蓝桥杯省赛模拟赛-Python题解</title>
<link href="https://plutoacharon.github.io/2020/04/21/Python%E7%AE%97%E6%B3%95%E5%AD%A6%E4%B9%A0-2020%E5%B9%B4%E8%93%9D%E6%A1%A5%E6%9D%AF%E7%9C%81%E8%B5%9B%E6%A8%A1%E6%8B%9F%E8%B5%9B-Python%E9%A2%98%E8%A7%A3/"/>
<id>https://plutoacharon.github.io/2020/04/21/Python算法学习-2020年蓝桥杯省赛模拟赛-Python题解/</id>
<published>2020-04-21T09:54:56.000Z</published>
<updated>2020-04-21T10:01:48.090Z</updated>
<content type="html"><![CDATA[<h2 id="目录"><a href="#目录" class="headerlink" title="目录"></a>目录</h2><h3 id="填空题1"><a href="#填空题1" class="headerlink" title="填空题1"></a>填空题1</h3><p>问题描述<br> 一个包含有2019个结点的无向连通图,最少包含多少条边?<br>答案提交<br> 这是一道结果填空的题,你只需要算出结果后提交即可。本题的结果为一个整数,在提交答案时只填写这个整数,填写多余的内容将无法得分。<br>答案 :2018</p><h3 id="填空题2"><a href="#填空题2" class="headerlink" title="填空题2"></a>填空题2</h3><p>问题描述<br> 将LANQIAO中的字母重新排列,可以得到不同的单词,如LANQIAO、AAILNOQ等,注意这7个字母都要被用上,单词不一定有具体的英文意义。<br> 请问,总共能排列如多少个不同的单词。<br>答案提交<br> 这是一道结果填空的题,你只需要算出结果后提交即可。本题的结果为一个整数,在提交答案时只填写这个整数,填写多余的内容将无法得分。<br>答案 :2520</p><h3 id="填空题3"><a href="#填空题3" class="headerlink" title="填空题3"></a>填空题3</h3><p>问题描述<br> 在计算机存储中,12.5MB是多少字节?<br>答案提交<br> 这是一道结果填空的题,你只需要算出结果后提交即可。本题的结果为一个整数,在提交答案时只填写这个整数,填写多余的内容将无法得分。<br>答案 :13107200</p><h3 id="填空题4"><a href="#填空题4" class="headerlink" title="填空题4"></a>填空题4</h3><p>问题描述<br> 由1对括号,可以组成一种合法括号序列:()。<br> 由2对括号,可以组成两种合法括号序列:()()、(())。<br> 由4对括号组成的合法括号序列一共有多少种?<br>答案提交<br> 这是一道结果填空的题,你只需要算出结果后提交即可。本题的结果为一个整数,在提交答案时只填写这个整数,填写多余的内容将无法得分。<br>答案 :14</p><h3 id="编程题1-凯撒密码加密"><a href="#编程题1-凯撒密码加密" class="headerlink" title="编程题1 凯撒密码加密"></a>编程题1 凯撒密码加密</h3><p>问题描述<br> 给定一个单词,请使用凯撒密码将这个单词加密。<br> 凯撒密码是一种替换加密的技术,单词中的所有字母都在字母表上向后偏移3位后被替换成密文。即a变为d,b变为e,…,w变为z,x变为a,y变为b,z变为c。<br> 例如,lanqiao会变成odqtldr。<br>输入格式<br> 输入一行,包含一个单词,单词中只包含小写英文字母。<br>输出格式<br> 输出一行,表示加密后的密文。<br>样例输入<br>lanqiao<br>样例输出<br>odqtldr<br>评测用例规模与约定<br> 对于所有评测用例,单词中的字母个数不超过100<br><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">ans = <span class="string">""</span></span><br><span class="line">strq = list(input())</span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(len(strq)):</span><br><span class="line"> <span class="keyword">if</span> <span class="number">97</span> <= ord(strq[i]) <= <span class="number">119</span>:</span><br><span class="line"> strq[i] = chr(ord(strq[i]) + <span class="number">3</span>)</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> strq[i] = chr(ord(strq[i]) - <span class="number">120</span> + <span class="number">97</span>)</span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(len(strq)):</span><br><span class="line"> ans += strq[i]</span><br><span class="line">print(ans)</span><br></pre></td></tr></table></figure></p><h3 id="编程题2-反倍数"><a href="#编程题2-反倍数" class="headerlink" title="编程题2 反倍数"></a>编程题2 反倍数</h3><p>问题描述<br> 给定三个整数 a, b, c,如果一个整数既不是 a 的整数倍也不是 b 的整数倍还不是 c 的整数倍,则这个数称为反倍数。<br> 请问在 1 至 n 中有多少个反倍数。<br>输入格式<br> 输入的第一行包含一个整数 n。<br> 第二行包含三个整数 a, b, c,相邻两个数之间用一个空格分隔。<br>输出格式<br> 输出一行包含一个整数,表示答案。<br>样例输入<br>30<br>2 3 6<br>样例输出<br>10<br>样例说明<br> 以下这些数满足要求:1, 5, 7, 11, 13, 17, 19, 23, 25, 29。<br>评测用例规模与约定<br> 对于 40% 的评测用例,1 <= n <= 10000。<br> 对于 80% 的评测用例,1 <= n <= 100000。<br> 对于所有评测用例,1 <= n <= 1000000,1 <= a <= n,1 <= b <= n,1 <= c <= n。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">n = int(input())</span><br><span class="line">ans = <span class="number">0</span></span><br><span class="line">a,b,c = map(int, input().split())</span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">1</span>, n+<span class="number">1</span>):</span><br><span class="line"> <span class="keyword">if</span> i % a != <span class="number">0</span> <span class="keyword">and</span> i % b != <span class="number">0</span> <span class="keyword">and</span> i % c != <span class="number">0</span>:</span><br><span class="line"> ans += <span class="number">1</span></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="keyword">continue</span></span><br><span class="line">print(ans)</span><br></pre></td></tr></table></figure><h3 id="编程题3-摆动序列"><a href="#编程题3-摆动序列" class="headerlink" title="编程题3 摆动序列"></a>编程题3 摆动序列</h3><p>问题描述<br> 如果一个序列的奇数项都比前一项大,偶数项都比前一项小,则称为一个摆动序列。即 a[2i]<a[2i-1], a[2i+1]>a[2i]。<br> 小明想知道,长度为 m,每个数都是 1 到 n 之间的正整数的摆动序列一共有多少个。<br>输入格式<br> 输入一行包含两个整数 m,n。<br>输出格式<br> 输出一个整数,表示答案。答案可能很大,请输出答案除以10000的余数。<br>样例输入<br>3 4<br>样例输出<br>14<br>样例说明<br> 以下是符合要求的摆动序列:<br> 2 1 2<br> 2 1 3<br> 2 1 4<br> 3 1 2<br> 3 1 3<br> 3 1 4<br> 3 2 3<br> 3 2 4<br> 4 1 2<br> 4 1 3<br> 4 1 4<br> 4 2 3<br> 4 2 4<br> 4 3 4<br>评测用例规模与约定<br> 对于 20% 的评测用例,1 <= n, m <= 5;<br> 对于 50% 的评测用例,1 <= n, m <= 10;<br> 对于 80% 的评测用例,1 <= n, m <= 100;<br> 对于所有评测用例,1 <= n, m <= 1000。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line">ans = <span class="number">0</span></span><br><span class="line">m, n = map(int, input().split())</span><br><span class="line">dp = [[<span class="number">0</span> <span class="keyword">for</span> _ <span class="keyword">in</span> range(<span class="number">1024</span>)] <span class="keyword">for</span> _ <span class="keyword">in</span> range(<span class="number">1024</span>)]</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">1</span>, n + <span class="number">1</span>):</span><br><span class="line"> dp[<span class="number">1</span>][i] = n - i + <span class="number">1</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">2</span>, m+<span class="number">1</span>):</span><br><span class="line"> <span class="keyword">if</span> i & <span class="number">1</span>:</span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(n , <span class="number">0</span>, <span class="number">-1</span>):</span><br><span class="line"> dp[i][j] = (dp[i - <span class="number">1</span>][j - <span class="number">1</span>] + dp[i][j + <span class="number">1</span>]) % <span class="number">10000</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(<span class="number">1</span>, n+<span class="number">1</span>):</span><br><span class="line"> dp[i][j] = (dp[i - <span class="number">1</span>][j + <span class="number">1</span>] + dp[i][j - <span class="number">1</span>]) % <span class="number">10000</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> m & <span class="number">1</span>:</span><br><span class="line"> ans = dp[m][<span class="number">1</span>]</span><br><span class="line"><span class="keyword">else</span>:</span><br><span class="line"> ans = dp[m][n]</span><br><span class="line">print(ans)</span><br></pre></td></tr></table></figure><h3 id="编程题4-螺旋矩阵"><a href="#编程题4-螺旋矩阵" class="headerlink" title="编程题4 螺旋矩阵"></a>编程题4 螺旋矩阵</h3><p>问题描述<br> 对于一个 n 行 m 列的表格,我们可以使用螺旋的方式给表格依次填上正整数,我们称填好的表格为一个螺旋矩阵。<br> 例如,一个 4 行 5 列的螺旋矩阵如下:<br> 1 2 3 4 5<br> 14 15 16 17 6<br> 13 20 19 18 7<br> 12 11 10 9 8<br>输入格式<br> 输入的第一行包含两个整数 n, m,分别表示螺旋矩阵的行数和列数。<br> 第二行包含两个整数 r, c,表示要求的行号和列号。<br>输出格式<br> 输出一个整数,表示螺旋矩阵中第 r 行第 c 列的元素的值。<br>样例输入<br>4 5<br>2 2<br>样例输出<br>15<br>评测用例规模与约定<br> 对于 30% 的评测用例,2 <= n, m <= 20。<br> 对于 70% 的评测用例,2 <= n, m <= 100。<br> 对于所有评测用例,2 <= n, m <= 1000,1 <= r <= n,1 <= c <= m。<br><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br></pre></td><td class="code"><pre><span class="line">n, m = map(int, input().split())</span><br><span class="line">r, c = map(int, input().split())</span><br><span class="line">ansList = [[<span class="number">0</span> <span class="keyword">for</span> _ <span class="keyword">in</span> range(m)] <span class="keyword">for</span> _ <span class="keyword">in</span> range(n)]</span><br><span class="line">vis = [[<span class="number">0</span> <span class="keyword">for</span> _ <span class="keyword">in</span> range(m)] <span class="keyword">for</span> _ <span class="keyword">in</span> range(n)]</span><br><span class="line">i = <span class="number">1</span></span><br><span class="line">x = <span class="number">0</span> <span class="comment"># 当前纵坐标</span></span><br><span class="line">y = <span class="number">0</span> <span class="comment"># 当前横坐标</span></span><br><span class="line"><span class="keyword">while</span> i < n * m:</span><br><span class="line"></span><br><span class="line"> <span class="keyword">while</span> y < m <span class="keyword">and</span> vis[x][y] == <span class="number">0</span>:</span><br><span class="line"> ansList[x][y] = i</span><br><span class="line"> vis[x][y] = <span class="number">1</span></span><br><span class="line"> i += <span class="number">1</span></span><br><span class="line"> y += <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> y -= <span class="number">1</span></span><br><span class="line"> x += <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">while</span> x < n <span class="keyword">and</span> vis[x][y] == <span class="number">0</span>:</span><br><span class="line"> ansList[x][y] = i</span><br><span class="line"> vis[x][y] = <span class="number">1</span></span><br><span class="line"> i += <span class="number">1</span></span><br><span class="line"> x += <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> x -= <span class="number">1</span></span><br><span class="line"> y -= <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">while</span> y >= <span class="number">0</span> <span class="keyword">and</span> vis[x][y] == <span class="number">0</span>:</span><br><span class="line"> ansList[x][y] = i</span><br><span class="line"> vis[x][y] = <span class="number">1</span></span><br><span class="line"> i += <span class="number">1</span></span><br><span class="line"> y -= <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> y += <span class="number">1</span></span><br><span class="line"> x -= <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">while</span> x >= <span class="number">0</span> <span class="keyword">and</span> vis[x][y] == <span class="number">0</span>:</span><br><span class="line"> ansList[x][y] = i</span><br><span class="line"> vis[x][y] = <span class="number">1</span></span><br><span class="line"> i += <span class="number">1</span></span><br><span class="line"> x -= <span class="number">1</span></span><br><span class="line"> x += <span class="number">1</span></span><br><span class="line"> y += <span class="number">1</span></span><br><span class="line">print(ansList[r<span class="number">-1</span>][c<span class="number">-1</span>])</span><br></pre></td></tr></table></figure></p><h3 id="编程题5-村庄通电"><a href="#编程题5-村庄通电" class="headerlink" title="编程题5 村庄通电"></a>编程题5 村庄通电</h3><p>问题描述<br> 2015年,全中国实现了户户通电。作为一名电力建设者,小明正在帮助一带一路上的国家通电。<br> 这一次,小明要帮助 n 个村庄通电,其中 1 号村庄正好可以建立一个发电站,所发的电足够所有村庄使用。<br> 现在,这 n 个村庄之间都没有电线相连,小明主要要做的是架设电线连接这些村庄,使得所有村庄都直接或间接的与发电站相通。<br> 小明测量了所有村庄的位置(坐标)和高度,如果要连接两个村庄,小明需要花费两个村庄之间的坐标距离加上高度差的平方,形式化描述为坐标为 (x_1, y_1) 高度为 h_1 的村庄与坐标为 (x_2, y_2) 高度为 h_2 的村庄之间连接的费用为<br> sqrt((x_1-x_2)<em>(x_1-x_2)+(y_1-y_2)</em>(y_1-y_2))+(h_1-h_2)*(h_1-h_2)。<br> 在上式中 sqrt 表示取括号内的平方根。请注意括号的位置,高度的计算方式与横纵坐标的计算方式不同。<br> 由于经费有限,请帮助小明计算他至少要花费多少费用才能使这 n 个村庄都通电。<br>输入格式<br> 输入的第一行包含一个整数 n ,表示村庄的数量。<br> 接下来 n 行,每个三个整数 x, y, h,分别表示一个村庄的横、纵坐标和高度,其中第一个村庄可以建立发电站。<br>输出格式<br> 输出一行,包含一个实数,四舍五入保留 2 位小数,表示答案。<br>样例输入<br>4<br>1 1 3<br>9 9 7<br>8 8 6<br>4 5 4<br>样例输出<br>17.41<br>评测用例规模与约定<br> 对于 30% 的评测用例,1 <= n <= 10;<br> 对于 60% 的评测用例,1 <= n <= 100;<br> 对于所有评测用例,1 <= n <= 1000,0 <= x, y, h <= 10000。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="编程题6-小明植树"><a href="#编程题6-小明植树" class="headerlink" title="编程题6 小明植树"></a>编程题6 小明植树</h3><p>问题描述<br> 小明和朋友们一起去郊外植树,他们带了一些在自己实验室精心研究出的小树苗。<br> 小明和朋友们一共有 n 个人,他们经过精心挑选,在一块空地上每个人挑选了一个适合植树的位置,总共 n 个。他们准备把自己带的树苗都植下去。<br> 然而,他们遇到了一个困难:有的树苗比较大,而有的位置挨太近,导致两棵树植下去后会撞在一起。<br> 他们将树看成一个圆,圆心在他们找的位置上。如果两棵树对应的圆相交,这两棵树就不适合同时植下(相切不受影响),称为两棵树冲突。<br> 小明和朋友们决定先合计合计,只将其中的一部分树植下去,保证没有互相冲突的树。他们同时希望这些树所能覆盖的面积和(圆面积和)最大。<br>输入格式<br> 输入的第一行包含一个整数 n ,表示人数,即准备植树的位置数。<br> 接下来 n 行,每行三个整数 x, y, r,表示一棵树在空地上的横、纵坐标和半径。<br>输出格式<br> 输出一行包含一个整数,表示在不冲突下可以植树的面积和。由于每棵树的面积都是圆周率的整数倍,请输出答案除以圆周率后的值(应当是一个整数)。<br>样例输入<br>6<br>1 1 2<br>1 4 2<br>1 7 2<br>4 1 2<br>4 4 2<br>4 7 2<br>样例输出<br>12<br>评测用例规模与约定<br> 对于 30% 的评测用例,1 <= n <= 10;<br> 对于 60% 的评测用例,1 <= n <= 20;<br> 对于所有评测用例,1 <= n <= 30,0 <= x, y <= 1000,1 <= r <= 1000。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">isTure</span><span class="params">(i)</span>:</span></span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(n):</span><br><span class="line"> <span class="keyword">if</span> i != j <span class="keyword">and</span> vis[j]:</span><br><span class="line"> <span class="keyword">if</span> (x[i] - x[j]) * (x[i] - x[j]) + (y[i] - y[j]) * (y[i] - y[j]) < (r[i] + r[j]) * (r[i] + r[j]):</span><br><span class="line"> <span class="keyword">return</span> <span class="literal">False</span></span><br><span class="line"> <span class="keyword">return</span> <span class="literal">True</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">dfs</span><span class="params">(step, sum)</span>:</span></span><br><span class="line"> <span class="keyword">global</span> ans</span><br><span class="line"> <span class="keyword">if</span> step == n:</span><br><span class="line"> ans = max(ans, sum)</span><br><span class="line"> <span class="keyword">return</span></span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(n):</span><br><span class="line"> <span class="keyword">if</span> vis[i] == <span class="number">0</span>:</span><br><span class="line"> tmp = r[i]</span><br><span class="line"> <span class="keyword">if</span> isTure(i) == <span class="literal">False</span>:</span><br><span class="line"> r[i] = <span class="number">0</span></span><br><span class="line"> vis[i] = <span class="number">1</span></span><br><span class="line"> dfs(step + <span class="number">1</span>, sum + r[i] * r[i])</span><br><span class="line"> vis[i] = <span class="number">0</span></span><br><span class="line"> r[i] = tmp</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> __name__ == <span class="string">'__main__'</span>:</span><br><span class="line"> PI = <span class="number">3.14</span></span><br><span class="line"> ans = <span class="number">0</span></span><br><span class="line"> x = []</span><br><span class="line"> y = []</span><br><span class="line"> r = []</span><br><span class="line"> n = int(input())</span><br><span class="line"> vis = [<span class="number">0</span> <span class="keyword">for</span> _ <span class="keyword">in</span> range(n)]</span><br><span class="line"> <span class="keyword">for</span> _ <span class="keyword">in</span> range(n):</span><br><span class="line"> xt, yt, rt = map(int, input().split())</span><br><span class="line"> x.append(xt)</span><br><span class="line"> y.append(yt)</span><br><span class="line"> r.append(rt)</span><br><span class="line"> dfs(<span class="number">0</span>, <span class="number">0</span>)</span><br><span class="line"></span><br><span class="line"> print(ans)</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
<h2 id="目录"><a href="#目录" class="headerlink" title="目录"></a>目录</h2><h3 id="填空题1"><a href="#填空题1" class="headerlink" title="填空题1"></a>填空题1</h
</summary>
<category term="Python算法" scheme="https://plutoacharon.github.io/categories/Python%E7%AE%97%E6%B3%95/"/>
<category term="Python算法" scheme="https://plutoacharon.github.io/tags/Python%E7%AE%97%E6%B3%95/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(六)---- 基于Docker配置KeepAlive支持Nginx高可用</title>
<link href="https://plutoacharon.github.io/2020/04/21/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E5%85%AD%EF%BC%89-%E5%9F%BA%E4%BA%8EDocker%E9%85%8D%E7%BD%AEKeepAlive%E6%94%AF%E6%8C%81Nginx%E9%AB%98%E5%8F%AF%E7%94%A8/"/>
<id>https://plutoacharon.github.io/2020/04/21/HA高可用与负载均衡入门到实战(六)-基于Docker配置KeepAlive支持Nginx高可用/</id>
<published>2020-04-21T09:52:59.000Z</published>
<updated>2020-04-21T09:54:38.065Z</updated>
<content type="html"><![CDATA[<h2 id="网站架构"><a href="#网站架构" class="headerlink" title="网站架构"></a>网站架构</h2><p>基于Docker容器里构建高并发网站</p><p>拓扑图:<br><img src="https://img-blog.csdnimg.cn/20200416115629660.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>上文讲述了简单的基于Docker的配置Nginx反向代理和负载均衡</p><p>本文讲述Keepalived与Nginx共同实现高可用实例<br>|IP地址 | 容器名 |功能|<br>|–|–|–|<br>| 172.18.0.11| nginx1| nginx+keepalived |<br>| 172.18.0.12|nginx2| nginx+keepalived |<br>| 172.18.0.10|VIP| |</p><h2 id="安装配置keepalived"><a href="#安装配置keepalived" class="headerlink" title="安装配置keepalived"></a>安装配置keepalived</h2><h3 id="使用nginx镜像生成nginx-keep镜像"><a href="#使用nginx镜像生成nginx-keep镜像" class="headerlink" title="使用nginx镜像生成nginx-keep镜像"></a>使用nginx镜像生成nginx-keep镜像</h3><p>1) 启动nginx容器并进入<br><code>docker run -d --privileged nginx /usr/sbin/init</code></p><p>2) 在nginx容器中使用yum方式安装keepalived<br><code>yum install -y keepalived</code><br>3) 保存容器为镜像<br><code>docker commit 容器ID nginx-keep</code></p><h3 id="使用nginx-keep镜像启动nginx1和nginx2两个容器"><a href="#使用nginx-keep镜像启动nginx1和nginx2两个容器" class="headerlink" title="使用nginx-keep镜像启动nginx1和nginx2两个容器"></a>使用nginx-keep镜像启动nginx1和nginx2两个容器</h3><p>1) 创建docker网络<br> <code>docker network create --subnet=172.18.0.0/16 cluster</code><br>2) 查看宿主机上的docker网络类型种类<br><code>docker network ls</code><br>3) 启动容器nginx1,设定地址为172.18.0.11<br><code>docker run -d --privileged --net cluster --ip 172.18.0.11 --name nginx1 nginx-keep /usr/sbin/init</code><br>4) 启动容器nginx2,设定地址为172.18.0.12<br><code>docker run -d --privileged --net cluster --ip 172.18.0.12 --name nginx2 nginx-keep /usr/sbin/init</code></p><p>5) 配置容器nginx1, nginx2的web服务,编辑首页内容为“nginx1”,“nginx2”, 在宿主机访问<br> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.12</span></span><br><span class="line">nginx2</span><br><span class="line"></span><br><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.11</span></span><br><span class="line">nginx1</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="在nginx1和nginx2两个容器配置keepalived"><a href="#在nginx1和nginx2两个容器配置keepalived" class="headerlink" title="在nginx1和nginx2两个容器配置keepalived"></a>在nginx1和nginx2两个容器配置keepalived</h3><p>1) 在nginx1编辑 /etc/keepalived/keepalived.conf ,启动keepalived服务<br> <figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br></pre></td><td class="code"><pre><span class="line"> ! Configuration File for keepalived</span><br><span class="line"></span><br><span class="line">global_defs {</span><br><span class="line"> notification_email {</span><br><span class="line"> [email protected]</span><br><span class="line"> [email protected]</span><br><span class="line"> [email protected]</span><br><span class="line"> }</span><br><span class="line"> notification_email_from [email protected]</span><br><span class="line"> smtp_server 192.168.200.1</span><br><span class="line"> smtp_connect_timeout 30</span><br><span class="line"> router_id nginx1</span><br><span class="line"> vrrp_skip_check_adv_addr</span><br><span class="line"> #vrrp_strict</span><br><span class="line"> vrrp_garp_interval 0</span><br><span class="line"> vrrp_gna_interval 0</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">vrrp_instance VI_1 {</span><br><span class="line"> state MASTER</span><br><span class="line"> interface eth0</span><br><span class="line"> virtual_router_id 51</span><br><span class="line"> priority 100</span><br><span class="line"> advert_int 1</span><br><span class="line"> authentication {</span><br><span class="line"> auth_type PASS</span><br><span class="line"> auth_pass 1111</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> virtual_ipaddress {</span><br><span class="line"> 172.18.0.10</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 在nginx2编辑 /etc/keepalived/keepalived.conf ,启动keepalived服务<br> <figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line"> ! Configuration File for keepalived</span><br><span class="line"></span><br><span class="line">global_defs {</span><br><span class="line"> notification_email {</span><br><span class="line"> [email protected]</span><br><span class="line"> [email protected]</span><br><span class="line"> [email protected]</span><br><span class="line"> }</span><br><span class="line"> notification_email_from [email protected]</span><br><span class="line"> smtp_server 192.168.200.1</span><br><span class="line"> smtp_connect_timeout 30</span><br><span class="line"> router_id nginx2</span><br><span class="line"> vrrp_skip_check_adv_addr</span><br><span class="line"> #vrrp_strict</span><br><span class="line"> vrrp_garp_interval 0</span><br><span class="line"> vrrp_gna_interval 0</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">vrrp_instance VI_1 {</span><br><span class="line"> state BACKUP</span><br><span class="line"> interface eth0</span><br><span class="line"> virtual_router_id 51</span><br><span class="line"> priority 90</span><br><span class="line"> advert_int 1</span><br><span class="line"> authentication {</span><br><span class="line"> auth_type PASS</span><br><span class="line"> auth_pass 1111</span><br><span class="line"> }</span><br><span class="line"> virtual_ipaddress {</span><br><span class="line"> 172.18.0.10</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p><strong>注意:</strong><br>在 <code>/etc/keepalived/keepalived.conf</code>配置文件中将<code>#vrrp_strict</code>注释掉, 否则会出现ping VIP不通的现象<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">vrrp_strict</span><br><span class="line">#严格遵守VRRP协议。 这将禁止:</span><br><span class="line"></span><br><span class="line">0 VIPs</span><br><span class="line">unicast peers (单播对等体)</span><br><span class="line">IPv6 addresses in VRRP version 2(VRRP版本2中的IPv6地址)</span><br></pre></td></tr></table></figure></p><blockquote><p>即vrrp_strict:严格遵守VRRP协议。下列情况将会阻止启动Keepalived:1. 没有VIP地址。2. 单播邻居。3. 在VRRP版本2中有IPv6地址。</p></blockquote><p>3) 在宿主机使用浏览器访问虚拟地址<br><code>curl http:// 172.18.0.10</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx1</span><br></pre></td></tr></table></figure></p><p>4) 在nginx1上当掉网卡<br><code>ifconfig eth0 down</code></p><p>5) 在宿主机使用浏览器访问虚拟地址<br><code>curl http:// 172.18.0.10</code><br> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx2</span><br></pre></td></tr></table></figure></p><h2 id="配置keepalived-支持nginx高可用"><a href="#配置keepalived-支持nginx高可用" class="headerlink" title="配置keepalived 支持nginx高可用"></a>配置keepalived 支持nginx高可用</h2><h3 id="编写-Nginx-状态检测脚本"><a href="#编写-Nginx-状态检测脚本" class="headerlink" title="编写 Nginx 状态检测脚本"></a>编写 Nginx 状态检测脚本</h3><p>1) 在nginx1上编写 Nginx 状态检测脚本<code>/etc/keepalived/nginx_check.sh</code></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#!/bin/bash</span></span><br><span class="line"><span class="keyword">if</span> [ `ps -C nginx --no-header |wc -l` -eq 0 ]</span><br><span class="line"> <span class="keyword">then</span></span><br><span class="line"> systemctl start nginx </span><br><span class="line"> sleep 2</span><br><span class="line"> <span class="keyword">if</span> [ `ps -C nginx --no-header |wc -l` -eq 0 ]</span><br><span class="line"> <span class="keyword">then</span></span><br><span class="line"> <span class="built_in">kill</span> keepalived</span><br><span class="line"> <span class="keyword">fi</span></span><br><span class="line"><span class="keyword">fi</span></span><br></pre></td></tr></table></figure><blockquote><p>脚本说明: 当检测nginx没有进程时选择启动nginx, 如果启动失败则关闭keepalived<br>2) 赋予/etc/keepalived/nginx_check.sh执行权限<br> <code>chmod a+x /etc/keepalived/nginx_check.sh</code></p></blockquote><h3 id="配置keepalived-支持nginx高可用-1"><a href="#配置keepalived-支持nginx高可用-1" class="headerlink" title="配置keepalived 支持nginx高可用"></a>配置keepalived 支持nginx高可用</h3><p>1) 在nginx1上编辑/etc/keepalived/keepalived.conf<br> <figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br></pre></td><td class="code"><pre><span class="line">! Configuration File for keepalived</span><br><span class="line"></span><br><span class="line">global_defs {</span><br><span class="line"> notification_email {</span><br><span class="line"> [email protected]</span><br><span class="line"> [email protected]</span><br><span class="line"> [email protected]</span><br><span class="line"> }</span><br><span class="line"> notification_email_from [email protected]</span><br><span class="line"> smtp_server 192.168.200.1</span><br><span class="line"> smtp_connect_timeout 30</span><br><span class="line"> router_id nginx1</span><br><span class="line"> vrrp_skip_check_adv_addr</span><br><span class="line"> #vrrp_strict</span><br><span class="line"> vrrp_garp_interval 0</span><br><span class="line"> vrrp_gna_interval 0</span><br><span class="line">}</span><br><span class="line">vrrp_script chk_nginx{</span><br><span class="line"> script "/etc/keepalived/nginx_check.sh"</span><br><span class="line"> interval 2</span><br><span class="line"> weight -20</span><br><span class="line">}</span><br><span class="line">vrrp_instance VI_1 {</span><br><span class="line"> state MASTER</span><br><span class="line"> interface eth0</span><br><span class="line"> virtual_router_id 51</span><br><span class="line"> priority 100</span><br><span class="line"> advert_int 1</span><br><span class="line"> authentication {</span><br><span class="line"> auth_type PASS</span><br><span class="line"> auth_pass 1111</span><br><span class="line"> }</span><br><span class="line"> track_script{</span><br><span class="line"> chk_nginx</span><br><span class="line">}</span><br><span class="line"> virtual_ipaddress {</span><br><span class="line"> 172.18.0.10</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 重新启动keepalived,在主机使用浏览器访问虚拟地址<br> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx1</span><br></pre></td></tr></table></figure></p><p>3) 在nginx1停止nginx服务,在主机使用浏览器访问虚拟地址<br> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx2</span><br></pre></td></tr></table></figure></p><blockquote><p>原因: weight -20 每当运行一次vrrp_script chk_nginx脚本, 本机的权重减20</p></blockquote>]]></content>
<summary type="html">
<h2 id="网站架构"><a href="#网站架构" class="headerlink" title="网站架构"></a>网站架构</h2><p>基于Docker容器里构建高并发网站</p>
<p>拓扑图:<br><img src="https://img-blog.c
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>解决Kubernetes1.15.1 部署Flannel网络后pod及容器无法跨主机互通问题</title>
<link href="https://plutoacharon.github.io/2020/04/21/%E8%A7%A3%E5%86%B3Kubernetes1-15-1-%E9%83%A8%E7%BD%B2Flannel%E7%BD%91%E7%BB%9C%E5%90%8Epod%E5%8F%8A%E5%AE%B9%E5%99%A8%E6%97%A0%E6%B3%95%E8%B7%A8%E4%B8%BB%E6%9C%BA%E4%BA%92%E9%80%9A%E9%97%AE%E9%A2%98/"/>
<id>https://plutoacharon.github.io/2020/04/21/解决Kubernetes1-15-1-部署Flannel网络后pod及容器无法跨主机互通问题/</id>
<published>2020-04-21T09:50:39.000Z</published>
<updated>2020-04-21T09:50:53.251Z</updated>
<content type="html"><![CDATA[<p>记一次部署Flannel网络后网络不通问题, 查询网上资料无果</p><p>自己记录一下解决过程</p><h2 id="现象"><a href="#现象" class="headerlink" title="现象"></a>现象</h2><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-54j5c 1/1 Running 0 5h44m</span><br><span class="line">coredns-5c98db65d4-jmvbf 1/1 Running 0 5h45m</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 10d</span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d</span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 10d</span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d</span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 10d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 10d</span><br><span class="line">kubernetes-dashboard-7d75c474bb-hg7zt 1/1 Running 0 71m</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get node</span></span><br><span class="line">NAME STATUS ROLES AGE VERSION</span><br><span class="line">k8s-master01 Ready master 10d v1.15.1</span><br><span class="line">k8s-node01 Ready <none> 9d v1.15.1</span><br><span class="line">k8s-node02 Ready <none> 9d v1.15.1</span><br></pre></td></tr></table></figure><p>由以上可以看到我部署Flannel以后, master检测到node节点 并且flannel容器显示<code>Running</code>正常</p><h2 id="排查问题"><a href="#排查问题" class="headerlink" title="排查问题"></a>排查问题</h2><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># ip a</span></span><br><span class="line">1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1</span><br><span class="line"> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00</span><br><span class="line"> inet 127.0.0.1/8 scope host lo</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000</span><br><span class="line"> link/ether 00:0c:29:2c:d1:c2 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 192.168.0.50/24 brd 192.168.0.255 scope global noprefixroute ens33</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line"> inet6 fe80::20c:29ff:fe2c:d1c2/64 scope link </span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default </span><br><span class="line"> link/ether 02:42:1f:d8:95:21 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000</span><br><span class="line"> link/ether ee:02:3a:98:e3:e3 brd ff:ff:ff:ff:ff:ff</span><br><span class="line">5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default </span><br><span class="line"> link/ether d2:c2:72:50:95:31 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line"> inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line"> inet 10.110.65.174/32 brd 10.110.65.174 scope global kube-ipvs0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">6: flannel.1: <BROADCAST,MULTICAST> mtu 1450 qdisc noqueue state DOWN group default </span><br><span class="line"> link/ether 7e:35:6d:f9:50:c3 brd ff:ff:ff:ff:ff:ff</span><br><span class="line">7: cni0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000</span><br><span class="line"> link/ether 8a:1b:ab:4c:83:c9 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 10.244.0.1/24 scope global cni0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br></pre></td></tr></table></figure><p><code>6: flannel.1</code>网络没有ip信息, 并且显示<code>DOWN</code>的状态</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># ping 10.244.2.6</span></span><br><span class="line">PING 10.244.2.6 (10.244.2.6) 56(84) bytes of data.</span><br><span class="line">^C</span><br><span class="line">--- 10.244.2.6 ping statistics ---</span><br><span class="line">13 packets transmitted, 0 received, 100% packet loss, time 12004ms</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># ping 10.244.2.6</span></span><br><span class="line">PING 10.244.2.6 (10.244.2.6) 56(84) bytes of data.</span><br><span class="line">^C</span><br><span class="line">--- 10.244.2.6 ping statistics ---</span><br><span class="line">36 packets transmitted, 0 received, 100% packet loss, time 35012ms</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node02 ~]<span class="comment"># ping 10.244.2.6</span></span><br><span class="line">PING 10.244.2.6 (10.244.2.6) 56(84) bytes of data.</span><br><span class="line">64 bytes from 10.244.2.6: icmp_seq=1 ttl=64 time=0.131 ms</span><br><span class="line">64 bytes from 10.244.2.6: icmp_seq=2 ttl=64 time=0.042 ms</span><br><span class="line">^C</span><br><span class="line">--- 10.244.2.6 ping statistics ---</span><br><span class="line">2 packets transmitted, 2 received, 0% packet loss, time 999ms</span><br></pre></td></tr></table></figure><p>一个存在与node2的pod只有node2能ping 通, 其他节点全部超时</p><h2 id="解决"><a href="#解决" class="headerlink" title="解决"></a>解决</h2><h3 id="方法1"><a href="#方法1" class="headerlink" title="方法1"></a>方法1</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># sudo iptables -P INPUT ACCEPT</span></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># sudo iptables -P OUTPUT ACCEPT</span></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># sudo iptables -P FORWARD ACCEPT</span></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># iptables -L -n</span></span><br><span class="line">Chain INPUT (policy ACCEPT)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0 </span><br><span class="line"></span><br><span class="line">Chain FORWARD (policy ACCEPT)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">KUBE-FORWARD all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */</span><br><span class="line">ACCEPT all -- 10.244.0.0/16 0.0.0.0/0 </span><br><span class="line">ACCEPT all -- 0.0.0.0/0 10.244.0.0/16 </span><br><span class="line"></span><br><span class="line">Chain OUTPUT (policy ACCEPT)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0 </span><br><span class="line"></span><br><span class="line">Chain DOCKER (0 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line"></span><br><span class="line">Chain DOCKER-ISOLATION-STAGE-1 (0 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line"></span><br><span class="line">Chain DOCKER-ISOLATION-STAGE-2 (0 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line"></span><br><span class="line">Chain DOCKER-USER (0 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line"></span><br><span class="line">Chain KUBE-FIREWALL (2 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">DROP all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes firewall <span class="keyword">for</span> dropping marked packets */ mark match 0x8000/0x8000</span><br><span class="line"></span><br><span class="line">Chain KUBE-FORWARD (1 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */ mark match 0x4000/0x4000</span><br><span class="line">ACCEPT all -- 10.244.0.0/16 0.0.0.0/0 /* kubernetes forwarding conntrack pod <span class="built_in">source</span> rule */ ctstate RELATED,ESTABLISHED</span><br><span class="line">ACCEPT all -- 0.0.0.0/0 10.244.0.0/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED</span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># service iptables save</span></span><br><span class="line">iptables: Saving firewall rules to /etc/sysconfig/iptables:[ 确定 ]</span><br></pre></td></tr></table></figure><p>清理<code>IPTABLES</code>规则, 保存<br>问题没有解决 使用方法二</p><h3 id="方法2"><a href="#方法2" class="headerlink" title="方法2"></a>方法2</h3><p>卸载flannel网络<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">#第一步,在master节点删除flannel</span></span><br><span class="line">kubectl delete -f kube-flannel.yml</span><br><span class="line"></span><br><span class="line"><span class="comment">#第二步,在node节点清理flannel网络留下的文件</span></span><br><span class="line">ifconfig cni0 down</span><br><span class="line">ip link delete cni0</span><br><span class="line">ifconfig flannel.1 down</span><br><span class="line">ip link delete flannel.1</span><br><span class="line">rm -rf /var/lib/cni/</span><br><span class="line">rm -f /etc/cni/net.d/*</span><br></pre></td></tr></table></figure></p><p>重新部署Flannel网络<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl create -f kube-flannel.yml </span></span><br><span class="line">podsecuritypolicy.policy/psp.flannel.unprivileged created</span><br><span class="line">clusterrole.rbac.authorization.k8s.io/flannel created</span><br><span class="line">clusterrolebinding.rbac.authorization.k8s.io/flannel created</span><br><span class="line">serviceaccount/flannel created</span><br><span class="line">configmap/kube-flannel-cfg created</span><br><span class="line">daemonset.apps/kube-flannel-ds-amd64 created</span><br><span class="line">daemonset.apps/kube-flannel-ds-arm64 created</span><br><span class="line">daemonset.apps/kube-flannel-ds-arm created</span><br><span class="line">daemonset.apps/kube-flannel-ds-ppc64le created</span><br><span class="line">daemonset.apps/kube-flannel-ds-s390x created</span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-8bpdd 1/1 Running 0 17s</span><br><span class="line">coredns-5c98db65d4-knfcj 1/1 Running 0 43s</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 10d</span><br><span class="line">kube-flannel-ds-amd64-56hsf 1/1 Running 0 25m</span><br><span class="line">kube-flannel-ds-amd64-56t49 1/1 Running 0 25m</span><br><span class="line">kube-flannel-ds-amd64-qz42z 1/1 Running 0 25m</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 10d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 10d</span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 10d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 10d</span><br><span class="line">kubernetes-dashboard-7d75c474bb-4r7hc 1/1 Running 0 23m</span><br><span class="line">[root@k8s-master01 flannel]<span class="comment">#</span></span><br></pre></td></tr></table></figure><p>重新部署Flannel网络后 容器需要重置, 删除就可以 k8s会重新自动添加<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># ping 10.244.1.2</span></span><br><span class="line">PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=1.04 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.498 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=3 ttl=63 time=0.575 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=4 ttl=63 time=0.578 ms</span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># ping 10.244.1.2</span></span><br><span class="line">PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=1 ttl=64 time=0.065 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=2 ttl=64 time=0.038 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=3 ttl=64 time=0.135 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=4 ttl=64 time=0.058 ms</span><br><span class="line">^C</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node02 ~]<span class="comment"># ping 10.244.1.2</span></span><br><span class="line">PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=0.760 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.510 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=3 ttl=63 time=0.442 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=4 ttl=63 time=0.525 ms</span><br><span class="line">^C</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># ifconfig </span></span><br><span class="line">docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500</span><br><span class="line"> inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255</span><br><span class="line"> ether 02:42:1f:d8:95:21 txqueuelen 0 (Ethernet)</span><br><span class="line"> RX packets 0 bytes 0 (0.0 B)</span><br><span class="line"> RX errors 0 dropped 0 overruns 0 frame 0</span><br><span class="line"> TX packets 0 bytes 0 (0.0 B)</span><br><span class="line"> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</span><br><span class="line"></span><br><span class="line">ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500</span><br><span class="line"> inet 192.168.0.50 netmask 255.255.255.0 broadcast 192.168.0.255</span><br><span class="line"> inet6 fe80::20c:29ff:fe2c:d1c2 prefixlen 64 scopeid 0x20<link></span><br><span class="line"> ether 00:0c:29:2c:d1:c2 txqueuelen 1000 (Ethernet)</span><br><span class="line"> RX packets 737868 bytes 493443231 (470.5 MiB)</span><br><span class="line"> RX errors 0 dropped 0 overruns 0 frame 0</span><br><span class="line"> TX packets 1656623 bytes 3510224771 (3.2 GiB)</span><br><span class="line"> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</span><br><span class="line"></span><br><span class="line">flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450</span><br><span class="line"> inet 10.244.0.0 netmask 255.255.255.255 broadcast 0.0.0.0</span><br><span class="line"> ether aa:50:d6:f9:09:e5 txqueuelen 0 (Ethernet)</span><br><span class="line"> RX packets 14 bytes 1728 (1.6 KiB)</span><br><span class="line"> RX errors 0 dropped 0 overruns 0 frame 0</span><br><span class="line"> TX packets 67 bytes 5973 (5.8 KiB)</span><br><span class="line"> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</span><br><span class="line"></span><br><span class="line">lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536</span><br><span class="line"> inet 127.0.0.1 netmask 255.0.0.0</span><br><span class="line"> loop txqueuelen 1 (Local Loopback)</span><br><span class="line"> RX packets 6944750 bytes 1242999056 (1.1 GiB)</span><br><span class="line"> RX errors 0 dropped 0 overruns 0 frame 0</span><br><span class="line"> TX packets 6944750 bytes 1242999056 (1.1 GiB)</span><br><span class="line"> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</span><br><span class="line"></span><br><span class="line">[root@k8s-master01 flannel]<span class="comment">#</span></span><br></pre></td></tr></table></figure><p>flannel网络显示正常, 容器之间可以跨主机互通!</p>]]></content>
<summary type="html">
<p>记一次部署Flannel网络后网络不通问题, 查询网上资料无果</p>
<p>自己记录一下解决过程</p>
<h2 id="现象"><a href="#现象" class="headerlink" title="现象"></a>现象</h2><figure class="h
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(五)----Kubernetes1.15.1安装 Dashboard 的WEB UI插件</title>
<link href="https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E4%BA%94-Kubernetes1-15-1%E5%AE%89%E8%A3%85-Dashboard-%E7%9A%84WEB-UI%E6%8F%92%E4%BB%B6/"/>
<id>https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-入门到实践-五-Kubernetes1-15-1安装-Dashboard-的WEB-UI插件/</id>
<published>2020-04-21T09:50:00.000Z</published>
<updated>2020-04-21T09:50:16.811Z</updated>
<content type="html"><![CDATA[<p>上节讲解了通过kubeadm 搭建集群kubeadm1.15.1环境,现在的集群已经搭建成功了,今天给大家展示Kubernetes Dashboard 插件的安装</p><h2 id="下载官方的yaml文件"><a href="#下载官方的yaml文件" class="headerlink" title="下载官方的yaml文件"></a>下载官方的yaml文件</h2><p>进入官网:<code>https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml</span><br></pre></td></tr></table></figure></p><p> 修改:<br> type,指定端口类型为 NodePort,这样外界可以通过地址 nodeIP:nodePort 访问 dashboard<br> <img src="https://img-blog.csdnimg.cn/20200413184310625.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>如果网络不好,不能直接下载,需要手动创建kubernetes-dashboard.yaml文件<br><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># Copyright 2017 The Kubernetes Authors.</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># Licensed under the Apache License, Version 2.0 (the "License");</span></span><br><span class="line"><span class="comment"># you may not use this file except in compliance with the License.</span></span><br><span class="line"><span class="comment"># You may obtain a copy of the License at</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># http://www.apache.org/licenses/LICENSE-2.0</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># Unless required by applicable law or agreed to in writing, software</span></span><br><span class="line"><span class="comment"># distributed under the License is distributed on an "AS IS" BASIS,</span></span><br><span class="line"><span class="comment"># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.</span></span><br><span class="line"><span class="comment"># See the License for the specific language governing permissions and</span></span><br><span class="line"><span class="comment"># limitations under the License.</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># ------------------- Dashboard Secret ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Secret</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard-certs</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">type:</span> <span class="string">Opaque</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="comment"># ------------------- Dashboard Service Account ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ServiceAccount</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="comment"># ------------------- Dashboard Role & Role Binding ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Role</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">rbac.authorization.k8s.io/v1</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard-minimal</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">rules:</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["secrets"]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["create"]</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to create 'kubernetes-dashboard-settings' config map.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["configmaps"]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["create"]</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to get, update and delete Dashboard exclusive secrets.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["secrets"]</span></span><br><span class="line"><span class="attr"> resourceNames:</span> <span class="string">["kubernetes-dashboard-key-holder",</span> <span class="string">"kubernetes-dashboard-certs"</span><span class="string">]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["get",</span> <span class="string">"update"</span><span class="string">,</span> <span class="string">"delete"</span><span class="string">]</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["configmaps"]</span></span><br><span class="line"><span class="attr"> resourceNames:</span> <span class="string">["kubernetes-dashboard-settings"]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["get",</span> <span class="string">"update"</span><span class="string">]</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to get metrics from heapster.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["services"]</span></span><br><span class="line"><span class="attr"> resourceNames:</span> <span class="string">["heapster"]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["proxy"]</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["services/proxy"]</span></span><br><span class="line"><span class="attr"> resourceNames:</span> <span class="string">["heapster",</span> <span class="string">"http:heapster:"</span><span class="string">,</span> <span class="string">"https:heapster:"</span><span class="string">]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["get"]</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">rbac.authorization.k8s.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">RoleBinding</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard-minimal</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">roleRef:</span></span><br><span class="line"><span class="attr"> apiGroup:</span> <span class="string">rbac.authorization.k8s.io</span></span><br><span class="line"><span class="attr"> kind:</span> <span class="string">Role</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard-minimal</span></span><br><span class="line"><span class="attr">subjects:</span></span><br><span class="line"><span class="attr">- kind:</span> <span class="string">ServiceAccount</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="comment"># ------------------- Dashboard Deployment ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Deployment</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">apps/v1</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> replicas:</span> <span class="number">1</span></span><br><span class="line"><span class="attr"> revisionHistoryLimit:</span> <span class="number">10</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> matchLabels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> template:</span></span><br><span class="line"><span class="attr"> metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> spec:</span></span><br><span class="line"><span class="attr"> containers:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> image:</span> <span class="string">k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - containerPort:</span> <span class="number">8443</span></span><br><span class="line"><span class="attr"> protocol:</span> <span class="string">TCP</span></span><br><span class="line"><span class="attr"> args:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="bullet">--auto-generate-certificates</span></span><br><span class="line"> <span class="comment"># Uncomment the following line to manually specify Kubernetes API server Host</span></span><br><span class="line"> <span class="comment"># If not specified, Dashboard will attempt to auto discover the API server and connect</span></span><br><span class="line"> <span class="comment"># to it. Uncomment only if the default does not work.</span></span><br><span class="line"> <span class="comment"># - --apiserver-host=http://my-address:port</span></span><br><span class="line"><span class="attr"> volumeMounts:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">kubernetes-dashboard-certs</span></span><br><span class="line"><span class="attr"> mountPath:</span> <span class="string">/certs</span></span><br><span class="line"> <span class="comment"># Create on-disk volume to store exec logs</span></span><br><span class="line"><span class="attr"> - mountPath:</span> <span class="string">/tmp</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">tmp-volume</span></span><br><span class="line"><span class="attr"> livenessProbe:</span></span><br><span class="line"><span class="attr"> httpGet:</span></span><br><span class="line"><span class="attr"> scheme:</span> <span class="string">HTTPS</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">/</span></span><br><span class="line"><span class="attr"> port:</span> <span class="number">8443</span></span><br><span class="line"><span class="attr"> initialDelaySeconds:</span> <span class="number">30</span></span><br><span class="line"><span class="attr"> timeoutSeconds:</span> <span class="number">30</span></span><br><span class="line"><span class="attr"> volumes:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">kubernetes-dashboard-certs</span></span><br><span class="line"><span class="attr"> secret:</span></span><br><span class="line"><span class="attr"> secretName:</span> <span class="string">kubernetes-dashboard-certs</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">tmp-volume</span></span><br><span class="line"><span class="attr"> emptyDir:</span> <span class="string">{}</span></span><br><span class="line"><span class="attr"> serviceAccountName:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"> <span class="comment"># Comment the following tolerations if Dashboard must not be deployed on master</span></span><br><span class="line"><span class="attr"> tolerations:</span></span><br><span class="line"><span class="attr"> - key:</span> <span class="string">node-role.kubernetes.io/master</span></span><br><span class="line"><span class="attr"> effect:</span> <span class="string">NoSchedule</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="comment"># ------------------- Dashboard Service ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> type:</span> <span class="string">NodePort</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - port:</span> <span class="number">443</span></span><br><span class="line"><span class="attr"> targetPort:</span> <span class="number">8443</span></span><br><span class="line"><span class="attr"> nodePort:</span> <span class="number">32000</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br></pre></td></tr></table></figure></p><h2 id="拉取镜像"><a href="#拉取镜像" class="headerlink" title="拉取镜像"></a>拉取镜像</h2><p>为了避免访问外国网站,这里直接通过国内的阿里镜像拉取,通过tag更改名称<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">docker pull registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1</span><br><span class="line">docker tag registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1</span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># docker pull registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1</span></span><br><span class="line">v1.10.1: Pulling from rsqlh/kubernetes-dashboard</span><br><span class="line">9518d8afb433: Pull complete </span><br><span class="line">Digest: sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747</span><br><span class="line">Status: Downloaded newer image <span class="keyword">for</span> registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1</span><br><span class="line">registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1</span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># docker tag registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1</span></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># docker images</span></span><br><span class="line">REPOSITORY TAG IMAGE ID CREATED SIZE</span><br><span class="line">registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard v1.10.1 f9aed6605b81 16 months ago 122MB</span><br><span class="line">k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.1 f9aed6605b81 16 months ago 122MB</span><br><span class="line">[root@k8s-node01 ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure><h2 id="部署yaml文件"><a href="#部署yaml文件" class="headerlink" title="部署yaml文件"></a>部署yaml文件</h2><p>通过<code>kubectl create -f</code>命令部署<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ui]<span class="comment"># kubectl create -f kubernetes-dashboard.yaml </span></span><br><span class="line">secret/kubernetes-dashboard-certs created</span><br><span class="line">serviceaccount/kubernetes-dashboard created</span><br><span class="line">role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created</span><br><span class="line">rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created</span><br><span class="line">deployment.apps/kubernetes-dashboard created</span><br><span class="line">service/kubernetes-dashboard created</span><br><span class="line">[root@k8s-master01 ui]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-54j5c 1/1 Running 0 3h53m</span><br><span class="line">coredns-5c98db65d4-jmvbf 1/1 Running 0 3h53m</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 10d</span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d</span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 9d</span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d</span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 10d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 10d</span><br><span class="line">kubernetes-dashboard-7d75c474bb-zj9c6 1/1 Running 0 18s</span><br><span class="line">[root@k8s-master01 ui]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>可以看到<code>kubernetes-dashboard</code>处于Running状态<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ui]<span class="comment"># kubectl get svc -n kube-system</span></span><br><span class="line">NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE</span><br><span class="line">kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 10d</span><br><span class="line">kubernetes-dashboard NodePort 10.110.65.174 <none> 443:32000/TCP 11m</span><br><span class="line">[root@k8s-master01 ui]<span class="comment"># kubectl get pod -n kube-system -o wide</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES</span><br><span class="line">coredns-5c98db65d4-54j5c 1/1 Running 0 4h5m 10.244.2.5 k8s-node02 <none> <none></span><br><span class="line">coredns-5c98db65d4-jmvbf 1/1 Running 0 4h6m 10.244.1.5 k8s-node01 <none> <none></span><br><span class="line">etcd-k8s-master01 1/1 Running 2 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d 192.168.0.52 k8s-node02 <none> <none></span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 9d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d 192.168.0.51 k8s-node01 <none> <none></span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d 192.168.0.51 k8s-node01 <none> <none></span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d 192.168.0.52 k8s-node02 <none> <none></span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kubernetes-dashboard-7d75c474bb-zj9c6 1/1 Running 0 13m 10.244.1.6 k8s-node02 <none> <none></span><br><span class="line">[root@k8s-master01 ui]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>可以看到<code>kubernetes-dashboard</code>暴露在node2上的32000端口</p><h2 id="访问ui页面"><a href="#访问ui页面" class="headerlink" title="访问ui页面"></a>访问ui页面</h2><p><code>https://192.168.0.52:32000/</code> 这是我node2的ip地址<br>建议使用<code>firefox</code>访问, <code>Chrome</code>访问会禁止不安全证书访问<br><img src="https://img-blog.csdnimg.cn/20200413191431640.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200413193104912.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="Token令牌登录"><a href="#Token令牌登录" class="headerlink" title="Token令牌登录"></a>Token令牌登录</h3><ol><li>创建serviceaccount<br><code>kubectl create serviceaccount dashboard-admin -n kube-system</code><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl create serviceaccount dashboard-admin -n kube-system</span></span><br><span class="line">serviceaccount/dashboard-admin created</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get sa -n kube-system</span></span><br><span class="line">NAME SECRETS AGE</span><br><span class="line">attachdetach-controller 1 10d</span><br><span class="line">bootstrap-signer 1 10d</span><br><span class="line">certificate-controller 1 10d</span><br><span class="line">clusterrole-aggregation-controller 1 10d</span><br><span class="line">coredns 1 10d</span><br><span class="line">cronjob-controller 1 10d</span><br><span class="line">daemon-set-controller 1 10d</span><br><span class="line">dashboard-admin 1 27s</span><br><span class="line">default 1 10d</span><br><span class="line">deployment-controller 1 10d</span><br><span class="line">disruption-controller 1 10d</span><br><span class="line">endpoint-controller 1 10d</span><br><span class="line">expand-controller 1 10d</span><br><span class="line">flannel 1 10d</span><br><span class="line">generic-garbage-collector 1 10d</span><br><span class="line">horizontal-pod-autoscaler 1 10d</span><br><span class="line">job-controller 1 10d</span><br><span class="line">kube-proxy 1 10d</span><br><span class="line">kubernetes-dashboard 1 48m</span><br><span class="line">namespace-controller 1 10d</span><br><span class="line">node-controller 1 10d</span><br><span class="line">persistent-volume-binder 1 10d</span><br><span class="line">pod-garbage-collector 1 10d</span><br><span class="line">pv-protection-controller 1 10d</span><br><span class="line">pvc-protection-controller 1 10d</span><br><span class="line">replicaset-controller 1 10d</span><br><span class="line">replication-controller 1 10d</span><br><span class="line">resourcequota-controller 1 10d</span><br><span class="line">service-account-controller 1 10d</span><br><span class="line">service-controller 1 10d</span><br><span class="line">statefulset-controller 1 10d</span><br><span class="line">token-cleaner 1 10d</span><br><span class="line">ttl-controller 1 10d</span><br><span class="line">[root@k8s-master01 ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></li></ol><p><code>dashboard-admin 1 27s</code>创建成功</p><ol start="2"><li>把serviceaccount绑定在clusteradmin,授权serviceaccount用户具有整个集群的访问管理权限<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin</span><br></pre></td></tr></table></figure></li></ol><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin</span></span><br><span class="line">clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get secret -n kube-system</span></span><br><span class="line">NAME TYPE DATA AGE</span><br><span class="line">attachdetach-controller-token-j5vtc kubernetes.io/service-account-token 3 10d</span><br><span class="line">bootstrap-signer-token-prjr2 kubernetes.io/service-account-token 3 10d</span><br><span class="line">certificate-controller-token-f8rjx kubernetes.io/service-account-token 3 10d</span><br><span class="line">clusterrole-aggregation-controller-token-l6lqh kubernetes.io/service-account-token 3 10d</span><br><span class="line">coredns-token-p5z2z kubernetes.io/service-account-token 3 10d</span><br><span class="line">cronjob-controller-token-jsp8k kubernetes.io/service-account-token 3 10d</span><br><span class="line">daemon-set-controller-token-4fh89 kubernetes.io/service-account-token 3 10d</span><br><span class="line">dashboard-admin-token-dl8pf kubernetes.io/service-account-token 3 8m55s</span><br><span class="line">default-token-22jpc kubernetes.io/service-account-token 3 10d</span><br><span class="line">deployment-controller-token-jc4xc kubernetes.io/service-account-token 3 10d</span><br><span class="line">disruption-controller-token-p85cv kubernetes.io/service-account-token 3 10d</span><br><span class="line">endpoint-controller-token-dhk4f kubernetes.io/service-account-token 3 10d</span><br><span class="line">expand-controller-token-lbsrj kubernetes.io/service-account-token 3 10d</span><br><span class="line">flannel-token-qjgks kubernetes.io/service-account-token 3 10d</span><br><span class="line">generic-garbage-collector-token-6fwmg kubernetes.io/service-account-token 3 10d</span><br><span class="line">horizontal-pod-autoscaler-token-vl8dh kubernetes.io/service-account-token 3 10d</span><br><span class="line">job-controller-token-c2sfm kubernetes.io/service-account-token 3 10d</span><br><span class="line">kube-proxy-token-qg465 kubernetes.io/service-account-token 3 10d</span><br><span class="line">kubernetes-dashboard-certs NodePort 0 56m</span><br><span class="line">kubernetes-dashboard-key-holder Opaque 2 56m</span><br><span class="line">kubernetes-dashboard-token-hpg2q kubernetes.io/service-account-token 3 56m</span><br><span class="line">namespace-controller-token-vvbxk kubernetes.io/service-account-token 3 10d</span><br><span class="line">node-controller-token-5hmv6 kubernetes.io/service-account-token 3 10d</span><br><span class="line">persistent-volume-binder-token-6vrk6 kubernetes.io/service-account-token 3 10d</span><br><span class="line">pod-garbage-collector-token-f8bvl kubernetes.io/service-account-token 3 10d</span><br><span class="line">pv-protection-controller-token-pp8bh kubernetes.io/service-account-token 3 10d</span><br><span class="line">pvc-protection-controller-token-jf6lj kubernetes.io/service-account-token 3 10d</span><br><span class="line">replicaset-controller-token-twbw8 kubernetes.io/service-account-token 3 10d</span><br><span class="line">replication-controller-token-lr45r kubernetes.io/service-account-token 3 10d</span><br><span class="line">resourcequota-controller-token-qlgbb kubernetes.io/service-account-token 3 10d</span><br><span class="line">service-account-controller-token-bsqlq kubernetes.io/service-account-token 3 10d</span><br><span class="line">service-controller-token-g6lvs kubernetes.io/service-account-token 3 10d</span><br><span class="line">statefulset-controller-token-h6wrx kubernetes.io/service-account-token 3 10d</span><br><span class="line">token-cleaner-token-wvwbn kubernetes.io/service-account-token 3 10d</span><br><span class="line">ttl-controller-token-z2fm7 kubernetes.io/service-account-token 3 10d</span><br></pre></td></tr></table></figure><ol start="3"><li>获取serviceaccount的secret信息,可得到token(令牌)的信息</li></ol><p><code>kubectl get secret -n kube-system</code></p><p>dashboard-admin-token-slfcr 通过上边命令获取到的<br><code>kubectl describe secret dashboard-admin-token-slfcr -n kube-system</code><br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br></pre></td><td class="code"><pre><span class="line">```bash</span><br><span class="line">[root@k8s-master01 ~]# kubectl get secret -n kube-system</span><br><span class="line">NAME TYPE DATA AGE</span><br><span class="line">attachdetach-controller-token-j5vtc kubernetes.io/service-account-token 3 10d</span><br><span class="line">bootstrap-signer-token-prjr2 kubernetes.io/service-account-token 3 10d</span><br><span class="line">certificate-controller-token-f8rjx kubernetes.io/service-account-token 3 10d</span><br><span class="line">clusterrole-aggregation-controller-token-l6lqh kubernetes.io/service-account-token 3 10d</span><br><span class="line">coredns-token-p5z2z kubernetes.io/service-account-token 3 10d</span><br><span class="line">cronjob-controller-token-jsp8k kubernetes.io/service-account-token 3 10d</span><br><span class="line">daemon-set-controller-token-4fh89 kubernetes.io/service-account-token 3 10d</span><br><span class="line">dashboard-admin-token-dl8pf kubernetes.io/service-account-token 3 9m2s</span><br><span class="line">default-token-22jpc kubernetes.io/service-account-token 3 10d</span><br><span class="line">deployment-controller-token-jc4xc kubernetes.io/service-account-token 3 10d</span><br><span class="line">disruption-controller-token-p85cv kubernetes.io/service-account-token 3 10d</span><br><span class="line">endpoint-controller-token-dhk4f kubernetes.io/service-account-token 3 10d</span><br><span class="line">expand-controller-token-lbsrj kubernetes.io/service-account-token 3 10d</span><br><span class="line">flannel-token-qjgks kubernetes.io/service-account-token 3 10d</span><br><span class="line">generic-garbage-collector-token-6fwmg kubernetes.io/service-account-token 3 10d</span><br><span class="line">horizontal-pod-autoscaler-token-vl8dh kubernetes.io/service-account-token 3 10d</span><br><span class="line">job-controller-token-c2sfm kubernetes.io/service-account-token 3 10d</span><br><span class="line">kube-proxy-token-qg465 kubernetes.io/service-account-token 3 10d</span><br><span class="line">kubernetes-dashboard-certs NodePort 0 56m</span><br><span class="line">kubernetes-dashboard-key-holder Opaque 2 56m</span><br><span class="line">kubernetes-dashboard-token-hpg2q kubernetes.io/service-account-token 3 56m</span><br><span class="line">namespace-controller-token-vvbxk kubernetes.io/service-account-token 3 10d</span><br><span class="line">node-controller-token-5hmv6 kubernetes.io/service-account-token 3 10d</span><br><span class="line">persistent-volume-binder-token-6vrk6 kubernetes.io/service-account-token 3 10d</span><br><span class="line">pod-garbage-collector-token-f8bvl kubernetes.io/service-account-token 3 10d</span><br><span class="line">pv-protection-controller-token-pp8bh kubernetes.io/service-account-token 3 10d</span><br><span class="line">pvc-protection-controller-token-jf6lj kubernetes.io/service-account-token 3 10d</span><br><span class="line">replicaset-controller-token-twbw8 kubernetes.io/service-account-token 3 10d</span><br><span class="line">replication-controller-token-lr45r kubernetes.io/service-account-token 3 10d</span><br><span class="line">resourcequota-controller-token-qlgbb kubernetes.io/service-account-token 3 10d</span><br><span class="line">service-account-controller-token-bsqlq kubernetes.io/service-account-token 3 10d</span><br><span class="line">service-controller-token-g6lvs kubernetes.io/service-account-token 3 10d</span><br><span class="line">statefulset-controller-token-h6wrx kubernetes.io/service-account-token 3 10d</span><br><span class="line">token-cleaner-token-wvwbn kubernetes.io/service-account-token 3 10d</span><br><span class="line">ttl-controller-token-z2fm7 kubernetes.io/service-account-token 3 10d</span><br><span class="line">[root@k8s-master01 ~]# kubectl describe secret dashboard-admin-token-dl8pf -n kube-system</span><br><span class="line">Name: dashboard-admin-token-dl8pf</span><br><span class="line">Namespace: kube-system</span><br><span class="line">Labels: <none></span><br><span class="line">Annotations: kubernetes.io/service-account.name: dashboard-admin</span><br><span class="line"> kubernetes.io/service-account.uid: b4fc67f6-1cab-4486-8652-05346c939c6d</span><br><span class="line"></span><br><span class="line">Type: kubernetes.io/service-account-token</span><br><span class="line"></span><br><span class="line">Data</span><br><span class="line">====</span><br><span class="line">ca.crt: 1025 bytes</span><br><span class="line">namespace: 11 bytes</span><br><span class="line">token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZGw4cGYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjRmYzY3ZjYtMWNhYi00NDg2LTg2NTItMDUzNDZjOTM5YzZkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.ArAKoKEiZ0xaV9rqff63iq2t6iAsWBmA-VhHKK_pnkiMObpPL-JjZras40HO0crE7Gnou9dUWCStW3AbfmtJ1SX_Hmo4OlXGH2xFBJ-_2wruwWOU89dlHhOnhw8__skhsVrE92-KDK00GRSrA4BkUu8PWp45jCQyIwFbF8h3L2ydcNlcs_rxGieVFRc1p9gaf_HAyXIIHEgu-M5LxA6BduN-3Z7WBzYMokFd_r_c_beAQ4CUlTYc1c0FjmqLeyZpyLJL6IMqztjaYHFXiRty6c-PQHZd6HQoElJShbw1lhZtHXSSw0A70Kb3ZVfqQZxRaOsqJYo70sZXQQRaYso6fg</span><br><span class="line">[root@k8s-master01 ~]#</span><br></pre></td></tr></table></figure></p><p>输入Token<br><img src="https://img-blog.csdnimg.cn/20200413192952809.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>部署成功!</p>]]></content>
<summary type="html">
<p>上节讲解了通过kubeadm 搭建集群kubeadm1.15.1环境,现在的集群已经搭建成功了,今天给大家展示Kubernetes Dashboard 插件的安装</p>
<h2 id="下载官方的yaml文件"><a href="#下载官方的yaml文件" class="
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>解决Kubernetes1.5.1 coredns报错CrashLoopBackOff</title>
<link href="https://plutoacharon.github.io/2020/04/21/%E8%A7%A3%E5%86%B3Kubernetes1-5-1-coredns%E6%8A%A5%E9%94%99CrashLoopBackOff/"/>
<id>https://plutoacharon.github.io/2020/04/21/解决Kubernetes1-5-1-coredns报错CrashLoopBackOff/</id>
<published>2020-04-21T09:49:27.000Z</published>
<updated>2020-04-21T09:49:45.084Z</updated>
<content type="html"><![CDATA[<p>今天在使用K8s查看pod时发现,<code>coredns</code>出现了<code>CrashLoopBackOff</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-f9rb7 0/1 CrashLoopBackOff 50 9d</span><br><span class="line">coredns-5c98db65d4-xcd9s 0/1 CrashLoopBackOff 50 9d</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 9d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 9d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 9d</span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d</span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 9d</span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d</span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 9d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 9d</span><br></pre></td></tr></table></figure></p><p>使用<code>kubectl logs</code>命令查看, 报错很奇怪<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl logs coredns-5c98db65d4-xcd9s -n kube-system</span></span><br><span class="line">E0413 06:32:09.919666 1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?<span class="built_in">limit</span>=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host</span><br><span class="line">E0413 06:32:09.919666 1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?<span class="built_in">limit</span>=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host</span><br></pre></td></tr></table></figure></p><h2 id="原因"><a href="#原因" class="headerlink" title="原因:"></a>原因:</h2><p>查阅k8s官方文档<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">coredns pods 有 CrashLoopBackOff 或者 Error 状态</span><br><span class="line">如果有些节点运行的是旧版本的 Docker,同时启用了 SELinux,您或许会遇到 coredns pods 无法启动的情况。 要解决此问题,您可以尝试以下选项之一:</span><br><span class="line"></span><br><span class="line">升级到 Docker 的较新版本。</span><br><span class="line"></span><br><span class="line">禁用 SELinux.</span><br><span class="line"></span><br><span class="line">修改 coredns 部署以设置 allowPrivilegeEscalation 为 true:</span><br><span class="line"></span><br><span class="line">kubectl -n kube-system get deployment coredns -o yaml | \</span><br><span class="line">sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \</span><br><span class="line">kubectl apply -f -</span><br><span class="line">CoreDNS 处于 CrashLoopBackOff 时的另一个原因是当 Kubernetes 中部署的 CoreDNS Pod 检测 到环路时。有许多解决方法 可以避免在每次 CoreDNS 监测到循环并退出时,Kubernetes 尝试重启 CoreDNS Pod 的情况。</span><br><span class="line"></span><br><span class="line">警告:</span><br><span class="line">警告:禁用 SELinux 或设置 allowPrivilegeEscalation 为 true 可能会损害集群的安全性。</span><br></pre></td></tr></table></figure></p><p>我这里的原因可能是以前配置<code>iptables</code>时产生的</p><h2 id="解决"><a href="#解决" class="headerlink" title="解决"></a>解决</h2><ol><li>设置iptables为空规则<br><code>iptables -F && service iptables save</code></li><li>删除报错的coredns pod<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl delete pod coredns-5c98db65d4-xcd9s</span></span><br><span class="line">Error from server (NotFound): pods <span class="string">"coredns-5c98db65d4-xcd9s"</span> not found</span><br><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl delete pod coredns-5c98db65d4-xcd9s -n kube-system</span></span><br><span class="line">pod <span class="string">"coredns-5c98db65d4-xcd9s"</span> deleted</span><br><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl delete pod coredns-5c98db65d4-f9rb7 -n kube-system</span></span><br><span class="line">pod <span class="string">"coredns-5c98db65d4-f9rb7"</span> deleted</span><br></pre></td></tr></table></figure></li></ol><p>重新查看pod<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-54j5c 1/1 Running 0 13m</span><br><span class="line">coredns-5c98db65d4-jmvbf 1/1 Running 0 14m</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 9d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 9d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 9d</span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d</span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 9d</span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d</span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 9d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 9d</span><br><span class="line">[root@k8s-master01 flannel]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>状态重新变成<code>Running</code></p>]]></content>
<summary type="html">
<p>今天在使用K8s查看pod时发现,<code>coredns</code>出现了<code>CrashLoopBackOff</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pr
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(四)----Kubernetes1.15.1配置私有仓库Harbor</title>
<link href="https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E5%9B%9B-Kubernetes1-15-1%E9%85%8D%E7%BD%AE%E7%A7%81%E6%9C%89%E4%BB%93%E5%BA%93Harbor/"/>
<id>https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-入门到实践-四-Kubernetes1-15-1配置私有仓库Harbor/</id>
<published>2020-04-21T09:48:19.000Z</published>
<updated>2020-04-21T09:48:58.957Z</updated>
<content type="html"><![CDATA[<h1 id="目录"><a href="#目录" class="headerlink" title="目录"></a>目录</h1><p><a href="https://blog.csdn.net/qq_43442524/article/details/104483555" target="_blank" rel="noopener">Kubernetes(K8s)入门到实践(一)—-Kubernetes入门</a><br><a href="https://blog.csdn.net/qq_43442524/article/details/104496523" target="_blank" rel="noopener">Kubernetes(K8s)入门到实践(二)—-Kubernetes的基本概念和术语</a><br><a href="https://blog.csdn.net/qq_43442524/article/details/105293018" target="_blank" rel="noopener">Kubernetes(K8s)入门到实践(三)—-Kubernetes Centos7集群安装</a><br><a href="https://blog.csdn.net/qq_43442524/article/details/105429614" target="_blank" rel="noopener">Kubernetes(K8s)入门到实践(四)—-Kubernetes1.15.1配置私有仓库Harbor</a></p><h2 id="前期准备"><a href="#前期准备" class="headerlink" title="前期准备"></a>前期准备</h2><ul><li>需要三台K8s节点</li><li>Harbor虚拟机</li><li>docker-compose</li><li>harbor安装包</li></ul><h2 id="安装docker"><a href="#安装docker" class="headerlink" title="安装docker"></a>安装docker</h2><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">yum install -y yum-utils device-mapper-persistent-data lvm2</span><br><span class="line"></span><br><span class="line">yum-config-manager \</span><br><span class="line">--add-repo \ </span><br><span class="line">http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo</span><br><span class="line"></span><br><span class="line">yum update -y && yum install -y docker-ce</span><br></pre></td></tr></table></figure><p>安装完成后需要建立<code>/etc/docker/daemon.json</code>文件<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 启动docker</span></span><br><span class="line">systemctl start docker && systemctl <span class="built_in">enable</span> docker </span><br><span class="line"><span class="comment">## 创建 /etc/docker 目录</span></span><br><span class="line">mkdir /etc/docker</span><br><span class="line"><span class="comment"># 配置 daemon.json</span></span><br><span class="line">vim /etc/docker/daemon.json</span><br><span class="line">{</span><br><span class="line"> <span class="string">"exec-opts"</span>: [<span class="string">"native.cgroupdriver=systemd"</span>],</span><br><span class="line"> <span class="string">"log-driver"</span>: <span class="string">"json-file"</span>,</span><br><span class="line"> <span class="string">"log-opts"</span>: {</span><br><span class="line"><span class="string">"max-size"</span>: <span class="string">"100m"</span></span><br><span class="line"> },</span><br><span class="line"> <span class="string">"insecure-registries"</span>: [<span class="string">"https://hub.test.com"</span>]</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">mkdir -p /etc/systemd/system/docker.service.d</span><br><span class="line"><span class="comment"># 重启docker服务</span></span><br><span class="line">systemctl daemon-reload && systemctl restart docker && systemctl <span class="built_in">enable</span> docker</span><br></pre></td></tr></table></figure></p><p>同理: K8s节点也需要一样修改<code>/etc/docker/daemon.json</code>文件</p><h2 id="安装Harbor"><a href="#安装Harbor" class="headerlink" title="安装Harbor"></a>安装Harbor</h2><h3 id="下载docker-compose"><a href="#下载docker-compose" class="headerlink" title="下载docker-compose"></a>下载docker-compose</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m`> ./docker-compose</span><br></pre></td></tr></table></figure><h3 id="下载解压Harbor"><a href="#下载解压Harbor" class="headerlink" title="下载解压Harbor"></a>下载解压Harbor</h3><p>Harbor 官方地址:<code>https://github.com/vmware/harbor/releases</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># chmod a+x docker-compose </span></span><br><span class="line">[root@localhost ~]<span class="comment"># mv docker-compose /usr/local/bin/</span></span><br><span class="line">[root@localhost ~]<span class="comment"># tar -zxvf harbor-offline-installer-v1.2.0.tgz </span></span><br><span class="line">[root@localhost ~]<span class="comment"># mv harbor /usr/local/</span></span><br><span class="line">[root@localhost ~]<span class="comment"># cd /usr/local/harbor/</span></span><br></pre></td></tr></table></figure></p><h3 id="配置harbor-cfg"><a href="#配置harbor-cfg" class="headerlink" title="配置harbor.cfg"></a>配置harbor.cfg</h3><p>修改为https协议,并且定义网址<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">hostname = hub.test.com</span><br><span class="line">ui_url_protocol = https</span><br></pre></td></tr></table></figure></p><p>以下为ssl证书配置文件目录 接下来配置HTTPS证书<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">ssl_cert = /data/cert/server.crt</span><br><span class="line">ssl_cert_key = /data/cert/server.key</span><br><span class="line"></span><br><span class="line">#The path of secretkey storage</span><br><span class="line">secretkey_path = /data</span><br></pre></td></tr></table></figure></p><h3 id="创建https证书以及配置相关目录权限"><a href="#创建https证书以及配置相关目录权限" class="headerlink" title="创建https证书以及配置相关目录权限"></a>创建https证书以及配置相关目录权限</h3><p>创建cert目录,输入密码例如<code>123456</code>下面配置会用到<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost harbor]<span class="comment"># mkdir -p /data/cert</span></span><br><span class="line">[root@localhost harbor]<span class="comment"># cd /data/cert/</span></span><br><span class="line">[root@localhost cert]<span class="comment"># openssl genrsa -des3 -out server.key 2048</span></span><br><span class="line">Generating RSA private key, 2048 bit long modulus</span><br><span class="line">...................................+++</span><br><span class="line">................+++</span><br><span class="line">e is 65537 (0x10001)</span><br><span class="line">Enter pass phrase <span class="keyword">for</span> server.key:</span><br><span class="line">Verifying - Enter pass phrase <span class="keyword">for</span> server.key:</span><br></pre></td></tr></table></figure></p><p>生成服务器CSR证书请求文件,注意站点名称要一致</p><p>输入刚才设置的密码进行配置</p><blockquote><p>Common Name (eg, your name or your server’s hostname) []:<code>hub.test.com</code> 一定要填上面配置的网址<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost cert]<span class="comment"># openssl req -new -key server.key -out server.csr</span></span><br><span class="line">Enter pass phrase <span class="keyword">for</span> server.key:</span><br><span class="line">You are about to be asked to enter information that will be incorporated</span><br><span class="line">into your certificate request.</span><br><span class="line">What you are about to enter is what is called a Distinguished Name or a DN.</span><br><span class="line">There are quite a few fields but you can leave some blank</span><br><span class="line">For some fields there will be a default value,</span><br><span class="line">If you enter <span class="string">'.'</span>, the field will be left blank.</span><br><span class="line">-----</span><br><span class="line">Country Name (2 letter code) [XX]:CN </span><br><span class="line">State or Province Name (full name) []:Hebei</span><br><span class="line">Locality Name (eg, city) [Default City]:sjz</span><br><span class="line">Organization Name (eg, company) [Default Company Ltd]:<span class="built_in">test</span></span><br><span class="line">Organizational Unit Name (eg, section) []:<span class="built_in">test</span></span><br><span class="line">Common Name (eg, your name or your server<span class="string">'s hostname) []:hub.test.com</span></span><br><span class="line"><span class="string">Email Address []:[email protected] </span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="string">Please enter the following '</span>extra<span class="string">' attributes</span></span><br><span class="line"><span class="string">to be sent with your certificate request</span></span><br><span class="line"><span class="string">A challenge password []:</span></span><br><span class="line"><span class="string">An optional company name []:</span></span><br></pre></td></tr></table></figure></p></blockquote><p>生成服务器认证证书<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost cert]<span class="comment"># cp server.key server.key.org</span></span><br><span class="line">[root@localhost cert]<span class="comment"># openssl rsa -in server.key.org -out server.key</span></span><br><span class="line">Enter pass phrase <span class="keyword">for</span> server.key.org:</span><br><span class="line">writing RSA key</span><br><span class="line">[root@localhost cert]<span class="comment"># openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt</span></span><br><span class="line">Signature ok</span><br><span class="line">subject=/C=CN/ST=Hebei/L=sjz/O=<span class="built_in">test</span>/OU=<span class="built_in">test</span>/CN=hub.test.com/emailAddress=<span class="built_in">test</span>@qq.com</span><br><span class="line">Getting Private key</span><br><span class="line">[root@localhost cert]<span class="comment"># ls</span></span><br><span class="line">server.crt server.csr server.key server.key.org</span><br><span class="line">[root@localhost cert]<span class="comment"># chmod a+x *</span></span><br><span class="line">[root@localhost cert]<span class="comment"># cd -</span></span><br><span class="line">/usr/<span class="built_in">local</span>/harbor</span><br></pre></td></tr></table></figure></p><p>安装<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost harbor]<span class="comment"># ./install.sh </span></span><br><span class="line">[root@localhost harbor]<span class="comment"># docker ps -a</span></span><br><span class="line">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br><span class="line">c998c35434cd vmware/nginx-photon:1.11.13 <span class="string">"nginx -g 'daemon of…"</span> 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp nginx</span><br><span class="line">b8651abbdc0f vmware/harbor-jobservice:v1.2.0 <span class="string">"/harbor/harbor_jobs…"</span> 2 hours ago Up 2 hours harbor-jobservice</span><br><span class="line">38cd42c3ad61 vmware/harbor-ui:v1.2.0 <span class="string">"/harbor/harbor_ui"</span> 2 hours ago Up 2 hours harbor-ui</span><br><span class="line">7117305239e4 vmware/harbor-adminserver:v1.2.0 <span class="string">"/harbor/harbor_admi…"</span> 2 hours ago Up 2 hours harbor-adminserver</span><br><span class="line">547244f64e7b vmware/harbor-db:v1.2.0 <span class="string">"docker-entrypoint.s…"</span> 2 hours ago Up 2 hours 3306/tcp harbor-db</span><br><span class="line">08ac3fe587c8 vmware/registry:2.6.2-photon <span class="string">"/entrypoint.sh serv…"</span> 2 hours ago Up 2 hours 5000/tcp registry</span><br><span class="line">a137bc1e2548 vmware/harbor-log:v1.2.0 <span class="string">"/bin/sh -c 'crond &…"</span> 2 hours ago Up 2 hours 127.0.0.1:1514->514/tcp harbor-log</span><br><span class="line">[root@localhost harbor]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="修改hosts文件映射"><a href="#修改hosts文件映射" class="headerlink" title="修改hosts文件映射"></a>修改hosts文件映射</h3><p>修改k8s节点与Harbor虚拟机<code>/etc/hosts</code>文件<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">192.168.0.50 k8s-master01</span><br><span class="line">192.168.0.51 k8s-node01</span><br><span class="line">192.168.0.52 k8s-node02</span><br><span class="line">192.168.0.44 hub.test.com</span><br></pre></td></tr></table></figure></p><p>本地hosts文件添加<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">192.168.0.44 hub.test.com</span><br></pre></td></tr></table></figure></p><p>登录账号<code>admin</code>,密码<code>Harbor12345</code><br><img src="https://img-blog.csdnimg.cn/20200410115107334.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200410134144950.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="Harbor上传镜像"><a href="#Harbor上传镜像" class="headerlink" title="Harbor上传镜像"></a>Harbor上传镜像</h2><h3 id="拉取镜像"><a href="#拉取镜像" class="headerlink" title="拉取镜像"></a>拉取镜像</h3><p>这是是从我的docker hub中拉取的镜像<code>plutoacharon/myapp:v1</code>,也可以从docker hub中搜索拉取想要上传的镜像<br><code>docker pull plutoacharon/myapp:v1</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker pull plutoacharon/myapp:v1</span></span><br><span class="line">v1: Pulling from plutoacharon/myapp</span><br><span class="line">550fe1bea624: Pull complete </span><br><span class="line">af3988949040: Pull complete </span><br><span class="line">d6642feac728: Pull complete </span><br><span class="line">c20f0a205eaa: Pull complete </span><br><span class="line">fe78b5db7c4e: Pull complete </span><br><span class="line">6565e38e67fe: Pull complete </span><br><span class="line">Digest: sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513</span><br><span class="line">Status: Downloaded newer image <span class="keyword">for</span> plutoacharon/myapp:v1</span><br><span class="line">docker.io/plutoacharon/myapp:v1</span><br><span class="line">[root@localhost ~]<span class="comment"># docker images</span></span><br><span class="line">REPOSITORY TAG IMAGE ID CREATED SIZE</span><br><span class="line">plutoacharon/myapp v1 d4a5e0eaa84f 2 years ago 15.5MB</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="上传镜像"><a href="#上传镜像" class="headerlink" title="上传镜像"></a>上传镜像</h3><p>首先使用<code>docker login https://hub.test.com</code>登录才可以上传到Harbor中<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker login https://hub.test.com</span></span><br><span class="line">Username: admin</span><br><span class="line">Password: </span><br><span class="line">WARNING! Your password will be stored unencrypted <span class="keyword">in</span> /root/.docker/config.json.</span><br><span class="line">Configure a credential helper to remove this warning. See</span><br><span class="line">https://docs.docker.com/engine/reference/commandline/login/<span class="comment">#credentials-store</span></span><br><span class="line"></span><br><span class="line">Login Succeeded</span><br><span class="line">[root@localhost ~]<span class="comment"># docker tag plutoacharon/myapp:v1 hub.test.com/library/myapp:v1</span></span><br><span class="line">[root@localhost ~]<span class="comment"># docker push hub.test.com/library/myapp:v1</span></span><br><span class="line">The push refers to repository [hub.test.com/library/myapp]</span><br><span class="line">a0d2c4392b06: Pushed </span><br><span class="line">05a9e65e2d53: Pushed </span><br><span class="line">68695a6cfd7d: Pushed </span><br><span class="line">c1dc81a64903: Pushed </span><br><span class="line">8460a579ab63: Pushed </span><br><span class="line">d39d92664027: Pushed </span><br><span class="line">v1: digest: sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e size: 1569</span><br></pre></td></tr></table></figure></p><p><img src="https://img-blog.csdnimg.cn/20200410142631549.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="Kubernetes拉取运行Harbor镜像"><a href="#Kubernetes拉取运行Harbor镜像" class="headerlink" title="Kubernetes拉取运行Harbor镜像"></a>Kubernetes拉取运行Harbor镜像</h2><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl run nginx-deployment --image=hub.test.com/library/myapp:v1 --port=80 --replicas=1</span></span><br><span class="line">kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed <span class="keyword">in</span> a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.</span><br><span class="line">deployment.apps/nginx-deployment created</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get deployment</span></span><br><span class="line">NAME READY UP-TO-DATE AVAILABLE AGE</span><br><span class="line">nginx-deployment 1/1 1 1 25s</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get rs</span></span><br><span class="line">NAME DESIRED CURRENT READY AGE</span><br><span class="line">nginx-deployment-bdf84f685 1 1 1 39s</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get pod</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">nginx-deployment-bdf84f685-pg7qk 1/1 Running 0 50s</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get pod -o wide</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES</span><br><span class="line">nginx-deployment-bdf84f685-pg7qk 1/1 Running 0 65s 10.244.1.2 k8s-node01 <none> <none></span><br></pre></td></tr></table></figure><p><code>kubectl get pod -o wide</code>可以看到nginx-deployment在node1上运行<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># docker ps | grep nginx</span></span><br><span class="line">066e82c78200 hub.test.com/library/myapp <span class="string">"nginx -g 'daemon of…"</span> 20 minutes ago Up 20 minutes k8s_nginx-deployment_nginx-deployment-bdf84f685-pg7qk_default_11af7460-37a5-4d61-b94c-5c64684110ed_0</span><br><span class="line">3a0c5624068c k8s.gcr.io/pause:3.1 <span class="string">"/pause"</span> 20 minutes ago Up 20 minutes k8s_POD_nginx-deployment-bdf84f685-pg7qk_default_11af7460-37a5-4d61-b94c-5c64684110ed_0</span><br><span class="line">[root@k8s-node01 ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># curl 10.244.1.2</span></span><br><span class="line">Hello MyApp | Version: v1 | <a href=<span class="string">"hostname.html"</span>>Pod Name</a></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># curl 10.244.1.2/hostname.html</span></span><br><span class="line">nginx-deployment-bdf84f685-pg7qk</span><br><span class="line">[root@k8s-node01 ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
<h1 id="目录"><a href="#目录" class="headerlink" title="目录"></a>目录</h1><p><a href="https://blog.csdn.net/qq_43442524/article/details/104483555"
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(五)---- 配置nginx反向代理和负载均衡</title>
<link href="https://plutoacharon.github.io/2020/04/09/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E4%BA%94%EF%BC%89-%E9%85%8D%E7%BD%AEnginx%E5%8F%8D%E5%90%91%E4%BB%A3%E7%90%86%E5%92%8C%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<id>https://plutoacharon.github.io/2020/04/09/HA高可用与负载均衡入门到实战(五)-配置nginx反向代理和负载均衡/</id>
<published>2020-04-09T12:32:14.000Z</published>
<updated>2020-04-09T12:32:26.531Z</updated>
<content type="html"><![CDATA[<h2 id="网站架构"><a href="#网站架构" class="headerlink" title="网站架构"></a>网站架构</h2><p>基于Docker容器里构建高并发网站</p><p>拓扑图:<br><img src="https://img-blog.csdnimg.cn/20200409155415760.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70#pic_center" alt="在这里插入图片描述"></p><h3 id="正向代理"><a href="#正向代理" class="headerlink" title="正向代理"></a>正向代理</h3><ul><li>代理:也被叫做正向代理,是一个位于客户端和目标服务器之间的代理服务器</li><li>作用:客户端将发送的请求和指定的目标服务器提交给代理服务器,然后代理服务器向目标服务器发起请求,并将获得的响应结果返回给客户端的过程<br><img src="https://img-blog.csdnimg.cn/20200409170657710.png" alt="在这里插入图片描述"></li></ul><h3 id="反向代理"><a href="#反向代理" class="headerlink" title="反向代理"></a>反向代理</h3><ul><li>反向代理:对于客户端而言就是目标服务器</li><li>作用:客户端向反向代理服务器发送请求后,反向代理服务器将该请求转发给内部网络上的后端服务器,并将从后端服务器上得到的响应结果返回给客户端<br><img src="https://img-blog.csdnimg.cn/20200409170738230.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><h4 id="反向代理服务配置"><a href="#反向代理服务配置" class="headerlink" title="反向代理服务配置"></a>反向代理服务配置</h4></li><li>反向代理的配置指令:proxy_pass,用于设置后端服务器的地址。该地址中包括传输数据使用的协议、服务器主机名以及可选的URI资源等</li><li><p>作用范围:通常在location块中进行设置</p><h3 id="负载均衡"><a href="#负载均衡" class="headerlink" title="负载均衡"></a>负载均衡</h3></li><li><p>指令:upstream指令可以实现负载均衡,在该指令中能够配置负载服务器组</p></li><li>配置方式:目前负载均衡有4种典型的配置方式</li></ul><table><thead><tr><th>配置方式</th><th>说明</th></tr></thead><tbody><tr><td>轮询方式</td><td>负载均衡默认设置方式,每个请求按照时间顺序逐一分配到不同的后端服务器进行处理,如果有服务器宕机,会自动剔除</td></tr><tr><td>权重方式</td><td>利用weight指定轮询的权重比率,与访问率成正比,用于后端服务器性能不均的情况</td></tr><tr><td>ip_hash方式</td><td>每个请求按访问IP的hash结果分配,这样可以使每个访客固定访问一个后端服务器,可以解决Session共享的问题</td></tr><tr><td>第三方模块</td><td>采用fair时,按照每台服务器的响应时间来分配请求,响应时间短的优先分配;若第三方模块采用url_hash时,按照访问url的hash值来分配请求</td></tr></tbody></table><h2 id="配置nginx反向代理,使用nginx1、APP1、APP2三个容器"><a href="#配置nginx反向代理,使用nginx1、APP1、APP2三个容器" class="headerlink" title="配置nginx反向代理,使用nginx1、APP1、APP2三个容器"></a>配置nginx反向代理,使用nginx1、APP1、APP2三个容器</h2><h3 id="使用php-apache镜像启动APP1和APP2两个容器"><a href="#使用php-apache镜像启动APP1和APP2两个容器" class="headerlink" title="使用php-apache镜像启动APP1和APP2两个容器"></a>使用php-apache镜像启动APP1和APP2两个容器</h3><p>1) docker network create –subnet=172.18.0.0/16 cluster //创建docker网络<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker network create --subnet=172.18.0.0/16 cluster</span></span><br><span class="line">93cf616f5b6466f3872a697e7246d525173405659d659f775584460cc523fc19</span><br><span class="line">[root@localhost ~]<span class="comment"># docker network ls</span></span><br><span class="line">NETWORK ID NAME DRIVER SCOPE</span><br><span class="line">5b668484dc8f bridge bridge <span class="built_in">local</span></span><br><span class="line">93cf616f5b64 cluster bridge <span class="built_in">local</span></span><br><span class="line">f2010c589fe5 host host <span class="built_in">local</span></span><br><span class="line">3e84fc461677 none null <span class="built_in">local</span></span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>2) 启动容器APP1,设定地址为172.18.0.111, 启动容器APP2,设定地址为172.18.0.112</p><p><code>docker run -d --privileged --net cluster --ip 172.18.0.111 --name APP1 php-apache /usr/sbin/init</code><br><code>docker run -d --privileged --net cluster --ip 172.18.0.112 --name APP2 php-apache /usr/sbin/init</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.111 --name APP1 php-apache /usr/sbin/init </span></span><br><span class="line">0119783e023dbd322e6598c4556743408fb2fda176b26406b8c80d3d982bf02e</span><br><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.112 --name APP2 php-apache /usr/sbin/init </span></span><br><span class="line">f2744c76c1759187788620e84705a0905b1021da4d987620b96cc0f3b4d2eac8</span><br><span class="line">[root@localhost ~]<span class="comment"># docker ps</span></span><br><span class="line">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br><span class="line">f2744c76c175 php-apache <span class="string">"/usr/sbin/init"</span> 4 seconds ago Up 2 seconds APP2</span><br><span class="line">0119783e023d php-apache <span class="string">"/usr/sbin/init"</span> 20 seconds ago Up 18 seconds APP1</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>3) 配置容器APP1,编辑首页内容为“site1”<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker exec -it f27 /bin/bash</span></span><br><span class="line">[root@f2744c76c175 /]<span class="comment"># vim /var/www/html/index.html</span></span><br><span class="line">[root@f2744c76c175 /]<span class="comment"># systemctl status httpd</span></span><br><span class="line">● httpd.service - The Apache HTTP Server</span><br><span class="line"> Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)</span><br><span class="line"> Drop-In: /usr/lib/systemd/system/httpd.service.d</span><br><span class="line"> └─php-fpm.conf</span><br><span class="line"> Active: inactive (dead)</span><br><span class="line"> Docs: man:httpd.service(8)</span><br><span class="line">[root@f2744c76c175 /]<span class="comment"># systemctl start httpd</span></span><br></pre></td></tr></table></figure></p><p>4) 配置容器APP1,编辑首页内容为“site2”<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker exec -it 011 /bin/bash</span></span><br><span class="line">[root@0119783e023d /]<span class="comment"># vim /var/www/html/index.html</span></span><br><span class="line">[root@0119783e023d /]<span class="comment"># systemctl start httpd</span></span><br><span class="line">[root@0119783e023d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>5)在宿主机访问<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.111</span></span><br><span class="line">This is site1!</span><br><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.112</span></span><br><span class="line">This is site2!</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="使用nginx镜像启动nginx1容器,配置反向代理"><a href="#使用nginx镜像启动nginx1容器,配置反向代理" class="headerlink" title="使用nginx镜像启动nginx1容器,配置反向代理"></a>使用nginx镜像启动nginx1容器,配置反向代理</h3><p>1) 启动容器nginx1,设定地址为172.18.0.11<br><code>docker run -d --privileged --net cluster --ip 172.18.0.11 -p 80:80 --name nginx1 nginx /usr/sbin/init</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.11 -p 80:80 --name nginx1 nginx /usr/sbin/init</span></span><br><span class="line">b0db3efdfe817b3df2557ef598e6bf709a5cabcfe2122d40caf344ee96075aac</span><br><span class="line">[root@localhost ~]<span class="comment"># docker exec -it b0d /bin/bash</span></span><br><span class="line">[root@b0db3efdfe81 /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>2) 在容器nginx1编辑/etc/nginx/nginx.conf文件,重新启动nginx服务</p><p>配置两台虚拟主机<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name site1.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://172.18.0.111;</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name site2.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://172.18.0.112;</span><br><span class="line"> }</span><br></pre></td></tr></table></figure></p><p>3) }在主机编辑hosts文件<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">宿主机的IP地址 site1.test.com</span><br><span class="line">宿主机的IP地址 site2.test.com</span><br><span class="line">宿主机的IP地址 www.test.com</span><br></pre></td></tr></table></figure></p><p>4) 在主机使用浏览器访问site1.test.com<br><img src="https://img-blog.csdnimg.cn/20200409164810311.png" alt="在这里插入图片描述"><br>5) 在主机使用浏览器访问site2.test.com<br><img src="https://img-blog.csdnimg.cn/20200409164752131.png" alt="在这里插入图片描述"></p><h4 id="配置nginx负载均衡,使用nginx1、APP1、APP2三个容器"><a href="#配置nginx负载均衡,使用nginx1、APP1、APP2三个容器" class="headerlink" title="配置nginx负载均衡,使用nginx1、APP1、APP2三个容器"></a>配置nginx负载均衡,使用nginx1、APP1、APP2三个容器</h4><p><strong>保持以上三个容器不变</strong> </p><p>使用nginx1容器,配置<code>nginx一般轮询负载均衡</code></p><p>1) 在容器nginx1编辑/etc/nginx/nginx.conf文件,重新启动nginx服务</p><p>配置 <a href="http://www.test.com虚拟主机" target="_blank" rel="noopener">www.test.com虚拟主机</a><br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name www.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://APP;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>配置负载均衡服务器组<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">upstream APP {</span><br><span class="line"> server 172.18.0.111;</span><br><span class="line"> server 172.18.0.112;</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 在主机使用浏览器访问 <a href="http://www.test.com并不断刷新" target="_blank" rel="noopener">www.test.com并不断刷新</a><br><img src="https://img-blog.csdnimg.cn/20200409165619268.png" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200409165626825.png" alt="在这里插入图片描述"></p><h4 id="使用nginx1容器,配置nginx-IP哈希轮询"><a href="#使用nginx1容器,配置nginx-IP哈希轮询" class="headerlink" title="使用nginx1容器,配置nginx IP哈希轮询"></a>使用nginx1容器,配置nginx IP哈希轮询</h4><p>1) 在容器nginx1编辑/etc/nginx/conf.d/default.conf文件,重新启动nginx服务</p><p>配置 <a href="http://www.test.com虚拟主机" target="_blank" rel="noopener">www.test.com虚拟主机</a><br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name www.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://APP;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>配置负载均衡服务器组<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">upstream APP {</span><br><span class="line"> ip_hash;</span><br><span class="line"> server 172.18.0.111;</span><br><span class="line"> server 172.18.0.112;</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 在不同ip主机使用浏览器访问 <a href="http://www.test.com" target="_blank" rel="noopener">www.test.com</a><br><img src="https://img-blog.csdnimg.cn/20200409170202667.png" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200409170146589.png" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<h2 id="网站架构"><a href="#网站架构" class="headerlink" title="网站架构"></a>网站架构</h2><p>基于Docker容器里构建高并发网站</p>
<p>拓扑图:<br><img src="https://img-blog.c
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
</feed>