pipework is a Docker network configuration tool written by a Docker engineer (jpetazzo). It is implemented in roughly 200 lines of shell and is simple to use.
Download: wget https://github.com/jpetazzo/pipework.git
I. How pipework is used and how it works
1) Putting a Docker container on the local network
To make it easier for machines on the local network to communicate with Docker containers, we often need to put a container on the same subnet as the host. This is easy to achieve: bridge the container's interface with the host's NIC and assign the container an IP address. The steps are as follows.
My host's IP address is 192.168.64.132 and its gateway is 192.168.64.2; the container will be given the IP address 192.168.64.150/24.
1. The CentOS 7 image does not include the ifconfig command by default, so we build a simple image with a Dockerfile:
[root@docker-yk ~]# mkdir /dockerifconfig
[root@docker-yk ~]# cd /dockerifconfig/
[root@docker-yk dockerifconfig]# vim Dockerfile
[root@docker-yk dockerifconfig]# cat Dockerfile
FROM 328edcd84f1b
RUN yum install -y net-tools
[root@docker-yk dockerifconfig]# docker build -t="centos:ifconfig" .
Build complete.
2. Run docker images to list the images; if centos:ifconfig appears, the build succeeded:
[root@docker-yk dockerifconfig]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
centos ifconfig fe07bba21e2f About a minute ago 300.2 MB
docker.io/centos latest 328edcd84f1b 5 weeks ago 192.5 MB
3. Install the pipework tool (I have already downloaded it here). Download: wget https://github.com/jpetazzo/pipework.git
After downloading, unzip it:
[root@docker-yk ~]# unzip pipework-master.zip
Archive: pipework-master.zip
ae42f1b5fef82b3bc23fe93c95c345e7af65fef3
creating: pipework-master/
extracting: pipework-master/.gitignore
inflating: pipework-master/LICENSE
inflating: pipework-master/README.md
inflating: pipework-master/docker-compose.yml
creating: pipework-master/doctoc/
inflating: pipework-master/doctoc/Dockerfile
inflating: pipework-master/pipework
inflating: pipework-master/pipework.spec
Then copy the pipework script into /usr/local/bin:
[root@docker-yk ~]# cp -p /root/pipework-master/pipework /usr/local/bin/
4. Start a Docker container:
[root@docker-yk ~]# docker run -dit --name test1 centos:ifconfig
b5379afae825c12e70523e87adcdc5638cec9d200521cb32d4a9be46d26eedb2
You can verify it is running with docker ps -a.
Configure the container's network and attach it to the bridge br0; the gateway is specified by appending @ after the IP address:
[root@docker-yk ~]# pipework br0 test1 192.168.64.150/24@192.168.64.2
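That single pipework command hides several steps: pipework creates a veth pair, attaches the host end to br0, and moves the other end into the container's network namespace before configuring it. A rough sketch of the equivalent manual commands, assuming root privileges (the interface names such as veth1pl$PID are illustrative, not pipework's exact naming):

```shell
# Rough equivalent of the pipework call above (run as root; names illustrative).
PID=$(docker inspect --format '{{.State.Pid}}' test1)    # container's init PID
ip link add veth1pl$PID type veth peer name veth1pg$PID  # create a veth pair
brctl addif br0 veth1pl$PID         # host end joins the bridge
ip link set veth1pl$PID up
ip link set veth1pg$PID netns $PID  # guest end moves into the container's netns
nsenter -t $PID -n ip link set veth1pg$PID name eth1     # appears as eth1 inside
nsenter -t $PID -n ip addr add 192.168.64.150/24 dev eth1
nsenter -t $PID -n ip link set eth1 up
nsenter -t $PID -n ip route add default via 192.168.64.2
```

pipework additionally waits for the container's interface to appear and announces the new address via ARP; the sketch omits those details.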
Bridge the host NIC ens33 onto br0 and move ens33's IP address onto br0:
[root@docker-yk ~]# ip addr add 192.168.64.132/24 dev br0
[root@docker-yk ~]# ip addr del 192.168.64.132/24 dev ens33
[root@docker-yk ~]# brctl addif br0 ens33
[root@docker-yk ~]# ip route del default
[root@docker-yk ~]# ip route add default via 192.168.64.2 dev br0   # 192.168.64.2 is the gateway
Ignore whatever these commands print. Checking the IP addresses afterwards shows that ens33's address has moved to the newly created br0, which means it worked.
Note: if you are connected via a remote terminal such as Xshell or SecureCRT, entering these commands one at a time will drop your connection (removing the IP from ens33 breaks the session). I am using SecureCRT, so I chained the commands on one line to avoid the interruption. If you are on a remote terminal, run them joined like this:
ip addr add 192.168.64.132/24 dev br0;ip addr del 192.168.64.132/24 dev ens33;brctl addif br0 ens33;ip route del default;ip route add default via 192.168.64.2 dev br0
5. Log in to test1 and check its IP addresses:
[root@docker-yk ~]# docker exec -it test1 /bin/bash
[root@b5379afae825 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 8 bytes 648 (648.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.64.150 netmask 255.255.255.0 broadcast 192.168.64.255
inet6 fe80::64f5:ccff:fed5:4e1b prefixlen 64 scopeid 0x20<link>
ether 66:f5:cc:d5:4e:1b txqueuelen 1000 (Ethernet)
RX packets 173 bytes 13796 (13.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9 bytes 690 (690.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Then ping the gateway to test connectivity.
Under normal circumstances you should also be able to ping an external site such as Baidu. In my environment that ping fails due to an unrelated network problem, but you can test it yourself.
II. VLAN partitioning of Docker containers on a single host
pipework can connect containers not only with a Linux bridge but also with Open vSwitch, which makes VLAN partitioning of Docker containers possible. To demonstrate the effect, we put four containers into the same IP subnet; in reality they form two layer-2-isolated networks with separate broadcast domains.
Install Open vSwitch (base environment):
[root@docker-yk ~]# yum install gcc make python-devel openssl-devel kernel-devel graphviz \
kernel-debug-devel autoconf automake rpm-build redhat-rpm-config libtool
Download the openvswitch source tarball.
After downloading, extract it and build the RPM:
[root@docker-yk ~]# tar zxf openvswitch-2.3.1.tar.gz
[root@docker-yk ~]# mkdir -p ~/rpmbuild/SOURCES
[root@docker-yk ~]# cp openvswitch-2.3.1.tar.gz ~/rpmbuild/SOURCES/
[root@docker-yk ~]# sed 's/openvswitch-kmod, //g' openvswitch-2.3.1/rhel/openvswitch.spec > openvswitch-2.3.1/rhel/openvswitch_no_kmod.spec
[root@docker-yk ~]# rpmbuild -bb --without check openvswitch-2.3.1/rhel/openvswitch_no_kmod.spec
Afterwards two RPMs will appear under ~/rpmbuild/RPMS/x86_64/; installing the first one is enough:
[root@docker-yk ~]# ls ~/rpmbuild/RPMS/x86_64/ -l
total 9556
-rw-r--r--. 1 root root 2014000 Sep 12 20:54 openvswitch-2.3.1-1.x86_64.rpm
-rw-r--r--. 1 root root 7767124 Sep 12 20:55 openvswitch-debuginfo-2.3.1-1.x86_64.rpm
[root@docker-yk ~]# yum localinstall ~/rpmbuild/RPMS/x86_64/openvswitch-2.3.1-1.x86_64.rpm
After installation, enable and start the service:
[root@docker-yk ~]# /sbin/chkconfig openvswitch on
[root@docker-yk ~]# /sbin/service openvswitch start
Starting openvswitch (via systemctl): [ OK ]
Check the status (e.g. with service openvswitch status); it should be running normally.
Create the switch and add the physical NIC to ovs1:
[root@docker-yk ~]# ovs-vsctl add-br ovs1;ovs-vsctl add-port ovs1 ens33;ip link set ovs1 up;ifconfig ens33 0;ifconfig ovs1 192.168.35.128
Tip: if you are connected via SecureCRT, Xshell, or another remote terminal, be sure to enter the whole line at once; running the commands one at a time will drop your connection.
After configuring, check the IP addresses:
[root@docker-yk ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::2813:dcc7:4e37:418f prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:65:55:03 txqueuelen 1000 (Ethernet)
RX packets 697612 bytes 924263907 (881.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 203452 bytes 12936889 (12.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 212 bytes 18412 (17.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 212 bytes 18412 (17.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ovs1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.35.128 netmask 255.255.255.0 broadcast 192.168.35.255
inet6 fe80::20c:29ff:fe65:5503 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:65:55:03 txqueuelen 1000 (Ethernet)
RX packets 40 bytes 2920 (2.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 37 bytes 5065 (4.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Create four Docker containers on the host: test1, test2, test3, and test4.
Put test1 and test2 into one VLAN and test3 and test4 into another. In pipework's syntax, the VLAN ID is appended with @ after the optional MAC address; here the MAC address is omitted:
[root@docker-yk ~]# pipework ovs1 test1 192.168.35.100/24 @100
[root@docker-yk ~]# pipework ovs1 test2 192.168.35.110/24 @100
[root@docker-yk ~]# pipework ovs1 test3 192.168.35.120/24 @200
[root@docker-yk ~]# pipework ovs1 test4 192.168.35.130/24 @200
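For the Open vSwitch case, the difference from the Linux-bridge example is only in how the host end of the container's veth pair is attached: with a VLAN given, it is added to the OVS bridge as an access port carrying that tag, so the container's frames are confined to that VLAN. A minimal sketch, assuming an illustrative veth name (pipework performs this step for you):

```shell
# How the @100 VLAN takes effect (illustrative; pipework does this internally):
# the container's host-side veth is added to ovs1 as an access port tagged 100.
ovs-vsctl add-port ovs1 veth1pl$PID tag=100
# Inspect the bridge afterwards; the port entry should list "tag: 100".
ovs-vsctl show
```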
Once the containers are attached, connect to them with docker attach and test: test1 and test2 can communicate with each other, but both are isolated from test3 and test4. A simple VLAN isolation of containers is now in place. Below we log in to test1.
Log in to test3 and test whether it can communicate with test4.
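The isolation can also be checked from the host without attaching to each container; a quick sketch using docker exec (container names and addresses as above; results depend on your environment):

```shell
# Same VLAN (100): test1 -> test2 should succeed.
docker exec test1 ping -c 3 192.168.35.110
# Different VLANs (100 vs 200): test1 -> test3 should time out.
docker exec test1 ping -c 3 192.168.35.120
# Same VLAN (200): test3 -> test4 should succeed.
docker exec test3 ping -c 3 192.168.35.130
```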