Preface

OpenStack automated deployment tooling has long been regarded as a key focus of OpenStack development, and tools such as Kolla and TripleO are already well recognized by the market. But today we are not going to talk about automated deployment; interested readers can see 《Kolla 让 OpenStack 部署更贴心》. This article goes the opposite way and pursues an extremely minimal manual deployment of OpenStack, stripping away every external layer to look at OpenStack as it really is. It is intended as an introductory, popular-science style walkthrough of OpenStack.

BTW, OpenStack engineers should not rely too heavily on automated deployment tools, as that keeps their understanding of OpenStack at the surface. It is worth spending some time deploying it by hand at least once to see what the most primitive OpenStack actually looks like.

OpenStack Architecture

Conceptual architecture

(Figure: OpenStack conceptual architecture)

Logical architecture

(Figure: OpenStack logical architecture)

Choosing a Networking Option

Networking Option 1: Provider networks

The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances. The OpenStack user requires more information about the underlying network infrastructure to create a virtual network to exactly match the infrastructure. WARNING: this option lacks support for self-service (private) networks, layer-3 (routing) services, and advanced services such as LBaaS and FWaaS. Consider the self-service networks option below if you desire these features.

(Figure: Provider networks overview)

Provider networks bridge each node's virtual networks onto the operator's physical network (e.g. L2/L3 switches and routers). It is a comparatively simple network model, and delegating forwarding to physical network devices also gives it higher performance. However, because Neutron does not need to run the L3 router service in this model, advanced features such as LBaaS and FWaaS are not supported.
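
As an illustration, here is a minimal sketch of creating a provider network and subnet with the OpenStack CLI once the cloud is up. The physical network name provider matches the bridge mapping configured in the Neutron section later; the flat type (assuming [ml2_type_flat] flat_networks allows it), address range, gateway, and DNS server are example values, not taken from the original article.

  1. openstack network create --share --external \

  2. --provider-physical-network provider \

  3. --provider-network-type flat provider

  4. openstack subnet create --network provider \

  5. --allocation-pool start=172.18.22.100,end=172.18.22.200 \

  6. --gateway 172.18.22.1 --dns-nameserver 114.114.114.114 \

  7. --subnet-range 172.18.22.0/24 provider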

Networking Option 2: Self-service networks

The self-service networks option augments the provider networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods such as VXLAN. Essentially, it routes virtual networks to physical networks using NAT. Additionally, this option provides the foundation for advanced services such as LBaaS and FWaaS. The OpenStack user can create virtual networks without the knowledge of underlying infrastructure on the data network. This can also include VLAN networks if the layer-2 plug-in is configured accordingly.

(Figure: Self-service networks overview)

Self-service networks are a complete L2/L3 network virtualization solution. Users can create virtual networks without knowing anything about the underlying physical topology, and Neutron provides them with multi-tenant isolation across multiple network planes.
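
The corresponding self-service workflow, sketched with example names and addresses: a tenant creates a VXLAN-backed network and subnet, then a router that NATs it out through the external provider network from the previous sketch. None of these values come from the original article.

  1. openstack network create selfservice

  2. openstack subnet create --network selfservice \

  3. --gateway 192.168.1.1 --dns-nameserver 114.114.114.114 \

  4. --subnet-range 192.168.1.0/24 selfservice

  5. openstack router create router

  6. openstack router add subnet router selfservice

  7. openstack router set router --external-gateway provider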


Two-Node Deployment Network Topology

(Figure: two-node deployment network topology)

Controller

  • ens160: 172.18.22.231/24

  • ens192: 10.0.0.1/24

  • ens224: br-provider NIC

Compute

  • ens160: 172.18.22.232/24

  • ens192: 10.0.0.2/24

NOTE: every occurrence of "fanguiju" below is a placeholder password; replace it with your own.

Base Services

DNS Name Resolution

NOTE: we use the hosts file instead of a real DNS server.

  • Controller

  1. [root@controller ~]# cat /etc/hosts

  2. 127.0.0.1 controller localhost localhost.localdomain localhost4 localhost4.localdomain4

  3. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

  4. 172.18.22.231 controller

  5. 172.18.22.232 compute

  • Compute

  1. [root@compute ~]# cat /etc/hosts

  2. 127.0.0.1 controller localhost localhost.localdomain localhost4 localhost4.localdomain4

  3. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

  4. 172.18.22.231 controller

  5. 172.18.22.232 compute

NTP Time Synchronization

  • Controller

  1. [root@controller ~]# cat /etc/chrony.conf | grep -v ^# | grep -v ^$

  2. server 0.centos.pool.ntp.org iburst

  3. server 1.centos.pool.ntp.org iburst

  4. server 2.centos.pool.ntp.org iburst

  5. server 3.centos.pool.ntp.org iburst

  6. driftfile /var/lib/chrony/drift

  7. makestep 1.0 3

  8. rtcsync

  9. allow 172.18.22.0/24

  10. logdir /var/log/chrony


  11. [root@controller ~]# systemctl enable chronyd.service

  12. [root@controller ~]# systemctl start chronyd.service


  13. [root@controller ~]# chronyc sources

  14. 210 Number of sources = 4

  15. MS Name/IP address Stratum Poll Reach LastRx Last sample

  16. ===============================================================================

  17. ^+ ntp1.ams1.nl.leaseweb.net 2 6 77 24 -4781us[-6335us] +/- 178ms

  18. ^? static.186.49.130.94.cli> 0 8 0 - +0ns[ +0ns] +/- 0ns

  19. ^? sv1.ggsrv.de 2 7 1 17 -36ms[ -36ms] +/- 130ms

  20. ^* 124-108-20-1.static.beta> 2 6 77 24 +382us[-1172us] +/- 135ms

  • Compute

  1. [root@compute ~]# cat /etc/chrony.conf | grep -v ^# | grep -v ^$

  2. server controller iburst

  3. driftfile /var/lib/chrony/drift

  4. makestep 1.0 3

  5. rtcsync

  6. logdir /var/log/chrony


  7. [root@compute ~]# systemctl enable chronyd.service

  8. [root@compute ~]# systemctl start chronyd.service


  9. [root@compute ~]# chronyc sources

  10. 210 Number of sources = 1

  11. MS Name/IP address Stratum Poll Reach LastRx Last sample

  12. ===============================================================================

  13. ^? controller 0 7 0 - +0ns[ +0ns] +/- 0ns

YUM Repositories

  • Controller & Compute

  1. yum install centos-release-openstack-rocky -y

  2. yum upgrade -y

  3. yum install python-openstackclient -y

  4. yum install openstack-selinux -y

MySQL Database

  • Controller

  1. yum install mariadb mariadb-server python2-PyMySQL -y

  1. [root@controller ~]# cat /etc/my.cnf.d/openstack.cnf

  2. [mysqld]

  3. bind-address = 172.18.22.231


  4. default-storage-engine = innodb

  5. innodb_file_per_table = on

  6. max_connections = 4096

  7. collation-server = utf8_general_ci

  8. character-set-server = utf8


  9. [root@controller ~]# systemctl enable mariadb.service

  10. [root@controller ~]# systemctl start mariadb.service

  11. [root@controller ~]# systemctl status mariadb.service


  12. # Initialize the MySQL database (root) password

  13. [root@controller ~]# mysql_secure_installation

Problem: the APIs of all OpenStack services respond slowly, and "Too many connections" errors appear. Troubleshooting: many OpenStack services hit the MySQL database, so MySQL needs some tuning, e.g. raising the maximum number of connections, shortening the connection wait timeout, and automatically cleaning up idle connections. e.g. (a quick way to confirm the settings is sketched after the config below):

  1. [root@controller ~]# cat /etc/my.cnf | grep -v ^$ | grep -v ^#

  2. [client-server]

  3. [mysqld]

  4. symbolic-links=0

  5. max_connections=1000

  6. wait_timeout=5

  7. # interactive_timeout = 600

  8. !includedir /etc/my.cnf.d
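
To confirm that the tuning took effect, and to watch how close the server is to the limit when "Too many connections" shows up, a quick check from the mysql client on the controller (a sketch):

  1. mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"

  2. mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"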

RabbitMQ Message Queue

  • Controller

  1. yum install rabbitmq-server -y

  1. [root@controller ~]# systemctl enable rabbitmq-server.service

  2. [root@controller ~]# systemctl start rabbitmq-server.service

  3. [root@controller ~]# systemctl status rabbitmq-server.service


  4. # Create the RabbitMQ openstack user and grant it permissions

  5. [root@controller ~]# rabbitmqctl add_user openstack fanguiju

  6. [root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Problem

  1. Error: unable to connect to node rabbit@localhost: nodedown


  2. DIAGNOSTICS

  3. ===========


  4. attempted to contact: [rabbit@localhost]


  5. rabbit@localhost:

  6. * connected to epmd (port 4369) on localhost

  7. * epmd reports node 'rabbit' running on port 25672

  8. * TCP connection succeeded but Erlang distribution failed


  9. * Hostname mismatch: node "rabbit@controller" believes its host is different. Please ensure that hostnames resolve the same way locally and on "rabbit@controller"



  10. current node details:

  11. - node name: 'rabbitmq-cli-50@controller'

  12. - home dir: /var/lib/rabbitmq

  13. - cookie hash: J6O4pu2pK+BQLf1TTaZSwQ==

Troubleshooting: Hostname mismatch. From RabbitMQ's point of view the hostname change has not fully taken effect yet (the Erlang node name and cookie still reflect the old hostname); rebooting the operating system resolves it.
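
After the reboot, a quick check that the Erlang node name now matches the hostname and that the openstack user survived (a sketch):

  1. [root@controller ~]# rabbitmqctl status | grep -i 'rabbit@'

  2. [root@controller ~]# rabbitmqctl list_users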

For more on RabbitMQ, see 《快速入门分布式消息队列之 RabbitMQ》.

Memcached

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

  • Controller

  1. yum install memcached python-memcached -y

  1. [root@controller ~]# cat /etc/sysconfig/memcached

  2. PORT="11211"

  3. USER="memcached"

  4. MAXCONN="1024"

  5. CACHESIZE="64"

  6. # OPTIONS="-l 127.0.0.1,::1"

  7. OPTIONS="-l 127.0.0.1,::1,controller"


  8. [root@controller ~]# systemctl enable memcached.service

  9. [root@controller ~]# systemctl start memcached.service

  10. [root@controller ~]# systemctl status memcached.service
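
Once memcached is running, a quick sanity check is to pull its stats with memcached-tool (shipped in the memcached package) and confirm it is listening on the controller address (a sketch):

  1. [root@controller ~]# memcached-tool controller:11211 stats | head

  2. [root@controller ~]# ss -tnlp | grep 11211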

Etcd

OpenStack services may use Etcd, a distributed reliable key-value store for distributed key locking, storing configuration, keeping track of service live-ness and other scenarios.

  • Controller

  1. yum install etcd -y

  1. [root@controller ~]# cat /etc/etcd/etcd.conf

  2. #[Member]

  3. ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

  4. ETCD_LISTEN_PEER_URLS="http://172.18.22.231:2380"

  5. ETCD_LISTEN_CLIENT_URLS="http://172.18.22.231:2379"

  6. ETCD_NAME="controller"

  7. #[Clustering]

  8. ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.18.22.231:2380"

  9. ETCD_ADVERTISE_CLIENT_URLS="http://172.18.22.231:2379"

  10. ETCD_INITIAL_CLUSTER="controller=http://172.18.22.231:2380"

  11. ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"

  12. ETCD_INITIAL_CLUSTER_STATE="new"


  13. [root@controller ~]# systemctl enable etcd

  14. [root@controller ~]# systemctl start etcd

  15. [root@controller ~]# systemctl status etcd
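
A simple write/read round-trip against the client endpoint confirms etcd is serving requests (a sketch, assuming the v2 etcdctl API that this etcd package defaults to):

  1. [root@controller ~]# etcdctl --endpoints=http://172.18.22.231:2379 cluster-health

  2. [root@controller ~]# etcdctl --endpoints=http://172.18.22.231:2379 set /test ok

  3. [root@controller ~]# etcdctl --endpoints=http://172.18.22.231:2379 get /test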

OpenStack Projects

Keystone(Controller)

For how Keystone authentication works, see 《OpenStack 组件实现原理 — Keystone 认证功能》.

  • Packages

  1. yum install openstack-keystone httpd mod_wsgi -y

  • Configuration

  1. # /etc/keystone/keystone.conf


  2. [database]

  3. connection = mysql+pymysql://keystone:fanguiju@controller/keystone


  4. [token]

  5. provider = fernet

  • Create the keystone database

  1. CREATE DATABASE keystone;

  2. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'fanguiju';

  3. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'fanguiju';

  • Initialize the keystone database

  1. su -s /bin/sh -c "keystone-manage db_sync" keystone

  • Enable Fernet keys

  1. keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

  2. keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

For the characteristics of Fernet tokens, see 《理解 Keystone 的四种 Token》.
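
To confirm that the key repositories were created with the expected ownership, a quick look at the directories (a sketch):

  1. [root@controller ~]# ls -l /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/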

  • Bootstrap the Keystone service. This automatically creates the default domain, the admin project, the admin user (with the given password), the admin, member, and reader roles, and registers the keystone service with its identity endpoints.

  1. keystone-manage bootstrap --bootstrap-password fanguiju \

  2. --bootstrap-admin-url http://controller:5000/v3/ \

  3. --bootstrap-internal-url http://controller:5000/v3/ \

  4. --bootstrap-public-url http://controller:5000/v3/ \

  5. --bootstrap-region-id RegionOne

  • Configure and start the Apache HTTP server. NOTE: Keystone's web server runs on top of the Apache HTTP server as an httpd virtual host.

  1. ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

  1. # /usr/share/keystone/wsgi-keystone.conf

  2. # keystone virtual host configuration file


  3. Listen 5000


  4. <VirtualHost *:5000>

  5. WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

  6. WSGIProcessGroup keystone-public

  7. WSGIScriptAlias / /usr/bin/keystone-wsgi-public

  8. WSGIApplicationGroup %{GLOBAL}

  9. WSGIPassAuthorization On

  10. LimitRequestBody 114688

  11. <IfVersion >= 2.4>

  12. ErrorLogFormat "%{cu}t %M"

  13. </IfVersion>

  14. ErrorLog /var/log/httpd/keystone.log

  15. CustomLog /var/log/httpd/keystone_access.log combined


  16. <Directory /usr/bin>

  17. <IfVersion >= 2.4>

  18. Require all granted

  19. </IfVersion>

  20. <IfVersion < 2.4>

  21. Order allow,deny

  22. Allow from all

  23. </IfVersion>

  24. </Directory>

  25. </VirtualHost>


  26. Alias /identity /usr/bin/keystone-wsgi-public

  27. <Location /identity>

  28. SetHandler wsgi-script

  29. Options +ExecCGI


  30. WSGIProcessGroup keystone-public

  31. WSGIApplicationGroup %{GLOBAL}

  32. WSGIPassAuthorization On

  33. </Location>

  1. # /etc/httpd/conf/httpd.conf

  2. ServerName controller

  1. systemctl enable httpd.service

  2. systemctl start httpd.service

  3. systemctl status httpd.service

  • Create projects


Export temporary admin credentials:

  1. export OS_USERNAME=admin

  2. export OS_PASSWORD=fanguiju

  3. export OS_PROJECT_NAME=admin

  4. export OS_USER_DOMAIN_NAME=Default

  5. export OS_PROJECT_DOMAIN_NAME=Default

  6. export OS_AUTH_URL=http://controller:5000/v3

  7. export OS_IDENTITY_API_VERSION=3

NOTE: keystone-manage bootstrap has already initialized the admin project and the admin user, so all that remains is to create a service project to hold the OpenStack service users (e.g. Nova, Cinder, Neutron). If needed, you can also create an ordinary demo project myproject and a user myuser under it. e.g.

  1. openstack project create --domain default --description "Service Project" service


  2. openstack project create --domain default --description "Demo Project" myproject

  3. openstack user create --domain default --password-prompt myuser

  4. openstack role create myrole

  5. openstack role add --project myproject --user myuser myrole

  1. [root@controller ~]# openstack domain list

  2. +---------+---------+---------+--------------------+

  3. | ID | Name | Enabled | Description |

  4. +---------+---------+---------+--------------------+

  5. | default | Default | True | The default domain |

  6. +---------+---------+---------+--------------------+


  7. [root@controller ~]# openstack project list

  8. +----------------------------------+-----------+

  9. | ID | Name |

  10. +----------------------------------+-----------+

  11. | 64e45ce71e4843f3af4715d165f417b6 | service |

  12. | a2b55e37121042a1862275a9bc9b0223 | admin |

  13. | a50bbb6cd831484d934eb03f989b988b | myproject |

  14. +----------------------------------+-----------+


  15. [root@controller ~]# openstack group list


  16. [root@controller ~]# openstack user list

  17. +----------------------------------+--------+

  18. | ID | Name |

  19. +----------------------------------+--------+

  20. | 2cd4bbe862e54afe9292107928338f3f | myuser |

  21. | 92602c24daa24f019f05ecb95f1ce68e | admin |

  22. +----------------------------------+--------+


  23. [root@controller ~]# openstack role list

  24. +----------------------------------+--------+

  25. | ID | Name |

  26. +----------------------------------+--------+

  27. | 3bc0396aae414b5d96488d974a301405 | reader |

  28. | 811f5caa2ac747a5b61fe91ab93f2f2f | myrole |

  29. | 9366e60815bc4f1d80b1e57d51f7c228 | admin |

  30. | d9e0d3e5d1954feeb81e353117c15340 | member |

  31. +----------------------------------+--------+

  • Verify

  1. [root@controller ~]# unset OS_AUTH_URL OS_PASSWORD

  2. [root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \

  3. > --os-project-domain-name Default --os-user-domain-name Default \

  4. > --os-project-name admin --os-username admin token issue

  5. Password:

  6. +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

  7. | Field | Value |

  8. +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

  9. | expires | 2019-03-29T12:36:47+0000 |

  10. | id | gAAAAABcngNPjXntVhVAmLbek0MH7ZSzeYGC4cfipy4E3aiy_dRjEyJiPehNH2dkDVI94vHHHdni1h27BJvLp6gqIqglGVDHallPn3PqgZt3-JMq_dyxx2euQL1bhSNX9rAUbBvzL9_0LBPKw2glQmmRli9Qhu8QUz5tRkbxAb6iP7R2o-mU30Y |

  11. | project_id | a2b55e37121042a1862275a9bc9b0223 |

  12. | user_id | 92602c24daa24f019f05ecb95f1ce68e |

  13. +------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

  • Create OpenStack client environment scripts (an rc file for the demo user is sketched after the admin example below)

  1. [root@controller ~]# cat adminrc

  2. export OS_PROJECT_DOMAIN_NAME=Default

  3. export OS_USER_DOMAIN_NAME=Default

  4. export OS_PROJECT_NAME=admin

  5. export OS_USERNAME=admin

  6. export OS_PASSWORD=fanguiju

  7. export OS_AUTH_URL=http://controller:5000/v3

  8. export OS_IDENTITY_API_VERSION=3

  9. export OS_IMAGE_API_VERSION=2


  10. [root@controller ~]# source adminrc


  11. [root@controller ~]# openstack catalog list

  12. +----------+----------+----------------------------------------+

  13. | Name | Type | Endpoints |

  14. +----------+----------+----------------------------------------+

  15. | keystone | identity | RegionOne |

  16. | | | admin: http://controller:5000/v3/ |

  17. | | | RegionOne |

  18. | | | public: http://controller:5000/v3/ |

  19. | | | RegionOne |

  20. | | | internal: http://controller:5000/v3/ |

  21. | | | |

  22. +----------+----------+----------------------------------------+
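
Similarly, an rc file can be kept for the ordinary user myuser created earlier (a sketch; <MYUSER_PASS> stands for whatever password was entered at the --password-prompt step):

  1. [root@controller ~]# cat demorc

  2. export OS_PROJECT_DOMAIN_NAME=Default

  3. export OS_USER_DOMAIN_NAME=Default

  4. export OS_PROJECT_NAME=myproject

  5. export OS_USERNAME=myuser

  6. export OS_PASSWORD=<MYUSER_PASS>

  7. export OS_AUTH_URL=http://controller:5000/v3

  8. export OS_IDENTITY_API_VERSION=3

  9. export OS_IMAGE_API_VERSION=2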

Glance(Controller)

For more on the Glance architecture, see 《OpenStack 组件实现原理 — Glance 架构(V1/V2)》.

  • Add the Glance service, user, role, and endpoints

  1. openstack service create --name glance --description "OpenStack Image" image


  2. openstack user create --domain default --password-prompt glance

  3. openstack role add --project service --user glance admin


  4. openstack endpoint create --region RegionOne image public http://controller:9292

  5. openstack endpoint create --region RegionOne image internal http://controller:9292

  6. openstack endpoint create --region RegionOne image admin http://controller:9292

  1. [root@controller ~]# openstack catalog list

  2. +-----------+-----------+-----------------------------------------+

  3. | Name | Type | Endpoints |

  4. +-----------+-----------+-----------------------------------------+

  5. | glance | image | RegionOne |

  6. | | | admin: http://controller:9292 |

  7. | | | RegionOne |

  8. | | | public: http://controller:9292 |

  9. | | | RegionOne |

  10. | | | internal: http://controller:9292 |

  11. | | | |

  12. | keystone | identity | RegionOne |

  13. | | | admin: http://controller:5000/v3/ |

  14. | | | RegionOne |

  15. | | | public: http://controller:5000/v3/ |

  16. | | | RegionOne |

  17. | | | internal: http://controller:5000/v3/ |

  18. | | | |

  19. +-----------+-----------+-----------------------------------------+

  • Packages

  1. yum install openstack-glance -y

  • Configuration

  1. # /etc/glance/glance-api.conf


  2. [glance_store]

  3. stores = file,http

  4. default_store = file

  5. # Local directory where image files are stored

  6. filesystem_store_datadir = /var/lib/glance/images/


  7. [database]

  8. connection = mysql+pymysql://glance:fanguiju@controller/glance


  9. [keystone_authtoken]

  10. auth_uri = http://controller:5000

  11. auth_url = http://controller:5000

  12. memcached_servers = controller:11211

  13. auth_type = password

  14. project_domain_name = Default

  15. user_domain_name = Default

  16. project_name = service

  17. username = glance

  18. password = fanguiju


  19. [paste_deploy]

  20. flavor = keystone

  1. # /etc/glance/glance-registry.conf


  2. [database]

  3. connection = mysql+pymysql://glance:fanguiju@controller/glance


  4. [keystone_authtoken]

  5. auth_uri = http://controller:5000

  6. auth_url = http://controller:5000

  7. memcached_servers = controller:11211

  8. auth_type = password

  9. project_domain_name = Default

  10. user_domain_name = Default

  11. project_name = service

  12. username = glance

  13. password = fanguiju


  14. [paste_deploy]

  15. flavor = keystone

  • Create the Glance database

  1. CREATE DATABASE glance;

  2. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'fanguiju';

  3. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'fanguiju';

  • Initialize the Glance database

  1. su -s /bin/sh -c "glance-manage db_sync" glance

  • Start the services

  1. systemctl enable openstack-glance-api.service openstack-glance-registry.service

  2. systemctl start openstack-glance-api.service openstack-glance-registry.service

  3. systemctl status openstack-glance-api.service openstack-glance-registry.service

  • Verify

  1. wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img


  2. openstack image create "cirros" \

  3. --file cirros-0.4.0-x86_64-disk.img \

  4. --disk-format qcow2 --container-format bare \

  5. --public

  1. [root@controller ~]# openstack image list

  2. +--------------------------------------+--------+--------+

  3. | ID | Name | Status |

  4. +--------------------------------------+--------+--------+

  5. | 59355e1b-2342-497b-9863-5c8b9969adf5 | cirros | active |

  6. +--------------------------------------+--------+--------+


  7. [root@controller ~]# ll /var/lib/glance/images/

  8. total 12980

  9. -rw-r-----. 1 glance glance 13287936 Mar 29 10:33 59355e1b-2342-497b-9863-5c8b9969adf5

Nova(Controller)

For more on Nova, see 《OpenStack 组件部署 — Nova Overview》.

  • Add the Nova and Placement services, users, roles, and endpoints

  1. openstack service create --name nova --description "OpenStack Compute" compute


  2. openstack user create --domain default --password-prompt nova

  3. openstack role add --project service --user nova admin


  4. openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

  5. openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

  6. openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1


  7. openstack service create --name placement --description "Placement API" placement


  8. openstack user create --domain default --password-prompt placement

  9. openstack role add --project service --user placement admin


  10. openstack endpoint create --region RegionOne placement public http://controller:8778

  11. openstack endpoint create --region RegionOne placement internal http://controller:8778

  12. openstack endpoint create --region RegionOne placement admin http://controller:8778

  1. [root@controller ~]# openstack catalog list

  2. +-----------+-----------+-----------------------------------------+

  3. | Name | Type | Endpoints |

  4. +-----------+-----------+-----------------------------------------+

  5. | nova | compute | RegionOne |

  6. | | | internal: http://controller:8774/v2.1 |

  7. | | | RegionOne |

  8. | | | admin: http://controller:8774/v2.1 |

  9. | | | RegionOne |

  10. | | | public: http://controller:8774/v2.1 |

  11. | | | |

  12. | glance | image | RegionOne |

  13. | | | admin: http://controller:9292 |

  14. | | | RegionOne |

  15. | | | public: http://controller:9292 |

  16. | | | RegionOne |

  17. | | | internal: http://controller:9292 |

  18. | | | |

  19. | keystone | identity | RegionOne |

  20. | | | admin: http://controller:5000/v3/ |

  21. | | | RegionOne |

  22. | | | public: http://controller:5000/v3/ |

  23. | | | RegionOne |

  24. | | | internal: http://controller:5000/v3/ |

  25. | | | |

  26. | placement | placement | RegionOne |

  27. | | | internal: http://controller:8778 |

  28. | | | RegionOne |

  29. | | | public: http://controller:8778 |

  30. | | | RegionOne |

  31. | | | admin: http://controller:8778 |

  32. | | | |

  33. +-----------+-----------+-----------------------------------------+

  • Packages

  1. yum install openstack-nova-api openstack-nova-conductor \

  2. openstack-nova-console openstack-nova-novncproxy \

  3. openstack-nova-scheduler openstack-nova-placement-api -y

  • Configuration

  1. # /etc/nova/nova.conf

  2. [DEFAULT]

  3. my_ip = 172.18.22.231

  4. enabled_apis = osapi_compute,metadata

  5. transport_url = rabbit://openstack:fanguiju@controller

  6. use_neutron = true

  7. firewall_driver = nova.virt.firewall.NoopFirewallDriver


  8. [api_database]

  9. connection = mysql+pymysql://nova:fanguiju@controller/nova_api


  10. [database]

  11. connection = mysql+pymysql://nova:fanguiju@controller/nova


  12. [placement_database]

  13. connection = mysql+pymysql://placement:fanguiju@controller/placement


  14. [api]

  15. auth_strategy = keystone


  16. [keystone_authtoken]

  17. auth_url = http://controller:5000/v3

  18. memcached_servers = controller:11211

  19. auth_type = password

  20. project_domain_name = default

  21. user_domain_name = default

  22. project_name = service

  23. username = nova

  24. password = fanguiju


  25. [vnc]

  26. enabled = true

  27. server_listen = $my_ip

  28. server_proxyclient_address = $my_ip


  29. [glance]

  30. api_servers = http://controller:9292


  31. [oslo_concurrency]

  32. lock_path = /var/lib/nova/tmp


  33. [placement]

  34. region_name = RegionOne

  35. project_domain_name = Default

  36. project_name = service

  37. auth_type = password

  38. user_domain_name = Default

  39. auth_url = http://controller:5000/v3

  40. username = placement

  41. password = fanguiju

  • Create the Nova-related databases

  1. CREATE DATABASE nova_api;

  2. CREATE DATABASE nova;

  3. CREATE DATABASE nova_cell0;

  4. CREATE DATABASE placement;


  5. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'fanguiju';

  6. GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'fanguiju';


  7. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'fanguiju';

  8. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'fanguiju';


  9. GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'fanguiju';

  10. GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'fanguiju';


  11. GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'fanguiju';

  12. GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'fanguiju';

  • Initialize the Nova API and Placement databases

  1. su -s /bin/sh -c "nova-manage api_db sync" nova

For more on Placement, see 《OpenStack Placement Project》.

  • Initialize the Nova database

  1. su -s /bin/sh -c "nova-manage db sync" nova

  • Register the cell0 database

  1. su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

  • Create cell1

  1. su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

  • Verify cell0 and cell1

  1. su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

(Figure: nova-manage cell_v2 list_cells output showing cell0 and cell1)

For more on Nova cells, see 《Nova Cell V2 详解》.

  • Register the Placement web server with httpd

  1. # /etc/httpd/conf.d/00-nova-placement-api.conf


  2. Listen 8778


  3. <VirtualHost *:8778>

  4. WSGIProcessGroup nova-placement-api

  5. WSGIApplicationGroup %{GLOBAL}

  6. WSGIPassAuthorization On

  7. WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova

  8. WSGIScriptAlias / /usr/bin/nova-placement-api

  9. <IfVersion >= 2.4>

  10. ErrorLogFormat "%M"

  11. </IfVersion>

  12. ErrorLog /var/log/nova/nova-placement-api.log

  13. #SSLEngine On

  14. #SSLCertificateFile ...

  15. #SSLCertificateKeyFile ...


  16. <Directory /usr/bin>

  17. <IfVersion >= 2.4>

  18. Require all granted

  19. </IfVersion>

  20. <IfVersion < 2.4>

  21. Order allow,deny

  22. Allow from all

  23. </IfVersion>

  24. </Directory>

  25. </VirtualHost>


  26. Alias /nova-placement-api /usr/bin/nova-placement-api

  27. <Location /nova-placement-api>

  28. SetHandler wsgi-script

  29. Options +ExecCGI

  30. WSGIProcessGroup nova-placement-api

  31. WSGIApplicationGroup %{GLOBAL}

  32. WSGIPassAuthorization On

  33. </Location>

  1. systemctl restart httpd

  2. systemctl status httpd

  • Start the services

  1. systemctl enable openstack-nova-api.service \

  2. openstack-nova-consoleauth openstack-nova-scheduler.service \

  3. openstack-nova-conductor.service openstack-nova-novncproxy.service


  4. systemctl start openstack-nova-api.service \

  5. openstack-nova-consoleauth openstack-nova-scheduler.service \

  6. openstack-nova-conductor.service openstack-nova-novncproxy.service


  7. systemctl status openstack-nova-api.service \

  8. openstack-nova-consoleauth openstack-nova-scheduler.service \

  9. openstack-nova-conductor.service openstack-nova-novncproxy.service

  • Verify

  1. [root@controller ~]# openstack compute service list

  2. +----+------------------+------------+----------+---------+-------+----------------------------+

  3. | ID | Binary | Host | Zone | Status | State | Updated At |

  4. +----+------------------+------------+----------+---------+-------+----------------------------+

  5. | 1 | nova-scheduler | controller | internal | enabled | up | 2019-03-29T15:22:51.000000 |

  6. | 2 | nova-consoleauth | controller | internal | enabled | up | 2019-03-29T15:22:52.000000 |

  7. | 3 | nova-conductor | controller | internal | enabled | up | 2019-03-29T15:22:51.000000 |

  8. +----+------------------+------------+----------+---------+-------+----------------------------+

Nova(Compute)

NOTE: in our plan, the Controller also acts as a Compute node, so the steps below must be performed on the Controller as well.

NOTE: if this is a virtualized lab environment, first check whether nested virtualization is available inside the VMs (a sketch for enabling it follows the check below). e.g.

  1. [root@controller ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

  2. 16


  3. [root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

  4. 16
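
If the count is 0, the guest does not see the VT-x/AMD-V CPU flags. Nested KVM then has to be enabled on the physical host and the CPU feature exposed to the VM; a rough sketch, assuming an Intel host and the kvm_intel module (use kvm_amd and the svm flag on AMD):

  1. # On the physical host: check and enable nested virtualization

  2. cat /sys/module/kvm_intel/parameters/nested

  3. echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf

  4. modprobe -r kvm_intel && modprobe kvm_intel

  5. # Then expose the host CPU model to the VM (e.g. host-passthrough) and restart the VM.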

  • Packages

  1. yum install openstack-nova-compute -y

  • Configuration

  1. # /etc/nova/nova.conf


  2. [DEFAULT]

  3. my_ip = 172.18.22.232

  4. enabled_apis = osapi_compute,metadata

  5. transport_url = rabbit://openstack:fanguiju@controller

  6. use_neutron = true

  7. firewall_driver = nova.virt.firewall.NoopFirewallDriver

  8. compute_driver = libvirt.LibvirtDriver

  9. instances_path = /var/lib/nova/instances


  10. [api_database]

  11. connection = mysql+pymysql://nova:fanguiju@controller/nova_api


  12. [database]

  13. connection = mysql+pymysql://nova:fanguiju@controller/nova


  14. [placement_database]

  15. connection = mysql+pymysql://placement:fanguiju@controller/placement


  16. [api]

  17. auth_strategy = keystone


  18. [keystone_authtoken]

  19. auth_url = http://controller:5000/v3

  20. memcached_servers = controller:11211

  21. auth_type = password

  22. project_domain_name = default

  23. user_domain_name = default

  24. project_name = service

  25. username = nova

  26. password = fanguiju


  27. [vnc]

  28. enabled = true

  29. server_listen = 0.0.0.0

  30. server_proxyclient_address = $my_ip

  31. novncproxy_base_url = http://controller:6080/vnc_auto.html


  32. [glance]

  33. api_servers = http://controller:9292


  34. [oslo_concurrency]

  35. lock_path = /var/lib/nova/tmp


  36. [placement]

  37. region_name = RegionOne

  38. project_domain_name = Default

  39. project_name = service

  40. auth_type = password

  41. user_domain_name = Default

  42. auth_url = http://controller:5000/v3

  43. username = placement

  44. password = fanguiju


  45. [libvirt]

  46. virt_type = qemu

  • Start the services

  1. systemctl enable libvirtd.service openstack-nova-compute.service

  2. systemctl start libvirtd.service openstack-nova-compute.service

  3. systemctl status libvirtd.service openstack-nova-compute.service

  • Register the compute nodes with the cell (a note on periodic discovery follows the command below)

  1. su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Problem: starting nova-compute.service on the compute node hangs. The traceback shows it stuck in the MQ exchange between nova-compute and nova-conductor, which suggests that nova-compute on compute cannot talk to RabbitMQ on the controller. A telnet test confirms the connection indeed fails.

  1. [root@compute ~]# telnet 172.18.22.231 5672

  2. Trying 172.18.22.231...

  3. telnet: connect to address 172.18.22.231: No route to host

It is a firewall problem; open the RabbitMQ-related ports on the controller:

  1. firewall-cmd --zone=public --permanent --add-port=4369/tcp &&

  2. firewall-cmd --zone=public --permanent --add-port=25672/tcp &&

  3. firewall-cmd --zone=public --permanent --add-port=5671-5672/tcp &&

  4. firewall-cmd --zone=public --permanent --add-port=15672/tcp &&

  5. firewall-cmd --zone=public --permanent --add-port=61613-61614/tcp &&

  6. firewall-cmd --zone=public --permanent --add-port=1883/tcp &&

  7. firewall-cmd --zone=public --permanent --add-port=8883/tcp

  8. firewall-cmd --reload

To keep the rest of the experiment simple, it is easier to just disable the firewall entirely:

  1. systemctl stop firewalld

  2. systemctl disable firewalld

  • Verify. With nova-compute.service running on both controller and compute, we now have two compute nodes:

  1. [root@controller ~]# openstack compute service list

  2. +----+------------------+------------+----------+---------+-------+----------------------------+

  3. | ID | Binary | Host | Zone | Status | State | Updated At |

  4. +----+------------------+------------+----------+---------+-------+----------------------------+

  5. | 1 | nova-scheduler | controller | internal | enabled | up | 2019-03-29T16:15:42.000000 |

  6. | 2 | nova-consoleauth | controller | internal | enabled | up | 2019-03-29T16:15:44.000000 |

  7. | 3 | nova-conductor | controller | internal | enabled | up | 2019-03-29T16:15:42.000000 |

  8. | 6 | nova-compute | controller | nova | enabled | up | 2019-03-29T16:15:41.000000 |

  9. | 7 | nova-compute | compute | nova | enabled | up | 2019-03-29T16:15:47.000000 |

  10. +----+------------------+------------+----------+---------+-------+----------------------------+


  11. # Check the cells and placement API are working successfully:

  12. [root@controller ~]# nova-status upgrade check

  13. +--------------------------------+

  14. | Upgrade Check Results |

  15. +--------------------------------+

  16. | Check: Cells v2 |

  17. | Result: Success |

  18. | Details: None |

  19. +--------------------------------+

  20. | Check: Placement API |

  21. | Result: Success |

  22. | Details: None |

  23. +--------------------------------+

  24. | Check: Resource Providers |

  25. | Result: Success |

  26. | Details: None |

  27. +--------------------------------+

  28. | Check: Ironic Flavor Migration |

  29. | Result: Success |

  30. | Details: None |

  31. +--------------------------------+

  32. | Check: API Service Version |

  33. | Result: Success |

  34. | Details: None |

  35. +--------------------------------+

  36. | Check: Request Spec Migration |

  37. | Result: Success |

  38. | Details: None |

  39. +--------------------------------+

  40. | Check: Console Auths |

  41. | Result: Success |

  42. | Details: None |

  43. +--------------------------------+

Neutron Open vSwitch(Controller)

For more on the Neutron architecture and internals, see 《我非要捅穿这 Neutron》.


  • Add the Neutron service, user, role, and endpoints

  1. openstack service create --name neutron --description "OpenStack Networking" network


  2. openstack user create --domain default --password-prompt neutron

  3. openstack role add --project service --user neutron admin


  4. openstack endpoint create --region RegionOne network public http://controller:9696

  5. openstack endpoint create --region RegionOne network internal http://controller:9696

  6. openstack endpoint create --region RegionOne network admin http://controller:9696

  • Packages

  1. yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

  • Configuration

  1. # /etc/neutron/neutron.conf


  2. [DEFAULT]

  3. core_plugin = ml2

  4. service_plugins = router

  5. allow_overlapping_ips = true

  6. transport_url = rabbit://openstack:fanguiju@controller

  7. auth_strategy = keystone

  8. notify_nova_on_port_status_changes = true

  9. notify_nova_on_port_data_changes = true


  10. [database]

  11. connection = mysql+pymysql://neutron:fanguiju@controller/neutron


  12. [keystone_authtoken]

  13. www_authenticate_uri = http://controller:5000

  14. auth_url = http://controller:5000

  15. memcached_servers = controller:11211

  16. auth_type = password

  17. project_domain_name = default

  18. user_domain_name = default

  19. project_name = service

  20. username = neutron

  21. password = fanguiju


  22. [nova]

  23. auth_url = http://controller:5000

  24. auth_type = password

  25. project_domain_name = default

  26. user_domain_name = default

  27. region_name = RegionOne

  28. project_name = service

  29. username = nova

  30. password = fanguiju


  31. [oslo_concurrency]

  32. lock_path = /var/lib/neutron/tmp

  1. # /etc/neutron/plugins/ml2/ml2_conf.ini


  2. [ml2]

  3. type_drivers = flat,vlan,vxlan

  4. # The lab has few spare IP addresses, so use VXLAN as the tenant network type

  5. tenant_network_types = vxlan

  6. extension_drivers = port_security

  7. mechanism_drivers = openvswitch,l2population


  8. [securitygroup]

  9. enable_ipset = true


  10. [ml2_type_vxlan]

  11. vni_ranges = 1:1000

  1. # /etc/neutron/plugins/ml2/openvswitch_agent.ini


  2. [ovs]

  3. # Physical network mapping; the OvS bridge br-provider must be created by hand

  4. bridge_mappings = provider:br-provider

  5. # OVERLAY_INTERFACE_IP_ADDRESS

  6. local_ip = 10.0.0.1


  7. [agent]

  8. tunnel_types = vxlan

  9. l2_population = True


  10. [securitygroup]

  11. firewall_driver = iptables_hybrid

  1. # /etc/neutron/l3_agent.ini


  2. [DEFAULT]

  3. interface_driver = openvswitch

  4. # The external_network_bridge option intentionally contains no value.

  5. external_network_bridge =

  1. # /etc/neutron/dhcp_agent.ini


  2. [DEFAULT]

  3. interface_driver = openvswitch

  4. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

  5. enable_isolated_metadata = true

  1. # /etc/neutron/metadata_agent.ini


  2. [DEFAULT]

  3. nova_metadata_host = controller

  4. metadata_proxy_shared_secret = fanguiju

  1. # /etc/nova/nova.conf


  2. ...


  3. [neutron]

  4. url = http://controller:9696

  5. auth_url = http://controller:5000

  6. auth_type = password

  7. project_domain_name = default

  8. user_domain_name = default

  9. region_name = RegionOne

  10. project_name = service

  11. username = neutron

  12. password = fanguiju

  13. service_metadata_proxy = true

  14. metadata_proxy_shared_secret = fanguiju

  1. ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

  • Open vSwitch

  1. systemctl enable openvswitch

  2. systemctl start openvswitch

  3. systemctl status openvswitch

  1. ovs-vsctl add-br br-provider

  2. ovs-vsctl add-port br-provider ens224

  1. [root@controller ~]# ovs-vsctl show

  2. 8ef8d299-fc4c-407a-a937-5a1058ea3355

  3. Bridge br-provider

  4. Port "ens224"

  5. Interface "ens224"

  6. Port br-provider

  7. Interface br-provider

  8. type: internal

  9. ovs_version: "2.10.1"

  • Create the Neutron database

  1. CREATE DATABASE neutron;

  2. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'fanguiju';

  3. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'fanguiju';

  • Initialize the Neutron database

  1. su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf

  2. --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

  • Start the services

  1. systemctl restart openstack-nova-api.service


  2. systemctl enable neutron-server.service \

  3. neutron-openvswitch-agent.service neutron-dhcp-agent.service \

  4. neutron-metadata-agent.service

  5. systemctl start neutron-server.service \

  6. neutron-openvswitch-agent.service neutron-dhcp-agent.service \

  7. neutron-metadata-agent.service

  8. systemctl status neutron-server.service \

  9. neutron-openvswitch-agent.service neutron-dhcp-agent.service \

  10. neutron-metadata-agent.service


  11. systemctl enable neutron-l3-agent.service

  12. systemctl start neutron-l3-agent.service

  13. systemctl status neutron-l3-agent.service

NOTE: when the OvS agent starts, it automatically creates the integration bridge br-int and the tunnel bridge br-tun. The manually created br-provider (br-ethX) is used for non-tunnel network types such as Flat and VLAN.

  1. [root@controller ~]# ovs-vsctl show

  2. 8ef8d299-fc4c-407a-a937-5a1058ea3355

  3. Manager "ptcp:6640:127.0.0.1"

  4. is_connected: true

  5. Bridge br-tun

  6. Controller "tcp:127.0.0.1:6633"

  7. is_connected: true

  8. fail_mode: secure

  9. Port br-tun

  10. Interface br-tun

  11. type: internal

  12. Port patch-int

  13. Interface patch-int

  14. type: patch

  15. options: {peer=patch-tun}

  16. Bridge br-int

  17. Controller