How does the update_available_resource periodic task update pinned_cpus in OpenStack Mitaka?
Date: 2024-05-10 20:16:28
In OpenStack Mitaka, the `update_available_resource` periodic task is run by the nova-compute service (through its resource tracker), not by nova-scheduler. The task refreshes the compute node's resource inventory, including CPU, memory, disk, and NUMA topology. `pinned_cpus` is not itself a configuration option: it is a field in the node's NUMA topology that records which host CPUs are currently pinned by running instances. What you configure on the compute node is `vcpu_pin_set`, which restricts the host CPUs instances may use. To change it, do the following on the nova-compute node:
1. Open nova.conf and set the `vcpu_pin_set` option in the `[DEFAULT]` section (in Mitaka this option lives in `[DEFAULT]`, not in a `[virt]` or `[pci]` section; no PCI passthrough settings are involved):
```
[DEFAULT]
vcpu_pin_set = 4-11,^8,15
```
The `vcpu_pin_set` option lists the host CPU IDs that instance vCPUs may run on: `4-11` is an inclusive range, `^8` excludes CPU 8, and `15` adds a single CPU, so instances are restricted to CPUs 4-7, 9-11 and 15.
2. Restart the nova-compute service for the configuration to take effect.
Once these steps are done, the next run of the `update_available_resource` task rebuilds the node's NUMA topology: the usable CPU set comes from `vcpu_pin_set`, and `pinned_cpus` is recomputed from the vCPUs of instances using a dedicated CPU policy, so it stays consistent with the configuration on the nova-compute node.
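The `vcpu_pin_set` syntax above can be illustrated with a small Python sketch. This is a simplified re-implementation of the parsing Nova performs (in `nova.virt.hardware.parse_cpu_spec`); the function name here is my own:

```python
def parse_cpu_set(spec):
    """Parse a vcpu_pin_set-style string such as "4-11,^8,15".

    Comma-separated elements: "a-b" adds an inclusive range,
    "^n" excludes CPU n, and "n" adds a single CPU.
    """
    include, exclude = set(), set()
    for element in spec.replace(" ", "").split(","):
        if element.startswith("^"):
            exclude.add(int(element[1:]))
        elif "-" in element:
            start, end = element.split("-")
            include.update(range(int(start), int(end) + 1))
        else:
            include.add(int(element))
    return include - exclude

print(sorted(parse_cpu_set("4-11,^8,15")))  # [4, 5, 6, 7, 9, 10, 11, 15]
```

With the example value from the guide, the node's instances are confined to CPUs 4-7, 9-11 and 15.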
Related question
Walk me through a complete installation of OpenStack Mitaka on Red Hat
Sure — here is a complete procedure for deploying OpenStack Mitaka on Red Hat:
1. System requirements
First, make sure your system meets the following requirements:
- Operating system: Red Hat Enterprise Linux 7.x
- CPU: dual-core or better
- Memory: at least 8 GB
- Disk: at least 100 GB
2. Install the required software
Before installing OpenStack Mitaka, you need to install some required packages and dependencies. Note that the `centos-release-openstack-mitaka` package below targets CentOS; on RHEL proper you would enable the equivalent Red Hat OpenStack/RDO repositories instead. Run the following commands to install them:
```
yum install -y centos-release-openstack-mitaka
yum update -y
yum install -y python-openstackclient openstack-selinux mariadb mariadb-server MySQL-python rabbitmq-server memcached python-memcached httpd mod_wsgi openstack-keystone openstack-glance openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables ipset
```
3. Configure the database
a. Start the MariaDB service and enable it at boot:
```
systemctl start mariadb.service
systemctl enable mariadb.service
```
b. Run mysql_secure_installation to set the root password and other security options:
```
mysql_secure_installation
```
c. Create the databases and users:
```
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
FLUSH PRIVILEGES;
EXIT;
```
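The four service databases follow an identical pattern, so the SQL above can be generated programmatically. This is just a sketch of the pattern, using the same placeholder passwords as the guide:

```python
# Service name -> placeholder database password (replace in production).
SERVICES = {
    "keystone": "KEYSTONE_DBPASS",
    "glance": "GLANCE_DBPASS",
    "nova": "NOVA_DBPASS",
    "neutron": "NEUTRON_DBPASS",
}

def grant_statements(services):
    """Emit the CREATE DATABASE / GRANT statements for each service."""
    lines = []
    for name, password in services.items():
        lines.append(f"CREATE DATABASE {name};")
        # Grant for local connections and for any remote host.
        for host in ("localhost", "%"):
            lines.append(
                f"GRANT ALL PRIVILEGES ON {name}.* TO '{name}'@'{host}' "
                f"IDENTIFIED BY '{password}';"
            )
    return lines

print("\n".join(grant_statements(SERVICES)))
```

Each service gets its own database plus grants for both `localhost` and `%`, which is exactly the shape of the block above.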
4. Configure the Identity Service (Keystone)
a. Edit /etc/keystone/keystone.conf and modify the following sections:
```
[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
...
[token]
...
provider = fernet
...
```
b. Populate the Keystone database:
```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```
c. Bootstrap the service catalog and the admin account using the admin token (this assumes `admin_token` is set in keystone.conf, and that the fernet keys have been initialized with `keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone`):
```
export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --region RegionOne identity public http://controller:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack project create --domain default --description "Service Project" service
openstack user create --domain default --password ADMIN_PASS admin
openstack role create admin
openstack role add --project admin --user admin admin
```
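The endpoint layout used above (public and internal on port 5000, admin on 35357) can be summarized in a small sketch; `controller` is the same hostname placeholder used throughout this guide:

```python
def identity_endpoints(host="controller"):
    """Return the three Keystone v3 endpoint URLs Mitaka expects:
    public and internal on port 5000, admin on port 35357."""
    ports = {"public": 5000, "internal": 5000, "admin": 35357}
    return {
        interface: f"http://{host}:{port}/v3"
        for interface, port in ports.items()
    }

print(identity_endpoints())
```

All three endpoints point at the same Keystone deployment; only the port distinguishes the admin API (35357) from the public/internal one (5000) in this era of OpenStack.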
d. Configure Apache:
```
echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
systemctl restart httpd.service
```
5. Configure the Image Service (Glance)
a. Edit /etc/glance/glance-api.conf and modify the following sections:
```
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
...
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
...
[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
...
```
b. Edit /etc/glance/glance-registry.conf and modify the following sections:
```
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
...
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
...
```
c. Populate the Glance database:
```
su -s /bin/sh -c "glance-manage db_sync" glance
```
d. Enable and start the Glance services:
```
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
```
6. Configure the Compute Service (Nova)
a. Edit /etc/nova/nova.conf and modify the following sections:
```
[database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
...
[DEFAULT]
...
auth_strategy = keystone
...
my_ip = CONTROLLER_IP_ADDRESS
...
[api]
...
auth_strategy = keystone
...
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
...
[glance]
...
api_servers = http://controller:9292
...
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
...
```
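Edits like the ones above can also be applied non-interactively. Here is a minimal Python sketch using the standard-library configparser (illustrative only — real nova.conf files can contain constructs configparser handles imperfectly, and deployment tools usually use `crudini` or similar instead):

```python
import configparser

def set_options(path, changes):
    """Apply {section: {key: value}} changes to an ini-style config file."""
    config = configparser.ConfigParser()
    config.read(path)  # missing file is treated as empty
    for section, options in changes.items():
        if section != "DEFAULT" and not config.has_section(section):
            config.add_section(section)
        for key, value in options.items():
            config[section][key] = value
    with open(path, "w") as handle:
        config.write(handle)

# Example: set two of the options from the step above in a local copy.
set_options("nova.conf", {
    "DEFAULT": {"auth_strategy": "keystone"},
    "database": {
        "connection": "mysql+pymysql://nova:NOVA_DBPASS@controller/nova",
    },
})
```

The same helper works for the Glance and Neutron files in this guide, since they share the ini format.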
b. Populate the Nova database (note that Mitaka also introduced a separate nova_api database; if you create it and configure its `[api_database]` connection, sync it with `nova-manage api_db sync` before the step below):
```
su -s /bin/sh -c "nova-manage db sync" nova
```
c. Enable and start the Nova services:
```
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
```
7. Configure the Networking Service (Neutron)
a. Edit /etc/neutron/neutron.conf and modify the following sections:
```
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
...
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
auth_strategy = keystone
...
rpc_backend = rabbit
...
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
...
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
...
```
b. Edit /etc/neutron/plugins/ml2/ml2_conf.ini and modify the following sections:
```
[ml2]
...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
...
[ml2_type_flat]
...
flat_networks = provider
...
[ml2_type_vxlan]
...
vni_ranges = 1:1000
...
[securitygroup]
...
enable_ipset = True
...
```
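The `vni_ranges = 1:1000` line reserves VXLAN network identifiers 1 through 1000 for tenant networks; the value is a comma-separated list of `min:max` pairs. The format can be sketched as:

```python
def parse_vni_ranges(value):
    """Parse an ml2_type_vxlan vni_ranges value such as
    "1:1000,2000:2999" into a list of (min, max) tuples."""
    ranges = []
    for chunk in value.split(","):
        low, high = (int(part) for part in chunk.strip().split(":"))
        ranges.append((low, high))
    return ranges

print(parse_vni_ranges("1:1000"))  # [(1, 1000)]
```

Neutron allocates a VNI from these ranges for each new VXLAN tenant network, so the range bounds how many such networks can exist.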
c. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and modify the following sections:
```
[linux_bridge]
...
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
...
[vxlan]
...
enable_vxlan = True
local_ip = CONTROLLER_IP_ADDRESS
l2_population = True
...
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
d. Edit /etc/neutron/l3_agent.ini and modify the following section:
```
[DEFAULT]
...
interface_driver = linuxbridge
...
external_network_bridge =
...
```
e. Edit /etc/neutron/dhcp_agent.ini and modify the following section:
```
[DEFAULT]
...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
...
```
f. Create the plugin symlink:
```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
g. Populate the Neutron database:
```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```
h. Enable and start the Neutron services:
```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
```
8. Verify the installation
Use the OpenStack command-line tools to verify, for example:
```
source /root/admin-openrc.sh
openstack user list
```
If the user list is returned, OpenStack Mitaka has been installed successfully!
I hope this complete installation walkthrough helps!
OpenStack release naming rules
OpenStack release names follow alphabetical order; each name is chosen by community vote, usually from a geographic feature near the corresponding design summit location (not animal names — Mitaka, for example, is a city near Tokyo). The releases and their approximate release dates are:
- Austin: 2010.1
- Bexar: 2011.1
- Cactus: 2011.2
- Diablo: 2011.3
- Essex: 2012.1
- Folsom: 2012.2
- Grizzly: 2013.1
- Havana: 2013.2
- Icehouse: 2014.1
- Juno: 2014.2
- Kilo: 2015.1
- Liberty: 2015.2
- Mitaka: 2016.1
- Newton: 2016.2
- Ocata: 2017.1
- Pike: 2017.2
- Queens: 2018.1
- Rocky: 2018.2
- Stein: 2019.1
- Train: 2019.2
- Ussuri: 2020.1
- Victoria: 2020.2
- Wallaby: 2021.1
Note: the names have followed alphabetical order from the very first release (Austin), advancing one letter per cycle.