# Huge Pages for OpenStack
## Configure Inventory
The following minimum image versions are required in `.gitlab-ci.yml` to build your inventory with huge page support.
| Image | Version Tag |
|---|---|
| inventory-validator | 0.2.0 |
| box-generator | 1.2.0 |
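
A minimal sketch of how these versions might be pinned in `.gitlab-ci.yml` — the registry path and job names here are assumptions, not the project's actual values:

```yaml
# Sketch only: registry path and job names are assumptions.
validate-inventory:
  image: my-registry.example.com/inventory-validator:0.2.0

generate-boxes:
  image: my-registry.example.com/box-generator:1.2.0
```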
To enable huge pages for a specific compute node, set a combination of size and count large enough to fit the flavors you want to schedule, while honoring the host's NUMA architecture. The following example allocates 4 GB of memory with a page size of 2 MB:
```yaml
- name: computenode-01
  interfaces:
    # ...
  hugepages:
    - size: 2M    # size of the huge pages, e.g. 2M for 2 MB pages
      count: 2048 # number of huge pages
```
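
If the hosts and workload are better served by 1 GB pages, the same 4 GB could be reserved with a larger size and smaller count. This is a sketch only, assuming the inventory accepts a `1G` size value:

```yaml
hugepages:
  - size: 1G  # assumption: 1 GB pages are supported by the host and tooling
    count: 4  # 4 * 1 GB = 4 GB
```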
The configured huge page settings are passed as kernel arguments and are allocated at boot time of the host. A reboot of the affected compute host is therefore required to enable huge pages.
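
For the example above, the generated kernel arguments are typically equivalent to the standard huge page boot parameters (the exact arguments emitted by the tooling may differ):

```
hugepagesz=2M hugepages=2048
```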
## Configure OpenStack
- Configure `nova-control` by adding the `NUMATopologyFilter` to the environment variable `NOVA_ENABLED_FILTERS` in your `nova_control.yaml` deployment (see the sketch after this list).
- Redeploy `nova-control` and reset `nova-libvirt`. This makes libvirt aware of huge pages.
- Create OpenStack flavors according to your demands for the use of huge pages. It is important to set the property `hw:mem_page_size=XMB` to match the huge page sizes on the hosts used for scheduling. For example:

  ```console
  $ openstack flavor create --vcpus 1 --ram 2048 --disk 10 --property hw:mem_page_size=2MB my-hugepages-flavor
  ```

- Launch a virtual machine with the huge pages flavor:

  ```console
  $ openstack server create vmname --flavor my-hugepages-flavor --image cirros-0.5.1 --network my-network --wait
  ```
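
A minimal sketch of the `NOVA_ENABLED_FILTERS` change in `nova_control.yaml` — the surrounding structure is an assumption; only the variable name and the added filter come from the steps above:

```yaml
# Sketch only: the file layout depends on your deployment. Keep all existing
# filters and append NUMATopologyFilter to the list.
environment:
  NOVA_ENABLED_FILTERS: "...,NUMATopologyFilter"
```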
## Verify and Troubleshoot
You can verify the huge page settings on the host machines:
```console
$ cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:    2048
HugePages_Free:     2048
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:         4194304 kB
```
```console
$ cat /sys/devices/system/node/node*/meminfo | fgrep Huge
Node 0 AnonHugePages:        0 kB
Node 0 ShmemHugePages:       0 kB
Node 0 FileHugePages:        0 kB
Node 0 HugePages_Total:   1024
Node 0 HugePages_Free:     974
Node 0 HugePages_Surp:       0
Node 1 AnonHugePages:        0 kB
Node 1 ShmemHugePages:       0 kB
Node 1 FileHugePages:        0 kB
Node 1 HugePages_Total:   1024
Node 1 HugePages_Free:    1024
Node 1 HugePages_Surp:       0
```
To verify operation on the server, check for a huge-page-backed QEMU process on the host where the virtual machine is scheduled:
$ grep "KernelPageSize:\s*2048" /proc/[[:digit:]]*/smaps
/proc/207820/smaps:KernelPageSize: 2048 kB
$ ps aux|grep 207820
64055 207820 5.5 0.0 4901952 42768 ? Sl 09:41 13:47 /usr/bin/qemu-system-x86_64 -name guest=instn
root 207826 0.0 0.0 0 0 ? S 09:41 0:00 [vhost-207820]
```console
$ virsh list
 Id   Name                State
----------------------------------
 1    instance-00000001   running
```
```console
$ virsh dumpxml instance-00000001
...
  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>
...
```
The `<memoryBacking>` element shows that the VM is backed by 2 MB (2048 KiB) huge pages on nodeset `0`.
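
As a further cross-check, libvirt itself can report free pages per page size and NUMA node. For the 2 GB flavor above, the free 2048 KiB count on the node backing the instance should drop by 2048 MB / 2 MB = 1024 pages once it is running; counts on your hosts will differ:

```console
# Free pages per page size and NUMA node, as reported by libvirt
$ virsh freepages --all
```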