Test Day:2009-09-17 Virtualization Hugepages
| Date || Time || Where |
| Thursday Sep 17, 2009 || All day || #fedora-test-day (webchat) |
 What to test?
If you come to this page after the test day is over, your testing is still valuable; you can use the information on this page to test huge page support and provide feedback.
 Who's available
John Cooper is your host for today.
The following people have also agreed to be available for testing, workarounds, bug fixes, and general discussion:
- Chris Wright
- add your name here
 What's needed to test
- A fully updated Fedora 12 Rawhide machine. See instructions on the main test day page.
- At least one guest image installed before the test day (suggested reading - Virtualization_Quick_Start)
 Test Cases
This is the procedure I used to create the initial patch which allows libvirt to recognize and generate a huge page backed guest XML definition. NB: while fairly low-level and useful for unit testing, this is not a mechanism directly visible to a typical user.
The goal here was to allow libvirt to request guest memory backing by huge pages, which on x86 are 2 MB in size versus the standard 4 KB page. Backing a guest this way reduces TLB pressure and offers a significant performance benefit in certain application scenarios.
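As a quick preliminary check, the host CPU should advertise large page support. A minimal sketch (on x86 the pse flag signals large page support, pdpe1gb indicates 1 GB pages):

# egrep -o 'pse|pdpe1gb' /proc/cpuinfo | sort -u
pse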
 Prepare the Host
Populate the huge page pool of a size suitable to support the guest image(s) which will be created:
# grep Huge /proc/meminfo
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
# echo 500 > /proc/sys/vm/nr_hugepages
# grep Huge /proc/meminfo
HugePages_Total:     500
HugePages_Free:      500
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Note that the above may take a considerable amount of time on a machine with fragmented physical memory, so it is best done as soon after boot as possible. On machines with limited memory, populating a smaller number of pages may be necessary.
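If the pool should survive reboots, the setting can be made persistent. A sketch, assuming 500 pages remains the desired pool size:

# echo 'vm.nr_hugepages = 500' >> /etc/sysctl.conf
# sysctl -p | grep nr_hugepages
vm.nr_hugepages = 500

Alternatively, passing hugepages=500 on the kernel command line reserves the pool early in boot, before physical memory has had a chance to fragment.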
Having created the free huge page pool, mount hugetlbfs on the host. If the mount point doesn't exist, create it first:
# mkdir /dev/hugepages
# mount -t hugetlbfs hugetlbfs /dev/hugepages
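To make the mount persistent across reboots, an /etc/fstab entry along the following lines may be used (a sketch):

hugetlbfs  /dev/hugepages  hugetlbfs  defaults  0 0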
Note the mount above must be in place before launching libvirtd, as the daemon currently checks for a hugetlbfs mount only upon startup. So if the daemon is already running, restart it:
# service libvirtd restart
Check /var/log/messages for any errors.
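One way to scan for relevant messages (illustrative only; the exact wording of any errors will vary):

# grep -i -e libvirt -e hugetlbfs /var/log/messages | tail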
 Launch the Guest
To launch the guest conventionally from virsh, first define it:
# virsh define test-guest.xml
Domain foo defined from test-guest.xml
In the above example the guest is tagged with the name "foo" in the associated XML definition:
# virsh list --all
 Id Name                 State
----------------------------------
  - foo                  shut off
The guest may be launched via:
# virsh start foo
Domain foo started
And a VNC connection to the guest console can be made via:
# virt-viewer foo
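If virt-viewer is not installed, the guest's VNC display can be located with virsh and connected to with any VNC client. A sketch (the display number will vary):

# virsh vncdisplay foo
:0
# vncviewer localhost:0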
If all goes well the guest should launch successfully with its image backed by huge pages. (Note it won't unless the guest XML definition specifies huge page usage correctly, as shown below, but proceeding here is instructive in any event.)
Successful launch of a huge page backed guest may be evidenced by observing the huge page free pool decreasing:
# grep Huge /proc/meminfo
HugePages_Total:     500
HugePages_Free:      481
HugePages_Rsvd:      247
HugePages_Surp:        0
Hugepagesize:       2048 kB
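As a rough sanity check for the 512 MB guest in this example: 524288 KB / 2048 KB per page = 256 huge pages of backing, which approximately accounts for the (Total - Free) + Rsvd = 19 + 247 = 266 pages committed above, the small surplus presumably covering qemu overhead. To observe the pool change live while the guest starts:

# watch -n1 'grep Huge /proc/meminfo'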
In the likely case that HugePages_Free == HugePages_Total, take a look at the XML definition for the guest. For example:
# virsh dumpxml foo
<domain type='qemu'>
  <name>foo</name>
  <uuid>4c58c2a6-1b52-688e-bcfb-e57159f50961</uuid>
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  :
The above does not specify a memory backing mechanism and therefore defaults to backing by 4KB pages. To specify huge page backing, a <memoryBacking> clause is needed:
# virsh dumpxml foo
<domain type='qemu'>
  <name>foo</name>
  <uuid>4c58c2a6-1b52-688e-bcfb-e57159f50961</uuid>
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  :
To add this to the XML definition, edit the domain and insert the <memoryBacking> clause as above using:
# virsh edit foo
Domain foo XML configuration edited.
This should result in a huge page backed guest launch, which may be verified as above.
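Another way to confirm the backing took effect is to inspect the generated qemu command line, which should include a -mem-path argument pointing under the hugetlbfs mount. A sketch (the exact path shown is an assumption; libvirt derives it from the mount point):

# ps axww | grep -o '\-mem-path [^ ]*'
-mem-path /dev/hugepages/libvirt/qemu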
 Possible Caveat
There was a modification to the default disposition of SELinux genfscon fs types, affecting (among others) hugetlbfs, in the kernel 2.6.29-2.6.30 timeframe. This manifests as failure of chcon(1) on hugetlbfs files. Correction requires an SELinux policy change for hugetlbfs and a corresponding kernel fs change, neither of which has been conclusively tested on prospective FC12 as of this writing. Thus there is a possibility SELinux may need to be disabled to allow successful launch of a huge page backed guest.
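If a huge page backed guest fails to launch with permission errors, checking for AVC denials and temporarily lifting enforcement can help isolate the cause. A sketch (setenforce 0 switches SELinux to permissive mode until the next boot):

# getenforce
Enforcing
# ausearch -m avc -ts recent
# setenforce 0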
 Issues that were identified
| Tester || Description || Bug references || Status |
| caiqian || Huge Page Backed Memory Failed for Kqemu Guests || #527670 || NEW |