=== Test Cases ===
 
This is the procedure I used to create the initial patch which allows libvirt to recognize/generate a huge page backed guest XML definition.  NB: while fairly low-level and useful for unit testing, this is not a mechanism directly visible to a typical user.
  
The goal here was to allow libvirt to request guest backing by huge pages, which are 2MB in size versus the standard 4KB page.  Doing so offers a significant performance benefit in certain application scenarios, since the same amount of guest memory takes far fewer pages (and TLB entries): 512 huge pages cover 1GB, versus 262,144 4KB pages.
  
==== Prepare the Host ====
 
  
Populate the huge page pool of a size suitable to support the guest image(s) which will be created; for example, a guest with 1GB of RAM needs at least 512 of the 2MB pages:
  
 
     # grep Huge /proc/meminfo
     HugePages_Total:      0
     HugePages_Free:       0
     HugePages_Rsvd:       0
     HugePages_Surp:       0
     Hugepagesize:      2048 kB
 
     # echo 500 > /proc/sys/vm/nr_hugepages
 
     # grep Huge /proc/meminfo
     HugePages_Total:    500
     HugePages_Free:     500
     HugePages_Rsvd:       0
     HugePages_Surp:       0
     Hugepagesize:      2048 kB
  
Note the above may take a considerable amount of time on a machine with fragmented physical memory, so it is best to do so as soon after boot as possible.  Also, on machines with limited memory, populating a smaller number of pages may be necessary.
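
If fragmentation repeatedly prevents a full allocation, the pool can instead be reserved on the kernel command line at boot, or the runtime setting made persistent via sysctl.  A sketch only: the <code>hugepages=</code> boot parameter and the <code>vm.nr_hugepages</code> sysctl key are standard, while the value 500 simply mirrors the example above:
 
     hugepages=500            (appended to the kernel line in grub.conf)
 
     # echo "vm.nr_hugepages = 500" >> /etc/sysctl.conf
     # sysctl -p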
  
Having created the free huge page pool, mount hugetlbfs on the host.  If the mount point doesn't exist, create it first:
  
 
     # mkdir /dev/hugepages
     # mount -t hugetlbfs hugetlbfs /dev/hugepages
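
To have this mount established automatically at boot, an entry can be added to <code>/etc/fstab</code>.  A minimal sketch (standard fstab syntax for a hugetlbfs mount):
 
     hugetlbfs  /dev/hugepages  hugetlbfs  defaults  0 0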
  
Note the mount above must be in place before launching libvirtd, as the daemon currently checks for a hugetlbfs mount only upon startup.  So if the daemon is currently running, restart it:
  
     # service libvirtd restart
  
Look in <code>/var/log/messages</code> for any errors.
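
For example, recent daemon messages can be pulled out with (illustrative; <code>/var/log/messages</code> is the default syslog target here):
 
     # grep libvirtd /var/log/messages | tail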
  
==== Launch the Guest ====
 
To launch the guest conventionally from virsh:
  
     # virsh define test-guest.xml
     Domain foo defined from test-guest.xml
 
  
In the above example the guest is tagged with the name "foo" in the associated XML definition:
 
     # virsh list --all
     Id Name                State
     ----------------------------------
      - foo                 shut off
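
For reference, the <code>test-guest.xml</code> used above might look roughly like the following minimal definition.  This is a sketch only; the memory size, disk path, and device names are illustrative placeholders (1048576 KB = 1GB, i.e. 512 huge pages):
 
     <domain type='qemu'>
       <name>foo</name>
       <memory>1048576</memory>
       <vcpu>1</vcpu>
       <os>
         <type arch='x86_64'>hvm</type>
         <boot dev='hd'/>
       </os>
       <devices>
         <disk type='file' device='disk'>
           <source file='/var/lib/libvirt/images/foo.img'/>
           <target dev='hda'/>
         </disk>
         <graphics type='vnc' port='-1'/>
       </devices>
     </domain>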
 
The guest may be launched via:
 
     # virsh start foo
 
     Domain foo started
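
The domain state can be double-checked at any point with <code>virsh dominfo</code> (output abbreviated, illustrative):
 
     # virsh dominfo foo
     Name:           foo
     State:          running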
  
And a VNC connection to the guest console can be made via:
 
     # virt-viewer foo
 
or, with a plain VNC client, by pointing it at the guest's display (the first guest typically lands on port 5900):
 
     # vncviewer localhost:5900
  
If all goes well the guest should launch successfully with its image backed by huge pages.  [Note it won't unless the guest XML definition specifies huge page usage correctly as below.  But proceeding here is instructive in any event.]
  
Successful launch of a huge page backed guest may be evidenced by observing the huge page free pool decreasing:  
  
 
     # grep Huge /proc/meminfo
     HugePages_Total:    500
     HugePages_Free:     <fewer than 500>
       :
     Hugepagesize:      2048 kB
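
Another cross-check is the qemu command line itself: a huge page backed guest is launched with a memory path under the hugetlbfs mount.  The exact path below is only illustrative of libvirt's layout:
 
     # ps -ef | grep qemu
     ... qemu-kvm -name foo ... -mem-path /dev/hugepages/libvirt/qemu ...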
  
In the likely case HugePages_Free == HugePages_Total, take a look at the XML definition for the guest.  For example:
  
     # virsh dumpxml foo
 
     <domain type='qemu'>
 
       <name>foo</name>
 
         :
  
The above does not specify a memory backing mechanism and therefore defaults to backing by 4KB pages.  To specify huge page backing, a &lt;memoryBacking&gt; clause is needed:
  
     # virsh dumpxml foo
 
     <domain type='qemu'>
 
       <name>foo</name>
       <memoryBacking>
         <hugepages/>
       </memoryBacking>
 
         :
  
To add this to the XML definition, edit the corresponding file to add the &lt;memoryBacking&gt; clause as above using <code>virsh edit</code>:
 
  
     # virsh edit foo
     Domain foo XML configuration edited.
 
Then start the guest again:
 
     # virsh start foo
     Domain foo started
 
This should result in a huge page backed guest launch which may be verified as above.
 
  
==== Possible Caveat ====
  
There was a modification to the default disposition of SELinux genfscon fs types affecting (among others) hugetlbfs in the kernel 2.6.29-2.6.30 timeframe.  This manifests as failure of chcon(1) on hugetlbfs files.  Correction requires an SELinux policy change for hugetlbfs and a corresponding kernel fs change, neither of which has been conclusively tested as of this writing on prospective FC12.  Thus there is a possibility SELinux may need to be disabled to allow successful launch of a huge page backed guest.
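
To check whether SELinux is the culprit, enforcement can be toggled off temporarily and the launch retried (a diagnostic step only; <code>getenforce</code>/<code>setenforce</code> are the standard SELinux tools):
 
     # getenforce
     Enforcing
     # setenforce 0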
  
 
=== Issues that were identified ===
