Kickstart Infrastructure SOP
Kickstart scripts provide our install infrastructure. We only have a handful of different kickstart scripts, for both xen guests and the xen hosts themselves.
Owner: Fedora Infrastructure Team
Contact: #fedora-admin, sysadmin-main
Servers: puppet01 (stores kickstarts and install media)
Purpose: Provides our install infrastructure
Our kickstart infrastructure lives on the proxy servers and puppet1. All install media and kickstart scripts are located on puppet1. Because the RHEL binaries are not public, access to these bits is blocked by IP. You can add needed IPs to (from puppet01):
Physical Machine (dom0)
Xen Dom0 installs are far riskier than the DomU installs below. This is because if an install goes bad, your options to rebuild it are somewhat limited.
If PXE booting, just follow the prompts after doing the PXE boot (most hosts will PXE boot via the console by hitting F12).
The method below only works on an already-booted box; many boxes at our colocations may have to be rebuilt by the people at those locations first. Also make sure the IP you are about to install from is allowed through to our IP-restricted infrastructure.fedoraproject.org, as noted above (in the Introduction).
Download the vmlinuz and initrd images, then add an install entry with grubby:

    wget http://infrastructure.fedoraproject.org/rhel/RHEL5-x86_64/images/pxeboot/vmlinuz \
        -O /boot/vmlinuz-install
    wget http://infrastructure.fedoraproject.org/rhel/RHEL5-x86_64/images/pxeboot/initrd.img \
        -O /boot/initrd-install.img
    grubby --add-kernel=/boot/vmlinuz-install \
        --args="ks=http://infrastructure.fedoraproject.org/rhel/ks/xen-host-nohd \
        method=http://infrastructure.fedoraproject.org/rhel/RHEL5-x86_64/ \
        ksdevice=link ip=$IP gateway=$GATEWAY netmask=$NETMASK dns=$DNS" \
        --title="install el5" --initrd=/boot/initrd-install.img
For a RHEL 6 install:

    wget http://infrastructure.fedoraproject.org/rhel/RHEL6-x86_64/images/pxeboot/vmlinuz \
        -O /boot/vmlinuz-install
    wget http://infrastructure.fedoraproject.org/rhel/RHEL6-x86_64/images/pxeboot/initrd.img \
        -O /boot/initrd-install.img
    grubby --add-kernel=/boot/vmlinuz-install \
        --args="ks=http://infrastructure.fedoraproject.org/rhel/ks/xen-host-nohd-6 \
        repo=http://infrastructure.fedoraproject.org/rhel/RHEL6-x86_64/ \
        ksdevice=link ip=$IP gateway=$GATEWAY netmask=$NETMASK dns=$DNS" \
        --title="install el6" --initrd=/boot/initrd-install.img
Double- and triple-check your configuration settings (cat /boot/grub/menu.lst), especially your IP information. In places like ServerBeach, not all hosts have the same netmask or gateway. Once everything is ready, run:

    echo "savedefault --default=0 --once" | grub --batch
    shutdown -r now
Once the box logs you out, start pinging its IP address; it will disappear and come back. Once you can ping it again, try to open a VNC session. It can take a couple of minutes after the box comes back before it actually allows VNC sessions. The VNC password is in the kickstart script on puppet1:

    grep vnc /mnt/fedora/app/fi-repo/rhel/ks/xen-host
If using the standard kickstart script, you can watch as the install completes itself; there should be no need to do anything. If using the xen-host-nohd script, you will need to configure the drives.
If all goes well, the VNC session will close and the box will reboot and come back up as the new host. The default root password is also listed in the kickstart script, on puppet1:

    grep rootpw /mnt/fedora/app/fi-repo/rhel/ks/xen-host

Most physical machines are to be used as xen hosts. If that is the case with this host, just install puppet, update the box, and follow the normal puppet instructions.
Virtual Machine (domU)
Before building a machine, make sure to know the standard specs for the type of machine you're building in advance (disk space, amount of memory, i386 vs. x86_64).
Almost all of our virtual machines run off of LVM. The first step is to create the logical volume for the new guest:

    lvcreate -n $NEWHOST -L 15G VolGroup00
Once the size of the new machine is set, run virt-install. As before, ensure that the IP listed below has access to the infrastructure.fedoraproject.org site. This can be tricky: normally the IP, route, and netmask can be templated from the dom0, but this is not the case at ServerBeach (see below for clarification). Make sure to update the amount of memory (-r) and the architecture of the repo you point at for the machine you're building.
There are differences between kvm and xen, specifically with how consoles are handled.
KVM inside PHX:
    virt-install -n $NEWHOST -r 1024 -f /dev/VolGroup/$NEWHOST \
        -l http://puppet01.phx2.fedoraproject.org/repo/rhel/RHEL6-x86_64/ \
        -x "ks=http://puppet01.phx2.fedoraproject.org/repo/rhel/ks/kvm-rhel-6 \
        ip=$IP netmask=$NM gateway=$GW dns=$NS1,$NS2 console=tty0 console=ttyS0" \
        --vnc --noautoconsole
KVM outside of PHX:
    virt-install -n $NEWHOST -r 1024 -f /dev/VolGroup/$NEWHOST \
        -l http://infrastructure.fedoraproject.org/rhel/RHEL6-x86_64/ \
        -x "ks=http://infrastructure.fedoraproject.org/rhel/ks/kvm-rhel-6 \
        ip=$IP netmask=$NM gateway=$GW dns=$NS1,$NS2 console=tty0 console=ttyS0" \
        --vnc --noautoconsole
XEN inside PHX:
    virt-install -n $NEWHOST -r 1024 -f /dev/VolGroup/$NEWHOST \
        -l http://puppet01.phx2.fedoraproject.org/repo/rhel/RHEL6-x86_64/ \
        -x "ks=http://puppet01.phx2.fedoraproject.org/repo/rhel/ks/xen-rhel-6 \
        ip=$IP netmask=$NM gateway=$GW dns=$NS1,$NS2" \
        --vnc --noautoconsole
XEN outside of PHX:
    virt-install -n $NEWHOST -r 1024 -f /dev/VolGroup/$NEWHOST \
        -l http://infrastructure.fedoraproject.org/rhel/RHEL6-x86_64/ \
        -x "ks=http://infrastructure.fedoraproject.org/rhel/ks/xen-rhel-6 \
        ip=$IP netmask=$NM gateway=$GW dns=$NS1,$NS2" \
        --vnc --noautoconsole
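Because the commands above take half a dozen site-specific variables, it can help to assemble the invocation into a string and review it before running it by hand. This is only a sketch: the hostname and addresses below are hypothetical examples, and the URLs are the outside-PHX KVM ones from above.

```shell
#!/bin/sh
# Hypothetical example values; substitute the real ones for your guest.
NEWHOST=app01
IP=192.0.2.10 NM=255.255.255.0 GW=192.0.2.1 NS1=192.0.2.2 NS2=192.0.2.3

# Refuse to continue if any required variable is empty.
for v in NEWHOST IP NM GW NS1 NS2; do
  eval val=\$$v
  if [ -z "$val" ]; then echo "missing \$$v" >&2; exit 1; fi
done

# Build the command piecewise so each part is easy to eyeball.
CMD="virt-install -n $NEWHOST -r 1024 -f /dev/VolGroup/$NEWHOST"
CMD="$CMD -l http://infrastructure.fedoraproject.org/rhel/RHEL6-x86_64/"
CMD="$CMD -x \"ks=http://infrastructure.fedoraproject.org/rhel/ks/kvm-rhel-6 ip=$IP netmask=$NM gateway=$GW dns=$NS1,$NS2 console=tty0 console=ttyS0\""
CMD="$CMD --vnc --noautoconsole"

# Print for review instead of executing.
echo "$CMD"
```

Once the printed command looks right, paste it into the dom0 shell.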
These installs should not require any user intervention. If you would like to monitor progress, you will need to connect using VNC. If you cannot connect directly to the system's IP, you can normally bounce through bastion:
vncviewer -via bastion.fedoraproject.org hostname_or_ip:1
When prompted for the vnc password, type in the vnc password given in the kickstarts specified above.
The installation process is pretty simple; the post configuration may not be, depending on whether the box you've installed has a reverse DNS lookup. Here's the checklist:
1. Ensure the hostname is set properly in
2. Ensure the system is up to date and can contact its yum mirror:

       yum -y update

3. For an external box, make sure /etc/resolv.conf contains:

       search vpn.fedoraproject.org fedoraproject.org

   while internal hosts (in PHX) should contain:

       search phx2.fedoraproject.org

   (this should be scripted by the kickstart file! -matt)
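The resolv.conf part of the checklist is easy to script. The sketch below defines a small checker and, to stay safe, demonstrates it against a throwaway file rather than the live /etc/resolv.conf; point it at the real file on the host you just built.

```shell
#!/bin/sh
# check_search <resolv.conf path> <expected search line>
# Reports whether the file contains the exact search line.
check_search() {
  grep -qx "$2" "$1" && echo "ok: $2" || echo "MISSING: $2"
}

# Demo against a temporary file standing in for /etc/resolv.conf on a PHX host.
tmp=$(mktemp)
printf 'search phx2.fedoraproject.org\nnameserver 192.0.2.2\n' > "$tmp"
RESULT=$(check_search "$tmp" "search phx2.fedoraproject.org")
echo "$RESULT"
rm -f "$tmp"
```

On a real host you would run `check_search /etc/resolv.conf "search phx2.fedoraproject.org"` (or the vpn/external variant).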
PPC Machines
PPC boxes are just used for builders/composers and are all in PHX.
These instructions only apply in PHX, and they presume that DHCP is already set up for the host. Also make sure the IP you are about to install from is allowed through to our IP-restricted infrastructure.fedoraproject.org, as noted above (in the Introduction). Then you'll need to grab the installer kernel and initrd:

    wget http://puppet1.fedora.phx.redhat.com/repo/rhel/RHEL5-ppc/ppc/ppc64/vmlinuz \
        -O /boot/vmlinuz-install
    wget http://puppet1.fedora.phx.redhat.com/repo/rhel/RHEL5-ppc/ppc/ppc64/ramdisk.image.gz \
        -O /boot/initrd-install.img
    grubby --add-kernel=/boot/vmlinuz-install --initrd /boot/initrd-install.img \
        --args="ks=http://10.8.34.125/repo/rhel/ks/ppc-builder-host ip=dhcp" --title "rekick"
Note that these instructions rely on DHCP: if you put the full IP information in yaboot.conf, yaboot gets very unhappy and is unable to boot.
Now reboot. Unfortunately, yaboot < 1.3.14 doesn't support a boot-once option, so you'll have to either watch the console carefully and select the 'rekick' option when the yaboot prompt comes up, or change the default if you're brave. This can take a couple of minutes, as the PPC boxes spend a while in OpenFirmware.
After the install, you'll want to change the network configuration to static instead of DHCP: just edit /etc/sysconfig/network and /etc/sysconfig/network-scripts/ifcfg-eth0.
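As a reference for that edit, a static ifcfg-eth0 usually looks like the fragment below. The addresses are hypothetical placeholders; use the values assigned to the host.

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth0 after switching
# from DHCP to a static address; substitute the host's real values.
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
```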
Note that these instructions will only work in PHX and depend on the fact that dhcp is set up for the host.
You can also boot the machine from the network and start an install that way. To do this, you need to ensure that the machine has an entry in /etc/dhcpd.conf on lockbox like those for ppc1-4. Then, watch for the machine to boot and enter the SMS menu by hitting 1 when prompted. From the SMS menu, you can choose boot options (5) and then navigate to network boot. This will load yaboot over the network. Due to spanning tree, this will take a while as it has to wait 60 seconds before even trying to get the address and then each file.
Once you have a yaboot prompt, you can either choose the default, which kicks off a builder install, or select 'rescue' to boot into rescue mode on the machine.
Make sure the correct hostname is set (edit /etc/hosts and /etc/sysconfig/network if necessary). Edit /etc/resolv.conf to have the correct search path. This should contain phx.fedora.redhat.com for all PHX machines, vpn.fedoraproject.org for all VPN machines, and fedoraproject.org for all machines (in that order).
Once the box is booted (virtual or not), follow the steps in the Puppet SOP.
After puppet has done its magic, set up the VPN if needed (see the OpenVPN SOP).
If the machine has a puppet certificate, then it is set up for func automatically.
Run fasClient -i to get all the home directories populated.
Get the SSH public key from /etc/ssh/ssh_host_rsa_key.pub and add it to the master known_hosts file in puppet (modules/ssh/files/ssh_known_hosts).
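Turning the host key into a known_hosts line is mostly string surgery: drop the trailing comment from the .pub file and prefix the hostname. A sketch, using a placeholder key so it is safe to run anywhere; on the real host, read /etc/ssh/ssh_host_rsa_key.pub instead.

```shell
#!/bin/sh
# HOST and PUBKEY are placeholder examples; on the target machine use
#   PUBKEY=$(cat /etc/ssh/ssh_host_rsa_key.pub)
HOST=app01.fedoraproject.org
PUBKEY="ssh-rsa AAAAB3NzaC1yc2EXAMPLE root@app01"

# known_hosts format: "<hostname> <keytype> <base64 key>"; awk keeps the
# first two fields and drops the trailing comment.
ENTRY="$HOST $(echo "$PUBKEY" | awk '{print $1, $2}')"
echo "$ENTRY"
```

Append the printed line to modules/ssh/files/ssh_known_hosts in puppet.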
ServerBeach
ServerBeach has some interesting network infrastructure as it relates to our ability to do virtualization. Basically, the dom0 is given an IP on one network as normal, but the virtual hosts (when we request IPs) are given addresses on a different network, one without a gateway. The best bet is to request at least one extra IP for the host itself to act as a gateway. This is a terrible waste of an IP, but until a better method is found this will work. Once you have your IP addresses, all that is required is to create an aliased interface on the host with that IP:

    /etc/sysconfig/network-scripts/ifcfg-eth0:1

A reboot later, and you can treat this xen host as a normal xen host (with bridged networking and such).
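For reference, such an alias file is only a few lines. The addresses below are hypothetical; the IPADDR should be the extra IP requested on the guests' network.

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth0:1 giving the dom0
# an address on the guests' network so it can act as their gateway.
DEVICE=eth0:1
ONBOOT=yes
IPADDR=198.51.100.1
NETMASK=255.255.255.0
```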