Docs/Drafts/CommandLineSurvivalGuide/Scenarios

From FedoraProject

Revision as of 21:41, 6 January 2010 by Vedranm (Talk | contribs)

This page is a draft only
It is still under construction and content may change. Do not rely on the information on this page.

This chapter deals with more advanced topics. Some sections are self-contained, but most are not. In order to better understand the concepts mentioned in this chapter, it is recommended that you read 'Concepts' first.


Executables and Processes

Written by Marina Miler.

ELF - Executable and Linking format

A short introduction

The Executable and Linking Format (ELF) specification is published by UNIX System Laboratories in the System V Application Binary Interface (ABI). It is a standard file format for executables, shared libraries, object code and core dumps. It was chosen as a portable object file format for 32-bit Intel Architecture environments across a large number of operating systems. In 1999 it became the standard binary file format for Unix-like operating systems. ELF is meant to streamline software development by supplying developers with a set of binary interface definitions that extend across multiple operating environments. This reduces the number of different interface implementations, and with it the need for recoding and recompiling code.

Characteristics and division

ELF is very flexible, which means that it does not depend on any particular processor or architecture, and it is extensible as well. That is why it has been adopted by a variety of operating systems on many different platforms.

The main types of object files are executable files, relocatable files and shared object files.

Executable files contain a program that is suitable for execution.

Relocatable files contain data and code suitable for creating an executable or shared object file by linking with other object files.

Shared object files contain code and data suitable for linking in two possible ways. One is to process the shared object file together with other shared object files and relocatable files to create another object file; this is the role of the link editor. The other is for the dynamic linker to create a process image by combining the shared object file with an executable file and other shared objects.

The file layout consists of an ELF header and file data. The file data may contain a program header table, a section header table, and the data referred to by those tables. The program header table is necessary to execute a program: it tells the system how to build a process image, which means how to execute the program. Relocatable files are an exception; they don't need a program header table. The section header table includes information about the file's sections, such as each section's name and size. Not every object file has a section header table; only files used during linking must have one.

The spread of ELF

Many older executable formats (for example a.out and COFF) have been replaced by ELF on many systems, such as Linux, Syllable, HP-UX, Solaris, IRIX, DragonFly BSD, and other Unix-like operating systems. Some non-Unix operating systems use ELF too: many mobile phones (some Motorola and Siemens models, for example) and some consoles, such as the PlayStation 2, PlayStation 3, Wii, GP2X, and so on.

Processes

A process is a program in execution. Processes are identified by their process ID, also called the PID: a unique number that identifies every single process. Every process has its own process descriptor that contains information about it, such as its PID, parent process, children, state, processor registers, siblings, address space information and a list of open files. Every user has rights only over the processes he launched, which is a good security measure. Linux is largely resistant to viruses: a virus has to infect executable files, but an ordinary user has no write access to vulnerable system files. Anti-virus programs do exist for Linux, too; usually they are used because Linux systems often act as file servers for Windows machines. In Linux, processes are organized in a special way, into "trees". The PID is displayed next to every process in the tree, and every process also records the PID of its parent process, called the parent process ID (PPID).
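A quick way to see these identifiers is from the shell itself (a sketch, assuming a Linux system with /proc mounted):

```shell
# Every process knows its own PID and the PID of its parent:
echo "my PID:    $$"
echo "parent ID: $PPID"

# The process descriptor is exposed under /proc/<PID>;
# status holds the name, PID and PPID among other fields:
grep -E '^(Name|Pid|PPid):' /proc/$$/status
```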

PS (process status)

By typing ps into the terminal, we get a list of the processes that we (the current user) started on the terminal we are using at that moment; it shows a list of active processes. The PID is shown in the first column; the second column shows the terminal the process is attached to (TTY); the third displays the CPU time used by the process; and the last column shows the running command (CMD).

$ ps

  PID   TTY      TIME       CMD
16613   tty000   00:00.27   bash
 2598   tty001   00:00.35   ps
 1435   tty002   00:00.16   bash

The syntax of ps is: ps [options]

There are a lot of ways to customize this command. ps also accepts BSD-style options, which are written without a dash. Here are some examples:

$ ps aux

a - lists information about the processes of all users that are attached to a terminal
u - shows the user name or user ID of the user that started each process
x - also shows processes without a controlling terminal

$ ps -ef - shows full information about all currently running processes
$ ps g - selects group leaders, too


Signals

For controlling processes, we use up to 64 different signals. A signal can be referred to by number (e.g. 9 means KILL, 19 means STOP) or by name (SIGx, where x is the signal's title, for example SIGKILL or SIGSTOP). The signals are divided into two groups: standard signals and real-time signals (used for application-defined purposes). The standard signals occupy the lower numbers (on Linux, 1 to 31), while the upper range (SIGRTMIN to SIGRTMAX, up to 64) holds the real-time signals, also called "higher signals".
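The mapping between numbers and names can be inspected with kill -l:

```shell
# List every signal name with its number:
kill -l

# Translate a number to a name (prints KILL):
kill -l 9
```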

KILL and KILLALL

We use kill and killall when a process ignores our request to exit. kill is a program that takes a PID as an argument. The syntax is: kill [PID]. For example:

$ kill 650 - kills the process with the PID 650

To send a specific signal, the syntax is: kill -[signal] [PID]

$ kill -9 650 - kills the process with the PID 650, or equivalently $ kill -SIGKILL 650


killall is a program that takes a process name as an argument, e.g.

$ killall -9 processname

SIGSTOP

SIGSTOP is a signal which pauses the process it is sent to; the process can continue executing after receiving the signal SIGCONT (that is also the main difference between SIGSTOP and SIGKILL). SIGSTOP cannot be ignored, nor can it be caught. The syntax is:

kill -SIGSTOP [PID]
kill -SIGCONT [PID]

For example:

$ kill -SIGSTOP 2571 or $ kill -19 2571
$ kill -SIGCONT 2571 or $ kill -18 2571

kill -s [signal] uses the symbolic names that are defined in <signal.h>. With -s the "SIG" prefix is unnecessary, so the signals can be given without it (e.g. kill -s STOP 2571).

SIGTERM

SIGTERM is a termination signal used to ask a process to exit. Its number is 15. For example:

$ kill -SIGTERM 2251 or $ kill -15 2251

Unlike SIGKILL and SIGSTOP, it can be caught, so a process may clean up before exiting (or even ignore it).

TOP

top shows processor activity in real time and is able to sort tasks by memory usage, runtime, and CPU usage. It also gives us a limited interactive interface we can use to manipulate processes, combining the roles of kill and ps. The output of top is refreshed every 3 seconds by default. The main screen units are the summary area, the prompt line, the columns header and the task area.

The syntax is: top [options]

Some options are:

b - batch mode: top accepts no interactive input and runs for the number of iterations given with n (otherwise until killed). We can use it to send the output of top to a file or another program.
h - help
M - sorts processes by memory usage
n - sets the number of iterations after which top ends
P - sorts processes by CPU usage
u - shows one user's processes; you enter the UID or user name when asked
i - shows only processes that are running at the moment
q - quit
r - changes the priority (niceness) of a selected process

(M, P, i, q and r are interactive commands, typed while top is running.)

For example:

$ top -u marina - shows only the processes of the user whose username is marina
$ top -u 400 - shows only the processes with the UID 400

The tree structure

The pstree command shows the processes as a tree structure, also called a "tree diagram". The process init is always at the root of the tree, because when Linux starts it is the first process running. As already mentioned, the tree structure shows the parent process of a process and its children (processes that originate from it). If you want to kill a whole subtree, you just have to kill the parent. Analogously, a tree structure is also used to represent the Linux filesystem. The difference between pstree and ps is that pstree gives you a hierarchical view of the running processes, while ps shows them in the order they were started. The syntax is: pstree [options] [PID or username]. For example:

$ pstree -A - uses ASCII characters to draw the tree
$ pstree -p - shows the PIDs of the processes
$ pstree -n - sorts processes with the same parent by PID
$ pstree | less - gives a clearer view; the output of pstree can be very long, and less lets us scroll through it


Priority setting

Every process has its own defined priority. A scheduling priority is not the same thing as niceness: as opposed to priority, niceness is more like advice to the scheduler, which can also be ignored. The lower the nice value, the higher the priority; the higher the nice value, the lower the priority. The lowest nice value is -20, and the highest is 19. Only the privileged user (root) can set any priority to any value; other users are limited to niceness values between 0 and 19, and only for processes they own. A process with a higher priority gets more processor time ("it is executed more often") than one with a lower priority.

The niceness of a process may be set with the nice command at the time of creation. Later it can be adjusted with the renice command.

Renice

We use renice to modify priorities, for example when a process uses too many resources. It takes an absolute niceness value, not an adjustment.

The syntax is: renice priority [[-p] pid ...] [[-g] pgrp ...] [[-u] user ...]

-p - the following parameters are interpreted as process IDs (pid; this is the default)
-g - the following parameters are interpreted as process group IDs (pgrp)
-u - the following parameters are interpreted as user IDs or user names (the process owners)

For example:

$ renice +5 335 -u marko ivan -p 34 - sets the niceness of the processes with PIDs 335 and 34, and of all processes of the users marko and ivan, to 5

$ renice +1 -u marko - sets the niceness of all processes owned by the user marko to 1 (lowering their priority)

$ renice +1 -g 500 - sets the niceness of all processes in the process group 500 to 1 (-g takes numeric process group IDs)


Nice

nice is a command that runs another command with an adjusted scheduling priority: it adds a given adjustment to the current shell's niceness. The resulting niceness ranges from -20 to 19. Without a command it prints the current scheduling priority.

The syntax is: nice [option] [command [arguments] ]

For example:

$ nice - prints the current niceness value
$ nice grep - runs grep with its niceness increased by 10, the default adjustment
$ nice -n 6 grep - runs grep with its niceness increased by 6
$ nice -n -1 grep - runs grep with its niceness decreased by 1 (lowering niceness requires root)
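The effect of the adjustment can be observed by running nice inside nice; a sketch (the absolute baseline can vary between systems, but the relative change is fixed):

```shell
# Current niceness of the shell (usually 0):
base=$(nice)

# Run `nice` itself with an adjustment of +5;
# with no command it reports its own niceness:
adjusted=$(nice -n 5 nice)

echo "baseline: $base, adjusted: $adjusted"
```

Since an unprivileged user may always raise niceness, the adjusted value is the baseline plus 5.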

JOB CONTROL

Job control refers to the facility to selectively stop and resume the execution of processes. The user drives it through an interactive interface provided jointly by the system's terminal driver and Bash. The shell keeps a table of all existing jobs, which can be listed with the jobs command.

There are two types of processes, distinguished by the current terminal's process group ID. Processes whose process group ID is the same as the terminal's are foreground processes, while background processes have a process group ID different from the terminal's. Only a foreground process can receive keyboard-generated signals, and only foreground processes have permission to read from and write to the terminal. When a background process tries to read from the terminal, the terminal driver sends it a SIGTTIN signal, whose default action is to stop (suspend) the process.

Job control is useful when a job takes a long time: we move it to the background and execute other commands while it keeps running.

bg - continues a suspended job in the background. We add the job specification after bg to point out which job should be continued in the background; if we skip it, the current job is used. The syntax is: bg [jobspec]. For example, bg %2 moves job 2 to the background. The return value is non-zero when bg is run while job control is disabled, or when job control is enabled but the job specification cannot be found or denotes a job that was started without job control.

fg - continues a job in the foreground and makes it the current job. If we don't add a job specification, the current job is used. It returns the exit value of the command placed in the foreground, and a non-zero value under the same error conditions as bg. The syntax is: fg [jobspec]. For example, fg %2 moves job 2 to the foreground.

JOBS

The syntax is: jobs [options] [jobspec] or jobs -x command [arguments]. If a job specification is given, the output is limited to information about that job; otherwise the status of all jobs is listed.

$ jobs -prints all jobs

Some options are:

-l - shows the usual information plus the process IDs
-n - shows information only for jobs that have changed status since the user last checked
-p - shows only the process ID of each job's process group leader
-r - restricts the output to running jobs
-s - restricts the output to stopped jobs
-x - replaces any job specification found in command or arguments with the corresponding process group ID, then executes command with those arguments

% - we use it to identify a job by its job number (it goes in front of the number), replacing the job name.

For example:

%12 - signifies job number 12
$ kill %2 - kills job number 2
%?reg - signifies any job whose command line contains the string 'reg'
%- - signifies the previous job
%+ - signifies the current job

command & - starts a job instantly in the background

There are also some keyboard shortcuts we can use while a job is running in the foreground. For example, Ctrl-C terminates the job and Ctrl-Z suspends it.
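The pieces above fit together in a short sketch (run in bash; the sleep job stands in for any long-running command):

```shell
# Start a long-running job directly in the background:
sleep 30 &

# $! holds the PID of the most recent background job:
echo "background PID: $!"

# The shell records the job in its job table:
jobs

# Terminate it by job number:
kill %1
```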

/proc and /sys File Systems

Written by Marina Miler.

The /proc file system is a virtual file system that contains information about the system, used by many programs (for example top). The word "virtual" indicates that the directory does not occupy any space on the hard drive. The /proc file system also contains a lot of information about your hardware. The /proc/sys subdirectory permits changing parameters within the kernel: if you want to change a value, you just echo the new value into the corresponding file. This is possible only if you're root.

/proc also includes a lot of directories whose names are numbers. They are called process directories, and they contain information about all processes that are currently running. If you are not root, you can read only information about the processes you started. When a process terminates, its /proc process directory disappears. Every process directory includes the same entries; some of them are: status, cwd, environ, root, cmdline, maps, exe.

status - includes a lot of information about the process: the executable's name, the PID, PPID, UID, GID, memory usage...

cwd - short for "current working directory"; it is a soft link to the process's current working directory.

environ - includes all environment variables that are defined for the process.

root - a soft link to the root directory used by the process.

cmdline - an unformatted file that contains the whole command line used to invoke the process.

maps - when you display the contents of this file, you see which parts of the process's address space are currently mapped to files.

exe - a soft link to the executable file of the running process.
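A short sketch inspecting these entries for the shell's own process directory (assuming a Linux system):

```shell
# Each running process has a directory /proc/<PID>:
ls /proc/$$/

# status holds the name, PPID, UID, memory usage, ...:
grep -E '^(Name|PPid|Uid):' /proc/$$/status

# cwd and exe are soft links:
ls -l /proc/$$/cwd /proc/$$/exe

# environ is NUL-separated; make it readable with tr:
tr '\0' '\n' < /proc/$$/environ | head -n 3
```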

ABOUT THE HARDWARE

The /proc file system also contains information about the machine's hardware. Some files which contain this information are:

cpuinfo -it contains information about the CPU.

apm -shows the capacity of your battery if you have a laptop.

meminfo -it includes information about the memory usage.

bus -shows information about the peripherals connected to the certain machine.

modules -shows a listing of modules that are used by the kernel in that moment.

acpi - contains information that can be important for laptops. For example:
button - controls the actions bound to buttons like power, sleep...
battery - reports the number of batteries in the laptop and their current condition
thermal_zone - tells you how hot your processor gets
fan - shows you the current condition of the fans in your machine. Control options may differ from one processor to another.

The /proc/sys Sub-Directory

This subdirectory exposes different kernel parameters and allows you to change them by writing into the corresponding files if you are root. Some uses of it are:

-prevent IP spoofing - spoofing makes a packet appear to come from your own network while it actually comes from outside. To turn on the reverse-path filter that rejects such packets, use this command: $ echo 1 >/proc/sys/net/ipv4/conf/all/rp_filter

-allow routing (IP forwarding) - you have to use this command: $ echo 1 >/proc/sys/net/ipv4/ip_forward

LSPCI

lspci is a command that shows information about all PCI buses and the devices connected to them. You need Linux kernel 2.1.82 or newer to use this command directly; on older kernels the PCI utilities are available only to root.

The syntax is: lspci [options] Some options are:

-n - displays the vendor and device codes as numbers
-v - tells lspci to display detailed information about all devices (v stands for verbose)
-vv - displays very detailed information about the devices (very verbose)
-x - displays a hexadecimal dump of the first 64 bytes of the PCI configuration space

DMESG

dmesg ("display message") is a command that displays the message buffer of the kernel. It gets the data by reading the kernel ring buffer, and it can be helpful for troubleshooting and for getting information about the hardware. The syntax is: dmesg [options]. Without any options it displays all kernel messages; options are rarely needed. For example: dmesg | less gives us a better overview of the displayed information.

LSUSB

lsusb is a command that displays information about the USB buses in the system and the devices connected to them. The syntax is: lsusb [options]. Some of the options are:

-v - writes out detailed information about the devices (verbose)
-t - displays the USB devices as a tree
-d - shows only the devices with the given vendor and product ID

EVERYTHING IS A FILE

Written by Irma Obradovac.


In a UNIX system everything is a file, and if something is not a file, then it is a process. This holds because even though, for example, pipes and sockets are special types of files, they are still files. Linux and Unix do not strictly distinguish between files and directories: a directory is simply a file that has a name and contains other files. All applications, programs, services, texts, images, as well as input-output and other devices, are considered to be files. In order to manage all these files, the system organizes them in a tree structure on the hard disk. Large branches contain more branches, and at their ends the branches carry leaves, which are analogous to normal files.

Following are the file types provided by Unix:

Regular files (-): contain normal data, such as text files, executable files or programs, program input or output, etc.
Directories (d): files that are lists of other files.
Character device files (c): used as a mechanism for unbuffered input and output; most of these special files are in /dev.
Block device files (b): created by the system for devices that transfer data in blocks; block device I/O is buffered, while character device I/O is not.
Links (l): a system to make a file or directory visible in several parts of the tree structure.
Named pipes (p): act more or less like sockets and form a way for processes to communicate with each other, without using network socket semantics.
(Domain) sockets (s): a special file type, similar to TCP/IP sockets, providing inter-process networking protected by the file system's access control.

INODES

Many users and many common administrative tasks treat files and directories as living in a tree-like structure, but that is not how the computer sees them: the disk knows nothing about trees. Each partition holds its own file system, and if we imagine them joined together we get the picture of one tree-structured system; still, it is not as simple as it looks.

An inode represents a file within a file system. It is a kind of serial number holding the information needed to reach the file's data: the location of the file on the hard disk and to whom it belongs. Each partition has its own set of inodes, so on systems with more than one partition, files on different partitions can have exactly the same inode number. A fixed number of inodes is created on every partition when the hard disk is initialized for storing data, which in most cases happens during installation of the system or when adding more disks to it. That number is the maximum number of files of all types (including directories, links, special files, etc.) that can exist on the partition at the same time. Typically there is 1 inode per 2-8 KB of storage.

When a new file is created, it is given a free inode, which includes this information:

the owner and group owner of the file
the type of the file (regular, directory, ...)
the file permissions
the creation date and time, and when it was last changed or read
the date and time of changes to the inode itself
the number of links to this file
the size of the file
the addresses defining the actual location of the file data


The file name and its directory are the only pieces of information not stored in the inode; they are stored in the directory's own data. The tree-like structure that the user understands is built by the system matching file names to inode numbers. We can see inode numbers using the -i option to ls. The disk reserves a separate area just for the inodes: the inode table. It contains a listing of all inode numbers for the file system. When a user tries to access a file, the system searches this table to find the right inode number; once found, the inode is read and the requested operation can act on the file's data. Take editing a file with nano as an example: when you type nano <filename>, the file name is looked up in the directory, the inode is found through the inode table, and the file's data and attributes are accessed through that inode; some attributes (such as the access and modification times) are updated during the edit session.
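A sketch of this with ls -i and GNU coreutils stat (the file names are examples): creating a hard link simply adds a second directory entry for the same inode.

```shell
cd "$(mktemp -d)"            # work in a scratch directory

# Create a file and look at its inode number:
echo "hello" > original.txt
ls -i original.txt

# A hard link is a second name for the same inode;
# %i is the inode number, %h the link count:
ln original.txt copy.txt
stat -c '%i %h %n' original.txt copy.txt
```

Both names report the same inode number and a link count of 2.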




LINKS

As we already know, a file has a data record (inode) holding the file's information, and the file name is simply a reference to that inode. The contents of a file can thus easily be referred to by links.

We have:

Hard links: additional names that point to the file's inode

Symbolic links: files containing another file's name


Difference between hard and symbolic links:

It may seem that symbolic links aren't really useful, but hard links have some drawbacks. The biggest one is that hard links cannot link files across two different file systems. A Unix installation can consist of many file systems on many physical disks, where every file system has its own structure and maintains its own information. Hard links cannot cross file systems because they refer to file-system-specific information (the inode number), unlike symbolic links, which store more general information (a file name); that is what lets symbolic links cross file systems. In short:

Hard links cannot point to directories
Hard links cannot link files across two different file systems


Both symbolic links and hard links allow you to associate multiple names with one file, but symbolic links additionally allow you to:

Create links to directories
Link files across two different file systems


They behave differently when the link source is moved or removed: a symbolic link is not updated and is left dangling, while a hard link always refers to the source, even when it is moved or the original name is removed.
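This difference is easy to demonstrate (a sketch; the file names are examples):

```shell
cd "$(mktemp -d)"             # work in a scratch directory

# Create a file, then both kinds of link to it:
echo "data" > source.txt
ln source.txt hard.txt        # hard link: same inode
ln -s source.txt soft.txt     # symbolic link: a file holding the name

# Remove the original name:
rm source.txt

cat hard.txt    # still works: the inode survives while any hard link exists
cat soft.txt || echo "soft.txt is dangling"   # the symlink points at a removed name
```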






NAMED PIPES

Also known as FIFOs, named pipes establish a one-way data flow and make communication between two unrelated processes possible. A named pipe is identified by a file, its access point, in the file system. Having a pathname associated with them is what makes communication between unrelated processes possible: those processes can begin their communication by opening the file associated with the named pipe. Named pipes are file-system persistent, which means they exist beyond the life of the process, unlike anonymous pipes, which are process persistent. A named pipe has to be deleted explicitly, either by one of the processes calling unlink or by removing it from the command line.

To communicate, the processes open the file associated with the named pipe. A process that wants the reading end of the pipe opens the file for reading, and a process that wants the writing end opens it for writing. By default, a named pipe supports blocking read and write operations: for example, if a process opens the file for writing, it is blocked until another process opens the file for reading. Specifying the O_NONBLOCK flag at open time makes the named pipe support non-blocking operations instead. The pipe must be opened either read-only or write-only, because it is a one-way channel, as mentioned before.

Shells make extensive use of pipes; for example, pipes are used to feed the output of one command as the input of the next. In real life, Unix pipes are used for communication between two processes that need a simple way to communicate synchronously.





Creating a Named Pipe:

A named pipe can be created in two ways: via the command line or from within a program.

From the Command Line: A named pipe may be created from the shell command line. For this one can use either the "mknod" or "mkfifo" commands.

Example:

To create a named pipe with the file name "npipe" you can use one of the following commands:

% mknod npipe p
or
% mkfifo npipe

You can also provide an absolute path for the named pipe to be created. Now if you look at the file using "ls -l", you will see the following output:

prw-rw-r-- 1 secf other 0 Jun 6 17:35 npipe

The 'p' in the first column denotes that this is a named pipe. Just like any file in the system, it has access permissions that define which users may open the named pipe, and whether for reading, writing, or both.

Within a Program: The function "mkfifo" can be used to create a named pipe from within a program. The signature of the function is as follows:

int mkfifo(const char *path, mode_t mode)

The mkfifo function takes the path of the file and the mode (permissions) with which the file should be created, and creates the new named pipe file at that path. The call behaves as if the O_CREAT|O_EXCL flags were given: it creates a new named pipe or returns an error of EEXIST if the named pipe already exists. The named pipe's owner ID is set to the process's effective user ID, and its group ID is set to the process's effective group ID; if the S_ISGID bit is set on the parent directory, the group ID of the named pipe is instead inherited from the parent directory.
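Either way, the resulting FIFO behaves the same; a quick shell sketch of the one-way, blocking channel:

```shell
cd "$(mktemp -d)"           # work in a scratch directory
mkfifo npipe

# The writer blocks until a reader opens the other end:
echo "hello through the pipe" > npipe &

# The reader releases it; the data passes through the kernel, not the disk:
cat npipe                   # prints: hello through the pipe

rm npipe
```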

Benefits of Named Pipes are:

using them is very simple
mkfifo is a thread-safe function
no synchronization mechanism is required while using them, even when opened in non-blocking mode
a write to a named pipe through the write function call is guaranteed to be atomic (up to PIPE_BUF bytes)
unlike anonymous pipes, they have read and write permissions, enabling secure communication


Limitations of Named Pipes are:

they can be used only for communication between processes on the same host machine
they cannot be created on an NFS file system
they require careful programming to avoid deadlocks, because of the blocking nature of pipes
they carry no record identification: the data is a byte stream

Filesystems

Written by Ema Matijević.

Introduction

In the beginning we'll cover some basic things about filesystems. The term has several different meanings, which can be confusing to novices. A filesystem can mean:

Whole directory tree, starting with the root directory / (forward slash) which contains a series of subdirectories, further subdirectories, etc.

Any organization of files and directories on a specific physical device. Here “organization” refers to the hierarchical directory structure.

A type of filesystem: how the storage of data (files, folders, ...) is organized on a computer disk (hard disk, CD-ROM, ...) or on a partition of a hard disk. Linux supports a large number of file systems; one of the most popular is ext2. Windows has its own filesystems, as does UNIX. But more on that later.

Any physical device for storing files must have at least one filesystem on it, organized as a hierarchical directory structure, and that filesystem must have a type.

Every storage medium must be formatted before it can be used. Formatting a hard disk involves the following steps: low-level formatting, partitioning and high-level formatting.

A low-level format creates the physical format that defines where the data is stored on the disk. The partitioning process divides the disk into logical parts that are assigned different hard disk volumes or drive letters (organizing the hard drive). High-level formatting is the process of writing the file system structures on the disk that let the disk be used for storing programs and data; it builds the file system to be used by the operating system.




Representation of partitions in /dev

When you type ls /dev on your command line, you'll see a list of devices. All devices connected to a computer are located in the /dev directory and marked by different colours depending on the device type. Hard drives are treated as files and are referred to by name, just as other hardware devices are.

Some of the most common devices are:

/dev/hda is an IDE hard drive. The letter a means that it is the first (master) drive.

/dev/hdb is the second (slave) device attached to the primary IDE controller. This could be a second hard drive, a CD-ROM drive, etc.

/dev/hda1 is the first partition of the IDE drive a. Different drives are lettered and the partitions of those drives are numbered, so we have /dev/hda1, /dev/hda3, /dev/hdb2 and so on.

/dev/sda is a SCSI drive. Today they are more popular than IDE.

/dev/fd0 is the first floppy drive. For floppy disks numbers refer to a drive (while for hard drives they refer to partitions).

/dev/ttyS0 is one of the serial ports.


Example:

 agpgart fd mapper ram11 sda stdin tty22 tty39 tty55 usbmon0 vcs8
 audio freefall mcelog ram12 sda1 stdout tty23 tty4 tty56 usbmon1 vcsa
 binder full mem ram13 sda2 tty tty24 tty40 tty57 usbmon2 vcsa1
 block fuse mixer ram14 sda3 tty0 tty25 tty41 tty58 usbmon3 vcsa2
 bus hidraw0 net ram15 sda5 tty1 tty26 tty42 tty59 usbmon4 vcsa3
 cdrom hidraw1 network_latency ram2 sda6 tty10 tty27 tty43 tty6 usbmon5 vcsa4
 cdrw hpet network_throughput ram3 sda7 tty11 tty28 tty44 tty60 usbmon6 vcsa5
 char input null ram4 sda8 tty12 tty29 tty45 tty61 usbmon7 vcsa6
 console kmsg oldmem ram5 sequencer tty13 tty3 tty46 tty62 usbmon8 vcsa7
 core log pktcdvd ram6 sequencer2 tty14 tty30 tty47 tty63 v4l vcsa8
 cpu_dma_latency loop0 port ram7 sg0 tty15 tty31 tty48 tty7 vcs video0
 disk loop1 ppp ram8 sg1 tty16 tty32 tty49 tty8 vcs1 zero
 dri loop2 psaux ram9 shm tty17 tty33 tty5 tty9 vcs2
 dsp loop3 ptmx random snapshot tty18 tty34 tty50 ttyS0 vcs3
 dvd loop4 pts rfkill snd tty19 tty35 tty51 ttyS1 vcs4
 dvdrw loop5 ram0 rtc sndstat tty2 tty36 tty52 ttyS2 vcs5
 ecryptfs loop6 ram1 rtc0 sr0 tty20 tty37 tty53 ttyS3 vcs6
 fb0 loop7 ram10 scd0 stderr tty21 tty38 tty54 urandom vcs7
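You can check what kind of device file an entry is with ls -l: the first character of the output is b for block devices (disks and partitions) or c for character devices. A small illustration using /dev/null, which exists on every Linux system:

```shell
# The leading 'c' marks /dev/null as a character device; a disk such
# as /dev/sda would show 'b' (block device) in the same position.
ls -l /dev/null
```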


Mounting file systems

In Linux, if you want to access a device such as a floppy, a CD, or any other storage device, you must mount it first. Mounting means attaching the storage device to an existing directory on your system. The directory where your device is attached is called the mount point. After the device is mounted, you can access it through the mount point directory. When your work is done and you want to remove the device from your computer, you need to detach it before removing it. This process is called unmounting.

Now we will show an example of mounting a storage device (an external hard disk). You need to be root (administrator) to do this. If you haven't logged in as root, you can gain root privileges by typing su in a terminal and entering the root password when asked.

First we have to choose the directory where we want to mount our device (the mount point). If we want to call it /exhd, we check whether this directory already exists by typing ls /exhd. If it doesn't exist, create it using the mkdir /exhd command.

(The directory used as a mount point must be empty. If it contains files, they will be hidden while the device is mounted and will not be accessible again until the mount point is released.)

After that type mount -t vfat /dev/sda1 /exhd.

The syntax: mount -t <type> <device> <mount point>

The argument -t vfat specifies the type of the filesystem, in this case vfat; /dev/sda1 is the name of the device (/dev/<device name>); and the final argument is the mount point we created earlier.

Now type ls /exhd again to make sure the contents of the external hard drive are accessible under the mount point directory.

You can also type the mount command without arguments to see all the mounted devices.
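Run without arguments, mount prints one line per mounted filesystem, in the form device on mount-point type fstype (options):

```shell
# List all currently mounted filesystems; each line reads like
# "/dev/sda1 on /boot type ext4 (rw)".
mount
```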

When you want to remove the device, simply type umount /exhd or umount /dev/sda1. It's now safe to remove the device. Notice: the command is umount, NOT unmount.

The syntax: umount <mount point | device>

Now type exit to leave the root shell.


/etc/fstab

fstab is a configuration file that contains information about all the partitions and storage devices in your computer. It is located in the /etc directory. fstab is a plain text file, so you can edit it with any text editor, but only as root (use the su command). /etc/fstab specifies where your partitions and storage devices should be mounted and how.

When we type more /etc/fstab in our terminal, we get a table with the columns <file system>, <mount point>, <type>, <options>, <dump> and <fsck>.


 # /etc/fstab
 # Created by anaconda on Mon Nov 30 16:51:58 2009
 # Accessible filesystems, by reference, are maintained under '/dev/disk'
 # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info

 /dev/mapper/vg_user123-lv_root / ext4 defaults $
 UUID=9f250971-0ac0-41ce-b443-b4ad9bef12bd /boot ext4 defau$
 /dev/mapper/vg_user123-lv_swap swap swap defaults $
 tmpfs /dev/shm tmpfs defaults 0 0
 devpts /dev/pts devpts gid=5,mode=620 0 0
 sysfs /sys sysfs defaults 0 0
 proc /proc proc defaults 0 0


<file system> - the device the filesystem resides on (/dev/hda, /dev/cdrom, etc.)

<mount point> - the directory where the device will be mounted (/, /home, /media/floppy, etc.)

<type> - specifies the filesystem type of a certain device or partition (ext2, ext3, reiserfs, auto, ntfs, etc.)

ext2, ext3, reiserfs, ntfs – Linux or Windows filesystem types (see the section Linux and non-Linux filesystems below)

auto – isn't a filesystem type; it means that the filesystem type is detected automatically.

<options> - lists the mount options for the device or partition (defaults, auto, noauto, ro, rw, etc.)

defaults - the default options, which are rw, suid, dev, exec, auto, nouser and async

auto and noauto – the auto option means that the device will be mounted automatically at boot. This is the default. If we don't want that, we can change it to noauto.

user and nouser – the user option allows normal users to mount the device; with nouser only root can mount it. nouser is the default.

exec and noexec – exec allows you to execute files on this filesystem, while noexec does not. exec is the default option.

sync and async – signify how the input and the output should be done (synchronously or asynchronously)

ro and rw – mount the filesystem read-only or read-write

<dump> - a number that tells the dump backup utility whether the filesystem should be backed up. If it's 0, dump will ignore that filesystem.

<fsck> - “filesystem check”; determines the order in which the filesystems are checked at boot (depending on the number)
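Putting the fields together, a hypothetical /etc/fstab entry for a FAT-formatted USB disk could look like this (the device name and mount point are made up for illustration):

```
# <file system>  <mount point>  <type>  <options>       <dump>  <fsck>
/dev/sdb1        /media/usb     vfat    noauto,user,rw  0       0
```

With noauto and user, the disk is not mounted automatically at boot, but an ordinary user can mount it on demand.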

/etc/mtab

/etc/mtab is a normal text file that is automatically edited by the mount program whenever filesystems are mounted or unmounted. It's very similar to fstab and has the same table captions: file system device, mount point, file system type, mount options, and two unused fields containing zeros (for /etc/fstab compatibility).

While in the root directory /, we type cd etc and then nano mtab to open the mtab file in a terminal.

File: /etc/mtab

 /dev/mapper/vg_user123-lv_root / ext4 rw 0 0
 proc /proc proc rw 0 0
 sysfs /sys sysfs rw 0 0
 devpts /dev/pts devpts rw,gid=5,mode=620 0 0
 tmpfs /dev/shm tmpfs rw,rootcontext="system_u:object_r:tmpfs_t:s0" 0 0
 /dev/sda3 /boot ext4 rw 0 0
 none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
 gvfs-fuse-daemon /home/user123/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user$
 /dev/sdb1 /media/VERBATIM vfat rw,nosuid,nodev,uhelper=devkit,uid=500,gid=500,s$

/proc/mounts

Procfs (process filesystem) is a pseudo filesystem used to access process information from the kernel. It is mounted at the /proc directory. /proc includes a directory for each running process.

/proc/mounts is a symlink (a special type of file that contains a reference to another file or directory) to a list of the currently mounted devices with their mount points, filesystem types and the mount options in use.

While in the root directory /, when we type nano /proc/mounts we get something like this:

File: /proc/mounts

 rootfs / rootfs rw 0 0
 /proc /proc proc rw,relatime 0 0
 /sys /sys sysfs rw,relatime 0 0
 udev /dev tmpfs rw,seclabel,relatime,mode=755 0 0
 /dev/pts /dev/pts devpts rw,seclabel,relatime,gid=5,mode=620,ptmxmode=000 0 0
 /dev/shm /dev/shm tmpfs rw,seclabel,relatime 0 0
 /dev/mapper/vg_user123-lv_root / ext4 rw,seclabel,relatime,barrier=1,data=ordered 0 0
 none /selinux selinuxfs rw,relatime 0 0
 /proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
 /dev/sda3 /boot ext4 rw,seclabel,relatime,barrier=1,data=ordered 0 0
 none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
 gvfs-fuse-daemon /home/user123/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=500,group_id=500 0 0
 /dev/sdb1 /media/VERBATIM vfat rw,nosuid,nodev,relatime,uid=500,gid=500,fmask=0022,dmask=0077,codepage=cp437,iocharse$

1st column – the device that is mounted

2nd column – the mount point

3rd column – the filesystem type

4th column – the mount options; the most common ones are rw (read-write) and ro (read-only), which tell how the device is mounted

5th and 6th columns – dummy values that match the format used in /etc/fstab
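Because /proc/mounts is plain whitespace-separated text, its columns are easy to pick apart with standard tools. For example, to show only the device, mount point and filesystem type of every mounted filesystem:

```shell
# Print the 1st (device), 2nd (mount point) and 3rd (filesystem type)
# columns of every line in /proc/mounts.
awk '{ print $1, $2, $3 }' /proc/mounts
```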

Linux and non-Linux filesystems

Linux filesystems

Ext2

Ext2 stands for “second extended file system”. It replaced the extended file system (ext). It is often called the “traditional” filesystem and has been the most popular and successful one in the Linux community so far. Ext2 corrected certain problems and limitations of its predecessor, and it still offers great robustness and good performance.

Journaling file systems

A journaling file system keeps track of the changes it intends to make in a journal (usually a circular log in a dedicated area of the filesystem) before committing them to the main filesystem. In the event of a system crash or power failure, such filesystems are quicker to bring back online and less likely to become corrupted.

Ext3

A commonly used journaling filesystem and the default for many Linux distributions. Its main advantage over ext2 is journaling, which improves reliability and eliminates the need to check the filesystem after an unclean shutdown. It allows upgrades from ext2 without the need to back up and restore data. It also uses less CPU power than ReiserFS and XFS, and it's considered safer than the other Linux filesystems because it's simple and has a wider testing base.

Ext4

The successor of the ext3 filesystem. It has many advantages over ext3, among them larger filesystem sizes, journal checksumming (improved reliability), faster filesystem checking, etc. The main disadvantage is delayed allocation: there is some additional risk of data loss if the system crashes before all of the data has been written to disk.

ReiserFS

A general-purpose filesystem, named after Hans Reiser, the primary developer of the ReiserFS and Reiser4 filesystems. It was the first journaling filesystem included in the standard Linux kernel, so it became the default filesystem for many distributions. It's considered stable and has many features. It uses balanced-tree concepts inspired by database software. Its successor is Reiser4.


JFS

Journaled File System, a 64-bit journaling filesystem created by IBM. JFS is fast and reliable, with good performance under different kinds of load. Another very good characteristic is that it's light and efficient with available system resources; even heavy disk activity is handled with low CPU usage.

XFS

High-performance journaling file system created by Silicon Graphics. XFS is very proficient at handling large files and at offering smooth data transfers. It's a 64-bit file system.

BTRFS

The B-tree file system, announced and developed by Oracle and open to contributions from anyone. Its main advantages are fault (error) tolerance, easy administration, and detecting and repairing errors in the data stored on disk.



UNIX file systems

The term Unix is used to describe any operating system that conforms to the Unix standards, meaning its core behaves the same as the original Unix operating system. Today's Unix systems are split into various branches.

UFS

The Unix File System is used by many Unix and Unix-like operating systems. It is a hierarchical filesystem. Being the most established of these filesystems, it has received a lot of development attention over the years. It's suitable for a wide variety of applications but, on the other hand, it is old and lacks some features.

ZFS

A combined filesystem and logical volume manager. It supports very high storage capacities, integrates the concepts of filesystem and volume management, and provides snapshots and copy-on-write clones.


Windows file systems

FAT

The File Allocation Table filesystem architecture is widely used today on many computer systems and on most memory cards, such as those used with digital cameras. It is found on floppy disks, flash memory cards, digital cameras and many other portable devices because of its relative simplicity. Performance is not the strong point of this filesystem: it uses very simple data structures and makes poor use of disk space when many small files are present.


NTFS

New Technology File System, the standard filesystem of Windows NT and its successors Windows XP, Vista and Windows 7. It is the preferred filesystem for Windows operating systems. NTFS has several improvements over the FAT filesystem, including better disk space utilization, additional extensions, improved performance, reliability, etc.

***WE SHOULD ADD SOMETHING ABOUT ntfs.ko and FUSE HERE.***

NTFS-3G is an open source implementation of the Microsoft Windows NTFS filesystem driver. It runs unmodified on many different operating systems, such as FreeBSD, Solaris, Linux, Mac OS X and others. NTFS-3G supports all operations for writing files. It supports partial NTFS journaling, so if an unexpected failure leaves the filesystem in an inconsistent state, the volume can be repaired.


THIS DOCUMENTATION IS IN DRAFT STAGE AND NEEDS TO BE DEVELOPED.