From FedoraProject



Development Tips


It is often said that running OProfile is more complicated than using gprof, because it requires starting a daemon and loading a kernel module. But gprof requires recompiling the application and all dependent libraries with the -pg option, which can be much worse if you also need to recompile a library such as glib. Setting up and using oprofile:

Best practices

Every good course book covers problems with memory allocation, performance of specific functions, and so on. The best thing to do is to buy a good book ;-) It doesn't make sense to think about every line; optimize only the things which are bottlenecks for performance.

Here is a short overview of techniques which are often problematic:

  • threads (not always a win; see below)
  • Wake up only when necessary
  • Don't use [f]sync() if not necessary
  • Do not actively poll in programs or use short regular timeouts; rather, react on events
  • If you wake up, do everything at once ("race to idle") and as fast as possible
  • Use large buffers to avoid frequent disk access; write one large block at a time
  • Group timers across applications if possible (even across systems)
  • Watch for excessive I/O, power consumption, or memory usage (memory leaks)
  • Avoid unnecessary work/computation

And now some examples:


It is widely believed that using threads makes an application perform better and faster. But it is not true every time.

Python uses a Global Interpreter Lock (GIL), so threading is profitable only for bigger I/O operations. Unladen Swallow (still not merged upstream) may help optimize Python code.

Perl threads were originally created for applications running on systems without fork (win32). In Perl threads, the data is copied for every single thread (copy on write). Data is not shared by default, because the user should be able to define the level of data sharing. To share data, the threads::shared module has to be included; then the data is copied (copy on write), and in addition the module creates tied variables for it, which takes even more time and is even slower.

Reference: performance of threads

C threads share the same memory; each thread has its own stack, and the kernel does not have to create new file descriptors or allocate new memory space. C can genuinely use the support of more CPUs for more threads.

Therefore, if you want better performance from your threads, you should use a low-level language like C/C++. If you are using a scripting language, it is possible to write a C binding for the performance-critical parts. The low-performing parts can be tracked down with profilers.

Reference: improving performance of your application


Many applications scan configuration files for changes. In many cases this is done on a fixed interval, e.g. every minute. This can be a problem, because it forces the disk to wake up from spindowns. The best solution is to find a good interval, a good checking mechanism, or to check for changes with inotify, i.e. react on events. Inotify can watch for a variety of changes on a file or a directory. The problem is that there is only a limited number of watches per system. The current limit can be obtained from /proc/sys/fs/inotify/max_user_watches, and it can be changed, but that is not recommended.


#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void) {
  int fd;
  int wd;
  int retval;
  struct timeval tv;
  fd_set rfds;
  char buf[4096];

  fd = inotify_init();
  if (fd < 0) {
    printf("inotify cannot be used\n");
    /* switch back to the previous checking mechanism */
    return EXIT_FAILURE;
  }

  /* check for modification of a file - writing into it */
  wd = inotify_add_watch(fd, "./myConfig", IN_MODIFY);
  if (wd < 0) {
    printf("inotify cannot be used\n");
    /* switch back to the previous checking mechanism */
    close(fd);
    return EXIT_FAILURE;
  }

  FD_ZERO(&rfds);
  FD_SET(fd, &rfds);
  tv.tv_sec = 5;
  tv.tv_usec = 0;

  retval = select(fd + 1, &rfds, NULL, NULL, &tv);
  if (retval == -1) {
    perror("select");
  } else if (retval) {
    /* consume the queued events so they are not reported again */
    if (read(fd, buf, sizeof(buf)) > 0)
      printf("file was modified\n");
  } else {
    printf("timeout - no modification within 5 seconds\n");
  }

  inotify_rm_watch(fd, wd);
  close(fd);
  return EXIT_SUCCESS;
}


Advantages:

  • variety of checks

Disadvantages:

  • finite number of watches per system
  • possible failure of inotify

In the case where inotify fails, the code has to fall back to a different checking method. That usually means a lot of "#ifdef" conditionals in the source code.


man 7 inotify


Fsync is known as an I/O-expensive operation, but according to the references this is not completely true. The article also has an interesting discussion which shows several different opinions on using or not using fsync at all.

The typical examples are the Firefox freezes (with fsync) vs. empty files (without fsync). What happened in these cases?

Firefox used to call the sqlite library each time the user clicked a link to go to a new page. Sqlite called fsync, and because of the file system settings (mainly ext3 in data=ordered mode), there was a long latency during which nothing was happening. This could take a long time (even 30 s) if another process was copying a large file at the same time.

In other cases fsync was not used at all, and there was no problem until the switch to the ext4 file system. Ext3 in data=ordered mode flushed memory every few seconds and saved it to disk, but with ext4 and laptop_mode the interval was longer, and the data could get lost when the system was unexpectedly switched off. Ext4 has since been patched, but we should still think about the design of our applications and use fsync carefully.

A simple example of reading/writing a configuration file shows how a backup of a file can be made, or how data can be lost.

Bad example:

/* open and truncate the configuration file, e.g. ~/.kde/myconfig */
fd = open("./.kde/myconfig", O_WRONLY|O_TRUNC|O_CREAT, 0644);
write(fd, bufferOfNewData, sizeof(bufferOfNewData));
/* a crash at this point leaves a truncated or empty file */

Better example:

/* write the new data into a temporary file first */
fd = open("./.kde/myconfig.suffix", O_WRONLY|O_TRUNC|O_CREAT, 0644);
write(fd, bufferOfNewData, sizeof(bufferOfNewData));
fsync(fd); /* paranoia - optional */
close(fd);
/* do_copy("./.kde/myconfig", "./.kde/myconfig~"); */ /* backup - optional */
rename("./.kde/myconfig.suffix", "./.kde/myconfig"); /* atomic replacement */

Reference: inside of fsync