Create a single file rpm package

Let's create an rpm package that contains a single file. This was so annoying for me to figure out that I don't want to ever remember or research how to do it again so here it is documented. All I want is to package the file /usr/bin/flashplayerdebugger into an RPM. This is a single binary file and my server installation on the cloud is forcing me to install it using rpm.

RPM seems to have a lot of predefined expectations that are tedious to find and learn. My starting point was this short article by IBM: http://www.ibm.com/developerworks/library/l-rpm1/

That article goes through the configure, make and build steps to produce a binary and finally packages that binary. In my case I already have the binary and just want to build the rpm right away. After facing many errors and searching around, these are the steps I had to follow to successfully create the rpm.

Create the following 5 directories:

My home is /home/fsaberi.
I created /home/fsaberi/flashplayerdebugger01,

then the following 5 directories under /home/fsaberi/flashplayerdebugger01:

BUILD/  RPMS/  SOURCES/  SPECS/  SRPMS/


The binary flashplayerdebugger01 is packaged under the SOURCES directory in a tarball, laid out like this, so that it works with the spec file shown further below:

[fsaberi@farm2-zcon-01-v1 flashplayerdebugger01]$ tar -tvf SOURCES/flashplayerdebugger01.tar
drwxr-xr-x f/u     0 20.. 15:30:27 flashplayerdebugger-01/
drwxr-xr-x f/u     0 20.. 15:30:27 flashplayerdebugger-01/usr/
drwxr-xr-x f/u     0 20.. 15:30:46 flashplayerdebugger-01/usr/bin/
-rwxr-xr-x f/u 16816172 201.. 15:30:46 flashplayerdebugger-01/usr/bin/flashplayerdebugger01


The spec file SPECS/flashplayerdebugger.spec would then look like this:

[fsaberi@farm2-zcon-01-v1 flashplayerdebugger01]$ cat SPECS/flashplayerdebugger.spec

%define _topdir         /home/fsaberi/flashplayerdebugger01
%define name        flashplayerdebugger
%define release        1
%define version     01
%define buildroot %{_topdir}/BUILD/%{name}-%{version}

BuildRoot:    %{buildroot}
Summary:         flash player debugger for blah2 server
License:         GPL
Name:             %{name}
Version:         %{version}
Release:         %{release}
Source:         %{name}%{version}.tar
Prefix:         /usr/bin
Group:             blah2

AutoReqProv: no

%description
flash player debugger file for blah2 server. Contains one binary file.

%prep
%setup -q

%build

%install
cp %{buildroot}/usr/bin/%{name}%{version} /usr/bin

%files
%defattr(-,root,root)
/usr/bin/flashplayerdebugger01
[fsaberi@farm2-zcon-01-v1 flashplayerdebugger01]$


A few things I learned: %setup -q will go into the SOURCES directory and expect to find a tar file, or else it will error out. When the untar occurs it then expects to find a directory named %{name}-%{version}. If the hyphen is not there it complains. It untars %{name}%{version}.tar under the BUILD directory and goes on to the %build and then the %install sections.

I'm packaging a single binary file so I left %build empty. %install will actually install the binary locally on the system before heading to the %files section. In %files I specify the only binary I wanted to package.

The command to build the rpm is:

   sudo rpmbuild  -vv -bb --clean  SPECS/flashplayerdebugger.spec

But then I got this error:

error: Installed (but unpackaged) file(s) found:
   /debugfiles.list
   /debuglinks.list
   /debugsources.list

Searching a bit more, I found that I had to create a ~/.rpmmacros file with this line in it:

%debug_package          %{nil}

Then everything worked and the RPM was created under the RPMS directory.

To package and copy just one file using rpmbuild, you will most likely want to disable the automatic dependency check, or your package won't install without pulling in a lot of other packages. So add the tag AutoReqProv: no as shown in the above spec file. There are also AutoReq and AutoProv tags you can look into, but AutoReqProv deals with both.

So much trouble just to rpm package one single file.

Parallel processing fork exit vfork _exit

The only way a new process is created by the Unix kernel is when an existing process calls the fork function. And forking is one of the most expensive operations the kernel performs.

I don't mean threading. True parallel processing is achieved in Unix when the processes are independent from each other. By that we mean separate file descriptors, data segments, stack, heap, signal handlers, user ID and everything else.

The title of this article mentions exit, _exit and vfork. I will talk about these as well as they are relevant and important. But first we need to dig into the original fork itself and fully understand what happens when a program calls it. I will give examples in C and then cover a bit about Perl as there are differences between the two.

Every program has a text segment which contains the machine instructions executed by the CPU. This is sometimes shared between programs so that only a single copy needs to be in memory for frequently executed programs. A bunch of different processes could share the same text segment and save on memory usage when they all mean to execute the same set of instructions.

There are also other memory segments reserved for a process, such as a heap, a stack and an initialized data area. The heap is the space where dynamically allocated memory goes, such as when malloc or calloc are called within a running program. This is not a fixed size area, as malloc can reserve different sizes each time based on what the input of the program is. The initialized data area is the place where any declarations outside of any function are stored. These are variables and values defined before any processing actually takes place, such as global variables. Then there's the stack. Each process has a stack onto which it pushes information relating to each function call, such as the address to return to once a function has finished. Automatic variables (those that are locally defined and cease to exist once the function returns) are also stored on the stack.
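
To make that layout concrete, here is a minimal sketch (the variable names counter, local and buf are made up just for this example) that prints an address from each of the three areas:

#include <stdio.h>
#include <stdlib.h>

int counter = 5;              /* initialized data segment: a global, exists before main runs */

int main(void){
    int local = 33;           /* stack: automatic variable, gone when main returns */
    int *buf = malloc(1024);  /* heap: dynamically allocated at run time */

    printf("data: %p  stack: %p  heap: %p\n",
           (void *)&counter, (void *)&local, (void *)buf);
    free(buf);
    return 0;
}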

The above memory layout description is very general though accurate enough amongst different flavors of Unix. Each OS does it its own way, more or less. But it is good enough for us to generalize in a way that explains the behaviors of fork and vfork. Especially accurate is the description of the stack. Let's start with fork() by considering this program:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int global=5;
int main(void){
    int pid;
    int local=33;

    printf("before fork\n");
    pid = fork();
    if ( pid < 0 ){
        printf("Failed to fork!\n");
        exit(1);
    }else if (pid == 0) {
        printf("I am child, my pid is %d\n",getpid());
        global++;
        local++;
    }else {
        printf("I am parent, pid is %d and I spawned child %d\n",getpid(), pid);
        sleep(5);
    }
    printf("global=%d local=%d pid=%d\n",global,local,getpid());
    exit(0);
}
This is the proper way fork should be called. First, if it fails it will return a negative integer and we should check for that. Else if it returned zero then we are in the newly created process, the child. Else the returned value was a positive integer which is the PID of this other process that was created and we are the original process, the parent. The printf's make all this clear.

Let's compile and run the above code:

farhadsa@farhadsaberi.com [~/tmp]# gcc fork.c 
farhadsa@farhadsaberi.com [~/tmp]# ./a.out 
before fork
I am parent, my pid is 15905 and I spawned child 15907
I am child, my pid is 15907
global=6 local=34 pid=15907
global=5 local=33 pid=15905
farhadsa@farhadsaberi.com [~/tmp]# 

Before I show what just happened pictorially, I want to say something about buffering because it's important for understanding fork. Let's run the same program but this time redirect its output into a file and see what happens.

farhadsa@farhadsaberi.com [~/tmp]# ./a.out > output
farhadsa@farhadsaberi.com [~/tmp]# cat output 
before fork
I am child, my pid is 21709
global=6 local=34 pid=21709
before fork
I am parent, my pid is 21708 and I spawned child 21709
global=5 local=33 pid=21708
farhadsa@farhadsaberi.com [~/tmp]# 

Did you see the difference? I'll explain in a minute why the line "before fork" was printed twice when I redirected the output to a file. Let's look at what happened pictorially.

fork.png
As we can see in the picture, the text segment of the new process is identical to the text segment of the existing process that created it. The kernel created a new process and copied over the text segment. It also created a new heap for the new process and copied over what was in the parent process's heap. It also created a file descriptor table for the new process and copied over all open file descriptors from the parent to the child. For example if the parent had a file opened with an offset 312, the child will also have a file descriptor pointing to the same file with an offset 312. That is why when the child calls printf() on STDOUT to say "I am child, my pid is ...," we see it on the same terminal on which we started the parent process. In fact, anything you can think of that the parent had, the child has it as well.
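
A small sketch of that descriptor sharing, under the assumption that /tmp/shared_offset.txt is a scratch file we are allowed to create: the parent opens it, forks, and both processes write through the inherited descriptor, so the second write lands right after the first because the file offset is shared.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

int main(void){
    /* the path is just an example */
    int fd = open("/tmp/shared_offset.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                     /* child: writes through the inherited descriptor */
        write(fd, "child\n", 6);
        _exit(0);
    }
    sleep(1);                           /* as in the example above, let the child go first */
    write(fd, "parent\n", 7);           /* lands after the child's line: the offset is shared */
    close(fd);
    return 0;
}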

Getting back to my executing the above C program with and without redirecting its output to a file ... we noticed that the line "before fork" was printed twice when redirected to a file. The reason is that when the new process is created, the buffered IO segment of the parent is also copied over to the child. Really, nothing was left out. In Unix, in general, the Standard IO Library is buffered differently based on what kind of file its destination is. When its destination is a file of type Terminal then it is "line buffered" while if it is a regular file then it is "fully buffered." Meaning that while printing to a Terminal, every time the "\n" character is seen the buffer is flushed. So when the output was to a regular file, fork() was called while the words "before fork" were still in the buffer. After fork the parent flushed (wrote) its buffer and the child did the same. But when it was printing to the terminal, the original process's standard IO library's printf() call was already flushed to the terminal because it was newline terminated (and hence nothing was there to copy over to the child).
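
If that duplicated line ever matters, one common remedy is to flush the standard IO buffers yourself right before forking. This is only a sketch of the idea, not something the program above strictly needs:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void){
    printf("before fork\n");   /* may still sit in a full buffer if stdout is a regular file */
    fflush(stdout);            /* force the buffer out now, so the child doesn't inherit a copy of it */

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {
        printf("child %d\n", (int)getpid());
        _exit(0);              /* buffer was already flushed, so nothing can be printed twice */
    }
    printf("parent %d\n", (int)getpid());
    return 0;
}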

It's important to mention the sleep(5) in the above code where the parent process executes. The parent purposely sleeps for a while so that the child finishes (exits) first. CPU time is given to each of these processes in a manner completely unpredictable to us. There's no synchronization between the two, and this is termed asynchronous execution. That is a topic for another article where I will show how we can synchronize them. But for now let's sleep in the parent so that the child exits first.

exit vs _exit

I have to explain the two exit functions available before explaining vfork: exit() performs program cleanups, such as calling any handlers registered with atexit() and flushing the standard IO buffers, and then calls _exit(). It is _exit() that closes file descriptors and returns control to the kernel, skipping that cleanup. When a process calls exit() or returns from main(), if there were any functions registered with atexit(), then those functions are invoked in the reverse order of their registration. Once a registered function returns it is removed from the list of registered functions, meaning that the same atexit() registered function cannot be invoked twice within the same process.

If you _exit() however, registered atexit() functions are ignored. And this is the main difference. With exit() the atexit() registered functions are called and then implicitly _exit() is invoked. You can bypass exit() by calling _exit() directly. Consider this simple program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void myexit1(void), myexit2(void);
int main(void){
    printf("begin program\n");

    if(atexit(myexit2) != 0) {
             printf("ERROR can't register myexit2"); exit(EXIT_FAILURE);}
    if(atexit(myexit1) != 0) {
             printf("ERROR can't register myexit1"); exit(EXIT_FAILURE);}
    _exit(EXIT_SUCCESS); /* note that it is not exit() */
}
static void myexit2(void){
    printf("call to myexit2, second exit handler\n");
}
static void myexit1(void){
    printf("call to myexit1, first exit handler\n");
}
farhadsa@farhadsaberi.com [~/tmp]# ./a.out
begin program
farhadsa@farhadsaberi.com [~/tmp]#

The functions myexit2() and myexit1() were skipped and the program returned to the kernel.

COW (Copy On Write)

One final note before vfork. Today most flavors of Unix implement what's called the Copy On Write (COW) optimization of the fork() function, meaning that the parent's writable pages (stack, heap, data) are not immediately copied over to the child. It is assumed that the child will probably not touch variables declared before the fork() and will likely just exec(). Since the child may never need those pages, why waste resources copying them over? However, if the child does try to write to one of those variables, the kernel will first copy the affected page over to the child so that the change happens in the child's own space.

That is the optimization: copy a page over to the child only if the child attempts to write to it. Otherwise leave it shared.

vfork

There are only two reasons why a program would ever call fork().

1- To duplicate itself so that one process performs one section of the code
     while the other process executes another section.
2- To call exec(), meaning that the child process becomes another program.

vfork() is intended to be used for the second case. When you fork() and then immediately exec() another program, all that heap and stack copying to the child's process space is huge overhead for nothing because they are never used. vfork() creates the new process without fully copying the address space of the parent into the child. While the child is running however, it is sharing the parent's address space. Meaning that if the child writes to any variable then it is changing (or corrupting) the same variable within the parent.
 
Many Unixes have abandoned vfork() because of the Copy On Write optimization of fork() and have made vfork() synonymous with fork(). However, some systems have brought vfork() back because it is still much faster than fork() with COW. One such kernel is NetBSD, and here is their explanation verbatim:

fork() with COW:
  • Traverse parent's vm_map, marking the writable portions of the address space COW. This means invoking the pmap, modifying PTEs, and flushing the TLB.

  • Create a vm_map for the child, copy the parent's vm_map entries into the child's vm_map. Optionally, invoke the pmap to copy PTEs from the parent's page tables into the child's page tables.

  • Block parent.

  • Child runs. If PTEs were not copied, take page fault to get a physical mapping for the text page at the current program counter.

  • Child execs, and unmaps the entire address space that was just created, and creates a new one. This implies that the parent's vm_map has to be traversed to mark the COW portions not-COW.

  • Unblock parent.

  • Parent runs, takes page fault when modifying previously R/W data that was marked R/O for COW. No data is copied at this time.

vfork():

  • Take reference to parent's vmspace structure.

  • Block parent.

  • Child runs. No page faults occur because the parent's page tables are being used, and the PTEs are already valid.

  • Child execs, deletes the reference it had to the parent's vmspace structure, and creates a new one.

  • Unblock parent.

  • Parent runs. (No page faults occur because the parent's vm_map was not modified.)


Since we are not kernel developers we'll ignore the details of their explanation and only take away that vfork() is faster. Let's show an example:

farhadsa@farhadsaberi.com [~/tmp]# cat vfork.c
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int glob=6;
static void myexit1(void), myexit2(void);

int main(void){
    int var;
    pid_t pid;
    var = 88;
    printf("before vfork\n");
 
    if(atexit(myexit2) != 0) {
               printf("ERROR can't register myexit2"); exit(EXIT_FAILURE);}
    if(atexit(myexit1) != 0) {
               printf("ERROR can't register myexit1"); exit(EXIT_FAILURE);}

    pid = vfork();
    if( pid < 0){
          printf("fork failed\n"); exit(EXIT_FAILURE);
     } else if ( pid == 0){
          glob++;
          var++;
          _exit(EXIT_SUCCESS);
    }
    printf("pid = %d, glob = %d, var = %d\n", getpid(), glob, var);
    exit(EXIT_SUCCESS);
}
static void myexit2(void){
    printf("call to myexit2, second exit handler\n");
}
static void myexit1(void){
    printf("call to myexit1, first exit handler\n");
}
farhadsa@farhadsaberi.com [~/tmp]# gcc vfork.c
farhadsa@farhadsaberi.com [~/tmp]# ./a.out
before vfork
pid = 8581, glob = 7, var = 89
call to myexit1, first exit handler
call to myexit2, second exit handler
farhadsa@farhadsaberi.com [~/tmp]#

Here I did not exec and simply _exit'ed the child after modifying the two variables var and glob. The parent printed the values of var and glob and they had changed to 89 and 7. But it was the child that did the updating!! You can see how vfork() shares the parent's address space: the automatic variable var on the parent's stack and the global variable glob in the initialized data segment were both changed by the child. If you replace the vfork() call with fork() you would see that the parent's values remain unchanged.

Now I'll take the same code and change the exit function of the child from _exit to exit, meaning that the child is now going to call the atexit() registered functions. Let's see what happens:

farhadsa@farhadsaberi.com [~/tmp]# cat vfork.c
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int glob=6;
static void myexit1(void), myexit2(void);

int main(void){
    int var;
    pid_t pid;
    var = 88;
    printf("begin program\n");
 
    if(atexit(myexit2) != 0) {
                   printf("ERROR can't register myexit2"); exit(EXIT_FAILURE);}
    if(atexit(myexit1) != 0) {
                   printf("ERROR can't register myexit1"); exit(EXIT_FAILURE);}

    pid = vfork();
    if( pid < 0){
          printf("fork failed\n"); exit(EXIT_FAILURE);
     } else if ( pid == 0){
          glob++;
          var++;
          exit(EXIT_SUCCESS);
    }
    printf("pid = %d, glob = %d, var = %d\n", getpid(), glob, var);
    exit(EXIT_SUCCESS);
}
static void myexit2(void){
    printf("call to myexit2, second exit handler\n");
}
static void myexit1(void){
    printf("call to myexit1, first exit handler\n");
}
farhadsa@farhadsaberi.com [~/tmp]# gcc vfork.c
farhadsa@farhadsaberi.com [~/tmp]# ./a.out
begin program
call to myexit1, first exit handler
call to myexit2, second exit handler
pid = 21315, glob = 7, var = 89
farhadsa@farhadsaberi.com [~/tmp]#

See how the output changed order?! The two registered functions were called first and then the parent printed its pid and the glob and var variables. Why? Because vfork() kept the child running inside the parent's address space with the same atexit() registered functions, and exit() caused them to be called by the child this time. They were therefore removed from the registered atexit() list by the child, and when the parent's turn came, nothing was left registered to execute. This is why it is important to call _exit() in a vfork'd child and not exit().

If you change the vfork() in the above program to fork() you will see the atexit() registered functions myexit1() and myexit2() called twice. Once by the child and once by the parent because they would each have a separate copy of the registered atexit functions.

Another very important aspect of vfork() is synchronization. In both examples you saw that the child executed first without me doing anything special. This is by design. vfork() guarantees that the parent waits for the child to either exit or call exec(). Until then the parent is blocked by the kernel from executing. So if you're going to use vfork(), make sure that you exec() as soon as possible, and if your exec() fails then call _exit() and not exit(). Using vfork() runs the risk of blocking the parent forever if the child never execs or exits.
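
Here is a minimal sketch of that recommended pattern; the /bin/echo command and its arguments are just placeholders for whatever you actually want to exec:

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main(void){
    pid_t pid = vfork();
    if (pid < 0) {
        perror("vfork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* child: do nothing except exec; touching variables here would modify the parent */
        execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
        _exit(EXIT_FAILURE);   /* only reached if execl failed; never call exit() here */
    }
    /* parent resumes only after the child has exec'd or _exit'ed */
    printf("parent %d continues\n", (int)getpid());
    return 0;
}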

Perl, fork and Copy On Write

fork() is simulated as much as possible in Perl. The buffering is also different. There is no vfork() in Perl, so that makes it a tiny bit less efficient, and performance degrades fast when you repeatedly call fork and modify variables within the child. That's because COW becomes useless. If you are going to fork() in Perl then design your program carefully so that, first of all, you take advantage of COW and, secondly, you keep children running rather than forking a new one for each task. Think of an HTTP server. I'll write about this with a full example later on.

But now let's look at how copy on write speeds things up, using simple examples in Perl and timing them. First let's see how much time it takes to assign 130 variables an integer, 1000 times. That's a total of 130,000 variable assignments.


#!/usr/bin/perl
use strict;
use warnings;

my ($a,$b,$c,$d,$e,$f,$g,$h,$i,$j,$k,$l,$m,$n,$o,$p,$q,$r,$s,$t)=0;
my ($u,$v,$w,$x,$y,$z) =0;
my ($aa,$ab,$ac,$ad,$ae,$af,$ag,$ah,$ai,$aj,$ak,$al,$am,$an,$ao,$ap)=0;
my ($aq,$ar,$as,$at,$au,$av,$aw,$ax,$ay,$az) =0;
my ($ba,$bb,$bc,$bd,$be,$bf,$bg,$bh,$bi,$bj,$bk,$bl,$bm,$bn,$bo,$bp)=0;
my ($bq,$br,$bs,$bt,$bu,$bv,$bw,$bx,$by,$bz) =0;
my ($ca,$cb,$cc,$cd,$ce,$cf,$cg,$ch,$ci,$cj,$ck,$cl,$cm,$cn,$co,$cp)=0;
my ($cq,$cr,$cs,$ct,$cu,$cv,$cw,$cx,$cy,$cz) =0;
my ($da,$db,$dc,$dd,$de,$df,$dg,$dh,$di,$dj,$dk,$dl,$dm,$dn,$do,$dp)=0;
my ($dq,$dr,$ds,$dt,$du,$dv,$dw,$dx,$dy,$dz) =0;

for ( 1 .. 1000 ){
  $a=$_;    
  $b=1;$c=1;$d=1;$e=1;$f=1;$g=1;$h=1;$i=1;$j=1;$k=1;$l=1;$m=1;$n=1;$o=1;
                 $p=1;$q=1;$r=1;$s=1;$t=1;$u=1;$v=1;$w=1;$x=1;$y=1;$z=1;
  $aa=1;$ab=1;$ac=1;$ad=1;$ae=1;$af=1;$ag=1;$ah=1;$ai=1;$aj=1;$ak=1;$al=1;$am=1;    
  $an=1;$ao=1;$ap=1;$aq=1;$ar=1;$as=1;$at=1;$au=1;$av=1;$aw=1;$ax=1;$ay=1;$az=1;
  $ba=1;$bb=1;$bc=1;$bd=1;$be=1;$bf=1;$bg=1;$bh=1;$bi=1;$bj=1;$bk=1;$bl=1;$bm=1;
  $bn=1;$bo=1;$bp=1;$bq=1;$br=1;$bs=1;$bt=1;$bu=1;$bv=1;$bw=1;$bx=1;$by=1;$bz=1;
  $ca=1;$cb=1;$cc=1;$cd=1;$ce=1;$cf=1;$cg=1;$ch=1;$ci=1;$cj=1;$ck=1;$cl=1;$cm=1;
  $cn=1;$co=1;$cp=1;$cq=1;$cr=1;$cs=1;$ct=1;$cu=1;$cv=1;$cw=1;$cx=1;$cy=1;$cz=1;
  $da=1;$db=1;$dc=1;$dd=1;$de=1;$df=1;$dg=1;$dh=1;$di=1;$dj=1;$dk=1;$dl=1;$dm=1;
  $dn=1;$do=1;$dp=1;$dq=1;$dr=1;$ds=1;$dt=1;$du=1;$dv=1;$dw=1;$dx=1;$dy=1;$dz=1;
}
print "a  ${a} \n";
Running the above program consistently takes 22 to 28 milliseconds on my laptop.

$ time ./loop.pl 
a  1000 

real 0m0.023s
user 0m0.016s
sys 0m0.006s


Let's change the above code so that this time we fork 1000 times without making any variable assignment within the child. And then fork 1000 times but this time assigning a value to those 130 variables within the child and see the time differences.

#!/usr/bin/perl
use warnings;
use strict;

print "before fork\n";
my ($a,$b,$c,$d,$e,$f,$g,$h,$i,$j,$k,$l,$m,$n,$o,$p,$q,$r,$s,$t)=0;
my ($u,$v,$w,$x,$y,$z) =0;
my ($aa,$ab,$ac,$ad,$ae,$af,$ag,$ah,$ai,$aj,$ak,$al,$am,$an,$ao,$ap)=0;
my ($aq,$ar,$as,$at,$au,$av,$aw,$ax,$ay,$az) =0;
my ($ba,$bb,$bc,$bd,$be,$bf,$bg,$bh,$bi,$bj,$bk,$bl,$bm,$bn,$bo,$bp)=0;
my ($bq,$br,$bs,$bt,$bu,$bv,$bw,$bx,$by,$bz) =0;
my ($ca,$cb,$cc,$cd,$ce,$cf,$cg,$ch,$ci,$cj,$ck,$cl,$cm,$cn,$co,$cp)=0;
my ($cq,$cr,$cs,$ct,$cu,$cv,$cw,$cx,$cy,$cz) =0;
my ($da,$db,$dc,$dd,$de,$df,$dg,$dh,$di,$dj,$dk,$dl,$dm,$dn,$do,$dp)=0;
my ($dq,$dr,$ds,$dt,$du,$dv,$dw,$dx,$dy,$dz) =0;

for ( 1 .. 1000 ){
   my $pid = fork();
   if (!defined $pid){
      print "fork failed $!";
   }elsif($pid == 0){
      exit(0);
   }else{
       wait(); # parent reaps child
   }
} 
Don't worry about the wait() function in the parent. I'll talk about that in the next article. Let's time it.

$ time ./fork.pl
before fork

real 0m1.549s
user 0m0.358s
sys 0m0.902s

Run it many times. It consistently takes around 1500 milliseconds (1.5 seconds) to run. Now let's make an assignment to all those 130 variables and see how long it takes. Remember that the same assignment of 130 x 1000 times without forking took about 25 milliseconds. So we would expect that making 130 variable assignments within the child would add 25 milliseconds to the 1500 ms for the fork part, for a total of 1525 ms, or so.

#!/usr/bin/perl
use warnings;
use strict;

print "before fork\n";
my ($a,$b,$c,$d,$e,$f,$g,$h,$i,$j,$k,$l,$m,$n,$o,$p,$q,$r,$s,$t)=0;
my ($u,$v,$w,$x,$y,$z) =0;
my ($aa,$ab,$ac,$ad,$ae,$af,$ag,$ah,$ai,$aj,$ak,$al,$am,$an,$ao,$ap)=0;
my ($aq,$ar,$as,$at,$au,$av,$aw,$ax,$ay,$az) =0;
my ($ba,$bb,$bc,$bd,$be,$bf,$bg,$bh,$bi,$bj,$bk,$bl,$bm,$bn,$bo,$bp)=0;
my ($bq,$br,$bs,$bt,$bu,$bv,$bw,$bx,$by,$bz) =0;
my ($ca,$cb,$cc,$cd,$ce,$cf,$cg,$ch,$ci,$cj,$ck,$cl,$cm,$cn,$co,$cp)=0;
my ($cq,$cr,$cs,$ct,$cu,$cv,$cw,$cx,$cy,$cz) =0;
my ($da,$db,$dc,$dd,$de,$df,$dg,$dh,$di,$dj,$dk,$dl,$dm,$dn,$do,$dp)=0;
my ($dq,$dr,$ds,$dt,$du,$dv,$dw,$dx,$dy,$dz) =0;

for ( 1 .. 1000 ){
   my $pid = fork();
   if (!defined $pid){
      print "fork failed $!";
   }elsif($pid == 0){
     $b=1;$c=1;$d=1;$e=1;$f=1;$g=1;$h=1;$i=1;$j=1;$k=1;$l=1;$m=1;$n=1;$o=1;
                    $p=1;$q=1;$r=1;$s=1;$t=1;$u=1;$v=1;$w=1;$x=1;$y=1;$z=1;
     $aa=1;$ab=1;$ac=1;$ad=1;$ae=1;$af=1;$ag=1;$ah=1;$ai=1;$aj=1;$ak=1;$al=1;$am=1;
     $an=1;$ao=1;$ap=1;$aq=1;$ar=1;$as=1;$at=1;$au=1;$av=1;$aw=1;$ax=1;$ay=1;$az=1;

     $ba=1;$bb=1;$bc=1;$bd=1;$be=1;$bf=1;$bg=1;$bh=1;$bi=1;$bj=1;$bk=1;$bl=1;$bm=1;
     $bn=1;$bo=1;$bp=1;$bq=1;$br=1;$bs=1;$bt=1;$bu=1;$bv=1;$bw=1;$bx=1;$by=1;$bz=1;
     $ca=1;$cb=1;$cc=1;$cd=1;$ce=1;$cf=1;$cg=1;$ch=1;$ci=1;$cj=1;$ck=1;$cl=1;$cm=1;
     $cn=1;$co=1;$cp=1;$cq=1;$cr=1;$cs=1;$ct=1;$cu=1;$cv=1;$cw=1;$cx=1;$cy=1;$cz=1;
     $da=1;$db=1;$dc=1;$dd=1;$de=1;$df=1;$dg=1;$dh=1;$di=1;$dj=1;$dk=1;$dl=1;$dm=1;
     $dn=1;$do=1;$dp=1;$dq=1;$dr=1;$ds=1;$dt=1;$du=1;$dv=1;$dw=1;$dx=1;$dy=1;$dz=1;
     exit(0);
   }else{
     wait();
   }
}
Running the above multiple times takes about 1700 milliseconds, but I expected 1525 ms. There's consistently about a 175 ms gap. That's because the kernel has to copy those $ba, $dc and so on variables from the parent into the child's address space before making the assignments. That's a significant performance degradation. Most of the time we can live with it if it is done a few times. But if you fork and copy on a continuous basis, then each 100 ms gap will add up fast to a sluggishly performing program.

Hard Link Soft Symbolic Links

Hard links and soft links are easily known and used by anyone on the unix console. A soft link can be made between file systems, but the fact that a hard link cannot extend beyond the current file system is known to everyone, yet its explanation remains cloudy to most. We go for decades working with Unix and never clear up the issue. This article will explain in detail what hard links are, why they are confined to their own file system, and their impact on the existence of a file. But first let's explain soft links, or symbolic links, whichever you wish to call them.

There is a prerequisite to this. Please read the article on files and directory permissions, which explains how a directory maps file names to inode numbers and how the inode contains all the information about a file except the file's name. I assume that that is clear before continuing.

Soft or Symbolic Link:

I explained in the previous article that a directory contains nothing but file names and for each file name there is an inode number. This i-node number refers to the i-node structure definition for this file. One of the flags in the i-node structure defines the type of file. When the ls -l command displays the file, it shows those files that are symbolic links with an l at the beginning. 

farhad@farhad-desktop:/tmp/test$ ln -s /usr/bin/java java ; ls -l
lrwxrwxrwx 1 farhad farhad 13 2010-12-25 01:36 java -> /usr/bin/java

A note on permissions on symbolic links: they have no effect at all. Now, what I just created is an entry in the directory /tmp/test/ that has the name java. In fact I just created a new file with a new i-node definition. A new set of blocks on disk was reserved for this new file. The type of file recorded in the inode is set to symbolic link. And the contents inside those data blocks for this file on the hard drive are 13 bytes which form the string "/usr/bin/java" (count the characters). As a matter of fact, notice the number 13 in the output above.

Let's show the i-node number for the newly created file /tmp/test/java and the one we had before, /usr/bin/java:

farhad@farhad-desktop:/tmp/test$ ls -i java /usr/bin/java
6426080 java  4206093 /usr/bin/java

Let's show this graphically:

soft_link.png

I could have saved all the wording and just showed this picture which took me an hour to draw. The inode 6426080 is of type S_IFLNK, or a Symbolic Link. 

When the file /tmp/test/java is accessed, it becomes known that this file is a symbolic link, and therefore its blocks are read to find out where the destination file is. If the destination file is itself another symbolic link then its blocks are also read and followed. This process continues until a regular file (any file other than a symbolic link) is found. This behavior is program dependent: the programmer chooses whether to follow the symbolic link or not. Notice that the symbolic link can be deleted without affecting the destination file in any way. They are completely different files. That's also why they are allowed to reside on separate file systems: deleting one has no effect on the other, because each has its own inode.
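
That choice shows up in which call a program makes: stat() follows the link, lstat() examines the link itself, and readlink() reads those 13 bytes of target text. A small sketch using the /tmp/test/java link from above (error handling kept minimal for brevity):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void){
    struct stat st;
    char target[4096];

    lstat("/tmp/test/java", &st);                 /* look at the link itself */
    printf("is a symlink: %s\n", S_ISLNK(st.st_mode) ? "yes" : "no");

    stat("/tmp/test/java", &st);                  /* follows the link to /usr/bin/java */
    printf("destination size: %lld bytes\n", (long long)st.st_size);

    ssize_t n = readlink("/tmp/test/java", target, sizeof(target) - 1);
    if (n >= 0) {
        target[n] = '\0';                         /* readlink does not NUL-terminate */
        printf("link contents: %s\n", target);    /* prints /usr/bin/java */
    }
    return 0;
}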

Hard Links:

It is easy now to differentiate between a soft link and a hard link. Simply put, a hard link refers directly to an existing i-node instead of creating a new i-node for the new directory entry as a soft link does.

Note that a hard link cannot be created across different file systems. Since my /tmp and /usr are on different file systems, trying to create a hard link from /tmp to anything under /usr will fail:

farhad@farhad-desktop:/tmp$ ln /usr/bin/java /tmp/java
ln: creating hard link `/tmp/java' => `/usr/bin/java': Invalid cross-device link

This is the main reason why anyone would read this article :-) They want to know why this is the case and now it will all become clear. To be able to create a hard link to /usr/bin/java I must create a link within the same partition as /usr. But before I actually create the link, look at the ls -il output of the existing /usr/bin/java file:

farhad@farhad-desktop:/$ ls -li /usr/bin/java
4206093 -rwxr-xr-x 1 root  root  38508  2010-09-07   10:35  /usr/bin/java

Note the "1" in the above listing; that is the link count. I will create a hard link to /usr/bin/java and we'll look at this output again to see what happens.

When we create a hard link we remove the -s option:

hard_link.gif
The one thing that is important to understand is that NO NEW inode was created. A hard link only creates a new directory entry to an existing inode. Now let's look again at the output of ls -li on the existing /usr/bin/java:

farhad@farhad-desktop:/$ ls -li /usr/bin/java
4206093 -rwxr-xr-x 2 root  root  38508  2010-09-07   10:35  /usr/bin/java

Notice that the count, which was 1 before we created the hard link, has been incremented to 2. This is another field of the i-node. Every time a new hard link is created this count is incremented. Remember in the previous article on permissions I mentioned how a user does not need write access to a file in order to be able to delete it? The real answer has to do with hard links. The data blocks of a file, including its i-node, are only permanently freed by the kernel when the number of hard links in the i-node reaches zero. So if I were to delete the java entry inside the directory /usr/bin, the java file and its inode would still remain, because the link count would drop back to 1, not zero.
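
You can watch that count from C as well with link(), stat() and unlink(); the st_nlink field is exactly this counter. A small sketch, with both file names made up for the example and living on the same file system:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void){
    struct stat st;
    int fd = open("/tmp/original", O_CREAT | O_WRONLY, 0644);   /* example path */
    close(fd);

    stat("/tmp/original", &st);
    printf("links after create: %ld\n", (long)st.st_nlink);     /* 1 */

    link("/tmp/original", "/tmp/hardlink");                      /* same inode, new directory entry */
    stat("/tmp/original", &st);
    printf("links after link:   %ld\n", (long)st.st_nlink);     /* 2 */

    unlink("/tmp/original");                                     /* remove one of the names */
    stat("/tmp/hardlink", &st);
    printf("links after unlink: %ld\n", (long)st.st_nlink);     /* back to 1, file still exists */

    unlink("/tmp/hardlink");                                     /* now the count hits 0 and the blocks are freed */
    return 0;
}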

And finally, to answer the very famous question of why a hard link cannot cross file systems: the reason has to do with the reference a hard link makes to the i-node. In order to maintain file system integrity, every entry inside a directory file must refer to an inode within the same file system. That's because of the hard link count: this count has to stay valid. If you remove an entry inside a directory file, the kernel must correctly decrement the hard link count inside the corresponding inode. If a directory entry could point to an inode on another file system, you could wipe out the file system holding that directory entry, and what would happen to the inode's count? It would incorrectly remain unchanged, and the kernel would never free the inode because its count would never reach zero. That would be a leaked inode.

The other reason why a hard link cannot cross file systems is that nothing in the directory entry or the i-node identifies another file system. All i-nodes inside the i-list refer to data blocks that reside within the same file system; they don't hold addresses of blocks on another partition, because there's no field for that in the i-node. Think of the file system as a street. Each house on the street is a data block. The inode holds a door number for each house but does not know the name of the street it is on. So when the kernel looks up a door number from the inode, it assumes that this address is on the same street as the inode itself. This is by design. Likewise, a hard link is just an inode number written into a directory, and that number is assumed to index the i-list of the same partition; there is no way of saying on which other partition the inode would be. This is the other reason why a hard link cannot cross file systems: if it did, the inode number stored in the directory would be insufficient to find it.

I hope that my explanation of hard and symbolic links was satisfactory.

Files Directory security setuid sticky bit permissions

Understanding file and directory permissions and their creation and deletion rules on Unix is one of the most basic and important topics, yet one that even veteran unix administrators sometimes neglect to master. I don't mean the extended stuff; FreeBSD, Linux, Solaris and AIX all have their own extended access control lists on top of the standard security they all have in common.

In this article I'll talk about what the read, write and execute bits mean on a file and also on a directory. I will also cover the setuid and sticky bits. I will then explain what rules the kernel follows when deleting a file and why it is most often called "unlinking" a file rather than "deleting" it. In another article I'll cover what hard links and soft links fundamentally are. Some of this, such as what the read, write and execute bits mean on a file, is easily known by everyone, but not entirely. And I promise that things will get more interesting further down.

I won't get into explaining what user ID's, group ID's and "other" mean for each file because you already know it. When I talk about the read permission bit on a file, it should be known that if the read bit is on the group then any user in that group will have read access and if the read bit is on the "other" (sometimes called world) then everyone with a user account on the system will have read access.

Before we start, let's explain what a directory is. A directory in Unix is simply a file, and we occasionally call it a "directory file," except that the directory is a file with a special flag telling the kernel that its content consists of nothing but names (a character array in C), and for each name there's a number. This name is the name of a file, a directory, or any other kind of file. The number is a positive integer and it is called the inode number. It is the address of the i-node in the file system's i-list. The i-node inside the i-list contains information about the file such as the file's type, the file's access permission bits, the size of the file, pointers to the data blocks for the file and so on.

unix_file_system.gif

That's how the location of every file is found. The inode number for a file is looked up inside the directory file. Then the inode structure is fetched from the file system's i-list, and from it the start of the data blocks for this file is read. Notice that the one piece of information that the inode does not contain about its file is the name of the file. That's in the directory file.

The read bit on a file: 
-r--r--r--
  Allows the user to read those blocks of data assigned to this file.

The read bit on a directory: 
dr--r--r--
  Allows the user to call opendir() to read the directory file. Notice that we are not referring to the inodes yet; that's what the ls command does without any options. However, if we issue the "ls -l" command, then for each file in the directory its inode is looked up and read so that the file size, permission bits, number of links to the file and so on can be displayed. It is important to know that anything other than LISTING does NOT require the read bit.

The write bit on a file: 
--w--w--w-
Allows the user to modify the content of the hard drive address blocks assigned to this file.

The write bit on a directory: 
d-w--w--w-
It allows the user to write a name (a string) and an inode number into the directory file. This name and number will define another file or a directory (a directory is also a file) that will be the child of this directory. If the user cannot write the name of a new file inside the directory file, then the entire new file creation fails.


The execute bit on a file: 
---x--x--x
It allows the user to ask the kernel to execute the file in a new process. If the file is a binary it is executed directly; if it is not, an interpreter (perl, python, bash, php) is invoked to translate the content into machine executable instructions.

The execute bit on a directory: 
d--x--x--x
Every process in unix has a state called the Current Working Directory (CWD). If the execute bit is set for a user, then this user's shell (the current process) will be permitted to change its current working directory to this one. In the case of a process, the execute bit on a directory must be set in order to allow the process to successfully make the chdir system call into that directory. This is the case for the web server Apache: if the execute bit on a docroot is not set for apache's process ID, then apache's chdir call into its docroot will fail and apache will exit because of it. We will see below that this failure could be circumvented. In short, the execute bit allows the user to change directory into this directory.

With all the above being said, let's put ourselves in scenarios to test our knowledge. We have two users, farhad and usr1. Consider the permission bits on this file and directory:


farhad@farhad-desktop:/tmp/test$ ls -l
drwx------ 2 farhad farhad 4096 2010-12-23 15:51 dir1

farhad@farhad-desktop:/tmp/test$ ls -l dir1/
---------x 1 farhad farhad 29 2010-12-23 15:51 hello.sh

farhad@farhad-desktop:/tmp/test$ chmod 001 dir1/ ; ls -l
d--------x 2 farhad farhad 4096 2010-12-23 15:51 dir1

The question is whether usr1 is allowed to execute hello.sh. It is under dir1, whose only permission is the execute bit on other. Demonstrating this with the user or group bits would have been the same; I could add usr1 to the group farhad and set the execute bit on the group instead of other.

This output shows whether usr1 can execute hello.sh:

usr1@farhad-desktop:/tmp/test$ ls -l
d--------x 2 farhad farhad 4096 2010-12-23 15:51 dir1
usr1@farhad-desktop:/tmp/test$ ls -l dir1/
ls: cannot open directory dir1/: Permission denied
usr1@farhad-desktop:/tmp/test$ ./dir1/hello.sh
hello there
usr1@farhad-desktop:/tmp/test$ cd dir1
usr1@farhad-desktop:/tmp/test/dir1$ ls
ls: cannot open directory .: Permission denied

Most people will be surprised here. Yes, it was able to execute it!! Consider how important the above is. usr1 was unable to list the directory dir1 because there's no read access anywhere on it. But it *was* able to *execute* the file hello.sh which is under it. Why? Because executing the file did not require *listing* the directory (no need to call opendir()). usr1 only has to be able to search (execute) dir1, and if it already knows the name of the file, the kernel can execute it by looking up its inode inside the directory. This is different than *listing* the directory. With this knowledge we can drastically improve the security of our web servers!
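
The same surprise can be reproduced from C. With only the execute (search) bit on dir1, opendir() fails with Permission denied, while looking up a name we already know inside it, here with stat(), succeeds. A sketch assuming the same /tmp/test/dir1 layout as above, run as usr1:

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <dirent.h>
#include <sys/stat.h>

int main(void){
    DIR *d = opendir("/tmp/test/dir1");                /* listing needs the read bit */
    if (d == NULL)
        printf("opendir: %s\n", strerror(errno));      /* Permission denied */
    else
        closedir(d);

    struct stat st;
    if (stat("/tmp/test/dir1/hello.sh", &st) == 0)     /* looking up a known name only needs x on dir1 */
        printf("hello.sh exists, mode %o\n", (unsigned)(st.st_mode & 0777));
    else
        printf("stat: %s\n", strerror(errno));
    return 0;
}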

Here's a side note on the linux 2.6.35-22 kernel I'm testing on. If I were to do the same thing with user farhad instead, the whole thing would fail, because Linux does not consider the owner to also be part of other. (Very weird!) Keeping the same permissions as before, let's run the same commands with user farhad this time:

farhad@farhad-desktop:/tmp/test$ ls -l
d--------x 2 farhad farhad 4096 2010-12-23 15:51 dir1
farhad@farhad-desktop:/tmp/test$ ls -l ./dir1/
ls: cannot open directory ./dir1/: Permission denied
farhad@farhad-desktop:/tmp/test$ ./dir1/hello.sh
bash: ./dir1/hello.sh: Permission denied

See how Linux is preventing hello.sh from executing? The same thing can be executed by any other user (such as usr1 shown above). Linux is not considering the owner of the file to be part of the world (others). That's a peculiar behavior.

Now notice something else. I'll be switching between users to show this so pay attention :-)

farhad@farhad-desktop:/tmp/test$ ls -l dir1/
ls: cannot open directory dir1/: Permission denied
farhad@farhad-desktop:/tmp/test$ ls -ld dir1
d--------x 2 farhad farhad 4096 2010-12-23 15:51 dir1
farhad@farhad-desktop:/tmp/test$ chmod o+w dir1 ; ls -ld dir1
d-------wx 2 farhad farhad 4096 2010-12-23 15:51 dir1

Now on to the usr1 console, see how usr1 has no read access to the directory. And the permissions on hello.sh remain "---------x" as before.

usr1@farhad-desktop:/tmp/test$ ls -l
d-------wx 2 farhad farhad 4096 2010-12-23 15:51 dir1
usr1@farhad-desktop:/tmp/test$ ls -l dir1
ls: cannot open directory dir1: Permission denied
usr1@farhad-desktop:/tmp/test$ rm dir1/hello.sh
rm: remove write-protected regular file `dir1/hello.sh'? yes
usr1@farhad-desktop:/tmp/test$ ls -l dir1
ls: cannot open directory dir1: Permission denied

Wow! We think that usr1 just deleted hello.sh but we can't even confirm it because we cannot list the directory dir1. Let's check with user farhad whether hello.sh was really deleted.

farhad@farhad-desktop:/tmp/test$ ls -ld dir1
d-------wx 2 farhad farhad 4096 2010-12-23 15:51 dir1
farhad@farhad-desktop:/tmp/test$ ls -l dir1
ls: cannot open directory dir1: Permission denied
farhad@farhad-desktop:/tmp/test$ chmod 505 dir1/
farhad@farhad-desktop:/tmp/test$ ls -l dir1
total 0  (It is really deleted)


Not even farhad was able to list dir1 even though he's the owner. I gave dir1 read permission for its owner so I could look, and indeed usr1 previously was able to delete hello.sh. But wait a minute! hello.sh did not have the w bit set anywhere. No one could update the script hello.sh. How was usr1 able to delete it? That's because the file hello.sh exists as long as there is an entry for it in a directory somewhere. It doesn't even have to be its current directory (more on that later; hint: hard links). So you see that a file's existence is only tied to a mention of its inode in a directory, not to the actual blocks of the file's content on disk or the permissions of the file itself. We don't need write access to a file in order to delete it! We only need write access to the directories that mention this file's i-node number. You usually only get this info in a Unix systems programming book.
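
This is also why the system call behind rm is literally named unlink(): it removes one name-to-inode entry from a directory, so it needs write permission on the directory and nothing at all on the file. A minimal sketch, reusing the same example path:

#include <stdio.h>
#include <unistd.h>

int main(void){
    /* removing dir1/hello.sh only modifies the directory file dir1;
       the permission bits of hello.sh itself are never consulted */
    if (unlink("/tmp/test/dir1/hello.sh") == 0)
        printf("directory entry removed; blocks are freed once the link count reaches zero\n");
    else
        perror("unlink");
    return 0;
}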

Sticky bit:

Let's first get the case of a sticky bit on a regular file out of the way. It has NO effect at all.

The sticky bit is always set on the /tmp directory, and on Solaris on /var/tmp as well. If the bit is set, then a file in that directory can be removed or renamed only if the user has write permission for the directory and either:
  • Owns the file,
  • Owns the directory, or
  • Is the superuser.

Let's demonstrate on our dir1 by first setting a full 777 permissions:

farhad@farhad-desktop:/tmp/test$ ls -l
drwxrwxrwx 2 farhad farhad 4096 2010-12-23 17:59 dir1
farhad@farhad-desktop:/tmp/test$ chmod o+t dir1 ; ls -l
drwxrwxrwt 2 farhad farhad 4096 2010-12-23 17:59 dir1

We now have two other users, usr1 and usr2. Let's create a file with usr1:

usr1@farhad-desktop:/tmp/test/dir1$ touch file1; ls -l
-rw-r--r-- 1 usr1 usr1 0 2010-12-23 23:05 file1

Because dir1 has the sticky bit set, only usr1 or the owner of the directory, which is farhad, can delete file1. Let's see if usr2 can delete it:

usr2@farhad-desktop:/tmp/test/dir1$ ls -l
-rw-r--r-- 1 usr1 usr1 0 2010-12-23 23:05 file1
usr2@farhad-desktop:/tmp/test/dir1$ rm file1
rm: remove write-protected regular empty file `file1'? yes
rm: cannot remove `file1': Operation not permitted

But farhad can delete it because he owns the directory:

farhad@farhad-desktop:/tmp/test$ rm dir1/file1
rm: remove write-protected regular empty file `dir1/file1'? yes
farhad@farhad-desktop:/tmp/test$ ls -l dir1/
total 0
farhad@farhad-desktop:/tmp/test$

That's basically the gist of it. You set the sticky bit on a directory that has its write bit open to everybody in order to protect its contents from being deleted by just anybody.

If you ever see the sticky bit with a capital T instead of a small t then it means that the directory has its execute bit removed on other.

farhad@farhad-desktop:/tmp/test$ chmod 776 dir1/ ; ls -ld dir1
drwxrwxrw- 2 farhad farhad 4096 2010-12-23 18:09 dir1/
farhad@farhad-desktop:/tmp/test$ chmod o+t dir1/ ; ls -ld dir1
drwxrwxrwT 2 farhad farhad 4096 2010-12-23 18:09 dir1/

See the capital T? It is the same sticky bit, but since the execute bit is missing at that other position, the sticky bit shows as a capital to notify us of this fact. In this case the directory is only useful to its owner and to user IDs belonging to the group ID of the directory, because the rest of the IDs won't be able to chdir into it (see above).

setuid:

The set-ID bit on a directory is only effective when it is on the group bit (setgid).

farhad@farhad-desktop:/tmp/test$ chmod g+s dir1/ ; ls -l
drwxrwsrwx 2 farhad farhad 4096 2010-12-24 00:40 dir1

Since the group id of dir1 is farhad, with the setgid bit set any file created under dir1 will inherit the group id of dir1 as well.

usr1@farhad-desktop:/tmp/test/dir1$ touch file1 ; ls -l
-rw-r--r-- 1 usr1 farhad 0 2010-12-24 00:50 file1

You can see that file1 was created by usr1 but it inherited dir1's group. When the set-ID bit is on the user or the other position of a directory it has no effect. I'll update this if I find otherwise.

The main use of setuid is on an executable file. Every process has a "real UID" and an "effective UID." (There's also a "saved set-UID" which we don't need to worry about now; that's for when we talk about the exec() function.) The real UID is the actual ID fetched from the /etc/passwd file. It is the ID you get when you log in to your shell. It is the "effective user ID" and the "effective group ID" that determine the process's file access permissions (not the real user ID nor the real group ID).

Normally a process's effective user and group IDs are equal to its real user and group IDs. The setuid bit instructs the kernel to set the effective user ID of the process to that of the owner of the executable file, and similarly the setgid bit tells the kernel to set the effective group ID of the process to the group ID of the executable file.

Remember that file access permissions are checked against the effective user ID and effective group ID of a process. Let's explain with an example:

farhad@farhad-desktop:/tmp/test$ ls -l /usr/bin/passwd ; ls -l /etc/shadow
-rwsr-xr-x 1 root root 37100 2010-09-03 03:28 /usr/bin/passwd
-rw-r----- 1 root shadow 1159 2010-12-23 22:24 /etc/shadow

You can see that the passwd executable file has the setuid bit set. This means that any process executing passwd will end up with its effective user ID being that of the owner of the executable file, in this case root. Everyone can execute passwd because its execute bit is set in the other field. This is how usr1 can set its effective user ID to root and hence obtain file access permissions to change the shadow file, which contains the encrypted passwords. The same rule applies to the setgid bit.
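
You can watch the real and effective IDs diverge with getuid() and geteuid(). A minimal sketch: if you compile this and, as an experiment, chown it to root and chmod u+s it, running it as a normal user prints your own real uid but an effective uid of 0.

#include <stdio.h>
#include <unistd.h>

int main(void){
    /* if this binary is owned by root and has the setuid bit set (chmod u+s),
       the effective uid becomes 0 while the real uid stays the invoking user's */
    printf("real uid: %d  effective uid: %d\n", (int)getuid(), (int)geteuid());
    printf("real gid: %d  effective gid: %d\n", (int)getgid(), (int)getegid());
    return 0;
}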

I hope that this article cleared some things up for some folks.

Dual Monitor ATI Radeon RV100 QY Radeon 7000/VE X.org

How I got my Ubuntu 10.10 Maverick setup to use two monitors with the ATI Technologies Inc Radeon RV100 QY Radeon 7000/VE video card was more troubling than I imagined. The card works fine straight out of the installation, but the trouble with slowness starts when you run xrandr to use two monitors.

Video performance is fine using one screen. But with two monitors and xrandr, moving a window takes a few seconds. It is just too slow and unusable. I upgraded from Ubuntu 8.04 Hardy and I had no problems there. But with Ubuntu 10.10 Maverick things changed a bit and hurt the Radeon RV100's performance. That's because the Direct Rendering Infrastructure or DRI is not enabled. Here's how I enabled it.

It seems that X runs on Maverick without the need for you to explicitly configure an xorg.conf. The problem is that DRI is not enabled, and if you want speedier graphics then you need to configure your own xorg.conf and enable DRI.

$ lspci | grep VGA
01:00.0 VGA compatible controller: ATI Technologies Inc Radeon RV100 QY [Radeon 7000/VE]

Here's a picture of my dual monitor setup:
dual_monitor.gif
I'm writing this about a month after the installation so forgive me if I don't remember all the details. But the most important part should be here to fix your slowness.

First look under /etc/X11 and you'll see that there's no longer an xorg.conf file. There's a default for your card, detected and used out of the box for you. We'll need to get back to configuring our own xorg.conf.

You'll have to kill X, configure xorg.conf and startx until you get it working. Run this to stop the automatic restart of X.

sudo /etc/init.d/gdm stop

Then switch to a non-GUI screen by pressing alt+F1 or alt+F2. Or was it Ctrl+Alt+F2? Try them until you get the black and white login console. No GUI. Before showing xorg.conf, I have to say that I use windowmaker, so I place wmaker in my home's .xinitrc.

$ cat ~/.xinitrc
wmaker
$

It's just one line "wmaker" which is the window maker executable. 

Now let's play with /etc/X11/xorg.conf and run the command startx. If this file is available, X will use it instead of what it figured out by itself during the installation. If you have doubts whether your /etc/X11/xorg.conf file is picked up or not, check the log file /var/log/Xorg.0.log and it will explicitly say "Using config file: /etc/X11/xorg.conf."

Let's get a default first:

$ Xorg -configure

This will probe your devices and will generate a default xorg.conf file for you. It should identify your monitors and their horizontal and vertical frequencies as well.

Now take your default and change it to look like this one that I have. Pay attention to the SubSection "Display" part where you have to add a Virtual line. You are extending both monitors horizontally, so 1280 + 1680 = 2960.

Here's my xorg.conf file that you need:

Section "ServerLayout"
Identifier     "X.org Configured"
Screen      0  "Screen0" 0 0
InputDevice    "Mouse0" "CorePointer"
InputDevice    "Keyboard0" "CoreKeyboard"
EndSection

Section "Files"
ModulePath   "/usr/lib/xorg/modules"
FontPath     "/usr/share/fonts/X11/misc"
FontPath     "/usr/share/fonts/X11/cyrillic"
FontPath     "/usr/share/fonts/X11/100dpi/:unscaled"
FontPath     "/usr/share/fonts/X11/75dpi/:unscaled"
FontPath     "/usr/share/fonts/X11/Type1"
FontPath     "/usr/share/fonts/X11/100dpi"
FontPath     "/usr/share/fonts/X11/75dpi"
FontPath     "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
FontPath     "built-ins"
EndSection

Section "Module"
Load  "dri"
Load  "dri2"
Load  "extmod"
Load  "record"
Load  "glx"
Load  "dbe"
EndSection

Section "InputDevice"
Identifier  "Keyboard0"
Driver      "kbd"
EndSection

Section "InputDevice"
Identifier  "Mouse0"
Driver      "mouse"
Option    "Protocol" "auto"
Option    "Device" "/dev/input/mice"
Option    "ZAxisMapping" "4 5 6 7"
EndSection
Section "Monitor"
        #DisplaySize      380   300     # mm
        Identifier   "Monitor0"
        VendorName   "DEL"
        ModelName    "DELL 1907FP"
        HorizSync    30.0 - 81.0
        VertRefresh  56.0 - 76.0
        Option      "DPMS"
EndSection

Section "Monitor"
#DisplaySize  470   300 # mm
Identifier   "Monitor1"
VendorName   "ACR"
ModelName    "AL2216W"
HorizSync    31.0 - 84.0
VertRefresh  56.0 - 77.0
Option    "DPMS"
EndSection
Section "Device"
Identifier  "Card0"
Driver      "ati"
BusID       "PCI:1:0:0"
EndSection
Section "Screen"
Identifier "Screen0"
Device     "Card0"
Monitor    "Monitor0"
        DefaultDepth   24
SubSection "Display"
Viewport   0 0
Depth     24
                Modes           "1280x1024"
                Virtual          2960 1050
EndSubSection
EndSection

Section "DRI"
        Mode 0666
EndSection
        
Section "Extensions"
        Option "Composite" "Enable"
EndSection


The sections that I found on a website (thank you!) and added to my config, which enabled DRI, are the last two sections, DRI and Extensions. I don't know what the Extensions one is for and I don't really ask. It works and I don't touch it. Finding this on the net is so hard that I thought I should document it here.

Now if you just run startx, hopefully you'll have window maker come up looking great. To render both monitors as one virtual screen, we'll use xrandr.

$ xrandr --output DVI-0 --mode 1280x1024 --pos 0x0 --output DVI-1 --mode 1680x1050 --pos 1280x0

That's because running xrandr without options shows that DVI-0 is my 19'' DELL and DVI-1 is my 22'' Acer. The positions work from top left down to bottom right. See the figure way above where the xrandr positions 0x0 and 1280x0 are shown.

Don't forget to restart windowmaker after running xrandr to fix the screen. I hope that this will help someone else out there. It took me two days to find and piece all of this together. That's why Linux is still way behind Windows and Mac.