This article is aimed at computer enthusiasts interested in systems programming and in the performance of operating systems. I have tried to keep the level of the article comparable with one in, say, BYTE.
For me, the attraction of the computer was not what could be done with it, but how it could be done. When I was about 14 I bought my first computer: a second-hand BBC Model B with 32k RAM and a 100k 40-track single-sided disk drive, and proceeded to teach myself how to program in BASIC. Before long I started to wonder how this incredible machine did what it did, so I dove into the realms of assembler and machine code and discovered the ingenious interrupt mechanism, memory-mapped input/output and the 16k Operating System ROM. To this day I still don't understand all its intricacies. Would this be true if I had a higher-level description of the algorithms used?
Probably not, and this is precisely what draws many a computer enthusiast to Linux, the free (meaning freely distributable and modifiable) 'UNIX-like' operating system for the PC (and other platforms). The fact that the source code is available opens the door to deeper understanding and greater control, not to mention a fatter wallet!
Linus Torvalds (the original author of Linux) and other contributors have been working on Linux for more than 5 years now, and the question which I hope this article answers is: What have they been able to come up with and how does it compare technically with commercial operating systems for the PC?
Firstly, Linux can be installed on your hard disk without having to remove DOS, Windows or OS/2, and a special filing system called UMSDOS allows Linux access to all your existing files. DOS programs can also be run under Linux using a program called DOSEMU; for more information see (Ref 7m). For a comprehensive list of features see (Ref 4 - go to option 2, Linux Features).
The stable version of Linux as of 8th February 1996 was/is 1.2.13, which is (mostly) source-level compatible with the IEEE's POSIX standard, System V and BSD (Berkeley Software Distribution) UNIXes. Also, by using an iBCS2-compliant emulation module, it is mostly binary compatible with System V Releases 3 and 4. For (incomplete) lists of programs and applications available for Linux see (Ref 4, 7d, 7j, 7k).
What I consider to be very important, especially if Linux is to become more popular with businesses, is that the existing data files used by DOS, Windows and OS/2 applications should be convertible into an open standard file format (.SFF) usable by all Linux applications. See (Ref 7j) for a related issue.
Although Linux is available on many other hardware platforms (and this portability and scalability contribute to its popularity), I have only considered the PC (itself a broad term), as it is the original platform that Linux was written for and forms the majority of the home computer market.
Linux will run on 386 and higher processors, in what is called 'protected mode', which means that multiple tasks (or processes) can be running, yet they cannot interfere with one another or access system resources without permission from the operating system (which executes at the highest privilege level, called supervisor mode). The processes are each given a separate virtual address space (32bits => 4GBytes!) and are context switched under the control of the operating system to provide concurrency. In effect this means that, as long as the kernel code doesn't contain any bugs, the system will be very stable.
One of the big problems with the x86 family of processors is that, for backwards-compatibility reasons, they operate in 'real mode' by default. This basically emulates how an 8086 addresses memory, with no protection system. The problem arises in that the BIOS (Basic Input Output System) for these systems is also written in such a way that it must be executed in real/16bit mode. Thus when a protected-mode program wishes to access the BIOS, it has to switch the processor back to real mode, call the BIOS function, then switch back to protected mode. This takes time, and because of the addressing limitations in real mode (20 bits) it also leads to data transfer complications. It is also a possible stability problem, as protection is disabled and data could easily be written on top of another process's data.
For these reasons, Linux and OS/2 don't use the conventional BIOS provided in a ROM in the machine but access the hardware directly (Ref 7l). In OS/2 this is done by 'building' a custom Advanced BIOS in RAM from the details of the hardware in your particular machine, which will run in protected mode (Ref 17). In Linux it is done by choosing hardware-specific device drivers to include in the Kernel. See (Ref 4) for more information on supported hardware. Hardware manufacturers will soon be bringing out ABIOSes with 'plug and play' compatibility for use with Windows95 and Windows NT 4.0 Workstation (Ref 16), and hopefully this new BIOS will be utilised by Linux, whenever it is detected, to increase hardware compatibility. Even without this, the Linux user has the advantage that there will probably be someone else with the same incompatible hardware who will write a Kernel patch to solve the problem (Ref 7a).
Current 'UNIX-like' Kernels such as Linux have a monolithic structure, which means that they are essentially a single piece of executable code. Because they are structured in this way, if a new device driver needs to be installed, the whole Kernel needs to be re-compiled (Ref 7f)! This is one of the major disadvantages of 'UNIX-like' systems. However, Linux has a feature which overcomes this limitation: it supports dynamically loadable Kernel modules, which are basically independent fragments of executable code that can be linked into the Kernel as it is running. These modules then act as part of the Kernel and so can be used to implement low-level features such as device drivers (Ref 7n, 13).
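By way of illustration, the skeleton of a module in the current (1.2-era) interface is just two entry points: init_module(), called when the module is linked into the running Kernel, and cleanup_module(), called when it is removed again. This is only a sketch, and of course it cannot be compiled or run outside a Kernel build environment:

```c
/* Skeleton of a loadable Kernel module (1.2-era interface).
 * Loaded with insmod, removed with rmmod. */
#include <linux/module.h>
#include <linux/kernel.h>

int init_module(void)
{
    /* register the driver, claim I/O ports, etc. would go here */
    printk("example module: loaded\n");
    return 0;               /* non-zero would abort the load */
}

void cleanup_module(void)
{
    /* undo everything init_module() did */
    printk("example module: removed\n");
}
```

A real driver would register itself with the appropriate Kernel subsystem inside init_module(); the point is that none of this requires re-compiling the Kernel itself.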
An alternative to the unstructured monolithic Kernel is the micro-kernel architecture, which is becoming much more common. In this approach the Kernel is much smaller and will usually implement only the protection scheme, process abstraction and communication/synchronisation primitives. I/O handlers and low-level procedures are then implemented as processes and are therefore isolated from each other, with clean interfaces between other processes and the kernel. An example is the new MacOS, aka Copland (Ref 14). That article also mentions a Hardware Abstraction Layer (HAL - nothing to do with a Dr Chandra, I hope!) which would improve its portability. (Ref 15) mentions that Windows NT also uses a HAL and that it can be easily ported across Intel x86/Pentium, MIPS R4000/R4400 and DEC Alpha processors. Admittedly I do not know the details of how this is done; however, the extra layer will invariably result in longer execution times than without it.
This is an area where the Linux Kernel would probably win if measured by efficiency. I would also say that Linux is relatively portable without having to resort to a HAL. However, if ease of development, maintainability and stability are more important then the Micro-Kernel is the better choice because of its encapsulation of all low-level non-micro-kernel functions.
Linux supports multiple processes running concurrently using a pre-emptive scheduling strategy, and also supports multiple users using the machine at the same time. Whilst Windows95, Windows NT and OS/2 can all multitask (or multiprogram), none of these systems supports multiple users (multi-access). Although the latest versions of the commercial OSs all multitask most 32bit (protected) and 16bit (real or DOS or Windows 3...) applications comfortably (Ref 18, 20), Windows95 has the limitation that old 16bit programs are run in the same 'Virtual Machine', which is pre-emptively multitasked with all the running 32bit tasks. Within this Virtual Machine, however, the 16bit programs are 'co-operatively multitasked', which means that each process runs until it gives up the processor or crashes - in which case every 16bit application is brought down with it (Ref 15).
As an aside, OS/2 and Windows NT both support multi-processor systems, which are not currently supported by Linux; see SMP (Symmetric MultiProcessing) in (Ref 15, 17). This feature is being worked on even as you read this.
Linux uses the 386's own paging mechanism (Ref 1, page 582) to implement virtual memory; however, it has a number of clever features implemented on top of this. Namely:-
Considering that Linux has been put together by a group of unpaid programmers in a little over 5 years, and that it can be considered to be on the same technical level as commercial operating systems such as Windows NT, Windows95 and OS/2 Warp, I would say that the future looks promising. Currently it isn't the easiest operating system to install and configure, and there aren't many large, easy-to-use applications about. But this is bound to change in the near future; if enough people get involved and enjoy adding to this already brilliant system then, with the current rate of progress, Linux may well overtake these commercial operating systems and give them a run for their money!