Sunday, December 7, 2014

 

Order Linux Discs


Avoid Systemd by trying AntiX, Slackware or OpenBSD.
For older computers try Puppy Linux.


Buy Linux and BSD Discs



Wednesday, November 19, 2014

 

Thoughts on Systemd


First a short description of systemd:

Systemd is a collection of system management daemons, libraries, and utilities designed for Linux.

One of the main reasons some folks don't like systemd is that it stores logs in a binary format. This goes against the Unix philosophy of storing data in flat text files. The old method was sysvinit, which is described here:

http://en.wikipedia.org/wiki/Init#SysV-style
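
In practice the difference shows up when you go digging through the logs. With a sysvinit-era syslog setup the logs are plain text files that any filter can read; with systemd you go through journalctl. A rough sketch (log file names and service unit names vary by distro):

    # classic syslog: flat text, any text tool will do
    grep sshd /var/log/messages | tail -20

    # systemd journal: binary storage, read it through journalctl
    journalctl -u sshd --since today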

Of course we can go further back, all the way back to Research Unix v5 from 1974:

http://www.maxhost.org/other/text/original-unix-v5-init.c.txt

Prior to 4.3BSD, BSD init was the same as Research Unix's init; after that things diverged, leaving a BSD way and an AT&T SysV way of running init. In the ancient days of Unix v5 there were no runlevels at all: init simply ran /etc/rc in the Thompson shell and then launched getty.
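
To make the contrast concrete, here is a rough sketch of the two styles. Neither fragment is taken from a real system, and the daemon names are placeholders:

    # BSD/early-Unix style: one flat shell script run once at boot,
    # in the spirit of /etc/rc (not the actual v5 file)
    rm -f /tmp/*                # clear out temporary files
    /etc/update &               # periodically flush disk buffers
    echo Boot complete > /dev/console

    # SysV style: one script per service, selected by runlevel,
    # each accepting start/stop arguments (skeleton only)
    case "$1" in
        start) /usr/sbin/mydaemon ;;
        stop)  kill `cat /var/run/mydaemon.pid` ;;
        *)     echo "usage: $0 {start|stop}" ;;
    esac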

Some folks say that systemd is the svchost.exe of Linux, i.e. that it is essentially making Linux more like Microsoft Windows. It is a monolithic entity that hides what's happening behind the scenes. It stomps on the Unix Philosophy (again) of doing one thing and doing it well. With systemd we have one large Swiss army knife of a tool that isn't very good at anything in particular.

For people who want a modern distro that stays much closer to the original Unix Philosophy we have the BSDs: NetBSD, FreeBSD and OpenBSD. Another solution would be to fork an existing Linux distro into sysvinit and systemd versions, but that is hardly ideal. I'm not sure which Linux distros will avoid systemd in the future, as it seems many of them are jumping on the systemd bandwagon. Slackware appears to be resisting the systemd tide, and "Patrick has stated he intends to stick to the BSD-stylized SysVInit design."

Another solution to this problem is to do what I do, i.e. use an older Linux distro that still uses sysvinit and upgrade it as necessary. This method isn't very popular, but there are many retro-computing specialists who use older versions of Linux and Research Unix. Some distros we use include FC1, 2.11BSD and Unix v5, v6 and v7. Of course I do expect more forks of distros to appear in the future. There are just too many different opinions on how things should be done in the Linux community.

Saturday, October 25, 2014

 

Using Older Software and Hardware


If we look at the history of Unix we can see that the kernel, libraries and userland programs have all increased in size as time passes. Here is a summary of the sizes of libc.a, the main library for the C language:

Unix version 2 from 1972:  31 functions and 5,242 bytes
Unix version 5 from 1974:  85 functions and 21,542 bytes
Unix version 6 from 1975:  74 functions and 22,042 bytes
(note that with v6 the math functions were moved to liba.a)
Unix version 7 from 1979: 154 functions and 77,690 bytes

2.11BSD from 1992: 347 functions and 233,788 bytes

Slackware Linux 4.0 from 1999: libc.so.5.44 has 580,816 bytes

Vector Linux Classic from 2007: 1,425 functions and 2,979,634 bytes

Similarly we can see that basic commands like ps for process status have also increased in size:

ps for Unix v5:      2,660 bytes
ps for modern Linux: 63,948 bytes (a 24-fold increase!)
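
For anyone who wants to gather these kinds of numbers on their own machine, something along these lines will do it; the exact paths and library file names differ from distro to distro, so treat them as placeholders:

    # members and globally defined text symbols (roughly, functions) in a static libc
    ar t /usr/lib/libc.a | wc -l
    nm -g --defined-only /usr/lib/libc.a | grep -c ' T '

    # raw sizes in bytes
    wc -c < /usr/lib/libc.a
    wc -c < /bin/ps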

One could easily wonder, "At what point does the feature set of a given part of an operating system reach completion?" It seems we never reach that point. On the other hand, we don't necessarily have to continually upgrade our operating systems. We could reach back into the past, pick an older starting point, and upgrade that distro as needed.

Over a decade ago I started running a web server based on Fedora Core 1 which still runs to this day. There's no doubt in my mind that FC1, while leaner than most of the current distros from 2014, is still bloated with tons of functions I never use. In other words the ideal operating system, one which does exactly what I need and no more, does not exist. I will give an honourable mention to OpenBSD though.

There has been some good work done to counter the bloat. Tiny Core Linux does a reasonably good job of being lean and mean. It isn't what I would call low memory use though: it still needs at least 46 MB of RAM (compare this to Unix v5, which can run easily in 256K of RAM).

Others will argue that as the feature set of an operating system increases it is inevitable that its size will also increase. That is true, but I can't help wondering exactly why libc.a has to be almost 3 megabytes in size. There has been work done to make a leaner libc with the MUSL project. MUSL's libc.a is a mere 1.6 megabytes, which is significantly smaller.
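
A rough way to see the difference for yourself is to build the same trivial program statically against both libraries and compare the binaries. This assumes the musl-gcc wrapper shipped by the MUSL project is installed; the exact sizes will vary with compiler versions and flags:

    cat > hello.c << 'EOF'
    #include <stdio.h>
    int main(void) { printf("hello\n"); return 0; }
    EOF

    gcc      -static -Os hello.c -o hello-glibc
    musl-gcc -static -Os hello.c -o hello-musl
    ls -l hello-glibc hello-musl    # the musl binary is typically far smaller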

What else could one do to counter the ever increasing bloat of an operating system? Well, you could roll your own software distribution or even write your own operating system. You could also pick a distro which uses the MUSL version of libc, e.g. Snowflake or Sabotage.

Unfortunately even the so-called light distros seem bloated to me. The true champion of lightweight distros is Basic Linux 3.5, which is based on Slackware 3.5 from 1998. It can run within 16 megs of memory and it can use Slackware 4.0 packages. There are still folks who use it on older hardware. Of course we can rule out Firefox or Chrome or most other modern web browsers while constrained by such a severe memory limit, although we can still use text based web browsers like lynx.

Using a truly ancient operating system like Unix v5 would seem too restrictive to all but the most hard-core PDP-11 aficionados. We can safely assume that everybody would want an operating system that supports ethernet and TCP/IP. Also, a working PDP-11 is a rarity these days, and it wouldn't be efficient in terms of power use.

It seems to me that bloated operating systems go hand in hand with corporations who want to sell everyone endless updates of software and hardware. Computer users need to make a judgement call on just how often they want to upgrade and how much money they want to spend. I've already reached the point of upgrade fatigue and I'm putting most of my efforts into maintaining and repairing my existing computers. The truth of the matter is that chip fabrication pollutes the world with toxic chemicals like trichloroethylene. The endless production of plastic isn't doing our environment any good either.

Wednesday, October 1, 2014

 

Why I don't use tablets


The world is filled with tablets and smartphones. I've resisted the temptation to jump on the bandwagon (mostly; I did buy a used Zaurus).

My main computer activities consist of running a server, programming and researching information on the internet. Writing programs on a tablet just seems silly. There's no tactile feedback from an on-screen keyboard, and even if the tablet did have a real keyboard it would be nowhere near full size. The grunt-and-point interface just doesn't cut it.

I've always wondered just how difficult it is to repair a tablet. Fortunately the engineers at ifixit.com have ranked the tablets by repairability:

Tablet Repairability

The Microsoft Surface Pro ranked the worst and the Dell XPS 10 took top ranking as the most repairable device on the list.

The advantages of using a desktop or tower computer over a tablet are numerous: no battery life to worry about, proper keyboard, repairable (replacement parts are easy to come by), large monitor, and it's just nicer to sit at a desk with a good chair. Last but not least: you're probably not going to drop your desktop computer.

Portable devices such as cameras and ereaders do have their uses, but to my mind tablets have far less utility than a standard desktop. One cannot help but wonder what the sociologist Vance Packard (author of The Waste Makers) would make of tablets. It's hard to imagine he would be in favour of them.


Friday, September 5, 2014

 

Why the Computer Experience is Often Poor


HEAT

The most common cause of damage to computers is heat. A lot of computers are not designed well enough to deal with 100% CPU use over long periods of time. This has been a problem since computers began, back in the days of vacuum tubes. The problem remains to this day: a lot of modern laptops still overheat, and there are many examples of video cards in desktop machines becoming damaged by excessive heat.

Intel has tried to deal with this problem with its SpeedStep technology. If the sensors report that the CPU is getting too hot, the clock frequency is scaled down to a lower value. Other solutions for overly hot laptops include cleaning out the fan vents and buying a cooling pad, of which there are active and passive types.
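
On a modern Linux kernel you can watch both the temperature sensors and the frequency scaling from the shell. The sysfs paths below are the usual locations, but they do depend on the kernel version and the hardware:

    # current temperature of each thermal zone, in millidegrees Celsius
    cat /sys/class/thermal/thermal_zone*/temp

    # current and maximum clock frequency of the first CPU, in kHz
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq

    # which scaling governor is making the decisions
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor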

Generally a desktop with good air flow should not overheat.

Note: I've never had a Panasonic Toughbook CF-48 overheat.

COMPONENT FAILURE

Unfortunately just about all computer components fail at some point. It's a good idea to watch for signs of impending failure. One can make component failure less likely by using high quality parts such as power supplies, motherboards and hard drives. Some external hard drives are ruggedized and have anti-shock capability.

The DC adapters used with a lot of routers and cable modems are cheaply made and are a common point of failure. It's possible for a cable modem to fail without failing completely; instead there is increased packet loss. The easiest solution for this problem is to completely swap out the cable modem for another one. In the case of routers, merely swapping out the DC adapter (or wall wart) will usually fix the problem. Be sure to match the physical connector, polarity, voltage and amperage to the original unit.

LACK OF KNOWLEDGE OF DEVICE PROTOCOLS

It's one of the oldest tricks in the book for manufacturers of printers and scanners: keep the protocol the device communicates in a secret, thus making life difficult for software programmers (especially FOSS ones). Often there is little or no technical documentation to support development of free and open-source device drivers for their products.

REPAIRABILITY

As computer motherboards become more and more integrated they become less and less repairable. In the days of the IBM PC the motherboard had no disk controllers or I/O ports, just the memory chips, the CPU, and external ports for the keyboard and cassette deck. Thus it was a relatively simple matter to replace defective components. Not so with a modern motherboard that has sound capability, networking, hard drive controllers, USB, and who knows what else.

One wonders what level of repair is possible with tablets and other tiny devices which use surface mount technology. On the other side of the coin, it is often quite easy to repair a modern LCD HDTV. The large case size ensures that there is a reasonable distance between components, so one can at least do part replacement at the component level.

The road to unrepairable computers started with Very Large Scale Integration (VLSI) technology in the 1970s. Before the microcomputer appeared, the CPU was a collection of separate cards. In the case of the DEC PDP-11/20 the processor was built from integrated-circuit Flip-Chip modules, and individual Flip-Chip modules could be repaired.

SOFTWARE CREEP

The final issue is software creep. By this I mean the continual replacement of older software with newer software. It seems unavoidable but it's usually unnecessary. How often does one really need to update a word processor? I'm using AbiWord on Linux and it seems adequate for my purposes. KDE versions after 3.5.10 do not interest me. In fact I've switched to the lighter IceWM and I'm quite happy with it.

CONCLUSIONS

In conclusion we can clearly see why the computer experience is often poor: it is mostly due to poor design and a lack of knowledge of computer maintenance. It's not the user's fault, as the manufacturers want unknowledgeable users and unrepairable devices so they can keep selling you a new computer or television or $ELECTRONIC_DEVICE every 4 years or so. It's bad for your pocketbook and it's bad for the environment. All that ewaste has to go somewhere, and somewhere is usually a landfill site.

The solutions to these problems are somewhat unclear, but I favour using older, more repairable computers as part of the answer. I've seen Panasonic Toughbooks last over ten years, and there's no reason why one couldn't keep them working for at least another decade.

Tuesday, August 12, 2014

 

unix version 5 demo



A short video of Unix version 5 from Bell Labs circa 1974, running on a PDP-11/70.

Friday, July 25, 2014

 

We Have Strayed from the Original Ideas of Unix


When Ken Thompson and Dennis Ritchie created Unix and the C language, they also created a philosophy of Unix.

Some of the original ideas were:

    Small is beautiful.
    Make each program do one thing well.
    Build a prototype as soon as possible.
    Choose portability over efficiency.
    Store data in flat text files.
    Use software leverage to your advantage.
    Use shell scripts to increase leverage and portability.
    Avoid captive user interfaces.
    Make every program a filter.

In some ways we have actually made improvements to the Unix Philosophy with Richard Stallman's GPL. We also have a mostly standardized graphical system in the X Window System. I can't find any overt references to sharing of source code from the early days of Bell Labs, but it clearly did happen, even if it was de facto rather than de jure.

But the idea that "small is beautiful" has faltered rather a lot. Unix and Unix-like distros have become rather bloated. And no one would think of programs like The Gimp or Photoshop as "doing one thing well". I'd be willing to grant that we have chosen portability over efficiency. Leveraging software to our advantage, yes, we have done that.

As for "Avoiding captive user interfaces", well there's lots of room for improvement on that score. Storing data in flat text files isn't always possible. No one would think of storing a video file as a text file. Ditto for making every program a filter. A lot of programs are graphical and interactive and I can't really see any way to make a filter out of those types of programs.

Looking at Unix version 5 we can truly see that "small is beautiful". The kernel of the original Unix v5 is 25,802 bytes in size. The entire operating system could be stored on a DEC RK05 magnetic disk drive, which held 2.5 megs. That includes the C and Fortran compilers, the kernel, the assembler, the device drivers, the userland programs and the source code for all of the above. It was truly an amazing accomplishment. To be fair, you would have needed a second RK05 disk pack for man pages and a little extra breathing room.

Another big advantage of Unix version 5 was that it was quite possible for a determined programmer to actually read the source code of the entire system. That really isn't possible with a modern Linux distro. It would have been great if every programmer's introduction to programming included Unix version 5. It's possible now: thanks to simh and The Unix Heritage Society you can do exactly that.
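
For anyone curious, booting a v5 disk image under simh takes only a few lines. The image name here is just a placeholder (images and installation notes are available through The Unix Heritage Society), and the exact configuration can vary between simh releases:

    cat > v5.ini << 'EOF'
    ; minimal simh configuration for Unix v5 (a sketch)
    set cpu 11/40
    attach rk0 unixv5.dsk
    boot rk0
    EOF
    pdp11 v5.ini    # at simh's '@' boot prompt, type: unix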

In my previous blog entry I talked about using less memory. Unix version 5 could run quite well with 256 kilobytes of RAM.
