Bringing the trashcan to the command line



June 17, 2008


By: Ben Martin




The trash
project allows you to interact with your desktop trashcan from the
command line. It lets users "undo" deletions made with the trash
command in a similar manner to restoring files from the trashcan in a
desktop environment. For experienced Linux users, the trash command
comes in handy when you want to put a file into the trashcan from the
command line.


Because trash implements the FreeDesktop.org Trash Specification,
it plays nicely with the trashcan offered by the KDE desktop
environment. That means you can trash a directory from the command line
and see it in your trashcan from Konqueror. Unfortunately, the trash
implementation in GNOME 2.20 did not communicate with either KDE 3.5.8
or the trash command.



Installation



Trash is not available from the distribution repositories for
Ubuntu, Fedora, or openSUSE. I built version 0.1.10 from source on a
64-bit Fedora 8 machine. Trash is written in Python, so build and
installation follow the normal python setup.py install procedure.



When you use the list-trash command to view the contents of your
trashcan, you might encounter an error if you are also using the Linux
Logical Volume Manager (LVM).
Version 0.1.10 of trash uses the df command to work out what
filesystems are available. Unfortunately, it invokes df without the
POSIX compatibility mode -P, and as such the lines
specifying LVM devices will include line breaks where trash does not
expect them to be. You can fix this by changing line 460 of
/usr/lib/python2.5/site-packages/libtrash.py to include the -P option when spawning the df command, as shown below:



      else:
         df_file=os.popen('df -P')
         while True:
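
For context, here is the difference in df output that trips up the parser. The output below is illustrative rather than copied from a real system, but it shows the shape of the problem: without -P, GNU df wraps a long LVM device name onto its own line, while -P keeps each filesystem on a single line.

$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      74699952  50626544  20216648  72% /

$ df -P
Filesystem           1024-blocks      Used Available Capacity Mounted on
/dev/mapper/VolGroup00-LogVol00   74699952  50626544  20216648      72% /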

I also found an issue executing some trash commands when using bind
mounts to mount filesystems in two locations. The commands would simply
fail with ValueError: path is not a mount point, without indicating which path is not a mount point or what you should do to fix the situation.

Usage

The trash project includes four commands: empty-trash, list-trash,
restore-trash, and trash, the latter being the main command, with the
others enabling full trashcan interaction from the command line. The
only two commands that accept command-line parameters are empty-trash
and trash. The empty-trash command accepts a single argument that
specifies a cutoff, in days, for how old a trash item can be.
For example, if you specify 7, then any items in your trashcan older
than a week will be deleted. The trash command takes the files and
directory names that you wish to put into your trashcan, and also
accepts -d, -f, -i, and -r
options for compatibility with the rm(1) command. These last four
options don't actually do anything with the trash command apart from
making its invocation more familiar to users of the rm command.

Let's run through an example of how to use the trash commands:

$ mkdir trashdir1
$ date >trashdir1/dfa.txt
$ date >trashdir1/dfb.txt
$ list-trash
$ trash trashdir1
$ list-trash
2008-06-10 15:03:11 /home/ben/trashdir1
$ mkdir trashdir1
$ date >trashdir1/dfc.txt
$ trash trashdir1
$ list-trash
2008-06-10 15:04:01 /home/ben/trashdir1
2008-06-10 15:03:11 /home/ben/trashdir1
$ restore-trash
 0 2008-06-10 15:04:01 /home/ben/trashdir1
 1 2008-06-10 15:03:11 /home/ben/trashdir1
What file to restore [0..1]: 0
$ l trashdir1/
total 8.0K
-rw-rw-r-- 1 ben ben 29 2008-06-10 15:03 dfc.txt


As you can see, it is perfectly valid for multiple items in the
trashcan to have the same file name and have been deleted from the same
directory. Here I restored only the latest trashdir1 that was moved to
the trashcan.
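
The one command this session does not show is empty-trash. Based on the cutoff behavior described above, purging everything older than a week would look like this (a sketch; the argument is the age cutoff in days):

$ empty-trash 7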



The restore-trash command must be executed in the directory of the
trashed file. The above commands were all executed in my home
directory; if I had been in /tmp and executed restore-trash, I would
not have seen /home/ben/trashdir1 as a restore option. At times it
might be misleading to execute restore-trash and be told that there are
"No trashed files." Perhaps the developers should expand this message
to inform you that there are "No trashed files for directory X" so that
you have a hint that you should be in the directory that the file was
deleted from before executing restore-trash. For scripting it might
also be convenient to be able to use restore-trash with a path and have
it restore the most recent file or directory with that name.
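
To make the directory sensitivity concrete, here is a hypothetical session based on the behavior described above (the exact message wording may differ):

$ cd /tmp
$ restore-trash
No trashed files
$ cd /home/ben
$ restore-trash
 0 2008-06-10 15:03:11 /home/ben/trashdir1
What file to restore [0..0]: 0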



While the command-line options to the trash commands are currently
fairly spartan, the ability to interact from the command line with the
same trashcan that KDE 3 uses can help folks getting into
command-line interaction avoid stepping up to the more
permanent rm command right off the bat.






Read in the original layout at: http://www.linux.com/feature/138331




Kudos to openSUSE 11.0



June 20, 2008


By: Susan Linton




openSUSE 11.0
was one of the most anticipated Linux distro releases of 2008. Despite
a few bugs in the final code, which was released yesterday, it was
worth the wait. The openSUSE version of KDE 4 alone is worth the
download, and the improvements to the software manager make customizing
a pleasure.



I used the 4.3GB DVD version, but live CD versions are also available.
In either, the first thing you might notice is the beautiful new
installer. The layout is similar to that of previous versions, with a
large interactive window and a progress list to the right, but with an
elegant new color scheme and stylish graphics. And the beauty is not
only skin deep -- there are a lot of changes under the hood in this
release.



The openSUSE developers have made many improvements to save users
time and effort. A new "Installation from Images" option uses a defined
set of packages in an install image for many common package groups,
such as the GNOME desktop. Using this saves users from having to
organize the needed packages and resolve the dependencies at the time
of the system installation. It's a feature users can disable if they
wish, but it does seem to save some install time.


At the beginning of the install process you can tick "Use Automatic
Configuration." In other distributions, similarly worded phrases can
turn off hardware auto-detection and lead to long, agonizing
configurations. Wanting to avoid that fate, I checked the box, but as
it turns out, this setting merely bypasses the hardware confirmation
screen where users normally accept the auto-detected proposal or custom
configure their hardware. For users who normally agree to the proposed
settings, this saves time and clicks.






Automatic Configuration does not bypass the installation summary.
You can still change many options, such as the partitioning proposal.
openSUSE presents the user with a proposed partitioning layout, but you
can edit the configuration to your needs. For example, you can make a
new partition or choose one that is already present. You can even use
advanced options such as LVM and RAID.


During the DVD install you can choose your desktop environment from
among GNOME, KDE 4, KDE 3, Xfce, and Others, listed in alphabetical
order. Some other desktops available for install include Enlightenment,
IceWM, FVWM, and Window Maker. These less popular desktops don't
include the openSUSE look. They are provided as released by the
upstream developers.


No desktop environment is selected by default -- you must choose
one. At the installation summary screen, you can click the Software
heading to select additional desktop environments and software if
desired.


The package selection screens haven't changed much in function on
the surface, but they too have received a facelift. You can still
search or choose packages by groups, package patterns, or individually.


To save another step during the install the openSUSE developers
decided that the first user and root would share the same password.
They believe that a large percentage of users use the same password for
the first user and root, but if you have security concerns, it's easy
to change the root password later.


openSUSE has always had one of the premier installers in the Linux
landscape, and the developers have worked hard to make it even better
in 11.0. Besides the items I specifically mentioned, there are little
changes all over that make it more streamlined and easier than ever.


Because of its many desktop options, openSUSE is like several
distributions in one. Here's a look at each of the major desktop
environments.


KDE and Xfce


KDE 3.5.9 and Xfce 4.4.2 are stable, old-reliable desktops, and they
functioned just as expected with no problems. Like the other major
openSUSE desktops, they are customized to give them an openSUSE look
and feel. In fact, the gray and green theme runs throughout the whole
of openSUSE, including the GRUB screen, login screen, and application
splash screens, which gives the desktop a uniform professional touch.



At first glance, little distinguishes KDE 4 from KDE 3 -- which is a
good thing. Instead of a clunky, buggy Vista clone, users are welcomed
into a familiar, reassuring environment. KDE 4 in openSUSE is a tidy,
understated desktop with a panel at the bottom, a few icons, the
Kickoff menu, and the widget creator in the upper right corner.



In addition to the comfortable environment, many KDE applications
are now ported or backported to KDE 4.0.4 in openSUSE. I was able to
import mbox mail files as well as KDE 3 maildir-format files into KMail
1.9.51. Likewise, I was able to import my news feeds into Akregator
1.2.50. Both of these functioned well, except that Akregator was a bit
sluggish during fetches under the weight of my 700+ feeds. I was able
to just drop my Konqueror bookmark file into the .kde4 directory. It
appears that for all the improvements KDE 4 is supposed to bring, Flash
is still broken in Konqueror, although this is probably a universal
issue in KDE and not confined to openSUSE.


When inserting removable media under KDE 4, the New Device Notifier
located in the panel beside the clock opens with a list of devices.
Depending upon the media, you may be given a choice of actions or have
one default. For example, a data CD gives only "Open in Dolphin," while
a USB memory stick opens an action chooser. Beside each device is an
icon that will unmount or eject the device.


Overall I was impressed with the usability and stability found in
openSUSE's KDE 4 implementation. I began experiencing crashes only
while exploring the Personal Settings module (Systemsettings, the
replacement for the KDE Control Center) and changing numerous settings
and reversing them back and forth. This is when I discovered that you
need to press Ctrl-Alt-Backspace twice to kill the X server. This is
the first time I've needed to do this in openSUSE.


GNOME 2.22


I experienced some issues with the GNOME desktop. It started just
fine and seemed functional during the first tests. Problems arose when
I tested the update applet. When I was adding a repository, the online
update utility crashed and left most of GNOME unresponsive. When I left
the GNOME desktop, the login screen font was scrambled or not fully
rendered. I logged back into GNOME, but the font problem persisted. I
tried to log out again, but now the Logout tool didn't function any
longer.



After rebooting the system, GNOME seemed to function normally, but
the update applet never returned to the panel. Running Online Update
configuration through the YaST Control Center in GNOME continued to
crash, and thus the Online Update tool would not function. However, the
update applet did continue to appear in the KDE desktops afterward, and
I was able to complete configuration and check for updates while in
KDE.


Hardware support


Though I had some problems with software in different desktop
environments, hardware support in Linux has all but become a non-issue,
and this is even more true with openSUSE. While I don't own any exotic
or bleeding-edge hardware, what I do have is well supported. For
example, my Hewlett-Packard laptop, which was designed for Windows, is
almost fully supported. The only exception is the wireless Ethernet
chip, which requires Windows drivers. I used Ndiswrapper in 11.0 to
extract and load the drivers to bring it to life. Other critical laptop
features were available by default, although Suspend to RAM didn't work
for me.
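
For reference, the usual Ndiswrapper workflow is short. The .inf path below is a placeholder; use the Windows driver supplied with your hardware:

ndiswrapper -i /path/to/driver.inf   # install the Windows driver
ndiswrapper -l                       # verify the driver and hardware are recognized
modprobe ndiswrapper                 # load the kernel module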


Sometimes, though, my Internet connection, which was configured to
start at boot, wouldn't be started. KNetworkManager didn't
function for me in this release either. The GNOME network applet seemed to
work well, however, so as a workaround, I just used it in KDE too.


Software


openSUSE is what I commonly refer to as a "kitchen sink" distro
because it includes everything but the kitchen sink. It'd almost be
easier to list what it doesn't have than what it does.


Besides a few extra desktops and the kernel development packages, my
install consisted of the default package selection. This includes
Firefox 3.0b5, OpenOffice.org 2.4.0, GIMP 2.4.5, Inkscape, Pidgin,
Liferea, Ekiga, GnuCash, Evolution, Tasque, and KOffice.


openSUSE also includes the latest Compiz Fusion. AIGLX, which
provides GL-accelerated effects on desktops, should be enabled by
default for those with supported hardware. That unfortunately leaves
Nvidia users out until they install the proprietary graphic drivers.
However, there are graphical configuration tools for enabling and
setting options such as the choice of profile. You can choose profiles
ranging from lightweight with few effects to full with lots of effects.
The CompizConfig Settings Manager provides deeper settings. In
addition, there are lots of great plugins included, such as the
Magnifier, Window Scaling, and Show Mouse.




Under the hood openSUSE 11.0 ships with Linux-2.6.25.5, X.Org X Server 1.4.0.90, Xorg-X11 7.3, and GCC 4.3.1 20080507.


Multimedia



Multimedia support is a bit lacking in openSUSE by default. openSUSE has a policy of excluding certain code that does not conform to the open source definition
and, unfortunately, that includes support for most multimedia formats.
openSUSE 11.0 includes the just released Banshee 1.0, Amarok 1.4.9.1,
K3b, Brasero, Totem, and Kaffeine. I could listen to an audio CD and
watch Flash content from the Web, but I couldn't use any other
multimedia file on hand.



However, community-provided solutions are already in place. YaST one-click install wizards
will add repositories and install support for popular audio and video
formats. After installing the codecs, libraries, and updated
applications, I was able to enjoy any video or audio file I tested. I
sometimes experienced crashes in Banshee while trying to adjust the
volume. The problem was reproducible, but not consistent. I also
couldn't get Amarok to recognize my CD-ROM drive, but I could use KsCD
or Banshee instead to listen to audio CDs.


Software management







If you'd like to install additional software, openSUSE comes with a
powerful package management system. ZYpp, which uses the RPM package
format, was completely rewritten during the 10.x series, and 11.0
brings even more improvement. To the end user this means better
dependency resolution and much faster performance.


Zypper, the command-line package manager, functions much like
apt-get does for APT. It can install and uninstall packages, manage
repositories, update packages, or upgrade the whole system. For example,
zypper install crack-attack will install the game Crack Attack, and
zypper search tuxpaint will check whether Tuxpaint is available in the
openSUSE repositories you have configured. Some other arguments include
remove, addrepo, update, and dist-upgrade, as sketched below.
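
A sketch of typical invocations (run as root; the repository URL and alias in the addrepo line are placeholders):

zypper install crack-attack    # install a package
zypper search tuxpaint         # search the configured repositories
zypper remove crack-attack     # uninstall a package
zypper addrepo http://example.com/repo example-repo    # add a repository
zypper update                  # update installed packages
zypper dist-upgrade            # upgrade the whole system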



Those who prefer graphical tools are in for a treat. The YaST
package management front ends have gotten a facelift this release. They
come in a Qt version for KDE desktops and a GTK version for GNOME
users. Using YaST simplifies software installation for users of all
experience levels. It just takes a few mouse clicks to install any
package.



In my testing, I found that both the command line and the graphical
package tools worked well and were much faster than in previous
releases. My only complaint is that the YaST GUI still refreshes the
repository databases automatically each time it is opened. Fortunately,
in this release there is a Skip Refresh button, but with the speed
improvements it's usually half done by the time I grab the mouse and
click it.


Conclusions


openSUSE 11.0 is a fabulous release. The pretty new graphics set the
stage for significant improvements under the surface. All the time and
energy put into the package management system has paid off. Including
KDE 4 is not as big a risk for openSUSE as it might be for other
major distributions because of the conservative and intuitive way KDE 4
is set up. openSUSE has given me hope that I could actually like KDE 4.


As with many point-0 releases, 11.0 does have bugs and rough edges. I
experienced a few, and others are likely to be reported in the upcoming
weeks. For the most part, the ones I encountered were insignificant,
not showstoppers.


Overall, 11.0 is a commendable release. The developers have done an
admirable job walking that fine line between stable and bleeding edge.
If you like the latest software or wish for a nice usable KDE 4, then
openSUSE 11.0 is for you. If you're completely happy with 10.3, well,
perhaps you might want to wait for further reports.







Read in the original layout at: http://www.linux.com/feature/139073




Build your own ultimate boot disc



June 25, 2008


By: Kurt Edelbrock




You
turn on your trusty old Linux box, and things are going well as you
pass through the boot loader, until the disk check reveals that your
hard drive partition table is corrupt, and you are unable to access
your machine. You need a good rescue disk -- and the best way to get
one is to create your own.


You can
customize an Ubuntu 8.04 Hardy Heron live CD to make a good bootable
utilities disk by adding and removing packages from the standard
installation. Specifically, you can remove most of the Ubuntu
applications and install antivirus software, a partition recovery tool, a few
disk utilities, and a rootkit checker, among other things. I'm going to
create the live CD within an Ubuntu installation, but the directions
should work for most Debian-based operating systems, and can be easily
ported elsewhere. This guide largely follows the community documentation article
on the Ubuntu customization process, which is a good place to look for
more advanced information and troubleshooting support, while the livecdlist.com wiki is the best place to look for customized directions.



To create and use the Ubuntu-based boot CD, you'll need a computer
with at least 3GB of disk space and 512MB of RAM. 1GB of swap is
recommended, though I did it with 512MB.



Create the live CD environment



The first step is to download
the Ubuntu 8.04 live CD ISO file for your system type. You can get it
from the Web site, or you can use wget on the command line:



wget -v http://releases.ubuntu.com/hardy/ubuntu-8.04-desktop-i386.iso



To work with the image, you'll need to install a few packages to
support the squashfs filesystem format, and mkisofs, the utility to
create ISO images. On Ubuntu, you can install them with the command sudo apt-get install squashfs-tools mkisofs.



Now, load the squashfs module, then copy, mount, and extract the contents of the ISO file in order to customize the contents:



sudo modprobe squashfs
mkdir rescue
mv ubuntu-8.04-desktop-i386.iso rescue
cd rescue
mkdir mnt
sudo mount -o loop ubuntu-8.04-desktop-i386.iso mnt
mkdir extract-cd
rsync --exclude=/casper/filesystem.squashfs -a mnt/ extract-cd
mkdir squashfs
sudo mount -t squashfs -o loop mnt/casper/filesystem.squashfs squashfs
mkdir edit
sudo cp -a squashfs/* edit/

You'll want to customize the CD in a chroot environment. Chroot
changes the root directory of the environment, allowing you to access
the files and applications inside the CD directly, which you must do in
order to use tools like apt-get. In order to use a network connection
inside the chroot, which you'll probably want to do to add new packages,
you'll need to copy in the hosts and resolv.conf files to configure
your network settings. You can achieve this with the following:

sudo cp /etc/resolv.conf edit/etc/
sudo cp /etc/hosts edit/etc/

Once you've completed these steps, you can start to work inside the
live CD. Bind-mount your /dev directory onto edit/dev, then change your
root directory into the edit directory. You'll need to mount the
/proc and /sys volumes to work with the kernel, and export your
settings to avoid locale and GPG problems later on:

sudo mount --bind /dev/ edit/dev
sudo chroot edit
mount -t proc none /proc
mount -t sysfs none /sys

export HOME=/root
export LC_ALL=C

Free space by removing unneeded packages

You can configure the packages that are included with the live CD
using apt-get or Aptitude. You'll want to free up some space to add the
rescue applications; even though the data is compressed, all of it
needs to fit on a 700MB CD or on a higher-capacity DVD. You can remove
packages and applications that aren't useful for the recovery. I chose
to remove the OpenOffice.org suite, the GNOME games set, Ekiga,
Ubiquity, Evolution, and the GIMP, saving me around 200MB. If you are
comfortable with a command-line-only environment, you might want to get
rid of GNOME and Xorg; if you do that, you need not install GParted and
the other graphical tools in the next section. In any case, the goal is
to get rid of large applications. To sort all of the installed packages
by size, run the following command in the chrooted environment:

dpkg-query -W --showformat='${Installed-Size} ${Package}\n' | sort -nr | less

You can use apt-get to remove a package. Use it with the --purge argument to get rid of configuration files. The sudo command won't work in the chroot, and therefore should be omitted:

apt-get remove --purge package-name

Prebuilt Linux rescue CDs

You don't need to build a custom rescue disk to get a great bootable
utility CD. Here are a few prebuilt rescue CDs you can try.

- Parted Magic (http://partedmagic.com/) -- This 45MB boot CD uses GParted, the GNOME partition editor, to handle partition table management for an extensive list of filesystems, including ext2/3, NTFS, and HFS+. Parted Magic uses the Xfce desktop environment to provide a variety of tools, including Firefox, Thunar, and ISO tools. It also has a USB version to use from a thumb drive.
- SystemRescueCd (http://www.sysresccd.org/) -- The 191MB CD features partition, archive, and networking tools, along with a slew of editors and file browsers. This is probably the easiest system boot CD, and is recommended for less advanced Linux users. It also has a rootkit checker, a virus scanner, and CD burning utilities. It includes an X interface through Xfce.
- Trinity Rescue Kit (http://trinityhome.org/) -- The 129MB Trinity Rescue Kit is designed for the rescue and recovery of Windows machines, but it will work for Linux as well. It includes a few virus scan applications, a Windows password reset tool, Samba, SSH, rootkit removal tools, and partition and backup tools. It is based on Mandriva Linux.

Add rescue applications

Once you have removed all of the unneeded applications from the live
CD, you can start to add rescue and recovery applications. Generally,
rescue CDs include a variety of disk utilities and security tools, as
well as networking tools to find support and access outside machines.
You may not want all of the applications I mention, and you can add
some that I don't. This is your personal boot CD, and should be
configured as you see fit. For ideas about what to include on your CD,
you might want to check out some of the prebuilt rescue distributions
mentioned in the sidebar.

You can install packages from the repositories using apt-get, but
you must add the multiverse repository to your /etc/apt/sources.list
file:

deb http://us.archive.ubuntu.com/ubuntu/ hardy main multiverse
deb-src http://us.archive.ubuntu.com/ubuntu/ hardy main multiverse

[Image: Partimage is an essential program for a rescue disk. Use it to copy and restore data.]

A disk partition tool is the staple of a mature boot disk. Fortunately, the Ubuntu live CD comes with GParted (http://gparted.sourceforge.net/),
the GNOME Partition Editor, so adding a package isn't required. If you
chose to forgo a graphical environment, you should make sure that
parted is installed instead to handle partition tables from the command
line. If you accidentally delete a partition, installing a program like
testdisk (http://www.cgsecurity.org/wiki/TestDisk) can help
you recover it, as well as provide a few other basic disk tools. If you
are using the ext2 filesystem type and you accidentally delete a file,
you'll find the e2undel (http://e2undel.sourceforge.net/) package helpful in recovering it. If you need to copy an entire partition from a dying disk, or just want to make a backup, partimage (http://www.partimage.org/Main_Page) is the way to go. You can also use it to restore a partition from a previously made backup.

[Image: Use GParted to configure the partition tables for your hard disk.]

If you plan to use this disc with Windows machines, you will want to install antivirus and rootkit tools. Clamscan (http://www.clamav.net/) provides a quick and easy virus scanner with a command-line-based update tool. Chkrootkit (http://www.chkrootkit.org/) is a scanner that finds and removes rootkits that could be hiding in your computer. You can use sleuthkit (http://www.sleuthkit.org/) to conduct analysis of your filesystem and browse through hidden files.
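
Inside the chroot, installing this tool set is a single command. The package names below are my best guess at the Hardy package names for the projects mentioned; verify any that differ with apt-cache search:

apt-get update
apt-get install testdisk e2undel partimage clamav chkrootkit sleuthkit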

After you finish adding packages, clean up your temporary data and unmount the environment:

apt-get clean
rm -rf /tmp/*
rm /etc/resolv.conf
umount /proc
umount /sys
exit
sudo umount edit/dev

Now, regenerate the manifest (which is basically a list of installed packages) and copy it into the correct directory:

chmod +w extract-cd/casper/filesystem.manifest
sudo chroot edit dpkg-query -W --showformat='${Package} ${Version}\n' > extract-cd/casper/filesystem.manifest
sudo cp extract-cd/casper/filesystem.manifest extract-cd/casper/filesystem.manifest-desktop
sudo sed -i '/ubiquity/d' extract-cd/casper/filesystem.manifest-desktop

Compress the filesystem to squeeze it onto a disc:

sudo rm extract-cd/casper/filesystem.squashfs
sudo mksquashfs edit extract-cd/casper/filesystem.squashfs -nolzma

And finally, create the ISO file:

cd extract-cd
sudo mkisofs -r -V "$IMAGE_NAME" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ../ubuntu-8.04-desktop-i386.iso

Once the image file is created, you need to burn it to a disc. You
can do that pretty easily with K3b or Brasero. If you want, you can do
it from the command line:

cdrecord dev=/dev/cdrom ubuntu-8.04-desktop-i386.iso


Once the CD is finished burning, you should be able to put it into
your optical drive and boot into the environment you just created.



This should give you more than enough information to start building
your ultimate custom rescue CD. Add the packages and tools you need,
and hopefully you'll never be at a loss the next time your computer has
a problem during startup.






Read in the original layout at: http://www.linux.com/feature/137524




Don't forget the text editor



June 26, 2008


By: Alan Berg




Text
editors are important for many tasks, from editing configuration files,
nudging cron jobs, and manipulating XML files to quickly pushing out a
README. Luckily, there are a number of interesting editors available.
Here's a brief introduction to nine intriguing choices. While some may
be better suited to certain tasks, no one tool is better than
another for all tasks. Try them all and use the ones you like best.


vi



Old favorite vi (or one
of its variants, such as Vim or Elvis) is available on most *nix
systems. If you are a system administrator moving from one *nix system
to another, the one reliable fact is that vi will work, macros
and all. Once you have learned the keystrokes, swapping words at
boundaries, replacing sections of text, or traversing a large
file with vi is efficient, fast, and predictable. However, its initial
learning curve is somewhat steep, and there is no real GUI.



Gedit and Kate









Gedit (see figure 1) is a small and
lightweight text editor for the GNOME desktop, and the default text
editor for Ubuntu. An excellent tool with syntax highlighting for a
wealth of scripting and programming languages, it allows for extension
via plugins (figure 2) and does most tasks efficiently, without fuss.








The editor is modern in design, with a tab per open file, thus allowing
for easy cutting and pasting between documents. The interface is
uncluttered and somewhat configurable via Edit -> Preferences for
such attributes as line numbering and changing tabs to spaces.







You can also run Kate (KDE Advanced Text Editor) under the GNOME desktop. However, you will have to install its package with a command like sudo apt-get install kate-plugins, which will also install some extra plugin-enabled functionality. Kate has a slightly busier interface than Gedit (figure 3),
and to use tabbing between documents, you must activate the feature
by enabling the correct plugin. But Kate is significantly more
configurable than Gedit, exposing more of its innards as preferences.



An immediately helpful feature is the ability to hide code that is
within a certain scope. For example, to hide all the code within a
foreach statement, double-click on the offending line. This is a
significant help for uncluttering verbose scripting text. Also, under
the Tools menu, you can change the end of line type to switch between
Unix, DOS, and Mac, thus avoiding subtle issues in your text later.





Both Kate and Gedit support quick ad-hoc editing of numerous
scripting and programming languages. They are both excellent editors
for a variety of tasks.



TEA and Emacs



Emacs and TEA
are more complex and configurable than Gedit and Kate, with a much
wider scope of potential abilities. If you want to work within a single
environment, including sending mail, then these adaptive tools have the
power to let you.



TEA (figure 4)
is a compact, configurable, and function-rich editor that takes up only
around 500KB of memory. TEA provides a decent text editor, with markup
support for LaTeX, DocBook, Wikipedia, and HTML. It does not provide
any syntax highlighting, but does provide an extremely basic project
environment for compiling code.



Thankfully, TEA also contains a delightfully named crapbook (read:
notes holder) for storing temporary text. The editor provides a spell
checker and statistics for documents and therefore sits comfortably
between an office suite and a plain editor. Other functionality
includes a file browser and a calendar. Because the editor keeps its
size compact by relying heavily on external tools, the Help menu
contains a well-thought-out self-check command that, on activation,
mentions any missing dependencies.



You can extend TEA by manipulating text files to expand specific
features. For example, to add your own command to the Run menu, open
the ext_programs text file via File -> Manage utility Files ->
External programs to add the option xterm, which activates -- yes, you
guessed it -- an xterm window. Add and save the text on a new line:



xterm=xterm &





Emacs (figure 5)
is powerful, feature-rich, and configurable. This tool has a long
history reaching back as far as 1976. Originally written by Richard
Stallman and Guy Steele, Emacs split into two main branches, XEmacs and
Emacs, in 1991. The functionality in both branches is comparable.
Having a long and renowned history implies fitness of purpose and core
stability.



Emacs is not only a text editor but also an interpreter for Emacs
Lisp, an extension of the Lisp programming language. Elisp makes
scripting relatively easy. If you are a power user, you can tweak the
.emacs configuration file (or whatever your local equivalent is) to,
for example, add extra menus.



People have written a large number of modes in the Elisp language.
A mode is an extension, modification, or enhancement. For
example, one mode may be for SQL development, and another for Perl
programming. The main Emacs wiki details the most up-to-date information and lists a full set of available modes.



Emacs' default GUI is succinct and contains much functionality
beyond editing. You can perform file navigation, send out email (if
configured correctly), and perform specific tasks such as debugging,
patching, and creating diffs with a few succinct keystrokes.



The software's documentation and international language support are superb, and the editor includes an online tutorial.



Gaining full mastery of Emacs, even for the cleverest among us,
requires patience and time. Internalizing the use of buffers and
memorizing Ctrl and Alt keystroke combinations is a chore, but if
you perfect it, expect massive gains in efficiency.





Leafpad, Mousepad, and Medit



If you are looking for a simple editor that does the bare minimum, then either Leafpad or Mousepad
will fulfill the basics. They look the same and allow for word wrap,
line numbering, auto indent, and a choice of fonts, and not much else.



Medit is a
straightforward text editor with syntax highlighting and tabbed panes.
To add an entry to its configurable tool menu (figure 6), select
Settings -> Preferences, then highlight the Tools option. Clicking on
the new item icon (the picture of a document with the orange circle at
the bottom center) activates a dialog. Change the name of the item from
"new command," then add the entries displayed in figure 5. You will
then have a new tool that lists all the running processes in the
currently selected document (figure 7). For the devotees, Medit has its
own scripting language, called mooscript.



The editor also has a well-placed, no-fuss expandable file selector
that is readily available via a click on the right side of the main
window.







SciTE



SciTE, the Scintilla-based
text editor, offers some of the features of a programmer's interactive
development environment. It supports tabbed panes, syntax highlighting,
and code folding, and goes a solid step further for programmers. For
example, on opening a Perl file (or files in numerous other languages),
you can check a script's basic validity via the menu option Tools ->
Check syntax. That displays a second window (figure 8) with the
gruesome details.



SciTE presents a no-fuss, easy-to-learn approach to controlling a scripting environment. A development tool like Eclipse will give you more features and adaptability, but also a steeper learning curve.





Final comments



This article covers only a fraction of the available text editors
for the Linux desktop. If I have missed your favorite, share your
opinion and place a link in the comments section for this article.






Read in the original layout at: http://www.linux.com/feature/137879




Desktop Linux strategies for marketplace success
May 03, 2008

By: Carlton Hobbs

What strategy is needed to really spread desktop Linux to average home users? Here are some ideas that just might work.

Journalist Steven J. Vaughan-Nichols argues that

Linux businesses, for the most part, don't do marketing. I think they're extremely foolish not to spend any money on it, but there it is.... Like the Linux companies, many of them were sure that they didn't need to market themselves. Like Linux companies, they thought word of mouth was enough.... Well guess what: it's not. Without marketing, no one from the outside looking in can tell one Linux from another. They just see a confusing mish-mash of names, and unless they're already really motivated, they're going to start turning off from Linux at the very start.

I argue almost the opposite. A large part of mainstream media marketing, advertising, and branding is a means to get name recognition at a very superficial level. Its main targets are people who make superficial buying decisions, and for the right products, this works. Why buy name brand Tylenol vs. generic acetaminophen, name brand cereal, or a thousand other identical products that come off the same assembly line but use different packaging at different prices? From the perspective of the thrifty, the main answers are ignorance and brand recognition.

Of course, not all marketing is to compete with effectively identical products. Consider the American beer industry as a major marketing powerhouse with a few similarities to the Windows vs. Linux market. The major American breweries formulated modern beers after Prohibition to appeal to people who didn't like the taste of beer, and, as a side effect the major brewers accepted, these beers taste bad to beer connoisseurs. The post-Prohibition era, even to this day, retains elements of a cartelized liquor distribution industry designed to make it difficult and expensive to compete with the major breweries, such that there have been no new domestic majors in decades. The rebirth of real beer in America came through microbreweries that have small to nonexistent marketing budgets. They rely on beer connoisseurs who communicate through beer fan reviews, word of mouth, willingness to experiment, and seeking out the minority of stores that actually carry microbrew and local beers. Beer commercials for microbrews featuring sports and sexy women would not get many beer drinkers to seek out good beer that isn't already easy to find. Such commercials are just for "all beer is beer" drinkers who are susceptible to brand association marketing and herd opinion.

This doesn't mean that high-cost marketing is innately wrong or bad. It means that if you can increase the marginal sales of your high-profit-per-sale product to people who make quick decisions based on brand recognition, then your marketing expenses were a good investment, but otherwise not. Unfortunately for Linux companies, desktop Linux is a very low profit per "sale" product that is not an impulse choice off a shelf of interchangeable consumer goods. As Red Hat learned years ago, the shrink-wrapped box on a store shelf will not change the current OS market.

So if word of mouth and near-zero-budget advertising are our main prospects, then perhaps what is needed is a better person-to-person strategy. Fortunately, there is definite room for improvement here. One major barrier to entry is lack of Linux preinstallation, along with the occasional need for more expertise with compatibility issues. Desktop Linux must resolve these challenges partly through its internal advantage of a strong community, by strategic and expansionary networking, and by seizing the big opportunity presented by the massive number of PCs that people keep around collecting dust, thinking they will upgrade them sometime, someday.

Desktop Linux must focus on local communities for recruiting the next wave of users and evangelists. Ubuntu has the right idea with its LoCo initiative. However, to get really local and networked, a distro-centric local community is not the most efficient. If local Ubuntu, Debian, Slackware, etc. users never meet, they will forfeit great networking opportunities. There need to be local GNU/Linux/FOSS communities with broad ranges of software experience, occupations, contacts, and distro preferences. Fortunately, many already exist, and there is at least one list where people can find groups near them. Linux promoters must recognize face-to-face personal interaction as a primary means for strategic growth of desktop Linux.

Local free software organizations need to be able to offer free Linux installation and encourage people to reuse or donate computers that would run poorly with current Windows systems. Certain groups are naturally good targets to recruit and possibly join as recruiters themselves. Decentralist political groups, neighborhood associations, Parent Teacher Associations, and other educational organizations are also intelligent, low-budget groups. College groups, homeschool groups, agriculture co-ops, churches, and religious groups are all great places to find people who have spare computers to reinvigorate or donate, or who would be willing to have a computer set to dual boot. In general, groups that depend on donations or have small budgets are looking for ways to minimize unnecessary costs. Some of their members would likely be radicalized when they learn how little is required to show others how to switch to Linux.

Local free software organizations need a quick and easy tool to communicate what the GNU/Linux OS can do. Perhaps the best method would also serve as a means of introduction. An organization could create business cards that provide a brief description of the local Linux group, its Web address, and purpose. The card should be visually impressive and colorful. They can let people know that the card itself was designed with only free software, whether it be OpenOffice.org, gLabels, Inkscape, Scribus, or some combination that anyone could easily get through Linux.

Is there a model for such success without advertising budgets? Ask yourself how you heard about and started using Google. Was it through advertising? Google became a giant because the barrier to trying a new search engine was so low and the value quickly obvious. It was used by almost everyone before anyone saw a Google advertisement. If Linux advocates can do the same, then Windows will be in trouble. I don't see how this can happen without active local free software groups that seek out growth, and success would likely be in proportion to the efficiency of local groups. If some are more successful than others, then the more successful local methods could be adopted elsewhere.

All the experience and networked knowledge of local free software cooperatives might be enough that small businesses would hire the local groups to upgrade their computer systems to Linux for real money. Local groups could even have contracts with particular distros that provide paid business support to receive some of the profit. Local cooperatives would not likely make much money, but maybe enough on occasion to purchase a few rounds of quality microbrews to celebrate a few more people unshackled from Goliath-soft. Very few people will get rich with Linux, but a lot of people could be meaningfully less poor with it, and free-as-in-freedom might actually buy the enjoyment of a few free-as-in-beers.
Read in the original layout at: http://www.linux.com/feature/134126

How to write a thorough review of a Linux distribution
July 03, 2008

By: Mark Gregson

I have never written a review of a Linux distribution, but I've read more than I can count, and many of them have been maddeningly incomplete and not worth the time it took to read them. Here's a list of items you need to talk about in order to write a thorough review, covering every aspect of the distribution from the initial download to the final recommendation and everything in between.

Not every item below applies to every distribution; you need to choose which items to include and which to ignore. For example, if the distribution is for an embedded device, there's probably not much point in discussing window manager themes. However, the more you include, the better your review will be. You can cover some of this information in a simple table, but many of the points deserve more explanation.

In addition to talking about each item, you should tell your readers how important or useful you think each item is. For example, if the distribution automatically boots all the way to a logged-in guest account, do you like this or not?
Purpose of distribution

Describe the purpose of the distribution as given by the creators. For example, the distribution may be intended for servers or it might instead be for multimedia creators. If the creators do not list a purpose, you can give your view, but this should be done in a later section where you describe your impressions and recommendations.
Parent distribution

Describe the "parent" distribution this distribution is based on. For example, is this distribution Slackware or Debian based? If it is not based on a parent distribution, discuss the implications of this. Spend more time on this point if the parent distribution is particularly relevant -- for example, is the distribution intended to be minimalist but was built on the biggest kernel around?
Version

Give the version number of the distribution you are reviewing. What is the release date? Is this the latest version? If not, explain why you are reviewing an older version. For example, the latest version might be beta and you only want to review the latest stable release. Which kernel is the distribution based on? Does the distribution contain multiple kernel versions? Don't just list the kernel as "2.6" but rather give the minor release number also (e.g. 2.6.25). Note whether the kernel is recent or older and discuss the distribution release dates versus the kernel release dates if that is relevant. For example, if the distribution is only days or weeks old but the kernel version is much older, note this. If the distribution uses a special kernel, include the bug fix number (e.g. 2.6.16.28).
Distribution creators

Mention the distribution creators -- is it one person working out of his home or is the distribution from a large scientific or government body?
Acquiring

Describe the methods provided to get the distribution. List the size of the distribution. Are there mirrors in major regions or is the distribution only available on its Web site? How fast was the download? How did you find out about this distribution? Are there other formats available? For example, can one purchase a DVD or CD? Is this a commercial or a free release?
CPU architectures

List the CPU architectures the version supports. Tell the readers whether one download supports multiple architectures, and about any special architectures that are supported.
Live vs. install

Say whether the distribution requires an install before you can use it fully. Most distributions have a live CD version. List the various "flavours" the distribution comes in. For example, are there separate minimal and full versions? Are there versions for different kinds of users or for different computer architectures?
Live CD issues

Describe the distribution's approach to saving personal files while in live CD mode. Are there any ways of saving the configuration? Does the distribution have a control panel entry for this? How easy is it to restart the live CD version and apply your saved configuration?
Install

Discuss the tools and process of installing the distribution onto the hard drive. How much manual intervention is required? How much technical expertise is needed? How well is the process documented? Does the system automatically partition the drives? Does it automatically back up existing files? Which filesystems are supported? Is there a default filesystem?
Languages

Since you will likely write your review in English (you are reading this guide in English, after all), emphasize the point if the default language is not English. List the languages supported by the distribution. What is the default language? How well supported are the other listed languages? For example, if English is not the default language, do applications and documentation switch to English when you switch the language to English?
Boot issues

Describe which boot parameters are required and which boot options are available. Does the distribution stop and ask for user input or does it boot automatically? Even if it does boot all the way to a graphical desktop, does it still require configuration settings in the desktop? For example, some distributions will run all the way to a logged-in user running X but then show a dialog box requiring the person to set up the network card.

Describe the ease of modifying the boot parameters, especially if you had to change any of the defaults in order to get the distribution working on your machine. Were there function keys that changed the default resolution or runlevel? Describe the level of documentation provided on the boot screens themselves. For example, did you have to already know that you should type "noapic" or did the boot screens explain that and all other "cheat codes" (at least the ones required to get your machine working)?

Tell the user if the system boots with a splash screen. This is particularly important if the distribution is intended for novice users. Does the distribution show any boot output or is it fully hidden?
Start scripts

Describe the startup scripts that come with the distribution. Besides startx, are there separate start scripts for each window manager? How easy is it to find them (i.e. are they documented clearly)? Are there start scripts for services like CUPS and firewalls?
Graphical vs. text

Explain what happens when the system has finished booting. Does it automatically start an X session? Which version of X does it use? If it doesn't start X automatically, what does it do? What sorts of instructions are given on screen, regardless of the boot mode? Is there enough information given to tell the user how to log in? For example, for any distribution intended for novice users, are the username and password shown on the login screen?
Screen resolution

Discuss the video mode the system boots to by default if it boots to X. Does it automatically find and use the highest resolution or possibly the highest refresh rate? Is the boot video mode selectable at some point?
Hardware detection

Describe the automatic hardware detection of the distribution. This includes but is not limited to video cards, audio cards, keyboards, mice (including the scroll wheel), USB ports, hard drives, CD and DVD devices, modems, network cards, printers, and scanners. Are the correct video drivers loaded? If you plug in or remove USB devices, does the system correctly configure the devices? Does your sound card work? Which sound architecture does it use?
Mounting

Tell whether the hard drives are mounted in read-only or in read-write mode. Can the distribution read and write non-Linux drives? How easy is it to switch between read-only and read-write? Were all the hard drives automatically mounted?
Network

Discuss how the distribution configures the network devices. Are they all discovered? Does the system let you use DHCP? How much manual configuration is required? What tools are provided for managing the network?

Describe whether wireless network cards are properly configured. On startup, does the system tell you whether it tried, and what the result is? Are there any special tools to help you manage wireless connections?
Printing

Describe the printer setup process. Print some documents. Is CUPS automatically started? How easy is it to add a printer? Which printers are supported?
Laptop support

If you run the distribution on a laptop, discuss how well the hardware is supported. Does the distribution correctly deal with putting the laptop into sleep and hibernate modes? Is the infrared port configured? Does it support Bluetooth?
Login manager

Mention the login manager that starts by default. You can also list other supplied login managers, if any.
Window managers

List all the window managers supplied with the distribution and their version numbers. Does the login manager (if any) list the window managers and allow the user to select one different from the default? Are there window managers included that aren't shown by the login manager? Are there any unusual window managers?
Themes

Discuss the themes provided. How easy is it to change themes and get new ones?
Look and feel

Discuss the overall look of the distribution. Is there a consistent feel starting from the boot loader and going all the way through the login manager, default themes, and any customized control panels?
Fonts

Discuss the fonts that are installed with the distribution. Note anything exceptional, such as a large number of unusual fonts.
Desktop

Describe the desktop. Which applets are started and in which taskbars? What icons are there? Are there any special panels or menus? Do any other programs start automatically?
Menus

Describe how well the menus are configured in each window manager. Does each window manager have the same applications listed as the others, and are they laid out identically to the other supplied window managers as far as the window manager will allow? Are all the important installed applications listed? Are there submenus by category? If so, how well do the applications fit the assigned category? How many submenus do you need to navigate before you find what you want? Are there desktop icons for any applications? Do not assume that all applications shown in menus actually work or are even installed. For example, if the distribution includes a word processor in the menu, did you start it?
Software selection

Discuss the applications that are installed in the distribution by category. Highlight any unusual applications. List any non-open-source applications. Note especially if anything is missing. For example, if this is a desktop distribution, are there office applications? If it is a server edition, are all the services available?
Codecs

Discuss the codecs supplied with the distribution. How complete is the set of included codecs? Are all the free codecs included by default? Are there non-free codecs that might cause legal issues in some jurisdictions? How easy is it to get additional codecs if required? How well did the applications play the media you tested?
Customized control panel

Describe any customized control panels that come with the distribution. What tasks do they cover? How easy are they to find and use? Do the tools work completely or does the user need to do further work via non-customized tools?
Customized application configuration

Describe any customized application configuration tools provided. For example, is there a customized tool to configure encryption?
Special configurations

Describe any special programs and configurations not found in most other distributions. For example, if the distribution includes Wine, is it fully configured to run, and are there any installed Windows applications that are preconfigured? Are there icons or menu items that start these applications?
Development tools

Describe the development tools provided. List the compilers, editors, and integrated development environments.
Services

List the services that are started automatically. For example, is sshd started? Describe the system for controlling the starting and stopping of services. Is there an administration control panel for services? Are any important services not included in the control panel?
Security

Describe the security features of the distribution. Is the root password weak or strong? Are there any guest accounts or does the distribution automatically log in as root? Are the guest accounts password protected? Does the distribution boot all the way to a desktop or is there a login screen? Are the passwords easy to find? For example, some distributions show the password on the login screen as part of the wallpaper. Are there special security features such as firewalls or automatically encrypted filesystems?
Packages

Discuss package management for the distribution. Test out some package installs. Which package repositories are preconfigured? How many packages are there for this distribution? What is the package manager? How easy was it to install applications? Did the newly installed packages work?
Patches

Describe the method provided by the distribution creators for patching the distribution. For example, are there any patches or does the user need to download a complete copy of the latest distribution? How easy is it to apply the patches?
Performance

Discuss the speed of the distribution if it differs significantly from other distributions. Does it boot extra quickly? Is the windowing system extremely responsive? Is the kernel optimized in any way that shows? Does the distribution run entirely within RAM? Did the install go extremely slowly?
Stability

Describe the stability of the distribution. Do any of the applications crash? Does the distribution start the same every time? Does it have trouble with certain configurations of hardware such as laptops or old computers?
Shutdown

Describe what happens when the user exits. For example, if you shut down X via the Ctrl-Alt-Backspace key combination, do you return to a login screen or does the system shut down? Does it drop the user to a command line? Does the system power off automatically? Does it eject the CD during shutdown?
Remastering

Discuss any tools provided for rebuilding the distribution. How easily can you remaster the distribution? Is there any documentation for this task?
Upgrade and rollback

Discuss the process of moving to a newer or older version of the distribution after having done an install. Are there any tools provided to assist with this?
Problems

List all the problems you encountered. Discuss what you found confusing. What did not work as documented? What workarounds did you have to apply? Did the solutions provided by the official support staff or the user community work? Are problems noted on the Web site, and does the project have an expected schedule of when they will be fixed?
Documentation

Describe the documentation provided with the distribution. Are there man pages for every application? Is the info command included with full documentation? Does the distribution include the help files for each application? Are all the customized tools documented?
Support

Describe the official support for the distribution. How responsive are the creators to user questions? Are they open to receiving help? How complete is the Web site, and how easy is it to find help there? Are there other help forums, such as IRC channels, and do developers frequent them?
User community

Describe the user community to the extent you can judge from the distribution's forums. How large and active is it? How helpful is it? How friendly is it?
Source code availability

Describe the process of getting the software for the distribution. How easy is it to find all the software, especially for the customized tools? How well is it documented?
Releases and roadmap

Does the distribution have a regular release schedule? Is the schedule published? How closely do the creators adhere to the schedule? Describe the roadmap for the distribution. Are there documented plans for future releases? What things are in the roadmap?
History

Describe the history of the distribution. Has the distribution been stable at each release? How has the distribution changed over time? When did the distribution first appear and how often was it updated? Has the distribution been influential on other distributions or in other ways?
Screenshots

Provide useful screenshots, not just screenshots for the sake of having some pictures in your review. Useful screenshots show noteworthy items described in the text, such as something that makes the distribution distinctive or particularly interesting.
Links

Provide links in your review not only to the distribution's Web site but also to anything you mention that is not directly part of the distribution you are reviewing. For example, if you mention other distributions, you should include links to them.
Your omissions

List the things you did not test and the reasons why you did not.
Your system and your perspective

Describe the hardware and software configuration of your test machine. Did you have any esoteric hardware? Did you run the distribution inside a virtual machine?

Describe your relevant experience. How long have you been using Linux? How many distributions have you tried? Which distributions do you regularly use? Are you an administrator or programmer?

Discuss your reasons for reviewing this distribution. For example, do you want to describe the latest release of this distribution? Do the distribution creators make any special claims for this release or this distribution? Have you been using this distribution for day-to-day work?

Disclose your biases. For example, do you prefer everything to be automated or would you rather have manual control? Do you have a favourite application that wasn't included when you thought it should be?
Recommendations

Summarize your review and your impressions. Compare this distribution against others of the same type. If the target audience is described by the creators, does the distribution meet the needs of the target audience? What kind of users will most like this distribution?
Final words

Consider having your review edited. Your review should be well-organized, clearly worded, and without grammar or spelling errors.

This guide on writing reviews means that you won't be able to blast through a distribution in 20 minutes, but the extra time you take to cover so many items will be rewarded by your readers' interest and appreciation. They'll want to read the next thing you write. Your review may become the definitive standard against which all others are compared.
Read in the original layout at: http://www.linux.com/feature/139593

Wednesday, July 9, 2008

Beyond Your First Shell Script

August 1st, 1996 by Brian Rice


So here it is—your first shell script:

lpr weekly.report
Mail boss < weekly.report
cp weekly.report /floppy/report.txt
rm weekly.report

You found yourself repeating the same few commands over and over: print out your weekly report, mail a copy to the boss, copy the report onto a floppy disc, and delete the original. So it was a big time saver when someone showed you that you can place those commands into a text file (“dealwithit”, for instance), mark the file as executable with chmod +x dealwithit, and then run it just by typing its filename.

But you'd like to know more. This script you've written is not very robust; if you run it in the wrong directory, you get a cascade of ugly error messages. And the script is not very flexible either—if you'd like to print, mail, back up, and delete some other file, you'll have to create another version of the script. Finally, if someone asks you what kind of shell script you've written—Bourne? Korn? C shell?—you can't say. If any of that sounds familiar, read on.

Last question first: what kind of shell script is this? Actually, the script above is quite generic. It uses only features common to all the shells. Lucky you. As your shell scripts get more complex, you'll need to put a directive at the beginning to tell the operating system what sort of shell script this is.

#!/bin/bash

The #! should be the first two characters of the file, and the rest should be the complete pathname of the shell program you intend this script to be run by. Astute readers will note that this line looks like a comment, and, since it begins with a # character, it is one, syntactically. It's also magic.

When the operating system tries to run a file as a program, it reads the first few bytes of the file (its “magic number”) to learn what kind of file it is. The byte pattern #! means that this is a shell script, and that the next several bytes, up to a newline character, make up the name of the binary that the OS should really run, feeding it this script file.

Paranoid programmers will make sure that no spaces are placed after the executable name on the #! line. You are paranoid, aren't you? Good. Also, notice that running a shell script requires that it be read first; this is why you must have both read and execute permission to run a shell script file, while you need only execute permission to run a binary file.
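
You can see that last point for yourself with a quick experiment (the exact wording of the error varies from shell to shell, so treat the comments as a sketch):

chmod 100 dealwithit     # execute permission only, no read permission
./dealwithit             # fails with a "Permission denied" message,
                         # because the shell cannot read the script
chmod 755 dealwithit     # restore the usual permissions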

In this article, we will focus on writing programs for the Bourne shell and its descendants. A Bourne shell script will run flawlessly not only under the Bourne shell, but also under the Korn shell, which adds a variety of features for efficiency and ease of use. The Korn shell itself has two descendants of its own: the POSIX standard shell, which is virtually identical to the Korn shell; and a big Linux favorite, the Bourne Again shell. The Bourne Again shell (“bash”) adds mostly interactive features from the descendants of the C shell, Bill Joy's attempt to introduce a shell that would use C-like control structures. What a great idea! For a few good reasons, most shell programming has followed the Bourne shell side of the genealogical tree. But people love the C shell's interactive features, which is why they too were incorporated into the Bourne Again shell.

Let me rephrase: do not write C shell scripts. Continue to use the C shell, or its descendant tcsh, interactively if you care to; your author does. But learn and use the Bourne/Korn/bash shells for scripting.

So here is our shell script now:

#!/bin/bash
lpr weekly.report
Mail boss < weekly.report
cp weekly.report /floppy/report.txt
rm weekly.report

If we run this script in the wrong directory, or if we accidentally name our file something other than “weekly.report”, here's what happens:

lpr: weekly.report: No such file or directory
./dealwithit: weekly.report: No such file or directory
cp: weekly.report: No such file or directory
rm: weekly.report: No such file or directory

and we get a bunch of “Permission denied” messages if we run the script when the permissions on the file are wrong. Bleah. Couldn't we do a check at the beginning of the program, so that if something is wrong we can avert all these ugly messages? Indeed we can, using (surprise!) an if.

It occurs to us that if cat weekly.report works, so will most of the things our script wants to do. The shell's if statement works just as this thought suggests: you give the if statement a command to try, and if the command runs successfully, it will run the other commands for you too. You also can specify some commands to run if the first command—called the “control command”—fails. Let's give it a try:

#!/bin/bash
if
    cat weekly.report
then
    lpr weekly.report
    Mail boss < weekly.report
    cp weekly.report /floppy/report.txt
    rm weekly.report
else
    echo I see no weekly.report file here.
fi

The indentation is not mandatory, but does make your shell scripts easier to read. You can put the control command on the same line as the if keyword itself.

This new version works great when there's an error. We get only one “No such file or directory” message, an improvement over four, and then our helpful personalized error message appears. But the script isn't so hot when it works: now we get the contents of weekly.report dumped to the screen as a preliminary. This is, after all, what cat does. Couldn't it just shut up?

You may know something about redirecting input and output in the world of Unix: the > character sends a command's output to a file, and the < character arranges for a command to get its input from a file, as in our Mail command. So if only we could send the cat command's output to the trash can instead of to a file... Wait! Maybe there's a trash can file somewhere. There is: /dev/null. Any output sent to /dev/null dribbles out the back of the computer. So let's change our cat command to:

cat weekly.report >/dev/null

Because you are paranoid, you may be wondering whether sending output to the trash can will affect whether this command succeeds or fails. Since /dev/null always exists and is writable by anyone, it will not fail.

Now our script is much quieter. But when cat fails, we still see the

cat: weekly.report: No such file or directory

error message. Why didn't this go into the trash can too? Because error messages flow separately from output, even though they usually share a common destination: the screen. We redirected standard output, but said nothing about errors. To redirect the errors, we can:

cat weekly.report >/dev/null 2>/dev/null

Just as > means “Send output here,” 2> means “Send errors here.” In fact, > is really just a synonym for 1>. Another, terser way to say the above command is this:

cat weekly.report >/dev/null 2>&1

The incantation 2>&1 means “Send errors (output stream number 2) to the same place ordinary output (output stream number 1) is going to.” By the way, this 2> jazz only works in the Bourne shell and its descendants. The C shell makes it annoying to separate errors from output, which is one of the reasons people avoid programming in it.

You may be saying to yourself: “This cat trick is fun, but isn't there some way I can just give a true-or-false expression? Like, either the file exists and is readable, or not?” Yes, you can. There is a command whose whole job is to succeed or fail depending on whether the expression you give is true or not: test. This is why your test programs called “test” never work, by the way. Here is our program, rewritten to use test:

#!/bin/bash
if
    test -r weekly.report
then
    lpr weekly.report
    Mail boss < weekly.report
    cp weekly.report /floppy/report.txt
    rm weekly.report
else
    echo I see no weekly.report file here.
fi

The test command's -r operator means, “Does this file exist, and can I read it?” test is quiet regardless of whether it succeeds or fails, so there's no need for anything to get sent to /dev/null.

Test also has an alternative syntax: you can use a [ character instead of the word test, so long as you have a matching ] as the last argument. Be sure to put a space between any other characters and the [ and the ] characters! We can make our if look like this now:

if [ -r weekly.report ]

Hey, now that looks like a program! Even though we're using brackets, this is still the test command. There are lots of other things test can do for you; see its man page for the complete list. For example, we seem to recall that what lets you delete a file is not whether you can read it, but whether the directory it sits in gives you write permission. So we can re-write our script like this:

#!/bin/bash
if [ ! -r weekly.report ]
then
    echo I see no weekly.report file here.
    exit 1
fi
if [ ! -w . ]
then
    echo I will not be able to delete
    echo weekly.report for you, so I give up.
    exit 2
fi
# Real work omitted...

Each test now has a ! character in it, which means “not”. So the first test succeeds if the weekly.report is not readable, and the second succeeds if the current directory (“.”) is not writable. In each case, the script prints an error message and exits. Notice that there's a different number fed to exit each time. This is how Unix commands (including if itself!) tell whether other commands succeed: if they exit with any exit code other than 0, they didn't. What each non-zero number (up to 255) means, other than “Something bad happened,” is up to you. But 0 always means success.

If this seems backwards to you, give yourself a cookie. It is backwards. But there's a good design reason for it, and it's a universal Unix-command convention, so get used to it.
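
You can peek at an exit code directly at the prompt: the shell stores the status of the last command in the special variable $?. A quick sketch, using grep and a hypothetical file notes.txt:

grep -q TODO notes.txt
echo "grep exited with status $?"    # 0 if TODO was found, non-zero if not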

Notice also that our real work no longer has an if wrapped around it. Our script will only get that far if none of our error conditions are detected. So we can just assume that all those error conditions are not in fact present! Real shell scripts exploit this property ruthlessly, often beginning with screenfuls of tests before any real work is done.

Now that we've made our script more robust, let's work on making it more general. Most Unix commands can take an argument from their command lines that tells them what to do; why can't our script? Because it has “weekly.report” littered all through it, that's why. We need to replace weekly.report with something that means “the thing on the command line.” Meet $1.

#!/bin/bash
if [ ! -r $1 ]
then
    echo I see no $1 file here.
    exit 1
fi
if [ ! -w . ]
then
    echo I will not be able to delete $1 for you.
    echo So I give up.
    exit 2
fi
lpr $1
Mail boss < $1
# and so forth...
exit 0

$1 means the first argument on the command line. Yes, $2 means the second, $3 means the third, and so on. What's $0? The name of the command itself. So we can change our error messages so that they look like this:

echo $0: I see no $1 file here.

Ever noticed that Unix error messages introduce themselves? That's how.

Unfortunately, now there's a new threat to our program: what if the user forgets to put an argument on the command line? Then the right thing for $1 to have in it would be nothing at all. We might be back to our cascade of error messages, since a lot of commands, such as rm, complain at you if you put nothing at all on their command lines. In this program's case, it's even worse, since the first time $1 is used is as an argument to test -r, and test will give you a syntax error if you ask it to test -r nothing at all. And what does lpr do if you put nothing at all on its command line? Try it! But be prepared; you could end up with a mess.

Fortunately, test can help. Let's put this as the very first test in our program, right after the #!/bin/bash:

if [ -z "$1" ]
then
    echo $0: usage: $0 filename
    exit 3
fi

Now if the user puts nothing on the command line, we print a usage message and quit. The -z operator means “is this an empty string?”. Notice the double quotes around the $1: they are mandatory here. If you leave them out, test will give an error message in just the situation we are trying to detect. The quotes protect the nothing-at-all stored in $1 from causing a syntax error.
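
With this check in place, forgetting the argument earns the user a polite usage message instead of chaos. Assuming the script is still named dealwithit, a run with no arguments would look like this:

$ ./dealwithit
./dealwithit: usage: ./dealwithit filename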

This if clause appears at the very top of many, many shell scripts. Among its other benefits, it relieves us from having to wrap $1 in quote marks later in our program, since if $1 were empty we would have exited at the start. In fact, the only time quotes would still be necessary would be if $1 could contain characters with a special meaning to the shell, such as a space or a question mark. Filenames don't, usually.

What if we want our script to be able to take a variable number of arguments? Most Unix commands can, after all. One way is clear: we could just cut and paste all the stuff in our shell script, so we'd have a bunch of commands that dealt with $1, then a bunch of commands that dealt with $2, and so forth. Sound like a good idea? No? Good for you; it's a terrible idea.

First of all, there would be some fixed upper limit on the number of arguments we could handle, determined by when we got tired of cutting, pasting, and editing our script. Second, any time you have many copies of the same code, you have a quality problem waiting to happen: you'll forget to make a change, or fix a bug, in all of the many places necessary. Third, we often hand wildcards, like *, to Unix commands on their command lines. These wildcards are expanded into a list of filenames before the command runs! So it's very easy to get a command line with more arguments than some arbitrary, low limit.

Maybe we could use some kind of arithmetic trick to count through our arguments, like $i or something. This won't work either. The expression $i means “the contents of the variable called i”, not “the i'th thing on the command line.” Furthermore, not all shells let you refer to command-line words after $9 at all, and those that do make you use ${10}, ${11}, and so forth.

So what do we do? This:

while [ ! -z "$1" ]
do
    # do stuff to $1
    shift
done

Here's how we read that script: “While there's something in $1, we mess with it. Immediately after we finish messing with it, we do the shift command, which moves the contents of $2 into $1, the contents of $3 into $2, and so forth, regardless of how many of these command-line arguments there are. Then we go back and do it all again. We know we've finished when there's nothing at all in $1.”

This technique allows us to write a script that can handle any number of arguments, while only dealing with $1 at a time. So now our script looks like this:

#!/bin/bash
while [ ! -z "$1" ]
do
    # do stuff to $1
    if [ ! -r $1 ]
    then
        echo $0: I see no $1 file here.
        exit 1
    fi
    # omitted test...
    lpr $1
    Mail boss < $1
    # and so forth...
    shift
done
exit 0

Notice that we nested if inside while. We can do that all we like. Also notice that this program quits the instant it finds something wrong. If you would like it to continue on to the next argument instead of bombing out, just replace an exit with:

shift
continue

The continue command just means “Go back up to the top of the loop right now, and try the control command again.” Thought question: why did we have to put a shift right before the continue?

Here's a potential problem: we've made it easy for someone to use this program on files that live in different directories. But we're only testing the current directory for writability. Instead, we should do this:

if [ ! -w `dirname $1` ]
then
    echo $0: I will not be able to delete $1 for you.
    # ...

The dirname command prints out what directory a file is in, judging from its pathname. If you give dirname a filename that doesn't start with a directory, it will print “.”, the current directory. And those backquotes? Unlike all other kinds of quotation marks, they don't mean “this is really all one piece; ignore the spaces.” Instead, backquotes (also called “grave accents”) mean “Run the command inside the backquotes before you run the whole command line. Capture all of the backquoted command's output, and pretend that was what appeared on the larger command line instead of the junk in backquotes.” In other words, we are substituting a command's output into another command line.
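
You can watch command substitution happen with a one-liner at the prompt (the output below is only illustrative; your date will differ):

$ echo Today is `date`
Today is Tue Jun 10 15:04:01 EDT 2008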

So here is the final version of our shell script:

#!/bin/bash
while [ ! -z "$1" ]
do
    if [ ! -r $1 ]
    then
        echo $0: I see no $1 file here.
        shift
        continue
    fi
    if [ ! -w `dirname $1` ]
    then
        echo $0: I will not be able to delete $1 for you.
        shift
        continue
    fi
    lpr $1
    Mail boss < $1
    cp $1 /floppy/`basename $1`
    rm $1
    shift
done
exit 0

An exercise for the reader: what does `basename $1` do?

Now there are only two other techniques you need to know to meet the vast majority of your scripting needs. First, suppose you really do need to count. How do we do the equivalent of a C for loop? Here's the traditional Bourne shell way:

i=0
upperlim=10
while [ $i -lt $upperlim ]
do
    # mess with $i
    i=`expr $i + 1`
done

Notice that we did not use the for keyword. for is for something else entirely. Instead, here we initialize a variable i to 0, then we enter and remain in the loop as long as the value in i is less than 10. (Fortran programmers will recognize -lt as the less-than operator; guess why < is not used in this context.) The rather mysterious line

i=`expr $i + 1`

calls the expr command, which evaluates an arithmetic expression. We stuff expr's output back into i using backquotes.

Ugly, isn't it? And not especially fast either, since we are running a command every time we want to add 1 to i. Can't the shell just do the arithmetic itself? If the shell is the Bourne shell, no, it can't. But the Korn shell can:

((i=i+1))

Use that syntax if it works, and if you don't need portability. The bash shell uses something similar:

i=$(($i+1))

which is a bit more portable (it even works in the Korn shell), since it is specified by POSIX, but it still won't work in some non-POSIX Bourne shells.
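
Putting that together, our counting loop can shed expr entirely. Here is a sketch, assuming a POSIX-style shell such as bash or ksh:

i=0
upperlim=10
while [ $i -lt $upperlim ]
do
    # mess with $i
    i=$(($i+1))
done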

So what does for do? It allows you to wade through a list of items, assigning a variable to each element of the list in turn. Here's a trivial example:

for a in Larry Moe Curly
do
    echo $a
done

which would print

Larry
Moe
Curly

Less trivially, we can use this to handle the case where we want to do something for each word in a variable:

mylist="apple banana cheese rutabaga"
for w in $mylist
do
    # mess with $w
done

or for each file matched by a shell wildcard pattern:

for f in /docs/reports/*.txt
do
    pr -h $f $f | lpr
done

or for each word in the output of a command:

for a in `cat people.txt`
do
    banner $a
done

Here's how you can use for to simulate the C for you know and love:

for i in 0 1 2 3 4 5 6 7 8 9 10 11
do
    # mess with $i
done

Of course, it'd be very difficult to have a variable upper limit with this syntax, which is why we usually use the while loop shown above.
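
One workaround, if your system has the GNU seq command (part of coreutils on Linux, but not guaranteed on every Unix), is to generate the list with command substitution:

upperlim=11
for i in `seq 0 $upperlim`
do
    # mess with $i
done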

Congratulations! You've now seen what's at work in the vast bulk of practical shell scripts. Go forth and save time!

Brian Rice (rice@kcomputing.com) is a Member of Technical Staff with K Computing, a nationwide Unix and Internet training firm.

Validating an IP Address in a Bash Script

June 26th, 2008 by Mitch Frazier

I've recently written about using bash arrays and bash regular expressions, so here's a more useful example of using them to test IP addresses for validity.

To belabor the obvious: IP addresses are 32-bit values written as four numbers (the individual bytes of the IP address) separated by dots (periods). Each of the four numbers has a valid range of 0 to 255.

The following bash script contains a bash function which returns true if it is passed a valid IP address and false otherwise. In bash-speak, true means it exits with a zero status; anything else is false. The status of a command/function is stored in the bash variable "$?".

#!/bin/bash

# Test an IP address for validity:
# Usage:
#     valid_ip IP_ADDRESS
#     if [[ $? -eq 0 ]]; then echo good; else echo bad; fi
# OR
#     if valid_ip IP_ADDRESS; then echo good; else echo bad; fi
#
function valid_ip()
{
    local ip=$1
    local stat=1

    if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
        OIFS=$IFS
        IFS='.'
        ip=($ip)
        IFS=$OIFS
        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
            && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
        stat=$?
    fi
    return $stat
}

# If run directly, execute some tests.
if [[ "$(basename $0 .sh)" == 'valid_ip' ]]; then
    ips='
        4.2.2.2
        a.b.c.d
        192.168.1.1
        0.0.0.0
        255.255.255.255
        255.255.255.256
        192.168.0.1
        192.168.0
        1234.123.123.123
        '
    for ip in $ips
    do
        if valid_ip $ip; then stat='good'; else stat='bad'; fi
        printf "%-20s: %s\n" "$ip" "$stat"
    done
fi

If you save this script as "valid_ip.sh" and then run it directly, it will run some tests and print the results:

# sh valid_ip.sh
4.2.2.2             : good
a.b.c.d             : bad
192.168.1.1         : good
0.0.0.0             : good
255.255.255.255     : good
255.255.255.256     : bad
192.168.0.1         : good
192.168.0           : bad
1234.123.123.123    : bad

In the function valid_ip, the if statement uses a regular expression to make sure the subject IP address consists of four dot separated numbers:

if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then

If that test passes then the code inside the if statement separates the subject IP address into four parts at the dots and places the parts in an array:

OIFS=$IFS
IFS='.'
ip=($ip)
IFS=$OIFS

It does this by momentarily changing bash's Internal Field Separator variable so that rather than parsing words as whitespace-separated items, bash parses them as dot-separated. Putting the value of the subject IP address inside parentheses and assigning it to itself turns it into an array, where each dot-separated number is assigned to an array slot. Now the individual pieces are tested to make sure they're all less than or equal to 255, and the status of the test is saved so that it can be returned to the caller:

[[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
    && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
stat=$?
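
You can try this splitting trick in isolation at a bash prompt; here s is just a stand-in string:

s="10.0.0.138"
OIFS=$IFS
IFS='.'
parts=($s)          # unquoted expansion splits on '.' into an array
IFS=$OIFS
echo ${parts[3]}    # prints: 138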

Note that there's no need to test that the numbers are greater than or equal to zero, because the regular expression test has already eliminated anything that doesn't consist of only dots and digits.

Mitch Frazier is the System Administrator at Linux Journal.