By Daniel Rothamel on October 12, 2007
I admit it. I am into the whole social media thing.
- Facebook? Check.
- LinkedIn? Check.
- Twitter? Check.
I am a self-professed information junkie, so I find these sites fascinating. I have also found that they are a great way for me to stay in contact not only with my clients, but with other RE.net bloggers out there. I have met plenty of new bloggers and made some great contacts by using Facebook and LinkedIn. Twitter, however, seems to be lacking a significant real estate voice. Sure, some of my favorite real estate bloggers have Twitter profiles (Greg, Joel, & Jim among them). The problem is, they don’t update them much, if ever. I think this is a real shame.
I started using Twitter a few weeks ago. I admit that I came in with a built-in bias against Twitter. To be honest, I find the stated purpose of Twitter (i.e. “What are you doing?”) pretty lame. I don’t think there are a lot of people out there who really care what I am doing at any given moment. I decided that I would try to follow 5 rules when writing my tweets. I didn’t want things to get boring.
In true Web 2.0 spirit, people have taken Twitter and turned it into more than just countless status updates from millions of random people. It is truly a great place to find out what has captured people’s attention. All you have to do is seek out people who you find interesting and follow their tweets. It is a great way to stay on the bleeding edge of news and information.
Twitter has also established itself as a premier micro-blogging platform. It has been great for me because I can post links on Twitter that I find interesting, but that I don’t really have the time or inclination to use an entire blog post to discuss. There are plenty of other notable bloggers out there who are doing the very same thing. Sometimes following the tweets of others has inspired blog posts of my own. At the very least, Twitter has become a platform from which to launch discoveries into all sorts of things that I might have otherwise missed.
Real estate bloggers could benefit tremendously from using Twitter. The micro-blogging aspect of Twitter could be very valuable to people like real estate bloggers, who I am sure have all kinds of great ideas, but not always the time to write about them. Let’s say that I read a great story about mortgage fraud, but I just don’t have the time to devote a full post to it. I can post the link on Twitter, and perhaps someone else who is following me will follow the link and write a post of their own. Even if that doesn’t happen, because bloggers tend to be more plugged-in to what is happening in the industry, Twitter would help everyone stay on top of the industry by offering instantaneous communication and dissemination of information. In a way, Twitter is a living uber-wiki.
The real estate bloggers that I know are all very smart and creative people. Twitter offers a convenient and efficient way to get their message out not only to the rest of the blogosphere, but to the public as well. I am also confident that real estate bloggers could come up with alternative uses for Twitter that would benefit us all: bloggers, clients, and customers alike.
So, real estate bloggers, if you are reading this, head on over to Twitter and get going -- the RE.net needs you!
Wednesday, November 4, 2009
I Wish More Real Estate Bloggers Would Use Twitter
Tuesday, August 12, 2008
6 Steps to Secure Your Home Wireless Network
Posted on August 7th, 2008 by Ramesh
Filed Under: Security | Tags: Wi-Fi, Wireless
Most of you might have enabled wireless encryption, but that is only one of the 6 steps mentioned in this article to make your wireless network safe and secure from hackers. The screenshots below are from a Linksys wireless router, but you’ll find similar options for all 6 steps in wireless routers from other vendors.
1. Enable Encryption
Let us start with the basics. Most wireless routers ship with encryption disabled by default. Make sure to enable either WPA or WPA2 wireless encryption. Click on Wireless -> Wireless Security to enable encryption and assign a password, as shown in Fig-1. The following wireless encryption options are available.
- WEP (Wired Equivalent Privacy), 64-bit and 128-bit: WEP is an old wireless encryption standard. Never use WEP encryption, which can be cracked within minutes.
- WPA (Wi-Fi Protected Access): WPA-PSK is also referred to as WPA-Personal. This is a newer wireless encryption standard and is more secure than WEP. Most laptop wireless adapters support WPA.
- WPA2: This is the latest wireless encryption standard and provides the best encryption. Always use WPA2 if both your wireless router and your laptop's wireless adapter support it; a client-side configuration sketch follows below.
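On the client side, a Linux laptop can generate the matching WPA/WPA2 configuration with the standard wpa_passphrase tool. This is a minimal sketch; the SSID "HomeNet", the passphrase, and the config file path are placeholders for your own values:
# append a pre-shared-key block for the network "HomeNet" (names are placeholders)
wpa_passphrase HomeNet 'my-strong-passphrase' | sudo tee -a /etc/wpa_supplicant.conf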
2. Change the SSID name
SSID (Service Set Identifier) refers to the name of your wireless connection that you see in the “Available Wireless Connections” list on your laptop when connecting. Changing the wireless name itself doesn’t offer any protection, but it usually discourages a hacker, who will see that you’ve taken some steps to secure your wireless connection. Click on Wireless -> Basic Wireless Settings and change the “Wireless Network Name (SSID):”, as shown in Fig-2.
3. Disable SSID broadcast
You can keep your wireless name from being displayed in “Available Wireless Connections” on all of your neighbors’ laptops by instructing the wireless router not to broadcast the name to everybody. Once you’ve disabled the SSID broadcast, you will need to give the name to anyone connecting to your wireless network for the first time. Click on Wireless -> Basic Wireless Settings -> click the Disable radio button next to “Wireless SSID Broadcast”, as shown in Fig-2.
4. Enable MAC filtering
Even after you have performed items #1-#3 above, a very determined hacker may still get access to your network. The next security step is to allow wireless access only to your trusted laptops, by permitting connections only from known MAC addresses. A MAC (Media Access Control) address is a unique identifier attached to most network adapters; in this case, it is the unique identifier of your laptop's wireless adapter. On Linux, run ifconfig from the command prompt to get the wireless hardware address. On Windows, run ipconfig /all from the command prompt to identify the MAC address, as shown below.
C:\>ipconfig /all

Ethernet adapter Wireless Network Connection:

Connection-specific DNS Suffix . : socal.rr.com
Description . . . . . . . . . . . : Dell Wireless 1390 WLAN Mini-Card
Physical Address. . . . . . . . . : 00:1A:92:2B:70:B6
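On the Linux side, the equivalent lookup might look like this (wlan0 is an assumed interface name; yours may differ):
# the MAC appears in the HWaddr field of the wireless interface
ifconfig wlan0 | grep HWaddr
# or, with the newer iproute2 tools
ip link show wlan0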
Click on Wireless -> Wireless MAC Filter -> click the Enable radio button next to “Wireless MAC Filter” -> click the “Permit only PCs listed to access the wireless network” radio button, as shown in Fig-3. Then click on Edit MAC Filter List and add the MAC address of your laptop to the list. If you want to allow access for more than one laptop, add the MAC addresses of all of the laptops to the list, as shown in Fig-4, and click “Save Settings”.
5. Change password for Web Access
The default web-access password is the same for every unit of a given wireless router model, as assigned by the manufacturer. Change the default password of the wireless router's web interface to a strong password. Click on Administration -> Management to change the password, as shown in Fig-5 below.
Fig-5 Change password and disable wireless web access
6. Disable administrative access through web
As a final step, make sure to disable web administrative access over wireless. Once you do this, you can still make configuration changes to the wireless router by connecting your laptop with an Ethernet cable. Click on Administration -> Management -> the Disable radio button next to “Wireless Access Web”, as shown in Fig-5 above.
Thursday, July 10, 2008
Bringing the trashcan to the command line
The trash
project allows you to interact with your desktop trashcan from the
command line. It lets users "undo" deletions made with the trash
command in a similar manner to restoring files from the trashcan in a
desktop environment. For experienced Linux users, the trash command
comes in handy when you want to put a file into the trashcan from the
command line.
Because trash implements the FreeDesktop.org Trash Specification,
it plays nicely with the trashcan offered by the KDE desktop
environment. That means you can trash a directory from the command line
and see it in your trashcan from Konqueror. Unfortunately, the trash
implementation in GNOME 2.20 did not communicate with either KDE 3.5.8
or the trash command.
Installation
Trash is not available from the distribution repositories for
Ubuntu, Fedora, or openSUSE. I built version 0.1.10 from source on a
64-bit Fedora 8 machine. Trash is written in Python, so build and
installation follows the normal python setup.py install
procedure.
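Concretely, the build boils down to a few commands (a sketch that assumes the 0.1.10 source tarball and its default directory name):
tar xzf trash-0.1.10.tar.gz
cd trash-0.1.10
sudo python setup.py install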
When you use the list-trash command to view the contents of your
trashcan, you might encounter an error if you are also using the Linux
Logical Volume Manager (LVM).
Version 0.1.10 of trash uses the df command to work out which filesystems are available. Unfortunately, it invokes df without the POSIX compatibility option -P, and as a result the lines describing LVM devices include line breaks where trash does not expect them. You can fix this by changing line 460 of /usr/lib/python2.5/site-packages/libtrash.py to include the -P option when spawning the df command, as shown below:
    else:
        df_file=os.popen('df -P')
        while True:
I also found an issue executing some trash commands when using bind mounts to mount filesystems in two locations. The commands would simply fail with ValueError: path is not a mount point, without saying which path is not a mount point or what you should do to fix the situation.
Usage
The trash project includes four commands: empty-trash, list-trash, restore-trash, and trash, the latter being the main command, with the others enabling full trashcan interaction from the command line. The only two commands that accept command-line parameters are empty-trash and trash. The empty-trash command accepts a single argument that specifies a cutoff, in days, for the age of a trash item. For example, if you specify 7, then any items in your trashcan older than a week will be deleted. The trash command takes the file and directory names that you wish to put into your trashcan, and also accepts -d, -f, -i, and -r options for compatibility with the rm(1) command. These last four options don't actually do anything with the trash command apart from making its invocation more familiar to users of the rm command.
Let's run through an example of how to use the trash commands:
$ mkdir trashdir1
$ date >trashdir1/dfa.txt
$ date >trashdir1/dfb.txt
$ list-trash
$ trash trashdir1
$ list-trash
2008-06-10 15:03:11 /home/ben/trashdir1
$ mkdir trashdir1
$ date >trashdir1/dfc.txt
$ trash trashdir1
$ list-trash
2008-06-10 15:04:01 /home/ben/trashdir1
2008-06-10 15:03:11 /home/ben/trashdir1
$ restore-trash
 0 2008-06-10 15:04:01 /home/ben/trashdir1
 1 2008-06-10 15:03:11 /home/ben/trashdir1
What file to restore [0..1]: 0
$ l trashdir1/
total 8.0K
-rw-rw-r-- 1 ben ben 29 2008-06-10 15:03 dfc.txt
As you can see, it is perfectly valid for multiple items in the
trashcan to have the same file name and have been deleted from the same
directory. Here I restored only the latest trashdir1 that was moved to
the trashcan.
The restore-trash command must be executed in the directory of the
trashed file. The above commands were all executed in my home
directory; if I had been in /tmp and executed restore-trash, I would
not have seen /home/ben/trashdir1 as a restore option. At times it
might be misleading to execute restore-trash and be told that there are
"No trashed files." Perhaps the developers should expand this message
to inform you that there are "No trashed files for directory X" so that
you have a hint that you should be in the directory that the file was
deleted from before executing restore-trash. For scripting it might
also be convenient to be able to use restore-trash with a path and have
it restore the most recent file or directory with that name.
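In the meantime, a tiny wrapper can approximate that behavior. This is illustrative only: it assumes restore-trash reads its selection from standard input, and that entry 0 is always the most recent item:
# restore the newest trashed item that came from /home/ben (assumed directory)
cd /home/ben && echo 0 | restore-trash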
While the command-line options to the trash commands are currently fairly spartan, the ability to interact from the command line with the same trashcan that KDE 3 uses can help folks get into command-line work without stepping up to the more permanent rm command right off the bat.
Kudos to openSUSE 11.0
openSUSE 11.0
was one of the most anticipated Linux distro releases of 2008. Despite
a few bugs in the final code, which was released yesterday, it was
worth the wait. The openSUSE version of KDE 4 alone is worth the
download, and the improvements to the software manager make customizing
a pleasure.
I used the 4.3GB DVD version, but live CD versions are also available.
In either, the first thing you might notice is the beautiful new
installer. The layout is similar to that of previous versions, with a
large interactive window and a progress list to the right, but with an
elegant new color scheme and stylish graphics. And the beauty is not
only skin deep -- there are a lot of changes under the hood in this
release.
The openSUSE developers have made many improvements to save users
time and effort. A new "Installation from Images" option uses a defined
set of packages in an install image for many common package groups,
such as the GNOME desktop. Using this saves users from having to
organize the needed packages and resolve the dependencies at the time
of the system installation. It's a feature users can disable if they
wish, but it does seem to save some install time.
At the beginning of the install process you can tick "Use Automatic
Configuration." In other distributions, similarly worded phrases can
turn off hardware auto-detection and lead to long, agonizing
configurations. Wanting to avoid that fate, I checked the box, but as
it turns out, this setting merely bypasses the hardware confirmation
screen where users normally accept the auto-detected proposal or custom
configure their hardware. For users who normally agree to the proposed
settings, this saves time and clicks.
Automatic Configuration does not bypass the installation summary.
You can still change many options, such as the partitioning proposal.
openSUSE presents the user with a proposed partitioning layout, but you
can edit the configuration to your needs. For example, you can make a
new partition or choose one that is already present. You can even use
advanced options such as LVM and RAID.
During the DVD install you can choose your desktop environment from
among GNOME, KDE 4, KDE 3, Xfce, and Others, listed in alphabetical
order. Some other desktops available for install include Enlightenment,
IceWM, FVWM, and Window Maker. These less popular desktops don't
include the openSUSE look. They are provided as released by the
upstream developers.
No desktop environment is selected by default -- you must choose
one. At the installation summary screen, you can click the Software
heading to select additional desktop environments and software if
desired.
The package selection screens haven't changed much in function on
the surface, but they too have received a facelift. You can still
search or choose packages by groups, package patterns, or individually.
To save another step during the install the openSUSE developers
decided that the first user and root would share the same password.
They believe that a large percentage of users use the same password for
the first user and root, but if you have security concerns, it's easy
to change the root password later.
OpenSUSE has always had one of the premier installers in the Linux
landscape, and the developers have worked hard to make it even better
in 11.0. Besides the items I specifically mentioned, there are little
changes all over that make it more streamlined and easier than ever.
Because of its many desktop options, openSUSE is like several
distributions in one. Here's a look at each of the major desktop
environments.
KDE and Xfce
KDE 3.5.9 and Xfce 4.4.2 are stable, old-reliable desktops, and they
functioned just as expected with no problems. Like the other major
openSUSE desktops, they are customized to give them an openSUSE look
and feel. In fact, the gray and green theme runs throughout the whole
of openSUSE, including the GRUB screen, login screen, and application
splash screens, which gives the desktop a uniform professional touch.
At first glance, little distinguishes KDE 4 from KDE 3 -- which is a
good thing. Instead of a clunky, buggy Vista clone, users are welcomed
into a familiar, reassuring environment. KDE 4 in openSUSE is a tidy,
understated desktop with a panel at the bottom, a few icons, the
Kickoff menu, and the widget creator in the upper right corner.
In addition to the comfortable environment, many KDE applications
are now ported or backported to KDE 4.0.4 in openSUSE. I was able to
import mbox mail files as well as KDE 3 maildir-format files into KMail
1.9.51. Likewise, I was able to import my news feeds into Akregator
1.2.50. Both of these functioned well, except Akregator was a bit
sluggish during fetches under the weight of my 700+ feeds. I was able
to just drop my Konqueror bookmark file into the .kde4 directory. It
appears that, for all the improvements KDE 4 is supposed to bring, Flash is still broken in Konqueror, although this is probably universal in KDE and not confined to openSUSE.
When inserting removable media under KDE 4, the New Device Notifier
located in the panel beside the clock opens with a list of devices.
Depending upon the media, you may be given a choice of actions or have
one default. For example, a data CD gives only "Open in Dolphin," while
a USB memory stick opens an action chooser. Beside each device is an
icon that will unmount or eject the device.
Overall I was impressed with the usability and stability found in
openSUSE's KDE 4 implementation. I began experiencing crashes only
while exploring the Personal Settings module (Systemsettings, the
replacement for the KDE Control Center) and changing numerous settings
and reversing them back and forth. This is when I discovered that you
need to press Ctrl-Alt-Backspace twice to kill the X server. This is
the first time I've needed to do this in openSUSE.
GNOME 2.22
I experienced some issues with the GNOME desktop. It started just
fine and seemed functional during the first tests. Problems arose when
I tested the update applet. When I was adding a repository, the online
update utility crashed and left most of GNOME unresponsive. When I left
the GNOME desktop, the login screen font was scrambled or not fully
rendered. I logged back into GNOME, but the font problem persisted. I
tried to log out again, but now the Logout tool didn't function any
longer.
After rebooting the system, GNOME seemed to function normally, but
the update applet never returned to the panel. Running Online Update
configuration through the YaST Control Center in GNOME continued to
crash, and thus the Online Update tool would not function. However, the
update applet did continue to appear in the KDE desktops afterward, and
I was able to complete configuration and check for updates while in
KDE.
Hardware support
Though I had some problems with software in different desktop
environments, hardware support in Linux has all but become a non-issue,
and this is even more true with openSUSE. While I don't own any exotic
or bleeding-edge hardware, what I do have is well supported. For
example, my Hewlett-Packard laptop, which was designed for Windows, is
almost fully supported. The only exception is the wireless Ethernet
chip, which requires Windows drivers. I used Ndiswrapper in 11.0 to
extract and load the drivers to bring it to life. Other critical laptop
features were available by default, although Suspend to RAM didn't work
for me.
Sometimes, though, my Internet connection, which was configured to
start at boot, wouldn't start. KNetworkManager didn't function for me this release either. The GNOME network applet seemed to
work well, however, so as a workaround, I just used it in KDE too.
Software
openSUSE is what I commonly refer to as a "kitchen sink" distro
because it includes everything but the kitchen sink. It'd almost be
easier to list what it doesn't have than what it does.
Besides a few extra desktops and the kernel development packages, my
install consisted of the default package selection. This includes
Firefox 3.0b5, OpenOffice.org 2.4.0, GIMP 2.4.5, Inkscape, Pidgin,
Liferea, Ekiga, GnuCash, Evolution, Tasque, and KOffice.
openSUSE also includes the latest Compiz Fusion. AIGLX, which
provides GL-accelerated effects on desktops, should be enabled by
default for those with supported hardware. That unfortunately leaves
Nvidia users out until they install the proprietary graphic drivers.
However, there are graphical configuration tools for enabling and
setting options such as the choice of profile. You can choose profiles
ranging from lightweight with few effects to full with lots of effects.
The CompizConfig Settings Manager provides deeper settings. In
addition, there are lots of great plugins included, such as the
Magnifier, Window Scaling, and Show Mouse.
Under the hood openSUSE 11.0 ships with Linux-2.6.25.5, X.Org X Server 1.4.0.90, Xorg-X11 7.3, and GCC 4.3.1 20080507.
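If you want to confirm these on your own install, a couple of quick terminal checks will do (version numbers will drift as updates arrive):
uname -r        # running kernel version
gcc --version   # installed GCC version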
Multimedia
Multimedia support is a bit lacking in openSUSE by default. openSUSE has a policy of excluding certain code that does not conform to the open source definition
and, unfortunately, that includes support for most multimedia formats.
openSUSE 11.0 includes the just released Banshee 1.0, Amarok 1.4.9.1,
K3b, Brasero, Totem, and Kaffeine. I could listen to an audio CD and
watch Flash content from the Web, but I couldn't use any other
multimedia file on hand.
However, community-provided solutions are already in place. YaST one-click install wizards
will add repositories and install support for popular audio and video
formats. After installing the codecs, libraries, and updated
applications, I was able to enjoy any video or audio file I tested. I
sometimes experienced crashes in Banshee while trying to adjust the
volume. The problem was reproducible, but not consistent. I can't seem
to get Amarok to recognize my CD-ROM drive either, but I can use KsCD
or Banshee instead to listen to audio CDs.
Software management
If you'd like to install additional software, openSUSE comes with a
powerful package management system. ZYpp, which utilizes the RPM Package Management
format, was completely rewritten during the 10.x series, and 11.0
brings even more improvement. To the end user this means better
dependency resolution and much faster performance.
Zypper, the command-line package manager, functions much like apt-get does for APT. It can install and remove packages, refresh repositories, update packages, or upgrade the whole system. For example, zypper install crack-attack will install the game Crack Attack, and zypper search tuxpaint will see whether Tuxpaint is available in the openSUSE repositories you have configured. Some other subcommands include remove, addrepo, update, and dist-upgrade.
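A few illustrative invocations (the package names assume the standard openSUSE 11.0 repositories):
sudo zypper install crack-attack   # install a package
sudo zypper remove crack-attack    # remove it again
sudo zypper update                 # apply available package updates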
Those who prefer graphical tools are in for a treat. The YaST package management front ends have gotten a facelift this release. They come in a Qt version for KDE desktops and a GTK version for GNOME users. Using YaST simplifies software installation for users of all experience levels. It just takes a few mouse clicks to install any package.
In my testing, I found that both the command line and the graphical
package tools worked well and were much faster than in previous
releases. My only complaint is that the YaST GUI still refreshes the
repository databases automatically each time it is opened. Fortunately,
in this release there is a Skip Refresh button, but with the speed
improvements it's usually half done by the time I grab the mouse and
click it.
Conclusions
openSUSE 11.0 is a fabulous release. The pretty new graphics set the
stage for significant improvements under the surface. All the time and
energy put into the package management system has paid off. Including
KDE 4 is not as big of a risk for openSUSE as it might be for other
major distributions because of the conservative and intuitive way KDE 4
is set up. openSUSE has given me hope that I could actually like KDE 4.
Like many point-zero releases, 11.0 does have bugs and rough edges. I
experienced a few, and others are likely to be reported in the upcoming
weeks. For the most part, the ones I encountered were insignificant,
not showstoppers.
Overall, 11.0 is a commendable release. The developers have done an
admirable job walking that fine line between stable and bleeding edge.
If you like the latest software or wish for a nice usable KDE 4, then
openSUSE 11.0 is for you. If you're completely happy with 10.3, well,
perhaps you might want to wait for further reports.
Build your own ultimate boot disc
You
turn on your trusty old Linux box, and things are going well as you
pass through the boot loader, until the disk check reveals that your
hard drive partition table is corrupt, and you are unable to access
your machine. You need a good rescue disk -- and the best way to get
one is to create your own.
You can
customize an Ubuntu 8.04 Hardy Heron live CD to make a good bootable
utilities disk by adding and removing packages from the standard
installation. Specifically, you can remove most of the Ubuntu
applications and install antivirus, a partition recover tool, a few
disk utilities, and a rootkit checker, among other things. I'm going to
create the live CD within an Ubuntu installation, but the directions
should work for most Debian-based operating systems, and can be easily
ported elsewhere. This guide largely follows the community documentation article
on the Ubuntu customization process, which is a good place to look for
more advanced information and troubleshooting support, while the livecdlist.com wiki is the best place to look for customized directions.
To create and use the Ubuntu-based boot CD, you'll need a computer
with at least 3GB of disk space and 512MB RAM. 1GB of swap is
recommended, though I did it with 512 MB.
Create the live CD environment
The first step is to download
the Ubuntu 8.04 live CD ISO file for your system type. You can get it
from the Web site, or you can use wget on the command line:
wget -v http://releases.ubuntu.com/hardy/ubuntu-8.04-desktop-i386.iso
To work with the image, you'll need to install a few packages to support the squashfs filesystem format, and mkisofs, the utility to create ISO images. On Ubuntu, you can install them with the command sudo apt-get install squashfs-tools mkisofs.
Now, load the squashfs module, then copy, mount, and extract the contents of the ISO file in order to customize the contents:
sudo modprobe squashfs
mkdir rescue
mv ubuntu-8.04-desktop-i386.iso rescue
cd rescue
mkdir mnt
sudo mount -o loop ubuntu-8.04-desktop-i386.iso mnt
mkdir extract-cd
rsync --exclude=/casper/filesystem.squashfs -a mnt/ extract-cd
mkdir squashfs
sudo mount -t squashfs -o loop mnt/casper/filesystem.squashfs squashfs
mkdir edit
sudo cp -a squashfs/* edit/
You'll want to customize the CD in a chroot environment. Chroot changes the root directory of the environment, allowing you to access the files and applications inside the CD directly, which you must do in order to use tools like apt-get. In order to use a network connection inside the chroot, which you'll probably want to do to add new packages, you'll need to copy in the hosts and resolv.conf files to configure your network settings. You can achieve this with the following:
sudo cp /etc/resolv.conf edit/etc/
sudo cp /etc/hosts edit/etc/
Once you've completed these steps, you can start to work inside the live CD. Mount the live CD to the edit/dev mountpoint, then change your root directory into the newly mounted volume. You'll need to mount the /proc and /sys volumes to work with the kernel, and export your settings to avoid locale and GPG problems later on:
sudo mount --bind /dev/ edit/dev
sudo chroot edit
mount -t proc none /proc
mount -t sysfs none /sys
export HOME=/root
export LC_ALL=C
Free space by removing unneeded packages
You can configure the packages that are included with the live CD using apt-get or Aptitude. You'll want to free up some space to add the rescue applications; even though the data is compressed, all of it needs to fit on a 700MB CD or on a higher-capacity DVD. You can remove packages and applications that aren't useful for recovery. I chose to remove the OpenOffice.org suite, the GNOME games set, Ekiga, Ubiquity, Evolution, and the GIMP, saving around 200MB. If you are comfortable working in a command-line-only environment, you might want to get rid of GNOME and Xorg; if you do that, you need not install GParted and the other graphical tools in the next section. In any case, the goal is to get rid of large applications. To sort all of the installed packages by size, run the following command in the chrooted environment:
dpkg-query -W --showformat='${Installed-Size} ${Package}\n' | sort -nr | less
You can use apt-get to remove a package. Use it with the --purge argument to get rid of configuration files. The sudo command won't work in the chroot, and therefore should be omitted:

apt-get remove --purge package-name
Prebuilt Linux rescue CDs

You don't need to build a custom rescue disk to get a great bootable utility CD. Here are a few prebuilt rescue CDs you can try.

- Parted Magic (http://partedmagic.com/) -- This 45MB boot CD uses GParted, the GNOME partition editor, to handle partition table management for an extensive list of filesystems, including ext2/3, NTFS, and HFS+. Parted Magic uses the Xfce desktop environment to provide a variety of tools, including Firefox, Thunar, and ISO tools. It also has a USB version to use from a thumb drive.
- SystemRescueCd (http://www.sysresccd.org/) -- The 191MB CD features partition, archive, and networking tools, along with a slew of editors and file browsers. This is probably the easiest system boot CD, and is recommended for less advanced Linux users. It also has a rootkit checker, virus scanner, and CD burning utilities. It includes an X interface through Xfce.
- Trinity Rescue Kit (http://trinityhome.org/) -- The 129MB Trinity Rescue Kit is designed for the rescue and recovery of Windows machines, but it works for Linux as well. It includes a few virus scan applications, a Windows password reset tool, Samba, SSH, rootkit removal tools, and partition and backup tools. It is based on Mandriva Linux.
Add rescue applications
Once you have removed all of the unneeded applications from the live CD you can start to add rescue and recovery applications. Generally, rescue CDs include a variety of disk utilities and security tools, as well as networking tools to find support and access outside machines. You may not want all of the applications I mention, and you can add some that I don't. This is your personal boot CD, and should be configured as you see fit. For ideas about what to include on your CD, you might want to check out some of the prebuilt rescue distributions mentioned in the sidebar.
You can install packages from the repositories using apt-get, but you must add the multiverse repository to your /etc/apt/sources.list file:
deb http://us.archive.ubuntu.com/ubuntu/ hardy main multiverse
deb-src http://us.archive.ubuntu.com/ubuntu/ hardy main multiverse
A disk partition tool is the staple of a mature boot disk. Fortunately, the Ubuntu live CD comes with GParted (http://gparted.sourceforge.net/), the GNOME Partition Editor, so adding a package isn't required. If you chose to forgo a graphical environment, you should make sure that parted is installed instead to handle partition tables from the command line. If you accidentally delete a partition, installing a program like testdisk (http://www.cgsecurity.org/wiki/TestDisk) can help you recover it, as well as provide a few other basic disk tools. If you are using the ext2 filesystem type and you accidentally delete a file, you'll find the e2undel (http://e2undel.sourceforge.net/) package helpful in recovering it. If you need to copy an entire partition from a dying disk, or just want to make a backup, partimage (http://www.partimage.org/Main_Page) is the way to go. You can also use it to restore a partition from a previously made backup.
If you plan to use this disc with Windows machines, you will want to install antivirus and rootkit tools. Clamscan (http://www.clamav.net/) provides a quick and easy virus scan with a command-line-based update tool. Chkrootkit (http://www.chkrootkit.org/) is a scanner that finds and removes rootkits that could be hiding in your computer. You can use sleuthkit (http://www.sleuthkit.org/) to conduct analysis of your filesystem and browse through hidden files.
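Inside the chroot, pulling in the tools mentioned above might look like this (a sketch; the package names are the ones Hardy's repositories use, and you can trim the list to taste):
apt-get update
apt-get install testdisk e2undel partimage clamav chkrootkit sleuthkit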
After you finish adding packages, clean up your temporary data and unmount the environment:

apt-get clean
rm -rf /tmp/*
rm /etc/resolv.conf
umount /proc
umount /sys
exit
sudo umount edit/dev
Now, regenerate the manifest (which is basically a list of installed packages) and copy it into the correct directory:

chmod +w extract-cd/casper/filesystem.manifest
sudo chroot edit dpkg-query -W --showformat='${Package} ${Version}\n' > extract-cd/casper/filesystem.manifest
sudo cp extract-cd/casper/filesystem.manifest extract-cd/casper/filesystem.manifest-desktop
sudo sed -i '/ubiquity/d' extract-cd/casper/filesystem.manifest-desktop
Compress the filesystem to squeeze it onto a disc:

sudo rm extract-cd/casper/filesystem.squashfs
sudo mksquashfs edit extract-cd/casper/filesystem.squashfs -nolzma
And finally, create the ISO file:

cd extract-cd
sudo mkisofs -r -V "$IMAGE_NAME" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ../ubuntu-8.04-desktop-i386.iso
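Note that mkisofs reads the volume label from $IMAGE_NAME via the -V flag; if you haven't set that variable, define it before running the command above (the label text is your choice):
export IMAGE_NAME="Ubuntu 8.04 Rescue"   # volume label passed to mkisofs -V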
Once the image file is created, you need to burn it to a disc. You can do that pretty easily with K3b or Brasero. If you want, you can do it from the command line:

cdrecord dev=/dev/cdrom ubuntu-8.04-desktop-i386.iso
Once the CD is finished burning, you should be able to put it into
your optical drive and boot into the environment you just created.
This should give you more than enough information to start building
your ultimate custom rescue CD. Add the packages and tools you need,
and hopefully you'll never be at a loss the next time your computer has
a problem during startup.
Don't forget the text editor
Text editors are important for many tasks, from editing configuration files, nudging cron jobs, and manipulating XML files to quickly pushing out a README. Luckily, there are a number of interesting editors available. Here's a brief introduction to nine intriguing choices. While some may be better suited to certain tasks than others, no one tool is best for all tasks. Try them all and use the ones you like best.
vi
Old favorite vi (or one
of its variants, such as Vim or Elvis) is available on most *nix
systems. If you are a system administrator moving from one *nix system
to another, the one reliable fact is that vi will work, macros
and all. Once you have learned the keystrokes, swapping words at the
boundaries, replacing sections of text, or traversing a large
file with vi is efficient, fast, and predictable. However, its initial
learning curve is somewhat steep, and there is no real GUI.
Gedit and Kate
Gedit (see figure 1) is a small and
lightweight text editor for the GNOME desktop, and the default text
editor for Ubuntu. An excellent tool with syntax highlighting for a
wealth of scripting and programming languages, it allows for extension
via plugins (figure 2) and does most tasks efficiently, without fuss.
The editor is modern in design, with a tab per open file, thus allowing
for easy cut and pasting between documents. The interface is
uncluttered and somewhat configurable via Edit -> Preferences, for such attributes as enabling line numbering and converting tabs to spaces.
You can also run Kate (KDE Advanced Text Editor) under the GNOME desktop. However, you will have to install its package with a command like sudo apt-get install kate-plugins, which will also install some extra plugin-enabled functionality. Kate has a slightly busier interface than Gedit (figure 3),
and to use tabbing between documents, you must activate the feature via
enabling the correct plugin. But Kate is significantly more
configurable than Gedit, exposing more of its innards as preferences.
An immediately helpful feature is the ability to hide code that is
within a certain scope. For example, to hide all the code within a
foreach statement, double-click on the offending line. This is a
significant help for uncluttering verbose scripting text. Also, under
the Tools menu, you can change the end of line type to switch between
Unix, DOS, and Mac, thus avoiding subtle issues in your text later.
Both Kate and Gedit support quick ad-hoc editing of numerous
scripting and programming languages. They are both excellent editors
for a variety of tasks.
TEA and Emacs
Emacs and TEA
are more complex and configurable than Gedit and Kate, with a much
wider scope of potential abilities. If you want to work within a single
environment, including sending mail, then these adaptive tools have the power to let you do just that.
TEA (figure 4)
is a compact, configurable, and function-rich editor that takes up only
around 500KB of memory. TEA provides a decent text editor, with markup
support for LaTeX, DocBook, Wikipedia, and HTML. It does not provide
any syntax highlighting, but does provide an extremely basic project
environment for compiling code.
Thankfully, TEA also contains a delightfully named crapbook (read: notes holder) for storing temporary text. The editor provides a spell checker and statistics for documents, and therefore sits comfortably between an office suite and a plain editor. Other functionality includes a file browser and a calendar. Because the editor keeps its compact size by relying heavily on external tools, the Help menu offers a well-thought-out self-check command that, on activation, reports any missing dependencies.
You can extend TEA by editing text files that expand specific features. For example, to add your own command to the Run menu, open the ext_programs text file via File -> Manage utility Files -> External programs to add the option xterm, which activates -- yes, you guessed it -- an xterm window. Add and save the text on a new line:
xterm=xterm &
Emacs (figure 5)
is powerful, feature-rich, and configurable. This tool has a long
history reaching back as far as 1976. Originally written by Richard
Stallman and Guy Steele, Emacs split into two main branches, XEmacs and Emacs, in 1991. The functionality in both branches is comparable.
Having a long and renowned history implies fitness of purpose and core
stability.
Emacs is not only a text editor but also an interpreter for Emacs
Lisp, an extension of the Lisp programming language. Elisp makes
scripting relatively easy. If you are a power user, you can tweak the
.emacs configuration file (or whatever your local equivalent is) to,
for example, add extra menus.
People have written a large number of modes in the Elisp language. A mode is an extension, modification, or enhancement. For
example, one mode may be for SQL development, and another for Perl
programming. The main Emacs wiki details the most up-to-date information and lists a full set of potential modes.
Emacs' default GUI is succinct and contains much functionality beyond editing. You can navigate files, send out email (if configured correctly), and perform specific tasks such as debugging, patching, and creating diffs with a few keystrokes.
The software's documentation and international language support is superb, and the editor includes an online tutorial.
Gaining full mastery of Emacs, even for the cleverest among us, requires patience and time. Internalizing the use of buffers and memorizing the Ctrl and Alt keystroke combinations is a chore, but if you perfect it, expect massive gains in efficiency.
Leafpad, Mousepad, and Medit
If you are looking for a simple editor that does the bare minimum, then either Leafpad or Mousepad
will fulfill the basics. They look the same and allow for word wrap,
line numbering, auto indent, and a choice of fonts, and not much else.
Medit is a
straightforward text editor with syntax highlighting and tabbed panes.
To add an entry to its configurable tool menu (figure 6) select
Settings -> Preferences then highlight the Tools option. Clicking on
the new item icon (the picture of a document with the orange circle at
the bottom center) activates a dialog. Change the name of the item from
"new command," then add the entries displayed in figure 5. You will
then have a new tool that lists in the currently selected document all
the running processes (figure 7). For the devotees, Medit has its own
scripting language, called mooscript.
The editor also has a well-placed, expandable, no-fuss file selector that is readily available via a click on the right side of the main window.
SciTE
SciTE, the Scintilla-based
text editor, offers some of the features of a programmer's interactive
development environment. It supports tab panes, syntax highlighting,
and code folding, and goes a solid step further for programmers. For example, on opening a Perl file, or a file in numerous other languages, you can check a script's basic validity via the menu option Tools -> Check syntax. That displays a second window (figure 8) with the gruesome details.
SciTE presents a no-fuss, easy-to-learn approach to controlling a scripting environment. A development tool like Eclipse will give you more features and adaptability, but also a steeper learning curve.
Final comments
This article covers only a fraction of the available text editors
for the Linux desktop. If I have missed your favorite, share your
opinion and place a link in the comments section for this article.
Desktop Linux strategies for marketplace success
May 03, 2008 (2:00:00 PM)
By: Carlton Hobbs
What strategy is needed to really spread desktop Linux to average home users? Here are some ideas that just might work.
Journalist Steven J. Vaughan-Nichols argues that
Linux businesses, for the most part, don't do marketing. I think they're extremely foolish not to spend any money on it, but there it is.... Like the Linux companies, many of them were sure that they didn't need to market themselves. Like Linux companies, they thought word of mouth was enough.... Well guess what: it's not. Without marketing, no one from the outside looking in can tell one Linux from another. They just see a confusing mish-mash of names, and unless they're already really motivated, they're going to start turning off from Linux at the very start.
I argue almost the opposite. A large part of mainstream media marketing, advertising, and branding is a means to get name recognition at a very superficial level. Its main targets are people who make superficial buying decisions, and for the right products, this works. Why buy name brand Tylenol vs. generic acetaminophen, name brand cereal, or a thousand other identical products that come off the same assembly line but use different packaging at different prices? From the perspective of the thrifty, the main answers are ignorance and brand recognition.
Of course, not all marketing is to compete with effectively identical products. Consider the American beer industry as a major marketing powerhouse with a few similarities to the Windows vs. Linux market. The major American breweries formulated modern beers after Prohibition to appeal to people who didn't like the taste of beer, and as a side effect the major brewers accepted, these beers taste bad to beer connoisseurs. The post-Prohibition era, even to this day, retains elements of a cartelized liquor distribution industry designed to make it difficult and expensive to compete with the major breweries, such that there have been no new domestic majors in decades. The rebirth of real beer in America was through microbreweries that have small to non-existent marketing budgets. They rely on beer connoisseurs who communicate through beer fan reviews, word of mouth, willingness to experiment, and seeking out the minority of stores that actually carry microbrew and local beers. Beer commercials for microbrews about sports and sexy women would not get many beer drinkers to seek out good beer that isn't already easy to find. Such commercials are just for "all beer is beer" drinkers who are susceptible to brand association marketing and herd opinion.
This doesn't mean that high-cost marketing is innately wrong or bad. It means that if you can increase the marginal sales of your high-profit-per-sale product to people who make quick decisions based on brand recognition, then your marketing expenses were a good investment, but otherwise not. Unfortunately for Linux companies, desktop Linux is a very low profit per "sale" product that is not an impulse choice off a shelf of interchangeable consumer goods. As Red Hat learned years ago, the shrink-wrapped box on a store shelf will not change the current OS market.
So if word of mouth and near-zero-budget advertising are our main prospects, then perhaps what is needed is a better person-to-person strategy. Fortunately, there is definite room for improvement here. One major barrier to entry is the lack of Linux preinstallation, along with the occasional need for extra expertise with compatibility issues. Desktop Linux must resolve these challenges partly through its internal advantage of a strong community, through strategic and expansionary networking, and by seizing the big opportunity presented by the massive number of PCs that sit collecting dust while their owners think they will upgrade sometime, someday.
Desktop Linux must focus on local communities for recruiting the next wave of users and evangelists. Ubuntu has the right idea with its LoCo initiative. However, to get really local and networked, a distro-centric local community is not the most efficient. If local Ubuntu, Debian, Slackware, etc. users never meet, they will forfeit great networking opportunities. There needs to be local GNU/Linux/FOSS communities with broad ranges of software experience, occupations, contacts, and distro preferences. Fortunately, many already exist, and there is at least one list where people can find groups near them. Linux promoters must recognize face-to-face personal interaction as a primary means for strategic growth of desktop Linux.
Local free software organizations need to be able to offer free Linux installation and encourage people to reuse or donate computers that would run poorly with current Windows systems. Certain groups are naturally good targets to recruit, and they may even join as recruiters themselves. Decentralist political groups, neighborhood associations, Parent-Teacher Associations, and other educational organizations are intelligent, low-budget groups. College groups, homeschool groups, agriculture co-ops, churches, and religious groups are all great places to find people who have spare computers to reinvigorate or donate, or who would be willing to have a computer set to dual boot. In general, groups that depend on donations or have small budgets are looking for ways to minimize unnecessary costs. Some of their members would likely be radicalized when they learn how little is required to show others how to switch to Linux.
Local free software organizations need a quick and easy tool to communicate what the GNU/Linux OS can do. Perhaps the best method would also serve as a means of introduction. An organization could create business cards that provide a brief description of the local Linux group, its Web address, and purpose. The card should be visually impressive and colorful. They can let people know that the card itself was designed with only free software, whether it be OpenOffice.org, gLabels, Inkscape, Scribus, or some combination that anyone could easily get through Linux.
Is there a model for such success without advertising budgets? Ask yourself how you heard about and started using Google. Was it through advertising? Google became a giant because the barrier to trying a new search engine was so low and the value quickly obvious. It was used by almost everyone before anyone saw a Google advertisement. If Linux advocates can do the same, then Windows will be in trouble. I don't see how this can happen without active local free software groups that seek out growth, and success would likely be in proportion to the efficiency of local groups. If some are more successful than others, then the more successful local methods could be adopted elsewhere.
All the experience and networked knowledge of local free software cooperatives might be enough that small businesses would hire the local groups to upgrade their computer systems to Linux for real money. Local groups could even have contracts with particular distros that provide paid business support to receive some of the profit. Local cooperatives would not likely make much money, but maybe enough on occasion to purchase a few rounds of quality microbrews to celebrate a few more people unshackled from Goliath-soft. Very few people will get rich with Linux, but a lot of people could be meaningfully less poor with it, and free-as-in-freedom might actually buy the enjoyment of a few free-as-in-beers.
Read in the original layout at: http://www.linux.com/feature/134126