Thu, 28 Apr 2022
Hawai’i Vacation Part One
Jan and I went to Hawai’i in April to commemorate 30 short, blissful years together. In addition, this visit means I have now collected the complete set of 50 States.
I took a bunch of pictures, but Jan's look better.
Here are some of mine:
Cool rocks on the beach just north of Kona.
Kilauea crater at night.
The public shower across the street from the beach near our condo.
Sunset from Sammy Choy's balcony.
The entrance to the Thurston lava tube near Kilauea crater.
Sat, 05 Dec 2020
This Is a Test
This is only a test. Had this been an actual test, the title would have been followed by content.
Tue, 07 Apr 2020
I Can’t Find My Keys!
Troubleshooting Encrypted Email Transport on Postfix
Encrypted email transport from server to server and between servers and clients is useful. Although it is not fully end-to-end, this level of encryption prevents snooping by a casual observer and — as part of a defense-in-depth strategy involving a variety of other configuration options — helps an administrator ensure that unauthorized email is not accepted or forwarded, while still permitting users to send and receive mail when they are away from the local network.
However — like almost everything of value — ensuring an encrypted channel means an increase in effort and complexity. One drawback to encrypting your SMTP, POP, and IMAP traffic is that you must now manage your encryption keys.
Recently, I encountered a series of email alerts sent from my mail server that contained text like this:
Transcript of session follows.
Out: 220 mail.example.com ESMTP Postfix (Debian/GNU)
In: EHLO frm1-zendable.com
Out: 250-mail.example.com
Out: 250-PIPELINING
Out: 250-SIZE 1311000000
Out: 250-ETRN
Out: 250-STARTTLS
Out: 250-ENHANCEDSTATUSCODES
Out: 250-8BITMIME
Out: 250 DSN
In: STARTTLS
Out: 454 4.7.0 TLS not available due to local problem
In: QUIT
Out: 221 2.0.0 Bye
For other details, see the local mail logfile
and this:
Transcript of session follows.
Out: 220 mail.example.com ESMTP Postfix (Debian/GNU)
In: EHLO scanner.sslsonar.org
Out: 250-mail.example.com
Out: 250-PIPELINING
Out: 250-SIZE 1311000000
Out: 250-ETRN
Out: 250-STARTTLS
Out: 250-ENHANCEDSTATUSCODES
Out: 250-8BITMIME
Out: 250 DSN
In: STARTTLS
Out: 454 4.7.0 TLS not available due to local problem
In: RSET
Out: 430 4.7.0 Must issue a STARTTLS command first
Session aborted, reason: lost connection
For other details, see the local mail logfile
Following the advice in the messages, I then went digging through my email logs looking for clues. Specifically, I was looking for warnings that involved TLS and Postfix; and I didn’t want a gazillion of them, so I limited my view to only the last 100. For purposes of this blog post, assume anything referring to “example.com” is an arbitrary domain name and anything referring to “localhost” is whatever machine name you’re working on.
root@localhost:~# grep warning /var/log/mail.log | grep -i tls | tail -100
This resulted in a screenful of entries like these:
Apr 2 07:00:19 localhost postfix/smtpd[18925]: warning: cannot get RSA private key from file "/etc/ssl/certs/san.example.com.key": disabling TLS support
Apr 2 07:00:19 localhost postfix/smtpd[18925]: warning: TLS library problem: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch:../crypto/x509/x509_cmp.c:295:
Apparently, I had told Postfix to use public and private encryption keys that were not actually paired.
As I had recently rotated my keys (the old ones had expired), and the timeframe for the appearance of the emails corresponded nearly exactly with that event, I thought I knew where the issue lay.
First, a teensy bit of background. About halfway through the validity period of my last set of certificates, I decided that it would be easier to manage a single chain of certificates using a Subject Alternative Name rather than a bunch of individual certificates for each host that needed a certificate. In other words, instead of separate certs for “mail.example.com” and “www.example.com” and “secure.example.com”, I would have one certificate for all three that could then include values for each subdomain. Logically, I would call this Subject Alternative Name certificate “san.example.com”. Simple.
Of course, what I didn’t do is request the certificate properly using a similarly-named CSR. How requests for certificates work is beyond the scope of this blog post, but I have covered it before.
Further, when I had requested the san.example.com certificate I had not uniformly applied it to every service where it was needed. For example, I can use the same certificate for the Apache HTTP server, the Postfix SMTP server, and the Dovecot IMAP server; but I had only deployed it in Apache.
Oops.
As a result, I had a bunch of CSR files, a bunch of certificate files, and a bunch of chain-of-authority files scattered in several locations across my servers, with a variety of similar-but-not-quite-the-same names. Now I needed to sort this out. This discussion will revolve around remediating the Postfix problem; but similar steps will work for Apache and Dovecot if you substitute the appropriate configuration files.
First, I needed to identify the configuration files where Postfix was calling the certificates. On Debian-derived systems these are generally in either /etc/postfix/main.cf or /etc/postfix/master.cf — and sometimes both places:
root@localhost:~# egrep "[Cc]ert|.key" /etc/postfix/ma*cf
/etc/postfix/main.cf:#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
/etc/postfix/main.cf:#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
/etc/postfix/main.cf:smtpd_tls_cert_file=/etc/ssl/certs/example.com.crt
/etc/postfix/main.cf:smtpd_tls_key_file=/etc/ssl/private/san.example.com.key
/etc/postfix/main.cf:#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
/etc/postfix/main.cf:#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
/etc/postfix/main.cf:smtpd_tls_cert_file=/etc/ssl/certs/example.com.crt
/etc/postfix/main.cf:smtpd_tls_CAfile=/etc/ssl/certs/mail.example.com.sectigo.crt
/etc/postfix/master.cf: -o smtpd_tls_cert_file=/etc/ssl/certs/example.com.crt
/etc/postfix/master.cf: -o smtpd_tls_CAfile=/etc/ssl/certs/mail.example.com.sectigo.crt
Yes, I had configured them in both. Not a best practice, but a discussion for another time. Nevertheless, I now had to see what certificate-related files were in the various directories I had identified:
root@localhost:~# ls -la /etc/ssl/private/
total 76
drwx--x--- 2 root ssl-cert 4096 Sep 1 2019 .
drwxr-xr-x 4 root root 4096 Oct 3 09:15 ..
-rw-r--r-- 1 root root 1098 Aug 31 2019 example.net.csr
-rw------- 1 root root 1704 Aug 31 2019 example.net.key
-rw------- 1 root root 887 Mar 12 2009 example.net.key.20090312
-rw-r--r-- 1 root root 1123 Aug 31 2019 mail.example.com.csr
-rw------- 1 root root 1704 Aug 31 2019 mail.example.com.key
-rw-r--r-- 1 root root 1094 Aug 31 2019 mail.example.net.csr
-rw------- 1 root root 1704 Aug 31 2019 mail.example.net.key
-rw-r--r-- 1 root root 1387 Sep 1 2019 san.example.com.csr
-rw------- 1 root root 1704 Sep 1 2019 san.example.com.key
-rw-r--r-- 1 root root 1232 Sep 1 2019 san.example.net.csr
-rw------- 1 root root 1704 Sep 1 2019 san.example.net.key
-rw------- 1 root root 891 Nov 27 2010 ssl-cert-example.net.pem
-rw-r----- 1 root ssl-cert 1704 Jul 5 2019 ssl-cert-snakeoil.key
-rw-r--r-- 1 root root 1119 Aug 31 2019 www.example.com.csr
-rw------- 1 root root 1704 Aug 31 2019 www.example.com.key
-rw-r--r-- 1 root root 1090 Aug 31 2019 www.example.net.csr
-rw------- 1 root root 1708 Aug 31 2019 www.example.net.key
A few things to note:
- The permissions on the .csr files are 644 because that’s the way openssl creates them by default. Possibly this is bad practice. I will investigate further.
- The snakeoil key file was generated by the system installer when it was first built; just about every Debian system has a file like this. It’s considered unsafe to use in a production environment (thus the “snakeoil” designation) because it has insufficient entropy and the key may be predictable. (It can be regenerated; see the note after this list.)
- The remaining key file permissions are 600 because nobody should be able to read or modify system-wide private keys except root.
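About that snakeoil pair: on Debian-family systems it comes from the ssl-cert package. As a sketch (assuming that package is installed), it can be regenerated at any time with the package’s helper:
root@localhost:~# make-ssl-cert generate-default-snakeoil --force-overwrite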
Next, the contents of the other directory, looking only for relevant certificates and not the gazillions of other public certs that my system is prepared to trust:
root@localhost:~# ls -la /etc/ssl/certs/*example*
-rw-r--r-- 1 root root 4123 Aug 31 2019 /etc/ssl/certs/example.com.ca-bundle
-rw-r--r-- 1 root root 2370 Aug 31 2019 /etc/ssl/certs/example.com.crt
-rw-r--r-- 1 root root 4123 Sep 2 2019 /etc/ssl/certs/san_example_com.ca-bundle
-rw-r--r-- 1 root root 4123 Sep 1 2019 /etc/ssl/certs/san.example.com.ca-bundle
-rw-r--r-- 1 root root 2301 Sep 2 2019 /etc/ssl/certs/san_example_com.crt
-rw-r--r-- 1 root root 2369 Mar 11 00:39 /etc/ssl/certs/san.example.com.crt
-rw-r--r-- 1 root root 2301 Sep 1 2019 /etc/ssl/certs/san.example.com.crt.expired-2020-03-10
-rw-r----- 1 root root 1704 Sep 1 2019 /etc/ssl/certs/san.example.com.key
-rw-r----- 1 root root 5647 Mar 11 00:39 /etc/ssl/certs/san.example.com.sectigo.crt
Notes:
- The permissions on the final two files are 640, because the certificates only need to be read by the group, not by everyone.
- This directory also contains many dozens of links to files in /usr/share (at least on Debian); these files are generally browser-trusted certificates from third parties, and managed by your browser/browser manufacturer/the distro maintainers. Don’t mess with the links unless you are confident and competent.
Comparing the keys is much easier and more accurate if you let the computer do it. Fortunately, openssl provides a facility in conjunction with sha256sum to check whether various keys, certificates, and CSRs are compatible. Specify the file type, path, and format, and whether you want the public or private key material; pipe the output to the sha256sum command to get a fingerprint. If the fingerprints match, the keys belong together. If the fingerprints don’t match, try a different combination. This particular set of three took a few different combinations to reach:
root@localhost:~# openssl pkey -in /etc/ssl/private/foo.example.com.key -pubout -outform pem | sha256sum
e35337bf968e1f4ae93fae2a2f22d1c68dcb7c6a8d44f7659cbcbb6d78ae3232 -
root@localhost:~# openssl x509 -in /etc/ssl/certs/foo_example_com.crt -pubkey -noout -outform pem | sha256sum
e35337bf968e1f4ae93fae2a2f22d1c68dcb7c6a8d44f7659cbcbb6d78ae3232 -
root@localhost:~# openssl req -in /etc/ssl/private/foo.example.com.csr -pubkey -noout -outform pem | sha256sum
e35337bf968e1f4ae93fae2a2f22d1c68dcb7c6a8d44f7659cbcbb6d78ae3232 -
If I’d had more than four potential matches for any of my keys, I’d have scripted a loop to do a comparison instead of executing each test by hand. In an environment where you might have potentially dozens or hundreds of keys, that would be the approach to take.
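Such a loop might look like this — a minimal sketch, assuming unencrypted keys live in /etc/ssl/private and certificates in /etc/ssl/certs, with the usual .key and .crt extensions:
for key in /etc/ssl/private/*.key; do
    keyhash=$(openssl pkey -in "$key" -pubout -outform pem | sha256sum | cut -d' ' -f1)
    for crt in /etc/ssl/certs/*.crt; do
        crthash=$(openssl x509 -in "$crt" -pubkey -noout | sha256sum | cut -d' ' -f1)
        # Matching digests mean this private key and certificate are a pair
        [ "$keyhash" = "$crthash" ] && echo "MATCH: $key $crt"
    done
done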
Now to clean up the mess.
In case I am using this set of key/certificate/chain elsewhere, I don’t want to rename any of the components. That could potentially break other stuff. I could also go off on an investigation into other places they’re in use, but that would delay the fix for my mail server and perhaps send me down another rabbit hole. Instead, I solved one problem at a time by linking the existing chain-of-trust certificate file to a new name so that it’s clearer (to me at least) what it does:
root@localhost:~# ln /etc/ssl/certs/example.com.ca-bundle /etc/ssl/certs/mail.example.com.sectigo.crt
Now I have ten potentially relevant files instead of nine; but two of them are really the same file:
root@localhost:~# ls -la /etc/ssl/certs/*example*
-rw-r--r-- 1 root root 4123 Aug 31 2019 /etc/ssl/certs/example.com.ca-bundle
-rw-r--r-- 1 root root 2370 Aug 31 2019 /etc/ssl/certs/example.com.crt
-rw-r--r-- 2 root root 4123 Aug 31 2019 /etc/ssl/certs/mail.example.com.sectigo.crt
-rw-r--r-- 1 root root 4123 Sep 2 2019 /etc/ssl/certs/san_example_com.ca-bundle
-rw-r--r-- 1 root root 4123 Sep 1 2019 /etc/ssl/certs/san.example.com.ca-bundle
-rw-r--r-- 1 root root 2301 Sep 2 2019 /etc/ssl/certs/san_example_com.crt
-rw-r--r-- 1 root root 2369 Mar 11 00:39 /etc/ssl/certs/san.example.com.crt
-rw-r--r-- 1 root root 2301 Sep 1 2019 /etc/ssl/certs/san.example.com.crt.expired-2020-03-10
-rw-r----- 1 root root 1704 Sep 1 2019 /etc/ssl/certs/san.example.com.key
-rw-r----- 1 root root 5647 Mar 11 00:39 /etc/ssl/certs/san.example.com.sectigo.crt
After a quick edit to /etc/postfix/master.cf and /etc/postfix/main.cf to replace the offending key entries, followed by a reload of the Postfix configuration, I’m back in business:
root@localhost:~# service postfix reload
Now if I repeat my original search for the configurations, I can see the new ones alongside the (commented out) originals:
root@localhost:~# egrep "[Cc]ert|.key" /etc/postfix/ma*cf
/etc/postfix/main.cf:##smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
/etc/postfix/main.cf:##smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
/etc/postfix/main.cf:##smtpd_tls_cert_file=/etc/ssl/certs/example.com.crt
/etc/postfix/main.cf:#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-example.com.crt
/etc/postfix/main.cf:smtpd_tls_cert_file=/etc/ssl/certs/example.com.crt
/etc/postfix/main.cf:##smtpd_tls_key_file=/etc/ssl/private/example.com.pem
/etc/postfix/main.cf:#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-example.com.pem
/etc/postfix/main.cf:smtpd_tls_key_file=/etc/ssl/private/mail.example.com.key
/etc/postfix/main.cf:#smtpd_tls_key_file=/etc/ssl/private/www.example.com.key
/etc/postfix/main.cf:smtpd_tls_CAfile=/etc/ssl/certs/mail.example.com.sectigo.crt
/etc/postfix/master.cf: -o smtpd_tls_cert_file=/etc/ssl/certs/example.com.crt
/etc/postfix/master.cf: -o smtpd_tls_key_file=/etc/ssl/private/mail.example.com.key
/etc/postfix/master.cf: -o smtpd_tls_CAfile=/etc/ssl/certs/mail.example.com.sectigo.crt
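Before declaring victory, it’s worth confirming that the daemon will now actually complete a handshake. A standard way to exercise STARTTLS by hand is with openssl s_client:
root@localhost:~# openssl s_client -connect mail.example.com:25 -starttls smtp
If the key and certificate now match, this prints the negotiated certificate chain and leaves you at a live SMTP prompt instead of the dreaded "454 4.7.0 TLS not available due to local problem".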
Although this is not an ideal situation, it is an improvement that better secures my email and makes the error messages about TLS failures go away.
posted at: 18:38 | permanent link to this entry
Thu, 26 Nov 2015
Left Holding (Open) the Bag
Filesystem inode Trouble Part II
You may recall that in January 2014 I wrote a piece about inodes and filesystems behaving badly. At the behest of a colleague (Hi, Josh!), this is the exciting conclusion to that saga.
When last we left the action, my free disk space looked like this:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.9G 7.6G 1.8G 81% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/sda1 485M 70M 390M 16% /boot
/dev/sda5 4.0G 808M 3.0G 22% /tmp
/dev/sda7 53G 670M 50G 2% /var
/dev/sda3 5.0G 216M 4.5G 5% /var/log
Yet, ordinary users were unable to log in, and I could not create new files on the root (“/”) filesystem.
To summarize thus far:
- My users can’t log in.
- I have disks, which the OS has identified.
- I have filesystems on the disks.
- I have mount points for the filesystems.
- I probably even have mounted those filesystems.
- I was able to rebuild the mounted filesystem table.
After digging about and recreating the filesystem description table, I determined that the system had run out of inodes:
[root@localhost ~]# df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
devtmpfs 982K 632 982K 1% /dev
tmpfs 985K 1 985K 1% /dev/shm
/dev/sda2 640K 640K 0 100% /
devtmpfs 982K 632 982K 1% /dev
/dev/sda1 126K 50 125K 1% /boot
/dev/sda5 256K 96K 161K 38% /tmp
/dev/sda7 3.4M 3.4K 3.4M 1% /var
/dev/sda3 320K 177 320K 1% /var/log
tmpfs 985K 1 985K 1% /dev/shm
tmpfs 985K 1 985K 1% /dev/shm
tmpfs 985K 1 985K 1% /dev/shm
A quick reminder about inodes: Linux (and other Unix and Unix-like operating systems) in their default configuration use inodes to keep track of what file goes where on the system’s disks, and to keep metadata about the file (user and group ownership, creation time, read and write permissions, etc.). Think of inodes as the index for the file system: one inode for each file (even a file hard-linked into multiple locations in your filesystem uses just one inode; each link is simply another name pointing to it). Unfortunately, there are a finite number of inodes available (varying from filesystem to filesystem and configuration to configuration, but typically numbering in the hundreds of thousands), and when they run out — even if the system has more raw space available — it can’t create any more files. Moreover, the number of inodes a filesystem has cannot be (easily) changed after it is created.
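You can see this sharing directly with ls -i, which prints each file’s inode number. A hypothetical demonstration (the inode number here is invented):
[root@localhost ~]# touch original
[root@localhost ~]# ln original hardlink
[root@localhost ~]# ls -i original hardlink
524291 hardlink  524291 original
Both names report the same inode; the data is only freed when the last link is removed.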
Fortunately, there is a simple solution!
Unfortunately, I no longer have access to the system that I was troubleshooting when I wrote my earlier post. However, the fix is pretty universal. With a little script-fu, we can find out how many files each directory in the filesystem has. Once we have identified the quantity and location, we can determine whether there is any particular reason to keep those files. Most of the time in a situation like this, some runaway process has been spewing data that doesn’t get cleaned up properly, either because the process never terminates or because the process isn’t properly coordinating with tools like logrotate. If the data being spewed is a bunch of small files, we can then simply delete the files.
To start, then:
echo '#!/bin/sh' > /tmp/files_`date +%F`
echo 'echo $(ls -a "${1}" | wc -l) "${1}"' >> /tmp/files_`date +%F`
chmod 700 /tmp/files_`date +%F`
find / -mount -type d -print0 | xargs -0 -n1 /tmp/files_`date +%F` | sort -n | tail -10
This will:
- Generate a list of directories in the root filesystem;
- Count the number of files in each directory;
- Spit out the list with two columns:
- the left column with the file count,
- the right column with the directory name;
- Sort the two-column list by the file count value;
- If the list is more than 10 lines, only show you the 10 with the most files.
Usually, the directory with the most files is your culprit. You’ll want to verify that, then determine whether you should just delete the oldest umpteen files in that directory, all the files in the directory, or whatever other subset is appropriate. You’ll also want to correct whatever process is generating the files. Short of rewriting the program that spewed so much data, you can fix this a number of ways. Three straightforward methods are:
- Set up a wrapper script that manages it properly,
- Create a logrotate script to clean it up, or
- Build a cron job that will periodically do that for you (sketched below).
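For instance, the cron approach could be as simple as this /etc/crontab entry. It’s purely illustrative: the spool directory and the seven-day retention period are hypothetical, so adjust both to your situation:
# Nightly at 03:15, delete files more than seven days old from the runaway directory
15 3 * * * root find /var/spool/myapp -xdev -type f -mtime +7 -delete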
Don’t forget to delete your temporary file:
rm -f /tmp/files_`date +%F`
Happy hunting!
posted at: 20:20 | permanent link to this entry
Mon, 27 Jan 2014
Network Neutrality Setback
Network Neutrality is an issue that I've been following for a long time.
The premise of Network Neutrality is that ISPs (like Cox, Comcast, and AT&T) may prioritize protocols (web traffic before email, or DNS lookups before BitTorrent) and manage bandwidth hogs on their networks however they feel is appropriate for their business. However, they shouldn't be allowed to prioritize one content provider over another (Netflix over YouTube, hallmarc.net over Facebook), or their own content over their competitors'.
Recently, a federal appeals court ruled that the FCC's Network Neutrality regulations were invalid because the FCC has not chosen to classify broadband Internet access the same way it classifies your home phone — as a telecommunications service with equal access to all. If the FCC reclassifies Internet access, Network Neutrality will be restored.
Although you may not particularly admire the ACLU, I urge you to sign their petition in support of restoration of Net Neutrality.
If you are unsure about whether to take my word for it, or have questions about the implications for society at large, please refer to these resources for more information:
- Wikipedia's article on Net Neutrality
- The Wall Street Journal's Net Neutrality coverage
- The Electronic Frontier Foundation's summary
- The FCC's current rulemaking and policy
As always, I am happy to answer any other questions you might have.
Thank you! I appreciate your thoughtful consideration.
Wed, 08 Jan 2014
I’ve Run out of Places to Put Stuff
Filesystem inode Trouble
When I want to find out how much space is available on a server, I use the df command, usually with a flag like -h for “human readable”. This provides me with a nice summary of space in the filesystems, in this case showing me that my root (“/”) filesystem has 1.8 GB available out of 9.9 GB total, for example:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.9G 7.6G 1.8G 81% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/sda1 485M 70M 390M 16% /boot
/dev/sda5 4.0G 808M 3.0G 22% /tmp
/dev/sda7 53G 670M 50G 2% /var
/dev/sda3 5.0G 216M 4.5G 5% /var/log
Sometimes, things go dramatically awry, however. For example, I recently encountered a situation where ordinary users were unable to log in to a host. I could only log in as root. This is generally a bad practice (and not everybody should have the power of root anyway), so I went about troubleshooting. Among the things I did was check whether any filesystems were full with the aforementioned df -h command.
And I got this output:
[root@localhost ~]# df -h
df: cannot read table of mounted filesystems
This is suggestive of a major problem. The system is running, obviously. And, this is good: it means that the system can read at least a couple of the filesystems directly. It just can’t summarize their status for me.
So, I look at the file that is supposed to contain the table of mounted filesystems:
[root@localhost ~]# cat /etc/mtab
(no output at all)
Then I look at the partition table (using fdisk -l), to see what the system thinks its disks look like:
[root@localhost ~]# fdisk -l
Disk /dev/sda: 80.5 GB, 80530636800 bytes
255 heads, 63 sectors/track, 9790 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d5e85
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 1370 10485760 83 Linux
/dev/sda3 1370 2022 5242880 83 Linux
/dev/sda4 2022 9791 62401536 5 Extended
/dev/sda5 2023 2545 4194304 83 Linux
/dev/sda6 2545 2800 2048000 82 Linux swap / Solaris
/dev/sda7 2800 9791 56156160 83 Linux
So far so good: this system knows it has a disk (/dev/sda) with partitions (sda1 through sda7); and it at least can identify the type of filesystems they contain.
Just in case any of the filesystems aren’t mounted, using mount -a I attempt to mount them all:
[root@localhost ~]# mount -a
mount: /dev/sda1 already mounted or /boot busy
mount: /dev/sda5 already mounted or /tmp busy
mount: /dev/sda7 already mounted or /var busy
mount: /dev/sda3 already mounted or /var/log busy
can't create lock file /etc/mtab~1605: No space left on device (use -n flag to override)
mount: devpts already mounted or /dev/pts busy
mount: sysfs already mounted or /sys busy
That looks mostly good; they’re already showing as mounted (or just busy, but that’s a rather improbable situation). However, I see the line that says can't create lock file /etc/mtab~1605: No space left on device (use -n flag to override), which worries me. Quite a lot.
Looking a little deeper, I try to see whether /etc/mtab (my mounted file system table file) even exists at all:
[root@localhost ~]# ls -l /etc/mt*
-rw-r--r--. 1 root root 0 Jan 3 09:20 /etc/mtab
It’s there, but has zero bytes! That means the file is empty. It should contain enough information to describe the mounted filesystems — always more than zero bytes.
To summarize thus far:
- My users can’t log in.
- I have disks, which the OS has identified.
- I have filesystems on the disks.
- I have mount points for the filesystems.
- I probably even have mounted those filesystems.
- But, before I can check the status of the filesystems, I’ll have to force the system to rebuild the mounted filesystem table.
Fortunately, because Linux has a virtual filesystem containing information about the current running environment kept entirely in system RAM (the /proc filesystem), using grep and I/O redirection I can export the contents of the known mounts file (/proc/mounts) into a new /etc/mtab file and try my df command again:
[root@localhost ~]# grep -v rootfs /proc/mounts > /etc/mtab
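The grep -v strips the rootfs entry, which describes the initial RAM filesystem and would otherwise show up as a duplicate mount of /. For the curious, a typical /proc/mounts line looks something like this, purely as an illustration:
/dev/sda2 / ext4 rw,relatime,data=ordered 0 0
The fields are the device, the mount point, the filesystem type, the mount options, and two legacy dump/fsck columns.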
Now I can see that my /etc/mtab file contains 1423 bytes:
[root@localhost ~]# ls -l /etc/mt*
-rw-r--r--. 1 root root 1423 Jan 3 09:30 /etc/mtab
Then I can check whether the system can tell me about the filesystems using df and the -h flag:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.9G 7.6G 1.8G 81% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/sda1 485M 70M 390M 16% /boot
/dev/sda5 4.0G 808M 3.0G 22% /tmp
/dev/sda7 53G 670M 50G 2% /var
/dev/sda3 5.0G 216M 4.5G 5% /var/log
It claims I’ve got plenty of space! Why, then, can I not use touch to create a file in the / directory, let alone log in as an ordinary user?
Possibly, because the inodes are all used up. But, “What are inodes?” you ask… Linux (and other Unix and Unix-like operating systems) in their default configuration use inodes to keep track of what file goes where on the system’s disks, and to keep metadata about the file (user and group ownership, creation time, read and write permissions, etc.). Think of inodes as the index for the file system: one inode for each file. Unfortunately, there are a finite number of inodes available (varying from filesystem to filesystem and configuration to configuration, but typically numbering in the hundreds of thousands), and when they run out — even if the system has more raw space available — I can’t create any more files; thus our current problem.
Fortunately, now that my mounted filesystem table has been rebuilt, I can check what the inode situation looks like using df and the -i flag:
[root@localhost ~]# df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
devtmpfs 982K 632 982K 1% /dev
tmpfs 985K 1 985K 1% /dev/shm
/dev/sda2 640K 640K 0 100% /
devtmpfs 982K 632 982K 1% /dev
/dev/sda1 126K 50 125K 1% /boot
/dev/sda5 256K 96K 161K 38% /tmp
/dev/sda7 3.4M 3.4K 3.4M 1% /var
/dev/sda3 320K 177 320K 1% /var/log
tmpfs 985K 1 985K 1% /dev/shm
tmpfs 985K 1 985K 1% /dev/shm
tmpfs 985K 1 985K 1% /dev/shm
Yup, out of inodes on the / filesystem. What to do?
Join us next time for the exciting conclusion!
posted at: 16:37 | permanent link to this entry
Fri, 03 Jan 2014
Take this job…
I/O Redirection
As previously discussed, sometimes I want the output of a command to go somewhere besides the screen right in front of me.
For example, I have a script running from cron — which has no screen to which it should send its output. On many Linux and Unix systems, it will instead generate an email, which is generally sent to the system administrator. She probably doesn’t want to see the output of my script, however. Especially if there are 50 users on the system, all of whom are sending script output to email. And we certainly don’t want the output to go to the great bit bucket in the sky… At least not until we learn about /dev/null.
Instead, I want both <STDOUT> and <STDERR> to go to a file. Earlier, I showed you how to send either <STDOUT> or <STDERR> to a file. However, I can also combine these into a single I/O redirect, like so:
ls > foo 2>&1
This redirects <STDOUT> to the file, then takes <STDERR> and points it at the same file descriptor as <STDOUT>, so both streams land in foo. (Order matters here: writing ls 2>&1 > foo instead would duplicate <STDERR> onto the screen before the file was attached, leaving errors on the terminal.) This is rather complicated to type, so more recent versions of some shells provide a shorter method:
ls &> foo
But what if I want my <STDOUT> and <STDERR> to go to two different files? Stay tuned…
Take this job…
I/O Redirection
Linux shells (and Unix shells before them) have three popular methods for passing data around: STDIN (standard input), STDOUT (standard output), and STDERR (standard error). To keep things simple, think of STDIN as your keyboard, STDOUT as your screen, and STDERR as, uh, your screen when something breaks. As you will see later, there are nuances — STDIN isn’t always the keyboard, nor are STDOUT and STDERR always your screen.
Let’s start with a simple reason for knowing what STDIN, STDOUT, and STDERR do. For example, sometimes I want the output of a command to go somewhere besides the screen right in front of me. Instead, I may want the output to go to a new file, fileout.txt, so I can read it later.
To do this, I would redirect the output from my command, like so:
ls foo > fileout.txt
That “>” means “take standard out from the previous command (ls foo) and redirect it to the named file (fileout.txt); if the file already exists, overwrite it with whatever output I give you”. It’s what’s called an I/O redirection operator. Other such operators are < (take the following file and give its contents to the preceding command) and >> (append the output of the previous operation to the following file; if the file doesn’t already exist, create it).
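A quick illustration of those two operators, with hypothetical file names:
ls /etc >> fileout.txt
wc -l < fileout.txt
The first command appends a listing of /etc to fileout.txt; the second hands fileout.txt to wc on its standard input, which counts the lines.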
STDIN, STDOUT, and STDERR have numbers, too. STDIN is 0, STDOUT is 1, and STDERR is 2. When using I/O redirection operators without an identifying number, the shell presumes that you want to reference STDOUT. However, you can always choose to explicitly use the numbers. Let’s say I want STDERR to be redirected to a file:
ls foo 2> fileerr.txt
After running the command, fileerr.txt would contain text like this:
ls: cannot access foo: No such file or directory
Presuming, of course, that a file named “foo” does not exist in the current directory.
Naturally, there are ways to combine redirection operators to make scripting output less verbose and avoid having your system administrator contact you about your abuse of the system.
Join us next time for another exciting episode!
posted at: 09:43 | permanent link to this entry
Wed, 04 Jul 2012
“Once Upon a Time…
…there was a magical place where it never rained.”
So goes the line from Holes, when Mr. Sir is explaining to the inmates of Camp Greenlake that they should not expect any reprieve from the oppressive heat.
“The end,” he finishes.
Moving to Las Vegas made me feel a lot like that… Except today, on July 4, 2012, it rained.
Although I have photographic evidence, I cannot upload it from my phone at the moment.
Wed, 25 Apr 2012
Playing Alaska Tourist (Part 1)
Having now wrapped up my project with GCI, I’m ready to have a few days of fun before I head back to the lower 48. (Wow! Don’t I sound just like an Alaskan?)
Two things I wanted to do before heading south were to visit the Alaska Aviation Museum and see some wildlife on the water. So, I drove south along the highway to Seward, Alaska’s only year-round ice-free port with access to the interior. There, I boarded the Orca Song for a tour around Resurrection Bay.
Out on the water, I saw sea lions,
A huge variety of birds, and the namesake of my transport, the highlight of the trip:
A pod of whales, including a very young orca, still juvenile tan instead of black and white. I apologize for the poor quality of the photos… The orcas were not cooperating, although the baby was so cute! The battery on my phone was about to die and getting these two shots was difficult as it was.
posted at: 21:12 | permanent link to this entry
Sat, 07 Apr 2012
Farewell Anchorage
My work here in Anchorage is about done, so this entry will be a dump of all the photos I haven’t been able to use in previous postings but still think are representative of my last three months.
The Tony Knowles trail down to the Cook Inlet is clear!
Look at those daredevils out there on the ice…
Sunset over the Cook Inlet:
The view up from Ship Creek to the downtown skyline:
A map of Anchorage as it was originally planned (from the Anchorage Museum):
Anchorage as it actually is:
I’m just kidding about that… This is where they store the signage for the farmers’ market that goes up once all the snow has melted.
Here’s the Statehood monument overlooking Ship Creek:
Last but not least, the best picture I could get of the Northern Lights:
Unfortunately, the lights of Anchorage are too bright when reflecting up from the snow to see much of a view of the Aurora Borealis. My chief regret from my time in Anchorage is that I was unable to get more than a smudgy glimpse of the Northern Lights.
posted at: 21:12 | permanent link to this entry
Mon, 02 Apr 2012
Farewell Winter
Anchorage has been a good place to visit this winter, despite all my kvetching.
Spring is nearly here…
If you look closely, you can see grass under those trees!
So now that I’ve experienced the harsh reality of an Anchorage winter, I’m leaving for warmer climes… And I won’t see the glory that is Alaska in summer.
posted at: 21:12 | permanent link to this entry
Sun, 01 Apr 2012
Spring
We've had several consecutive days of above freezing weather here in Anchorage. The roads have mostly cleared of ice, although a few patches of black ice hide in the shadows. The sidewalks are occasionally a bit dangerous, but since the second or third day of the thaw they've been much less icy.
Folks here in Alaska call this "break up", and it's much like you'd expect from a phrase like that — you never know what the mood of the weather will be; you never know what you're about to put your foot in; and you never know when you're going to embarrass yourself by falling on your butt.
Generally, though, it's shirt-sleeve-warm and pleasant for mid-day walks, at least for a larger guy like me. But when the sun sets at around 8:30 p.m., it doesn't take long for it to get quite chilly. The puddles ice over in about thirty minutes, and footing is treacherous from not long after that until an hour after sunrise at about 8:30 a.m.
When I go on my early evening walks, I keep an eye out for scenery worth sharing — during break up, there's not much of that, because the snow is an ugly brown and the roads are a wet black. There's not much that's photogenic about Anchorage right now.
However, I did catch a couple of glimpses of spring greenery this week. Here are a couple of blades of grass just trying to poke out of the cold, damp earth.
Hope springs eternal!
Tue, 27 Mar 2012
Road Trip II
Last weekend I took advantage of some clear weather and clear-ish roads to take another road trip. This one was south along the Seward Highway, which runs along the Turnagain Arm of the Cook Inlet.
The fjord-like waterway is stunningly gorgeous.
This is a view from the north side of the Arm looking south across the water. In the foreground, you can see the railroad tracks that run between the road and the water. Across the center of the channel are broken blocks of ice, each about half the size of a small car. While I was driving, I didn't notice this; but the water is flowing out toward the Inlet. You can tell because the ice is moving swiftly downstream. Very swiftly. Dangerously swiftly.
As I drove further along, I came upon the town of Girdwood. From one perspective, Girdwood is 40 miles from Anchorage. However, from a strictly legalistic perspective, Girdwood is inside Anchorage, because the Anchorage city limits are enormous, encompassing a goodly portion of Alaska's Chugach State Park. Girdwood is perhaps best known for its famous ski resort, Alyeska. Here are a pair of views of the chair lift from the main road through town:
If you look closely, you can just see the main lodge at the base of the mountain and the chairlift wending its way up the snow face.
I also tried to capture some panoramas of the Turnagain Arm, but haven't yet stitched them together. We'll save that for another post, I suppose.
Mon, 19 Mar 2012
Road Trip
Just because I'm in Anchorage in the dead of winter doesn't mean I can't get out and see Alaska. Sometimes the weather even cooperates.
For example, two weeks ago I drove north up the Glenn Highway to see if I could catch a glimpse of Denali. While I only got about 20 miles past Wasilla (you betcha, that Wasilla) before the cloud cover rolled in, I did get a view of some mighty pretty mountains.
This is the northern end of the Chugach mountain range that borders Anchorage to the east; it forms the southeastern edge of the Knik Arm of the Cook Inlet, just off the Gulf of Alaska. Anchorage sits at the confluence of the Knik Arm and Turnagain Arm of the Inlet, and each arm is fed by a river.
I’d put a map here, but Google isn’t cooperating… Instead, here’s a link to the region.
So, while I didn't get to see the tallest mountain in North America, I did see some spectacular scenery.
Marc Elliot Hall, St. Peters, Missouri
Page created: 21 January 2002
Page modified: 09 December 2017