Posts in the “linux-unix” category

Linux/Unix: How to edit your crontab file with “crontab -e”

Linux crontab FAQ: How do I edit my Unix/Linux crontab file?

I was working with an experienced Linux sysadmin a few days ago, and when we needed to make a change to the root user’s crontab file, I was really surprised to watch him cd to the root user’s cron folder, make changes to the file, then do a kill -HUP on the cron process to get it to re-read the file.

Thinking he knew something I didn’t know, I asked him why he did all of that work instead of just entering this:
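That one command is the one in this article’s title: crontab -e, which opens your crontab file in your default editor and automatically installs your changes when you save and quit. (The username in the second example is just a placeholder.)

```
# edit your own crontab file (opens in the editor named by $EDITOR)
crontab -e

# as root, edit another user's crontab file
crontab -e -u al
```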

How to use the Linux 'lsof' command to list open files

Linux “open files” FAQ: Can you share some examples of how to show “open files” on a Linux system (i.e., how to use the lsof command)?

The Linux lsof command lists information about files that are open by processes running on the system. (The name lsof stands for “list open files.”) In this brief article I’ll just share some lsof command examples. If you have any questions, just let me know.
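As a quick preview, here are a few of the most common lsof invocations; the username, PID, filename, and port shown here are just placeholders:

```
# list all open files on the system (very long; usually piped somewhere)
sudo lsof | less

# list files opened by a particular user
lsof -u al

# list files opened by a particular process id
lsof -p 1234

# show which processes have a particular file open
lsof /var/log/syslog

# show which process is listening on TCP port 8080
lsof -i tcp:8080
```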

Dozens of Unix/Linux 'find' command examples


Linux/Unix FAQ: Can you share some Linux find command examples?

Sure. The Linux find command is very powerful. It can search the entire filesystem to find files and directories according to the search criteria you specify. Besides using the find command to locate files, you can also use it to execute other Linux commands (grep, mv, rm, etc.) on the files and directories that are found, which makes find even more powerful.

Linux crontab examples (every X minutes or hours)


Linux crontab FAQ: How do I schedule Unix/Linux crontab jobs to run at time intervals, like “Every five minutes,” “Every ten minutes,” “Every half hour,” and so on?

Solution: I’ve posted other Unix/Linux crontab tutorials here before (How to edit your Linux crontab file, Example Linux crontab file format), but I’ve never included a tutorial that covers the “every” options, so here are some examples to demonstrate this crontab syntax.
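As a preview of that syntax, the minute field of a crontab entry accepts */N step values and comma-separated lists; the script paths below are placeholders:

```
# run a job every five minutes
*/5 * * * * /home/al/bin/check-disk-space.sh

# run a job every ten minutes
*/10 * * * * /home/al/bin/check-disk-space.sh

# run a job every half hour (at :00 and :30)
0,30 * * * * /home/al/bin/rotate-logs.sh

# run a job every two hours, on the hour
0 */2 * * * /home/al/bin/nightly-backup.sh
```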

A Unix/Linux shell script to make a quick backup of a directory

As a brief note today, I just created this little Unix/Linux shell script that I named tarquick, and it lets me quickly create a tar/gz backup of one directory. It does a lot of the tar work for you, and all you have to do is specify an optional directory name:
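A minimal sketch of that idea — the function body here is my assumption, not necessarily the author’s actual script — might look like this:

```shell
#!/bin/sh
# tarquick (sketch): make a quick, date-stamped tar/gz backup of one
# directory; with no argument it backs up the current directory.
tarquick() {
    dir=${1:-.}
    # use the directory's real name in the archive filename
    name=$(basename "$(cd "$dir" && pwd)")
    archive="${name}-$(date '+%Y%m%d-%H%M%S').tar.gz"
    tar czf "$archive" "$dir" && echo "created $archive"
}
```

Running `tarquick mydir` then creates a file like mydir-20240915-103000.tar.gz in the current directory.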

Unix: How to redirect STDOUT and STDERR to /dev/null

Linux/Unix FAQ: How do I redirect STDOUT and STDERR to /dev/null?

To redirect both STDOUT and STDERR to /dev/null, use this syntax:

$ my_command > /dev/null 2>&1

With that syntax, when my_command is run, its STDOUT output is sent to /dev/null (the “bit bucket”), and then STDERR is told to go to the same place as STDOUT. This syntax can be used to redirect command output to any location, but we commonly send it to /dev/null when we don’t care about either type of output.
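To see the difference, here’s a small function (my own example, not from the article) that writes one line to each stream, with the redirection applied both ways:

```shell
# a function that writes one line to STDOUT and one to STDERR
both() {
    echo "normal output"
    echo "error output" >&2
}

# discard both streams; nothing reaches the terminal
both > /dev/null 2>&1

# order matters: here STDERR is pointed at the terminal *before*
# STDOUT is redirected, so "error output" still appears
both 2>&1 > /dev/null
```

The 2>&1 must come after the > /dev/null, because the shell processes redirections from left to right.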

How do I sort a Unix directory listing by file size?

To sort a Unix/Linux directory listing by file size, you just need to add one or more options to the base ls command. On Mac OS X (which runs a form of Unix) this command works for me:

ls -alS

That lists the files in order, from largest to smallest. To reverse the listing so it shows smallest to largest, just add the 'r' option to that command:

ls -alSr
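To see both orderings in action, you can create a few files of different sizes in a scratch directory (the filenames here are my own):

```shell
cd "$(mktemp -d)"

# create three files of different sizes
printf '%10000s' ' ' > big.txt
printf '%100s'   ' ' > medium.txt
printf '%1s'     ' ' > small.txt

ls -alS     # sorted largest to smallest
ls -alSr    # sorted smallest to largest
```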

For another article related to finding large files, see my article, How to find the largest files under a directory on MacOS.

How to sort Linux ls command file output

A couple of days ago I was asked how to sort the output from the Unix and Linux ls command. Off the top of my head I knew how to sort the ls output by file modification time, and also knew how to sort ls with the Linux sort command, but I didn't realize there were other cool file sorting options available until I looked them up.

In this short tutorial I'll demonstrate the Unix/Linux ls command file sorting options I just learned.

Sorting Unix 'ls' command output by filesize

I just noticed that some of the MySQL files on this website had grown very large, so I wanted to be able to list all of the files in the MySQL data directory and sort them by filesize, with the largest files shown at the end of the listing. This ls command did the trick, resulting in the output shown in the image:

ls -Slhr

The -S option is the key, telling the ls command to sort the file listing by size. The -h option tells ls to make the output human readable, and -r tells it to reverse the output, so in this case the largest files are shown at the end of the output.

Linux shell script date formatting

Unix/Linux date FAQ: How do I create a formatted date in Linux? (Or, “How do I create a formatted date I can use in a Linux shell script?”)

I just ran into a case where I needed to create a formatted date in a Linux shell script, where the desired date format looks like this:


To create this formatted date string, I use the Linux date command, adding the + symbol to specify that I want to use the date formatting option, like this:
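As one example of that + syntax — this particular format string is mine, not necessarily the one from the article — here’s a common YYYY-MM-DD date:

```shell
# a date formatted as YYYY-MM-DD, e.g. 2024-09-15
today=$(date '+%Y-%m-%d')
echo "$today"

# the same idea, with the time included
now=$(date '+%Y-%m-%d %H:%M:%S')
echo "$now"
```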

Notes about setting up HTTPS on websites using Let’s Encrypt and certbot

As a note to self, I added SSL/TLS certificates to a couple of websites using Let’s Encrypt. Here are a couple of notes about the process:

  • Read the Let’s Encrypt docs
  • They suggest using certbot
  • Read those docs, and follow their instructions for installing the packages you’ll need
  • Make sure your server firewall rules allow port 443 (You may get an “Unable to connect to the server” error message if you forget this part, as I did)
  • After making some backups, run this command as root (or you may be able to use the sudo command):
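The exact command isn’t reproduced here; for reference, the certbot docs describe invocations along these lines (the --nginx plugin is just one possibility — substitute the plugin that matches your web server):

```
# obtain and install a certificate for sites served by Nginx
sudo certbot --nginx

# or obtain the certificate only, leaving the server config untouched
sudo certbot certonly --nginx
```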

Linux: How to find multiple filenames with the ‘find’ command

Unix/Linux find command FAQ: How can I write one Unix find command to find multiple filenames (or filename patterns)? For example, I want to find all the files beneath the current directory that end with the file extensions ".class" and ".sh".

You can use the Linux find command to find multiple filename patterns at one time, but for most of us the syntax isn't very common. In short, the solution is to use the find command's "or" option, with a little shell escape magic. Let's take a look at several examples.
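As a preview, the key pieces are find’s -o (“or”) operator and parentheses escaped from the shell to group the name tests (the sample files here are my own):

```shell
cd "$(mktemp -d)"
touch Foo.class Bar.sh notes.txt

# find files ending in ".class" or ".sh" beneath the current directory
find . -type f \( -name '*.class' -o -name '*.sh' \)
```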

How to use the Linux sed command to edit many files in place (and make a backup copy)


Warning: The following Unix sed commands are very powerful, so you can modify a lot of files successfully — or really screw things up — all in one command. :)

Yesterday I ran into a situation where I had to edit over 250,000 files, and with that I also thought, “I need to remember how to use the Unix/Linux sed command.” I knew what editing commands I wanted to run — a series of simple find/replace commands — but my bigger problem was how to edit that many files in place.

A quick look at the sed man page showed that I needed to use the -i argument to edit the files in place:
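To demonstrate that flag on one small file first: attaching a suffix to -i (a form that works with both GNU and BSD/macOS sed) also keeps the backup copy mentioned in the title:

```shell
cd "$(mktemp -d)"
printf 'hello world\nhello again\n' > file.txt

# edit the file in place, keeping the original as file.txt.bak
sed -i.bak 's/hello/goodbye/' file.txt

cat file.txt       # goodbye world / goodbye again
cat file.txt.bak   # the original hello lines, untouched
```

To apply the same edit to many files, the usual approach is to drive sed from find, e.g. `find . -type f -name '*.txt' -exec sed -i.bak 's/hello/goodbye/' {} \;`.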

How to control/configure vim colors


vim colors FAQ: Can you provide details on how to control/configure colors in the vim editor (i.e., vim color settings)?

Sure. When using vim syntax highlighting, a common complaint is that the default color scheme is a little too bold. In this article I'll try to demonstrate how you can change the colors in vim to be a little more pleasing, or at least be more in your control.

Linux: Recursive file searching with `grep -r` (like grep + find)


Unix/Linux grep FAQ: How can I perform a recursive search with the grep command in Linux?

Two solutions are shown next, followed by some additional details which may be useful.

Solution 1: Combine 'find' and 'grep'

For years I always used variations of the following Linux find and grep commands to recursively search subdirectories for files that match a grep pattern:

find . -type f -exec grep -l 'alvin' {} \;

This command can be read as, “Search all files in all subdirectories of the current directory for the string ‘alvin’, and print the filenames that contain this pattern.” It’s an extremely powerful approach for recursively searching files in all subdirectories that match the pattern I specify.

Solution 2: 'grep -r'

However, I was just reminded that a much easier way to perform the same recursive search is with the -r flag of the grep command:

grep -rl alvin .

As you can see, this is a much shorter command, and it performs the same recursive search as the longer command, specifically:

  • The -r option says “do a recursive search”
  • The -l option (lowercase letter L) says “list only filenames”
  • As you’ll see below, you can also add -i for case-insensitive searches

If you haven’t used commands like these before, to demonstrate the results of this search, in a PHP project directory I’m working in right now, this command returns a list of files like this:


More: Search multiple subdirectories

Your recursive grep searches don’t have to be limited to just the current directory. This next example shows how to recursively search two unrelated directories for the case-insensitive string "alvin":

grep -ril alvin /home/cato /htdocs/zenf

In this example, the search is made case-insensitive by adding the -i argument to the grep command.

Using egrep recursively

You can also perform recursive searches with the egrep command, which lets you search for multiple patterns at one time. Since I tend to mark comments in my code with my initials ("aja") or my name ("alvin"), this recursive egrep command shows how to search for those two patterns, again in a case-insensitive manner:

egrep -ril 'aja|alvin' .

Note that in this case, quotes are required around my search pattern.

Summary: `grep -r` notes

A few notes about the grep -r command:

  • This particular use of the grep command doesn’t make much sense unless you use it with the -l (lowercase "L") argument as well. This flag tells grep to print the matching filenames.
  • Don’t forget to list one or more directories at the end of your grep command. If you forget to add any directories, grep will attempt to read from standard input (as usual).
  • As shown, you can use other normal grep flags as well, including -i to ignore case, -v to reverse the meaning of the search, etc.

Here’s the section of the Linux grep man page that discusses the -r flag:

-R, -r, --recursive
    Read all files under each directory, recursively; this is
    equivalent to the -d recurse option.

--include=PATTERN
    Recurse in directories only searching file matching PATTERN.

--exclude=PATTERN
    Recurse in directories skip file matching PATTERN.

As you’ve seen, the grep -r command makes it easy to recursively search directories for all files that match the search pattern you specify, and the syntax is much shorter than the equivalent find/grep command.

For more information on the find command, see my Linux find command examples, and for more information on the grep command, see my Linux grep command examples.

vi/vim delete commands and examples

vi/vim editor FAQ: Can you share some example vi/vim delete commands?

The vi editor can be just a little difficult to get started with, so I thought I’d share some more vi commands here today, specifically some commands about how to delete text in vi/vim.

vi/vim delete commands - reference

A lot of times all people need is a quick reference, so I’ll start with a quick reference of vi/vim delete commands:


Linux: How to get the basename from the full filename

As a quick note today, if you’re ever writing a Unix/Linux shell script and need to get the filename from a complete (canonical) directory/file path, you can use the Linux basename command like this:

$ basename /foo/bar/baz/foo.txt
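That prints just the last component of the path; basename can also strip a trailing suffix, and shell parameter expansion gives the same result without running an external command:

```shell
basename /foo/bar/baz/foo.txt          # prints: foo.txt
basename /foo/bar/baz/foo.txt .txt     # prints: foo

# a pure-shell alternative using parameter expansion
path=/foo/bar/baz/foo.txt
echo "${path##*/}"                     # prints: foo.txt
```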

How to make an offline mirror copy of a website with wget

As a short note today, if you want to make an offline copy/mirror of a website using the GNU/Linux wget command, a command like this will do the trick for you:

wget --mirror            \
     --convert-links     \
     --html-extension    \
     --wait=2            \
     -o log

Update: One thing I learned about this command is that it doesn’t make a copy of “rollover” images, i.e., images that are changed by JavaScript when the user rolls over them. I haven’t investigated how to fix this yet, but the easiest thing to do is to copy the /images directory from the server, assuming that you’re making a static copy of your own website, as I am doing. Another thing you can do is manually download the rollover images.

Why I did this

In my case I used this command because I don’t want to use Drupal to serve that website any more, so I used wget to convert the original Drupal website into a series of static HTML files that can be served by Nginx or Apache. (There’s no need to use Drupal here, as I no longer update that website, and I don’t accept comments there.) I just did the same thing with my website, which is basically an online version of a children’s book that I haven’t modified in many years.

Why use the --html-extension option?

Note that you won’t always need to use the --html-extension option with wget, but because the original version of my How I Sold My Business website did not use any extensions at the end of the URLs, it was necessary in this case.

What I mean by that is that the original version of my website had URLs like this:

Notice that there is no .html extension at the end of that URL. Therefore, what happens if you use wget without the --html-extension option is that you end up with a file on your local computer with this name:


Even if you use MAMP or WAMP to serve this file from your local filesystem, they aren’t going to know that this is an HTML file, so essentially what you end up with is a worthless file.

Conversely, when you do use the --html-extension option, you end up with this file on your local filesystem:


On a Mac, that file is easily opened in a browser, and you don’t even need MAMP. wget is also smart enough to change all the links within the offline version of the website to refer to the new filenames, so everything works.

Explanation of the wget options used

Here’s a short explanation of the options I used in that wget command:

--mirror
    Turn on options suitable for mirroring. This option turns on
    recursion and time-stamping, sets infinite recursion depth,
    and keeps FTP directory listings. It is currently equivalent to
    ‘-r -N -l inf --no-remove-listing’.

--convert-links
    After the download is complete, convert the links in the document
    to make them suitable for local viewing.

--html-extension
    Append the suffix .html to downloaded files of type text/html
    whose URL doesn’t already end in .html.

-o foo
    write "log" output to a file named "foo"

--wait=seconds
    Wait the specified number of seconds between the retrievals.
    Use of this option is recommended, as it lightens the server load
    by making the requests less frequent.

Depending on the web server settings of the website you’re copying, you may also need to use the -U option, which works something like this:

-U Mozilla
   masquerade as a Mozilla browser

That option lets you set the wget user agent. (I suspect that the string you use may need to be a little more complicated than that, but I didn’t need it, and didn’t investigate it further.)

I got most of these settings from the GNU wget manual.


An alternative approach is to use httrack, like this:

httrack --footer "" http://mywebsite:8888/

I’m currently experimenting to see which works better.


I’ll write more about wget and its options in a future blog post, but for now, if you want to make an offline mirror copy of a website, the wget command I showed should work.

Unix/Linux shell script reference page (shell cheat sheet)

Linux shell script test syntax

All of the shell script tests that follow should be performed between the bracket characters [ and ], like this:

if [ true ]
then
  # do something here
fi

Very important: Make sure you leave spaces around the bracket characters.

I'll show more detailed tests as we go along.

Linux shell file-related tests

To perform tests on files, use the following comparison operators:
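For example, a few of the most common file test operators, shown in runnable form (the scratch file is created here just for the demonstration):

```shell
file=$(mktemp)   # an empty scratch file

if [ -f "$file" ]; then echo "regular file"; fi
if [ -d /tmp ];    then echo "/tmp is a directory"; fi
if [ -r "$file" ]; then echo "readable"; fi
if [ -s "$file" ]; then echo "non-empty"; fi   # false here: the file is empty
```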
