Curl FAQ: How do I use curl to get the headers from a website URL?
Short answer: Use curl's -I option, like this:
$ curl -I URL
Here's a specific example, including a real URL and results:
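Here's a sketch of what that looks like in practice. The URL and headers below are illustrative placeholders rather than the original article's output; the exact headers you get back depend on the server:

$ curl -I https://example.com/

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Content-Length: 1256
Server: Apache

Because -I tells curl to issue an HTTP HEAD request, only the response headers are printed and the page body is skipped.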
As a short note today, if you want to make an offline copy/mirror of a website using the GNU/Linux wget command, a command like this will do the trick for you:
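A commonly used combination of wget options for this job (the URL is a placeholder, and the author's exact command may differ) is:

$ wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.com/

Here --mirror turns on recursive downloading with timestamping, --convert-links rewrites links so the copy browses correctly offline, --adjust-extension saves pages with an .html suffix, --page-requisites pulls in images and stylesheets, and --no-parent keeps wget from climbing above the starting directory.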
This blog post on why curl defaults to stdout is an interesting discussion about decisions you have to make when designing things.
I've been having a problem with a GoDaddy website lately (see my GoDaddy 4GH performance problems page), and in an effort to get a better handle on both (a) GoDaddy website downtime and (b) GoDaddy 4GH performance, I wrote a Unix shell script to download a sample web page from my website.
To that end, I created the following shell script, and then ran it from my Mac every two minutes:
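The script itself isn't reproduced here, but a minimal sketch of the idea, using curl's timing output rather than necessarily the author's exact approach (the URL and log file are placeholders):

#!/bin/sh
# download a sample page and record how long the request took
URL=http://example.com/sample-page.html
LOG=/tmp/website-timing.log
NOW=`date "+%Y-%m-%d %H:%M:%S"`
SECS=`curl -s -o /dev/null -w "%{time_total}" $URL`
echo "$NOW $SECS" >> $LOG

Run from cron (or a simple loop) every two minutes, a script like this leaves you with a timestamped log of response times that you can scan for slow or failed downloads.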
wget command FAQ: Can you share an example of a wget command used in a Linux shell script?
Here's a Unix/Linux shell script that I created to download a specific URL on the internet every day using the wget command. Note that I also use the date command to create a dynamic filename, which I'll describe shortly.
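A short sketch of that approach, with a placeholder URL and output directory rather than the original script:

#!/bin/sh
# use the date command to build a filename from today's date,
# then download the page to that file with wget
FILENAME=`date +%Y-%m-%d`.html
wget -O /tmp/pages/$FILENAME http://example.com/some-page.html

The date command supplies the dynamic part of the filename, and wget's -O option writes the download to that file instead of using the remote filename.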
Linux wget FAQ: Can you share an example of the Linux wget command?
Suppose you're working on a Unix or Linux machine remotely through an SSH session, and you need to get a resource (like a tar or gzip file) from the Internet onto that machine. You could download that file to your local machine and then use scp to copy it to your remote Unix box, but that's a lot of work.
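The simpler approach is to fetch the file directly on the remote machine once you're logged in over SSH (the URL below is just a placeholder):

$ wget http://example.com/some-archive.tar.gz

If wget isn't available, curl -O does the same job, saving the file under its remote name.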
Java FAQ: Can you share some source code for a “Java wget” program, i.e., a Java program that works like the Unix wget or curl commands?
Here's the source for a program I've named JGet, which acts similarly to the wget or curl programs. I didn't have wget installed when I needed it (and my client wouldn't let me install it), so I wrote this Java wget replacement program.