Curl FAQ: How do I use curl to get the headers from a website URL?
Short answer: use curl's -I option, like this:
$ curl -I URL
Here's a specific example, including a real URL and results:
$ curl -I http://goo.gl/3fG5K
HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=UTF-8
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Date: Mon, 26 Nov 2012 00:11:15 GMT
Location: http://alvinalexander.com/linux/linux-teleport-command-cd-improved
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
Transfer-Encoding: chunked
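One caveat worth knowing: -I makes curl send an HTTP HEAD request, and some servers answer HEAD requests differently than GET requests. A common workaround (a sketch using curl's standard -s, -o, and -D options) is to send a normal GET, discard the body, and dump the headers to stdout:

```shell
# Send a regular GET request, throw away the body (-o /dev/null),
# run silently (-s), and write the response headers to stdout (-D -):
curl -s -o /dev/null -D - http://goo.gl/3fG5K
```

This prints the same kind of header output as -I, but reflects what the server actually sends for a GET request.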
The following example shows the output from curl when you don't use the
-I option on the same URL:
$ curl http://goo.gl/3fG5K
<HTML>
<HEAD>
<TITLE>Moved Permanently</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Moved Permanently</H1>
The document has moved
<A HREF="http://alvinalexander.com/linux/linux-teleport-command-cd-improved">here</A>.
</BODY>
</HTML>
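Because this URL returns a "301 Moved Permanently" response, curl stops at the redirect by default. If you want to see the headers from each hop, including the final destination page, you can add curl's standard -L option (a sketch against the same URL):

```shell
# -L tells curl to follow the Location header in a redirect response;
# combined with -I you see the headers for every hop in the chain:
curl -I -L http://goo.gl/3fG5K
```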
I came across your blog and wanted to try running the curl command, but it appears it isn't installed by default on my distribution, Ubuntu 12.04 LTS. Having dug around the web, it seems I have to install Apache and PHP, which I don't want to do. Is there another command similar to curl that I can use to see HTTP headers, or am I mistaken about having to install the Apache and PHP packages?
On Ubuntu you should be able to install curl using apt-get, like this:
sudo apt-get install curl
apt-get is the command-line package management tool on Ubuntu (and other Debian-based distributions), so you don't need to install Apache or PHP just to get curl.
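To answer the other part of the question: there is a similar command that typically is installed by default on Ubuntu. The wget command can show HTTP headers using its standard --server-response and --spider options (a sketch against the same example URL):

```shell
# --server-response (-S) prints the HTTP response headers, and
# --spider tells wget not to download the page body:
wget --server-response --spider http://goo.gl/3fG5K
```

Note that wget prints its header output to stderr rather than stdout, so pipe accordingly if you want to grep it.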