I have a shared web hosting account on GoDaddy, and wanted to do a 301 redirect with an Apache .htaccess file. For some unknown reason, GoDaddy’s web interface wasn’t working for this, so I thought I’d fix the problem manually.
In short, this did NOT work:
Redirect 301 /the-old-uri http://alvinalexander.com/the-new-uri
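When the plain `Redirect` directive from mod_alias fails on shared hosting, a mod_rewrite rule is a common fallback; a sketch, using the same placeholder URIs as above (whether mod_rewrite is enabled depends on the host's Apache configuration):

```apache
# Requires mod_rewrite to be available and allowed in .htaccess
RewriteEngine On

# Send a permanent (301) redirect from the old URI to the new one;
# [L] stops processing further rewrite rules for this request
RewriteRule ^the-old-uri$ http://alvinalexander.com/the-new-uri [R=301,L]
```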
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is a very short recipe, Recipe 15.12, “How to access HTTP response headers after making an HTTP request with Apache HttpClient.”
You need to access the HTTP response headers after making an HTTP request in your Scala code.
Use the Apache HttpClient library, and get the headers from the HttpResponse object after making a request:
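A short sketch along the lines of that recipe, using the HttpClient 4.x API (the URL is just an example; note that `DefaultHttpClient` was later deprecated in favor of `HttpClients.createDefault`):

```scala
import org.apache.http.client.methods.HttpGet
import org.apache.http.impl.client.DefaultHttpClient

object ShowResponseHeaders extends App {
  val client = new DefaultHttpClient
  try {
    // execute the request and get the HttpResponse
    val response = client.execute(new HttpGet("http://alvinalexander.com/"))
    // getAllHeaders returns an Array[Header]; each Header has getName and getValue
    response.getAllHeaders.foreach(h => println(s"${h.getName}: ${h.getValue}"))
  } finally {
    client.getConnectionManager.shutdown()
  }
}
```

You can also look up a single header with `response.getFirstHeader("Content-Type")`.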
As I’m working on getting a mobile version of this site working, I ran into a problem with having a robots.txt file on a Drupal multisite installation. The root of the problem is that you need to have a robots.txt file like this on your mobile site:
User-agent: *
Disallow: /
That’s to keep the search engines from scanning and storing that content, which will be a duplicate of your main website.
As a quick note to self, I used this Apache httpd.conf configuration in MAMP on my MacBook Pro when developing my “Focus” web application in 2014:
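The article's actual configuration isn't shown here, but a typical MAMP virtual-host entry looks roughly like this; the server name, port, and paths below are hypothetical placeholders, not the original settings:

```apache
# Hypothetical local-development vhost; MAMP's default Apache port is 8888.
# Apache 2.2 syntax shown; on Apache 2.4 use "Require all granted" instead
# of the Order/Allow lines.
<VirtualHost *:8888>
    ServerName focus.local
    DocumentRoot "/Users/al/Sites/focus"
    <Directory "/Users/al/Sites/focus">
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```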
A nice graphic on the use of open source tools on the internet.
Last week I wrote an Apache access log parser library in Scala to help me analyze my Apache HTTP access log file records using Apache Spark. The source code for that project is hosted on GitHub. You can use this library to parse Apache access log “combined” records using Scala, Java, and other JVM-based programming languages.
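To show the core idea, here's a minimal, plain-Scala sketch of parsing one “combined” log record with a regular expression; the actual library is more robust than this, and the field names here are my own:

```scala
// One parsed Apache "combined" format record
case class AccessLogRecord(
  clientIp: String, user: String, dateTime: String,
  request: String, status: String, bytesSent: String,
  referer: String, userAgent: String)

object CombinedLogParser {
  // combined format: host ident user [date] "request" status bytes "referer" "user-agent"
  private val Pattern =
    """^(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+) "([^"]*)" "([^"]*)"$""".r

  def parse(record: String): Option[AccessLogRecord] = record match {
    case Pattern(ip, _, user, dt, req, status, bytes, ref, agent) =>
      Some(AccessLogRecord(ip, user, dt, req, status, bytes, ref, agent))
    case _ => None  // malformed record
  }
}
```

Returning an `Option` lets callers skip malformed records with a simple `flatMap`.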
Generating a list of URLs from Apache access log files, sorted by hit count, using Apache Spark (and Scala)
I don’t want to make my original “Parsing Apache access log records with Spark and Scala” article any longer, so I’m putting some new, better code here.
Assuming that you read that article, I’ll jump right in and say that I use this code to load my data into the Spark REPL:
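As a rough sketch of those REPL steps (not the article's exact code), the hit-count report boils down to extracting the request URI from each record, counting, and sorting; `sc` is the SparkContext the REPL provides, and the log path and regex below are assumptions:

```scala
// Load the access log records as an RDD of strings
val accessLogs = sc.textFile("/path/to/access.log")

// Pull the URI out of the quoted request field, e.g. "GET /blog/ HTTP/1.1"
val uriPattern = """"[A-Z]+ (\S+) [^"]*"""".r

val uriCounts = accessLogs
  .flatMap(line => uriPattern.findFirstMatchIn(line).map(_.group(1)))
  .map(uri => (uri, 1))
  .reduceByKey(_ + _)   // sum the hits per URI
  .sortBy(-_._2)        // highest hit count first

uriCounts.take(10).foreach(println)
```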
I want to analyze some Apache access log files for this website, and since those log files contain hundreds of millions (billions?) of lines, I thought I’d roll up my sleeves and dig into Apache Spark to see how it works, and how well it works. I used Hadoop several years ago, and in short, I found the transition to Spark easy. Here are my notes.