learning

“You process things a little better when you put pen to paper.”

~ Trevor Siemian, Denver Broncos QB, on something he learned from Peyton Manning (and something a professor also told me in college many years ago)

Getting a Mac/Java app ready for Apple’s Mac App Store

Over the last two days I’ve prepared a Mac/Java app for Apple’s Mac App Store, which meant bundling the application as a macOS “.app” application bundle and signing it so it can be submitted to the Store.
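The post doesn’t show the commands, but a minimal sketch of that bundle-and-sign workflow, using the JDK’s `jpackage` tool and Apple’s `codesign`/`productbuild`, might look like the following. The app name, JAR, main class, entitlements file, and signing identities are all placeholders, and the exact flags depend on your JDK version and Developer account:

```shell
# Bundle the Java app as a macOS .app image (jpackage ships with JDK 14+)
jpackage \
  --type app-image \
  --name MyApp \
  --input target/ \
  --main-jar myapp.jar \
  --main-class com.example.Main

# Sign the bundle with a Mac App Store distribution certificate
codesign --force --deep \
  --entitlements MyApp.entitlements \
  --sign "3rd Party Mac Developer Application: Your Name (TEAMID)" \
  MyApp.app

# Wrap the signed app in an installer package for App Store submission
productbuild \
  --component MyApp.app /Applications \
  --sign "3rd Party Mac Developer Installer: Your Name (TEAMID)" \
  MyApp.pkg
```

The resulting `MyApp.pkg` is what you upload to the Store (e.g., with Transporter or `xcrun altool`); signing requires the matching certificates to already be in your keychain.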

A relatively quick look at my browser history shows that I needed to hit over 260 URLs to get that done. As a wise professor once told me, “Keep learning, keep learning.”

This is a page from my book, “A Survival Guide for New Consultants”

The two things I learned in college

“The discipline of Zen consists in upsetting this groundwork once and for all, and reconstructing the old frame on an entirely new basis.”

~ D.T. Suzuki

I was talking to a friend the other day about what I learned in college, and I came to the conclusion that I learned two major things.

This is a page from my book, “Learning Functional Programming in Scala”

Introduction: Learning Functional Programming in Scala

“So why do I write, torturing myself to put it down? Because in spite of myself I’ve learned some things.”

~ Ralph Ellison

The short version of “Why I wrote this book” is that learning functional programming in Scala was really hard for me, and I want to improve that situation.

The longer answer goes like this ...

The thermodynamics of learning

From a Phys.org article titled “The thermodynamics of learning”:

“The greatest significance of our work is that we bring the second law of thermodynamics to the analysis of neural networks,” Sebastian Goldt at the University of Stuttgart, Germany, told Phys.org. “The second law is a very powerful statement about which transformations are possible — and learning is just a transformation of a neural network at the expense of energy. This makes our results quite general and takes us one step towards understanding the ultimate limits of the efficiency of neural networks.”