By Alvin Alexander. Last updated: June 4, 2016
I just found these “Numbers every programmer should know” in an email I sent to myself a few years ago. I don’t remember exactly where I copied them from, but I know they come from a talk titled “Building Software Systems at Google and Lessons Learned” that Jeff Dean of Google gave at Stanford:
L1 cache reference                              0.5 ns
Branch mispredict                                 5 ns
L2 cache reference                                7 ns
Mutex lock/unlock                                25 ns
Main memory reference                           100 ns
Compress 1K bytes w/ cheap algorithm          3,000 ns
Send 2K bytes over 1 Gbps network            20,000 ns
Read 1 MB sequentially from memory          250,000 ns
Round trip within same datacenter           500,000 ns
Disk seek                                10,000,000 ns
Read 1 MB sequentially from disk         20,000,000 ns
Send packet CA->Netherlands->CA         150,000,000 ns
I wouldn’t worry about committing them to memory any time soon, but it’s interesting to see the relative differences between these speeds, as shown in the sketch below.
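If you want to play with those relative differences yourself, here’s a quick Python sketch (my own, not from the talk; the variable names and formatting are just for illustration) that normalizes every entry to an L1 cache reference. For instance, it shows that a disk seek is roughly 20 million times slower than an L1 cache hit.

# Jeff Dean's latency numbers, in nanoseconds (values as listed above).
latencies_ns = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 25,
    "Main memory reference": 100,
    "Compress 1K bytes w/ cheap algorithm": 3_000,
    "Send 2K bytes over 1 Gbps network": 20_000,
    "Read 1 MB sequentially from memory": 250_000,
    "Round trip within same datacenter": 500_000,
    "Disk seek": 10_000_000,
    "Read 1 MB sequentially from disk": 20_000_000,
    "Send packet CA->Netherlands->CA": 150_000_000,
}

# Print each latency alongside how many times slower it is than an L1 cache reference.
l1 = latencies_ns["L1 cache reference"]
for name, ns in latencies_ns.items():
    print(f"{name:40s} {ns:>13,.1f} ns  ({ns / l1:>12,.0f}x L1)")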