Performance

Bloop’s compiler performance is ~29% faster than SBT (on at least one project)

I had read that Bloop was faster than Scala compiler tools like scalac and fsc, so I wondered if it was faster than SBT, and if so, how much faster. So I downloaded Eric Torreborre’s specs2 source code, which has 880 source code files, and compiled the entire project with both SBT and Bloop.

SBT performance

To test SBT’s speed, I ran all the commands from inside the SBT command prompt, which I usually do anyway to avoid the SBT/JVM startup lag time. I also ran clean and compile several times before recording SBT’s speed, because I thought that would be a better reflection of real-world usage and performance. I ran the test four times; the average time with SBT was 49 seconds, and the results were very consistent, always coming in between 48 and 50 seconds.
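For reference, that workflow looks roughly like this from inside the sbt shell (the specs2 project name shown in the prompt is just an assumption about how the build is named):

    sbt:specs2> clean
    sbt:specs2> compile

The two commands can also be chained as ";clean ;compile" and re-run from the shell history, which is a convenient way to repeat the measurement several times without leaving the sbt prompt.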

Bloop performance
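Per the title, Bloop compiled the same project roughly 29% faster than SBT’s ~49 seconds, which puts it somewhere in the mid-30-second range. As a rough sketch of the equivalent Bloop workflow (assuming the sbt build has already been exported with the sbt-bloop plugin, and that the project to compile is named specs2; both are assumptions):

    $ sbt bloopInstall       # export the sbt build definition for Bloop

    $ bloop clean specs2     # then clean and compile with the Bloop CLI
    $ bloop compile specs2

Because the Bloop server stays resident between runs, repeated clean/compile cycles skip JVM startup and keep the compiler warm, which is a big part of where its speed advantage comes from.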

Measuring Scrum team productivity/speed with Function Point Analysis

I bought my first copy of Agile Software Development with Scrum, by Schwaber and Beedle, back around 2002, I think. I was just thumbing through it last night when I saw that they use Function Points as a metric to demonstrate the velocity that agile software teams achieve, and more specifically to show that some teams develop software much faster using Scrum.

I didn’t know about Function Point Analysis back in 2002 — I didn’t become a Certified Function Point Specialist until about two years later — so I probably just skimmed over that line then, but when I saw it last night I thought it was cool that they used function points as a metric for software team development speed.

Computer latency: 1977-2017

A friend sent me this link about computer latency.

A note about Scala/Java startup time

I was reading this post by Martin Odersky (Make the Scala runtime independent of the standard library) and came across this comment by Li Haoyi: “This would also make it more feasible to use Scala for tiny bootstrap scripts; current Mill’s launcher is written in Java because the added classloading needed to use scala.Predef (even just println) easily adds a 200-400ms of initialization overhead.” I haven’t written anything where the startup time of a Scala application was a huge problem, but that was interesting to read.

(Though I should say that I wish all Scala/Java command-line apps started faster. It’s one reason I occasionally think about using Haskell for small scripts, so I can compile them to an executable.)
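If you want to see that overhead on your own machine, here is a minimal probe I’d sketch (the StartupProbe name is made up, and this isn’t from the post): it prints how long the JVM has been running by the time a trivial Scala main method executes, and because it calls println it pulls in scala.Predef, so the number includes the kind of classloading cost Li Haoyi describes.

    import java.lang.management.ManagementFactory

    object StartupProbe {
      def main(args: Array[String]): Unit = {
        // RuntimeMXBean.getStartTime returns the JVM start time in epoch millis,
        // so the difference approximates JVM startup plus classloading overhead
        val jvmStart = ManagementFactory.getRuntimeMXBean.getStartTime
        val elapsed  = System.currentTimeMillis() - jvmStart
        println(s"elapsed since JVM start: $elapsed ms")
      }
    }

Running the same probe written in plain Java (no Scala library on the classpath) gives a rough baseline to compare against.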

Great tech review of Apple’s iPad A12X system

Kudos to Samuel Axon of Ars Technica for writing a very good tech review of the hardware behind Apple’s new iPad Pro (2018). As I was reading it, it reminded me of the old style of solid writing that I used to get when I bought print copies of magazines.

One of the nuggets of the article is shown in the image I’ve attached here, where you can see that the 2018 iPad Pro is faster than every MacBook Pro in existence other than the 2018 model, at least in terms of the Geekbench multi-core performance tests. If you dig through the images in the article you’ll see that the story isn’t quite as strong in the single-core benchmark, where the iPad Pro lags the 2018 MacBook Pro by up to 16%. But in those tests the iPad Pro is roughly the equivalent of a 2018 Dell XPS 15 2-in-1 model. (The older Macs use Intel Core i7 and Xeon W processors, and the Dell model uses an Intel Core i7. The 2018 MacBook Pro uses an Intel Core i9.)

These numbers — comparing a tablet to i7 and i9 processors — make one think that Apple will be using its own chips inside Mac computers sometime soon.

sbt-jmh, an SBT plugin for running OpenJDK JMH benchmarks

sbt-jmh is an SBT plugin for running OpenJDK JMH benchmarks. Per the JMH docs, “JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM.”

They also recommend reading an article titled Nanotrusting the Nanotime if you’re interested in writing your own benchmark tests.
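As a rough sketch of what using the plugin looks like (the version number and the benchmark class below are illustrative; check the sbt-jmh README for current details):

    // project/plugins.sbt
    addSbtPlugin("pl.project13.scala" % "sbt-jmh" % "0.4.3")

    // build.sbt — enable the plugin on the project that holds the benchmarks
    enablePlugins(JmhPlugin)

    // src/main/scala/StringConcatBench.scala — a tiny, hypothetical benchmark
    import org.openjdk.jmh.annotations._

    @State(Scope.Benchmark)
    class StringConcatBench {
      val parts: Seq[String] = (1 to 100).map(_.toString)

      @Benchmark
      def mkStringConcat: String = parts.mkString(",")
    }

Benchmarks are then run from the sbt shell with a command like "Jmh/run -i 5 -wi 5 -f 1" (or "jmh:run ..." on older sbt versions), where -i, -wi, and -f control the measurement iterations, warmup iterations, and forks.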