When working with a programming language/environment such as Java, the performance of our code may be a concern. I often use an old-fashioned "quick-and-dirty" way to measure the performance of Java code, and I'll share that method in this brief article.
I must confess, the method I use is as old-fashioned as they come. Generally it requires three steps:

1. Get the system time just before your code runs.
2. Run your code.
3. Get the system time again just after your code finishes.

The difference between the result of Step 3 and Step 1 is the time it took to run your code.
Written in Java code, the technique looks like this:
long startTime = System.currentTimeMillis();

// run your code here

long stopTime = System.currentTimeMillis();
long runTime = stopTime - startTime;
System.out.println("Run time: " + runTime);
All you have to do is substitute your code in place of the comment "run your code here".
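To make the technique concrete, here is a minimal, self-contained sketch. The class name and the workload (summing a loop counter) are placeholders of my own choosing; substitute whatever code you actually want to measure.

```java
// A minimal sketch of the timing technique. The workload below is
// only a placeholder -- replace it with the code you want to measure.
public class TimingDemo {

    // Times a placeholder workload and returns the elapsed milliseconds.
    static long timeWorkload() {
        long startTime = System.currentTimeMillis();

        // placeholder workload: sum a large range of ints
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) {
            sum += i;
        }

        long stopTime = System.currentTimeMillis();
        return stopTime - startTime;
    }

    public static void main(String[] args) {
        System.out.println("Run time: " + timeWorkload() + " ms");
    }
}
```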
The only "trick" to this method is knowing how to obtain the system time. The System.currentTimeMillis() method returns the current time in milliseconds since midnight GMT on January 1, 1970. Therefore, stopTime will always be greater than or equal to startTime, and runTime will always be a non-negative number (zero if your code executes in under a millisecond), measured in milliseconds.
Generally I use this method to profile code performance on single-user workstations, or on multi-user workstations that aren't busy. On multi-user computer systems it's possible that other processes will be running, and your runTime will be affected because you're sharing the CPU with these other simultaneously running processes.
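One way to see this effect for yourself is to repeat the same measurement several times; the run-to-run variation reflects interference from other processes (and, on modern JVMs, the JIT compiler warming up). This is a sketch with a placeholder workload of my own, not part of the original method:

```java
// Sketch: repeat the same timed measurement several times. Differences
// between runs come from other processes sharing the CPU and from the
// JVM's JIT compiler warming up. The workload is a placeholder.
public class RepeatedTiming {

    // Times one run of a placeholder workload, in milliseconds.
    static long timeOnce() {
        long start = System.currentTimeMillis();
        long sum = 0;
        for (int i = 0; i < 20_000_000; i++) {
            sum += i; // placeholder workload
        }
        long stop = System.currentTimeMillis();
        return stop - start;
    }

    public static void main(String[] args) {
        for (int run = 1; run <= 5; run++) {
            System.out.println("Run " + run + ": " + timeOnce() + " ms");
        }
    }
}
```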
Also, I only use this technique to measure events that take much longer to run than the System.currentTimeMillis() call itself. As in any good science experiment, you don't want your measurement technique to interfere with the experiment you're trying to measure. Fortunately, because you're generally only interested in profiling events that run measurably slowly (and not events that finish in 1 or 2 milliseconds), this isn't normally a problem.
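If you're curious about how cheap the call actually is, you can apply the same technique to the clock itself: time a large batch of System.currentTimeMillis() calls and divide. This is a rough sketch of my own, not a rigorous benchmark:

```java
// Sketch: estimate the cost of System.currentTimeMillis() by timing
// a large batch of calls. The per-call cost is the total divided by
// the call count -- typically a tiny fraction of a millisecond.
public class CallOverhead {

    // Returns the total milliseconds spent making 'calls' invocations.
    static long timeManyCalls(int calls) {
        long start = System.currentTimeMillis();
        long last = 0;
        for (int i = 0; i < calls; i++) {
            last = System.currentTimeMillis(); // the call being measured
        }
        long stop = System.currentTimeMillis();
        // 'last' is assigned just so each call's result is used
        return stop - start;
    }

    public static void main(String[] args) {
        int calls = 1_000_000;
        long total = timeManyCalls(calls);
        System.out.println(calls + " calls took " + total + " ms total");
    }
}
```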