Spent the last few days learning the Go language. It’s gotten a surprisingly negative reaction among the literati. I think a lot of it is Google-fatigue, but a lot might be legitimate language issues. I’ll talk about one: parallelization speeds. Go provides goroutines as simple mechanisms for parallelization. Essentially ‘go’ is a keyword in the language that lets you background a function call. The language multiplexes goroutines onto threads “efficiently” and according to the GOMAXPROCS environment variable.
But benchmarking with time, I discovered that performance went down as processors went up. To some degree one can expect this: the job you ask a new thread (or goroutine) to perform must be sufficiently complex that the computation isn't dwarfed by the cost of manufacturing the thread (or goroutine). So, I optimized a bit by only spawning routines when the data set was sufficiently large, otherwise keeping the computation in the same routine. This significantly improved my GOMAXPROCS=1 benchmarks, but the multi-core version was still half again to twice as slow.
Thinking it was some fault in my implementation, I dug up somebody else’s quicksort implementation and, applying the same data set size optimization, put it to the test. Same results. So, I dug through the mailing list to see that, quote,
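For reference, a parallel quicksort with that same cutoff optimization might look roughly like this. This is my own sketch, not the implementation I tested, and the cutoff constant is an arbitrary placeholder:

```go
package main

import (
	"fmt"
	"sync"
)

// cutoff is a hypothetical size below which we recurse serially.
const cutoff = 2048

// quicksort sorts xs in place, spawning a goroutine only for large partitions.
func quicksort(xs []int) {
	if len(xs) < 2 {
		return
	}
	p := partition(xs)
	if len(xs) < cutoff {
		quicksort(xs[:p])
		quicksort(xs[p+1:])
		return
	}
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		quicksort(xs[:p])
	}()
	quicksort(xs[p+1:])
	wg.Wait()
}

// partition is a standard Lomuto partition around the last element.
func partition(xs []int) int {
	pivot := xs[len(xs)-1]
	i := 0
	for j := 0; j < len(xs)-1; j++ {
		if xs[j] < pivot {
			xs[i], xs[j] = xs[j], xs[i]
			i++
		}
	}
	xs[i], xs[len(xs)-1] = xs[len(xs)-1], xs[i]
	return i
}

func main() {
	xs := []int{5, 3, 8, 1, 9, 2, 7}
	quicksort(xs)
	fmt.Println(xs) // [1 2 3 5 7 8 9]
}
```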
In time, suitable problems should be able to see a linear speedup in number of cores, but we’re not there yet.
and so, apparently, "parallelized computations taking in many cases longer than their single-threaded counterparts" is a known issue at this point -- still waiting on confirmation on that one. I want to be optimistic, but this seems pretty bad. I'm going to keep hunting around for a "suitable problem" that will see any "speedup" in number of cores, because for the moment Go is an interesting language to be writing in, and a welcome break from the bowels of C++. I hope I can find one.