On Wed, Mar 21, 2018 at 09:13:02AM -0700, Steven Schear wrote:
http://parallel.princeton.edu/openpiton/open_source_processors.php
Looks like some nice architecture work being explored here: reducing coherence overhead as core counts grow (coherence domains of "up to 64" cores are optimized), and reducing power usage with a "drafting mode", which is essentially SIMD applied at a higher level. Many similar threads - e.g. uploading and analysing/scaling images for many web page end users - have their instructions aligned so they execute simultaneously where possible, improving throughput per unit of energy by up to 20 percent (8.57 percent on average).

So they're targeting the right metric for longer-term traction: "throughput divided by energy", which is what datacenters must optimize if they want to increase profitability. That's a little different from minimizing latency, or from minimizing power per unit of work done, which battery-powered end-user devices require - although I think it's not yet clear that these two outcomes aren't ultimately one and the same. With a laptop, you hike the CPU frequency when heavy calculations need to be done, minimizing the time required (users want low latency), and that can thereby also minimize overall energy used. All roads lead to Rome.
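As a back-of-envelope illustration of that last point (race-to-idle), here's a minimal sketch - every number in it is made up purely for the example, not measured from any real CPU or from OpenPiton - comparing total energy for a fixed batch of work at a low and a high clock, when idle power is non-zero:

    # Hypothetical race-to-idle comparison; all figures are illustrative only.
    # A fixed batch of work runs either slowly at low power or quickly at high
    # power; the machine then idles at IDLE_W until a fixed accounting deadline.

    WORK = 1000.0        # units of work in the batch (arbitrary)
    DEADLINE_S = 100.0   # wall-clock window we account energy over, seconds
    IDLE_W = 10.0        # watts drawn while idle

    def total_energy(rate_units_per_s, active_w):
        """Joules consumed over the window: active phase plus idle phase."""
        active_s = WORK / rate_units_per_s
        idle_s = max(DEADLINE_S - active_s, 0.0)
        return active_s * active_w + idle_s * IDLE_W

    # Slow-and-steady: 10 units/s at 30 W -> 100 s active,  0 s idle -> 3000 J
    # Race-to-idle:    25 units/s at 50 W ->  40 s active, 60 s idle -> 2600 J
    for label, rate, watts in [("slow", 10.0, 30.0), ("fast", 25.0, 50.0)]:
        e = total_energy(rate, watts)
        print(f"{label}: {e:.0f} J, throughput/energy = {WORK / e:.3f} units/J")

Under those assumed figures the faster run finishes the same work with less total energy and a better throughput-per-joule, i.e. the low-latency choice and the low-energy choice coincide; whether they do in practice depends on how steeply power climbs with frequency and how much the idle floor costs.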