How the Titan supercomputer was recycled. Here is the video of the decommissioning:
While Titan was still the 12th most powerful supercomputer in the world at the time of its decommissioning, running it had become cost-prohibitive:
[...] the price tag for providing all the infrastructure Titan required to function would have been cost prohibitive. Unlike newer supercomputers, Titan needed three different cooling systems to operate: refrigerant, chilled water, and air conditioning—all very expensive to maintain. Furthermore, Titan used about 4 to 6 megawatts of electricity on average, which is enough to power over 3,000 houses—not the sort of electrical service available to many institutions. Meanwhile, attempting to reduce Titan’s overall size and power usage with fewer cabinets would have resulted in less computing power than can be purchased with newer, smaller systems at a lower cost.
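The "over 3,000 houses" figure is easy to sanity-check with a rough calculation. The per-household draw below (about 1.2 kW on average, roughly 10,500 kWh per year) is my own assumption, not a number from ORNL:

```python
# Rough sanity check of the "over 3,000 houses" claim.
# Assumes an average household draw of ~1.2 kW; this per-house figure
# is an assumption for illustration, not from ORNL.

avg_household_kw = 1.2

for titan_mw in (4, 6):
    houses = titan_mw * 1000 / avg_household_kw
    print(f"{titan_mw} MW is roughly {houses:,.0f} average households")
```

At 4 MW that works out to a bit over 3,300 households, which lines up with the claim in the quote.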
ORNL's website gives the specifications for the supercomputer, and they are quite impressive, as you would expect:
Titan featured 18,688 compute nodes, a total system memory of 710 terabytes, and Cray’s high-performance Gemini network. Its 299,008 CPU cores guided simulations while the accompanying GPUs handled hundreds of calculations simultaneously. The system provided decreased time to solution, increased complexity of models, and greater realism in simulations. Titan helped launch a new era for science and engineering as computing approaches the exascale, or a million trillion calculations a second.
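Dividing the quoted totals by the node count gives a feel for what each individual node looked like; this is just arithmetic on the numbers above, nothing more:

```python
# Per-node figures derived by dividing the quoted totals by the node count.
nodes = 18_688
total_memory_tb = 710
total_cpu_cores = 299_008

cores_per_node = total_cpu_cores / nodes              # -> 16 CPU cores per node
memory_per_node_gb = total_memory_tb * 1000 / nodes   # -> ~38 GB per node

print(f"{cores_per_node:.0f} CPU cores and ~{memory_per_node_gb:.0f} GB of memory per node")
```

So each node on its own is fairly modest; the machine's power came from how many of them there were and how tightly they were connected.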
I wonder if it would be possible to perform some of the computations done on this machine on commodity hardware instead. The amount of CPU and memory is massive, but surely a whole datacenter can hold more than 710 terabytes of memory and 18,688 compute nodes in total. So what prevents us from running these computations on AWS and the like? Perhaps the amount of communication between the nodes is simply too high for a commodity network.
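That guess can be sketched with a toy model: treat one simulation timestep as local compute plus a tree-style allreduce across all the nodes. The latency and bandwidth figures below are purely illustrative assumptions, not measurements of Titan's Gemini network or of any cloud provider:

```python
import math

# Toy model of one simulation timestep: local compute plus a tree-style
# allreduce across all nodes. Latency/bandwidth numbers are assumptions
# chosen only to contrast an HPC interconnect with a commodity network.

NODES = 18_688

def step_time(compute_s, msg_bytes, latency_s, bandwidth_bytes_s):
    rounds = math.ceil(math.log2(NODES))  # ~15 rounds of pairwise exchange
    comm_s = rounds * (latency_s + msg_bytes / bandwidth_bytes_s)
    return compute_s + comm_s

# 100 microseconds of compute and 8 KB exchanged per round, per step.
hpc = step_time(1e-4, 8e3, latency_s=1.5e-6, bandwidth_bytes_s=8e9)
cloud = step_time(1e-4, 8e3, latency_s=50e-6, bandwidth_bytes_s=1.25e9)

print(f"HPC-style interconnect:  {hpc * 1e3:.3f} ms per step")
print(f"Commodity-style network: {cloud * 1e3:.3f} ms per step")
```

With these made-up but not unreasonable numbers, the communication term dominates on the slower network and each step takes several times longer, even though the aggregate compute and memory are the same. That would explain why raw datacenter capacity alone is not enough for tightly coupled simulations.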