It’s interesting to see, not necessarily from a backward-compatibility standpoint, but interesting nonetheless for those of us leading enterprise applications that are still stuck at the “lowest common denominator” simply due to their tremendous size and expected reach.

I can certainly see how many applications could achieve a better performance-vs-cost ratio (engineering man-hours, legacy patch costs, etc.) for their intensive parts, the ones with the best opportunities for memory or CPU utilization management. Let’s face it, none of our applications can be so brazen as to run on a Cray with unlimited resources (although it sure as hell would make it a lot easier to reduce engineering constraints or get the “pixel Nazis” off our backs).