Moore’s law, the industry’s master metronome, predicts that the number of components that can be squeezed onto a microchip of a given size (and thus, loosely, the amount of computational power available at a given cost) doubles every two years. For many comparatively simple AI applications, that means the cost of training a computer is falling. But that is not true everywhere: a combination of ballooning complexity and competition means that costs at the cutting edge are rising sharply. What really influences the cost of training AI machines?
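The doubling rule above can be sketched with a little arithmetic: if compute per dollar doubles every two years, the cost of a fixed amount of compute halves on the same schedule. The starting figures below are illustrative, not real chip data.

```python
def moores_law_components(initial: float, years: float, doubling_period: float = 2.0) -> float:
    """Project component count after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

def cost_of_fixed_compute(initial_cost: float, years: float, doubling_period: float = 2.0) -> float:
    """If compute per dollar doubles on the same schedule, the cost of a
    fixed workload halves on that schedule."""
    return initial_cost / 2 ** (years / doubling_period)

# Ten years is five doublings: components grow 32-fold,
# and the cost of the same computation falls 32-fold.
print(moores_law_components(1_000, 10))   # → 32000.0
print(cost_of_fixed_compute(100.0, 10))   # → 3.125
```

This is, of course, only the idealised trend; the article's point is that for frontier AI systems the workload itself is growing faster than the hardware is cheapening.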