Creating a modular system has significant benefits, particularly in business. Being able to remove and replace individual components keeps costs down while allowing incremental improvements in both speed and efficiency. However, as with most things, there is no free lunch. The modularity provided by the Von Neumann architecture comes with some serious shortcomings:
- Von Neumann bottleneck: Of all these shortcomings, the Von Neumann bottleneck is the most serious when considering the requirements of disciplines such as artificial intelligence, machine learning, and even data science. Because every instruction and every piece of data must travel over the same shared bus, the rate at which memory can feed the processor, rather than the processor's raw speed, limits these data-hungry workloads.
- Single points of failure: Any loss of connection to the bus necessarily means that the computer fails immediately rather than gracefully. Even on systems with multiple processors, the loss of a single processor, which should simply cause a loss of capacity, instead causes the entire system to fail. The same problem occurs with the loss of other system components: rather than reduced functionality, the result is complete system failure. Because AI often requires constant system operation, the potential for severe consequences escalates with the degree to which the application depends on the hardware.
- Single-mindedness: The Von Neumann bus can either retrieve an instruction or retrieve the data required to execute that instruction, but it can't do both at the same time. Consequently, when a data retrieval requires several bus cycles, the processor sits idle, further reducing its ability to perform instruction-intensive AI tasks (the first sketch after this list makes the cost of that idle time concrete).
- Tasking: When the human brain performs a task, many synapses fire at the same time, allowing multiple operations to proceed simultaneously. Von Neumann's original design allowed only one operation at a time, and only after the system had retrieved both the instruction and the required data. Today's computers typically have multiple cores, which allow simultaneous execution of operations in each core. However, application code must specifically request this parallelism, so the capability often goes unused (the second sketch after this list shows just how explicit that request has to be).
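
To make the bottleneck's idle cycles concrete, the following is a minimal sketch, not a real simulator: a toy accounting model in which one shared bus carries an instruction fetch plus a configurable number of data transfers per instruction. The `run_program` helper and the per-instruction cycle costs are hypothetical values chosen purely for illustration.

```python
"""Toy model of the Von Neumann bottleneck (illustrative only)."""

def run_program(instructions, data_cycles_per_instruction):
    """Count bus cycles and processor idle cycles for a toy program.

    Each instruction costs 1 bus cycle to fetch; its operands cost
    data_cycles_per_instruction additional bus cycles. Because the
    single bus serializes both kinds of traffic, every cycle spent
    moving data is a cycle the processor spends waiting.
    """
    bus_cycles = 0
    idle_cycles = 0
    for _ in range(instructions):
        bus_cycles += 1                             # fetch the instruction
        bus_cycles += data_cycles_per_instruction   # fetch its data
        idle_cycles += data_cycles_per_instruction  # processor waits meanwhile
    return bus_cycles, idle_cycles

for data_cycles in (1, 4, 16):  # increasingly data-heavy instruction mixes
    bus, idle = run_program(1000, data_cycles)
    print(f"data cycles/instr={data_cycles:2d}: "
          f"bus={bus} cycles, idle={idle} cycles ({idle / bus:.0%} wasted)")
```

On this toy model, an instruction mix needing 16 data cycles per instruction leaves the processor waiting for roughly 94 percent of all bus cycles, which is why data-heavy AI workloads feel the bottleneck hardest.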
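
To show that extra cores help only when the code explicitly asks for them, here is a second minimal sketch using Python's standard multiprocessing module. The workload (summing squares) and the chunking scheme are arbitrary stand-ins for any CPU-bound task; on small inputs, process-startup overhead can easily outweigh the parallel gain.

```python
"""Cores sit idle unless the application requests them (sketch)."""
import multiprocessing as mp
import time

def sum_of_squares(bounds):
    """CPU-bound stand-in task: sum i*i over a half-open range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 2_000_000
    workers = mp.cpu_count()
    step = n // workers
    # Split [0, n) into one chunk per core.
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]

    start = time.perf_counter()
    serial = sum_of_squares((0, n))     # uses one core, however many exist
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with mp.Pool(workers) as pool:      # parallelism must be requested
        parallel = sum(pool.map(sum_of_squares, chunks))
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.2f}s   "
          f"parallel on {workers} cores: {t_parallel:.2f}s")
```

The serial call never touches the other cores no matter how many the machine has; only the explicit Pool version spreads the work across them, which is exactly the requirement described in the last bullet above.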