Yottabytes

There are a lot of competing suggestions these days about which technologies are best for developing the newest supercomputers. One option being pursued is the quantum computer. Hewlett Packard Enterprise (HPE) is taking a different route: it has just announced a single-memory computer affectionately called ‘The Machine’. The new machine has a stunning 160 terabytes of memory, but the architecture may allow that memory to scale up to about 4,096 yottabytes. If that sounds like something from a science fiction movie (akin to the flux capacitor and lightsabers), it isn’t too far off.
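To put those figures in perspective, here is a quick back-of-the-envelope calculation. This is only a sketch in Python: it assumes the SI (decimal) definitions of terabyte and yottabyte and uses the 160-terabyte and 4,096-yottabyte figures quoted above.

```python
# Back-of-the-envelope scale check for the figures quoted above.
# SI (decimal) units are assumed: 1 TB = 10**12 bytes, 1 YB = 10**24 bytes.

TERABYTE = 10**12
YOTTABYTE = 10**24

prototype_memory = 160 * TERABYTE      # The Machine's announced memory
claimed_ceiling = 4096 * YOTTABYTE     # the architecture's claimed upper limit

print(f"Prototype: {prototype_memory:.3e} bytes")
print(f"Ceiling:   {claimed_ceiling:.3e} bytes")
print(f"The ceiling is {claimed_ceiling / prototype_memory:.2e} times "
      f"the prototype's memory")
```

Even the 160-terabyte prototype is enormous by today’s standards; the claimed ceiling is roughly thirteen orders of magnitude larger still.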

A single yottabyte is 10 to the 24th power bytes, or, for the non-mathematical types, a one with 24 zeros after it (4,096 yottabytes is roughly 250,000 times all the digital data on earth today). At one point it was estimated that storing a single yottabyte would take around a million data centers the size of a city block, covering an area equivalent to Delaware and Rhode Island combined. However, new memory-density technology could make it possible to store that many yottabytes on data cards occupying a space only around double the size of the Hindenburg.

In any case, this newest hardware architecture promises serious memory scalability in a far smaller footprint, along with much greater computing power. The memory-driven model looks like a way to move technology forward quickly while using tech that is at our disposal now: it makes memory, rather than the processor, the centerpiece of the architecture. The system promises to eliminate some of the hang-ups between pieces of hardware, allowing for much faster processing per computation.

Each memory node is connected via a high-performance protocol, and the computer runs an optimized Linux-based OS. The goal of the computer is to take advantage of the abundant persistent memory for use with big data. Andrew Wheeler, the deputy director of HP Labs, said, “We think we’ve got an architecture that scales all the way from edge devices — the intelligent edge — through some bigger systems that are probably more in line with what we’ve built out as a prototype.”

HPE is hoping that this sort of scalable, mass-memory computing model will be right up the alley of large corporations with big data needs as we enter the age of AI and high-performance computing. Analyzing, comparing, and using data normally demands processor power and constant traffic between processor and memory, but the memory-driven model allows those analyses to take place within the memory fabric itself rather than in the interaction between memory and processor, meaning there is far less hardware crossover to deal with. The entire system simply processes across existing memory channels. And it’s a lot of memory! Imagine dealing with all the existing data in the world at one time, seamlessly. That’s what this new system promises.
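As a very loose, single-machine analogy for that idea, the sketch below uses Python’s standard multiprocessing.shared_memory module: several worker processes analyze slices of one array in place, in a shared pool, rather than each receiving its own copy. The array size, the worker count, and the simple sum reduction are arbitrary choices for illustration; this is ordinary operating-system shared memory, not HPE’s memory fabric.

```python
# Illustration only: several processes work on one shared pool of data
# in place, instead of copying it back and forth between them.
from multiprocessing import Process, shared_memory
import numpy as np

N = 1_000_000           # elements in the shared array
WORKERS = 4

def worker(data_name, out_name, slot, start, stop):
    # Attach to the existing shared data block and the shared results buffer.
    data_shm = shared_memory.SharedMemory(name=data_name)
    out_shm = shared_memory.SharedMemory(name=out_name)
    data = np.ndarray((N,), dtype=np.float64, buffer=data_shm.buf)
    out = np.ndarray((WORKERS,), dtype=np.float64, buffer=out_shm.buf)
    out[slot] = data[start:stop].sum()   # analyze this slice in place
    data_shm.close()
    out_shm.close()

if __name__ == "__main__":
    # One large array lives in a single shared pool ...
    data_shm = shared_memory.SharedMemory(create=True, size=N * 8)
    data = np.ndarray((N,), dtype=np.float64, buffer=data_shm.buf)
    data[:] = 1.0
    # ... plus a small shared buffer for the partial results.
    out_shm = shared_memory.SharedMemory(create=True, size=WORKERS * 8)

    chunk = N // WORKERS
    procs = [Process(target=worker,
                     args=(data_shm.name, out_shm.name, i,
                           i * chunk, (i + 1) * chunk))
             for i in range(WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    out = np.ndarray((WORKERS,), dtype=np.float64, buffer=out_shm.buf)
    print("Total:", out.sum())           # expected: 1000000.0

    data_shm.close(); data_shm.unlink()
    out_shm.close(); out_shm.unlink()
```

The point of the analogy is only that the data never moves: the workers come to the memory, rather than the memory being copied out to each worker.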

Whether the HPE computer is the solution of the future remains to be seen. It does seem clear, however, that this is at least one very viable model for speeding up data analysis to meet current computational demands.
