New chip design could make some data operations 1,000 times faster

A British company has created a model of a computer memory chip that could make some data operations 1,000 times faster and so easy to program that school pupils could write the code.

The new design, by the Cambridge-based start-up Blueshift Memory, is part of a wave of radical changes in computer memory emerging from companies around the world. Together, these could significantly improve computers’ capacity to meet our need to process more and more data, for tasks such as drug discovery, DNA research, artificial intelligence design, and the management of future smart cities.

Computer scientists have, for some time, warned that even the most powerful supercomputers are struggling to keep pace with society’s spiralling data demands. A major reason for this is that computer memory chips (usually RAM chips) are not improving as quickly as their central processing units (CPUs).

This creates a “tailback” when high-performance computers perform large-scale operations, like database searches with millions of possible outcomes. Data stacks up in a slow-moving queue between the CPU and the less-efficient memory, reducing the speed at which even powerful computers can deliver results.

Blueshift’s new design reorganises the way in which a memory chip handles these operations, so that it delivers data to the CPU much faster. With it, operations could take minutes, or even seconds, rather than hours.

The chip’s designers stress that this is only part of a solution that will require greater collaboration between the various companies working on the “data tailback” challenge.

But Blueshift’s initial model has nevertheless yielded impressive results. The company, a small team of computer engineers with extensive experience in high-performance computing, has built an FPGA card that emulates the chip’s effects. Simulations run on this card suggest that the chip could, for example, make searches of the vast databases used to match fragments of DNA, whether in scientific research or in criminal investigations, 100 times faster.

Further tests showed that the algorithms used in weather forecasting and climate change modelling could also run 100 times faster using the chip. It could also make searches on Google as much as 1,000 times faster.

These huge improvements are possible because the chip is structured to store data in readiness for this type of operation.

Blueshift’s team analysed and categorised thousands of algorithms used by companies to solve complex data problems. They then designed the chip so that it arranges data in preparation for these tasks, an approach that could be combined with any type of memory cell technology.
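To see why pre-arranging data helps, consider a minimal sketch in C. It is purely illustrative and assumes nothing about Blueshift’s actual design: finding one record among a million can take up to a million pointer-chasing steps when the records are scattered and unordered, but only around twenty steps when they are stored contiguously in sorted order.

    /* Illustrative only -- not Blueshift's design. The same million
     * records arranged two ways: scattered in an unordered linked list
     * (every lookup may chase pointers across the whole of memory)
     * versus packed into a sorted, contiguous array (a binary search
     * needs only about log2(N), roughly 20, accesses).
     * Error handling omitted for brevity. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    /* Scattered layout: each record lives wherever malloc put it. */
    struct node {
        long key;
        struct node *next;
    };

    /* Linear scan of the list: up to N pointer-chasing accesses. */
    static int list_contains(const struct node *head, long key)
    {
        for (; head != NULL; head = head->next)
            if (head->key == key)
                return 1;
        return 0;
    }

    /* Binary search of the sorted array: about log2(N) accesses. */
    static int array_contains(const long *a, long n, long key)
    {
        long lo = 0, hi = n - 1;
        while (lo <= hi) {
            long mid = lo + (hi - lo) / 2;
            if (a[mid] == key)
                return 1;
            if (a[mid] < key)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return 0;
    }

    int main(void)
    {
        long *sorted = malloc(N * sizeof *sorted);  /* contiguous layout */
        struct node *head = NULL;
        for (long i = 0; i < N; i++) {
            sorted[i] = i;                          /* already in order */
            struct node *n = malloc(sizeof *n);     /* scattered layout */
            n->key = i;
            n->next = head;
            head = n;
        }
        /* Key 0 sits at the far end of the list but is found almost
         * immediately in the sorted array. */
        printf("list:  %d\n", list_contains(head, 0));
        printf("array: %d\n", array_contains(sorted, N, 0));
        return 0;
    }

Blueshift’s chip applies this kind of principle in hardware, so that the memory itself keeps data laid out ready for the task that will consume it.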

Peter Marosan, Chief Technology Officer at Blueshift Memory, said: “Imagine if you are a taxi driver but the town where you work is always changing, people are constantly swapping houses, and the shops and services are forever disappearing and reappearing in different places. That’s similar to the way in which data is organised in existing chips. Our design is the equivalent of replacing that with a stable, structured town where you already know where everything is and can find it much more quickly. It makes everything faster, easier and more effective.”

Blueshift’s design could also make it much easier to program some data operations, because it would remove the need to include complex instructions about how to handle the vast quantities of data involved. “It would make some big data programming as straightforward as the basic data searches that computing students learn to write in high school,” Marosan said.
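As a rough illustration of what those complex instructions look like today, the C sketch below contrasts two versions of the same search. The record format, buffer size and function names are hypothetical, not from Blueshift; the point is simply that once a dataset no longer fits in memory, the programmer currently has to manage the data movement explicitly.

    /* Hypothetical example: the record format, buffer size and function
     * names are illustrative and do not come from Blueshift. */
    #include <stdio.h>
    #include <stdlib.h>

    #define BUF_RECORDS 65536   /* stream the file 64K records at a time */

    /* The "high school" version: a plain scan over data that is assumed
     * to fit in memory all at once. */
    long count_matches_simple(const long *data, size_t n, long target)
    {
        long count = 0;
        for (size_t i = 0; i < n; i++)
            if (data[i] == target)
                count++;
        return count;
    }

    /* What big-data code often looks like today: the same search, but
     * the programmer must explicitly stream the dataset through a small
     * buffer because it is too large to hold in memory at once. */
    long count_matches_streamed(const char *path, long target)
    {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;
        long *buf = malloc(BUF_RECORDS * sizeof *buf);
        if (buf == NULL) {
            fclose(f);
            return -1;
        }
        long count = 0;
        size_t got;
        while ((got = fread(buf, sizeof *buf, BUF_RECORDS, f)) > 0)
            for (size_t i = 0; i < got; i++)
                if (buf[i] == target)
                    count++;
        free(buf);
        fclose(f);
        return count;
    }

If the memory itself took over that bookkeeping, Marosan’s argument goes, the first, simpler version would be all a programmer needed to write.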

Computer scientists have traditionally tried to devise workarounds for the data tailback (also known by specialists as the “von Neumann bottleneck”) rather than solutions. But the performance gap between CPUs and memory chips is now growing by about 50% every year, while data demands are skyrocketing. Many leading figures in the computing technology field have suggested that memory and data handling need to be redesigned for the big data age.

Significantly, two of the seven finalists in the data category at this year’s Hello Tomorrow conference – a major international gathering focusing on future technologies – presented new approaches to the challenge: Blueshift Memory and the US firm Memcomputing.

Others, like Upmem in France, are designing processing-in-memory (PIM) chips, which could address the tailback by adding processors to the memory itself.

Blueshift is now seeking funding to create a full first iteration of its chip, which will cost considerably more than the prototype emulator. The company says that changing the way in which computer memory works could improve many data operations – not just big data or database searches.

For example, the artificial intelligence in autonomous vehicles, like driverless cars, needs to process huge quantities of data quickly to make decisions. And in a future in which objects and people are likely to be closely connected in smart cities, fast, real-time data processing on a large scale will be essential to manage traffic flows, utility supplies, and even evacuation procedures in times of danger.

Better memory chips could also accelerate the more data-hungry aspects of home computing. Blueshift’s prototype makes rendering films in video editing software 10 times faster, for example. And it could improve the processing speeds of virtual reality headsets by a factor of up to 1,000.

Marosan has tested the prototype on his home PC. “It gave me one of the fastest home computers in the world!” he said. “That doesn’t make much difference for everyday tasks like sending emails. But it could speed up some scientific tests, and my kids are hoping to use it to play esports soon!”


