In-memory Computing and Its Impact on Software Performance

Hardware has evolved at a steady pace: components keep shrinking and the distances signals have to travel keep getting shorter, which has consistently delivered higher performance. Software, by contrast, has rested on the same basic principles for decades; still, as with everything else, the software industry is making improvements.

In-memory platforms with in-memory databases are emerging and becoming the new standard. Unlike traditional computing, in-memory computing keeps both the application and the database in main memory. Accessing data stored in memory is much faster, up to 10,000 times faster than in a traditional, disk-based system. This minimizes the need for performance tuning and maintenance by developers and system integrators and provides a much faster experience for the end user. It also allows data to be analyzed in real time, enabling real-time reporting and decision-making for businesses. According to Gartner, deploying business intelligence tools on a traditional system can take as much as 17 months, and many vendors therefore choose in-memory technology to shorten implementation time.
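
To get a feel for the difference in practice, the minimal sketch below uses Python's built-in sqlite3 module, which can run the same database either entirely in memory (":memory:") or backed by a file on disk. It only illustrates the direction of the effect, not the 10,000x figure, since the actual ratio depends on hardware, caching, and workload; the file name demo.db is just a placeholder.

```python
import sqlite3
import time


def timed_inserts(conn, n=1_000):
    """Insert n rows, committing each one, and return the elapsed time.

    Committing per row forces the disk-backed database to persist every
    write, which is where the cost of leaving main memory shows up.
    """
    conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, payload TEXT)")
    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO item VALUES (?, ?)", (i, f"payload-{i}"))
        conn.commit()
    return time.perf_counter() - start


# Database held entirely in the process's RAM.
in_memory = sqlite3.connect(":memory:")

# Conventional disk-backed database file (placeholder name).
on_disk = sqlite3.connect("demo.db")

print(f"in-memory: {timed_inserts(in_memory):.3f} s")
print(f"on disk:   {timed_inserts(on_disk):.3f} s")
```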


Since in-memory databases use the server’s main memory as the primary storage location, they reduce not only latency but also footprint and cost. Traditional systems keep a lot of redundant data, because a copy of the data has to be created for each component added to the system, such as an additional database, server, integrator, or middleware layer brought in to increase volume or performance. Every component you add makes the system more complex. By continuously adding hardware, you get:

  • never-ending hardware costs
  • a growing need for physical space to house the hardware
  • continuous work on integration and maintenance.


The more hardware you add, the more copies of the data are created and the farther the data has to travel, which over time degrades performance. This creates a slippery slope of added hardware, rising cost, and falling performance. With an in-memory system, since the data is stored in memory, a single data transfer suffices, and the traditional system’s signaling overhead and the performance loss that comes with it disappear. Because of this, one server can handle a workload that would have required the traditional system to use 100 servers and databases. In-memory databases are designed from the start to be more streamlined, with reducing memory consumption and CPU cycles as explicit optimization goals.
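
To make the single-copy idea more concrete, here is a minimal, hypothetical sketch in Python: the application works directly on the stored objects instead of shipping copies of records between an application tier and a separate database tier. Real in-memory databases add transactions, indexing, and durability on top of this; the Order type and the helper functions are illustrative only, not any particular product’s API.

```python
from dataclasses import dataclass


@dataclass
class Order:
    id: int
    customer: str
    total: float


# A minimal in-process "table": the application and the database share the
# same objects in the same address space, so there is no extra copy to
# serialize, transfer, and deserialize between tiers.
orders: dict[int, Order] = {}


def place_order(order_id: int, customer: str, total: float) -> Order:
    order = Order(order_id, customer, total)
    orders[order_id] = order  # the stored record is the live object
    return order


def apply_discount(order_id: int, percent: float) -> None:
    # The update touches the single in-memory copy directly; nothing is
    # copied out to a separate database server and written back.
    orders[order_id].total *= 1 - percent / 100


place_order(1, "Alice", 200.0)
apply_discount(1, 10)
print(orders[1])  # Order(id=1, customer='Alice', total=180.0)
```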
