In-Memory Processing: The concept that will arm you with multiple performance advantages

I have seen curiosity around in-memory technologies soar over the last few years. This, I believe, is partly due to the wide availability of inexpensive 64-bit systems. But that can’t be all there is to it, right? One can’t ignore the numerous performance advantages that in-memory computing provides over traditional disk-based processing. It is a capability that only select vendors, such as SAP and Oracle, offer today. SAP’s High-Performance Analytic Appliance (HANA) has been sharpening its in-memory computing capability for quite some time now, and the offering can even house complete analytic systems in memory!

“Tuning” into the in-memory concept

New technologies have a way of keeping tech followers on their toes, fueled by the desire to learn more. Right now, though, I think in-memory database technology takes the cake in this respect for one simple reason: it represents a growing trend in the field of Database Management Systems (DBMS). The idea of handling data in system memory is not new, but the techniques that today’s DBMSes apply to it are.

I’ve always been psyched about memory optimization, especially since I started working on Oracle. There, optimization means playing with the System Global Area (SGA) and tuning it for maximum database performance. By “tune” I mean getting the most-used data to be read from the database’s memory instead of from your hard disk. A well-tuned database will therefore have almost 95% of the requested data already sitting in memory.
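To make that “tune” step concrete, here is a minimal sketch of how you might check the classic buffer cache hit ratio from Python. It assumes the python-oracledb driver, a monitoring user with SELECT access to V$SYSSTAT, and placeholder connection details; the ratio is only a rough indicator, so treat this as an illustration rather than a tuning tool.

```python
# Minimal sketch: compute Oracle's classic buffer cache hit ratio.
# Assumes the python-oracledb driver and SELECT access to V$SYSSTAT;
# connection details below are placeholders.
import oracledb

def buffer_cache_hit_ratio(user: str, password: str, dsn: str) -> float:
    """Return 1.0 when every requested block was already in memory."""
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT name, value
                  FROM v$sysstat
                 WHERE name IN ('db block gets', 'consistent gets', 'physical reads')
            """)
            stats = dict(cur.fetchall())
    logical_reads = stats['db block gets'] + stats['consistent gets']
    return 1 - stats['physical reads'] / logical_reads

# A well-tuned OLTP database should report something close to 0.95 or higher.
# print(buffer_cache_hit_ratio("perf_monitor", "secret", "dbhost/orclpdb1"))
```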

Memory optimization — at the grassroots level

Generally, there are two types of databases: Online Transaction Processing (OLTP) and Decision Support Systems (DSS). OLTP touches a few rows across many columns and works best with a row format. DSS touches a few columns across many rows and works best with a column format. Imagine the result if we could combine the strengths of both in a single product! The toy sketch below shows why each workload favors its own layout.
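Here is a plain-Python sketch (not vendor code) of the same table held in both layouts, and of the access pattern each workload drives.

```python
# Toy illustration: the same table stored once in row format and once in column format.
row_store = [                        # OLTP-friendly: one record is one contiguous unit
    {"id": 1, "customer": "Acme",   "amount": 120.0, "region": "EU"},
    {"id": 2, "customer": "Birch",  "amount": 75.5,  "region": "US"},
    {"id": 3, "customer": "Cobalt", "amount": 310.0, "region": "EU"},
]

column_store = {                     # DSS-friendly: one column is one contiguous unit
    "id":       [1, 2, 3],
    "customer": ["Acme", "Birch", "Cobalt"],
    "amount":   [120.0, 75.5, 310.0],
    "region":   ["EU", "US", "EU"],
}

# OLTP-style request: everything about order 2 -- one row, all columns.
order_2 = next(r for r in row_store if r["id"] == 2)

# DSS-style request: total amount for the EU region -- two columns, every row.
eu_total = sum(amount for amount, region in
               zip(column_store["amount"], column_store["region"])
               if region == "EU")

print(order_2)   # {'id': 2, 'customer': 'Birch', 'amount': 75.5, 'region': 'US'}
print(eu_total)  # 430.0
```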

Usually, data is stored on disk in row format only. Whenever a read or write request is made, the data is loaded into the traditional row store. Each time data is requested for pure read operations, however, it is also pulled into a new in-memory column store, which invariably involves converting it from row to column format.
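The sketch below illustrates that populate-on-read idea with a hypothetical `Table` class: rows stay in the traditional row store, and the first analytic read builds an in-memory column cache by converting the rows to column format. It is purely illustrative, not how any particular vendor implements the conversion.

```python
# Sketch of populate-on-read: the first analytic read converts the row store
# into an in-memory column cache. Hypothetical class, not a vendor API.
from collections import defaultdict

def to_column_format(rows):
    """Convert a list of row dicts into a dict of column lists."""
    columns = defaultdict(list)
    for row in rows:
        for name, value in row.items():
            columns[name].append(value)
    return dict(columns)

class Table:
    def __init__(self, rows):
        self._rows = list(rows)        # traditional row store
        self._column_cache = None      # in-memory column store, built lazily

    def read_columns(self):
        """Serve analytic reads from the column cache, building it on first use."""
        if self._column_cache is None:
            self._column_cache = to_column_format(self._rows)
        return self._column_cache

orders = Table([{"id": 1, "amount": 120.0}, {"id": 2, "amount": 75.5}])
print(sum(orders.read_columns()["amount"]))   # 195.5, scanned column-wise
```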

Therefore, every time a transaction commits inserts, updates, or deletes, the new data shows up instantly and concurrently in the row store as well as the in-memory column store. This assures us of one thing: both store formats stay transactionally consistent. And the most important point here? This approach doesn’t require additional memory!
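To round out the picture, here is a toy model of that consistency guarantee: a committed change is applied to the row store and the in-memory column store in the same step, so a query against either format sees the same data. Again, this is an illustration, not HANA or Oracle internals.

```python
# Toy dual-format table: a commit applies pending changes to both the row store
# and the in-memory column store, keeping the two formats consistent.
class DualFormatTable:
    def __init__(self, column_names):
        self.rows = []                                        # row store
        self.columns = {name: [] for name in column_names}    # in-memory column store
        self._pending = []                                    # uncommitted changes

    def insert(self, row):
        self._pending.append(dict(row))                       # staged, not yet visible

    def commit(self):
        for row in self._pending:
            self.rows.append(row)                             # visible to OLTP-style reads
            for name, value in row.items():
                self.columns[name].append(value)              # visible to DSS-style reads
        self._pending.clear()

t = DualFormatTable(["id", "amount"])
t.insert({"id": 1, "amount": 99.0})
t.commit()
assert len(t.rows) == len(t.columns["id"]) == 1               # both formats agree
```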

A giant leap toward enhanced data processing and analysis

So, is the in-memory concept for you? Only if faster decision-making is one of your priorities. And don’t take the other benefits it offers lightly either:

- With in-memory computing, you don’t need to worry about altering or replacing applications, as all existing applications run unaltered in the new landscape

- It eliminates the need to rework the database, as the in-memory capability can be integrated without a database migration or table reorganization

- In-memory places no limits on database or table sizes; it is designed to work with databases and systems of all sizes

- And, you guessed it, with in-memory you don’t have to switch to another landscape; it can run on your existing hardware.

This is just the tip of the iceberg. There’s a lot more that can be discussed about in-memory computing. After all, businesses have managed to release brand new product lines in a span of just a few days – all thanks to this concept.

I intend to write another post that takes off from where this one ends. Till then, I look forward to hearing your thoughts on in-memory. Type your comments below, and I’ll respond to them. And, don’t forget to subscribe to the blog to stay updated whenever there are new posts from me!

 

Author: Samadhan Pawar

Samadhan Pawar is part of the SAP HANA/Oracle DBA Consulting team at Knack Systems. Samadhan has over 9 years of experience, specializing in DB optimization, with a focus on SAP NetWeaver BW migration to the HANA database via the BW-PCA and DMO methodologies. He has extensive experience handling the SAP HANA DB. Samadhan has implemented advanced memory compression, involving databases of several terabytes, for multiple large-scale clients. He is an active tech blogger on various industry-recognized platforms, with more than 5 lakh hits on his blog posts from his 500+ followers. [Certifications: Oracle 9i/10g/11g OCP & RAC 11g OCP, SAP Certification for System Administration (Oracle DB) with SAP NetWeaver 7.31 (C_TADM51_731) & SAP Certified Technology Associate - SAP HANA (C_HANATEC_142)]
