How to use In-Memory Caching in your .NET Core Web Application
About In-Memory Caching
In-memory caching is a technique used in computer systems to improve performance by storing frequently accessed data in memory, rather than retrieving it from slower data storage devices such as hard disk drives or solid-state drives. In-memory caching is commonly used in web applications, databases, and other systems where fast access to data is critical.
In-memory caching works by storing data in a cache, which is a temporary storage area in memory that is used to hold frequently accessed data. When a user requests data, the system checks the cache first to see if the data is already stored there. If the data is in the cache, the system can retrieve it quickly and efficiently. If the data is not in the cache, the system retrieves it from the slower data storage device and adds it to the cache for future requests.
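As a rough illustration of this lookup flow, here is a minimal cache-aside sketch using a plain in-memory dictionary; Product and LoadProductFromDatabase are hypothetical placeholders for your own type and slow data source:
using System.Collections.Concurrent;

public class ProductLookup
{
    // Acts as the in-memory cache for this sketch.
    private readonly ConcurrentDictionary<string, Product> _cache = new();

    public Product GetProduct(string id)
    {
        // Cache hit: return the stored copy without touching slow storage.
        if (_cache.TryGetValue(id, out var product))
            return product;

        // Cache miss: load from the slower data store and remember it for next time.
        product = LoadProductFromDatabase(id);
        _cache[id] = product;
        return product;
    }

    // Placeholder for a real database or API call.
    private Product LoadProductFromDatabase(string id) => new Product();
}

public class Product { } // placeholder type for the sketch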
Use Cases of In-Memory Caching
In-memory caching is useful in scenarios where data is frequently read but changes infrequently. Some common use cases of in-memory caching in .NET Core web applications include:
- Page Output Caching - Caching the output of a web page to avoid re-generating the page every time it is requested. This can greatly reduce the load on the server and improve the response time of the web application.
- Data Caching - Caching frequently accessed data like product catalogs, user profiles, or session data to reduce database or API calls.
- Partial View Caching - Caching partial views of a web page like navigation menus, sidebars, or footers to improve page load times.
- Query Result Caching - Caching the results of complex, frequently executed queries to avoid running the same query repeatedly (see the sketch after this list).
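For data and query-result caching, IMemoryCache also offers a GetOrCreateAsync helper that combines the lookup and the refresh in a single call. A minimal sketch, where _productRepository and its GetTopProductsAsync() method are hypothetical stand-ins for an expensive query:
public async Task<List<Product>> GetCachedTopProductsAsync()
{
    // Returns the cached result, or runs the query once and caches it on a miss.
    var products = await _memoryCache.GetOrCreateAsync("TopProducts", async entry =>
    {
        entry.SlidingExpiration = TimeSpan.FromMinutes(10); // duration chosen arbitrarily for the sketch
        return await _productRepository.GetTopProductsAsync(); // hypothetical, expensive query
    });
    return products ?? new List<Product>();
}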
Implementation of In-Memory Caching
To use in-memory caching in a .NET Core web application, you need to follow these steps:
Install the Microsoft.Extensions.Caching.Memory NuGet package.
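The IMemoryCache service also needs to be registered with dependency injection (some framework setups already register it for you). A minimal sketch of Program.cs, assuming the .NET 6+ minimal hosting model:
var builder = WebApplication.CreateBuilder(args);

// Registers IMemoryCache with the dependency injection container.
builder.Services.AddMemoryCache();

builder.Services.AddControllersWithViews();

var app = builder.Build();
app.MapDefaultControllerRoute();
app.Run();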
Inject the IMemoryCache service in the controller or service where you want to use caching.
using Microsoft.Extensions.Caching.Memory;

public class MyController : Controller
{
    private readonly IMemoryCache _memoryCache;

    // IMemoryCache is supplied by dependency injection.
    public MyController(IMemoryCache memoryCache)
    {
        _memoryCache = memoryCache;
    }
}
The following example methods show how to store and retrieve cached data using this mechanism:
public void SetCachedCategories()
{
    // Only refresh the cache if the entry is missing or has expired.
    if (!_memoryCache.TryGetValue("Categories", out IEnumerable<Category>? categories))
    {
        categories = new List<Category>(); // Get fresh categories from the server
        var cacheEntryOptions = new MemoryCacheEntryOptions()
            .SetSlidingExpiration(TimeSpan.FromHours(1));
        _memoryCache.Set("Categories", categories, cacheEntryOptions);
    }
}

public IEnumerable<Category> GetCachedCategories()
{
    var categories = new List<Category>();
    // Copy the cached entry into a new list; return an empty list on a miss.
    if (_memoryCache.TryGetValue("Categories", out IEnumerable<Category>? cachedCategories) && cachedCategories != null)
    {
        categories.AddRange(cachedCategories);
    }
    return categories;
}
These two methods are responsible for setting and retrieving cached categories from an in-memory cache.
The SetCachedCategories() method first checks whether the categories are already in the memory cache using the _memoryCache.TryGetValue() method. If they are not found, the method fetches fresh categories from the server and adds them to the cache using the _memoryCache.Set() method. The MemoryCacheEntryOptions object specifies the expiration policy for the cache entry. In this case, a sliding expiration of one hour is set, which means the entry expires only after it has gone one hour without being accessed; every access resets the timer.
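Note that a sliding expiration alone can keep an entry alive indefinitely if it is accessed often enough. A common pattern is to pair it with an absolute expiration as an upper bound; a sketch, with the durations chosen arbitrarily:
var cacheEntryOptions = new MemoryCacheEntryOptions()
    // Evict the entry once it has gone one hour without being accessed...
    .SetSlidingExpiration(TimeSpan.FromHours(1))
    // ...but never keep it for more than six hours in total.
    .SetAbsoluteExpiration(TimeSpan.FromHours(6));

_memoryCache.Set("Categories", categories, cacheEntryOptions);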
The GetCachedCategories() method retrieves the categories from the memory cache using the _memoryCache.TryGetValue() method. If the categories are found in the cache, they are added to a new list and returned. If the categories are not found in the cache, an empty list is returned.
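A controller action could then use these two methods roughly as follows (a sketch; the action and view names are illustrative):
public IActionResult Index()
{
    // Populate the cache if it is empty or expired, then read from it.
    SetCachedCategories();
    IEnumerable<Category> categories = GetCachedCategories();
    return View(categories);
}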
Differences between In-Memory Caching and Distributed Caching
Distributed caching is another caching technique that stores data in a shared cache across multiple servers. In contrast, in-memory caching stores data in the memory of a single server.
The main differences between in-memory caching and distributed caching are:
- Scalability - Distributed caching can be scaled across multiple servers, making it more suitable for large applications with high traffic. In-memory caching is limited to a single server.
- Consistency - Distributed caching ensures consistency across multiple servers, whereas in-memory caching may not be consistent across multiple instances of the application running on different servers.
- Fault Tolerance - Distributed caching provides fault tolerance by replicating data across multiple servers, whereas in-memory caching can result in data loss if the server goes down.
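For comparison, moving to a distributed cache mostly changes the service registration and the injected interface. A minimal sketch using Redis through the Microsoft.Extensions.Caching.StackExchangeRedis package (the connection string and class names are illustrative):
// Program.cs: register a Redis-backed IDistributedCache
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // illustrative Redis connection string
});

// Consumers inject IDistributedCache (Microsoft.Extensions.Caching.Distributed) instead of IMemoryCache.
public class CategoryNameCache
{
    private readonly IDistributedCache _cache;

    public CategoryNameCache(IDistributedCache cache) => _cache = cache;

    // Values are serialized (here as strings) because the cache lives outside the process.
    public Task SaveNameAsync(string name) =>
        _cache.SetStringAsync("CategoryName", name, new DistributedCacheEntryOptions
        {
            SlidingExpiration = TimeSpan.FromHours(1)
        });

    public Task<string?> GetNameAsync() => _cache.GetStringAsync("CategoryName");
}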
Conclusions
In-memory caching is a useful technique to improve the performance of .NET Core web applications by reducing database or API calls. It is easy to implement and provides a significant performance boost in scenarios where data is frequently accessed but rarely changes. However, in-memory caching has some limitations, and for large applications with high traffic, distributed caching may be a better option.