In today’s AI-driven workloads, read performance matters a great deal. There are three main ways to ensure good read performance:
1. Serve reads from the kernel page cache, avoiding the latency of fetching data from the storage device.
2. Proactively prefetch data from the storage device in the background, so that the next read request finds the data already in the page cache.
3. Keep discarding pages that are no longer needed, making room to prefetch more data from storage into the page cache.
To support these strategies, the kernel provides an on-demand readahead mechanism, and user applications can advise the kernel about their expected I/O pattern. For example, the madvise() system call accepts flags such as MADV_SEQUENTIAL (“expect page references in sequential order”) and MADV_WILLNEED (“expect access in the near future”), hinting that it may be worthwhile to read some pages ahead.
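As a minimal sketch of issuing this advice (assuming Linux and Python 3.8+, where the mmap module exposes madvise() and the MADV_* constants), an application can hint a sequential scan over a memory-mapped file like this:

```python
import mmap
import os
import tempfile

# Scratch file of four pages, purely for illustration.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (mmap.PAGESIZE * 4))

with mmap.mmap(fd, 0, prot=mmap.PROT_READ) as mm:
    mm.madvise(mmap.MADV_SEQUENTIAL)  # expect sequential page references
    mm.madvise(mmap.MADV_WILLNEED)    # expect access soon: start readahead
    # Touch the first byte of each page; with the advice above, later
    # pages should already be in the page cache by the time we reach them.
    total = sum(mm[i] for i in range(0, len(mm), mmap.PAGESIZE))

os.close(fd)
os.remove(path)
```

The same hints are available to C programs via madvise(2), and posix_fadvise(2) offers equivalent advice for plain file descriptors without a mapping.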
To verify that this advice works as expected, we need a way to check whether prefetching actually yields more cache hits and whether unused pages are being discarded. eBPF is an ideal mechanism for this: we can attach probes and tracepoints at interesting places in the kernel, record data and state in BPF storage such as maps, and then display those maps in user space in some graphical form.
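eBPF probes give per-event detail, but a quick user-space cross-check of page-cache residency is also possible with the mincore(2) syscall. A sketch, assuming Linux and calling libc via ctypes (the helper name `resident_pages` is ours, not a kernel API):

```python
import ctypes
import ctypes.util
import mmap
import os
import tempfile

def resident_pages(path):
    """Return (resident, total) page counts for a file via mincore(2)."""
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    size = os.path.getsize(path)
    npages = (size + mmap.PAGESIZE - 1) // mmap.PAGESIZE
    with open(path, "rb") as f:
        # MAP_PRIVATE with PROT_WRITE gives ctypes a writable buffer so we
        # can take the mapping's address; the file itself is not modified.
        with mmap.mmap(f.fileno(), size, mmap.MAP_PRIVATE,
                       mmap.PROT_READ | mmap.PROT_WRITE) as mm:
            vec = (ctypes.c_ubyte * npages)()
            addr = ctypes.addressof(ctypes.c_char.from_buffer(mm))
            if libc.mincore(ctypes.c_void_p(addr),
                            ctypes.c_size_t(size), vec) != 0:
                raise OSError(ctypes.get_errno(), "mincore failed")
            # Bit 0 of each vector byte marks the page as resident.
            return sum(b & 1 for b in vec), npages

# Demo: a freshly written file should be largely resident in the cache.
fd, path = tempfile.mkstemp()
os.write(fd, b"y" * (mmap.PAGESIZE * 8))
os.close(fd)
resident, total = resident_pages(path)
print(f"{resident}/{total} pages resident")
os.remove(path)
```

Sampling residency before and after issuing MADV_WILLNEED is one cheap way to confirm that prefetching populated the cache, complementing the kernel-side view from eBPF.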
This talk will focus on how to use eBPF to gather information from different functions in the kernel, and how user-space techniques such as histograms and graphs help us decide whether our advice to the kernel is working as expected. Data collected via eBPF can also be sent to a Large Language Model (LLM) for contextual analysis using Retrieval-Augmented Generation (RAG), generating insights, identifying patterns, and offering recommendations based on the input data. For example, it could identify correlations between different performance metrics, highlight anomalies or trends, and suggest adjustments to further optimise read performance.
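As one example of the user-space side, per-event values pulled from a BPF map (say, read latencies) are often summarised as a power-of-two histogram, in the style of bpftrace's hist(). A sketch with made-up sample values:

```python
def log2_hist(samples):
    """Bucket positive integer samples into power-of-two buckets:
    bucket b holds values in [2**b, 2**(b+1) - 1]."""
    buckets = {}
    for v in samples:
        b = 0
        while (1 << (b + 1)) <= v:
            b += 1
        buckets[b] = buckets.get(b, 0) + 1
    return buckets

def render(buckets, width=40):
    """ASCII-art rendering, one bar per bucket, scaled to the peak."""
    peak = max(buckets.values())
    lines = []
    for b in sorted(buckets):
        lo, hi = 1 << b, (1 << (b + 1)) - 1
        bar = "@" * (buckets[b] * width // peak)
        lines.append(f"[{lo}, {hi}] {buckets[b]:4d} |{bar}")
    return "\n".join(lines)

# Hypothetical read latencies in microseconds, for illustration only.
samples = [3, 5, 6, 9, 14, 33, 40, 70, 130]
print(render(log2_hist(samples)))
```

In a real deployment the samples would come from a BPF map populated by kernel probes rather than a hard-coded list, and the same bucketed data could be fed onward to the LLM-based analysis described above.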