Kubernetes has become the new application server for modern cloud-native applications. Efficiency is one of its top benefits, thanks to containers' leaner resource footprint and Kubernetes' ability to pack them onto the underlying hosts while ensuring performance isolation among applications.
The reality can be different: many teams adopting Kubernetes experience high infrastructure costs and performance issues, with applications failing to meet latency SLOs. Why is that? Kubernetes requires applications to be carefully configured to achieve these benefits, and that is not easy, due to Kubernetes' tricky resource management mechanisms and the large number of microservices a performance engineer must tune in a cloud-native application.
In this talk, we cover:
- The key Kubernetes resource management concepts that you need to know from a performance and reliability perspective
- How AI can help performance engineers and SREs strike the right balance among low costs, optimal application performance, and reliability — automating the sizing of container resource settings, driven by your application SLOs
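For context, the "container resource settings" above refer to the CPU and memory requests and limits in a Pod spec, which drive both Kubernetes scheduling (bin-packing) and runtime isolation (CPU throttling, OOM kills). A minimal illustrative fragment — the values shown are placeholders, not recommendations:

```yaml
# Illustrative Pod spec fragment: requests drive scheduling,
# limits drive throttling/OOM behavior. Values are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: example-service
spec:
  containers:
    - name: app
      image: example/app:1.0   # placeholder image
      resources:
        requests:
          cpu: "250m"          # guaranteed share used for bin-packing
          memory: "256Mi"
        limits:
          cpu: "500m"          # exceeding this causes CPU throttling
          memory: "512Mi"      # exceeding this causes an OOM kill
```

Getting these four numbers right, per container and across dozens of microservices, is exactly the tuning problem the talk addresses.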