At KubeCon EU in Barcelona today, Microsoft made a slew of Kubernetes announcements. The company launched the Service Mesh Interface (SMI) specification, its new community project for collaboration around Service Mesh infrastructure, and also released Helm 3 alpha, version 1.0 of its Visual Studio Code Kubernetes extension, and version 1.0 of the Virtual Kubelet.
SMI stands out as the biggest announcement. The open project was started by Microsoft just a few months ago. The list of partners has quickly grown to include Aspen Mesh, Buoyant, Docker, HashiCorp, Kinvolk, Pivotal, Rancher, Red Hat, Solo.io, VMware, and Weaveworks.
Network pipes were traditionally dumb: encryption, compression, and identity logic lived inside the endpoints. Then microservices, containers, and orchestration systems like Kubernetes came along. Service mesh technology makes the network smarter by pushing this logic into the network itself, controlled through management APIs. As a result, service meshes have taken off with application developers. Unfortunately, developers who adopt mesh technology today face a lot of fragmentation: they must choose a provider, write directly to that provider's APIs, and get locked into a complex surface area. Without generic interfaces, developers lose portability and flexibility, and the broader ecosystem loses out on innovation.
“There’s extensibility just all over the system,” Brendan Burns, Kubernetes cofounder and Microsoft distinguished engineer, told VentureBeat. “And service mesh stood out as this place where there wasn’t any extensibility and there wasn’t any generic interface. It wasn’t following the pattern of the way that the rest of the Kubernetes architecture had been designed, or had been retrofitted, in all honesty. In some places we didn’t put plugins there initially. But then we came along and we fixed it. We added pluggable interfaces. And so I think this is a really important moment. It’s an acknowledgement that service mesh is critically important.”
Service Mesh Interface
By providing a generic API interface, SMI frees developers to use service mesh capabilities without being tied to a particular implementation. They can experiment and change implementations without having to change their applications.
SMI follows in the footsteps of existing Kubernetes resources, like Ingress and Network Policy, which also do not provide an implementation. The SMI specification instead defines a set of common APIs that allow mesh providers to deliver their own implementations. This means mesh providers can either use SMI APIs directly or build operators to translate SMI to native APIs.
“Some of the people who we’re partnering with are excited because they’re not going to build a service mesh but they want to build tooling that targets a service mesh,” Burns explained. “Maybe it’s a better way of doing experiments, or rollouts of software. And if they have to choose a particular service mesh’s API, that tool has less reach. We’re in the business of making generic infrastructure, and different customers will make different choices. Someone might choose to use Consul by HashiCorp, someone might choose Istio.”
“If a tool can’t target a generic interface, then it’s very hard for people to build great experiences on top of something like a service mesh,” Burns continued. “And so by defining and taking advantage effectively of our position as a leader in the Kubernetes space, to help define that interface, that benefits all of our customers. They can gain access to all that tooling that the community has built.”
SMI’s initial specifications cover the three service mesh features that, according to Microsoft, its enterprise customers request most:
- Policy — apply policies like identity and transport encryption across services
- Telemetry — capture key metrics like error rate and latency between services
- Management — shift and weight traffic between different services
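The traffic management capability, for instance, is expressed as an ordinary Kubernetes object that any SMI-conformant mesh (or an operator translating SMI to a mesh’s native API) can act on. Here is a minimal sketch of an SMI TrafficSplit resource; the service names are hypothetical, and the exact `apiVersion` and weight semantics may differ across revisions of the alpha spec:

```yaml
# Sketch of an SMI TrafficSplit shifting traffic between two versions
# of a service. Names are hypothetical; field names follow the early
# alpha spec and may change as the specification evolves.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  # The root service that clients address.
  service: checkout
  # Weighted backends: roughly 90% of traffic to v1, 10% to v2.
  backends:
  - service: checkout-v1
    weight: 90
  - service: checkout-v2
    weight: 10
```

Because the resource is generic, tooling built on top of it, such as a progressive-rollout controller, works unchanged whether the underlying mesh is Istio, Linkerd, or Consul.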
Microsoft and its partners expect to evolve SMI APIs over time and extend the current specification with new capabilities.
“It’s very early days,” Gabe Monroy, program manager of Azure Container compute, told VentureBeat. “And having this interface layer is a really critical component to making service mesh technology friendly and approachable to enterprises. We look at our job as taking this stack of cloud native technology and making it palatable to our enterprise customers.”
Helm 3

Helm, the de facto standard for packaging and deploying Kubernetes applications, is becoming a modern application package manager. Microsoft has released the first alpha of Helm 3, which Burns says “represents a nearly complete refactoring of the Helm package manager.”
Because the Helm project is nearly as old as Kubernetes, its design predates many advancements in Kubernetes, like CustomResourceDefinitions and even Kubernetes RBAC. “Helm was created in a much earlier time in sort of the timeline of Kubernetes. It’s had to evolve quite a bit,” Monroy noted. “So there’s a lot of new features in Helm 3, particularly around security, that are very much anticipated by the broad group of folks that are using Helm in a lot of production scenarios today.”
Helm 2 added features that loosened its integration with Kubernetes: managing things like RBAC for charts and resources became complicated and disconnected from Kubernetes itself. Helm 3 aims to fix this by replacing Helm’s custom APIs for charts and deployments with CustomResourceDefinitions. You can now use the kubectl command line to interact with your Helm charts, and use Kubernetes-native RBAC to limit which resources users can access and create.
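To illustrate the RBAC point, release access could then be scoped with an ordinary Kubernetes Role and RoleBinding. The API group and resource names below are hypothetical, since the Helm 3 object model was still in alpha at the time; only the RBAC machinery itself is standard Kubernetes:

```yaml
# Hypothetical Role granting read-only access to Helm release objects
# in one namespace (the "helm.sh"/"releases" names are illustrative,
# not a confirmed Helm 3 API).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: helm-release-viewer
rules:
- apiGroups: ["helm.sh"]
  resources: ["releases"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: helm-release-viewer-binding
subjects:
- kind: User
  name: dev-user            # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: helm-release-viewer
  apiGroup: rbac.authorization.k8s.io
```

The design win is that the cluster’s existing authorization layer governs Helm state, rather than a separate, Helm-specific permission system as in Helm 2.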
Visual Studio Code and Virtual Kubelet
Visual Studio Code’s open source Kubernetes extension has hit version 1.0. The extension brings native Kubernetes integration to Visual Studio Code. The milestone means it is fully supported for production management of Kubernetes clusters. Additionally, Microsoft has added an extensibility API that makes it possible for anyone to build their own integration experiences on top of Microsoft’s baseline Kubernetes integration.
“What it really means is that we wanted to say that we thought that we achieved the sort of MVP set of features,” Burns explained. “We released an API for it so that people can take our extension and then start building on top of it. And so part of the other reason to hit a 1.0 was to say, ‘Hey, if you want to take a dependency on us, we’re going to give you good guarantees that we’ll maintain compatibility going forward.’ We’re not done, but we’ve hit a good point. Also, we’re ready for people to start building on top and allow for an ecosystem of people to build on top of the code that we’ve laid down there.”
The Virtual Kubelet has also hit the 1.0 milestone. It “represents a unique integration of Kubernetes and serverless container technologies, like Azure Container Instances,” Burns explained. In short, developers can skip managing an operating system while still using Kubernetes for orchestration.
“We developed it in the context of the Cloud Native Computing Foundation, where it’s a sandbox project,” Burns noted. “With 1.0, we’re saying ‘It’s ready.’ We think we’ve done all the work that we need in order for people to take production level dependencies on this project.”