Docker, the startup responsible for accelerating the container technology revolution, is making several announcements at its DockerCon conference in San Francisco this week — and several other companies are taking the event as an opportunity to make announcements, too.
In case you haven’t been following along, here’s a roundup of all the news coming out of the conference.
A startup called Portworx is bringing the concept of software-defined storage to increasingly popular Linux container technology for packaging up application code. Today, in conjunction with the 2015 DockerCon conference, Portworx announced that it has raised $8.5 million in initial funding.
VMware today announced a technology preview of Project Bonneville, a runtime that will allow companies that have invested in VMware’s vSphere virtualization software to run applications packaged up in Docker containers inside of virtual machines.
Project Bonneville allows engineers to choose containers from the Docker Hub and then run them inside virtual machines, thanks to a feature in vSphere called Instant Clone.
“The pure approach Bonneville takes is that the container is a VM, and the VM is a container,” VMware senior software engineer Ben Corrie wrote in a blog post about the project. “There is no distinction, no encapsulation, and no in-guest virtualization. All of the necessary container infrastructure is outside of the VM in the container host.”
Google today broadened the availability of a couple of its cloud services for working with applications packaged up in containers. The Google Container Engine for deploying and managing containers on Google’s cloud infrastructure, until now available in alpha, is now in beta. And the Google Container Registry for privately storing Docker container images, previously in beta, is now generally available.
Google has made a few tweaks to Container Engine, which is built on Kubernetes, the Google-led open-source container management software that can deploy containers onto multiple public clouds. For one thing, Google will now only update the version of Kubernetes running inside of Container Engine when you run a command. And you can turn on Google Cloud Logging to track the activity of a cluster “with a single checkbox,” Google product manager Eric Han wrote in a blog post on the news.
And now there are prices for Container Engine: 15 cents per hour for “standard” clusters with as many as 100 virtual machine nodes and managed uptime. Google won’t charge anything for “basic” clusters with up to five virtual machine nodes and no managed uptime.
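Under the hood, a Container Engine cluster is driven by ordinary Kubernetes manifests. As a rough illustration (the pod and image names here are stand-ins, not from the announcement), a minimal pod definition that a cluster could run looks something like this:

```yaml
# Hypothetical minimal Kubernetes pod manifest for a Container Engine cluster.
# "hello-web" and the nginx image are placeholders for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web
spec:
  containers:
  - name: web
    image: nginx        # container image pulled onto a cluster node
    ports:
    - containerPort: 80  # port the container listens on
```

Submitting a manifest like this to the cluster (with the `kubectl` command-line tool) is the kind of operation Container Engine manages on Google's infrastructure.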
Docker and CoreOS today are jointly announcing that they’re working with several major tech companies on a new Linux Foundation initiative called the Open Container Project. The idea is for everyone — users and vendors — to agree on a standard container runtime and image format and prevent unnecessary fragmentation.
So this is what it’s come to — two hot startups putting aside their disagreements for a moment to agree on some standards. Other vendors participating in the project include — take a deep breath — Amazon Web Services, Apcera, Cisco, CoreOS, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Intel, Joyent, Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat, and VMware.
Docker, the startup that has popularized open-source technology for packaging up applications into containers, is introducing new software-defined networking (SDN) capabilities as a result of its SocketPlane acquisition.
Docker is taking an ecosystem approach here. The new system accepts third-party plug-ins from SDN vendors like Cisco, Midokura, Nuage Networks, VMware, and Weaveworks.
The core Docker engine project is getting the native SDN and plug-in enhancements. Docker’s Compose open-source project will allow users to determine which containers should be connected together for applications. And with Docker’s Swarm open-source project, “the multi-container application can be immediately networked across multiple hosts and can communicate seamlessly across a cluster of machines with a single command,” according to a statement.
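Compose expresses those container connections declaratively in a YAML file. As a hedged sketch in the Compose format of the era (the `web` and `redis` services here are invented for illustration, not taken from Docker's announcement), linking two containers looks like this:

```yaml
# Hypothetical Compose file: a two-container application.
# Service names and images are placeholders.
web:
  image: my-web-app    # assumed application image
  ports:
    - "5000:5000"      # expose the app on the host
  links:
    - redis            # connect this container to the redis container
redis:
  image: redis
```

With the new networking support, a definition like this is what Swarm can then spread across a cluster of machines rather than a single host.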
IBM today announced that it is the first company to resell Docker Trusted Registry, a piece of software for on-premises data centers from hot enterprise startup Docker.
Docker itself announced today the availability of Docker Trusted Registry version 1.1 at the DockerCon conference in San Francisco. Now IBM will sell it to customers. Docker Trusted Registry integrates with IBM’s UrbanCode and PureApplication System tools, IBM noted in a statement.
IBM today is also announcing the general availability of IBM Containers, a cloud service for deploying applications in containers on the IBM Bluemix platform-as-a-service (PaaS) cloud. IBM launched IBM Containers in beta in December.
Docker is beginning to give developers access to the latest features of its trendy open-source container technology through new experimental releases of the software.
“If you’re familiar with the Chrome Canary release, it’s the same thing,” said Docker cofounder and chief technology officer Solomon Hykes at the DockerCon conference in San Francisco today. There’s also some resemblance to Mozilla’s Firefox Nightly browser releases, Microsoft’s Windows Insider program, and Google’s Chrome OS operating system.
But Docker’s new experimental releases are updated every day — they’re “not some development branch,” Hykes said.
Docker today announced the general availability of the Docker Trusted Registry, a piece of software that companies can use to securely store their container images.
The Docker Trusted Registry, which first launched in beta in February, includes management features and commercial support. It comes with Active Directory and LDAP support and audit logging.
Companies can run it in public clouds or in on-premises data centers. IBM has already said it would resell the software, and the Microsoft Azure Marketplace already carries it. Amazon Web Services will offer it, too.
Microsoft is making its widely used software even more capable of working with the trendy open-source Docker container technology. During a presentation at the DockerCon conference in San Francisco today, Mark Russinovich, chief technology officer of the Microsoft Azure public cloud, used Docker to deploy an application on both the Windows Server and Linux operating systems.
The front end was written in ASP.NET, the middle tier in Node.js, and the back end on top of MongoDB, Russinovich said. It was “kind of ironic,” he said, but he pushed the ASP.NET code to a Linux container, and he pushed the Node code to a container running Windows Server Technical Preview 2.
In addition to showcasing the new cross-platform container capabilities, today Russinovich also set up a continuous-integration system for testing and running containers with the open-source Docker Compose software — right from within Visual Studio Online.
Beyond that, the Azure Marketplace now features a Docker Trusted Registry virtual-machine image and lets people deploy container-based applications from Docker Hub images right onto Azure. Russinovich even showed how you could deploy multi-container applications, like one featuring both WordPress and MySQL.
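A multi-container application of that shape is typically described in a single Compose-style file. This is a hypothetical sketch, not the file Russinovich demonstrated; the service names, password, and port mapping are assumptions for illustration:

```yaml
# Hypothetical WordPress + MySQL application definition.
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example   # placeholder credential
wordpress:
  image: wordpress
  links:
    - db:mysql       # let WordPress reach the database container as "mysql"
  ports:
    - "8080:80"      # serve the site on host port 8080
```

Deploying a description like this as a unit is the multi-container scenario the Azure Marketplace integration targets.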
Note: We’ll update this post with more DockerCon coverage throughout the course of the conference.