—Update 21st March, vBrownbag episode on AWS that these slides are from, is now posted on youtube: https://www.youtube.com/watch?v=u8rWI5tuSq8 —
— Update 15th March ~5pm CET, added some extra info and clarified some points—
More details regarding VMware Cloud on AWS are starting to come out of VMware. Tonight I attended an awesome #vBrownbag webinar on #VMWonAWS, hosted by Chris Williams (@mistwire) and Ariel Sanchez ( @arielsanchezmor).
Presenting were Adam Osterholt (@osterholta), Eric Hardcastle (@CloudGuyVMware) and Paul Gifford (@cloudcanuck).
Here are some of the slides and highlights that stood out for me. Information is not NDA and permission was given to repost slides.
VMware Cross-Cloud Architecture. A nice slide that summarises the VMware strategy going forward. Expect VMware cloud to pop up in more places, like IBM Cloud. More info about VMware cloud strategy here
Important to note here is that this is a complete service offering, meaning it's fully licensed. You do not need to bring your own licenses to the table. So you get the full benefit of technologies like vSAN and NSX as part of the offering.
Skillsets.. this is a huge selling point. Many native cloud deployments require your admins to know AWS or cloud-native specific tools and automation scripting languages. VMware Cloud on AWS (VMWonAWS) removes that barrier-to-entry completely. If you can administer a VMware-based cloud stack today, you can administer VMware Cloud on AWS.
You have access to AWS sites around the world to host VMWonAWS. Note, however, that because these are vSphere clusters on bare metal, you are bound in certain ways to the location where you instantiate your VMware environment.
Initial rollout will be in Oregon, followed by an EMEA location sometime around mid-2017. (From announcement to GA in about a year.. not bad!!)
With the recent S3 outage in mind, I asked specifically about things like stretched clusters and other advanced high-availability features inside AWS; these will not initially be part of the offering. However, you can always move your VMs off and onto VMWonAWS via x-vMotion. More on that later.
VMWonAWS will use customized HTML interfaces throughout. No flash here! 🙂
But if you are a bit of a masochist and you like the flash/flex client, it will be available to you anyway.
The frontend provisioning component will include its own API interface. What you see below is a mockup and subject to change.
Administering your cluster uses a custom and locked-down version of the already available HTML5 client.
It's important to note here that VMware will administer and upgrade their software inside these environments themselves. They will keep n-1 backward compatibility, but if you have a lot of integration talking to this environment, operationally you will have to keep your own tooling up to date. Think of vRA/vRO workflows and other automation you might have talking to your VMWonAWS instances. This may be a challenge for customers.
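Since VMware controls the upgrade cadence and only promises an n-1 compatibility window, it is worth building a version guard into any automation that talks to the service. The sketch below illustrates the idea; the version numbers and the `check_compatibility()` helper are illustrative, not a real VMware API.

```python
# Sketch: guard vRA/vRO-style automation against version drift.
# VMware upgrades the VMWonAWS side on their own schedule and keeps
# n-1 backward compatibility, so tooling should check the service
# version before running workflows. All names here are hypothetical.

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '6.5' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def check_compatibility(automation_version: str, service_version: str) -> bool:
    """Return True if our tooling is at most one minor release behind
    the service (the 'n-1' window), False if it has fallen behind."""
    ours = parse_version(automation_version)
    theirs = parse_version(service_version)
    # Same major release: allow a gap of at most one minor version.
    if ours[0] == theirs[0]:
        return theirs[1] - ours[1] <= 1
    # Different major release: treat as outside the support window.
    return False

print(check_compatibility("6.5", "6.6"))  # within the n-1 window
print(check_compatibility("6.5", "6.7"))  # fallen behind: update your workflows
```

If the check fails, the right move is to pause the workflow and update your integrations rather than run against an interface that may have changed under you.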
Demonstrated below is a feature unique to VMWonAWS: the ability to resize your entire cluster on the fly.
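To make the resize idea concrete, here is a rough sketch of what a request to grow or shrink a cluster might look like. The payload fields and the `build_resize_request()` helper are hypothetical; the real provisioning interface was still a mockup when these slides were shown.

```python
# Sketch: a hypothetical resize request against the VMWonAWS
# provisioning API. Field names are made up for illustration;
# the real interface is subject to change.
import json

def build_resize_request(cluster_id: str, host_count: int) -> str:
    """Build a JSON body asking the service to scale a cluster
    to the given number of bare-metal hosts."""
    if host_count < 1:
        raise ValueError("a cluster needs at least one host")
    return json.dumps({
        "cluster": cluster_id,
        "desired_hosts": host_count,
    })

# Grow a (hypothetical) cluster to 8 hosts:
print(build_resize_request("cluster-01", 8))
```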
Again, above screenshots are mockups/work-in-progress
Your VMware environment is neatly wrapped up in an NSX Edge gateway, which you cannot touch. Inside your environment, however, you are able to provision your own NSX networks, manage the DFW, Edges, etc., with all the functionality they offer. Initially, though, NSX API access will be limited or unavailable, so it may be hard to automate NSX actions out of the gate.
The Virtual Private Cloud (VPC) you get is divided into 2 pools of resources. Management functions are separated from compute.
Remember that all of this is bare-metal, managed and patched by VMware directly.
VMware manages the VPC with their components in it. You get access to it via your own VPC, and the two are then linked together.
They give you a snazzy web frontend with its own API to do the basic connectivity configuration and provisioning.
So how do you connect up your new VMWonAWS instance with your on-premises infrastructure?
End-to-end, you are bridging via Edges.. but there is obviously a little more involved. Here are the high-level steps that the customer and VMware/Amazon take to hook it all up.
The thing to remember here is that your traffic to the VMware VPC is routed through your customer VPC. It 'fronts' the VMware VPC.
Link the vCenters together, and now you can use x-vmotion to move VMs back and forth. And remember, no NSX license is required on-prem to do this.
If you already have NSX, you can of course stretch your NSX networks across. This allows live x-vMotions (cross-vCenter vMotion).
If you do not have NSX on-premises, you will deploy a locked-down NSX Edge for bridging, but vMotions would be 'cold'.
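The decision logic the slides describe boils down to one question: do you have NSX on-premises? The sketch below captures it; the `plan_cross_site_move()` helper is illustrative, not a VMware API.

```python
# Sketch: live vs. cold migration between on-premises and VMWonAWS.
# With NSX on-prem you can stretch networks and do a live
# cross-vCenter vMotion; without it, a locked-down NSX Edge bridges
# the sites and migrations are cold. Helper name is hypothetical.

def plan_cross_site_move(has_onprem_nsx: bool) -> dict:
    """Return the bridging component and migration mode available."""
    if has_onprem_nsx:
        return {"bridge": "stretched NSX networks", "vmotion": "live"}
    return {"bridge": "locked-down NSX Edge", "vmotion": "cold"}

print(plan_cross_site_move(True))   # NSX on-prem: live x-vMotion
print(plan_cross_site_move(False))  # Edge-only bridging: cold migration
```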
Encryption will be available between the Edge endpoints. No details on this yet.
As standard NSX Edges are used on both ends, you can do things like NAT, so you can have overlapping IP spaces if you so choose. That is not something native AWS VPCs allow you to do.
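A quick illustration of why NAT at the Edges makes overlapping address space workable: if both sites use the same subnet internally, each Edge can present its site behind a distinct translated range, so neither side ever sees the overlap. The subnets below are made up for the example.

```python
# Sketch: 1:1 NAT lets two sites both run 10.0.0.0/24 internally,
# something native AWS VPC peering would reject. Each NSX Edge
# presents its site behind a distinct translated range.
from ipaddress import IPv4Address, IPv4Network

def nat_translate(addr: str, inside: str, outside: str) -> str:
    """Map an address in the 'inside' network to the address at the
    same offset in the 'outside' network (1:1 NAT)."""
    inside_net = IPv4Network(inside)
    outside_net = IPv4Network(outside)
    offset = int(IPv4Address(addr)) - int(inside_net.network_address)
    return str(IPv4Address(int(outside_net.network_address) + offset))

# The on-prem Edge NATs its 10.0.0.0/24 to 192.168.1.0/24, so the
# VMWonAWS side can keep its own 10.0.0.0/24 without conflict:
print(nat_translate("10.0.0.25", "10.0.0.0/24", "192.168.1.0/24"))  # 192.168.1.25
```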
Because you always have your own native AWS VPC, you can leverage any other native AWS service.
But you can do some crazy-cool things too, that will be familiar to native AWS users. You can, for example, leverage regional native AWS services such as S3 from inside VMWonAWS VMs. These resources are connected inside AWS, using Amazon's own internal routing, so this kind of traffic does not need to go back out over the internet.
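From the VM's point of view there is nothing exotic about this: it just talks to the regional service endpoint, and AWS internal routing keeps the traffic off the internet. The sketch below only constructs the virtual-hosted-style S3 URL; the bucket and object names are made up for illustration.

```python
# Sketch: a VM inside VMWonAWS reaching regional S3 simply addresses
# that region's endpoint. Bucket and key below are hypothetical.

def s3_object_url(bucket: str, key: str, region: str) -> str:
    """Virtual-hosted-style S3 URL for an object in a given region."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# A VM in the Oregon (us-west-2) instance fetching from an Oregon
# bucket stays on Amazon's internal network end to end:
print(s3_object_url("my-backup-bucket", "vm-export.ova", "us-west-2"))
```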
VMs inside VMWonAWS can make use of the Amazon breakout for their internet connectivity. Or you can backflow it through your own on-premises internet.
Some additional notes on APIs:
There is no backup function built into this, so you are expected to back up your own VMs hosted inside VMWonAWS. To facilitate this, the VADP API for backups is available to leverage, as per normal.
Some notes on vSAN:
vSAN is used as the underlying storage, all-flash. VMware does not yet know what the default setup will be in terms of FTT (Failures To Tolerate) level or dedupe, but you will have control over most of it, to decide for yourself what you want.
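Since you will get to pick your own FTT level, it helps to know what that choice costs. Assuming the standard vSAN RAID-1 mirroring, FTT=n keeps n+1 replicas of each object and needs 2n+1 hosts, so the setting directly drives both usable capacity and minimum cluster size. A minimal sketch of the arithmetic:

```python
# Sketch: raw-capacity and host-count cost of the vSAN FTT setting,
# assuming RAID-1 mirroring. FTT=n stores n+1 replicas and needs
# 2n+1 hosts (replicas plus witness components).

def raw_capacity_needed(usable_gb: float, ftt: int) -> float:
    """Raw capacity consumed to store usable_gb with RAID-1 mirroring."""
    return usable_gb * (ftt + 1)

def min_hosts(ftt: int) -> int:
    """Minimum host count for RAID-1 at a given FTT level."""
    return 2 * ftt + 1

print(raw_capacity_needed(100, 1))  # FTT=1: 200.0 GB raw per 100 GB usable
print(min_hosts(2))                 # FTT=2: 5 hosts minimum
```

Dedupe would claw some of that overhead back, but as noted above, the defaults for the service were not yet decided.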