Archive for the ‘vplex’ Category

Solaris 11 on ESX – Serialized Disk IO bug causes extreme performance degradation #vexpert

Wednesday, March 29th, 2017

In this post, I discuss a newly found performance bug in Solaris 11 that has, ever since Solaris 11 came out in 2011, severely hampered ESX VM disk I/O performance when using the LSI Logic SAS controller. I show how we identified the issue, which tools we used, and what the bug actually is.

In Short:

A bug in the 'mpt_sas' disk controller driver in Solaris 11, the driver that drives the VMware virtual machine 'LSI Logic SAS' controller emulation, was causing disk I/O to be handled only up to 3 I/Os at a time.

This causes severe disk I/O performance degradation on all versions of Solaris 11 up to the patched version. It was observed on Solaris 11 VMs on vSphere 5.5u2, but has not been tested on any other vSphere version.

The issue was identified by myself and Valentin Bondzio of VMware GSS, together with our customer, and eventually Oracle. Tools used: iostat, esxtop, vscsiStats
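For anyone who wants to see the serialization for themselves, this is roughly how it shows up with those tools; a sketch only, with placeholder device names and world-group IDs:

# Inside the Solaris 11 guest: watch the per-device queues. With the bug present,
# the active queue column ('actv') stays pinned around 3 even under heavy load.
iostat -xnz 5

# On the ESXi host: per-VM outstanding-I/O histogram via vscsiStats
# (the ACTV column in esxtop's disk views tells the same story).
vscsiStats -l                                   # find the VM's worldGroupID
vscsiStats -s -w <worldGroupID>                 # start collection (placeholder ID)
vscsiStats -p outstandingIOs -w <worldGroupID>  # histogram clusters at the very low end
vscsiStats -x -w <worldGroupID>                 # stop collection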

The issue was patched in patch 25485763 for Solaris 11.3.17.5.0, and in Solaris 12.
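If you want to check whether a given Solaris 11 guest already carries the fix, the installed SRU level can be read from the 'entire' package; a quick sketch, the exact output format varies per release:

# On the Solaris 11 guest: show the installed Support Repository Update level.
pkg info entire
# The Version/Summary line shows the human-readable level, e.g. "Oracle Solaris 11.3.17.5.0";
# anything at or above that level should contain the fix referenced above.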

Bug Report ( Bug 24764515 : Tagged command queuing disabled for SCSI-2 and SPC targets  ) : https://pastebin.com/DhAgVp7s

Link to Oracle Internal

KB Article: (Solaris 11 guest on VMware ESXI submit only one disk I/O at a time (Doc ID 2238101.1) ) : https://pastebin.com/hwhwiLRM

Link to Oracle Internal

————————

TLDR below:

(more…)

EMC VPLEX VS2 to VS6 seamless, non-disruptive hardware upgrade

Tuesday, February 28th, 2017

This post describes our experience with upgrading from EMC VPLEX VS2 to VS6 hardware, in a seamless non-disruptive fashion.

EMC VPLEX is a powerful storage virtualization product and I have had several years of experience with it in an active-active metro-storage-cluster deployment. I am a big fan. It's rock-solid, very intuitive to use and very reliable if set up correctly. Check out these 2 videos to learn what it does.

Around August 2016, EMC released VPLEX VS6, the next generation of hardware for the VPLEX platform. It is, generally speaking, twice as fast, utilizing the latest Intel chipset and 16Gb FC, with an InfiniBand interconnect between the directors and a boatload of extra cache.

One of our customers recently wanted their VS2 hardware either scaled out or replaced by VS6 for performance reasons. Going for a hardware replacement was more cost-effective than scaling out by adding more VS2 engines.

Impressively, the in-place hardware upgrade could be done non-disruptively. This is achievable through the clever way the GeoSynchrony firmware is 'loosely coupled' from the hardware. The VS6 hardware is a significant upgrade over the VS2, yet both are able to run the same GeoSynchrony firmware version without the various VPLEX components being aware of the fact. This is especially useful if you have VPLEX deployed in a metro-cluster.
So to prepare for a seamless upgrade from VS2 to VS6, your VS2 hardware needs to be on a VS6-capable GeoSynchrony release: the exact same release as the VS6 hardware you will be transitioning to.
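A quick way to verify that both clusters are already on the same GeoSynchrony release before scheduling anything; a sketch with a placeholder hostname, and 'version' is the VPlexcli command I remember for this, so double-check it on your release:

# SSH to each management server and check the running GeoSynchrony release.
ssh service@vplex-mgmt-a        # placeholder hostname for cluster A
vplexcli                        # log in to the VPlexcli shell
version                         # shows the software versions of the directors and management server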

VPLEX consists of 'engines' that each house 2 'directors'. You can think of these as broadly analogous to the storage processors in an array, with the main difference being that they are active-active: they share a cache and are able to handle I/O for the same LUNs simultaneously. If you add another engine with 2 extra directors, you now have 4 directors all servicing the same workload and load-balancing the work.

Essentially the directors form a cluster together, directly over their local-com interconnect, or, in a metro-cluster, also partially over Fibre Channel across the WAN. Because they are decoupled from the management plane, they can continue operating even when the management plane is temporarily unavailable. It also means that, as long as their firmware is the same, they can form a cluster together even though the underlying hardware is a generation apart, without any of them noticing. This is what makes the non-disruptive upgrade possible, even in a metro-cluster configuration. It also means that you can upgrade one side of the VPLEX metro-cluster separately, a day or even a week apart from the other side, which makes planning an upgrade more flexible. There is a caveat, however: a possible slight performance hit on your wan-com replication between the VS2 and VS6 sides, so you don't want to stay in that state for too long.

 

VPLEX VS2 hardware. 1 engine consisting of 2 directors.


VS6 hardware. Directors are now stacked on top of each other. 

Because all directors running the same firmware are essentially equivalent, even though they might be of different hardware generations, you can almost predict what the non-disruptive hardware upgrade looks like. It is more or less the same procedure as if you were to replace a defective director. The only difference is that the old VS2 hardware is temporarily cross-connected to the new VS6 hardware, which enables the new VS6 directors to take over I/O and replication from the old directors one at a time.

The only thing the frontend hosts and the backend storage ever notice is temporarily losing half their storage paths. So naturally, you need to have the multipathing software on your hosts in order. This will most likely be EMC PowerPath, which handles this scenario flawlessly.
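It pays to keep an eye on the path state from a couple of representative hosts while each director identity moves over; with PowerPath, a minimal check could be along these lines:

# On a host running EMC PowerPath: show every device and its paths.
# During the swap you should see half the paths go dead and then return
# to alive as the new director takes over the old WWPNs.
powermt display dev=all
powermt display                 # quick summary of live/dead path counts per HBA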

The most impressive trick of this transfer, however, is that the new directors seamlessly take over the entire 'identity' of the old directors. This includes everything unique about the director, crucially including the WWNs. This is important because transferring the WWNs is the very thing that makes the transition seamless. It does of course require you to have 'soft' (WWN-based) zoning in place in the case of FC, as a director port WWN will suddenly, in the space of about a minute, vanish from one switch port and pop up on another. But if you have your zoning set up correctly, you do not have to touch your switches at all.
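To make that concrete, this is what a WWN-based zone looks like on a Brocade switch; the alias, zone and config names are made up for illustration, and the WWPN is a placeholder:

# Zone on WWPNs, not on switch ports. Because the VS6 director adopts the old
# director's WWPNs, this zone keeps working even when the WWPN shows up on a
# different switch port after the swap.
alicreate "vplex_c1_dirA_fe00", "50:00:14:42:aa:bb:cc:00"
zonecreate "z_esx01_hba0__vplex_c1_dirA_fe00", "esx01_hba0; vplex_c1_dirA_fe00"
cfgadd "prod_fabric_cfg", "z_esx01_hba0__vplex_c1_dirA_fe00"
cfgenable "prod_fabric_cfg"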

And yes, that does mean you need double cabling, at least temporarily. The old VS2 is of course connected to your i/o switches, and the new VS6 will need to be connected simultaneously on all its ports, during the upgrade process.

So have fun cabling those 😉

That might be a bit of a hassle, but it's a small price to pay for such a smooth and seamless transition.

To enable the old VS2 hardware (which uses FC to talk to its partner director over local-com) to talk to the new VS6 directors (which use InfiniBand) during the migration, it is necessary to temporarily insert an extra FC module into the VS6 directors. During a specific step in the upgrade process, the VS2 is connected to the VS6, and for a brief period your I/O is being served by a combination of a VS2 and a VS6 director that are sharing volumes and cache with each other. This is a neat trick.

Inserting the temp IO modules:

As a final step, the old VS2 management server settings are imported into the new redundant VS6 management modules. In the VS6, these management modules are integrated into the director chassis and act in an active-passive failover mode. This is a great improvement over the single, non-redundant VS2 management server, with its single power supply (!)

 

Old Management Server:

New management modules:

The new management server hardware completely takes over the identity and settings of the old management server. This even includes the IP address, customer cluster names and the cluster serial numbers: the VS6 will adopt the serial numbers of your VS2 hardware. This is important to know from an EMC support point of view and may confuse people.

The great advantage is that all local settings and accounts, and all monitoring tools and alerting mechanisms, work flawlessly with the new hardware. For example, we have a PowerShell script that uses the API to check the health status; this script worked immediately with the VS6 without having to change anything. VIPR SRM only needed a restart of the VPLEX collector, after which it continued collecting without any changes. The only things I have found that did not get transferred were the SNMP trap targets.
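To give an idea of what such an API-based check can look like, something along these lines against the management server's REST interface works; the endpoint, headers and output format here are from memory, so verify them against the VPLEX REST API guide for your GeoSynchrony release:

# Ask the management server for the cluster objects; a healthy response is a
# quick sanity check that the API (and the cluster behind it) is reachable.
# Hostname and password are placeholders.
curl -sk \
  -H "Username: service" \
  -H "Password: <service-password>" \
  -H "Accept: application/json;format=1" \
  https://vplex-mgmt-a/vplex/clusters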
After the upgrade, the benefit of the new VS6 hardware was immediately noticeable. Here is a graph of average aggregate director CPU use, from EMC VIPR SRM:

As this kind of product is fundamental to your storage layer, its stability and reliability, especially during maintenance work like firmware and hardware upgrades, are paramount, and EMC takes this seriously. Unlike other EMC products such as VNX, you are not expected, or indeed allowed, to update this hardware yourself unless you are a certified partner. Changes that need to be made to your VPLEX platform go through a part of EMC called the 'Remote Pro-active' team.

There is a process that has to be followed, which involves getting them involved early, a round of pre-validation health checks, and the hands-on execution of the maintenance job, either remotely via WebEx or on site by local EMC engineers if that is required. A hardware upgrade will always require onsite personnel, so make sure they deliver pizza to the datacenter! If an upgrade goes smoothly, expect it to take 4 to 5 hours. That includes all the final pre-checks, hardware work, cabling, transfer of the management identity to the VS6, and decommissioning of the VS2 hardware.

In the end the upgrade was a great success, and our customer had zero impact. Pretty impressive for a complete hardware replacement of such a vital part of your storage infra.

Finally, here is the text of the September 2016 VPLEX Uptime bulletin, with some additional information about the upgrade requirements. Be aware that this may be outdated; please consult EMC support for the latest info.

https://support.emc.com/docu79516_Uptime-Bulletin:-VPLEX-Edition-Volume-23,-September-2016.pdf?language=en_US

There is an EMC community thread where people have been leaving their experiences with the upgrade, have a look here: https://community.emc.com/message/969664

 

Metro-Cluster SDRS datastore tagging and the EnforceStorageProfiles advanced setting

Monday, October 31st, 2016

When doing vSphere Metro Storage Cluster, on the shared storage layer you often have a 'fallback' side: the side where the LUN will become authoritative for reading and writing in case of a site failure or a split brain.

This makes VM storage placement on the correct Datastores rather important from an availability perspective.

Up until now, you had to manage intelligent VM storage-placement decisions yourself. And if you wanted to align 'compute' (where the VM is running) with where its storage falls back, you also had to take care of that yourself through some kind of automation or scripting.

This problem would be compounded if you also wanted to logically group these storage ‘sides’ into SDRS clusters, which you often do, especially if you have many datastores.

In the past few years, mostly in regard to vSAN and vVOLs, VMware has been pushing the use of Storage Policies, getting us to think in terms of a policy-based model of VM storage management.

Wouldn’t it be great if you could leverage the new Storage Policies, to take care of your metro-cluster datastore placement? For example, by tagging datastores, and building a policy around that.

And what if you could get SDRS to automate and enforce these policy-based placement rules?

The EnforceStorageProfiles advanced setting, introduced in vCenter Server 6.0.0b, seemed to promise exactly this.

However, while messing around with Storage Policies, tagging, and in particular that EnforceStorageProfiles advanced setting, I encountered some inconsistent and unexpected GUI and enforcement behavior that shows we are just not quite there yet.

This post details my findings from the lab.

————————————-

The summary is as follows:

It appears that if you mix different self-tagged storage capabilities inside a storage-cluster, the cluster itself will not pass the Storage Policy compatibility check on any policy that checks for a tag that is not applied to all datastores in that cluster.

Only if all the datastores inside the storage-cluster share the same tag will the cluster report itself as compatible.

This is despite applying that tag to the storage-cluster object itself! Adding or not adding these tags to the storage-cluster object appears to have no discernible effect on the Storage Compatibility check of the policy.

This contradicts the stated purpose and potential usefulness of the EnforceStorageProfiles advanced setting.

However, individual datastores inside the storage-cluster will correctly be detected as compliant or non-compliant based on custom tags.

The failure of the compatibility check on the storage-cluster will not stop you from provisioning a new VM to that datastore cluster, but the compatibility warnings you get only apply to one or more underlying non-compatible datastores. It does not tell you which ones, though, so that can be confusing.

The advanced setting EnforceStorageProfiles will affect storage-cluster initial placement recommendations, but will not result in SDRS movements on its own when the value is set to 1 (soft enforcement).
Even EnforceStorageProfiles=2 (hard enforcement) does not make SDRS automatically move a VM's storage from non-compatible to compatible datastores in the datastore-cluster. It seems to only affect initial placement. This appears to contradict the way the setting is described to function.

However, even soft enforcement will stop you from manually moving a VM to a non-compliant datastore within that storage-cluster, even though you specified an SDRS override for that VM. That is the kind of behavior one would only expect with a 'hard' enforcement, so again, this is unexpected.

This may mean that while SDRS will not, of its own accord, move an already-placed VM to correct storage after the fact, it will at least prevent the VM from moving to incorrect storage.

Summed up, that means that as long as you get your initial placement right, EnforceStorageProfiles will make sure the VM's storage at least stays there. But it won't leverage SDRS to fix placements, as the setting appears to have been meant to.

 

Now for the details and examples:
————–

I have 4 Datastores in my SDRS cluster:

I have applied various tags to these datastore objects. For example, the datastores starting with ‘store1’ received the following tags:

The datastores starting with ‘store2’ received the following tags:

The crucial difference here is the tag “Equalogic Store 1” vs “Equalogic Store 2”.

In this default situation, the SDRS Datastore Cluster itself has no storage tags applied at all.
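For reference, the same tag setup can be scripted instead of clicked together in the Web Client; a rough sketch with govc, where the category name and datastore inventory paths are placeholders from my lab, and the exact flags should be double-checked against govc's built-in help:

# Create a tag category and the two 'side' tags, then attach them to datastores.
govc tags.category.create -d "Metro storage side" StorageSide
govc tags.create -c StorageSide "Equalogic Store 1"
govc tags.create -c StorageSide "Equalogic Store 2"
govc tags.attach "Equalogic Store 1" /DC/datastore/store1-ds01   # placeholder path
govc tags.attach "Equalogic Store 2" /DC/datastore/store2-ds01   # placeholder path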

 

I have created a Storage Policy that is meant to match datastores with the “Equalogic Store 2” tag. The idea here is that I can assign this policy to VMs, so that inside that datastore cluster those VMs will always reside on ‘Store2’ datastores and not on ‘Store1’ datastores.

I plan to have SDRS (soft) enforce this placement using the advanced option EnforceStorageProfiles=1, introduced in vSphere vCenter Server 6.0.0b

 

 

The match for ‘Equalogic Store 2’  is the only rule in this policy.

 

But when I check the storage compatibility, neither the datastores that have that tag nor the datastore cluster object shows up under the ‘Compatible’ listing.

However, under the ‘Incompatible’ listing, the Cluster shows up as follows:

Notice how the SDRS Cluster object appears to have ‘inherited’ the error conditions of both datastores that do not have the tag.

This was unexpected.

In the available documentation for VM Storage Policies, I have not found any reference to SDRS Clusters directly. My main reference here is Chapter 20 of the vsphere-esxi-vcenter-server-601-storage-guide.  Throughout the documentation, only datastore objects themselves are referenced.

The end of chapter 8 of the vsphere-esxi-vcenter-server-601-storage-guide, ‘Storage DRS Integration with Storage Profiles’, explains the use of the EnforceStorageProfiles advanced setting.

 

 

The odd thing is, the documentation for the PbmPlacementSolver data object (which I assume the Storage Policy placement checker is utilizing) even explicitly states that storage PODs (SDRS Clusters) are a valid ‘hub’ to check against.

But it seems as if the ‘hub’ in the case of being an SDRS cluster, will produce an error for every underlying datastore that throws an error. In cases of mixed-capability datastores in a single SDRS Cluster, depending on how specific your storage profile is, chances are it will always throw an error.

So this seems contradictory! How can we have an SDRS advanced setting that operates on a per-datastore basis, while the cluster object will likely always stop the compatibility check from succeeding?

 

As a possible workaround for these errors, I tried applying tags to the SDRS Cluster itself. I applied both “Equalogic Store 1” and “Equalogic Store 2” to the SDRS Cluster object, the idea being that the compatibility check of the storage policy would then never fail to match on either of these tags.

 

 

But alas, it seems to ignore tags you set on the SDRS Cluster itself.

Anyway, it's throwing an error, but is it really stopping SDRS from taking the policy into account, or not?

 

Testing SDRS Behaviors

 

 

Provision a new VM

Selecting the SDRS Cluster, it throws the compatibility warning twice, without telling you which underlying datastores it is warning you about. That is not very useful!

However, it will deploy the VM without any issue.

When we check the VM, we can see that it has indeed placed the VM on a compatible Datastore

 

 

Manual Storage-vmotion to non-compliant datastore

In order to force a specific target datastore inside an SDRS Cluster, check the ‘Disable Storage DRS for this virtual machine’ checkbox. This will create an override rule for this VM specifically.  When we do this and select a non-compatible datastore, it throws a warning, as we might expect. But as I have chosen to override SDRS recommendations completely here, I expect to be able to just power on through this selection.

 

No such luck. Remember that EnforceStorageProfiles is still set to only ‘1’, which is a soft enforcement. This is not the kind of behavior I expect from a ‘soft’ enforcement, especially not when I just specified that I wanted to ignore SDRS placement recommendations altogether!

I should be able to ignore these warnings, for the reasons stated above. It's a bit inconsistent that I am still prevented from overriding!

There are 2 ways around this.

First of all you can momentarily turn off SDRS completely.

You must now choose a datastore manually. Selecting the non-compatible datastore will give the warning, as expected.

But now no enforcement takes place and we are free to move the VM wherever we want.

The other workaround, which is not so much a workaround as it is the correct way of dealing with policy-based VM placement, is to change the policy.
If you put the VM's policy back to default, it doesn't care where you move it.

 

Storage DRS Movement Behaviors

When EnforceStorageProfiles=1, SDRS does not seem to move the VM, even if it is non-compliant.

Unfortunately, EnforceStorageProfiles=2 (hard enforce) does not change this behavior. I was really hoping here that it would automatically move the VM to the correct storage, but it does not, even when manually triggering SDRS recommendations.

Manual Storage-vmotion to compliant datastore

When the VM is already inside the storage-cluster, but on a non-compliant datastore, you would think it would be easy to get it back onto a compliant datastore.
It is not. When you select the datastore-cluster object as the target, it will fault with the same error as the manual move in the previous example: explicit movements inside an SDRS-enabled cluster always require an override.

Create the override by selecting the checkbox again.

Don't forget to remove the override again afterwards.

Manual Storage-vmotion from external datastore to the storage-cluster

Here, SDRS will respect the storage policy and recommend initial placement on the correct compliant datastores.


 

Conclusion.

Tag-based storage policies, and their use in combination with SDRS Clusters, appear to be buggy and underdeveloped. The interface feedback is inconsistent and unclear. As a result, the behavior of the EnforceStorageProfiles setting becomes unreliable.

It's hard to think of a better use case for EnforceStorageProfiles than the self-tagged SDRS datastore scenario I tried in the lab. Both vSAN and vVOL datastores do not benefit from this setting; it really only applies to ‘classic’ datastores in an SDRS cluster.

I have seen that self-tagging does not work correctly. But I have not yet gone back to the original use case of Storage Profiles: VASA properties. However, with VASA-advertised properties you are limited to what the VASA endpoint is advertising. Self-tagging is far more flexible, and currently the only way I can give datastores a ‘side’ in a shared-storage metro-cluster design.

Nothing I have read about vSphere 6.5 so far, leads me to believe this situation has been improved. But I will have to wait for the bits to become available.

 

My journey to find out how to set EMC VPLEX DNS settings, and how to change your default root password.

Tuesday, December 22nd, 2015

Warning: This is kind of a rant.

Sometimes I really have to wonder if the engineers who build hardware ever even talk to people who use their products.

Though I love the EMC VPLEX, I get this feeling of a ‘disconnect’ between design and use more strongly with this product than with many others.

This post is a typical example.

I noticed that one of my vplex clusters apparently does not have the correct DNS settings set up.

Now, disclaimer: I am not a Linux guy. But even if I was, my first thought when dealing with hardware is not to treat it as an ordinary Linux distro; those kinds of assumptions can be fatal. When it's a complete, vendor-provided solution, I assume (and it is mostly the case) that the vendor supplies specific configuration commands or environments to configure the hardware. It is always best practice to follow vendor guidelines first before you start messing around yourself. Messing around yourself is often not even supported.

 

So, let's start working the problem:

 

My first go-to for most things is of course Google:

 

Now I really did try to find anything, any post by anyone, that could tell me how to set up DNS settings. I spent a whole 5 minutes at least on Google :p

But alas, no: lots of informative blog posts, but nothing about DNS.

Ok, to the manuals. I keep a folder of VPLEX documentation handy for exactly this kind of thing:

 

 

 

docu52651_VPLEX-Command-Reference-Guide MARCH2014.pdf

 

 

Uhh.. nope.

docu52646_VPLEX-Administration-Guide MARCH2014.pdf

AHA!

 

 

Uhh.. nope.

 

docu34005_VPLEX-Configuration-Guide MARCH2014.pdf

Nope

🙁

 

 

 

Ok, something more drastic:

docu52707_VPLEX-5.3-Documentation-Portfolio.pdf

3 hits. THREE.. really?

 

Yes.. I know the management server uses DNS. *sigh*

 

 

 

Oh.. well at least I know that it uses standard Bind now, great!

 

 

 

 

oh, hi again!

 

 

Ok, let's try the EMC Support site next:

Uhhmm… the only interesting one here is:

( https://support.emc.com/docu34006_VPLEX-with-GeoSynchrony-5.0-and-Point-Releases-CLI-Guide.pdf?language=en_US )

director dns-settings create, eh??

Ok then!

Getting excited now!


 

‘Create a new DNS settings configuration’

Uhmm.. you mean like… where I can enter my DNS servers, right? Riiiiight?

 

Oh.. uh.. what? I guess they removed it in or prior to GeoSynchrony 5.3? :p

🙁

Back to EMC support

Nope.

 

 

Nope.

So… there is NO DNS knowledge anywhere in the EMC documentation?  At all???  Anywhere??

 

Wait! Luke, there is another!

 

SolVe (seriously, who comes up with these names) is the replacement for the good ole ‘procedure generator’ that used to be on SupportLink.

Hmm… I don't see DNS listed?

Change IP addresses maybe??

Hmm…  not really.. however I see an interesting command: management-server

Oh… I guess you are too good to care for plain old DNS eh?

 

And this is the point where I have run out of options to try within the EMC support sphere.

And as you can see, I really, really did try!

 

So… the management server is basically a SUSE Linux distro, right?

vi /etc/resolv.conf

Uhm… well fuck.

Now, I am logged into the management server with the ‘service’ account, the highest-level account that is mentioned in any of the documentation. Of course, it is not the root account.

sudo su - … and voila:

There we go!
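For completeness, this is all it came down to in the end. To be clear: editing files behind the appliance's back like this is very likely unsupported, so treat it as a sketch and clear it with EMC support first; the search domain and nameserver addresses are placeholders:

# As root on the management server (it is SLES underneath):
sudo su -
vi /etc/resolv.conf

# /etc/resolv.conf then just needs the usual entries, for example:
search example.local
nameserver 192.0.2.53
nameserver 192.0.2.54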

 

Which brings me to another thing I might as well address right now.

The default root account password for the VPLEX management server is easily Googlable. That is why you should change it. There actually is a procedure for this: https://support.emc.com/kb/211258
Which I am sure no one, anywhere, has ever followed.. that at least is usually the case with this sort of thing.

Here is the text from that KB article:

The default password should be changed by following the below procedure. EMC recommends following the steps in this KB article and downloading the script mentioned in the article from EMC On-Line Support.

Automated script: 

The VPLEX cluster must be upgraded to code version 5.4.1 Patch 3 or to 5.5 Patch 1 prior to running the script.

Note: VS1 customers cannot upgrade to 5.5, since only VS2 hardware is capable of running 5.5. VS1 customers must upgrade to 5.4 SP1 P3, and VS2 customers can go to either 5.4 SP1 P3, or 5.5 Patch 1.

The script, “VPLEX-MS-patch-update-change-root_password-2015-11-21-install” automates the workaround procedure and can be found at EMC’s EMC Online Support.

Instructions to run the script: 

Log in to the VPLEX management-server using the service account credentials and perform the following from the management-server shell prompt:

  1. Pull down a copy of the “VPLEX-MS-patch-update-change-root_password-2015-11-21-install” script from the specified location above and then, using SecureCopy (scp), copy the script into the “/tmp/VPlexInstallPackages/” directory on the VPLEX management server.
  2. The permissions need to be changed to allow execution of the script using the command chmod +x.

service@ManagementServer:~> chmod +x /tmp/VPlexInstallPackages/VPlex-MS-patch-update-root_password-2015-11-21-install

  3. Run the script as shown below.

Sample Output:

This script will perform following operation:
– Search and insert the IPMI related commands in /etc/sudoers.d/vplex-mgmt.
– Prompt for the mgmt-server root password change.
Run the script with “--force” option to execute it

service@ManagementServer:~> sudo /tmp/VPlexInstallPackages/VPlex-MS-patch-update-root_password-2015-11-21-install --force

Running the script…

– Updating sudoers
– Change root password
Choose password of appropriate complexity.

Enter New Password:
Reenter New Password:

Testing password strength…

Changing password for root.

Patch Applied

NOTE: In the event that the password is not updated, run the script again with proper password complexity.

  4. Following running of the script, from the management server, verify that the password change is successful.

Sample output:

service@ManagementServer:~> sudo -k whoami
root’s password:
root

***Contact EMC Customer Service with the new root password to verify that EMC can continue to support your VPLEX installation. Failure to update EMC Customer Service with the new password may prevent EMC from providing timely support in the event of an outage.

Notice how convoluted this is. Also notice how you need to have at least 5.4.1 Patch 3 in order to even run it.

While EMC KB articles have an attachment section, the script in question is of course not attached there.

Instead, you have to go look for it yourself, helpfully, they link you to: https://support.emc.com/products/29264_VPLEX-VS2/Tools/

And it's right there, for now at least.

What I find interesting here is that it appears both the article and the script have last been edited… today?
Coincidental, perhaps. But also a little scary. Does this mean that prior to 5.4.1 Patch 3 there really was no supported way to change the default VPLEX management server root password? The one that every EMC and VPLEX support engineer knows and that is easily Googlable? Really?

I think the most troubling part of all this is that final phrase:

Failure to update EMC Customer Service with the new password may prevent EMC from providing timely support in the event of an outage.

Have you ever tried changing vendor default backdoor passwords to see if their support teams can deal with it? Newsflash: they cannot. We tried this once with EMC CLARiiON support. We changed the default passwords and dutifully informed EMC support that we had changed them. They assured us this was noted down in their administration for our customer.

You can of course guess what happened. Every single time, EMC support would try to get in and complain that they could not. You had to tell them about the new passwords you had set up, every single time. I am sure that somewhere in the EMC administrative system there is a notes field that could contain our non-default passwords. But no EMC engineer I have ever spoken to would ever look there, or even know to look there.

If you build an entire hardware-support infrastructure around the assumption of built-in default password that everyone-and-their-mother knows, you make it fundamentally harder to properly support users who ‘do the right thing’ and change them. And you build in vulnerability by default.

Instead, design your hardware and appliances to generate new, unique, strong default passwords on first deployment, or have the user provide them (enforcing complexity); many VMware appliances now do this. But do NOT bake in backdoor default passwords that users and Google will find out about eventually.