
Parallel provisioning #210

Open
taliesins wants to merge 39 commits into master from parallel-provisioning
Conversation

taliesins
Contributor

With these changes I have managed to spin up 10 Windows boxes in parallel. I also make use of a few Vagrant plugins, like vagrant-berkshelf, vagrant-triggers, and vagrant-windows-domain, so there are lots of crazy things happening, like reboots and re-initialization of file shares.

This PR is going to need some help on the test side. I am a bit of a Ruby noob, so I am not sure how we should be changing the tests to suit the code changes, so the tests have been left unchanged. Please send me a PR to fix the tests!

@ghost

ghost commented Oct 26, 2016

@taliesins I merged your PR #204 (by way of #212 where I fixed tests, etc), but that has caused the parallel-provisioning branch to have some rather large merge conflicts, particularly in lib/vSphere/action/clone.rb. Would you be able to take care of resolving those conflicts?

@taliesins taliesins force-pushed the parallel-provisioning branch from 0037407 to 06def27 Compare November 17, 2016 18:20
@taliesins
Contributor Author

@michael-brandt-cu I have resolved the conflicts. It would be wonderful if you could help me with the tests. Would love to get this into the main branch.

I have added a couple more things into this pull request:

  • Added support for multiple network cards with full configuration of all features
  • Added support for serial ports with full configuration of all features
  • Selection of the management interface for communication between Vagrant and the VM
  • Changed the wait for Windows sysprep into a wait for customization, since customization can be used on other OSes. Made waiting for customization optional.

# If the machine ID changed, then we need to rebuild our underlying
# driver.
def machine_id_changed
id = @machine.id

@taliesins I'm working through the rubocop failures right now, and for this line it says:

lib/vSphere/provider.rb:28:9: W: Useless assignment to variable - id.
        id = @machine.id
        ^^

id is not used anywhere in this function. Was this supposed to be @id = @machine.id, setting the id property on an instance of Provider? If not, then the line should be deleted to pass rubocop.
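If the value was meant to be kept, the instance-variable fix might look like the sketch below. `Machine` and `Provider` here are illustrative stand-ins, not the plugin's real classes:

```ruby
# Minimal sketch of the instance-variable fix for rubocop's
# Lint/UselessAssignment warning. `Machine` is a stand-in for
# Vagrant's machine object; this `Provider` is illustrative only.
Machine = Struct.new(:id)

class Provider
  attr_reader :id

  def initialize(machine)
    @machine = machine
  end

  # Assigning to @id keeps the value on the instance instead of
  # leaving it in an unused local variable.
  def machine_id_changed
    @id = @machine.id
  end
end

provider = Provider.new(Machine.new('vm-42'))
provider.machine_id_changed
puts provider.id  # => vm-42
```

The alternative, if the value really is unused, is simply to delete the assignment line.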

elsif (x = p.find(final, RbVmomi::VIM::ClusterComputeResource))
x
end
rescue Exception

rubocop for this line says:

lib/vSphere/driver.rb:500:9: W: Avoid rescuing the Exception class. Perhaps you meant to rescue StandardError?
        rescue Exception
        ^^^^^^^^^^^^^^^^

We could replace Exception with StandardError like rubocop suggests, but I think it would be even better to use the specific exception or exceptions that we anticipate finding here, eg:

rescue TypeError, NameError
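To make the contrast concrete, here is a minimal sketch of the narrowed rescue. The exception classes are just the ones suggested above, not necessarily everything RbVmomi can raise:

```ruby
# Sketch of why a narrow rescue is preferable: `rescue Exception`
# also swallows Interrupt (Ctrl-C) and SystemExit, which are not
# StandardError subclasses. Rescuing only the anticipated classes
# lets everything else propagate normally.
def find_cluster
  raise NameError, 'no such compute resource'
rescue TypeError, NameError => e
  # Only the exceptions we expect from the lookup are handled here.
  "handled: #{e.class}"
end

puts find_cluster  # => handled: NameError
```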


@taliesins Is there a small number of specific exceptions we expect to encounter here?

Contributor Author

I cut and paste this from: https://github.com/nsidc/vagrant-vsphere/blob/master/lib/vSphere/util/vim_helpers.rb

I will see if I can work out why this code was implemented this way.


Ah, I see. It's too bad that it's difficult to tell in git when code is just moved around like that.

Looks like it passed rubocop before because vim_helpers.rb was specifically excluded in the rubocop configuration: https://github.com/nsidc/vagrant-vsphere/blob/master/.rubocop.yml#L7

So, no need to worry about this then! I'll just update the config to name the correct file.

Contributor Author

No. I think we do want to run rubocop on this file. Let's see if we can put in something that it will pass with.

This file basically contains all the logic in it :>


Well, the whole file wasn't actually excluded, just the Lint/RescueException cop. So it would still run most of the checks.

@ghost ghost mentioned this pull request Nov 18, 2016
@ghost

ghost commented Nov 18, 2016

@taliesins

I have resolved the conflicts.

Great!

It would be wonderful if you could help me with the tests.

I've created a new branch, parallel-provisioning-tests, starting from what you have here, with changes to get rubocop and the tests passing so that the Travis build can pass. I don't have it all passing yet, but that's where I'm planning to continue work on that.

I have a 3-day weekend this weekend, but I'll be able to get back to this next week.

@ghost

ghost commented Nov 23, 2016

I took a break from fixing unit tests, and I tried using this branch to destroy and recreate a VM I use for development on my current main project at NSIDC. I got these errors when attempting to destroy it:

There are errors in the configuration of this machine. Please fix
the following errors and try again:

vSphere Provider:
* The following settings shouldn't exist: real_nic_ip, vlan

We use both of these settings in our own infrastructure, and it looks like this branch removes real_nic_ip and changes how vlan works. Considering these breaking changes, and the possibility that other configuration settings are also changed in a backwards-incompatible way, I'm hesitant to pull this in.

It might be possible to release these changes in their current form as vagrant-vsphere-parallel, but then we would quickly have two diverging plugins; given how many changes this PR has, any features/fixes added to vagrant-vsphere could be difficult to apply to vagrant-vsphere-parallel, and vice versa.

@taliesins
Contributor Author

@michael-brandt-cu real_nic_ip and vlan are configuration settings that used to live at the machine level. To support multiple NICs we need to move them to the network adapter level.

Perhaps we could put in a shim property for vlan and ip_address?

It should be trivial to change your vagrant scripts to use the multiple network card approach:

  • management_network_adapter_slot to select the management NIC
  • management_network_adapter_address_family to select which address family to detect the IP for
  • vlan: "Env#{environment_number}" is the equivalent of vlan at the machine level
  • ip_address: ipAddress is the equivalent of real_nic_ip at the machine level
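A hypothetical shim (not part of this PR; the `Config` class and all names below are illustrative only) could delegate the old machine-level settings onto network adapter slot 0, so existing Vagrantfiles keep working:

```ruby
# Hypothetical backwards-compatibility shim: the legacy machine-level
# `vlan` and `real_nic_ip` settings map onto network adapter slot 0.
# This Config class is a sketch, not the plugin's real config object.
class Config
  attr_reader :adapters

  def initialize
    # Each slot lazily starts as an empty options hash.
    @adapters = Hash.new { |h, k| h[k] = {} }
  end

  # New-style per-adapter configuration.
  def network_adapter(slot, opts = {})
    @adapters[slot].merge!(opts)
  end

  # Legacy setting: vlan at machine level becomes vlan on adapter 0.
  def vlan=(value)
    @adapters[0][:vlan] = value
  end

  # Legacy setting: real_nic_ip becomes ip_address on adapter 0.
  def real_nic_ip=(value)
    @adapters[0][:ip_address] = value
  end
end

cfg = Config.new
cfg.vlan = 'Env1'
cfg.real_nic_ip = '10.0.0.5'
puts cfg.adapters[0][:vlan]        # => Env1
puts cfg.adapters[0][:ip_address]  # => 10.0.0.5
```

Old-style Vagrantfiles would then keep working unchanged while new ones use network_adapter directly.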

Here is an example snippet from my Vagrantfile for setting up a Juniper FPC for vMX:

      fpc.vm.provider :vsphere do |vsphere, overrides|
        vsphere.host = vsphere_host
        vsphere.insecure = true
        vsphere.user = vsphere_username
        vsphere.password = vsphere_password        
        vsphere.compute_resource_name = vsphere_cluster
        vsphere.resource_pool_name = vsphere_resource_pool   
        vsphere.template_name = vsphere_template_juniperfpc
        vsphere.name = fpc.vm.hostname
        vsphere.vm_base_path = "#{vsphere_vm_base_path}/Env#{environment_number}"
        vsphere.data_store_name = vsphere_datastore
        vsphere.memory_mb = 8192
        vsphere.cpu_count = 3
        vsphere.management_network_adapter_slot = 0
        vsphere.management_network_adapter_address_family = 'ipv4'
        vsphere.network_adapter 0, vlan: "Env#{environment_number}", mac_address: macAddress, ip_address: ipAddress
        vsphere.network_adapter 1, vlan: "Env#{environment_number}-Vmx#{i}"
        vsphere.network_adapter 2, vlan: "Env#{environment_number}-TransportBetweenVmx#{i % 2 == 1 ? i : i-1}AndVmx#{i % 2 == 0 ? i : i+1}"
        vsphere.network_adapter 3, vlan: "Env#{environment_number}-#{i % 2 == 1 ? 'Client' : 'CSP'}#{i % 2 == 1 ? i : i-1}-2"
        vsphere.network_adapter 4, vlan: "Env#{environment_number}-#{i % 2 == 1 ? 'Client' : 'CSP'}#{i % 2 == 1 ? i : i-1}-4"
        vsphere.network_adapter 5, vlan: "Env#{environment_number}-#{i % 2 == 1 ? 'Client' : 'CSP'}#{i % 2 == 1 ? i : i-1}-6"
        vsphere.network_adapter 6, vlan: "Env#{environment_number}-#{i % 2 == 1 ? 'Client' : 'CSP'}#{i % 2 == 1 ? i : i-1}-1"
        vsphere.network_adapter 7, vlan: "Env#{environment_number}-#{i % 2 == 1 ? 'Client' : 'CSP'}#{i % 2 == 1 ? i : i-1}-3"
        vsphere.network_adapter 8, vlan: "Env#{environment_number}-#{i % 2 == 1 ? 'Client' : 'CSP'}#{i % 2 == 1 ? i : i-1}-5"
        vsphere.network_adapter 9, vlan: "Env#{environment_number}-#{i % 2 == 1 ? 'Client' : 'CSP'}#{i % 2 == 1 ? i : i-1}-7"
      end

taliesins and others added 22 commits June 13, 2017 16:56
Allow the IP family to detect on the management NIC to be specified
Allow an IP address to be specified for a NIC, so that auto detection of the IP does not take place (use in situations where you can't install VMware guest tools)
Use default vSphere NIC detection if we are not using a customized approach

Use standard variable naming convention
…n can occur on other operating systems.

You can choose to wait for customization to complete, as not everyone uses vSphere customizations.
Modifies many files except for driver.rb; rubocop seems to stall out
when auto-correcting driver.rb
@taliesins taliesins force-pushed the parallel-provisioning branch from 1861e4e to 0ce7927 Compare June 13, 2017 16:02
Taliesin Sisson added 2 commits July 21, 2017 15:28
…inning up machines in parallel we know which machine the ui output is for
@macster84

Hi everyone,

looks like there is a lot of potential in this pull request.
Any chance we can get this, or parts of it, merged?

@michael-brandt-cu: Is backwards compatibility your main concern?

Maybe we can split the pull request into multiple smaller ones so that the risk and merge effort are reduced?
In any case, I would be happy to help.

@ghost

ghost commented Jan 23, 2018

@michael-brandt-cu: Is backwards compatibility your main concern?

Yes.

Maybe we can split the pull request in multiple different ones so that risk and merge effort is reduced?

Maybe. It's been a while since I've had funding to do much work on vagrant-vsphere, and even longer since I really looked at this PR, so I can't really say how well it could be split up.
