nothing about it is common or portable, so if you change your VM host, it might all fall apart.
Disclaimer: I’m pretty much elbow-deep in Terraform daily and have written/contributed to a few providers.
A lot of this is highly dependent upon the providers (the things that let the Terraform engine interface with the APIs for AWS, Proxmox, vSphere, etc.). The Telmate Proxmox provider in particular is/was quite awful about not realizing a provisioned VM had moved to a new host.
Also, the default/tutorial code tends not to be very flexible. The game changer for me was using the built-in functions to decode YAML from a config file (like yamldecode(file("config.yml"))) in a locals block. You can then specify your desired infrastructure in YAML and (if you write your Terraform code correctly) blow out hundreds of VMs, policies, firewall rules, DNS records, etc. with a single manifest. I’ve also used the local_file resource with a Terraform template file to dynamically create an Ansible inventory based on what’s deployed.
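A minimal sketch of that workflow, with a made-up config.yml layout (the vms map and its ip field are purely illustrative), the Telmate proxmox_vm_qemu resource, and an inventory.tpl template you’d write yourself:

```yaml
# config.yml (layout is made up for illustration)
vms:
  web-01: { node: pve1, cores: 2, memory: 2048, ip: 10.0.0.11 }
  db-01:  { node: pve2, cores: 4, memory: 8192, ip: 10.0.0.21 }
```

And the Terraform side that consumes it:

```hcl
locals {
  # Parse the whole desired-state manifest out of YAML once.
  config = yamldecode(file("${path.module}/config.yml"))
}

# One VM per entry in the manifest's "vms" map.
resource "proxmox_vm_qemu" "vm" {
  for_each = local.config.vms

  name        = each.key
  target_node = each.value.node
  cores       = each.value.cores
  memory      = each.value.memory
}

# Render an Ansible inventory from the same manifest.
resource "local_file" "ansible_inventory" {
  filename = "${path.module}/inventory.ini"
  content = templatefile("${path.module}/inventory.tpl", {
    hosts = { for name, vm in local.config.vms : name => vm.ip }
  })
}
```

Adding a VM (or fifty more) is then just another YAML entry; plan/apply picks it up without touching the HCL.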
Yeah, that’s the other thing to keep in mind: since the KVM APIs are different from the vSphere APIs, you can’t just swap providers without changes. But if you were going from a test vSphere stack to a prod one, you could update the endpoint and be just fine.
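A minimal sketch of that, with the endpoint pulled into a variable so the same code can point at a test or prod vCenter (variable names are mine; provider argument names are from memory of the hashicorp/vsphere provider, so double-check the docs):

```hcl
# Which vCenter to talk to; swap via a tfvars file (test vs. prod).
variable "vsphere_server" {
  type = string
}

variable "vsphere_user" {
  type = string
}

variable "vsphere_password" {
  type      = string
  sensitive = true
}

provider "vsphere" {
  vsphere_server       = var.vsphere_server
  user                 = var.vsphere_user
  password             = var.vsphere_password
  allow_unverified_ssl = true
}
```

Then the only difference between environments is which -var-file you pass to terraform plan/apply.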
HashiCorp has caught some shit in the past for claiming the same code covers multiple providers. Technically it can, if you do weird shit with modules, but in reality there isn’t a clean way to have a single, easily understandable project that can provision to multiple platforms.