vCPU per physical core


According to Google (https://cloud.google.com/architecture/resource-mappings-from-on-premises-hardware-to-gcp), the number of vCPUs = threads per core × cores per socket × number of sockets.
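As a quick illustration of that formula, here is a minimal sketch (the figures below are hypothetical, not a claim about any specific GCP machine type):

```python
def vcpu_count(threads_per_core: int, cores_per_socket: int, sockets: int) -> int:
    """vCPUs = threads per core x cores per socket x number of sockets."""
    return threads_per_core * cores_per_socket * sockets

# Hypothetical example: an SMT-2 CPU with 64 cores per socket, 2 sockets.
print(vcpu_count(2, 64, 2))  # 256
```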

A modern AMD EPYC Zen 3 CPU, as used on the Tau T2D plan, has up to 64 cores per socket, with two threads (logical CPUs) per physical core. I hope that Google still allocates only two vCPUs per physical core, one vCPU per logical CPU exposed by SMT. So what is their policy? Their sales department has no idea at all.

For comparison, how many shared vCPUs do Linode, Vultr, and DO create per physical core on their shared-CPU AMD plans? Two vCPUs? Eight?

Thanks!


1 answer below


A bit late, but perhaps still relevant info: t2d seems to give you an entire EPYC core per vCPU, while n2d/c2d seem to give you a hyper-thread per vCPU (so 2 vCPUs per core). You can verify that by testing scalability with single-threaded vs. multi-threaded scalable workloads: if your vCPUs are whole cores, you get near-optimal scalability (2x vCPUs = 2x performance for the right load).
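A rough sketch of such a scalability probe (the workload and iteration count are arbitrary placeholders; a real benchmark would pin threads and run much longer):

```python
import time
from multiprocessing import Pool

def burn(n: int) -> int:
    # Pure-Python CPU-bound busy loop; any arithmetic workload works here.
    total = 0
    for i in range(n):
        total += i * i
    return total

def throughput(workers: int, n: int = 2_000_000) -> float:
    # Total iterations per second across all workers.
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, [n] * workers)
    return workers * n / (time.perf_counter() - start)

if __name__ == "__main__":
    t1 = throughput(1)
    t2 = throughput(2)
    # On full cores per vCPU the factor approaches 2.0;
    # on two hyper-threads sharing one core it is noticeably lower.
    print(f"scaling factor: {t2 / t1:.2f}")
```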

You were curious about other cloud providers. I recently did some extensive testing on multiple providers, including Linode and DO (but not Vultr, as I don't consider them as reputable). Look at the Multi-threaded performance & CPU scalability section: from the graph, whatever scales at 90%+ efficiency is a full core per vCPU; otherwise it's a hyper-thread.
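That 90% criterion is just scaling efficiency relative to ideal linear scaling; a small sketch (the benchmark scores here are made up for illustration):

```python
def scaling_efficiency(single_score: float, multi_score: float, vcpus: int) -> float:
    """Fraction of ideal linear scaling achieved (1.0 = perfect)."""
    return multi_score / (single_score * vcpus)

# Hypothetical scores: 100 single-threaded, 380 with all 4 vCPUs loaded.
eff = scaling_efficiency(100, 380, 4)
print(f"{eff:.0%}")  # 95% -> behaves like a full core per vCPU
```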

The short story is: most "dedicated" CPU instances across providers give you threads, with notable exceptions being the aforementioned t2d as well as the ARM-powered VMs (e.g. Ampere Altra, AWS Graviton2), which give you a full core per vCPU. On the other hand, while many "shared" instances give you less consistent single-thread performance, they behave more like full cores per vCPU, probably because workloads are moved to free cores and nodes aren't usually excessively busy. Both Linode's and Digital Ocean's lowest-cost shared instances are like that, making them great deals given the price. So, with those, your single-threaded performance will vary a bit with how busy the node is, but your 2x vCPU instance will actually have about 2x the performance of a 1x vCPU instance if you can run things in parallel.