My understanding is that, upon booting, the host operating system reserves certain CPU opcodes/instructions so that only the host OS may use them, and the CPU architecture supports this restriction in hardware.
If a program tries to use one of these instructions, the CPU traps and hands control back to the OS, which terminates the program. Hence a program uses syscalls to ask the OS to perform these operations on its behalf.
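For example (if I understand correctly), a user-mode program like the sketch below, assuming Linux on x86-64, is killed the moment it reaches the privileged instruction; the instruction, includes, and behavior here are just my illustration, not taken from any particular source.

```c
/* Hypothetical demo (Linux/x86-64): execute a privileged instruction from
 * user mode.  "hlt" is only legal in ring 0, so the CPU raises a
 * general-protection fault and the kernel kills the process (SIGSEGV). */
#include <stdio.h>

int main(void)
{
    printf("about to execute a privileged instruction...\n");
    fflush(stdout);
    __asm__ volatile("hlt");       /* privileged: traps when run in user mode */
    printf("never reached\n");     /* the process is terminated before this */
    return 0;
}
```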
When you virtualize an OS, it doesn't issue syscalls; it simply executes the privileged instruction directly. It stands to reason it would then be terminated by the host OS.
How are privileged opcodes virtualized?
There's another option besides termination: emulate the instruction and continue.
Some systems emulate misaligned loads and stores when the hardware doesn't support them but programs expect them to work; the same goes for floating-point instructions and registers. So a host can emulate an instruction to make it look as if the hardware had executed it. Emulation means making the changes to the program's state (CPU registers and/or memory, as needed) that the instruction would have made, and then resuming the program after the faulting instruction.
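To make "emulate and continue" concrete, here is a minimal sketch for a misaligned load. None of this is a real hypervisor's API: the cpu_state/load_info layouts and the decode_load() helper (stubbed out here) are invented for illustration, and a fixed 4-byte instruction length plus matching host/guest endianness are assumed.

```c
#include <stdint.h>
#include <string.h>

struct cpu_state {
    uint64_t regs[32];  /* general-purpose registers */
    uint64_t pc;        /* program counter of the faulting instruction */
};

struct load_info {
    int      dest_reg;  /* register the load writes to */
    uint64_t addr;      /* (misaligned) effective address */
    size_t   size;      /* access size in bytes: 1, 2, 4 or 8 */
};

/* Hypothetical decoder: a real implementation would read and decode the
 * instruction bytes at state->pc; it is stubbed out in this sketch. */
static int decode_load(const struct cpu_state *state, struct load_info *out)
{
    (void)state; (void)out;
    return -1;  /* "could not decode" */
}

/* Emulate a misaligned load and continue: perform the access byte-wise,
 * write the result into the destination register, and step the program
 * counter past the faulting instruction. */
void emulate_misaligned_load(struct cpu_state *state)
{
    struct load_info ld;
    if (decode_load(state, &ld) != 0)
        return;  /* not a load we know how to emulate */

    uint64_t value = 0;
    memcpy(&value, (const void *)(uintptr_t)ld.addr, ld.size);

    state->regs[ld.dest_reg] = value;  /* as if the hardware did the load */
    state->pc += 4;                    /* resume after the instruction */
}
```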
A similar mechanism handles page faults: load, copy, or otherwise map the address, then continue the program. The difference is that for these, the faulting instruction is restarted rather than skipped once the page(s) are made available.
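As a user-space illustration of that restart behavior (the real work happens inside the kernel's page-fault handler), here is a small Linux/x86-64 sketch: the SIGSEGV handler maps a page at the faulting address and simply returns, and the kernel then re-executes the very same store, which now succeeds. The fixed address and the use of mmap() inside a signal handler are for demonstration only, not production practice.

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE_SIZE 4096UL

static void on_segv(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* Round the faulting address down to a page boundary and map a page there. */
    void *page = (void *)((uintptr_t)info->si_addr & ~(PAGE_SIZE - 1));
    if (mmap(page, PAGE_SIZE, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED)
        _exit(1);
    /* Returning restarts the faulting instruction, which now finds the page. */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sigemptyset(&sa.sa_mask);
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    int *p = (int *)0x70000000UL;  /* an address that is (probably) unmapped */
    *p = 42;                       /* faults, handler maps the page, store restarts */
    printf("read back: %d\n", *p); /* prints 42 */
    return 0;
}
```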
In those environments, it is critical that privileged and missing instructions fault (take a hardware exception) so that the host can emulate them with the proper semantics. The hardware changes its behavior for privileged instructions based on the operating mode, whether user or supervisor (or other): taking an exception automatically switches to a more privileged mode, and returning from the exception restores the original mode.
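Putting the pieces together, the host-side logic is roughly a dispatcher over the trap cause. The sketch below is not any particular hypervisor's code; the frame layout, cause names, and helpers are invented, and a fixed 4-byte instruction length is assumed.

```c
#include <stdint.h>

enum trap_cause {
    CAUSE_ILLEGAL_INSN,      /* privileged or unimplemented opcode */
    CAUSE_MISALIGNED_ACCESS, /* load/store the hardware won't do   */
    CAUSE_PAGE_FAULT         /* page not present / not mapped      */
};

struct trap_frame {
    uint64_t guest_pc;       /* address of the faulting instruction */
    uint64_t guest_regs[32];
    uint64_t fault_addr;
    enum trap_cause cause;
};

/* Stubs standing in for the real emulation / memory-management code. */
static void emulate_instruction(struct trap_frame *tf) { (void)tf; }
static void map_guest_page(uint64_t addr) { (void)addr; }

/* Runs in the more privileged mode the hardware switched to on the trap. */
void handle_guest_trap(struct trap_frame *tf)
{
    switch (tf->cause) {
    case CAUSE_ILLEGAL_INSN:
    case CAUSE_MISALIGNED_ACCESS:
        emulate_instruction(tf);        /* mimic what the hardware would do */
        tf->guest_pc += 4;              /* continue *after* the instruction */
        break;
    case CAUSE_PAGE_FAULT:
        map_guest_page(tf->fault_addr); /* make the page available...       */
        break;                          /* ...and restart the same pc       */
    }
    /* Returning from the exception (eret/mret/sret/iret, depending on the
     * architecture) drops back to the guest's original, less privileged
     * mode and resumes execution at tf->guest_pc. */
}
```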
Hardware schemes can help with efficiency. For example, RISC-V defines specific instructions and behaviors for multiple separate privilege levels, so that some privileged operations can be handled at one level without faulting up to the next layer.
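One concrete instance: on RISC-V, machine mode (M) can delegate selected traps to supervisor mode (S) via the medeleg/mideleg CSRs, so the OS kernel handles them directly instead of bouncing through the most privileged layer every time. A hedged sketch follows; it only compiles for a riscv64 target, must itself run in M-mode, and the particular set of delegated causes is just an example choice.

```c
#include <stdint.h>

/* Exception cause codes from the RISC-V privileged specification. */
#define CAUSE_ECALL_FROM_U      8
#define CAUSE_INST_PAGE_FAULT   12
#define CAUSE_LOAD_PAGE_FAULT   13
#define CAUSE_STORE_PAGE_FAULT  15

static inline void csr_write_medeleg(uint64_t value)
{
    __asm__ volatile("csrw medeleg, %0" : : "r"(value));
}

/* Delegate these exceptions from M-mode to S-mode so the kernel's
 * S-mode trap handler receives them without an M-mode round trip. */
void delegate_traps_to_supervisor(void)
{
    uint64_t deleg = (1ULL << CAUSE_ECALL_FROM_U)
                   | (1ULL << CAUSE_INST_PAGE_FAULT)
                   | (1ULL << CAUSE_LOAD_PAGE_FAULT)
                   | (1ULL << CAUSE_STORE_PAGE_FAULT);
    csr_write_medeleg(deleg);
}
```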