If your virtual instance is not booting because of a broken grub2 configuration, you can fix it from the parent server by booting the instance from the SystemRescueCd ISO.
Here is a video showing this in action. https://youtu.be/9TOGb6cEhcY

This is what the grub2 rescue screen looks like. If this is what you're seeing, connect to your parent server, get a list of instances, find the unique ID of the one you're working on, copy its configuration file, and then modify the copy to boot from the SystemRescueCd ISO file.
ssh root@parentserverlocation.com
virsh list
cp -p /xen/configs/E7PX0D{,.sysrescd}.cfg
vim /xen/configs/E7PX0D.sysrescd.cfg
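The cp line above uses bash brace expansion to name both the original cfg and the .sysrescd copy in one go. A quick way to see what it expands to (same pattern, same hypothetical paths):

```shell
# Brace expansion: FILE{,.sysrescd}.cfg expands to two arguments,
# the original config path and the rescue copy.
echo /xen/configs/E7PX0D{,.sysrescd}.cfg
# → /xen/configs/E7PX0D.cfg /xen/configs/E7PX0D.sysrescd.cfg
```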
In the <os> block of the cfg file, add the <boot dev="cdrom"/> entry ahead of the hard-disk entry so the instance tries the ISO first:
<os>
  <type arch="x86_64" machine="pc-i440fx-2.2">hvm</type>
  <boot dev="cdrom"/>
  <boot dev="hd"/>
</os>
Then, in the disk section of the cfg file, add a cdrom disk block whose source file points at /xen/images/systemrescuecd-version.iso:
<disk type="file" device="cdrom">
  <source file="/xen/images/systemrescuecd-x86-5.0.2.iso"/>
  <target dev="hdc" bus="ide"/>
  <readonly/>
</disk>
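Before restarting the instance it can be worth a quick check that the edited XML is still well-formed (libvirt will also reject a malformed file at virsh create time). A minimal sketch, assuming python3 is available; the scratch path and the <check> wrapper element are illustrative only:

```shell
# Copy the two edited fragments into a scratch file and parse them.
cat > /tmp/frag-check.xml <<'EOF'
<check>
  <os>
    <type arch="x86_64" machine="pc-i440fx-2.2">hvm</type>
    <boot dev="cdrom"/>
    <boot dev="hd"/>
  </os>
  <disk type="file" device="cdrom">
    <source file="/xen/images/systemrescuecd-x86-5.0.2.iso"/>
    <target dev="hdc" bus="ide"/>
    <readonly/>
  </disk>
</check>
EOF
python3 -c 'import xml.etree.ElementTree as ET; ET.parse("/tmp/frag-check.xml")' \
  && echo "fragments are well-formed"
```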
Make sure you have uploaded the SystemRescueCd ISO to the parent server from your workstation first (the version in the filename must match what the cfg file references).
scp systemrescuecd-x86-5.0.2.iso root@parentserverlocation.com:/xen/images/systemrescuecd-x86-5.0.2.iso
Then stop the virtual instance and start it again using the modified configuration file:
virsh destroy E7PX0D
virsh create /xen/configs/E7PX0D.sysrescd.cfg
Now that the virtual instance has started from the rescue CD, you need to connect to it via a virtual TTY. I will be posting a separate article on connecting to a KVM instance via a virtual TTY, since that requires its own guide; once it is posted I will update this post. (If the guest has a serial console configured, virsh console on the parent server is one common way in.)
Once connected to the virtual TTY, find your virtual disk's root partition, mount it, and then mount the boot partition inside it.

fdisk -l
mount /dev/vda3 /mnt/gentoo
mount /dev/vda1 /mnt/gentoo/boot
Then bind-mount proc, dev, and sys into the root partition and chroot into it. Once inside the chroot, regenerate the grub.cfg file. If the kernel itself is broken, bring up the eth0 interface and reinstall the kernel as well.

for dir in proc dev sys; do mount --bind /$dir /mnt/gentoo/$dir; done
chroot /mnt/gentoo /bin/bash
grub2-mkconfig -o /boot/grub2/grub.cfg
ifup eth0
yum reinstall kernel

At this point grub has been rebuilt, so exit the chroot, shut the instance down, and boot the normal configuration from the parent server:
shutdown -h now
virsh create /xen/configs/E7PX0D.cfg
The instance should be back online within a few moments, and our work is done.
If you have a dedicated server that is down due to a broken grub2, writing the SystemRescueCd to a USB drive, booting the server from it, and following this guide from the fdisk -l step onward will work there as well.
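Writing the ISO to the USB drive can be done with dd. The sketch below copies to a scratch file so it is safe to run as-is; on real hardware you would point of= at the USB device node (something like /dev/sdX, as reported by lsblk — double-check it, since dd will happily overwrite the wrong disk):

```shell
# Stand-in for the real ISO so this sketch is runnable anywhere
dd if=/dev/urandom of=/tmp/demo.iso bs=1M count=4 2>/dev/null

# The actual write step; on real hardware this would be:
#   dd if=systemrescuecd-x86-5.0.2.iso of=/dev/sdX bs=4M conv=fsync
dd if=/tmp/demo.iso of=/tmp/usb.img bs=4M conv=fsync 2>/dev/null

# Verify the copy is byte-identical
cmp /tmp/demo.iso /tmp/usb.img && echo "write verified"
```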
If this helped you out, consider supporting me on YouTube or https://twitch.tv/djrunkie