Recent Forum Posts

Hi Everybody,

I am experiencing the same issues. Has anyone succeeded?

My config: Debian Wheezy 7.3, Xen 4.2.1, DRBD 8.3.11.

Thanks for your reply.

I would like to autostart remus when my two-host platform (A & B) boots up, instead of invoking it manually from the command line.

My Xen HVM domU is a xend-managed domain and starts up automatically on A when A boots up.

I thought I had found a way of auto starting "remus -i 100 VM B > /dev/null 2>&1" from within a bash script. This bash script (called from within /etc/xen/vif-bridge) would wait for the domstate of VM to become idle, and then start remus. I thought I could do this because /etc/xen/scripts/vif-bridge is always called whenever a xend-managed domU is started.
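A minimal sketch of such a helper, assuming the xm toolstack is on the PATH; the polling loop, the timeout, and the argument guard are my own illustration, not the exact script from this post:

```shell
#!/bin/bash
# Hypothetical wait-and-start helper, to be called from /etc/xen/scripts/vif-bridge.
# Polls `xm domstate` until the domain reports "idle", then launches remus.
# Domain name, backup host, and timeout are illustrative.

wait_for_idle() {
    local dom="$1" tries="${2:-60}"
    while [ "$tries" -gt 0 ]; do
        state="$(xm domstate "$dom" 2>/dev/null)"
        if [ "$state" = "idle" ]; then
            return 0
        fi
        sleep 1
        tries=$((tries - 1))
    done
    return 1
}

# Only act when invoked with a domain and a backup host, e.g.:
#   wait_and_start_remus.sh VM B
if [ "$#" -ge 2 ]; then
    if wait_for_idle "$1" 60; then
        remus -i 100 "$1" "$2" > /dev/null 2>&1 &
    fi
fi
```

The argument guard keeps the script inert when sourced, which makes the wait loop easy to test in isolation.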

The VM starts up fine on A on bootup, but freezes a few seconds after remus has been invoked. The VM domstate becomes either shutoff or stateless. If I destroy the VM on B (always in paused state, as expected), the VM on A starts running again (idle state).

Has anyone tried this or something similar before? Is there a better way?

how to auto start remus by uhakansson, 23 Oct 2013 22:17

I have now installed Ubuntu 12.10, DRBD 8.3.11 and Xen 4.2.1 (I still use xend & xm create, etc.) but with the same result. I cannot remus two VMs in the same direction. I have tried --blackhole, --no-net, --no-compression, pinning the vcpus, etc. Nothing works.

However, I tried remusing the two VMs in opposite directions without doing anything special, just standard remus at the default 200 ms rate, and it ran for more than 62 hours (from Friday afternoon to Monday morning). When I VNCed into one of the VMs this morning, one remus process aborted immediately, and the other VM became stateless on the originating node, impossible to destroy or shut down other than by rebooting the entire node.

Has it ever been clearly demonstrated that more than one HVM can be replicated at the same time for an extended period of time, and not just a few minutes?

Is there any way of debugging this issue? The remus log and the xend log are not very helpful.

I applied the drbd-hvm patch so I could use the drbd VBD type and the HVM virtualization type for my VM. Should the rest of the Xen 4.1.2 patches be applied to 4.2.1 as well?

Any suggestions are very welcome.


Thanks for all the suggestions. I have tried it all, and remus still does not replicate two VMs. Sometimes two remus replications will run for a few seconds before they exit (abort). Usually one remus aborts and the other continues for a few seconds before also aborting, and the two VMs end up in a state where I cannot xm destroy them anymore. Sometimes running /etc/init.d/xend restart on both nodes will fix it, but sometimes I just have to reboot dom0 on both nodes.

As part of upgrading to DRBD 8.3.11, I also updated to CentOS 6.3, Linux kernel 3.4.32-6.el6.x86_64, and Xen 4.2.2

remus -i 100 vm1 node2 > /var/log/vm1.log 2>&1 &
remus -i 100 vm2 node2 > /var/log/vm2.log 2>&1 &
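One way to make runs like this easier to compare is to stagger the two launches and keep a separate log per VM; a sketch (the stagger delay, the overridable LOGDIR, and the argument guard are my own illustration):

```shell
#!/bin/bash
# Hypothetical launcher: staggers the two replications so the initial
# full-memory copies do not overlap, and gives each VM its own log file.
LOGDIR="${LOGDIR:-/var/log}"
STAGGER="${STAGGER:-60}"     # seconds between the two launches; illustrative

start_remus() {
    local dom="$1" target="$2"
    remus -i 100 "$dom" "$target" > "$LOGDIR/$dom.log" 2>&1 &
    echo $!                  # PID of the background replication
}

# Only act when invoked with a target host, e.g.: ./start-both.sh node2
if [ "$#" -ge 1 ]; then
    start_remus vm1 "$1"
    sleep "$STAGGER"
    start_remus vm2 "$1"
fi
```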

PROF: flushed memory at 1379585535.750035
PROF: suspending at 1379585535.838686
issuing HVM suspend hypercall
suspend hypercall returned 0
pausing QEMU
PROF: resumed at 1379585535.849508
resuming QEMU
Sending 5873 bytes of QEMU state
PROF: flushed memory at 1379585535.852089
PROF: suspending at 1379585535.946905
issuing HVM suspend hypercall
suspend hypercall returned 0
domain 1 not shut down
xc: error: Suspend request failed: Internal error
xc: error: Domain appears not to have suspended: Internal error
PROF: resumed at 1379585535.967212
resuming QEMU

PROF: flushed memory at 1379585536.483855
PROF: suspending at 1379585536.575694
issuing HVM suspend hypercall
suspend hypercall returned 0
pausing QEMU
PROF: resumed at 1379585536.583224
resuming QEMU
Sending 5873 bytes of QEMU state
PROF: flushed memory at 1379585536.585965
PROF: suspending at 1379585536.679800
issuing HVM suspend hypercall
suspend hypercall returned 0
domain 2 not shut down
xc: error: Suspend request failed: Internal error
xc: error: Domain appears not to have suspended: Internal error
qemu logdirty mode: disable
PROF: resumed at 1379585536.688845
resuming QEMU

[2013-09-19 06:12:15 3318] INFO (XendDomainInfo:2079) Domain has shutdown: name=vm1 id=1 reason=suspend.
[2013-09-19 06:12:16 3318] INFO (XendDomainInfo:2079) Domain has shutdown: name=vm2 id=2 reason=suspend.

After remus exits, vm1 and vm2 exist on both nodes (node1 and node2). I get several messages on the dom0 console that look as follows:

INFO: task qemu-dm:N blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message

After a couple of minutes I get another series of messages on the dom0 console

block drbd2: [drbd2_worker/N] sock_sendmsg time expired, ko = 3
block drbd2: [drbd2_worker/N] sock_sendmsg time expired, ko = 2
block drbd2: meta connection shutdown by peer.
block drbd2: sock_sendmsg returned -104
block drbd2: error receiving Data, l: 4120!
block drbd2: Split-Brain detected but unresolved, dropping connection!
block drbd2: error receiving ReportState, l: 4!

block drbd2: [drbd2_worker/N] sock_sendmsg time expired, ko = 3
block drbd2: [drbd2_worker/N] sock_sendmsg time expired, ko = 2
block drbd2: error receiving Data, l: 4120!
block drbd2: Split-Brain detected but unresolved, dropping connection!
block drbd2: meta connection shutdown by peer.
block drbd2: error receiving ReportState, l: 4!

I am now trying different DRBD sync rates to see if DRBD protocol D takes too much of the available bandwidth, but I doubt this is a performance/resource issue since:
1) I can run xm migrate --live vm1 node2 & and xm migrate --live vm2 node2 & without any problem.
2) I have in the past been able to run remus vm1 node2 > /var/log/vm1.log 2>&1 & and remus vm2 node1 > /var/log/vm2.log 2>&1 & without any problem, that is, two remus instances in opposite directions.
3) I can remus one VM for several days, but when I start the second remus, not only does the second remus abort, but the first one, which had been running for days in a row, also aborts, sometimes even before the second one does.

Any more ideas are greatly appreciated.

Remus LOG
marcelopereiraj, 17 Sep 2013 05:40


Can I view stats about checkpoints in Xen 4.2.1?

My log only contains:

PROF: suspending at 1379372481.629415
PROF: resumed at 1379372481.645901
PROF: flushed memory at 1379372481.650006
PROF: suspending at 1379372481.672432
PROF: resumed at 1379372481.689282
PROF: flushed memory at 1379372481.693019

What are these numbers? Is there any information about dirty pages, DRBD size, checkpoint size, and so on?

"remus" works perfectly, all right, but I not have any information about "checkpoints". Its normal?

Is there a patch (for stats) for version 4.2.1 too?
My install guide is:



Remus LOG by marcelopereiraj, 17 Sep 2013 05:40

I am surprised you are facing this issue on a 2.6.32 kernel.
The 34 seconds is certainly strange. What kind of VM is it?
What's the memory allocated to each domain and to dom0 (i.e., what is the total system memory)?
Do you mean it takes 34 seconds to see any output in the log file, or to see the backup domU in paused state on the other node?

First, are you sure you have the sch_plug or sch_queue kernel module required for network buffering?
(I am not sure about the naming. Grep for sch_ in tools/python/xen/remus/*.py.)
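A quick, hedged way to run that check (the sch_plug / sch_queue names are the assumption here; verify them against the remus sources as suggested above):

```shell
#!/bin/bash
# Check whether the network-buffering qdisc modules are available.
# Module names are an assumption -- verify against tools/python/xen/remus/*.py.
check_sched_modules() {
    local m
    for m in sch_plug sch_queue; do
        if modinfo "$m" > /dev/null 2>&1; then
            echo "$m: available"
        else
            echo "$m: not found"
        fi
    done
}

check_sched_modules
```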

I would start with the one-VM config first, to make sure all cases (no-net, blackhole, failover, etc.) work properly (without the 34-second delay).
It is possible that the CPU on the primary is getting overloaded when you start the second remus, interrupting the first one and ultimately failing everything.
That's because, when you start remus, the full VM state is copied over, which requires a lot of spare memory and processing power in dom0.

So here are my suggestions:
1. Switch to DRBD 8.3.11 if it works (source available on this website)
2. Disable hyperthreading
3. Pin dom0 vcpu0 to cpu0. Give dom0 at least two cores.
4. Pin the guests to remaining cores.
5. Start with one small sized VM and work your way up to 2 small VMs. Then bigger VMs.
6. If things still fail, modify the timeouts patch to make Remus wait a long time before calling it quits (make it arbitrarily huge, like 30 s or so). Do the same with the DRBD config file. (As a starter, you can just replicate without the disk: change the disk spec from DRBD to /dev/drbdN, where N is the device associated with the resource. Remus will complain that the disk is not replicated, but you can ignore that warning for now.)
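Suggestions 3 and 4 might look like this with the xm toolstack (a sketch only; the 4-core layout and the vm1/vm2 names are illustrative assumptions):

```shell
#!/bin/bash
# Hedged sketch of the vcpu pinning suggested above, assuming a 4-core host
# with hyperthreading disabled and two guests named vm1/vm2.
# xm vcpu-pin syntax: xm vcpu-pin <domain> <vcpu> <cpu>
pin_all() {
    xm vcpu-pin Domain-0 0 0   # dom0 vcpu0 -> physical cpu0
    xm vcpu-pin Domain-0 1 1   # give dom0 a second dedicated core
    xm vcpu-pin vm1 0 2        # guests get the remaining cores
    xm vcpu-pin vm2 0 3
}

# Only pin when the xm toolstack is actually present:
if command -v xm > /dev/null 2>&1; then
    pin_all
fi
```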

Hope this helps


I have tried remus replication of two HVM VMs without any success.

dom0 setup:
CentOS release 6.3
Linux #1 SMP Wed Aug 1 14:17:44 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
Xen version: 4.1.2
drbd version 8.3.9
eth0/1 speed: 1000Mb/s
I am using protocol D for disk replication, and remus for HVM CPU and memory replication.
hw setup: Intel Core i7-2720QM @ 2.2 GHz; BIOS settings (Advanced -> Processor & clock options): EIST and Turbo Boost Technology enabled, Virtualization Technology enabled.

- I staggered the start of the remus replication of each VM (remus -i 100 vm host > /var/log/vm.log 2>&1 &) to decrease the initial CPU load spike that can be seen using xentop, for example. This failed.
- I tried increasing the replication interval from 100 ms to 500 ms and then to 1000 ms. Both attempts failed.
- I then tried replicating with the options --blackhole and --no-net, to eliminate any potential disk or network buffering issues. This failed too.
- I tried setting the disk type to xvda instead of hda. That failed too.
- I tried exiting my VNC session, once I had verified that my app was up and running, before starting remus, in case that could mess things up. That failed too.

I then tried xen migrating live two VMs node1->node2 simultaneously. This succeeded without any problem, even though the CPU load on node2 was > 100%.

So whatever this is, it does not seem to be a CPU load issue either, or a Xen issue.

Also, it takes 34 seconds from the time remus is started until bytes actually begin to be transmitted. I am not sure if that indicates what could possibly be wrong.

example test case:
First I make sure the DRBD disk state is good using cat /proc/drbd: Connected Secondary/Secondary UpToDate/UpToDate on both node1 and node2.
a) xm create vm1.cfg & xm create vm2.cfg
b) wait a few minutes until vm1 & vm2 are up & running
c) remus -i 100 vm1 node2 > /var/log/vm1.log
d) wait ~34 seconds until bytes are being transmitted node1 -> node2
e) remus -i 100 vm2 node2 > /var/log/vm2.log
f) after less than a minute remus for vm1 exits, and remus for vm2 never gets underway
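Steps (a)-(f) can be scripted so every run is identical. This is only a sketch under the post's own assumptions (vm1.cfg/vm2.cfg, target node2, the ~34 s warm-up); the waits, the "run" guard, and the overridable DRBD status path are mine:

```shell
#!/bin/bash
# Sketch of the test sequence above; waits and names are illustrative.
DRBD_STATUS="${DRBD_STATUS:-/proc/drbd}"

drbd_ready() {
    # Pre-check: resources Connected and UpToDate/UpToDate on this node
    grep -q "Connected" "$DRBD_STATUS" && grep -q "UpToDate/UpToDate" "$DRBD_STATUS"
}

run_test() {
    drbd_ready || { echo "DRBD not healthy, aborting" >&2; return 1; }
    xm create vm1.cfg && xm create vm2.cfg              # (a)
    sleep 120                                           # (b) let the guests boot
    remus -i 100 vm1 node2 > /var/log/vm1.log 2>&1 &    # (c)
    sleep 40                                            # (d) ~34 s before bytes flow
    remus -i 100 vm2 node2 > /var/log/vm2.log 2>&1 &    # (e)
    wait                                                # (f) see which remus exits
}

# Only run when explicitly asked: ./remus-test.sh run
if [ "$1" = "run" ]; then run_test; fi
```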

Here is the tail of the log file for vm1:
Total Data Sent= 48.996 MB
Sending 5873 bytes of QEMU state
Total Data Sent= 49.015 MB
domain 10 not shut down
xc: error: Suspend request failed: Internal error
xc: error: Domain appears not to have suspended: Internal error
qemu logdirty mode: disable

Here is the tail of the log file for vm2:
qemu logdirty mode: enable

The drbd status is Connected Primary/Primary UpToDate/UpToDate for vm1 which is in state -s on node1 and in state p- on node2.
The drbd status is Connected Primary/Primary UpToDate/UpToDate for vm2 which is in state r- on node1 and in state -- on node2.

It appears that what I end up with is a rather unhealthy state of affairs. I am not sure if this can be called split-brain, cloning, or something else. Good it is not, as Yoda would say.

I cannot find /var/lib/xen/suspend_evtchn_lock* files or anything remotely similar in my dom0.

Any advice would be greatly appreciated.

Hello, I have a question regarding performance, specifically the ping latency to the domU.
Without Remus, the ping round-trip time is 0.06 ms.
After enabling Remus (-i 10), the ping round-trip time is about 15 ms.

  • DomU VM without load

Is that correct?

uname -a: Linux left 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:31:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

Thank you!

I applied the timeout patch, but I don't know what happens.

After this patch, is the remus heartbeat timeout 100 ms, and then 500 ms afterwards?
What is the default heartbeat timeout?


remus heartbeat timeout by kleberdexter, 29 May 2013 04:55

I've installed Xen 4.1.4 and DRBD 8.3.11 on two Ubuntu 12.04 64-bit servers.
I start DRBD and everything goes well.
Afterwards I launch the VM; no problem either.
Then I launch remus --no-net rt xen2, and the error "block drbd1: Local backing block device frozen?" appears many times in /var/log/syslog, and the VM freezes.

Is there a solution or a patch for this problem?

Kind regards

Drbd error by random9, 23 May 2013 17:14

With PV guests, I get the error when I start remus for the second VM.

With HVM guests I get the error after some minutes.
The error is:

PROF: flushed memory at 1367809647.187154
PROF: suspending at 1367809648.186917
issuing HVM suspend hypercall
suspend hypercall returned 0
domain 11 not shut down
xc: error: Suspend request failed: Internal error
xc: error: Domain appears not to have suspended: Internal error
PROF: resumed at 1367809648.189705
resuming QEMU

Do you know if it is possible to back up an entire Xen host (more than one VM) to another host?

What error are you talking about?

Thanks for the reply.

I use NFS, but one disk per VM. I'm not sharing disks between VMs.
I tried blackhole replication but got the same error. I applied this patch. How long is the timeout with and without this patch?

I can only run it with HVM and 256 MB of memory.

Sorry for my English.

I don't think 4.2 is a good choice. The last time I tested it, it was extremely slow.
I have never tried remus with NFS. Assuming you have read the Remus paper: Remus is not meant to work on shared disks.

The following suggestions assume you are running a setup as described in this wiki. (xen 4.1)

To first see whether it's the primary or the backup that is preventing you from running multiple Remus instances, I suggest you run Remus in blackhole replication mode (see the section "Replicating to /dev/null") on every VM on the physical host. If all VMs are successfully replicated to /dev/null, then "incrementally" change the replication target to the backup host. That is, first replicate VM1 alone to the backup host while the other VMs are replicated to /dev/null. Then add another VM, and so on. This process will help you narrow down the number of VMs that can run on the physical host without unnecessarily tweaking timeouts. Then tweak the timeouts in 05_timeouts.patch (a link to this file can be found in the installation instructions).
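A small dry-run helper can keep that incremental procedure honest by printing the exact remus invocation for each phase. Hedged: the --blackhole spelling is taken from this thread; check the options on your own remus build before running anything.

```shell
#!/bin/bash
# Dry-run helper for the incremental procedure: print the remus invocation
# for a VM, given a target of "blackhole" (replicate to /dev/null) or a host.
remus_cmd() {
    local dom="$1" target="$2"
    if [ "$target" = "blackhole" ]; then
        echo "remus --blackhole -i 100 $dom"
    else
        echo "remus -i 100 $dom $target"
    fi
}

# Phase 1: every VM to /dev/null. Phase 2 would move vm1 (then vm2, ...)
# to the real backup host while the rest stay on /dev/null.
for vm in vm1 vm2; do
    remus_cmd "$vm" blackhole
done
```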

Hello, how are you?
I have the same problem with Xen 4.2 plus NFS.
Did you solve it just by using HVM guests?

The script will take care of it. It automatically promotes a disk from secondary to primary when a VM is booted from the disk, assuming you used the drbd:<resourcename> syntax in the disk section of the VM config file.

Thanks Shriram. Just a follow-up question: do I need to set the primary server's disk as primary, or just leave the disks as secondary/secondary and let the block-drbd script take care of it?

The installation instructions given there were incorrect with respect to the DRBD setup. You are not supposed to manually promote both disks to primary. The block-drbd script does it automatically when Remus is running.

When you first boot up a VM, the script will also automatically promote the local disk to primary (so the end-to-end setup will be primary/secondary).

Using Remus with protocol C is not guaranteed to work under all failure circumstances.

Hello All,

I tried to set up Remus HA, and as part of it I used DRBD with protocol D and the DRBD resource set as primary on both servers; with that, I end up with the block device locking up and split-brain issues. If I change the DRBD resource to protocol D with primary/secondary, or to protocol C with primary/primary, both DRBD and Remus work and I am able to test HA. I would like to know the importance of the primary/primary requirement, and whether I can use primary/secondary with protocol D.
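For reference, a dual-primary protocol D resource along the lines this thread assumes might look like the following. This is a hedged sketch: the device paths, addresses, and sync rate are placeholders, and protocol D itself comes from the Remus-patched DRBD, not stock 8.3.

```
resource vm1 {
  protocol D;                  # Remus-patched DRBD; stock DRBD only has A/B/C
  net {
    allow-two-primaries;       # needed for the Primary/Primary state Remus uses
  }
  syncer { rate 100M; }        # placeholder resync rate
  on node1 {
    device    /dev/drbd1;
    disk      /dev/sda5;       # placeholder backing device
    address   192.168.1.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd1;
    disk      /dev/sda5;
    address   192.168.1.2:7789;
    meta-disk internal;
  }
}
```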

I followed the installation instructions in, and installed drbd-8.3.11 and Xen 4.1.4 on Ubuntu 12.10 with Linux kernel 3.5.0-23.


The problem is the checkpointing time.
I was using a big checkpointing interval (-i 1000).

Now I use 40, 50, or 200.

I hope this is useful.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License