I have a sharded cluster with 4 shards, and I'm testing different configurations and shard keys.
I have a collection with 5,000,000 documents that I'm sharding, but only 3 of the shards are being used: 2 of them have a lot of chunks and the other one has only 1 chunk.
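For reference, the collections were sharded with commands along these lines (the shard keys are taken from the sh.status() output below; the exact options may have differed):

    // enable sharding on the databases and shard each collection on its key
    sh.enableSharding("test_shard")
    sh.shardCollection("test_shard.coll", { "_id": 1 })

    sh.enableSharding("infonavit_cartera")
    sh.shardCollection("infonavit_cartera.creditos", { "saldo_actual": 1 })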
The output of sh.status() looks like this:
sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("55807b06780a890b28354352")
}
shards:
    { "_id" : "sh1_rs1", "host" : "sh1_rs1/192.168.0.126:27018,192.168.0.189:27017,192.168.0.190:27018" }
    { "_id" : "sh2_rs1", "host" : "sh2_rs1/192.168.0.222:27018,192.168.0.229:27018,192.168.0.59:27018" }
    { "_id" : "sh3_rs1", "host" : "sh3_rs1/192.168.0.188:27018" }
    { "_id" : "sh4_rs1", "host" : "sh4_rs1/192.168.0.228:27018" }
balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
        8360 : Failed with error 'moveChunk failed to engage TO-shard in the data transfer: migrate already in progress', from sh1_rs1 to sh4_rs1
databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : false, "primary" : "sh1_rs1" }
    { "_id" : "infonavit", "partitioned" : false, "primary" : "sh1_rs1" }
    { "_id" : "test_shard", "partitioned" : true, "primary" : "sh1_rs1" }
        test_shard.coll
            shard key: { "_id" : 1 }
            chunks:
                sh1_rs1  1
                sh2_rs1  1
                sh3_rs1  1
            { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("557f394e256393d0ee325b14") } on : sh2_rs1 Timestamp(2, 0)
            { "_id" : ObjectId("557f394e256393d0ee325b14") } -->> { "_id" : ObjectId("5581969fe3a93c05ea233e56") } on : sh3_rs1 Timestamp(3, 0)
            { "_id" : ObjectId("5581969fe3a93c05ea233e56") } -->> { "_id" : { "$maxKey" : 1 } } on : sh1_rs1 Timestamp(3, 1)
    { "_id" : "infonavit_cartera", "partitioned" : true, "primary" : "sh1_rs1" }
        infonavit_cartera.creditos
            shard key: { "saldo_actual" : 1 }
            chunks:
                sh1_rs1  43
                sh2_rs1  1
                sh3_rs1  37
            too many chunks to print, use verbose if you want to force print
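One thing that stands out is the 8360 failed migrations from sh1_rs1 to sh4_rs1 with 'migrate already in progress', which would explain why sh4_rs1 has no chunks at all. I assume I can check the balancer and look for a stuck migration on sh4_rs1 roughly like this (not sure this is the right approach):

    // on the mongos: balancer state
    sh.getBalancerState()     // whether the balancer is enabled
    sh.isBalancerRunning()    // whether a balancing round is active right now

    // connected directly to the primary of sh4_rs1:
    // dump all operations and look for a long-running "migrateThread" entry
    db.currentOp(true)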
Do you have any idea why this is happening?