一、Installing MongoShake
1、Introduction to MongoShake
MongoShake is a general-purpose platform service written by Alibaba Cloud in Golang. It replicates MongoDB data by reading MongoDB's oplog, which makes it suitable for a range of replication needs.
MongoShake also supports subscribing to and consuming the log data, with flexible integration via SDK, Kafka, MetaQ, and other channels, covering scenarios such as log subscription, data-center synchronization, and asynchronous cache eviction.
Note: this tool only supports replica set or sharded cluster sources; a standalone node cannot be used as the source.
2、Downloading MongoShake and usage notes
Download:
wget https://github.com/alibaba/MongoShake/releases/download/release-v2.0.7-20190817/mongo-shake-2.0.7.tar.gz
Extract:
mkdir mongoshake
tar xvf mongo-shake-2.0.7.tar.gz -C ./mongoshake
Edit the configuration file
Open MongoShake's configuration file collector.conf with vim. Every parameter carries an inline comment, so the options are not repeated here.
The following is my collector.conf:
mongo_urls = mongodb://192.168.83.126:<port>
collector.id = mongoshake
checkpoint.interval = 5000
mongo_connect_mode = secondaryPreferred
http_profile = 9100
system_profile = 9200
log_level = debug
log_file = collector.log
log_buffer = true
# black/white namespace lists for filtering what is synchronized
filter.namespace.black =
filter.namespace.white =
oplog.gids =
shard_key = auto
syncer.reader.buffer_time = 1
worker = 8
worker.batch_queue_size = 64
adaptive.batching_max_size = 1024
fetcher.buffer_capacity = 256
worker.oplog_compressor = none
tunnel = direct
tunnel.address = mongodb://192.168.83.129:<port>
context.storage = database
context.address = ckpt_default
context.start_position = 2000-01-01T00:00:01Z
master_quorum = false
replayer.dml_only = true
replayer.executor = 1
replayer.executor.upsert = false
replayer.executor.insert_on_dup_update = false
replayer.conflict_write_to = none
replayer.durable = true
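The two namespace lists above are left empty, so every database and collection is synchronized. When filtering is needed, my reading of the MongoShake 2.x documentation is that each list takes semicolon-separated namespaces (a database name or db.collection) and that only one of the two lists may be non-empty at a time; check the comments in your own collector.conf to confirm. An illustrative fragment (the names are made up):

```
# exclude a whole database and one collection; leave the white list empty
filter.namespace.black = logdb;appdb.audit_trail
filter.namespace.white =
```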
3、Start the sync and print the log output
./collector -conf=collector.conf -verbose
4、Watch the log output
[09:38:57 CST 2019/12/09] [INFO] (mongoshake/collector.(*ReplicationCoordinator).Run:80) finish full sync, start incr sync with timestamp: fullBeginTs[1780991443], fullFinishTs[1780993737]
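fullBeginTs and fullFinishTs in the line above are oplog positions. A BSON Timestamp is a 64-bit value whose high 32 bits are Unix seconds and whose low 32 bits are an increment counter; when a log prints the full 64-bit form, it can be decoded with a sketch like this (the sample value below is illustrative, not taken from this log):

```python
from datetime import datetime, timezone

def decode_bson_timestamp(ts: int) -> tuple[int, int]:
    """Split a 64-bit BSON Timestamp into (unix_seconds, increment)."""
    return ts >> 32, ts & 0xFFFFFFFF

# Illustrative value: seconds for 2019-12-09 01:38:57 UTC, increment 5
ts = (1575855537 << 32) | 5
seconds, inc = decode_bson_timestamp(ts)
print(seconds, inc)  # 1575855537 5
print(datetime.fromtimestamp(seconds, tz=timezone.utc).isoformat())  # 2019-12-09T01:38:57+00:00
```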
5、Monitor MongoShake's status
Once incremental synchronization has started, you can open another terminal window and monitor MongoShake with the following command.
./mongoshake-stat --port=9100
Parameter          Description
logs_get/sec       Number of oplog entries fetched per second.
logs_repl/sec      Number of oplog entries replayed per second.
logs_success/sec   Number of oplog entries successfully replayed per second.
lsn.time           Time the last oplog entry was sent.
lsn_ack.time       Time of the last write acknowledged by the target.
lsn_ckpt.time      Time the checkpoint was last persisted.
now.time           Current time.
replset            Replica set name of the source database.
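A useful derived number is the replication lag: the gap between lsn.time (last oplog sent) and lsn_ack.time (last write acknowledged by the target). A minimal sketch, assuming the times come out as "YYYY-MM-DD HH:MM:SS" strings (adjust the format string to whatever your mongoshake-stat build actually prints):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"  # assumed display format; adjust to your output

def replication_lag_seconds(lsn_time: str, lsn_ack_time: str) -> float:
    """Seconds by which the target's acknowledged writes trail the sent oplog."""
    sent = datetime.strptime(lsn_time, FMT)
    acked = datetime.strptime(lsn_ack_time, FMT)
    return (sent - acked).total_seconds()

print(replication_lag_seconds("2019-12-09 09:38:57", "2019-12-09 09:38:55"))  # 2.0
```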
二、Migration plans
1、Full migration from a single instance to a replica set
1) Environment
Source:
192.168.84.129:27017 (primary)
192.168.84.129:27018 (secondary)
192.168.84.129:27019 (secondary)
Target:
192.168.83.126:27017 (primary)
192.168.83.127:27017 (secondary)
192.168.83.128:27017 (secondary)
2) In oplog mode only the increments are synchronized, so the existing data must be copied over first
Back up the source:
cd /usr/local/mongodb/bin
./mongodump --host localhost --port 27017 --oplog
Restore into the target (the dump above was taken with --oplog, so --oplogReplay is added here to replay the operations captured while the dump was running):
./mongorestore --drop --oplogReplay /usr/local/mongodb/bin/dump
3) MongoShake configuration file
mongo_urls = mongodb://192.168.83.129:<port>
collector.id = mongoshake
checkpoint.interval = 5000
mongo_connect_mode = standalone
http_profile = 9100
system_profile = 9200
log_level = debug
log_file = collector.log
log_buffer = true
# black/white namespace lists for filtering what is synchronized
filter.namespace.black =
filter.namespace.white =
oplog.gids =
shard_key = auto
syncer.reader.buffer_time = 1
worker = 8
worker.batch_queue_size = 64
adaptive.batching_max_size = 1024
fetcher.buffer_capacity = 256
worker.oplog_compressor = none
sync_mode = all
tunnel = direct
tunnel.address = mongodb://192.168.83.126:<port>
context.storage = database
context.address = ckpt_default
context.start_position = 2000-01-01T00:00:01Z
master_quorum = false
replayer.dml_only = true
replayer.executor = 1
replayer.executor.upsert = false
replayer.executor.insert_on_dup_update = false
replayer.conflict_write_to = none
replayer.durable = true
replayer.collection_drop = true
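Compared with the first configuration, this one adds sync_mode = all. As I understand the MongoShake 2.x options (verify against the comments in your own collector.conf), the accepted values are summarized below; with all, the full copy is performed by MongoShake itself, so the mongodump/mongorestore step is strictly required only when running in oplog-only mode.

```
# sync_mode values in MongoShake 2.x (illustrative summary, verify locally):
#   all      - copy the existing data first, then tail the oplog continuously
#   document - full copy of existing data only, no incremental phase
#   oplog    - incremental sync only; load existing data separately (e.g. dump/restore)
sync_mode = all
```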
4) Start the sync
./collector -conf=collector.conf -verbose
5) Verify the sync
Statements to generate test data (run in the mongo shell on the source):
use testdb;
for (i=1;i<=10000;i++) db.tb3.insert( {name:"student"+i, age:(i%120), address: "shanghai" } );
db.tb3.count()
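The mongo shell loop above creates 10,000 documents. The same documents can be built in Python for a quick cross-check of the expected shape and counts (any pymongo connection details are up to you and not shown here):

```python
def make_test_docs(n: int = 10000) -> list[dict]:
    """Build the same documents as the mongo shell loop: student1..studentN."""
    return [{"name": f"student{i}", "age": i % 120, "address": "shanghai"}
            for i in range(1, n + 1)]

docs = make_test_docs()
print(len(docs))   # 10000
print(docs[0])     # {'name': 'student1', 'age': 1, 'address': 'shanghai'}
```

Once the sync has caught up, db.tb3.count() should return 10000 on both the source and the target.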