mysql_load_check

table load tool.

Usage: ./mysql_load_check user/pwd@ip:port [options]
       -h                 help
       -table ds.t1       table to load into
       -txt 1.txt         load file

Examples:
       ./mysql_load_check root/1qaz\!QAZ -table ds.t1 -txt 1.txt

zcbus_check

This is the zcbus check tool.

Usage: ./zcbus_check [options]
       -h                 help
       -log_level 2       log_level
       -log 1.log         log file
       -p .ini            parameter file
       -nodeid 20         nodeid

Examples:
       ./zcbus_check -nodeid 20000 -p /home/config/zcbus.ini -log check.log

freebcp

usage:  freebcp [[database_name.]owner.]table_name|query {in | out | queryout } datafile
        [-m maxerrors] [-f formatfile] [-e errfile]
        [-F firstrow] [-L lastrow] [-b batchsize]
        [-n] [-c] [-t field_terminator] [-r row_terminator]
        [-U username] [-P password] [-I interfaces_file] [-S server] [-D database]
        [-v] [-d] [-h "hint [,...]"] [-O "set connection_option on|off, ..."]
        [-A packet size] [-T text or image size] [-E]
        [-i input_file] [-o output_file]

example: freebcp testdb.dbo.inserttest in inserttest.txt -S mssql -U guest -P password -c

osstat_tool


Usage: ./osstat_tool

os stat tool.

Output Options:
  -sample 3,1              specify sample interval and sample times

Examples:
  ./osstat_tool
  ./osstat_tool -sample 3
  ./osstat_tool -sample 3,1

zbmq_tool

Linux d881ed3efbce 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
zbmq_tool: Release 8.1-16 64 bit (QA) - Production on 2023-12-14 14:47:47
Copyright (c) 2024 ZCBUS. All Rights Reserved.
process id 456435
This is the zbmq export tool.

Usage: ./zbmq_tool [options]

Generic Options:
       -h                 help
       -log_level 2       log_level
       -log 1.log         log file

Special Options:
       -broker 127.0.0.1:9092  brokers, for zbmq server mode
       -path /zbmq             zbmq path
       -topic ttt              topic name
       -info                   print topic information
       -offset 10              export start offset
       -count 1                export message count
       -timestamp "2020-03-22 11:20:21"
                               export start timestamp; if set, -offset is ignored
       -s                      statistics only
       -o 1.bsd                output or input bsd file; if set, -dir is ignored

Examples:
       ./zbmq_tool -path /zbmq -topic 1.test.testtab.f -o 1.bsd
       ./zbmq_tool -topic 1.test.testtab.r -timestamp "2020-03-22 11:20:21"
       ./zbmq_tool -topic 1.test.testtab.r -offset 20 -count 1

ddlparse

DDL parse tool.

Usage: ./ddlparse sqlfile
       -h                 help
       -split --next      use it to split two SQL statements
       -dbtype mysql/postgresql/sqlserver/oracle

Examples:
       ./ddlparse -dbtype mysql 1.sql

xlog_dump


Usage: ./xlog_dump log_file

Dump PostgreSQL xlog file.

Options:
  -gauss                    dump openGauss xlog
  -kingbase V008R006C005B0054
                            dump kingbase wal, specify version
  -block_size size          default 8192, show wal_block_size
  -segment_size size        default 16M, show wal_segment_size
  -dict dict.dat            dictionary
  -relfilenode 1234,3456    specify relfilenode to dump
  -continuous               continuous dump next log
  -end_ptr 0/2A07CE99       dump end ptr
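
For orientation only: -block_size and -segment_size correspond to PostgreSQL's wal_block_size and wal_segment_size. Assuming the standard PostgreSQL naming scheme, the WAL segment file containing an LSN such as the -end_ptr value 0/2A07CE99 can be computed from the segment size. The sketch below is illustrative and not part of xlog_dump; the timeline default and function name are assumptions.

```python
# Sketch: derive a standard PostgreSQL WAL segment file name from an LSN
# such as "0/2A07CE99", given a configurable wal_segment_size.

def wal_file_name(lsn: str, timeline: int = 1,
                  segment_size: int = 16 * 1024 * 1024) -> str:
    """Return the WAL segment file name that contains the given LSN."""
    hi, lo = (int(part, 16) for part in lsn.split("/"))
    byte_pos = (hi << 32) | lo
    segno = byte_pos // segment_size
    # How many segments fit in one "xlog id" (high 32 bits) depends on size.
    segs_per_id = 0x100000000 // segment_size
    return "%08X%08X%08X" % (timeline, segno // segs_per_id,
                             segno % segs_per_id)

print(wal_file_name("0/2A07CE99"))  # 00000001000000000000002A
```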

binlog_dump


Usage: ./binlog_dump [options] log_file

Dump MySQL Binary log file.

Options:
  -blen                        total buffer allocated (default:50MB)
  -pos pos                     start position
  -remote                      read log from remote
  -z                           remote read with compress
  -server_id                   slave server id
  -s                           statistics table operation in binlog
  -speed                       requires -remote, test remote read speed
  -raw                         requires -remote, output raw binlog data to log files
  -odir                        requires -raw, save files dir
  -last_log mysql-bin.000003   requires -remote, the last log to dump
  -stop-never                  requires -remote, wait for more data from the server
                               instead of stopping at the end of the last log

Examples:
       ./binlog_dump mysql-bin.000002
       ./binlog_dump mysql-bin.000002 -pos 144
       ./binlog_dump mysql-bin.000770 -remote root/1qaz\!QAZ@127.0.0.1:3306
       ./binlog_dump -s mysql-bin.000002
       ./binlog_dump mysql-bin.000002 -raw -odir /tmp

oracle_config

Oracle replication configuration tool, provided by ZCBUS.

Usage: ./oracle_config user/pwd@ip:port/server [options]

  -h                           help
  -service xout                log capture service name
  -add_table t1.a,t1.b         add capture table; to capture all tables of t2, set t2.*
  -remove_table t1.a,t2.*      remove capture table; to remove all tables of t2, set t2.*
  -list                        list capture information
  -remove                      remove service
Examples:
  ./oracle_config zcbus/zcbus@172.17.58.145:1521/oracle12c -service xout1 -add_table dt2.test
  ./oracle_config zcbus/zcbus@172.17.58.145:1521/oracle12c -service xout1 -remove

lmrfile_dump


Usage: ./lmrfile_dump 1.lmr

lmr file dump tool.

Options:
  -s                    statistics only
  -objn 100             specify objn to dump
  -conn zcbus/zcbus@172.17.46.244:1521/orcl
                        specify oracle connection; with -s, fetch table names for objn

Examples:
  ./lmrfile_dump 1.lmr

xlog_recv


Usage: ./xlog_recv user/pwd@ip:port/database

Remotely read PostgreSQL xlog files.

Options:
  -o /tmp             output log path
  -start_log 000000010000000000000009
                      start log name
  -end_log 000000010000000000000009
                      end log name

Examples:
       ./xlog_recv postgresql:zcbus/1qaz!QAZ@172.17.104.186:15431/zcbusdb -o /tmp
       ./xlog_recv opengauss:zcbus/1qaz!QAZ@172.17.104.186:15431/zcbusdb -o /tmp

oggdump

usage: ./oggdump file pos

logmnr2file


Usage: ./logmnr2file user/pwd@ip:port/db

Export LogMiner (logmnr) output to a file.

Options:
  -start_scn 1111             logmnr start scn
  -end_scn 2222               logmnr end scn
  -redolog redo01.log         specify oracle redo log
  -thread 1                   specify thread
  -sequence 56                specify sequence
  -records 1024               max output records, default output all records
  -fetch_record_once 1        records to fetch at once, default 1000
  -filter_useless_operations  filter useless operations in where
  -to_single_byte             use to_single_byte for varchar columns
  -no_filter                  output all type records
  -where "DATA_OBJ#=12345"    where condition
  -sleep 100                  sleep time every 100000 records, in ms
  -o 1.dat                    output file
  -dameng                     logmnr log of dameng

Examples:
  ./logmnr2file zcbus/zcbus@172.17.46.244:1521/orcl -redolog /tmp/redo01.log -start_scn 1111 -o 1.lmr
  ./logmnr2file zcbus/zcbus@172.17.46.244:1521/orcl -thread 1 -sequence 56 -start_scn 1111 -o 1.lmr

log2bsdata

log2bsdata: Release 8.1-16 64 bit (QA) - Production on 2023-12-14 14:47:47
This is the log2bsdata tool.

Usage: ./log2bsdata user/pwd@tns [options]
       -h                        help
       -v                        version
       -log_level 2              log_level
       -log                      log file
       -product_type adb         database product type, adb
       -big_endian               big endian mode
       -start_pos pos            start position
       -end_pos pos              end position, for pg
       -service xout             xstream service name, for oracle or ogg, if not set, use logmnr
       -rac_mode                 for logmnr, rac mode
       -rac_mode_threads 3       for logmnr rac mode, set threads to parse log
       -scn_back 10              for logmnr, scn_back
       -update_columns_complete  for logmnr, when update columns are incomplete, select them back
       -fetch_record_once 1      for logmnr, fetch record once
       -logmnr_where "DATA_OBJ#=12345"
                                 for logmnr, where condition
       -logmnr_select_binary     for dameng logmnr
       -slot zcbus_test          pg slot name
       -ogg_home /ogg            ogg home
       -read_from_remote         read log from remote, for mysql, pg, logmnr
       -log_path /tmp            log path, for local read log
       -compress_read_mode       compress read log from remote, for mysql
       -log_buffer_len 10M       log buffer length, default 20M
       -proxy_mode 2             for sqlserver, proxy mode
       -proxy_server sqlserver:zcbus/password@172.17.104.185:1433
                                 for sqlserver, proxy server
       -proxy_host 192.168.1.1   for sqlserver, proxy host
       -proxy_path c:\bak        for sqlserver, proxy path
       -db database              database, for pg, sqlserver
       -table dbo.test           table to parse
       -dict dict.dat            dictionary to use
       -event_trigger_table zcbus.ddl_event_tbl
                                 ddl event trigger table
       -compare_table zcbus.auto_compare_tbl
                                 auto compare table
       -filter_trans_table zcbus.filter_trans_table
                                 if detect dml of this table, filter whole trans
       -o 1.bsd                  output file
       -get_dict 20003           get dict from bus_push_capture_dict of nodeid
       -make_dict                make dict from database of -table
       -no_db oracle             no db mode, specify database type
       -delay_analysis_time 10   delay analysis time, for sybase
       -log_cache_buffer_len 80M pg cache log in memory size
       -not_filter_dml_in_ddl    for pg, do not filter dml in ddl trans

Examples:
       ./log2bsdata oracle:zcbus/password@172.17.58.145:1521/oracle12c -service xout -table dt2.test2 -start_pos 00000E5CB540000000000000000000000E5CB540000000000000000001
       ./log2bsdata oracle:zcbus/password@ORCL11G -service xout -o 1.bsd
       ./log2bsdata mysql:zcbus/password -start_pos 000768.114 -o 1.bsd
       ./log2bsdata db2:zcbus/password@172.17.58.145:50000 -db zcbus -table DB2INST1.TEST2 -log_level 2 -start_pos [48976306] -o 1.bsd
       ./log2bsdata sqlserver:zcbus/password@172.17.104.185:1433 -db ds -table dbo.test -o 1.bsd
       ./log2bsdata postgresql:zcbus/password -db postgres -table public.test -o 1.bsd
       ./log2bsdata ogg:zcbus/password@172.17.58.145:1521/oracle12c -service ext01 -ogg_home /ogg
       ./log2bsdata sybase:sa/password@172.17.104.186:5000 -db ds -table dbo.test -o 1.bsd
       ./log2bsdata mysql:zcbus/password@172.17.104.186:3306/databus -get_dict 20003 -o dict.dat
       ./log2bsdata oracle:zcbus/password@172.17.58.145:1521/oracle12c -table dt.test -make_dict -o dict.dat
       ./log2bsdata oceanbase-mysql:root/obcluster@192.168.121.129:2883 -table ds.test -log 1.log
       ./log2bsdata oceanbase-oracle:root/xxxxxxxx@127.0.0.1:2883 -table ds.test -log 1.log
       ./log2bsdata -no_db oracle -read_from_remote -dict dict.dat -start_pos 987435 -log_path /lmr
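
The examples above all share the connection-string shape db_type:user/pwd@host:port[/db], or an @TNS alias for Oracle. A rough, illustrative parser for that shape is sketched below; the regex and field names are assumptions based on these examples, and the real tools may accept more forms (e.g. the maxcompute URL style).

```python
import re

# Informal grammar assumed from the examples:
#   [db_type:]user/pwd[@host:port[/service_or_db]]   or   [db_type:]user/pwd@TNS_ALIAS
CONN_RE = re.compile(
    r"^(?:(?P<db_type>[\w-]+):)?"       # optional product prefix, e.g. oracle:, oceanbase-mysql:
    r"(?P<user>[^/]+)/(?P<pwd>[^@]+)"   # user/password
    r"(?:@(?P<target>.+))?$"            # optional @host:port/db or @alias
)

def parse_conn(conn: str) -> dict:
    """Split a tool connection string into its presumed parts."""
    m = CONN_RE.match(conn)
    if not m:
        raise ValueError("unrecognized connection string: %s" % conn)
    d = m.groupdict()
    target = d.pop("target")
    if target and ":" in target:
        hostport, _, db = target.partition("/")
        host, _, port = hostport.partition(":")
        d.update(host=host, port=int(port), db=db or None)
    else:
        d.update(alias=target)  # e.g. a TNS alias such as ORCL11G
    return d

print(parse_conn("oracle:zcbus/password@172.17.58.145:1521/oracle12c"))
```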

table_compare_changed

table_compare_changed: Release 8.1-16 64 bit (QA) - Production on 2023-12-14 14:47:47
table compare changed data tool.

Usage: ./table_compare_changed [options]
  -h                      help
  -v                      version
  -log_level 2            log_level
  -broker 127.0.0.1:9092  brokers, default: 127.0.0.1:9092
  -topic aaa              topic name for consume bsd data
  -start "2020-07-24 13:00:00"
                          start time to consume from topic
  -end "2020-07-24 13:30:00"
                          end time to consume from topic
  -source db_type:user/pwd@ip:port
                          source database
  -source_table s.t1      source table to compare
  -target db_type:user/pwd@ip:port
                          target database
  -target_table t.t1      target table to compare
  -parallel 2             select parallel count, default 4, for oracle
  -to_single_byte         convert varchar2 to single byte, for oracle
  -columns c1,c2,c3       specify columns to compare, must include pk or uk
  -repair                 output repair file

Examples:
  ./table_compare_changed -topic aaa -source mysql:zcbus/password@127.0.0.1:3306 -source_table mm.test -target mysql:zcbus/password@127.0.0.1:3306 -target_table dt2.test
  ./table_compare_changed -topic aaa -source oracle:zcbus/password@172.17.58.145:1521/oracle12c -source_table zcbus.test -target oracle:zcbus/password@172.17.58.145:1521/oracle12c -target_table zcbus.test1
  ./table_compare_changed -topic aaa -source postgresql:zcbus/password@127.0.0.1:5432 -source_table postgres.public.test -target postgresql:zcbus/password@127.0.0.1:5432 -target_table postgres.public.test1
  ./table_compare_changed -topic aaa -source sqlserver:zcbus/password@172.17.104.185:1433 -source_table ds.dbo.test -target sqlserver:zcbus/password@172.17.104.185:1433 -target_table ds.dbo.test2
  ./table_compare_changed -topic aaa -source db2:zcbus/password@172.17.58.145:50000 -source_table zcbus.DB2INST1.TEST -target db2:zcbus/password@172.17.58.145:50000 -target_table zcbus.DB2INST1.TEST1
  ./table_compare_changed -topic aaa -source sybase:zcbus/password@172.17.58.145:5000 -source_table ds.dbo.test -target sybase:zcbus/password@172.17.58.145:5000 -target_table ds.dbo.test1

zcbus_test

This is the zcbus test tool.

Usage: ./zcbus_test [options]
       -h                       help
       -log_level 2             log_level
       -log test.log            log file
       -p zcbus.ini             zcbus config db param
       -test full,real          test items, can set full or real or full,real
       -compare                 only compare, not test
       -containerid 1           containerid
       -nodeid 20003            nodeid
       -customerid 10002        customerid
       -ctlid 261               ctlid, if not set, test all ctlid of customerid
       -full_sh full.sh         table full sh to exec
       -real_sh init.sh         table real sh to exec
       -real_wait 10            after exec real_sh, minutes to wait before comparing tables, default 10

Examples:
       ./zcbus_test -p zcbus.ini -test full -full_sh full.sh -containerid 1 -nodeid 20003 -customerid 10002 -log test.log
       ./zcbus_test -p zcbus.ini -test real -real_sh init.sh -containerid 1 -nodeid 20003 -customerid 10002 -log test.log

netmap

This is the net mapping service.

Usage: ./netmap -local [ip:]port -remote [ip:]port [options]
       -h                 help
       -v                 version
       -max_conn 1024     max connections, default 128, max 20000
       -con_tmo 30        connect timeout [seconds], default 60
       -process 2         multi process, default 1, max 8
       -thread 4          multi threads in single process, default 1, max 64
       -check_cmd /tmp/check.sh
                           If there are multiple remote hosts, the command is invoked
                           before each connection to check whether the host can be
                           connected; a return value of 0 means the host is usable.
                          e.g. /tmp/check.sh 172.17.58.146 3306
       -lock              if process>1 and set -check_cmd, lock when exec check_cmd
       -log_level 2       log_level
       -log 1.log         log file

Examples:
       ./netmap -local 172.17.58.146:3306 -remote 172.17.58.149:3306 -log log.map
       ./netmap -local 3306 -remote 172.17.58.149:3306,172.17.58.150:3306 -log log.map
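
The -check_cmd hook receives the candidate host and port as arguments and must exit 0 if the host is usable. A hypothetical equivalent of the manual's /tmp/check.sh, sketched in Python (the script name, timeout, and probe method are assumptions, not part of netmap):

```python
import socket
import sys

# Hypothetical stand-in for netmap's -check_cmd hook. netmap invokes the
# command as: <cmd> <host> <port> and treats exit status 0 as "connectable".

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Probe a TCP port; True if a connection can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__" and len(sys.argv) >= 3:
    sys.exit(0 if can_connect(sys.argv[1], int(sys.argv[2])) else 1)
```

Saved as an executable script, it would be passed via -check_cmd in place of /tmp/check.sh.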

etl_check

etl file check tool, provided by ZCBUS team.

Usage: ./etl_check [options]
  -h                      help
  -log_level 2            log_level
  -log 1.log              log file
  -etl etl.ini            etl config file
  -i in.bsd               input bsd file
  -o out.bsd              output bsd file

Examples:
  ./etl_check -etl etl.ini -i in.bsd -o out.bsd
  ./etl_check -etl etl.ini

charset_map_generate

charset_map_generate: Release 8.1-16 64 bit (QA) - Production on 2023-12-14 14:47:47
generate charset map tool.

Usage: ./charset_map_generate db_type:user/pwd@tns [options]
       -h                           help
       -log_level 2                 log_level
       -log 1.log                   log file
       -full_sync_buffer 300M       export buffer length, default 200M
       -table dbo.ccc_big5          table to export for charset map
       -client_charset cp850        client charset, default cp850
       -where where.ini             where cond file, only support one table export
       -edir /tmp                   export file dir
       -target db_type:user/pwd@tns target database
       -target_table dbo.ccc_big5   target table to export for charset map

Examples:
       ./charset_map_generate sybase:zcbus/password@172.17.58.145:5000/abc -table dbo.ccc_big5 -target postgresql:zcbus/password@127.0.0.1:5432/abc -target_table public.ccc_big5 -edir /tmp

zcbus_service

ZCBUS_CACHE_PATH=/usr/local/zcbus/cache
ZCBUS_CLUSTERID=0
This is the zcbus service manager.

Usage: ./zcbus_service [options]
       -h                 help
       -v                 version
       -log_level 2       log_level
       -list              show service list of container
       -start 1,2         set start status of service ids
                          to start all services, use -start all
       -stop 1            set stop status of service ids
                          to stop all services, use -stop all
       -force_stop 1,2    set force stop status of service ids
                          to force_stop all services, use -force_stop all
       -conn_check 127.0.0.1:5500
                          check if the ip:port can be connected
[Internal options]
       -zcbus             startup zcbus program
       -etl               startup etl program
       -ksync             startup ksync program
       -compare           startup compare program
       -client            startup client program
       -check             startup table check program
       -file              startup file service program

Examples:
       ./zcbus_service -log_level 2
       ./zcbus_service -list
       ./zcbus_service -start 1,2
       ./zcbus_service -stop all

zcbus_docker

ZCBUS_CACHE_PATH=/usr/local/zcbus/cache
This is the zcbus docker manager.

Usage: ./zcbus_docker [options]
       -h                 help
       -v                 version
       -log_level 2       log_level
       -conn_test /zcbus/zcbus.ini
                          test connection and quit
[Internal options]
       -manager           startup docker manager program
       -listener          startup listener program

Examples:
       ./zcbus_docker -log_level 2

zbmq_server

ZCBUS_MQ_PATH not set, use default[/usr/local/zcbus/mq]
This is the zcbus mq server.

Usage: ./zbmq_server [options]
       -h                 help
       -v                 version
       -log_level 2       log_level
       -startup           startup zbmq server
       -shutdown          shutdown zbmq server
Examples:
       ./zbmq_server -log_level 2

table_migrate

table_migrate: Release 8.1-16 64 bit (QA) - Production on 2024-01-16 14:05:48
table migrate tool.

Usage:
       -h                          help
       -log_level 2                log_level
       -log 1.log                  log file
       -dir /tmp                   cache path
       -source db_type:user/pwd@ip:port
                                   source database
       -source_table s.t1          source table to export
       -target db_type:user/pwd@ip:port
                                   target database
       -target_table t.t1          target table to import
       -parallel 2                 for import, threads
       -sql_mode 0                 for import, 0 - load, 1 - sql bind, 2 - direct sql (default 0)
       -max_sql_len 10240          for import, in direct sql mode, max multi-insert sql length, default 0
       -drop_mode 0                for import, 0 - drop table, 1 - truncate, 2 - truncate data only (default 0)
       -filter_ddl                 for import, filter ddl operation
       -full_load_with_pk          for import, full load data with pk

Examples:
       ./table_migrate -source oracle:zcbus/password@172.17.58.145:1521/oracle12c -source_table zcbus.test2 -dir /tmp -target mysql:zcbus/password@127.0.0.1:3306 -target_table zcbus.test

tableimp

tableimp: Release 8.1-16 64 bit (QA) - Production on 2024-01-16 14:05:48
table import tool.

Usage: ./tableimp db_type:user/pwd@tns [options]
       -h                          help
       -log_level 2                log_level
       -log 1.log                  log file
       -url http://127.0.0.1:8080  ddl convert url
       -table ds.t1                table to import
       -db postgres                database, for postgresql,db2,sqlserver
       -parallel 2                 import threads
       -idir /tmp                  bsd file dir to import
       -mode 0                     0 - load, 1 - sql bind, 2 - direct sql  (default 0)
       -max_sql_len 10240          in direct sql mode, max multi-insert sql length, default 0
       -truncate                   do not create the table, only truncate it
       -truncate_data              do not create the table, only truncate data
       -filter_ddl                 filter ddl operation
       -full_load_with_pk          full load data with pk
       -ignore_trail_spaces 0      0 - not ignore, 1 - ignore (default 1)
       -db_charset GB18030         set target database charset, for sybase
       -client_charset gbk         set client charset, for sybase/pg/mysql
       -charset_map charset_map.txt
                                   custom charset map file
       -keys c1,c3                 set columns to use in place of the pk; if set, the pk is not checked
       -cols c1,c3                 set columns to apply
       -addcol ZCBUS_SOURCE_PART=zcbus_source_part:1000,ZCBUS_SOURCE_OPTYPE,ZCBUS_INSERT_OPTIME
                                   add columns, support ZCBUS_SOURCE_PART,ZCBUS_SOURCE_OPTYPE,
                                   ZCBUS_SOURCE_OPTIME,ZCBUS_TARGET_OPTIME,ZCBUS_INSERT_OPTIME,
                                   ZCBUS_SEQUENCE_NO
       -land_file                  for mysql/pg load; if set, land data to a file first
       -msg_len 5242880            split message length, default 5M
       -repair                     repair data by bsd file
       -update_to_delete_insert    update convert to delete insert
       -delete_before_insert       if -repair is set, when all insert, delete first
       -auto_choose_columns_as_pk  if not find pk/uk, and not set -keys, auto choose columns as pk
       -update_merge               merge update
       -bsd 1.bsd                  bsd file for repair
       -delete_file                delete loaded file
       -zbmq_topic ds.tt           zbmq mode, topic name
       -compatible_old_bsdata      compatible with old bsdata
       -service 127.0.0.1:10058    connect to db service to import data
       -hdfs 127.0.0.1:9000:/tmp   hdfs ip:port:path, for hive
       -version 2.1                hive version, support 2.1 or other
       -update_delete_single_row   update or delete single row
       -dml_error_skip             skip error dml
       -count_after_full_sync_end  select count(*) when full sync ends
       -remove_illegal_chars       remove illegal utf8 chars

Examples:
       ./tableimp oracle:zcbus/password@172.17.58.145:1521/oracle12c -table zcbus.test2 -idir /tmp
       ./tableimp oracle:zcbus/password@ORCL11G -table ds.t1 -idir /tmp
       ./tableimp mysql:zcbus/password@127.0.0.1:3306 -table ds.t1 -idir /tmp
       ./tableimp postgresql:zcbus/password@127.0.0.1:5432 -db dt2 -table public.test -idir /tmp
       ./tableimp sqlserver:zcbus/password@172.17.104.185:1433 -db dt2 -table dbo.test_string -idir /tmp
       ./tableimp db2:zcbus/password@172.17.58.145:50000 -db zcbus -table db2inst1.test -idir /tmp
       ./tableimp sybase:zcbus/password@172.17.58.145:5000 -db ds -table dbo.test -idir /tmp
       ./tableimp sybaseiq:zcbus/password@172.17.58.145:5000 -db ds -table dbo.test -idir /tmp
       ./tableimp redis:zcbus/password@172.17.104.186:6379 -table ds.t1 -idir /tmp
       ./tableimp dameng:zcbus/zcbus123456@172.17.104.186:5236 -table zcbus.test -idir /tmp
       ./tableimp hive:root/1qaz\!QAZ@172.17.46.243:10000 -service 127.0.0.1:10058 -hdfs 172.17.46.243:9000:/tmp -table node02.hive01 -idir /tmp
       ./tableimp hana:zcbus/zcbus123456@172.17.104.186:5236 -service 127.0.0.1:10058 -table zcbus.test -idir /tmp
       ./tableimp yashandb:zcbus/zcbus@172.17.46.243:1688 -service 127.0.0.1:10058 -table zcbus.test -idir /tmp
       ./tableimp clickhouse:default/123456@172.17.46.243:8123 -table ds.test -idir /tmp
       ./tableimp informix:ifxuser/ifxuser@101.201.81.45:9088 -service 127.0.0.1:10058 -db zcbusdb -table ds.test -idir /tmp
       ./tableimp maxcompute:zcbus/zcbus@\'http://service.cn-beijing.maxcompute.aliyun.com/api\'/mc_prog_01 -service 127.0.0.1:10058 -table ds.test -idir /tmp
       ./tableimp oceanbase-mysql:root/xxxxxxxx@127.0.0.1:2883 -table ds.test -idir /tmp
       ./tableimp oceanbase-oracle:root/xxxxxxxx@127.0.0.1:2883 -table ds.test -idir /tmp

tableexp

tableexp: Release 8.1-16 64 bit (QA) - Production on 2024-01-16 14:05:48
table export tool.

Usage: ./tableexp db_type:user/pwd@tns [options]
       -h                      help
       -log_level 2            log_level
       -log 1.log              log file
       -full_sync_buffer 300M  export buffer length, default 200M
       -export_mode 0          for pg/mysql: 1 - export by cursor mode, default 0
       -table ds.t1,dt.*       table to export
       -db postgres            database, for postgresql,db2,sqlserver,sybase
       -db_charset GB18030     set database charset, for sybase
       -client_charset gbk     set client charset, for sybase/pg/mysql
       -ddl_only               only export table ddl
       -parallel 2             select parallel count, default 4, for oracle
       -to_single_byte         convert varchar2 to single byte, for oracle
       -thread 4               parallel export by partitions, for mysql/oracle, max 16 threads
       -max_fetch_rows_once 10 for mysql/pg cursor mode, fetch rows once
       -blk_mode               blk export mode, for sqlserver, sybase
       -where where.ini        where cond file, only support one table export
       -sql "select * from test"
                               specify sql to export
       -speed_limit 1M         export speed limit, set 100k to limit 100k/s
       -edir /tmp              export bsd file dir
       -o out.txt              export to file, only for formats 0,1,2; if set, -edir is ignored
       -service 127.0.0.1:10058
                               connect to db service to export data
       -format 1               export file format, default 0
                               0 - bsd
                               1 - txt, for mysql load
                               2 - csv
                               3 - zcbus mq format
                               4 - zcbus mq format, with compress
                               5 - charset map format file
       -conv GB18030 UTF-8 C4E3
                               charset convert, source charset, target charset, hex data
       -conv_err_out           output charset convert rows to file
       -charset_map charset_map.txt
                               custom charset map file
       -charset_map_conv C4E3  charset convert with -charset_map, hex data
       -multi_rows_mode        export bsdata use multi-rows format
       -append_extra_info      export bsdata with rowid, for oracle

Examples:
       ./tableexp oracle:zcbus/password@172.17.58.145:1521/oracle12c -table zcbus.test -edir /tmp
       ./tableexp oracle:zcbus/password@ORCL11G -table zcbus.test -edir /tmp
       ./tableexp mysql:zcbus/password@127.0.0.1:3306 -table zcbus.test -edir /tmp
       ./tableexp postgresql:zcbus/password@127.0.0.1:5432 -db postgres -table public.test -edir /tmp
       ./tableexp sqlserver:zcbus/password@172.17.104.185:1433 -db zcbus -table dbo.test -edir /tmp
       ./tableexp db2:zcbus/password@172.17.58.145:50000 -db zcbus -table db2inst1.test -edir /tmp
       ./tableexp sybase:zcbus/password@172.17.58.145:5000 -db ds -table dbo.test -edir /tmp
       ./tableexp dameng:zcbus/zcbus123456@172.17.104.186:5236 -table zcbus.test -edir /tmp
       ./tableexp mongodb:admin/123456@172.17.104.186:27107 -table ds.col -edir /tmp
       ./tableexp hana:admin/123456@172.17.104.186:27107 -table ds.col -edir /tmp
       ./tableexp clickhouse:default/123456@172.17.46.243:8123 -table ds.test -edir /tmp
       ./tableexp informix:ifxuser/ifxuser@101.201.81.45:9088 -service 127.0.0.1:10058 -db zcbusdb -table ds.test -edir /tmp
       ./tableexp oceanbase-mysql:root/xxxxxxxx@127.0.0.1:2883 -table ds.test -edir /tmp
       ./tableexp oceanbase-oracle:root/xxxxxxxx@127.0.0.1:2883 -table ds.test -edir /tmp
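
The -conv option takes a source charset, a target charset, and hex data (e.g. "GB18030 UTF-8 C4E3"). A sketch of the presumed behavior: interpret the hex bytes in the source charset and re-encode them in the target charset. This reading of the argument order is an assumption based on the help text, and the function name is illustrative:

```python
# Presumed semantics of "-conv GB18030 UTF-8 C4E3": decode the hex bytes
# with the source charset, re-encode with the target charset.

def conv(src_charset: str, dst_charset: str, hex_data: str) -> str:
    raw = bytes.fromhex(hex_data)
    text = raw.decode(src_charset)                 # bytes -> characters
    return text.encode(dst_charset).hex().upper()  # characters -> target bytes

print(conv("GB18030", "UTF-8", "C4E3"))  # E4BDA0 (the character U+4F60)
```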

table_compare

table_compare: Release 8.1-16 64 bit (QA) - Production on 2024-01-16 14:05:48
table compare tool, provided by ZCBUS team.

Usage: ./table_compare [options]
  -h                      help
  -v                      version
  -log_level 2            log_level
  -log 1.log              log file
  -source db_type:user/pwd@ip:port
                          source database
  -source_table s.t1      source table to compare
  -target db_type:user/pwd@ip:port
                          target database
  -target_table t.t1      target table to compare
  -parallel 2             select parallel count, default 4, for oracle
  -to_single_byte         convert varchar2 to single byte, for oracle
  -where "col1=1"         where condition; if set, -source_where and -target_where are ignored
  -source_where "col1=1"  source where condition
  -target_where "col1=1"  target where condition
  -source_db_charset GB18030
                          set source database charset
  -target_db_charset GB18030
                          set target database charset
  -source_client_charset cp850
                          set source database client charset
  -target_client_charset cp850
                          set target database client charset
  -source_charset_map source_charset_map.txt
                          custom charset map file for source database
  -target_charset_map target_charset_map.txt
                          custom charset map file for target database
  -columns c1,c2,c3       specify columns to compare, must include pk or uk
  -strict_mode            compare with strict mode, do not remove decimals or microseconds
  -keys c1,c2             specify table primary key
  -count                  only compare count
  -repair                 output repair file
  -all_columns            if set -repair, output all columns in repair file
  -detail_report          output detail report file
  -etl etl.ini            etl rule for source data

Examples:
  ./table_compare -source mysql:zcbus/password@127.0.0.1:3306 -source_table mm.test -target mysql:zcbus/password@127.0.0.1:3306 -target_table dt2.test
  ./table_compare -source oracle:zcbus/password@172.17.58.145:1521/oracle12c -source_table zcbus.test -target oracle:zcbus/password@172.17.58.145:1521/oracle12c -target_table zcbus.test1
  ./table_compare -source postgresql:zcbus/password@127.0.0.1:5432 -source_table postgres.public.test -target postgresql:zcbus/password@127.0.0.1:5432 -target_table postgres.public.test1
  ./table_compare -source sqlserver:zcbus/password@172.17.104.185:1433 -source_table ds.dbo.test -target sqlserver:zcbus/password@172.17.104.185:1433 -target_table ds.dbo.test2
  ./table_compare -source db2:zcbus/password@172.17.58.145:50000 -source_table zcbus.DB2INST1.TEST -target db2:zcbus/password@172.17.58.145:50000 -target_table zcbus.DB2INST1.TEST1
  ./table_compare -source sybase:zcbus/password@172.17.58.145:5000 -source_table ds.dbo.test -target sybase:zcbus/password@172.17.58.145:5000 -target_table ds.dbo.test1
  ./table_compare -source mysql:zcbus/password@127.0.0.1:3306 -source_table mm.test -target dameng:zcbus/zcbus123456@172.17.104.186:5236 -target_table dt2.test
  ./table_compare -source mysql:zcbus/password@127.0.0.1:3306 -source_table mm.test -target kafka:172.17.46.244:9092 -target_table full_topic_test
  ./table_compare -source hana:admin/123456@172.17.104.186:27107 -source_table ds.col -target dameng:zcbus/zcbus123456@172.17.104.186:5236 -target_table dt2.test

Please set the terminal codeset to UTF-8 to display the Chinese output correctly.

table_compare compares the contents of a specified table between two databases; the table must have a primary key or a unique index.
The source and target may be the same database or different ones, and heterogeneous comparison is supported: either side can be any of oracle, mysql, sqlserver, db2, postgresql, or sybase.
When the comparison finishes, the tool can write a report.txt file with the detailed results; if the -repair option is given, it also writes a bsd repair file, whose contents can be inspected with the bsdata_dump tool.

Parameter details:
-source db_type:user/pwd@ip:port
              Connection parameters for the source database. Example connection strings by database type:
              mysql:zcbus/password@127.0.0.1:3306
              oracle:zcbus/password@172.17.58.145:1521/oracle12c
              postgresql:zcbus/password@127.0.0.1:5432
              sqlserver:zcbus/password@172.17.104.185:1433
              db2:db2inst1/db2inst1@172.17.58.145:50000
              sybase:zcbus/password@172.17.58.145:5000
-source_table s.t1
              Source table name. Table name formats by database type:
              mysql     : db.table
              oracle    : user.table
              sqlserver : db.schema.table
              db2       : db.schema.table
              postgresql: db.schema.table
-target db_type:user/pwd@ip:port
              Connection parameters for the target database; see -source for the string format.
-target_table t.t1
              Target table name; see -source_table for the format.
-columns c1,c2,c3
              Columns to compare, comma-separated; the list must include the primary key or unique index columns.
-source_where "col1=1"
              WHERE condition used when selecting from the source table.
-target_where "col1=1"
              WHERE condition used when selecting from the target table.
-where "col1=1"
              WHERE condition applied to both the source and target tables; when set,
              -source_where and -target_where are ignored.
-parallel 2
              When the source or target database is Oracle, sets the degree of parallelism
              for the select /*+parallel*/ hint.
-repair
              Write a bsd repair file when the comparison finishes.

Note: if a primary key value contains a line break, the Windows '\r\n' is replaced with <zcbusbrn> and the Linux '\n' with <zcbusbn> in the output; substitute them back before using a dumped key value in a query.
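The placeholder substitution above can be reversed before pasting a dumped key value into a query. A minimal sketch, assuming GNU sed and a hypothetical key value:

```shell
# Reverse table_compare's line-break placeholders in a dumped key value:
# '<zcbusbrn>' stood for a Windows '\r\n', '<zcbusbn>' for a Linux '\n'.
key='line1<zcbusbn>line2'
restored=$(printf '%s' "$key" | sed -e 's/<zcbusbrn>/\r\n/g' -e 's/<zcbusbn>/\n/g')
printf '%s\n' "$restored"
```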

sql_exec

sql_exec: Release 8.1-16 64 bit (QA) - Production on 2024-01-16 14:05:48
sql execute tool.

Usage: ./sql_exec db_type:user/pwd@tns [options]
       -h                          help
       -log_level 2                log_level
       -db postgres                database, for postgresql,db2,sqlserver,sybase
       -service 127.0.0.1:10058    connect to db service to import data
       -conn conn_string           for sybaseiq, test connect
       -sql "insert into test values(1,2)"
                                   specify sql to execute
       -sqlfile 1.sql              specify sql file to execute

Examples:
       ./sql_exec oracle:zcbus/password@172.17.58.145:1521/oracle12c -sql "insert into test values(1,2)"
       ./sql_exec mysql:zcbus/password@127.0.0.1:3306 -sql "insert into test values(1,2)"
       ./sql_exec postgresql:zcbus/password@127.0.0.1:5432 -db postgres -sql "insert into test values(1,2)"
       ./sql_exec sqlserver:zcbus/password@172.17.104.185:1433 -db zcbus -sql "insert into test values(1,2)"
       ./sql_exec db2:zcbus/password@172.17.58.145:50000 -db zcbus -sql "insert into test values(1,2)"
       ./sql_exec sybase:sa/password@172.17.104.186:5000 -db ds -sql "insert into test values(1,2)"
       ./sql_exec sybaseiq:sa/password@172.17.104.186:5000 -db ds -sql "insert into test values(1,2)"
       ./sql_exec dameng:zcbus/zcbus123456@172.17.104.186:5236 -sql "insert into test values(1,2)"
       ./sql_exec hana:zcbus/zcbus123456@172.17.104.186:5236 -sql "insert into test values(1,2)"
       ./sql_exec yashandb:zcbus/zcbus@172.17.46.243:1688 -service 127.0.0.1:10058 -sql "insert into test values(1,2)"
       ./sql_exec clickhouse:default/123456@172.17.46.243:8123 -sql "insert into test values(1,2)"
       ./sql_exec informix:ifxuser/ifxuser@101.201.81.45:9088 -service 127.0.0.1:10058 -db zcbusdb -sql "insert into test values(1,2)"
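For more than a statement or two, -sqlfile is handier than repeated -sql invocations. A minimal sketch; the statements and the connection string are placeholders taken from the examples above, and the sql_exec call is left commented:

```shell
# Write a batch of statements to a file and hand it to sql_exec via -sqlfile.
cat > /tmp/batch.sql <<'EOF'
insert into test values(1,2);
insert into test values(3,4);
EOF
# ./sql_exec mysql:zcbus/password@127.0.0.1:3306 -sqlfile /tmp/batch.sql
```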

kafka_tool

kafka_tool: Release 8.1-16 64 bit (QA) - Production on 2024-01-12 16:36:17
This is kafka exp/imp tool.

Usage: ./kafka_tool [options]

Generic Options:
       -h                 help
       -log_level 2       log_level
       -log 1.log         log file

Special Options:
       -broker 127.0.0.1:9092  brokers, default: 127.0.0.1:9092
       -group aaa              group name, default: rdkafka_consumer_example
       -topic ttt              topic name
       -info                   print topic information
       -list                   print all topics
       -delete                 delete topic from kafka
       -exp                    export from kafka(default mode)
       -imp                    import bsd file to kafka
       -tag 10001              for import, message tag
       -timeout 10             kafka consume timeout, s
       -offset 10              export start offset
       -count 1                export message count
       -timestamp "2020-03-22 11:20:21"
                               export start timestamp, if set this, ignore -offset
       -set_offset 10          set consume offset and exit
       -etl etl.ini            export use etl.ini
       -filter "id=1"          filter data, use etl CONDITION
       -optype INSERT,DDL      specify optype to output
       -dir /tmp               bsdata file dir
       -continuous             wait for more data from kafka when consume no data
       -s                      in exp mode, statistics only, not output
       -exp_mode 0             0: export to single file, default
                               1: split to big files
                               2: direct export a message to a file
       -o 1.bsd                output or input bsdfile, if set this, ignore -dir
       -to_old_format          in imp mode, convert to old format
       -parfile prop.ini       kafka properties file, if set this, ignore -broker

Examples:
       ./kafka_tool -broker 172.17.46.244:9092 -topic 1.test.testtab.f -o 1.bsd
       ./kafka_tool -topic 1.test.testtab.r -timestamp "2020-03-22 11:20:21"
       ./kafka_tool -topic 1100.cninfo.tb_fund_0219.s -etl etl.ini -dir /tmp
       ./kafka_tool -group aaa -topic ttt -set_offset 10
       ./kafka_tool -imp -topic 1100.cninfo.tb_fund_0219.s -dir /tmp

Please set the terminal codeset to UTF-8 to display the Chinese output correctly.

This tool exports data from, and imports data into, Kafka topics.
Parameter details:
       -broker 127.0.0.1:9092  Kafka brokers; defaults to 127.0.0.1:9092 if not specified
       -exp                    export mode
       -imp                    import mode: load bsd files into Kafka
                               if neither mode is given explicitly, export mode is the default
       -dir /tmp               directory holding the bsd files to export or import
Export mode options:
       -group aaa              Kafka consumer group name; defaults to rdkafka_consumer_example
       -topic ttt              topic to consume
       -offset 10              offset at which consumption of the topic starts
       -count 1                number of messages to consume
       -timestamp "2020-03-22 11:20:21"
                               timestamp at which consumption starts; when set, the start offset from -offset is ignored
       -set_offset 10          set the topic's consume offset and exit; the next run resumes from this offset without a start offset being specified
       -etl etl.ini            ETL configuration file; applies ETL rules to the consumed data (the file may contain only one ETL rule)
       -exp_mode 0             0: export to a single file 0.bsd (default)
                               1: merge and split messages into a series of large files
                               2: export each message to its own file
       -o 1.bsd                file name for export or import; when set, -dir and -exp_mode are ignored
Import mode options:
       -tag 10001              tag attached to the messages sent to the topic; empty if not specified
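Since -timestamp must parse as "YYYY-MM-DD HH:MM:SS", a wrapper can validate the value before invoking kafka_tool. A hedged sketch: `valid_ts` is a helper name invented here, validation relies on GNU date, and the kafka_tool call is left commented:

```shell
# Reject malformed -timestamp values before they reach kafka_tool.
valid_ts() {
  date -d "$1" '+%Y-%m-%d %H:%M:%S' >/dev/null 2>&1
}
ts='2020-03-22 11:20:21'
if valid_ts "$ts"; then
  echo "timestamp ok"
  # ./kafka_tool -topic ttt -timestamp "$ts"
fi
```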

freetds.conf

The FreeTDS configuration file: freebcp resolves the server name passed with -S against the server entries defined here.

bsdata_dump

bsdata dump tool.

Usage: ./bsdata_dump db_type:user/pwd@tns [options]
       -h                  help
       -log_level 4        log_level, 4 will dump long data
       -offset 100         dump offset
       -end_offset 200     dump end offset
       -count 1            dump bsd vector count
       -o 1.bsd            output bsd format file, can not use with -json
       -cvt 1.txt          convert bsdata_dump txt file to bsd file
       -json               output json format
       -append 1.bsd       append bsd file to source bsdfile
       -json2bsd 1.json    convert json format file to bsd
       -unpack /tmp        unpack first bsd vector in file to multiple files, only for insert/update/delete
       -pack /tmp          pack bsd file from multiple files, only for insert/update/delete
       -max_col_len 64     when unpack, if column data length>max_col_len, output to single file,
                           default 64, max 128, binary type always output to single file
       -pack_template update
                           print pack sample template, for insert/update/delete

Examples:
       ./bsdata_dump 1.bsd
       ./bsdata_dump 1.bsd -json
       ./bsdata_dump 1.bsd -append 2.bsd
       ./bsdata_dump -cvt 1.txt -o 1.bsd
       ./bsdata_dump 1.bsd -unpack /tmp
       ./bsdata_dump 1.bsd -pack /tmp
       ./bsdata_dump -pack_template update
       ./bsdata_dump -json2bsd 1.json -o 1.bsd
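When a directory accumulates bsd files (for example from kafka_tool's -exp_mode 2), a small helper can pick the newest one to dump. `latest_bsd` is a name invented for this sketch; the bsdata_dump call is left commented:

```shell
# Return the most recently modified .bsd file in a directory.
latest_bsd() {
  ls -t "$1"/*.bsd 2>/dev/null | head -n 1
}
# f=$(latest_bsd /tmp) && ./bsdata_dump "$f" -json
```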
Document last updated: 2024-02-18 15:04   Author: 操李红