Following the previous article in the OceanBase security audit series, "Transport Encryption", this article puts transparent data encryption (TDE) into practice and verifies whether the encryption actually takes effect.
Author: Zhang Qian, Alien No. 2, and part-time litter-box attendant for four cats.
Produced by the ActionSky Open Source Community. This is original content; do not reuse it without authorization. For reprints, please contact the editor and credit the source.
This article is about 1,200 words and takes roughly 4 minutes to read.
Environment
Version: OceanBase 4.1.0.0 Enterprise Edition
Encryption Configuration
The detailed encryption setup steps are skipped here; this walkthrough uses a MySQL tenant.
Enable transparent encryption and create a tablespace
Log in to the cluster's MySQL tenant as an administrator user.
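For reference, a typical login command is sketched below; the host, SQL port, and tenant/user names are placeholders for your own environment, not values taken from this test cluster.
# Example only: connect to the MySQL tenant as its administrator (adjust host, SQL port, and tenant name)
obclient -h127.0.0.1 -P2881 -uroot@mysql_tenant -p -Doceanbase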
# Enable transparent encryption using the internal method
# tde_method defaults to none, meaning transparent tablespace encryption is disabled
obclient [oceanbase]> ALTER SYSTEM SET tde_method='internal';
Query OK, 0 rows affected (0.022 sec)
obclient [oceanbase]> SHOW PARAMETERS LIKE 'tde_method';
+-------+----------+-------------+----------+------------+-----------+----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+--------+---------+-------------------+
| zone | svr_type | svr_ip | svr_port | name | data_type | value | info | section | scope | source | edit_level |
+-------+----------+-------------+----------+------------+-----------+----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+--------+---------+-------------------+
| zone1 | observer | 172.17.0.13 | 2882 | tde_method | NULL | internal | none : transparent encryption is none, none means cannot use tde, internal : transparent encryption is in the form of internal tables, bkmi : transparent encryption is in the form of external bkmi | OBSERVER | TENANT | DEFAULT | DYNAMIC_EFFECTIVE |
+-------+----------+-------------+----------+------------+-----------+----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+--------+---------+-------------------+
1 row in set (0.017 sec)
# Run this statement to generate the master key
obclient [oceanbase]> ALTER INSTANCE ROTATE INNODB MASTER KEY;
Query OK, 0 rows affected (0.028 sec)
# Create a tablespace and specify the encryption option; 'y' means the default aes-256 algorithm is used
obclient [oceanbase]> CREATE TABLESPACE sectest_ts1 encryption = 'y';
Query OK, 0 rows affected (0.021 sec)
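Depending on the version, the algorithm can also be named explicitly instead of using 'y'; the statement below is an assumption based on the documented option list, so check which algorithms your version supports.
# Assumed variant: name the algorithm explicitly (the tablespace name here is just an example)
obclient [oceanbase]> CREATE TABLESPACE sectest_ts2 encryption = 'aes-256';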
Create a new table in the encrypted tablespace
Log in to the MySQL tenant as a regular user and create a new table, t1.
# Create the table and specify the tablespace
obclient [sysbenchdb]> CREATE TABLE t1 (id1 int, id2 int) TABLESPACE sectest_ts1;
Query OK, 0 rows affected (0.076 sec)
# Check whether the table in the tablespace is marked as encrypted
# encryptionalg showing aes-256 and encrypted showing YES means table encryption is configured successfully
obclient [oceanbase]> SELECT table_name,encryptionalg,encrypted FROM oceanbase.V$OB_ENCRYPTED_TABLES;
+------------+---------------+-----------+
| table_name | encryptionalg | encrypted |
+------------+---------------+-----------+
| t1 | aes-256 | YES |
+------------+---------------+-----------+
1 row in set (0.048 sec)
Insert one row into the table, then run a major compaction so the value is flushed to disk into an SSTable.
# Insert a row
obclient [sysbenchdb]> insert into t1 values (147852369,999999991);
Query OK, 1 row affected (0.005 sec)
# Trigger a major compaction
ALTER SYSTEM MAJOR FREEZE TENANT=ALL;
# Check the compaction progress
SELECT * FROM oceanbase.CDB_OB_ZONE_MAJOR_COMPACTION\G
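A minimal sketch of confirming that the major compaction has finished; it assumes the view exposes a STATUS column that returns to IDLE once the merge completes (column names may differ across versions).
# Assumed check: the merge is done when STATUS shows IDLE for every zone
SELECT TENANT_ID, ZONE, STATUS, LAST_MERGED_TIME FROM oceanbase.CDB_OB_ZONE_MAJOR_COMPACTION;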
Create an unencrypted table for comparison
Log in to the MySQL tenant as a regular user and create a new table, ttttttt2, without specifying the encrypted tablespace.
Insert one row in the same way and run a major compaction.
obclient [sysbenchdb]> CREATE TABLE ttttttt2 (id1 int, id2 int);
Query OK, 0 rows affected (0.076 sec)
obclient [sysbenchdb]> insert into ttttttt2 values (147852369,999999991);
Query OK, 1 row affected (0.005 sec)
# Trigger a major compaction
ALTER SYSTEM MAJOR FREEZE TENANT=ALL;
# Check the compaction progress
SELECT * FROM oceanbase.CDB_OB_ZONE_MAJOR_COMPACTION\G
Encryption Verification
Verification relies on the ob_admin tool: its dumpsst feature can display the contents of the block_file.
We use dumpsst to inspect the encrypted table's data and check whether it is really encrypted.
dumpsst needs the macro block id of the target data, so the first step is to find the macro block id corresponding to the rows inserted above.
Find the macro block id
First, use oceanbase.DBA_OB_TABLE_LOCATIONS to find the TABLET_ID of the two tables: the encrypted table t1 has TABLET_ID 200001, and the unencrypted table ttttttt2 has TABLET_ID 200002.
obclient [oceanbase]> select * from oceanbase.DBA_OB_TABLE_LOCATIONS where TABLE_NAME='t1';
+---------------+------------+----------+------------+----------------+-------------------+------------+---------------+-----------+-------+-------+-------------+----------+--------+--------------+
| DATABASE_NAME | TABLE_NAME | TABLE_ID | TABLE_TYPE | PARTITION_NAME | SUBPARTITION_NAME | INDEX_NAME | DATA_TABLE_ID | TABLET_ID | LS_ID | ZONE | SVR_IP | SVR_PORT | ROLE | REPLICA_TYPE |
+---------------+------------+----------+------------+----------------+-------------------+------------+---------------+-----------+-------+-------+-------------+----------+--------+--------------+
| sysbenchdb | t1 | 500006 | USER TABLE | NULL | NULL | NULL | NULL | 200001 | 1001 | zone1 | 172.17.0.13 | 2882 | LEADER | FULL |
+---------------+------------+----------+------------+----------------+-------------------+------------+---------------+-----------+-------+-------+-------------+----------+--------+--------------+
1 row in set (0.005 sec)
obclient [oceanbase]> select * from oceanbase.DBA_OB_TABLE_LOCATIONS where TABLE_NAME='ttttttt2';
+---------------+------------+----------+------------+----------------+-------------------+------------+---------------+-----------+-------+-------+-------------+----------+--------+--------------+
| DATABASE_NAME | TABLE_NAME | TABLE_ID | TABLE_TYPE | PARTITION_NAME | SUBPARTITION_NAME | INDEX_NAME | DATA_TABLE_ID | TABLET_ID | LS_ID | ZONE | SVR_IP | SVR_PORT | ROLE | REPLICA_TYPE |
+---------------+------------+----------+------------+----------------+-------------------+------------+---------------+-----------+-------+-------+-------------+----------+--------+--------------+
| sysbenchdb | ttttttt2 | 500007 | USER TABLE | NULL | NULL | NULL | NULL | 200002 | 1001 | zone1 | 172.17.0.13 | 2882 | LEADER | FULL |
+---------------+------------+----------+------------+----------------+-------------------+------------+---------------+-----------+-------+-------+-------------+----------+--------+--------------+
1 row in set (0.005 sec)
With the TABLET_ID and the compaction time, look up MACRO_ID_LIST in GV$OB_TABLET_COMPACTION_HISTORY; the ids recorded there are the macro block ids we need.
From the output, the macro block id of the encrypted table t1 is 387 (taken from its first major-compaction record), and the macro block id of the unencrypted table ttttttt2 is 718.
obclient [oceanbase]> select * from GV$OB_TABLET_COMPACTION_HISTORY where TABLET_ID=200001 and TYPE='MAJOR_MERGE' order by START_TIME \G
*************************** 1. row ***************************
SVR_IP: 172.17.0.13
SVR_PORT: 2882
TENANT_ID: 1004
LS_ID: 1001
TABLET_ID: 200001
TYPE: MAJOR_MERGE
COMPACTION_SCN: 1685093467526445446
START_TIME: 2023-05-26 17:31:22.478149
FINISH_TIME: 2023-05-26 17:31:22.482045
TASK_ID: YB42AC11000D-0005FC95091493EB-0-0
OCCUPY_SIZE: 432
MACRO_BLOCK_COUNT: 1
MULTIPLEXED_MACRO_BLOCK_COUNT: 0
NEW_MICRO_COUNT_IN_NEW_MACRO: 1
MULTIPLEXED_MICRO_COUNT_IN_NEW_MACRO: 0
TOTAL_ROW_COUNT: 1
INCREMENTAL_ROW_COUNT: 1
COMPRESSION_RATIO: 0.67
NEW_FLUSH_DATA_RATE: 100
PROGRESSIVE_COMPACTION_ROUND: 1
PROGRESSIVE_COMPACTION_NUM: 0
PARALLEL_DEGREE: 1
PARALLEL_INFO: -
PARTICIPANT_TABLE: table_cnt=4,[MAJOR]scn=1;[MINI]start_scn=1,end_scn=1685093478867382402;
MACRO_ID_LIST: 387
COMMENTS: serialize_medium_list:{cnt=1;1685093467526445446}|time_guard=EXECUTE=4.20ms|(0.79)|CREATE_SSTABLE=648us|(0.12)|total=5.32ms;
*************************** 2. row ***************************
SVR_IP: 172.17.0.13
SVR_PORT: 2882
TENANT_ID: 1004
LS_ID: 1001
TABLET_ID: 200001
TYPE: MAJOR_MERGE
COMPACTION_SCN: 1685094492266634220
START_TIME: 2023-05-26 17:48:27.276906
FINISH_TIME: 2023-05-26 17:48:27.282468
TASK_ID: YB42AC11000D-0005FC9509149878-0-0
OCCUPY_SIZE: 432
MACRO_BLOCK_COUNT: 1
MULTIPLEXED_MACRO_BLOCK_COUNT: 0
NEW_MICRO_COUNT_IN_NEW_MACRO: 1
MULTIPLEXED_MICRO_COUNT_IN_NEW_MACRO: 0
TOTAL_ROW_COUNT: 1
INCREMENTAL_ROW_COUNT: 1
COMPRESSION_RATIO: 0.67
NEW_FLUSH_DATA_RATE: 71
PROGRESSIVE_COMPACTION_ROUND: 1
PROGRESSIVE_COMPACTION_NUM: 0
PARALLEL_DEGREE: 1
PARALLEL_INFO: -
PARTICIPANT_TABLE: table_cnt=3,[MAJOR]scn=1685093467526445446;[MINI]start_scn=1685093467530410154,end_scn=1685094504683817069;
MACRO_ID_LIST: 718
COMMENTS: serialize_medium_list:{cnt=1;1685094492266634220}|time_guard=EXECUTE=5.92ms|(0.45)|CREATE_SSTABLE=5.94ms|(0.45)|total=13.10ms;
obclient [oceanbase]> select * from GV$OB_TABLET_COMPACTION_HISTORY where TABLET_ID=200002 and TYPE='MAJOR_MERGE' order by START_TIME \G
*************************** 1. row ***************************
SVR_IP: 172.17.0.13
SVR_PORT: 2882
TENANT_ID: 1004
LS_ID: 1001
TABLET_ID: 200002
TYPE: MAJOR_MERGE
COMPACTION_SCN: 1685094492266634220
START_TIME: 2023-05-26 17:48:27.277801
FINISH_TIME: 2023-05-26 17:48:27.284542
TASK_ID: YB42AC11000D-0005FC9509149879-0-0
OCCUPY_SIZE: 424
MACRO_BLOCK_COUNT: 1
MULTIPLEXED_MACRO_BLOCK_COUNT: 0
NEW_MICRO_COUNT_IN_NEW_MACRO: 1
MULTIPLEXED_MICRO_COUNT_IN_NEW_MACRO: 0
TOTAL_ROW_COUNT: 1
INCREMENTAL_ROW_COUNT: 1
COMPRESSION_RATIO: 0.61
NEW_FLUSH_DATA_RATE: 40
PROGRESSIVE_COMPACTION_ROUND: 1
PROGRESSIVE_COMPACTION_NUM: 0
PARALLEL_DEGREE: 1
PARALLEL_INFO: -
PARTICIPANT_TABLE: table_cnt=4,[MAJOR]scn=1685093467526445446;[MINI]start_scn=1,end_scn=1685094504683817070;
MACRO_ID_LIST: 718
COMMENTS: serialize_medium_list:{cnt=1;1685094492266634220}|time_guard=EXECUTE=10.20ms|(0.86)|total=11.87ms;
Parse the block_file
With ob_admin installed, use dumpsst to parse the macro block ids obtained in the previous step.
Notes:
- ob_admin dumpsst must be run from the ${path_to_oceanbase}/oceanbase directory, because it reads etc/observer.config.bin via a relative path (see the shell sketch after these notes).
- In our testing, --macro-id must be specified; otherwise the command fails, and the error details only show up in ob_admin.log.
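A minimal shell sketch of the first note above; the installation path is an example matching the prompt shown in the outputs below.
# Example: run ob_admin from the observer home directory so that
# etc/observer.config.bin can be resolved via its relative path
cd /home/admin/oceanbase
ls etc/observer.config.bin    # should exist in this directory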
The parameters used in this run are:
- -f: the data directory.
- -d: the macro block type; currently only macro_block is supported.
- -a: the macro-id, i.e. the value obtained in the step above.
- -t: the tablet_id, to further narrow the scope.
- -i: the micro block id; -1 means all micro blocks.
The general invocation template is shown right below, followed by the actual commands.
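Putting these together, the general invocation looks like the following; the angle-bracket values are placeholders to be replaced with your data directory, macro block id, and tablet_id.
ob_admin dumpsst -f <data_dir> -d macro_block -a <macro_block_id> -t <tablet_id> -i -1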
Parse table t1 (the encrypted table)
In the output, tablet_id is 200001 and row_count is 1, which corresponds to the single row we inserted.
The row content itself is not displayed anywhere, which confirms the data has been successfully encrypted.
[admin@ob_4 oceanbase]$ ob_admin dumpsst -f /home/admin/oceanbase/store/obdemo/ -d macro_block -a 387 -t 200001 -i -1
succ to open, filename=ob_admin.log, fd=3, wf_fd=2
old log_file need close, old = ob_admin.log new = ob_admin.log
succ to open, filename=ob_admin.log, fd=3, wf_fd=2
succ to open, filename=ob_admin_rs.log, fd=4, wf_fd=2
------------------------------{Common Header}------------------------------
| header_size|24
| version|1
| magic|1001
| attr|1
| payload_size|952
| payload_checksum|-1027413104
--------------------------------------------------------------------------------
------------------------------{SSTable Macro Block Header}------------------------------
| header_size|208
| version|1
| magic|1007
| tablet_id|200001
| logical_version|1685093467526445446
| data_seq|0
| column_count|5
| rowkey_column_count|3
| row_store_type|1
| row_count|1
| occupy_size|432
| micro_block_count|1
| micro_block_data_offset|232
| data_checksum|2617981320
| compressor_type|6
| master_key_id|500004
--------------------------------------------------------------------------------
--------{column_index column_type column_order column_checksum collation_type}----------
| [0 ObUInt64Type ASC 3344869974 63]
| [1 ObIntType ASC 313654433 63]
| [2 ObIntType ASC 2388842353 63]
| [3 ObInt32Type ASC 2776795072 63]
| [4 ObInt32Type ASC 82537422 63]
--------------------------------------------------------------------------------
Parse table ttttttt2 (the unencrypted table)
Replace the tablet_id and macro block id in the command with those of table ttttttt2 and parse again.
Compared with the encrypted table t1, the unencrypted table's output is much richer, and the actual row data is visible.
The output below is trimmed; in Total Rows you can see the row inserted earlier: [{"INT":147852369}][{"INT":999999991}].
[admin@ob_4 oceanbase]$ ob_admin dumpsst -f /home/admin/oceanbase/store/obdemo/ -d macro_block -a 718 -t 200002 -i -1
succ to open, filename=ob_admin.log, fd=3, wf_fd=2
old log_file need close, old = ob_admin.log new = ob_admin.log
succ to open, filename=ob_admin.log, fd=3, wf_fd=2
succ to open, filename=ob_admin_rs.log, fd=4, wf_fd=2
------------------------------{Common Header}------------------------------
| header_size|24
| version|1
| magic|1001
| attr|1
| payload_size|892
| payload_checksum|-1696352947
--------------------------------------------------------------------------------
------------------------------{SSTable Macro Block Header}------------------------------
| header_size|208
| version|1
| magic|1007
| tablet_id|200002
| logical_version|1685094492266634220
| data_seq|0
| column_count|5
| rowkey_column_count|3
| row_store_type|1
| row_count|1
| occupy_size|424
| micro_block_count|1
| micro_block_data_offset|232
| data_checksum|725485397
| compressor_type|6
| master_key_id|0
--------------------------------------------------------------------------------
……
------------------------------{Total Rows[1]}------------------------------
|ROW[0]:trans_id=[{txid:0}],dml_flag=[N|INSERT],mvcc_flag=[]|[{"BIGINT UNSIGNED":1}][{"BIGINT":-1685094482154160502}][{"BIGINT":0}][{"INT":147852369}][{"INT":999999991}]
……
------------------------------{Encoding Column Header[4]}------------------------------
| type|0
| attribute|0
| is fix length|0
| has extend value|0
| is bit packing|0
| is last var field|0
| extend value index|65542
| store object type|0
| offset|0
| length|0
--------------------------------------------------------------------------------
------------------------------{Index Micro Block[0]}------------------------------
------------------------------{Total Rows[1]}------------------------------
|ROW[0]:trans_id=[{txid:0}],dml_flag=[N|INSERT],mvcc_flag=[]|[{"BIGINT UNSIGNED":1}][{"BIGINT":-1685094482154160502}][{"BIGINT":0}][{"VARCHAR":"
", collation:"binary", coercibility:"NUMERIC"}]
|Index Block Row Header|[{version:1, row_store_type:1, compressor_type:6, is_data_index:1, is_data_block:1, is_leaf_block:0, is_major_node:1, is_pre_aggregated:0, is_deleted:0, contain_uncommitted_row:0, is_macro_node:0, has_string_out_row:0, all_lob_in_row:1, macro_id:[-1](ver=0,mode=0,seq=0), block_offset:232, block_size:192, master_key_id:0, encrypt_id:0, encrypt_key:"data_size:16, data:00000000000000000000000000000000", row_count:1, schema_version:1685094464567160, macro_block_count:0, micro_block_count:1}]
------------------------------{Macro Meta Micro Block}------------------------------
------------------------------{Encoding Micro Header}------------------------------
| header_size|96
| version|2
| magic|1005
| column_count|4
| rowkey_column_count|3
| row_count|1
| row_store_type|2
| row_index_byte|0
| var_column_count|0
| row_data_offset|357
|column_chksum[ 0]|3344869974
|column_chksum[ 1]|1868627082
|column_chksum[ 2]|2388842353
|column_chksum[ 3]|1583982749
--------------------------------------------------------------------------------
……
Summary
This article used the dumpsst feature of the ob_admin tool to parse the block_file and verified OceanBase's transparent data encryption.
If you run into problems with dumpsst, keep an eye on ob_admin.log; it helps a lot with troubleshooting (a minimal example follows).
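A minimal sketch of checking that log, assuming ob_admin.log is written to the directory where ob_admin was run (as in the outputs above):
# Show the most recent error-related lines from ob_admin's log
grep -iE "error|fail" ob_admin.log | tail -n 20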
For more technical articles, visit: https://opensource.actionsky.com/
About SQLE
SQLE, from the ActionSky Open Source Community, is a SQL audit tool for database users and administrators. It supports multi-scenario audits and standardized release workflows, natively supports MySQL audits, and is extensible to other database types.