
A First Look at Migrating Oracle Databases to PostgreSQL/EDB

Rocky-Wang
Published on 2015/04/17 16:29

For certain non-technical reasons, we have been looking into adopting open-source databases to replace commercial ones, and PostgreSQL caught our eye.


This article does not cover PostgreSQL's technology itself. Although PostgreSQL is not as popular as MySQL in China, a quick search still turns up a fair amount of Chinese-language PostgreSQL material. The official PG documentation remains the most thorough resource and the most helpful for learning PG:

Online docs for the latest 9.4 release: http://www.postgresql.org/docs/9.4/static/index.html

Chinese translation provided by Highgo (Shandong): http://www.highgo.com.cn/docs/docs90cn/


Migrating Oracle to PostgreSQL:

The most popular migration tool is ora2pg: http://ora2pg.darold.net/config.html#testing

How it works, roughly:

The tool is written in Perl, and its behavior is driven by the ora2pg.conf parameter file, which defines the DSNs of the source Oracle database and the target database. It connects to both through the DBD::Oracle and DBD::Pg Perl modules, exports the relevant object definitions from the source Oracle database (optionally together with the data) according to the parameters you set, and imports them into the target PG. The import can run online (the export is piped straight into PG without touching disk), or you can first write the export script to a local file, review and edit it by hand, and then load it into PG yourself with psql.
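As an illustration, a minimal ora2pg.conf connection section might look like the following. The directives are real ora2pg.conf settings, but the host names, SID, database names, and credentials are placeholders, not values from the original setup:

```
# Source Oracle DSN (connected via the DBD::Oracle Perl module)
ORACLE_DSN      dbi:Oracle:host=oradb.example.com;sid=ORCL;port=1521
ORACLE_USER     system
ORACLE_PWD      manager

# Target PostgreSQL DSN (DBD::Pg) -- only needed for direct online import
PG_DSN          dbi:Pg:dbname=migrated_db;host=pgdb.example.com;port=5432
PG_USER         postgres
PG_PWD          postgres

# What to export: TABLE, VIEW, PACKAGE, COPY (data), SHOW_REPORT, ...
TYPE            TABLE
```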

The ora2pg command-line options are also fairly self-explanatory and support many export/import combinations: you can export only certain schemas or objects, exclude certain schemas or objects, or do nothing at all except analyze the source database for a PG migration and produce a report, and so on.
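For instance, a few typical invocations might look like this (a sketch only — the configuration path, schema name, and file names are placeholders; the flags match the help output in the appendix):

```
# Assess migration effort only, without exporting anything
ora2pg -c /etc/ora2pg/ora2pg.conf -t SHOW_REPORT --estimate_cost

# Export table DDL for one schema to a local file for manual review
ora2pg -c /etc/ora2pg/ora2pg.conf -t TABLE -n HR -o hr_tables.sql

# Later, load the reviewed script into PostgreSQL with psql
psql -d migrated_db -f hr_tables.sql
```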

The best way to learn the specifics is to try it yourself.


When migrating Oracle to PG with the ora2pg approach, the number of problems you run into is roughly proportional to how much of the source Oracle database is incompatible with PG. Here is a short summary of the problems we hit:

Issues found during a manual review of the application's sqlmap.xml files:

Oracle → PostgreSQL equivalents:

- dual table → PG has no dual table; just write SELECT 1, SELECT user, SELECT xxx directly
- sysdate (date/time function) → current_date / current_time / current_timestamp
- trunc → trunc / date_trunc (date truncation)
- sys_guid() → uuid_generate_v4() is similar, but requires CREATE EXTENSION "uuid-ossp"
- nvl → use COALESCE in PG
- WHERE rownum < ... → SELECT row_number() OVER (), * FROM xxx, or LIMIT
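As a sketch, the mappings above look like this in SQL (the emp table and its columns are made up for illustration):

```sql
-- Oracle: SELECT sysdate FROM dual;
SELECT current_timestamp;                 -- no dual table needed in PG

-- Oracle: SELECT nvl(bonus, 0) FROM emp;
SELECT COALESCE(bonus, 0) FROM emp;

-- Oracle: SELECT * FROM emp WHERE rownum <= 10;
SELECT * FROM emp LIMIT 10;

-- sys_guid() replacement; the extension must be installed first
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SELECT uuid_generate_v4();
```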



Problems found when loading the ora2pg-generated migration scripts into PG:

tables

- DEFAULT sys_guid(); → change to DEFAULT uuid_generate_v4()
- Columns named session# / serial# → remove the # from the column names
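A sketch of these DDL fixes (the table and column names are hypothetical):

```sql
-- ora2pg emits Oracle-style defaults and column names; rewrite them for PG
CREATE TABLE audit_log (
    id      uuid DEFAULT uuid_generate_v4(),  -- was: DEFAULT sys_guid()
    sess_id integer,                          -- was: session# (drop the #)
    ser_no  integer                           -- was: serial#  (drop the #)
);
```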

partitions

The parent tables were already created in the tables step, but creating the child tables failed with "parent table not found" because of letter-case mismatches. Table names are case-sensitive; when declaring inheritance, write the parent table's name with exactly the same case. Also note that adding or removing partitions requires editing the trigger function.
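A minimal sketch of the inheritance-plus-trigger partitioning scheme used on PG 9.x (table and function names are hypothetical); note the child table must reference the parent with exactly matching case:

```sql
CREATE TABLE orders (order_date date, amount numeric);

-- Child partition: the INHERITS clause must match the parent's case exactly
CREATE TABLE orders_2015 (
    CHECK (order_date >= DATE '2015-01-01' AND order_date < DATE '2016-01-01')
) INHERITS (orders);

-- A trigger function routes inserts to the right child; it must be edited
-- whenever a partition is added or removed
CREATE OR REPLACE FUNCTION orders_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.order_date >= DATE '2015-01-01'
       AND NEW.order_date < DATE '2016-01-01' THEN
        INSERT INTO orders_2015 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for date %', NEW.order_date;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_insert BEFORE INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE orders_insert_trigger();
```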
synonyms

The converted statements look like this:

CREATE VIEW public.tablename AS SELECT * FROM owner.tablename SECURITY DEFINER;

PG has no synonyms, so each one is automatically converted into a view. The converted view name can clash with an existing table of the same name, in which case the view must be renamed.

SECURITY DEFINER cannot be appended to a CREATE VIEW statement; control access through grants instead.
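A sketch of the corrected form, with access controlled by an explicit grant instead of SECURITY DEFINER (the view, schema, and role names are hypothetical):

```sql
-- PG has no synonyms; use a plain view and grant access to the callers
CREATE VIEW public.emp_syn AS SELECT * FROM hr.emp;
GRANT SELECT ON public.emp_syn TO app_user;
```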

packages

1. PG has no v$session (most of the original Oracle packages referenced it):

select sid, serial# into v_sid, v_serial# from v$session

2. Some converted statements come out in the wrong order and need to be restructured.

3. Packages are automatically converted into functions, and those functions are created under a schema named after the original package — ora2pg turns the Oracle package header into a schema of the same name.
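PG's closest counterpart to v$session is the pg_stat_activity system view; a hedged sketch of the equivalent lookup (PG has no serial# — the backend pid alone identifies a session):

```sql
-- Oracle: select sid, serial# into v_sid, v_serial# from v$session ...
-- PostgreSQL 9.2+: pg_stat_activity exposes one row per backend
SELECT pid, usename, state
FROM pg_stat_activity
WHERE pid = pg_backend_pid();   -- the current session
```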
procedures

1. dbms_output.put_line('insert succeeded!'); → RAISE NOTICE '%', 'insert succeeded!';

2. dbms_output.put_line(sqlerrm) → RAISE NOTICE '%', sqlerrm;

3. A subquery in the FROM clause must be wrapped in parentheses and given an alias.

4. Oracle's START WITH ... CONNECT BY recursive queries become WITH RECURSIVE in PG.
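Points 1–3 can be sketched in a single PL/pgSQL function (the tables and the function name are hypothetical):

```sql
-- dbms_output.put_line(...) becomes RAISE NOTICE; the FROM-clause
-- subquery is parenthesized and aliased as PG requires
CREATE OR REPLACE FUNCTION load_emp() RETURNS void AS $$
BEGIN
    INSERT INTO emp_copy
    SELECT * FROM (SELECT * FROM emp WHERE deptno = 10) AS t;  -- alias required
    RAISE NOTICE '%', 'insert succeeded!';
EXCEPTION WHEN OTHERS THEN
    RAISE NOTICE '%', sqlerrm;
END;
$$ LANGUAGE plpgsql;
```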

views

1. Character types need adjustment.

2. Recursive queries are not converted: rewrite Oracle's START WITH ... CONNECT BY as WITH RECURSIVE.

3. The "(+)" outer-join notation is not converted: rewrite it as standard SQL, table1 [LEFT|RIGHT|FULL] OUTER JOIN table2 ON (...).

4. The DECODE function must be rewritten, e.g. as (case when some_column = 'some_value' then 'some_other_value' when ... then ... else 'some_default_value' end) as some_column.

5. COALESCE return-type mismatches require explicit casts.
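The join, DECODE, and recursion rewrites can be sketched as follows (the emp/dept tables and their columns are hypothetical):

```sql
-- Oracle (+) outer join:
--   SELECT e.ename, d.dname FROM emp e, dept d WHERE e.deptno = d.deptno(+);
SELECT e.ename, d.dname
FROM emp e LEFT OUTER JOIN dept d ON (e.deptno = d.deptno);

-- Oracle DECODE rewritten as CASE:
--   SELECT DECODE(status, 'A', 'active', 'inactive') FROM emp;
SELECT (CASE WHEN status = 'A' THEN 'active' ELSE 'inactive' END) AS status
FROM emp;

-- Oracle START WITH ... CONNECT BY rewritten as WITH RECURSIVE:
WITH RECURSIVE subordinates AS (
    SELECT empno, mgr, ename FROM emp WHERE mgr IS NULL
    UNION ALL
    SELECT e.empno, e.mgr, e.ename
    FROM emp e JOIN subordinates s ON e.mgr = s.empno
)
SELECT * FROM subordinates;
```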

…and so on.

As you can see, migrating directly from Oracle to PostgreSQL involves a lot of rewriting, both in the application layer and in the database layer.

Because of all these issues, we took a look at EnterpriseDB (the commercial edition of PostgreSQL), which bills itself as built to replace Oracle:

http://www.enterprisedb.com/products-services-training/products/documentation/enterpriseedition

EnterpriseDB installation guide:

http://www.enterprisedb.com/docs/en/9.4/instguide/toc.html

EnterpriseDB official documentation:

http://www.enterprisedb.com/docs/en/9.4/eeguide/toc.html


Usage guide for Migration Toolkit, EnterpriseDB's Oracle-to-PG migration tool:

http://www.enterprisedb.com/docs/en/9.4/migrate/toc.html

The tool is simple to use — just configure a few connection parameters. Far fewer problems surfaced during migration than with ora2pg, but some remain:

For example, EDB lacks the sys_connect_by_path function, and does not accept the referencing new as new old as old clause of Oracle trigger syntax.


Conclusion:

Although EDB's Migration Toolkit does not offer as many flexible option combinations as ora2pg, it basically covers the migration needs. More importantly, EDB wraps a compatibility layer around PostgreSQL that implements many Oracle concepts — the dual table, synonyms, packages, and built-in functions mentioned above — and even Oracle's partitioned-table syntax runs directly on EDB. As a result, for the same source Oracle database, we found that roughly 90% of it ran as-is on the target EnterpriseDB, with only a small portion needing changes.


For the full list of EnterpriseDB's Oracle-compatibility features, see:

http://www.enterprisedb.com/products-services-training/products/postgres-plus-advanced-server?quicktabs_advanceservertab=3#quicktabs-advanceservertab


So, balancing cost savings against operational stability: for new systems, adopt PostgreSQL directly; for existing systems migrating off Oracle, EnterpriseDB is a reasonable choice for the transition period.


Appendix:

ora2pg --help


Usage: ora2pg [-dhpqv --estimate_cost --dump_as_html] [--option value]

    -a | --allow str  : coma separated list of objects to allow from export.
                        Can be used with SHOW_COLUMN too.
    -b | --basedir dir: Used to set the default output directory, where files
                        resulting from exports will be stored.
    -c | --conf file  : Used to set an alternate configuration file than the
                        default /etc/ora2pg/ora2pg.conf.
    -d | --debug      : Enable verbose output.
    -e | --exclude str: coma separated list of objects to exclude from export.
                        Can be used with SHOW_COLUMN too.
    -h | --help       : Print this short help.
    -i | --input file : File containing Oracle PL/SQL code to convert with
                        no Oracle database connection initiated.
    -j | --jobs num   : number of parallel process to send data to PostgreSQL.
    -J | --copies num : number of parallel connection to extract data from Oracle.
    -l | --log file   : Used to set a log file. Default is stdout.
    -L | --limit num  : number of tuples extracted from Oracle and stored in
                        memory before writing, default: 10000.
    -n | --namespace schema : Used to set the Oracle schema to extract from.
    -o | --out file   : Used to set the path to the output file where SQL will
                        be written. Default: output.sql in running directory.
    -p | --plsql      : Enable PLSQL to PLPSQL code conversion.
    -P | --parallel num: Number of parallel tables to extract at the same time.
    -q | --quiet      : disable progress bar.
    -s | --source DSN : Allow to set the Oracle DBI datasource.
    -t | --type export: Used to set the export type. It will override the one
                        given in the configuration file (TYPE).
    -u | --user name  : Used to set the Oracle database connection user.
    -v | --version    : Show Ora2Pg Version and exit.
    -w | --password pwd : Used to set the password of the Oracle database user.
    --forceowner      : if set to 1 force ora2pg to set tables and sequences owner
                        like in Oracle database. If the value is set to a username
                        this one will be used as the objects owner. By default it's
                        the user used to connect to the Pg database that will be
                        the owner.
    --nls_lang code   : use this to set the Oracle NLS_LANG client encoding.
    --client_encoding code: Use this to set the PostgreSQL client encoding.
    --view_as_table str: coma separated list of view to export as table.
    --estimate_cost   : activate the migration cost evalution with SHOW_REPORT
    --cost_unit_value minutes: number of minutes for a cost evalution unit.
                        default: 5 minutes, correspond to a migration conducted
                        by a PostgreSQL expert. Set it to 10 if this is your
                        first migration.
    --dump_as_html    : force ora2pg to dump report in HTML, used only with
                        SHOW_REPORT. Default is to dump report as simple text.
    --init_project NAME: initialise a typical ora2pg project tree. Top directory
                        will be created under project base dir.
    --project_base DIR : define the base dir for ora2pg project trees. Default
                        is current directory.

See full documentation at http://ora2pg.darold.net/ for more help or see manpage with 'man ora2pg'.



EDB的Migration Toolkit:


runMTK.sh -help

Running EnterpriseDB Migration Toolkit (Build 48.0.1) ...


EnterpriseDB Migration Toolkit (Build 48.0.1)


Usage: runMTK [-options] SCHEMA


If no option is specified, the complete schema will be imported.


where options include:

-help                Display the application command-line usage.
-version             Display the application version information.
-verbose [on|off]    Display application log messages on standard output (default: on).

-schemaOnly          Import the schema object definitions only.
-dataOnly            Import the table data only. When -tables is in place, it imports data only for the selected tables. Note: If there are any FK constraints defined on target tables, use -truncLoad option along with this option.

-sourcedbtype db_type  The -sourcedbtype option specifies the source database type. db_type may be one of the following values: mysql, oracle, sqlserver, sybase, postgresql, enterprisedb. db_type is case-insensitive. By default, db_type is oracle.
-targetdbtype db_type  The -targetdbtype option specifies the target database type. db_type may be one of the following values: oracle, sqlserver, postgresql, enterprisedb. db_type is case-insensitive. By default, db_type is enterprisedb.

-allTables           Import all tables.
-tables LIST         Import comma-separated list of tables.
-constraints         Import the table constraints.
-indexes             Import the table indexes.
-triggers            Import the table triggers.
-allViews            Import all Views.
-views LIST          Import comma-separated list of Views.
-allProcs            Import all stored procedures.
-procs LIST          Import comma-separated list of stored procedures.
-allFuncs            Import all functions.
-funcs LIST          Import comma-separated list of functions.
-allPackages         Import all packages.
-packages LIST       Import comma-separated list of packages.
-allSequences        Import all sequences.
-sequences LIST      Import comma-separated list of sequences.
-targetSchema NAME   Name of the target schema (default: target schema is named after source schema).
-allDBLinks          Import all Database Links.
-allSynonyms         It enables the migration of all public and private synonyms from an Oracle database to an Advanced Server database. If a synonym with the same name already exists in the target database, the existing synonym will be replaced with the migrated version.
-allPublicSynonyms   It enables the migration of all public synonyms from an Oracle database to an Advanced Server database. If a synonym with the same name already exists in the target database, the existing synonym will be replaced with the migrated version.
-allPrivateSynonyms  It enables the migration of all private synonyms from an Oracle database to an Advanced Server database. If a synonym with the same name already exists in the target database, the existing synonym will be replaced with the migrated version.

-dropSchema [true|false]  Drop the schema if it already exists in the target database (default: false).
-truncLoad           It disables any constraints on target table and truncates the data from the table before importing new data. This option can only be used with -dataOnly.
-safeMode            Transfer data in safe mode using plain SQL statements.
-copyDelimiter       Specify a single character to be used as delimiter in copy command when loading table data. Default is \t
-batchSize           Specify the Batch Size to be used by the bulk inserts. Valid values are 1-1000, default batch size is 1000, reduce if you run into Out of Memory exception
-cpBatchSize         Specify the Batch Size in MB, to be used in the Copy Command. Valid value is > 0, default batch size is 8 MB
-fetchSize           Specify fetch size in terms of number of rows should be fetched in result set at a time. This option can be used when tables contain millions of rows and you want to avoid out of memory errors.
-filterProp          The properties file that contains table where clause.
-skipFKConst         Skip migration of FK constraints.
-skipCKConst         Skip migration of Check constraints.
-ignoreCheckConstFilter  By default MTK does not migrate Check constraints and Default clauses from Sybase, use this option to turn off this filter.
-fastCopy            Bypass WAL logging to perform the COPY operation in an optimized way, default disabled.
-customColTypeMapping LIST  Use custom type mapping represented by a semi-colon separated list, where each entry is specified using COL_NAME_REG_EXPR=TYPE pair. e.g. .*ID=INTEGER
-customColTypeMappingFile PROP_FILE  The custom type mapping represented by a properties file, where each entry is specified using COL_NAME_REG_EXPR=TYPE pair. e.g. .*ID=INTEGER
-offlineMigration [PATH]  This performs offline migration and saves the DDL/DML scripts in files for a later execution. By default the script files will be saved under user home folder, if required follow -offlineMigration option with a custom path.
-logDir LOG_PATH     Specify a custom path to save the log file. By default, on Linux the logs will be saved under folder $HOME/.enterprisedb/migration-toolkit/logs. In case of Windows logs will be saved under folder %HOMEDRIVE%%HOMEPATH%\.enterprisedb\migration-toolkit\logs.
-copyViaDBLinkOra    This option can be used to copy data using dblink_ora COPY command. This option can only be used in Oracle to EnterpriseDB migration mode.
-singleDataFile      Use single SQL file for offline data storage for all tables. This option cannot be used in COPY format.
-allUsers            Import all users and roles from the source database.
-users LIST          Import the selected users/roles from the source database. LIST is a comma-separated list of user/role names e.g. -users MTK,SAMPLE
-allRules            Import all rules from the source database.
-rules LIST          Import the selected rules from the source database. LIST is a comma-separated list of rule names e.g. -rules high_sal_emp,low_sal_emp
-allGroups           Import all groups from the source database.
-groups LIST         Import the selected groups from the source database. LIST is a comma-separated list of group names e.g. -groups acct_emp,mkt_emp
-allDomains          Import all domain, enumeration and composite types from the source database.
-domains LIST        Import the selected domain, enumeration and composite types from the source database. LIST is a comma-separated list of domain names e.g. -domains d_email,d_dob, mood
-objecttypes         Import the user-defined object types.
-replaceNullChar <CHAR>  If null character is part of a column value, the data migration fails over JDBC protocol. This option can be used to replace null character with a user-specified character.
-importPartitionAsTable [LIST]  Use this option to import Oracle Partitioned table as a normal table in EnterpriseDB. To apply the rule on a selected set of tables, follow the option by a comma-separated list of table names.
-enableConstBeforeDataLoad  Use this option to re-enable constraints (and triggers) before data load. This is useful in the scenario when the migrated table is mapped to a partition table in EnterpriseDB.
-checkFunctionBodies [true|false]  When set to false, it disables validation of the function body during function creation, this is to avoid errors if function contains forward references. Applicable when target database is Postgres/EnterpriseDB, default is true.
-retryCount VALUE    Specify the number of re-attempts performed by MTK to migrate objects that failed due to cross-schema dependencies. The VALUE parameter should be greater than 0, default is 2.
-analyze             It invokes ANALYZE operation against a target Postgres or Postgres Plus Advanced Server database. The ANALYZE collects statistics for the migrated tables that are utilized for efficient query plans.
-vacuumAnalyze       It invokes VACUUM and ANALYZE operations against a target Postgres or Postgres Plus Advanced Server database. The VACUUM reclaims dead tuple storage whereas ANALYZE collects statistics for the migrated tables that are utilized for efficient query plans.
-loaderCount VALUE   Specify the number of jobs (threads) to perform data load in parallel. The VALUE parameter should be greater than 0, default is 1.
-logFileSize VALUE   It represents the maximum file size limit (in MB) before rotating to a new log file, defaults to 50MB.
-logFileCount VALUE  It represents the number of files to maintain in log file rotation history, defaults to 20. Specify a value of zero to disable log file rotation.
-useOraCase          It preserves the identifier case while migrating from Oracle, except for functions, procedures and packages unless identifier names are given in quotes.
-logBadSQL           It saves the DDL scripts for the objects that fail to migrate, in a .sql file in log folder.
-targetDBVersion     It represents the major.minor version of the target database. This option is applicable for offline migration mode and is used to validate certain migration options as per target db version [default is 9.4 for EnterpriseDB database].


Database Connection Information:

The application will read the connectivity information for the source and target database servers from toolkit.properties file.

Refer to MTK readme document for more information.





© The copyright belongs to the author.
