Why Migrate from Btrieve to PostgreSQL and other Relational Databases?
Before migrating Btrieve to SQL, your first step should be to validate the Data Definition Files (DDFs) in your database.
As you probably know, Pervasive Software provides a DDF Builder with Pervasive.SQL v9, and there are many similar tools available on the internet that can help with this daunting task. However, although these applications can be very helpful, the process of identifying the definitions for a 300-table database can still introduce errors, and an initial visual inspection of the revised data in the Control Center may be misleading. It is very easy to miss common DDF errors such as a misaligned offset or the wrong data type, such as zstring instead of string.
The solution: The BTR2SQL DDF Validation Utility
BTR2SQL includes a handy DDF Validation Tool, developed to make sure that the DDFs have been defined accurately. When run, it searches for and identifies common DDF errors: for example, tables whose total field length does not match the Btrieve record size for the file, and Btrieve files that have no definition at all. It also checks for anomalies in the names of columns, indexes, and so on.
After checking for table- and column-level issues, it examines every field in every row of each table in the database. When an error is detected, a message is displayed showing the error code and a suggestion to help you repair the problem.
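To make the record-size check above concrete, here is a minimal sketch in Python of that kind of consistency test. It is not BTR2SQL's code; the function name and the field layout are invented purely for illustration, assuming the field definitions are available as (name, offset, length) entries taken from the DDFs.

# Conceptual sketch only -- not BTR2SQL's implementation.
# Given the field definitions from a DDF and the fixed record size that
# Btrieve reports for the file, flag two structural problems mentioned
# above: a total-length mismatch and a field that runs past the end of
# the record (a misaligned offset).

def check_record_layout(fields, btrieve_record_size):
    problems = []
    total = sum(length for _name, _offset, length in fields)
    if total != btrieve_record_size:
        problems.append(
            f"total field length {total} does not match "
            f"Btrieve record size {btrieve_record_size}")
    for name, offset, length in fields:
        if offset + length > btrieve_record_size:
            problems.append(
                f"field '{name}' (offset {offset}, length {length}) "
                f"extends past the end of the record")
    return problems

# Illustrative layout: the DDF accounts for only 98 of the file's 100 bytes.
print(check_record_layout([("Id", 0, 4), ("Name", 4, 30), ("Notes", 34, 64)], 100))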
Ideally, you should run the DDF Validation Tool before migrating your data to SQL; refer to the BTR2SQL User Guide for details. If you have already migrated the tables and problems are found in the DDFs, you can convert them again.
We recommend that you run a validation even if you have had DDFs for years. The validation does not take long to run, but you may want to limit the row scanning to save some time. Refer to the following section, Command line, for details. Any time that you make changes to the DDFs, we recommend that you run the DDF Validation Tool on the affected tables.
Command line
DDFValidator [-dblocation <path to DDFs>]
[-log <report file>] [-table <table list>]
[-max-rows <count>] [-ignore <error codes>] [-?]
-dblocation <path>: The full path to the Pervasive.SQL DDFs, which define the database schema. If this is not provided, the current directory is assumed.
-log <file>: Send the report (XML format) to the specified file. When -log is not included, the XML is sent to stdout. When -log is provided, the XML is sent to the file and only the error messages are displayed, so you can see that the tool is working and not hung.
-table <table list>: By default, all tables are tested. Use the -table parameter to specify one or more tables and narrow the tests. Separate the list with semicolons, or include the -table parameter multiple times.
-max-rows <count>: Allows you to shorten the testing time by reducing the number of rows checked. If your table has 10 million rows, it is likely that after approximately the first million, the rest of the rows will validate the same way.
-ignore <error codes>: Error codes to ignore, separated by semicolons. If you receive many instances of the same warning message and have verified that it is not a problem for your database, you can filter it out to avoid cluttering the output.
Parameters in brackets [] are optional. Surround names with quotes if they contain spaces.
If -dblocation is left out, the DDFs are read from the current directory.
Example: Test the Pervasive.SQL Demodata database; send report to XML file:
DDFValidator -dblocation "c:\pvsw\demodata" -log DemoDataTest.xml
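A second, illustrative example: limit the scan to a couple of tables and cap the number of rows checked, which is useful on large databases. The table names here are from the Demodata sample, and the invocation assumes -max-rows takes the row count as its argument; substitute your own tables and limits:

DDFValidator -dblocation "c:\pvsw\demodata" -table "Person;Billing" -max-rows 100000 -log PartialTest.xml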