Converting to Universal Table Spaces (UTS): Insights from Real-World Success Stories


Written by Rahul Bajaj and Sreenivas Javvaji 

In the world of Db2 for z/OS, converting deprecated table space types (such as simple and segmented table spaces) to Universal Table Spaces (UTS) has become essential: with Db2’s ongoing enhancements, many users now face the challenge of migrating their data infrastructure to UTS. Our team recently helped several organizations complete this conversion, and the experience yielded valuable lessons and actionable strategies that others can apply to their own migration journeys.


Why Convert to Universal Table Spaces?

The shift to UTS, introduced with Db2 9, is more than a matter of staying current; it’s about unlocking a range of advanced features and performance enhancements that older table spaces simply can’t support. Here are some of the reasons UTS is essential for modern Db2 systems:

  • Access to Advanced Db2 Features: Many new Db2 capabilities are available only in UTS, such as inline LOBs, Huffman compression, DROP enhancements, and “currently committed” locking, which streamline data management and enable modern practices.

  • Schema Flexibility: UTS enables more flexible schema changes with features like pending DDL, allowing for easier adaptations in response to evolving data requirements.

  • Enhanced Space Management: UTS offers better space utilization and management for large tables, including features like efficient mass delete, segment reuse, and larger table handling.

Converting to UTS not only ensures compliance with Db2’s strategic direction but also positions organizations to leverage new capabilities that enhance database performance, scalability, and manageability.


Conversion Paths and Strategies

The path to UTS varies depending on the source tablespace type and target requirements. Here are a few common approaches:

  • Multi-Table Simple/Segmented to a Set of PBGs: Converting tablespaces containing multiple tables requires careful consideration of the Db2 subsystem level and dataset size. There are three primary methods to perform this conversion, depending on your environment:

    1. UNLOAD/DROP/CREATE/LOAD: This traditional approach involves unloading the data, dropping the original tablespace, creating new PBG tablespaces, and reloading the data.

    2. CREATE/LOAD/RENAME/DROP: In this method, a new UTS tablespace (e.g., ts_new) is created along with new copies of the tables, and data is loaded from the original segmented tablespace (ts) into the new tablespace. The original tables are then renamed (e.g., with an _old suffix) and kept for verification purposes. After the data in ts_new is verified, the new tables are renamed back to the original names and the old tablespace is dropped. (Db2 has no rename for table spaces themselves, so the renaming is done at the table level with RENAME TABLE.)

    3. ALTER TABLESPACE MOVE TABLE + Online REORG: Introduced in Db2 V12R1M508, this option simplifies the process by allowing a fully online conversion. Tables are moved directly to a new PBG tablespace using the ALTER TABLESPACE MOVE TABLE statement, followed by an online REORG to materialize the changes, as sketched below.
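At function level V12R1M508 and above, method 3 comes down to two statements plus a materializing REORG. Here is a minimal sketch, assuming hypothetical names (database DB1, multi-table source tablespace TSOLD, table TB1, target tablespace TSNEW):

    -- Target PBG tablespace: MOVE TABLE requires MAXPARTITIONS 1 and DEFINE NO,
    -- and the LOGGED/NOT LOGGED and CCSID attributes must match the source.
    CREATE TABLESPACE TSNEW IN DB1
      MAXPARTITIONS 1
      DSSIZE 16 G        -- exceeding 4GB assumes EA-enabled storage
      SEGSIZE 32
      DEFINE NO;

    -- Pending DDL: schedule the move of TB1 out of the multi-table tablespace.
    ALTER TABLESPACE DB1.TSOLD MOVE TABLE TB1 TO TABLESPACE DB1.TSNEW;

    -- Materialize the pending change with a tablespace-level online REORG:
    REORG TABLESPACE DB1.TSOLD SHRLEVEL CHANGE

Repeat the ALTER for each table in the source tablespace; a single REORG can materialize multiple pending MOVE TABLE changes.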

Db2 Subsystem-Level Considerations:

  • Below Function Level V12R1M508: Only the first two methods (UNLOAD/DROP/CREATE/LOAD or CREATE/LOAD/RENAME/DROP) are available for multi-table conversions. These methods require downtime and careful dataset handling to avoid data loss or integrity issues.

  • At or Above Function Level V12R1M508: All three methods are available, with the ALTER TABLESPACE MOVE TABLE command offering the most streamlined and efficient option. This fully online approach minimizes downtime, making it the preferred method for systems at this function level.

For datasets exceeding 4GB, enabling Extended Addressability (EA) is essential to ensure smooth handling of large data volumes. Proper configuration of EA attributes in the new tablespaces and related indexes is critical to avoid errors and optimize performance.
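EA itself comes from DFSMS: the datasets must be allocated with an SMS data class that has the extended format and extended addressability attributes. A minimal sketch, using hypothetical names (STOGROUP SGDB2EA, SMS data class DCDB2EA, catalog alias DB2CAT):

    -- STOGROUP whose datasets inherit Extended Format/EA from the SMS data class;
    -- needed for any tablespace or index whose datasets can exceed 4GB.
    CREATE STOGROUP SGDB2EA
      VOLUMES('*')
      VCAT DB2CAT
      DATACLAS DCDB2EA;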

  • Single-Table Simple/Segmented to PBG (Partitioned by Growth): For single-table tablespaces, the conversion is relatively straightforward: an ALTER TABLESPACE MAXPARTITIONS statement followed by an online REORG accomplishes the transition (see the combined sketch after the note below).

  • Classic (Table/Index-controlled) Partitioned to PBR (Partitioned by Range): Converting classic partitioned tablespaces to UTS PBR involves different steps depending on whether the tablespace uses index-controlled or table-controlled partitioning. The process to convert any partitioned (non-UTS) tablespace to a partition-by-range (PBR) universal tablespace includes the following steps:

    1. If the table space uses index-controlled partitioning, convert it to use table-controlled partitioning.

    2. Issue an ALTER TABLESPACE statement specifying a segment size to convert the tablespace to a partition-by-range universal tablespace with absolute page numbering (APN).

    3. Run the REORG utility for the entire tablespace to materialize the pending data definition changes.

Note: The conversion of any partitioned tablespace to PBR creates the PBR with absolute page numbering (APN). To convert it to PBR with relative page numbering (RPN, sometimes called PBR2), execute an ALTER TABLESPACE PAGENUM RELATIVE statement. This is also a pending DDL change, and you must run a REORG on the entire tablespace to materialize it. This applies to both table-controlled and index-controlled partitioning schemes. RPN is the strategic direction for PBR table spaces in Db2.
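Both of these paths reduce to pending ALTER statements materialized by online REORGs. A minimal combined sketch, using hypothetical names (single-table tablespace DB1.TSSEG, table-controlled partitioned tablespace DB1.TSPART):

    -- Single-table simple/segmented -> PBG: pending DDL, then online REORG.
    ALTER TABLESPACE DB1.TSSEG MAXPARTITIONS 8;
    REORG TABLESPACE DB1.TSSEG SHRLEVEL CHANGE

    -- Table-controlled partitioned -> PBR with APN: specify a segment size.
    ALTER TABLESPACE DB1.TSPART SEGSIZE 32;
    REORG TABLESPACE DB1.TSPART SHRLEVEL CHANGE

    -- Once the tablespace is PBR, optionally switch from APN to RPN.
    ALTER TABLESPACE DB1.TSPART PAGENUM RELATIVE;
    REORG TABLESPACE DB1.TSPART SHRLEVEL CHANGE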

By selecting the right path and considering the data requirements and potential dependencies, users can navigate the conversion process more smoothly, as demonstrated in the following success stories.


Real-World Stories: Challenges and Resolutions
Story 1: Recovering from Extended Outage and Data Loss

Scenario: One user faced a critical challenge during their conversion to UTS. They executed an UNLOAD/DROP/CREATE/LOAD sequence. Unfortunately, the first LOAD step failed with an unavailable-resource error (SQLCODE -904, reason code 00D70008). This error indicated that the dataset required the Extended Format and Extended Addressability (EA) attributes, which were not correctly defined through DFSMS. During troubleshooting, some UNLOAD datasets were accidentally deleted, leading to partial data loss and an extended outage.

Challenges:

    1. Extended Outage: The failure and missing data led to prolonged downtime.

    2. Data Loss: The accidental deletion of UNLOAD datasets made it difficult to recover the data without starting over.

    3. Restart Complexity: Restarting the conversion was challenging due to the deleted datasets and duplicate dataset errors.

Resolution:

    1. Recover Data First: We restored the data by recreating the tables in the original segmented tablespaces, then used DSN1COPY with Object ID Translation (OBIDXLAT) to transfer data from image copies (see the sketch after this list).

    2. Enable Extended Addressability (EA): EA was enabled on required tablespaces to support large datasets (DSSIZE exceeding 4GB), ensuring efficient data handling and avoiding further allocation errors.
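A minimal sketch of such a DSN1COPY step, with hypothetical dataset and library names. FULLCOPY tells the utility the input is an image copy, and RESET clears the log RBAs in the copied pages; the SYSXLAT records hold decimal source,target pairs: the first record maps the DBIDs, the second the PSIDs, and the remaining records the table OBIDs (the values below are placeholders). The target tablespace must be stopped while the copy runs:

    //DSN1COPY EXEC PGM=DSN1COPY,PARM='FULLCOPY,OBIDXLAT,RESET'
    //STEPLIB  DD DISP=SHR,DSN=DB2.V13.SDSNLOAD
    //SYSPRINT DD SYSOUT=*
    //SYSUT1   DD DISP=SHR,DSN=HLQ.IC.DB1.TS1.PRECONV
    //SYSUT2   DD DISP=OLD,DSN=DB2CAT.DSNDBC.DB1.TS1.I0001.A001
    //SYSXLAT  DD *
    260,280
    2,2
    261,281
    /*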

Preventive Measures:

    • Take Pre-Conversion Image Copies: Image copies before conversion protect against data loss.
    • Use Persistent Storage for UNLOAD Datasets: Store datasets persistently and avoid manual deletions or renaming during conversion.
    • Practice Disaster Recovery: Preparing for DSN1COPY disaster recovery allows for smooth responses to unexpected data issues.

This story emphasizes the importance of having backups and preemptively enabling EA for large datasets to avoid resource errors and reduce outage times.


Story 2: Troubles with Indexes on New EA-Enabled STOGROUP

Scenario: Another user’s existing STOGROUP lacked EA configuration, so a new EA-enabled storage group was created for the UTS and the tables were moved to it. The REORG utility then failed because the indexes associated with the tablespaces were still allocated from the non-EA STOGROUP.

Challenges:

    1. STOGROUP Incompatibility: While the tablespaces were migrated to the EA-enabled STOGROUP, the associated indexes weren’t.
    2. Extended Downtime: Restarting the REORG utility required altering index STOGROUPs, resulting in additional downtime.

Resolution:

    1. Alter Dependent Indexes: Catalog queries identified the affected indexes, which were altered to use the new EA-enabled STOGROUP (see the sketch after this list).
    2. Restart REORG: With the altered indexes, REORG could run without further issues.
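A sketch of that check and fix, assuming the old and new storage groups are named SGOLD and SGDB2EA (both hypothetical). SYSINDEXPART records which storage group backs each index partition:

    -- Find indexes still allocated from the old, non-EA STOGROUP.
    SELECT IXCREATOR, IXNAME, PARTITION, STORNAME
      FROM SYSIBM.SYSINDEXPART
     WHERE STORNAME = 'SGOLD'
       AND STORTYPE = 'I';     -- 'I' = allocated via a STOGROUP

    -- Move an affected index (placeholder name); the change takes effect
    -- when the index dataset is next allocated, e.g., by the restarted REORG.
    ALTER INDEX SCHEMA1.IX1 USING STOGROUP SGDB2EA;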

Preventive Measures:

    • STOGROUP Compatibility Checks: Ensure all objects, including dependent indexes, are EA-enabled when moving to a new STOGROUP.
    • Enable EA in Existing STOGROUPs: Where feasible, update existing STOGROUPs to avoid adjusting dependencies.

This case highlights the need to consider all dependencies, especially indexes, when planning a STOGROUP migration.


Story 3: Resolving REORG Failures in Inline Copying

Scenario: While performing a REORG with an inline copy to materialize a MOVE TABLE operation, one user encountered a failure due to missing symbolic variables (&TS. or &SN.) in the TEMPLATE. Without them, Db2 could not generate unique dataset names for dynamic allocation, causing the REORG to fail.

Challenges:

    • Incorrect Template Configuration: Missing symbolic variables in the template halted the operation.
    • Complex Restart: Restarting REORG required careful reconfiguration to avoid recurrence in subsequent REORGs.

Resolution:

    1. Update TEMPLATE Variables: We modified the template to include the &TS. or &SN. symbolic variable, which enabled dynamic dataset allocation (see the sketch after this list).
    2. Restart Utility from Beginning: With the updated template, the REORG utility ran successfully to completion.
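A sketch of corrected control statements, with a hypothetical high-level qualifier HLQ. Because the REORG that materializes a MOVE TABLE takes inline copies of more than one tablespace, the TEMPLATE needs &TS. or &SN. so that each generated dataset name is unique:

    TEMPLATE ICOPY
      DSN 'HLQ.IC.&DB..&SN..D&DATE..T&TIME.'
      UNIT SYSDA
    REORG TABLESPACE DB1.TSOLD SHRLEVEL CHANGE COPYDDN(ICOPY)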

Preventive Measures:

    • Template Validation: Before running REORGs with inline copy to materialize the UTS conversion, verify templates include all required symbolic variables.

This story underscores the importance of template configuration to ensure the smooth operation of utilities like REORG.


Story 4: Block Size and Record Format Errors in LOAD Utility

Scenario: During the conversion, the LOAD utility failed because an empty UNLOAD dataset had been allocated with default block size (BLKSIZE) and logical record length (LRECL) attributes that were incompatible with the LOAD utility.

Challenges:

    1. Incorrect Dataset Attributes: Default UNLOAD dataset attributes didn’t align with LOAD utility requirements.
    2. Handling Empty Tables: Empty tables can sometimes result in datasets with incompatible default attributes.

Resolution:

    1. Configure Dataset Attributes Properly: We recommended setting the dataset attributes to DSORG(PS), LRECL(4096), and RECFM(VB) to avoid compatibility issues (see the sketch after this list).
    2. Selective UNLOAD for Non-Empty Tables: Where possible, skip empty tables during UNLOAD to prevent such issues.
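A sketch of an UNLOAD output DD with the attributes spelled out explicitly (dataset name hypothetical; the half-track BLKSIZE is one workable choice), so that even an empty table leaves behind a dataset the LOAD utility can read:

    //SYSREC   DD DSN=HLQ.UNLD.DB1.TB1,
    //            DISP=(NEW,CATLG,DELETE),
    //            UNIT=SYSDA,SPACE=(CYL,(100,100),RLSE),
    //            DCB=(DSORG=PS,RECFM=VB,LRECL=4096,BLKSIZE=27998)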

Preventive Measures:

    • Attribute Validation: Validate that all datasets have compatible attributes before running LOAD.
    • Avoid Default Configurations for Empty Tables: Ensure default configurations don’t conflict with utility requirements, especially for empty datasets.

This story emphasizes the importance of attribute management, even for empty datasets, in preventing job interruptions during conversion.


Story 5: Exceeding SYSOUT Row Limits

Scenario: During conversion, one user encountered an S722 abend when the job’s SYSOUT exceeded its output line limit, halting the process. This issue was unrelated to the UTS conversion itself but prevented the job from completing.

Challenges:

    • Unexpected Job Failure: The job failed due to system-imposed output limits, not due to a table space conversion issue.

Resolution:

    1. Increase Output Limits: Raise the SYSOUT limit for the affected DD (e.g., with OUTLIM).
    2. Route Output to a Dataset: Direct large utility output to a cataloged dataset instead of SYSOUT.
    3. Update JES Output Parameters: Increase the job-level output limits in JES so the job can run to completion without interruption (see the sketch after this list).
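A sketch of the JCL-side options (names and limits hypothetical): a JES2 job-level line limit, a per-DD OUTLIM, and, commented out, routing the utility output to a dataset instead of SYSOUT:

    /*JOBPARM LINES=9999
    //UTIL     EXEC PGM=DSNUTILB,PARM='DSNA,CONVJOB'
    //SYSPRINT DD SYSOUT=*,OUTLIM=1000000
    //* Alternative: route SYSPRINT to a cataloged dataset instead:
    //*SYSPRINT DD DSN=HLQ.CONV.SYSPRINT,DISP=(NEW,CATLG,DELETE),
    //*            UNIT=SYSDA,SPACE=(CYL,(50,50),RLSE)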

Preventive Measures:

    • Review Output Limits: Before conversion, check and adjust output limits to avoid unexpected interruptions.
    • Redirect Large Outputs to Datasets: For large jobs, consider redirecting outputs to datasets rather than relying solely on SYSOUT.

This story serves as a reminder that even minor configuration details, like output limits, can impact the success of a job.


Key Takeaways for Successful UTS Migration

Through these stories, here are several best practices to guide others embarking on UTS conversions:

    1. Enable Extended Addressability (EA) in Advance: For datasets exceeding 4GB, ensure Extended Addressability (EA) is enabled on table spaces and related indexes. When using the new MOVE TABLE option of ALTER TABLESPACE, observe the restrictions on the target UTS, such as MAXPARTITIONS 1, DEFINE NO, and matching the source table space’s LOGGED/NOT LOGGED and CCSID attributes. Because of the single-partition limitation (MAXPARTITIONS 1), increase the DSSIZE parameter beyond 4GB for larger datasets; EA is critical for efficiently handling such configurations.
    2. Identify Tables for Conversion: Focus on tablespaces containing LOB columns or large datasets, as these often have unique requirements. Prioritize conversions based on business needs and technical feasibility.
    3. Plan Schema Changes in Advance: Prepare for schema changes by identifying dependencies, ensuring compatibility with application packages, and planning for partitioning updates to minimize disruptions.
    4. Plan for Data Continuity: Take image copies of tablespaces before migration and store UNLOAD datasets persistently to protect against accidental deletions and data loss.
    5. Avoid Manual Deletion of Critical Data Files: Retain UNLOAD datasets to allow for easier recovery in case of a failure, avoiding the need to repeat the entire conversion process.
    6. Template and Configuration Checks: Before conversion, verify that templates include necessary variables and output limits are sufficient. For large jobs, consider redirecting outputs to datasets.
    7. Address Application Dependencies: Identify packages and applications that rely on converted tablespaces. Rebind these packages to prevent invalidation and ensure application continuity.
    8. Validate and Optimize Configuration: Run RUNSTATS after conversion to ensure optimal database performance (see the sketch after this list).
    9. Choose the Right Conversion Path:
      • For subsystems below Db2 V12R1M508, use traditional methods like UNLOAD/DROP/CREATE/LOAD or CREATE/LOAD/RENAME/DROP.
      • For V12R1M508 and above, the MOVE TABLE option with online REORG offers a streamlined, downtime-minimizing solution.
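For takeaways 7 and 8, a short post-conversion sketch (names hypothetical): refresh statistics with RUNSTATS, query the catalog for packages the conversion invalidated, and rebind them:

    -- Utility statement: refresh statistics on the converted tablespace.
    RUNSTATS TABLESPACE DB1.TSNEW TABLE(ALL) INDEX(ALL)

    -- Catalog query: find invalidated packages.
    SELECT COLLID, NAME
      FROM SYSIBM.SYSPACKAGE
     WHERE VALID = 'N';

    -- DSN subcommand: rebind each affected package.
    REBIND PACKAGE(COLL1.PKG1)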


Conclusion

The migration to Universal Table Spaces is both challenging and rewarding. It brings performance enhancements and modernization options that are crucial for future-proofing Db2 environments. By learning from these real-world success stories, other users can better prepare for their UTS migration journey, reducing risks and maximizing the advantages of these table spaces.

Remember, a successful migration to UTS requires thorough preparation, clear understanding of dependencies, and a well-documented strategy. With the right approach, converting to UTS can be a smooth and highly beneficial step toward a modern, resilient Db2 infrastructure.