3 Reasons Why PostgreSQL WAL Files Are Not Deleted

PostgreSQL’s Write-Ahead Logging (WAL) system plays a crucial role in ensuring data integrity and enabling features like replication and point-in-time recovery.

However, WAL files can sometimes fail to be deleted, leading to excessive disk space consumption.

In this article, we will explore three common reasons why WAL files might not be deleted in PostgreSQL and how to resolve the issue.

Understanding PostgreSQL WAL Files

WAL files are the foundation of PostgreSQL’s ability to deliver ACID-compliant transactions. They maintain an append-only log of every change made to the database’s on-disk data files. Every modification, from simple data updates to complex schema alterations, is recorded in these files. The default size of a WAL file is 16MB, and they reside in the pg_wal subdirectory within the PostgreSQL cluster’s data directory.
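
To get a quick sense of how much disk space WAL currently occupies, the pg_wal directory can be inspected straight from SQL. A minimal sketch, assuming a superuser or a role in pg_monitor (pg_ls_waldir() is available from PostgreSQL 10 onward):

-- Segment size configured for this cluster (16MB unless changed at initdb time)
SHOW wal_segment_size;

-- Number of WAL segments on disk and their combined size
SELECT count(*) AS wal_files,
       pg_size_pretty(sum(size)) AS total_size
FROM pg_ls_waldir();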

The number of WAL files generated is directly related to the volume of write operations on the server. A write-intensive workload, such as frequent data updates or insertions, will result in a higher rate of WAL file creation. The system automatically manages the creation and deletion of WAL files, but certain configuration settings can be tuned to optimize this process.
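
To gauge how quickly a workload generates WAL, you can sample the current WAL position twice and compute the difference, or read the cumulative pg_stat_wal view on PostgreSQL 14 and later. A rough sketch; the LSN '0/3000000' is a hypothetical earlier sample, not a value to copy:

-- Note the current WAL insert position, then run the diff later against that value
SELECT pg_current_wal_lsn();
SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), '0/3000000')) AS wal_generated;

-- PostgreSQL 14+: cumulative WAL statistics since the last stats reset
SELECT wal_records, wal_fpi, wal_bytes FROM pg_stat_wal;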

Why Does PostgreSQL Retain WAL Files?

PostgreSQL retains WAL files for three primary reasons:

  • Checkpoints: WAL files record every data modification made since the last checkpoint, so that those changes can be replayed if the server crashes before they reach the main data files. Once a checkpoint completes and the changes are safely written to the data files, the WAL files from before that checkpoint become eligible for deletion or recycling.
  • Replication: WAL files play a crucial role in both logical and physical replication methods employed by PostgreSQL. Replicas rely on these files to stay synchronized with the primary server. Until a replica has successfully processed the changes recorded in a WAL file, that file cannot be deleted from the primary server.
  • WAL Archiving: When WAL archiving is enabled, PostgreSQL creates a backup of each WAL file to a secure, off-site location for disaster recovery purposes. This could involve using a user-defined script or a dynamic library to transfer WAL files to an NFS volume or an S3 bucket. A WAL file cannot be removed until the archiving process for that file is successfully completed.
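
A quick way to check which of these retention mechanisms is in play on a given server is to look at the relevant settings and at any existing replication slots. A minimal sketch (the wal_status column requires PostgreSQL 13 or later):

-- Is WAL archiving enabled, and what size limits apply?
SELECT name, setting
FROM pg_settings
WHERE name IN ('archive_mode', 'archive_command', 'max_wal_size', 'wal_keep_size');

-- Are there replication slots that could be holding WAL back?
SELECT slot_name, slot_type, active, wal_status
FROM pg_replication_slots;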

Common Causes of Excessive WAL File Growth

Understanding the reasons behind WAL file retention is crucial for diagnosing and addressing situations where WAL files accumulate excessively. Here are some common scenarios that can lead to WAL file buildup:

  • Infrequent Checkpoints: Checkpoints write modified data from memory to the main data files, which is what allows older WAL files to be deleted. Checkpoint frequency is governed by two configuration settings: checkpoint_timeout, which defines the maximum time between checkpoints, and max_wal_size, which dictates the maximum cumulative size of WAL files before a checkpoint is triggered. If these values are set too high, checkpoints happen less often and WAL files accumulate. Reducing them increases checkpoint frequency, but be mindful of the impact on disk I/O, since checkpoints are I/O-intensive operations.
  • Replication Issues: Replication delays or failures can significantly contribute to WAL file buildup on the primary server. There are two main scenarios to consider:
    • Replication Slots: Replication slots track the progress of replication clients, ensuring that the WAL files a replica still needs are retained until it has processed them. If a standby server falls behind or disconnects, its replication slot will cause WAL files to accumulate on the primary server, potentially leading to disk space issues. The pg_replication_slots system view provides information about each slot, including its status: a wal_status of extended indicates that the slot is retaining more WAL than the limits set by max_wal_size and wal_keep_size, which usually means a lagging replica (a query for measuring how much WAL each slot retains follows this list). PostgreSQL 13 introduced the max_slot_wal_keep_size setting, which lets administrators place a hard limit on the amount of WAL a replication slot can retain.
    • Physical Replication without Slots: While less common, physical replication can be configured without using replication slots. In such setups, the primary server lacks precise information about the standby’s progress in consuming WAL files. To prevent premature deletion of WAL files needed by the standby, the wal_keep_size setting is used. If WAL files consistently reach the limit defined by wal_keep_size, it might be necessary to investigate the standby’s performance or consider adjusting the wal_keep_size value.
  • WAL Archiving Failures: If the user-provided script or library responsible for archiving WAL files malfunctions, the process will continuously retry, preventing the WAL files from being deleted. Examining the pg_wal/archive_status subdirectory can reveal potential archiving issues. Files with the .ready extension indicate WAL files waiting to be archived. A large number of .ready files suggests a problem with the archiving process, which needs to be investigated and resolved. Tools like pgmetrics can help monitor and troubleshoot archiving issues by providing insights into the number of ready files and the overall archiving performance.
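
As mentioned in the replication slots item above, the amount of WAL a lagging slot is holding back can be measured by comparing each slot’s restart_lsn with the current WAL position. A sketch, using function names from PostgreSQL 10 and later:

-- How much WAL each replication slot is forcing the server to keep
SELECT slot_name,
       active,
       wal_status,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots
ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC;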

Strategies for Managing and Preventing WAL File Accumulation

Effectively managing WAL file growth involves a combination of monitoring, configuration adjustments, and structural optimizations:

  • Monitoring: Regularly monitoring key metrics is essential to identify potential WAL file buildup issues early on. Pay attention to the following:
    • Disk Space Usage: Regularly check the disk space consumption of the volume storing the WAL files. Unexpected increases could signal a problem.
    • Replication Slot Status: Use the pg_replication_slots view to monitor the status of replication slots. A wal_status of extended, unreserved, or lost indicates potential issues with replicas that need attention.
    • Replication Lag: Keep track of the replication lag of each standby server; significant lag leads to WAL file accumulation on the primary server (a query for measuring lag follows this list).
    • WAL Archiving Failures: Monitor the performance of the WAL archiving script or library. Investigate any failures and ensure timely resolution to prevent WAL file buildup. The number of .ready files in the pg_wal/archive_status directory is a key indicator of archiving health.
  • Configuration Tuning:
    • checkpoint_timeout and max_wal_size: Adjust these settings based on workload characteristics. Reducing the values can increase checkpoint frequency and limit WAL file buildup, but be mindful of the increased I/O activity.
    • wal_keep_size: If using physical replication without slots, set wal_keep_size appropriately to ensure standbys have enough time to catch up without causing excessive WAL file retention. Lowering the value if possible can help manage WAL file growth.
    • max_slot_wal_keep_size (PostgreSQL 13 and later): This setting provides a crucial safety net for replication slot-based setups. Setting a reasonable limit can prevent runaway WAL file growth caused by lagging or failed replicas.
    • Background Writer and Shared Buffer Cache Settings: Fine-tuning these settings can influence checkpoint frequency and performance, indirectly impacting WAL file generation.
  • Structural Optimizations:
    • Separate Disk for WAL Files: Due to the distinct access patterns and usage characteristics of WAL files compared to data files, storing them on a separate disk is highly recommended. This not only improves performance by reducing I/O contention but also simplifies management and capacity planning.
    • Cascaded Replication: Implementing cascaded replication, where one or two fast standby servers handle replication for non-essential or supplementary clients, can help manage WAL file retention on the primary server more effectively. This reduces the primary server’s burden of retaining WAL files for numerous clients with varying performance levels.
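
Two short sketches tie the monitoring and tuning points above together: the first measures how far each standby lags behind the primary, and the second applies configuration limits with ALTER SYSTEM. The 10GB and 30min values are illustrative assumptions, not recommendations:

-- Replication lag per standby, in bytes of WAL not yet replayed
SELECT application_name,
       state,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) AS replay_lag_bytes
FROM pg_stat_replication;

-- Cap the WAL a slot may retain and adjust checkpoint spacing (example values)
ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
ALTER SYSTEM SET checkpoint_timeout = '30min';
SELECT pg_reload_conf();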

Commands for WAL Management in PostgreSQL: Replication Slots, Checkpoints, and Archiving

Let’s summarize the key commands and concepts related to replication slots, checkpoint monitoring, and WAL archiving status:

Replication Slot Management

Replication slots play a crucial role in managing WAL file retention. A replication slot informs the PostgreSQL server that specific WAL files are still needed by a replication client and must not be removed. Two SQL commands cover most day-to-day slot management:

Monitoring Replication Slots:

SELECT * FROM pg_replication_slots;

This command retrieves detailed information about all the replication slots defined on your PostgreSQL server. By carefully examining the output, you can identify slots that might be contributing to the retention of a large volume of WAL files.

Key Columns to Examine:

  • slot_name: This column provides the unique name assigned to each replication slot. It helps you distinguish between different slots.
  • active: This column indicates whether a client is currently connected to the slot (true) or not (false, displayed as f in psql). A false value points to a potentially problematic slot whose client has disconnected.
  • wal_status: This column reflects the status of WAL file consumption by the slot and can provide insights into possible replication issues. The possible values are:
    • reserved: WAL file usage falls within the configured limits.
    • extended: The slot is retaining more WAL files than the limits set by the parameters max_wal_size and wal_keep_size. This often occurs when a replica is behind in receiving updates.
    • unreserved: (Available in PostgreSQL 13 and later) This status means the slot is holding onto more WAL files than the limit specified by max_slot_wal_keep_size. In this situation, PostgreSQL may not guarantee that all the WAL files needed by the slot will be kept, possibly resulting in replication failures.
    • lost: This status indicates that essential WAL files required by the slot have been deleted, making it impossible for replication to continue.
  • restart_lsn: This column displays the WAL location, represented as a Log Sequence Number (LSN), from where a replica would need to resume replication.
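
Putting these columns together, a filtered query can surface only the slots that need attention, i.e. anything inactive or with a non-reserved wal_status (the wal_status column requires PostgreSQL 13 or later). A sketch:

SELECT slot_name, slot_type, active, wal_status, restart_lsn
FROM pg_replication_slots
WHERE NOT active
   OR wal_status IN ('extended', 'unreserved', 'lost');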

Dropping a Replication Slot:

SELECT pg_drop_replication_slot('slot_name');

This command removes the replication slot identified by 'slot_name'. Exercise caution when using this command because dropping an active slot will interrupt the replication process. If a slot is no longer needed, dropping it can help to reduce WAL file accumulation.
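
Before dropping a slot, it is worth confirming that no client is attached to it. A minimal sketch, using the hypothetical slot name standby_old:

-- Check that the slot is inactive (no connected client) before removing it
SELECT slot_name, active, active_pid
FROM pg_replication_slots
WHERE slot_name = 'standby_old';

-- Only then drop it
SELECT pg_drop_replication_slot('standby_old');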

Checkpoint Monitoring

Checkpoints in PostgreSQL are essential for writing modified data from memory (the shared buffers) to the data files on disk; they ensure data consistency and enable faster recovery in case of a crash. Checkpoint frequency, and therefore how long WAL files are retained, is controlled by the following configuration parameters:

  • checkpoint_timeout (Units: Time): This parameter sets the maximum time interval between automatic checkpoints. If you set this value too high, it can lead to the buildup of many WAL files before a checkpoint occurs.
  • max_wal_size (Units: size, megabytes by default): This parameter sets a soft limit on the total size of accumulated WAL files; once the limit is reached, PostgreSQL initiates a checkpoint. If this value is too high, it can also lead to increased WAL file retention.
  • wal_keep_size (Units: size, megabytes by default): This parameter is useful for physical replication setups without replication slots. It instructs PostgreSQL to retain at least the specified amount of WAL so that standby servers (replicas) can catch up even without a slot. If you have a non-zero wal_keep_size and notice a buildup of WAL files, consider reducing this setting if your replication setup allows it.

These parameters can be adjusted in your PostgreSQL configuration file (postgresql.conf) to control how often checkpoints occur, which indirectly affects the number of WAL files retained.
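
To review the current values and get a rough sense of checkpoint frequency, SHOW can be combined with the cumulative statistics views. On versions before PostgreSQL 17 the checkpoint counters live in pg_stat_bgwriter (newer releases move them to pg_stat_checkpointer); a sketch:

SHOW checkpoint_timeout;
SHOW max_wal_size;

-- Many "requested" checkpoints relative to "timed" ones usually means
-- max_wal_size is reached before checkpoint_timeout expires.
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;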

WAL Archiving Status

WAL archiving is crucial for backing up WAL files and enabling point-in-time recovery. A failing or slow WAL archiving process will cause WAL files to accumulate, because a segment cannot be removed until it has been archived successfully. Two approaches help you monitor archiving status:

  • Using the pgmetrics Tool: pgmetrics gathers information about WAL archiving, including the number of files waiting to be archived. Consult the pgmetrics documentation for details on how it collects and reports these metrics.
  • Manual File System Checks: You can check the number of WAL files ready for archiving by inspecting the pg_wal/archive_status/ directory inside your PostgreSQL data directory. Each file there corresponds to a WAL segment, and its extension (.ready or .done) indicates whether the segment is still waiting to be archived or has already been archived.
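
In addition to the approaches above, PostgreSQL exposes archiver counters in the pg_stat_archiver view, and from version 12 onward the archive_status directory can be listed from SQL with pg_ls_archive_statusdir() (superuser or pg_monitor privileges required). A sketch:

-- Cumulative archiver statistics, including details of the last failure
SELECT archived_count, last_archived_wal,
       failed_count, last_failed_wal, last_failed_time
FROM pg_stat_archiver;

-- Number of WAL segments still waiting to be archived
SELECT count(*) AS ready_files
FROM pg_ls_archive_statusdir()
WHERE name LIKE '%.ready';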

Conclusion

By understanding the mechanisms behind WAL file generation and retention, administrators can effectively address and prevent issues related to excessive WAL file growth. Diligent monitoring, appropriate configuration settings, and structural optimizations can ensure a healthy and performant PostgreSQL deployment.