We've moved to Google Cloud SQL, created a couple of databases, and imported a lot of data. Along the way, a fairly large number of queries were interrupted here and there, leaving garbage behind in the form of temp files, and storage usage climbed well above 1 TB.
postgres=> SELECT datname, temp_files AS "Temporary files", pg_size_pretty(temp_bytes) AS "Size of temporary files" FROM pg_stat_database;
    datname    | Temporary files | Size of temporary files
---------------+-----------------+-------------------------
 cloudsqladmin |               0 | 0 bytes
 template0     |               0 | 0 bytes
 postgres      |               0 | 0 bytes
 template1     |               0 | 0 bytes
 first         |           33621 | 722 GB
 second        |               9 | 3399 MB
 third         |          293313 | 153 GB
(7 rows)
According to the results of the query above, we have close to 880 GB of potentially useless files. There are a couple of questions:
- How can we identify temp files not used by any running queries?
- How can we remove them, given that Postgres is managed by Google Cloud SQL?
As per the PostgreSQL documentation, the field

temp_bytes

is defined as:

Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.

Meaning that the number is the sum of the temporary file sizes since the creation of the database (or since the last pg_stat_reset()), not the current temp file usage.
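To see how far back that total reaches, pg_stat_database also exposes the stats_reset column, the time at which the counters were last zeroed. A query along these lines (a sketch built only on standard pg_stat_database columns) shows the accumulation window next to the totals:

postgres=> SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS "Size of temporary files", stats_reset AS "Counters last reset" FROM pg_stat_database;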
On a self-managed instance the current usage could be determined using the Generic File Access Functions, but in Cloud SQL a normal user cannot execute

select pg_ls_dir('base/pgsql_tmp')

as this is reserved for superusers. As you said, Cloud SQL is a managed service, so at the moment there is no way to see the current temp file usage.
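For illustration, this is what that looks like on a self-managed instance where superuser rights are available (a sketch; pg_ls_tmpdir() exists only on PostgreSQL 12 and later, where it is also callable by members of the pg_monitor role):

-- Superuser only: list temp file names in the default tablespace
select pg_ls_dir('base/pgsql_tmp');

-- PostgreSQL 12+: name, size and modification time of each temp file
select * from pg_ls_tmpdir();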
One thing that will definitely clear the numbers you see is pg_stat_reset(), though as said before, those counters are a historical total, not current temp file usage.
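For completeness, the reset itself is a single call; it zeroes the statistics counters for the current database only and does not touch any files on disk:

postgres=> SELECT pg_stat_reset();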
One thing guaranteed to clean out the temp files is a restart of the database instance, as part of the startup process is wiping the

base/pgsql_tmp

directory.
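On Cloud SQL that restart is performed through the console or the gcloud CLI, for example (the instance name here is a placeholder):

gcloud sql instances restart my-instance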