I'm looking for a way to find the row count for all my tables in Postgres. I know I can do this one table at a time with:
```sql
SELECT count(*) FROM table_name;
```
but I'd like to see the row count for all the tables and then order by that to get an idea of how big all my tables are.
I like Daniel Vérité's answer. But when you can't use a CREATE statement, you can either use a bash solution or, if you're a Windows user, a PowerShell one:
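For example, a minimal bash sketch, assuming a database named `mydb` (swap in your own `-h`/`-U`/`-p` connection options):

```bash
#!/usr/bin/env bash
# List every user table, then run an exact count(*) against each one.
# "mydb" is a placeholder. Names that need quoting (mixed case, spaces)
# would need quote_ident-style handling on top of this.
for t in $(psql -At -d mydb -c \
    "SELECT schemaname || '.' || relname FROM pg_stat_user_tables"); do
  n=$(psql -At -d mydb -c "SELECT count(*) FROM $t")
  echo "$n $t"
done | sort -rn
```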
I usually don't rely on statistics, especially in PostgreSQL.
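One way to get exact counts in a single query, without creating a function, is to run `count(*)` dynamically through `query_to_xml()`. This is a sketch; adjust the schema filter, and note the caveats about `information_schema.tables` discussed further down:

```sql
SELECT table_schema,
       table_name,
       (xpath('/row/cnt/text()', xml_count))[1]::text::bigint AS row_count
FROM (
  SELECT table_schema,
         table_name,
         -- query_to_xml runs the generated count(*) and returns its
         -- result as XML, which xpath() above unwraps again
         query_to_xml(format('SELECT count(*) AS cnt FROM %I.%I',
                             table_schema, table_name),
                      false, true, '') AS xml_count
  FROM information_schema.tables
  WHERE table_schema = 'public'   -- change to the schema you want
) t
ORDER BY row_count DESC;
```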
There are three ways to get this sort of count, each with its own tradeoffs.
If you want a true count, you have to execute a SELECT statement like the one you used against each table. This is because PostgreSQL keeps row visibility information in the row itself, not anywhere else, so any accurate count can only be relative to some transaction. You're getting a count of what that transaction sees at the point in time when it executes. You could automate this to run against every table in the database, but you probably don't need that level of accuracy or want to wait that long.
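For instance, two counts inside one transaction see the same snapshot (`some_table` is a placeholder):

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;
-- Rows committed by other sessions after the snapshot is taken
-- are invisible here, so both counts agree.
SELECT count(*) FROM some_table;
SELECT count(*) FROM some_table;
COMMIT;
```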
The second approach notes that the statistics collector tracks roughly how many rows are "live" (not deleted or obsoleted by later updates) at any time. This value can be off by a bit under heavy activity, but is generally a good estimate:
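Something like this, using the `n_live_tup` column of the `pg_stat_user_tables` view:

```sql
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
```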
That can also show you how many rows are dead, which is itself an interesting number to monitor.
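The same view carries a dead-row counter:

```sql
SELECT schemaname, relname, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```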
The third way is to note that the system ANALYZE command, which is executed by the autovacuum process regularly as of PostgreSQL 8.3 to update table statistics, also computes a row estimate. You can grab that one like this:
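A sketch reading the `reltuples` estimate out of `pg_class`:

```sql
SELECT n.nspname AS schemaname,
       c.relname,
       c.reltuples::bigint AS row_estimate
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'   -- ordinary tables only
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY c.reltuples DESC;
```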
Which of these queries is better to use is hard to say. Normally I make that decision based on whether there's more useful information I also want to use inside of pg_class or inside of pg_stat_user_tables. For basic counting purposes just to see how big things are in general, either should be accurate enough.
To get estimates, see Greg Smith's answer.
To get exact counts, the other answers so far are plagued with some issues, some of them serious (see below). Here's a version that's hopefully better:
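A sketch of such a function (the name `rowcount_all` is illustrative):

```sql
CREATE FUNCTION rowcount_all(schema_name text DEFAULT 'public')
  RETURNS TABLE(table_name text, cnt bigint) AS
$$
DECLARE
  tbl text;
BEGIN
  FOR tbl IN
    SELECT c.relname
    FROM pg_class c
    JOIN pg_namespace s ON c.relnamespace = s.oid
    WHERE c.relkind = 'r'
      AND s.nspname = schema_name
  LOOP
    -- %I quote-protects the identifiers, %L the literal
    -- (see the list of issues below for why that matters)
    RETURN QUERY EXECUTE format(
      'SELECT cast(%L AS text), count(*) FROM %I.%I',
      tbl, schema_name, tbl);
  END LOOP;
END
$$ LANGUAGE plpgsql;
```

Call it with, for example, `SELECT * FROM rowcount_all() ORDER BY cnt DESC;`.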
It takes a schema name as parameter, or `public` if no parameter is given. To work with a specific list of schemas, or a list coming from a query, without modifying the function, it can be called from within a query like this:
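For example (the schema names in `VALUES` are placeholders):

```sql
WITH rc(schema_name, tbl) AS (
  -- rowcount_all is set-returning; each schema fans out into one
  -- composite value per table
  SELECT s.n, rowcount_all(s.n)
  FROM (VALUES ('schema1'), ('schema2')) AS s(n)
)
SELECT schema_name, (tbl).*   -- expand the composite into its columns
FROM rc;
```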
This produces a 3-column output with the schema, the table, and the row count.
Now here are some issues in the other answers that this function avoids:
- Table and schema names shouldn't be injected into executable SQL without being quoted, either with `quote_ident` or with the more modern `format()` function with its `%I` format string. Otherwise some malicious person may name their table `tablename;DROP TABLE other_table`, which is a perfectly valid table name.
- Even without the SQL injection and funny-characters problems, table names may exist in variants differing by case. If a table is named `ABCD` and another one `abcd`, the `SELECT count(*) FROM...` must use a quoted name, otherwise it will skip `ABCD` and count `abcd` twice. The `%I` of `format()` does this automatically.
- `information_schema.tables` lists custom composite types in addition to tables, even when table_type is `'BASE TABLE'` (!). As a consequence, we can't iterate over `information_schema.tables`, otherwise we risk having `select count(*) from name_of_composite_type`, and that would fail. OTOH `pg_class where relkind='r'` should always work fine.
- The type of `count()` is `bigint`, not `int`. Tables with more than 2.15 billion rows may exist (running a `count(*)` on them is a bad idea, though).
- A permanent type need not be created for a function to return a resultset with several columns. `RETURNS TABLE(definition...)` is a better alternative.

I made a small variation to include all tables, also for non-public tables:
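A sketch of such a variation, iterating over every non-system schema; the `table_count` type and the function name echo `count_em_all()` from the answer below, and the details are a best-effort reconstruction:

```sql
-- Illustrative reconstruction: same shape as count_em_all() below,
-- but iterating every non-system schema instead of just public.
CREATE TYPE table_count AS (table_name TEXT, num_rows BIGINT);

CREATE OR REPLACE FUNCTION count_em_all() RETURNS SETOF table_count AS $$
DECLARE
    t RECORD;
    r table_count;
BEGIN
    FOR t IN
        SELECT n.nspname, c.relname
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'r'
          AND n.nspname NOT IN ('pg_catalog', 'information_schema')
        ORDER BY 1, 2
    LOOP
        r.table_name := t.nspname || '.' || t.relname;
        -- quote_ident protects mixed-case and odd table names
        EXECUTE 'SELECT count(*) FROM '
                || quote_ident(t.nspname) || '.' || quote_ident(t.relname)
            INTO r.num_rows;
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
```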
Use `select count_em_all();` to call it. Hope you find this useful. Paul
I don't remember the URL where I collected this from, but I hope it helps you:
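A sketch of that function as it's usually circulated (identifier quoting added for safety; treat the details as a reconstruction):

```sql
-- Returns rows of a permanent composite type; the earlier answer
-- explains why RETURNS TABLE would be a nicer alternative.
CREATE TYPE table_count AS (table_name TEXT, num_rows BIGINT);

CREATE OR REPLACE FUNCTION count_em_all() RETURNS SETOF table_count AS $$
DECLARE
    t RECORD;
    r table_count;
BEGIN
    FOR t IN
        SELECT c.relname
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'r'
          AND n.nspname = 'public'
        ORDER BY 1
    LOOP
        r.table_name := t.relname;
        EXECUTE 'SELECT count(*) FROM ' || quote_ident(t.relname)
            INTO r.num_rows;
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
```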
Executing `select count_em_all();` should get you the row count of all your tables.