I have a broken table.
I’ve been trying to recover data from this table for a long time.
I know that block 14719010 has an invalid page header, and that this corresponds to row 614113462 in that table. This post was helpful enough to show me a way to identify the row, but alas the recovery steps it suggests don’t work, because as soon as you try to select the ctid of the row, you are informed of an invalid page header in that block.
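For the record, the probing ends up looking roughly like this (the table name is a stand-in); it only works because a plain sequential scan with no ORDER BY hands rows back in heap order:

    -- Creep up on the bad spot by hand, adjusting the OFFSET until the
    -- query falls over.  This one still comes back fine...
    SELECT ctid FROM broken_table LIMIT 1 OFFSET 614113460;
    -- ...and this one aborts with the invalid page header error,
    -- because row 614113462 sits in block 14719010.
    SELECT ctid FROM broken_table LIMIT 1 OFFSET 614113461;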
I discovered pgfsck, but alas it doesn’t speak the PostgreSQL 7.4 table format.
I want to copy the valid data out of this table into a new, working table. I know which row is broken, so I could copy out all the rows up to that limit and insert them into the other table (sketched below), then try to delete the rows up to that point. Alas, two problems:
1. The transaction ends up filling the disk rapidly because of the sheer number of rows involved. This also has the side effect of stealing all the disk space and not letting me reclaim it, because VACUUM and VACUUM FULL fail with our friendly error message. So I can do it in small chunks and delete those bits once done, but…
2. DELETE can’t be followed by a LIMIT clause, so I can’t delete the rows once they’ve been safely copied over. I can’t specify anything else either, because a WHERE clause makes the server search the entire table for matches, which means it’ll hit our friendly block and abort with our very familiar error.
I’d like to try to dump the table, and hopefully when it crashes, at least the first 614113461 rows will be dumped. However, my attempts so far reveal that pg_dump is buffering the stream, so I suspect that I won’t see any results in the dump at all.
I can’t even identify the file on disk that is broken, to try to remove or zero it or something. I’ve heard rumour of a pg_filedump command, but it’s nowhere to be found in the Debian 3.1 package of PostgreSQL.
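In principle the broken file ought to be findable from the catalogues, assuming the default 8 kB block size and the usual 1 GB segment files (table name a stand-in again), though knowing which file it is doesn’t help much without a tool to poke at it:

    -- The table's heap lives in $PGDATA/base/<database oid>/<relfilenode>,
    -- split into 1 GB segments of 131072 blocks each.  Block
    -- 14719010 / 131072 = 112, so the bad page should be in the segment
    -- named <relfilenode>.112, at byte offset (14719010 % 131072) * 8192.
    SELECT oid FROM pg_database WHERE datname = current_database();
    SELECT relfilenode FROM pg_class WHERE relname = 'broken_table';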
… time passes…
My hopes were raised when I found I didn’t have postgresql-contrib installed, but alas, no pg_filedump. packages.d.o doesn’t even know about it :(
A bit more googling for postgresql delete limit lands me at this thread. Aha! Maybe a way to shrink the old table as I clean it up!
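The usual workaround people suggest for the missing DELETE … LIMIT seems to be pushing the LIMIT into a subquery over ctid, along these lines. I haven’t verified what plan 7.4 picks for it yet, nor whether the outer scan stays away from the bad block, so it may turn out to be problem two all over again:

    -- Emulate DELETE ... LIMIT: grab the ctids of the first N rows
    -- (heap order, so all of them sit before the broken block) and
    -- delete exactly those rows.
    DELETE FROM broken_table
    WHERE ctid IN (SELECT ctid FROM broken_table LIMIT 1000000);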
The battle rages on…