To: Dirk Lutzebaeck <lutzeb@aeccom.com>
cc: pgsql-bugs@postgreSQL.org
Subject: Re: [BUGS] vacuum analyze corrupts db with larger tuples (< 8k) 
In-reply-to: <14449.58495.638146.353607@ampato.aeccom.com> 
References: <14449.58495.638146.353607@ampato.aeccom.com>
Comments: In-reply-to Dirk Lutzebaeck <lutzeb@aeccom.com>
	message dated "Tue, 04 Jan 2000 13:15:59 +0100"
Date: Tue, 04 Jan 2000 11:12:38 -0500
Message-ID: <13309.947002358@sss.pgh.pa.us>
From: Tom Lane <tgl@sss.pgh.pa.us>
Sender: owner-bugs@postgreSQL.org
Precedence: bulk

Dirk Lutzebaeck <lutzeb@aeccom.com> writes:
> ok, here is what I have found out on 6.5.3, Linux 2.2.10:
> [ make table with a bunch of almost-5K varchar fields ]
> # vacuumdb --analyze test
> ERROR:  Tuple is too big: size 9604
> vacuumdb: database vacuum failed on test.
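
Something along these lines reproduces it here (the table and column
names are made up, and lpad() is just one convenient way to build a
long string; any varchar values just shy of 5K will do):

	CREATE TABLE test (val varchar(5000));
	-- a couple of rows whose values are each a bit under 5K characters
	INSERT INTO test VALUES (lpad('x', 4800, 'x'));
	INSERT INTO test VALUES (lpad('y', 4800, 'y'));
	-- then, as above:  vacuumdb --analyze test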

Ohhh ... I know what's going on.  The oversize tuple is the one that
VACUUM is attempting to store in pg_statistic, containing the min and
max values for your varchar column.  In this example, both the min and
max are just shy of 5K characters, so the pg_statistic tuple is too
big to fit on a page.
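
To put rough numbers on it (assuming the default 8K block size):

	~4800 bytes (min) + ~4800 bytes (max) + tuple overhead  =  9604 bytes
	9604 > 8192-byte block (less page/tuple headers), so it can never fit.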

I had already patched this in current sources, by the expedient of not
trying to store a pg_statistic tuple at all if it's too big.  (Then
you don't get stats for that particular column, but the stats probably
wouldn't be useful anyway.)
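
The visible consequence is just that the column has no pg_statistic row
afterward.  A query like this shows which attributes of the table did
get stats ('test' stands in for your real table name):

	-- attribute numbers of 'test' that have pg_statistic entries;
	-- the too-wide column's attnum will simply be missing
	SELECT s.staattnum
	FROM pg_class c, pg_statistic s
	WHERE c.oid = s.starelid AND c.relname = 'test';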

I suppose I should make up a back-patch for REL6_5 with this fix.

			regards, tom lane
