
Commit bae8ca8

Revisit cosmetics of "For inplace update, send nontransactional invalidations."
This removes a never-used CacheInvalidateHeapTupleInplace() parameter.
It adds README content about inplace update visibility in logical
decoding.  It rewrites other comments.  Back-patch to v18, where commit
243e9b4 first appeared.  Since this removes a
CacheInvalidateHeapTupleInplace() parameter, expect a v18
".abi-compliance-history" edit to follow.  PGXN contains no calls to
that function.

Reported-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reported-by: Ilyasov Ian <ianilyasov@outlook.com>
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Surya Poondla <s_poondla@apple.com>
Discussion: https://postgr.es/m/CA+renyU+LGLvCqS0=fHit-N1J-2=2_mPK97AQxvcfKm+F-DxJA@mail.gmail.com
Backpatch-through: 18
1 parent 3fbad03 commit bae8ca8

File tree

5 files changed (+57, -33 lines)


src/backend/access/heap/README.tuplock

Lines changed: 32 additions & 0 deletions

@@ -199,3 +199,35 @@ under a reader holding a pin.  A reader of a heap_fetch() result tuple may
 witness a torn read.  Current inplace-updated fields are aligned and are no
 wider than four bytes, and current readers don't need consistency across
 fields.  Hence, they get by with just fetching each field once.
+
+During logical decoding, caches reflect an inplace update no later than the
+next XLOG_XACT_INVALIDATIONS.  That record witnesses the end of a command.
+Tuples of its cmin are then visible to decoding, as are inplace updates of any
+lower LSN.  Inplace updates of a higher LSN may also be visible, even if those
+updates would have been invisible to a non-historic snapshot matching
+decoding's historic snapshot.  (In other words, decoding may see inplace
+updates that were not visible to a similar snapshot taken during original
+transaction processing.)  That's a consequence of inplace update violating
+MVCC: there are no snapshot-specific versions of inplace-updated values.  This
+all makes it hard to reason about inplace-updated column reads during logical
+decoding, but the behavior does suffice for relhasindex.  A relhasindex=t in
+CREATE INDEX becomes visible no later than the new pg_index row.  While it may
+be visible earlier, that's harmless.  Finding zero indexes despite
+relhasindex=t is normal in more cases than this, e.g. after DROP INDEX.
+Example of a case that meaningfully reacts to the inplace inval:
+
+	CREATE TABLE cat (c int) WITH (user_catalog_table = true);
+	CREATE TABLE normal (d int);
+	...
+	CREATE INDEX ON cat (c)\; INSERT INTO normal VALUES (1);
+
+If the output plugin reads "cat" during decoding of the INSERT, it's fair to
+want that read to see relhasindex=t and use the new index.
+
+An alternative would be to have decoding of XLOG_HEAP_INPLACE immediately
+execute its invals.  That would behave more like invals during original
+transaction processing.  It would remove the decoding-specific delay in e.g. a
+decoding plugin witnessing a relfrozenxid change.  However, a good use case
+for that is unlikely, since the plugin would still witness relfrozenxid
+changes prematurely.  Hence, inplace update takes the trivial approach of
+delegating to XLOG_XACT_INVALIDATIONS.
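The delayed-visibility rule described above can be sketched in miniature. The following is a toy Python model, not PostgreSQL code: the record names mirror the WAL record types, but the heap/cache structures and all other names are illustrative only. It shows a decoder whose cached copy of a catalog row stays stale until it replays an XLOG_XACT_INVALIDATIONS record, after which a re-read necessarily sees the inplace-updated value (because the heap itself is non-MVCC).

```python
# Toy model of inplace-update visibility during logical decoding.
# Not PostgreSQL code: record names mirror WAL record types, but the
# heap and cache structures here are purely illustrative.

class ToyDecoder:
    def __init__(self):
        # The heap is shared and non-MVCC: an inplace update changes it
        # for every reader immediately.
        self.heap = {"pg_class:cat": {"relhasindex": False}}
        self.cache = {}           # decoder-local catalog cache
        self.pending_invals = []  # invals queued until end-of-command

    def read(self, key):
        # A cache hit returns possibly-stale data; a miss re-reads the heap.
        if key not in self.cache:
            self.cache[key] = dict(self.heap[key])
        return self.cache[key]

    def replay(self, record):
        kind, *payload = record
        if kind == "XLOG_HEAP_INPLACE":
            # The heap changes at once, but decoding does not process
            # invals from this record; they wait for end-of-command.
            key, field, value = payload
            self.heap[key][field] = value
            self.pending_invals.append(key)
        elif kind == "XLOG_XACT_INVALIDATIONS":
            # End of command: flush cache entries so the next read
            # re-fetches the (already inplace-updated) heap value.
            for key in self.pending_invals:
                self.cache.pop(key, None)
            self.pending_invals.clear()

dec = ToyDecoder()
assert dec.read("pg_class:cat")["relhasindex"] is False  # now cached
dec.replay(("XLOG_HEAP_INPLACE", "pg_class:cat", "relhasindex", True))
# The cached value is still visible: no inval has been delivered yet.
assert dec.read("pg_class:cat")["relhasindex"] is False
dec.replay(("XLOG_XACT_INVALIDATIONS",))
# No later than this record, the cache reflects the inplace update.
assert dec.read("pg_class:cat")["relhasindex"] is True
```

This also models why a higher-LSN inplace update "may also be visible" early: any cache miss reads the latest heap value, whatever its LSN, since there is no snapshot-specific version to fall back on.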

src/backend/access/heap/heapam.c

Lines changed: 14 additions & 16 deletions

@@ -6360,15 +6360,17 @@ heap_inplace_lock(Relation relation,
 	Assert(BufferIsValid(buffer));
 
 	/*
-	 * Construct shared cache inval if necessary.  Because we pass a tuple
-	 * version without our own inplace changes or inplace changes other
-	 * sessions complete while we wait for locks, inplace update mustn't
-	 * change catcache lookup keys.  But we aren't bothering with index
-	 * updates either, so that's true a fortiori.  After LockBuffer(), it
-	 * would be too late, because this might reach a
-	 * CatalogCacheInitializeCache() that locks "buffer".
+	 * Register shared cache invals if necessary.  Other sessions may finish
+	 * inplace updates of this tuple between this step and LockTuple().  Since
+	 * inplace updates don't change cache keys, that's harmless.
+	 *
+	 * While it's tempting to register invals only after confirming we can
+	 * return true, the following obstacle precludes reordering steps that
+	 * way.  Registering invals might reach a CatalogCacheInitializeCache()
+	 * that locks "buffer".  That would hang indefinitely if running after our
+	 * own LockBuffer().  Hence, we must register invals before LockBuffer().
 	 */
-	CacheInvalidateHeapTupleInplace(relation, oldtup_ptr, NULL);
+	CacheInvalidateHeapTupleInplace(relation, oldtup_ptr);
 
 	LockTuple(relation, &oldtup.t_self, InplaceUpdateTupleLock);
 	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);

@@ -6606,10 +6608,6 @@ heap_inplace_update_and_unlock(Relation relation,
 	/*
 	 * Send invalidations to shared queue.  SearchSysCacheLocked1() assumes we
 	 * do this before UnlockTuple().
-	 *
-	 * If we're mutating a tuple visible only to this transaction, there's an
-	 * equivalent transactional inval from the action that created the tuple,
-	 * and this inval is superfluous.
 	 */
 	AtInplace_Inval();

@@ -6620,10 +6618,10 @@ heap_inplace_update_and_unlock(Relation relation,
 	AcceptInvalidationMessages();	/* local processing of just-sent inval */
 
 	/*
-	 * Queue a transactional inval.  The immediate invalidation we just sent
-	 * is the only one known to be necessary.  To reduce risk from the
-	 * transition to immediate invalidation, continue sending a transactional
-	 * invalidation like we've long done.  Third-party code might rely on it.
+	 * Queue a transactional inval, for logical decoding and for third-party
+	 * code that might have been relying on it since long before inplace
+	 * update adopted immediate invalidation.  See README.tuplock section
+	 * "Reading inplace-updated columns" for logical decoding details.
 	 */
 	if (!IsBootstrapProcessingMode())
 		CacheInvalidateHeapTuple(relation, tuple, NULL);
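The ordering constraint in the first hunk (register invals before LockBuffer()) can be modeled with a non-reentrant lock. This is a toy Python sketch, not PostgreSQL code: `register_invals` and `buffer_lock` stand in for the real CacheInvalidateHeapTupleInplace()/CatalogCacheInitializeCache() path and the buffer LWLock, and the timeout substitutes for what would be an indefinite hang in the real server.

```python
import threading

# Toy model (not PostgreSQL code) of the ordering constraint in
# heap_inplace_lock(): registering invals may itself need the buffer
# lock (via cache initialization), so it must run before we take the
# buffer lock ourselves.  threading.Lock, like the buffer lock it
# stands in for, is not reentrant by the same thread.

buffer_lock = threading.Lock()

def register_invals():
    # May internally need the buffer lock, as catalog cache
    # initialization can in the real code.
    got = buffer_lock.acquire(timeout=0.1)
    if not got:
        return False  # the real code would hang indefinitely here
    buffer_lock.release()
    return True

# Correct order: register invals first, then lock the buffer.
assert register_invals() is True
buffer_lock.acquire()

# Wrong order: with the buffer lock already held, registration cannot
# reacquire it -- a self-deadlock, surfaced here as a timeout.
assert register_invals() is False
buffer_lock.release()
```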

src/backend/replication/logical/decode.c

Lines changed: 3 additions & 12 deletions

@@ -521,18 +521,9 @@ heap_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 			/*
 			 * Inplace updates are only ever performed on catalog tuples and
-			 * can, per definition, not change tuple visibility.  Inplace
-			 * updates don't affect storage or interpretation of table rows,
-			 * so they don't affect logicalrep_write_tuple() outcomes.  Hence,
-			 * we don't process invalidations from the original operation.  If
-			 * inplace updates did affect those things, invalidations wouldn't
-			 * make it work, since there are no snapshot-specific versions of
-			 * inplace-updated values.  Since we also don't decode catalog
-			 * tuples, we're not interested in the record's contents.
-			 *
-			 * WAL contains likely-unnecessary commit-time invals from the
-			 * CacheInvalidateHeapTuple() call in
-			 * heap_inplace_update_and_unlock().  Excess invalidation is safe.
+			 * can, per definition, not change tuple visibility.  Since we
+			 * also don't decode catalog tuples, we're not interested in the
+			 * record's contents.
 			 */
 			break;

src/backend/utils/cache/inval.c

Lines changed: 7 additions & 3 deletions

@@ -1583,13 +1583,17 @@ CacheInvalidateHeapTuple(Relation relation,
  * implied.
  *
  * Like CacheInvalidateHeapTuple(), but for inplace updates.
+ *
+ * Just before and just after the inplace update, the tuple's cache keys must
+ * match those in key_equivalent_tuple.  Cache keys consist of catcache lookup
+ * key columns and columns referencing pg_class.oid values,
+ * e.g. pg_constraint.conrelid, which would trigger relcache inval.
  */
 void
 CacheInvalidateHeapTupleInplace(Relation relation,
-								HeapTuple tuple,
-								HeapTuple newtuple)
+								HeapTuple key_equivalent_tuple)
 {
-	CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
+	CacheInvalidateHeapTupleCommon(relation, key_equivalent_tuple, NULL,
 								   PrepareInplaceInvalidationState);
 }

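The key_equivalent_tuple contract can be illustrated with a toy model. This is not PostgreSQL code: the dictionaries and `inval_message` function below are hypothetical stand-ins for catalog tuples and sinval messages. The point is that an invalidation message is derived from cache-key columns only, so any tuple version whose keys match, before or after the inplace update, addresses the same cache entry.

```python
# Toy model (not PostgreSQL code) of the key_equivalent_tuple contract:
# an invalidation message is built from a tuple's cache-key columns
# only, so any tuple version with equal keys -- just before or just
# after the inplace update -- addresses the same cache entry.

CACHE_KEY_COLUMNS = ("oid",)  # stand-in for the catcache lookup key

def inval_message(tup):
    # Derive the message purely from the cache-key columns.
    return tuple(tup[c] for c in CACHE_KEY_COLUMNS)

catcache = {}

old_tup = {"oid": 16384, "relhasindex": False}
new_tup = {"oid": 16384, "relhasindex": True}  # inplace: keys unchanged

catcache[inval_message(old_tup)] = old_tup

# Building the inval from the pre-update tuple still evicts the entry
# that the post-update tuple maps to, because the keys are equivalent.
assert inval_message(old_tup) == inval_message(new_tup)
catcache.pop(inval_message(old_tup), None)
assert inval_message(new_tup) not in catcache
```

If an inplace update could change a key column, the message built from the old tuple would miss the entry the new tuple maps to, which is why the comment makes key equivalence a precondition.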
src/include/utils/inval.h

Lines changed: 1 addition & 2 deletions

@@ -43,8 +43,7 @@ extern void CacheInvalidateHeapTuple(Relation relation,
 									 HeapTuple tuple,
 									 HeapTuple newtuple);
 extern void CacheInvalidateHeapTupleInplace(Relation relation,
-											HeapTuple tuple,
-											HeapTuple newtuple);
+											HeapTuple key_equivalent_tuple);
 
 extern void CacheInvalidateCatalog(Oid catalogId);

