
Commit 64bf53d

Revisit cosmetics of "For inplace update, send nontransactional invalidations."

This removes a never-used CacheInvalidateHeapTupleInplace() parameter. It
adds README content about inplace update visibility in logical decoding. It
rewrites other comments. Back-patch to v18, where commit 243e9b4 first
appeared.

Since this removes a CacheInvalidateHeapTupleInplace() parameter, expect a
v18 ".abi-compliance-history" edit to follow. PGXN contains no calls to that
function.

Reported-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reported-by: Ilyasov Ian <ianilyasov@outlook.com>
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Surya Poondla <s_poondla@apple.com>
Discussion: https://postgr.es/m/CA+renyU+LGLvCqS0=fHit-N1J-2=2_mPK97AQxvcfKm+F-DxJA@mail.gmail.com
Backpatch-through: 18

1 parent 0839fbe commit 64bf53d

File tree

5 files changed: +57 -33 lines changed

src/backend/access/heap/README.tuplock

Lines changed: 32 additions & 0 deletions

@@ -199,3 +199,35 @@ under a reader holding a pin. A reader of a heap_fetch() result tuple may
 witness a torn read. Current inplace-updated fields are aligned and are no
 wider than four bytes, and current readers don't need consistency across
 fields. Hence, they get by with just fetching each field once.
+
+During logical decoding, caches reflect an inplace update no later than the
+next XLOG_XACT_INVALIDATIONS. That record witnesses the end of a command.
+Tuples of its cmin are then visible to decoding, as are inplace updates of any
+lower LSN. Inplace updates of a higher LSN may also be visible, even if those
+updates would have been invisible to a non-historic snapshot matching
+decoding's historic snapshot. (In other words, decoding may see inplace
+updates that were not visible to a similar snapshot taken during original
+transaction processing.) That's a consequence of inplace update violating
+MVCC: there are no snapshot-specific versions of inplace-updated values. This
+all makes it hard to reason about inplace-updated column reads during logical
+decoding, but the behavior does suffice for relhasindex. A relhasindex=t in
+CREATE INDEX becomes visible no later than the new pg_index row. While it may
+be visible earlier, that's harmless. Finding zero indexes despite
+relhasindex=t is normal in more cases than this, e.g. after DROP INDEX.
+Example of a case that meaningfully reacts to the inplace inval:
+
+	CREATE TABLE cat (c int) WITH (user_catalog_table = true);
+	CREATE TABLE normal (d int);
+	...
+	CREATE INDEX ON cat (c)\; INSERT INTO normal VALUES (1);
+
+If the output plugin reads "cat" during decoding of the INSERT, it's fair to
+want that read to see relhasindex=t and use the new index.
+
+An alternative would be to have decoding of XLOG_HEAP_INPLACE immediately
+execute its invals. That would behave more like invals during original
+transaction processing. It would remove the decoding-specific delay in e.g. a
+decoding plugin witnessing a relfrozenxid change. However, a good use case
+for that is unlikely, since the plugin would still witness relfrozenxid
+changes prematurely. Hence, inplace update takes the trivial approach of
+delegating to XLOG_XACT_INVALIDATIONS.

src/backend/access/heap/heapam.c

Lines changed: 14 additions & 16 deletions

@@ -6396,15 +6396,17 @@ heap_inplace_lock(Relation relation,
 	Assert(BufferIsValid(buffer));
 
 	/*
-	 * Construct shared cache inval if necessary. Because we pass a tuple
-	 * version without our own inplace changes or inplace changes other
-	 * sessions complete while we wait for locks, inplace update mustn't
-	 * change catcache lookup keys. But we aren't bothering with index
-	 * updates either, so that's true a fortiori. After LockBuffer(), it
-	 * would be too late, because this might reach a
-	 * CatalogCacheInitializeCache() that locks "buffer".
+	 * Register shared cache invals if necessary. Other sessions may finish
+	 * inplace updates of this tuple between this step and LockTuple(). Since
+	 * inplace updates don't change cache keys, that's harmless.
+	 *
+	 * While it's tempting to register invals only after confirming we can
+	 * return true, the following obstacle precludes reordering steps that
+	 * way. Registering invals might reach a CatalogCacheInitializeCache()
+	 * that locks "buffer". That would hang indefinitely if running after our
+	 * own LockBuffer(). Hence, we must register invals before LockBuffer().
 	 */
-	CacheInvalidateHeapTupleInplace(relation, oldtup_ptr, NULL);
+	CacheInvalidateHeapTupleInplace(relation, oldtup_ptr);
 
 	LockTuple(relation, &oldtup.t_self, InplaceUpdateTupleLock);
 	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);

@@ -6642,10 +6644,6 @@ heap_inplace_update_and_unlock(Relation relation,
 	/*
 	 * Send invalidations to shared queue. SearchSysCacheLocked1() assumes we
 	 * do this before UnlockTuple().
-	 *
-	 * If we're mutating a tuple visible only to this transaction, there's an
-	 * equivalent transactional inval from the action that created the tuple,
-	 * and this inval is superfluous.
 	 */
 	AtInplace_Inval();
 

@@ -6656,10 +6654,10 @@ heap_inplace_update_and_unlock(Relation relation,
 	AcceptInvalidationMessages(); /* local processing of just-sent inval */
 
 	/*
-	 * Queue a transactional inval. The immediate invalidation we just sent
-	 * is the only one known to be necessary. To reduce risk from the
-	 * transition to immediate invalidation, continue sending a transactional
-	 * invalidation like we've long done. Third-party code might rely on it.
+	 * Queue a transactional inval, for logical decoding and for third-party
+	 * code that might have been relying on it since long before inplace
+	 * update adopted immediate invalidation. See README.tuplock section
+	 * "Reading inplace-updated columns" for logical decoding details.
 	 */
 	if (!IsBootstrapProcessingMode())
 		CacheInvalidateHeapTuple(relation, tuple, NULL);

src/backend/replication/logical/decode.c

Lines changed: 3 additions & 12 deletions

@@ -521,18 +521,9 @@ heap_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 			/*
 			 * Inplace updates are only ever performed on catalog tuples and
-			 * can, per definition, not change tuple visibility. Inplace
-			 * updates don't affect storage or interpretation of table rows,
-			 * so they don't affect logicalrep_write_tuple() outcomes. Hence,
-			 * we don't process invalidations from the original operation. If
-			 * inplace updates did affect those things, invalidations wouldn't
-			 * make it work, since there are no snapshot-specific versions of
-			 * inplace-updated values. Since we also don't decode catalog
-			 * tuples, we're not interested in the record's contents.
-			 *
-			 * WAL contains likely-unnecessary commit-time invals from the
-			 * CacheInvalidateHeapTuple() call in
-			 * heap_inplace_update_and_unlock(). Excess invalidation is safe.
+			 * can, per definition, not change tuple visibility. Since we
+			 * also don't decode catalog tuples, we're not interested in the
+			 * record's contents.
 			 */
 			break;
 
src/backend/utils/cache/inval.c

Lines changed: 7 additions & 3 deletions

@@ -1583,13 +1583,17 @@ CacheInvalidateHeapTuple(Relation relation,
  * implied.
  *
  * Like CacheInvalidateHeapTuple(), but for inplace updates.
+ *
+ * Just before and just after the inplace update, the tuple's cache keys must
+ * match those in key_equivalent_tuple. Cache keys consist of catcache lookup
+ * key columns and columns referencing pg_class.oid values,
+ * e.g. pg_constraint.conrelid, which would trigger relcache inval.
  */
 void
 CacheInvalidateHeapTupleInplace(Relation relation,
-								HeapTuple tuple,
-								HeapTuple newtuple)
+								HeapTuple key_equivalent_tuple)
 {
-	CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
+	CacheInvalidateHeapTupleCommon(relation, key_equivalent_tuple, NULL,
 								   PrepareInplaceInvalidationState);
 }
 
src/include/utils/inval.h

Lines changed: 1 addition & 2 deletions

@@ -61,8 +61,7 @@ extern void CacheInvalidateHeapTuple(Relation relation,
 									 HeapTuple tuple,
 									 HeapTuple newtuple);
 extern void CacheInvalidateHeapTupleInplace(Relation relation,
-											HeapTuple tuple,
-											HeapTuple newtuple);
+											HeapTuple key_equivalent_tuple);
 
 extern void CacheInvalidateCatalog(Oid catalogId);
 