Do not take it for granted that barcode standards will be stringently followed by everyone
2025-09-23 26655, 2025
PaulDOkno[m]
Hello there. Is this the right place to ask a technical question about the MusicBrainz database?
2025-09-23 26608, 2025
PaulDOkno[m]
btw Hello yvanzo (IRC) it's been a while :)
2025-09-23 26649, 2025
reosarevok[m]
This or #metabrainz:chatbrainz.org anyway. Potentially that one is best, more devs there
2025-09-23 26606, 2025
PaulDOkno[m]
Okay, I will go ahead here then. I am writing a Lidarr metadata server and I am currently initializing a local MusicBrainz database (the process looks a lot like what mbslave does). My issue is with Live Data Feed ingestion: applying a replication packet takes a while. I suspect this is because I have added some triggers of my own to keep a Meilisearch index in sync with the database. I guess the default replication process for MusicBrainz mirrors does not have this issue, but I was not able to determine how it is done. Maybe there are some existing SQL triggers on mirrors that I missed?
2025-09-23 26609, 2025
PaulDOkno[m]
Long story short, these triggers https://github.com/oknozor/metadada/blob/feat/mb-… make SQL transaction times skyrocket during Live Data Feed ingestion. I think I am missing something about the default replication process for mirrors.
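One common way to keep a search index in sync without paying for it inside the replication transaction is to make the trigger do nothing but append (table, pk, op) rows to a small queue table, and have a background worker drain the queue in batches, coalescing repeated touches of the same row into a single Meilisearch operation. A minimal sketch of the coalescing step (plain Python with a hypothetical event shape; this is an illustration, not code from the MusicBrainz or metadada repositories):

```python
from collections import OrderedDict

def coalesce_changes(events):
    """Coalesce per-row change events into one pending index
    operation per (table, pk), keeping only the net effect.
    Each event is a (table, pk, op) tuple with op in
    {"insert", "update", "delete"}. A trigger that merely
    enqueues such rows is far cheaper than one that does
    index work inside the replication transaction."""
    pending = OrderedDict()
    for table, pk, op in events:
        key = (table, pk)
        prev = pending.get(key)
        if prev == "insert" and op == "delete":
            # Row created and deleted within the same batch:
            # nothing to index at all.
            del pending[key]
        elif prev == "insert" and op == "update":
            pending[key] = "insert"  # still a brand-new row
        else:
            pending[key] = op
    return [(t, pk, op) for (t, pk), op in pending.items()]
```

Replication packets frequently touch the same row several times, so coalescing before indexing typically cuts the number of search-index writes substantially.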
2025-09-23 26642, 2025
yvanzo[m]
Hi Paul D. Okno, it’s been a while indeed, welcome back 🧙... (full message at <https://matrix.chatbrainz.org/_matrix/media/v3/download/chatbrainz.org/OxxaTMzIooqaUaxNRJbkKxGm>)
2025-09-23 26653, 2025
PaulDOkno[m]
Actually, after doing a bit of digging into Postgres internals, it's not my custom indexing triggers that are taking time, but the tag ref_count updates. I don't know if those come from the replication packets or are triggered automatically on update.
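For context, the tag ref_counts are denormalized counters, so if count-maintenance triggers are installed on a mirror they will fire again for every replicated row even though the counter updates may already arrive materialized in the packet. As an illustration of the cheaper batch-style alternative to per-row increment triggers, here is a one-pass recount (hypothetical table shape; the real schema has one *_tag table per entity type):

```python
from collections import Counter

def recompute_ref_counts(link_rows):
    """Recompute tag ref_counts in one pass over a tag link
    table instead of firing an increment/decrement trigger
    for every modified row. link_rows is an iterable of
    (entity_id, tag_id) pairs; returns {tag_id: ref_count}.
    (Hypothetical shape -- illustration only.)"""
    return dict(Counter(tag_id for _, tag_id in link_rows))
```

If the triggers turn out to be redundant during packet apply, Postgres also lets a session skip ordinary (non-ALWAYS) triggers with `SET session_replication_role = replica`, which is the mechanism its own logical replication workers use; whether that is appropriate here depends on whether the ref_count changes really are contained in the packets.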
2025-09-23 26658, 2025
PaulDOkno[m] uploaded an image: (180KiB) < https://matrix.chatbrainz.org/_matrix/media/v3/download/hoohoot.org/YRjLCYwntZVTgzgNFdMmspbG/image.png >
2025-09-23 26651, 2025
PaulDOkno[m]
A packet takes between 1 and 5 minutes to ingest. Maybe I shouldn't be worried. How does that compare to a typical mirror setup?
2025-09-23 26618, 2025
PaulDOkno[m]
Also, while developing this I wrote an mbslave clone, but with a fancy progress bar; the whole MusicBrainz database setup takes about 20 to 40 minutes (on just a subset of the database). For now it's included in a bigger project, but maybe it would be worth turning it into a standalone binary for other people to use.... (full message at <https://matrix.chatbrainz.org/_matrix/media/v3/download/chatbrainz.org/ajqQQsQFWchxFPBPiqVxZISU>)
2025-09-23 26639, 2025
reosarevok[m]
Each hourly packet taking at most 5 minutes doesn't seem like the worst at least :)
2025-09-23 26647, 2025
reosarevok[m]
An mbslave with automatic schema updates would probably make some people happy, given the current issues with people's mbslave databases failing because of schema changes
2025-09-23 26612, 2025
PaulDOkno[m]
Actually, my first estimate was not accurate: some packets take between 1 and 5 minutes, but others can take as long as 1h30 :(
2025-09-23 26613, 2025
PaulDOkno[m]
so I think I am missing something. Currently I am executing all pending data in a single SQL transaction, streaming directly from the tar archive (no write to dbmirror2.pending_data/pending_key), and just updating current_replication_sequence and current_replication_schema after each packet has been applied
2025-09-23 26612, 2025
PaulDOkno[m]
Does the official replication process involve grouping by xid and removing applied rows from the dbmirror2 pending tables, to keep track of what remains in case of failure?
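For comparison, one way to structure that: order the pending rows, commit one database transaction per source xid, and only then clear that xid's rows from the pending table, so a crash mid-packet leaves the remaining xids queued for the next run. This is a sketch of the idea in plain Python, not the actual MusicBrainz apply code (and it orders xids numerically for simplicity; the real pending tables carry a sequence column that defines commit order):

```python
from itertools import groupby

def apply_packet(pending_rows, run_transaction):
    """Apply dbmirror2-style pending rows grouped by the
    originating transaction id (xid), one database
    transaction per xid. pending_rows are (xid, seq, change)
    tuples; run_transaction applies one batch atomically.
    Rows for an xid that fails stay pending, so a crashed
    run can resume where it stopped."""
    applied = []
    rows = sorted(pending_rows, key=lambda r: (r[0], r[1]))
    for xid, group in groupby(rows, key=lambda r: r[0]):
        batch = list(group)
        run_transaction(batch)  # commit per source transaction
        applied.append(xid)     # now safe to delete these rows
                                # from the pending table
    return applied
```

Per-xid commits keep each transaction small, which bounds trigger work and lock hold time, and the delete-after-commit bookkeeping is what makes the process resumable, as opposed to one giant transaction that has to be redone in full on any failure.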