<11xmemphistox> If I host MusicBrainz in docker, can I specify where to put the actual DB? I.e., a mount point to a NAS?
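One common way to do this with a compose-based setup is a volume override pointing the database service's data directory at the NAS mount. This is only a sketch: the service name (`db`), the data path, and the mount point are assumptions that depend on the actual musicbrainz-docker configuration.

```yaml
# docker-compose.override.yml — hypothetical override; service name and
# paths must match your actual musicbrainz-docker setup
services:
  db:
    volumes:
      # host NAS mount point : postgres data directory in the container
      - /mnt/nas/musicbrainz-pgdata:/var/lib/postgresql/data
```

Note that running PostgreSQL's data directory over a network filesystem such as NFS can have reliability and performance caveats, so mounting the NAS share on the host first and testing under load is worthwhile.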
opticdelusion has quit
opticdelusion joined the channel
akshaaatt[m] has quit
mukti has quit
yellowhatpro[m] joined the channel
yellowhatpro[m]
yvanzo: bitmap hi, I have refactored the code for polling and notifying, made it a bit simpler, and fixed a couple of mistakes.
Can you have a look at `poller` and `archival` modules ?
Kladky joined the channel
Kladky has quit
Kladky joined the channel
yvanzo[m]
Hi yellowhatpro, I’ve read the updated description, it seems promising so far.
There should be more tests for the first index, for example when is_saved is set to true.
(edit_data is being handled by postgres trigger) seems to be an outdated comment.
extract_last_rows_idx_from_internet_archive_table seems to assume that there will always be a row with a from_table matching every source table (edit_data and edit_note in the current implementation). How is that guaranteed?
Aren’t these rows deleted eventually?
(Not sure that part is implemented though.)
Also, if an edit_data row contains a URL that is already in the URL queue, the edit_data row won’t be mentioned in the URL queue.
mayhem: added comments to pr. Feels so fresh to read python code after a long time xD
reosarevok[m]
yvanzo: remind me, if I wanted to start collecting the new stats and make sure they work fine during beta, should I be updating the cron container? And is that a bad idea?
yvanzo[m]
reosarevok: That’s what I’ve been doing so far: making a patch for the production branch and deploying it directly in the cron container.
reosarevok[m]
Making a patch as in? Cherry-pick them into prod?
yvanzo[m]
No, into a local branch.
based on production
reosarevok[m]
So make an mbs branch cron-stats-2024-06 off prod, then cherry-pick those, then what, log into the container and check that out? I haven't changed container branches before, just used the update scripts :)
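The branch-and-cherry-pick flow being described can be sketched as below. This is a self-contained toy demo, not the actual mbs deployment procedure: the repository, file names, and commit are fabricated stand-ins, with a `production` branch and the `cron-stats-2024-06` branch name from the chat.

```shell
# Demo: cut a deploy branch off "production" and cherry-pick a commit onto it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > app.txt
git add app.txt
git commit -qm "base"
git branch production              # pretend this is the production branch
echo "new stats code" > stats.txt
git add stats.txt
git commit -qm "Add new stats"     # the commit we want on production
sha=$(git rev-parse HEAD)
# cut a local deploy branch off production, then cherry-pick the stats commit
git checkout -qb cron-stats-2024-06 production
git cherry-pick "$sha" >/dev/null
cat stats.txt                      # -> new stats code
```

In the real setup the last step would be checking this branch out inside the cron container rather than running the update scripts.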
<yvanzo[m]> "Aren’t these rows deleted..." <- Yeah, we will eventually have a task that periodically runs over the `internet_archive_urls` table, cleans the rows already saved, and retries the URLs that could not be saved
<yvanzo[m]> "Also, if a edit_data row..." <- Ummmm, I didn't get this one, do you mean I should take care of duplicates?
yvanzo[m]
What happens if the same URL (example.com) is in two different edit notes?
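The concern here can be illustrated with a toy sketch (the tuple layout and column names are simplified assumptions, not the real schema): if the queue is keyed by URL and keeps only one source row, a URL appearing in two edit notes would lose one of them; grouping every (from_table, row_id) pair per URL keeps both.

```shell
# Toy data: the same URL appears in two different edit_note rows.
tuples=$(mktemp)
cat > "$tuples" <<'EOF'
http://example.com edit_note 17
http://example.com edit_note 42
http://other.org edit_data 3
EOF
# Group all source rows per URL instead of keeping only the first one seen.
out=$(awk '{src[$1] = (src[$1] ? src[$1] "," : "") $2 ":" $3}
           END {for (u in src) print u, src[u]}' "$tuples" | sort)
echo "$out"
```

The grouped line for example.com lists both edit notes (`edit_note:17,edit_note:42`), which is the behavior the queue would need to preserve.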