yesterday's went until 19:23 so we'll surely have to stop them
yvanzo
bitmap: I rearranged the post-release sequence a bit to quickly produce a sample data dump first, since it can help with debugging Docker Compose if something goes wrong during the last tests.
bitmap
ok, makes sense
yvanzo
Maybe we can even start the live data feed before starting a full data dump?
reosarevok
yvanzo: I tweeted, wanna post elsewhere since it's probably faster for you, given you already have access to them? :)
marcelveldt joined the channel
bitmap
yvanzo: we could do it in that order too, yes (it just requires waiting around for the packet to be generated)
reosarevok
bitmap: I'm going to stop cron for sitemaps then
No need to stop the process in any specific way in advance, is there?
bitmap
oh I already stopped the cron there, but I mean we can forcefully stop the container later
reosarevok
Oh
bitmap
stopping cron doesn't affect existing jobs that are running
reosarevok
Sure, ok, I meant stop and remove the container :)
Sorry
I guess there's no reason to wait if we're reasonably sure it won't finish on time?
bitmap
I just killed them
reosarevok
Thanks
bitmap
(you can run the command to stop and remove the container though)
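(For reference, that stop-and-remove step would look something like `docker stop <container>` followed by `docker rm <container>`, or `docker rm -f <container>` to do both at once; the actual container name for the sitemaps job isn't shown in the log.)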
reosarevok
Done
yvanzo: are you waiting for a specific time for "Check Weblate repository status" (again)? :)
yvanzo
bitmap: I reordered the steps and made it clear that we should keep going.
bitmap
👍
yvanzo
reosarevok: Weblate just needed time to catch up with changes: It had “33 missing in the push branch”, now 0.
bitmap
we can start the sample dump concurrently with the MB production cron
reosarevok
lucifer: it's soon time "to stop LB cron jobs that depend on the MB DB"
yvanzo
Ok, wasn't sure if it might be delaying the first packet by a while.
lucifer
reosarevok: sure can do it right now
bitmap
reosarevok: had you started the pg_dump of musicbrainz_db already?
reosarevok
Nope
But I can
bitmap
oh we probably should have had that first in the list
reosarevok
Oops.
started it now
How long should this take?
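(For reference, a full-database dump like this one would be along the lines of `pg_dump -Fc -d musicbrainz_db -f musicbrainz_db.dump`, i.e. a compressed custom-format archive; the exact flags, connection options, and output path used here aren't shown in the log.)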
yvanzo
reosarevok: It seems that I’m not receiving any confirmation mail from Twitter, so I cannot see what you posted and use it for other networks.
reosarevok
I just quoted the previous tweet with "Downtime in around an hour now. Finish your editing and wish us a quick release! - reo"
Not really around an hour anymore tho
bitmap
the dump I made on the 7th is 27GB, so if it dumps at least 1GB per minute, hopefully under 30 minutes :)
it's at 2GB after 2 minutes...
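(At the observed ~1 GB/min, a 27 GB dump works out to roughly 27 GB ÷ 1 GB/min ≈ 27 minutes, assuming the rate holds and the new dump is about the same size as the one from the 7th.)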
(I moved it to the top of the list)
reosarevok
Thanks
yvanzo
reosarevok: Done.
bitmap, reosarevok: I also backed up MB web container logs just in case.
bitmap
thanks, where are the backups?
yvanzo
In my home directory.
bitmap
👍
the dump won't finish by 17 utc
yvanzo
I’m stopping SIR instances on rakim.
bitmap
should we skip it or wait? this particular step was before we had barman and zfs snapshots
reosarevok
bitmap: which dump?
the aretha one I started a while ago, on edit_data now
bitmap
the pg_dump of musicbrainz_db you started at 16:27
yvanzo
pg_dump is very easy to use, but if you are comfortable enough to restore with other methods, feel free to skip.
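(For a custom-format archive, restoring would be roughly `pg_restore -d musicbrainz_db -j 4 musicbrainz_db.dump`, where `-j` runs the restore in parallel; the target database name and job count here are assumptions.)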
reosarevok
Oh, I thought you said 30 min or so?
How long do you think it'll take? We can wait 5 min extra, but
bitmap
probably around 45 minutes in actuality, that was an early estimate
yvanzo
It’s also okay to wait a bit more if needed.
reosarevok
I think that's fine tbh
We can stop it if it seems like it'll take a lot longer than that by 17utc
bitmap
but could be up to an hour, not 100% sure it will stay at the same rate
if something goes horribly wrong I assume we'd start with the zfs snapshot since it's taken right before the upgrade
yvanzo
That gives us a bit more time to gather restore instructions for those other methods, then.
bitmap
it's almost half done, so, probably fine to wait a little
yvanzo
👍
reosarevok
Finished edit_data, at least
atj
did you take the snapshot or have you not reached that point?
yvanzo
not reached that point
atj
if something was to go wrong and you wanted to revert, it'd simply be a case of running `zfs rollback rpool/ROOT/ubuntu/srv/postgresql@preupgrade` and `zfs rollback rpool/ROOT/ubuntu/srv/postgresql/wal-12@preupgrade`
should be pretty much instantaneous and you'd be exactly where you were before you started
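(The matching snapshot step, taken right before the upgrade, would be `zfs snapshot rpool/ROOT/ubuntu/srv/postgresql@preupgrade` and `zfs snapshot rpool/ROOT/ubuntu/srv/postgresql/wal-12@preupgrade`; note that `zfs rollback` discards everything written to the dataset after the snapshot.)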
mayhem is home and idle
mayhem
if anyone needs any help, shout!
reosarevok puts on his robe and wizard hat
yvanzo
hlep!
reosarevok
oh, not what you were going for, mayhem, sorry
Sophist_UK
atj: Ah, the wonders of ZFS snapshots. :-)
reosarevok
The dump is on editor, hopefully recording/release won't take forever
bitmap
12:15 is the new downtime ETA
sorry, 17:15
yvanzo
12:15 am ;)
bitmap
haha
atj
👀
yvanzo
Thanks atj, I’ve added these two commands to the roadmap docs.
But it still won't be finished by the time we release
Actually, we have a second full mirror on wolf to test upgrading. :)
MatthewGlubb joined the channel
reosarevok
hi MatthewGlubb! We didn't break anything yet surely?
mayhem
lol
MatthewGlubb
Haha! Hi reosarevok! No. Just checking in to see that all systems are nominal for launch. I haven't looked, but I wondered if mb-solr related changes have been tagged in git to make it easy for me to know what to upgrade?
(we are already running Solr 9, so should be simple for us)
reosarevok
We're not releasing all the search stuff today in the end since it's a bit behind in testing but yvanzo can give you all the details :)
bitmap
reosarevok: finished?
yvanzo
MatthewGlubb: 👍 even though we won’t officially support it until we have deployed it in production with dumps.