lucifer: I haven't tested --link with the same volume, and I'm not sure of the best way to do that
lucifer
that should only be a small change, i'll prepare for that now.
alastairp
thanks
then we should start adding commands to the doc
lucifer
so let's do it without --link?
alastairp
I think that's the safer, better-understood operation
lucifer
👍
yes i'll upload all the files somewhere.
alastairp
do you want to commit your dockerfile to an LB branch?
lucifer
i was going to do a new repo because it may come in useful in the future too.
thoughts on new repo in meb?
alastairp
no problem
lucifer
👍
alastairp
I was just thinking - you're right that we can bring the old version up immediately if we leave the indexes, but if we delete them then that's less data to copy too :)
lucifer
yeah true. let me see how big the indices are.
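(For reference, one way to check the index sizes; the connection details and database name below are placeholders, not the actual LB config:)

    psql -h localhost -U listenbrainz listenbrainz_ts -c "
        SELECT indexrelname,
               pg_size_pretty(pg_relation_size(indexrelid)) AS size
        FROM pg_stat_user_indexes
        ORDER BY pg_relation_size(indexrelid) DESC;"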
alastairp
if it's more than 50gb, I think it'd be worth it
I'm going to double-check the final backup steps that we had (see how long a full no-listen no-spark backup takes)
lucifer: from the logs on the 1st. I don't understand why there was an hour between "Creating dump of public data..." and "Creating dump of timescale public data..."
I guess that's the public postgres dump? what's in there?
lucifer
so makes sense to drop indices then
alastairp: stats.
alastairp
(timescale public data is mappings)
lucifer: ah, right. we don't need that, wrong database.
lucifer
right. only need timescale data
alastairp
maybe we just go for a pg_dump of the mappings again this time
lucifer
playlists too (private timescale dumps)
alastairp
yep
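(A rough sketch of what those selective dumps could look like; the schema names and the connection URI here are guesses, not the real ListenBrainz layout:)

    # custom-format dumps so individual tables can be restored later with pg_restore
    pg_dump -Fc --schema=mapping -f mapping.dump "$TIMESCALE_DB_URI"
    pg_dump -Fc --table='playlist.*' -f playlists.dump "$TIMESCALE_DB_URI"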
lucifer: oh - we only need to recreate the indexes that have a field of type `text`
it took about 10 mins when you made the user id ones during that migration, right?
lucifer
i think so yes, less than 30 mins for sure.
alastairp
oh neat, I didn't know about INCLUDE. Could be nice to investigate
right, but that's just 1 index out of 3 or 4 which we need to do
we can add a time estimate anyway
(to the doc)
lucifer
yeah makes sense
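(A short aside on INCLUDE, since it came up: it adds non-key columns to the index leaf pages so index-only scans can serve them without heap fetches, available since PostgreSQL 11. A generic sketch with made-up table and column names:)

    psql "$TIMESCALE_DB_URI" -c "
        CREATE INDEX listened_at_user_idx
            ON listen (listened_at DESC)
            INCLUDE (user_id);"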
alastairp
a dump backup to gaga is going to take about 40 minutes. do we want to save 10 minutes by dropping indexes first, or leave them there so we can bring it up in a hurry if needed? (if we end up restoring from this backup we'd need to recreate indexes)
for saving only 5-10 mins I'm tempted to leave them there?
I started adding steps for the pg_upgrade to the doc, please sanity check
lucifer
alastairp: i'd say let's drop indexes just before pg_upgrade. so take the backup with indexes.
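(So the rough order would be the following; this is only a sketch, the binary/data paths and version numbers are placeholders, and --link is left out since we're doing a plain copy:)

    # 1. take the full backup while the indexes still exist
    # 2. drop the large text-column indexes to shrink the data to copy
    # 3. run pg_upgrade in copy mode (no --link)
    /usr/lib/postgresql/15/bin/pg_upgrade \
        --old-bindir=/usr/lib/postgresql/12/bin \
        --new-bindir=/usr/lib/postgresql/15/bin \
        --old-datadir=/var/lib/postgresql/12/main \
        --new-datadir=/var/lib/postgresql/15/main
    # 4. recreate the dropped indexes on the new cluster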
alastairp
CatQuest: can you save that question for in about 3 hours?
just finished lunch
lucifer: what do you need from me?
skelly37 has quit
CatQuest
hey I'm going to do dishes and start dinner myself so sure? :D
alastairp
zas: atj: what maintenance do you need to do on gaga, approx how long do you think, and will you need to reboot?
lucifer
alastairp: can you review the docker-server-configs PR and sanity check whether the volume is fine or we need to make a change to the script?
atj
we're going to start managing the server using ansible. this is the first time we've done it on an already-configured system, so it could throw up some issues
skelly37 joined the channel
alastairp
will we do it at the beginning of downtime or the end?
atj
however I would say 60-90 minutes at most
alastairp
(if you don't need to reboot, we can try and make use of parallelism and I'll make the db backup at the same time)
CatQuest
alastairp: you don't have to tell me that you will talk about it in 3 hours, just talk about it in 3 hours :)
but appreciate it nevertheless
atj
a reboot would be good to ensure everything comes back as expected
however I will defer to zas on that
alastairp
CatQuest: we're about to do a scary database upgrade, so I don't want to think about other things for now, but I'm sure I'll forget about you when we're done, so it'd be great if you could ask again later tonight if you want an answer for your question
atj: ok, thanks
atj
I think it's probably best you do the upgrade first
alastairp
lucifer: looking at PR
CatQuest
sure thing! good luck with the database thing!
lucifer
alastairp: volume creation at the start. bring up the new container near the end (after the PG upgrade, before the TS upgrade)
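(A minimal sketch of that sequence, assuming hypothetical volume and image names rather than what's actually in docker-server-configs:)

    # at the start of downtime: create the volume the new cluster will live on
    docker volume create timescale-new-data
    # near the end, after pg_upgrade but before the timescaledb extension upgrade:
    docker run -d --name timescale-listenbrainz \
        -v timescale-new-data:/var/lib/postgresql/data \
        timescale/timescaledb:latest-pg15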
CatQuest
actualy
good luck everyone ✨
alastairp
lucifer: is the name field needed anywhere in consul?
lucifer
oh right, in LB config.
alastairp
{{if service "timescale-listenbrainz"}}
bitmap
reosarevok: so they are all pre-ngs or predate the control_for_whitespace constraints, and I guess it makes sense if the constraints were created with NOT VALID. but I wonder how nobody ran into this or reported it before?
alastairp
I don't have a problem just editing the volume create command and the name of the volume in the docker run command
reosarevok: surely some people are using RT_STANDALONE, even I used to import normal dumps with RT_STANDALONE before we had the sample dumps, so I'm confused how I never saw that
lucifer
alastairp, what about the postgres health check thingy? we had issues with it last time iirc.
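(For context, one common pattern, a sketch rather than a record of what was done last time, is to give the container a pg_isready healthcheck and gate the next step on it; the user and container names are assumptions:)

    # e.g. start the container with:
    #   --health-cmd='pg_isready -U listenbrainz || exit 1' --health-interval=10s --health-retries=5
    # then wait for docker to report it healthy before continuing with the upgrade
    until [ "$(docker inspect -f '{{.State.Health.Status}}' timescale-listenbrainz)" = "healthy" ]; do
        sleep 5
    done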
alastairp
lucifer: oh, neat. although again - I'm not sure I like the idea of having the container name and service name different
lucifer
it's different already though. listenbrainz_timescale vs timescale-listenbrainz. but yes, fine to go with modifying the script
alastairp
listenbrainz_timescale ? where's that from?
lucifer
listenbrainz-timescale sorry.
so listenbrainz-timescale vs timescale-listenbrainz
alastairp
oh haha, I see what you mean
good catch!
ok, your fix is good then ;)
lucifer
👍
just finishing dinner, will be back in ~10 mins for upgrade.
alastairp
ok!
lucifer
oh duh, forgot DST again. we have an hour. will take my time then lol
alastairp
lucifer: no, we have 10 mins :)
lucifer
oh duh again.
alastairp
taking down test & beta containers
lucifer: is the LB container for taking down listenstore ready?