_lucifer: I think the Spotify player is broken for everyone on LB right now, so if I manage to fix it by the end of the day here I'd love to see it in the release. That being said, we can also do another release in the coming days, I don't want to hold you back
Let me have a quick look at the open PRs
_lucifer
Mr_Monkey: oh! what's the error with the spotify player? wondering if it's related to the spotify reader fixes we deployed some time ago.
Mr_Monkey
I honestly can't tell, but it could just as well be front-end only.
In any case, trying to play a track gives 404 errors (searching for a track works fine), and after a bunch of retries I get a 429 rate limit error.
_lucifer
okay, let me see if i can get the sentry frontend PR up to speed. that should probably aid in debugging the issue.
That's all the front-end PRs I see that are ready at the moment.
BrainzGit
[listenbrainz-server] alastair opened pull request #1389 (master…listen-null-validation): When writing a batch of listens, skip any which have null characters https://github.com/metabrainz/listenbrainz-serv...
alastairp
ruaok: ^
ruaok
alastairp: do we have a ticket to remind ourselves to remove this code when the front end checking has been improved?
alastairp
I'll add one for the checking, but I'm not sure we should remove this... in case something manages to get into the queue in the future
_lucifer
regarding that, is it possible to avoid checking every listen? because we wanted to remove the extra serialising/deserialising steps, and this feels like adding one.
but i have not profiled anything so i might be wrong about the performance anyways.
ruaok
it is adding an extra step. but this is a temporary measure so we don't lose any of the queued listens.
there are 200k stuck right now.
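The temporary filtering measure being discussed (skipping listens with null characters when writing a batch, per PR #1389) could be sketched roughly like this. This is a hedged illustration, not the actual PR code; the listen structure and the function name are assumptions:

```python
import json

def strip_null_listens(listens):
    """Return only the listens whose serialized form is free of the NUL
    character (U+0000); PostgreSQL rejects it inside jsonb values."""
    # json.dumps escapes a null byte as the six characters \u0000,
    # so scanning the serialized string for that escape sequence
    # catches a NUL anywhere in the nested structure
    return [l for l in listens if "\\u0000" not in json.dumps(l)]

# hypothetical batch: the second listen carries a null byte and is skipped
batch = [
    {"track_metadata": {"track_name": "ok"}},
    {"track_metadata": {"track_name": "bad\x00name"}},
]
clean = strip_null_listens(batch)
print(len(clean))  # 1
```

As noted in the discussion, this does add an extra serialization pass per listen, which is why it is framed as a temporary measure to drain the stuck queue.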
Mr_Monkey
ruaok: Anything I need to do on the new be-gone branch? I'm getting this:
ImportError: cannot import name 'SECONDS_IN_TIME_RANGE' from 'listenbrainz.listenstore.timescale_listenstore'
_lucifer
oh ok, makes sense.
ruaok
Mr_Monkey: the branch is not fully baked yet.
where is that coming from?
that needs to be nuked too. api.py ?
alastairp
_lucifer: at the moment it's checking a bunch of values, but there's no extra serialisation/deserialisation step here
Mr_Monkey
listenbrainz/listenstore/timescale_listenstore.py
alastairp
but we may be able to find a better place for this check
I'll open a ticket
Mr_Monkey
Err, yes listenbrainz/webserver/views/api.py
alastairp
ruaok: unit tests passed on CI
should we merge and release?
ruaok
just finished reading the PR. looks good.
shall I or do you want to do it?
alastairp
just done it
BrainzGit
[listenbrainz-server] alastair merged pull request #1389 (master…listen-null-validation): When writing a batch of listens, skip any which have null characters https://github.com/metabrainz/listenbrainz-serv...
_lucifer
i can do it. as i have to release the db changes as well.
ruaok
is that something you're doing right this sec, _lucifer ?
I really want to get the timescale writer going again
_lucifer
yup, just fixing the indent comment and going to do the release.
ruaok
ok, ping me when you start the release so I can keep an eye on things.
BrainzGit
[listenbrainz-server] amCap1712 merged pull request #1384 (master…youtube-db): Add SQL scripts for connecting new music services and migrating user data https://github.com/metabrainz/listenbrainz-serv...
ruaok
ah yes. the drop would be a near vertical line if it was just a single insert or two.
alastairp
which is the unique queue? I see beta_unique but don't know what the prod one is
ruaok
just unique
_lucifer
why did this happen suddenly though? i mean, why did the listens begin to pass through messybrainz without erroring out?
alastairp
_lucifer: because there would have been a listen that managed to match a hash in messybrainz
so therefore it didn't do an insert
but it still had a null value in additional_data
which was passed through into the queue
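The sequence alastairp describes can be illustrated with a toy sketch. The hash-over-metadata and the in-memory set are assumptions for illustration, not the real messybrainz schema:

```python
import hashlib

known_hashes = set()  # toy stand-in for the messybrainz lookup table

def passes_without_insert(listen):
    """Return True when the listen's metadata hash already exists:
    no insert happens, so a null character hiding elsewhere in the
    listen never triggers a database error, and the listen is passed
    through into the queue as-is."""
    # assumption: the hash covers track metadata only, not additional_info
    digest = hashlib.sha256(
        listen["track_metadata"]["track_name"].encode()
    ).hexdigest()
    if digest in known_hashes:
        return True   # matched: forwarded without an insert
    known_hashes.add(digest)
    return False      # new hash: an insert happens (and can error out)
```

Under this model, the first submission of a track inserts (and a null byte there would error out), but a later duplicate with a null byte in its extra data sails through untouched.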
_lucifer
oh makes sense.
ruaok
nearly time for some noms, no, alastairp, Mr_Monkey ?
_lucifer
indeed, very lucky that messybrainz acted as a filter.
Mr_Monkey
Affirm
alastairp
and so this is why we should have done an explicit check in the body of the listen
instead of relying on the database error
I'm opening a ticket for that now
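An explicit body-level check of the kind alastairp suggests might look like the following. This is a hypothetical helper, not the actual validation in listenbrainz-server:

```python
def validate_listen(listen):
    """Raise early if any string inside a submitted listen contains a
    null character, rather than letting the database insert fail."""
    def walk(value):
        if isinstance(value, str) and "\x00" in value:
            raise ValueError("listen contains a null character")
        if isinstance(value, dict):
            for v in value.values():
                walk(v)
        elif isinstance(value, list):
            for v in value:
                walk(v)
    walk(listen)
```

Rejecting the listen at submission time gives the client an actionable error instead of silently dropping data deeper in the pipeline.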
_lucifer
makes sense.
safe to upgrade cron, ruaok ?
ruaok
yes
alastairp
ruaok: do you know that just by intuition (you know more or less when the tasks run?)
ruaok
we purposely moved the schedule so that automated tasks run during the EU night and the cluster is more or less idle during the EU day.
alastairp
it'd be nice to have a single command that we can run to either 1) check if any processes are actively running in the cron container, or 2) show a calendar of upcoming events so that we can look at them
yeah, I remember you saying that
a quick hacky way of doing it could be to list processes inside the container and look for a python process
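That hacky check could be a one-liner along these lines. The container name is an assumption, and the real cron container may need a different invocation:

```shell
# Hypothetical container name; adjust to the real cron container.
# ps prints one command name per line; grep looks for a python process.
docker exec listenbrainz-cron ps -eo comm= 2>/dev/null | grep -q '^python' \
  && echo "a cron task is still running" \
  || echo "no python process found (or container unreachable)"
```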
ruaok
oh wait, the PR I was waiting for wasn't even reviewed yet. boo.
that is the code with the updated listen fetch query that uses the listen_count continuous aggregate.
alastairp
postgres' recommendation has always been that when upgrading between major versions, a dump and reload is always best (in which case it's going to make a new index anyway)
in some cases you can get away with running a new version on the old database, which it seems like they may have done here
ruaok
(still untested)
_lucifer
yeah, seems that way.
alastairp
ruaok: right, I see. neat
zas
_lucifer: I installed docker on michael, the installation uses ufw-docker (see https://github.com/chaifeng/ufw-docker), you'll need to use ufw-docker command to allow ports