#metabrainz


      • ruaok
        what do we gain by making this change?
      • _lucifer
        i thought that we could store and retrieve the values as integers. but i just tested with redis-cli and that is not the case, it returns integers as strings.
      • so just ignore that comment.
      • alastairp
        the only gain perhaps is that logically the values are all joined together in a single key
      • instead of having 2 keys per user, there are just 2 count keys in redis
      • _lucifer
        ah, alastairp's motivation was different.
      • alastairp
        _lucifer: a reminder, all counts in redis are always strings
      • even if you use `INCR foo`, it'll be stored as a string
      • _lucifer
        oh! i didn't know that.
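The behaviour discussed above can be sketched with a small dict-backed stand-in for Redis (illustrative only, this is not the real redis client, and the key name is made up): `INCR` does integer arithmetic on the server side, but the stored value, and what `GET` hands back, is always a string.

```python
class FakeRedis:
    """Dict-backed sketch of Redis string semantics (not the real client)."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        # Redis coerces every value to a string on storage.
        self._store[key] = str(value).encode()

    def get(self, key):
        return self._store.get(key)

    def incr(self, key, amount=1):
        # INCR parses the stored string, adds, and re-serializes it.
        new = int(self._store.get(key, b"0")) + amount
        self._store[key] = str(new).encode()
        return new


r = FakeRedis()
r.incr("lb.listen_count")        # returns the int 1
r.incr("lb.listen_count")        # returns the int 2
print(r.get("lb.listen_count"))  # b'2' -- a string, as noted above
```

The real redis-py client behaves the same way: `incr` returns an int, but a subsequent `get` returns the bytes of the string Redis actually stores.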
      • ruaok
        good enough for now
      • 353850 listens processed: exact 78138 high 800 med 2426 low 2697 no 23198 prev 246591 err 0 93.4%
      • some really nice stats coming from the MBID writer working on live listens coming in.
      • alastairp
        ruaok: listenstore review done
      • -> lunch. back to answer questions soon
      • piti has quit
      • ruaok
        all comments addressed, most fixed.
      • alastairp
        I tried the select max(listened_at) query without a group by and it gave me a result
      • not sure what's different in your one
      • ruaok
        I took it out and it barfed on me.
      • different timescale versions?
      • alastairp
        you took out just group by, or group by and order by?
      • ruaok
        just group by
      • alastairp
        I can reproduce the error if I take out just group by
      • but you don't need order by, because you're already using max/min?
      • ruaok
        LIMIT 1
      • 🤦‍♂️
      • alastairp
        but max(listened_at) only returns 1 row
      • ruaok
        I wonder if this was confusing the query planner making things slow.
      • but now tests are failing. huh
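The point about `max()` can be checked with a quick sketch, here using SQLite rather than TimescaleDB, and a made-up table, so only the aggregate behaviour carries over: an aggregate query with no GROUP BY always collapses to exactly one row, which is why the extra ORDER BY and LIMIT 1 are redundant.

```python
import sqlite3

# Tiny stand-in for the listens table (SQLite, not TimescaleDB; the
# table and column names are illustrative, not LB's actual schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listen (user_id INTEGER, listened_at INTEGER)")
conn.executemany(
    "INSERT INTO listen VALUES (?, ?)",
    [(1, 100), (1, 250), (2, 300)],
)

# An aggregate with no GROUP BY collapses the table to a single row,
# so ORDER BY ... LIMIT 1 on top of it adds nothing.
rows = conn.execute(
    "SELECT max(listened_at), min(listened_at) FROM listen"
).fetchall()
print(rows)  # [(300, 100)] -- one row, no LIMIT needed
```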
      • BenOckmore has quit
      • BrainzGit
        [bookbrainz-site] MonkeyDo merged pull request #623 (master…dependabot/npm_and_yarn/underscore-1.13.1): chore(deps): bump underscore from 1.10.2 to 1.13.1 https://github.com/bookbrainz/bookbrainz-site/p...
      • 17WAAF7UX joined the channel
      • 17WAAF7UX
        Project bookbrainz-site build #3955: passed in 4 min 35 sec: https://travis-ci.org/bookbrainz/bookbrainz-sit...
      • 17WAAF7UX has left the channel
      • ruaok
        fix pushed.
      • alastairp: have you started looking at https://github.com/metabrainz/listenbrainz-serv... yet?
      • we need that merged before I can deploy.
      • alastairp
        yeah, I had a glance through that this morning as well. no comments
      • let me approve both PRs
      • ruaok
        ok, hoping for the unit tests to pass. if so, I'm going to proceed with the deployment.
      • alastairp
        ok, great. I'm here for the rest of the afternoon too
      • ruaok
        k
      • unit tests passed. waiting for prod tests now.
      • passed.
      • BrainzGit
        [listenbrainz-server] mayhem merged pull request #1396 (be-gone-time-ranges…no-more-time-ranges-deployment): Recalculate all user data https://github.com/metabrainz/listenbrainz-serv...
      • [listenbrainz-server] mayhem merged pull request #1390 (master…be-gone-time-ranges): Remove time ranges, refactor listen fetching and improve listen count/timestamp caching https://github.com/metabrainz/listenbrainz-serv...
      • ruaok
        if I push a tag, does it build the images automatically now?
      • alastairp
        _lucifer: ^
      • ruaok: in theory yes, though once last week it didn't work for me and we're not sure why
      • _lucifer
        yes, it should.
      • alastairp
        give it a go and we'll see
      • ruaok
        k
      • where can I find out that the images completed building?
      • BrainzGit
        [bookbrainz-site] snyk-bot opened pull request #624 (master…snyk-fix-11a6bc053b6e8918d0937be199c5387c): [Snyk] Fix for 6 vulnerabilities https://github.com/bookbrainz/bookbrainz-site/p...
      • alastairp
        top menu -> actions -> build image and publish to docker hub
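For reference, a tag-triggered build of the kind described here would look roughly like the following workflow sketch (the file layout, tag pattern, and step names are illustrative guesses, not the repository's actual workflow file):

```yaml
# Sketch of a .github/workflows/ file; not the actual metabrainz workflow.
name: Build image and publish to Docker Hub
on:
  push:
    tags:
      - "v-*"              # e.g. v-2021-05-07.0
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and push image
        # push.sh now takes just the tag to publish
        run: docker/push.sh "${GITHUB_REF#refs/tags/}"
```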
      • ruaok
        thx
      • travis-ci joined the channel
      • travis-ci
        Project bookbrainz-site build #3957: failed in 3 min 43 sec: https://travis-ci.org/bookbrainz/bookbrainz-sit...
      • travis-ci has left the channel
      • BrainzGit
        [bookbrainz-site] MonkeyDo closed pull request #624 (master…snyk-fix-11a6bc053b6e8918d0937be199c5387c): [Snyk] Fix for 6 vulnerabilities https://github.com/bookbrainz/bookbrainz-site/p...
      • alastairp
        _lucifer: looks like this job may not have used any docker cache. any thoughts as to why? Either because it's the first build since we fixed the ARGs, or because it failed to store the cache the last time it ran?
      • _lucifer
        the latter or because the cache got evicted.
      • the tests run often but releases are seldom, so it's probable that the release cache gets evicted.
      • alastairp
        right. I'm not sure - 10 minutes for a build is quite a long time. I know that if I do a small incremental release locally (e.g. a .0 then a .1) then the build time is going to be much shorter
      • _lucifer
        yeah, right.
      • ruaok
        gah. refreshing the 30day continuous aggregate also takes up gobs of disk space. :(
      • _lucifer
        the cache got stored this time, alastairp.
      • ruaok, could this be related to the WITH NO DATA option?
      • ruaok
        it is.
      • but then you would normally refresh the agg and it would run into the same issues.
      • I need to see if I can have it refresh the agg in chunks (5 years at a time)
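Refreshing the aggregate in chunks, as described above, could be sketched like this (the aggregate name, date range, and the `five_year_windows` helper are all made up for illustration; TimescaleDB exposes the refresh as a `refresh_continuous_aggregate` procedure):

```python
from datetime import datetime


def five_year_windows(start, end):
    """Yield (window_start, window_end) pairs covering [start, end)
    in 5-year steps (naive about Feb 29; fine for Jan 1 boundaries)."""
    ws = start
    while ws < end:
        we = min(datetime(ws.year + 5, ws.month, ws.day), end)
        yield ws, we
        ws = we


# Emit one TimescaleDB refresh call per window instead of one giant
# refresh, so each chunk's temporary disk usage stays bounded. The
# aggregate name and date range here are illustrative only.
for ws, we in five_year_windows(datetime(2005, 1, 1), datetime(2021, 1, 1)):
    print(
        f"CALL refresh_continuous_aggregate('listen_count_30day', "
        f"'{ws:%Y-%m-%d}', '{we:%Y-%m-%d}');"
    )
```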
      • yvanzo
        nelgin: There is a more general issue: These logs don’t provide enough details to identify the edit(s) that caused the failure.
      • It is known that some edits can lead to such failure too, see https://tickets.metabrainz.org/browse/SEARCH-577 for example.
      • BrainzBot
        SEARCH-577: Index limit exceeded on medium formats update
      • yvanzo
        It’s doable to increase your 'index_limit' value, but at the risk of PostgreSQL being less available for other tasks during such events.
      • As a workaround... ^
      • I will resume addressing indexer’s issues (and prerequisites to port to python 3) after the next schema change release.
      • ruaok
        wow, gaga. 1210M/s write speed. zoooooom!
      • still, hurry up, eh gaga?
      • alastairp
        what's going on? You can generate the new aggregate before we deploy, then when we deploy it'll start using it, and then we can delete the old one?
      • ruaok
        that is what I am doing.
      • it's stopped consuming disk, so now we just need to wait.
      • _lucifer
        alastairp: if you are idle currently, can you take a look at the youtube PR?
      • alastairp
        I approved them
      • we're going to have to have a talk about this workflow. I can see that I'm a bottleneck for reviews, and I'm not sure how we can set things up to get them processed more quickly
      • _lucifer
        thanks!
      • one issue with the time ranges and youtube PRs has been that they are huge. if we could do small incremental PRs, merges, and deployments, it would have been easier.
      • but i don't think it was possible in these 2 cases.
      • although we could probably set up a weekly meeting sort of thing like the MB team have.
      • alastairp
        yeah, iliekcomputers has commented about the value of small PRs before, and I agree there
      • but sometimes a change has to be big (time ranges, this one, etc)
      • _lucifer
        yeah, sometimes there's no option but otherwise we should aim for small PRs.
      • alastairp
        a few times for SoC I've tried to do these incremental PRs like you did here, but in my experience that's always gone badly too
      • _lucifer
      • yeah, rebasing and so on becomes a mess.
      • ruaok
        i don't rebase anymore -- it always turns into a painful mess. merging master in works much better for me.
      • ok, aggregate built, finally.
      • starting deploy.
      • taking web-test for the time being.
      • _lucifer
        yeah, i gave up on rebasing last week too.
      • alastairp, ruaok, Mr_Monkey: so how about we aim for a weekly release? we can set a day in advance that works for all of us. on that day we try to review and merge as many PRs as possible, then release at the end of the day. just like the PR smashing days we had some time ago, but every week.
      • ruaok
        I like it.
      • hmm. my tagged image is not here: https://hub.docker.com/r/metabrainz/listenbrain...
      • should be v-2021-05-07.0
      • alastairp
        oh
      • we changed the args to push.sh but didn't update it here
      • PR incoming
      • _lucifer
        oh! sorry, my bad.
      • BrainzGit
        [listenbrainz-server] alastair opened pull request #1434 (master…build-image-push-args): Fix arguments passed to push.sh https://github.com/metabrainz/listenbrainz-serv...
      • alastairp
        so the 'prod' image on docker hub is the one we want
      • _lucifer
        yes right
      • alastairp
        _lucifer: it's worth a try setting a release day, though I don't want the day to turn into a panic to try and merge as many PRs as possible before we release
      • I think our periodic "fix prs" day that we do at the office is working well, though the last time we did that we definitely picked up the easy ones
      • BrainzGit
        [listenbrainz-server] alastair merged pull request #1434 (master…build-image-push-args): Fix arguments passed to push.sh https://github.com/metabrainz/listenbrainz-serv...
      • [listenbrainz-server] alastair merged pull request #1433 (master…spelling): Fix grammar https://github.com/metabrainz/listenbrainz-serv...
      • alastairp
        tagging v-2021-05-07.1
      • ruaok
        I'm pushing v-2021-05-07.0 now.
      • alastairp
        ah, sure. that'll work too
      • _lucifer
        alastairp: right, i agree. we should aim for a reasonable amount of changes.
      • alastairp
        _lucifer: the job failed to pull cache layers again
      • ruaok
        though if .1 is done and ready, I'll use that.
      • alastairp
        nah, it'll be another 10 minutes, because for some reason the build job isn't using a cache
      • ruaok
        k
      • these build times are getting redonkulous
      • alastairp
        yeah, this 10 minutes is for a full LB prod image. we added caching in the github actions precisely to avoid this, and when building locally on your machine it should reuse caches
      • but even then, it's a pretty long process from 0
      • ruaok
        yeah
      • alastairp
        growing up in a country at the far end of a single cable to the rest of the internet, I still cringe at downloading the same packages again and again every time you want to make a new deployment. it feels like such a waste
      • ruaok
        huh? still not found???
      • is push.sh busted?
      • do I need to merge another branch?
      • _lucifer
        the image is not pushed yet, a couple of minutes more.
      • alastairp
        how did you try and push it yourself?
      • ruaok
        MY image didn't appear.
      • docker/push.sh prod v-2021-05-07.0
      • alastairp
        sorry - I thought you pulled :prod and renamed it to v-2021-05-07.0
      • _lucifer
        you do not need the prod argument now.
      • alastairp
        ok. we made this change and didn't inform you of it - sorry about that
      • _lucifer
        docker/push.sh v-2021-05-07.0
      • alastairp
        the syntax is now ^
      • _lucifer
        the action has also pushed now, so you can use that for now.
      • ruaok
        finally found it.
      • _lucifer
        alastairp, i'll enable DEBUG logging on the actions and see if I can find the issue.
      • alastairp
        _lucifer:
      • > Stored root cache, key: docker-layer-caching-Build image and publish to Docker Hub-61ad5188f09e6d2abe065a8e450c87c8238c1c1a0f06684921281fbebe0e9d70-root, id: 50510
      • on the post-run job
      • ruaok
        what if i need push.sh beta beta?
      • alastairp
        but when starting: