bitmap: ruaok: bono was slow probably because it's running a dev instance. in dev, the js files are ~10 MB each. in prod, those are shrunk down to ~800 kB each, plus they get cached.
bitmap
right, I figured they just weren't minified
MB's js files are probably >10 MB in dev if you include source maps, though we output those to a separate file
lucifer
makes sense
btw, is MB using ES5 or ES6?
bitmap
we compile everything to es5
in the actual source code we use stuff from es2020 afaik
lucifer
ah nice
peterhil joined the channel
akshat
Hi lucifer!
lucifer
hi!
akshat
So to be able to run the color browse branch locally, I would have to use port forwarding to bono.meb?
Or how do the steps go?
lucifer
you can bring up an instance on bono.meb. and then access it using the public url.
akshat
Okay understood. How do I bring up the instance?
lucifer
./develop.sh up
akshat
But that starts it in localhost, right?
How do I bring it up in bono?
lucifer
you'll probably get an error currently, because my containers are up. i'll take those down.
run it on bono. i mean ssh bono.metabrainz.org, git clone the lb server repo, checkout the branch, run ./develop.sh up.
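(For reference, a minimal sketch of those steps, assuming the listenbrainz-server GitHub repo; the branch name is a placeholder:)

    ssh bono.metabrainz.org
    git clone https://github.com/metabrainz/listenbrainz-server.git
    cd listenbrainz-server
    git checkout <color-browse-branch>   # whatever branch the color browse work lives on
    ./develop.sh up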
akshat
Interesting, so I wasn't cloning the project on bono to do stuff. Got it
What I was thinking was that I could mirror my local work to bono somehow and access the stuff there. But I will have to set up everything on bono again, right?
Great, I see it's already allocated.
You must be using it, I suppose
lucifer
it might be possible to mirror your setup but I don't think it will be easier than manually copying.
yes i'll bring mine down
akshat
Thanks
lucifer
done
akshat
"ERROR: for web Cannot start service web: driver failed programming external connectivity on endpoint listenbrainz_web_1 (dc64de3e4658e87683554047ac592dec4a1ba603a6a86a231d35485d0c6e5b1a): Bind for 0.0.0.0:80 failed: port is already allocated"
I still see this
lucifer
which branch?
akshat
color browse only
lucifer
let me check, it had the workaround until yesterday
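(For reference, a hedged sketch of diagnosing the port clash; the compose project name is an assumption, not necessarily what develop.sh uses:)

    docker ps --filter publish=80            # find the container that already holds port 80
    docker-compose -p <other-project> down   # bring the conflicting compose project down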
reosarevok
outsidecontext, rdswift: I was told that https://picard-docs.musicbrainz.org/en/variable... is a bit outdated and the relevant checkbox is now "Enable track relationships". If so, can we update it? :)
outsidecontext, zas: in a related question, does Picard have any way currently to get relationship data for releases that go over the 500 recording limit? We really should figure out a way of doing that - if it needs a new endpoint we could talk about it with bitmap and yvanzo.
(without that, the payoff for people actually improving the data on huge releases is super low :) )
zas
afaik Picard doesn't do anything special, and depends on the web service to provide data; if it is limited server-side, then Picard can't really go over that. Now I'm not sure what this 500 recordings limit is. I wish we had ws "paging", and no limit at all.
outsidecontext
Yes, that's limited server side. The only way around currently would be to load the data piece by piece afterwards, probably per recording. That was proposed before, but it obviously would be super slow (8+ minutes with the 1 request per second limit)
So that's mostly a server side issue. Having the ability to get the recording data for a release paginated would be a way around that I guess, e.g. if the endpoint only returns 500 recordings per call. Even for most huge releases it would still be only 2 or 3 calls, I guess
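(A rough sketch of that piece-by-piece loading against the existing WS/2 recording browse endpoint, which takes limit/offset; whether these inc values return all the relationship data Picard needs is an assumption:)

    MBID=<release-mbid>
    for OFFSET in 0 100 200 300 400; do
      curl -s "https://musicbrainz.org/ws/2/recording?release=$MBID&inc=work-rels+artist-rels&limit=100&offset=$OFFSET&fmt=json"
      sleep 1   # stay under the 1 request per second limit
    done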
aerozol joined the channel
Etua joined the channel
Etua has quit
reosarevok
Yeah, an option I could see is having a special endpoint that returns just relationship data for the recordings/works, given a release MBID
I guess that's kinda what you were suggesting :)
I wanted to make sure Picard didn't currently implement a "check every recording" option or anything
But yeah, I guess that'd be super annoyingly slow
bitmap: I'd really like some feedback on this when you're around ^ :)
outsidecontext
reosarevok: not sure about that. if the endpoint loading release data with many recordings + works and their relationships is slow, a separate endpoint for loading the recordings + works and their relationships will likely just suffer from the same issue. As I understand it, the problem with loading the release is really the data on the recording and work level
reosarevok
I meant, having that paginated
So you get the recordings without rels in the normal request (with *some* notification that rels were not returned)
outsidecontext
Yes, makes sense
reosarevok
And then you can get a paginated endpoint that returns just recordingmbid: rels data somehow
MBS-3680: Error 502 Bad Gateway loading certain releases from the webservice
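(Purely hypothetical shape for such an endpoint; the path, parameters, and response layout below are invented for illustration and do not exist in WS/2:)

    curl -s "https://musicbrainz.org/ws/2/release/<release-mbid>/recording-rels?limit=500&offset=0&fmt=json"
    # imagined response: { "<recording-mbid>": [ ...relationships... ], ..., "offset": 0, "count": 1200 }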
ROpdebee has quit
opal has quit
tandy[m] has quit
legoktm[m] has quit
leonardo has quit
yvanzo has quit
mruszczyk has quit
milkii has quit
Rotab has quit
atj has quit
yvanzo joined the channel
tandy[m] joined the channel
legoktm[m] joined the channel
leonardo joined the channel
riksucks has quit
riksucks joined the channel
ruaok
moin!
the bot got 55k images indexed before getting a 403 axe.
zas
about handling of huge releases, I guess bitmap has a few ideas. This is something we could discuss during the summit. It seems to me this will become more and more of a problem as we get more and more metadata.
ruaok
zas, bitmap : thanks for the CAA setup. working great. now running with 16 threads.
patton at less than 20% cpu.
zas
AX51s are real workhorses
ruaok
I love the AMD cpus. I have one on my desk at work and it's lovely.