Can't connect to solrcloud.metabrainz.org:443 (certificate verify failed)
SSL connect attempt failed error:0A000086:SSL routines::certificate verify failed at /home/musicbrainz/carton-local/lib/perl5/LWP/Protocol/http.pm line 49.
Searching for artists
bitmap[m]
zas: julian45: any idea? ^
julian45[m] joined the channel
julian45[m]
unfortunately unavailable for the next few hours due to a pre-planned thing
but
assuming that comes from the perspective of an MB instance trying to connect to solrcloud, my guesses would be an expired cert being served by solrcloud, a somehow invalid/corrupted cert, or issues with the client consuming the cert within MB
may be worth taking a look at solr changes re SSL between old and current versions
bitmap[m]
the cert has indeed expired (NotAfter: May 27 23:59:59 2025 GMT)
I temporarily disabled SSL verification for solrcloud.metabrainz.org inside the MBS containers to buy us some time
zas[m]
bitmap: YoMo12 cert issue should now be fixed, it was a double issue: the certificate wasn't updated and the expiration check wasn't working. Both should be fixed now.
nawcom joined the channel
vardhan_ has quit
nawcom has quit
Maxr1998 joined the channel
Maxr1998_ has quit
nawcom joined the channel
monkey[m]
<lucifer[m]> "monkey: I have updated LB#3260..." <- Yes, I'll deploy in test and have a look at the UI
lucifer[m]
monkey: it might not work on test.lb because the table needs to be updated first.
monkey[m]
Ah, I see.
Let me run it locally
lucifer[m]
you might need to run the last fm importer container too.
monkey[m]
In the meantime, it looks like one of the tests is failing
Definitely some changes needed for the UI
lucifer[m]
ah i'll fix that
cool, i'll fix the test and leave the UI changes and merging up to you
monkey[m]
I figure I'll work on it directly, junaid has enough to think about with the GSoC project approaching.
I had done the export data status thingy, so I'll make it closer to that.
lucifer[m]
yup makes sense, test should be fixed now btw. what are the UI changes needed?
monkey[m]
Many changes needed. Basically making it more like the export page status
But it also includes moving some hide/show logic into the status component rather than the parent. It's in line with some work I was looking into yesterday about UX for LFM imports in LB-1728, so I want to apply some of aerozol's suggestions on the ticket.
<lucifer[m]> "mayhem: re navidrome, do you..." <- no real opinion, but I respect the desire to identify (troublesome) clients. unless it presents a problem for you, I'd say run with it.
monkey[m]
<lucifer[m]> "oh i just saw the comment, i..." <- Thanks for working on the last improvement!
lucifer[m]
monkey: sure i'll do that later today, i am reviewing the slow dashboard PR atm
monkey[m]
OK. When you do, and if you deploy to beta to check it, please put the bootstrap5 branch back on beta afterwards.
I will update the BS5 branch with latest master changes now.
fettuccinae[m] joined the channel
fettuccinae[m]
mayhem: Are all the endpoints supposed to be private and should only MeB projects be allowed to access them? I thought only the `notification/send` endpoint was supposed to be private and that the remaining endpoints could be accessed by end users.
mayhem[m]
<fettuccinae[m]> "mayhem: Are all the endpoints..." <- everything is private. only MeB projects should be able to access them -- at least that is how I envisioned it. I think we should go with that now -- if people make a good case for them being public, then we can change them later. not a lot of work.
fettuccinae[m]
<mayhem[m]> "everything is private. only..." <- Oh, then I'll need to revert the token_required decorator to its previous version. And till client credentials grant is added, the auth checks will be skipped. Sorry for the misunderstanding:(
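As a hedged illustration of the idea being discussed (not the actual ListenBrainz `token_required` implementation), a decorator that keeps endpoints private to known internal service tokens might look like this; all names and tokens here are hypothetical:

```python
from functools import wraps

# Hypothetical allow-list of internal MeB service tokens.
ALLOWED_SERVICE_TOKENS = {"mbs-internal-token", "lb-internal-token"}

class AuthError(Exception):
    """Raised when a caller is not an authorized internal project."""

def token_required(func):
    """Reject any call whose token is not on the internal allow-list."""
    @wraps(func)
    def wrapper(token, *args, **kwargs):
        if token not in ALLOWED_SERVICE_TOKENS:
            raise AuthError("endpoint is private to MetaBrainz projects")
        return func(*args, **kwargs)
    return wrapper

@token_required
def send_notification(message):
    # Stand-in for the real notification/send handler.
    return f"queued: {message}"

print(send_notification("mbs-internal-token", "cert expired"))  # → queued: cert expired
```

Under the "everything is private" decision, opening an endpoint up later would just mean relaxing this check for that one route, which matches mayhem's "not a lot of work" point.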
mayhem[m]
all good!
yvanzo[m]
Hi bitmap, please proofread the pull request to upgrade search for mirrors, I finished testing it. If everything is fine, I will release it today.
lucifer[m]
mayhem: pondering about optimizations for fetching listens from timescale, i realise that the main use case of listens in timescale is to serve the user dashboard, which is always a per-user data query. the per-time data queries (stats etc.) are actually handled by spark. which makes me feel that partitioning our listens by user might be better than by timestamp. of course a partition for every user might not be feasible, pg would probably have limits on it anyway, so tradeoffs to consider but might be worthwhile exploring at some point.
bitmap[m]
<zas[m]> "bitmap: YoMo12 cert issue should..." <- thanks, I just re-enabled SSL verification for search queries, working fine now 👍
lucifer[m]
monkey: testing the dashboard speed up pr and it seems the frontend is not querying for extra listens correctly, at least not with the correct min/max ts.
monkey[m]
This is the suggested improvement we discussed, but that maybe wasn't implemented, no?
mayhem[m]
lucifer[m]: remember that the social feed loads data from multiple users, is this in your planning?
monkey[m]
FWIW, for the social feed we can put a hard limit for how far back we want to go. Could be small, a week even.
Or month, or whatever
mayhem[m]
lucifer: how would you partition by users?
lucifer[m]
mayhem: the simplest solution could be one partition per user.
Timescale lets you partition by numeric columns.
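To illustrate the trade-off being discussed (this is a sketch of hash partitioning in general, not a proposal for the actual schema), routing listens by a numeric `user_id` means every listen for a given user lands in one partition, so a per-user dashboard query scans only that partition. The partition count of 16 is an arbitrary choice for the example:

```python
# Hash-partition routing by user_id: a fixed modulus maps each user
# to exactly one partition, mirroring what PostgreSQL's hash
# partitioning would do server-side.
NUM_PARTITIONS = 16

def partition_for(user_id: int) -> int:
    """Return the partition index a user's listens are routed to."""
    return user_id % NUM_PARTITIONS

listens = [
    {"user_id": 7, "listened_at": 1716852000, "track": "a"},
    {"user_id": 7, "listened_at": 1716852300, "track": "b"},
    {"user_id": 23, "listened_at": 1716852600, "track": "c"},
]

partitions: dict[int, list] = {}
for listen in listens:
    partitions.setdefault(partition_for(listen["user_id"]), []).append(listen)

# Users 7 and 23 happen to share partition 7 (23 % 16 == 7), showing
# why a modest fixed partition count avoids one-partition-per-user
# while still keeping each user's data together.
print(sorted(partitions))  # → [7]
```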
monkey[m]
lucifer: I think maybe the front-end changes were in LB#3252 which is now closed
Can someone please confirm whether or not MusicBrainz Android is currently being maintained? If not, I plan to remove it from the Picard documentation when we publish v3 of Picard. Thanks.
Sintharu joined the channel
mayhem[m]
not being maintained -- we decided to sharpen the focus on the LB app.
lucifer[m]
monkey: i see, can you implement it in that PR?
vardhan_ has quit
ansh[m] has quit
kellnerd[m] has quit
Maxr1998_ joined the channel
Maxr1998 has quit
Gautam_coder[m] has quit
jasje[m] has quit
Sintharu has quit
BobSwift[m]
<mayhem[m]> "not being maintained -- we..." <- That's what I thought I remembered. Thanks for confirming.
suvid: i think handling the file uploads and creating a background task to process the uploaded files is a good starting point for the draft PR.
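The suggested starting point above can be sketched minimally (hypothetical names throughout; this mirrors the shape of the idea with a queue and worker thread, not ListenBrainz internals):

```python
import queue
import threading

task_queue: queue.Queue = queue.Queue()
results: list[str] = []

def handle_upload(filename: str) -> None:
    """Record the upload and enqueue it for background processing.

    A real app would persist the file to disk or object storage first.
    """
    task_queue.put(filename)

def worker() -> None:
    """Drain the queue; None acts as a shutdown sentinel."""
    while True:
        item = task_queue.get()
        if item is None:
            break
        results.append(f"processed {item}")
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()
handle_upload("listens_export.zip")
task_queue.put(None)  # shut the worker down
t.join()
print(results)  # → ['processed listens_export.zip']
```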
mamanullah7[m]
Hey lucifer: should I add this!?
And one more thing should I also create a draft pr!?
And regarding commits, do i commit daily or weekly, or should i commit when i complete a particular implementation!?
lucifer[m]
m.amanullah7: i don't think you need an error message in the oauth table, any error messages would be directly surfaced to the user. note that you don't need to implement imports, only playback integration in BrainzPlayer.
mamanullah7[m]
Okay my bad!!
lucifer[m]
for commits, I would suggest to do it often to avoid losing changes. we can later squash the commits as needed.
mamanullah7[m]
lucifer[m]: Thanks! i'll create draft repo and then will commit whatever changes i made!