rdswift: I would love your help and promise that the LB team would not give you any shit for helping.
yellowhatpro has quit
lusciouslover has quit
Maxr1998 has quit
Maxr1998 joined the channel
monkey
Uh-oh, the last LB deployment also deployed the SPA feature that was on beta with bugs...
I thought we said we could wait a bit before the next deployment, but I didn't make it particularly clear that it shouldn't be deployed before approval
v6lur_ has quit
Oh well.
lucifer
monkey: oh sorry. I completely forgot about SPA when I made the release a couple of days ago.
monkey
Yeah, I figured :) And I wasn't around to say anything
No biggy, we've got bug fix PRs up already, we'll work on those as a priority
yvanzo
rdswift: Your help is always welcome and I'm still hoping to continue making progress together on this topic.
outsidecontext
lucifer: are you around? I have some questions about the new MeB oauth.
lucifer
outsidecontext: yes
outsidecontext
lucifer: So far Picard has used the OOB flow, with the user needing to manually copy the "code" value to get the access tokens. For a long time there has been a plan to migrate this to a callback URL on localhost, and I have revived an old implementation of this against the existing oauth implementation.
but: the callback runs on Picard's builtin webserver, and that can a) run on arbitrary ports and b) be disabled completely. I'm not sure how to deal with that
lucifer
can the webserver be amended to always run on a specific port?
outsidecontext
in the existing oauth implementation it accepts arbitrary redirect URLs for "installed applications", and I can fall back to OOB
lucifer
or at least a range of fixed ports.
outsidecontext
I guess it can, but what to do if the port is occupied?
lucifer
for the new oauth implementation, you can specify a list of redirect_uris. you could probably put 100 ports in there.
hoping at least one of those would be free.
outsidecontext
ok, then we would need to limit the allowed port range. probably most run on the default port anyway (or one of the next higher ports, as picard automatically tries to find a free one)
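The fixed-port-range idea above could be sketched like this (a minimal illustration, not Picard's actual implementation; the range and URI format are hypothetical, and each port would need to be pre-registered as a redirect URI with the OAuth server):

```python
import socket

# Hypothetical registered range; each port would correspond to a
# pre-registered redirect URI such as http://127.0.0.1:<port>/callback.
CALLBACK_PORTS = range(8000, 8010)

def find_free_callback_port(ports=CALLBACK_PORTS):
    """Return the first port in the registered range we can bind to,
    or None if all are occupied (the caller could then fall back to
    another flow). Note: the port is released again before returning,
    so there is a small race window before the callback server starts."""
    for port in ports:
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.bind(("127.0.0.1", port))
            return port
        except OSError:
            continue
    return None
```

A login flow could call this just before starting the callback server, matching the idea later in the discussion of only running the webserver for the duration of the login.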
lucifer
makes sense. as for a fallback, would it make sense to let the user provide credentials and redirect uris of their own?
outsidecontext
that sounds overly complicated
lucifer
true that. i am guessing only a very small number of users would need that.
outsidecontext
OOB won't be supported, right?
lucifer
can picard's webserver be configured to run on interfaces other than localhost?
outsidecontext
it can be configured to run on all interfaces, but it will always run on localhost
lucifer
we haven't had the discussion as a team. i have not implemented it for now; i personally would like to avoid it because it seems like a bad idea security-wise, but if it's a deal breaker we can consider adding it back in.
or at least add a special case to restrict oob to only picard.
but if picard can work without OOB, yes OOB won't be supported.
outsidecontext
it would at least allow a clean fallback in case the webserver is not running. otherwise we could of course start it for login and shut it down again afterwards
lucifer
makes sense.
outsidecontext
is it already known which scopes will be available?
lucifer
we can discuss how to keep OOB around for picard in the next OAuth meet.
nope, that is another point up for discussion in that meeting.
outsidecontext
ok, sounds good.
lucifer
for MB, we'll probably keep rating, profile and tag around. not sure if we need finer grained scopes for these. but just a guess for now.
outsidecontext
picard currently uses "profile tag rating collection submit_isrc submit_barcode"
if this becomes a central login it might make sense to prefix scopes with service names or such. something like "mb:tags lb:listens" or whatever is needed for other services
lucifer
makes sense.
yes, that is the plan. the scopes will be same probably but with prefixes.
outsidecontext
ah, great, thanks. I see the scopes discussion is already similar to the above
I'll see about getting the callback logic with the current oauth merged into Picard. Then I think Picard's oauth implementation will be in good shape to move over to the new oauth whenever it is ready.
lucifer
awesome, thanks!
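The service-prefixed scope scheme floated above ("mb:tags lb:listens") could look something like this (an illustrative sketch with hypothetical scope names; it assumes every scope carries a service prefix):

```python
def parse_prefixed_scopes(scope_string):
    """Group space-separated, service-prefixed scopes
    (e.g. "mb:tags lb:listens") into a dict mapping
    service prefix -> set of scope names."""
    by_service = {}
    for scope in scope_string.split():
        service, _, name = scope.partition(":")
        by_service.setdefault(service, set()).add(name)
    return by_service
```

A token endpoint could then check a service's granted scopes by looking up its prefix, e.g. `parse_prefixed_scopes(granted)["mb"]`.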
zas
yvanzo, bitmap: we have slow response times from a few servers on the mb website since a few days ago. I just changed the server weights a bit, but I think there's something going on. cpu resources used by the mb website are on the high side, and given the number of queries they handle per instance, there's really something to improve. Perhaps the current issue is due to the nature of the queries, dunno. Please have a look when you have time.
yvanzo: solr search on test.mb doesn't work, not sure about the status of this though. Wasn't it set up?
mara42 joined the channel
mara42 has quit
mara42 joined the channel
mara42 has quit
mara42 joined the channel
bitmap
zas: I noticed yesterday too, will try to investigate more deeply a bit later today
zas
Hey bitmap, atm it seems paco is much slower than others
bitmap
hmm, I didn't see anything odd with the requests at a glance, although paco is one of the nodes that handles both prod and beta (and it seems the beta container is getting a similar # of requests as the prod container?)
rdswift
"<mayhem> rdswift: I would love your help and promise that the LB team would not give you and shit for helping." Okay, can you point me to what you have now and I'll start taking a look? No promises on how quick though, because I'm currently working on preparing the income tax filing for my company.
bitmap
we also get tons of /set-language/ requests and I'm not sure why. I guess bots but they have generic UAs
zas
bitmap: could it be beta that's slowing things down? we had issues with zappa too
mara42 has quit
bitmap
could be, I didn't expect the beta website container to have as many reqs as prod, but I guess bots hit it just as hard
not sure why it was only an issue in the past few days though
rdswift
"<yvanzo> rdswift: Your help is always welcome..." That certainly isn't the way I remember it. In any event, the MusicBrainz documentation is well down my list of priorities if I do decide to work on it at all.
zas
we should adapt robots.txt for non-prod servers, we do not want google (for example) to index them
it seems to me they use a lot of cpu for very few requests/s
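For the non-prod robots.txt idea above, a blanket disallow is enough to keep well-behaved crawlers like Googlebot out of beta and test servers (a sketch; where and how this file is deployed per environment is up to the ops setup):

```
User-agent: *
Disallow: /
```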
bitmap
unless we are getting a ton more requests to the beta containers all of a sudden, it could also be due to the artist alias loading that was deployed there
ok, that makes sense because the amount of traffic on the paco beta container did seem unusual
and I don't see the alias changes having that big of an impact, most artists have few or no aliases
zas
bitmap: what about spawning 2 more instances of beta?
bitmap
I could try, do you have a recommendation on which nodes to use?
zas
hip
clash
well, that's to see if it improves the situation, but the core issue is rather the inefficiency of the mb website containers; they handle very few req/s compared to the cpu they use