akshaaatt (IRC): I am a bit busy making an app for my institute club, should wrap it up by today. Should I wrap up the BP revamp first in the 3-4 days of coding period, or should I first try to get started on the GSoC project from tomorrow?
BrainzGit
[listenbrainz-server] anshg1214 opened pull request #2883 (brainzplayer-spa…refactor-recommendations-page): Refactor RecommendationsPage to Functional Component https://github.com/metabrainz/listenbrainz-serv...
"Sorry, http://archive.org/ is having trouble (front end load balancers are overloaded). We are working on it. Back as soon as we can."
mayhem
nope, times out.
zas
solr on solr4 restarted; its disk I/O now looks on par with the other nodes. iotop clearly reported some solr processes as heavily reading from disk, though the cluster and all cores are in sync now
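A minimal sketch of the per-process read-rate check iotop performs, for reproducing this kind of diagnosis; it assumes the psutil package on a Linux host, and everything it prints depends on what happens to be running locally:

```python
# Sample cumulative per-process read bytes twice and report the top
# readers, roughly what iotop shows. Requires psutil; Linux-only,
# since io_counters() reads from /proc.
import time
import psutil

def top_disk_readers(interval=5.0, top_n=5):
    before = {}
    for proc in psutil.process_iter(['pid', 'name']):
        try:
            before[proc.pid] = (proc.info['name'], proc.io_counters().read_bytes)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    time.sleep(interval)
    rates = []
    for pid, (name, read0) in before.items():
        try:
            read1 = psutil.Process(pid).io_counters().read_bytes
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        rates.append(((read1 - read0) / interval, name, pid))
    for rate, name, pid in sorted(rates, reverse=True)[:top_n]:
        print(f"{name} (pid {pid}): {rate / 1e6:.1f} MB/s read")

if __name__ == "__main__":
    top_disk_readers()
```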
atj
zas: pong, sorry was making dinner
zas
atj: np, is the new search enabled on beta already?
atj
no, not yet
zas
I'll re-increase the rate limit on the (old) solr cluster; it seems stable again, let's see
atj: ok thx
atj
was meant to happen on Friday but I had a few things to wrap up
zas
the issue related to high disk I/O (heavy read ops) is very weird; let's hope it's something that was fixed in a recent solr release
atj
I've not seen it on the new cluster, I think it might be related to page caching
zas
it happens more on solr5 than on any other node
atj
the new cluster uses memory-mapped I/O (MMIO) for the directory cache
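For context on the memory-mapped directory: with mmap, repeated reads of hot index segments are served from the OS page cache instead of going through explicit read() syscalls, which is why page-caching behaviour differs between the two clusters. A minimal Python illustration of the mechanism (the file name is illustrative):

```python
# With mmap, the kernel pages file data in on first access and serves
# later reads of the same region from the OS page cache, so hot index
# segments stop generating explicit disk reads.
import mmap

with open("segment.dat", "rb") as f:  # illustrative file name
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:16]      # first touch may fault the page in from disk
        print(header.hex())   # a repeat read of mm[:16] hits the page cache
```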
zas
cluster running at ~45% capacity, seems stable
I'll let a few mins pass and then I'll raise the limit a bit more
restarting sir-prod, let's see
grrrr solr4 down again
the issue seems related to sir
sir stopped again
both solr4 & solr5 disk read rates increased to 1.3 GB/s again
causing the whole cluster to slow down
atj
no significant increase in updates from Sir AFAICS
zas
nope, but I think that's triggered by a post from sir
after restarting solr on 4 & 5, disk I/O comes back to normal. I'll not restart sir; let's see if it happens again without sir running
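For readers outside the team: a "post from sir" is an HTTP POST of changed documents to a Solr core's update handler. A hedged sketch using Solr's standard JSON update API; the host, core name, and fields below are illustrative, not sir's actual payload:

```python
# Post a batch of documents to a Solr core's JSON update handler.
# Everything here except the /update?commit=true shape is made up.
import requests

SOLR_UPDATE_URL = "http://solr4.example.org:8983/solr/artist/update"

docs = [{"id": "some-mbid", "name": "Some Artist"}]

resp = requests.post(
    SOLR_UPDATE_URL,
    json=docs,
    # An explicit commit makes the update visible immediately, and can
    # kick off segment merges with heavy read I/O on the node.
    params={"commit": "true"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["responseHeader"])
```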
atj
i'm running a 72-hour load test on the new cluster at the moment and there have been no performance issues so far
so it seems to be isolated to the old cluster
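A soak test of this shape can be as little as a worker pool querying a select endpoint until a deadline. A rough sketch under assumed names (the URL, query, worker count, and duration are illustrative, not atj's actual harness):

```python
# Hammer a Solr select endpoint with a fixed pool of workers for a set
# duration, reporting any batch that contains failures.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

SEARCH_URL = "http://solr-new.example.org:8983/solr/artist/select"
DURATION_S = 72 * 3600  # 72-hour soak, as in the log
WORKERS = 16

def one_query(_):
    try:
        r = requests.get(SEARCH_URL, params={"q": "name:test"}, timeout=10)
        return r.status_code
    except requests.RequestException:
        return None

def soak():
    deadline = time.monotonic() + DURATION_S
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        while time.monotonic() < deadline:
            statuses = list(pool.map(one_query, range(WORKERS)))
            errors = sum(1 for s in statuses if s != 200)
            if errors:
                print(f"{errors}/{WORKERS} queries failed")

if __name__ == "__main__":
    soak()
```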
zas
stopped sir, restarted solr 4 & 5, everything back to normal
restoring usual rate limit
hmmm
sir stopped, cranked up the rate limit ... and boom.... solr 4 & 5 again
but ... they seem to recover now
or not... load increasing quickly on those
solr5 load > 150
derat joined the channel
derat
is there a known issue with email subscriptions? i'm configured to get daily updates, but i noticed that i haven't received any since one that was sent at 2024-05-24 00:53:48 +0000
bitmap
derat: yes, it was crashing because I didn't deploy https://github.com/metabrainz/musicbrainz-serve... to the cron container. did that now and started the subscriptions processing again. sorry for the delay
derat
bitmap: no worries, and thanks!
derat has quit
zas
I fully restarted the solr cloud, rate-limited it at 20 req/s, and restarted sir
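The 20 req/s cap is enforced in front of the cluster; as a policy it is just a token bucket. A minimal sketch of that policy (an illustration, not their actual gateway config):

```python
# Token-bucket limiter: refill at `rate` tokens/s up to `burst`, spend
# one token per request; requests that find no token are rejected
# (e.g. with HTTP 429) instead of reaching the recovering Solr nodes.
import time

class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = TokenBucket(rate=20.0, burst=20.0)  # the 20 req/s from the log
print(limiter.allow())
```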