reosarevok: to bring more accuracy for the next day
2021-03-02 06:47
reosarevok
Do see if my suggestion in the ticket seems good enough maybe :)
2021-03-02 06:47
yvanzo
reosarevok: is the WIP commit supposed to fix the first commit (of PR #1112), or is it more complicated than that?
2021-03-02 06:20
reosarevok
I think it should just fix the first commit to work with the editor data changes
2021-03-02 06:30
reosarevok
(I wrote this a while ago, but that's what it looked like on review + test)
2021-03-02 06:16
zas
yvanzo: I wonder why the Solr nodes have that much load when SIR is working. I guess it's related to index replication, but it seems like a lot of load for so few changes
mb-solr-5 is writing to disk at a constant rate of 600MB/s ... other nodes < 30MB/s, any idea?
2021-03-02 06:23
atj
the graph shows it's reading from disk at 600MB/s?
2021-03-02 06:43
atj
is the CPU graph showing user-mode usage or non-idle?
2021-03-02 06:04
zas
yes, sorry s/write/read/
2021-03-02 06:56
zas
user
2021-03-02 06:19
atj
some nice graphs there zas, did you create the dashboard?
2021-03-02 06:25
zas
yes
2021-03-02 06:41
atj
I do enjoy creating a Grafana dashboard :)
2021-03-02 06:42
zas
I'll stop this instance, and see how the cluster acts
2021-03-02 06:06
zas
atj: that's a very good monitoring tool, especially since they added alerts
2021-03-02 06:43
atj
yeah it's very useful for operational monitoring. I think the alerting functionality is a bit weak though as you can't use templated graphs.
2021-03-02 06:11
atj
quite strange how the disk read on solr-5 seems to be the only outlier
2021-03-02 06:24
zas
yes... I'd say it's more than strange ;)
2021-03-02 06:06
zas
ok, stopped this Solr instance, let's see if the other nodes exhibit the same behavior
2021-03-02 06:18
zas
it doesn't seem like another node started reading at this rate; in fact, apart from slightly more queries handled per node (4 nodes instead of 5), nothing changed
2021-03-02 06:24
zas
I'll start it again
2021-03-02 06:49
zas
rebooting the node while I'm at it
2021-03-02 06:13
alastairp
deploying new LB
2021-03-02 06:33
Etua joined the channel
2021-03-02 06:10
BrainzGit
[listenbrainz-server] paramsingh opened pull request #1309 (master…param-user-recommendation-event-table): Add new table for user recommendation events https://github.com/metabrainz/listenbrainz-server…
seems that it didn't rename it from mbid_mapping_20210302_041458 to mbid_mapping
2021-03-02 06:36
ruaok
there is a concept of collection aliases. there is an alias from mbid_mapping to mbid_mapping_20210302_041458. Changing this alias allows atomic rotation of indexes.
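As a rough sketch of the alias rotation described here, assuming the typesense-python client (the collection name is taken from the log; host and API key are placeholders):

```python
import typesense

# Placeholder connection details, for illustration only.
client = typesense.Client({
    "api_key": "xyz",
    "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
    "connection_timeout_seconds": 2,
})

# Repoint the stable alias at the freshly built collection.
# Readers keep querying "mbid_mapping" and switch over atomically.
client.aliases.upsert(
    "mbid_mapping",
    {"collection_name": "mbid_mapping_20210302_041458"},
)
```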
2021-03-02 06:12
ruaok
this problem is a connection problem first and foremost. the config info that LB labs has is correct, but the container cannot reach typesense.
I'll just commit the fix directly to master. should I push v-2021-03-02.1 ?
2021-03-02 06:51
alastairp
just do the commit, _lucifer can do the release + push :)
2021-03-02 06:59
ruaok
commit pushed.
2021-03-02 06:01
ruaok
thanks _lucifer
2021-03-02 06:01
alastairp
it'll be 02.0, I released yesterday's build
2021-03-02 06:19
_lucifer
on it
2021-03-02 06:51
alastairp
_lucifer: git tag v-2021-03-02.1, merge master to production, make github release, build+push image
2021-03-02 06:07
alastairp
uh
2021-03-02 06:09
alastairp
v-2021-03-02.0
2021-03-02 06:40
alastairp
I'm really not a fan of magic Python HTTP API wrappers that hide the details of the requests they make. all such clients should have a straightforward way of debugging the details of the requests
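One generic escape hatch, assuming the wrapper ultimately sits on requests/urllib3 (as typesense-python does): enable wire-level logging so every request line shows up, without touching the wrapper itself.

```python
import logging
import http.client

# Dump request/response lines from the underlying HTTP stack.
# Applies to any client library built on requests/urllib3.
http.client.HTTPConnection.debuglevel = 1
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)
```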
2021-03-02 06:14
ruaok
"magic python HTTP API wrappers" like what?
2021-03-02 06:25
alastairp
like the typesense client library
2021-03-02 06:53
alastairp
I call client.collections['foo'].documents.search, but what HTTP request is that actually making?
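For reference, a hedged sketch of what that wrapped call likely reduces to: a single GET against Typesense's documents/search endpoint, with the query carried in URL parameters (the host, key, and parameter values below are made up):

```python
import requests

# Roughly the request behind client.collections['foo'].documents.search(...):
resp = requests.get(
    "http://localhost:8108/collections/foo/documents/search",
    params={"q": "portishead", "query_by": "name"},  # example values
    headers={"X-TYPESENSE-API-KEY": "xyz"},
)
print(resp.url)  # the exact URL the wrapper was hiding
```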
2021-03-02 06:24
alastairp
I ran into exactly the same problem with another client yesterday. Once I could work out what query it was trying to make, the problem was obvious, but it took ages to get to that point