lucifer: my listens are not submitting through spotify. it shows that i'm listening to them, but none of them were submitted. listens from lastfm submit fine.
Hm, I've had two 503 "Cannot submit listens to queue, please try again later." errors about 15 minutes ago; the rest of my listens seem to be accepted by the server
I understand the OpenAI and Microsoft deal and how it relates to us. Bing uses MB data. Future AIs are likely going to be trained on the data that is in Bing, ergo MB. Imagine ChatGPT fully speaking MBIDs.
* mayhem goes to look at telegram
mayhem: Input validation is a real pita.
alastairp: yeah. looks like it's the length of the recording name which is causing a problem on a messybrainz index
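For context: Postgres btree index entries are capped at roughly 2704 bytes (about a third of a page), so a single over-long recording name can abort the insert that carries it. A minimal reproduction, assuming a simplified stand-in schema rather than the real messybrainz tables:

```python
# Sketch of the failure mode (table and index names are made up; the
# real messybrainz schema differs). Indexing a raw text column lets one
# absurdly long recording name fail every batch it appears in.
import psycopg2

conn = psycopg2.connect("dbname=test")
with conn, conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS submission (recording_name TEXT)")
    cur.execute(
        "CREATE INDEX IF NOT EXISTS submission_name_idx "
        "ON submission (recording_name)"
    )
    # ~3000-byte value: fails with an "index row size ... exceeds btree
    # version 4 maximum 2704" error and rolls back the transaction
    cur.execute(
        "INSERT INTO submission (recording_name) VALUES (%s)", ("x" * 3000,)
    )
```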
alastairp: stopping timescale writer; I'm going to get a message off the queue manually and see if it's the bad one
lucifer: alastairp: mayhem: fixed. queue should clear in ~5 mins.
alastairp: lucifer: oh, thanks :)
alastairp: lucifer: how did you do it? (should I restart ts writer?)
lucifer: alastairp: had removed the bad listen from the queue, and it autodelivered the remaining messages. queue fell from 12k to 7k so far.
lucifer: yup, restarting sounds good
alastairp: lucifer: how did you remove it?
lucifer: rmq admin dashboard
alastairp: admin interface -> get message -> reject requeue false?
lucifer: first requeue to verify, then automatic ack to remove.
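The same peek-then-remove flow can be scripted against the broker. A sketch with pika, assuming a local broker, a non-empty queue, and a made-up queue name (the real connection details differ):

```python
# Sketch of the admin-dashboard dance done with pika instead.
# Queue name and connection params are assumptions, not the real config.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Peek: fetch without acking, inspect, then put it back by rejecting
# with requeue=True.
method, properties, body = channel.basic_get("ts_incoming", auto_ack=False)
print(body[:200])  # verify it is the bad listen
channel.basic_reject(method.delivery_tag, requeue=True)

# Remove: fetch again with auto_ack=True so the broker drops it for good.
method, properties, body = channel.basic_get("ts_incoming", auto_ack=True)
connection.close()
```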
alastairp: got it. I'm writing this down, because I've only done it once before when we were in a similar situation
alastairp: lucifer: that listen is back, the ts writer logs are showing the same error
lucifer: hmm, let me check. i am still logged in
lucifer: queue is going down though. maybe those are old logs?
alastairp: I set --tail 100, so I didn't expect so. but let me look at the queue too
lucifer: (stop/restart doesn't clear container logs)
lucifer: in normal functioning it logs nothing, so possibly old logs.
alastairp: nope, it's definitely crashing again. the timestamp for "Timescale Writer started" is increasing
lucifer: let's wait for the queue to clear or stop going down, i guess
alastairp: sure
alastairp: queue is empty, so I'm unsure about this message now
lucifer: alastairp: there was another bad listen, removed that as well.
alastairp: ah, that's why it's empty :)
lucifer: can confirm there were 2 different listens, because listened_at varies.
alastairp: lucifer: so I guess I see 2 possible fixes here: 1 is to limit the length of string columns that have an index. 2 is to try and catch this kind of exception and mark incoming listens as failed somehow?
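A sketch of what fix 1 could look like: clamp indexed text columns before they reach Postgres. The byte limit and helper name are illustrative assumptions, not existing ListenBrainz code:

```python
# Keep indexed strings safely under the ~2704-byte btree entry cap
# discussed above. The margin and column policy are assumptions.
MAX_INDEXED_BYTES = 2000

def clamp_for_index(value: str, limit: int = MAX_INDEXED_BYTES) -> str:
    """Truncate a string so its UTF-8 encoding fits in a btree entry."""
    encoded = value.encode("utf-8")
    if len(encoded) <= limit:
        return value
    # cut at the byte limit, dropping any split multibyte sequence
    return encoded[:limit].decode("utf-8", errors="ignore")
```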
lucifer: i am not sure why it kept on crashing on the first one but skipped the second one and processed all other listens fine.
lucifer: alastairp: yeah, the plan was to send such erroneous listens to a reject queue.
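A sketch of that idea: when a listen fails validation or insert, publish it to a separate reject queue instead of letting the writer crash. The queue name and connection details are made up for illustration:

```python
# Park bad listens on a reject queue for later inspection rather than
# crash-looping the timescale writer. Names here are hypothetical.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare("ts_reject", durable=True)

def reject_listen(listen: dict, reason: str) -> None:
    """Publish a bad listen to the reject queue with the failure reason."""
    payload = dict(listen, _reject_reason=reason)
    channel.basic_publish(
        exchange="",
        routing_key="ts_reject",
        body=json.dumps(payload),
        properties=pika.BasicProperties(delivery_mode=2),  # persist it
    )
```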
alastairp: right
lucifer: the issue i see with that is we want to do batched inserts, so we would have to reject the entire batch
alastairp: yep. one option is to try a batch; if it fails, fall back to one at a time, then reject the specific one causing the issue
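A sketch of that strategy, with `insert_listens` and `reject_listen` as hypothetical stand-ins for the real writer functions:

```python
# Try the whole batch first; on failure, retry row by row and park only
# the rows that actually fail. The callables are assumptions, not the
# real ListenBrainz writer API.
from typing import Callable

def insert_with_fallback(
    batch: list[dict],
    insert_listens: Callable[[list[dict]], None],
    reject_listen: Callable[[dict, str], None],
) -> None:
    """Batch insert with a one-at-a-time fallback for bad rows."""
    try:
        insert_listens(batch)
        return
    except Exception:
        pass  # something in the batch is bad; isolate it below
    for listen in batch:
        try:
            insert_listens([listen])
        except Exception as exc:
            reject_listen(listen, str(exc))
```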