[listenbrainz-android] dependabot[bot] opened pull request #468 (dev…dependabot/gradle/dev/com.google.devtools.ksp-2.0.10-1.0.24): Bump com.google.devtools.ksp from 2.0.0-1.0.23 to 2.0.10-1.0.24 https://github.com/metabrainz/listenbrainz-andr...
[listenbrainz-android] dependabot[bot] closed pull request #464 (dev…dependabot/gradle/dev/com.google.devtools.ksp-2.0.0-1.0.24): Bump com.google.devtools.ksp from 2.0.0-1.0.23 to 2.0.0-1.0.24 https://github.com/metabrainz/listenbrainz-andr...
pite has quit
lusciouslover has quit
lusciouslover joined the channel
Kladky joined the channel
ericd[m] has quit
tarun joined the channel
tarun has quit
wargreen has quit
[listenbrainz-server] MonkeyDo opened pull request #2956 (brainzplayer-spa…rebased-multi-track-mbid-mapping): LB-1281: Link all listens from the same album https://github.com/metabrainz/listenbrainz-serv...
monkey[m]
Did anyone else just receive a "ListenBrainz Spotify Importer Error" email?
minimal joined the channel
discordbrainz
<lazybookwyrm> Yeah
BobSwift[m]
monkey: Yes, and I thought that was really strange since I don't actually have a spotify account.
For some reason, "Activate both features" is checked, but I know that I never did it since I've never had a spotify account. Was this setting automatically checked as a default at some point? Will it screw something up if I uncheck it (if it allows me to do that)?
nbin joined the channel
I see soundcloud is also enabled, but I don't have a soundcloud account either. I just disabled spotify in the settings.
pite joined the channel
TOPIC: MetaBrainz Community and Development channel | MusicBrainz non-development: #musicbrainz | BookBrainz: #bookbrainz | Channel is logged and not empty as it is bridged to IRC; see https://musicbrainz.org/doc/ChatBrainz for details | Agenda: Reviews, Docker Compose v2 (yvanzo)
monkey[m]
Disabling either won't have any adverse effect. Strange that it was checked for you without your knowledge, but at least it explains the email...
yvanzo
The agenda item I just added is about dropping support for Docker Compose v1 for MB mirrors (with announcement for November?), and switching to Docker Compose v2 for development setup in all MetaBrainz projects.
(v1 has reached its EOL already)
BobSwift[m]
monkey: I wonder if it has something to do with the server being "test.listenbrainz.org", or does that use the same user database for the account settings?
yellowhatpro[m]: > <@yellowhatpro:matrix.org> At the other end, the listener, which constantly listens to the channel, I am deliberately sleeping for some time (5 sec currently)
and remind me, after sleeping for 5s, it polls all edits/edit notes made since then?
yellowhatpro[m]
bitmap[m]: Alrighty, will remove it from the query
yellowhatpro[m]: > <@yellowhatpro:matrix.org> At the other end, the listener, which constantly listens to the channel, I am deliberately sleeping for some time (5 sec currently)
Regarding this: with the sleep, we can handle the rate limiting on the archival side. That means no matter how fast the URLs are being added, it will only archive (or try to archive) at the interval we decide.
bitmap[m]: the polling part is different, it is independent of the network-requester part
bitmap[m]
ah sorry, this is in the listener
yellowhatpro[m]
After 5 sec, it will pick the front of the channel, and try archiving it.
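For illustration, a minimal sketch of what such a listener loop could look like, assuming a tokio mpsc channel and a hypothetical `archive_url` helper (none of these names are from the actual codebase):

```rust
use std::time::Duration;
use tokio::sync::mpsc;

// Hypothetical stand-in for the real Wayback Machine request.
async fn archive_url(url: &str) -> Result<(), Box<dyn std::error::Error>> {
    println!("would archive {url}");
    Ok(())
}

// Drain the channel one URL at a time, sleeping 5 s between
// attempts, so the archival rate stays fixed no matter how fast
// URLs are pushed into the channel.
async fn listener(mut rx: mpsc::Receiver<String>) {
    while let Some(url) = rx.recv().await {
        if let Err(e) = archive_url(&url).await {
            eprintln!("archiving {url} failed: {e}");
        }
        tokio::time::sleep(Duration::from_secs(5)).await;
    }
}
```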
yvanzo[m]: The time it's being inserted in the `internet_archive_urls`
bitmap[m]
makes sense, so that includes all `status` that the previously-inserted URL has. (maybe it makes sense to ignore Failed ones that were processed longer than 24 hours ago though?)
yellowhatpro[m]
yeah, failed ones are being killed
by the retry task
yvanzo[m]
what do you mean by killed?
(since it isn’t a process)
yellowhatpro[m]
Ahh, I mean deleted
sowwy
just got excited
The ones with status Error, if they were permanent errors, are also getting deleted:
only when the retry/cleanup task starts again, iterates over the whole internet_archive_urls table, and spots a Failed row does it delete that row
bitmap[m]
yellowhatpro[m]: so how long would you estimate a `Failed` event would stick around before it's deleted?
yellowhatpro[m]
Less than a day is what I am assuming right now, since the retry task wakes every 24hr
bitmap[m]
ok, just wondering if it makes sense to keep them around for longer in case sentry is down or something
yellowhatpro[m]
no issue, I was going to ask whether, since we have Sentry, we still need to store them for long, but I guess I got my answer hehe
So before deleting any Failed entry, I can add a check for some duration X: if the entry is older than that, we will delete it; otherwise, we will keep it
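As a sketch of that check, assuming a Postgres table with `status` and `created_at` columns queried via sqlx (the column names and the 7-day cutoff are guesses, not the real schema):

```rust
use sqlx::PgPool;

// Delete only Failed rows older than some cutoff, so recent
// failures stick around long enough to be inspected even if
// Sentry were down.
async fn delete_stale_failed(pool: &PgPool) -> Result<u64, sqlx::Error> {
    let result = sqlx::query(
        "DELETE FROM internet_archive_urls
         WHERE status = 'Failed'
           AND created_at < now() - interval '7 days'",
    )
    .execute(pool)
    .await?;
    Ok(result.rows_affected())
}
```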
bitmap[m]
sounds good to me
so URLs are only retried once per 24 hr, correct?
yellowhatpro[m]
yeah, the retry task will spawn every 24 hr, iterate the internet_archive_urls table, and check what it can delete or retry
the ones we can retry are sent to the channel
where they are enqueued
and when their time comes, they go to the listener
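Roughly, the daily retry pass described here might look like the following (the `fetch_retryable` helper and all other names are illustrative, not from the actual code):

```rust
use std::time::Duration;
use tokio::sync::mpsc;

// Placeholder: would query internet_archive_urls for rows that
// still have retries left.
async fn fetch_retryable() -> Vec<String> {
    Vec::new()
}

// Wake once every 24 h, scan the table, and re-enqueue whatever
// is retryable; the listener then archives them at its own pace.
async fn retry_task(tx: mpsc::Sender<String>) {
    let mut tick = tokio::time::interval(Duration::from_secs(24 * 60 * 60));
    loop {
        tick.tick().await; // first tick fires immediately
        for url in fetch_retryable().await {
            let _ = tx.send(url).await;
        }
    }
}
```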
bitmap[m]
and how many attempts does it make currently?
yvanzo[m]
how do you control the time the retry task is spawning?
bitmap[m]
just asking because if a URL is on its third attempt, that would be three days later, but `should_insert_url_to_internet_archive_urls` only checks for URLs within the last day?
yellowhatpro[m]
bitmap[m]: `should_insert_url_to_internet_archive_urls` is only for the case when I am polling from `edit_data` and `edit_notes`
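For readers following along, a guess at the shape of that polling-side check, based only on what is said here (that it looks at a 24-hour window); this is not the actual implementation:

```rust
use sqlx::PgPool;

// Skip a URL if the same URL was already inserted into
// internet_archive_urls within the last 24 hours.
async fn should_insert_url_to_internet_archive_urls(
    pool: &PgPool,
    url: &str,
) -> Result<bool, sqlx::Error> {
    let recent: Option<(i32,)> = sqlx::query_as(
        "SELECT 1 FROM internet_archive_urls
         WHERE url = $1
           AND created_at > now() - interval '24 hours'
         LIMIT 1",
    )
    .bind(url)
    .fetch_optional(pool)
    .await?;
    Ok(recent.is_none())
}
```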
Retrying all at once may hit network issues or rate limits, so that isn't optimal, but it should allow testing it at least. :)
bitmap[m]
yellowhatpro[m]: not sure I follow -- I'm trying to understand what happens if a URL older than 24 hrs still has pending retries left, and `should_insert_url_to_internet_archive_urls` doesn't see it because of the 24 hour window
yellowhatpro[m]
yvanzo[m]: oh, for testing actually, I take only 20 rows from internet_archive_urls
yvanzo[m]
yellowhatpro: I mean testing in deployment.
yellowhatpro[m]
bitmap[m]: the method `should_insert_url_to_internet_archive_urls` is only being used while polling:
If a URL older than 24 hrs still has pending retries, it will be pushed to internet_archive_urls again. The `should_insert_url_to_internet_archive_urls` method is not checked in this flow
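A possible shape for selecting those re-push candidates, with `retry_count` and the limit of three attempts taken from bitmap's question above (both the column name and the status value are assumptions about the schema):

```rust
use sqlx::PgPool;

// Select Error rows that still have attempts left; these get
// re-enqueued regardless of how old they are.
async fn fetch_pending_retries(pool: &PgPool) -> Result<Vec<String>, sqlx::Error> {
    let rows: Vec<(String,)> = sqlx::query_as(
        "SELECT url FROM internet_archive_urls
         WHERE status = 'Error'
           AND retry_count < 3",
    )
    .fetch_all(pool)
    .await?;
    Ok(rows.into_iter().map(|(url,)| url).collect())
}
```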
<yvanzo[m]> "yellowhatpro: I mean testing..." <- Umm, sorry, I couldn't understand this. What does retrying all at once while testing in deployment mean?
yvanzo[m]
yellowhatpro: It will be deployed to work with test.musicbrainz.org in the beginning.
bitmap[m]
yellowhatpro[m]: > <@yellowhatpro:matrix.org> the method `should_insert_url_to_internet_archive_urls` is only being used while polling:... (full message at <https://matrix.chatbrainz.org/_matrix/media/v3/...>)
yvanzo[m]
Retrying all the URLs that need to be retried, one after another, isn’t optimal because it might happen at a time when the API isn’t available or the service has already reached its rate limit.