1 month, 3 months, 9 months, 2 years 3 months, 6 years 9 months, 20 years 3 months.
we allow a maximum of 10 passes but as you can see even 6 or 7 passes will cause a full table search.
i think we should not search more than 9 months in one pass.
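A minimal sketch of what such an expanding-window search could look like, assuming the total span simply triples each pass to match the sizes listed above; query_listens, fetch_recent_listens and the 30-day month are illustrative stand-ins, not the actual implementation:

```python
from datetime import datetime, timedelta

MAX_PASSES = 10
MONTH = timedelta(days=30)  # rough month, for illustration only


def query_listens(user_id: int, from_ts: datetime, to_ts: datetime) -> list:
    """Stub standing in for the real database lookup."""
    return []


def fetch_recent_listens(user_id: int, to_ts: datetime, limit: int = 25) -> list:
    """Expanding-window search: each pass triples the total span searched
    back from to_ts (1, 3, 9, 27, ... months) and queries only the newly
    uncovered slice, stopping once enough listens have been found."""
    listens: list = []
    searched_until = to_ts      # lower bound of the range already covered
    span = MONTH                # total span after the current pass
    for _ in range(MAX_PASSES):
        from_ts = to_ts - span
        listens += query_listens(user_id, from_ts, searched_until)
        if len(listens) >= limit:
            break
        searched_until = from_ts
        span *= 3               # 1 -> 3 -> 9 -> 27 months, as listed above
    return listens[:limit]
```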
BrainzGit
[musicbrainz-server] reosarevok opened pull request #3517 (master…MBS-13982): MBS-13982: Wrap overlong release and release group titles in tables https://github.com/metabrainz/musicbrainz-serve...
mayhem[m]
<lucifer[m]> "i think we should not search..." <- seems senisible...
fettuccinae[m] joined the channel
fettuccinae[m]
<lucifer[m]> "i think we should not search..." <- I was trying to implement in such a way that we initially load data only for 30 days and then render a button which does the normal search.
If we limit each pass for 9 months, users with a gaps in their listening won't be able to fetch their older listens
* I was trying to implement in such a way that we initially load data only for 30 days and then render a button which does the normal search.
If we limit each pass for 9 months, users with a gaps in their listens won't be able to fetch their older listens
* I was trying to implement in such a way that we initially load data only for 30 days and then render a button which does the normal search.
If we limit each pass for 9 months, users with gaps of more than 9 months in their listens won't be able to fetch their older listens
lucifer[m]
fettuccinae[m]: > <@fettuccinae:matrix.org> I was trying to implement it in such a way that we initially load data only for 30 days and then render a button which does the normal search.
> If we limit each pass to 9 months, users with gaps of more than 9 months in their listens won't be able to fetch their older listens.
you would need to call the API again with the new from_ts/to_ts.
to load data for users with more gaps.
i think we should return a next/prev link in the API response to let the caller know the exact timestamps that we have already searched for and avoid redundant searches.
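A response shape along these lines could carry that information; every field name below is a hypothetical sketch based on this discussion, not the existing API:

```python
# Hypothetical response shape surfacing the range the server already searched.
response = {
    "payload": {
        "count": 2,
        "listens": [
            {"listened_at": 1713990000, "track_metadata": {"track_name": "..."}},
            {"listened_at": 1713980000, "track_metadata": {"track_name": "..."}},
        ],
    },
    "links": {
        # newest timestamp covered by this search: page forward from here
        "prev": {"from_ts": 1714000000},
        # oldest timestamp covered by this search: page backward from here,
        # even if no listen that old was actually returned
        "next": {"to_ts": 1690000000},
    },
}
```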
fettuccinae[m]
lucifer[m]: Ah yes, and if they have multiple gaps, they can just call the API again. I'll limit the search window for each pass.
lucifer[m]
for now though, i think you can just add a flag to fetch listens, for example from_profile, and set it to true for the initial load in the profile view.
when the frontend makes the API call to load more listens, it will just use the existing implementation and do multiple passes spanning the entire table.
when we have added prev/next links in the API response to aid navigation we can reduce the window size too.
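A rough sketch of that flag, reusing the hypothetical query_listens/fetch_recent_listens helpers from the earlier sketch; from_profile and the 30-day cutoff come from the messages above, everything else is assumed:

```python
from datetime import datetime, timedelta, timezone


def get_listens(user_id: int, from_ts=None, to_ts=None,
                from_profile: bool = False, limit: int = 25) -> list:
    """If from_profile is set (the initial profile-page load), only look at
    the last 30 days; otherwise fall back to the existing multi-pass search
    that may span the whole table."""
    if from_profile and from_ts is None and to_ts is None:
        to_ts = datetime.now(timezone.utc)
        return query_listens(user_id, to_ts - timedelta(days=30), to_ts)[:limit]
    return fetch_recent_listens(user_id, to_ts or datetime.now(timezone.utc), limit)
```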
fettuccinae[m]
lucifer[m]: I don't think we'd need a flag, because when the initial call is made, it's made without from_ts and to_ts.
<lucifer[m]> "i think we should return a next..." <- the frontend does check the last listen's timestamp and makes the call for the next/more listens.
<lucifer[m]> "when the frontend makes the..." <- I was thinking if we could limit the window size in each pass, we could render the listens of first 9 months, then frontend could use the listened_at_ts of the last listen and use it as from_ts to make next api call.
* I was thinking if we could limit the window size in each pass, we could render the listens of first 9 months initially, if the listens are less than 25, frontend could use the listened_at_ts of the last listen and use it as from_ts to make the next api call for listens in next window.
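From the client side, that loop might look roughly like this; the endpoint and its max_ts/count parameters are assumed from the public listens API and stand in for the from_ts/to_ts mentioned here:

```python
import requests

API_ROOT = "https://api.listenbrainz.org"  # assumed base URL, for illustration


def load_listens(user_name: str, pages: int = 3, limit: int = 25) -> list:
    """Naive paging: keep requesting older listens, using the timestamp of
    the oldest listen received so far as the upper bound of the next call."""
    listens: list = []
    oldest_ts = None
    for _ in range(pages):
        params = {"count": limit}
        if oldest_ts is not None:
            params["max_ts"] = oldest_ts
        resp = requests.get(f"{API_ROOT}/1/user/{user_name}/listens", params=params)
        resp.raise_for_status()
        page = resp.json()["payload"]["listens"]
        if not page:
            break
        listens.extend(page)
        oldest_ts = page[-1]["listened_at"]  # listens come back newest first
    return listens
```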
lucifer[m]
@fettuccinae:matrix.org: yes but the backend has also likely searched for more time ranges.
for instance, think of a user with gaps in their data: they have a couple of listens in the last month and then no listens for a year. the frontend will make the API call with the timestamp of the listen from last month,
whereas the backend has already searched the last 9 months and didn't find any data. i mean we should surface this information in the API for pagination, so that the frontend can avoid making redundant queries.
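If the response surfaced that boundary (the hypothetical links.next field sketched earlier), the client could page from it instead of from the oldest listen it received, skipping ranges the backend already covered:

```python
def next_page_timestamp(response: dict, oldest_listened_at: int) -> int:
    """Pick the timestamp to page back from. Prefer the server-provided
    'already searched up to here' boundary (hypothetical links.next.to_ts),
    which can be far older than the oldest returned listen, e.g. after the
    backend searched 9 empty months; otherwise fall back to the oldest listen."""
    return response.get("links", {}).get("next", {}).get("to_ts", oldest_listened_at)
```

The fallback keeps today's behaviour whenever the server does not send the boundary.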
julian45[m]
lucifer: do you know which redirect URIs are allowed for the jira auth integration with MB? or is that potentially not checked?
lucifer[m]
julian45: on BrainzBot side?
julian45[m]
On the side where we allow folks to log into Jira w/ their MB account
lucifer[m]
i see, yvanzo or bitmap would know that.
it's not using the new MeB OAuth provider, but the MB.org one.
yvanzo[m]
julian45: Good catch
Adding "update the callback URI" to the migration steps.
Well, there is no update to do once the migration is over.
julian45[m]
hopefully the migration will not break the jira-side config and require updates to the prod config on the MB side (like a randomly generated string in jira's own callback URI or something), but if we want to test login before cutting tickets.MeB over to point to the new server, it would be good for the new instance's domain name to be allowed.
ditto for the github login option as well
yvanzo[m]
Tickets will go offline 30 minutes from now.
aerozol[m]
Great ticket offline pic :)
monkey[m]
<lucifer[m]> "monkey: what issue are you..." <- I figured most of my issues out, thanks though!