alastairp: the timeout thing might be doable, not completely sure though. so like store the latest message in the ts writer; if it times out and is redelivered, do something about it? or do you mean rabbitmq could automatically do this for us?
ok, I'm forwarding a schema change question to support@ then
Works for me
atj: akshaaatt yvanzo bitmap alastairp monkey outsidecontext and anyone with a @meb email address. If you'd like a metabrainz dropbox account with loads of storage, go ahead and use your @meb email address to sign up for the dropbox account and you'll be automatically added to the team.
Sounds cool mayhem!
yeah, go for it. our friend at dropbox gave us a free business account.
lucifer: hmm, adding some extra checks via tswriter sounds like trouble and complexity
I was just thinking out loud that it'd be great if rmq had a "you're _about_ to be disconnected" callback, rather than just "you've been disconnected"
any thoughts on the comment that I added on LB#2009?
we have a 10kb per listen upper limit, but our check is that the average size of all listens in the payload is less than 10kb, not the full message
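a minimal sketch of the difference between the two checks being discussed. the constant name, payload shape, and function names here are illustrative assumptions, not the actual ListenBrainz validation code:

```python
import json

MAX_LISTEN_SIZE = 10 * 1024  # assumed 10 KB per-listen limit

def average_size_ok(listens):
    # current-style check: only the *average* listen size is bounded,
    # so one oversized listen can hide among many small ones
    total = sum(len(json.dumps(listen)) for listen in listens)
    return total / len(listens) <= MAX_LISTEN_SIZE

def per_listen_ok(listens):
    # stricter check: every individual listen must fit under the limit
    return all(len(json.dumps(listen)) <= MAX_LISTEN_SIZE for listen in listens)
```

e.g. a payload with one 15 KB listen padded out by a handful of tiny ones passes the average check but fails the per-listen one.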
re size of data, yeah pg_column_size may work. probably use it to find the largest listen and then take the actual text length of it, because pg might be compressing the jsonb, so pg_column_size could report less than the real size.
regarding the check, i am not sure why it was added. is it there to prevent overwhelming something api side or db side?
or maybe we can estimate
seems a weird check
"the average size of all listens in the payload is less than 10kb"
I think this may have come about because of our changes in technologies
if the former, then a per-document size limit makes sense, but if the latter, then a per-listen size limit is sensible.
that check may have been added before we started using rabbitmq (and so before we could have multiple listens per message)
so multiple listens existed at that time too. probably done that way for easier checking indeed
alastairp: `select *, pg_column_size(data) AS size from listen ORDER BY size DESC LIMIT 5;` to find largest listen. then `select length(data::text) from listen where listened_at = 1651504402 AND user_id = 8741;` for its length. length is 9331.
commit message by mayhem: "That is probably enough sanity checking for this minute."
I mean, at the time it probably was
lucifer: yes, multiple listens in an API payload existed. but I was talking about what happens after the API endpoint
I think that at that time we may have split the payload up into messages of 1 listen each
alastairp: that's what the SO answer from yesterday did?
lucifer: yes, the answer from yesterday suggested a new format for source dependencies (PEP 508 I understand), but that's still different from what requirements.txt needs
i see, makes sense to rewrite then
so... we either need to manage these lists manually, convert automatically, or switch to poetry/pyproject.toml for local development so that we can reuse the same file for local dev and remote installs
agreed that automatically converting when inserting into setup.py is probably the easiest idea for now - and it's only temporary until new mbdata is released, I guess
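a rough sketch of what that automatic conversion could look like. the function name, the regex, and the example URLs are illustrative assumptions, not the actual conversion used in the repo: it rewrites an old-style requirements.txt VCS line (`git+https://...#egg=name`) into the PEP 508 direct-reference form (`name @ git+https://...`) that setuptools' install_requires accepts:

```python
import re

def to_pep508(req_line):
    # Old-style VCS requirement lines carry the package name in a
    # "#egg=" fragment; PEP 508 puts the name first, then "@", then
    # the URL. Plain pinned requirements pass through unchanged.
    m = re.match(r"(.+)#egg=([A-Za-z0-9._-]+)$", req_line)
    if m:
        url, name = m.group(1), m.group(2)
        return f"{name} @ {url}"
    return req_line
```

plain `pkg==1.2.3` lines come back untouched, so the same list can feed both files until the new mbdata release makes this unnecessary.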
if poetry/pyproject.toml handle this better then it makes sense to migrate to those in the future.
indeed, the current situation should be temporary
no. i refuse. you can't call a foss project "poetry" that's just .. how even
yeah, though I'm just worried that we'll move to a new dependency tool just as a new "better" one gets released
outsidecontext, zas: FIFO seems to work for unix, you might want to take a look and review the bare protocol and ideas behind it: https://github.com/skelly37/pipethon. In the evening I'll take care of doing the same but with Windows API and then it's ready to be implemented in Picard.
skelly37: great work getting this far so soon!
of course, as they say the first 90% is easy, it's the second 90% that takes most of the time :)
pipes work on windows?
alastairp: thanks :)
atj: "but with windows API" - I guess not
ansh: by the way, I didn't confirm with you the other day, but as mayhem pointed out starting early is fine. let me know when you want to discuss this. I was chatting with monkey and we think that it makes sense that I work with you directly when you're working on CB parts, and he works with you when doing BB parts. perhaps the 3 of us could get together in the next week or so and go over your plan again
atj: It requires pywin32 module, os.mkfifo() is unfortunately unix only.
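a minimal sketch of the platform split being described. this is not the pipethon implementation, just an illustration: `os.mkfifo()` only exists on POSIX systems, and the Windows branch (named pipes via pywin32) is left unimplemented here, matching the part that still needs doing:

```python
import os
import stat
import sys

def make_fifo(path):
    # os.mkfifo() is POSIX-only; on Windows the equivalent would be a
    # named pipe created through the pywin32 bindings. This sketch only
    # guards the POSIX path.
    if sys.platform == "win32":
        raise NotImplementedError("use pywin32 named pipes on Windows")
    os.mkfifo(path)
    return path
```

on unix the resulting file is a real FIFO, which you can confirm with `stat.S_ISFIFO(os.stat(path).st_mode)`.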