hi jmp_music_, yes, I'm around. how about in 1 hour?
white_snack joined the channel
white_shadow has quit
outsidecontext
rdswift: are you around? We are preparing the Picard 2.4 release today. What's needed for the docs site? I think you had the changes already lined up, right?
prabal
Mr_Monkey: https://test.bookbrainz.org/collection/caf53b17... you made me a collaborator on this collection. I should have an option to remove myself as a collaborator. I am thinking of adding a `remove yourself as collaborator` button on this page.
I am confused about whether I should re-use the `delete-collection-modal` here or make a new `remove-collaborator` modal.
https://github.com/bookbrainz/bookbrainz-site/b... . The structure of `remove-collaborator-modal` will be pretty much the same as `delete-modal`, which makes me think I should re-use the modal, but there are a few changes - postUrl, body-text, header, button-text - which will make reusing it a little messy
what do you think?
Mr_Monkey
prabal: I think you could refactor to reuse the same modal. For the modal body, you could pass the contents to the component using `{children}` inside the modal component, so you'd call it like this: `<RefactoredModal> <div>the modal body</div></RefactoredModal>`
That could save you a bit of trouble. The rest of the elements could be passed as props without a problem, I think
prabal
yeahh okayy
iliekcomputers
ishaanshah: awesome!
when in doubt, create your own component :D
ishaanshah
Yeah I just looked up the source code and reimplemented it with slight changes
and I'll let you and bitmap finish this one off -- the DB on paco is already restarted with the right setting, but bitmap may need to do more work on it.
abhinavohri joined the channel
sumedh joined the channel
white_snack has quit
white_shadow joined the channel
white_shadow has quit
abhinavohri
iliekcomputers: Can you help me with the `url_for` method? For my test, I want it to not return a trailing slash at the end of the URL. How can I do that?
iliekcomputers
just put in the string directly, instead of using url_for
abhinavohri
What is `self.assertContext('user', self.user)` doing?
I am not using the `url_for` method, so should I keep it or omit it?
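For context, `assertContext` comes from Flask-Testing and asserts that the rendered template received a given context variable with a given value. A minimal sketch of what "put in the string directly" might look like in such a test case; the class name, route and `self.user` attributes are illustrative, not actual ListenBrainz code:

```python
# Hypothetical Flask-Testing test case; the route, user fields and the
# create_app()/setUp() plumbing are omitted or made up for illustration.
from flask_testing import TestCase


class ProfileViewTestCase(TestCase):

    def test_profile_page(self):
        # Hard-code the URL string instead of building it with url_for(),
        # so the test controls exactly whether a trailing slash is present.
        response = self.client.get('/user/%s' % self.user.musicbrainz_id)
        self.assert200(response)
        # assertContext checks that the template was rendered with a context
        # variable named 'user' whose value equals self.user.
        self.assertContext('user', self.user)
```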
iliekcomputers
ishaanshah: will have to cancel our meeting today. Apologies!
rdswift
outsidecontext, zas: I've been collecting the new version documentation changes in a separate branch, so all I should need to do is rebase it and merge.
zas
ok, thanks, we are preparing the release, binaries are built, website updated (but not deployed yet)
rdswift
Just waiting for the release before doing that, but I can start a pr for it now. Thanks for the "heads up".
alastairp
iliekcomputers: hey, I guess you're working and then have your interview? I had some thoughts about spark that I wanted to run past you
got some time around meeting-time?
ruaok
it's an honor-system drinks station. each drink is 2€. some chap was there swapping out cooling packs.
only in Germany.
alastairp
ruaok: did you see the recent photos of the mini wine windows in italy?
iliekcomputers
alastairp: probably not today, is tomorrow ok?
alastairp
any day is fine
ruaok
yes, resurrected because of covid? great!
alastairp
let's talk tomorrow. thanks
iliekcomputers
sounds good.
pristine___ probably has more context on the collaborative filtering stuff re spark btw
alastairp
it's more about the workflow, rather than tools/algorithms
iliekcomputers
ok. cool. that's probably me then. :P
ruaok
iliekcomputers: "[2020-08-10 14:57:44,631] DEBUG in request_consumer: Pushing result to RabbitMQ..."
is that stats being pushed? it takes a looong time....
iliekcomputers
Which stat is it?
Note that because spark is lazy, it only does the actual computations when it needs to start pushing stuff, which means the logs aren't exact. It's calculating the stat after it logs the pushing message.
Still don't expect anything to take hours though...
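A rough illustration of the laziness point, not the actual ListenBrainz job; the path and column names are made up:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stats-example").getOrCreate()
listens_df = spark.read.parquet("hdfs:///data/listens")  # illustrative path

# This returns immediately: groupBy/agg only build a query plan.
artist_counts = listens_df.groupBy("artist_name") \
                          .agg(F.count("*").alias("listen_count"))

print("Pushing result to RabbitMQ...")  # logged before any real work happens

# Only an action (collect(), toLocalIterator(), a write, ...) triggers the
# computation, so the expensive part runs *after* the log line above.
for row in artist_counts.toLocalIterator():
    pass  # push each row to RabbitMQ here
```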
ruaok
I dunno. I can't tell from the logs.
and I requested some recs, but it's been stuck like this for a while.
alastairp
hey, so this is actually my question. maybe I'll open the discussion now, and we can continue it whenever
I understand that after you send a message to compute something, it's sent back to the listenbrainz side over rabbitmq
I was just thinking through this - is there a reason why we don't save it to hdfs and just return a filename? and then have listenbrainz download it from hdfs and do whatever with it
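Roughly what the spark side of that idea might look like, assuming pika for RabbitMQ; the hostnames, queue name, result path and `result_df` are all assumptions for illustration:

```python
import json

import pika

# result_df is assumed to be a DataFrame produced by an earlier Spark job.
RESULT_PATH = "hdfs:///results/recommendations-2020-08-10.parquet"

# Write the full result to HDFS instead of serialising it into the message.
result_df.write.mode("overwrite").parquet(RESULT_PATH)

# The RabbitMQ message now only carries a pointer to the data.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq.example"))
channel = connection.channel()
channel.queue_declare(queue="spark_results", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="spark_results",
    body=json.dumps({"type": "recommendations", "path": RESULT_PATH}),
)
connection.close()
```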
ruaok
alastairp: they are hosted in two different places.
consider the machines at hetzner to be "permanent machines". We can expect them to be there at all times.
alastairp
right, so that would require opening hdfs up on the public internet?
ruaok
whereas the spark cluster is more flexible. right now it's stored on 4 machines (dirt cheap, hardware that someone else has used before) at hetzner.
if someone were to offer us a better cluster for free, we'd move there.
MFCR_ColbyRay joined the channel
I'd love to have a 16 node cluster with loads of disk space, but alas, we have a tiny 4 node cluster.
alastairp
I'm not sure that that strictly discounts my suggestion, but it seems likely that it introduces additional complexity to the idea
ruaok
this arrangement allows us to pass data back and forth using a mechanism we already rely on with minimal config exceptions.
it doesn't... yet.
the spark cluster is considered "disposable" and "batch oriented".
whereas our production servers are considered to be "stable" and "per request fast".
alastairp
right. it was just a thought that I had after looking through this part of the code, and after remembering these types of discussions happening a few times before (e.g. "does all of the data that we want to return fit inside rabbitmq?")
ruaok
so, results being stored in PG for fast response to the user.
alastairp
sure, I'm not suggesting removing results from pg
ruaok
loads of discussions, yes.
alastairp
the idea would be for the pg writer to request from hdfs, and then write to pg. instead of sending the results directly back in a rabbitmq message
ruaok
and it's a shitty balance between what I am willing to open our wallets for and what we need to work extra for.
alastairp
instead, rabbitmq would just be a signaling mechanism
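A sketch of the ListenBrainz-side counterpart under that scheme, where RabbitMQ only signals "the data is ready at this path" and the payload is fetched from HDFS (here via the `hdfs` client library); the connection details, queue name and the Postgres step are assumptions:

```python
import json

import pika
from hdfs import InsecureClient

# WebHDFS endpoint of the namenode; illustrative host and port.
hdfs_client = InsecureClient("http://hdfs-namenode.example:9870")


def on_message(channel, method, properties, body):
    message = json.loads(body)
    # Fetch the actual result from HDFS instead of reading it out of the message.
    hdfs_client.download(message["path"], "/tmp/result", overwrite=True)
    # ... load /tmp/result here and write it to Postgres ...
    channel.basic_ack(delivery_tag=method.delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq.example"))
channel = connection.channel()
channel.queue_declare(queue="spark_results", durable=True)
channel.basic_consume(queue="spark_results", on_message_callback=on_message)
channel.start_consuming()
```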
ruaok
I see where you're going with that.
except docker. docker swarm in particular.
it's... special.
if you publish a port from a service, then docker goes and opens those ports for that server to the WORLD.
and you CANNOT say don't do that.
alastairp
where does rabbitmq live? it's the main mb cluster, and is publicly accessible for spark to connect to?
right, got it
ruaok
hetzner. lemmy has an opening that allows the spark leader to connect to rabbitmq, but lemmy only allows that from that one IP of the leader.