afair, the existing query could be simplified to this
the other issue with the query in general was that it showed me a lot of re-releases and anniversary releases.
so probably better to use release_first_release_date or recording_first_release_date tables instead.
or maybe check it on release group level if possible?
mayhem
I'll have to play with that, since I didn't think the first_release tables meshed really well with the rest of our canonical data.
lucifer
ah i see.
mayhem
post query checking might in fact give better results, but I can have a look.
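(A minimal sketch of the post-query checking idea being discussed: keep only the earliest-dated release per release group so re-releases and anniversary editions are dropped. The row shape and MBIDs here are hypothetical, not the actual canonical-data schema.)

```python
# Hypothetical rows as (release_group_mbid, release_mbid, first_release_date);
# the real columns would come from the canonical data / first_release tables.
rows = [
    ("rg-1", "rel-a", "1969-09-26"),  # original release
    ("rg-1", "rel-b", "2019-09-27"),  # anniversary re-release
    ("rg-2", "rel-c", "1997-05-21"),
]

def earliest_per_release_group(rows):
    """Post-query filter: keep only the earliest-dated release in each group.
    ISO date strings compare correctly as plain strings."""
    best = {}
    for rg, rel, date in rows:
        if rg not in best or date < best[rg][1]:
            best[rg] = (rel, date)
    return {rg: rel for rg, (rel, _) in best.items()}

print(earliest_per_release_group(rows))
```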
lucifer
that's the only issue i can remember for now.
mayhem
once I get that query done, I'll work on the API endpoint that fetches the data for all users (not a specific user) -- that still needs doing, yes?
lucifer
yes
mayhem
ok, cool. thanks for catching me up on this.
next up, I was thinking about how we manage cron entries.
we stagger them by time, but I wonder if we can just do a roll-up action called "nightly" that does all the nightly tasks in a row so that we have less dead time during the night.
might be tricky since some things get queued and some block, but I think that will be my approach to keep generating models every night without having it take quite so much time.
I'll play with that as well.
lucifer
sure sounds good.
spark processes only 1 request at a time anyway so you could enqueue as much as you want and it'll catch up at some point.
mayhem
exactly. I just need to make sure that the dumps and other non-spark tasks don't interfere. but we can also start dumps earlier. lots of options, but I think that will be my approach to improve that PR and get it out for testing.
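(A sketch of the "nightly" roll-up entry point under discussion, using hypothetical task functions; the real cron tasks, dumps, and spark requests differ. The idea: run tasks back to back instead of staggering them by time, with one failing task not stopping the rest.)

```python
completed = []

def dump_data():
    # stand-in for a blocking, non-spark task
    completed.append("dump")

def request_model_build():
    # stand-in for a task that only enqueues a request; spark processes
    # one request at a time and catches up on its own
    completed.append("model")

NIGHTLY_TASKS = [dump_data, request_model_build]

def run_nightly():
    """Run all nightly tasks sequentially so there is less dead time
    between staggered cron slots; log failures and continue."""
    for task in NIGHTLY_TASKS:
        try:
            task()
        except Exception as exc:
            print(f"{task.__name__} failed: {exc}")

run_nightly()
```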
lucifer
agreed.
alastairp
morning
ansh: did you just start up BB on wolf?
ansh
yes
alastairp
Pratha-Fish: hi, good afternoon. I'll take a look at your results later today and see if we can work out what the next possible step is. I think it might be a good idea to do an investigation into the items for which canonical mbid and mapping lookup are different
ansh: no problem, we just got a notification that all of the processes just crashed. I'm not sure what this behaviour is
Pratha-Fish
alastairp: sounds good
I have exported a set of items for which canonical MBID and mapping lookups are different too
alastairp
that's perfect
lucifer: mayhem: I recall a debug method for the mapper which provides more information about matches... am I remembering correctly?
monkey
alastairp: I think the zombie procs are due to using nodemon and webpack to rebuild sources and also restart the server upon changes
Code reloading inside Docker generates a lot of zombie processes #881
monkey
Ahaa
alastairp
yes, all of the zombies have disappeared now
so, no clear answer to what might help this. perhaps it's related to signals not being passed to child processes correctly. unclear if this is explicitly a problem inside docker, or if it also happens on regular linux?
we could run with an init process to try and help this issue described in 882, but I guess the other underlying issue is that nodemon is trying to restart _its_ child processes on filechange
not specifically a signal to nodemon
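(The zombie behaviour being discussed is easy to reproduce outside Docker too: a child process that has exited stays in state 'Z' until its parent reaps it with wait(), which is what an init process like the one from issue 882 would do for orphans. A minimal Linux-only demonstration, assuming /proc is available:)

```python
import subprocess
import time

# Start a child, terminate it, but do not reap it yet.
child = subprocess.Popen(["sleep", "60"])
child.terminate()          # child exits...
time.sleep(0.5)            # ...but nobody has called wait() on it

# Field 3 of /proc/<pid>/stat is the process state; an exited,
# unreaped child shows up as 'Z' (zombie).
with open(f"/proc/{child.pid}/stat") as f:
    state = f.read().split()[2]
print(state)

child.wait()               # reaping the child clears the zombie entry
```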
monkey
> Nodemon kills the child process on rs and on SIGINT. However when it restarts due to file changes, it does not kill the child process.
alastairp
and these issues are very old too - unless we're explicitly running an old nodemon, it seems that they may have changed all of this code a few times since
monkey
I just checked, we're on a recent version of nodemon (2.0.2, current version is 2.0.18)
alastairp: what's the status of frank to flobot migration? Do you need help on it?
alastairp
zas: sorry, I didn't get back in contact with you last week
acousticbrainz.org is now running on flobot, mayhem did the DNS switch for me
frank is still running ftp.acousticbrainz.org, but now that we are putting dumps in aretha I think that we can just remove this domain and update the links
zas
please do, so I can retire frank and order rex (rudi's clone / second part of new gateways). I'll be (very) busy this week and next one, moving to a new house, but I hope I can work on the new gateways' second part in July and test & validate redundancy. My goal is to replace kiki & herb asap.
alastairp
yes, I didn't bother you too much because I knew that you were moving, maybe we can coordinate for a few hours when you are free to shut down/cancel frank? (you tell me when you are free)
mayhem
i can help with that.
alastairp
ok, fine. mayhem, let's coordinate tomorrow perhaps then? otherwise I don't think we need zas for anything else
mayhem
sure.
I know zas' pain all too well. my turn to help out.
zas
mayhem: thanks :) I'll be more or less around this week though, until Saturday, then serious stuff is starting, final sprint should end on 14/15th (move will/should be completed). Extra unexpected stuff happened, today I'm trying to sort out a Brazil-ish situation with French National Education / schools / kids stuff... (in short, someone somewhere didn't do his job, and my son is not registered for next year school as he should have).
CatQuest
what is a Brazil-ish situation? are you moving to Brazil??
lucifer: crontab reorged for 2046, please have a look.
mayhem sets a timezone for himself on LB
zas
outsidecontext: still ok for the release tomorrow?
monkey
ansh: Regarding the mockup for showing reviews on BB: that looks pretty good! I would like to make sure that we have as homogeneous a look between CB and BB as possible. I know alastairp was talking about adding short excerpts of the review text like you did in the mockup, so let's try to make them look similar once that is implemented
reosarevok
bitmap: sorry about sending comments separately and not in a review, I didn't know how much I was going to look through so I started like that and now it seems silly to change :D
ansh
monkey: Understood
bitmap
reosarevok: not a problem
monkey
ansh: looking at CB quickly, we could try to go closer to the look of reviews in the browse reviews page