akshat: monkey: front end tests are failing. can one of you look, please?
monkey
Looking at it now
akshat
ruaok I think we should add the links to our discussions on IRC regarding huesound to the PR description as well
Yes on it
ruaok
akshat: please do.
akshat
Okaay!
monkey
`Test Suites: 26 failed` definitely something wrong going on with the front-end tests.
rerunning the tests
Etua joined the channel
akshat
There is some problem with an import, monkey. I was trying to figure out how to get rid of it
Do you have any better take on resolving it?
monkey
Not sure what was happening, but there's tons of errors in code that we haven't touched in the PR
Lots of symptoms, but not sure what the cause is
akshat
I bumped up the dependencies. That has caused a change
That isn't the major error though
But I'm pretty sure it is causing a few extra
What's the next step after bumping dependencies, monkey?
monkey
Ah, I see. Which dependencies?
akshat
Quite a lot in package.json
monkey
Ah, yes.
akshat
Do we revert that, or do we just need a few additions while doing this?
monkey
What was the reason for this package update?
Yeah, I think we'll need to revert those changes and only update any package that's absolutely required
(and also package-lock.json changes)
In general, we only want to update one dependency at a time, or a set of related dependencies.
akshat
I was trying to resolve the import issue from an external package and in the meantime updated my local packages. The bumps I made to the dependencies work fine when testing locally, but I do agree that it isn't good to do this in this PR
But asking in general: if I make a new PR and want to bump a lot of dependencies and want the tests to pass, what additional steps should I take?
You can answer this later once we have dealt with current work 😃
monkey
Step one is don't bump a lot of dependencies :p We do it carefully, one functionality at a time; otherwise it's harder to figure out what is happening if something breaks, and it can hide other issues too.
akshat
Makes sense
monkey
And then I guess we deploy to test.lb if we want to test in a docker image in an environment similar to prod, and then check that the specific functionality you're changing is working as expected
akshat
Ah okayy
monkey
So for huesound, I'll let you first revert the changes to package.json and package-lock.json, and then we can look and see if there are any package issues (the one you were trying to resolve)
package import issues*
akshat
Reverted
monkey
OK, I'll pull the changes and see how it looks locally
akshat
Great!
yvanzo
zas: Thanks. What do you think of having a directory scripts/lib/ to distinguish scripts to be sourced only rather than executed?
zas
we can do that, I'm working on it atm
monkey
Looking at the front-end tests failing, I'm guessing this is the issue you were trying to solve akshat ?
the `<<<<<<<`/`>>>>>>>` master/HEAD conflict-marker stuff in the file. A failed merge, perhaps?
monkey
Can't see the diff, which file is that?
Oh, I see it now
lucifer
package-lock.json
monkey
in package-lock
lucifer
yup
monkey
Yeah, that definitely looks wrong
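The broken file above contained leftover git merge conflict markers. A minimal sketch of a checker for this situation (the function name is hypothetical, not part of the ListenBrainz codebase):

```python
# Hypothetical helper: scan a file for leftover git merge conflict markers,
# like the ones found in package-lock.json after a failed merge.
from pathlib import Path

# standard markers git leaves behind in an unresolved merge
CONFLICT_MARKERS = ("<<<<<<<", "=======", ">>>>>>>")

def find_conflict_markers(path):
    """Return (line_number, line) pairs for lines starting with a conflict marker."""
    hits = []
    for lineno, line in enumerate(Path(path).read_text().splitlines(), start=1):
        if line.startswith(CONFLICT_MARKERS):
            hits.append((lineno, line.rstrip()))
    return hits
```

Running this over a tree before committing would catch a half-resolved merge like the one in this package-lock.json.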
TOPIC: MetaBrainz Community and Development channel | MusicBrainz non-development: #musicbrainz | BookBrainz: #bookbrainz | Channel is logged; see https://musicbrainz.org/doc/IRC for details | Agenda: Reviews, Follow up on Summit Notes -> Tickets, Allow test editing without verification (Freso/reo), Allow account admins to see editor-editor subscriptions? (Freso), next meeting PSA (Freso)
Fixed. Still having issues locally though, seems unrelated
OK akshat I updated the one package that we were having an issue with, tests are running again locally (two failed tests, but at least they make sense).
akshat
That's great monkey!!
Let me have a look
monkey
The failing tests are probably due to the test harness not behaving well with canvas elements
I'll take that on, it's coming from the colourwheel library I incorporated
akshat
Ah ohkayy
lucifer
ruaok: should we discuss the inc dump stuff further?
ruaok
sure. I might drop off periodically as I am dealing with some BS customer support stuff for the office net connection.
lucifer
cool 👍
ruaok
fundamentally, I think we need to write the unique listens to a file and then ship that to spark
lucifer
so we intend to add a consumer to the unique queue, make it write to a temp dir, and then publish that as the inc dump.
right
ruaok
yes
and we can write them in spark format as we go
lucifer
yes that sounds good to me.
how important do you think this work is compared to listen counts in PG, or other LB tasks that may come up in the near future (like the Spotify roundup work we intended to start next week)?
ruaok
I think this is more important than listen counts (and should be quicker too).
given that this is a public facing issue, I think it should get a reasonably high priority.
since people keep coming in wishing to get stats.
Etua has quit
lucifer
agreed. i'll get on this next then. moving on to some implementation details.
ruaok
sweet
lucifer
1) should this consumer run in a new container or a part of existing ones?
2) do we write every listen to the file and save it? (sounds inefficient but i may be wrong)
ruaok
1) I think it might be good to make a new container part of the mbid_mapping side of things.
2) yes, that is fine. filesystems are fast and that isn't a problem.
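The plan discussed above can be sketched roughly as follows. This is purely illustrative: the `dump_listen` name and the file layout are assumptions, not actual ListenBrainz code.

```python
# Rough sketch of the idea discussed above: a consumer callback that appends
# each unique listen to a dump file as JSON lines, so the accumulated file
# can later be shipped to spark as an incremental dump.
import json
from pathlib import Path

def dump_listen(listen: dict, dump_dir: Path) -> None:
    """Append one listen to the current incremental dump file."""
    dump_dir.mkdir(parents=True, exist_ok=True)
    dump_file = dump_dir / "incremental.listens"
    # one JSON document per line; filesystem appends are cheap, so writing
    # every listen as it arrives is fine (as noted above)
    with dump_file.open("a") as f:
        f.write(json.dumps(listen) + "\n")
```

A queue consumer would call this once per unique listen, and the dump job would then publish the accumulated file.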
lucifer
great, thanks. i'll get started on this soon and ask about anything else that comes up along the way.
nbin: I tested “GitHub OAuth for signing in to Jira” but couldn’t reproduce your issue. I enabled debug logs for this add-on so I can look into it if that happens again.
ruaok
lucifer: actually, I think we should add the output of the incremental listens to the timescale_writer. we already have the container, we already consume the listens. it would be trivial to write them to disk.
I think I can do that pretty quickly. want me to go for it?
Will look to close things now that we're all kinda done
monkey
👍
TOPIC: MetaBrainz Community and Development channel | MusicBrainz non-development: #musicbrainz | BookBrainz: #bookbrainz | Channel is logged; see https://musicbrainz.org/doc/IRC for details | Agenda: Reviews, Follow up on Summit Notes -> Tickets, Allow test editing without verification (Freso/reo), Allow account admins to see editor-editor subscriptions? (Freso), removing empty accounts - last
ruaok
lucifer: ok, I think I worked out all the issues that pertain to new improved spark incremental dumps. lets see if I can knock it out tomorrow.
akshat
ruaok there are some python tests failing, so if I could use a hand with that it would be great
ruaok catches the ball and runs with it
ruaok
whenever you see a failed test with the error message "This connection has been closed", just re-run the tests. this is some heisenbug in our tests that we haven't been able to sort out, since it almost never happens on our dev machines.
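The re-run advice above could also be automated in-process. A sketch of that idea, assuming the flaky failure surfaces as an exception carrying that exact message (this is not how the actual CI re-run works, which is just re-triggering the job):

```python
# Illustrative only: retry a flaky test a few times when it fails with the
# known "This connection has been closed" heisenbug, instead of failing
# the whole run outright. Any other failure is re-raised immediately.
import functools

def retry_on_closed_connection(retries=3):
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return test_fn(*args, **kwargs)
                except Exception as exc:
                    # only retry the known flaky failure; on the last
                    # attempt, or for any other error, propagate it
                    if ("This connection has been closed" not in str(exc)
                            or attempt == retries - 1):
                        raise
        return wrapper
    return decorator
```

In practice a pytest plugin such as pytest-rerunfailures offers the same behaviour without custom decorators.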
I've requested to re-run the tests. see what happens this time.
akshat
Oh okaay!
lucifer
ruaok: 👍
ruaok
wooo hooo! "All checks have passed"
thanks monkey, akshat!
now we just have to get that pile of code reviewed. :D