#metabrainz


      • iliekcomputers
        that's it for me, fin.
      • Freso
        iliekcomputers: Pick next. :)
      • iliekcomputers
        (oh wait, got people to book tickets to BCN!)
      • ruaok
        thank you!
      • iliekcomputers
        Freso: go :)
      • Freso
        :p
      • 🙋
      • Attended MeB board meeting this week, which went pretty well.
      • Reviewed new conflict resolution protocol. I have a remaining point I need to clarify further, but it generally looks really good, IMO.
      • Other than that, being around and about.
      • fin! ruaok: go!
      • ruaok
        k
      • Freso
        (People still up: reosarevok, bitmap, zas, yvanzo, samj1912, bukwurm, Leo_Verto, kartikeyaSh, dragonzeron – anyone else, please let me know ASAP.)
      • ruaok
        last week was prep for the board meeting and finishing off the conflict resolution protocol.
      • samj1912
        O/
      • ruaok
        more PRs, thankfully no accounting and misc other things to wrap up before traveling for nearly a month.
      • I'm purchasing plane tickets, signing visa letters and piles of other silly things like that.
      • then friday I scooted off to California.
      • it is a perfect SF summer here today.
      • 12C fog and grey. after 33C in BCN, this is amazing.
      • I'll be meeting with google and others on my trip, in particular I'll see about catching up with the cloud team. heh.
      • I'm AFK, but if you need me, ask a question. I'm never too far from being able to answer.
      • that's it for now.
      • zas?
      • zas
        hey
      • we released Picard 2
      • Freso
        🙌
      • 🎉
      • Leo_Verto
        !m Picard team
      • BrainzBot
        You're doing good work, Picard team!
      • zas
        SOLR cloud was moved to a new datacenter, we got much better machines and lower latency
      • samj1912
        \o/
      • ruaok
        now handling 100% of traffic fine?
      • zas
        so we currently serve 160 req/s with only 3 SOLR nodes, with response times around 100ms (50-150ms)
      • samj1912
        Yup :D
      • zas
        which is much better; if traffic increases we'll have to add another node to preserve response time
      • but for now, that's perfectly ok :)
      • reosarevok
        Yay
      • zas
        i ordered a replacement for wetton's cpu fan yesterday, it was overheating
      • samj1912
      • Current stats :D
      • ruaok
        zas: excellent. feel free to add nodes as you need.
      • zas
        ruaok: i will do if i see response time degrades
      • apart from that, usual PR reviews, documentation updates, and maintenance stuff
      • fin. samj1912 ?
      • samj1912
        Yo
      • Beginning of the week dealt with the solr cloud migration
      • UFW rules and ZooKeeper broke live indexing
      • But got that fixed in about 4 hours
      • Latter half of the week was spent dealing with picard
      • Mostly working with bitmap on code signing and Travis
      • Travis is a pita to configure
      • Apart from that, jira cleanups, picked up a few slacking bugs
      • Released yet another picard minor release today with osx fixes and a fix for a universal crash bug
      • Did some upgrades on pw
      • And working on a secret long term picard project ;)
      • Freso
        Uh oh.
      • samj1912
        That's it for me
      • bitmap: go
      • bitmap
        hey
      • Freso
        (People still up: reosarevok, yvanzo, bukwurm, Leo_Verto, kartikeyaSh, dragonzeron – anyone else, please let me know ASAP.)
      • bitmap
        I mostly worked more on the react entity merge forms and macos codesigning for picard
      • samj1912
        !m bitmap
      • BrainzBot
        You're doing good work, bitmap!
      • bitmap
        I put the cert/key in syswiki so we don't have to generate 5 more next release :)
      • samj1912
        Lol XD
      • bitmap
        also did some code review and looked at postgres-bdr docker images, still figuring out how it works
      • Freso
        Boring. :)
      • bitmap
        fin, yvanzo ?
      • yvanzo
        hey!
      • Last half-week, I worked on the MusicBrainz Server virtual machine. A new build (3.2GB) with sample data only is available at ftp://ftp.eu.metabrainz.org/pub/musicbrainz-vm/...
      • Inspected hip containers for maintenance and pg/ws containers for a load issue.
      • Plus usual stuff: SpamBrainz, small fixes, tickets triage, support...
      • Finito, reosarevok?
      • ruaok
        yvanzo: have you tried the VM I uploaded before I left?
      • it was a complete one, but I never tried it.
      • reosarevok
        Hi! More support, more editing and fixing
      • yvanzo
        ruaok: yes, it is corrupted, I deleted it.
      • ruaok
        alas, thanks
      • reosarevok
        Trying to do some last minute Wikimedia Eesti business
      • ruaok
        yvanzo: let's chat quickly about this after the meeting.
      • reosarevok
        And moved back to the city
      • So should be around more often again
      • And have more time to deal with stuff :)
      • bukwurm: you?
      • bukwurm
        Hey everyone!
      • Freso
        (People still up: Leo_Verto, kartikeyaSh, dragonzeron – anyone else, please let me know ASAP.)
      • bukwurm
        This week I worked on migrating entity creation from bookbrainz-site to bookbrainz-data
      • Many exciting new features have been added to the language in the last few years, primarily dealing with its async nature.
      • So I refactored the existing code and made it fit with the existing work in progress on bb-data
      • Apart from that, I worked on recent imports and routes handling discard/upgrade on bb-site
      • That's it from me this week. Leo_Verto ?
      • Leo_Verto
        Thank you!
      • Finally got through my exams, worked on polishing spambrainz code (type hinting is really cool) and started work on docker images.
      • I've also been trying to figure out how to make GCE preemptible instances work for Keras.
      • Finally my shitty cable connection suddenly decided to stop getting an upstream so I'm stuck with mobile internet. Yay for ISPs!
      • (also zas, if possible I'd love to talk to you after the meeting)
      • fin.
      • kartikeyaSh, go!
      • zas
        sure
      • kartikeyaSh
        hi
      • Freso
        (Only dragonzeron left on my list. Last chance for anyone else to let me know they want to go!)
      • kartikeyaSh
        This week I wrote code to create recording clusters for incoming recordings. I'll write tests for it and create a PR ASAP.
      • I also worked on setting up a VM to execute the code written so far for part one of the GSoC project. The code was run on the data in the only available MessyBrainz data dump: http://ftp.osuosl.org/pub/musicbrainz/messybrainz/
      • While creating clusters I found that recording_json contains some recordings with MBIDs but no titles, and others with MBID keys pointing to empty strings. These must have been inserted before ListenBrainz added a validation check against such mistakes, so I had to handle those cases.
      • And then I learned how to work with the EXPLAIN statement to speed up the queries by creating appropriate indexes. An important note: VACUUM ANALYZE must be run after loading a data dump into postgres, so that query execution plans are created optimally.
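(A minimal sketch of the VACUUM ANALYZE / EXPLAIN workflow described above, assuming psycopg2; the connection string, column name, index, and example MBID are illustrative placeholders rather than the actual MessyBrainz schema or code.)

```python
# Sketch only: connection details, column name, index and MBID are placeholders,
# not the real MessyBrainz schema or code.
import psycopg2

conn = psycopg2.connect("dbname=messybrainz user=messybrainz")

# VACUUM cannot run inside a transaction block, so switch to autocommit first.
conn.autocommit = True
with conn.cursor() as cur:
    # Refresh planner statistics after loading the data dump so that
    # query plans are based on realistic row estimates.
    cur.execute("VACUUM ANALYZE recording_json")

    # Hypothetical index on the recording MBID column used for clustering.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS recording_json_mbid_idx "
        "ON recording_json (recording_mbid)"
    )

    # Use EXPLAIN to confirm the planner now picks an index scan for lookups.
    cur.execute(
        "EXPLAIN SELECT * FROM recording_json WHERE recording_mbid = %s",
        ("00000000-0000-0000-0000-000000000000",),
    )
    for (plan_line,) in cur.fetchall():
        print(plan_line)

conn.close()
```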
      • iliekcomputers
        🌟
      • kartikeyaSh
      • The dump contains 9335675 recordings. I created clusters for the recordings that contain a recording MBID, artist clusters for the artist_credits that contain artist MBIDs, and the same for releases. It took approx. 3 hours to create the recording clusters, but only approx. 20 seconds to create the artist and release clusters, because we only have approx. 10-12k recordings with artist MBIDs and release MBIDs.
      • But fetching release clusters using recording MBIDs was slow, it took approx. 24 hours, because we query the musicbrainz database again and again for the same information with different MBIDs. If we copied the 4 tables involved in the join from the musicbrainz database, we wouldn't have to redo that join every time and the process would speed up.
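(A sketch of the speed-up idea above, assuming the four joined tables are the standard MusicBrainz recording, track, medium and release tables; the connection string, local table name and example MBID are illustrative. Materialising the join result once turns every later lookup into a single indexed query instead of a repeated four-table join.)

```python
# Sketch only: assumes the join involves the standard MusicBrainz tables
# recording, track, medium and release; names and connection are illustrative.
import psycopg2

conn = psycopg2.connect("dbname=musicbrainz user=musicbrainz")

with conn, conn.cursor() as cur:
    # Build the recording MBID -> release MBID mapping once, locally,
    # instead of re-running the four-table join for every recording.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS recording_release_map AS
        SELECT DISTINCT recording.gid AS recording_mbid,
                        release.gid   AS release_mbid
          FROM recording
          JOIN track   ON track.recording = recording.id
          JOIN medium  ON medium.id = track.medium
          JOIN release ON release.id = medium.release
    """)
    cur.execute(
        "CREATE INDEX IF NOT EXISTS recording_release_map_idx "
        "ON recording_release_map (recording_mbid)"
    )

# Each later lookup is now a single indexed scan on the local table.
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT release_mbid FROM recording_release_map WHERE recording_mbid = %s",
        ("00000000-0000-0000-0000-000000000000",),
    )
    print(cur.fetchall())

conn.close()
```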
      • fin!
      • dragonzeron: go
      • dragonzeron
        Ok
      • Leo_Verto
        I see I'm not the only one pre-typing my reviews :P
      • CatQuest
        Leo_Verto: I always do that :P
      • Freso wonders if dragonzeron is still around…
      • kartikeyaSh
        Leo_Verto: can't let people wait while I write these long reviews
      • 😂
      • Freso
        dragonzeron: Ping?
      • dragonzeron
        So I have been working on adding ISRCs as usual. However, I started using VGMdb to add additional information about albums that I was not able to find elsewhere, so I have been working on that, as well as removing Likedis auto edits
      • Freso
        Ah.
      • dragonzeron
        and I also said Ok beforehand
      • Freso
        Oh, I missed that. Sorry.
      • dragonzeron
        understandable
      • I have also been going through albums that have amazon covers, grabbing the covers from there and uploading them to the Cover Art Archive
      • so yeah, that's it
      • Freso
        Alright, thanks dragonzeron and everyone for your reviews. :)
      • Looks like we have no other items on the agenda, so we'll close the meeting with this.
      • Thanks for your time everyone!
      • </BANG>
      • ruaok
        thanks Freso
      • iliekcomputers, zas: stick around for a minute.
      • iliekcomputers
        thanks Freso
      • zas
        sticking around...
      • kartikeyaSh
        thanks Freso
      • ruaok
        I see that EX51 at hetzner has 64GB ram and 2*4TB drives.
      • no setup cost.
      • I propose that we order an EX51 to put into our empty slot at hetzner.
      • samj1912
        bitmap can you do a quick test run of picard 2.0.2 (see if it works without code sign issues)
      • ruaok
        then we move into that and when we're ready we let go of one of boingo/prince.
      • 4TB of disk (in RAID-0) should be sufficient, no?
      • CatQuest
        and thanks Freso, hopefully next time I'll be well and joining in again? (would have done it this week but I am so ill 🙀)
      • iliekcomputers
        ruaok: should be sufficient def
      • ruaok
        zas?
      • iliekcomputers
        spike uses around 1 TB :)
      • zas
        ruaok: ok for me
      • ruaok
        so then boingo/prince as app server and the new machine as DB server.
      • ok, I'll order the server then.
      • iliekcomputers
        can we name it frank (after frank ocean)