#metabrainz

      • D4RK-PH0ENiX joined the channel
      • D4RK-PH0ENiX has quit
      • D4RK-PH0ENiX joined the channel
      • HSOWA joined the channel
      • HSOWA has quit
      • HSOWA joined the channel
      • KassOtsimine has quit
      • thomasross has quit
      • thomasross joined the channel
      • adhawkins has quit
      • adhawkins joined the channel
      • adhawkins has quit
      • adhawkins joined the channel
      • KassOtsimine joined the channel
      • KassOtsimine has quit
      • KassOtsimine joined the channel
      • HSOWA has quit
      • Gazooo has quit
      • Gazooo joined the channel
      • rembo10_ joined the channel
      • yvanzo_ joined the channel
      • DropItLikeItsHot joined the channel
      • yvanzo has quit
      • AfroThundr has quit
      • rembo10 has quit
      • kori has quit
      • loujine has quit
      • dseomn has quit
      • dseomn joined the channel
      • kori joined the channel
      • loujine joined the channel
      • Major_Lurker joined the channel
      • HSOWA joined the channel
      • HSOWA has quit
      • HSOWA joined the channel
      • KassOtsimine has quit
      • HSOWA has quit
      • adhawkins has quit
      • adhawkins joined the channel
      • adhawkins has quit
      • adhawkins joined the channel
      • Dr-Flay has quit
      • KassOtsimine joined the channel
      • KassOtsimine has quit
      • KassOtsimine joined the channel
      • Nyanko-sensei joined the channel
      • D4RK-PH0ENiX has quit
      • Nyanko-sensei has quit
      • D4RK-PH0ENiX joined the channel
      • eharris has quit
      • HSOWA joined the channel
      • D4RK-PH0ENiX has quit
      • KassOtsimine has quit
      • D4RK-PH0ENiX joined the channel
      • D4RK-PH0ENiX has quit
      • D4RK-PH0ENiX joined the channel
      • eharris joined the channel
      • KassOtsimine joined the channel
      • HSOWA has quit
      • dragonzeron has quit
      • HSOWA joined the channel
      • KassOtsimine has quit
      • zas
        samj1912: ping
      • solr-cloud-1 was rebooted half an hour ago, did someone do this?
      • 7:12:16 UTC
      • samj1912 ^^
      • samj1912
        Nope
      • Afk for a while now
      • loujine has quit
      • HSOWA has quit
      • ruaok
        yvanzo_: the VM ballooned to 20GB this time around.
      • also MOOOIN! \ø-
      • latest MV is now available on the FTP site, md5 generating.
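
Verifying such a dump against its published checksum is a small script; a minimal sketch in Python, with hypothetical file names standing in for the actual dump and .md5 file names on the FTP site:

```python
# Minimal sketch: verify a downloaded dump against its published MD5 checksum.
# File names below are hypothetical placeholders, not the actual FTP file names.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Stream the file so large dumps don't have to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# The .md5 file is assumed to use the usual "checksum  filename" format from md5sum.
expected = open("dump.tar.bz2.md5").read().split()[0]
actual = md5_of("dump.tar.bz2")
print("MD5 sum matches" if actual == expected else "MD5 mismatch")
```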
      • loujine joined the channel
      • zas
        Moooin ruaok
      • solr-1 rebooted "alone" at 7:12:16 UTC, after being unavailable for ~30 mins, i have no explanation for now
      • ruaok
        I didn't do it.
      • zas
        i suspect underlying hardware issue, but i see nothing in Hetzner status
      • ruaok
        is it just me or does it seem that hetzner's cloud isn't nearly as stable as AWS or GC?
      • probably built on the same crap consumer hardware is my guess.
      • zas
        it isn't just you ;)
      • prices... are quite different
      • ruaok
        I'm thinking of writing them an actual letter. not just a tech support message. but an actual letter that says that their shit sucks.
      • because as with their other offerings, we're putting in a lot of effort to keep things running when we shouldn't have to.
      • zas
        well, their cloud is quite new and we know its weaknesses; for me that's a non-issue if we are able to make our systems fault-tolerant enough
      • after that, it's a question of money: human time has a cost
      • ruaok
        yeah, a good excercise for us. still, we're wasting time on it.
      • exactly.
      • zas
        i don't see that as a real waste of time, unreliable underlying hardware forces us to improve the fault-tolerance of our systems, which is imho very good
      • the current solr setup can easily lose a node (but not 2), at worst we can spawn a new one and configure it in under 30 mins
      • reliable hardware leads to non-fault-tolerant systems, and when something bad happens .... that's catastrophic ;)
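
To make "can lose one node, but not two" concrete: with three Solr nodes and a replication factor of 3, every shard has a replica on each node, so any single node can drop out without losing data. A hedged sketch of creating such a collection via the standard Solr Collections API follows; the host, collection name and shard count are assumptions for illustration, not the actual MetaBrainz configuration, and the third-party requests package is used:

```python
# Sketch: create a SolrCloud collection with one shard replicated to all 3 nodes,
# so the cluster tolerates the loss of any single node (but not two).
# Host, collection name and shard count are assumptions for illustration only.
import requests

SOLR_URL = "http://solr-cloud-1:8983/solr/admin/collections"

resp = requests.get(SOLR_URL, params={
    "action": "CREATE",
    "name": "example_collection",   # assumed collection name
    "numShards": 1,
    "replicationFactor": 3,         # one full copy per node
    "wt": "json",
})
resp.raise_for_status()
print(resp.json())
```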
      • ruaok
        agreed.
      • but once we remove all single points of failure, if we still need to play this stupid game, then we're wasting efforts.
      • zas
        once we've removed all single points of failure, we won't care about failures anymore ;)
      • but we still have too many atm
      • the main problem for me with hetzner cloud is that the cpu resources "actually available" aren't really predictable, and i have the feeling they're short-changing us in this regard
      • the difference between the SOLR nodes is quite significant
      • ruaok
        yes.
      • can you do me a favor and find one graph that clearly shows the performance of identical nodes differing over a long period of time?
      • send me a png or a link?
      • then I can include it in a letter.
      • zas
      • those are response times, running the exact same hardware and software on 3 different VMs
      • cpu-bound
      • solr was _always_ much faster
      • solr1*
      • but it is the exact same specs
      • ruaok
        ok, I will actually attempt to elevate this as a support ticket and complain before I send an actual letter.
      • zas
        but imho it's too soon to complain about this, i detected a weird (and as yet unexplained) behavior
      • solr 3 vs solr 2
      • zookeeper master changed after a reboot, and curves inverted
      • i still have to dig into that, and understand how & why it happened
      • samj1912 suggested it could be due to the hetzner cloud resource allocation (servers were restarted), but i'm not sure
      • but still, if you look at solr1 response times, they are consistently 2 times better than the worst one
      • which is weird because solr1 handles 1.5x more requests
      • so basically solr 2 & 3 are very "slow" compared to solr 1
      • but they cost the exact same price
      • ruaok
        I think reboots are points where VMs might be migrated to another physical host.
      • we don't know for certain, but this is what it feels like.
      • zas
        mine too, but we should be informed right?
      • solr1 does 36 ops with a 99ms response time, while solr3 does 27 ops with a 191ms response time
      • weight is heavier on solr1 (150)
      • and that's been consistent since we've had those nodes
      • solr 2 or 3 never approach solr1 performance
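
For reference, a quick back-of-envelope check with the figures zas quotes above (36 ops at 99 ms on solr1 versus 27 ops at 191 ms on solr3):

```python
# Back-of-envelope comparison using the figures quoted in the discussion above.
solr1 = {"ops": 36, "rt_ms": 99}
solr3 = {"ops": 27, "rt_ms": 191}

throughput_ratio = solr1["ops"] / solr3["ops"]   # ~1.33x more requests on solr1
latency_ratio = solr3["rt_ms"] / solr1["rt_ms"]  # ~1.93x slower responses on solr3

print(f"solr1 handles {throughput_ratio:.2f}x the requests of solr3")
print(f"solr3 responds {latency_ratio:.2f}x slower than solr1")
```

In other words, despite carrying the heavier load-balancer weight, solr1 still answers in roughly half the time.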
      • Nyanko-sensei joined the channel
      • D4RK-PH0ENiX has quit
      • yvanzo_
        salute!
      • MD5 sum matches :)
      • UmkaDK has quit
      • ruaok
        yay!
      • madmouser1 joined the channel
      • UmkaDK joined the channel
      • UmkaDK has quit
      • UmkaDK joined the channel
      • zas
        i'll not add artists' tumblr links anymore, since tumblr's GDPR compliance is a joke, and no, i'll not uncheck 300+ ad partners' shit, f*ck u tumblr.
      • ruaok
        heh, yeah. all former yahoo properties suck ass.
      • zas
      • ruaok
        wow. just wow.
      • kartikeyaSh
        ruaok: For creating clusters using fetched release MBIDs, I compare release_name with the fetched release names (in most cases there are multiple releases for a single recording), and if I get a single match I associate that release MBID with the recording, else nothing is associated. But what about the case where we have only one release right now for some recording, e.g. https://musicbrainz.org/release/485e33dd-d8f2-4.... Suppose we get a recording like
      • {"title":"Farewell to Ireland", "artist": "James Morrison", "release": "Irish Favourites", "recording_mbid":"https://musicbrainz.org/recording/13d6a027-3dae-4f4d-a08a-3ba044f7a257"}. So, do I associate release MBID https://musicbrainz.org/release/485e33dd-d8f2-4... with this recording or not? Maybe in the future some other release for this recording will be inserted into the MusicBrainz database, so our association will become incorrect,
      • but for now it's the only known release with the given name, so the association is correct.
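
A minimal sketch of the matching rule kartikeyaSh describes, assuming the releases for a recording have already been fetched (function and field names here are illustrative, not actual ListenBrainz code): keep a release MBID only when exactly one fetched release matches the submitted release name.

```python
# Illustrative sketch of the heuristic described above: associate a release MBID
# with a recording only when exactly one of the fetched releases matches by name.
# Function and field names are assumptions, not actual ListenBrainz code.

def match_release_mbid(listen, fetched_releases):
    """listen: dict with a "release" name; fetched_releases: list of
    {"mbid": ..., "name": ...} dicts fetched for the listen's recording MBID."""
    wanted = listen["release"].strip().lower()
    matches = [r["mbid"] for r in fetched_releases
               if r["name"].strip().lower() == wanted]
    # Exactly one match -> confident association; otherwise associate nothing.
    return matches[0] if len(matches) == 1 else None

listen = {"title": "Farewell to Ireland", "artist": "James Morrison",
          "release": "Irish Favourites"}
fetched = [{"mbid": "<release-mbid>", "name": "Irish Favourites"}]
print(match_release_mbid(listen, fetched))  # -> "<release-mbid>"
```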
      • Nyanko-sensei has quit
      • UmkaDK
        Guys, a quick schema upgrade question: will there be one in October?
      • D4RK-PH0ENiX joined the channel
      • ruaok
        UmkaDK: unlikely. we haven't decided yet, but there have been no real musings that I've caught.
      • I'm really more in favor of getting some single-point-of-failure and UX things moving.
      • UmkaDK
        Thanks ruaok!
      • ruaok
        kartikeyaSh: gimme a few minutes to go to the office and then you're on top of my todo list.