solr-cloud-1 was rebooted half an hour ago, did someone do this?
7:12:16 UTC
samj1912 ^^
samj1912
Nope
Afk for a while now
loujine has quit
HSOWA has quit
ruaok
yvanzo_: the VM ballooned to 20GB this time around.
also MOOOIN! \ø-
latest MV is now available on the FTP site, md5 generating.
loujine joined the channel
zas
Moooin ruaok
solr-1 rebooted "alone" at 7:12:16 UTC, after being unavailable for ~30 mins, i have no explanation for now
ruaok
I didn't do it.
zas
i suspect underlying hardware issue, but i see nothing in Hetzner status
ruaok
is it just me or does it seem that hetzner's cloud isn't nearly as stable as AWS or GC?
probably built on the same crap consumer hardware is my guess.
zas
it isn't just you ;)
prices... are quite different
ruaok
I'm thinking of writing them an actual letter. not just a tech support message. but an actual letter that says that their shit sucks.
because as with their other offerings, we're putting in a lot of effort to keep things running when we shouldn't have to.
zas
well, their cloud is quite new, and we know their weaknesses. for me that's a non-issue if we are able to make our systems fault-tolerant enough
beyond that, it's a question of money: human time has a cost
ruaok
yeah, a good exercise for us. still, we're wasting time on it.
exactly.
zas
i don't see that as a real waste of time, unreliable underlying hardware forces us to improve the fault-tolerance of our systems, which is imho very good
the current solr setup can easily lose a node (but not 2), at worst we can spawn a new one and configure it in under 30 mins
reliable hardware leads to non-fault-tolerant systems, and when something bad happens... that's catastrophic ;)
ruaok
agreed.
but once we remove all single points of failure, if we still need to play this stupid game, then we're wasting effort.
zas
once we remove all single points of failure, we won't care about failures anymore ;)
but we still have too many atm
the main problem for me with hetzner cloud is that the cpu resources "actually available" aren't really predictable, and i have the feeling they're short-changing us in this regard
the difference between SOLR nodes is quite significant
ruaok
yes.
can you do me a favor and find one graph that clearly shows that the performance of equal nodes differs over a long period of time?
kartikeyaSh
ruaok: For creating clusters using fetched release MBIDs, I compare release_name with the fetched release names (in most cases there are multiple releases for a single recording), and if I get a single match I associate that release MBID with the recording, else nothing is associated. But in some cases we have only one release right now for a recording, e.g. https://musicbrainz.org/release/485e33dd-d8f2-4.... Suppose we get a recording like
{"title":"Farewell to Ireland", "artist": "James Morrison", "release": "Irish Favourites", "recording_mbid":"https://musicbrainz.org/recording/13d6a027-3dae-4f4d-a08a-3ba044f7a257"}. So, do I associate release MBID https://musicbrainz.org/release/485e33dd-d8f2-4... with this recording or not? Maybe in the future some other release for this recording will be inserted into the MusicBrainz database, and then our association will become incorrect,
but for now it's the only known release with the given name, so the association is correct.
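to make the rule concrete, here is a minimal sketch of the matching logic I described; the match_release helper and the shape of candidate_releases are illustrative assumptions, not the actual clustering code:
```python
def match_release(recording, candidate_releases):
    """Return a release MBID only when the release name from the source
    data matches exactly one release already linked to the recording.

    `recording` is a dict like the example above; `candidate_releases`
    is an assumed list of {"mbid": ..., "title": ...} dicts fetched for
    the recording MBID (the fetching step is out of scope here).
    """
    wanted = recording["release"].strip().lower()
    matches = [r for r in candidate_releases
               if r["title"].strip().lower() == wanted]
    # A single unambiguous match produces an association; zero or
    # multiple matches leave the recording unassociated for now.
    return matches[0]["mbid"] if len(matches) == 1 else None
```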
Nyanko-sensei has quit
UmkaDK
Guys, a quick schema upgrade question: will there be one in October?
D4RK-PH0ENiX joined the channel
ruaok
UmkaDK: unlikely. we haven't decided yet, but there have been no real musings that I've caught.
I'm really more in favor of getting some single-point-of-failure and UX things moving.
UmkaDK
Thanks ruaok!
ruaok
kartikeyaSh: gimme a few minutes to go to the office and then you're on top of my todo list.