#musicbrainz-devel


      • ruaok
        O_O
      • the query works. but gives a blank result.
      • rvedotrc
        is ns1 their auth server, or a resolver?
      • ruaok
        not sure
      • rvedotrc
        if auth, then it's ok for it to refuse.
      • my guess is that we're only allowed to query via their resolvers, 72.29.161.2 and .1
      • as you say, firewall.
      • Anyway, all good now.
      • ruaok
        thanks again!
      • I'll go for a ride and then I'll work on getting services moved again.
      • this time I'll start with the mail-mx, since that was one thing I never tested.
      • OH. one more Q.
      • or two.
      • rvedotrc
        ?
      • ruaok
        the logger service that we have running on an ip...
      • ruaok looks
      • 10.1.1.249 -- what is it and how do I test it?
      • same goes for .246 -- monitor.
      • rvedotrc
        249 is on scooby. It's a syslog service. It's up to us to define if it's useful, I guess.
      • .246 is the source address used by the gateways when they query machines on the network for nagios checks.
      • ruaok
        249 is not in use then?
      • rvedotrc
        so nothing "runs on" .246
      • well. scooby is still up, and it has that IP...
      • ruaok
        oh, scooby.
      • ok, I'll nuke that from the list and then only do a ping test on .246
      • that should suffice, yes?
      • rvedotrc
        Not sure what you mean.
      • If you want a syslogger, then ~something~ needs to have that IP, and log what it receives.
      • If not, then ~meh~.
      • ruaok
        testing wise -- that when I move the IP, I just need to ping .246 and if response, I'm done.
      • I'll just leave .249 alone for the time being, since I'm not touching scooby.
      • rvedotrc
        For 246, the symptom of getting it wrong will be failing nagios checks on many machines.
      • ruaok
        very obvious, in other words. :)
      • good good.
      • rvedotrc
        Assuming we still use nagios :)
      • ruaok
        we do.
      • I recently spent some time cleaning things up a bit.
      • finally, we'd love a clue (or a fix, if you're up for it) on this:
      • during some failure (carl running out of disk, I think) these graphs stopped. ian has looked at it, but couldn't work out the issue.
      • all of the nginx-rrd graphs stopped working.
      • rvedotrc
        eww. OK. Will look later.
      • ruaok
        lol
      • k.
      • I'm off for a ride then.
      • thanks!
      • Freso
        ruaok: "lsmod now needs a sort option,", it has one: `man sort` ;)
      • LordSputnik joined the channel
      • alastairp
        Gentlecat: importing the first incremental dump doesn’t work - conflicts with the statistics table
      • maybe incremental always dumps all statistics?
      • Gentlecat
      • alastairp
        also, AB-44
      • which is more urgent
      • we import highlevel first
      • but it has a FK to lowlevel
      • so once you have FK constraints set up, this fails
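
A minimal sqlite3 stand-in for the situation described above (table names follow the chat; Postgres enforces its foreign keys the same way), showing why importing highlevel before lowlevel fails once the constraint exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE lowlevel  (id INTEGER PRIMARY KEY);
CREATE TABLE highlevel (id INTEGER PRIMARY KEY REFERENCES lowlevel(id));
""")

try:
    # highlevel first, but its parent lowlevel row doesn't exist yet
    conn.execute("INSERT INTO highlevel VALUES (1)")
except sqlite3.IntegrityError:
    print("import order matters: FK constraint rejected the row")

# Importing lowlevel first (or dropping/deferring the constraint during
# import, as the chat says init_db does) avoids the error:
conn.execute("INSERT INTO lowlevel VALUES (1)")
conn.execute("INSERT INTO highlevel VALUES (1)")
```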
      • one final question, does the dump of highlevel only dump things that we also did in lowlevel? or still based on the dates?
      • Gentlecat
        you are importing data using import_data command, right?
      • alastairp
        yes
      • if a low-level track is submitted, timestamp x, we create a dump point x + 1, then the highlevel of that track is created x + 2, then we perform the dump at x + 3
      • Gentlecat
        yeah, I guess we should drop constraints there before importing like it's done in init_db
      • alastairp
        is the highlevel track included?
      • or left for the next one, because it was created after the date in the dump table
      • yeah, dropping constraints could be dangerous… I’m not sure
      • should the mirror be unavailable during an import?
      • can you ask me your question about the data editor again? I didn’t understand it https://github.com/metabrainz/acousticbrainz-se...
      • Gentlecat
        to your first question: everything is based on dates right now
      • so it is assumed that you have all previous incremental dumps imported before that
      • maybe we should do something different for json dumps
      • alastairp
        that’s not exactly what I’m asking - there may be a possible race condition that incremental dump 2 has the lowlevel row, but the highlevel row won’t appear until incr dump 3
      • reosarevok joined the channel
      • Gentlecat
        oh, and highlevel data is going to be calculated before next dump is imported?
      • alastairp
        don’t worry about a mirror - assume that the mirror doesn’t run any highlevel computation, or accept submission
      • Gentlecat
        ok, I don't understand what the problem is then
      • alastairp
        let me find the code and see if it’s a problem
      • this uses `now()` as the value for `highlevel.submitted`
      • so, highlevel.submitted is always later in time than lowlevel.submitted - because the high level tool runs in batches
      • Gentlecat
        right
      • alastairp
        this means there’s a theoretical condition where I could set an incremental dump to include all submissions that occur before a set time
      • and this time could include the lowlevel for a track, but not the highlevel
      • Gentlecat
        and you want that highlevel data to be in the same incremental dump?
      • alastairp
        yes, I think it makes sense
      • so for https://github.com/metabrainz/acousticbrainz-se... we should join to lowlevel to get the submission date
      • Gentlecat
        but think about how it would work if you need to create next incremental dump and highlevel data is not calculated yet
      • alastairp
        yes, this is also a problem
      • Gentlecat
        from cron job, let's say
      • alastairp
        2 solutions: 1) always do an incremental dump from earlier than now - so a dump at 3pm only accepts tracks submitted up to 2pm
      • 2) since the lowlevel dump is going to take so long, we can assume that highlevel will be ready by then
      • Gentlecat
        and it means we need to make sure that all lowlevel datasets have highlevel calculated before allowing a dump to be created
      • alastairp
        or 3) ...
      • yeah, what you said
      • actually verify beforehand
      • Gentlecat
        otherwise it will cause a lot of conflicts and you aren't going to be able to import anything
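
The options discussed above can be sketched like this (sqlite3 stand-in; the table and column names follow the chat, the helper function is hypothetical):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lowlevel  (id INTEGER PRIMARY KEY, submitted TEXT NOT NULL);
CREATE TABLE highlevel (id INTEGER PRIMARY KEY REFERENCES lowlevel(id),
                        submitted TEXT NOT NULL);
""")

now = datetime(2015, 1, 1, 15, 0)  # "a dump at 3pm..."
conn.execute("INSERT INTO lowlevel VALUES (1, ?)",
             ((now - timedelta(hours=2)).isoformat(),))
conn.execute("INSERT INTO highlevel VALUES (1, ?)",
             ((now - timedelta(hours=1)).isoformat(),))
conn.execute("INSERT INTO lowlevel VALUES (2, ?)",
             ((now - timedelta(minutes=5)).isoformat(),))
# row 2 has no highlevel yet: the batch tool hasn't run

def missing_highlevel(conn, cutoff):
    """lowlevel rows submitted before `cutoff` with no highlevel row (option 3)."""
    rows = conn.execute(
        """SELECT ll.id FROM lowlevel ll
           LEFT JOIN highlevel hl ON hl.id = ll.id
           WHERE ll.submitted < ? AND hl.id IS NULL""",
        (cutoff.isoformat(),)).fetchall()
    return [r[0] for r in rows]

# Option 1: "a dump at 3pm only accepts tracks submitted up to 2pm"
cutoff = now - timedelta(hours=1)
print(missing_highlevel(conn, cutoff))  # -> [] , safe to dump

# Without the margin, row 2's lowlevel would land in this dump but its
# highlevel would slip to the next incremental dump:
print(missing_highlevel(conn, now))     # -> [2]
```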
      • alastairp
        do you understand the problem now? can you write an issue, now that you understand it?
      • Gentlecat
        I don't quite understand why it is a problem
      • that you have to wait for a month for that highlevel data if it's missing?
      • alastairp
        yes
      • Gentlecat
        ok, yeah
      • alastairp
        the majority of people who will run a mirror will be running it because they actually want this information
      • Gentlecat
        should they be able to calculate highlevel data on their mirrors?
      • alastairp
        no
      • because that’s just going to result in pain
      • Gentlecat
        and no submissions, right?
      • alastairp
        yeah
      • Gentlecat
        because "mirror", got it
      • alastairp
        but for now, probably just informally
      • that is, we allow them to if they want to, but with no guarantees as to what happens the next time they try and load an incremental
      • Gentlecat
        yeah
      • alastairp
        the main people we should support are those who just want a copy of the database
      • we can try and support others who are a bit more tech-savvy and can work around issues that they might create
      • but we shouldn’t go out of our way to do stuff if it makes more work for us
      • Gentlecat
        can you create a ticket so we can keep track of all that?
      • alastairp
        ok
      • Gentlecat
        I'll think how to implement all that
      • alastairp
        :D
      • Sebastinas
        Are there any plans to shut down WS/1, or will it continue to be available for the next couple of years?
      • Gentlecat
        probably next week, still have some other things I need to get done
      • alastairp
        Sebastinas: years might be stretching it
      • at a guess, it might disappear when we release ws/3, but that’s barely in the planning stage yet
      • why do you still want to use ws1?
      • Sebastinas
        I don't. I'd like to push for a removal of libmusicbrainz3 which still uses WS/1.
      • (In Debian)
      • alastairp
        ahh
      • Sebastinas
        Having a date would make that a lot easier.
      • alastairp
        does anything still depend on it?
      • Sebastinas
        kicad and gnome-mplayer
      • alastairp
        urg
      • Sebastinas
        s/kicad/kscd/
      • alastairp
        ah, that looks better
      • I can’t give you a more concrete answer
      • you could get better feedback if you posted to the musicbrainz-devel mailing list
      • Gentlecat
        alastairp: I updated that comment about data editor. Is it still unclear?
      • alastairp
        my feeling is that no one here will have bad feelings if the bindings disappear, but I understand if that’s not a strong enough statement for you
      • Gentlecat: ah, right. yeah, I expected that someone who wanted to view the dataset could also see the metadata
      • Gentlecat
        it would be more friendly than MBIDs, for sure
      • alastairp
        if there was no plan to show metadata on ‘view’, I would argue that it’s not needed on ‘create’ either
      • but if we do show metadata, anywhere in the editor, it should be cached through the server
      • cached for a certain amount of time, is what I was thinking
      • Gentlecat
        not sure which "view" and "create" you are talking about
      • I assume we'll need some way to display public datasets on the website
      • ruaok
        Freso: I'm too lazy to type " | sort" :)
      • alastairp
      • create - when someone is making a dataset and adding an MBID to a class
      • kepstin-laptop joined the channel
      • view, when someone is looking at all of the things in a dataset
      • poor postgres
      • not happy with acousticbrainz=> delete from highlevel_json;
      • ruaok
        truncate is much faster.
      • alastairp
        oh. I guess I should have used truncate
      • snap
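
For reference, the difference ruaok is pointing at: in Postgres, `DELETE` walks the table and WAL-logs every row (leaving dead tuples for VACUUM), while `TRUNCATE` deallocates the table's pages in roughly constant time, at the cost of an ACCESS EXCLUSIVE lock and skipping per-row triggers. A small sqlite3 stand-in for the slow path (sqlite has no TRUNCATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE highlevel_json (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO highlevel_json VALUES (?, ?)",
                 [(i, "{}") for i in range(10000)])

# Row-by-row deletion: in Postgres this is the statement that made the
# server unhappy; `TRUNCATE highlevel_json` would skip the per-row work.
conn.execute("DELETE FROM highlevel_json")
print(conn.execute("SELECT COUNT(*) FROM highlevel_json").fetchone()[0])  # -> 0
```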