13:05 PM
ruaok
O_O
13:06 PM
the query works. but gives a blank result.
13:06 PM
rvedotrc
is ns1 their auth server, or a resolver?
13:06 PM
ruaok
not sure
13:06 PM
rvedotrc
if auth, then it's ok for it to refuse.
13:07 PM
my guess is that we're only allowed to query via their resolvers, 72.29.161.2 and .1
13:07 PM
as you say, firewall.
13:07 PM
Anyway, all good now.
13:07 PM
ruaok
thanks again!
13:07 PM
I'll go for a ride and then I'll work on getting services moved again.
13:08 PM
this time I'll start with the mail-mx, since that was one thing I never tested.
13:08 PM
OH. one more Q.
13:08 PM
or two.
13:08 PM
rvedotrc
?
13:08 PM
ruaok
the logger service that we have running on an ip...
13:08 PM
ruaok looks
13:08 PM
10.1.1.249 -- what is it and how do I test it?
13:09 PM
same goes for .246 -- monitor.
13:10 PM
rvedotrc
249 is on scooby. It's a syslog service. It's up to us to define if it's useful, I guess.
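If we do decide the syslog service is useful, one way to test it after the move is to fire a test datagram at it. A minimal sketch, assuming the IP from this conversation and the standard syslog UDP port; nothing here is confirmed infrastructure detail:

```python
# Hedged sketch: send one RFC 3164-style syslog packet at the
# (assumed) syslog host so we can check it gets logged.
import socket

SYSLOG_HOST = "10.1.1.249"  # assumed from this conversation
SYSLOG_PORT = 514           # standard syslog UDP port (assumption)

def format_syslog(severity, facility, tag, message):
    """Build a minimal RFC 3164 packet; PRI = facility * 8 + severity."""
    pri = facility * 8 + severity
    return "<%d>%s: %s" % (pri, tag, message)

def send_test_message():
    """Fire a single UDP datagram; UDP gives no delivery guarantee,
    so confirm by looking at the receiving host's log files."""
    packet = format_syslog(6, 1, "migration-test", "hello from the move")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet.encode("utf-8"), (SYSLOG_HOST, SYSLOG_PORT))
    sock.close()
```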
13:11 PM
.246 is the source address used by the gateways when they query machines on the network for nagios checks.
13:11 PM
ruaok
249 is not in use then?
13:11 PM
rvedotrc
so nothing "runs on" .246
13:11 PM
well. scooby is still up, and it has that IP...
13:12 PM
ruaok
oh, scooby.
13:12 PM
ok, I'll nuke that from the list and then only do a ping test on .246
13:12 PM
that should suffice, yes?
13:13 PM
rvedotrc
Not sure what you mean.
13:13 PM
If you want a syslogger, then ~something~ needs to have that IP, and log what it receives.
13:13 PM
If not, then ~meh~.
13:13 PM
ruaok
testing wise -- that when I move the IP, I just need to ping .246 and if response, I'm done.
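That "ping and done" check could look something like the sketch below. The address comes from the conversation; the flag names assume Linux iputils ping (`-c` count, `-W` timeout in seconds). As noted later, a reply only proves the address is routed; the real symptom of getting it wrong would be failing nagios checks:

```python
# Hedged sketch of the ping test for the monitor source address.
import subprocess

MONITOR_IP = "10.1.1.246"  # taken from the conversation

def ping_command(addr, count=1, timeout=2):
    """Build the ping invocation; split out so it is easy to test."""
    return ["ping", "-c", str(count), "-W", str(timeout), addr]

def host_responds(addr, count=1, timeout=2):
    """True if ping gets at least one reply (exit status 0)."""
    result = subprocess.run(
        ping_command(addr, count, timeout),
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0
```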
13:14 PM
I'll just leave .249 alone for the time being, since I'm not touching scooby.
13:14 PM
rvedotrc
For 246, the symptom of getting it wrong will be failing nagios checks on many machines.
13:14 PM
ruaok
very obvious, in other words. :)
13:14 PM
good good.
13:14 PM
rvedotrc
Assuming we still use nagios :_)
13:14 PM
ruaok
we do.
13:15 PM
I recently spent some time cleaning things up a bit.
13:15 PM
finally, we'd love a clue (or a fix, if you're up for it) on this:
13:15 PM
during some failure (carl running out of disk, I think) these graphs stopped. Ian has looked at it, but couldn't work out the issue.
13:16 PM
all of the nginx-rrd graphs stopped working.
13:16 PM
rvedotrc
eww. OK. Will look later.
13:16 PM
ruaok
lol
13:16 PM
k.
13:16 PM
I'm off for a ride then.
13:16 PM
thanks!
13:17 PM
Freso
ruaok: "lsmod now needs a sort option", it has one: `man sort` ;)
13:30 PM
LordSputnik joined the channel
14:00 PM
alastairp
Gentlecat: importing the first incremental dump doesn’t work - conflicts with the statistics table
14:00 PM
maybe incremental always dumps all statistics?
14:02 PM
Gentlecat
14:05 PM
alastairp
also, AB-44
14:06 PM
which is more urgent
14:06 PM
we import highlevel first
14:06 PM
but it has a FK to lowlevel
14:06 PM
so once you have FK constraints set up, this fails
14:07 PM
one final question: does the dump of highlevel only dump things that we also dumped in lowlevel? or is it still based on the dates?
14:07 PM
Gentlecat
you are importing data using import_data command, right?
14:07 PM
alastairp
yes
14:08 PM
if a low-level track is submitted at timestamp x, we create a dump point at x + 1, then the highlevel of that track is created at x + 2, then we perform the dump at x + 3
14:08 PM
Gentlecat
yeah, I guess we should drop constraints there before importing like it's done in init_db
14:08 PM
alastairp
is the highlevel track included?
14:09 PM
or left for the next one, because it was created after the date in the dump table
14:09 PM
yeah, dropping constraints could be dangerous… I’m not sure
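One alternative to dropping constraints is to import tables in FK-dependency order, so every referenced table is loaded before the tables that point at it. A sketch; the dependency map is an assumption based only on the two tables mentioned here (highlevel references lowlevel), and a real schema would have more entries:

```python
# Hedged alternative to dropping FK constraints: topologically sort
# the tables so each one is imported after the tables it references.
def import_order(fk_targets):
    """fk_targets maps table name -> list of tables it has FKs to.
    Returns an import order that satisfies every FK."""
    order = []
    remaining = dict(fk_targets)
    while remaining:
        ready = [t for t, deps in remaining.items()
                 if all(d in order for d in deps)]
        if not ready:
            raise ValueError("circular FK dependency between tables")
        for t in sorted(ready):
            order.append(t)
            del remaining[t]
    return order
```

With the two tables discussed here, `import_order({"highlevel": ["lowlevel"], "lowlevel": []})` puts lowlevel first, which is exactly the order the FK requires.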
14:09 PM
should the mirror be unavailable during an import?
14:12 PM
14:13 PM
Gentlecat
to your first question: everything is based on dates right now
14:13 PM
so it is assumed that you have all previous incremental dumps imported before that
14:13 PM
maybe we should do something different for json dumps
14:14 PM
alastairp
that’s not exactly what I’m asking - there’s a possible race condition where incremental dump 2 has the lowlevel row, but the highlevel row won’t appear until incremental dump 3
14:14 PM
reosarevok joined the channel
14:15 PM
Gentlecat
oh, and highlevel data is going to be calculated before next dump is imported?
14:16 PM
alastairp
don’t worry about a mirror - assume that the mirror doesn’t run any highlevel computation, or accept submission
14:19 PM
Gentlecat
ok, I don't understand what the problem is then
14:19 PM
alastairp
let me find the code and see if it’s a problem
14:21 PM
this uses `now()` as the value for `highlevel.submitted`
14:22 PM
so, highlevel.submitted is always later in time than lowlevel.submitted - because the high level tool runs in batches
14:22 PM
Gentlecat
right
14:23 PM
alastairp
this means there’s a theoretical condition where I could set up an incremental dump to include all submissions that occur before a set time
14:23 PM
and this time could include the lowlevel for a track, but not the highlevel
14:26 PM
Gentlecat
and you want that highlevel data to be in the same incremental dump?
14:26 PM
alastairp
yes, I think it makes sense
14:27 PM
Gentlecat
but think about how it would work if you need to create the next incremental dump and the highlevel data is not calculated yet
14:27 PM
alastairp
yes, this is also a problem
14:28 PM
Gentlecat
from cron job, let's say
14:29 PM
alastairp
2 solutions: 1) always do an incremental dump from earlier than now - so a dump at 3pm only accepts tracks submitted up to 2pm
14:29 PM
2) since the lowlevel dump is going to take so long, we can assume that highlevel will be ready by then
14:29 PM
Gentlecat
and it means we need to make sure that all lowlevel datasets have highlevel calculated before allowing a dump to be created
14:29 PM
alastairp
or 3) ...
14:29 PM
yeah, what you said
14:29 PM
actually verify beforehand
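Options 1 and "verify beforehand" could be sketched roughly as below: pick a cutoff safely in the past, then refuse to dump while any lowlevel row before the cutoff still lacks its highlevel row. The dict/set shapes are placeholders standing in for queries against the real lowlevel/highlevel tables:

```python
# Hedged sketch of the dump-cutoff idea discussed above.
from datetime import datetime, timedelta

def choose_cutoff(now, safety_margin=timedelta(hours=1)):
    """Option 1: a dump taken at 3pm only includes submissions up to 2pm."""
    return now - safety_margin

def missing_highlevel(lowlevel_submitted, highlevel_ids, cutoff):
    """'Verify beforehand': IDs submitted before the cutoff that have
    no highlevel row yet. A dump should wait while this is non-empty."""
    return sorted(i for i, t in lowlevel_submitted.items()
                  if t <= cutoff and i not in highlevel_ids)
```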
14:30 PM
Gentlecat
otherwise it will cause a lot of conflicts and you aren't going to be able to import anything
14:30 PM
alastairp
do you understand the problem now? can you write up an issue for it as you understand it?
14:31 PM
Gentlecat
I don't quite understand why it is a problem
14:31 PM
that you have to wait for a month for that highlevel data if it's missing?
14:32 PM
alastairp
yes
14:32 PM
Gentlecat
ok, yeah
14:32 PM
alastairp
the majority of people who will run a mirror will be running it because they actually want this information
14:33 PM
Gentlecat
should they be able to calculate highlevel data on their mirrors?
14:33 PM
alastairp
no
14:33 PM
because that’s just going to result in pain
14:33 PM
Gentlecat
and no submissions, right?
14:33 PM
alastairp
yeah
14:33 PM
Gentlecat
because "mirror", got it
14:33 PM
alastairp
but for now, probably just informally
14:34 PM
that is, we allow them to if they want to, but with no guarantees as to what happens the next time they try and load an incremental
14:34 PM
Gentlecat
yeah
14:35 PM
alastairp
the main people we should support are those who just want a copy of the database
14:35 PM
we can try and support others who are a bit more tech-savvy and can work around issues that they might create
14:35 PM
but we shouldn’t go out of our way to do stuff if it makes more work for us
14:35 PM
Gentlecat
can you create a ticket so we can keep track of all that?
14:36 PM
alastairp
ok
14:36 PM
Gentlecat
I'll think how to implement all that
14:36 PM
alastairp
:D
14:38 PM
Sebastinas
Are there any plans shutting down WS/1 or will it continue to be available for the next couple of years?
14:38 PM
Gentlecat
probably next week, still have some other things I need to get done
14:39 PM
alastairp
Sebastinas: years might be stretching it
14:39 PM
at a guess, it might disappear when we release ws/3, but that’s barely in the planning stage yet
14:39 PM
why do you still want to use ws1?
14:40 PM
Sebastinas
I don't. I'd like to push for a removal of libmusicbrainz3 which still uses WS/1.
14:40 PM
(In Debian)
14:40 PM
alastairp
ahh
14:40 PM
Sebastinas
Having a date would make that a lot easier.
14:40 PM
alastairp
does anything still depend on it?
14:41 PM
Sebastinas
kicad and gnome-mplayer
14:41 PM
alastairp
urg
14:41 PM
Sebastinas
s/kicad/kscd/
14:42 PM
alastairp
ah, that looks better
14:42 PM
I can’t give you a more concrete answer
14:43 PM
you could get better feedback if you posted to the musicbrainz-devel mailing list
14:43 PM
Gentlecat
alastairp: I updated that comment about data editor. Is it still unclear?
14:43 PM
alastairp
my feeling is that no one here will have bad feelings if the bindings disappear, but I understand if that’s not a strong enough statement for you
14:44 PM
Gentlecat: ah, right. yeah, I expected that someone who wanted to view the dataset could also see the metadata
14:45 PM
Gentlecat
it would be more friendly than MBIDs, for sure
14:45 PM
alastairp
if there was no plan to show metadata on ‘view’, I would argue that it’s not needed on ‘create’ either
14:45 PM
but if we do show metadata, anywhere in the editor, it should be cached through the server
14:46 PM
for a certain amount of time, is what I was thinking
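That server-side metadata caching could be as simple as an in-memory map with per-entry expiry. A minimal sketch only; a real deployment might well use memcached or similar instead, and the class name is an invention for illustration:

```python
# Hedged sketch of "cache metadata through the server for a
# certain amount of time".
import time

class TTLCache:
    """Minimal in-memory cache where entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self._store[key]  # expired: drop so we re-fetch upstream
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.time())
```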
14:48 PM
Gentlecat
not sure which "view" and "create" you are talking about
14:49 PM
I assume we'll need some way to display public datasets on the website
14:49 PM
ruaok
Freso: I'm too lazy to type " | sort" :)
14:50 PM
alastairp
create - when someone is making a dataset and adding a mbid to a class
14:50 PM
kepstin-laptop joined the channel
14:50 PM
view, when someone is looking at all of the things in a dataset
14:54 PM
poor postgres
14:54 PM
not happy with acousticbrainz=> delete from highlevel_json;
14:54 PM
ruaok
truncate is much faster.
14:54 PM
alastairp
oh. I guess I should have used truncate
14:54 PM
snap