still getting a 500 error and thinking of doing a reinstall
D4RK-PH0ENiX has quit
D4RK-PH0ENiX joined the channel
D4RK-PH0ENiX has quit
D4RK-PH0ENiX joined the channel
supersandro20007 joined the channel
supersandro2000 has quit
supersandro20007 has quit
c1e0 joined the channel
Gore has quit
Gore joined the channel
supersandro2000 joined the channel
navap
I see Picard is now on the Windows Store, nice! Anyone know why it has a seizure warning listed for it?
I don't see it when browsing the store via browser, but when within the app this is what I see: https://imgur.com/a/2OssgEd
outsidecontext: Maybe something for you? ^
yvanzo
mo’’in’
prabal joined the channel
dpmittal_ joined the channel
tmontney: `docker ps` output looks ok, you might want to check `docker-compose logs musicbrainz`
dpmittal_ has quit
navap has quit
navap joined the channel
anant joined the channel
tmontney
ok
outsidecontext
navap: that's a generic message on the US store front. I commented on your ticket
navap
outsidecontext: Interesting. And very weird. Must be some legal thing.
outsidecontext
navap: it is, for sure. A lot of companies are afraid of getting sued in the US for things like this, and usually their legal department would rather be safe than sorry
yvanzo
ruaok: I agree, VM should be replaced with Docker for mirrors.
Most of the utility scripts in MBVM now have equivalents in MB Docker.
pristine__
iliekcomputers: hey. So today I was trying to run a script and it just got stuck. Did you come across something like this? I also checked the logs: normally, whenever we run a script, the logs mention the number of executors, memory of executors, etc., but today the logs had no mention of executors, so possibly the executors are not starting. I will look into it after mid-sems, but do you know any quick workaround, if you have seen this before?
yvanzo
The main issue with OVAs is that the loaded data quickly becomes outdated.
sumedh joined the channel
BrainzGit
[musicbrainz-server] yvanzo merged pull request #1381 (master…fix-autocomplete-warning): Fix noisy warning about autocomplete: uninitialized value in numeric gt (>) https://github.com/metabrainz/musicbrainz-serve...
prabal
@Mr_Monkey : Hi
supersandro2000 has quit
sumedh has quit
supersandro2000 joined the channel
sumedh joined the channel
Which code is hosted on test.bookbrainz.org? I don't think it's the master branch of the bookbrainz GitHub repo, because the merge feature is not there.
ZaphodBeeblebrox has quit
sumedh has quit
CatQuest joined the channel
CatQuest
Mr_Monkey: here is something https://www.collectorz.com/ (they have a book thing (I know because apparently there was a politician here that uses it to categorise her books /by ISBN number/))
and i was thinking that we would want to be the sort of database backend to this
Stats infra is feature complete for release after this PR, in my opinion.
I'm on vacation starting next week, so I'll let it run until I get back and if there are no big breakages until then, I'll look into doing a release.
CatQuest
vacation in march 🎶
why are there no "maracas" emoji
there are far too few musical instrument emoji
sumedh joined the channel
BrainzGit
[musicbrainz-server] yvanzo merged pull request #1396 (master…fix-medium-warnings): Fix noisy warnings about medium: uninitialized value in numeric eq/ne https://github.com/metabrainz/musicbrainz-serve...
Freso
amCap1712: "Most of the identical code concerns reading the metadata of audio file." How much of this would still be shared as the LB plugin is expanded to submit Artist, Track, Release, … MBIDs? It seems like right now it only gets the metadata that would be expected for Last.FM, but LB can take so much more.
ruaok
> I'll look into doing a release.
fabu, that sounds wise.
I'm hoping to get a timescale DB on lemmy for testing asap, so we can see how it runs in parallel.
amCap1712
Freso: It depends on how it's implemented. If I do end up extracting the common code, I plan to put the base requirements in the utilities file. That includes the data structures used, macros to format metadata, extracting metadata from audio files, and setup and cleanup code for the plugin. The additional features I intend to build directly into the ListenBrainz module. With this implementation, the common code would not be affected by expanding anything in the ListenBrainz module.
zas
ruaok: about freeing a rack space, I think we can stop containers (MB website & ws) on wetton and see how things go during peak hours. If ok, you can either use it, or replace it with a more powerful server
yvanzo, bitmap : ^^
ruaok
wetton is one of the newer servers, no?
would it make sense to replace one of the older servers?
zas
it's an AX50-SSD, identical to cage; we've had those for 3 years
ruaok
ok, I'm all for it. I know which server I want to replace it with.
zas
those are ryzen based, and can easily be removed without us spending time moving services around
ruaok
well, not the exact one. the threadripper one is delayed due to coronavirus.
great. let's have yvanzo and bitmap chime in, and if they are ok with it, let's do it.
zas
I'll stop services on wetton
yvanzo
ok, there is nothing specific about MB / wetton
zas
done. I think we have enough capacity to handle the loss; we reduced a lot of non-legit traffic over the last months/years
ruaok
well, start more containers on other underutilized machines.
EX52-NVMe (6 core, 64GB, 1 TB RAID-1 SSD, €64) looks nice, but I really want 128GB RAM and 1TB SSD. all of those are super expensive. :(
Gazooo has quit
Gazooo joined the channel
sumedh has quit
iliekcomputers
ruaok: hey
ruaok
yo yo
iliekcomputers
errors should go to sentry in my opinion
(while emails are for the happy path)
with emails, we'll know something has gone bad if we don't get an email
with sentry, we'll know something has gone bad if there's something in there (ideally)
does that sound reasonable?
ruaok
generally we've done it in reverse.
no email == good.
> with emails, we'll know something has gone bad if we don't get an email
I won't regularly notice missing emails, TBH.
but extra email for good == is also good for a while.
iliekcomputers
the emails are more for FYI-type things in my head.
the error use case is much better covered by sentry: it has stack traces, better logs, the number of times stuff happened, etc.
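A minimal sketch of that split, with stdlib stand-ins for the real Sentry and email calls (all names here are hypothetical, not the actual ListenBrainz code):

```python
import logging

log = logging.getLogger("dumps")

def report_error(exc):
    # Stand-in for an error tracker call such as Sentry's capture_exception:
    # the tracker keeps stack traces and occurrence counts for us.
    log.error("reported to error tracker: %r", exc)

def notify_success(subject):
    # Stand-in for the happy-path email; the signal is "no email == something
    # may be wrong", which is why errors don't rely on email at all.
    log.info("email sent: %s", subject)

def run_dump(job):
    """Run a dump job: errors go to the tracker, success sends an email."""
    try:
        result = job()
    except Exception as exc:
        report_error(exc)  # errors -> error tracker, then propagate
        raise
    notify_success("dump finished OK")  # success -> FYI email
    return result
```

The point of the pattern is that the email path never carries error details; those live only in the tracker.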
ruaok
if you're happy with this, then fine by me. let's see if we miss stuff down the line.
iliekcomputers
i do not have any intention of moving from sentry to email for error reporting
about the dump creation dates
ruaok
I agree that sentry is good for error reporting, no doubt.
iliekcomputers
we currently do it on the 1st and 15th of every month
ruaok
I guess my main point is the lacking email indicating a problem.
iliekcomputers
ruaok: i'll actively monitor for some time. eventually we could move to a different irc channel (or telegram) for these alerts. basically just things that we should log as having happened.
email was just the simplest to implement for now
ruaok
ok, then let's proceed and adjust if necessary. one failure isn't a big deal at all.
D4RK-PH0ENiX has quit
iliekcomputers
sounds good
thoughts on the dump import being staggered 7 days behind dump creation?
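A sketch of what that stagger would look like, using the creation dates (1st and 15th) and the 7-day lag mentioned in this discussion (function names are hypothetical):

```python
from datetime import date, timedelta

IMPORT_LAG = timedelta(days=7)  # dump import staggered 7 days behind creation

def dump_creation_dates(year, month):
    # Dumps are currently created on the 1st and 15th of every month.
    return [date(year, month, 1), date(year, month, 15)]

def dump_import_dates(year, month):
    # Imports into the spark cluster would then run on the 8th and 22nd.
    return [d + IMPORT_LAG for d in dump_creation_dates(year, month)]
```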
BrainzGit
[listenbrainz-server] paramsingh merged pull request #744 (master…param/add-dump-import-to-request-consumer): Automate data dump imports into the spark cluster https://github.com/metabrainz/listenbrainz-serv...
that was actually another question I had... when migrating to another solution (timescale or otherwise), waiting for dumps to be created is a long process. that will be tricky.
iliekcomputers
yeah. developing and testing incremental dumps was hard because the cycle was too slow
ruaok
I bet.
iliekcomputers
as in testing that full dump 1 + incremental dump 2 = full dump 2
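That equality can be sketched as a quick check, modelling a dump as a mapping of listen id to payload (a deliberate simplification of the real dump format; names are hypothetical):

```python
def apply_incremental(full_dump, incremental):
    # An incremental dump contains only the rows added since the previous
    # full dump, so applying it is a merge over the earlier full dump.
    merged = dict(full_dump)
    merged.update(incremental)
    return merged

def dumps_consistent(full1, inc2, full2):
    # The property under test: full dump 1 + incremental dump 2 == full dump 2
    return apply_incremental(full1, inc2) == full2
```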
ruaok
I wonder if I can run a dump dumper on newhost and work from there.
I'll still need to setup a queue and collect new listens and let them pile up.
weird that I can import a dump in 2 hours, but dumping it takes 2 days. I bet a lot of time is spent compressing the dump, no?
which I am not going to do anymore...
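A small illustration of that trade-off using stdlib gzip on some repetitive dummy rows (purely illustrative; the real dumps use a different pipeline and compressor):

```python
import gzip
import time

# ~4 MB of highly repetitive rows, standing in for dump data.
data = b"listen,artist,track\n" * 200_000

def compress_at(level):
    # Return (compressed size, wall-clock seconds) at a given gzip level.
    t0 = time.perf_counter()
    out = gzip.compress(data, compresslevel=level)
    return len(out), time.perf_counter() - t0

fast_size, fast_time = compress_at(1)  # cheap and quick
best_size, best_time = compress_at(9)  # smaller output, much more CPU time
```

On data like this, the heavier level buys a smaller file at a large CPU cost, which is the kind of time sink being skipped by not compressing.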
pristine__
iliekcomputers: did you come across anything like that? Can you please scroll up and read the message if you missed it :p
iliekcomputers
pristine__: yes
i actually did
so the request-consumer has a spark context already
which is what blocks other requests
i think
i'm not sure
i came across the problem, stopped the request consumer, and everything worked fine and dandy
however, stopping the request consumer isn't the solution here, because it's now actually being used in production