amending unpushed commits is fine but don't do it for already pushed ones :P
naiveai
yeah, will keep in mind. i spotted it too late >_<
come to think of it I'm going to have to sign all my commits too ....
LordSputnik: could you purge @coveralls comments from https://git.io/vbbR1? I'm about to do a bulk signing commit that would leave only one coveralls comment for all my commits
then I'll do a proper writeup on the changes and submit
leon joined the channel
leon is now known as Guest94268
Leftmost
naiveai, by and large you shouldn't ever have to do a force push, so if you find yourself doing that, double-check.
Guest94268
Hey
I am trying to set up picard-website locally, following the install.md for linux
however when I run pip install -r requirements.txt I get an error
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-g83s_q00/transifex-client/setup.py", line 14, in <module>
if long_description.startswith(BOM):
TypeError: startswith first arg must be str or a tuple of str, not bytes
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-g83s_q00/transifex-client/
How do I solve this error?
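The TypeError in the traceback above is a Python 2 vs 3 mismatch: judging from the error message, `long_description` is read as `str` while `BOM` is `bytes` (on Python 3, `codecs.BOM_UTF8` is a bytes object), and `str.startswith()` refuses a bytes argument. A minimal sketch of the failure and the usual decode-based fix; this mirrors, rather than reproduces, the transifex-client code:

```python
import codecs

BOM = codecs.BOM_UTF8  # bytes on Python 3: b"\xef\xbb\xbf"

long_description = "\ufeffSome long description"  # file read as str

try:
    long_description.startswith(BOM)
except TypeError as exc:
    # On Python 3 this raises:
    # "startswith first arg must be str or a tuple of str, not bytes"
    print(exc)

# The usual fix: compare against a str BOM instead of raw bytes.
print(long_description.startswith(BOM.decode("utf-8")))  # True
```

In practice the quick workaround was to install under Python 2.7 (which the Picard website targeted at the time) or to use a transifex-client release that is Python 3 compatible.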
iliekcomputers
Which version of python are you using?
Are you using docker?
Guest94268
I am using python 2.7.12
Using virtualenv
*python 3.5.2
naiveai
Leftmost: i have some… bad git habits to say the least, so I do force pushes more often than I really should. i'm trying to shake off the habit
i feel like at this point I should just open a new PR without all the coveralls spam
lesson learned (i feel like i've been saying that a lot lately): git is literally magic, but magic should be used wisely.
LordSputnik, arthelon[m], Leftmost: made PR#174 with all commits signed, pushed all at once. submitted.
phew! now i won't bother you guys again.
Guest94268
@iliekcomputers any help?
reosarevok
zas: suffering some sort of server setbacks?
Stuff seems slow
iliekcomputers
Guest94268: I'm not familiar with Picard website but it uses python 2.7 I think
zas
reosarevok: i did a quick check, it doesn't look slower than usual
reosarevok
I think now it is a bit better again - I was having issues with stuff taking so long to send that it broke scripts :/
Guest94268 has quit
ruaok
iliekcomputers: ping. see if we can appear in the same place at the same time today.
iliekcomputers
Here right now
ruaok
sweet.
ok, first off. I'm really sorry.
second, I remember what a drama the freedb data dumps are.
they create one file on disk for every CD.
it chews up inodes and does an insane amount of file opening/closing.
the process is painfully slow and ideally one would do an in-memory stream decompress to not bog down the filesystem.
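The in-memory stream decompress idea can be sketched as follows; this is an illustrative Python sketch (the function name and the .tar.bz2 format are assumptions), not the actual freedb dump tooling:

```python
import tarfile

def stream_members(path):
    """Iterate a .tar.bz2 dump's entries in memory, decompressing each
    member as it streams past instead of extracting one file per CD
    onto disk (which burns inodes and open/close syscalls)."""
    # "r|bz2" opens the archive in stream mode: members can only be
    # read sequentially, but nothing is ever written to the filesystem.
    with tarfile.open(path, mode="r|bz2") as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member).read()
```

In stream mode (`"r|bz2"` rather than `"r:bz2"`) each member must be consumed during iteration, which is exactly the access pattern a dump-processing loop needs.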
iliekcomputers
And us creating a file for each user is similar?
ruaok
exactly.
as much as I love the fact that one user can quickly find their own data, it makes consuming data dumps much harder for everyone.
iliekcomputers
I see.
ruaok
I think we should make it really simple then and not think too much.
iliekcomputers
Tbh, there are only ~1200 users in LB right now.
ruaok
correct.
but this becomes sub-optimal at 10k+, which isn't far out.
iliekcomputers
Okay.
ruaok
so, I am thinking that we should set some file size limit. say 10MB or maybe 100MB.
then we pump listens into txt files.
the norm is to write one JSON document per line.
iliekcomputers
Text files which aren't username specific
ruaok
correct.
iliekcomputers
And each JSON file contains usernames?
ruaok
maybe just monotonically increasing numbers.
iliekcomputers
*json doc
ruaok
yes. I forget, does the JSON now contain usernames?
iliekcomputers
I'm not sure but I don't think it does.
ruaok
that would make it easy for us, but a bit harder for users since they would need to grep for their user names.
my natural instinct is to make it so that we create an enclosing JSON doc with the username and then output one per line.
but catcat would make that tough. having to load 1M+ listens into RAM just to parse it isn't very smart either.
so, we need to find a simple way to delineate listens by users.
we could do that in the index file.
user -> filename, offset
user -> filename, offset, size
so that to find the data for a given user, they need to extract one file, seek to offset and read size bytes.
and it makes dumping the data very simple.
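The index scheme described above (user -> filename, offset, size) can be sketched as below; the file names and listen fields are assumptions for illustration, not the eventual ListenBrainz dump format:

```python
import json

# Hypothetical per-user listens; real listen documents differ.
users = {
    "alice": [{"track": "a"}, {"track": "b"}],
    "bob": [{"track": "c"}],
}

index = {}  # user -> (filename, offset, size)
filename = "dump-000.txt"

# Dump: append each user's listens as JSON-per-line, recording where
# each user's block starts and how many bytes it spans. Binary mode
# keeps offsets and sizes in bytes, so seek/read are exact.
with open(filename, "wb") as f:
    for user, listens in users.items():
        start = f.tell()
        for listen in listens:
            f.write((json.dumps(listen) + "\n").encode("utf-8"))
        index[user] = (filename, start, f.tell() - start)

# Lookup: extract the named file, seek to the offset, read size bytes.
fname, offset, size = index["bob"]
with open(fname, "rb") as f:
    f.seek(offset)
    chunk = f.read(size)

print([json.loads(line) for line in chunk.decode("utf-8").splitlines()])
# [{'track': 'c'}]
```

A user never needs to scan the whole dump: one seek and one bounded read recovers exactly their block.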
iliekcomputers
Hmmm, sounds good, what if a user's listens take more than one file?
ruaok
I think if we treat the filesize as a recommendation, not a hard limit, we should be good.
meaning that if we pick 10MB as the limit (unlikely) and a user has 12MB of listens, they take up one file.
iliekcomputers
Ah okay
ruaok
and if a user is at 9MB and then we reach catcat, fine.
iliekcomputers
catcat would still probably take many files?
ruaok
I'm quite curious to dump catcat to a single file using your current setup.