probably you should create a partition with a 1 KB block size
otherwise you waste up to 3 KB for each file (with the usual 4 KB default blocks)
djce
Six months ago or so, it was 3319 MB
Good idea. Do you know how to do that?
yalaforge
you need a free partition, one you don't need anymore
djce
I was thinking of seeing how well it would fit into a PG database too...
OK. Or I could use a "-o loop" file, yes?
yalaforge
use mke2fs with the -b option
djce
Ah, ok.
I don't do a lot of that normally. The OS installer takes care of that for me :-)
yalaforge
I don't know about loopback file systems, sorry
djce
ok
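(A minimal sketch of the loopback route, since it came up: make a backing file, put the filesystem in it, and mount it with -o loop. The path and size here are made up, and -F just tells mke2fs not to complain that the target isn't a block device:)

    dd if=/dev/zero of=/tmp/freedb.img bs=1M count=1200   # backing file; size is a guess
    mke2fs -b 1024 -j -F /tmp/freedb.img                  # 1 KB blocks, ext3 journal
    mkdir -p /mnt/freedb
    mount -o loop /tmp/freedb.img /mnt/freedb             # the kernel loop device does the rest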
yalaforge
you need to umount the partition in question
but all data on it will be lost, of course.
djce
yes, of course. thanks.
yalaforge
mke2fs -b 1024 -j /dev/hd...
oh, and you need enough inodes. At least one for each file (the -N option)
djce
1067811
ouch!
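(With that count, the earlier command might look something like this; the device name is a placeholder, and 1,100,000 leaves a little headroom over the 1,067,811 files:)

    mke2fs -b 1024 -N 1100000 -j /dev/hdXX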
yalaforge
oh dear. that's a lot of files
with 1K block size it should be just a bit over 1Gig
why do you need all that crap?
djce
I'm just gonna see how big the tar file is after "bunzip2"
I don't. It's just for, err, fun. Or interest. Or because I can.
yalaforge
:-)
djce wonders how much faster a simple decompress will be than a decompress-and-extract.
yalaforge
much faster
djce
We'll see. But yes, I think so!
yalaforge
the problem with extracting is that many new data structures (inodes, directory entries) have to be created in the file system
ext2fs doesn't cope very well with this. The BSD FS in async mode is even worse
djce
Yup. I'm hoping that with just a bunzip2 (i.e. to freedb-complete.tar) I can scan the tar file, build up an index of filenames to tar offsets, and then just "open/seek/read" to load each record.
Hopefully, seek is nice and fast.
yalaforge
yes, I think that'll work
I once fed a tar file full of mp3s to my xmms. It worked well :-)
cool, there is a description of the tar format in /usr/include/tar.h
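(A sketch of the index-building idea in Perl: walk the 512-byte headers described in tar.h, assuming plain ustar members and the usual field offsets, i.e. the name at byte 0 and the octal size at byte 124. The filename is from the conversation; everything else is illustrative:)

    #!/usr/bin/perl
    use strict;
    use warnings;

    my (%offset, %size);    # member name => data offset / size in bytes

    open my $fh, '<', 'freedb-complete.tar' or die "open: $!";
    binmode $fh;

    my $pos = 0;
    while (read($fh, my $header, 512) == 512) {
        last if $header !~ /[^\0]/;                  # an all-zero block ends the archive
        my $name = unpack 'Z100', $header;           # name field: offset 0, 100 bytes
        (my $octal = substr $header, 124, 12) =~ tr/0-7//cd;
        my $sz = oct $octal;                         # size field is octal text
        $offset{$name} = $pos + 512;                 # data starts right after the header
        $size{$name}   = $sz;
        my $skip = 512 * int(($sz + 511) / 512);     # data is padded to 512-byte blocks
        seek $fh, $skip, 1 or die "seek: $!";
        $pos += 512 + $skip;
    }
    close $fh;

    # then any record is one open/seek/read away:
    # seek $fh, $offset{$name}, 0; read $fh, my $buf, $size{$name};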
djce
bunzip2: 10 min, instead of 12 hours.
yalaforge
we were right :-)
how do you want to extract the index?
djce
Archive::Tar (perl). Or a little variant of it wot I just writ :-)
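(For comparison, a sketch with the stock module, assuming a reasonably recent Archive::Tar: its iterator reads members one at a time without slurping the whole archive, but it doesn't hand back raw byte offsets, which is presumably what the variant adds:)

    use Archive::Tar;

    my $next = Archive::Tar->iter('freedb-complete.tar');
    while (my $f = $next->()) {
        printf "%s (%d bytes)\n", $f->name, $f->size;
    }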