
Do Macs really handle files in the gigabyte range a lot better than PCs?

akapaul

Junior Member
OK, some Mac zealot was telling me that Windows has trouble dealing with GB-sized files. I asked him for proof that Macs are better at this; he simply did not reply.
So, are Macs really that much better at manipulating GB-sized files? And doesn't RAM have something to do with this? Let's say you have 4 gigs of RAM; can't you just have everything go to RAM instead of your HDD?
 
That's something that's very hard to prove, because it's easy to create a 4G file and copy it around and sh!t, but manipulating it depends a lot on what type of file it is and what programs you use, in addition to the OS in question.

It's not as simple as "just having everything go to RAM". Because you're most likely running Windows on a 32-bit CPU, you have a 4G VM space available to each process, but only 2G is really accessible because the OS reserves the other 2G for itself, and chunks of that 2G will be eaten up by the program itself, the data it uses to run, and the shared libraries it loads. So say a pretty large video editing program uses 100M; you're now down to 1.9G of VM for the app's data, regardless of the physical memory available. You also have to take into consideration the fact that the OS will be caching the file too, so you don't need to keep it all in memory.
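The usual workaround, just to illustrate the point (a Python sketch; the file name and window size are made up): map the file one window at a time instead of loading the whole thing, so a multi-GB file fits through a ~2G address space.

import mmap, os

PATH = "huge_video.avi"        # hypothetical multi-GB file
WINDOW = 256 * 1024 * 1024     # 256M per map; comfortably under a 2G limit

size = os.path.getsize(PATH)
with open(PATH, "rb") as f:
    offset = 0
    while offset < size:
        length = min(WINDOW, size - offset)
        # offsets are multiples of WINDOW, which satisfies the OS's
        # allocation-granularity requirement for mmap offsets
        view = mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ,
                         offset=offset)
        # ... process view[:length] here, one window at a time ...
        view.close()
        offset += length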

The big advantage that Macs might have right now is that the G5 is a 64-bit CPU which removes all the VM limitations I was talking about above. You, for the time being, have pretty much unlimited address space to use with a 64-bit CPU and OS. What that means is that you could theoretically dump 100G memory into a G5 (provided physical space for it and no stupid limitations imposed by Apple in the software) and load up nearly 100G of video directly into memory.

I see this on Apple's web site: "which means it breaks the 4 gigabyte barrier and can use up to 8 gigabytes of main memory". That confuses me a bit; I don't know whether it refers to a physical limitation (i.e. it only has 4 memory slots and you can only buy 2G chips) or if it's a software-imposed limit so that Apple can later introduce higher-level machines that support more memory.

Anyway, neither one is a clear-cut winner because there are too many variables. Label him a zealot and move on; Apple may have slightly better consumer-level software with their iLife crap, though.
 
They are probably confusing the issue with Windows 9x's inability to deal with large files; for Win XP/2K/NT4, refer to Nothinman's post.

One thing that seems to be confusing to many Mac zealots is the differences between the versions of Windows. They tend to compare OS X to Windows 98, and of course OS X is going to beat Windows 98 hands down in nearly every measurable way. Of course, many "PC people" tend to still think of Macintoshes as anything pre-OS X; if you compare OS 9 or earlier to Win XP, I would say Win XP is the better choice. However, if you compare Windows XP or 2K to OS X, then they are going to be fairly comparable (of course the true Mac zealot will still whine about how OS X is sooo much better without presenting any real viable argument).

I think both OS X and Win XP are pretty good; they both have their advantages and disadvantages, and I think claiming one is "better" than the other is like claiming Ford is better than Chevy or vice versa (of course some people will still claim one or the other, but that is a debate for another day).

rant
If my post sounds jaded, that's because it is; I've got 2 "Mac guys" who work for me and they seem to think that because Macs are better at some things, they are better at everything...
/rant

-Spy
 
I don't know about file sizes.

IMO the filing system that Macs use (HFS+) is one of the weak points of the architecture. It's kind of awkward and easily damaged, and it has all that weird two-part metadata + file data crap that Apple uses. And journalling slows it down.

Still a lot better than FAT, though. I think filing systems are a big strength of Linux: JFS, XFS, ext3, etc. are probably the best you can get. But I'm not an expert on anything, and I'm biased.

Maybe he was referring to how the architecture is designed. Older Power Mac G4s all used SCSI, which, combined with the better clock-for-clock performance, made Macs much better at handling files at speed, even if they were outclassed in most other aspects.

Dual processors also have an advantage over other designs, especially when it comes to IDE drives. Certain types of large file transfers will use close to 100% processor time even with DMA turned all the way on, even on faster processors. With dual processors you can have one act as a sort of dedicated IDE controller while the other still has full capacity for doing editing on that file.

I don't know about the G4s, but the G5's processor (the IBM PowerPC 970) and supporting architecture were specifically designed for multiprocessor setups, much like the AMD Opteron, since IBM is a big fan of using slower, cooler processors to outperform a much GHz-faster single hot processor.

So, with SATA, a dual-processor setup, and the new system bus design, the high-end Power Mac G5 should provide better performance than a normal x86 setup. Older G4s (500MHz and less, I think) with a SCSI setup would be better than a similarly aged Wintel computer, but I don't know about the later G4s with a regular IDE setup.

(Even normally priced Promise controllers are software-based.)

But that's all speculation on my part. I don't have any real proof, except personal experience using a 500MHz G4 with Photoshop and other image editing/creation programs on OS X and still getting good performance.

All you have to do is buy a SCSI array with a quality controller and set up Linux with XFS or another high-performance filing system; then the SCSI controller takes the brunt of the file transfers instead of the CPU, the Mac loses the advantage of its dual-processor architecture, and that setup would crush the Macs in terms of file performance.

So I figure what he was saying was a bit overblown.

 
I'm no expert on Apples, but:
I'd guess that they are slightly more capable, at the moment, at video editing, as practically every Apple machine I've seen lately has been advertised as a video editing/photo editing platform. This would suggest that they can handle large files a little better than Windows, and without the problems that Windows sometimes has (incorrect transfer times [4328474 minutes remaining...], crashes, etc.).

Now, about file systems:
Not sure how HFS+ works, so I can't comment.
ext2/3 seem OK.
FAT16: don't even bother with it.
FAT32: good for speed on smaller disks, but prone to errors, as above.
NTFS (all variants): it has a lot going for it. Almost as fast as FAT32, stable, very hard to fcuk up, good with large partition sizes, encryptable. BUT it has no native DOS support.

If it weren't for the lack of native DOS support, NTFS would be used on all of my drives, not just the Windows one.
 
If it weren't for the lack of native DOS support, NTFS would be used on all of my drives, not just the Windows one.

The lack of DOS support is DOS's fault, not NTFS's; well, since MS owns both, I guess it's MS's fault for not writing any DOS drivers. If you really want, there is an NTFS4DOS driver, though it crushes your available conventional memory.

I personally haven't booted DOS on any of my machines in years, so I can't imagine why that would be a factor at all. The only reason we use DOS at work is for Ghost, so NTFS support in DOS is irrelevant there, and now with WinPE and Ghost32 I'm hoping to eliminate DOS there too.
 
I know that it's not NTFS's fault about the DOS support, but MS really should have made it so all of their OSes can access NTFS drives without 3rd-party drivers.
I use DOS whenever I need to reinstall Windows, which is very rare, but still.
 
I'm no expert on Apples, but:
I'd guess that they are slightly more capable, at the moment, at video editing, as practically every Apple machine I've seen lately has been advertised as a video editing/photo editing platform. This would suggest that they can handle large files a little better than Windows, and without the problems that Windows sometimes has (incorrect transfer times [4328474 minutes remaining...], crashes, etc.).
So, because Apple marketing says that their platform is good for video/image editing, it's supposed to have faster handling of large files?


BTW, the "problems that Windows sometimes has" that you are referring to are roughly equal if you compare 2K/XP Pro against OS X.
I know that it's not NTFS's fault about the DOS support, but MS really should have made it so all of their OSes can access NTFS drives without 3rd-party drivers.
I use DOS whenever I need to reinstall Windows, which is very rare, but still.
Why would DOS need NTFS support to reinstall Windows? I can understand the desire to occasionally get into DOS to set up the drive prior to installing Windows, or even to copy the i386 directory to the HD so you can install Windows directly off the HD. However, neither requires DOS to have NTFS support, and that is why Microsoft built those options into the install menu.

DOS is some pretty old stuff; in fact, Microsoft stopped supporting it a long time ago.

-Spy
 
drag: XFS, JFS, and ext3 are all journalled.
So is NTFS, for that matter. NTFS has some of the worst journalling of any FS ever, I've heard.
ext3's journalling is slapped on as an afterthought, so it's not too efficient.
JFS isn't meant for real use. The main point of the driver for it in linux is so that people can copy data from old OS/2 drives in their linux systems.

The 8GB limitation of the Apple G5 systems is a physical limitation of the number of slots. No 64-bit CPU has full 64-bit physical addressing, though; most have somewhere around 40-bit addressing, allowing somewhere around a TB of RAM if there are enough slots on the board.
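For concreteness, the arithmetic (nothing more than powers of two; the ~40-bit figure is the approximation mentioned above):

GiB = 2 ** 30
print(2 ** 32 // GiB, "GiB")       # 32-bit addressing: 4 GiB
print(2 ** 40 // GiB, "GiB")       # ~40-bit physical: 1024 GiB, i.e. ~1 TB
print(2 ** 64 // 2 ** 60, "EiB")   # a full 64 bits would be 16 EiB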

You can't use DOS to install any NT-based Windows, since NT-based Windows have nothing to do with DOS.

The two-part filesystem is not a meta-data and data thing. It's resource and data. All FSes have separate meta-data for each file; otherwise you'd never be able to access any files. It's actually a very nice system, as it allows structured data within files with native OS support. You can put custom GUI components directly into an executable along with the parts that contain the executable code. You can even put these pieces into a normal data file, e.g. a text document where the embedded images and other objects are resources (IIRC, this is how SimpleText does things).
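If you're curious, here's a minimal sketch of reading both forks, assuming an OS X box and a file that actually has a resource fork ("SomeDocument" is a made-up name; the /..namedfork/rsrc path is how OS X exposes the fork to POSIX-style tools):

import os

PATH = "SomeDocument"  # hypothetical file with a resource fork

with open(PATH, "rb") as f:                       # data fork: what every OS sees
    data = f.read()
fork = os.path.join(PATH, "..namedfork", "rsrc")  # OS X path to the resource fork
with open(fork, "rb") as f:
    rsrc = f.read()

print(len(data), "bytes of data,", len(rsrc), "bytes of resources")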
 
Originally posted by: spyordie007

So, because Apple marketing says that their platform is good for video/image editing, it's supposed to have faster handling of large files?
As I said, I'm no expert on Apples. It was simply an observation.
Why would DOS need NTFS support to reinstall Windows? I can understand the desire to occasionally get into DOS to set up the drive prior to installing Windows, or even to copy the i386 directory to the HD so you can install Windows directly off the HD. However, neither requires DOS to have NTFS support, and that is why Microsoft built those options into the install menu.

DOS is some pretty old stuff; in fact, Microsoft stopped supporting it a long time ago.
Look, I use DOS to reinstall Windows. I make a proper job of things when I do them, and working through DOS has always worked for me for things like that.
Even if I copied the CD to the HD, I'd still need a boot disk, which, guess what? Uses DOS!
There have also been times when I would have liked to be able to access that NTFS partition through DOS.
So, for whatever reason, I use DOS to do some things, and NTFS support would be handy in it, without the need for a 3rd-party driver. So stop trying to argue with me about why I use DOS. I just do.
 
So can I conclude that the zealot was bullshitting me with his not-backed-by-facts claim that Macs are better?
 
In this instance there are no grounds for a generalized statement that either OS is better than the other. Your biggest factor is going to be the hardware you are running the OS on; your results will vary drastically depending on which hardware you are using (IDE vs. SCSI, bus arch., etc.).

-Spy
 
drag: XFS, JFS, and ext3 are all journalled.
So is NTFS, for that matter. NTFS has some of the worst journalling of any FS ever, I've heard.
ext3's journalling is slapped on as an afterthought, so it's not too efficient.

I know this! Hell's bells.

It's just that when you turn on journalling for HFS+, it causes slowdowns.


Ext3's journalling system is hardly an afterthought; it was specifically designed for journalling. Just because it's based on ext2, which is not journalled, doesn't mean it's any less effective.

If you look through the archives or do a search, I did a whole bunch of benchmarks comparing the speed of large-file use on different filing systems. JFS was actually the loser of the bunch in terms of speed. But then again, the file transfer it did only ate 42% or so of CPU time, while the others ran around 90% CPU usage. (It was a 1700+ OC'd to 18xxMHz.) This is expected since it came from IBM, and speed never seems to be their first concern; their solution is to throw hardware at it.

The winner was XFS, but ext3 came damn close. Ext2 was the fastest sometimes, depending on the circumstances (smallish files, I believe).
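For anyone who wants to reproduce something like this, here's a bare-bones large-file write timing sketch (the path and sizes are hypothetical; a fair comparison would also drop caches between runs and repeat several times):

import os, time

PATH = "/mnt/test/bigfile.bin"  # hypothetical mount of the fs under test
SIZE = 1024 ** 3                # 1 GiB total
CHUNK = 4 * 1024 ** 2           # 4 MiB per write

buf = os.urandom(CHUNK)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())        # force the data to disk before stopping the clock
elapsed = time.time() - start
print("write: %.1f MB/s" % (SIZE / elapsed / 1e6))
os.remove(PATH)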

The two-part filesystem is not a meta-data and data thing. It's resource and data. All FSes have separate meta-data for each file; otherwise you'd never be able to access any files. It's actually a very nice system, as it allows structured data within files with native OS support. You can put custom GUI components directly into an executable along with the parts that contain the executable code. You can even put these pieces into a normal data file, e.g. a text document where the embedded images and other objects are resources (IIRC, this is how SimpleText does things).

Oh, I don't know about that. Apple does some extra-weird stuff that other OSes don't do. Their files in native HFS are one file made up of 2 parts or sections or whatever, covering things like the icon used for the file, its position on the page, what type of file it is, etc. In Linux they don't do that: if anything has to remember the icon or position, it's the program you're using to display the files (like Nautilus or Konqueror or whatever); none of that is stored in the actual file itself. It's probably the same in Windows.

Stuff like file permissions (if I am correct) is stored in the directory itself in Linux, which is itself just a file (a special file that points to hard links(?) to the parts of the disk that store the files themselves...).

With Apple this stuff becomes very evident when you set up a file server for it. When you are sharing files with Apple (say, a W2K server over SMB shares), you end up storing the file that contains the data just like any other Windows file, but you also end up with these extra ._filename files all over your hard drive. They contain the extra metadata (if that's the correct word) that can't be handled by any file format other than Apple's native stuff.
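A quick way to see the damage, as a sketch (the share path is hypothetical): walk the share and list the AppleDouble "._" sidecars whose data file no longer exists.

import os

SHARE = "/srv/share"  # hypothetical root of the SMB share

# AppleDouble sidecars are named "._<original>"; report the orphaned ones
# whose companion data file has since been deleted.
for dirpath, dirnames, filenames in os.walk(SHARE):
    for name in filenames:
        if name.startswith("._") and not os.path.exists(
                os.path.join(dirpath, name[2:])):
            print("orphaned sidecar:", os.path.join(dirpath, name))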

Now correct me if I am wrong; it's been a long time since I looked this stuff up. (Time to do some more research.)

So can I conclude that the zealot was bullshitting me with his not-backed-by-facts claim that Macs are better?

He was right. It's just not that big of a difference; at least, not enough to go bragging about it.

Macs are better, IMO (especially when they are running Linux).

Just not that much better for the extra $$$$$$$$$$$$$$$$$$$$$$ for me.

But if you're an artist or a professional, a G5 can offer significant advantages, enough to boost your productivity over using Windows in many cases. (Linux is not a contender, except for use as a rendering farm/workstation for high-end 3D stuff.)

When a single project for a small business may be worth hundreds of thousands of dollars and you are on a strict deadline, the cost of a $3000-5000 Mac hardware setup vs. a $2000-4000 PC hardware setup quickly becomes immaterial next to the $15,000 worth of software on either one, especially when everybody working for you is an artist whose talents lie nowhere near dealing with technical issues.
 
ext3's journalling is slapped on as an afterthought, so it's not too efficient.

If it's not efficient, it's because it was generalized into the JBD layer instead of being made part of the filesystem driver; the idea was to have a generic journaling layer so any FS could get journaling with very little extra work.

You can't use DOS to install any NT-based Windows, since NT-based Windows have nothing to do with DOS.

Yes you can: if you look in the install directory, there's a file called winnt.exe (IIRC) that is the DOS installer.

Look, I use DOS to reinstall Windows. I make a proper job of things when I do them, and working through DOS has always worked for me for things like that.

Using DOS to install NT is pointless, though; just boot off the CD and go from there. Even if you really want to prepare the partitions with something like Partition Magic beforehand, there's no reason to do the install itself from DOS.

Stuff like file permissions (if I am correct) is stored in the directory itself in Linux,

No, they're stored in the inode. If they were stored in the directory they would be attached to the filename instead of the file itself and that would be bad.

Macs are better.

Not really. The only thing different in any recent Mac is the chipset and CPU; the rest are all standard parts. I don't know about the G5, but the G4s still used SDRAM, which can hurt multimedia editing performance quite a bit compared to a P4 with an 800MHz FSB or even DDR on an Athlon.
 
Originally posted by: drag
drag: XFS, JFS, and ext3 are all journalled.
So is NTFS, for that matter. NTFS has some of the worst journalling of any FS ever, I've heard.
ext3's journalling is slapped on as an afterthought, so it's not too efficient.
I know this! Hell's bells.

It's just that when you turn on journalling for HFS+, it causes slowdowns.
Ok, but what does that have to do with the topic, anyway? That happens with ALL filesystems. It's a fact of the way journalling works. However, we're not talking about GBs of metadata; we're talking about GBs of file data.
Ext3's journalling system is hardly an afterthought; it was specifically designed for journalling. Just because it's based on ext2, which is not journalled, doesn't mean it's any less effective.
ext2 isn't designed for the demands of a journal and isn't designed to benefit from the way the FS is modified in that situation, so it's a bit slower than it could be.
If you look through the archives or do a search, I did a whole bunch of benchmarks comparing the speed of large-file use on different filing systems. JFS was actually the loser of the bunch in terms of speed. But then again, the file transfer it did only ate 42% or so of CPU time, while the others ran around 90% CPU usage. (It was a 1700+ OC'd to 18xxMHz.) This is expected since it came from IBM, and speed never seems to be their first concern; their solution is to throw hardware at it.
No, it's because the JFS porters were told to just port the damn thing quickly and not optimize it much. As I said before, it's not designed for normal use. The reason it doesn't use as much CPU is probably that it doesn't try to squeeze as much as possible out of the system. Either it spends its time waiting for the HD or it doesn't try to use complex algorithms to schedule and arrange operations for the best speed. As I said, they just wanted the damn thing to work, not to be fast. IBM's JVM is the fastest out there, so I don't know where you're getting your opinions about their priorities. They typically throw more hardware at stuff to make it more reliable and able to handle more concurrent tasks, not faster for some specific simple task, if you're talking about mainframes.
The winner was XFS, but ext3 came damn close. Ext2 was the fastest sometimes, depending on the circumstances (smallish files, I believe).
Yes, that's what I'd expect, as XFS is actually designed for speed+journalling. Ext2 is designed for speed, but wasn't designed with journalling in mind.
The two-part filesystem is not a meta-data and data thing. It's resource and data. All FSes have separate meta-data for each file; otherwise you'd never be able to access any files. It's actually a very nice system, as it allows structured data within files with native OS support. You can put custom GUI components directly into an executable along with the parts that contain the executable code. You can even put these pieces into a normal data file, e.g. a text document where the embedded images and other objects are resources (IIRC, this is how SimpleText does things).
Oh, I don't know about that. Apple does some extra-weird stuff that other OSes don't do. Their files in native HFS are one file made up of 2 parts or sections or whatever, covering things like the icon used for the file, its position on the page, what type of file it is, etc. In Linux they don't do that: if anything has to remember the icon or position, it's the program you're using to display the files (like Nautilus or Konqueror or whatever); none of that is stored in the actual file itself. It's probably the same in Windows.
The icon used for a document is stored in the app's resource fork. Part of the metadata is a creator code and type code. The creator code tells the OS which app to look at for the icon and the type code tells the OS which of those icons to use. Just because unix FSes are unable to store this kind of information doesn't make them superior. Windows and unix have to rely on the filename to guess what the file is and what should open it. There are benefits to either, but you can always open a file using an app that didn't create it on a Mac. Much of this may be changing with OS X, as it is unix-based.
Stuff like file permissions (if I am correct) is stored in the directory itself in Linux, which is itself just a file (a special file that points to hard links(?) to the parts of the disk that store the files themselves...).
No, permissions are stored in the inode. Dirs are as you describe; hard links point to an inode. Try making a hard link, then using chmod on it and looking at the two links to the file.
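That experiment, spelled out (a Python sketch; the file names are made up):

import os

open("original", "w").close()   # create an empty file
os.link("original", "alias")    # hard link: a second name for the same inode

os.chmod("alias", 0o600)        # change permissions through the link...

for name in ("original", "alias"):
    st = os.stat(name)
    print(name, st.st_ino, oct(st.st_mode & 0o777))
# both names print the same inode number and the same new mode,
# because the permissions live in the inode, not in the directory entry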
With Apple this stuff becomes very evident when you set up a file server for it. When you are sharing files with Apple (say, a W2K server over SMB shares), you end up storing the file that contains the data just like any other Windows file, but you also end up with these extra ._filename files all over your hard drive. They contain the extra metadata (if that's the correct word) that can't be handled by any file format other than Apple's native stuff.
There are two issues here: the extra metadata (XFS also has this "problem") and the resource fork. Ever try using UMSDOS? Are unix FSes inferior because they need to do that extra crap to graft themselves onto a FAT FS? Also, an SMB mount on Linux has problems with permissions, because SMB can't handle that "extra metadata". So what's your point? That Macs should just discard part of the file and metadata because some FS doesn't have native support for it? And what does this have to do with the speed of dealing with GB files?
 
Ok, but what does that have to do with the topic, anyway? That happens with ALL filesystems. It's a fact of the way journalling works. However, we're not talking about GBs of metadata; we're talking about GBs of file data.

But when HFS+ was first given journaling, it took somewhere around a 25% performance hit, IIRC; that's a big difference. They've probably sped it up by now, but I really couldn't tell you.

ext2 isn't designed for the demands of a journal and isn't designed to benefit from the way the FS is modified in that situation, so it's a bit slower than it could be.

There are no real extra demands that ext2 hasn't met. All you really need is a new inode for the journal, and the write paths then get modified to add a journal entry before actually making the change and marking the pages dirty.
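The idea in miniature, as a toy sketch rather than anything like the real ext3 code (file names made up; real recovery would replay any journal entries that were never applied):

import json, os

JOURNAL = "journal.log"  # stand-in for the journal inode

def journaled_write(path, text):
    # 1. make the intent durable in the journal first
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"path": path, "data": text}) + "\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. only then touch the real file
    with open(path, "w") as f:
        f.write(text)
        f.flush()
        os.fsync(f.fileno())
    # 3. change applied; the journal entry is no longer needed
    with open(JOURNAL, "w"):
        pass

journaled_write("settings.txt", "example contents")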

No, it's because the JFS porters were told to just port the damn thing quickly and not optimize it much.

Also because it was the JFS2 that was used on OS/2, not the JFS used in AIX, so it's had less work on it.

Just because unix FSes are unable to store this kind of information doesn't make them superior. Windows and unix have to rely on the filename to guess what the file is and what should open it.

It's not that it's not possible; it's not desirable. NTFS has had the ability to have multiple forks (I think they call them streams) for some time; it's just that no one uses them except MS with their AFP server. And if you look in the Linux 2.6 kernel config, the ext2 and ext3 filesystems have had the extended attribute patches merged, XFS has supported extended attributes for some time, and I wouldn't be surprised if JFS and ReiserFS get them soon. Extended attributes aren't full data forks, but you could put things like creator and type codes in there.
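A sketch of that last idea, using the Linux-only os.setxattr/os.getxattr calls (the file name and attribute values are made up, mimicking classic Mac creator/type codes):

import os

PATH = "report.doc"  # hypothetical file on an fs mounted with xattr support
with open(PATH, "w"):
    pass

# user-namespace extended attributes standing in for Mac metadata
os.setxattr(PATH, "user.creator", b"MSWD")
os.setxattr(PATH, "user.type", b"WDBN")
print(os.getxattr(PATH, "user.creator"))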
 
Originally posted by: Nothinman
Ok, but what does that have to do with the topic, anyway? That happens with ALL filesystems. It's a fact of the way journalling works. However, we're not talking about GBs of metadata; we're talking about GBs of file data.
But when HFS+ was first given journaling, it took somewhere around a 25% performance hit, IIRC; that's a big difference. They've probably sped it up by now, but I really couldn't tell you.
OK, but is HFS+ data or metadata journalled? If it's metadata journalled, then there's no hit as far as this thread is concerned. Even then, you can disable the feature if you don't want it.
ext2 isn't designed for the demands of a journal and isn't designed to benefit from the way the FS is modified in that situation, so it's a bit slower than it could be.
There are no real extra demands that ext2 hasn't met. All you really need is a new inode for the journal, and the write paths then get modified to add a journal entry before actually making the change and marking the pages dirty.
It has met the demands, but ext3 isn't optimized for those demands. XFS has quite a bit of stuff in it that knows that there is a journal, and is optimized to do things the most efficient way for that situation.
Just because unix FSes are unable to store this kind of information doesn't make them superior. Windows and unix have to rely on the filename to guess what the file is and what should open it.
It's not that it's not possible; it's not desirable. NTFS has had the ability to have multiple forks (I think they call them streams) for some time; it's just that no one uses them except MS with their AFP server. And if you look in the Linux 2.6 kernel config, the ext2 and ext3 filesystems have had the extended attribute patches merged, XFS has supported extended attributes for some time, and I wouldn't be surprised if JFS and ReiserFS get them soon. Extended attributes aren't full data forks, but you could put things like creator and type codes in there.
Yes, but EAs are metadata, not data (and that's what I was referring to when I said XFS would have problems). Forks are data. Creator and type codes are metadata, as I explained.
 
Journalling matters, even in this thread, because performance doesn't just mean file speed. What does it matter if you have the fastest throughput imaginable, if after a power outage you just lost the 18-gig video file you just finished working on?

As for the extra information on Macs, no, I don't think it should be discarded, but what real purpose does it have? The only thing it really accomplishes is saving the position of an icon on a page and eliminating the need for a .filetype at the end of a file name.
 
OK, but is HFS+ data or metadata journalled? If it's metadata journalled, then there's no hit as far as this thread is concerned. Even then, you can disable the feature if you don't want it.

I don't know, I assume it's just metadata as this is Apple we're talking about and since they were promoting the speed improvements in each OS X release so much I doubt they'd do something as bad for speed as full data journaling.

It has met the demands, but ext3 isn't optimized for those demands. XFS has quite a bit of stuff in it that knows that there is a journal, and is optimized to do things the most efficient way for that situation.

XFS also has a completely different on-disk format and directory organization. If the ext2 guys wanted to write a whole new filesystem they probably would have just contributed to JFS or reiserfs or something instead.

Yes, but EAs are metadata, not data (and that's what I was referring to when I said XFS would have problems). Forks are data. Creator and type codes are metadata, as I explained.

But forks aren't needed for the stuff you were talking about, just EAs would work fine to store mostly everything mentioned so far.
 
I don't know, I assume it's just metadata as this is Apple we're talking about and since they were promoting the speed improvements in each OS X release so much I doubt they'd do something as bad for speed as full data journaling.

There is extra information added to every file made on a Mac. I don't know if the correct term is metadata or what, but it's something extra, part of the legacy left over from older Apple OSes. Whether it's Linux or Windows, when you share files, this "extra" information gets translated into those ._filename files. I don't know exactly how it works, but it's very annoying and can cause headaches when sharing files between various OSes. A few times, corrupted files on W2K would create undeletable files...

The journalling is something else. It was added to HFS+ in the 10.2.2 update. It was a feature that Apple wasn't proud of and didn't offer any automatic enabling for; they only referenced it as a side note and in a couple of "technical"-type documents on their website. You have to go to the command line to turn it on. I think in later updates they added a GUI way to do it, but I'm not sure.

Turning it on (file journalling) caused extra CPU overhead and a 15-30% decrease in file performance, depending on the horsepower of your particular Mac.

To me this is a "good thing", except for the decrease in performance, because the filing system seemed pretty fragile compared to NTFS or to native Linux formats. I believe this is due to its half-Unix/half-OS 9 nature, but I don't know for sure.

My perceptions could have been caused by the fact that these were college computers, and students would power off at the power strip if they thought you weren't looking, just to get out of class quicker. Also, programs like Photoshop would spontaneously change file permissions without any prompting, to make themselves run quicker/better.

This type of behavior (spontaneous file permission changes/improper shutdowns) is what caused 80% of the headaches I had to deal with. Sometimes it caused REALLY annoying problems. The rest was OS 9 and QuarkXPress, and dealing with Classic mode and fonts.

Other than that, Macs were a breeze to administer, especially after we installed the remote desktop stuff.
 
Originally posted by: drag
Journalling matters, even in this thread, because performance doesn't just mean file speed. What does it matter if you have the fastest throughput imaginable, if after a power outage you just lost the 18-gig video file you just finished working on?
If it's only metadata journalled (and who would want data journalling with that huge of a file...), you'd still lose the file.
As for the extra information on Macs, no, I don't think it should be discarded, but what real purpose does it have? The only thing it really accomplishes is saving the position of an icon on a page and eliminating the need for a .filetype at the end of a file name.
I still have no clue what the position of an icon has to do with the file's metadata. The type and creator codes are different from the info in the file extension.
 
Originally posted by: Nothinman

Yes, but EAs are metadata, not data (and that's what I was referring to when I said XFS would have problems). Forks are data. Creator and type codes are metadata, as I explained.
But forks aren't needed for the stuff you were talking about, just EAs would work fine to store mostly everything mentioned so far.
No, you wouldn't store code for custom UI widgets or layouts of dialogs in EAs.
 
Originally posted by: rjain
Originally posted by: drag
Journalling matters, even in this thread, because performance doesn't just mean file speed. What does it matter if you have the fastest throughput imaginable, if after a power outage you just lost the 18-gig video file you just finished working on?
If it's only metadata journalled (and who would want data journalling with that huge of a file...), you'd still lose the file.
As for the extra information on Macs, no, I don't think it should be discarded, but what real purpose does it have? The only thing it really accomplishes is saving the position of an icon on a page and eliminating the need for a .filetype at the end of a file name.
I still have no clue what the position of an icon has to do with the file's metadata. The type and creator codes are different from the info in the file extension.

Like I said, Macs do file stuff weird; I'll see if I can find some links.
 