
C# style question

The purpose of header files is to contain function prototypes. That's really it. They can do other things, but generally a header just sets up a placeholder for the compiler, saying: "Hey, this function/class exists somewhere else in the code, so don't freak out if I call a function you haven't seen yet."
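The "placeholder" idea above can be sketched in C. This is a minimal illustration, not anyone's real project: the file names and the `add` function are invented, and the two "files" are collapsed into one listing with comments marking where each would begin.

```c
/* math_utils.h -- the header: just a prototype, no body.
   This is the "placeholder" that tells the compiler the
   function exists somewhere else in the code. */
int add(int a, int b);

/* math_utils.c -- the actual definition. The linker, not the
   compiler, connects calls in other files to this body. */
int add(int a, int b)
{
    return a + b;
}
```

Any file that includes `math_utils.h` can call `add()` without ever seeing its body; the compiler only checks the call against the prototype.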

Right, BECAUSE of limited resources (think 1970s compilers). With modern compilers, there's no need to "place hold" and wait for the compiler to catch up.
 
Right, BECAUSE of limited resources (think 1970s compilers). With modern compilers, there's no need to "place hold" and wait for the compiler to catch up.

Well, not entirely.
Namely, the 'place holding' is done because the functions are 'external'.
These external references are not solved by the compiler, but by the linker.

.NET works differently: the actual native code isn't generated at 'compile time', but at runtime (or install time), and that's also where any linking would occur.
The assemblies don't contain native code. They are equivalent to the internal bytecode (think optimized parse tree) of a regular compiler, before it generates native code and eventually links it into a binary executable image.
 
Right, BECAUSE of limited resources (think 1970s compilers). With modern compilers, there's no need to "place hold" and wait for the compiler to catch up.

It's as Scali said, since the "include" directive literally means "take this code and copy it here". The problem you run into with large projects is that you don't want all your source code in just one file. Includes with prototypes let you split it up nice and neatly.

It comes down to the way C was set up initially (and honestly, most compiled languages are this way). The compiler was meant to go through one file at a time and the linker was supposed to hook those files together. While resources may have been part of the equation, they aren't the full thing. A big part of it was looking for a method to make things modular. That is the true purpose of header files.

Modern compilers have the same issue, they just handle it differently. Rather than working as just a compiler, modern compilers tend to act as both compiler and linker. So in the end, the VS C# compiler is really just saying "OK, I don't know where this function is; I'll find it later," and then throwing an error if it is never found. The issue with this method is that you have to feed the compiler all the files that are going to be compiled. That runs into a whole slew of problems (which are apparently not considered too big an issue), such as the fact that you can't build a project modularly, i.e. make a change to one file without having to rebuild the entire project.
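That modularity argument can be made concrete with a rough C sketch. The names are invented for illustration, and the three "files" are shown as one listing: the caller is compiled against the prototype alone, so editing the implementation body only forces that one file to recompile, plus a relink.

```c
/* shapes.h -- the shared interface: callers see only this prototype */
double circle_area(double r);

/* caller.c -- compiled against shapes.h alone; it never sees the body,
   so changing the implementation does not force this file to recompile */
double ring_area(double outer, double inner)
{
    return circle_area(outer) - circle_area(inner);
}

/* shapes.c -- the implementation; the linker wires up the calls above */
double circle_area(double r)
{
    return 3.14159265358979 * r * r;
}
```

In a real build each `.c` file would be compiled to its own object file and the linker would join them, which is exactly the modular rebuild being discussed.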
 
It's as Scali said, since the "include" directive literally means "take this code and copy it here". The problem you run into with large projects is that you don't want all your source code in just one file. Includes with prototypes let you split it up nice and neatly.

It comes down to the way C was set up initially (and honestly, most compiled languages are this way). The compiler was meant to go through one file at a time and the linker was supposed to hook those files together. While resources may have been part of the equation, they aren't the full thing. A big part of it was looking for a method to make things modular. That is the true purpose of header files.

Modern compilers have the same issue, they just handle it differently. Rather than working as just a compiler, modern compilers tend to act as both compiler and linker. So in the end, the VS C# compiler is really just saying "OK, I don't know where this function is; I'll find it later," and then throwing an error if it is never found. The issue with this method is that you have to feed the compiler all the files that are going to be compiled. That runs into a whole slew of problems (which are apparently not considered too big an issue), such as the fact that you can't build a project modularly, i.e. make a change to one file without having to rebuild the entire project.

Are you saying C# can't incrementally compile? That's crazy. I have apps that take 45 minutes to compile from a clean build. But I can go in, modify one file, and have it rebuild the assembly in a few seconds.

Incremental compilation has been in EVERY version of .NET since its inception. If header files still had a use, they wouldn't have been dropped from every modern language.
 
As said, .NET doesn't do a full compile to native code. So the 'incremental compile' step is done at runtime/install time, when the VM compiles to native code.
 
As said, .NET doesn't do a full compile to native code. So the 'incremental compile' step is done at runtime/install time, when the VM compiles to native code.

I have projects that take 45 minutes to do a complete build.

I can "modify" those projects and have a completely new assembly built in a matter of seconds.

That is the definition of incremental build. ("Not having to recompile everything")

There is no JIT'ing involved in the above scenario.
 
Yes, but part of the incremental compiling step of a classic compiler is now moved to the JIT phase.
With header files, the code is injected directly into your source file (inline functions and all), and recompiled that way.
So if you edit a header, you trigger a recompile on all code that includes that header.

With a JIT compiler, the inlining of functions, constants, etc. can be postponed until JIT time. The result is that changes in one assembly do not necessarily trigger a recompile of all dependent assemblies, unlike the above scenario with header files.
 
umm, are you reading my example?

45 minutes down to a few seconds... that's because it's NOT doing a full recompile. And it's doing it WITHOUT header files! And there is NO JITting involved here (as I didn't run either assembly).

Native vs Managed code is irrelevant to this discussion!
 
umm, are you reading my example?

45 minutes down to a few seconds... that's because it's NOT doing a full recompile. And it's doing it WITHOUT header files! And there is NO JITting involved here (as I didn't run either assembly).

Native vs Managed code is irrelevant to this discussion!

I never argued against your example, did I?
I am just saying that it's not a 'full recompile' in the way a classic compiler would do it. It will be faster anyway, as it's doing less work.
So are you reading my posts? It sounds like you're completely absorbed in what you're trying to say, and not even bothering to pay enough respect to your discussion partners to think about what they're trying to say.
 
I never argued against your example, did I?
I am just saying that it's not a 'full recompile' in the way a classic compiler would do it. It will be faster anyway, as it's doing less work.
So are you reading my posts? It sounds like you're completely absorbed in what you're trying to say, and not even bothering to pay enough respect to your discussion partners to think about what they're trying to say.


Let's assume the C# compiler doesn't do any linking, and the JIT actually does it all (a leap, but I'm trying to work with you here).

So answer me this: are headers REQUIRED for linking?
 
Ah, you're one of those. Still thinks he knows it better (which he doesn't), and then tries to trap others.
Sorry, not playing that game. You're on your own. Cogman and I have given enough info for you to figure it out if you're as smart as you think you are.
 
I'm not cornered (I'd have to be wrong first, which I'm not).
I just hate people like you who try to corner people.
I'm just leaving this to Cogman, if he's interested.
 
Yea, I know... your ego kicked into Alpha Male mode... You have this uncontrollable urge to prove that you know better than us... we understand. And you desperately try to corner us by coming up with far-fetched hypotheses and putting words in our mouths...
I just don't care about you or your ego at all.
 
Yea, I know... your ego kicked into Alpha Male mode... You have this uncontrollable urge to prove that you know better than us... we understand. And you desperately try to corner us by coming up with far-fetched hypotheses and putting words in our mouths...
I just don't care about you or your ego at all.

If someone tries to tell you that you are wrong about something you assume it's because they have an ego?

Or do you think that maybe, just maybe (go read wikipedia), you are wrong?

I actually thought, maybe, just maybe, I am wrong (I know, I had to swallow my HUGE ego to think that), so I went to the search engines and came up with about 10 different forums, FAQs, etc. that all answer the question "Why doesn't C#/Java have header files", and luckily for me (and my enormous internet ego), my thoughts were mostly confirmed.

Though I will admit I was actually wrong about how I thought the GAC worked (but I lost the post I had about the GAC by hitting the back button, and didn't feel like retyping it).
 
Let's assume the C# compiler doesn't do any linking, and the JIT actually does it all (a leap, but I'm trying to work with you here).

So answer me this: are headers REQUIRED for linking?

No, headers aren't required for linking. What is required is some sort of signaling method/signature to let the compiler know where a function call maps to in the source. In most compiled languages (in other words, not bytecode languages), the header file gives the compiler a placeholder which it can call back to later.

Depending on how the compiler/linker system works, this can range from remapping all calls to go directly to the correct function, to keeping the calls pointing at one fixed location and then jumping/calling from there to the correct address (the most common approach).

Compilers work primarily on the basis of giving functions "signatures" and keeping a list of those signatures. As it goes through the compilation process, the compiler assigns each function a signature and references back to those signatures. If it runs into a function call that it has no signature for, it freaks out (in the C/C++ world) and aborts. That's where headers come into play: they give the compiler signatures that essentially mean "go to here; the actual location will either be declared later or the linker will handle it."
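The "signature first, definition later" behaviour shows up even without a separate header file. A minimal C sketch (function names invented for illustration):

```c
/* Forward declaration: hands the compiler the signature before any use.
   Without this line, a C compiler reading top to bottom would reject
   the call inside call_square(), since it has no signature on record. */
int square(int x);

int call_square(int x)
{
    return square(x);   /* fine: the signature is already known */
}

/* The definition only appears here; the declaration above
   "held its place" until the linker/compiler resolved it. */
int square(int x)
{
    return x * x;
}
```

A header file is essentially a bundle of such forward declarations, shared across files via `#include`.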

As for the C#/Java method of doing things, I'm not entirely sure how it pulls the magic off. Most likely, it has a multi pass build system where it generates all signatures for all functions. But that is just speculation.

Just browsing the object directories, it has a couple of files with "references".cache and "referencesInput".cache My bet would be that those files contain information about where all the functions line up in the compiled code (thus saving build time). A complete rebuild most likely nukes those files.
 
Or do you think that maybe, just maybe (go read wikipedia), you are wrong?

I really wonder what you try to say with that wikipedia link anyway.
My point didn't exactly revolve around header files.
I also didn't say what you were saying was *wrong*, just that it was an incomplete/overly simplified view of the situation. I just tried to add some of the missing nuances.
You just felt attacked. So you try to corner me (which spells E-G-O). I don't like that.
 
Compilers work primarily on the basis of giving functions "signatures" and keeping a list of those signatures
Signatures can also be referred to as "forward declarations", from Wiki:

Newer compiled languages (such as Java, C#) do not use forward declarations; identifiers are recognized automatically from source files and read directly from dynamic library symbols. This means header files are not needed
 
I really wonder what you try to say with that wikipedia link anyway.
My point didn't exactly revolve around header files.
I also didn't say what you were saying was *wrong*, just that it was an incomplete/overly simplified view of the situation. I just tried to add some of the missing nuances.
You just felt attacked. So you try to corner me (which spells E-G-O). I don't like that.

I didn't feel attacked, I felt ANNOYED that you were ignoring my points and kept talking around them.
 
I didn't feel attacked, I felt ANNOYED that you were ignoring my points and kept talking around them.

That's what happens when you don't disagree, but just try to add nuances. You don't argue the points. Which you interpret as 'ignoring', and 'talking around them'.
You make it into an "I'm right and you're wrong". Which spells E-G-O.
I never said you were wrong... and nothing you said or linked has conflicted with anything I said either. So how will you deal with that completely new situation in your life, where not everything is good and evil, black and white?
 
Late to this thread, but imo definitely do not use code structure tricks to make the method list more accessible. C# is a managed language and is designed to work intimately with the development environment. As mentioned somewhere above you could use #region to do this, and you'll find that the editor already provides automatic outlining for methods and bodies. Right click the doc and play with the outlining options to see what I mean. Personally I don't like #region. I suggest moving away from the idea of the editor as the means of browsing your code, and coming to rely much more heavily on the class view (tab next to solution explorer) and the drop-down method quick jump list. VS2010 includes a huge number of options for discovering and navigating code. Free yourself from the editor 🙂.
 
Late to this thread, but imo definitely do not use code structure tricks to make the method list more accessible. C# is a managed language and is designed to work intimately with the development environment. As mentioned somewhere above you could use #region to do this, and you'll find that the editor already provides automatic outlining for methods and bodies. Right click the doc and play with the outlining options to see what I mean. Personally I don't like #region. I suggest moving away from the idea of the editor as the means of browsing your code, and coming to rely much more heavily on the class view (tab next to solution explorer) and the drop-down method quick jump list. VS2010 includes a huge number of options for discovering and navigating code. Free yourself from the editor 🙂.

🙂 So late to the game 😛

It will take some getting used to. I've just done so much lower-level stuff that moving to a higher-level language is really quite... interesting. It kind of throws you for a loop, not having to think about things like "But wait, what about freeing up memory!"
 