Why C is terrible, and Java/C# are Great


dinkumthinkum

Senior member
I find it somewhat depressing that, on this forum, having a diverse knowledge of programming languages is knowing everything from C to Java...to C#.
 

Cogman

Lifer
I find it somewhat depressing that, on this forum, having a diverse knowledge of programming languages is knowing everything from C to Java...to C#.

Who made that claim?

We focus mainly on those three because they're the most popular languages out there (and for good reason). Beyond those, I've programmed in Pascal, assembly, PHP, JavaScript, Verilog (HDLs can be programming languages too :)), etc. Yet my language of choice is generally C++.

Who cares about esoteric languages? Knowing how to program in Scheme is far less important than knowing how programming works.
 

dinkumthinkum

Senior member
Who cares about esoteric languages? Knowing how to program in Scheme is far less important than knowing how programming works.

I didn't have Scheme in mind, but that works. I have a lot more confidence in a good Scheme programmer than a good PHP programmer.

Why? Because if you have an understanding of higher-order functional programming then you know how to write cleaner, shorter, better programs in any language, in any paradigm.

Last time I checked, being "esoteric" doesn't make something bad. Winning a popularity contest doesn't make something necessarily good. Need I point out the obvious examples?

Ten years ago I was a hardcore C hacker, on the fence about C++ (and that OOP crap), didn't care for Java, worked with PHP and I thought that was everything the world had to offer. Then I was introduced to Scheme and CL, and eventually went far beyond those languages.

I am still a hardcore C hacker, still on the fence about C++ (but in a different way), still don't care for Java, and now I wouldn't touch PHP even with a ten-foot pole. I don't program in Scheme or CL anymore, having moved on to ML and Haskell for my application needs. And when I do write C (and occasional C++) code, I have a very clear understanding of the limitations of those languages, and the ways I can write better and cleaner code despite those limitations. It's a perspective gained by looking back from a higher level.
 

Cogman

Lifer
I didn't have Scheme in mind, but that works. I have a lot more confidence in a good Scheme programmer than a good PHP programmer.

Why? Because if you have an understanding of higher-order functional programming then you know how to write cleaner, shorter, better programs in any language, in any paradigm.

Last time I checked, being "esoteric" doesn't make something bad. Winning a popularity contest doesn't make something necessarily good. Need I point out the obvious examples?

Ten years ago I was a hardcore C hacker, on the fence about C++ (and that OOP crap), didn't care for Java, worked with PHP and I thought that was everything the world had to offer. Then I was introduced to Scheme and CL, and eventually went far beyond those languages.

I am still a hardcore C hacker, still on the fence about C++ (but in a different way), still don't care for Java, and now I wouldn't touch PHP even with a ten-foot pole. I don't program in Scheme or CL anymore, having moved on to ML and Haskell for my application needs. And when I do write C (and occasional C++) code, I have a very clear understanding of the limitations of those languages, and the ways I can write better and cleaner code despite those limitations. It's a perspective gained by looking back from a higher level.

Point taken. I myself am far less impressed by "I program in JavaScript" than by "I program in Lisp".

I wasn't trying to say that esoteric languages are bad, just that knowledge of them isn't required to have a diverse knowledge of programming, or to be a good programmer. My metric for a good programmer doesn't depend on the languages you know (crappy programmers exist in every language).

One interesting thing you said: "...I can write better and cleaner code despite those limitations. It's a perspective gained by looking back from a higher level." I find that particularly interesting because, in my experience, I've learned to write better code not from the high-level languages I've learned, but from the low-level ones. I've found it invaluable to actually know what is going on in the background. That's something a high-level language can quickly hide.

For example, in a functional language, dynamic programming might not be the easiest thing to implement, and it might not make a whole lot of sense in that context. Yet from a low-level standpoint, the benefits of dynamic programming (where applicable) are pretty dang substantial.

BTW, might I ask what you have against OOP? It is a great tool when used correctly.
 

dinkumthinkum

Senior member
Actually, JavaScript has many Lisp-like features, including first-class higher-order functions. You may not see many JavaScript programmers take advantage of them, but they're there.

You are right that you can certainly become a good programmer without exposure to many programming languages; I just think it's less likely.

It is helpful to know both the high and low level. You may use lexically-scoped functions that close over variable bindings, but you also know that the compiler implements this by packaging a pointer to an "environment" structure along with the function pointer. Or that the parametric polymorphism you enjoy is provided through "boxing" of data into word-sized descriptors. Or templates in some cases. The garbage collector makes your life easier, but your choice of parameters affects how and when your data is tenured, whether it is compacted or merely traced.
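To make the closure part concrete, here's a hedged Java sketch (my illustration, nothing from a real codebase; it borrows a java.util.function interface for brevity). The anonymous class object is literally the "environment structure plus function pointer" package: the compiler generates a class whose synthetic field holds the captured binding.

```java
import java.util.function.IntSupplier;

public class ClosureDemo {
    static IntSupplier counterFrom(final int base) {
        return new IntSupplier() {      // javac emits a class (ClosureDemo$1)...
            public int getAsInt() {     // ...with a synthetic field holding 'base'
                return base + 1;        // the body reads the captured copy
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(counterFrom(41).getAsInt()); // prints 42
    }
}
```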

But take a purely low-level perspective and you can get lost in the details. There is a great deal of complexity in managing memory in a large library or project; just look at what happens to libraries like GTK or programs like Mozilla, or even gcc. Writing in a low-level language can give you very fine control, but it can make implementing more efficient data structures a much more daunting task. What's the use of micro-optimization if the algorithm you picked has worse asymptotic complexity?

Dynamic programming is an interesting example. In a functional programming language like Haskell, it seems daunting because you're not allowed to modify variables, which is why I think you brought it up. But dynamic programming is actually one of the simplest and most natural techniques to use in Haskell, because of lazy evaluation. Just reify your algorithm in terms of a concrete data structure (like an array), and the compiler takes care of caching results for you.
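You can even transplant the lazy-array trick into a strict language by hand. Here's a hypothetical Java rendering (the LazyDP and memo names are mine, and the lambda syntax is modern Java used for brevity): each table cell is a memoized thunk, and forcing one cell transparently forces and caches the cells it depends on, which is exactly what Haskell's runtime does for you automatically.

```java
import java.util.function.Supplier;

public class LazyDP {
    // Wrap a computation so it runs at most once and caches its result.
    static <T> Supplier<T> memo(Supplier<T> s) {
        return new Supplier<T>() {
            private T value;
            private boolean done;
            public T get() {
                if (!done) { value = s.get(); done = true; }
                return value;
            }
        };
    }

    public static void main(String[] args) {
        int n = 50;
        @SuppressWarnings("unchecked")
        Supplier<Long>[] fib = new Supplier[n + 1];   // the DP table, as thunks
        for (int i = 0; i <= n; i++) {
            final int k = i;
            fib[k] = memo(() -> k < 2 ? (long) k
                                      : fib[k - 1].get() + fib[k - 2].get());
        }
        System.out.println(fib[n].get()); // 12586269025
    }
}
```

In Haskell the memo/thunk machinery disappears entirely: you define the array in terms of itself and lazy evaluation does the caching.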
 

dinkumthinkum

Senior member
I just feel that OOP is overly hyped. The first problem is that it is not clearly defined. There are as many definitions of OOP as there are OO languages. Smalltalkers will claim that static types can never be OO, Common Lispers prefer generic functions over message passing, and C++ programmers insist on their rigid class hierarchies and prefer multiple inheritance over Java's single inheritance. There are about 20 zillion other variations which would take me forever to enumerate.

And after all that, it seems to me that OO is just one way of structuring programs and is not ideal for all purposes. I think that orienting the entire language around it can be stultifying because it forces you to mold your ideas into that form regardless of whether it is a good fit.
 

Scali

Banned
I think a huge drawback of GC is the overhead.
Yes, it's fine for most applications, but when you need time-critical stuff or high-performance code, GC can get in the way.
With Java or .NET you're pretty much forced to create all objects on the heap, as there is no concept of stack allocation. The VMs are optimized for fast allocation, but that's not where it hurts.
Where it hurts is when you have to allocate tons of temporary objects... They have to be GC'ed at some point, and that can temporarily lock your application (it can't collect while your code is running), which is unacceptable when you want low-latency response.

I have written code to 'work around' the GC by using object pooling, so those temporary objects were actually not temporary but were re-used, and didn't need to be created and GC'ed all the time. Or sometimes you'd just use a few arrays of primitives rather than real objects, since an array of primitives counts as just one object, and as such just one reference for the GC to inspect. This can massively reduce the time the GC spends in the mark phase.
But that pretty much defeats the idea of a GC. It's supposed to make the programmer's life easier.
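The pooling idea looks roughly like this (a simplified sketch, not my actual production code; the Vec3Pool name is made up):

```java
import java.util.ArrayDeque;

class Vec3Pool {
    private final ArrayDeque<float[]> free = new ArrayDeque<float[]>();

    float[] acquire() {
        float[] v = free.poll();             // reuse a retired instance if available
        return v != null ? v : new float[3]; // allocate only on a pool miss
    }

    void release(float[] v) {
        free.push(v); // hand it back instead of leaving garbage for the GC
    }
}
```

The hot loop calls acquire()/release() instead of new, so the heap never fills up with short-lived garbage in the first place.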
 

dinkumthinkum

Senior member
Your notion of GC sounds like it's circa 1980s. Generational GCs were created to address that sort of issue. I've seen plenty of old Lisp code that also maintained pools of objects, but it's entirely pointless with modern generational GCs, which are extremely fast at reclaiming short-lived objects, much more so than any hand-cooked "solution".

In short, a generational collector starts from the observation that most objects are either ephemeral or persistent, and splits memory into several spaces. Young objects are scanned more often than old objects. The nursery space is fairly small and can be scanned in a matter of microseconds; this is typically called a scavenge. Once an object survives a number of scavenges, it is promoted to a space that is scanned much less often. The result is a GC that works well for 90% of applications, sometimes with a little fine-tuning. (Obviously, real-time applications will demand an RT-friendly GC.)
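On HotSpot, for instance, that fine-tuning is exposed as command-line flags (the values here are purely illustrative, and MyApp is a placeholder):

```
java -Xmn256m -XX:MaxTenuringThreshold=6 MyApp
```

-Xmn sizes the young generation (the nursery), and MaxTenuringThreshold caps how many scavenges an object can survive before it is tenured.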

Most run-time systems these days employ generational GC for this reason. Now, the other thing that amuses me: as an OS programmer I have to deal with the innards of memory systems, and I can tell you that there is nothing simple about the design of manual memory management systems. I get the feeling a lot of people think that malloc/free are basically "free", and nothing could be further from the truth. Go ahead and take a look at your favorite open-source OS's implementation of libc malloc someday; there are a number of algorithms floating around. They are all essentially compromises, heuristics if you will, to deal with the fundamentally hard problem of fragmentation. Not only is it difficult, it can be unpredictable: for example, with a best-fit algorithm you can easily run into situations where malloc requires an unusually large amount of time to run.
 

Scali

Banned
Your notion of GC sounds like it's circa 1980s. Generational GCs were created to address that sort of issue. I've seen plenty of old Lisp code that also maintained pools of objects, but it's entirely pointless with modern generational GCs, which are extremely fast at reclaiming short-lived objects, much more so than any hand-cooked "solution".

Actually, I was talking about present-day Java and .NET code (and thanks for the arrogance).
Yes, the theory all sounds great, but if it doesn't work in practice...
As you say, it works for 90%... I write that other 10%.

As for malloc/free... no, but stack allocation is pretty much free. You just create a stack frame of the size you require. It doesn't really matter how many objects there are, or how big they are; that just changes the offset.
That's something that Java/.NET don't allow you to do (okay, C# has stackalloc, but that won't let you use the memory for regular objects).
In asm/C/C++ I just wouldn't use malloc/free(new/delete) in certain situations in the first place.
 

dinkumthinkum

Senior member
Sorry, but you described a situation straight out of the old code I used to help maintain that was written over 20 years ago.

Anyway, GC isn't there to solve the problem of dynamic extent allocation. It's that indefinite extent stuff that causes all the problems.
 

Scali

Banned
Sorry, but you described a situation straight out of the old code I used to help maintain that was written over 20 years ago.

But as I said a few posts up:
"While we primarily use C# at work now, I find myself using other languages still.
I'll use C/C++ for certain low-level or performance-critical stuff, or even assembly when required, and wrap it in .NET, so it can be used from a C# user interface."

So I develop C# code *now*, with .NET 4.0 even, which has the latest and greatest GC: a new-and-improved background collection algorithm with optimizations to minimize blocking of your application, where a background gen2 collection can be pre-empted by a gen0 collection if required, rather than blocking allocation completely until the gen2 pass is complete.
(Yes I know all the theory too).

Basically we do the entire UI with WPF, and try to put as much business logic into C# code as possible, because of the speed of development and ease of maintenance... but there are just cases where you need to have that bit of extra control that .NET won't give you. The GC is just an integral part of the VM, and your code has to go by its rules.
 

Markbnj

Elite Member / Moderator Emeritus
I just feel that OOP is overly hyped. The first problem is that it is not clearly defined. There are as many definitions of OOP as there are OO languages. Smalltalkers will claim that static types can never be OO, Common Lispers prefer generic functions over message passing, and C++ programmers insist on their rigid class hierarchies and prefer multiple inheritance over Java's single inheritance. There are about 20 zillion other variations which would take me forever to enumerate.

And after all that, it seems to me that OO is just one way of structuring programs and is not ideal for all purposes. I think that orienting the entire language around it can be stultifying because it forces you to mold your ideas into that form regardless of whether it is a good fit.

It's as clearly defined as 'procedural' I guess. The term object-oriented should probably be thrown out now. It's been baking for almost 30 years and it's overdone. But there is a fundamentally different way of looking at software embedded in the core of an object-based approach. Some people get it. Some people still, after decades, don't.

If you listen to a statement of a problem and your first inclination is to see it as a series of operations then you don't get it. If you listen to the same problem and your first instinct is to identify the things that are interacting and what their behavior is, you do get it.
 

KIAman

Diamond Member
It's as clearly defined as 'procedural' I guess. The term object-oriented should probably be thrown out now. It's been baking for almost 30 years and it's overdone. But there is a fundamentally different way of looking at software embedded in the core of an object-based approach. Some people get it. Some people still, after decades, don't.

If you listen to a statement of a problem and your first inclination is to see it as a series of operations then you don't get it. If you listen to the same problem and your first instinct is to identify the things that are interacting and what their behavior is, you do get it.

Hehe, it's called OOAD, which OOP really has almost nothing to do with. :p
 

Markbnj

Elite Member / Moderator Emeritus
Hehe, it's called OOAD, which OOP really has almost nothing to do with. :p

I completely disagree with that. If your intent is to state that you can implement an object-oriented design in a procedural language, that is both true and beside the point. What I am saying is that you cannot create an object-oriented design, or implement it in any language, until you are able to think about the problem in those terms. The post that I was responding to attempted to relegate object-oriented implementations to the category of "just another way to structure a program," and that couldn't be further from the truth. It's about using higher levels of abstraction to organize the core concepts of a program, and then having the implementation flow from that.
 

dinkumthinkum

Senior member
It's about using higher levels of abstraction to organize the core concepts of a program, and then having the implementation flow from that.

I would call that a trivial statement, since it could be applied to any design methodology. Besides "obfuscated programming" anyway.

How does that distinguish OOP in particular?
 

Markbnj

Elite Member / Moderator Emeritus
I would call that a trivial statement, since it could be applied to any design methodology. Besides "obfuscated programming" anyway.

How does that distinguish OOP in particular?

Well, because the abstractions are higher level than a procedure, or a group of procedures, in that they encompass both state and behavior from the beginning. It's easy for procedurally-oriented developers to be dismissive of those ideas and claim that we were doing it all along before object-oriented languages started to go mainstream, but we weren't. The ideas were out there, and so were some of the languages, but the average developer thought in terms of functions, groups of functions, and the data they operated on. When we began to unite these ideas behind recognizable things with interfaces and encapsulated state, our systems became more decoupled, and more robust. A toy Java sketch of what I mean (illustrative only; the Account name is mine):
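```java
// State and behavior united behind an interface: callers see what an
// Account can do, never the balance field itself.
interface Account {
    void deposit(long cents);
    long balance();
}

class SimpleAccount implements Account {
    private long cents; // encapsulated state, invisible outside this class

    public void deposit(long c) {
        if (c < 0) throw new IllegalArgumentException("negative deposit");
        cents += c;
    }

    public long balance() { return cents; }
}
```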

I would also point out that the object-oriented paradigm has stood the test of time, scrutiny, and real-world application for more than twenty years now. It doesn't require me to defend its core ideas here :).
 

dinkumthinkum

Senior member
Well if I just listened to conventional wisdom and did what everyone else did, there wouldn't be much to talk about. Lots of things have been used for many years but are still considered lousy, especially in the computer world.

I would say you are arguing for explicit module systems, and I would completely agree about that. But I still don't see why that requires OOP. It is true that most OOP languages have some way of enforcing modularity, but it is also true that many non-OOP languages have that property as well: for example, Standard ML, which has the most comprehensive module system ever devised for a "real" language (whatever you may think of it).

You could go further and say that any language that permits unrestricted side-effects breaks modularity: any old code can go ahead and do nasty things to your computer at any time, and there's ultimately nothing you can do without peeking at the source (or using some kind of hardware protection, or maybe SFI). Instead of splitting up state into a bunch of objects, you should instead go stateless. Every function (not procedure, now) is completely defined by its inputs and outputs. This does wonders for modularity because functions are easy to compose, there are no re-entrancy issues, and you know the system can't be funked up by some sneaky behavior. Add a proper type system with parametric polymorphism and you've got code re-use and abstraction too.
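A tiny hypothetical Java illustration of the compositionality point (modern Java syntax for brevity; a functional language makes this the default rather than a discipline):

```java
import java.util.function.Function;

public class Compose {
    // No shared state: each result depends only on the inputs, so
    // pieces glue together mechanically, with no re-entrancy hazards.
    static <A, B, C> Function<A, C> compose(Function<A, B> f, Function<B, C> g) {
        return a -> g.apply(f.apply(a));
    }

    public static void main(String[] args) {
        Function<Integer, Integer> doubleIt = x -> x * 2;
        Function<Integer, String> show = x -> "value = " + x;
        System.out.println(compose(doubleIt, show).apply(21)); // value = 42
    }
}
```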

Now some people say that this is too restrictive, that you have to have side-effects to write real programs. The Haskell folks are doing a great job showing that it is not a limitation, in fact, quite the opposite -- which is why I enjoy following their work. Obviously they have their problems too, but that's why I keep looking for more ideas.

Back to OO languages: I find the traditional concept of a "method" to be tantamount to a function with an implicit argument (self) and privileged access to a single datatype. This is rather limiting; even in something as simple as a Dictionary, you may want to work with more than one datatype in that manner. Witness the accumulating array of "design patterns", each of which essentially codifies a particular workaround as canon. What you find instead is STL-style template meta-programming in C++ and Generics in Java, each of which brings parametric-polymorphism-style programming to its language. I think that was a big improvement for both, don't you?
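The "implicit argument" point, spelled out in a quick sketch (hypothetical code, mine):

```java
class Counter {
    int n;

    void add(int k) { this.n += k; }        // method: the receiver is implicit

    static void add(Counter self, int k) {  // the same operation with the
        self.n += k;                        // receiver made explicit
    }
}
```

The instance method buys you nothing the static version doesn't have, except privileged, built-in access to exactly one datatype.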
 

Markbnj

Elite Member / Moderator Emeritus
Well if I just listened to conventional wisdom and did what everyone else did, there wouldn't be much to talk about. Lots of things have been used for many years but are still considered lousy, especially in the computer world.

Perhaps, but in most such cases in engineering and the sciences there is at least an underground consensus that the accepted "thing", whatever it is, needs to be challenged. I see no such consensus in software engineering with regard to the overall applicability and durability of object-oriented techniques. On the contrary, they have become accepted and nearly canon for application development. The application and penetration of these ideas vary across certain sub-disciplines such as embedded systems, but the lag is widely accepted to be a result of environmental limitations, not limitations in the usefulness of an object paradigm.

I would say you are arguing for explicit module systems, and I would completely agree about that. But I still don't see why that requires OOP. It is true that most OOP languages have some way of enforcing modularity, but it is also true that many non-OOP languages have that property as well: for example, Standard ML, which has the most comprehensive module system ever devised for a "real" language (whatever you may think of it).

I don't think we need to find another name for it, nor do I think that it matters what language is used to implement it, as long as it supports the paradigm as fully as the environment allows. Functional languages offer a different paradigm that I frankly don't understand well. How they work in data-rich application environments, where common tasks include reading and processing large lists of entities from a persistent store, gathering and processing user input, etc., is unclear to me, so I have to withhold judgement on whether/how they challenge or extend the object paradigm. I haven't seen them gaining a lot of traction in domains where such applications are common.

You could go further and say that any language that permits unrestricted side-effects breaks modularity: any old code can go ahead and do nasty things to your computer at any time, and there's ultimately nothing you can do without peeking at the source (or using some kind of hardware protection, or maybe SFI). Instead of splitting up state into a bunch of objects, you should instead go stateless. Every function (not procedure, now) is completely defined by its inputs and outputs. This does wonders for modularity because functions are easy to compose, there are no re-entrancy issues, and you know the system can't be funked up by some sneaky behavior. Add a proper type system with parametric polymorphism and you've got code re-use and abstraction too.

Theoretically I agree, but again I don't fully understand how to bring this into play in, say, a requirement to read in and report on 10,000 book records, or process an uploaded image and save it to disk. I think pure stack-based machine models have their uses, and those may grow, but I don't know that I see them challenging object-oriented (cum procedural) languages for the application developer's mind share.

Now some people say that this is too restrictive, that you have to have side-effects to write real programs. The Haskell folks are doing a great job showing that it is not a limitation, in fact, quite the opposite -- which is why I enjoy following their work. Obviously they have their problems too, but that's why I keep looking for more ideas.

Back to OO languages: I find the traditional concept of a "method" to be tantamount to a function with an implicit argument (self) and privileged access to a single datatype. This is rather limiting; even in something as simple as a Dictionary, you may want to work with more than one datatype in that manner. Witness the accumulating array of "design patterns", each of which essentially codifies a particular workaround as canon. What you find instead is STL-style template meta-programming in C++ and Generics in Java, each of which brings parametric-polymorphism-style programming to its language. I think that was a big improvement for both, don't you?

I think your description of design patterns as codified workarounds is extremely narrow and unfairly self-serving with respect to your general argument. Within that view, virtually everything we do to translate human intent into symbols a computer can interpret and act on is a workaround. Also, I am not sure I understand the relevance, but yes, parameterized type definition was a nice addition to C++, Java, and C#, although it's a fair bit easier to understand and use in either of the latter two than it ever was in Bjarne's baby.
 

dinkumthinkum

Senior member
Been busy, catching up...

Perhaps, but in most such cases in engineering and the sciences there is at least an underground consensus that the accepted "thing", whatever it is, needs to be challenged. I see no such consensus in software engineering with regard to the overall applicability and durability of object-oriented techniques. On the contrary, they have become accepted and nearly canon for application development. The application and penetration of these ideas vary across certain sub-disciplines such as embedded systems, but the lag is widely accepted to be a result of environmental limitations, not limitations in the usefulness of an object paradigm.

I'm not sure I understand what you are trying to say. If you are saying that you have not seen anyone challenging the conventional wisdom on OOP, I would say that you haven't been looking, since I know of plenty of people and language communities doing just that.

I don't think we need to find another name for it, nor do I think that it matters what language is used to implement it, as long as it supports the paradigm as fully as the environment allows. Functional languages offer a different paradigm that I frankly don't understand well. How they work in data-rich application environments, where common tasks include reading and processing large lists of entities from a persistent store, gathering and processing user input, etc., is unclear to me, so I have to withhold judgement on whether/how they challenge or extend the object paradigm. I haven't seen them gaining a lot of traction in domains where such applications are common.

In practice, most well-implemented functional language compilers and runtime environments do as well as any other high-level language. Typically there is also a large gain for the programmer, because of the expressive power of these languages.

Theoretically I agree, but again I don't fully understand how to bring this into play in, say, a requirement to read in and report on 10,000 book records, or process an uploaded image and save it to disk. I think pure stack-based machine models have their uses, and those may grow, but I don't know that I see them challenging object-oriented (cum procedural) languages for the application developer's mind share.

I might be misreading this, but I just want to be sure you know that "stack-based" is not a synonym for "functional". While a high-level language does not specify a machine model, it is typically implemented using the usual register, stack, and GC'ed heap memory model. Some more interesting implementations use heap-allocated stack frames because they support first-class continuations. But I digress.

I think your description of design patterns as codified workarounds is extremely narrow and unfairly self-serving with respect to your general argument. Within that view, virtually everything we do to translate human intent into symbols a computer can interpret and act on is a workaround.

I think you misunderstood my point. Let me rephrase, starting from some first principles.

The reason you have "function abstraction" is to avoid writing the same code over and over again. The reason you have "type abstraction" (a.k.a. parametric polymorphism) is to avoid writing the same code over and over again for different types. Some languages even have "syntax abstraction". There are many variations on this theme.

"Design patterns" as they are taught to programmers are the opposite principle: they are code that you repeat over and over again because you cannot avoid doing so. In that way, they are workarounds for the flaw being that the language is missing some facility for abstraction.

Also, I am not sure I understand the relevance, but yes, parameterized type definition was a nice addition to C++, Java, and C#, although it's a fair bit easier to understand and use in either of the latter two than it ever was in Bjarne's baby.

Yeah, C++ templates are a massive hack. A useful one, but still. For Java Generics, at least they went to the right guy to design them (Wadler) even if he did have to work within the confines of a broken language (Java).

Anyway, my point was that the real power in those languages came from the type-abstraction facilities, not the OOP facilities. Consider Java pre-Generics: an awful mess of downcasting and miserable "Object" polymorphism. Or C++ without templates: you're left with type-unsafe hacks like C-style "void *" (or reliance on hideous CPP macros).
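For anyone who's forgotten what that mess looked like, a before/after sketch (illustrative code, mine):

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    public static void main(String[] args) {
        // Pre-Generics: everything is Object; every read is a downcast
        // that can fail at run time.
        List raw = new ArrayList();
        raw.add("hello");
        String s1 = (String) raw.get(0); // unchecked; a wrong guess throws ClassCastException

        // With Generics: the element type is checked at compile time.
        List<String> typed = new ArrayList<String>();
        typed.add("hello");
        String s2 = typed.get(0); // no cast needed
        System.out.println(s1 + " " + s2);
    }
}
```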

Sure, you could work that way, and lots of people did. Doesn't make it a good idea, not anymore at least.