dinkumthinkum
Senior member
- Jul 3, 2008
I find it somewhat depressing that, on this forum, having a diverse knowledge of programming languages is knowing everything from C to Java...to C#.
I agree, but you don't see too many new apps written in Pascal or COBOL either.
Who cares about esoteric languages? Knowing how to program in Scheme is far less important than knowing how programming works.
I didn't have Scheme in mind, but that works. I have a lot more confidence in a good Scheme programmer than in a good PHP programmer.
Why? Because if you have an understanding of higher-order functional programming then you know how to write cleaner, shorter, better programs in any language, in any paradigm.
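For what it's worth, here's the claim in a minimal sketch (Python purely as neutral ground; the names are made up for illustration). The explicit loop and the pipeline compute the same thing, but the pipeline is assembled from small pieces that can be reused anywhere:

```python
from functools import reduce

def compose(*fs):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

# The same transformation written twice: once as an explicit loop...
def shout_initials_loop(names):
    out = []
    for n in names:
        out.append(n[0].upper() + "!")
    return out

# ...and once as a pipeline of small, independently reusable pieces.
first_letter = lambda s: s[0]
shout = lambda s: s.upper() + "!"
shout_initial = compose(shout, first_letter)

names = ["ada", "grace", "alan"]
assert shout_initials_loop(names) == [shout_initial(n) for n in names] == ["A!", "G!", "A!"]
```

The point isn't the three-line example; it's that once you think in terms of composing functions, the loop version starts to look like boilerplate in any language.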
Last time I checked, being "esoteric" doesn't make something bad. Winning a popularity contest doesn't make something necessarily good. Need I point out the obvious examples?
Ten years ago I was a hardcore C hacker, on the fence about C++ (and that OOP crap), didn't care for Java, worked with PHP and I thought that was everything the world had to offer. Then I was introduced to Scheme and CL, and eventually went far beyond those languages.
I am still a hardcore C hacker, still on the fence about C++ (but in a different way), still don't care for Java, and now I wouldn't touch PHP even with a ten-foot pole. I don't program in Scheme or CL anymore, having moved on to ML and Haskell for my application needs. And when I do write C (and occasional C++) code, I have a very clear understanding of the limitations of those languages, and the ways I can write better and cleaner code despite those limitations. It's a perspective gained by looking back from a higher level.
Your notion of GC sounds like it's from the 1980s. Generational GCs were created to address exactly that sort of issue. I've seen plenty of old Lisp code that also maintained pools of objects, but it's entirely pointless with modern generational GCs, which are extremely fast at reclaiming short-lived objects, far faster than any hand-cooked "solution".
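For anyone who hasn't seen it, the pattern being dismissed looks roughly like this (a generic sketch in Python, not the actual 20-year-old Lisp code). The pool recycles instances by hand, which is exactly the bookkeeping a generational collector makes unnecessary for short-lived objects:

```python
class Pool:
    """Hand-rolled object pool: reuse instances instead of allocating fresh.

    With a generational GC, allocating short-lived objects directly is
    usually faster and simpler than maintaining this free list by hand.
    """
    def __init__(self, factory):
        self._factory = factory
        self._free = []

    def acquire(self):
        # Hand back a recycled object if one is available, else make a new one.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        self._free.append(obj)

pool = Pool(dict)
a = pool.acquire()   # fresh object on first use
pool.release(a)
b = pool.acquire()   # recycled: the same instance comes back
assert a is b
```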
Sorry, but you described a situation straight out of the old code I used to help maintain that was written over 20 years ago.
I just feel that OOP is overly hyped. The first problem is that it is not clearly defined. There are as many definitions of OOP as there are OO languages. Smalltalkers will claim that static types can never be OO, Common Lispers prefer generic functions over message passing, and C++ programmers insist on their rigid class hierarchies and, unlike the Java camp, embrace multiple inheritance. There are about 20 zillion other variations which would take me forever to enumerate.
And after all that, it seems to me that OO is just one way of structuring programs and is not ideal for all purposes. I think that orienting the entire language around it can be stultifying because it forces you to mold your ideas into that form regardless of whether it is a good fit.
It's as clearly defined as 'procedural' I guess. The term object-oriented should probably be thrown out now. It's been baking for almost 30 years and it's overdone. But there is a fundamentally different way of looking at software embedded in the core of an object-based approach. Some people get it. Some people still, after decades, don't.
If you listen to a statement of a problem and your first inclination is to see it as a series of operations then you don't get it. If you listen to the same problem and your first instinct is to identify the things that are interacting and what their behavior is, you do get it.
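The two instincts can be put side by side in a small sketch (Python used only as neutral ground; the cart and its fields are invented for illustration). Same problem, two first reactions:

```python
# Problem as stated: "produce a receipt total for a cart of items."

# Operations-first instinct: a step that acts on raw data.
def receipt_total(prices):
    return sum(prices)

# Things-first instinct: identify the interacting entity and its behavior.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

cart = Cart()
cart.add("book", 12.0)
cart.add("pen", 3.0)
assert cart.total() == receipt_total([12.0, 3.0]) == 15.0
```

At this scale the two are interchangeable; the argument is about which instinct scales better as the problem grows.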
Hehe, it's called OOAD, which OOP really has almost nothing to do with.
It's about using higher levels of abstraction to organize the core concepts of a program, and then having the implementation flow from that.
I would call that a trivial statement, since it could be applied to any design methodology, besides "obfuscated programming" anyway.
How does that distinguish OOP in particular?
Well if I just listened to conventional wisdom and did what everyone else did, there wouldn't be much to talk about. Lots of things have been used for many years but are still considered lousy, especially in the computer world.
I would say you are arguing for explicit module systems, and I would completely agree about that. But I still don't see why that requires OOP. It is true that most OOP languages have some way of enforcing modularity, but it is also true that many non-OOP languages have that property as well: for example, Standard ML, which has the most comprehensive module system ever devised for a "real" language (whatever you may think of it).
You could go further and say that any language that permits unrestricted side-effects breaks modularity: any old code can go ahead and do nasty things to your computer at any time, and there's ultimately nothing you can do without peeking at the source (or using some kind of hardware protection, or maybe SFI). Instead of splitting up state into a bunch of objects, go stateless. Every function (not procedure, now) is completely defined by its inputs and outputs. This does wonders for modularity because functions are easy to compose, there are no re-entrancy issues, and you know the system can't be funked up by some sneaky behavior. Add a proper type system with parametric polymorphism and you've got code re-use and abstraction too.
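A tiny sketch of that last point, using Python as neutral ground (Haskell or ML would enforce the purity; here it holds only by discipline, and the names are made up). Pure functions compose freely, and one parametrically polymorphic definition serves every type:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Pure functions: output depends only on input, so composition is always safe.
def pipe(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda x: g(f(x))

# Parametric polymorphism: one definition, reusable at any type.
def twice(f: Callable[[A], A]) -> Callable[[A], A]:
    return pipe(f, f)

inc = lambda n: n + 1
bang = lambda s: s + "!"

assert twice(inc)(3) == 5            # the same `twice` works at int...
assert twice(bang)("hi") == "hi!!"   # ...and at str, with no subclassing
```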
Now some people say that this is too restrictive, that you have to have side-effects to write real programs. The Haskell folks are doing a great job showing that it is not a limitation, in fact, quite the opposite -- which is why I enjoy following their work. Obviously they have their problems too, but that's why I keep looking for more ideas.
Back to OO languages, I find the traditional concept of a "method" to be tantamount to a function with an implicit argument (self) and privileged access to a single datatype. This is rather limiting: even in something as simple as a Dictionary, you may want to work with more than one datatype in that manner. Witness the accumulating array of "design patterns", each of which essentially codifies a particular workaround as canon. What you find instead is STL-style template metaprogramming in C++ and Generics in Java, both of which bring parametric-polymorphism-style programming to their respective languages. I think that was a big improvement for both, don't you?
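To make the contrast concrete, a hedged sketch in Python (names invented for illustration): a method is tied to one class through its implicit self, while a parametrically polymorphic function can quantify over both of a dictionary's type parameters at once.

```python
from typing import Dict, TypeVar

K = TypeVar("K")
V = TypeVar("V")

# A method: a function with an implicit argument (self) and
# privileged access to exactly one datatype.
class Box:
    def __init__(self, value):
        self.value = value

    def get(self):  # bound to Box and nothing else
        return self.value

# A parametrically polymorphic function: it mentions *both* of a
# dictionary's type parameters at once, which no single-type method can.
def invert(d: Dict[K, V]) -> Dict[V, K]:
    return {v: k for k, v in d.items()}

assert Box(7).get() == 7
assert invert({"a": 1, "b": 2}) == {1: "a", 2: "b"}
```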
Perhaps, but in most such cases in engineering and the sciences there is at least an underground consensus that the accepted "thing", whatever it is, needs to be challenged. I see no such consensus in software engineering with regard to the overall applicability and durability of object-oriented techniques. On the contrary they have become accepted and nearly canon for application development. The application and penetration of these ideas varies across certain sub-disciplines such as embedded systems, but the lag is widely accepted to be a result of environmental limitations, not limitations in the usefulness of an object paradigm.
I don't think we need to find another name for it, nor do I think that it matters what language is used to implement it, as long as it supports the paradigm as fully as the environment allows. Functional languages offer a different paradigm that I frankly don't understand well. How they work in data-rich application environments where common tasks include reading and processing large lists of entities from persistent store, gathering and processing user input, etc., is unclear to me, so I have to withhold judgement on whether/how they challenge or extend the object paradigm. I haven't seen them gaining a lot of traction in domains where such applications are common.
Theoretically I agree, but again I don't fully understand how to bring this into play in, say, a requirement to read in and report on 10,000 book records, or process an uploaded image and save it to disk. I think pure stack-based machine models have their uses, and those may grow, but I don't know that I see them challenging object-oriented (cum procedural) languages for the application developer's mind share.
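Not speaking for the functional camp, but the 10,000-book report might look something like this as a pure pipeline (a sketch in Python as a stand-in, with made-up fields; a real system would read the records from a store rather than synthesize them). The report is just a function of the records, with no mutable state threaded through:

```python
from collections import Counter

# 10,000 synthetic "book records" stand in for rows from a persistent store.
books = [{"title": f"Book {i}", "year": 1990 + (i % 30)} for i in range(10_000)]

def report(records):
    """Count recent books per year: a pure function of its input."""
    recent = (b for b in records if b["year"] >= 2010)
    return Counter(b["year"] for b in recent)

by_year = report(books)
assert len(by_year) == 10  # one bucket per year 2010..2019
assert sum(by_year.values()) == sum(1 for b in books if b["year"] >= 2010)
```

Whether that style stays pleasant once user input and disk writes enter the picture is exactly the part the Haskell folks spend their time on.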
I think your description of design patterns as codified workarounds is extremely narrow and unfairly self-serving with respect to your general argument. Within that view virtually everything we do to translate human intent into symbols a computer can interpret and act on is a workaround.
Also, I am not sure I understand the relevance, but yes, parameterized type definition was a nice addition to C++, Java, and C#, although it's a fair bit easier to understand and use in either of the latter two than it ever was in Bjarne's baby.