## Object Oriented Programming has Failed Us

I’ve been thinking recently about the state of the programmers coming into our industry.  It seems to me that many of the college students who graduate today understand the syntax, but few know how to apply true object oriented principles to the real world.

I recently had a discussion with a friend who confirmed my observations.  Simply put, most people do not think in object oriented terms.

We’ve both spent time teaching other programmers, or having other programmers work for us.  We’ve watched good programmers stumble over this whole concept of object oriented programming.  Some leave understanding it and able to apply it right away.  Others struggle to do the exercises we give them.  Why is that?

When a high school student asks me about getting a Computer Science degree, the first question I ask him is, “Did you like Algebra or Geometry better?  Specifically, did you like the proofs you did in Geometry, or did you like plugging in the numbers in Algebra?”  The reason I ask this question is that programming is a lot like working a proof.  If you didn’t like proofs, if you wouldn’t be willing to spend the bulk of your day working a proof, you should not be a programmer.

But, with object oriented programming, there is another layer on top of this.  You see, proofs are linear, one dimensional problems.  Object oriented programming is at least two dimensional in nature, and maybe even three dimensional.

So, now, my question needs to be modified to include the following: “Have you ever worked on anything three dimensional?  Something you had to see in your mind’s eye and then transfer to the real world?”  Building with Legos, Tinker Toys, or taking a shop class would qualify.

But you can see right away that if I ask both questions, I’ve limited the number of people who are really able to program using object oriented principles to a very small subset.

Back in the day, we moved from C to C++.  I remember that maybe 20% of the programmers were able to make the jump.  Most who switched ended up writing C code using C++ syntax.  I’m pretty sure that’s NOT what Borland or Microsoft had in mind when they gave us the C++ compilers.

I used to think that as we progressed, as kids came out of college, more people would understand object oriented programming because that’s what they started with.  The reality is that many of them do not.  I used to think this had something to do with the education they were getting, that somehow the teachers were spending too much time on the syntax and not enough time on application.  But the more I ponder the problem, the more I realize that the problem is not the students.  You can only do so much with the talent people have.  No, the real problem is that object oriented programming as a mass solution has failed us.

In a world driven by computers, a world that already has too few programmers, do we really want to limit the number of programmers we have available?

Folks, we’ve been looking at the programming language issue from the wrong perspective.  As we develop programming solutions in the future, we need to aim for solutions that most programmers understand.  Solutions that people understand.  Not solutions that mirror reality but solutions that work in the real world.

I really hate to say this, as much as I like object oriented programming, but I really think we might be better off if we went back to procedural programming for the bulk of the applications that we write, and left the object oriented stuff to the people who really understand it and to the problems that cannot be solved any other way.

Meanwhile, back at the lab, we need to come up with a programming environment that most people understand naturally.  We need to do usability studies on the languages we develop just like we test applications we develop for users.  Instead of trying to simplify what we are already doing, which continues to provide solutions that only programmers understand, we need to provide solutions that are no longer programmer centric.

Once we train and use up all the programming talent in the world, we will be forced to do this.  We might as well start now.



### 41 Responses to “Object Oriented Programming has Failed Us”

• ramil:

Right, our teacher in college didn’t really teach us OOP well. He spent most of his time on the syntax.

• An interesting article.

My current thinking is that there needs to be more opportunities and even requirements for mentorship and continuing education in the field.

You can learn enough about OOP to pass tests with an A, but until you have to start to apply the ideas, it never really gels.

Every time I force myself to go back and reread some of the basic material, I find ideas finally clicking into place.

Having started in procedural programming, I instinctively think that’s the wrong route to go. OOP really can result in more understandable, easier to maintain code. But in line with what I understand as your theme, it has to be well-designed OOP. Bad OOP is usually just procedures wrapped in classes.

Programming is more than just understanding syntax. I think the industry as a whole needs to recognize that there is a growth path: we have to expect that brand-new programmers are going to struggle, and we have to learn how to provide continuing education. And I think that would be as true in a pure-procedural world. If it were that much simpler, OOP would never have taken the hold it has.

• Dave:

Stevi,

You are where I was about 3 months ago on this issue. And I agree: IF we continue using OOP as THE way to program, everything you’ve said is true.

However, I don’t have much hope of the industry providing the type of support that’s needed to make OOP really work.

Unless, of course, managers stop treating programmers as little robots they can talk to and code comes out, and start treating them as humans who need to be constantly trained and mentored, and who are allowed to make mistakes.

The problem is systemic and needs a systemic solution. Therefore, I propose trying to find a programming language/environment that works the way we think rather than the way things operate. IOW, neither procedural nor object oriented programming is the ultimate solution. But if I had to pick one or the other as a manager, I’d probably pick procedural, even though as a programmer I believe OOP is a better programming model to work with.

Actually, if my shop was big enough, I’d try to structure my project so that the new programmers were primarily doing procedural coding and the older programmers, who understood it, were doing object oriented programming, and I’d provide a growth path that would mentor the new programmers up to where they could be object oriented programmers. Unfortunately, most shops aren’t big enough to pull that off.

• steve:

OO becomes more useful as programs grow and evolve, the way they do in the real world (ref Koenig and Moo: “Ruminations on C++”). Beginners won’t appreciate its value by doing small, unconnected assignments. Some will catch up once they enter the workforce, but some won’t. OO didn’t replace procedural programming; it added to it. Programming is increasingly multi-paradigm.

The real challenge to OO is coming out of concurrency concerns w.r.t. state. Will dropping back to procedural prepare students for a map-reduce world?

• Dave:

“OO didn’t replace procedural programming; it added to it.”

Unfortunately, the colleges and universities didn’t get the message. OOP is being taught at the expense of procedural programming by professors who barely understand OOP themselves.

The problem is that OOP most closely maps to the problem, and procedural most closely maps to how MOST people solve problems. Again, I am ONLY proposing “dropping back” because it more closely maps to how most people solve problems, with the thought that as a programmer matures, he would then be educated in OOP.

If you think about it, this is exactly what you’ve proposed when you say, “OO becomes useful as programs grow” and “Beginners won’t appreciate its value by doing small, unconnected assignments.”

Instead of teaching OOP at the college level, at the expense of procedural, it would be better to wait for the student to run into the problem that OOP solves, and then present it to him once he has a full understanding of modular programming generally and code reuse specifically. These are the two main areas that OOP addresses, and the two main areas that today’s college student doesn’t get, probably can’t even understand, until he’s worked on a project large enough to need the solution.

As I’ve stated elsewhere on this blog, the programming discipline lends itself better to a mentoring style of education than it does to the typical college level education. This was less true when all we were teaching was procedural programming.

So, until something better comes along, I suggest leaving procedural programming to the colleges and universities and mentor programmers who have proven themselves into the OOP world.

• Back in college, when I was given assignments in my OOP class, I used to think OOP was unnecessary. My first language was C, so I would think in procedural fashion, have a single Java class, and write the entire logic into the main() method. Now that I look back, I realize that the fault was not entirely my own. We were given small, disconnected problems to solve, for which OOP would seem much too trivial to use. The teachers never gave us problems that would make us realize the true potential of OOP. Giving lectures on inheritance, polymorphism, and encapsulation is not enough; the coders must be given real-world problems for which OOP is the best approach.

• I think it is unrealistic to expect IT graduates to be object oriented experts. Students probably wouldn’t even spend half their time learning to write OO software. They also have to learn basic computer architecture, operating systems, databases, networking, project management, professional skills, and more.

Rather than looking for someone else to provide a silver bullet solution I’d say take Dave’s advice and concede that you must also play a role in the education of new programmers.

The only other viable option is to simply stop hiring junior programmers.

• I was in the same stage while doing my B.Sc. in CS. I was not very clear on the concepts of polymorphism and inheritance. But in my three-year professional career I’ve started working with OOP, and I like it much better than the procedural approach. Unless and until you’ve touched a live or professional project, it’s hard to learn much about OOP; but once you are comfortable with it, it’s surely a winning approach to programming.

• I like the analogy you make between Geometry proofs and programming. I’ve often viewed Geometry as having prepared me for programming more than any other academic discipline. There are two things to consider in your position.

1. The math curriculum in the high schools has changed dramatically since I attended and took geometry. It doesn’t have the intensive focus that it had when I attended, so maybe the students today aren’t as skilled in logical problem solving techniques.

2. I remember when I took geometry, the first couple of proofs were really tough until I got the knack. Maybe the students today haven’t wrestled with enough problems to get the knack for object oriented programming.

Finally, maybe with some feedback and experience, the new graduates will eventually get it. Don’t give up so quickly.

• [...] read a post stating that OOP is not working. I agree with the author that OOP is not working, but I do not [...]

• OOP can help achieve better encapsulation and reduced coupling (that is, less reliance on detailed and/or unnecessary information about the state of a system in order to use it). A programmer, after gaining experience, will begin to see the problems that stem from poor encapsulation and high coupling.

Learning the principles of OOP arms the programmer with techniques to address these concerns, and of course OOP languages have features to facilitate them.

Information hiding and abstraction are a means to build systems of yet higher complexity. OOP has some value in that regard.
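[Editor's note: a minimal Java sketch of the encapsulation point above; the `Counter` class is a made-up illustration, not from any particular codebase. Callers that go through a small public interface are insulated from the representation, which is exactly the coupling reduction the commenter describes.]

```java
// A tightly encapsulated counter: callers cannot depend on how the
// count is stored, only on the public operations.
class Counter {
    private int count = 0;          // internal state, hidden from callers

    public void increment() { count++; }
    public int value() { return count; }
}

public class EncapsulationDemo {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        // Callers see only the interface; the representation (an int,
        // a long, a database row) can change without breaking them.
        System.out.println(c.value());
    }
}
```

Because `count` is private, any later change to the representation touches only `Counter` itself, not every caller.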

When applied to distributed software systems, though, OOP and OOP language features break down. For instance, distributed objects over a network have proven to be an unmitigated disaster as a programming approach. A services approach with async messaging has won out as superior in that particular venue.

One thing about HTML/JavaScript/DOM oriented software development is that it has had its own notion of a unit of encapsulation, which has been the web page. Hence simple languages such as PHP have proven effective for dealing with this manner of modularity.

However, with the advent of the web RIA application, the web page model is itself obsolete and we move back to where a classic OOP approach can have some merit again (at least in the client-tier of the web RIA app).

• Melvin Ram:

RE: As we develop programming solutions in the future, we need to aim for solutions that most programmers understand. Solutions that people understand.

It’s called Ruby on Rails.

• Bob Dylan:

The OOP in C++ is crap, and again in Java. These languages do not teach OO principles correctly, if at all.

CLOS and Smalltalk enforce such principles. (Well, CLOS doesn’t enforce so much as allow, but once you’ve seen it, you’ll want to use it.)

• I remember taking an OOP course in college and struggling somewhat with the concepts until I read a book called “The Tao of Objects” by Gary Entsminger. It was just what I needed at the time because rather than focusing on the things I already understood from working with procedural languages, it concentrated on delivering a more intuitive understanding of objects than I’d received through the usual in-class examples of inheritance and so on. (“Bird is a kind of Animal, Chicken is a kind of Bird, …”) The marrying of state and behavior is still a pretty difficult concept to explain to people accustomed to viewing them as separate things, but I’d always hoped that changes in curriculum over time and earlier introduction of OO concepts would do something to improve the situation.

• Mike:

I must admit that I totally agree with what you’re saying.

Before I started university my programming experience was with C and other languages of the same syntax, so learning Java just felt like learning C with a twist added to it. I had also dabbled a bit with OOP concepts in PHP, so my first steps into the world of Object-Orientation were fairly uneven ones.

I then decided to read the book “Object-Oriented Systems Analysis and Design using UML” and it provided an insight that my previous programming experience couldn’t. Whilst a lot of CS students don’t take Systems Analysis, the general ideas it lays out make it a great book for anyone wanting to learn more about OOP. Throughout my time at university, my experience fell largely into what you’ve described, with OOP just becoming a ‘different way’ of writing procedural programs for most.

• John:

I don’t think OOD/OOP has failed us; there are far too many libraries and frameworks being built using object oriented principles to classify it as a failure. Perhaps it’s not being taught very thoroughly at college and university, but that’s a different matter. Once you go past writing anything trivial, I don’t think you would want to do it any other way.

I think once you “get” objects, and appreciate the power of encapsulation, polymorphism, and inheritance, you’ll never go back.

• Will:

I agree; our colleges have failed to teach students modern software development.

Wait, that wasn’t what you said. Or maybe it was?

I would love to take over the CIS department of a university. I’d fucking shake their asses up. We’d start with the GOF and go from there. We’d stick with one language from freshman year through senior; you’d pick from the C# track (Windows) or the C++ track (*nix). Less calculating 5NF using some weird grid logic shit that nobody uses IRL, and more time creating relational databases and CODING against them. There’d be a year’s worth of desktop application development, a year’s worth of web development, a year’s worth of data-driven development, and a mixed year full of optional studies such as graphics applications, server applications, networked apps, etc.

Not a single goddamn line of code would be accepted other than code checked into a professional source control system (either TFS or SVN). Your grade would be affected by your code coverage numbers. For most projects, you’d be handed a set of stories and a unit test project. If your code fails the unit tests, you fail the assignment.

If anybody mentions Lisp they will be stoned. All the theoretical and obscure language courses will be relegated to an “I’m never going to work outside of academia” track where all the future grad students will go. These classes will be available to students as fillers.

Please, somebody put me in charge. I’m all about kicking ass and chewing gum, and you can’t chew gum in class…

• Gooseman:

Fortran 77 FTW!

• @will — I love it. Let me know when you take over, and where. I will be your first student.

I took C++ and Java in school, but I went in to web development and started using PHP. My early code was all procedural, and it sucked. Then I started reading a book on design patterns in PHP, and the OOP stuff started to click. Now I code C# desktop apps and love to analyze problems with OOD in mind.

• Greg Herlein:

Interesting article, thank you. I would add another argument: even if your senior staff do the OO work, you may not get the code reuse you had hoped for. The single hardest part of OO is designing the object model. That model probably maps to your problem today, but probably not to the same problem in a year, nor to someone else’s problem. And changing the object model is real work, work only suitable for your senior staff, and it takes a long time. I concur that for most software today an ‘event driven’ model (my way of saying ‘services approach with async messages’) is more easily adapted to changes in needed functionality.

• The problem with procedural programming is that it pretty much ensures that you’ll have to reinvent the wheel each and every time. Giving up on OOP is dumbing down programming, and judging by the state of Web 2.0 apps, that’s happening on its own already. No need to help it along. I went from VB to Cold Fusion to Java. It’s an adjustment, no doubt about it. But unless programmers are Barbies saying “Math is HARD! OOP is HARD!” then all this says is that there are people who maybe should look to other jobs.

• Here’s an idea: let’s lower the bar ALL across the board for every field.

We have too few doctors… Let’s lower the bar and let subpar doctors into the field by making it easier to get by. Then I’ll send all the doctors to you; let me know how that works out for you.

You’ve got to be kidding me. YES, OOP is hard. YES, programming is not for everybody, and that’s the point. Do you want somebody who can’t grasp OOP handling the memory management on a complex application? Sorry, but I’ll let those guys go your way and I’ll take the rest.

• Dave:

I’ve let this ride for most of the day, but I’ll inject my two cents at this point, since many of you are now arguing that we ought to keep programming hard.

Just because something IS hard does not mean it HAS to be.

What was the point of OOP in the first place? To make programming easier! If we say, “Don’t give up on OOP, we don’t want to dumb things down,” or, “Let’s not lower the bar across all the fields,” why don’t we make that retroactive to, say, 30 years ago? Do you really want to program with the tools that were available then? Or would you rather “dumb things down” or “lower the bar” and have the tools we have today?

You are not fully thinking through the issue if you think making programming easier for the programmer is not a worthy goal simply because programming “ought to be hard.”

BTW, yes, I am monitoring this discussion, and I WILL delete anything that even borders on insulting. Sorry I have to even mention it, but I’m starting to note a trend in that direction.

• Legos and Object Oriented Programming…

What do Legos have in common with Object Oriented Programming?  According to an article on .Net Answers, those who are at home with Legos are most likely to fully embrace Object Oriented Programming principles.  There the author defines three categor…

• In response to dave:

Object oriented programming is a practice, whereas compilers for new languages, and the languages themselves, are tools. The use of new tools to make life easier is all well and fine, although cases could be made against those (though I’m not advocating that point of view). But when a coding practice comes in that allows for easier maintainability and is more sustainable, I don’t think we should jump ship to make it easier on other people. Also, what you’re talking about is a natural progression. Of course we don’t want to go back 30 years; but then again, do we want to go back 10-15 years, to when OO programming wasn’t around? What you’re missing is that it HAS made programming easier for thousands of people, myself included, and I don’t see the need to lower the bar and take a step back.

• Dave:

As a programmer, I completely understand, and even slightly agree with you. As a business owner, program manager, and teacher I maintain that OOP has made things harder, in a lot of cases, than it needs to be.

Once people truly understand OOP, yes, it does in fact make life easier as a programmer.

Interesting story for you from the “teaching” side of things.

A room full of what both of us would consider senior level programmers, working for some major companies in Hartford, who would all claim they understand OOP, were asked to explain polymorphism. What does it do? Why do we need it? What problem does it solve? As is typical when this question is asked of programmers, no one was able to explain it fully.

As a business owner, this is a problem because if you don’t fully understand polymorphism, you are just as likely to write duplicate code using OOP as you are with procedural methods.

What I’m asking for ultimately IS a progression. I really don’t think either OOP or procedural is THE answer. Maybe declarative? I don’t know. What I do know is that expecting people with little or no real world experience to walk into a job and start writing good, maintainable OO code, in any language, is naive, costly, and possibly dangerous to the survival of the code base.

To go back to your doctor analogy. There is quite a bit of training that goes into making a doctor. They don’t graduate with a degree, even a doctorate, and start practicing medicine on their own right away. They have to go through internships, and are gradually allowed to do their own work. At least that’s the way it works here. I don’t think it is unreasonable to apply some similar type of practice to our field.

• Hmmm, that is an interesting point of view. I think that perhaps your article has been taken out of context a bit and warped (possibly myself being a guilty party, hah), because I definitely understand what you’re saying in the teaching example. I have been a development manager, and own my own company now, and I have gone through interviews where 90% of the developers couldn’t answer questions like what polymorphism was. It in no way spoke of their development potential, though, because under it all they still understood the basic concept of what OO design was. At the same time, I feel it’s a potentially disastrous scenario where everything is done procedurally. I’ve walked into companies and seen what entirely procedural code can look like (though in that case they were just bad coders), and it was a disaster in terms of maintainability. So I think we are both on the same page: perhaps procedural is the right solution in some cases, but I in no way feel that people should be allowed to run willy-nilly, ignorant of OO as a good coding practice.

Perhaps you’re right, though, in that some sort of standardized approach to moving through procedural and OO design practice is necessary, especially to bring developers up to a new level.

• Warden:

I believe OOP is not failing us. Its purpose is not to make mediocre programmers into superior programmers; it’s a methodology FOR superior programmers, to let them take their work farther. Also, our educational institutions are largely way behind. Having interviewed many, many college graduates with CS degrees, I find myself very disappointed in the level of knowledge they come away with when they purchase a degree.

• I guess I was never clever enough to understand what is so damn difficult about OOP. Inheritance? Abstract classes? Virtual methods? Private vs. public? Beats me.

• Dave:

Regev,

I never understood what the problem was either until I started teaching it. Then it became painfully clear.

I think most people “get” private vs public, but have trouble understanding protected.

I think they basically understand Properties and Methods, but have trouble understanding why you’d use a Property instead of just a member function.

I think they have trouble understanding why and when you’d use a shared function or a shared variable (static in most OOP languages I’m familiar with).

But the number one most difficult concept to understand in OOP is polymorphism. This is unfortunate, because polymorphism is what really makes OOP worth learning at all.
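[Editor's note: a minimal Java sketch of the polymorphism point; the `Shape` hierarchy is a made-up illustration, not anyone's production design. Without polymorphism, every call site repeats the same type dispatch; with it, each class carries its own behavior and the dispatch happens once, in the language itself.]

```java
// Each subclass supplies its own area(); callers never need an
// if/else chain over concrete types.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}

public class PolymorphismDemo {
    // One loop handles every current and future Shape; adding a
    // Triangle class requires no change to this method.
    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) total += s.area();  // dynamic dispatch
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        System.out.println(totalArea(shapes));
    }
}
```

This is the duplicate-code connection Dave makes above: a programmer who doesn't see why `totalArea` needs no per-type branching will end up writing that branching by hand, in OOP syntax or procedural syntax alike.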

• [...] about this problem in the narrower field of computers and programming for a while now.  See “Object Oriented Programming Has Failed Us” and “Do Programmers Need a Degree” for some of my more recent rants on the [...]

• Joel:

Med school is intense, but it serves a purpose: it shows who can cut it and who can’t. You wouldn’t want a doctor performing your open heart surgery who just doesn’t quite grasp the basic principles of anatomy. Graduating from med school and earning your MD (or whatever other advanced degree) shows that you have the knowledge and ability to perform your job correctly. In a world that is now so dependent on computers, shouldn’t we have some way to identify the capable programmers before they get hired to work on business-critical software? So why are we talking about dumbing down the field of programming to accommodate the general public, when what we really need is some way to distinguish those who “get it” from those who don’t?

There’s a great article titled “Separating Programming Sheep from Non-Programming Goats” (http://www.codinghorror.com/blog/archives/000635.html) where a study was done on incoming freshmen in a CS program. It mentions further research (I don’t know the status of that), but the initial results seem to indicate that for some people thinking in programming structures comes naturally, and for other people it will just never make sense, regardless of how they are educated. It’s not necessarily an issue of education or the wrong technology or programming paradigm; some people’s brains are physiologically not a match for programming, no matter how much they want to be programmers (and I’m sure we’ve all worked with or known someone in that category).

And I’m definitely not trying to imply that good programmers are somehow superior and non-programmers are somehow lesser beings; it’s just part of what makes us all unique individuals. I love to play basketball, but I wasn’t born 6’10″ and I’m not staking my career on trying out for the NBA!

• Dave:

Joel,

You have a point. The only problem is, there aren’t enough programmers available for all the work we have.

I still maintain that some sort of internship is what we need today. This gives the programmers the extra training they need without significantly reducing the work force in the process.

There are no easy answers, obviously. I think we all agree there is a problem. What I fail to see consensus on so far is the solution. They all seem to have some shortcoming.

• Joel:

I agree, but how much of the workload right now is caused by having to clean up or maintain existing poorly-written software? As a programming manager myself, I can tell you I love the idea of mentorship (and especially the thought of having had mentoring when I was starting out), but I don’t really see how the capable programmers would find time for it. I think this may be the case more in the small-and-medium-business market than in the enterprise, where there’s enough of an infrastructure in place to ensure some level of quality. Poor hiring decisions can be a detrimental problem for smaller shops, though, and there are WAY more small businesses than Fortune 500 companies. Continuing to introduce more programmers who will never understand programming ultimately ends up creating more work for the competent programmers.

The way things stand right now, the only way to really tell good developers from bad ones is basically to see what kind of code they produce. Some quick web searches on tips for hiring programmers can show you what a problem it is to identify decent programmers, even for managers with programming experience themselves, much less when the hiring is being done by a business manager with little or no programming experience. Having a degree and being able to answer a few questions from memory, such as implementing a quicksort algorithm or quoting the definition of polymorphism, just isn’t a good indicator. I may not be able to quote the four advantages of polymorphism word for word, but that’s because I’m busy using it. I worked with someone who loved to quote theories, memorized things other people would say, and would talk all day about design principles. But when it came to implementing them, or choosing the right approach to a particular problem, he was one of the worst programmers I’ve ever worked with. He’s a really nice guy, and he’s not dumb by any means, but he’s just never going to make it as a system architect. All too often, though, someone like that gets moved up into a position where they’re responsible for making those design decisions, or hired into a position like that because they can “talk the talk”.

Computer programming seems to be one of the few skilled fields where a degree isn’t necessarily even a requirement, much less a related degree (I can name programmers whose degrees are in things ranging from communications to art to mechanical engineering), and you’re basically hired based on previous work experience. The education system seems to take the approach of, “Well, we can’t possibly teach everything, and most wouldn’t really get it anyway, so we’ll teach some theory, make the actual programming courses electives, and let them pick up the actual skills on the job once they’re hired.” And from everything I’ve seen, the only real value in the workplace for a graduate degree is an MBA, which isn’t going to help anyone become a better programmer.

Have we reached the point where schools and employers need to recognize that there needs to be more specialization, both in available graduate degree programs and in hiring practices? Going back to the example of doctors, your undergrad is most likely in biology, or something of that nature; then you go to med school to become a surgeon, or a vet, or a dentist. A 4-year program just can’t cover everything. Four years isn’t sufficient to cover the general education requirements and also go into detail about operating systems theory, and microprocessor architecture, and database theory, and system architecture, and software engineering methodologies; nor does anybody need to go into great detail in all those areas. You’re at best going to have a class in each, if you know enough at age 20 to take the right electives. It seems like there need to be more graduate programs specializing in those specific areas, instead of giving you a generic degree and expecting you to pick up what you need on your own in the workplace. This would allow employers to match candidates’ interests and education more closely with their needs. The end result should be higher-quality, more maintainable software across the industry with (hopefully!) fewer bugs, and less time spent maintaining inept code produced by poor design.

That doesn’t solve the immediate need for more programming resources, but long term, I think something along these lines fits best into our free-market economy. The best and the brightest generally are attracted to more prestigious, higher-paying jobs. Right now, you pretty much have to serve your time as an entry-level programmer so you can learn how to write code before you can be considered for any higher-level position. Having a greater degree of specialization across the industry may encourage more brilliant young minds to pursue careers in software development without the fear of losing jobs overseas or having to work at the same entry-level job for several years alongside programmers who just don’t get it. These more specialized positions would then need to pay accordingly, but it wouldn’t be as big of a business risk if you knew you were going to get good results. Under the current system, I think that is a real problem–in essence, the salaries of the best producers aren’t as high as they could or should be, because they end up subsidizing the risk of hiring those who are less competent, since it’s just too hard to tell the difference up front when the hiring decision is being made.

So, I’ve gone on probably way too long on this topic… but it’s a topic that’s been percolating in my head over the past several years as I’ve progressed in my career and tried to observe our industry. Maybe it’s time for me to get my own blog… I wonder if joelonsoftware.com is taken?

• Dave:

It’s been percolating in my head too. And by the number of comments on this post, I’d say a lot of other people are thinking about it too.

The schools are not set up to handle something that changes as rapidly as our field.

If you’ve been around as long as I have (and it sounds like you have), you’ll realize that the shift to OOP happened about 17 years ago. At that time, even if we had had the degree/internship program you suggest, about 80% of the programmers we had then would never have been able to get the degree we need now.

The problem with schools is that they can only teach things that are relatively static. If the medical profession changed as rapidly as the programming profession, they’d be in trouble too.

So, the question becomes not just how to deal with the new kids, but how to keep them up to date and able to deal with the new problems, new tools, new … well, you get the idea.

• I’m probably going to sound like a real radical here.
I’ve been in programming at all levels for over 20 years, and still at it. In addition, I do corporate software training.
I actually love OOP as a programmer. As a trainer, OOP is an obstacle. What I find is there is a place for both procedural and OOP concepts.

When analyzing a business case, you are going to be using the tools of procedural programming: flowcharts and subroutines.
These concepts are best encapsulated in the use of desktop software.

How does the User accomplish their task now? — Most commonly, the tasks are being accomplished in desktop applications.

What steps are taken? Does the User move data from one application to another? ..or does the User return to the database for ‘fresh’ data?

And what is the best way to describe those steps?
This is a tricky one. Programmers and software designers tend to use high-level software terms for these answers when the best answers are really to talk about the menu by menu steps in desktop software.

The abstraction, and growing confusion IMO, is that desktop software uses OOP concepts to accomplish procedural goals – and that procedural framing is the best way to describe any business case.
If, in the course of a procedure, the data (read: object) being manipulated would be more clearly modeled as an extended object in programming, then it is time for OOP. If the data is understood as it is, then maybe there is no reason for applying OOP concepts.
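That rule of thumb can be sketched in a few lines of Python. The invoice example here is mine, not the commenter’s, and is only a minimal illustration of the dividing line described above:

```python
# Hypothetical invoice example (not from the original comment), sketching
# the rule of thumb above.

# If the data is understood as it is, plain procedural code is enough:
def total_due(line_items):
    """line_items is a list of (quantity, unit_price) tuples."""
    return sum(qty * price for qty, price in line_items)

# Once the data accumulates rules and behavior of its own -- i.e. it would
# be more clearly modeled as an object -- it is time for OOP:
class Invoice:
    def __init__(self, line_items, discount=0.0):
        self.line_items = line_items   # list of (quantity, unit_price)
        self.discount = discount       # fraction, e.g. 0.1 for 10% off

    def total_due(self):
        subtotal = sum(qty * price for qty, price in self.line_items)
        return subtotal * (1 - self.discount)

items = [(2, 9.99), (1, 5.00)]
print(total_due(items))                           # plain procedure
print(Invoice(items, discount=0.1).total_due())   # object with behavior
```

Neither version is wrong; the point is that the class only earns its keep once the discount rule (and whatever comes next) belongs with the data.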

Because of the prevalence of OOP concepts, desktop applications are becoming more object-oriented. This includes browser-based apps, too.
For tinker-toy like minds like myself, that’s great.

For Users, it’s resulted in a level of complexity that makes most applications – custom or out of the box – too difficult to work with.
Users think in step by step processes.

When I train, I often find myself following a step by step process from menu to menu to illustrate some feature. To the people I’m training, often this is annoying busy work. They’ll never use most of these features.

What I end up doing is trying to tie the features to something my trainees will actually be doing; and in the process trying to bridge the gap between the OOP orientation of the software and the perspectives of the participants.

I don’t think we need new tools. I think there are ample tools right there on the desktop. What’s needed is standards that restrict the OOP concepts to the conceptualizations of the Users.

IMHO,

PD

• Dave:

Paul,

That’s kind of what Agile programming is about. See my recent post on UML.

• koryuko:

Interesting article. I agree with you on some points, but I apologize, I have to disagree on others. I think it’s not OOP that failed us; it’s us who failed at OOP. I don’t think the teachers who mentored us way back needed to teach us everything. If programming were all about the syntax and everything that goes with the code, then I shouldn’t be a programmer. I remember attending my first day of class in C#. It was weird. I had never heard of it as a language before.
I had only learned C and Java, but I think I mastered the concepts of object oriented programming well, which is why, when I went to that class and my professor gave us a simple exercise to be solved in a language I didn’t even know existed, I was able to translate my solution into C# code.
This urged me to pursue my career as a programmer, once I learned that it isn’t about how we write the syntax of a specific programming language, but how well we understand the concepts we are learning. I strongly believe OOP didn’t fail us, and neither did the people who taught us; it’s us who fail sometimes, and we fail when we stop allowing ourselves to learn.

• Dave:

It is interesting that we have two basic responses here.

Those of you who are programmers are saying, “Real programmers don’t have any trouble.”

Those of you responsible for finding “Real Programmers” realize there is a problem.

The post above illustrates the first.

And if there were enough “Real Programmers” to go around, I would actually agree with that line of thought. C++, Java, C#, etc. are not problems for me personally… until I have to try to replicate myself by hiring more programmers.

So, for those of you who are of the opinion that the problem is the programmer, let me illustrate with something I think most of us will understand.

Let’s just say that 80% of our roads and highways were falling apart. Potholes everywhere. Bridges unsafe. Basically unusable.

You’d all be screaming at the government, “Fix the bridges dagnabit!”

And the people in charge would say, “We’d love to, but we can’t find enough qualified people. There are a lot of people who would love the work, but they don’t understand how to operate the equipment needed and even when we try to teach them how, they don’t seem to be able to get it.”

Wouldn’t you start thinking that maybe the problem was the equipment?

Maybe the illustration seems far fetched. But I would argue that the only reason it seems far fetched is because there ARE enough qualified people to keep the roads in good condition.

• robert:

I Googled for +python +”data driven development”, and this thread came up on the list. I’ve found that OO has failed miserably, so I read through (well, your piece; I scanned the replies). I’ve reached the failure conclusion, but, I think, from a different angle.

OO fails because even avowed OO-ers still write procedural code. In Java, anyway. Even if you read Meyer’s book (quite good, on the whole), he falls into the Method object / Data object trap. The only vocal public speakers who get it (in my reading) are Allen Holub (read his “Bank of Allen” series) and Dave Thomas (Tell, Don’t Ask). Doing OO to leverage what it can provide is *HARD*. Which is why the Java world, at least, has fallen into the pseudo-OO trap. It is not helped by the frameworks, which do this to the hilt.

Anyway, being a Relationalist, I was looking to see whether Habanero style development was making its way to Python. I guess I’ll need to keep looking.

For transactional systems, let the database do the work, and leave the code to manage the I/O. That was Dr. Codd’s point. Young-uns don’t even know his name, or that xml is IMS warmed over.

• robert:

Came back to visit and just noticed your #26, which sparked further musings.

I earlier said OO was *HARD*, by which I meant that: 1) most large “shops” have decades old codebases (largely COBOL and C and VB) which are file based and procedural where most coding is done to maintain these very old sows and 2) those who have control over how to deal with these codebases are their creators (or next generation parents). IOW, implementing OO in such an environment leads to major conflict.

In the java world that I know, the code gets broken into pieces, but those pieces are just procedural code in OO clothing. Such code is harder to parse than either real OO or real procedural. Again, read Holub and Thomas for more detail. In order to do OO as envisioned, one needs to think truly in OO, which is to say (using Holub’s word) capabilities, not data and not methods, per se. This is why it’s hard. And that’s why most OO is made of Method/Action objects and Data objects; it keeps the procedural bits in one place and the data in another, just like COBOL, FORTRAN, C and VB. Everybody is kind of comfortable, and clueless about how OO really works.
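The Method-object/Data-object trap versus Tell, Don’t Ask can be made concrete with a small sketch. The bank-account example below is mine (not Holub’s or Thomas’s), rendered in Python for brevity rather than the Java the comment describes:

```python
# Hypothetical bank-account example (mine, not from Holub or Thomas),
# contrasting pseudo-OO with Tell, Don't Ask.

# Pseudo-OO: a Data object with bare fields, plus a separate Method
# object that reaches in and manipulates it. This is procedural code in
# OO clothing -- a COBOL record plus the paragraph that updates it.
class AccountData:
    def __init__(self, balance):
        self.balance = balance

class AccountService:
    def withdraw(self, account, amount):
        # Asks the object for its state, then decides on its behalf.
        if account.balance >= amount:
            account.balance -= amount
            return True
        return False

# Tell, Don't Ask: the object owns both its state and its rules.
# Callers tell it what to do; they never inspect its internals.
class Account:
    def __init__(self, balance):
        self._balance = balance

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        return self._balance
```

Both versions move money, but only the second keeps the overdraft rule in one place; in the first, every Method object that touches `AccountData` must re-implement (or forget) it.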

OO is about semantics, not syntax.

As to how to use databases, which can fit very nicely with OO, but not so nicely with Copybooks, that’s another story, and largely the same: folks want to say they do databases but actually build them to look like the files they’re used to. The emergence of xml as datastore is another example of folks regressing: in this case to IMS, that hierarchical “database” of yore. The notion that data is “naturally” hierarchical is a myth, but fits nicely with folks who never grokked the relational model.

I’m just out of (thankfully) one of those Hartford companies (may be the same one?). If ever there were a location on the planet where neither RDBMS (#17 @Will) nor OO is understood, that would be it. No matter the syntax, those folks think in COBOL/VSAM; and will write that way no matter what. And the money that gets wasted on DB2 and Oracle and Web Servers… Coming from a Big City (where folks at least try to understand the progression and history of the profession), it was shocking. It was the CIO of one of Hartford’s insurance companies who got COBOL ’85 changes blocked because they would make ’85 not backward compatible enough.

I read, I don’t remember where, that Java has been embraced by the Fortune X00 companies because (nearly correct quote) “it allows the continual accretion of code by middling programmers, just like COBOL.” That is certainly how I’ve experienced it.

All of which raises the other question: if Real World experience is supposed to help the CS graduate know how to do things, what can we expect from places like Hartford? It is not alone, since the problem is really with the financial services industry’s view of computing, and that industry tends to exist in One Industry Towns. Do we need to fire all those CIOs out there who came up through the COBOL/C/VB ranks and are easily flummoxed by poseurs? And who don’t really want any radical change in their chain of command? In other words, isn’t it really a management problem?

Bear