Objects from the magic box

Objects are data, functions, behaviors, contracts, everything. If you came from the plain-old-C age, you would be familiar with a much simpler way of structuring your code: structures as records of data fields, and functions as collections of transformation steps that affect these data structures.
The procedural approach to programming is more strictly structured than OOP.
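As a minimal sketch of that procedural style (written in the plain-old-C subset of C++; the Point record and the function names are purely illustrative, not taken from any real codebase), data and behavior live apart:

    #include <stdio.h>

    /* A structure is just a record of data fields. */
    struct Point {
        double x;
        double y;
    };

    /* A function is a transformation step applied to the data. */
    void point_translate(struct Point *p, double dx, double dy) {
        p->x += dx;
        p->y += dy;
    }

    int main(void) {
        struct Point p = {1.0, 2.0};
        point_translate(&p, 3.0, 4.0);
        printf("(%.1f, %.1f)\n", p.x, p.y); /* prints (4.0, 6.0) */
        return 0;
    }

Nothing mixes: the struct cannot defend its invariants, and the functions own no state.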

OOP was born out of procedural programming, as an extension. That extension, called classes, did not narrow down the possibilities or put additional constraints in place. It opened up a rather complex world of possibilities by allowing a free, unrestricted mix of data and functions called an “object”. One common rant against OOP is that “OOP forces everything to be an object”. Joe Armstrong, designer of the Erlang functional language, expressed this rant very strongly. I think the truth is quite the opposite: it’s not that OOP forces everything into an object, it’s that an object can be everything in OOP, and as such it’s hard to say what the object construct is meant for. I would rather second objections along the lines of Jeff Atwood’s entry, in that OOP is not a free ride.
A class can be a data structure, and in that case the encapsulation traits are probably not extremely interesting and the class itself could well be a plain structure. A class can be a collection of methods only, without shared state; in that case it’s like a C module. A class can be a contract, when it contains virtual functions. A class can be the implementation of a contract, with hidden and encapsulated state. A class can be many more things.
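To make these faces concrete, here is a hypothetical C++ sketch (all the names are invented for illustration) showing four very different design intents, every one of them spelled with the same class construct:

    #include <vector>

    // A class as a plain data structure: no encapsulation to speak of.
    struct Color {
        unsigned char r, g, b;
    };

    // A class as a C-style module: methods only, no shared state.
    class MathUtils {
    public:
        static double clamp(double v, double lo, double hi) {
            return v < lo ? lo : (v > hi ? hi : v);
        }
    };

    // A class as a contract: virtual functions and nothing else.
    class Renderer {
    public:
        virtual ~Renderer() = default;
        virtual void draw(const Color &c) = 0;
    };

    // A class as the implementation of a contract, with hidden state.
    class BufferedRenderer : public Renderer {
    public:
        void draw(const Color &c) override { pending_.push_back(c); }
    private:
        std::vector<Color> pending_; // encapsulated, invisible to callers
    };

Nothing in the syntax tells the reader which role each class plays; only the comments do.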
I think that one of the productivity issues with OOP, at least in the C++ way (and all its derivatives), is that all these different use cases are represented syntactically in the same way: as a class. The class construct is devoid of any specialization, and as such it’s both extremely powerful and hard to handle. The software architect needs to specialize the class into a meaningful tool for the problem at hand. OOP in this sense is a meta-programming paradigm, which requires a thoughtful selection of language features and of how these should be bent to the goals of product creation. This becomes even more evident if you look at all the “companion” language features of C++, like templates, multiple inheritance, or friend classes. If you choose OOP, you have to define rules for how to use the language, much more so than in the procedural case.

Java and C# made some moderate attempts at specializing the class construct by adding the interface keyword. It might be interesting to see what an even more constrained OOP language could look like: a language with special syntax for data classes, behavior classes, user interface classes, and so on; a language that naturally leads the developer to choose a nearly optimal tool for the job. Any language designer out there? For the time being, architects are called to take this step in frameworks instead of languages.

So, if OOP requires so much planning and tool selection, why has it become so popular? In my mind, for two reasons. The first is that flexible structuring allows software designers to create libraries and frameworks with exactly the reuse patterns they have in mind and need. As Spiderman’s uncle said, with great power comes great responsibility, and that’s what OOP gives and demands.

The second, and maybe the most important, reason is that the object way of decomposing a problem is one of the most natural ways of handling complexity. When you plan your daily work activities, are you concerned about the innards of the car you are driving to reach the office? Do you need to know how combustion in the engine works? Do you need to check the little transistors in your CPU to see that they are all switching correctly? Not me. I rely on those things working as expected. I don’t need to know the details of their internal state. I appreciate that somebody encapsulated and hid their variables and workings in convenient packages for me to consume. It’s like this with objects, and it’s like this with human organizations. We all regularly delegate important work to others and trust them, maybe after signing some contract, to provide us with the results we need. Delegation and work-by-contract are what define human structures as well as OOP, which is why OOP is popular for large software architectures.
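In code, that delegation-by-contract might look like the following hypothetical C++ sketch (the advisor metaphor and all the names are mine): the client signs up to a contract and never looks behind it.

    // The contract we "sign" with the delegate.
    class TaxAdvisor {
    public:
        virtual ~TaxAdvisor() = default;
        virtual double computeTax(double income) const = 0;
    };

    // One delegate; how it works internally is its own business.
    class FlatRateAdvisor : public TaxAdvisor {
    public:
        explicit FlatRateAdvisor(double rate) : rate_(rate) {}
        double computeTax(double income) const override {
            return income * rate_;
        }
    private:
        double rate_; // hidden state the client never inspects
    };

    // The client delegates and trusts the result, exactly as we do
    // with the car, the engine, or the CPU.
    double netIncome(const TaxAdvisor &advisor, double income) {
        return income - advisor.computeTax(income);
    }

The caller of netIncome depends only on the contract; any advisor that honors it can step in.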

There’s maybe one last perspective. Object orientation might favour static structures over processes made of steps, or over state machines where state keeps changing. By hiding the changing state, OOP can give the impression of a perfect world of static relationships. The word “perfect” in fact comes from the Latin per-factum, that is: complete, finished, done. If it’s done, it does not change anymore, and is thus static. Clearly a static structure is easier to observe than something that keeps changing, so the perfection of static structures is partly in the eye of the beholder, who can then appreciate all the details. Science, for instance, is about capturing what changes in formulas that do not change and can thus be used for predictions. It’s not just an observer’s perspective, either: static, long-lasting structures are more worthy of investigation than brief, temporary situations.
To sum it up, the bias of OOP towards static structures is natural and useful in describing large architectures.

 


It all started with an Apple

Some of the biggest revolutions started with an apple. While men and women were dwelling peacefully, needing little apart from each other, it took an apple to shake up the balance and start the evolution of mankind. Much later, it was again thanks to an apple that Newton concluded that stars and planets follow the same universal rules we obey on earth, thus opening up the possibility of understanding and exploring the universe.

A few more years on, it took yet another apple to rewrite the evolution of mankind once more. We were all living somewhat peacefully in the land of Java, if you were on the server islands, or of .Net/Windows, if you were a desktop application developer. We were all amazed that we could run the same applications on desktops as on laptops, and we even had friends who would read emails on their cathode-ray tube TVs connected to something they called a media center. From a developer’s perspective, there were a few uber-frameworks to deal with: Java on servers, .Net on clients, HTML with SQL backends on the web. Living in a world of few languages had its benefits. Microsoft was pouring a good deal of its resources into the .Net platform, which kept improving and extending its reach. Java was also maturing, and so were its development environments.
This was quite convenient for software designers, as the technology mattered a lot less than the concepts and the implementation. If you consider that C#/.Net was initially basically a clone of Java/JVM, you will concur that the picture looked so flat it was almost boring for developers.

All of a sudden, in 2007, the apple changed it all for good (in many ways), again. The iPhone showed that there were other options for running applications apart from desktop PCs or laptops. It showed that a PC can be thin, light, pocketable, good-looking, and work as a phone as well as a PC. That was the revolution. In a matter of a few months, people got acquainted with the idea of using applications on tiny touch screens. Then the iPad came, and the poor laptop went to join the fate of the desktop as a tool for nerds. People who use PCs (I don’t mean software engineers) stopped caring about what operating system the device was running. They stopped caring whether the word-processing app could create multi-column layouts, because all they started writing were emails. And who takes care of formatting emails with pretty fonts?

The idea of a general-purpose PC, with very generic applications like spreadsheets and feature-loaded word processors, was being questioned very heavily. The phone and the tablet became the preferred content-consumption devices. We started using a myriad of very small, very focused applications in place of the big office-suite guns. The “write once, run anywhere” line, which was already rusting to be honest, was forgotten and erased from all books. Now we write iOS applications with iOS tools, Android applications with a mixture of Eclipse and other tools, WinPhone applications with, uh, well, a myriad of tools, and Windows desktop applications with yet another set of tools. If you’re in the web space, then you cannot even count the languages, scripts, and specific technologies for specific needs.

The balance has somewhat shifted from architecture and uber-frameworks to technology and small pragmatic solutions. If you like making software, it’s fun to choose the right tool for the job these days; there are quite a few options. To be honest, this mindset change also has to do with the economic crisis, which imposed faster turnaround times for projects. It’s often less structure, less organization, more action. Time for hacking!
So what happened to the uber-frameworks? There’s surely less and less investment in those. Microsoft was supporting the largest ecosystem of languages, frameworks, and tools with .Net. The efforts of the company are now clearly directed towards the mobile space, where it’s playing the catch-up game. .Net has not evolved much; the loose ends are now mostly handled by the community. Bits and pieces of the framework (XAML, some libraries) were carried over to native (or almost native) C++, which could be called a “lower-level language” compared to .Net/Java. The new DirectX 12 seems to go for a lower-level approach as well. In general, lower level is more trendy nowadays than it was just a few years ago. Is this turning us into coding machines without architectures? I don’t think so; possibly it’s the opposite. It’s the mini frameworks, home-brewed, that can save the day by pulling together all the tools and giving structure to all those differently-minded pieces of software.

The mini-framework topic is basically the focus of most posts here, when I am not digressing about software philosophy or history, as I did in this case.