Are soft skills more important than deep technical expertise?

I often hear the idea that technical skills are somewhat less important than, for example, communication and other soft skills. Here are some quotes to illustrate what people think:

Dustin Ewers, DeveloperOnFire:

The developer that’s a ninja at whatever the latest JavaScript framework is, is great, but a developer who is OK at that and also a really good communicator is gonna win every day of the week in terms of actually delivering value to people.

John Sonmez, SoftSkills book:

I’d rather hire a developer who knows a little less but knows how to figure out what needs to be done and how to do it, than someone highly skilled who requires constant hand-holding to be productive.

I have seen the most technically competent yet arrogant and unfriendly people lose out on a job to a much less skilled but likable person.

TJ VanToll, DeveloperOnFire:

I don’t consider myself a very good software developer, necessarily. I think I succeeded more in that… like, learning how to write a good email, learning how to write an opinionated article, can actually take you a lot farther in many cases than knowing how to write good code, for instance.

Now let’s look at the Pareto principle: roughly 80% of our problems can be solved with 20% of the features of the tool we use (a programming language or framework, say).

Multiply that by the law of diminishing returns, which states that the payoff from our efforts keeps shrinking. Learning the 20% most-used features of a programming language will therefore provide much more value than learning the next 20% of features. Yet that next 20% will require more effort, because those features are used more rarely and are consequently harder to remember.

Add the fact that:

Half of what a programmer knows will be useless in 10 years.

And I would emphasize here that the more detailed and specific a piece of knowledge is, the faster it becomes useless. This tends to demotivate people from learning new things, especially things they are not going to use immediately.

Finally, add to the equation StackOverflow, YouTube, the blogosphere, and all the other freely and immediately available resources that are capable of solving most of your programming problems in seconds.

What do we get? Well, for most jobs, very deep and detailed technical knowledge is not as valuable as soft skills: the ability to communicate with people in natural as well as programming languages, and the ability to solve problems and organize (architect) systems.

On my current project, we hired a guy just because I knew him: I knew he was a good communicator, a nice person, and that he loved his craft; that’s it. We did not ask him questions like what polymorphism is, or in what order constructors are called in an inheritance hierarchy. And it worked out great. Would I instead want someone who knows 80% of .NET by heart but has a hypertrophied ego? No way…

Of course, I am just sharing my own thoughts, and the fact that other people have the same thoughts. Interviewers, however, want you to know the nitty-gritty details, as if knowing them proved you would be able to deliver, would keep working on a long, demanding project, or would do well with the team. And many people would disagree with me and insist that deep technical expertise is the key.

What do you guys think? What is your experience?


We would fail to write bubble sort on a whiteboard… Should we be proud?

Recently there was a very interesting thread on Twitter in which famous programmers admitted that they are very bad at Computer Science (CS) and other fundamental concepts, yet are still doing very well.

Now, this is very interesting. To me, it is just another sign that knowledge by itself has little value today. Everything can easily be Googled in a very short time. Yes, the search takes some time, but it is much more efficient than trying to learn and remember all the details of CS and technology concepts. And I admit it too: I would fail to write bubble sort on a whiteboard or to estimate my algorithms’ complexity, despite having been a successful developer for more than a decade now.

But one may come to the (I believe wrong) conclusion that it is OK to be unable to estimate an algorithm’s complexity and not to understand the essence of fundamental algorithms. Algorithms are methods for solving problems of a procedural nature, while design patterns are methods for solving structural problems in software, and I would argue both are the very basics of software engineering. If you do not know these methods, then when you tackle a tough problem you basically don’t know what you don’t know: you are unable to discover that your problem reduces to one of those fundamental problems that already has an elegant solution. I definitely see my ignorance of CS as a problem, and I am currently deliberately studying algorithms :-). And also clean code, which I always enjoy learning and practicing.
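For the record, the whiteboard classic itself is only a few lines. Here is a minimal Python sketch, with its complexity noted in the comments:

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent
    out-of-order elements. Worst case: O(n^2) comparisons."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest i+1 items are already at the end.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # no swaps means the list is sorted: best case O(n)
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

Knowing why the nested loops make it quadratic, and when the early exit makes it linear, is exactly the kind of reasoning the fundamentals give you.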

Here is what John Sonmez’s experience was after he had learned algorithms:

All of a sudden it was like I put on special glasses that let me see the world in a different light and all these problems, all these places where I was like there’s nowhere in the real world where I’m going to use algorithms in my code, bam, it was popping up everywhere. It’s like, “Whoa! Look! Oh, I recognize this. This is like a min-max problem.” Bam! All of these places I started writing really efficient, really good code because I could see the problems.

I can also see the pain when job interviewers start asking puzzles and algorithm questions. I hate it. If you want the job you are stressed already, and now you have to focus, concentrate, and tackle a tough problem at the whiteboard as if you were doing it in your real work! Problem solving is a very creative activity, and it cannot be done effectively under stress, because the brain focuses on security first and foremost. I don’t know what should replace this practice, but to me it is pure evil. Interviewers, please don’t do it to us developers unless the job requires this type of skill and knowledge. There is a pretty interesting article on a related topic by Yegor Bugayenko. But since interviewers keep asking, perhaps realizing the importance of fundamentals, we should be prepared!

Hence, if you have some spare time and a choice between learning some cool new JavaScript framework, which will be forgotten in a few years, and learning algorithms or design patterns, which will serve you throughout your whole career, I believe it is better to pick the latter.


Getting acquainted with the APL programming language

I am happy to announce that, at the invitation of SimCorp Ukraine, I will again be teaching a course on the basics of APL. This time I decided to record an introductory video so that a prospective student can get acquainted with APL and with my teaching style.


A few guidelines on unit testing derived from my experience

When I first started using unit testing in my practice, I had no idea what I was doing. I was also frustrated: it made little sense to me to unit test everything, because writing tests and preparing test data took too much time.

Ever felt the same way? Don’t understand where and when to write tests and where you should leave the code alone? Don’t worry, this is normal. It happens to everyone who tries to improve the way they build software. I can’t give you a concrete prescription, no one can, but in this post I will share what has worked for me in unit/automated testing. It will probably work for you too.

Don’t write unit tests for simple cases. One objective way to measure the return on investment (ROI) of unit tests is to measure how much time they save the development team by catching regressions. In simple cases, when the code is not going to change or is pretty straightforward, you are unlikely to get regressions, so you will likely see no ROI at all from your unit tests, and you will still need to maintain them. The law of diminishing returns applies here: you can get 80% of the benefit by covering only 20% of your code with tests. That 20% of the code contains your core business logic, which delivers the most value to your customers. Everything else is glue code, configuration, mappings, framework and library interoperation, and so on. The more effort you put into covering that code, the less ROI you will get.

Use large acceptance tests for refactoring. If you plan a large refactoring or restructuring, classical unit tests will not help; in fact, they will get in your way. Classical unit tests are small and test small parts of the system, so when you start changing things, they start glowing red and you have to delete them. A large acceptance test, by contrast, captures a whole business case of interaction between your system and the user. Such a business case is something that brings real value to the business and should not change during refactoring, so relying on acceptance tests increases your chances of refactoring without damage to the business. On the Developer On Fire podcast (episode 149), Nick Gauthier reported that his biggest career success was moving the web application he worked on from a classical client-server architecture, where HTML was rendered on the server, to a single-page application (SPA); acceptance tests made the transition really smooth for his team. My refactoring team at SimCorp also succeeded in not jeopardizing our product’s quality during a major refactoring that touched almost every user screen in the system. My team lead insisted on having a large acceptance test suite, which eventually ensured our success.

Unit test complex algorithmic code with classical unit tests. As you probably know, there are the classical and London schools of TDD. According to the classical school, a unit test applies input data to the system under test, harvests the results, and compares them to the expected results. According to the London school, a unit test invokes the system under test and then checks whether it behaves as expected, i.e. whether it calls its collaborators correctly and in the correct order. While unit testing simple cases frustrates me, I get a lot of value from classical TDD when developing complex algorithms. The value here comes from the regressions that happen during initial development, because when you develop a complex piece of software you can spend days inside one unit trying to put things together. I vividly remember one programming exercise from my SimCorp days. I had to develop a program that would take an APL data structure of any complexity (for instance, a matrix of matrices) and generate APL code that would recreate that particular data structure. My first attempt failed: after 4 hours of work I was far from done, and most of that time was spent retesting the program with different kinds of inputs after every change to the algorithm. The next day I tried classical TDD, and the process was not only fun but also fruitful: in about 4 hours I was done, with approximately 30 tests. My impression was that without unit tests I would not have finished such a program in that amount of time and with that confidence.
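A minimal sketch of the classical style in Python; the `flatten` function here is a hypothetical stand-in for the APL exercise, not the actual program:

```python
# Hypothetical system under test: a pure function that flattens
# arbitrarily nested lists into one flat list.
def flatten(value):
    """Recursively flatten nested lists; non-lists become one-element lists."""
    if not isinstance(value, list):
        return [value]
    result = []
    for item in value:
        result.extend(flatten(item))
    return result

# Classical-school tests: apply inputs, harvest results, compare to
# expected values. No mocks, no interaction checks.
assert flatten(7) == [7]
assert flatten([]) == []
assert flatten([1, [2, [3, 4]], 5]) == [1, 2, 3, 4, 5]
```

Each assertion locks in one behavior, so after every change to the algorithm the whole suite retests all input shapes in milliseconds.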

Apply London-school TDD to integration code. What if your core business logic is not algorithmic? What if you bring value to the table by integrating different systems together? For a very long time this was an open question for me. In such cases I still want to be sure my core code is well tested, but classical tests are awkward, because integrators often don’t even have inputs or outputs. I believe London-school tests are perfect for them. Once, at StrategicVision, I had to develop a system that would download videos from video hosting services, extract audio from those videos, transcribe the audio, and finally save the transcriptions and video links to a database. No business logic in the classical sense, right? My code just invoked the video hosting web service, then the downloader, then the transcription web service, and finally the repository to store the results. I wrote a bunch of tests that verified facts such as: if the system under test invoked the downloader for a particular video, it should later invoke cleanup for that video; if it invoked the database repository to store results, it should have invoked the transcription web service before that.
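A sketch of what such a London-school test can look like with Python’s `unittest.mock`; the pipeline and all collaborator names are invented for illustration, not taken from the real project:

```python
from unittest.mock import Mock

# Hypothetical integration pipeline: no inputs or outputs of its own,
# it only orchestrates its collaborators.
def process_video(video_id, hosting, downloader, transcriber, repository):
    """Fetch a video, transcribe it, store the result, then clean up."""
    url = hosting.get_url(video_id)
    path = downloader.download(url)
    text = transcriber.transcribe(path)
    repository.save(video_id, text)
    downloader.cleanup(video_id)

# London school: stub the collaborators and assert on the interactions.
hosting, downloader, transcriber, repository = Mock(), Mock(), Mock(), Mock()
hosting.get_url.return_value = "https://example.com/v42"
downloader.download.return_value = "/tmp/v42.mp4"
transcriber.transcribe.return_value = "hello world"

process_video("v42", hosting, downloader, transcriber, repository)

# The facts described in the text: results are saved only after
# transcription, and a downloaded video is always cleaned up.
repository.save.assert_called_once_with("v42", "hello world")
downloader.cleanup.assert_called_once_with("v42")
```

Note that nothing here checks a computed value; the test pins down who was called, with what, and that the cleanup obligation was honored.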

These guidelines are highly subjective, of course, but they work for me, at least at this point in my career. Hopefully you will find them helpful too.


Remote software development (webinar)

Yesterday I continued my webinar journey. This time I decided to experiment: to run a webinar on a non-technical topic and with a co-author, Anya Zubenko, in an interview format. It turned out great; somewhat drawn out, but then it was an interactive interview, and we tried to casually discuss many topics and answer most of the questions the viewers asked. In short, a lot of fun, and judging by the comments in the chat during the webinar, the audience got a positive charge out of it too! Most of the information there is audio, so you can easily extract it and listen on headphones on your way somewhere. Enjoy!


My 7 favorite podcasts

I am a big fan of podcasts. I listen to them all the time while waiting for something or commuting. They help me immerse myself in the Western technology world, from which the Ukrainian tech community is quite disconnected: it is extremely unusual for me to meet a world-recognized guru or thought leader at a conference. Podcasts cannot fully compensate for that, because I cannot ask questions while listening (and sometimes I desperately want to), but I can still hear others ask questions, and those questions are sometimes even better than the ones I could have asked. Podcasts also help me, as a non-native English speaker, improve and maintain my English communication skills. It is sometimes very difficult for a non-native speaker to understand native speakers and the accents of other non-native speakers; podcasts provide great training in listening to and understanding all these different accents, because podcast guests come from all over the globe. It is hard for me to explain why, but podcasts also help me write and talk about technical topics in English. Probably this is quantity transforming into quality: I listen a lot, and spoken patterns are carved into my mind so that I can use them later in my own speech. Many people are surprised to learn that it is much easier for me to explain something software-related in English than in Ukrainian. So, here is my list:

.NET Rocks. I revolve mostly in the Microsoft space, and there .NET Rocks is the number one podcast. It is not only about .NET; its episodes cover many different topics related to software development in the Microsoft universe. Most episodes are about .NET, of course, but you will often hear about a broad range of subjects, from machine learning to front-end work. Not only does .NET Rocks keep you up to date with the latest advances in .NET, it also entertains you; I often find myself laughing or smiling while listening.

JavaScript Jabber & Adventures in Angular. These two podcasts cover my front-end needs. I mention them together because they are made by the same person, although the panelist lineups differ. You will learn about advances in the JavaScript and Angular worlds, new libraries, front-end development problems, and possible solutions. In every episode they discuss so-called picks: different things that the hosts and guests are currently excited about. These picks will really teach you a lot.

SE-radio. This one was my first podcast. It mostly covers fundamental aspects of software development such as refactoring, architecture, programming language design, requirements engineering, software modeling, and distributed and real-time systems. Back in the previous decade it was created and led by Markus Voelter, who is my favorite podcaster. First, he is German, yet his English communication skills are extraordinary: because he is not a native speaker, his speech is simple and clear, and the way he asks questions and digs into technical topics can serve as a benchmark. It is also clear that the guy is passionate about podcasting and technology, and that is a big deal for me, I love passionate people. These days SE-radio is produced by IEEE Software magazine. Markus no longer participates in its production, but it is still pretty interesting to listen to world-class experts talking about the fundamentals of software.

Omegataupodcast. I have already mentioned Markus Voelter. After he finished with SE-radio, he started Omegataupodcast, a podcast about science and engineering. Although there are episodes on biology and social science, most are about space engineering and science, aviation, physics, and computing. I have a background in radio engineering and aviation, so Omegataupodcast matches my interest in these topics. It combines Markus’s brilliance at dissecting complex technical subjects with great science content that can literally go on for hours (episodes are pretty long).

Hanselminutes. It is hard to tell why I listen to this podcast regularly. It is short, it has no specialization, and it looks and feels like an ordinary podcast; there are thousands like it. Probably because I have huge respect for Scott Hanselman and everything he does: he is a brilliant guy who can combine an interesting topic with a fun conversation. Or probably because of the ridiculously wide variety of topics covered. One week he talks about Excel spreadsheets, another week about toy robots, yet another week about some soft-skills topic like management, motivation, or creative processes.

Developeronfire. I discovered this podcast recently; I have listened to almost all the old episodes and never miss a new one. It is not technical, so you can listen to it while working out, when you cannot concentrate very much. The podcast is indeed about going personal with your favorite geeks. Dave, the host, invites active people in the software world to talk about personal matters: mostly developers, but also consultants, managers, marketers, and other professions, all somehow related to software development. Dave asks more or less the same set of questions: what the guest likes about technology, the guest’s definition of value, their biggest success and failure, their hobbies, and their value-delivering tips. Sometimes a guest is unremarkable and you will not hear anything special, and sometimes a whole episode is full of pearls of wisdom. Take, for instance, the episodes with Scott Hanselman, DHH, J.B. Rainsberger, or Linda Rising. Highly recommended: a lot of fun, deep conversations, and you will be surprised how many things resonate with your own experience in the industry. Cool stuff there.

EconTalk. This podcast is not specifically about technology or software, although some episodes are; it is about everything else. Most episodes are discussions between the host and some brilliant personality who has written a great article or book. EconTalk is meant to be about the economy, but the economy in a very broad sense. Topics can be strictly economic, like agriculture, chicken production, banking, and monetary policy. They can also be loosely related to the economy: machine learning, the influence of technology and artificial intelligence on the future economy, people’s egos, learning and education problems, sports, healthcare, and transhumanism. To summarize: this is a podcast where extremely smart people discuss very interesting problems that are essential to everyday life.


Exploring what the “Remember me” checkbox means on a Login page powered by ASP.NET Forms Authentication

A colleague of mine recently asked me to check whether our Login works correctly. His concern was that the application prompted him to log in even though earlier that day he had logged in with the “Remember me” checkbox checked. It was a surprise to me, because our configuration had this statement:
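The statement in question, quoted again later in this post, was the session-state timeout:

```xml
<sessionState timeout="720"></sessionState>
```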

Since I am not one of the original developers of the application I am currently working on, I was pretty sure this statement meant “keep the user logged in for 12 hours”. It turned out I was wrong, and in this post I will explore what the “Remember me” checkbox actually means.

Session timeout and Forms Authentication timeout are not related at all. ASP.NET is designed so that information about the currently logged-in user is not stored in the session, as I had presumed. Instead, when the user logs in, the server creates a so-called forms authentication ticket. This ticket is a long string of characters with user information encoded in it. The ticket is returned to the browser along with a request to set a cookie containing it. To illustrate:
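The response might look roughly like this (`.ASPXAUTH` is the default name of the forms authentication cookie; the ticket value here is made up):

```http
HTTP/1.1 200 OK
Set-Cookie: .ASPXAUTH=9D4432...<long encrypted ticket>...; path=/; HttpOnly
```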

The next time the browser sends a request to the server, it sends the same cookie, with the ticket inside, back to the server, for example:
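An illustrative request (again, the cookie name `.ASPXAUTH` is the ASP.NET default and the value is invented):

```http
GET /Home/Index HTTP/1.1
Cookie: .ASPXAUTH=9D4432...<long encrypted ticket>...
```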

By processing the received ticket, the server knows which user sent the request.
As you have probably noticed, no session is involved in this browser-server communication. This is how ASP.NET is designed. See the details here.

Hence, my configuration <sessionState timeout="720"></sessionState> has nothing to do with the Login functionality of my application. It turned out that ticket expiration can also be configured, by adding the following statement to the configuration file: <forms timeout="720" />. With that, my “timeout” configuration looks like this, and my user is not logged out for 12 hours:
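Assembled in web.config, the two timeouts (both in minutes) might look like this; this is a sketch with other attributes omitted:

```xml
<system.web>
  <!-- session timeout: unrelated to login -->
  <sessionState timeout="720"></sessionState>
  <authentication mode="Forms">
    <!-- forms authentication ticket timeout -->
    <forms timeout="720" />
  </authentication>
</system.web>
```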

“Remember me” only helps the user’s authentication survive a browser restart. So now my user is not logged out every 30 minutes (the default forms timeout), but why do we need this “Remember me” checkbox at all? When the checkbox is checked, the server not only asks the browser to store its ticket, it also asks it to persist the ticket for a certain amount of time. For instance, look at this response when the checkbox is checked:
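With “Remember me” checked, the Set-Cookie header additionally carries an expiration date, roughly like this (all values illustrative):

```http
HTTP/1.1 200 OK
Set-Cookie: .ASPXAUTH=9D4432...; expires=Tue, 31-Oct-2017 12:00:00 GMT; path=/; HttpOnly
```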

When the browser receives such a response from the server, it saves the cookie with the ticket to the file system, which makes it possible for the cookie to survive a browser restart and even an operating-system restart. Read more about persistent cookies here.


The programmable web

I spent many years at the university, teaching students software engineering and writing my Ph.D. thesis. During that time I naturally had to read a lot of books and papers, and one interesting point carved itself into my mind: for many years, software researchers and practitioners of all kinds have been trying to invent components. It was, and still is, a far too overloaded term. Many different software concepts have been considered components, from classes and functions to DLLs, JAR files, and other things specific to an operating system or framework. Angular 2, for example, has a component concept today, and the web as a whole now has the concept of a Web Component.

The primary characteristic of any component has always been some kind of secret, and the different kinds of components failed in different ways to hide their secrets and provide reusable abstractions. These days, however, we have another kind of component: a web service running somewhere. I am just amazed at how abstract it is. The only thing you have to know is the URL, the address of the resource you want to use in your application, and that’s basically it. Dave Thomas once mentioned in an interview that someone should have created an Intel of software components instead of tons of frameworks. Well, today’s frameworks are not that bad anymore, and we almost have that Intel of software components. The only difference is that this Intel is not a single company but a service through which you can select whatever component you want to use in your application.

What I am talking about looks like a marketplace for software components. Recently I had to build an application that would ask users to record video from whatever device they have, take and store those videos on a server, transcribe the videos, and store information about the videos as well as the transcriptions in some kind of storage. Sounds like a really big project? I was able to do it in approximately two weeks, including investigation and proofs of concept.

The first step was to search for some kind of video recording, playback, and hosting service.

After a bit of investigation, CameraTag and Ziggeo proved to be components that could be embedded in my web application to record and play back videos, and that could also host those videos and provide fast access to them.

Great; now that I had my videos and access to them, I wanted to transcribe them. For this I selected the IBM Watson service (which, by the way, is a great one).


Next, I glued these web services together, and the application was ready. And of high quality, by the way.

To conclude, I would like to say that now is a really great time to develop software applications. Even a small team can build something complex and large by grabbing whatever complex component it needs and accessing it almost instantly. One can also build a business out of providing such software components. The business model is pretty simple: people pay you for the computing and storage resources you provide. In my example, IBM earns money for its artificial-intelligence algorithms, and Ziggeo and CameraTag earn theirs for capturing, storing, and playing video.


The idea of a computational process

Today I started reading the book “Structure and Interpretation of Computer Programs“. I liked the beginning of the first chapter so much that I decided to translate it. It seems to me that for programming students these few paragraphs can be something of a revelation.

We are about to study the idea of a computational process. Computational processes are abstract beings that inhabit computers. As they execute, processes manipulate other abstract beings called data. The execution of a process is directed by a system of rules called a program. People create programs to direct processes. In effect, we conjure the spirits of the computer with our spells.
A computational process is indeed much like a sorcerer’s idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a manipulator arm in a factory. The programs we use to conjure processes are like a sorcerer’s spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages and prescribe the tasks we want our processes to perform.

In a correctly working computer, computational processes execute programs precisely and accurately. Thus, like the sorcerer’s apprentice, novice programmers must learn to understand and to anticipate the consequences of their conjuring. Even small errors in programs (usually called bugs or defects) can have complex and unanticipated consequences.

Fortunately, learning to program is considerably less dangerous than learning sorcery, because the spirits we deal with are conveniently contained in a secure way. Real-world programming, however, requires care, expertise, and wisdom. A small bug in a computer-aided design system, for example, can lead to the catastrophic crash of an airplane, or to the damage or self-destruction of an industrial robot.

Skilled software engineers can organize programs so that they can be reasonably confident that the processes directed by those programs will perform the tasks intended. They can visualize the behavior of their systems in advance. They know how to structure programs so that unanticipated problems do not lead to catastrophic consequences, and when problems do arise, they can debug their programs. Well-designed computational systems, like well-designed automobiles or nuclear reactors, are designed in a modular manner, so that their parts can be constructed, replaced, and debugged separately.


Does model-driven software development have a future?

I have not written for a while. I often feel the urge to write something, but then somehow never get around to it. Then recently I came across a presentation by the esteemed Johan den Haan titled “Why there is no future for model driven development”. Since I am an ardent supporter of model-driven software development, I cannot help but react. I briefly expressed my opinion to him on the site, and now I have decided to reason about it in a bit more detail.
Let us start with a definition. Model-driven development (MDD) is a software development methodology aimed at using software models as the primary development artifacts and generating other work products from them, including source code, for example in Java or C++. Like most concepts in the software field, this one is borrowed from the more traditional engineering disciplines, although there model-drivenness is not emphasized so explicitly, because the use of models is taken for granted. No one can even imagine constructing a building like the Moskovskyi Bridge in Kyiv, or an aircraft like the Mriya, without first building a large number of diverse, specialized models. Models help us understand complex problems and their potential solutions through abstraction. In software development, however, modeling has by no means achieved wide adoption. I will not try to analyze the reasons here, that is a topic for a separate post; instead, let us move straight to demonstrating that MDD does have a future.
Without going into the history of the continuous raising of the abstraction level of programming languages, let us simply list the fundamental principles of software engineering and show how MDD fits each of them: rigor and formality, separation of concerns, modularity, abstraction, and anticipation of change.
Rigor and formality. Software development is a creative activity, and any creative activity has a tendency toward imprecision. Rigor and formality are a necessary complement to any engineering activity; only with them can the cost and quality of that activity’s products be controlled. Modeling software using formal languages with strictly defined syntax and semantics certainly fits this principle. Of course, this does not apply to an architecture drawn with a marker on a whiteboard; such a model is useful, at best, for discussing design decisions.
Separation of concerns. One of the most important principles of engineering in general and software engineering in particular. Separation of concerns means simplifying a single monolithic solution to a problem by splitting it into interacting solutions of sub-problems. Modeling different aspects of software, such as business logic or architecture, with different domain-specific languages (DSLs) is an implementation of this principle. Of particular value to a business are domain-specific languages designed to describe the specific domain in which the application will operate: for example, languages for describing insurance policy management, business processes, or algorithms for computing student performance statistics. A business expert can read a program written in such a language; a program written in Java, they cannot.
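To make the idea concrete, here is a toy sketch of an internal DSL in Python; all names and the insurance example are invented for illustration. The point is that the “program” reads almost like the business statement it encodes:

```python
# A toy internal DSL for describing insurance policies.
class Policy:
    def __init__(self, name):
        self.name = name
        self.rules = []

    def covers(self, risk, up_to):
        # Method chaining keeps the description declarative and readable.
        self.rules.append((risk, up_to))
        return self

home = (Policy("Home Standard")
        .covers("fire", up_to=200_000)
        .covers("flood", up_to=50_000))

print(home.name, home.rules)
```

A domain expert can verify a description like `covers("flood", up_to=50_000)` at a glance, while a generator can turn the same model into whatever implementation code is needed.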
Modularity. This principle is a special case of the previous one. Its essence is splitting a system into simpler parts called modules, components, services, and so on. These names are, of course, abstractions created to implement the principle of modularity. But we are not limited to these abstractions when building software! Having formally defined them, we can use abstractions such as component, process, closure, exception, message, synchronization, connector, engine, aircraft wing, controller, hamburger, insurance policy; the set is limited only by our imagination. All of them are parts of some application, and modeling software with them is an implementation of the modularity principle.
Abstraction. This principle needs no commentary. Note that all software is abstraction. Abstraction lets us create complex systems; it gave us the notions of class, object, event, function, library, and so on. Software development is hard: software is the most complex product humans create. Ask yourself: how many mechanical engineers do I know? And how many software engineers? I suspect the latter are several times more numerous. I believe it is precisely working at too low a level of abstraction that forces companies to hire ever more software engineers and to charge sky-high prices for developing and maintaining software. MDD makes it possible to raise the level of abstraction and to create new abstractions that better match the needs of the stakeholders in a development effort.
Anticipation of change emphasizes that we must anticipate changes to software and create convenient conditions for making them. As a rule, this principle is realized through modularity and separation of concerns. If different aspects of a system are modeled separately, then the business logic, the architecture, and the underlying abstract machine (programming languages, operating systems, middleware, databases, and so on) can each be changed independently of the other aspects. In other words, MDD provides the tools to implement the principle of anticipation of change effectively.
Of course, MDD is no silver bullet; I doubt such a bullet will ever be invented. In my opinion, however, it is a natural step forward, the maturing of software development into an engineering discipline. The author of the presentation I am commenting on noted that modeling should concern not only software construction but also the other phases of development, especially requirements gathering and deployment. It is hard to disagree, so a more accurate term would be not model-driven development but model-driven software engineering. For such engineering to become a reality, a great deal of research and practical work is still needed.
So, does model-driven software engineering have a future? Apparently yes; otherwise software as such has no future.