Are soft skills more important than deep technical expertise?

I often hear the idea that technical skills are somewhat less important than, for example, communication and other soft skills. Here are some quotes to illustrate what people think:

Dustin Ewers, Developer On Fire:

The developer that’s a ninja at whatever the latest JavaScript framework is, is great, but a developer who is OK at that and also a really good communicator is gonna win every day of the week in terms of actually delivering value to people.

John Sonmez, Soft Skills book:

I’d rather hire a developer who knows a little less but knows how to figure out what needs to be done and how to do it, than someone highly skilled who requires constant hand-holding to be productive.

I have seen the most technically competent yet arrogant and unfriendly people lose out on a job to a much less skilled but likable person.

TJ VanToll, Developer On Fire:

I don’t consider myself a very good software developer necessarily. I think I succeeded more in that, like, learning how to write a good email, learning how to write an opinionated article, can actually take you a lot farther in many cases than knowing how to write good code, for instance.

Now let’s look at the Pareto principle: roughly 80% of our problems can be solved with 20% of the features of the tools we use (such as a programming language or framework).

Multiply that by the law of diminishing returns, which states that the return on our efforts keeps shrinking. Hence learning the 20% most-used features of a programming language will provide much more value than learning the next 20% of features. Yet learning that next 20% will require more effort, because those features are used more rarely and are therefore harder to remember.

Add the fact that:

Half of what a programmer knows will be useless in 10 years.

And I would emphasize here that the more detailed and specific a piece of knowledge is, the faster it becomes useless. This rather demotivates people from learning new things, especially things they are not going to use immediately.

Finally, add to the equation StackOverflow, YouTube, the blogosphere, and all the other freely and immediately available resources capable of solving most of your programming problems in seconds.

What do we get? Well, for most jobs, very deep and detailed technical knowledge is not as valuable as soft skills: the ability to communicate with people in natural as well as programming languages, and the ability to solve problems and organize (architect) systems.

At my current project, we hired a guy just because I knew him: I knew he was a good communicator, a nice person, and someone who loved his craft. That’s it. We did not ask him any questions like “what is polymorphism?” or “in what order are constructors called in an inheritance hierarchy?”. And it really worked out great. Would I instead want someone who knows 80% of .NET by heart but comes with a hypertrophied ego? No way…

Of course, I am just sharing my own thoughts, along with evidence that other people have the same thoughts. Interviewers, however, want you to know the nitty-gritty details, as if knowing them proves you will be able to deliver, will keep working on a long, demanding project, or will do well with the team. And many people would disagree with me and insist that deep technical expertise is the key.

What do you guys think? What is your experience?


VS Code and Gulp: Solving the “Breakpoint ignored because generated code not found” problem

Recently I solved a problem that had been bugging me for some time. I guess there are many possible causes of this problem, and many solutions to it; however, I will share my solution here and, hopefully, it will help someone else at least start digging in the right direction.

So I am using Visual Studio Code for my JS editing and Gulp to automate the build, which mainly takes application files from the src folder, passes them through the Babel transpiler, and saves them to the .tmp folder. The server serves the application from the .tmp folder.

The problem: I want to run my code in Chrome and debug it right in VS Code; however, whatever settings I select, setting a breakpoint causes VS Code to complain: “Breakpoint ignored because generated code not found”.

To solve the problem I did the following:

  • Used the diagnosticLogging property of the configurations property in the launch.json file. Even if the solution below does not help you, this option will help you debug your problem; this is how I arrived at my own solution.

  • Wrote a launch configuration like this:
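The configuration snippet itself was lost from this copy of the post. Here is a minimal reconstruction, assuming the Debugger for Chrome extension of that era and a hypothetical dev server on localhost:3000 (the port, url, and webRoot must match your own setup):

    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Attach to Chrome",
                "type": "chrome",
                "request": "attach",
                "port": 9222,
                "url": "http://localhost:3000/*",
                "webRoot": "${workspaceRoot}/.tmp",
                "diagnosticLogging": true
            }
        ]
    }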

It is very important to start Chrome with remote debugging port 9222 open. Read the official docs for more details. Note that my url ends with /*; this tells VS Code to track all files while debugging. You can read more about this here.

Also note that webRoot points to the .tmp folder I mentioned earlier. This is so important that I will repeat it: this folder holds the JS files emitted by Babel, and these files are later served to the browser. Hence this is one part of the puzzle: it tells VS Code exactly which files to map to the ones opened in VS Code (those in the src folder).

  • Set up the Gulp task correctly so that it not only transpiles the JS code but also generates correct source maps. Since I am using Gulp with the source maps plugin, I made sure the source maps do not include the sources’ content themselves but instead reference the root of my application’s sources (the src folder the transpiler reads from). Here is roughly how my transpiling Gulp task looks:
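The original gulpfile snippet is also missing here; this is a sketch of what such a task looks like, assuming gulp-babel and gulp-sourcemaps (the glob and preset are illustrative; the sourcemaps.write options are the ones discussed below):

    // gulpfile.js (sketch)
    const gulp = require('gulp');
    const babel = require('gulp-babel');
    const sourcemaps = require('gulp-sourcemaps');

    gulp.task('transpile', () =>
        gulp.src('src/app/**/*.js')
            .pipe(sourcemaps.init())
            .pipe(babel({ presets: ['es2015'] }))
            // Reference the sources instead of embedding their content:
            .pipe(sourcemaps.write('.', {
                includeContent: false,
                sourceRoot: __dirname + '/src/app'
            }))
            .pipe(gulp.dest('.tmp')));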

At first, I wrote the source maps with g.sourcemaps.write('.'). Although this did generate source maps, they were apparently insufficient for VS Code to figure out how to match the files Chrome served from .tmp with the files from src that it was browsing. The setting { includeContent: false, sourceRoot: __dirname + '/src/app' } instructs the source map generator not to include the source code in the source maps themselves but instead to point to the source root where the sources can be found. In my case that is __dirname + '/src/app', which is exactly the folder VS Code was browsing. And that is the second part of the puzzle.

However, if the src folder is not served to the browser by your server, such source maps will not work in Chrome, because Chrome will not be able to read the files from the local disk. This may cause problems if you still want to debug your code in Chrome itself from time to time; they can easily be solved by disabling JS source maps in the Chrome DevTools settings.


We would fail to write bubble sort on a whiteboard… Should we be proud?

Recently there was a very interesting thread on Twitter where famous programmers openly admitted they are very bad at Computer Science (CS) and other fundamental concepts and are still doing very well.

Now, this is very interesting. To me, this is just another manifestation of the fact that knowledge as such has little value today. Everything can be googled in a very short time. Yes, the search takes some time, but it is much more efficient than trying to learn and remember all the details of CS and technology concepts. And I also admit it: I would fail to write bubble sort on a whiteboard or estimate my algorithms’ complexity, despite having been a successful developer for more than a decade now.

But one might come to the (I believe wrong) conclusion that it is OK to be unable to estimate an algorithm’s complexity and not to understand the essence of the fundamental algorithms. Algorithms are methods for solving problems of a procedural nature, while design patterns are methods for solving software structural problems, and I would argue both are the very basics of software engineering. If you do not know these methods, then when you are tackling a tough problem you basically don’t know what you don’t know: you are unable to discover that your problem reduces to one of those fundamental problems for which an elegant solution already exists. I definitely see my ignorance of CS as a problem, and I am currently deliberately studying algorithms :-). And also clean code, which I always enjoy learning and practicing.

Here is what John Sonmez’s experience was after he learned algorithms:

All of a sudden it was like I put on special glasses that let me see the world in a different light and all these problems, all these places where I was like there’s nowhere in the real world where I’m going to use algorithms in my code, bam, it was popping up everywhere. It’s like, “Whoa! Look! Oh, I recognize this. This is like a min-max problem.” Bam! All of these places I started writing really efficient, really good code because I could see the problems.

I can also feel the pain when job interviewers start asking puzzles and algorithm questions. I hate it. If you want the job, you are stressed already, and now you have to focus, concentrate, and tackle a tough problem at the whiteboard as if you were doing it at your real work! Problem solving is a very creative activity, and it cannot be done effectively under stress, because the brain focuses on safety first and foremost. I don’t know what should replace this practice, but to me it is pure evil. Interviewers, please don’t do it to us developers unless the job really requires this type of skill and knowledge. There is a pretty interesting article on a related topic by Yegor Bugayenko. But since interviewers keep asking, perhaps realizing the importance of fundamentals, we should be prepared!

Hence, if you have some spare time and a choice between learning some cool new JavaScript framework, which will be forgotten in a few years, and learning some of the algorithms or design patterns which will serve you throughout your whole career, I believe it is better to pick the latter.


An introduction to the APL programming language

I am delighted that, at the invitation of SimCorp Ukraine, I will once again be teaching a course on the basics of APL. This time I decided to record an introductory video so that prospective students can get acquainted with APL and with my teaching style.


What are TypeScript’s any, void, never, undefined and null types

Unlike intuitive types such as string or number, the types any, void, never, undefined, and null may confuse a newbie TypeScript developer. In this post, I will share what I have learned about these types.

Any

The simplest one is any. From the spec:

The Any type is used to represent any JavaScript value. The Any type is a supertype of all types and is assignable to and from all types. In general, in places where a type is not explicitly provided and TypeScript cannot infer one, the Any type is assumed.

So any is how so-called optional typing is implemented in TypeScript. With any you effectively switch off type checking and work in “JavaScript mode”. It is very useful when you are gradually migrating from JavaScript and have not yet figured out all the types. This is one of TypeScript’s core value propositions.
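A quick illustration of my own:

    let anything: any = 42;       // starts out as a number
    anything = 'now a string';    // fine, type checking is switched off
    anything.makeItSo();          // compiles too, even though it will fail at runtime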

undefined, null and --strictNullChecks

From the spec:

The Null type corresponds to the similarly named JavaScript primitive type and is the type of the null literal. The Undefined type corresponds to the similarly named JavaScript primitive type and is the type of the undefined literal. The Null type is a subtype of all types, except the Undefined type. The undefined type is a subtype of all types.

So, according to the spec, these are types whose domains consist of only one value each, and undefined is a specialization, or subtype, of null. However, despite this specialization relationship, you can still assign null to a variable of type undefined and vice versa; hence, more precisely, the domains of both null and undefined consist of the two values null and undefined:
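For example, without --strictNullChecks all of the following compiles:

    let u: undefined = null;   // OK
    let n: null = undefined;   // OK
    let x: number = null;      // OK, null belongs to every type's domain
    let s: string = undefined; // OK as well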

The compiler option --strictNullChecks enables strict null checking mode, which removes the null and undefined values from the domain of every other type; they become assignable only to themselves and to any (with the one exception that undefined is also assignable to void). I believe this makes it much simpler to understand and predict how TypeScript will behave in different situations. Strict null checks give us a tool for increasing code reliability, and therefore I recommend always having the option switched on, unless you are dealing with a huge legacy system that does not compile under this compiler switch.
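The same assignments under --strictNullChecks:

    let x: number = null;        // error: 'null' is not assignable to 'number'
    let s: string = undefined;   // error: 'undefined' is not assignable to 'string'
    let a: number | null = null; // OK, null is now explicit in the type
    let b: any = undefined;      // OK, any still accepts everything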

void

From the spec:

The Void type, referenced by the void keyword, represents the absence of a value and is used as the return type of functions with no return value. The only possible values for the Void type are null and undefined. The Void type is a subtype of the Any type and a supertype of the Null and Undefined types.

This seems to be a pretty simple type with a domain of two values, null and undefined. But as we learned earlier, the null and undefined types have the same domain. The need for those two types is dictated by JavaScript itself, so why do we need yet another type with the same domain? Well, first, with --strictNullChecks the domains are no longer the same, and second, we want to logically differentiate between a function which might return undefined in some cases and a function which is not expected to return any value at all:
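A sketch of the distinction (the function names here are mine, not from the original post):

    // May genuinely produce undefined; the caller is expected to check:
    function firstLongWord(words: string[]): string | undefined {
        return words.find(w => w.length > 10);
    }

    // Is not expected to return anything at all:
    function logWord(word: string): void {
        console.log(word);
    }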

never

The never type represents the type of values that never occur. It is a bit weird, but it makes perfect sense for a function that never returns, or for code placed in a branch that will never execute:
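Both cases in a minimal sketch of my own:

    // A function that never returns normally:
    function fail(message: string): never {
        throw new Error(message);
    }

    // A branch that can never execute:
    function double(x: string | number): number {
        if (typeof x === 'string') return 2 * Number(x);
        if (typeof x === 'number') return 2 * x;
        return x; // here x has type never: every case is already handled
    }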


Using array destructuring and recursion to make your JavaScript algorithms shine

JavaScript is far from being the best-designed language in the world. I personally often feel as if I am in a minefield when coding in it. But it evolves and gets better and better. I can express ideas in JavaScript increasingly well, and I feel it is becoming a truly powerful and rather pleasant language to use. Here, I want to share how one can calculate the sum of an array of numbers today. The example is contrived, but it is the one always used in schools to teach programming, and usually students are taught to create an accumulator variable and then iterate, accumulating the sum in that variable. How procedural! Now, look at what you can do with JavaScript today:
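The code block did not survive in this copy of the post; here is a minimal reconstruction in the spirit described, using array destructuring and recursion:

    const sum = ([head, ...tail]) =>
        head === undefined ? 0 : head + sum(tail);

    console.log(sum([1, 2, 3, 4])); // 10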

No variables, no loops, pure simplicity. It might not be very efficient code, but it is very simple and functional.

Another example. I found a quicksort implementation in JavaScript on the web (you can skip it, it is just for illustration):
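That snippet is missing here as well; a typical procedural implementation of the kind being criticized looks roughly like this (a reconstruction, not the exact code the post quoted):

    function quicksort(arr, left, right) {
        if (left === undefined) left = 0;
        if (right === undefined) right = arr.length - 1;
        if (left >= right) return arr;
        var pivot = arr[right];
        var i = left - 1;
        for (var j = left; j < right; j++) {
            if (arr[j] <= pivot) {
                i++;
                var tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
            }
        }
        tmp = arr[i + 1]; arr[i + 1] = arr[right]; arr[right] = tmp;
        quicksort(arr, left, i);
        quicksort(arr, i + 2, right);
        return arr;
    }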

It is totally procedural and quite difficult to understand. Now, here is my alternative implementation, using recursion and the latest JavaScript features:
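That block is missing too; here is a reconstruction along the lines the post describes, again with destructuring and recursion:

    const quicksort = ([pivot, ...rest]) =>
        pivot === undefined ? [] : [
            ...quicksort(rest.filter(x => x <= pivot)),
            pivot,
            ...quicksort(rest.filter(x => x > pivot))
        ];

    console.log(quicksort([3, 1, 4, 1, 5, 9, 2, 6])); // [1, 1, 2, 3, 4, 5, 6, 9]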

So clear, so simple, and again no loops or variables. I love it. Again, performance is not great here, but I am biased towards code simplicity and optimize only when absolutely necessary.


Updating an immutable object in TypeScript

Recently I could not figure out how to type-safely update an immutable object in TypeScript. In JavaScript, the object spread feature is used for this task, like so:
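The example was lost from this copy of the post; here is a reconstruction, assuming the same Point shape used later in the post:

    interface Point { x: number; y: number; }

    const p1: Point = { x: 0, y: 0 };
    const p2 = { ...p1, y: 1 }; // a fresh object with y updated: { x: 0, y: 1 }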

But if you accidentally misspell the property you want to update, object spread will silently add a new property instead of updating the old one:
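For example:

    // Meant to update y, typed yy by mistake -- and no compiler error:
    const p3 = { ...p1, yy: 1 }; // { x: 0, y: 0, yy: 1 }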

So I asked for help on StackOverflow and got an excellent answer: use a helper function:
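The function, reassembled here from the lines quoted and explained later in this post, is:

    function update<T, K extends keyof T>(obj: T, updateSpec: Pick<T, K>): T {
        const result = {} as T;
        Object.keys(obj).forEach(key => result[key] = obj[key]);
        Object.keys(updateSpec).forEach((key: K) => result[key] = updateSpec[key]);
        return result;
    }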

Now if I want to update my point but make a mistake, I get a compiler error:
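For instance:

    let o1: Point = { x: 0, y: 0 };
    o1 = update(o1, { y: 1 });  // OK
    o1 = update(o1, { yy: 1 }); // compile error: 'yy' is not a property of Point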

All is great now, but someone on StackOverflow asked for an explanation of each line of this update function. It is indeed not simple, and it uses the latest TypeScript features. The author did not respond, so I will try to explain it here.

The most complex line is the first one:

function update<T, K extends keyof T>(obj: T, updateSpec: Pick<T, K>): T {

This line incorporates two pretty complex TypeScript features: generics and indexed type queries. I highly recommend following the links in the previous sentence and reading the corresponding articles in the official TypeScript documentation, which is great. But here is my shallow explanation.

A generic type is a type which is not known when the function is defined and which only becomes known when the function is used somewhere in the code (all of this at development time, not runtime). For example, in the given update function the type T is not known when the function is defined, but at the call o1 = update(o1, {y:1}); the type T becomes Point. So for this particular call we know that update will take a Point and will also return a Point.

An indexed type query gives you the ability to query, at compile time, the names of the properties of some type. For example: type PointProperties = keyof Point; // "x" | "y". That is, on a variable of type Point we can query the property named x, or y, or both. So the expression K extends keyof T means, in our case, that the generic type K must be something compatible with "x" | "y":
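A few calls (my own, for illustration) showing how the constraint plays out:

    update(o1, { x: 1 });       // K is inferred as "x"
    update(o1, { x: 1, y: 2 }); // K is inferred as "x" | "y"
    update(o1, { z: 1 });       // error: "z" is not in keyof Point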

OK, now we know the exact type constraints of the update function, but what is Pick<T, K>? It is a type added to the TypeScript standard library in version 2.1, and it is a mapped type:
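Its definition in the standard library is short:

    type Pick<T, K extends keyof T> = {
        [P in K]: T[P];
    };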

You can think of it as a type consisting of the subset of properties of T described by K. So when we have a point defined this way: interface Point { x: number, y: number }, the type Pick<Point, "x"> is basically { x: number }, and Pick<Point, "x" | "y"> is { x: number, y: number }.

Hence, to conclude: the first line of the function gives its name and states that it takes as its first parameter a value of the unconstrained type T and returns a value of that same type. The second argument must be of some type which contains a subset of the properties defined in T. So, if you want to update a Point, the only eligible values for the second parameter of update are: { y: 1 }, { x: 1 }, {}, and { y: 1, x: 1 } (note: the number 1 is arbitrary here, any number will do; and if the compiler option --strictNullChecks is off, the values undefined and null are also eligible).

All the other lines are not nearly as interesting :-). The second line, const result = {} as T, just creates an empty object and tells TypeScript to treat it as a value of type T. The third line, Object.keys(obj).forEach(key => result[key] = obj[key]), is pure JavaScript which copies all properties from the input object obj to the result object. The fourth line, Object.keys(updateSpec).forEach((key: K) => result[key] = updateSpec[key]), does the same for updateSpec; this is what actually replaces the old values of obj with the new ones from updateSpec. And finally, the function returns: return result;.


Angular 1.x has no dependencies and therefore it’s OK to include it with Webpack’s noParse option. So wrong.

Angular 1.x has no dependencies, and therefore it’s OK to include it with Webpack’s noParse option. I learned this idea from a great online screencast (in Russian) devoted to Webpack. The noParse option instructs Webpack not to analyze the source code of the included module and not to inject any dependencies into it. This can boost build performance when dealing with big libraries which have no dependencies and export global variables to communicate with the other parts of the system. And indeed, Angular is huge, works without any dependencies, and exports a global variable.
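To make the setup concrete, the kind of configuration meant here looks like this (a sketch; the entry point and paths are illustrative):

    // webpack.config.js
    module.exports = {
        entry: './src/main.js',
        output: { filename: 'bundle.js' },
        module: {
            // The "optimization" this post warns about: skip parsing Angular.
            noParse: /node_modules[\/\\]angular[\/\\]angular\.js$/
        }
    };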

So I optimized my Webpack build by not parsing Angular, and later lost a day and a half trying to understand why my system did not work. I had included two other modules: jstree, which is a jQuery plugin, and ng-js-tree, the Angular wrapper over jstree. Of course, jQuery was also involved, and things just exploded.

The thing is, Angular HAS a dependency! It is tricky, though. The dependency is jQuery, but if you don’t provide jQuery, Angular will use its internal, lighter version of jQuery, named jqLite, and will silently continue working. Here is the Angular source code proving that.

So here is what happened under Webpack’s control: 1) jQuery was initialized; 2) jstree was initialized and attached itself to jQuery; 3) Angular was initialized, but since it checked for window.jQuery and found it undefined, Angular fell back to its internal beast, jqLite. window.jQuery was undefined because Webpack does not allow globals: it manages global variables like jQuery on its own and passes them to the modules which mention those variables in their source code. But that source code must be parsed first, and I had noParse for Angular :-). 4) ng-js-tree was initialized, but with the DOM element wrapped by jqLite, not by the real jQuery to which jstree was attached. 5) The miserable crash happened when ng-js-tree tried to access the DOM element provided to it, assuming it was a jQuery element while it was a jqLite one.

Hence, be careful when you optimize your Webpack build with the noParse option for some module. Even if that module works just fine without any dependencies, there might be complex interdependencies which will make you cry, as they did in my case with Angular, jQuery, jstree, and ng-js-tree.


How TypeScript can teach you web standards

Recently my team decided to migrate all the views in our Angular 1.x application to React. To me, the value of React doubles when it is used together with TypeScript. I enjoy developing views when I have autocompletion and static checks for the view model used in the view. I can also safely rename elements of the view model, which I prefer to do often, after I have worked with the model for a while and realized what the best names would be. Moreover, JSX in TypeScript is also statically checked, so you cannot make a mistake there either. All these merits are impossible in Angular 1.x and Angular 2: both have special DSLs for describing views, and of course those DSLs are not as powerful as TypeScript when it comes to static type checks.

But what I have discovered is that TypeScript can even teach you web standards. These standards are not very consistent, but we still have to use them correctly. Let me give you a few examples.

First, TypeScript can teach you JavaScript itself and save you from silly errors. Look at this code and try to work out the types of y and z (--strictNullChecks is on):
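The snippet itself was lost from this copy; judging by the discussion below, it was along these lines (a reconstruction; note that recent compiler versions may reject the impossible comparison outright):

    declare const x: number | null;

    if (typeof x === 'null') {
        const y = x; // what is the type of y?
    } else {
        const z = x; // and the type of z?
    }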

After you have given it some thought, here is the answer: the type of y is never, and the type of z is number | null. If you answered correctly, you are a great JavaScript developer with an exceptional memory. The type never is given to y because that line will never execute: typeof never evaluates to the string 'null', since JavaScript’s design flaw makes typeof null === 'object' true instead. Having learned this, what are the types of y and z in the following snippet?
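Presumably the second snippet differed only in the guard (again a reconstruction):

    declare const x: number | null;

    if (typeof x === 'object') {
        const y = x; // what is the type of y?
    } else {
        const z = x; // and the type of z?
    }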

Well, you are totally right: y is null, because typeof null === 'object' is true. And since the true branch of the if statement has dealt with null, the type of z is just number.

Now let’s have a look at a second example; this time TypeScript will teach us something about the DOM API. Have a look at the following code, which does not compile (it was meant to handle a React button click event):
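The original snippet is missing here; the shape of the problem was roughly this:

    import * as React from 'react';

    function handleClick(event: React.SyntheticEvent<HTMLButtonElement>) {
        // error: Property 'value' does not exist on type 'EventTarget'
        console.log(event.target.value);
    }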

What??? This is exactly how the official React page teaches us to access values from event targets!

[Image: React docs snapshot]

When we first encountered this problem, we decided there was a bug in the TypeScript type definitions for React (those are installed with npm install --save-dev @types/react). But later we discovered that those type definitions give SyntheticEvent<T> (the type of the event) another property, named currentTarget, whose type is different:
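An abbreviated excerpt from those definitions as they looked at the time:

    interface SyntheticEvent<T> {
        currentTarget: EventTarget & T; // the element the handler is attached to
        target: EventTarget;            // the element that dispatched the event
        // ...
    }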

Hmmm, let’s see what the documentation has to say about currentTarget:

Identifies the current target for the event, as the event traverses the DOM. It always refers to the element to which the event handler has been attached, as opposed to event.target which identifies the element on which the event occurred.

While target is

A reference to the object that dispatched the event. It is different from event.currentTarget when the event handler is called during the bubbling or capturing phase of the event.

So, are we interested in the element where we placed the event handler, or in the element which dispatched the event? target is the origin of the event, which hardly anyone really cares about; it might be a span inside a link, for example. currentTarget, on the other hand, is something you should very much care about, as it is the element to which you attached your event handler. See more discussion on this topic here. We learned all this only because TypeScript made us dig deeper into the two targets and see the difference. Without it, we would have used target and waited for problems to come (someone might place a child element inside the element we were listening to, and target would become that child element!). So thanks, TypeScript, and thanks to the great people who write type definitions, for the lessons :-).


Few guidelines on unit testing derived from my experience

When I first started using unit testing in my practice, I had no idea what I was doing, and I was frustrated. It made little sense to me to have everything unit tested, because writing tests and preparing test data took me too much time.

Ever felt the same way? Don’t understand where and when to write tests and where you should leave the code alone? Don’t worry, this is normal. It happens to everyone who tries to improve the way he or she builds software. I can’t give you a concrete prescription, no one can, but in this post I will share what has worked for me in terms of unit and automated testing. It will probably work for you too.

Don’t write unit tests for simple cases. One objective way to measure the return on investment (ROI) of unit tests is to measure how much time they save the development team by catching regressions. In simple cases, when the code is not going to change or is pretty straightforward, regressions are unlikely, and therefore it is likely that you will get no ROI at all from your unit tests; furthermore, you will still need to maintain them. The law of diminishing returns works here too: you can get 80% of the benefit by covering only 20% of your code with tests. That 20% of the code contains your core business logic, which delivers the most value to your customers. Everything else is glue code of one kind or another: configuration, mappings, framework and library interoperation, and so on. The more effort you put into covering that code, the less ROI you will get.

Write large acceptance tests for refactoring. If you plan to do a large refactoring or restructuring, classical unit tests will not help; in fact, they will get in your way. Classical unit tests are small and test small parts of the system, so when you start changing things, they start glowing red and you will need to delete them. A large acceptance test, by contrast, captures a whole business case of interaction between your system and the user. Such a business case is something that brings real value to the business and should not change during refactoring. Relying on acceptance tests will increase your chances of refactoring without damage to the business. On the Developer On Fire podcast (episode 149), Nick Gauthier reported that his largest career success was moving the web application he worked on from a classical client-server architecture, where HTML was rendered on the server, to a single-page application (SPA); acceptance tests made the transition really smooth for his team. My refactoring team at SimCorp also had a big success in not jeopardizing our product’s quality during a major refactoring which touched almost every user screen in the system. My team lead insisted on having a large acceptance test suite, which ultimately ensured our success.

Unit test complex algorithmic code with classical unit tests. As you probably know, there are the classical and the London schools of TDD. According to the classical school, a unit test simply applies input data to the system under test, harvests the results, and compares them to the expected results. According to the London school, a unit test invokes the system under test and then checks whether it behaved as expected, i.e. whether it called its collaborators correctly and in the correct order. While unit testing simple cases frustrates me, I get a lot of value from classical TDD when I develop complex algorithms. The value in this case comes from catching the regressions that happen during initial development, because when you develop a complex piece of software you can easily spend days inside one unit trying to put things together. I vividly remember one programming exercise from my SimCorp days: I had to develop a program which would take an APL data structure of any complexity (for instance, a matrix of matrices) and generate APL code which would recreate that particular data structure. My first attempt failed: after 4 hours of work I was far from done, and most of that time had been spent retesting the program on different kinds of input after every change to the algorithm. The next day, I tried classical TDD, and the process was not only fun but also fruitful: in about 4 hours I was done, with approximately 30 tests. I remember my impression was that without unit tests I would not have finished such a program in that amount of time and with that confidence.
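To make the contrast concrete, here is a minimal classical-style test in TypeScript (the system under test is a stand-in of my own, not the APL program from the story):

    import * as assert from 'assert';

    // System under test: pure input -> output.
    function sortNumbers(xs: number[]): number[] {
        return [...xs].sort((a, b) => a - b);
    }

    // Classical school: apply inputs, compare results with expectations.
    assert.deepStrictEqual(sortNumbers([3, 1, 2]), [1, 2, 3]);
    assert.deepStrictEqual(sortNumbers([]), []);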

Apply London school TDD to integration code. What if your core business logic is not algorithmic? What if the value you bring to the table lies in integrating different systems together? For a very long time this was an open question for me. In such cases I still want to be sure my core code is tested well; however, classical tests are awkward here, because integration code often has no meaningful inputs or outputs. I believe London school tests are perfect for it. Once, at StrategicVision, I had to develop a system that would download videos from video hosting services, extract the audio from those videos, transcribe the audio, and finally save the transcriptions and video links to a database. No business logic in the classical sense, right? My code just invoked a video hosting web service, then invoked the downloader, then invoked a transcription web service, and finally invoked a repository to store the results. So I wrote a bunch of tests which verified facts such as: if the system under test invoked the downloader for a particular video, it should later invoke clean-up for that video; and if the system under test invoked the database repository to store results, it should have invoked the transcription web service before that.
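A sketch of what such a test can look like (all names here are hypothetical; the real system is not shown in this post):

    import * as assert from 'assert';

    interface Downloader {
        download(videoId: string): void;
        cleanUp(videoId: string): void;
    }

    // A tiny hand-rolled spy that records the order of calls.
    function makeSpyDownloader(calls: string[]): Downloader {
        return {
            download: id => { calls.push('download:' + id); },
            cleanUp: id => { calls.push('cleanUp:' + id); },
        };
    }

    // System under test: a trivial pipeline that must clean up after downloading.
    function processVideo(videoId: string, downloader: Downloader): void {
        downloader.download(videoId);
        downloader.cleanUp(videoId);
    }

    // London school: assert on the interactions, not on a returned value.
    const calls: string[] = [];
    processVideo('v42', makeSpyDownloader(calls));
    assert.ok(calls.indexOf('download:v42') < calls.indexOf('cleanUp:v42'));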

These guidelines are highly subjective, of course, but they work for me, at least at this point in my career. Hopefully you will find them helpful too.
