#clean-code #software #code #implementation


Primitive types simplify away and obfuscate the meaning of the variables / arguments in code, making the code slightly harder to read (one more thing to worry about[1]). Moreover, they prevent validation that should happen on the type itself[2]; that validation is typically moved into the class taking these variables as input. Instead, types should represent the actual underlying unit of the variable.

For example, using a plain decimal to hold money a) hides the fact that it is money, b) hides the currency used, c) does not validate the value, and d) leaves interpretation up to the user: one reader can assume the value is in cents rather than dollars, another can assume that 0.001 is a valid amount.
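A minimal sketch of the alternative, assuming a hypothetical `Money` value object that stores amounts in integer minor units (cents) and carries its currency explicitly:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Money:
    """An amount in integer minor units (cents), tagged with its currency."""
    cents: int
    currency: str

    def __post_init__(self):
        # 0.001 is rejected here: amounts must be whole numbers of cents.
        if not isinstance(self.cents, int):
            raise TypeError("cents must be an integer number of minor units")
        if self.currency not in {"USD", "EUR"}:  # hypothetical allow-list
            raise ValueError(f"unsupported currency: {self.currency}")

    def __add__(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("cannot add amounts in different currencies")
        return Money(self.cents + other.cents, self.currency)


price = Money(1999, "USD")          # unambiguously $19.99
total = price + Money(500, "USD")   # Money(cents=2499, currency='USD')
```

The type itself now answers all four questions: it is money, the currency is visible, invalid values like `Money(0.001, "USD")` raise at construction, and "cents vs. dollars" is no longer left to the reader.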

For example, using a plain string to hold an id a) allows incorrect values, b) prevents type-checking, c) shifts the responsibility of determining how the field should be formatted to the user, and d) shifts the responsibility of validating the value to the class exposing the variable or argument.
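The same idea applied to ids, as a sketch with a hypothetical `OrderId` wrapper and an assumed `ord_` + 8-hex-digit format:

```python
import re


class OrderId:
    """Wraps a raw string id; the format is validated once, at construction."""
    _PATTERN = re.compile(r"^ord_[0-9a-f]{8}$")  # hypothetical id format

    def __init__(self, raw: str) -> None:
        if not self._PATTERN.fullmatch(raw):
            raise ValueError(f"malformed order id: {raw!r}")
        self.raw = raw

    # Value semantics so ids compare by content and work as dict keys.
    def __eq__(self, other: object) -> bool:
        return isinstance(other, OrderId) and self.raw == other.raw

    def __hash__(self) -> int:
        return hash(self.raw)


def cancel_order(order_id: OrderId) -> None:
    # A type checker rejects cancel_order("ord_deadbeef") passed as a bare str.
    ...


oid = OrderId("ord_deadbeef")   # valid by construction
```

Every `OrderId` in the system is valid by construction, so functions receiving one no longer need to re-validate it, and the type checker catches a raw string (or a customer id) passed where an order id is expected.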


[1]: Using a primitive to represent a type shifts the responsibility of determining how the variable should be used onto the reader or user, which renders the code less clean.

[2]: Which itself transgresses the Single Responsibility Principle and Separation of Concerns {TODO: write SRP and SoC}.


Pablo Manuelli / Why you should avoid using primitive types