In all the programming languages I have used, decimal numbers are written with a point as the separator, which contradicts several definitions such as:
Decimal numbers are those that are represented by a comma and that have an integer part (to the left of the comma) and another decimal part (to the right of the comma).
Concept taken from https://www3.gobiernodecanarias.org/medusa/ecoblog/crodalf/numeros-decimales-concepto/
or
We can say that a decimal number is one that is made up of an integer part and a decimal part, separated by a comma, and represents quantities that are not integers.
Definition taken from https://www.mundoprimaria.com/recursos-matematicas/numeros-decimales
For example, if we perform a simple division in the JavaScript console:
179/4
--> 44.75
It returns a decimal separated by a point, not by a comma. Why does this happen?
Note: If this question is not for this site please let me know.
First of all, this is not a problem specific to programming languages; I wouldn't even call it a problem. We could simply say that it is a "cultural inconsistency". Centuries before computers existed, the world had not reached a consensus on how to mark off the decimal part of a number. There were several conventions, but over time two prevailed: the point, used mainly in Great Britain and its areas of influence, and the comma, used in Western Europe (and later Latin America) and Russia, among others. When the first modern computers appeared, the best-known and most successful ones emerged mainly in the United States, which is why the languages and hardware that set the standard adopted the notation we could well call "Anglo-Saxon".
Even though the development of new languages is not tied to the Anglo-Saxon world, it is clear that the industry has preferred to maintain this de facto standard.
On the other hand, it is worth mentioning what is stated on the page of Fundéu (Fundación del Español Urgente):
Finally, it is worth pointing out that even when a language uses the point as its decimal separator, we can always change how the data is displayed, localizing ("formatting") it according to regional conventions, although of course this necessarily means converting the number to a formatted string.
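As a minimal sketch of that localization step, the standard `toLocaleString` and `Intl.NumberFormat` APIs can render the same value with a comma or a point depending on the locale (the locale tags `es-ES` and `en-US` below are just illustrative choices):

```javascript
// The number itself always uses a point internally; only the
// string representation changes with the locale.
const result = 179 / 4; // 44.75

console.log(result.toLocaleString('es-ES')); // "44,75"
console.log(result.toLocaleString('en-US')); // "44.75"

// Intl.NumberFormat is preferable when formatting many values,
// since the formatter is built once and reused.
const spanish = new Intl.NumberFormat('es-ES');
console.log(spanish.format(result));         // "44,75"
```

Note that the output of these calls is a string, not a number, so it is suitable for display but not for further arithmetic.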
For more information: Decimal separator
The explanation is quite simple: relative to Spanish usage, in the English system commas and periods swap roles. For example, the Spanish 1,4 is written 1.4 in English.
Note that in scientific notation 1400 would be 1.4×10³ (written 1.4e3 in most programming languages). The machine representation of such numbers is called floating point in English; in Spanish you can call it both coma flotante and punto flotante.
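This separator swap also matters when parsing. A quick sketch in JavaScript (the behavior shown is standard ECMAScript semantics) of how only the point is recognized:

```javascript
// JavaScript numeric literals and string-to-number conversion
// only accept the point as the decimal separator.
console.log(Number('1.4'));     // 1.4
console.log(Number('1,4'));     // NaN — the comma is not recognized
console.log(parseFloat('1,4')); // 1 — parsing stops at the comma

// Scientific notation likewise uses the point: 1.4e3 is 1400.
console.log(1.4e3);             // 1400
```

So a localized string like "44,75" must be converted back (for example, by replacing the comma with a point) before it can be used as a number again.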
In short, since the world currently communicates in English, computers do too.