Could you clarify the concept of "optimization" in the sense it is used in the article?

As far as I can tell, the concept of "optimization" is used in a peculiar way in the paper. The optimization of the "finite numbers" recorded online, unbeknownst to end users, appears to be something negative, but neither the reason for this nor the specific meaning of "optimization" in this sense is made clear.

1 Answer
Accepted answer

Optimization of MMIE systems will likely lead toward the canonicalization of 'value' as a commensurability measure. This is what I mean by the dangers of mapping a human being to finite numbers within the context of optimization.

Commensurability between man and machine is the heart of the manuscript; in many ways it is a generalization of the Trolley problem (choose between two options: in the first, five children die; in the second, 24 adults die). When a human being is represented as a vector of finite numbers, an incommensurable measure is made commensurable. Commensurability allows for weighted utilitarian calculi (one example is Bentham's greatest good for the greatest number). When such calculi are used in optimization frameworks, such as resource allocation, inhumane solutions - those that sacrifice the well-being or life of human beings for the 'greater' benefit of machine artifacts, or for performance indices such as a smoother economy, reduced air pollution, greater computing efficiency, fewer roadblocks to refactoring, etc. - must be avoided, or at least readily identified.

This is not as straightforward as it appears. You may mark human records in a system as undeletable and impose a fixed rule, "Never delete these records, no matter what benefits may accrue" - but what happens when that system becomes part of a larger system and is superseded by a copy without this restriction? Or when the system is upgraded? How do we avoid (in Marxist terms) data-commodified human elements, or conversely the reification of a machine algorithm, within the overall framework of a continuously optimizing environment?

Second-, third-, and nth-order effects may drive towards a goal yet be totally opaque at the present point in time. Asimov's (1958) science-fiction story "All the Troubles of the World" shows how readily a data-driven optimizing entity can, seemingly innocuously, work towards a hidden, catastrophic goal: https://en.wikipedia.org/wiki/All_the_Troubles_of_the_World
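The fragility of such a fixed rule can be sketched in a few lines. Everything here is hypothetical (the `Record` schemas, the `undeletable` flag, the `optimize` and `migrate` functions are illustrative names, not anything from the manuscript): a minimal sketch assuming a naive migration into a successor system whose schema simply never carried the protective flag over, so the purely utilitarian calculus reasserts itself.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A human being mapped to finite numbers, plus a protective flag."""
    person_id: int
    utility: float            # the commensurable 'value' the optimizer sees
    undeletable: bool = True  # fixed rule: never delete, no matter the benefit

@dataclass
class RecordV2:
    """Successor schema: the 'undeletable' rule was never carried over."""
    person_id: int
    utility: float

def optimize(records, capacity):
    """Keep the highest-'value' records that fit, honouring the flag if present."""
    kept = [r for r in records if getattr(r, "undeletable", False)]
    rest = sorted((r for r in records if r not in kept),
                  key=lambda r: r.utility, reverse=True)
    return kept + rest[:max(0, capacity - len(kept))]

def migrate(records):
    """Naive copy into the larger, superseding system."""
    return [RecordV2(r.person_id, r.utility) for r in records]

people = [Record(1, 0.1), Record(2, 0.9), Record(3, 0.5)]
survivors_v1 = optimize(people, capacity=1)           # the rule protects all three
survivors_v2 = optimize(migrate(people), capacity=1)  # only the highest 'value' survives
```

Nothing malicious happens at any single step: the migration is a faithful copy of every field the new schema knows about, and the optimizer correctly enforces its capacity constraint. The deontological rule was lost simply because it lived in one system's schema rather than in something indelible that survives system boundaries.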

This is also why I introduced and emphasized the Chinese concept of Shi in the manuscript: arranging reality to birth a future.

These are multi-generational "Cathedral" issues of software engineering, continuous integration, and teleontological reasoning that need to be addressed, in my humble opinion. One will need to ensure a deontological imprimatur on these systems: indelible, enforceable, survivable, adaptive, anti-fragile, and parsable by future generations. Human society had a Moses and a Decalogue for the ages; we likely need something equivalent.
