Symbolic Notation

One of the key features of a notation is the scale at which it remains consistent: how far the same form can stretch to describe both very simple and very complex things. Natural language, for example, has this feature; the same sentence structure expresses trivial ideas and intricate ones alike. Programming languages have made great progress toward this recursive uniformity. However, a few barriers remain that we would like to resolve in order to create a universal platform on which emergent complexity can thrive. The primary categories are the code/compiler/hardware barrier, the multi-computer barrier, source localization, and statistical versus absolute calculation.
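
To make this recursive uniformity concrete, here is a minimal sketch in Python (the names Lit, Add, and evaluate are illustrative, not drawn from any particular system): a single recursive form describes both a trivial expression and an arbitrarily nested one, using the same two evaluation rules throughout.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Lit:
        value: int

    @dataclass
    class Add:
        left: "Expr"
        right: "Expr"

    # One recursive form covers the whole scale of complexity.
    Expr = Union[Lit, Add]

    def evaluate(e: Expr) -> int:
        # The same two rules handle a lone literal and a deep tree alike.
        if isinstance(e, Lit):
            return e.value
        return evaluate(e.left) + evaluate(e.right)

    print(evaluate(Lit(1)))                             # a very simple idea: 1
    print(evaluate(Add(Add(Lit(1), Lit(2)), Lit(3))))   # a more complex idea, same form: 6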

Code/Compiler/Hardware Barrier

For the majority of developers today, code is regarded as separate from its compiled product and from the hardware it runs on. We move from theoretical representations toward executable code by translating abstract symbolic concepts into concrete executable ones that cover the majority case. Given the limitations of computer memory and processor instructions, these theoretical operations can never be conveyed completely. Basic arithmetic on computers is a simple example: while the computer representation is sufficient for the majority of cases, we can easily hit edge cases such as integer overflow and underflow, or floating point inaccuracies.
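
A short sketch of these edge cases, assuming Python with numpy available to mimic fixed-width hardware integers (plain Python integers are arbitrary precision and would hide the overflow):

    import numpy as np

    # Integer overflow: 32-bit arithmetic silently wraps around.
    x = np.array([2**31 - 1], dtype=np.int32)   # largest 32-bit signed integer
    print(x + 1)                                # [-2147483648]: wraparound, not 2**31

    # Floating point inaccuracy: 0.1 has no exact binary representation,
    # so the "theoretical" identity 0.1 + 0.2 == 0.3 fails on real hardware.
    print(0.1 + 0.2 == 0.3)                     # False
    print(0.1 + 0.2)                            # 0.30000000000000004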

Multi-Computer Barrier

Looking at the data flow graph, we realize that there is no fundamental difference between data passed between stack frames on one machine and data sent across fiber between machines; the two simply have different performance characteristics. Compiler theory already has the notion of rule-based optimization driven by benchmarks, which allows JIT compilers to choose the optimizations best suited to the mix of characteristics of the platform they run on.

Rule-based optimization is comparable to a catalyst in organic chemistry. The goal is to reach a lower energy state, and we enable complex reactions through catalysts that carry a high cost of construction but repay that cost many times over the span of their lifetimes.
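
As a toy illustration of benchmark-driven rule selection (all function names here are hypothetical), the sketch below times two interchangeable implementations on the current platform and routes future calls to the faster one. The one-time measurement is the catalyst's cost of construction, repaid over the lifetime of the program, much as a JIT compiler picks the optimization mix suited to the hardware it finds itself on.

    import timeit

    def sum_loop(n):
        total = 0
        for i in range(n):
            total += i
        return total

    def sum_builtin(n):
        return sum(range(n))

    # Benchmark each candidate once on this platform (the "construction cost").
    candidates = [sum_loop, sum_builtin]
    timings = {f: timeit.timeit(lambda f=f: f(10_000), number=200)
               for f in candidates}

    # The rule: route all future calls to whichever ran fastest here.
    fast_sum = min(timings, key=timings.get)
    print("selected:", fast_sum.__name__)
    print(fast_sum(10_000))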

Source Localization

Statistical Versus Absolute Calculation

Proper integration of statistical calculation allows information flows to be executed over statistical data, which is how most information in the world actually exists. It also allows seamless integration of statistical computation platforms such as the quantum computing machines now in development. For those unfamiliar with the workings of quantum computing, a simplified notion is that operations modify the statistical state of qubits (often realized as the spin of atoms or similar two-level systems) until we perform an observation, at which point the superposition collapses to a definite answer with probabilities determined by the superimposed states.
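
A minimal classical simulation of this idea, assuming nothing beyond the Python standard library: a qubit is a pair of amplitudes, a gate rewrites them, and an observation collapses the superposition to a definite 0 or 1 with probabilities given by the squared amplitudes. This is an illustration only, not a real quantum backend.

    import math
    import random

    state = [1.0, 0.0]   # amplitudes for |0> and |1>; start in |0>

    def hadamard(s):
        # A gate rewrites the amplitudes: here, into an equal superposition.
        a, b = s
        r = 1 / math.sqrt(2)
        return [r * (a + b), r * (a - b)]

    def measure(s):
        # Each call models preparing a fresh qubit in this state and observing it;
        # the outcome is statistical, with P(0) the squared amplitude of |0>.
        p0 = abs(s[0]) ** 2
        return 0 if random.random() < p0 else 1

    state = hadamard(state)
    samples = [measure(state) for _ in range(1000)]
    print(sum(samples) / len(samples))   # ~0.5: a statistical, not absolute, answer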