Defining technologies via nonlinear orthography
The image below shows the high-level modular layout of a supercomputer architecture and illustrates the typical hierarchical process that characterizes how we currently engineer technologies. The system is created by a multi-level assembly process that proceeds from the left (chips, bottom level) to the final structure (right, arrays of cabinets). Each level comprises a specific series of components (CPUs, computer cards, boards,…) assembled from the previous levels, and supports a specific series of operations expressed in a virtual language (Assembler, C/Fortran,…) defined at that level. In this approach, a level is built if and only if all the inner levels are properly defined: it is not possible to build one layer (e.g., the computer cards) if we have not completely defined and characterized the components at the inner level (i.e., the processor chips).
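This strict bottom-up dependency can be sketched in a few lines of code (a toy model: the level names are taken from the figure, while the check itself is our own illustration, not part of any real toolchain):

```python
# Toy model of strict hierarchical assembly: a level can be built
# only when every inner (lower) level is already complete.
LEVELS = ["transistors", "chips", "computer cards", "boards", "cabinets"]

def can_build(level, completed):
    """Return True only if all levels below `level` are in `completed`."""
    inner = LEVELS[:LEVELS.index(level)]
    return all(l in completed for l in inner)

print(can_build("computer cards", {"transistors", "chips"}))  # True
print(can_build("boards", {"transistors", "chips"}))          # False: cards missing
```

The point of the sketch is the `if and only if` rule: no level can be skipped, and a single undefined inner level blocks everything above it.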
The level decomposition in the figure does not stop at the chips. Each processor can be further decomposed into an inner hardware level composed of individual integrated transistors (see figure below), which represent the basic building blocks of the entire system. Elements at higher levels (chips, cards,…) are composed of millions of identical copies of this design structure. The complete system works because the basic building elements (i.e., the transistors) are all the same; it does not work if these copies are not identical (within some tolerances).
This hierarchical organization does not apply only to electronics: it is a general building approach adopted in every technological platform currently available, e.g., in energy harvesting:
We observe the same structure: a system built from identical basic building blocks, here PIN junctions, which convert light into electron-hole pairs that are subsequently used to generate electric power. Identical PIN junctions are assembled at higher levels into solar panels, which are finally replicated into a large-scale solar harvesting farm.
Another example from nanomaterials engineering:
We again observe the same periodic assembly of identical copies of a basic building block, here composed of different lattice structures of air holes realized on a semiconductor or oxide substrate. The final structures work if the periodic assembly of the identical units is accurate, and break if the system lacks long-range order.
It is not difficult to continue with other examples and show that practically all the technologies we produce today follow the same linear approach: a hierarchical copy of individual building blocks, assembled into multi-level structures in which each level has specific rules, with well-defined sets of tasks and behaviors. We can represent this process as the creation of a one-dimensional puzzle, assembled piece by piece (i.e., level by level), using an identical piece (i.e., building block) each time. We do not insert a piece until all the previous pieces are in place:
This picture visually illustrates the many drawbacks of this linear approach:
- Fragility: the system breaks if any piece is missing or is not identical to the others.
- Lack of efficiency and unsustainable development: the system output increases linearly with the number of pieces. This implies that any feature of the system (computational power, energy harvesting efficiency, power consumption,…) scales linearly with the system size, which beyond a certain point makes it impossible for the system to sustain further improvement. The image below (DARPA SyNAPSE project, http://www.artificialbrains.com/darpa-synapse-program) illustrates this issue in computer science: the energy demand of standard machines is projected to grow unsustainably in the near future, and a sharp paradigm shift is required to sustain technological progress in this field.
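The scaling argument behind the second point can be made concrete with a toy calculation (the constants are assumed purely for illustration, not taken from the DARPA data): if both the output and the energy cost of a linear system grow proportionally with the number of identical pieces, the efficiency, i.e., output per unit of energy, never improves, no matter how large the system becomes:

```python
# Toy linear-scaling model (illustrative constants, not measured data):
# output and energy cost both grow proportionally with the piece count N,
# so efficiency (output / energy) stays flat as the system grows.
OUTPUT_PER_PIECE = 1.0
ENERGY_PER_PIECE = 2.0

def efficiency(n_pieces):
    output = n_pieces * OUTPUT_PER_PIECE
    energy = n_pieces * ENERGY_PER_PIECE
    return output / energy

for n in (10, 1_000, 1_000_000):
    print(n, efficiency(n))  # always 0.5: scaling up buys no efficiency
```

In such a regime the only way to increase output is to pay a proportionally larger energy bill, which is exactly the unsustainable trajectory the DARPA projection describes.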
The examples discussed above raise the question: what is the origin of this engineering approach? An interesting perspective comes from linguistic theory:
We have the same word for falling snow, snow on the ground, snow packed hard like ice, slushy snow, wind-driven flying snow—whatever the situation may be….To an Eskimo, this all-inclusive word would be almost unthinkable; he would say that falling snow, slushy snow, and so on, are sensuously and operationally different, different things to contend with; he uses different words for them and for other kinds of snow….We dissect nature along lines laid down by our native languages,…The grammar of each language is not merely a reproducing instrument for voicing ideas but rather is itself the shaper of ideas.
In the theory of “linguistic relativity” introduced by Whorf in the seminal paper quoted above, the language we speak affects the speaker’s world view and influences our thoughts and cognition. The language we learn from infancy influences the way neurons are wired during development, and ultimately determines how we perceive reality. In this view, different human populations developed different languages because of the different ways they “dissect” and perceive Nature. Critics of this theory argue that it is the other way around: languages are the product of the way we think and see the world through our cognitive senses. Both perspectives are relevant to our discussion: they point to language as the expression of our natural form of thinking.
Although the various cultures on the planet developed languages that sound and read differently, they all share a common point: they are all based on linear orthography:
I am writing a sentence in the English language.
Independently of the language we speak, we write letter by letter (i.e., piece by piece, level by level), we assemble letters into hierarchical structures (words, sentences, paragraphs,…), and we write sequentially: we do not place a letter or a word until all the previous words are in place and convey the meaning we intend. Our orthography mirrors precisely the way we build technologies. If the form of thinking we developed is linear, like our orthography, it is no surprise that this is our natural tendency when expressing our minds, whether in language or in the realization of technological products.
Complex natural systems, however, show us that there are other intriguing paths to follow. These systems are the product of millions of years of adaptation and are based on a much more evolved concept: what linguists call “nonlinear orthography”.
“…Unlike all human written language, their writing is symbolic. It conveys meaning, it does not represent sound. Perhaps they view our form of writing as a wasted opportunity, passing up a second communications channel. We have our friends in Pakistan to thank for their study of how the heptapods write. Because unlike speech, a logogram is free of time. Like their ship, or their bodies, their written language has no forward or backward direction. Linguists call this nonlinear orthography. Which raises the question: “Is this how they think?” Imagine you wanted to write a sentence using two hands, starting from either side. You would have to know each word you wanted to use, as well as how much space they will occupy. A heptapod can write a complex sentence in two seconds, effortlessly. It took us a month to make the simplest reply…”
Ian Donnelly (Arrival, 2016)
Complex systems stem from a “nonlinear orthographic” design platform, in which all the system’s parts are assembled in parallel, at the same time, with an overall functionality that is conceptually defined within a single design step. This analysis illustrates perhaps the main challenge in imitating these systems: it implies understanding a form of orthography that is not innate to us and, as such, is conceptually difficult and “complex” to grasp.
A first step toward overcoming this problem is the observation that such difficulty, or complexity, is only a mental projection. To a complex system, a “nonlinear orthographic” design concept is probably the simplest, and therefore most natural, approach to achieve the following:
- Robustness: even if a single element breaks, the system can re-route information through a structure connected in a multidimensional space and, as such, significantly reduces failures and maintains its functionality even under strong stress conditions.
- Scalability: the system is not based on periodic copies of identical objects and, as such, can be manufactured at large scales easily.
- Efficiency: the system response originates from the parallel interaction of a large number of elements and scales exponentially with the system size. This is a great advantage compared to linear systems, which scale linearly and therefore offer far inferior performance.
- Sustainability: exponential efficiency implies that a complex system tends to naturally dissipate only a limited amount of resources and, as such, possesses a high degree of sustainable development. Read more on these properties here.
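The robustness point above can be illustrated with a minimal sketch (our own toy construction, not a model of any specific system): a linear chain loses connectivity as soon as a single node is removed, while a structure with redundant links simply re-routes around the failure:

```python
from collections import deque

# Toy comparison: a linear chain has exactly one path, so removing any
# interior node disconnects it; a mesh with redundant links re-routes.
def reachable(graph, src, dst, removed=frozenset()):
    """Breadth-first search for a path from src to dst, skipping removed nodes."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen and nxt not in removed:
                seen.add(nxt)
                queue.append(nxt)
    return False

chain = {0: [1], 1: [2], 2: [3]}           # linear: a single path 0 -> 3
mesh = {0: [1, 2], 1: [3], 2: [3], 3: []}  # redundant paths 0 -> 3

print(reachable(chain, 0, 3, removed={1}))  # False: the chain breaks
print(reachable(mesh, 0, 3, removed={1}))   # True: re-routes via node 2
```

The mesh keeps working under the same failure because its connectivity lives in more than one dimension, which is the essence of the robustness argument above.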
At Primalight, we proceed step by step along what we call the complexity ladder: a ladder of different forms of complexity, ranging from the most studied to the most difficult to understand, through which we develop different technological products, from energy devices to smart materials, bio-imaging, security, and so on. This research is not biomimicry (we do not copy complex systems); it is the understanding of the rules of complex systems for “writing” technologies via nonlinear orthography, following an evolution-driven design paradigm to obtain high levels of efficiency, sustainability, and robustness in scalable platforms.