1 What This Chapter is About

In this chapter, we present an analysis of the development of a number of technologies: nanotechnologies, robotics, additive and cognitive technologies, ICT and AI, as well as forecasts for their future. Together with medicine, they are integrating into the unified MANBRIC complex (an acronym formed from the first letters of its seven constituent technologies). Each direction has its own future, its own problems and its own frame of reference.

We try to forecast major breakthroughs for each technology. In general, we believe they have a bright future. Nanotechnologies can have a significant impact on saving materials and energy, making major advances in medicine and many other industries and creating materials with amazing properties. Additive technologies promise to change the economy, moving from mass production to small-scale or customized production.

A universal smart robot will be able to alleviate labor shortages by taking on the work of a nanny, a nurse, a cook and even a sapper, as well as many other professions. Cognitive technologies will enable the rehabilitation of many millions of people who are currently paralyzed or otherwise disabled. AI will be an engine of innovation in almost all industries and spheres. One can expect driverless cars and pilotless aircraft/drones to become widespread in the future.

The chapter analyzes these and other possible advances in various technologies, highlights the problems and difficulties along the way, and presents arguments and reasons why certain forecasts may prove untenable. At the same time, we also analyze the possible negative consequences, including the threat to jobs posed by robots.

The described technologies are diverse, but a very important thread unites them.

First, they all provide excellent examples of self-regulating and self-managing systems, that is, systems that can make decisions with minimal human intervention or even autonomously. Self-managing systems are the basis of the future technological paradigm and the main characteristic of the Cybernetic Revolution.

Second, all the technologies described in the chapter belong to the complex of innovative directions MANBRIC, for which we forecast a great future.

Third, the technologies described below illustrate the convergence of MANBRIC technologies. The MANBRIC clustering in robotics is an excellent example of this, as it combines the maximum number of technological directions. One must pay attention to the mutual influence of technological fields, since it shapes the choice of development directions. In particular, the chapter shows how AI programmers can learn from cognitive science, and how designers are working toward a home Internet of robots (making it possible to connect robots to a home computer or an operations center). We trace the closest connection between MANBRIC directions in ensuring the functioning of institutions and industries. Thus, the military sphere will be an important driver for the development of these MANBRIC technologies, although of particular interest to the military are robotics (including drones), AI and cognitive technologies.

Fourth, we try to show how the different features of the Cybernetic Revolution and the complexity of the functions of self-regulating systems are manifested in each technology. In particular, the functionality of robots can be characterized as follows: active information processing that manages the whole system (the reception, analysis, distribution and transformation of information) through flexible interaction with the environment; feedback and feedforward loops that fulfill various functions; and the collection of information to increase efficiency.

We pay particular attention to the relationship between robotics, medicine and biotechnology. It is shown that robotics and ICT, medicine, bio, nano, additive and cognitive technologies are already quite obviously combined in medical institutions, particularly when working with disabled people. In addition to medicine, AI plays a special role in the integration of MANBRIC. One can speak of rapid progress in artificial intelligence. We agree that the future is closely associated with the development of artificial intelligence, which will be involved in a very large number of self-regulating/self-managing systems. It is important to define the similarities and differences between self-regulating systems and artificial intelligence.

Described in a single complex, the MANBRIC technologies have different levels of development and will mature at different times, but all are likely to flourish in the future. While AI has already reached a very high level of development, for nano and additive technologies, as well as robotics, an explosive period of development can be expected only in the 2050s and 2060s. One can expect a mass development of new means of transport in the middle or at the end of the final phase of the Cybernetic Revolution (between the late 2040s and 2060s).

However, self-driving electric vehicles can become a powerful source of technological development during the mature phase of the Scientific-Cybernetic Production Principle. Like self-driving cars, drones will sooner or later achieve true autonomy. Due to the great economic effect and convenience of such robots, they will have a very promising future.

A comprehensive analysis of these technologies within a single chapter allows us to show that the final phase of the Cybernetic Revolution will not be a wave of diverse innovations, but a complexly integrated and interconnected set of new generation technologies that will create an era of self-regulating/self-managing systems.

Thus, in this chapter, we solve a twofold problem—on the one hand, to show the logic of and development prospects for each of the technologies described. On the other hand, we try to show how the development of each of these technologies fits into the MANBRIC convergence as a whole.

We appraise how MANBRIC will develop and strengthen in the future, and how it will prepare medical technologies for the initial breakthrough that will launch the final phase of the Cybernetic Revolution and ensure the powerful development of self-regulating systems.

The chapter opens up to the reader many new perspectives, ideas, facts and problems that humanity will face in the process of the final phase of the Cybernetic Revolution.

2 Nanotechnologies as the Tool for Mastering the Microworld

2.1 On Nanotechnology: An Introduction

Humankind has been using nanomaterials for a long time. Current knowledge about nanoparticles can explain the peculiar properties of well-known materials created in ancient times, such as various enamels, painting materials, damask steel, etc.

Nanotechnology is a widely used concept that can be conventionally defined as an interdisciplinary field of applied science and technology that develops practical methods of research, analysis and synthesis, as well as methods of producing nanomaterials by the controlled manipulation of tiny objects down to individual molecules and even atoms. However, the idea of how to use nanotechnology most effectively for particular problems (primarily in the field of health) has appeared quite recently. The wide range of topics covered by nanotechnology makes it problematic not only to define it, but also to classify nanoproducts.

The main point of nanotechnology is the control of matter on a scale smaller than 1 µm, usually between 1 and 100 nm (up to 100 nm in at least one dimension). A nanometer is a billionth of a meter, or 10⁻⁹ m, roughly the length of ten hydrogen atoms placed in a row.

Why have nanoparticles become so popular? The reason is that this is a very broad new domain which opens up colossal opportunities for mastering the microworld. It is at this level that a fundamental property of matter is clearly revealed: the realization of antipodal (opposite) properties in its various systems. For example, gold is a conductor at the macro level, but an insulator at the nano level. The particles of some substances, between 1 and 100 nm in size, have very good catalytic and absorptive properties, while other materials have remarkable optical properties. At the nanoscale, the surface-to-volume ratio changes, and so do the properties of matter. In nature, there are nanosystems capable of organizing themselves into special structures and thus acquiring new properties, for example, biopolymers (proteins, nucleic acids).
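The scale dependence of the surface-to-volume ratio mentioned above can be illustrated with a short calculation. This is a simple geometric sketch for spherical particles; the particle sizes chosen are illustrative assumptions, not figures from the source:

```python
def surface_to_volume(radius_m: float) -> float:
    """Surface-to-volume ratio of a sphere: S/V = (4*pi*r^2) / (4/3*pi*r^3) = 3/r."""
    return 3.0 / radius_m

# Compare a hypothetical 5 nm nanoparticle with a 5 mm bead of the same material.
nano = surface_to_volume(5e-9)
bulk = surface_to_volume(5e-3)

# The nanoparticle exposes a million times more surface per unit volume,
# which is one reason catalytic and absorptive behavior changes so sharply.
print(f"S/V at 5 nm: {nano:.1e} 1/m")
print(f"S/V at 5 mm: {bulk:.1e} 1/m")
print(f"ratio: {nano / bulk:.0f}x")
```

Because S/V scales as 1/r, shrinking a particle by six orders of magnitude multiplies its relative surface area by the same factor, which is consistent with the catalytic and absorptive effects described above.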

The peculiarity of nanoscience is that it deals with atoms, the composite building blocks of matter (to repeat: ten hydrogen atoms in a row span about one nanometer). Scientists are now closer to understanding how it may become possible to force atoms to 'build' things. Such an approach would open up fantastic opportunities to create new materials with desirable properties. Scientists believe that it may become possible to mechanically move individual atoms using a manipulator (the atom-by-atom assembly of objects envisioned by the Nobel laureate Richard Feynman). Some of them think that this process will not contradict the known physical laws, although there is quite reasonable skepticism about this (in particular in connection with the known chemical laws, see below). At present, various ingenious techniques are used for such manipulations, but there is no solid solution yet.

2.2 Nanotechnology as a Result of the Cybernetic Revolution: Origin and Development

The first practical steps in nanotechnologies (as well as the ideological interpretation of the field) were made in the 1950s, whereas the term, according to some scholars, was introduced in 1974 by the Japanese physicist Norio Taniguchi. Nanotechnology emerged as a result of the Cybernetic Revolution. However, for quite a long time it remained in the background of other important branches of science. Practical interest in nanotechnology grew rapidly at the end of the initial phase of the Cybernetic Revolution, in the 1980s, coinciding with the publication of Eric Drexler's books Engines of Creation: The Coming Era of Nanotechnology (1987) and Nanosystems: Molecular Machinery, Manufacturing, and Computation (see also Drexler, 1992, 2013). However, the term became widespread when it was picked up by the media. With the start of 'the nanotechnology race', the word 'nano' frequently appeared on television and in print. This meant that nanotechnologies began to be considered a strategic branch for a future hegemon (along with others: biotechnologies, the green energy industry, etc.). The ultimate task of such branches is to win the market for the industrial production of new, important and in-demand technologies. The country that succeeds in this will be able to ensure its own economic growth and development for many years to come.

The nanotechnology race began with an initiative by President Bill Clinton in 2000, which became a landmark and launched the competition between countries. Then Japan and Western Europe joined the race, conducting research in nanotechnologies as part of their national programs (see Lane & Kalil, 2007). More and more attention was paid to nanotechnology in China, South Korea and many other countries, including Russia, which had a relatively good starting position in this field (Dementiev, 2008). Today nanotechnology is one of the most intensively developing sectors of the economy.

2.3 The Development of Nanotechnologies in the Course of the Cybernetic Revolution

The initial phase. The characteristics and opportunities of nanotechnologies correlate with the concept of the Cybernetic Revolution, which is not surprising since they originated within this revolution and, moreover, will play an increasingly important role in the course of its development.

The initial phase of the Cybernetic Revolution (from the 1950s to the early 1990s) was the formative period for nanotechnology. Roughly speaking, this period began in 1959 when Richard Feynman presented the idea of constructing new materials from nanoparticles, and ended with President Bill Clinton’s initiative in 2000 (Callaway, 2020). This period is characterized by a large number of scientific discoveries; however, many of them had no application at the time (e.g., Rybalkina, 2005: 21).

At this stage, the development of nanotechnology was in many ways defined by the creation of probe microscopy. These devices are the eyes and hands of nanotechnologists. In particular, in 1981, physicists at IBM Zurich created the scanning tunneling microscope (STM), which can measure interactions between atoms (Chen et al., 2021). Then, in 1985, American physicists developed the technology to precisely measure particles of one nanometer in diameter. Since the 2000s, inventors have been moving toward microscopes capable of imaging individual atoms. In 2010, researchers at UCLA used a cryo-electron microscope to see the atoms of a virus. Over the last decade, there have been breakthroughs in obtaining clear images down to individual atoms (e.g., Chen et al., 2021).

Nanotech euphoria. The modernization phase (the period of diffusion of innovations) of the Cybernetic Revolution is the period of the emergence of 'modern nanotechnology' (from the 1990s to the 2020s/2030s). Nanotechnology has become involved in industrial production, while nanomaterials are penetrating various fields: engineering, medicine, transport, aerospace, electronics, etc.

The first phase of the development of nanotechnologies (between 2000 and 2005) is associated with so-called 'passive nanostructures' (incremental nanotechnologies); generally, it involved the production and use of nanodisperse powders. These powders are added to modify the properties of basic construction materials (metals and alloys, polymers, ceramics) and are also used in cosmetics, pharmaceuticals, etc. Today this is considered a primitive generation of nanomaterials, which are already widely used in manufacturing and in many products. Only a few nanoprojects have been introduced in high-tech industries.

The broad prospects offered by nanotechnologies, fueled by various interests and the mass media, gave rise to a real euphoria. However, most of the predictions proved wrong or hardly possible to fulfill. Figure 10.1 clearly shows the explosive growth in the number of patent grants in nanotechnologies, which helps explain the euphoria of forecasts in this area; it also shows that growth in nanotechnology patent grants has stalled since 2016.

Fig. 10.1 Relative global dynamics of patent grants in micro-structural and nanotechnologies, 2007–2021 (1 = 2007 level). The curve rises from about 1 in the late 2000s to roughly 3.6 in 2016 and then declines to about 3.4 by 2021 (values approximated). Data source: WIPO (2023) (see also Fig. 10.1 in Chap. 4)

People always want innovations to go faster than they do. At the same time, they overlook obstacles and challenges and do not take economic problems into account. Moreover, such over-optimistic predictions are often driven by a simple desire to attract huge investments. Thus, the growth rates of nanoproduction volumes are far from being as fast as previously predicted.

Some analysts suppose that after 2020 we will enter the era of 'radical nanosystems' in the form of nanorobots and that nanobiotechnological and nanomedical systems will develop rapidly. However, we doubt such rapid development. The theory of production revolutions suggests that despite the numerous innovations appearing during the modernization phase, they will hardly constitute a breakthrough to a higher innovative level, and many will remain in low demand. The discoveries that will provide the basis for the breakthrough are being prepared, while the breakthrough itself will come later. In the case of nanotechnology, this will most likely happen between the 2030s and 2050s; that is, the real take-off of nanotechnology will actually take place a decade or two later than many researchers predict. Nevertheless, the coming decades will see the development of many earlier achievements in various fields. Outstanding innovations will certainly appear during these decades.

2.4 Forecasts for the Final Phase of the Cybernetic Revolution and Beyond

One can trace all the characteristics of the Cybernetic Revolution in the future development of nanotechnologies: (1) transition to the control of deeper and more fundamental processes and levels (down to the atomic level with the use of nano-particles as building blocks); (2) deep interconnection between directions of MANBRIC convergence (see below); (3) mass invention of technologies based on self-regulating systems (in which nanotechnology will play an essential role); (4) production of new materials with new properties (as we have seen above, there is a great opportunity to achieve this at the nano level); (5) resources and energy saving (e.g., nanomaterials in window glass can drastically reduce energy consumption in homes; nanotechnologies will help to deliver an optimum amount of medicine directly to the damaged area or even to separate cells); (6) nanotechnology will increase the trend toward super-miniaturization, etc.

2.4.1 Nanotechnology in Medicine: Great Opportunities

Despite the serious progress of nanotechnologies in electronics and other branches, we believe that the real nanotechnological breakthrough will most likely occur first in medicine, which will give an additional boost to its use in other fields. As a result, the breakthrough in the final phase of the Cybernetic Revolution will be provided by a deep integration of medicine with bio and nanotechnologies, resulting in the vigorous development of bionanomedicine. All this will create important preconditions for the future era of self-regulating systems in medicine.

One of the future directions in medicine is the development of diagnostic methods that reduce costs. We have already mentioned nanochips (in Chaps. 8 and 9), which will play an important role here. Nanorobots that can not only perform medical functions but also control the nutrition of individual cells and the excretion of waste products will be put into practice. Nanorobots can be used to solve a wide range of problems, including the diagnosis and treatment of diseases, the fight against aging, the reconstruction of some parts of the human body, the production of various heavy-duty constructions, etc. (Giri, Maddahi, & Zareinia, 2021; Mallouk & Sen, 2009; Nistor & Rusu, 2019).

We have already mentioned some integration trends in these branches in Chaps. 8 and 9. In general, the prospects for such integration are apparent within two to three decades. According to some forecasts, nanobiostructures (i.e., structures capable of carrying medical nanosensors and even reconfiguring the cells of an organism) will become commonplace. Their active use in diagnostics and as a means of boosting immunity will become an important direction in nanotechnology (Zhang et al., 2022). Already today, some advanced medical organizations apply nanotechnologies to create biochips that allow a quick diagnosis of a number of dangerous diseases, including tuberculosis (e.g., Gavas et al., 2021). The development of nanomaterials that mimic the properties of natural tissues, such as bone, is promising. The miniaturization of robotic platforms has led to many applications in precision medicine (Soto et al., 2020). Nanotechnologies are already implemented in surgery, for example in nano-neuroknitting to repair severed optic nerves, in the high-precision implantation of artificial limbs, in cardiac surgery, etc.

Currently, nanotechnologies are not yet used in medicine as mass technologies (see Chap. 8 for details). However, the research, experiments and tests carried out today can reveal the future role of nanotechnologies in medicine and give an idea of the main principles of their application, as well as some details of future successful and breakthrough nanotechnologies.

Here is an interesting example. At the Laboratory of Nanophotonics at Rice University in Houston, Professors Naomi Halas and Peter Nordlander invented a new class of nanoparticles with unique optical properties—nanoshells (Henderson et al., 2022). Twenty times smaller in diameter than red blood cells (erythrocytes), they can move freely in the bloodstream. Special proteins (antibodies that attack cancer cells) are attached to the surface of the nanoshells. A few hours after their injection, the organism is exposed to infrared light, which the nanoshells convert into thermal energy. This energy destroys cancer cells, while leaving the neighboring healthy cells virtually unharmed.

One of the areas where huge nanotechnology efforts are concentrated is the fight against cancer (National Institute of Health, 2016). Many new ways to fight cancer are based on nanotechnologies (Aghebati-Maleki et al., 2020; Gavas et al., 2021; National Institute of Health, 2016; Yao et al., 2020). Cancer nanomedicines approved to date include liposomal drug delivery and magnetic fluid hyperthermia based on magnetic iron oxide nanoparticles (Soetaert et al., 2020). The latter, used for carcinoma treatment, relies on iron oxide nanoparticles that are introduced into the affected tissue and exposed to a magnetic field, causing the particles to heat up and destroy malignant cells (Gavilán et al., 2023).

Researchers are working to develop nanomaterial-based delivery platforms that will reduce the toxicity of chemotherapy drugs and increase their overall efficacy (National Cancer Institute, 2017a).

Nanotechnologies are also being investigated for the delivery of immunotherapy. This includes the use of nanoparticles to deliver drugs, immunostimulatory or immunomodulatory molecules in combination with chemo- or radiotherapy, or as adjuvants to other immunotherapies (National Cancer Institute, 2017b), in particular for the delivery of Herceptin, an effective drug for the treatment of breast cancer.

The value of nanomaterial-based delivery has become apparent for new types of therapeutics, such as those using nucleic acids, which are highly unstable and susceptible to degradation. These include DNA- and RNA-based genetic therapeutics, which are used in many cases to target 'undruggable' cancer proteins. Additionally, the increased stability of gene therapies delivered by nanocarriers, often combined with controlled release, has been shown to prolong their effects (National Cancer Institute, 2017b).

It is extremely important that the development of nanotechnologies can radically reduce the side effects of medical interventions, as well as their often very serious and sometimes even fatal consequences for patients: the invasiveness, the damage caused by excessive doses of drugs (including chemotherapy), not to mention the discomfort and pain experienced by patients. The cost of treatment can also be drastically reduced. In Chaps. 8 and 9 we have already discussed the use of nanostructures in new types of vaccines. Despite serious shortcomings and problems, this direction will undoubtedly develop.

Of course, new materials and the acquisition of new properties by materials through nanotechnologies (e.g., nanomaterials with self-cleaning mechanisms to remove bacteria from blood vessels) will play an important role in the development of medicine. This can manifest itself in the creation of artificial biological tissues, artificial immunity, support for damaged tissues, and much more. Some examples have already been given in previous chapters.

2.4.2 The Connection with Biotechnologies and Agriculture

Other important directions of nanotechnology include research into nanobiotechnology beyond medicine, particularly in agriculture. Here, fields such as veterinary medicine and animal husbandry are akin to human medicine. Animals, like humans, need treatment, protection from infectious diseases, enhanced immunity, and the development of feeding technologies. Thus, in a number of cases, similar nanotechnologies can be applied to medicine, veterinary medicine and agriculture. One can mention here the development of controlled protein synthesis technologies to obtain peptides with desirable immunogenic properties. Vector systems for the cloning of immunologically important proteins from pathogens and new-generation vaccines with high efficacy and safety are being created. Researchers are working on nanoparticles for the production of genetically modified proteins, and on biochips and test systems for biological screening (Persidis, 1998), immune monitoring and the prediction of dangerous and contagious animal diseases. Biochip technology is constantly improving and its production has become cheaper. This has become even more evident with the wave of development of tests for COVID-19.

Nanotechnologies are already actively applied in various sectors of agriculture, in particular in the production of animal feed, which allows a considerable reduction in feed consumption and makes it more accessible. In crop production, the use of nanopowders with antibacterial components provides increased resistance to adverse weather conditions and increases the productivity of many food crops, such as potatoes, cereals, vegetables, fruit and berries. However, the future use of nanotechnologies in the agricultural sector holds much greater promise. In particular, nanotechnology can also help to solve the problems of wastewater treatment. Perhaps more important is the expectation that nanotechnologies, together with biotechnologies, will support significant progress toward the creation of self-regulating systems in agriculture, so that agricultural operations will largely be carried out in an autonomous mode. Many technologies will appear to promote this process. Thanks to their more targeted action and dosage of drugs and substances, these technologies are likely to significantly reduce resource and energy costs and environmental impact. Thus, the introduction of membrane purification systems, as well as special biocidal coatings and silver-based materials, will facilitate and raise the level of livestock management and the provision of high-quality water.

It is expected that the use of nanotechnologies will change land cultivation through the use of nanosensors, nanopesticides and decentralized water purification systems. Nanotechnologies will make it possible to treat plants at the genetic level and to create high-yielding plant varieties that are particularly resistant to unfavorable conditions (Balabanov, 2010). Nanopesticides may also prove less harmful to the environment (Jain, 2004; Kah & Hofmann, 2014). Today there are some innovative ideas that can be further developed in agriculture. In particular, microbial preparations based on associative, endophytic and symbiotic bacteria have emerged. These preparations are designed to produce and transport various enzymes and low-molecular biologically active agents (nano-objects) in plants. These can improve the adaptation of plants to unfavorable environmental conditions: pollution by toxic metals, salinization, superacidity, etc.

A fundamental approach has essentially been developed for obtaining high-quality seed material. The approach is as follows: biologically active and phytosanitary components which can increase the adaptation of seeds and plants to real negative environmental conditions are constructed in the form of multifunctional nanochips (Ruban et al., 2011; Voropaeva et al., 2012).

Nanotechnologies can be used in a number of other areas, including as one of the means to preserve and restore the ecological environment.

2.4.3 New Materials and Their Properties

Self-regulation and self-organization of nanoparticles. Synthesis of new materials with desired properties. The close connection between nanotechnology and self-regulation within systems is based on the ability to direct processes of self-organization of matter, forcing molecules and atoms to order themselves in a particular spatial and structural way. Thus, one of the major challenges for nanotechnology is to induce molecules to group and self-organize in the necessary pattern so as to acquire new properties. The supramolecular branch of chemistry is concerned with this problem and explores interactions that can organize molecules to create new substances and materials. Various processes of self-assembly have already been discovered, such as the electrochemical anodic oxidation (anodizing) of aluminum. However, nanoparticles require largely (or even fundamentally) different technologies for self-assembly. This is a very promising way to create a huge family of materials with desired properties. The creation of new materials with desirable properties is a direct way to make systems work according to predetermined scenarios.

We are still at the beginning of this path. Nevertheless, within the field of nanocomposite construction, different technologies have been developed that produce substances capable, for example, of protective coating, self-cleaning, antibacterial protection, etc. Such nanotechnologies produce striking examples of various self-regulating systems, for example, self-cleaning nanocoatings (i.e., self-cleaning mechanisms to remove bacteria from vessels, or self-cleaning nanopolish products for car glass). Nanopolishes modify the surface in such a way that a drop of water rolls across it, collecting all the dirt, whereas on a smooth surface a drop of water slides across and leaves the dirt behind. This is called the 'lotus effect'. The idea is borrowed from nature: the leaves of the lotus plant are covered with the tiniest bulges and cavities of wax, so water runs down and washes the dirt away. Fabrics may be developed that give clothing a variety of properties, for example, protection against mosquitoes (Ciera et al., 2019). Nanofibers have great potential in the medical field, for example, in a wireless garment with embedded textile sensors for the simultaneous detection and continuous monitoring of ECG, respiration and physical activity. In the military, wireless communication between a fabric and a central unit allows medics to conduct remote triage of casualties, helping them respond faster and more safely. Nanotechnology is especially used in sports, and following the current trend, sportswear of all types will experience the nanotechnology revolution in the near future (Ciera et al., 2019; Harifi & Montazer, 2017; Ahn et al., 2006).

But even more important prospects open up in the creation of materials with unique properties for industry. Thus, according to some (possibly exaggerated) reports, the addition of graphene nanotubes dramatically increases the useful properties of various materials. For example, the addition of nanotubes to tires reduces their weight by one-third and their braking distance by 45%. The addition of nanotubes to aluminium makes the resulting composite material four times stronger and 20 times harder than conventional aluminium (Ocsial, 2017).
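Taken at face value, these claimed improvements translate into a quick back-of-the-envelope calculation. The baseline figures below are illustrative assumptions, not values from the source or from Ocsial:

```python
# Hypothetical baseline figures, for illustration only.
tire_weight_kg = 9.0         # a typical passenger-car tire
braking_distance_m = 40.0    # braking from roughly 100 km/h on dry asphalt

# Claimed effects of adding nanotubes, as cited in the text (Ocsial, 2017).
new_weight = tire_weight_kg * (1 - 1 / 3)        # weight reduced by one-third
new_braking = braking_distance_m * (1 - 0.45)    # braking distance reduced by 45%

print(f"weight: {tire_weight_kg:.1f} kg -> {new_weight:.1f} kg")
print(f"braking distance: {braking_distance_m:.1f} m -> {new_braking:.1f} m")
```

Even if the reported percentages are optimistic, the exercise shows why such composites attract attention: the claimed gains are multiplicative across every unit produced.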

There is great potential for the use of new nanoproperties in electronics and in other industries as well. For example, an unusual property of crystals opens up opportunities to study new magnetic phenomena (see Melikhov, 2018; Smith, 2022). Thin films of oxygen, copper and titanium nitride have been obtained; their electrical resistance is a thousand times lower than that of conventional titanium nitride. This could be a technological breakthrough in the development of next-generation resistors and transistors (Baron et al., 2021).

Many nanotechnologies aim at energy saving and the invention of alternative energy sources. Thus, the trend toward miniaturization not only increases the operating speed of electronic devices and the packing density on the chip (see below), but also reduces their energy consumption. For example, ‘smart glass’ for rooms can respond to changing lighting and ambient temperature by changing its transparency and heat conduction. There are many different projects that lead to energy and cost savings. To name just one of the utmost importance: widespread use of electronic paper could prevent deforestation.

2.4.4 Nanomechanical Engineering

Nanotechnology has considerable prospects, which can be observed in the development of nanomechanics, nanomechanical engineering and nanorobotics. The components of nanoelectronics, photonics, neuroelectronic interfaces, nanoelectromechanical and other systems are also likely to be developed. There are ideas that, based on the results already achieved, we can expect advances in the development of nanosystems capable of self-assembly into three-dimensional networks.

It is also possible to talk about the use of nanotechnology in molecular devices, nuclear design, etc. As we will discuss in the AI section below, the idea of computers that will use nanotechnology to create and store information has been around for some time. There are ideas that such computers can be developed on the basis of nanoelectronics; however, according to Eric Drexler (1987), nanomechanics can also become a basis for storing information. He even proposed mechanical designs for the main components of a nanocomputer: memory cells and logic elements. From special structures, such as fullerenes, nanotubes and nanocones, molecules can be assembled into various nanoscale parts: gears, rods, bearings, rotors of molecular turbines, moving parts of manipulators, etc. The assembly of the parts into a mechanical design can be performed by assemblers (self-assemblers): biological macromolecules attached to the parts are capable of selectively bonding with each other.

Professor James Tour and his colleagues at Rice University in Texas have tried to realize some of Drexler’s ideas about nanomechanics. In 2005, they created a molecular-mechanical design: an all-molecular four-wheeled nanocar, about 2 nm wide, powered by light energy. It consisted of about 300 atoms and had a frame and axles. The development and construction of the nanocar took eight years. The scientists even plan to create nanotransport devices and nanotrucks to transport molecules to conveyors in nanofactories (Balabanov, 2010).

Certainly, these are more like toys than practical devices. They remind us of the steam mechanisms created by the Greek engineer Heron of Alexandria, who amazed the public in the first century CE. His devices barely resembled steam engines. But unlike Heron, who did not even think of a practical use for steam, today’s nanotechnologists are preoccupied with practical applications. Therefore, the creation of nanomechanical engineering seems quite realistic, albeit in the long term; it would most likely happen near the end of this century. The same applies to nanorobotics, although the widespread diffusion of the latter may occur much earlier (see below).

Many of today’s futurological predictions about the development of nanomechanics, nanoelectronics, nanoengineering, etc. seem too bold and in some respects even fantastic. However, nanotechnologies will move in this direction one way or another. In any case, it is obvious that both nanomechanical engineering and nanorobotics will take the development of self-managing systems to a new level, leading to the creation of an industry that designs such systems (similar to how the use of cars promoted their industrial manufacture, i.e., mechanical engineering).

2.4.5 Self-Assembly: Its Essence and Feasibility

Will self-managing systems be created, such as molecular assemblers, which are something like 3D printers at the molecular level? When we talk about molecular assemblers, we mean devices that can convert a given mass of matter in such a way that it can be reassembled into something else (into whatever we want), given a molecular model (assembly program) for it. This means that the assembler would work in much the same way as the cell’s machinery, which breaks proteins down into amino acids and, directed by genes and ribosomes, reassembles them. It was of molecular assemblers that Eric Drexler dreamed when he described the coming assembler revolution, which, he said, would be more important than medical technologies, space horizons, advanced computers and new social inventions, because it would affect each of these and many other areas.

Before moving on to a discussion of whether such assemblers are possible, it is worth noting that a number of the future technologies that Drexler tried to describe fit very well with our understanding of self-regulating/self-managing systems. For example: ‘Assembler: A molecular machine that can be programmed to build virtually any molecular structure or device from simpler chemical building blocks. Like a computer-controlled machine shop. Cell Repair Machine: A system that includes nanocomputers and molecular size sensors, as well as tools programmed to repair damage to cells and tissues. Automated engineering: the use of computers to carry out technical developments, in the extreme cases, detailed studies with little or no human assistance to a given general specification. Automated engineering is a specialized form of artificial intelligence’ (Drexler, 1987).

There are very strong arguments against such ideas. After all, atoms are not grains of sand: it is impossible to move and sort them in this way, because they are bound together not mechanically but by electromagnetic and other forces. In particular, Smalley concluded that, outside of biological systems, the idea of molecular assembly and molecular manufacturing is in fundamental conflict with the laws of chemistry. In a public debate with Drexler in the pages of scientific journals, he argued that atoms simply cannot be put into place by mechanical means: one has to force them to form the appropriate bonds, and creating molecular machines that can do this is simply impossible (see e.g., Smalley, 2001; Drexler, 2003; see also Ford, 2015).

Nevertheless, it is possible that some kind of assemblers will be created (though clearly not in the near future), and on this basis it will be possible to significantly expand our capabilities, as well as to open up a large niche for development and for solving a number of important problems. But humanity’s technological aspirations have repeatedly shown that ideas of limitless energy or raw materials (like perpetual motion projects) inevitably run into very serious obstacles. Therefore, the idea of finding something that leads to unlimited abundance is utopian. Such assemblers will require a lot of energy and resources, and it is obvious that they will not come for free. And still, this will be a very significant step on the endless path toward the philosopher’s stone that can turn everything into anything.

3 Robotics as a Direction in the Development of Self-regulating/Self-managing Systems

The concept of a robot is very ambiguous, since today computer programs, manipulators and other mechanisms, human-like autonomous devices, as well as some nano- and microdevices, can all be called robots. However, despite this diversity, a number of main characteristics of robots of all types can be defined. It is of particular importance to us that these characteristics coincide to a large extent with the characteristics of the Cybernetic Revolution and its technologies. That is why an ideal robot (which can move, work, solve problems and communicate in a meaningful way depending on the situation) is a good example of a self-regulating/self-managing system. Many definitions also emphasize the aspiration to treat the robot as a self-regulating system. For example, one definition states that a robot is a device capable of moving independently in space, coping with tasks of image recognition and analysis, possessing a high degree of mobility, able to analyze a situation through feedback and also to predict situations, relying on its own experience and available information (definition by Professor Shigeru Vataat, see Nakano, 1988: 26).

The opportunities for the use of robots are undoubtedly vast. In particular, they can help to solve the problem of caring for the growing number of elderly people and the related problem of labor shortages.

3.1 Robotics at the Modernization Phase of the Cybernetic Revolution

The initial phase of the Cybernetic Revolution convincingly showed a powerful rise of robotics as an entirely new branch of mechanical and electronic engineering; thus, hardly anyone doubted its promising prospects for development and widespread implementation. Futurologists expected the coming era of universal robots (e.g., Moravec, 1988) and suggested that robots, and robotics as an industry, had limitless potential (e.g., Kahn, 1982: 182). However, the development of robotics has been much more modest. The reason is that a large part of large-scale industry in the Western countries has moved to China and other developing countries since the 1990s. As a result, the share of industry (and the number of workers) in the Western countries has declined, the need to replace workers has significantly decreased and, accordingly, investments in relevant research have been reduced.

In the 1990s, robots continued to develop: their characteristics, software and interfaces were improved; their control became easier, etc. But the focus was on developing information robots rather than industrial robots (see below; we have already spoken about the use of such robotic programs at stock exchanges in Chap. 4). Nevertheless, from the 1990s to the 2010s there were (and still are) many optimistic and over-optimistic forecasts on the development of robotics, including for industrial purposes.Footnote 6 It was argued that the task of the next decade or two was to robotize everything (Proydakov, 2016). But the reality turned out to be much more prosaic. For example, Terry Gou (Bonev, 2013; World Robotics, 2013), the head of Taiwan’s Foxconn, the world’s largest microelectronics manufacturer, planned to increase the number of robots from 10,000 to 300,000 in 2012 and to 1 million by 2014 (Auslander, 2014). But he only managed to fulfill 5% of what he promised.

Such overestimation in forecasts is quite common. As we have pointed out above, it can also be observed in other innovative industries, including electric vehicles, unmanned vehicles, nanotechnology, 3D printers, etc. This is due to the natural difficulty of forecasting, and is often linked to the PR campaigns of manufacturers, who use such forecasts to attract investors to a new project.

There are currently about 3 million industrial robots in the world (IFR, 2020). We mean only the most advanced machines, with at least three axes of motion and the possibility of free programming. Robots are primarily used for work in sterile conditions (in the electronics and pharmaceutical industries), product assembly and packaging (Smirnova, 2011). Nevertheless, a significant number of robots are still used in car factories, and robots are actively employed in logistics centers and some other areas. Of course, robots have serious advantages over humans: they work much faster, their work is of higher quality, etc. However, we suppose that the industrial direction of robotics will hardly make a breakthrough during the initial period of the final phase of the Cybernetic Revolution in the 2030s and beyond. There will be other directions of robotics that could revolutionize this sector (see below).

In this regard, it is important to note that the number of robots in everyday life (domestic robots), medicine, logistics, services, entertainment and leisure is growing rapidly (IFR, 2020). The number of military robots is also growing significantly. Household robots, including domestic vacuum cleaners, are the largest group of consumer robots: nearly 18.5 million units (+6%), worth $4.3 billion, were sold in 2019. It is also clear that the majority of sales in most non-industrial robotics sectors are less complex designs, although the situation is somewhat different in the medical sector. The rapidly growing number of robots for rehabilitation and non-invasive therapy makes them the largest medical application in terms of units in the medical robotics sector (IFR, 2020, 2022).

The fastest-growing service robots are autonomous mobile robots (AMRs), delivery robots, cleaning robots, disinfection robots and medical robots. However, the use of robots in many other areas (promising in the long term, as discussed below) remains at the level of experimentation and enthusiasm. So the reality of robot penetration is far from what was predicted. So far, there is no evidence of the rapid ‘proliferation’ of robots in the workplace, which has been and remains one of the most common predictions since the 1980s (e.g., Ford, 2015, 2021; see more below).

At present, in addition to first-generation robots (the most numerous), various second-generation robots are being developed. Active work is also being carried out on the design of third-generation robots with high levels of intelligence, adaptability and orientation.

3.2 Robots: Classification and Some Forecasts

Currently, we can talk about a large number of different types of robots (we believe there are at least 20 types). We will divide them, somewhat conventionally, into four groups, without claiming completeness and bearing in mind that the boundaries between the groups are blurred: (1) robots already in wide use; (2) robots already used or under development that will become widespread in the future; (3) largely predicted robots; (4) specific robots.

First group: (1) industrial; (2) information and communication robots (voice assistants, chatbots, software, etc.); (3) military robots (spy robots, drones, mine-clearing robots, etc.) and rescue robots (the latter type is not yet widely used; there is a movement to ban military robots, but it is unlikely to stop the process); (4) medical (for details see below).

In our opinion, medical, military and rescue robots are particularly promising.

Second group: (5) personal/domestic service robots (including domestic helpers, such as robot cooks, security guards, companions, pet care, child education, etc.); (6) social robots (providing various services, including care for the infirm), the so-called assistive robots; (7) professional service and logistics robots used in business (cleaners, cooks, waiters, inspection and maintenance robots, hotel receptionists, couriers, gardeners, loaders, warehouse workers, etc.)Footnote 7; (8) civil servant robots (the name is, of course, conventional). In particular, there are developments (in Israel and South Korea) of robots that can replace border guards and customs officers; they have prospects, especially in tourist countries. There are already the first robot guards for prisons, whose task is to detect aggressive behavior in prisoners and to help the staff in difficult situations. The idea of Robocop has long been popularized; it is difficult to say how relevant such guards will be in the future, since recently the main trend has been to solve this problem with the help of AI programs working through video cameras; (9) agricultural robots (field robots), already used, in particular, for milking cows and picking apples; (10) sexual robotsFootnote 8; (11) high-tech toys (pet toys, useful in particular for patients, including those suffering from dementia, and for old, infirm people who live with such toys as with real pets); (12) drones and self-driving vehicles (see a separate paragraph on them).

All these types of robots will be more or less in demand in the future, but we believe that service and social robots have the greatest prospects.

Third group: (13) planetary robots (designed for mining on the Moon, asteroids, etc.); (14) underwater robots (for algae cultivation and deep-sea mining); (15) biorobots (there are already experimental and exclusive samples), i.e., robots that copy living beings, including humans, with similar appearance and sense organs; (16) robots that are copies of real people.

This is a rather exotic group, whose development will depend on many conditions.

The fourth group comprises specific robots, some of which are already in use or under developmentFootnote 9: (17) nanorobots (see above and below); (18) microrobots (see below); (19) neuro-piloted robots, linked to the development of biointerfaces, i.e., robots controlled by signals from the human brain or muscles; they can be used by the disabled and in telepresence work; (20) sports robots of various types, including neuro-piloted ones; (21) small-scale flying robots like robobees.Footnote 10 As we can see, most of this group consists of robots that are especially actively used in medicine or for the adaptation of disabled people, which promises them great prospects in the future.
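Purely as an illustration, the four-group classification above can be sketched as a small data structure; the group labels and shorthand type names below are our own condensation of the lists in the text, not an established taxonomy.

```python
# Illustrative summary of the four-group robot classification above;
# the labels are shorthand for the types enumerated in the text.
ROBOT_GROUPS: dict[str, list[str]] = {
    "already widely used": [
        "industrial", "information and communication",
        "military and rescue", "medical",
    ],
    "spreading in the future": [
        "personal/domestic service", "social (assistive)",
        "professional service and logistics", "civil servant",
        "agricultural", "sexual", "high-tech toys",
        "drones and self-driving vehicles",
    ],
    "largely predicted": [
        "planetary", "underwater", "biorobots", "copies of real people",
    ],
    "specific": [
        "nanorobots", "microrobots", "neuro-piloted",
        "sports", "small flying (robobees)",
    ],
}

total_types = sum(len(types) for types in ROBOT_GROUPS.values())
print(total_types)  # 21 types across the four groups
```

Counting the entries confirms the estimate made above: the four groups together cover 21 types, in line with the claim that there are at least 20.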

From the above, it can be seen that the potential for the development of robotics is enormous. Together with the growth of robots’ technical capabilities, this means that in the future more and more areas, directions and functions will be robotized in production, everyday life, medicine, private affairs, the public sphere and public services.

3.3 Problems that Robotics Should Solve in Order to Become a Locomotive of Innovative Development

These problems are different in nature, and in general they show the complexity of the tasks that robotics will have to face. Their solution will give robotics very significant capabilities, while the services provided by robots will become necessary and commonplace (e.g., Proydakov, 2016).

Firstly, in order to make robots multifunctional and able to process a much larger amount of information than at present, it is necessary to solve technical problems associated with increasing electrical efficiency and power supply capacity,Footnote 11 improving the performance of the robot’s computer systems, and also improving its electromechanics. Secondly, it is necessary to develop the so-called effectors, which serve as the robot’s organs: vision, touch, hearing, speech interface and even smell. In particular, the main problem for vision is the efficient processing of the incoming video stream; for hearing, it is the recognition of noisy continuous speech (these problems may be partially solved in the course of AI development).

Thirdly, there are the problems of the limbs, so to speak: mechanics, orientation and tactile perception. It is necessary to teach robots to move at different speeds, on different surfaces, and to step over obstacles or avoid them. For the hands, it is necessary (a) to solve problems of mechanics, in particular, the rapid construction of a trajectory of movements and control of the movement of the hands; and (b) to create an advanced system of tactile sensors so that the ‘fingers’ can feel the touch of objects, the force of compression, the temperature of the object being grasped, etc. (see more about this in relation to surgical robots).

There is also the problem of creating artificial muscles for robots, which is particularly important for anthropomorphic robots, so that they can control facial expressions, lip movements, etc. Artificial muscles could obviously help to solve these problems and could also be useful in medicine (for bioprostheses, artificial skeletons, etc.).

Solving each of these problems, not to mention solving them as a system, will help create robots that are well oriented, can move freely, can pick up objects of various shapes and weights and put them in the right place, can distinguish sounds and speech, can see, and can perceive and distinguish objects. In other words, there could appear very intelligent and effective helpers, which could be either universal or specialized with the help of programs. As a result, numerous areas of the service sector, the household, our livelihoods, our contacts, medicine, as well as many areas of production and trade, would change markedly and at some point radically. We believe that we will be able to observe these changes over the next three to four decades during the final phase of the Cybernetic Revolution, and they will be especially evident after the 2040s.

Robotics in medicine and nursing. We have already spoken (in Chaps. 4 and 8) about the successful use of robots in medicine and their role in the Cybernetic Revolution. The global medical robotics market is developing rapidly and is expected to grow at a CAGR of 16.67% from 2022 to 2029, reaching over $23.10 billion by 2029, up from $5.90 billion in 2020 (Globenewswire, 2019).
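These figures can be sanity-checked with the standard compound-annual-growth-rate formula, CAGR = (end/start)^(1/years) − 1. The sketch below is ours (the function name and the choice of window are not from the cited report); it computes the rate implied by growth from $5.90 billion in 2020 to $23.10 billion in 2029.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that
    turns start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Figures cited above: $5.90 bn in 2020 growing to $23.10 bn in 2029,
# i.e., a 9-year window.
implied = cagr(5.90, 23.10, 9)
print(f"{implied:.1%}")  # about 16.4%
```

The implied rate over the 2020–2029 window is about 16.4%, reasonably close to the quoted 16.67%, which refers to the slightly different 2022–2029 window.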

Surgical robots are the most widely used and represent a rapidly developing sector (see Chap. 8; Pinkerton, 2013; Beck, 2013); their market is expected to triple over the next few years (Globenewswire, 2019). Robot-assisted surgery has many advantages. Already today, robots help perform very delicate operations, especially in eye surgery, which requires the utmost accuracy (Smotrim.ru, 2016). Of course, both forecasts seem overly optimistic; however, the expectations show that these are potentially very large markets to profit from. There are certain opportunities for growth in the surgical robotics market in Brazil, India, China and other emerging economies (Globenewswire, 2019). Nevertheless, along with the benefits, surgical robots also bring new problems and fears, since surgeries involving these systems carry significant risks, including an increase in the number of injuries and deaths in robotic surgeries (Pinkerton, 2013). However, the situation is improving thanks to technological innovation. In particular, safer systems are being developed (such as the American HeroSurg or the Dexmo Glove, developed by a team of scientists from Europe and China [Media Release, 2016]).Footnote 12 These robotic surgery systems have given doctors a sense of touch, a feeling as if they were actually holding the surgical tools, making it much easier to control the surgical robot. Moreover, haptic technology can also improve the ability to distinguish cancerous tissue from normal tissue. So medicine and robotics are only at the beginning of the journey; however, this direction is very promising.

Not only mechanical robots but also micro- and nanorobots are being introduced into surgical practice. Advances in nanotechnology are particularly suited to the field of neurosurgery. Nanodevices provide the basis for greater control and accuracy, for example, when reconnecting nerves. Oncology is another area where surgical nanorobots could be useful, particularly for imaging tumor margins. The detection and mapping of tumor margins during surgery can be greatly improved by involving nanorobots in tumor excision procedures. The plan is to inject the patient with nanorobots that use chemical sensors trained to detect different concentrations of E-cadherin and beta-catenin in order to find tumor tissue margins and metastatic sites. A cluster of nanorobots will assemble on the tumor and transmit an electromagnetic localization signal to the surgeon for further investigation.

However, the development of robotics, including microrobots, in medicine is by no means limited to surgery. There are forecasts about the creation of similar micro-nano robots made of flexible materials that can change their shape for specific tasks. This, in turn, could have a dramatic effect on how a drug interacts with cells in the body (Binysh et al., 2022).

Currently, robots are actively involved in nursing and medical care, including robots for activities that benefit people with disabilities and older adults in their daily lives, with a potentially wide range of functions from dressing to taking medications and playing. In Japan, nurse robots are being actively tested. They help patients get out of bed, help stroke victims regain control over their limbs (Khel, 2015) and improve walking capacity (Moucheboeuf et al., 2020). GeckoSystems makes robotic nurses and robotic attendants, which, with telepresence capabilities, will allow doctors and nurses to remotely monitor and examine patients as well as change bed linen and give medicine to patients. In our opinion, this is one of the most challenging directions for the use of robots in the future. Many nurse robots are humanoids, which makes them more comfortable for patients (Cairns, 2021). Hospital robots became very useful during the COVID-19 hospital staff shortage.Footnote 13 This market is estimated to be worth hundreds of billions of dollars. There are different ideas about the use of robots in medicine as medical transport, sanitation, disinfection and rehabilitation robots (Stanger & Cawley, 1996; Crawford, 2016; see also robots that take a patient’s blood pressure using only a simple touch [Kim et al., 2022]).

3.4 Forecasts for the Development of Robotics in the Final Phase of the Cybernetic Revolution

We believe that in the next few decades robotics will develop more rapidly not in industry but in other areas. In which areas, then, will robots be most widely used?

The first is medicine and care for the sick, elderly and infirm, where robots of all kinds, from nanorobots to anthropomorphic robots, will be used. Self-regulating/self-managing systems that ensure the functioning of physiological processes will also be created, as well as robots with different ‘skills’: assistant robots, doctors, nurses and security guards. We believe that this will be one of the main directions, allowing at least a partial solution to the problems of care for the elderly.

Secondly, robots will be widely introduced in the service industry. Why? First of all, because this sector currently employs the major part of the working population in developed countries, up to 80 percent (Acemoglu & Restrepo, 2017; Turja & Oksanen, 2019). It is rapidly developing in all countries, including China, India and Third World countries in general. Thus, in economic terms, it is the most promising sector for labor replacement.

Thirdly, the use of robots is linked to the problem of reduced labor supply due to global aging and lower birth rates around the world. Robotics can significantly alleviate labor shortages, or even largely solve the problem. This will be especially important in low-paid or labor-intensive sectors, from childcare and health care to waiters and cleaners; from security guards to loaders; from secretaries to truckers.Footnote 14 We have been observing this process for many decades (as in the case of computer-aided automation), with many occupations becoming extinct or reduced in number. But the problems of labor shortage are only getting worse, even in highly paid jobs such as trucking.

We do not consider the threat of structural unemployment in detail here; it exists, and we must be ready for it. Although it is quite serious, in our opinion it is not an existential threat. One should understand that the process of replacing human labor does not proceed immediately, even if society is technologically ready for it. Any practical (applied) innovation requires a number of conditions, from legal to economic, as well as psychological and others. Therefore, the replacement of human labor will not be rapid, even in particular spheres (let alone in the economy as a whole), and it will take decades, not years. In addition, this can significantly change the problem of migration: robots can largely replace migrants.

Fourthly, robots may replace humans in dangerous jobs and activities and, where possible, in high-paying professions (such as lawyers and doctors). Here, technological unemployment will be all the more painful, because people will lose highly paid jobs.

We believe that these four areas of the introduction of robots into the economy and life will develop most rapidly. Building on these successes, home robots will then develop very actively (for cleaning, cooking, other domestic tasks, managing a household, babysitting, etc.). This will obviously provide a huge market.Footnote 15

As for universal and anthropomorphic ‘smart’ robots, it must be recognized that forecasts about the emergence of such ‘smart’ technologies in the near future do not yet correspond to real opportunities. Their time will come much later, at the end of the 2040s or in the 2050s, or even later. Moreover, we should not worry that the twenty-first century will be the century of the post-biological world, in which, as a result of natural selection, robots will displace humans from the pedestal of evolution and develop under the influence of a new post-biological evolution that can exceed the rate of biological evolution millions of times (Wadhawan, 2007). We believe such a point of view looks more like science fiction than a scientific forecast.

There have been, and still are, different assumptions about the role of robotics in the near future. In 2007, Bill Gates (2007) considered that robotics was approximately in the same position as computers were in the 1970s, when he founded Microsoft together with Paul Allen, and he apparently anticipated that in the 2030s robotics would become as important as ICT is today. However, we doubt that this prediction will come true by the appointed time. Many companies are working on such developments, but in general, unfortunately, there is not yet as much business interest in this direction as there is, for example, in biotechnologies, even though robotics already has a rather long history. Today the total volume of robot production worldwide is rather small. In 2020, 384,000 industrial robots were installed (IFR, 2021); for comparison, in the same year China alone produced 27,020,000 cars, so the number of industrial robots is clearly insufficient for robots to take over the economy.
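To put the installation figure in perspective, a one-line calculation (using only the two numbers cited above) shows how small the world robot fleet addition is relative to a single country’s car output:

```python
# Scale comparison from the figures cited above: industrial robots
# installed worldwide in 2020 versus cars produced in China that year.
robots_installed = 384_000        # IFR, 2021
cars_produced_china = 27_020_000  # figure cited in the text

ratio = robots_installed / cars_produced_china
print(f"{ratio:.1%}")  # about 1.4%
```

In other words, for every hundred cars produced in China alone in 2020, fewer than one and a half industrial robots were installed worldwide.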

However, there is no doubt that a bright future awaits robotics. But most likely, its rise will happen during the apogee of the Cybernetic Revolution, on the basis of the development of future technologies. So, we assume that certain, though not revolutionary, achievements in this area will occur in the 2020s. In the 2030s–2040s, we will see a much more significant rise in robotics, whereas an explosive development of robotics will happen a little later, in the 2050s and 2060s. By this time, as we have said, we can also expect the creation of truly ‘intelligent’ robots. Hardly all of them will be anthropomorphic; most likely their design will be defined by their functions. However, universal robots are also likely to emerge. Perhaps evolutionary robotics, in which robots are able to improve themselves and adapt to new conditions, may also develop by this time. Such robots will not only be fully self-managing systems, but also self-evolving systems (which are already being developed in the field of AI, see below).

Today, however, many robots are remotely controlled by human operators, and only very few models can perform some tasks autonomously. These (and other) systems are, after all, still closer to operated machines, but they are advancing toward self-regulation and self-management. People need robots that are not directly operated by a person but can work with a high degree of autonomy (see Yuschenko, 2015).

In addition, robots (excluding automated production) work almost exclusively individually. In order for robotization to be profound and for robots to become ubiquitous, significant progress should be made in creating working and communicating teams of robots of different types. The problem of robot group interaction is already being actively addressed today. The development of effective methods of interaction and of combining the efforts of many robots is important in military matters, as well as for robots in emergency situations, complex production tasks, etc.Footnote 16 The task of providing control over multi-robot systems in the future has already been set for robotics. A group of machines is better capable of performing qualitatively different tasks than a single robot (Yuschenko, 2015). So far, there are no breakthrough solutions. But looking further ahead, robots should be integrated into various self-regulating and self-managing systems, and this will require interaction between system elements and subsystems. If we talk about socio-technical self-regulating systems (SSS; see Chap. 14 about them), where robots can partially or completely replace certain groups of public servants (policemen, firemen, etc.), then interaction not only with other robots but also with humans and other SSS is crucial. And this requires new forms of information exchange. Today there is talk of the future creation of a special Internet, information services or even social networks for robots to exchange information with each other, which would form an inchoate collective intelligence of robots (Waibel et al., 2011). But we think that in the future such an exchange will take a different form.

At the end of this paragraph, we can formulate some interesting questions for research: a) to what extent will the robotization of society help people choose careers that are creative and conducive to self-development rather than boring and routine? b) If robot nannies and assistive robots for people with disabilities can be created, and if robots sufficiently reduce the need for labor and for women’s engagement in the economy, will this be an impetus for women to have more children? (Reproductive technologies can, of course, play an important role here.) c) What role can robotization play in the development of e-government and the e-state? (As mentioned above, robots can be integrated into socio-technical self-regulating systems, which helps lead toward the electronic state; see Chap. 14 on the electronic state.) d) How can the numerous moral problems be solved, including the problem of people’s emotional attachment to robots?

4 Transport and the Cybernetic Revolution

Self-driving cars. In many respects, autonomous cars are similar to robots. Moreover, unmanned vehicles, like robots, clearly demonstrate the very ‘idea’ of the Cybernetic Revolution as a revolution of self-regulating/managing systems. So we can suggest that in the forthcoming decades a breakthrough is most likely to occur in the direction of autonomous traffic and its management. This means that transport in general, i.e., vehicles and other transport systems, will become driverless and thus self-managing. Most likely these will be electric vehicles, although it is possible that driverless cars running on hydrogen or other fuels will become widespread.

At the same time, since there are many technical limitations and problems in the development of electric cars, these problems will also slow down driverless electric cars. In addition, there are many specific obstacles, including legal and organizational ones (safety concerns, various fears and conservatism), on the way to the mass implementation of self-driving cars. It is impossible to overcome them quickly. Moreover, since people can drive cars themselves and like to do it, few will be willing to pay much more for a robot than for an ordinary car. This, combined with the fear that unmanned vehicles could put millions of people out of work, may hinder their mass adoption. However, from an economic perspective, self-driving vehicles could be more profitable and could dramatically change freight transport as well as taxi services. Such unmanned vehicles can largely substitute for taxi (Khel, 2015) and truck drivers. But certain legal and social difficulties may emerge here.

At present, many companies announce releases of their own models of self-driving cars. However, it is difficult to determine the actual number of self-driving cars in use, since for promotional purposes manufacturers give highly inflated figures of hundreds of thousands, millions and even tens of millions of units, which seems incredible. In addition, there are big differences in what vehicles can be considered unmanned. Most of the so-called self-driving cars are still equipped with autopilots classified as Level 1 and Level 2. Using information about the surrounding situation, a car equipped with a Level 2 autopilot can maintain its course in the lane and accelerate and decelerate independently. This is done using sensors, detectors and, in some cases, cameras. The driver must still keep his hands on the steering wheel. There are hundreds of advanced unmanned vehicles with Level 3 autopilot (when the driver does not need to keep his hands on the wheel but simply remains in the car),Footnote 17 and, given the high cost of maintenance, there are very few cars that can drive without human presence.
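The autopilot levels mentioned above come from the SAE J3016 classification, which defines six levels of driving automation (0–5). As a minimal sketch, the ladder can be written down as a lookup table; the one-line descriptions are our paraphrase, not the standard’s exact wording:

```python
# SAE J3016 driving-automation levels (descriptions paraphrased).
SAE_LEVELS = {
    0: "No automation: the human driver performs all driving tasks.",
    1: "Driver assistance: steering OR speed control is automated.",
    2: "Partial automation: steering AND speed are automated; hands on the wheel.",
    3: "Conditional automation: the car drives, but the driver must be ready to take over.",
    4: "High automation: no driver needed within a limited operational domain.",
    5: "Full automation: no driver needed anywhere.",
}

def requires_human_driver(level: int) -> bool:
    """A human must supervise or stand ready to intervene at Levels 0-3."""
    return level <= 3

for level, description in SAE_LEVELS.items():
    print(level, requires_human_driver(level), description)
```

The Level 2 cars described in the text keep a human in the loop at all times; only at Levels 4 and 5 does the vehicle become a genuinely self-managing system in the sense used in this chapter.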

It seems that the widespread use of self-driving cars will not happen in the next 10–15 years, but they can be introduced, work and make a profit in areas where cars move along a given route, that is, in cargo transport and perhaps in some sectors of public transport, as well as in closed areas (at airports or in industrial zones).Footnote 18 In other cases, we expect a gradual development of autopilots, so cars will move toward self-driving but will not be able to achieve it for a long time.Footnote 19 However, sooner or later such a breakthrough will happen. But there are many technical, organizational and legal problems to be solved,Footnote 20 as well as ethical ones, including problems in the vein of Isaac Asimov.

For example, Peter Dizikes (2016) describes research showing inconsistent public opinion on the safety of driverless cars. He reports that in a series of surveys conducted in 2015, the researchers found that people generally take a utilitarian approach to safety ethics: they would prefer autonomous vehicles to minimize casualties in situations of extreme danger. This would mean, for example, that a car with one passenger would veer off the road and crash to avoid a crowd of ten pedestrians. At the same time, the survey’s respondents said they would be much less likely to use a vehicle programmed in this way. In essence, people want driverless cars that are as pedestrian-friendly as possible, except for the vehicles they would be riding in. “Most people want to live in a world where cars will minimize casualties”, says Iyad Rahwan, an Associate Professor at the MIT Media Lab and co-author of a paper outlining the study. “But everyone wants their own car to protect them at all costs”.

So today the development of various models of such self-managing systems (i.e., driverless cars) should not be considered the beginning of the final phase of the Cybernetic Revolution, but only an important precursor to its inevitable start in the 2030s–2040s. However, later, in the middle or at the end of the final phase of the Cybernetic Revolution (between the late 2040s and the 2060s), one can expect the mass development of this new means of transport. Moreover, self-driving electric vehicles can become a powerful source of technological development during the mature phase of the Scientific-Cybernetic Production Principle.

Drones. Pilotless aircraft, or drones, like robots in general, vary greatly in size, complexity and capabilities (from a small toy to almost a full aircraft). As already mentioned, they are particularly actively used in the military sphere, where ever new modifications are constantly appearing. But today drones are starting to be actively used for peaceful purposes (a fairly common trend in technological development): for checking power lines or delivering first-aid equipment (they can also be used as messengers), in the agricultural sector and for many other purposes where inexpensive and continuous aerial support or monitoring is required.

Like driverless cars, drones are still only semi-self-managing: they are controlled by a person on the ground. But their capabilities are growing, and so are their applications, to the point where they can repair the roof of one’s house. In particular, “agro-drones” are expected to be used in agriculture, especially for transporting fertilizers, seeds, etc. This is important given the lack of roads and the complexity of delivery to certain areas. There are also plans to transport people, including the development of taxi drones, but so far these cannot be put into practice due to technical and legal problems. A big problem is finding a simple and environmentally friendly power source for drones. Some experts believe that the future belongs to powerful drones powered by lithium-ion batteries. However, there is also experience with developing hydrogen-powered drones.

Like self-driving cars, drones will sooner or later achieve true autonomy. Due to the great economic effect and convenience of such robots, they have a very promising future.

5 3D Printing

Universalization and 3D printing. Universalization, as we discussed in Chap. 3, is one of the most important and even surprising characteristics and trends of the Cybernetic Revolution. Universalization is a trend that combines in one direction a whole range of skills and capabilities from other fields and technologies; in other words, it is the emergence of more and more multifunctional devices (today these are the computer, the mobile phone, the help-desk robot and general AI [see below]; in the future, the universal robot, a single medical-biotechnological environment [see Chaps. 6, 8, 11, and 15], etc.; in the future, it is also possible that many things can be done with the help of nanoassemblies creating various nanostructures and substances). The vector of technological change moves in such a way as to gather together the maximum number of possible operations in specific technologies, to make them as broad as possible, constantly combining various existing autonomous machines and mechanisms into a single complex. This also applies to the universalization of competencies with the help of self-regulating systems, when, with the help of devices and software, an individual can combine dozens of competencies, each of which was previously feasible only for a professional or an advanced amateur (we already talked about this in Chap. 4).

One of the most recent trends in universalization is 3D printing. As we will see, the opportunities provided by such printers are enormous: from construction to cooking, from the home workshop to museums, from motor-car engineering to robotics, from medicine to children’s toys, from training models to design, from toys to weapons. These machines are actively used in industries such as aircraft construction and rocket engineering to produce individual parts, for example, support stands for an aircraft engine (see, e.g., Turichin, 2015). And precisely because they are used in such fields, their development requires considerable investment.

In fact, these printers are a universal home workshop or a universal production facility, construction site or factory. And they will acquire new functions and integrate new subsystems in the future. The widespread use of 3D printers can eliminate long technological chains in some production sectors. It will be enough to have a sketch in order to make (to ‘print’, ‘fuse’) a part at home or in a 3D printing center. It will also be possible to organize small-scale production of customized goods. Engineers could also develop simple food 3D printers that can print, for example, candies, pizza or designer cookies. Futurologists expect a great future for this direction of 3D printing (Lipson & Kurman, 2013; Ford, 2015).

And we believe that the universalization trend will continue to expand the capabilities of self-regulating systems in the future, particularly for robots, which will be able to incorporate the functionality of 3D printers and, possibly, of future self-driving transport and other functions.

3D printing: general characteristics. 3D printers are machines that can create three-dimensional objects. The history of 3D printers began in the 1980s; the first 3D printers similar to modern ones appeared in the 1990s, and since then the technology has come a long way, both in expanding its capabilities and in greatly reducing its size and price. Some observers believe that the impetus for the emergence of 3D printing came from the trend toward the reduction of large-scale production and the expansion of small-scale production that emerged at the end of the last century, as well as from the general trend toward the individualization of production.

These devices may be of different sizes and can produce a wide range of objects. The most common material is plastic, but some printers can work with metal (with the exception of aluminum), as well as with hundreds of other materials, including high-strength composites, elastic materials and even wood (Ford, 2015). With regard to 3D printing, we can also mention the melting of powder agglomerations of both polymeric elements and metal powders (here we can observe a link between printers and nanotechnologies).

Although the process is commonly referred to as 3D printing (by analogy with paper printers), the term used by experts is ‘growing’ or ‘building up’ (see, e.g., Turichin, 2015). This term seems to us more apt, as it more accurately reflects the actual process of creating products. Layer by layer, material is built up until a part or other object is ‘grown’. A distant analogy can be found in the work of the potter’s wheel.

3D printing is the process of creating a real three-dimensional solid physical object from a template designed using 3D computer-aided design (CAD) software; the 3D printer then produces the real product. The basis of 3D printing is additivity, that is, the merging (fusing) of materials and the creation of a specific design and construction (hence such technologies are called additive). Fusing layers consists of a series of iterative cycles: creating a three-dimensional model of the layer, applying a layer of material to the working surface (lift) of the printer, lowering the lift platform by a distance equal to the thickness of one layer, and removing waste from the working-area surface. The cycles follow each other continuously.
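The iterative cycle just described (apply a layer, lower the platform by one layer thickness, remove waste) can be sketched as a simple loop. This is an illustrative sketch only; the function names and the `print_object` helper are our invention, not taken from any real printer firmware:

```python
import math

def deposit_layer(z_mm: float) -> None:
    """Stand-in for fusing material for one cross-section (illustrative)."""
    pass

def lower_platform(step_mm: float) -> None:
    """Stand-in for lowering the lift platform by one layer thickness."""
    pass

def remove_waste() -> None:
    """Stand-in for cleaning the working-area surface after a cycle."""
    pass

def print_object(height_mm: float, layer_mm: float = 0.2) -> int:
    """Run the additive cycle until the object's full height is built up."""
    n_layers = math.ceil(height_mm / layer_mm)
    for layer in range(n_layers):
        deposit_layer(layer * layer_mm)   # apply material at the current height
        lower_platform(layer_mm)          # drop the lift by one layer thickness
        remove_waste()                    # clear the working area
    return n_layers

print(print_object(8.0, 0.5))  # an 8 mm part at 0.5 mm layers -> 16 cycles
```

The point of the sketch is simply that the object emerges from many identical cycles, which is why the process is aptly called ‘growing’.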

In industrial production (metallurgy and mechanical engineering), a part is most often produced by subtraction, that is, by removing material through turning and drilling, thus eliminating superfluous material (although casting and other methods are also known). Additive production avoids such waste; for example, General Electric intended to use additive technologies to reduce the weight of an aircraft engine by about 500 kg (Ford, 2015).

The potential of 3D printing in medicine. As already mentioned, the potential applications of additive technologies are very broad. But it is especially important for us to emphasize that these possibilities are vitally important for medicine: in the creation of artificial tissues, parts of organs and whole organs, as well as more durable parts of the body, such as bones, including those made from high-strength metals. We have already given a number of examples of this kind of use of bioprinters in Chaps. 8 and 9. It is of fundamental importance that this can be done in hospitals, laboratories, medical centers, etc., that is, applied to specific cases.

The principle of self-reproduction of the like. It is very curious that this technology (as well as robotics) manifests another principle of the Cybernetic Revolution, which may become very important in the future but which so far is only weakly manifested, except in some programming developments and, perhaps, in some biotechnologies. We mean the principle of self-reproduction of the like.

In 2005, the RepRap community of enthusiasts was founded. The project is based on two ideas: any RepRap printer can print another RepRap printer, and all designs of the 3D printing device are in the public domain.

In eight years, four generations of RepRap 3D printers were developed. However, even now, the task of replicating one RepRap device with another has not yet been completed. It is one thing to print plastic parts; it is quite another to create the microelectronics and metal elements of the extruder design (Plotnikov, 2014).

6 Cognitive Science and Cognitive Technologies

Cognitive sciences and government interest. Cognitive science studies the nature of the mental and nervous processes that control movement and many other bodily processes. It is actually a large complex of diverse fields related to intellectual processes, consciousness, knowledge, memory, etc. We refer here to fields such as cognitive neurophysiology, cognitive neuroscience, etc. In recent decades, many discoveries have been made that explain some of the mechanisms and reactions of our brain and mind, including the work of so-called neurotransmitters. A considerable number of new-generation neurostimulating drugs have been created and are actively used in medicine. In general, a new branch of pharmacology has emerged: neuropharmacology.

In 1924, the German scientist Hans Berger made the first recordings of human brain activity by attaching electrodes to the head (Wolpaw & Wolpaw, 2012). Later, electrodes were implanted directly into the human brain.Footnote 21

The key technological achievement has been the development of new brain-scanning technologies (including computed tomography, etc.), which for the first time made it possible to obtain multiple images of the inside of the brain and direct and indirect data on its functioning. Nowadays, many research organizations are trying to create a database of neuronal cells and their types (according to the latest data, there are about 90 billion neurons in the human brain). This will advance the interpretation of the mechanisms of the visual system by developing a functional classification of the different types of neurons in the brain.

In recent decades, we have observed very rapid growth in research in this field. This is reflected in the rapidly growing global neurotechnology market, which was expected to grow from $10.7 billion in 2020 to $12.82 billion in 2022 (EMR, 2020; PR, 2022). This is due to a number of reasons, including increased interest from the military, intelligence agencies and companies developing artificial intelligence technologies. It is clear that all of them have great opportunities for research funding. As to the military and intelligence services, their interest is quite obvious, as is that of large media and political centers seeking new ways of influencing people. As far as IT companies are concerned, they have set a course toward studying how the brain works in order to try to build new information technologies by analogy with the principles of the brain. So-called neural networks and neural learning are the result of this course. It is significant that these forces work in tandem and have at their disposal hundreds of millions of dollars from public funds (e.g., Cepelewicz, 2016). We will return to this issue in the paragraph on AI. Government-sponsored research is very active in the USA, as well as in China, Europe, Korea, Japan, India, Russia and other countries.

Cognitive technologies and medicine. Although this research is driven intensively by government interests, we nevertheless believe that the resulting achievements will manifest themselves in the development of key areas of medicine.Footnote 22 Naturally, cognitive technologies are closely related to bio-, nano- and information technologies. First of all, cognitive science and technology can offer breakthrough opportunities for people with disabilities by minimizing their problems.Footnote 23 Since aging and disability are unfortunately closely linked, the problem of aging and dementia in developed countries also gives a powerful impetus to the development of neuroscience. One promising development, from the University Hospital of Lausanne, is an artificial spinal cord (Rowald et al., 2022). The implant sends electrical impulses to the muscles, mimicking the brain, and could one day help people with severe spinal injuries to stand, walk and even play sports. The breakthrough is that the implanted leads are longer and wider, with the electrodes positioned to match the spinal nerve roots, so that more muscles can be accessed with this new technology. At the same time, Israeli scientists have presented the results of their work on restoring spinal cord function, giving thousands of paralyzed people hope of getting back on their feet.

The so-called neuroimplants, which help to restore or improve certain functions, have great prospects. Various bionic implants have already been developed and tested (e.g., the bionic eye,Footnote 24 the artificial (electronic) nose,Footnote 25 the bionic prosthetic foot (Herr & Grabowski, 2011) and knee (Dawley et al., 2013), and other bionic or electronic organs). Since the control of these organs requires communication with the brain or nervous system, this both complicates the task and expands the possibilities of cognitive technologies. In particular, scientists are already adjusting the functioning of an artificial eye, ear and even heart by means of neural interfaces.

One can also observe the beginning of the development of many practical applications based on the results of brain research: preventive measures, treatment, rehabilitation, the development of the capabilities of operators of complex systems, education and others. Thus, several years ago, the Journal of Neural Engineering published an article about a study by American engineers and neuroscientists who had succeeded in artificially enhancing the capabilities of the human brain and memory by installing neuroimplants (Hampson et al., 2018).

However, the opinion that in the 2030s nanodevices will be implanted in the human brain, will be able to input and output the necessary signals to and from brain cells, and may even make learning and education much easier is very doubtful. Even if such cyborgization could be realized in principle, it will happen much later.

Neural interfaces, or brain-computer interfaces (which we discussed in Chaps. 3 and 8), may become one of the breakthrough directions of cognitive science and of the Cybernetic Revolution in general. Let us recall that neural interfaces are technologies that connect the human brain and/or nervous system to external devices. They usually implement interaction between the brain and a computer system. The fundamental achievement of the cognitive sciences is the opportunity to control artificial organs via brain signals, just as healthy people control natural ones.

After it was established that the electrical activity of neurons could be used to control robotic manipulators, the study of neural interfaces became even more active (Lebedev & Nicolelis, 2006). It has now become possible to transmit neuronal signals to devices and thus to operate artificial limbs with natural accuracy (see above).
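The basic mechanism here, translating recorded neuronal firing rates into a continuous control signal for a device, is often realized as a linear decoder. Below is a deliberately simplified sketch; the firing rates and weights are made-up numbers standing in for a model that, in a real brain-computer interface, would be fitted to recorded data:

```python
# Hypothetical firing rates (spikes per second) from four recorded neurons.
firing_rates = [12.0, 45.0, 8.0, 30.0]

# Decoder weights mapping each neuron's rate to a 2D (x, y) velocity.
# In a real interface these would be fitted to data (e.g., by linear
# regression against observed movements); here they are arbitrary.
weights = [(0.02, -0.01), (0.01, 0.03), (-0.02, 0.01), (0.00, -0.02)]

def decode_velocity(rates, w):
    """Linear readout: each velocity component is a weighted sum of rates."""
    vx = sum(r * wx for r, (wx, _) in zip(rates, w))
    vy = sum(r * wy for r, (_, wy) in zip(rates, w))
    return vx, vy

vx, vy = decode_velocity(firing_rates, weights)
print(f"velocity command: ({vx:.2f}, {vy:.2f})")  # sent to the prosthetic limb
```

The appeal of such linear readouts is that they are fast enough to run in real time, which is what makes natural-feeling control of an artificial limb possible.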

All kinds of experiments in this direction are conducted in different countries and companies. With the rise of medicalization and the power of digital and pharmaceutical corporations, this kind of development frightens many, and with good reason. Therefore, it would be wise to bring such experiments under the control of society and channel them in a safe direction.

In the future, neural interfaces could be used not only in medicine but also in everyday life, for example to monitor the state of a driver’s or an operator’s brain and automatically wake them up if they fall asleep. However, there are great risks associated with their wide introduction, because of the strong limitation on human privacy and the great temptation to use such private information for military, commercial or political purposes (e.g., Chowdhury, 2020). This danger may increase if future neural interfaces become wireless and can be controlled remotely; today such chips still rely on tiny implanted wires.Footnote 26 Musk’s company continues to experiment actively, and although one can hardly believe its predictions of early breakthroughs, the process continues. It needs regulation, especially since Musk’s company has already moved on to human experimentation.

In general, the achievements of cognitive science are already in use, and their application will increase even further in the fields that are moving toward self-regulating systems: from medicine to robotics, from cybernetics to the problems of artificial intelligence, and, of course, for military purposes.

However, some serious technical and social difficulties can hamper the development of this direction. Among the obstacles one can mention, firstly, immune rejection. Secondly, many nanostructures, for example nanotubes, which were predicted to have a bright future, turned out to be very toxic to the human body (Kotov et al., 2009). Thirdly, the implantation of external devices traumatizes the whole organism, despite all the serious efforts to reduce the impact (Grill et al., 2009), and causes an inflammatory process called the foreign body reaction (Lotti et al., 2017). Neural interface implantation triggers acute and subsequently chronic inflammatory responses at the interface with neurons and nerves, damaging surrounding tissues and worsening neural interface functionality (de la Oliva et al., 2018). Electrical impedance at the tissue/device interface also increases as a result of fibrotic tissue formation around the implant (Gunasekera et al., 2015). Moreover, immune cells such as macrophages continue to migrate to the implant site, releasing pro-inflammatory cytokines that perpetuate the immune response and compromise the long-term usability of neural interfaces (Del Valle et al., 2015; Riva & Micera, 2021). Another problem is the difference in electrical conductivity between biological material and a technical device, although certain progress has been made in solving it (Abidian & Martin, 2009). But even if we solve these problems, we will still need powerful software to process the brain signals. At the same time, it is very important to find the means of providing feedback between a device and the human brain; in other words, the brain should not only send a signal but also receive one from the device. Overcoming these limitations will help to rapidly advance neural interfaces to a new level.
However, in order to avoid mistakes and problems like those caused by the uncontrolled spread of computer games (but with consequences on a much larger scale), it is necessary to prevent the misuse of data and undue influence on the mind.

At the end of this section, we would like to express the following idea. For several reasons (for details see Grinin & Grinin, 2015, 2016: 147–148, 2021; Grinin et al., 2021; Grinin, 2019; Grinin & Korotayev, 2015), the emergence of essentially new forms of communication (beyond the existing types of electronic communication: e-mail, social networks, other Internet connections, mobile phones, etc.) is hardly possible in the coming decades. The development of communication has made great progress in recent decades and has generally even outpaced overall technological development. Most likely, revolutionary new forms of mass communication will not appear until the late twenty-first century.Footnote 27 However, we suppose and hope that this new form of mass communication will not be communication via neural interfaces or via the direct implantation of chips in the human brain (so that communication goes from the source directly to the brain). There is still a long way to go. And even if it succeeds, we think it will hardly be widely implemented in the next few decades (due to ethical, medical and legal constraints). Nevertheless, the threat is not far-fetched, so we can hardly ignore such possibilities and should think about restrictions beforehand, since this prospect raises concerns.

7 Artificial Intelligence

7.1 Development of ICT and Some of Its Dimensions

Miniaturization is a phenomenon characteristic of current technological progress. In the early 2010s, a new ‘core’ of information technologies was formed, based on the transition from microelectronics to nanoelectronics. We are talking about the rapid reduction in the size of processor chips. The speed of size reduction in the manufacturing process is evident from the following data: in the 1970s, 3 µm (3,000 nm) (Zilog and Intel); in the 1980s–early 1990s, 0.8 µm (800 nm) (Intel and IBM); in the late 1990s–early 2000s, 180–130 nm (a number of companies); in the early 2010s, process technologies at 45, 32 and 28 nm (Intel, etc.). In 2019, Intel released processors based on the 10 nm process technology (the company has not yet managed the production of 7 nm chips), while TSMC released chipsets for mobile devices based on the 7 nm process technology (Apple A12, Kirin 980 and Snapdragon 855). At the same time, their production technologies differ considerably: Intel with its 10 nm can fit up to 100 million transistors per square millimeter, while TSMC, with its 7 nm, can fit only 66 million. The miniaturization processes described above, together with the increase in the power of computer technology and the speed of electronic communications, have led to the most powerful spread of information and computer technologies.
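As a quick sanity check on the density figures just quoted, the comparison can be made explicit; only the numbers cited in the text are used:

```python
# Transistor densities cited above, in millions of transistors per mm^2.
densities = {
    "Intel 10 nm": 100,
    "TSMC 7 nm": 66,
}

ratio = densities["Intel 10 nm"] / densities["TSMC 7 nm"]
print(f"Intel's 10 nm process is {ratio:.2f}x denser than TSMC's 7 nm")
```

Despite the larger nominal node number, Intel’s 10 nm packs roughly one and a half times as many transistors per square millimeter, which illustrates that the ‘nm’ labels of different manufacturers no longer describe a comparable physical feature size.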

AMD has announced the imminent creation of 5 nm processors. But is it possible to create a whole new generation of computers through such a reduction in size? It is hardly possible, because even under the most optimistic assumptions such processors will offer only slightly higher performance (up to 15%), while the shortage of chips will seriously hamper the development of these technologies. The bottleneck through which data move between the drive and the CPU consumes a lot of power and generates much heat, and this limits further improvements (Khel, 2015). It is now becoming clear that a fundamentally new generation of computers is unlikely to be created in this way. Hence, there is more and more talk of progress in the field of quantum computers, which, even if they appear, will not appear soon.

Quite long ago, the idea appeared of storing data using specific media (e.g., magnetic, electric or optical); with the advent of nanotechnologies, it becomes possible to store information, for example, by replacing silicon, the basic material used in the production of semiconductor devices, with carbon nanotubes.Footnote 28 In this case, a bit of information could be stored in the form of a few atoms, say 100 atoms. This would reduce processor size by an order of magnitude and significantly increase operating speed. The number of transistors in a processor has now reached tens of billions (see above), and the aim is to create a processor with trillions of transistors (Moore, 2021). However, there are doubts about whether this technology can completely replace traditional transistors.

Speed of development. Since the 1960s, the so-called Moore’s Law, named after Intel co-founder Gordon Moore, has been best described as the ‘observation’ that processing power doubles every two years or so. It was an empirical observation, but due to its widespread popularization, it created the expectation that the computing industry would prioritize delivering a new semiconductor node within that time frame (Campbell, 2021; Tsvetkov, 2017). One way or another, the power of computers grew exponentially, reaching amazing capabilities. However, since about 2007–2010, ‘Moore’s law’ no longer holds, since the actual trends either lag behind it or overtake it (Kish, 2002; Zhang, 2022).
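The observation can be written as a simple exponential, P(t) = P0 · 2^(t/T), with a doubling period T of about two years. A minimal sketch:

```python
def projected_power(p0: float, years: float, doubling_period_years: float = 2.0) -> float:
    """Processing power after `years`, if it doubles every `doubling_period_years`."""
    return p0 * 2 ** (years / doubling_period_years)

# Over two decades, a strict two-year doubling implies a 2^10 = 1024-fold increase:
print(projected_power(1.0, 20.0))  # -> 1024.0
```

It is this thousand-fold-per-twenty-years compounding, rather than any single improvement, that produced the ‘amazing capabilities’ mentioned above, and also why even small deviations from the trend, accumulated since 2007–2010, matter so much.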

While the first Moore's Law concerns the continuously increasing complexity of microcircuits over a short period (two years), the 'second Moore's Law', formulated by Eugene Meyeran in 1998, argues that the cost of factories producing microcircuits increases exponentially with the complexity of the microcircuits produced. And this is what we observe today in the monstrous monopolization of chip production, which has led to a deep crisis and a chip 'war'. In general, such increases in performance and productivity have brought ICT to the limits beyond which a new qualitative breakthrough is already in sight. There are different views on when this will happen and how it will manifest itself. Many new information technologies have emerged, which we will discuss below.

In general, information technology has become ubiquitous; in fact, it is present in almost every sphere, and with the development of AI, as we discussed in Chap. 3, the situation is rapidly moving toward the point when, figuratively speaking, AI will become a kind of block system that can be assembled in a ‘configuration’ required for a particular task, just as today we can increase the memory of a computer or upgrade the graphics card.

7.2 Artificial Intelligence: General Characteristics

Thus, in recent years, ICT has clearly evolved to a higher level, which can be conditionally described as the level of artificial intelligence (AI). Although this concept is rather vague and the term AI has been in use for a long time, today its capabilities have increased significantly thanks to various new technologies (neural networks, remote contact, linguistic systems, facial recognition, movement control, determination of individual preferences, machine learning and many others). Moreover, most recently, AI has become increasingly directed toward influencing social control and even administrative relationships (see Grinin et al., 2021).

In the early 2010s, the concept of four main driving forces in the ICT market was formulated: social networks, mobile solutions, 'cloud computing', and means of processing large amounts of information (Belousov, 2016: 22). Since then, we can speak of a new wave of ever-increasing interest in AI. Previously, three waves had been identified: the first (1950s–1960s) is associated with work on machine translation and game programs; the second (1980s) with the development of expert systems; the third started in the late 1990s (Proydakov, 2016: 121). The idea that whoever owns artificial intelligence owns the world has acquired a completely material reality. Taking into account the new types of AI (like ChatGPT), this is increasingly acquiring a life-changing significance for the world. The AI race is accelerating, and the sphere of influence of AI is expanding rapidly. Here is one list of intellectual fields, or branches, of AI:

  • logical AI;

  • search;

  • pattern recognition;

  • representation;

  • inference;

  • common sense knowledge and reasoning;

  • learning from experience;

  • planning;

  • epistemology;

  • ontology;

  • heuristics;

  • genetic programming (McCarthy, 2007; see also McCarthy, 1990, 2000; Mitchell, 1997; Shanahan, 1997; Thomason, 2003).

However, this list, incomplete from the moment it was drawn up, has quickly been supplemented by new areas of activity: conducting dialogues and correspondence on a wide variety of topics, imitating art (painting, composing, poetry, etc.), compiling various texts, selecting necessary materials, etc.

It is believed that the increasing use of AI will lead to the adaptation of technologies in traditional economic sectors along the entire value chain and transform them, leading to the algorithmization of almost all functions, from logistics to company management, from pricing policy to market analysis.

In Chap. 3, we presented some of the numerous definitions of AI. These definitions generally follow three directions: (1) emphasis on the similarity between AI and human intelligence as a result of activity (in particular, the possibility of dialogue with AI, which we find today in many programs and bots); (2) emphasis on the fact that AI programs are capable of learning and self-development (neural networks, deep learning, etc.); (3) definition of AI as a software and hardware complex capable, to some extent, of solving problems and making decisions (more often assisting in decision-making), as well as of other intellectual functions.Footnote 29 It has now become relevant to distinguish between the previous view of AI, or Artificial Narrow Intelligence (ANI, Narrow AI), as a working program that replaces humans in some spheres, and general AI (AGI, Strong AI). This distinction appeared in theory a long time ago. Here is one of the definitions: an artificial general intelligence (AGI) is a hypothetical intelligent agent which can understand or learn any intellectual task that human beings or other animals can. A more precise definition, in our opinion, is that it is capable of performing most of the tasks that a human being can perform (Landgar, 2021; Turing, 1950). AGI does not exist yet, and there is a fierce debate in the computer industry about how to create it, and whether it can even be created at all (Gates, 2023). But in the context of recent breakthroughs, it turns out that programmers, with the help of supercomputers and huge teams that teach and "train" AI, have come much closer to this threshold than seemed possible until recently (see, e.g., Landgar, 2021). Time will tell whether this is the most important technological advance since the graphical user interface, as Gates (2023) argues, or still a lesser breakthrough. But it is definitely a dramatic change.

The ability to make decisions. Researchers (e.g., Zgurovsky & Zaichenko, 2013) often note the following capabilities of AI (compared to the previous levels of ICT):

  1. the presence of a goal or a set of goals of functioning;

  2. the ability to plan its actions and search for solutions to problems;

  3. the ability to learn and adapt its behavior in the course of operation;

  4. the ability to work in a poorly formalized environment, under conditions of uncertainty, and with fuzzy instructions;

  5. the ability to self-organize and self-develop;

  6. the ability to understand natural language texts;

  7. the ability to generalize and abstract accumulated information.

So, AI is really capable of making decisions. There are already a number of areas in which people entrust ICT and its algorithms with decisions that are not advisory but virtually or actually final (e.g., algorithmic trading on financial markets; collecting certain legal evidence using video and infrared cameras, whose data are admissible in court; border control for recognizing the contents of luggage and even identifying individuals; credit decisions based on an algorithmic assessment of creditworthiness through indirect signs, etc.). First of all, these are areas where the amount of information and the required speed of its processing are too great for a human to handle. The number of such spheres will, of course, increase. But still, in most cases, not only today but also in the future, AI will mainly support decision-making to one degree or another, or be part of a more sophisticated decision-making complex embedded in more complex and technologically integrated domains. In other words, AI should not be equated with self-regulating/self-managing systems; rather, it is an integral part of many of them.

In this context, it is important for us to determine the place of AI in the world of self-regulating/self-managing systems, in which we already live and will continue to live. In Chap. 3, we defined AI as follows: AI is a special universal technology (and a practical field of the theory's implementation), which is an (almost) indispensable part of self-regulating systems, just like an electric motor in electric machines, or an internal combustion engine (ICE) in cars, tractors and other machines. Hundreds and thousands of different machines are based on AI. But just as an internal combustion engine or an electric motor does not determine the operating principle of all machines, but only of a large class of them, so AI does not determine the functions and operating principle of all self-regulating systems, but only of a large class of them.

Limits. The rapid development of AI technologies naturally requires various restrictive measures. These measures already lag behind the technology. Nevertheless, some things are being done. For example, in 2021, after several years of preparation, UNESCO adopted recommendations on the ethical aspects of artificial intelligence (UNESCO, 2021). Overall, this is a useful document, which states that there is a need to provide a universal framework of values, operating principles and mechanisms to guide states in developing their laws, policies and other documents related to AI, in accordance with international law. It also states that human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of AI-based systems. But, of course, this document remains a mere declaration, which no one intends to follow.

On October 7, 2022, the White House Office of Science and Technology Policy (OSTP) released the Blueprint for an AI Bill of Rights, a set of five principles and associated practices to help guide the design, use and deployment of automated systems in order to protect the rights of the American public in the age of artificial intelligence (https://www.whitehouse.gov/ostp/ai-bill-of-rights/what-is-the-blueprint-for-an-ai-bill-of-rights/). But of course, this is completely insufficient, especially since state structures (including the intelligence services) themselves actively violate these principles and plan to continue doing so. The US government is developing certain measures to control the development of AI; however, given the secret services' great interest in that development, such measures may resemble the story of the fox assigned to guard the henhouse.

In late July 2023, some months after the advent of ChatGPT and other similar generative neural networks, the UN Security Council held a meeting on the topic "Artificial Intelligence: Opportunities and Risks for International Peace and Security". At the meeting, the UN Secretary-General supported calls by a number of member countries for the creation of a new UN body on AI issues, which would help eliminate future threats and establish and implement internationally developed monitoring and control mechanisms. Of course, such a body would be useful. But given that the most powerful states are striving to use generative AI as much as possible for control, censorship, instilling the desired ideology, and intelligence and military purposes, its ability to act against the backdrop of secrecy and intensifying competition among states to create ever newer and more powerful AI will be clearly limited. The UN Security Council also announced the first global summit dedicated to the security and regulation of artificial intelligence technology.

In connection with the advent of ChatGPT and its like, calls have intensified to slow down the race in the field of AI and to impose a kind of moratorium on its further development, at least for a certain period. Thus, in March 2023, an open letter appeared, signed by many famous people, both involved in the development of AI and not. They called on all AI labs to immediately pause training of AI systems more powerful than GPT-4 for at least six months (Musk et al., 2023; see also Vincent, 2023). In May 2023, Geoffrey Hinton, sometimes referred to as 'The Godfather of A.I.', said he had resigned from Google, where he had worked for more than a decade and had become one of the most respected voices in the field, so that he could speak freely about the risks of AI. A part of him, he said, now regrets his life's work. "I console myself with the normal excuse: If I hadn't done it, somebody else would have", Dr. Hinton said during a lengthy interview in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough (Metz, 2023). Google and Alphabet CEO Sundar Pichai said "every product of every company" will be impacted by the rapid development of AI, and warned that society needs to prepare for technologies like those already introduced (Elias, 2023). In short, there are many voices for strengthening control over the development of AI. We fear, however, that all measures will remain declarative until problems and conflicts in society increase enough to start a movement to create genuinely working mechanisms to limit the omnipotence of the digital companies.

On the other hand, there are powerful forces that would like to deploy AI not just to secure their own benefits, but to completely reformat society, bringing it under total control and undermining any possible opposition. It is not without reason that high-tech gurus and Silicon Valley prophets are creating a new universal narrative to legitimize the authority of algorithms and Big Data. As a result, they could then empower Big Brother. In this dream, people will give algorithms the authority to make the most important decisions in their lives (for a philosophical essay describing such an ideology, see Harari, 2016).

However, it is absolutely necessary to regulate the pace of AI development. It is worth pointing out an interesting connection with our ideas about the rate of technological progress, which is non-linear and will change dramatically. As we point out in Chap. 12, there has been a slowdown in technological growth in recent decades, starting from the 1970s. But our analysis shows that there are a number of reasons to expect that the global technological growth rate will return to a hyperbolic trajectory for some time in the forthcoming decades, corresponding to the beginning of the final phase of the Cybernetic Revolution (Grinin et al., 2020a, 2020b). Perhaps the great acceleration in the development of new large language models (i.e., new forms of AI), about which many analysts (e.g., IBM, 2023) have been talking since the appearance of ChatGPT, can be regarded as a signal of such a turn from a slowdown in technological progress to its new acceleration.

7.3 Symbiosis of Cognitive Disciplines and AI

The development of ICT and AI has evolved into a multi-faceted relationship, as the two have become interdependent. Theirs is the closest symbiotic relationship of all the technologies in MANBRIC (we have given examples in almost all chapters, including this one). Above we described the combination of neurotechnologies with ICT and AI. However, the connection between AI and the cognitive disciplines extends further and has other aspects as well. For example, one component of MANBRIC can determine the direction of another. The fact is that over at least the last three decades, it has been the achievements of the cognitive sciences that have become one of the most important drivers of AI. We mean the rapidly developing neural network technology (to be more precise, such networks are now called neuromorphic networks, and they are studied within neuromorphic computing).Footnote 30 For the development of AI, it became necessary to understand how the human brain works (e.g., Hawkins & Blakeslee, 2004). The lack of knowledge about the structure and functioning of the brain, and about the deep-level mechanisms of memory, decision-making, foresight and other intellectual functions, has become an obstacle to progress in machine learning and other capabilities. As a result, a symbiosis began to form between the cognitive sciences and technologies, on the one hand, and programming, on the other: studying the work of the brain in order to use it for machine learning technologies, as well as for the opportunity to influence human consciousness with the help of AI. This holds great promise, so the movement has been actively encouraged and led by government structures. It is believed that the key to future success lies in the collection, accumulation, processing, analysis and use of brain data. The USA has launched large-scale programs to collect and analyze brain data, as have Europe, China and other countries.
The 2016 'Apollo Project of the Brain' was very instructive in this regard; the US Government committed $100 million to it. This intelligence project aims to 'reverse-engineer' the brain to reveal algorithms that could enable computers to think more like humans. The Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the defense department's famed DARPA, has spent $100 million on this similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations and use those findings to better inform machine learning and artificial intelligence algorithms (Cepelewicz, 2016). To grasp the unusual and massive scale of the task, we can cite data from the Allen Institute (2020), whose task was to do something that had never been done before: cut a one cubic millimeter section of the brain cortex into ~25,000 ultra-thin slices, take ~125,000,000 images of those slices and assemble them into a 3D volume containing ~100,000 cells, 2.5 miles of wiring and 1,000,000,000 synaptic connections.
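
The scale of these figures can be checked with simple arithmetic: 25,000 slices through a one-millimeter cube imply a slice thickness of about 40 nanometers, and 125,000,000 images over 25,000 slices amount to roughly 5,000 images per slice. A quick sketch of the figures reported above:

```python
# Back-of-the-envelope scale of the Allen Institute / MICrONS reconstruction.
cube_side_mm = 1.0
num_slices = 25_000
num_images = 125_000_000

# 1 mm = 1,000,000 nm, so each slice is only tens of nanometers thick.
slice_thickness_nm = cube_side_mm * 1_000_000 / num_slices
images_per_slice = num_images / num_slices

print(f"slice thickness: {slice_thickness_nm:.0f} nm")  # → slice thickness: 40 nm
print(f"images per slice: {images_per_slice:.0f}")      # → images per slice: 5000
```

Forty nanometers is far thinner than a typical neuron, which is what makes the imaging and reassembly so demanding.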

In a word, the development of AI in many of its directions is based on attempts to imitate biological mechanisms of various kinds.Footnote 31 Of course, it is very difficult for AI to reach the capabilities of the brain. The most advanced neural networks today have tens of layers, while the human brain has hundreds of thousands or millions of such layers. This, of course, limits many opportunities, including so-called deep learning (Schmidhuber, 2014; Gavrilov & Kangler, 2015). Nevertheless, even the first steps have brought very impressive successes for AI, and at the same time these successes are worrying, since they will primarily be exploited by structures whose intentions and technologies are secret and unaccountable. "It's a substantial investment because we think it's a critical challenge, and [it'll have a] transformative impact for the intelligence community" (Cepelewicz, 2016).
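
The 'layers' mentioned above are successive stages of weighted summation followed by a non-linear activation. A minimal sketch of a forward pass through a small feed-forward network (the weights here are random, illustrative values, not a trained model):

```python
import math
import random

def forward(x, layers):
    """Propagate an input vector through successive layers of weights.

    Each layer computes weighted sums of its inputs and applies a
    tanh non-linearity; 'depth' is simply the number of such stages.
    """
    for weights in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in weights]
    return x

random.seed(0)
# Three layers of four neurons each, taking a four-dimensional input.
layers = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
          for _ in range(3)]
output = forward([0.5, -0.2, 0.1, 0.9], layers)
print(len(output))  # one activation per neuron in the final layer
```

Training, which adjusts these weights from data, is where the real computational cost of deep learning lies; the forward pass above only shows the layered structure.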

One of the latest examples of sophisticated AI based on neural networks is ChatGPT. Its capabilities are amazing. Obviously, in the relatively near future (within 5–15 years), ChatGPT will begin to actively replace human intellectual work in writing texts in many areas: journalism, advertising, poetry, even medical and scientific texts, etc. (Biswas, 2023; Lund & Wang, 2023). ChatGPT is one of a group of so-called large language models (LLMs), i.e., new forms of AI.

If these kinds of chatbots become very helpful secretaries and intellectual assistants, this could dramatically facilitate preparatory work in many areas of intellectual activity. But there is a significant risk that people will stop double-checking the data and simply rely on the answers, which will reduce the level of generalization and depth, as well as reliability, not to mention the generalization of personal experience. This process has, in general, been going on for a long time (take Wikipedia as an example), but now it will accelerate. People will get lazy, and their style will start to follow the style of the chatbot (which can be filled with whatever its sponsors and hosts see fit). This means that AI will start to dictate the style of thinking and presentation.

Thus, we are approaching a new phase in the introduction of new AI capabilities into our intellectual life. This will require a considerable transformation of education, art and science (which will, of course, partially degrade under such pressure), as well as of other areas. The monstrous quantity of artificially created intellectual products will diminish their quality.

The danger is that such chatbots will immediately begin to be used as an indirect but powerful ideological and propaganda machine (the more relevant texts with the right bias cascade into one's memory, the greater the propaganda effect), and they can also serve as a powerful censorship mechanism. Probably a battle of ideological chatbots will begin (just as a battle for and between websites is currently ongoing). And the most serious danger of the development of AI (and other technologies) is that we will realize these dangers only when it has become very difficult to change anything. As Bill Gates notes, "Whatever limitations it [AI] has today will be gone before we know it" (Gates, 2023). Therefore, the regulations should be established in advance.

7.4 Artificial Intelligence and Virtual Reality

Despite the great progress in the development of ICT and AI, it should be noted that their market has started to grow much more slowly than before. This is quite understandable, as a huge part or even most of humanity already has computers, mobile phones and, of course, access to the Internet. On the one hand, as we have seen, the new AI technologies are actively supported and funded by states and their structures, which are almost becoming the main customers. And it is here that projects of gigantic scale and resource consumption are being created. But on the other hand, there are no breakthrough areas comparable in mass character to the technology development of the previous period.Footnote 32 The idea of the Internet of Things has not materialized on the expected scale, the development of self-driving cars has been hampered (see above), etc. In general, this fits well with our concept that during the modernization phase of the Cybernetic Revolution, at the end of which we now find ourselves, there is a powerful diffusion of much-improved basic technologies whose foundations were laid earlier. The newly emerging technologies no longer seem so innovative and new. At the same time, these improved and highly differentiated technologies are already encompassing the whole of society and attempting to reconstruct it completely. In addition, in this phase the constraints and crises begin to be felt, and the foundations are laid for a new technological breakthrough.

Against this background, the digital giants seek to penetrate a wide range of fields (including medicine, see below) and also to create new areas of innovation. For instance, the modification and augmentation of reality is one of the largest such fields. The most concentrated attempt to turn toward artificial reality was the transformation of Facebook into Meta, though it seems to have failed completely. Some time ago there was news of rather large transactions in the field of artificial reality, in particular the 'land auctions' and the sale of digital paintings (with the artists committing themselves to destroying the originals). The idea, expressed several years ago, that we are on the verge of a new technological revolution in which business will migrate from the physical to the virtual world has not become very visible; however, such a turn is gradually emerging. It is hard to say how powerful this shift will be, and so far the financial results have been disappointing.

In any case, the opportunities for changing many things are constantly growing, including a person's appearance: one can already change hair and makeup in the information space, and in principle also voice and appearance. It is no coincidence that one of Gartner's recent predictions is that the development of AI toward hyper-realistic reality may lead to a situation in which people stop believing their own eyes, beginning the transition to a world of zero trust (this also applies to the capabilities of the aforementioned ChatGPT and its like). No one and nothing in this world will be trusted without confirmation in the form of a cryptographic digital signature. It is quite possible that scam bots will join the scammers.Footnote 33 This opens up many opportunities for the profitable use of deception technologies. In particular, if chatbots come to provide investment advice (and traders are already starting to ask for such advice), it is easy to imagine how investors' sentiment could be manipulated.
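
The cryptographic confirmation mentioned above works by letting only the holder of a secret key produce a tag that anyone with the corresponding verification key can check against the content. A minimal sketch using an HMAC tag from the Python standard library (a true digital signature would use an asymmetric key pair, which requires a third-party library; the key and messages here are purely illustrative):

```python
import hashlib
import hmac

SECRET_KEY = b"illustrative-secret"  # never hard-code real keys

def sign(message: bytes) -> str:
    """Produce an authentication tag only the key holder can compute."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check a tag in constant time to resist timing attacks."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"this recording is genuine")
print(verify(b"this recording is genuine", tag))  # True
print(verify(b"this recording is forged", tag))   # False
```

Any alteration to the content invalidates the tag, which is why such confirmation could, in principle, anchor trust in a world of hyper-realistic fakes.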

IT clones of dead or living people can also complicate the situation. The South Korean company DeepBrain AI offers a Rememory program to create such clones. One variant is a digital lifetime copy of a person, created through long sessions of communication (up to 7 h), which the neural network needs in order to study the person's character and habits and to record their appearance, manner of communication and voice. After death, a digital copy can instead be created on the basis of photos and videos and a short interview with the person's relatives. Afterward, the relatives of the deceased can hold a kind of IT séance with him. But the cost of the procedure is high: a digital copy will cost between $12,000 and $24,000, and each conversation will run to $1,200 (Coupeau, 2023).

In this way, artificial intelligence can significantly change people's attitudes to life and death, as a deceased person can, in a sense, continue to live among us and communicate through their copy. Thanks to AI, their images will even be able to learn new facts and news, which will reinforce the illusion of a living presence. Probably, for many, the very fact of losing a loved one may become less tragic. But this may also produce new psychological and social problems, including mental disorders in particularly impressionable people. The prevalence of digital copies will increase, though not rapidly.

In our opinion, it is unlikely that the development of artificial reality technologies will become a breakthrough direction comparable to mobile phones, since their consumer properties are very specific, suitable only for rich people or those who, as they say, have nothing better to do. But in general, of course, such a movement can significantly change some relationships. It is possible that artificial reality will be actively developed (if it is not already) by intelligence and military agencies in order to misinform the enemy.

Cryptocurrency is becoming a more serious factor in financial affairs, and states have also started to actively invest and embed themselves in this sector. But that is another matter. A change in the nature of money and finance is occurring, and with it fundamental changes in the financial sector, including the displacement or even the destruction of modern financial agents (including banks). Further, a change in the relationship between financial authorities and the population can lead to unpredictable consequences. As for undermining the hegemony of the dollar, cryptocurrencies may one day become the world's equivalent of precious metals, equally accepted in all corners of the globe.

7.5 AI and Health

As mentioned above, the shrinking profitability and the exhaustion of ICT expansion opportunities means that digital companies are actively penetrating into different spheres. One of the most important areas here is the health sector. In previous chapters and here, we have already touched on the importance of using ICT and AI in medicine and biotechnology, as well as the prospects in the use of nanotechnologies, robots, cognitive technologies and other things in medicine that are inconceivable without AI. We would like to focus on another important aspect. We will talk not only about medicine in the narrow sense of the word as a field of treatment and rehabilitation, but more generally as a field of control and self-control over health, lifestyle and life processes, condition monitoring, prognosis and diagnosis and so on. A wide range of devices are being created—watches, smart wristbands, etc., that monitor the pulse and various biorhythms, human activity (e.g., what distance, at what speed, what trajectory a person walks) and much more.

Moreover, there are increasingly active attempts to use those devices to track and control a person's location, calls, messages, transactions, activities, sleep, driving, etc. Developers are also working on tracking human emotions and trying to limit their unwanted manifestations. The technology itself is reminiscent of the famous lie detector, because the AI that determines emotions focuses on a person's breathing and heartbeat. But now, instead of sensors on the body, ordinary radio waves are used, even a regular Wi-Fi signal. This means that in the foreseeable future, not only will it be impossible to hide our emotions from an invisible spy hidden in a wireless network, but AI will begin to teach us how to behave. Prototypes are already being developed. 'Master, take it easy and pull yourself together', says the new wearable device, vibrating or squeezing the wrist. The development is intended to help people who find it difficult to control their emotions (Glyantsev, 2019, 2021).

Such things generally seem quite reasonable and can make life easier and help to maintain health. After all, what can be said against the idea of using software to detect emotional burnout by heart rate? Emotional burnout is a real psychological syndrome caused by constant stress, the scourge of the twenty-first century, according to the authors of the development. Early diagnosis of emotional burnout can help to prevent overwork, stress, depression, nervousness and aggression. The researchers note that this method can also be used in the complex treatment of mental disorders and headaches (Muraya, 2022). Of course, this can be a really good way to monitor oneself if it stays at the level of voluntary use, and if the data are not used or 'leaked' to be abused for pushing services and other things. Yes, all sorts of devices and smart things are voluntary for now, but we have all seen how the voluntary turns into the voluntary-compulsory, and then into the mandatory.

In the context of our study, this brief review allows us to confirm our previous conclusions and to draw new ones: (1) there is a clear merging of MANBRIC technologies, with medicine as an integral link in its core; (2) there is an obvious shift toward the use of AI in medicine and, more generally, in those aspects of a person's way of life related to health and the quality of biological life; (3) this trend will intensify;Footnote 34 (4) thus, the first prerequisites are being created for what we call the medical-biotechnological environment, designed to create a kind of artificial environment for the constant monitoring of the human condition, which we expect to be fully operational in a few decades; (5) finally, it is extremely important that this trend should restrict human rights and freedoms as little as possible.

7.6 Ideas for a New Kind of Computer

Options for new approaches. As has already been mentioned, the development of information technology is now on the eve of creating a new type of computer fundamentally superior to today's computers. There are different ideas about the basis of such breakthrough technologies for the computers of the future. The most popular are the predictions about quantum computers.Footnote 35 In layman's terms, the fundamental difference between quantum computers and conventional computers lies in the way information is processed. Conventional processors perceive information in a binary system; that is, to put it simply, data take on values of either one or zero. Quantum computers perform computations in which information can have a value of both one and zero simultaneously: the calculations are performed not in ordinary bits, but in what are known as qubits. Unlike bits, qubits can take on different values at the same time, and as a result they can perform calculations that a conventional processor is inherently incapable of (Datta et al., 2005; Veldhorst et al., 2015).
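
The bit/qubit distinction can be illustrated with a toy state-vector simulation: a qubit is a pair of amplitudes, and applying a Hadamard gate to the state |0⟩ yields an equal superposition, so a measurement gives 0 or 1 with probability 1/2 each. A minimal sketch that merely simulates the mathematics (a real quantum computer realizes it physically):

```python
import math

# A qubit state is a pair of amplitudes: (amplitude of |0>, amplitude of |1>).
def hadamard(state):
    """Apply the Hadamard gate, which creates superpositions."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

zero = (1.0, 0.0)            # the classical bit 0 as a quantum state
superposed = hadamard(zero)  # equal superposition of |0> and |1>

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0, p1 = superposed[0] ** 2, superposed[1] ** 2
print(round(p0, 3), round(p1, 3))  # → 0.5 0.5
```

Simulating n qubits classically requires tracking 2^n amplitudes, which is precisely why quantum hardware, where the physics tracks them for free, promises computations a conventional processor cannot match.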

There have also been reports of attempts to create a photonic computer. The idea of nanocomputers also lives on. As we said above, according to Eric Drexler, it is nanomechanics, not nanoelectronics, that can become such a base. Drexler has developed mechanical designs for the main components of a nanocomputer: rods that can be pushed in and out of cores and that interdependently block each other’s movements (Balabanov, 2010; Drexler, 1987, 1992, 2013). There are also reports of a so-called single-atom computer, which operates at a temperature close to absolute zero; this is a significant step toward the creation of a quantum computer.

Prospects. It is, of course, difficult to predict which type of computer will be the breakthrough computer leading into the future, although sooner or later such a breakthrough will happen. The push for a new type of computer could be accelerated by the race to build ever more powerful AI, which requires very large computing resources. Nevertheless, we do not believe that it will be implemented on a mass scale soon. Let us try to explain why.

At the moment, this new type of computer does not have the mass-market appeal that the digital giants are accustomed to. The level of development of the most popular computers is quite satisfactory for users (and if someone is personally dissatisfied, it is usually only because they have not updated their model). Therefore, it would be impossible to replace most of them with more productive ones except by force, and the necessary infrastructure would still have to be created. This is all the more difficult since more and more people prefer mobile phones, which can still be made more powerful and which, likewise, could only be replaced en masse by force. After all, the new equipment will obviously be much more expensive (perhaps by orders of magnitude) than current equipment, which is not exactly cheap. Even the strong efforts of governments to replace petrol cars with electric vehicles have not yet yielded big results. In other words, these new types of computers will be available only to governments and very large companies that would like to replace expensive supercomputers, and this is a limited market. Even if a new type of computer were to emerge within a decade, its development would be constrained by this small market; it would mostly pave the way to becoming the basis of a new technological paradigm in the future.
Taking into account that the need for such machines is not particularly great even in the military and space industries, their technological emergence may be delayed by insufficient funding. However, if the AI race accelerates, government efforts, either directly or through proxy companies (including the leading digital giants), could accelerate the development of ultra-fast computers. In this case, the dangers of the uncontrolled use of AI would increase exponentially.

Models within existing technologies: neuromorphic processors. There are interesting directions within existing technologies. These include, in particular, neuromorphic processors, which are being developed at the intersection of biology, physics, mathematics, computer science and semiconductor production, and which are based on ordinary transistors but with a different architecture, one that mimics the structure of biological neurons. As in the biological model, an artificial neuron has one output (an axon), whose signal can go to a large number of inputs of other neurons and thereby change their state. Enthusiasts believe that neuromorphic processors are among the most promising developments in computing. Today they are just a new model of programmable computing, but in the near future they are expected not only to speed up labor-intensive computing tasks on the fly with minimal power consumption, but also to open up new aspects of the digital lifestyle modeled on living nature. Over time, neuromorphic processors have every chance to extend and complement the capabilities of modern processors with technologies that will allow the computers of the future to operate, adapt and learn using algorithms that resemble the way humans think (Sandomirskaya, 2021; Thakur et al., 2018). We do not rule out such a bright future, although there are still doubts as to whether neuromorphic processors will become as popular as current ones, since for now they are intended for a relatively limited range of users.

However, the development of fundamentally new types of processors is underway, and it can be assumed that they will be realized at some point in the future. Of course, such a basis for AI may increase the dangers of its misuse many times over, so it is necessary to think in advance about the limits of such technologies.

Thus, in this chapter, we have considered a whole group of closely related new technological areas: nanotechnology, robotics (including self-driving vehicles), additive and cognitive technologies and AI. We believe that their development will enhance the development of the MANBRIC convergence as a whole, prepare medical technologies for the initial breakthrough that will launch the final phase of the Cybernetic Revolution and ensure the powerful development of this phase, which we call the phase of self-regulating/self-managing systems.