Cybernetics is the interdisciplinary study of the structure of regulatory systems. Cybernetics is closely related to control theory and systems theory. Both in its origins and in its evolution in the second half of the 20th century, cybernetics is equally applicable to physical and social (that is, language-based) systems.
Artificial Intelligence and cybernetics: Aren’t they the same thing? Or, isn’t one about computers and the other about robots? The answer to these questions is emphatically, No.
Researchers in Artificial Intelligence (AI) use computer technology to build intelligent machines; they consider implementation (that is, working examples) as the most important result. Practitioners of cybernetics use models of organizations, feedback, goals, and conversation to understand the capacity and limits of any system (technological, biological, or social); they consider powerful descriptions as the most important result.
The field of AI first flourished in the 1960s as the concept of universal computation [Minsky 1967], the cultural view of the brain as a computer, and the availability of digital computing machines came together to paint a future where computers were at least as smart as humans. The field of cybernetics came into being in the late 1940s when concepts of information, feedback, and regulation [Wiener 1948] were generalized from specific applications in engineering to systems in general, including systems of living organisms, abstract intelligent processes, and language.
The roots of cybernetic theory
Ștefan Odobleja (1902-1978) was a Romanian scientist and one of the precursors of cybernetics. His major work, Psychologie consonantiste, first published in Paris in 1938-1939, established many of the major themes of cybernetics and systems thinking ten years before the work of Norbert Wiener was published in 1948. The word cybernetics was first used in the context of “the study of self-governance” by Plato in The Laws to signify the governance of people. The word ‘cybernétique’ was also used in 1834 by the physicist André-Marie Ampère (1775-1836) to denote the sciences of government in his classification system of human knowledge.
The first artificial automatic regulatory system, a water clock, was invented by the mechanician Ktesibios. In his water clocks, water flowed from a source such as a holding tank into a reservoir, then from the reservoir to the mechanisms of the clock. Ktesibios’s device used a cone-shaped float to monitor the level of the water in its reservoir and adjust the rate of flow of the water accordingly, maintaining a constant level of water in the reservoir so that it neither overflowed nor ran dry. This was the first truly automatic self-regulatory device that required no outside intervention between the feedback and the controls of the mechanism. Although they did not refer to this concept by the name of cybernetics (they considered it a field of engineering), Ktesibios and others such as Heron and Su Song are considered to be some of the first to study cybernetic principles.
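Ktesibios’s float mechanism can be sketched as a simple negative-feedback simulation. All numbers and names below are illustrative assumptions, not measurements of the historical device:

```python
# Sketch of Ktesibios's float regulator (illustrative parameters).
# A cone-shaped float throttles the inflow as the reservoir level
# rises, holding the level steady despite the constant drain that
# feeds the clock mechanism.

def simulate_water_clock(steps=400, target_level=10.0, dt=0.1):
    level = 2.0            # starting water level in the reservoir
    max_inflow = 2.0       # inflow rate with the supply valve fully open
    outflow = 0.2          # constant drain feeding the clock mechanism
    for _ in range(steps):
        # The rising float throttles the valve: the opening shrinks in
        # proportion to how close the level is to the target
        # (proportional negative feedback).
        opening = max(0.0, min(1.0, (target_level - level) / target_level))
        level += (max_inflow * opening - outflow) * dt
    return level

final_level = simulate_water_clock()
```

With these parameters the level settles just below the target, at the point where the slightly open valve exactly balances the drain; a purely proportional regulator of this kind always carries such a small steady-state offset.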
Origins of “cybernetics”
The term itself began its rise to popularity in 1947 when Norbert Wiener used it to name a discipline apart from, but touching upon, such established disciplines as electrical engineering, mathematics, biology, neurophysiology, anthropology, and psychology. Wiener, Arturo Rosenblueth, and Julian Bigelow needed a name for their new discipline, and they adapted a Greek word meaning “the art of steering” to evoke the rich interaction of goals, predictions, actions, feedback, and response in systems of all kinds (the term “governor” derives from the same root) [Wiener 1948]. Early applications in the control of physical systems (aiming artillery, designing electrical circuits, and maneuvering simple robots) clarified the fundamental roles of these concepts in engineering; but the relevance to social systems and the softer sciences was also clear from the start. Many researchers from the 1940s through the 1960s worked solidly within the tradition of cybernetics without necessarily using the term, some openly (R. Buckminster Fuller), others less obviously (Gregory Bateson, Margaret Mead).
Origins of AI in cybernetics
Ironically but logically, AI and cybernetics have each gone in and out of fashion and influence in the search for machine intelligence. Cybernetics started in advance of AI, but AI dominated between 1960 and 1985, when repeated failures to make good on its claim of building “intelligent machines” finally caught up with it. These difficulties in AI led to a renewed search for solutions that mirror earlier approaches of cybernetics. Warren McCulloch and Walter Pitts were the first to propose a synthesis of neurophysiology and logic that tied the capabilities of brains to the limits of Turing computability [McCulloch & Pitts 1965]. The euphoria that followed spawned the field of AI [Lettvin 1989] along with early work on computation in neural nets, or, as they were then called, perceptrons. The fashion of symbolic computing rose to squelch perceptron research in the 1960s; perceptron research resurged in the late 1980s. This is not to say, however, that the current fashion in neural nets is a return to where cybernetics has been. Much of the modern work in neural nets rests in the philosophical tradition of AI and not that of cybernetics.
Philosophy of cybernetics
AI is predicated on the presumption that knowledge is a commodity that can be stored inside a machine, and that the application of such stored knowledge to the real world constitutes intelligence [Minsky 1968]. Only within such a “realist” view of the world can, for example, semantic networks and rule-based expert systems appear to be a route to intelligent machines. Cybernetics, in contrast, has evolved from a “constructivist” view of the world [von Glasersfeld 1987] where objectivity derives from shared agreement about meaning, and where information (or intelligence, for that matter) is an attribute of an interaction rather than a commodity stored in a computer [Winograd & Flores 1986]. These differences are not merely semantic in character, but rather determine fundamentally the source and direction of research performed from a cybernetic, versus an AI, stance.
Underlying philosophical differences between AI and cybernetics are displayed in how each field construes a set of central terms. For example, the concept of “representation” is understood quite differently in the two fields. In AI, the relations among such terms are causal and reflect the reductionist reasoning inherent in its “realist” perspective that via our nervous systems we discover the-world-as-it-is. In cybernetics, the relations are non-hierarchical and circular, reflecting a “constructivist” perspective in which the world is invented (in contrast to being discovered) by an intelligence acting in a social tradition and creating shared meaning via hermeneutic (circular, self-defining) processes. The implications of these differences are very great and touch on recent efforts to reproduce the brain [Hawkins 2004, IBM/EPFL 2004] that maintain roots in the paradigm of “brain as computer”. These approaches carry the same limitations as digital symbolic computing and are likely neither to explain nor to reproduce the functioning of the nervous system.
Different types and research
Cybernetics is an earlier but still-used generic term for many types of subject matter. These subjects extend into many other areas of science but are united in their study of the control of systems.
Pure cybernetics studies systems of control as a concept, attempting to discover the basic principles underlying such things as the Interactions of Actors Theory.

[Image: ASIMO uses sensors and intelligent algorithms to avoid obstacles and navigate stairs.]
Basic concepts of cybernetics
Feedback and circular causality
Feedback is a process whereby some proportion of the output signal of a system is passed (fed back) to the input, so the system itself contains a loop. Feedback mechanisms fundamentally influence the dynamic behavior of a system. Roughly speaking, negative feedback reduces the deviation or error from a goal state and therefore has stabilizing effects; positive feedback, which increases the deviation from an initial state, has destabilizing effects. Natural, technological, and social systems are full of feedback mechanisms.
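The stabilizing/destabilizing contrast can be seen in a minimal numeric sketch. The gains and values below are illustrative assumptions chosen only to make the two behaviors visible:

```python
# Negative feedback feeds a fraction of the error back with opposite
# sign, shrinking deviation from the goal; positive feedback feeds
# the deviation back with the same sign, amplifying it.

def negative_feedback_step(x, goal=0.0, gain=0.5):
    return x - gain * (x - goal)   # error is pushed toward zero

def positive_feedback_step(x, gain=0.5):
    return x + gain * x            # deviation compounds each step

x_neg = x_pos = 1.0                # same initial deviation for both loops
for _ in range(20):
    x_neg = negative_feedback_step(x_neg)  # decays toward the goal (0.0)
    x_pos = positive_feedback_step(x_pos)  # grows without bound
```

After twenty iterations the negative-feedback value has all but vanished, while the positive-feedback value has grown by several orders of magnitude.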
The general principles of feedback control were understood by engineers, and autonomous control systems were used to replace human operators. This replacement can be done only up to a point, and consequently one is brought directly to face the question of the role of the human observer in technological systems.
The mechanization of the mind
Cybernetic considerations of mind were related to the assumptions that:
Thinking is a form of computation. The computation involved is not the mental operation of a human being who manipulates symbols in applying rules, such as those of addition or multiplication; instead it is what a particular class of machines do – machines technically referred to as ‘automata’. By virtue of this, thinking can be modeled within the domain of the mechanical.
Physical laws can explain why and how nature – in certain of its manifestations, not restricted exclusively to the human world – appears to us to contain meaning, finality, directionality, and intentionality.
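The ‘automata’ referred to above can be made concrete with a minimal example. The following finite-state machine is a hypothetical sketch (an even-parity checker), not drawn from the source; it illustrates computation as the mechanical transformation of symbols into state transitions by fixed rules:

```python
# A deterministic finite automaton in the technical sense used above:
# no interpretation is involved, only rule-governed state transitions.
# This machine decides whether a binary string contains an even
# number of 1s.

TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts_even_ones(bits):
    state = "even"                       # start state
    for symbol in bits:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"               # "even" is the accepting state
```

The machine’s entire behavior is fixed by its transition table; in this sense, “thinking as computation” means nothing more than what such a table-driven device does.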
Engineering cybernetics (or technical cybernetics) deals with the question of control engineering of mechatronic systems. It is used to control or regulate such a system; more often the term control theory encompasses this field and is used instead.
Medical cybernetics investigates networks in human biology, medical decision making and the information processing structures in the living organism.
Biological cybernetics investigates communication and control processes in living organisms and ecosystems.
Biorobotics is a term that loosely covers the fields of cybernetics, bionics and even genetic engineering as a collective study.
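The overlap of engineering cybernetics with control theory, mentioned above, is typically realized in regulators such as the PID (proportional-integral-derivative) controller. The sketch below uses illustrative gains and a toy plant of my own choosing, not anything from the source:

```python
# Generic discrete PID controller: the control signal combines the
# current error (P), its accumulated history (I), and its rate of
# change (D). Gains and the toy plant are illustrative only.

def make_pid(kp, ki, kd, dt):
    integral = 0.0
    prev_error = 0.0
    def control(setpoint, measurement):
        nonlocal integral, prev_error
        error = setpoint - measurement
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        return kp * error + ki * integral + kd * derivative
    return control

# Toy plant: a value that simply accumulates the control input.
pid = make_pid(kp=0.8, ki=0.2, kd=0.05, dt=0.1)
value = 0.0
for _ in range(200):
    value += pid(setpoint=1.0, measurement=value) * 0.1
```

Run against this toy plant, the loop drives the value toward the setpoint of 1.0; the integral term removes steady-state offset and the derivative term damps overshoot.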
Cybernetics: The Center of Science’s Future
Cybernetics is not the same as robotics, and it has nothing to do with freezing dead people. It is as different from artificial intelligence as philosophy is from mud-pies. And, in the opinion of the speaker, it subsumes the “hard” sciences, the soft sciences, and the humanities as well.
Emerging from control theory and the feeling that trans-disciplinary enquiry was critical, the field of cybernetics surged in the 1940s. By 1960 it had become a political no-no, coincidentally in the same period that it was exploding into new domains. Today the word has returned to common use, but its meaning and importance are not understood. Cybernetics directly influences revolutionary work in fields such as biology, cognitive science, family therapy, machine intelligence, and management.
But what is it? Primarily an epistemological stance, cybernetics is informally characterized by the speaker as “the science of describing”; that is, a formal approach to the purpose and nature of this universal human activity. As such, it requires an examination of the subjectivity inherent in all description. Insofar as it exposes science as a consensual process (rather than a search for “truth”), it shows how science does not require a “real world” to do its work. Insofar as its primary observable is an “interaction” in which the observer inextricably participates, it is suitable for application to all human activities.
In building his argument for the importance of cybernetics in the future of science, the speaker will give an overview of the philosophy and implications of the field. Examples will be given from his work in software development and management consulting, as well as from other important applications. He will draw implications for an ethics of scientific enquiry, the responsibility of the individual, and the signs of change in the world order.