It is generally recognized that complex systems are composed of multiple, interrelated processes. In terms of restricted complexity, the goal of scientific practices is to study these processes in order to uncover the rules or laws of complexity (…) complexity becomes the umbrella term for the ideas of chaos, fractals, disorder, and uncertainty. Despite the difficulty of the subject matter, it is believed that, with enough time and effort, we will be able to construct a unified theory of complexity – also referred to as the ‘Theory of Complexity’ (TOC) or the ‘Theory of Everything’ (TOE) (…) Seth Lloyd, a professor in mechanical engineering at MIT, has compiled a list of 31 different ways in which one can define complexity!
If we accept that things are inherently complex, it follows that we cannot know phenomena in their full complexity. In other words, complex phenomena are irreducible. Acknowledging complexity therefore has a profound impact not only on the status of scientific practices, but also on the status of our knowledge claims as such. More specifically, because our knowledge of complex phenomena is limited, our practices should be informed by, and subject to, a self-critical rationality (…) Acknowledging the irreducible nature of complexity also influences our understanding of the general features of complexity
Features of Complex Systems:
- Complex Systems are constituted by richly interconnected components
- The component parts of complex systems have a double identity premised on both a diversity and a unity principle
- Upward and Downward causation give rise to complex structures: the competitive and cooperative interactions between component parts on a local level give rise to self-organisation which is defined as ‘a process whereby a system can develop a complex structure from fairly unstructured beginnings’
- Complex Systems exhibit self-organizing and emergent behavior: self-organisation is a necessary (but not sufficient!) condition for emergence, which is defined as ‘the idea that there are properties at a certain level of organization which cannot be predicted from the properties found at lower levels’
- Complex Systems are Open Systems: open systems can only be understood in terms of their relation with the environment (…) there is an energy, material, or information transfer into or out of a given system’s boundary (…) the environment cannot be appropriated by the system, so the boundary between a system and its environment should be treated both as a real, physical category and as a mental category or ideal model
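To make ‘a complex structure from fairly unstructured beginnings’ concrete, here is a minimal, hypothetical illustration (my own sketch, not from Woermann's article): cells on a ring repeatedly adopt the majority value of their local neighbourhood, a purely local, competitive/cooperative rule, and ordered domains emerge from a disordered start:

```python
def majority_step(cells):
    """One synchronous update: each cell adopts the majority value
    of itself and its two neighbours (periodic boundary)."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def count_domain_walls(cells):
    """Number of boundaries between a 0-block and a 1-block on the ring."""
    n = len(cells)
    return sum(cells[i] != cells[(i + 1) % n] for i in range(n))

# A disordered strip of 20 cells.
cells = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
before = count_domain_walls(cells)   # 14 walls: highly fragmented
for _ in range(5):
    cells = majority_step(cells)
after = count_domain_walls(cells)    # 2 walls: two large ordered domains
print(before, after)
```

No cell ‘knows’ about the global pattern; the large-scale order is a property of the whole that the local rule never mentions.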
Woermann, M., 2011. What is complexity theory? Features and Implications. Systems Engineering Newsletter, 30, 1-8, available here
Image available here
All the properties that follow:
- A system is complex when it is composed of many parts that interconnect in intricate ways
- A system presents dynamic complexity when cause and effect are subtle and play out over time.
- A system is complex when it is composed of a group of related units (subsystems), for which the degree and nature of the relationships is imperfectly known. The overall emergent behavior is difficult to predict, even when subsystem behavior is readily predictable. Small changes in inputs or parameters may produce large changes in behavior
- A complex system has a set of different elements so connected or related as to perform a unique function not performable by the elements alone
- Scientific complexity relates to the behavior of macroscopic collections of units endowed with the potential to evolve in time
- Complexity theory and chaos theory both attempt to reconcile the unpredictability of non-linear dynamic systems with a sense of underlying order and structure
make up this definition I like so much:
Complexity is the property of a real world system that is manifest in the inability of any one formalism being adequate to capture all its properties.
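The sensitivity noted above, where small changes in inputs or parameters produce large changes in behaviour, and the link between complexity and chaos can both be seen in a system as small as the logistic map, x -> r*x*(1-x). This is a standard textbook illustration, not an example from Ferreira's presentation:

```python
def logistic_orbit(r, x0=0.2, warmup=500, keep=50):
    """Iterate x -> r*x*(1-x); return the orbit after transients die out."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

# A small change in one parameter transforms the behaviour entirely:
# at r = 3.2 the orbit settles into a 2-cycle (two distinct values);
# at r = 3.9 it wanders chaotically (many distinct values).
settled = len(set(logistic_orbit(3.2)))
chaotic = len(set(logistic_orbit(3.9)))
print(settled, chaotic)
```

A one-line deterministic rule, yet no single formalism (fixed points, cycles, statistics) captures its behaviour across all parameter values, which is exactly the spirit of the definition above.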
Ferreira, P., 2001. Tracing Complexity Theory. Full presentation available here
Image available here
The term originally appeared in Kuhn’s book “The Structure of Scientific Revolutions” (1962). He had been struggling with the idea since the 1940s:
According to Kuhn, he discovered incommensurability as a graduate student in the mid to late 1940s while struggling with what appeared to be nonsensical passages in Aristotelian physics (…) He could not believe that someone as extraordinary as Aristotle could have written them. Eventually patterns in the disconcerting passages began to emerge, and then all at once, the text made sense to him: a Gestalt switch that resulted when he changed the meanings of some of the central terms. He saw this process of meaning changing as a method of historical recovery. He realized that in his earlier encounters, he had been projecting contemporary meanings back into his historical sources (Whiggish history), and that he would need to peel them away in order to remove the distortion and understand the Aristotelian system in its own right (hermeneutic history) (…) Kuhn realized that these sorts of conceptual differences indicated breaks between different modes of thought, and he suspected that such breaks must be significant both for the nature of knowledge, and for the sense in which the development of knowledge can be said to make progress.
Kuhn was influenced by the bacteriologist Ludwik Fleck, who used the term to describe the differences between ‘medical thinking’ and ‘scientific thinking’, and by Gestalt psychology, especially as developed by Wolfgang Köhler.
Kuhn’s original holistic characterization of incommensurability has been distinguished into two separate theses:
- taxonomic incommensurability involves conceptual change (…) a no-overlap principle precludes cross-classification of objects into different kinds within a theory’s taxonomy: no two kind terms may overlap in their referents unless they are related as species to genus; it stands in contrast to
- methodological incommensurability, which involves the epistemic values used to evaluate theories (…) the idea that there are no shared, objective standards of scientific theory appraisal, so that there are no external or neutral standards that univocally determine the comparative evaluation of competing theories
The Incommensurability of Scientific Theories, In Stanford Encyclopedia of Philosophy, first published Wed Feb 25, 2009; substantive revision Tue Mar 5, 2013, available here
Image available here
Systems Theory or Systems Science: a system is an entity with interrelated and interdependent parts; it is defined by its boundaries and it is more than the sum of its parts (subsystems)/ in a complex system (one having more than one subsystem), a change in one part will affect the operation and output of other parts and of the system as a whole/ systems theory attempts to find predictable patterns of behaviour in such systems and generalises them to systems as a whole/ the stability, growth or decline of a system will depend upon how well that system is able to adjust to, or be adjusted by, its operating environment
Niklas Luhmann - Social Systems Theory: distinction between system and environment (inside/outside)/ society consists of the communications between people, not of the people themselves; people are outside the system/ our thoughts make no difference to society unless they are communicated/ systems communicate about their environments, not with them/ the environment is what the system cannot control/ systems relate to the environment as information and as a resource/ society, encounters, organizations: the three types of social systems.
Autopoiesis: literally means self-creation/ a system capable of reproducing and maintaining itself; a system is autopoietic if the whole produces the parts from which it is made
Society: is an autopoietic system whose elements are communicative events reproducing other communicative events/ this communication has content and relationship levels: what is communicated and how/ all communication is both communication and communication about communication/ communication is imaginary/ communication takes place when an observer infers that one possible behaviour has been selected to express one possible message or idea/ the meaning of the message is always inferred by the observer.
Complexity: a system becomes complex when it is impossible to relate every element to every other element in every conceivable way at the same time/ when we can observe it in non-equivalent ways/ when we can discern many distinct subsystems/ complexity is a property of observing
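Luhmann's point that no system can relate every element to every other element in every conceivable way can be made quantitative with a back-of-the-envelope count (my own sketch, not from Luhmann): if each ordered pair of distinct elements may or may not be linked, the number of possible relation patterns is 2^(n*(n-1)), which explodes even for small n:

```python
def possible_relation_patterns(n):
    """Each ordered pair of distinct elements may or may not be linked,
    so there are 2**(n*(n-1)) distinct ways to relate n elements."""
    return 2 ** (n * (n - 1))

for n in (3, 5, 10):
    # 3 elements -> 64 patterns; 5 -> ~1 million; 10 -> ~1.2e27
    print(n, possible_relation_patterns(n))
```

Even ten elements admit more relation patterns than could ever be surveyed, so selection among possible relations is forced on any observer.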
Image available here
Self-regulated learners are active participants in their own learning. This manifests:
- metacognitively: they plan, set goals, organize, self-monitor, and self-evaluate
- motivationally: they report high self-efficacy, self-attributions and intrinsic task interest
- behaviorally: they select, structure and create environments that optimize learning
Def. Feature 01: Use of Self-Regulated Learning Strategies_SR Learners have an awareness of strategic relations between regulatory processes or responses and learning outcomes and they use these strategies to achieve their academic goals.
Def. Feature 02: Responsiveness to Self-Oriented Feedback_SR Learners share a ‘self-oriented’ feedback loop. They monitor the effectiveness of their learning methods or strategies
Def. Feature 03: Interdependent Motivational Processes_this feature examines how and why students use a particular strategy or response, with motives ranging from external rewards or punishment to a global sense of self-esteem and self-actualization
The image above illustrates triadic reciprocality, a proposed view of self-regulated learning that assumes reciprocal causation among three influence processes. According to social cognitive theorists, SR learning is determined not merely by personal processes but also by environmental and behavioral events, in a reciprocal fashion. According to Bandura, these influences are not necessarily symmetrical.
Determinants of SR Learning
- personal influences (knowledge, metacognitive processes, goals and affect)
- behavioral influences (self-observation, self-judgement, self-reaction)
- environmental influences (enactive outcomes, mastery experiences, modelling, verbal persuasion, direct assistance, literary or other symbolic forms of information such as diagrams, pictures and formulas, structure of the learning context)
- Barry J. Zimmerman, 1990. Self-Regulated Learning and Academic Achievement: An Overview. In Educational Psychologist, 25(1), pp. 3-17
- Barry J. Zimmerman, 1989. A Social Cognitive View of Self-Regulated Academic Learning. In Journal of Educational Psychology, 81(3), pp. 329-339
Image available here
It is the name for the computer modelling approach to information processing based on the design or architecture of the brain. Connectionist computer models are based on how computation occurs in neural networks, where neurons are the basic information-processing structures in the brain.
All connectionist models consist of four parts:
- units: they are to connectionist models what neurons are to the biological neural network: the basic information-processing structures. Most connectionist models are computer simulations run on digital computers, so units in such models are virtual objects, usually represented by circles. A unit receives input, computes an output signal (called an activation value), and then sends that output to other units. The purpose of a unit is to compute an output activation.
- connections: connectionist models are organised in layers of units, usually three (3). A network, however, is not simply an interconnected group of objects but an interconnected group of objects that exchange information with one another. Network connections are conduits. The conduits through which information flows from one member of the network to the next are called synapses or connections and are represented with lines. (In biology, synapses are the gaps between neurons, the fluid-filled spaces through which chemical messengers -neurotransmitters- leave one neuron and enter another.)
- activations: activation values in connectionist models are analogous to a neuron’s firing rate, i.e. how actively it is sending signals to other neurons. There is a big variability between the least active and the most active neurons, expressed on a scale from 0 to 1.
- connection weights: the input activations to a unit are not the only values it needs before it can compute its output activation. It also needs to know how strongly or weakly each input activation should affect its behaviour. The strength or weakness of a connection is measured by a connection weight, ranging from -1 to 1. Inhibitory connections reduce a neuron’s level of activity; excitatory connections increase it.
Yet the behaviour of a unit is never determined by an input signal sent via a single connection, however strong or weak that connection might be. It depends on its combined input, that is, the sum of each input activation multiplied by its connection weight. The output activation of a unit represents how active it is, not the strength of its signal.
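The combined-input rule just described can be sketched in a few lines of Python. The names are my own, and the squashing function used here (a logistic curve mapping the combined input into the 0-to-1 activation range) is one common choice, not something prescribed by the article:

```python
import math

def unit_output(input_activations, connection_weights):
    """Combined input: the sum of each input activation multiplied by
    its connection weight; a logistic function then squashes the result
    into the 0-to-1 activation range."""
    net = sum(a * w for a, w in zip(input_activations, connection_weights))
    return 1 / (1 + math.exp(-net))

# Two excitatory inputs (positive weights) and one inhibitory input
# (negative weight): net = 0.63 + 0.20 - 0.48 = 0.35.
act = unit_output([0.9, 0.5, 0.8], [0.7, 0.4, -0.6])
print(round(act, 3))  # 0.587
```

Note that no single connection decides the outcome: the inhibitory input pulls the activation down, but the combined input is what the unit actually responds to.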
Connectionist networks consist of units and connections between units, and they have some very interesting features, such as emergent behaviour: behaviour that does not reduce to any particular unit (much as liquidity does not reduce to any single water molecule). Graceful degradation and pattern completion are two further features that arise from the way activations are spread through a network. Connectionist networks are not classical computers: their behaviour does not arise from an algorithm; they learn to behave the way they do.
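Pattern completion, and the graceful degradation that comes with it, can be illustrated with a tiny Hopfield-style network. This is my own minimal sketch of the idea, not an example from Stufflebeam's article: one pattern of +1/-1 activations is stored in the connection weights, and the network recovers it from a degraded input:

```python
def train_hopfield(pattern):
    """Hebbian weights for one stored pattern of +1/-1 activations;
    no unit connects to itself (zero diagonal)."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, state):
    """One synchronous update: each unit takes the sign of its
    combined input (sum of activation times connection weight)."""
    n = len(state)
    return [1 if sum(weights[i][j] * state[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

stored = [1, 1, -1, -1, 1, -1, 1, 1]
weights = train_hopfield(stored)

corrupted = list(stored)
corrupted[0] = -corrupted[0]   # flip two units: a degraded input
corrupted[5] = -corrupted[5]
print(recall(weights, corrupted) == stored)  # True: pattern completed
```

The recovery is not computed by any stored algorithm for "fixing" patterns; it falls out of how activations spread through the weighted connections, which is why damage to a few units or inputs degrades performance gracefully rather than catastrophically.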
Robert Stufflebeam, 2006. Connectionism: An Introduction (pages 1-3), in CCSI (Consortium on Cognitive Science Instruction) supported by the Mind Project, full article available here
Image available here