Just spent the last couple of hours listening to Prof. Robert Sapolsky, Stanford University. This was an 11-year-old lecture on emergence and I’ve enjoyed every single argument and every single story he told. I can’t believe how lucky we are to have access to this kind of input at the click of a button. Interestingly (and also ironically) enough, he concludes his lecture discussing bottom-up emergent phenomena: people not needing experts or blueprints to tell them how to go about things, just randomness and simple rules that in high quantity produce quality. This is around the time the first xMOOCs showed up and connectivist theory was taking off. I can’t believe how related the two are.
Computer mediated brain to brain interaction (mice)
A brain-to-brain interface records the signals in one person’s brain, and then sends these signals through a computer in order to transmit them into the brain of another person. This process allows the second person to “read” the mind of the first or, in other words, have their brain fire in a similar pattern to the original person.
In 2013 scientists tested the method on mice; they surgically implanted recording wires that measured brain activity in the motor areas of the brain.
Brain to brain interaction using an electroencephalography cap and transcranial magnetic stimulation (humans)
the human device was non-invasive, meaning surgery wasn’t required. This device transferred the movement signals from the encoder straight to the motor area of the brain of the decoder, without using a computer (…) Then the scientists used transcranial magnetic stimulation (TMS) on the decoding person’s brain, sending little magnetic pulses through their skull to activate a specific region of their brain. This caused the second person to take the action that the first person meant to (…) The decoder wasn’t consciously aware of the signal they received (…) however, only movement was transferred, not thoughts
Brain to brain interaction using an electroencephalography cap and transcranial magnetic stimulation & led lights (humans)
The same researchers designed a game with pairs of participants, similar to 20 Questions. In the game, the encoder was given an object that the decoder wasn’t familiar with. The goal was for the decoder to successfully guess the object through a series of yes or no questions. But unlike in 20 Questions, the encoder responded by looking at flashing LED lights, one signifying yes and the other no. The visual response generated in the encoder’s brain was transmitted to the visual areas of the brain of the decoder (…) The decoders were successfully able to guess the object in 72 percent of the games, compared to an 18 percent success rate without the BBI (…) this was the largest BBI study, and also the first to include female participants.
Multi-person brain-to-brain interfaces/ collective intelligence
To do this, researchers drew on their past work with brain-to-brain interfaces. The Senders wore electroencephalography (EEG) caps, which allowed the researchers to measure brain activity via electrical signals, and watched a Tetris-like game with a falling shape that needed to be rotated to fit into a row at the bottom of the screen. In another room, the Receiver sat with a transcranial magnetic stimulation (TMS) apparatus positioned near the visual cortex. The Receiver could only see the falling shape, not the gap that it needed to fill, so their decision to rotate the block was not based on the gap that needed to be filled. If a Sender thought the Receiver should rotate the shape, they would look at a light flashing at 17 hertz (Hz) for Yes. Otherwise, they would look at a light flashing at 15 Hz for No. Based on the frequency that was more apparent in the Senders’ EEG data, the Receiver’s TMS apparatus would stimulate their visual cortex above or below a threshold, signaling the Receiver to make the choice of whether to rotate. With this experiment, the Receiver was correct 81 percent of the time.
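The decoding step described above amounts to asking which flicker frequency (17 Hz for Yes, 15 Hz for No) dominates the Sender’s EEG. A toy sketch of that kind of frequency decision is below; the sampling rate, signal model, and function name are illustrative assumptions, not details of the actual study.

```python
import numpy as np

def detect_choice(eeg, fs=250.0, f_yes=17.0, f_no=15.0):
    """Toy SSVEP-style decoder: compare spectral power at the
    'yes' (17 Hz) and 'no' (15 Hz) flicker frequencies and
    return whichever dominates. fs (samples/sec) is an assumption."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    p_yes = power[np.argmin(np.abs(freqs - f_yes))]
    p_no = power[np.argmin(np.abs(freqs - f_no))]
    return "rotate" if p_yes > p_no else "keep"

# Simulate 2 seconds of EEG dominated by the 17 Hz ('yes') flicker.
np.random.seed(0)
fs = 250.0
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 17.0 * t) + 0.3 * np.random.randn(len(t))
decision = detect_choice(signal, fs)
print(decision)
```

In the real experiment the threshold was applied to the TMS stimulation delivered to the Receiver; here the comparison simply picks the stronger of the two frequency bins.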
There’s a mind-boggling number of possible applications—just imagine projecting ideas in an educational environment, directly sharing memories with others, replacing the need for phones or the Internet altogether, or even, in the more near-term, using it to teach people new motor skills during rehabilitation.
- Lily Toomey, With new technology, mind control is no longer science-fiction, in Massive Science
- Jordan Harrod, Scientists sent thoughts from brain to brain with nothing in between, in Massive Science
the skills of listening to others become as important as making clear statements/ the good listener has to respond to intent, to suggestion, for the conversation to keep moving forward/ the difference between the two terms is not a matter of either/or. the heart of it all lies in picking up on concrete details, on specifics, to drive a conversation forward. Bad listeners bounce back in generalities when they respond; they are not attending to those small phrases, facial gestures or silences which open up a discussion.
Dialectic: the verbal play of opposites should gradually build up to a synthesis (…) the Aristotelian notion that although we use the same words, we cannot say we are speaking of the same things (..) the aim is to come to a mutual understanding (…) the listener elaborates the assumption by putting it into words (…) in the Socratic notion, the echo is actually a displacement
Dialogic: first coined by Mikhail Bakhtin to name a discussion which does not resolve itself by finding a common ground (…) though no shared agreements may be reached, through the process of exchange people may become more aware of their own views and expand their understanding of one another (..) knitted together but divergent exchange (…) a dialogic conversation can be ruined by too much identification with the other person.
Excerpts from Richard Sennett’s book, Together: The Rituals & Politics of Cooperation, 2012, London: Penguin Books (pages 18-20)
Image available here
Generally known as “learning styles”, it is the belief that individuals can benefit from receiving information in their preferred format, based on a self-report questionnaire. This belief has much intuitive appeal because individuals are better at some things than others and ultimately there may be a brain basis for these differences. Learning styles promises to optimize education by tailoring materials to match the individual’s preferred mode of sensory information processing.
There are, however, a number of problems with the learning styles approach. First, there is no coherent framework of preferred learning styles. Usually, individuals are categorised into one of three preferred styles of auditory, visual or kinesthetic learners based on self-reports. One study found that there were more than 70 different models of learning styles including, among others, “left v right brain,” “holistic v serialists,” “verbalisers v visualisers” and so on. The second problem is that categorising individuals can lead to the assumption of a fixed or rigid learning style, which can impair motivation to apply oneself or adapt.
Finally, and most damning, is that there have been systematic studies of the effectiveness of learning styles that have consistently found either no evidence or very weak evidence to support the hypothesis that matching or “meshing” material in the appropriate format to an individual’s learning style is selectively more effective for educational attainment. Students will improve if they think about how they learn but not because material is matched to their supposed learning style. The Educational Endowment Foundation in the UK has concluded that learning styles is “Low impact for very low cost, based on limited evidence”.
- Educators’ letter to the Guardian, No evidence to back idea of learning styles, In the Guardian, Sunday 12th March 2017, full article available here
- The debate over learning styles, In Mosaico Blog, posted on 3rd of September 2017, full blog post available here
Image available here
Gagne identifies five major categories of learning:
- verbal information: facts of knowledge
- intellectual skills: problem solving, discriminations, concepts, principles
- cognitive strategies: meta-cognition strategies for problem solving and thinking
- motor skills: behavioral physical skills
- attitudes: actions that a person chooses to complete
Learning tasks for intellectual skills can be organized in a hierarchy according to complexity:
- stimulus recognition,
- response generation,
- procedure following,
- use of terminology,
- concept formation,
- rule application, and
- problem solving
Each different type requires different types of instruction. The theory outlines nine instructional events and corresponding cognitive processes:
- Gaining attention (reception)/show variety of computer generated triangles
- Informing learners of the objective (expectancy)/pose question: “What is an equilateral triangle?”
- Stimulating recall of prior learning (retrieval)/ review definitions of triangles
- Presenting the stimulus (selective perception)/ give definition of equilateral triangle
- Providing learning guidance (semantic encoding)/ show example of how to create equilateral
- Eliciting performance (responding)/ ask students to create 5 different examples
- Providing feedback (reinforcement)/ check all examples as correct/incorrect
- Assessing performance (retrieval)/ provide scores and remediation
- Enhancing retention and transfer (generalization)/ show pictures of objects and ask students to identify equilaterals
Conditions of learning, Robert Gagne. In InstructionalDesign.org. Full text available here/ For more click here or here or search: Gagne, R. (1985). The Conditions of Learning (4th.). New York: Holt, Rinehart & Winston.
Image and more info available here
The term originally appeared in Kuhn’s “The Structure of Scientific Revolutions” book in 1962. He had been struggling with the word since the ’40s:
According to Kuhn, he discovered incommensurability as a graduate student in the mid to late 1940s while struggling with what appeared to be nonsensical passages in Aristotelian physics(…) He could not believe that someone as extraordinary as Aristotle could have written them. Eventually patterns in the disconcerting passages began to emerge, and then all at once, the text made sense to him: a Gestalt switch that resulted when he changed the meanings of some of the central terms. He saw this process of meaning changing as a method of historical recovery. He realized that in his earlier encounters, he had been projecting contemporary meanings back into his historical sources (Whiggish history), and that he would need to peel them away in order to remove the distortion and understand the Aristotelian system in its own right (hermeneutic history) (…) Kuhn realized that these sorts of conceptual differences indicated breaks between different modes of thought, and he suspected that such breaks must be significant both for the nature of knowledge, and for the sense in which the development of knowledge can be said to make progress.
Kuhn was influenced by the bacteriologist Ludwik Fleck who used the term to describe the differences between ‘medical thinking’ and ‘scientific thinking’ and Gestalt psychology, especially as developed by Wolfgang Köhler.
Kuhn’s original holistic characterization of incommensurability has been distinguished into two separate theses:
- taxonomic involves conceptual change (…) a no-overlap principle precludes cross-classification of objects into different kinds within a theory’s taxonomy/ no two kind terms may overlap in their referents unless they are related as species to genus, in contrast to
- methodological, which involves the epistemic values used to evaluate theories (…) it is the idea that there are no shared, objective standards of scientific theory appraisal, so that there are no external or neutral standards that univocally determine the comparative evaluation of competing theories
Image available here
Systems Theory or Systems Science: A system is an entity with interrelated and interdependent parts; it is defined by its boundaries and it is more than the sum of its parts (subsystems)/ in a complex system (having more than one subsystem) a change in one part of the system will affect the operation and output of other parts and the operation and output of the system as a whole/ systems theory attempts to find predictable patterns of behaviour in these systems, and generalizes them to systems as a whole. The stability, growth or decline of a system will depend upon how well that system is able to adjust to, or be adjusted by, its operating environment.
Niklas Luhmann-Social Systems Theory: distinction between system and environment (inside/outside)/ the system consists of the communications between people, not the people themselves; people are outside the system/ our thoughts make no difference to society unless they are communicated/ systems communicate about their environments, not with them/ the environment is what the system cannot control/ systems relate to the environment as information and as a resource/ society-encounters-organizations: the three types of social systems.
Autopoiesis: literally means self-creation/ a system capable of reproducing and maintaining itself; it is autopoietic if the whole produces the parts from which it is made.
Society: is an autopoietic system whose elements are communicative events reproducing other communicative events/ this communication has content and relationship levels: what is communicated and how/ all communication is both communication and communication about communication/ communication is imaginary/ communication takes place when an observer infers that one possible behaviour has been selected to express one possible message or idea/ the meaning of the message is always inferred by the observer.
Complexity: a system becomes complex when it is impossible to relate every element to every other element in every conceivable way at the same time/ when we can observe it in non-equivalent ways/ when we can discern many distinct subsystems/ complexity is a property of observing.
Image available here
Self-regulated learners are active participants in their own learning. This manifests:
- metacognitively: they plan, set goals, organize, self-monitor, and self-evaluate
- motivationally: they report high self-efficacy, self-attributions and intrinsic task interest
- behaviorally: they select, structure and create environments that optimize learning
Def. Feature 01: Use of Self-Regulated Learning Strategies_SR Learners have an awareness of strategic relations between regulatory processes or responses and learning outcomes and they use these strategies to achieve their academic goals.
Def. Feature 02: Responsiveness to Self-Oriented Feedback_SR Learners share a ‘self-oriented’ feedback loop. They monitor the effectiveness of their learning methods or strategies
Def. Feature 03: Interdependent Motivational Processes_examines how and why students use a particular strategy or response ranging from external rewards or punishment to a global sense of self-esteem and self-actualization
The image above illustrates triadic reciprocality, a proposed view of self-regulated learning that assumes reciprocal causation among three influence processes. According to social cognitive theorists, SR Learning is not determined merely by personal processes but also by environmental and behavioral events, in a reciprocal fashion. According to Bandura, these influences are not necessarily symmetrical.
Determinants of SR Learning
- personal influences (knowledge, metacognitive processes, goals and affect)
- behavioral influences (self-observation, self-judgement, self-reaction)
- environmental influences (enactive outcomes, mastery experiences, modelling, verbal persuasion, direct assistance, literary or other symbolic forms of information such as diagrams, pictures and formulas, structure of the learning context)
- Barry J. Zimmerman, 1990. Self-Regulated Learning and Academic Achievement: An Overview. In Educational Psychologist, 25(1), pp. 3-17
- Barry J. Zimmerman, 1989. A Social Cognitive View of Self-Regulated Academic Learning. In Journal of Educational Psychology, 81(3), pp. 329-339
Image available here
Pressey wanted the Automatic Teacher to give the human teacher more time for individual students (…) The machine was built out of typewriter parts and employed an intelligence test with 30 questions (…) The user responded to each test question using four keys; each time the user pressed a key the machine advanced the test paper to the next question, but the counter registered only correct answers (…) In December 1925 Pressey began to seek investors, first among publishers and manufacturers of typewriters, adding machines, and mimeograph machines, and later, in the spring of 1926, extending his search to scientific instrument makers (…) in contrast to his peers, investors failed to see the virtues of Pressey’s machine (…) he made multiple efforts to mass-produce the machine (he even invested his own money), but high production costs and difficulties in alignment dragged production to a halt (…) after several attempts Pressey publicly admitted defeat. In a third and final School and Society article, he skewered education as “the one major activity in this country which is still in a crude handicraft stage” (…) The Automatic Teacher was a technology of normalization, but it was at the same time a product of liberality.
- Petrina, S., 2004. Sidney Pressey and the Automation of Education, 1924-1934. In Technology and Culture, 45(2): 305-330, DOI: 10.1353/tech.2004.0085, full text available here
Image available here
“Visible teaching and learning occurs when learning is the explicit goal, when it is appropriately challenging, when the teacher and student both seek to ascertain whether and to what degree the challenging goal is attained, when there is deliberate practice aimed at attaining mastery of the goal, when there is feedback given and sought, and when there are active, passionate and engaging people participating in the act of learning” (p. 22).
Hattie also convincingly argues that the effectiveness of teaching increases when teachers act as an activator instead of as a facilitator. He developed a way of ranking various influences in different meta-analyses related to learning and achievement according to their effect sizes. In his ground-breaking study “Visible Learning” he ranked 138 influences that are related to learning outcomes, from very positive effects to very negative effects. Hattie found that the average effect size of all the interventions he studied was 0.40. Therefore he decided to judge the success of influences relative to this ‘hinge point’, in order to find an answer to the question “What works best in education?”
Ivo Arnold, 2011. Book Review: John Hattie: Visible learning: A synthesis of over 800 meta-analyses relating to achievement, Int Rev Educ (2011) 57:219–221, DOI 10.1007/s11159-011-9198-8, Routledge, Abingdon, 2008, 392 pp, ISBN 978-0-415-47618-8 (pbk)
Hattie Ranking: 195 Influences And Effect Sizes Related To Student Achievement available here
It is the name for the computer modelling approach to information processing based on the design or architecture of the brain. Connectionist computer models are based on how computation occurs in neural networks, where neurons represent the basic information processing structures in the brain.
All connectionist models consist of four parts:
- units: they are what neurons are to the biological neural network, the basic information processing structures. Most connectionist models are computer simulations run on digital computers. Units in such models are virtual objects and are usually represented by circles. A unit receives input, computes an output signal and then sends the output to other units. This output signal is called an activation value. The purpose of the unit is to compute an output activation.
- connections: connectionist models are organised in layers of units, usually three (3). A network, however, is not simply an interconnected group of objects but an interconnected group of objects that exchange information with one another. Network connections are conduits. The conduits through which information flows from one member of the network to the next are called synapses or connections and are represented with lines. (In biology, synapses are the gaps between neurons, the fluid-filled spaces through which chemical messengers -neurotransmitters- leave one neuron and enter another.)
- activations: activation values in connectionist models are analogous to a neuron’s firing rate, or how actively it is sending signals to other neurons. There is great variability between the least active and the most active neurons, expressed on a scale from 0 to 1.
- connection weights: The input activations to a unit are not the only values it needs to know before it can compute its output activation. It also needs to know how strongly or weakly an input activation should affect its behaviour. The strength or weakness of a connection is measured by a connection weight. Weights range from -1 to 1. Inhibitory connections reduce a neuron’s level of activity; excitatory connections increase it.
Yet, the behaviour of a unit is never determined by an input signal sent via a single connection, however strong or weak that connection might be. It depends on its combined input. That is the sum of each input activation multiplied by its connection weight. The output activation of a unit represents how active it is, not the strength of its signal.
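The combined-input rule above (each input activation multiplied by its connection weight, then summed) can be sketched in a few lines. The logistic squashing function used here to map the net input into the 0-to-1 activation range is a common modelling choice, not something the source text specifies:

```python
import math

def unit_output(inputs, weights):
    """Compute a unit's output activation: sum each input activation
    times its connection weight (the combined input), then squash the
    result into the 0-to-1 range with a logistic (sigmoid) function."""
    net_input = sum(a * w for a, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-net_input))

# Three incoming connections: two excitatory (positive weights) and
# one inhibitory (negative weight). Activations lie in [0, 1] and
# weights in [-1, 1], as described above.
inputs = [0.9, 0.5, 0.8]
weights = [0.7, 0.4, -0.6]
activation = unit_output(inputs, weights)
```

Note how the inhibitory third connection pulls the combined input down; no single connection determines the unit’s behaviour on its own.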
Connectionist networks consist of units and connections between units, and they have some very interesting features, like the emergence of behaviour that does not reduce to any particular unit (like liquidity in water). Graceful degradation and pattern completion are two ways in which activations spread through a network. They are not classical computers: their behaviour does not arise from an algorithm; they learn to behave the way they do.
Robert Stufflebeam, 2006. Connectionism: An Introduction (pages 1-3), in CCSI (Consortium on Cognitive Science Instruction) supported by the Mind Project, full article available here
Image available here