Brain(s) to brain(s)

Image available here

Computer-mediated brain-to-brain interaction (mice)

A brain-to-brain interface records the signals in one person’s brain and then sends these signals through a computer in order to transmit them into the brain of another person. This process allows the second person to “read” the mind of the first or, in other words, have their brain fire in a similar pattern to the original person’s.

In 2013 scientists tested the method on mice: they surgically implanted recording wires that measured brain activity in the motor areas of the brain.

Brain-to-brain interaction using an electroencephalography cap and transcranial magnetic stimulation (humans)

The human device was non-invasive, meaning surgery wasn’t required. This device transferred the movement signals from the encoder straight to the motor area of the decoder’s brain, without using a computer (…) Then the scientists used transcranial magnetic stimulation (TMS) on the decoding person’s brain, sending little magnetic pulses through their skull to activate a specific region of their brain. This caused the second person to take the action that the first person meant to (…) The decoder wasn’t consciously aware of the signal they received (…) however, only movement was transferred, not thoughts.

Brain-to-brain interaction using an electroencephalography cap, transcranial magnetic stimulation & LED lights (humans)

The same researchers designed a game with pairs of participants, similar to 20 Questions. In the game, the encoder was given an object that the decoder wasn’t familiar with. The goal was for the decoder to successfully guess the object through a series of yes-or-no questions. But unlike in 20 Questions, the encoder responded by looking at flashing LED lights, one signifying yes and the other no. The visual response generated in the encoder’s brain was transmitted to the visual areas of the decoder’s brain (…) The decoders were successfully able to guess the object in 72 percent of the games, compared to an 18 percent success rate without the BBI (…) this was the largest BBI study, and also the first to include female participants.

Multi-person brain-to-brain interfaces/ collective intelligence

To do this, researchers drew on their past work with brain-to-brain interfaces. The Senders wore electroencephalography (EEG) caps, which allowed the researchers to measure brain activity via electrical signals, and watched a Tetris-like game with a falling shape that needed to be rotated to fit into a row at the bottom of the screen. In another room, the Receiver sat with a transcranial magnetic stimulation (TMS) apparatus positioned near the visual cortex. The Receiver could only see the falling shape, not the gap that it needed to fill, so their decision to rotate the block could not be based on seeing the gap. If a Sender thought the Receiver should rotate the shape, they would look at a light flashing at 17 hertz (Hz) for Yes; otherwise, they would look at a light flashing at 15 Hz for No. Depending on which frequency was more apparent in the Senders’ EEG data, the Receiver’s TMS apparatus would stimulate their visual cortex above or below a threshold, signaling the Receiver whether to rotate. With this experiment, the Receiver was correct 81 percent of the time.
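In signal-processing terms, the Yes/No readout amounts to comparing the EEG’s spectral power at the two flicker frequencies and picking the larger — the standard way steady-state visually evoked responses are detected. Here is a minimal sketch of that comparison; the sampling rate, band width, and synthetic signal are my own illustrative assumptions, not details from the study:

```python
# Compare EEG spectral power at the two flicker frequencies (17 Hz = Yes,
# 15 Hz = No) and pick the stronger one. Synthetic data; numpy only.
import numpy as np

FS = 256                    # sampling rate in Hz (assumed)
YES_HZ, NO_HZ = 17.0, 15.0  # flicker frequencies from the experiment

def band_power(signal, fs, freq, width=0.5):
    """Power of `signal` in a narrow band around `freq`, via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.abs(freqs - freq) <= width].sum()

def decode_answer(eeg_window):
    """Return 'yes' if the 17 Hz response dominates, else 'no'."""
    p_yes = band_power(eeg_window, FS, YES_HZ)
    p_no = band_power(eeg_window, FS, NO_HZ)
    return "yes" if p_yes > p_no else "no"

# Demo: a synthetic 2-second "EEG" window dominated by the 17 Hz flicker.
t = np.arange(0, 2.0, 1.0 / FS)
fake_eeg = np.sin(2 * np.pi * 17.0 * t) + 0.3 * np.random.randn(len(t))
print(decode_answer(fake_eeg))   # -> "yes"
```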

There’s a mind-boggling number of possible applications—just imagine projecting ideas in an educational environment, directly sharing memories with others, replacing the need for phones or the Internet altogether, or even, in the more near-term, using it to teach people new motor skills during rehabilitation.

On “Digital learning environments, the science of learning and the relationship between the teacher and the learner”

Image available here

Under what conditions do these technology tools lead to the most effective learning experiences? Do they serve as a distraction if not deliberately integrated into learning activities? When these devices are incorporated deliberately into learning activities, how are students using them to make sense of ideas and apply them in practice? (…) It is much more complicated and difficult to develop an environment that can facilitate learning in complex conceptual domains (…) while adaptive systems have taken some forward leaps, there is still some way to go before these environments can cope with the significant diversity in how individual students make sense of complex ideas (…) Depending on how students structure related ideas in their mind, that structure will limit the way in which new information can be incorporated (…) The problem with providing personalised instruction in a digital environment is therefore not just about what the overall level of prior knowledge is but how that knowledge is structured in students’ minds (…) Technologies that are and will continue to impact on education need to be built on a foundation that includes a deep understanding of how students learn (…) teachers are constantly navigating a decision set that is practically infinite (…) The question becomes one of when and how technologies can be most effectively used, for what, and understanding what implications this has for the teacher-student relationship (…) there are two central narratives about what learning is: the first, acquisition, is vital but the second, participation, is even more powerful for learning (…)

There are several key areas in which the science of learning can help:

  • Informing the development of and evaluating new technologies: research examining the effectiveness of the tools lags well behind the spread of their use (…) there is a clear need to draw on principles of quality student learning to determine how best to effectively combine the expertise of teachers and the power of machines
  • Helping students to work with technologies: it is critical to determine how best to support students to do so in the absence of a teacher to help with this
  • Determining how technologies can best facilitate teaching and learning: one way the science of learning will assist in understanding the changing student-teacher dynamic in education is through its implications for broader policy and practice (…) The increased use of these technologies in classrooms must be driven by what is known about quality learning and not by financial or political motives.

Full article available here

Five ways to ensure that models serve society: a manifesto

Image available here
  • Mind the assumptions: assess uncertainty and sensitivity, as their role in predictions is substantially larger than originally asserted (a minimal uncertainty-propagation sketch follows this list)
  • Mind the hubris: complexity can be the enemy of relevance; there is a trade-off between the usefulness of a model and the breadth it tries to capture; complexity is too often seen as an end in itself. Instead, the goal must be finding the optimum balance with error
  • Mind the framing: match purpose and context; no one model can serve all purposes; modellers know that the choice of tools will influence, and could even determine, the outcome of the analysis, so the technique is never neutral; shared approaches to assessing quality need to be accompanied by a shared commitment to transparency. Examples of terms that promise uncontested precision include: ‘cost–benefit’, ‘expected utility’, ‘decision theory’, ‘life-cycle assessment’, ‘ecosystem services’, and ‘evidence-based policy’. Yet all presuppose a set of values about what matters — sustainability for some, productivity or profitability for others; the best way to keep models from hiding their assumptions, including political leanings, is a set of social norms. These should cover how to produce a model, assess its uncertainty and communicate the results. International guidelines for this have been drawn up for several disciplines. They demand that processes involve stakeholders, accommodate multiple views and promote transparency, replication and analysis of sensitivity and uncertainty. Whenever a model is used for a new application with fresh stakeholders, it must be validated and verified anew.
  • Mind the consequences: quantification can backfire. Excessive regard for producing numbers can push a discipline away from being roughly right towards being precisely wrong; once a number takes centre-stage with a crisp narrative, other possible explanations and estimates can disappear from view. This might invite complacency, and the politicization of quantification, as other options are marginalized; opacity about uncertainty damages trust (…) Full explanations are crucial.
  • Mind the unknowns: acknowledge ignorance; communicating what is not known is at least as important as communicating what is known; Experts should have the courage to respond that “there is no number-answer to your question.”
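To make the first point concrete: assessing uncertainty means replacing single “best guess” inputs with ranges and reporting the resulting spread of predictions, rather than one crisp number. A minimal Monte Carlo sketch under that idea — the toy model and parameter ranges are invented for illustration, not taken from the manifesto:

```python
# Propagate uncertainty in a model's assumptions through to its output.
# The logistic-growth "model" stands in for any policy model.
import numpy as np

rng = np.random.default_rng(42)

def model(growth_rate, carrying_capacity, t=10.0, p0=1.0):
    """Toy logistic growth: K / (1 + (K/p0 - 1) * exp(-r*t))."""
    return carrying_capacity / (
        1 + (carrying_capacity / p0 - 1) * np.exp(-growth_rate * t)
    )

# Draw each uncertain assumption from a plausible range instead of
# fixing it at a point estimate.
growth = rng.uniform(0.2, 0.6, size=10_000)
capacity = rng.uniform(50, 150, size=10_000)

predictions = model(growth, capacity)
lo, mid, hi = np.percentile(predictions, [5, 50, 95])
print(f"median {mid:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
```

Reporting the interval alongside the median is exactly the kind of “full and frank disclosure” the authors call for.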

Mathematical models are a great way to explore questions. They are also a dangerous way to assert answers. Asking models for certainty or consensus is more a sign of the difficulties in making controversial decisions than it is a solution, and can invite ritualistic use of quantification. Models’ assumptions and limitations must be appraised openly and honestly. Process and ethics matter as much as intellectual prowess. It follows, in our view, that good modelling cannot be done by modellers alone. It is a social activity. The French movement of statactivistes has shown how numbers can be fought with numbers, such as in the quantification of poverty and inequalities (…) We are calling not for an end to quantification, nor for apolitical models, but for full and frank disclosure. Following these five points will help to preserve mathematical modelling as a valuable tool. Each contributes to the overarching goal of billboarding the strengths and limits of model outputs. Ignore the five, and model predictions become Trojan horses for unstated interests and values. Model responsibly.

Saltelli, A. et al., 2020. Five ways to ensure that models serve society: a manifesto. In Nature, 582, pp. 482-484, article available here

Complexity Theory II (M. Woermann)

[Image: complexity theory table of contents]

Restricted Complexity

It is generally recognized that complex systems are composed of multiple, interrelated processes. In terms of restricted complexity, the goal of scientific practices is to study these processes in order to uncover the rules or laws of complexity (…) complexity becomes the umbrella term for the ideas of chaos, fractals, disorder, and uncertainty. Despite the difficulty of the subject matter, it is believed that, with enough time and effort, we will be able to construct a unified theory of complexity – also referred to as the ‘Theory of Complexity’ (TOC) or the ‘Theory of Everything’ (TOE) (…) Seth Lloyd, a professor in mechanical engineering at MIT, has compiled a list of 31 different ways in which one can define complexity!

General Complexity

If we accept that things are inherently complex, it follows that we cannot know phenomena in their full complexity. In other words, complex phenomena are irreducible. Acknowledging complexity therefore has a profound impact not only on the status of scientific practices, but also on the status of our knowledge claims as such. More specifically, because our knowledge of complex phenomena is limited, our practices should be informed by, and subject to, a self-critical rationality (…) Acknowledging the irreducible nature of complexity also influences our understanding of the general features of complexity.

Features of Complex Systems:

  • Complex Systems are constituted by richly interconnected components
  • The component parts of complex systems have a double identity premised on both a diversity and a unity principle
  • Upward and Downward causation give rise to complex structures: the competitive and cooperative interactions between component parts on a local level give rise to self-organisation which is defined as ‘a process whereby a system can develop a complex structure from fairly unstructured beginnings’
  • Complex Systems exhibit self-organizing and emergent behavior: self-organisation is a necessary, but not sufficient, condition for emergence, which is defined as ‘the idea that there are properties at a certain level of organization which cannot be predicted from the properties found at lower levels’
  • Complex Systems are Open Systems: the intelligibility of open systems can only be understood in terms of their relation with the environment (…) there is an energy, material, or information transfer into or out of a given system’s boundary (…) the environment cannot be appropriated by the system, so the boundary between a system and its environment should be treated both as a real, physical category, and a mental category or ideal model

 

References

Woermann, M., 2011. What is complexity theory? Features and Implications. Systems Engineering Newsletter, 30, 1-8, available here

Image available here

Complexity Theory


Complexity has been defined in all of the following ways:

  • A system is complex when it is composed of many parts that interconnect in intricate ways
  • A system presents dynamic complexity when cause and effect are subtle and play out over time.
  • A system is complex when it is composed of a group of related units (subsystems), for which the degree and nature of the relationships is imperfectly known. The overall emergent behavior is difficult to predict, even when subsystem behavior is readily predictable. Small changes in inputs or parameters may produce large changes in behavior
  • A complex system has a set of different elements so connected or related as to perform a unique function not performable by the elements alone
  • Scientific complexity relates to the behavior of macroscopic collections of units endowed with the potential to evolve in time
  • Complexity theory and chaos theory both attempt to reconcile the unpredictability of non-linear dynamic systems with a sense of underlying order and structure

All of which add up to this definition I like sooo much:

Complexity is the property of a real world system that is manifest in the inability of any one formalism being adequate to capture all its properties.

 

Reference

Ferreira, P., 2001. Tracing Complexity Theory. Full presentation available here

Image available here

Do learning styles exist?


Generally known as “learning styles”, this is the belief that individuals can benefit from receiving information in their preferred format, based on a self-report questionnaire. The belief has much intuitive appeal, because individuals are better at some things than others and ultimately there may be a brain basis for these differences. The learning styles approach promises to optimize education by tailoring materials to match the individual’s preferred mode of sensory information processing.

There are, however, a number of problems with the learning styles approach. First, there is no coherent framework of preferred learning styles. Usually, individuals are categorised into one of three preferred styles of auditory, visual or kinesthetic learners based on self-reports. One study found that there were more than 70 different models of learning styles, including, among others, “left v right brain,” “holistic v serialists,” “verbalisers v visualisers” and so on. The second problem is that categorising individuals can lead to the assumption of a fixed or rigid learning style, which can impair motivation to apply oneself or adapt.

Finally, and most damning, systematic studies of the effectiveness of learning styles have consistently found either no evidence or very weak evidence to support the hypothesis that matching or “meshing” material in the appropriate format to an individual’s learning style is selectively more effective for educational attainment. Students will improve if they think about how they learn, but not because material is matched to their supposed learning style. The Education Endowment Foundation in the UK has concluded that learning styles is “Low impact for very low cost, based on limited evidence”.

 

References

  • Educators’ letter to the Guardian, No evidence to back idea of learning styles, In the Guardian, Sunday 12th March 2017, full article available here
  • The debate over learning styles, In Mosaico Blog, posted on 3rd of September 2017, full blog post available here

Image available here

Kuhn’s concept of ‘incommensurability’


The term originally appeared in Kuhn’s 1962 book “The Structure of Scientific Revolutions”. He had been struggling with the concept since the 1940s:

According to Kuhn, he discovered incommensurability as a graduate student in the mid to late 1940s while struggling with what appeared to be nonsensical passages in Aristotelian physics (…) He could not believe that someone as extraordinary as Aristotle could have written them. Eventually patterns in the disconcerting passages began to emerge, and then all at once, the text made sense to him: a Gestalt switch that resulted when he changed the meanings of some of the central terms. He saw this process of meaning changing as a method of historical recovery. He realized that in his earlier encounters, he had been projecting contemporary meanings back into his historical sources (Whiggish history), and that he would need to peel them away in order to remove the distortion and understand the Aristotelian system in its own right (hermeneutic history) (…) Kuhn realized that these sorts of conceptual differences indicated breaks between different modes of thought, and he suspected that such breaks must be significant both for the nature of knowledge, and for the sense in which the development of knowledge can be said to make progress.

Kuhn was influenced by the bacteriologist Ludwik Fleck, who used the term to describe the differences between ‘medical thinking’ and ‘scientific thinking’, and by Gestalt psychology, especially as developed by Wolfgang Köhler.

Kuhn’s original holistic characterization of incommensurability has been distinguished into two separate theses:

  • taxonomic, which involves conceptual change (…) a ‘no-overlap’ principle precludes cross-classification of objects into different kinds within a theory’s taxonomy: no two kind terms may overlap in their referents unless they are related as species to genus; in contrast to
  • methodological, which involves the epistemic values used to evaluate theories (…) it is the idea that there are no shared, objective standards of scientific theory appraisal, so that there are no external or neutral standards that univocally determine the comparative evaluation of competing theories

 

Reference

The Incommensurability of Scientific Theories, In Stanford Encyclopedia of Philosophy, first published Wed Feb 25, 2009; substantive revision Tue Mar 5, 2013, available here

Image available here

Systems theory & Autopoiesis/ Society & Complexity

[Image: Lee Bul, Autopoiesis]

Systems Theory or Systems Science: A system is an entity with interrelated and interdependent parts; it is defined by its boundaries and it is more than the sum of its parts (subsystems)/ in a complex system (having more than one subsystem) a change in one part of the system will affect the operation and output of other parts and the operation and output of the system as a whole/ systems theory attempts to find predictable patterns of behavior of these systems and generalizes them to systems as a whole. The stability, growth or decline of a system will depend upon how well that system is able to adjust or be adjusted by its operating environment.

Niklas Luhmann-Social Systems Theory: distinction between system and environment (inside/outside)/ the social system consists of the communications between people, not the people themselves; people are outside the system/ our thoughts make no difference to society unless they are communicated/ systems communicate about their environments, not with them/ the environment is what the system cannot control/ systems relate to the environment as information and as a resource/ society-encounters-organizations: the three types of social systems.

Autopoiesis: literally means self-creation/ a system capable of reproducing and maintaining itself; it is autopoietic if the whole produces the parts from which it is made.

Society: an autopoietic system whose elements are communicative events reproducing other communicative events/ this communication has content and relationship levels: what is communicated and how/ all communication is both communication and communication about communication/ communication is imaginary/ communication takes place when an observer infers that one possible behaviour has been selected to express one possible message or idea/ the meaning of the message is always inferred by the observer.

Complexity: a system becomes complex when it is impossible to relate every element to every other element in every conceivable way at the same time/ when we can observe it in non-equivalent ways/ when we can discern many distinct subsystems/ complexity is a property of observing.

Image available here

Self-regulated learning

 

[Image: triadic analysis of SRL functioning]

Self-regulated learners are active participants in their own learning. This manifests:

  • metacognitively: they plan, set goals, organize, self-monitor, and self-evaluate
  • motivationally: they report high self-efficacy, self-attributions and intrinsic task interest
  • behaviorally: they select, structure and create environments that optimize learning

Def. Feature 01, Use of Self-Regulated Learning Strategies: SR learners have an awareness of strategic relations between regulatory processes or responses and learning outcomes, and they use these strategies to achieve their academic goals.

Def. Feature 02, Responsiveness to Self-Oriented Feedback: SR learners share a ‘self-oriented’ feedback loop; they monitor the effectiveness of their learning methods or strategies.

Def. Feature 03, Interdependent Motivational Processes: describes how and why students choose to use a particular strategy or response, with motivations ranging from external rewards or punishment to a global sense of self-esteem and self-actualization.

The image above illustrates triadic reciprocality, a proposed view of self-regulated learning that assumes reciprocal causation among three influence processes. According to social cognitive theorists, SR learning is not determined merely by personal processes, but also by environmental and behavioral events, in a reciprocal fashion. According to Bandura, these influences are not necessarily symmetrical.

Determinants of SR Learning

  • personal influences (knowledge, metacognitive processes, goals and affect)
  • behavioral influences (self-observation, self-judgement, self-reaction)
  • environmental influences (enactive outcomes, mastery experiences, modelling, verbal persuasion, direct assistance, literary or other symbolic forms of information such as diagrams, pictures and formulas, structure of the learning context)

 

References

  • Barry J. Zimmerman, 1990. Self-Regulated Learning and Academic Achievement: An Overview. In Educational Psychologist, 25(1), pp. 3-17
  • Barry J. Zimmerman, 1989. A Social Cognitive View of Self-Regulated Academic Learning. In Journal of Educational Psychology, 81(3), pp. 329-339

Image available here

Connectionism

[Image: single-layer neural network]

Connectionism is the name for the computer modelling approach to information processing based on the design or architecture of the brain. Connectionist computer models are based on how computation occurs in neural networks, where neurons represent the basic information processing structures in the brain.

All connectionist models consist of four parts:

  • units: units are to connectionist models what neurons are to a biological neural network, the basic information processing structures. Most connectionist models are computer simulations run on digital computers. Units in such models are virtual objects and are usually represented by circles. A unit receives input, computes an output signal, and then sends that output to other units; the output signal is called an activation value. The purpose of a unit is to compute an output activation.
  • connections: connectionist models are organised in layers of units, usually three (3). A network, however, is not simply an interconnected group of objects but an interconnected group of objects that exchange information with one another. Network connections are conduits. The conduits through which information flows from one member of the network to the next are called synapses or connections and are represented with lines. (In biology, synapses are the gaps between neurons, the fluid-filled spaces through which chemical messengers (neurotransmitters) leave one neuron and enter another.)
  • activations: activation values in connectionist models are analogous to a neuron’s firing rate, or how actively it is sending signals to other neurons. There is a big variability between the least active and the most active neurons, expressed on a scale from 0 to 1.
  • connection weights: the input activations to a unit are not the only values it needs to know before it can compute its output activation. It also needs to know how strongly or weakly an input activation should affect its behaviour. The strength or weakness of a connection is measured by a connection weight. Weights range from -1 to 1. Inhibitory connections reduce a neuron’s level of activity; excitatory connections increase it.

Yet the behaviour of a unit is never determined by an input signal sent via a single connection, however strong or weak that connection might be. It depends on its combined input: the sum of each input activation multiplied by its connection weight. The output activation of a unit represents how active it is, not the strength of its signal.
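A minimal sketch of this combined-input rule follows. The logistic squashing function is an assumed (and common) choice for keeping the output activation between 0 and 1; the source text does not prescribe one:

```python
# One unit's output: sum each input activation times its connection
# weight, then squash the result into the 0-to-1 activation range.
import math

def unit_output(activations, weights):
    """Combined input = sum of input activations x connection weights."""
    net_input = sum(a * w for a, w in zip(activations, weights))
    return 1.0 / (1.0 + math.exp(-net_input))   # logistic squash, 0..1

# Three input units; weights run from -1 (inhibitory) to 1 (excitatory).
print(unit_output([0.9, 0.5, 0.2], [0.8, -0.3, 0.1]))
```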

Connectionist networks consist of units and connections between units, and they have some very interesting features, such as emergent behaviour, which does not reduce to any particular unit (just as liquidity in water does not reduce to any single molecule). Graceful degradation and pattern completion are two ways in which activations are spread through a network. Connectionist networks are not classical computers: their behaviour does not arise from an algorithm; they learn to behave the way they do.

 

References

Robert Stufflebeam, 2006. Connectionism: An Introduction (pages 1-3), in CCSI (Consortium on Cognitive Science Instruction) supported by the Mind Project, full article available here

Image available here

Learning Rules

[Image: Hebb’s rule]

Hebb’s rule: a neuroscience theory whereby an increase in synaptic efficacy arises from the presynaptic cell’s repeated and persistent stimulation of the postsynaptic cell (…) Hebbian theory concerns how neurons might connect themselves to become engrams (= the means by which memories are stored, i.e. biophysical/biochemical changes in the brain in response to external stimuli) (…) The theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells, and provides a biological basis for errorless learning methods for education and memory rehabilitation. In the study of neural networks in cognitive function, it is often regarded as the neuronal basis of unsupervised learning.
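The rule is often summarized as “cells that fire together wire together”: the weight change is proportional to the product of pre- and postsynaptic activity. A minimal sketch, with an illustrative learning rate and activations of my own choosing:

```python
# Hebbian learning: strengthen a connection in proportion to the joint
# activity of the cells on either side of it.
def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Return the new weight after one co-activation event."""
    return weight + learning_rate * pre * post

w = 0.0
for _ in range(5):                 # repeated, persistent co-activation
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)                           # the synapse has strengthened: 0.5
```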

[Image: kernel machine]

Back-propagation: a method used in artificial neural networks [= computing systems inspired by the biological neural networks that constitute animal brains; these systems learn to do tasks by considering examples] to calculate the error contribution of each neuron after a batch of data (in image recognition, multiple images) is processed (…) Backpropagation is sometimes referred to as deep learning, a term used to describe neural networks with more than one hidden layer (layers not dedicated to input or output).
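A minimal sketch of the idea: run a forward pass, propagate each neuron’s error contribution backwards layer by layer, and nudge the weights downhill. The tiny XOR network, its size, and the learning rate are illustrative choices of mine, not anything from the quoted source:

```python
# Backpropagation on a one-hidden-layer network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)     # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: each unit's error contribution, output layer first.
    out_delta = (out - y) * out * (1 - out)
    h_delta = (out_delta @ W2.T) * h * (1 - h)     # error passed back
    # Gradient-descent weight updates.
    W2 -= 0.5 * (h.T @ out_delta)
    b2 -= 0.5 * out_delta.sum(axis=0)
    W1 -= 0.5 * (X.T @ h_delta)
    b1 -= 0.5 * h_delta.sum(axis=0)

print(out.round(2))   # should approach [[0], [1], [1], [0]]
```

Adding more hidden layers to this loop is what turns the network into the “deep” variety the text mentions.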

[Image: Boltzmann machine example]

Boltzmann machine: a type of stochastic recurrent neural network [a stochastic or random process is a mathematical object usually defined as a collection of random variables] (…) They were one of the first neural networks capable of learning internal representations, and are able to represent and (given sufficient time) solve difficult combinatoric problems (…) Boltzmann machines with unconstrained connectivity have not proven useful for practical problems in machine learning or inference, but if the connectivity is properly constrained, the learning can be made efficient enough to be useful for practical problems.
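The “stochastic” part is the key: each binary unit switches on with a probability given by the logistic function of its weighted input, so the network wanders through states rather than settling deterministically. A minimal sketch of that update rule, with invented weights and temperature:

```python
# One Gibbs sweep over a small Boltzmann machine's units.
import numpy as np

rng = np.random.default_rng(1)
n_units = 5
W = rng.normal(0, 0.5, (n_units, n_units))
W = (W + W.T) / 2              # Boltzmann machines use symmetric weights
np.fill_diagonal(W, 0.0)       # and no self-connections
state = rng.integers(0, 2, n_units).astype(float)

def gibbs_step(state, W, temperature=1.0):
    """Resample each unit given the current state of all the others."""
    for i in rng.permutation(len(state)):
        net_input = W[i] @ state
        p_on = 1.0 / (1.0 + np.exp(-net_input / temperature))
        state[i] = 1.0 if rng.random() < p_on else 0.0
    return state

for _ in range(100):           # let the network explore its state space
    state = gibbs_step(state, W)
print(state)
```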

 


Embodied Action & Enaction


  • embodied action

embodied: cognition depends upon the kinds of experience that come from having a body with various sensorimotor capacities, and these individual sensorimotor capacities are themselves embedded in a more encompassing biological, psychological and cultural context/ action: sensory and motor processes, perception and action, are fundamentally inseparable in lived cognition.

  • enaction

it consists of two points: a. perception consists in perceptually guided action (how the perceiver can guide his actions in his local situation) and b. cognitive structures emerge from the recurrent sensorimotor patterns that enable action to be perceptually guided (since the situations an individual finds himself in constantly change, the reference point for understanding perception is no longer a pregiven world but the sensorimotor structure of the perceiver, i.e. the way in which the nervous system links sensory and motor surfaces). The overall concern of the enactive approach to perception is to determine the common principles or lawful linkages between sensory and motor systems.

Merleau-Ponty:

The properties of the object and the intentions of the subject . . . are not only intermingled; they also constitute a new whole (…) Since all the movements of the organism are always conditioned by external influences, one can, if one wishes, readily treat behavior as an effect of the milieu. But in the same way, since all the stimulations which the organism receives have in turn been possible only by its preceding movements which have culminated in exposing the receptor organ to external influences, one could also say that behavior is the first cause of all the stimulations.

Piaget:

The laws of cognitive development, even at the sensorimotor stage, are an assimilation of and an accommodation to that pregiven world.

One of the most fundamental cognitive activities that all organisms perform is categorization. By this means the uniqueness of each experience is transformed into the more limited set of learned, meaningful categories to which humans and other organisms respond (…) In the enactive view, although mind and world arise together in enaction, their manner of arising in any particular situation is not arbitrary (…) The basic level of categorization appears to be the point at which cognition and environment become simultaneously enacted.

Johnson:

kinesthetic image schemas: for example, the container schema, the part-whole schema, and the source-path-goal schema

 

References

Francisco J. Varela, Evan Thompson and Eleanor Rosch, 1993. The embodied mind: Cognitive Science and Human Experience, MIT Press, pp. 172-180

Image available here