An interesting point made by Melissa Emler here. Like Stephen Downes (I found this post through his newsletter), I am thinking more about online communities, and it strikes me that the event may be the trickiest of the three to organize, in the sense that it is synchronous and therefore more vulnerable. I am also wondering whether community is equal to the other two or a goal in itself: Emler says that without a community there is no sense-making. True, but a strong community is also needed for further learning; it is perhaps the only thing that not only instigates motivation towards learning but also feeds learning with more content.
Just spent the last couple of hours listening to Prof. Robert Sapolsky of Stanford University. This was an 11-year-old lecture on emergence and I enjoyed every single argument and every single story he told. I can’t believe how lucky we are to have access to this kind of input at the click of a button. Interestingly (and also ironically) enough, he concludes his lecture discussing bottom-up emergent phenomena: people not needing experts or blueprints to tell them how to proceed, just randomness and simple rules that in high quantity produce quality. This is around the time the first xMOOCs showed up and connectivist theory was taking off. I can’t believe how related the two are.
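Sapolsky’s “simple rules, in quantity, produce quality” point can be made concrete with a toy example of my own (not from the lecture): a one-dimensional cellular automaton, where every cell follows the same trivial three-cell rule, yet the global pattern that unfolds is far richer than anything the rule itself hints at.

```python
# Toy illustration of emergence: Wolfram's Rule 30 cellular automaton.
# Each cell looks only at itself and its two neighbours, yet the global
# pattern produced by many cells applying the rule is famously complex.

RULE = 30  # the rule number encodes the next state for each 3-cell neighbourhood

def step(cells):
    """Apply the rule once to a row of 0/1 cells (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=31, steps=15):
    """Start from a single 'on' cell and collect each generation."""
    row = [0] * width
    row[width // 2] = 1  # maximally simple initial condition
    history = [row]
    for _ in range(steps):
        row = step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Nothing in the local rule mentions triangles or asymmetry, but the printed history shows both: quality out of quantity, with no expert and no blueprint.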
Visceral (appearance): the automatic, unconscious reaction we have to experiences (…) System 1 thinking: these reactions are fast, immediate without reflection (…) Real world imagery and photography may create the right first impressions for such learning (…) The Gestalt Law of Proximity is often quoted in interface design and states that items close to each other are perceived as groups (…) The Gestalt Law of Similarity states that items similar to each other will be grouped by the user.
Behavioural (performance): This is about emotion and feelings around actual use or usability (…) There is a massive amount of good practice in interface design around usability. It is vital that the interface is as simple, consistent, predictable and easy to use as possible, as time and cognitive effort spent on the interface detracts from the cognitive effort needed to learn (…) Without challenge, difficulty and cognitive effort, you will not have the deep processing necessary for learnt knowledge, skills and behaviour to stick (…) Inducing emotion may be ideal when you want attitudinal shift in diversity, equality and other belief-shift or self-awareness training but can be dangerous in non-affective training, where it can induce the illusion of learning
Reflective (memories and experience): System 2 thinking, the rational, reasoning side of the brain (…) This is complex and involves much more than just getting a score on the assessment, although that can be an important feeling of success (…) Challenging cognitive effort can propel the learner forward and make them feel as though they really are making progress. Feedback is also a powerful accelerator of learning, so personalising learning and feedback can move things forward making the learner feel good about themselves (…) It is easy to forget that one learns for a reason, ultimately to apply that knowledge, so the transfer through to action really does matter.
Donald Clark, Emotion in Learning Experience Design – Norman’s 3 facets: Visceral, Behavioural and Reflective…, full article available here
Computer-mediated brain-to-brain interaction (mice)
A brain-to-brain interface records the signals in one person’s brain, and then sends these signals through a computer in order to transmit them into the brain of another person. This process allows the second person to “read” the mind of the first or, in other words, have their brain fire in a similar pattern to the original person
In 2013, scientists tested the method on mice; they surgically implanted recording wires that measured brain activity in the motor areas of the brain
Brain-to-brain interaction using an electroencephalography cap and transcranial magnetic stimulation (humans)
the human device was non-invasive, meaning surgery wasn’t required. This device transferred the movement signals from the encoder straight to the motor area of the brain of the decoder, without using a computer (…) Then the scientists used transcranial magnetic stimulation (TMS) on the decoding person’s brain, sending little magnetic pulses through their skull to activate a specific region of their brain. This caused the second person to take the action that the first person meant to (…) The decoder wasn’t consciously aware of the signal they received (…) however, only movement was transferred, not thoughts
Brain-to-brain interaction using an electroencephalography cap, transcranial magnetic stimulation and LED lights (humans)
The same researchers designed a game with pairs of participants, similar to 20 Questions. In the game, the encoder was given an object that the decoder wasn’t familiar with. The goal was for the decoder to successfully guess the object through a series of yes-or-no questions. But unlike in 20 Questions, the encoder responded by looking at one of two flashing LED lights, one signifying yes and the other no. The visual response generated in the encoder’s brain was transmitted to the visual areas of the decoder’s brain (…) The decoders were successfully able to guess the object in 72 percent of the games, compared to an 18 percent success rate without the BBI (…) this was the largest BBI study, and also the first to include female participants.
Multi-person brain-to-brain interfaces / collective intelligence
To do this, researchers drew on their past work with brain-to-brain interfaces. The Senders wore electroencephalography (EEG) caps, which allowed the researchers to measure brain activity via electrical signals, and watched a Tetris-like game with a falling shape that needed to be rotated to fit into a row at the bottom of the screen. In another room, the Receiver sat with a transcranial magnetic stimulation (TMS) apparatus positioned near the visual cortex. The Receiver could only see the falling shape, not the gap that it needed to fill, so their decision to rotate the block was not based on the gap that needed to be filled. If a Sender thought the Receiver should rotate the shape, they would look at a light flashing at 17 hertz (Hz) for Yes. Otherwise, they would look at a light flashing at 15 Hz for No. Based on the frequency that was more apparent in the Senders’ EEG data, the Receiver’s TMS apparatus would stimulate their visual cortex above or below a threshold, signaling the Receiver to make the choice of whether to rotate. With this experiment, the Receiver was correct 81 percent of the time.
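At its computational core, the 17 Hz / 15 Hz step is a frequency-discrimination problem: decide which flicker frequency dominates a short stretch of EEG. Below is a minimal sketch of that idea only; the sampling rate, signal model and noise level are my own made-up parameters, and this is not the researchers’ actual pipeline.

```python
# Minimal sketch (not the researchers' pipeline) of frequency discrimination:
# decide whether 17 Hz ("Yes") or 15 Hz ("No") dominates an EEG-like signal,
# by comparing spectral power at the two candidate frequencies.
import math
import random

FS = 256          # assumed sampling rate in Hz (invented for this sketch)
DURATION = 2.0    # seconds of signal analysed per decision

def power_at(signal, freq, fs=FS):
    """Power of `signal` at `freq` via a single-frequency DFT projection."""
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return re * re + im * im

def decode(signal):
    """Return 'Yes' if 17 Hz power dominates, else 'No'."""
    return "Yes" if power_at(signal, 17) > power_at(signal, 15) else "No"

def fake_eeg(attended_freq, amplitude=1.0, noise=0.5):
    """Synthesize a noisy sinusoid standing in for a real EEG trace."""
    n = int(FS * DURATION)
    return [
        amplitude * math.sin(2 * math.pi * attended_freq * i / FS)
        + random.gauss(0, noise)
        for i in range(n)
    ]

if __name__ == "__main__":
    random.seed(0)
    print(decode(fake_eeg(17)))  # Yes
    print(decode(fake_eeg(15)))  # No
```

Real steady-state visually evoked potentials are far noisier than this synthetic trace, but the decision rule, compare power at the two flicker frequencies and pick the larger, is the same in spirit.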
There’s a mind-boggling number of possible applications—just imagine projecting ideas in an educational environment, directly sharing memories with others, replacing the need for phones or the Internet altogether, or even, in the more near-term, using it to teach people new motor skills during rehabilitation.
- Lily Toomey, With new technology, mind control is no longer science-fiction, in Massive Science
- Jordan Harrod, Scientists sent thoughts from brain to brain with nothing in between, in Massive Science
Under what conditions do these technology tools lead to the most effective learning experiences? Do they serve as a distraction if not deliberately integrated into learning activities? When these devices are incorporated deliberately into learning activities, how are students using them to make sense of ideas and apply them in practice? (…) It is much more complicated and difficult to develop an environment that can facilitate learning in complex conceptual domains (…) while adaptive systems have taken some forward leaps, there is still some way to go before these environments can cope with the significant diversity in how individual students make sense of complex ideas (…) Depending on how students structure related ideas in their mind, that structure will limit the way in which new information can be incorporated (…) The problem with providing personalised instruction in a digital environment is therefore not just about what the overall level of prior knowledge is but how that knowledge is structured in students’ minds (…) Technologies that are and will continue to impact on education need to be built on a foundation that includes a deep understanding of how students learn (…) teachers are constantly navigating a decision set that is practically infinite (…) The question becomes one of when and how technologies can be most effectively used, for what, and understanding what implications this has for the teacher-student relationship (…) there are two central narratives about what learning is: the first, acquisition, is vital but the second, participation, is even more powerful for learning (…)
There are several key areas where the science of learning can contribute:
- Informing the development of and evaluating new technologies: research examining the effectiveness of the tools lags well behind the spread of their use (…) there is a clear need to draw on principles of quality student learning to determine how best to effectively combine the expertise of teachers and power of machines
- Helping students to work with technologies: it is critical to determine how best to support students to do so in the absence of a teacher to help with this
- Determining how technologies can best facilitate teaching and learning: one way the science of learning will assist in understanding the changing student-teacher dynamic in education is through its implications for broader policy and practice (…) The increased use of these technologies in classrooms must be driven by what is known about quality learning and not by financial or political motives.
Full article available here
Abdullah, 2011: separation of design and building could be the philosophical difference between thinkers (designers) and doers (builders)
Harriss & Widder, 2014: Design build projects exist between the two tectonic plates of learning in academia and practice
Vlahos, 2000: Conventional studio projects present a disconnect from the needs of people and places and the understanding of different cultures. The outcomes of the theoretical studio projects are strongly developed, controlled, formal solutions with little understanding of the architectural intervention in communities. Students engage predominantly with theoretical, fictional projects.
Nepveux, 2010: Being involved physically in building allows students to reconcile their drawings with real structures they can build, weld, wire and plumb
Delport, 2016: Design-build projects have as outcome a physical product made through a process that can vary greatly in scope, focus and intent. They bring in tacit knowledge to the curriculum. The object contributes to social change and improving the lives of others
Van der Wath, 2013: it is an oscillation between the abstract and the concrete that allows students to develop the intellectual agility to tackle the complexities of architectural innovation and experimentation that they will use in professional practice
Brown, 2014: Live Projects’ greatest opportunity is not that it is a place to reflect on one’s own learning but, that it is a place to share that learning and reflection with others (Engestrom: a collective activity system is driven by a deeply communal motive)
Erdman, 2002: hands-on built projects in attempting to close the gap between designing and building replace the reflective process of design with the active process of building (-) they resist theorizing and critical discourse (-)
Chiles & Till, 2004: balance between practice and education encourages students to position themselves politically (+) prevarication is also not possible as the luxury of long-term studio development is removed (+)
Christenson & Srivastava, 2005: Focus on completion within a specific time frame overrides the value of process
Foot, 2012: where the completion and the focus on the end product are taken out of the equation, the notion of reflection, open-endedness and non-linearity allows students to discover a variety of possible solutions
Hermie Elizabeth Delport, 2016, Towards Design-Build Architectural Education and Practice: Exploring Lessons from Educational Design-Build Projects, PhD thesis supervised by Prof Johannes Cronjé, Faculty of Informatics and Design, Cape Peninsula University of Technology
- His early work had done more than that of any other living thinker to unsettle the traditional understanding of how we acquire knowledge of what’s real
- In a series of controversial books in the 1970s and 1980s, he argued that scientific facts should instead be seen as a product of scientific inquiry. Facts, Latour said, were “networked”; they stood or fell not on the strength of their inherent veracity but on the strength of the institutions and practices that produced them and made them intelligible. If this network broke down, the facts would go with them.
- Founder of the new academic discipline of science and technology studies
- The mid-1990s were the years of the so-called science wars, a series of heated public debates between “realists,” who held that facts were objective and free-standing, and “social constructionists,” like Latour. If scientific knowledge was socially produced — and thus partial, fallible, contingent — how could that not weaken its claims on reality? Lately, however, these debates have begun to look more like a prelude to the post-truth era in which society as a whole is presently condemned to live.
- By showing that scientific facts are the product of all-too-human procedures, these critics charge, Latour — whether he intended to or not — gave license to a pernicious anything-goes relativism that cynical conservatives were only too happy to appropriate for their own ends (…) But Latour believes that if the climate skeptics and other junk scientists have made anything clear, it’s that the traditional image of facts was never sustainable to begin with.
- With the rise of alternative facts, it has become clear that whether or not a statement is believed depends far less on its veracity than on the conditions of its “construction” — that is, who is making it, to whom it’s being addressed and from which institutions it emerges and is made visible.
- In Abidjan, Latour began to wonder what it would look like to study scientific knowledge not as a cognitive process but as an embodied cultural practice enabled by instruments, machinery and specific historical conditions.
- Day-to-day research — what he termed science in the making — appeared not so much as a stepwise progression toward rational truth as a disorderly mass of stray observations, inconclusive results and fledgling explanations (…) During the process of arguing over uncertain data, scientists foregrounded the reality that they were, in some essential sense, always speaking for the facts; and yet, as soon as their propositions were turned into indisputable statements and peer-reviewed papers — what Latour called ready-made science — they claimed that such facts had always spoken for themselves.
- In the 1980s, Latour helped to develop and advocate for a new approach to sociological research called Actor-Network Theory (…) Latour had seen how an apparently weak and isolated item — a scientific instrument, a scrap of paper, a photograph, a bacterial culture — could acquire enormous power because of the complicated network of other items, known as actors, that were mobilized around it. The more socially “networked” a fact was (the more people and things involved in its production), the more effectively it could refute its less-plausible alternatives.
- Latour believes that if scientists were transparent about how science really functions — as a process in which people, politics, institutions, peer review and so forth all play their parts — they would be in a stronger position to convince people of their claims
- Whether or not they are conscious of this epistemological shift, it is becoming increasingly common to hear scientists characterize their discipline as a “social enterprise” and to point to the strength of their scientific track record, their labors of consensus building and the credible reputations of their researchers.
Excerpts from: Bruno Latour, the Post-Truth Philosopher, Mounts a Defense of Science, by Ava Kofman, published in The New York Times; full article available here
Image available here
The network is a network of people: networked learning aims to understand social learning processes by asking how people develop and maintain a ‘web’ of social relations used for their learning and development (de Laat)
Networked learning does not necessarily involve ICT, though in specific cases it may make use of technology. What makes learning networked is the connection to and engagement with other people across different social positions inside and outside of a given institution. The network is supportive of a person’s learning through the access it provides to other people’s ideas and ways of participating in practice as well as of course through the opportunity to discuss these ideas and ways of participating and to potentially develop nuanced, common perspectives (Carvalho and Goodyear)
Networked learning may utilize ICT but it might also be supported by other means such as physical artefacts or artistic stimulation of senses and feelings, while connections may also be drawn spontaneously by the learners themselves (Bober & Hynes)
The network is a network of situations or contexts: connections between the diverse contexts in which the learners participate as significant for understanding learning beyond online learning spaces, and, indeed, within them as well. This is the sense in which the network, understood as a network of situations, supports learning: by offering tacit knowledge, perspectives and ways of acting from known situations for re-situated use in new ones. ‘Networked learning’ on this understanding is the learning arising from the connections drawn between situations and from the resituated use in new situations of knowledge, perspectives and ways of acting from known ones (Dohn)
The ‘network’ is one of ICT infrastructure, enabling connections across space and time: The support for learning provided by the network is one of infrastructure, i.e. the ease of saving, transporting and retrieving content for future use. Learning, it would seem, will be ‘networked’ whenever it is ICT-mediated, by that very fact; perhaps with the proviso that the situations of learning should indeed be separated in space and/or time so that the infrastructure (the ‘network’) is actually brought into play. This proviso would differentiate the field of networked learning somewhat from the field of Computer Supported Collaborative Learning (CSCL), where many studies concern ICT-facilitated group work between physically co-located students. The research field of Networked Learning is characterized, not only by focusing on ‘networks’, but also by taking a certain approach to learning, focusing critically on aspects of democratization and empowerment (Czerniewicz and Lee)
The ‘network’ is one of actants: consisting of both human and non-human agents in symmetrical relationship to each other. It is a systemic approach to learning, where individual learners’ interaction and learning may be analyzed as a result of socio-material entanglement with objects and other people. The network supports learning in the sense that any learning is in fact the result of concrete socio-material entanglement of physical, virtual, and human actants (Wright and Parchoma; Jones)
Bonderup Dohn, N., Sime, J-A., Cranmer, S., Ryberg, T., & de Laat, M. (2018). Reflections and challenges in Networked Learning. In N. Bonderup Dohn, S. Cranmer, J-A. Sime, M. de Laat, & T. Ryberg (Eds.), Networked Learning – reflections and challenges (pp. 187-212). Cham, Switzerland: Springer (Research in Networked Learning series).
Image available here
Critical Pedagogy is an approach to teaching and learning predicated on fostering agency and empowering learners (implicitly and explicitly critiquing oppressive power structures). The word “critical” in Critical Pedagogy functions in several registers:
- Critical, as in mission-critical, essential;
- Critical, as in literary criticism and critique, providing definitions and interpretation;
- Critical, as in reflective and nuanced thinking about a subject;
- Critical, as in criticizing institutional, corporate, or societal impediments to learning;
- Critical Pedagogy, as a disciplinary approach, which inflects (and is inflected by) each of these other meanings.
Our work, the writers say, has wondered at the extent to which Critical Pedagogy translates into digital space.
In short, Critical Digital Pedagogy:
- centers its practice on community and collaboration;
- must remain open to diverse, international voices, and thus requires invention to re-imagine the ways that communication and collaboration happen across cultural and political boundaries;
- will not, cannot, be defined by a single voice but must gather together a cacophony of voices;
- must have use and application outside traditional institutions of education.
Preface by Audrey Watters. Book available for online reading here
The capability approach to a person’s advantage is concerned with evaluating it in terms of his or her actual ability to achieve various valuable functionings* as a part of living
It differs from other approaches using other informational focuses, for example:
- personal utility
- absolute or relative opulence
- assessments of negative freedoms
- comparisons of means of freedom
- comparisons of resource holdings as a basis of just equality
The capability approach is concerned primarily with the identification of value-objects, and sees the evaluative space in terms of functionings and capabilities to function (…) Choices have to be faced in the delineation of the relevant functionings. The format always permits additional ‘achievements’ to be defined and included (…) There is no escape from the problem of evaluation in selecting a class of functionings in the description and appraisal of capabilities (…) (1) What are the objects of value? (2) How valuable are the respective objects? The identification of the objects of value is substantively the primary exercise which makes it possible to pursue the second question (…) The identification of the objects of value specifies what may be called an evaluative space (…) The selection of the evaluative space has a good deal of cutting power on its own, both because of what it includes as potentially valuable and because of what it excludes (…) The freedom to lead different types of life is reflected in the person’s capability set. The capability of a person depends on a variety of factors, including personal characteristics and social arrangements. A full accounting of individual freedom must, of course, go beyond the capabilities of personal living and pay attention to the person’s other objectives, but human capabilities constitute an important part of individual freedom (…) We can make a fourfold classification of points of evaluative interest in assessing human advantage, based on two different distinctions. One distinction is between (1.1) the promotion of the person’s well-being, and (1.2) the pursuit of the person’s overall agency goals (…) The second distinction is between (2.1) achievement, and (2.2) the freedom to achieve (…) The assessment of each of these four types of benefit involves an evaluative exercise, but they are not the same evaluative exercise (…) The four categories of intrapersonal assessment and interpersonal comparison that follow from these two distinctions (namely, well-being achievement, well-being freedom, agency achievement, and agency freedom) are related to each other, but are not identical
*functionings represent parts of the state of a person–in particular the various things that he or she manages to do or be in leading a life. The capability of a person reflects the alternative combinations of functionings the person can achieve, and from which he or she can choose one collection
Excerpts from Amartya Sen’s Capability and Well‐Being, full paper available here
Image available here
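Sen’s distinctions lend themselves to a small formal sketch. The toy code below is my own illustration, not Sen’s: the functioning vectors, their dimensions and the valuation function are all invented. It only shows the skeleton: functionings as vectors of ‘beings and doings’, a capability set as the alternative combinations a person can achieve, and the difference between well-being freedom (the best the set makes possible) and well-being achievement (the combination actually chosen).

```python
# A deliberately crude sketch (mine, not Sen's) of the formal skeleton
# of the capability approach.

# Hypothetical functioning vectors: (nourished, educated, mobile), 0-1 scale.
capability_set = {
    (1.0, 0.2, 0.8),
    (0.6, 0.9, 0.5),
    (0.9, 0.7, 0.7),
}

def well_being(functioning):
    """One possible valuation of a functioning vector (itself an evaluative choice)."""
    return sum(functioning) / len(functioning)

# Well-being *freedom* looks at the best the capability set makes possible;
# well-being *achievement* looks at the combination actually chosen.
well_being_freedom = max(well_being(f) for f in capability_set)
chosen = (0.6, 0.9, 0.5)  # the person may value education over the 'best' score
well_being_achievement = well_being(chosen)

print(round(well_being_freedom, 2), round(well_being_achievement, 2))  # 0.77 0.67
```

The gap between the two numbers is the point: freedom and achievement are distinct evaluative exercises, and choosing `well_being` at all is already the “selection of the evaluative space” Sen describes.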
PLACES TO INTERVENE IN A SYSTEM (in increasing order of effectiveness)
12. Constants, parameters, numbers (such as subsidies, taxes, standards): even though they rarely change behavior
11. The sizes of buffers and other stabilizing stocks, relative to their flows: they are usually physical entities, not easy to change
10. The structure of material stocks and flows (such as transport networks, population age structures): the only way to fix a system is to rebuild it, but physical rebuilding is the slowest and most expensive kind of change
9. The lengths of delays, relative to the rate of system change: a system just can’t respond to short-term changes when it has long-term delays. A delay in feedback is critical relative to rates of change in the stocks that the feedback loop is trying to control; it’s easier to slow down the change rate
8. The strength of negative feedback loops, relative to the impacts they are trying to correct against: one of the biggest mistakes is that we drastically narrow the range of conditions over which the system can survive, the strength of a negative loop is important relative to the impact it is designed to correct (self-correcting)
7. The gain around driving positive feedback loops: a system with an unchecked positive loop ultimately will destroy itself (self-reinforcing). reducing the gain around a positive loop -slowing the growth- is usually a more powerful leverage point
6. The structure of information flows (who does and does not have access to information): missing feedback is one of the most common causes of system malfunction. adding or restoring information can be a powerful intervention, usually much easier and cheaper than rebuilding physical infrastructure
5. The rules of the system (such as incentives, punishments, constraints): as we try to imagine restructured rules like that and what our behavior would be under them, we come to understand the power of rules. power over the rules is real power
4. The power to add, change, evolve, or self-organize system structure: Self-organization means changing any aspect of a system lower on this list — adding completely new physical structures, such as brains or wings or computers — adding new negative or positive loops, or new rules. the ability to self-organize is the strongest form of system resilience.
3. The goals of the system: the goal of a system is a leverage point superior to the self-organizing ability of a system. even people within systems don’t often recognize what whole-system goal they are serving
2. The mindset or paradigm out of which the system — its goals, structure, rules, delays, parameters — arises: the shared idea in the minds of society, the great big unstated assumptions — unstated because unnecessary to state; everyone already knows them — constitute that society’s paradigm, or deepest set of beliefs about how the world works (Kuhn: keep pointing at the anomalies and failures in the old paradigm, you keep coming yourself, and loudly and with assurance from the new one, you insert people with the new paradigm in places of public visibility and power. You don’t waste time with reactionaries; rather you work with active change agents and with the vast middle ground of people who are open-minded.)
1. The power to transcend paradigms: that is to keep oneself unattached in the arena of paradigms, to stay flexible, to realize that NO paradigm is “true,” that every one, including the one that sweetly shapes your own worldview, is a tremendously limited understanding of an immense and amazing universe that is far beyond human comprehension
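Points 8 and 7 in the list above (balancing and reinforcing loops) are easy to see in a toy stock-and-flow simulation. This is my own illustration, not a model from Meadows, and the numbers are arbitrary:

```python
# Toy stock-and-flow illustration (mine, not Meadows') of leverage points 8 and 7:
# a negative (balancing) loop pulls a stock toward a goal, while a positive
# (reinforcing) loop compounds the stock; the loop gain sets how fast.

def negative_loop(stock, goal, gain, steps):
    """Each step corrects a fraction `gain` of the gap between stock and goal."""
    for _ in range(steps):
        stock += gain * (goal - stock)
    return stock

def positive_loop(stock, gain, steps):
    """Each step adds a fraction `gain` of the stock itself: compounding growth."""
    for _ in range(steps):
        stock += gain * stock
    return stock

if __name__ == "__main__":
    # The balancing loop converges on the goal wherever it starts...
    print(round(negative_loop(100.0, goal=20.0, gain=0.5, steps=20), 2))  # 20.0
    # ...while the reinforcing loop, left unchecked, grows without bound.
    print(round(positive_loop(1.0, gain=0.5, steps=20), 2))  # 3325.26
```

Changing `gain` is exactly the kind of intervention points 8 and 7 describe: it sets how strongly the balancing loop corrects, and how explosively the reinforcing loop compounds.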
Constructionism – the N word as opposed to the V word – shares constructivism’s connotation of learning as “building knowledge structures” irrespective of the circumstances of the learning. It then adds the idea that this happens especially felicitously in a context where the learner is consciously engaged in constructing a public entity, whether it’s a sand castle on the beach or a theory of the universe (Papert, 1991)
Seymour, says Idit Harel in her obituary of him, coined the term to advance a new theory of learning, claiming that children learn best when they:
- use tech-empowered learning tools and computational environments,
- take active roles of designers and builders; and
- do it in a social setting, with helpful mentors and coaches, or over networks.
Influencers: John Dewey, Maria Montessori, Paulo Freire, and Jean Piaget, with whom he worked from 1958 to 1963 in Switzerland.
He was also responsible for much of the academic work behind the Logo programming language. He created the Logo Turtle, which was a physical turtle, and later became a virtual turtle which could be manipulated on screen by using the simple Logo programming language. MIT’s Epistemology and Learning Group, which Papert founded, has created many advanced technologies for learners including: robotics, system dynamics, multi-agent modeling, and digital fabrication. In 1985, he began a long and productive collaboration with the LEGO company, one of the first and largest corporate sponsors of the Media Lab. In the late 1990s, Papert moved to Maine and continued his work with young people there, establishing the Learning Barn and the Seymour Papert Institute in 1999
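Papert’s turtle survives almost unchanged in modern languages (Python even ships a `turtle` module). As an illustration of the underlying geometry, not of Papert’s actual Logo implementation, here is a graphics-free sketch of the turtle’s state: a position plus a heading, updated by FORWARD and RIGHT, running the classic first Logo program REPEAT 4 [FORWARD 100 RIGHT 90].

```python
# A minimal, graphics-free sketch of Logo's turtle geometry (an illustration,
# not Papert's Logo): the turtle is just a position and a heading, and the
# FORWARD / RIGHT commands update them exactly as they would on screen.
import math

class Turtle:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 90.0  # degrees; the turtle starts facing 'up'

    def forward(self, distance):
        """Move `distance` units in the direction of the current heading."""
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)

    def right(self, angle):
        """Turn clockwise by `angle` degrees."""
        self.heading = (self.heading - angle) % 360

# The classic first Logo program: REPEAT 4 [FORWARD 100 RIGHT 90]
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)

# Four equal sides and four right turns bring the turtle back home.
print(abs(t.x) < 1e-9 and abs(t.y) < 1e-9)  # True
```

The pedagogical trick Papert prized is visible even here: the child debugs the square by “playing turtle”, walking the path with their own body, because the turtle’s state is exactly the walker’s state.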
- Papert, S., Harel, I., 1991. Situating Constructionism, Ablex Publishing Corporation, 1st chapter retrieved here: http://edutechwiki.unige.ch/en/Constructionism (last accessed 09.08.2018)
- Harel, I., 2016. A Glimpse Into the Playful World of Seymour Papert. (obituary), IN EdSurge, 3rd August 2016, full text available here
Image available here