As we have seen above, the self-defining nature of theory is a means of maintaining authority, in the one instance over society, in the other over students. Teachers set the problems and provide the means to solve them. Because theories are developed in artificial environments, their behaviour becomes entirely predictable. As Argyris and Schön note:
…techniques make self-fulfilling prophecies for the professions. These techniques tend to be used to achieve a self-reinforcing system that maintains constancy… The artificial environments are designed to enable the professional to realise objectives as he sees them: control the task, render the behaviour of others predictable, and thereby control it.
The same level of control is equally apparent in the profession of the teacher. It may be seen therefore that self-defining rational theory also leads to self-fulfilling theory. Nietzsche is withering in his critique of the rational mind’s pursuit of the truth and its apparent limitations:
If somebody hides a thing behind a bush, seeks it out and finds it in the self-same place, then there is not much to boast of respecting this seeking and finding; thus, however, matters stand with the pursuit of seeking and finding ‘truth’ within the realm of reason.
What this points to is the danger of the closed circuit. Theory guides practice, which in turn becomes the basis for theory; at best this is a refining process in pursuit of a perfected theory defining a universal truth; at worst it becomes a dog chasing its tail. This system generates a mirroring effect, whereby the precepts of the theory are reflected in the actions of practice.

Jeremy Till, Contingent Theory: The Educator as Ironist, 1996. Full article available here (highlighting is mine)
I do like exploring the subtle changes in the nuanced interpretations of different terms and how they are used in everyday life. It is sometimes their misinterpretation that leads to misunderstandings and heated debates. So a good definition is always welcome. I am not convinced by the term digital learning, though; it is a term much more connected to computer skills and less to connected learning.
- online: focuses on the connectivity of the learning; it implies a physical distance; less desirable for those who prefer social interaction or who have limited access to a stable internet connection
- virtual: the term suggests that the level of engagement required will be similar to, but different from, a physical experience; however, virtual is linked to the inauthentic and is therefore less desirable for those who want to experience a ‘real’ educational event
- digital: inextricably linked to data storage but has evolved into meaning ‘related to the use of computer technology’; digital can also be offline; digital learning doesn’t carry the negative connotations of the other two terms
- blended: the more prevalent term of the two; it implies different modes of delivery and/or student engagement; for others, it is a mix of onsite and online (digital) learning activities
- hybrid: the use of this term implies that students have a greater degree of choice in how they engage with their learning; it implies agency
- distance: a term in use before the widespread proliferation of digital approaches to learning (courses were taken through correspondence); refers to communication style
- remote: the term is used to avoid any reference to mode of communication and to limit the discourse to physical distance
Very happy to share the news with you! Last fall, we started a research programme at academic scale in collaboration with A10 New European Architecture Magazine and Architectuur Lokaal, with the aim of investigating architectural competitions in seven European countries. This year an equal number of academic institutions took part, each working on a separate module. We were supposed to meet in June in Vienna, but the pandemic cost us that trip and the opportunity to share our research findings with each other. This is why we set up a blog that currently accommodates the key points of our research, two very recent case studies, numerous interviews with the people involved and a commentary on Greek competition culture.
I would like to sincerely thank all the interviewees for their valuable contributions, Tzina Sotiropoulou and Antigoni Katsakou for participating in our last live session and Prelab for accommodating all of our live sessions. Most of all, I would like to thank the two students who took up this endeavor, Katerina P. Moustaka and Stelina Portesi. I am obliged to them for their hard work despite the challenging times.
Greek competition culture is still developing. Our research is only a small fraction of a wider discourse that is inextricably connected to the future of the profession as well. We are therefore open to your comments and feedback.
Patrick Schroder, Promoting a Just Transition to an Inclusive CE, Chatham House (Report): The ‘just’ transition concept is not new; it comes from climate change and climate justice movements (…) many social and political issues have been neglected in planning for the CE transition (…) a just transition framework for the CE can identify opportunities that reduce waste and stimulate product innovation (…) low- and middle-income countries that rely heavily on ‘linear’ sectors and the export of these commodities to higher-income countries are likely to be negatively affected by the shift to circularity (…) there is a need for new international cooperation programmes and a global mechanism to mobilise dedicated support funds for countries in need (…) COVID-19 has shown that global emergencies can fast-forward processes that otherwise may take years (…) three points: a. CE is necessary for both long-term resource security and short-term supplies of important materials, b. there is a need to improve the working conditions of the informal CE (waste pickers etc.) and c. global supply chains will be radically changed (…)
Cindy Isenhour, Department of Anthropology, Climate Change Institute, University of Maine | CRITIQUE I: CE cannot be just about efficiency and technological improvement alone within the confines of a global economic system (…) in fact, the success of CE has been hindered in part by carbon leakage to developing countries, the off-shoring of waste, or other means of shifting environmental burdens and market externalities (…) for some critics, high levels of total material throughput, emissions and consumption have cannibalised a great deal of the gains (…) evidence that CE has helped us to decouple growth from environmental degradation is still sadly hard to come by (…) critics claim that CE success is not solely dependent on regenerative design (new packaging materials, industrial symbiosis, nutrient cycling technologies or recyclable polymers), but that it is also about a fundamental shift in global societal organisation and cultural frameworks (…) these have the power to renegotiate the meaning of ownership, property and the economic value of materials, and how we measure the success of our economic system (…) Can CE be capable of cultural change? | CRITIQUE II: CE scholarship is focused on rational choice theory and ecological modernisation and based on cost-benefit analyses (…) however, economic decision-making is highly contextual and social (…) consumers don’t want to alienate themselves from their peer groups and their neighbours (…) there is a necessity for coordinated approaches and collective action between social actors, so as to build trust and the possibility of collaboration (…) How do we implement a CE that recognises the sociality of the economy?
| CRITIQUE III: CE represents a new commodity frontier (…) sustainability programming can often capture the resources for those segments of society that are already more fortunate leading to economic exclusion (…) How can we broaden participation in CE as well as its conceptualisation and operationalisation to ensure equity and justice?
Computer-mediated brain-to-brain interaction (mice)
A brain-to-brain interface records the signals in one person’s brain, and then sends these signals through a computer in order to transmit them into the brain of another person. This process allows the second person to “read” the mind of the first or, in other words, have their brain fire in a similar pattern to the original person’s.
In 2013, scientists tested the method on mice; they surgically implanted recording wires that measured brain activity in the motor areas of the brain.
Brain-to-brain interaction using an electroencephalography cap and transcranial magnetic stimulation (humans)
The human device was non-invasive, meaning surgery wasn’t required. This device transferred the movement signals from the encoder straight to the motor area of the brain of the decoder, without using a computer (…) Then the scientists used transcranial magnetic stimulation (TMS) on the decoding person’s brain, sending little magnetic pulses through their skull to activate a specific region of their brain. This caused the second person to take the action that the first person meant to (…) The decoder wasn’t consciously aware of the signal they received (…) however, only movement was transferred, not thoughts.
Brain-to-brain interaction using an electroencephalography cap, transcranial magnetic stimulation and LED lights (humans)
The same researchers designed a game with pairs of participants, similar to 20 Questions. In the game, the encoder was given an object that the decoder wasn’t familiar with. The goal was for the decoder to successfully guess the object through a series of yes or no questions. But unlike in 20 Questions, the encoder responded by looking at flashing LED lights, one signifying yes and the other no. The visual response generated in the encoder’s brain was transmitted to the visual areas of the brain of the decoder (…) The decoders were successfully able to guess the object in 72 percent of the games, compared to an 18 percent success rate without the BBI (…) this was the largest BBI study, and also the first to include female participants.
Multi-person brain-to-brain interfaces/ collective intelligence
To do this, researchers drew on their past work with brain-to-brain interfaces. The Senders wore electroencephalography (EEG) caps, which allowed the researchers to measure brain activity via electrical signals, and watched a Tetris-like game with a falling shape that needed to be rotated to fit into a row at the bottom of the screen. In another room, the Receiver sat with a transcranial magnetic stimulation (TMS) apparatus positioned near the visual cortex. The Receiver could only see the falling shape, not the gap that it needed to fill, so their decision to rotate the block could not be based on the gap itself. If a Sender thought the Receiver should rotate the shape, they would look at a light flashing at 17 hertz (Hz) for Yes. Otherwise, they would look at a light flashing at 15 Hz for No. Based on the frequency that was more apparent in the Senders’ EEG data, the Receiver’s TMS apparatus would stimulate their visual cortex above or below a threshold, signaling the Receiver to make the choice of whether to rotate. With this experiment, the Receiver was correct 81 percent of the time.
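The yes/no channel here exploits the steady-state visual evoked potential: staring at a light flickering at a given rate produces EEG activity at that same frequency, so a Sender’s answer shows up as a 15 Hz versus 17 Hz peak in the spectrum. The sketch below shows one way such a two-frequency decision could be decoded — synthetic data and hypothetical parameters throughout (sampling rate, window length, band width); this is not the researchers’ actual pipeline:

```python
import numpy as np

FS = 250            # sampling rate in Hz (hypothetical)
DURATION = 2.0      # seconds of EEG per decision (hypothetical)
FREQ_NO, FREQ_YES = 15.0, 17.0  # flicker frequencies from the study

def band_power(signal, fs, freq, width=0.5):
    """Power of `signal` within `width` Hz of `freq`, via an FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = np.abs(freqs - freq) <= width
    return spectrum[mask].sum()

def classify_decision(signal, fs=FS):
    """Return 'yes' if the 17 Hz component dominates, else 'no'."""
    p_yes = band_power(signal, fs, FREQ_YES)
    p_no = band_power(signal, fs, FREQ_NO)
    return "yes" if p_yes > p_no else "no"

# Synthetic 'EEG': a 17 Hz flicker response buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, DURATION, 1.0 / FS)
eeg = np.sin(2 * np.pi * FREQ_YES * t) + rng.normal(0, 1.0, t.size)
print(classify_decision(eeg))  # → yes
```

In the real experiment the dominant frequency was then mapped onto TMS stimulation above or below the Receiver’s perception threshold; here the classifier simply returns a label.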
There’s a mind-boggling number of possible applications—just imagine projecting ideas in an educational environment, directly sharing memories with others, replacing the need for phones or the Internet altogether, or even, in the more near-term, using it to teach people new motor skills during rehabilitation.
- Lily Toomey, With new technology, mind control is no longer science-fiction, in Massive Science
- Jordan Harrod, Scientists sent thoughts from brain to brain with nothing in between, in Massive Science
Under what conditions do these technology tools lead to the most effective learning experiences? Do they serve as a distraction if not deliberately integrated into learning activities? When these devices are incorporated deliberately into learning activities, how are students using them to make sense of ideas and apply them in practice? (…) It is much more complicated and difficult to develop an environment that can facilitate learning in complex conceptual domains (…) while adaptive systems have taken some forward leaps, there is still some way to go before these environments can cope with the significant diversity in how individual students make sense of complex ideas (…) Depending on how students structure related ideas in their mind, that structure will limit the way in which new information can be incorporated (…) The problem with providing personalised instruction in a digital environment is therefore not just about what the overall level of prior knowledge is but how that knowledge is structured in students’ minds (…) Technologies that are and will continue to impact on education need to be built on a foundation that includes a deep understanding of how students learn (…) teachers are constantly navigating a decision set that is practically infinite (…) The question becomes one of when and how technologies can be most effectively used, for what, and understanding what implications this has for the teacher-student relationship (…) there are two central narratives about what learning is: the first, acquisition, is vital but the second, participation, is even more powerful for learning (…)
There are several key areas in which the science of learning can contribute:
- Informing the development of and evaluating new technologies: research examining the effectiveness of the tools lags well behind the spread of their use (…) there is a clear need to draw on principles of quality student learning to determine how best to effectively combine the expertise of teachers and the power of machines
- Helping students to work with technologies: it is critical to determine how best to support students to do so in the absence of a teacher to help with this
- Determining how technologies can best facilitate teaching and learning: one way the science of learning will assist in understanding the changing student-teacher dynamic in education is through its implications for broader policy and practice (…) The increased use of these technologies in classrooms must be driven by what is known about quality learning and not by financial or political motives.
Full article available here
- Mind the assumptions: assess uncertainty and sensitivity, as their role in predictions is substantially larger than originally asserted
- Mind the hubris: complexity can be the enemy of relevance; there is a trade-off between the usefulness of a model and the breadth it tries to capture; complexity is too often seen as an end in itself. Instead, the goal must be finding the optimum balance with error
- Mind the framing: match purpose and context; no one model can serve all purposes; modellers know that the choice of tools will influence, and could even determine, the outcome of the analysis, so the technique is never neutral; shared approaches to assessing quality need to be accompanied by a shared commitment to transparency. Examples of terms that promise uncontested precision include: ‘cost–benefit’, ‘expected utility’, ‘decision theory’, ‘life-cycle assessment’, ‘ecosystem services’, and ‘evidence-based policy’. Yet all presuppose a set of values about what matters — sustainability for some, productivity or profitability for others; the best way to keep models from hiding their assumptions, including political leanings, is a set of social norms. These should cover how to produce a model, assess its uncertainty and communicate the results. International guidelines for this have been drawn up for several disciplines. They demand that processes involve stakeholders, accommodate multiple views and promote transparency, replication and analysis of sensitivity and uncertainty. Whenever a model is used for a new application with fresh stakeholders, it must be validated and verified anew.
- Mind the consequences: quantification can backfire. Excessive regard for producing numbers can push a discipline away from being roughly right towards being precisely wrong; once a number takes centre-stage with a crisp narrative, other possible explanations and estimates can disappear from view. This might invite complacency, and the politicization of quantification, as other options are marginalized; opacity about uncertainty damages trust (…) Full explanations are crucial.
- Mind the unknowns: acknowledge ignorance; communicating what is not known is at least as important as communicating what is known; experts should have the courage to respond that “there is no number-answer to your question.”
Mathematical models are a great way to explore questions. They are also a dangerous way to assert answers. Asking models for certainty or consensus is more a sign of the difficulties in making controversial decisions than it is a solution, and can invite ritualistic use of quantification. Models’ assumptions and limitations must be appraised openly and honestly. Process and ethics matter as much as intellectual prowess. It follows, in our view, that good modelling cannot be done by modellers alone. It is a social activity. The French movement of statactivistes has shown how numbers can be fought with numbers, such as in the quantification of poverty and inequalities (…) We are calling not for an end to quantification, nor for apolitical models, but for full and frank disclosure. Following these five points will help to preserve mathematical modelling as a valuable tool. Each contributes to the overarching goal of billboarding the strengths and limits of model outputs. Ignore the five, and model predictions become Trojan horses for unstated interests and values. Model responsibly.
Saltelli, A. et al., (2020). Five ways to ensure that models serve society: a manifesto, article available here
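The first of the five points — assessing how uncertain inputs propagate into predictions — can be illustrated with a toy Monte Carlo exercise: sample the model’s inputs from plausible ranges rather than point estimates, and report the spread of outputs alongside any single headline number. The exponential toy model and the parameter ranges below are invented purely for illustration:

```python
import numpy as np

def model(growth_rate, initial_cases, days=30):
    """Toy exponential model: projected cases after `days`."""
    return initial_cases * np.exp(growth_rate * days)

rng = np.random.default_rng(42)
N = 10_000

# Uncertain inputs: plausible ranges, not point estimates.
growth = rng.uniform(0.05, 0.15, N)   # daily growth rate
initial = rng.uniform(80, 120, N)     # starting case count

outputs = model(growth, initial)

# A single point prediction hides the spread the inputs imply.
print(f"median: {np.median(outputs):,.0f}")
print(f"90% interval: {np.percentile(outputs, 5):,.0f} "
      f"to {np.percentile(outputs, 95):,.0f}")

# Crude sensitivity check: which input drives the output?
for name, x in [("growth_rate", growth), ("initial_cases", initial)]:
    r = np.corrcoef(x, outputs)[0, 1]
    print(f"{name}: correlation with output = {r:.2f}")
```

Even this crude correlation check identifies which input the prediction is most sensitive to — the kind of sensitivity audit the manifesto argues should accompany any headline number.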
Complexity is one of four challenges expressed in the acronym VUCA — Volatility, Uncertainty, Complexity, Ambiguity (…) VUCA has largely been adopted in the business world to refer to challenges which traditional leadership models find difficult to address (…) it requires different skills, structures, modus operandi, mindsets and organisational principles from those currently taught and practised (…) current leadership approaches are counter-productive, even harmful, to working with uncertainty and complexity. In trying to gain control of complexities, in trying to get a grip, our management methods are actually making things worse (…) the cumulative effect of applying the wrong management practices to complexity has exacerbated the challenges of VUCA (…) (complexity management) can only be achieved by including and integrating the perspectives of all the people affected (…) wide-scale conversations in the form of what he (Stacey) called “reflexive inquiry” (…) VUCA skills include: interpersonal skills (e.g. active listening), perspective coordination skills (complementarity), contextual thinking skills (shifting perspectives according to context) and collaboration skills (inclusive decision-making) (…) VUCA requires the integration and fusion of different perspectives, and not alpha heroes with all the ‘right’ answers (…) What we should learn, instead, is how to respond to complex problems from a vantage point of not knowing, probingly approaching inquiry with an empty mind and humility; likewise we need to learn how to integrate seemingly polar opposite perspectives collaboratively (…) Some of the ways suggested to learn these VUCA skills include design thinking and practicing Sociocracy. We should take note, however, that one cannot learn integration skills by oneself, these have to be practised and refined in groups. 
We therefore need to create more Communities of Practice where people can hone these new skills (…) Uhl-Bien defines complexity as ‘rich interconnectivity’. Interconnecting parts become complex when the parts interacting actually influence and change each other (…) what complexity calls for are deeper conversations that matter
(…) However, by entrusting the objectivity* of morphogenesis to the sphere of nature, and in fact to theories that are far too general to be productive and useful, architecture is stranded on the shores of a programmatic bewilderment: if it does not focus on the production of forms, but on the natural and hence objective rules* of morphogenesis, all architectural outcomes and all that they entail are rendered fair and equal: this signifies the annulment of the field of meaning. And because meaning is a social construct, that which is pushed aside by the impetuous return of the natural is, precisely, the social -it is society, it is history (…) However, in the proposed process of natural morphogenesis, the architectural forms do not realise a project but are the outcome of the construction of events, as algorithmic interpretations of information data. The architect is given a new responsibility -not to design the forms but to prepare a bare field of possibilities on which the forces of reality will develop on objective* terms. The resolution of conflicts results in a valid though un-planned, unforeseen, uncanny and consequently estranging architectural form.
In contrast, in the practised strategies of architectural design, where subjective* initiative is required of the designer, the construction of the uncanny, of the unexpected and the unforeseen, the estrangement or the paroxysm of architecture’s inherent indeterminability aims to alter conventional socio-spatial relations and differential meaning-giving outcomes (…) This acrobatic, risky relationship between intention and coincidence, between the design’s theoretical abstraction and the existence of reality’s multiple parameters, between natural disorder and intellectual order, perhaps between desire and need -this is what the introduction of the mythologised diagram is attempting to determine in digital strategies: it is an idea bordering on a game, a pseudoscientific mechanism of Protestant decriminalisation for the abundant pleasures provided by the exceptional new voluptuous spatial experiences of digital design, a ruse aiming to prevent the abolition of the responsibility of designing and to restore the designer’s initiative.
*Are the rules of morphogenesis indeed objective, or just a logical (con)sequence of events based on voluntary data interpretation? In that case, the design process -traditional or digital- is always subjective.
Read full paper here
Excerpts from the Wouter Vanstiphout interview with Rory Hyde (MVRDV) for Australian Design Review in 2011. Full article available here
If you really want to change the city, or want a real struggle, a real fight, then it would require re-engaging with things like public planning for example, or re-engaging with government, or re-engaging with large-scale institutionalised developers. I think that’s where the real struggles lie, that we re-engage with these structures and these institutions, this horribly complex ‘dark matter’. That’s where it becomes really interesting (…) I do believe that architecture and design as a combination of pure speculation, rhetorical poetics and technical capacity, could play a role in politics. It could re-shape certain discussions and therefore create its own inevitability (…) I don’t think architects have to shed their visionary status, their ‘good’ arrogance, or their speculative powers, if only they would realise that things are contextual! Acknowledge the fact that the deepest meaning in what they do is directly related to the context in which they do it.
Wouter Vanstiphout is a member of Crimson Historians & Urbanists and professor of Design as Politics at TU Delft
The question for the coming autumn is resolutely not how can we recreate the architecture studio online. It is how we can liberate our discipline from the assumption that an ill-defined space, time, pedagogy and culture is the only way to teach design. It is an opportunity to re-construct architecture education in a more critical, inclusive and democratic way. (highlighting is mine)
Full article available here