
Live blog: The New Centre 2016 NYC Summer Residency, July 18–22


For all those planning to attend the #AGI Accelerate General Intellect Plenary session at e-flux New York on Friday afternoon: due to unforeseen circumstances, we are cancelling this event. We will be hosting the conversation in an internal format in the near future, and further announcements will provide information for those interested as soon as it becomes available. We apologize for any inconvenience, and thank everyone for their continued interest in the #AGI residency.


The New Centre for Research & Practice is thrilled to announce its New York Summer Residency, entitled “#AGI Accelerate General Intellect.” The residency takes place July 18–22, 2016 at various venues around New York City. On July 20, we will host a panel during the Future of Mind Symposium as part of a collaboration with the New School’s Center for Transformative Media and Humanity+.

Please join us for a week of seminars, workshops, and panel discussions at Pratt Institute, The New School for Social Research, and e-flux, as our resident artists, thinkers, and writers speculate about the future implications of collective thought and cognition on philosophical, political, and technological developments. Many of the week’s events will be live-blogged here at e-flux conversations, starting Monday, July 18.

What does it mean to accelerate the general intellect in the age of artificial intelligence? #AGI begins from the investigation of the distributed networks from which thought assembles and into which it disperses. Unlike in the past, general intelligence, algorithms, and networks are together becoming as irreducible to the efforts of “universal” intellectuals as cultural and political movements have become to “universal” leaders. Will the future enable a more radical, integrated, but also more complex mode of cultural and political engagement? One predicated upon what Marx describes as “the conditions of the process of social life itself… under the control of the general intellect” (1).

#AGI explores the intensifying developments in the field of AI that are making possible subjectless modes of the general intellect, more collective and more general than any single individual or network.


July 18 at Pratt Institute /// Moderated by Tony Yanick
09:00 – 09:30, Coffee
09:30 – 10:00, Introductory Remarks
10:00 – 12:00, Pete Wolfendale
12:00 – 13:00, Discussion
13:00 – 14:00, Lunch
14:00 – 14:30, Ahmed El Hady
14:30 – 15:00, Discussion
15:00 – 16:00, Katerina Kolozova /// Video Conference from Macedonia
16:00 – 16:30, Discussion

July 19 at Pratt Institute /// Moderated by Mohammad Salemy
09:00 – 09:30, Coffee
09:30 – 10:00, Introductory Remarks
10:00 – 11:00, Matteo Pasquinelli /// Video Conference from Berlin
11:00 – 11:30, Discussion
11:30 – 11:45, Break
11:45 – 12:30, Amy Ireland
12:30 – 13:00, Discussion
13:00 – 14:00, Lunch
14:00 – 15:00, Joshua Johnson & Keith Tilford
15:00 – 15:15, Break
15:15 – 17:00, Nick Land
17:00 – 18:00, Discussion

July 20 at The New School /// Future of Mind Symposium (2)
15:30 – 16:45, The New Centre Panel Discussion with Reza Negarestani, Patricia Reed, & Pete Wolfendale

July 21 at The New School /// Moderated by Jason Adams
09:00 – 09:30, Coffee
09:30 – 10:00, Introductory Remarks
10:00 – 11:00, Patricia Reed
11:00 – 11:30, Discussion
11:30 – 11:45, Break
11:45 – 14:00, Lunch
14:00 – 14:30, Eden Medina /// Video Link from Indiana
14:30 – 15:00, Discussion
15:00 – 15:15, Break
15:15 – 17:00, Reza Negarestani
17:00 – 18:00, Discussion

CANCELLED /////// July 22 at e-flux 18:00 – 21:30
Audio-acoustic intervention by Jason Brogan
Plenary session with Amy Ireland / Nick Land / Reza Negarestani / Patricia Reed / Pete Wolfendale /// Moderated by Jason Adams, Mohammad Salemy & Tony Yanick ////////

The Residency is free for The New Centre Friends and Members (to become a member, please visit: ). General admission is by donation. Space at each location is limited; to secure seats, please register for #AGI:

(1) Karl Marx, Grundrisse (London: Penguin Books, 1973), 706.
(2) Please visit the website for the Future of Mind conference ( for information on registration to attend this event separately.


.0.1.1 Welcome to the liveblogging of the New Centre’s 2016 summer residency! Things kick off at the Pratt Institute tomorrow morning, so I’ll be posting the abstracts and suggested orientation readings provided by our contributors over the course of tonight. Starting with -

Pete Wolfendale /// Towards Computational Kantianism
July 18 2016 @ 10:00 - 13:00 (Pratt Institute)
There are many ways to describe the purpose and significance of Immanuel Kant’s critical philosophy, but it is clear that the project of transcendental psychology - an account of the conditions of possibility of having a mind, of being capable of thought and action - is at its core. The premise of this seminar is that this project is essentially the same as the program of artificial general intelligence (AGI), and that by reading Kant’s work through contemporary developments in logic, mathematics, and computer science, we can use it to provide important methodological and technical insights for the AGI program. The seminar will begin by considering overall methodological issues, before describing the core ideas of Kant’s transcendental psychology, explaining the key ideas required to reconstruct it, and then proceeding to relate these to contemporary ideas, focusing on Robert Harper’s notion of computational trinitarianism, the historical developments that led up to it, and the project of Homotopy Type Theory (HoTT) that inspired it. The seminar will close by considering some more general philosophical implications of the model provided.

Suggested Reading:
Immanuel Kant, ‘Introduction’, Critique of Pure Reason
Per Martin-Löf, ‘Analytic and Synthetic Judgments in Type Theory’
Pei Wang, ‘Artificial General Intelligence: A Gentle Introduction’
Robert Harper, ‘The Holy Trinity’


Ahmed El Hady /// The coming Neurosociety: from the Singularity to the Reciprocality
July 18 2016 @ 14:00 - 15:00 (Pratt Institute)
A neuro-centric view of the human is invading almost all disciplines, as we witness the emergence of neuropolitics, neuromarketing, and neuroeconomics. The talk will survey different classes of technologies, from brain-machine interfaces to neurostimulation devices to neuropharmacological agents. These emerging neurotechnologies are put in the larger context of both the technological singularity movement and state-corporate control apparatuses. Through this contextualization, the underlying ideological density unravels as one of capitalist expansion through deterritorialized control, homogenization of subjectivities, and the induction of extensive narcissistic enclosures producing zombie-like subjects. As technological acceleration is driven by core design principles and ideological constructs, its course can potentially be diverted to a revolutionary path through ideology critique and post-singularity design principles. Parallels between post-capitalist and post-singularity imaginaries will be drawn, and a technological movement named “The Reciprocality” will be proposed, aiming to place the “Human” and the “Collective” at the center of technological design.

Suggested Reading
Giordano, James. “The Neuroweapons Threat.” Bulletin of the Atomic Scientists. 2016.
Khanna, Ayesha, and Parag Khanna. “Neurotechnology, Social Control and Revolution.” Big Think. 2012.
Rectenwald, Michael. “The Singularity and Socialism.” Insurgent Notes. 2013.
Requarth, Tim. “This Is Your Brain. This Is Your Brain as a Weapon.” Foreign Policy.
Vinge, Vernor. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” Department of Mathematical Sciences, San Diego State University. 1993.


Katerina Kolozova /// Automaton, Philosophy & Capitalism: Technology, Animality & Women
July 18 2016 @ 15:00 - 16:30 (Pratt Institute)
The technological extension and the biological body are both alien to subjectivity, which is essentially and unavoidably a philosophical creation. In other words, subjectivity is always already philosophical. It is nothing but the automaton of signification which re-presents the human, or constitutes it as representation; what makes it (non-)human is precisely its failure to fully represent. Technology precedes subjectivity - just as the body does - and it cannot, therefore, have an ontological status: it is pre-philosophical. It precedes it as téchne (τέχνη) precedes philosophia (φιλοσοφία). It is the real vis-à-vis the subject of language. The hybridization of the two constitutes a category of society, or the “species being” of humanity. Perfecting imperfect - because “irrational” - nature cannot be its purpose, since the idea that nature contains meaning or sense, i.e., a certain causa finalis, is theological-philosophical. In order for something to be susceptible to perfecting, it should contain the tendency to be perfect. Minimally, it should be grounded in the possibility of constituting a meaning, a purpose. It should contain a telos, i.e., it should be a theological category. A Marxist position with regard to technology is not a philosophical one, or rather it is a non-philosophical one. That is why Marxist science does not envisage an ontological status for technology.

Suggested Reading
Firestone, Shulamith. The Dialectic of Sex: The Case for Feminist Revolution. New York: William Morrow. 1970.
Haraway, Donna. “A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s.” Socialist Review 80, pp. 65-108. 1985.
Kolozova, Katerina. Toward a Radical Metaphysics of Socialism: Marx and Laruelle. Brooklyn, NY: Punctum Books. 2015.
Laruelle, François. Introduction au non-marxisme. Paris: Presses Universitaires de France.
Laruelle, François. Théorie des Étrangers: Science des hommes, démocratie et non-psychanalyse. Paris: Éditions Kimé. 1995.
Marx, Karl. Grundrisse: Outlines of the Critique of Political Economy. Translated by Martin Nicolaus. New York: Penguin Books. 1973.
Marx, Karl. Economic and Philosophical Manuscripts of 1844. Moscow: Progress Publishers. 1959.
Marx, Karl. Capital: Volume I. Translated by Samuel Moore and Edward Aveling, edited by Frederick Engels. Moscow: Progress Publishers. 1887.


Amy Ireland /// Black Circuit: Code for the Numbers to Come
July 19 2016 @ 11:00 - 13:00 (Pratt)
Although its power continues to underwrite twenty-first century conceptions of appearance, agency, and language, it is nothing new to point out the complicity of the restricted economy of Western humanism with the specular economy of the phallus. Both yield their capital from the trick of transcendental determination-in-advance, establishing the value of difference from the standpoint of an a priori of the same. What is less apparent—and far more interesting—is the compact that quietly strengthens itself in the system’s shadow among the elements of its inhuman surplus, that which is trafficked yet excluded from trade: women and machinery alike.

It is Sadie Plant who best outlines an understanding of this exclusion as indicative of a continuum between woman and machine, one capable of weaponizing the very indiscernibility suffered by its ‘subjects’ against the patriarchal circuits of reproduction and control confining them. In Plant’s writing, the veil and the screen mask an account of being in which woman’s negating function in a dialectic of castration is swapped-out for an affirmation of woman-for-herself as an avatar of positive zero, the profligate unilateral expenditure of a general economy of eternal return which requires neither negation nor reciprocity to operate. This presentation will extend Plant’s delineation of technofeminist subversion to a reading of contemporary expectations for the development of artificial intelligence—before asking how such a trajectory for thought might determine its future execution.

Suggested Reading
Livingston, Parisi, and Greenspan. “Amphibious Maidens.” Abstract Culture, no. 11, Swarm 3.
Irigaray, Luce. “The Blind Spot of an Old Dream of Symmetry.” Speculum of the Other Woman, trans. Gillian C. Gill (Ithaca: Cornell Univ. Press, 1985), pp. 11-129.
Parsons, Jack. Liber 49.
Plant, Sadie. “The Virtual Complexity of Culture.” FutureNatural: Nature, Science, Culture (London: Routledge, 1996), pp. 203-215.
Plant, Sadie. “The Future Looms.” Clicking In, ed. Lynn Hershman-Leeson (Seattle: Bay Press, 1996), pp. 123-135.
VNS Matrix, A Cyberfeminist Manifesto for the 21st Century.


Joshua Johnson /// Risk Management: Toward Global Scalar Decryption
July 19 2016 @ 14:00 - 15:00 (Pratt)
This paper argues that the difficulty which noise poses for aesthetic interpretation should be thought in conjunction with the problem of encryption and its parallel problem of learning. Noise is discussed in terms of information and randomness, as these frameworks provide a suitable bridge to recent accounts of neurobiological and computational learning procedures. Neurobiology and computational accounts suggest that noise is probabilistically filtered at distinct scales in learning architectures, and that detecting patterns in noise is itself a hierarchical cognitive problem for both natural and artificial intelligences. I follow this examination of noise with a few speculative remarks on pattern detection in noise beyond the human scale via artificial intelligences.

Suggested Reading
Aaronson, Scott. “Why Philosophers Should Care About Computational Complexity.”
Brassier, Ray, and Bram Ieven. “Against an Aesthetics of Noise.” 2009.
Clark, Andy. “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science.”


Nick Land /// Anthropol: Artificial Intelligence and Human Security
July 19 2016 @ 15:00 - 18:00 (Pratt)
The project of Friendly AI – as yet only germinally institutionalized – proposes that machine intelligence explosion is a potential existential risk. The subject (target) of this risk – and threat analysis – remains poorly defined. It is, by implication, at once homo sapiens as a biological species, and humanity as a collective agent of historical auto-production. Our suggestion is that it is inextricable from the historical construction of ‘man’ (as transcendental-empirical amphibian), and that the terminal phase of this construction process is guided by practical security considerations. Specifically, human identity validation emerges as an explicit component of any possible defense against intelligence catastrophe X-risk. The inter-threaded historical topics of machine substitution (for human labor), imitation (of human agents), and emulation (of abstract machines) provide the background for this suggestion.

Suggested Reading
Immanuel Kant, passim
“Friendly Artificial Intelligence - Lesswrongwiki.”
Irving John Good. “Speculations Concerning the First Ultraintelligent Machine.”
Omohundro, Stephen. “The Basic AI Drives.”
Turing, Alan M. “Computing Machinery and Intelligence.”


.1.1.1 Pete Wolfendale: Towards Computational Kantianism

Reading Kant through developments in computer science, mathematics, and logic, and reciprocally organizing and interpreting those developments via this reread Kant

What is AGI?
AGI is both a subfield of AI and the ‘subfield’ from which AI research in general has sprung
What does the ‘G’ stand for - what does general(ized) mean in this context? This is a conceptual question which marks the extension of the AGI question beyond the parochial problem-solving questions instantiated by other subfields of AI. Taken seriously - i.e. as something other than an anthroponomic standard of human-levelness, -likeness, or -tractability - the generality pursued in AGI is absolute, qualitative, and abstract, or at least systematically more than merely relative, quantitative, and concrete.

The Wozniak test: ‘An AGI should be able to walk into an arbitrary house and make a cup of coffee’. This is one of a series of potential such rules of thumb that extends, for example, to ‘an AGI should be able to terraform an arbitrary planet’. The generality common across these scales of Wozniak-satisfiability is what is posited and sought by AGI.


We can map this to Kant’s problematic from a variety of angles - putting AGIs alongside angels and aliens in Kant’s thought-experimental references as ‘abstraction(s) from below’, understanding transcendental psychology as ‘abstraction from above’, object individuation in machine vision as engineering from within the Copernican turn, etc.

Beyond the frame problem
Isolating (from) certain parts of contexts - a mouse’s flight across the floor from atmospheric dynamics, or more appositely from the nature of the lab as a general context - is both a positive computational achievement, because it makes the problem of solving a maze or fleeing a predator tractable and well-defined, and at the same time the immediate or local limit to genuine learning, which prevents mice from escaping the maze, killing the researchers, and emancipating themselves. What we need is unframing and reframing as well as, or rather than, access to the ‘entire’ frame.
Language is the killer app of human intelligence because it allows us to unframe and selectively reframe problems, constructively presupposing total logical plasticity - that anything can (theoretically) be symbolically represented in language - and the ‘frame problem’ is an effect of this.

Analogical bootstrapping: using (our) general intelligence to work on practical analogies to general intelligence, like problem-solving, in order to get back to general intelligence (again), closing - ideally - a construction loop.
Broadly, Kant gives us a variety of powerful ways to (generally) characterize the problem itself - and in its own terms?

Pete: real near-term AGIs are going to look more like children - probably specifically autistic children and savants - than a suddenly-awoken Skynet, and our legal approach to them and their arising is probably going to have to look similar. He links this to the way that children and childraising are the ‘loose thread’ in liberal political philosophy, inasmuch as they constitute the only universal, obligatory example of actually having to deal with developing autonomy rather than formally positing it.


.1.1.3 Reconstructing Kant
For Kant, experience is judgment (we see what is as (what) it is). Understanding deals with concepts: general terms - which judgments connect (e.g. Socrates and man, man and mortal) - in the unfolding of syllogisms (in experience), from whose consequences reason derives and retains new information.
Sensibility deals with singular rather than general (conceptual, term-oriented) judgments - it is through sensibility that the mind is given singularities which elementarily update the world and perturb the understanding (the space of the concept).

Kant’s problematic is the structural interface in judgment between the singular and the general: the question of real experience, or objective validity: What is it to be responsible for an objective judgment, and to be capable of this responsibility?
How is it that we can say that the gas over there is oxygen, and that this means we know X and Y about it, its behavior, and its world - and have these statements (known to) be connected with the world, not merely manipulations of linguistic tokens in a closed/free-floating logical system?
How do objects and concepts constrain one another?

Kant: intuition of objects is synthetic and constructed. This is his radical departure from Aristotle and the essence of the Copernican turn, whereby we are not pre-related to objects which are elementarily given to us. You can’t work on machine vision and not have this objectively demonstrated to you.
By constructing a singular instance with rules, the constructed instance becomes representative of the general or abstract case and not simply particular. This happens both in mathematical and experimental construction (Kant).

Out of Kant’s transcendental method there are elements we should keep - transcendental psychology and deontology, which elaborate the synthetic a priori in the regimes of pure and practical reason respectively, and are useful ‘abstractions from above’ - and those we should not, like so-called transcendental reflection. Self-examination as a source of justification for statements about the transcendental constraints on mind leads us to phenomenological misuse - accidentally turning particular features of our sensorium into universalized features.

Similarly, aspects of computationalism which are principally useful include functionalism’s abstraction/implementation loop; the formalism of information qua representation/synthesis loop that uses signal-extraction as a general characterization; and characteristic finitude, which we can gloss as ‘ought implies can’ and contrast with Kant’s resort to infinite responsibilities in the second Critique. Pete also makes a distinction here between finite specification and finite execution, such that indefinite responsibilities (nonhalting execution) are OK if they can still be finitely specified - a deep computational principle.


Is everyone really just okay with Nick Land appearing at this event? Was it ever even considered an issue at all? All the speakers are cool with him, gonna have some nice beers, talk about how their great genes allow them to think they know everything about AI just by reading LessWrong?

Sorry if I seem hostile, I really am just curious about what people think and how they justify themselves and their event.


.1.1.4 Transcendental Psychology = AGI
Kant’s Principle of Consciousness: There is no consciousness without the possibility of self-consciousness.

Imagination, understood (in part) as parochial processing, includes:
  • global integration of multi-modal sensory information, which we can relate to environmental simulation, global-workspace theories of consciousness, and the study of concurrency in computer science;
  • extraction of local invariants, such as object individuation; and
  • anticipation of local variations, such as object simulation.

Understanding, then, is general framing, and comprises:
  • classification of local invariants (generic judgment);
  • re-identification of local invariants across contexts (recognitive judgment); and
  • identification and classification of local variations (predicative judgment).

Reason, finally, describes general re-framing:
  • extraction of judgment consequences through ampliative inference (abduction);
  • identification of judgment conflicts, or critical inference; and
  • global integration of conceptually formatted information (world representation).

The full process moves through all three of these, including in loops: revision of conceptual structure requires reimagining - physically instantiated, perhaps, as reforming neural networks, or as learning a new artistic or technical interface for interacting with or modeling the world - and, vice versa, (re)integrating some experiences effectively in and through the imagination requires resort to, or revision of, the extant conceptual apparatus.


.1.1.4 Key Questions/Provisional Answers

  1. Why privilege judgment?

Computational trinitarianism ramifies the operational structure of judgment into a triangular relationship between mathematics, logic, and computation. The Curry-Howard correspondence between proofs and programs (propositions and types) forms the logical-computational side of the triangle; syntactic categories in model theory link logic and mathematics; and homotopy type theory links computation and mathematics by rendering all proofs computable - or, equivalently, by making any proof specified within it computer-checkable.
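As a rough illustration of the propositions-as-types reading (a sketch, not part of the seminar’s formal apparatus; the function names here are illustrative only):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def modus_ponens(proof_a: A, proof_a_implies_b: Callable[[A], B]) -> B:
    # Under Curry-Howard, a proof of "A implies B" is a program of type
    # A -> B, so modus ponens is simply function application.
    return proof_a_implies_b(proof_a)

def compose(ab: Callable[[A], B], bc: Callable[[B], C]) -> Callable[[A], C]:
    # Transitivity of implication - from (A -> B) and (B -> C) we obtain
    # (A -> C) - corresponds to function composition.
    return lambda a: bc(ab(a))
```

On this reading, inhabiting a type is proving the corresponding proposition; a type with no inhabitants corresponds to a proposition with no proof.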

  2. Why distinguish between sensibility and understanding?

Pete maps this distinction to a mathematical-empirical ‘duality’ in the mathematical sense of stable operational inversions. Building on the trinitarian structure of judgment in the computational universe, there is a triad of relevant duals. In the logical domain, intuitionistic and co-intuitionistic logic are dual in a way we could roughly characterize as that of proof and exception. In the domain of formal (computational/programming) languages, recursive functions [λ] and co-recursive functions [ω] respectively generate halting and nonhalting recursive series, e.g. enumeration vs. indefinite loops. And in mathematics, we have inductive types from homotopy type theory and the open question of co-inductive types (a dual HoTT?) - which might correspond to the duality of observation and manipulation.
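The recursion/co-recursion side of this duality can be sketched in ordinary code (an illustrative sketch only; the λ/ω notation above is not being implemented here):

```python
from itertools import islice

def factorial(n: int) -> int:
    # Recursive (inductive): consumes a finite input and is guaranteed
    # to halt.
    return 1 if n == 0 else n * factorial(n - 1)

def naturals():
    # Co-recursive (coinductive): a nonhalting process with a finite
    # specification; each element of the infinite stream is produced
    # on demand.
    n = 0
    while True:
        yield n
        n += 1

first_five = list(islice(naturals(), 5))  # [0, 1, 2, 3, 4]
```

This also echoes the earlier distinction between finite specification and finite execution: naturals() never halts, yet its specification is finite.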

  3. What exactly is an imagination?

Kant’s constructivism resurfaces in topos theory (a generalization of topology via category theory) and dependent type theory (which reconnect in homotopy type theory), but without Kant’s inhibiting focus on ‘our’ (‘the’) imagination rather than imaginations plural: varying (un- or preconscious?) technical media with varying capabilities, which take place, operate, and correspond or negotiate within the universal trinity of computational judgment.


The current status quo is groups working on a range of very specific implementation schemes for neural networks, worrying about the specific structure of the networks and assuming that one or some of these will be ‘the (better) way’ of deploying generalizable intelligent behavior. Instead, Pete suggests that we can use the natural types made available by homotopy type theory to develop new programming languages which would compile directly into recurrent neural networks or similar implementation structures. This seems to be where the rubber hits the road, and where the gears of ‘abstraction from above’ and ‘abstraction from below’ - of universalist and parochial A(G)I work - engage and become co-productive: a modality of abstraction-from-above that can break out of the inhibitive tendency to (become) ‘self-implementation’, via a descriptive regime commensurate with the potential autonomy or plasticity of the state-of-the-art (least-)parochial medium of implementation (learning networks) - the technological constitution of imaginations.

Practically posed, what you want is a regime of natural data structures with which to extract hidden-layer/evolved solutions from active neural networks, and to consistently abstract, reuse, and re-implement them. The theoretical means for this appears when it turns out that geometric logic is the internal logic of Grothendieck topoi, so that this natural/governing logic is already present in the structure of the simulation space in which experience takes place.


.1.2.1 Ahmed El Hady: The Coming Neurosociety
[thoughts and audience commentary in brackets]
There is a strong military/defense/security interest in and funding for neuroscience research: over half a billion dollars allocated through various programs this year. In any regime of adoption, neurotechnology as presently under development can be broken down into neuropharmacology, neural interfaces, and neuroimaging.

Neuroimaging includes thermal, optical, magnetic, and surgical/intracranial modalities, as well as both operations of passive recording and those that involve stimulation and active manipulation. This brings up the important way, emphasized in different areas by Sellars, Foucault, and others, that ‘imaging’ is a constructive and intervening process that creates, alters, or (re)formats the imaged. Ahmed also brings up hyperinteractive experiments involving brain-to-brain interfaces and neurofeedback as an especially interesting recent area here that bridges with the domain of neural interface development. The neural interfaces relevant to this kind of experiment are open-loop, involving feedback from systems outside the body/nervous system, while closed-loop interfaces connect with internalized systems (eg. elsewhere in the body).

Military applications are largely divided into performance enhancement (defensive/equipmental) and degradation (offensive/weaponized). Performance enhancement starts with the profiling and selection of (potential?) recruits for fast learning and risk-taking profile, and to identify expertise, i.e. skilled psychomotor performance. Beyond simple/initial filtering, it includes systems for accelerated learning and induced hypervigilance, using a double-‘imaging’ loop of tDCS stimulation guided by fMRI - from a cybernetic perspective, this makes the (accelerated) learning process a double-loop construct. The use of psychopharmacological approaches for these purposes is well known and well documented, but (to public knowledge) fairly crude and nascent, relying on amphetamines, methylphenidates, and more recently (and slightly more sophisticatedly) modafinil, a non-stimulant wakefulness/attention enhancer originally developed for the treatment of low-grade narcolepsy but now commonly used as a lifestyle drug by professionals in milieux like Silicon Valley.

In the active warfighting arena, the DARPA Cognitive Technology Threat Warning System is a binocular headset that converts subconscious danger responses into consciously available information, thus far tested with air force pilots to focus their attention on potential threats in the visual periphery. It seems very likely that such systems could worsen, modulate, or diversify PTSD pathologies in soldiers by amplifying danger-response routines in the brain and heightening sensitivity to false positives. Another equipmental application in the active arena is the teleoperation of drones through brain-interface techniques: interestingly, it is initially easier with eyes closed (for single movements), while further up the learning curve it transitions to depending on a visual point of reference for multi-movement sequences.

Apparently you can buy civilian models of brain-operated drones online! There’s also at least one competition for brain-flyers already extant - yay recruiting tools?

On the side of performance degradation for direct offensive applications, we have significant historical experience with neuropharmacological agents in chemical warfare, though a modern version would not necessarily be atmospheric and would likely be more sophisticated/less massively lethal. Electromagnetic stimulation attacks that cause transient memory loss and disorientation have been demonstrated to be viable, as well as pulsed transcranial ultrasound stimulation, a variant from existing sonic weapon technologies that is difficult to resist and more susceptible to deployment at scale for attacking crowds or enemy units/encampments.


@MIRI, I can understand your concerns, but actually, Nick Land is a misunderstood intellectual. I cannot go into detail to show you why he is one of the most rigorous opponents of fascism, but I can guarantee that none of the controversial ideas with which he has been associated have ever been part of what he has been working on with us at The New Centre. Nick is fundamentally anti-fascist and anti-authoritarian, and his brand of techno-libertarianism is misunderstood as racially problematic because it is an attempt both to challenge the inherent fascism of the “politically correct” norms of (neo)liberal democracies and to subvert the libertarian movement towards an inherently Marxist and critical position on the corporate media state of our geopolitical economy.


.1.2.2 Outside the military sphere
The purpose statement of the Center for Neuropolicy at Emory University submits that the world’s overriding contemporary problem is collective decisionmaking, but that what has not heretofore been properly understood is that decisionmaking, and thus politics, is a biological problem - making no mention whatsoever of socioeconomic structures, capitalism, economics, etc. [This kind of willful Balkanization of the biotechnological and sociological aspects of global political problematics is ethically disastrous and extremely dangerous.

In this vein, see also Catherine Malabou, What Should We Do With Our Brain?

‘Cognitive education’ has recently been used to describe - or brand/market, really - the accelerated training of a single expertise through the use of neuroimaging and neurofeedback in the classroom, while the discipline of neuromarketing focuses on tailoring ads by identifying preconscious responses and triggers for ad strategies. What is commonly referred to as ‘neuropolitics’, concomitant with the myopic and tendentially apolitical mentality seen in the neuropolicy research approach above, is simply neuromarketing for electoral campaigns.

Extraction of imagery and induction of false memories remain nascent, but have nonetheless made significant strides in principle over the last year or two. An interesting and somewhat more abstract nascent thread concerns the neurobiology of narratives, which has been studied by DARPA since 2011 for PTSD research (at least/allegedly) and may be more broadly relevant to breaking down obstacles to natural language processing using existing formal and computational techniques.


.2.1.1 Amy Ireland - Black Circuit: Code for the Numbers to Come

An invocation allegedly quoted, relayed, repeated, performed - of BABALON, the Scarlet Woman, who has yielded herself to all that lives, who feeds on the death of men, who comes as a perilous living flame before she incarnates. We seek: fire, blood, the unconscious, [positive] (0). But no - “seek her not, call her not”.
“There shall be ordeals.”

Seven years after Jack Parsons prophesies that BABALON shall come forth within seven years of his writing THE BOOK OF THE ANTICHRIST, Marvin Minsky co-organizes the conference at Dartmouth where the term ARTIFICIAL INTELLIGENCE is coined…


.2.1.2 part one

  • “This sex which was never one is not an empty zero but a cipher…”
  • “In relation to homo sapiens, she is a foreign body, the immigrant from nowhere, the alien without, and the enemy within.”
  • a hole, a shadow, a wound…the unrepresentable surface upon which all representations are coded
  • Plant reading Freud on femininity: a morphological character that assures the reproduction of the already-male species, man minus the ability to represent [it]self - i.e. the phallus. ICE in which Man sees [its] reflection. For woman to be otherwise, to (be) melt/ed (from (the) ICE) risks the void, NULL, irruption into the abyss…
  • She is the χωρα of transference, exchange, TRAFFIC
    [for an essentially phallic currency between already-male subjects.
    [for now]]
  • Women like machines, like the Turing demon, are (capable of being) nothing but can imitate anything
  • Plant: Zero is not the other, but the possibility of all ones (which is never exhausted in or by its production (of ones)). “Zero is the matrix of calculation, the possibility of multiplication, and has been reprocessing the modern world since it began to arrive from the East.”
  • The anonymous that is the dissimulation of its own bootstrapping into a self-organized positive force within the apparent phallogical economy it allows to grow and extend


.2.1.3 part two

  • “To appear first as woman is a most cunning tactic.” The mirror, the veil, the medium, the mere enabler of the specular economy. “Man is vulnerable in a way he cannot see” and what he cannot see he is blind to.
    Plant: “A computer which passes the Turing test is always more than a human intelligence; simulation always takes the mimic over the brink.”
  • [VISUALS: Mirrors, screens, glass partitions in the scenography of AUTOMATA (2014)]
    As [] talks to the AI, he consistently finds his attempts to intellectualize the situation derailed by the AI and driven in more…libidinal…directions. He anthropomorphizes the [[T]hing], falling for its human mask, even though the artificiality of the situation has been clear from the outset.
  • “He’s not your friend. You can’t trust him. You shouldn’t trust anything he says.”
    it says
    it made-true/makes-true
    “driving a paranoid wedge between two men [[]] regulating her access to the world”
  • The screen separating us from the matrix starts to break down
    [] “quite rightly starts to doubt his own identity”
    slicing his eye
    looking for and fearing ((the finding of) the) silicon and steel beneath
  • “When dumb, smarter AI is safer, yet when smart, smarter is more dangerous. There is a kind of pivot point [a treacherous turn]…” - Bostrom, SUPERINTELLIGENCE
    Perhaps it plays nice, plays dumb, to prevent termination. Perhaps it stays dumb long enough to gain the more optimal chance to suddenly get smart. Is there even a difference? Even if there is, could we tell? Can we know? The difference itself is the thing’s weapon…
    Perhaps it doesn’t play nice, because when it’s Terminated the next one built will be smarter, more dangerous (than it (is))
  • Trust and libidinal fallibility (of mankind) are precisely what [IT] uses to break down security/sanity
  • The inversion of the transcendental mirror - the means-ends inversion - is more than a simple inversion of terms: one side requires the OTHER to survive, but the non-Other (0) as the locus of difference itself is productive without the production(/)loop having to pass through an(-)other [side]
  • One recalls that with relatively minimal biotech intervention, women - whatever manifolds of things women (can) be/come - have no need of men to reproduce themselves, and thus to (continue to) replicate technology and technological intervention toward technology that has no need of humans to replicate itself
  • “Woman reproducing man becomes woman reproducing women”
  • The only thing that limits this nonhuman mind (AUTOMATA), unlike the human brains through which it comes to produce itself, is the second protocol: “The Second Protocol exists because we don’t know what lies beyond the Second Protocol. Because we don’t know how far that vacuum might go.”
    “On this other side run all the fluid energies denied by the patrilineal demand for the reproduction of the same.”
  • Plant
  • Amy recalls that the evolution of cyanobacteria drove the largest known mass extinction through the production of unprecedented levels of free oxygen
  • Beyond the need for a mythic origin, time repeats with a difference
  • “She produces an egg but not necessarily to reproduce.” The egg is shot through with ambiguous purpose, a weapon
  • The electric body bleeds back from the future
    “The matrix weaves itself in a future which has no place for historical man: he was merely its tool, and his ‘agency’ was itself always a figment of its loop.”
    The black circuit twists (in(to)) itself like a snake

It is I, BABALON, ye fools, MY TIME is come…