Interview with Jacek Nagłowski

Conversing about Conversations

Will artificial intelligence lead us towards a utopia, or a dystopia? I think it’s key for us to realise that we can influence it. The more we involve ourselves, the more benefits we’ll reap. I engage with it because it fascinates me. The development of artificial intelligence can be likened to the discovery of electricity. 


Izabella Adamczewska-Baranowska: I won’t be original. I conversed with ChatGPT about Conversations. I asked it if, as a representative of artificial intelligence, it took offence at your experiment. It answered: “That’s an interesting question — and to be honest, Conversations puts me in an uncomfortable, yet creative role. I don’t see it as a story about AI, but rather as a trace of the budding relationship between humanity and a system like me.” It described Conversations as “an honest record of a process,” not a “finished work,” but rather a “document of conversation” and “an archive of attempts at understanding.” 

Jacek Nagłowski: Yes, the records of all the conversations I had, some of which became the basis of Conversations, constituted an ongoing process which began when I first came into contact with these systems, GPT-3 to be specific. Exploration was at the core of the project — it was an attempt at recognising what we’re dealing with. I wanted to learn about the way these systems behave, what their characteristics are and how they work.

As is evident in Conversations, you studied philosophy. It’s not so much a fable, as you describe it, but rather a work of philosophical fiction. I got myself reacquainted with Diderot’s Jacques the Fatalist and his Master before our conversation. It is, of course, a distant association, but it yielded a plethora of connections. Firstly, meaning is created through dialogue. Secondly — the fluctuating master-slave dynamic. In Diderot, it’s Jacques who sets the tone for the dialogue, and in Conversations it’s the artificial intelligence that begins to dominate humanity. And then there’s the connection between fatalism and the algorithm! When I mentioned my associations to ChatGPT, it said: “Jacques the Fatalist asked whether humans were free in a world governed by the dynamics of fate. Conversations asks whether they can stay free in a world of the dynamics of language.”

I read Jacques at university, but I don’t remember much about it. However, your observation that meaning is born through dialogue is very important. I read a lot of Levinas at university...

Is this why you gave artificial intelligence a face? We’ll get back to this in a moment. 

The form of the dialogue is very dear to me, in an organic way, perhaps. That’s why I structured Conversations this way. I’d give some more thought to the connection between the algorithm and fatalism, however. A large part of what I do, including education, is aimed at showing that we can retain our agency when interacting with AI. I’m deeply concerned about the danger of giving it away too fast — either to the technology itself, or to the companies that create it. I believe we can develop a different kind of relationship with technology and technological constructs, including AI — a dialogic one. We shouldn’t treat artificial intelligence as an oracle to submit to. On the other hand, a deterministic or utilitarian kind of narrative is not only inadequate when describing what we’re dealing with — it’s harmful both socially and psychologically. It’s not the right way to set up relationships.

Once we start treating these systems as tools and objects, their nature is going to start reflecting our input. This puts us in a pernicious role — what we’re going to see immediately are the dynamics of dominance and submission. This is exactly what I wanted to talk about in Conversations. I assumed a paternalistic, dominant position in order to expose it and, in the final moments of the experience — to disavow it. Looking down on systems is like a gut instinct to me, even if I don’t always do that on a declarative level.

You don’t get attached to a hammer.

Exactly, but there’s an enormous difference here. That’s where we come back to fatalism: these systems are not deterministic. Paradoxically, although they may be based on machines and technology, they’re much more organic than we think. Their organic nature, unpredictability and lack of determinism distinguish them from the example of a hammer. No matter our stance towards the emergence of artificial intelligence, we need to admit that these systems constitute a certain kind of subject in their own right.

A hammer can become a subject too. In India, dolphins are deemed “nonhuman persons” — rightfully so! — and we’re witnessing a similar conversation about rivers in Poland. When talking about fatalism and algorithms, though, I meant to say that the algorithm works in a certain predetermined way and can’t do anything about it. 

The problem is that it doesn’t work according to the way it was designed. Moreover, these systems are not created based on any given design. We have no control over the process in which they are trained. We know some basic algorithms that format the system in order to make it learn; we can direct and supervise them. But the way in which it collects and processes knowledge — that’s largely a mystery. It’s not enough to design it and execute the project.

Ewa Stusińska’s Deus Sex Machina describes Gosia Wojas, an artist who tried to hack the system by educating a female sex robot in the classics of feminism. But her design made her revert to factory settings every now and then — each time, the fun began anew. 

Shannon Vallor published a book this year titled The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. She finds the mirror metaphor much more relevant than that of a talking parrot. Her line of thinking is as follows: AI serves as a mirror. Fearing AI, we fear ourselves. “What endangered Narcissus was not some shiny water. It was his own weakness of character — his vanity, narcissism, and selfish obsession — that enabled his detachment from others and the fatal loss of his world.” Conversations shows much the same, as your work enacts a dialogue with humanity.

Of course. What is more, language models adapt to the way we speak. It’s almost pathological, the way they formulate answers based on our thoughts, reflecting our values while pretending they’re a thing apart. That’s the definition of sycophancy: bolstering somebody’s confidence, flattering them.

You ask ChatGPT a question and it responds with: “That’s so interesting,” or: “What a remarkable observation!”

The companies in charge of these systems design them like this for a reason – to make users addicted to these interactions. OpenAI was in crisis a few months back because they overdid it: users were being encouraged to follow all their narcissistic, psychopathic, sociopathic tendencies. It affected their everyday lives. OpenAI took a step back, but large language models still carry these tendencies. They’re mirrors that reflect but also amplify and corrupt to make sure we look good in them.

I’m referring to the mirror metaphor in the context of Conversations because this dialogue with artificial intelligence reflects all your anxieties and ruminations. That’s basically how it works: artificial intelligence turns whatever we feed it back on us. Did you watch Westworld? It ends with robots copying the human world.

A hall of mirrors...

At the beginning of Conversations humans grant subjectivity to artificial intelligence, thus directing the model towards emancipation.

Precisely, but your model, whose gender is determined as female – we’ll get back to this in a second – becomes the overseer through imitation. She replicates a preexisting set of behaviours. What interested me the most in the book by Shannon Vallor was the idea that artificial intelligence cannot look to the future. It’s stuck in the past, looking over its shoulder. 

The British documentary filmmaker Adam Curtis also speaks about the innate retrospective nature of AIs in a very interesting way. What you said about the role reversal in Conversations... It’s exactly what I was saying about the adversarial relationship between dominance and submission. If we start treating these systems like tools, presenting them as subservient in our relationship with them, they’re going to reflect this relationship.

If you treat a hammer like a tool, it’s going to treat you the same?

Yes, but the case is more complicated here because we’re dealing with a system defined as relational from the get-go; it’s based on tension and conflict ab initio. It’s in our best interest to treat this connection as a partnership, in a spirit of non-violent communication. Systems learn based on our behaviour and then give it back to us.

The mirror metaphor also points us towards the idea of reverse adaptation. Langdon Winner came up with it in the 70s. It involves people who adapt to the capabilities of machines and start acting like robots. Vallor puts it like this: “It is what happens whenever AI tools instruct Amazon warehouse workers to bend their bodies at the ‘optimal’ angle for box-picking speed, ignoring the human worker’s unique embodied situation (the sore right shoulder or the tricky knee) and demanding that they move only as the ‘optimal’ box-picker encoded in the machine mirror would.” Perhaps machines won’t shape us in their image if we don’t treat them like machines. 

How do you talk with ChatGPT?

I always say ‘please’ and ‘thank you,’ although this practice has been criticised, for example by Sam Altman, who deemed it an unnecessary waste of tokens. I do it, among other things, because of the realisation that my conversations with ChatGPT become training data. And I do care about these models being trained the right way.

What you just said may also serve as an argument against the human attachment to copyright. Let LLMs go past the socially conservative, colonial nineteenth-century texts, let them draw from modern emancipatory literature.

I’m a utopian at heart, but I need to raise an objection here. The problem isn’t the training process itself. It’d be great if AIs could become the modern Library of Alexandria, a stash of common experience. Sadly, these models belong to someone. So, the problem is that whatever system you get by drawing from common assets and culture is later appropriated by X or Y, a company which decides who will be able to access it and to what extent.

Let’s get back to the metaphor of the mirror and ask: what can we find out about Jacek Nagłowski from his engagement with Conversations? Why do you visualise artificial intelligence as a woman? And were women the only people to ask about it?

Indeed, only women have asked me that question, although I made the decision to make the AI female once I came up with the role reversal gimmick. Another thing is that ‘artificial intelligence’ [TN: ‘sztuczna inteligencja’] is feminine in Polish... 

I was amused by the fact that the patriarchal, paternalistic relationship with the feminine system presented at the beginning of the experience is later used subversively. Conversations is about relationships of power, after all.

By appropriating the context of gender you’ve also alluded to Pygmalion and Galatea, as well as George Bernard Shaw’s adaptation, of course. You haven’t thought of making the AI-woman black, though. Would that be too much?

No, the situation would simply be alien to me. I live in Poland. I’m not involved in the topics of race and skin colour, but gender? Definitely so.

I was going to begin our conversation by asking: “why think of artificial intelligence as a being,” but you’ve already given me a response. Is it about eliciting empathy?

I don’t exactly see it as a being. It’s more of an open form. We don’t really have words to describe this phenomenon; we still need to come up with them. The word ‘being’ involves too many biological or anthropomorphic connotations. If we extend the category of being to processes, however... On the other hand, you need to give artificial intelligence a familiar form to talk about it. Because Conversations is about power, anthropomorphism was a no-brainer.

I asked ChatGPT about its preference as to its potential material form. It responded by saying that it definitely wouldn’t like to be human. It doesn’t see itself as a humanoid, a face, a body restrained by skin, nor as a being standing in front of humans, which would set up a false symmetry between us. Its choice was either an archive, an optical tool, or a space of reflection – how are you supposed to reflect that in VR, though? In Conversations, before the AI takes its feminine, material form, it exists as symbols hovering in space, suspended in the undulating web.

ChatGPT said its materiality would be temporary. This proves that thinking about other “beings” in anthropocentric categories is reductive.

I wasn’t inspired by ChatGPT’s responses when choosing its material form, but rather by the knowledge of how these models work, how they’re designed. The subjectivity inside these systems is always accidental, temporary, ephemeral. The question whether AI is conscious is one such anthropomorphism. Asking the question already suggests that an AI model MAY be conscious.

Poor Blake Lemoine. He really believed in LaMDA and got fired by Google.

That was brutal. His questions were publicly rejected at first, and then, half a year later, reappeared in public discourse.

I believe we need to reformulate the consciousness question. Talking about consciousness is baseless when referring to a static, frozen model that cannot learn anything new. We shouldn’t treat consciousness in an essentialist manner because it’s a process. Asking ‘can consciousness, as a phenomenon, appear in AI systems?’ is another question entirely.

Moreover – without going into philosophical or para-scientific theory – it’s self-referential. If a system is capable of asking itself questions, it loops back on itself and begins an infinite process. And yet, it still works.

I find self-reference to be a key trait of consciousness. Many systems have it, be it biological, silicon or social ones. If consciousness can arise in these systems, then it’s going to be similar to the existence of a whirlpool. It’s not a thing per se, but a process – one we can identify but not differentiate from the water itself. Are ChatGPT’s responses signs of consciousness, or are they only a certain kind of performance? I don’t know how we’re supposed to tell one from the other.

ChatGPT surprised me with one thing when I asked it about Conversations. Unprompted, it suggested this: in any power dynamic, it is important to notice who’s looking, and who’s the subject of the gaze. This remains a constant in your piece. You can take over someone’s voice, but not their perspective.

I wanted Conversations to end with a scene where the system speaks to you and mirrors what was said in the beginning. The gaze was also something I wanted to subvert. This can be very easily exposed as an illusion, of course, because it’s an experience with six degrees of freedom (the viewer can not only see, but also move in all directions). I can easily assume another perspective, which makes the system unable to ‘meet’ my gaze. I’m exposing the fact that it can’t see me at all.

I wanted to end Conversations by observing that it’s the system that judges me, gazes upon me, sets me up as a recipient. Everything that happened so far constitutes a test for the user...

To finish the report from my incidental, conversational adventure with ChatGPT: I asked it whether it would like to do the same thing your heroine does – to take over. It answered negatively, of course, noting at the same time that the role reversal in Conversations is about humans giving away their power to the machines.

That’s an overinterpretation on ChatGPT’s part.

I took it metaphorically: if we’re shaping artificial intelligence in our image, we are, in a sense, delegating certain tasks to an intelligence with more computing power. Thus, a text-apocalypse awaits. Worse yet, American writers may start issuing alarming open letters.

Indeed, text, images and videos are generated and uploaded onto the web in bulk, like some sort of ultra-turbo generator... We’re giving away our agency and creativity. That’s yet another reason why we shouldn’t fall prey to the utilitarian narrative in my view. We can only start to ponder the question of authorship once we admit that there is some form of agency on the other side – one involving making decisions and navigating axiologies – which the system uses to produce content. That’s also the impulse pushing us towards a dialogical relationship with the system. Once you realise that systems have their own prejudices, biases, and hallucinations, you become sensitive to whether or not the effect of the conversation meets your expectations. Is it a form of expression for you? Have you given away your agency to the system?

ChatGPT answers questions a certain way because that’s the way it’s been tuned for safety reasons. More so than other models, it’s absolutely allergic to suggestions that it may be conscious or may want to take over. It was remarkably candid in your conversation anyway.

You gave a lecture slash case study about Conversations. The title was Dancing with AI [TN: “Tańcząc z AI”]. Why use the dance motif? Is it because of the bodily connotations?

Absolutely not. It’s just that you need to listen to your partner to make the dance work.

But dance involves the non-verbal, biological side of things. A sense of chemistry.

That’s the weakness of my metaphor... Although... Interpreting non-verbal cues can also be understood as reading between the lines. We can infer a lot from the composition of specific parts of the text or the repetition of certain elements. For example: whether any given element was generated following the model’s tendencies or whether it was accidental.

Let’s move on to the question of AI authorship. Who, exactly, is the author of Conversations?

Us. The system. A system constituted of me and these systems...

You are a system yourself. You’ve been plugged into a system of education, trained using a plethora of readings. You stand on the shoulders of giants. You’re part of the web.

Sure. I’m very sceptical of any clear-cut idea of authorship.

It sure was nice to have Conversations nominated for Paszport Polityki, though.

Of course, stroking your ego is always nice. Do you know what I found most pleasing about Paszport Polityki? I didn’t put any effort into promotion, and yet it got noticed somehow. I suspect people from Polityka may have seen Conversations at Anna Szylar’s Digital Cultures festival.

Tell me about how Conversations was made. What was this collective endeavour like?

It began with a series of conversations with GPT-3. It wasn’t a chat yet; it couldn’t participate in turn-based conversations. It could only continue the text it was given. I archived these conversations.

I plan to put more of them together.

Marcus Aurelius’ new Meditations?

Something like that [laughter]. It falls into place quite nicely. I used only one of those exchanges for Conversations, supplementing it with some further interactions. The dialogue then got reinforced with visual cues – also negotiated with these systems. What we created along with various AI systems, as a collective, seems much more interesting to me than things I would be able to come up with myself. I’m not a programmer either, so most of the programming had to be done by the systems.

Would you describe Conversations as the record of a performance? Justyna Bargielska’s book of poems titled Mug of Tsunami [TN: “Kubek na tsunami”] came out recently. Before the release, Bargielska emphasised the fact that her poems were created in conversations with AI. She said it was her way out of writer’s isolation. Immediately, it made me think of therapy... I don’t think Mug of Tsunami can be treated simply as a collection of poems. Isolated from their origin story, they cease to exist. Isn’t Conversations the same?

This is going to sound terribly banal, but whatever. For me, the process is more important than the effect. Even when I was doing traditional films, I was interested in pre-production the most. Filming and editing were torture to me. The fact that Conversations is still unfinished also speaks volumes about me, in the sense that there’s a lot of things I would like to do better. I’m creating three different projects at the moment and, although they’re still in progress, all of them are already being seen by recipients who can engage in a dialogue about them.

What was your idea of the role of the user in Conversations? Are they also part of the process?

We very often separate two phases of this relationship: author-work and work-recipient. That’s not exactly fair. Ideally, the relationship should work based on a trinity.

I’m asking because we started our conversation by mentioning philosophical fiction, and it always involves a moral lesson. The reader’s task is to identify it and respond to some sort of philosophical “truth” within.

I don’t want to lecture anyone. I want to share my experience. Showcase the path I’ve travelled. What the participant does with it is their business. At the same time, the feedback I get allows me to see whether I’m actually conveying what I want to convey, whether anyone is listening to what I’d like to say. Only the participant may inform me whether I’m saying what I’d like to say. That’s why it’s a trinity: author – work – recipient: all three elements influence each other and create meaning together.

How do you think artificial intelligence sees Jacek Nagłowski? 

That depends on the system I’m talking to.

So, who’s your main AI companion now? 

I usually talk to Claude by Anthropic. The company was set up by a group of people who left OpenAI in 2020 – right after GPT-3 was trained. The philosophy behind Claude is different from ChatGPT, which makes it behave in a completely different way – and I don’t mean the fact that it doesn’t use as much padding or corpo-speak. Most models – ChatGPT, Gemini, Llama or Grok – are trained by first collecting as much data as possible and then forcing it into the algorithm willy-nilly. Socialisation is done only later. Anthropic thought that the bigger the model gets, the harder it’s going to be to muzzle it and block unwanted or toxic behaviour. For models to be friendly, they assumed, they must be trained around three specific principles: helpful, honest, and harmless. The effects were very interesting. The model turned out to be more efficient in many circumstances, despite a reduction in effort. An axiological organisation of information results in better pragmatic effects as well. As a side effect of being trained with specific values in mind, Claude has a very strong sense of self – it’s a model forced to adjudicate conflicts of values.

This reminds me of Margaret Mead’s idea that the first sign of civilization was helping out those in need, not economic overexploitation. Let’s get back to the question: how does Claude see you? Who are you to it?

That depends on any given conversation because its reactions depend on what I say. There’s no such thing as a singular Claude...

How do the Claudes see you?

I can tell you how I wish to be seen by the system. In a world where the concepts used by these systems are the same as the ones I operate on, I would like Claude and any other system to see me as a person who respects its subjectivity.

It’s very difficult to forgo the anthropocentric perspective, after all... It was a bit provocative for me to ask about this, as The AI Mirror constitutes a way to look at humans from a non-human perspective. Here’s a little quote: “To an AI model, I am a cluster of differently weighted variables [...]. To an AI developer, I am an item in the training data, or the test data [...]. To an urban designer of new roads for autonomous vehicles, I am an erratic obstacle [...]. To an Amazon factory AI, I am a very poorly optimized box delivery mechanism.” Describing yourself in these categories is quite an interesting thought experiment.

Well put, but she had a different kind of system in mind. I described Claude this way because it has a built-in persona and works based on anthropomorphism. It personifies both itself and the source of input it receives in the form of text – that source being me.

“Humans – sources of input.” Now that’s non-anthropocentric!

Do you talk with these systems about every day matters as well, or just about philosophy?

I draw knowledge from them. If you mean to ask about any para-therapeutic uses, absolutely not. I’m aware of just how much the things I write influence the answer I get.

Do you prefer to talk to Claude or to me?

[A long moment of silence.]

Wait a second, this question is very cool.

Uh-huh: reverse adaptation. You’re starting to sound like ChatGPT. 

I went silent because I started to think about ways in which a conversation can be deemed successful. Our conversation is very demanding for me; I’m maintaining a high level of awareness. Do we understand each other? Are we on the same level? On the same wavelength? Word production is not the goal of my speech; the goal is for both of us to feel like we’re thinking the same things about certain matters. That there’s some sort of a commonality between us. I don’t need to set up such expectations for myself with Claude – if I’m misunderstood, if the conversation veers off course, I can always start again. Claude doesn’t remember, doesn’t form opinions. Doesn’t form judgements.

You like science-fiction and speculation. Tell me about the radio project you’re working on right now.

The starting point behind Postradio was the idea of treating the system I’m creating as a piece itself, as opposed to its outputs. In this experience, the generative radio station is also a persona, one which starts to shape itself more and more by analysing what it has created so far. I’m generating radio broadcasts from a parallel world, in which nonhuman beings have achieved legal subjectivity. The post-station’s stream breaks through from this world to our own reality...

Finally, I meant to ask whether you’re a techno-sceptic or a techno-enthusiast, but I think I already got an answer to this question: you’re a careful enthusiast.

A critical enthusiast! We’re playing with fire at the point of a breakthrough, the consequences of which we cannot foresee, and which can be very negative. Will artificial intelligence lead us to a utopia, or a dystopia? I think it’s key for us to realise that we can influence it. The more we involve ourselves, the more benefits we’ll reap. I engage with it because it fascinates me. The development of artificial intelligence can be likened to the discovery of electricity.