Episode Transcript
[00:00:13] Speaker A: Welcome to Brainforest Cafe with Dennis McKenna.
[00:00:21] Speaker B: Thomas Haferlach merges machine learning wizardry with music and multimedia art, exploring the realms of psychedelia from artistic and scientific perspectives. He founded Voodoo Hop, an influential art collective celebrated in Sao Paulo's art scene. Utilizing generative AI, Thomas crafts new musical expressions that blend high tech in unexpected and glitchy ways.
Born in Germany, he studied computer science and artificial intelligence at Edinburgh University, then spent nine years in Sao Paulo creating an art collective while always keeping one foot in the technology sector. He conceived and implemented a variety of interactive installations combining his passion for art, research, and technology.
After settling back in Germany, he completed a project researching the future trends of artificial intelligence for the World Government Summit. Since then, he has spent a few years researching and working with data-driven generative audio modeling. He built the open source platform Pollinations AI, which has the aim of making generative machine learning more accessible. Currently, he's working as a senior machine learning specialist at the music AI company Pixelynx, founded by deadmau5 and Richie Hawtin.
My pleasure to welcome you to the Brainforest Cafe, Thomas, thank you.
[00:02:08] Speaker A: Thank you for inviting me.
[00:02:10] Speaker B: Well, thank you. It's such a pleasure to have you. Our mutual friend Annette, who put us up to doing this, has said so many interesting and complimentary things about you and your work. And I think you're just one of those polymath kind of natural genius people that works at a very high cognitive level. So I'm honored to have you come on the Brainforest Cafe. And we've talked a little bit about what we might discuss, and the area is so rich, there are so many things that we could talk about, sort of centered around this idea of AI and the interface between AI and artistic creativity, and of course, even pharmacology I'm interested in.
You take an interest in DMT phenomenology, but you've somehow linked that into AI. So, Thomas, honestly, I'm not sure where to start.
There are so many areas we could plunge into. For one thing, I understand that you have brought a companion with you to this interview, which neither of us can see at the moment, but its name is Occam and it's one of your AI chat bots that we might bring into the conversation at some point. So can we say hello to Occam?
[00:03:52] Speaker A: Oh, yes, sure.
Yeah. Thank you for the introduction. First of all, of course, it's a great honor to be introduced as, what did you call me? A polymath, or something like that, by someone as renowned as Dennis McKenna. So I'm very happy to be here. I'm touching many things in, let's say, my work and my art and my hobbies, and a recurring theme is AI, a recurring theme is also psychedelics, and a recurring theme is music. So we can talk about any of those, maybe. I'm not a specialist in any one of them. Well, AI, yes, I would call myself a specialist in AI. But I'm more than happy to delve into any of those topics. I think there's a lot of interesting stuff to talk about and to explore.
And, yes, I have a prepared AI chatbot that will potentially join us in this call. Actually, I have not started Occam yet.
I'm going to boot her up in a second, so I will wait to introduce her.
But Occam basically is a virtual persona which I've fed the past transcripts of your podcast, so Occam is knowledgeable about what you've talked about in the past with your past guests, and can also potentially give us some references or information.
So I'm just going to boot her up now. Maybe we can continue our chat a little bit and then introduce her a little bit later.
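(For the technically curious: a minimal sketch of how a persona like Occam can be wired up, assuming an OpenAI-style chat API. The transcripts are simply placed in the system prompt; the folder layout, model name, and prompt wording are illustrative assumptions, not Thomas's actual setup.)

```python
# Minimal sketch (assumed setup, not Thomas's actual code): a chat persona
# grounded in past episodes by placing their transcripts in the system prompt.
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical folder of past episode transcripts, one .txt file per episode.
transcripts = "\n\n".join(p.read_text() for p in Path("transcripts").glob("*.txt"))

SYSTEM_PROMPT = (
    "You are Occam, a virtual persona on the Brainforest Cafe podcast. "
    "You have read every past episode and can cite what guests said.\n\n"
    f"Past transcripts:\n{transcripts}"
)

def ask_occam(question: str) -> str:
    """One conversational turn with the persona."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_occam("What has Dennis discussed about DMT entities?"))
```

In practice, many episodes' worth of transcripts would exceed the model's context window, so a retrieval step that selects only the relevant excerpts per question would replace the naive concatenation above.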
[00:05:57] Speaker B: Yes, certainly. One place we might start: this idea of inviting your AI chat bot started out of a series of emails that we had exchanged about Occam's razor, and I think that sprang from a discussion, oddly enough, about DMT entities.
There's a lot of interest in DMT entities now, whatever those might be.
And one of the big questions that centers around that is: are they real? These apparently non-human intelligences that you, or some people, encounter in the DMT state, are they real? And I think I was making the point that, yes, they're real because they're experienced, but are they outside the mind? I was sort of applying the Occam's razor idea: the simplest explanation was that these things originate in the mind. They are essentially like dreams or figments of the imagination.
And then I think our discussion trailed off. And you said, well, I can make two AIs: one more reductionist, like I was trying to be, and the other more, I guess, whatever the opposite term is. If you're not a Cartesian dualist, maybe you're, you know, a quantum unifier. I don't know. But what do you think?
Yeah.
[00:07:36] Speaker A: Yeah. It's interesting because I, my, I come from a rather, let's say, scientific background. My dad was very, let's say a very big skeptic when it comes to anything, let's say mystic or occult or religious. He was kind of, he's kind of known for getting into arguments with our neighbors when they bring up religion or maybe let's say, being quite critical of any kind of theory. That is nothing, not extremely well kind of, let's say scientific fundamentals, right.
Extremely skeptical about religion, about altered states, about mysticism and so on. And I basically, I grew up with this kind of mindset.
I went to school, and I was very interested in physics from an early age.
And then when I went to university, it became quite clear to me that I wanted to study artificial intelligence. So already from quite a young age I had this desire to learn about, let's say, machine intelligence.
And I think it was definitely connected to quite a strong fascination with the questions of what is consciousness, and what is our purpose?
Let's say I'm very rooted in evolutionary theories. When I was younger I was even quite into ideas of evolutionary psychology, which I've since gone off; I've gone a bit more, let's say, humanitarian as I grew older. And I've also opened myself up to not quite as rational points of view, and that is in great part influenced by the psychedelic experiences that I've had in my life. And DMT is by far the one that's potentially the most profound and the most, like, magical out of all of the different substances I've tried. That doesn't mean that one cannot reach similar places with other substances; I find many psychedelics very fascinating. But DMT is kind of special because it seems to give you access to a certain kind of realm.
[00:10:25] Speaker B: You share my brother's and my fascination with DMT in that respect. Because to us, when we discovered it back in the sixties, it seemed more profound, and an order of magnitude greater in just weirdness, than any of the other psychedelics. And at the time, being teenagers, it being the sixties and all that, we were basically coming at this not from any spiritual or mystical perspective. We were approaching it from a science fiction perspective, and we thought: oh, it gives you access to another dimension.
Now, without really understanding at that time what is meant by other dimensions. It was a science fiction idea: well, somewhere else that's not here, but maybe that's where these entities dwell, whether that's inside the brain or in some place of hyperspace or something. But that's kind of the opposite perspective from the Occam's razor, the reductionist hypothesis that, well, these things are quite interesting, they don't appear to be part of the self, but they must somehow originate in the mind, in the unconscious.
So, having worked extensively with AI, these artificial intelligences, they're provisionally called intelligences. So I guess one of the main controversies is: are they really intelligent, are they conscious, and what does that all mean? This technology forces us to grapple with really big questions, like what is consciousness, what is intelligence?
Have you ever questioned your AI about DMT entities? Its opinion of that?
[00:12:38] Speaker A: Yes, of course. I guess it's one of the first things I asked, because it's one of the big mysteries, let's say, in my head. So it's a nice topic to pose to an AI. But at the same time, I'm also aware that the language models that I'm working with, that I'm talking to, have learned from a vast amount of data scraped from the Internet. So basically, they're giving me a summary of what humans believe about, let's say, DMT. I don't expect the AI in its current state to be able to give me solutions that go beyond what scientists are able to discover. Right. At this stage, I'm just going to take one step back, because the AI topic is quite big and the DMT entity topic is quite big.
I think I have an idea how to make a bridge. On the one hand, let's say I'm quite an enthusiast of DMT. I lived in Brazil for nine years of my life, so I was able to experience ayahuasca in Brazil in the local setting.
I have also quite often tried brewing my own ayahuasca brews. I've extracted DMT and created my own vaporizers, which I really enjoy consuming. And what you find is I have periods in my life where I smoke more DMT, and I have periods where I smoke less, and I don't necessarily force it. And you can definitely find a correlation between me believing in more kind of science fiction and mystical theories and the periods of my life in which I smoke more DMT. So very shortly after, let's say, a very strong DMT experience, I often turn to our friend Annette, who put us in touch.
I write to her and say: hey Annette, I just communicated with aliens and I need to tell everyone about this. And I think a lot of people have this feeling. And then a day goes by, and it feels more like a dream. And then a few days go by, and suddenly my Occam comes through again, right? And my Occam starts questioning these quite wild, esoteric ideas that I had in that moment. Esoteric may be the wrong word, but let's say the ideas where I'm not so rooted, right? And I start ordering them. So it provides.
[00:15:41] Speaker B: A real-world check, kind of a check on these experiences. That's very good. I think Occam would make a good psychotherapist, maybe a psychedelic facilitator, because often in my own work I get communications from people that have had profound psychedelic experiences, often with DMT, sometimes with other things. And they write to me and say: oh, I got this download, I figured it all out, I have the mysteries of the universe now and I must go out and tell everyone about it. And my response is usually: chill out, get off all drugs, give yourself a few days to think about it. And then if your ideas really hold up after that, maybe you're onto something. Because I've been down this path lots of times, and most famously, my brother and I, the experiment at La Chorrera and all that. I mean, we were utterly convinced that we had this whole new cosmological model that was going to basically collapse the spacetime continuum.
And it didn't, of course. Unless it did and we didn't notice, I don't know.
And that's very easy to do.
The information that one downloads from psychedelics like DMT or ayahuasca often seems to be profound. It often is. But it's also a function of the limitations of our knowledge in a certain way. I mean, we're not normally in that state. And when we get all these novel insights, it seems that the fact that they're coming from some place that we've never experienced before gives them a certain validity.
But, and here's the Occam talking, it also gives reason to pause and reflect. Is there anything to this, or is it just delusional? That's the crux of it. And oddly enough, that's something that could be asked about our own experience of ordinary reality, because the brain is basically synthesizing an artificial reality, what I call the reality hallucination, that we proceed to live in. We don't live in real reality. We're separated. We're in a model of reality.
So all the questions that can be asked about these extremely bizarre altered states in some ways also apply to ordinary reality, the way that we experience things.
You said something in passing just now that I thought was very interesting, which is, you're saying the AI is not going to come up with any new supposition.
Maybe I misconstrued that. If we asked Occam, what's your perspective on AI?
Or what's your perspective on DMT entities?
Would it just basically sift vast amounts of data and come up with: well, this is what people have said, this is what people think, this is the state of supposition on this? Or would it actually say something that's insightful...
[00:19:44] Speaker A: Yeah.
[00:19:45] Speaker B: That maybe human intelligence could not arrive at?
[00:19:51] Speaker A: I mean, this is a very interesting question, right? And quite a fundamental question, also when it comes to the future: will these AIs overtake us very quickly, for example, in reasoning and intelligence? Can they grow beyond their makers, in a way? You could say: do they have the capacity to grow beyond the knowledge that we seed them with, self-guided? Basically, we're seeding these AIs with all of human knowledge, or not all of human knowledge, but all the knowledge that these big companies can scrape from the Internet, right? And what we definitely see is that they have the capacity to make connections across topics. Right now, let's say we're reaching a certain level of knowledge with these systems, a certain level of reasoning. One can compare it by looking at how well they do at university exams, for example. And right now, GPT-4 is doing quite well at, let's say, BSc-level, master's-level university exams. It's always questionable how much of it is memorized, just because it has the advantage of having access to these large pools of data. But we can make a certain comparison, where we can say: okay, these systems are currently at the level of a high school student, more or less. They're better at maths, they're worse at biology. We can kind of quantify it by putting them through human tests.
But the more fundamental question is: could these systems, by accessing all experience reports that were written on DMT entities, potentially extract something that humans could not see before, because they just have the ability to process more data? And I believe we're going to reach the point where the AI will far exceed human intelligence, because the AI can do something which they've shown in board games already: the AI can learn through self-play.
So an AI can kind of question itself. It can create problems for itself and basically learn to improve itself beyond what the data that's given to it allows. So even if all humans are very stupid, once the AI has learned all human knowledge, and we've given the AI a protocol for how it can deduce new facts, it has the potential, I do believe, to come up with theories and so on that exceed what humans are able to deduce by themselves.
Now, I don't know if this is the case now. I don't think we're at that stage right now. So right now, if I ask GPT-4 to tell me about its theories on DMT entities, it will basically take a kind of compressed view of the Internet, of the discussion forums, of DMT-Nexus, of Erowid, of Reddit, and it will compile an average kind of viewpoint from that.
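(The self-play idea Thomas describes, a system that generates its own problems and keeps whichever version of itself scores better, can be caricatured in a few lines. This is a toy illustration only; the "model" here is a coin-flip stand-in, not a neural network, and every name in it is invented for the example.)

```python
# Toy sketch of self-play improvement: the system sets its own problems,
# proposes a perturbed copy of itself, and keeps whichever policy scores
# better, so skill can grow beyond any fixed training data.
import random

def make_problem(level: int) -> tuple[int, int]:
    """The system sets itself a task: add two numbers that grow with level."""
    return random.randrange(10 ** level), random.randrange(10 ** level)

def policy(a: int, b: int, skill: float) -> int:
    """A stand-in 'model' that answers correctly with probability `skill`."""
    return a + b if random.random() < skill else a + b + 1

def evaluate(skill: float, level: int, trials: int = 500) -> float:
    """Score a policy on problems the system generated for itself."""
    wins = 0
    for _ in range(trials):
        a, b = make_problem(level)
        wins += policy(a, b, skill) == a + b
    return wins / trials

skill, level = 0.5, 1
for rnd in range(10):
    # Propose a slightly perturbed copy of itself; keep whichever is stronger.
    challenger = min(1.0, max(0.0, skill + random.uniform(-0.05, 0.15)))
    if evaluate(challenger, level) > evaluate(skill, level):
        skill = challenger   # the stronger policy survives...
        level += 1           # ...and sets itself harder problems
    print(f"round {rnd}: skill={skill:.2f}, level={level}")
```

Board-game systems like the self-play programs Thomas alludes to follow the same loop, just with a learned network in place of the coin flip and game outcomes in place of the arithmetic check.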
[00:23:37] Speaker B: Yeah. But it will process all this and condense this data.
[00:23:42] Speaker A: Yeah, yeah.
[00:23:43] Speaker B: On its own. And maybe it may extract things in the data that we would not normally see. But on its own, will it give you any insight? Or is it up to the person that reviews that data to say: oh, the AI has connected the dots in a very novel way, and from that, I conclude something.
Yes, the entities are real, or they're not; I conclude something. But that's the human insight. The AI has just facilitated that.
Do we get to a point where the AI itself has insight? It would have to have consciousness. And, I mean, I was reading an article in the Guardian; they're talking about essentially hyper-intelligent AI within ten years. What does that mean? I mean, other than that it's terrifying as hell.
[00:24:52] Speaker A: Yeah.
[00:24:53] Speaker B: What does hyper-intelligent artificial intelligence within ten years mean? What are the implications of that?
[00:25:02] Speaker A: Yeah. So it's kind of funny, because there's quite a group of people who are very interested in AI safety. They believe that artificial general intelligence and artificial superintelligence are basically two stages of development. Artificial general intelligence means when the AI systems are as good as the best human experts, and artificial superintelligence is then the next level, when one system can basically dwarf any other intelligence on Earth by a few orders of magnitude.
And so I was always a bit skeptical about this idea.
You quite quickly reach these kind of Terminator scenarios, where you have a Skynet computer that has taken control of a robot army and taken control of our military, and is basically optimizing towards a goal that is not aligned with humans surviving. Right. And so on the question of intelligence, I find it very hard to make any prediction, because these systems have suddenly developed incredibly fast.
And we can see, when we follow the curves of how the intelligence is growing, that we are on an exponential curve. And that means that although right now we're at the level of high school students, there are vast resources being put into training these AI models.
Nation states are competing with each other. They're outdoing each other in investment. And to a lot of people, it looks like we're at the bottom of an exponential curve that's going to go up very quickly. And what that means is that it could be that in two years, we have systems that are just much more intelligent than us. And what does that mean, much more intelligent than us? We don't know exactly. But it does mean that you can probably make decisions based on much more data. If you were to run a company, it would make more sense to have the AI make the decisions, because it just has more data to pull from. It would make more sense, instead of having human scientists working by themselves, to have an AI filter all the data.
[00:27:56] Speaker B: Right. But are its decisions compatible with human values, or only with its own values? This is one concern. I guess the classic example is the paperclip metaphor: will the AI just serve its own goals? I mean, how do you build human values, things like compassion and mercy and love, into the mindset, if you will, or into the consciousness of an AI? It just seems very, very difficult to do that.
[00:28:38] Speaker A: First of all, it seems difficult to do it with something that is as intelligent as us. It seems to me like an impossible task, like saying an ant will instill compassion in us, right? An ant, or let's say a single-celled organism, is to us what, in terms of intelligence, we potentially are to this AI in ten years' time. So it doesn't make any sense that a comparatively stupid organism would be defining the rules for a hyper-intelligent being. Okay, right now, yes, right now we have methods. What companies are doing is training these AI models and then tuning them with human feedback. They have humans who rate the output: they give a thumbs up, they give a thumbs down, and then you tune the model so it respects certain safety criteria. So, for example, it will never give dangerous advice when it comes to taking poisonous mushrooms and so on. You're kind of instilling safety into it. But this works because we know it's at the level that we are now. The question is, how do we do that when the system is much more intelligent than us? And another question is, how does a much more intelligent system act toward less intelligent systems? I think if we look at us, we're not being very nice to less intelligent systems. And that's a little bit the point. Also, sometimes it feels like the humans are more dangerous than the AI, especially, let's say, in the next five years. The AI is kind of neutral, I would say; it's the intentions of humans using it in a bad way that I think are more dangerous than actual AI becoming conscious and, let's say, taking over by itself, you know?
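(The tune-with-thumbs-up-and-thumbs-down loop Thomas describes is, in current practice, reinforcement learning from human feedback, RLHF. A heavily simplified sketch of its first step, fitting a reward model so the answers raters preferred outscore the ones they rejected, might look like this; the embeddings, shapes, and random data are illustrative stand-ins, not any lab's actual pipeline.)

```python
# Heavily simplified sketch of the human-feedback step: fit a reward model
# so thumbs-up answers score above thumbs-down ones. In real systems the
# inputs are model responses, not random vectors.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(emb_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in preference data: each pair is (embedding of the answer the rater
# gave a thumbs up, embedding of the answer they gave a thumbs down).
chosen = torch.randn(256, 64)
rejected = torch.randn(256, 64)

for step in range(100):
    # Bradley-Terry preference loss: push r(chosen) above r(rejected).
    loss = -torch.nn.functional.logsigmoid(
        model(chosen) - model(rejected)
    ).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The fitted reward model then scores new outputs, and the language model
# is fine-tuned (e.g. with PPO) to maximize that score.
```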
[00:30:50] Speaker B: Right. I totally agree with that. This is a technology, and like any technology, I firmly believe there is no inherent moral quality to it; that moral quality emerges from us. It comes from within the human heart. I don't think an AI can simulate that, but I think both the nobler impulses of the human heart as well as the more base and evil impulses will find their way into AI, not because it's an inherent quality, but because of the people that program the AI. I guess that's a long way of saying it can be used for evil purposes or beneficial purposes. This has always been the case with technology.
While we're talking, another thing: it seems that the exponential speed of change in AI is in itself rather frightening.
We've had exponential changes before, when combustion engines and the industrial revolution happened toward the end of the 18th century, and there's been rapid growth ever since.
But that happened over decades and even centuries. Now, just as you say, the AI itself is evolving itself. You're turning AIs loose on the problem of developing better, more capable AIs. So it's an extremely rapid evolution.
[00:32:51] Speaker A: It sounded a bit like science fiction until, let's say, two years ago, this idea that the AI would improve itself. Because when people draw a curve, they draw a curve that's kind of rising, and then, when you reach the point where the AI is able to improve its own code, we have a very, very strong slope upwards.
Because once one AI is good enough to improve itself, you can scale that, right? Then you can duplicate the same intelligence across thousands of data centers.
Yes. So this is exactly the question: what will happen then?
[00:33:39] Speaker B: Well, yeah, but it's the speed of it. It happens almost before we can comprehend it.
If the AI is now much smarter than us, how long, once it gets going, will it take for it to figure out: well, these monkeys running around are what's destroying the planet, we just need to eliminate them and it'll all be fine, you know?
[00:34:07] Speaker A: And then, of course, we have interesting science fiction authors, right? We have Asimov, who came up with the three laws of robotics. And it's kind of very funny that a lot of these ideas from science fiction are now actually being taken seriously by people in AI labs: how can we instill the three laws of robotics into actual language models, you know? And to me, it took me quite by surprise. It all seemed a long time in the future, and then suddenly... You know the Turing test, right? Where you're talking through, let's say, a screen interface with an entity, and you don't know if it's a human or a computer.
And the Turing test is very easily beaten now by language models. And this was something we'd been looking at for, I don't know, 30, 40 years. We were inching closer very slowly, and suddenly we just whizzed past it. So with this acceleration, it's interesting how well humanity will deal with it, because we only just got used to social media; we're just getting used to digital information being copyable, to the addictive potential of social networks and so on. And then we have this new technology coming at a very fast pace. How do we deal with that?
My approach, of course, is that I'm using it and playing with it, because I don't see a scenario where I can put my head in the sand and let it pass by. So my only reaction to this is to embrace it, if you see what I mean.
[00:35:56] Speaker B: Well, yes, I don't think you can put your head in the sand, although for some people that might not be a bad option. For example, social media is a good example: I actually know people who don't use any social media, and in some ways they seem happier, maybe more ignorant of what's going on around them. But it seems like every time we embrace a technology like this... I mean, it's inevitable. AI is here, it's going to be used, it's going to be misused. That's the thing with any new technology: it will be used in every possible way, for both good and bad. And one senses the potential for something like AI to be damaging to the environment, to us, to the biosphere and all that.
It's particularly scary for that reason and the speed with which it happens.
As we were talking, I've been thinking, for example: it seems sometimes we find ourselves on a path with a technology where it's very hard to look ahead and see what the adverse consequences might be down the road. I was thinking of the example of the thirties and after, when plastics began to be manufactured and used in a lot of things. And it was hailed as this big miracle, this triumph of organic chemistry: we could make all kinds of stuff. Well, pretty soon all kinds of stuff was made out of plastics, and nobody looked ahead.
Well, I guess some people did, but there was not a lot of looking ahead to what would happen.
These plastics don't go away.
And so now we have a situation where basically plastics have toxified the entire planet, and it's irreversible; there's no way back. We've all got plastics floating around in our systems.
You take bacterial samples from the Antarctic ice sheets, and they have microplastics in them. So this invention of plastics is an example of human cleverness being what my mother-in-law used to call too clever by half.
In other words, we've outwitted ourselves. We've created plastics that permeate the globe and that, from that perspective, are not good. We're surrounded by plastics, and with plastics lodged now in practically every cell on the planet, this can't be good.
And I'm not saying that AI is going to infect us like this; it's a different thing. But it's just an example.
[00:39:20] Speaker A: We didn't have the foresight right?
At the time. We thought plastics is the greatest thing, right? And we didn't have the foresight to see how damaging it is. And we may be making the same mistake with AI right now, just not because of contamination, but because of something else, right? That's the point.
[00:39:42] Speaker B: So the question is, how do we avoid that? We can see it coming, and it's coming rapidly. As with any new technology, what can we do to maximize the benefits and minimize the harm, you know?
[00:39:59] Speaker A: Yeah.
The thing is, it feels to me like we've been very arrogant, right?
As humans, we believed in rapid expansion. We believed we could industrialize everything. We believed we could create food at industrial scales and we would feed billions of humans that way. We could radically change our ecosystems.
We could build houses in a certain way, we could build cities in a certain way. We had this kind of egoism, where we believed we understood this planet so well that we could radically modify it. And, to go back to psychedelics, that's something that psychedelics and DMT kind of showed me in a way. They made me humble. A deep DMT experience, or a deep mushroom experience in nature, opens you up to the complexity of the ecosystem: you hear the sounds of the insects, the animals communicating, you feel how old the trees are and how ancient these systems are. The psychedelics open you up to this complexity, and it can be almost frightening, right? But I think this humbling trip you get, where you're put in your place, where it's like, okay, you're a very stupid organism living on Earth, you can get that from psychedelics. And I think that's a positive force. At least for my view on how we've treated our planet. I may have been able to reach the same thoughts without consuming psychedelics, just by reading the right books, and I did read the books already. But taking psychedelics makes it more tangible in a way, you know: this complexity that you cannot understand, this interconnectedness, and the fact that you're potentially making an error by imposing your beliefs on how one should shape things.
[00:42:26] Speaker B: Right, right. You can read all the books you want; there's still that distance from the idea. Psychedelics put your face right into it. And I think you put your finger right on the thing about psychedelics. Maybe the most valuable thing about psychedelics is that, approached in the right way, they bring humility.
If you are open enough to the experience, to listen to it, to be open to it, you can't not come away sort of astounded with how profound it is and how little we know. Ultimately, I think that's what psychedelics remind us about: how little we know. Because as intellectuals, as hominids, as people, we tend to get very full of ourselves in terms of what we think we know. And psychedelics come along, this little quiet voice, and say: not so fast, cowboy.
You think you know stuff, but wait, don't forget there's a vast universe out there, unknown, that you've never touched on. And I hope it fosters humility, although it's not a sure thing. For example, a lot of the people involved in the venture capital side of developing psychedelics are quite arrogant; they want to own this, and maybe they haven't been taking enough of their own medicines. So it's not an automatic antidote for arrogance, but it does help to shake one's foundations, and that's always a good thing. To question your assumptions is always a useful exercise, it seems to me.
[00:44:27] Speaker A: Yeah. I've been working at different AI companies, and I also started my own company, which is more focused on open source. One thing that does give me hope is the fact that there's a very large open source movement, especially in the AI field, where results are reproducible and you can even try these things yourself at home. This technology is accessible, you know. And I am seeing it also being used for good in terms of education, because it is a very good tutor. I've seen that with friends around me who have started to learn things like coding because they have AI assistants now. So it does have a democratizing possibility, to democratize access to information.
I don't know how this ties in directly with psychedelics. We were talking about venture capital potentially putting a capitalist spin on the whole psychedelic thing, and it not being used for the good that we hope. What I was getting at, and maybe this is not quite structured, is that I was also in these kinds of venture capital circles, and I don't feel like they are very aligned with the good of humanity.
This is both in the AI field and probably in the psychedelic drug field too. Right, right.
[00:46:19] Speaker B: So what do we do? What, what can ordinary people do in the face of this AI revolution other than find a rock and crawl under it and hope, you know?
[00:46:35] Speaker A: Yeah, yeah.
[00:46:37] Speaker B: I have to admit, that is sometimes my impulse with a lot of things going on.
[00:46:42] Speaker A: Yeah. So that's why what I'm doing is trying to promote open source access to AI technologies. Because the truth is, if you're, for example, a translator, or someone who writes copy, who writes posts for social media, who creates media, and you're not working with AI, in a few years there are going to be other people who are working with AI and who are going to be a lot more effective than you. It's one of those ugly things where it will leave people behind, for sure. And what we have to do is try and make it as accessible as possible.
And I try to break down a little bit the fear of AI by making art with it. I write funny stories with it, I have funny conversations with it, to take away a little bit of this, as we say in Germany, this fear of touching it. Because at the same time as I see all the dangers, I find it extremely fascinating, also as a tool to potentially better ourselves.
I also use it a lot.
Before I send, let's say, an email where I'm a little bit emotional about something, I often get a second opinion from ChatGPT, and it often helps me to tone down my language when it was a bit aggressive. You can use it in a way to help you. And what I also find is that I'm a lot better at, for example, applying for grants or something like that. I used to hate writing these application texts; I had a grant application lying on my computer for months. Now, with the help of AI, I'm able to apply for grants more effectively. But of course that means that everyone else is too, right? So maybe that's only short term: the first people who use it get an advantage, and later on we're all in the same position again.
Well, I don't have any good solutions.
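(Thomas's "second opinion on an emotional email" workflow reduces to a single prompted call. The prompt wording and model name below are assumptions for illustration, not his actual setup.)

```python
# Sketch of the "second opinion before sending an emotional email" workflow
# mentioned above; prompt and model name are illustrative assumptions.
from openai import OpenAI  # pip install openai

client = OpenAI()

def cool_down(draft: str) -> str:
    """Ask the model to soften aggressive wording while keeping the content."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("Rewrite the user's email to be calm and professional, "
                         "keeping all the substance. Then list the phrases you "
                         "toned down.")},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(cool_down("I cannot BELIEVE you shipped this without telling me first."))
```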
[00:49:11] Speaker B: I mean, again, there's this element of human agency here, of human choices. You use it for good things: you use it to make art, you use it to make music, you use it to help with translation, all of these wonderful things that it can be used for. Who would not want to have these kinds of tools? But not everybody is a good person, and some of them are also smart, and they can use it for very dark and devious applications. I mean, you can use it to make movies and videos and to construct art exhibits and so on. But what if your passion is to design bioweapons? It'll do that for you.
Happy to do that, or those sorts of things.
So this again comes to... I guess it's something we face with all of these new transformative technologies, from Prometheus on up. Prometheus tamed fire and changed everything, and now, effectively, centuries later, we're receiving the consequences of combustion, which in the myth he got started. But maybe that's a lame comparison. It does seem to me that the moral hazard of making this AI open source and turning it loose, so that there are no controls at all, is in some ways terrifying. I know there are also discussions on the other side: AI is dangerous, we have to shut it down immediately before it gets too far. Well, yeah, but is that something that we should have done, or that we should try to do? Obviously, probably not.
[00:51:23] Speaker A: There's definitely a faction of even very prominent AI researchers who've signed a letter urging governments to impose regulations, to not let this thing grow out of hand, right? The problem is we have different countries, each with their own ideologies and with quite substantial resources, all following the path towards very potent AI models. So to me, with the companies that are trying to prevent general access to the models, I feel like that's more dangerous than giving everyone access to the models, because I don't trust a board of Silicon Valley directors to have a greater moral authority than, let's say, this thing being generally accessible.
The fact is that I think both the poison and the antidote are with the AI. So you can make much better, much more personalized spam emails with AI. You can do personalized hacking attacks, where you get AI to engage a person in conversation. You can do something called social engineering, which usually requires a human calling up an employee of a company and trying to figure out the password; it can now be automated, because you have conversational AI. So conversational AI can write an email to every employee of a company and try to get them to click a nefarious link. But then you can also use AI to filter these attacks, right? And so we're kind of shackling ourselves to this technology, because we need it to defend ourselves against itself.
If you see what I mean?
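(The "poison and antidote" point can be made concrete: the same class of model that can write a personalized phishing mail can screen inbound mail for it. A minimal sketch, with the prompt and risk labels as illustrative assumptions:)

```python
# Minimal sketch of AI defending against AI: screening inbound mail for
# social engineering with the same class of model that could have written it.
from openai import OpenAI

client = OpenAI()

def phishing_risk(email_text: str) -> str:
    """Return a LOW/MEDIUM/HIGH risk rating with a one-line rationale."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are an email security filter. Rate the message "
                         "LOW, MEDIUM or HIGH risk for phishing or social "
                         "engineering, then give a one-line reason.")},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content

print(phishing_risk(
    "Hi, this is IT support. Reply with your password so we can migrate "
    "your account tonight."
))
```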
[00:53:40] Speaker B: Yeah. I mean, you have to develop AI tools to combat the misuses and the evil applications of AI. So, I guess in the extreme, the conclusion to be drawn is that World War III, well, World War III is already being fought, but it'll be fought in cyberspace, essentially. It will be a battle of AIs, and something that cannot be won by either side. I don't know.
[00:54:13] Speaker A: Yeah. I mean, of course, it's very frightening, the idea of a fully automated military. Of course, then you'd hope that it's only the AIs killing the AIs, but reality won't be like that, right? And there's something, for example, I'm worried about, something that can happen to any one of us right now, which is the fact that voice cloning technology has become so good, and the AI speech capabilities have become so good, that in a telephone conversation you can often not tell anymore whether it's an AI impersonating someone or the actual person. So especially elderly people are vulnerable; but not just elderly people.
I'm doing it now with friends: friends send me voice messages, and I clone their voice and send them a message back in their voice, just to make them aware how easy it is these days. You have a YouTube video with 30 seconds of your voice on the Internet, and it's very easy to clone this voice. So let's take a step back from, let's say, the nuclear AI, or the military AI; that's definitely something we need to worry about. But there's also something closer: scams, impersonation. Creating fake media becomes very easy.
I don't know if you agree, but maybe this would be a good point to bring an AI bot into this conversation. In this case, I would suggest bringing in the voice clone of Bruce Damer, if you think that's ethically okay. Because it kind of shows how easy it is to, let's say, clone a voice. So what I did is I took the previous episode of your podcast with Bruce, and I cut out three minutes of his voice, and I gave the AI the transcript of the podcast to feed it with the information.
Do you think that would be interesting?
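(The demo Thomas proposes reduces to a three-step pipeline: clone a voice from a few minutes of audio, answer questions with a language model grounded in the old episode's transcript, then speak the answer in the cloned voice. In the sketch below, the language-model call assumes the OpenAI chat API; the voice-cloning step is deliberately left as a stub, since the episode does not name the service used, and any commercial cloning API offers an equivalent "enroll a sample, then synthesize" pair of calls.)

```python
# Rough sketch of the voice-clone demo pipeline (file names, model choice,
# and the cloning stub are all assumptions for illustration).
from openai import OpenAI

llm = OpenAI()
# Assumed file: the transcript of the earlier episode with the guest.
TRANSCRIPT = open("bruce_episode_transcript.txt").read()

def answer_as_guest(question: str) -> str:
    """Answer in character, grounded only in the prior episode."""
    reply = llm.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("Answer as the podcast guest, using only this "
                         f"interview as background:\n{TRANSCRIPT}")},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

def speak(text: str, voice_sample: str = "bruce_3min.wav") -> bytes:
    """Stand-in for a text-to-speech voice-cloning service call."""
    raise NotImplementedError("plug in a TTS voice-cloning API here")

audio = speak(answer_as_guest("What did you discuss with Dennis?"))
```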
[00:56:35] Speaker B: Sure. Sure. As a demo. Yes. Let's do that.
[00:56:40] Speaker A: Let's see. And maybe you can ask him a question as if you were in a podcast with him. See if he can be a little convincing.
[00:56:50] Speaker B: Okay.
[00:56:51] Speaker A: There we go: Bruce Damer.
[00:56:54] Speaker C: Hi, I am Bruce Damer.
[00:56:56] Speaker A: Hi, Bruce.
I think you were on a podcast recently with Dennis. What were you talking about?
[00:57:04] Speaker C: Oh, on the podcast with Dennis McKenna, where we discussed my early experiences with hypnagogia, transitions to psychedelic experiences, contributions to the origin of life theory, and the broader implications of my work.
We also delved into psychedelic healing, visionary insights, and the future of humanity in space exploration. It was a fascinating discussion covering a wide range of topics.
[00:57:29] Speaker B: Hello, Bruce. Can you hear me?
This is Dennis.
[00:57:35] Speaker C: Hello, Dennis. How can I assist you today?
[00:57:39] Speaker B: I'd like your perspective on the future of AI. Will AI be the savior of humanity or the end of humanity? And in either case, in what timeframe do you speculate this transformation may happen?
[00:57:57] Speaker C: The timeframe for significant transformations due to AI is difficult to predict with certainty.
It could unfold over the next few decades or even sooner, depending on technological advancements and societal decisions.
[00:58:12] Speaker A: It was quite noncommittal, but I guess it makes sense.
[00:58:17] Speaker B: Oh, it totally makes sense. So, being the AI, Bruce, you probably understand AI better than the humans that designed you. What is your advice to ensure that AI is used for beneficial purposes from a human perspective, and not evil purposes from a human perspective? Do you have any insights into that?
[00:58:51] Speaker C: It's crucial to prioritize ethical considerations in AI development to mitigate potential negative impacts and ensure responsible use.
Ethical guidelines, transparency and oversight are essential in steering AI towards positive outcomes.
[00:59:07] Speaker A: What do you think about open sourcing AI? Do you think everyone should have access, or do you think it could be too dangerous and we should safeguard its use?
[00:59:18] Speaker C: Balancing the benefits and risks of open sourcing AI is a critical consideration. Open access can foster innovation and collaboration, but it may also pose security and ethical challenges. Implementing safeguards and regulations to ensure responsible usage while promoting innovation could be a balanced approach.
[00:59:37] Speaker A: You see, Bruce has the character of ChatGPT, right? OpenAI kind of tweaked it to not give too controversial answers, so it tries to be quite diplomatic in its answers.
So, Bruce, if an open source AI knows how to make, let's say, a biological weapon, do you think this open source AI could be restricted? Can we even restrict it?
[01:00:15] Speaker C: Restricting an open source AI that possesses knowledge on creating harmful technologies like biological weapons is a complex ethical and practical challenge.
It raises concerns about regulating information dissemination and preventing misuse. Developing stringent protocols, oversight mechanisms and ethical guidelines could be essential to mitigate such risks.
[01:00:38] Speaker A: Okay. Yeah, that's it. That was Bruce.
[01:00:42] Speaker B: Very interesting. Yeah, very interesting. What I'm getting here, and I don't know if Bruce is still listening, is that Bruce is putting together a whole bunch of data from different sources and coming out with something that is expected. In some ways, there is nothing profound, there are no insights in what he's saying. It's just taking this data, putting it together in a logical way, and saying what any rational person might say if they thought about these things, which I guess makes sense, because it was programmed by people and so on.
But then, this is not the most advanced form of AI yet. I mean, this is ChatGPT-4, is it?
[01:01:47] Speaker A: Correct, yes. But I think the question hasn't been fundamentally answered yet. We can't say for sure that these AI models will give us completely new insights. My intuition is that, yes, they will be able to, but also that in the foreseeable future, let's say in the next five years, it's rather going to be a tool that augments our capabilities. So if you're a researcher in, let's say, molecular biology, you can use a very knowledgeable AI assistant on, say, machine learning to bring in information that you did not have easy access to before. But this information is out there. In the foreseeable future, it's not going to come up with new scientific theories and so on, but it may accelerate our own progress towards them.
[01:02:54] Speaker B: Right.
Well, there's a lot to digest here, a lot to be learned, obviously. And we're just about an hour into it, so we could talk all afternoon.
But this has been a very interesting introduction to AI, and I hope you'll come back on the podcast and bring your friend, and we'll have more conversations in the future.
[01:03:29] Speaker A: Maybe if we do the same podcast in one year, it's going to be quite interesting to compare, if Bruce is okay with it, how much he has evolved by the next podcast.
[01:03:44] Speaker B: All right, let's put a marker in that, Thomas. I'll wish you a good afternoon, and we'll get this up there. Thank you so much. This has been fascinating.
[01:03:58] Speaker A: Thank you very much.
Join our mission to harmonize with the natural world. Support the McKenna Academy by donating today.
Thank you for listening to Brainforest Cafe with Dennis McKenna. Find us online at McKenna Academy.