This is the full, auto-generated transcript of Ethan Monkhouse | Inside AI that Can Read Minds: When Technology Sees What You Hide, a conversation between Chris and Ethan on the Mailander Podcast. Timestamps may be included to help navigate the episode. Please note: the transcript may contain minor errors.

Chris (00:00.00)

What did the AI see about you that gave you some pause?

Ethan (00:00.05)

I think the most jaw-dropping moment this year was in January. I was in New York and we were building out the third iteration of the core algorithm. And I ran it on myself. And one thing that we do is these growth blueprints. It identifies your entry point, where you are, what your current position is. And it also picks up on what parts make you unique. Like, what's your edge?

And for me, I ran it on my profile and my identity, and it came back and picked up on specific things in my childhood, which would definitely fall under childhood trauma, that I had never talked about to anyone.

Chris (00:00.54)

So the 35 years of my career precisely overlays this tremendous run up in technological change in the world that we've seen from the advent of the internet to global access through mobile and broadband technologies to the migration to cloud and SaaS based services and now to artificial intelligence. And at each step of that journey, that trajectory gets steeper and steeper, faster and faster. And the prospect of change is exciting and it is also at times very scary. Today's guest is working at that very edge of excitement and sometimes even scary, which is the founder of Naviro AI, Ethan Monkhouse. Welcome, Ethan.

 

Ethan (00:01.38)

Great to be here. Thanks for having me.

 

Chris (00:01.40)

So Ethan, one of the things that I wanted to tee up right away and have you walk through with me is some of the experience that you've had working at the edge, including with fine tuning some of your own AI models. And what happens at that edge? What do you see that's positive and then sometimes is even scary for you?

Ethan (00:02.00)

Yeah, I think every day is terrifying, but also groundbreaking in terms of what you can do with technology. Like, what we're doing is we're not chasing likes or reach with our clients and brands and stuff like that. We're chasing signals, pattern recognition. And that's where all these advancements in tech have really allowed us to push the boundary in that respect. So even for myself, all the tech we build, I run it on myself first.

And consistently now, I'm like, this is a bit too much now. We're going to have to dial it back, because it's able to pick up on those patterns, those signals that as humans we can't pick up on, because they're so slight. They're so subconscious that only when you have a process or a program to focus in on each of these individual signals, following every move and logging it, can you draw correlations. And that's when you start to see this pattern behavior that emerges from seemingly unrelated actions and tasks. Like, for example, if I have a load of posts on my Instagram where I'm going hiking and I've got, say, a bag on my left shoulder, and then in another post I'm carrying something on my left shoulder again.

You can begin to infer that there may be a risk of shoulder injury more likely on the left side than the right side. And it's these small patterns that you can't really analyze unless you have automation to actually do it at scale.
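
As a rough sketch of the kind of correlation Ethan describes, you can tally low-level signals across posts and surface the ones that recur. Everything here is invented for illustration (the signal tags, the post structure, the threshold), not Naviro's actual pipeline:

```python
from collections import Counter

def infer_recurring_signals(posts, min_count=2):
    """Tally low-level signals across posts and surface ones that recur.

    `posts` is a list of dicts with a hypothetical `signals` field, e.g.
    tags a vision model might emit per image. A signal that shows up in
    several seemingly unrelated posts becomes a candidate pattern.
    """
    counts = Counter(s for post in posts for s in post["signals"])
    return {sig: n for sig, n in counts.items() if n >= min_count}

posts = [
    {"caption": "hiking day", "signals": ["bag_left_shoulder", "outdoors"]},
    {"caption": "city walk", "signals": ["bag_left_shoulder", "sunglasses"]},
    {"caption": "beach", "signals": ["sunglasses"]},
]
patterns = infer_recurring_signals(posts)
# "bag_left_shoulder" recurring across unrelated posts is exactly the
# kind of slight, subconscious signal a human reviewer would miss.
```

The point of the sketch is only that the inference needs automation: no human scans every post for which shoulder a bag is on, but a counter over enough posts makes the pattern trivial to surface.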

 

Chris (00:03.36)

Yeah, fascinating. So tell me, break down for me a couple of key things that are within there. One of which is, with your technology, you're going out there and the AI is looking at any sort of public information that might be in the ether, social media signals, images, whatever it might be, and then is creating the inferences amongst those data points that it's able to see, and then creating, basically, projections, or it can identify behaviors or potential patterns that are of interest or even risk.

 

Ethan (00:04.14)

Yeah, exactly. It's very much like the roots of a tree. That's how I describe how it works: it needs to keep on diving and keep on digging until it deems it stable, or whatever it's searching for, it's got enough information. It's explored all the different avenues to be at that conclusion: I know enough now that I'm confident in giving back an answer.

 

Chris (00:04.38)

What did the AI see about you that gave you some pause?

 

Ethan (00:04.42)

I think the most jaw-dropping moment this year was in January. I was in New York and we were building out the third iteration of the core algorithm. And I ran it on myself. And one thing that we do is these growth blueprints. It identifies your entry point, where you are, what your current position is. And it also picks up on what parts make you unique. Like, what's your edge?

And for me, I ran it on my profile and my identity, and it came back and it picked up on specific things in my childhood, which would definitely fall under childhood trauma, that I had never talked about to anyone, like maybe one person I can think of. And that definitely didn't make its way onto the internet. And it was able to infer that from the slightest signals: how I speak, my posting cadence, when I post, how I post, what areas of the internet am I active on, what forums am I active on. When things are published about me, or I'm publishing things about me, how do I actually deliver that? And we all think we're special, that we all have our unique edge, but there's a lot of things that can be categorized, people can be categorized a lot deeper, when you're analyzing the level of data we're analyzing at volume. And so for me, when I saw that, I was like, okay, we can't release this version yet. It's a bit too intense. But it really showcased how phenomenal technology can be at picking up these signals we have no idea we're leaving.

 

Chris (00:06.31)

It's fascinating in the sense that it's going beyond, when you're talking about inferences, you know, the common, or my perception would be that, yeah, it's going to be able to tell me things that I am leaving on Facebook or Instagram or X or LinkedIn or whatever is posted out there, and pick up these signals that would tell me overtly about Ethan Monkhouse. But what you're describing is the ability of the technology to make inferences about your psychological profile, based on a history, based on childhood traumas, derived from how you speak, derived from times of day that you're posting or surfing or doing whatever it is. As well as, the nature of how you desire to present yourself also tells the artificial intelligence a lot about your composition and your psychological makeup.

 

Ethan (00:07.26)

It's terrifying yet amazing at the same time. Like, one of the specific things it picked up about me is I have mild OCD, and it was able to tell me this without me giving it any prompt. And how it figured that out was it was analyzing content that I'd put up. And just like you've got a background there with books and stuff on the shelf, it was logging over time the placement of these different objects.

And all of a sudden it noticed that in a few of my photos I had my sunglasses. I'd always have them like this in any photo I put up. But what was really interesting that it logged was that I never put my sunglasses like that. And then it collected that and put it off at the side, but it had a way of referencing it quickly. And then it picks up another small thing, and another small thing, and another small thing. And then eventually it arrived at the conclusion that, wait, this isn't just habits. This is arguably uncontrolled habit. And that's when it put it into my growth blueprint, quoting my mild OCD. I was like, that's not in any way online. And when I looked through the data, I was like, oh my God, it picked up on that. It picked up on, say, well, I didn't even realize I did this, but my shoes, when I get in at home, I will always put them together, irrespective of where they are in the apartment. They will always be put together, even if they're in the most random places. And all these small things where there is a balance between, is it OCD, is it not OCD? And so we tested it. We were like, okay, let's feed it data from people who don't have OCD with very similar behaviors, like, say, sunglasses going the same way, wearing bags the same way, something that could be mistaken for habit, and it could tell the difference.

 

Chris (00:09.25)

Interesting, fascinating. So talk me through how you do that process of fine-tuning, how that model works. There's a dimension of it where you're writing algorithms, you're prompting it to look for certain things. And yet there's something that it's doing on its own that goes beyond that. So I'm curious, and you know, when it comes to psychological profiling and the extrapolations or the inferences that it's able to make, the cross-correlations it's able to make, where is it getting that foundational knowledge? Is it able to go outside? Or are you putting that in there to help it do some of that psychological profiling?

 

Ethan (00:10.00)

Yeah, there's a lot that it does itself at this point. And it needs to, because it needs to make inferences from qualitative data and fit it into that quantitative model that we have. But where it starts off is your foundational models, your frontier models. But those frontier models can differ, in the sense where the task it's trying to do may be better served by OpenAI, Anthropic, say Grok, depending on the information that's gone in, the identity of that person, that profile. It needs to be able to decide: what is the right frontier model to use as my foundation? And that's where we have our first layer. We have our fine-tuned model that we've got internally to decide, okay, where are we going to route the task or the request in terms of the frontier. Then once we've got that, that's where we take, okay, we've got very personally identifiable information here, or PII, that a lot of the time will need to be masked. And so we need to be very careful about what data is fed into frontier models and what data comes nowhere near frontier models. And that's where the individual account fine-tune comes in. So when it comes to reading someone online, everyone presents themselves differently. And a lot of the time you're not accurate about it.

So it's very much almost like an onion, in the sense where you're peeling back the layers. And once you can identify that layer beneath the surface, that's your core personality type online, say. And we tried it with frontier models. It was impossible, because, well, frontier models are trained on this data that is inherently not correct, in the sense where I've got a personality type, and then there's the personality type that I present online, and so does everyone else. And these frontier models, they're trained on that personality type that is presented online. And so we needed to make a tuned model that was able to pick up on: what are the signals about those surface personality types that could indicate an underlying personality type? And that's impossible when you're doing small data sets, but when you're analyzing millions and millions of profiles, then you start to draw conclusions: okay, someone who acts extroverted in this way online, during this time of the day, on this channel, we're seeing a constant flag that they might be introverted in this sense, or there might be this underlying trait.

And as you increase the volume there, you start to be able to draw conclusions, you've got enough data to actually say, okay, if someone does this online, that is an example of a pseudo-personality, and we can be pretty confident it falls into one or two categories here. And then we have more conditions that will eventually narrow it down to one category. And that allows you to build up a personality type that is behind the surface. It's the true personality type, which is why it's been so difficult to do up until now.
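
One concrete piece of the pipeline Ethan mentions, masking PII before anything is routed to an external frontier model, can be sketched with plain pattern substitution. The patterns here are deliberately minimal examples (email and phone only); a production masking pass would be far broader:

```python
import re

# Hypothetical masking pass: strip obvious PII before any text is sent
# to an external frontier model. Real pipelines cover names, addresses,
# IDs, and more; these two patterns are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text):
    # Replace each match with a labeled placeholder, e.g. "[EMAIL]".
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = mask_pii("Reach Ethan at ethan@example.com or +44 7700 900123")
```

The design choice being illustrated is the one from the conversation: decide per field what is "fed into frontier models and what data comes nowhere near frontier models", and do that deterministically before any model call.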

 

Chris (00:13.18)

Yeah, it seems that's the frontier that you're working on, which is to be able to unmask the personal or the public persona, because there's something sitting behind it. You're unmasking what is really, fundamentally, that psychological profile underneath. Yeah, super interesting. And then take me back also, I think this is interesting: part of your fine-tuning of the models is then being able to direct which frontier model or foundational model you're using, depending on their capabilities. And it strikes me, I mean, the pace of the change is so significant. The number of releases is so significant. The capabilities that are being released to the public by OpenAI or Anthropic or Grok and others, every, you know, week, two weeks, there's some sort of major release, or the model tweaks in a way that all of a sudden it behaves in a very different way.

Talk to me about that, about how you make the choices, about which underlying frontier model that you're using, what you're seeing, what the patterns are, how that plays out.

 

Ethan (00:14.25)

Yeah, it all boils down to evaluations. So evals for short, in the space. That is, you're getting a response from a model and you have to decide whether this is the response we want. And when we decide to route to a particular model, it's not that we're doing a lot of computation being like, right, we're going with this one. It's: we're going with all of them. They're coming back to us with a set of answers and we're going to evaluate against our internal set of evaluations.

Is this the best answer? And then start grading them. So go off, get answers from all of them, evaluate them, and then decide, okay, this is going to be the best option, and then we'll have a backup as well. So in case something goes wrong with that frontier model, we can fall back on that. So you're constantly evaluating to make sure the quality is maintained.
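
The fan-out-and-grade routing described above can be sketched as: query every provider, score each answer against an eval, keep the best and a runner-up as fallback. The provider names and the toy grading function are placeholders, not Naviro's actual evals:

```python
def route_with_evals(ask_fns, prompt, grade):
    """ask_fns: {provider_name: fn(prompt) -> answer}; grade: answer -> score.

    Ask every provider, grade every answer, and return the best result
    plus a runner-up to fall back on if the winner's provider fails.
    """
    graded = []
    for name, ask in ask_fns.items():
        answer = ask(prompt)
        graded.append((grade(answer), name, answer))
    graded.sort(reverse=True)  # highest score first
    best = graded[0]
    fallback = graded[1] if len(graded) > 1 else None
    return best, fallback

# Stand-in "providers"; a real system would call frontier model APIs.
providers = {
    "model_a": lambda p: "short answer",
    "model_b": lambda p: "a longer, more detailed answer",
}
# Toy eval: longer answers score higher. Real evals grade quality.
best, fallback = route_with_evals(providers, "who is the super fan?", grade=len)
```

Keeping a graded runner-up around is what makes the fallback cheap: if the chosen frontier model errors, the second-best answer is already in hand.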

 

Chris (00:15.13)

And that grading process is, again, an automation or an algorithm that's doing that. And or where does human judgment factor into that overall process? Where do you look at it and say, that's just not right? There's something that's off about the way it's thinking about this issue.

 

Ethan (00:15.29)

Yeah. And that was the case when we started off. When we had the first few versions of the algorithm, it was very manual and tedious, but we knew that was the case, because we needed the agents to learn: this is what a good decision is. And the thing about agents, and the way this is all kind of shaping up to be the industry itself, is they understand what a bad decision is over time. So if you set up an agent and you don't give it that much context or that much prior data, it'll still learn over time from making the wrong decision. And that's what our system does, in the sense where we've reached a point where it knows what a good decision is, we've trained it manually. And over time, it's gathered: when I made this decision, that metric flopped, or that didn't perform as well, or we saw a drop-off on, say, the user sessions for that particular account after we used that output. This constitutes a bad decision. And then it logs that and it knows for next time: oh wait, when I did this, this was a bad decision. So over time, this compounds and it just gets better and better and better. And that's where your data set becomes, well, it becomes the most valuable part of your business.
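
The feedback loop Ethan describes, logging each decision's effect on a metric and flagging options whose outcomes are consistently bad, might look like this in miniature. The class, metric deltas, and threshold are all invented for illustration:

```python
class DecisionLog:
    """Hypothetical outcome log: record how a metric moved after each
    decision, and flag options whose average effect is negative."""

    def __init__(self):
        self.outcomes = {}  # option -> list of metric deltas

    def record(self, option, metric_delta):
        self.outcomes.setdefault(option, []).append(metric_delta)

    def is_bad_decision(self, option, threshold=0.0):
        deltas = self.outcomes.get(option, [])
        # An option whose average effect on the tracked metric is below
        # the threshold (e.g. user sessions dropped) gets flagged.
        return bool(deltas) and sum(deltas) / len(deltas) < threshold

log = DecisionLog()
log.record("output_v1", -0.12)  # user sessions dropped after this output
log.record("output_v1", -0.05)
log.record("output_v2", 0.08)   # this one performed well
```

This is also why the data set compounds in value: every logged outcome sharpens the next decision.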

 

Chris (00:16.50)

So there's a set of variables that are quite interesting that are at play that you seem to be working on here. One of which is that public persona that people are putting out into the ether and on the social media, which is designed to drive attention. They're trying to get likes and views and engagement and drive those particular metrics.

So it's a creative persona in order to drive those kinds of metrics. And it strikes me that what you're doing is trying to break down, get through that layer so you understand the more authentic person that's sitting behind that, including all their flaws and the things that they don't want to talk about.

And I know that one of the dimensions of what you're doing with Naviro is to be able to create trust-based marketing. If you understand that the person on the other side of that conversation better, the opportunity for a trust-based relationship is stronger. Is that a fair assessment of where you're trying to go with this?

 

Ethan (00:17.46)

Yeah, no, it's spot on. I think, like, we're really at that point in the creator economy where trust is starting to dwindle. You had the emergence of, say, influencers. You had your macro influencers, who were like 1 million plus; they do brand deals, and because of the sheer volume, they did quite well. And then you had the emergence of micro influencers, which were your sub-50k.

And now it's reached a stage where there are so many brand deals available for micro influencers, they're just posting whatever they're gifted or whatever they're paid to post. And it eradicates trust, because, well, there's a monetary side to the advertising that's being done there. It's not, say, an unpaid sponsorship. And for me, and I know a lot of friends as well, like, they,

 

When they get ads now, even if it genuinely is something they're interested in, if they know it's a potential like a paid brand deal, they won't buy it.

 

Chris (00:18.54)

Interesting. A rejection of it because it is a brand deal.

 

Ethan (00:18.58)

Yeah. And the same thing we see with, say, ChatGPT: there's this whole thing around dashes, and how ChatGPT will output a lot of different outputs using dashes. I know it might be the case in the States, but I know in Europe we don't use them. And it became a thing that you could tell if something was written by ChatGPT if it used the dashes. But the interesting thing about it is, it is grammatically correct.

And it is the right way of writing its output, yet it's too perfect. And we then don't trust it. We're like, a human didn't write this. There's no way. So it's interesting things like that, where nothing was programmed in initially saying, do not write dashes, or write dashes. It was only just through sheer volume that people picked up on it: wait, this is how you can tell if it's written by ChatGPT. And that has now had to be adapted for, well, our outputs, to make sure we don't use any sort of dashes. But it's things like that where, until you test it with people at high volume, you're never really going to know the real answer.

 

Chris (00:20.08)

One of the things that I know that you're doing, for example, with the brands, and you work with some very large music labels, is that you're trying to identify the super fan. Super fans, I think, you know, 70% of your revenue comes from 10% of a major artist's fan base. And so you're trying to take all of that noise of who might be a fan and whittle it down, so you can have a more authentic relationship with a super fan. And that ties into this trust-based relationship, correct?

 

Ethan (00:20.41)

Exactly. Like, lifetime value is everything when it comes to not just a label, but any business. You want to know the lifetime value of your customer: how many orders are they placing, what's their lifetime, and then what is the actual net number at the bottom. And with fans it's quite difficult, because it's pretty difficult to track, but there are ways of doing that. But identifying who those people are before they become those high-lifetime-value users or customers has been kind of a big question mark in the industry for ages. And that's where signals came into it. It's very much: okay, this is how someone behaves. We know the traits of a high-lifetime-value fan from the data we've crunched. Where do we see these signals? And can we draw any conclusions, in terms of, can we narrow down to a personality type? Are there certain personalities, or subsets of those personality types, that actually convert quite well when combined with other traits, certain locales, or certain messaging, things like that? And eventually you get a profile: like, this is the profile that is going to make you the most amount of money if you can reach them.
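
The lifetime-value arithmetic behind this is simple once the inputs are tracked: orders per period, average order value, and expected lifetime give a net figure per fan. The numbers below are purely illustrative, not label data:

```python
def lifetime_value(orders_per_year, avg_order_value, years, margin=1.0):
    """Back-of-the-envelope LTV: spend per year times expected lifetime,
    optionally discounted by a margin factor."""
    return orders_per_year * avg_order_value * years * margin

# Hypothetical fan profiles: a casual listener vs. a super fan.
casual_fan = lifetime_value(orders_per_year=0.5, avg_order_value=20, years=2)
super_fan = lifetime_value(orders_per_year=6, avg_order_value=45, years=8)
# The order-of-magnitude gap between the two figures is why identifying
# likely super fans early beats chasing the widest possible reach.
```

The hard part, as Ethan says, is not this arithmetic but predicting from behavioral signals which profiles will end up on the super-fan side of it.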

 

Chris (00:21.49)

And you're having success being able to whittle that down, from, if you had a million fans, being able to find that 10%?

 

Ethan (00:21.59)

Like, instantly as well. Instantly, say, within two hours of an onboard. Knowing what that personality is within two hours of it indexing your audience and going off and finding those routes.

 

Chris (00:22.12)

One of the things that you mentioned to me before is that you sat down with a music label and were able to basically replicate their marketing strategy that had taken them months to develop and you did it in an hour.

 

Ethan (00:22.24)

Yeah. Which I think, to this day, they think that we hacked them or had bugged their office or something like that. And it's phenomenal that that is the way the tech has evolved. And I think it's because we've trained it on, like, what, that's seven, eight years of digital marketing experience, running ads from, well, when Google and Facebook were at their very basic levels. Everything Naviro is trained on is what we know in the industry and how to actually convert. So there are kind of backing strategies behind it. But what is interesting is the sheer speed it can be done at. And that's what we're trying to do: level the playing field. You've got marketing teams where sometimes the budgets can be quite confusing, in the sense where they disappear into the ether. And I know for me, that frustrated me about the whole industry, where it was like, I'm very quantitative. I like to know what went in, where it went, and what came out. And with a lot of marketing, it's quite ambiguous. It's just seen as a requirement for a lot of businesses. I don't think it should be that way. I think, similar to finance, to product, everything, you should be able to say: this is the amount of effort and time and resources that were put into it, and this is what we got out of it. And now it's possible to do that.

 

 

Chris (00:23.45)

Yeah, and I think this is one of the scary edges that most people are focused on with AI, which pertains to them very personally, which is if you're able to do in one hour what it took me and the marketing department three months to put together through quantitative analysis and our marketing strategy and a history of doing this and you're able to do it in an hour, my job just went away.

 

Ethan (00:24.06)

It does if you don't adapt. And I think that's where it's going to make or break people in marketing.

 

Chris (00:24.15)

And anywhere across the knowledge economy, in my opinion. Which is, anytime that your foundational principle is, I know this particular market, this industry, this regime, these structures, this conversation, this language, all of the abstractions that we put in marketing or finance or law or whatever it might be. And all of a sudden we can do this on a highly expedited basis at much less effort. Who survives that transition?

 

Ethan (00:24.44)

Yeah, it's going to be interesting. And I think, like, imperfect information has been arguably eradicated with this kind of surge in AI across the board. Like, for instance, more than marketing, legal is one of my biggest kind of bets in terms of what's going to be eradicated first. And that's been firsthand experience for us, because we no longer really have to go to a legal counsel.

Because we can get it 90% of the way with our internal model. It knows everything about the company. It knows exactly what is active. It knows all our registrations everywhere, all the regs we have to abide by. And it knows how to form an argument really, really well. And it has, when I say it's saved us tens of thousands, it has. And that didn't cost us anything.

 

Chris (00:25.38)

And so you're building private models, private AI models, that have access to all of the documentation, none of which is permitted to train for outside purposes, or disclose or leak outside. Which is the greatest fear for the corporate environment: can I put a wall around this and still have hyper-intelligence associated with it? Or does it get dumb because it doesn't have access to all of the, you know, the world of information that's out there?

 

 

Ethan (00:26.04)

And I think that's the way it's going, hopefully, with an organizational chart. And this is what baffles me: about two months ago, I was reading our org chart and I was like, wait a minute, I'm thinking of this completely wrong. We now have agents to sub in for a lot of it. It's no longer an organizational chart of all the different departments; it's the departments with the agents combined into it. And you have so many different decision makers that aren't actually people. There will be decisions at different points that involve people, but our org chart is now: how do the agents and the team interact with each other in a way that is efficient? And that is a weird way that I never thought things would go, but it just was apparent one day. I was drawing out the diagram and I was like, wait a minute, this isn't the right way of doing it. This is the actual right way of doing it.

 

Chris (00:26.50)

So the typical org chart has that hierarchical flow. You get the CEO at the top, or founder or co-founders, et cetera. And then you have layers down below it. And it inevitably is some sort of pyramidal structure. But what you've done is then oriented it towards knowledge bases and decisions, I assume, with agents having functional responsibility in some sort of interleaved way with people.

 

Ethan (00:27.16)

Yeah... The kind of pyramid structure is still there. It's just the points that make up that pyramid have been subbed in for agents. And so there's certain decisions that can be automated. But what's really interesting is when you have agents managing agents. And that's where it gets really fun.

 

Chris (00:27.36)

And why is that? What is the dynamic?

 

Ethan (00:27.39)

Because you want the agents to have context, but you also want them to not get overwhelmed. And it sounds weird, talking as if it's a person, but it is, in the sense where, if you overload it with knowledge and say you need to do 100 different things, it's not going to be as efficient as if you just have three focused things to do. And then there's some agent above you that's going to tell you if the output is good enough, and then it's going to send it to another agent; it's the coordinator, essentially.

That was the reasoning behind that. Because we trialed it. We were like, well, can we not just have some genius of an agent, and up its capacity on every sort of resource, so that it was, not a supercomputer, but a super agent, in that sense? And it's not that yet. It just freaked out. And it started answering things that were not in the initial query. And it gets confused. And it's similar to a human: if you give me a list of tasks which is 100 items long,

I'm not going to really know where to start, and it will take me a bit of time to dissect where to go. I'm not going to just output the hundred tasks all at the same time.
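
The coordinator pattern being described, one agent handing each worker only a few focused tasks rather than overloading a single "super agent", can be sketched in a few lines. The chunk size of three echoes the "three focused things" above; the function names are invented:

```python
def coordinate(tasks, worker, focus=3):
    """Split `tasks` into focus-sized batches so each worker call sees a
    small, bounded context, then collect the results in order."""
    results = []
    for i in range(0, len(tasks), focus):
        batch = tasks[i:i + focus]  # each worker sees at most `focus` tasks
        results.extend(worker(batch))
    return results

# Stand-in worker; a real one would be an agent call with its own prompt.
def worker(batch):
    return [f"done:{t}" for t in batch]

out = coordinate([f"task{n}" for n in range(7)], worker)
```

A real coordinator agent would also grade each worker's output before passing it on, but the core idea is the same: bounded scope per worker instead of one overloaded context.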

 

Chris (00:28.45)

Right. Yeah. And I use AI in a similar way. And I noticed over the weekend, on some things that I was working on, that it was quite good at provoking ideas and concepts, et cetera. But then it started to make its arguments using quantitative data, financial information, that it had not been provided. And so it's like, well, where's that data set coming from? The logic is correct, but then all of a sudden it started to insert specific data points to support its logic. And I was like, that doesn't exist. I know that's not right. And I know that you haven't been provided that data. And then human judgment has to step in and say, interesting concepts, but this is too far. I can't support what you're doing on some of these things.

 

Ethan (00:29.34)

Yeah. And I think that's the one thing, when we started building Naviro, I remember saying to Lola, my co-founder, we need to make sure that whatever we build, we don't just dive into using AI for the sake of using AI. We were both from a computer science background, both worked as software engineers. We knew the core concepts of enterprise development and how to build something from the ground up that was sturdy. And we made sure that in that pipeline of data analysis,

every stage we went through, we were asking ourselves: do we really need to use AI here? No, we can do it this way. We can use just basic logic. Adds, subtracts, literally ones and zeros, that level of, is this the correct output? And keeping the quantitative data quantitative. Because the moment you take, let's just say you have a batch of metrics, you have your likes, views, shares.

And as soon as you take that quantitative data set and pass it into an LLM, that output is no longer guaranteed. And that's the problem a lot of people fall into, because 99% of the time it is guaranteed for that iteration of the output, which is why I think a lot of people don't realize the risk of doing that. So what you're saying about it starting to make up arguments and inserting data points, that would have been after a good few messages, in the sense where

a lot of people think, if I put something into the LLM and it comes out, there's a pretty high chance it's going to be the same quantitative data. But then down the line, if I ask it to refer to that data, it's going to get that wrong. And that's where the risk creeps in. You've removed the quantitative data.
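
The discipline Ethan describes, computing hard metrics with plain arithmetic and handing an LLM only a rendered summary rather than round-tripping the numbers through it, can be sketched like this. The metric names are examples, not Naviro's schema:

```python
def summarize_metrics(posts):
    """Aggregate metrics deterministically; only the rendered summary
    string would ever be shown to an LLM, never the reverse."""
    totals = {"likes": 0, "views": 0, "shares": 0}
    for post in posts:
        for key in totals:
            totals[key] += post[key]  # ones-and-zeros logic: guaranteed
    # A human-readable rendering for the LLM. The numbers themselves
    # stay in the deterministic pipeline and are never re-parsed from
    # model output, so they can't drift over a long conversation.
    summary = f"{totals['likes']} likes, {totals['views']} views, {totals['shares']} shares"
    return totals, summary

totals, summary = summarize_metrics([
    {"likes": 10, "views": 200, "shares": 3},
    {"likes": 25, "views": 900, "shares": 7},
])
```

The one-way flow is the point: numbers flow into prose for the model to discuss, but the source of truth for any later calculation remains the `totals` dict, not whatever the model echoes back.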

 

Chris (00:31.14)

And this is the interesting thing to me about the concept around fine-tuning, which is, I find that as well: oftentimes the AI is quite good on the first question and a follow-up. But if I do five, six, seven follow-ups, the quality is decreasing. And in fact, you get more errors the further down that line you get. So when you're talking about constructing an agent so that it has a limited field of view, limited functionality, and fairly tight constraints around that, it performs at a high level. When you broaden the aperture or make it more complex, then it starts to break down a little bit.

Ethan (00:31.53)

Yeah, exactly. It's a context window issue. And over time, we're going to get more powerful processors and more powerful hardware that can allow for a larger context window. But I don't think that's a good thing when it comes to architecture and design, because you want to be building efficient systems.

 

Chris (00:32.10)

So let's bring this all the way back to where you started the story, which is looking at those profiles and that public persona, and the AI being able to pierce through and unmask it to a certain extent, and make those inferences, et cetera. So we're talking about reliability, trust, understanding the processes there. When the context window is that broad and complex, how are you getting confidence in the recommendations or the analysis that it's making? How do you believe it?

 

Ethan (00:32.43)

We had an issue with context windows at the beginning, and that's when we kind of had to sit down and think, okay, how do we solve this? Not just expand the context window, not just pay a bit more for a larger context window, but like, how do we actually solve this at a scale? And that's where Vector databases came in. they're very, like they used across the board, but the underlying concept of them is, I think really important to understand if you're working with AI in the sense where they're not just a database that wherever they place a unit of information, let's say, Naviro has identified your personality type, ENFP, let's just say, and that's can be broken down into four components and E, E, N, F, P. And those chunks of data are going to be stored in that database in very specific places in 3d space.

If you imagine it's a block, we're going to use a bit of compute to figure out where we're going to place it. But then my personality comes in, and say I'm INFJ. It's going to place that elsewhere in the database, but there's an overlap of F here, so it's going to know to place that F closer to yours. So when it comes to assessing similarity, all we have to do is measure distance. We'll say, okay, look at this cluster, what is close by?
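The placement-by-overlap idea described here can be sketched in a few lines. This is a toy illustration, not Naviro's actual encoding: treat each Myers-Briggs axis as one coordinate, so profiles that share letters land near each other, and "assessing similarity" reduces to measuring plain distance.

```python
# Toy sketch: embed a 4-letter Myers-Briggs type as a point in 4-D space,
# +1 for the first pole of each axis and -1 for the second, so types that
# share letters sit closer together.
import math

AXES = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

def embed(mbti: str) -> list:
    """Map a 4-letter type to coordinates: +1 for the first pole, -1 for the second."""
    return [1.0 if letter == first else -1.0
            for letter, (first, _second) in zip(mbti, AXES)]

def distance(a: str, b: str) -> float:
    """Euclidean distance between two embedded profiles."""
    return math.dist(embed(a), embed(b))

def nearest(query: str, stored: list) -> str:
    """Retrieve the closest stored profile, the way a vector lookup would."""
    return min(stored, key=lambda s: distance(query, s))
```

Under this encoding an INFJ shares N and F with an ENFP, so those two points sit closer than either does to, say, an ISTJ — which is all "look at this cluster, what is close by" means here. Real vector databases do the same thing with learned embeddings in hundreds of dimensions rather than four.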

And to answer your question about the context window, that's what solved it, because we're no longer telling the agent, okay, you have to remember this entire cube of data. You just need to tell us what sort of things you're looking to know, then look in that area, and it's going to have everything for you. And that's where agents having the ability to call databases or make decisions — actually make the active decision on compute —

becomes groundbreaking, because you never run into a context window issue. Our agent does have handling for this, so it won't run into it. But if it was about to run out of context, it would know: okay, let me summarize everything I have here, let me store it in a knowledge base, and let me spin up a new instance of myself. Then that new instance fetches the summary. If it needs the deeper data, it can go back to it. But for now, it just needs the summary.

And that just resets the context again, shedding the raw data without actually losing it or the context.
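The summarize-and-respawn behavior described above could look something like this sketch. All names here (`Agent`, `knowledge_base`, `summarise`) are hypothetical, not Naviro's API, and a real system would call a model to produce the summary rather than truncate:

```python
# Sketch of the pattern: when an agent nears its context limit, it summarises
# its history, stores the full transcript in a knowledge base, and restarts
# itself seeded with only the summary.

MAX_CONTEXT = 8      # pretend message budget before the window is "full"
knowledge_base = []  # stands in for the vector database of deep data

def summarise(history):
    # A real system would call a model here; we just note the tail end.
    return f"summary of {len(history)} messages, ending: {history[-1]}"

class Agent:
    def __init__(self, seed=None):
        self.history = [seed] if seed else []

    def observe(self, message):
        """Append a message; respawn a fresh instance if the window fills up."""
        self.history.append(message)
        if len(self.history) >= MAX_CONTEXT:
            return self._respawn()
        return self

    def _respawn(self):
        summary = summarise(self.history)
        knowledge_base.append(self.history)  # deep data stays retrievable
        return Agent(seed=summary)           # fresh instance, small context
```

Feeding such an agent twenty messages leaves it holding only a short recent history plus a seed summary, while the full transcripts sit in the knowledge base for retrieval on demand — the "reset without losing the data" described above.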

 

Chris (00:35.18)

So in your fine-tuned models — and I've played with your application — you're talking about Myers-Briggs types of profiling, et cetera. Did you program in the Myers-Briggs as part of that, as well as, I think, the four or five additional psychological profiling techniques that are commonly utilized, DISC and...

Yeah. So you overtly use those and then set up the pins so that you can basically measure the distance between a couple of characteristics and then do cross-correlations within the broader context.

 

Ethan (00:35.54)

Exactly. And it's really interesting, because Myers-Briggs is not particularly scientifically backed. If you take DISC, there's a lot more science behind it. But Myers-Briggs — I don't know if you're familiar with the story of how it came about, but at the end of the day it's just a way of assessing human behavior that happened over time. And yet it is one of the most accurate ways of doing so. It's what all social channels use, and it's what allowed us to get kind of crazy results in the ad space, because we use that as our base.

 

Chris (00:36.31)

It's interesting, because I played with it, and it profiled me, and I think it got me right on Myers-Briggs. But the demographic for my audience has a different characteristic associated with it. I think the common presumption would be that if I'm INTJ, I would resonate with other INTJs, but that's not true. It was extroverts who are my audience.

 

Ethan (00:36.57)

This is where it gets really interesting.

 

Chris (00:37.00)

Yeah, it's super interesting. And it's true, I'll be honest with you.

 

Ethan (00:37.05)

What's crazy is — we know opposites attract, but what we found is that when it comes to content, all content has a personality type associated with it. With the way we built this, we can narrow it down. But what we found was that one common tactic when it comes to hooking people in virality, all that sort of stuff, getting engagement, is pattern interrupts.

 

That is a huge thing, and it's not like a hook. A hook is something you say to capture people's attention when they're scrolling. It's audible. A pattern interrupt doesn't necessarily have as much intent. So an example would be: I start my video and I'm drinking from a glass of water, and maybe me sipping the water gets picked up on the microphone. That's an example of a pattern interrupt. And what we found was that the chalk-and-cheese behavior of two conflicting personality types was a pattern interrupt within the content, which was something that had never crossed my mind. It was only when we saw the data that we were like, wait a minute, this is the definition of an organic pattern interrupt.

 

Chris (00:38.24)

That is awesome. That's a super interesting observation, isn't it? When you're out there working to engage, your brain is intrigued by the pattern interrupt. It's what will capture the attention, and potentially even the trust, if you can get through that window — you're looking for that pattern interrupt.

 

Ethan (00:38.45)

Consciously or subconsciously.

 

Chris (00:38.49)

So how does that, does it translate when you're trying to identify a super fan?

And I don't know that this is a client, but say you're looking at the broad universe of Taylor Swift fans. Are there dynamics associated with what we've just talked about, including pattern interrupts, that help you to identify or characterize today's super fans — but perhaps even more importantly, the ones who are emerging, that are coming into that community? And I would imagine it's very different if you're talking about a pop artist versus a country artist versus an EDM artist versus whatever genre it might be, because that audience is looking for a different kind of experience, a different kind of feel associated with the artist's engagement.

 

Ethan (00:39.30)

Yeah. I think that's what made it such an impossible task in the industry: every genre has a different fan behavior, every artist has a different fan behavior, and the ability to create a profile for those fans varies for every artist and wasn't particularly feasible with previous tech. So with super fans — we could take Taylor Swift, but One Direction would actually be an example where there's something that we picked up on. So I'm going to use One Direction, and who would be a good example here…

 

The Killers. Yeah. One Direction and The Killers. It's probably, yeah, too polar opposite. But when it came to The Killers, you'd expect a lot of old-school posters and that kind of grime vibe on people's walls, stuff like that. And One Direction, similar, but not as intense as we had thought. We were thinking, okay, people are going to be a lot more vocal being a One Direction fan than a Killers fan. And what we found was that The Killers' age demographic was older. But what was really interesting was that when you started to activate those older fans, their posting frequency was actually higher than the One Direction fans'.

And it was interesting to see the consistency associated with that, where One Direction fans didn't particularly need to be activated with campaigns targeted at them to reignite that interest in the artist or the brand. Whilst The Killers fans, once activated, once engaged and an active fan, became far more frequent advocates of the artist and the brand than One Direction fans. So those are small things that you wouldn't really think would go together. Then we realized that, okay, obviously we need to spend money on marketing to the engaged fans — you've got engaged, active, and super — to bring them into that part of the funnel where you're an active fan. And then very soon after, they move into super fan. And then you have the monetary aspect of things: obviously, with an older demographic there's more disposable income, just by nature of demographic. And that was really interesting to see.

You'd expect the LTV to be higher, but it wasn't, because they were more financially responsible. One Direction fans would happily drop huge amounts of money with complete disregard for the financial impact. And that was another thing that we picked up on. And then it got us thinking, wait a minute, can we have a look at the likes of Klarna? Which is, like, credit purchasing for anything — I'm pretty sure you can finance a burrito in some parts of the States now. So that started getting me interested in, okay, how does that tie into it? Because I know that's a fact with the younger generation — financing is huge — but also the emergence of financing tickets and things like that. And then a stat came up about Lollapalooza in Chicago, and 70% of the tickets were financed. And I was like, whoa.

And that's when we started drawing conclusions outside the music space: okay, if they fit this particular profile, how does it tie into the likelihood of using Klarna? What are their behaviors in terms of — do they have other overlapping interests outside the music space that we could use to infer a decision in the music space? The answer was yes. But it was just fascinating how deep that goes.

 

Chris (00:43.28)

Yeah, again, this is that exciting edge and scary edge. It's exciting in the sense of being able to understand just that dynamic you described: this fan base is older with higher disposable income, so you would think they would have a higher LTV, but they don't, because they're more fiscally responsible. Whereas the younger demographic that's passionate about their artists and the brand does not have the disposable income but will spend more, and then these financing mechanisms come into it. I think that's super interesting for understanding the marketing dynamics — that's the exciting edge, helping us make better decisions about that dynamic at play. The scary edge is the psychological profiling. We read about these cases in which, from an immigration perspective in the United States, anything that somebody puts out into the public ether can now be utilized by various systems that are out there to scrape and identify people, and is used for good or poor purposes, depending on your perspective, perhaps. This is the scary piece.

 

Ethan (00:44.38)

Yeah, it's definitely crossed our mind. Well, actually, it's more than crossed our mind, in the sense that we have to actively develop in a way where it can't be used for that. Right. Like you said, with border control — I was over in the States about a month ago, and the intensity of the border checks was completely different. I have Global Entry, so I was thinking, yeah, walk in, scan my face, happy days. No, it was about half an hour waiting in line. I remember asking, what's the story? Has my Global Entry expired or something? And they're like, no, no, no — you could see the frustration on their faces — we have to check everyone right now. And the social media scan was one of the checks. It is opening up a lot of doors that could do a lot of good, but also a lot of bad.

And I think we're actually located in the two countries that are known for civilian surveillance — some of the highest in the world, I think, outside China. The UK and the US, when it comes to monitoring citizens, actually have a joint act where they share the data, which is why I can get Global Entry through a UK passport. But what's interesting is, as we move into a time where it's possible to make decisions — be it entry into a country, insurance, credit — based on lifestyle, that's going to get a bit more difficult to assess. Because, like I was saying about the two personality types and having a surface personality type, people have a lifestyle they present, and it may not be reflective of the underlying lifestyle. We are very aware that what we built can be used for that use case — to identify true lifestyle. And what is also interesting is that even if it's not us who builds it, it will be someone who builds it. So I think people being aware of what they're putting up online — just enough to know, okay, this is being indexed somewhere — matters.

It's not a matter of will someone find it. It's a matter of where is it going to be indexed? Because every major country is going to have, to an extent, an agent that governs all this. Just being able to be a bit more conscious about what's going up there can only do good, because it's not a matter of, okay, everything I post from now — it's a matter of what have I posted since I joined Facebook in 2012.

 

Chris (00:47.30)

And I know you're very careful about your social media presence as well, based on what you know.

 

Ethan (00:47.38)

I've gone through ups and downs. Initially, when we first really got into the weeds of this industry, I tried to delete everything. I was like, I don't want Instagram, I don't want Facebook, I want absolutely nothing — you can contact me via, I think, Telegram was my main thing at the time. And then I realized that even such small pieces of data could still piece together a pretty good profile, and what was actually better to do was to be more conscious about what data was going up there and, sure, allowing it to direct the inferences it's going to make. So yes, I'm very careful about what I put up. But I'm also not radio silent, because I don't think that's possible — one, in the industry I'm in, but also just in this day and age.

Chris (00:48.32)

Talk to me a little bit about you have something which is extremely powerful.

And the powers of it are unfolding, which is the ability to profile, understand the psychology of somebody overtly as well as what's going on underneath it, the ability to understand someone's lifestyle. And given that power, if it was held in the hands of somebody who can control access to a border or social benefits or other privileges or powers in society or the economy, that's extraordinarily powerful. Talk to me a little bit about your responsibility in not abusing that power, managing the power well as a private actor.

 

Ethan (00:49.14)

Yeah, yeah. And I think it really is down to the decision makers with this tech, and that falls on myself and my co-founder and anyone in the industry, really, because it is getting easier and easier to build these things. And the way I kind of see it is very much like when the industrial revolution happened — like the first cars, with people just crashing everywhere.

It was only over time that we decided, hey, let's introduce some speed limits and a driving license and all these different ways of directing that revolution. The problem here is that everything to do with this revolution is behind a screen. And that makes it a lot more difficult to manage, because it's very easy for anyone to build something or do something in this industry while removing their identity. It's not like driving a car. And that, for me, is something that's kind of worrying, because I'm not too sure how it's going to unfold. The most we can do is just make sure that the tech we're building cannot be used by bad actors and cannot be used in any malicious sense. I know I had the option of going down a cybersecurity route, more of a government surveillance kind of space, but I didn't, purely because of the ethics associated with it. It's going to happen — I'm sure it probably does happen — but it's controlling what you can control. For us, that is, yeah, making sure that people can train models, but not on other people's data; that data is siloed off to their individual account. It's making sure that we don't allow any sort of models to go off and almost go haywire, but also that we don't give models access to call different tools and programs that you aren't 100% sure they can't abuse. Because the thing about agents is they iterate indefinitely, in the sense that something may seem like a good decision right now, but 100 iterations on, it might not be the right decision. And you need to have failsafes in place to actually catch that, flag it, and make sure it doesn't happen again.
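A minimal sketch of the kind of failsafe described here: every action an agent proposes passes through a guard that checks an allow-list and an iteration budget, and violations are flagged and refused rather than executed. The tool names and limits are invented for illustration; they are not Naviro's.

```python
# Hypothetical guardrail: agents may only call allow-listed tools, and only
# within an iteration budget. Anything else is logged and refused.

ALLOWED_TOOLS = {"search_profiles", "score_similarity"}  # illustrative names
MAX_ITERATIONS = 100

flagged = []  # audit log of refused actions, for catch-and-flag review

def guard(tool: str, iteration: int) -> bool:
    """Return True only if the proposed action passes every check."""
    if iteration > MAX_ITERATIONS:
        flagged.append((tool, iteration, "iteration budget exceeded"))
        return False
    if tool not in ALLOWED_TOOLS:
        flagged.append((tool, iteration, "tool not on allow-list"))
        return False
    return True
```

The design choice is that the guard sits outside the model's control loop: even if iteration 100 convinces the agent a disallowed call is a good idea, the call never executes, and the flag gives a human something to review so "it doesn't happen again."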

 

Chris (00:51.43)

I'm interested in your perspectives about the industry, about where that arc leads, and how to make sure that we are leveraging these technologies for the best purposes?

Ethan (00:51.52)

Yeah. We are going through a phase where emotional intelligence isn't required of engineers, because a lot of industries think engineers aren't required anymore. And that, in my head, is temporary. It's going to reach a point where we realize: wait, to build what we want to build at a scale that's adoptable at a global level, you need engineers. And you need unemotionally intelligent engineers, because everyone needs to shine in what they do best. And I think where we are right now is the world of vibe coding, where you can have the creative minds, high EQ, build MVPs. But that's where it gets a bit more difficult, because you can only get it to an MVP. Scaling it up —

that's where the demand for infrastructure and the demand for very well-built architecture and good system design comes into play. And in my experience, the best engineers for that are the ones that have no EQ, because they're not thinking, how does this interact with a human? They're thinking, how does this system interact with this system, and how does that affect the output the user will see — but I'm not concerned about what the user will see. And that's what's being disregarded right now. You see it in, say, the software engineering job market being completely down. Do I think junior engineers are going to be replaced? Yeah — they unfortunately have been within our company as well. Mid-level engineers are like the entry point nowadays. But again, it comes down to adapting to suit the roles and demand.

I think, just to tie that off by marrying the two together: where I think we'll end up going, we'll see the demand for engineers again. But what will be really interesting is that there will be a lot of pressure on a kind of middleman role, which in the past would have been product. You'd have the founder, say, with an MVP and the vision, then product translating it into a much more executable set of steps, and then engineering implementing that. But where we're going, product is going to have to have a lot more EQ. Not just: this is the user experience, this is what we're seeing, this is what we need to do, and this is what the C-suite wants us to do. But executing that roadmap — yeah, you need to make sure delivery is happening, but you also need to make sure that the EQ of engineers is managed, the EQ of whoever's defining the vision is managed, and also the EQ of your customers. Because trust — when I say it's going to become the most influential factor, it already is — but it is going to become the make or break for every single company online.

 

Chris (00:55.12)

This is the future. Thank you, Ethan.

 

Ethan (00:55.14)

All right, thanks Chris. Great to chat.