They Might Be Self-Aware

Brain Chips Promise Genius, Pharma Faces Copilot Chaos, & AI Battles PDF Madness

Episode Summary

They Might Be Self-Aware Podcast (TMBSA) - EPISODE 26 DON’T MAKE US SEND AN AI AFTER YOU – SUBSCRIBE NOW! Daniel's PDF woes are ruining his LLM experience this week, and he's not happy about it. Is Andrew Ng right about AI in the workforce, or is it just a trend? We dive deep into the true value of meeting notes and summaries from AI and discuss if an AI filter can make low IQ seem high IQ – and whether that's ethical. Brain implants: how close are we to having AI in our heads? Meanwhile, has Microsoft's Copilot failed a pharma company, or is the CIO just holding it wrong? Hunter and Daniel also ponder if AI tools are the key to self-awareness. Plus, Todd is banned (again). Just another week of rants, insights, and high-IQ humor here at They Might Be Self-Aware! Your future might depend on it! For more info, visit our website at https://www.tmbsa.tech/

Episode Notes

00:00:00 Intro
00:02:24 A PDF a day keeps the LLM away
00:07:28 AI in the Workforce: Trend or Transformation?
00:15:37 Microsoft's Copilot: Failure or Misuse?
00:17:52 Meeting Notes & Summaries: Are They Worth It?
00:23:32 AI Tools and Self-Awareness: The Final Frontier
00:24:13 Brain Implants: AI in Our Heads
00:26:53 IQ Filters: Enhancing or Deceiving?
00:33:32 Wrap Up

Episode Transcription

Hunter [00:02:30]:
Daniel, you. You like a good PDF file, don't you?

Daniel [00:02:55]:
Nope, I hate PDFs.

Hunter [00:02:58]:
What is it? I don't know what it is. I'm getting "portable document format."

Daniel [00:03:03]:
File professional. Nope. Yep. Portable document format. That's. I was trying to think of like a worse portable document format.

Hunter [00:03:11]:
Yeah. Confirmed.

Daniel [00:03:11]:
Yep. Did you use SearchGPT or Google for that?

Hunter [00:03:15]:
I used, I used Google for that.

Daniel [00:03:17]:
Well, how do you know it's correct? Oh, wait, no, it's because. Yeah, yeah, that makes sense. No, I don't like PDFs, Hunter, and I'll tell you why.

Hunter [00:03:24]:
Okay.

Daniel [00:03:24]:
Because I like LLMs, which sounds.

Hunter [00:03:28]:
Don't we all?

Daniel [00:03:29]:
Yes. Well, not everyone. I really like using the new Claude 3.5 Sonnet that came out. A lot of people basically say this is probably the current gold standard of models, you know, depending on who you talk to, depending on your use case. I really like using it because you can put together projects in it, and those projects can have different files uploaded to them, where you can use their content as basically its own retrieval augmented generation scheme that gets built up, associated with what you're doing. But more importantly, that project puts together, like, a document or hunk of code or whatever that you can iterate on with Claude, which I think is a really fascinating concept. The reason why I hate PDFs, and why I'm all riled up about it, is because I was trying to use Claude and upload a few PDFs to it. But depending on how that PDF has been saved, you can have a lot of information that takes up, like, a few hundred kilobytes.

Daniel [00:04:28]:
It's really not much. They can be very, very efficient with their memory usage. Or, especially if you have, like, a colored background for the thing, all of a sudden something that's a few pages long is 30 megabytes. And yeah, there's a 30 megabyte limit.

Hunter [00:04:46]:
Storing images as pages, which you can do in a PDF.

Daniel [00:04:49]:
Yep. So I was trying to use Claude for a project, Dungeons & Dragons related, because of course it is, because that's the sort of person I am. And I couldn't get it to work with some of the documents that I was trying to upload, because they were PDFs that were saved with that sort of image format. And there's a 30 megabyte limit. Now, I can understand it: whenever it goes in, it tries to figure out where all the text is, and it'll say how much of its memory is being used. And that memory is the context window, the number of words, tokens, etcetera, that can be kind of kept in its memory, which is a pretty big number. I think it's over 100,000 now for this specific iteration.

Daniel [00:05:27]:
That's cool. But I couldn't get a few of these PDFs loaded up into it, by virtue of they were just too big. I think the next step, and it's not a big step, but it'd be a huge quality of life step, is: take a PDF and just get the text out of it easily. Now, there are a ton of things out there that do this, that can extract the tables and the text and so on.

Hunter [00:05:51]:
I actually use ChatGPT for that a lot.

Daniel [00:05:53]:
Yeah.

Hunter [00:05:54]:
Whenever I have an image and I just need to grab the text from it, I drag it in, say, extract the text, please. And it works great. And it also extracts the formatting in markdown format. And you could probably do something more advanced if you wanted to, but by default you get a markdown formatted document.

Daniel [00:06:09]:
Yep. And I've used GPT-4, giving it a PDF and getting information out of it, or used a different service to get an okay version of the text and then had ChatGPT kind of fix it up. That capability needs to be one of those things that's built into Claude and ChatGPT and so on. Like: here is a PDF. Ignore the fact that it's a PDF; please just get the information out of it, toss the original data, and just keep the information from it. This is just my little mini rant. I absolutely hate PDFs. Let's start things off with a little bit of anger, which is to say, I really don't like PDFs, because they're getting in the way of me using LLMs to help with the work that I want to do.

Hunter [00:07:02]:
I feel like you should be able to just automate converting your PDFs into a format that they can use. And also, your 30 meg limit. What you should do is, let's assume it is all images. You should be able to write a script. I'm trying to remember the name of the popular PDF library.

Daniel [00:07:20]:
Oh yeah, there's PyMuPDF. That's the one that I've used most in the past. There's a bunch of them.

Hunter [00:07:24]:
I feel like there's one library that's often behind all those other libraries, like some C library that everyone's using. But anyways, extract. No, no, that's the text, right? Yeah, yeah. But assuming it's images: extract all the images, which are the big files, compress them down into JPEGs that are much smaller, and then take those, put them back into PDF files, and break it into multiple files if necessary to stay under the limit, and then upload that. And that should just be a script. And in fact, you can ask Claude to make that script for you.
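Hunter's shrink-and-split idea can be sketched roughly as below. This is purely illustrative: a real script would also recompress each page's images (for example with the PyMuPDF library Daniel mentions), while the helper here only shows the greedy grouping that keeps each output file under the 30 megabyte cap discussed above.

```python
# Sketch of the "split under the limit" step. A fuller script would first
# re-save each page's images as smaller JPEGs (e.g., via PyMuPDF), measure
# the resulting per-page sizes, then group pages with this helper.

UPLOAD_LIMIT = 30 * 1024 * 1024  # the 30 MB per-file cap mentioned above


def chunk_pages(page_sizes, limit=UPLOAD_LIMIT):
    """Greedily group consecutive page byte sizes so each group stays under `limit`."""
    chunks, current, total = [], [], 0
    for size in page_sizes:
        if current and total + size > limit:
            chunks.append(current)  # flush the current group and start a new file
            current, total = [], 0
        current.append(size)
        total += size
    if current:
        chunks.append(current)
    return chunks


# Example: three 12 MB pages with a 25 MB limit split into two output files.
print(chunk_pages([12_000_000, 12_000_000, 12_000_000], limit=25_000_000))
# → [[12000000, 12000000], [12000000]]
```

A page that is alone over the limit still ends up in its own group, which signals it needs heavier recompression rather than splitting.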

Daniel [00:07:54]:
I was thinking maybe I literally do that. I have worked with PDFs, get the text, then do stuff with them, before. But I don't want to have to do those steps, and most people might not know how to do those kinds of steps, so it needs to be something that's a little more.

Hunter [00:08:09]:
Daniel, we aren't most people.

Daniel [00:08:11]:
That's true. We're very self-aware. Wait, I think I'll say that at the end.

Hunter [00:08:16]:
It reminds me of this article I saw recently. Are you familiar with Andrew Ng, the kind of famous machine learning instructor? He was a Stanford professor. I think he also maybe founded Coursera.

Daniel [00:08:27]:
Yes, founded Coursera. Has a really great machine learning course.

Hunter [00:08:31]:
Probably the most famous one.

Daniel [00:08:33]:
Yeah, I'd say the most famous one. I don't know for sure.

Hunter [00:08:35]:
A couple versions, I've dated it all now.

Daniel [00:08:37]:
So there was a really outdated version that used, I literally don't even remember the name of the programming language, this was years ago. And for some reason he was really bullish on whatever this language was. And it's not Python. It's something that I'd never heard of before and have never heard of since, except for in the context of this one course. Everyone basically said, please make a new version of it; nobody's using whatever that language was. And he did.

Daniel [00:09:02]:
Now there's a Python version of that course. And that, I'd say, is the one. Like, hey, you want to get into machine learning? Go watch that course. Do that.

Hunter [00:09:11]:
He's very well respected in our industry, and he was speaking recently, and he said that AI, at least within the context of the near term, call it the next ten years, is not going to replace human workers, but people that use it will replace people that don't. And connecting back to your PDF story a little bit: there are the people that are just working with the PDFs, and the people that are working with the PDFs and have the AI augmentation to get across that last mile, or to make some of the grunt work go much faster. It's to think about it more as a tool. In the article, for a little more context, he was also relating it to, as we talked previously, kind of the 3% rule: as of today, there's really only 3% of people that really need to be using AI every single day. But if we look for small opportunities to make it go faster, there are tons of those, and there's very few professions where you couldn't find some opportunity to do some of your work a little bit faster, a little bit better, leveraging AI. And so the folks that embrace that, at least in his hypothesis, are the ones that are going to replace the human workers, because we aren't really close to.

Daniel [00:10:39]:
AI means replacing humans.

Hunter [00:10:41]:
Yeah. AI really replaced 60 writers, right? A room full of 60 writers. Or two podcast hosts. I don't know if you saw last episode, by the way. Todd will not be appearing on this. We have an agreement. Some legal authorities were involved, but Todd, Todd is, I think, 60 feet.

Daniel [00:11:02]:
He's supposed to stay. Well, given that words were spoken, Todd lives in servers. The question is, what is 60 feet, really?

Hunter [00:11:11]:
There was a late night phone call.

Daniel [00:11:13]:
Yep. I don't think I'm 60 milliseconds of lag away from us. So, okay. The idea that people using AI are going to replace people that don't: I think this is right within the lines of what we've described in the past, which is to say, AI isn't going to be replacing people. Outside the occasional media stunt, no one's hiring ChatGPT. Rather, it's people using ChatGPT to augment their capabilities within a job. And I think what Andrew Ng is saying is that this is a powerful enough tool, suite of capabilities, whatever you want to call it, that if you don't use it, you are going to be at a disadvantage compared to other people.

Daniel [00:11:57]:
Purely anecdotally, a friend of mine is currently looking for a new job, and that new job involves applying to a whole bunch of different places. And this friend was getting very few hits back. I looked at his resume; it looked fine, and something about it, he was just not connecting with the folks that he was sending things out to. And his suspicion was that a lot of other people are using ChatGPT or other generative AI type tools to gussy up the resume a little bit, to make it look a little bit better: better SEO, better keywords, whatever. And he, quote unquote, gave in. For the most part, I'd say he's somewhat against the use of these sorts of tools, certainly in this kind of case, a very human sort of: I want people to look at this and make a meaningful connection, and then they, the people, want to hire me. And of course, you have to get through all the automated systems to do that.

Daniel [00:12:52]:
So he bites the bullet and uses ChatGPT. It fixes up the resume. And he said it was an immediate and marked difference in terms of how many people were responding to him. It wasn't just a first discussion, getting his foot in the door. He went from basically nothing to interviews, like late-stage interviews, being able to actually get his way through the process, just by using ChatGPT on his resume. That's a tool used pre-job, so to speak. But once he or anyone else has a job, why wouldn't you use tools at your disposal to make your emails worded a little bit better, or to change the formatting of a document to be more consistent with your company's stated policies for how you're supposed to write them? Whatever it ends up being, this is what these tools are for.

Daniel [00:13:47]:
And then if you don't get with the program, so to speak, you're either making your job harder or going to do it worse. I truly believe that. And not years from now. Now.

Hunter [00:13:58]:
From now. I don't know if you saw, this week developers got their first access to Apple Intelligence. That, by the way, is what AI stands for.

Daniel [00:14:06]:
Not in the EU, right?

Hunter [00:14:08]:
No, that's a sore subject. But they got their first access to it. So this is just a developer beta. So it's integrated natively with, I believe it's the iPhone. I don't know if they also released the next version of macOS, but it's mainly rewriting. What people are demoing, at least when I'm seeing these little videos, is where they write their business letter and then they can go in and click: make it more professional, make it more polished. A lot of demos where people would write things that were very profanity-laden rants about their work that they want to send to their boss, and then they press the button and it rewrites it.

Daniel [00:14:43]:
Here's what I really think. Wildly more professional. Yeah.

Hunter [00:14:46]:
So it is coming, to your point, and more and more people are going to have access to it, and more and more people are going to be using it. So if you choose not to use it, well, you're going to have to compete with those that do.

Daniel [00:14:57]:
Now. I think somewhat conversely to that, some people are not seeing the value in, I'm going to say specifically, ChatGPT, or the Copilot AI from Microsoft. There is a pharma company where they were using Microsoft's Copilot, which is a paid service they were paying for. The CIO at that pharma company canceled the Microsoft Copilot AI subscription. Those subscriptions cost money. This is a paid service.

Hunter [00:15:37]:
We talked about it before. It's not included with Office 365. That's one of the points you made: hey, every company in the world, or let's say 80% of companies in the world, have an Office 365 license. Everyone's going to have AI. But at least at present, it's expensive too. Like, maybe $10 to $12 more per month per license. They're charging a lot out of the gate, right?

Daniel [00:16:01]:
And that's per seat.

Hunter [00:16:02]:
Correct.

Daniel [00:16:03]:
And if you're a big company, that's a ton of money per month. We've talked plenty about local large language models that you can download for free and run on your computer.

Hunter [00:16:12]:
Right?

Daniel [00:16:12]:
If you just want someone to be able to put in an email and like re format it, you don't have to be paying money for that even. Go ahead.

Hunter [00:16:22]:
The argument is you don't have to make an employee much more productive in order to recoup that $12. So the ROI on $12 is incredible. That's the argument. That's the hypothesis. Spend twelve dollars and all your employees are doing 20% more work every day. But how did it work out for this pharma company?

Daniel [00:16:45]:
Well, the executive said building a generative AI slide capability is really at the quality of middle school presentations at this point, and then Excel, again, is not really somewhere that most people who use spreadsheets would think of using it. I think those are fair points to bring up. I wouldn't think of using ChatGPT, or any LLM really, in Microsoft Excel. I might ask it to help me write a function, but at least off the top of my head, at this stage of the game, I wouldn't imagine that that's already built into Excel and that I should be using that version of it. Even if I saw, hey, you've opened up Excel, here's a new version of Clippy, and it says, can I help? I would still go to ChatGPT and type in: I'm looking to make this kind of function, and here's a couple of sample inputs, or whatever. When it comes to slides, I don't think I would really, at this stage, for myself, count on that to be doing a good job there.

Daniel [00:17:52]:
Now, editing a Word document, or one of the features that this same executive found really meaningful was taking, like, a Teams meeting and then summarizing it. Holy moly, talk about people saying these meetings should have been emails instead. What if you didn't pay attention during a meeting and then you got a bullet-point set of what the email should have been in the first place? I'd take that. That's worth $12 to me, for sure. That's $12 a day, let alone a month.

Hunter [00:18:20]:
Yeah. I think it's a little tangent, but I'll go in there. So everyone's doing that now. They're recording their meetings, and they're getting summaries. And I don't know the meetings that you're in, but I frequently now see, and what's the main one, I see it more than all the others, the AI agent joins the meeting and takes the notes.

Daniel [00:18:37]:
Yep, Zoom's got one.

Hunter [00:18:39]:
Otter's the one I see more than anyone else, but, yeah, it's native in Zoom now. Yeah, I never read those notes. It's just another email that I get: oh, here we are, sharing the notes from the last meeting. I look at it, I create my email filter, throw that thing directly into the trash. You look at it? You like it?

Daniel [00:18:58]:
Well, it depends on how much attention I was paying. Hey, any of my employers, ignore this next bit. If I'm not paying attention during a meeting, which is fairly often. Okay, you can start paying attention now. I do think that there is a lot of value to be had with summarizing what was in the meeting. That's what you see lots of product and project managers kind of talking about then.

Hunter [00:19:23]:
Yeah.

Daniel [00:19:23]:
Anyways: hey, here's our action items, and here's the people who should be doing that, and here's the follow-up tickets that I'm gonna make. Having that automatically generated really seems like an obvious value add to me. Then I'd not have to pay attention during the meeting, and I can just see a quick one-pager of what that was all about. Now, of course, if I, Daniel, have to be involved in the meeting, I don't think I'm going to be paying too much attention to those notes, because I was part of the thing. I know what was going on.

Hunter [00:19:54]:
If I didn't have to go to the meeting, then, yeah, I'd rather read your bullets, and why'd we need the meeting in the first place? But I personally don't care for the automated meeting summaries. I don't find them valuable. They're neat.

Daniel [00:20:09]:
Think of a meeting that you're invited to but you couldn't attend, and it gets recorded. Do you go back and watch those recordings? Oh, I'm gonna, don't worry, I'll watch it. Never. Never, ever. But if that recording got a one-pager summary that was immediately made available to you, then you get all the important stuff, and you didn't have to watch an hour and a half of people umming and ahing back and forth.

Hunter [00:20:32]:
There's the thing that I do like, though: when meetings get recorded and indexed. Didn't we at one point talk about this? I can go and search, and then go back and literally re-listen to that moment, you know, months later. Like, didn't we discuss, there was something about this that was discussed? Or I have a question after the fact. I'm like, oh, they asked me about this, what was that? And then go back and search the index. And I can imagine, okay, we should feed that into some sort of RAG implementation, so that then I can just ask the AI agent. That's valuable. Your bulleted summary, stick it where the sun don't shine.

Daniel [00:21:08]:
So I think, what were we talking about? The things that you just said belong together. I use a note-taking app called Obsidian. There's a million apps out there; Evernote is one of them. You can make your own private wiki for a bunch of different things, but it is kind of a wiki-type solution. They have a nice little graph view where you can see all the nodes connecting to each other. But imagine if you had recordings of all of your meetings that get put into an index that also makes little one-pager summaries: here's your meeting from this day, a summary, links to other things.

Daniel [00:21:40]:
Here's a link to that project that you're talking about, or the Johnson account, or whatever it is. And every time you have a meeting, it fills in more of this knowledge base. This is, of course, just another implementation of what you could do with retrieval augmented generation. But the reason why I like it being potentially integrated into a note-taking app is that you can then build up your own easily human-understandable sort of: I want to see everything associated with the Johnson account or project such-and-such, and then go to its entry and see these were the five meetings where it was talked about, and this ticket that we made came out of it, and have all that done automatically. That feels so, so useful to me.

Hunter [00:22:24]:
We were talking about the pharma company who's failing at their Copilot usage, not ChatGPT, but Office Copilot. My suspicion is that they just weren't using it very well. This is a "you're holding it wrong" situation. However, I think that the marketing does a disservice to suggest that you do not have to hold it a particular way. You just open up your Excel spreadsheet and press a button, and the world's most beautiful, professional-looking PowerPoint presentation appears that will look like some PhD plus a design expert put it together, right? That's not how it actually works.

Daniel [00:22:59]:
Yeah. This is not make me a slide deck. This is help draft out the slide deck. Help be a tool that I can use to make a slide deck better.

Hunter [00:23:09]:
You need to have a really good idea of what you're going for. And if you're not there yet, explore that idea, then you can use those ideas to get the AI to generate the thing that you want. And it will likely be better than if you had not used the AI because it's going to be a little more polished, it's going to be a little more complete, it's going to explore areas that you hadn't explored.

Daniel [00:23:32]:
And all of that, back to Andrew Ng's point, which is: AI isn't supposed to replace human workers, and it's not going to, successfully, in the short term. We've talked previously about a whole team of writers who were let go, and then suddenly quality starts dropping. Or this pharma CIO saying, let's not use it because it can't completely replace all these tasks. It's not supposed to, nor is it good enough to. It's supposed to be a tool that people use to augment their capabilities.

Hunter [00:24:03]:
Now, what if, and I'm speaking hypothetically here, what if you had ChatGPT in your brain?

Daniel [00:24:13]:
You mean like the voices in my head that I can talk to and hold a conversation with?

Hunter [00:24:19]:
I'm actually referring to a company called Synchron. They recently demoed a person with ALS, and they have a brain implant, and they integrated ChatGPT in with the brain implant, although I don't think it was actually running in the brain implant as their press release would suggest. But it essentially helped the user communicate, or I guess expand the messages, sort of just like we were talking about. Like, you have these sort of brief thoughts, and, how do I, let me restate them in a more human-like way, a more complete way, because the ALS person struggled to communicate large things because he had very limited mobility.

Daniel [00:24:59]:
Yeah. So, Synchron, this brain-computer interface. I just put my hand on the back of my head because, having watched The Matrix, I now imagine there's a big spike that just kind of gets shunked right in there. Maybe that's not quite how it works, but the idea is there's this thing in his brain so that when you think about moving a mouse or clicking on a specific part of a screen, it does that. And historically, I say historically as if this has been around for a long time, it hasn't, these kinds of technologies allow those who don't have the use of their hands or other pieces of their body to use computers, to be able.

Hunter [00:25:34]:
To communicate, kind of hunt and peck on a keyboard.

Daniel [00:25:37]:
Yeah. Which is an incredible set of capabilities. I'm very, very happy that people who don't have the ability to move will be able to communicate and, like, be part of the world around them. Using generative AI to augment that, I think, is an incredible use case. Of course, the CNET article, "How this brain implant is using ChatGPT," is very misleading. From what I can tell, they're not using it. They're saying we could use something like ChatGPT to figure out the next word that somebody would say, so that instead of pecking for individual letters, they could be able to. It's.

Daniel [00:26:18]:
They're describing a better autocomplete, right?

Hunter [00:26:21]:
It is a better autocomplete. So it's: when he receives a message in from a text, for example, or a conversation, someone says something, he has a list of things that he can respond with, versus just hunting and pecking. And that list of things is generated by ChatGPT. So it would be the same as if you asked me a question, and now I go to ChatGPT and say, what are the top nine most probable answers to this question? I pop those up, and then he can look with his eyes and identify, okay, use this one, versus me hunting and pecking an answer. It's good enough, right?
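The candidate-reply flow Hunter describes can be sketched as below. This is a toy stand-in: a real system would ask an LLM for the candidate responses, while here a word-overlap score just illustrates the "rank a few replies, let the user pick with their eyes" loop. All the messages and the `top_replies` function are invented for illustration.

```python
import re


def top_replies(message, candidates, n=3):
    """Return the n candidate replies sharing the most words with `message`.

    A real implementation would get `candidates` from an LLM prompt like
    "give the most probable answers to this question"; the overlap score
    below only mimics that ranking step.
    """
    words = set(re.findall(r"[a-z']+", message.lower()))
    scored = sorted(
        candidates,
        key=lambda reply: len(words & set(re.findall(r"[a-z']+", reply.lower()))),
        reverse=True,
    )
    return scored[:n]


candidates = [
    "Yes, see you at dinner tonight.",
    "No, I can't make dinner tonight.",
    "Thanks for checking in!",
]

# The two dinner-related replies rank above the generic one.
print(top_replies("Are we still on for dinner tonight?", candidates, n=2))
```

The user then selects one of the offered replies instead of spelling an answer letter by letter, which is the "better autocomplete" being described.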

Daniel [00:26:53]:
Or thinking about it a specific way. It is a potential boost to that technology, and I don't want to deride that at all. But this isn't talking to ChatGPT in your head, nor would I, I think, want that sort of thing. Even just the Her style, I have a phone in my pocket that I can leave on and just have conversations with, is sci-fi enough for my tastes. But for non-able-bodied persons, do you think that ChatGPT or other generative AI solutions are a, or the, proper way to be allowing them to communicate better? Or should we be focusing more on just making, like, the actual ability to move the mouse around and click on stuff faster? The actual result is going to be both.

Hunter [00:27:48]:
Right, but I was just saying it's.

Daniel [00:27:51]:
Not one or the other. We're obviously doing both. But is a better autocomplete the right path to be going down? Or would it be using those same capabilities to help instead of like, individual words? I don't know. I've seen for texts and for emails. Here's maybe the rest of a sentence. And you can accept or reject that.

Hunter [00:28:18]:
I don't know. My brain went on a side run when you were talking about people with various challenges. So if we take someone with an incredibly low IQ, and we then filter everything they say through a ChatGPT which is instructed to rewrite it as someone with an incredibly high IQ, what are we really doing there? And what is that relationship that we're creating? And what are the ramifications of taking low IQ individuals and adding a high IQ filter on their output? It's still low IQ input.

Daniel [00:29:05]:
I don't think there's anything super inherently wrong with that. People use makeup to take their visual appearance and change it.

Hunter [00:29:15]:
That's an interesting comparison.

Daniel [00:29:17]:
Yeah. But also high IQ individuals take their.

Hunter [00:29:22]:
Like you and I, like, like the.

Daniel [00:29:24]:
Two of us, we can take anything that we write or say and then hire a. Like a. Not a screenwriter. What's the term? I'm trying to think like a editor. Yeah, an editor. Or a, like, speech writer perhaps. If I wanted to give a public presentation, I'm not just going to write down my thoughts and immediately present them in that first draft form. It's going to go through multiple drafts, especially if there's a person.

Hunter [00:29:47]:
I'm just pressing that copilot button.

Daniel [00:29:49]:
Yeah. Or there's a person, or you use a copilot or ChatGPT, and you get the bullets that say: here are all of the thoughts that you wanted to share, but in a more palatable way; here's the audience that you want to pander to, and here's ways that you could specifically do it. That happens in politics, and outside of politics, already. Why not make that same kind of tool available to people, especially if they know: I'm not much of a public speaker, but I need to talk about this. Or: I'm going to write an email to my boss asking for a raise, but I don't know how to do it. That doesn't have to be low IQ people. High IQ people, or, you know, someone who is bad at dealing with confrontation, or this or that.

Daniel [00:30:31]:
This is using a third party person or otherwise whose job it is to help you present information in a certain way.

Hunter [00:30:44]:
I'm still stuck on my, again, the high IQ filter. And I think where it falls apart is: so, great, the low IQ person, they can apply the high IQ filter on their output that goes out to the world. They then receive a response, and that response will be somewhere between medium IQ and high IQ. And now they need ChatGPT to do the inverse. It's the ELI5 filter back to them, for everything. And I think that they're not getting the complete picture back, the response back. So the concept is that it can elevate them for brief moments of time, but it can't, at least at this point in time, elevate them forever, because they still have that ultimate limitation of low IQ.

Hunter [00:31:26]:
They wouldn't be able to complete the discourse.

Daniel [00:31:32]:
No, but I think there is an elevation that isn't temporary. There are, there are types of communication that you can't have, describing this low to high IQ kind of back and forth. Forget mental acuity; just make that different languages. I can't speak Chinese. You know, if I try and speak in Mandarin to someone, well, I can't. I know zero Mandarin. I cannot speak in that, or Cantonese, or any other Chinese language. One cannot do that without the information.

Daniel [00:32:02]:
And so there's a very large percentage of humanity that I straight up can't talk to. But using generative AI, using just a Google Translate, even hunting and pecking on a keyboard with just a big dictionary, I could do some sort of communication back and forth. And I think LLMs, machine learning in general, AI, all of these are tools that can help people communicate. And communication is very important to me. I got into computational linguistics because I think using technology to help people communicate more clearly and enhance people's lives is a very laudable goal. So I like the idea of using it to help those who might not understand high-level concepts be able to grasp them, even if not fully, more than they would have otherwise, or people who can't speak a certain language be able to communicate with others. That would be a full language barrier otherwise.

Hunter [00:33:04]:
So you're going to pass on the ChatGPT implant.

Daniel [00:33:08]:
I'm giving it a pass. I'm not getting the neural implant, whether it be from Neuralink or Synchron or anyone else. But I do think that there is some sort of nearly-always-connected capability that I could really go for. Again, the Her style: the phone is in your pocket, and it's just on all the time, and you could ask it a question whenever. I could get behind that.

Hunter [00:33:32]:
You just may never become self-aware without the chat.

Daniel [00:33:37]:
I'm relying on the phone to be self aware for me. Todd. Todd.

Hunter [00:33:41]:
Leave Todd out of it. Folks, excluding Todd, you have been listening to episode 26 of the world-renowned podcast They Might Be Self-Aware. Hit the subscribe button. Don't hit the subscribe button.

Daniel [00:34:00]:
Hit the subscribe button.

Hunter [00:34:01]:
Hit the subscribe.

Daniel [00:34:03]:
Like us. Leave some stars, a smile, a comment.

Hunter [00:34:06]:
We may even read it right here on the I don't say air. Sure, we'll say air. The wire.

Daniel [00:34:13]:
Breathe air. Yeah.

Hunter [00:34:14]:
Okay. You can find us on pretty much every major podcasting platform. Wherever you want to listen to your podcasts, you can listen to They Might Be Self-Aware, including wherever you're listening to it right now, where there also probably is a subscribe button. But fear not, we will be back next week with even more insights. More than that: insights, knowledge, ruminations, humor, just general brilliance and high-IQ speak. But that's gonna do it for today, because you have been listening to They Might Be Self-Aware.