Podcasts about AltaVista

Web search engine

  • 219 PODCASTS
  • 268 EPISODES
  • 43m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • LATEST: Jan 15, 2026
AltaVista popularity by year, 2019–2026 (chart)


Latest podcast episodes about AltaVista

A Cut Above: Horror Review
E234: X: The Man with the X-Ray Eyes (1963)

Jan 15, 2026 · 85:20 · Transcription available


Episode 234: Week four of our Christmas with Corman theme (and an entire month of Roger Corman films) is here, and through sheer vision we have picked X: The Man with the X-Ray Eyes from 1963. Can we see what's at this film's center? Find out. Tune in next week for our final week of Roger Corman films with 1964's Masque of the Red Death. Become a supporter of this podcast: https://www.spreaker.com/podcast/a-cut-above-horror-review--6354278/support.

9to5Mac Overtime
9to5Mac Overtime 054: How's it better than AltaVista?

Jan 9, 2026 · 38:22


Jeff and Fernando are back with a chit chat about the best tech from the last year, and what they're most looking forward to in 2026. 9to5Mac Overtime is a weekly video-first podcast exploring fun and interesting observations in the Apple ecosystem, featuring 9to5Mac's Fernando Silva & Jeff Benjamin. Subscribe to Overtime via Apple Podcasts and our YouTube channel for more.

Hosts
  • Fernando Silva
  • Jeff Benjamin

Links
  • Low cost MacBook launching in 2026
  • Potential iPhone Fold display
  • Ableton Move
  • ScreenFlow

Subscribe
  • 9to5Mac Overtime on Apple Podcasts
  • 9to5Mac on YouTube
  • 9to5Mac on YouTube membership with bonus perks

The Empire Builders Podcast
#238: Google – Do No Evil…

Jan 7, 2026 · 26:06


Larry Page said in the early day, a guiding principle is Do No Evil. I wonder if we can say that today or is it just business as usual? Dave Young: Welcome to the Empire Builders Podcast, teaching business owners the not-so secret techniques that took famous businesses from mom-and-pop to major brands. Stephen Semple is a marketing consultant, story collector, and storyteller. I’m Stephen’s sidekick and business partner, Dave Young. Before we get into today’s episode, a word from our sponsor, which is, well, it’s us, but we’re highlighting ads we’ve written and produced for our clients. So, here’s one of those. [Out of this World Plumbing Ad] Dave Young: This is the Empire Builders Podcast, by the way. Dave Young here, Steve Semple there. I wonder, Stephen, if we could do this whole episode without mentioning the name of the company that we’re going to be talking about. I ask that for the simple reason of they already know. They already know what we’re talking about. They already know we’re talking about them. They probably knew we were going to talk about them. Stephen Semple: Because of all the research I’ve done on my computer. Dave Young: No, because they’re listening to everything. They probably already know the date that this is going to come out and how long it’s… I don’t know, right? When they first started, and I don’t think we felt that way about them, and I can remember back in the early 2000s, just after the turn- Stephen Semple: In the early days, they had a statement. Larry Page was very famous. Dave Young: Yeah, “Do no evil.” Stephen Semple: “Do know evil. Do no evil,” and that was a very, very big part. In fact, in the early stages, they made a bunch of decisions that challenged the company financially because they were like, “This is not good experience for the person on the other end.” I wonder if anybody’s guessed yet what we’re going to be talking about. Dave Young: Well, then you go public, and it’s all about shareholders, right? It’s like the shareholders are like, “Well, we don’t care if you do evil or not. We want you to make money.” That’s what it’s about because you have [inaudible 00:03:01]. Stephen Semple: All those things happen. Dave Young: Yeah. Stephen Semple: This company that we’re talking about, we’ll go a little while before we’ll let the name out, was founded… On September 4th in 1998 was when it was actually founded. Dave Young: Oh, ’98. It goes back before the turn of the century [inaudible 00:03:14]. Stephen Semple: Yeah. It was founded by Larry Page and Sergey Brin, who met at Stanford. Interesting note, the Stanford grads also created Yahoo. Dave Young: Okay, yeah. Stephen Semple: That’s giving you another little clue about the company that we might be talking about. Dave Young: In the same geek club. Stephen Semple: Yeah, so 1998. I was thinking back, one year after I graduated from university, Windows 98 is launched and, believe it or not, the last Seinfeld episode aired. Dave Young: Are you kidding me? Stephen Semple: No, isn’t that crazy? Dave Young: ’98. Stephen Semple: Yeah. Dave Young: I mean, I was busy raising four daughters in ’98. Stephen Semple: Yeah. Today, this company, as you said, because you didn’t want me to name the company, has more net income than any other business in US history. It has, now, I got to let the cat out of the bag, eight and a half billion searches a day happen. And yes, we’re talking about the birth of Google, which is also now known as part of the Alphabet group. Dave Young: Alphabet, yeah. 
It’s funny how they got to get a name that means everything. Did they have a name before Google? I know Google was like… Oh, it’s a number really, right? It’s a gazillion, bazillion Googleplex. Stephen Semple: As we’ll go into a little bit later, they actually spelled it wrong when they registered the site. That’s not actually the way that the word is spelled. I’ll have to go… But yeah, the first iteration was a product called BackRub was the name of it. Dave Young: Backrub, okay. Stephen Semple: Alphabet also owns the second largest search engine, which is YouTube. Together, basically, it’s a $2 trillion business, which is larger than the economy of Canada. It’s this amazing thing. Going back to 1998, there are dozens of search engines all using different business models. Now, today Alphabet’s like 90% in the market. Up until this point, it’s been unassailable, and it’s going to be really interesting to see what the future of AI and whatnot brings to that business. But we’re not talking about the future, we’re talking about the past here, so back to the start. Larry Page was born in Lansing, Michigan. His dad is a professor of computer science. His mom is also a computer academic. This is in the ’70s. Between 1979 and ’80, his dad does a stint at Stanford and then also goes to work at Microsoft. Now, Larry and Sergey meet at Stanford, and they’re very ambitious, they’re equal co-founders, but Larry had this thing he also talked about where he said, “You need to do more than just invent things.” It wasn’t about inventing things, it was about creating things that people would use. Here’s what’s going on in the world of the web at this time to understand what’s going on. Here’s some web stats. In 1993, there’s 130 websites in the world. In 1996, three years later, there’s 600,000 websites. That’s a 723% growth year over year. The world has never seen growth like that before. Dave Young: Right, yeah. It was amazing to experience it. People that are younger than us don’t realize what it was. Josh Johnson, the comedian, has a great routine on trying to explain to people what it was like before Google. You needed to know something- Stephen Semple: What it was like for the internet. Dave Young: Yeah. You had to ask somebody who knew. If you needed the answer to a question, you had to ask somebody. And if they didn’t know, then you had to find somebody else, or you had to go to the library and ask a librarian and they would help you find the answer- Stephen Semple: Well, I don’t think it’s like a- Dave Young: … maybe by giving you a book that may or may not have the answer. Stephen Semple: Here’s an important point. I want you to put a pin in that research. We’re going to come back to it. I was about to go down a rabbit hole, but let’s come back to this in just a moment, because this is a very, very important point here about the birth of Google. Larry and Sergey first worked on systems to allow people to make annotations and notes directly on websites with no human involved, but the problem is that that could just overrun a site because there was no systems for ranking or order or anything along that lines. The other question they started to ask is, “Which annotations should someone look at? What are the ones that have authority?” This then created the idea of page rankings. All of this became messy, and this led to them to asking the question, “What if we just focused on ranking webpages?” which led to ranking search. 
Now, whole idea was ranking was based upon authority and credibility, and they drew this idea from academia. So when we would do research, David, and you’d find that one book, what did you do to figure out who the authority was on the topic? You went and you saw what book did that cite, what research did this book cite. The further you went back in those citations, the closer you got to the true authority, right? Do you remember doing that type of research? Dave Young: Yeah, sure. Stephen Semple: Right. They looked at that and they went, “Well, that’s how you establish credibility and authority is who’s citing who.” Okay. They decided that what they were going to do was do that for the web, and the way the web did that was links, especially in the early days where a lot of it was research. Dave Young: Yeah. If a whole bunch of people linked to you, then that gives you authority over the words that they used to link on and- Stephen Semple: Well, and also in the early days, those links carried a lot of metadata around what the author thought, like, “Why was the link there?” In the early days, backlinks were incredibly important. Now, SEO weasels are still today talking about backlinks, which is complete. Dude, backlinks, yeah, they kind of matter, but they’re… Anyway, I could go down a rabbit hole. Dave Young: Yeah. It’s like anything, the grifters figure out a way to hack the system and make something that’s not authoritative seem like it is. Stephen Semple: Yeah. It’s harder that you can’t hack the system today. Anyway, but the technology challenge, how do you figure out who’s backedlinked to who? Well, the only way you can do it is you have to crawl the entire web, copy the entire web, and reverse engineer the computation to do this. Dave Young: Yeah. It’s huge. We’ve been talking about Google’s algorithm for as long as Google’s been around. That’s the magic of it, right? Stephen Semple: Yeah. In the early days, with them doing it as a research project, they could do it because there was hundreds of sites. If this happened even two years later, like 1996, it would’ve been completely impossible because the sheer size to do it as a research project, right? Now, they called this system BackRub, and they started to shop this technology to other search engines because, again, remember there was HotBot and Lyco and Archie and AltaVista and Yahoo and Excite and Infoseek. There were a ton of these search engines. Dave Young: Don’t forget Ask Jeeves. Stephen Semple: Ask Jeeves? Actually, Ask Jeeves might’ve even been a little bit later, but yeah, Ask Jeeves was one of them once when it was around. Dave Young: There was one that was Dogpile that was… It would search a bunch of search engines. Stephen Semple: Right, yeah. There was all sorts of things. Dave Young: Yeah. Stephen Semple: There was another one called Excite, and they got close to doing a deal with Excite. They got a meeting with them, and they’re looking at a license deal, million dollars for BackRub, and they would go into the summer and they would implement it because they were still students at Stanford. They got so far as running for the executives there a side-by-side test. They demo this test and the results were so good with BackRub. Here’s what execs at Excite said, “Why on earth would we want to use your engine? We want people to stay on our site,” because, again, it would push people off the site because web portals had this mentality of keeping people on the site instead of having them leave. So it was a no deal. 
They go back to school and no one wants BackRub, so they decide to build it for themselves at Stanford. The original name was going to be Whatbox. Dave Young: Whatbox? I’m glad they didn’t use Whatbox. Stephen Semple: Yeah. They thought it sounded too close to a porn site or something like that. Dave Young: Okay, I’ll give them that. Stephen Semple: Larry’s dorm mate suggested Google, which is the mathematical term of 10 to the 100th power, but it’s spelled G-O-O-G-O-L. Dave Young: Googol, mm-hmm. Stephen Semple: Correct. Now, there’s lots of things here. Did Larry Page misregister? Did he decide purposely? There’s all sorts of different stories there, but the one that seems to be the most popular, at least liked the most, is that he misspelled it when he did the registration to G-O-G-G-L-E. Dave Young: I think that’s probably a good thing because when you hear it said, that’s kind of the first thing you go- Stephen Semple: That’s kind of how you spell it. Dave Young: … how you spell it. I think we’d have figured it out, but- Stephen Semple: We would’ve, but things that are easier are always better, right? Dave Young: Yeah. Stephen Semple: By spring of ’98, they’re doing 10,000 searches a day all out of Stanford University. Dave Young: Wait, 10,000 a day out of one place. Stephen Semple: Are using university resources. Everyone else is just using keywords on a page, which led to keyword stuffing, again, another one of these BS SEO keyword stuffing. Now, at one point, one half of the entire computing power at Stanford University is being used for Google searches. It’s the end of the ’98 academic year, and these guys are still students there. Now, sidebar, to this day, Stanford still owns a chunk of Google. Dave Young: Okay. Stephen Semple: Worked out well for Stanford. Dave Young: Yeah, I guess. Stephen Semple: Yeah. Now, Larry and Sergey need some seed round financing because they’ve got to get it off of Stanford. They’ve got to start building computers. They raise a million dollars. Here’s the interesting thing I had no idea. Guess who one of the first round investors are who ended up owning 25% of the company in the seed round? Dave Young: Stay tuned. We’re going to wrap up this story and tell you how to apply this lesson to your business right after this. [Using Stories To Sell Ad] Dave Young: Let’s pick up our story where we left off and trust me you haven’t missed a thing. Stephen Semple: Guess who one of the first round investors are who ended up owning 25% of the company in the seed round? Jeff Bezos. Dave Young: Oh, no kidding. Stephen Semple: Yeah, yeah. Jeff Bezos was one of the first four investors in Google. Dave Young: Okay. Well, here we are. Stephen Semple: Isn’t that incredible? Dave Young: Yeah. Stephen Semple: Now, AltaVista created a very interesting technology because AltaVista grew out of DEC computers who were building super computers at the time. They were basically one of the pre-leaders in search because what they would do is everybody else crawled the internet in series. They were crawling the internet in parallel, and this was a big technological breakthrough. In other words, they didn’t have to do it one at a time. They could send out a whole ton of crawlers, crawling all sorts of different things, all sorts of different pieces, bringing it back and could reassemble it. Dave Young: Got you. Stephen Semple: AltaVista also had therefore the most number of sites indexed. 
I remember back in the day, launching websites, like pre-2000, and yeah, you would launch a site and you would have to wait for it to be indexed and it could take weeks- Dave Young: You submit it. Yeah, there were things you could do to submit- Stephen Semple: There was things you could submit. Dave Young: … the search engines. Stephen Semple: Yes, yeah, and you would sit and you would wait and you’d be like, “Oh, it got crawled.” Yeah, it was crazy. We don’t think about that today. [inaudible 00:15:57] websites crawl. Dave Young: You’d make updates to your site and you’d need to resubmit it, so it would get crawled again- Stephen Semple: Oh, yeah. Yeah. Dave Young: … if there was new information. Stephen Semple: People would search your site and it would be different than the site that you would have because the updates hadn’t come through and all those other things. In 1998, Yahoo was the largest player. They were a $20 billion business, and they had a hand-curated guide to the internet, which worked at the time, but the explosive growth killed that. There was a point where Yahoo just couldn’t keep up with it. Then Yahoo went to this hybrid where the top part was hand-curated and then backfilled with search engine results. Now, originally, Google was very against the whole idea of banner ads, and this was the way everyone else was making money, because what they knew is people didn’t like banner ads, but you’re tracking eyeballs, you’re growing, you need more infrastructure, because basically their way of doing is they’re copying the entire internet and putting it on their servers and you need more money. Now, one of the other technological breakthroughs is Google figured out how to do this on a whole pile of cheap computers that they just stacked on top of each other, but you still needed money. At this moment, had no model for making money. They were getting all these eyeballs, they were faster because they built data centers around the world because they also figured out that, by decentralizing it, it was faster. They had lots of constraints. What they needed to do at this point was create a business model. What does one do when one needs to create a business model? Well, it’s early 1999, they’re running out of money. They hire Salar Kamangar, who’s a Stanford student, and they give him the job of writing a business plan. “Here, intern, you’re writing the business plan for how we’re going to make money. Go put together a pitch deck.” Dave Young: I wonder if they’re still using the plan. Stephen Semple: What they found at that point was there was basically three ways to make the money. Way number 1 was sell Google Search technology to enterprises. In other words, companies can use this to search their own documents and intranets. Dave Young: I remember that, yeah. Stephen Semple: Yeah. Number 2, sell ads, banner ads, and number 3, license search results to other search engines. Dave Young: Okay. Stephen Semple: Based upon this plan, spring of ’99, they do a Series A fundraise. They raised more money, and they also meet Omid [inaudible 00:18:22] who’s from Netscape, and he’s kind of done with Netscape because Netscape had been just bought by AOL, and they recruit him as a chief revenue officer. Omid tries to sell the enterprise model, kind of fails, so things are not looking good on the revenue front. It’s year 2000, and the technology bubble is starting to burst. The customer base is still growing because people love it, love Google, but they’re running out of money again. 
They decide to do banner ads, because they just have got no money. Here’s the interesting thing is, in this day, 2000, I want you to think about this, you have to set up a sales force to go out and sell banner ads to agencies, people picking up the phone and walking into offices, reaching out to ad agencies. Dave Young: Yeah, didn’t have a platform for buying and selling… And banner ads, gosh, they were never… Google ads, in the most recent memory, are always context-related, right? Stephen Semple: Yes. Dave Young: But if you’re just selling banner ads to an agency, you might be looking for dog food and you’re going to see car ads and you’re going to see ads for high-tech servers and all kinds of things that don’t have anything to do with what you’re looking for. Stephen Semple: That’s how the early banner ads work. Hold that thought. You’re always one step ahead of me, Dave. Dave Young: Oh, sorry. Stephen Semple: Hold that thought. No, this is awesome. Dave Young: I’m holding it. Stephen Semple: What I want to stress is, when we talk about how the world has changed, in 2000, Google decides to do banner ads and how they have to do it is a sales force going out, reaching out to agencies, and agencies faxed in the banner ads. Dave Young: Okay. Yeah, sure. It would take too long for them- Stephen Semple: I’m not making this up. This is how much the world has changed in 25 years. Dave Young: “Fax me the banner.” Stephen Semple: Salespeople going out to sell ads to agencies for banners on Google where the insertions were sent back by fax. Dave Young: For the people under 20 listening to us, a fax machine- Stephen Semple: Who don’t even know what the hell a fax machine is, yeah. Dave Young: A fax machine, yeah, well, we won’t go there. Stephen Semple: Yeah. Now, here’s what they do. They also say to the advertisers at this point, “Google will only accept text for banner ads for speed.” Again, they start with the model of CPM, cost per a thousand views, which is basically how all the agencies were doing it, but they did do a twist on it. They sold around this idea of intent that the ads were showing keyword-based and they were the first to do that. What they did is they did a test to prove this. This was really cool. They set themselves up as an Amazon affiliate and dynamically generated a link on a book search and served up an ad, an affiliate ad, and they’re able to show they were able to sell a whole pile of books. The test proved the idea worked. And then what they did is they went out and they white-labeled this for others. For example, Yahoo did it, and it would show on the bottom of Yahoo, “Powered by Google.” But here’s the thing, as soon as you start saying, “Powered by Google,” what are you doing? You’re creating share of voice. Share of voice, right? Dave Young: Well, yeah, why don’t I just go to Google? Stephen Semple: Why don’t I just go to Google? Look, we had saw this a few years earlier when Hotmail was launched by Microsoft where you would get this email and go, “Powered by Hotmail,” and you’d be like, “What’s this Hotmail thing?” Suddenly, everybody was getting Hotmail accounts, right? Dave Young: Yeah. Stephen Semple: No one has a Hotmail account, no longer they have Gmail accounts, they hardly have Gmail accounts anymore. Dave Young: No, I could tell you that we’ve got a lot of people at Wizard Academy that email us off with a Hotmail. Stephen Semple: Still have Hotmail accounts? Dave Young: Sure. Stephen Semple: Oh, wow. So it’s still around? Okay. 
Dave Young: And then some Yahoos, yeah. Stephen Semple: Wow, that’s amazing. That’s amazing. Well, still- Dave Young: Yahoo, the email, not the customer. They’re not a Yahoo, but they have an account there. Stephen Semple: In October 2000, they launch AdWords with a test of 350 advertisers. And then, in 2002, they launched pay-per-click Advertising. And then 2004, they go public. Now, here’s one of the other things I want to talk about in terms of share of voice. They had a couple things going on with share of voice. They had that, “powered by Google,” which created share of voice because… We often think of share of voice as being just advertising in terms of how much are people knowing about us. I remember knowing nothing about Google and then learning about Google when Google went public because Google dragged out going public. They talked about it for a long time, but it meant it was financial press, it was front page news. It got a lot of PR and a lot of press around the time that they went public. That going public for them also created massive share of voice because there was suddenly a whole community that were not technologically savvy that we’re now suddenly aware of, “Oh, there’s this Google thing.” Dave Young: And they’re in the news, yeah. So I’ve got an idea for us, Steve. Stephen Semple: Yep, okay. Dave Young: All right. Stephen Semple: Let’s hear it. Dave Young: Let’s pick up part 2 of Google at the point they go public. Stephen Semple: All right, let’s do that. That’ll be an episode we’ll do in the future, yeah. Dave Young: We don’t do very many two-parters, but we’re already kind of a lengthy Empire Builder Podcast here. Stephen Semple: Oh, yeah. I was just taking it to this point, but I think that would be very interesting- Dave Young: Oh, okay. Stephen Semple: … because look, Google is a massive force in the world today- Dave Young: Unbelievable, yeah. Stephen Semple: … and I think it would be interesting to do the next part because there’s all sorts of things that they did to continue this path of attracting eyeballs. Dave Young: We haven’t even touched on Gmail yet. No, we have not. We have not. Stephen Semple: Because that happened after they went public. Correct. Let’s do that. Dave Young: Okay. Stephen Semple: Here’s the lesson that I think that I want people to understand is share of voice comes from other things, but we’re going to explore that even more in this part 2. I like the idea of doing this part 2. They really looked at this problem from a completely different set of eyeballs, and this is where I commend Google, from the standpoint of there’s all this stuff in the internet and what we really want to know is who is the authority. They looked at the academic world for how does it establish authority, and how authority is established is how much is your work cited by others, how much are other… So, now, Google has of course expanded that to direct search and there’s all these other things, but they’ve always looked at it from the standpoint of, “Who in this space has the most authority? Who is really and truly the expert on this topic? We’re going to try to figure that out and serve that up.” Dave Young: Yeah. Stephen Semple: That’s core to what their objective has been. Dave Young: We could talk about Google for four or five episodes probably. Stephen Semple: We may, but we know we’re going to do one more. Dave Young: All right. Stephen Semple: Awesome. Dave Young: Well, thanks for bringing it up. We did mention their name. 
Actually, if we just put this out there, “Hey, Google, why don’t you send us all the talking points we need for part 2?” There, I put it out there. Let me know how that works. Stephen Semple: My email’s about to get just slammed. All right. Thanks, David. Dave Young: You won’t know it’s from them though. You won’t know. You won’t know. Isn’t that good? Stephen Semple: That’s true. That’s true. Dave Young: Thank you, Stephen. Stephen Semple: All right. Thanks, David. Dave Young: Thanks for listening to the podcast. Please share us, subscribe on your favorite podcast app, and leave us a big, fat, juicy five-star rating and review at Apple Podcasts. And if you’d like to schedule your own 90-minute Empire Building session, you can do it at empirebuildingprogram.com.

The Create & Thrive Podcast
I’m Retiring.

Dec 29, 2025 · 19:20


It’s time for a change. Since I started my handmade business, Epheriell, in 2008, the world has changed so much. I have changed so much. In 2008 I was a 27 year old tuition centre manager, living with my English boyfriend in the Brisbane suburbs, and I started making jewellery in my spare time to fulfil a creative urge I had neglected for years. When I started Epheriell as a hobby business, I had zero idea that I would be here, talking to you, almost 18 years later, after having built a number of different businesses – all in the realm of the handmade sphere – and having turned over almost two million dollars from those businesses in those years. Wow. I’ve never pulled that number up before! It’s a bit overwhelming to think I managed to create products with my own two hands and one brain, and have been able to make a living from them for all this time. But. The world has changed. I have changed. And it’s time for me to move on. So, as of June 2026, I’ll be retiring from handmade business education, and I’ll be putting Epheriell on indefinite sabbatical. What will I do next? I don’t actually know yet. But we’ve finally reached a point where we are in the financial position to allow me to take a sabbatical where I will have the space to rest, reflect, get the dosage of my MHT right (😆), and decide what’s next for me. I’ve officially reached middle age this year – I keep joking about the fact that I got my first reading glasses and my MHT prescription in the same week! I turn 45 in 2026, and like the elder millennial that I am, I have been obsessed with the internet since I first dove into the world of message boards, IRC, fandom, Geocities, Altavista and Ask Jeeves when I was a fresh-faced 15-year-old in 1996. Did you know, I made my very first website (on Geocities, of course) back in 1996 or 97 – and it was an X-Files fan site? It was pretty popular too! Though of course, the internet was a much, much smaller place back then. Since then, I’ve launched various blogs, started various businesses, worked for other people’s online businesses, been an affiliate, run a podcast, various YouTube channels, sold jewellery under 2 different brands, sold ads on my own site, launched a membership, sold ebooks and ecourses… I’ve tried all sorts of things and made money online in myriad ways since making that very first website (which, of course, made me zero dollars, but my nerdy friends thought it was cool, so…). Suffice it to say,  I’ve been here for a long time. 30 years! I always dreamt of making a living from the internet, and it took me about 12 years to finally crack how to do it properly – which started when I stumbled on Etsy back in 2008 and opened that first shop. It has been an amazing journey, and I have loved so much about it. It has enabled me and Nick to live a life we love. To live in a place we adore. To have so much freedom of time, to travel regularly to visit friends and family overseas, and to live a low-stress life. But I’m ready to move on. I’m also ready to take a break from the internet. I have slowly faded away from most social media over the last few years. I have no interest in instagram any more. I pretty much only use Facebook to find events to attend IRL. Threads I have been enjoying, because it felt like a breath of fresh air to actually talk WITH people again, but even that is growing thin for me right now. Don’t even get me started on TikTok (ugh). I have been putting myself and my face and my life out there on the internet since my early 20s. 
But as I get older, and the changes in the online world loom, I feel the need to take a step back and just be again, without sharing things publicly with the world. It's time for retreat, recalibration, and reflection. To decide what I want to do with this next phase of my life. And to do that, I'm stepping away for a time. Does that mean I'll never come back online? Not at all! I might decide to start up my YouTube again. Or to launch something totally new. And it doesn't mean I'm disappearing today. But I am aiming to be on sabbatical by the winter solstice here in Australia, which is June 21st, 2026. Why am I telling you so early? Because I'm going to offer a bunch of things for the last time, and it felt disingenuous to do so without being upfront about it. I've always tried my best to be honest and keep a firm grip on my integrity in a space where that seems to be increasingly rare. I want you to know what's happening, and why, so you can prepare and be aware – and if you so choose, to take advantage of my offers with full knowledge. If you are in my community, or are one of my students, Thriver Circle members, or Epheriell customers, here's the timeline of exactly what's happening.

Timeline

  • December 2025: Thriver Circle final launch – December 29th till January 7th. If you join at this time, I will unlock the full Your Year to Thrive course, so you will have just under 6 months to work through it at your own pace during this time (which does mean doing 2 lessons a week instead of 1 should you want to). You will have full access to all other courses and workshops, and I will still be running 2 live calls each month until the end of May, and I'll be active in the FB group community.
  • February 2026: Set Up Shop runs for the final time – February 9th till March 10th.
  • April 2026: I will offer shop critiques until April 6th, then they will no longer be available.
  • May 2026: I will offer the Wholesale Know-How course until May 4th, after which it will no longer be available.
  • June 2026: The Thriver Circle will close forever on June 8th, 2026. Members will no longer have access to any aspect of the Circle, including the FB group, courses, workshops, podcasts. All memberships will be cancelled and all payments will be stopped by this date at the latest. I encourage members to cancel their membership when their May payment is deducted, as you will then stay an active member until the shut-down should you so wish. I will be archiving all my social media – bar YouTube (I'm leaving all my free videos up) – in June. Epheriell will be placed on sabbatical on June 8th, 2026. This will give us time to process any final orders by the time I take sabbatical. (I'm not ruling out opening Epheriell again in the future, but it will be a long while – probably at least 18 months – until I do so – unless Nick decides he wants to run it without my input, but he's gotten himself a good permanent part-time government job so that will probably keep him busy enough!) On June 21st I will hopefully be done with all the work to wrap up the businesses, and will be stepping away from all social media for at least 6 months and taking a sabbatical from work and the online world.

I'm not going to lie, I am simultaneously elated and terrified to take this step. I've been feeling the urge to move on for a while now, but the circumstances were not in alignment until now, and I know that if I don't take advantage of this opportunity, I will regret it immensely.
I'm excited to take my first real break from paid work since I was 16, and I'm so thankful to Nick for joyfully supporting me in this choice. I want to take a moment to thank him, publicly, for all the work he has put in since this all began back in 2008. Not only in the business – of which he has been an employee for over a decade now (and let's be honest, an unpaid helper before that) – but in our life together. He has been unfailing in his support – both emotional and practical – from day 1. He has been the main home manager – taking care of the fundamentals of life like cooking, cleaning, shopping, mowing, repairs… all of those 'mundane' things that most women have to bear the mental load of, even when they are working as well. He's freed me up to manage and run the businesses – and we have assisted each other in our respective spheres all this time. It's going to be quite the dynamic shift as he goes 'to work' (even though he'll still be mostly working from home, thankfully!) and I take on more of the domestic load for the first time in our relationship. I couldn't have a more supportive partner in business and life, and I am so grateful every day that we chose to travel through life together. Thank you so much, Nick, for all that you are and all that you do. I love you, and I couldn't have done this without you. So – I'm not going anywhere just yet, but I wanted you all to know what 2026 has in store. I will still be here, enthusiastically running my businesses and helping you for the first half of 2026, and I will treasure the chance to work in this space – and with you – for these last few months. It's been an amazing career, and it wouldn't exist without you. I want to say thank you to YOU. To the 6,000+ people who've bought a piece of Epheriell jewellery. Thank you for trusting us to make a treasured piece of jewellery. Particularly those of you for whom we've made wedding rings. Every one has been special and we've been honoured to make it for you. To every person who's read my blog posts, my emails, listened to my podcast, or watched a YouTube video – I hope something I said has helped you! I have put so much free content out there over the years that honestly, you could have built your whole business using just that (and I know people who have, because they've told me!). But, most deeply, I want to say thank you to the thousands of past and present students who have paid me actual money to teach and help you grow your businesses: all I can hope for is that I haven't let you down. I hope what I shared made a difference, I hope it was valuable, and I hope that you continue to chase your dreams, whatever they may be. I have always loved the handmade community – there are still so many genuine, creative, wonderful people making things with their hands and hearts. And in this ever-more-digital and fast-everything world we live in, I think handmade has more value than ever. May you all continue to add your unique beauty to the world. Now I'm going to sign off before I make myself cry.

Keep Thriving,
Jess

The Thriver Circle is open now, for the last time.

Then and Now
[Photo] Nick and I in 2008, at the housewarming for our first rental together. This is where I started Epheriell. I made the seaglass necklace I'm wearing – some of my early jewellery incorporated seaglass found at my local beach.
[Photo] Nick and I in 2025, in Dubrovnik. We were on one of our month-long overseas trips, visiting family and friends in England and Croatia.

Stop Wasting Your Wine
Alta Vista Brut Rosé Sparkling Wine Review | A Fruit Forward Take on Sparkling Wine

Dec 23, 2025 · 44:37


Sparkling wine season rolls on and this week we pop open the Alta Vista Brut Rosé from Mendoza, Argentina. It is a tank method sparkling wine made from an unexpected blend of Malbec and Pinot Noir, and it lands right in the middle of holiday drinking season.

On this episode, we talk about why bubbles take over this time of year and how easy it is to lose track of what you are pouring between family dinners, guests dropping by, and that strange week between Christmas and New Year's. This is the kind of bottle you keep cold and open without overthinking it.

From there we break down what tank method really means and why this wine tastes fruit forward, bright, and more rosé driven than bready or yeasty. Cherry, raspberry, citrus, and a little funk on the finish lead to a bigger conversation about sparkling wine styles and how production methods shape texture and flavor.

We also walk through traditional method, ancestral method, and carbonation to help you understand what you are actually buying when you reach for bubbles. Especially when holiday pricing starts to climb.

We wrap with our verdict on value, where this wine lands on our scale, and whether it earns the "not a waste" stamp. Plus we bring back Wine With That for a holiday table scenario that gets a little too real.

If you like casual sippers. If you like sparkling without the fuss. If you want to drink smarter during bubble season. This episode is for you.

Connect with the show. We would love to hear from you!
  • Stop Wasting Your Wine on Instagram: https://www.instagram.com/stopwastingyourwine/
  • Stop Wasting Your Wine on YouTube: https://www.youtube.com/@StopWastingYourWine
  • The Stop Wasting Your Wine website: https://stopwastingyourwine.com/

Chapters
  • 00:00 - Introduction
  • 01:28 - Sparkling Wine Season
  • 04:18 - Exploring the Alta Vista Brut Rosé
  • 06:07 - Question of the Week
  • 06:59 - Tasting the Wine
  • 09:48 - Flavor Profiles and Production Methods
  • 13:09 - Exploring the Funk in Wine
  • 15:44 - Understanding Sparkling Wine Production
  • 19:40 - The Traditional Method of Sparkling Wine
  • 19:51 - Exploring the Tank Method of Sparkling Wine
  • 24:23 - Understanding the Ancestral Method of Sparkling Wine
  • 26:01 - The Carbonation Method: A Different Approach to Sparkling Wine
  • 27:55 - Wine Reviews: Casual Sippers vs. Premium Choices
  • 37:08 - Final Thoughts on Wine Quality and Pricing
  • 42:53 - Game: Wine with That

The Green Insider Powered by eRENEWABLE
Securing Critical Infrastructure: Insights from Tom Sego

Dec 18, 2025 · 15:20


Tom Sego, founder and CEO of BlastWave, discussed his background in chemical engineering and his journey through various industries, including roles at Caterpillar, Eli Lilly, Emerson Electric, AltaVista, and Apple. He explained that BlastWave was founded to combine Apple's ease of use with cybersecurity, focusing on protecting critical infrastructure as it becomes increasingly digitized. Tom emphasized that human error is a significant security risk, citing an example from the San Jacinto Water District. This Follower Friday podcast is sponsored by UTSI International.

Tom's podcast includes:
  • Critical infrastructure sectors (like oil and gas, transportation, and manufacturing) face higher cyber risks than traditional IT systems due to the severe consequences of attacks and the challenge of securing legacy devices.
  • Integrating old and new technologies is achieved by using a translation mechanism that enables secure communication between legacy systems and modern infrastructure.
  • Artificial intelligence (AI) has a dual impact: it can enhance attackers' ability to automate cyberattacks, but it also offers opportunities to improve security, such as by eliminating vulnerabilities like passwords.
  • Technology solutions are essential for reducing the human burden in security, especially for defending against phishing and reconnaissance attacks.
  • Eliminating attack vectors (e.g., usernames and passwords) can significantly reduce security risks, regardless of how effective or frequent attacks become.
  • Focusing on the safety of critical infrastructure allows people to prioritize what matters most in life, such as family, relationships, and health.

To be an Insider, please subscribe to The Green Insider powered by eRENEWABLE wherever you get your podcast from and remember to leave us a five-star rating. This podcast is sponsored by UTSI International. To learn more about our sponsor or ask about being a sponsor, contact eRENEWABLE and the Green Insider Podcast. The post Securing Critical Infrastructure: Insights from Tom Sego appeared first on eRENEWABLE.

Here's What's Happening
The Government Can't Alta Vista Money

Dec 10, 2025 · 7:28


Trump praises white immigrants, vilifies Black and brown countries at an official White House event, while Pete Hegseth faces fallout from deadly strikes and the Republican Party spirals further into chaos.

  • Shooting in Kentucky - via ABC News
  • Trump's Racist Comments - via AP News, Axios, and The Hill
  • Take the pledge to be a voter at raisingvoters.org/beavoterdecember.
  • - on Amazon
  • Subscribe to the Substack: kimmoffat.substack.com
  • All episodes can be found at: kimmoffat.com/thenews

As always, you can find me on Instagram/Twitter/Bluesky @kimmoffat and TikTok @kimmoffatishere

The Tabernacle Today
Hope to Face Any Circumstance - 11/02/2025 Sunday Sermon

Nov 2, 2025 · 38:33


Hope to face any circumstance - Romans 15:13

Romans 15:13: "May the God of hope fill you with all joy and peace in believing, so that by the power of the Holy Spirit you may abound in hope." (ESV)

Why look to God for hope?

1. God gives real hope
Lamentations 3:21-23: "But this I call to mind, and therefore I have hope: The steadfast love of the Lord never ceases; his mercies never come to an end; they are new every morning; great is your faithfulness."
Hope is forward-looking confidence in God, based on His faithfulness and power.

2. God gives lasting hope
Romans 5:3-5 says, "More than that, we rejoice in our sufferings, knowing that suffering produces endurance, and endurance produces character, and character produces hope, and hope does not put us to shame, because God's love has been poured into our hearts through the Holy Spirit that has been given to us."
John 14:27 says, "Peace I leave with you; my peace I give to you. Not as the world gives do I give to you. Let not your hearts be troubled, neither let them be afraid."

3. God gives abundant hope
It is Powerful to the true believer. It is Plentiful to the true believer.

______________

Dr. Don Cockes serves as a regional strategist in the Valley for the SBC of Virginia. He helps churches in various ways as an advisor, mentor, and partner in ministry. He lives in Salem, but is a native of Altavista. He made a profession of faith in Christ at the age of 12, publicly acknowledged his call to ministry while in college, and has served in many Southern Baptist contexts. Since 1988, he has served in some form of ministry, including youth pastor, associate pastor, senior pastor, transitional pastor of four churches, North American Mission Board missionary, and SBCV staff since 2004. Additionally, he has served on numerous Southern Baptist boards and committees over the years and has a passion for missions. Don has a Doctor of Ministry from Southeastern Seminary and has degrees from James Madison University and Mid-America Seminary. He and his wife, Janine, have been married for more than 28 years and have two sons: Tim and Chris.

Faster, Please! — The Podcast

My fellow pro-growth/progress/abundance Up Wingers,

For most of history, stagnation — not growth — was the rule. To explain why prosperity so often stalls, economist Carl Benedikt Frey offers a sweeping tour through a millennium of innovation and upheaval, showing how societies either harness — or are undone by — waves of technological change. His message is sobering: an AI revolution is no guarantee of a new age of progress.

Today on Faster, Please! — The Podcast, I talk with Frey about why societies misjudge their trajectory and what it takes to reignite lasting growth.

Frey is a professor of AI and Work at the Oxford Internet Institute and a fellow of Mansfield College, University of Oxford. He is the director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School. He is the author of several books, including the brand new one, How Progress Ends: Technology, Innovation, and the Fate of Nations.

In This Episode
* The end of progress? (1:28)
* A history of Chinese innovation (8:26)
* Global competitive intensity (11:41)
* Competitive problems in the US (15:50)
* Lagging European progress (22:19)
* AI & labor (25:46)

Below is a lightly edited transcript of our conversation.

The end of progress? (1:28)

. . . once you exploit a technology, the processes that aid that run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

Pethokoukis: Since 2020, we've seen the emergence of generative AI, mRNA vaccines, reusable rockets that have returned America to space, we're seeing this ongoing nuclear renaissance including advanced technologies, maybe even fusion, geothermal, the expansion of solar — there seems to be a lot cooking. Is worrying about the end of progress a bit too preemptive?

Frey: Well in a way, it's always a bit too preemptive to worry about the future: You don't know what's going to come. But let me put it this way: If you had told me back in 1995 — and if I was a little bit older then — that computers and the internet would lead to a decade streak of productivity growth and then peter out, I would probably have thought you nuts because it's hard to think about anything that is more consequential. Computers have essentially given people the world's store of knowledge basically in their pockets. The internet has enabled us to connect inventors and scientists around the world. There are few tools that aided the research process more. There should hardly be any technology that has done more to boost scientific discovery, and yet we don't see it.

We don't see it in the aggregate productivity statistics, so that petered out after a decade. Research productivity is in decline. Measures of breakthrough innovation are in decline. So it's always good to be optimistic, I guess, and I agree with you that, when you say AI and when you read about many of the things that are happening now, it's very, very exciting, but I remain somewhat skeptical that we are actually going to see that leading to a huge revival of economic growth.

I would just be surprised if we don't see any upsurge at all, to be clear, but we do have global productivity stagnation right now. It's not just Europe, it's not just Britain. The US is not doing too well either over the past two decades or so.
China's productivity is probably in negative territory or stagnant, by more optimistic measures, and so we're having a growth problem.

If tech progress were inevitable, why have predictions from the '90s, and certainly earlier decades like the '50s and '60s, about transformative breakthroughs and really fast economic growth by now, consistently failed to materialize? How does your thesis account for why those visions of rapid growth and progress have fallen short?

I'm not sure if my thesis explains why those expectations didn't materialize, but I'm hopeful that I do provide some framework for thinking about why we've often seen historically rapid growth spurts followed by stagnation and even decline. The story I'm telling is not rocket science, exactly. It's basically built on the simple intuitions that once you exploit a technology, the processes that aid that run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

So for example, the Soviet Union actually did reasonably well in terms of economic growth. A lot of it, or most of it, was centered on heavy industry, I should say. So people didn't necessarily see the benefits in their pockets, but the economy grew rapidly for about four decades or so, then growth petered out, and eventually it collapsed. So for exploiting mass-production technologies, the Soviet system worked reasonably well. Soviet bureaucrats could hold factory managers accountable by benchmarking performance across factories.

But that became much harder when something new was needed because when something is new, what's the benchmark? How do you benchmark against that? And more broadly, when something is new, you need to explore, and you need to explore often different technological trajectories. So in the Soviet system, if you were an aircraft engineer and you wanted to develop your prototype, you could go to the Red Army and ask for funding. If they turned you down, you maybe had two or three other options. If they turned you down, your idea would die with you.

Conversely, in the US back in '99, Bessemer Venture declined to invest in Google, which seemed like a bad idea with the benefit of hindsight, but it also illustrates that Google was no safe bet at the time. Yahoo and AltaVista were dominating search. You need somebody to invest in order to know if something is going to catch on, and in a more decentralized system, you can have more people taking different bets and you can explore more technological trajectories. That is one of the reasons why the US ended up leading the computer revolution, to which Soviet contributions were basically none.

Going back to your question, why didn't those dreams materialize? I think we've made it harder to explore. Part of the reason is protective regulation. Part of the reason is lobbying by incumbents. Part of the reason is, I think, a revolving door between institutions like the US patent office and incumbents, where we see in the data that examiners tend to grant large firms some patents that are of low quality and then get lucrative jobs at those places. That's creating barriers to entry. That's not good for new startups and inventors entering the marketplace.
I think that is one of the reasons that we haven't seen some of those dreams materialize.

A history of Chinese innovation (8:26)

So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided . . .

I wonder if your analysis of pre-industrial China offers any lessons you can draw about modern China as far as the way in which bad governance can undermine innovation and progress?

Pre-industrial China has a long history. China was the technology leader during the Song and Tang dynasties. It had a meritocratic civil service. It was building infrastructure on scales that were unimaginable in Europe at the time, and yet it didn't have an industrial revolution. So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided, and because there was lots of social status attached to becoming a bureaucrat and passing the civil service examination, if Galileo was born in China, he would probably become a bureaucrat rather than a scientist, and I think that's part of the reason too.

But China mostly did well when the state was strong rather than weak. A strong state was underpinned by intensive political competition, and once China had unified and there were fewer peer competitors, you see that the center begins to fade. They struggle to tax local elites in order to keep the peace. People begin to erect monopolies in their local markets and collude with guilds to protect production and their crafts from competition.

So during the Qing dynasty, China begins to decline, whereas we see the opposite happening in Europe. European fragmentation aids exploration and innovation, but it doesn't necessarily aid scaling, and so that is something that Europe needs to come to terms with at a later stage when the industrial revolution starts to take off. And even before that, market integration played an important role in terms of undermining the guilds in Europe, and so part of the reason why the guilds persist longer in China is the distance is so much longer between cities and so the guilds are less exposed to competition. In the end, Europe ends up overtaking China, in large part because vested interests are undercut by governments, but also because of investments in things that spur market integration.

Global competitive intensity (11:41)

Back in the 2000s, people predicted that China would become more like the United States, now it looks like the United States is becoming more like China.

This is a great McKinsey kind of way of looking at the world: The notion that what drives innovation is sort of maximum competitive intensity. You were talking about the competitive intensity in both Europe and in China when it was not so centralized. You were talking about the competitive intensity of a fragmented Europe.

Do you think that the current level of competitive intensity between the United States and China — and I really wish I could add Europe in there. Plenty of white papers, I know, have been written about Europe's competitive state and its innovativeness, and I hope those white papers are helpful and someone reads them, but it seems to be that the real competition is between the United States and China.

Do you not think that that competitive intensity will sort of keep those countries progressing despite any of the barriers that might pop up and that you've already mentioned a little bit?
Isn't that a more powerful tailwind than any of the headwinds that you've mentioned?

It could be, I think, if people learn the right lessons from history; at least that's a key argument of the book. Right now, what I'm seeing is the United States moving more towards protectionism with protective tariffs. Right now, what I see is a move towards, we could even say, crony capitalism, with tariff exemptions that some larger firms that are better-connected to the president are able to navigate, but certainly not challengers. You're seeing the United States embracing things like golden shares in Intel, and perhaps even extending that to a range of companies. Back in the 2000s, people predicted that China would become more like the United States, now it looks like the United States is becoming more like China.

And China today is having similar problems and on, I would argue, an even greater scale. Growth used to be the key objective in China, and so for local governments, provincial governments competing on such targets, it was fairly easy to benchmark and measure and hold provincial governors accountable, and they would be promoted inside the Communist Party based on meeting growth targets. Now, we have prioritized common prosperity, more national security-oriented concerns.

And so in China, most progress has been driven by private firms and foreign-invested firms. State-owned enterprise has generally been a drag on innovation and productivity. What you're seeing, though, as China is shifting more towards political objectives, is that it's harder to mobilize private enterprise, where the yardsticks are market share and profitability, for political goals. That means that China is increasingly relying more again on state-owned enterprises, which, again, have been a drag on innovation.

So, in principle, I agree with you that historically you did see Prussian defeat to Napoleon leading to the Stein-Hardenberg Reforms, and the abolishment of guild restrictions, and a more competitive marketplace for both goods and ideas. You saw that Russian losses in the Crimean War led to the abolition of serfdom, and so there are many times in history where defeat, in particular, led to striking reforms, but right now, the competition itself doesn't seem to lead to the kinds of reforms I would've hoped to see in response.

Competitive problems in the US (15:50)

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.

I certainly wrote enough pieces and talked to enough people over the past decade who have been worried about competition in the United States, and the story went something like this: that you had these big tech companies — Google, and Meta (Facebook), and Microsoft — that these companies were what they would call "forever companies," that they had such dominance in their core businesses, and they were throwing off so much cash, that these were unbeatable companies, and this was going to be bad for America. People who made that argument just could not imagine how any other companies could threaten their dominance. And yet, at the time, I pointed out that it seemed to me that these companies were constantly in fear that they were one technological advance from being in trouble.

And then lo and behold, that's exactly what happened.
And while in AI Google is certainly super important, and Meta/Facebook is super important, so are OpenAI and Anthropic, and there are other companies.

So the point here, after my little soliloquy, is: can we overstate these problems, at least in the United States, when it seems like it is still possible to create a new technology that breaks the apparent stranglehold of these incumbents? Google search does not look quite as solid a business as it did in 2022.

Can we overstate the competitive problems of the United States? Or is what you're saying more forward-looking: that perhaps we overstated the competitive problems in the past, but now, due to these tariffs, and executives having to travel to the White House and give the president gifts, that creates the stage for the kind of competitive problems we should really worry about?

I'm very happy to support the notion that technological changes can lead to unpredictable outcomes that incumbents may struggle to predict and respond to. Even if they predict it, they struggle to act upon it because doing so often undermines the existing business model.

So if you take Google, where the transformer was actually conceived, the seven people behind it, I think, have since left the company. One of the reasons they didn't launch anything like ChatGPT was probably the fear of cannibalizing search. So I think the most important mechanisms for dislodging incumbents are dramatic shifts in technology.

None of the legacy media companies ended up leading social media. None of the legacy retailers ended up leading e-commerce. None of the automobile leaders are leading in EVs. None of the bicycle companies, so many of which went into automobiles, ended up leading there. So there is a pattern.

At the same time, I think you do have to worry that there are anti-competitive practices going on that make it harder, and that are costly. The revolving door between the USPTO and companies is one example of that. We also have a reasonable amount of evidence on killer acquisitions, whereby firms buy up a competitor just to shut it down. Those things are happening. I think you need to have tools that allow you to combat that, and I think, more broadly, the United States has a long history of fairly vigorous antitrust policy. You'd be hard pressed to suggest that that has been a tremendous drag on American business or American dynamism. So even if you don't think, for example, that American antitrust policy has contributed to innovation and dynamism, at the very least you can't really say that it's been a huge drag on it either.

In Japan, for example, in its postwar history, antitrust was extremely lax. In the United States, it was very vigorous, and it was very vigorous throughout the computer revolution as well, which it wasn't at all in Japan. Take the lawsuit against IBM, for example: you can debate to what extent it forced IBM to unbundle hardware and software, and whether Microsoft would have been the company it is today without that. With AT&T, it's both the breakup and deregulation as well, but by basically all accounts that was a good idea, particularly at the time when the National Science Foundation released ARPANET into the world.

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.
There's always a risk of antitrust being heavily politicized, and that's always been a bad idea, but at the same time, having tools on the books that allow you to check monopolies and steer their investments more towards innovation rather than anti-competitive practices is, broadly speaking, a good thing. In the European Union, you often hear that competition policy is a drag on productivity. I think it's the least of Europe's problems.

Lagging European progress (22:19)

If you take the postwar period, at least Europe catches up in most key industries, and actually leads in some of them. . . but doesn't do the same in digital. The question in my mind is: Why is that?

Let's talk about Europe as we finish up. We don't have to write How Progress Ends, since it seems like progress there has ended, so maybe we want to think about how progress restarts. And is the problem in Europe institutions, or is it the revealed preference of Europeans, that they're getting what they want? That they don't value progress and dynamism, and that this cultural preference is manifested in institutions? And if that's the case (you can tell me if it's not the case; I kind of feel like it might be), how do you restart progress in Europe, since it seems to have already ended?

The most puzzling thing to me is not that Europe is less dynamic than the United States (that's not very puzzling at all) but that it hasn't even managed to catch up in digital. If you take the postwar period, at least Europe catches up in most key industries, and actually leads in some of them. Take automobiles, electrical machinery, chemicals, pharmaceuticals: nobody would say that Europe is behind in those industries, or at least not for long. Europe has very robust catch-up growth in the postwar period, but doesn't do the same in digital. The question in my mind is: why is that?

I think part of the reason is that the returns to innovation, the returns to scaling, are relatively muted in Europe by a fragmented market in services in particular. The IMF estimates that if you take all trade barriers on services inside the European Union and add them up, it's something like a 110 percent tariff: Trump Liberation Day tariffs, essentially, imposed within the European Union. That means European firms in digital and in services don't have a harmonized market to scale into, the way the United States and China do. I think that's by far the biggest reason.

On top of that, there are well-intentioned regulations like the GDPR that, by any account, have been a drag on innovation and particularly harmful for startups, whereas larger firms that find it easier to manage compliance costs have essentially offset those costs by capturing a larger share of the market. I think the AI Act is going in the same direction, and so you have more hurdles and greater costs of innovating because of those regulatory barriers. And then the return to innovation is further capped by having a smaller, fragmented market.

I don't think that culture, or a European lust for leisure rather than work, is the key reason. I think there's some of that, but if you look at the most dynamic places in Europe, it tends to be the Scandinavian countries, and, being from Sweden myself, I can tell you that most people you will encounter there are not workaholics.

AI & labor (25:46)

I think AI at the moment has a real resilience problem.
It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin.

As I finish up, let me ask you: like a lot of economists who think about technology, you've thought about how AI will affect jobs. Given what we've seen in the past few years, would it be your guess that, if we were to look at the labor force participation rates of the United States and other rich countries 10 years from now, we will look at those employment numbers and think, "Wow, we can really see the impact of AI on those numbers"? Will it be extraordinarily evident, or not so much?

Unless there's very significant progress in AI, I don't think so. I think AI at the moment has a real resilience problem. It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin. So in most activities where the world is changing, and the world is changing every day, you can't really rely on AI to reliably do work for you.

As an example, most people know of AlphaGo beating the world champion back in 2016. Few people know that, back in 2023, human amateurs, using standard laptops to expose the best Go programs to new positions that they would not have encountered in training, actually beat the best Go programs quite easily. So even in a domain where the problem is basically solved, where we already achieved superhuman performance, you cannot really know how well these tools perform when circumstances change, and I think that's really a problem. So unless we solve that, I don't think it's going to have an impact that will mean labor force participation is significantly lower 10 years from now.

That said, I do think it's going to have a very significant impact on white-collar work, and on people's income and sense of status. I think of generative AI, in particular, as a tool that reduces barriers to entry in professional services. I often compare it to what happened with Uber and taxi services. With the arrival of GPS technology, knowing the name of every street in New York City was no longer a particularly valuable skill, and then, with a platform matching supply and demand, anybody with a driver's license could essentially get into their car and top up their income on the side. As a result, incumbent drivers faced more competition and took a pay cut of around 10 percent.

Obviously, a key difference with professional services is that they're traded. So I think it's very likely that, as generative AI reduces the productivity differential between people in, let's say, the US and the Philippines in financial modeling, in paralegal work, in accounting, in a host of professional services, more of those activities will shift abroad. Many knowledge workers who had envisioned prosperous careers may feel a loss of status and income as a consequence, and I do think that's quite significant.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

Desde las cavernas, hoy y lo que sigue.
S5. E21. Prepa Altavista Nostalgia.

Desde las cavernas, hoy y lo que sigue.

Play Episode Listen Later Sep 19, 2025 15:32


In this personal narration, Jesús Carlos Ponce shares his memories as a student at Preparatoria Altavista, located in Ciudad Juárez, Chihuahua, which he attended between 1988 and 1991. The author describes the school's location and history (it initially operated as a middle school) and recalls charismatic and strict teachers, along with the principal and other staff. Much of the account centers on the adventures and pranks he shared with his group of friends, such as being punished for skipping class, and on the sports rivalry with the Preparatoria del Chamizal. Finally, the narrator expresses a deep sense of nostalgia and gratitude for the years of youth he spent at that institution.

YAC Sports Podcast
Episode 362

YAC Sports Podcast

Play Episode Listen Later Sep 9, 2025 115:41


This week Joe and Leland talk about Waynesboro breaking their 24-game losing streak, Riverheads losing to Alta Vista, and Wilson and Fort picking up wins too. VT, JMU, and UVA all drop games. All this and more on the YAC Sports Podcast.

thinkfuture with kalaboukis
1109 The Death of SEO? Bruce Clay on AI Search, PreWriter, and the Future of Traffic

thinkfuture with kalaboukis

Play Episode Listen Later Sep 2, 2025 67:40


See more: https://thinkfuture.substack.com
Connect with Bruce: https://bruceclay.com
---
AI is changing search forever—and the old rules of SEO may no longer apply. In this episode of thinkfuture, host Chris Kalaboukis speaks with Bruce Clay, an early pioneer of search engine optimization who literally coined the term "SEO" back in the 1990s. Bruce has been on the front lines of search since AltaVista and InfoSeek—and now, he's sounding the alarm about how AI assistants like ChatGPT are rewriting the rules.

We explore:
- The evolution of search engines, from early hacks to today's complex algorithms
- Why Google updates have made visibility harder to maintain
- How AI-powered search assistants are bypassing websites by giving direct answers
- The looming decline of traffic to informational sites
- How SEO professionals can adapt to the AI-powered search landscape
- The role of trust, authority, and structured content in an AI-first world
- PreWriter: Bruce's AI-powered research suite that helps creators identify gaps, generate personas, and streamline content creation
- The future of search, advertising, and AI-powered personal assistants handling everyday tasks like booking travel or recommending products

Bruce brings a rare long-view perspective—having shaped the early SEO industry—and a pragmatic take on how creators, businesses, and marketers can survive and thrive in the AI era. If you're in marketing, SEO, content creation, or just curious about how AI is disrupting the web as we know it, this episode is essential listening.

Mañanas BLU 10:30 - con Camila Zuluaga
In Belén Altavista there was not only a natural emergency but a humanitarian one: Monsignor Andrés IO

Mañanas BLU 10:30 - con Camila Zuluaga

Play Episode Listen Later Aug 27, 2025 16:24


While the authorities defend a planned "humanitarian evacuation," the community and religious figures denounce abuses, mass demolitions without guarantees, and stigmatization. See omnystudio.com/listener for privacy information.

Keen On Democracy
Who Owns The Front Door? The Multi-Trillion Dollar Battle to Assemble the AI Jigsaw

Keen On Democracy

Play Episode Listen Later Aug 23, 2025 44:43


Those who do, win. Those are Keith Teare's immortal words to describe the winners of today's Silicon Valley battle to control tomorrow's AI world. But the real question, of course, is what to do to win this war. The battle (to excuse all these blunt military metaphors) is to assemble the AI pieces to reassemble what Keith calls the "jigsaw" of our new chat-centric world. And to do that, the veteran start-up entrepreneur advises, requires owning "the front door". Yet as Keith acknowledges, we're still in the AltaVista era of AI—multiple contenders fighting for dominance before a Google-like winner emerges. His key insight is that "attachment becomes the moat". Users develop emotional bonds with their preferred AI interface, creating switching costs that transform temporary advantages into permanent market positions. Multi-trillion dollar success belongs to whoever builds the stickiest, most indispensable gateway to our AI-native future. Those who do that will win; those who don't, will not.

1. We're in the "AltaVista era" of AI - Multiple players (OpenAI, Google, Anthropic, Perplexity) are competing for dominance, but like the early search engine wars, one will likely emerge as the clear winner within 1-2 years.
2. "Attachment becomes the moat" - Users develop emotional bonds with their preferred AI interface that create powerful switching costs. Keith uses Claude for coding and won't switch despite trying alternatives, demonstrating how user loyalty becomes a competitive advantage.
3. The shift from "page-based" to "AI-native" internet - We're moving from a web of URLs and content pages to one where every interaction starts with human-AI conversation. The browser is becoming yesterday's technology.
4. Publishers aren't doomed but are unprepared - The monetization model will evolve from traditional advertising to contextual links surfaced by AI. Publishers will eventually "beg to be included" and AI companies will pay for training content while driving traffic through relevant links.
5. The "jigsaw pieces" already exist across industries - In healthcare, finance, and other sectors, all the components needed for AI transformation are available but need assembly. Whoever puts these pieces together first in each field will become massive companies - potentially the world's biggest in their respective industries.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe

M&A Science
Founder Exit Strategy: Xavier Gury on M&A Deal Terms vs Valuation

M&A Science

Play Episode Listen Later Aug 18, 2025 66:13


Xavier Gury, Founding Partner at Wind

Xavier Gury, founding partner at Wind venture capital firm, brings a unique triple perspective to M&A: serial entrepreneur, acquisition target, and now investor. In this episode, Xavier unpacks the critical lessons from his three successful exits, including one transformative deal with Publicis, where he structured a performance-based earnout that prioritized terms over upfront valuation. The conversation reveals why 90% of the deal value came through earnout performance, how to align teams during integration, and the strategic mistakes buyers make when acquiring founder-led companies. M&A professionals will learn practical frameworks for structuring deals that actually work post-close.

Things You'll Learn
- Why deal terms matter more than valuation – and how Xavier structured an earnout where only 10% was paid upfront
- The "yin yang" principle for balanced M&A deals that create value for both buyer and seller
- How to incentivize key employees during earnout periods to ensure alignment and execution success

_____________

Today's episode of the M&A Science Podcast is brought to you by Grata! Grata is the leading private market dealmaking platform. With its best-in-class AI workflows and investment-grade data, Grata helps investors, advisors, and strategic acquirers effortlessly discover, research, and connect with potential targets — all in one sleek, user-friendly interface. Visit grata.com to learn more.

___________________

M&A Doesn't Have to Be So Painful

Make Sweden Stronger
Ulrika Klinkert - CMO Rugvista (Summer Rerun)

Make Sweden Stronger

Play Episode Listen Later Jul 30, 2025 52:44


Today is the first time we welcome someone from a publicly listed company. The reason it has taken this long is that, unfortunately, podcasts with people from listed companies tend to come out rather bland because of regulations and worries about what one should and shouldn't say. But Ulrika Klinkert is a breath of fresh air in listed-company Sweden, and she offers plenty of insights, anecdotes, and laughs. Rugvista sells rugs online across all of Europe (plus a few more countries), and this is an inspiring episode about a company that has sold online since Altavista was a thing... (This episode was originally published in 2024, but since it's summer I'm taking a break from recording and republishing my favorite episodes from previous years!)

Edge of the Web - An SEO Podcast for Today's Digital Marketer
765 | Aligning SEO with AI and Intent: Bruce Clay's Perspective

Edge of the Web - An SEO Podcast for Today's Digital Marketer

Play Episode Listen Later Jul 11, 2025 45:14


Bruce Clay (yes, *the* Bruce Clay) returns to EDGE of the Web, and Erin tries to pretend the SEO world hasn't changed a bit in five years. Spoiler: It most definitely has, and Bruce is here to walk us through his journey from the days of Excite and AltaVista to the AI-fueled search ecosystem of today. It's a continued conversation on the EDGE regarding how AI is shaking up traditional SEO practices, from content and conversion to that all-important site-wide intent matching. Bruce and Erin break down the evolving art of helpful content, why intent is suddenly the star of the SEO show, and how marketers need to catch up with users who are now trained by endless queries and AI-powered overviews. If you thought ranking was tricky before, try staying relevant in a world where your web page needs to be more than just a pretty face—think FAQs, sitewide helpfulness, and a privacy link in your footer for extra measure. Bruce shares his firm take: AI is a tool, not a replacement. Research smarter, humanize your content, and, for the love of ranking, don't let perfection get in the way of expertise.

Key Segments:
[00:01:20] Introduction to Bruce Clay
[00:06:36] AI's Impact on SEO Practices
[00:08:29] AI vs. Search Engine Queries
[00:10:02] EDGE: Housekeeping - Who is coming up
[00:16:21] Effective AI Content Strategy
[00:20:22] Understanding Search Intent Optimization
[00:22:13] EDGE of the Web Title Sponsor: Site Strategics
[00:25:50] AI's Consensus Over Search Optimization
[00:31:36] EDGE of The Web Sponsor: Inlinks (WAIKAY)
[00:36:35] AI-Enhanced Tool "Prewriter" Overview
[00:43:42] AI: An Exciting Tool, Not The Only Solution

Thanks to Our Sponsors!
Site Strategics: http://edgeofthewebradio.com/site
Inlinks/WAIKAY: https://edgeofthewebradio.com/waikay

Follow Our Guest
Twitter / X: @BruceClay
LinkedIn: https://www.linkedin.com/in/bruce-clay/

Resources
Bruce Clay's Prewriter: https://www.prewriter.ai/

Honest eCommerce
Bonus Episode: Collecting the Right Data Instead of Waiting for Perfect Data with Sabir Semerkant

Honest eCommerce

Play Episode Listen Later Jul 10, 2025 51:26


Sabir Semerkant is an Ecommerce expert, growth strategist, and founder of Growth by Sabir, where he helps brands unlock compounding growth through a proven framework called the 8D Method. With over 20 years of experience and $1B+ in revenue driven, Sabir has worked with everyone from Fortune 500s to DTC challengers, scaling brands like Canon, Tommy Hilfiger, and Sour Patch Kids along the way.

Today, Sabir leads growth across a wide range of eCommerce verticals, helping founders break through revenue plateaus with precision systems and operational clarity. His work is grounded in execution: simplifying tech stacks, identifying bottlenecks, and unlocking 2X results without chasing shiny tactics. In 2024 alone, his Rapid 2X program delivered a 108% average lift across 29 brands in just 21 days.

Whether he's unpacking the real reasons most brands stall, breaking down why resumes don't equal results, or showing how to grow without over-relying on Meta ads, Sabir brings a direct, operator-first mindset to the conversation. He shares what it takes to scale in today's crowded DTC space without gambling on paid, blindly outsourcing growth, or losing sight of what truly moves the needle.

In This Conversation We Discuss:
[00:39] Intro
[01:23] Learning marketing like a software upgrade
[08:53] Improving CTR with daily copy experiments
[14:55] Centralizing KPIs to drive clear decisions
[19:11] Knowing your numbers without excuses
[21:47] Segmenting real buyers from email signups
[23:35] Testing ideas before scaling campaigns
[28:32] Repositioning your product to fix weak growth
[31:03] Aligning your whole team on brand growth
[40:30] Avoiding the agency swap loop for growth

Resources:
Subscribe to Honest Ecommerce on Youtube
Turning Promising Ecom Brands Into Profitable Winners: https://growthbysabir.com/
Follow Sabir Semerkant: https://www.linkedin.com/in/sabirsemerkant

If you're enjoying the show, we'd love it if you left Honest Ecommerce a review on Apple Podcasts. It makes a huge impact on the success of the podcast, and we love reading every one of your reviews!

Mountain Gardener with Ken Lain
Alta Vista Garden Club Tour

Mountain Gardener with Ken Lain

Play Episode Listen Later Jun 6, 2025 4:02


In this episode, Ken Lain, The Mountain Gardener, chats about the Alta Vista Garden Club Tour in Prescott, Arizona. He highlights the opportunity to see unique local gardens, and you can purchase tickets at Watters Garden Center.
Listen to Mountain Gardener on Cast11: https://cast11.com/mountain-gardener-with-ken-lain-gardening-podcast/
Follow Cast11 on Facebook: https://Facebook.com/CAST11AZ
Follow Cast11 on Instagram: https://www.instagram.com/cast11_podcast_network/

Go To Market Grit
Bret Taylor's Journey Leading Salesforce, Sierra & OpenAI

Go To Market Grit

Play Episode Listen Later Jun 2, 2025 89:48


Over the past two decades, Bret Taylor has quietly helped shape the arc of Silicon Valley. From co-creating Google Maps to steering Facebook, Salesforce, and OpenAI, he's been behind some of the most consequential products in tech. Now, with his new company Sierra, he's starting from zero—again. In this conversation, Bret opens up about how founders navigate identity, why the best ideas often come from everyday friction, and how staying relentlessly focused can unlock real momentum in AI.

Guest: Bret Taylor, Co-Founder of Sierra

Chapters:
00:00 Trailer
00:49 Introduction
01:57 Saving OpenAI
09:15 Overwhelming yet capable of a lot
13:36 Father and founder
16:49 History is written by the victors
22:13 How you price matters
35:58 Stickiest piece of software
49:48 The first realtime social network
55:34 Facebook CTO who rewrote Google Maps
1:02:10 Least known, most impressive
1:11:39 The best way to predict the future
1:16:22 Most personally passionate
1:21:22 Currency of reputation
1:27:17 Away from work
1:28:35 Who Sierra is hiring
1:28:58 What "grit" means to Bret
1:29:18 Outro

Mentioned in this episode: Google Maps, Salesforce, OpenAI ChatGPT, Meta Facebook, X (formerly Twitter), Sam Altman, Elon Musk, Mark Zuckerberg, Google, Marissa Mayer, Excite, MSN, AltaVista, Amazon, Harvey, Airbnb, Coinbase, Apple, John Doerr, Cursor, Codeium Windsurf, Perplexity, xAI, Kleenex, Amazon Web Services (AWS), FriendFeed, Tumblr, Kevin Gibbs, Google Maps, Yelp, Trulia, iOS App Store, Blackberry, Facebook Messenger, Marvel Avengers, Slack, Quip, Leonardo da Vinci, Clay Bavor, Microsoft, Eric Schmidt, Alan Kay, Brian Armstrong, Brian Chesky, Shopify, SiriusXM, Patrick Collison

Links:
Connect with Bret Taylor: X, LinkedIn
Connect with Joubin: X, LinkedIn
Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

En Caso de que el Mundo Se Desintegre - ECDQEMSD
S27 Ep6035: Involuntary Kidnapping

En Caso de que el Mundo Se Desintegre - ECDQEMSD

Play Episode Listen Later May 2, 2025 53:43


The mysterious case of a crime that was never committed, and a town on alert. ECDQEMSD podcast episode 6035, Involuntary Kidnapping. Hosted by: El Pirata and El Sr. Lagartija. https://canaltrans.com World News: May Day marches around the world - Kamala Harris reappears - Violence in Haiti - The lion of Culiacán - We have no ankle monitors - Fortune denied - A question of methods - Weather forecast. Disintegrated Stories: A hot afternoon - Playing in the yard - Watching a movie - Rumors at nightfall - Compost and worms - Prehistoric Mexico - Mexico City taxis - Preppy girl - Synthetic smell - Geocities, Encarta, Altavista and many more - El liso santafesino - The clásico between sabaleros and tatengues - There are success stories - The good tuna - No to school bullying - World Asthma Day and more... En Caso De Que El Mundo Se Desintegre - The podcast has no advertising, sponsors, or organizations contributing to keep it on the air. Only the cooperative system of those who contribute through subscriptions makes it possible for all of this to remain a reality. Thank you, Golden Dragons!! NO AI: The ECDQEMSD podcast does not use any artificial intelligence directly in its production. Design, scripting, music, editing, and voices are entirely our own human work.

Gente de Andalucía
More than 40 families without resources will reduce their digital divide thanks to Fundación Altavista

Gente de Andalucía

Play Episode Listen Later May 2, 2025


UpNorthNews with Pat Kreitlow
Ask Jeeves What Happened to Alta Vista (Hour 1)

UpNorthNews with Pat Kreitlow

Play Episode Listen Later Apr 23, 2025 14:38


Along with the rest of the morning's headlines, we'll have a little fun remembering the early days of the internet—when the information superhighway was more like a gravel road by comparison. At the time, Alta Vista, Ask Jeeves, and CompuServe were superstars for early users. How many others do you remember? Mornings with Pat Kreitlow airs on several stations across the Civic Media radio network, Monday through Friday from 6-9 am. Subscribe to the podcast to be sure not to miss out on a single episode! To learn more about the show and all of the programming across the Civic Media network, head over to https://civicmedia.us/shows to see the entire broadcast line up.

AZ Tech Roundtable 2.0
Kenmore is Home Electricity Made Easy - Modernize the Smart Home from Appliances to the Electric Grid – Revisited w/ CEO Sri Solur - AZ TRT S06 EP04 (265) 2-23-2025

AZ Tech Roundtable 2.0

Play Episode Listen Later Apr 4, 2025 46:50


Kenmore is Home Electricity Made Easy - Modernize the Smart Home from Appliances to the Electric Grid – Revisited w/ CEO Sri Solur   - AZ TRT S06 EP04 (265) 2-23-2025          What We Learned This Week ·         Kenmore is home electricity made easy.  Kenmore is on a mission to modernize the home. Live More & Live Better. Also need to make it Affordable. ·         Clean Tech goes w/ the smart home, smart appliances (that connect to the home) and the electrical power grid for better living Electrical Grid needs to be modernized – cannot handle the current & future power demands ·         Homes built Pre-1990 run on Electric Panels that are outdated – costs of $40K + to modernize to handle charging EVs at home ·         Design of the Future House would have a Battery in it that could recharge your appliances and electronics during down hours. ·         Solving problems in electricity and energy also have the same issues with working on better water and clean food. It is more than just an energy and electric issue.     Guest: Sri Solur, CEO, Kenmore / Brands  https://www.linkedin.com/in/solur https://www.kenmore.com/   Sri Solur is chief executive officer of brands for Kenmore at Transformco. An industry veteran with 25+ years of experience, Sri has a rich history of success leading high tech products and businesses. He previously served as CPO and GM at Berkshire Grey, a leader in industrial robotics, and was a member of the leadership team that took the company public. Sri also served as CPO at SharkNinja, and was instrumental in bringing the Shark IQ Robot vacuum and NinjaFoodi products to market, while also holding a leadership role to take the company public. Sri spent 20 years at Hewlett Packard, serving as founder and CPO of CloudPrint, the company's wearables and IOT business. In his career, Sri has created products for world-renowned brands including Hugo Boss, Movado, Ferrari, Juicy Couture, and more. Sri holds a bachelor's degree in Engineering from NIT and an MBA from Boston University.         As Earth Day approaches (April), Kenmore is empowering greener homes and people.    The trusted appliance maker recently unveiled a new “Home Electrification Made Easy” program that looks to simplify the electrification process and reduce overall costs in transitioning to electric appliances.    Kenmore has set an ambitious goal with the program to electrify one million homes that will ultimately save homeowners one billion dollars over the next decade.    Kenmore's innovation and energy programs are driving a new generation of electrification for today's home ecosystem. Some of the company's core innovations include:    Expansion of electrification and smart products for every room in the home.  Addition of electrification enablers, such as smart electrical panels and dynamic Level 2 EV chargers, that help eliminate roadblocks many homeowners have in wanting to electrify their entire home.  Simplifying rebate and savings programs, such as Congress' Inflation Reduction Act, to help customers cut costs by taking advantage of available local and national funding and discounts.  Building relationships with industry leaders in product, service and consumer education to supplement and amplify their mission to electrify American homes.    
This electric push comes as a new generation of homeowners seeks to invest in smarter, greener home solutions, and previous generations are coming up against new government standards that make like-for-like replacement equipment for their homes obsolete. With Kenmore's electrification program delivering a quick onramp to affordable green-energy homes, homeowners of all backgrounds and budgets have a more attainable path to smart, green home adoption.

Notes:

Kenmore CEO and Appliances

Seg. 1

Major appliances, clean tech, and sustainability: energy security is a big issue on the macro end, as is the effect on the electric grid and power lines. There is lots of demand and there are potential blackouts. This is a fuel and demand issue. The government and utility companies are working on clean energy; currently they use fossil fuels and are working on using less. The design of the future house would have a battery in it that could recharge your appliances and electronics during down hours.

The electrical layout of most homes, especially homes built pre-1990s, has a 100-amp circuit. If you want modern tech like an EV charger in your house, an electrician cannot set it up because the EV charger will blow past your 100-amp circuit. It would cost you between $20K and $60K to upgrade a house for a modern electrical setup. Kenmore will install an electrical panel with load balancing for EVs and in-home appliances.

Seg. 2

The electrical layout of a house matters as you install new appliances. There is a booster within the Inflation Reduction Act: there are rebates for lower-income people that pay you for getting new appliances, including a $10K instant credit.

Comparing older appliances with new appliances: many older appliances run on fossil fuels, like a gas range oven or gas water heater, and an older HVAC unit has more wear and tear. On hot days and really cold days, appliances operate at peak and put demand on the electric grid. Utilities are looking for new ways to source sustainable clean energy, for example hydroelectric power. You would have a backup in high-demand times, where you fire up a generator running on fossil fuels.

You want to protect the grid for maintenance but also against things like cyber attacks. One way you could do this is to make all homes standalone energy producers. Peak rates for electricity are 6 to 10 PM at night. At these times electricity use taxes the grid and also taxes your wallet. Do you want to run your dishwasher after 10 PM?

Seg. 3

We are moving from a world of done-by-you to a world of done-for-you. The smart home of the future will help you. The electrical panel would work with the grid and decide when to charge electronics in your house. Kenmore has electric appliances that work with the electric grid. These appliances save you money and also save the grid.

On a bigger scale we need to modernize the electric grid, and then in the future build better homes, cars, and appliances. The Inflation Reduction Act has multipronged incentives for all of this.

When we saw the bull run of tech starting in 2010, it had three things working together: social, mobile, and the cloud all came together to create that tech rise. Here, you need electricity plus clean energy plus clean water. A rising tide that can raise all. You want to solve problems; what are the painkillers?

Seg. 4

The CEO was an engineer by trade. He worked in Boston, went to business school, and after that built products. He worked at Altavista on firewalls and search.
He worked at CloudPrint on mobile printing with HP ePrint, and in wearables with Hugo Boss and Ferrari. He worked at Comcast on Xfinity digital security and high-speed Internet, and with SharkNinja on home robots. He worked at Berkshire Grey, a robotics company, which went public with an IPO. Then he joined Brands / Kenmore (also Diehard batteries), building better and smarter appliances.

Span.IO built a smart electrical panel. Do you want your appliances to give you repair and maintenance updates? Whole-home electrification: a whole-home dashboard controlling your smart home. An example would be your fridge telling you when you need a new filter. Kenmore is a tech-forward company.

Solving problems in electricity and energy involves the same issues as working on better water and clean food. It is more than just an energy and electricity issue. Live more and live better, and also make it affordable. Kenmore is home electricity made easy: rebates.kenmore.com. They have a blue-collar work ethic with the idea of progress over perfection. Kenmore is a consumer-centric team.

Biotech Shows: https://brt-show.libsyn.com/category/Biotech-Life+Sciences-Science

AZ Tech Council Shows: https://brt-show.libsyn.com/size/5/?search=az+tech+council *Includes Best of AZ Tech Council show from 2/12/2023

Tech Topic: https://brt-show.libsyn.com/category/Tech-Startup-VC-Cybersecurity-Energy-Science
Best of Tech: https://brt-show.libsyn.com/size/5/?search=best+of+tech
'Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT

Thanks for Listening. Please Subscribe to the AZ TRT Podcast.

AZ Tech Roundtable 2.0 with Matt Battaglia
The show where Entrepreneurs, Top Executives, Founders, and Investors come to share insights about the future of business. AZ TRT 2.0 looks at the new trends in business, & how classic industries are evolving. Common Topics Discussed: Startups, Founders, Funds & Venture Capital, Business, Entrepreneurship, Biotech, Blockchain / Crypto, Executive Comp, Investing, Stocks, Real Estate + Alternative Investments, and more…

AZ TRT Podcast Home Page: http://aztrtshow.com/
'Best Of' AZ TRT Podcast: Click Here
Podcast on Google: Click Here
Podcast on Spotify: Click Here
More Info: https://www.economicknight.com/azpodcast/
KFNX Info: https://1100kfnx.com/weekend-featured-shows/

Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or affiliates, members, managers, employees or partners), or any Station, Podcast Platform, Website or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or recommendations in: business, legal, real estate, crypto, tax accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, other business arrangements, etc.

This Day in AI Podcast
EP94: Does Grok 3 Change Everything? Plus Vibes & Diss Track Comparison

This Day in AI Podcast

Play Episode Listen Later Feb 21, 2025 90:41


Join Simtheory: https://simtheory.ai
----
Grok 3 Dis Track (cringe): https://simulationtheory.ai/aff9ba04-ca0e-4572-84f4-687739c7b84b
Grok 3 Dis Track written by Sonnet: https://simulationtheory.ai/edaed525-b9b6-473b-a6d6-f9cca9673868
----
Community: https://thisdayinai.com
----
Chapters:
00:00 - First Impressions of Grok 3
10:00 - Discussion about Deep Search, Deep Research
24:28 - Market landscape: Is OpenAI Rattled by xAI's Grok 3? Rumors of GPT-4.5 and GPT-5
48:48 - Why does Grok and xAI Exist? Will anyone care about Grok 3 next week?
54:45 - Diss track battle with Grok 3 (re-written by Sonnet) & Model Tuning for Use Cases
1:07:50 - GPT-4.5 and Anthropic Claude Thinking Next Week? & Are we a podcast about Altavista?
1:13:25 - Economically productive agents & freaky muscular robot
1:22:00 - Final thoughts of the week
1:27:26 - Grok 3 Dis Track in Full (Sonnet Version)

Thanks for your support and listening!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Beating Google at Search with Neural PageRank and $5M of H200s — with Will Bryk of Exa.ai

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jan 10, 2025 56:00


Applications close Monday for the NYC AI Engineer Summit focusing on AI Leadership and Agent Engineering! If you applied, invites should be rolling out shortly.

The search landscape is experiencing a fundamental shift. Google built a >$2T company with the "10 blue links" experience, driven by PageRank as the core innovation for ranking. This was a big improvement from the previous directory-based experiences of AltaVista and Yahoo. Almost three decades later, Google is now stuck in this links-based experience, especially from a business model perspective. This legacy architecture creates fundamental constraints:

* Must return results in ~400 milliseconds
* Required to maintain comprehensive web coverage
* Tied to keyword-based matching algorithms
* Cost structures optimized for traditional indexing

As we move from the era of links to the era of answers, the way search works is changing. You're not showing a user links; the goal is to provide context to an LLM. This means moving from keyword-based search to more semantic understanding of the content:

The link prediction objective can be seen as like a neural PageRank because what you're doing is you're predicting the links people share... but it's more powerful than PageRank. It's strictly more powerful because people might refer to that Paul Graham fundraising essay in like a thousand different ways. And so our model learns all the different ways.

All of this is now powered by a $5M cluster with 144 H200s. This architectural choice enables entirely new search capabilities:

* Comprehensive result sets instead of approximations
* Deep semantic understanding of queries
* Ability to process complex, natural language requests

As search becomes more complex, time to results becomes a variable:

People think of searches as like, oh, it takes 500 milliseconds because we've been conditioned... But what if searches can take like a minute or 10 minutes or a whole day, what can you then do?

Unlike traditional search engines' fixed-cost indexing, Exa employs a hybrid approach:

* Front-loaded compute for indexing and embeddings
* Variable inference costs based on query complexity
* Mix of owned infrastructure ($5M H200 cluster) and cloud resources

Exa sees a lot of competition from products like Perplexity and ChatGPT Search, which layer AI on top of traditional search backends, but Exa is betting that true innovation requires rethinking search from the ground up. One example is the recently launched Websets, a way to turn searches into structured output in grid format, allowing you to create lists and databases out of web pages. The company raised a $17M Series A to build towards this mission, so keep an eye out for them in 2025.
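To make the keyword-versus-semantic contrast concrete, here is a minimal, self-contained sketch. The hashed-trigram "embedding" is a toy stand-in for a trained neural encoder, and none of this is Exa's code; it only shows how scoring documents by vector similarity differs structurally from counting shared keywords.

```python
# Toy illustration of keyword matching vs. embedding-based ("semantic") retrieval.
# The hashing embedder is a stand-in for a learned model; a real neural retriever
# would use trained encoders instead.
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    """Map character trigrams into a fixed-size vector (toy stand-in for a neural encoder)."""
    vec = [0.0] * dim
    t = text.lower()
    for i in range(len(t) - 2):
        bucket = int(hashlib.md5(t[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def keyword_score(query: str, doc: str) -> int:
    """Classic keyword overlap: counts shared terms, blind to meaning."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

docs = [
    "Seed-stage robotics startup building warehouse hardware in San Francisco",
    "Striped shirts on sale this weekend",
    "SF company designing wearable devices and custom chips",
]
query = "startups working on hardware in SF"

q_vec = embed(query)
for doc in docs:
    print(f"keyword={keyword_score(query, doc)}  vector={cosine(q_vec, embed(doc)):.2f}  | {doc}")
```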
Chapters* 00:00:00 Introductions* 00:01:12 ExaAI's initial pitch and concept* 00:02:33 Will's background at SpaceX and Zoox* 00:03:45 Evolution of ExaAI (formerly Metaphor Systems)* 00:05:38 Exa's link prediction technology* 00:09:20 Meaning of the name "Exa"* 00:10:36 ExaAI's new product launch and capabilities* 00:13:33 Compute budgets and variable compute products* 00:14:43 Websets as a B2B offering* 00:19:28 How do you build a search engine?* 00:22:43 What is Neural PageRank?* 00:27:58 Exa use cases * 00:35:00 Auto-prompting* 00:38:42 Building agentic search* 00:44:19 Is o1 on the path to AGI?* 00:49:59 Company culture and nap pods* 00:54:52 Economics of AI search and the future of search technologyFull YouTube TranscriptPlease like and subscribe!Show Notes* ExaAI* Web Search Product* Websets* Series A Announcement* Exa Nap Pods* Perplexity AI* Character.AITranscriptAlessio [00:00:00]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:10]: Hey, and today we're in the studio with my good friend and former landlord, Will Bryk. Roommate. How you doing? Will, you're now CEO co-founder of ExaAI, used to be Metaphor Systems. What's your background, your story?Will [00:00:30]: Yeah, sure. So, yeah, I'm CEO of Exa. I've been doing it for three years. I guess I've always been interested in search, whether I knew it or not. Like, since I was a kid, I've always been interested in, like, high-quality information. And, like, you know, even in high school, wanted to improve the way we get information from news. And then in college, built a mini search engine. And then with Exa, like, you know, it's kind of like fulfilling the dream of actually being able to solve all the information needs I wanted as a kid. Yeah, I guess. I would say my entire life has kind of been rotating around this problem, which is pretty cool. Yeah.Swyx [00:00:50]: What'd you enter YC with?Will [00:00:53]: We entered YC with, uh, we are better than Google. Like, Google 2.0.Swyx [00:01:12]: What makes you say that? Like, that's so audacious to come out of the box with.Will [00:01:16]: Yeah, okay, so you have to remember the time. This was summer 2021. And, uh, GPT-3 had come out. Like, here was this magical thing that you could talk to, you could enter a whole paragraph, and it understands what you mean, understands the subtlety of your language. And then there was Google. Uh, which felt like it hadn't changed in a decade, uh, because it really hadn't. And it, like, you would give it a simple query, like, I don't know, uh, shirts without stripes, and it would give you a bunch of results for the shirts with stripes. And so, like, Google could barely understand you, and GBD3 could. And the theory was, what if you could make a search engine that actually understood you? What if you could apply the insights from LLMs to a search engine? And it's really been the same idea ever since. And we're actually a lot closer now, uh, to doing that. Yeah.Alessio [00:01:55]: Did you have any trouble making people believe? Obviously, there's the same element. I mean, YC overlap, was YC pretty AI forward, even 2021, or?Will [00:02:03]: It's nothing like it is today. But, um, uh, there were a few AI companies, but, uh, we were definitely, like, bold. And I think people, VCs generally like boldness, and we definitely had some AI background, and we had a working demo. 
So there was evidence that we could build something that was going to work. But yeah, I think, like, the fundamentals were there. I think people at the time were talking about how, you know, Google was failing in a lot of ways. And so there was a bit of conversation about it, but AI was not a big, big thing at the time. Yeah. Yeah.Alessio [00:02:33]: Before we jump into Exa, any fun background stories? I know you interned at SpaceX, any Elon, uh, stories? I know you were at Zoox as well, you know, kind of like robotics at Harvard. Any stuff that you saw early that you thought was going to get solved that maybe it's not solved today?Will [00:02:48]: Oh yeah. I mean, lots of things like that. Like, uh, I never really learned how to drive because I believed Elon that self-driving cars would happen. It did happen. And I take them every night to get home. But it took like 10 more years than I thought. Do you still not know how to drive? I know how to drive now. I learned it like two years ago. That would have been great to like, just, you know, Yeah, yeah, yeah. You know? Um, I was obsessed with Elon. Yeah. I mean, I worked at SpaceX because I really just wanted to work at one of his companies. And I remember they had a rule, like interns cannot touch Elon. And, um, that rule actually influenced my actions.Swyx [00:03:18]: Is it, can Elon touch interns? Ooh, like physically?Will [00:03:22]: Or like talk? Physically, physically, yeah, yeah, yeah, yeah. Okay, interesting. He's changed a lot, but, um, I mean, his companies are amazing. Um,Swyx [00:03:28]: What if you beat him at Diablo 2, Diablo 4, you know, like, Ah, maybe.Alessio [00:03:34]: I want to jump into, I know there's a lot of backstory used to be called metaphor system. So, um, and it, you've always been kind of like a prominent company, maybe at least RAI circles in the NSF.Swyx [00:03:45]: I'm actually curious how Metaphor got its initial aura. You launched with like, very little. We launched very little. Like there was, there was this like big splash image of like, this is Aurora or something. Yeah. Right. And then I was like, okay, what this thing, like the vibes are good, but I don't know what it is. And I think, I think it was much more sort of maybe consumer facing than what you are today. Would you say that's true?Will [00:04:06]: No, it's always been about building a better search algorithm, like search, like, just like the vision has always been perfect search. And if you do that, uh, we will figure out the downstream use cases later. It started on this fundamental belief that you could have perfect search over the web and we could talk about what that means. And like the initial thing we released was really just like our first search engine, like trying to get it out there. Kind of like, you know, an open source. So when OpenAI released, uh, ChachBt, like they didn't, I don't know how, how much of a game plan they had. They kind of just wanted to get something out there.Swyx [00:04:33]: Spooky research preview.Will [00:04:34]: Yeah, exactly. And it kind of morphed from a research company to a product company at that point. And I think similarly for us, like we were research, we started as a research endeavor with a, you know, clear eyes that like, if we succeed, it will be a massive business to make out of it. And that's kind of basically what happened. I think there are actually a lot of parallels to, of w between Exa and OpenAI. I often say we're the OpenAI of search. Um, because. 
Because we're a research company, we're a research startup that does like fundamental research into, uh, making like AGI for search in a, in a way. Uh, and then we have all these like, uh, business products that come out of that.Swyx [00:05:08]: Interesting. I want to ask a little bit more about Metaforesight and then we can go full Exa. When I first met you, which was really funny, cause like literally I stayed in your house in a very historic, uh, Hayes, Hayes Valley place. You said you were building sort of like link prediction foundation model, and I think there's still a lot of foundation model work. I mean, within Exa today, but what does that even mean? I cannot be the only person confused by that because like there's a limited vocabulary or tokens you're telling me, like the tokens are the links or, you know, like it's not, it's not clear. Yeah.Will [00:05:38]: Uh, what we meant by link prediction is that you are literally predicting, like given some texts, you're predicting the links that follow. Yes. That refers to like, it's how we describe the training procedure, which is that we find links on the web. Uh, we take the text surrounding the link. And then we predict. Which link follows you, like, uh, you know, similar to transformers where, uh, you're trying to predict the next token here, you're trying to predict the next link. And so you kind of like hide the link from the transformer. So if someone writes, you know, imagine some article where someone says, Hey, check out this really cool aerospace startup. And they, they say spacex.com afterwards, uh, we hide the spacex.com and ask the model, like what link came next. And by doing that many, many times, you know, billions of times, you could actually build a search engine out of that because then, uh, at query time at search time. Uh, you type in, uh, a query that's like really cool aerospace startup and the model will then try to predict what are the most likely links. So there's a lot of analogs to transformers, but like to actually make this work, it does require like a different architecture than, but it's transformer inspired. Yeah.Alessio [00:06:41]: What's the design decision between doing that versus extracting the link and the description and then embedding the description and then using, um, yeah. What do you need to predict the URL versus like just describing, because you're kind of do a similar thing in a way. Right. It's kind of like based on this description, it was like the closest link for it. So one thing is like predicting the link. The other approach is like I extract the link and the description, and then based on the query, I searched the closest description to it more. Yeah.Will [00:07:09]: That, that, by the way, that is, that is the link refers here to a document. It's not, I think one confusing thing is it's not, you're not actually predicting the URL, the URL itself that would require like the, the system to have memorized URLs. You're actually like getting the actual document, a more accurate name could be document prediction. I see. This was the initial like base model that Exo was trained on, but we've moved beyond that similar to like how, you know, uh, to train a really good like language model, you might start with this like self-supervised objective of predicting the next token and then, uh, just from random stuff on the web. But then you, you want to, uh, add a bunch of like synthetic data and like supervised fine tuning, um, stuff like that to make it really like controllable and robust. 
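A rough sketch of the training setup described here, assuming an in-batch contrastive objective over (text-surrounding-a-link, linked-document) pairs with two small encoder "towers"; this is an illustration of the idea, not Exa's actual architecture, data, or code.

```python
# Minimal sketch of the "document prediction" objective: take (context, linked-document)
# pairs from crawled pages and train a dual encoder so each context embeds close to the
# document it actually linked to. Toy hashed bag-of-words towers, not a real pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 4096, 64

def featurize(text: str) -> torch.Tensor:
    """Toy bag-of-hashed-words vector; a real system would use a transformer encoder."""
    v = torch.zeros(VOCAB)
    for w in text.lower().split():
        v[hash(w) % VOCAB] += 1.0
    return v

pairs = [
    ("check out this really cool aerospace startup", "SpaceX designs builds and launches rockets"),
    ("a great essay on fundraising for founders", "Paul Graham how to raise money"),
    ("my favorite place to read machine learning papers", "arXiv open access research articles"),
]

ctx_tower = nn.Linear(VOCAB, DIM, bias=False)   # encodes the text surrounding the link
doc_tower = nn.Linear(VOCAB, DIM, bias=False)   # encodes the linked document
opt = torch.optim.Adam(list(ctx_tower.parameters()) + list(doc_tower.parameters()), lr=1e-2)

C_in = torch.stack([featurize(c) for c, _ in pairs])
D_in = torch.stack([featurize(d) for _, d in pairs])

for step in range(300):
    C = F.normalize(ctx_tower(C_in), dim=1)
    D = F.normalize(doc_tower(D_in), dim=1)
    logits = C @ D.T / 0.07                      # each context scored against every document
    target = torch.arange(len(pairs))            # the "hidden" link is the correct answer
    loss = F.cross_entropy(logits, target)       # in-batch contrastive: which link came next?
    opt.zero_grad(); loss.backward(); opt.step()

# At query time, embed the query with the context tower and rank documents by similarity.
query = F.normalize(ctx_tower(featurize("really cool aerospace startup").unsqueeze(0)), dim=1)
print((query @ F.normalize(doc_tower(D_in), dim=1).T).squeeze(0))
```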
Yeah.Alessio [00:07:48]: Yeah. We just have flow from Lindy and, uh, their Lindy started to like hallucinate recrolling YouTube links instead of like, uh, something. Yeah. Support guide. So. Oh, interesting. Yeah.Swyx [00:07:57]: So round about January, you announced your series A and renamed to Exo. I didn't like the name at the, at the initial, but it's grown on me. I liked metaphor, but apparently people can spell metaphor. What would you say are the major components of Exo today? Right? Like, I feel like it used to be very model heavy. Then at the AI engineer conference, Shreyas gave a really good talk on the vector database that you guys have. What are the other major moving parts of Exo? Okay.Will [00:08:23]: So Exo overall is a search engine. Yeah. We're trying to make it like a perfect search engine. And to do that, you have to build lots of, and we're doing it from scratch, right? So to do that, you have to build lots of different. The crawler. Yeah. You have to crawl a bunch of the web. First of all, you have to find the URLs to crawl. Uh, it's connected to the crawler, but yeah, you find URLs, you crawl those URLs. Then you have to process them with some, you know, it could be an embedding model. It could be something more complex, but you need to take, you know, or like, you know, in the past it was like a keyword inverted index. Like you would process all these documents you gather into some processed index, and then you have to serve that. Uh, you had high throughput at low latency. And so that, and that's like the vector database. And so it's like the crawling system, the AI processing system, and then the serving system. Those are all like, you know, teams of like hundreds, maybe thousands of people at Google. Um, but for us, it's like one or two people each typically, but yeah.Alessio [00:09:13]: Can you explain the meaning of, uh, Exo, just the story 10 to the 16th, uh, 18, 18.Will [00:09:20]: Yeah, yeah, yeah, sure. So. Exo means 10 to the 18th, which is in stark contrast to. To Google, which is 10 to the hundredth. Uh, we actually have these like awesome shirts that are like 10th to 18th is greater than 10th to the hundredth. Yeah, it's great. And it's great because it's provocative. It's like every engineer in Silicon Valley is like, what? No, it's not true. Um, like, yeah. And, uh, and then you, you ask them, okay, what does it actually mean? And like the creative ones will, will recognize it. But yeah, I mean, 10 to the 18th is better than 10 to the hundredth when it comes to search, because with search, you want like the actual list of, of things that match what you're asking for. You don't want like the whole web. You want to basically with search filter, the, like everything that humanity has ever created to exactly what you want. And so the idea is like smaller is better there. You want like the best 10th to the 18th and not the 10th to the hundredth. I'm like, one way to say this is like, you know how Google often says at the top, uh, like, you know, 30 million results found. And it's like crazy. Cause you're looking for like the first startups in San Francisco that work on hardware or something. And like, they're not 30 million results like that. What you want is like 325 results found. And those are all the results. That's what you really want with search. And that's, that's our vision. It's like, it just gives you. Perfectly what you asked for.Swyx [00:10:24]: We're recording this ahead of your launch. 
Uh, we haven't released, we haven't figured out the, the, the name of the launch yet, but what is the product that you're launching? I guess now that we're coinciding this podcast with. Yeah.Will [00:10:36]: So we've basically developed the next version of Exa, which is the ability to get a near perfect list of results of whatever you want. And what that means is you can make a complex query now to Exa, for example, startups working on hardware in SF, and then just get a huge list of all the things that match. And, you know, our goal is if there are 325 startups that match that we find you all of them. And this is just like, there's just like a new experience that's never existed before. It's really like, I don't know how you would go about that right now with current tools and you can apply this same type of like technology to anything. Like, let's say you want, uh, you want to find all the blog posts that talk about Alessio's podcast, um, that have come out in the past year. That is 30 million results. Yeah. Right.Will [00:11:24]: But that, I mean, that would, I'm sure that would be extremely useful to you guys. And like, I don't really know how you would get that full comprehensive list.Swyx [00:11:29]: I just like, how do you, well, there's so many questions with regards to how do you know it's complete, right? Cause you're saying there's only 30 million, 325, whatever. And then how do you do the semantic understanding that it might take, right? So working in hardware, like I might not use the words hardware. I might use the words robotics. I might use the words wearables. I might use like whatever. Yes. So yeah, just tell us more. Yeah. Yeah. Sure. Sure.Will [00:11:53]: So one aspect of this, it's a little subjective. So like certainly providing, you know, at some point we'll provide parameters to the user to like, you know, some sort of threshold to like, uh, gauge like, okay, like this is a cutoff. Like, this is actually not what I mean, because sometimes it's subjective and there needs to be a feedback loop. Like, oh, like it might give you like a few examples and you say, yeah, exactly. And so like, you're, you're kind of like creating a classifier on the fly, but like, that's ultimately how you solve the problem. So the subject, there's a subjectivity problem and then there's a comprehensiveness problem. Those are two different problems. So. Yeah. So you have the comprehensiveness problem. What you basically have to do is you have to put more compute into the query, into the search until you get the full comprehensiveness. Yeah. And I think there's an interesting point here, which is that not all queries are made equal. Some queries just like this blog post one might require scanning, like scavenging, like throughout the whole web in a way that just, just simply requires more compute. You know, at some point there's some amount of compute where you will just be comprehensive. You could imagine, for example, running GPT-4 over the internet. You could imagine running GPT-4 over the entire web and saying like, is this a blog post about Alessio's podcast, like, is this a blog post about Alessio's podcast? And then that would work, right? It would take, you know, a year, maybe cost like a million dollars, but, or many more, but, um, it would work. Uh, the point is that like, given sufficient compute, you can solve the query. And so it's really a question of like, how comprehensive do you want it given your compute budget? I think it's very similar to O1, by the way. 
And one way of thinking about what we built is like O1 for search, uh, because O1 is all about like, you know, some, some, some questions require more compute than others, and we'll put as much compute into the question as we need to solve it. So similarly with our search, we will put as much compute into the query in order to get comprehensiveness. Yeah.Swyx [00:13:33]: Does that mean you have like some kind of compute budget that I can specify? Yes. Yes. Okay. And like, what are the upper and lower bounds?Will [00:13:42]: Yeah, there's something we're still figuring out. I think like, like everyone is a new paradigm of like variable compute products. Yeah. How do you specify the amount of compute? Like what happens when you. Run out? Do you just like, ah, do you, can you like keep going with it? Like, do you just put in more credits to get more, um, for some, like this can get complex at like the really large compute queries. And like, one thing we do is we give you a preview of what you're going to get, and then you could then spin up like a much larger job, uh, to get like way more results. But yes, there is some compute limit, um, at, at least right now. Yeah. People think of searches as like, oh, it takes 500 milliseconds because we've been conditioned, uh, to have search that takes 500 milliseconds. But like search engines like Google, right. No matter how complex your query to Google, it will take like, you know, roughly 400 milliseconds. But what if searches can take like a minute or 10 minutes or a whole day, what can you then do? And you can do very powerful things. Um, you know, you can imagine, you know, writing a search, going and get a cup of coffee, coming back and you have a perfect list. Like that's okay for a lot of use cases. Yeah.Alessio [00:14:43]: Yeah. I mean, the use case closest to me is venture capital, right? So, uh, no, I mean, eight years ago, I built one of the first like data driven sourcing platforms. So we were. You look at GitHub, Twitter, Product Hunt, all these things, look at interesting things, evaluate them. If you think about some jobs that people have, it's like literally just make a list. If you're like an analyst at a venture firm, your job is to make a list of interesting companies. And then you reach out to them. How do you think about being infrastructure versus like a product you could say, Hey, this is like a product to find companies. This is a product to find things versus like offering more as a blank canvas that people can build on top of. Oh, right. Right.Will [00:15:20]: Uh, we are. We are a search infrastructure company. So we want people to build, uh, on top of us, uh, build amazing products on top of us. But with this one, we try to build something that makes it really easy for users to just log in, put a few, you know, put some credits in and just get like amazing results right away and not have to wait to build some API integration. So we're kind of doing both. Uh, we, we want, we want people to integrate this into all their applications at the same time. We want to just make it really easy to use very similar again to open AI. Like they'll have, they have an API, but they also have. Like a ChatGPT interface so that you could, it's really easy to use, but you could also build it in your applications. Yeah.Alessio [00:15:56]: I'm still trying to wrap my head around a lot of the implications. 
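As a rough illustration of the "preview first, then spin up a much larger job" flow described above, here is what a call might look like through Exa's Python client. The exa_py package name, the search method, and the type/num_results parameters reflect my reading of the public SDK rather than anything stated in the episode, and the explicit compute-budget knob discussed above is not modeled; num_results simply stands in for "spend more."

```python
# Hedged sketch only: check Exa's current docs for exact method signatures.
from exa_py import Exa

exa = Exa(api_key="YOUR_EXA_API_KEY")  # placeholder key

query = "startups working on hardware in SF"

# Small, fast preview to sanity-check that the query means what you think it means.
preview = exa.search(query, type="neural", num_results=10)
for r in preview.results:
    print(r.title, r.url)

# If the preview looks right, kick off the larger, slower, more comprehensive job
# (here simply approximated by asking for many more results).
full = exa.search(query, type="neural", num_results=100)
print(f"{len(full.results)} results")
```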
So, so many businesses run on like information arbitrage, you know, like I know this thing that you don't, especially in investment and financial services. So yeah, now all of a sudden you have these tools for like, oh, actually everybody can get the same information at the same time, the same quality level as an API call. You know, it just kind of changes a lot of things. Yeah. Will [00:16:19]: I think, I think what we're grappling with here. What, what you're just thinking about is like, what is the world like if knowledge is kind of solved, if like any knowledge request you want is just like right there on your computer, it's kind of different from when intelligence is solved. There's like a good, I've written before about, like, this difference: super intelligence, super knowledge. Yeah. Like I think that the, the distinction between intelligence and knowledge is actually a pretty good one. They're definitely connected and related in all sorts of ways, but there is a distinction. You could have a world, and we are going to have this world, where you have like GPT-5-level systems and beyond that could like answer any complex request. Um, unless it requires some. Like, if you say like, uh, you know, give me a list of all the PhDs in New York City who, I don't know, have thought about search before. And even though this, this super intelligence is going to be like, I can't find it on Google, right. Which is kind of crazy. Like we're literally going to have like super intelligences that are using Google. And so if Google can't find them information, there's nothing they could do. They can't find it. So, but if you also have a super knowledge system where it's like, you know, I'm calling this term super knowledge where you just get whatever knowledge you want, then you can pair it with a super intelligence system. And then the super intelligence can, we'll never. Be blocked by lack of knowledge. Alessio [00:17:23]: Yeah. You told me this, uh, when we had lunch, I forget how it came out, but we were talking about AGI and whatnot. And you were like, even AGI is going to need search. Yeah. Swyx [00:17:32]: Yeah. Right. Yeah. Um, so we're actually referencing a blog post that you wrote, super intelligence and super knowledge. Uh, so I would refer people to that. And this is actually a discussion we've had on the podcast a couple of times. Um, there's so much of model weights that are just memorizing facts. Some of the, some of those might be outdated. Some of them are incomplete or not. Yeah. So like you just need search. So I do wonder, like, is there a maximum language model size that will be the intelligence layer and then the rest is just search, right? Like maybe we should just always use search. And then that sort of workhorse model is just like, and it's like, like, a 1B or 3B parameter model that just drives everything. Yes. Will [00:18:13]: I believe this is a much more optimal system, to have a smaller LLM that's really just like an intelligence module, and it makes a call to a search tool. That's way more efficient because if, okay, I mean the, the opposite of that would be like the LLM is so big that it can memorize the whole web. That would be like way, but you know, it's not practical at all. I don't, it's not possible to train that at least right now. And Karpathy has actually written about this, how like he could, he could see models moving more and more towards like intelligence modules using various tools. 
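A minimal sketch of the "small intelligence module plus search tool" pattern being described: a lightweight model decides when to call a search tool and when to answer. Both llm() and search() below are hypothetical stubs, not any specific vendor's API.

```python
# Toy agent loop: a small model (stubbed) either asks the search tool (stubbed)
# for knowledge or returns an answer once it has enough context.
def llm(prompt: str) -> str:
    """Stub for a small, fast chat model; answers once search results appear."""
    if "launched in 1995" in prompt:
        return "ANSWER: AltaVista launched in 1995."
    return "SEARCH: AltaVista launch year"

def search(query: str) -> list[str]:
    """Stub for a web-search tool returning snippets."""
    return ["AltaVista was launched in 1995 by Digital Equipment Corporation."]

def answer(question: str, max_steps: int = 3) -> str:
    context: list[str] = []
    for _ in range(max_steps):
        step = llm(f"Question: {question}\nKnown: {context}\n"
                   "Reply with SEARCH: <query> or ANSWER: <answer>.")
        if step.startswith("SEARCH:"):
            context.extend(search(step.removeprefix("SEARCH:").strip()))
        else:
            return step.removeprefix("ANSWER:").strip()
    return "No answer within budget."

print(answer("When was AltaVista launched?"))
```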
Yeah. Swyx [00:18:39]: So for listeners, that's the, that was him on the No Priors podcast. And for us, we talked about this and the, on the Shunyu and Harrison Chase podcasts. I'm doing search in my head. I told you 30 million results. I forgot about our Neuralink integration. Self-hosted Exa. Will [00:18:54]: Yeah. Yeah. No, I do see that that is a much more, much more efficient world. Yeah. I mean, you could also have GPT-4-level systems calling search, but it's just, because of the cost of inference, it's just better to have a very efficient search tool and a very efficient LLM, and they're built for different things. Yeah. Swyx [00:19:09]: I'm just kind of curious. Like it is still something so audacious that I don't want to elide, which is you're, you're, you're building a search engine. Where do you start? How do you, like, are there any reference papers or implementations that would really influence your thinking, anything like that? Because I don't even know where to start apart from just crawl a bunch of s**t, but there's gotta be more insight than that. Will [00:19:28]: I mean, yeah, there's more insight, but I'm always surprised by like, if you have a group of people who are really focused on solving a problem, um, with the tools today, like there's some in, in software, like there are all sorts of creative solutions that just haven't been thought of before, particularly in the information retrieval field. Yeah. I think a lot of the techniques are just very old, frankly. Like, I know how Google and Bing work, and they're just not using new methods. There are all sorts of reasons for that. Like one, like Google has to be comprehensive over the web. So they're, and they have to return in 400 milliseconds. And those two things combined means they are kind of limited, and it can't cost too much. They're kind of limited in, uh, what kinds of algorithms they could even deploy at scale. So they end up using like a limited keyword based algorithm. Also like Google was built in a time where like in, you know, in 1998, where we didn't have LLMs, we didn't have embeddings. And so they never thought to build those things. And so now they have this like gigantic system that is built on old technology. Yeah. And so a lot of the information retrieval field we found just like thinks in terms of that framework. Yeah. Whereas we came in as like newcomers just thinking like, okay, here's GPT-3. It's magical. Obviously we're going to build search that is using that technology. And we never even thought about using keywords really ever. Uh, like we were neural all the way; we're building an end-to-end neural search engine. And just that whole framing just makes us ask different questions, like pursue different lines of work. And there's just a lot of low hanging fruit because no one else is thinking about it. We're just on the frontier of neural search. We just are, um, for, for, at web scale, um, because there's just not a lot of people thinking that way about it. Swyx [00:20:57]: Yeah. Maybe let's spell this out since, uh, we're already on this topic, elephants in the room are Perplexity and SearchGPT. That's the, I think that it's all, it's no longer called SearchGPT. I think they call it ChatGPT Search. How would you contrast your approaches to them based on what we know of how they work and yeah, just any, anything in that, in that area? 
Yeah. Will [00:21:15]: So these systems, there are a few of them now, uh, they basically rely on like traditional search engines like Google or Bing, and then they combine them with like LLMs at the end to, you know, output some paragraphs, uh, answering your question. So, like SearchGPT, Perplexity. I think they have their own crawlers. No. So there's this important distinction between like having your own search system and like having your own cache of the web. Like for example, so you could create, you could crawl a bunch of the web. Imagine you crawl a hundred billion URLs, and then you create a key value store of like mapping from URL to the document. That is technically called an index, but it's not a search algorithm. So then to actually like, when you make a query to SearchGPT, for example, what is it actually doing? Let's say it's, it's, it could, it's using the Bing API, uh, getting a list of results, and then it could go, it has this cache of like all the contents of those results and then could like bring in the cache, like the index cache, but it's not actually like, it's not like they've built a search engine from scratch over, you know, hundreds of billions of pages. Is that distinction clear? It's like, yeah, you could have like a mapping from URL to documents, but then rely on traditional search engines to actually get the list of results, because it's a very hard problem to take. It's not hard. It's not hard to use DynamoDB and, and, and map URLs to documents. It's a very hard problem to take a hundred billion or more documents and, given a query, like instantly get the list of results that match. That's a much harder problem that very few entities on, in, on the planet have done. Like there's Google, there's Bing, uh, you know, there's Yandex, but you know, there are not that many companies that are, that are crazy enough to actually build their search engine from scratch when you could just use traditional search APIs. Alessio [00:22:43]: So Google had PageRank as like the big thing. Is there an LLM equivalent or, like, any stuff that you're working on that you want to highlight? Will [00:22:51]: The link prediction objective can be seen as like a neural PageRank, because what you're doing is you're predicting the links people share. And so if everyone is sharing some Paul Graham essay about fundraising, then like our model is more likely to predict it. So like inherent in our training objective is this, uh, a sense of like high canonicity and like high quality, but it's more powerful than PageRank. It's strictly more powerful because people might refer to that Paul Graham fundraising essay in like a thousand different ways. And so our model learns all the different ways that someone refers to that Paul Graham essay, while also learning how important that Paul Graham essay is. Um, so it's like, it's like PageRank on steroids kind of thing. Yeah. Alessio [00:23:26]: I think to me, that's the most interesting thing about search today, like with Google and whatnot, it's like, it's mostly like domain authority. So like, going back to what you were saying, like if you search any AI term, you get these like SEO slop websites with like a bunch of things in them. So this is interesting, but then how do you think about more timeless maybe content? So if you think about, yeah. You know, maybe the founder mode essay, right. It gets shared by like a lot of people, but then you might have a lot of other essays that are also good, but they just don't really get a lot of traction. 
Even though maybe the people that share them are high quality. How do you kind of solve that thing when you don't have the people authority, so to speak, of who's sharing, whether or not they're worth kind of like bumping up? Yeah. Will [00:24:10]: I mean, you do have a lot of control over the training data, so you could like make sure that the training data contains like high quality sources, so that, okay, like if your, if your training data, I mean, it's very similar to like language model training. Like if you train on like a bunch of crap, your prediction will be crap. Our model will match the training distribution it's trained on. And so we could like, there are lots of ways to tweak the training data to refer to high quality content that we want. Yeah. I would say also this, like this slop that is returned by, by traditional search engines, like Google and Bing, you have, the slop is then, uh, transferred into the, these LLMs in like a SearchGPT or, you know, other systems like that. Like if slop comes in, slop will go out. And so, yeah, that's another answer to how we're different is like, we're not like traditional search engines. We want to give like the highest quality results and like have full control over whatever you want. If you don't want slop, you get that. And then if you put an LLM on top of that, which our customers do, then you just get higher quality results or high quality output. Alessio [00:25:06]: And I use Exa search very often and it's very good. Especially. Swyx [00:25:09]: Wave uses it too. Alessio [00:25:10]: Yeah. Yeah. Yeah. Yeah. Yeah. Like the slop is everywhere, especially when it comes to AI, when it comes to investment. When it comes to all of these things for like, it's valuable to be at the top. And this problem is only going to get worse because. Yeah, no, it's totally. What else is in the toolkit? So you have the search API, you have ExaSearch, kind of like the web version. Now you have the list builder. I think you also have web scraping. Maybe just touch on that. Like, I guess maybe people, they want to search and then they want to scrape. Right. So is that kind of the use case that people have? Yeah. Will [00:25:41]: A lot of our customers, they don't just want, because they're building AI applications on top of Exa, they don't just want a list of URLs. They actually want like the full content, like cleaned, parsed Markdown, maybe chunked, whatever they want, we'll give it to them. And so that's been like huge for customers. Just like getting the URLs and instantly getting the content for each URL is like, and you can do this for 10 or 100 or 1,000 URLs, whatever you want. That's very powerful. Swyx [00:26:05]: Yeah. I think this is the first thing I asked you for when I tried using Exa. Will [00:26:09]: Funny story is like when I built the first version of Exa, it's like, we just happened to store the content. Yes. Like the first 1,024 tokens. Because I just kind of like kept it because I thought of, you know, I don't know why. Really for debugging purposes. And so then when people started asking for content, it was actually pretty easy to serve it. But then, and then we did that, like Exa took off. So the, the content was so useful. So that was kind of cool. Swyx [00:26:30]: It is. I would say there are other players like Jina, I think, is in this space. Firecrawl is in this space. There's a bunch of scraper companies. 
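Sticking with the "URLs plus cleaned contents in one call" workflow just described, here is a sketch using the exa_py client's search_and_contents method; the exact parameter names (type, num_results, text) are my reading of Exa's public SDK and may differ from the current API.

```python
# Hedged sketch of retrieving results together with parsed page text.
from exa_py import Exa

exa = Exa(api_key="YOUR_EXA_API_KEY")  # placeholder key

resp = exa.search_and_contents(
    "blog posts about building vector databases",
    type="neural",
    num_results=10,
    text=True,          # assumed flag: return parsed page text alongside each URL
)
for r in resp.results:
    print(r.url)
    print((r.text or "")[:200], "...")
```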
And obviously scraper is just one part of your stack, but you might as well offer it since you already do it. Will [00:26:43]: Yeah, it makes sense. It's just easy to have an all-in-one solution. And like, we are, you know, building the best scraper in the world. So scraping is a hard problem and it's easy to get like, you know, a good scraper. It's very hard to get a great scraper and it's super hard to get a perfect scraper. So like, and, and scraping really matters to people. Do you have a perfect scraper? Not yet. Okay. Swyx [00:27:05]: The web is increasingly closing to the bots and the scrapers, Twitter, Reddit, Quora, Stack Overflow. I don't know what else. How are you dealing with that? How are you navigating those things? Like, you know, the OpenAI approach, like just paying them money. Will [00:27:19]: Yeah, no, I mean, I think it definitely makes it harder for search engines. One response is just that there's so much value in the long tail of sites that are open. Okay. Um, and just like, even just searching over those gets you most of the value. But I mean, there, there is definitely a lot of content that is increasingly, uh, unavailable. And so you could get through that through data partnerships. The bigger we get as a company, the more, the easier it is to just like, uh, make partnerships. But I, I mean, I do see the world as like the future where the, the data producers, the content creators will make partnerships with the entities that find that data. Alessio [00:27:53]: Any other fun use case that maybe people are not thinking about? Yeah. Will [00:27:58]: Oh, I mean, uh, there are so many customers. Yeah. What are people doing on Exa? Well, I think dating is a really interesting, uh, application of search that is completely underserved, because there's a lot of profiles on the web and a lot of people who want to find love, and they'll use it. They give, like, you know, age boundaries, you know, education level, location. Yeah. I mean, you want to, what, what do you want to do with data? You want to find like a partner who matches this education level, who like, you know, maybe has written about these types of topics before. Like if you could get a list of all the people like that, like, I think you will unblock a lot of people. I mean, there, I mean, I think this is a very Silicon Valley view of dating for sure. And I'm, I'm well aware of that, but it's just an interesting application of like, you know, I would love to meet like an intellectual partner, um, who like shares a lot of ideas. Yeah. Like if you could do that through better search and yeah. Swyx [00:28:48]: But what is it with Jeff? Jeff has already set me up with a few people. So like, Jeff, I think, is my personal Exa. Will [00:28:55]: My mom's actually a matchmaker and has got a lot of people married. Yeah. No kidding. Yeah. Yeah. Search is built into the book. It's in your genes. Yeah. Yeah. Swyx [00:29:02]: Yeah. Other than dating, like I know you're having quite some success in colleges. I would just love to map out some more use cases so that our listeners can just use those examples to think about use cases for Exa, right? Because it's such a general technology that it's hard to, uh, really pin down, like, what should I use it for and what kind of products can I build with it? Will [00:29:20]: Yeah, sure. So, I mean, there are so many applications of Exa and we have, you know, many, many companies using us for a very diverse range of use cases, but I'll just highlight some interesting ones. 
Like one customer, a big customer, is using us to, um, basically build like a, a writing assistant for students who want to write, uh, research papers. And basically like Exa will search for, uh, like a list of research papers related to what the student is writing. And then this product has, has like an LLM that like summarizes the papers, so basically it's like next word prediction, but, uh, you know, prompted by like, you know, 20 research papers that Exa has returned. It's like literally just doing their homework for them. Yeah. Yeah. The key point is like, it's, it's, uh, you know, it's, it's, you know, research is, is a really hard thing to do and you need like high quality content as input. Swyx [00:30:08]: Oh, so we've had Elicit on the podcast. I think it's pretty similar. Uh, they, they do focus pretty much on just, just research papers and, and that research. Basically, I think dating, uh, research, like I just wanted to like spell out more things, like just the big verticals. Will [00:30:23]: Yeah, yeah, no, I mean, there, there are so many use cases. So finance we talked about, yeah. I mean, one big vertical is just finding a list of companies, uh, so it's useful for VCs, like you said, who want to find like a list of competitors to a specific company they're investigating, or just a list of companies in some field. Like, uh, there was one VC that told me that him and his team, like, were using Exa for like eight hours straight. Like, like that. For many days on end, just like, like, uh, doing like lots of different queries of different types, like, oh, like all the companies in AI for law or, uh, all the companies for AI for, uh, construction, and just like getting lists of things because you just can't find this information with, with traditional search engines. And then, you know, finding companies is also useful for, for selling. If you want to find, you know, like if we want to find a list of, uh, writing assistants to sell to, then we can just, we just use Exa ourselves to find that. That is actually how we found a lot of our customers. Ooh, you can find your own customers using Exa. Oh my God. In the spirit of, uh, using Exa to bolster Exa, like recruiting is really helpful. It is a really great use case of Exa, um, because we can just get like a list of, you know, people who thought about search and just get like a long list and then, you know, reach out to those people. Swyx [00:31:29]: When you say thought about, are you, are you thinking LinkedIn, Twitter, or are you thinking just blogs? Will [00:31:33]: Or they've written, I mean, it's pretty general. So in that case, like ideally Exa would return like the, the, really, blogs written by people who have just. So if I don't blog, I don't show up on Exa, right? Like I have to blog. Well, I mean, you could show up. That's like an incentive for people to blog. Swyx [00:31:47]: Well, if you've written about, uh, search on Twitter and we, we do, we do index a bunch of tweets and then we, we should be able to surface that. Yeah. Um, I mean, this is something I tell people, like you have to make yourself discoverable to the web, uh, you know, it's called learning in public, but like, it's even more imperative now because otherwise you don't exist at all. Will [00:32:07]: Yeah, no, no, this is a huge, uh, thing, which is like search engines completely influence. They have downstream effects. They influence the internet itself. They influence what people. Choose to create. 
And so Google, because they're a keyword based search engine, people like kind of like keyword stuff. Yeah. They're, they're, they're incentivized to create things that just match a lot of keywords, which is not very high quality. Uh, whereas Exa is a search algorithm that, uh, optimizes for like high quality and actually like matching what you mean. And so people are incentivized to create content that is high quality, that, like, they create content that they know will be found by the right person. So like, you know, if I am a search researcher and I want to be found by Exa, I should blog about search and all the things I'm building, because, because now we have a search engine like Exa that's powerful enough to find them. And so the search engine will influence like the downstream internet in all sorts of amazing ways. Yeah. Uh, whatever the search engine optimizes for is what the internet looks like. Yeah. Swyx [00:33:01]: Are you familiar with the term McLuhanism? No? It's, uh, it's this concept that, uh, like first we shape tools and then the tools shape us. Okay. Yeah. Uh, so there's like this reflexive connection between the things we search for and the things that get searched. Yes. So like once you change the tool, the tool that searches, the, the things that get searched also change. Yes. Will [00:33:18]: I mean, there was a clear example of that with 30 years of Google. Yeah, exactly. Google has basically trained us to think of search, and Google has, Google is search, like, in people's heads. Right. One, uh, hard part about Exa is like, uh, ripping people away from that notion of search and expanding their sense of what search could be. Because like when people think search, they think like a few keywords, or at least they used to, they think of a few keywords and that's it. They don't think to make these like really complex, paragraph-long requests for information and get a perfect list. ChatGPT was an interesting like thing that expanded people's understanding of search, because you start using ChatGPT for a few hours and you go back to Google and you like paste in your code and Google just doesn't work and you're like, oh, wait, it, Google doesn't work that way. So like ChatGPT expanded our understanding of what search can be. And I think Exa is, uh, is part of that. We want to expand people's notion, like, hey, you could actually get whatever you want. Yeah. Alessio [00:34:06]: I searched on Exa right now, people writing about learning in public. I was like, is it gonna come out with Alessio? Am I, am I there? You're not, because. Bro. It's. So, no, it's, it's so about, because it thinks about learning, like, in public, like public schools, and like focuses more on that. You know, it's like how, when there are like these highly overlapping things, like this is like a good result based on the query, you know, but like, how do I get to Alessio? Right. So if you're like in these subcultures, I don't think this would work in Google well either, you know, but I, I don't know if you have any learnings. Swyx [00:34:40]: No, I'm the first result on Google. Alessio [00:34:42]: People writing about learning in public, you're not the first result anymore, I guess. Swyx [00:34:48]: Just type learning public in Google. Alessio [00:34:49]: Well, yeah, yeah, yeah, yeah. But this is also like, this is in Google, it doesn't work either. That's what I'm saying. 
It's like how, when you have like a movement. Will [00:34:56]: There's confusion about the, like what you mean, like your intention is a little, uh. Yeah. Alessio [00:35:00]: It's like, yeah, I'm using, I'm using a term that like I didn't invent, but I'm kind of taking over, but like, there's just so much about that term already that it's hard to overcome. If that makes sense, because public schools is like, well, it's, it's hard to overcome. Will [00:35:14]: Public schools, you know, so there's the right solution to this, which is to specify more clearly what you mean. And I'm not expecting you to do that, but so the, the right interface to search is actually an LLM. Swyx [00:35:25]: Like you should be talking to an LLM about what you want, and the LLM translates its knowledge of you or knowledge of what people usually mean into a query that Exa uses, which you have called autoprompts, right? Will [00:35:35]: Or, yeah, but it's like a very light version of that. And really it's just, basically, the right answer is it's the wrong interface, and like very soon the interface to search and really to everything will be an LLM. And the LLM just has a full knowledge of you, right? So we're kind of building for that world. We're skating to where the puck is going to be. And so since we're moving to a world where like LLMs are the interface to everything, you should build a search engine that can handle complex LLM queries, queries that come from LLMs. Because you're probably too lazy, I'm too lazy too, to write like a whole paragraph explaining, okay, this is what I mean by this word. But an LLM is not lazy. And so like the LLM will spit out like a paragraph or more explaining exactly what it wants. You need a search engine that can handle that. Traditional search engines like Google or Bing, they're actually... designed for humans typing keywords. If you give a paragraph to Google or Bing, they just completely fail. And so Exa can handle paragraphs and we want to be able to handle it more and more until it's like perfect. Alessio [00:36:24]: What about opinions? Do you have lists? When you think about the list product, do you think about just finding entries? Do you think about ranking entries? I'll give you a dumb example. So on Lindy, I've been building this bot that every week gives me like the top fantasy football waiver pickups. But every website is like different opinions. I'm like, you should pick up these five players, these five players. When you're making lists, do you want to be kind of like also ranking and like telling people what's best? Or like, are you mostly focused on just surfacing information? Will [00:36:56]: There's a really good distinction between filtering to like things that match your query and then ranking based on like what is like your preferences. And, uh, filtering is objective. It's like, does this document match what you asked for? Whereas ranking is more subjective. It's like, what is the best? Well, it depends what you mean by best, right? So first, first table stakes is let's get the filtering into a perfect place where you actually like every document matches what you asked for. No search engine can do that today. And then ranking, you know, there are all sorts of interesting ways to do that, where like you maybe, you know, have the user like specify more clearly what they mean by best. You could do it. And if the user doesn't specify, you do your best, you do your best based on what people typically mean by best. 
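A toy illustration of the filtering-versus-ranking split Will is drawing: filtering is the objective "does this match the request?" step, and ranking is the subjective ordering applied afterwards. The data and the matches() heuristic are invented for the example and have nothing to do with Exa's internals.

```python
# Filter first (objective match), then rank by whatever metric the user names.
companies = [
    {"name": "AcmeRobotics", "city": "SF", "field": "hardware", "employees": 40},
    {"name": "PaperCloud",   "city": "SF", "field": "saas",     "employees": 12},
    {"name": "VoltWorks",    "city": "SF", "field": "hardware", "employees": 7},
]

def matches(company, query_field: str, query_city: str) -> bool:
    """Filtering: objective yes/no -- does the entry satisfy the request?"""
    return company["field"] == query_field and company["city"] == query_city

def rank(results, key: str):
    """Ranking: subjective ordering, here by a metric the user specifies."""
    return sorted(results, key=lambda c: c[key], reverse=True)

filtered = [c for c in companies if matches(c, "hardware", "SF")]
print(rank(filtered, key="employees"))  # "best" = most employees, per the user
```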
But ideally, like the user can specify, oh, when I mean best, I actually mean ranked by the, you know, the number of people who visited that site. Let's say is, is one example ranking or, oh, what I mean by best, let's say you're listing companies. What I mean by best is like the ones that have, uh, you know, have the most employees or something like that. Like there are all sorts of ways to rank a list of results that are not captured by something as subjective as best. Yeah. Yeah.Alessio [00:38:00]: I mean, it's like, who are the best NBA players in the history? It's like everybody has their own. Right.Will [00:38:06]: Right. But I mean, the, the, the search engine should definitely like, even if you don't specify it, it should do as good of a job as possible. Yeah. Yeah. No, no, totally. Yeah. Yeah. Yeah. Yeah. It's a new topic to people because we're not used to a search engine that can handle like a very complex ranking system. Like you think to type in best basketball players and not something more specific because you know, that's the only thing Google could handle. But if Google could handle like, oh, basketball players ranked by like number of shots scored on average per game, then you would do that. But you know, they can't do that. So.Swyx [00:38:32]: Yeah. That's fascinating. So you haven't used the word agents, but you're kind of building a search agent. Do you believe that that is agentic in feature? Do you think that term is distracting?Will [00:38:42]: I think it's a good term. I do think everything will eventually become agentic. And so then the term will lose power, but yes, like what we're building is agentic it in a sense that it takes actions. It decides when to go deeper into something, it has a loop, right? It feels different from traditional search, which is like an algorithm, not an agent. Ours is a combination of an algorithm and an agent.Swyx [00:39:05]: I think my reflection from seeing this in the coding space where there's basically sort of classic. Framework for thinking about this stuff is the self-driving levels of autonomy, right? Level one to five, typically the level five ones all failed because there's full autonomy and we're not, we're not there yet. And people like control. People like to be in the loop. So the, the, the level ones was co-pilot first and now it's like cursor and whatever. So I feel like if it's too agentic, it's too magical, like, like a, like a one shot, I stick a, stick a paragraph into the text box and then it spits it back to me. It might feel like I'm too disconnected from the process and I don't trust it. As opposed to something where I'm more intimately involved with the research product. I see. So like, uh, wait, so the earlier versions are, so if trying to stick to the example of the basketball thing, like best basketball player, but instead of best, you, you actually get to customize it with like, whatever the metric is that you, you guys care about. Yeah. I'm still not a basketballer, but, uh, but, but, you know, like, like B people like to be in my, my thesis is that agents level five agents failed because people like to. To kind of have drive assist rather than full self-driving.Will [00:40:15]: I mean, a lot of this has to do with how good agents are. 
Like at some point, if agents for coding are better than humans at all tests and then humans block, yeah, we're not there yet.Swyx [00:40:25]: So like in a world where we're not there yet, what you're pitching us is like, you're, you're kind of saying you're going all the way there. Like I kind of, I think all one is also very full, full self-driving. You don't get to see the plan. You don't get to affect the plan yet. You just fire off a query and then it goes away for a couple of minutes and comes back. Right. Which is effectively what you're saying you're going to do too. And you think there's.Will [00:40:42]: There's a, there's an in-between. I saw. Okay. So in building this product, we're exploring new interfaces because what does it mean to kick off a search that goes and takes 10 minutes? Like, is that a good interface? Because what if the search is actually wrong or it's not exactly, exactly specified to what you mean, which is why you get previews. Yeah. You get previews. So it is iterative, but ultimately once you've specified exactly what you mean, then you kind of do just want to kick off a batch job. Right. So perhaps what you're getting at is like, uh, there's this barrier with agents where you have to like explain the full context of what you mean, and a lot of failure modes happen when you have, when you don't. Yeah. There's failure modes from the agent, just not being smart enough. And then there's failure modes from the agent, not understanding exactly what you mean. And there's a lot of context that is shared between humans that is like lost between like humans and, and this like new creature.Alessio [00:41:32]: Yeah. Yeah. Because people don't know what's going on. I mean, to me, the best example of like system prompts is like, why are you writing? You're a helpful assistant. Like. Of course you should be an awful, but people don't yet know, like, can I assume that, you know, that, you know, it's like, why did the, and now people write, oh, you're a very smart software engineer, but like, you never made, you never make mistakes. Like, were you going to try and make mistakes before? So I think people don't yet have an understanding, like with, with driving people know what good driving is. It's like, don't crash, stay within kind of like a certain speed range. It's like, follow the directions. It's like, I don't really have to explain all of those things. I hope. But with. AI and like models and like search, people are like, okay, what do you actually know? What are like your assumptions about how search, how you're going to do search? And like, can I trust it? You know, can I influence it? So I think that's kind of the, the middle ground, like before you go ahead and like do all the search, it's like, can I see how you're doing it? And then maybe help show your work kind of like, yeah, steer you. Yeah. Yeah.Will [00:42:32]: No, I mean, yeah. Sure. Saying, even if you've crafted a great system prompt, you want to be part of the process itself. Uh, because the system prompt doesn't, it doesn't capture everything. Right. So yeah. A system prompt is like, you get to choose the person you work with. It's like, oh, like I want, I want a software engineer who thinks this way about code. But then even once you've chosen that person, you can't just give them a high level command and they go do it perfectly. You have to be part of that process. 
So yeah, I agree. Swyx [00:42:58]: Just a side note, my favorite system prompt programming anecdote now is the Apple Intelligence system prompt, that someone, someone prompt-injected it and seen it. And like the Apple Intelligence prompt has the words, like, please don't, don't hallucinate. And it's like, of course we don't want you to hallucinate. Right. Like, so it's exactly that, that what you're talking about, like we should train this behavior into the model, but somehow we still feel the need to inject it into the prompt. And I still don't even think that we are very scientific about it. Like it, I think it's almost like cargo culting. Like we have this like magical, like turn around three times, throw salt over your shoulder before you do something. And like, it worked the last time. So let's just do it the same time now. And like, we do, there's no science to this. Will [00:43:35]: I do think a lot of these problems might be ironed out in future versions. Right. So, and like, they might, they might hide the details from you. So it's like, they actually, all of them have a system prompt. That's like, you are a helpful assistant. You don't actually have to include it, even though it might actually be the way they've implemented it in the backend. It should be done in RLHF. Swyx [00:43:52]: Okay. Uh, one question I was just kind of curious about this episode is I'm going to try to frame this in terms of this, the general AI search wars, you know, you're, you're one player in that, um, there's Perplexity, ChatGPT Search, and Google, but there's also like the B2B side, uh, we had Drew Houston from Dropbox on, and he's competing with Glean, who've, uh, we've also had Deedy from, from Glean on, is there an appetite for Exa for my company's documents? Will [00:44:19]: There is appetite, but I think we have to be disciplined, focused, disciplined. I mean, we're already taking on like perfect web search, which is a lot. Um, but I mean, ultimately we want to build a perfect search engine, which definitely for a lot of queries involves your, your personal information, your company's information. And so, yeah, I mean, the grandest vision of Exa is perfect search really over everything, every domain, you know, we're going to have an Exa satellite, uh, because, because satellites can gather information that, uh, is not available publicly. Uh, gotcha. Yeah. Alessio [00:44:51]: Can we talk about AGI? We never, we never talk about AGI, but you had, uh, this whole tweet about, uh, O1 being the biggest kind of like AI step function towards it. Why does it feel so important to you? I know there's kind of like always criticism, people saying, hey, it's not the smartest, Sonnet is better, it's like, blah, blah, blah. What do you see, so to speak? This is what Ilya sees or Sam sees, what they will see. Will [00:45:13]: I've just, I've just, you know, been connecting the dots. I mean, this was the key thing that a bunch of labs were working on, which is like, can you create a reward signal? Can you teach yourself based on a reward signal? Whether you're, if you're trying to learn coding or math, if you could have one model say, uh, be a grading system that says like you have successfully solved this programming assessment, and then one model, like, be the generative system. That's like, here are a bunch of programming assessments. You could train on that. It's basically whenever you could create a reward signal for some task, you could just generate a bunch of tasks for yourself. 
See that like, oh, on two of these thousand, you did well. And then you just train on that data. It's basically like, I mean, creating your own data for yourself, and like, you know, all the labs were working on that. OpenAI built the most impressive product doing that. And it's just very, it's very easy now to see how that could like scale to just solving, like, like solving programming or solving mathematics, which sounds crazy, but everything about our world right now is crazy. Alessio [00:46:07]: Um, and so I think if you remove that whole, like, oh, that's impossible, and you just think really clearly about like, what's now possible with like what, what they've done with O1, it's easy to see how that scales. How do you think about older GPT models then? Should people still work on them? You know, if like, obviously they just had the new Haiku, like, is it even worth spending time, like making these models better versus just, you know, Sam talked about O2 the other day. So obviously they're, they're spending a lot of time on it, but then you have maybe the GPU poor, which are still working on making Llama good. Uh, and then you have the follower labs that do not have an O1-like model out yet. Yeah. Will [00:46:47]: This kind of gets into like, uh, what will the ecosystem of, of models be like in the future? And is there room, or is, is everything just gonna be O1-like models? I think, well, I mean, there's definitely a question of like inference speed and if certain things like O1 take a long time, because that's the thing. Well, I mean, O1 is, is two things. It's like, one, it's, it's, uh, it's bootstrapping itself. It's teaching itself. And so the base model is smarter. But then it also has this like inference time compute where it could like spend like many minutes or many hours thinking. And so even the base model, which is also fast, it doesn't have to take minutes, is, is better, smarter. I believe all models will be trained with this paradigm. Like you'll want to train on the best data, but there will be many different size models from different, very many different like companies, I believe. Yeah. Because like, I don't, yeah, I mean, it's hard, hard to predict, but I don't think OpenAI is going to dominate like every possible LLM for every possible use case. I think for a lot of things, like you just want the fastest model, and that might not involve O1 methods at all. Swyx [00:47:42]: I would say if you were to take the Exa-being-O1-for-search thing literally, you really need to prioritize search trajectories, like almost maybe paying a bunch of grad students to go research things. And then you kind of track what they search and what the sequence of searching is, because it seems like that is the gold mine here, like the chain of thought or the thinking trajectory. Yeah. Will [00:48:05]: When it comes to search, I've always been skeptical. I've always been skeptical of human labeled data. Okay. Yeah, please. We tried something at our company at Exa recently where me and a bunch of engineers on the team like labeled a bunch of queries and it was really hard. Like, you know, you have all these niche queries and you're looking at a bunch of results and you're trying to identify which is matched to the query. It's talking about, you know, the intricacies of like some biological experiment or something. I have no idea. Like, I don't know what matches and what, what labelers like me tend to do is just match by keyword. I'm like, oh, I don't know. 
Oh, like this document matches a bunch of keywords, so it must be good. But then you're actually completely missing the meaning of the document. Whereas an LLM like GPT-4 is really good at labeling. And so I actually think like you just, we get by, which we are right now doing using like LLM
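A minimal sketch of the LLM-as-labeler idea above: instead of humans matching documents to queries by keyword, a strong model is asked whether each result actually answers the query, and its yes/no answers become relevance labels. call_chat_model() is a stub standing in for any chat-completion API; no specific provider is implied by the episode.

```python
# Toy LLM-based relevance labeling; swap the stub for a real chat-model call.
def call_chat_model(prompt: str) -> str:
    """Stub standing in for a real chat-completion call."""
    return "YES" if "vector database" in prompt.lower() else "NO"

def judge(query: str, document: str) -> bool:
    """Ask the (stubbed) strong model whether the document answers the query."""
    prompt = (f"Query: {query}\nDocument: {document}\n"
              "Does the document genuinely answer the query? Reply YES or NO.")
    return call_chat_model(prompt).strip().upper().startswith("YES")

def label_results(query: str, documents: list[str]) -> list[int]:
    """Produce 0/1 relevance labels usable for evaluation or training."""
    return [int(judge(query, d)) for d in documents]

print(label_results("how do vector databases shard an index?",
                    ["A post on sharding a vector database index.",
                     "A recipe for sourdough bread."]))
```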

The Fast Lane with Ed Lane
Ben Cates, NewsAdvance.com On Gretna, LCA, Altavista In State QFs

The Fast Lane with Ed Lane

Play Episode Listen Later Nov 28, 2024 8:30


Ben Cates, NewsAdvance.com On Gretna, LCA, Altavista In State QFs by Ed Lane

The WorldView in 5 Minutes
20 Dead Over Venezuelan Protests, 60% of Americans Support Death Penalty, Biden Administration Announces Plea Deal with 9/11 Conspirators

The WorldView in 5 Minutes

Play Episode Listen Later Aug 2, 2024


It's Friday, August 2nd, A.D. 2024. This is The World View in 5 Minutes written by Kevin Swanson and heard at www.TheWorldView.com.  Filling in for Adam McManus I'm Ean Leppin. Liberty Counsel Represents Kim Davis Liberty Counsel is taking hits for representing Kentucky County Clerk Kim Davis' religious liberty case at the 6th Circuit Court Appeals. At issue is the Liberty Counsel's challenge of the 2015 Obergefell decision and a potential reversal, in favor of religious liberty. Kim and her husband, as well as Liberty Counsel have been subjected to multiple, serious death threats. Liberty Counsel President Mat Staver released a statement Monday, noting that “Anyone who stands up to the hateful agenda of the LGBTQ Mafia is demonized. .   .. the LGBTQ left will not tolerate religious freedom and wants to destroy anyone who disagrees.” Biden Administration Announces Plea Deal with 9/11 Conspirators The Biden administration Department of Justice has announced a plea deal with alleged conspirators of the 9/11 attacks, which occurred 23 years ago. Khalid Sheikh Mohammed, Walid Bin ‘Attash, and Mustafa al Hawsawi will plead guilty to conspiracy and murder charges, but will not face the death penalty for the murders of 2,977 people on September 11, 2001. Senator JD Vance commenting on the deal told an audience yesterday, “"We need a president who kills terrorists, not negotiates with them.” And Speaker of the House, Mike Johnson, called the decision “unthinkable” and a “slap in the face” for families of those murdered by the terrorists. God said, “Whoever sheds man's blood, by man his blood shall be shed; for in the image of God He made man.” (Gen. 9:6) 60% of Americans Support Death Penalty 60% of Americans support the death penalty. There were 23 executions in the US last year down from 98 in 1999. The US Homicide rate has also increased since 2014, from 4.7 per 100,000, to 6.0 per 100,000 persons in 2023, which accounts for over 20,000 murders. Keep in mind, “The ruler is God's minister to you for good. But if you do evil, be afraid; for he does not bear the sword in vain; for he is God's minister, an avenger to execute wrath on him who practices evil.” Romans 13:4 20 Dead Over Venezuelan Protests Protests following the Venezuelan sham election over the weekend in which the communist dictator claimed victory - have resulted in 20 deaths, and 1,072 persons arrested by the regime, according to Effecto Cocuyo — an independent news source. The nation's prosecutor, Tarek William Saab has warned protestors, that they are facing up to 20-30 years in prison. A respected American polling organization, Edison Research, has issued its exit poll results for the Venezuelan election. Opposition candidate Edmundo González Urrutia of the Unitary Platform easily won by a margin of 65 to 31%. Younger (18-29 year old) voters were more likely to vote against the communist dictator, by a margin of 74% to 21%. Edison Research has been the sole provider of election data to the National Election Pool, consisting of ABC News, CBS News, CNN, and NBC News.  Also, AltaVista obtained results from about 1,000 polling stations, photographed them, analyzed them and then sent the results around the world. They also showed a landslide: 66 percent for González, 31 percent for Maduro. Debt in Africa Has Increased Substantially Africa's debt burden has increased substantially just since 2010. Reuters reports that Zambia, Ethiopia, and Ghana are now in default. 
And at least 20 African nations have taken on a heavy debt burden, as defined by the IMF, a condition that did not exist for these countries just 10 years ago.  Although none of these nations are in as severe a condition as the United States, with a debt-to-GDP ratio now at 122%, up from 64% in 2008.  Japan, Sudan, Lebanon, and Greece have a higher debt-to-GDP ratio than the United States. “The debtor is servant to the lender.”  Proverbs 22:7. College Enrollment Dropping Undergraduate college enrollment has dropped another 852,000 students since 2019 - a 4.6% drop. Christian colleges are taking the hit.  A recent survey of 50 Christian colleges found 36 out of 50 Christian colleges had a net decrease in tuition income over the last 5 years, as reported by wng.org.  The college bubble has pretty much burst. . . In Minnesota, only 57% of high school graduates signed up for college on graduation in 2022. That's down from 82% in 2011. Chick-fil-A Worker Fends off Armed Robber An armed robber broke into an Atlanta Chick-fil-A last month, leveled a gun at employee Kevin Blair. . . and told him he was going to die if he didn't open the restaurant safe. By God's mercies, Blair fought off the armed robber for several harrowing minutes — in a desperate fight for his life, finally pushing him out the door of the restaurant. Blair talked about the struggle in an interview with WXIA TV — BLAIR: “I broke his glasses. I put my thumb into his eye.  He hit me several times.” WXIA TV ANNOUNCER: “Fighting off the intruder, the whole time thinking:” BLAIR: “I want to see my kids.  That's really it.  Get through this.  Go see my kids.”  Police have arrested suspect Tommie Lee Williams in connection with the assault. And that's The World View in 5 Minutes on this Friday, August 2nd, in the year of our Lord 2024. Subscribe by iTunes or email to our unique Christian newscast at www.TheWorldView.com. Or get the Generations app through Google Play or The App Store. Filling in for Adam McManus I'm Ean Leppin. Seize the day for Jesus Christ. 

Appointed: A Canadian Senator Bringing Margins to the Centre
A Conversation with Ottawa City Councillors Theresa Kavanagh and Marty Carr re: Ottawa's Support for a Guaranteed Livable Basic Income & Its Importance as a Means of Addressing Income Insecurity and Health

Appointed: A Canadian Senator Bringing Margins to the Centre

Play Episode Listen Later Jul 31, 2024 27:41


On this episode of Appointed, Senator Kim Pate speaks with Ottawa City Councillors Theresa Kavanagh and Marty Carr. This fabulous duo successfully presented a motion on July 10, 2024, supporting a Guaranteed Livable Basic Income. They were inspired by the Ottawa Board of Health's June 17, 2024 resolution supporting a Basic Income Guarantee for all people over the age of 17 as a means of addressing poverty, the number one social determinant of ill health.

Kim and the Councillors discuss the importance of a Guaranteed Livable Basic Income and the potential it has to support safety, autonomy, and the social determinants of health, and to address other inequities faced by Ottawa citizens and Canadians more broadly.

Councillor Carr represents the area of Alta Vista, and Councillor Kavanagh is the councillor for the Bay Ward region of Ottawa.

__________________________________

Senator Pate's Guaranteed Livable Basic Income Fact Sheets can be read here
City Council Motion to Support a Guaranteed Basic Income for Canadians available here & here
Ottawa City Council Backs Basic Income can be watched here
Bill S-233, An Act to develop a national framework for a guaranteed livable basic income, can be read here
An Op-Ed by Councillor Marty Carr can be found here

Here's What's Happening
Let Her Alta Vista Her Options

Here's What's Happening

Play Episode Listen Later Jul 23, 2024 7:36


Here's what's happening today:
CrowdStrike Update - via ABC News
Secret Service Director Testifies - via AP News
Harris for President - via AP News, NY Times
Watch this episode as a video at www.kimmoffat.com/thenews
Register to vote, or check your registration at wearevoters.turbovote.org
Take the pledge to be a voter at raisingvoters.org/beavoter
december. - on Amazon
Subscribe to the Substack: kimmoffat.substack.com
A full transcript (with links) is available at kimmoffat.com/hwh-transcripts
As always, you can find me on Instagram/Twitter @kimmoffat and TikTok @kimmoffatishere

Securing Our Future
SOF 029: Innovating for National Security with Don Dodge

Securing Our Future

Play Episode Listen Later Jul 3, 2024 27:16


In this episode, host Jeremy Hitchcock sits down with Don Dodge, a tech and investment veteran with a storied career spanning groundbreaking startups and tech giants like Google and Microsoft. Join us as we explore Don's journey, from his early days at Forte Software, Alta Vista, and Napster to his pivotal roles at Microsoft and Google. We dive into his experiences with venture capital, the evolving landscape of startups, and his unique insights into dual-use technologies bridging the commercial and defense sectors. Don also shares his excitement about joining New North Ventures, emphasizing the opportunities in national security and commercial crossovers. This episode is packed with valuable lessons for entrepreneurs and investors alike, highlighting the importance of team dynamics, market understanding, and the changing game of building successful tech companies. 

The Bobber
Geneva Lake Shore Path: Hidden Gems & History

The Bobber

Play Episode Listen Later Jun 14, 2024 5:37


In this episode, Hailey uncovers one of Lake Geneva's most beautiful attractions–the Geneva Lake Shore Path–that holds scattered hidden gems and untold history from centuries ago. On top of its natural beauty and design, showing off Geneva Lake, property owners go a step further adding unique features along the way. The Geneva Lake Shore Path most definitely shows off Lake Geneva's stunning beauty today, but it also holds much of the area's history. As Hailey leads the journey along the Path, she reveals more about Lake Geneva's roots, discovering the history first-hand.
Read the blog here: https://discoverwisconsin.com/geneva-lake-shore-path-hidden-gems-history/
Geneva Lake Shore Path: https://www.visitlakegeneva.com/things-to-do/shore-path/
The Miracle Path: https://www.facebook.com/themiraclepath/
The Bobber: https://discoverwisconsin.com/blog/
The Cabin Podcast: https://the-cabin.simplecast.com. Follow on social @thecabinpod
Shop Discover Wisconsin: shop.discoverwisconsin.com. Follow on social @shopdiscoverwisconsin
Discover Wisconsin: https://discoverwisconsin.com/. Follow on social @discoverwisconsin
Discover Mediaworks: https://discovermediaworks.com/. Follow on social @discovermediaworks
Visit Lake Geneva: https://www.visitlakegeneva.com/. Follow on social @visitlakegeneva

That Was The Week
Dear Sam

That Was The Week

Play Episode Listen Later May 24, 2024 32:40


Hat Tip to this week's creators: @edzitron, @bysarahkrouse, @dseetharaman, @JBFlint, @packyM, @KamalVC, @VaradanMonisha, @Claudiazeisberg, @IDTechReviews, @cjgustafson222, @NathanLands, @psawers, @lightspeedvp, @jaygoldberg, @avc

Contents
* Editorial: Dear Sam, A Letter from a Founder to a Founder
* Essays of the Week
* Sam Altman Is Full Of S**t
* Behind the Scenes of Scarlett Johansson's Battle With OpenAI
* Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama
* Better Tools, Bigger Companies
* The Pervasive, Head-Scratching, Risk-Exploding Problem With Venture Capital
* Video of the Week
* OpenAI vs Gemini 1.5
* AI of the Week
* Does AI have a gross margin problem?
* OpenAI and Wall Street Journal owner News Corp sign content deal
* Scale AI Raises $1B In Accel-Led Round; Hits $13.8B Valuation
* The Awful State of AI in California
* News Of the Week
* It's Time to Believe the AI Hype
* The 49-Year Unicorn Backlog
* Humane, the creator of the $700 Ai Pin, is reportedly seeking a buyer
* NVIDIA CRUSHES EARNINGS, AGAIN
* Startup of the Week
* SUNO'S HIT FACTORY
* Warpcast of the Week
* Be Generous

Editorial: Dear Sam, A Letter from a Founder to a Founder.

This week let's break the pattern and write this as a letter to Sam Altman.

Dear Sam,

It's been a swings and roundabouts week for you at OpenAI.

I had a week like that in the spring of 1998. I was at Internet World launching RealNames to the world. RealNames invented paid clicks on keywords. Our first partner was AltaVista, and Google was our second—calling the feature "I'm Feeling Lucky."

It was the simplest technology ever. We had a keyword, bought by a customer. An example might be Disney buying "Bambi." They would buy it in every country and language they wanted and point it to a specific URL in each place. Search engines would look at the keywords you typed in (later browsers too) and if RealNames had it as a paid keyword, they would send the user to the site, with no search results. Just a direct navigation. RealNames got paid for the customer sent.

At the launch, we used the example of the keyword “Bambi” to show how superior our keywords were compared to domain names. In those days, Bambi.com pointed to a porn site. Our launch demo showed that typing "Bambi" went to Disney, but typing "Bambi.com" did not. All was well except we altered our network settings the eve of the launch, and when we demoed the use of "Bambi" at the launch, it (you can guess) went to the porn site.

Journalists wrote about RealNames as a scam and bad actors.

Luckily, we had great partners, and within 12 hours the network issue was fixed, and all was well. But for 24 hours, I felt like the world was collapsing around me. On the one hand, we launched our company, mostly to great acclaim; on the other, we were being destroyed in the tech media.

Sam, I know how this week must have felt. Your decision to pull the ‘Sky' voice was right. And despite the horrors of the first 24 hours, this will pass.

That said, you mismanaged this entire thing. I'm sure you acted in good faith in wanting to embrace the “Her” meme. It is a good idea. And ‘Sky' was a good effort.

It seems clear you had spoken to Scarlett Johansson and failed to reach an agreement. I'm prepared to believe you could not react fast enough to change the voice prior to the demo.

But once it went awry, you needed to do more than wait for a legal challenge before pulling it, and you needed to say something before the actress. 
Not doing so means that many people, probably most, think you did the entire thing on purpose.Clearly, you did not preconceive this. If you did, then the fact that you were happy to pull the voice, and your knowledge that the actress was not prepared to have her voice used, would have stopped you before it got as far as it did. You would be very reckless to have thought you could get away with using a voice like hers without her permission.So, you need to either go on the record and get this behind you or ignore it and hope it goes away. I think now we have ‘ScarJo' as a word, the latter might prove difficult.Best Regards,Keith (A fellow Founder)Beyond ScarJo there are some great essays this week. Pack McCormick writes about why AI will lead to more jobs and bigger companies. In framing his case he says”Technologies are tools. I don't mean that in the normal way that people mean it to say that technology is neither good nor bad.Tools are good.Humans can build better things with tools than they can without them.But tools aren't the point. They're tools.Tools lead to new possibilities and those lead to new endeavors. Read his essay below.And a team made up of @KamalVC, @VaradanMonisha, @Claudiazeisberg have penned an essay called ‘The Pervasive, Head-Scratching, Risk-Exploding Problem With Venture Capital'. The main thesis is about investing in private companies versus public companies. They have a great graphic showing that the range of outcomes in Venture Capital is very wide compared to other asset classes:Venture Capital's top percentiles out-perform other asset classes, but most do not. The safest asset class is global equity (public company stock).Building on this they show that large Venture investors that invest across 500 or more companies can compete with less risky assets by diversification.This depicts a simulation of a manager doing 15 deals, compared to 500 and shows more deals equals less risk.I recommend reading the full piece, linked in the contents above and the headline below. I think they are right, but there is a better way of derisking. The advice they give below is better than traditional venture capital, but that is a low bar:To de-risk venture capital, CIOs simply need to acknowledge that VC math is different from public markets math. The importance of low-probability, excess-return-generating investments means that proper diversification requires a portfolio of at least 500 startups.It will take work to assemble such a portfolio. It is hard to do by investing directly. Current funds and funds-of-funds are rarely designed with diversification in mind. Instead, they concentrate funding in a small subset of ultra-popular entrepreneurs, sectors, and geographies, which risks driving down returns on capital, leaving higher-return strategies underfunded.Investors who allocate and diversify their funds wisely and accept the evidence will not only achieve better and less-volatile returns, but will also ultimately nudge GPs to finally design diversified funds.In my day job - also about de-risking venture - we use AI to reduce risk, removing companies that are highly unlikely to be successful. The remaining companies (about 7% of the full set of venture backed companies) out-perform the market in a narrower band of outcomes:Here is how the SignalRank Index compares to the S&P500 and the NASDAQ. 
We assume an investor puts $1 into the S&P, the NASDAQ and The SignalRank Index in each year from 2014-2019 and then show the returns from each (average and median in the case of SignalRank).The median outcome from venture investments is that the investor loses money. The average is a lot better. But almost no managers achieve the average. By using AI to reduce risk we get the average outcome in 2014 to be 4.31x the investment (the white numbers), compared to the S&P500 1.39 and the NASDAQ 1.89. SignalRanks Median outcome is 2.24.De-risking venture capital is important and the writers of the essay show that it is possible to de-risk by diversification. But we can do even better by both diversifying and using data intelligence to remove downside outliers.I will leave you with that thought. More next week This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thatwastheweek.com/subscribe
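To make the 15-versus-500-deal diversification point above concrete, here is a minimal Monte Carlo sketch in Python. The outcome distribution is invented purely for illustration (it is not from the essay or from SignalRank data); only the comparison of portfolio sizes reflects the argument being made.

```python
import random
import statistics

def deal_multiple(rng: random.Random) -> float:
    """Crude, illustrative startup outcome: most deals lose money, a few return large multiples."""
    r = rng.random()
    if r < 0.50:
        return 0.0                      # total loss
    if r < 0.80:
        return rng.uniform(0.2, 1.0)    # partial loss
    if r < 0.97:
        return rng.uniform(1.0, 5.0)    # modest win
    return rng.uniform(5.0, 100.0)      # rare outlier that drives fund returns

def portfolio_multiple(n_deals: int, rng: random.Random) -> float:
    """Equal-weighted portfolio multiple across n_deals."""
    return sum(deal_multiple(rng) for _ in range(n_deals)) / n_deals

def spread(n_deals: int, n_trials: int = 2000, seed: int = 0) -> tuple[float, float]:
    """Median and standard deviation of portfolio multiples across simulated portfolios."""
    rng = random.Random(seed)
    outcomes = [portfolio_multiple(n_deals, rng) for _ in range(n_trials)]
    return statistics.median(outcomes), statistics.pstdev(outcomes)

for n in (15, 500):
    med, sd = spread(n)
    print(f"{n:>3} deals: median multiple ~{med:.2f}, std dev ~{sd:.2f}")
# The 500-deal portfolio shows a much tighter band of outcomes than the 15-deal one,
# which is the diversification effect the essay describes.
```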

Citizens of Pawnee
Ep. 118: S4E16 "Sweet Sixteen"

Citizens of Pawnee

Play Episode Listen Later May 7, 2024 54:03


On this episode, I covered "Sweet Sixteen" from season 4. Ron urges Leslie to take some time off of work to focus on her campaign; the gang throws a surprise birthday party for Jerry... but forgets to invite him; Ann and Tom have another stupid argument; and Chris and Andy bond over Champion. Also, Orthodox Easter, more cicadas, and Alta Vista references. FILLER: Rebel Moon 2, Civil War (both 2024) CONTACT: citizensofpawnee@gmail.com and Instagram @citizensofpawneepodcast and @parksrecmemes Dance of Joy Podcast @danceofjoypod Intro/general nonsense (00:47) Rebel Moon 2 (13:42) Civil War (16:55) "Sweet Sixteen" (28:02) New episodes every Tuesday, please share!

Sales and Marketing Built Freedom
Navigating the AI Hype Cycle: Insights from HubSpot's Former CRO Mark Roberge Part 1

Sales and Marketing Built Freedom

Play Episode Listen Later Apr 24, 2024 27:24


Join Ryan for part one with the legendary Mark Roberge, former CRO of HubSpot and current venture capitalist. They dive deep into the world of AI companies, discussing the immense opportunities and potential pitfalls. Mark shares his unique perspective on identifying promising AI start-ups and provides invaluable insights for entrepreneurs and investors alike.
Join 2,500+ readers getting weekly practical guidance to scale themselves and their companies using Artificial Intelligence and Revenue Cheat Codes. Explore becoming Superhuman here: https://superhumanrevenue.beehiiv.com/
KEY TAKEAWAYS
AI is a generational technology, potentially bigger than the internet, but currently in a massive hype cycle similar to the dot-com era.
Most current AI ideas are basic integrations into existing workflows, not reimagining the workflow, and are likely to be disrupted.
Joining an AI company now provides valuable experience, even if 90% fail, as the winners will scoop up experienced talent.
The AI technology creating operational efficiency in the short term may not be the same as the disruptors redefining industries over the next decade.
Understanding prompting, the foundation of AI language, is crucial for developing micro-workflows, agents, and eventually autonomous swarms.
The innovator's dilemma and appropriate beachhead selection are key concepts for AI startups to consider when building for the long-term vision.
RevOps, once seen as supporting humans, may flip to AI doing the job with humans tweaking and refining.
Technical issues like hallucinations and latency will likely be ironed out as the technology matures, revealing AI's true potential.
BEST MOMENTS
"I've never had more conviction for my students that come up to me. Like, 'Hey, what's up? Professor, like what, what should I do? Like, I want to be, I want to go into startups. Like, what should I do?' And I, I've always had like different opinions. I was like, do you have to go to AI? Like every tech company is going to, in like six years is going to be native AI. You have to get that experience now."
"I think all the tea leaves are similar to like 1997. Let's not forget that in that time, we dialled up to the internet using AOL, launched a Netscape browser and did searches through AltaVista, right? Where are those companies? Lycos, right? Is, is, is, yeah, is OpenAI Netscape or is it Google? Like we got to figure that out, right?"
"The AI technology that's going to create significant operational efficiency and growth this year and next for companies, especially these bigger companies that my CROs are from is not going to be the trillion dollar new company that disrupts the space."
"I've kind of come to the conclusion that the AI companies, vendors, technology that is going to create the most operational efficiency, even for our start-ups in the next year is not going to be the same AI technology company, et cetera, that becomes the disruptor over the next decade and redefines the sales and martech sector."
"RevOps has become huge, didn't even exist 15 years ago. And for the most part, RevOps is seen as an organization that supports the humans to do their job. And that flips with what you're saying. Is, at some point, it flips where RevOps does the job and humans there to tweak it."
Ryan Staley
Founder and CEO, Whale Boss
ryan@whalesellingsystem.com
www.ryanstaley.io
Saas, Saas growth, Scale, Business Growth, B2b Saas, Saas Sales, Enterprise Saas, Business growth strategy, founder, ceo: https://www.whalesellingsystem.com/closingsecrets

The Fast Lane with Ed Lane
Ben Cates, NewsAdvance.com On No Hitters At Altavista, Special High School Lax Talent

The Fast Lane with Ed Lane

Play Episode Listen Later Apr 23, 2024 15:52


Ben Cates, NewsAdvance.com On No Hitters At Altavista, Special High School Lax Talent by Ed Lane

Inside Ag From Kansas Farm Bureau
S3 Ep41: Behind the Counter of Alta Vista Meat Co with Amie Brunkow

Inside Ag From Kansas Farm Bureau

Play Episode Listen Later Apr 10, 2024 24:28


Amie Brunkow owns Alta Vista Meat Company in Alta Vista, Kansas. She joins the Inside Ag podcast to share her passion for business and meat sciences. Amie gives a glimpse into her life as a business owner, discloses the unique features she provides local ranchers and showcases the many benefits of a traditional meat locker. Find Alta Vista Meat Company at Alta Vista Meat Co

Here's What's Happening
Alta Vista It

Here's What's Happening

Play Episode Listen Later Feb 21, 2024 6:59


Here's what's happening today:
FBI Informant Had Russian Ties - via AP News
SCOTUS Leaves Powell Sanctions in Place - via CBS News
Nikki Haley Will Stay in the Race - via NBC News
Missing 11-Year-Old's Body Found - via ABC News
Nex Benedict - via The Independent
A full transcript (with links) is available at kimmoffat.com/hwh-transcripts
As always, you can find me on Instagram/Twitter @kimmoffat and TikTok @kimmoffatishere

In Search of Green Marbles
E120 - AI Dominated ‘23. Will ‘24 be the Year of Cybersecurity?

In Search of Green Marbles

Play Episode Listen Later Feb 9, 2024 48:25


This week, Jordi and G3 welcome Sultan Meghji back to the podcast. As long-time listeners know, when Sultan is the guest, green marbles start flying everywhere. This episode continues in that tradition, but it focuses narrowly on cybersecurity, as Jordi and Sultan believe that cybersecurity isn't getting the attention it deserves from the markets. In fact, while generative AI may have been the key factor driving the equity markets higher last year, 2024 is a new ball game. And cybersecurity – not LLMs – could represent the dominant narrative this year. But according to Jordi, the best way to participate in the "cyber surge" is through Bitcoin. Please check important disclosures at the end of the episode.
Timestamps:
What is Jordi's point of view on the national threat posed by TikTok? [9:12]
How confident is Sultan that the U.S. possesses superior abilities to defend against cyber attacks? [21:46]
Is the strong market performance of leading cybersecurity names a reflection of growing concern about a cyber attack? [25:14]
What is the role of decentralization, blockchain, and crypto in combatting cyber attacks? [34:05]
What advice does Sultan offer on how to prevent being hacked? [42:18]
Resources:
Frontier Foundry's website
UK report on state-sponsored cyber attackers
First of its kind AI heist
AI Voice Scams
What ever happened to Alta Vista?
Disclosures: This podcast and associated content (collectively, the "Post") are provided to you by Weiss Multi-Strategy Advisers LLC ("Weiss"). The views expressed in the Post are for informational purposes only and are subject to change without notice. Information in this Post has been developed internally and is based on market conditions as of the date of the recording from sources believed to be reliable. Nothing in this Post should be construed as investment, legal, tax, or other advice and should not be viewed as a recommendation to purchase or sell any security or adopt any investment strategy. Past performance is no guarantee of future results. You should consult your own advisers regarding business, legal, tax, or other matters concerning investments. Any health-related information shared on the podcast is not intended as medical advice or for use in self-diagnosis or treatment. Please consult a qualified healthcare professional before acting upon any health-related information on the podcast. Weiss has no control over information at any external site hyperlinked in this Post. Weiss makes no representation concerning and is not responsible for the quality, content, nature, or reliability of any hyperlinked site and has included hyperlinks only as a convenience. The inclusion of any external hyperlink does not imply any endorsement, investigation, verification, or ongoing monitoring by Weiss of any information in any hyperlinked site. In no event shall Weiss be responsible for your use of a hyperlinked site. This is not intended to be an offer or solicitation of any security. Please visit www.gweiss.com to...

The Fast Lane with Ed Lane
Ben Cates, NewsAdvance.com On Altavista Bond With High School + LCA To LU

The Fast Lane with Ed Lane

Play Episode Listen Later Feb 7, 2024 9:57


Ben Cates, NewsAdvance.com On Altavista Bond With High School + LCA To LU by Ed Lane

Le Feuilleton
"Là d'où je viens a disparu" de Guillaume Poix 1/5 : Altavista, Salvador

Le Feuilleton

Play Episode Listen Later Jan 15, 2024 28:50


duration: 00:28:50 - Le Feuilleton - "I've been living here for more than twenty years and nothing has changed; the fathers have passed down to their sons their weapons, their mistresses, their memory."

Théâtre
"Là d'où je viens a disparu" de Guillaume Poix 1/5 : Altavista, Salvador

Théâtre

Play Episode Listen Later Jan 15, 2024 28:50


duration: 00:28:50 - Le Feuilleton - "I've been living here for more than twenty years and nothing has changed; the fathers have passed down to their sons their weapons, their mistresses, their memory."

Music of America Podcast
Music Of America Podcast Season 1 Episode 131 - SRT with Mitch Towne

Music of America Podcast

Play Episode Listen Later Jan 1, 2024 54:25


Songs include Tal Shia, Alta Vista and Mr. C T

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
The "Normsky" architecture for AI coding agents — with Beyang Liu + Steve Yegge of SourceGraph

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 14, 2023 79:37


We are running an end of year survey for our listeners. Let us know any feedback you have for us, what episodes resonated with you the most, and guest requests for 2024!

RAG has emerged as one of the key pieces of the AI Engineer stack. Jerry from LlamaIndex called it a "hack", Bryan from Hex compared it to "a recommendation system from LLMs", and even LangChain started with it. RAG is crucial in any AI coding workflow. We talked about context quality for code in our Phind episode. Today's guests, Beyang Liu and Steve Yegge from Sourcegraph, have been focused on code indexing and retrieval for over 15 years. We locked them in our new studio to record a 1.5-hour masterclass on the history of code search, retrieval interfaces for code, and how they get a SOTA 30% completion acceptance rate in their Cody product by being better at the "bin packing problem" of LLM context generation.

Google Grok → Sourcegraph → Cody
While at Google in 2008, Steve built Grok, which lives on today as Google Kythe. It allowed engineers to do code parsing and searching across different codebases and programming languages. (You might remember this blog post from Steve's time at Google.) Beyang was an intern at Google at the same time, and Grok became the inspiration to start Sourcegraph in 2013. The two didn't know each other personally until Beyang brought Steve out of retirement 9 years later to join him as VP Engineering. Fast forward 10 years, and Sourcegraph has become the best code search tool out there and raised $223M along the way. Nine months ago, they open sourced Sourcegraph Cody, their AI coding assistant. All their code indexing and search infrastructure allows them to get SOTA results by having better RAG than competitors:
* Code completions as you type that achieve an industry-best Completion Acceptance Rate (CAR) as high as 30% using a context-enhanced open-source LLM (StarCoder)
* Context-aware chat that provides the option of using GPT-4 Turbo, Claude 2, GPT-3.5 Turbo, Mistral 7x8B, or Claude Instant, with more model integrations planned
* Doc and unit test generation, along with AI quick fixes for common coding errors
* AI-enhanced natural language code search, powered by a hybrid dense/sparse vector search engine

There are a few pieces of infrastructure that helped Cody achieve these results:

Dense-sparse vector retrieval system
For many people, RAG = vector similarity search, but there's a lot more that you can do to get the best possible results. From their release: "Sparse vector search" is a fancy name for keyword search that potentially incorporates LLMs for things like ranking and term expansion (e.g., "k8s" expands to "Kubernetes container orchestration", possibly weighted as in SPLADE):
* Dense vector retrieval makes use of embeddings, the internal representation that LLMs use to represent text. Dense vector retrieval provides recall over a broader set of results that may have no exact keyword matches but are still semantically similar.
* Sparse vector retrieval is very fast, human-understandable, and yields high recall of results that closely match the user query.
* We've found the approaches to be complementary.
There's a very good blog post by Pinecone on SPLADE for sparse vector search if you're interested in diving in. If you're building RAG applications in areas that have a lot of industry-specific nomenclature, acronyms, etc., this is a good approach to getting better results.

SCIP
In 2016, Microsoft announced the Language Server Protocol (LSP) and the Language Server Index Format (LSIF).
This protocol makes it easy for IDEs to get all the context they need from a codebase to get things like file search, references, "go to definition", etc. SourceGraph developed SCIP, "a better code indexing format than LSIF":
* Simpler and More Efficient Format: SCIP utilizes Protobuf instead of JSON, which is used by LSIF. Protobuf is more space-efficient, simpler, and more suitable for systems programming.
* Better Performance and Smaller Index Sizes: SCIP indexers, such as scip-clang, show enhanced performance and reduced index file sizes compared to LSIF indexers (10%-20% smaller)
* Easier to Develop and Debug: SCIP's design, centered around human-readable string IDs for symbols, makes it faster and more straightforward to develop new language indexers.
Having more efficient indexing is key to more performant RAG on code.

Show Notes
* Sourcegraph
* Cody
* Copilot vs Cody
* Steve's Stanford seminar on Grok
* Steve's blog
* Grab
* Fireworks
* Peter Norvig
* Noam Chomsky
* Code search
* Kelly Norton
* Zoekt
* v0.dev
See also our past episodes on Cursor, Phind, Codeium and Codium as well as the GitHub Copilot keynote at AI Engineer Summit.

Timestamps
* [00:00:00] Intros & Backgrounds
* [00:05:20] How Steve's work on Grok inspired SourceGraph for Beyang
* [00:08:10] What's Cody?
* [00:11:22] Comparison of coding assistants and the capabilities of Cody
* [00:16:00] The importance of context (RAG) in AI coding tools
* [00:21:33] The debate between Chomsky and Norvig approaches in AI
* [00:30:06] Normsky: the Norvig + Chomsky models collision
* [00:36:00] The death of the DSL?
* [00:40:00] LSP, Skip, Kythe, BFG, and all that fun stuff
* [00:53:00] The SourceGraph internal stack
* [00:58:46] Building on open source models
* [01:02:00] SourceGraph for engineering managers?
* [01:12:00] Lightning Round

Transcript
Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:16]Swyx: Hey, and today we're christening our new podcast studio in the Newton, and we have Beyang and Steve from Sourcegraph. Welcome. [00:00:25]Beyang: Hey, thanks for having us. [00:00:26]Swyx: So this has been a long time coming. I'm very excited to have you. We also are just celebrating the one year anniversary of ChatGPT yesterday, but also we'll be talking about the GA of Cody later on today. We'll just do a quick intros of both of you. Obviously, people can research you and check the show notes for more. Beyang, you worked in computer vision at Stanford and then you worked at Palantir. I did, yeah. You also interned at Google. [00:00:48]Beyang: I did back in the day where I get to use Steve's system, DevTool. [00:00:53]Swyx: Right. What was it called? [00:00:55]Beyang: It was called Grok. Well, the end user thing was Google Code Search. That's what everyone called it, or just like CS. But the brains of it were really the kind of like Trigram index and then Grok, which provided the reference graph. [00:01:07]Steve: Today it's called Kythe, the open source Google one. It's sort of like Grok v3. [00:01:11]Swyx: On your podcast, which you've had me on, you've interviewed a bunch of other code search developers, including the current developer of Kythe, right? [00:01:19]Beyang: No, we didn't have any Kythe people on, although we would love to if they're up for it. We had Kelly Norton, who built a similar system at Etsy, it's an open source project called Hound.
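As an aside on the hybrid dense/sparse retrieval described in the notes above, the fusion step can be illustrated with a small sketch. This is not Sourcegraph's implementation; it is a minimal, self-contained Python toy in which the "sparse" score is plain keyword overlap (standing in for BM25/SPLADE-style retrieval) and the "dense" score uses a hash-based stand-in for a real embedding model.

```python
import math
from collections import Counter

def sparse_score(query: str, doc: str) -> float:
    """Toy keyword-overlap score (stand-in for BM25/SPLADE-style sparse retrieval)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(min(q[t], d[t]) for t in q)
    return overlap / (1 + len(doc.split()))

def toy_embedding(text: str, dim: int = 64) -> list[float]:
    """Stand-in for a real embedding model: hash tokens into a fixed-size, normalized vector."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def dense_score(query: str, doc: str) -> float:
    """Cosine similarity between toy embeddings."""
    q, d = toy_embedding(query), toy_embedding(doc)
    return sum(a * b for a, b in zip(q, d))

def hybrid_rank(query: str, docs: list[str], alpha: float = 0.5) -> list[tuple[float, str]]:
    """Blend dense and sparse scores; alpha weights the dense side."""
    scored = [(alpha * dense_score(query, d) + (1 - alpha) * sparse_score(query, d), d) for d in docs]
    return sorted(scored, reverse=True)

docs = [
    "def parse_config(path): ...",
    "class KubernetesClient: ...  # k8s container orchestration helpers",
    "def render_template(ctx): ...",
]
for score, doc in hybrid_rank("kubernetes client", docs):
    print(f"{score:.3f}  {doc}")
```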
We also had Han-Wen Nienhuys, who created Zoekt, which is, I think, heavily inspired by the Trigram index that powered Google's original code search and that we also now use at Sourcegraph. Yeah. [00:01:45]Swyx: So you teamed up with Quinn over 10 years ago to start Sourcegraph and you were indexing all code on the internet. And now you're in a perfect spot to create a code intelligence startup. Yeah, yeah. [00:01:56]Beyang: I guess the backstory was, I used Google Code Search while I was an intern. And then after I left that internship and worked elsewhere, it was the single dev tool that I missed the most. I felt like my job was just a lot more tedious and much more of a hassle without it. And so when Quinn and I started working together at Palantir, he had also used various code search engines in open source over the years. And it was just a pain point that we both felt, both working on code at Palantir and also working within Palantir's clients, which were a lot of Fortune 500 companies, large financial institutions, folks like that. And if anything, the pains they felt in dealing with large complex code bases made our pain points feel small by comparison. So that was really the impetus for starting Sourcegraph. [00:02:42]Swyx: Yeah, excellent. Steve, you famously worked at Amazon. And you've told many, many stories. I want every single listener of Latent Space to check out Steve's YouTube because he effectively had a podcast that you didn't tell anyone about or something. You just hit record and just went on a few rants. I'm always here for your Stevie rants. And then you moved to Google, where you also had some interesting thoughts on just the overall Google culture versus Amazon. You joined Grab as head of eng for a couple of years. I'm from Singapore, so I have actually personally used a lot of Grab's features. And it was very interesting to see you talk so highly of Grab's engineering and sort of overall prospects. [00:03:21]Steve: Because as a customer, it sucked? [00:03:22]Swyx: Yeah, no, it's just like, being from a smaller country, you never see anyone from our home country being on a global stage or talked about as a startup that people admire or look up to, like on the league that you, with all your legendary experience, would consider equivalent. Yeah. [00:03:41]Steve: Yeah, no, absolutely. They actually, they didn't even know that they were as good as they were, in a sense. They started hiring a bunch of people from Silicon Valley to come in and sort of like fix it. And we came in and we were like, Oh, we could have been a little better operational excellence and stuff. But by and large, they're really sharp. The only thing about Grab is that they get criticized a lot for being too westernized. Oh, by who? By Singaporeans who don't want to work there. [00:04:06]Swyx: Okay. I guess I'm biased because I'm here, but I don't see that as a problem. If anything, they've had their success because they were more westernized than the Sanders Singaporean tech company. [00:04:15]Steve: I mean, they had their success because they are laser focused. They copy to Amazon. I mean, they're executing really, really, really well for a giant. I was on a slack with 2,500 engineers. It was like this giant waterfall that you could dip your toe into. You'd never catch up. Actually, the AI summarizers would have been really helpful there. But yeah, no, I think Grab is successful because they're just out there with their sleeves rolled up, just making it happen. 
[00:04:43]Swyx: And for those who don't know, it's not just like Uber of Southeast Asia, it's also a super app. PayPal Plus. [00:04:48]Steve: Yeah. [00:04:49]Swyx: In the way that super apps don't exist in the West. It's one of the enduring mysteries of B2C that super apps work in the East and don't work in the West. We just don't understand it. [00:04:57]Beyang: Yeah. [00:04:58]Steve: It's just kind of curious. They didn't work in India either. And it was primarily because of bandwidth reasons and smaller phones. [00:05:03]Swyx: That should change now. It should. [00:05:05]Steve: And maybe we'll see a super app here. [00:05:08]Swyx: You retired-ish? I did. You retired-ish on your own video game? Mm-hmm. Any fun stories about that? And that's also where you discovered some need for code search, right? Mm-hmm. [00:05:16]Steve: Sure. A need for a lot of stuff. Better programming languages, better databases. Better everything. I mean, I started in like 95, right? Where there was kind of nothing. Yeah. Yeah. [00:05:24]Beyang: I just want to say, I remember when you first went to Grab because you wrote that blog post talking about why you were excited about it, about like the expanding Asian market. And our reaction was like, oh, man, how did we miss stealing it with you? [00:05:36]Swyx: Hiring you. [00:05:37]Beyang: Yeah. [00:05:38]Steve: I was like, miss that. [00:05:39]Swyx: Tell that story. So how did this happen? Right? So you were inspired by Grok. [00:05:44]Beyang: I guess the backstory from my point of view is I had used code search and Grok while at Google, but I didn't actually know that it was connected to you, Steve. I knew you from your blog posts, which were always excellent, kind of like inside, very thoughtful takes from an engineer's perspective on some of the challenges facing tech companies and tech culture and that sort of thing. But my first introduction to you within the context of code intelligence, code understanding was I watched a talk that you gave, I think at Stanford, about Grok when you're first building it. And that was very eye opening. I was like, oh, like that guy, like the guy who, you know, writes the extremely thoughtful ranty like blog posts also built that system. And so that's how I knew, you know, you were involved in that. And then, you know, we always wanted to hire you, but never knew quite how to approach you or, you know, get that conversation started. [00:06:34]Steve: Well, we got introduced by Max, right? Yeah. It was temporal. Yeah. Yeah. I mean, it was a no brainer. They called me up and I had noticed when Sourcegraph had come out. Of course, when they first came out, I had this dagger of jealousy stabbed through me piercingly, which I remember because I am not a jealous person by any means, ever. But boy, I was like, but I was kind of busy, right? And just one thing led to another. I got sucked back into the ads vortex and whatever. So thank God Sourcegraph actually kind of rescued me. [00:07:05]Swyx: Here's a chance to build DevTools. Yeah. [00:07:08]Steve: That's the best. DevTools are the best. [00:07:10]Swyx: Cool. Well, so that's the overall intro. I guess we can get into Cody. Is there anything else that like people should know about you before we get started? [00:07:18]Steve: I mean, everybody knows I'm a musician. I can juggle five balls. [00:07:24]Swyx: Five is good. Five is good. I've only ever managed three. [00:07:27]Steve: Five is hard. Yeah. And six, a little bit. [00:07:30]Swyx: Wow. [00:07:31]Beyang: That's impressive. 
[00:07:32]Alessio: So yeah, to jump into Sourcegraph, this has been a company 10 years in the making. And as Sean said, now you're at the right place. Phase two. Now, exactly. You spent 10 years collecting all this code, indexing, making it easy to surface it. Yeah. [00:07:47]Swyx: And also learning how to work with enterprises and having them trust you with their code bases. Yeah. [00:07:52]Alessio: Because initially you were only doing on-prem, right? Like a lot of like VPC deployments. [00:07:55]Beyang: So in the very early days, we're cloud only. But the first major customers we landed were all on-prem, self-hosted. And that was, I think, related to the nature of the problem that we're solving, which becomes just like a critical, unignorable pain point once you're above like 100 devs or so. [00:08:11]Alessio: Yeah. And now Cody is going to be GA by the time this releases. So congrats to your future self for launching this in two weeks. Can you give a quick overview of just what Cody is? I think everybody understands that it's a AI coding agent, but a lot of companies say they have a AI coding agent. So yeah, what does Cody do? How do people interface with it? [00:08:32]Beyang: Yeah. So how is it different from the like several dozen other AI coding agents that exist in the market now? When we thought about building a coding assistant that would do things like code generation and question answering about your code base, I think we came at it from the perspective of, you know, we've spent the past decade building the world's best code understanding engine for human developers, right? So like it's kind of your guide as a human dev if you want to go and dive into a large complex code base. And so our intuition was that a lot of the context that we're providing to human developers would also be useful context for AI developers to consume. And so in terms of the feature set, Cody is very similar to a lot of other assistants. It does inline autocompletion. It does code base aware chat. It does specific commands that automate, you know, tasks that you might rather not want to do like generating unit tests or adding detailed documentation. But we think the core differentiator is really the quality of the context, which is hard to kind of describe succinctly. It's a bit like saying, you know, what's the difference between Google and Alta Vista? There's not like a quick checkbox list of features that you can rattle off, but it really just comes down to all the attention and detail that we've paid to making that context work well and be high quality and fast for human devs. We're now kind of plugging into the AI coding assistant as well. Yeah. [00:09:53]Steve: I mean, just to add my own perspective on to what Beyang just described, RAG is kind of like a consultant that the LLM has available, right, that knows about your code. RAG provides basically a bridge to a lookup system for the LLM, right? Whereas fine tuning would be more like on the job training for somebody. If the LLM is a person, you know, and you send them to a new job and you do on the job training, that's what fine tuning is like, right? So tuned to our specific task. You're always going to need that expert, even if you get the on the job training, because the expert knows your particular code base, your task, right? That expert has to know your code. And there's a chicken and egg problem because, right, you know, we're like, well, I'm going to ask the LLM about my code, but first I have to explain it, right? 
It's this chicken and egg problem. That's where RAG comes in. And we have the best consultants, right? The best assistant who knows your code. And so when you sit down with Cody, right, what Beyang said earlier about going to Google and using code search and then starting to feel like without it, his job was super tedious. Once you start using these, do you guys use coding assistants? [00:10:53]Swyx: Yeah, right. [00:10:54]Steve: I mean, like we're getting to the point very quickly, right? Where you feel like almost like you're programming without the internet, right? Or something, you know, it's like you're programming back in the nineties without the coding assistant. Yeah. Hopefully that helps for people who have like no idea about coding systems, what they are. [00:11:09]Swyx: Yeah. [00:11:10]Alessio: I mean, going back to using them, we had a lot of them on the podcast already. We had Cursor, we have Codium and Codium, very similar names. [00:11:18]Swyx: Yeah. Find, and then of course there's Copilot. [00:11:22]Alessio: You had a Copilot versus Cody blog post, and I think it really shows the context improvement. So you had two examples that stuck with me. One was, what does this application do? And the Copilot answer was like, oh, it uses JavaScript and NPM and this. And it's like, but that's not what it does. You know, that's what it's built with. Versus Cody was like, oh, these are like the major functions. And like, these are the functionalities and things like that. And then the other one was, how do I start this up? And Copilot just said NPM start, even though there was like no start command in the package JSON, but you know, most collapse, right? Most projects use NPM start. So maybe this does too. How do you think about open source models? Because Copilot has their own private thing. And I think you guys use Starcoder, if I remember right. Yeah, that's correct. [00:12:09]Beyang: I think Copilot uses some variant of Codex. They're kind of cagey about it. I don't think they've like officially announced what model they use. [00:12:16]Swyx: And I think they use a range of models based on what you're doing. Yeah. [00:12:19]Beyang: So everyone uses a range of model. Like no one uses the same model for like inline completion versus like chat because the latency requirements for. Oh, okay. Well, there's fill in the middle. There's also like what the model's trained on. So like we actually had completions powered by Claude Instant for a while. And but you had to kind of like prompt hack your way to get it to output just the code and not like, hey, you know, here's the code you asked for, like that sort of text. So like everyone uses a range of models. We've kind of designed Cody to be like especially model, not agnostic, but like pluggable. So one of our kind of design considerations was like as the ecosystem evolves, we want to be able to integrate the best in class models, whether they're proprietary or open source into Cody because the pace of innovation in the space is just so quick. And I think that's been to our advantage. Like today, Cody uses Starcoder for inline completions. And with the benefit of the context that we provide, we actually show comparable completion acceptance rate metrics. It's kind of like the standard metric that folks use to evaluate inline completion quality. It's like if I show you a completion, what's the chance that you actually accept the completion versus you reject it? And so we're at par with Copilot, which is at the head of that industry right now. 
And we've been able to do that with the Starcoder model, which is open source and the benefit of the context fetching stuff that we provide. And of course, a lot of like prompt engineering and other stuff along the way. [00:13:40]Alessio: And Steve, you wrote a post called cheating is all you need about what you're building. And one of the points you made is that everybody's fighting on the same axis, which is better UI and the IDE, maybe like a better chat response. But data modes are kind of the most important thing. And you guys have like a 10 year old mode with all the data you've been collecting. How do you kind of think about what other companies are doing wrong, right? Like, why is nobody doing this in terms of like really focusing on RAG? I feel like you see so many people. Oh, we just got a new model. It's like a bit human eval. And it's like, well, but maybe like that's not what we should really be doing, you know? Like, do you think most people underestimate the importance of like the actual RAG in code? [00:14:21]Steve: I think that people weren't doing it much. It wasn't. It's kind of at the edges of AI. It's not in the center. I know that when ChatGPT launched, so within the last year, I've heard a lot of rumblings from inside of Google, right? Because they're undergoing a huge transformation to try to, you know, of course, get into the new world. And I heard that they told, you know, a bunch of teams to go and train their own models or fine tune their own models, right? [00:14:43]Swyx: Both. [00:14:43]Steve: And, you know, it was a s**t show. Nobody knew how to do it. They launched two coding assistants. One was called Code D with an EY. And then there was, I don't know what happened in that one. And then there's Duet, right? Google loves to compete with themselves, right? They do this all the time. And they had a paper on Duet like from a year ago. And they were doing exactly what Copilot was doing, which was just pulling in the local context, right? But fundamentally, I thought of this because we were talking about the splitting of the [00:15:10]Swyx: models. [00:15:10]Steve: In the early days, it was the LLM did everything. And then we realized that for certain use cases, like completions, that a different, smaller, faster model would be better. And that fragmentation of models, actually, we expected to continue and proliferate, right? Because we are fundamentally, we're a recommender engine right now. Yeah, we're recommending code to the LLM. We're saying, may I interest you in this code right here so that you can answer my question? [00:15:34]Swyx: Yeah? [00:15:34]Steve: And being good at recommender engine, I mean, who are the best recommenders, right? There's YouTube and Spotify and, you know, Amazon or whatever, right? Yeah. [00:15:41]Swyx: Yeah. [00:15:41]Steve: And they all have many, many, many, many, many models, right? For all fine-tuned for very specific, you know. And that's where we're heading in code, too. Absolutely. [00:15:50]Swyx: Yeah. [00:15:50]Alessio: We just did an episode we released on Wednesday, which we said RAG is like Rexis or like LLMs. You're basically just suggesting good content. [00:15:58]Swyx: It's like what? Recommendations. [00:15:59]Beyang: Recommendations. [00:16:00]Alessio: Oh, got it. [00:16:01]Steve: Yeah, yeah, yeah. [00:16:02]Swyx: So like the naive implementation of RAG is you embed everything, throw it in a vector database, you embed your query, and then you find the nearest neighbors, and that's your RAG. 
But actually, you need to rank it. And actually, you need to make sure there's sample diversity and that kind of stuff. And then you're like slowly gradient dissenting yourself towards rediscovering proper Rexis, which has been traditional ML for a long time. But like approaching it from an LLM perspective. Yeah. [00:16:24]Beyang: I almost think of it as like a generalized search problem because it's a lot of the same things. Like you want your layer one to have high recall and get all the potential things that could be relevant. And then there's typically like a layer two re-ranking mechanism that bumps up the precision and tries to get the relevant stuff to the top of the results list. [00:16:43]Swyx: Have you discovered that ranking matters a lot? Oh, yeah. So the context is that I think a lot of research shows that like one, context utilization matters based on model. Like GPT uses the top of the context window, and then apparently Claude uses the bottom better. And it's lossy in the middle. Yeah. So ranking matters. No, it really does. [00:17:01]Beyang: The skill with which models are able to take advantage of context is always going to be dependent on how that factors into the impact on the training loss. [00:17:10]Swyx: Right? [00:17:10]Beyang: So like if you want long context window models to work well, then you have to have a ton of data where it's like, here's like a billion lines of text. And I'm going to ask a question about like something that's like, you know, embedded deeply into it and like, give me the right answer. And unless you have that training set, then of course, you're going to have variability in terms of like where it attends to. And in most kind of like naturally occurring data, the thing that you're talking about right now, the thing I'm asking you about is going to be something that we talked about recently. [00:17:36]Swyx: Yeah. [00:17:36]Steve: Did you really just say gradient dissenting yourself? Actually, I love that it's entered the casual lexicon. Yeah, yeah, yeah. [00:17:44]Swyx: My favorite version of that is, you know, how we have to p-hack papers. So, you know, when you throw humans at the problem, that's called graduate student dissent. That's great. It's really awesome. [00:17:54]Alessio: I think the other interesting thing that you have is this inline assist UX that I wouldn't say async, but like it works while you can also do work. So you can ask Cody to make changes on a code block and you can still edit the same file at the same time. [00:18:07]Swyx: Yeah. [00:18:07]Alessio: How do you see that in the future? Like, do you see a lot of Cody's running together at the same time? Like, how do you validate also that they're not messing each other up as they make changes in the code? And maybe what are the limitations today? And what do you think about where the attack is going? [00:18:21]Steve: I want to start with a little history and then I'm going to turn it over to Bian, all right? So we actually had this feature in the very first launch back in June. Dominic wrote it. It was called nonstop Cody. And you could have multiple, basically, LLM requests in parallel modifying your source [00:18:37]Swyx: file. [00:18:37]Steve: And he wrote a bunch of code to handle all of the diffing logic. And you could see the regions of code that the LLM was going to change, right? And he was showing me demos of it. And it just felt like it was just a little before its time, you know? 
But a bunch of that stuff, that scaffolding was able to be reused for where we're inline [00:18:56]Swyx: sitting today. [00:18:56]Steve: How would you characterize it today? [00:18:58]Beyang: Yeah, so that interface has really evolved from a, like, hey, general purpose, like, request anything inline in the code and have the code update to really, like, targeted features, like, you know, fix the bug that exists at this line or request a very specific [00:19:13]Swyx: change. [00:19:13]Beyang: And the reason for that is, I think, the challenge that we ran into with inline fixes, and we do want to get to the point where you could just fire and forget and have, you know, half a dozen of these running in parallel. But I think we ran into the challenge early on that a lot of people are running into now when they're trying to construct agents, which is the reliability of, you know, working code generation is just not quite there yet in today's language models. And so that kind of constrains you to an interaction where the human is always, like, in the inner loop, like, checking the output of each response. And if you want that to work in a way where you can be asynchronous, you kind of have to constrain it to a domain where today's language models can generate reliable code well enough. So, you know, generating unit tests, that's, like, a well-constrained problem. Or fixing a bug that shows up as, like, a compiler error or a test error, that's a well-constrained problem. But the more general, like, hey, write me this class that does X, Y, and Z using the libraries that I have, that is not quite there yet, even with the benefit of really good context. Like, it definitely moves the needle a lot, but we're not quite there yet to the point where you can just fire and forget. And I actually think that this is something that people don't broadly appreciate yet, because I think that, like, everyone's chasing this dream of agentic execution. And if we're to really define that down, I think it implies a couple things. You have, like, a multi-step process where each step is fully automated. We don't have to have a human in the loop every time. And there's also kind of like an LM call at each stage or nearly every stage in that [00:20:45]Swyx: chain. [00:20:45]Beyang: Based on all the work that we've done, you know, with the inline interactions, with kind of like general Codyfeatures for implementing longer chains of thought, we're actually a little bit more bearish than the average, you know, AI hypefluencer out there on the feasibility of agents with purely kind of like transformer-based models. To your original question, like, the inline interactions with CODI, we actually constrained it to be more targeted, like, you know, fix the current error or make this quick fix. I think that that does differentiate us from a lot of the other tools on the market, because a lot of people are going after this, like, shnazzy, like, inline edit interaction, whereas I think where we've moved, and this is based on the user feedback that we've gotten, it's like that sort of thing, it demos well, but when you're actually coding day to day, you don't want to have, like, a long chat conversation inline with the code base. That's a waste of time. You'd rather just have it write the right thing and then move on with your life or not have to think about it. And that's what we're trying to work towards. [00:21:37]Steve: I mean, yeah, we're not going in the agent direction, right? 
I mean, I'll believe in agents when somebody shows me one that works. Yeah. Instead, we're working on, you know, sort of solidifying our strength, which is bringing the right context in. So new context sources, ways for you to plug in your own context, ways for you to control or influence the context, you know, the mixing that happens before the request goes out, etc. And there's just so much low-hanging fruit left in that space that, you know, agents seems like a little bit of a boondoggle. [00:22:03]Beyang: Just to dive into that a little bit further, like, I think, you know, at a very high level, what do people mean when they say agents? They really mean, like, greater automation, fully automated, like, the dream is, like, here's an issue, go implement that. And I don't have to think about it as a human. And I think we are working towards that. Like, that is the eventual goal. I think it's specifically the approach of, like, hey, can we have a transformer-based LM alone be the kind of, like, backbone or the orchestrator of these agentic flows? Where we're a little bit more bearish today. [00:22:31]Swyx: You want the human in the loop. [00:22:32]Beyang: I mean, you kind of have to. It's just a reality of the behavior of language models that are purely, like, transformer-based. And I think that's just like a reflection of reality. And I don't think people realize that yet. Because if you look at the way that a lot of other AI tools have implemented context fetching, for instance, like, you see this in the Copilot approach, where if you use, like, the at-workspace thing that supposedly provides, like, code-based level context, it has, like, an agentic approach where you kind of look at how it's behaving. And it feels like they're making multiple requests to the LM being like, what would you do in this case? Would you search for stuff? What sort of files would you gather? Go and read those files. And it's like a multi-hop step, so it takes a long while. It's also non-deterministic. Because any sort of, like, LM invocation, it's like a dice roll. And then at the end of the day, the context it fetches is not that good. Whereas our approach is just like, OK, let's do some code searches that make sense. And then maybe, like, crawl through the reference graph a little bit. That is fast. That doesn't require any sort of LM invocation at all. And we can pull in much better context, you know, very quickly. So it's faster. [00:23:37]Swyx: It's more reliable. [00:23:37]Beyang: It's deterministic. And it yields better context quality. And so that's what we think. We just don't think you should cargo cult or naively go like, you know, agents are the [00:23:46]Swyx: future. [00:23:46]Beyang: Let's just try to, like, implement agents on top of the LM that exists today. I think there are a couple of other technologies or approaches that need to be refined first before we can get into these kind of, like, multi-stage, fully automated workflows. [00:24:00]Swyx: It makes sense. You know, we're very much focused on developer inner loop right now. But you do see things eventually moving towards developer outer loop. Yeah. So would you basically say that they're tackling the agent's problem that you don't want to tackle? [00:24:11]Beyang: No, I would say at a high level, we are after maybe, like, the same high level problem, which is like, hey, I want some code written. I want to develop some software and can automate a system. Go build that software for me. I think the approaches might be different. 
So I think the analogy in my mind is, I think about, like, the AI chess players. Coding, in some senses, I mean, it's similar and dissimilar to chess. I think one question I ask is, like, do you think producing code is more difficult than playing chess or less difficult than playing chess? More. [00:24:41]Swyx: I think more. [00:24:41]Beyang: Right. And if you look at the best AI chess players, like, yes, you can use an LLM to play chess. Like, people have showed demos where it's like, oh, like, yeah, GPT-4 is actually a pretty decent, like, chess move suggester. Right. But you would never build, like, a best in class chess player off of GPT-4 alone. [00:24:57]Swyx: Right. [00:24:57]Beyang: Like, the way that people design chess players is that you have kind of like a search space and then you have a way to explore that search space efficiently. There's a bunch of search algorithms, essentially. We were doing tree search in various ways. And you can have heuristic functions, which might be powered by an LLM. [00:25:12]Swyx: Right. [00:25:12]Beyang: Like, you might use an LLM to generate proposals in that space that you can efficiently explore. But the backbone is still this kind of more formalized tree search based approach rather than the LLM itself. And so I think my high level intuition is that, like, the way that we get to more reliable multi-step workflows that do things beyond, you know, generate unit test, it's really going to be like a search based approach where you use an LLM as kind of like an advisor or a proposal function, sort of your heuristic function, like the ASTAR search algorithm. But it's probably not going to be the thing that is the backbone, because I guess it's not the right tool for that. Yeah. [00:25:50]Swyx: I can see yourself kind of thinking through this, but not saying the words, the sort of philosophical Peter Norvig type discussion. Maybe you want to sort of introduce that in software. Yeah, definitely. [00:25:59]Beyang: So your listeners are savvy. They're probably familiar with the classic like Chomsky versus Norvig debate. [00:26:04]Swyx: No, actually, I wanted, I was prompting you to introduce that. Oh, got it. [00:26:08]Beyang: So, I mean, if you look at the history of artificial intelligence, right, you know, it goes way back to, I don't know, it's probably as old as modern computers, like 50s, 60s, 70s. People are debating on like, what is the path to producing a sort of like general human level of intelligence? And kind of two schools of thought that emerged. One is the Norvig school of thought, which roughly speaking includes large language models, you know, regression, SVN, basically any model that you kind of like learn from data. And it's like data driven. Most of machine learning would fall under this umbrella. And that school of thought says like, you know, just learn from the data. That's the approach to reaching intelligence. And then the Chomsky approach is more things like compilers and parsers and formal systems. So basically like, let's think very carefully about how to construct a formal, precise system. And that will be the approach to how we build a truly intelligent system. I think Lisp was invented so that you could create like rules-based systems that you would call AI. As a language. Yeah. And for a long time, there was like this debate, like there's certain like AI research labs that were more like, you know, in the Chomsky camp and others that were more in the Norvig camp. It's a debate that rages on today. 
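The chess analogy a few paragraphs above, where a formal search procedure uses an LLM as the proposal and heuristic function rather than as the backbone, can be sketched roughly as follows. Everything here is hypothetical scaffolding: `propose_edits` and `llm_score` stand in for model (or compiler/test) calls, and the best-first loop is the general kind of search being gestured at, not any specific product's implementation.

```python
import heapq
from typing import Callable, Optional

def best_first_search(
    start_state: str,
    propose_edits: Callable[[str], list[str]],   # hypothetical: an LLM proposes candidate next steps
    llm_score: Callable[[str], float],           # hypothetical: an LLM (or tests/compiler) scores a state
    is_done: Callable[[str], bool],
    max_expansions: int = 50,
) -> Optional[str]:
    """Best-first search where the LLM is the heuristic, not the orchestrator.

    The priority queue, expansion loop, and termination check are ordinary search code;
    only the scoring and proposal steps consult a model.
    """
    frontier = [(-llm_score(start_state), start_state)]
    seen = {start_state}
    for _ in range(max_expansions):
        if not frontier:
            break
        _, state = heapq.heappop(frontier)
        if is_done(state):
            return state
        for candidate in propose_edits(state):
            if candidate not in seen:
                seen.add(candidate)
                heapq.heappush(frontier, (-llm_score(candidate), candidate))
    return None
```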
And I feel like the consensus right now is that, you know, Norvig definitely has the upper hand right now with the advent of LMs and diffusion models and all the other recent progress in machine learning. But the Chomsky-based stuff is still really useful in my view. I mean, it's like parsers, compilers, basically a lot of the stuff that provides really good context. It provides kind of like the knowledge graph backbone that you want to explore with your AI dev tool. Like that will come from kind of like Chomsky-based tools like compilers and parsers. It's a lot of what we've invested in in the past decade at Sourcegraph and what you build with Grok. Basically like these formal systems that construct these very precise knowledge graphs that are great context providers and great kind of guard rails enforcers and kind of like safety checkers for the output of a more kind of like data-driven, fuzzier system that uses like the Norvig-based models. [00:28:03]Steve: Jang was talking about this stuff like it happened in the middle ages. Like, okay, so when I was in college, I was in college learning Lisp and prologue and planning and all the deterministic Chomsky approaches to AI. And I was there when Norvig basically declared it dead. I was there 3,000 years ago when Norvig and Chomsky fought on the volcano. When did he declare it dead? [00:28:26]Swyx: What do you mean he declared it dead? [00:28:27]Steve: It was like late 90s. [00:28:29]Swyx: Yeah. [00:28:29]Steve: When I went to Google, Peter Norvig was already there. He had basically like, I forget exactly where. It was some, he's got so many famous short posts, you know, amazing. [00:28:38]Swyx: He had a famous talk, the unreasonable effectiveness of data. Yeah. [00:28:41]Steve: Maybe that was it. But at some point, basically, he basically convinced everybody that deterministic approaches had failed and that heuristic-based, you know, data-driven statistical approaches, stochastic were better. [00:28:52]Swyx: Yeah. [00:28:52]Steve: The primary reason I can tell you this, because I was there, was that, was that, well, the steam-powered engine, no. The reason was that the deterministic stuff didn't scale. [00:29:06]Swyx: Yeah. Right. [00:29:06]Steve: They're using prologue, man, constraint systems and stuff like that. Well, that was a long time ago, right? Today, actually, these Chomsky-style systems do scale. And that's, in fact, exactly what Sourcegraph has built. Yeah. And so we have a very unique, I love the framing that Bjong's made, that the marriage of the Chomsky and the Norvig, you know, sort of models, you know, conceptual models, because we, you know, we have both of them and they're both really important. And in fact, there, there's this really interesting, like, kind of overlap between them, right? Where like the AI or our graph or our search engine could potentially provide the right context for any given query, which is, of course, why ranking is important. But what we've really signed ourselves up for is an extraordinary amount of testing. [00:29:45]Swyx: Yeah. [00:29:45]Steve: Because in SWIGs, you were saying that, you know, GPT-4 tends to the front of the context window and maybe other LLMs to the back and maybe, maybe the LLM in the middle. [00:29:53]Swyx: Yeah. 
[00:29:53]Steve: And so that means that, you know, if we're actually like, you know, verifying whether we, you know, some change we've made has improved things, we're going to have to test putting it at the beginning of the window and at the end of the window, you know, and maybe make the right decision based on the LLM that you've chosen. Which some of our competitors, that's a problem that they don't have, but we meet you, you know, where you are. Yeah. And we're, just to finish, we're writing tens of thousands. We're generating tests, you know, fill in the middle type tests and things. And then using our graph to basically sort of fine tune Cody's behavior there. [00:30:20]Swyx: Yeah. [00:30:21]Beyang: I also want to add, like, I have like an internal pet name for this, like kind of hybrid architecture that I'm trying to make catch on. Maybe I'll just say it here. Just saying it publicly kind of makes it more real. But like, I call the architecture that we've developed the Normsky architecture. [00:30:36]Swyx: Yeah. [00:30:36]Beyang: I mean, it's obviously a portmanteau of Norvig and Chomsky, but the acronym, it stands for non-agentic, rapid, multi-source code intelligence. So non-agentic because... Rolls right off the tongue. And Normsky. But it's non-agentic in the sense that like, we're not trying to like pitch you on kind of like agent hype, right? Like it's the things it does are really just developer tools developers have been using for decades now, like parsers and really good search indexes and things like that. Rapid because we place an emphasis on speed. We don't want to sit there waiting for kind of like multiple LLM requests to return to complete a simple user request. Multi-source because we're thinking broadly about what pieces of information and knowledge are useful context. So obviously starting with things that you can search in your code base, and then you add in the reference graph, which kind of like allows you to crawl outward from those initial results. But then even beyond that, you know, sources of information, like there's a lot of knowledge that's embedded in docs, in PRDs or product specs, in your production logging system, in your chat, in your Slack channel, right? Like there's so much context is embedded there. And when you're a human developer, and you're trying to like be productive in your code base, you're going to go to all these different systems to collect the context that you need to figure out what code you need to write. And I don't think the AI developer will be any different. It will need to pull context from all these different sources. So we're thinking broadly about how to integrate these into Codi. We hope through kind of like an open protocol that like others can extend and implement. And this is something else that should be accessible by December 14th in kind of like a preview stage. But that's really about like broadening this notion of the code graph beyond your Git repository to all the other sources where technical knowledge and valuable context can live. [00:32:21]Steve: Yeah, it becomes an artifact graph, right? It can link into your logs and your wikis and any data source, right? [00:32:27]Alessio: How do you guys think about the importance of, it's almost like data pre-processing in a way, which is bring it all together, tie it together, make it ready. Any thoughts on how to actually make that good? Some of the innovation you guys have made. [00:32:40]Steve: We talk a lot about the context fetching, right? 
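To make the "multi-source" idea concrete, here is one hedged sketch of what a pluggable context-provider layer could look like; the names and fields are illustrative assumptions, not the actual OpenCodeGraph schema.

    from dataclasses import dataclass
    from typing import List, Protocol

    @dataclass
    class ContextItem:
        source: str   # e.g. "code-search", "docs", "slack" (illustrative labels)
        uri: str      # where the snippet came from
        text: str     # the snippet itself
        score: float  # the provider's own relevance estimate

    class ContextProvider(Protocol):
        """Anything that can turn a user query into candidate context items."""
        def fetch(self, query: str, limit: int) -> List[ContextItem]: ...

    def gather_context(query: str, providers: List[ContextProvider],
                       limit_per_source: int = 10) -> List[ContextItem]:
        # Fan out to every source, then let a single ranking step decide
        # what actually makes it into the prompt.
        items: List[ContextItem] = []
        for provider in providers:
            items.extend(provider.fetch(query, limit_per_source))
        return sorted(items, key=lambda item: item.score, reverse=True)

How those ranked items then get squeezed into a fixed window is the packing problem discussed next.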
I mean, there's a lot of ways you could answer this question. But, you know, we've spent a lot of time just in this podcast here talking about context fetching. But stuffing the context into the window is, you know, the bin packing problem, right? Because the window is not big enough, and you've got more context than you can fit. You've got a ranker maybe. But what is that context? Is it a function that was returned by an embedding or a graph call or something? Do you need the whole function? Or do you just need, you know, the top part of the function, this expression here, right? You know, so that art, the golf game of trying to, you know, get each piece of context down into its smallest state, possibly even summarized by another model, right, before it even goes to the LLM, becomes this is the game that we're in, yeah? And so, you know, recursive summarization and all the other techniques that you got to use to like stuff stuff into that context window become, you know, critically important. And you have to test them across every configuration of models that you could possibly need. [00:33:32]Beyang: I think data preprocessing is probably the like unsexy, way underappreciated secret to a lot of the cool stuff that people are shipping today. Whether you're doing like RAG or fine tuning or pre-training, like the preprocessing step matters so much because it's basically garbage in, garbage out, right? Like if you're feeding in garbage to the model, then it's going to output garbage. Concretely, you know, for code RAG, if you're not doing some sort of like preprocessing that takes advantage of a parser and is able to like extract the key components of a particular file of code, you know, separate the function signature from the body, from the doc string, what are you even doing? Like that's like table stakes. It opens up so much more possibilities with which you can kind of like tune your system to take advantage of the signals that come from those different parts of the code. Like we've had a tool, you know, since computers were invented that understands the structure of source code to a hundred percent precision. The compiler knows everything there is to know about the code in terms of like structure. Like why would you not want to use that in a system that's trying to generate code, answer questions about code? You shouldn't throw that out the window just because now we have really good, you know, data-driven models that can do other things. [00:34:44]Steve: Yeah. When I called it a data moat, you know, in my cheating post, a lot of people were confused, you know, because data moat sort of sounds like data lake because there's data and water and stuff. I don't know. And so they thought that we were sitting on this giant mountain of data that we had collected, but that's not what our data moat is. It's really a data pre-processing engine that can very quickly and scalably, like basically dissect your entire code base in a very small, fine-grained, you know, semantic unit and then serve it up. Yeah. And so it's really, it's not a data moat. It's a data pre-processing moat, I guess. [00:35:15]Beyang: Yeah. If anything, we're like hypersensitive to customer data privacy requirements. So it's not like we've taken a bunch of private data and like, you know, trained a generally available model. In fact, exactly the opposite. A lot of our customers are choosing Cody over Copilot and other competitors because we have an explicit guarantee that we don't do any of that. 
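As a concrete, Python-only illustration of that parser-based preprocessing, the sketch below uses the standard-library ast module to split each function into signature, docstring, and body, then greedily packs the smallest, highest-signal pieces into a fixed budget. It is a toy, not Sourcegraph's pipeline: the signature handling ignores defaults and keyword-only arguments, and the character budget stands in for a real tokenizer.

    import ast  # ast.unparse requires Python 3.9+
    from typing import Dict, List

    def chunk_functions(source: str) -> List[Dict[str, str]]:
        """Split a Python file into per-function chunks, keeping the
        signature, docstring, and body as separate fields."""
        chunks: List[Dict[str, str]] = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                params = ", ".join(a.arg for a in node.args.args)  # simplified: positional args only
                docstring = ast.get_docstring(node) or ""
                body = "\n".join(ast.unparse(stmt) for stmt in node.body)
                chunks.append({"name": node.name,
                               "signature": f"def {node.name}({params}):",
                               "docstring": docstring,
                               "body": body})
        return chunks

    def pack_context(chunks: List[Dict[str, str]], budget_chars: int = 4000) -> str:
        """Greedy packing: signatures and docstrings first (small, high signal),
        full bodies only if room remains."""
        pieces: List[str] = []
        used = 0
        for c in chunks:
            piece = c["signature"] + (("\n    " + c["docstring"]) if c["docstring"] else "")
            if used + len(piece) <= budget_chars:
                pieces.append(piece)
                used += len(piece)
        for c in chunks:
            if used + len(c["body"]) <= budget_chars:
                pieces.append(c["body"])
                used += len(c["body"])
        return "\n\n".join(pieces)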
And that we've done that from day one. Yeah. I think that's a very real concern in today's day and age, because like if your proprietary IP finds its way into the training set of any model, it's very easy both to like extract that knowledge from the model and also use it to, you know, build systems that kind of work on top of the institutional knowledge that you've built up. [00:35:52]Alessio: About a year ago, I wrote a post on LLMs for developers. And one of the points I had was maybe the depth of like the DSL. I spent most of my career writing Ruby and I love Ruby. It's so nice to use, but you know, it's not as performant, but it's really easy to read, right? And then you look at other languages, maybe they're faster, but like they're more verbose, you know? And when you think about efficiency of the context window, that actually matters. [00:36:15]Swyx: Yeah. [00:36:15]Alessio: But I haven't really seen a DSL for models, you know? I haven't seen like code being optimized to like be easier to put in a model context. And it seems like your pre-processing is kind of doing that. Do you see in the future, like the way we think about the DSL and APIs and kind of like service interfaces be more focused on being context friendly, where it's like maybe it's harder to read for the human, but like the human is never going to write it anyway. We were talking on the Hacks podcast. There are like some data science things like spin up the spandex, like humans are never going to write again because the models can just do it very easily. Yeah, curious to hear your thoughts. [00:36:51]Steve: Well, so DSLs, they involve, you know, writing a grammar and a parser and they're like little languages, right? We do them that way because, you know, we need them to compile and humans need to be able to read them and so on. The LLMs don't need that level of structure. You can throw any pile of crap at them, you know, more or less unstructured and they'll deal with it. So I think that's why a DSL hasn't emerged for sort of like communicating with the LLM or packaging up the context or anything. Maybe it will at some point, right? We've got, you know, tagging of context and things like that that are sort of peeking into DSL territory, right? But your point on do users, you know, do people have to learn DSLs like regular expressions or, you know, pick your favorite, right? XPath. I think you're absolutely right that the LLMs are really, really good at that. And I think you're going to see a lot less of people having to slave away learning these things. They just have to know the broad capabilities and the LLM will take care of the rest. [00:37:42]Swyx: Yeah, I'd agree with that. [00:37:43]Beyang: I think basically like the value proposition of a DSL is that it makes it easier to work with a lower level language, but at the expense of introducing an abstraction layer. And in many cases today, you know, without the benefit of AI code generation, like that's totally worth it, right? With the benefit of AI code generation, I mean, I don't think all DSLs will go away. I think there's still, you know, places where that trade-off is going to be worthwhile. But it's kind of like how much of source code do you think is going to be generated through natural language prompting in the future? Because in a way, like any programming language is just a DSL on top of assembly, right? 
And so if people can do that, then yeah, like maybe for a large portion of the code [00:38:21]Swyx: that's written, [00:38:21]Beyang: people don't actually have to understand the DSL that is Ruby or Python or basically any other programming language that exists. [00:38:28]Steve: I mean, seriously, do you guys ever write SQL queries now without using a model of some sort? At least a draft. [00:38:34]Swyx: Yeah, right. [00:38:36]Steve: And so we have kind of like, you know, past that bridge, right? [00:38:39]Alessio: Yeah, I think like to me, the long-term thing is like, is there ever going to be, you don't actually see the code, you know? It's like, hey, the basic thing is like, hey, I need a function to sum two numbers and that's it. I don't need you to generate the code. [00:38:53]Steve: And the following question, do you need the engineer or the paycheck? [00:38:56]Swyx: I mean, right? [00:38:58]Alessio: That's kind of the agent's discussion in a way where like you cannot automate the agents, but like slowly you're getting more of the atomic units of the work kind of like done. I kind of think of it as like, you know, [00:39:09]Beyang: do you need a punch card operator to answer that for you? And so like, I think we're still going to have people in the role of a software engineer, but the portion of time they spend on these kinds of like low-level, tedious tasks versus the higher level, more creative tasks is going to shift. [00:39:23]Steve: No, I haven't used punch cards. [00:39:25]Swyx: Yeah, I've been talking about like, so we kind of made this podcast about the sort of rise of the AI engineer. And like the first step is the AI enhanced engineer. That is that software developer that is no longer doing these routine, boilerplate-y type tasks, because they're just enhanced by tools like yours. So you mentioned OpenCodeGraph. I mean, that is a kind of DSL maybe, and because we're releasing this as you go GA, you hope for other people to take advantage of that? [00:39:52]Beyang: Oh yeah, I would say so OpenCodeGraph is not a DSL. It's more of a protocol. It's basically like, hey, if you want to make your system, whether it's, you know, chat or logging or whatever accessible to an AI developer tool like Cody, here's kind of like the schema by which you can provide that context and offer hints. So I would, you know, comparisons like LSP obviously did this for kind of like standard code intelligence. It's kind of like a lingua franca for providing find references and go to definition. There's kind of like analogs to that. There might be also analogs to kind of the original OpenAI plugins API. There's all this like context out there that might be useful for an LLM-based system to consume. And so at a high level, what we're trying to do is define a common language for context providers to provide context to other tools in the software development lifecycle. Yeah. Do you have any critiques of LSP, by the way, [00:40:42]Swyx: since like this is very much, very close to home? [00:40:45]Steve: One of the authors wrote a really good critique recently. Yeah. I don't think I saw that. Yeah, yeah. LSP could have been better. It just came out a couple of weeks ago. It was a good article. [00:40:54]Beyang: Yeah. I think LSP is great. Like for what it did for the developer ecosystem, it was absolutely fantastic. Like nowadays, like it's much easier now to get code navigation up and running in a bunch of editors by speaking this protocol. 
I think maybe the interesting question is like looking at the different design decisions comparing LSP basically with Kythe. Because Kythe has more of a... How would you describe it? [00:41:18]Steve: A storage format. [00:41:20]Beyang: I think the critique of LSP from a Kythe point of view would be like with LSP, you don't actually have an actual symbolic model of the code. It's not like LSP models like, hey, this function calls this other function. LSP is all like range-based. Like, hey, your cursor's at line 32, column 1. [00:41:35]Swyx: Yeah. [00:41:35]Beyang: And that's the thing you feed into the language server. And then it's like, okay, here's the range that you should jump to if you click on that range. So it kind of is intentionally ignorant of the fact that there's a thing called a reference underneath your cursor, and that's linked to a symbol definition. [00:41:49]Steve: Well, actually, that's the worst example you could have used. You're right. But that's the one thing that it actually did bake in is following references. [00:41:56]Swyx: Sure. [00:41:56]Steve: But it's sort of hardwired. [00:41:58]Swyx: Yeah. [00:41:58]Steve: Whereas Kythe attempts to model [00:42:00]Beyang: like all these things explicitly. [00:42:02]Swyx: And so... [00:42:02]Steve: Well, so LSP is a protocol, right? And so Google's internal protocol is gRPC-based. And it's a different approach than LSP. It's basically you make a heavy query to the back end, and you get a lot of data back, and then you render the whole page, you know? So we've looked at LSP, and we think that it's a little long in the tooth, right? I mean, it's a great protocol, lots and lots of support for it. But we need to push into the domain of exposing the intelligence through the protocol. Yeah. [00:42:29]Beyang: And so I would say we've developed a protocol of our own called Skip, which is at a very high level trying to take some of the good ideas from LSP and from Kythe and merge that into a system that in the near term is useful for Sourcegraph, but I think in the long term, we hope will be useful for the ecosystem. Okay, so here's what LSP did well. LSP, by virtue of being like intentionally dumb, dumb in air quotes, because I'm not like ragging on it, allowed language servers developers to kind of like bypass the hard problem of like modeling language semantics precisely. So like if all you want to do is jump to definition, you don't have to come up with like a universally unique naming scheme for each symbol, which is actually quite challenging because you have to think about like, okay, what's the top scope of this name? Is it the source code repository? Is it the package? Does it depend on like what package server you're fetching this from? Like whether it's the public one or the one inside your... Anyways, like naming is hard, right? And by just going from kind of like a location to location based approach, you basically just like throw that out the window. All I care about is jumping definition, just make that work. And you can make that work without having to deal with like all the complex global naming things. The limitation of that approach is that it's harder to build on top of that to build like a true knowledge graph. Like if you actually want a system that says like, okay, here's the web of functions and here's how they reference each other. And I want to incorporate that like semantic model of how the code operates or how the code relates to each other at like a static level. 
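For readers who have not seen the wire format, this is roughly what that position-based exchange looks like: an LSP textDocument/definition request carries only a file URI and a cursor position, and the response is just another URI plus a range, with no symbol identity anywhere in the message (shown here as Python dicts standing in for the JSON-RPC payloads; the file paths are made up).

    # LSP "jump to definition": everything is phrased as positions and ranges.
    definition_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "textDocument/definition",
        "params": {
            "textDocument": {"uri": "file:///repo/src/app.py"},  # hypothetical file
            "position": {"line": 31, "character": 4},            # 0-based cursor location
        },
    }

    # A typical response: a list of Locations -- again just URIs and ranges,
    # with no notion of *which symbol* was resolved.
    definition_response = {
        "jsonrpc": "2.0",
        "id": 1,
        "result": [{
            "uri": "file:///repo/src/lib.py",
            "range": {
                "start": {"line": 107, "character": 0},
                "end": {"line": 107, "character": 12},
            },
        }],
    }

A Kythe-style index instead keys both ends of that edge on a stable symbol name, which is what makes the graph queries described next cheap.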
You can't do that with LSP because you have to deal with line ranges. And like concretely the pain point that we found in using LSP for Sourcegraph is like in order to do like a find references [00:44:04]Swyx: and then jump to definition, [00:44:04]Beyang: it's like a multi-hop process because like you have to jump to the range and then you have to find the symbol at that range. And it just adds a lot of latency and complexity of these operations where, as a human, you're like, well, this thing clearly references this other thing. Why can't you just jump me to that? And I think that's the thing that Kythe does well. But then I think the issue that Kythe has had with adoption is because it is a more sophisticated schema, I think. And so there's basically more things that you have to implement to get like a Kythe implementation up and running. I hope I'm not like, correct me if I'm wrong about any of this. [00:44:35]Steve: 100%, 100%. Kythe also has a problem, all these systems have the problem, even Skip, or at least the way that we implemented the indexers, that they have to integrate with your build system in order to build that knowledge graph, right? Because you have to basically compile the code in a special mode to generate artifacts instead of binaries. And I would say, by the way, earlier I was saying that XREFs were in LSP, but it's actually, I was thinking of LSP plus LSIF. [00:44:58]Swyx: Yeah. That's another. [00:45:01]Steve: Which is actually bad. We can say that it's bad, right? [00:45:04]Steve: It's like Skip or Kythe, it's supposed to be sort of a model serialization, you know, for the code graph, but it basically just does what LSP needs, the bare minimum. LSIF is basically if you took LSP [00:45:16]Beyang: and turned that into a serialization format. So like you build an index for language servers to kind of like quickly bootstrap from cold start. But it's a graph model [00:45:23]Steve: with all of the inconvenience of the API without an actual graph. And so, yeah. [00:45:29]Beyang: So like one of the things that we try to do with Skip is try to capture the best of both worlds. So like make it easy to write an indexer, make the schema simple, but also model some of the more symbolic characteristics of the code that would allow us to essentially construct this knowledge graph that we can then make useful for both the human developer through Sourcegraph and for the AI developer through Cody. [00:45:49]Steve: So anyway, just to finish off the graph comment, we've got a new graph, yeah, that's Skip-based. We call it BFG internally, right? It's a beautiful something graph. A big friendly graph. [00:46:00]Swyx: A big friendly graph. [00:46:01]Beyang: It's a blazing fast. [00:46:02]Steve: Blazing fast. [00:46:03]Swyx: Blazing fast graph. [00:46:04]Steve: And it is blazing fast, actually. It's really, really interesting. I should probably do a blog post about it to walk you through exactly how they're doing it. Oh, please. But it's a very AI-like iterative, you know, experimentation sort of approach. We're building a code graph based on all of our 10 years of knowledge about building code graphs, yeah? But we're building it quickly with zero configuration, and it doesn't have to integrate with your build. And through some magic tricks that we have. And so what just happens when you install the plugin, that it'll be there and indexing your code and providing that knowledge graph in the background without all that build system integration. 
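To illustrate the difference in data model only (this is not the Kythe or Skip schema, just an illustrative shape), a symbol-keyed index answers "find references" and "go to definition" in a single lookup, with no intermediate position-to-symbol resolution step.

    from collections import defaultdict
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass(frozen=True)
    class Loc:
        path: str
        line: int

    @dataclass
    class SymbolIndex:
        """Toy symbol-keyed code graph."""
        definitions: Dict[str, Loc] = field(default_factory=dict)
        references: Dict[str, List[Loc]] = field(default_factory=lambda: defaultdict(list))

        def define(self, symbol: str, loc: Loc) -> None:
            self.definitions[symbol] = loc

        def reference(self, symbol: str, loc: Loc) -> None:
            self.references[symbol].append(loc)

        def find_references(self, symbol: str) -> List[Loc]:
            return list(self.references.get(symbol, []))

        def go_to_definition(self, symbol: str) -> Loc:
            return self.definitions[symbol]

    # The hard part in practice is the stable, globally unique symbol naming
    # scheme the speakers mention; the toy simply assumes one exists.
    index = SymbolIndex()
    index.define("pkg.parser.parse", Loc("src/parser.py", 10))
    index.reference("pkg.parser.parse", Loc("src/cli.py", 42))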
This is a bit of secret sauce that we haven't really like advertised it very much lately. But I am super excited about it because what they do is they say, all right, you know, let's tackle function parameters today. Cody's not doing a very good job of completing function call arguments or function parameters in the definition, right? Yeah, we generate those thousands of tests, and then we can actually reuse those tests for the AI context as well. So fortunately, things are kind of converging on, we have, you know, half a dozen really, really good context sources, and we mix them all together. So anyway, BFG, you're going to hear more about it probably in the holidays? [00:47:12]Beyang: I think it'll be online for December 14th. We'll probably mention it. BFG is probably not the public name we're going to go with. I think we might call it like Graph Context or something like that. [00:47:20]Steve: We're officially calling it BFG. [00:47:22]Swyx: You heard it here first. [00:47:24]Beyang: BFG is just kind of like the working name. And so the impetus for BFG was like, if you look at like current AI inline code completion tools and the errors that they make, a lot of the errors that they make, even in kind of like the easy, like single line case, are essentially like type errors, right? Like you're trying to complete a function call and it suggests a variable that you defined earlier, but that variable is the wrong type. [00:47:47]Swyx: And that's the sort of thing [00:47:47]Beyang: where it's like a first year, like freshman CS student would not make that error, right? So like, why does the AI make that error? And the reason is, I mean, the AI is just suggesting things that are plausible without the context of the types or any other like broader files in the code. And so the kind of intuition here is like, why don't we just do the basic thing that like any baseline intelligent human developer would do, which is like click jump to definition, click some fine references and pull in that like Graph Context into the context window and then have it generate the completion. So like that's sort of like the MVP of what BFG was. And turns out that works really well. Like you can eliminate a lot of type errors that AI coding tools make just by pulling in that context. Yeah, but the graph is definitely [00:48:32]Steve: our Chomsky side. [00:48:33]Swyx: Yeah, exactly. [00:48:34]Beyang: So like this like Chomsky-Norvig thing, I think pops up in a bunch of differ
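The intuition behind that graph-backed completion flow fits in a few lines: before asking the model to complete, look up the definitions of the symbols in scope and prepend them to the prompt so the model can see the relevant types and signatures. The helpers here are hypothetical placeholders for the code graph and the completion model, not BFG's actual interface.

    from typing import Callable, Iterable, List, Optional

    def build_completion_prompt(file_prefix: str,
                                symbols_in_scope: Iterable[str],
                                lookup_definition: Callable[[str], Optional[str]]) -> str:
        """Prepend graph-provided definitions (types, signatures) so the model
        is less likely to suggest, say, a variable of the wrong type."""
        context_blocks: List[str] = []
        for symbol in symbols_in_scope:
            definition = lookup_definition(symbol)  # e.g. served by the code graph
            if definition:
                context_blocks.append(f"# definition of {symbol}\n{definition}")
        return "\n\n".join(context_blocks) + "\n\n" + file_prefix
    # The returned string is what would be sent to the completion model,
    # with the cursor sitting at the end of file_prefix.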

Kobrand Sips & Selling Tips
Argentina | Alta Vista - #020

Kobrand Sips & Selling Tips

Play Episode Listen Later Oct 26, 2023 5:32


Alta Vista is a small, craft winery that is producing high quality wines that are also affordable. The Malbec varietal has become so popular in the US that it is found in every off-premise account and most on-premise accounts. Additionally, Argentina is not just about Malbec anymore, they are making great Cabernet Sauvignon and Chardonnay as well. Be sure to mention all of these great options when selling! Nick Poletto is the Vice President of Education at Kobrand Corporation. Kobrand has been importing fine wine into the US since 1944. Kobrand is a family owned importer with quality wine as its main focus. Nick Poletto travels around the US teaching sales teams about wine and the many different producing regions. Nick has visited all of these properties around the globe and brings you the most complete information with the most important sales tips. For more information, visit our website at https://www.kobrandwineandspirits.com  Follow us on Instagram https://www.instagram.com/kobrandwines  For more wine education visit https://wine365.com  or view Nick's Wine Journal https://www.youtube.com/@nickyvino1  Good selling!

Le Feuilleton
"Là d'où je viens a disparu" de Guillaume Poix 1/5 : Altavista, Salvador

Le Feuilleton

Play Episode Listen Later Oct 22, 2023 28:50


Duration: 00:28:50 - Le Feuilleton - "I've been living here for more than twenty years and nothing has changed; the fathers have passed on to their sons their weapons, their mistresses, their memory."

AM/PM Podcast
#365 - Pioneering Internet Marketing and AI: A Conversation with Perry Belcher

AM/PM Podcast

Play Episode Listen Later Oct 19, 2023 70:34


Join us as we welcome internet marketing titan, Perry Belcher, to the AM/PM Podcast! Listen in as we journey through Perry's remarkable career path - from humble beginnings before turning to digital marketing. Perry's illustrious career even saw him get a personal call from none other than Jeff Bezos himself, a short story you don't want to miss!   The conversation continues with Perry reflecting on the rise and fall of his business and partnerships. His journey, marked by selling health supplements to launching a digital marketing business, and finally starting the Driven Mastermind and the War Room, is an insightful one for any entrepreneur. Our chat also covers the importance of joining a mastermind group, the benefits it can bring, and how it can help you gain a broad perspective of different industries.   Lastly, Perry shares fascinating insights about the role of AI in business, specifically in copywriting. From reducing labor costs to crafting compelling headlines and stories, the potential applications of AI are far-reaching. He also discusses misconceptions people have about AI and the opportunities it presents. Tune in for a riveting discussion about the intersection of AI, E-commerce, and internet marketing. In episode 365 of the AM/PM Podcast, Kevin and Perry discuss: 09:22 - Success in Real Estate and Selling 16:45 - Running Successful Events 23:30 - The Value of Networking and Collaboration 29:55 - Selling Event Recordings for Profit 34:19 - Cash Prize Incentives for Speakers 39:00 - Leveraging Email Lists for Business Success 42:06 - Artificial Intelligence And Its Impact On Internet Marketing 53:21 - Other Mindblowing AI Capabilities 57:27 - AI's Role in Various Industries 1:07:38 - Follow Perry on Facebook for Updates 1:09:46 - Kevin's Words Of Wisdom Kevin King: Welcome to episode 365 of the AEM PM podcast. My guest this week is none other than the famous Perry Belcher. If you don't know who Perry is, perry is one of the top internet marketers, probably one of the top copywriters in the world today. He's got his hands in all kinds of stuff, from newsletters to AI, to print on demand to funnels, to you name it. In marketing, Perry's either got tremendous amount of experience in it or he's heavily involved in it right now. We talked some shop today and just go kind of all over the place on some really cool, interesting topics. I think you're getting a lot from this episode, so I hope you enjoy it. And don't forget, if you haven't yet, be sure to sign up for the Billion Dollar Sellers Newsletter. It's at billiondollarsellerswithaness.com. It's totally free. New issue every Monday and Thursday. It's getting rave reviews from people in the industry and some of the top people in the industry as well as people just getting started. So it's got a little bit different take on it and just a lot of information. Plus, we have a little bit of fun as well in the newsletter. So hopefully you can join us at billiondollarsellers.com. Enjoy today's episode with Perry. Perry Belcher, welcome to the AM/PM Podcast. It's an honor to have you on here. How's?   Perry: it going, man, Dr King, esquire at all. I'm doing great, buddy, I'm doing great. I'm just trying to survive this hot, hot, hot summer that we're all having, you know.   Kevin King: Well, you're out there in Vegas. Y'all had floods, right. I was seeing some stuff on TikTok, like some of the casino garages and stuff were flooding.   Perry: Yeah, there were some floods out here, so it's been. 
We got like years worth of rain in two days or something like that, they said, which we could stand. It didn't hurt. But the hot weather out here is just the way that it is. You get used to it after a little while.   Kevin King: Yeah, it's the same in Austin. It's like 108, I think today, and I know you know, football season just recently started and everybody's complaining that they're doing a game. One of the first games was in the middle of the afternoon, like 2.30 in the afternoon and like man, half these people are going to be dying out there, you better have some extra medical. You know supposed to do these things at night in Texas during September.   Perry: My kid did in the middle of the day and he had some days that they were kids passing out, you know. So I don't miss the heat in Austin. I'll take the heat in Vegas instead. It's different kind of heat to me.   Kevin King: Yeah, it's not. It's more of a dry heat, not that, not that human heat that we have here. I'll take it so for those. There's some probably some people listening that don't know. They're like who's this? Perry Belcher character? I never heard of this Perry Belcher guy and if you haven't, you've probably been living on a rock in internet marketing, because Perry Belcher is one of the living legends out there and when it comes to internet marketing, it's not just he dabbles on Amazon, but it's Amazon's just a little piece of what he does. He does a ton of other stuff. So, and you've been doing this since you're like, you've been an entrepreneur since you're like I don't know, three years old. I heard you selling hot dogs. I mean, you've pretty much done, everything from run from selling hot dogs to running, I don't know jewelry, pear shops or something, to having little kiosk in the mall, to crazy kind of stuff. I mean, just for those that don't know who the heck you are, just give a little bit about your background.   Perry: Sure, I'm world famous in Kazakhstan. I started out, you know, I grew up really poor in little town in Kentucky, paducah. It's a sound of dead body makes when it hits the floor. And I'll as soon as I could. I stayed there until I could drive. I could drive a car. I got the heck out of there and went to the big city, nashville, you know, and I got into, you know, early on I got into retail and I owned 42 jewelry stores. At one time when I was really, really young, before I was old enough to buy beer, I owned 42 jewelry stores. Isn't that crazy? That's crazy. Not that I didn't buy beer, but as long as I was legally buying beer Exactly. You know. So I was in retail. I went out of, you know, eventually I made three different runs and retailed it, Okay, and then I got into manufacturing. I found I really enjoyed manufacturing Great deal. I still do a lot of manufacturing, as you know and then along, I guess about 1997, for those young whippersnappers that were born about then that are on in your Amazon crowd right In 1997, they invented this thing called the interwebs and Jeff Bezos started a store called Amazon and I sort of got. I sort of got all caught up in the web thing. And you probably don't know this story. It was a true story, Kevin. I got a call from Jeff Bezos when I owned craftstorecom, so this was in probably 1998 or 1999. I got a personal call from Jeff Bezos wanting to talk to me about buying craftstorecom and rolling it into the Amazon family. And then they were only selling books, they were bleeding I don't even know $100 million, a quarter, or some crazy number. 
And I'm like dude, you're, I'm reading about you, you're losing money, I'm making money. You know, I think you got this reversed. I probably should buy you. I swear to God, I said that. Yeah yeah, I said that. That was, about best I can figure, about a $750 million mistake.   Kevin King: Well, it's funny you say that, because I mean we go back, we're old school when it comes to, way before, you know, all this internet marketing craze. We were doing old school marketing, you know, by putting a postage stamp on an envelope and sending it out. And I remember I have a couple of similar stories back around that same time, late 90s, early 2000s. The guy at MySpace had just started somewhere around in there and those guys reached out to me. I had a newsletter, an online newsletter going at the time, and they reached out to me to do something and I just ignored them. I was like what's this MySpace thing? I never heard of it.   Perry: I did the same thing with Jim Barksdale. You know who that was. Yeah, yeah, Barksdale wanted to buy one of my companies and I blew them off, and he was Netscape, you know. They also used to do, back, you might remember this, back.   Kevin King: I had several different websites and to get traffic back before there was Google and all these, you know, this SEO and all this stuff, it was basically AltaVista and, you know, I love that, I love that, Yahoo and all these guys, and you could just, just by putting stuff in the meta tags, you'd rank, you know — rank the crap out of it, yeah. You put text down at the bottom and all the good, all the good, all that kind of stuff. But one of the things — you might remember this — there was what's called ring sites. So in order to get traffic, you'd go to — some guy would figure out how to get people to his site and then it would be like next or previous, and you'd hit a button and it would go to the next, previous. And then we had a newsletter that was doing about 250,000 emails a day back before CAN-SPAM and all that stuff, and to get traffic to it, you know, we were getting on the Howard Stern Show when he was on terrestrial radio and we were doing all kinds of crazy stuff. But I was working with a site called Bomis, B-O-M-I-S, and they had one of these ring sites and they were like one of our top sources of traffic, and I just remember there's two guys there running out of their apartment or something. I talked to one of them. This is like probably around 2000 or so, ish, 2001. He said, hey, you're going to be dealing with me from now on. My buddy is moving on. I'm like, all right. I said, James is moving on? I said, OK, cool, what's he going to do? He said, I don't know, some sort of encyclopedia or something. I'm not sure what he's going to do. He's got some crazy idea. Turns out it was Jimmy Wales from Wikipedia. I was actually working with Jimmy Wales from Wikipedia before he was Jimmy Wales from Wikipedia. Isn't that crazy? It's crazy. I mean, the stories that we can tell from the early days of the Internet.   Perry: When I look back, I just can't... You know, my buddy's favorite saying, and I've adopted this: I can't believe how stupid I was two weeks ago.   You know, like, you just realize, you know, just the boneheaded stuff that you did when there was so much opportunity. 
The first domain I ever bought — this was like just when domain registrations came out — I bought formulas, the number four, U, dot com — oh wow — the most worthless domain anyone could ever own, when I could have probably bought internet.com, could have bought pretty much anything, and I bought the most boneheaded stuff, you know.   Kevin King: Well, you remember the guy that got in early, he bought sex.com or something for, like you know, 10 bucks or whatever it cost to register it back then, before there was a GoDaddy, yeah. And remember the fight like 20 years ago over that domain, because it became like the most valuable domain on the entire Internet or something. Remember that huge fight about that.   Perry: It was. It was crazy, but I know there's been a bunch of those stories. Man, I've got some friends that really did well buying domain real estate early on. I bought a lot. I mean, over time, I still think domains are a bargain. I really do. For the most part, I own stuff like sewing.com and makeuptutorials.com and diyprojects.com. I still own some big stuff that we operate and I own a bunch of other big stuff that we don't operate, and you know, I'm buying aftermarket.   Now, I bought conventions.com for a little over $400,000 two weeks before COVID. Boy, that timing was extraordinary. You know, what could go wrong? Conventions are impervious to depression. And so anyway, yeah, so I started buying. You know, I got into manufacturing and I immediately saw the benefit of online selling, because you could cut out all the different layers of middlemen in between the consumer and the manufacturer. So I've been a manufacturer selling direct to consumer for a long time. And then I got in business with Ryan Deiss after I got in a lot of trouble, almost went to jail, in the supplement business — scares me to death to this day. You know, I lost everything I had, almost went to the clink, and when that all got settled out I went into business with Ryan Deiss and he turned me on really to the information selling world.   Kevin King: How'd you guys meet up? Was it at some events, or did you just meet up? Yeah, we met up.   Perry: Yeah, I'll tell you, it's a pretty funny story. So we met at a Yanik Silver event. We went to dinner with, you know, all these millionaires, you know, in the room, the millionaire mastermind people, and we went to this big dinner and we had like 20 people at the dinner, and when the check came it was like, well, I only had a salad, well, I only had the soup, and you know, they're all dividing up checks and crap. And I'm like, come on. And Ryan looked at me and I looked at him. He said, do you just want to pay this bill and get the hell out of here? And I said, yeah. So we split the bill. And that's how we became friends, how we met. And then, you know, we knew each other through Yanik, and then when I got in trouble in the supplement business — I mean, I had loads of friends when you're, when you're netting out half a million dollars a month and you're flying all your friends on private jets to the Bahamas and crap on the weekends, boy, you got lots of friends, you know. And as soon as the money ran out, well, guess what? The friends ran out. You know, everything was... you know, nobody knew who I was then. You know, and Ryan called me and said, hey, man, I got this business in Austin. It's doing a couple million dollars a year. If you'll come help me run it, I'll give you half of it. 
Oh, wow, and we did $9 million in the first seven months.   Kevin King: And that was DigitalMarketer, for those of you that don't know, correct?   Perry: Yeah, it was called touch tone publishing then, but eventually we rebranded, it became DigitalMarketer, and then out of DigitalMarketer came Traffic & Conversion Summit, and out of Traffic & Conversion Summit came the War Room mastermind, and we ran all three of those for years. And we sold T&C to Clarion and Blackstone, the Blackstone Group, about four years ago, I guess. Then I sold my interest in DigitalMarketer to Ryan — Ryan, Roland, Richard — about two years ago, and then we dissolved War Room about a year ago, I guess. They were going a different direction, and Kasim Aslam and Jason Fladlien and I started the Driven mastermind. So, but yeah, it was a great, great run. Those guys are super good guys, are super, super smart, and we were business partners for 14 years. Long time. That's, you know, a long time to outlast.   Kevin King: That's a long time in this business — longer than all my marriages combined, almost, you know. So, going back — we'll talk about some of those in just a second — but just going back: what got you in trouble in the supplement business? Was it claims that you just didn't realize you couldn't make? Yeah, what was it?   Perry: It was kind of a combination. I was legitimately a pharmaceutical manufacturer. We were an FDA pharmaceutical manufacturer. I got all the licensure and all that. I got in trouble with the state — had nothing to do with the federal. They called in the federal, they called in the DA, they called in everybody, and it was like, guys, everything he's doing is correct. But the state took issue with some claims, and what ended up happening — they realized that they had not... The thing is, once the state gets their tentacles into you and has your money, you know, it's really hard to get rid of them, right? They're like a tick. But at the end of the day, the only thing that actually stuck was something called weights and measures. So that meant that my equipment wasn't precise enough to put the exact amount of product per bottle. So let's say it says it's two ounces, right? Mine might be 2.1 or 1.9 ounces, right? And there are state laws about that. They're called weights and measures laws. They're governed by the people who manage gas pumps, if you could believe it. But out of everything that they originally said that I was doing, they dropped everything else, and that was the only thing that actually, at the end of the day, was it. But I had to settle it, and they got all my money and all my stuff and left me three million dollars in debt. And when I went to Austin and we hustled hard, you know, for a couple of years, and I paid all that off — I didn't file bankruptcy on it — and it was hilarious, because I threw a Perry's Broke Party. Yeah, about two years in, when I got to zero, I got back to just broke — I wasn't three million dollars in debt anymore, right — I threw a giant Perry's Broke Party, and it was maybe one of the most fun parties we've ever had. It was a little...   Kevin King: You're in Austin — did you do that out at Willie Nelson's ranch? Because, remember, he did that when he got in trouble for seven million bucks and he did some sort of big-ass fundraising party out — he has this like old ranch out west of Texas, west of Austin. It's got a studio lot on it, basically an old...   House. 
Then I just had it right over the house and we had a big pool party and, oh my Lord, so many drunk people. It was a lot of fun, it was good time, so I got a lot of friends at Austin and you'll talk digital marketer.   Kevin King: the conference from like. I think the first one's a few hundred people to what the? Now it's five, six thousand people, or yeah, we get the biggest internet for if you're an internet marketing, yeah, just in in general, it's not just Amazon, it's like across the board, it's the biggest one out there, I think.   Perry: Yeah, before the year before COVID, I think we had the biggest year was seventy two hundred. Oh wow, seventy two hundred, seventy eight hundred, I can't remember. They thought we were going to ten thousand the next year and they rented the Coliseum in San Diego instead of the hotels. And then, of course, covid yeah, and it was just a you know, two or three years we had sold just prior to that. So have we not have sold that first year of COVID? I think was probably around a five million dollar loss, but they had clear and had insurance for it, fortunately. So I don't think they. I don't. I don't know the exact damage, but I know it would have probably wiped us out and we've been because we had a refund. Tickets with In the venue would not have soft to hook and I was a big bunch of crap when it comes to running conferences.   Kevin King: I mean, I do my billion dollar solar summit. You do your events now, like you do. You've done the couple AI summits, you've done the Perry's weird event or whatever. You do quite a different things. You have the Whatever, whatever, whatever. You done like three of those which are fascinating. You do, you know, you have the driven mastermind and you're involved with digital market and our space. There's a ton of people it's almost gotten through Events for Amazon sellers, like everybody. Everybody in their dog wants to have an event and the vast majority of them suck. There's like seven people there they can't sell tickets that are losing their shirt. Very few of them actually make money. What is the key actually, if you want to do an event or you're thinking about that to actually making these things work, is it the long term play you gotta have? The upsell is at the.   Perry: Well, events, events are very, very much an uphill battle. That's the reason. When you go to sell one, they have a lot of value. If you go to, if you build an event to a thousand, two thousand people, it has a lot of value in the exit market because once an event hits a certain inflection point, they're insanely profitable. So you're so, like digital market, we lost money On TNC for probably the first four years that we did it. But the way we made up for it, we filmed all of the sessions and we sold them as individual products. So we built all of our. We had a thing that really made that thing magical, because every session had to be good enough to sell as a product. So it made the event itself, you know, great because you had to have executable do this, do this, do this, do this. It couldn't just be a fluffy talk, right. Every talk had to be good enough to sell as a product when Ryan and I were doing them. So for the first three or four years we didn't make hardly any money, but we generated a lot of product out of that. We sold throughout the year. So we, you know, we did make money a couple million dollars a year From the product sales and then over time, as the attendance goes up, the ticket prices tend to go up. 
You start at really low ticket prices and you ratchet ticket prices up as the event gets bigger and bigger, bigger, and you start taking on sponsors and we basically got to the point by the time that we sold. You don't really want to sell right, because the sponsors were paying for 80 90% of the cost to put on the event. Tickets were you then over a thousand dollars a ticket? We were selling 7000 tickets. You didn't really need to sell, you know, because you the event was paid for by the sponsors. The ticket sales money was just free money. And then whatever you do at the event, you know in sales is even more free money. But when you look at companies like Clary on the by these things, they don't care about the product creation, they don't care about selling at the event, they only care about tickets and they make a lot of money on hotel rooms. So they so in when, when they're promoting they got a lot of cash, so they'll buy all the hotel rooms in downtown San Diego a year before we, right before we, now we announced the dates, they buy all the rooms and then when you're buying your room from bookingcom or American Express or whatever, you're actually buying that ticket from Clary on, because Clary on in a lot of cases bought all the rooms in the city for $120 a night and then a year later you're paying 350 on AmEx and they just pay AmEx a commission, a 20% commission.   Kevin King: That's different than the way when I do like for a billion dollar so much in order to not have to pay you know, $3,000 to turn the Internet on in the ballroom, or to have to per day, or from not having to pay for the ballrooms or this or that. We have to do guarantees. Rather than buying the rooms up front, we have to guarantee that we're going to put 50 butts in the in these beds or whatever. If we don't, we get penalized, you know, yeah, right.   Perry: We did a little bit different model. Yeah, we did, we did too. You still have room blocks, you know, and the killer and the killer in the convention businesses contract negotiation and room blocks. You know, if you can get room blocks down, we did one recently at the ARIA and I didn't have a room block anywhere because the ARIA surrounded by like eight hotels within walking distance, so there's no reason to book a room block. Everybody could stay where they wanted within that complex and the room blocks Everybody could stay where they wanted within that complex. And then we got together and it didn't. It didn't create the problem, but you know they get you. Would they charge you more for F&B? So they, they're going to get you right. So I've got my own event center now I've got a 50 person event center. I think we're going to expand to 100 people and and I really prefer having smaller workshops anyway, they're they're more intimate, they're more effective and if you're going to sell something else to the attendees, the smaller the room, the higher your conversion rates will always be if you're offering something to the attendees.   Kevin King: That's true, yeah, so then you took it from there to the mastermind you did the war room for a long time and I know my buddies, Manny  and Guillermo, at Helium 10. They joined the war room about two years into working on helium 10. They said that was the number one life changing thing that they did.   Perry: They killed it to that.   Kevin King: I don't know the numbers, but I know it's. I see what he's spending and what he's doing, so I'm like it's some serious numbers. 
But they they attribute that to war room, because there was some. Y'all did one event and I think it was in Austin, actually around 2018 ish, and it was all about system. Whatever the talk was on that one, because they're quarterly, they were quarterly deals. I think it was all about systemizing and getting out your way and like cutting all the riffraff. I don't, but they said that was. It was game changing for them and made them tens of millions of dollars. So, but to join a war room was what 30 grand, I know driven was what you have now which I've been driven 30 grand.   Perry: Yeah, I've been to.   Kevin King: I've been to driven. I went to the one back in July which was excellent out in LA and and I love going to these. Those of you are listening. You know this is not an Amazon conference. A lot of us go to Amazon conferences, but I think the best conferences for me are actually the non Amazon conferences, because I go into something like a driven where there's yeah, there's a handful of Amazon people there, but there's also a bunch of Facebook people. There's also a bunch of domain people, there's SEO people, there's people that you know just have some sort of a shop in Baltimore that you know do internet marketing and you, you meet this range of people and for me it's brainstorming sessions. I'm uninterrupted. You know if I'm watching stuff online, even the recording of that, you know I got phone calls coming in, the dogs barking. You know wife's nagging, whatever it may be. You're interrupted. But you're sitting in a room from nine to five, obviously not in the room. You're sitting in a room From nine to five listening to people, these people talking a lot of it. You might already know, some of it may be new to you, but you're just in there. One guy says something, perry says something, and then Kazim says something, and then Jason says something, and whoever else the speaker says something, you start going. If I put all these things together and I can do this for my business, holy shit, this is freaking incredible. And so that's. These people look at me. And why the heck would I pay 25 or 30 grand to be in some sort of event? And if in the Amazon space, I personally wouldn't, because I'm going to be the one delivering most of the value in a lot of cases. And so why would I pay to join something? They should be paying me to come to it. But when you go to something where it's a cross section of people in the marketing world that all think like you but they do different things, I think that's the most valuable thing, would you? Would you agree?   Perry: I think honestly, I think in a good mastermind and that there's that good being in parenthesis and a good mastermind. I don't think you can lose money. I think it's almost impossible. I've made money in every mastermind I've ever been in you just, I like the idea of the diversity, right. I might learn something from a guy in the funeral industry that can be applied to somebody that's selling weight loss, right. You never know. And you know my benefit. I guess I've been around a long time, like you, kevin, I've been around the block a bunch and I've been fortunate enough to work with like hundreds and hundreds and hundreds of businesses Pretty intimately in the, in the, the war room and now driven setting, and you know I get to see what's working and what's not working from like a 10,000 foot view inside all these businesses. So for me personally it's a great benefit that I get to learn something from really diverse. 
You know I learned the other day I was talking to a friend of mine, a client, that that they're in the, they sell online, that you book an appointment, you know they call you in, whatever, and they're in an industry that I have no interest in, no knowledge of, right. But they figured out that if they once somebody's booked an appointment, if they put a zoom, a live zoom, on the thank you page with somebody sitting there going hey, kevin, so glad you booked your appointment. By the way, jimmy can take you right now if you want, right. That one thing those, those people are coming in that way, or converting nine times higher than the people who book a normal sales call. And the beautiful thing now is.   Kevin King: You can do that with AI. There's tools with AI where you could actually, when they fill in that form I'm registered, I'm Kevin air dot AI and all that yeah, several and one that you could actually and you could put in you upload a spreadsheet or tie it into. You know, through an API to your, your cell system, that Jenny is available and it can actually, as I'm typing in, kevin King it's in the background recording a video with with Perry saying hey, hi, kevin, this is Perry. I glad you just signed up. Jenny's available right now. It's all automated and all like holy cow how to help her is just sitting around it and you know the conversions on that go through the roof.   Perry: Oh, they're nutty and but that's something I learned from a person who's in the like the the trauma they. They serve trauma psychiatrists, that's their market and I'm like I would never know that in a million years. Right, but but how many other businesses or clients of mine could that one tactic be applicable to? The answers? A lot, right, so you. So, when you go into those rooms where you know to be in driven, you got to be doing at least a million a year, but I think our average is around seven million a year gross and, and some you know up to, you know there's there's some hundred million dollar Folks and big players in there. There's some big players there, but you but nobody's stupid, right? You're in a room full of really, really smart people when they're basically telling you what they're doing. I joke about. I get paid for people to tell me. I get paid for really smart people to tell me what they're doing. That's really working and what I right, what a great gig I got right. But, yeah, we've been doing it for a really long time there. Those groups masterminds are hard to keep together and Keep happy and all that there because they are, because they're intimate, people share a lot of details and sometimes you have personality, kind of little things. This is crazy nutty stuff. That happens that you, the only problem with those things are just, they're a, they're a bit to, they're a bit to manage and you know that, as far as the 30 grand goes, or 50 grand, or 70. I know a lot of people charge. I know a buddy mine charge is 70,000 a year. You know we act like that's a lot of money but everybody's got an idiot on their payroll that there's a more than 30 grand to, I promise you. Everybody does. Everybody has a dodo on their payroll that they should have fired a long time ago but he brings the doughnuts or something and you don't farm that. Would you rather have that dodo licking stamps four hours a day or would you rather, you know, have access to some of the smartest people and your peers and you know really Really that? 
Keep you accountable, keep you on your toes and keep you up to date, because we do a call every week along with the meeting. So I I'm not pitching it down, I don't. This is sound like I'm hey, go buy my thing, but no matter what the industry you're in, get into a mastermind group. If you can, it'll one that you can afford.   Kevin King: You know ours is out of reach for most people because they're they're not because it's they can afford it, because they just don't meet the minimum sales, like you said, like you know, if you're at a one million and you said the average is around seven, you know, for 30 grand a year, all you need is one, one little idea, one thing, just you, just the ROI could be immense on just one thing.   Perry: I've heard a hundred times and I got all my value for the year within the first two hours. The first meeting yeah, you know, I've heard that so many times because this Kevin King gets up and talks and says something really smart and you go. Well, that was worth it, right, I got. I learned a thing that I didn't know and and, like you said, when you're doing, the beauty is the reason we don't take people that aren't doing a lot of money yet. It's hard to ROI. But if you're already doing let's say you're doing seven million a year and you get an idea that gives you a 5% bump, right, let's 350 grand, yeah for an idea. And you, you know, you're in for a year. You're in for 52 calls and four live meetings and Intensives and networks and private calls and all kinds of stuff. It's you know and I'm not saying for us, just for any man mind if you get a good mastermind, you can't lose money if you, if you have a good enough business already that you can ROI.   Kevin King: One of the things that you do that's really cool too is, like you said. You know, with digital market and I agree that you know you're recording it, turning it into content you do that now. Well, you'll do a Like that, the weird event you you straight up say, hey, come out to this thing. Yeah, it's gonna be a hundred of you here, but I'm recording this. I'm gonna turn this into a product. Yeah, you turn it into six products. You know, and I didn't with my billion dollar seller summit. I didn't used to record those, but now that's half the prop. That's where the actual the profit is. It's actually in recording it and then selling it to the people that didn't come. But one of the cool things that you do, like it driven and some of your other events your AI event you did this. I think you do it. Probably pretty much everyone I've ever been to is at the end you say get the kick the cameras out of the room, turn everything off. Let's grab a bottle of wine. You sit up with the stage. You might bring a couple other your partners or the couple other speakers and it's just two hours, three hours. They're just shooting the shit of Q&A and, yeah, stuff that comes out of that Alone pays for the entire event.   Perry: Yeah, the unplugged we've we've been doing unplugged forever because at the end of most events, you know, you still have unanswered questions and I don't want people to have unanswered questions. But also some people just don't want to talk about, they don't feel comfortable talking about the particulars of their business on camera. Yeah, so you know, if they because you know, sometimes a lot of my students are also Gurus, right, and you know how gurus are they don't want to tell you that. 
They don't want to tell you they're having a hard time making the lease payment, because it would hurt their image. I talk about all of my screw-ups, almost going to jail, going broke, all of it, because it's real; that's what's real to people. But a lot of the guru guys go, "Well, I can't say that, it would destroy my image." So I like doing Unplugged sessions because people feel a little more comfortable talking about their challenges without feeling like it changes their position. And sometimes people just don't want to ask their question on a microphone in front of a thousand people for fear of embarrassment: what if my question is a dumb question? When you're just sitting down slugging back a beer and chatting, they feel more comfortable asking the questions they probably should be asking. I've done that as a policy for a really long time. We do Wicked Smart and we do Unplugged; those are the two. We always ask for the best idea in the room, and that was a funny story.
Wicked Smart was invented the first year that Ryan and I did Traffic & Conversion Summit. We had programmed three days' worth of content for a three-day event, and at 11 o'clock on the third day we were out; we had miscalculated our time and didn't have anything else to talk about. So we went to lunch and said, "Man, we've got to fill the whole afternoon. What are we going to do?" And I don't know if it was Ryan or me, I think we came up with it together: let's just challenge people to come up and tell us the smartest thing they've learned in the last six months and how it affected their business, and give a prize to whoever has the best idea. Jeff Mulligan is a good friend of ours, a former Bostonian who lives in New Hampshire, and he always says "wicked smart, that's wicked smart." The first person came up and did their thing and it was, "Ooh, that's wicked smart," and it stuck. That's how Wicked Smart got started. But we never did Unplugged there. I used to do Unplugged with Andy Jenkins at StomperNet years ago, when I would go speak for them every now and then, and one of the things we did that was really cool was called Unplugged. Andy was brilliant; I don't know if you ever knew him, he was an absolutely brilliant guy. He and I would sit on the edge of the stage and talk to people for hours. It was a lot of fun. So I kind of picked that up from Andy.
Kevin King: Yeah, I do that at the Billion Dollar Seller Summit, and I run a hack contest. On the last day I do two things. I incentivize the speakers to bring it, so I put a cash prize up for the speakers, because I don't want them doing the same presentation they just did at three other conferences, or the same thing they talked about on a podcast. I want them to bring their A game, so I put a five-thousand-dollar cash prize on first place and twenty-five hundred on second. It's voted on the last day. I always speak last, so I'm ineligible. After the last speaker has spoken, everybody votes on who they thought was the best speaker and delivered the best value, and that person gets five grand.
So it's become an honor to win it, and as a result everybody brings next-level stuff they normally wouldn't talk about. Then I publish the list. If there are 15 speakers, I start at number 10; I don't show numbers 11 through 15 because I don't want to totally embarrass anybody. I start at number 10 and count backwards, announcing them like it's the Billboard Top 100 or something, Casey Kasem or whatever, and it works really, really well. Because if you're not in the top 10 as a speaker, you know you didn't do so well, you didn't resonate, and then you're not coming back. And if you need the spelling of my name for the check... You've been involved in AI for like seven years, before it was the cool thing to do.
Perry: Probably six, yeah, probably six years. I spoke on AI at the largest Traffic & Conversion Summit, the one before COVID. I spoke on AI and showed Jarvis and WellSaid Labs and a bunch of those tools before anybody or anything, and everybody in the room was just blown away by it, and I feel certain they didn't do anything with it at all. But I was using it for copywriting, and we were building services around it, like this AI bot that, well, it'll be out after this airs. With the AI bot stuff, we're really concentrating more on the business models you can apply AI to. So the first AI Bot Summit was all about opening people's minds up to it, so they understood what it was, understood how to use the tools, and really just grasped this one thought: if you had 10,000 really smart people willing to work for you 24 hours a day for free, what would you have them do? That's always my question, because with AI and a little bit of robotics, that's what you have. You have an unlimited number of robotic slaves to do your bidding, whatever you want, and they don't take breaks, they don't break up with a boyfriend, and they don't sue you over workplace compliance issues and all that stuff. And you're going to see, I think it's already happening, it's just that people aren't exposed to it in the mainstream yet, but corporations are projecting huge profits over the next few years as they shrink the number of physical workers they have and replace them with AI. Elon Musk, whether you like him or not, cut the workforce at Twitter by 90%, and arguably the experience for the end user hasn't changed.
Kevin King: Yeah, right. Just to tell a quick little story before we get into this: at your event back in April you were showing some business uses, you were talking about the army of 10,000, and you showed something about a building, the payroll of that building, and how with AI the payroll goes from some crazy number, a million dollars a month, down to 86 dollars a month or something; I'm exaggerating there.
Perry: It's the Empire State Building. I'm going to paraphrase because I don't remember the exact numbers, but the daily payroll in the Empire State Building is about a million dollars or more a day, and the average white-collar worker in America outputs about 750 words of text a day. So if you translate that into the cost of having OpenAI generate the same 750 words, it's about 42 bucks. I mean for all of them, not for one; 42 bucks or 92, but it wasn't much.
Kevin King: It was less than 200 dollars, I think, to generate the same amount of work product. One of the things you talked about there was newsletters, and how AI can automate a lot of newsletters, and I'm going to disagree with you a little bit there. You may have changed your tune by now, I'm not sure, but at the time you were saying let AI do all the writing, do everything, you can put these things on autopilot. I think that's definitely possible, but the quality sucks for the most part, unless you're just assembling links. But what you said about newsletters got me thinking, and it ties back to what we were talking about earlier, about going to events. I used to run a newsletter in the late 90s and early 2000s that had 250,000 daily subscribers. We crushed it, using it as a lead magnet to sell memberships, physical products, everything. And in this Amazon product space, everybody's always trying to build audiences. They're told, go build a Facebook group, go create blog posts. But as you know, the most valuable asset in any business is your customer list, your email list, and being able to use it when you want, as you please. You can't do that on social media. You have no control over the algorithms on Facebook, no control over how many people see your LinkedIn post. But with an email list or a customer database, you do. So I thought, wait a second, what if we took newsletters and did this with physical products, to actually build audiences? If I'm selling dog products, say sustainable dog products, the dog market is half of America; that's too big. But if I niche it down to people who own dogs and care about sustainability and create a newsletter for them, I'm not trying to sell them anything. It's not a promotional email from my company saying, "Hey, look at our latest product, here are our new things." It's about the dogs: dog training, dog tips, food tips, whatever, with the occasional affiliate link sprinkled in to test things, and maybe even a sponsorship, so the thing is self-sustaining. Then, when you're ready to launch a product, you have an avid, rabid, loyal fan base to launch it to. This is the way to actually build things. So I started looking into it. I devoured everything you showed about newsletters; you even set up a special Telegram newsletter channel, and I devoured everything in there. Then I devoured everything in the newsletter space for three months. I already knew this stuff, but I wanted to re-educate myself on the latest tools and strategies. And I just launched one on August 14th for the Amazon space, where I already have an audience. Let me figure this out, figure out the best tools and systems, and then I can spread it across multiple industries. That's what we're doing now, and it's been hugely successful so far. And AI is a part of that, but I'm not letting AI write it. AI is more on the creative side. It will rewrite something. If I'm trying to think of a headline, I'll ask, what's a better way to say X, Y, Z?
I'll type in, "Give me ten ways to say this that are funny and catchy, in the tone of Perry Belcher," or whatever it may be, and it gives me all these cool ideas, and then I mix and match, or sometimes it nails it. Or I'll write... you talked about this in one of your trainings, the six-second video. The beginning of every one of my newsletters is a basic six-second story, a personal story about me: something like meeting Michael Jordan and spending a night with him in a suite in Atlantic City the night before he first retired, or crazy stories about my divorce, or the naked girl on the balcony. It's edgy, crazy stuff. But then I tie it back into the physical products. I'll use AI sometimes to help tweak that, or if we get some scientific document from Amazon about how the algorithm works, I'll use it to read the document, summarize it, and then rewrite it with a human touch and add personality. That's where using AI in other industries is brilliant, and most people aren't getting that right now. Most people just think of it as a threat to their job, the Terminator coming to kill them and take over the world.
Perry: Everything's a conspiracy theory.
Kevin King: Yeah. Just this August, on my father's 82nd birthday, I sat there for an hour explaining AI to an 82-year-old and a 79-year-old, and their minds were blown. They were like, "How do you know all this? This is like a science fiction movie," and that's just what you can do with it. Most people don't understand that. What are your thoughts on AI right now, how are people misunderstanding or misusing it, and what are the best opportunities out there?
Perry: Well, circling back to your point that AI sucks for newsletters: it depends on the kind of newsletter you're writing.
Kevin King: That's what I said. If it's a link newsletter or something, you can do it.
Perry: If it's an aggregated newsletter, what you call a link newsletter and what I call a curated newsletter, AI does a really good job of writing basically a tweet and then linking to the article. You do that eight or nine times and you've got a newsletter. But did you see the one...?
Kevin King: The Hustle, I think it was. They did a study. I don't know if you saw this, but The Hustle actually hired a guy to see if he could fully automate a newsletter, 100% AI. They had their programmers build some stuff and they put it out. It was about the nineties, so if today is August 6th, 2023, they would do August 6th, 1993: what happened on that day? You know, Jurassic...
Perry: Park, the whole movie.
Kevin King: But the thing is, it was repeating itself. The way it was writing, it was just... you've got to have...
Perry: A human do a final review. You've still got to have a human do a final review. We've got a system. My partner Chad built a software system we're about to launch, actually, called Letterman, and we manage 18 newsletters a day through it with three outsourcers.
Kevin King: And the way you do it is you handpick what you're going to talk about?
Perry: Basically, we have a bunch of API feeds that tell us which stories are trending on a subject today, and then our guys can go in and just click: yes, yes, yes, no, yes, no, delay, delay (maybe for a future issue). It pulls those links together and drops them into our software, the software reads each article, and then it writes something like a tweet that compels people to go read that article. The call to action is compelling them to read the article, right?
Kevin King: So is that an SEO play, then, or is it a newsletter?
Perry: It's a newsletter. This all goes into a newsletter. Financial is a great example; The Capitalists is ours, and we want readers to be able to get the gist of the Wall Street Journal in three thumb swipes. There might be 10 links in an issue, but we're only writing about 140 characters on each link, compelling you to click it, and AI is writing that.
Kevin King: Okay.
Perry: And then they go over and read the actual article at the original source.
Kevin King: So it's an expanded Drudge Report or something.
Perry: It's not even kind of like it; it's exactly what it is. Now, the flip side: that's only really useful if you have a newsworthy topic. News, financial, entertainment, sports, politics, things that change every single day. If you're in the Amazon space, you've got to think about it more like a magazine.
Kevin King: That's what I do, yeah.
Perry: So what we do there is find a feature article, or even better, three feature articles. Say my thing is Amazon and I'm talking about optimizing the perfect Amazon listing, whatever. I'll go find the three best articles I can possibly find on that subject anywhere in the world, feed them into the AI, have it read all three, and then write me a new article. And oftentimes, to keep it interesting, we have characters, ghostwriters we've created, who write in the style of whomever, really detailed. But one of the things we've found, Kevin, that's killing it right now is our email list. I'm on a mission to get my email list to never send a promotion, ever.
Kevin King: That's what I'm on too, yeah.
Perry: The way I do it is by sending out content. So Perry might send out an email...
Kevin King: You're doing it every day right now. I get an email from you every day on copywriting. Big, long emails. I save them; they're valuable. Some of them go into my swipe file.
Perry: It's subtle.
Kevin King: It's subtle. You're staying top of mind. Dan Kennedy does it right now, and there are a couple of others; he's doing it with Russell. And they're valuable. You could just read them and never do another thing, but you're staying top of mind, and then you'll slip in something like, "Oh, P.S., remember the AI Summit is coming," or whatever. That stuff works.
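The Letterman workflow Perry describes above (editors approve a handful of trending links, then AI writes a tweet-length blurb whose only job is to earn the click) boils down to a short loop. The sketch below is a minimal illustration, not the actual Letterman software: the model name, the article list, and the 140-character target are assumptions taken from the conversation, and a human editor would still review the output before it ships.

```python
# Minimal sketch of a curated ("link") newsletter assembler.
# Assumes: `pip install openai`, OPENAI_API_KEY in the environment, and that a
# human has already approved the articles below (the yes/no/delay step above).
from openai import OpenAI

client = OpenAI()

approved_articles = [  # hypothetical feed items an editor clicked "yes" on
    {"title": "Fed holds rates steady", "url": "https://example.com/fed-rates"},
    {"title": "Chip stocks rally on AI demand", "url": "https://example.com/chips"},
]

def write_teaser(article: dict) -> str:
    """Ask the model for a roughly 140-character blurb whose call to action is the click."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat model would work here
        messages=[{
            "role": "user",
            "content": (
                "Write a curiosity-driven teaser of at most 140 characters that "
                f"compels a reader to click this article: {article['title']}"
            ),
        }],
    )
    return resp.choices[0].message.content.strip()

def build_issue(articles: list[dict]) -> str:
    """Assemble teaser-plus-link pairs into a plain-text issue for human review."""
    blocks = [f"{write_teaser(a)}\n{a['url']}" for a in articles]
    return "\n\n".join(blocks)

if __name__ == "__main__":
    print(build_issue(approved_articles))
```

The model only writes the hook; the reader still lands on the original source, which is why this pattern suits newsy topics better than the magazine-style issues Perry describes for the Amazon space.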
Perry: But here's what's about to happen with those lists, and we're doing it on another list right now: once you open that piece about headline writing, I can fire off a straight-up promotion to you.
Kevin King: Yeah, you're segmenting based on what I click and what I open and read, instantly.
Perry: So you open and read my article about headlines, you close that email, and the next email in your queue is from me going, "Hey, the copywriting course is 50% off today, great deal." You're already so pre-framed for it that the open rate on that second email is like 70 to 80%. We're doing that.
Kevin King: We're going to do that in the product space, where we watch what people click, and if they're always looking at the dog stuff, we'll start feeding them more dog stuff. There's a tool out there that does this for newsletters right now; it watches everything and automatically creates a personalized feed inside the newsletter.
Perry: We basically want to Instagram the newsletter business. If you're only opening dog stuff, we want to deliver dog stuff to you. If you're only reading lip plumper articles, we want to deliver lip plumper offers to you and make the newsletter more lip related.
Kevin King: If that's the thing you're into in the makeup space. We're talking about this for newsletters for Amazon sellers, but you can do it for physical products, for any industry, and then leverage off of it. You see that they're always clicking on the dog stuff, so you start driving them to your print-on-demand dog t-shirts, or to Amazon to buy dog bowls or whatever. There's a guy who sells drones on Amazon...
Perry: You should have a drone newsletter. You absolutely should have a drone newsletter.
Kevin King: When we talk about newsletters, there's a big misconception, in my mind anyway; maybe you have a different take on it. So many people have what they call a newsletter. You go to their website, the drone maker, "sign up for our newsletter," and the newsletter is nothing but a promotional email: "Hey, we just announced two new parts, we just announced this." To me, that's not a newsletter.
Perry: You're not going to get deliverability on it either. A newsletter provides value.
Kevin King: It's like 95% value, 5% promotional. You want to get it to where people look forward to getting it. Not "oh God, I just got another freaking email from the drone company, delete, delete, delete," but "I've got to open this, because they may have some cool tactic in there on how to fly my drone in heavy winds," or whatever it may be. That's where you've got to be thinking when you're doing this, and AI is a great tool for it.
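The open-and-click trigger Perry describes above (read a content email about headlines, and the matching promotion shows up next) is essentially event tracking plus a topic-to-offer lookup. Here is a minimal sketch of that logic, independent of any particular email platform; the topics, offers, event format, and threshold are all invented for illustration.

```python
# Minimal sketch of interest-based follow-ups: track which content topics a
# subscriber opens or clicks, and queue the matching promo once they engage.
# The topic-to-offer table and the engagement threshold are illustrative only.
from collections import defaultdict

TOPIC_OFFERS = {  # hypothetical mapping of content topics to promotions
    "headlines": "Copywriting course: 50% off today",
    "dogs": "New sustainable dog bowl launch",
}

ENGAGEMENT_THRESHOLD = 1  # queue a promo after this many opens/clicks on a topic

engagement = defaultdict(lambda: defaultdict(int))  # subscriber -> topic -> count
promo_queue: list[tuple[str, str]] = []             # (subscriber, promo) to send next

def record_event(subscriber: str, topic: str, event: str) -> None:
    """Call this from your email provider's open/click webhook (format varies by provider)."""
    if event in {"open", "click"}:
        engagement[subscriber][topic] += 1
        if engagement[subscriber][topic] == ENGAGEMENT_THRESHOLD and topic in TOPIC_OFFERS:
            promo_queue.append((subscriber, TOPIC_OFFERS[topic]))

# Example: Kevin opens the headlines article, so the copywriting promo is queued next.
record_event("kevin@example.com", "headlines", "open")
print(promo_queue)  # [('kevin@example.com', 'Copywriting course: 50% off today')]
```

In practice `record_event` would be wired to the open and click webhooks of whatever email service you use, and the queued promo would go out through that same service.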
Kevin King: And as a quick aside, I always remember something you said; it's a quote I often repeat and credit to you. You always said that when it comes to selling products on Amazon, people don't buy products, they buy photos.
Perry: Absolutely.
Kevin King: Can you talk about that, just for the Amazon people?
Perry: Nobody can buy anything on the internet. It's impossible. All you can do is buy a picture of something, or, if you're writing copy, you're creating a mental picture of a thing. So yeah, I'm a big believer in product photography being a giant piece of what you do, and in making something demonstrable. If you can actually show how it works in a 30-second video clip, that works more powerfully than anything. And design, I think you're seeing now, is becoming more and more important, the quality of your design, because we don't have any other way to trust companies. It used to be the old Dan Kennedy world, and Dan at the time was right: ugly sells and pretty doesn't. The truth is that today pretty outsells ugly; we've proved it eight times over. Pretty outsells ugly, especially if you're selling a physical good. So don't skimp on the money you spend on photography and photo editing and all those things.
Interesting thing, Kevin: I was in Guangzhou, China, and I went to this illustration company. Have you been to Yiwu before? You have, okay. So upstairs in Yiwu, on the fourth and fifth floors, it's all service companies, web companies, and I found a company up there doing watches. You can't take a good enough photograph of a watch for it to actually work in a magazine; it's an impossibility. So what they do is take a picture of the watch, pull it into an illustration program (there's one just for jewelry with all of these textures and paintbrushes), and they actually build the watch on top of the photo. They build an illustration of the watch. If you ever pick up a magazine, get a magnifying glass and look at the picture of the Rolex on the back: you can see where there's an illustration piece cut here or there. You don't see any of the photo; they completely overlay it. Sometimes it takes these guys two weeks sitting in Illustrator replacing every little pixel, everything a vector, and then they send that off.
Kevin King: But now AI can do a lot of that.
Perry: I don't know how much I would trust it to do that yet, but it probably can. It can certainly enhance photos a lot. You're seeing AI photo enhancement become a really big deal. Have you seen that thing that takes... I mentioned it at AI Bot Summit, I'm trying to think of the name of it now. Topaz.
Kevin King: Yeah.
Perry: Topaz AI. You can take your old video footage and it'll turn it into 4K footage, and it looks pretty doggone good. You take an old piece of footage you shot 10 years ago, run it through there, and it gives it a whole facelift and makes it really appear to be 4K footage.
Kevin King: Yeah, and Remini does that for photos. You can take some old photo, or even a low-res stock image you downloaded online because they want you to pay for the high-res version, run it through Remini, and it'll upscale it. Upscale.io is another one. There's a bunch of them, and some of it is like, holy cow, this is amazing stuff.
Perry: Another year from now, most of the things we're using services for, you probably won't have to.
We're making a lot of money right now in the Philippines; our outsourcing company uses AI to do things for people. So if you want an illustration of a product or whatever, you send it to us, we're going to charge X for it, but we're actually using tools that cut our labor time down by 80 or 90%. We haven't gotten to where we can cut it all the way out yet, and we still hire art directors. But instead of hiring 30 B-minus designers plus an art director, you use AI and three or four really high-level art directors, and you don't need all the carpenters anymore. If you've seen the way they're building houses now, with the bricklaying machines and all that, being a carpenter or a framer won't be a profession in another 24 months.
Kevin King: Well, that's the scare I think the general public has when it comes to AI: "It's going to take my job, so I don't want it." But look what happened in the industrial revolution, look what happened when the wheel was invented. People will adapt, and if you don't adapt, you're going to get left behind. And I think right now one of the biggest skills, if you're listening to this and you're in high school or college or you're young and still figuring things out, is learning how to do prompting. Good prompting versus okay prompting can make a world of difference with AI, and as this gets more sophisticated, being good at prompting is going to be a major skill set that's in high demand. Would you agree with that?
Perry: I think so. It's funny, though. Now you can go to OpenAI and say, "Write me a Midjourney prompt, use this camera lens and this," but you don't even have to pick the camera lens.
Kevin King: That's where photographers and artists still have an edge right now.
Perry: You kind of don't need it. You can actually have OpenAI write the Midjourney prompt for you. It's crazy, and a lot of people are doing that. I think prompting is going to become easier and easier, but it's still going to require imagination. No artificial intelligence engine is ever going to replace imagination; it's not going to happen. So I think we're fine for a good long while. I don't see it being a problem, but there's good money to be made right now with just arbitrage. You know how it is, Kevin, you've been around this business long enough: any time a market is inefficient, that's when all the money's made. And right now you've got people who need things done and nobody who wants to work, so AI is filling the slot perfectly and we can offer services. It used to be, "We'll do unlimited video editing for $2,000 a month." Well, now we're doing 90% of that video editing with AI. If we were doing it by hand, we'd have to charge $10,000 a month. And at the end of the day, the customer doesn't care. The customer is getting the desired product delivered within a timeline; they don't really care whether you did it yourself or a robot did it, and if they do care, it's probably not your kind of customer. So all the stuff you guys go through, writing product descriptions, all your SEO, your keyword loading, your product photo enhancement, all of it: I'd say within a year, and probably right now if you're studious, you can do 90% of it.
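Perry's point just above, that the routine listing work (product descriptions, SEO copy, keyword loading) can already be done mostly with AI if you are studious about it, looks roughly like the sketch below in practice. This is a generic illustration rather than anyone's production tool: the model name, the product attributes, and the prompt wording are assumptions, and the output still needs the human review both speakers keep insisting on.

```python
# Minimal sketch: draft an e-commerce listing (title plus bullets) from raw attributes.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment; the product
# details and keyword list below are made up for illustration.
from openai import OpenAI

client = OpenAI()

product = {
    "name": "slow-feed dog bowl",
    "attributes": ["non-slip base", "dishwasher safe", "slows eating"],
    "keywords": ["slow feeder", "dog bowl", "puzzle feeder"],  # from your own keyword research
}

def draft_listing(p: dict) -> str:
    """Return a draft title and five bullet points; a human edits before publishing."""
    prompt = (
        f"Write an Amazon-style listing for a {p['name']}.\n"
        f"Work these keywords in naturally: {', '.join(p['keywords'])}.\n"
        f"Highlight these attributes: {', '.join(p['attributes'])}.\n"
        "Output a title under 200 characters, then five benefit-led bullet points."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; swap in whatever model you actually use
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(draft_listing(product))
```

The same pattern extends to the headline rewrites Kevin mentions earlier: swap the prompt for "give me ten catchy ways to say X in the tone of Perry Belcher" and keep a human in the mix-and-match step.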
Kevin King: Yeah, you can, even now rather than within a year. It's been a big thing. I was just in another mastermind with a big Chinese seller; he does $50 million a year or something, based in China, selling into the US, and he said that AI has been a leveling ground for the Chinese sellers.
Perry: Yeah, of course.
Kevin King: Because they used to have all that broken English on their listings, or they couldn't understand the culture well enough to write things the right way, and he said that with AI that advantage is gone for Westerners. So you've got to step up your game. You still have an advantage in branding or innovation or some other areas, but it's leveling the playing field for a lot of people.
Perry: Yeah. We found, with Midjourney, packaging design...
Kevin King: Yeah.
Perry: Packaging design mockups have been amazing. We've come up with some really great packaging ideas we wouldn't have come up with otherwise, and for the most part you can send those over to your factories in China and get a reasonable result back.
Kevin King: People are doing that for products now. They'll come up with a product idea, like, "Hey, I want to make a new dog bowl," give the AI some parameters (it needs to slow the dog down from eating, it shouldn't slip on the floor, whatever), and have the AI create a hundred different models of it, boom, boom, boom, as 3D illustrations. Put those into a tool like PickFu, let people vote on them, and then take the top couple to molding, make prototypes, and do some additional testing. You couldn't do that before. What you can do now is sometimes almost mind-boggling.
Perry: And robotics have really taken down molding costs.
Kevin King: Yeah.
Perry: Back when you and I started, it was, "I want a custom mold for this." "Well, that'll be $100,000." Now it's six grand or whatever, depending on what you're molding, but it's crazy how cheap molding has gotten.
Kevin King: So we're almost out of time here; actually, we've gone over. But real quick before we wrap up: what would you say are three things you're seeing right now, either hot opportunities people need to be paying attention to, or maybe three big mistakes people are making, when it comes to selling physical products?

Check Point CheckMates Cyber Security Podcast
S05E09: The Altavista of Large Language Models

Check Point CheckMates Cyber Security Podcast

Play Episode Listen Later Oct 7, 2023 33:01


PhoneBoy talks to Adam Gray, CTO of Novacoast, about how ChatGPT is used by threat actors to compromise systems, the GPT-4 System Card, where ChatGPT seems to be useful in general with respect to cyber security, ChatGPT writing legal briefs, what early search engines and ChatGPT have in common, and how the more some things change, the more they stay the same.

Locked On Browns - Daily Podcast On The Cleveland Browns
Browns Starting Safety Juan Thornhill Says The Defense Will Be Special!

Locked On Browns - Daily Podcast On The Cleveland Browns

Play Episode Listen Later Mar 16, 2023 36:01


 Former UVA safety and two-time Super Bowl Champion Juan Thornhill is headed to the Cleveland Browns out of free agency. Former Virginia safety and two-time Super Bowl Champion Juan Thornhill has a new home in the National Football League. After spending the first four seasons of his professional career with the Kansas City Chiefs, Thornhill signed a three-year, $21 million contract to join the Cleveland Browns, as first reported on Wednesday by the NFL Network's Tom Pelissero. Two-time Super Bowl winner Juan Thornhill agreed to terms with the #Browns on a three-year, $21 million contract with $14 million fully guaranteed at signing over the first two years, per sources. Thornhill started 52 games for the Kansas City Chiefs and is a big addition for the Cleveland Browns.  After an impressive career as a defensive back for the UVA football program, the Altavista, Virginia native was selected by the Chiefs with the 63rd overall pick in the second round of the 2019 NFL Draft. Thornhill had an excellent rookie season with three interceptions, but tore his left ACL late in the year and watched from the sidelines as the Chiefs beat the 49ers in Super Bowl LIV. Kansas City reached the Super Bowl the next season and the AFC Championship Game the year after that, but came up short of winning it all.  This season, the Chiefs reached the mountaintop again and Juan Thornhill played a pivotal role, recording five tackles and a pass breakup in Kansas City's 38-35 victory over the Philadelphia Eagles to win Super Bowl LVII, earning Thornhill his second Super Bowl ring. Thornhill was brilliant in the entire postseason run for the Chiefs, grading at 90.5 on Pro Football Focus over his final five games, the top mark in the NFL among safeties during that period.  In 65 total appearances and 52 starts over four seasons with the Chiefs, Thornhill recorded eight interceptions, one pick-six, 20 passes defended, one sack, one forced fumble, one fumble recovery, five tackles for loss, and 234 total tackles. In 2022, Thornhill had three interceptions, nine passes defended and 71 tackles.  #Browns Built Bar Built Bar is a protein bar that tastes like a candy bar. Go to builtbar.com and use promo code “LOCKEDON15,” and you'll get 15% off your next order. Ultimate Football GM To download the game just visit Ultimate-GM.com or look it up on the app stores. Our listeners get a 100% free boost to their franchise when using the promo LOCKEDON (ALL CAPS) in the game store. FanDuel Make Every Moment More. Don't miss the chance to get your No Sweat First Bet up to ONE THOUSAND DOLLARS in Bonus Bets when you go FanDuel.com/LOCKEDON. FANDUEL DISCLAIMER: 21+ in select states. First online real money wager only. Bonus issued as nonwithdrawable free bets that expires in 14 days. Restrictions apply. See terms at sportsbook.fanduel.com. Gambling Problem? Call 1-800-GAMBLER or visit FanDuel.com/RG (CO, IA, MD, MI, NJ, PA, IL, VA, WV), 1-800-NEXT-STEP or text NEXTSTEP to 53342 (AZ), 1-888-789-7777 or visit ccpg.org/chat (CT), 1-800-9-WITH-IT (IN), 1-800-522-4700 (WY, KS) or visit ksgamblinghelp.com (KS), 1-877-770-STOP (LA), 1-877-8-HOPENY or text HOPENY (467369) (NY), TN REDLINE 1-800-889-9789 (TN) Learn more about your ad choices. Visit podcastchoices.com/adchoices