Disclaimer: We recorded this episode ~1.5 months ago, timed for the FastHTML release. It then got bottlenecked by the Llama 3.1, Winds of AI Winter, and SAM2 episodes, so we're a little late. Since then FastHTML was released, swyx is building an app in it for AINews, and Anthropic has also released their prompt caching API.

Remember when Dylan Patel of SemiAnalysis coined the GPU Rich vs GPU Poor war? (If not, see our pod with him.) The idea was that if you're GPU poor you shouldn't waste your time trying to solve GPU rich problems (i.e. pre-training large models) and are better off working on fine-tuning, optimized inference, etc. Jeremy Howard (see our "End of Finetuning" episode to catch up on his background) and Eric Ries founded Answer.AI to do exactly that: "Practical AI R&D", which is very in line with GPU poor needs. For example, one of their first releases was a system based on FSDP + QLoRA that let anyone train a 70B model on two NVIDIA 4090s. Since then, they have come out with a long list of super useful projects (in no particular order, and non-exhaustive):

* FSDP QDoRA: just as memory-efficient and scalable as FSDP/QLoRA, and critically also as accurate for continued pre-training as full-weight training.
* Cold Compress: a KV cache compression toolkit that lets you scale sequence length without impacting speed.
* colbert-small: a state-of-the-art retriever at only 33M params.
* JaColBERTv2.5: a new state-of-the-art retriever on all Japanese benchmarks.
* gpu.cpp: portable GPU compute for C++ with WebGPU.
* Claudette: a better Anthropic API SDK.

They also recently released FastHTML, a new way to create modern interactive web apps.
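The FSDP + QLoRA release hinges on storing the frozen base weights in 4-bit form so a 70B model fits on consumer GPUs. As a rough illustration of the memory trade involved, here is a toy absmax quantizer; note that real QLoRA uses blockwise NF4 quantization, so this is a sketch of the idea, not Answer.AI's implementation:

```python
import numpy as np

def quantize_absmax_4bit(w):
    """Quantize a float tensor to 4-bit integers plus one float scale.

    Storing 4 bits per weight instead of 32 cuts memory roughly 8x;
    this symmetric absmax scheme stands in for QLoRA's blockwise NF4.
    """
    scale = np.abs(w).max() / 7.0  # map [-max, max] onto the symmetric int4 range [-7, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor for use in the forward pass."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.7], dtype=np.float32)
q, s = quantize_absmax_4bit(w)
w_hat = dequantize(q, s)

# reconstruction error is bounded by half a quantization step
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

In the QLoRA recipe the dequantized weights stay frozen; only small LoRA adapter matrices are trained on top, which is what keeps the optimizer state tiny.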
Jeremy recently released a 1-hour "Getting started" tutorial on YouTube; while this isn't AI related per se, it's close to home for any AI Engineer looking to iterate quickly on new products. In this episode we broke down 1) how they recruit, 2) how they organize what to research, and 3) how the community comes together. At the end, Jeremy gave us a sneak peek at something new that he's working on that he calls dialogue engineering:

So I've created a new approach. It's not called prompt engineering. I'm creating a system for doing dialogue engineering. It's currently called AI magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it.

He explains it a bit more ~44:53 in the pod, but we'll just have to wait for the public release to figure out exactly what he means.

Timestamps

* [00:00:00] Intro by Suno AI
* [00:03:02] Continuous Pre-Training is Here
* [00:06:07] Schedule-Free Optimizers and Learning Rate Schedules
* [00:07:08] Governance and Structural Issues within OpenAI and Other AI Labs
* [00:13:01] How Answer.ai works
* [00:23:40] How to Recruit Productive Researchers
* [00:27:45] Building a new BERT
* [00:31:57] FSDP, QLoRA, and QDoRA: Innovations in Fine-Tuning Large Models
* [00:36:36] Research and Development on Model Inference Optimization
* [00:39:49] FastHTML for Web Application Development
* [00:46:53] AI Magic & Dialogue Engineering
* [00:52:19] AI wishlist & predictions

Show Notes

* Jeremy Howard
* Previously on Latent Space: The End of Finetuning, NeurIPS Startups
* Answer.ai
* Fast.ai
* FastHTML
* answerai-colbert-small-v1
* gpu.cpp
* Eric Ries
* Aaron DeFazio
* Yi Tay
* Less Wright
* Benjamin Warner
* Benjamin Clavié
* Jono Whitaker
* Austin Huang
* Eric Gilliam
* Tim Dettmers
* Colin Raffel
* Sebastian Raschka
* Carson Gross
* Simon Willison
* Sepp Hochreiter
* Llama 3.1 episode
* Snowflake Arctic
* Ranger Optimizer
* Gemma.cpp
* HTMX
* UL2
* BERT
* DeBERTa
* Efficient finetuning of Llama 3 with FSDP QDoRA
* xLSTM

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:14]: And today we're back with Jeremy Howard, I think your third appearance on Latent Space. Welcome.

Jeremy [00:00:19]: Wait, third? Second?

Swyx [00:00:21]: Well, I grabbed you at NeurIPS.

Jeremy [00:00:23]: I see.

Swyx [00:00:24]: Very fun, standing outside street episode.

Jeremy [00:00:27]: I never heard that, by the way. You've got to send me a link. I've got to hear what it sounded like.

Swyx [00:00:30]: Yeah. Yeah, it's a NeurIPS podcast.

Alessio [00:00:32]: I think the two episodes are six hours, so there's plenty to listen to, we'll make sure to send it over.

Swyx [00:00:37]: Yeah, we're trying this thing where at the major ML conferences, we, you know, do a little audio tour to give people a sense of what it's like. But the last time you were on, you declared the end of fine tuning. I hope that I sort of editorialized the title a little bit, and I know you were slightly uncomfortable with it, but you just own it anyway. I think you're very good at the hot takes. And we were just discussing in our pre-show that it's really happening, that the continued pre-training is really happening.

Jeremy [00:01:02]: Yeah, absolutely.
I think people are starting to understand that treating the three ULMFiT steps of like pre-training, you know, and then the kind of like what people now call instruction tuning, and then, I don't know if we've got a general term for this, the DPO, RLHF step, you know, or the task training, they're not actually as separate as we originally suggested they were in our paper, and when you treat it more as a continuum, and that you make sure that you have, you know, more of kind of the original data set incorporated into the later stages, and that, you know, we've also seen with Llama 3, this idea that those later stages can be done for a lot longer. These are all of the things I was kind of trying to describe there. It wasn't the end of fine tuning, but more that we should treat it as a continuum, and we should have much higher expectations of how much you can do with an already trained model. You can really add a lot of behavior to it, you can change its behavior, you can do a lot. So a lot of our research has been around trying to figure out how to modify the model by a larger amount rather than starting from random weights, because I get very offended at the idea of starting from random weights.

Swyx [00:02:14]: Yeah, I saw that at ICLR in Vienna, there was an outstanding paper about starting transformers from data-driven priors.
I don't know if you saw that one, they called it sort of never trained from scratch, and I think it was kind of rebelling against like the sort of random initialization.

Jeremy [00:02:28]: Yeah, I've, you know, that's been our kind of continuous message since we started Fast.ai, is if you're training from random weights, you better have a really good reason, you know, because it seems so unlikely to me that nobody has ever trained on data that has any similarity whatsoever to the general class of data you're working with, and that's the only situation in which I think starting from random weights makes sense.

Swyx [00:02:51]: The other trend since our last pod that I would point people to is I'm seeing a rise in multi-phase pre-training. So Snowflake released a large model called Snowflake Arctic, where they detailed three phases of training where they had like a different mixture: there was like 75% web in the first instance, and then they reduced the percentage of the web text by 10% each time and increased the amount of code in each phase. And I feel like multi-phase is being called out in papers more. I feel like it's always been a thing, like changing data mix is not something new, but calling it a distinct phase is new, and I wonder if there's something that you're seeing on your end.

Jeremy [00:03:32]: Well, so they're getting there, right? So the point at which they're doing proper continued pre-training is the point at which that becomes a continuum rather than a phase. So the only difference with what I was describing last time is to say like, oh, there's a function or whatever, which is happening every batch. It's not a huge difference. You know, I always used to get offended when people had learning rates that like jumped. And so one of the things I started doing early on in Fast.ai was to say to people like, no, your learning rate schedule should be a function, not a list of numbers.
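The "schedule as a function, not a list of numbers" idea, extended from learning rates to the training data mix, can be sketched in a few lines. The cosine shape and the 75%-web starting mix below are purely illustrative choices, not Fast.ai defaults:

```python
import math

def cosine_lr(step, total_steps, lr_max=1e-3, lr_min=1e-5):
    """Learning rate as a smooth function of training progress,
    rather than a hand-maintained list of (epoch, lr) jumps."""
    t = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

def web_fraction(step, total_steps, start=0.75, end=0.45):
    """The same idea applied to the data mix: the share of web text in
    each batch decays continuously instead of dropping in discrete phases."""
    t = step / total_steps
    return start + (end - start) * t

# starts at lr_max and decays smoothly to lr_min
assert abs(cosine_lr(0, 1000) - 1e-3) < 1e-9
assert abs(cosine_lr(1000, 1000) - 1e-5) < 1e-9
```

Because both are functions of progress, "every batch" updates come for free, which is the continuum-not-phases point Jeremy is making about Arctic-style staged training.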
So now I'm trying to give the same idea about training mix.

Swyx [00:04:07]: There's been pretty public work from Meta on schedule-free optimizers. I don't know if you've been following Aaron DeFazio and what he's doing, just because you mentioned learning rate schedules, you know, what if you didn't have a schedule?

Jeremy [00:04:18]: I don't care very much, honestly. I don't think that schedule-free optimizer is that exciting. It's fine. We've had non-scheduled optimizers for ages, like Less Wright, who's now at Meta, who was part of the Fast.ai community there, created something called the Ranger optimizer. I actually like having more hyperparameters. You know, as soon as you say schedule-free, then like, well, now I don't get to choose. And there isn't really a mathematically correct way of, like, I actually try to schedule more parameters rather than less. So like, I like scheduling my epsilon in my Adam, for example. I schedule all the things. But then the other thing we always did with the Fast.ai library was make it so you don't have to set any schedules. So Fast.ai always supported, like, you didn't even have to pass a learning rate. Like, it would always just try to have good defaults and do the right thing. But to me, I like to have more parameters I can play with if I want to, but you don't have to.

Alessio [00:05:08]: And then on the less technical side, I guess, your issue with the market was some of the large research labs taking all this innovation kind of behind closed doors and whether or not that's good, which it isn't. And now we could maybe make it more available to people. And then a month after we released the episode, there was the whole Sam Altman drama and like all the OpenAI governance issues. And maybe people started to think more, okay, what happens if some of these kind of labs, you know, start to break from within, so to speak? And the alignment of the humans is probably going to fall before the alignment of the models.
So I'm curious, like, if you have any new thoughts, and maybe we can also tie in some of the way that we've been building Answer as like a public benefit corp and some of those aspects.

Jeremy [00:05:51]: Sure. So, yeah, I mean, it was kind of uncomfortable because two days before Altman got fired, I did a small public video interview in which I said, I'm quite sure that OpenAI's current governance structure can't continue and that it was definitely going to fall apart. And then it fell apart two days later and a bunch of people were like, what did you know, Jeremy?

Alessio [00:06:13]: What did Jeremy see?

Jeremy [00:06:15]: I didn't see anything. It's just obviously true. Yeah. So my friend Eric Ries and I spoke a lot before that about, you know, Eric's, I think probably most people would agree, the top expert in the world on startup and AI governance. And you know, we could both clearly see that this didn't make sense: to have like a so-called non-profit where then there are people working at a company, a commercial company that's owned by or controlled nominally by the non-profit, where the people in the company are being given the equivalent of stock options, like everybody there was working there expecting to make money largely from their equity. So the idea that then a board could exercise control by saying like, oh, we're worried about safety issues and so we're going to do something that decreases the profit of the company, when every stakeholder in the company, their remuneration pretty much is tied to their profit, it obviously couldn't work. So I mean, that was a huge oversight there by someone. I guess part of the problem is that the kind of people who work at non-profits, and in this case the board, you know, are kind of academics and, you know, people who are kind of true believers. I think it's hard for them to realize that 99.999% of the world is driven very heavily by money, especially huge amounts of money.
So yeah, Eric and I had been talking for a long time before that about what could be done differently, because also companies are sociopathic by design and so the alignment problem as it relates to companies has not been solved. Like, companies become huge, they devour their founders, they devour their communities and they do things where even the CEOs, you know, often of big companies tell me like, I wish our company didn't do that thing. You know, I know that if I didn't do it, then I would just get fired and the board would put in somebody else, and the board knows if they don't do it, then their shareholders can sue them because they're not maximizing profitability or whatever. So what Eric's spent a lot of time doing is trying to think about how do we make companies less sociopathic, you know, or maybe a better way to think of it is like, how do we make it so that the founders of companies can ensure that their companies continue to actually do the things they want them to do? You know, when we started a company, hey, we very explicitly decided we got to start a company, not an academic lab, not a nonprofit, you know, we created a Delaware C corp, you know, the most company kind of company. But when we did so, we told everybody, you know, including our first investors, which was you, Alessio: we are going to run this company on the basis of maximizing long-term value. And in fact, when we did our second round, which was an angel round, we had everybody invest through a long-term SPV, which we set up where everybody had to agree to vote in line with long-term value principles. Because it's never enough just to say to people, okay, we're trying to create long-term value here for society as well as for ourselves, and everybody's like, oh, yeah, yeah, I totally agree with that.
But when it comes to like, okay, well, here's a specific decision we have to make, which will not maximize short-term value, people suddenly change their mind. So you know, it has to be written into the legal documents of everybody so that there's no question that that's the way the company has to be managed. So then you mentioned the PBC aspect, Public Benefit Corporation, which I never quite understood previously. And it turns out it's incredibly simple, like it took, you know, like one paragraph added to our corporate documents to become a PBC. It was cheap, it was easy, but it's got this huge benefit, which is if you're not a public benefit corporation, then somebody can come along and offer to buy you with a stated description of like turning your company into the thing you most hate, right? And if they offer you more than the market value of your company and you don't accept it, then you are not necessarily meeting your fiduciary responsibilities. So the way Eric always described it to me is like, if Philip Morris came along and said that you've got great technology for marketing cigarettes to children, so we're going to pivot your company to do that entirely, and we're going to pay you 50% more than the market value, you're going to have to say yes. If you have a PBC, then you are more than welcome to say no, if that offer is not in line with your stated public benefit. So our stated public benefit is to maximize the benefit to society through using AI. So given that more children smoking doesn't do that, then we can say like, no, we're not selling to you.

Alessio [00:11:01]: I was looking back at some of our emails. You sent me an email on November 13th about talking, and then on the 14th, I sent you an email: "working together to free AI" was the subject line. And then that was kind of the start of the seed round. And then two days later, someone got fired.
So you know, you were having these thoughts even before we had a public example of why some of the current structures didn't work. So yeah, you were very ahead of the curve, so to speak. You know, people can read your awesome introduction blog on Answer and the idea of having an R&D lab versus an R lab and then a D lab somewhere else. I think to me, the most interesting thing has been hiring and some of the awesome people that you've been bringing on that maybe don't fit the central casting of Silicon Valley, so to speak. Like sometimes it's like playing baseball cards, you know, people are like, oh, what teams was this person on, where did they work, versus focusing on ability. So I would love for you to give a shout out to some of the awesome folks that you have on the team.

Jeremy [00:11:58]: So, you know, there's like a graphic going around describing like the people at xAI, you know, the Elon Musk thing. And like they are all connected to like multiple of Stanford, Meta, DeepMind, OpenAI, Berkeley, Oxford. Look, these are all great institutions and they have good people. And I'm definitely not at all against that, but damn, there's so many other people. And one of the things I found really interesting is almost any time I see something which I think like, this is really high quality work and it's something I don't think would have been built if that person hadn't built the thing right now, I nearly always reach out to them and ask to chat. And I tend to dig in to find out like, okay, you know, why did you do that thing? Everybody else has done this other thing, your thing's much better, but it's not what other people are working on. And like 80% of the time, I find out the person has a really unusual background.
So like often they'll have like, either they like came from poverty and didn't get an opportunity to go to a good school, or had dyslexia and, you know, got kicked out of school in year 11, or they had a health issue that meant they couldn't go to university, or something happened in their past and they ended up out of the mainstream. And then they kind of succeeded anyway. Those are the people that throughout my career, I've tended to kind of accidentally hire more of, but it's not exactly accidental. It's like when I see two people who have done extremely well: one of them did extremely well in exactly the normal way, from a background entirely pointing in that direction, and they achieved all the hurdles to get there. And like, okay, that's quite impressive, you know. But another person did just as well, despite lots of constraints, doing things in really unusual ways and coming up with different approaches. That's normally the person I'm likely to find useful to work with, because they're often like risk-takers, they're often creative, they're often extremely tenacious, they're often very open-minded. So that's the kind of folks I tend to find myself hiring. So now at Answer.ai, it's a group of people that are strong enough that nearly every one of them has independently come to me in the past few weeks and told me that they have imposter syndrome and they're not convinced that they're good enough to be here. And I'd heard it enough times that I was like, okay, I don't think it's possible that all of you are so far behind your peers that you shouldn't get to be here. But I think part of the problem is as an R&D lab, the great developers look at the great researchers and they're like, wow, these big-brained, crazy research people with all their math and s**t, they're too cool for me, oh my God.
And then the researchers look at the developers and they're like, oh, they're killing it, making all this stuff with all these people using it and talking on Twitter about how great it is. I think they're both a bit intimidated by each other, you know. And so I have to kind of remind them like, okay, there are lots of things in this world where you suck compared to lots of other people in this company, but also vice versa, you know, for all things. And the reason you came here is because you wanted to learn about those other things from those other people and have an opportunity to like bring them all together into a single unit. You know, it's not reasonable to expect you're going to be better at everything than everybody else. I guess the other part of it is, for nearly all of the people in the company, to be honest, they have nearly always been better than everybody else at nearly everything they're doing nearly everywhere they've been. So it's kind of weird to be in this situation now where it's like, gee, I can clearly see that I suck at this thing that I'm meant to be able to do compared to these other people, where I'm like the worst in the company at this thing for some things. So I think that's a healthy place to be, you know, as long as you keep reminding each other that that's actually why we're here. And like, it's all a bit of an experiment, like we don't have any managers. We don't have any hierarchy from that point of view. So for example, I'm not a manager, which means I don't get to tell people what to do or how to do it or when to do it. Yeah, it's been a bit of an experiment to see how that would work out. And it's been great. So for instance, Ben Clavié, who you might have come across, he's the author of RAGatouille, he's the author of rerankers, super strong information retrieval guy. And a few weeks ago, you know, this additional channel appeared on Discord, on our private Discord, called BERT24.
And these people started appearing; in our collab sections, we have a collab section for like collaborating with outsiders. And these people started appearing in BERT24, all these names that I recognize, and they're all talking about like the next generation of BERT. And I start following along, it's like, okay, Ben decided that, I think quite rightly, we need a new BERT. Because everybody, like so many people are still using BERT, and it's still the best at so many things, but it actually doesn't take advantage of lots of best practices. And so he just went out and found basically everybody who's created better BERTs in the last four or five years, brought them all together, suddenly there's this huge collaboration going on. So yeah, I didn't tell him to do that. He didn't ask my permission to do that. And then, like, Benjamin Warner dived in, and he's like, oh, I created a whole transformers from scratch implementation designed to be maximally hackable. He originally did it largely as a teaching exercise to show other people, but he was like, I could, you know, use that to create a really hackable BERT implementation. In fact, he didn't say that. He said, I just did do that, you know, and I created a repo, and then everybody starts using it. They're like, oh my god, this is amazing. I can now implement all these other BERT things. And it's not just Answer.AI guys there, you know, there's lots of folks, you know, who have like contributed new data set mixes and blah, blah, blah. So, I mean, I can help in the same way that other people can help. So like, then Ben Clavié reached out to me at one point and said, can you help me, like, what have you learned over time about how to manage intimidatingly capable and large groups of people who you're nominally meant to be leading? And so, you know, I like to try to help, but I don't direct.
Another great example was Kerem, who, after our FSDP QLoRA work, decided quite correctly that it didn't really make sense to use LoRA in today's world. You want to use the normalized version, which is called DoRA. Like two or three weeks after we did FSDP QLoRA, he just popped up and said, okay, I've just converted the whole thing to DoRA, and I've also created these vLLM extensions, and I've got all these benchmarks, and, you know, now I've got training of quantized models with adapters that are as fast as LoRA, and actually, weirdly, better than fine tuning. Just like, okay, that's great, you know. And yeah, so the things we've done to try to help make these things happen as well is we don't have any required meetings, you know, but we do have a meeting for each pair of major time zones that everybody's invited to, and, you know, people see their colleagues doing stuff that looks really cool and say, like, oh, how can I help, you know, or how can I learn or whatever. So another example is Austin, who, you know, amazing background. He ran AI at Fidelity, he ran AI at Pfizer, he ran browsing and retrieval for Google's DeepMind stuff, created gemma.cpp, and he's been working on a new system to make it easier to do WebGPU programming, because, again, he quite correctly identified a need there. So I said to him, like, okay, I want to learn about that. Not an area that I have much expertise in, so, you know, he's going to show me what he's working on and teach me a bit about it, and hopefully I can help contribute. I think one of the key things that's happened in all of these is everybody understands what Eric Gilliam, who wrote the second blog post in our series, the R&D historian, describes as a large yard with narrow fences. Everybody has total flexibility to do what they want.
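For readers unfamiliar with the "normalized version": DoRA keeps a LoRA-style low-rank update but re-decomposes the adapted weight into a per-column magnitude and direction, training the magnitude separately. A minimal numpy sketch of that decomposition (toy shapes, random data, not Kerem's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 8, 2                      # weight shape d x k, adapter rank r

W = rng.normal(size=(d, k))            # frozen pretrained weight
B = np.zeros((d, r))                   # low-rank adapters; B starts at zero,
A = rng.normal(size=(r, k))            # as in LoRA, so training begins at W

# LoRA: purely additive low-rank update
W_lora = W + B @ A

# DoRA: split the adapted weight into magnitude and direction; the
# per-column magnitude m is a separately trained parameter, initialized
# from the column norms of W
m = np.linalg.norm(W, axis=0)
V = W + B @ A
W_dora = m * (V / np.linalg.norm(V, axis=0))

# with B initialized to zero, both reduce exactly to the frozen weight
assert np.allclose(W_lora, W)
assert np.allclose(W_dora, W)
```

The extra degree of freedom (magnitude vs. direction) is what lets DoRA track full fine-tuning behavior more closely than plain LoRA at essentially the same memory cost.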
We all understand kind of roughly why we're here, you know, we agree with the premises around, like, everything's too expensive, everything's too complicated, people are building too many vanity foundation models rather than taking better advantage of fine-tuning, like, there's this kind of general sense of we're all on the same wavelength about, you know, all the ways in which current research is fucked up, and, you know, all the ways in which we're worried about centralization. We all care a lot about not just research for the point of citations, but research that actually wouldn't have happened otherwise, and actually is going to lead to real-world outcomes. And so, yeah, with this kind of, like, shared vision, people understand, like, you know, so when I say, like, oh, well, you know, tell me, Ben, about BERT24, what's that about? And he's like, you know, like, oh, well, you know, you can see it from an accessibility point of view, or you can see it from an actual practical impact point of view, there's far too much focus on decoder-only models, and, you know, like, BERT's used in all of these different places in industry, and so I can see, like, in terms of our basic principles, what we're trying to achieve, this seems like something important. And so I think it's really helpful that we have that kind of shared perspective, you know?

Alessio [00:21:14]: Yeah. And before we maybe talk about some of the specific research, when you're, like, reaching out to people, interviewing them, what are some of the traits, like, how do these things come out, you know, usually? Is it working on side projects that you, you know, you're already familiar with? Is there anything, like, in the interview process that, like, helps you screen for people that are less pragmatic and more research-driven versus some of these folks that are just gonna do it, you know?
They're not waiting for, like, the perfect process.

Jeremy [00:21:40]: Everybody who comes through the recruiting is interviewed by everybody in the company. You know, our goal is 12 people, so it's not an unreasonable amount. The other thing to say is everybody so far who's come into the recruiting pipeline, everybody bar one, has been hired. Which is to say our original curation has been good. And that's actually pretty easy, because nearly everybody who's come in through the recruiting pipeline are people I know pretty well. So Jono Whitaker and I, you know, he worked on the stable diffusion course we did. He's outrageously creative and talented, and he's a super enthusiastic tinkerer, just likes making things. Benjamin was one of the strongest parts of the fast.ai community, which is now the alumni. It's, like, hundreds of thousands of people. And you know, again, like, they're not people who a normal interview process would pick up, right? So Benjamin doesn't have any qualifications in math or computer science. Jono was living in Zimbabwe, you know, he was working on, like, helping some African startups, you know, but not FAANG kind of credentials. But yeah, I mean, when you actually see people doing real work and they stand out above, you know, we've got lots of Stanford graduates and OpenAI people and whatever in our alumni community as well. You know, when you stand out above all of those people anyway, obviously you've got something going for you. You know, Austin, him and I worked together on the masks study we did in the Proceedings of the National Academy of Sciences. You know, we had worked together, and again, that was a group of, like, basically the 18 or 19 top experts in the world on public health and epidemiology and research design and so forth. And Austin was, you know, one of the strongest people in that collaboration.
So yeah, I've been lucky enough to have had opportunities to work with some people who are great, and I'm a very open-minded person, so I'm always happy to try working with pretty much anybody, and some people stand out. There have been some exceptions, people I haven't previously known, like Ben Clavié, actually; I didn't know him before. But with him, you just read his code, and I thought: oh, that's really well-written code. And it's not written exactly the same way as everybody else's code, and it's not written to do exactly the same thing as everybody else's code. And then when I chatted to him, I felt like we'd known each other for years, like we were just on the same wavelength, but I could pretty much tell that was going to happen just by reading his code. I think you express a lot in the code you choose to write and how you choose to write it. Or another example, a guy named Vik, who was previously the CEO of Dataquest. In that case, he's created a really successful startup. He won basically the first Kaggle NLP competition, which was automatic essay grading. He's got the current state-of-the-art OCR system, Surya. Again, he's just a guy who obviously builds stuff; he doesn't ask for permission, he doesn't need external resources. Actually, Kerem's another great example of this. I already knew Kerem very well because he was my best ever master's student, but it wasn't a surprise when he went off to create the state-of-the-art language model in Turkish, on his own, in his spare time, with no budget, from scratch. This is not fine-tuning or whatever; he went back to Common Crawl and did everything. I don't know what I'd describe that process as, but it's not at all based on credentials.Swyx [00:25:17]: Assemble based on talent, yeah.
We wanted to dive in a little bit more on turning from the people side of things into the technical bets that you're making. Just a little bit more on BERT. We actually just did an interview with Yi Tay from Reka, I don't know if you're familiar with his work, but it's also another encoder-decoder bet, and one of his arguments was that people over-index on the decoder-only GPT-3 type paradigm. I wonder if you have thoughts there that are maybe non-consensus as well.Jeremy [00:25:45]: Yeah, no, absolutely. So I think it's a great example. One of the people we're collaborating with a little bit on BERT24 is Colin Raffel, who is the guy behind most of that stuff; between T5 and UL2, there's a lot of really interesting work. And one of the things I've been encouraging the BERT group to do, and Colin has as well, is to consider using a T5 pre-trained encoder backbone as a thing you fine-tune, which I think would be really cool. Colin was also saying: actually, just use an encoder-decoder as your BERT, why not use that as a baseline? Which I also think is a good idea.Swyx [00:26:25]: What technical arguments are people under-weighting?Jeremy [00:26:27]: I mean, Colin would be able to describe this much better than I can, but I'll give my slightly non-expert attempt. Think about diffusion models, right? In stable diffusion we use things like a UNet. You have this downward path, and then in the upward path you have the cross connections, which isn't attention, but it's a similar idea: you're inputting the original encoding path into your decoding path. It's critical to making it work, because otherwise in the decoding part, the model has to do so much from scratch. So if you're doing translation, that's a classic encoder-decoder example.
If it's decoder-only, you never get the opportunity to find the right feature engineering, the right feature encoding, for the original sentence, and it kind of means that on every token you generate, you have to recreate the whole thing. So if you have an encoder, it's basically saying: okay, this is your opportunity, model, to create a really useful feature representation of your input information. So I think there are really strong arguments for encoder-decoder models anywhere there is this kind of context or source thing. And then why encoder-only? Because so much of the time what we actually care about is a classification. It's an output; it's not generating an arbitrary-length sequence of tokens. So anytime you're not generating an arbitrary-length sequence of tokens, decoder models don't seem to make much sense. Now the interesting thing is, you see on Kaggle competitions that decoder models are still at least competitive with things like DeBERTa v3, but they have to be way bigger to be competitive, and the only reason they're competitive is that people have put a lot more time and money and effort into training the decoder-only ones. There isn't a recent DeBERTa. There isn't a recent BERT. It's a whole part of the world that people have slept on a little bit. And this is just how trends happen, rather than how it should be, which to me is: let's look at the thing that showed signs of being useful in the past but that nobody really followed up on properly. That's the more interesting path. Instead, people tend to go: I need to get citations, so what's everybody else doing? Can I make it 0.1% better, or 0.1% faster? That's what everybody tends to do. Yeah.
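The argument above can be made concrete with a toy sketch (not a real model, just call counting): with a dedicated encoder, the source is turned into a reusable representation once; in the decoder-only framing, the source features are conceptually re-derived as part of each generation step. Note that real decoder-only models cache activations, so this is purely an illustration of the architectural point that there is no separate, reusable source encoding.

```python
# Toy illustration: count how often the "source" must be processed when
# generating n tokens with and without a dedicated encoder.

def make_counter():
    calls = {"n": 0}
    def encode(src):
        calls["n"] += 1
        return [ord(c) for c in src]  # stand-in for a learned representation
    return encode, calls

def encoder_decoder_generate(src, n_tokens, encode):
    features = encode(src)            # source is encoded exactly once
    return [sum(features) + i for i in range(n_tokens)]

def decoder_only_generate(src, n_tokens, encode):
    out = []
    for i in range(n_tokens):
        features = encode(src)        # conceptually re-derived at each step
        out.append(sum(features) + i)
    return out

enc1, c1 = make_counter()
encoder_decoder_generate("hello", 10, enc1)
enc2, c2 = make_counter()
decoder_only_generate("hello", 10, enc2)
print(c1["n"], c2["n"])  # 1 vs 10
```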
So I think Yi Tay's work at Reka is commercially interesting now, because here's a whole model that's been trained in a different way, so there's probably a whole lot of tasks it's better at than GPT and Gemini and Claude. That should be a good commercial opportunity for them if they can figure out what those tasks are.Swyx [00:29:07]: Well, if rumors are to be believed, and he didn't comment on this, Snowflake may figure out the commercialization for them. So we'll see.Jeremy [00:29:14]: Good.Alessio [00:29:16]: Let's talk about FSDP, QLoRA, QDoRA, and all of that awesome stuff. One of the things we talked about last time is that some of these models are meant to run on systems that nobody can really own, no single person. And then you were like, well, what if you could fine-tune a 70B model on a 4090? And I was like, that sounds great, Jeremy, but can we actually do it? And then obviously you all figured it out. Can you maybe tell us some of the war stories behind that: the idea behind FSDP, which is fully sharded data parallel computation, and then QLoRA, which is: don't touch all the weights, quantize the model, and then within the quantized model only train certain layers instead of everything.Jeremy [00:29:57]: Well, do the adapters. Yeah.Alessio [00:29:59]: Yeah, do the adapters. I will leave the floor to you. I think before you published it, nobody thought this was a short-term thing that we were just going to have. And now it's like, oh, obviously you can do it, but it's not that easy.Jeremy [00:30:12]: Yeah. I mean, to be honest, it was extremely unpleasant work to do, not at all enjoyable. I did version 0.1 of it myself before we had launched the company, or at least the pieces. They're all pieces that are difficult to work with, right?
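The core QLoRA idea Alessio describes can be sketched in a few lines of plain Python: freeze a quantized base weight matrix and train only a small floating-point adapter. This is a deliberately tiny caricature; real QLoRA uses 4-bit NF4 quantization via bitsandbytes and low-rank matrices, not a per-element delta.

```python
# Toy sketch of the QLoRA idea: the base weights are stored as small ints
# with a scale, and only the floating-point adapter is ever trained.

def quantize(w, bits=8):
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(x) for x in w) / qmax
    return [round(x / scale) for x in w], scale

def dequantize(q, scale):
    return [x * scale for x in q]

base = [0.12, -0.53, 0.98, -0.27]
q, s = quantize(base)
adapter = [0.0] * len(base)            # the only trainable parameters

def effective_weights(q, s, adapter):
    return [b + a for b, a in zip(dequantize(q, s), adapter)]

# "Training" touches only the adapter; the quantized base never changes.
adapter[1] += 0.05
w = effective_weights(q, s, adapter)
err = max(abs(a - b) for a, b in zip(dequantize(q, s), base))
print(err < 0.01)  # quantization error is small
```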
So for the quantization, I chatted to Tim Dettmers quite a bit, and he very much encouraged me by saying it's possible. He actually thought it'd be easy. It probably would be easy for him, but I'm not Tim Dettmers. He wrote bitsandbytes, which is his quantization library. He wrote it for a paper; he didn't write it to be production-quality code. It's now something everybody's using, at least the CUDA bits, so it's not particularly well structured. There are lots of code paths that never get used, there are multiple versions of the same thing, and you have to try to figure it out. So trying to get my head around that was hard. And because the interesting bits are all written in CUDA, it's hard to step through it and see what's happening. And then FSDP is this very complicated library in PyTorch, which is not particularly well documented, so the only real way to understand it properly is, again, to just read the code and step through it. And then bitsandbytes doesn't really work in practice unless it's used with PEFT, the Hugging Face library, and PEFT doesn't really work in practice unless you use it with other things. There's a lot of coupling in the Hugging Face ecosystem, where none of it works separately; you have to use it all together, which I don't love. So trying to get a minimal example I could play with was really hard, and I ended up having to rewrite a lot of it myself to create this minimal script. One thing that helped a lot was that Meta had this llama-recipes repo, which came out just a little bit before I started working on this, and they had a kind of role-model example of how to train with FSDP and LoRA, though not QLoRA, on Llama.
A lot of the interesting stuff I discovered had been put together by Less Wright, who was actually the guy in the fast.ai community I mentioned who created the Ranger optimizer. He's doing a lot of great stuff at Meta now. So that helped get some minimal stuff going, and then it was great once Benjamin and Jono joined full time. We basically hacked at that together, and then Kerem joined a month later or something. And it was just a lot of fiddly, detailed engineering on barely documented bits of obscure internals. My focus was to see if it could work, and I got a bit of a proof of concept working, and then the rest of the guys did all the work to make it work properly. And every time we thought we had something, we needed good benchmarks, right? It's very easy to convince yourself you've done the work when you haven't. So we'd actually try lots of things and find that in these really important cases, the memory use was higher, or it was actually slower. And we'd go in and find all these things that had nothing to do with our library that just didn't work properly, and nobody had noticed because nobody had really benchmarked them properly. So we ended up trying to fix a whole lot of different things. And even as we did so, new regressions were appearing in transformers and elsewhere that Benjamin then had to go away and figure out: how come flash attention doesn't work in this version of transformers anymore with this set of models? Oh, it turns out they accidentally changed this thing, so it doesn't work. There's just not a lot of really good performance-type evals going on in the open source ecosystem.
So there's an extraordinary number of cases where people say: we built this thing and it has this result. And when you actually check it, it doesn't. So yeah, there's a shitload of war stories from getting that thing to work. And it did require a particularly tenacious group of people, a group who don't mind doing a whole lot of really janitorial work, to be honest, to get the details right and to check them.Alessio [00:34:09]: We had Tri Dao on the podcast, and we talked about how a lot of it is systems work to make these things work. It's not just beautiful, pure math that you do on a blackboard; it's: how do you get into the nitty gritty?Jeremy [00:34:22]: I mean, flash attention is a great example of that. It basically is just: let's take the attention and do the tiled version of it, which sounds simple enough, but implementing that is challenging at lots of levels.Alessio [00:34:36]: Yeah. What about inference? Obviously you've done all this amazing work on fine-tuning. Do you have any research on the inference side, on how to make local inference really fast on these models too?Jeremy [00:34:47]: We're doing quite a bit on that at the moment. We haven't released too much there yet. But one of the things I've been trying to do is also just to help other people. One of the nice things that's happened is that a couple of folks at Meta, including Mark Saroufim, have done a nice job of creating this CUDA MODE community of people working on CUDA kernels or learning about them, and I tried to help get that going well and did some lessons to help people get into it. So there's a lot going on in both inference and fine-tuning performance, and a lot of it's happening kind of related to that. So the PyTorch team have created this torchao project on quantization.
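The "tiled version of attention" Jeremy mentions can be sketched with the online-softmax trick at its heart: stream over key/value tiles while keeping a running maximum and running normalizer, so the full attention row is never materialized. This toy single-query version in pure Python illustrates the math only; the real difficulty of flash attention is doing this efficiently in SRAM on a GPU.

```python
import math

def naive_attention_row(q, ks, vs):
    # Materializes all scores, then softmax-weights the values.
    scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in ks]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    dim = len(vs[0])
    return [sum(e * v[d] for e, v in zip(exps, vs)) / z for d in range(dim)]

def tiled_attention_row(q, ks, vs, tile=2):
    # Processes keys/values tile by tile with a running max (m),
    # running normalizer (z), and running weighted sum (acc).
    m, z = float("-inf"), 0.0
    acc = [0.0] * len(vs[0])
    for t in range(0, len(ks), tile):
        for k, v in zip(ks[t:t + tile], vs[t:t + tile]):
            s = sum(qi * ki for qi, ki in zip(q, k))
            m_new = max(m, s)
            corr = math.exp(m - m_new) if m != float("-inf") else 0.0
            z = z * corr + math.exp(s - m_new)
            acc = [a * corr + math.exp(s - m_new) * vd
                   for a, vd in zip(acc, v)]
            m = m_new
    return [a / z for a in acc]

q = [0.3, -0.1]
ks = [[1.0, 0.2], [-0.5, 0.8], [0.1, 0.1], [0.9, -0.4]]
vs = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.2, 0.8]]
a = naive_attention_row(q, ks, vs)
b = tiled_attention_row(q, ks, vs)
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # True
```

The tiled version is mathematically exact, not an approximation, which is why the two results agree to floating-point precision regardless of tile size.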
And so there's a big overlap now between the fast.ai, Answer.AI, and CUDA MODE communities of people working on stuff for both inference and fine-tuning. But we're getting close now. Our goal is that nobody should be merging models, nobody should be downloading merged models; everybody should be using basically quantized models plus adapters for almost everything, and just downloading the adapters. And that should be much faster. So that's the place we're trying to get to. It's difficult, because Kerem's been doing a lot of work with vLLM, for example, and these inference engines are pretty complex bits of code with a whole lot of custom kernel stuff going on, as do the quantization libraries. We're also collaborating quite a bit with the folks who do HQQ, which is a really great quantization library that works super well. So there are a lot of other people outside Answer.AI we're working with who are really helping on all this open source performance optimization.Swyx [00:36:27]: Just to follow up on merging models: I picked up that you said nobody should be merging models. That's interesting, because obviously a lot of people are experimenting with this and finding interesting results. I would say, in defense of merging models, you can do it without data. That's probably the only thing it's got going for it.Jeremy [00:36:45]: To explain: it's not that you shouldn't merge models. You shouldn't be distributing a merged model. You should distribute a merged adapter 99% of the time. And actually, one of the best things happening in the model merging world is that merging adapters often works better anyway. The point is, Sean, that once you've got your new model, if you distribute it as an adapter that sits on top of a quantized model that somebody's already downloaded, then it's a much smaller download for them.
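The download-size argument is easy to see with back-of-envelope arithmetic. The numbers below are illustrative assumptions (a 70B-parameter model, 4-bit base, and an adapter touching roughly 0.2% of the parameter count), not figures from the episode:

```python
# Rough, illustrative arithmetic: shipping a full merged 70B model versus
# shipping only an adapter on top of a quantized base the user already has.

params = 70e9
fp16_merged_gb = params * 2 / 1e9      # 2 bytes per weight, downloaded every time
int4_base_gb   = params * 0.5 / 1e9    # one-time download, reused by every adapter

# Hypothetical LoRA adapter: assume ~0.2% of the parameter count, kept in fp16.
adapter_gb = params * 0.002 * 2 / 1e9

print(fp16_merged_gb, int4_base_gb, adapter_gb)
# Each new fine-tune costs a fraction of a gigabyte instead of ~140 GB.
```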
And also the inference should be much faster, because you're not having to transfer FP16 weights from HBM at all, or ever load them off disk. All the main weights are quantized, and the only floating-point weights are in the adapters. So that should make both inference and fine-tuning faster. Okay, perfect.Swyx [00:37:33]: We're moving on a little bit to the rest of the fast universe. I would have thought that once you started Answer.AI, the fast universe would be kind of on hold. And then today you just dropped fastlite, and it looks like there's more activity going on in Fastland.Jeremy [00:37:49]: Yeah. So Fastland and Answerland are not really distinct things. Answerland is kind of Fastland grown up and funded. They both have the same mission, which is to maximize the societal benefit of AI broadly. We want to create thousands of commercially successful products at Answer.AI, and we want to do that with like 12 people. So that means we need a pretty efficient stack: quite a few orders of magnitude more efficient, not just for creation, but for deployment and maintenance, than anything that currently exists. People often forget about the D part of our R&D firm. We've got to be extremely good at creating, deploying, and maintaining applications, not just models. Much to my horror, the story around creating web applications is much worse now than it was 10 or 15 years ago. If I say to a data scientist, here's how to create and deploy a web application: either you have to learn JavaScript or TypeScript and all the complex libraries like React, and all the complex details around security and web protocols and how you talk to a backend, and then all the details of creating the backend. If that's your job, and you have specialists who work in just one of those areas, it is possible for all that to work.
But compare that to: write a PHP script and put it in the home directory you get when you sign up to this shell provider, which is what it was like in the nineties. Here are those 25 lines of code, you're done, and now you can pass that URL around to all your friends. Or put this .pl file inside the cgi-bin directory you got when you signed up to this web host. So the thing I've been mainly working on the last few weeks is fixing all that. And I think I've fixed it. I don't know if this is an announcement, but I'll tell you guys: there's this thing called FastHTML, which basically lets you create a complete web application in a single Python file. Unlike excellent projects like Streamlit and Gradio, you're not working on top of a highly abstracted thing that's got nothing to do with web foundations. You're working with web foundations directly, but you're able to do it in pure Python. There are no templates, no Jinja, no separate CSS and JavaScript files, and it looks and behaves like a modern SPA web application. You can create components for DaisyUI, or Bootstrap, or Shoelace, or whatever fancy JavaScript and/or CSS and Tailwind etc. library you like, but you can write it all in Python. You can pip install somebody else's set of components and use them entirely from Python. You can develop and prototype it all in a Jupyter notebook if you want to; it all displays correctly, so you can do it interactively. And then, you mentioned fastlite: specifically, if you're using SQLite, it's ridiculously easy to get persistence, and all of your handlers are automatically passed database-ready objects that you can just call .delete(), .update(), .insert() on. You get sessions, you get security, you get all that. So again, like most everything I do, it's very little code; it's mainly tying together really cool stuff that other people have written.
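The "complete web application in a single Python file" claim looks roughly like the sketch below. This is a minimal example based on the public FastHTML documentation around the time of this episode; treat the exact names (`fast_app`, `rt`, `serve`, the tag functions) as assumptions rather than a definitive API reference.

```python
# Minimal FastHTML sketch (assumed API): one file, no templates, no JS build.
from fasthtml.common import *

app, rt = fast_app()           # app object plus a route decorator

@rt("/")
def get():
    # Plain Python functions stand in for HTML tags; no Jinja, no separate CSS/JS.
    return Titled("Hello", P("A complete web app in one Python file."))

serve()                        # development server with auto-reload
```

Running this file and visiting the printed localhost URL should serve the page; everything else (sessions, HTMX wiring, static assets) is handled by the framework's defaults.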
You don't have to use it, but a lot of the best stuff comes from its incorporation of HTMX, which to me is basically the thing that changes your browser to make it work the way it always should have. It just does four small things, but those four small things remove constraints that HTML should never have had. It sits on top of Starlette, which is a very nice lower-level platform for building these kinds of web applications. The actual interface matches as closely as possible to FastAPI, which is a really nice system for creating these kinds of API applications. And Sebastián, who wrote FastAPI, has been kind enough to help me think through some of these design decisions. I mean, everybody involved has been super helpful. I chatted to Carson, who created HTMX, about it, and to some of the folks involved in Django. Everybody in the community I've spoken to definitely realizes there's a big gap to be filled around a highly scalable, web-foundations-based, pure Python framework with a minimum of fuss. So I'm getting a lot of support and trying to make sure that FastHTML works well for people.Swyx [00:42:38]: I would say, when I heard about this, I texted Alessio: I think this is going to be pretty huge. People consider Streamlit and Gradio to be the state of the art, but I think there's so much to improve, and having what you call web foundations and web fundamentals at the core of it would be really helpful.Jeremy [00:42:54]: I mean, it's based on 25 years of thinking and work for me. FastMail was built on a system much like this one, but that was all Perl. I spent 10 years working on that; we had millions of people using it every day, really pushing it hard. And I always really enjoyed working in that.
So, obviously lots of other people have done great stuff too, particularly HTMX. So I've been thinking about how to pull together the best of the web framework I created for FastMail with HTMX. There are also things like Pico CSS, which is the CSS framework FastHTML comes with by default. Although, as I say, you can pip install anything you want, we try to make it so that, just out of the box, you don't have any choices to make. You can make choices, but for most people, it's like the PHP-in-your-home-directory thing: you just start typing, and by default you'll get something that looks and feels pretty okay. And if you then want to write a version of Gradio or Streamlit on top of that, you totally can. And the nice thing is, if you write it in the Gradio equivalent (I imagine we'll create some kind of pip-installable thing for that), once you've outgrown it, or if you outgrow it, it's not: okay, throw all that away and start again in a whole separate language. It's a smooth, gentle path you can take step by step, because it's all just standard web foundations all the way.Swyx [00:44:29]: Just to wrap up the open source work that you're doing: you're aiming to create thousands of projects with a very, very small team. I haven't heard you mention once AI agents, or AI developer tooling, or AI code maintenance. I know you're very productive, but what is the role of AI in your own work?Jeremy [00:44:47]: So I'm making something. I'm not sure how much I want to say just yet.Swyx [00:44:52]: Give us a nibble.Jeremy [00:44:53]: All right, I'll give you the key thing. I've created a new approach. It's not called prompt engineering; it's called dialogue engineering. And I'm creating a system for doing dialogue engineering.
It's currently called AI Magic. I'm doing most of my work in this system, and it's making me much more productive than I was before I used it. I always just build stuff for myself and hope it'll be useful for somebody else. Think about ChatGPT with Code Interpreter, right? The basic UX is the same as a 1970s teletype. If you wrote APL on a teletype in the 1970s, you typed onto a thing, your words appeared at the bottom of a sheet of paper, you'd hit enter and it would scroll up, and then the answer from APL would be printed out, scroll up, and then you'd type the next thing. Which is also the way, for example, a shell works, like bash or zsh or whatever. It's not terrible; we all get a lot done in these very basic teletype-style REPL environments, but I've never felt it was optimal, and everybody else has just copied ChatGPT. It's also the way Bard and Gemini work, and the way the Claude web app works. And then you add Code Interpreter, and the most you can do is plead with ChatGPT to write the kind of code you want. It's pretty good for very, very beginner users who can't code at all; by default now the code's even hidden away, so you never even have to see that it happened. But for somebody who wants to learn to code, or who already knows a bit of code or whatever, it seems really not ideal. So okay, that's one end of the spectrum. The other end of the spectrum, which is where Sean's work comes in, is: oh, you want to do more than ChatGPT? No worries, here is Visual Studio Code. I run it, there's an empty screen with a flashing cursor. Okay, start coding. And you can use systems like Sean's, or like Cursor or whatever, where Command-K in Cursor brings up a little form, and so on.
But in the end, it's a convenience over the top of this incredibly complicated system that full-time, sophisticated software engineers have designed over the past few decades, in a totally different environment, as a way to build software. And so we're trying to shoehorn AI into that, and it's not easy to do. I think there are much better ways of thinking about the craft of software development in a language model world, ways that are much more interactive. So the thing I'm building is neither of those things; it's something between the two. It's built around this idea of crafting a dialogue, where the outcome of the dialogue is the artifacts that you want, whether that's a piece of analysis, or a Python library, or a technical blog post, or whatever. As part of building that, I've created something called Claudette, which is a library for Claude, and something called Cosette, which is a library for OpenAI. They're libraries designed to make those APIs much more usable, much easier to use, much more concise. And then I've written AI Magic on top of those. That's been an interesting exercise, because I did Claudette first, and I was looking at what Simon Willison did with his fantastic LLM library. His library is designed around: let's make something that supports all the LLM inference engines and commercial providers. I thought, okay, what if I did something different, which is: make something that's as Claude-friendly as possible and forget everything else. So that's what Claudette was. For example, one of the really nice things in Claude is prefill: by telling the assistant, this is what your response started with, there are a lot of powerful things you can take advantage of. So yeah, I created Claudette to be as Claude-friendly as possible.
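The prefill trick Jeremy mentions is part of Anthropic's messages format: ending the conversation with a partial assistant turn forces the model's reply to continue from that prefix. The sketch below only builds the request payload (no API call); the model id is illustrative, not taken from the episode.

```python
# Sketch of Claude "prefill": the trailing assistant message is the prefix
# the model must continue from, e.g. to force a JSON-only answer.

def prefill_request(user_msg, prefix):
    return {
        "model": "claude-3-5-sonnet-20240620",  # illustrative model id
        "max_tokens": 512,
        "messages": [
            {"role": "user", "content": user_msg},
            # This partial assistant turn is the prefill.
            {"role": "assistant", "content": prefix},
        ],
    }

req = prefill_request("List three primes as JSON.", '{"primes": [')
print(req["messages"][-1]["role"])  # assistant
```

Claude's completion then starts directly after `{"primes": [`, which is a lightweight way to constrain output format without any tool or grammar support.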
And then after I did that, and particularly with GPT-4o coming out, I thought: okay, now let's create something that's as OpenAI-friendly as possible. And then I looked to see where the similarities and differences were, and whether I could make them compatible in the places where it makes sense for them to be compatible, without losing the things that make each one special for what it is. So those are some of the things I've been working on in that space. And I'm thinking we might launch AI Magic via a course called How To Solve It With Code. The name is based on the classic Pólya book, How to Solve It, which is one of the classic math books of all time. We're basically going to try to show people how to solve challenging problems they didn't think they could solve, without doing a full computer science course, by taking advantage of a bit of AI and a bit of practical skills, particularly for this whole generation of people who are learning to code with, and because of, ChatGPT. I love it: I know a lot of people who didn't really know how to code but have created things because they use ChatGPT, but they don't really know how to maintain them or fix them or add things that ChatGPT can't do, because they don't really know how to code. So this course is designed to show you how you can either become a developer who supercharges their capabilities by using language models, or become a language-model-first developer who supercharges their capabilities by understanding a bit about process and fundamentals.Alessio [00:50:19]: Nice. That's a great spoiler. I guess the fourth time you come on Latent Space, we're going to talk about AI Magic. Jeremy, before we wrap, this was just a great run through everything.
What are the things that, when you next come on the podcast in nine or twelve months, we're going to say: man, Jeremy was really ahead of it? Is there anything in the space that maybe people are not talking about enough? What's the next company that's going to have drama internally? Anything on your mind?Jeremy [00:50:47]: Hopefully we'll be talking a lot about FastHTML, and hopefully about the international community that by then has come up around it. And also about AI Magic and dialogue engineering. Hopefully dialogue engineering catches on, because I think it's the right way to think about a lot of this stuff. What else? On the research side, I think we've talked about a lot of it: encoder-decoder architectures, encoder-only architectures. Hopefully we'll be talking about the whole renewed interest in BERT that BERT24 stimulated.Swyx [00:51:17]: There's a state space model that came out today that might be interesting for this discussion. One thing that stood out to me in Cartesia's blog post was that they were talking about real-time ingestion of billions and trillions of tokens, and keeping that context in the state space that they have.Jeremy [00:51:34]: Yeah.Swyx [00:51:35]: I'm wondering what your thoughts are, because you've been entirely transformers the whole time.Jeremy [00:51:38]: Yeah. So obviously my background is RNNs and LSTMs. Of course. And I'm still a believer in the idea that state is something you can update, you know? So obviously Sepp Hochreiter came out with xLSTM recently. Oh my God, okay, another whole thing we haven't talked about, just somewhat related: I've been going crazy for a long time about, why can I not pay anybody to save my KV cache? I just ingested the Great Gatsby, or the documentation for Starlette, or whatever, and I'm sending it as my prompt context.
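The KV-cache reuse Jeremy is asking for is, at heart, memoization of an expensive prefix computation. This toy stands in for it with plain Python: the "ingest" step plays the role of prefill/KV computation, and a cache keyed on the context means repeated questions over the same long document pay for ingestion once. Real KV caches store per-layer key/value tensors, not a single number.

```python
# Toy version of KV-cache reuse: cache the expensive "prefix" work so that
# repeated prompts sharing the same long context don't redo it.

ingest_calls = {"n": 0}
_prefix_cache = {}

def ingest(context):                      # stand-in for prefill/KV computation
    ingest_calls["n"] += 1
    return sum(ord(c) for c in context)   # pretend this is the cached KV state

def answer(context, question):
    if context not in _prefix_cache:      # reuse the cached prefix state
        _prefix_cache[context] = ingest(context)
    state = _prefix_cache[context]
    return state + len(question)

book = "The Great Gatsby... " * 100
answer(book, "Who is Nick?")
answer(book, "Who is Daisy?")
answer(book, "What year is it?")
print(ingest_calls["n"])  # 1: the long context was processed only once
```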
Why are you redoing it every time? So Gemini is about to finally come out with KV caching, and this is something that Austin actually had on his roadmap in gemma.cpp for, well, not years, months, a long time. The idea is that the KV cache is a third thing, right? There's RAG, there's in-context learning and prompt engineering, and there's KV cache creation. I think it creates almost a whole new class of applications, or techniques. For me, for example, I very often work with really new libraries, or I've created my own library that I'm now writing with rather than on. So I want all the docs of my new library to be there all the time. I want to upload them once, and then have a whole discussion about building this application using FastHTML. Nobody's got FastHTML in their language model yet; I don't want to send all the FastHTML docs across every time. So one of the things I'm looking at doing in AI Magic is taking advantage of some of these ideas, so that the documentation of the libraries you're working on is kind of always available. Something people will spend time thinking about over the next 12 months is where to use RAG, where to use fine-tuning, and where to use KV cache storage. And how to use state, because in state space models and xLSTM, again, state is something you update. So how do we combine the best of all of these worlds?Alessio [00:53:46]: And Jeremy, I know before you talked about how some of the autoregressive models are maybe not a great fit for agents. Any other thoughts on JEPA, diffusion for text, any interesting thing you've seen pop up?Jeremy [00:53:58]: In the same way that we probably ought to have state that you can update, i.e.
xLSTM and state space models, in the same way that a lot of things probably should have an encoder, JEPA and diffusion both seem like the right conceptual mapping for a lot of things we probably want to do. So the idea is that there should be a piece of the generative pipeline which is thinking about the answer and coming up with a sketch of what the answer looks like before you start outputting tokens. That's where it kind of feels like diffusion ought to fit, you know. And because diffusion is not autoregressive, it's like, let's try to gradually de-blur the picture of how to solve this. So this is also where dialogue engineering fits in, by the way. With dialogue engineering, one of the reasons it's working so well for me is I use it to craft the thought process before I generate the code, you know. So yeah, there's a lot of different pieces here and I don't know how they'll all exactly fit together. I don't know if JEPA is going to actually end up working in the text world. I don't know if diffusion will end up working in the text world, but they seem to be trying to solve a class of problem which is currently unsolved.

Alessio [00:55:13]: Awesome, Jeremy. This was great, as usual. Thanks again for coming back on the pod, and thank you all for listening. Yeah, that was fantastic.

Get full access to Latent Space at www.latent.space/subscribe
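The recurring "state is something you update" theme in the discussion above contrasts with a transformer's ever-growing KV cache: a recurrent model carries a fixed-size state that is updated in place at each token. A minimal sketch of that update rule (a plain Elman-style recurrence for illustration, not xLSTM itself):

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h = 4, 16
Wx = rng.normal(size=(d_in, d_h)) * 0.1
Wh = rng.normal(size=(d_h, d_h)) * 0.1

def step(h, x):
    # the model's entire memory is h: fixed size, overwritten each token
    return np.tanh(x @ Wx + h @ Wh)

h = np.zeros(d_h)
tokens = rng.normal(size=(1000, d_in))
for x in tokens:
    h = step(h, x)

# after 1000 tokens the state is still just d_h floats, whereas a
# transformer's KV cache would have grown to 1000 key/value entries
assert h.shape == (d_h,)
```

That constant-size, updatable state is exactly what makes long-running ingestion attractive in recurrent and state space architectures, at the cost of having to compress everything relevant into the state.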
Just when you thought Alexio's out, we pull him back in! Hosted on Acast. See acast.com/privacy for more information.
JW Roadcast is back baby! Jeremiah Wonders with Justin Alexio from Everybody Wants Some, and Peter Banachowski from Fired From Cruise Ships. The guys walk down memory lane and talk about Jeremiah bombing, their early improv days, starting out in LA, and more! #JeremiahWatkins #Podcast #JeremiahWonders #JustinAlexio #PeterBanachowski Watch my 1 Hour Special DADDY out now!: https://www.youtube.com/watch?v=Tg__EXw0b3s&lc=UgwumA_HDPa_vCVVyoF4AaABAg Subscribe to the Stand-Up On The Spot channel here!: https://www.youtube.com/@standupots Trailer Tales Merch!: https://www.eatmytrash.com Support the show by grabbing some merch at jeremiahwatkins.com Email the show at jeremiahwonders@gmail.com
Stand-Up On The Spot! Featuring completely improvised sets from Che Durena, Joel Jimenez, JT Parr, Josh Potter, Justine Marino, Justin Alexio & Jeremiah Watkins. No material. Comedians create Stand-Up On The Spot off audience suggestions. You know Che Durena from his viral videos, Joel Jimenez from Kill Tony and his podcast Lesser Known Characters, JT Parr from his Netflix series and podcast Chad and JT Go Deep, Josh Potter from Your Moms House and The Josh Potter Show, Justine Marino from The Funny Dance Show on E! and her podcast Glitter and Garbage, Justin Alexio from Everybody Wants Some, and Jeremiah Watkins you know from Scissor Bros, Trailer Tales, and his special DADDY. This episode covers everything from Disney Adults, to how Blind People wipe, Swingers & more! #1HourSpecial #JTParr #StandupComedy #CheDurena #JoshPotter #YourMomsHouse #JustineMarino #JustinAlexio #StandUpOnTheSpot #SOTS #CrowdWork #JeremiahWatkins #ScissorBros #KillTony #JoelJimenez Follow the Comedians! Jeremiah Watkins https://www.instagram.com/jeremiahstandup @jeremiahwatkins @TrailerTalesPod Che Durena https://www.instagram.com/chedurena @Chedurena Joel Jimenez https://www.instagram.com/joeljimenezcomedy JT Parr https://www.instagram.com/jtparr14 @Chadandjtgodeep Josh Potter https://www.instagram.com/josh_potter @TheJoshPotterShow Justine Marino https://www.instagram.com/justinemachine1 Justin Alexio https://www.instagram.com/justinalexio Stand-Up On The Spot https://www.instagram.com/standupots @standupots Sponsored by: FÜM Support the show, save 10% off the Journey Pack and start the Good Habit at https://www.tryfum.com/STANDUP code STANDUP Filmed at The Huntington Beach Rec Room Jeremiah's Shirt from https://7-strong.com
Who can resist a heavy breathin, lip smacking, snackity snack episode with our resident weather expert Alexio Tabafunda?
The guys talk to Justin Alexio about being an Arizona Cardinals fan. They discuss Justin's history with football, his Cardinals Mount Rushmore, and Arizona sports highlights. Watch the video of today's episode at Patreon.com/TheFlagrantOnes. Like the show? Rate FOOSBALLZ! 5-Stars on Apple Podcasts. Advertise on the podcast via Gumball.fm. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
JOIN THE COMMUNITY TODAY: https://www.themorningmeetup.com
Alexiou Gibson on IG: https://www.instagram.com/alexiou2.0/
In this episode, we have Alexio, one of the most successful Shark Tank guests, joining the conversation. Alexio shares his experience on the show and how his pitch became one of the best on various lists. He talks about the pressure he felt to succeed and not let down his community. The discussion also covers the investment he received from the sharks and how he devalued his company to work with them. Alexio expresses his gratitude for the opportunity to work with the sharks and discusses the success his business achieved in its first 10 months. Listen to this episode to gain insight into the world of Shark Tank and the journey of a successful entrepreneur.
Subscribe to the Social Proof Podcast:
Apple Podcasts: https://podcasts.apple.com/us/podcast/social-proof-podcast/id1374373035
Spotify: https://open.spotify.com/show/6GT6VgjBy0zviALR8xkRe1?si=82LcP8siQXGVHb2ibg5axA&nd=1
*** Grab the Podcast EBOOK
Erica continues our new series on Genesis. This week she is speaking on Genesis 2:1-25. Her topic is "God's Masterpiece".

Summary

In her sermon titled "God's Masterpiece" on January 14th, 2024, part of a series focused on the book of Genesis, Erica Lugg delves into the profound theme of humanity being God's masterpiece. The biblical foundation for her message is rooted in Genesis 1:26-27 and 2:7. Throughout the sermon, Lugg employs vivid imagery and relatable anecdotes to convey the significance of recognizing each individual as intricately crafted in the image of God. Lugg begins by highlighting the accessibility of the book of Genesis and its foundational role in the Bible. The central theme of the sermon revolves around the concept of God's masterpiece, prompting the congregation to ponder the question of who or what God's masterpiece might be. Lugg engages the audience by challenging them to consider their identity in Christ, building anticipation for the revelation of the answer. The sermon unfolds as Lugg explores personal encounters, such as standing before the awe-inspiring Victoria Falls. By describing the grandeur of this natural wonder, she draws parallels between the majesty of creation and the intentional design of every individual as God's masterpiece. The mention of Victoria Falls serves as a powerful metaphor for the overwhelming love and creativity of God cascading over each person. Lugg skillfully weaves personal anecdotes into the narrative, sharing a pivotal moment standing on the bridge near Victoria Falls with her family. This personal touch adds authenticity to the message, illustrating the transformative impact of recognizing God's craftsmanship in one's life. As the sermon progresses, Lugg expands on the idea of various natural wonders, like the Northern Lights, the Grand Canyon, and the intricate details of a ladybird's spots, each serving as a testament to God's creative genius.
By incorporating these diverse examples, she emphasizes the vastness and richness of God's artistic expression in both the macro and micro aspects of creation. The sermon takes a reflective turn as Lugg addresses concerns and doubts individuals might harbor about their own significance. She shares a powerful revelation of how, just like the rest of creation, humans are God's masterpiece, irrespective of doubts or external opinions. Drawing attention to Genesis 1:27 and 2:7, Lugg stresses that being created in God's image is a foundational truth that surpasses any external circumstances or challenges. Throughout the sermon, Lugg integrates key Bible passages, including Genesis 1:26-27, Genesis 2:7, Isaiah 43 (pertaining to the new thing God is doing), and Psalm 139. These passages serve as anchors, grounding the sermon in biblical truth and emphasizing the continuity of God's involvement in the lives of His creation. In the concluding segment of the sermon, Lugg encourages the congregation to shift their focus from flaws and mistakes to Jesus as the ultimate restorer. She passionately urges them to embrace their identity as God's masterpiece, emphasizing that this identity is not earned but bestowed by God's intentional design. The sermon wraps up with a powerful affirmation of each individual's intrinsic value and the constant presence of God in their lives. The congregation is left with a resounding call to perceive themselves as God's masterpiece, echoing the truth from Psalm 139 that every moment God is thinking of them and cherishing them. In summary, Erica Lugg's sermon masterfully weaves together personal experiences, relatable anecdotes, and biblical truths to convey the profound message that every individual is God's masterpiece. Her engaging delivery and emphasis on recognizing one's identity in Christ make the sermon both inspiring and impactful. 
Key Bible passages referenced in the sermon include Genesis 1:26-27 and 2:7, Isaiah 43 (the new thing), and Psalm 139. Erica concludes with a powerful affirmation of each person's intrinsic value and God's continuous presence in their lives. Bible Passages Used: Genesis 1:26-27, Genesis 2:7, Isaiah 43 (referencing the new thing), Psalm 139.

Transcription

We're going to be going to Genesis, which is a really easy book in the Bible to find, in that it's right at the very beginning. And the theme of this morning is God's masterpiece. Now I know that those of you who know Jesus, you know the answer to this question, this statement about who is God's masterpiece or what is God's masterpiece. Any ideas? We don't have any ideas, right? We have no idea as Christians, oh my goodness, people. And what we've seen and what we've heard this morning is God doing what God does with the people that He loves with all His heart. And I'm going to talk about God's masterpiece, and I know you already know the answer, but I'm hoping that by the end of it we will see it not just as a thing that we kind of know in our head, but where it positions us this morning and where it places us this morning and how God sees us. Now I remember the first time I ever encountered the mighty Victoria Falls. Any of you been to Victoria Falls? A few people. It's incredible. It spans 1,700 metres across the Zambezi River. Every single minute, 500 million litres of water cascade over that waterfall. People in Zambia and Zimbabwe don't call it Victoria Falls. That was the name that David Livingstone gave it. They call it Mosi-oa-Tunya, which means the smoke that thunders, because the spray from the falls rises 400 metres above the waterfall. It's the smoke that thunders. I remember the day Nick and I stood there with two very little boys; we were just starting on our adventure, our Zambia adventure of extended life there.
This was on the backdrop of maybe some concerns from friends and family that we were taking the children somewhere that would mean they would end up not being able to fulfil their opportunities, because we were taking them away from the UK and taking them somewhere else, and maybe they would end up lacking in their future because of it. And I understand the concerns, but as I stood on the bridge that day, and we were absolutely wet through to the skin from the spray from that mighty Victoria Falls, and the bridge was shaking with the power and the thunder was deafening, it was a glory to God in the highest moment, and I knew then we weren't lacking anything, we were gaining absolutely everything. But that's a different story altogether. We have in Genesis, right at the very beginning, a glimpse of Creator God displaying his glory. In the beginning God created the heavens and the earth, and we have been in awe of his creation ever since. Any of you seen the Northern Lights? I never have. Spectacular. Was it a glory to God in the highest moment? Unbelievable. Unbelievable. Have any of you seen a full moon on a winter's evening? Yes. Yes. Is it a glory to God in the highest moment? You can look at the moon. What about a field of sunflowers? Look at that. Isn't it a glory to God in the highest moment? Look at that. Graham, you're doing a grand job. Moving on to the Grand Canyon. I've never seen this. Look at that. Any of you seen the Grand Canyon? Neil, how does it compare? Amazing. Amazing. Did you stand on the edge going, wow? You went down there. What about the intricate detail of a ladybird's spots? Every single one completely unique, no two ladybirds the same. Glory to God in the highest. What about the Great Barrier Reef? I've never been there. I've heard it's spectacular. Anybody been? What's it like, Ross? Spectacular? No idea. What about Mount Everest? Anybody been up there? No. Of course you haven't. It takes training.
What about this one? I love this one. What about a meerkat? Aren't they the most extraordinary of animals? This family of meerkats must have posed for the photograph. Look at that. Glory to God in the highest. What about, did you know? I mean, Dylan is still in here this morning. Down at 1-4. Oh, hi, guys, at 1-4-6. What about, I'm kind of glad he's not here because I probably won't get it right, but what about the buzzy bee? They tell us that if the buzzy bee goes out of existence, becomes extinct, the whole of our ecosystem will go to pot. Because of that, a tiny little fluffy thing. Glory to God in the highest. Don't you just want to say it? Glory to God in the highest. But as is the case with every great composer and every great artist or choreographer, there is only room for one masterpiece. That one piece that is the most excellently done of all things. That one piece which is their greatest work of great works. That one outstanding, creative thing of skill and depth. The one piece that most reflects the capacity and the imagination and the skill and the craftsmanship of the master himself. Look at yourself. Amazing. The piece that most expresses his thoughts and his ideas. The piece that most helps us to see him as King of Kings and Lord of Lords, as his likeness is most expressed in every paint stroke and everything. Van Gogh painted a great many works. I think this is the next picture up there, Graham. But this apparently is his masterpiece. I don't understand it, frankly. Anybody art critics or whatever? Yeah, sorry. Art critics, do you get it? I have no idea why that would be a masterpiece. It looks like something that Sammy could draw. But apparently it is the piece of art that most expresses who he was as an artist. Starry Night. He didn't actually ask my permission about what I thought about his painting before it became his masterpiece. In fact, nobody in the art critic world has ever asked me my opinion about what I think about it.
It is just the masterpiece. It is a matter of fact. I don't know how to pronounce this, but this is the next one. Guernica. What is it called? Guernica. You see? I don't even know how to say it. Apparently this is Picasso's masterpiece. According to Google, this is the one. Mozart's is the Jupiter Symphony. Any of you heard of the Jupiter Symphony? No idea. I've never heard of it in my life. Never heard it. And I'm quite interested to know that there are no words to sing and there's no chord sheet that you can use to strum along to. And yet those three things are the one thing that most expresses their thoughts and ideas, the composers and the artists that drew that. And here we find in Genesis chapter 1, verse 27. Am I clicking on this too much? Am I spitting? Is it all right? God said in Genesis chapter 1 verse 26, he says, let us make mankind in our image, in our likeness. So God created mankind in his own image, in the image of God. He created the male and female. He created them. Nick mentioned that last week. Genesis chapter 2 verse 7 says this, The Lord God formed a man from the dust of the ground and breathed into his nostrils the breath of life. And man began to live. And I want you to get into your mind the incredible imagery that is here. Up until now God had spoken the world into being as he spoke the world was created. But in the case of mankind, he steps down and he gathers the dust with his own hands. And he begins to mold and to make with his own hands forming, putting together very specifically and very carefully. And when it was exactly how he wanted it to be, he stooped even lower. Now I'm a first aid instructor, so I know what it is like to give someone mouth to mouth resuscitation. You have to come very, very close. But the Bible says that after he had moulded and shaped God, stooped even lower and breathed his own life into mankind's nostrils. And when he had done that, he stepped back after six incredible days of creativity. 
He stepped back and he said, I'm done now. And he rested. He didn't rest because he was tired. He rested because he was done. That's it. This is the perfect thing. Glory to God in the highest. And I love the finality of this. Nothing more to add, nothing to change, to remove or tweak or consider. He was done. We were like him, made in his image. Now it was very good. Some of you this morning are imagining that is the case for everybody else out there except you. That is the case for the whole of humanity. Made in the image of God, brought to life through the breath of God. Every single one of us seated here today. Yeah, but not me. Right, aren't we? Yes, you, every single person. Now we all know that this isn't the end of the story, and that fear and sin and failure and weakness and sickness have all marred the image of God in us. But I want to just encourage you this morning again, with Katie's testimony and the other testimonies that we've heard today, that we cannot lose it, because the image of God isn't something that we got for ourselves or something that we earned or a quality that we possess that only certain people have. It was something that made humanity what it is. And if we didn't give it to ourselves and it's nothing that we possess, we cannot lose it, even if things have come along and marred the image of God in us. I am made in the image of God, full stop. Now I know stuff has come in. That's why I gave my life to Jesus all those years ago. I know that there are things in my life that have come in and marred and distorted and disfigured the image of God within me. And yet underneath, still underneath: once a masterpiece, always a masterpiece. Is that right? Some of you are not sure. Is a masterpiece still a masterpiece if fire and flood damage it? Yes. Yes, because it's not about its condition. It was created that way. It was a masterpiece because it was made that way.
Fire and flood come in and suddenly everybody springs into action because, oh my goodness, we need to do something to restore that masterpiece back to what it was supposed to look like. Remember when the, was it two years ago, the Louvre in France burnt down? Notre Dame. I've said that several times. Yeah, Notre Dame burnt down. And everybody, in fact it made headline news above everything else that was going on in the world. It was because this incredible masterpiece, this incredible thing of beauty had burnt down, and suddenly all the restorers, millions and millions and millions of pounds, poured into restoring this incredible masterpiece. Nobody said, right, you know what, fire and flood has come, that's it. Bish bash bosh, roll it all up and throw it away. That's not what you do with a masterpiece. You call in the restorers. The restorers come. Let's try and work out how we take it back to the original condition, even though it's been marred and tarnished and covered in fire and flood and whatever it is. And it's true for us. I am made in the image of God. I am his masterpiece. When people tell you that we are just like all the other animals, we're not. We're not like all the other animals. All the other animals are created beings. Nobody stooped down and breathed life into them. That's what makes humankind, humankind. We are incredible. Some of you are going, yes, but I'm not. I can hear you. I can hear you. All that stuff didn't destroy who we were. It marred it and distorted it and twisted it, but basically, fundamentally, in the core of who we are, we have been created in the image of God. And his life is in us. So the restorer. Who is our restorer? You sound really nervous this morning. Is it Jesus? I think the answer is Jesus. I'm not really sure if it's Jesus. I think it's Jesus. I think because we're Christians. I think the answer is supposed to be Christ Jesus.
Have you heard about that little boy who went to Sunday school and somebody described something? You know, what is fluffy and got a long tail and gray and all of that? I think. How does he say it? I know the answer is Jesus. I know the answer should be Jesus, but it sounds like a squirrel to me. That's what it is coming across this morning. I think the answer is Jesus because we're in church and everything comes down to Jesus. But actually, it sounds like a squirrel to me. Who is our restorer? When you came up to the front here, when you came up to the front here and you declared about yourselves, all the things that have marred and distorted and disfigured, the basic image of God in you that he put in you, you did nothing for yourselves. Who is it that came alongside you and said, do you know what? I'm going to take you back to your original condition. Who was it? Jesus. Jesus. The rest of you are all nervous. You think I'm just going to pick on me in a minute? I might do. It's Jesus. Jesus came along and he didn't say, oh, gosh, I can't do anything with this. It's too far gone for me, guys. All the rest are made in God's image, but God kind of messed up with you. Jesus is the reason. The reason Jesus came was to redeem us and restore us and renew us. He came as a master restorer and he looks and he says, I know that the image of God is under all this stuff because that's human kind. It's under all this stuff and sin and fear and failure, but just give me a moment. Just give me some time. Just give me your willingness. Just bring me yourself and I will sit and I will carefully and painstakingly pick away and add touches here, there and everywhere and bit by bit by bit. You will begin to see the image of God that is there already. You'll begin to see it for yourself. Hallelujah. You know those moments, don't you? When you see it for yourself, suddenly you, maybe you've had an issue with something all your life and suddenly you realize it's gone. 
It's because Jesus, the restorer, I'm going to work at this bit by bit by bit. And you're going to see my image that is already there. I'm going to make it visible. No idea where I am on my notes. The master himself says, I know what this is supposed to look like. Allow me to make you look like the image that is already inside you. I love it when people that don't believe in God don't understand that actually, whether they believe in God or not, they're still made in his image. It's a fact. Yeah, I know, but I'm an atheist, I don't believe in God. It doesn't change who you are. It doesn't change your core. I know, but I don't believe in anything that you're talking about. I really don't believe in God. There is no God. Well, I'm really sorry to tell you because you're walking around carrying his image. It's just covered by stuff. And when we become believers, what we are saying to Jesus is I surrender to you deal with the stuff that is marring and distorting the image of God that is already inside me. It's a fact. Yeah, I know, but I don't feel it. I don't feel like, did he ask your permission? Okay, so when you feel like it, then you can be made in my image. And when you don't feel like it, then you don't have to be made in my image. It's just a fact. Whatever you feel like this morning, he didn't ask your permission when he made you. He doesn't ask your permission when he calls you a masterpiece. He is the master and he decides who is his masterpiece. And I'm sorry to tell you that that's you. Whether you like it or whether you don't. He really is not interested in changing his opinion on who I am. I wasn't going to show this video, but I think I might. Can I just put that video up? I love it. You'll see, I don't know whether you'll see why I'm showing it. Are you amazing? Say it with a big voice. Are you amazing? Are you amazing? Are you amazing? Say it with a big voice. Are you amazing? Thanks, Graham. Anti-Aker, you are amazing. I did have to prompt him. 
Did you notice? I said to him, say, Anti-Aker, you are amazing. And then I had to say to him, say it in a big voice. Anti-Aker, you are amazing. And the reason I love this is because every year it comes up on my Facebook reminder. And it just, do you know what? I am amazing. But so are every single one of you seated in here. It's not a special thing for special people. It is just the nature of God in you, his breath in you, a masterpiece made in his image, in his likeness, like nothing else in all creation. Anti-Aker, you are amazing. And that's for you, and for you, and for you. I wish I could have got him to do all your names. We'd have been here a very long time. Some of you are saying, yeah, but not me. It's just a fact. All these other things have come in to mar and distort. But Jesus in his grace carefully comes alongside and he begins to pick away and paint away and brush away and smooth away and restore this and restore that and change this and change that. And one day we stand back and we look and we say, oh my God, you are amazing. Glory to God in the highest. When was the last time you looked at yourself in the mirror without a critical view and said, glory to God in the highest, look at what you've made? Anti-Aker, you are amazing. When was the last time? When was the last time you said it? Because we do this: we think it is humility to rubbish ourselves. It's not humility, it's actually pride. It's false humility, because what it says is, God, you don't know what you're talking about. And God, you are telling lies because this doesn't apply to me. That's what it means. It's not humility. To stand and say, I am who I am by the grace of God, made in his image. Oh, yes, I know. You can tell me till the cows come home all the things currently that are marring my life, that are distorting the image of God in me. I know all the rubbish that I carry, but basically, bottom line: Anti-Aker, you are amazing because of Jesus.
I want to encourage you this morning. I want to encourage you. So the restorer comes along, no idea where I am, I knew this would happen. The restorer comes along, and what we do is we get fixated on the stuff that is marring. We get fixated on the stuff that is rubbish, the stuff that we know shouldn't be there, the stuff when you've been a Christian a long time and you do that thing you know you shouldn't do and you should know better. Anybody else like that, or is it just me? Liz, your hand went straight up. We ought to know better. Graham, thank you for being honest. Those of you that didn't put your hand up, just put your hand up now and be done with it. We've been Christians for a long time and we still do the things that we know we shouldn't do, or think the things we shouldn't think, and all of that, and I get all of that, and we become fixated on this, and we become fixated on all that is rubbish and all that is wrong and all that we should change and all of that stuff. What we need to become fixated on is the restorer. We need to become fixated on his hands as he changes and tweaks and whatever. Every now and again he might say to you, I'd get the job done a lot quicker if you'd stop throwing black paint over there every single time I change that bit, and we have to change that. Let's stop fixating on old things. Isaiah 43. Any of you do Lectio 365? I think the passage at the beginning of the year was from Isaiah 43, where it says, behold, I am doing a new thing; the old has gone and the new has come. Is that a statement of fact? Is it a statement of fact? Is he saying, I wish the old had gone and I wish the new had come? He says, no, I'm doing a new thing; the old has gone, the new has come. And then what does he say? He asks this question: Do you not perceive it? Why do you think he asked that question? Do you not perceive it? Sorry, what was that voice? Because he knows we won't believe it. I've just told you the old has gone and the new has come, but do you not perceive it?
He wants us to see it. This process is just a matter of fact; this is what Jesus has done. We need to start perceiving it. We need to start owning it. The world is an awesome place, but even more awesome than the world that we live in are the people that are in it. Ephesians 2 verse 10, and I have no idea where I am, says I am his masterpiece, created in his image. Would you like to repeat that? You want to say it again? Oh yeah, but what about that? What about that thing? What about that situation? What about that relationship? What about this? What about that? What's the answer? I? Oh no, you see, now you've all lost your confidence. You're doing exactly what I'm saying. Oh gosh, maybe that's changed it. Oh, that's obviously changed it. I've stopped being his masterpiece now. Now I'm just a failure and I'm rubbish and I've made a mistake. Am I or am I not his masterpiece? Okay, let's say it in a big voice, shall we? Okay, right. Come on guys, if we're going to get it to go from here to here, we're going to have to put some effort in, aren't we? Right, after three: one, two, three. What about that? What about that issue in your life? What about that thing that keeps going wrong? Is he telling a lie about your life? Is he speaking the truth about your life? Oh, hallelujah, I think it's sunk in. I am a masterpiece created in his image. I am the apple of his eye. You are the apple of his eye. What else are we? Fearfully and wonderfully made. Fearfully and wonderfully made. Fearfully and wonderfully made. The apple of his eye: precious, long known, seen and accepted, restored, redeemed, healed, forgiven.
More than the mighty Victoria Falls, more than the Northern Lights, more than the butterflies, the ladybird dots, more than the Grand Canyon, more than any of the seven wonders of the world, more than anything that you could ever walk on, see with your eyes, hear with your ears, look through a window at, more than the moon in the sky, more than the sun, more than the stars, more than absolutely everything: you, made in the image of God, with his breath living inside you, you are the apple of his eye. You are his greatest masterpiece. You are the thing that he looks at and he says, you represent me, you are more like me than anything else in all creation. I love you and I cherish you and I'm working on you and I'm changing you. Glory to God in the highest. Glory to God in the highest. I'm a conqueror, glory to God in the highest. And you know, when we grasp that truth, we stop being the kind of people that go, no, not me. When we do that, that's unbelief. And we say to God, I don't actually believe your word; I know that you're saying it about somebody else, and I really love you that you're saying it about somebody else, but it doesn't apply to me. What we are saying to him is that he's telling lies. The Bible tells me that my God is not a person; he's not a man that he should lie. He speaks the truth, and he says about you: oh my word, when I created you, masterpiece. Yes, I know that there is stuff that needs to be done. I know you need to reflect me more, I know that we need to deal with some things in your life, I know that it will be an ongoing process of restoration, but whatever happens and however it works, you are my masterpiece. Glory to God in the highest. Just to finish, have you heard that wonderful quote, it's not in the Bible, it says beauty is in the eye of the beholder? Have you heard that? Have any of you ever used it? It's a rubbish quote.
It's a rubbish quote. I analyzed it, because when you look up that word, it's rubbish because of what it all means. Oh, see, that's what enthusiasm does, drops your earring out of your ear. It says that beauty doesn't exist on its own, that it has to be something that is observed; that's what it says. But beauty, your beauty, your masterpiece-ishness, exists because God created it to be. To be seen in the glorious world around us, and much more than that, beauty exists because he created it in you. Glory to God in the highest. So next time you stand and you see something spectacular, just remember: you are amazing, and then put your name there. I want to read this over you just to finish. Psalm 139 says, I thank you, God, for making me so mysteriously complex. That makes me smile. Everything you do is marvelously breathtaking. It simply amazes me to think about it. How thoroughly you know me, Lord. You even formed every bone in my body when you created me in the secret place. Carefully and skillfully, you shaped me from nothing to something. Hallelujah. You saw who you created me to be before I even became me. Before I'd even seen the light of day, the number of days you planned for me were already recorded in your book. It tells me he has purpose over your life. Listen to this: every single moment you are thinking of me. You think you're alone and that nobody cares, that you're isolated in your pain and your sorrow and nobody would even notice. He says, every single moment. Remember the 500 liters of water going over Victoria Falls; every single moment you are thinking of me. How precious and wonderful to consider that you cherish me constantly in your every thought. Oh God, your desires towards me are more than the grains of sand on every seashore. Glory to God in the highest. And when I awake each morning, how does that end? You are still with me. Yeah, but I'm on my own. No, you're still with me.
God, I invite your searching gaze into my heart, examine me through and through, find out everything that may be hidden within me, put me to the test and sift through all my anxious cares. In other words, restore. See if there is any path of pain I'm walking on and lead me back to your glorious everlasting ways, the path that brings me back to you. You are incredibly made in the image of God with the breath of Him within you. Glory to God in the highest. Amen.
In this episode of Vegas Business Spotlight, host Tim Knifton sits down with Alexio Ramirez, owner-operator of Rocket TV Media, to discuss the power of indoor digital billboards in boosting business visibility. As a successful real estate entrepreneur, Ramirez recognized the importance of effective marketing strategies and saw an opportunity to provide small to medium-sized businesses with a cost-effective advertising solution. By strategically placing indoor digital billboards in high-traffic and high-dwell areas throughout Las Vegas and Henderson Valley, Rocket TV Media offers businesses the chance to share their stories and engage with their target audience through short commercials. Ramirez explains how this approach allows businesses to gain exposure without the hefty price tag associated with traditional outdoor billboards. Tune in to this inspiring conversation as Ramirez shares his journey to success and sheds light on the impact of Rocket TV Media's innovative advertising platform.

Items discussed:
Tim Knifton welcomes Alexio Ramirez, owner-operator of Rocket TV Media, to the Vegas Business Spotlight podcast.
Ramirez introduces Rocket TV Media's indoor digital billboards, strategically placed in high-traffic and high-dwell areas in Las Vegas and Henderson Valley.
He highlights the opportunity Rocket TV Media provides for small to medium-sized businesses to advertise and tell their stories through 15- to 32-second commercials.
Ramirez emphasizes the affordability and effectiveness of indoor digital billboards, particularly for businesses that cannot invest in traditional outdoor billboards.
The concept of "farming" in marketing is discussed, explaining the power of repetition and visibility to build brand recognition and trust.
Ramirez shares his background in real estate and how his experience as a business owner led him to recognize the significance of marketing in driving success.
The conversation delves into the impact of Rocket TV Media's advertising platform,
showcasing success stories of businesses that have utilized their services.
Ramirez concludes by encouraging businesses to invest in marketing and highlights the value of Rocket TV Media's unique approach to amplifying visibility and engaging with target audiences.
Join Tim Knifton as he uncovers the story behind Rocket TV Media and the impact of indoor digital billboards in revolutionizing business advertising strategies. Discover how this innovative platform is empowering businesses to connect with their customers and gain a competitive edge in the Las Vegas market.
http://rockettvmedia.com/

About Alexio Ramirez
Alexio Ramirez is the owner and operator of Rocket TV Media, a company that strategically places indoor digital billboards in high-traffic and high-dwell areas throughout Las Vegas and Henderson Valley. This provides small to medium-sized business owners with the opportunity to advertise and tell their story in a 15- to 32-second commercial to their clients. By bringing indoor digital billboards to locations throughout the Vegas Valley, Alexio is revolutionizing the way businesses can advertise and reach their target audience.

About the Show Sponsor:
The "Vegas Business Spotlight" podcast is proudly sponsored by RSVP Las Vegas, your premier direct mail postcard service in the heart of Las Vegas. With a commitment to delivering outstanding results and effective marketing solutions, RSVP Las Vegas specializes in helping businesses connect with their target audience through direct mail.
Visit their website at RSVPLasVegas.com to explore the range of direct mail services they offer. From designing eye-catching postcards to precisely targeting your desired audience, RSVP Las Vegas has your direct mail marketing needs covered.
Their team of experts is dedicated to helping your business make a lasting impression and drive results.
Whether you're launching a new marketing campaign, promoting a special offer, or aiming to boost brand awareness, RSVP Las Vegas is your trusted partner in direct mail marketing success. Contact them at (725) 333-8660, and their knowledgeable team will be ready to assist you.
Experience the power of effective direct mail marketing with RSVP Las Vegas. Trust their expertise and enjoy the benefits of reaching your audience directly. Visit their website or give them a call today to start your next successful marketing campaign with RSVP Las Vegas, your premier direct mail postcard service in Las Vegas.

Vegas Business Spotlight
https://businessinnovatorsradio.com/vegas-business-spotlight/
Source: https://businessinnovatorsradio.com/rocket-tv-media-amplifying-business-visibility-with-indoor-digital-billboards
We've brought back our resident weather (and food) expert, Alexio Tabafunda for a happy holiday XMAS episode! Hosted on Acast. See acast.com/privacy for more information.
Join us for one of our online services, and check out our website to find a bit more information about us. We would love to have you join us at https://live.lpchurch.com/ Find locations, our beliefs, and more info about us at https://www.lpchurch.com/about. CONNECT WITH LIFEPOINT Hub: www.lpchurch.com/hub YouTube: www.youtube.com/c/LifePointChurchNW Facebook: www.facebook.com/LifePointNW/ Instagram: www.instagram.com/lifepointnw
Learn more about your ad choices. Visit megaphone.fm/adchoices
The reggaeton genre is in mourning over the death of Alexio from breast cancer. Also, what is Pepe Gamez being accused of?
RIP Alexio "La Bruja". After a long battle with cancer, the urban music singer and songwriter #alexiolabruja has passed away. Our deepest condolences to his family and loved ones. #ZonaDeFujitivo #Fallece
Our most-guested guest is back, tackling a topic he's REALLY an expert on. Listen to our bona fide baboy moments as Andren, Red, and Alexio sit down and talk while they were HUNGRY. Hosted on Acast. See acast.com/privacy for more information.
Join us for one of our online services, and check out our website to find a bit more information about us. We would love to have you join us at https://online.lpchurch.church Find locations, our beliefs, and more info about us at https://www.lpchurch.com/about. CONNECT WITH LIFEPOINT Hub: www.lpchurch.com/hub YouTube: www.youtube.com/c/LifePointChurchNW Facebook: www.facebook.com/LifePointNW/ Instagram: www.instagram.com/lifepointnw
I meaaaan, there's not really a way to explain further what this episode is. Listen, and prepare to meet your expectations. Hosted on Acast. See acast.com/privacy for more information.
(Fremont County, WY) - Riverton's young soccer stars Ashton, Joilys, Xaden, Alexio, Lexi and Santiago, along with coach Julian Mejorado, stopped by the Today in the 10 Show on KOVE this week. They chat about the upcoming Rampage Rumble Tournament in Riverton, which will bring well over 1,000 players, parents, officials and spectators to Fremont County on Saturday and Sunday. We also hear from the players and Coach Mejorado about a trip many of the young athletes will be taking to Europe this summer, and how you can help the team fundraise! Catch the full conversation in the player below or by subscribing to the County 10 Podcast!
GFK-Helden | Resolving Conflicts and Personal Development with Nonviolent Communication
Topics and questions in this episode: - What exactly is Systemic Consensing (SK)? - How does the process work? - The difference between majority voting and Systemic Consensing. - The limits of Systemic Consensing. - Experiences from teams and groups. - How do Nonviolent Communication and Systemic Consensing fit together?
Due to a sudden guest cancellation, the Bago Matulog crew scrambles to pull an episode out of nowhere and ends up talking about a very important topic. Hosted on Acast. See acast.com/privacy for more information.
Join us for one of our online services, and check out our website to find a bit more information about us. We would love to have you join us at https://lifepoint.online.church Find locations, our beliefs, and more info about us at https://www.lpchurch.com/about. CONNECT WITH LIFEPOINT Hub: www.lpchurch.com/hub YouTube: www.youtube.com/c/LifePointChurchNW Facebook: www.facebook.com/LifePointNW/ Instagram: www.instagram.com/lifepointnw
Today we talk to vegan advocate, podcaster and overall porn slut Lizzie Love! Lizzie talks about getting into porn, how vegan diets help with sex, and special guest Mike Alexio (our friend Chad) reveals something to Lizzie that shocks everyone.
Follow Lizzie everywhere:
All links: https://allmylinks.com/itslizzielove
Twitter: https://twitter.com/ItsLizzieLove
OnlyFans: https://onlyfans.com/lizzielove
Instagram: https://www.instagram.com/itslizzielove
Manyvids: https://www.manyvids.com/Profile/1000960761/ItsLizzieLoveXO/Store/Videos/
YouTube: https://www.youtube.com/c/lizzielove
And follow us as well:
YourWorstFriend.com
Patreon: https://www.Patreon.com/WorstFriendCast
Twitter: https://twitter.com/worstfriendcast
Instagram: https://www.instagram.com/worstfriendcast
This podcast is special for many reasons: not only is the host Teresa Presas, Senior Consultant at Magellan Circle, but we also had the pleasure of talking with Ana Paula Mesquita and Alexio Picco, CEO and Chairman of Magellan Circle, respectively. For this unique podcast, we discussed everything there is to know about the new merger that saw Magellan and Circle Group unite into what is now Magellan Circle. Get to know our new venture, and better understand the motivations, values and vision behind Magellan Circle. The future looks promising: discover more by listening to our brand new episode below!
Join us for one of our online services, and check out our website to find a bit more information about us. We would love to have you join us at https://lifepointc.churchonline.org/ Find locations, our beliefs, and more info about us at https://www.lpchurch.com/about. CONNECT WITH LIFEPOINT Hub: www.lpchurch.com/hub YouTube: www.youtube.com/ Facebook: www.facebook.com/LifePointC/ Instagram: www.instagram.com/lifepointnw/
FULL SHOW NOTES: https://podcast.nz365guy.com/376
A conversation about Alexio Tonderai Chandiwana's family and life. Alexio talks about his journey and how he got into technology. A discussion about Alexio's career background. Talks about how helpful and important the community is to Alexio. How did Alexio's journey to becoming a Business Applications MVP start? Alexio's involvement in the community and speaking events. Discussions about Alexio's learning experiences along this journey. Alexio's advice to people wanting to become a Microsoft MVP.
OTHER RESOURCES:
Microsoft MVP YouTube Series - How to Become a Microsoft MVP
90 Day Mentoring Challenge - https://ako.nz365guy.com/
AgileXRM - The integrated BPM for Microsoft Power Platform
Support the show
The guest of this episode is Alexio Picco, Managing Director of Circle Group, an Italian company celebrating 10 years of providing software solutions as well as consultancy and EU funding services in the area of transport and logistics. We ask Alexio to comment on the strategies for the European Green Deal and the Fit-for-55 climate action plan while paying special attention to current events.
Episode 138 of the #MVPbuzzChat interview series. Conversation between Microsoft Regional Director and MVP Christian Buckley (@buckleyplanet), and Business Applications MVP Alexio Chandiwana (@alex_chandiwana), a Solution Designer and Microsoft Certified Trainer (MCT) based in Essex, England. You can also find this episode on the CollabTalk YouTube page at https://youtu.be/rVMYzffajZ0
Chino catches up with his favorite alliterative comedy trio, the Triple A's. He talks to them about what they've been up to during the pandemic, how Solid OK has held up, and what the future holds. They also talk a little about wrestling, more comedy, and Instagram story statuses. **** Shop on Lazada and help the podcast out at no additional cost to you by using this link: Podlink.co/shk Shop on Amazon and help the podcast out at no additional cost to you by using this link: Podlink.com/s6o Register your podcast at Podmetrics by using my referral code: CLASSCLOWN
Join us with our guests, Alexio Tabafunda and Rae Mammuad, as we list down problems faced by book-smart people: from being not-so-street-smart to expectations in sports and overall image. We talk about the struggles we had in school and the smart shaming that intellectual people are experiencing online now. Can all of us spell the word "apartment"? Find out here. Enjoy!
What has been happening lately in cybersecurity in healthcare? Today, Anne Genge, CEO of Alexio Corporation, is my guest on this episode of Practice Management Nuggets For Your Healthcare Practice! Anne and Jean discuss recent privacy breach scenarios, cybersecurity trends, and steps that you can take now to prevent these events from happening to you! Virtual care, telehealth, and working from home present opportunities, and cybersecurity risks. Digital health and digital transformation have grown rapidly in the last year. Take time now to review your practice and defend yourself from dramatic increases in cybersecurity attacks.

Meet Anne Genge
Anne Genge is a pioneer in protecting health data and those who use it. She is a Certified Information Privacy Professional with a specialization in dentistry. Anne also holds certifications for HIPAA, credit card security, internet, and network security. Ransomware and data theft have changed the face of dentistry in the past decade, meaning dentists need a new toolkit for protecting their practices. With over 20 years of experience, Anne knows the challenges healthcare providers face with technology. She and her team at Alexio Corporation work with dental and medical professionals to minimize data risk and maximize patient care. As healthcare grows increasingly dependent on the digital environment, cybersecurity becomes increasingly difficult. Protection of patient data is not only the law; it's imperative for business success and reputation. Anne simplifies cybersecurity for dentists and other healthcare providers and gives 'real world' strategies to protect patient information and the practice business. To find out more, see https://getalexio.com

My Takeaways
Anne shared her Top 3 Tips for an Incident-Free 2021 for healthcare providers and dentists to protect your practice and your patients, including these nuggets.
Secure the network
Secure the people
Disaster recovery plan

Show Notes
00:10 Introduction
00:54 Episode #082 6 Deadly Sins
03:00 COVID-19 biggest influence on digital transformation
07:01 E-Health Saskatchewan breach
10:31 Anne's recommendations: basic steps for healthcare practices
18:08 Diagnostic imaging clerical staff snooping - Employees access 3K patients' records in privacy breach at Red Deer hospital. Red Deer Advocate, Apr. 13, 2021
25:16 Episode #099 Table-Top Privacy Breach Fire Drills
28:00 Top 3 tips for an incident-free 2021
29:08 GetAlexio.com
30:00 Practical Privacy Officer Strategies https://informationmanagers.ca/practical-privacy-officer-training/
AN INSIDE LOOK INTO CREATING COMEDY VIDEOS
Are you someone who loves making people laugh in creative ways? This might be great content for you! Join us as our experience partners Sari, Ryan, Andren, Alexio, and Aldo tell us how they started SOLID OK, and the art behind creating comedy videos! SOLID OK is composed of a group of people who create funny films for our entertainment! Check out their content on their Facebook page below! https://www.facebook.com/solidoktv --- Send in a voice message: https://anchor.fm/experiencephilippines/message
It was simply good fun this episode: Alexio and Andren join me in answering fan questions. The ep is also peppered with some good-spirited roasting as the KoolPals raided the chat (which made me go heel mode, cutting promos on them). Listen for some solid laughs and maybe, just maybe, good advice? --- DISCLAIMER: The views and opinions expressed by the podcast creators, hosts, and guests do not necessarily reflect the official policy and position of Podcast Network Asia. Any content provided by the people on the podcast are of their own opinion, and are not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.
The Sexbomb New Gen of Pinoy stand-up comedy! DISCLAIMER: The views and opinions expressed by the podcast creators, hosts, and guests do not necessarily reflect the official policy and position of Podcast Network Asia. Any content provided by the people on the podcast are of their own opinion, and are not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything. See acast.com/privacy for privacy and opt-out information.
Fellow fat comedian Alexio Tabafunda guests. We talk about how self-deprecating jokes about our biggest insecurity can be both good and bad. Lots of deep-dive insights on how fat people are treated in the entertainment industry. We also talk about some Solid OK sketches that didn't make the cut that we wish had. --- DISCLAIMER: The views and opinions expressed by the podcast creators, hosts, and guests do not necessarily reflect the official policy and position of Podcast Network Asia. Any content provided by the people on the podcast are of their own opinion, and are not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.
The financial advisor sells time. Leveraging time means, first and foremost, having the ability to delegate. To whom? To the staff, to the team.
Here you can listen to the latest remixes, edits, mashups, and bootlegs by Cristian Gil. To download them for free, visit his website at www.cristiangil.com
Tonight, the best Monday-night radio show celebrates the now-traditional Birthday Bash Show dedicated to Marky Marcano, and for the first time the staff of the best show gathers in a single studio. In music, we'll hear premieres from Tay-na and Jinny las Rebuleras, "Wild West Love" by DJ LMS, Almas Band, Flo-Rida, and Alexio la Bestia, along with the best in entertainment, comedy, and much, much more.
Comedians Justin Alexio and Peter Banachowski hop in The Hotbox to talk about commercial auditions, the deep web, and bear maulings. Special Guest Host Chris Edwards!