Podcasts about SVN

  • 152 podcasts
  • 218 episodes
  • 47 min average duration
  • Infrequent episodes
  • Latest episode: Apr 7, 2025



Latest podcast episodes about SVN

Keys to the Commonwealth
E76 - Weston Lockhart, Navigating Commercial and Retail Real Estate

Keys to the Commonwealth

Play Episode Listen Later Apr 7, 2025 45:21


Weston Lockhart serves as an Advisor with SVN Stone Commercial Real Estate, focusing on retail real estate. He is a native of Lexington and received a Bachelor of Business Administration from the University of Kentucky. During his time at SVN, Weston has worked successfully with clients on asset acquisition and disposition, site selection for national and local retailers, and property repositioning through lease-up. Weston serves as the Kentucky/Tennessee Talent Development Chair for ICSC and is heavily involved in retail real estate in the Southeast. He has worked closely on portfolio expansion with tenants including Popeyes Chicken, Goodwill Industries of Kentucky, Driven Brands, RaceTrac, Five Guys Burgers and Fries, Pizza Hut, Bargain Hunt, Take 5 Oil Change, and more. Working in a relationship-driven industry, he views himself as another team member for emerging brands, developers, and investors, helping them achieve their goals and optimize their businesses and investment portfolios.

Find Weston on:
LinkedIn: https://www.linkedin.com/in/weston-lockhart
Real Estate: https://myelisting.com/commercial-agent/weston-lockhart-131289/
X: https://x.com/WestonBLockhart/status/1795650024480583962
Stone Commercial Real Estate: https://svnstone.com/our-team/?brokerId=weston.lockhart%40svn.com
DevelopLex Podcast: https://www.middletech.com/developlex
Build Out: https://buildout.com/plugins/51/svnlex.com/brokers/weston.lockhart@svn.com?pluginId=0&iframe=true&embedded=true&brokerId=weston.lockhart%40svn.com&cacheSearch=true
Facebook: https://www.facebook.com/developlexpod/

Show hosted by Landry Fields:
https://www.x.com/landryfieldz
https://www.linkedin.com/in/landryfields/
https://www.instagram.com/landryfields_
https://www.youtube.com/@landryfields_
www.novainsurancegroup.com
859-687-2004

Visionaries Global Media
Nattering With E S2 #26: End Of Lex Express RPW Recap With Peapod

Visionaries Global Media

Play Episode Listen Later Mar 20, 2025 71:02


Sometimes in life, good things come to an end. On that note, today marks the 31st anniversary of WrestleMania 10, where the culmination of the Lex Express came to a crashing end thanks to the Bret Hart conspiracy train. Anchored by your main pal E of Nattering With E, I'm joined today by the legendary Peapod of Ruthless Pro Wrestling to recap Lex's final obstacle at WrestleMania 10. We also broke down both of RPW's joint shows with SVN and realized maybe our beloved Cagematch isn't as accurate as we thought. Lastly, we hypothesize about why two death match legends, Nick Gage and Masada, can't be friends. As always, support our local distributor, Visionaries Global Media, wherever you get your podcasts. Look up Nattering With E!

Peapod can be found via his Linktree: https://linktr.ee/PeapodCalls4U
Tom - @high5TOM on Twitter
SJ - @KarnivalofKhaos
Bauerhausen and Eric can both be found on the NatteringE Instagram page

RIANOUTLOUD!
In Conversation with SEVNDEEP

RIANOUTLOUD!

Play Episode Listen Later Oct 10, 2024 63:02


In the Season 7 premiere of RIANOUTLOUD!, host Rian-Louis welcomes the iconic artist SEVNDEEP for an insightful conversation about his journey as a Black queer independent artist. The episode explores the vital role of LGBTQ representation in the music industry and how it shapes both the creative landscape and personal narratives. SVN reflects on his evolution from dance to music, sharing the influences that have shaped his artistry and the significance of empowerment in his songs. He discusses the challenges and rewards of navigating social media as an independent artist, emphasizing the importance of creating a safe space for Black queer creatives. Listeners will gain valuable insights as SVN offers advice for aspiring artists, encouraging them to trust their instincts and rise above negativity. The episode also highlights his upcoming projects and the creative processes behind them, showcasing how personal experiences fuel his work. Join us for a powerful conversation that champions authenticity, empowerment, and the transformative impact of music.

For all things SEVNDEEP:
https://www.instagram.com/officialsevndeep/
https://x.com/thesevndeep?lang=en
https://www.tiktok.com/@sevndeep

Stream All Tea. All Shade.: https://music.apple.com/us/album/all-tea-all-shade/1718838152

The Curious Wire
#102: Reid Bennett National Council Chair Multifamily SVN

The Curious Wire

Play Episode Listen Later Aug 14, 2024 23:43


Reid Bennett is the National Council Chair of Multifamily at SVN. He is an SVP and Principal of the Multifamily Group, with more than 20 years of experience as a broker based in Chicago. I'm Moshe Crane; connect with me on LinkedIn. My day job is VP of Branding and Strategic Initiatives at Sage Ventures. Sage Ventures is a commercial real estate firm based in Baltimore, MD. The company buys and operates multifamily rental properties, and it also builds and develops homes for sale.

Diary of an Apartment Investor
EXP - Understanding The Deal With Reid Bennett

Diary of an Apartment Investor

Play Episode Listen Later Jun 28, 2024 33:54


Continue the conversation with Brian on LinkedIn. Join our multifamily investing community for in-depth courses and live networking with like-minded apartment investors at the Tribe of Titans. This episode originally aired on June 28, 2024.

Watch the episode on YouTube: https://www.youtube.com/channel/UCcsYmSLMxQCA9hgt_PciN3g?sub_confirmation=1

Listen to us on your favorite podcast app:
Apple Podcasts: https://tinyurl.com/AppleDiaryPodcast
Spotify: https://tinyurl.com/SpotDiaryPodcast
Google Podcasts: https://tinyurl.com/GoogleDiaryPodcast

Follow us on:
Instagram: https://www.instagram.com/diary_of_an_apartment_investor
Facebook: https://www.facebook.com/DiaryAptInv/
Twitter: https://twitter.com/Diary_Apt_Inv

Your host, Brian Briscoe, has owned over twenty apartment complexes worth hundreds of millions of dollars and is dedicated to helping aspiring apartment investors learn how to do the same. He founded the Tribe of Titans as his platform to educate aspiring apartment investors and is continually creating new content for subscribers and coaching clients. He is the founder of Streamline Capital, based in Salt Lake City, Utah, and is probably working on closing another apartment complex in the greater SLC area. He retired as a Lieutenant Colonel in the United States Marine Corps in 2021 after 20 years of service. Connect with him on LinkedIn.

Reid Bennett is the National Council Chair of Multifamily Properties for SVN International and a Principal of the Multifamily Group for SVN Chicago Commercial, where he has worked for over 16 years. He is also a Certified Commercial Investment Member (CCIM), a prestigious credential that demonstrates his expertise in financial analysis, market analysis, user decision analysis, and investment analysis for commercial real estate. With a focus on the sale of apartment communities across the Midwest, Reid leverages his extensive network, market knowledge, and collaborative approach to deliver optimal results for his clients. He has received multiple awards and recognition for his performance, including the Partners Circle Award four times, ranking him in the top 0.02% of all SVN advisors globally. He is also a commercial real estate coach for The Massimo Group, where he helps other brokers elevate their business and maximize their potential. Reid is passionate about fostering responsible development and affordable housing in his community, and has served as chair of the Development Committee for the River North Residents Association. Learn more about him at https://www.linkedin.com/in/reidbennettccim/ or call 773-251-7342.

Zakendoen | BNR
De top van NL | How does SVn help solve the housing crisis?

Zakendoen | BNR

Play Episode Listen Later Jun 20, 2024 21:01


The Netherlands is in a housing crisis. The new cabinet wants 100,000 additional homes built every year. But are those homes always affordable for renters or buyers? And how can the Stimuleringsfonds Volkshuisvesting Nederlandse Gemeenten help? In 'De top van Nederland', an in-depth conversation with Arjen Gielen, CEO of the Stimuleringsfonds Volkshuisvesting Nederlandse Gemeenten (SVn). Presenter Meindert Schut asks him whether:
- SVn works mainly with public or private money;
- first-time buyers always manage to find SVn for a loan;
- starter loans are still needed now that mortgage rates are falling;
- the new cabinet can solve the housing crisis;
- the rental law of outgoing minister Hugo de Jonge is a good idea.

About Stimuleringsfonds Volkshuisvesting Nederlandse Gemeenten (SVn): SVn provides housing-related loans to consumers, associations, and businesses. In various municipalities, SVn provides, for example, a starter loan on top of the mortgage.

About Meindert Schut: Meindert Schut is a journalist and radio presenter at BNR. He presents programs including De nationale Autoshow and BNR Mobility, and is the regular stand-in for presenter Thomas van Zijl.

Subscribe to the podcast: go to 'De top van Nederland' and subscribe; the show is also available on Apple Podcasts and Spotify. See omnystudio.com/listener for privacy information.

Pierwsze kroki w IT
How has the programmer's profession changed over 15 years?

Pierwsze kroki w IT

Play Episode Listen Later May 9, 2024 102:38


Patryk Jar, Tech Lead, talks about how a programmer's work has changed over the past 15 years. Among other things, we discuss what the job and its requirements used to look like, what has changed for the better (and for the worse), and what prospects lie ahead. The full episode description, recommended materials, links, and a transcript are available at: https://devmentor.pl/b/

devmentor.pl/rozmowa ⬅ Want to move into IT and learn the approaches that helped others land a job? I'm an experienced developer and programming mentor, and I'm happy to answer your questions about learning to code and the IT world. Book a free, no-obligation call! ~ Mateusz Bogolubow, creator of the Pierwsze kroki w IT podcast

devmentor.pl/podcast ⬅ The podcast's official website

RetornoCast
HOW TO INVEST ABROAD | Everything you need to know! | SVN Investimentos | RETORNOCAST #91

RetornoCast

Play Episode Listen Later Apr 4, 2024 27:48


In today's episode of RetornoCast, we welcome Pedro Tiezzi, Head of Asset Allocation at SVN Investimentos. With years of experience in the market, Pedro is responsible for SVN's entire international investment structure. SVN Investimentos is the top investment advisory firm in Brazil, recognized by the Brazil Advisor Awards. It has been in the financial market for over 16 years helping people manage their wealth, with a team of more than 450 advisors and investment specialists. In this episode, we cover everything you need to know to invest abroad. Discover opportunities and the best strategies for diversifying your portfolio with international assets, especially in Brazil's current environment of falling interest rates.

Freelandev - Vivir del desarrollo en WordPress
#249 – We are not infallible, and new ideas for plugins

Freelandev - Vivir del desarrollo en WordPress

Play Episode Listen Later Feb 12, 2024 31:48


Follow us on: We sometimes complain about technology, but let's not forget that human error is still... How was the week? Esther's week: I wiped out the TWP management dashboard spreadsheet, with no way to recover it.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
The "Normsky" architecture for AI coding agents — with Beyang Liu + Steve Yegge of SourceGraph

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 14, 2023 79:37


We are running an end-of-year survey for our listeners. Let us know any feedback you have for us, which episodes resonated with you the most, and guest requests for 2024!

RAG has emerged as one of the key pieces of the AI Engineer stack. Jerry from LlamaIndex called it a "hack", Bryan from Hex compared it to "a recommendation system from LLMs", and even LangChain started with it. RAG is crucial in any AI coding workflow. We talked about context quality for code in our Phind episode. Today's guests, Beyang Liu and Steve Yegge from Sourcegraph, have been focused on code indexing and retrieval for over 15 years. We locked them in our new studio to record a 1.5-hour masterclass on the history of code search, retrieval interfaces for code, and how they get a SOTA 30% completion acceptance rate in their Cody product by being better at the "bin packing problem" of LLM context generation.

Google Grok → Sourcegraph → Cody

While at Google in 2008, Steve built Grok, which lives on today as Google Kythe. It allowed engineers to do code parsing and searching across different codebases and programming languages. (You might remember this blog post from Steve's time at Google.) Beyang was an intern at Google at the same time, and Grok became the inspiration to start Sourcegraph in 2013. The two didn't know each other personally until Beyang brought Steve out of retirement nine years later to join him as VP Engineering. Fast forward 10 years: Sourcegraph has become the best code search tool out there and raised $223M along the way. Nine months ago, they open-sourced Sourcegraph Cody, their AI coding assistant.
All their code indexing and search infrastructure allows them to get SOTA results by having better RAG than competitors:

* Code completions as you type that achieve an industry-best Completion Acceptance Rate (CAR) as high as 30%, using a context-enhanced open-source LLM (StarCoder)
* Context-aware chat that provides the option of using GPT-4 Turbo, Claude 2, GPT-3.5 Turbo, Mixtral 8x7B, or Claude Instant, with more model integrations planned
* Doc and unit test generation, along with AI quick fixes for common coding errors
* AI-enhanced natural language code search, powered by a hybrid dense/sparse vector search engine

There are a few pieces of infrastructure that helped Cody achieve these results.

Dense-sparse vector retrieval system

For many people, RAG = vector similarity search, but there's a lot more you can do to get the best possible results. From their release: "Sparse vector search" is a fancy name for keyword search that potentially incorporates LLMs for things like ranking and term expansion (e.g., "k8s" expands to "Kubernetes container orchestration", possibly weighted as in SPLADE):

* Dense vector retrieval makes use of embeddings, the internal representation that LLMs use to represent text. Dense vector retrieval provides recall over a broader set of results that may have no exact keyword matches but are still semantically similar.
* Sparse vector retrieval is very fast, human-understandable, and yields high recall of results that closely match the user query.
* We've found the approaches to be complementary.

There's a very good blog post by Pinecone on SPLADE for sparse vector search if you're interested in diving in. If you're building RAG applications in areas with a lot of industry-specific nomenclature, acronyms, etc., this is a good approach to getting better results.

SCIP

In 2016, Microsoft announced the Language Server Protocol (LSP) and the Language Server Index Format (LSIF).
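To make the hybrid idea above concrete, here is a minimal, hypothetical sketch of blending sparse and dense retrieval scores. The scoring functions are deliberate stand-ins (a raw term-overlap count instead of BM25/SPLADE, and plain cosine similarity instead of a real embedding index); it illustrates the general technique, not Sourcegraph's actual implementation.

```python
from collections import Counter
import math

def sparse_score(query_terms, doc_terms):
    """Keyword (sparse) score: simple term-overlap count.
    Stand-in for BM25 or a learned sparse model like SPLADE."""
    doc_counts = Counter(doc_terms)
    return sum(doc_counts[t] for t in query_terms)

def dense_score(query_vec, doc_vec):
    """Embedding (dense) score: cosine similarity between vectors."""
    dot = sum(q * d for q, d in zip(query_vec, doc_vec))
    qn = math.sqrt(sum(q * q for q in query_vec))
    dn = math.sqrt(sum(d * d for d in doc_vec))
    return dot / (qn * dn) if qn and dn else 0.0

def hybrid_rank(query_terms, query_vec, docs, alpha=0.5):
    """Blend a normalized sparse score with a dense score.
    docs: {doc_id: (term_list, embedding_vector)}; alpha weights the mix."""
    scored = [(doc_id,
               sparse_score(query_terms, terms),
               dense_score(query_vec, vec))
              for doc_id, (terms, vec) in docs.items()]
    max_s = max((s for _, s, _ in scored), default=1) or 1  # normalize sparse
    return sorted(((doc_id, alpha * (s / max_s) + (1 - alpha) * d)
                   for doc_id, s, d in scored),
                  key=lambda x: x[1], reverse=True)
```

The blend lets an exact keyword hit (e.g. an identifier name) and a semantic match both pull a document up the ranking, which is why the two approaches are described as complementary.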
This protocol makes it easy for IDEs to get all the context they need from a codebase for things like file search, references, "go to definition", etc. Sourcegraph developed SCIP, "a better code indexing format than LSIF":

* Simpler and more efficient format: SCIP utilizes Protobuf instead of the JSON used by LSIF. Protobuf is more space-efficient, simpler, and more suitable for systems programming.
* Better performance and smaller index sizes: SCIP indexers, such as scip-clang, show enhanced performance and reduced index file sizes (10%-20% smaller) compared to LSIF indexers.
* Easier to develop and debug: SCIP's design, centered around human-readable string IDs for symbols, makes it faster and more straightforward to develop new language indexers.

Having more efficient indexing is key to more performant RAG on code.

Show Notes

* Sourcegraph
* Cody
* Copilot vs Cody
* Steve's Stanford seminar on Grok
* Steve's blog
* Grab
* Fireworks
* Peter Norvig
* Noam Chomsky
* Code search
* Kelly Norton
* Zoekt
* v0.dev

See also our past episodes on Cursor, Phind, Codeium and Codium, as well as the GitHub Copilot keynote at AI Engineer Summit.

Timestamps

* [00:00:00] Intros & Backgrounds
* [00:05:20] How Steve's work on Grok inspired Sourcegraph for Beyang
* [00:08:10] What's Cody?
* [00:11:22] Comparison of coding assistants and the capabilities of Cody
* [00:16:00] The importance of context (RAG) in AI coding tools
* [00:21:33] The debate between Chomsky and Norvig approaches in AI
* [00:30:06] Normsky: the Norvig + Chomsky models collision
* [00:36:00] The death of the DSL?
* [00:40:00] LSP, SCIP, Kythe, BFG, and all that fun stuff
* [00:53:00] The Sourcegraph internal stack
* [00:58:46] Building on open source models
* [01:02:00] Sourcegraph for engineering managers?
* [01:12:00] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.
[00:00:16]Swyx: Hey, and today we're christening our new podcast studio in the Newton, and we have Beyang and Steve from Sourcegraph. Welcome. [00:00:25]Beyang: Hey, thanks for having us. [00:00:26]Swyx: So this has been a long time coming. I'm very excited to have you. We also are just celebrating the one year anniversary of ChatGPT yesterday, but also we'll be talking about the GA of Cody later on today. We'll just do a quick intros of both of you. Obviously, people can research you and check the show notes for more. Beyang, you worked in computer vision at Stanford and then you worked at Palantir. I did, yeah. You also interned at Google. [00:00:48]Beyang: I did back in the day where I get to use Steve's system, DevTool. [00:00:53]Swyx: Right. What was it called? [00:00:55]Beyang: It was called Grok. Well, the end user thing was Google Code Search. That's what everyone called it, or just like CS. But the brains of it were really the kind of like Trigram index and then Grok, which provided the reference graph. [00:01:07]Steve: Today it's called Kythe, the open source Google one. It's sort of like Grok v3. [00:01:11]Swyx: On your podcast, which you've had me on, you've interviewed a bunch of other code search developers, including the current developer of Kythe, right? [00:01:19]Beyang: No, we didn't have any Kythe people on, although we would love to if they're up for it. We had Kelly Norton, who built a similar system at Etsy, it's an open source project called Hound. We also had Han-Wen Nienhuys, who created Zoekt, which is, I think, heavily inspired by the Trigram index that powered Google's original code search and that we also now use at Sourcegraph. Yeah. [00:01:45]Swyx: So you teamed up with Quinn over 10 years ago to start Sourcegraph and you were indexing all code on the internet. And now you're in a perfect spot to create a code intelligence startup. Yeah, yeah. 
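The trigram index mentioned above, which powered Google's original code search and lives on in Zoekt, can be illustrated with a toy sketch. This is a hypothetical, minimal version of the general technique, not any project's actual code: index every 3-character substring of each file, intersect the posting lists for a query's trigrams to get candidate files, then verify candidates with an exact substring check.

```python
def trigrams(text):
    """All 3-character substrings of a string."""
    return {text[i:i + 3] for i in range(len(text) - 2)}

def build_index(files):
    """Map each trigram to the set of file ids containing it.
    files: {file_id: file_content}."""
    index = {}
    for fid, content in files.items():
        for tri in trigrams(content):
            index.setdefault(tri, set()).add(fid)
    return index

def search(index, files, query):
    """Intersect posting lists for the query's trigrams to narrow the
    candidate set, then confirm with an exact substring check."""
    tris = trigrams(query)
    if not tris:  # query shorter than 3 chars: fall back to scanning all files
        candidates = set(files)
    else:
        postings = [index.get(t, set()) for t in tris]
        candidates = set.intersection(*postings)
    return sorted(f for f in candidates if query in files[f])
```

The intersection step is what makes this fast at scale: most files are ruled out without ever being read, and only the small candidate set needs the exact check.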
[00:01:56]Beyang: I guess the backstory was, I used Google Code Search while I was an intern. And then after I left that internship and worked elsewhere, it was the single dev tool that I missed the most. I felt like my job was just a lot more tedious and much more of a hassle without it. And so when Quinn and I started working together at Palantir, he had also used various code search engines in open source over the years. And it was just a pain point that we both felt, both working on code at Palantir and also working within Palantir's clients, which were a lot of Fortune 500 companies, large financial institutions, folks like that. And if anything, the pains they felt in dealing with large complex code bases made our pain points feel small by comparison. So that was really the impetus for starting Sourcegraph. [00:02:42]Swyx: Yeah, excellent. Steve, you famously worked at Amazon. And you've told many, many stories. I want every single listener of Latent Space to check out Steve's YouTube because he effectively had a podcast that you didn't tell anyone about or something. You just hit record and just went on a few rants. I'm always here for your Stevie rants. And then you moved to Google, where you also had some interesting thoughts on just the overall Google culture versus Amazon. You joined Grab as head of eng for a couple of years. I'm from Singapore, so I have actually personally used a lot of Grab's features. And it was very interesting to see you talk so highly of Grab's engineering and sort of overall prospects. [00:03:21]Steve: Because as a customer, it sucked? [00:03:22]Swyx: Yeah, no, it's just like, being from a smaller country, you never see anyone from our home country being on a global stage or talked about as a startup that people admire or look up to, like on the league that you, with all your legendary experience, would consider equivalent. Yeah. [00:03:41]Steve: Yeah, no, absolutely. 
They actually, they didn't even know that they were as good as they were, in a sense. They started hiring a bunch of people from Silicon Valley to come in and sort of like fix it. And we came in and we were like, Oh, we could have been a little better operational excellence and stuff. But by and large, they're really sharp. The only thing about Grab is that they get criticized a lot for being too westernized. Oh, by who? By Singaporeans who don't want to work there. [00:04:06]Swyx: Okay. I guess I'm biased because I'm here, but I don't see that as a problem. If anything, they've had their success because they were more westernized than the Sanders Singaporean tech company. [00:04:15]Steve: I mean, they had their success because they are laser focused. They copy to Amazon. I mean, they're executing really, really, really well for a giant. I was on a slack with 2,500 engineers. It was like this giant waterfall that you could dip your toe into. You'd never catch up. Actually, the AI summarizers would have been really helpful there. But yeah, no, I think Grab is successful because they're just out there with their sleeves rolled up, just making it happen. [00:04:43]Swyx: And for those who don't know, it's not just like Uber of Southeast Asia, it's also a super app. PayPal Plus. [00:04:48]Steve: Yeah. [00:04:49]Swyx: In the way that super apps don't exist in the West. It's one of the enduring mysteries of B2C that super apps work in the East and don't work in the West. We just don't understand it. [00:04:57]Beyang: Yeah. [00:04:58]Steve: It's just kind of curious. They didn't work in India either. And it was primarily because of bandwidth reasons and smaller phones. [00:05:03]Swyx: That should change now. It should. [00:05:05]Steve: And maybe we'll see a super app here. [00:05:08]Swyx: You retired-ish? I did. You retired-ish on your own video game? Mm-hmm. Any fun stories about that? And that's also where you discovered some need for code search, right? Mm-hmm. 
[00:05:16]Steve: Sure. A need for a lot of stuff. Better programming languages, better databases. Better everything. I mean, I started in like 95, right? Where there was kind of nothing. Yeah. Yeah. [00:05:24]Beyang: I just want to say, I remember when you first went to Grab because you wrote that blog post talking about why you were excited about it, about like the expanding Asian market. And our reaction was like, oh, man, how did we miss stealing it with you? [00:05:36]Swyx: Hiring you. [00:05:37]Beyang: Yeah. [00:05:38]Steve: I was like, miss that. [00:05:39]Swyx: Tell that story. So how did this happen? Right? So you were inspired by Grok. [00:05:44]Beyang: I guess the backstory from my point of view is I had used code search and Grok while at Google, but I didn't actually know that it was connected to you, Steve. I knew you from your blog posts, which were always excellent, kind of like inside, very thoughtful takes from an engineer's perspective on some of the challenges facing tech companies and tech culture and that sort of thing. But my first introduction to you within the context of code intelligence, code understanding was I watched a talk that you gave, I think at Stanford, about Grok when you're first building it. And that was very eye opening. I was like, oh, like that guy, like the guy who, you know, writes the extremely thoughtful ranty like blog posts also built that system. And so that's how I knew, you know, you were involved in that. And then, you know, we always wanted to hire you, but never knew quite how to approach you or, you know, get that conversation started. [00:06:34]Steve: Well, we got introduced by Max, right? Yeah. It was temporal. Yeah. Yeah. I mean, it was a no brainer. They called me up and I had noticed when Sourcegraph had come out. Of course, when they first came out, I had this dagger of jealousy stabbed through me piercingly, which I remember because I am not a jealous person by any means, ever. 
But boy, I was like, but I was kind of busy, right? And just one thing led to another. I got sucked back into the ads vortex and whatever. So thank God Sourcegraph actually kind of rescued me. [00:07:05]Swyx: Here's a chance to build DevTools. Yeah. [00:07:08]Steve: That's the best. DevTools are the best. [00:07:10]Swyx: Cool. Well, so that's the overall intro. I guess we can get into Cody. Is there anything else that like people should know about you before we get started? [00:07:18]Steve: I mean, everybody knows I'm a musician. I can juggle five balls. [00:07:24]Swyx: Five is good. Five is good. I've only ever managed three. [00:07:27]Steve: Five is hard. Yeah. And six, a little bit. [00:07:30]Swyx: Wow. [00:07:31]Beyang: That's impressive. [00:07:32]Alessio: So yeah, to jump into Sourcegraph, this has been a company 10 years in the making. And as Sean said, now you're at the right place. Phase two. Now, exactly. You spent 10 years collecting all this code, indexing, making it easy to surface it. Yeah. [00:07:47]Swyx: And also learning how to work with enterprises and having them trust you with their code bases. Yeah. [00:07:52]Alessio: Because initially you were only doing on-prem, right? Like a lot of like VPC deployments. [00:07:55]Beyang: So in the very early days, we're cloud only. But the first major customers we landed were all on-prem, self-hosted. And that was, I think, related to the nature of the problem that we're solving, which becomes just like a critical, unignorable pain point once you're above like 100 devs or so. [00:08:11]Alessio: Yeah. And now Cody is going to be GA by the time this releases. So congrats to your future self for launching this in two weeks. Can you give a quick overview of just what Cody is? I think everybody understands that it's a AI coding agent, but a lot of companies say they have a AI coding agent. So yeah, what does Cody do? How do people interface with it? [00:08:32]Beyang: Yeah. 
So how is it different from the like several dozen other AI coding agents that exist in the market now? When we thought about building a coding assistant that would do things like code generation and question answering about your code base, I think we came at it from the perspective of, you know, we've spent the past decade building the world's best code understanding engine for human developers, right? So like it's kind of your guide as a human dev if you want to go and dive into a large complex code base. And so our intuition was that a lot of the context that we're providing to human developers would also be useful context for AI developers to consume. And so in terms of the feature set, Cody is very similar to a lot of other assistants. It does inline autocompletion. It does code base aware chat. It does specific commands that automate, you know, tasks that you might rather not want to do like generating unit tests or adding detailed documentation. But we think the core differentiator is really the quality of the context, which is hard to kind of describe succinctly. It's a bit like saying, you know, what's the difference between Google and Alta Vista? There's not like a quick checkbox list of features that you can rattle off, but it really just comes down to all the attention and detail that we've paid to making that context work well and be high quality and fast for human devs. We're now kind of plugging into the AI coding assistant as well. Yeah. [00:09:53]Steve: I mean, just to add my own perspective on to what Beyang just described, RAG is kind of like a consultant that the LLM has available, right, that knows about your code. RAG provides basically a bridge to a lookup system for the LLM, right? Whereas fine tuning would be more like on the job training for somebody. If the LLM is a person, you know, and you send them to a new job and you do on the job training, that's what fine tuning is like, right? So tuned to our specific task. 
You're always going to need that expert, even if you get the on the job training, because the expert knows your particular code base, your task, right? That expert has to know your code. And there's a chicken and egg problem because, right, you know, we're like, well, I'm going to ask the LLM about my code, but first I have to explain it, right? It's this chicken and egg problem. That's where RAG comes in. And we have the best consultants, right? The best assistant who knows your code. And so when you sit down with Cody, right, what Beyang said earlier about going to Google and using code search and then starting to feel like without it, his job was super tedious. Once you start using these, do you guys use coding assistants? [00:10:53]Swyx: Yeah, right. [00:10:54]Steve: I mean, like we're getting to the point very quickly, right? Where you feel like almost like you're programming without the internet, right? Or something, you know, it's like you're programming back in the nineties without the coding assistant. Yeah. Hopefully that helps for people who have like no idea about coding assistants, what they are. [00:11:09]Swyx: Yeah. [00:11:10]Alessio: I mean, going back to using them, we had a lot of them on the podcast already. We had Cursor, we have Codeium and Codium, very similar names. [00:11:18]Swyx: Yeah. Phind, and then of course there's Copilot. [00:11:22]Alessio: You had a Copilot versus Cody blog post, and I think it really shows the context improvement. So you had two examples that stuck with me. One was, what does this application do? And the Copilot answer was like, oh, it uses JavaScript and NPM and this. And it's like, but that's not what it does. You know, that's what it's built with. Versus Cody was like, oh, these are like the major functions. And like, these are the functionalities and things like that. And then the other one was, how do I start this up?
And Copilot just said NPM start, even though there was like no start command in the package JSON, but you know, most likely, right? Most projects use NPM start. So maybe this does too. How do you think about open source models? Because Copilot has their own private thing. And I think you guys use StarCoder, if I remember right. Yeah, that's correct. [00:12:09]Beyang: I think Copilot uses some variant of Codex. They're kind of cagey about it. I don't think they've like officially announced what model they use. [00:12:16]Swyx: And I think they use a range of models based on what you're doing. Yeah. [00:12:19]Beyang: So everyone uses a range of models. Like no one uses the same model for like inline completion versus like chat because the latency requirements for. Oh, okay. Well, there's fill in the middle. There's also like what the model's trained on. So like we actually had completions powered by Claude Instant for a while. But you had to kind of like prompt hack your way to get it to output just the code and not like, hey, you know, here's the code you asked for, like that sort of text. So like everyone uses a range of models. We've kind of designed Cody to be like especially model, not agnostic, but like pluggable. So one of our kind of design considerations was like as the ecosystem evolves, we want to be able to integrate the best in class models, whether they're proprietary or open source into Cody because the pace of innovation in the space is just so quick. And I think that's been to our advantage. Like today, Cody uses StarCoder for inline completions. And with the benefit of the context that we provide, we actually show comparable completion acceptance rate metrics. It's kind of like the standard metric that folks use to evaluate inline completion quality. It's like if I show you a completion, what's the chance that you actually accept the completion versus you reject it?
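The acceptance-rate metric Beyang describes is straightforward to compute from completion telemetry. A minimal sketch (the event names here are hypothetical, not Cody's actual schema):

```python
# Completion acceptance rate: of the completions shown to the user,
# what fraction were actually accepted? (Event names are made up.)
def acceptance_rate(events):
    shown = sum(1 for e in events if e["type"] == "completion:shown")
    accepted = sum(1 for e in events if e["type"] == "completion:accepted")
    return accepted / shown if shown else 0.0

log = [
    {"type": "completion:shown"}, {"type": "completion:accepted"},
    {"type": "completion:shown"},
    {"type": "completion:shown"}, {"type": "completion:accepted"},
]
rate = acceptance_rate(log)  # 2 of the 3 shown completions were accepted
```

A real pipeline would also window this by time, language, and model, since a single aggregate number hides a lot.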
And so we're at par with Copilot, which is at the head of that industry right now. And we've been able to do that with the StarCoder model, which is open source and the benefit of the context fetching stuff that we provide. And of course, a lot of like prompt engineering and other stuff along the way. [00:13:40]Alessio: And Steve, you wrote a post called Cheating is All You Need about what you're building. And one of the points you made is that everybody's fighting on the same axis, which is better UI and the IDE, maybe like a better chat response. But data moats are kind of the most important thing. And you guys have like a 10-year-old moat with all the data you've been collecting. How do you kind of think about what other companies are doing wrong, right? Like, why is nobody doing this in terms of like really focusing on RAG? I feel like you see so many people. Oh, we just got a new model. It's like a bit better on HumanEval. And it's like, well, but maybe like that's not what we should really be doing, you know? Like, do you think most people underestimate the importance of like the actual RAG in code? [00:14:21]Steve: I think that people weren't doing it much. It wasn't. It's kind of at the edges of AI. It's not in the center. I know that when ChatGPT launched, so within the last year, I've heard a lot of rumblings from inside of Google, right? Because they're undergoing a huge transformation to try to, you know, of course, get into the new world. And I heard that they told, you know, a bunch of teams to go and train their own models or fine tune their own models, right? [00:14:43]Swyx: Both. [00:14:43]Steve: And, you know, it was a s**t show. Nobody knew how to do it. They launched two coding assistants. One was called Codey, with an E-Y. And then there was, I don't know what happened in that one. And then there's Duet, right? Google loves to compete with themselves, right? They do this all the time. And they had a paper on Duet like from a year ago.
And they were doing exactly what Copilot was doing, which was just pulling in the local context, right? But fundamentally, I thought of this because we were talking about the splitting of the [00:15:10]Swyx: models. [00:15:10]Steve: In the early days, it was the LLM did everything. And then we realized that for certain use cases, like completions, that a different, smaller, faster model would be better. And that fragmentation of models, actually, we expected to continue and proliferate, right? Because we are fundamentally, we're a recommender engine right now. Yeah, we're recommending code to the LLM. We're saying, may I interest you in this code right here so that you can answer my question? [00:15:34]Swyx: Yeah? [00:15:34]Steve: And being good at recommender engine, I mean, who are the best recommenders, right? There's YouTube and Spotify and, you know, Amazon or whatever, right? Yeah. [00:15:41]Swyx: Yeah. [00:15:41]Steve: And they all have many, many, many, many, many models, right? All fine-tuned for very specific, you know. And that's where we're heading in code, too. Absolutely. [00:15:50]Swyx: Yeah. [00:15:50]Alessio: We just did an episode we released on Wednesday, in which we said RAG is like RecSys for LLMs. You're basically just suggesting good content. [00:15:58]Swyx: It's like what? Recommendations. [00:15:59]Beyang: Recommendations. [00:16:00]Alessio: Oh, got it. [00:16:01]Steve: Yeah, yeah, yeah. [00:16:02]Swyx: So like the naive implementation of RAG is you embed everything, throw it in a vector database, you embed your query, and then you find the nearest neighbors, and that's your RAG. But actually, you need to rank it. And actually, you need to make sure there's sample diversity and that kind of stuff. And then you're like slowly gradient descenting yourself towards rediscovering proper RecSys, which has been traditional ML for a long time. But like approaching it from an LLM perspective. Yeah.
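That naive pipeline Swyx describes fits in a few lines, which is part of why it gets rediscovered so often. A toy sketch, where a bag-of-words count stands in for a real embedding model:

```python
import math
import re
from collections import Counter

def embed(text):
    # Stand-in for a learned embedding model: bag-of-words term counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def naive_rag_prompt(query, chunks, k=2):
    # Embed everything, embed the query, take the k nearest neighbors,
    # and stuff them into the prompt. No ranking model, no sample
    # diversity, no freshness -- exactly what this sketch is missing.
    q = embed(query)
    nearest = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
    return "Context:\n" + "\n".join(nearest) + f"\n\nQuestion: {query}"

chunks = [
    "def add(a, b): return a + b",
    "def parse_config(path): ...",
    "README: this service adds numbers",
]
prompt = naive_rag_prompt("how do I add two numbers?", chunks)
```

Swapping `embed` for a real model and the `sorted` call for a vector index changes the engineering, not the shape of the idea.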
[00:16:24]Beyang: I almost think of it as like a generalized search problem because it's a lot of the same things. Like you want your layer one to have high recall and get all the potential things that could be relevant. And then there's typically like a layer two re-ranking mechanism that bumps up the precision and tries to get the relevant stuff to the top of the results list. [00:16:43]Swyx: Have you discovered that ranking matters a lot? Oh, yeah. So the context is that I think a lot of research shows that like one, context utilization matters based on model. Like GPT uses the top of the context window, and then apparently Claude uses the bottom better. And it's lossy in the middle. Yeah. So ranking matters. No, it really does. [00:17:01]Beyang: The skill with which models are able to take advantage of context is always going to be dependent on how that factors into the impact on the training loss. [00:17:10]Swyx: Right? [00:17:10]Beyang: So like if you want long context window models to work well, then you have to have a ton of data where it's like, here's like a billion lines of text. And I'm going to ask a question about like something that's like, you know, embedded deeply into it and like, give me the right answer. And unless you have that training set, then of course, you're going to have variability in terms of like where it attends to. And in most kind of like naturally occurring data, the thing that you're talking about right now, the thing I'm asking you about is going to be something that we talked about recently. [00:17:36]Swyx: Yeah. [00:17:36]Steve: Did you really just say gradient descenting yourself? Actually, I love that it's entered the casual lexicon. Yeah, yeah, yeah. [00:17:44]Swyx: My favorite version of that is, you know, how we have to p-hack papers. So, you know, when you throw humans at the problem, that's called graduate student descent. That's great. It's really awesome.
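Beyang's two-layer framing, a high-recall first pass followed by a precision re-rank, can be sketched like this (term overlap stands in for both the cheap retriever and the costlier re-ranker, which in practice would be different models):

```python
def retrieve(query, docs, k=3):
    terms = set(query.lower().split())

    # Layer 1: high recall -- cheaply over-fetch anything that could
    # possibly be relevant (any shared term at all qualifies).
    candidates = [d for d in docs if terms & set(d.lower().split())]

    # Layer 2: precision re-ranking -- a costlier scorer (in practice a
    # cross-encoder or learned ranker; here just overlap count) bumps
    # the most relevant candidates to the top of the results list.
    def score(d):
        return len(terms & set(d.lower().split()))

    return sorted(candidates, key=score, reverse=True)[:k]

docs = [
    "how to read a file",
    "file formats overview",
    "team offsite notes",
]
results = retrieve("read file", docs)
```

The split matters because, as discussed above, where a snippet lands in the context window affects whether the model uses it, so the re-rank layer is doing real work.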
[00:17:54]Alessio: I think the other interesting thing that you have is this inline assist UX that I wouldn't say async, but like it works while you can also do work. So you can ask Cody to make changes on a code block and you can still edit the same file at the same time. [00:18:07]Swyx: Yeah. [00:18:07]Alessio: How do you see that in the future? Like, do you see a lot of Codys running together at the same time? Like, how do you validate also that they're not messing each other up as they make changes in the code? And maybe what are the limitations today? And what do you think about where the tech is going? [00:18:21]Steve: I want to start with a little history and then I'm going to turn it over to Beyang, all right? So we actually had this feature in the very first launch back in June. Dominic wrote it. It was called nonstop Cody. And you could have multiple, basically, LLM requests in parallel modifying your source [00:18:37]Swyx: file. [00:18:37]Steve: And he wrote a bunch of code to handle all of the diffing logic. And you could see the regions of code that the LLM was going to change, right? And he was showing me demos of it. And it just felt like it was just a little before its time, you know? But a bunch of that stuff, that scaffolding was able to be reused for where we're inline [00:18:56]Swyx: sitting today. [00:18:56]Steve: How would you characterize it today? [00:18:58]Beyang: Yeah, so that interface has really evolved from a, like, hey, general purpose, like, request anything inline in the code and have the code update to really, like, targeted features, like, you know, fix the bug that exists at this line or request a very specific [00:19:13]Swyx: change. [00:19:13]Beyang: And the reason for that is, I think, the challenge that we ran into with inline fixes, and we do want to get to the point where you could just fire and forget and have, you know, half a dozen of these running in parallel.
But I think we ran into the challenge early on that a lot of people are running into now when they're trying to construct agents, which is the reliability of, you know, working code generation is just not quite there yet in today's language models. And so that kind of constrains you to an interaction where the human is always, like, in the inner loop, like, checking the output of each response. And if you want that to work in a way where you can be asynchronous, you kind of have to constrain it to a domain where today's language models can generate reliable code well enough. So, you know, generating unit tests, that's, like, a well-constrained problem. Or fixing a bug that shows up as, like, a compiler error or a test error, that's a well-constrained problem. But the more general, like, hey, write me this class that does X, Y, and Z using the libraries that I have, that is not quite there yet, even with the benefit of really good context. Like, it definitely moves the needle a lot, but we're not quite there yet to the point where you can just fire and forget. And I actually think that this is something that people don't broadly appreciate yet, because I think that, like, everyone's chasing this dream of agentic execution. And if we're to really define that down, I think it implies a couple things. You have, like, a multi-step process where each step is fully automated. We don't have to have a human in the loop every time. And there's also kind of like an LM call at each stage or nearly every stage in that [00:20:45]Swyx: chain. [00:20:45]Beyang: Based on all the work that we've done, you know, with the inline interactions, with kind of like general Cody features for implementing longer chains of thought, we're actually a little bit more bearish than the average, you know, AI hypefluencer out there on the feasibility of agents with purely kind of like transformer-based models.
To your original question, like, the inline interactions with Cody, we actually constrained it to be more targeted, like, you know, fix the current error or make this quick fix. I think that that does differentiate us from a lot of the other tools on the market, because a lot of people are going after this, like, snazzy, like, inline edit interaction, whereas I think where we've moved, and this is based on the user feedback that we've gotten, it's like that sort of thing, it demos well, but when you're actually coding day to day, you don't want to have, like, a long chat conversation inline with the code base. That's a waste of time. You'd rather just have it write the right thing and then move on with your life or not have to think about it. And that's what we're trying to work towards. [00:21:37]Steve: I mean, yeah, we're not going in the agent direction, right? I mean, I'll believe in agents when somebody shows me one that works. Yeah. Instead, we're working on, you know, sort of solidifying our strength, which is bringing the right context in. So new context sources, ways for you to plug in your own context, ways for you to control or influence the context, you know, the mixing that happens before the request goes out, etc. And there's just so much low-hanging fruit left in that space that, you know, agents seems like a little bit of a boondoggle. [00:22:03]Beyang: Just to dive into that a little bit further, like, I think, you know, at a very high level, what do people mean when they say agents? They really mean, like, greater automation, fully automated, like, the dream is, like, here's an issue, go implement that. And I don't have to think about it as a human. And I think we are working towards that. Like, that is the eventual goal. I think it's specifically the approach of, like, hey, can we have a transformer-based LM alone be the kind of, like, backbone or the orchestrator of these agentic flows? Where we're a little bit more bearish today.
[00:22:31]Swyx: You want the human in the loop. [00:22:32]Beyang: I mean, you kind of have to. It's just a reality of the behavior of language models that are purely, like, transformer-based. And I think that's just like a reflection of reality. And I don't think people realize that yet. Because if you look at the way that a lot of other AI tools have implemented context fetching, for instance, like, you see this in the Copilot approach, where if you use, like, the at-workspace thing that supposedly provides, like, code-based level context, it has, like, an agentic approach where you kind of look at how it's behaving. And it feels like they're making multiple requests to the LM being like, what would you do in this case? Would you search for stuff? What sort of files would you gather? Go and read those files. And it's like a multi-hop step, so it takes a long while. It's also non-deterministic. Because any sort of, like, LM invocation, it's like a dice roll. And then at the end of the day, the context it fetches is not that good. Whereas our approach is just like, OK, let's do some code searches that make sense. And then maybe, like, crawl through the reference graph a little bit. That is fast. That doesn't require any sort of LM invocation at all. And we can pull in much better context, you know, very quickly. So it's faster. [00:23:37]Swyx: It's more reliable. [00:23:37]Beyang: It's deterministic. And it yields better context quality. And so that's what we think. We just don't think you should cargo cult or naively go like, you know, agents are the [00:23:46]Swyx: future. [00:23:46]Beyang: Let's just try to, like, implement agents on top of the LM that exists today. I think there are a couple of other technologies or approaches that need to be refined first before we can get into these kind of, like, multi-stage, fully automated workflows. [00:24:00]Swyx: It makes sense. You know, we're very much focused on developer inner loop right now. 
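The contrast Beyang draws, deterministic code search plus a reference-graph crawl versus multi-hop LLM calls, might look roughly like this (the file contents and reference graph here are invented for illustration):

```python
from collections import deque

def fetch_context(query, files, ref_graph, hops=1):
    # Step 1: plain code search -- no LLM call, fully deterministic.
    seeds = [name for name, text in files.items() if query in text]

    # Step 2: crawl the reference graph outward from the seed results
    # (who references these files), again with no LLM in the loop.
    seen, frontier = set(seeds), deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for neighbor in ref_graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return sorted(seen)

files = {
    "auth.py": "def check_token(tok): ...",
    "api.py": "def handler(req): check_token(req.tok)",
    "db.py": "def save(row): ...",
}
ref_graph = {"auth.py": ["api.py"], "api.py": ["auth.py", "db.py"]}
context = fetch_context("check_token", files, ref_graph)
```

Every step here is a dict lookup or a graph traversal, which is why this shape is fast and deterministic where the agentic multi-hop version is slow and a dice roll.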
But you do see things eventually moving towards developer outer loop. Yeah. So would you basically say that they're tackling the agent's problem that you don't want to tackle? [00:24:11]Beyang: No, I would say at a high level, we are after maybe, like, the same high level problem, which is like, hey, I want some code written. I want to develop some software, and can an automated system go build that software for me? I think the approaches might be different. So I think the analogy in my mind is, I think about, like, the AI chess players. Coding, in some senses, I mean, it's similar and dissimilar to chess. I think one question I ask is, like, do you think producing code is more difficult than playing chess or less difficult than playing chess? More. [00:24:41]Swyx: I think more. [00:24:41]Beyang: Right. And if you look at the best AI chess players, like, yes, you can use an LLM to play chess. Like, people have shown demos where it's like, oh, like, yeah, GPT-4 is actually a pretty decent, like, chess move suggester. Right. But you would never build, like, a best in class chess player off of GPT-4 alone. [00:24:57]Swyx: Right. [00:24:57]Beyang: Like, the way that people design chess players is that you have kind of like a search space and then you have a way to explore that search space efficiently. There's a bunch of search algorithms, essentially. We were doing tree search in various ways. And you can have heuristic functions, which might be powered by an LLM. [00:25:12]Swyx: Right. [00:25:12]Beyang: Like, you might use an LLM to generate proposals in that space that you can efficiently explore. But the backbone is still this kind of more formalized tree search based approach rather than the LLM itself.
And so I think my high level intuition is that, like, the way that we get to more reliable multi-step workflows that do things beyond, you know, generate unit test, it's really going to be like a search based approach where you use an LLM as kind of like an advisor or a proposal function, sort of your heuristic function, like the A* search algorithm. But it's probably not going to be the thing that is the backbone, because I guess it's not the right tool for that. Yeah. [00:25:50]Swyx: I can see yourself kind of thinking through this, but not saying the words, the sort of philosophical Peter Norvig type discussion. Maybe you want to sort of introduce that in software. Yeah, definitely. [00:25:59]Beyang: So your listeners are savvy. They're probably familiar with the classic like Chomsky versus Norvig debate. [00:26:04]Swyx: No, actually, I wanted, I was prompting you to introduce that. Oh, got it. [00:26:08]Beyang: So, I mean, if you look at the history of artificial intelligence, right, you know, it goes way back to, I don't know, it's probably as old as modern computers, like 50s, 60s, 70s. People are debating on like, what is the path to producing a sort of like general human level of intelligence? And two schools of thought kind of emerged. One is the Norvig school of thought, which roughly speaking includes large language models, you know, regression, SVMs, basically any model that you kind of like learn from data. And it's like data driven. Most of machine learning would fall under this umbrella. And that school of thought says like, you know, just learn from the data. That's the approach to reaching intelligence. And then the Chomsky approach is more things like compilers and parsers and formal systems. So basically like, let's think very carefully about how to construct a formal, precise system. And that will be the approach to how we build a truly intelligent system.
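The chess analogy above, a classical search backbone with the LLM demoted to heuristic or proposal function, can be sketched as a best-first search with a pluggable scorer. Everything below is illustrative: the state graph is a toy, and the `rank` dict is a stub standing in for an LLM-based "how promising does this state look?" scorer:

```python
import heapq

def best_first_search(start, goal, neighbors, heuristic):
    # The backbone is a classical search (greedy best-first here; A*
    # would add accumulated path cost to the priority). The heuristic
    # is just a function argument -- the slot where an LLM could go.
    frontier = [(heuristic(start), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nbr in neighbors(node):
            if nbr not in seen:
                seen.add(nbr)
                heapq.heappush(frontier, (heuristic(nbr), nbr, path + [nbr]))
    return None

# Toy state space for "get a draft change to passing tests".
graph = {
    "draft": ["tests_fail", "compiles"],
    "tests_fail": ["compiles"],
    "compiles": ["tests_pass"],
}
rank = {"draft": 3, "tests_fail": 2, "compiles": 1, "tests_pass": 0}
path = best_first_search("draft", "tests_pass", lambda n: graph.get(n, []), rank.get)
```

The point of the structure is that the search stays deterministic and inspectable even when the heuristic is fuzzy.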
I think Lisp was invented so that you could create like rules-based systems that you would call AI. As a language. Yeah. And for a long time, there was like this debate, like there's certain like AI research labs that were more like, you know, in the Chomsky camp and others that were more in the Norvig camp. It's a debate that rages on today. And I feel like the consensus right now is that, you know, Norvig definitely has the upper hand right now with the advent of LMs and diffusion models and all the other recent progress in machine learning. But the Chomsky-based stuff is still really useful in my view. I mean, it's like parsers, compilers, basically a lot of the stuff that provides really good context. It provides kind of like the knowledge graph backbone that you want to explore with your AI dev tool. Like that will come from kind of like Chomsky-based tools like compilers and parsers. It's a lot of what we've invested in in the past decade at Sourcegraph and what you build with Grok. Basically like these formal systems that construct these very precise knowledge graphs that are great context providers and great kind of guard rails enforcers and kind of like safety checkers for the output of a more kind of like data-driven, fuzzier system that uses like the Norvig-based models. [00:28:03]Steve: Beyang was talking about this stuff like it happened in the middle ages. Like, okay, so when I was in college, I was in college learning Lisp and Prolog and planning and all the deterministic Chomsky approaches to AI. And I was there when Norvig basically declared it dead. I was there 3,000 years ago when Norvig and Chomsky fought on the volcano. When did he declare it dead? [00:28:26]Swyx: What do you mean he declared it dead? [00:28:27]Steve: It was like late 90s. [00:28:29]Swyx: Yeah. [00:28:29]Steve: When I went to Google, Peter Norvig was already there. He had basically like, I forget exactly where.
It was some, he's got so many famous short posts, you know, amazing. [00:28:38]Swyx: He had a famous talk, the unreasonable effectiveness of data. Yeah. [00:28:41]Steve: Maybe that was it. But at some point, basically, he basically convinced everybody that deterministic approaches had failed and that heuristic-based, you know, data-driven statistical approaches, stochastic were better. [00:28:52]Swyx: Yeah. [00:28:52]Steve: The primary reason I can tell you this, because I was there, was that, was that, well, the steam-powered engine, no. The reason was that the deterministic stuff didn't scale. [00:29:06]Swyx: Yeah. Right. [00:29:06]Steve: They're using Prolog, man, constraint systems and stuff like that. Well, that was a long time ago, right? Today, actually, these Chomsky-style systems do scale. And that's, in fact, exactly what Sourcegraph has built. Yeah. And so we have a very unique, I love the framing that Beyang's made, that the marriage of the Chomsky and the Norvig, you know, sort of models, you know, conceptual models, because we, you know, we have both of them and they're both really important. And in fact, there, there's this really interesting, like, kind of overlap between them, right? Where like the AI or our graph or our search engine could potentially provide the right context for any given query, which is, of course, why ranking is important. But what we've really signed ourselves up for is an extraordinary amount of testing. [00:29:45]Swyx: Yeah. [00:29:45]Steve: Because, Swyx, you were saying that, you know, GPT-4 tends to the front of the context window and maybe other LLMs to the back and maybe, maybe the LLM in the middle. [00:29:53]Swyx: Yeah.
[00:29:53]Steve: And so that means that, you know, if we're actually like, you know, verifying whether we, you know, some change we've made has improved things, we're going to have to test putting it at the beginning of the window and at the end of the window, you know, and maybe make the right decision based on the LLM that you've chosen. Which some of our competitors, that's a problem that they don't have, but we meet you, you know, where you are. Yeah. And we're, just to finish, we're writing tens of thousands. We're generating tests, you know, fill in the middle type tests and things. And then using our graph to basically sort of fine tune Cody's behavior there. [00:30:20]Swyx: Yeah. [00:30:21]Beyang: I also want to add, like, I have like an internal pet name for this, like kind of hybrid architecture that I'm trying to make catch on. Maybe I'll just say it here. Just saying it publicly kind of makes it more real. But like, I call the architecture that we've developed the Normsky architecture. [00:30:36]Swyx: Yeah. [00:30:36]Beyang: I mean, it's obviously a portmanteau of Norvig and Chomsky, but the acronym, it stands for non-agentic, rapid, multi-source code intelligence. So non-agentic because... Rolls right off the tongue. And Normsky. But it's non-agentic in the sense that like, we're not trying to like pitch you on kind of like agent hype, right? Like it's the things it does are really just developer tools developers have been using for decades now, like parsers and really good search indexes and things like that. Rapid because we place an emphasis on speed. We don't want to sit there waiting for kind of like multiple LLM requests to return to complete a simple user request. Multi-source because we're thinking broadly about what pieces of information and knowledge are useful context. 
So obviously starting with things that you can search in your code base, and then you add in the reference graph, which kind of like allows you to crawl outward from those initial results. But then even beyond that, you know, sources of information, like there's a lot of knowledge that's embedded in docs, in PRDs or product specs, in your production logging system, in your chat, in your Slack channel, right? Like there's so much context embedded there. And when you're a human developer, and you're trying to like be productive in your code base, you're going to go to all these different systems to collect the context that you need to figure out what code you need to write. And I don't think the AI developer will be any different. It will need to pull context from all these different sources. So we're thinking broadly about how to integrate these into Cody. We hope through kind of like an open protocol that like others can extend and implement. And this is something else that should be accessible by December 14th in kind of like a preview stage. But that's really about like broadening this notion of the code graph beyond your Git repository to all the other sources where technical knowledge and valuable context can live. [00:32:21]Steve: Yeah, it becomes an artifact graph, right? It can link into your logs and your wikis and any data source, right? [00:32:27]Alessio: How do you guys think about the importance of, it's almost like data pre-processing in a way, which is bring it all together, tie it together, make it ready. Any thoughts on how to actually make that good? Some of the innovation you guys have made. [00:32:40]Steve: We talk a lot about the context fetching, right? I mean, there's a lot of ways you could answer this question. But, you know, we've spent a lot of time just in this podcast here talking about context fetching. But stuffing the context into the window is, you know, the bin packing problem, right?
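Steve's bin-packing framing can be sketched as a greedy fit over ranked snippets (a real system would count model tokens rather than words, and might summarize an oversized snippet with a smaller model instead of truncating it):

```python
def pack_context(snippets, budget, tokens=lambda s: len(s.split())):
    # Greedy fit: walk the ranked snippets in order, take each one whole
    # if it fits, otherwise shrink it (here by word truncation) to
    # whatever room remains in the window budget.
    packed, remaining = [], budget
    for snip in snippets:
        n = tokens(snip)
        if n <= remaining:
            packed.append(snip)
            remaining -= n
        elif remaining > 0:
            packed.append(" ".join(snip.split()[:remaining]))
            remaining = 0
    return "\n".join(packed)

ranked = [
    "def f(): return 1",
    "a long helper that does many things indeed",
    "x = 2",
]
window = pack_context(ranked, budget=8)
```

Greedy-by-rank is the simplest policy; the "golf game" Steve describes is all the work of deciding how small each piece can get before it stops being useful.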
Because the window is not big enough, and you've got more context than you can fit. You've got a ranker maybe. But what is that context? Is it a function that was returned by an embedding or a graph call or something? Do you need the whole function? Or do you just need, you know, the top part of the function, this expression here, right? You know, so that art, the golf game of trying to, you know, get each piece of context down into its smallest state, possibly even summarized by another model, right, before it even goes to the LLM, becomes this is the game that we're in, yeah? And so, you know, recursive summarization and all the other techniques that you got to use to like stuff stuff into that context window become, you know, critically important. And you have to test them across every configuration of models that you could possibly need. [00:33:32]Beyang: I think data preprocessing is probably the like unsexy, way underappreciated secret to a lot of the cool stuff that people are shipping today. Whether you're doing like RAG or fine tuning or pre-training, like the preprocessing step matters so much because it's basically garbage in, garbage out, right? Like if you're feeding in garbage to the model, then it's going to output garbage. Concretely, you know, for code RAG, if you're not doing some sort of like preprocessing that takes advantage of a parser and is able to like extract the key components of a particular file of code, you know, separate the function signature from the body, from the doc string, what are you even doing? Like that's like table stakes. It opens up so much more possibilities with which you can kind of like tune your system to take advantage of the signals that come from those different parts of the code. Like we've had a tool, you know, since computers were invented that understands the structure of source code to a hundred percent precision. The compiler knows everything there is to know about the code in terms of like structure. 
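The parser-based preprocessing Beyang describes, separating signature, docstring, and body before anything reaches the model, is easy to sketch with Python's own `ast` module (a stand-in for the language-agnostic parsers a tool like Sourcegraph actually uses):

```python
import ast
import textwrap

def dissect(source):
    # Use the parser's precise view of structure to split each function
    # into signature, docstring, and body -- separate signals that a
    # retrieval or ranking system can then weight differently.
    parts = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            body = node.body[1:] if doc else node.body
            parts.append({
                "signature": f"def {node.name}({ast.unparse(node.args)})",
                "docstring": doc,
                "body": "\n".join(ast.unparse(stmt) for stmt in body),
            })
    return parts

src = textwrap.dedent('''
    def add(a, b):
        """Sum two numbers."""
        return a + b
''')
parts = dissect(src)
```

Once the pieces are separated, you can, for example, embed signatures and docstrings for retrieval while keeping full bodies only for the snippets that make the final cut.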
Like why would you not want to use that in a system that's trying to generate code, answer questions about code? You shouldn't throw that out the window just because now we have really good, you know, data-driven models that can do other things. [00:34:44]Steve: Yeah. When I called it a data moat, you know, in my cheating post, a lot of people were confused, you know, because data moat sort of sounds like data lake because there's data and water and stuff. I don't know. And so they thought that we were sitting on this giant mountain of data that we had collected, but that's not what our data moat is. It's really a data pre-processing engine that can very quickly and scalably, like basically dissect your entire code base into very small, fine-grained, you know, semantic units and then serve them up. Yeah. And so it's really, it's not a data moat. It's a data pre-processing moat, I guess. [00:35:15]Beyang: Yeah. If anything, we're like hypersensitive to customer data privacy requirements. So it's not like we've taken a bunch of private data and like, you know, trained a generally available model. In fact, exactly the opposite. A lot of our customers are choosing Cody over Copilot and other competitors because we have an explicit guarantee that we don't do any of that. And that we've done that from day one. Yeah. I think that's a very real concern in today's day and age, because like if your proprietary IP finds its way into the training set of any model, it's very easy both to like extract that knowledge from the model and also use it to, you know, build systems that kind of work on top of the institutional knowledge that you've built up. [00:35:52]Alessio: About a year ago, I wrote a post on LLMs for developers. And one of the points I had was maybe the depth of like the DSL. I spent most of my career writing Ruby and I love Ruby. It's so nice to use, but you know, it's not as performant, but it's really easy to read, right?
And then you look at other languages, maybe they're faster, but like they're more verbose, you know? And when you think about efficiency of the context window, that actually matters. [00:36:15]Swyx: Yeah. [00:36:15]Alessio: But I haven't really seen a DSL for models, you know? I haven't seen like code being optimized to like be easier to put in a model context. And it seems like your pre-processing is kind of doing that. Do you see in the future, like the way we think about the DSL and APIs and kind of like service interfaces be more focused on being context friendly, where it's like maybe it's harder to read for the human, but like the human is never going to write it anyway. We were talking on the Hacks podcast. There are like some data science things like spin up the spandex, like humans are never going to write again because the models can just do very easily. Yeah, curious to hear your thoughts. [00:36:51]Steve: Well, so DSLs, they involve, you know, writing a grammar and a parser and they're like little languages, right? We do them that way because, you know, we need them to compile and humans need to be able to read them and so on. The LLMs don't need that level of structure. You can throw any pile of crap at them, you know, more or less unstructured and they'll deal with it. So I think that's why a DSL hasn't emerged for sort of like communicating with the LLM or packaging up the context or anything. Maybe it will at some point, right? We've got, you know, tagging of context and things like that that are sort of peeking into DSL territory, right? But your point on do users, you know, do people have to learn DSLs like regular expressions or, you know, pick your favorite, right? XPath. I think you're absolutely right that the LLMs are really, really good at that. And I think you're going to see a lot less of people having to slave away learning these things. They just have to know the broad capabilities and the LLM will take care of the rest. 
[00:37:42]Swyx: Yeah, I'd agree with that. [00:37:43]Beyang: I think basically like the value proposition of DSLs is that it makes it easier to work with a lower level language, but at the expense of introducing an abstraction layer. And in many cases today, you know, without the benefit of AI code generation, like that's totally worth it, right? With the benefit of AI code generation, I mean, I don't think all DSLs will go away. I think there's still, you know, places where that trade-off is going to be worthwhile. But it's kind of like how much of source code do you think is going to be generated through natural language prompting in the future? Because in a way, like any programming language is just a DSL on top of assembly, right? And so if people can do that, then yeah, like maybe for a large portion of the code [00:38:21]Swyx: that's written, [00:38:21]Beyang: people don't actually have to understand the DSL that is Ruby or Python or basically any other programming language that exists. [00:38:28]Steve: I mean, seriously, do you guys ever write SQL queries now without using a model of some sort? At least a draft. [00:38:34]Swyx: Yeah, right. [00:38:36]Steve: And so we have kind of like, you know, passed that bridge, right? [00:38:39]Alessio: Yeah, I think like to me, the long-term thing is like, is there ever going to be, you don't actually see the code, you know? It's like, hey, the basic thing is like, hey, I need a function to sum two numbers and that's it. I don't need you to generate the code. [00:38:53]Steve: And the following question, do you need the engineer or the paycheck? [00:38:56]Swyx: I mean, right? [00:38:58]Alessio: That's kind of the agent's discussion in a way where like you cannot automate the agents, but like slowly you're getting more of the atomic units of the work kind of like done. I kind of think of it as like, you know, [00:39:09]Beyang: do you need a punch card operator to answer that for you? 
And so like, I think we're still going to have people in the role of a software engineer, but the portion of time they spend on these kinds of like low-level, tedious tasks versus the higher level, more creative tasks is going to shift. [00:39:23]Steve: No, I haven't used punch cards. [00:39:25]Swyx: Yeah, I've been talking about like, so we kind of made this podcast about the sort of rise of the AI engineer. And like the first step is the AI enhanced engineer. That is that software developer that is no longer doing these routine, boilerplate-y type tasks, because they're just enhanced by tools like yours. So you mentioned OpenCodeGraph. I mean, that is a kind of DSL maybe, and because you're releasing this as you go GA, you hope for other people to take advantage of that? [00:39:52]Beyang: Oh yeah, I would say so OpenCodeGraph is not a DSL. It's more of a protocol. It's basically like, hey, if you want to make your system, whether it's, you know, chat or logging or whatever accessible to an AI developer tool like Cody, here's kind of like the schema by which you can provide that context and offer hints. So I would, you know, comparisons like LSP obviously did this for kind of like standard code intelligence. It's kind of like a lingua franca for providing find-references and go-to-definition. There's kind of like analogs to that. There might be also analogs to kind of the original OpenAI, kind of like plugins API. There's all this like context out there that might be useful for an LLM-based system to consume. And so at a high level, what we're trying to do is define a common language for context providers to provide context to other tools in the software development lifecycle. Yeah. Do you have any critiques of LSP, by the way, [00:40:42]Swyx: since like this is very much, very close to home? [00:40:45]Steve: One of the authors wrote a really good critique recently. Yeah. I don't think I saw that. Yeah, yeah. LSP could have been better. 
It just came out a couple of weeks ago. It was a good article. [00:40:54]Beyang: Yeah. I think LSP is great. Like for what it did for the developer ecosystem, it was absolutely fantastic. Like nowadays, like it's much easier now to get code navigation up and running in a bunch of editors by speaking this protocol. I think maybe the interesting question is like looking at the different design decisions comparing LSP basically with Kythe. Because Kythe has more of a... How would you describe it? [00:41:18]Steve: A storage format. [00:41:20]Beyang: I think the critique of LSP from a Kythe point of view would be like with LSP, you don't actually have an actual symbolic model of the code. It's not like LSP models like, hey, this function calls this other function. LSP is all like range-based. Like, hey, your cursor's at line 32, column 1. [00:41:35]Swyx: Yeah. [00:41:35]Beyang: And that's the thing you feed into the language server. And then it's like, okay, here's the range that you should jump to if you click on that range. So it kind of is intentionally ignorant of the fact that there's a thing called a reference underneath your cursor, and that's linked to a symbol definition. [00:41:49]Steve: Well, actually, that's the worst example you could have used. You're right. But that's the one thing that it actually did bake in is following references. [00:41:56]Swyx: Sure. [00:41:56]Steve: But it's sort of hardwired. [00:41:58]Swyx: Yeah. [00:41:58]Steve: Whereas Kythe attempts to model [00:42:00]Beyang: like all these things explicitly. [00:42:02]Swyx: And so... [00:42:02]Steve: Well, so LSP is a protocol, right? And so Google's internal protocol is gRPC-based. And it's a different approach than LSP. It's basically you make a heavy query to the back end, and you get a lot of data back, and then you render the whole page, you know? So we've looked at LSP, and we think that it's a little long in the tooth, right? 
I mean, it's a great protocol, lots and lots of support for it. But we need to push into the domain of exposing the intelligence through the protocol. Yeah. [00:42:29]Beyang: And so I would say we've developed a protocol of our own called Skip, which is at a very high level trying to take some of the good ideas from LSP and from Kythe and merge that into a system that in the near term is useful for Sourcegraph, but I think in the long term, we hope will be useful for the ecosystem. Okay, so here's what LSP did well. LSP, by virtue of being like intentionally dumb, dumb in air quotes, because I'm not like ragging on it, allowed language server developers to kind of like bypass the hard problem of like modeling language semantics precisely. So like if all you want to do is jump to definition, you don't have to come up with like a universally unique naming scheme for each symbol, which is actually quite challenging because you have to think about like, okay, what's the top scope of this name? Is it the source code repository? Is it the package? Does it depend on like what package server you're fetching this from? Like whether it's the public one or the one inside your... Anyways, like naming is hard, right? And by just going from kind of like a location-to-location based approach, you basically just like throw that out the window. All I care about is jump to definition, just make that work. And you can make that work without having to deal with like all the complex global naming things. The limitation of that approach is that it's harder to build on top of that to build like a true knowledge graph. Like if you actually want a system that says like, okay, here's the web of functions and here's how they reference each other. And I want to incorporate that like semantic model of how the code operates or how the code relates to each other at like a static level. You can't do that with LSP because you have to deal with line ranges. 
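The location-based interaction described here is visible in the LSP wire format itself. A minimal sketch of a `textDocument/definition` exchange (the method name is real LSP; the file paths and positions are made up for illustration): the request carries only a URI and a cursor position, and the response is just a target range — no symbol identity ever appears on the wire.

```python
import json

# A textDocument/definition request: the payload identifies a cursor
# position (0-indexed line 31 = "line 32" in an editor), not a symbol.
# LSP is deliberately range-based. (Paths/positions are hypothetical.)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        "textDocument": {"uri": "file:///src/app.py"},
        "position": {"line": 31, "character": 0},
    },
}

# The server answers with a location -- again just a file plus a range
# to jump to, with no notion of which symbol was resolved.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "uri": "file:///src/lib.py",
        "range": {
            "start": {"line": 4, "character": 4},
            "end": {"line": 4, "character": 12},
        },
    },
}

# Neither message names the reference or the definition symbol, so a
# client that wants "now find all references to that thing" must issue
# a second position-based request at the returned range -- the
# multi-hop cost of a range-only protocol.
assert "symbol" not in json.dumps(request) + json.dumps(response)
```

This is why a symbolic schema like Kythe's can answer "what references what" directly, while an LSP client has to chain position lookups.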
And like concretely the pain point that we found in using LSP for Sourcegraph is like in order to do like a find references [00:44:04]Swyx: and then jump definitions, [00:44:04]Beyang: it's like a multi-hop process because like you have to jump to the range and then you have to find the symbol at that range. And it just adds a lot of latency and complexity of these operations where as a human, you're like, well, this thing clearly references this other thing. Why can't you just jump me to that? And I think that's the thing that Kythe does well. But then I think the issue that Kythe has had with adoption is because it has a more sophisticated schema, I think. And so there's basically more things that you have to implement to get like a Kythe implementation up and running. I hope I'm not like, correct me if I'm wrong about any of this. [00:44:35]Steve: 100%, 100%. Kythe also has a problem, all these systems have the problem, even Skip, or at least the way that we implemented the indexers, that they have to integrate with your build system in order to build that knowledge graph, right? Because you have to basically compile the code in a special mode to generate artifacts instead of binaries. And I would say, by the way, earlier I was saying that XREFs were in LSP, but it's actually, I was thinking of LSP plus LSIF. [00:44:58]Swyx: Yeah. That's another. [00:45:01]Steve: Which is actually bad. We can say that it's bad, right? [00:45:04]Steve: Like Skip or Kythe, it's supposed to be sort of a model serialization, you know, for the code graph, but it basically just does what LSP needs, the bare minimum. LSIF is basically if you took LSP [00:45:16]Beyang: and turned that into a serialization format. So like you build an index for language servers to kind of like quickly bootstrap from cold start. But it's a graph model [00:45:23]Steve: with all of the inconvenience of the API without an actual graph. And so, yeah. 
[00:45:29]Beyang: So like one of the things that we try to do with Skip is try to capture the best of both worlds. So like make it easy to write an indexer, make the schema simple, but also model some of the more symbolic characteristics of the code that would allow us to essentially construct this knowledge graph that we can then make useful for both the human developer through Sourcegraph and the AI developer through Cody. [00:45:49]Steve: So anyway, just to finish off the graph comment, we've got a new graph, yeah, that's Skip-based. We call it BFG internally, right? It's a beautiful something graph. A big friendly graph. [00:46:00]Swyx: A big friendly graph. [00:46:01]Beyang: It's a blazing fast. [00:46:02]Steve: Blazing fast. [00:46:03]Swyx: Blazing fast graph. [00:46:04]Steve: And it is blazing fast, actually. It's really, really interesting. I should probably do a blog post about it to walk you through exactly how they're doing it. Oh, please. But it's a very AI-like iterative, you know, experimentation sort of approach. We're building a code graph based on all of our 10 years of knowledge about building code graphs, yeah? But we're building it quickly with zero configuration, and it doesn't have to integrate with your build. And through some magic tricks that we have. And so what just happens when you install the plugin, that it'll be there and indexing your code and providing that knowledge graph in the background without all that build system integration. This is a bit of secret sauce that we haven't really advertised very much lately. But I am super excited about it because what they do is they say, all right, you know, let's tackle function parameters today. Cody's not doing a very good job of completing function call arguments or function parameters in the definition, right? Yeah, we generate those thousands of tests, and then we can actually reuse those tests for the AI context as well. 
So fortunately, things are kind of converging on, we have, you know, half a dozen really, really good context sources, and we mix them all together. So anyway, BFG, you're going to hear more about it probably in the holidays? [00:47:12]Beyang: I think it'll be online for December 14th. We'll probably mention it. BFG is probably not the public name we're going to go with. I think we might call it like Graph Context or something like that. [00:47:20]Steve: We're officially calling it BFG. [00:47:22]Swyx: You heard it here first. [00:47:24]Beyang: BFG is just kind of like the working name. And so the impetus for BFG was like, if you look at like current AI inline code completion tools and the errors that they make, a lot of the errors that they make, even in kind of like the easy, like single line case, are essentially like type errors, right? Like you're trying to complete a function call and it suggests a variable that you defined earlier, but that variable is the wrong type. [00:47:47]Swyx: And that's the sort of thing [00:47:47]Beyang: where it's like a first year, like freshman CS student would not make that error, right? So like, why does the AI make that error? And the reason is, I mean, the AI is just suggesting things that are plausible without the context of the types or any other like broader files in the code. And so the kind of intuition here is like, why don't we just do the basic thing that like any baseline intelligent human developer would do, which is like click jump to definition, click some fine references and pull in that like Graph Context into the context window and then have it generate the completion. So like that's sort of like the MVP of what BFG was. And turns out that works really well. Like you can eliminate a lot of type errors that AI coding tools make just by pulling in that context. Yeah, but the graph is definitely [00:48:32]Steve: our Chomsky side. [00:48:33]Swyx: Yeah, exactly. 
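The intuition described here — do what a human would do, click through to the definitions of the symbols in scope and put them in front of the model before completing — can be sketched in a few lines. This is a toy illustration of graph-based context assembly, not Sourcegraph's actual BFG code; all names and data are hypothetical.

```python
# Toy sketch: given the symbols visible at the cursor, "jump to
# definition" for each one and prepend those snippets to the prompt,
# so the model sees types and signatures instead of guessing them.
# (Hypothetical symbol table; not the real BFG implementation.)

definitions = {
    "parse_config": "def parse_config(path: str) -> dict: ...",
    "Config": "class Config:\n    path: str\n    retries: int",
}

def assemble_prompt(symbols_in_scope, file_prefix):
    # Resolve each in-scope symbol to its definition snippet, mirroring
    # the jump-to-definition / find-references clicks a human would make.
    context = [definitions[s] for s in symbols_in_scope if s in definitions]
    return "\n".join(context) + "\n---\n" + file_prefix

# The completion request now carries the signature of parse_config, so a
# model is far less likely to suggest an argument of the wrong type.
prompt = assemble_prompt(["Config", "parse_config"], "cfg = parse_config(")
```

The design point is that the retrieval step is purely symbolic (a graph lookup), so it adds precise, type-bearing context without any model call of its own.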
[00:48:34]Beyang: So like this like Chomsky-Norvig thing, I think pops up in a bunch of differ

Commercial Real Estate School
Building your Sphere - Network for Success!

Commercial Real Estate School

Play Episode Listen Later Dec 8, 2023 8:35


Welcome to Season 18 of Commercial Real Estate School! This season, I welcome James Bean, Vice President of SVN | Rich Investment Real Estate Partners. James Bean is an Investment Broker, as well as a published author, writer, national sales coach, expert witness, accomplished public speaker, and graphic designer. He began his real estate career in 1995 working for a dry-cleaning plant developer, where he was responsible for site acquisition, negotiating leases, and the installation of new stores. In 2000, James joined Cutler Commercial, a boutique industrial brokerage firm, where he played an integral role on the team that grew the portfolio from 1.5MM SF to more than 6MM SF. In 2015, Mr. Bean was recruited by Marcus and Millichap to open a new office in Ventura, CA, where he completed more than 15 transactions during his three years there, until the opportunity came to join SVN. James has been involved in more than 1,500 lease and sale transactions, with more than 250 of those being user and investment sales transactions. Combined, these deals have involved more than $1 billion in volume and comprised several million square feet of improvements. Digsy Virtual Assistants are a great resource, but people don't hire them because it takes a lot of training, management, and documentation to make them effective. Digsy solves this problem with its fully-managed virtual assistant service. Check them out here. About "Commercial Real Estate School" Welcome to Commercial Real Estate School. Every episode is a lesson featuring some of the top names in the business, sharing insights and expertise to inspire your commercial real estate journey. Our powerhouse guests offer a wealth of expertise, guiding you through the complex commercial landscape and providing actionable insights to fast-track your success. Class is in session. 
--- Send in a voice message: https://podcasters.spotify.com/pod/show/creschoolshow/message Support this podcast: https://podcasters.spotify.com/pod/show/creschoolshow/support

Commercial Real Estate School
Mastering Communication

Commercial Real Estate School

Play Episode Listen Later Dec 7, 2023 9:56



Best Real Estate Investing Advice Ever
JF3380: Unlocking Commercial Real Estate Opportunities in 2024: Navigating Distressed Deals and Market Realities

Best Real Estate Investing Advice Ever

Play Episode Listen Later Dec 6, 2023 30:10


Commercial real estate expert Reid Bennett joins host Paul Mueller to discuss the trends and challenges that investors should keep an eye on as they navigate the complex landscape of commercial real estate in 2024. From rising insurance premiums to supply chain issues, Bennett shares valuable insights that will help investors make informed decisions in the coming year. Key Takeaways The Three Ds of Selling: Bennett emphasizes the importance of monitoring owners' motivations, often driven by "death, disease, or divorce," which can lead to opportunities for investors. Additionally, lenders are facing tough decisions as debt service coverage ratios drop, potentially resulting in distressed property sales. Timing the Market: Bennett advises investors not to try to time the market's bottom but instead to focus on deals that make sense with today's debt. He highlights the potential for deals to work out well even with current interest rates, stressing the importance of realistic assumptions. Prepare for Distressed Opportunities: With a prediction of increased distress in 2024, Bennett suggests that investors act swiftly when opportunities arise. He cautions against overestimating rental increases and expense reductions, advocating for a conservative approach to assumptions. Reid Bennett | Real Estate Background National Council Chair of Multifamily Properties for SVN, a full-service commercial real estate franchisor of the SVN® brand comprising over 1,600 commercial real estate advisors and staff that continues to expand across the globe. Multifamily real estate broker who works with the council to serve multifamily clients in over 170 markets around the country. Based in: Chicago, IL Say hi to him at: LinkedIn Twitter Best Ever Book: Insider's Edge to Real Estate Investing by James L. Nelson Greatest Lesson: Hire a coach, and if you don't have an admin, you are one. Sponsors CRE Daily BV Capital BAM Capital Syndication Attorneys

Commercial Real Estate School
The 5 Steps to Mastering Prospecting

Commercial Real Estate School

Play Episode Listen Later Dec 6, 2023 15:57



Commercial Real Estate School
Readers are LEADERS

Commercial Real Estate School

Play Episode Listen Later Dec 5, 2023 9:33



Commercial Real Estate School
Complimentary Specialization

Commercial Real Estate School

Play Episode Listen Later Dec 4, 2023 9:00




Commercial Real Estate School
Knowledge is POWER

Commercial Real Estate School

Play Episode Listen Later Nov 30, 2023 8:20



Commercial Real Estate School
Productivity and getting control of your week

Commercial Real Estate School

Play Episode Listen Later Nov 29, 2023 8:33



Commercial Real Estate School
Choose the Right Firm

Commercial Real Estate School

Play Episode Listen Later Nov 28, 2023 9:02


Welcome to Season 18 of Commercial Real Estate School! This season, I welcome James Bean, Vice President of SVN | Rich Investment Real Estate Partners. James Bean is an Investment Broker as well as a published author, writer, national sales coach, expert witness, accomplished public speaker, and graphic designer. He began his real estate career in 1995 working for a dry-cleaning plant developer, where he was responsible for site acquisition, negotiating leases, and the installation of new stores. In 2000, James joined Cutler Commercial, a boutique industrial brokerage firm, where he played an integral role on the team that grew the portfolio from 1.5MM SF to more than 6MM SF. In 2015, Mr. Bean was recruited by Marcus & Millichap to open a new office in Ventura, CA, where he completed more than 15 transactions during the three years he was there, until the opportunity came to join SVN. James has been involved in more than 1,500 lease and sale transactions, with more than 250 of those being user and investment sales transactions. Combined, these deals have involved more than $1 billion in volume and comprised several million square feet of improvements. Digsy Virtual Assistants are a great resource, but people don't hire them because it takes a lot of training, management, and documentation to make them effective. Digsy solves this problem with its fully managed virtual assistant service. Check them out here. About "Commercial Real Estate School" Welcome to Commercial Real Estate School. Every episode is a lesson featuring some of the top names in the business, sharing insights and expertise to inspire your commercial real estate journey. Our powerhouse guests offer a wealth of expertise, guiding you through the complex commercial landscape and providing actionable insights to fast-track your success. Class is in session.
--- Send in a voice message: https://podcasters.spotify.com/pod/show/creschoolshow/message Support this podcast: https://podcasters.spotify.com/pod/show/creschoolshow/support

Commercial Real Estate School
Know your "Why" and have a Firm Resolve

Commercial Real Estate School

Play Episode Listen Later Nov 28, 2023 7:18


Welcome to Season 18 of Commercial Real Estate School! This season, I welcome James Bean, Vice President of SVN | Rich Investment Real Estate Partners. James Bean is an Investment Broker as well as a published author, writer, national sales coach, expert witness, accomplished public speaker, and graphic designer. He began his real estate career in 1995 working for a dry-cleaning plant developer, where he was responsible for site acquisition, negotiating leases, and the installation of new stores. In 2000, James joined Cutler Commercial, a boutique industrial brokerage firm, where he played an integral role on the team that grew the portfolio from 1.5MM SF to more than 6MM SF. In 2015, Mr. Bean was recruited by Marcus & Millichap to open a new office in Ventura, CA, where he completed more than 15 transactions during the three years he was there, until the opportunity came to join SVN. James has been involved in more than 1,500 lease and sale transactions, with more than 250 of those being user and investment sales transactions. Combined, these deals have involved more than $1 billion in volume and comprised several million square feet of improvements. Digsy Virtual Assistants are a great resource, but people don't hire them because it takes a lot of training, management, and documentation to make them effective. Digsy solves this problem with its fully managed virtual assistant service. Check them out here. About "Commercial Real Estate School" Welcome to Commercial Real Estate School. Every episode is a lesson featuring some of the top names in the business, sharing insights and expertise to inspire your commercial real estate journey. Our powerhouse guests offer a wealth of expertise, guiding you through the complex commercial landscape and providing actionable insights to fast-track your success. Class is in session.
--- Send in a voice message: https://podcasters.spotify.com/pod/show/creschoolshow/message Support this podcast: https://podcasters.spotify.com/pod/show/creschoolshow/support

Potkaars Podcast
Bijeenkomst Controle Verkiezingen 29 okt. - Dennis Spaanstra, Rico Brouwer en Pepijn van Houwelingen

Potkaars Podcast

Play Episode Listen Later Nov 1, 2023 50:54


Early elections for the Dutch House of Representatives (Tweede Kamer) will be held on November 22, 2023. Afterwards, 'Controle Verkiezingen' (Election Oversight) will check the vote counting with as many volunteers as possible. On October 29, an information session was organized for the volunteers, which closed with speeches by journalist Rico Brouwer and Forum voor Democratie candidate no. 4, Pepijn van Houwelingen. Dennis Spaanstra, organizer of Controle Verkiezingen, invited all political parties, but only FvD and SVN responded. This was to the displeasure of at least one visitor (pictured), who would have liked to see a rebuttal to Van Houwelingen's criticism. The organization provided a livestream via its Facebook channel, which you can rewatch here: https://www.facebook.com/controleverkiezingen/videos/bijeenkomst-controle-verkiezingen-te-driebergen-deel-1/3570706136541980 Part 2 of the day is in this video report, with the contributions from Rico and Pepijn, followed by a Q&A block with the audience. 0:00 Dennis Spaanstra 1:18 Rico Brouwer 17:07 Pepijn van Houwelingen 24:46 Q&A, questions from the audience. Website: https://controle-verkiezingen.nl/ This report on Potkaars: https://potkaars.nl/blog/2023/11/1/bijeenkomst-controle-verkiezingen-29-okt-dennis-spaanstra-rico-brouwer-en-pepijn-van-houwelingen

Best Real Estate Investing Advice Ever
JF3286: Reid Bennett - Navigating the Multifamily Brokerage and Lending Landscape

Best Real Estate Investing Advice Ever

Play Episode Listen Later Sep 3, 2023 35:28


Ready to unravel the secrets of successful multifamily real estate investing? Dive into a captivating discussion with Reid Bennett, a multifamily industry veteran with over 22 years of experience. Explore multifamily strategies, lender insights, and market dynamics that can reshape your investment approach and maximize your gains. Key Takeaways: The Power of Networking in Commercial Real Estate: Learn how to effectively build relationships with brokers and other key players in the industry, and discover the importance of demonstrating your legitimacy as a buyer. Insights into Distressed Assets and REO Properties: Delve into the world of distressed multifamily assets and Real Estate Owned (REO) properties. Bennett provides valuable advice on how to approach lenders, manage properties in receivership, and navigate the challenges posed by insurance rates and market uncertainties. Strategies for Real Estate Success and Economic Trends: Gain valuable insights into the current economic landscape, lending trends, and rate forecasts. Bennett shares his thoughts on conservative underwriting, the impact of rising interest rates, and the potential benefits of seller financing in today's market conditions.     Reid Bennett | Real Estate Background National Council Chair of Multifamily Properties for SVN, a full-service commercial real estate franchisor of the SVN® brand comprised of over 1,600 commercial real estate advisors and staff that continues to expand across the globe. Multifamily real estate broker who works with the council to serve multifamily clients in over 170 markets around the country. Based in: Chicago, IL Say hi to him at:  LinkedIn Twitter Best Ever Book: Insider's Edge to Real Estate Investing by James L. Nelson Greatest Lesson: Hire a coach, and if you don't have an admin, you are one.   Click here to learn more about our sponsors: Masterworks Delete Me BAM Capital SyndicationAttorneys.com

SVN | On The Go
SVN | On The Go - Season 3 Ep. 4 ft. Tyler Davis from SVN | Saunders Ralston Dantzler

SVN | On The Go

Play Episode Listen Later Aug 23, 2023 27:17


In this episode, Julian Banuelos and Cameron Williams connect with Tyler Davis, CFO and Advisor of SVN | Saunders Ralston Dantzler to discuss why he got into commercial real estate, the value he finds in specializing in development land brokerage and investments, his growth with SVN, and so much more.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
LLMs Everywhere: Running 70B models in browsers and iPhones using MLC — with Tianqi Chen of CMU / OctoML

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Aug 10, 2023 52:10


We have just announced our first set of speakers at AI Engineer Summit! Sign up for the livestream or email sponsors@ai.engineer if you'd like to support.We are facing a massive GPU crunch. As both startups and VCs hoard Nvidia GPUs like countries count nuclear stockpiles, tweets about GPU shortages have become increasingly common. But what if we could run LLMs with AMD cards, or without a GPU at all? There's just one weird trick: compilation. And there's one person uniquely qualified to do it.We had the pleasure to sit down with Tianqi Chen, who's an Assistant Professor at CMU, where he both teaches the MLC course and runs the MLC group. You might also know him as the creator of XGBoost, Apache TVM, and MXNet, as well as the co-founder of OctoML. The MLC (short for Machine Learning Compilation) group has released a lot of interesting projects:* MLC Chat: an iPhone app that lets you run models like RedPajama-3B and Vicuna-7B on-device. It gets up to 30 tok/s!* Web LLM: Run models like LLaMA-70B in your browser (!!) to offer local inference in your product.* MLC LLM: a framework that allows any language models to be deployed natively on different hardware and software stacks.The MLC group has just announced new support for AMD cards; we previously talked about the shortcomings of ROCm, but using MLC you can get performance very close to their NVIDIA counterparts. This is great news for founders and builders, as AMD cards are more readily available. Here are their latest results on AMD's 7900s vs. some of the top NVIDIA consumer cards.If you just can't get a GPU at all, MLC LLM also supports ARM and x86 CPU architectures as targets by leveraging LLVM. 
While speed performance isn't comparable, it allows for non-time-sensitive inference to be run on commodity hardware.We also enjoyed getting a peek into TQ's process, which involves a lot of sketching:With all the other work going on in this space with projects like ggml and Ollama, we're excited to see GPUs becoming less and less of an issue to get models in the hands of more people, and innovative software solutions to hardware problems!Show Notes* TQ's Projects:* XGBoost* Apache TVM* MXNet* MLC* OctoML* CMU Catalyst* ONNX* GGML* Mojo* WebLLM* RWKV* HiPPO* Tri Dao's Episode* George Hotz EpisodePeople:* Carlos Guestrin* Albert GuTimestamps* [00:00:00] Intros* [00:03:41] The creation of XGBoost and its surprising popularity* [00:06:01] Comparing tree-based models vs deep learning* [00:10:33] Overview of TVM and how it works with ONNX* [00:17:18] MLC deep dive* [00:28:10] Using int4 quantization for inference of language models* [00:30:32] Comparison of MLC to other model optimization projects* [00:35:02] Running large language models in the browser with WebLLM* [00:37:47] Integrating browser models into applications* [00:41:15] OctoAI and self-optimizing compute* [00:45:45] Lightning RoundTranscriptAlessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20]Swyx: Okay, and we are here with Tianqi Chen, or TQ as people call him, who is assistant professor in ML computer science at CMU, Carnegie Mellon University, also helping to run Catalyst Group, also chief technologist of OctoML. You wear many hats. Are those, you know, your primary identities these days? Of course, of course. [00:00:42]Tianqi: I'm also, you know, very enthusiastic open source. So I'm also a VP and PRC member of the Apache TVM project and so on. But yeah, these are the things I've been up to so far. [00:00:53]Swyx: Yeah. 
So you did Apache TVM, XGBoost, and MXNet, and we can cover any of those in any amount of detail. But maybe what's one thing about you that people might not learn from your official bio or LinkedIn, you know, on the personal side? [00:01:08]Tianqi: Let me say, yeah, so normally when I do, I really love coding, even though like I'm trying to run all those things. So one thing that I keep a habit on is I try to do sketchbooks. I have a book, like real sketchbooks to draw down the design diagrams and the sketchbooks I keep sketching over the years, and now I have like three or four of them. And it's kind of a usually a fun experience of thinking the design through and also seeing how open source project evolves and also looking back at the sketches that we had in the past to say, you know, all these ideas really turn into code nowadays. [00:01:43]Alessio: How many sketchbooks did you get through to build all this stuff? I mean, if one person alone built one of those projects, he'll be a very accomplished engineer. Like you built like three of these. What's that process like for you? Like it's the sketchbook, like the start, and then you think about the code or like. [00:01:59]Swyx: Yeah. [00:02:00]Tianqi: So, so usually I start sketching on high level architectures and also in a project that works for over years, we also start to think about, you know, new directions, like of course generative AI language model comes in, how it's going to evolve. So normally I would say it takes like one book a year, roughly at that rate. It's usually fun to, I find it's much easier to sketch things out and then gives a more like a high level architectural guide for some of the future items. Yeah. [00:02:28]Swyx: Have you ever published this sketchbooks? Cause I think people would be very interested on, at least on a historical basis. Like this is the time where XGBoost was born, you know? Yeah, not really. [00:02:37]Tianqi: I started sketching like after XGBoost. 
So that's a kind of missing piece, but a lot of design details in TVM are actually part of the books that I try to keep a record of. [00:02:48]Swyx: Yeah, we'll try to publish them and publish something in the journals. Maybe you can grab a little snapshot for visual aid. Sounds good. [00:02:57]Alessio: Yeah. And yeah, talking about XGBoost, so a lot of people in the audience might know it's a gradient boosting library, probably the most popular out there. And it became super popular because many people started using them in like a machine learning competitions. And I think there's like a whole Wikipedia page of like all state-of-the-art models. They use XGBoost and like, it's a really long list. When you were working on it, so we just had Tri Dao, who's the creator of FlashAttention on the podcast. And I asked him this question, it's like, when you were building FlashAttention, did you know that like almost any transform race model will use it? And so I asked the same question to you when you were coming up with XGBoost, like, could you predict it would be so popular or like, what was the creation process? And when you published it, what did you expect? We have no idea. [00:03:41]Tianqi: Like, actually, the original reason that we built that library is that at that time, deep learning just came out. Like that was the time where AlexNet just came out. And one of the ambitious mission that myself and my advisor, Carlos Guestrin, then is we want to think about, you know, try to test the hypothesis. Can we find alternatives to deep learning models? Because then, you know, there are other alternatives like, you know, support vector machines, linear models, and of course, tree-based models. And our question was, if you build those models and feed them with big enough data, because usually like one of the key characteristics of deep learning is that it's taking a lot [00:04:22]Swyx: of data, right? 
[00:04:23]Tianqi: So we will be able to get the same amount of performance. That's a hypothesis we're setting out to test. Of course, if you look at now, right, that's a wrong hypothesis, but as a byproduct, what we find out is that, you know, most of the gradient boosting libraries out there are not efficient enough for us to test that hypothesis. So I happen to have quite a bit of experience in the past of building gradient boosting trees and their variants. So XGBoost was kind of like a byproduct of that hypothesis testing. At that time, I'm also competing a bit in data science challenges, like I worked on KDDCup and then Kaggle kind of become bigger, right? So I kind of think maybe it's becoming useful to others. One of my friends convinced me to try to do a Python binding of it. That turned out to be a very good decision, right, to be effective. Usually when I build it, we feel like maybe a command line interface is okay. And now we have a Python binding, we have R bindings. And then it realized, you know, it started getting interesting. People started contributing different perspectives, like visualization and so on. So we started to push a bit more on to building distributed support to make sure it works on any platform and so on. And even at that time point, when I talked to Carlos, my advisor, later, he said he never anticipated that we'll get to that level of success. And actually, why I pushed for gradient boosting trees, interestingly, at that time, he also disagreed. He thinks that maybe we should go for kernel machines then. And it turns out, you know, actually, we are both wrong in some sense, and Deep Neural Network was the king of the hill. But at least the gradient boosting direction got into something fruitful. [00:06:01]Swyx: Interesting. [00:06:02]Alessio: I'm always curious when it comes to these improvements, like, what's the design process in terms of like coming up with it?
And how much of it is collaborative with like other people that you're working with versus like trying to be, you know, obviously, in academia, it's like very paper-driven kind of research driven. [00:06:19]Tianqi: I would say the XGBoost improvement at that time point was more on like, you know, I'm trying to figure out, right. But it's combining lessons. Before that, I did work on some of the other libraries on matrix factorization. That was like my first open source experience. Nobody knew about it, because you'll find, likely, if you go and try to search for the package SVDFeature, you'll find some SVN repo somewhere. But it's actually being used for some of the recommender system packages. So I'm trying to apply some of the previous lessons there and trying to combine them. The later projects like MXNet and then TVM is much, much more collaborative in a sense that... But, of course, XGBoost has become bigger, right? So when we started that project myself, and then we have, it's really amazing to see people come in. Michael, who was a lawyer, and now he works on the AI space as well, on contributing visualizations. Now we have people from our community contributing different things. So XGBoost even today, right, it's a community of committers driving the project. So it's definitely something collaborative and moving forward on getting some of the things continuously improved for our community. [00:07:37]Alessio: Let's talk a bit about TVM too, because we got a lot of things to run through in this episode. [00:07:42]Swyx: I would say that at some point, I'd love to talk about this comparison between XGBoost or tree-based type AI or machine learning compared to deep learning, because I think there is a lot of interest around, I guess, merging the two disciplines, right? And we can talk more about that. I don't know where to insert that, by the way, so we can come back to it later. Yeah. 
[00:08:04]Tianqi: Actually, what I said, when we test the hypothesis, the hypothesis is kind of, I would say it's partially wrong, because the hypothesis we want to test now is, can you run tree-based models on image classification tasks, where deep learning is certainly a no-brainer right [00:08:17]Swyx: now today, right? [00:08:18]Tianqi: But if you try to run it on tabular data, still, you'll find that most people opt for tree-based models. And there's a reason for that, in the sense that when you are looking at tree-based models, the decision boundaries are naturally rules that you're looking at, right? And they also have nice properties, like being able to be agnostic to scale of input and be able to automatically compose features together. And I know there are attempts at building neural network models that work for tabular data, and I also sometimes follow them. I do feel like it's good to have a bit of diversity in the modeling space. Actually, when we're building TVM, we build cost models for the programs, and actually we are using XGBoost for that as well. I still think tree-based models are going to be quite relevant, because first of all, it's really easy to get them to work out of the box. And also, you will be able to get a bit of interpretability and control monotonicity [00:09:18]Swyx: and so on. [00:09:19]Tianqi: So yes, it's still going to be relevant. I also sometimes keep coming back to think about, are there possible improvements that we can build on top of these models? And definitely, I feel like it's a space that can have some potential in the future. [00:09:34]Swyx: Are there any current projects that you would call out as promising in terms of merging the two directions? [00:09:41]Tianqi: I think there are projects that try to bring a transformer-type model for tabular data. I don't remember specifics of them, but I think even nowadays, if you look at what people are using, tree-based models are still one of their toolkits. 
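The gradient-boosting idea behind XGBoost that TQ describes can be sketched in a few lines: repeatedly fit a weak learner to the residuals of the current ensemble and add it in with a shrunken weight. This is a toy illustration only, with invented data, decision stumps, and squared-error loss; real XGBoost adds second-order gradients, regularization, sparsity handling, and far better tree learners.

```python
# Toy gradient boosting with one-feature decision stumps (illustrative only).

def fit_stump(x, residuals):
    """Pick the threshold split on x that minimizes squared error of the residuals."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi: lmean if xi <= t else rmean

def boost(x, y, n_rounds=50, lr=0.3):
    """Additively fit stumps to the residuals of the current ensemble."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 3.1, 3.0, 2.9]   # roughly a step function
model = boost(x, y)
```

After 50 rounds the ensemble's predictions land close to the step function in the data, which is the "fit the residuals, shrink, repeat" mechanic the conversation is about.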
So I think maybe eventually it's not even a replacement, it will be just an ensemble of models that you can call. Perfect. [00:10:07]Alessio: Next up, about three years after XGBoost, you built this thing called TVM, which is now a very popular compiler framework for models. Let's talk about, so this came out about at the same time as ONNX. So I think it would be great if you could maybe give a little bit of an overview of how the two things work together. Because it's kind of like the model, then goes to ONNX, then goes to the TVM. But I think a lot of people don't understand the nuances. Can we get a bit of a backstory on that? [00:10:33]Tianqi: So actually, that's kind of an ancient history. Before XGBoost, I worked on deep learning for two years or three years. I got a master's before I started my PhD. And during my master's, my thesis focused on applying convolutional restricted Boltzmann machines for ImageNet classification. That is the thing I'm working on. And that was before the AlexNet moment. So effectively, I had to handcraft NVIDIA CUDA kernels on, I think, a GTX 2070 card. It took me about six months to get one model working. And eventually, that model is not so good, and we should have picked a better model. But that was like an ancient history that really got me into this deep learning field. And of course, eventually, we find it didn't work out. So in my master's, I ended up working on recommender systems, which got me a paper, and I applied and got a PhD. But I always want to come back to work on the deep learning field. So after XGBoost, I think I started to work with some folks on this particular MXNet. At that time, frameworks like Caffe, Theano, and PyTorch hadn't yet come out. And we're really working hard to optimize for performance on GPUs. At that time, I found it's really hard, even for NVIDIA GPU. It took me six months. 
And then it's amazing to see on different hardware how hard it is to go and optimize code for the platforms that are interesting. So that gets me thinking, can we build something more generic and automatic? So that I don't need an entire team of so many people to go and build those frameworks. So that's the motivation of starting working on TVM. There is really too much machine learning engineering needed to support deep learning models on the platforms that we're interested in. I think it started a bit earlier than ONNX, but once ONNX got announced, I think it was in a similar time period. So overall, how it works is that TVM, you will be able to take a subset of machine learning programs that are represented in what we call a computational graph. Nowadays, we can also represent a loop-level program ingest from your machine learning models. Usually, you have model formats ONNX, or in PyTorch, they have FX Tracer that allows you to trace the FX graph. And then it goes through TVM. We also realized that, well, yes, it needs to be more customizable, so it will be able to perform some of the compilation optimizations like fusing operators together, doing smart memory planning, and more importantly, generate low-level code. So that works for NVIDIA and also is portable to other GPU backends, even non-GPU backends [00:13:36]Swyx: out there. [00:13:37]Tianqi: So that's a project that actually has been my primary focus over the past few years. And it's great to see how it started from where I think we are the very early initiator of machine learning compilation. I remember there was a visit one day, one of the students asked me, are you still working on deep learning frameworks? I tell them that I'm working on ML compilation. And they said, okay, compilation, that sounds very ancient. It sounds like a very old field. And why are you working on this? And now it's starting to get more traction, like if you say Torch Compile and other things. 
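The operator fusion TQ mentions (combining ops so the generated kernel makes one pass over memory instead of several) can be illustrated with a toy elementwise pipeline. The graph encoding and the "pass" below are invented for illustration; TVM's real IR and pass infrastructure are far richer.

```python
# Toy sketch of elementwise operator fusion (illustrative; not TVM's actual IR).
# Unfused: each op is its own loop over the tensor, producing a temporary.
# Fused: the scalar ops are composed into one kernel, run in a single loop.

ELEMENTWISE = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "relu": lambda a: max(a, 0.0),
}

def run_unfused(ops, x):
    """One full pass over the tensor per op (a temporary buffer each time)."""
    for name, arg in ops:
        f = ELEMENTWISE[name]
        x = [f(v, arg) if arg is not None else f(v) for v in x]
    return x

def run_fused(ops, x):
    """Fusion: compose the scalar functions, then make a single pass."""
    def kernel(v):
        for name, arg in ops:
            f = ELEMENTWISE[name]
            v = f(v, arg) if arg is not None else f(v)
        return v
    return [kernel(v) for v in x]

ops = [("add", 1.0), ("relu", None), ("mul", 2.0)]  # add -> relu -> mul chain
x = [-2.0, 0.5, 3.0]
assert run_unfused(ops, x) == run_fused(ops, x) == [0.0, 3.0, 8.0]
```

Both versions compute the same result; the fused one touches memory once, which is where the performance win comes from on real hardware.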
I'm really glad to see this field starting to pick up. And also we have to continue innovating here. [00:14:17]Alessio: I think the other thing that I noticed is, it's kind of like a big jump in terms of area of focus to go from XGBoost to TVM, it's kind of like a different part of the stack. Why did you decide to do that? And I think the other thing about compiling to different GPUs and eventually CPUs too, did you already see some of the strain that models could have just being focused on one runtime, only being on CUDA and that, and how much of that went into it? [00:14:50]Tianqi: I think it's less about trying to get impact, more about wanting to have fun. I like to hack code, I had great fun hacking CUDA code. Of course, being able to generate CUDA code is cool, right? But now, after being able to generate CUDA code, okay, by the way, you can do it on other platforms, isn't that amazing? So it's more of that attitude to get me started on this. And also, I think when we look at different researchers, myself is more like a problem solver type. So I like to look at a problem and say, okay, what kind of tools we need to solve that problem? So regardless, it could be building better models. For example, while we built XGBoost, we built certain regularizations into it so that it's more robust. It also means building system optimizations, writing low-level code, maybe trying to write assembly and build compilers and so on. So as long as they solve the problem, definitely go and try to do them together. And I also see it's a common trend right now. Like if you want to be able to solve machine learning problems, it's no longer just at the algorithm layer, right? You kind of need to solve it from the algorithm, data, and systems angles. And this entire field of machine learning systems, I think it's kind of emerging. And there's now a conference around it. And it's really good to see a lot more people are starting to look into this. [00:16:10]Swyx: Yeah. 
Are you talking about ICML or something else? [00:16:13]Tianqi: So machine learning and systems, right? So not only machine learning, but machine learning and systems. So there's a conference called MLSys. It's definitely a smaller community than ICML, but I think it's also an emerging and growing community where people are talking about what are the implications of building systems for machine learning, right? And how do you go and optimize things around that and co-design models and systems together? [00:16:37]Swyx: Yeah. And you were area chair for ICML and NeurIPS as well. So you've just had a lot of conference and community organization experience. Is that also an important part of your work? Well, it's kind of expected for an academic. [00:16:48]Tianqi: If I hold an academic job, I need to do services for the community. Okay, great. [00:16:53]Swyx: Your most recent venture in MLSys is going to the phone with MLC LLM. You announced this in April. I have it on my phone. It's great. I'm running Llama 2, Vicuna. I don't know what other models that you offer. But maybe just kind of describe your journey into MLC. And I don't know how this coincides with your work at CMU. Is that some kind of outgrowth? [00:17:18]Tianqi: I think it's more like a focused effort that we want in the area of machine learning compilation. So it's kind of related to what we built in TVM. So when we built TVM was five years ago, right? And a lot of things happened. We built the end-to-end machine learning compiler that works, the first one that works. But then we captured a lot of lessons there. So then we are building a second iteration called TVM Unity. That allows us to be able to allow ML engineers to be able to quickly capture the new model and how we demand building optimizations for them. And MLC LLM is kind of like an MLC vertical, a vertically driven effort where we go and build tutorials and go and build projects, like LLM solutions. 
So that to really show like, okay, you can take machine learning compilation technology and apply it and bring something fun forward. Yeah. So yes, it runs on phones, which is really cool. But the goal here is not only making it run on phones, right? The goal is making it deploy universally. So we do run on Apple M2 Macs, the 70 billion models. Actually, on a single batch inference, more recently on CUDA, we get, I think, the best performance you can get out there already on the 4-bit inference. Actually, as I alluded earlier before the podcast, we just had a result on AMD. And on a single batch, actually, we can get the latest AMD GPU. This is a consumer card. It can get to about 80% of the 4090, NVIDIA's best consumer card out there. So it's not yet on par, but thinking about the diversity and what you can enable on that card, it's really amazing what you can do with this kind of technology. [00:19:10]Swyx: So one thing I'm a little bit confused by is that most of these models are in PyTorch, but you're running this inside a TVM. I don't know. Was there any fundamental change that you needed to do, or was this basically the fundamental design of TVM? [00:19:25]Tianqi: So the idea is that, of course, it comes back to program representation, right? So effectively, TVM has this program representation called TVM script that contains more like computational graph and operational representation. So yes, initially, we do need to take a bit of effort of bringing those models onto the program representation that TVM supports. Usually, there are a mix of ways, depending on the kind of model you're looking at. For example, for vision models and stable diffusion models, usually we can just do tracing that takes a PyTorch model onto TVM. That part is still being robustified so that we can bring more models in.
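The 4-bit inference discussed here rests on weight quantization: storing each group of weights as small integers plus one shared floating-point scale. Below is a minimal sketch of symmetric per-group int4 quantization. This is a common scheme, not necessarily MLC LLM's exact format, and the weights and group size are made up for illustration.

```python
# Hedged sketch: symmetric per-group int4 quantization of weights.
# Each group shares one fp32 scale; each weight is stored in [-8, 7].

def quantize_int4(weights, group_size=4):
    """Return a list of (scale, int4_values) per group."""
    groups = []
    for i in range(0, len(weights), group_size):
        g = weights[i:i + group_size]
        # Map the largest magnitude in the group to 7 (symmetric scheme).
        scale = max(abs(w) for w in g) / 7.0 or 1.0
        q = [max(-8, min(7, round(w / scale))) for w in g]
        groups.append((scale, q))
    return groups

def dequantize_int4(groups):
    """Reconstruct approximate fp weights from (scale, int4) groups."""
    return [scale * qi for scale, q in groups for qi in q]

w = [0.12, -0.53, 0.07, 0.31, 1.4, -0.9, 0.05, 0.66]  # made-up weights
recon = dequantize_int4(quantize_int4(w))
# Reconstruction error is bounded by about half a quantization step per group.
max_err = max(abs(a - b) for a, b in zip(w, recon))
```

With only 16 representable levels per group, the storage drops roughly 8x versus fp32, at the cost of a per-group rounding error that real systems tune with group size and scheme choice.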
On language model tasks, actually what we do is we directly build some of the model constructors and try to directly map from Hugging Face models. The goal is, if you have a Hugging Face configuration, we will be able to bring that in and apply optimizations to it. So one fun thing about model compilation is that your optimization doesn't happen only at the source language level, right? For example, if you're writing PyTorch code, you can go and try to use a better fused operator at the source code level, and torch.compile might help you do a bit of things there. In most model compilation, it not only happens at the beginning stage, but we also apply generic transformations in between, also through a Python API. So you can tweak some of that. So that part of optimization helps a lot in uplifting both performance and portability across environments. And another thing that we do have is what we call universal deployment. So if you get the ML program into this TVMScript format, where there are functions that take in tensors and output tensors, we will have a way to compile it. So you will be able to load the function in any of the language runtimes that TVM supports. So you could load it in JavaScript, and it's a JavaScript function that takes in tensors and outputs tensors. You can load it in Python, of course, and C++ and Java. So the goal there is really to bring the ML model to the languages that people care about and be able to run it on the platforms they like. [00:21:37]Swyx: It strikes me that I've talked to a lot of compiler people, but you don't have a traditional compiler background. You're inventing your own discipline called machine learning compilation, or MLC. Do you think that this will be a bigger field going forward? [00:21:52]Tianqi: First of all, I do work with people working on compilation as well. So we're also taking inspiration from a lot of early innovations in the field.
Like for example, for TVM initially, we took a lot of inspiration from Halide, which is an image processing compiler. And of course, since then, we have evolved quite a bit to focus on machine-learning-related compilation. If you look at some of our conference publications, you'll find that machine learning compilation is already kind of a subfield. So if you look at papers in both machine learning venues, the MLSys conference of course, and also systems venues, every year there will be papers around machine learning compilation. And in the compiler conference called CGO, there's a C4ML workshop that is also trying to focus on this area. So definitely it's already starting to gain traction and become a field. I wouldn't claim that I invented this field, but definitely I helped to work with a lot of folks there. And I try to bring a perspective, of course, trying to learn a lot from compiler optimizations as well as trying to bring knowledge of machine learning and systems together. [00:23:07]Alessio: So we had George Hotz on the podcast a few episodes ago, and he had a lot to say about AMD and their software. So when you think about TVM, are you still restricted in a way by the performance of the underlying kernel, so to speak? So if your target is like a CUDA runtime, you still get better performance, no matter like TVM kind of helps you get there, but then that level you don't take care of, right? [00:23:34]Swyx: There are two parts in here, right? [00:23:35]Tianqi: So first of all, there is the lower-level runtime, like the CUDA runtime. And then actually for NVIDIA, a lot of the moat came from their libraries, like CUTLASS, cuDNN, right? Those library optimizations. And also for specialized workloads, actually you can specialize them. Because in a lot of cases you'll find that if you go and do benchmarks, it's very interesting.
Like two years ago, if you tried to benchmark ResNet, for example, usually the NVIDIA library [00:24:04]Swyx: gives you the best performance. [00:24:06]Tianqi: It's really hard to beat them. But as soon as you start to change the model to something else, maybe a bit of a variation of ResNet, not for the traditional ImageNet detections, but for latent detection and so on, there will be some room for optimization, because people sometimes overfit to benchmarks. There are people who go and optimize things, right? So people overfit the benchmarks. So that's the largest barrier: being able to get low-level kernel libraries, right? In that sense, the goal of TVM is actually that we try to have a generic layer to both, of course, leverage libraries when available, but also be able to automatically generate [00:24:45]Swyx: libraries when possible. [00:24:46]Tianqi: So in that sense, we are not restricted by the libraries that they have to offer. That's why we are able to run on Apple M2 or WebGPU, where there's no library available, because we are automatically generating libraries. That makes it easier to support less well-supported hardware, right? For example, WebGPU is one example. From a runtime perspective, AMD, I think before, their Vulkan driver was not very well supported. Recently, they are getting good. But even before that, we were able to support AMD through this GPU graphics backend called Vulkan, which is not as performant, but it gives you decent portability across those [00:25:29]Swyx: hardware. [00:25:29]Alessio: And I know we got other MLC stuff to talk about, like WebLLM, but I want to wrap up on the optimizations that you're doing. So there are kind of four core things, right? Kernel fusion, which we talked a bit about in the Flash Attention episode and the tinygrad one, memory planning, and loop optimization. I think those are like pretty, you know, self-explanatory.
I think the one that people have the most questions about... can you quickly explain [00:25:53]Swyx: those? [00:25:54]Tianqi: So there are kind of different things, right? Kernel fusion means that, you know, if you have an operator like a convolution, or in the case of a transformer like an MLP, you have other operators that follow it, right? You don't want to launch two GPU kernels. You want to be able to put them together in a smart way, right? And as for memory planning, it's more about, you know, hey, if you run like Python code, every time you generate a new array, you are effectively allocating a new piece of memory, right? Of course, PyTorch and other frameworks try to optimize that for you. So there is a smart memory allocator behind the scenes. But actually, in a lot of cases, it's much better to statically allocate and plan everything ahead of time. And that's where a compiler can come in. First of all, for language models it's actually much harder, because of dynamic shapes. So you need what we call symbolic shape tracing. So we have a symbolic variable that tells you the shape of the first tensor is n by 12. And the shape of the third tensor is also n by 12. Or maybe it's n times 2 by 12. Although you don't know what n is, right, you will be able to know that relation and be able to use that to reason about fusion and other decisions. So besides this, I think loop transformation is quite important. And it's actually non-traditional. Originally, if you simply write code and you want to get performance, it's very hard. For example, you know, if you write a matrix multiply, the simplest thing you can do is: for i, j, k: C[i][j] += A[i][k] * B[k][j]. But that code is 100 times slower than the best available code that you can get.
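As a concrete picture of that inner loop, here is the naive version in pure Python (illustrative only; real kernels reorder and tile these loops, stage data through shared memory, and use tensor cores, which is where the roughly 100x gap comes from):

```python
# Naive triple-loop matrix multiply: C[i][j] += A[i][k] * B[k][j].
# This is the "simplest thing you can do" version described above;
# optimized kernels tile for cache/shared memory and use tensor cores,
# which is why they can be orders of magnitude faster.

def matmul_naive(A, B):
    n, k_dim, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for k in range(k_dim):
                C[i][j] += A[i][k] * B[k][j]
    return C


A = [[1.0, 2.0],
     [3.0, 4.0]]
B = [[5.0, 6.0],
     [7.0, 8.0]]
print(matmul_naive(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

Loop transformation, as described here, takes exactly this kind of straightforward loop nest and rewrites its schedule (tiling, reordering, vectorizing) without changing what it computes.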
So we do a lot of transformations, like being able to take the original code, trying to put things into shared memory, and making use of tensor cores, making use of memory copies, and all this. Actually, with all these things, we also realize that, you know, we cannot do all of them ourselves. So we also make the ML compilation framework a Python package, so that people will be able to continuously improve that part of engineering in a more transparent way. So we find that's very useful, actually, for us to be able to get good performance very quickly on some of the new models. Like when Llama 2 came out, we were able to go and look at the whole thing, see where the bottleneck is, and go and optimize those. [00:28:10]Alessio: And then the fourth one being weight quantization. So everybody wants to know about that. And just to give people an idea of the memory saving, if you're doing FP32, it's like four bytes per parameter. Int8 is like one byte per parameter. So you can really shrink down the memory footprint. What are some of the trade-offs there? How do you figure out what the right target is? And what are the precision trade-offs, too? [00:28:37]Tianqi: Right now, a lot of people also mostly use int4 for language models. So that really shrinks things down a lot. And more recently, actually, we started to think that, at least in MLC, we don't want to have a strong opinion on what kind of quantization we want to bring, because there are so many researchers in the field. So what we can do is allow developers to customize the quantization they want, while we still bring the optimized code for them. So we are working on this item called bring-your-own-quantization. In fact, hopefully MLC will be able to support more quantization formats. And definitely, I think there's an open field that's being explored. Can you bring more sparsity? Can you quantize activations as much as possible, and so on? And it's going to be something that's going to be relevant for quite a while.
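The bytes-per-parameter arithmetic in this exchange is easy to make concrete. A quick back-of-the-envelope for weight memory only (activations and the KV cache add overhead on top, which is why real-world RAM needs are higher):

```python
# Back-of-the-envelope weight memory at different precisions.
# Weights only -- activations and KV cache add more on top.

def weight_gigabytes(num_params, bits_per_param):
    return num_params * bits_per_param / 8 / 1e9

for bits, name in [(32, "FP32"), (8, "int8"), (4, "int4")]:
    print(f"7B  @ {name}: {weight_gigabytes(7e9, bits):6.1f} GB")
    print(f"70B @ {name}: {weight_gigabytes(70e9, bits):6.1f} GB")
# int4 shrinks a 70B model's weights from 280 GB (FP32) to 35 GB,
# which is what makes single-machine inference plausible at all.
```

This is also why int4 dominates local inference: at FP32 or even FP16, the larger models simply do not fit in consumer memory.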
[00:29:27]Swyx: You mentioned something I wanted to double back on, which is that most people use int4 for language models. This is actually not obvious to me. Are you talking about the GGML-type people, or are the researchers who are training the models also using int4? [00:29:40]Tianqi: Sorry, so I'm mainly talking about inference, not training, right? So when you're doing training, of course, int4 is harder, right? Maybe you could do some form of mixed precision. For inference, I think int4 is kind of like, in a lot of cases, you will be able to get away with int4. And actually, that does bring a lot of savings in terms of the memory overhead, and so on. [00:30:09]Alessio: Yeah, that's great. Let's talk a bit about maybe GGML, then there's Mojo. How should people think about MLC? How do all these things play together? I think GGML is focused on model-level re-implementation and improvements. Mojo is a language, a superset of Python. You're more at the compiler level. Do you all work together? Do people choose between them? [00:30:32]Tianqi: So I think in this case, I think it's great to say the ecosystem becomes so rich with so many different ways. So in our case, GGML is more like you're implementing something from scratch in C, right? So that gives you the ability to go and customize each particular hardware backend. But then you will need to write your own CUDA kernels, and write kernels for AMD, and so on. So the engineering effort is a bit more broadened in that sense. Mojo, I have not looked at the specific details yet. I think it's good to say it's a language, right? I believe there will also be machine learning compilation technologies behind it. So it's good to say it sits in an interesting place there. In the case of MLC, our case is that we do not want to have an opinion on how, where, and in which language people want to develop and deploy, and so on. And we also realize that actually there are two phases.
You want to be able to develop and optimize your model. By optimization, I mean really bringing in the best CUDA kernels and doing some of the machine learning engineering in there. And then there's a phase where you want to deploy it as a part of the app. So if you look at the space, you'll find that GGML is more like, I'm going to develop and optimize in the C language, right? And then most of the low-level languages they have. And Mojo is that you want to develop and optimize in Mojo, right? And you deploy in Mojo. In fact, that's the philosophy they want to push for. In the MLC case, we find that if you want to develop models, the machine learning community likes Python. Python is the language you should focus on. So in the case of MLC, we really want to enable not only being able to just define your model in Python, that's very common, right? But also doing ML optimization, like engineering optimization, CUDA kernel optimization, memory planning, all those things in Python, which makes it customizable and so on. But when you do deployment, we realize that people want a bit of a universal flavor. If you are a web developer, you want JavaScript, right? If you're maybe an embedded systems person, maybe you would prefer C++ or C or Rust. And people sometimes do like Python in a lot of cases. So in the case of MLC, we really want to have this vision of: you build a generic optimization in Python, then you deploy that universally onto the environments that people like. [00:32:54]Swyx: That's a great perspective and comparison, I guess. One thing I wanted to make sure that we cover is that I think you are one of this emerging set of academics that also very much focus on your artifacts of delivery. Of course. Something we talked about for three years, that he was very focused on his GitHub. And obviously you treated XGBoost like a product, you know? And then now you're publishing an iPhone app. Okay. Yeah. Yeah.
What is your thinking about academics getting involved in shipping products? [00:33:24]Tianqi: I think there are different ways of making impact, right? Definitely, you know, there are academics that are writing papers and building insights for people so that people can build products on top of them. In my case, I think in the particular field I'm working on, machine learning systems, I feel like we really need to be able to get it into the hands of people so that we really see the problem, right? And we show that we can solve the problem. And it's a different way of making impact. And there are academics doing similar things. Like, you know, if you look at some of the people from Berkeley, right? Every few years, they will come up with big open source projects. Certainly, I think it's just a healthy ecosystem to have different ways of making impact. And I feel like being able to do open source and work with the open source community is really rewarding, because we have a real problem to work on when we build our research. Actually, that research comes together and people are able to make use of it. And we also start to see interesting research challenges that we wouldn't otherwise see, right, if you're just trying to do a prototype and so on. So I feel like it's one interesting way of making impact, making contributions. [00:34:40]Swyx: Yeah, you definitely have a lot of impact there. And having experience publishing Mac stuff before, the Apple App Store is no joke. It is the hardest compilation, human compilation effort. So one thing that we definitely wanted to cover is running in the browser. You have a 70 billion parameter model running in the browser. That's right. Can you just talk about how? Yeah, of course. [00:35:02]Tianqi: So I think that there are a few elements that need to come in, right? First of all, you know, we do need a MacBook, the latest one, like an M2 Max, because you need the memory to be big enough to cover that.
So for a 70 billion model, it takes you about, I think, 50 gigabytes of RAM. So the M2 Max, the upper version, will be able to run it, right? And it also leverages machine learning compilation. Again, what we are doing is the same, whether it's running on iPhone, on server cloud GPUs, on AMD, or on MacBook, we all go through that same MLC pipeline. Of course, in certain cases, maybe we'll do a bit of customization iteration for either one. And then it runs in the browser runtime, this package called WebLLM. So effectively, what we do is we take that original model and compile to what we call WebGPU. And then WebLLM will pick it up. And WebGPU is this latest GPU technology that major browsers are shipping right now. So you can get it in Chrome already. It allows you to access your native GPUs from a browser. And then effectively, that language model is just invoking the WebGPU kernels through there. So actually, when Llama 2 came out, initially, we asked the question: can you run 70 billion on a MacBook? That was the question we were asking. So first, we actually... Jin Lu, who is the engineer pushing this, he got 70 billion on a MacBook. We had a CLI version. So in MLC, you will be able to... That runs through a Metal accelerator. So effectively, you use the Metal programming language to get the GPU acceleration. So we found, okay, it works for the MacBook. Then we asked, we had a WebGPU backend. Why not try it there? So we just tried it out. And it's really amazing to see everything up and running. And actually, it runs smoothly in that case. So I do think there are some kind of interesting use cases already in this, because everybody has a browser. You don't need to install anything. I think it doesn't make sense yet to really run a 70 billion model in a browser, because you kind of need to be able to download the weights and so on. But I think we're getting there.
Effectively, the most powerful models you will be able to run on a consumer device. It's kind of really amazing. And also, in a lot of cases, there might be use cases. For example, if I'm going to build a chatbot that I talk to and it answers questions, maybe some of the components, like the voice-to-text, could run on the client side. And so there are a lot of possibilities of being able to have something hybrid that contains an edge component and something that runs on a server. [00:37:47]Alessio: Do these browser models have a way for applications to hook into them? So if I'm using, say, you can use OpenAI or you can use the local model. Of course. [00:37:56]Tianqi: Right now, actually, we are building... So there's an NPM package called WebLLM, right? So if you want to embed it into your web app, you will be able to directly depend on WebLLM and use it. We also have a REST API that's OpenAI-compatible. So that REST API, I think, right now, is actually running on a native backend, so if you have a CUDA server, it's faster to run on the native backend. But we also have a WebGPU version of it that you can go and run. So yeah, we do want to have easier integrations with existing applications. And the OpenAI API is certainly one way to do that. Yeah, this is great. [00:38:37]Swyx: I actually did not know there's an NPM package that makes it very, very easy to try out and use. I want to actually... One thing I'm unclear about is the chronology. Because as far as I know, Chrome shipped WebGPU the same time that you shipped WebLLM. Okay, yeah. So did you have some kind of secret chat with Chrome? [00:38:57]Tianqi: The good news is that Chrome is doing a very good job of trying to have early releases. So although the official shipment of Chrome WebGPU was the same time as WebLLM, you could actually try out WebGPU technology in Chrome before that. There is an unstable version called Canary.
I think as early as two years ago, there was a WebGPU version. Of course, it's getting better. So we had a TVM-based WebGPU backend two years ago. Of course, at that time, there were no language models. It was running on less interesting, well, still quite interesting models. And then this year, we really started to see it getting mature and the performance keeping up. So we made a more serious push to bring the language-model-compatible runtime onto WebGPU. [00:39:45]Swyx: I think you'll agree that the hardest part is the model download. Have there been conversations about a one-time model download and sharing between all the apps that might use this API? That is a great point. [00:39:58]Tianqi: I think it's already supported in some sense. When we download the model, WebLLM will cache it in a special Chrome cache. So if a different web app uses the same WebLLM JavaScript package, you don't need to redownload the model again. So there is already something there. But of course, you have to download the model once at least to be able to use it. [00:40:19]Swyx: Okay. One more thing just in general before we're about to zoom out to OctoAI. Just the last question is, you're not the only project working on, I guess, local models. That's right. Alternative models. There's GPT4All, there's Ollama that just recently came out, and there's a bunch of these. What would be your advice to them on what's a valuable problem to work on? And what is just a thin wrapper around GGML? Like, what are the interesting problems in this space, basically? [00:40:45]Tianqi: I think making the API better is certainly something useful, right? In general, one thing that we do try to push very hard on is this idea of easier universal deployment. So we are also looking forward to actually having more integration with MLC. That's why we're trying to build APIs like WebLLM and other things.
So we're also looking forward to collaborating with all those ecosystems and working on support to bring in models more universally and be able to keep up the best performance when possible in a more push-button way. [00:41:15]Alessio: So as we mentioned in the beginning, you're also the co-founder of OctoML. Recently, OctoML released OctoAI, which is a compute service, basically focused on optimizing model runtimes and acceleration and compilation. What has been the evolution there? So Octo started as kind of like a traditional MLOps tool, where people were building their own models and you helped them on that side. And then it seems like now most of the market is shifting to starting from pre-trained generative models. Yeah, what has been that experience for you, how have you seen the market evolve, and how did you decide to release OctoAI? [00:41:52]Tianqi: One thing that we found out is that on one hand, it's really easy to go and get something up and running, right? But as soon as you start to consider the many possible availability and scalability issues, and even integration issues, things become kind of interesting and complicated. So we really want to make sure to help people get that part easy, right? And now, if we look at the customers we talk to and the market, certainly generative AI is something that is very interesting. So that is something that we really hope to help elevate. And also building on top of the technology we built to enable things like portability across hardware. You will be able to not worry about the specific details, right? Just focus on getting the model out. We'll work on the infrastructure and other things that help on the other end. [00:42:45]Alessio: And when it comes to getting optimization on the runtime, I see, we run an early adopters community, and most enterprises' issue is how to actually run these models. Do you see that as one of the big bottlenecks now?
I think a few years ago it was like, well, we don't have a lot of machine learning talent. We cannot develop our own models. Versus now it's like, there are these great models you can use, but I don't know how to run them efficiently. [00:43:12]Tianqi: That depends on how you define running, right? On one hand, it's easy to download, like with MLC: you download it, you run it on a laptop. But then there are also different decisions, right? What if you are trying to serve a larger user request? What if that request changes? What if the availability of hardware changes? Right now it's really hard to get the latest NVIDIA hardware, unfortunately, because everybody's trying to build things using the hardware that's out there. So I think when the definition of run changes, there are a lot more questions around things. And also in a lot of cases, it's not only about running models, it's also about being able to solve problems around them. How do you manage your model locations, and how do you make sure that you get your model close to your execution environment more efficiently? So definitely a lot of engineering challenges out there that we hope to alleviate, yeah. And also, if you think about our future, definitely I feel like right now, given the technology and the kind of hardware availability we have today, we will need to make use of all the possible hardware available out there. That will include mechanisms for cutting down costs, bringing something to the edge and cloud in a more natural way. So I feel like this is still a very early stage of where we are, but it's already good to see a lot of interesting progress. [00:44:35]Alessio: Yeah, that's awesome. I would love, I don't know how much we're going to go in depth into it, but what does it take to actually abstract all of this from the end user? You know, like they don't need to know what GPUs you run, what cloud you're running them on. You take all of that away.
What was that like as an engineering challenge? [00:44:51]Tianqi: So I think that there are engineering challenges there. In fact, first of all, you will need to be able to support all the kinds of hardware backends you have, right? On one hand, if you look at the NVIDIA libraries, you'll find, not too surprisingly, that most of the latest libraries work well on the latest GPUs. But there are other GPUs out there in the cloud as well. So certainly, having the know-how and being able to do model optimization is one thing, right? Also, infrastructure for being able to scale things up and locate models. And in a lot of cases, we do find that on typical models, it also requires kind of vertical iterations. So it's not about, you know, building a silver bullet that is going to solve all the problems. It's more about, you know, we're building a product, we work with the users, and we find out there are interesting opportunities at a certain point. And then our engineers will go and solve that, and it will automatically be reflected in the service. [00:45:45]Swyx: Awesome. [00:45:46]Alessio: We can jump into the lightning round unless, I don't know, Sean, if you have more questions, or TQ, if you have more stuff you wanted to talk about that we didn't get a chance to [00:45:54]Swyx: touch on. [00:45:54]Alessio: Yeah, we have talked a lot. [00:45:55]Swyx: So, yeah. We always would like to ask, you know, do you have a commentary on other parts of AI and ML that are interesting to you? [00:46:03]Tianqi: So right now, I think one thing that we are really pushing hard for is this question about how far we can bring open source, right? I'm kind of like a hacker and I really like to put things together. So I think it's unclear what the future of AI looks like. On one hand, it could be possible that, you know, you just have a few big players, you just try to talk to those bigger language models and they can do everything, right?
On the other hand, one of the things that we as academics are really excited about and pushing for, and that's one reason why I'm pushing for MLC, is: can we build something where you have different models? You have personal models that know the best movies you like, but you also have bigger models that maybe know more, and you get those models to interact with each other, right? And be able to have a wide ecosystem of AI agents that helps each person while still being able to do things like personalization. Some of them can run locally, some of them, of course, run on a cloud, and how do they interact with each other? So I think that is a very exciting time where the future is yet undecided, but I feel like there is something we can do to shape that future as well. [00:47:18]Swyx: One more thing, which is something I'm also pursuing, which is, and this kind of goes back into predictions, but also back in your history, do you have any idea, or are you looking out for anything post-transformers as far as architecture is concerned? [00:47:32]Tianqi: I think, you know, in a lot of these cases, you can find there are already promising models for long contexts, right? There are state space models, like, you know, from our colleague Albert Gu, who worked on the HiPPO models, right? And then there is an open source version called RWKV. It's a recurrent model that allows you to summarize things. Actually, we are bringing RWKV to MLC as well, so maybe you will be able to see one of those models. [00:48:00]Swyx: We actually recorded an episode with one of the RWKV core members. It's unclear because there's no academic backing. It's just open source people. Oh, I see. So you like the merging of recurrent networks and transformers? [00:48:13]Tianqi: I do love to see this model space continue growing, right? And I feel like in a lot of cases, it's just that the attention mechanism is getting changed in some sense.
So I feel like definitely there are still a lot of things to be explored here. And that is also one reason why we want to keep pushing machine learning compilation, because one of the things we were trying to push on was productivity for machine learning engineering, so that as soon as some of these models come out, we will be able to, you know, bring them onto those environments that are out there. [00:48:43]Swyx: Yeah, it's a really good mission. Okay. Very excited to see that RWKV and state space model stuff. I'm hearing increasing chatter about that stuff. Okay. Lightning round, as always fun. I'll take the first one. Acceleration. What has already happened in AI that you thought would take much longer? [00:48:59]Tianqi: The emergence of this conversational chatbot ability is something that surprised me before it came out. This is one piece that I feel originally I thought would take much longer, but yeah, [00:49:11]Swyx: it happens. And it's funny because the original, like, ELIZA chatbot was something that goes all the way back in time. Right. And then we just suddenly came back again. Yeah. [00:49:21]Tianqi: It's always interesting to think about, but with a kind of different technology [00:49:25]Swyx: in some sense. [00:49:25]Alessio: What about the most interesting unsolved question in AI? [00:49:31]Swyx: That's a hard one, right? [00:49:32]Tianqi: So I can tell you what I'm excited about. So I think that I have always been excited about this idea of continuous learning and lifelong learning in some sense. So how does AI continue to evolve with the knowledge that's out there? It seems that we're getting much closer with all those recent technologies. So being able to develop systems and support for that, and being able to think about how AI continues to evolve, is something that I'm really excited about. [00:50:01]Swyx: So specifically, just to double click on this, are you talking about continuous training?
That's like training. [00:50:06]Tianqi: I feel like, you know, training, adaptation, it's all similar things, right? You want to think about the entire life cycle, right? The life cycle of collecting data, training, fine-tuning, and maybe having your local context that's getting continuously curated and fed into models. So I think all these things are interesting and relevant here. [00:50:29]Swyx: Yeah. I think this is something that people are really asking, you know. Right now we have moved a lot into the sort of pre-training phase and off-the-shelf, you know, model downloads and stuff like that, which seems very counterintuitive compared to the continuous training paradigm that people want. So I guess the last question would be for takeaways. What's basically one message that you want every listener, every person to remember today? [00:50:54]Tianqi: I think it's getting more obvious now, but one of the things that I always want to mention in my talks is that, you know, when you're thinking about AI applications, originally people thought about algorithms a lot more, right? Algorithms and models are still very important. But usually when you build AI applications, it takes, you know, both the algorithm side, the system optimizations, and the data curation, right? So it takes a combination of so many facets to be able to bring together an AI system, and being able to look at it from that holistic perspective is really useful when we start to build modern applications. I think it's going to continue to be more important in the future. [00:51:35]Swyx: Yeah. Thank you for showing the way on this. And honestly, just making things possible that I thought would take a lot longer. So thanks for everything you've done. [00:51:46]Tianqi: Thank you for having me. [00:51:47]Swyx: Yeah. [00:51:47]Alessio: Thanks for coming on TQ. [00:51:49]Swyx: Have a good one. [00:51:49] Get full access to Latent Space at www.latent.space/subscribe

Global Class Podcast
Think You Know What Leaders Need Most to Build Successful Global Organizations? Think Again | Xavier Mufraggi, CEO of SVN International Corporation

Global Class Podcast

Play Episode Listen Later Jun 28, 2023 61:42


In this episode, we are excited to welcome Xavier Mufraggi, Chief Executive Officer of SVN International Corporation and Partner at Eloquem Consulting. Prior to his role at SVN, Xavier was the CEO at YPO, where he achieved a new record of 34,000 members in 150 countries. He was also previously the President and CEO of Club Med Europe, Middle East, and Africa. In this conversation, we talk about why speaking the same language as locals can increase your ability to develop an understanding of the culture that's required to build a successful expansion strategy in that market, the power of the authentic self as an integral part of personal and professional success, why skills are overrated in building teams (and what is the more valuable metric to look for instead), and the power of stories in better communication and reinforcing organizational vision. This episode is sponsored by our partner, ZEDRA. Learn more about how the ZEDRA team can support you in expanding to new markets at https://www.zedra.com Get your copy of our Wall Street Journal Bestselling book, GLOBAL CLASS, a playbook on how to build a successful global business. https://www.amazon.com/Global-Class-Fastest-Growing-Companies-Globally/dp/1637742185 === SHOW NOTES: 1:10 - Video starts 2:42 - Xavier's formative experience that started his interpreneurial career. His father was an executive at a big American company, so Xavier was able to travel around the world at a young age. 
For example, he lived in Africa as an 11-year-old 7:54 - How he learned early on that understanding culture and people is a very important skill 13:54 - The key to turning around Club Med in North America (during his time as CEO, the company achieved historical records every year from 2011 to 2019) 20:15 - Building trust and ensuring organizational alignment in YPO, the world's most influential global leadership community 21:45 - What Xavier, who was then the CEO of Club Med North America, learned from going undercover in the CBS series, “Undercover Boss” 29:00 - How organizations and business leaders could impact millions of lives in a positive way 40:16 - Why skills are overrated in building teams (and what is the more valuable metric to look for instead) 47:37 - The power of stories in effective communication and reinforcing organizational vision 49:19 - The power of the authentic self as an integral part of personal and professional success

GlobeSt Insiders Podcast Series
Why the Slowdown Is the Time to Invest in Future Growth

GlobeSt Insiders Podcast Series

Play Episode Listen Later Jun 21, 2023 21:37


Market downturn pressures hit commercial brokerages hard: from reduced deals and revenue drops to job cuts. SVN International, however, views the market slowdown as an opportunity to invest internally and retool for the upswing. In the latest episode of the Thought Leadership podcast series, SVN International's new leadership—CEO Xavier Mufraggi and President Tim Spillane—talk about the value of real estate advisors and how SVN is reinvesting in its business in spite of the market disruption. In this podcast, you'll hear: the importance of an advisor in difficult times, how the downturn represents an ideal time to prepare for the next upswing, and what Mufraggi and Spillane's combined experience through previous downturns has taught them.

BackTable MSK
Ep. 13 Basivertebral Nerve Ablation with Dr. Olivier Clerk-Lamalice

BackTable MSK

Play Episode Listen Later Jun 15, 2023 59:20


In this episode, Dr. Jacob Fleming interviews Dr. Olivier Clerk-Lamalice about basivertebral nerve ablation for vertebrogenic back pain, including indications, procedure technique, and exciting tech on the horizon in minimally invasive spine interventions. --- CHECK OUT OUR SPONSOR RADPAD® Radiation Protection https://www.radpad.com/ --- EARN CME Reflect on how this Podcast applies to your day-to-day and earn free AMA PRA Category 1 CMEs: https://earnc.me/kN1FPR --- SHOW NOTES Dr. Clerk-Lamalice trained in Canada, first in engineering, and then in medicine and diagnostic radiology at the Université de Sherbrooke. He then completed a neuroradiology fellowship at Harvard and a fellowship in interventional pain at The Spine Fracture Institute in Oklahoma City with Dr. Douglas Beall. He also obtained his credentials as a Fellow of Interventional Pain Practice (FIPP), a widely recognized international designation. He now works at a comprehensive outpatient radiology center in Calgary, where he practices both diagnostic and interventional radiology daily. They offer intrathecal drug administration, spinal cord stimulators, vertebral augmentation, Spine Jack, disc augmentation, nucleolysis, and various nerve blocks and ablations in and out of the spine. Their goal was to create a one-stop shop where patients can come for consultation, imaging, expert advice, and treatment. Next, we discuss vertebrogenic back pain and the basivertebral nerve (BVN). The BVN is a nonmyelinated, intraosseous nerve; most other peripheral nerves are myelinated, meaning they can regenerate. The BVN cannot, so ablation of this nerve is a permanent treatment. It is located within the central portion of the vertebral body, midway between the superior and inferior end plates and one-third ventral to the posterior wall of the vertebral body. 
On a sagittal T2 MRI sequence, there is a triangle at the posterior aspect of the midpoint of the vertebral body called the basivertebral canal, which contains the nerve, artery, and vein. The BVN is responsible for vertebrogenic back pain, a form of anterior column pain characterized by low back pain worsened by flexion and sitting. It is diagnosed via MRI using the Modic classifications; Modic type 1 (edematous) and type 2 (fibrofatty end plate) changes can be seen in this disease. It can be difficult to distinguish vertebrogenic from discogenic pain because the sinuvertebral nerve (SVN), responsible for discogenic pain, crosses paths with the BVN. However, with MRI and an anesthetic discogram, it is possible to determine the etiology and choose the right treatment. Finally, we discuss the steps of the procedure. Dr. Clerk-Lamalice uses an 8-gauge needle via a transpedicular approach, as is common for other spine procedures. He ensures the probe is positioned in the center of the vertebral body, parallel to the endplates. The nerve is ablated for 15 minutes at 85°C. The procedure takes 45 minutes, which includes an epidural steroid injection to bridge pain control during the periprocedural period. Patients usually go home within one hour after the procedure and begin to experience the results within a couple of days. There have been two trials of BVN ablation, which have made this intervention the most minimally invasive, evidence-based treatment for vertebrogenic pain. These studies indicated 25% of patients had a 50% reduction in pain, while 75% of patients had a 75% reduction in pain. Within that 75%, 30% reported being almost entirely pain-free. To date, the study has followed participants to 8 years, and the results show the treatment is durable. 
--- RESOURCES Ep 210: Modern Vertebral Augmentation https://www.backtable.com/shows/vi/podcasts/210/modern-vertebral-augmentation Ep 94: Spine Interventions https://www.backtable.com/shows/vi/podcasts/94/innovation-in-spine-interventions

The Real Estate Crowdfunding Show - DEAL TIME!
Reid Bennett, National Council Chair of Multifamily, SVN

The Real Estate Crowdfunding Show - DEAL TIME!

Play Episode Listen Later Jun 6, 2023 51:47


In the past five years, there has been an unprecedented wave of new commercial real estate syndicators entering the market, with a similar wave of accredited investors new to the industry following in their wake. But as interest rates rise and lending standards tighten, the commercial real estate industry is facing significant challenges, and these newcomers to the industry, both sponsors and (passive, accredited) investors, are now, for the first time, dealing with the realities of what happens during a real estate downturn. My guest on The Real Estate Reality Show today, Reid Bennett, National Council Chair of Multifamily, SVN, a seasoned multifamily expert, discusses the impact of these changes on both developers and their investors. With prices dropping and many sponsors experiencing capital calls, Bennett shares his insights on the potential for distressed assets in the coming months and the importance of 'LTC' (location, timing, and capital) in today's market. Don't miss this informative episode as we navigate the current real estate landscape and uncover potential opportunities for savvy investors. *** In this brand-new podcast series at GowerCrowd, The Real Estate Reality Show, we take a realistic view of commercial real estate investing, providing pragmatic insights for passive investors who are looking for sponsors they can trust and distressed opportunities they can invest in. You'll find no quick fixes or easy money ideas here, no sales pitches, big egos or hype. You'll learn how to build your wealth while protecting your capital investing as a limited partner in commercial real estate investments, even and especially during an economic downturn. Subscribe to our YouTube channel here: https://www.youtube.com/gowercrowd?sub_confirmation=1

Real Estate Runway
123: Commercial Real Estate Strategies for Investing in Times of Uncertainty

Real Estate Runway

Play Episode Listen Later May 18, 2023 42:24


Jana Truman is the Managing Director and Investment Advisor with SVN, Accel Commercial Real Estate. Jana began commercial real estate appraising in 2011, appraising all types of commercial real estate across all classes in the state of Tennessee. Through the Accel Group of businesses, Jana partners with investors, business owners, and entrepreneurs to grow their personal and professional wealth through advice on acquisition, disposition, and long-term strategies of commercial real estate and businesses. Brian Truman joined Accel Group in 2016, specializing in multi-family, retail, and business brokerage sales. He understands the mindset of business and building owners. His previous national sales experience and his obsession with service have increased the team's overall deal size and reach. He has negotiated in both the public and private sectors, with experience in selling to C-level decision makers and business owners doing deals in the hundreds of millions. Jana and Brian also own a business brokerage called Accel Exit Advisors and are getting ready to start a property management division for SVN. Don't miss this conversation packed full of advice from two experienced real estate agents!    Learn more about ALTERNATIVE BUSINESS and INVESTMENT STRATEGIES through QUATTRO CAPITAL!   LinkedIn: /TeamQuattroCapital Instagram: @TeamQuattroCapital Facebook: @TeamQuattroCapital Website: www.TheQuattroWay.com    [00:00 - 07:43] Introducing Brian and Jana Truman: From Community Activism to Business Brokerage and Real Estate Investing • Brian and Jana Truman are part of the SVN team, owning the Accel Group in Nashville. • They have a background in community activism, zoning and codes, commercial art, and business brokerage. • They discuss what is happening in the real estate market and whether there will be a recession or correction of values.   
[07:43 - 15:57] The Story of Jana and Brian Truman's Successful Journey • Started focusing on multi-family in 2016 • Got a call from an unknown number with an LA area code to find 3 properties • Found the properties, and the caller was good friends with Grant Cardone • Was invited to a national conference and got mobbed by people asking questions • Now have a full-service brokerage with specialists in all segments of real estate   [15:57 - 23:47] Banks Pulling Back on Lending • The market is seeing banks pull back on their lending, regardless of segment. • Local lenders are only interested in investors who will occupy the building.   [23:47 - 39:38] Banks Tighten Lending Requirements as Cash Investors Take Advantage of Opportunities in Commercial Real Estate • Cash investors are taking advantage of the current market and finding opportunities. • Loan assumptions are being used to take advantage of lower interest rates. • Vacancy rates for retail and industrial are low, with rents still climbing. • Banks are scrutinizing portfolio management and increasing debt service coverage ratios. • Banks are open for lending but with higher LTV requirements. • Banks are not keen on having assets or balance sheets full of failed loans. • Credit unions and local banks have more money to lend out than before.   [39:38 - 42:21] Closing Segment • Their website is accelcre.com • Contact Jana at jana.truman@svn.com and Brian at brian.truman@svn.com • They provide free education twice a month on their website     Quotes:  "I knew I wanted to create wealth, but I knew that I would not do that working in a dollar-per-hour job." - Jana Truman "We don't list properties, we sell them. It's a big difference between those two." 
- Brian Truman     Connect with Jana through: LinkedIn, Facebook or visit: https://accelcre.com   Connect with Brian through: LinkedIn or visit: https://accelcre.com     LEAVE A 5-STAR REVIEW + help someone who wants to explode their business growth by sharing this episode. Find out how team Quattro can help you by visiting www.TheQuattroWay.com. Real Estate Runway Podcast is all about alternative business and investment strategies to help you amplify life, and maximize wealth! Click here to find out more about the host, Chad Sutton.     Quattro Capital invites you to join Agora: Don't miss out on the opportunity to experience the forefront of investment management technology with Quattro Capital. Join Agora and schedule a demo to see our all-in-one investment management tool in action. As a bonus, enjoy Quattro's Promotion 10% discount on Yearly Subscription and Onboarding Priority! Our platform includes a powerful CRM, market-leading investor portal, and a fundraising tool that makes it easier to raise capital for new offerings. With our collaborative space, you can ensure transparency with investors and make reporting more accessible than ever before. Click here to schedule your demo and claim your discount today!   Entity Keeper:  Join the EntityKeeper community today to simplify the way you manage your entities and org charts while reducing manual errors. Easily organize corporate data, visualize ownership structures, store unlimited documents, and manage important filing dates with one secure solution. Click here to start simplifying your entity management with EntityKeeper now!

BackTable Podcast
Ep. 316 Basivertebral Nerve Ablation with Dr. Olivier Clerk-Lamalice

BackTable Podcast

Play Episode Listen Later Apr 28, 2023 59:09


In this episode, Dr. Jacob Fleming interviews Dr. Olivier Clerk-Lamalice about basivertebral nerve ablation for vertebrogenic back pain, including indications, procedure technique, and exciting tech on the horizon in minimally invasive spine interventions. --- CHECK OUT OUR SPONSOR RADPAD® Radiation Protection https://www.radpad.com/ --- SHOW NOTES Dr. Clerk-Lamalice trained in Canada, first in engineering, and then in medicine and diagnostic radiology at the Université de Sherbrooke. He then completed a neuroradiology fellowship at Harvard and a fellowship in interventional pain at The Spine Fracture Institute in Oklahoma City with Dr. Douglas Beall. He also obtained his credentials as a Fellow of Interventional Pain Practice (FIPP), a widely recognized international designation. He now works at a comprehensive outpatient radiology center in Calgary, where he practices both diagnostic and interventional radiology daily. They offer intrathecal drug administration, spinal cord stimulators, vertebral augmentation, Spine Jack, disc augmentation, nucleolysis, and various nerve blocks and ablations in and out of the spine. Their goal was to create a one-stop shop where patients can come for consultation, imaging, expert advice, and treatment. Next, we discuss vertebrogenic back pain and the basivertebral nerve (BVN). The BVN is a nonmyelinated, intraosseous nerve; most other peripheral nerves are myelinated, meaning they can regenerate. The BVN cannot, so ablation of this nerve is a permanent treatment. It is located within the central portion of the vertebral body, midway between the superior and inferior end plates and one-third ventral to the posterior wall of the vertebral body. On a sagittal T2 MRI sequence, there is a triangle at the posterior aspect of the midpoint of the vertebral body called the basivertebral canal, which contains the nerve, artery, and vein. 
The BVN is responsible for vertebrogenic back pain, a form of anterior column pain characterized by low back pain worsened by flexion and sitting. It is diagnosed via MRI using the Modic classifications; Modic type 1 (edematous) and type 2 (fibrofatty end plate) changes can be seen in this disease. It can be difficult to distinguish vertebrogenic from discogenic pain because the sinuvertebral nerve (SVN), responsible for discogenic pain, crosses paths with the BVN. However, with MRI and an anesthetic discogram, it is possible to determine the etiology and choose the right treatment. Finally, we discuss the steps of the procedure. Dr. Clerk-Lamalice uses an 8-gauge needle via a transpedicular approach, as is common for other spine procedures. He ensures the probe is positioned in the center of the vertebral body, parallel to the endplates. The nerve is ablated for 15 minutes at 85°C. The procedure takes 45 minutes, which includes an epidural steroid injection to bridge pain control during the periprocedural period. Patients usually go home within one hour after the procedure and begin to experience the results within a couple of days. There have been two trials of BVN ablation, which have made this intervention the most minimally invasive, evidence-based treatment for vertebrogenic pain. These studies indicated 25% of patients had a 50% reduction in pain, while 75% of patients had a 75% reduction in pain. Within that 75%, 30% reported being almost entirely pain-free. To date, the study has followed participants to 8 years, and the results show the treatment is durable. 
--- RESOURCES Ep 210: Modern Vertebral Augmentation https://www.backtable.com/shows/vi/podcasts/210/modern-vertebral-augmentation Ep 94: Spine Interventions https://www.backtable.com/shows/vi/podcasts/94/innovation-in-spine-interventions Relievant device for BVN ablation: https://www.relievant.com/intracept/procedure-details/ Find this episode on backtable.com to view the full list of resources.

In The Frame: Theatre Interviews from West End Frame
S8 Ep19: Alexia McIntosh, star of Big Aunty & original Anna Of Cleves in Six

In The Frame: Theatre Interviews from West End Frame

Play Episode Listen Later Apr 25, 2023 37:21


OG Queen and Olivier nominee Alexia McIntosh is In The Frame! Alexia is currently starring in Big Aunty at the Belgrade Theatre in Coventry. The piece is directed and devised by Corey Campbell. Described as a darkly comic family drama, Big Aunty offers a welcome opportunity to gather and reflect on challenging times and how we can find a path to resolution. Alexia originated the role of Anna of Cleves in Six The Musical, taking the show from its first ever UK tour through to the Edinburgh Fringe Festival and then to the West End at the Arts, Lyric and Vaudeville Theatres. After three years in Six, Alexia wrapped up her reign in November 2021. She reprised her performance last July, reuniting with the original cast for three special performances of Six at Hampton Court Palace before shooting Six The Movie in the West End. Alexia was nominated for Best Actress in a Supporting Role in a Musical at the 2019 Olivier Awards along with the other OG queens; together they have formed the girl group SVN. Big Aunty runs at the Belgrade Theatre until 6th May. Visit www.belgrade.co.uk for info and tickets. Hosted by Andrew Tomlins @AndrewTomlins32 Thanks for listening! Email: andrew@westendframe.co.uk Visit westendframe.co.uk for more info about our podcasts.

The Café Bitcoin Podcast
NY Times Bitcoin FUD with Dennis Porter and Mandy Gunasekara and "Bitcoin Business Workshops" with Wesley and Rare Passenger - April 11th, 2023

The Café Bitcoin Podcast

Play Episode Listen Later Apr 11, 2023 124:18


We're joined by Dennis Porter and Mandy Gunasekara to debunk the mining FUD coming from The NY Times and to discuss the work they're doing with the "Satoshi Action Fund" in progressing Bitcoin mining. We're also joined by SVN and Wesley from the "Tampa Bay Bitcoiners" meetup to talk about their upcoming "Bitcoin for Business Workshops" and living on a Bitcoin standard. Bitcoin Bay Workshop: https://www.eventbrite.com/e/bitcoin-bay-business-workshop-registration-588627449547?lang=en-us&locale=en_US&status=30&view=listing Swan Private Team Members: Alex Stanczyk Twitter: https://twitter.com/alexstanczyk Café Bitcoin Crew: Ant: https://twitter.com/2140data Tomer: https://twitter.com/TomerStrolight Wicked: https://twitter.com/w_s_bitcoin Peter: https://twitter.com/PeterAnsel9 Produced by: https://twitter.com/Producer_Jacob Swan Bitcoin is the best way to accumulate Bitcoin with automatic recurring buys and instant buys from $10 to $10 million. Get started in just 5 minutes. Your first $10 purchase is on us: https://swanbitcoin.com/yt Download the all-new Swan app! iOS: https://apps.apple.com/us/app/swan-bitcoin/id1576287352 Android: https://play.google.com/store/apps/details?id=com.swanbitcoin.android&pli=1 Join us for Pacific Bitcoin Festival 2023! Purchase your tickets now before prices go up: https://PacificBitcoin2023.com Are you a high net worth individual, or do you represent a corporation that might be interested in learning more about Bitcoin? Swan Private guides corporations and high net worth individuals toward building generational wealth with Bitcoin. Find out more at https://swanbitcoin.com/private Check out the best place for Bitcoin education, Swan Bitcoin's “Bitcoin Canon”. Compiling all of the greatest articles, news sources, videos and more from your favorite bitcoiners! 
https://www.swanbitcoin.com/canon/ Get paid to recruit new Bitcoiners: https://swanbitcoin.com/enlist Hello and welcome to The Café Bitcoin Podcast, brought to you by Swan Bitcoin, the best way to buy and learn about Bitcoin. We're excited to announce we are bringing the Café Bitcoin conversation from Twitter Spaces to you on this show, The Café Bitcoin Podcast, Monday - Friday every week. Join us as we speak to guests like Max Keiser, Lyn Alden, Tomer Strolight, Cory Klippsten and many others from the bitcoin space. Also, be sure to hit that subscribe button to make sure you get the notifications when we launch an episode. Join us Monday - Friday, 7am PST / 10am EST, every morning and become a part of the conversation! Thank you again and we look forward to giving you the best bitcoin content daily here on The Café Bitcoin Podcast. Swan Bitcoin is the best way to accumulate Bitcoin with automatic recurring buys and instant buys from $10 to $10 million. Get started in just 5 minutes. Your first $10 purchase is on us: https://swanbitcoin.com/yt Connect with Swan on social media: Twitter: https://twitter.com/SwanBitcoin

Virtual Real Estate Investing
Mark Allen's Epic Journey: From Real Estate Rookie to DFW Multi-Family Award Winner

Virtual Real Estate Investing

Play Episode Listen Later Mar 28, 2023 48:36


In this episode, we sit down with Mark Allen, an Executive Managing Director with over $1B in real estate acquisitions, renovations, management, and dispositions. Mark's focus on multifamily owner representation stems from his military operations, sales, marketing, and real estate investment experience. In his first year as a broker, Mark brokered $176M in transaction volume, earning him the SVN "Rookie of the Year" award. Since then, he has been recognized as the SVN "Broker of the Year," DFW Commercial Real Estate "Rookie of the Year," and a Top Commercial Real Estate Broker in DFW by DCEO Magazine. Connect Media also recognized him as a 2019 Next Generation award winner due to his early success in the real estate industry. Mark discusses his dynamic marketing approach and social media presence, which have resulted in a list-to-close ratio averaging 101.3% over the last 24 months. Additionally, he shares insights into how his military tactics translate into commercial real estate success. Join us for an inspiring conversation about Mark's journey from Rookie of the Year to award-winning industry leader, and his rise to excellence in commercial real estate.

Iron-On Wrestling with Gregory Iron
EP. 195- Atticus Cogar & Mike Gevorgian: Circle 6 Year One & What the Future Holds

Iron-On Wrestling with Gregory Iron

Play Episode Listen Later Feb 22, 2023 130:18


Use promo code PARDON at Manscaped.com to save 20% and get FREE SHIPPING!     Get an extra 1-hour-and-15-minute episode on Patreon about Elimination Chamber 2023, how to book Jey Uso going forward, and more of your social media questions answered!     Sign up at Patreon.com/IronOnWrestling and get 100s of episodes, EARLY & EXTENDED, plus bonus audio and video, interactive Zoom chats and more starting at just $3!   Gregory Iron & Aaron Bauer invite Atticus Cogar & Circle 6 owner Mike Gevorgian back on the show to discuss the first year of Circle 6 and what's in store for 2023.   Mike & Atti talk about what the first event "Skewers" meant to them, their favorite moments from 2022, how and why Circle 6 was the first promotion to bring back Zachary Wentz, Jay & Silent Bob films, acquiring the rights to the TPI & KOTDM tournaments, SVN, and what the future holds for the rapidly growing Circle 6. Greg & Aar answer your questions from Discord and social media, and Hulk Hogan lies about wanting to run for president.   ---   Gregory Iron wrestles with cerebral palsy, a neurological disorder that affects the mobility of his right arm, hand and fingers. Trained by WWE NXT Superstar Johnny Gargano in 2006, Iron has conquered his disability and gone on to work with some of the top names in wrestling, including "Stone Cold" Steve Austin, CM Punk, The Dudley Boyz, Tommaso Ciampa and many others.   Co-host Aaron Bauer has worked in the professional wrestling industry for over two decades. A jack-of-all-trades, Aaron has worked in the industry as a local promoter for ECW events, a manager, and has provided color commentary for matches featuring some of the biggest stars in WWE, WCW, ECW, AEW, Impact & ROH.   
Follow "Iron-On Wrestling with Gregory Iron" on all social media platforms: facebook.com/irononwrestling twitter.com/irononwrestling Instagram.com/irononwrestling Back us on Patreon, where you can get complete bonus episodes and additional audio and video content for just $3 a month: Patreon.com/IronOnWrestling Follow Gregory Iron: Facebook.com/TheHandicappedHero Twitter.com/GregoryIron Instagram.com/gregory_iron Buy CLASSIC AND EXCLUSIVE Gregory Iron tees here: prowrestlingtees.com/GregoryIron To book Gregory Iron for pro wrestling events, speaking engagements, wrestling seminars, school workshops and more contact Greg on his website: Gregory-Iron.com Please check out our sponsors: Kayfabe News: Unreal news about an unreal sport! KayfabeNews.com Of The Dead Designs: Bringing Artwork To Life! OfTheDead.weebly.com SBS Printing: T-shirt one color prints starting at just $5! Done in 3-5 business days! Contact Jesse Massey: antisepticmaxrock@gmail.com Mystic Gear: If you get your pro wrestling gear from anyone but Mystic, you're making a mistake! Follow @MysticGear on Instagram or contact Tania Martin on Facebook! Special thanks to "Ajax" Alex Cantrell for creating the "Iron-On Wrestling" theme song. Check out Alex and his comedy film team "Aldous Mustache" on social media: YouTube.com/userAldousMustache

In The Frame: Theatre Interviews from West End Frame
S8 Ep2: Maiya Quansah-Breed, Philoclea in Head Over Heels & original Catherine Parr in Six

In The Frame: Theatre Interviews from West End Frame

Play Episode Listen Later Feb 3, 2023 34:42


Olivier nominee Maiya Quansah-Breed is starring as Philoclea in the UK premiere of Head Over Heels at the Hope Mill Theatre in Manchester. Described as a "hilarious, exuberant celebration of love" and featuring songs by The Go-Go's, Head Over Heels follows the escapades of a royal family on an outrageous journey to save their beloved kingdom from extinction—only to discover the key to their realm's survival lies within each of their own hearts. Maiya originated the role of Catherine Parr in Six The Musical. She starred in the show's first UK tour, which included a run at the Edinburgh Fringe Festival, before opening the West End production at the Arts Theatre. Alongside the original line-up of queens, Maiya was nominated for the 2019 Olivier Award for Best Actress in a Supporting Role in a Musical. Last year Maiya reunited with the original cast of Six for three special performances at Hampton Court Palace, before filming a live capture of the musical at the Vaudeville Theatre. She and the original queens have formed the girl group SVN. Maiya's other theatre credits include Mimi in Rent (Hope Mill Theatre), I Could Use A Drink In Concert (Garrick Theatre), Laura in The Distance You Have Come (Apollo Theatre) and Martha in The Secret Garden In Concert (London Palladium). Head Over Heels runs at the Hope Mill Theatre until 4th March 2023. Visit www.hopemilltheatre.co.uk for info and tickets. Hosted by Andrew Tomlins. @AndrewTomlins32 Thanks for listening! Email: andrew@westendframe.co.uk Visit westendframe.co.uk for more info about our podcasts.

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for November 8th, 2022 - Episode 171

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Nov 8, 2022 84:11


2022-11-08 Weekly News - Episode 171
Watch the video version on YouTube at https://youtu.be/teJ4cpNvYOY

Hosts:
Gavin Pickin - Senior Developer at Ortus Solutions
Brad Wood - Senior Developer at Ortus Solutions

Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways to say thanks back to Ortus Solutions:
- Like and subscribe to our videos on YouTube.
- Help ORTUS reach for the Stars - Star and Fork our Repos
- Star all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
- Subscribe to our Podcast on your Podcast Apps and leave us a review
- Sign up for a free or paid account on CFCasts, which is releasing new content every week
- BOXLife store: https://www.ortussolutions.com/about-us/shop
- Buy Ortus's Book - 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)

Patreon Support
Goal 1 - We have 42 patrons providing 100% of the funding for our Modernize or Die Podcasts via our Patreon site: https://www.patreon.com/ortussolutions.
Goal 2 - We are 38% of the way to fully fund the hosting of ForgeBox.io

Patreon Sponsored Job Announcement - Tomorrows Guides
Tomorrows Guides is a fast-paced leader in the UK care sector, catering for care seekers across three areas: Care Homes, Nurseries and Home Care. We are often called the Trip Advisor of the care sector. Our Product team consists of over 20 individuals across the UK working remotely to expand and improve our offering, with regular expansion in teams year on year. We work with both ColdFusion 2021 and Node.js/React in the Azure cloud, while also using both MSSQL and MongoDB databases. Currently we are looking for Senior ColdFusion developers and Automation Testers, with training paths to Node.js available as well. 
We offer a wide variety of perks, from our company-wide £4k bonus scheme and quarterly nights out with the whole company and the Product team, to a 6% company pension contribution.

Current Roles in detail
All roles: https://www.tomorrows.co.uk/jobs.cfm

Senior CF Developer – UK Only | Remote | Permanent | Circa £60k - https://app.occupop.com/shared/job/senior-coldfusion-developer-5925b/
- Minimum three years' experience with ColdFusion
- Database design, normalisation and ability to write/understand complex queries using MSSQL Server 2019
- Familiarity with Git
- Flexible skillset covering a wide range of development

Automation Test Engineer – UK Only | Remote | Permanent | Circa £40k - https://app.occupop.com/shared/job/automation-test-engineer-a6545/
- Minimum three years' experience with automated testing
- Experience with automated testing tools such as Selenium
- Experience with API test tools such as Postman/Fiddler etc

Benefits of both roles:
- £4,000 per annum discretionary company bonus scheme
- 25 days annual leave + bank holidays
- 6% employer pension contribution
- Access to free perks and discounts through Perkbox
- Long Service Awards
- Cycle to Work Scheme
- Company and Team nights out

News and Announcements

OpenSSL Vulnerabilities
Pete Freitag has had several people asking him about the OpenSSL vulnerabilities that were patched this week: CVE-2022-3602 and CVE-2022-3786, aka Spooky SSL.
https://www.petefreitag.com/item/1000.cfm

ColdBox Master Class - Completely Free until the End of the Year!
Want to learn about modern web apps in ColdFusion (CFML)? We have our ColdBox Master Class for FREE until the end of the year! A gift to the community, so we can all build amazing apps together! Watch all the videos! Binge coding, anyone? Enjoy!
https://www.cfcasts.com/series/cb-master-class?utm_source=podcast&utm_medium=PODCAST&utm_campaign=LM-PODCAST

WireBox Delegates
WireBox supports the concept of object delegation in a simple, expressive DSL.
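To give a feel for that DSL, here is a minimal hedged sketch. The component names are hypothetical, and the exact annotations are those described in the WireBox Delegators Notion doc, so treat this as an illustration rather than canonical syntax:

```cfml
// Swimmer.cfc - a plain CFC whose behavior we want to reuse (hypothetical name)
component {
    function swim(){
        return "swimming!";
    }
}

// Athlete.cfc - composes Swimmer via a WireBox delegate instead of extending it
component delegates="Swimmer" {
    // A call to athlete.swim() is proxied to the injected Swimmer delegate,
    // reusing its behavior without inheritance or runtime mixins.
}
```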
In object-oriented programming, delegation refers to evaluating a member (property or method) of one object (the receiver) in the context of another object (the sender). It is basically a way to proxy calls from one object to another, avoiding the overuse of inheritance, runtime mixins, or traits. WireBox provides a set of rules for method lookup and method dispatching that let you add delegation easily to your CFML applications.
https://ortussolutions.notion.site/WireBox-Delegators-8608752a03d345ad80f8c1a1b441a428

CommandBox vNext supports providing SSL certs in PFX format
CommandBox vNext finally supports providing SSL certs in PFX format, which is a single file containing the public and private key, as opposed to needing those in two separate files.
https://ortussolutions.atlassian.net/browse/COMMANDBOX-1499

New Releases and Updates

Lucee released 5.3.9.166 Stable
This is a minor bug fix release, which addresses a few bugs listed below, mainly relating to concurrency or errors under heavy load. Anyone running 5.3.9.160 is encouraged to update to this release.
https://dev.lucee.org/t/lucee-5-3-9-166-stable-release/11319

Restoring the CF Admin logviewer removed in Oct 2022 CF updates, at your own risk
As of the Oct 2022 CF updates (CF2021 update 5 and CF2018 update 15), Adobe has chosen to remove the CF Admin feature to view, search, download, and delete CF logs, due to asserted (but as-yet undocumented) security concerns. What if you want it back? In this post, I explain what changed, why, and how to get the functionality back - albeit at your own risk. For more, read on.
https://www.carehart.org/blog/2022/11/3/restoring_admin_logviewer

ICYMI - CBWIRE v2.1 Released
CBWIRE, our ColdBox module that makes building reactive, modern CFML apps delightfully easy, just dropped its 2.1 release.
This release contains mostly bug fixes, plus the ability to create your UI templates directly within your CBWIRE component using the onRender() method. We've added an example of using onRender() to our ever-growing CBWIRE-Examples repo that you can run locally on your machine.
https://github.com/grantcopley/cbwire-examples
https://www.ortussolutions.com/blog/cbwire-2-1-released

Webinars / Meetups and Workshops

Ortus Event Calendar for Google
https://calendar.google.com/calendar/u/0?cid=Y181NjJhMWVmNjFjNGIxZTJlNmQ4OGVkNzg0NTcyOGQ1Njg5N2RkNGJiNjhjMTQwZjc3Mzc2ODk1MmIyOTQyMWVkQGdyb3VwLmNhbGVuZGFyLmdvb2dsZS5jb20
Embeddable link: https://calendar.google.com/calendar/embed?src=c_562a1ef61c4b1e2e6d88ed7845728d56897dd4bb68c140f773768952b29421ed%40group.calendar.google.com&ctz=America%2FLos_Angeles

Ortus Software Craftsmanship Book Club - Patreon Only
Friday, November 11th at 2pm CDT - 2nd Friday of the month
Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin (Uncle Bob)
We will meet monthly on Zoom, and we'll use the Ortus Community Forum for Patreon to discuss the book.
https://community.ortussolutions.com/t/ortus-software-craftsmanship-book-club-clean-code/9432
We will also be rewriting the code from Java to CFML as we proceed through the book. The final result will be here: https://github.com/gpickin/clean-code-book-cfml-examples
You can get a copy of the book at one of the links below, or at your favorite bookstore:
https://amzn.to/3TIrmKm or https://www.audible.com/pd/Clean-Code-Audiobook/B08X7KL3TF?action_code=ASSGB149080119000H&share_location=pdp&shareTest=TestShare

Ortus Webinar - Daniel Garcia - API Testing with Postman
Friday, November 18th at 11am CDT - 3rd Friday of the month
https://us06web.zoom.us/meeting/register/tZYqc-uuqzMqGtAO7tQ6qCsN8bR0LyBf8DNP

CF Hawaii Meetup - Managing All Your ColdFusion Servers with CommandBox, with Brad Wood
CommandBox is a standalone, native tool for Windows, Mac, and Linux that will provide you with a Command Line Interface
(CLI) for developer productivity, tool interaction, package management, embedded CFML server, application scaffolding, and sweet ASCII art. It seamlessly integrates with any of the Ortus Solutions *Box products, but it is also open for extensibility for any ColdFusion (CFML) project, as it is written in ColdFusion (CFML) using our concept of CommandBox Commands. CommandBox also functions as a package management tool which integrates seamlessly with ForgeBox. During this meeting Brad will give you an introduction to CommandBox to manage your ColdFusion server, as well as CFConfig to manage the CF Admin.
https://www.meetup.com/hawaii-coldfusion-meetup-group/events/289489609/

CF Summit Online
Adobe announced today that the "ColdFusion Summit Online" will begin soon, where presenters will offer their sessions again from the CF Summit last month, to be live-streamed and recorded since that couldn't be done in Vegas.
https://coldfusion.adobe.com/2022/11/coldfusion-summit-online/
All the webinars, all the speakers from Adobe ColdFusion Summit 2022 - brought right to your screen. All sessions will soon be streamed online, for your convenience. Stay tuned for more!

Charlie Arehart - "How the Adobe CF Docker Images Have Evolved"
Wednesday, November 16 at 12pm - 1pm EST
Since Adobe's original 2018 release of Docker images for CF (initially for CF2018 and CF2016), the configurability features built into them have improved in significant ways, especially with CF2021, which is much smaller, faster, and whose admin settings can be configured via JSON.
In this talk, veteran CF consultant Charlie Arehart will review and demonstrate those feature changes for the CF images, as well as the images for the CF Performance Monitoring Toolkit (PMT) and the CF Enterprise API Manager - all available on Docker Hub since 2021.
Register: https://how-the-cf-docker-images-evolved.meetus.adobeevents.com/

Brad Wood - Message Queues with RabbitMQ
1pm to 2pm ET on Nov 30
Get to know RabbitMQ - a tool used for worker queues, topic distribution, synchronous RPC invocations, and even WebSocket pushes to your web app - in this session. Using the RabbitSDK for ColdFusion, you can get started today with queues and bring your apps to the next level. Stop thinking about API calls and start thinking about sending messages, thanks to this popular and robust queue.

Ortus Office Hours
A new initiative where some Ortusians will be on a Zoom call and answer whatever questions people have. We are going to start less structured and see how things develop.
December 2nd at 11am CDT - 1st Friday of the month
Daniel Garcia will host a variety of Ortus people. An Office Hours questions & requests form is available.
Register in advance for this meeting:
https://us02web.zoom.us/meeting/register/tZYvcO-hrz8iHNS0C3o0aw2x3JMtmBrKwzfA

ColdFusion Security Training - Writing Secure CFML with Pete Freitag from Foundeo
When: Tuesday, December 13, 2022 @ 11am-2pm and Wednesday, December 14 @ 11am-2pm (Eastern Standard Time, UTC-5) - 6 hours in total
A hands-on CFML / ColdFusion security training class for developers.
Learn how to identify and fix security vulnerabilities in your ColdFusion / CFML applications. The class will be recorded, so if you cannot attend it fully online you will have access to a recording.
Where: Online / Web Conference
Who: Taught by Pete Freitag
Cost: $999/student; $899/student (Early Bird Discount)
Register: https://foundeo.com/consulting/coldfusion/security-training/

Adobe Workshops & Webinars
Join the Adobe ColdFusion Workshop to learn how you and your agency can leverage ColdFusion to create amazing web content. This one-day training will cover all facets of Adobe ColdFusion that developers need to build applications that can run across multiple cloud providers or on-premise.
https://coldfusion.adobe.com/2022/10/upcoming-adobe-webinar-on-preview-of-cf2023-date-and-title-change/

WEBINAR - Wednesday, November 23, 2022 - New Date, New Name
10:00 AM PST
The Road to Fortuna - Mark Takata
https://winter-special-preview-of-cf2023.meetus.adobeevents.com/

WEBINAR - Thursday, December 22, 2022
10:00 AM PST
Building Native Mobile Applications with Adobe ColdFusion & Monaco.io - Mark Takata
https://building-native-mobile-apps-with-cf-monaco-io.meetus.adobeevents.com/
FREE :)
Full list - https://meetus.adobeevents.com/coldfusion/

CFCasts Content Updates
https://www.cfcasts.com
Just released:
- Ortus Webinar - Gavin Pickin on Step Up Your Testing: https://cfcasts.com/series/ortus-webinars-2022/videos/gavin-pickin-on-step-up-your-testing
- Every video from ITB - for ITB ticket holders only - will be released for subscribers in December 2022
- ForgeBox Module of the Week series - 1 new video: https://cfcasts.com/series/2022-forgebox-modules-of-the-week
- 2022 VS Code Hint, Tip and Trick of the Week series - 1 new video: https://cfcasts.com/series/2022-vs-code-hint-tip-and-trick-of-the-week
Coming soon:
- More ForgeBox and VS Code podcast snippet videos
- Box-ifying a 3rd Party Library from Gavin
- ColdBox Elixir from Eric
- Getting Started with ContentBox from Daniel
- ITB videos will be released Dec for
those who are not ITB ticket holders.

Conferences and Training

Deploy, from DigitalOcean
November 15-16, 2022
The virtual conference for global builders. Subtract complexity, add developer happiness. Join us on the mission to simplify the developer experience.
https://deploy.digitalocean.com/

Into the Box Latam 2022
Dec 7th, 2022 - 8am - 5pm
2 tracks - 1 set of sessions, 1 set of deep-dive workshop sessions
Pricing: $9-$29 USD
Location: Hyatt Centric Las Cascadas Shopping Center, Merliot, La Libertad 99999, El Salvador
https://latam.intothebox.org/

VueJS Amsterdam 2023
9-10 February 2023, Theater Amsterdam
The world's most special and largest Vue conference
Call for papers and blind tickets available now!
Call for Papers: https://forms.gle/GopxfjYHfpE8fKa57
Blind Tickets: https://eventix.shop/abzrx3b5
https://vuejs.amsterdam/

DevNexus
April 4-6th in Atlanta
Early bird conference pass - April 5-6 (available until November 20) (approx. 40% off)
If you are planning to speak, please submit often and early. The call for papers is open until November 15.
Workshops will cover Java, Java security, software design, agile, DevOps, Kubernetes, microservices, Spring, etc.
Sign up now, and you will be able to choose a workshop later on.
https://devnexus.com/

VueJS Live
May 5 & 8, 2023
Online + London, UK
Code / Create / Communicate
35 speakers, 10 workshops, 10,000+ joining online globally, 300 luckies meeting in London
Get Early Bird Tickets: https://ti.to/gitnation/vuejs-london-2022
Watch 2021 Recordings: https://portal.gitnation.org/events/vuejs-london-2021
https://vuejslive.com/

Into the Box 2023 - 10th Edition
May 17, 18, and 19th, 2023. Middle of May - start planning. Final dates will be released as soon as the hotel confirms availability.
Call for Speakers - this week

CFCamp
No CFCamp in 2022; we're trying again for summer 2023. TLDR: it's just too hard and there's too much uncertainty right now.

More conferences
Need more conferences? This site has a huge list of conferences for almost any language/community.
https://confs.tech/

Blogs, Tweets, and Videos of the Week

11/8/22 - Tweet - Luis Majano - Ortus Solutions - WireBox 7 Delegates
The power of the new WireBox 7 Delegates! Traits for #coldfusion #cfml are here! Composable reusability to modernize your CFCs! https://ortussolutions.notion.site/WireBox-Delegators-8608752a03d345ad80f8c1a1b441a428 #modernizeOrDie #wirebox #coldbox
https://twitter.com/lmajano/status/1589934986991378433

11/8/22 - Tweet - Luis Majano - cbSecurity v3 is coming, including a new Security Firewall Visualizer
The new ColdBox Security v3 is almost done! Brand new Security Firewall Visualizer, basic auth, included user storage, rule simulator, ColdBox 7 delegates, JWT, new firewall blocks, reporting, fluent configuration and so much more!
#secureAllThings #coldbox #modernizeOrDie
https://twitter.com/lmajano/status/1589931501411598338
https://twitter.com/lmajano

11/7/22 - Ortus Solutions - The holiday season is almost here and we want to give you an early present!
For the first time ever, enjoy our "ColdBox Master Class" for FREE until Dec 31st, and start building secure and modern CFML web applications with up-to-date tools and methodologies that will help you increase your development productivity! Whether you are a ColdBox master or a beginner, this course will give you the tools and guidance you need to learn everything about this open-source modular web application framework from start to finish. Let's get started: modernize your web development projects today and optimize your services by getting the best out of our ColdBox MVC framework.
https://www.ortussolutions.com/blog/become-a-coldbox-master-for-free

11/7/22 - Blog - Ben Nadel - Proxying Gravatar Images For Better Avatar Caching In ColdFusion
When readers leave a comment on this blog, I render an avatar next to their authorship information. This avatar is served from Gravatar, which is (probably) the most popular avatar system on the web (brought to us by the same people who built WordPress). Unfortunately, serving avatars from Gravatar was hurting my Chrome Lighthouse scores due to Gravatar's very short caching controls (5 minutes). To help improve my Lighthouse score, I'm starting to proxy the Gravatar images on my ColdFusion server, applying a custom Cache-Control HTTP header.
https://www.bennadel.com/blog/4351-proxying-gravatar-images-for-better-avatar-caching-in-coldfusion.htm

11/5/22 - LinkedIn Post - Luis Majano - J on the Beach Meetup in Malaga, Spain
We had a great time!!! Our European grass-roots events have started!!
#cfml #coldfusion #coldbox #ortus
Yesterday we had a great meetup led by Jorge Reyes Bendeck from Ortus Solutions, Corp, learning about all the different licenses available for #OpenSource software.
https://www.linkedin.com/feed/update/urn:li:share:6994607593453162496/

11/5/22 - Blog - Charlie Arehart - ColdFusion Portal - Enabling CF to switch to using Java's regex engine
If you ever encounter problems trying to use regular expressions in CFML (which are actually Perl regexes), did you know that you can tell CF to use Java regexes instead? This has been possible since 2019, but you could have missed when the change was introduced via CF2018 update 5 in Sep 2019 - and of course the option is also built into CF2021. This is one of those settings which can be enabled/controlled at either:
- the server level: via the CF Admin "Settings" page and its "Use Java as Regex Engine" option, or
- the application level: via this.useJavaAsRegexEngine in Application.cfc (or an attribute of the same name in cfapplication, if using Application.cfm)
https://coldfusion.adobe.com/2022/11/switching-cf-to-use-java-regex-engine/

11/5/22 - Blog - Charlie Arehart - ColdFusion Portal - Come learn "How the Adobe CF Docker Images Have Evolved", launching CF Summit Online
The first session for the Adobe ColdFusion Summit Online has been announced. I had reported here last week that Adobe was going to start having all the speakers from Adobe's CF Summit (in Vegas last month) offer their talks online, to be live-streamed and recorded. Well, it looks like I'm the lead-off batter.
https://coldfusion.adobe.com/2022/11/come-learn-how-adobe-cf-docker-images-have-evolved/

11/4/22 - Blog - Nolan Erck - Free ColdBox Training For The Rest Of 2022
CFML developers that still say "I don't know how to use ColdBox": your excuses are now officially invalid.
;) The ColdBox Master Class video training series that I produced for Ortus Solutions is FREE for the rest of the year!
https://southofshasta.com/blog/free-coldbox-training-for-the-rest-of-2022/

11/4/22 - Blog - Pete Freitag - OpenSSL and ColdFusion / Lucee / Tomcat
Pete has had several people asking him about the OpenSSL vulnerabilities that were patched this week: CVE-2022-3602 and CVE-2022-3786, aka Spooky SSL.
https://www.petefreitag.com/item/1000.cfm

11/4/22 - Tweet - Pete Miller - Lost Respect
I lost a lot of respect in a past job sticking with #CFML, even to the point I was moved sideways and a new project manager came in with #PHP for a new project. I left, and 7 years later the #CFML runs their business and the #PHP project is dead and buried.
https://twitter.com/millerpete/status/1588660303986036738
https://twitter.com/millerpete

11/4/22 - Tweet - Brad Wood - Ortus - Microsoft 365's removal of plain-text passwords
If anyone is caught out by Microsoft 365's removal of plain-text passwords to check Exchange mail, I've recently set up an OAuth flow using the Graph API for a client and posted some example code here in the Lucee forum to help you out: https://dev.lucee.org/t/check-email-on-o365-with-oauth/11389/5?u=bdw429s

11/4/22 - Blog - Zac Spitzer - Lucee - Lucee released 5.3.9.166 Stable
This is a minor bug fix release, which addresses a few bugs listed below, mainly relating to concurrency or errors under heavy load. Anyone running 5.3.9.160 is encouraged to update to this release.
https://dev.lucee.org/t/lucee-5-3-9-166-stable-release/11319

11/3/22 - Blog - Charlie Arehart - Restoring the CF Admin logviewer removed in Oct 2022 CF updates, at your own risk
As of the Oct 2022 CF updates (CF2021 update 5 and CF2018 update 15), Adobe has chosen to remove the CF Admin feature to view, search, download, and delete CF logs, due to asserted (but as-yet undocumented) security concerns. What if you want it back?
In this post, I explain what changed, why, and how to get the functionality back - albeit at your own risk. For more, read on.
https://www.carehart.org/blog/2022/11/3/restoring_admin_logviewer

11/3/22 - Podcast - Michaela Light - CF Alive - 123 State of CF Union Survey Analysis (part 2) with Gavin Pickin
Gavin Pickin talks about the "State of CF Union Survey Analysis (part 2)" in this episode of the ColdFusion Alive podcast with host Michaela Light. "We're going to be doing our second part on the State of the ColdFusion survey results. And we've got some very interesting data that we found. Gavin put together some really cool graphs, so if you're watching on video you'll be able to see those; if you're not on video, you can go to the show notes page on teratech.com to have a look at the graphs when we get to those."
https://teratech.com/podcast/state-cf-union-survey-analysis-part-2-with-gavin-pickin/

CFML Jobs
Several positions available on https://www.getcfmljobs.com/
Listing over 145 ColdFusion positions from 80 companies across 66 locations in 5 countries.
2 new jobs listed this week:
- Full-Time - Senior ColdFusion Developer at London, United Kingdom - Nov 03
https://www.getcfmljobs.com/jobs/index.cfm/united-kingdom/Senior-ColdFusion-Developer-at-London/11532
- Full-Time - ColdFusion Developer at London, United Kingdom - Nov 03
https://www.getcfmljobs.com/jobs/index.cfm/united-kingdom/Coldfusion-Developer-at-London/11531

Patreon Sponsored Job Announcement - Tomorrows Guides
Tomorrows Guides is a fast-paced leader in the UK care sector, catering for care seekers across three areas: Care Homes, Nurseries and Home Care. We are often called the TripAdvisor of the care sector. Our Product team consists of over 20 individuals across the UK working remotely to expand and improve our offering, with regular expansion in teams year on year. We work with both ColdFusion 2021 and Node.js/React in the Azure cloud, while also using both MSSQL and MongoDB databases.
Currently we are looking for Senior ColdFusion Developers and Automation Testers, with training paths to Node.js available as well. We offer a wide variety of perks, from our company-wide £4k bonus scheme and quarterly nights out with the whole company and the Product team, to a 6% company pension contribution.

Current Roles in detail
All roles: https://www.tomorrows.co.uk/jobs.cfm

Senior CF Developer – UK Only | Remote | Permanent | Circa £60k - https://app.occupop.com/shared/job/senior-coldfusion-developer-5925b/
- Minimum three years' experience with ColdFusion
- Database design, normalisation and ability to write/understand complex queries using MSSQL Server 2019
- Familiarity with Git
- Flexible skillset covering a wide range of development

Automation Test Engineer – UK Only | Remote | Permanent | Circa £40k - https://app.occupop.com/shared/job/automation-test-engineer-a6545/
- Minimum three years' experience with automated testing
- Experience with automated testing tools such as Selenium
- Experience with API test tools such as Postman/Fiddler etc

Benefits of both roles:
- £4,000 per annum discretionary company bonus scheme
- 25 days annual leave + bank holidays
- 6% employer pension contribution
- Access to free perks and discounts through Perkbox
- Long Service Awards
- Cycle to Work Scheme
- Company and Team nights out

Other Job Links
Ortus Solutions: https://www.ortussolutions.com/about-us/careers
There is a jobs channel in the CFML Slack team, and in the Box team Slack now too.

ForgeBox Module of the Week
Swagger Redoc UI for ColdBox
This is the Swagger Redoc UI module for ColdBox applications. It was inspired by the cbSwaggerUI module.
By default, it looks in the /cbswagger location for the OpenAPI Swagger file. The UI is available at /redoc, where you will see a visual representation of your Swagger docs.
Based on: https://github.com/Redocly/redoc
Online Demo: https://redocly.github.io/redoc/
https://www.forgebox.io/view/cbswagger-redoc

VS Code Hint, Tips and Tricks of the Week
Project Manager, by Alessandro Fragnani
It helps you easily access your projects, no matter where they are located. Don't miss those important projects anymore. You can define your own projects (also called favorites), or choose to auto-detect Git, Mercurial or SVN repositories, VS Code folders, or any other folders.
Here are some of the features that Project Manager provides:
- Save any folder or workspace as a Project
- Auto-detect Git, Mercurial or SVN repositories
- Organize your projects using Tags
- Open projects in the same or a new window
- Identify deleted/renamed projects
- A Status Bar which identifies the current project
- A dedicated Side Bar
https://marketplace.visualstudio.com/items?itemName=alefragnani.project-manager

Thank you to all of our Patreon Supporters
These individuals are personally supporting our open source initiatives to ensure that great tooling like CommandBox, ForgeBox, ColdBox, ContentBox, TestBox and all the other boxes keeps getting the continuous development it needs, and to fund the cloud infrastructure that our community relies on, like ForgeBox for our package management with CommandBox. You can support us on Patreon here: https://www.patreon.com/ortussolutions
New Patreon: Tomorrows Guides
Don't forget, we have Annual Memberships: pay for the year and save 10% - great for businesses. Bronze packages and up now get ForgeBox Pro and CFCasts subscriptions as a perk of their Patreon subscription.
All Patreon supporters get:
- A profile badge on the community website
- Their own private forum access on the community website
- Their own private channel access on the BoxTeam Slack
- Live stream access to streams like "Koding with the Kiwi + Friends" and the Ortus Software Craftsmanship Book Club
https://community.ortussolutions.com/

Patreons: John Wilson - Synaptrix, Jordan Clark, Gary Knight, Mario Rodrigues, Giancarlo Gomez, David Belanger, Dan Card, Jonathan Perret, Jeffry McGee - Sunstar Media, Dean Maunder, Nolan Erck, Abdul Raheen, Wil De Bruin, Joseph Lamoree, Don Bellamy, Jan Jannek, Laksma Tirtohadi, Brian Ghidinelli - Hagerty MotorsportReg, Carl Von Stetten, Jeremy Adams, Didier Lesnicki, Matthew Clemente, Daniel Garcia, Scott Steinbeck - Agri Tracking Systems, Ben Nadel, Richard Herbet, Brett DeLine, Kai Koenig, Charlie Arehart, Jason Daiger, Shawn Oden, Matthew Darby, Ross Phillips, Edgardo Cabezas, Patrick Flynn, Stephany Monge, Kevin Wright, John Whish, Peter Amiri, Cavan Vannice, John Nessim.
You can see an up-to-date list of all sponsors on Ortus Solutions' website: https://ortussolutions.com/about-us/sponsors
Thanks everyone!!!
★ Support this podcast on Patreon ★

In The Frame: Theatre Interviews from West End Frame
S7 Ep23: Grace Mouat, Ella in Rodgers and Hammerstein's Cinderella

In The Frame: Theatre Interviews from West End Frame

Play Episode Listen Later Nov 4, 2022 45:04


Grace Mouat is currently starring as Ella in the UK premiere of Rodgers and Hammerstein's Cinderella at the Hope Mill Theatre in Manchester.  Cinderella is the only musical written for television by the legendary duo Rodgers & Hammerstein. Originally broadcast live in 1957 starring Julie Andrews, the broadcast was watched by more than 100 million people, before subsequently being remade for TV in 1965 and again in 1997 starring Whitney Houston and Brandy.  A new Broadway version with a Tony-nominated book by Douglas Carter Beane premiered in 2013, featuring several fresh characters and songs. This will be the first time a fully staged version of the show has been performed in the UK, following a 2019 one-night concert version in London.  Grace was part of the original cast of Six. She understudied all six queens in the musical's original UK tour and Edinburgh Fringe season before becoming an alternate when Six began its West End run at the Arts Theatre.  Following a triumphant run in Six, Grace joined the original cast of & Juliet (Manchester / Shaftesbury Theatre) as Judith whilst understudying the title role. She went on to play Chrissy in Hair (Turbine), Chloe in Be More Chill (Shaftesbury Theatre) and Woman 1 in Closer Than Ever (BroadwayHD) before returning to & Juliet when it resumed its West End run following the Covid-19 lockdowns, this time as the alternate for Juliet.  Most recently Grace played Pilar in Lucy Moss' revival of Legally Blonde at the Regent's Park Open Air Theatre.  Grace is part of the girl group SVN, alongside the original cast of Six. She previously hosted her own podcast Cut To The Grace and vlogged on YouTube. Rodgers and Hammerstein's Cinderella runs at the Hope Mill Theatre until Sunday 11th December 2022. Visit www.hopemilltheatre.co.uk for info and tickets.  Hosted by Andrew Tomlins. @AndrewTomlins32  Thanks for listening! Email: andrew@westendframe.co.uk Visit westendframe.co.uk for more info about our podcasts.  

Weiss Advice
Building Momentum In The Commercial Real Estate Market With Justin Ryder, CCIM

Weiss Advice

Play Episode Listen Later Oct 31, 2022 31:56


Justin serves as an Advisor with SVN Stone Commercial Real Estate, specializing in investment sales, auto-related sales, land use, and multi-family. Justin's passion is to deliver value to clients within the context of service, leadership, and teamwork - attributes that perfectly highlight SVN Stone's competitive advantage. Justin is a native Kentuckian and graduated from the Gatton College of Business and Economics with a degree in Business Management. Prior to joining the SVN team, Justin worked with a nonprofit ministry where he was a proven leader and creative marketer.

[00:01 - 02:56] Opening Segment
We welcome Justin Ryder, CCIM!
Justin recommends branding yourself and making sure that people recognize your brand.

[02:57 - 24:36] Building Momentum In The Commercial Real Estate Market
- Lexington is known for its sports culture and for being a great place to live and work
- It has a strong commercial real estate market, with good cash flow and stability
- It is unique in that it is not as popular as some other markets
- It is seeing an increase in vacancy
- Tertiary markets will become more important in the coming years
- In brokerage, momentum is built by doing deals and having a mentor
- Being knowledgeable about tax laws and real estate trends is key to success in commercial real estate

[24:37 - 31:56] THE FINAL FOUR
What's the worst job that you ever had? Working in a baseball card store.
What's a book you've read that has given you a paradigm shift? "Principles for Dealing with the Changing World Order" by Ray Dalio.
What is a skill or talent that you would like to learn? How to play golf.
What does success mean to you? Justin says, "Success to me means to build faith meaningfully, build family memorably, and build business monumentally."

Connect with Justin Ryder, CCIM on LinkedIn: Justin Ryder, CCIM
LEAVE A 5-STAR REVIEW by clicking this link.
WHERE CAN I LEARN MORE? Be sure to follow me on the below platforms:
Subscribe to the podcast on Apple, Spotify, Google, or
Stitcher.
LinkedIn | YouTube | Exclusive Facebook Group | www.yonahweiss.com
None of this could be possible without the awesome team at Buzzsprout. They make it easy to get your show listed on every major podcast platform.

Tweetable Quotes:
"It is much better to specialize, and I think any commercial real estate training will teach you, you need to specialize." - Justin Ryder, CCIM
"God's gift to commercial real estate is momentum." - Justin Ryder, CCIM

Support the show

The TreppWire Podcast
164. Wading Through Some CRE Distress with Tony Yousif, SVN

The TreppWire Podcast

Play Episode Listen Later Oct 25, 2022 40:36


In a special guest episode, we welcome Tony Yousif, Director of National Accounts at SVN. We dive into the latest in commercial real estate distress, with Tony sharing a boots-on-the-ground view of asset classes to watch, trends by geography, and the differences among types of lenders. Tony shares where the opportunities are and where some may get stuck in the mud. Tune in now.

Episode Notes:
- Background (0:21)
- Property type coverage (5:26)
- Geographical trends (7:11)
- What's happening in multifamily? Headwinds (9:05)
- Typical timing for valuation requests and transactions (14:40)
- What are lenders seeing? (17:53)
- Is the extend-and-pretend appetite across the board? (25:35)
- Office conversions (30:38)
- Office occupancy sentiment (33:00)
- Where's the opportunity right now? (35:48)

Questions or comments? Contact us at podcast@trepp.com.
Follow Trepp:
Twitter: www.twitter.com/TreppWire
LinkedIn: www.linkedin.com/company/trepp-llc
Facebook: www.facebook.com/TreppLLC

Radio Toilet ov Hell
Toilet Radio 397 – Viy (feat Svn.Seeker)

Radio Toilet ov Hell

Play Episode Listen Later Oct 19, 2022 73:25


The October Spooktacular continues. This week we're joined by Nikita of Svn.Seeker to discuss the sole horror movie produced by the Soviet Union, Viy (1967). Let's come together and enjoy how much these filmmakers dislike the Russian Orthodox Church. Let's take a look at some delightful practical effects and camera tricks that filmmakers were using over 50 years ago. They don't make them like these anymore, folks (horror movies and communist states). We're also talking about Connecticut, going DIY, and collecting metal bandcamp squares like Pokemon. Next week: Body Melt (available on Tubi). And if you wanna get prepared for the Patreon-subscriber-only bonus show, we'll be watching Society (1989). Music featured on this week's show: Svn.Seeker – Pilfered This program is available on Spotify. It is also available on iTunes or whatever they call it now, where you can rate, review, and subscribe. Give us money on Patreon to get exclusive bonus episodes and other cool shit. 

In The Frame: Theatre Interviews from West End Frame
S7 Ep11: Jarnéia “Jaye'J” Richard-Noel, star of Millennials & original Catherine of Aragon in Six

In The Frame: Theatre Interviews from West End Frame

Play Episode Listen Later Aug 29, 2022 53:52


Today we're joined by Olivier-nominated original Six Queen Jarnéia Richard-Noel (also known as Jaye'J), who is currently starring in the London production of Millennials by Elliot Clay at The Other Palace. Jarnéia originated the role of Catherine of Aragon in Six The Musical. As part of the original cast, Jarnéia took the musical from its first tour, to a sell-out run at the Edinburgh Fringe Festival, to the West End, where she stayed with the show for almost three years at the Arts Theatre, Lyric Theatre and Vaudeville Theatre. Alongside her fellow OG queens, Jarnéia was nominated for the 2019 Olivier Award for Best Actress in a Supporting Role in a Musical. Earlier this year, Jarnéia and the original West End cast of Six reunited for three sell-out performances at Hampton Court Palace, before filming a movie version of Six at the Vaudeville Theatre. After training at the Urdang Academy, Jarnéia made her professional debut as a singer/dancer for P&O Cruises. After leaving Six, she played Alice in Dick Whittington at the Norwich Theatre Royal before joining the UK tour of Hairspray as a swing for the Dynamites. From six to seven, Jarnéia is part of the girl group SVN alongside the original West End cast of Six. Following the release of multiple singles, SVN recently played their debut headline show at the O2 Academy Islington. In this episode, Jaye'J discusses her path into theatre, her journey with Six and why she's excited to be showing a different side of her talents in Millennials. Millennials runs at The Other Palace until Sunday 4th September. Hosted by Andrew Tomlins. @AndrewTomlins32 Thanks for listening! Email: andrew@westendframe.co.uk Visit westendframe.co.uk for more info about our podcasts.

Streamageddon
#16 – Only Murders in the Building

Streamageddon

Play Episode Listen Later Jul 15, 2022 62:36


We're sending the investigation in a whole new direction this week with our impressions of Season 2 of Hulu's hit series Only Murders in the Building. It's a wildly inaccurate portrayal of podcasting, but a delightful and empathetic look at the cloistered lives of quirky city dwellers (Caution: Contains highly relatable and extremely awkward interactions with neighbors). Then we're revisiting our review of Season 1 of Showtime's I Love That For You. It's a series that piqued our interest but left us worried the dark twist would sink the stellar comedic performances. So, how did it turn out for Joanna Gold at SVN? You'll have to listen to the end to find out! ———

LINUX Unplugged
464: Git Happens

LINUX Unplugged

Play Episode Listen Later Jun 27, 2022 73:15 Very Popular


We're going back in time to witness the early days of a critical tool to build Linux, then jump forward 15 years and join our buddy Brent on his journey to learn that very tooling.

We Build Great Apartment Communities
109: Multi-Family Assets and Affordable Housing with Reid Bennett

We Build Great Apartment Communities

Play Episode Listen Later Apr 11, 2022 56:21


The pandemic has affected the real estate space in more ways than one. Blueprints, schematics, and plans have been thrown out the window, as no one was prepared for the impact of COVID-19. Everyone had to regroup and ground themselves in order to keep moving forward. In this episode, John has Reid Bennett, a pro of over 20 years, talk about multi-family and affordable housing. He also discusses strategies and how these two assets could get you through these times of uncertainty in the market. Let's tune in as John extracts "multi-million nuggets of knowledge" from the great Reid Bennett.

Episode Highlights:
- Approaches to underwriting
- Strategies during inflation
- Trading multi-family assets
- 1031 Exchange
- CCIM Institute
- Condominium conversion
- Financing
- Parallels between 2008 and 2021
- Affordable housing communities
- Section 42 communities
- Re-syndication
- Internal rate of return

Connect with Reid: LinkedIn  Twitter  Email

About Our Guest: Reid Bennett, CCIM serves as National Council Chair of Multifamily Properties for SVN International and a Senior Vice President for SVN - Chicago Commercial. As a licensed managing broker, he focuses primarily on the sale of apartment communities across the Midwest and also teams up with members of his council to serve clients across the country in over 150 markets. Reid prides himself on understanding the nuances and analysis of multiple-unit apartment dwellings and low-income Section 8 and Section 42 communities. In 2016, 2018 & 2021 Reid received the Partners Circle Award from SVN, where he was ranked in the top 0.02% among all 1,200+ SVN advisors in the world for the third time. A graduate of the University of Iowa, Reid has also achieved the highly coveted designation of Certified Commercial Investment Member (CCIM).
Also active in his community, Reid chaired the Development Committee for the River North Residents Association (RNRA), where he worked in conjunction with developers and area residents to foster responsible development in one of Chicago's most active and desirable neighborhoods. Prior to merging with SVN, Reid worked with condominium converters as well as large apartment complex buyers and sellers. He procured numerous multi-million-dollar deals across the Midwest. Embodying the spirit of SVN, Reid fully utilizes the national platform and collaborative efforts to best perform for his clients on a global level. --- Did you enjoy today's episode? Please click here to leave a review for The We Build Great Apartment Communities. Be sure to subscribe on your favorite podcast app to get notified when a new episode comes out! Do you know someone who might enjoy this episode? Share this episode to inspire and empower! Connect with John Brackett and We Build Great Apartment Communities Instagram @webuildgreatcommunities Facebook @buildingreatcommunities LinkedIn @brackettjohn Website www.fidelitybps.com   Subscribe to The We Build Great Apartment Communities Apple Podcasts Spotify   Do you think you would be a great fit for the show? Apply to be a guest.   Fidelity Business Partners, Inc. 6965 El Camino Real Suite 105-190 Carlsbad, CA 92009 D: 760-301-5311 F: 760-987-6065

Cognac Corner
Rare Air Art Show

Cognac Corner

Play Episode Listen Later Apr 6, 2022 129:40


On this week's episode, Marcus is joined by guest co-host S Dot Giles and returning connoisseur Svn Phoenix. Svn is on the show to talk about his Rare Air Art Show and how it's making a comeback with more artists plus a new location. S Dot shares her upcoming events, plus holds Marcus down as he falls apart as usual. Drink up and enjoy.

The Massimo Show
Massimo 2021 CRE New to Business Member of the Year - Steve Rowe

The Massimo Show

Play Episode Listen Later Feb 14, 2022 20:09


On this episode of The Massimo Show, Rod sits down with Massimo's 2021 New to Business Member of the Year, Steve Rowe. Originally from Connecticut, Steve moved to Mobile, Alabama when he was nine years old. His grandfather was the vice president of a grocery store and his father owned a construction company. His intention was to work for his father after school. "So in high school, I never really tried that hard," Steve tells Rod. As soon as he could, he immediately went to work for his dad. His teachers encouraged him to work on his resume, but he didn't really see that as important since he knew he essentially already had a set career. "It was something that I thought was the right thing for me until I started really getting a better understanding of real estate," Steve explains. "And that is why I made the transition into doing full-time brokerage, because the difference maker for me has been the fact that I love what I do." A friend of Steve's, Justin, who started his own SVN franchise, lamented that "Man, I'm having a hard time finding anybody." Steve told him, "Yeah, you really need somebody like me who's okay making a lot of cold calls." Justin pitched the idea of Steve joining the team, and Steve really didn't see why he couldn't make that move. So in 2019 he came on full time to the SVN team, before getting his license, and started making calls. He got licensed shortly after that and has been a powerhouse of sales ever since. Rod and Steve spend the rest of their time together talking about a variety of things that led to his early success at the start of his brokerage career, including:
- The importance of being sociable, plus the use of systems
- Becoming a resource and creating a niche in your community
- Consistency in calling and tracking your pipeline
- What it takes to feel ready to build a team