Ralph and Lauren pull back the curtain on Meta's latest Advantage+ transformation, powered by the underreported Andromeda algorithm. With Meta silently replacing manual campaign controls with full automation, the hosts dive into what this shift means for media buyers, business owners, and performance marketers. From breaking down Advantage+ Shopping's evolution to analyzing its strategic implications for ad spend and conversion optimization, Ralph and Lauren explain why understanding Meta's AI is now mission-critical. If you're managing Meta ads in 2025, this episode is your wake-up call.

Chapters:
00:00:00 – Dive Into the Chaos: Welcome to the Professional Traffic Podcast
00:01:33 – The Truth About Meta's Advantage+ and the Stealthy Andromeda Rollout
00:03:17 – Hot Takes: Our Raw Reactions to Meta's Automated Future
00:05:28 – Meta's $65 Billion Bet on AI—What It Really Means for Advertisers
00:07:42 – Tariffs, Tension & Traffic: Why It's Time to Diversify Beyond Meta
00:11:41 – Beyond the Big Two: Alternative Ad Networks You Shouldn't Ignore
00:17:30 – Snapchat Isn't Dead: Surprising Wins and What the Data Shows
00:19:23 – TikTok's Rise and the Platforms Quietly Dominating the Feed
00:21:00 – Dollars and Downloads: The Platforms Driving Real Economic Growth
00:21:09 – Threads is Blowing Up—Here's What That Means for Marketers
00:22:03 – Personalized Ads That Don't Suck: Cracking the Relevance Code
00:23:06 – Wait, Did We Just Say PornHub? The Untapped Traffic Opportunity
00:24:15 – Can Threads Deliver Ad Results? Here's the Smart Way to Find Out
00:25:25 – TikTok's Secret Superpower: B2B Lead Gen Done Right
00:30:02 – Lead Quality Over Quantity: How to Actually Score Buyers
00:31:07 – Mic Drop: Final Thoughts, Must-Know Takeaways, and Show Notes

LINKS AND RESOURCES:
Meta Andromeda: Supercharging Advantage+ automation with the next-gen personalized ads retrieval engine
Meta to spend $65B on AI in 2025, Zuckerberg says
Episode 693: How Meta Ads Are Changing Forever: Advantage+ & cAPI Subterfuge REVEALED!
Episode 691: The MAJOR Meta Advantage+ Changes You Must Know
Episode 660: 3 Astonishing New Features on TikTok You Didn't Know About…Until Now
Episode 629: (Part 2) How to Use TikTok Ads The Right Way to Grow Your Business
Episode 627: How to Use TikTok Ads The Right Way to Grow Your Business
Get Your nCAC Calculator Now!
Tier 11 Jobs
Perpetual Traffic on
In this episode, we break down the biggest stories setting the pulse of the markets:
• Markets await 'Liberation Day': Futures are holding steady as investors wait for details of the tariffs Trump will announce on April 2. A broader trade escalation is feared if the measures are sweeping. Also out today are the JOLTS report (7.69M openings expected) and the March manufacturing indicators.
• Oil and gold rebound: Brent rises to $75.14 and WTI to $71.84 after Trump's threats against Russia and Iran. Gold hits a new record of $3,148.88 before easing to $3,132.37, up 20% so far in 2025. Saxo Bank reports profit-taking in metals and sustained buying in energy.
• Celsius bets on the female segment: $CELH acquires Alani Nu for $1.65B to go after the women's energy-drink market, which is projected to be the sector's main growth driver. Truist upgraded the stock to Buy with a $45 price target. Shares are up 33% YTD and at six-month highs.
Join us to understand how the tariff outlook, safe-haven demand, and new market strategies are shaping the course of the global economy.
As a native New Orleanian, Mitch Landrieu knows a thing or two about crisis and recovery. He served as the lieutenant governor of Louisiana through Hurricanes Katrina and Rita in 2005 and the compounding effects of subsequent storms including Ike and Gustav. In 2010, he was sworn in as mayor of New Orleans—just one month after the Deepwater Horizon explosion undermined the region's efforts to recover from five years of depopulation and economic decline. Mayor Landrieu's experience working for the efficient restoration of New Orleans's critical infrastructure later led the Biden Administration to appoint him as an advisor on the national implementation of the 2021 Infrastructure Investment and Jobs Act. Otherwise known as the Bipartisan Infrastructure Law (BIL), this bill has been the largest long-term investment in U.S. infrastructure since the Federal-Aid Highway Act of 1965. It has prioritized and funded an array of essential, future-oriented projects throughout the country. The aftermath of Hurricane Katrina demonstrated how the increasing scale of environmental disasters will expose vulnerabilities in the nation's aging infrastructure. Local leaders are thus seeking strategies that balance the needs of growth and economic development with the proactive management of current and future risks. The work that Mayor Landrieu, city staff, and community partners undertook to steer New Orleans's recovery process away from bankruptcy and toward revived communities and a more secure built environment has provided a case study for policymakers and resilience groups around the world. In part one of this two-part episode, Mayor Landrieu talks with Ten Across founder Duke Reiter about the personal and professional experiences that have influenced his views on equity and resilience and shaped some of the bold positions he's taken in governing. Part two will delve further into his views and outlook on contemporary governance. We've taken a new approach with this episode, take a listen and let us know what you think by leaving a review on your preferred podcast platform. Related articles and resources: “Want to Understand the Future of U.S. Climate Resilience? Look to the Gulf Coast” (Ten Across Conversations podcast, Dec. 2024) “Sunk Costs, Sunken City: The Story of New Orleans with Richard Campanella” (Ten Across Conversations podcast, June 2023) “DOGE says it's now saved $65B in federal funds, but that's still impossible to verify” (ABC News, Feb. 26, 2025) “Veteran crisis hotline may be impacted by federal layoffs” (ABC 15, Feb. 24, 2025) “Angry Over Confederate Flag, Mayor Plans March” (New York Times, March 2000) “What is in the just-passed House Republican budget bill? What to know” (USA Today, Feb. 26, 2025)
Send us a text

The Department of Government Efficiency (DOGE), helmed by Elon Musk, has been tasked with optimizing federal processes, cutting redundancies, and improving transparency in government operations.

Now, large consulting firms are in DOGE's crosshairs. The top 10 largest consulting firms that do business with the U.S. government have been called in to "defend the spend" this Friday. These firms represent over $65B in outstanding consulting contracts.

Whether you agree with DOGE's mission or not isn't the point - the U.S. government is the single largest consumer of consulting services in the world, and this steady revenue stream is under threat for some of the world's largest consulting firms.

The Wall Street Journal broke this news - read the original article.

Chapters
What do federal consultants do? (1:00)
What is DOGE? (2:00)
What is the current state of play? (3:02)
What does this mean for consulting firms? (6:56)
What does this mean for job seekers? (8:39)

Additional Resources
Explore the top 10 public sector consulting firms as ranked by Management Consulted
Build your business acumen through our Black Belt case coaching program
Unlock top consulting jobs on the Management Consulted Job Board

Connect With Namaan Mian
Connect with Namaan Mian on LinkedIn

Connect With Management Consulted
Book a free 15min info call with Katie.
Follow Management Consulted on LinkedIn, Instagram, and TikTok for the latest updates and industry insights.
Join an upcoming live event - case interviews demos, expert panels, and more.
Email our team (team@managementconsulted.com) with any questions or feedback.
The first-ever episode of Good Morning Outdoors is here with Matt Whitermore and Alex Burkett! We're diving into Blackstone's $5.65B acquisition of Safe Harbor Marinas—what does this mean for the future of outdoor recreation and waterfront access? Plus, DOGE is making significant cuts to public lands agencies, and the Outdoor Alliance says it could spell disaster for recreation and conservation. And finally, could we be witnessing Fyre Fest 2.0? We break down the latest controversy and what's really happening in the RV and outdoor travel boom. ---- Good Morning Outdoors is part of the Hospitality.FM Multi-Media Network and is a licensed podcast under Good Morning Hospitality! The hospitality industry is constantly growing, changing, and innovating! This podcast brings you the top news and topics from industry experts across different hospitality fields. Good Morning Hospitality publishes three thirty-minute weekly episodes: every Monday and Wednesday at 7 a.m. PST / 10 a.m. EST and every Tuesday at 8 a.m. CET for our European and UK-focused content. Make sure to tune in during our live show on our LinkedIn page or YouTube every week and join the conversation live! Explore everything Good Morning Hospitality has to offer: • Well & Good Morning Coffee: Enjoy our signature roast—order here! • Retreats: Join us at one of our exclusive retreats—learn more and register your interest here! • Episodes & More: Find all episodes and additional info at GoodMorningHospitality.com Thank you to all of the Hospitality.FM Partners that help make this show possible. If you have any press you want to be covered during the show, email us at goodmorning@hospitality.fm Learn more about your ad choices. Visit megaphone.fm/adchoices
NEWS: Pag-IBIG bares record P55.65B dividend | Feb. 28, 2025

Visit our website at https://www.manilatimes.net

Follow us:
Facebook - https://tmt.ph/facebook
Instagram - https://tmt.ph/instagram
Twitter - https://tmt.ph/twitter
DailyMotion - https://tmt.ph/dailymotion

Subscribe to our Digital Edition - https://tmt.ph/digital

Sign up to our newsletters: https://tmt.ph/newsletters

Check out our Podcasts:
Spotify - https://tmt.ph/spotify
Apple Podcasts - https://tmt.ph/applepodcasts
Amazon Music - https://tmt.ph/amazonmusic
Deezer: https://tmt.ph/deezer
Stitcher: https://tmt.ph/stitcher
Tune In: https://tmt.ph/tunein

#TheManilaTimes

Hosted on Acast. See acast.com/privacy for more information.
Nvidia (NVDA) shares are down 5% from all-time highs, but on track for a positive weekly finish. The company is poised to benefit from increased spending in A.I. infrastructure plans such as "Stargate" announced by President Trump. Meta Platforms (META) shared its plans to spend up to $65B in CapEx to increase its A.I. tech as well. Kevin Hincks looks at example options trades in both big tech names. ======== Schwab Network ======== Empowering every investor and trader, every market day. Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document. http://bit.ly/2v9tH6D Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185 Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7 Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore Watch on DistroTV - https://www.distro.tv/live/schwab-network/ Follow us on X – https://twitter.com/schwabnetwork Follow us on Facebook – https://www.facebook.com/schwabnetwork Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/ About Schwab Network - https://schwabnetwork.com/about
Ben is Co-Head of GMO's Asset Allocation team and serves as a partner and portfolio manager at the firm. Founded in 1977, GMO manages over $65B (as of 9/30/24), and is renowned for its expertise in multi-asset class portfolios. In this episode, Ben shares his insights on his investment framework, the principles of value investing, and his perspectives on current market conditions.
11/20/24 Hour 3 Vince takes listener calls about Matt Gaetz's nomination and potential confirmation hearing. Opponents of Matt Gaetz are labeling him a sex offender, but the DOJ never pursued the accusations because the accusers lacked credibility. Vince speaks with Matt Rosendale, Congressman representing Montana's 2nd Congressional District, about Biden bringing us to the verge of WW3 before leaving the WH by authorizing Ukraine to use long-range missiles, and about the Biden State Department telling Ukraine it can keep $4.65B in taxpayer money it was originally supposed to pay back. For more coverage on the issues that matter to you visit www.WMAL.com, download the WMAL app or tune in live on WMAL-FM 105.9 from 3-6pm. To join the conversation, check us out on social media: @WMAL @VinceCoglianese. Executive Producer: Corey Inganamort @TheBirdWords See omnystudio.com/listener for privacy information.
The Daily Business and Finance Show - Tuesday, 5 November 2024

We get our business and finance news from Seeking Alpha and you should too! Subscribe to Seeking Alpha Premium for more in-depth market news and help support this podcast. Free for 14 days! Please click here for more info: Subscribe to Seeking Alpha Premium News

Today's headlines:
Nvidia moves Super Micro orders to other suppliers to maintain supply chain stability - report
Solar, clean energy stocks jump as new poll indicates potential Harris election win
Hims & Hers Health GAAP EPS of $0.32 beats by $0.27, revenue of $401.6M beats by $18.83M
Super Micro faces mounting questions as negative news builds: Wedbush
Viking stock drops 13% after jumping 9% on obesity drug data
Palantir leaps after crushing estimates due to its popular AI tools
Earnings week ahead: O, PLTR, QCOM, SQ, AMC, PARA, LCID, RIVN, WBD, and more
Constellation Energy tops S&P losers list on Amazon nuclear power deal setback
BCE to acquire U.S.-based Ziply Fiber in $3.65B deal, expanding its fibre footprint

Explanations from OpenAI ChatGPT API with proprietary prompts. This podcast provides information only and should not be construed as financial or business advice. This podcast is produced by Klassic Studios. Learn more about your ad choices. Visit megaphone.fm/adchoices
Send us a text

00:19 | FigureAI (humanoid robots)
- AI and humanoid robots drive efficiency and will drive cost of goods/services to $0, drive unlimited GDP
- $675m raise, $2.6b valuation; OpenAI, Microsoft, Nvidia
- 10b humanoid robots by 2040
- Musk is projecting 16b to 32b humanoid robots; Tesla Optimus is a humanoid robot
- humanoid robots fit easily into a human world to easily replace humans

11:49 | Stripe
- These companies are so big they're doing share buybacks!
- Planning new tender offer to repurchase shares from employees
- Entire offer financed with Stripe's own cash, a shift from external funding
- Generated $615M in free cash flow in June quarter vs. $500M cash burn in 2022
- Valuation at $70B (secondary), up from $50B in 2022 and $65B in last tender
- Up to 8,000 employees can sell up to $50,000 of vested shares at $27.51/share
- Expanding beyond core payments into billing software; segment on track for $500M annual revenue

25:39 | xAI
- xAI differentiators are becoming clear; real-time data, most accurate answers, Musk effect (i.e. unlimited capital)
- AI large language model platform business
- Released Grok-2 and Grok-2 Mini beta LLMs on X platform
- Enterprise API arriving later this month
- Top-four position on LMSYS chatbot leaderboard
- Grok-2 Mini: efficient, ideal for speed/resource-critical scenarios
- Focus on expanding multimodal understanding
- Available to Premium/Premium+ subscribers on X at $8/month
- Secondary market valuation: $25.7B (+6.9% vs May 2024 round)
This week saw a range of titles opening wide, with Blink Twice, The Forge, and The Crow each debuting to different unique audiences. We take an in-depth look at each, as well as how opportunities of cross-promotion, subscriptions, and bundling might be being missed. Uncover all the insights into the box office and audience analysis on this week's instalment of Behind the Screens. Topics and times: Year-to-date box office comparison - 0:16 Blink Twice box office overview - 1:00 Blink Twice audience analysis - 1:28 Content warnings in cinema - 3:04 The Forge box office and audience analysis - 7:14 The Crow box office and audience analysis - 10:16 Cross-promotion, subscriptions, and bundling opportunities - 12:22 Box office holdovers - 14:19 Next week - 16:54 Find us at https://www.linkedin.com/company/vista-group-limited/, and follow lifeatvistagroup on Instagram Box Office Overview: Deadpool & Wolverine regained the top spot domestically with an $18M gross, and $20M internationally, bringing the global total to $1.21B. Alien: Romulus grossed $16.2M domestically and $41M internationally with strong holds particularly in China, for a global running total of $225M. Inside Out 2 passed $1B internationally, with a worldwide total now of $1.65B. Blink Twice debuted with $7.3M in the domestic market and $6.7M internationally for a total of $14M. The Forge debuted to $6.6M in the domestic market. And The Crow opened in 8th position at the domestic market, grossing $4.6M.
Economic and Financial News

Fourteen years of Conservative government have ended in the United Kingdom after the Labour Party won the country's general election by an overwhelming majority, with Keir Starmer as the new prime minister. He has received a strong mandate with 411 of parliament's 650 seats, picking up 211 seats in this vote, according to the latest exit polls. It is a major shift for the British political landscape, which has dealt with a tumultuous decade that included Brexit, a post-COVID cost-of-living crisis, and the war in Ukraine, as well as four Conservative prime ministers in the last five years.

Crypto-related stocks are under pressure in premarket trading after Bitcoin (BTC-USD) extended its losses for a fourth consecutive session. The largest cryptocurrency has fallen from the $63,000 level to around $54,000 in recent days, as the collapsed exchange Mt. Gox prepares to distribute a large amount of BTC to its creditors. A 2011 heist took as many as 950K Bitcoins, but repayments to many creditors will be made this week, raising fears of dilution, or that many will cash out their recovered holdings. The German government also just sold thousands of bitcoins, reportedly seized in connection with the defunct piracy site Movie2k.

Stronger together could be the motto in the luxury business, as brick-and-mortar stores continue to be hit by e-commerce trends and inflation takes its toll on aspirational shoppers. Hudson's Bay Co. has confirmed the purchase of Neiman Marcus Group for $2.65B, creating a powerhouse with $10B in annual sales and more than 150 stores, including Saks Fifth Avenue, Saks OFF 5th, Neiman Marcus, and Bergdorf Goodman. As mentioned earlier, Amazon $AMZN will take a minority stake in the combined company, which will be called Saks Global. Salesforce $CRM is another minority investor and is expected to help with artificial intelligence.

Today's jobs report, due at 8:30 a.m. Eastern Time, will be closely watched and will likely be mixed, as growth is expected to continue but at a slower pace than in prior months. Economists estimate that 191K nonfarm payrolls were added in June, with the unemployment rate holding steady at 4%. With inflation the Fed's main focus, the average hourly earnings data will also be important in the upcoming report. Earnings are expected to rise 0.3% month over month and 3.9% year over year, versus 0.4% and 4.1% in May, with the labor force participation rate ticking up slightly to 62.6%.

Founder Jeff Bezos sells nearly $5B in Amazon shares. JPMorgan's Kolanovic says he will leave the bank after a series of bad calls. Fed minutes: more favorable data is needed for confidence on inflation.
Tanis Jorge is a serial, tech entrepreneur, and a leading advisor to startup founders on entrepreneurship and building successful cofounder partnerships. Over the course of her career in startups, spanning the last 20+ years, Tanis has cofounded, scaled, and successfully exited multiple data-driven businesses.Her successes culminated with her most recent venture, Trulioo, which she co-founded in 2011 with her long-term business partner, Stephen Ufford. Between 2011 and 2015, Tanis served as Chief Operations Officer of Trulioo, working to lay the groundwork and build the foundation for the trusted, innovative, and disruptive company it has become today. In 2021, Trulioo reached unicorn status (US $1.65B valuation) solidifying its place as the world's leading identity verification company and Jorge's track record for founding successful businesses. Following a record-breaking Series D, she stepped down from the Board of Trulioo to focus on her cofounder advisory work. She remains a minority shareholder of Trulioo.Tanis cofounded her first start-up, iQuiri in 1999. The company was one of the first to make consumer credit reports available online and was acquired by Experian in 2003. In 2004, she cofounded NCB Data Services, which was again acquired by Experian in 2006. In 2005, Tanis cofounded identity management firm Pharos Global Strategies, which was again acquired four years later.Today, Tanis is one of the go-to voices and experts on the ‘cofounder relationship', drawing on her experience cofounding and successfully scaling four technology businesses. She is the author of The Cofounder's Handbook (listed in USA Today's Top 10 Business Books To Scale Your Business in 2024) and Founder of The Cofounder's Hub, a platform dedicated to providing entrepreneurs with the necessary tools and resources to establish, nurture, and successfully exit business partnerships. Tanis also advises fast-growing start-ups and leading venture capitalists, focusing her work on how cofounders can function in an open, productive, and symbiotic way to ensure continued and long-term business success.Tanis sits on the Board of Directors at Ally Global, a non-profit that works to prevent human trafficking and supports survivors through safe homes, education, and work opportunities. Tanis lives in Vancouver, BC with her husband and two boys. When she isn't working she enjoys the “foodie” lifestyle with her husband David Jorge, MasterChef Canada Season 2 winner. She loves water sports and is currently working towards turning her brown belt in kickboxing into black, a lifelong item on her bucket list. Tanis also takes time to mentor students at the private school she founded, Live Learn Launch Academy, which focuses on entrepreneurship, financial literacy, and life skills.
Speaker CFPs and Sponsor Guides are now available for AIE World's Fair — join us on June 25-27 for the biggest AI Engineer conference of 2024!

Soumith Chintala needs no introduction in the ML world — his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you'll have caught up on the History of PyTorch pod from last year and cover different topics). He's well known as the creator of PyTorch, but he's more broadly the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta.

Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.

Life in the GPU-Rich Lane

Back in January, Zuck went on Instagram to announce their GPU wealth: by the end of 2024, Meta will have 350k H100s. By adding all their GPU clusters, you'd get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we used George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters.

Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.

Meta AI's Epic LLM Run

Before Llama broke the internet, Meta released an open source LLM in May 2022, OPT-175B, which was notable for how "open" it was - right down to the logbook! They used only 16 NVIDIA V100 GPUs and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size.

In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 65B and 33B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral.

July 2023 was Llama2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The three models accounted for a grand total of 3,311,616 GPU hours for all pre-training work. CodeLlama followed shortly after, a fine-tune of Llama2 specifically focused on code generation use cases. The family had models in the 7B, 13B, 34B, and 70B size, all trained with 500B extra tokens of code and code-related data, except for 70B which is trained on 1T.

All of this on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless, and in one year, Meta transformed from a company people made fun of for its "metaverse" investments to one of the key players in the AI landscape and its stock has almost tripled since (about $830B in market value created in the past year).

Why Open Source AI

The obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements:

But for Soumith, the motivation is even more personal:

“I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars.
And I think that was a strong reason why I ended up where I am. So like that, like the open source side of things, I always push regardless of like what I get paid for, like I think I would do that as a passion project on the side…

…I think at a fundamental level, the most beneficial value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me…

…Like, okay, I again always go back to like I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble.”

We like the way Soumith put it last year: Closed AI "rate-limits against people's imaginations and needs"!

What It Takes For Open Source AI to Win

However Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of the open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.

“Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback. And if you see with open source models, like if you go to the /r/localllama subreddit, like there's so many variations of models that are being produced from, say, Nous research. I mean, like there's like so many variations built by so many people. And one common theme is they're all using these fine-tuning or human preferences datasets that are very limited and they're not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or like Hugging Chat or Ollama, they don't really have feedback buttons. All the people using all these front-ends, they probably want to give feedback, but there's no way for them to give feedback… So we're just losing all of this feedback. Maybe open source models are being as used as GPT is at this point in like all kinds of, in a very fragmented way, like in aggregate all the open source models together are probably being used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is like negligible, maybe less than 1% of like the usage.
So I think like some, like the blueprint here I think is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, I think that probably has a real chance of the open source models having a runaway effect against OpenAI, I think like there's a clear chance we can take at truly winning open source."

If you're working on solving open source coordination, please get in touch!

Show Notes

* Soumith Chintala Twitter
* History of PyTorch episode on Gradient Podcast
* The Llama Ecosystem
* Apple's MLX
* Neural ODEs (Ordinary Differential Equations)
* AlphaGo
* LMSys arena
* Dan Pink's "Drive"
* Robotics projects:
* Dobb-E
* OK Robot
* Yann LeCun
* Yangqing Jia of Lepton AI
* Ed Catmull
* George Hotz on Latent Space
* Chris Lattner on Latent Space
* Guillaume Lample
* Yannic Kilcher of OpenAssistant
* LMSys
* Alex Atallah of OpenRouter
* Carlo Sferrazza's 3D tactile research
* Alex Wiltschko of Osmo
* Tangent by Alex Wiltschko
* Lerrel Pinto - Robotics

Timestamps

* [00:00:00] Introductions
* [00:00:51] Extrinsic vs Intrinsic Success
* [00:02:40] Importance of Open Source and Its Impact
* [00:03:46] PyTorch vs TinyGrad
* [00:08:33] Why PyTorch is the Switzerland of frameworks
* [00:10:27] Modular's Mojo + PyTorch?
* [00:13:32] PyTorch vs Apple's MLX
* [00:16:27] FAIR / PyTorch Alumni
* [00:18:50] How can AI inference providers differentiate?
* [00:21:41] How to build good benchmarks and learnings from AnyScale's
* [00:25:28] Most interesting unexplored ideas
* [00:28:18] What people get wrong about synthetic data
* [00:35:57] Meta AI's evolution
* [00:38:42] How do you allocate 600,000 GPUs?
* [00:42:05] Even the GPU Rich are GPU Poor
* [00:47:31] Meta's MTIA silicon
* [00:50:09] Why we need open source
* [00:59:00] Open source's coordination problem for feedback gathering
* [01:08:59] Beyond text generation
* [01:15:37] Osmo and the Future of Smell Recognition Technology

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala, welcome.

Soumith [00:00:17]: Thanks for having me.

Swyx [00:00:18]: On one of your rare visits from New York where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast. So if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I just was marveling at your luck, or I don't know if it's your luck or your drive to find AI early and then find the right quality mentor because I guess Yan really sort of introduced you to that world.

Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think is fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. But if I happen, the first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy.
It's just like maybe like the perception of, oh, is this person successful or not might be different. I think like after a baseline, like your happiness is probably more correlated with your intrinsic stuff.Swyx [00:01:44]: Yes. I think Dan Pink has this book on drive that I often refer to about the power of intrinsic motivation versus extrinsic and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX. Yes.Soumith [00:02:01]: I mean, in a very convoluted way.Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed or didn't get accepted by Disney and then went and created Pixar and then got bought by Disney and created Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the gradient. I think maybe people don't know that you also involved in more sort of hardware and cluster decision affair. And we can dive into more details there because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for, like I think I would do that as a passion project on the side.Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to the sort of Meta AI in general.PyTorch vs Tinygrad tradeoffsAlessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes. So we had George Hotz from TinyGrad. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch design direction as far as, I know you talk a lot about kind of having a happy path to start with and then making complexity hidden away but then available to the end user. One of the things that George mentioned is I think you have like 250 primitive operators in PyTorch, I think TinyGrad is four. So how do you think about some of the learnings that maybe he's going to run into that you already had in the past seven, eight years almost of running PyTorch?Soumith [00:04:24]: Yeah, I think there's different models here, but I think it's two different models that people generally start with. Either they go like, I have a grand vision and I'm going to build a giant system that achieves this grand vision and maybe one is super feature complete or whatever. Or other people say they will get incrementally ambitious, right? 
And they say, oh, we'll start with something simple and then we'll slowly layer out complexity in a way that optimally applies Huffman coding or whatever. Like where the density of users are and what they're using, I would want to keep it in the easy, happy path and where the more niche advanced use cases, I'll still want people to try them, but they need to take additional frictional steps. George, I think just like we started with PyTorch, George started with the incrementally ambitious thing. I remember TinyGrad used to be, like we would be limited to a thousand lines of code and I think now it's at 5,000. So I think there is no real magic to which why PyTorch has the kind of complexity. I think it's probably partly necessitated and partly because we built with the technology available under us at that time, PyTorch is like 190,000 lines of code or something at this point. I think if you had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way for sure. But a lot of that complexity comes from the fact that in a very simple, explainable way, you have memory hierarchies. You have CPU has three levels of caches and then you have DRAM and SSD and then you have network. Similarly, GPU has several levels of memory and then you have different levels of network hierarchies, NVLink plus InfiniBand or Rocky or something like that, right? And the way the flops are available on your hardware, they are available in a certain way and your computation is in a certain way and you have to retrofit your computation onto both the memory hierarchy and like the flops available. When you're doing this, it is actually a fairly hard mathematical problem to do this setup, like you find the optimal thing. And finding the optimal thing is, what is optimal depends on the input variables themselves. So like, okay, what is the shape of your input tensors and what is the operation you're trying to do and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. Like for example, just as the shape of the tensors change, let's say you have three input tensors into a Sparstar product or something like that. The shape of each of these input tensors will vastly change how you do this optimally placing this operation onto the hardware in a way that will get you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator and templatizing these things and symbolically generating the final CUDA code or CPU code. There's no way to avoid it because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also should be willing to eat the long compile time. So if searching for that optimal performance at runtime, but that's the trade off. There's no, like, I don't think unless we have great breakthroughs George's vision is achievable, he should be thinking about a narrower problem such as I'm only going to make this for work for self-driving car connets or I'm only going to make this work for LLM transformers of the llama style. Like if you start narrowing the problem down, you can make a vastly simpler framework. 
But if you don't, if you need the generality to power all of the AI research that is happening and keep zero compile time and in all these other factors, I think it's not easy to avoid the complexity.Pytorch vs MojoAlessio [00:08:33]: That's interesting. And we kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target. They have the hardware target. They have different things to think about. He mentioned when he was at Google, TensorFlow trying to be optimized to make TPUs go brr, you know, and go as fast. I think George is trying to make especially AMD stack be better than ROCm. How come PyTorch has been such as Switzerland versus just making Meta hardware go brr?Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is we're funding it because it's net good for Meta to fund PyTorch because PyTorch has become a standard and a big open source project. And generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? I think the way we think about it is not in terms of Switzerland or not. We actually the way we articulate it to all hardware vendors and software vendors and all who come to us being we want to build a backend in core for PyTorch and ship it by default is we just only look at our user side of things. Like if users are using a particular piece of hardware, then we want to support it. We very much don't want to king make the hardware side of things. So as the MacBooks have GPUs and as that stuff started getting increasingly interesting, we pushed Apple to push some engineers and work on the NPS support and we spend significant time from Meta funded engineers on that as well because a lot of people are using the Apple GPUs and there's demand. So we kind of mostly look at it from the demand side. We never look at it from like oh which hardware should we start taking opinions on.Swyx [00:10:27]: Is there a future in which, because Mojo or Modular Mojo is kind of a superset of Python, is there a future in which PyTorch might use Mojo features optionally?Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. So if Mojo is like a pip install and it's readily available and users feel like they can use Mojo so smoothly within their workflows in a way that just is low friction, we would definitely look into that. Like in the same way PyTorch now depends on Triton, OpenAI Triton, and we never had a conversation that was like huh, that's like a dependency. Should we just build a Triton of our own or should we use Triton? It almost doesn't, like those conversations don't really come up for us. The conversations are more well does Triton have 10,000 dependencies and is it hard to install? We almost don't look at these things from a strategic leverage point of view. We look at these things from a user experience point of view, like is it easy to install? Is it smoothly integrated and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.Swyx [00:11:37]: You're inclusive by default as long as it meets the minimum bar of, yeah, but like maybe I phrased it wrongly. 
Maybe it's more like what problems would you look to solve that you have right now?Soumith [00:11:48]: I think it depends on what problems Mojo will be useful at.Swyx [00:11:52]: Mainly a performance pitch, some amount of cross compiling pitch.Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was like, we're going to be performant even if you have a lot of custom stuff, you're going to write arbitrary custom things and we will be performant. And that value proposition is not clear to us from the PyTorch side to consider it for PyTorch. So PyTorch, it's actually not 250 operators, it's like a thousand operators. PyTorch exposes about a thousand operators and people kind of write their ideas in the thousand operators of PyTorch. Mojo is like, well, maybe it's okay to completely sidestep those thousand operators of PyTorch and just write it in a more natural form. Just write raw Python, write for loops or whatever, right? So from the consideration of how do we intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. And so we can probably figure out how to make it easier for say Torch.compile to smoothly also consume Mojo subgraphs and like, you know, the interoperability being actually usable, that I think is valuable. But Mojo as a fundamental front end would be replacing PyTorch, not augmenting PyTorch. So in that sense, I don't see a synergy in more deeply integrating Mojo.Pytorch vs MLXSwyx [00:13:21]: So call out to Mojo whenever they have written something in Mojo and there's some performance related thing going on. And then since you mentioned Apple, what should people think of PyTorch versus MLX?Soumith [00:13:32]: I mean, MLX is early and I know the folks well, Ani used to work at FAIR and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now. It has a happy path because it's defined its product in a narrow way. At some point MLX either says we will only be supporting Apple and we will just focus on enabling, you know, there's a framework if you use your MacBook, but once you like go server side or whatever, that's not my problem and I don't care. For MLS, it enters like the server side set of things as well. Like one of these two things will happen, right? If the first thing will happen, like MLX's overall addressable market will be small, but it probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They will not have any magic wand and they will have more complex work to do. They probably wouldn't be able to move as fast.Swyx [00:14:44]: Like having to deal with distributed compute?Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, like just like having a generalization of the concept of a backend, how they treat compilation with plus overheads. Right now they're deeply assumed like the whole NPS graph thing. So they need to think about all these additional things if they end up expanding onto the server side and they'll probably build something like PyTorch as well, right? Like eventually that's where it will land. And I think there they will kind of fail on the lack of differentiation. Like it wouldn't be obvious to people why they would want to use it.Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers. 
I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, maybe, like then it can become a thing.Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.Soumith [00:15:43]: I think the moat that NVIDIA has right now, I feel is that they have the interconnect that no one else has, like AMD GPUs are pretty good. I'm sure there's various silicon that is not bad at all, but the interconnect, like NVLink is uniquely awesome. I'm sure the other hardware providers are working on it, but-Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. I mean, the rest of us just like, you know, we hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the acquisition maybe like 15 years ago.Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.PyTorch MafiaAlessio [00:16:27]: Who are some of the other FAIR PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton, and Yangqing, you knew since college when he was building Coffee?Soumith [00:16:40]: Yeah, so Yangqing and I used to be framework rivals, PyTorch, I mean, we were all a very small close-knit community back then. Caffe, Torch, Theano, Chainer, Keras, various frameworks. I mean, it used to be more like 20 frameworks. I can't remember all the names. CCV by Liu Liu, who is also based out of SF. And I would actually like, you know, one of the ways it was interesting is you went into the framework guts and saw if someone wrote their own convolution kernel or they were just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia, I forgot the name, but I remembered who was awesome enough to have written their own kernel. And at some point there, I built out these benchmarks called ConNet benchmarks. They're just benchmarking all the convolution kernels that are available at that time. It hilariously became big enough that at that time AI was getting important, but not important enough that industrial strength players came in to do these kinds of benchmarking and standardization. Like we have MLPerf today. So a lot of the startups were using ConNet benchmarks in their pitch decks as like, oh, you know, on ConNet benchmarks, this is how we fare, so you should fund us. I remember Nirvana actually was at the top of the pack because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but separate times. But to answer your question, Alessio, I think mainly Lepton, Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. They're just people who worked within the PyTorch Cafe2 cohort of things and now end up at various other places.Swyx [00:18:50]: I think as a, both as an investor and a people looking to build on top of their services, it's a uncomfortable slash like, I don't know what I don't know pitch. Because I've met Yang Tsing and I've met Lin Chao. Yeah, I've met these folks and they're like, you know, we are deep in the PyTorch ecosystem and we serve billions of inferences a day or whatever at Facebook and now we can do it for you. And I'm like, okay, that's great. 
Like, what should I be wary of or cautious of when these things happen? Because I'm like, obviously this experience is extremely powerful and valuable. I just don't know what I don't know. Like, what should people know about like these sort of new inference as a service companies?Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. So if they've been at a large company, but they've been doing amazing work, you would be thinking about it as what these people bring to the table is that they're really good at like GPU programming or understanding the complexity of serving models once it hits a certain scale. You know, various expertise like from the infra and AI and GPUs point of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to think about running a business, understanding how to be disciplined about making money or, you know, various things like that.Swyx [00:20:23]: Maybe I'll put it like, actually I will de-emphasize the investing bit and just more as a potential customer. Oh, okay. Like, it's more okay, you know, you have PyTorch gods, of course. Like, what else should I know?Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...Swyx [00:20:44]: Benchmarks.Soumith [00:20:44]: Yeah, I use it and it's usability and reliability and speed, right?Swyx [00:20:51]: Quality as well.Soumith [00:20:51]: Yeah, if someone from some random unknown place came to me and say, user stuff is great. Like, and I have the bandwidth, I probably will give it a shot. And if it turns out to be great, like I'll just use it.Benchmark dramaSwyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up Confident Benchmarks. There was some recent drama around AnyScale. AnyScale released their own benchmarks and obviously they look great on their own benchmarks, but maybe didn't give the other... I feel there are two lines of criticism. One, which is they didn't test some apples for apples on the kind of endpoints that the other providers, that they are competitors with, on their benchmarks and that is due diligence baseline. And then the second would be more just optimizing for the right thing. You had some commentary on it. I'll just kind of let you riff.Soumith [00:21:41]: Yeah, I mean, in summary, basically my criticism of that was AnyScale built these benchmarks for end users to just understand what they should pick, right? And that's a very good thing to do. I think what they didn't do a good job of is give that end user a full understanding of what they should pick. Like they just gave them a very narrow slice of understanding. I think they just gave them latency numbers and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not oh, one API call is one cent, but a thousand API calls are 10 cents. Like people can misprice to cheat on those benchmarks. So you want to understand, okay, like how much is it going to cost me if I actually subscribe to you and do like a million API calls a month or something? And then you want to understand the latency and reliability, not just from one call you made, but an aggregate of calls you've made over several various times of the day and times of the week. 
And the nature of the workloads, is it just some generic single paragraph that you're sending that is cashable? Or is it like testing of real world workload? I think that kind of rigor, like in presenting that benchmark wasn't there. It was a much more narrow sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure if before they released it, they showed it to their other stakeholders who would be caring about this benchmark because they are present in it, they would have easily just pointed out these gaps. And I think they didn't do that and they just released it. So I think those were the two main criticisms. I think they were fair and Robert took it well.Swyx [00:23:40]: And he took it very well. And we'll have him on at some point and we'll discuss it. But I think it's important for, I think the market being maturing enough that people start caring and competing on these kinds of things means that we need to establish what best practice is because otherwise everyone's going to play dirty.Soumith [00:23:55]: Yeah, absolutely. My view of the LLM inference market in general is that it's the laundromat model. Like the margins are going to drive down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for and then how much you sell the API and how much latency your customers are willing to let go. You need to figure out how to squeeze your margins. Like what is your unique thing here? Like I think Together and Fireworks and all these people are trying to build some faster CUDA kernels and faster, you know, hardware kernels in general. But those modes only last for a month or two. These ideas quickly propagate.Swyx [00:24:38]: Even if they're not published?Soumith [00:24:39]: Even if they're not published, the idea space is small. So even if they're not published, the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial thing that is really large. You're talking about Llama style LLM models. And we're going to beat those to death on a few different hardware SKUs, right? Like it's not even we have a huge diversity of hardware you're going to aim to run it on. Now when you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas are going to get figured out is going to be pretty rapid.Swyx [00:25:15]: Is it a standard bag of tricks? Like the standard one that I know of is, you know, fusing operators and-Soumith [00:25:22]: Yeah, it's the standard bag of tricks on figuring out how to improve your memory bandwidth and all that, yeah.Alessio [00:25:28]: Any ideas instead of things that are not being beaten to death that people should be paying more attention to?Novel PyTorch ApplicationsSwyx [00:25:34]: One thing I was like, you know, you have a thousand operators, right? Like what's the most interesting usage of PyTorch that you're seeing maybe outside of this little bubble?Soumith [00:25:41]: So PyTorch, it's very interesting and scary at the same time, but basically it's used in a lot of exotic ways, like from the ML angle, what kind of models are being built? And you get all the way from state-based models and all of these things to stuff nth order differentiable models, like neural ODEs and stuff like that. I think there's one set of interestingness factor from the ML side of things. And then there's the other set of interesting factor from the applications point of view. 
It's used in everything from Mars Rover simulations, to drug discovery, to Tesla cars. And there's a huge diversity of applications in which it is used. So in terms of the most interesting application side of things, I think I'm scared at how many things it is used in that are also very critical and really important. I think the scariest was when I went to visit CERN at some point and they said they were using PyTorch and they were using GANs at the same time for particle physics research. And I was scared more about the fact that they were using GANs than that they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting. How many different things it is being used in. I think that's the most interesting to me from the applications perspective. From the models perspective, I think I've seen a lot of them. Like the really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models, like the whole AlphaGo style of models is one example. And then I think we're attempting to do it for LLMs as well, with various reward models and search. I mean, I don't think PyTorch is being used in this, but the whole AlphaGeometry thing was interesting because again, it's an example of combining the symbolic models with the gradient based ones. But there is stuff like AlphaGeometry where PyTorch is used, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, maybe from the ML side, those things to me are very interesting right now.Swyx [00:28:03]: Yeah. People are very excited about the AlphaGeometry thing. And it's kind of like, for me, it's theoretical. It's great. You can solve some Olympiad questions. I'm not sure how to make that bridge over into the real world applications, but I'm sure people smarter than me will figure it out.Synthetic Data vs Symbolic ModelsSoumith [00:28:18]: Let me give you an example of it. You know how the whole thing about synthetic data will be the next rage in LLMs is a thing?Swyx [00:28:27]: Already is a rage.Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world or made up certain symbolic models because of human innate biases. So we've figured out how to ground particle physics in a 30 parameter model. And it's just very hard to compute, as in it takes a lot of flops to compute, but it only has 30 parameters or so. I mean, I'm not a physics expert, but it's a very low rank model. We built mathematics as a field that basically is very low rank. Language, a deep understanding of language, like the whole syntactic parse trees and just understanding how language can be broken down into a formal symbolism is something that we figured out. So we basically as humans have accumulated all this knowledge on these subjects, either synthetic, we created those subjects in our heads, or we grounded some real world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly. The only way we have to teach them is generating a bunch of inputs and outputs and gradient descending over them. 
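A minimal sketch of what Soumith is describing, assuming a toy symbolic model (the projectile-range formula, chosen purely for illustration): generate input-output pairs from the closed-form model, then gradient-descend an over-parameterized network onto the same low-rank knowledge.

import torch
from torch import nn

# Symbolic "world model": projectile range as a closed-form physics formula (illustrative choice).
def true_range(v, theta, g=9.8):
    return v ** 2 * torch.sin(2 * theta) / g

# Generate synthetic input/output pairs from the symbolic model...
v = torch.rand(10_000, 1) * 50          # launch speeds
theta = torch.rand(10_000, 1) * 1.5     # launch angles in radians
x, y = torch.cat([v, theta], dim=1), true_range(v, theta)

# ...and ask an over-parameterized network to learn what the tiny formula already encodes.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()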
So in areas where we have the symbolic models and we need to teach all the knowledge we have that is better encoded in the symbolic models, what we're doing is we're generating a bunch of synthetic data, a bunch of input output pairs, and then giving that to the neural network and asking it to learn, via gradient descent in a much more over-parameterized way, the same thing that we already have a better low rank model of. Outside of this, like where we don't have good symbolic models, synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand that will work in every case or whatever. It's just useful where we as humans already have good symbolic models. We need to impart that knowledge to neural networks and we figured out that synthetic data is a vehicle to impart this knowledge. But people, maybe because they don't know enough about synthetic data as a notion, hear, you know, that the next wave of data revolution is synthetic data. They think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and then they think that's just a revolution. And I think that's maybe a gap in understanding most people have in this hype cycle.Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there's two more that I'll put in front of you and then you can see what you respond. One is, you know, I have this joke that it's, you know, it's only synthetic data if it's from the Mistral region of France, otherwise it's just a sparkling distillation, which is what Nous Research is doing. Like they're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi 2 and then fine tuning open source models like Llama. And so I don't know, I mean, I think that's, should we call that synthetic data? Should we call it something else? I don't know.Soumith [00:31:57]: Yeah, I mean, the outputs of LLMs, are they synthetic data? They probably are, but I think it depends on the goal you have. If your goal is you're creating synthetic data with the goal of trying to distill GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels disingenuous because your goal is I need to copy the behavior of GPT-4 and-Swyx [00:32:25]: It's also not just behavior, but data set. So I've often thought of this as data set washing. Like you need one model at the top of the chain, you know, unnamed French company that has that, you know, makes a model that has all the data in it that we don't know where it's from, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very, very accepted use of synthetic.Soumith [00:32:53]: Correct. I think it's a very interesting time where we don't really have good social models of what is acceptable depending on how many bits of information you use from someone else, right? It's like, okay, you use one bit. Is that okay? Yeah, let's accept it to be okay. Okay, what about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as a society have ever been in this conundrum where we have to be like, where is the boundary of copyright or where is the boundary of socially accepted understanding of copying someone else? We haven't been tested on this mathematically before,Swyx [00:33:38]: in my opinion. Whether it's transformative use. Yes. 
So yeah, I think this New York Times OpenAI case is gonna go to the Supreme Court and we'll have to decide it because I think we never had to deal with it before. And then finally, for synthetic data, the thing that I'm personally exploring is solving this great stark paradigm difference between RAG and fine tuning, where you can kind of create synthetic data off of your retrieved documents and then fine tune on that. That's kind of synthetic. All you need is variation or diversity of samples for you to fine tune on. And then you can fine tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.Soumith [00:34:13]: I think you're basically trying to, what you're doing is you're saying, well, language, I know how to parametrize language to an extent. And I need to teach my model variations of this input data so that it's resilient or invariant to language uses of that data.Swyx [00:34:32]: Yeah, it doesn't overfit on the wrong source documents.Soumith [00:34:33]: So I think that's 100% synthetic. You understand, the key is you create variations of your documents and you know how to do that because you have a symbolic model or like some implicit symbolic model of language.Swyx [00:34:48]: Okay.Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp is the inability of transformers to deal with numbers because of the tokenizer. Is it a fundamental issue there too? And do you see alternative architectures that will be better with symbolic understanding?Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, like combining the tokenizer and transformers and the dynamics of training, when you show math heavy questions versus not. I don't have a good calibration of whether I know the answer or not. You know, there are common criticisms that, you know, transformers will just fail at X. But then when you scale them up to sufficient scale, they actually don't fail at that X. I think there's this entire subfield where they're trying to figure out these answers, called the science of deep learning or something. So we'll get to know more. I don't know the answer.Meta AI and Llama 2/3Swyx [00:35:57]: Got it. Let's touch a little bit on just Meta AI and, you know, stuff that's going on there. Maybe, I don't know how deeply you're personally involved in it, but you're our first guest with Meta AI, which is really fantastic. And Llama 1 was, you know, you are such a believer in open source. Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering this on the podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling laws for open source models or smaller models or whatever that design decision was when you guys were doing it?Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of because we bridged the gap in understanding of how complex it is to train these models to the world. Like until then, no one really published in gory detail.Swyx [00:36:50]: The logs.Soumith [00:36:51]: Yeah. Like, why is it complex? And everyone says, oh, it's complex. But no one really talked about why it's complex. 
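On Alessio's point above about tokenizers and numbers, one concrete way to see the issue is to look at how a BPE tokenizer splits digit strings; the GPT-2 tokenizer here is just an arbitrary, freely available example.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
for s in ["12345", "12346", "3.14159"]:
    print(s, "->", tok.tokenize(s))
# Multi-digit numbers are typically chopped into arbitrary BPE chunks, so two adjacent
# integers can end up with very different token sequences, which is one reason arithmetic
# is awkward for a tokenizer-plus-transformer stack.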
I think OPT was cool.Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.Soumith [00:37:05]: We probably, I think, didn't train it for long enough, right? That's kind of obvious in retrospect.Swyx [00:37:12]: For a 175B. Yeah. You trained it according to Chinchilla at the time or?Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we trained OPT longer, it would actually end up being better. Llama 1, I think, was Guillaume Lample and team. Guillaume is fantastic and went on to build Mistral. I wasn't too involved in that side of things. So I don't know what you're asking me, which is how did they think about scaling laws and all of that? Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the next evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. And we needed more data and we needed to train the models for longer. And we made, I think, a few tweaks to the architecture and we scaled up more. And that was Llama 2. I think Llama 2, you can think of it as after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, I think, who's the first author, is fantastic. And I think he did play a reasonably big role in Llama 1 as well.Soumith [00:38:35]: And he overlaps between Llama 1 and 2. So in Llama 3, obviously, hopefully, it'll be awesome.Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34B and 70B parameter models still seem kind of steep. Like they could go lower. How, from an infrastructure level, how do you allocate resources? Could they have just gone longer or were you just, hey, this is all the GPUs that we can burn and let's just move on to Llama 3 and then make that one better?Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, I mean, Mark has already said some numbers, right?Swyx [00:39:20]: So let's cite those things again. All I remember is like 600K GPUs.Soumith [00:39:24]: That is by the end of this year and 600K H100 equivalents. With 250K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.Swyx [00:39:38]: That's a lot of GPUs.Soumith [00:39:39]: We'll talk about that separately. But the way we think about it is we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs. Like-Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one and when do you start training the next one? And how do you make those decisions? The data, do you have net new data, better clean data for the next one in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when is the iPhone 1? When do you start working on iPhone 2? Where is the iPhone? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.Alessio [00:40:31]: So one of the things with the scaling laws, like Chinchilla is optimal to balance training and inference costs. 
I think at Meta's scale, you would rather pay a lot more maybe at training and then save on inference. How do you think about that from infrastructure perspective? I think in your tweet, you say you can try and guess on like how we're using these GPUs. Can you just give people a bit of understanding? It's like, because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs and that's obviously not true, I'm sure. How do you allocate between the research, FAIR and the Llama training, the inference on Instagram suggestions that get me to scroll, like AI-generated stickers on WhatsApp and all of that?Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's like how we would allocate resources of any other kinds at any company. You run a VC portfolio, how do you allocate your investments between different companies or whatever? You kind of make various trade-offs and you kind of decide, should I invest in this project or this other project, or how much should I invest in this project? It's very much a zero sum of trade-offs. And it also comes into play, how are your clusters configured, like overall, what you can fit of what size and what cluster and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but also wouldn't add more understanding. It's just gonna be like, oh, okay, I mean, this looks like they just think about this as I would normally do.Alessio [00:42:05]: So even the GPU rich run through the same struggles of having to decide where to allocate things.Soumith [00:42:11]: Yeah, I mean, at some point I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, like we feel like we have sufficient compute and we couldn't have done better. So that conversation, I don't think I've heard from any of my friends at other companies.EleutherSwyx [00:42:47]: Stella from Eleuther sometimes says that because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.Soumith [00:42:57]: I mean, that's a cool, high conviction opinion that might pay out.Swyx [00:43:01]: Why?Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take about in this climate and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? So everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not because, oh yeah, someone else did the same thing you did. It's like, there's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.Making BetsAlessio [00:43:53]: And how do you reconcile that with how we started the discussion about intrinsic versus extrinsic kind of like accomplishment or success? How should people think about that especially when they're doing a PhD or early in their career? 
I think in Europe, I walked through a lot of the posters and whatnot, there seems to be mode collapse in a way in the research, a lot of people working on the same things. Is it worth it for a PhD student to not take a bet on something that is maybe not as interesting just because of funding and visibility and whatnot? Or yeah, what suggestions would you give?Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Like whatever reasonable normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. Like you wouldn't wanna be doing something so obscure that people are like, I don't know, like you can work on it.Swyx [00:44:59]: Would a limit on fundability be, I'm just observing, something like three months of compute, right? That's the top line, that's the like max that you can spend on any one project.Soumith [00:45:09]: But like, I think that's very ill specified, like how much compute, right? I think that the notion of fundability is broader. It's more like, hey, is this family of models within the acceptable set of, you're not crazy or something, right? Even something like neural ODEs, which is a very boundary pushing thing, or state space models or whatever. Like all of these things I think are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable. Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of these podcasts, then it's more questionable. The way I think about it is, you need to figure out how you can be in the baseline level of fundability just so that you can just live. And then after that, really focus on intrinsic motivation and, depending on your strengths, how you can play to your strengths and your interests at the same time. Like I try to look at a bunch of ideas that are interesting to me, but also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like, and that also play to your strengths. And I'd go from there. Everything else, like actually finding extrinsic success and all of that, the way I think about it, is somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion, that's for all Meta. So including your own inference needs, right? It's not just about training.Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then is there interest in like your own hardware?MTIASoumith [00:47:31]: We already talked about our own hardware. It's called MTIA. 
Our own silicon, I think we've even showed the standard photograph of you holding the chip that doesn't work. Like as in the chip that you basically just get like-Swyx [00:47:51]: As a test, right?Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon and we'll probably talk more about it when the time is right, but-Swyx [00:48:00]: Like what gaps do you have that the market doesn't offer?Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about there's this memory hierarchy and like sweet spots and all of that? Fundamentally, when you build a hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware efficient it's going to be, the more power efficient it's gonna be, the more easier it's going to be to find the software, like the kernel's right to just map that one or two workloads to that hardware and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume, enough workload, you can specialize it and get some efficiency gains, like power gains and so on. So the way you can think about everyone building, every large company building silicon, I think a bunch of the other large companies are building their own silicon as well, is they, each large company has a sufficient enough set of verticalized workloads that can be specialized that have a pattern to them that say a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale and you have sufficient forecasted stability that those workloads will exist in the same form, that it's worth spending the time to build out a chip to exploit that sweet spot. Like obviously something like this is only useful if you hit a certain scale and that your forecasted prediction of those kind of workloads being in the same kind of specializable exploitable way is true. So yeah, that's why we're building our own chips.Swyx [00:50:08]: Awesome.Open Source AIAlessio [00:50:09]: Yeah, I know we've been talking a lot on a lot of different topics and going back to open source, you had a very good tweet. You said that a single company's closed source effort rate limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been doing and maybe directions of the whole open source AI space?Soumith [00:50:32]: Yeah, in general, I think first, I think it's worth talking about this in terms of open and not just open source, because like with the whole notion of model weights, no one even knows what source means for these things. But just for the discussion, when I say open source, you can assume it's just I'm talking about open. And then there's the whole notion of licensing and all that, commercial, non-commercial, commercial with clauses and all that. I think at a fundamental level, the most benefited value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. 
But like the fact that I can use it and do something with it is very transformative to me. Like I got this thing in a very accessible way. And then it's various degrees, right? And then if it's open source, but it's actually a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed source company for. So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open sourcing it. And that kind of effort is not really feasible by, say, a band of volunteers doing it the same way. So there's both a capital and operational expenditure that the large company just decided to ignore and give it away to the world for some benefits of some kind. They're not as tangible as direct revenue. So in that part, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open sourced Llama and that family of models and several other fairly transformative projects. FAISS is one, Segment Anything, Detectron, Detectron 2. Dense Pose. I mean, it's-Swyx [00:52:52]: Seamless. Yeah, seamless.Soumith [00:52:53]: Like it's just the list is so long that we're not gonna cover it all. So I think Meta comes into that category where we spend a lot of CapEx and OpEx and we have a high talent density of great AI people and we open our stuff. And the thesis for that, I remember when FAIR was started, the common thing was like, wait, why would Meta wanna start an open AI lab? Like what exactly is the benefit from a commercial perspective? And the thesis then was very simple. It was: AI is currently rate limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other factors. Like AI was the limiting factor and we just wanted AI to advance more and we didn't care if the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab and we said, if this helps accelerate the progress of AI, that's strictly great for us. But very easy, rational, right? Still the same to a large extent with the Llama stuff. And it's the same values, but the argument, it's a bit more nuanced. And then there's a second kind of open source, which is, oh, we built this project, nights and weekends and we're very smart people and we open sourced it and then we built a community around it. This is the Linux kernel and various software projects like that. So I think about open source, like both of these things being beneficial and both of these things being different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done. If someone's not really looking at a particular space because it's not commercially viable or whatever, like a band of volunteers can just coordinate online and do something and then make that happen. And that's great.Open Source LLMsI wanna cover a little bit about open source LLMs maybe. So open source LLMs have been very interesting because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or something. Like where more and more pressure within the community was to open source their stuff so that their methods and stuff get adopted. 
And then the LLMs revolution kind of had the opposite effect. OpenAI stopped open sourcing their stuff and DeepMind kind of didn't either, like all the other cloud and all these other providers, they didn't open source their stuff. And it was not good in the sense that, first, science done in isolation probably will just form its own bubble where people believe their own b******t or whatever. So there's that problem. And then there was the other problem which was the accessibility part. Like, okay, I again always go back to I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control thing. I strongly believe if you want human aligned stuff, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. Like all the friends I hang out with talk about some random thing like Dyson Spheres or whatever, that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think open source, the distribution of open source powers a certain kind of non-falsifiability that I think is very important. I think on the open source models, like it's going great in the fact that LoRA I think came out of the necessity of open source models needing to be fine-tunable in some way. Yeah, and I think DPO also came out of the academic open source side of things. So do any of the closed source labs, did any of them already have LoRA or DPO internally? Maybe, but that does not advance humanity in any way. It advances some company's probability of doing the winner takes all that I talked about earlier in the podcast.Open Source and TrustI don't know, it just feels fundamentally good. Like when people try to, you know, people are like, well, what are the ways in which it is not okay? I find most of these arguments, and this might be a little controversial, but I find a lot of arguments based on whether closed source models are safer or open source models are safer very much related to what kind of culture they grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government, obviously it's corrupt or whatever, then I think the open source argument is what they take. I think there's a deep connection to like people's innate biases from their childhood and their trust in society and governmental aspects that push them towards one opinion or the other. And I'm definitely in the camp of open source is definitely going to actually have better outcomes for society. Closed source to me just means centralization of power, which, you know, is really hard to trust. So I think it's going well.
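The LoRA-style fine-tuning Soumith credits to the open source community can be sketched with the Hugging Face peft library; the base checkpoint, rank, and target modules below are placeholder choices for illustration, not settings from the episode.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder checkpoint
config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attach adapters to the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # only the small adapter matrices are trainable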
Microsoft's layoffs have sent the FTC into a tizzy, adding fuel to the ongoing fire. Also: It's earnings season, with EA, Microsoft, Capcom, Nintendo, Square Enix, Paradox, Take-Two, Sega, and Meta reporting. We also cover the latest in the layoff storm that continues to drown the industry and Disney's $1.5B investment in Epic. This episode is sponsored by Magic Mind, a supplement shot that includes ingredients to lower your stress, reduce fatigue, and boost your immune system. You can get 20% off your order of Magic Mind at https://www.magicmind.com using the code VirtualEconomy20. You can support Virtual Economy's growth via our Ko-Fi and also purchase Virtual Economy merchandise! TIME STAMPS [00:01:14] - EA Earnings [00:12:44] - Microsoft Earnings [00:22:59] - Capcom Earnings [00:28:23] - Nintendo Earnings [00:31:09] - Square Enix Earnings [00:37:45] - Paradox Earnings [00:42:38] - Take-Two Earnings [00:49:17] - SegaSammy Earnings [00:53:09] - Meta Earnings (Brief) [00:55:57] - Investment Interlude [01:12:13] - Quick Hits [01:19:36] - Labor Report SOURCES Electronic Arts Reports Strong Q3 FY24 Results | EA Microsoft Earnings Release FY24 Q2 | Microsoft Capcom On Track to Achieve Full-Year Guidance | Capcom Consolidated Results for the Nine Months Ended December 31, 2022 and 2023 | Nintendo Consolidated Financial Results for the Nine-Month Period Ended December 31, 2023 | Square Enix Year-end report January – December 2023 | Paradox Take-Two Interactive Software, Inc. Reports Results for Fiscal Third Quarter 2024 | Take-Two SegaSammy Q3 Results Presentation | SegaSammy Meta Reality Labs loses $4.65B in Q4 | Meta Tencent CEO feels its game business "achieved nothing" during 2023 | Game Developer Embracer lays off staff at Star Trek: Infinite dev Nimble Giant | Game Developer Wayfinder dev Airship Syndicate lays off 12 employees | Game Developer Square Enix absorbs Tokyo RPG Factory | Gematsu Hidden Path Layoffs | Jeff Pobst on LinkedIn Roughly half of Devolver's Artificer studio laid off | Games Industry Devolver Digital CEO Douglas Morin steps down | Games Industry Layoffs at Netease, MiHoYo, Tencent | Core Esports The FTC is Going After Microsoft Over Layoffs | Tom Warren (@tomwarren) on Twitter …And Microsoft has Responded | Tom Warren (@tomwarren) on Twitter INVESTMENT INTERLUDE Nordcurrent snaps up Cinemaware library | Games Industry Disney and Epic Games to Create Expansive and Open Games and Entertainment Universe Connected to Fortnite | Disney Nitro Games receives further €3.5 million to continue Warframe mobile development | Pocket Gamer Six Nations-backer CVC in talks to buy Runescape-maker for £900m | Sky News FromSoftware owner Kadokawa acquires Octopath Traveler studio | Eurogamer
Dave's been the Director of Sales Development for EMEA at Snowflake for the last year. He shares his lessons from managing and coaching five SDR managers and 50 SDRs at a unicorn company valued at over $65B. *Subscribe to SDR Hire Insights newsletter: https://sdrhire.com/#newsletter In this episode, Srba and Dave discuss: - how Dave went from a BDR at Palo Alto Networks in 2013 to leading a global SDR team - what it's like leading a 50-person SDR team - how to build a successful SDR team from scratch - what are the winning traits of A players - what's the difference between a manager and a leader Hope you share and enjoy! - Connect with David on LinkedIn: https://www.linkedin.com/in/daveewilkins/ - Connect with Srba on LinkedIn: https://www.linkedin.com/in/srbamarkovic - Connect with Stefan on LinkedIn: https://www.linkedin.com/in/stefan-conic/ --------------------------------------- Resources and creators mentioned: - David's Ebook: Closing the Gap: Turbocharge Your SDR and Sales Team Alignment: https://payhip.com/b/qDTGY - Dave's newsletter: https://www.linkedin.com/newsletters/mastering-sales-development-7135961833558958080/ - Andy Laws: https://www.linkedin.com/in/andy-laws/ --------------------------------------- SDR Hire helps build remote SDR and sales teams. Experienced, trained, near-native, ready to be hired. Listen on YouTube, Apple, Spotify, or Google Podcasts.
* Hunter Biden: I am responsible for my mistakes. * House GOP Starts Contempt Proceedings Against Hunter Biden - Forbes. * December White House Survey, Paid for by the NRSC, Not authorized by any candidate or candidate's committee - NRSC.org * Who should be the Republican Presidential nominee in 2024? * How important is it for a Republican Presidential nominee to advocate for a SECURE southern border? * How important is it for a Republican Presidential nominee to be TOUGH on crime? * How important is it for a Republican Presidential nominee to UPHOLD conservative values? * Foreign investors are expected to pull $65B in capital out of China in 2024 as rising tensions and worry over the country's economy plague future outlooks - The Institute of International Finance (IIF). * Treasury Secretary Janet Yellen gave a speech on Thursday to celebrate the 50th anniversary of the US-China Business Council, emphasizing that the two countries should not economically "decouple," - Politico.
This week, we have a bonus mini episode where Jacquelyn talks with TechCrunch+ editor-in-chief Alex Wilhelm to dive back into the Sam Bankman-Fried trial and what has transpired in its second week. Major witnesses involved in the downfall of FTX and its sister company Alameda testified, including Gary Wang, CTO and cofounder of FTX, and Caroline Ellison, CEO of Alameda. The two of them pleaded guilty to a number of charges and could face maximum sentences of 50 and 110 years, respectively. It's also worth noting Wang and Ellison testified as part of a cooperation agreement for pleading guilty. Jacquelyn and Alex talk about key points from the trial, anecdotes that you can't read on a transcript and what she anticipates from both the prosecutors and defense going forward. Want more? Here's the latest on the SBF trial: Former Alameda CEO Caroline Ellison explains how FTX hid losses, sandbagged lendersAlameda Research allegedly paid Chinese officials around $150M to regain $1B worth of exchange accountsAlameda Research's ex-CEO Caroline Ellison testifies, claims SBF directed her to commit crimesAlameda had a $65B line of credit and ‘unlimited withdrawals'As SBF's trial heads into its second week, here's what we know so farChain Reaction comes out every Thursday at 12:00 p.m. ET, so be sure to subscribe to us on Apple Podcasts, Spotify or your favorite pod platform to keep up with the action.
*) Türkiye destroys terror targets in Syria Türkiye has hit positions of the terrorist group YPG/PKK as part of its anti-terror operations in northern Syria, destroying dozens of targets and "neutralising" many terrorists. Thirty targets, including an oil well, a storage facility, caves, bunkers, shelters and warehouses used by terrorists, were destroyed, the Turkish Defence Ministry said in a statement. The Turkish anti-terror operation came after the PKK/YPG attacked the Interior Ministry in the Turkish capital Ankara last Sunday. *) Bangladesh gets first uranium shipment from Russia for nuclear power plant Bangladesh has received the first Russian shipment of uranium fuel for its $12.65B debut nuclear power plant, making it the 33rd country in the world to produce nuclear power. The South Asian country is building the first of two nuclear power plants in collaboration with Russian state-owned atomic company Rosatom. Ninety percent of the project is financed through a Russian loan repayable within 28 years with a 10-year grace period. *) Biden to extend US border wall with Mexico using Trump-era funds US President Joe Biden has defended his plans to extend the border wall with Mexico, saying he didn't think such barriers worked, but he was bound by laws introduced under former president Donald Trump. Biden, who is polling neck-and-neck with rival Trump ahead of a likely 2024 rematch, insisted on Thursday his predecessor had tied his hands on the wall-building. "They have to use the money for what it was appropriated for. I can't stop that," he told reporters in the Oval Office. The US president also stressed that the border wall is ineffective. *) Drone attack on Syrian military ceremony kills 80, wounds 240 An attack on a crowded Syrian military graduation ceremony has killed 80 people and wounded 240 others. Civilians, including children, and military personnel were among the dead. Syria's regime said in an earlier statement that drones laden with explosives targeted the ceremony. No group has claimed responsibility for the attack. *) Eastern Canada breaks autumn heat records Eastern Canada has shattered heat records this week with temperatures close to 30 degrees Celsius, worrying experts and everyday people struggling to cope with extreme weather made worse by the climate crisis. In the last three days, heat records were broken in Quebec and adjacent provinces. On Wednesday, the mercury reached 29.3 degrees Celsius in Montreal, surpassing the record of 26.7 degrees set in 2005.
We present Position Interpolation (PI) that extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 tokens with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language modeling, and long document summarization, from LLaMA 7B to 65B. Meanwhile, models extended by Position Interpolation preserve quality relatively well on tasks within their original context window. To achieve this goal, Position Interpolation linearly down-scales the input position indices to match the original context window size, rather than extrapolating beyond the trained context length, which may lead to catastrophically high attention scores that completely ruin the self-attention mechanism. Our theoretical study shows that the upper bound of interpolation is at least $\sim 600\times$ smaller than that of extrapolation, further demonstrating its stability. Models extended via Position Interpolation retain their original architecture and can reuse most pre-existing optimization and infrastructure. 2023: Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian https://arxiv.org/pdf/2306.15595v2.pdf
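A rough sketch of the idea, assuming the standard RoPE angle computation; the context lengths and head dimension are illustrative. Extrapolation feeds the model position indices it never saw during training, while Position Interpolation rescales them back into the trained range.

import torch

def rope_angles(head_dim: int, positions: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # Standard RoPE angles: outer product of positions with per-dimension inverse frequencies.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    return torch.outer(positions, inv_freq)

orig_ctx, new_ctx = 2048, 8192
positions = torch.arange(new_ctx).float()

# Extrapolation: raw positions 0..8191 go to a model trained only on 0..2047.
angles_extrapolated = rope_angles(64, positions)

# Position Interpolation: linearly squeeze the new positions back into the trained range.
angles_interpolated = rope_angles(64, positions * (orig_ctx / new_ctx))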
Kemper appoints interim CFO, Binance CEO warns of market crash, Graphcore challenges Nvidia, GM CEO disappointed by strike, Arm IPO raises $65B, PCAOB proposes expanded audit scope, Writers Guild on strike, Northwestern and UChicago establish biology institute, Libya's oil production unaffected by floods.
Salesforce's Q2 FY24 total revenue came in at $8.6B, beating analysts' expected $8.53B, and represented 11% growth year-over-year. Salesforce raised their full year FY24 revenue guidance from $34.7B to $34.8B, which would represent 11% growth year-over-year and beat the expected guidance of $34.65B. They stated recent price increases were accounted for in these numbers, but the increases did not have a significant influence on what they are now expecting for revenue. Elongated sales cycles, additional deal approval layers and deal compression were also factored into Salesforce's revenue guidance. It was made abundantly clear that Salesforce is going to be heavily pushing Data Cloud and AI solutions, like AI CRM, onto their customer base. Both are seen as significant revenue and cloud expansion drivers for Benioff's modern version of Salesforce: “AI + Data + CRM + Trust.” In this podcast, our Salesforce Practice Leader, Adam Mansfield, discusses what customers should expect from Salesforce leading up to their year-end and beyond. He also shares how customers should approach Salesforce and how best to leverage Salesforce's clear goals during their upcoming negotiations. Host: Adam Mansfield: https://bit.ly/3rPGp8r Salesforce Commercial Advisory Services: https://bit.ly/2V78ADX
#ai #meta #languagemodel LLaMA is a series of large language models from 7B to 65B parameters, trained by Meta AI. They train for longer on more data and show that something like gpt-3 can be outperformed by significantly smaller models when trained like this. Meta also releases the trained models to the research community. OUTLINE: 0:00 - Introduction & Paper Overview 4:30 - Rant on Open-Sourcing 8:05 - Training Data 12:40 - Training Hyperparameters 14:50 - Architecture Modifications 17:10 - Optimizer 19:40 - Efficient Implementation 26:15 - Main Results 38:00 - Some more completions 40:00 - Conclusion Paper: https://arxiv.org/abs/2302.13971 Website: https://ai.facebook.com/blog/large-language-model-llama-meta-ai/ Abstract: We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community. Authors: Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample Links: Homepage: https://ykilcher.com Merch: https://ykilcher.com/merch YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord LinkedIn: https://www.linkedin.com/in/ykilcher If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output. 2023: Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, L. Yu, Susan Zhang, Gargi Ghosh, M. Lewis, Luke Zettlemoyer, Omer Levy https://arxiv.org/pdf/2305.11206v1.pdf
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) but demand massive GPU resources for training. Lowering the threshold for LLMs training would encourage greater participation from researchers, benefiting both academia and society. While existing approaches have focused on parameter-efficient fine-tuning, which tunes or adds a small number of parameters, few have addressed the challenge of tuning the full parameters of LLMs with limited resources. In this work, we propose a new optimizer, LOw-Memory Optimization (LOMO), which fuses the gradient computation and the parameter update in one step to reduce memory usage. By integrating LOMO with existing memory saving techniques, we reduce memory usage to 10.8% compared to the standard approach (DeepSpeed solution). Consequently, our approach enables the full parameter fine-tuning of a 65B model on a single machine with 8 RTX 3090, each with 24GB memory. 2023: Kai Lv, Yuqing Yang, Tengxiao Liu, Qi-jie Gao, Qipeng Guo, Xipeng Qiu https://arxiv.org/pdf/2306.09782v1.pdf
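A minimal illustration of the fusion idea, not the authors' implementation (LOMO also handles gradient normalization, clipping, and mixed precision): apply a plain SGD update inside a per-parameter gradient hook and free the gradient immediately, so gradients for the whole model never have to be held at once. This sketch assumes PyTorch 2.1+ for register_post_accumulate_grad_hook.

import torch

def attach_fused_sgd(model: torch.nn.Module, lr: float = 1e-3) -> None:
    # Update each parameter the moment its gradient is accumulated, then release the gradient.
    def sgd_step(param: torch.Tensor) -> None:
        with torch.no_grad():
            param.add_(param.grad, alpha=-lr)  # plain SGD step, no optimizer state kept
        param.grad = None                      # free the gradient right away
    for p in model.parameters():
        if p.requires_grad:
            p.register_post_accumulate_grad_hook(sgd_step)

# Usage sketch: call attach_fused_sgd(model) once, then run loss.backward() each step;
# there is no optimizer.step() or zero_grad(), since the hooks already did the work.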
We are now launching our dedicated new YouTube and Twitter! Any help in amplifying our podcast would be greatly appreciated, and of course, tell your friends! Notable followon discussions collected on Twitter, Reddit, Reddit, Reddit, HN, and HN. Please don't obsess too much over the GPT4 discussion as it is mostly rumor; we spent much more time on tinybox/tinygrad on which George is the foremost authority!We are excited to share the world's first interview with George Hotz on the tiny corp!If you don't know George, he was the first person to unlock the iPhone, jailbreak the PS3, went on to start Comma.ai, and briefly “interned” at the Elon Musk-run Twitter. Tinycorp is the company behind the deep learning framework tinygrad, as well as the recently announced tinybox, a new $15,000 “luxury AI computer” aimed at local model training and inference, aka your “personal compute cluster”:* 738 FP16 TFLOPS* 144 GB GPU RAM* 5.76 TB/s RAM bandwidth* 30 GB/s model load bandwidth (big llama loads in around 4 seconds)* AMD EPYC CPU* 1600W (one 120V outlet)* Runs 65B FP16 LLaMA out of the box (using tinygrad, subject to software development risks)(In the episode, we also talked about the future of the tinybox as the intelligence center of every home that will help run models, at-home robots, and more. Make sure to check the timestamps
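Back-of-the-envelope arithmetic behind the quoted load time: 65B FP16 weights at 2 bytes each are roughly 130 GB, which at the stated 30 GB/s model-load bandwidth lines up with the "around 4 seconds" figure.

params = 65e9                 # LLaMA-65B parameter count
bytes_per_weight = 2          # FP16
weights_gb = params * bytes_per_weight / 1e9   # ~130 GB of weights
load_seconds = weights_gb / 30                 # at 30 GB/s load bandwidth
print(f"{weights_gb:.0f} GB of weights, ~{load_seconds:.1f} s to load")  # ~130 GB, ~4.3 s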
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MetaAI: less is less for alignment., published by Cleo Nardo on June 13, 2023 on LessWrong. Summary In May 2023, MetaAI submitted a paper to arxiv called LIMA: Less Is More for Alignment. It's a pretty bad paper and (in my opinion) straightforwardly misleading. Let's get into it. The Superficial Alignment Hypothesis The authors present an interesting hypothesis about LLMs We define the Superficial Alignment Hypothesis: A model's knowledge and capabilities are learnt almost entirely during pretraining, while alignment teaches it which subdistribution of formats should be used when interacting with users. If this hypothesis is correct, and alignment is largely about learning style, then a corollary of the Superficial Alignment Hypothesis is that one could sufficiently tune a pretrained language model with a rather small set of examples. We hypothesize that alignment can be a simple process where the model learns the style or format for interacting with users, to expose the knowledge and capabilities that were already acquired during pretraining. (1) This hypothesis would have profound implications for AI x-risk It suggests that we could build a safe competent oracle by pretraining an LLM on the entire internet corpus, and then finetuning the LLM on a curated dataset of safe competent responses. It suggests that we could build an alignment researcher by pretraining an LLM on the entire internet corpus, and then finetuning the LLM on a curated dataset of alignment research. (2) Moreover, as by Ulisse Mini writes in their review of the LIMA paper, Along with TinyStories and QLoRA I'm becoming increasingly convinced that data quality is all you need, definitely seems to be the case for finetuning, and may be the case for base-model training as well. Better scaling laws through higher-quality corpus? Also for who haven't updated, it seems very likely that GPT-4 equivalents will be essentially free to self-host and tune within a year. Plan for this! (3) Finally, the hypothesis would've supported many of the intuitions in the Simulators sequence by Janus, and I share these intuitions. So I was pretty excited to read the paper! Unfortunately, the LIMA results were unimpressive upon inspection. MetaAI's experiment The authors finetune MetaAI's 65B parameter LLaMa language model on 1000 curated prompts and responses (mostly from StackExchange, wikiHow, and Reddit), and then compare it to five other LLMs (Alpaca 65B, DaVinci003, Bard, Claude, GPT4). Method: To compare LIMA to other models, we generate a single response for each test prompt. We then ask crowd workers to compare LIMA outputs to each of the baselines and label which one they prefer. We repeat this experiment, replacing human crowd workers with GPT-4, finding similar agreement levels. Results: In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Conclusion: The fact that simple fine-tuning over so few examples is enough to compete with the state of the art strongly supports the Superficial Alignment Hypothesis, as it demonstrates the power of pretraining and its relative importance over large-scale instruction tuning and reinforcement learning approaches. 
Problems with their experiment (1) Human evaluators To compare two chatbots A and B, you could ask humans whether they prefer A's response to B's response across 300 test prompts. But this is pretty bad proxy, because here's what users actually care about: What's the chatbots' accuracy on benchmark tests, e.g. BigBench, MMLU? Can the chatbot pass a law exam, or a medical exam? Can the chatbot write Python code that actually matches the specification? Can the chatbot perform worthwhi...
Problems with their experiment (1) Human evaluators To compare two chatbots A and B, you could ask humans whether they prefer A's response to B's response across 300 test prompts. But this is a pretty bad proxy, because here's what users actually care about: What's the chatbots' accuracy on benchmark tests, e.g. BigBench, MMLU? Can the chatbot pass a law exam, or a medical exam? Can the chatbot write Python code that actually matches the specification? Can the chatbot perform worthwhi...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LIMA: Less Is More for Alignment, published by Ulisse Mini on May 30, 2023 on The AI Alignment Forum. Abstract Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output. Implications Data Quality & Capabilities Along with TinyStories and QLoRA I'm becoming increasingly convinced that data quality is all you need, definitely seems to be the case for finetuning, and may be the case for base-model training as well. Better scaling laws through higher-quality corpus? Also for those who haven't updated, it seems very likely that GPT-4 equivalents will be essentially free to self-host and tune within a year. Plan for this! Perplexity != Quality When fine-tuning LIMA, we observe that perplexity on held-out Stack Exchange data (2,000 examples) negatively correlates with the model's ability to produce quality responses. To quantify this manual observation, we evaluate model generations using ChatGPT, following the methodology described in Section 5. Figure 9 shows that as perplexity rises with more training steps – which is typically a negative sign that the model is overfitting – so does the quality of generations. Lacking an intrinsic evaluation method, we thus resort to manual checkpoint selection using a small 50-example validation set. Because of this, the authors manually select checkpoints between the 5th and 10th epochs (out of 15) using the held-out 50-example development set. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
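That checkpoint-selection procedure is worth spelling out, since it replaces perplexity with a small generation-quality score. A rough sketch, under the assumption that a ChatGPT-style grader returns a numeric score per response; the function names are placeholders, not the paper's code.

```python
# Sketch of validation-set checkpoint selection (placeholder function names, not the
# paper's code): rank checkpoints by mean graded quality on ~50 held-out prompts,
# ignoring held-out perplexity, which keeps rising as training proceeds.
def select_checkpoint(checkpoints, val_prompts, generate, score_response):
    best_ckpt, best_score = None, float("-inf")
    for ckpt in checkpoints:
        scores = [score_response(p, generate(ckpt, p)) for p in val_prompts]
        mean_score = sum(scores) / len(scores)
        if mean_score > best_score:
            best_ckpt, best_score = ckpt, mean_score
    return best_ckpt

# Usage (hypothetical): choose among the epoch-5..10 checkpoints with 50 dev prompts.
# best = select_checkpoint(range(5, 11), dev_prompts, generate_fn, chatgpt_grader)
```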
We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU. QLoRA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights, (b) double quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) paged optimizers to manage memory spikes. We use QLoRA to finetune more than 1,000 models, providing a detailed analysis of instruction following and chatbot performance across 8 instruction datasets, multiple model types (LLaMA, T5), and model scales that would be infeasible to run with regular finetuning (e.g. 33B and 65B parameter models). 2023: Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer https://arxiv.org/pdf/2305.14314v1.pdf
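For readers who want a concrete picture of those ingredients, here is a sketch of a typical QLoRA-style setup using the Hugging Face transformers/peft/bitsandbytes stack. The tooling choice is an assumption on my part (the paper ships its own code), and the checkpoint id and hyperparameters are placeholders.

```python
# Sketch of a QLoRA-style setup (assumed tooling; placeholder checkpoint id and
# hyperparameters): NF4 4-bit base weights, double quantization, LoRA adapters on top.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # keep the frozen base model in 4-bit
    bnb_4bit_quant_type="nf4",              # 4-bit NormalFloat data type
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for the matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                  # placeholder checkpoint id
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # gradients flow only through the adapters
model.print_trainable_parameters()
```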
It's 2006 on DrunkFriend and topics range from new console releases (PS3 and Wii) to soccer headbutts to YouTube being bought by Google. There's lots to talk about, so listen while Alex and Trav bring sexy back!SportsColts beat Bears, Peyton finally wins, Prince halftime showFlorida wins nattys in both football and basketball, Colt Brennan throws for 58 TDsGeorge Mason makes Final FourItaly wins World Cup, Zidane headbuttDomination: Florida college sports, Tiger Woods, Roger Federer, Jimmie JohnsonHeat beat Mavs, Kobe scores 81Cardinals beat Tigers, Japan wins first ever World Baseball ClassicCarolina beats EdmontonGamingPS3 is releasedBlu-ray Resistance: Fall of ManRidge Racer 7Final Fantasy 12 (PS2)Wii is releasedLoZ Twilight PrincessWii SportsTrauma Center: Second OpinionXbox 360Gears of WarDead RisingSaints RowSonic 06Nintendo DSNew Super Mario BrosMetroid Prime HuntersCooking MamaPSP released in 2005GTA Vice City StoriesMetal Gear Acid 2Me & My KatamariPCElder Scrolls 4: OblivionHalf-Life 2: Episode OneBattlefield 2142SongsWhat was everywhereYou're Beautiful - James BluntCrazy - Gnarls BarkleySexyBack - Justin TimberlakeDani California - RHCPWhat we likedSteady, as she goes - RaconteursWolf Like Me - TV on the RadioAlbumsTV on the Radio, “Return to Cookie Mountain”The Hold Steady, “Boys and Girls in America”Mastodon, “Blood Mountain”Tool - 10,000 DaysMoviesThe DepartedLittle Miss SunshineThe PrestigeCasino Royale300IdiocracyTalladega NightsNacho LibreNotable eventsGoogle Buys YouTube for $1.65B in stockPluto downgraded to dwarf planetSteve Irwin killedMost popular TV shows Released in 2006IT CrowdDeath Note30 RockDexterInternetMyspace was most popular social network stillNumaNuma video was very popularEvolution of Dance video was most watchedShared your pics via FlickRSupport the show Find more of our work on the Polymedia Network Find Travis on Twitter Find Alex on Twitter Send us an email drunkfriendpodcast@gmail.com
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community. 2023: Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aur'elien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample Ranked #1 on Question Answering on PIQA https://arxiv.org/pdf/2302.13971v1.pdf
Dave Lukas, The Misfit Entrepreneur_Breakthrough Entrepreneurship
This week's Misfit Entrepreneur is Tanis Jorge. Tanis is a serial tech entrepreneur and a leading advisor to startup founders on entrepreneurship and building successful cofounder partnerships. During her career spanning the last 20+ years, she has cofounded, scaled, and successfully exited multiple data-driven businesses. This culminated with her most recent venture, Trulioo, which she co-founded in 2011 with her long-term business partner. In 2021, Trulioo reached unicorn status with an over $1.65B valuation, solidifying its place as the world's leading identity verification company and Tanis's track record for founding successful businesses. Trulioo's success also made her one of only three Canadian female "Unicorn" founders. Tanis has a wealth of knowledge in starting and building businesses, but I'm not sure if you caught something with what I just shared that has been a constant for her and a big part of her success – all of her biggest and most successful ventures have been with another great co-founder, and it has been their partnership that made it go. In fact, Tanis recently wrote the best-selling book, The Cofounder's Handbook, detailing how she does this, and that is exactly what I want to speak with her about today, as one of the best ways to succeed is by having a great partnership in your business. I know this from experience. www.TheCoFoundersHub.com Tanis met her co-founder in high school, starting in grade 8. They were locker buddies, and their time in high school and getting to know each other laid the groundwork for their future success. Right after high school, her partner approached her to be the first to do online credit reports in Canada. It was their first business and gave them a ton of learning experience. They found their roles and ultimately exited after 3 years. They then did that 2 more times over the next 10 years. After that, Tanis took a break to focus on family and her partner went to Silicon Valley to pitch an idea they had had previously. The idea was well received and she was pulled back in. A year and a half into it, they raised their first seed round, and that helped them get to a valuation over $1.5 billion. Talk to us about the magic of co-founding partnerships. Why and how can they work so well? They are either the greatest asset or greatest liability in your business. 65% of businesses fail because of issues between the founders. Every business with a partnership will either make or break on that partnership. When Tanis talked to people with bad partnerships, she noticed it came down to values that did not align. A great partnership that is aligned can take businesses to levels not possible with just a single founder. Elements of great partnerships? Shared values. Intentionality. Great partners are focused on their partnership. A partnership is not "set it and forget it." They help hold each other up on the rollercoaster of building a business. Did you ever have times where you got sick of each other in your partnership? How did you get things back on track and keep the partnership strong? Your mentality going into the partnership is important. In the end, you are all striving for the same goal. Honest mistakes happen – very seldom are they malicious. Realize that you are in it together. Don't get in your own way when you don't have to. It is like a marriage, but it is not. Marriage is focusing on keeping each other happy. In business, the business is the focus. The founders need to focus there and make that the priority. 
Ask what is best for the business… At the 20 min mark, Tanis talks about the dynamics of partnerships… There is a customization to a good partnership. Look at yourself first. Look at your model and determine what you need in a partner. Tanis developed a self-assessment for this. What is the best way to find a business partner? The first and most important thing is to do a self-assessment. You can do this at https://thecofoundershub.com/ It is worth the time and money to invest in the process of figuring things out. Your network is a great place to start and ask, "Who do you know?" There are Meet-ups in different markets that give opportunities as well. You also want to think through all the major points of the partnership ahead of time and have the conversations in discovery sessions with potential partners. You will also want to look at the legal documents needed to create a great business partnership. What have you learned about how to build and scale companies? Tanis loves the early stage of a business. Build your business like you will own it forever, but have the long-term vision to build it to be sold. Play chess. Every month, take time to whiteboard and stay steps ahead. Game plan and intensively plan the moves in your business. What have you learned about how to sell a business? It is stressful and like having a second job. It always takes longer. It always costs more than you think. You need to prepare ahead of time and know it will not be easy. You must plan for it and ensure you are ready. Know your number. You will not get as much as you want, but you can get more than what you would settle for. What is something you wish you had known earlier in your entrepreneurial career? Tanis was very young when she started and green to the business world. It took a couple of businesses to realize that regardless of age, she knew her business the best and to have confidence in herself. Don't be afraid to take charge and step out in your business. Best Quote: Co-founder partnerships can be the greatest asset or greatest liability in your business. Tanis's Misfit 3: Choose the key people in your life carefully and invest in them. Question everything! Always look at both sides. Be curious. Enjoy the journey. Show Sponsors: Benchmark Email (Free account): www.MisfitEntrepreneur.com/Benchmark 5 Minute Journal: www.MisfitEntrepreneur.com/Journal
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta open sources LMs competitive with Chinchilla, PaLM, and code-davinci-002 (Paper), published by Lawrence Chan on February 24, 2023 on The AI Alignment Forum. As the title says, Meta trained 4 foundational models with 7B, 13B, 33B, and 65B parameters respectively, and is open sourcing them for research. You can get their code on their GitHub repo, but you need to fill in a Google form to get the weights. On downstream benchmarks, the models do comparably well with Chinchilla and PaLM and only a bit worse than Flan-PaLM-540B and code-davinci-002/text-davinci-002. (The authors don't evaluate on those models, but you can look at their performance from other work such as Stanford's HELM or Chung, Hou, Longpre et al's "Scaling Instruction-Finetuned Language Models".) Abstract: We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community. Twitter thread from authors: Eliezer guesses that the model won't be impressive in practice: I blindly guess, could be wrong, that this model will turn out sufficiently unimpressive in practice that nobody uses it for much. Basically based on a guess that more than benchmarks matter, and Meta has no people competent to do the tricky stuff needed to stay on current edge. It's not necessarily open source as you think of it -- you need to fill in a Google form, and then they might give it to you: In order to download the checkpoints and tokenizer, fill this google form. The license is intended only for non-commercial, research work: Meta grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Meta's copyright interests to reproduce, distribute, and create derivative works of the Software solely for your non-commercial research purposes. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
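For a rough sense of scale (my own back-of-the-envelope arithmetic, not figures from the post), the released checkpoints alone occupy roughly the following amounts of memory, which is why 4-bit approaches like the QLoRA work mentioned earlier matter for running or tuning the 65B model on a single GPU.

```python
# Back-of-the-envelope weight-memory estimate (assumption: dense fp16 vs. 4-bit storage;
# activations, KV cache, and optimizer state are extra on top of this).
SIZES_B = {"LLaMA-7B": 7, "LLaMA-13B": 13, "LLaMA-33B": 33, "LLaMA-65B": 65}

def weight_gib(params_billion, bytes_per_param):
    return params_billion * 1e9 * bytes_per_param / 2**30

for name, billions in SIZES_B.items():
    print(f"{name}: ~{weight_gib(billions, 2):.0f} GiB fp16, "
          f"~{weight_gib(billions, 0.5):.0f} GiB at 4-bit")
```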
Healthcare Ecosystem–Times They Are A-Changin' with Fahad Rahman, CEO, Lumi Health! Fahad Rahman, CEO, Lumi Health, joins Maureen Shaffer, CEO, Mingletoe, to chat about the rapidly changing healthcare ecosystem for patients, providers, facilities, and especially for medtech startups. Fahad also shared his thoughts on how to think through referral patterns changes, hospital at home, and health equity.
Constellation Brands (STZ) has divested part of its mainstream and premium wine portfolio to The Wine Group. Adam Lampe discusses this, as well as the takeaways from STZ's earnings. He talks about how STZ's revenue came in at $2.65B versus an estimated $2.50B. He then evaluates STZ's investment in Canopy Growth (CGC). Finally, he goes over the long-term outlook for STZ. Tune in to find out more about the stock market today.
This is a special episode pulled from the Just Go Grind vault! Dr. Iman Abuzeid is the Co-founder and CEO of Incredible Health, a career marketplace whose custom matching technology offers hospitals the fastest, most effective way to hire qualified permanent nursing staff. By 2024, the United States is poised to experience its biggest employment crisis to date, a shortage of one million nurses, putting patient care in jeopardy and hospitals at risk of significant financial loss. The COVID-19 pandemic has accelerated the nursing shortage, highlighting the need for qualified nurses across the country. Incredible Health is on a mission to reinvent the hospital and healthcare staffing landscape, helping to solve the nursing shortage. As a former medical doctor whose immediate family includes three surgeons, Iman understands the importance of helping healthcare professionals find and do their best work. Incredible Health is based in San Francisco, backed by top tier venture capital firm Andreessen Horowitz, and is used by hundreds of leading hospitals across the country, including Cedars-Sinai Medical Center, Stanford Healthcare, Baylor Scott & White and many more. Dr. Abuzeid holds an MBA from The Wharton School of the University of Pennsylvania. In August 2022, Iman led Incredible Health to $1.65B unicorn status with an $80M Series B raise. This makes Incredible Health the highest valued tech-enabled career marketplace in healthcare. This new funding comes on the heels of several major milestones for Incredible Health. In 2021, revenue grew 500% as their marketplace grew exponentially. More than 10,000 US nurses now join Incredible Health's platform every single week, and they've expanded from partnering with 200 hospitals to 600 hospitals. Their marketplace technology has reduced the average time to hire permanent nurses to only 14 days – from an industry standard of 82 days – while saving each hospital location at least $2M annually in travel nurse, overtime, and HR costs. Topics Covered by Iman Abuzeid in this Episode How Incredible Health got started Being 1 million nurses short in the U.S. 
by 2024 How Iman figured out what the initial version of Incredible Health would be Creating a solution that's 10x better than what's currently available How Iman met her co-founder, Rome, and the conversation around going into business together How Iman thought through the equity split early on Raising a few hundred thousand dollars from friends and family to get Incredible Health off the ground Raising a $2.4M Seed Round in 2-3 months after going through the NFX accelerator Raising a $15M Series A in 4-5 weeks led by Jeff Jordan at Andreessen Horowitz Iman's advice for fundraising and creating a process and how her own skills evolved How Iman brought on the first hospitals through cold-calling The MVP of Incredible Health and why you have to fight the urge to ship perfection How the team has grown for Incredible Health The values of Incredible Health and how it's part of everything they do How Incredible Health uses customer insights to develop their platform, operations, and marketing Why Incredible Health offers free continuing education for every nurse in the country How Iman thinks through the products and tools they're going to provide to their customers The sales process of getting hospitals on their platform The 3 biggest value adds of the investors Iman has for Incredible Health Questions that Iman asked of CEOs for reference checks on investors How Iman takes care of her own mental health and why she has a therapist How getting an MBA from Wharton impacted Iman's journey as an entrepreneur Books and blogs that Iman recommends How Iman recharges away from work and why she doesn't work on Saturdays Iman's experience as a minority woman founder Diversity debt Links from the Episode Rome Portlock NFX Jeff Jordan Andreessen Horowitz Obvious Ventures signal.nfx.com James Joaquin The Hard Thing About Hard Things Who (A book on hiring) Above the Crowd by Bill Gurley Stephanie Lampkin Blendoor Listen to all episodes of the Just Go Grind Podcast: https://www.justgogrind.com Follow Justin Gordon on Twitter: https://twitter.com/justingordon212
Oil prices are climbing to start the week, as investors wait for the latest word on supply levels from the world's top producers. The international crude benchmark Brent is up 2% to trade at around $95 a barrel. Energy markets are also in turmoil following Russia's decision to keep its natural gas pipeline to Germany closed indefinitely. Berlin has announced a $65B bailout plan to help Europe's largest economy cope with its worsening energy crunch. For more on energy market, we were joined by Victoria Scholar, investment head of Interactive Investor in London. #OPEC #Energy #Oil
Germany has announced a fresh stimulus package to help consumers cope with the nation's cost of living crisis. The $65B measure includes tax breaks for energy firms, so they can pass on the savings to households and businesses. There will also be cash handouts to students and pensioners. For more, we spoke to Peter Oliver from Berlin. #Germany #BailoutPackage #OlafScholz
Only 4 in 10 employees use all of their Paid Time Off each year, leaving over 750 million days unused annually in the US. Unused PTO either sits on the balance sheet of companies as a liability, or is forfeited by employees - representing a staggering $65B in lost compensation each year. PTO Exchange is out to fix this problem. Through their platform, organizations are able to offer their employees easy ways to convert unused PTO into retirement plans, charitable contributions, wellness programs, and more. In this conversation, we chat with the founder of PTO Exchange, Rob Whalen. We discuss the massive impact this problem has on both employees and companies, how the landscape has shifted as a result of COVID, what the future of employee benefits looks like, and much more. If you enjoyed the episode, please leave a review! https://kite.link/disruptors-ptoexchange
PODCAST INFO:
Podcast website: https://manifold.group/podcast
Apple Podcasts: https://apple.co/3EbkMEk
Spotify: https://spoti.fi/3nqwNiD
RSS: https://bit.ly/3ntcFw1
Full episodes playlist: https://youtube.com/playlist?list=PLdnqR-lZH65HqCM09dwh6otuU7TArkq8J
Clips playlist: https://youtube.com/playlist?list=PLdnqR-lZH65HrrlaeQZnf1sicFkprjpqk
SOCIAL:
- Sean Johnson on Twitter: https://twitter.com/intentionally
- Sean Johnson on LinkedIn: https://linkedin.com/in/seanjohnson
- Sean Johnson on Instagram: https://www.instagram.com/intentionally_
- Manifold on Twitter: https://twitter.com/manifold_group
- Manifold on LinkedIn: https://www.linkedin.com/m/company/manifold-group
- Manifold on Facebook: https://www.facebook.com/themanifoldgroup
- Manifold on Instagram: https://www.instagram.com/manifold_group/
- Manifold on Medium: https://medium.com/build-better-products
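Taking the two headline figures at the top of that entry at face value (an assumption on my part that the $65B and the 750 million unused days describe the same population), the implied value of a single forfeited PTO day works out to roughly $87.

```python
# Rough arithmetic on the quoted figures (assumes both refer to the same population).
unused_days = 750_000_000           # unused US PTO days per year
lost_compensation = 65_000_000_000  # dollars stranded or forfeited per year

print(f"~${lost_compensation / unused_days:,.0f} per unused PTO day")  # ~$87
```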
Pandu Adi Laras joins the Indo Tekno podcast to discuss age-old working capital challenges amongst Indonesia's ~50,000 SME used auto dealers, and what his start-up broom.id is doing to improve liquidity in the industry through easy access to capital using inventory as collateral. Indonesia's used car market currently boasts $65B in yearly transaction value. broom.id's digitized solution offers quick approvals for short term working capital, addressing the industry's near-complete absence of data and lack of access to the incumbent banking industry.
US equity markets opened August with modest declines, with all three benchmark indices snapping a three session winning streak - Dow fell -47 points or -0.14%. Boeing Co rallied +6.13% to lead Dow gainers after the aerospace and defence giant reportedly cleared a hurdle with the Federal Aviation Administration (FAA) that could allow it to resume deliveries of its 787 airliner. The FAA said it would approve Boeing's process for validating fixes to each 787 plane before they are delivered to customers, according to an Associated Press report. Separately, The Wall Street Journal reported that Boeing's defence manufacturing plants will vote on Wednesday (3 August) on a labour contract proposal, which temporarily delays a strike that was scheduled to begin as soon as Monday. The broader S&P500 -0.28%, with Energy (down -2.18%) leading seven of the eleven primary sectors lower. Chevron Corp fell -2.00% and ExxonMobil Corp -2.53%. Consumer Staples sat atop the primary leaderboard with a +1.21% gain. The Nasdaq -0.18% despite a solid session for chipmakers, with Advanced Micro Devices Inc up +2.45% ahead of its quarterly result tonight AEST, Intel Corp +1.79%, Micron Technology Inc +1.10% and Nvidia Corp +1.53%. Apple Inc (down -0.62%) raised US$5.5B via the issuance of four series of bonds with ratings of AAA from Moody's Investors Service and AA+ from S&P Global. The small capitalisation Russell 2000 lost -0.10%. Automotive oil, additives and lubricant maker Valvoline Inc (down -2.70%) said it has reached an agreement with Saudi Arabian Oil Co (Aramco) to sell its global products business for US$2.65B in cash.
Tim Cook-led Apple (AAPL) earned $1.20 per share on $82.96B in revenue. That's a 2% increase year-over-year. Revenue attributed to the iPhone came in at $40.65B. Services revenue rose to $19.6B. Daniel Rubino and Kyle Clark discuss factors driving Apple's success, including sales of the iPad, Mac, and Wearables and Accessories, which came in at $7.22B, $7.38B, and $8.08B, respectively.
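As a quick cross-check of the segment figures quoted above (rounding accounts for the small gap versus the reported $82.96B total), the revenue shares work out roughly as follows.

```python
# Segment revenue shares from the quoted figures (all in $B).
segments = {"iPhone": 40.65, "Services": 19.60, "iPad": 7.22, "Mac": 7.38,
            "Wearables & Accessories": 8.08}
total = 82.96
for name, revenue in segments.items():
    print(f"{name}: {revenue / total:.1%} of revenue")
print(f"Listed segments sum to ${sum(segments.values()):.2f}B of ${total}B reported")
```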
Ku'ulei gives the Boston Celtics the "beautiful basketball" nod after their performance in Game 3 of the NBA Finals. The LA Rams reach a 3-year $80M extension with Cooper Kupp...how the heck are they doing this?! Plus, how John Elway lost out on $900+ million with the $4.65B sale of the Denver Broncos.
Will the Broncos get a new logo with the new owner? What can you buy for $4.65B and what can we expect to see out of the Avs as they await to find out their opponent for the Stanley Cup Final? We'll answer all these questions starting right now!See omnystudio.com/listener for privacy information.
Today's word of the day is ‘66' as in 66 massage therapists as in a report came out yesterday that Deshaun Watson used 66 massage therapists in a 17-month window while with the Houston Texans. There have been 24 civil suits against Watson. More details allege that the Texans may have helped set up some of these meetings. The NFL better start doing real work now to figure out what is right. Deshaun Watson's future in the NFL should very much be in doubt (although it should have been before). (22:55) Review: Unknown. (27:35) Joe Maddon was fired as the manager of the Angels. They had lost 12-straight games. Mike Trout couldn't get a hit. Shohei Ohtani couldn't hit. They went from first to 10 games back in the AL West. (35:10) It looks like another manager could be on the hot hot hot seat. Don Mattingly. The Miami Marlins held a 90-minute team meeting before its 12-2 win over the Nationals. The team is underachieving. The team has some issues in the clubhouse. And they cannot blame me anymore! (43:55) NPPOD. (46:00) The Denver Broncos are being sold to Walmart for $4.65B. That's a nice chunk of change. Learn more about your ad choices. Visit megaphone.fm/adchoices
4c tips off with Deshaun Watson and the new info surrounding the New York Times article and HBO series, and why it's no surprise the Haslam family hired Watson. The Denver Broncos sold for $4.65B, which is the most recent NFL flex. Additionally, how the Angels overreacted in firing Joe Maddon, and reasons why the Mets & Yankees are the best teams in baseball. Is Paul Goldschmidt a Hall of Famer? Derek Jeter is on Twitter, and has a documentary next month.
The Broncos now have the richest owner in the NFL, and it’s not close! What does this mean for the team, what are the next steps and who is Mellody Hobson? We've got all the details plus we show you what you could buy for $4.65B besides the Broncos!See omnystudio.com/listener for privacy information.
Tanis Jorge is a serial tech entrepreneur and a leading advisor on entrepreneurship and building successful cofounder partnerships. Throughout her career in startups, spanning the last 20 years, Tanis has cofounded, scaled, and successfully exited multiple data-driven businesses with the same partner. Her successes culminated with her most recent venture, Trulioo, which she co-founded in 2011. Between 2011 and 2015, Tanis served as Chief Operations Officer of Trulioo, working to lay the groundwork and building the foundation for the trusted, innovative, and disruptive company it has become today. In 2021, Trulioo reached unicorn status (US $1.65B valuation), solidifying its place as the world's leading identity verification company and Jorge's track record for founding successful businesses. Today, Tanis is one of the go-to voices and experts on the cofounder relationship, drawing on her experience co-founding and successfully scaling four technology businesses. Advising fast-growing start-ups and leading venture capitalists, her work focuses on how cofounders can function in an open, productive, and symbiotic way to ensure continued and long-term business success.
This month’s episode features two incredible guests: Rob Ziliak, Chief Operating Officer at Buckingham Wealth Partners and Ryan Armock, Head of Operations at Thrivent Advisor Network. Founded in 2019, Thrivent currently has 450 employees, and manages $6.5B in AUM. Buckingham Wealth Partners, a well-known RIA founded in 1994, currently manages $65B in collective assets between Buckingham Strategic Wealth and Buckingham Strategic Partners and has 540 employees. Together, Matt, Rob, and Ryan discuss the effect their respective roles play in creating and ensuring a great client experience and much more, including: An overview of each firm and their respective strategies for growth How each firm has adapted their business throughout the pandemic and the long-term effects of these changes What our guests are doing to attract the best talent in the current job market How our guests’ roles in operations impact their firm’s M&A strategy We hope you enjoy, share, and subscribe! You can listen and subscribe on Google Podcasts, Apple Podcasts, or Spotify. Sign up here to be notified of new practice management content added to our blog on a regular basis. Investment advisory services offered through Thrivent Advisor Network, LLC., (herein referred to as “Thrivent”), a registered investment adviser. Advisory Persons of Thrivent provide advisory services under a “doing business as” name or may have their own legal business entities. However, advisory services are engaged exclusively through Thrivent Advisor Network, LLC, a registered investment adviser. Thrivent Advisor Network reported over $5.3 billion in AUM in its latest annual Form ADV filing. Thrivent Financial for Lutherans is ranked 369 on the Fortune 500. (Fortune Magazine, June 2021). Correction: When Ryan Armock states, "…and that brings us to where we are at today with about six and a half billion in assets under management, about 450 employees and about 300 IARs.” the correct number is 200 IARs.
Rony Abovitz (born 1971) is an American entrepreneur. Abovitz founded MAKO Surgical Corp., a company manufacturing surgical robotic arm assistance platforms, in 2004. MAKO was acquired by Stryker Corporation in 2013 for $1.65B. Abovitz is... The post Season 2 Episode 1: Rony Abovitz – Xverse appeared first on AllThingsXR.com.
Fire In Paradise Welcome to The Guys Review, where we review media, products and experiences. **READ APPLE REVIEWS/Fan Mail** Mention Twitter DM group - like pinned tweet. Read emails. Twitter Poll. Fire In Paradise Directed by: Zackary Canepari, Drea Cooper Starring: Joy Beeson, Beth Bowersox, Abbie Davis, Hiyori Kon Released: November 1, 2019 (Netflix) Budget: No Info Box Office: No Info Ratings: IMDb 7.4/10, Rotten Tomatoes 83%, Metacritic NONE, Google Users 74% Fire in Paradise premiered at the 2019 Telluride Film Festival. It also showed at the 2019 Hamptons International Film Festival, where it won the Audience Award for Best Short Film. First time you saw the movie? Plot: The film opens with shots of a tranquil forest, and Paradise, California, population 26,561, cut in with home videos of people doing day-to-day activities, while a voice over gives a safety alert for fire conditions. PG&E could cut power for safety. Nov 8, 2018, 6:16am, Ray Johnson, wearing a firefighter shirt, says the news stated it was going to be a windy day, and could produce fires. So he stationed himself at the water tender at Station 33, and says he felt the day didn't feel right. Cut to a call center and a woman stating that night shift is 7pm to 7am and that it was quiet that night. Beth Bowersox said a call came in around 5:30am that her supervisor took, from a PG&E employee, about a Pulga Fire. Dacia Wiliams describes lying in bed with her kid, and her mom had taken other children to school, and said they saw smoke. 7:16am, 911 calls are played about the Pulga fire. Firefighter Sean Norman speaks about hearing calls about a fire, and how he had to get on the road to come help. Beth dispatched the fighters and named it "The Campfire" due to regulations and guidelines...and how it seemed like a normal fire, at first, but her face drops and follows with, "it got bad, real quick." As we see video of fires and power lines. 7:19am. Driving through town, Mary Ludwig speaks about getting to school and the frenzy it was in, and how quickly the sky turned orange. Sgt. Rob Nichols talks about meeting with his partner, in their car with ash raining down. 7:29am, calls of smoke everywhere. Beth says it still seems normal for a fire, and more calls play of people reporting smoke and fire, and are told to evacuate. Beth states a co-worker took a call for a house fire in Paradise, and she is surprised. 7:41am: the 911 call for Paradise plays. 7:45: minutes start counting up and more 911 calls play of Paradise residents calling in fires. S: -I remember hearing about this fire, but to see people who live there talk about it, it's obviously very dramatic. Ofc. Nichols speaks about a spot fire in the middle of town. Cut to a house burning wildly, and flames shooting everywhere with a woman talking about her house being on fire. Dacia speaks about getting stuck in traffic and waiting over 40 minutes, and that it's not normal. Video of a man and dog waiting in traffic. Ray talks about seeing the plume of smoke coming at him and how eerie it was, like a monster. Cut to a shot of the smoke, and it is massive. Ray's wife Jennifer talks about evacuating and seeing all her neighbors and friends around her, trying to get out, and how unreal it is. Mary speaks about kids being outside and the wind being so strong, branches were falling on fire. The kids are then evacuated on the school bus. Abbie Davis, a teacher, talks about getting on the evac bus with the kids. 
Mary was scared about getting on the bus with the kids, and even said she didn't want to, but did. And how the first corner they hit, there was fire and they were stuck in traffic. Cut to video of people driving surrounded by everything on fire. Abbie speaking about being next to McDonald's and it caught on fire. And then it went completely black. Ray talks about his wife screaming and crying, cut over someone filming trying to calm people down surrounded by fire. Madeline Johnson, Ray and Jennifer's daughter, talks about trying to stay calm, and being fucking brave, that she wasn't going to die. As Noah, her brother, talks about praying, all this while showing film of someone's car having flaming branches fall on it with someone screaming, and flames all over the road. More 911 calls and Beth telling people to get out, that they don't have anyone to come and help them. Joy Beeson talks about getting out with her son, and how he pushed her out of the way of a falling tree. Beth gets choked up talking about taking calls with people who are afraid and how hard it was to have to hang up and take more calls. S: -The timeline of the story is a little weird here for me, if everyone is evacuating, shouldn't they go get their kids too? And why were kids outside if FLAMING BRANCHES are literally falling from the sky? -That bit with Madeline... man... That got me. Esp with the footage they played. Like, obviously they survived...but my God. The fear. 9:35am. Abbie and Mary speak about the exit ramp being on fire, and the first feeling of deep hopelessness. The kids started falling asleep, so they created some homemade filters. Abbie tells Mary she doesn't think they're going to get out. They prayed, and they prayed to die of smoke inhalation, and went back to work. 10:42am. Nichols talks about getting to Clark and Skyway, and how bad of an intersection it is. Total gridlock. And the firewall is coming straight at them. A video of Nichols talking to a guy in a car telling him they're stuck, and the man looks scared and asks if they're going to be ok. They then start telling everyone to abandon their cars and evacuate on foot. Joy tells of balls of flame, like from the Bible, falling around them. They cut to video of a fire tornado. They move everyone to a large parking lot. Sean talks about realizing they're not going to be able to put this fire out. So they start breaking into buildings to put people in them, as the field behind it was a propane storage field, and they started to explode. Sean described it as war. Dacia tells how the firefighters told them they're surrounded, and the only way to survive would be lying down on the concrete. She speaks about praying with her child under a blanket for hours. Finally, the front passed and they were bussed out. Norman tells of driving around, trying to get people out of their houses, as embers are flying and catching more on fire, and they were refusing. So they took them, and wouldn't let them go back when their dog ran off. He knew he wouldn't make it through the fire front, so he started looking for somewhere to go, but there was nothing. So he drove straight into and through the fire front and survived. He gets choked up talking about surviving, and that those people probably hate him, but they're alive to hate him. S: -Speaking of the part where they prayed they would die of smoke inhalation: have you ever had an instance where you thought you could die? -The people being alive to hate him... 
How ungrateful. Shots of burnt out cars, melted cars, burned homes, and some chairs, as a voice over says it's been contained 3 weeks later. A flyover shot of where homes once stood, just burned ashes and some remnants where walls once were; as newscasters discuss fatalities, and missing persons. We see video of a man walking to a car, telling how he knew the person inside who died, and we see a skeleton; he says he's sorry, buddy. In a meeting, people are being briefed about going out and finding missing people and giving closure to families. The largest ever search and rescue operation in California. Norman talks about how it was an unprecedented fire, with unprecedented fire behavior. He speaks of all the fires that have occurred in California and how bad they've been, mass destruction, not being able to control them, firefighters being trapped and killed. He said the climate has been part of the problem. Ray is cutting down a burned tree, in front of what used to be his house, with only a chimney standing. Jennifer says it's like death and a grieving process. Ray wants to see everything rebuilt, but that it's not the same. Durham, CA, in a temporary school for Paradise students, Mary is teaching. And speaks about being on the bus for 6 hours. We see some drawings the kids have done, and how sad they are. Mary says she's scared to go back. She walks through the burned out school in awe. We see a burnt out forest, with some home videos playing over it of happier times. Dacia's kids tell her they just want to go home... but she is afraid of what the road would look like. Nichols talks about how there isn't enough housing for all the residents, and we see Joy in some sort of tent/housing. Beth says she hasn't been back to Paradise because of how many people died and are missing, but trying to figure out where to go and what to do next. She's worried people will forget with more disasters. She's visibly shaken. It cuts to black, and text that reads: the Camp Fire killed 85 people, making it the deadliest wildfire in the United States in over 100 years. End title card, cut to black, and roll credits, as they display pictures of what I assume are victims of the fire. S: -There is a shot where Mary is walking out from the burned out school, and a mural on the wall of Where the Sidewalk Ends... Kind of apropos, considering. A lot of things ended, but there can still be life and happiness, like the poems from Shel Silverstein.
Where the Sidewalk Ends by Shel Silverstein
There is a place where the sidewalk ends
And before the street begins,
And there the grass grows soft and white,
And there the sun burns crimson bright,
And there the moon-bird rests from his flight
To cool in the peppermint wind.
Let us leave this place where the smoke blows black
And the dark street winds and bends.
Past the pits where the asphalt flowers grow
We shall walk with a walk that is measured and slow,
And watch where the chalk-white arrows go
To the place where the sidewalk ends.
Yes we'll walk with a walk that is measured and slow,
And we'll go where the chalk-white arrows go,
For the children, they mark, and the children, they know
The place where the sidewalk ends.
Top Five Trivia of the movie: 5: The Camp Fire was the deadliest and most destructive wildfire in California's history, and the most expensive natural disaster in the world in 2018 in terms of insured losses. 
85 deaths, 18,804 buildings destroyed, $16.65B in 2018. 4: Ignited by a faulty electric transmission line on Nov 8, 2018. 3: Paradise, which typically sees five inches of autumn rain by November 12, had only received one-seventh of an inch by that date in 2018. 2: Burned 153,336 acres or 240 square miles. 1: The fire reached 100 percent containment after seventeen days on November 25. TOP 5 Stephen: 1. Breakfast club, 2. T2, 3. Sandlot, 4. Back to the Future, 5. Mail order brides. Chris: 1. sandlots, 2. T2, 3. trick r treat, 4. rocky horror picture show, 5. hubie halloween. Trey: Meatballs, Boondocks Saints, Mail Order Brides, Sandlot, Lone Survivor. Tucker: 1. Beer review, 2. T2, 3. Gross Pointe Blank, 4. Mail order brides, 5. Escape rooms. Web: https://theguysreview.simplecast.com/ EM: theguysreviewpod@gmail.com IG: @TheGuysReviewPod TW: @The_GuysReview FB: https://facebook.com/TheGuysReviewPod/ YouTube: https://www.youtube.com/channel/UCYKXJhq9LbQ2VfR4K33kT9Q Please, Subscribe, rate and review us wherever you get your podcasts from!! Thank you, -The Guys
President Biden signed the Bipartisan Infrastructure Deal into law. It has historic investments in roads, bridges, and other physical infrastructure improvements. But this new signed legislation also focuses on one of ASA's policy priorities, bridging the digital divide, as it invests $65B into affordable broadband, ensuring every American can participate in the modern economy. Our guests discuss what's in the new law, how it is being implemented, and how older adults stand to benefit. Read more at:https://www.benton.org/blog/when-do-we-get-our-broadbandhttps://www.aarp.org/politics-society/advocacy/info-2021/infrastructure-for-older-americans.html
Broadband connectivity and access has gained recent public attention due to the $65B carve out in Biden's $2Tr infrastructure plan. But for many Americans, it has been a persistent problem that has prevented them from elevating their work, education, and more recently health status. Broadband, as John likes to say, is a "Super Determinant of Health", impacting all aspects of a patient's relationship with our healthcare system. On this episode, John speaks with public health guru and broadband expert Pierre Vigilance, MD, MPH. Dr. Vigilance is a health care executive with 20 years of experience in the private, public, and academic arenas. Starting as an ER doc, Dr. Vigilance turned to public life to become the health commissioner of Baltimore and DC, before launching a career in academia and the burgeoning world of health tech start-ups. Enjoy!
Today's blockchain and cryptocurrency news, brought to you by OliveAI.com/careers. Bitcoin is up 0.5% at $62,476, Ethereum is up 1% at $2,438, and Binance Coin is up slightly at $543.36. XFin Network up 41%, Maker up 22%, Compound up 17%. $623M of Bitcoin that was stolen in the Bitfinex hack in 2016 moved yesterday. Berlin hard fork is complete on Ethereum. Coinbase valued at over $65B after first day. VanEck launches new digital asset ETF. AXA Switzerland will let customers pay premiums in BTC.
“You gotta be a little crazy to plant vines in New York,” Sam Filler of the New York Wine & Grape Foundation tells us. Yet, New York is the third-largest wine-producing state after California and Washington. It showcases a diversity of high quality, cool and cold climate wine growing regions, such as the Finger Lakes, Long Island, and Niagara. Sam gives us an overview of wine (and juice grape growing) in New York and tells us about how the foundation supports its members by building the brand of New York Wine, its winegrowing regions, and primarily family businesses that make it up. Special Announcement: "Be the Change is hosting a virtual job fair on April 22nd for the beverage alcohol industry. Registration is now open for all employers at bethechangejobfair.com. Sign up to connect with up to 1,000 jobseekers. This is an equal opportunity job fair, open to all."Detailed Show Notes: New York Wines470 wineries, ~100 vinifera producingMostly family-owned wineries#3 in US wine production behind CA and WA$6.65B in economic impact supports 72,000 jobs~40,000 acres planted, 11 AVA’sHas a diversity of climates - maritime (Long Island), river influenced (Hudson Valley), great lakes (Finger Lakes)Main AVA’sFinger Lakes - Riesling; lots of winegrowing history; 1st wine trail (1983), Pleasant Valley Winery founded in 1860 - 1st US bonded wineryNiagara - could be a leading Pinot Noir regionLong Island - debate on the signature grape, Merlot the base of Rose, Sauv Blanc, Cab Franc; the breeze from bodies of water reduce mildew pressureLake Erie - major Concord grape growing region for Welch’s grape juice, Double A Vineyards nursery an important player; Riesling, Traminette (like Gewurztraminer)Hudson Valley - Cab Franc focused, 500 acres plantedFinger Lakes and Long Island recognized both locally and globallyMain varietalsRiesling - grown in most regions, biggest in Finger LakesCab Franc - grown in many partsChardonnay - Finger Lakes, Lake Erie, Long Island; more Chablis style winePinot Noir - big for sparkling wine productionUnifying elements of New York winesCool climate, mainly driven by family businessesDefined by bodies of water that surround the regionsPersonalized hospitality (elevated with Covid, e.g., Macari Vineyards glamping tents)NY Wine & Grape FoundationEstablished in 1985 by state law - to lead promotion and research efforts for the stateAssociated with Cornell University and various programs, including the wine analytics labIncludes juice grapes (⅔ of vines planted in the state) - Welch’s a key partner for viticulture research; very limited table grapes in the stateSuccess for the foundation is building the capacity of the industry - e.g., getting DTC online, improved websites, connections to customers, and maintaining relationshipsMembers are mostly “farm wineries” (a legal term that means wineries use 100% NY grapes), grape growers, and business partnersState provides baseline funding for infrastructure, receive some matching funds for research, and some membership duesWorking with other local wine marketing groupsUsed to fund some marketing materialsNow co-sponsor events and collaborate closely with the local marketing bodiesGeographic FocusNew York City is the #1 focusChicago and Florida also important; PA difficult b/c of the liquor control boardSome export but need to find the right nicheMarketing efforts and programsNY wine ~70% sold out of tasting room - hospitality keyFor markets w/in a 5-hour drive of NY state, NY is the #1 destination to visitTourism sales did well in Covid 
with people traveling locally (less visitation, but higher sales/visitor)Building more online presence (e.g., Macari vineyards dialed in their wine club program during Covid)Virtual tastings helping broaden the geographic reachThey did some advertising on Levi Dalton’s I’ll Drink to That podcast and now with SevenFifty to reach the trade audienceThe key effort is in keeping consumers engaged with wineriesBelieves telling the individual stories of wineries is compelling, potentially more than having a signature grapeMost effective marketing - when people can connect in person, started experimenting with incorporating local elements in trade tours (e.g., state park visits, walk-around tastings with a meal, more curated events vs. bussing around to many wineries)NY State Wine messagingUsed to be “Uncork New York”Now “Boldly New York” - embodies the risk-taking spirit across the state, “gotta be a little crazy to plant vines in New York”Vision - To be the world’s greatest cool and cold climate grape-growing regionNY wine investmentPaul Hobbs started a winery in the Finger Lakes focusing on RieslingThe trend has mostly been family wineries buying other family-owned wineries - the industry is investing in itself
Dean Acosta is the Lockheed Martin Chief Communications Officer and Senior Vice President. Lockheed Martin is a $65B global security and aerospace company with 114,000 employees worldwide. Dean Acosta leads a team of 500, oversees global communications strategies and activities, and provides strategic communications counsel to the corporation's senior executives. He's held senior leadership positions at NASA, Boeing, Honeywell, Phillips 66, and Resideo. While at NASA, he was press secretary in the aftermath of the space shuttle Columbia tragedy and during the return to flight. Dean has received an Emmy Award for investigative reporting and holds a bachelor's degree from the University of Texas at San Antonio and a master's degree from Seton Hall University. Dean's LinkedIn Profile: https://www.linkedin.com/in/deanacosta/
WHAT YOU WILL DISCOVER FROM THIS EPISODE: The big life lesson Dean learned as a sportscaster. The advice Dean would give his younger self. How you strike the balance of “tooting your own horn” and being humble. How his mettle was tested during the Columbia Space Shuttle crisis and the pandemic. Key strategies every leader can use to respond to a crisis. How to have a “steady hand” in a crisis. The single trait Dean would instill in every employee. Success strategies every employee should keep in mind. A twist in Dean's career that led to his success down the road. The three board games that Dean likes to play with his family.
HIGHLIGHTS: Advice for someone facing a crisis in their career or business: Involve more people in the conversation, not fewer. Transparency and authenticity need to be key principles in how you approach your communications. Tips on avoiding burnout: Make sure you have work/life balance. Family first, then the mission. Stay focused. Be okay with asking for help or asking for a break. Be able to reflect on what's working for you and what's not.
QUOTES: “Advocate for yourself and your team.” “There's a fine line between arrogance and confidence.” “Understand your audience.” “Think about the things that advance the mission of the company.” “Involve more people in the conversation.” “When you think you're over-communicating, you're usually not.” “Pause and reflect upon what you've learned during the past year.”
RESOURCES: Good to Great, Jim Collins https://www.amazon.com/Good-Great-Some-Companies-Others-ebook/dp/B0058DRUV6 The Right Stuff https://www.amazon.com/Right-Stuff-Tom-Wolfe/dp/0312427565 Ghost Fleet https://www.amazon.com/Ghost-Fleet-Novel-Next-World/dp/054470505X
The growing number of states legalizing medical marijuana and increasing demand for cannabis in medical and recreational applications are some of the main factors that are expected to drive the growth of the cannabis market size. Even states without final tallies for all of 2020 reported record sales figures. The North America cannabis market is expected to reach $65B by 2027 from $6B in 2019. The market is anticipated to grow at a CAGR of 29.9% from 2020 to 2027. A snapshot of a few marijuana markets across the U.S. revealed that even states that took a hit at points during 2020 – such as Nevada, where tourism dried up for several months starting in March – either rebounded or grew steadily throughout the year. The North America region is going to dominate the global cannabis market due to increasing legalization of cannabis for both medical and recreational purposes all over North America. Experts agreed that while 2020 might have set a new bar for industry performance, it'll likely be an easy one to meet as the legal marijuana sector continues to grow. Show Notes: Cannabis Market Overview February 2021 https://www.headset.io/posts/cannabis-market-overview-february-2021 Host: Josh Kincaid, Capital Markets Analyst & host of your cannabis business podcast. https://www.linkedin.com/in/joshkincaid/ Episode 654 of The Talking Hedge: Your cannabis business podcast covering cannabis products, business news, investments & events. https://www.theTalkingHedgepodcast.com Music Info: Song: Dark Trap Beats Hard Rap Instrumental | Gang | 2018 Artist: LuxrayBeats Keywords: Podcast, Hemp News, Weed News, Cannabis News, Marijuana News, Cannabis Business, Marijuana Business, Cannabis Industry News, Marijuana Industry News, Weed News 420, Talking Hedge Podcast, Cannabis Podcast, Marijuana Podcast, Business Podcast, CBD podcast, THC podcast, Cannabis Pitch Deck, Marijuana Pitch Deck, Marijuana Investment Deck, Cannabis Investment Deck, Cannabis Compliance, Cannabis Data, Cannabis Banking, Cannabis Investment, Pot Stocks, Cannabis Stocks, Weed Stocks, Marijuana Stocks
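A quick sanity check on that projection (a minimal sketch, not from the report; the 29.9% CAGR is the report's own figure for a 2020-2027 forecast window, so it will not reproduce exactly from the 2019 base cited here):

```python
# Minimal sketch (not from the report): the compounding relation behind the
# projection, value_end = value_start * (1 + cagr) ** years.
def cagr(value_start: float, value_end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (value_end / value_start) ** (1 / years) - 1

# Illustrative only: $6B (2019) to $65B (2027) spans 8 years.
print(f"Implied CAGR 2019-2027: {cagr(6e9, 65e9, 8):.1%}")  # ~34.7%
# The quoted 29.9% covers 2020-2027, so the report's forecast presumably
# starts from a higher 2020 base than the $6B cited for 2019.
```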
Topics: Enerplus Buys in Williston, Biden Bans Fracking, SpaceX Oil Co., Musk Offers Reward, Gates Gives a Bill, Rivian Raises $2.65B The post Elon Becomes an Oil Man | Energy Roundup 1/28/20 appeared first on Digital Wildcatters.
Detroit headlines: The city of Detroit and surrounding counties are having to slow their COVID-19 vaccine rollout due to supply problems from the Federal government. Rivian gets $2.65B in investment, while Stellantis unveils their new sign out in Auburn Hills. But with FCA and Peugeot merged under a Dutch headquarters, we're really down to just the Detroit 2. Or is it just GM and Ford, without a name? Online sports betting starts Friday in Michigan, and Detroit casinos are in on the action. Henry Ford Health System opens their new Brigitte Harris Cancer Pavilion on West Grand Boulevard in the city tomorrow. Tomorrow (Wednesday) is officially "What's Going On?" day to remember the iconic song and album by Marvin Gaye. Plus Fletcher Sharpe (http://www.twitter.com/saintfdw) joins us for sports: The Michigan State–Illinois basketball game has been postponed due to COVID-19. It's the third postponed game for the program. The Detroit Lions hire Brad Holmes as General Manager. What are the coaching prospects now for the Lions as Saleh is off to the Jets? Jim Harbaugh shaking things up over at U of M Football. Red Wings honor Marlowe Stoudamire. Thanks for listening! If you love what we're doing there are a couple of ways to help. 1) Leave a review on Apple Podcasts. https://podcasts.apple.com/us/podcast/daily-detroit/id1220563942?mt=2 2) Join us as a member on Patreon. Your support, like Nissa and Jonathan's, means the world to this project. http://www.patreon.com/dailydetroit
Countrywide reports on the politics of food and farming; China imposes more tariffs and farm value rises to $65B in 2020
Here we are! Location Weekly is ready with intriguing news from the industry along with exciting new guests in our Members At Home section. So tune in to learn all about Fit:Match teaming up with Brookfield for virtual fitting rooms in malls, Walmart, Cadillac Fairview, and others transforming parking lots into virtual cinemas, and Uber buying Postmates for $2.65B, and listen to our guests Emil and Filip, Co-Founders of Bluedot.
This week's Alpha Trader podcast features hosts Aaron Task and Stephen Alpher talking about the second-half outlook with Brad McMillan, chief investment officer at Commonwealth Financial Network. Prior to the chat with McMillan, Task and Alpher talk about recent market action. Among the items - a bull run that appears to be getting its legs again, and a pickup in dealmaking. Of the bull run, Task notes that an equal-weighted S&P 500 is down 9% since June 8. The move higher in the headline indices of late is getting even more focused on big-cap tech. Of dealmaking, the hosts are interested to see Warren Buffett and Berkshire Hathaway ([[BRK.A]], [[BRK.B]]) finally putting some of their $137B to work with the $9.7B cash purchase of the natural gas transmission assets of Dominion Energy (D). There was also Monday's $2.65B purchase of Postmates by Uber (UBER), with Task pointing out that this is an all-stock deal. At least Uber thinks its currency is of pretty fancy value right here. It's hardly surprising that the key to McMillan's outlook rests with what happens with the pandemic. For now, his base case is that we get Covid-19 under control and the economic recovery continues (even if at a slower pace than current). In that case, we're pretty much back to normal by early next year. That doesn't necessarily mean McMillan's a big bull on the stock market at current levels. Instead, he sees stocks as pretty expensive even if earnings come back, and expects the S&P 500 to end the year about where it stands right now. There's plenty more, including McMillan's take on the wild swings in jobs numbers we've seen, whether or not we get another stimulus package, and what he considers to be the most underrated economic statistic.
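The divergence Task points to, with headline cap-weighted indices climbing while the average stock slips, is a weighting effect. A minimal illustrative sketch with made-up numbers (nothing below is real market data):

```python
# Illustrative sketch only - hypothetical constituents and returns, not market data.
# Shows why a cap-weighted index can climb while its equal-weighted twin falls.
def index_return(returns, weights):
    """Weighted-average return of the constituents."""
    return sum(r * w for r, w in zip(returns, weights)) / sum(weights)

returns  = [0.15, 0.12, 0.10] + [-0.08] * 7   # 3 mega-cap winners, 7 laggards
mkt_caps = [1500, 1200, 1000] + [50] * 7      # cap weights concentrate in the winners
equal    = [1] * len(returns)                 # equal weights treat every stock alike

print(f"cap-weighted:   {index_return(returns, mkt_caps):+.1%}")  # about +10.9%
print(f"equal-weighted: {index_return(returns, equal):+.1%}")     # about -1.9%
```

Same ten stocks, opposite signs: concentrating the weight in a few big winners is enough to flip the index return.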
Find out how you can tap into the global Techstars network from right here in Calgary. Did you know that Techstars Startup Weekend is run by local Organizers in over 700 cities and 150 countries worldwide? Join us for the next Startup Weekend Calgary from February 28 to March 1, 2020 and get connected to the global Techstars network. Register now!
Interview date: November 12, 2019
Featured Speaker: Jacqueline Ros, Regional Director, Americas at Techstars
As the Regional Director for the Americas, Jacqueline is responsible for creating and executing the strategy for the region, managing a multi-cultural remote team that supports the Techstars network, and supporting governments building innovation ecosystems.
About Techstars: Techstars is a worldwide network that helps entrepreneurs succeed by investing in, supporting, and educating some of the best entrepreneurs around the globe. With a portfolio of over 1,100 companies and a total market cap of $65B, Techstars operates in 150 countries with a massive network of mentors, investors, founders, and Community Leaders.
Join Calgary's Startup Community: Startup Calgary's free Community Membership will help you stay on top of community events, resources and updates. Members receive exclusive discounts and a free downloadable version of our Startup Community 101 deck. You can sign up here. Follow Startup Calgary on social media.
Introduce Jared. Have you seen anything good recently? Jared: I, Tonya, Baahubali, Lagaan, Incident in a Ghost Land, Isle of Dogs. Nick: Nate Bargatze (Netflix) / Bram Stoker’s Dracula / Shrill (Hulu) / WonderCon. NEWS: Shooting at a Mosque in Christchurch, NZ (15 Mar) /// Batman is 80 years old (Mar 30, 1939) /// Apple TV+ trailer. Main Topic: Disney Acquires 20th Century Fox. The Walt Disney Company bought 21st Century Fox in a $71.3 billion deal @ 12:02 am on 20 Mar 2019. Biggest media consolidation in Hollywood history. Per Wikipedia's history of the merger, Comcast almost outbid them ($65B on 13Jun18). In June 2018, the DOJ granted antitrust approval, so the deal was all but done. Controversy: Disney-Fox now controls about 37% of domestic box-office share. Disney released 10 films in 2018 – Netflix released 93. Disney shuts down Fox 2000, which makes mid-level movies (1 of 4 film production companies owned by Fox) – Fight Club, The Fault in Our Stars, Life of Pi, Diary of a Wimpy Kid, Hidden Figures, Walk the Line, Bridge of Spies, The Hate U Give, Love, Simon. What Disney owns now – 20th Century Fox Movies: Avatar, Titanic, Star Wars (Ep 4), Home Alone, Night at the Museum, Cast Away, The Martian, Mrs. Doubtfire, The Simpsons, Alien, Predator, Fight Club, Dragon Ball, Kingsman, There’s Something About Mary, Independence Day, 28 Days Later, Fantastic Voyage, The Sound of Music, M*A*S*H, Big, Rocky Horror Picture Show… 20th Century Fox TV – The Simpsons, King of the Hill, In Living Color, The X-Files, The Mick, 24, Fresh Off the Boat, The Last Man on Earth, American Horror Story. Controlling stake in Hulu – already owned 30%, bought Fox’s 30%, totaling 60%. What does this mean for Disney+? What does this mean for the future of Hollywood? Should we be afraid of this? What does it signify? Fox Corp / News Corp still exists, which owns Fox News, Fox TV and Fox Sports. What are you looking forward to seeing/doing in the next few months? Twilight Zone (CBS, Apr 1) / Pet Sematary, Shazam! (Apr 5) / Hellboy (Apr 12) / Avengers: Endgame (Apr 26) / Cannes Film Festival (May 25) / John Wick Ch 3 (May 17) / Aladdin (May 24). ‘Spider-Man: Far From Home’ thoughts? ‘100 Bullets’. Sign off.
Our hosts Simon and Leda are joined by two great guests, Livia Benisty, Head of Financial Crime at ComplyAdvantage and Ryan Edwards-Pritchard, MD at Funding Options. First up, Revolut’s brand new banking licence. British fintech unicorn Revolut wins banking licence. Revolut has secured a European banking licence from authorities in Lithuania, allowing them to start offering current accounts and loans across the EU from early next year. Initial focus on Lithuania, where it has about 150,000 customers, before expanding into larger markets including the UK, France and Poland later next year. However, its plans to offer full banking services in its home market could be delayed in the event of a disorderly Brexit if financial firms lose the right to serve retail customers on licences granted in the rest of the EU. Barclays customers to switch off their spending. Barclays has become the first High Street bank to allow its customers to "switch off" certain types of spending on their debit cards. The idea is to help vulnerable customers, particularly problem gamblers, or those in serious debt. They can’t block specific retailers but all account holders can now block their own spending in a number of categories. Fintech is changing how millennials manage debt. Fintech is moving millennials from credit cards to personal loans in the US. Compared to the previous generations, millennials are more likely to take out a personal loan to cover everyday costs, relying less heavily on credit cards as their predecessors. Fintech firms originated 36% of all personal loans last year, compared to less than 1 percent in 2010. Metro payments go international. Metro Bank is adding international payments through personal and business accounts via its mobile app. The new service enables customers to make same day Swift and next day SEPA (Single Euro Payments Area) payments in Euros, US dollars and Sterling. International payments are integrated alongside the app’s domestic payments functionality, and allows customers to choose whether they or the payee cover the charges. We spoke to Alex Park, Metro Bank’s Director of Digital to tell us more about this, how it was a customer-led initiative, and what’s coming next. HSBC Launches robo advisors. My Investment, an online platform designed to make wealth advice accessible to investors, will be available from as little as £1,000 to invest at an initial charge of 0.5% followed by fees of up to 0.46%. HSBC estimates My Investment could be used by 2.87 million of its existing retail customers, including many who may not have considered wealth management an option in the past. Advice will be given based on investor's answers to questions about their finances, investment experience and appetite to risk. Funding Options bags CMA prize. Business finance marketplace, Funding Options, has been announced as one of the winners of the second phase of Nesta’s ‘Open Up Challenge’ - one of only three companies to have won both stages of this challenge. The Open Up Challenge prize is part of a package of reforms from the Competition and Markets Authority designed to shake up retail banking. Plaid to hit $2.65B valuation. $250M series C fundraising round led by Mary Meeker who will join the board. Other investors include Andreessen Horowitz, Goldman Sachs, Spark Capital and others. The platform allows companies to create financial services applications without having to hire their own team of engineers to build out a tool that connects apps to its users’ bank accounts. 
Plaid builds infrastructure that allows a consumer to interact with their bank account on the web through a number of third-party applications, like Venmo, Robinhood, Coinbase, Acorns and LendingClub. SoftBank’s record breaking IPO. SoftBank Group Corp is set to raise 2.65 TRILLION yen ($23.5 billion) in Japan’s biggest-ever IPO. The share sale widely regarded as finalizing the group’s transition from domestic telco to a monolithic global tech investor. It also said it will sell all extra shares set aside for excess demand, taking the total just shy of the record $25 billion raised in 2014 by Chinese e-commerce giant Alibaba Group - of which it also owns a 30% share. SoftBank offered nearly 2 billion shares for sale and allocated over 80% of the sale for domestic retail investors. Barclaycard joins up with Evernym. Barclaycard wants to make passwords a thing of the past and as such they have joined an accelerator programme run by self-sovereign identity (SSI) specialist Evernym. SSI is the concept of enabling people to securely store information about their digital identity, totally under their control, so that they can share verifiable proof of who they are, for life. We spoke to Evernym MD Andy Tobin to find our more about this partnership and the Evernym accelerator. And Finally... Finger payments to go campus-wide at Copenhagen Business School. Finger vein payments technology is to be introduced campus-wide at Copenhagen Business School. The technology, named FingoPay, works via an electronic reader which builds a 3D map of the customer’s finger veins, generating a 'natural personal key' - thus removing the need for the individual to enter any personal details upon registration to make a payment. FingoPay successfully processed more than 12,000 transactions in the restaurant and coffee shop of the business school. All this and so much more on today's episode of Fintech Insider! Subscribe so you never miss an episode, leave a review on iTunes and every other podcast app. Spread the fintech love by sharing or tweeting this podcast. Let us know your thoughts @FintechInsiders and join the discussion by signing up at www.fintechinsidernews.com This week's episode was produced and written by Laura Watkins, and edited by Alex Woodhouse. Special Guests: Leda Glyptis, Livia Benisty, and Ryan Edwards-Pritchard.
The Talking Stack Podcast Show Notes 1. A NEW ONLINE-OFFLINE RETAIL WAVE IS COMING - ARE MARKETERS READY? Marketers are finally using technology to do what it should have done in the first place - deliver customer experience (CX) in a way that acknowledges how normal customers behave – online and offline. Are retailers dependent on the Googles of the world to help make the connection? What does the combination of mobile marketing, locational data and this move by Google mean to retailers, and how could they be approaching the opportunity? Google announces tweaks to drive brick and mortar sales Location based marketing report Snapchat facilitates in-store purchase 2. SALESFORCE DEBUTS GOOGLE AND MARKETING CLOUD INTEGRATIONS Read the Salesforce story here What do the new capabilities within Google Analytics 360 really mean to marketers who use Salesforce? Will they be able to use their data more effectively, or are these cosmetic changes to the dashboard that make life easier but not necessarily better? 3. IS FACEBOOK THE NEW MODERATOR OF YOUR BRAND CX DELIVERABLES? Facebook launches new ‘Leave Feedback’ tool What role is Facebook really playing in our lives? Is it setting itself up as a mediator between advertisers and consumers? Will it be defining what a good CX for a Brand is? And how vulnerable does this leave marketers? 4. MEDIA CONSOLIDATION/MONOPOLY ISN’T ENDING ANY TIME SOON Read the articles on the Time Warner acquisition here and here CB Insights says that with AT&T’s $85B acquisition of Time Warner getting the green light earlier this week and Comcast making a $65B bid for 21st Century Fox, consolidation in the media industry seems to be accelerating. What do marketers need to do to survive in a world where only a handful of giants will own all media and advertising platforms? Where will the disruption come from? HAIL OF THE WEEK: this fun, awesome video from CountryTime Lemonade Company. Oh, the irony of the Kafka-esque times we live in! Whatever next? Maybe the 6th season of Silicon Valley holds the secrets! (Listen to the podcast to know what we mean!) It was a slow week for Fails, so we skipped it! Thanks for listening. See you next week! Please follow and review us on Spotify or iTunes! Thanks! See all episodes of The Talking Stack here podcasts.apple.com/us/podcast/talk…st/id1373600978 www.martechadvisor.com/multimedia/podcasts/ podcasters.spotify.com/podcast/4Cmet…etiZ/overview
We're joined by Henry Pickavet, editorial director at TechCrunch and co-host of the CTRL+T podcast, to discuss the second season of Netflix's Queer Eye revival. We also look at the ramifications of AT&T's acquisition of Time Warner going through — and at whether or not Netflix is moving into gaming. Links: [AT&T completes its acquisition of Time Warner][1] [Comcast bids $65B for Fox assets, setting the stage for a fight with Disney][2] [Netflix is adding an interactive ‘Minecraft’ story to its lineup, denies entry into gaming][3] [1]: https://techcrunch.com/2018/06/15/att-completes-its-acquisition-of-time-warner/ [2]: https://techcrunch.com/2018/06/13/comcast-bids-65b-for-fox/ [3]: https://techcrunch.com/2018/06/13/netflix-is-adding-an-interactive-minecraft-story-to-its-lineup-denies-entry-into-gaming/
At halftime of the World Cup 2018’s opening game, Bill Mann pops by the studio to discuss Comcast’s $65B bid for Fox’s assets and how Disney could respond. We also stare in horror at the smoldering embers of two retail stocks (Michael’s, Tailored Brands) falling 20%. Plus, we dip into the Fool Mailbag to discuss valuation vs. market opportunity. Thanks to Casper for supporting The Motley Fool. Save $50 on a mattress at http://www.casper.com/fool (use the promo code “Fool”)
Michael Cohen hunts for new lawyers in FBI probe, Comcast challenges Disney with $65B bid for Fox, Anticipation surrounds Fed’s rate forecasts after next hike. --- Support this podcast: https://anchor.fm/anchor-news-rundown/support
FAR 074 Expected Air Date: 10/12/17 Topics: Mark your calendar now and plan to attend “FlipStarter,” coming October 20 and 21. Join my special guests: EJ Lashlee, author of the True Trust Book, The Protection Book, and founder of True Trust Services. He will be speaking about asset protection through Private Asset Trusts. Jennifer Hammond, Host of The Jennifer Hammond show on Sirius/XM Radio The Urban View. Nationally acclaimed realtor and investor extraordinaire. Jay Conner, America’s private money authority. Bruce Mack, Founder of Platinum Finance. Pat Dornan, developer of the “Ultimate Rehab Estimator” and author of “Expect the Unexpected.” Roger Herring, Founder of Investors Accounting. Karen Anderson, Direct Marketing and List Consultant. Mike Ventry, Advanta IRA, on how to use your self-directed IRA to create potentially tax-free income. And of course, me, the Flipping America guy, Roger Blankenship. I’ve made my career in real estate investing by doing it rather than talking about it. I’ve flipped hundreds of houses and I’m going to show you how to do it. I have a few techniques no one else uses or teaches, but I’m going to share them with you. Why? Because we are going to tell you how to get started flipping houses and building wealth. It’s not just owning a business, it’s owning a wealth generation machine that will not only provide for you, but be a blessing to your children and grandchildren. And that’s not all: when you purchase your ticket for only $97 for this two-day event, my good friend Jay Conner is going to invite you to HIS event in October at no additional charge. That’s right, two multi-day real estate training events for one low price. Sign up now at FlipStarterEvent.com. The FIRST 20 people who sign up for FlipStarter can save 20% off the admission cost by using the coupon code HOT20. I flipped 75 houses before I finally decided to go to a seminar about flipping houses. I wanted to improve my techniques. Over the years I’ve attended many training events and I can confidently tell you that FlipStarter is unlike any real estate investor training out there. See for yourself at FlipStarterEvent.com. Trends: Questions@flippingamericaradio.com Tell us where you’re from! 5 Bathroom trends to avoid! And we are not just talking about laying off the late night pizza... News: 73, 77, 45. Harvey, Irma, Maria: $175B, $200B, $75B. Cali wildfires: could be up to $65B. 21 already dead. 11,000 homes at significant risk, 9.1 million homes at some level of risk. Emails: Questions@flippingamericaradio.com Tell us where you’re from! Lorraine from Traverse City, MI: “You’re talking a lot about fix and flips this month and I understand - what with your event coming up. But what about those of us who want to buy rental properties. Will you be doing programs about that?” David from Columbus, OH: “I’m retired - don’t want to flip houses, but I love your show. I want to be a private funder for you or some of your students. How would I get started?” Andy from Tryon, NC: “I have a business currently, but would be interested in flipping houses as a side hustle. Is that possible and how should I go about it?” Keith, Powder Springs, GA: “I have $50,000 to invest. Not enough to buy a house, but enough to do something, right? What are my options?” Frankie from Jonesboro, GA: “I went to a 3 day event from ________ _______ and got some good information. It was $197 and I was impressed. I wanted to sign up for their main course until I found out it was close to $35,000. I’m not doing that. 
Where I live I can buy houses for $35,000. I think I’d rather buy a house than pay that much to learn how to buy houses. So anyway - tell us plainly - are you going to upsell us to a $35,000 course too?” In a word, no, but let’s think about it... If you could make 35k per month after paying that, what would the deal look like then? Compare to a franchise… $500k to make $50k per year?? How about switching that around? But I’m not going to try to sell you something for $500k. I left 200k on the table, so my education might have cost me that. I’m not charging that. I’m not charging $50,000. The real problem is a course doesn’t really teach you how to do this. You need a coach -- a mentor. The Flipping America Mentoring Program. Our program begins with a surprisingly small commitment, with an agreement that your coach is going to get paid a little bit when you make money. Your first four deals for sure. More if you need it. Topics: The Secret to Flipping Houses in your Spare Time. Systems Persistence Consistency Discipline Discernment What you need to know to flip: So what do you need to know to really fix and flip houses? How to find deals 10 sources Foreclosure auction an entirely different set of risks and knowledge base. Deal Marketing Direct mail Landing page Answering the phone Screening calls Making offers How to know if it IS a deal Diff between profit margin and profit Diff between gross profit and net Calculating profit Estimating repairs Determining ARV How to buy it. Negotiating Creating solutions Maximizing profits Structure Purchase Cash Equity Partner? Hard Money Private money Owner Finance Installment contract Lease Option Lease Purchase Subject-To Wrap around Financing How to flip it Wholesale? Assignable contract Double closings Fix it Structural knowledge The parts of a structure Recognizing what needs to be repaired or replaced Design decisions Choosing colors Styles Accessories Contractor vetting and approval Where to find em How to check them out How to interview them Contract law and development Contractor Supervision Payment process Draw schedules Lien waivers Sell it Marketing approach Using a realtor How to find How to vet How to interview Whom to hire Manage the relationship Listing and pricing strategies Gathering feedback Managing objections Price reduction strategy Negotiating with buyers Negotiating AFTER its under contract Running Your Business Setting up properly Keeping proper records Creating business systems Tax strategies and planning Protecting your assets Proper Insurance for all assets Managing tenants Know the laws Prepare for court! Handling service requests Leases! What goes in them What can’t be in them Who’s on the lease? That’s the TIP. The iceberg is what is underneath all that. It’s a lot. But what do you get in return? The real possibility of a six figure income in a couple of years. The opportunity to build lasting wealth to pass on to your children and grandchildren Freedom - work when you want, however long you want, quit when you want or when you can each day. The opportunity to live life on your own terms. Don’t “make a living.” LIVE. Do that “higher meaning” thing.
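For the "Calculating profit / Estimating repairs / Determining ARV" items in the outline above, here is a minimal illustrative sketch. It uses the common 70%-of-ARV rule of thumb and hypothetical numbers, not the show's own formula:

```python
# Illustrative only: a common back-of-the-envelope flip analysis.
# The 70% rule and every number below are assumptions, not figures from the show.
def max_offer(arv: float, repairs: float, rule: float = 0.70) -> float:
    """Rule-of-thumb ceiling price: pay at most rule*ARV minus repair costs."""
    return arv * rule - repairs

def net_profit(arv: float, purchase: float, repairs: float,
               holding_and_selling: float) -> float:
    """Net profit after purchase, rehab, and holding/selling costs."""
    gross = arv - purchase - repairs          # gross profit
    return gross - holding_and_selling        # net profit

arv = 200_000          # estimated after-repair value
repairs = 35_000       # estimated rehab budget
print(f"Max offer (70% rule): ${max_offer(arv, repairs):,.0f}")       # $105,000
print(f"Net profit at that price: "
      f"${net_profit(arv, 105_000, repairs, 20_000):,.0f}")           # $40,000
```

The gross-vs-net distinction the outline mentions is exactly the gap between the two numbers: holding, financing, and selling costs come out of gross profit before anything reaches your pocket.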
With all the latest talk about Facebook changing their newsfeed’s algorithm and the inevitable comparisons to Instagram (aka maximal vs. minimal design approaches), I’d like to focus our upcoming Visual Storytelling Today show on Instagram and how visual storytellers like you can take advantage. About our guest Julie Cabezas is a co-founder of The Ideal Marketing Agency. Julie (B.S.B.E.) was a key marketing team member in the $1.65B acquisition of a groundbreaking medical device robotics company prior to co-founding IDEAL. With a background in engineering, training, and marketing, she developed the elegant, flexible framework IDEAL uses to teach entrepreneurs and executives to create social brands their customers love. She is the inventor of Social Brand D.N.A. and the Content Creation Formula. Julie has loads of expertise transforming brands into what she calls “Little Black Dress Brands” on Instagram. What you will learn: What is Instagram Storytelling and why as a marketer you should care? What are the typical business objectives marketers could use Instagram Storytelling for? How does the end-to-end process of creating an effective Instagram Storytelling program look like? Great industry examples and tips to get you started Read our episode's blog post about this topic on VSI Blog. This podcast is brought to you by the Visual Storytelling Institute (VSI) from Miami, FL.
In this Episode of Oil & Gas This Week – Higher oil prices slash Saudi’s deficit, Saudi Crown Prince favors NYSE for Aramco IPO, Oil & Gas producers grapple over returns vs growth, Nabors acquires Tesco for $215m, Brazil plans $17B in new investments by Q4, Norway’s $65B missed opportunity. Have a question? Click here to ask. Show Notes & Links: 2017 on the road sponsors: Totaland – The World’s Most Advanced Field Land Management System, The Landman’s Virtual Office https://www.totaland.com Lee Hecht Harrison – As global experts in talent management, LHH is currently helping 75% of the Fortune 500 Oil & Gas companies simplify the complexity of leadership and workforce transformation. http://www.lhh.com API-YP Events
Stories:
Higher Oil Prices Slash Saudi Deficit
Saudi Crown Prince Favors New York Over London For Aramco IPO
Oil And Gas Producers Grapple Over Returns Vs. Growth
Nabors acquiring Tesco for $215 million
Brazil Plans for $17 Billion In New Investments By 2017 End
This $65 Billion Oil Opportunity Will Never Be Tapped
Dear Millennials, Big Oil Is Not Your Enemy
Weekly Rig Count: As of 8/15/2017 – the American rig count is 1023 active rigs. Redwing Has A Winner! Dan Beisner, Owner at Little Prarie Oil & Gas, you’re this week’s winner! Congrats! CLICK HERE TO ENTER FOR YOUR CHANCE TO WIN! Get Mark’s Monthly Events Email Get Automatically Notified About Oil & Gas Events Once a Month Connect with Us OGGN LinkedIn Group OGGN Facebook Group Join API-YP Jake Corley | Facebook | LinkedIn | Email Mark LaCour | Facebook | Twitter | LinkedIn | Email | modalpoint.com
Hunter Walk is a partner at Homebrew Ventures, a seed stage venture fund. Over a decade ago, while Hunter was a Product Manager at Google, working on the newly acquired YouTube platform, Apple had approached them before Steve Jobs had even announced the world's first iPhone. There was no public App Store at that time and no third party apps. Apple wanted to have complete control over the experience and built the first video app for the iPhone that would pull YouTube content. It was a risky move, but the alternative to working with Apple was - at that time - going directly to a carrier and paying them large sums of money to include your app on all their devices. It was an exciting but challenging time, as Hunter recounts - because there were so many moving pieces and not everyone saw that smartphones were going to play such a huge role in our daily lives in just a few more years. So if you’re a Product Manager yourself, love YouTube, or are just curious about how a startup like YouTube sold to Google for $1.65B in 2006 (the largest consumer tech acquisition at the time) and then went on to work with Apple to ensure they’d make the jump to smartphones - get ready, cause Hunter is going to share his insights and experience on all this and more.
Imagine your startup is down to its last bit of cash, your investors won't follow on, you're sure you're going bankrupt, and everyone tells you to give up. What do you do? You decide to take your company public--it's dubbed by BusinessWeek as “The IPO from Hell.” That's just one of many cold-sweat moments Ben Horowitz has experienced first-hand in his career. Ben took that beleaguered company public and eventually sold it to HP for $1.65B in one of the most impressive turnaround stories in history. He went on to co-found the prestigious Silicon Valley venture capital firm, Andreessen Horowitz. His blog and his book “The Hard Thing About Hard Things” are essential reading for tech entrepreneurs, and are an honest look at entrepreneurship from his personal experience. I interviewed him live at UCLA. He shared some really deep leadership insights that can be translated to any industry, so while he's not a manufacturing entrepreneur, I wanted to share it with you this week. I was curious to hear about what he's learned through his rollercoaster journey as an entrepreneur. We talk about his love for rap music and the technology trends he's most excited about. He gives plenty of advice to the students in the audience to prepare them for meaningful careers. We spent a lot of time talking about leadership, making hard decisions, and how to build the right culture in your company. And as we learn, decision-making and culture go hand-in-hand in pursuit of success. The views expressed on The Art of Manufacturing podcast are those of the guests, and not our sponsors or partners. For more information, photos, and links, check out the show notes at http://makeitinla.org/benhorowitz.
Things are starting to get interesting with the PRC 10-year Rate Review:
Industry filed two separate (but similar) motions to change the procedural schedule
o Asking that Phase 1 be purely focused on whether the current system is failing to achieve the 9 objectives and 14 factors
o Once that is determined, then Phase 2 can be focused on what changes should be considered
USPS went on record, opposing the motions filed by industry. At the heart of the issue:
o Industry believes the current system is working and wants to retain the CPI Cap.
o USPS wants to break the cap and have more ability to raise postage prices.
Several industry associations then responded to the USPS opposing motion
o Noting the request is a practical one, not a legal one. 9 by 14 matrix of craziness…
USPS Published Address Quality Census Method and Assessment Process:
Currently out on the USPS PostalPro site
Will be published in the Federal Register to allow for industry feedback & comments
o Initially published in Dec of 2014, they received extensive industry feedback
o In July 2016 the USPS published a revised set of proposed rules (which also received extensive comments)
o Based on feedback, it has now published a 2nd revised set of proposed rules
Still need to do a detailed read, but a few items are already jumping out:
o eDoc Submitter still being held accountable for Move Update compliance, not the Mail Owner; the eDoc Submitter will have access to data showing the source of errors by Mail Owner
o Even if you pass on the Mailer’s Scorecard, the Inspection Service can still go after individual Mail Owners
o Size of the penalty for failure is still unknown (speculation is saying somewhere around 7 cents per piece)
When asked about changes to the currently proposed 0.5% threshold
o The USPS noted that they are committed to providing at least 90 days’ notice.
USPS Posts a new version of Publication 6850: Streamlined Mail Acceptance for Letters and Flats
Similar to Move Update, it will be published in the Federal Register for industry to comment
Can find it now posted out on the USPS PostalPro website. USPS received over 200 comments last time around.
Pub 6850 is 128 pages long and contains verification and acceptance processes and policies
o Two recent Fed Reg notices add Seamless Acceptance and eInduction to the DMM
o Both reference this new Publication, so it is likely to get a lot of feedback again this time around.
PRC Issues Annual Report to Congress & President
Noted much of the PRC’s resources this coming year will be focused on the 10-year rate review
In FY2016 the PRC approved 281 Negotiated Service Agreements for Competitive Products
PRC estimated the cost of the USO (Universal Service Obligation): the cost of binding the nation together
o The USO has seven principal attributes: Geographic Scope, Product Range, Access, Delivery, Pricing, Service Quality, Enforcement Mechanism
o While not required, the PRC also proposes an estimate for the value of the Postal Monopoly (Balanced)
FY2011: Cost $5.26B, Value $4.25B
FY2012: Cost $4.84B, Value $3.98B
FY2013: Cost $4.65B, Value $4.74B
FY2014: Cost $4.34B, Value $5.38B
FY2015: Cost $4.24B, Value $6.48B
What does the Federal Workforce Hiring Freeze mean for the USPS?
USPS provides essential service to the people and businesses of the United States, so it does not…
While trying to find this answer, I found some interesting facts regarding the Federal Workforce:
o Total size is just under 2.1 million
o Turnover has averaged about 210K jobs per year for the past 5 years: ~75k quit, ~65k retire, ~55k leave because appointments expire, ~10k are fired
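To make the Move Update numbers above concrete, here is a small illustrative calculation. The 0.5% threshold and the speculated 7 cents per piece come from the notes above; the assessment mechanics (charging only the pieces over the threshold) are an assumption, since the final rules had not yet been published:

```python
# Hedged sketch: possible Move Update assessment math under the proposed rules.
# Assumption (not confirmed in the rules): only error pieces ABOVE the 0.5%
# threshold are assessed, at the speculated $0.07 per piece.
def move_update_assessment(total_pieces: int, error_pieces: int,
                           threshold: float = 0.005,
                           per_piece: float = 0.07) -> float:
    """Assessment owed if the error rate exceeds the threshold, else zero."""
    allowed = total_pieces * threshold      # pieces forgiven by the threshold
    over = max(0, error_pieces - allowed)   # pieces subject to assessment
    return over * per_piece

# Example: 1,000,000 pieces with a 0.9% error rate -> 4,000 pieces over threshold.
print(f"${move_update_assessment(1_000_000, 9_000):,.2f}")  # $280.00
```

Whether the eDoc Submitter or the Mail Owner ultimately pays that amount is exactly the accountability question flagged in the notes.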
Official Website: http://www.lawabidingbiker.com PODCAST & VIDEO-In this video Lurch and I ride to the Rattlesnake Mountain Harley-Davidson dealership at 3305 W 19th Ave, Kennewick, WA 99338. Lurch has a plan to trade in his 2008 Harley Street Glide for a 2015 Harley Road Glide Special. Although I have always personally liked the look of the batwing style fairing on the Street Glide models, Harley has re-designed the shark nose fairing on the 2015 Road Glide models and I like the new look much better. Lurch and his 2015 Road Glide Lurch is 6 foot 6 inches tall, so because of the fixed fairing (does not turn with the handlebars) we will be able to install 13" ape handlebars and push them forward farther. This will give Lurch a more stretched out and comfortable riding position. The bars can not be pushed as far forward on the Street Glide, because of the different fairing that turns with the bars. •Check out our #1 rated Harley Handlebar Install Videos-Fairing and non-fairing videos available! Further Information: VIDEO & BLOG-In this video I had some time with a 2015 Harley Davidson Road Glide in what Harley is calling "mysterious red sunglo" & it is beautiful! Harley took 2014 off on producing the Road Glide, so I was excited to check this bike out! I shot some awesome video and photographed the bike. There is also the Road Glide Special for 2015, which is essentially the same bike, but comes with some extra goodies. The special comes standard with the upgraded larger 6.5 inch touch screen version of the new "Boom!™ Box" infotainment/GPS system, a security system, and Premium low hand-adjustable rear suspension, which I like much better from experience in riding with both. Here is a free video & information I released on how to properly adjust the low hand adjustable rear suspension. The special also comes with a finished off shiny black painted inner dash instead of the matte black finish on the standard model. Note that the larger 6.5-inch touch-screen Boom!™ Box/GPS system is an option on the standard model. Harley-Davidson touring models first came out with the Boom!™ Box system in 2014 as part of Project Rushmore. Now, although the Boom!™ Box Infotainment System is pretty awesome, it does come with a fairly steep learning curve that has frustrated many Harley owners until now. In fact, most owners of the Boom!™ Box system have no idea about the hidden menus, bugs, issues, necessary software updates, mapping integration, nor the system's full overall capabilities. With my #1 viewed Harley-Davidson Boom!™ Box Infotainment System instructional videos I straighten that learning curve out for motorcyclists/bikers and relieve the stress. These videos have received rave reviews from the motorcycle/biker community and for that I thank you! Check out the #1 Viewed Harley-Davidson Boom!™ Box Instructional Videos Here! SEE PHOTO GALLERY The feature that most distinguishes the Road Glide from the other touring models is its fixed frame-mount “shark-nose” fairing and dual front headlights. The handlebars move inside the fairing, which stays put. On other touring models, such as the Street Glide, the handlebars move with the fairing and it is often called the "batwing" fairing. First seen on the 2014 touring models, the Road Glide now sports the upper fairing vent with a manual switch to shut it off in inclement weather. This vent is to minimize head buffeting by putting a curtain of air (wall) in front of you, forcing the oncoming air up and over your helmet while riding. 
My 2014 Street Glide special has this fairing vent and I have ridden a ton. My estimation is that it reduces head buffeting by 20-30 percent, so it is well worth it. In addition to the main upper vent are two vents to each side of the headlights, which are “Dual Reflector Daymaker LED” headlights. Harley says these headlights have significantly more spread and greater punch than standard headlights. Harley-Davidson is calling this new three-way air venting system the “triple splitstream vent”. Because the fixed fairing sits way further forward from the rider than on other models, the two additional lower vents are designed to lift the airflow per Harley-Davidson. The dramatic reduction in head buffeting has not only improved rider comfort, but also the listening quality of the Boom!™ Box sound system. A new handlebar bend reduces rider reach by 5.5 inches, which improves long-range comfort. Revised hand controls (just like 2014 models) have improved ergonomics, and a pair of thumb joysticks helps control the Boom!™ Box Infotainment and navigation. The standard comes with air-adjustable twin rear shocks. Here is a free video & information I released on how to adjust these air type shocks. With the Rushmore Project Harley put the new hard injection-molded saddlebags, which are the same convenient easy-open units found on Harley’s other touring models. The 2015 Road Glide standard comes with optional Reflex Linked Brakes with ABS, which I highly recommend for all riders. Of course they are anti-lock brakes, but much more. The integrated system senses too much front or rear brake applied by a rider and evens it out for you automatically, which is critical for emergency braking situations. I have tested this new braking system out intensely in all kinds of weather and road conditions and let me tell you that this bike stops fast and it is very controlled! With all the close calls us motorcyclists/bikers have to deal with, these brakes are well worth it. The iterated brakes don't react in until a bit higher speeds. So, for those of us that perform low-speed skills course drills, we won't have to worry about using that back brake and having it activate suddenly! Road Glide Specifications: •PRICE $20,899/$23,199 •ENGINE TYPE Air-cooled ohv V-twin, 103 cubic inches •BORE & STROKE 98.4mm x 111.1mm •COMPRESSION RATIO 9.7:1 •VALVE TRAIN Pushrod; two valves per cylinder •INDUCTION Electronic Sequential Port •TRANSMISSION Six-speed •FINAL DRIVE Belt •FRONT SUSPENSION 49mm fork; 4.6 in. travel •REAR SUSPENSION Air-adjustable shocks, 2.1 in. travel •FRONT BRAKE Dual 300mm floating discs, four-piston fixed calipers •REAR BRAKE 300mm disc, four-piston fixed caliper •FRONT TIRE Dunlop D408F 130/60B-19 bias ply •REAR TIRE Dunlop D407T 180/65B-16 bias ply •WHEELBASE 64.0 in. •RAKE 26º •TRAIL 6.8 in. •SEAT HEIGHT (UNLADEN) 27.2 in. •FUEL CAPACITY 6.0 gal. •EPA FUEL ECONOMY 42 mpg •COLORS Vivid Black, Amber Whiskey, Mysterious Red Sunglo, Black Denim and Superior Blue •DRY WEIGHT 812 lb. 
________________________________________________________________ CHECK US OUT AND SUBSCRIBE: Website: http://www.LawAbidingBiker.com Email & Voicemail: http://www.LawAbidingBiker.com/Contact Phone Hotline: 509-731-3548 Twitter: https://twitter.com/LawAbidingBiker Facebook: https://www.facebook.com/lawabidingbiker YouTube: http://www.youtube.com/scrappy587 Google Plus Page: https://plus.google.com/b/104041070580228657262/+Lawabidingbiker587 Instagram: http://instagram.com/lawabidingbiker RSS: feed://www.LawAbidingBiker.com/feed iTunes Direct Link to Podcast: https://itunes.apple.com/us/podcast/law-abiding-biker-podcast/id622424087 Stitcher Radio: http://www.stitcher.com/podcast/law-abiding-biker-podcast TuneIn Radio: http://tunein.com/radio/Law-Abiding-Biker-p562288/
OFFICIAL WEBSITE: http://www.lawabidingbiker.com Want to call us and get your topic on the show or leave us feedback? Listener Line: (509) 731-3548 Computer Voicemail: http://www.lawabidingbiker.com/voicemail General Contact: http://www.lawabidingbiker.com/contact PODCAST-We try to answer the question of which of the Harley-Davidson "Street Glide" class motorcycles you should buy. What are the difference and specifications? What are the difference in prices and why? Should I buy a Harley, Victory, Indian, Yamaha, or Kawasaki? We dive deep into this topic and you really have to listen in to this episode to get all the details. We call all these bikes the "Harley Street Glide" class, because the Street Glide is The Motor Company’s perennial best-seller, and the new “Special” edition is the world’s favorite bagger. It is simply a fact that Harley came up with the overall look and style of this class of bagger and the other manufacturers have been chasing it every since trying to get a piece of the pie. Harley has been developing and designing this bike for many years and the others are all fairly new to the game in comparison. Harley has had much time to perfect and test the Street Glide. Not that the others aren't good bikes, but just stating the facts. The 2014 contenders we chose to compete with the Harley Street Glidein the "cool baggers" class are Victory Cross County Indian Cheiftain Yamaha Star Stratoliner Deluxe Kawasaki Vulcan Vaquero ABS SE MSRP Indian Chieftan $22,999 Harley Street Glide is $20,599 Victory is $18,999 Kawasaki is $18,699 Star is $17,240 EFI All are fuel injected Engines Harley Street Glide is 103 CI/ 1690 CC High Output V-Twin Air Cooled Integrated oil cooler Victory is 106 CI / 1731 CC V-Twin Air CooledStar Stratoliner Deluxe 113-cubic-inch/ 1854cc air-cooled V-twin Integrated oil cooler Indian is ThunderStroke 111 CI/ 1811 CC air cooled Kawasaki is liquid-cooled V-Twin/ 1,700cc / 103.7ci Patreon DO YOU WANT TO BECOME A PATRON AND SUPPORT US? Horsepower All are around 80 HP plus or minus Torque Harley is 104.7 ft-lb @ 3,250 RPM Victory is 105.5 lbs Star is over 100 ft.-lb. from 1600 to 4400 rpm Indian is 102.8 ft-lb @ 3100 RPM Kawasaki is 108 lb-ft @ 2,750 rpm Valve Train/Misc Harley is twin cam Victory is single overhead camshafts with 4 valves per cylinder Star is 4 valves/cylinder Indian 2 valves per cyl. Kawasaki is four valve per cylinder Transmissions Harley SG is 6 speed/ wet clutch Victory is 6 speed/ wet clutch Star is 5 speed/ wet clutch Indian is a 6 speed/ wet clutch Kawasaki is 6-speed Clutch Lever Harley Davidson is a hydraulic clutch Victory is hydraulic cable clutch Star is a hydraulic clutch Indian is cable clutch Kawasaki is hydraulic clutch Primary Drive Harley is chain Victory is GEAR DRIVE Star ?? Indian is gear Kawasaki is ?? Final Drives All have a belt final drive Cruise Control All have cruise control Wheel Base Harley is 64 inches Kawasaki is 65.6 inches Victory is 65.7 Star is 67.5 Indian is 68.1 Ground clearance Harley is 5.3 in. 
Indian is 5.6 IN / 142 MM Kawasaki is 5.7 inches Victory is 5.8 IN / 148 MM Star is 6.1 in Seat Heights Harley is 26.1 Victor is 26.3 IN / 667 MM Star is 27.8 Indian is 26.0 IN / 660 MM Kawasaki is 28.7 in Rear Suspension Harley is manually (hand) adjustable Victory is air adjustable Star is Single shock; 4.3-in travel Indian is SINGLE SHOCK / 4.49 IN (114 MM) / PNEUMATIC ADJUSTMENT Kawasaki is Swingarm with twin air-assisted shocks Brakes Harley is dual front disc brakes w/ optional ABS-Standard on the SpecialVictory is dual front disc brakes w/ optional ABS "Reflex Linked ABS System" Star is dual front disc brakes/ ABS?? Indian is dual front disc brakes w/ ABS (comes standard) Kawasaki is dual front brakes w/ ABS Our Tech Gripper Cell Phone Motorcycle Mount Affiliate Link: Need a motorcycle cell phone or GPS mounting solution Bikaholics? That's right, the Law Abiding Biker Podcast approves of these mounts & we personally use them on our motorcycles! Great looking mounts, good prices, and fast shipping? Check out our COMPLETE REVIEW Check out these awesome cell phone mounts No additional cost to your, but we get a small commission for each sale. Tires Harley Front: 130/60B-19 Rear: 180/65B-16 Victory Front: 130/70R-18 Rear: 180/60R-16 Star Front: 130/70-18 Rear: 190/60-17 Indian Front: 130/90B16 73H Rear: 180/60R16 80H Kawasaki Front: 130/90x16 Rear: 170/70x16 Weights Harley is 810 lbs Wet Star is 813 lbs Wet Victory is 760 lbs DRY (basically the same as others when wet) Indian is 848 wet Kawasaki is 835 lbs Fuel Capacities Harley is 6 gallons Victory is 5.8 gallons/ 22 litre Indian is 5.5 GALLONS / 20.8 LITERS Kawasaki is 5.3 gallons Star is 4.5 gallon Fuel Economy Harley is 42 mpg Victory is 42 mpg Star website says "N/A" Indian is 42 mpg Kawasaki is 36 mpg Entertainment Systems Harley-Davidson Boom!™ Boox Infotainment System Integrated GPS 6.5" touch screen model 4.3" non touch screen Learn to use the Boom!™ Box system and more! All other models have a standard stereo with no integrated GPS You will want to tune in and listen to this episode for a ton of more information that we go over. We talk about what all this data means for the rider. We answer many questions and give much guidance based on our experiences. Keep the rubber side down and the shiny side up! ________________________________________________________________ CHECK US OUT AND SUBSCRIBE: Website: http://www.LawAbidingBiker.com Email & Voicemail: http://www.LawAbidingBiker.com/Contact Phone Hotline: 509-731-3548 Twitter: https://twitter.com/LawAbidingBiker Facebook: https://www.facebook.com/lawabidingbiker YouTube: http://www.youtube.com/scrappy587 Google Plus Page: https://plus.google.com/b/104041070580228657262/+Lawabidingbiker587 Instagram: http://instagram.com/lawabidingbiker RSS: feed://www.LawAbidingBiker.com/feed iTunes Direct Link to Podcast: https://itunes.apple.com/us/podcast/law-abiding-biker-podcast/id622424087 Stitcher Radio: http://www.stitcher.com/podcast/law-abiding-biker-podcast TuneIn Radio: http://tunein.com/radio/Law-Abiding-Biker-p562288/