This episode explores:
- Expanding remits for supply chain leaders and the value they deliver. (1:17)
- How organizational structure underpins these new remits, and Ralph Lauren's philosophy on this structure. (5:08)
- Talent opportunities born of expanded remits for supply chain talent and leadership. (8:35)
- Technology's role in expanding supply chain's remit. (11:44)
- Actionable advice for supply chain leaders of tomorrow. (16:33)

Supply Chain Podcast host Thomas O'Connor discusses the evolving role supply chain leaders play in their businesses with Halide Alagöz, chief product and merchandising officer (including supply chain) for Ralph Lauren. They explore Halide's unique career path and role at Ralph Lauren to offer insight into changing expectations and growth opportunities for supply chain leaders, as well as how Ralph Lauren's organizational approaches to talent and technology helped uncover them. Thomas and Halide close the show with recommendations for supply chain leaders of the future, and how they can use these lessons to evolve.

Gartner clients interested in finding out more about this topic can access the following: Supply Chain Executive Report: Radically Rethinking Reorganization; Executive FastStart™ for CSCOs: How to Build Relationships and Personal Brand.

About the Guest
Halide Alagöz is the Chief Product and Merchandising Officer of Ralph Lauren Corporation. She is responsible for the end-to-end product life cycle as leader of the company's Polo, RRL and Lauren brand teams and the Brand Image and Purple Label Merchandising teams. Halide additionally drives innovation and execution, from development through sourcing, of all products across the Ralph Lauren portfolio. Prior to joining Ralph Lauren, Halide was with H&M Corporation for 18 years, most recently in Hong Kong as the Head of Purchasing. During her tenure with H&M, Halide was responsible for various regional and global supply chain operations in Hong Kong, China, Bangladesh, and in her native country, Turkey. Halide has also served on the board of directors of the American Apparel & Footwear Association since April 2018 and was confirmed as its vice chair for the 2024-2025 term in March 2024. Halide earned both her bachelor's degree in industrial engineering and her master's degree in engineering management from Istanbul Technical University.
In this podcast episode, MRS Bulletin's Laura Leay interviews David Cahen from the Weizmann Institute of Science, Israel, about the impact surface defects have on bulk properties, specifically in the case of lead halide perovskites. In a perspective he co-authored, Cahen connected numerous experimental data from other researchers that exposed this phenomenon. By understanding how surface defects control the material's electronic behavior, researchers can pursue new materials for the development of long-lasting devices. This work was published in a recent issue of Advanced Materials.
Sebastiaan de With, a Dutch app developer, is making a name for himself in Silicon Valley. He explains how his award-winning camera app Halide gives users a unique experience. And he is candid: "Apple can do everything we do themselves. If they want to, our business is gone tomorrow."

An app for enthusiasts. Halide is no standard camera app. Where Apple's built-in camera has to work for everyone, from grandmothers to professionals, Halide focuses uncompromisingly on users who want more control. "We serve the niche that Apple cannot serve," De With explains. With features such as raw photography and intuitive controls, Halide offers an experience that gives photographers more creative freedom. The app is built by a team of just two people. Along the way they also earned international recognition in the form of an Apple Design Award for their app Kino. "The secret is simplicity," says De With. "We design apps that feel like an extension of your hands. No menus, but direct control."

The downside of innovation. De With voices concerns about new European regulation, such as the Digital Markets Act. These laws require platforms to offer multiple app stores. "That sounds like an opportunity for small companies, but it actually creates more complexity and cost," he warns. De With fears that rules like these mainly play into the hands of big companies. "It forces us to spend more time and resources on administration instead of innovation. And who does that ultimately help?"

The future of photography. Another major theme is the role of artificial intelligence (AI) in photography. Smartphones make more and more choices for the user, from exposure to color correction, often without the photographer noticing. "It's impressive, but it raises questions. At what point are we no longer taking a photo, but simply inventing one with AI?" De With sees software and AI becoming dominant. Still, he sees opportunities for Halide: "There remains a need for control and authenticity. Not everyone wants a photo that has been entirely determined by an algorithm."

A return to imperfection. Against this technological backdrop, De With notes a striking trend: a renewed appreciation for imperfection. "People are reaching for old digital cameras and rolls of film again. Not because they are technically better, but because they feel authentic." Halide taps into this need by giving users more creative freedom without the complexity of traditional cameras.

An app that inspires and resonates. De With's story touches on universal questions: how do you keep control in a world increasingly taken over by technology? His vision inspires not only photographers but also developers and policymakers. By bridging innovation and human emotion, Halide shows that technology can be more than just a tool; it can be an experience that moves people. With its focus on control, simplicity and authenticity, Halide offers not only a solution for photographers but also an inspiring perspective on the future of technology.

See omnystudio.com/listener for privacy information.
With İmren Gece Özbey, we discuss Halide Edib's first and last works.
When both Apple and the photo-centric publication PetaPixel in the same week deem Kino the smartphone app of the year, I take notice. Kino is an app for iPhone video shooters who want manual control over what they see, and an alternative to the all-automatic sheen of making videos in the iPhone Camera app. The cost is all of $9.99, and it's a relative bargain.

What Apple said: "Kino shows users how cinematic life can be through its film-inspired filters and advanced controls."

PetaPixel: "Kino is just that one small extra step to mobile video capture that makes it a lot more approachable."

Have you heard about the iPhone video advancements in the 15 and 16 series that let you shoot video in the "ProRes" and "Log" formats, for better control over your color and the final filmic look of the project? The Kino app helps you make sense of all that and offers tools to put them to use in a far easier fashion than the iPhone Camera app. With the iPhone Camera app, you have to take the time to process the images afterwards. Not so with Kino. You also get the ability to shoot in manual focus and adjust the lighting with more refinement.

Kino comes from the folks at Lux, which also makes the Halide app, covered here a few months ago. It's the still photo app for people who don't like shooting everything in auto mode on the iPhone, and offers, like Kino, an unprocessed version of what you see in real life.

So let's run down Kino. In a welcome twist for an app developer, Lux actually begins the process with a manual telling you about all the features. But let's admit it: no one wants to read. They just want to press buttons. So let me tell you what they are.

* The tools shown on the main page are basic: auto vs. manual focus, the choice of lenses (.5 ultra-wide, wide and telephoto, if you have them on your iPhone), the format you want to shoot in (Log vs. regular processed video) and the welcome sight of audio meters, to let you know that you indeed have audio.

* Two other buttons send you to color grade options. Similar to Styles, Apple's tools for applying different color looks to your photos before or after the fact, these film-inspired looks can be applied to the video before or after you shoot. In tech terms, they're called LUTs, and having them right there in the app makes the editing process a whole lot easier.

* A second arrow on the main page brings up more choices: White Balance lets you adjust the color, and you can also increase the stabilization, use the level tool to make sure your horizon is in check and flip the camera around to selfie mode. Finally, there's Settings, where you have the ability to apply more color grades (many are available for sale on the internet) and change the quality of your recording.

The big deal about the app is shooting in Log (unprocessed video) and getting to apply the different filmic looks (those LUTs) directly to the video; there's a quick sketch of what a LUT actually does just below. The look is a little less sharpened and glossy, and more reminiscent of what things looked like in the film days. Let's be honest though: most people wouldn't be able to tell the difference between a graded video and out-of-the-camera automatic on the iPhone. I can, however, and I think it looks great. Since you read this newsletter, I'm guessing you'll be able to tell the difference too.
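For the technically curious: a LUT (look-up table) is simply a precomputed table that maps every input color to a new output color, which is how one tap can give footage a film-like grade. Here's a rough, hypothetical Python sketch of the simplest per-channel version. Real grading tools typically work with 3D LUTs applied on the GPU, so treat this as an illustration of the idea, not anyone's actual code; the curve below is made up.

```python
import numpy as np

def apply_1d_lut(frame: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply a per-channel 1D LUT to an 8-bit RGB frame.

    frame: H x W x 3 array of uint8 pixel values.
    lut:   256 x 3 array mapping each input level (0-255) to an output level.
    """
    out = np.empty_like(frame)
    for c in range(3):                       # red, green, blue channels
        out[..., c] = lut[frame[..., c], c]  # table lookup per pixel
    return out

# A hypothetical "film-like" curve: lift the shadows a touch, soften the highlights.
levels = np.arange(256) / 255.0
curve = np.clip(0.05 + 0.90 * levels ** 0.8, 0.0, 1.0)
lut = (np.stack([curve, curve, curve], axis=1) * 255).astype(np.uint8)

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # stand-in for one video frame
graded = apply_1d_lut(frame, lut)
```

The point of baking the grade into a table is speed: applying it is just an array lookup per pixel, cheap enough to run on every frame in real time.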
My go-to video app is still Blackmagic Camera, which offers many of the same features for free, plus the ability to record great timelapse videos, in a slightly more complicated fashion, but it's really easy to recommend Kino. The app is for anyone who wants to improve the look of their videos and see what all the fuss is about from the filmmaking community for getting higher-grade iPhone videos. For $9.99, you can't go wrong. You'll need an iPhone 15 or 16 to get the most out of the app. Kino is only available for iPhones. Sorry, Android fans.

A Watch App too
In PetaPixel's best-of article, it also singled out a $6.99 Apple Watch app that should appeal to photographers, called Lumy. This app gives you information about sunrises and sunsets, when to expect golden and blue hour, and more. "It works alongside Apple's various Watch faces and is the perfect companion for outdoor photographers," PetaPixel said. I don't have an Apple Watch (my old one won't connect, and I don't like wearing watches anyway), but this sounds like great info I'd love to have on my wrist.

Other award winners
PetaPixel named the iPhone 16 Pro the phone of the year, the new Mac Mini the computer of the year, and Blackmagic's DaVinci Resolve editing software the desktop software of the year. DaVinci is free, unlike Apple's $299 Final Cut Pro or a hefty monthly subscription (starting at $20.99) for Adobe Premiere, and is beloved by editors for its color grading tools. Plus, ahem, it's free. PetaPixel says it's "powerful and fast, but also feature-rich." Of the iPhone, PetaPixel called it "the uncontested champ of content creation. No other smartphone comes close to the iPhone for making high-quality videos." It singled the phone out for the new Camera Control feature, which sounded great to me when it first came out, because it's a one-click button to open the camera faster, but in reality, I haven't used it since. It's too cumbersome, and I find it easier to just open the camera the old way. Videos look great though!

Phone update
Last week I told you to be very wary of the "free" phone offers from the big 3 wireless companies, which, in a nutshell, will upgrade you to a higher-rate plan that you don't need, give you an entry-level phone with little storage on it and extend your contract another year to pay for the "free" device. They also make you pay a hefty sales tax on the purchase price of the phone. Get a new iPhone 16 Pro for free? In California, add $100 tax and the $50 or so "activation" fee.

My mom Judy would like you to know that you can skip all those games by picking up a decent used model. She likes Androids, especially cheap old ones, and paid $175 for a Samsung Galaxy S20 FE (the current models are S25). That's a little more than the tax/fees for the "free" device, without having to pay for a higher-rate contract or extend the terms. She says:

"Had Samsung, so wanted same to be familiar with the workings. Not concerned about photo capabilities. Do take pictures. Mostly to upload to Etsy or eBay for items I'm selling. The cost, including tax, and new case, was less than tax on "free" phone. I'm very happy with decision not to get "free" phone."

At $175, even if it was only one year of use, that's great.
(To answer the question you're thinking: I bought her a new phone earlier this year, but it was too small; she didn't like it, and had me return it.)

YouTube TV Stinks!
Actually, I think the streaming alternative to cable TV is as good as it gets, with an easy-to-follow interface and the ability to record TV shows and watch them later. But the pricing is just out of whack. It was just a few years ago that I covered the introductory press conference, when it started with a $35 monthly rate. By 2024, it had climbed to $72.99 monthly, and this week YouTube announced a hefty $10 monthly price increase, to $82.99. The culprit? Higher programming costs. To which I say: I left cable because I didn't like paying over $100 a month to watch TV and get all these channels I never look at.

Fireworks
Last weekend the city of Manhattan Beach, where I live, kicked off the holidays, as it always does, with a big fireworks display, and as always, I was there to document it with a video. What was different this year, though, was that I made it a group project. A bunch of us camera enthusiasts got together at a local party, and then dispersed to get different points of view of the rockets blasting. Having a New Year's party this year around a fireworks display? You might try this method, as it makes for a more interesting video. The video clips were all shot on iPhones.

Thanks as always for taking the time to read, watch and listen! I'm off to the next big Photowalks shoot next week, back to San Francisco, so here's hoping for lots of dry days!

Jeff

Get full access to Jefferson Graham's PhotowalksTV newsletter - Tech & Travel at www.photowalkstv.com/subscribe
Apple's App Store Awards finalists showcase the most innovative and exciting apps across Apple platforms for 2024, highlighting incredible developers and groundbreaking applications that push the boundaries of technology and creativity. From professional video tools to immersive storytelling apps, this year's selections offer something for every type of user.

iPhone App of the Year Finalists:
- Kino: A professional video capture app from the makers of Halide that allows cinematic video recording with advanced filmmaking tools like LUT grading and time codes.
- Runna: A comprehensive running and training plan app that customizes workout plans, tracks progress, and integrates with fitness platforms like Strava.
- Tripsy: A privacy-focused travel planning app that organizes trip details, imports reservations, and provides weather forecasts without extensive data sharing.

iPhone Game of the Year Finalists:
- The WereCleaner: A comedic stealth game about a werewolf janitor trying to hide his supernatural identity during night shifts.
- AFK Journey: A fantasy RPG with storytelling similar to Kingdom Hearts and Final Fantasy.
- Zenless Zone Zero: An anime-style RPG with strategic gameplay.

iPad App of the Year Finalists:
- Bluey, Let's Play: An educational game for children featuring the popular Australian cartoon characters.
- Moises: An AI-powered music app that separates vocals and instruments, helps with learning songs, and provides chord detection.
- Procreate Dreams: A $20 2D animation app for creating storytelling videos with timeline controls and drawing tools.

Mac App and Game of the Year Finalists:
- OmniFocus 4: A powerful task management app with advanced features like defer dates.
- Adobe Lightroom
- Shapr3D CAD Modeling
- Mac games: Frostpunk 2, Stray, and Thank Goodness You're Here

Apple Arcade Game of the Year Finalists:
- Balatro+: A unique card game mixing poker and solitaire mechanics.
- Outlanders 2: A city-building strategy simulation.
- Sonic Dream Team: An adventure game featuring classic Sonic characters.

Apple Vision Pro App and Game Finalists:
- JigSpace: 3D model interaction app.
- NBA Live
- What If: An immersive Marvel-inspired story experience.
- Games include Luna, Thrasher Arcade Odyssey, and Vacation Simulator.

Apple Watch App of the Year Finalists:
- Look Up English Dictionary: A vocabulary-building app with daily word features.
- Lumy: A photography tool showing sunrise, sunset, and golden hour times.
- Watch to 5K: A comprehensive running training program integrated with Apple Health.

Cultural Impact Finalists:
- Diverse apps and games focusing on storytelling, accessibility, and social connection, including Oko (navigation for visually impaired users), Partiful (event planning), and The Wreck (narrative adventure game).

Hosts: Mikah Sargent and Rosemary Orchard

Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord. You can also contribute to iOS Today by sending an email to iOSToday@TWiT.tv.
(This newsletter is sponsored by me, alerting you about the three mobile photography video courses I have available: beginning iPhone, iPhone 16 and Google Pixel. http://www.jeffersongraham.com/courses)

You gotta love a photography application that calls itself the "Anti-AI" app. Halide has my name written all over it. The app has actually been around for several years, since 2017, but recently added a new feature called "Process Zero," which lets us photograph things the way we actually see them, as opposed to how we'd like to remember the world. That means no fake blue skies, no artificially enhanced colors, no over-sharpened images. The app, which costs $19.99 yearly or $2.99 monthly, uses "zero computational photography to produce natural, film-like photos," Halide says. Yay! (Sorry, Galaxy and Pixel fans: Halide is only available for the iPhone and iPad.)

Top: iPhone 16 Pro, with Apple's fake blue sky and over-sharpening. The bottom shot, again on the iPhone 16 Pro, is with the Halide app. All the photos shown here are unedited.

My beef with the AI revolution is that it's great for looking up things, transcribing interviews, getting cars to drive automatically and for photo editing, but not for photo taking. I've written about how Google has taken a step toward ruining photography as we know it with the new Pixel 9 series of phones, which offers features that add people to a photo and do things that just aren't there. Google's "Add Me" lets you take a photo of, say, the two of you, and add a third person to the image after the fact, while the really controversial one, called Reimagine, turns ordinary photos into unlabeled generative AI artwork. Yuck.

Apple has made a big deal of new AI features coming to the iPhone, but luckily they don't include altering major reality as part of the deal. However, what Apple, Samsung and Google have done is take a basic camera and turn it into something that never produces an image that's out of focus, and rarely one that's too dark or light, thanks to computational skills. The phone cameras take nine images every time you click the shutter and merge them into one master photo with few flaws (there's a toy sketch of that merging idea below). As the Verge noted in a review of Halide: "If you're one of the many people who think that iPhone photos look overprocessed lately, then this is the feature for you."

The New Yorker, of all places, did a feature on Halide this week that you should check out. When's the last time the New Yorker did a piece on an app? Just wondering. "Process Zero has made me enjoy taking photos with my phone again, because I don't feel like I'm constantly fighting against algorithmic editing that I can't control or predict," writes the author, Kyle Chayka, in the New Yorker.

Halide notes that turning off the auto features has tradeoffs. It admits that it can't handle low light well and cannot access some features of the iPhone, notably the 2x zoom feature that crops a portion of the 48-megapixel sensor to "zoom" in and get closer.

Top: iPhone, with enhanced orange and flag; the bottom shot is with Halide.

I've included here many examples of Apple vs. Halide, so you can see for yourself. You may prefer the Apple approach, which has its place; in many of these shots, the extra color is nice to have. But as a rule, I'd rather have the option to add those looks myself in editing afterwards, and have a cleaner image to play with.

Top: Apple Photos shot; bottom via the Halide app. You can see how the colors are actually crisper in the Halide photo, minus the fake darkened sky.
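For readers who like to peek under the hood, here's a toy Python sketch of the simplest possible version of that multi-frame merging: averaging a burst of already-aligned frames to cut noise. The real pipelines from Apple and Google do far more (alignment, tone mapping, highlight recovery), so treat this strictly as an illustration of the idea, not anyone's actual code.

```python
import numpy as np

def merge_burst(frames: list[np.ndarray]) -> np.ndarray:
    """Average a burst of already-aligned 8-bit frames into one lower-noise image."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    merged = stack.mean(axis=0)   # averaging N frames cuts random noise roughly by sqrt(N)
    return np.clip(merged, 0, 255).astype(np.uint8)

# e.g. nine captures of the same scene, as described above (random data as a stand-in)
burst = [np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8) for _ in range(9)]
photo = merge_burst(burst)
```

Skipping that merge (and the sharpening and tone mapping layered on top of it) is essentially the tradeoff Process Zero makes: more noise and less dynamic range, in exchange for a single, untouched exposure.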
If you have the new iPhone 16, you might have heard about the "Camera Control" button on the side of the phone, which lets you click to open the camera without having to use Face ID. The button can be programmed to make Halide the default camera.

Meanwhile, let me tell you more about the Halide app. It's incredibly simple. At the top of the screen you click the drop-down menu to select "Process Zero" or "Apple Processed" photos. Another tool lets you select Auto or Manual exposure, which you can tweak by pressing down on the screen and making the image lighter or darker. At the bottom of the app, you choose which lens you want to use (.5, 1x or 5x), along with Portrait Mode to blur the background and selfie mode. Hidden behind a swipe are the flash, timer, white balance and settings, where you can choose to photograph in the traditional JPEG, the smaller HEIC file, or larger RAW. Again, you may prefer the auto features, and that's fine, but if you want to go back to the days when a photo was a photo, without enhancement that can't be tweaked, you should check out Halide.

Newsbytes
Instafamous: If you want to be seen on Instagram but don't feel like making a highly produced short Reels video, Instagram announced this week that carousels and photos with music can now show up in the Reels tab too, which it favors over basic photo posts. Get busy, creators!

Amazon Stinks: When the company announced that it was adding adverts to its Prime Video offerings unless we paid a fee, it said it would offer limited ads and not be obnoxious. Well, that didn't last very long. This week Amazon said it will start increasing the ad load in the coming year. Surprise, surprise!

Speaking of AI editing: Software giant Adobe announced several new photo editing tools at its MAX conference in Miami, including the ability to generate seconds of video footage from a text prompt in its Premiere Pro video editing program, and new tools for Lightroom Mobile. With those, you can automatically apply effects for retouching backgrounds, teeth, eyes, skin, and more.

Let's go to Japan!
On tonight's episode of Photowalks on Scripps, we visit the Kansai region of Japan, including stops in Osaka, Japan's "kitchen" and the street food capital of the country, and the lively port city of Kobe. The show airs at 8 p.m. ET on Scripps News.

Mobile Photography on Flipboard
I have a long-standing relationship with the digital social magazine Flipboard, one of the great apps for getting a curated look at things of interest. I'm currently the photographer in residence there for the fall, and the great Mia Q just posted a nice interview with me on Flipboard that you might enjoy!

Record Store Days
I got such a great response to last Sunday's bonus edition, talking about my days working as a clerk at a used record store in Berkeley at age 20, and all the important life lessons I learned at that time. But I broke the format of this newsletter, with nothing about tech, travel or photography in the story. Should I continue with my tale, I asked readers. "I'm loving learning the back story," commented Josh. "I love reading about your adventures," said Brian. "More, more, more Jeff," requested Jolene. Thank you! So by popular demand, I have part 3 of my personal history all cued up and ready for you in tomorrow's edition. Thanks so much for the interest!

And as always, thanks again for taking the time to read, watch and listen!

Jeff

Get full access to Jefferson Graham's PhotowalksTV newsletter - Tech & Travel at www.photowalkstv.com/subscribe
For this full-on "what is a photo" episode, we start by chatting with Halide developers Ben Sandofsky and Sebastiaan de With about what it means to build a camera app in 2024, and what it means to try to accurately capture a photo. Then The Verge's Allison Johnson joins the show to talk about her experiment going all-in on AI-ifying her photos. Finally, we answer a hotline question about which gadgets to attach to your head when you go for a run.

Further reading:
- Halide
- Halide's Process Zero feature captures photos with no AI processing
- Let's compare Apple, Google, and Samsung's definitions of 'a photo'
- No one's ready for this
- Google's AI tool helped us add disasters and corpses to our photos
- The AI photo editing era is here, and it's every person for themselves
- This system can sort real pictures from AI fakes — why aren't platforms using it?

Email us at vergecast@theverge.com or call us at 866-VERGE11, we love hearing from you. Learn more about your ad choices. Visit podcastchoices.com/adchoices
On the latest In Touch With iOS, Dave is joined by guests Jill McKinley, Jeff Gamet, and Ben Roethig. The team discusses enhancements to Vision Pro and visionOS, insights into the iOS 18.1 beta experience, and challenges with recent watchOS and iPadOS betas. Attention shifts to the iPhone 16's improved camera features, accessibility innovations in iOS 18, and privacy enhancements like app locking. Additionally, the hosts explore the Croissant app for social media cross-posting and speculate on Ted Lasso's potential fourth season on Apple TV+. This episode provides expert insights into Apple's ongoing advancements and their real-world implications. The show notes are at InTouchwithiOS.com

Direct Link to Audio

Links to our Show

Give us a review on Apple Podcasts! CLICK HERE. We would really appreciate it!

Click this link to Buy Me a Coffee and support the show. We would really appreciate it. intouchwithios.com/coffee

Another way to support the show is to become a Patreon member: patreon.com/intouchwithios

Website: In Touch With iOS
YouTube Channel
In Touch with iOS Magazine on Flipboard
Facebook Page
Mastodon X Instagram Threads Spoutible

Summary
In this episode of In Touch with iOS, we dive deep into the latest developments in the Apple ecosystem with host Dave Ginsburg alongside co-hosts Jill McKinley, Ben Roethig, and Jeff Gamet. The team covers a plethora of topics, ranging from recent updates to various Apple devices to explorations of beta software and new features introduced in iOS and watchOS.

Starting off, we discuss the latest updates regarding Vision Pro, including a minor software fix that addresses a Safari YouTube issue and other improvements aimed at enhancing the overall user experience. This leads to a conversation about the third beta release for visionOS, which introduces new features like support for a larger ultra-wide screen for Mac users and multi-view for MLB and MLS games.

The conversation shifts to the iOS 18.1 beta, with Ben sharing his insights on a stable experience so far, while the group also examines issues related to recently pulled beta versions of watchOS and iPadOS due to critical bugs. This leads to discussions regarding the importance of maintaining software updates and how they impact device performance.

The podcast then transitions into an exploration of the iPhone 16 series, focusing on its enhanced camera capabilities. The group praises the Halide team's in-depth analysis of the new camera systems, highlighting advancements such as improved macro photography and the new Fusion camera. Dave and the team reflect on the user experience while discussing battery capacities, fast charging options, and the convenience of MagSafe charging.

Accessibility and health-focused features come into the limelight next, notably the introduction of vehicle motion cues in iOS 18, designed to help alleviate motion sickness for passengers. Additionally, the episode emphasizes Apple's commitment to enhancing accessibility through various new features across its devices.

As the dialogue continues, the hosts delve into privacy features, particularly the ability to lock and hide specific apps on the iPhone, an invaluable tool for keeping sensitive information secure. The conversation rounds off with notable mentions of a new app called Croissant, focused on social media cross-posting, and thoughts on whether blood oxygen measurement could return to the Apple Watch amid corporate developments at Masimo.
In wrapping up the episode, the group shares thoughts on the surprising news about Ted Lasso's potential return for a fourth season on Apple TV+ and discusses practical innovations in streaming technology, highlighting the arrival of the Tablo app for Apple TV.

Topics and Links Referenced

In Touch with Vision Pro this week:
- Apple Releases visionOS 2.0.1 With Safari YouTube Fix
- Apple Seeds Third visionOS 2.1 Update to Developers
- Juno YouTube App for Vision Pro Removed From App Store

Beta this week:
- Apple Seeds Third Beta of watchOS 11.1 to Developers - MacRumors
- Apple Pulls Third Software Update in Past Month Due to Issues - MacRumors
- Apple Releases Third tvOS 18.1 and HomePod Software 18.1 Public Betas
- iPadOS 18.0.1 Now Available for M4 iPad Pro Models After Pulled Update - MacRumors
- Apple Releases iOS 18.0.1 With Touch Screen Bug Fix and More - MacRumors
- macOS 15.0.1 released
- Apple Releases watchOS 11.0.1 With Fix for Battery Drain and Touchscreen Issues - MacRumors

iPhone 16:
- Here's What You Need for iPhone 16 Fast Charging via MagSafe or USB-C
- Halide Maker Does Deep Dive Into iPhone 16 Pro Camera
- iPhone 16 Battery Capacities Revealed - MacRumors
- iPhone 16: Edit Spatial Audio in Videos With Audio Mix

iOS 18:
- Avoid Vehicle Motion Sickness With This New iOS 18 Feature
- 25 New Features You May Have Missed in watchOS 11
- Despite New Ceramic Shield, iPhone 16 Models Still Vulnerable to Drops
- iOS 18: How to Lock and Hide iPhone Apps

Apps:
- Croissant simplifies social media cross-posting on iOS

News:
- Masimo's CEO resigns, raising speculation that blood oxygen could return to the Apple Watch
- Masimo founder resigns as CEO after removal from board – Six Colors
- Apple Podcasts App Rolling Out Transcriptions in These 8 Additional Languages Starting Today
- NEW! Tablo on Apple TV
- Finally, 'Ted Lasso' to Return to Apple TV+ as Season Four Allegedly 'Confirmed' - MacRumors

Announcements
Macstock 8 wrapped up for 2024, but you can purchase the digital pass and still see the great talks we had, including Dave talking about Apple Services and more. Content is now available! Click here for more information: Digital Pass | Macstock Conference & Expo, with discounts on previous events.

Our Host
Dave Ginsburg is an IT professional supporting Mac, iOS and Windows users who shares his wealth of knowledge of the iPhone, iPad, Apple Watch, Apple TV and related technologies. Visit the YouTube channel https://youtube.com/intouchwithios and follow him on Mastodon @daveg65, and the show @intouchwithios.

Our Regular Contributors
Jeff Gamet is a podcaster, technology blogger, artist, and author. Previously, he was The Mac Observer's managing editor and Smile's TextExpander Evangelist. You can find him on Mastodon @jgamet as well as on Twitter and Instagram as @jgamet. His YouTube channel is https://youtube.com/jgamet

Marty Jencius, Ph.D., is a professor of counselor education at Kent State University, where he researches, writes, and trains about using technology in teaching and mental health practice. His podcasts include Vision Pro Files, The Tech Savvy Professor and Circular Firing Squad Podcast. Find him at jencius@mastodon.social and https://thepodtalk.net

Ben Roethig is a former Associate Editor of GeekBeat.TV, host of the Tech Hangout and Deconstruct with Patrice, and a Mac user since the mid 90s. Tech support specialist. Twitter @benroethig. Website: https://roethigtech.blogspot.com

About our Guest
Jill McKinley is a professional in the field of enterprise software, server administration, and IT.
She started her technical career in Windows but now exclusively uses a Mac in her personal life. She hosts several podcasts, including Start with Small Steps and Small Steps with God, where she offers tips and insights for a better life. Her podcasts are at https://abetterlifeinsmallsteps.com/ and she is on X at @schmern.
Apple's newest iPhone 16 is "made for AI." Yet right now you buy the new smartphone without any of the AI features Apple promised. Is this iPhone, on sale from 969 euros, worth buying even without AI? Tech editor Stijn Goossens discusses it during the morning rush hour with Bas van Werven and Frederique Moll. The iPhone 16 introduces two new buttons: the Action button that replaces the old mute switch (familiar from the iPhone 15 Pro), and a new Camera Control button (also new on the Pro models). The Camera Control button makes taking photos anything but more intuitive. It mostly seems to take more time, and many users may hardly ever use it. The one advantage: you can use it to open other camera apps faster, such as Snapchat or Halide. Beyond that, the 16 differs from its predecessor with a slightly improved battery, a few more megapixels in the camera and a more powerful chip. Compared with the more expensive iPhone 16 Pro, the screen is the main sore point: the 60Hz refresh rate and the lack of an always-on display make the iPhone 16 feel barely more advanced than a model from three years ago. In the market for a new iPhone? Then this may not be the best buy right away. For less money, the iPhone 15 will give you much of the same, especially since Apple's AI will not be available in Europe for a while yet. Coming from an iPhone 12 or older, this iPhone 16 will feel like a real upgrade, certainly if you plan to use it for another three to four years. Coming from an iPhone 13 or newer? Then an iPhone 15 (Pro) could also be interesting, but skipping a year may well be the most sustainable choice. Listen to this podcast for Stijn Goossens' full review. See omnystudio.com/listener for privacy information.
Today we welcome a new colleague to the Bright Podcast: Mark Hofman. With him we discuss Meta's new glasses, and with our car expert Rutger we talk about the accelerated phase-out of the road tax discount for EV drivers. Also in this episode: Google's market share is falling, young people mainly get their news via social media, a big update for Windows 11, and once again Meta receives a multi-million euro fine from the EU.

Sponsor: Breinstein gets your organization or company ready for the future by matching you with the best new, fresh IT professionals. Breinstein also offers traineeships in that field: working while you learn at the best companies, which in turn benefit from your skills. More at Breinstein.nl.

Tips from this episode:
Gadget: the Insta360 Flow Pro, a handy stabilized mount for your phone. It has a special basketball mode, among other nice features.
Article: the iPhone 16 Pro camera review by Sebastiaan de With. The designer of the camera app Halide heads out every year to see what the newest iPhone's camera can do, what it does differently and what that means for your photos. All of it accompanied by stunning photos that show what you can squeeze out of an iPhone.
Series: the second season of Squid Game arrives at the end of this year, so the tip is: watch the first season again. Knowing the plot twists, it turns out to be a watch-it-twice series; it suddenly becomes a completely different story.
Series: The Penguin, a miniseries on HBO Max. A spin-off of the 2022 film The Batman, so plenty of rain in Gotham and an almost unrecognizable Colin Farrell as Oz Cobb, alias the Penguin, who wants to take over the city after the death of mob boss Falcone.

See the privacy policy at https://art19.com/privacy and the California privacy notice at https://art19.com/privacy#do-not-sell-my-info.
The panel talks about the new iPhone 16s, with some having the devices in hand. Why was the Halide camera app initially rejected from the App Store? Apple has updated its vintage and obsolete list: Is your device on the list now? The upcoming 28 Years Later film was apparently shot on the iPhone 15. Apple TV+ got 10 Emmys. iFixit tears down the iPhone 16. From Ming-Chi Kuo: Estimates of first weekend of new iPhone sales. Halide rejected from the App Store because it doesn't explain why the camera takes photos. iPhone 16 firmware can now be restored wirelessly from another iPhone. Apple working to fix iPadOS 18 bug that bricked M4 iPad Pro. Firefox no longer works after upgrading to macOS Sequoia. Apple adds these 12 Macs to vintage and obsolete products lists. RCS-enhanced iMessage in iOS 18 still has security issues when adding Android users. Wear OS watches might soon have an edge when it comes to blood oxygen. Apple Vision Pro's eye tracking exposed what people type. Oprah buys back her Apple TV+ documentary to lock it away. 28 Years Later: Danny Boyle's new zombie flick was shot on an iPhone 15. Apple TV+ gets GLAAD's only failing grade in annual LGBTQ representation study. Apple TV+ bags 10 Emmys, including first for Slow Horses. Apple Music Classical 2.0 adds thousands of full album booklets. Picks of the Week: Andy's Pick: Flightaware Live iPhone Tracker Alex's Pick: The Noun Project Jason's Pick: Small USB to USB C Adapters for travel Hosts: Leo Laporte, Alex Lindsay, Andy Ihnatko, and Jason Snell Download or subscribe to this show at https://twit.tv/shows/macbreak-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: zocdoc.com/macbreak
Chuck Joiner, David Ginsburg, Brian Flanigan-Arthurs, Marty Jencius, Jim Rea, Mark Fuccio, and Web Bixby look at a laughable claim by Google comparing their user privacy focus with Apple's. There seems to be a bit of a backlash against AI, based on Procreate's rejection of AI in favor of traditional art and Halide's new app on raw photography without the benefit of on-iPhone processing. Finally, the panel considers layoffs at GM and Sonos and the implications on their struggles with software development and user experience. MacVoices is supported by the new MacVoices Discord, our latest benefit for MacVoices Patrons. Sign up, get access, and jin the conversations at Patreon.com/macvoices. Show Notes: Chapters: 00:00 Google vs. Apple: Privacy Wars01:27 Google's Privacy Claims Under Scrutiny05:35 Procreate's Stand Against AI12:47 The Backlash Against AI Begins20:03 Sonos Struggles with Software Issues0 Links: T-Mobile fined $60M for unauthorized access to data, the largest fine of its type https://9to5mac.com/2024/08/15/t-mobile-fined-60m/ Google jabs at Apple during Pixel event included a ridiculous claim https://9to5mac.com/2024/08/14/google-jabs-at-apple/ iPad Illustration App Procreate Condemns Generative AI https://www.macrumors.com/2024/08/19/procreate-condemns-ai/ Halide 2.15 Lets Users Take Shots With Zero Computational Processing https://www.macrumors.com/2024/08/14/halide-2-15-lets-users-take-shots-with-zero-computational-processing/ GM cuts 1,000 software jobs as it prioritizes quality and AI https://techcrunch.com/2024/08/19/gm-cuts-1000-software-jobs-as-it-prioritizes-quality-and-ai/ Sonos, still trying to fix its broken app, lays off 100 employees https://www.engadget.com/audio/sonos-still-trying-to-fix-its-broken-app-reportedly-lays-off-100-employees-203224705.html?guccounter=1 Sonos Can't Release Old App for Customers Unhappy With Design Changes https://www.macrumors.com/2024/08/20/sonos-not-rereleasing-old-app/ Guests: Web Bixby has been in the insurance business for 40 years and has been an Apple user for longer than that.You can catch up with him on Facebook, Twitter, and LinkedIn. Brian Flanigan-Arthurs is an educator with a passion for providing results-driven, innovative learning strategies for all students, but particularly those who are at-risk. He is also a tech enthusiast who has a particular affinity for Apple since he first used the Apple IIGS as a student. You can contact Brian on twitter as @brian8944. He also recently opened a Mastodon account at @brian8944@mastodon.cloud. Mark Fuccio is actively involved in high tech startup companies, both as a principle at piqsure.com, or as a marketing advisor through his consulting practice Tactics Sells High Tech, Inc. Mark was a proud investor in Microsoft from the mid-1990's selling in mid 2000, and hopes one day that MSFT will be again an attractive investment. You can contact Mark through Twitter, LinkedIn, or on Mastodon. David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud. Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. 
He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession ‘firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. This is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC). Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon. Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss
Hi everyone. Please forgive me for the awful audio mixup in this episode that was released initially. It has been repaired and should be okay now. Dave and I welcome Paul Stolyarov to the show for a discussion about RAW vs ProRAW. We'll discuss the pros and cons of each format. We also discuss Halide's breakthrough feature, Process Zero and why you might want to try it. All this and more! Paul Stolyarov on Instagram Paul on Threads Paul's Website Ben Long on LinkedIn Learning Greg's Book The Podcast Website - http://iphoneography.ca Dave on Instagram Dave on Twitter Dave on TikTok Dave on VERO Dave on Mastodon Greg on About.me Greg on Instagram Greg on Glass Greg on VERO Greg on Twitter Greg's Website Greg on Mastodon The Podcast YouTube Channel Shayne Mostyn's YouTube Channel Smartphone Photography Club Smartphone Photography Club Forum The iPhoneography Podcast Facebook Group Shayne Mostyn's Bloody Legends Facebook Group Rick Sammon's Smartphone Photo Experience Facebook Group Reeflex's Facebook Group Get your first year of Glass for $20: https://glass.photo/offer/greg Reeflex Lenses - Get 10% off Reeflex lenses with the coupon code 10%OFFGREG Buy me a coffee at https://www.buymeacoffee.com/mcmillan The music at the end of this video was created on a Mac using GarageBand.
This is The Digital Story Podcast #961, August 20, 2024. Today's theme is "Zero Computational Photography with Process Zero." I'm Derrick Story. Opening Monologue Are you ready to see what kind of pictures your iPhone captures with absolutely no computational photography applied? You might wonder how that could even happen. The latest version of Halide (2.15) includes a feature called Process Zero. And when it's enabled, you record a RAW file with no AI or computational photography adjustments. Basically, it's like shooting slide film with an analog camera. And the results just might surprise you. I explain how it works, plus more, on today's TDS Photography Podcast. I hope you enjoy the show.
We talk about Halide Edib's book about India, Hindistan'a Dair.
With our guest Nedim Saban, we talk about Halide Edib's theater work.
Photographers and camera app developers break down the Apple Intelligence announcement, with guests Sebastiaan de With and Ben Sandofsky, developers of the apps Halide and Kino. https://www.lux.camera Watch the video (https://youtu.be/2Rgfq4qrMXE) Special Guest: Sebastiaan de With.
We continue our conversation about Halide Edib with our guest İnci Enginün.
We talk about Halide Edib with our guest İnci Enginün.
Only a few days until WWDC begins, and then we'll know what the new versions of iOS, macOS, iPadOS and more will bring. A lot of information about iOS 18 has already leaked, and it is very likely that Siri will get a good deal smarter this year. The word AI will also finally be heard frequently at Apple, though Apple will approach it just a little differently. We discuss what we expect from that approach. Also today: a beauty pageant for AI models, a Dutch air conditioner that works via your radiator and heat pump, and PostNL introduces an anti-phishing measure. Tips from this episode: Series: the new Star Wars series The Acolyte premieres today on Disney+. New round, new chances. We see the Jedi as we have never seen them before, 100 years before The Phantom Menace, at the end of the High Republic era, a golden age for the Jedi. But then, of course, the trouble starts… App: Kino, from the makers of the photo app Halide. Kino is its counterpart, but for video: the filmmaking app for anyone who wants to use the full capabilities of the iPhone camera. For example, you can shoot in Apple Log ProRes format on the iPhone 15 Pro. You can also bake a color grade directly into your recordings, or even shoot in Log format with a particular color grade applied. A color grade is not an ordinary filter, but the way video footage in films and series gets a particular tint to set the mood, and now you can do that yourself. A number of these color grades are included, but you can also add your own color grade to the app. Beyond that, Kino is much more of a filmmaking app than the standard camera app: you can set and lock the focus point and the exposure independently of each other, and there are built-in guides so you can learn to shoot better video. It costs 22.99 euros. See the privacy policy at https://art19.com/privacy and the California privacy statement at https://art19.com/privacy#do-not-sell-my-info.
Apple is slowly adding more to watch on Apple Vision Pro, it's gearing up for AI, emoji, and more at WWDC, and yet another Apple Store has opened -- but it's made out of Lego.Contact your hosts@williamgallagher_ on Threads@WGallagher on TwitterWilliam's 58keys on YouTubeWilliam Gallagher on email@hillithreads on Threads@Hillitech on TwitterWes on MastodonWes Hilliard on emailLinks from the Show:This Lego Apple Store model needs votes for a slim chance of getting madeThe Legend of Zelda Great Deku Tree 2-in-1 LEGO SetExperience Immersive sizzle reel for Apple Vision Pro updated with new scenes, sportsJob Simulator - Three VR game staples have arrived on Apple Vision ProThis is the best look yet at Apple's immersive video camerasNew iPad mini with OLED screen rumored to arrive in 2026Your next iPhone could be the iPad mini - iPhone 15 vs iPad mini showdownApp icon customization, new emoji creation coming to iOS 18Apple and OpenAI allegedly reach deal to bring ChatGPT functionality to iOS 18Apple confirms WWDC 2024 keynote timing, but offers no more AI hintsThe ultimate guide on how to customize your iPhone running iOS 16Bigger and brighter: iPhone 16 & iPhone 16 Pro rumored screen changesApple's durability testing is way more than a YouTuber can manageNew Kino app by Halide developer is perfect for novice and pro videomakersJob listing suggests Apple is moving forward with plans for Apple TV on AndroidSupport the show:Support the show on Patreon or Apple Podcasts to get ad-free episodes every week, access to our private Discord channel, and early release of the show! We would also appreciate a 5-star rating and review in Apple PodcastsMore AppleInsider podcastsTune in to our HomeKit Insider podcast covering the latest news, products, apps and everything HomeKit related. Subscribe in Apple Podcasts, Overcast, or just search for HomeKit Insider wherever you get your podcasts.Subscribe and listen to our AppleInsider Daily podcast for the latest Apple news Monday through Friday. You can find it on Apple Podcasts, Overcast, or anywhere you listen to podcasts.Podcast artwork from Basic Apple Guy. Download the free wallpaper pack here.Those interested in sponsoring the show can reach out to us at: advertising@appleinsider.com (00:00) - Intro (02:36) - LEGO (07:34) - Apple Vision Pro (26:26) - iPad Pro (28:24) - iPad mini (37:50) - WWDC and more emoji, yay (43:40) - Home screens (55:32) - iOS 18 and Ai ★ Support this podcast on Patreon ★
We continue our conversation about Halide Edib with our guest İpek Çalışlar.
We talk about Halide Edib with our guest İpek Çalışlar.
With Ahmed Nuri, we discuss the Swedish translations of Halide Edib's works and their reception in Sweden.
We continue our conversation about Halide Edib with our guest Ayşe Durakbaşa.
We talk about Halide Edib with our guest Ayşe Durakbaşa.
Leo has returned from his retreat! Apple TV+ and Paramount+ may soon be offered as a bundled service to entice more potential customers. Chase could be the replacement partner for Apple Card as Goldman Sachs seeks to end its partnership with Apple. And Apple reveals its 2023 App Store Award winners, while Apple Podcasts names Julia Louis-Dreyfus' 'Wiser Than Me' the 2023 Show of the Year! Apple and Paramount considering bundling their streaming services. Netflix didn't see much 'interruption' in the launch of original shows and movies because of strikes, co-CEO Ted Sarandos claims. From Gurman: Chase could be an ideal replacement for Goldman Sachs, as it already has a relationship with Apple. Filmic's entire staff laid off by parent company Bending Spoons. The developers of Halide are working on a video app, Kino. Here's what's actually going on in that viral 'glitch in the matrix' iPhone mirror picture. Apple reveals 2023 App Store Award winners; names generative AI the 'trend of the year'. Apple Podcasts names Julia Louis-Dreyfus' "Wiser Than Me" the 2023 Show of the Year. Apple announces expanded partnership with Amkor for advanced silicon packaging in the U.S. There's a new iMessage for Android app — and it actually works. Picks of the Week: Alex's Pick: Topaz Video AI Andy's Pick: Panic Playdate Advent Calendar Jason's Pick: Channels Hosts: Leo Laporte, Alex Lindsay, Andy Ihnatko, and Jason Snell Download or subscribe to this show at https://twit.tv/shows/macbreak-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: fastmail.com/twit ZipRecruiter.com/macbreak zocdoc.com/macbreak
Contact your host with questions, suggestions, or requests about sponsoring the AppleInsider Daily:charles_martin@appleinsider.com (00:00) - 01 - Intro (00:12) - 02 - Chase chases Apple Card (01:17) - 03 - Zoom comes to Apple TV ... (02:07) - 04 - ... and so does ExpressVPN (02:43) - 05 - Former Doctor Who on Apple TV+ (03:06) - 06 - Scorsese's "Killers" new moneygrab (03:40) - 07 - "Central Park" and "Swagger" cancelled (03:59) - 08 - From the ashes of Filmic Pro ... (04:28) - 09 - ... comes Halide's Kino! (05:04) - 10 - Another SJ auction (05:44) - 11 - Apple Watch saves lives (06:47) - 12 - Outro Links from the showChase could be a potential partner for Apple CardZoom app for Apple TV surfaces in the App StoreExpressVPN brings its VPN app to the Apple TV'Criminal Record' gets first trailer, premieres January 10 on Apple TV+'Killers of the Flower Moon' gets digital release before Apple TV+ streaming debutApple TV+ opts not to renew both 'Central Park' and 'Swagger'Bending Spoons lays off entire team behind Filmic ProHalide plans first iPhone video app 'Kino' for February 2024 release$4.01 Steve Jobs check expected to fetch $25,000 at auctionApple Watch Fall Detection saves hiker after fall in the forestSubscribe to the AppleInsider podcast on: Apple Podcasts Overcast Pocket Casts Spotify Subscribe to the HomeKit Insider podcast on:• Apple Podcasts• Overcast• Pocket Casts• Spotify
Special guests Sebastiaan de With and Ben Sandofsky, co-founders of Lux, join the show to talk about their apps (Halide, Spectre, and Orion) and speculate about next week's “Scary Fast” Apple event.
A new category of iPad apps now exists with the release of iPadOS 17. The developers of Halide created a new app called Orion that lets you use your iPad as a screen for your Nintendo Switch, PS5, and other gaming consoles. It is an especially fun app, with some delightful UI built around the concept of the Orion Operating System. The other huge release in this category is Camo Studio for iPad by Reincubate, which brings a full Twitch and YouTube streaming solution to the iPad. This episode features an interview with Eden Liu from Reincubate about Camo Studio for iPad. We dive into all of the features currently in the app and touch on some of the things that may be coming in a future update. This episode of iPad Pros is sponsored by Agenda, the award-winning app that seamlessly integrates calendar events into your note taking. Learn more at www.agenda.com. Agenda 18 is now available as a free download for macOS, iPadOS, and iOS. Bonus content and early episodes with chapter markers are available by supporting the podcast at www.patreon.com/ipadpros. Bonus content and early episodes are also now available in Apple Podcasts! Subscribe today to get instant access to iPad Possibilities, iPad Ponderings, and iPad Historia! Show notes are available at www.iPadPros.net. Feedback is welcomed at iPadProsPodcast@gmail.com. Links: https://reincubate.com/camo/ https://www.switcherstudio.com https://streamelements.com https://www.elgato.com/us/en/p/game-capture-hd60-x Chapter Markers: 00:00:00: Opening 00:02:28: Support the Podcast 00:03:13: Eden Liu 00:05:21: The origins of Camo 00:07:22: The iPad version 00:08:29: How do you use iPad? 00:12:19: Elevator Pitch for Camo Studio 00:13:17: What did iPadOS 17 add? 00:15:55: Multiple video sources? 00:19:33: Doing a video podcast from iPad? 00:21:47: Switcher Studio and Remote Guests 00:25:29: Multiple Audio Tracks? 00:30:22: Twitch and YouTube Streaming 00:33:25: Web Overlays 00:34:17: Stream Elements 00:38:36: Stream Panels 00:39:55: Sponsor - Agenda 00:43:04: Capture Cards 00:48:23: Save video locally 00:51:05: Using the iPhone with the iPad? 00:53:30: Other uses of Camo Studio 00:57:22: USB-C iPad Required? 00:59:50: Apple Pencil 01:01:52: Adding media to your stream 01:03:45: Camo Scenes 01:07:13: Keyboard Shortcuts 01:09:43: Streaming your iPad's Screen 01:11:48: Camera Effects 01:14:48: Auto Framing 01:16:09: Remove 01:16:36: Spotlight and Lighting 01:19:35: The Mac Version 01:22:29: Free forever? 01:23:18: Where can people learn more? 01:28:32: Closing Hosted on Acast. See acast.com/privacy for more information.
Hello Sebastiaan, thank you very much for accepting the invitation to our Spanish-language podcast. It is an honor for us to have you with us; you are someone who has influenced the development world with your wonderful apps "Halide" and "Spectre", and now with your spectacular app "Orion". Thanks again, Sebastiaan, for being here. https://apps.apple.com/us/app/hdmi-monitor-orion/id6459355072 https://apps.apple.com/us/app/halide/id885697368 https://www.lux.camera/meet-orion/ https://orion.tube/?ref=lux.camera https://www.amazon.com/Capture-1080P60-Streaming-Recorder-Compatible/dp/B08Z3XDYQ7?crid=12990E8QDBTEL&keywords=usb%2Bc%2Bcapture%2Bcard&qid=1695254389&sprefix=usb%2Bc%2Bcapture%2Bcar%2Caps%2C115&linkCode=ll1&tag=luxoptics-20 https://orion.tube/accessories https://www.lux.camera/how-to-design-for-iphone-x-without-an-iphone-x/ https://rep.ly/sdw https://www.lux.camera/author/sebastiaan/ https://loversmagazine.com/interviews/sebastiaan-de-with https://www.instagram.com/sdw/ https://twitter.com/sdw?lang=es https://twitter.com/halidecamera You can contribute to the upkeep of our podcast via PayPal: send as (friends or family) to israeledison20@hotmail.com. In the note you can put coffee, food, beers, that sort of thing, avoiding the words donation, support and anything else that suggests this is something other than simple help. That way no platform will keep a percentage of your contributions. Thanks to everyone in advance. // Where to find us YouTube channel https://www.youtube.com/c/ApplelianosApplelianos/featured Telegram group (invitation link) https://t.me/+U9If86lsuY00MGU0 Email applelianos@gmail.com Telegram channel for episodes https://t.me/ApplelianosFLAC My Amazon Shop https://amzn.to/30sYcbB Twitter https://twitter.com/ApplelianosPod Apple Podcasts https://podcasts.apple.com/es/podcast/applelianos-podcast/id993909563 Ivoox https://www.ivoox.com/podcast-applelianos-podcast_sq_f1170563_1.html Spotify https://open.spotify.com/show/2P1alAORWd9CaW7Fws2Fyd?si=6Lj9RFMyTlK8VFwr9LgoOw
Want to help define the AI Engineer stack? Have opinions on the top tools, communities and builders? We're collaborating with friends at Amplify to launch the first State of AI Engineering survey! Please fill it out (and tell your friends)! If AI is so important, why is its software so bad? This was the motivating question for Chris Lattner as he reconnected with his product counterpart on TensorFlow, Tim Davis, and started working on a modular solution to the problem of sprawling, monolithic, fragmented platforms in AI development. They announced a $30m seed in 2022 and, following their successful double launch of Modular/Mojo…
We have just announced our first set of speakers at AI Engineer Summit! Sign up for the livestream or email sponsors@ai.engineer if you'd like to support.We are facing a massive GPU crunch. As both startups and VC's hoard Nvidia GPUs like countries count nuclear stockpiles, tweets about GPU shortages have become increasingly common. But what if we could run LLMs with AMD cards, or without a GPU at all? There's just one weird trick: compilation. And there's one person uniquely qualified to do it.We had the pleasure to sit down with Tianqi Chen, who's an Assistant Professor at CMU, where he both teaches the MLC course and runs the MLC group. You might also know him as the creator of XGBoost, Apache TVM, and MXNet, as well as the co-founder of OctoML. The MLC (short for Machine Learning Compilation) group has released a lot of interesting projects:* MLC Chat: an iPhone app that lets you run models like RedPajama-3B and Vicuna-7B on-device. It gets up to 30 tok/s!* Web LLM: Run models like LLaMA-70B in your browser (!!) to offer local inference in your product.* MLC LLM: a framework that allows any language models to be deployed natively on different hardware and software stacks.The MLC group has just announced new support for AMD cards; we previously talked about the shortcomings of ROCm, but using MLC you can get performance very close to the NVIDIA's counterparts. This is great news for founders and builders, as AMD cards are more readily available. Here are their latest results on AMD's 7900s vs some of top NVIDIA consumer cards.If you just can't get a GPU at all, MLC LLM also supports ARM and x86 CPU architectures as targets by leveraging LLVM. While speed performance isn't comparable, it allows for non-time-sensitive inference to be run on commodity hardware.We also enjoyed getting a peek into TQ's process, which involves a lot of sketching:With all the other work going on in this space with projects like ggml and Ollama, we're excited to see GPUs becoming less and less of an issue to get models in the hands of more people, and innovative software solutions to hardware problems!Show Notes* TQ's Projects:* XGBoost* Apache TVM* MXNet* MLC* OctoML* CMU Catalyst* ONNX* GGML* Mojo* WebLLM* RWKV* HiPPO* Tri Dao's Episode* George Hotz EpisodePeople:* Carlos Guestrin* Albert GuTimestamps* [00:00:00] Intros* [00:03:41] The creation of XGBoost and its surprising popularity* [00:06:01] Comparing tree-based models vs deep learning* [00:10:33] Overview of TVM and how it works with ONNX* [00:17:18] MLC deep dive* [00:28:10] Using int4 quantization for inference of language models* [00:30:32] Comparison of MLC to other model optimization projects* [00:35:02] Running large language models in the browser with WebLLM* [00:37:47] Integrating browser models into applications* [00:41:15] OctoAI and self-optimizing compute* [00:45:45] Lightning RoundTranscriptAlessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20]Swyx: Okay, and we are here with Tianqi Chen, or TQ as people call him, who is assistant professor in ML computer science at CMU, Carnegie Mellon University, also helping to run Catalyst Group, also chief technologist of OctoML. You wear many hats. Are those, you know, your primary identities these days? Of course, of course. [00:00:42]Tianqi: I'm also, you know, very enthusiastic open source. 
So I'm also a VP and PRC member of the Apache TVM project and so on. But yeah, these are the things I've been up to so far. [00:00:53]Swyx: Yeah. So you did Apache TVM, XGBoost, and MXNet, and we can cover any of those in any amount of detail. But maybe what's one thing about you that people might not learn from your official bio or LinkedIn, you know, on the personal side? [00:01:08]Tianqi: Let me say, yeah, so normally when I do, I really love coding, even though like I'm trying to run all those things. So one thing that I keep a habit on is I try to do sketchbooks. I have a book, like real sketchbooks to draw down the design diagrams and the sketchbooks I keep sketching over the years, and now I have like three or four of them. And it's kind of a usually a fun experience of thinking the design through and also seeing how open source project evolves and also looking back at the sketches that we had in the past to say, you know, all these ideas really turn into code nowadays. [00:01:43]Alessio: How many sketchbooks did you get through to build all this stuff? I mean, if one person alone built one of those projects, he'll be a very accomplished engineer. Like you built like three of these. What's that process like for you? Like it's the sketchbook, like the start, and then you think about the code or like. [00:01:59]Swyx: Yeah. [00:02:00]Tianqi: So, so usually I start sketching on high level architectures and also in a project that works for over years, we also start to think about, you know, new directions, like of course generative AI language model comes in, how it's going to evolve. So normally I would say it takes like one book a year, roughly at that rate. It's usually fun to, I find it's much easier to sketch things out and then gives a more like a high level architectural guide for some of the future items. Yeah. [00:02:28]Swyx: Have you ever published this sketchbooks? Cause I think people would be very interested on, at least on a historical basis. Like this is the time where XGBoost was born, you know? Yeah, not really. [00:02:37]Tianqi: I started sketching like after XGBoost. So that's a kind of missing piece, but a lot of design details in TVM are actually part of the books that I try to keep a record of. [00:02:48]Swyx: Yeah, we'll try to publish them and publish something in the journals. Maybe you can grab a little snapshot for visual aid. Sounds good. [00:02:57]Alessio: Yeah. And yeah, talking about XGBoost, so a lot of people in the audience might know it's a gradient boosting library, probably the most popular out there. And it became super popular because many people started using them in like a machine learning competitions. And I think there's like a whole Wikipedia page of like all state-of-the-art models. They use XGBoost and like, it's a really long list. When you were working on it, so we just had Tri Dao, who's the creator of FlashAttention on the podcast. And I asked him this question, it's like, when you were building FlashAttention, did you know that like almost any transform race model will use it? And so I asked the same question to you when you were coming up with XGBoost, like, could you predict it would be so popular or like, what was the creation process? And when you published it, what did you expect? We have no idea. [00:03:41]Tianqi: Like, actually, the original reason that we built that library is that at that time, deep learning just came out. Like that was the time where AlexNet just came out. 
And one of the ambitious missions that myself and my advisor, Carlos Guestrin, had then is we want to think about, you know, try to test the hypothesis. Can we find alternatives to deep learning models? Because then, you know, there are other alternatives like, you know, support vector machines, linear models, and of course, tree-based models. And our question was, if you build those models and feed them with big enough data, because usually like one of the key characteristics of deep learning is that it's taking a lot [00:04:22]Swyx: of data, right? [00:04:23]Tianqi: So we will be able to get the same amount of performance. That's a hypothesis we're setting out to test. Of course, if you look at now, right, that's a wrong hypothesis, but as a byproduct, what we find out is that, you know, most of the gradient boosting library out there is not efficient enough for us to test that hypothesis. So I happen to have quite a bit of experience in the past of building gradient boosting trees and their variants. So XGBoost was kind of like a byproduct of that hypothesis testing. At that time, I'm also competing a bit in data science challenges, like I worked on KDDCup and then Kaggle kind of become bigger, right? So I kind of think maybe it's becoming useful to others. One of my friends convinced me to try to do a Python binding of it. That tends to be like a very good decision, right, to be effective. Usually when I build it, we feel like maybe a command line interface is okay. And now we have a Python binding, we have R bindings. And then it realized, you know, it started getting interesting. People started contributing different perspectives, like visualization and so on. So we started to push a bit more on to building distributed support to make sure it works on any platform and so on. And even at that time point, when I talked to Carlos, my advisor, later, he said he never anticipated that we'll get to that level of success. And actually, why I pushed for gradient boosting trees, interestingly, at that time, he also disagreed. He thinks that maybe we should go for kernel machines then. And it turns out, you know, actually, we are both wrong in some sense, and Deep Neural Network was the king in the hill. But at least the gradient boosting direction got into something fruitful. [00:06:01]Swyx: Interesting. [00:06:02]Alessio: I'm always curious when it comes to these improvements, like, what's the design process in terms of like coming up with it? And how much of it is a collaborative with like other people that you're working with versus like trying to be, you know, obviously, in academia, it's like very paper-driven kind of research driven. [00:06:19]Tianqi: I would say the XGBoost improvement at that time point was more on like, you know, I'm trying to figure out, right. But it's combining lessons. Before that, I did work on some of the other libraries on matrix factorization. That was like my first open source experience. Nobody knew about it, because you'll find, likely, if you go and try to search for the package SVDFeature, you'll find some SVN repo somewhere. But it's actually being used for some of the recommender system packages. So I'm trying to apply some of the previous lessons there and trying to combine them. The later projects like MXNet and then TVM is much, much more collaborative in a sense that... But, of course, XGBoost has become bigger, right? So when we started that project myself, and then we have, it's really amazing to see people come in. 
Michael, who was a lawyer, and now he works on the AI space as well, on contributing visualizations. Now we have people from our community contributing different things. So extra boost even today, right, it's a community of committers driving the project. So it's definitely something collaborative and moving forward on getting some of the things continuously improved for our community. [00:07:37]Alessio: Let's talk a bit about TVM too, because we got a lot of things to run through in this episode. [00:07:42]Swyx: I would say that at some point, I'd love to talk about this comparison between extra boost or tree-based type AI or machine learning compared to deep learning, because I think there is a lot of interest around, I guess, merging the two disciplines, right? And we can talk more about that. I don't know where to insert that, by the way, so we can come back to it later. Yeah. [00:08:04]Tianqi: Actually, what I said, when we test the hypothesis, the hypothesis is kind of, I would say it's partially wrong, because the hypothesis we want to test now is, can you run tree-based models on image classification tasks, where deep learning is certainly a no-brainer right [00:08:17]Swyx: now today, right? [00:08:18]Tianqi: But if you try to run it on tabular data, still, you'll find that most people opt for tree-based models. And there's a reason for that, in the sense that when you are looking at tree-based models, the decision boundaries are naturally rules that you're looking at, right? And they also have nice properties, like being able to be agnostic to scale of input and be able to automatically compose features together. And I know there are attempts on building neural network models that work for tabular data, and I also sometimes follow them. I do feel like it's good to have a bit of diversity in the modeling space. Actually, when we're building TVM, we build cost models for the programs, and actually we are using XGBoost for that as well. I still think tree-based models are going to be quite relevant, because first of all, it's really to get it to work out of the box. And also, you will be able to get a bit of interoperability and control monotonicity [00:09:18]Swyx: and so on. [00:09:19]Tianqi: So yes, it's still going to be relevant. I also sometimes keep coming back to think about, are there possible improvements that we can build on top of these models? And definitely, I feel like it's a space that can have some potential in the future. [00:09:34]Swyx: Are there any current projects that you would call out as promising in terms of merging the two directions? [00:09:41]Tianqi: I think there are projects that try to bring a transformer-type model for tabular data. I don't remember specifics of them, but I think even nowadays, if you look at what people are using, tree-based models are still one of their toolkits. So I think maybe eventually it's not even a replacement, it will be just an ensemble of models that you can call. Perfect. [00:10:07]Alessio: Next up, about three years after XGBoost, you built this thing called TVM, which is now a very popular compiler framework for models. Let's talk about, so this came out about at the same time as ONNX. So I think it would be great if you could maybe give a little bit of an overview of how the two things work together. Because it's kind of like the model, then goes to ONNX, then goes to the TVM. But I think a lot of people don't understand the nuances. I can get a bit of a backstory on that. 
[00:10:33]Tianqi: So actually, that's kind of an ancient history. Before XGBoost, I worked on deep learning for two years or three years. I got a master's before I started my PhD. And during my master's, my thesis focused on applying convolutional restricted Boltzmann machine for ImageNet classification. That is the thing I'm working on. And that was before AlexNet moment. So effectively, I had to handcraft NVIDIA CUDA kernels on, I think, a GTX 2070 card. I have a 22070 card. It took me about six months to get one model working. And eventually, that model is not so good, and we should have picked a better model. But that was like an ancient history that really got me into this deep learning field. And of course, eventually, we find it didn't work out. So in my master's, I ended up working on recommender system, which got me a paper, and I applied and got a PhD. But I always want to come back to work on the deep learning field. So after XGBoost, I think I started to work with some folks on this particular MXNet. At that time, it was like the frameworks of CAFE, Ciano, PyTorch haven't yet come out. And we're really working hard to optimize for performance on GPUs. At that time, I found it's really hard, even for NVIDIA GPU. It took me six months. And then it's amazing to see on different hardwares how hard it is to go and optimize code for the platforms that are interesting. So that gets me thinking, can we build something more generic and automatic? So that I don't need an entire team of so many people to go and build those frameworks. So that's the motivation of starting working on TVM. There is really too little about machine learning engineering needed to support deep learning models on the platforms that we're interested in. I think it started a bit earlier than ONNX, but once it got announced, I think it's in a similar time period at that time. So overall, how it works is that TVM, you will be able to take a subset of machine learning programs that are represented in what we call a computational graph. Nowadays, we can also represent a loop-level program ingest from your machine learning models. Usually, you have model formats ONNX, or in PyTorch, they have FX Tracer that allows you to trace the FX graph. And then it goes through TVM. We also realized that, well, yes, it needs to be more customizable, so it will be able to perform some of the compilation optimizations like fusion operator together, doing smart memory planning, and more importantly, generate low-level code. So that works for NVIDIA and also is portable to other GPU backends, even non-GPU backends [00:13:36]Swyx: out there. [00:13:37]Tianqi: So that's a project that actually has been my primary focus over the past few years. And it's great to see how it started from where I think we are the very early initiator of machine learning compilation. I remember there was a visit one day, one of the students asked me, are you still working on deep learning frameworks? I tell them that I'm working on ML compilation. And they said, okay, compilation, that sounds very ancient. It sounds like a very old field. And why are you working on this? And now it's starting to get more traction, like if you say Torch Compile and other things. I'm really glad to see this field starting to pick up. And also we have to continue innovating here. [00:14:17]Alessio: I think the other thing that I noticed is, it's kind of like a big jump in terms of area of focus to go from XGBoost to TVM, it's kind of like a different part of the stack. 
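To make the ingest-optimize-generate flow described above concrete, here is a minimal, hedged sketch that compiles a traced PyTorch model with TVM's Relay frontend. The toy model, the input name "input0", and the "llvm" target are placeholder assumptions, and the exact API surface has shifted across TVM releases, so treat this as an outline rather than the project's canonical example.

```python
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# A toy network standing in for a real model (placeholder, not from the episode).
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)

# Ingest the traced graph into Relay, TVM's computational-graph IR.
mod, params = relay.frontend.from_pytorch(scripted, [("input0", tuple(example.shape))])

# Compile: operator fusion, memory planning, and low-level code generation
# happen inside relay.build. Swap the target string for "cuda", "vulkan", etc.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the generated code through the graph executor.
dev = tvm.device(target, 0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input0", example.numpy())
runtime.run()
print(runtime.get_output(0).numpy().shape)
```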
Why did you decide to do that? And I think the other thing about compiling to different GPUs and eventually CPUs too, did you already see some of the strain that models could have just being focused on one runtime, only being on CUDA and that, and how much of that went into it? [00:14:50]Tianqi: I think it's less about trying to get impact, more about wanting to have fun. I like to hack code, I had great fun hacking CUDA code. Of course, being able to generate CUDA code is cool, right? But now, after being able to generate CUDA code, okay, by the way, you can do it on other platforms, isn't that amazing? So it's more of that attitude to get me started on this. And also, I think when we look at different researchers, myself is more like a problem solver type. So I like to look at a problem and say, okay, what kind of tools we need to solve that problem? So regardless, it could be building better models. For example, while we built XGBoost, we built certain regularizations into it so that it's more robust. It also means building system optimizations, writing low-level code, maybe trying to write assembly and build compilers and so on. So as long as they solve the problem, definitely go and try to do them together. And I also see it's a common trend right now. Like if you want to be able to solve machine learning problems, it's no longer at the algorithm layer, right? You kind of need to solve it from both the algorithm, data, and systems angle. And this entire field of machine learning system, I think it's kind of emerging. And there's now a conference around it. And it's really good to see a lot more people are starting to look into this. [00:16:10]Swyx: Yeah. Are you talking about ICML or something else? [00:16:13]Tianqi: So machine learning and systems, right? So not only machine learning, but machine learning and system. So there's a conference called MLsys. It's definitely a smaller community than ICML, but I think it's also an emerging and growing community where people are talking about what are the implications of building systems for machine learning, right? And how do you go and optimize things around that and co-design models and systems together? [00:16:37]Swyx: Yeah. And you were area chair for ICML and NeurIPS as well. So you've just had a lot of conference and community organization experience. Is that also an important part of your work? Well, it's kind of expected for an academic. [00:16:48]Tianqi: If I hold an academic job, I need to do services for the community. Okay, great. [00:16:53]Swyx: Your most recent venture in MLsys is going to the phone with MLC LLM. You announced this in April. I have it on my phone. It's great. I'm running Llama 2, Vicuña. I don't know what other models that you offer. But maybe just kind of describe your journey into MLC. And I don't know how this coincides with your work at CMU. Is that some kind of outgrowth? [00:17:18]Tianqi: I think it's more like a focused effort that we want in the area of machine learning compilation. So it's kind of related to what we built in TVM. So when we built TVM was five years ago, right? And a lot of things happened. We built the end-to-end machine learning compiler that works, the first one that works. But then we captured a lot of lessons there. So then we are building a second iteration called TVM Unity. That allows us to be able to allow ML engineers to be able to quickly capture the new model and how we demand building optimizations for them. And MLC LLM is kind of like an MLC. 
It's more like a vertical driven organization that we go and build tutorials and go and build projects like LLM to solutions. So that to really show like, okay, you can take machine learning compilation technology and apply it and bring something fun forward. Yeah. So yes, it runs on phones, which is really cool. But the goal here is not only making it run on phones, right? The goal is making it deploy universally. So we do run on Apple M2 Macs, the 17 billion models. Actually, on a single batch inference, more recently on CUDA, we get, I think, the most best performance you can get out there already on the 4-bit inference. Actually, as I alluded earlier before the podcast, we just had a result on AMD. And on a single batch, actually, we can get the latest AMD GPU. This is a consumer card. It can get to about 80% of the 4019, so NVIDIA's best consumer card out there. So it's not yet on par, but thinking about how diversity and what you can enable and the previous things you can get on that card, it's really amazing that what you can do with this kind of technology. [00:19:10]Swyx: So one thing I'm a little bit confused by is that most of these models are in PyTorch, but you're running this inside a TVM. I don't know. Was there any fundamental change that you needed to do, or was this basically the fundamental design of TVM? [00:19:25]Tianqi: So the idea is that, of course, it comes back to program representation, right? So effectively, TVM has this program representation called TVM script that contains more like computational graph and operational representation. So yes, initially, we do need to take a bit of effort of bringing those models onto the program representation that TVM supports. Usually, there are a mix of ways, depending on the kind of model you're looking at. For example, for vision models and stable diffusion models, usually we can just do tracing that takes PyTorch model onto TVM. That part is still being robustified so that we can bring more models in. On language model tasks, actually what we do is we directly build some of the model constructors and try to directly map from Hugging Face models. The goal is if you have a Hugging Face configuration, we will be able to bring that in and apply optimization on them. So one fun thing about model compilation is that your optimization doesn't happen only as a soft language, right? For example, if you're writing PyTorch code, you just go and try to use a better fused operator at a source code level. Torch compile might help you do a bit of things in there. In most of the model compilations, it not only happens at the beginning stage, but we also apply generic transformations in between, also through a Python API. So you can tweak some of that. So that part of optimization helps a lot of uplifting in getting both performance and also portability on the environment. And another thing that we do have is what we call universal deployment. So if you get the ML program into this TVM script format, where there are functions that takes in tensor and output tensor, we will be able to have a way to compile it. So they will be able to load the function in any of the language runtime that TVM supports. So if you could load it in JavaScript, and that's a JavaScript function that you can take in tensors and output tensors. If you're loading Python, of course, and C++ and Java. So the goal there is really bring the ML model to the language that people care about and be able to run it on a platform they like. 
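As a rough illustration of that universal-deployment idea, the sketch below loads a compiled artifact through TVM's Python runtime and calls an exported tensor-in/tensor-out function. The file path, function name, shapes, and destination-passing calling convention are all assumptions for illustration; the JavaScript, C++, and Java runtimes follow the same pattern through their own bindings.

```python
import numpy as np
import tvm

# Hypothetical: a shared library previously emitted by the compiler; the path
# and the exported function name "main" are placeholders, not real MLC artifacts.
lib = tvm.runtime.load_module("dist/model_lib.so")
dev = tvm.cpu(0)

fn = lib["main"]  # a packed function that consumes and produces NDArrays

x = tvm.nd.array(np.random.rand(1, 12).astype("float32"), device=dev)
out = tvm.nd.empty((1, 12), dtype="float32", device=dev)

# Depending on how the function was compiled, it either returns its result or
# fills a pre-allocated output buffer (destination-passing style); this sketch
# assumes the latter.
fn(x, out)
print(out.numpy()[:, :4])
```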
[00:21:37]Swyx: It strikes me that I've talked to a lot of compiler people, but you don't have a traditional compiler background. You're inventing your own discipline called machine learning compilation, or MLC. Do you think that this will be a bigger field going forward? [00:21:52]Tianqi: First of all, I do work with people working on compilation as well. So we're also taking inspiration from a lot of early innovations in the field. For example, for TVM initially, we took a lot of inspiration from Halide, which is an image processing compiler. And of course, since then, we have evolved quite a bit to focus on machine learning related compilation. If you look at some of our conference publications, you'll find that machine learning compilation is already kind of a subfield. If you look at papers in both machine learning venues, the MLSys conference of course, and also systems venues, every year there will be papers around machine learning compilation. And in the compiler conference called CGO, there's a C4ML workshop that is also trying to focus on this area. So definitely it's already starting to gain traction and become a field. I wouldn't claim that I invented this field, but definitely I help to work with a lot of folks there. And I try to bring a perspective, of course, trying to learn a lot from compiler optimizations as well as trying to bring knowledge of machine learning and systems together. [00:23:07]Alessio: So we had George Hotz on the podcast a few episodes ago, and he had a lot to say about AMD and their software. So when you think about TVM, are you still restricted in a way by the performance of the underlying kernels, so to speak? If your target is like a CUDA runtime, you still get better performance, no matter how TVM helps you get there, but that level you don't take care of, right? [00:23:35]Tianqi: There are two parts in here, right? So first of all, there is the lower level runtime, like the CUDA runtime. And then actually for NVIDIA, a lot of the moat came from their libraries, like CUTLASS, cuDNN, right? Those library optimizations. And also for specialized workloads, actually you can specialize them. Because in a lot of cases you'll find that if you go and do benchmarks, it's very interesting. Like two years ago, if you tried to benchmark ResNet, for example, usually the NVIDIA library gives you the best performance. It's really hard to beat them. But as soon as you start to change the model to something, maybe a bit of a variation of ResNet, not for the traditional ImageNet detections, but for latent detection and so on, there will be some room for optimization, because people sometimes overfit to benchmarks. The people who go and optimize things overfit to the benchmarks. So that's the largest barrier, being able to get low-level kernel libraries, right? In that sense, the goal of TVM is actually that we try to have a generic layer to both, of course, leverage libraries when available, but also be able to automatically generate libraries when possible. So in that sense, we are not restricted by the libraries that they have to offer. That's why we are able to run on Apple M2 or WebGPU where there's no library available, because we are kind of automatically generating libraries. That makes it easier to support less well-supported hardware, right? For example, WebGPU is one example.
From a runtime perspective, for AMD, I think before, their Vulkan driver was not very well supported. Recently, they are getting good. But even before that, we were able to support AMD through this GPU graphics backend called Vulkan, which is not as performant, but it gives you decent portability across that hardware. [00:25:29]Alessio: And I know we got other MLC stuff to talk about, like WebLLM, but I want to wrap up on the optimizations that you're doing. So there are kind of four core things, right? Kernel fusion, which we talked a bit about in the FlashAttention episode and the tinygrad one, memory planning, and loop optimization. I think those are, you know, pretty self-explanatory, but I think they're the ones that people have the most questions about. Can you quickly explain those? [00:25:54]Tianqi: So these are kind of different things, right? Kernel fusion means that, you know, if you have an operator like a convolution, or in the case of a transformer an MLP, you have other operators that follow it, right? You don't want to launch two GPU kernels. You want to be able to put them together in a smart way, right? And as for memory planning, it's more about, you know, hey, if you run Python code, every time you generate a new array, you are effectively allocating a new piece of memory, right? Of course, PyTorch and other frameworks try to optimize that for you, so there is a smart memory allocator behind the scenes. But actually, in a lot of cases, it's much better to statically allocate and plan everything ahead of time. And that's where a compiler can come in. Actually, for language models it's much harder, because of dynamic shapes. So you need to be able to do what we call symbolic shape tracing. So we have a symbolic variable that tells you the shape of the first tensor is n by 12, and the shape of the third tensor is also n by 12, or maybe it's n times 2 by 12. Although you don't know what n is, right, you will be able to know that relation and be able to use that to reason about fusion and other decisions. Besides this, I think loop transformation is quite important. And it's actually non-traditional. Originally, if you simply write code and you want to get performance, it's very hard. For example, if you write a matrix multiply, the simplest thing you can do is for i, j, k: C[i][j] += A[i][k] * B[k][j]. But that code is 100 times slower than the best available code that you can get. So we do a lot of transformations, like being able to take the original code, trying to put things into shared memory, making use of tensor cores, making use of memory copies, and all this. Actually, for all these things, we also realize that, you know, we cannot do all of them automatically. So we also make the ML compilation framework available as a Python package, so that people will be able to continuously improve that part of the engineering in a more transparent way. We find that's very useful, actually, for us to be able to get good performance very quickly on some of the new models. Like when Llama 2 came out, we were able to go and look at the whole thing, see where the bottleneck is, and go and optimize those.
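A small sketch of the loop-transformation idea Tianqi describes: the computation is declared once (the naive i/j/k loop), and tiling, splitting, and reordering are applied as a schedule instead of hand-rewriting the kernel. This uses TVM's older te/schedule API purely for illustration; MLC's current stack uses TensorIR and automated scheduling, and the tile/split factors here are arbitrary, not tuned.

```python
import numpy as np
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
k = te.reduce_axis((0, n), name="k")
# The naive "for i, j, k: C[i][j] += A[i][k] * B[k][j]" computation, declared once.
C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

s = te.create_schedule(C.op)
# Loop transformations: tile i/j for cache locality and split the reduction axis.
bn = 32
io, jo, ii, ji = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
(kaxis,) = s[C].op.reduce_axis
ko, ki = s[C].split(kaxis, factor=4)
s[C].reorder(io, jo, ko, ki, ii, ji)

func = tvm.build(s, [A, B, C], target="llvm")

# Check the transformed kernel still computes the same matmul.
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n, n).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(n, n).astype("float32"), dev)
c = tvm.nd.array(np.zeros((n, n), dtype="float32"), dev)
func(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() @ b.numpy(), rtol=1e-3)
```

Changing the schedule rather than the math is what makes it practical to retarget the same kernel to shared memory, tensor cores, or a different GPU backend.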
[00:28:10]Alessio: And then the fourth one is weight quantization. So everybody wants to know about that. And just to give people an idea of the memory savings, if you're doing FP32, it's like four bytes per parameter, while int8 is like one byte per parameter. So you can really shrink down the memory footprint. What are some of the trade-offs there? How do you figure out what the right target is? And what are the precision trade-offs, too? [00:28:37]Tianqi: Right now, a lot of people mostly use int4 for language models. So that really shrinks things down a lot. And more recently, actually, we started to think that, at least in MLC, we don't want to have a strong opinion on what kind of quantization we want to bring, because there are so many researchers in the field. So what we can do is allow developers to customize the quantization they want, but we still bring the optimized code for them. So we are working on this item called bring-your-own-quantization. In fact, hopefully MLC will be able to support more quantization formats. And definitely, I think it's an open field that's being explored. Can you bring in more sparsity? Can you quantize activations as much as possible, and so on? And it's going to be something that's going to be relevant for quite a while. [00:29:27]Swyx: You mentioned something I wanted to double back on, which is that most people use int4 for language models. This is actually not obvious to me. Are you talking about the GGML type people, or are the researchers who are training the models also using int4? [00:29:40]Tianqi: Sorry, so I'm mainly talking about inference, not training, right? When you're doing training, of course, int4 is harder, right? Maybe you could do some form of mixed precision there. For inference, I think in a lot of cases you will be able to get away with int4. And actually, that does bring a lot of savings in terms of the memory overhead, and so on.
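For a sense of the memory math behind that, here is a back-of-envelope sketch of symmetric group-wise weight quantization in plain NumPy. It only illustrates the 4 bytes vs. roughly half a byte per parameter trade-off; the group size is an assumption, and real int4 schemes (GPTQ, AWQ, MLC's own formats) pack bits and handle outliers quite differently.

```python
import numpy as np

def quantize_groupwise(w: np.ndarray, bits: int = 4, group: int = 128):
    """Symmetric per-group quantization of a flat weight vector."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for int4, 127 for int8
    w = w.reshape(-1, group)
    scale = np.maximum(np.abs(w).max(axis=1, keepdims=True) / qmax, 1e-8)
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scale).reshape(-1)

w = np.random.randn(4096 * 4096).astype(np.float32)  # one hypothetical weight matrix
q4, s4 = quantize_groupwise(w, bits=4)
w_hat = dequantize(q4, s4)

print("fp32 bytes:", w.nbytes)                       # 4 bytes per parameter
print("~int4 bytes:", q4.size // 2 + s4.nbytes)      # ~0.5 bytes per parameter, plus scales
print("mean abs error:", float(np.abs(w - w_hat).mean()))
```

The accuracy question is exactly the trade-off under discussion: fewer bits per weight means more rounding error, which is why the quantization scheme itself is left open to experimentation.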
[00:30:09]Alessio: Yeah, that's great. Let's talk a bit about maybe GGML, and then there's Mojo. How should people think about MLC? How do all these things play together? I think GGML is focused on model-level re-implementation and improvements. Mojo is a language, a superset of Python. You're more at the compiler level. Do you all work together? Do people choose between them? [00:30:32]Tianqi: So I think in this case, it's great to see the ecosystem become so rich, with so many different approaches. In our case, GGML is more like you're implementing something from scratch in C, right? So that gives you the ability to go and customize it for each particular hardware backend. But then you will need to write your own CUDA kernels, and write them again for AMD, and so on. So the engineering effort is a bit broader in that sense. Mojo, I have not looked at the specific details yet. I think it's fair to say it's a language, right? I believe there will also be machine learning compilation technologies behind it. So it sits in an interesting place there. In the case of MLC, our position is that we do not want to have an opinion on how, where, or in which language people want to develop and deploy, and so on. And we also realize that there are actually two phases. You want to be able to develop and optimize your model. By optimization, I mean really bringing in the best CUDA kernels and doing some of the machine learning engineering in there. And then there's a phase where you want to deploy it as a part of an app. So if you look at the space, you'll find that GGML is more like, I'm going to develop and optimize in the C language, and then deploy with the low-level languages they have. And Mojo is, you want to develop and optimize in Mojo, right, and you deploy in Mojo. In fact, that's the philosophy they want to push for. In the MLC case, we find that actually, if you want to develop models, the machine learning community likes Python. Python is the language you should focus on. So in the case of MLC, we really want to enable not only defining your model in Python, that's very common, right? But also doing ML optimization, like engineering optimization, CUDA kernel optimization, memory planning, all those things in Python, so that it's customizable and so on. But when you do deployment, we realize that people want a bit of a universal flavor. If you are a web developer, you want JavaScript, right? If you're maybe an embedded systems person, maybe you would prefer C++ or C or Rust. And people sometimes do like Python in a lot of cases. So in the case of MLC, we really want to have this vision of: you build a generic optimization in Python, then you deploy that universally onto the environments that people like. [00:32:54]Swyx: That's a great perspective and comparison, I guess. One thing I wanted to make sure that we cover is that I think you are one of this emerging set of academics that also very much focuses on your artifacts of delivery. Of course. Something we've talked about before with another guest, who was very focused on his GitHub. And obviously you treated XGBoost like a product, you know? And now you're publishing an iPhone app. What is your thinking about academics getting involved in shipping products? [00:33:24]Tianqi: I think there are different ways of making impact, right? Definitely, you know, there are academics that are writing papers and building insights for people so that people can build products on top of them. In my case, I think in the particular field I'm working on, machine learning systems, I feel like we really need to be able to get it into the hands of people so that we really see the problem, right? And we show that we can solve a problem. And it's a different way of making impact. And there are academics doing similar things. Like, you know, if you look at some of the people from Berkeley, right? Every few years, they will come up with big open source projects. Certainly, I think it's just a healthy ecosystem to have different ways of making impact. And I feel like being able to do open source and work with the open source community is really rewarding, because we have a real problem to work on when we build our research. Those research efforts come together and people are able to make use of them. And we also start to see interesting research challenges that we wouldn't otherwise see, right, if you're just trying to do a prototype and so on. So I feel like it's one interesting way of making impact, making contributions. [00:34:40]Swyx: Yeah, you definitely have a lot of impact there. And having experience publishing Mac stuff before, the Apple App Store is no joke. It is the hardest compilation, human compilation, effort. So one thing that we definitely wanted to cover is running in the browser. You have a 70 billion parameter model running in the browser. That's right. Can you just talk about how? Yeah, of course. [00:35:02]Tianqi: So I think there are a few elements that need to come in, right? First of all, you know, we do need a MacBook, the latest one, like an M2 Max, because you need the memory to be big enough to cover that. So for a 70 billion model, it takes you about, I think, 50 gigabytes of RAM.
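As a rough sanity check of that memory figure, here is the back-of-envelope arithmetic; all numbers below are assumptions for illustration (70 billion parameters, 4-bit weights, and several gigabytes of headroom for the KV cache, activations, and runtime).

```python
# Back-of-envelope memory estimate for running a 70B model at 4-bit precision.
params = 70e9
weight_bytes = params * 4 / 8        # 4 bits = 0.5 bytes per parameter -> ~35 GB
overhead_bytes = 13e9                # assumed KV cache + activations + runtime headroom

total_gb = (weight_bytes + overhead_bytes) / 1e9
print(f"~{weight_bytes / 1e9:.0f} GB weights, ~{total_gb:.0f} GB total")  # ~35 GB, ~48 GB
```

Which is why the estimate lands around 50 GB, and why only the higher-memory MacBook configurations can hold it.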
So the M2 Max, the higher-spec version, will be able to run it, right? And it also leverages machine learning compilation. Again, what we are doing is the same: whether it's running on an iPhone, on server cloud GPUs, on AMD, or on a MacBook, we all go through that same MLC pipeline. Of course, in certain cases, maybe we'll do a bit of customization and iteration for individual ones. And then it runs on the browser runtime, this package called WebLLM. So what we do is take that original model and compile it to what we call WebGPU, and then WebLLM will be able to pick it up. WebGPU is this latest GPU technology that major browsers are shipping right now. You can get it in Chrome already. It allows you to access your native GPUs from a browser. And then effectively, that language model is just invoking the WebGPU kernels through there. So actually, when Llama 2 came out, initially, we asked the question: can you run the 70 billion model on a MacBook? That was the question we were asking. So first, Jin Lu, who is the engineer pushing this, got the 70 billion model on a MacBook. We had a CLI version. So in MLC, you will be able to... that runs through a Metal accelerator. So effectively, you use the Metal programming language to get the GPU acceleration. So we found, okay, it works for the MacBook. Then we asked, we have a WebGPU backend, why not try it there? So we just tried it out. And it's really amazing to see everything up and running. And actually, it runs smoothly in that case. So I do think there are some kinds of interesting use cases already in this, because everybody has a browser. You don't need to install anything. I think it doesn't make sense yet to really run a 70 billion model in a browser, because you kind of need to be able to download the weights and so on. But I think we're getting there. Effectively, the most powerful models, you will be able to run on a consumer device. It's kind of really amazing. And also, in a lot of cases, there might be use cases. For example, if I'm going to build a chatbot that I talk to and that answers questions, maybe some of the components, like the voice-to-text, could run on the client side. And so there are a lot of possibilities of being able to have something hybrid that contains an edge component and something that runs on a server. [00:37:47]Alessio: Do these browser models have a way for applications to hook into them? So if I'm building something, say, I can use OpenAI or I can use the local model. Of course. [00:37:56]Tianqi: Right now, actually, we are building... so there's an NPM package called WebLLM, so that if you want to embed it into your web app, you will be able to directly depend on WebLLM and use it. We also have a REST API that's OpenAI compatible. That REST API, I think, right now, actually runs on a native backend, so that with a CUDA server it's faster to run on the native backend. But we also have a WebGPU version of it that you can go and run. So yeah, we do want to be able to have easier integrations with existing applications. And the OpenAI API is certainly one way to do that. Yeah, this is great. [00:38:37]Swyx: I actually did not know there's an NPM package that makes it very, very easy to try out and use. I want to actually... one thing I'm unclear about is the chronology. Because as far as I know, Chrome shipped WebGPU at the same time that you shipped WebLLM. Okay, yeah. So did you have some kind of secret chat with Chrome?
[00:38:57]Tianqi: The good news is that Chrome is doing a very good job of trying to have early releases. So although the official shipment of Chrome's WebGPU was at the same time as WebLLM, you were actually able to try out WebGPU technology in Chrome earlier. There is an unstable version called Canary. I think as early as two years ago, there was a WebGPU version. Of course, it's getting better. So we had a TVM-based WebGPU backend two years ago. Of course, at that time, there were no language models. It was running on less interesting, well, still quite interesting, models. And then this year, we really started to see it getting mature, and performance keeping up. So we have a more serious push of bringing the language-model-compatible runtime onto WebGPU. [00:39:45]Swyx: I think you'd agree that the hardest part is the model download. Have there been conversations about a one-time model download and sharing between all the apps that might use this API? That is a great point. [00:39:58]Tianqi: I think it's already supported in some sense. When we download the model, WebLLM will cache it in a special Chrome cache. So if a different web app uses the same WebLLM JavaScript package, you don't need to redownload the model again. So there is already something there. But of course, you have to download the model at least once to be able to use it. [00:40:19]Swyx: Okay. One more thing just in general before we're about to zoom out to OctoAI. The last question is: you're not the only project working on, I guess, local models. That's right. Alternative models. There's GPT4All, there's Ollama, which just recently came out, and there's a bunch of these. What would be your advice to them on what's a valuable problem to work on? And what are just thin wrappers around GGML? Like, what are the interesting problems in this space, basically? [00:40:45]Tianqi: I think making the APIs better is certainly something useful, right? In general, one thing that we do try to push very hard on is this idea of easier universal deployment. So we are also looking forward to having more integration with MLC. That's why we're trying to build APIs like WebLLM and other things. We're also looking forward to collaborating with all those ecosystems and working on support to bring in models more universally, and to keep up the best possible performance in a more push-button way. [00:41:15]Alessio: So as we mentioned in the beginning, you're also the co-founder of OctoML. Recently, OctoML released OctoAI, which is a compute service that basically focuses on optimizing model runtimes, acceleration, and compilation. What has been the evolution there? So OctoML started as kind of a traditional MLOps tool, where people were building their own models and you helped them on that side. And then it seems like now most of the market is shifting to starting from pre-trained generative models. Yeah, what has that experience been for you, how have you seen the market evolve, and how did you decide to release OctoAI? [00:41:52]Tianqi: One thing that we found out is that, on one hand, it's really easy to go and get something up and running, right? But as soon as you start to consider that there are so many possible availability and scalability issues, and even integration issues, things become kind of interesting and complicated. So we really want to make sure to help people get that part easy, right?
And now, if we look at the customers we talk to and the market, certainly generative AI is something that is very interesting. So that is something that we really hope to help elevate. And we are also building on top of the technology we built to enable things like portability across hardware. You will be able to not worry about the specific details, right? Just focus on getting the model out. We'll try to work on the infrastructure and other things that help on the other end. [00:42:45]Alessio: And when it comes to getting optimization at runtime, I see, from the early adopters community we run, that most enterprises' issue is how to actually run these models. Do you see that as one of the big bottlenecks now? I think a few years ago it was like, well, we don't have a lot of machine learning talent, we cannot develop our own models. Versus now it's like, there are these great models you can use, but I don't know how to run them efficiently. [00:43:12]Tianqi: That depends on how you define running, right? On one hand, it's easy to download MLC; you download it, you run it on a laptop. But then there are also different decisions, right? What if you are trying to serve a larger volume of user requests? What if those requests change? What if the availability of hardware changes? Right now it's really hard to get the latest NVIDIA hardware, unfortunately, because everybody's trying to build things using the hardware that's out there. So I think when the definition of run changes, there are a lot more questions around it. And also, in a lot of cases, it's not only about running models, it's also about being able to solve problems around them. How do you manage your model locations, and how do you make sure that you get your model close to your execution environment more efficiently? So there are definitely a lot of engineering challenges out there that we hope to help with, yeah. And also, if you think about the future, given the technology and the kind of hardware availability we have today, we will need to make use of all the possible hardware available out there. That will include mechanisms for cutting down costs, and bringing things to the edge and cloud in a more natural way. So I feel like this is still a very early stage of where we are, but it's already good to see a lot of interesting progress. [00:44:35]Alessio: Yeah, that's awesome. I would love, and I don't know how much we're going to go in depth into it, but what does it take to actually abstract all of this from the end user? You know, they don't need to know what GPUs you run or what cloud you're running them on. You take all of that away. What was that like as an engineering challenge? [00:44:51]Tianqi: So I think there are engineering challenges there. In fact, first of all, you will need to be able to support all the kinds of hardware backends you have, right? On one hand, if you look at the NVIDIA libraries, you'll find, not too surprisingly, that most of the latest libraries work well on the latest GPUs. But there are other GPUs out there in the cloud as well. So certainly, being able to have the know-how and being able to do model optimization is one thing, right? There is also the infrastructure for being able to scale things up and locate models. And in a lot of cases, we do find that for typical models, it also requires kind of vertical iteration. So it's not about, you know, building a silver bullet and that silver bullet is going to solve all the problems.
It's more about, you know, we're building a product, we work with the users, and we find out there are interesting opportunities at a certain point. Then our engineers will go and solve that, and it will automatically be reflected in the service. [00:45:45]Swyx: Awesome. [00:45:46]Alessio: We can jump into the lightning round, unless, I don't know, Sean, you have more questions, or TQ, you have more stuff you wanted to talk about that we didn't get a chance to touch on. Yeah, we have talked a lot. [00:45:55]Swyx: So, yeah. We always like to ask, you know, do you have a commentary on other parts of AI and ML that are interesting to you? [00:46:03]Tianqi: So right now, I think one thing that we are really pushing hard for is this question of how far we can bring open source, right? I'm kind of a hacker and I really like to put things together. So I think it's unclear what the future of AI looks like. On one hand, it could be possible that, you know, you just have a few big players, you just try to talk to those bigger language models, and they can do everything, right? On the other hand, one of the things that we in academia are really excited about and pushing for, and that's one reason why I'm pushing for MLC, is: can we build something where you have different models? You have personal models that know the best movies you like, but you also have bigger models that maybe know more, and you get those models to interact with each other, right? And be able to have a wide ecosystem of AI agents that help each person, while still being able to do things like personalization. Some of them can run locally, some of them, of course, run on the cloud, and how do they interact with each other? So I think that is a very exciting time, where the future is yet undecided, but I feel like there is something we can do to shape that future as well. [00:47:18]Swyx: One more thing, which is something I'm also pursuing, which is, and this kind of goes back into predictions, but also back into your history: do you have any idea, or are you looking out for anything post-transformers as far as architecture is concerned? [00:47:32]Tianqi: I think, you know, in a lot of these cases, you can find there are already promising models for long contexts, right? There are the state space models, like the work of our colleague Albert, who worked on the HiPPO models, right? And then there is an open source version called RWKV. It's a recurrent model that allows you to summarize things. Actually, we are bringing RWKV to MLC as well, so maybe you will be able to see one of those models. [00:48:00]Swyx: We actually recorded an episode with one of the RWKV core members. It's unclear because there's no academic backing. It's just open source people. Oh, I see. So you like the merging of recurrent networks and transformers? [00:48:13]Tianqi: I do love to see this model space continue growing, right? And I feel like in a lot of cases, it's just that the attention mechanism is getting changed in some sense. So I feel like definitely there are still a lot of things to be explored here. And that is also one reason why we want to keep pushing machine learning compilation, because one of the things we are trying to push for is productivity for machine learning engineering, so that as soon as some of these models come out, we will be able to, you know, bring them onto the environments that are out there.
[00:48:43]Swyx: Yeah, it's a really good mission. Okay. Very excited to see that RWKV and state space model stuff. I'm hearing increasing chatter about that stuff. Okay. Lightning round, always fun. I'll take the first one. Acceleration. What has already happened in AI that you thought would take much longer? [00:48:59]Tianqi: The emergence of this conversational chatbot ability is something that kind of surprised me before it came out. This is one piece that I feel I originally thought would take much longer, but yeah, it happened. [00:49:11]Swyx: And it's funny, because the original ELIZA chatbot was something that goes all the way back in time, right? And then we just suddenly came back to it again. Yeah. [00:49:21]Tianqi: It's always interesting to think about, but with a kind of different technology in some sense. [00:49:25]Alessio: What about the most interesting unsolved question in AI? [00:49:31]Swyx: That's a hard one, right? [00:49:32]Tianqi: So I can tell you what kinds of things I'm excited about. I think I have always been excited about this idea of continuous learning and lifelong learning in some sense. So how does AI continue to evolve with the knowledge that's out there? It seems that we're getting much closer with all those recent technologies. So being able to develop systems to support that, and being able to think about how AI continues to evolve, is something that I'm really excited about. [00:50:01]Swyx: So specifically, just to double-click on this, are you talking about continuous training? That's like a training thing. [00:50:06]Tianqi: I feel like, you know, training, adaptation, it's all similar things, right? You want to think about the entire life cycle, right? The life cycle of collecting data, training, fine-tuning, and maybe having your local context that gets continuously curated and fed into models. So I think all these things are interesting and relevant here. [00:50:29]Swyx: Yeah. I think this is something that people are really asking about. You know, right now we have moved a lot into the sort of pre-training phase and off-the-shelf model downloads and stuff like that, which seems very counterintuitive compared to the continuous training paradigm that people want. So I guess the last question would be for takeaways. What's basically one message that you want every listener, every person, to remember today? [00:50:54]Tianqi: I think it's getting more obvious now, but one of the things that I always want to mention in my talks is that, you know, when you're thinking about AI applications, originally people thought about algorithms a lot more, right? And algorithms and models are still very important. But usually when you build AI applications, it takes, you know, the algorithm side, the system optimizations, and the data curation, right? So it takes a combination of so many facets to bring together an AI system, and being able to look at it from that holistic perspective is really useful when we start to build modern applications. I think it's going to continue to be more important in the future. [00:51:35]Swyx: Yeah. Thank you for showing the way on this. And honestly, just making things possible that I thought would take a lot longer. So thanks for everything you've done. [00:51:46]Tianqi: Thank you for having me. [00:51:47]Swyx: Yeah. [00:51:47]Alessio: Thanks for coming on, TQ. [00:51:49]Swyx: Have a good one. [00:51:49] Get full access to Latent Space at www.latent.space/subscribe
Tim Cook says the Apple product lineup is set for the holidays, your hosts go in-depth on Stage Manager for iPad and Mac, initial reviews of the new Apple TV 4K are out, we highlight Halide's iPhone 14 Pro camera review, and Elon Musk acquires Twitter on the AppleInsider podcast.
Contact our hosts
Tips via Signal: +1 863-703-0668
@stephenrobles on Twitter
@Hillitech on Twitter
Support the show
Support the show on Patreon or Apple Podcasts to get ad-free episodes every week, access to our private Discord channel, and early release of the show! We would also appreciate a 5-star rating and review in Apple Podcasts.
Links from the show
Tim Cook casts doubt on new M2 MacBook Pros in 2022
2022 iPad Pro review: World's best tablet gets M2 power boost - but not much else
Visible Unlimited Phone Plans | No Annual Contract
Stage Manager Tweet Thread
iPadOS 16.1 review: imperfect preview of what's next
iPhone 14 Pro Camera Review: A Small Step, A Huge Leap - Lux
iPhone 14 Pro camera is a huge leap, says Halide
Apple TV 4K review roundup: lower cost and future-proofed
Apple Watch optimized charging available in watchOS 9
Apple is freezing hiring, cutting budgets, claims new report
Thread on Elon Musk Buying Twitter
More AppleInsider podcasts
Tune in to our HomeKit Insider podcast covering the latest news, products, apps and everything HomeKit related. Subscribe in Apple Podcasts, Overcast, or just search for HomeKit Insider wherever you get your podcasts.
Subscribe and listen to our AppleInsider Daily podcast for the latest Apple news Monday through Friday. You can find it on Apple Podcasts, Overcast, or anywhere you listen to podcasts.
Podcast artwork from Basic Apple Guy. Download the free wallpaper pack here.
Those interested in sponsoring the show can reach out to us at: steve@appleinsider.com
(00:00) - Intro
(06:26) - Follow-Up: iPad Cellular
(10:52) - No More Macs in 2022
(17:13) - M2 iPad Pro
(20:03) - Live Activities
(21:39) - Stage Manager
(34:44) - iPhone 14 Pro Camera Review
(40:30) - New Apple TV 4K
(48:00) - Apple Hiring Freeze
(49:58) - Elon Owns Twitter
★ Support this podcast on Patreon ★