Raising kids can be hard enough, but how do you manage it when you spend half of your time in remote parts of Australia? Brendan Hodges is a fly-in fly-out worker, and has been since before his three kids were born. Brendan shares what it's like to live this unique lifestyle, and how he's managed to get the balance right for his family. Hosted on Acast. See acast.com/privacy for more information.
This week we meet Chrissy Conyers, who works as a maternal and child health midwife and navigator. She shares her story of working a FIFO model and how this broad scope allows her to stretch her professional development and opportunities.

If you are interested in more information or being a guest on future podcasts, contact me at anurseoutwhere@outlook.com

Don't forget to follow for more episodes and updates on social media:
Facebook - https://www.facebook.com/anurseoutwhere
Instagram - https://www.instagram.com/anurseoutwhere
Website: https://anurseoutwhere.com.au
Think flipping property is only for cashed-up tradies or reno experts? Think again. Graham Whitfield shows you a new way.

In Part 2 of this powerful Get Invested conversation, Bushy Martin is joined once again by the Aussie property flipper and coach, who proves you don’t need money to make money in property. Over the past four years, Graham has completed more than 27 flips, manufactured nearly $3 million in equity, and banked over $2 million in profit – all without deep pockets or DIY skills. Now, he’s lifting the lid on the exact strategies that got him there.

Following on from last week’s deep dive into Graham’s personal journey – including how a trashed rental kickstarted his flipping career – this episode zeroes in on the how. You’ll discover:
- The step-by-step process behind Graham’s flipping formula
- How to find the right property and avoid common mistakes
- The pros and cons of flipping versus buy-and-hold
- Whether active property investing is the right fit for your goals
- The core principles behind Graham’s Red Mane Coaching program

If you’re ready to build equity fast, reduce your reliance on savings or finance, and add a powerful new tool to your investing arsenal, this episode is for you. Get ready to flip your mindset and fast-track your property journey.

About Graham: Graham Whitfield is the founder of Red Mane Coaching and a passionate property coach who specialises in flipping and creative buy-renovate-hold strategies. Known as Australia’s ‘un-handiest man’, Graham proves that you don’t need tradie skills to succeed — just the right mindset and a system that works.

Special Listener Offer: Graham is offering a free flipping coaching session exclusively for Get Invested listeners! To win, email your biggest takeaway from the episode to: hello@knowhowproperty.com.au Explore more at redmane.com.au

Find your Freedom Formula: Success in property starts with your 'why', and then the 'what' and 'how'. Let me, Bushy Martin, lead you through it! Sign up for my Freedom Formula program. The first session is absolutely free, and it only takes around an hour! Find out more: https://bushymartin.com.au/freedom-formula-course

Subscribe to Property Hub for free now on your favourite podcast player. Take the next step:
- Connect, engage and get more insights with the Property Hub community at linktr.ee/propertyhubau
- Book a personal solutions session with Bushy to go deeper on your specific property needs or challenges
- Continue the discussion with likeminded investors and experts on The Property Hub Collective Facebook group
- Get a copy of Bushy's book, Get Invested, for FREE, and find out what it takes for you to invest in living more, working less
- Get all Property Hub info here: linktr.ee/propertyhubau

About Get Invested, a Property Hub show: Get Invested is the leading weekly podcast for Australians who want to learn how to unlock their full ‘self, health and wealth’ potential. Hosted by Bushy Martin, an award winning property investor, founder, author and media commentator who is recognised as one of Australia’s most trusted experts in property, investment and lifestyle, Get Invested reveals the secrets of the high performers who invest for success in every aspect of their lives and the world around them. Subscribe now on Apple Podcasts, Spotify and YouTube to get every Get Invested episode each week for free.

For business enquiries, email andrew@apiromarketing.com.

See omnystudio.com/listener for privacy information.
Is the ‘buy and hold’ approach too slow to get you off the hamster wheel? Discover how flipping property helped one Aussie investor make $2M in just four years — starting with a tenant disaster.

If you’re tired of being stuck in a job that leaves you time-poor and energy-depleted, and you’re wondering if there’s a faster path to freedom through property investment — you’re not alone. This week on Get Invested, Bushy Martin kicks off a compelling two-part conversation with Graham Whitfield — a former FIFO worker turned full-time property flipper and coach. After a trashed rental property left him $35k out of pocket, Graham decided to flip his script and pursue a hands-on investment strategy that would soon deliver multimillion-dollar returns.

In this first instalment, Graham shares the highs, lows and big turning points in his personal journey. You’ll hear how he overcame the fear of flipping, what he’s learned from over 30 renovation projects, and how creative, active investing can help you break free from the long game of traditional property.

Bushy also unpacks the current state of the market, the shift away from easy growth, and why active strategies like flipping may be the key to staying ahead — especially as Australia moves into a new property cycle.

See omnystudio.com/listener for privacy information.
We’re diving into a submission that sounds like it came straight out of a movie — but it’s very real. Our anonymous storyteller is stuck at the intersection of fate, temptation, and a life-altering decision. She’s got a rocky relationship, two kids, and just got offered a high-paying FIFO job that could turn things around financially. But here’s the twist… on the way to the job interview, she bumps into her ex of three years — and then, as if that wasn’t enough — runs into another ex of TEN years once she’s onsite. Both exes want to reconnect. Both are working at the same place. And now, she’s got to decide: Does she take the job… and risk the temptation that could unravel everything? Or walk away from the opportunity her family desperately needs? Tune in for this wild, real-life story full of emotional twists, what-would-you-do moments, and the kind of Freaky Friday fate you couldn’t script if you tried. Follow us on Instagram @sherises.podcast Join us in our Facebook forum
As tariffs and border crackdowns continue stateside, the number of Australians taking trips to the US has slumped, with travel to Asia surging instead – a trend noticed by Flight Centre, which is taking a hit to its bottom line. It's not just passengers that are causing a headache for aviation amid the US uncertainty, however: if the trade war causes a Chinese slowdown, the resulting drop in demand for resources could have a knock-on impact on the FIFO sector, with NJE's Lim Kim Hai already looking to cut spending. Adam and Jake discuss what the Trump administration's policies abroad could do – and have already done – to aviation in Australia. Plus, is there a culture problem at general aviation businesses?
Mad Mumzie takes us on a personal journey, reflecting on her recent holiday to Adelaide and the insights she gained while people-watching. As she shares her experiences, she highlights the importance of respect and kindness in the mining community, reminding us that we are all navigating our own challenges. Plus some exciting news!

With her signature humour and warmth, Mad Mumzie discusses the interactions she observed among miners and the significance of being aware of those around us. From the amusing encounters at the motel restaurant to heartfelt moments with family, this audio blog is a delightful mix of storytelling and introspection. Mad Mumzie also hints at exciting changes coming to the podcast, including a new video format on YouTube, making this episode a must-listen for fans and newcomers alike.

New in 2025! Mad Mumzie, direct from the recording studio, with in-person versions of the long-running Beers With A Miner podcast. A whole new level on YouTube. Subscribe to the channel here: Click here to Subscribe to Mad Mumzie's Youtube Channel

Show Notes Page: https://madmumzie.com/beers97
Are you looking for a job in the mines but don't know where to start? Head to https://www.madmumzie.com/noexperience/
Online courses and community by Mad Mumzie: https://mining.teachable.com/
What Boots Podcast https://steelcapsisters.com/
Forcing perennials out of winter dormancy and into bloom for spring and summer sales isn't a new concept for most growers, but growth in the perennial market has inspired a lot of folks to add perennials to their mix, and there's been plenty of research done to update protocols for breaking dormancy. Because of this, Tech On Demand host Bill Calkins wanted to have a quick discussion with perennial plant guru Chris Fifo (from Darwin Perennials) to make sure everyone is on the same page when it comes to waking perennials up this spring.

In this concise tech tip-style conversation, Bill and Chris discuss the “traditional method” of forcing perennials and the “new-school way,” which results in better uniformity and reduced losses. Then Chris explains tactics any grower can use to force long-day perennials using an extension cord and a string of lightbulbs, as well as offering suggestions for reducing the risk of disease when perennials are being forced to wake up in early spring. The bottom line: It's not as difficult as it sounds—warming them up and basic night interruption will work wonders on your perennial crop this spring.

WATCH THE VIDEO: https://youtu.be/rsq3hRTX0l0

Resources:
GROWERTALKS WEBINAR: 3 1/2 Steps to Overwintering Perennials
MICHIGAN STATE GUIDE: Long-Day Perennials
Scott Sattler and Mat Rogers from Sportsday join Fletch and Missile to talk the NSW forward pack, Dear Gerard, Two-Up and testosterone.

00:00 Missile on testosterone
03:00 Does it help with pre-existing injuries
03:30 Queensland in disarray
03:45 Fletch's NSW forward pack
04:30 Satts at Mascot Oval
05:30 The two wins in Sunshine Coast
06:30 The RSL in Bondi not doing Two-Up
07:20 Chinese tariffs to the US
09:10 Mob Land TV show
09:30 Dear Gerard
12:25 Satts doing surveillance on a FIFO worker
13:15 Missile Driving Range Etiquette

Listen to The Run Home with Joel and Fletch live every weekday: 3pm AEST on SEN 1170 AM Sydney and SEN 693 AM Brisbane
Listen Online: https://www.sen.com.au/listen
Subscribe to The Run Home YouTube Channel https://www.youtube.com/@JoelandFletchSEN
Follow us on Social Media!
TikTok: https://www.tiktok.com/@joelfletchsen
Instagram: https://www.instagram.com/joelfletchsen
X: https://x.com/joelfletchsen
Learn more about your ad choices. Visit megaphone.fm/adchoices
BRISBANE, AUSTRALIA - In this episode we have Ciara Munnelly from Mayo, who has spent the last 5 years in Australia, is now based in Brisbane, and works in construction. I chat to Ciara about her experience moving to Australia in her early thirties in the shadow of COVID and the end of a long-term relationship, feeling like she's constantly split between Ireland and Australia, what it's like working FIFO, and how life never works out how you expect it to, including marrying an Aussie!

You can follow and message the podcast on socials @Whenareyoucominghomepod, and don't forget to rate and share the podcast with your expat friends!

Hosted on Acast. See acast.com/privacy for more information.
This week, Trent Fleskens runs a Q&A session on the Perth Property Show, addressing listener questions on various property topics. He discusses the affordability of coastal suburbs like Scarborough and Rockingham, offers advice on whether to buy established homes or build new ones, and explores the pros and cons of selling during autumn versus spring. Trent also touches on the best rental yields for FIFO workers, how investors can capitalize on short-term rentals like Airbnb, and the risks of investing in outer suburbs. Additionally, he provides guidance for first-time home buyers, determining borrowing capacity, and the typical costs associated with buying property in Perth.
Would you move away from your family, friends, and entire support system just to own a home outright? Well this week’s Friday Drinks is all about big trade-offs, bold moves, and knowing your worth (even when it feels uncomfortable to ask for it). This week, one listener is thinking of quitting her FIFO job and selling her home to move interstate and live mortgage-free... it’s a huge lifestyle shake-up, but is the financial freedom worth it? And we’re also diving into a workplace conundrum that’s way too familiar: what to do when your title, your responsibilities, and your pay are definitely not on the same page... and how to confidently negotiate the raise you know you deserve. Plus, of course, we’ve got all the money wins, juicy confessions, broke tips, and Friday fun you know and love. Ready for more laughs, lessons, and unhinged money chats? Check out our oh-so-bingeable Friday Drinks playlist. Listen here. Join our 300K+ She's on the Money community in our Facebook Group and on Instagram. Acknowledgement of Country By Natarsha Bamblett aka Queen Acknowledgements. The advice shared on She's On The Money is general in nature and does not consider your individual circumstances. She's On The Money exists purely for educational purposes and should not be relied upon to make an investment or financial decision. If you do choose to buy a financial product, read the PDS, TMD and obtain appropriate financial advice tailored towards your needs. Victoria Devine and She's On The Money are authorised representatives of Money Sherpa PTY LTD ABN - 321649 27708, AFSL - 451289.See omnystudio.com/listener for privacy information.
Hacienda (the Spanish tax agency) doesn't want you to see this: investment funds for paying less tax
Finanse Bardzo Osobiste (Very Personal Finance): saving | investing | money | the good life
The world is becoming more and more unstable, which increases the temptation to move at least part of your wealth "away" from Poland. A quick and simple way to do that is investing through a foreign brokerage account. You need to know the dark side of such an account abroad, though: unfortunately, it significantly complicates the tax reporting of your investments. It complicates things, but it is entirely manageable, and I'll help you do it step by step. In effect, we'll calculate our own substitute for the PIT-8C form, working out the revenues and the tax-deductible costs ourselves. I'll explain how the FIFO method works, how to account for commissions and other brokerage fees, how to convert foreign currencies for the tax return… and of course where to enter it all later in the right PIT form.
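To make the mechanics concrete, here is a minimal sketch of the FIFO matching described above: each sale consumes the oldest purchase lots first, and the cost basis of those lots (plus fees) becomes the deductible cost. The trade format and fee handling are illustrative assumptions only; currency conversion and the actual PIT forms are omitted, and none of this is tax advice.

```python
from collections import deque

def fifo_gains(trades):
    """Match sells against the oldest buys (FIFO) and return
    (revenue, deductible cost) per sell, PIT-8C-style.
    Each trade: (side, quantity, price_per_share, fee) -- illustrative format."""
    lots = deque()          # open purchase lots: [remaining_qty, unit_cost_incl_fee]
    results = []
    for side, qty, price, fee in trades:
        if side == "buy":
            # spread the purchase fee across the shares bought
            lots.append([qty, price + fee / qty])
        else:  # sell
            revenue = qty * price - fee   # treating the sale fee as reducing revenue (assumption)
            cost = 0.0
            remaining = qty
            while remaining > 0:
                lot = lots[0]             # oldest open lot first: that's FIFO
                take = min(remaining, lot[0])
                cost += take * lot[1]
                lot[0] -= take
                remaining -= take
                if lot[0] == 0:
                    lots.popleft()
            results.append((revenue, cost))
    return results

# Example: two buys at different prices, one sell of 15 shares.
# The sell consumes all 10 shares of the first lot, then 5 of the second.
print(fifo_gains([("buy", 10, 100.0, 5.0),
                  ("buy", 10, 120.0, 5.0),
                  ("sell", 15, 130.0, 6.0)]))   # -> [(1944.0, 1607.5)]
```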
In this “Connect The Dots” episode, Chris & Filly dive into the case of a listener who is the wife of a FIFO worker and is feeling exhausted and brain-foggy, struggling with an autoimmune condition, lichen sclerosus (the immune system attacking cells in the vaginal area), and has zero motivation to do much about it because of the tiredness factor.

Chris & Filly cover:
- The listener's case history - symptomatology, when it all started, what she's tried so far
- “I've tried it all” - and what this really means underneath the surface
- Body systems that would be worth lab testing, to identify physical imbalances connected with exhaustion, low motivation and lichen sclerosus
- The vagus nerve and the dorsal vagal shutdown/immobilisation state
- Unconscious core beliefs related to patterns of over-doing, worrying, hurrying, catastrophising, victim-mentality
- The Drama Triangle and how humans play out the victim, rescuer and aggressor roles, and how it makes us sick

Show Note Links:
- If you're keen to get your case workshopped (anonymously - you won't personally be on the episode), fill in this application form. A “Connect The Dots” Initial Consult is usually $297 with Filly, but you'll get to have your case investigated for free on this episode!
- Book in for a Connect The Dots Initial Consult
- Join the Ending Body Burnout Method waitlist
- Take the Ending Body Burnout Assessment here

Disclaimer: This Ending Body Burnout Show podcast and any information, advice, opinions or statements within it do not constitute medical, health care or other professional advice, and are provided for general information purposes only. All care is taken in the preparation of the information in this Podcast. Chris & Filly Functional Medicine does not make any representations or give any warranties about its accuracy, reliability, completeness or suitability for any particular purpose. This Podcast and any information, advice, opinions or statements within it are not to be used as a substitute for professional medical, psychology, psychiatric or other mental health care or natural medicine health care. Chris & Filly Functional Medicine recommends you seek the advice of your doctor or other qualified health providers with any questions you may have regarding a medical condition. Inform your doctor of any changes you may make to your lifestyle and discuss these with your doctor. Do not disregard medical advice or delay visiting a medical professional because of something you hear in this Podcast. To the extent permissible by law, Chris & Filly Functional Medicine and the Ending Body Burnout Show Podcast will not be liable for any expenses, losses, damages (including indirect or consequential damages) or costs which might be incurred as a result of the information being inaccurate or incomplete in any way and for any reason. No part of this Podcast can be reproduced, redistributed, published, copied or duplicated in any form without the prior permission of Chris & Filly Functional Medicine.
This week's EYE ON NPI is trendy and buzzy: it's Boréas Technologies' BOS1931 High-Efficiency Piezo Driver (https://www.digikey.com/en/product-highlight/b/boreas/bos1931-high-efficiency-piezo-driver). This chip is a compact way to add powerful high-voltage piezo drive to any product, combining three chips: power supply, waveform generator and driver. With a complete I2C/I3C interface that you can connect to any microcontroller/processor, it's the most advanced all-in-one piezo driver we've seen!

Piezo (https://en.wikipedia.org/wiki/Piezoelectricity) discs are multi-use devices that convert mechanical movement to electrical signal, and vice versa. They're most often seen as electrical-to-mechanical converters such as piezo beepers (https://en.wikipedia.org/wiki/Piezoelectric_speaker), where an AC signal, usually a 3 to 6V peak-to-peak square wave, is applied across the disk. The frequency of the wave is translated into a sound frequency. It doesn't have the same fidelity as a magnetic speaker, but it's much thinner, less expensive for the component and driving circuitry, and for 2 to 4 kHz beeps it's just fine. Piezos can also be used the opposite way, where mechanical stress on the crystal is translated into an electrical signal. In this way a piezo can be used as a switch or force sensor (https://en.wikipedia.org/wiki/Piezoelectric_sensor); again, usually a few microamperes' worth of current is generated. For these basic uses, your standard microcontroller pin, or at best an H-bridge, will work just fine: you can drive piezos differentially to get more Vpp across the disc, but essentially we're still talking about only a few volts.

There are times when you want to make a piezo really 'loud', that is, putting 100+ volts across the crystal to generate a big mechanical response. This is often not for audible use cases; after all, if you wanted to do that you'd just use a magnetic speaker (https://www.adafruit.com/product/1732) that can get to many, many watts of output efficiently. FYI, there are two variants of the chip: the BOS1931 (https://www.digikey.com/short/w9tz9tbj) and the BOS1921 (https://www.digikey.com/short/nnb0r29r). The '31 can only do piezo driving. The '21 can do sensing as well as driving, so it can be used for force-feedback products. In this particular EYE ON NPI we'll just be chatting about the driving capabilities of both.

So, while we can do basic sensing/beeping with a few volts, when we want significant motion for blasting sonar or moving fluid around, we can only increase the movement by increasing the peak-to-peak voltage. Each piezo you buy will have a voltage rating, and you will need a boost converter to generate that peak-to-peak. For the BOS19 series of chips, you can get ±95V, so 190Vpp max, which will drive any piezo you find, and you only need 3~5V input thanks to a built-in DC/DC boost converter.

Boréas didn't stop there. Not only do you get a booster, but also a full waveform manager with I2C/I3C control. You can fill up a FIFO buffer with waveform bytes to generate different shapes (a rough host-side sketch of this pattern follows after these notes). There's a sine generator you can control with an envelope creator. Or, you can piece together waveform shapes for different pump/haptic behavior, giving you the customizability of a byte-wise waveform generator with the simplicity of a sine generator. They even have a 'Haptic Studio' to help you craft the waveform you want (https://www.boreas.ca/pages/haptic-studio).
The BOS1931 (https://www.digikey.com/short/w9tz9tbj) and the BOS1921 (https://www.digikey.com/short/nnb0r29r) come in two packages: an easy-to-layout-and-solder QFN and a tiny-and-advanced BGA. Both have the same core, so just pick whether you need simplicity or small size. Since it's a pretty serious boost converter and driver - the piezo connects directly to the output pins - you'll need to watch your layout. Check the datasheet for their recommended setup to make sure you don't have excessive power loss or EMI. If you want to get started quickly, the BOS1921-KIT-B01 (https://www.digikey.com/short/v9hn8mcd) evaluation board will let you use their configuration software to quickly determine how your piezo actuator or sensor responds to the waveform generator and booster before you start laying out the components on a prototype PCB. If you have some serious piezo-ing you need to get moving, the Boréas Technologies' BOS1931 High-Efficiency Piezo Driver (https://www.digikey.com/short/w9tz9tbj) handles everything: voltage generation, waveform shaping, and differential driving. And best of all, it's in stock right now at Digi-Key for immediate shipment! Order today and DigiKey will pick and pack your order in an instant so that you can be vibin' with your fancy new piezo controller by tomorrow afternoon.
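To give a feel for the FIFO-fed control model described above, here is a minimal host-side sketch of streaming waveform bytes to a driver like this over I2C from a Linux single-board computer. The I2C address, register numbers and enable bit below are placeholders, not the BOS1931's actual register map; the datasheet and Boréas' dev kit software define the real values.

```python
from smbus2 import SMBus

# Placeholder values: the real 7-bit I2C address and register map come
# from the BOS1931 datasheet, not from this sketch.
DEV_ADDR = 0x2C      # hypothetical I2C address
REG_CONFIG = 0x00    # hypothetical output-enable register
REG_FIFO = 0x01      # hypothetical FIFO data register

def play_waveform(samples):
    """Stream waveform bytes into the driver's FIFO, chunked to
    respect the 32-byte SMBus block-write limit."""
    with SMBus(1) as bus:  # I2C bus 1, e.g. on a Raspberry Pi
        bus.write_i2c_block_data(DEV_ADDR, REG_CONFIG, [0x01])  # enable output (placeholder bit)
        for i in range(0, len(samples), 32):
            bus.write_i2c_block_data(DEV_ADDR, REG_FIFO, samples[i:i + 32])

# Demo data: a crude ramp. Real haptic waveforms would come from a tool
# like Boréas' Haptic Studio rather than being hand-rolled like this.
play_waveform([x % 256 for x in range(0, 1024, 8)])
```

The chunking matters because plain SMBus block writes top out at 32 bytes; a real driver would also watch the chip's FIFO-level status so the waveform never underruns mid-playback.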
Ever start a new financial goal feeling unstoppable, only to lose momentum when life gets in the way? Staying motivated isn't just about working harder—it's about working smarter. Today, Joe and OG take a deep dive into Dynamic Drive, the key to keeping your financial momentum going without burnout. We'll break down the seven essential pillars—mindset, energy, discipline, curiosity, resilience, connection, and confidence—so you're ready to hit the ground running when we welcome former sports agent Molly Fletcher on Wednesday.
At the ITA Showcase, I sat down with Bob Young of FIFO Networks to discuss an emerging challenge for utilities and telecom carriers: meeting new federal cybersecurity requirements for broadband grants. As a cybersecurity consultant for public utilities, Bob is helping organizations navigate these evolving regulations—ensuring they secure funding while protecting critical infrastructure.

Why Cybersecurity is Now a Requirement for Federal Grants
Until recently, federal broadband grants did not require cybersecurity compliance. The Rural Broadband Initiatives focused on expanding fiber networks in underserved areas without specific security mandates. However, new regulations now tie funding eligibility to cybersecurity readiness. "The cybersecurity requirements seem scary at first," Young explained. "But the reality is, they've provided a well-laid-out process with clear goals. It can be done without adding much expense at all."

What Grant Applicants Need to Know
To qualify for funding, applicants must complete:
- A Cybersecurity Assessment – identifies security risks and vulnerabilities
- A Cybersecurity Plan – details how the organization will mitigate those risks
These documents must be submitted with grant applications, ensuring recipients not only build broadband infrastructure but also protect it.

The Standards Behind These Requirements
The government's Cybersecurity Playbook draws from two key frameworks:
- NIST Cybersecurity Framework – developed by the National Institute of Standards and Technology
- CISA Cybersecurity Performance Goals (CPGs) – created by the Cybersecurity and Infrastructure Security Agency
These frameworks provide a roadmap for securing telecom and utility networks—critical as broadband infrastructure becomes part of national security.

How FIFO Networks Helps Utilities Stay Secure
FIFO Networks works with public utilities and telecom carriers to implement these security measures. Whether a company needs full-service consulting or just an extra set of eyes to support an internal CISO or IT director, Young ensures organizations comply with federal requirements while keeping costs manageable.

Beyond Compliance: Reducing Cyber Risk
Beyond meeting grant requirements, Young challenges conventional wisdom on cybersecurity, urging companies to rethink centralization and internet reliance. "Once you connect your data to the internet, you create a global attack surface," he explained. "If you connect two data centers with a dedicated private circuit instead, it costs more—but it's infinitely more secure."

Learn More
For utilities, ISPs, and telecom operators looking to secure federal grants while strengthening cybersecurity, FIFO Networks provides specialized consulting services. Visit www.fifonetworks.com
Parenting is tough—but when one parent is flying in and out for work, the challenges multiply. This week on Thriving Parenting, I'm joined by Vicky Pellowe, founder and CEO of The FIFO Family Project, to talk about the realities of FIFO (Fly-In, Fly-Out) parenting and how families can find the support they need.

Vicky, a former FIFO worker herself, understands both sides of the FIFO lifestyle—the highs, the lows, and the unexpected hurdles that come with raising a family when one parent is away for extended periods. In this conversation, we explore:
- What FIFO parenting really looks like beyond the stereotypes
- The biggest struggles for FIFO parents, partners at home, and kids
- Why isolation is the number one challenge for families and how to combat it
- The importance of community, education, and wellbeing for FIFO families
- Practical strategies to make FIFO work for your unique family situation

Vicky shares the incredible work she's doing with The FIFO Family Project—from community meetups to educational resources and upcoming support programs. Whether you're a FIFO family, know someone who is, or just want to understand the complexities of this lifestyle, this episode is packed with insights and solutions.

Want more support? I offer free sleep clarity sessions to help parents navigate stress, sleep, and connection-based changes. Reach out to see how we can work together!

Watch my free empowerment video and in less than 20 mins, gain some healthy Instagram perspective to help you to stay clear and become mindful of the Instagram vortex.

For more information on this topic, head to the show notes: Episode 54 Show Notes

And I'd love to hear your thoughts on this episode! Come and connect with me on Instagram at @sleep_thrive_grow. And click the +Follow button to never miss an episode. New episodes are released every Tuesday!

To find out more about how I can support you, visit my website here. Until next time, Thrivers!
Moving to the other side of the world is tough; we know this from our own experience making the move during a global pandemic. To help you decide whether a life down under is for you, this series aims to share the highs and the lows of the migration journey, to help and inspire you to make the move to Australia yourself.

Our guest Charlotte moved to Australia with her husband and 2 young children in search of a change of lifestyle. However, she faced challenges living in a rural country town with no support and a husband working FIFO. How did she overcome all this to remain in Australia? I guess we'll find out.
Episode 46, and I'm joined by Dan. Dan joined the RAAF in 2014 as a tech before changing to a clerk. He deployed to AMAB in 2019. He left full-time service in 2021 but continued to work with DVA as an advocate and to fulfil his reserves obligations. Now he works FIFO on the gas rigs and spends plenty of time keeping fit. Hosted on Acast. See acast.com/privacy for more information.
BROOKE MCINTOSH - Resilience mentor | Keynote speaker | Entrepreneur | Runner

“From Trauma to Triumph: How Running Across Australia Saved My Life”

Brooke ran 1,600 km across Australia in just 27 days to raise awareness for mental health. Now, she's preparing for an even greater challenge—running across the entire continent, averaging 80 km per day for 180 days.

Brooke's journey with mental health began at 12 after facing significant trauma. Instead of letting it define her, she turned her struggles into a mission—advocating for open conversations, particularly in high-stress, male-dominated FIFO (fly-in, fly-out) mining industries. She knows firsthand that breaking the silence can save lives.

Brooke's Journey: From Trauma to Triumph
Brooke's story isn't just about endurance—it's about survival and transformation. After experiencing sexual assault at 12, 14, and 24, she faced battles with anxiety, depression, and substance abuse. But instead of being consumed by pain, she chose to fight back.

A major turning point came in August 2022 when she survived a serious car accident. That wake-up call led her to redefine her purpose, and running became her way to heal—not just herself, but others struggling in silence.

Key Takeaways:
✅ The Power of Running for Mental Health – How running provides clarity, relieves stress, and builds resilience.
✅ Training & Recovery – Strength training, ice baths, visualization, and nutrition: all the strategies, tips and tricks that keep her going.
✅ Facing Fear Head-On – How surviving a life-threatening accident shaped her mental toughness and approach to PTSD.
✅ Safety While Running – Dealing with harassment on the road and essential strategies for staying safe.
✅ The Importance of Community – Why open conversations and seeking support are crucial for mental well-being.

Brooke's story is one of grit, purpose, and transformation. Whether you're a runner, someone battling inner struggles, or looking for inspiration, this episode will challenge how you think about resilience, endurance, and mental health.
The morning info - Grégory Ascher and Erika Moulet present the FIFO method, a new way to tidy your home. FIFO stands for "First in, first out". The principle: the oldest items in your wardrobe or cupboards should be sorted or cleared out first.

Winner of the day:
- French customs officers made an incredible, historic seizure: reptile teeth from Morocco, dating back 72 to 66 million years.
- A One Direction fan screamed so loudly at a concert a few years ago that she collapsed a lung and spent the night in hospital. Happily, she's fine today.

Flashback to April 1988:
- Tracy Chapman's debut album is released, with hits like "Talkin' 'bout a Revolution".
- The German film "Bagdad Café" arrives in cinemas.
- Céline Dion wins Eurovision for Switzerland with "Ne partez pas sans moi".

Useless knowledge:
- The Roman god Crepitus represents a peculiar concept: he is the god of farts, gas and flatulence.

3 things to know about Ed Sheeran

What are we testing?
- An ice rink turned into a disco supermarket in the UK, for doing your shopping on roller skates.
- An umbrella-style plant pot that collects rainwater to water your plants.

The surprise game: Adeline from Saint-Brieuc wins an all-inclusive mountain stay at one of the Villages Clubs du Soleil: full board, kids' clubs from 4 months old, hiking, mountain biking and family downtime.

The RTL2 bank:
- Audrey from Marseille wins 1,200 euros.
- Céline from Nantes wins 300 euros.
This week's EYE ON NPI is neither-here-nor-there - it's STMicroelectronics' ST25R200 NFC/HF RFID Reader IC (https://www.digikey.com/en/product-highlight/s/stmicroelectronics/st25r200-nfc-hf-rfid-reader-ic), a simple but powerful NFC/RFID reader and writer chip that will let you add a contactless interface to your next design. Thanks to the high power RF stage and dual antenna support, you can avoid the frustration of "where do I tap??" by giving your users plenty of surface area for successful transactions.

We're big fans of intuitive RFID/NFC interfaces using tags; they come in all sorts of sizes and shapes (https://www.adafruit.com/product/365), from standard business cards to microtags that can fit in a manicure (https://www.adafruit.com/product/2800). They don't require a battery, and can store up to a few KB of data, including encrypted/secured data sections so as to make the tag 'trustworthy'. They're often used for small-money transactions like copy shops, laundromats and public transport, where speed is important and we can store value on the card, or for identification like access cards. With proper design, they'll work up to 4 inches away from a reader, don't suffer from corrosion or contact wear, and aren't affected by water/humidity.

Reading and writing RFID/NFC tags, which use 13.56MHz as a carrier frequency, requires a proper chip that can handle the requirements of blasting enough RF signal to 'power' the tag, then transmitting a command and receiving the response before the quiescent power runs out. If you have a big antenna, this isn't too hard - but the real challenge is to manage it with a small antenna. That's the nice thing about the ST25R200 (https://www.digikey.com/short/5ttf9ptj) - it has powerful output drivers, so even mini wearable-sized antennas work well. You can configure the outputs to be one differential or two single-ended antenna coils. If you want to design your own PCB antenna, we recommend ST's website for NFC inductance calculations (https://eds.st.com/antenna/#/): it will let you determine the inductance based on width, height, copper thickness and trace width so you get maximum power transfer.

The ST25R200's connection to the controller is over standard 4-pin SPI, so you can use any microcontroller or microcomputer with 4 pins available. An IRQ line is also handy to 'wake on card detect'. Other than that, the interface is fairly low level: registers are used to configure the RF section and encoding, but otherwise data is transmitted or received via two FIFO buffers (a minimal host-side sketch of this pattern follows below). This makes the chip easy to adapt to the various sub-protocols and standards (https://nfc-forum.org/build/specifications) designed by competing RFID companies: ISO14443A/NFC-A, ISO14443B/NFC-B, ISO15693/NFC-V, NFC Forum T1T, T2T, T4T, and T5T tag types, and proprietary protocols such as Kovio, CTS, and B'. In order to make your life easier when it comes to implementation, ST has released RFAL, an RF/NFC abstraction layer (https://www.st.com/en/embedded-software/stsw-st25rfal004.html), written in pure C so that it can be ported to any platform or compiler. To get started quickly we recommend the STEVAL-25R200SA evaluation board (https://www.digikey.com/en/products/detail/stmicroelectronics/STEVAL-25R200SA/25701817), which comes with a USB debug STLink interface, SMT-able module, 4 pluggable antenna options including one flex PCB printed antenna, and two micro-tags for testing.
If you want to integrate RFID/NFC 'touchless' support into your next design, the ST ST25R200 NFC/HF RFID Reader IC (https://www.digikey.com/short/5ttf9ptj) is small, inexpensive, and fast to get started with, thanks to minimal external components and ready-to-go drivers. And best of all, the chips are in stock right now at DigiKey for immediate shipment. Order the ST25R200 (https://www.digikey.com/short/5ttf9ptj) and an eval board today and you can tap your way to contactless communication by tomorrow afternoon!
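For a feel of how little host-side plumbing that 4-wire SPI interface needs, here's a minimal register-read sketch using Python's spidev on a Linux host. The read-flag bit and register address are placeholders rather than the ST25R200's actual protocol; ST's RFAL library and the datasheet define the real framing.

```python
import spidev

# Placeholder protocol details: the ST25R200 datasheet and ST's RFAL
# library define the real command framing and register map.
READ_FLAG = 0x40          # hypothetical "register read" bit
REG_IC_IDENTITY = 0x3F    # hypothetical identity register address

spi = spidev.SpiDev()
spi.open(0, 0)            # SPI bus 0, chip-select 0
spi.max_speed_hz = 1_000_000
spi.mode = 0b01           # SPI mode is an assumption; check the datasheet

def read_register(reg):
    """Read one register: clock out address-with-read-flag, clock in one byte."""
    resp = spi.xfer2([reg | READ_FLAG, 0x00])
    return resp[1]

# Reading an ID register like this is the usual first smoke test
# before moving on to FIFO loads and RF transactions.
print(hex(read_register(REG_IC_IDENTITY)))
```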
Heather lives just outside Mareeba in Far North Queensland with her husband Jack and their 15-week-old daughter Paige. After being diagnosed with PCOS, Heather's journey to conception took 15 months, involving lifestyle changes and medication to help regulate her cycles. _________ Sleep more comfortably with Sleepybelly, the breakthrough pregnancy pillow that supports your belly and back to prevent back sleeping and ensure restful nights. Get $10 off with our code ABS10 - Learn MoreSee omnystudio.com/listener for privacy information.
Indigenous community members say the abuse is derogatory and demeaning, and similar to what was spruiked during the Indigenous Voice to Parliament referendum.
I read from fiend to FIFO. The word of the episode is "fiercely".

Use my special link https://zen.ai/thedictionary to save 30% off your first month of any Zencastr paid plan. Create your podcast today! #madeonzencastr

Theme music from Tom Maslowski https://zestysol.com/
Merchandising! https://www.teepublic.com/user/spejampar

"The Dictionary - Letter A" on YouTube
"The Dictionary - Letter B" on YouTube
"The Dictionary - Letter C" on YouTube
"The Dictionary - Letter D" on YouTube
"The Dictionary - Letter E" on YouTube
"The Dictionary - Letter F" on YouTube

Featured in a Top 10 Dictionary Podcasts list! https://blog.feedspot.com/dictionary_podcasts/

Backwards Talking on YouTube: https://www.youtube.com/playlist?list=PLmIujMwEDbgZUexyR90jaTEEVmAYcCzuq

https://linktr.ee/spejampar
dictionarypod@gmail.com
https://www.facebook.com/thedictionarypod/
https://www.threads.net/@dictionarypod
https://twitter.com/dictionarypod
https://www.instagram.com/dictionarypod/
https://www.patreon.com/spejampar
https://www.tiktok.com/@spejampar
917-727-5757
Kyle Hency brings years of entrepreneurial expertise to the world of ecommerce and operational efficiency. He is the Co-Founder and former CEO of Chubbies, Co-Founder at Loop Returns, and now Co-Founder & CEO of GoodDay Software, where he's redefining ERPs for modern Shopify brands.

Kyle's experience spans building beloved DTC brands to scaling SaaS solutions. His time at Chubbies and Loop Returns has equipped him with unparalleled insights into brand growth, team leadership, and operational optimization.

With a passion for empowering businesses, Kyle specializes in creating tools that streamline processes, boost resilience, and help brands thrive in today's competitive landscape. His journey reflects a commitment to innovation, entrepreneurship, and delivering impactful solutions that drive success.

In This Conversation We Discuss:
[00:40] Intro
[01:19] Focusing on early internet innovations
[03:23] Establishing values through collaborative processes
[05:05] Focusing on why customers engage with your brand
[06:40] Balancing innovation with retail fundamentals
[08:40] Simplifying customer service with self-service tools
[10:23] Expanding Loop's reach with new customers
[11:38] Turning a problem into a new business opportunity
[13:11] Moving from spreadsheets to systems
[17:47] Overcoming ERP complexities with streamlined solutions
[19:45] Tailoring solutions for Shopify-centric operations
[21:59] Empowering teams with centralized data access
[24:04] Helping early-stage brands with scalable systems
[25:48] Partnering to simplify warehouse management systems
[27:40] Preparing for challenges with a resilience-first mindset

Resources:
Subscribe to Honest Ecommerce on Youtube
A unified, reliable retail OS gooddaysoftware.com/
Proper length men's shorts & more chubbiesshorts.com/
Returns management for Ecommerce brands loopreturns.com/
Follow Kyle Hency linkedin.com/in/khency

If you're enjoying the show, we'd love it if you left Honest Ecommerce a review on Apple Podcasts. It makes a huge impact on the success of the podcast, and we love reading every one of your reviews!
Loneliness, trust issues, cheating, and casual hook-ups with colleagues - these are just a few of your experiences dating FIFO workers. For this episode, Dave Marchese from the Hack podcast brings us your stories.

DM us your thoughts, questions, topics, or to just vent at @triplejthehookup on IG or email us: thehookup@abc.net.au

The Hook Up is an ABC podcast, produced by triple j. It is recorded on the lands of the Wurundjeri people of the Kulin nation. We pay our respects to elders past and present. We acknowledge Aboriginal and Torres Strait Islander peoples as the First Australians and Traditional Custodians of the land where we live, work, and learn.
This week on the podcast, I'm joined by FIFo Toa. We chat about the call he answered from New Zealand to come help fill the labour gap in Australia's mining industry. We discuss the way he's using his social media as a tool to help others find work, and how recruitment agencies have taken notice. He schools me on the difference between an Aussie and a Kiwi. #australia #newzealand #undergroundmining #mining #oilandgas #offshore #bluecollar #fifo #podcast #podcasting
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (which we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

The single most requested domain was computer vision, and we could think of no one better to help us recap 2024 than our friends at Roboflow, who were one of our earliest guests in 2023 and had one of this year's top episodes in 2024 again. Roboflow has since raised a $40m Series B!

Links

Their slides are here:

All the trends and papers they picked:

* Isaac Robinson
  * Sora (see our Video Diffusion pod) - extending diffusion from images to video
  * SAM 2: Segment Anything in Images and Videos (see our SAM2 pod) - extending prompted masks to full video object segmentation
  * DETR Dominancy: DETRs show Pareto improvement over YOLOs
    * RT-DETR: DETRs Beat YOLOs on Real-time Object Detection
    * LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection
    * D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement
* Peter Robicheaux
  * MMVP (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs)
  * Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)
  * PaliGemma / PaliGemma 2
    * PaliGemma: A versatile 3B VLM for transfer
    * PaliGemma 2: A Family of Versatile VLMs for Transfer
  * AIMv2 (Multimodal Autoregressive Pre-training of Large Vision Encoders)
* Vik Korrapati - Moondream

Full Talk on YouTube

Want more content like this? Like and subscribe to stay updated on our latest talks, interviews, and podcasts.

Transcript/Timestamps

[00:00:00] Intro

[00:00:05] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. When we were thinking of ways to add value to our academic conference coverage, we realized that there was a lack of good talks just recapping the best of 2024, going domain by domain.

[00:00:36] AI Charlie: We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field. 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our second featured keynote is The Best of Vision 2024, with Peter Robicheaux and Isaac [00:01:00] Robinson of Roboflow, with a special appearance from Vik Korrapati of Moondream.

[00:01:05] AI Charlie: When we did a poll of our attendees, the highest interest domain of the year was vision. And so our first port of call was our friends at Roboflow. Joseph Nelson helped us kickstart our vision coverage in episode 7 last year, and this year came back as a guest host with Nikhila Ravi of Meta to cover Segment Anything 2.

[00:01:25] AI Charlie: Roboflow have consistently been the leaders in open source vision models and tooling, with their Supervision library recently eclipsing PyTorch's Vision library
and Roboflow Universe hosting hundreds of thousands of open source vision datasets and models. They have since announced a 40 million Series B led by Google Ventures.

[00:01:46] AI Charlie: Woohoo.

[00:01:48] Isaac's picks

[00:01:48] Isaac Robinson: Hi, we're Isaac and Peter from Roboflow, and we're going to talk about the best papers of 2024 in computer vision. So, for us, we defined best as what made [00:02:00] the biggest shifts in the space. And to determine that, we looked at what are some major trends that happened and what papers most contributed to those trends.

[00:02:09] Isaac Robinson: So I'm going to talk about a couple trends, Peter's going to talk about a trend, and then we're going to hand it off to Moondream. So, the trends that I'm interested in talking about are a major transition from models that run on a per-image basis to models that run using the same basic ideas on video, and then also how DETRs are starting to take over the real-time object detection scene from the YOLOs, which have been dominant for years.

[00:02:37] Sora, OpenSora and Video Vision vs Generation

[00:02:37] Isaac Robinson: So as a highlight we're going to talk about Sora, which from my perspective is the biggest paper of 2024, even though it came out in February. Is the what?

[00:02:48] Isaac Robinson: Yeah. Yeah. So, Sora is just a post. So I'm going to fill it in with details from replication efforts, including Open-Sora and related work, such as Stable [00:03:00] Video Diffusion. And then we're also going to talk about SAM 2, which applies the SAM strategy to video, and then the improvements in 2024 to DETRs that are making them a Pareto improvement over YOLO-based models.

[00:03:15] Isaac Robinson: So to start this off, we're going to talk about the state of the art of video generation at the end of 2023: MagViT. MagViT is a discrete-token video tokenizer akin to VQGAN, but applied to video sequences. And it actually outperforms state-of-the-art handcrafted video compression frameworks

[00:03:38] Isaac Robinson: in terms of the bit rate versus human preference for quality, and videos generated by autoregressing on these discrete tokens generate some pretty nice stuff, but up to like five seconds length and, you know, not super detailed. And then suddenly a few months later we have this, which when I saw it, was totally mind-blowing to me.

[00:03:59] Isaac Robinson: 1080p, [00:04:00] a whole minute long. We've got light reflecting in puddles. That's reflective. Reminds me of those RTX demonstrations for next-generation video games, such as Cyberpunk, but with better graphics. You can see some issues in the background if you look closely, but as with a lot of these models, the issues tend to be things that people aren't going to pay attention to unless they're looking for them.

[00:04:24] Isaac Robinson: In the same way that, like, six fingers on a hand is a giveaway you're not going to notice unless you're looking for it. So yeah, as we said, Sora does not have a paper. So we're going to be filling it in with context from the rest of the computer vision scene attempting to replicate these efforts. So the first step: you have an LLM caption a huge amount of videos.

[00:04:48] Isaac Robinson: This is a trick that they introduced in DALL·E 3, where they train an image captioning model to just generate very high quality captions for a huge corpus and then train a diffusion model [00:05:00] on that.
Sora and the replication efforts also show a bunch of other steps that are necessary for good video generation.

[00:05:09] Isaac Robinson: Including filtering by aesthetic score and filtering by making sure the videos have enough motion, so the generator isn't learning to just generate static frames. So, then we encode our video into a series of space-time latents. Once again, Sora: very sparse in details.

[00:05:29] Isaac Robinson: So in the replication-related works, Open-Sora actually uses a MagViT-v2 itself to do this, but swapping out the discretization step with a classic VAE autoencoder framework. They show that there's a lot of benefit from getting the temporal compression, which makes a lot of sense, as sequential frames in videos have mostly redundant information.

[00:05:53] Isaac Robinson: So by compressing in the temporal space, you allow the latent to hold [00:06:00] a lot more semantic information while avoiding that duplication. So, we've got our spacetime latents, possibly via some 3D VAE, presumably a MagViT-v2, and then you throw it into a diffusion transformer.

[00:06:19] Isaac Robinson: So I think it's personally interesting to note that Open-Sora is using a MagViT-v2, which originally used an autoregressive transformer decoder to model the latent space, but is now using a diffusion transformer. So it's still a transformer happening. The question is just: is it parameterizing the stochastic differential equation, or parameterizing a conditional distribution via autoregression? It's also worth noting that most diffusion models today, the very high performance ones, are switching away from the classic DDPM denoising diffusion probabilistic modeling framework to rectified flows.

[00:06:57] Isaac Robinson: Rectified flows have a very interesting property: as [00:07:00] they converge, they actually get closer to being able to be sampled with a single step, which means that in practice you can actually generate high quality samples much faster. The major problem of DDPM and related models for the past four years is just that they require many, many steps to generate high quality samples.

[00:07:22] Isaac Robinson: So, naturally, the third step is throwing lots of compute at the problem. I never figured out how to get this video to loop, but we see very little compute, medium compute, lots of compute. This is so interesting because the original diffusion transformer paper from Facebook actually showed that, in fact, the specific hyperparameters of the transformer didn't really matter that much.

[00:07:48] Isaac Robinson: What mattered was that you were just increasing the amount of compute that the model had. So, I love how in the, once again, little blog post, they don't even talk about [00:08:00] the specific hyperparameters. They say, we're using a diffusion transformer, and we're just throwing more compute at it, and this is what happens.

[00:08:08] Isaac Robinson: OpenSora shows similar results. The primary issue I think here is that no one else has a 32x compute budget. So we end up in the middle of the domain in most of the related work, which is still super, super cool. It's just a little disappointing considering the context.
So I think this is a beautiful extension of the framework that was introduced in '22 and '23 for very high quality per-image generation, extending that to videos.

[00:08:39] Isaac Robinson: It's awesome. And it's GA as of Monday, except no one can seem to get access to it because they keep shutting down the login.

[00:08:46] SAM and SAM2

[00:08:46] Isaac Robinson: The next paper I wanted to talk about is SAM. So we at Roboflow allow users to label data and train models on that data. SAM, for us, has saved our users 75 years of [00:09:00] labeling time.

[00:09:00] Isaac Robinson: We are, to the best of my knowledge, the largest SAM API that exists. SAM also allows us to have our users train just pure bounding-box regression models and use those to generate high quality masks, which has the great side effect of requiring less training data to have a meaningful convergence.

[00:09:20] Isaac Robinson: So most people are data-limited in the real world, so anything that requires less data to get to a useful thing is super useful. Many of our users actually run their per-frame object detectors on every frame in a video. And so SAM 2 falls into this category of taking something that really, really works and applying it to video, which has the wonderful benefit of being plug-and-play with many of our users' use cases.

[00:09:53] Isaac Robinson: We're still building out a sufficiently mature pipeline to take advantage of that, but it's in the works. [00:10:00] So here we've got a great example. We can click on cells and then follow them. You even notice the cell goes away and comes back, and we can still keep track of it, which is very challenging for existing object trackers.

[00:10:14] Isaac Robinson: High-level overview of how SAM 2 works: there's a simple pipeline here where we can provide some type of prompt, and it fills out the rest of the likely masks for that object throughout the rest of the video. So here we're giving a bounding box in the first frame, a set of positive/negative points, or even just a simple mask.

[00:10:36] Isaac Robinson: I'm going to assume people are somewhat familiar with SAM, so I'm going to just give a high-level overview of how SAM works. You have an image encoder that runs on every frame. SAM 2 can be used on a single image, in which case the only difference between SAM 2 and SAM is the image encoder: where SAM used a standard ViT, [00:11:00] SAM 2 replaced that with a Hiera hierarchical encoder, which gets approximately the same results but leads to six times faster inference, which is

[00:11:11] Isaac Robinson: excellent, especially considering how a trend of '23 was replacing the ViT with more efficient backbones. In the case where you're doing video segmentation, the difference is that you actually create a memory bank, and you cross-attend the features from the image encoder based on the memory bank.

[00:11:31] Isaac Robinson: I'll go more into it in a couple of slides, but essentially we take the features from the past couple frames, plus a set of object pointers and the set of prompts, and use that to generate our new masks. Then we fuse the new masks for this frame with the

[00:11:57] Isaac Robinson: image features and add that to the memory bank. [00:12:00] I'll say more in a minute.
Just like SAM, SAM 2 actually uses a data engine to create its dataset: they assembled a huge amount of reference data, used people to label some of it, trained the model, used the model to label more of it, and asked people to refine the model's predictions.[00:12:20] Isaac Robinson: And then ultimately the dataset is just created from the final output of the model on the reference data. It's very interesting. This paradigm is so interesting to me because it unifies a model and a dataset in a way that is very unique. It seems unlikely that another model could come in and have such a tight coupling.[00:12:37] Isaac Robinson: So, a brief overview of how the memory bank works. The paper did not have a great visual, so I'm going to fill in a bit more. We take the last couple of frames from our video and attend to them, along with the set of prompts that we provided — which could come from the future, [00:13:00] or from anywhere in the video — as well as reference object pointers saying, by the way, here's what we've found so far. Attending to the last few frames has the interesting benefit of allowing it to model complex object motion without attending to the entire video.[00:13:18] Isaac Robinson: By limiting the number of frames that you attend to, you manage to keep the model running in real time. This is such an interesting topic for me, because one would assume that attending to all of the frames, or having some type of summarization of all the frames, is essential for high performance.[00:13:35] Isaac Robinson: But we see in their later ablation that that actually is not the case. So here, just to make sure that there is some benchmarking happening, they compare to some of the prior work, and indeed the SAM 2 strategy does improve on the state of the art. This ablation deep in their appendices was super interesting to me.[00:13:59] Isaac Robinson: [00:14:00] We see in section C the number of memories. One would assume that increasing the count of memories would meaningfully increase performance. We see that it has some impact, but not the kind you'd expect, and it meaningfully decreases speed, which justifies, in my mind, just having this FIFO queue of memories.[00:14:20] Isaac Robinson: Although in the future, I'm super interested to see a more dedicated summarization of the whole preceding video, not just a stacking of the last frames. So that's another extension of beautiful per-frame work into the video domain.[00:14:42] Realtime detection: DETRs > YOLO[00:14:42] Isaac Robinson: The next trend I'm interested in talking about: at Roboflow, we're super interested in training real-time object detectors.[00:14:50] Isaac Robinson: Those are our bread and butter, and so we're doing a lot to keep track of what is actually happening in that space. We are finally starting to see something change. So, [00:15:00] for years, YOLOs have been the dominant way of doing real-time object detection, and we can see here that they've essentially stagnated.[00:15:08] Isaac Robinson: The performance between YOLOv10 and YOLOv11 is not meaningfully different, at least in this type of high level chart, and even across the last couple of series there's not a major change. So YOLOs have hit a plateau; DETRs have not. We can look here and see the YOLO series has this plateau.
And then RT-DETR, LW-DETR, and D-FINE have meaningfully changed that plateau, so that in fact the best D-FINE models are +4.6 AP on COCO at the same latency.[00:15:43] Isaac Robinson: So, three major steps to accomplish this. The first, RT-DETR — which is technically a 2023 preprint but published officially in 2024, so I'm going to include it, I hope that's okay — [00:16:00] showed that we could actually match or out-speed YOLOs.[00:16:04] Isaac Robinson: Then LW-DETR showed that pre-training is hugely effective on DETRs and much less so on YOLOs. And then D-FINE added the types of bells and whistles that we expect in this arena. The major improvement RT-DETR showed was taking the multi-scale features that DETRs typically pass into their encoder and decoupling them into a much more efficient transformer encoder.[00:16:30] Isaac Robinson: The transformer is, of course, quadratic complexity, so decreasing the amount of stuff you pass in at once is super helpful for increasing your throughput. That change basically brought us up to YOLO speed, and then they do a hardcore analysis on benchmarking YOLOs, including the NMS step.[00:16:54] Isaac Robinson: Once you include NMS in the latency calculation, you see that in fact these DETRs [00:17:00] are outperforming, at least at this point, the YOLOs that existed. Then LW-DETR goes in and suggests that, in fact, the huge boost here is from pre-training. So, this is the D-FINE line, and this is the D-FINE line without pre-training.[00:17:19] Isaac Robinson: It's within range, it's still an improvement over the YOLOs, but the really huge boost comes from the benefit of pre-training. When YOLOX came out in 2021, it showed much better results from a much, much longer training time, but they found that when they did that, they actually no longer benefited from pre-training.[00:17:40] Isaac Robinson: So, you see in this graph from LW-DETR that YOLOs do have a real benefit from pre-training, but it goes away as you increase the training time. The DETRs, meanwhile, converge much faster: LW-DETR trains for only 50 epochs, RT-DETR for 60. So one could assume that, in fact, [00:18:00] the entire extra gain from pre-training is that you're not destroying your original weights by relying on a long training cycle.[00:18:06] Isaac Robinson: LW-DETR also shows superior performance on our favorite dataset, Roboflow 100, which means they do better on the real world, not just on COCO. Then D-FINE throws all the bells and whistles at it. YOLO models tend to have a lot of very specific, complicated loss functions;[00:18:26] Isaac Robinson: D-FINE brings that into the DETR world and shows consistent improvement across a variety of DETR-based frameworks. So bring these all together, and we see that suddenly we have almost 60 AP on COCO while running in something like 10 milliseconds. Huge, huge stuff. So we're spending a lot of time trying to build models that work better with less data, and DETRs are clearly becoming a promising step in that direction.[00:18:56] Isaac Robinson: What we're interested in seeing [00:19:00] from the DETRs next: Co-DETR and the models currently sitting on top of the leaderboard for large-scale inference scale really well as you switch out the backbone.
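That NMS point is easy to appreciate with a rough timing harness. The `model` below is any hypothetical callable returning raw boxes and scores; the takeaway is that a fair YOLO-vs-DETR comparison has to include the NMS pass that YOLOs need and DETRs skip.

```python
import time
import torch
from torchvision.ops import nms

def end_to_end_latency(model, image, iou_thresh=0.65, use_nms=True, iters=50):
    """Average end-to-end latency in ms, optionally including NMS.

    `model` is any callable returning (boxes [N, 4], scores [N]).
    For a YOLO-style detector, pass use_nms=True; DETRs emit a fixed
    small set of boxes directly, so use_nms=False.
    """
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        boxes, scores = model(image)
        if use_nms:                      # YOLO-style: prune overlapping boxes
            keep = nms(boxes, scores, iou_thresh)
            boxes, scores = boxes[keep], scores[keep]
        if torch.cuda.is_available():
            torch.cuda.synchronize()     # don't let async kernels hide cost
    return (time.perf_counter() - start) / iters * 1000
```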
We're very interested in seeing someone publish a paper, potentially us, on what happens if you take these real-time ones and throw a Swin at them.[00:19:23] Isaac Robinson: Like, do we have a Pareto curve that extends from the real-time domain all the way up to the super, super slow but high performance domain? We also want to see people benchmark on RF100 more, because that type of data is what's relevant for most users. And we want to see more pre-training, because pre-training works now.[00:19:43] Isaac Robinson: It's super cool.[00:19:48] Peter's Picks[00:19:48] Peter Robicheaux: Alright, so in that theme, one of the big things we're focusing on is how do we get more out of our pre-trained models. And one of the lenses to look at this through is [00:20:00] this new requirement for fine-grained visual detail in the representations extracted from your foundation model.[00:20:08] Peter Robicheaux: So that's the hook for this. Oh yeah, this is just a list of all the papers I'm going to mention; I just want to make sure I cite the actual papers so you can find them later.[00:20:18] MMVP (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs)[00:20:18] Peter Robicheaux: So the big hook here is that I make the claim that LLMs can't see. If you go to Claude or ChatGPT and ask it to look at this watch and tell me what time it is, it fails, right?[00:20:34] Peter Robicheaux: This is a very classic test of an LLM, but you could say, okay, maybe the image is too zoomed out, and it'll do better if we increase the resolution so it has an easier time finding fine-grained features like where the watch hands are pointing.[00:20:53] Peter Robicheaux: No dice. And you could say, okay, well, maybe the model just doesn't know how to tell time from the position of the hands. But if you actually prompt [00:21:00] it textually, it's very easy for it to tell the time. So this, to me, is proof that these LLMs literally cannot see the position of the watch hands; they can't see those details.[00:21:08] Peter Robicheaux: So the question is sort of why? And for you Anthropic heads out there, Claude fails too. So my first pick for best paper of 2024 in vision is this MMVP paper, which tries to investigate why LLMs don't have the ability to see fine-grained details. For instance, it comes up with a lot of images like this, where you ask a question that seems very visually apparent to us, like: which way is the school bus facing?[00:21:32] Peter Robicheaux: And it gets it wrong, and then, of course, makes up details to support its wrong claim. The process by which it finds these images is contained in its hypothesis for why it can't see these details: it hypothesizes that models that have been initialized with CLIP as their vision encoder don't have fine-grained details in the features extracted using CLIP, because CLIP doesn't need to find these fine-grained [00:22:00] details to do its job correctly, which is just to match captions and images, right?[00:22:04] Peter Robicheaux: And at a high level, even if ChatGPT's vision encoder wasn't initialized with CLIP and wasn't trained contrastively at all —
still, in order to do its job of captioning the image, it could do a pretty good job without actually finding the exact position of all the objects and visual features in the image, right?[00:22:21] Peter Robicheaux: So this paper finds a set of difficult images for these types of models, and the way it does it is it looks for embeddings that are similar in CLIP space but far apart in DINOv2 space. DINOv2 is a foundation model that was trained self-supervised, purely on image data. It uses a somewhat complex student-teacher framework, but essentially it masks or crops out certain areas of the image and tries to make sure those have consistent representations, which is a way for it to learn very fine-grained visual features.[00:22:54] Peter Robicheaux: And so if you take things that are very close in CLIP space and very far in DINOv2 space, you get [00:23:00] pairs of images that are hard for ChatGPT and other big vision language models to distinguish. So if you then ask questions about such an image, well, as you can see from this chart, it's going to answer the same way for both images, right?[00:23:14] Peter Robicheaux: Because from the perspective of the vision encoder, they're the same image. And so if you ask a question like, how many eyes does this animal have, it answers the same for both. And all these other models, including LLaVA, do the same thing, right? And so this is the benchmark that they create: finding CLIP-blind pairs, which are pairs of images that are similar in CLIP space, and creating a dataset of multiple-choice questions based off of those.[00:23:39] Peter Robicheaux: And so how do these models do? Really bad. ChatGPT and Gemini do a little bit better than random guessing, but at around half the performance of humans, who find these problems very easy. LLaVA, interestingly, is extremely negatively correlated with this dataset. It does much, much worse [00:24:00] than random guessing, which means this process has done a very good job of identifying hard images for LLaVA specifically.[00:24:07] Peter Robicheaux: And that's because LLaVA is basically not trained for very long and is initialized from CLIP, so you would expect it to do poorly on this dataset. So, one of the proposed solutions the paper attempts is to say: okay, if CLIP features aren't enough, what if we train the visual encoder of the language model also on DINO features?[00:24:27] Peter Robicheaux: It proposes two different ways of doing this. One is additive, which is basically interpolating between the two features; the other is interleaving, which is training on the combination of both features. There's a really interesting trend when you do the additive mixture of features.[00:24:45] Peter Robicheaux: So zero is all CLIP features and one is all DINOv2 features. I think it's helpful to look at the rightmost chart first: as you increase the share of DINOv2 features, your model does worse and worse [00:25:00] on the actual language modeling task. And that's because DINOv2 features were trained in a completely self-supervised manner, completely in image space.[00:25:08] Peter Robicheaux: It knows nothing about text. These features aren't really compatible with these text models.
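To make the pair-mining procedure described above concrete, here's a toy version. It assumes you've already computed L2-normalized CLIP and DINOv2 embeddings for the same pool of images; the thresholds are illustrative, not the paper's exact settings.

```python
import torch

def mine_clip_blind_pairs(clip_emb, dino_emb, clip_min=0.95, dino_max=0.6):
    """Find image pairs similar to CLIP but dissimilar to DINOv2.

    clip_emb, dino_emb: [N, D] L2-normalized embeddings of the same N
    images under each encoder. Thresholds are illustrative only.
    """
    clip_sim = clip_emb @ clip_emb.T      # cosine similarity (normalized)
    dino_sim = dino_emb @ dino_emb.T
    mask = (clip_sim > clip_min) & (dino_sim < dino_max)
    mask = torch.triu(mask, diagonal=1)   # each unordered pair once, no self-pairs
    return mask.nonzero(as_tuple=False)   # [num_pairs, 2] image index pairs
```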
Because of that incompatibility, you can train an adapter all you want, but it seems these features are in such an alien language that it's a very hard optimization problem for these models to solve. And that supports what's happening on the left, which is that, yes, it gets better at answering these questions as you include more DINOv2 features, up to a point — but when you oversaturate, it completely loses its ability to[00:25:36] Peter Robicheaux: answer language and do language tasks. You can also see that with the interleaving, they essentially double the number of tokens going into these models and just train on both, and it still doesn't really solve the MMVP task. It gets LLaVA 1.5 above random guessing by a little bit, but it's still not close to ChatGPT or, obviously, any human performance.[00:25:59] Peter Robicheaux: [00:26:00] So clearly this proposed solution of just using DINOv2 features directly isn't going to work, and basically what that means is that, as a vision foundation model, DINOv2 is going to be insufficient for language tasks, right?[00:26:14] Florence-2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)[00:26:14] Peter Robicheaux: So my next pick for best paper of 2024 would be Florence-2, which tries to solve this problem by incorporating not only this dimension of spatial hierarchy, which is to say pixel-level understanding, but also making sure to include what they call semantic granularity. The goal is basically to have features that are sufficient for finding objects in the image — so they have enough pixel information — but that can also be talked about and reasoned about.[00:26:44] Peter Robicheaux: That's the semantic granularity axis. So here's an example of the three different paradigms of labeling that they do. They create a big dataset. One is text, which is just captioning; and you would expect a model trained [00:27:00] only on captioning to perform similarly to ChatGPT — to not have spatial hierarchy, not have features that are meaningful at the pixel level.[00:27:08] Peter Robicheaux: So they add another type, region-text pairs, which is essentially either classifying a region, doing object detection or instance segmentation on that region, or captioning that region. And then they have text-phrase-region annotations, which are essentially triples: not only do you have a region that you've described, you also find its place in a descriptive paragraph about the image, which is trying to introduce even more semantic understanding of those regions.[00:27:39] Peter Robicheaux: For instance, if you're saying a woman riding on the road, you have to know what a woman is and what the road is and that she's on top of it. That's composing a bunch of objects in visual space while also thinking about it semantically, right? And the way they do this is they just dump features from a vision encoder [00:28:00] straight into an encoder-decoder transformer.[00:28:03] Peter Robicheaux: Then they train a bunch of different tasks, like object detection and so on, as language tasks. And I think that's one of the big things we saw in 2024: these vision language models operating on pixel space linguistically. So they introduce a bunch of new tokens to point to locations.
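The location-token trick is simple to sketch: quantize each box coordinate into one of K bins and emit it as a special token, so detection becomes plain next-token prediction. The `<loc_####>` format and bin count below are illustrative only; Florence-2 and PaliGemma each define their own exact vocabularies.

```python
def box_to_loc_tokens(box, img_w, img_h, num_bins=1000):
    """Serialize an (x1, y1, x2, y2) pixel box as location-token strings.

    The detector's "output head" is then just the language model emitting
    these tokens. Format and bin count are illustrative, not the exact
    Florence-2 / PaliGemma vocabulary.
    """
    x1, y1, x2, y2 = box
    norm = [x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h]
    bins = [min(int(v * num_bins), num_bins - 1) for v in norm]
    return [f"<loc_{b:04d}>" for b in bins]

# e.g. on a 640x480 image:
# box_to_loc_tokens((64, 48, 320, 240), 640, 480)
# -> ['<loc_0100>', '<loc_0100>', '<loc_0500>', '<loc_0500>']
```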
[00:28:22] Peter Robicheaux: So how does it work? How does it actually do? If you look at the graph on the right, which is using the DINO detection framework, your pre-trained Florence-2 models transfer very, very well. They get 60 mAP on COCO, which is approaching state of the art, and they[00:28:43] Peter Robicheaux: train much more efficiently.[00:28:47] Peter Robicheaux: They converge a lot faster, and both of these things point to the fact that they're actually leveraging their pre-trained weights effectively. So where is it falling short? These models — I forgot to mention, Florence-2 comes in 0.2 billion and 0.7 billion [00:29:00] parameter counts, so they're very, very small in terms of being a language model.[00:29:05] Peter Robicheaux: And I think in this framework you can see saturation. What this graph is showing is that if you train a Florence-2 model purely on the image-level and region-level annotations, not including the pixel-level annotations like segmentation, it actually performs better as an object detector.[00:29:25] Peter Robicheaux: And what that means is that it's not able to learn all the visual tasks it's trying to learn, because it doesn't have enough capacity.[00:29:32] PaliGemma / PaliGemma 2[00:29:32] Peter Robicheaux: So I'd like to see this paper explore larger model sizes, which brings us to our next big paper of 2024 — or two papers. PaliGemma came out earlier this year;[00:29:42] Peter Robicheaux: PaliGemma 2 was released, I think, a week or two ago. Oh, I forgot to mention: you can label datasets on Roboflow and train a Florence-2 model, and you can actually train a PaliGemma 2 model on Roboflow, which we got into the platform within about 14 hours of release, which I was really excited about.[00:29:59] Peter Robicheaux: So anyway, [00:30:00] PaliGemma is essentially doing the same thing, but instead of an encoder-decoder, it just dumps everything into a decoder-only transformer model. It also introduced the concept of location tokens to point to objects in pixel space. PaliGemma uses Gemma as the language model, specifically Gemma 2B;[00:30:17] Peter Robicheaux: PaliGemma 2 introduces multiple different sizes of language model. The way they get around having to do encoder-decoder is they use the concept of prefix loss, which basically means that when it's generating tokens autoregressively, all the tokens in the prefix — the image it's looking at and a description of the task it's trying to do —[00:30:41] Peter Robicheaux: attend to each other fully, with full attention. Which means it's easier for the prefix to color the output of the suffix, and to find features easily.
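That prefix-loss attention pattern is easy to write down: prefix positions (image tokens plus the task description) attend to each other bidirectionally, while suffix positions attend causally to everything before them. A minimal construction:

```python
import torch

def prefix_lm_mask(prefix_len, total_len):
    """Boolean attention mask: True = query position i may attend to key j.

    The prefix block is fully bidirectional; the generated suffix is
    causal, as in the prefix loss described above.
    """
    mask = torch.tril(torch.ones(total_len, total_len, dtype=torch.bool))
    mask[:prefix_len, :prefix_len] = True   # full attention inside the prefix
    return mask

# prefix_lm_mask(3, 5) ->
# 1 1 1 0 0
# 1 1 1 0 0
# 1 1 1 0 0
# 1 1 1 1 0
# 1 1 1 1 1
```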
So this is sort of [00:31:00] an example of one of the tasks it was trained on: you describe the task in English — here you're asking it to segment two classes of objects — and then it finds their locations using these location tokens, and it finds their masks using some encoding of the masks into tokens.[00:31:24] Peter Robicheaux: And, yeah, one of my critiques of PaliGemma 1, at least, is that you find performance saturates as a pre-trained model after only 300 million examples seen. What this graph is representing is: each blue dot is performance on some downstream task, and you can see that after seeing 300 million examples, it does about equally well on all of the downstream tasks they tried it on — which was a lot — as it does at 1 billion examples, which to me also suggests a lack of capacity for this model.[00:31:58] Peter Robicheaux: PaliGemma 2, [00:32:00] you can see the results on object detection — these were transferred to COCO — and this also points to an increase in capacity being helpful to the model. As both the resolution and the parameter count of the language model increase, performance increases.[00:32:16] Peter Robicheaux: Resolution makes sense, obviously: it helps to find small objects in the image. But it also makes sense for another reason, which is that it kind of gives the model a thinking register; it gives it more tokens to process when making its predictions. But yeah, you could say, oh, 43.6, that's not that great — Florence-2 got 60. But this is not training a DINO or a DETR head on top of the image encoder; it's doing detection as a raw language modeling task on COCO. So it doesn't have any of the bells and whistles, any of the fancy losses; it doesn't even have bipartite graph matching or anything like that.[00:32:52] Peter Robicheaux: Okay, the big result, and one of the reasons I was really excited about this paper, is that they blow everything else away [00:33:00] on MMVP. I mean, 47.3 — sure, that's nowhere near human accuracy, which, again, is 94% — but for a 2 billion parameter language model to beat ChatGPT, that's quite the achievement.[00:33:12] Peter Robicheaux: And that brings us to our final pick for paper of the year, which is AIMv2. AIMv2 says: okay, maybe coming up with all these specific annotations to find features with high fidelity in pixel space isn't actually necessary, and we can come up with an even simpler, more beautiful idea for combining image tokens and text tokens in a way that's interfaceable for language tasks.[00:33:44] Peter Robicheaux: And this is nice because it can scale — you can come up with lots more data if you don't have to come up with all these annotations, right? The way it works is very, very similar to PaliGemma: you have a vision encoder that dumps image tokens into a decoder-only transformer.[00:33:59] Peter Robicheaux: But [00:34:00] the interesting thing is that it's also trained to autoregressively predict the image tokens themselves, with a mean squared error loss.
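A toy version of that objective, anticipating the random-prefix detail described next: sample a prefix length, give those image tokens full attention, then regress the remaining image tokens with MSE and predict caption tokens with cross-entropy. The `model` here is a hypothetical decoder-only transformer, not Apple's actual implementation.

```python
import torch
import torch.nn.functional as F

def aimv2_style_loss(model, image_tokens, text_ids):
    """One toy training step of an AIMv2-like objective.

    image_tokens: [B, N, D] continuous patch embeddings.
    text_ids:     [B, T] caption token ids.
    `model` is a hypothetical decoder-only transformer returning
    (predicted_patches [B, N, D], text_logits [B, T, V]) given a
    prefix length that controls the causal-with-prefix mask.
    """
    B, N, _ = image_tokens.shape
    prefix_len = int(torch.randint(1, N, (1,)))        # random prefix length
    pred_patches, text_logits = model(image_tokens, text_ids,
                                      prefix_len=prefix_len)
    # MSE on the image tokens *after* the prefix (autoregressive targets)
    img_loss = F.mse_loss(pred_patches[:, prefix_len:],
                          image_tokens[:, prefix_len:])
    # standard next-token cross-entropy on the caption
    txt_loss = F.cross_entropy(text_logits[:, :-1].flatten(0, 1),
                               text_ids[:, 1:].flatten())
    return img_loss + txt_loss
```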
So instead of having to come up with fancy object detection or segmentation labels, you can just try to reconstruct the image and have it learn fine-grained features that way.[00:34:16] Peter Robicheaux: And it does this in, I think, a beautiful way that's compatible with the PaliGemma line of thinking, which is randomly sampling a prefix length and using only that number of image tokens as the prefix, with a similar causal-with-prefix attention mask — that's the one on the right.[00:34:35] Peter Robicheaux: So it's doing full block attention over some randomly sampled number of image tokens, to then reconstruct the rest of the image and the downstream caption for that image. And this is the dataset they train on: internet-scale image data, very high quality, created essentially by the Data Filtering Networks paper, which is maybe the best CLIP data that exists.[00:34:59] Peter Robicheaux: [00:35:00] And we can see that this is finally a model that doesn't saturate. Even at the highest parameter count, it appears to keep improving in performance with more and more samples seen.[00:35:27] Peter Robicheaux: And so you can think that if we just keep bumping the parameter count and increasing the examples seen — which is the line of thinking for language models — then it'll keep getting better. It also improves with resolution (this is ImageNet classification accuracy), which you would expect for a model that's actually leveraging and finding fine-grained visual features.[00:35:44] Peter Robicheaux: So how does it actually do compared to CLIP on COCO? Well, you can see that if you slap a transformer detection head on it, it gets 60.2 on COCO, which is [00:36:00] within spitting distance of SOTA, which means it does a very good job of finding visual features. But you could say, okay, wait a second:[00:36:03] Peter Robicheaux: CLIP got 59.1, so how does this prove your claim at all? Doesn't that mean CLIP, which is known to be CLIP-blind and do badly on MMVP, is able to achieve very high performance on this fine-grained visual feature task of object detection? Well, they train on tons of data.[00:36:24] Peter Robicheaux: They train on Objects365, COCO, Flickr and everything else. And so I think this benchmark doesn't do a great job of selling how good a pre-trained model AIMv2 is, and we would like to see the performance with fewer examples, not trained to convergence on object detection.[00:36:48] Peter Robicheaux: So seeing it in the real world, on a dataset like Roboflow 100, I think would be quite interesting. And our final, final pick for paper of 2024 would be Moondream. So, introducing Vik to talk about that.[00:36:54] swyx: But overall, that was exactly what I was looking for — best of 2024, an amazing job. Yeah, [00:37:00] if there's any other questions while Vik gets set up, like vision stuff —[00:37:07] swyx: yeah,[00:37:11] swyx: Vik, go ahead. Hi,[00:37:13] Vik Korrapati / Moondream[00:37:13] question: Well, while we're getting set up — hi, over here — thanks for the really awesome talk.
One of the things that's been weird and surprising is that the foundation model companies, even these multimodal LLMs, are still just worse than RT-DETR at detection. Like, if you wanted to pay a bunch of money to auto-label your detection dataset, giving it to OpenAI or Claude would be a big waste.[00:37:37] question: So I'm curious — even PaliGemma 2 is worse — I'm curious to hear your thoughts on how come nobody's cracked the code on a generalist that really beats a specialist model in computer vision, like they have in LLM land.[00:38:00][00:38:01] Isaac Robinson: Okay. It's a very, very interesting question. I think it depends on the specific domain. For image classification, it's basically there. AIMv2 showed a simple attentional probe on the pre-trained features gets like 90%, which is as well as anyone does. The bigger question is why it isn't transferring to object detection, especially real-time object detection.[00:38:25] Isaac Robinson: I think, in my mind, there are two answers. One is that object detection architectures are really, really domain specific. We see all these super complicated designs, and it's not easy to build something that just transfers naturally like that, whereas for image classification, CLIP pre-training transfers super, super quickly.[00:38:48] Isaac Robinson: And the other thing is, until recently, the real-time object detectors didn't even really benefit from pre-training. You see the YOLOs, essentially saturated, showing very little [00:39:00] difference from pre-training improvements — from using a pre-trained model at all. So it's not surprising, necessarily, that people aren't looking at the effects of better and better pre-training on real-time detection.[00:39:12] Isaac Robinson: Maybe that'll change in the next year. Does that answer your question?[00:39:17] Peter Robicheaux: Can you guys hear me? Yeah, one thing I want to add, just to summarize: until 2024 we hadn't really seen a combination of transformer-based object detectors and fancy losses, and PaliGemma suffers from the same problem. These ResNet-style, convolutional models have all these extreme optimizations for doing object detection, but essentially, I think it's been shown now that convolutional models just don't benefit from pre-training and just don't have the level of intelligence of transformer models.[00:39:56] swyx: Awesome. Hi,[00:39:59] Vik Korrapati: can [00:40:00] you hear me?[00:40:01] swyx: Cool, I hear you. See you. Are you sharing your screen?[00:40:04] Vik Korrapati: Hi. Might have forgotten to do that. Let me do[00:40:07] swyx: that. Sorry, should have done[00:40:08] Vik Korrapati: that.[00:40:17] swyx: Here's your screen. Oh, classic. You might have to quit Zoom and restart. What? It's fine, we have a capture of your screen.[00:40:34] swyx: So let's get to it.[00:40:35] Vik Korrapati: Okay, easy enough.[00:40:49] Vik Korrapati: All right. Hi, everyone. My name is Vik. I've been working on Moondream for almost a year now. Like Shawn mentioned, I just went and looked, and it turns out the first version I released was December [00:41:00] 29, 2023. It's been a fascinating journey. So Moondream started off as a tiny vision language model.
Since then, we've expanded scope a little bit to also try to build some tooling, client libraries, et cetera, to help people really deploy it.[00:41:13] Vik Korrapati: Unlike traditional large models that are focused on assistant-type use cases, we're laser focused on building capabilities that developers can use to build vision applications that can run anywhere. In a lot of cases for vision, more so than for text, you really care about being able to run on the edge, run in real time, etc.[00:41:40] Vik Korrapati: So that's really important. We have different output modalities that we support. There's query, where you can ask general English questions about an image and get back human-like answers. There's captioning, which a lot of our users use for generating synthetic datasets to then train diffusion models and whatnot; we've done a lot of work to minimize hallucinations there, [00:42:00] so that gets used a lot. We have open-vocabulary object detection built in, similar to a couple of more recent models like PaliGemma, where rather than having to train a dedicated model, you can just say "show me soccer balls in this image" or "show me if there are any deer in this image" and it'll detect it.[00:42:14] Vik Korrapati: More recently, earlier this month, we released pointing capability, where if all you're interested in is the center of an object, you can just ask it to point out where that is. This is very useful when you're doing UI automation type stuff. Let's see — we have two models out right now.[00:42:33] Vik Korrapati: There's a general purpose 2B parameter model, which runs fine on a server, is good for our local Ollama desktop friends, and can run on flagship mobile phones; and there's a 0.5B model that [00:43:00] uses less memory, even with our not yet fully optimized inference client.[00:43:06] Vik Korrapati: The way we built our 0.5B model was to start with the 2 billion parameter model and prune it while doing continual training to retain performance. Our objective during the pruning was to preserve accuracy across a broad set of benchmarks. So the way we went about it was to estimate the importance of different components of the model — attention heads, channels, MLP rows and whatnot — using a technique based on the gradient.[00:43:37] Vik Korrapati: I'm not sure how much detail people want to know; we'll be writing a paper about this, but feel free to grab me if you have more questions. Then we iteratively prune a small chunk that will minimize the loss in performance, retrain the model to recover performance, and bring it back. The 0.5B we released is more of a proof of concept that this is possible.[00:43:54] Vik Korrapati: I think the thing that's really exciting about this is it makes it possible for developers to build using the 2B parameter [00:44:00] model and just explore, build their application, and then, once they're ready to deploy, figure out what exactly they need out of the model and prune those capabilities into a smaller form factor that makes sense for their deployment target.[00:44:12] Vik Korrapati: So yeah, very excited about that.
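The exact criterion isn't published yet, so the following is only a generic sketch of the family of gradient-based importance scores being gestured at: a first-order Taylor estimate, |weight × gradient| accumulated on a calibration batch, summed over whatever structure (attention head, channel, MLP row) is being considered for pruning.

```python
def taylor_importance(model, calib_batch, loss_fn):
    """Generic first-order importance scores for pruning.

    A sketch of the *family* of gradient-based criteria; Moondream's
    exact method is unpublished as of this talk. Returns
    {param_name: per-element |w * dL/dw|}; to score a structure (head,
    channel, MLP row), sum the scores over its elements, prune the
    lowest-scoring structures, then retrain to recover performance.
    """
    model.zero_grad()
    loss = loss_fn(model, calib_batch)   # e.g. LM loss on calibration data
    loss.backward()
    scores = {}
    for name, p in model.named_parameters():
        if p.grad is not None:
            # |w * g| approximates the loss change from zeroing the weight
            scores[name] = (p.detach() * p.grad.detach()).abs()
    return scores
```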
Let me talk to you folks a little bit about another problem I've been working on recently, which is similar to the clocks example we've been talking about. We had a customer reach out who had a bunch of gauges out in the field. This is very common in manufacturing and oil and gas, where you have a bunch of analog devices that you need to monitor.[00:44:34] Vik Korrapati: It's expensive to have humans look at them, monitor stuff, and make sure the system gets shut down when the temperature goes over 80 or something. So I was like, yeah, this seems easy enough — happy to help you distill that, let's get it going. Turns out our model couldn't do it at all.[00:44:51] Vik Korrapati: I went and looked at other open source models to see if I could just generate a bunch of data and learn from that. Did not work either. So I was like, let's look at what the folks with [00:45:00] hundreds of billions of dollars in market cap have to offer. And yeah, that doesn't work either. My hypothesis is that the way these models are trained is on a large amount of image-text data scraped from the internet,[00:45:15] Vik Korrapati: and that can be biased. In the case of gauges, most gauge images aren't gauges in the wild; they're product images — detail images like these, where the needle is always set to zero, paired with an alt text that says something like "GIVTO, pressure sensor, PSI, zero to 30" or something. And so the models are fairly good at picking up those details.[00:45:35] Vik Korrapati: It'll tell you that it's a pressure gauge, it'll tell you what the brand is, but it doesn't really learn to pay attention to the needle over there. And so, yeah, that's a gap we need to address. Naturally my mind goes to: let's use synthetic data to solve this problem. That works, but it's problematic, because it turned out we needed millions of synthetic gauge images to get to reasonable performance.[00:45:57] Vik Korrapati: And thinking about it, reading a gauge is [00:46:00] not a one-step, zero-shot process in our minds, right? If you had to tell me the reading in Celsius for this real world gauge: there are two dials on there, so first you have to figure out which one you have to pay attention to, the inner one or the outer one.[00:46:14] Vik Korrapati: You look at the tip of the needle, you look at what labels it's between, and you count ticks and do some math to figure out what the reading probably is. So what happens if we just add that as a chain of thought, to allow the model to better learn the subtasks it needs to perform to accomplish this goal?[00:46:37] Vik Korrapati: So you can see in this example — this was actually generated by the latest version of our model — it's like: okay, Celsius is the inner scale; it's between 50 and 60; there are 10 ticks; so the second tick. It's a little debatable here — there's a weird shadow situation going on, the dial is off — so I don't know what the ground truth is, but it works okay.[00:46:57] Vik Korrapati: The points [00:47:00] in there are actually grounded. I don't know if this is easy to see, but when I click on them, there's a little red dot that moves around on the image; the model actually has to predict where these points are. I was originally trying to do this with bounding boxes, but then Molmo came out with pointing capabilities,[00:47:15] Vik Korrapati: and pointing is a much better paradigm to represent this. We see pretty good results.
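To illustrate what a grounded chain-of-thought training target might look like, here's a hypothetical sample in the spirit of the example narrated above. The field names, format, and pixel coordinates are all invented for illustration; this is not Moondream's actual data format.

```python
# Hypothetical grounded chain-of-thought sample for gauge reading.
# Each step can optionally carry a grounded (x, y) point that the model
# must predict, mirroring the clickable red dots described above.
sample = {
    "image": "gauge_0421.jpg",
    "question": "What is the reading in Celsius?",
    "chain_of_thought": [
        {"step": "Celsius is the inner scale.", "point": [312, 208]},
        {"step": "The needle tip sits between the 50 and 60 labels.",
         "point": [341, 176]},
        {"step": "There are 10 ticks between labels, so each tick is 1 degree."},
        {"step": "The tip is on the 4th tick past 50."},
    ],
    "answer": "54",
}
```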
This one's actually for clock reading; I couldn't find our chart for gauge reading at the last minute. So the light blue line is with our grounded chain of thought. We built a clock-reading benchmark of about 500 images,[00:47:37] Vik Korrapati: and this measures accuracy on that. You can see the model is a lot more sample efficient when you're using the chain of thought. Another big benefit of this approach is that you can kind of understand how the model is doing it and how it's failing. So in this example, the actual correct reading is 54 Celsius and the model output [00:48:00] 56 — not too bad — but you can actually go and see where it messed up. It got a lot of these steps right, except instead of saying the needle was on the 7th tick, it predicted the 8th tick, and that's why it went with 56.[00:48:14] Vik Korrapati: So now that you know it's failing in this way, you can adjust how you're doing the chain of thought, to maybe say: actually count out each tick from 40, instead of just trying to say it's the eighth tick. Or you might say: okay, I see that there's that middle marker, I'll count from there instead of all the way from 40.[00:48:31] Vik Korrapati: So it helps a ton. The other thing I'm excited about is few-shot prompting, or test-time training, with this. If a customer has a specific gauge that we're seeing minor errors on — say it's mis-detecting the needle — they can give us a couple of examples where they go in and correct that in the chain of thought,[00:48:49] Vik Korrapati: and hopefully it works the next time. Now, it's an exciting approach, but so far we've only applied it to clocks and gauges. The real question is: is it going to generalize? Probably — there's some science [00:49:00] from text models showing that when you train on a broad number of tasks, it does generalize, and I'm seeing some signs of that with our model as well.[00:49:05] Vik Korrapati: So, in addition to the image-based chain of thought stuff, I also added some spelling-based chain of thought to help it better understand OCR, I guess. I don't understand why everyone doesn't do this, by the way; it's a trivial benchmark question that's very, very easy to nail. But I also wanted to support it for stuff like license plate partial matching — like, hey, does any license plate in this image start with WHA or whatever?[00:49:29] Vik Korrapati: So yeah, that sort of worked. All right, that ends my story about the gauges. If you think about what's going on here, it's interesting that LLMs are showing enormous progress in reasoning, especially with the latest set of models that we've seen, but I have a feeling that VLMs are lagging behind, as we can see with these tasks that should be very simple for a human to do [00:50:00] yet are very easy to find VLMs failing at.[00:50:04] Vik Korrapati: My hypothesis on why this is the case: on the internet, there's a ton of data that talks about how to reason. There are books about how to solve problems, and books critiquing the books about how to solve problems. But humans are just so good at perception that we never really talk about it.[00:50:20] Vik Korrapati: Maybe in art books, where it's like, hey, to show that that mountain is further away, you need to desaturate it a bit or whatever. But the actual data on how to look at images isn't really present. Also, the data we do have is kind of sketchy.
The best source of data we have is image alt-text pairs from the internet, and that's pretty low quality.[00:50:40] Vik Korrapati: So yeah, I think our solution here is really just that we need to teach these models how to operate on individual tasks and figure out how to scale that out. All right. Yep. So, conclusion: at Moondream, we're trying to build amazing VLMs that run everywhere. Very hard problem. Much work ahead, but we're making a ton of progress, and I'm really excited [00:51:00] about it. If anyone wants to chat about more technical details of how we're doing this, or is interested in collaborating, please hit me up.[00:51:08] Isaac Robinson: Yeah,[00:51:09] swyx: Like, when people say multimodality, I always think about vision as the first among equals of all the modalities. So I really appreciate having the experts in the room. Get full access to Latent Space at www.latent.space/subscribe
The Separation Guide | A starting point for better separation and divorce
In this episode, host Sabina Read sits down with Arabella Feltham, Separation Consultant at The Separation Guide, to explore how men can navigate separation with strength and support. They discuss: Staying out of conflict and prioritising health The importance of support networks Why men are often blindsided by separation news Managing grief, anger, and mental health challenges Practical advice for FIFO and emergency service workers Discover tools and resources to regain control and move forward. Don't miss it!
FIFO workers in WA's construction sector are far more likely to consider taking their own lives than the average Australian. A woman whose friend acted on that impulse is among those trying to address the issue.
Text me what you thought of the show
In Australia, over 100,000 FIFO (Fly In, Fly Out) workers deal with the pressures of extended time away from home, and countless others — partners, friends and kids — also have to deal with the fallout that runs alongside someone they love being away for long periods of time. So, does working away from home increase your own or your partner's drinking? The stats seem to suggest that people who work away tend to drink more than most and party harder on their days off. It could be caused by the sense of freedom people feel when working away, or it could be loneliness, boredom or missing loved ones. Then, when they get home from a work trip... the party starts. Time to kick back, enjoy a well-earned break and have a few drinks to pass the time. A toxic cycle of drinking that can leak from a workplace culture into the home. 'But Hame and Vic — what do you know?! You don't even have real jobs!' we hear you cry! Well... you're right... and that's why we have welcomed onto the podcast the wonderful Shaun Palmer, who openly shares his story of domestic abuse, grief, work as a FIFO worker, overdrinking and his eventual journey to sobriety. Shaun has overcome more hurdles than most of us will in a lifetime, and he now spends his time trying to help others who are struggling too. It was an absolute pleasure to have him on the podcast and we hope his story inspires you as much as it did us! Enjoy! Notes: Shaun's Revive Sobriety website: https://www.revivesobrietycoaching.com/ You can contact Shaun by emailing him at: spalmer@revivesobrietycoaching.com Vic's book is out! Go and get yourself a copy whilst you can... https://www.booktopia.com.au/a-thousand-wasted-sundays-victoria-vanstone/book/9780645757941.html and please give a review at www.goodreads.com JOIN PATREON! and buy us a Cuppa so we can keep being awkward! https://www.patreon.com/user?u=81897291 www.cuppa.community – The Free Social Network for the Sober and Sober Curious - Sober Events – Therapy – Sobriety Courses – Sober Groups and loads more. @soberawkward @drunkmummysobermummy @cuppa.community @hamishadamscairns @patreon @spotify If you are struggling with your relationship with alcohol, please reach out to your local doctor, a therapist, an AA group, or just chat to a close friend. Don't feel shame, just get the help you deserve. Contact us! If you have a topic you'd like us to cover then please email us - vicandhamish@soberawkward.com Sign up to our 30 Day Sober Tour Guide at www.soberawkward.com #soberawkward #soberawkwardpodcast #drunkmummysobermummy #cuppa.community #sober #sobermom #sobermummy #sobriety #soberaf #sobermovement #sobercurious #alcoholfree #mummybloggers #writersofinsta #soberfamily #greyareadrinking #addiction #soberissexy #soberwomen #sobermomtribe #sobrietyrocks #soberlifestyle #alcoholfreelife #wedorecover #sobernation #mumblog #mentalhealth #motherhood #wineoclock #sobermums #selfcare #womeninrecovery #sobercommunity #soberdads #1000sundays Hosted on Acast. See acast.com/privacy for more information.
“Influencers are forever reinforcing the same images. They’re spending no time in the actual place, other than the requisite time to take the photo. From the local community’s point of view, these kinds of tourists bring very little value.” –Stuart McDonald In this episode of Deviate, Rolf and Stuart talk about why Stuart chose to make his office in West Bali, and why South Bali has developed something of a bad reputation in terms of over-tourism (2:30); the mythos of Bali, how it became a “dreamscape” in the Western consciousness, and how it has changed in recent years (6:30); why certain areas in Bali become over-touristed, and how it has recently been affected by “influencers” (18:00); how black magic and ghosts are part of the belief systems of Balinese, yet few travelers ascertain this (24:00); and how much social-media travel content leaves out essential cultural context (31:00). Stuart McDonald (@travelfishery) is the co-founder of Travelfish.org, a travel planning website covering Southeast Asia, which he launched in 2014. He has been traveling in that part of the world since 1993, and living there since 1997. Notable Links: The Vagabond’s Way, by Rolf Potts (book) Bali Hai Immigrant Song (YouTube mashup) Dutch presence in Bali (colonialist history) Eat, Pray, Love, by Elizabeth Gilbert (book) Canggu (coastal village in Bali) Fly-in fly-out [FIFO] (term for temporary laborers) Digital nomads (remote workers who travel) Lonely Planet (travel guidebook publisher) Infinity pool (type of swimming pool) National Geographic (magazine) GetYourGuide (tour company) Gates of Heaven (photogenic temple in Bali) Balinese sacred textiles Kastom (Melanesian traditional culture) Kava (sedative drink in Melanesia) Listicle (article structured as a list) Filterworld: How Algorithms Flattened Culture, by Kyle Chayka (book) Externality (indirect economic cost) This episode of Deviate is also brought to you by AirTreks, an industry leader in multi-stop international travel. If you've ever planned a trip with multiple stops, you know that finding the right flights can be difficult. Between balancing travel logistics and cost, it often becomes impossible to build an itinerary that matches your travel goals. AirTreks is a distributed travel company with employees working from all corners of the world to help with your flight planning, specializing in complex routes with up to 25 stops. The AirTreks website offers suggested pre-planned travel itineraries to help you get started, but can customize to fit your journey. The Deviate theme music comes from the title track of Cedar Van Tassel's 2017 album Lumber. Note: We don't host a “comments” section, but we're happy to hear your questions and insights via email, at deviate@rolfpotts.com.
Moff chats about his debut at derby day, while Butts talks about the FIFO mission to Brisbane as part of the Punteroos. 200Plus Summer Series starts next week..... Enjoy plums and remember to GET THE KNEES UP! Send us your voice messages here: https://memo.fm/200pluspodcast/ Produced by Josh Moffitt 200 PLUS Instagram: https://www.instagram.com/200pluspod/ Sam Draper: www.instagram.com/drvper/ Nick Butler: https://www.instagram.com/nick_butler10/ Charlie Comben: https://www.instagram.com/charliecomben/ Max Lynch: https://www.instagram.com/_maximumlynch_ Clubby Sports: https://www.instagram.com/ClubbySports Producey: https://producey.com/
In this heartfelt episode, we dive into Chelaine's journey through the highs and lows of infertility as a FIFO couple navigating the complex world of fertility treatments. For the past two years, Chelaine and her husband have been balancing life with a "two weeks away, one week home" roster while managing full-time work, countless doctor visits, miscarriage and emotional challenges. In this episode, Chelaine sheds light on the resilience and courage needed to face ongoing testing, treatment changes, and the pursuit of parenthood. If you or someone you know is navigating infertility, this episode offers companionship, insights, and a reminder that you are not alone.
Hey, Holistic Wellness Warrior
Hey Lifers! PLEASE VOTE FOR US IN THE AUS PODCAST AWARDS. Have you ever developed an allergy later in life? Laura may have developed a new one and it's impacting every aspect of her life and every pore in her body! Vibes for the week: Britt - Summer Fridays Jet Lag Mask. Keeshia - Huberman - Esther Perel: How to Find, Build & Maintain Healthy Romantic Relationships. Laura - Into the Fire: The Lost Daughter on Netflix. We mentioned our episode with Esther Perel. Then we jump into the questions: IS IT BETTER TO TURN A BLIND EYE? Recently I found texts on my boyfriend of 5 years' phone that appear to be organising to have sex with a sex worker, as well as messaging other girls to meet him out while he was working in another country (I don't know if this ever eventuated). We don't see each other much as I work in Aus and he works overseas for a lot of the year. He has a much higher sex drive than me, and long distance/not much sex doesn't bother me, but it really bothers him. I haven't told my boyfriend I know this yet and I haven't told any family or friends. I know as soon as I tell anyone they will hate him and tell me the things that I would tell anyone else - to break up with him. The problem is I'm 30 next year, I want to have a baby in the next two years, and I had my life pretty much sorted with him on paper. He has his faults but he is my best friend. The thing is, he makes an enormous amount of money and I don't. In our future I know my children will be looked after and they won't have to struggle. I know I won't have to struggle. I come from a family who doesn't have a lot of money, and being with my boyfriend means I know I can take care of them better than if I'm single. Right now I don't know whether to confront my boyfriend, because I know as soon as I say this out loud I can't take it back, and it will mean that I have to break up with him, as cheating is not something I want in a relationship. Is it morally wrong to just look the other way because of the benefits this relationship brings me, my family and my future family? I know this seems like an obvious answer, but I know how hard life can be, and being with him means my life will in some ways be easier with him in it. Or will this always be in the back of my mind and ruin my happiness even with the security it brings? Help. Please. I feel like I'm old, have nothing to show for myself, and I'm scared I'll never be a mother if we break up now. HOW TO CHOOSE A WEDDING LOCATION WHEN FAMILY ARE ON OPPOSITE SIDES OF AUS? I have recently got engaged and we have already started talking about when and where, as these are the biggest questions we have to answer. We would like to have it maybe this time next year to allow people interstate and international to sort their lives out to come. Now the big question is, where? My family is all east of Australia and we live in WA. My fiancé's immediate family is here in WA as well. Our friends are in WA. I have family that are elderly and wouldn't be able to travel. We have brought up the subject with my fiancé's parents and my MIL didn't have a very good reaction, which I knew would happen. How do I say that I don't know how long my elderly family members will be with us and I want them at my wedding, as they mean so much to me? How do I approach this topic and not seem like I'm being a bridezilla making us have our wedding east? Or do we elope and have two parties, one east and one west? DO I TELL HER THAT HER HUSBAND IS CHEATING? I'm in a tough situation and need some advice.
I overheard my partner talking about a night out where one of his coworkers cheated on his partner, who is home with their 6-month-old baby. I feel awful for the woman and want to tell her, but I don't know her or her partner, and my partner would be furious if he knew I was eavesdropping. On top of that, if I say something, it could put my partner in a difficult position since the cheater is connected to management. I can't shake the feeling that I need to do something, but I'm worried about the consequences for everyone involved. What should I do?We got some additional info - the partner works FIFO and was disturbed when he heard the coworker say that he had been cheating. You can watch us on Youtube Find us on Instagram Join us on tiktok Or join the Facebook Discussion Group Tell your mum, tell your dad, tell your dog, tell your friend and share the love because WE LOVE LOVE! xxSee omnystudio.com/listener for privacy information.
Welcome to the Personal Development Trailblazers Podcast! In this episode, we'll explore the steps one nurse took to regain control of her life and career, and how you too can find harmony amid the chaos. Jeanelle is a FIFO wife, mum of 2 and registered nurse passionate about beating and preventing burnout. She wrote and self-published her book 'Nursing the Nurse: The Ultimate 6-Step Guide to Beating Burnout' at the beginning of 2023 and has since launched the signature walkthrough program of the same name as well as the wellness app, Nursing the Nurse: Holistic Huddle which incorporates a holistic approach to the unique needs of the shift working nurse to go from surviving to thriving both at work and home. Connect with Jeanelle here: https://www.facebook.com/mindfulnessforhealthau https://www.instagram.com/mindfulnessforhealthau/ https://www.facebook.com/jeanelle.classen/ www.nursingthenurse.com Grab the freebie here: Free video training: https://www.nursingthenurse.com/css-videotraining-request =================================== If you enjoyed this episode, remember to hit the like button and subscribe. Then share this episode with your friends. Thanks for watching the Personal Development Trailblazers Podcast. This podcast is part of the Digital Trailblazer family of podcasts. To learn more about Digital Trailblazer and what we do to help entrepreneurs, go to DigitalTrailblazer.com. Are you a coach, consultant, expert, or online course creator? Then we'd love to invite you to our FREE Facebook Group where you can learn the best strategies to land more high-ticket clients and customers. Request to join here: https://www.facebook.com/groups/profitablecoursecreators QUICK LINKS: APPLY TO BE FEATURED: https://app.digitaltrailblazer.com/podcast-guest-application GET MORE CLIENTS: https://app.digitaltrailblazer.com/client-acquisition-accelerator-pdf DIGITAL TRAILBLAZER: https://digitaltrailblazer.com/ JOIN OUR FREE FACEBOOK GROUP: https://www.facebook.com/groups/profitablecoursecreators
In this episode, Alex and Wade discuss tax-efficient retirement strategies, specifically focusing on tax diversification. They explain the three broad types of tax treatments in the tax code: taxable accounts, tax-deferred accounts (such as IRAs and 401ks), and tax-exempt accounts (such as Roth IRAs). They highlight the importance of having assets in each category to provide flexibility in retirement planning. They also discuss the characteristics and advantages of each type of account, including tax treatment, liquidity, and growth potential. Additionally, they touch on the different methods of tracking cost basis in taxable accounts. In this conversation, Alex and Wade discuss tax-efficient retirement distribution strategies. They cover the different types of retirement accounts, including tax-deferred accounts (such as traditional IRAs and 401(k)s), tax-exempt accounts (such as Roth IRAs and Roth 401(k)s), and taxable accounts. They explain the tax advantages and disadvantages of each type of account and discuss the importance of considering your current and future tax rates when deciding where to contribute. They also touch on the backdoor Roth contribution strategy and the concept of required minimum distributions (RMDs). Overall, the conversation emphasizes the importance of tax efficiency in retirement planning. Takeaways Tax diversification involves having assets in taxable accounts, tax-deferred accounts, and tax-exempt accounts to provide flexibility in retirement planning. Taxable accounts are the least tax-efficient but offer advantages such as preferential income treatment, step-up in basis at death, and liquidity. Tax-deferred accounts, such as IRAs and 401ks, offer tax deductions on contributions and tax-deferred growth, but have required minimum distributions and early withdrawal penalties. Tax-exempt accounts, such as Roth IRAs, offer tax-free growth and tax-free distributions, but contributions are not tax-deductible. Tracking cost basis in taxable accounts can be done using methods like average cost, first in first out (FIFO), or specific identification of tax lots. Consider your current and future tax rates when deciding where to contribute to retirement accounts. Tax-deferred accounts (such as traditional IRAs and 401(k)s) provide a tax deduction now but are taxed upon withdrawal. Tax-exempt accounts (such as Roth IRAs and Roth 401(k)s) are funded with after-tax dollars but provide tax-free withdrawals in retirement. Taxable accounts have no tax advantages but offer flexibility and liquidity. The backdoor Roth contribution strategy allows high-income earners to contribute to a Roth IRA by making a non-deductible contribution to a traditional IRA and then converting it to a Roth IRA. Required minimum distributions (RMDs) are mandatory withdrawals from tax-deferred retirement accounts starting at age 72 (or 70.5 for those born before 1960). Tax efficiency is an important aspect of retirement planning and can have a significant impact on your overall financial situation. 
Chapters 00:00 Introduction and Excitement for Tax-Efficient Retirement Strategies 01:26 Tax-Efficient Retirement Distributions as a General Theme 03:01 Understanding Tax Diversification and the Three Types of Tax Treatments 04:20 Advantages and Considerations of Taxable Accounts 15:11 Benefits and Limitations of Tax-Deferred Accounts 25:14 The Advantages of Tax-Exempt Accounts 26:04 Methods of Tracking Cost Basis in Taxable Accounts 00:31 Overview of Retirement Accounts 08:43 Tax-Deferred Accounts 18:30 Tax-Exempt Accounts 25:14 Taxable Accounts 28:47 Backdoor Roth Contribution 33:44 Required Minimum Distributions (RMDs) 38:26 Tax Efficiency in Retirement Planning 45:11 Retirement Tax Cliff 47:09 Conclusion Links The Retirement Planning Guidebook: 2nd Edition has just been updated for 2024! Visit your preferred book retailer or simply click here to order your copy today: https://www.wadepfau.com/books/ This episode is sponsored by McLean Asset Management. Visit https://www.mcleanam.com/retirement-income-planning-llm/ to download McLean's free eBook, “Retirement Income Planning”
Hey Lifers, welcome back to Ask Uncut, where we answer all of your deep and burning questions! Britt has some terrible dating advice that includes faking a celebrity interaction. The tide seems to be turning on Raygun; there has been more speculation around the ethics of her journey to the Olympics since we recorded on Monday morning. Laura helps Britt learn about her (Ben's) new home in Romania. Vibes for the week: Laura: Two Doting Dads Book: The Quest For Free Time Keeshia: It Ends With Us Film Britt: Diary of a CEO podcast with Francis Ngannou Then we jump into your questions! AM I UNFULFILLED OR IS THIS NORMAL? I've been with my boyfriend for 5 years. I have kids, he doesn't. I have always felt like he is my best mate, not necessarily my penguin, but at the same time it's been 5 years, we've put in the work, and my kids now adore him. His best friend moved in with us a year ago. I obviously noticed this man was attractive, and I've known him for a long time, but I was happy and content with my partner, and I thought, hey, it's normal to just appreciate someone's good looks. Until… he messaged me one night. I was at work and he had been drinking. It said, “Hey, please don't repeat what I'm about to tell you.” He goes on to tell me that he finds me irresistible and that the reason he chose to take a FIFO (3 on, 1 off) job was because he struggles to be alone with me. He said he thinks about me non-stop. I would be lying if I said I didn't feel the lust. I went home, slept it off, and felt so guilty that I showed my boyfriend the messages. He kind of just said, “Oh wow, he is thinking with the wrong body part,” and has since pretended like it never happened. Now I'm in a tailspin. I dream about this man. He creeps into my mind constantly. He comes home in a week, and I've tried gently suggesting to my boyfriend that we ask him to move out. Obviously I haven't told him that I am attracted to his best friend; I just said that it's a little awkward. My partner just replied that the extra income is helpful and he probably won't try anything. But what he doesn't know is that his friend has messaged me since, telling me he is sorry, but that I am just so beautiful and kind, that I'm the sweetest, and that he can't help but think about me. I truly think that I'm just feeling this way because, after 5 years and being a full-time working mum, I feel invisible to my family, so having a man call me irresistible is a thrill. I do not want to go against my morals and destroy someone's trust over a fling, but I also don't want to tell my partner how I feel. Should I be looking at this as a sign I'm unfulfilled in my relationship and maybe it's time to move on (not with his friend), or is this just a normal reaction to having an attractive man show me attention, and once he has moved out I can just move past this without hurting my boyfriend? DON'T WANT TO WEAR THE BRIDAL OUTFIT How do I tell the bride and groom of a wedding I'm attending later this year that the bridal party outfit they have chosen for me is awful and I don't want to wear it? Keep in mind they also asked me to pay for it (so now I am out of pocket too). I live in a different state to them, so I had to order online without trying the outfit before buying. It is unflattering, does not suit my shape, and I feel so uncomfortable in it. I do not want to wear this in public, let alone in front of a crowd at a wedding. I had suggested early on that if I'm paying for it, could I buy a nice dress in their colours that I'd be likely to wear again.
They insisted, however, that they wanted everyone to be 'uniform'. Do I just have to suck it up, as the day is not about me? (Also, this is a destination wedding, so I am already spending thousands on travel and accommodation to attend.) I DON'T LIKE HIM DRINKING ALONE My husband and I had a disagreement and genuinely could not work out who was in the wrong, so we're turning to the brains trust! My husband works shift work, so he often has midweek days off or finishes really early on weekdays that I'm working. Sometimes (say once a week) he likes to go to the local pub by himself and have a few beers until I finish work. He usually comes home tipsy on these occasions. I feel uncomfortable about him drinking by himself and coming home tipsy after doing so. I don't have any issue with him drinking with friends or with us having a few drinks together - it's just the by-himself aspect (which I think stems from growing up with parents who had issues with alcohol). He gets upset by this and feels like I'm trying to control how he spends his free time. He doesn't think it's unreasonable to do this once a week. I don't have an issue with him doing any activity by himself that doesn't involve alcohol, so I don't feel that I'm being controlling. Who is in the wrong?! Am I being unreasonable? We've had multiple conversations about this and still can't work out if either of us is in the wrong. For context, we are in our early 30s and have no kids, just the 2 of us living at home (and we otherwise have an amazing relationship)! You can watch us on YouTube Find us on Instagram Join us on TikTok Or join the Facebook Discussion Group Tell your mum, tell your dad, tell your dog, tell your friend and share the love because WE LOVE LOVE! xx See omnystudio.com/listener for privacy information.
A common piece of conventional wisdom around how law firms prioritize their work is that they should organize everything around deadlines and due dates. While deadline-driven prioritization does have a place, in my experience there's a better technique that law firms should be adopting as their default prioritization method: first-in first-out (FIFO). Tune in this week to discover the advantages of a first-in first-out policy in your law firm. I discuss how deadline-driven prioritization leads to overwhelm and unpredictability, and you'll learn how FIFO as a default prioritization method will keep work flowing through your organization. For full show notes, transcript, and more information, visit: https://www.agileattorney.com/27 Start your Agile transformation today and check out free resources, including my Law Firm Policy Template, to help you and your team develop a more Agile legal practice: https://www.agileattorney.com/start
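To make the contrast concrete, here is a minimal Python sketch — not from the episode, and the matter names and due dates are hypothetical. A deadline-driven backlog constantly re-sorts itself by due date, while a FIFO queue simply works matters in the order they arrived:

```python
# Sketch contrasting deadline-driven prioritization with FIFO.
# The matters and deadlines below are hypothetical.
from collections import deque
from datetime import date

# Work arrives in this order: (matter, deadline).
intake = [
    ("Smith contract review", date(2024, 9, 1)),
    ("Jones demand letter",   date(2024, 8, 20)),
    ("Lee estate plan",       date(2024, 10, 5)),
]

# Deadline-driven: the whole backlog is re-sorted by due date,
# so every new urgent matter reshuffles everyone's priorities.
deadline_order = [m for m, _ in sorted(intake, key=lambda m: m[1])]

# FIFO: matters are worked in arrival order; new work joins the back
# of the queue, so the flow stays predictable.
queue = deque(intake)
fifo_order = [queue.popleft()[0] for _ in range(len(queue))]

print(deadline_order)  # ['Jones demand letter', 'Smith contract review', 'Lee estate plan']
print(fifo_order)      # ['Smith contract review', 'Jones demand letter', 'Lee estate plan']
```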
Sam and Ryan read and discuss a fantastic interactive blog post about queueing in HTTP written by Sam Rose. Timestamps: 0:00 - Intro 6:57 - Queueing: An interactive study of queueing strategies 9:05 - Why do we need queues? 13:16 - FIFO and timing out 17:55 - LIFO 20:58 - Priority queues 25:21 - Active queue management 29:08 - Comparing queues 36:32 - Conclusion Links: Queueing: An interactive study of queueing strategies Up and Down the Ladder of Abstraction
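As a rough companion to the discussion, here is a small Python sketch — not from the blog post or the episode, and the request names, arrival ticks, and timeout value are made up — showing two of the strategies covered: a FIFO queue that times out stale requests, versus LIFO, which serves the newest request at the cost of starving the oldest:

```python
# Sketch of two queueing strategies: FIFO with a timeout, and LIFO.
# Request names, arrival ticks, and the timeout value are made up.
from collections import deque

TIMEOUT = 5  # drop any request that has waited more than 5 ticks

def serve_fifo_with_timeout(queue, now):
    """queue holds (name, arrival_tick) pairs, oldest at the left."""
    while queue:
        name, arrived = queue.popleft()
        if now - arrived > TIMEOUT:
            print(f"timed out: {name}")  # stale request is dropped, not served
            continue
        return name                      # oldest still-fresh request is served
    return None

def serve_lifo(queue):
    """LIFO serves the newest arrival: great latency for it, starvation for the rest."""
    return queue.pop()[0] if queue else None

q = deque([("req-1", 0), ("req-2", 6), ("req-3", 9)])
print(serve_fifo_with_timeout(q, now=10))  # req-1 times out; serves req-2
print(serve_lifo(q))                       # serves req-3, the newest request
```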
Georgia Wilson shares her journey from underground truck driver to respected shot firer and safety professional. Georgia is a passionate miner and safety advocate. She talks about her initiative, "Chicks in Mining," a supportive community and mining merch brand for women in the mining sector. Listen in as Georgia and Mad Mumzie discuss the ups and downs of their careers, the importance of culture in mining, and the hilarious mishaps that come with the job. They also delve into the significance of mentoring and supporting women in the industry. Show Notes: https://madmumzie.com/beers95 Chicks in Mining - Get your merch here: https://www.chicksinmining.com Chicks In Mining Australia private Facebook group: https://www.facebook.com/groups/505767284787838 Chicks In Mining Instagram: https://www.instagram.com/chicksinmining/ Georgia Wilson - LinkedIn: https://www.linkedin.com/in/georgia-wilson-44b54a109 Mad Mumzie's Mining Portal: 'dot com :-)' https://mining.teachable.com Dump Truck Operator. No Experience? Where do you start? https://madmumzie.com/noexperience Mad Mumzie and her sister, Hard Hat Mentor, have a collaboration, "Steel Cap Sisters". Listen to the What Boots Podcast here: https://steelcapsisters.com Thanks to Bantacs Accounting Group, sponsor for this episode. Are you a Rich Miner? Do you want to stay a Rich Miner? Use your $ well: https://madmumzie.com/money Thanks Girlfriend for the tunes x "Until next time, stay safe, be real, be special... and have fun, for we only live once!" Cheers, Mad Mumzie
Michael Bullock is a production excavator and dozer operator at a large iron ore mine in Western Australia. Michael and Aaron discuss the intricacies of Australian mining following Aaron's recent trip to the area. To see Michael's photos and videos, you can follow him on Instagram at @the_english_earthmover. Questions or feedback? Email us at dirttalk@buildwitt.com! Stay Dirty! **UPDATE** Dirt Talk is STOKED to announce Ariat as our first official sponsor for the year! They make world-class footwear and workwear that we see on every job site we visit, and their folks are just as great as their products. Dirt Talk listeners can receive 10% off their first order with Ariat by clicking here or visiting Ariat.com/dirttalk.
Episode 906 - Why Homesteading and Coffee? Today we celebrate eight years of Living Free in Tennessee with a look back at where we started and why homesteading, why coffee, why community. Direct Download for episode 1: https://traffic.libsyn.com/nicolesauce/NicoleSauce_Podcast_May_20_2016_-_52016_6.52_PM.mp3 Featured Event: Haven Earth Trade School Homesteading Bootcamp, June 14-16 in Old Fort, Tennessee (nearish Cleveland and the Ocoee River). https://www.havenearthtradeschool.net/homestead-bootcamp-haven-village Live we did yesterday: https://www.youtube.com/watch?v=WwreK9qPVKI Sponsor 1: EMPShield.com Sponsor 2: DiscountMylarBags.com Livestream Schedule BIG week this week! The Spicy Sisters are back this Wednesday at 2pm, Joel Ryals will share his story about moving from Florida to Ohio on the Tuesday live, and we have our usual Homestead Happenings episode this Friday. Tales from the Prepper Pantry The Harvest Right freeze dryer is back up and running and helping with a plethora of eggs Affiliate link for Harvest Right: https://affiliates.harvestright.com/1095.html Designing the final part of the house to include pantry storage, a pool, and an outdoor kitchen Ground beef discussion Time to finish curing the SRF pork Garden is finally coming together Operation Independence I have reduced the projects I am focusing on, and as a result funds have also reduced. We are assessing the homestead in terms of what brings value now more than ever. With the extra time, the home is almost back to being a source of support rather than a drain. There is a lesson in this. Main topic of the Show: Why Homesteading and Coffee May 20, 2016 - an excerpt Nine years ago, we started on an adventure in the country. What began as a weekend getaway quickly changed into a small homestead with chickens, gardens, laughter, neighbors, and sometimes the opposite of laughter. The Holler Homestead is known in our area for our home-roasted coffee (it takes less time to roast your own than to drive to the store), elephant garlic, stone-ground flour, and hand-rolled oats. We also help people learn how to preserve food and are keenly interested in self-sufficient living. Y'all, I have not listened to episode one since about a week after recording it, when I listened back to figure out how I could be a better podcaster. It feels like so much has changed, and it has. But the core, the mission, the motivation is still the same as it was on day one. Do you know how good that makes me feel? Also a bit surprised. Back in 2016, this podcast was my only creative outlet. It was an attempt to reach more people who were interested not only in homesteading but in taking whatever steps in their lives they could to build something for themselves, outside the system, to live as they saw fit, without buying into societal expectations - and without asking permission. >>GenX on TikTok and the FIFO approach to life