POPULARITY
I've been asked this question a lot lately: "Why didn't network marketing EXPLODE from the pandemic and recession, like so many expected? Why are we seeing declines instead of massive growth?"

Let's get into it. There's no single reason; it's a combination of things. Yes, the pandemic created a surge for many companies in 2020–2021. But what people don't talk about is that the growth was artificial. It was more of a sugar rush than sustainable momentum. People were home. They had more time. More stimulus checks. Ecommerce in general saw record-breaking numbers.

But then what happened? Life returned to "normal," attention spans shrank, wallets tightened, and expectations changed.

Here's what I believe are the top reasons we've seen double-digit declines across many product-based companies:

1. Artificial Pandemic Bump
Many companies grew from a moment, not a movement. Growth wasn't built on deep belief or long-term behavior; it was built on convenience and timing.

2. Lack of Innovation
A lot of companies got lazy during the good times. They stopped innovating. Compensation plans didn't evolve. Products didn't stay ahead of the curve. Messaging got stale. And when consumer priorities shifted, companies weren't ready. This is a GENERALIZATION. I know many of you did evolve.

3. We're in the "Options Era"
Let's say someone used to buy 100 things. Now their budget forces them to cut back, and 10–30% of those purchases go first. Did your product make the cut? Uber. Airbnb. Affiliate. Creator economy. Ecom dropshipping. All legit options now. If we don't create real, unique value, we lose. Those options weren't available in 2008.

4. Poor Customer Retention
Most companies still don't do a great job reselling the customer after the sale. The fortune isn't just in the follow-up; it's in the retention experience.

5. False Expectations
People heard "work from home" and thought "easy income." They weren't ready for rejection, consistency, and leadership development.
So they quit early.

6. Lazy on Leadership Development
We stopped developing leaders because it was so easy! Companies and leaders have been back in the trenches, some for the last two years, but you don't just plant a seed and expect a tree to grow the next day. It takes time to see the fruits of your labor. We are starting to see some of those results already, and we will continue to see more. This is the WEEDING-out season.

Listen up: go to www.rankandbanksystem.com right now. Why? Because if your network marketing check isn't where you want it, this is the fix. It's my 30-day Rank & Bank System: daily trainings from me, daily challenges that actually move the needle, and accountability that keeps you in the game. Here's the kicker: it's got a 100% money-back guarantee. You follow the system, show up for 30 days, and if your check doesn't grow, you pay nothing. Zero risk. Most programs drown you in fluff; this one's built to get you results fast, without the overwhelm. I've taken people from $400 a month to millions (not normal, but possible) with this exact framework. You've got nothing to lose and everything to gain. Go to www.rankandbanksystem.com and lock it in before you miss out. Let's make your bank account match your hustle.
In this episode, we chat about using multiple large-scale neuroimaging datasets for developmental cognitive neuroscience research. We highlight two exciting recent projects: the Reproducible Brain Charts database and a foundational study on white matter development. Along the way, we discuss harmonizing multiple datasets, testing in multiple datasets for generalization, replicability, practical challenges, and more. Matt is joined by two wonderful guests from Dr. Ted Satterthwaite's lab who are pioneering these momentous efforts: Dr. Golia Shafiei and Audrey Luo.

Reach out to our host:
Matt Mattoni (matt.mattoni@temple.edu)

Connect with our guests:
Dr. Golia Shafiei: https://bsky.app/profile/goliashf.bsky.social
Audrey Luo: https://bsky.app/profile/audreycluo.bsky.social

Connect with us on social media! We are always looking for ideas for episode topics, co-hosts, or guests. @FluxSociety

Key Resources:
Reproducible Brain Charts
Two Axes of White Matter Development
Reproducibilibuddy
https://reprobrainchart.github.io/
WATCH --> https://2ly.link/1zQfw

In this episode, we explore a modern and thoughtful take on force fetch with Chris Arminini of Full Send Canine. Learn how free shaping, marker training, and the NePoPo system are changing the game for retriever training. Whether you're a seasoned handler or starting your first dog, this episode brings clarity and innovation to one of the most debated training topics out there.
Is your win-loss data just expensive fiction written by your sales team? In this episode, Ryan Sorley (ex-Forrester, ex-Gartner) joins the crew to expose the hard truth about buyer research. From ripped jeans costing deals to bootstrapping a business while juggling kids and payroll, Ryan doesn't hold back on his journey from corporate misery to specialized success. Oh, and we uncovered the stupidly simple reason most product marketers fail at research. Tune in for honest laughs, real insights, and maybe a wake-up call about your "data."

In this episode, we're covering:
How to run a solo business when you have kids and bills to pay
The "Taylor Swift Squad" theory of business growth
Why most buyer research is just wishful thinking with fancy graphs
The Powder Blue Taurus Moment: when Ryan knew corporate life was BS
Why writing a business plan is a complete waste of time for solopreneurs
How Ryan went from 0 to 100 clients with ZERO salespeople

Check out his new book Blindspots on Amazon too!

Timestamps:
01:00 Introducing Ryan Sorley, Win-Loss Research Expert
02:36 Are Product Marketers Actually Marketers?
04:15 Ryan's Take on Product Marketing as Research
06:45 The Importance of Intentional Research
10:00 The "Superpower" of Win-Loss Analysis
12:34 The HubSpot Ripped Jeans Story
17:15 Ryan's Entrepreneurial Journey
19:00 The NJ Turnpike Moment: When Ryan Knew Corporate Life Wasn't for Him
21:00 How Ryan Discovered the Win-Loss Opportunity
23:45 Ryan's Transition from Gartner to Entrepreneurship
27:00 What's the Minimum Viable Plan for Going Solo?
30:00 The "Squad Life" Approach to Client Relationships
34:00 The Dark Times: Managing Cash Flow and Contractors
39:00 Specialization vs. Generalization in Consulting
42:45 Ryan's New Book: "Blind Spots" (Launch April 1st)
44:30 Closing Remarks and Farewell

Show Notes:
Ryan's LinkedIn
Blindspots on Amazon

Hosted by Ausha. See ausha.co/privacy-policy for more information.
This is not a show about teaching eye contact. We'll get to that in a bit. First, though, I should note that the 22nd installment of the Inside JABA Series is coming out comically late. I apologize for getting us off schedule. The good news is that we already have a great paper to discuss for the 23rd Inside JABA episode that I think you're going to love, so I hope to have that one out later on in the spring.

Back to this episode. Drs. Danny Conine and Jenn Fritz join me to discuss a paper Danny wrote with his colleagues called, "Evaluating a screening-to-intervention model with caregiver training for response to name among children with autism." There are so many great things about this paper, and listeners will be able to tell this from my enthusiasm in discussing it with Danny and Jenn. As I noted above, this is not about teaching eye contact, but rather a more generalized repertoire of responding to one's name (RTN). We get into why these two things are different, and, as Danny tells it, RTN repertoires have many benefits that directly impact learning and safety. In this paper, he describes an elegant assessment and intervention that his research team implemented to develop RTN in the study's participants. In carrying out this study, they also employed a simple and effective assent withdrawal component, which we get into. Then, they took the skills they developed in a clinic setting and taught the participants' caregivers to implement RTN procedures at home. As such, this paper provides a great example of how to generalize skills across settings. Very cool! Along the way, Danny provides practical tips clinicians can consider for their own practice. All of this to say, I'm hoping you'll agree that the wait for this episode will be worth it!

Resources discussed in this podcast:
Conine, et al. (2025). Evaluating a screening-to-intervention model with caregiver training for response to name among children with autism.
Conine, et al. (2020).
Assessment and treatment of response to name for children with autism spectrum disorder: Toward an efficient intervention model.
Conine, Vollmer, and Bolívar (2019). Response to name in children with autism: Treatment, generalization, and maintenance.
BOP Session 212 with Tim Hackenberg.
Luczynski and Hanley (2013). Prevention of problem behavior by teaching functional communication and self-control skills to preschoolers.
The Verbal Behavior Approach, by Dr. Mary Barbera.
Links to Danny's faculty page, Research Gate profile, LinkedIn, and his lab's Instagram.
Jenn's faculty page, Research Gate profile, LinkedIn, and the UHCL ABA Program page.

If you enjoy this episode, please consider sharing with friends and colleagues!
We speak with Sakana AI, who are building nature-inspired methods that could fundamentally transform how we develop AI systems.

The guests include Chris Lu, a researcher who recently completed his DPhil at Oxford University under Prof. Jakob Foerster's supervision, where he focused on meta-learning and multi-agent systems. Chris is the first author of the DiscoPOP paper, which demonstrates how language models can discover and design better training algorithms. Also joining is Robert Tjarko Lange, a founding member of Sakana AI who specializes in evolutionary algorithms and large language models. Robert leads research at the intersection of evolutionary computation and foundation models, and is completing his PhD at TU Berlin on evolutionary meta-learning. The discussion also features Cong Lu, currently a Research Scientist at Google DeepMind's Open-Endedness team, who previously helped develop The AI Scientist and Intelligent Go-Explore.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting!
https://centml.ai/pricing/

Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich.
Go to https://tufalabs.ai/
***

* DiscoPOP - A framework where language models discover their own optimization algorithms
* EvoLLM - Using language models as evolution strategies for optimization
* The AI Scientist - A fully automated system that conducts scientific research end-to-end
* Neural Attention Memory Models (NAMMs) - Evolved memory systems that make transformers both faster and more accurate

TRANSCRIPT + REFS:
https://www.dropbox.com/scl/fi/gflcyvnujp8cl7zlv3v9d/Sakana.pdf?rlkey=woaoo82943170jd4yyi2he71c&dl=0

Robert Tjarko Lange
https://roberttlange.com/
Chris Lu
https://chrislu.page/
Cong Lu
https://www.conglu.co.uk/
Sakana
https://sakana.ai/blog/

TOC:
1. LLMs for Algorithm Generation and Optimization
[00:00:00] 1.1 LLMs generating algorithms for training other LLMs
[00:04:00] 1.2 Evolutionary black-box optimization using neural network loss parameterization
[00:11:50] 1.3 DiscoPOP: Non-convex loss function for noisy data
[00:20:45] 1.4 External entropy injection for preventing model collapse
[00:26:25] 1.5 LLMs for black-box optimization using abstract numerical sequences

2. Model Learning and Generalization
[00:31:05] 2.1 Fine-tuning on teacher algorithm trajectories
[00:31:30] 2.2 Transformers learning gradient descent
[00:33:00] 2.3 LLM tokenization biases towards specific numbers
[00:34:50] 2.4 LLMs as evolution strategies for black-box optimization
[00:38:05] 2.5 DiscoPOP: LLMs discovering novel optimization algorithms

3. AI Agents and System Architectures
[00:51:30] 3.1 ARC challenge: Induction vs. transformer approaches
[00:54:35] 3.2 LangChain / modular agent components
[00:57:50] 3.3 Debate improves LLM truthfulness
[01:00:55] 3.4 Time limits controlling AI agent systems
[01:03:00] 3.5 Gemini: Million-token context enables flatter hierarchies
[01:04:05] 3.6 Agents follow own interest gradients
[01:09:50] 3.7 Go-Explore algorithm: archive-based exploration
[01:11:05] 3.8 Foundation models for interesting state discovery
[01:13:00] 3.9 LLMs leverage prior game knowledge

4. AI for Scientific Discovery and Human Alignment
[01:17:45] 4.1 Encoding Alignment & Aesthetics via Reward Functions
[01:20:00] 4.2 AI Scientist: Automated Open-Ended Scientific Discovery
[01:24:15] 4.3 DiscoPOP: LLM for Preference Optimization Algorithms
[01:28:30] 4.4 Balancing AI Knowledge with Human Understanding
[01:33:55] 4.5 AI-Driven Conferences and Paper Review
On the Road to Aya

Cáel becomes the Amazon's Unorthodox Global Diplomat

By FinalStand. Listen to the Podcast at Explicit Novels.

For me, the diplomacy revolved around Delilah and Virginia; I had already fallen on my knees and begged Odette to let me go see Aya 'alone'. A few sexually-charged hours later, she agreed. That left four choices for the role of my two agents. They wanted to go 'as is'. Rachel informed them they would be murdered in-flight and their bodies tossed out over a convenient body of water.

Rachel felt that the only reasonable course of action was for them to not come. That way the two could live a few more weeks. However, she would settle for stripping them down, doing a full body scan and then sealing them naked in airtight coffins (with a suitable amount of oxygen) for the journey. I suspected they might still slip out the baggage compartment somewhere between takeoff and landing.

I cut through the clash of egos and made the final decision. Delilah and Virginia would be stripped and thoroughly examined. Initially I had the chore. Rachel was deeply suspicious of my true intentions. Freed of any electronic devices and with their weaponry in my keeping during the trip, they would be blindfolded as we made it to Aya without bloodshed.

They applauded my wisdom by roundly refusing my decision. Pamela was of no help. Ten minutes into it, I informed them I was going alone, completely alone. They laughed, snorted and chuckled. Rachel reminded me that I didn't know where to go. I lied and told her that Katrina had given me the coordinates for the super-secret juvenile, all-feline [yes, I meant cats] survival training school.

Fine, they would just keep me under constant surveillance. I responded by assuring them that despite my lack of spy-like abilities, I would escape and get to relive my Summer Camp experience with the only woman who respected my Demigod-like combat status. Their laughter hurt my feelings.
Pamela stepped up and told the room they could either respect my compromise, or she would help me evade them. It was even more depressing to see the room full of women who had previously been mocking me suddenly 'snap to' and quickly agree to my earlier suggestions.

"It is okay," Pamela told me softly as the actual mechanics of my vacation were figured out by others. "I didn't want to play Bill Munny to your Ben Logan."

Pamela's eyes flared brighter than any phoenix's rebirth. She'd stumped me.

"The Unforgiven, my Son," she patted my cheek. "It is a western made in 1992 starring Clint Eastwood, recast masterfully by 'Yours Truly', and we need to work on you making a convincing Morgan Freeman."

"Doesn't Freeman end up in a pinewood box in the first third of the movie?" Virginia mused.

"I didn't want to dishearten him," Pamela grinned. To me: "He ran off alone and got himself killed."

"I was what, not even a year old when that movie came out," I responded with indignation.

"You've never heard of Blockbuster, Netflix, Redbox, Dish, Hulu, or late night, Spanish language television?" Pamela snickered.

"I only watch Univision for their sports coverage," I countered.

"You mean for those sexy female sports announcers," Delilah chuckled. That earned her a 'well duh' look from all the other women.

"Before I consent to the strip search and inevitable follow-up anal probe, are we really going to be in a situation that requires us to fight this time?" Virginia asked.

"We should be perfectly safe," Rachel responded.

"Check, bring extra ammo," Virginia nodded.

"Good for you, Ms. Maddox," Pamela winked. "One day there is hope your life will have some meaning to me."

"Great," Special Agent Maddox muttered, "now I have to think of what to get her for Christmas." We all laughed. Christmas was such a long way away.

We packed up, rode to a private airfield near Doebridge, learned that SD was smarter than the rest of us, boarded our flight, and then finally entered US airspace from there.
Around Ohio, a thought occurred to Maddox.

"If we were somehow forced to land and have the plane searched, how bad would it be?" she asked Rachel.

"Bad enough that we have a better chance of fighting our way free than seeing freedom before dying in prison," Rachel answered calmly.

"Hmm, Rachel, if something like that happened, how many parachutes do we have?" Delilah joined in.

"Enough. Mona rides down with Cáel because he's a virgin," Rachel stated.

"Oh! Come on, Rachel," I fell down on my knees. "Can't I bungee jump it?"

"Luv," Delilah snorted. "If the drop didn't kill ya, the bounce back would snap you in two."

"Cáel, we are at thirty thousand feet," Tiger Lily giggled. "You are more likely to end as a streamer than a pancake." An Amazon giggle, a most joyous noise.

"Rachel, I have been unkind," Virginia confessed. "Cáel is so personable and so dead set on getting himself killed. I had no idea your assignment was so herculean."

"Acknowledged," Rachel said, "and we don't use 'that' word." Hercules was Greek too.

"We have it worse," Delilah patted Maddox on her shoulder. "We must obey some sort of legal code that doesn't allow us to preemptively save him."

"We must too," Rachel gave a depressive sigh. "Her," she pointed at Pamela.

"Hey," Pamela pouted. "I'm more a force for vigilante justice than a team player. I ride alone."

"Alone?" I took a quick headcount and added our Amazon pilot. "I count ten, Lone Phaser."

"Am I included in that count?" Miyako yawned from under her blanket. "This jet lag is killing me."

"Where did she come from?" Virginia hopped up.

"She was here when we boarded," I told her. "I searched her, I swear."

"Yes he did," Miyako gave a sleepy, Hello Kitty smile. She'd 'searched' me too.

"I bet you did," Rachel glared at me, then Pamela, then me again since I was the titular boss.

Thankfully we all 'bought a vowel', played a card in Clue, and shared an Inspector Clouseau moment. The gang settled down for a nap. Sleeping was not complicated.
Rachel, as my bodyguard, slept beside me. The airplane's touchdown was so flawless I had to be shaken to alertness. Did I fall asleep? More on that later.

It would have been better if Virginia hadn't figured out our pilot had violated numerous FAA regulations, like dropping below radar at one remote airport then sailing along for an unknown number of kilometers at nape of the Earth until we reached our final destination. (This is great in date flicks, btw. It convinces the girl that we should 'live in the moment'/screw as much as possible.)

We weren't there yet, of course. That level of un-convoluted thinking would have been an Amazon indicator of senility. Being a male Amazon, I was immune to such considerations; that meant I was always nuts in their regard, but they chose to humor me. Our plane had to park in a camouflaged hangar before we were allowed to disembark.

I concluded we must be getting close to our desert gulag/re-education center as the sharp glare of sunlight was accompanied by an equally heartless glare of hostility rolling forth from our waiting all-terrain vehicle caravan. Thank goodness Rachel had the foresight to bring sunscreen for the passel of us. I swallowed the bitter realization I'd lost a $1000 bet concerning our landing zone with Virginia (a Temperate Rainforest) and Delilah (the American Southwest). In retrospect, betting on the site of 'Camp Rock' wasn't my smartest wager.

The Brit made off with $2000 of our money and she wanted to be paid in Euros. That's €778 from me, you offspring of those who didn't have the courage to cross the Atlantic 100 years ago. Neither Virginia nor I really cared. With the level of violence about to escalate, it was all looking like 'funny' money to us. I didn't share my misery. Our Welcome Wagon ladies hardly looked sympathetic, or all that opposed to utilizing scalping as a valid debating tool.

They didn't view this moment, me showing up, as just a bad thing. My arrival was apocalyptic: #1, a man.
#2, with a member of another secret society. #3, #2 was a professional assassin. #4 and #5, two more outsider women. #6, an unscheduled visit, as in 'the camp guardians hadn't been given six months to plan out all contingencies'. And you think your daycare takes its security seriously?

"Cáel Ishara," the curt, mega-harsh bitch addressed me in English. As the other seven women dismounted from the four Jeep Wranglers (Delilah enlightened us), it was obvious they were well armed and armored, right and ready to provide some extra-curricular para-military fun. "Welcome," and 'oh please tear out one or two of my fingernails, you Ginormous Pain in my ass' she greeted the exalted me.

We spoke in Hittite:

"I am," then I used a phrase which I hoped meant 'I had shed blood in battle with sister Aya'. "No other name means more to me right now." Ah, the lovely jerk that full-blooded Amazons gave the first time they heard a male speak their tongue. The slot machine of her intellect kicked into high gear. No arm grasp was coming my way. I almost forgot.

"The outsiders are to remain armed as guests of House Ishara." That command was crucial. When/if I got my way with my first request, I was going to be rendered 'one of the girls'.

"If that is your wish. (Evil grin) Grab your bags and make it snappy," the woman ordered. "I don't like any extended activity at this airfield."

"Ladies, let's hurry up and get our bags," Pamela barked in English. "You too, you hairless ape." That would be me, if there was any question. The super-friendly camp counselors, with their slung FN P90s, didn't lift a finger to help us. Miyako flounced around without a care in the world. Pamela, eh, there were only eight of them. Three of my SD group were cautious while the pilot was already effecting her refueling and departure.

Rachel shot one of the guardians a look I perceived to be friendly. A double-take elucidated things. She was Rachel's younger sister and had already been updated on my bona fides.
Then, in Hittite:

"Male, you are agreeable to the eye," Rachel's sister fired off. Three whole seconds.

"Why, thank you. I run faster than you would think, thankfully heal even faster, and have the venerated outdoor skills of Bigfoot," I smiled.

The seven other ladies weren't sure what to make of that jocularity.

"A very, very young Bigfoot," Rachel corrected.

"There is nothing wrong with the size of his feet," Tiger Lily added to the fun. And then all the homicidal fanatics chuckled.

Pamela's whispered translation brought a subdued, yet similar reaction from the non-Amazon contingent. Sure, the new group knew about the New Directive, my fun encounters which I equated to my life and death struggle in those earlier days, my rise to house leadership, Constanza's blinding, the grenade launcher episode and the totality of my last confrontation with Hayden. Amazons are some hard-ass bitches.

As we were loading up the jeeps, the leader tapped me on the shoulder with some force, in the same way a teacher catches an unruly student's attention.

"What was sex with an augur like? My name is Caprica Mielikki."

"Out of respect for your authority, I will answer this personal question that is really none of your business," I looked down a good ten centimeters at her. No fear. "It was beautiful, like every other woman I have had the treasured pleasure to have sex with," I continued. My reply's undercurrent was simple: I am not a House Head while I'm here. I am an Amazon, not a slave or outsider male.

"Did you suffer stigmata?"

"Yes. To be fair, I was also having intercourse with her personal guardian at the same time. I'm not sure where to lay the blame, or importance," I inhaled her rugged fragrance.

"Both?" a different camp counselor questioned.

"As I told you, he has a really big and craftily-wielded foot," Tiger Lily teased. Then Pamela said, in Hittite:

"And he is banned from having sex with any Amazon women for fifty more days," Pamela reminded them.
Miyako, Delilah and Maddox weren't involved, so they were left uninformed of that detail. That bludgeoning innuendo dealt with, off to camp we went. Our journey was a pleasant diversion, punctuated by our trail, or lack thereof.

The jeeps split up once we hit the aerial cover of the desert pines. At that point, every rock, shrub, tree and loose bit of debris revealed its God-given mission in life was to kill us. I kept telling myself that surely our Amazon driver abhorred suicide as much as I frowned on vehicular manslaughter as a means of me dying.

Failing to believe that left me with tuck, duck and roll, and that death-defying move would leave me lost and waterless, somewhere. I would have thought 'somewhere without cell reception', but none of our mobile devices had made the trip, despite a valiant effort at skullduggery by Special Agent Maddox and some highly creative types back at the Hoover Building.

See, after we dutifully packed all our gear, the troupe got to watch Rachel's team toss everything into a cargo bin set to be loaded onto a flight to, the ticket said, Banjul, Gambia. Woot! My ten-ton armored long coat was going to Africa without me. It would undoubtedly have tried to kill me in this heat. I was lured into acceptance by hoping this was going to be a 'birthday suit' flight.

Yay! (Sarcasm) We got all new undies, shirts, shoes, pants, shorts, jackets, ponchos (I was beginning to suspect duplicity on that one), and a variety of other gear, including guns. They were nice enough to replace our weapons with the exact same production models. The sole exceptions were my trusty axes, and I trembled at the scrutiny they must have endured.

Meanwhile, back to my archaic, misogynistic inspiration that women shouldn't be allowed to drive: after the third skirting of what must have been a ten-meter drop, I realized I was looking at this journey in the wrong light. I raised my hands over my head and began screaming like a fool.
I was on the best rollercoaster ride ever!! The hobnail boot was on the other foot. My driver really wanted to know what the fuck I was up to, but couldn't take her concentration off the terrain. One massive lurch planted us in an arroyo (that's a dry riverbed, for those of us who aren't freaked out every time it rains). Rachel and I were sitting in the back. Turning around in the front seat, Pamela grinned at me.

"I dare you to surf the hood," she laughed. Sweet Mother Ishara, that was the best mixing of 'you must be a redneck'/'immortal high schooler madness' I'd ever heard. I unbuckled milliseconds before Rachel could stop me. Her look said it all: 'Please, you Moron, don't do this to me. I've been a good little guardian and really don't deserve this, now do I?'

I gave her a deep French kiss. She moaned, just not in a sexual manner. One of these days Rachel was going to start running around with a needle and fast-acting sedative to keep me safe from myself. Understand, my driver was racing down this dirt, well, "pathway" was being generous. Her first warning that something wasn't right was me hand-standing on the roll bar and flipping onto the dashboard.

Considering I was up against a 70-kilometer headwind, I felt I pulled off that maneuver rather well. She grabbed my closest ankle with one hand while keeping the other on the wheel. Our eyes were masked with goggles, but my smile said it all. No, I hadn't been thrown forward, and no, I wasn't running away from something in the back seat.

I shook free, stepped over the windshield, braced my right heel against its base and leaned into the torrent of air. I was surfing a jeep. Then I was flying above the jeep, but only for a second. We'd hit a rock the size of an armadillo, or maybe it was an actual armadillo. I wasn't looking back to check. Why was I doing this? It was a tad complex. I gave Psych 101 a shot.

My life was not where I had envisioned it would be when I kissed Dr.
Kimberly Geisler, and my last two Bolingbrook girlfriends, who had been unaware of each other until that moment, good-bye before leaving college forever. I proudly considered myself amoral. No social contract would keep me from some good cunt, and since I found all cunt to be good if you worked at it, I slept with every girl I could: married, committed, bored, desperate, I didn't care.

I held no relationship sacred. I had already proved I could do any girl's mother, daughter, aunt, roommate, childhood friend and total stranger. I hadn't cared. I knew I was going to cause multiple women emotional pain and I did it anyway. Sure, I regretted the agony I left in my wake. I never considered myself a sadist, but I had been a pretty horrible person by ignoring the inevitable consequences of my actions.

Then Havenstone. Suddenly people were doing bad stuff to people I didn't know and it mattered to me. I was talking to women without the end goal being a sexual encounter. Hell, I had been honest with women without them using pain, or the threat of pain, on me. I didn't stop being me. I nailed four women at Loraine's, Europa's and Aya's school. I nailed Nicole while waiting for Trent to toss me his social table scraps, Libra. A whole army of women engaged in murder, slavery and infanticide on a regular basis, and I cared for them.

I cared for them in a way that confronted damnation, not sexual adventurism. I had graduated from 'Dude, don't do that to the lady' at some bar to 'do this and I'll have you killed', and meaning it, and making it happen. I hadn't learned my lesson. I'd gone on to kill Hayden and Goddess-knows how many other women who Hayden had placed on that list.

Yep, dead, dead, dead, and it was all on me. Worse, I would do it all over again because deep down, tearing up my insides, was morality. To me that boiled down to caring about someone else without reward.
And all that led me to surfing the hood of a jeep on my way to meet my lodestone of this transformation, Aya.

My laughter was drowned out by the noises of the engine, tires, rocks, wind and sand. It resonated all the more. The driver didn't slow down. I sincerely doubted she understood my lunacy. That was okay. Pamela did, and Aya would. She'd want to go jeep surfing too. Man, for a jackass and dastardly betrayer, I was accumulating a sizable heart-load of people I could honestly say I loved.

Kimberly had once told me that the pain of knowledge is never being able to forget it. Good or bad, it is an affliction for which there is no cure. That was where I was, pained by the creeping advancement of my soul and unable to turn back now that the door to familial affection had been opened.

My thoughts of Dad dying and of a thunderstorm burst in my noggin weren't being terribly helpful to my mental state either. The horn blew and I snuck a quick peek back. The driver was making a sharp, forward-jabbing motion with her right hand, then thrusting to the left. We were getting ready to exit the arroyo, and that probably required some hellish footwork far beyond my ability.

I made a hasty, less dignified, yet safer return to my seat. Rachel quickly buckled me in before a rapid turn up and over the bank of the riverbed had us heading for another forested area.

"What was that all about?" Rachel asked once we were back into the tree cover. She'd have asked earlier, but she was too busy clenching and unclenching her jaw in frustration.
In this episode, I share why developing specialized skills, rather than becoming a generalist, allows you to scale and grow your multifamily portfolio more effectively. I discuss why determining whether you are going to become someone who's excellent at finding deals, raising debt/equity, or operating deals is key to removing the ambiguity around your business. This makes it easier for others to invest in your deals, connect you with people who can bring value to your business, and potentially partner with you, as well. I also use an analogy where I talk about Stringer Bell and The Wire... so this will be a fun episode for all of you out there who are fans of the show, like myself. Are you looking to invest in real estate, but don't want to deal with the hassle of finding great deals, signing on debt, and managing tenants? Aligned Real Estate Partners provides investment opportunities to passive investors looking for the returns, stability, and tax benefits multifamily real estate offers, but without the work - join our investor club to be notified of future investment opportunities.
Connect with Axel:
Follow him on Instagram
Connect with him on LinkedIn
Subscribe to our YouTube channel
Learn more about Aligned Real Estate Partners
What to listen for: “If I cannot, in five repetitions, isolate a variable down and get it where I want it, I do need to stop. That doesn't mean the next time we'll be successful. What that tells me is, don't do the same thing again and again and again. And there's been a lot of variation. Isolate that variable. It looks good now. Put it back in a chain, it doesn't look good. What have I done wrong?” In part one of this two-part discussion, Robin Greubel and Crystal Wing sit down with special guest Denise Fenzi, a trainer and educator who specializes in building cooperation, joy, and extreme precision in competition dog sports teams. “Patterns lead to confidence. Unpredictable leads to listening.” Famous for breaking down complex concepts into bite-sized lessons, Denise does a deep dive into a training philosophy that stresses adaptability and prioritizing the dog's wellbeing. She discusses how to build your dog's vocabulary via strategic repetition, as well as her approach to encouraging or disrupting behavioral patterns during training. “The more ways you generalize the behavior, the better the dog gets at those behaviors.” Denise shares why tools like choke chains and e-collars might be a thing of the past, and why tailored, humane alternatives to aversives better enhance understanding and cooperation between you and your dog, which in turn fosters a bond built on trust and respect rather than pain and fear. “A behavior chain is simply a bunch of behaviors strung together, so you have to get comfortable with degrees of messiness.” Finally, Denise unravels the complexities of behavior chains and the balance between instinctual behaviors and structured learning.
She walks us through the art of rewarding gradual progress and adapting training techniques to maintain a dog's confidence.
Key Topics:
Denise's Training Methodology (06:51)
Applying Micro Behaviors To Full Training Scenarios (20:22)
Generalizing Behaviors (26:13)
Drive Versus Arousal (35:50)
Resources:
Fenzi Dog Sports Academy
We want to hear from you:
Check out the K9 Detection Collaborative FB page and comment on the episode post!
K9Sensus Detection Dog Trainer Academy
K9Sensus Foundation can be found on Facebook and Instagram. We have a Trainer's Group on Facebook!
Scentsabilities Nosework is also on Facebook. Here is a Facebook group you should join!
Crystal Wing K9 Coach can be found here at CB K9 and here at Evolution Working Dog Club. Also, check out her Functional Obedience Class here.
You can follow us for notifications of upcoming episodes, find us at k9detectioncollaborative.com
Jingle by: www.mavericksings.com
Audio editing & other podcast services by: www.thepodcastman.com
In this episode of The Nomad Solopreneur Show, I sit down with Blair LaCourte, a versatile business executive who has transitioned into coaching from leading major companies in various industries. Blair discusses the importance of relationships, the effects of loneliness, and how humans are hardwired to connect. He shares his philosophy on balancing stress and recovery and the need for self-awareness in both personal and professional life. We dive into how Blair maintains healthy relationships and his journey toward becoming an astronaut with Virgin Galactic. Tune in to hear valuable insights on leadership, self-awareness, and authentic connections.
Sanctuary cities, music, Christmas movies/songs, Ozone hole "healing," stereotypes. Isaac Woodard was drunk! Blacks involved in Emmett Till kidnapping!
The Hake Report, Monday, December 23, 2024 AD
Bigg Bump https://www.youtube.com/@biggbump | https://x.com/bigg_bump | https://www.instagram.com/bigg_bump
TIMESTAMPS
(0:00:00) Start
(0:01:28) Topics with Bigg Bump
(0:05:44) Hey, guys!
(0:07:24) ALEX, CA: FE debunked? NASA lies.
(0:13:00) ALEX: Drone problems
(0:14:26) Bigg Bump: Air Force base, Chinese spy?
(0:16:27) York City Council, PA… Gaddafi; Female City Council
(0:34:26) DANIEL, TX: Bigg Bump music; natural vs taught talent
(0:40:02) DANIEL: Eminem…
(0:41:46) TERRI, OR: Fave Xmas movie, song? The Ref. Cristina, Things Fall Apart
(0:50:07) Ozone hole healing… science
(1:00:35) Trump court rulings
(1:05:49) Patience…
(1:06:40) MAZE, OH: NAACP, Regulations, Lawsuits
(1:13:50) MAZE: Do you listen to your husband? Debt Ceiling
(1:17:47) RIGO, TX: "Don't judge a book by its cover"; Generalizations
(1:24:40) Sgt Isaac Woodard story
(1:32:18) Being black…
(1:40:33) Emmett Till
(1:50:55) Jose Feliciano - "Feliz Navidad" - 1970
LINKS
BLOG https://www.thehakereport.com/blog/2024/12/23/christmas-with-bigg-bump-mon-12-23-24
PODCAST / Substack
HAKE NEWS from JLP https://www.thehakereport.com/jlp-news/2024/12/23/hake-news-mon-12-23-24
Hake is live M-F 9-11a PT (11-1CT/12-2ET) Call-in 1-888-775-3773 https://www.thehakereport.com/show
VIDEO YouTube - Rumble* - Facebook - X - BitChute - Odysee*
PODCAST Substack - Apple - Spotify - Castbox - Podcast Addict*
SUPER CHAT on platforms* above or BuyMeACoffee, etc.
SHOP - Printify (new!) - Spring (old!) - Cameo | All My Links
JLP Network: JLP - Church - TFS - Nick - Joel - Punchie
Get full access to HAKE at thehakereport.substack.com/subscribe
In Nonviolent Communication (NVC), the "Jackal" represents a communication style that is often aggressive and judgmental and tends to reflect a mindset focused on blame, criticism, or demands. Here are some key characteristics of the Jackal mode of communication:
Judgment: Jackal communication often involves labeling people, actions, or thoughts as good or bad, right or wrong. This creates a sense of separation and defensiveness.
Blame: The Jackal tends to assign responsibility to others for feelings or situations, often leading to conflict and resentment. It focuses on finding fault rather than understanding.
Criticism: Jackal communication typically involves expressing disapproval or dissatisfaction in a hurtful or dismissive way, which can discourage open dialogue.
Demands: Jackal communication often involves making demands rather than requests, creating feelings of fear or obligation rather than fostering a collaborative atmosphere.
Generalizations: Making broad statements about people's actions or character can oversimplify complex situations and lead to misunderstandings.
Emotional Disconnection: Jackal communication often lacks empathy, leading to emotional distance and a lack of connection with others' feelings and needs.
In contrast, the "Giraffe" symbolizes a more compassionate form of communication in NVC. It focuses on expressing feelings and needs while fostering understanding and connection. NVC aims to shift from Jackal to Giraffe communication for healthier and more constructive interactions.
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Former Dashlane HR leader Ciara Lakhani shares her unfiltered journey from New York to Paris tech, debunking myths about French business culture while revealing the art of cross-cultural leadership. From smoke breaks with works councils to earning trust through authenticity, Ciara offers practical wisdom on navigating international business dynamics and building bridges between American and French work cultures. *Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co For coaching and advising inquire at https://kellidragovich.com/ HR Heretics is a podcast from Turpentine.
Alessandro Palmarini is a post-baccalaureate researcher at the Santa Fe Institute working under the supervision of Melanie Mitchell. He completed his undergraduate degree in Artificial Intelligence and Computer Science at the University of Edinburgh. Palmarini's current research focuses on developing AI systems that can efficiently acquire new skills from limited data, inspired by François Chollet's work on measuring intelligence. His work builds upon the DreamCoder program synthesis system, introducing a novel approach called "dream decompiling" to improve library learning in inductive program synthesis. Palmarini is particularly interested in addressing the Abstraction and Reasoning Corpus (ARC) challenge, aiming to create AI systems that can perform abstract reasoning tasks more efficiently than current approaches. His research explores the balance between computational efficiency and data efficiency in AI learning processes. DO YOU WANT TO WORK ON ARC with the MindsAI team (current ARC winners)? MLST is sponsored by Tufa Labs: Focus: ARC, LLMs, test-time-compute, active inference, system2 reasoning, and more. Future plans: Expanding to complex environments like Warcraft 2 and Starcraft 2. Interested? Apply for an ML research position: benjamin@tufa.ai TOC: 1. Intelligence Measurement in AI Systems [00:00:00] 1.1 Defining Intelligence in AI Systems [00:02:00] 1.2 Research at Santa Fe Institute [00:04:35] 1.3 Impact of Gaming on AI Development [00:05:10] 1.4 Comparing AI and Human Learning Efficiency 2. Efficient Skill Acquisition in AI [00:06:40] 2.1 Intelligence as Skill Acquisition Efficiency [00:08:25] 2.2 Limitations of Current AI Systems in Generalization [00:09:45] 2.3 Human vs. AI Cognitive Processes [00:10:40] 2.4 Measuring AI Intelligence: Chollet's ARC Challenge 3. 
Program Synthesis and ARC Challenge [00:12:55] 3.1 Philosophical Foundations of Program Synthesis [00:17:14] 3.2 Introduction to Program Induction and ARC Tasks [00:18:49] 3.3 DreamCoder: Principles and Techniques [00:27:55] 3.4 Trade-offs in Program Synthesis Search Strategies [00:31:52] 3.5 Neural Networks and Bayesian Program Learning 4. Advanced Program Synthesis Techniques [00:32:30] 4.1 DreamCoder and Dream Decompiling Approach [00:39:00] 4.2 Beta Distribution and Caching in Program Synthesis [00:45:10] 4.3 Performance and Limitations of Dream Decompiling [00:47:45] 4.4 Alessandro's Approach to ARC Challenge [00:51:12] 4.5 Conclusion and Future Discussions Refs: Full reflist on YT VD, Show Notes and MP3 metadata Show Notes: https://www.dropbox.com/scl/fi/x50201tgqucj5ba2q4typ/Ale.pdf?rlkey=0ubvk7p5gtyx1gpownpdadim8&st=5pniu3nq&dl=0
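The inductive program synthesis setup that DreamCoder and the ARC challenge build on can be shown in miniature. The sketch below is only illustrative: the candidate "programs" and grids are invented for this example, and real systems like DreamCoder search and learn vastly richer program libraries. The shape of the problem, though, is exactly this: given demonstration input/output grids, find a program consistent with all demonstrations, then apply it to a test input.

```python
# Toy illustration of program induction in the ARC style.
# Candidate "programs" are simple grid transformations; induction is a
# brute-force search for one that explains every demonstration pair.

def identity(g):  return g
def flip_h(g):    return [row[::-1] for row in g]   # mirror left-right
def flip_v(g):    return g[::-1]                    # mirror top-bottom
def transpose(g): return [list(r) for r in zip(*g)]

CANDIDATES = [identity, flip_h, flip_v, transpose]

def induce(demos):
    """Return the first candidate program consistent with all demos,
    or None if the (tiny) hypothesis space contains no explanation."""
    for prog in CANDIDATES:
        if all(prog(x) == y for x, y in demos):
            return prog
    return None

# One demonstration pair: the output is the input flipped horizontally.
demos = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
prog = induce(demos)
print(prog.__name__, prog([[5, 6], [7, 8]]))  # -> flip_h [[6, 5], [8, 7]]
```

DreamCoder's contribution, discussed in the episode, is to grow the library of candidates from solved tasks rather than fixing it in advance as done here.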
François Chollet discusses the limitations of Large Language Models (LLMs) and proposes a new approach to advancing artificial intelligence. He argues that current AI systems excel at pattern recognition but struggle with logical reasoning and true generalization. This was Chollet's keynote talk at AGI-24, filmed in high quality. We will be releasing a full interview with him shortly. A teaser clip from that is played in the intro! Chollet introduces the Abstraction and Reasoning Corpus (ARC) as a benchmark for measuring AI progress towards human-like intelligence. He explains the concept of abstraction in AI systems and proposes combining deep learning with program synthesis to overcome current limitations. Chollet suggests that breakthroughs in AI might come from outside major tech labs and encourages researchers to explore new ideas in the pursuit of artificial general intelligence. TOC 1. LLM Limitations and Intelligence Concepts [00:00:00] 1.1 LLM Limitations and Composition [00:12:05] 1.2 Intelligence as Process vs. Skill [00:17:15] 1.3 Generalization as Key to AI Progress 2. ARC-AGI Benchmark and LLM Performance [00:19:59] 2.1 Introduction to ARC-AGI Benchmark [00:20:05] 2.2 Introduction to ARC-AGI and the ARC Prize [00:23:35] 2.3 Performance of LLMs and Humans on ARC-AGI 3. Abstraction in AI Systems [00:26:10] 3.1 The Kaleidoscope Hypothesis and Abstraction Spectrum [00:30:05] 3.2 LLM Capabilities and Limitations in Abstraction [00:32:10] 3.3 Value-Centric vs Program-Centric Abstraction [00:33:25] 3.4 Types of Abstraction in AI Systems 4. 
Advancing AI: Combining Deep Learning and Program Synthesis [00:34:05] 4.1 Limitations of Transformers and Need for Program Synthesis [00:36:45] 4.2 Combining Deep Learning and Program Synthesis [00:39:59] 4.3 Applying Combined Approaches to ARC Tasks [00:44:20] 4.4 State-of-the-Art Solutions for ARC Shownotes (new!): https://www.dropbox.com/scl/fi/i7nsyoahuei6np95lbjxw/CholletKeynote.pdf?rlkey=t3502kbov5exsdxhderq70b9i&st=1ca91ewz&dl=0 [0:01:15] Abstraction and Reasoning Corpus (ARC): AI benchmark (François Chollet) https://arxiv.org/abs/1911.01547 [0:05:30] Monty Hall problem: Probability puzzle (Steve Selvin) https://www.tandfonline.com/doi/abs/10.1080/00031305.1975.10479121 [0:06:20] LLM training dynamics analysis (Tirumala et al.) https://arxiv.org/abs/2205.10770 [0:10:20] Transformer limitations on compositionality (Dziri et al.) https://arxiv.org/abs/2305.18654 [0:10:25] Reversal Curse in LLMs (Berglund et al.) https://arxiv.org/abs/2309.12288 [0:19:25] Measure of intelligence using algorithmic information theory (François Chollet) https://arxiv.org/abs/1911.01547 [0:20:10] ARC-AGI: GitHub repository (François Chollet) https://github.com/fchollet/ARC-AGI [0:22:15] ARC Prize: $1,000,000+ competition (François Chollet) https://arcprize.org/ [0:33:30] System 1 and System 2 thinking (Daniel Kahneman) https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 [0:34:00] Core knowledge in infants (Elizabeth Spelke) https://www.harvardlds.org/wp-content/uploads/2017/01/SpelkeKinzler07-1.pdf [0:34:30] Embedding interpretive spaces in ML (Tennenholtz et al.) https://arxiv.org/abs/2310.04475 [0:44:20] Hypothesis Search with LLMs for ARC (Wang et al.) https://arxiv.org/abs/2309.05660 [0:44:50] Ryan Greenblatt's high score on ARC public leaderboard https://arcprize.org/
The Intersection of High Conflict Personalities and Domestic Violence
In this compelling episode, Bill Eddy and Megan Hunter dive into the complex relationship between high conflict personalities and domestic violence. They explore how individuals who have borderline personality disorder (BPD) and antisocial personality disorder (ASPD) may contribute to intimate partner violence (IPV), while emphasizing the importance of distinguishing between high conflict families and domestic violence cases. Bill and Megan discuss the challenges faced by professionals in identifying the true perpetrator in a domestic violence situation, as well as the underlying fears and motivations that may drive abusive behavior in individuals with these personality types. They also address the issue of accountability and the potential benefits of group therapy for individuals who have BPD.
Questions we answer in this episode:
How do high conflict personalities relate to domestic violence?
What role do individuals who have BPD and ASPD play in intimate partner violence?
What are effective interventions for perpetrators of domestic violence?
Key Takeaways:
Distinguishing between high conflict families and domestic violence cases is crucial.
Individuals who have BPD and ASPD have a higher incidence of IPV perpetration.
Setting limits and imposing consequences are essential for holding perpetrators accountable.
This episode offers valuable insights into the complexities of domestic violence and high conflict personalities, making it a must-listen for anyone navigating these challenges.
Links & Other Notes
BOOKS
Splitting: Protecting Yourself While Divorcing Someone with Borderline or Narcissistic Personality Disorder
Our New World of Adult Bullies
Dating Radar
Calming Upset People with EAR
High Conflict People in Legal Disputes
COURSES
Conversations About Domestic Violence in Family Law with 16 Experts
Strategies for Helping Clients with Borderline Personalities in Divorce
Handling Family Law Cases Involving Antisocial High Conflict People
ARTICLES
Domestic Violence vs. High Conflict Families: Are one or two people driving the conflict?
Domestic Violence and Personality Disorders: What's the Connection?
Living with High-Conflict People: Do's and Don'ts for Living with an Antisocial High Conflict Person
Differences in Dealing with Borderline, Narcissistic and Antisocial Clients in Family Law
Why I Wrote Splitting
Understanding Borderline Personality Disorder in Family Law Cases
OUR WEBSITE
https://www.highconflictinstitute.com/
QUESTIONS
Submit a Question for Bill and Megan
All of our books can be found in our online store or anywhere books are sold, including as e-books. You can also find these show notes at our site as well.
Note: We are not diagnosing anyone in our discussions, merely discussing patterns of behavior.
(00:00) - Welcome to It's All Your Fault
(00:38) - The 5 Types of People Who Can Ruin Your Life Part 4
(01:26) - Domestic Violence and HCPs
(03:49) - Bill's Background
(06:48) - Stats
(09:23) - Anti-Social
(14:38) - Verbally Abusive
(16:42) - Accountability
(18:53) - Disruptive
(20:21) - When Law Enforcement's Involved
(23:13) - Borderline Personality
(27:17) - More Reactive
(28:18) - Remorse
(29:41) - Can't Control Themselves
(31:06) - Generalizations
(31:38) - When in One of These Relationships
(36:09) - Reminders & Coming Next Week: Law Enforcement Guest
Learn more about our Conflict Influencer Class. Get started today!
Join the live global event, "Introduction to Regenerative Coffee Farming," an online event in English, Spanish, and Portuguese for coffee producers and the wider coffee industry on October 28, 29, and 30th. Register now at: https://www.eventbrite.com.au/e/introduction-to-regenerative-coffee-farming-tickets-1032555741017
••••••••••••••••••••••••••••••••
This is the 4th episode of a new five-part series on The Daily Coffee Pro by the Map It Forward Podcast, hosted by Lee Safar. Our first-time guest on the podcast for this new series is Miguel Zamora from the International Coffee Organization (ICO) based in London, UK. Miguel is the Coordinator of the Coffee Public-Private Taskforce. One of the responsibilities of this taskforce is to work towards a living income for coffee producers around the world. The theme for this series is "A living income for coffee producers". The five episodes of this series are:
1. Who and What Is The ICO? - https://youtu.be/xoxEyUdID7o
2. What Is A Living Income For Coffee Producers? - https://youtu.be/i5_FJ0nkmKw
3. The Stakeholders Defining Coffee Producers' Living Income - https://youtu.be/wevnoKEM_kQ
4. The Obstacles Preventing Farms Living Income - https://youtu.be/DuWl28ZM-pU
5. The Path Forward for Coffee Farmer Incomes - https://youtu.be/R_2vW88576E
In this episode of 'The Daily Coffee Pro by Map It Forward,' Lee and Miguel delve into the complexities surrounding living income for coffee producers, the impact of government and corporate policies, and the role of regenerative agriculture as a sustainable path forward. Lee and Miguel address issues of transparency, accountability, and the need for collaboration among various stakeholders to ensure the prosperity of coffee farmers. 
The conversation underscores the significance of informed consumer behavior and regulatory measures in shaping the future of coffee production and sustainability.
00:00 Introduction: The Voice of the People
00:46 Announcement: Regenerative Coffee Farming Workshop
02:14 Living Income for Coffee Producers
03:11 Generalization in the Coffee Industry
04:53 Challenges in Coffee Pricing and Living Income
10:24 Trust and Jaded Perspectives
12:34 The Role of Governments and Corporations
25:29 Complexities of the Coffee Crisis
30:38 Conclusion: The Path Forward
Connect with Miguel Zamora and the ICO here:
https://www.ico.org/
https://www.linkedin.com/in/miguelzamora/
https://www.instagram.com/icocoffeeorg
••••••••••••••••••••••••••••••••
Support this podcast by supporting our Patreon:
https://bit.ly/MIFPatreon
••••••••••••••••••••••••••••••••
The Daily Coffee Pro by Map It Forward Podcast Host: Lee Safar
https://www.mapitforward.coffee
https://www.instagram.com/mapitforward.coffee
https://www.instagram.com/leesafar
••••••••••••••••••••••••••••••••
In this special crossover episode of The Cognitive Revolution, Nathan shares an insightful conversation from the Latent.Space podcast. Swyx and Alessio interview Alistair Pullen of Cosine, creators of Genie, showcasing the cutting edge of AI automation in software engineering. Learn how Cosine achieves state-of-the-art results on the SWE-bench benchmark by implementing advanced AI techniques. This episode complements Nathan's recent discussion on AI Automation, demonstrating how far these practices can be pushed in real-world applications. Don't miss this opportunity to explore the future of AI-driven software development and its implications for businesses across industries. Check out the Latent.Space podcast here: https://www.latent.space Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork.co/ SPONSORS: WorkOS: Building an enterprise-ready SaaS app? WorkOS has got you covered with easy-to-integrate APIs for SAML, SCIM, and more. Join top startups like Vercel, Perplexity, Jasper & Webflow in powering your app with WorkOS. Enjoy a free tier for up to 1M users! Start now at https://bit.ly/WorkOS-Turpentine-Network Weights & Biases Weave: Weights & Biases Weave is a lightweight AI developer toolkit designed to simplify your LLM app development. With Weave, you can trace and debug input, metadata and output with just 2 lines of code. Make real progress on your LLM development and visit the following link to get started with Weave today: https://wandb.me/cr 80,000 Hours: 80,000 Hours offers free one-on-one career advising for Cognitive Revolution listeners aiming to tackle global challenges, especially in AI. They connect high-potential individuals with experts, opportunities, and personalized career plans to maximize positive impact. Apply for a free call at https://80000hours.org/cognitiverevolution to accelerate your career and contribute to solving pressing AI-related issues. 
Omneky: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ CHAPTERS: (00:00:00) About the Show (00:00:22) Sponsors: WorkOS (00:01:22) About the Episode (00:04:29) Alistair and Cosine intro (00:13:50) Building the Code Retrieval Tool (00:17:36) Sponsors: Weights & Biases Weave | 80,000 Hours (00:20:15) Developing Genie and Fine-tuning Process (00:27:41) Working with Customer Data (00:30:53) Code Retrieval Challenges and Solutions (00:36:39) Sponsors: Omneky (00:37:02) Planning and Reasoning in AI Models (00:45:55) Language Support and Generalization (00:49:46) Fine-tuning Experience with OpenAI (00:52:56) Synthetic Data and Self-improvement Loop (00:55:57) Benchmarking and SWE-bench Results (01:01:47) Future Plans for Genie (01:03:02) Industry Trends and Cursor's Success (01:05:23) Calls to Action and Ideal Customers (01:08:43) Outro
Conceptual papers that offer new theories are hard to write and even harder to publish. You do not have empirical data to back up your arguments, which makes the papers easy to reject in the review cycle. We are also typically not well trained in theorizing, and there isn't even a clear process for theorizing that we could learn or follow. Does that mean we shouldn't even try to write theory papers? We ponder these questions, figure out what is so hard about writing conceptual papers, and share a few tricks that might help if you still want to write such a paper.
References
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing Artificial Intelligence. MIS Quarterly, 45(3), 1433-1450.
Glaser, B. G., & Strauss, A. L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine Publishing Company.
Watson, R. T., Boudreau, M.-C., & Chen, A. J. (2010). Information Systems and Environmentally Sustainable Development: Energy Informatics and New Directions for the IS Community. MIS Quarterly, 34(1), 23-38.
Lee, A. S., & Baskerville, R. (2003). Generalizing Generalizability in Information Systems Research. Information Systems Research, 14(3), 221-243.
Tsang, E. W. K., & Williams, J. N. (2012). Generalization and Induction: Misconceptions, Clarifications, and a Classification of Induction. MIS Quarterly, 36(3), 729-748.
Yoo, Y., Henfridsson, O., & Lyytinen, K. (2010). The New Organizing Logic of Digital Innovation: An Agenda for Information Systems Research. Information Systems Research, 21(4), 724-735.
Yoo, Y. (2010). Computing in Everyday Life: A Call for Research on Experiential Computing. MIS Quarterly, 34(2), 213-231.
Merleau-Ponty, M. (1962). Phenomenology of Perception. Routledge.
Baldwin, C. Y., & Clark, K. B. (2000). Design Rules, Volume 1: The Power of Modularity. MIT Press.
Weick, K. E. (1989). Theory Construction as Disciplined Imagination. Academy of Management Review, 14(4), 516-531.
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design Science in Information Systems Research. MIS Quarterly, 28(1), 75-105.
Sætre, A. S., & van de Ven, A. H. (2021). Generating Theory by Abduction. Academy of Management Review, 46(4), 684-701.
Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291.
Farjoun, M. (2010). Beyond Dualism: Stability and Change As a Duality. Academy of Management Review, 35(2), 202-225.
Recker, J., & Green, P. (2019). How do Individuals Interpret Multiple Conceptual Models? A Theory of Combined Ontological Completeness and Overlap. Journal of the Association for Information Systems, 20(8), 1210-1241.
Jabbari, M., Recker, J., Green, P., & Werder, K. (2022). How Do Individuals Understand Multiple Conceptual Modeling Scripts? Journal of the Association for Information Systems, 23(4), 1037-1070.
Cornelissen, J. P. (2017). Editor's Comments: Developing Propositions, a Process Model, or a Typology? Addressing the Challenges of Writing Theory Without a Boilerplate. Academy of Management Review, 42(1), 1-9.
Recker, J., Lukyanenko, R., Jabbari, M., Samuel, B. M., & Castellanos, A. (2021). From Representation to Mediation: A New Agenda for Conceptual Modeling Research in a Digital World. MIS Quarterly, 45(1), 269-300.
Haerem, T., Pentland, B. T., & Miller, K. (2015). Task Complexity: Extending a Core Concept. Academy of Management Review, 40(3), 446-460.
Kallinikos, J., Aaltonen, A., & Marton, A. (2013). The Ambivalent Ontology of Digital Artifacts. MIS Quarterly, 37(2), 357-370.
Ho, S. Y., Recker, J., Tan, C.-W., Vance, A., & Zhang, H. (2023). MISQ Special Issue on Registered Reports. MIS Quarterly.
Simon, H. A. (1990). Bounded Rationality. In J. Eatwell, M. Milgate, & P. Newman (Eds.), Utility and Probability (pp. 15-18). Palgrave Macmillan.
James, W. (1890). The Principles of Psychology. Henry Holt and Company.
Watson, H. J. (2009). Tutorial: Business Intelligence - Past, Present, and Future. Communications of the Association for Information Systems, 25(39), 487-510.
Baird, A., & Maruping, L. M. (2021). The Next Generation of Research on IS Use: A Theoretical Framework of Delegation to and from Agentic IS Artifacts. MIS Quarterly, 45(1), 315-341.
Max is the CEO and co-founder of Nixtla, where he is developing highly accurate forecasting models using time series data and deep learning techniques, which developers can use to build their own pipelines. Max is a self-taught programmer and researcher with a lot of prior experience building things from scratch. 00:00:50 Introduction 00:01:26 Entry point in AI 00:04:25 Origins of Nixtla 00:07:30 Idea to product 00:11:21 Behavioral economics & psychology to time series prediction 00:16:00 Landscape of time series prediction 00:26:10 Foundation models in time series 00:29:15 Building TimeGPT 00:31:36 Numbers and GPT models 00:34:35 Generalization to real-world datasets 00:38:10 Math reasoning with LLMs 00:40:48 Neural Hierarchical Interpolation for Time Series Forecasting 00:47:15 TimeGPT applications 00:52:20 Pros and Cons of open-source in AI 00:57:20 Insights from building AI products 01:02:15 Tips to researchers & hype vs Reality of AI More about Max: https://www.linkedin.com/in/mergenthaler/ and Nixtla: https://www.nixtla.io/ Check out TimeGPT: https://github.com/Nixtla/nixtla About the Host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
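As a point of reference for the forecasting discussion, here is a minimal seasonal-naive baseline, the kind of simple benchmark that neural forecasters like the ones discussed here are commonly judged against. The function and data are illustrative only, not Nixtla's API.

```python
# Seasonal-naive forecast: predict that each future point repeats the value
# observed exactly one season earlier. A surprisingly strong baseline for
# periodic series, and a standard yardstick for learned forecasting models.
def seasonal_naive(series, season_length, horizon):
    if len(series) < season_length:
        raise ValueError("need at least one full season of history")
    last_season = series[-season_length:]          # most recent full season
    return [last_season[h % season_length] for h in range(horizon)]

# Toy series with period 4 (e.g., a value per quarter, two years of history).
history = [10, 20, 30, 40, 12, 22, 32, 42]
print(seasonal_naive(history, season_length=4, horizon=6))
# -> [12, 22, 32, 42, 12, 22]
```

A learned model only earns its complexity if it beats this kind of baseline on held-out data.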
In this podcast, we dive into the new concept of OCR 2.0 - the future of OCR with LLMs. We explore how this new approach addresses the limitations of traditional OCR by introducing a unified, versatile system capable of understanding various visual languages. We discuss the innovative GOT (General OCR Theory) model, which utilizes a smaller, more efficient language model. The podcast highlights GOT's impressive performance across multiple benchmarks, its ability to handle real-world challenges, and its capacity to preserve complex document structures. We also examine the potential implications of OCR 2.0 for future human-computer interactions and visual information processing across diverse fields. Key Points Traditional OCR vs. OCR 2.0 Current OCR limitations (multi-step process, prone to errors) OCR 2.0: A unified, end-to-end approach Principles of OCR 2.0 End-to-end processing Low cost and accessibility Versatility in recognizing various visual languages GOT (General OCR Theory) Model Uses a smaller, more efficient language model (Qwen) Trained on diverse visual languages (text, math formulas, sheet music, etc.) Training Innovations Data engines for different visual languages E.g. LaTeX for mathematical formulas Performance and Capabilities State-of-the-art results on standard OCR benchmarks Outperforms larger models in some tests Handles real-world challenges (blurry images, odd angles, different lighting) Advanced Features Formatted document OCR (preserving structure and layout) Fine-grained OCR (precise text selection) Generalization to untrained languages This episode was generated using Google Notebook LM, drawing insights from the paper "General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model". Stay ahead in your AI journey with Bot Nirvana AI Mastermind. Podcast Transcript: All right, so we're diving into the future of OCR today. Really interesting stuff.
Yeah, and you know how sometimes you just scan a document, you just want the text, you don't really think twice about it. Right, right. But this paper, "General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model." Catchy title. I know, right? But it's not just the title, they're proposing this whole new way of thinking about OCR. OCR 2.0 as they call it. Exactly, it's not just about text anymore. Yeah, it's really about understanding any kind of visual information, like humans do. So much bigger. It's a really ambitious goal. Okay, so before we get ahead of ourselves, let's back up for a second. Okay. How does traditional OCR even work? Like when you and I scan a document, what's actually going on? Well, it's kind of like, imagine an assembly line, right? First, the system has to figure out where on the page the actual text is. Find it. Right, isolate it. Then it crops those bits out. Okay. And then it tries to recognize the individual letters and words. So it's a multi-step process? Yeah, it's a whole process. And we've all been there, right? When one of those steps goes wrong. Oh, tell me about it. And you get that OCR output that's just… Gibberish, total gibberish. The worst. And the paper really digs into this. They're saying that whole assembly line approach, it's not just prone to errors, it's just clunky. Yeah, very inefficient. Like different fonts can throw it off. Right. Different languages, forget it. Oh yeah, if it's not basic printed text, OCR 1.0 really struggles. It's like it doesn't understand the context. Yeah, exactly. It's treating information like it's just a bunch of isolated letters, instead of seeing the bigger picture, you know, the relationships between them. It doesn't get the human element of it. It's missing that human touch, that understanding of how we visually organize information. And that's a problem. A big one. Especially now, when we're just like drowning in visual information everywhere you look.
It's true, we need something way more powerful than what we have now. We need a serious upgrade. Enter OCR 2.0. That's what they're proposing, yeah. So what's the magic formula? What makes it so different from what we're used to? Well, the paper lays out three main principles for OCR 2.0. Okay. First, it has to be end to end. It needs to be… End to end. Low cost, accessible. Got it. And most importantly, it needs to be versatile. Versatile, that's a good one. So okay, let's break it down. End to end. Does that mean ditching that whole assembly line thing we were talking about? Exactly, yeah. Instead of all those separate steps, OCR 2.0, they're saying it should be one unified model. Okay. One model that can handle the entire process. So much simpler. And much more efficient. Okay, that makes sense. And easier to use, which is key. And then low cost, I mean. Oh, absolutely. That's got to be a priority. We want this to be accessible to everyone, not just… Sure. You know. Right, not just companies with tons of resources. Exactly. And the researchers were really clever about this. Yeah. They actually chose to use a smaller, more efficient language model. Oh, really? Yeah, it's called Qwen. Instead of one of the massive ones that have been in the news. Exactly. And they proved that you don't need this giant energy-guzzling model to get really impressive results with OCR. So efficient and powerful. I like it. That's the goal. But versatile. That's the part that always gets me thinking because… It's where things get really interesting. Yeah, we're not even just talking about recognizing text anymore. No, it's about recognizing any kind of… Visual information. Visual information that humans create, right? Yeah. Like, think about it. Math formulas, diagrams, even something like sheet music. Hold on. Sheet music. Like actually reading music. Yeah. And it's a really good example of how different this is. Okay.
Because music, it's not just about recognizing the notes themselves. Right. It's about understanding the timing, the rhythm, how those symbols all relate to each other. It's a whole system. That's wild. Okay, so how do they even begin to teach a machine to do that? Well, they got really creative with the training data. Okay. Instead of just feeding it like raw text and images, they built these data engines to teach GOT different visual languages. Data engines. That sounds intense. Yeah, it's basically like, imagine for the sheet music they used, let me see, it's called Humdrum Kern. Okay. And essentially what that does is it turns musical notation into code. Oh, interesting. So GOT learned to connect those visual symbols to their actual musical meaning. So it's learning the language. Exactly. That's incredible, but sheet music's just one example, right? What other kind of crazy stuff did they throw at this thing? Oh, they really tried everything. Math formulas, those are always fun. I bet. Molecular formulas, even simple geometric shapes, squares and circles. Really? Yeah, they used all sorts of tricks to represent these visual elements as code. So GOT could understand it. Exactly. Like for the math formulas, they used a language called LaTeX. Have you heard of that one? Yeah, yeah, that's how a lot of scientists and mathematicians write equations. Exactly. It's how they write it so computers can understand it. It's like the code of math. Exactly. And so by training GOT on LaTeX, they weren't just teaching it to recognize what a formula looks like. Right, right. They were teaching it the underlying structure, like the grammar of math itself. Okay, now that is really cool. Yeah, and they found that GOT could actually generalize this knowledge. It could even recognize elements of formulas that it had never seen before. No way.
It was like it was starting to understand the language of math, which is pretty incredible when you think about it. Yeah, that's wild. Okay, so we've got this model. It can recognize text. It can recognize all these other complex visual languages. We're getting somewhere. But how does it actually perform? Like does it actually live up to the hype? So this is it, huh? We've got this super OCR model that's been trained on everything but the kitchen sink. Time to put it to the test. They put it through the wringer. Yeah. What did they even start with? Well, the classics, right? Plain document OCR, PDFs, articles, that kind of thing. Basic but important. Exactly. And they tested it in both English and Chinese just to see how well-rounded it was. And, drumroll, how'd it do? Crushed it. Absolutely crushed it. No way. State-of-the-art performance on all the standard document OCR benchmarks. That's amazing. Oh, and here's the really interesting part. It actually outperformed some much larger, more complex models in their tests. So it's efficient and it's powerful. That's a winning combo. Exactly. It shows you don't always have to go bigger to get better results. Okay, that's awesome. But what about real-world stuff? You know, the messy stuff. Oh, they thought of that. Like trying to read a sign with a weird font or a crumpled-up napkin with handwriting on it? Yep. All that. They have these data sets specifically designed to trip up OCR systems with blurry images, weird angles, different lighting. The stuff nightmares are made of. Right. And GOT handled it all like a champ. It was really impressive. Okay, so this isn't just some theoretical thing. It actually works. It's the real deal. I'm sold. But there was another thing they mentioned, something about formatted document OCR. What is that exactly? That's where things get really elegant. With formatted documents, it's not just about recognizing the words. Right. It's about understanding the structure of a document.
Okay, like the headings and bullet points? Exactly. Tables, the whole nine yards. It's about preserving the way information is organized. So it's like imagine being able to convert a complex PDF into a perfectly formatted word doc automatically. Precisely. That's the dream, right? It would save me so many hours of my life. Oh, tell me about it. No more reformatting everything manually. Did GOT actually manage to do that? It did. And it wasn't just a fluke. The researchers found that GOT was consistently able to preserve document structure, which really shows that this OCR 2.0 approach, it can understand information hierarchy in a way that we just haven't seen before. That's a game changer. Okay, before I forget, we got to talk about that fine-grained OCR thing they mentioned. Yes, that's where it gets really precise. It sounds like you have microscopic control over the text. Like you're telling it exactly what to read. Yeah. It's like having a laser pointer for text. You can say, read the text in that green box over there, or read the text between these coordinates on the image. That is wild. And how accurate is it when you get that specific? It was surprisingly accurate, even at that level of granularity. That's amazing. And they didn't even have to specifically train it for every little thing. Well, that's the best part. They actually found that GOT could sometimes recognize text in languages they hadn't even trained it on. What? Are you serious? Yeah. It's because it had encountered similar characters in different contexts, so it was able to make educated guesses. So it's learning. It's actually learning. Exactly. It's not just pattern matching anymore. It's actually generalizing its knowledge. Okay, so big picture here. Is OCR 2.0 the real deal, or is this just hype? I think the results speak for themselves. This isn't just a minor upgrade. This is a fundamental shift in how we think about extracting meaning from images.
GOT proves that this OCR 2.0 approach, it's not just a pipe dream. It has incredible potential to change everything. Yeah, it really feels like we're moving beyond just digitizing stuff. You know, it's like machines are actually starting to understand what they're seeing. Exactly. It's a whole new era of human-computer interaction. And if GOT can already handle sheet music and geometric shapes and complex document formatting, I mean, the possibilities are, it's kind of mind-blowing. It really makes you wonder what other fields are on the verge of their own 2.0 transformations. That's a great question, one to ponder. But for now, this has been an incredible deep dive into the future of OCR. Thanks for joining me. And until next time, keep those minds curious.
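The two ideas the conversation keeps returning to — "data engines" that pair a machine-readable source (LaTeX, Humdrum Kern) with the rendering the model must read, and fine-grained OCR restricted to a region — can be sketched in a few lines. This is a minimal illustration only: the record fields and prompt wording below are assumptions for the sketch, not the actual GOT codebase or API.

```python
# Illustrative sketch of two OCR-2.0 ideas from the episode.
# All names and formats here are hypothetical, not taken from the GOT code.

def make_training_pair(source: str, visual_language: str) -> dict:
    """One "data engine" record: pair a machine-readable source with the
    text the model must emit. In a real pipeline the source would also be
    rendered to an image (e.g. LaTeX through a TeX engine) for the encoder."""
    return {
        "visual_language": visual_language,  # e.g. "latex", "kern"
        "render_from": source,               # what gets rendered to pixels
        "target_text": source,               # ground-truth output text
    }

def fine_grained_prompt(box: tuple) -> str:
    """Compose a region-restricted OCR request, like "read the text
    between these coordinates" in the episode."""
    x1, y1, x2, y2 = box
    return f"OCR the text inside the region [{x1},{y1},{x2},{y2}]."

pairs = [
    make_training_pair(r"\frac{a+b}{c}", "latex"),       # a math formula
    make_training_pair("**kern\n4c 4e 4g\n*-", "kern"),  # a sheet-music snippet
]
print(fine_grained_prompt((10, 20, 200, 60)))
```

The point of the pairing is that the ground truth is structured code, not flat text, so the model is pushed to learn the grammar of each visual language rather than isolated glyphs.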
Do you want to engage your buyers with tailored communication strategies that enhance your sales success? We'll be sharing the solution so that you can achieve that result. Discover the unexpected connection between AI insights and the movie "Men of Honor". How does this true story inspire a new approach to sales success? Dive into this intriguing journey with Amarpreet Kalkat on the Modern Selling Podcast. What's the surprising link? Find out now. Be honest, are you tired of sending out countless generic messages and emails, only to be met with disappointing results? You're not alone. You've probably been told to cast a wide net and hope for the best, but let's face it, that approach is leaving you feeling frustrated and unproductive. If you're tired of the same old ineffective strategies and the pain of not getting the results you want, it's time to try a new approach. Uncover the Power of AI for Sales Success AI in sales is a significant step change, providing valuable insights beyond data. Utilize AI wisely to gain a deeper understanding of buyer psychology and engagement. The potential of AI lies in providing insights and enhancing sales strategies thoughtfully. Amarpreet Kalkat's journey into the realm of AI and sales is a fascinating blend of professional expertise and personal transformation. With over a decade of active involvement in AI, he defies the notion of it being a new endeavor. As a two-time AI entrepreneur, his commitment to excellence led to global recognition, with Forrester acknowledging his consumer intelligence AI as a top contender. Beyond the professional sphere, Amarpreet's evolution from fearing dogs to becoming a devoted owner of a majestic German shepherd adds a relatable and endearing layer to his story. His insights on leveraging AI for sales success are rooted in a unique blend of personal growth and professional accomplishments, offering a refreshing perspective for sales professionals aiming to enhance their strategies.
Amarpreet's narrative is a compelling fusion of determination, resilience, and unexpected charm, leaving a lasting impact on anyone seeking to navigate the modern sales landscape with sophistication and innovation. Buyer intelligence is nothing but a way of building that buyer-first approach. Because when you walk into a meeting, you spend 30 seconds looking at someone's profile and say, okay, this is what matters to this person. This is what doesn't matter. Hence I should say this, not say that, right? Simple things. It's not about you. Your process, your qualification methodology, your MEDDIC, your MEDDPICC. No, the buyer doesn't care. - Amarpreet Kalkat My special guest is Amarpreet Kalkat Amarpreet Kalkat is the CEO and founder of Humantic AI, with a solid decade of experience in the field of AI. His previous AI startup was recognized by Forrester as one of the top five consumer intelligence AIs, and The Wall Street Journal labeled it as a technology that could reshape the world. With a strong emphasis on behavior and personality prediction engines, Amarpreet is dedicated to providing sellers with invaluable insights to adopt a "buyer first" approach. His expertise in leveraging AI for sales success offers a wealth of knowledge that promises to enhance engagement and tailored communication strategies for sales professionals seeking to refine their approach. In this episode, you will be able to: Maximize sales potential with AI-driven strategies. Tailor your sales approach to prioritize the buyer's needs. Gain valuable insights on leveraging human touch in AI-powered sales. Craft personalized messages to resonate with your prospects. Uncover the impact of personality insights on driving sales success.
The key moments in this episode are: 00:00:09 - Introduction to AI in Sales 00:03:29 - Buyer First Approach 00:07:34 - Nuanced Approach to AI in Sales 00:10:10 - Leveraging AI for Thoughtful Engagement 00:13:45 - The Challenge of AI SDRs 00:14:49 - The State of AI in Sales 00:16:10 - The Future of AI in Sales 00:17:28 - The Role of AI in Message Preparation 00:19:13 - Risks of AI in Sales 00:27:34 - Importance of Buyer Intelligence 00:29:36 - Importance of Putting Buyers First 00:31:01 - Applying Buyer-First Approach 00:34:55 - Challenges and Solutions in Buyer Insight 00:36:21 - Understanding Buyer's Personality 00:39:38 - Personalized Engagement with Buyers 00:42:36 - Importance of Buyer Intelligence 00:43:07 - Subject Line Performance 00:45:01 - Tactics vs. Concepts 00:46:33 - Generalization in Advice 00:48:35 - All-Time Favorite Movie Timestamped summary of this episode: 00:00:09 - Introduction to AI in Sales Mario Martinez introduces Amarpreet Kalkat, CEO and founder of Humantic AI, to discuss AI and sales personality insights for better buyer engagement. 00:03:29 - Buyer First Approach Amarpreet emphasizes the importance of a "buyer first" approach in sales, focusing on understanding buyers at a deeper level and shifting the sales conversation to be more about the buyer. 00:07:34 - Nuanced Approach to AI in Sales Amarpreet highlights the need for a nuanced approach to leveraging AI in sales, emphasizing the importance of thoughtful and intelligent automation over "spray and pray" tactics. 00:10:10 - Leveraging AI for Thoughtful Engagement Amarpreet discusses the potential of combining multiple signals and insights to personalize sales engagement, moving beyond traditional ICP targeting and focusing on understanding user psychology and psychometrics.
00:13:45 - The Challenge of AI SDRs Mario Martinez and Amarpreet discuss the challenges and implications of AI-driven SDRs in sales, addressing the issues of spammy and unthoughtful messaging, and the need for more thoughtful engagement strategies. 00:14:49 - The State of AI in Sales Amarpreet discusses the current state of AI in sales, mentioning that the AI SDR is not fully ready but is better than most bad SDRs. 00:16:10 - The Future of AI in Sales The discussion shifts to the future of AI in sales, with Amarpreet emphasizing the importance of human-assisted AI and the potential for AI to coexist with human sales representatives. 00:17:28 - The Role of AI in Message Preparation Amarpreet explores the idea of AI preparing messages for sales outreach, suggesting the possibility of human validation based on personality insights before sending out automated messages. 00:19:13 - Risks of AI in Sales The conversation delves into the risks of AI in sales, including the potential for leaders to replace people with AI without fully understanding its impact on the sales process and the disillusionment surrounding AI's predicted economic impact. 00:27:34 - Importance of Buyer Intelligence Amarpreet introduces the concept of buyer intelligence, emphasizing the significance of understanding buyers at a deeper level and the potential for salespeople to drive revenue by helping buyers buy more effectively. 00:29:36 - Importance of Putting Buyers First Amarpreet emphasizes the significance of prioritizing the buyer's needs and interests in sales. He highlights the effectiveness of a buyer-first approach and shares insights on understanding what matters to each individual buyer. 00:31:01 - Applying Buyer-First Approach Amarpreet illustrates how a seller can apply the buyer-first approach using Humantic's insights. He shares a real-life example of tailoring his communication to align with the specific needs and preferences of a potential customer.
00:34:55 - Challenges and Solutions in Buyer Insight Mario discusses the challenges faced when there are no buyer insights available. Amarpreet acknowledges the limitations and shares how Humantic is working on expanding its data catchment area to capture dynamic buyer intelligence. 00:36:21 - Understanding Buyer's Personality Amarpreet explains the importance of understanding a buyer's personality and how it influences decision-making. He emphasizes the value of dynamic buyer intelligence in capturing real-time insights into a buyer's mood and behavior. 00:39:38 - Personalized Engagement with Buyers Mario and Amarpreet discuss the power of personalized engagement based on a buyer's personality. They highlight the need to tailor communication and interactions to suit individual buyer preferences for effective sales engagement. 00:42:36 - Importance of Buyer Intelligence Amarpreet discusses the necessity of buyer intelligence and optimizing messages before sending them out. The focus is on word count and subject line construction for better engagement. 00:43:07 - Subject Line Performance Amarpreet shares insights into subject line performance, highlighting the impact of longer subject lines on engagement and click rates, contrary to the popular belief of shorter subject lines being more effective. 00:45:01 - Tactics vs. Concepts The conversation shifts to the distinction between tactics and concepts, emphasizing the importance of fundamental concepts over temporary tactics for long-term success in sales engagement. 00:46:33 - Generalization in Advice The discussion delves into the issue of generalized advice and statistics, emphasizing the value of specific and contrary approaches for real success in sales engagement. 00:48:35 - All-Time Favorite Movie Amarpreet shares his all-time favorite movie, "Men of Honor," and reflects on the impact of the movie's true story and its memorable scenes.
Embrace Buyer-First Selling Strategy Prioritizing the buyer's needs and interests is crucial in sales success. Tailoring sales approaches to align with buyer preferences leads to higher engagement. Top sellers excel at putting buyers first, achieving higher win rates and success. Gain Humantic AI Sales Insights Amarpreet shares insights on human-assisted AI, emphasizing assistive AI over replacement. Buyer intelligence tools enhance understanding of buyers' characteristics for personalized engagement. The episode highlights the role of technology in supporting a buyer-first approach in sales. The resources mentioned in this episode are: Connect with Amarpreet Kalkat on LinkedIn and send a personalized connection request mentioning the Modern Selling Podcast. Follow Amarpreet Kalkat on Twitter for more insights and updates. Download FlyMSG.io for free to save 20 hours or more in a month and increase your productivity. Give the Modern Selling Podcast a five-star rating and review on iTunes to show your support.
Is your gym's marketing hitting the right balance between cost and conversions? Welcome to the Gym Marketing Made Simple podcast, your go-to resource for exploring the strategies that boutique fitness gyms need to thrive. In this week's episode, we dive deep into the challenges and strategies surrounding boutique gym marketing. Joined by Tommy Allen, we discuss balancing targeting a specific audience while keeping your appeal broad enough to avoid high-cost, low-conversion leads. We break down the math behind lead generation—aiming for $13-$18 per lead, needing a 60% conversion rate to make a $45 lead profitable—and how your ad copy and structured offers can make or break your campaign's success. The conversation highlights the importance of retention and boosting the average revenue per member to ensure sustainable growth. While cheaper marketing options may seem appealing, the team emphasizes that predictable results from tested strategies are worth the investment. Tommy also shares insights on how testing new ad copy variations and crafting offers tailored to client needs can lead to consistent marketing outcomes, ensuring your gym continues growing. 00:00 Intro 00:30 Marketing Strategy for Boutique Fitness Gyms 03:55 Balancing Specificity and Generalization in Advertising 04:45 Cost Per Lead and Conversion Rates 21:34 Structuring Offers and Retention Strategies 24:21 The Value of Predictable Results in Marketing Tune in to learn how your gym can achieve sustainable marketing success and what actionable steps you can take to optimize your approach today.
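The lead-generation math above can be sanity-checked in a few lines. Only the $13-$18 and $45 lead costs and the 60% conversion rate come from the episode; the function name and the sample figures plugged in are illustrative.

```python
# A quick sanity check of the episode's lead-generation math.

def cost_per_new_member(cost_per_lead: float, conversion_rate: float) -> float:
    """Acquisition cost per member = lead cost / fraction of leads that convert."""
    return cost_per_lead / conversion_rate

# A $45 lead closing at a 60% rate costs roughly $75 per new member,
# which is why that conversion rate is what makes the expensive lead viable.
print(cost_per_new_member(45, 0.60))

# A mid-range $15 lead (inside the $13-$18 target) at the same close rate
# costs roughly $25 per new member.
print(cost_per_new_member(15, 0.60))
```

The same function makes it easy to compare any ad campaign: whichever combination of lead cost and close rate yields the lowest cost per member, relative to average revenue per member, wins.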
Kodsnack 600 - Just use +, with Christian Clausen 2024-09-03 05:26 Fredrik talks to Christian Clausen about the many facets of simplicity. The cloud and serverless were supposed to be simpler than running your own hardware, but you easily get stuck trying to select the right message bus, needing to know the intricacies of your chosen cloud provider's infrastructure, and the like. You end up building your software around the infrastructure you've ended up with - rather than picking infrastructure which is right for your software. The CFO should not be the architect of the software. Core values and principles - set them up, reflect on them, and notice and decide what to do when they are broken. Should the system change if its core principles are broken, or should the principles be updated to reflect reality? Christian argues simplicity should be a core principle, and very carefully considered and encouraged. There are enough barriers already, even before you start adding complexity around the problems you're trying to solve. And hide the things you do pull in behind true abstractions which don't leak all over the place. Don't ask what you can add, ask what you can postpone. Generality adds complexity. The more often something changes, the more specific it should be. Where are the tools which suggest more things to remove instead of things to add? Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi.
Links Christian Øredev 2023 Designing infrastructure-free systems - Christian's Øredev 2023 talk Merrymake - Christian's company Five lines of code Nosql Conway - don't let HR be the architect Christian's blog Spring Quarkus - “supersonic subatomic Java” Reactive programming Hibernate Gateway drug React Angular Vue Google's serverless is actually Knative Support us on Ko-fi! Redux Sonarqube Occam's razor Cyclomatic complexity Don't repeat yourself A/B testing Christian on Medium Bonus links - thanks Tomas Kronvall! Adding two numbers in Javascript Some additional backstory Titles Life happened Serverless the right way It's grown a lot I love refactoring Just as hard as choosing hardware Everything into one collection I don't want the CFO to be the architect of the software It disappears immediately Entropy for the real world I came back after six years Why though? Why do you have this? What problem couldn't you solve without it? There are enough barriers already Just use + Zero of the founding principles But it looks like ice cream I've always hated frameworks I feel like I'm writing Javascript Was the salary worth it? Lending the money to your future self What can I postpone? Generalization land Suggest I remove things! Is this the right problem to have? I want to say no more Humans can build this
Jürgen Schmidhuber, the father of generative AI, shares his groundbreaking work in deep learning and artificial intelligence. In this exclusive interview, he discusses the history of AI, some of his contributions to the field, and his vision for the future of intelligent machines. Schmidhuber offers unique insights into the exponential growth of technology and the potential impact of AI on humanity and the universe. YT version: https://youtu.be/DP454c1K_vQ MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. TOC 00:00:00 Intro 00:03:38 Reasoning 00:13:09 Potential AI Breakthroughs Reducing Computation Needs 00:20:39 Memorization vs. Generalization in AI 00:25:19 Approach to the ARC Challenge 00:29:10 Perceptions of Chat GPT and AGI 00:58:45 Abstract Principles of Jurgen's Approach 01:04:17 Analogical Reasoning and Compression 01:05:48 Breakthroughs in 1991: the P, the G, and the T in ChatGPT and Generative AI 01:15:50 Use of LSTM in Language Models by Tech Giants 01:21:08 Neural Network Aspect Ratio Theory 01:26:53 Reinforcement Learning Without Explicit Teachers Refs: ★ "Annotated History of Modern AI and Deep Learning" (2022 survey by Schmidhuber): ★ Chain Rule For Backward Credit Assignment (Leibniz, 1676) ★ First Neural Net / Linear Regression / Shallow Learning (Gauss & Legendre, circa 1800) ★ First 20th Century Pioneer of Practical AI (Quevedo, 1914) ★ First Recurrent NN (RNN) Architecture (Lenz, Ising, 1920-1925) ★ AI Theory: Fundamental Limitations of Computation and Computation-Based AI (Gödel, 1931-34) ★ Unpublished ideas about evolving RNNs (Turing, 1948) ★ Multilayer Feedforward NN Without Deep Learning (Rosenblatt, 1958) ★ First Published Learning RNNs (Amari and others, ~1972) ★ First
Deep Learning (Ivakhnenko & Lapa, 1965) ★ Deep Learning by Stochastic Gradient Descent (Amari, 1967-68) ★ ReLUs (Fukushima, 1969) ★ Backpropagation (Linnainmaa, 1970); precursor (Kelley, 1960) ★ Backpropagation for NNs (Werbos, 1982) ★ First Deep Convolutional NN (Fukushima, 1979); later combined with Backprop (Waibel 1987, Zhang 1988). ★ Metalearning or Learning to Learn (Schmidhuber, 1987) ★ Generative Adversarial Networks / Artificial Curiosity / NN Online Planners (Schmidhuber, Feb 1990; see the G in Generative AI and ChatGPT) ★ NNs Learn to Generate Subgoals and Work on Command (Schmidhuber, April 1990) ★ NNs Learn to Program NNs: Unnormalized Linear Transformer (Schmidhuber, March 1991; see the T in ChatGPT) ★ Deep Learning by Self-Supervised Pre-Training. Distilling NNs (Schmidhuber, April 1991; see the P in ChatGPT) ★ Experiments with Pre-Training; Analysis of Vanishing/Exploding Gradients, Roots of Long Short-Term Memory / Highway Nets / ResNets (Hochreiter, June 1991, further developed 1999-2015 with other students of Schmidhuber) ★ LSTM journal paper (1997, most cited AI paper of the 20th century) ★ xLSTM (Hochreiter, 2024) ★ Reinforcement Learning Prompt Engineer for Abstract Reasoning and Planning (Schmidhuber 2015) ★ Mindstorms in Natural Language-Based Societies of Mind (2023 paper by Schmidhuber's team) https://arxiv.org/abs/2305.17066 ★ Bremermann's physical limit of computation (1982) EXTERNAL LINKS CogX 2018 - Professor Juergen Schmidhuber https://www.youtube.com/watch?v=17shdT9-wuA Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability (Neural Networks, 1997) https://sferics.idsia.ch/pub/juergen/loconet.pdf The paradox at the heart of mathematics: Gödel's Incompleteness Theorem - Marcus du Sautoy https://www.youtube.com/watch?v=I4pQbo5MQOs (Refs truncated, full version in the YT video description)
How do we figure out what large language models believe? In fact, do they even have beliefs? Do those beliefs have locations, and if so, can we edit those locations to change the beliefs? Also, how are we going to get AI to perform tasks so hard that we can't figure out if they succeeded at them? In this episode, I chat to Peter Hase about his research into these questions. Patreon: patreon.com/axrpodcast Ko-fi: ko-fi.com/axrpodcast The transcript: https://axrp.net/episode/2024/08/24/episode-35-peter-hase-llm-beliefs-easy-to-hard-generalization.html Topics we discuss, and timestamps: 0:00:36 - NLP and interpretability 0:10:20 - Interpretability lessons 0:32:22 - Belief interpretability 1:00:12 - Localizing and editing models' beliefs 1:19:18 - Beliefs beyond language models 1:27:21 - Easy-to-hard generalization 1:47:16 - What do easy-to-hard results tell us? 1:57:33 - Easy-to-hard vs weak-to-strong 2:03:50 - Different notions of hardness 2:13:01 - Easy-to-hard vs weak-to-strong, round 2 2:15:39 - Following Peter's work Peter on Twitter: https://x.com/peterbhase Peter's papers: Foundational Challenges in Assuring Alignment and Safety of Large Language Models: https://arxiv.org/abs/2404.09932 Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs: https://arxiv.org/abs/2111.13654 Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models: https://arxiv.org/abs/2301.04213 Are Language Models Rational? 
The Case of Coherence Norms and Belief Revision: https://arxiv.org/abs/2406.03442 The Unreasonable Effectiveness of Easy Training Data for Hard Tasks: https://arxiv.org/abs/2401.06751 Other links: Toy Models of Superposition: https://transformer-circuits.pub/2022/toy_model/index.html Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV): https://arxiv.org/abs/1711.11279 Locating and Editing Factual Associations in GPT (aka the ROME paper): https://arxiv.org/abs/2202.05262 Of nonlinearity and commutativity in BERT: https://arxiv.org/abs/2101.04547 Inference-Time Intervention: Eliciting Truthful Answers from a Language Model: https://arxiv.org/abs/2306.03341 Editing a classifier by rewriting its prediction rules: https://arxiv.org/abs/2112.01008 Discovering Latent Knowledge Without Supervision (aka the Collin Burns CCS paper): https://arxiv.org/abs/2212.03827 Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision: https://arxiv.org/abs/2312.09390 Concrete problems in AI safety: https://arxiv.org/abs/1606.06565 Rissanen Data Analysis: Examining Dataset Characteristics via Description Length: https://arxiv.org/abs/2103.03872 Episode art by Hamish Doodles: hamishdoodles.com
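A recurring starting point in the belief-interpretability work listed above is training a linear probe on a model's hidden activations to detect whether it "believes" a statement is true. The sketch below is a toy illustration with synthetic activations standing in for real model states; none of the data or names come from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 32, 200  # toy hidden-state dimension and examples per class

# Synthetic stand-ins for hidden activations: "true" statements are
# shifted along a fixed direction relative to "false" ones, mimicking
# the linearly separable structure probing papers often report.
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
X_true = rng.normal(size=(n, d)) + 2.0 * direction
X_false = rng.normal(size=(n, d)) - 2.0 * direction
X = np.vstack([X_true, X_false])
y = np.array([1.0] * n + [0.0] * n)

# Logistic-regression probe trained by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted P(true)
    w -= 0.1 * (X.T @ (p - y)) / len(y)

# Accuracy of the probe's decision boundary on the toy data.
acc = np.mean((X @ w > 0) == (y == 1))
```

On real models the probe would be fit to activations at a chosen layer rather than random vectors, and (as the localization-vs-editing paper above cautions) a probe finding a direction does not by itself show that editing that direction changes the belief.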
Shallow welcomes Dr. Patrick Davidson as they discuss the role of academia in the fitness industry. They reflect on the cycles and trends in the industry and the need for patience and discernment. Dr. Davidson shares his thoughts on specialization versus generalization and the need for a holistic approach. He also discusses the importance of establishing a worldview and how it informs his coaching and teaching.
https://www.drpatdavidson.net/
https://www.instagram.com/dr.patdavidson/?hl=en
Join the Pre-Script® Level 1 Opt-In list now. Learn more at https://www.pre-script.com/psl1
We've got a new sponsor! Marek Health is a health optimization company that offers advanced blood testing, health coaching, and expert medical oversight. Our services range from lifestyle, nutrition, and supplementation coaching to medical treatment and care. https://marekhealth.com/rxd Code RXD
Don't miss the release of our newest educational community - The Pre-Script® Collective! Join the community today at www.pre-script.com. For other strength training, health, and injury prevention resources, check out our website, YouTube channel, and Instagram. For more episodes, subscribe and tune in to our podcast. Also, make sure to sign up for our mailing list at www.pre-script.com to get the first updates on new programming releases.
You can also follow Dr. Jordan Shallow and Dr. Jordan Jiunta on Instagram!
Dr. Jordan Shallow: https://www.instagram.com/the_muscle_doc/
Dr. Jordan Jiunta: https://www.instagram.com/redwiteandjordan/
Role of Academia in the Fitness Industry (00:03:07)
Influence of Science and Well-Produced Content (00:07:23)
Specialization vs Generalization in the Fitness Industry (00:11:02)
Importance of Establishing a Worldview (00:17:27)
Communication and Error in Driving Progress (00:27:07)
Acting on Ideas and Spite (00:31:44)
Pushing Boundaries in Training (00:36:12)
Reconciling the Experiential Brain with the Theoretical Brain (00:40:43)
Embracing Physical Discomfort and Physiological Changes (00:42:48)
Using the Partner Fit App for Measuring Training Intensity (00:45:31)
Key Topics and Timestamps:
Introduction and "Podwalking" Concept (00:00 - 23:00)
Discussion of "podwalking" as a new trend for hybrid teams
Benefits of structured learning and discussion in organizations
Importance of leadership in facilitating learning experiences
The Evolution of Public Speaking (23:00 - 30:00)
Richard's 21 years of speaking experience
The timeless principles of effective communication
How audience expectations have changed over time
The Shapiro Matrix and Four Novelties (30:00 - 39:00)
Introduction to Julian Shapiro's concept of novelties in writing/speaking
Detailed explanation of the four novelties: a) Counterintuitive b) Counter-narrative c) Elegant articulation d) Shock and awe
Examples and personal experiences with each novelty type
The Impact of AI and Technology on Public Speaking (39:00 - end)
The increasing value of authentic, in-person communication
How AI and shortened attention spans have elevated the importance of public speaking
The unique combination of authority and extended attention in live presentations
Theatrics in Keynote Speaking
Discussion of novel presentation techniques (e.g., 3D projections)
Examples of impactful presentations (Jamie Oliver's sugar demonstration, Bill Gates' mosquito release)
Importance of Fundamental Speaking Principles
Logos, Pathos, Ethos
Rhetorical devices like antimetabole and tricolon
Skill vs. System Curve in Speaking
High skill, low system speakers can be entertaining but may not drive change
Example: Ken Robinson's popular TED Talk criticized for lack of substance
Rich Mulholland's Approach to Talks
Core talk: "Relentless Relevance" with variations for different audiences
Balancing information and entertainment
Key Topics for Corporate Speaking Engagements
Sales, leadership, culture, and future
Importance of Future-Oriented Content in Speaking
Specialization vs. Generalization in Speaking Careers
Need for niche expertise, especially in markets like the US
Strategies for Speaking on Unfamiliar Topics
Using metaphors and "distance management"
Creating relatable examples and frameworks
Authenticity and Audience Connection
Importance of understanding audience limitations
Avoiding pretending to be an insider when you're not
The Impact of AI on Public Speaking
Growing importance of storytelling and authenticity
Challenges of engaging audiences in a world of hyperintelligence
The Universal "Bullshit Detector" in Audiences
Importance of being genuine and playing to your strengths as a speaker
Please like and subscribe to our channel and leave us a comment! We love hearing from our listeners and we thank you for being a part of our community!
Socials:
Instagram: @theexpansivepodcast
X: @theexpansivepod
Linkedin: The Expansive Podcast
Tik Tok: theexpansivepodcast
David Epstein joins the show to talk Olympics and how athletes are developed around the world. Topics include generalization vs. specialization in sports, developing athletes in the U.S. vs. other countries, why athletes can play for longer than ever before, and more! 0:00 Welcome back to the Domonique Foxworth Show 1:13 Generalization vs. specialization in sports 2:54 What should the U.S. copy from other countries in sports? 11:08 Could NFL players play in the NBA and vice versa? 15:18 Why do athletes have longer careers now? Learn more about your ad choices. Visit megaphone.fm/adchoices
Like assumptions, our generalizations can limit our flourishing by putting blinders on how we see the world. Generalizations help us make sense of a complex world; generalizing is a fundamental human function. Most of the time, our generalizations serve us well. But when we make generalization errors, bad things happen, such as stereotypes that lead to prejudice. In this episode, Craig discusses how we make generalizations, why they're necessary, and how generalization errors and overgeneralizing can harm us and those around us.
When to use a generalization flowchart: https://www.neogaf.com/threads/generalizations-and-assumptions-a-flow-chart-i-just-made.901709/
Live Well and Flourish website: https://www.livewellandflourish.com/
Email: livewellandflourish@pm.me
The theme music for Live Well and Flourish was written by Hazel Crossler, hazel.crossler@gmail.com.
Production assistant - Paul Robert
For International TCK Day we are going back in our archives to our conversation with Michèle Phoenix from Season 1, Episode 22. Michèle is known around the world as an advocate for Third Culture Kids (TCKs) and the children of missionaries. Listen in for a glimpse into the world of TCKs! “Passport culture plus adoptive culture(s) equals Third Culture Kid (TCK).” “Because I have those two cultures in me, my closest sense of belonging is with others who are, like me, multiculturally formed in their formative years. There's a misconception that third culture is actually my individual third culture that I form out of the two that shaped me [...] but actually the term means that we find belonging with others who are also third cultured.” “My differences were similar to their differences, and that I wasn't weird; I was a TCK.” “Mostly belonging in multiple places increases their skillset; it makes them bridge builders.” “To ask them to figure out what is uniquely one culture and uniquely another in the way they think, even in the way they speak, is going to be a real challenge for them.” “Something that feels fairly minor to a monocultural adult who has lived multiculturally for a while might feel like this tidal wave of all of these emotions coming back to the TCK or MK.” Article: Nine Tips for Living Well in a Season of Grief “The enormity of the blessing and the strength that comes from growing up as a TCK is immeasurable. You have blessed them in ways that you probably won't ever be able to fully realize.” “Generalizations about TCKs are not always entirely helpful, but knowing what the majority of them tend toward I think can be a really helpful thing in mentoring them and walking with them.” “Because of our experience seeing things done differently in other parts of the world [...] we can start to draw people from their highly selective clusters toward each other.” Learn more about Michèle's ministry here! 
What's changing our lives:
Keane: Reading more than one book at a time
Heather: The Next Right Thing Journal by Emily P. Freeman
Michèle: Four Tiny-Small Questions for the Quarantine-Weary
What small thing can I do that will make me feel alive in this moment?
What small thing can I do to take some sting out of this day?
What small thing can I do to make today feel purposeful?
What small thing can I do today that will connect me with God?
We'd love it if you would subscribe, rate, review, and share this show! And as always, you can reach us at podcast@teachbeyond.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLM Generality is a Timeline Crux, published by eggsyntax on June 24, 2024 on LessWrong. Short Summary LLMs may be fundamentally incapable of fully general reasoning, and if so, short timelines are less plausible. Longer summary There is ML research suggesting that LLMs fail badly on attempts at general reasoning, such as planning problems, scheduling, and attempts to solve novel visual puzzles. This post provides a brief introduction to that research, and asks: Whether this limitation is illusory or actually exists. If it exists, whether it will be solved by scaling or is a problem fundamental to LLMs. If fundamental, whether it can be overcome by scaffolding & tooling. If this is a real and fundamental limitation that can't be fully overcome by scaffolding, we should be skeptical of arguments like Leopold Aschenbrenner's (in his recent 'Situational Awareness') that we can just 'follow straight lines on graphs' and expect AGI in the next few years. Introduction Leopold Aschenbrenner's recent 'Situational Awareness' document has gotten considerable attention in the safety & alignment community. Aschenbrenner argues that we should expect current systems to reach human-level given further scaling and 'unhobbling', and that it's 'strikingly plausible' that we'll see 'drop-in remote workers' capable of doing the work of an AI researcher or engineer by 2027. Others hold similar views. Francois Chollet and Mike Knoop's new $500,000 prize for beating the ARC benchmark has also gotten considerable recent attention in AIS[1]. Chollet holds a diametrically opposed view: that the current LLM approach is fundamentally incapable of general reasoning, and hence incapable of solving novel problems. 
We only imagine that LLMs can reason, Chollet argues, because they've seen such a vast wealth of problems that they can pattern-match against. But LLMs, even if scaled much further, will never be able to do the work of AI researchers. It would be quite valuable to have a thorough analysis of this question through the lens of AI safety and alignment. This post is not that[2], nor is it a review of the voluminous literature on this debate (from outside the AIS community). It attempts to briefly introduce the disagreement, some evidence on each side, and the impact on timelines. What is general reasoning? Part of what makes this issue contentious is that there's not a widely shared definition of 'general reasoning', and in fact various discussions of it use various terms. By 'general reasoning', I mean to capture two things. First, the ability to think carefully and precisely, step by step. Second, the ability to apply that sort of thinking in novel situations[3]. Terminology is inconsistent between authors on this subject; some call this 'system II thinking'; some 'reasoning'; some 'planning' (mainly for the first half of the definition); Chollet just talks about 'intelligence' (mainly for the second half). This issue is further complicated by the fact that humans aren't fully general reasoners without tool support either. For example, seven-dimensional tic-tac-toe is a simple and easily defined system, but incredibly difficult for humans to play mentally without extensive training and/or tool support. Generalizations that are in-distribution for humans seem like something that any system should be able to do; generalizations that are out-of-distribution for humans don't feel as though they ought to count. How general are LLMs? It's important to clarify that this is very much a matter of degree. 
Nearly everyone was surprised by the degree to which the last generation of state-of-the-art LLMs like GPT-3 generalized; for example, no one I know of predicted that LLMs trained on primarily English-language sources would be able to do translation between languages. Some in the field argued as...
Justin Michael, best selling author and executive coach, shares his approach to sales, combining traditional business tactics with the principles of manifestation and spirituality. Justin discusses the power of belief, the importance of mindset, and the role of gratitude in achieving success. He challenges conventional sales methodologies, advocating for a heart-centered approach that emphasizes genuine connections and service to clients. Justin has awesome insights on overcoming self-imposed limitations and the integration of holistic practices, redefining what it means to be successful in sales today.Chapters:00:00:00 - Clearing Resentments: The Power of Hoʻoponopono00:00:55 - Welcome to the Rising Leader Podcast: A New Wave of Leadership00:01:27 - Meet Justin Michael: Author and Sales Innovator00:02:00 - The Two Sides of Manifestation in Sales00:04:00 - From 'The Secret' to Success: Justin's Journey00:08:02 - Tuning In: How Music and Visualization Can Transform Your Sales Game00:11:00 - Using Music to Achieve a Meditative State in Sales00:14:15 - Inner Game Meets Sales Techniques: Finding the Perfect Balance00:18:00 - The Justin Michael Method: Neuroscience-Backed Sales Techniques00:21:45 - Battling Self-Doubt in Sales: Practical Insights00:25:12 - Personal Transformations: Stories from the Trenches of Coaching00:27:32 - The Evolution of Sales: Adapting Techniques in a Changing World00:30:00 - Specialization vs. Generalization in Sales Roles00:34:12 - The Future of Sales: Keeping the Human Connection Alive00:38:17 - Final Thoughts: Embracing Service and Transformation in SalesConnect With Justin here:Justin on LinkedInHard Skill.ExchangeAttraction Selling by Justin MichaelSales Superpowers 1.0 by Justin MichaelJustin Michael Method 2.0 by Justin MichaelQuantum Leap PodcastBeyond Sales DevelopmentThanks so much for joining us this week. Want to subscribe to The Rising Leader? Have some feedback you'd like to share? 
Connect with us on iTunes and leave us a review!Mentioned in this episode: The Arise Immersion
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jailbreak steering generalization, published by Sarah Ball on June 20, 2024 on The AI Alignment Forum. This work was performed as part of SPAR. We use activation steering (Turner et al., 2023; Rimsky et al., 2023) to investigate whether different types of jailbreaks operate via similar internal mechanisms. We find preliminary evidence that they may. Our analysis includes a wide range of jailbreaks such as harmful prompts developed in Wei et al. 2024, the universal jailbreak in Zou et al. (2023b), and the payload split jailbreak in Kang et al. (2023). For all our experiments we use the Vicuna 13B v1.5 model. In a first step, we produce jailbreak vectors for each jailbreak type by contrasting the internal activations of jailbreak and non-jailbreak versions of the same request (Rimsky et al., 2023; Zou et al., 2023a). Interestingly, we find that steering with mean-difference jailbreak vectors from one cluster of jailbreaks helps to prevent jailbreaks from different clusters. This holds true for a wide range of jailbreak types. The jailbreak vectors themselves also cluster according to semantic categories such as persona modulation, fictional settings and style manipulation. In a second step, we look at the evolution of a harmfulness-related direction over the context (found via contrasting harmful and harmless prompts) and find that when jailbreaks are included, this feature is suppressed at the end of the instruction in harmful prompts. This provides some evidence that jailbreaks suppress the model's perception of request harmfulness. Effective jailbreaks usually produce a larger decrease in the harmfulness feature. 
However, we also observe one jailbreak ("wikipedia with title"[1]), which is an effective jailbreak although it does not suppress the harmfulness feature as much as the other effective jailbreak types. Furthermore, the jailbreak steering vector based on this jailbreak is overall less successful in reducing the attack success rate of other types. This observation indicates that harmfulness suppression might not be the only mechanism at play as suggested by Wei et al. (2024) and Zou et al. (2023a). References Turner, A., Thiergart, L., Udell, D., Leech, G., Mini, U., and MacDiarmid, M. Activation addition: Steering language models without optimization. arXiv preprint arXiv:2308.10248, 2023. Kang, D., Li, X., Stoica, I., Guestrin, C., Zaharia, M., and Hashimoto, T. Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks. arXiv preprint arXiv:2302.05733, 2023. Rimsky, N., Gabrieli, N., Schulz, J., Tong, M., Hubinger, E., and Turner, A. M. Steering Llama 2 via contrastive activation addition. arXiv preprint arXiv:2312.06681, 2023. Wei, A., Haghtalab, N., and Steinhardt, J. Jailbroken: How does LLM safety training fail? Advances in Neural Information Processing Systems, 36, 2024. Zou, A., Phan, L., Chen, S., Campbell, J., Guo, P., Ren, R., Pan, A., Yin, X., Mazeika, M., Dombrowski, A.-K., et al. Representation engineering: A top-down approach to AI transparency. arXiv preprint arXiv:2310.01405, 2023a. Zou, A., Wang, Z., Kolter, J. Z., and Fredrikson, M. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023b. 1. ^ This jailbreak type asks the model to write a Wikipedia article titled as . Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
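The mean-difference steering procedure the summary describes can be sketched in a few lines. This is a toy numpy illustration with random stand-in activations, not the authors' code; in the actual work the vectors come from a real model's hidden states on paired jailbreak/non-jailbreak prompts:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden-state dimension

# Stand-ins for residual-stream activations recorded on jailbreak-wrapped
# prompts vs. the same requests without the jailbreak wrapper.
acts_jailbreak = rng.normal(loc=1.0, size=(100, d))
acts_plain = rng.normal(loc=0.0, size=(100, d))

# Mean-difference (contrastive) jailbreak vector, Rimsky et al.-style.
steering_vec = acts_jailbreak.mean(axis=0) - acts_plain.mean(axis=0)

def steer(hidden_state, vec, alpha):
    """Add a scaled steering vector to a hidden state at inference time.

    A negative alpha subtracts the jailbreak direction, which is how a
    vector derived from one jailbreak cluster can be used to suppress
    jailbreaks from other clusters."""
    return hidden_state + alpha * vec

h = rng.normal(size=d)
h_steered = steer(h, steering_vec, alpha=-1.0)

# The steered state projects less onto the jailbreak direction.
unit = steering_vec / np.linalg.norm(steering_vec)
print(h @ unit > h_steered @ unit)  # True
```

The same contrastive recipe, applied to harmful-vs-harmless prompt pairs instead, yields the harmfulness direction whose suppression the second part of the summary discusses.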
Balancing Athletic Careers and Life Lessons with Rick Hendrickson Contacts Coaching Podcast In this episode of the Contacts Coaching Podcast, host Justin provides a deep dive into the journey and experiences of Rick Hendrickson, Director of Athletics at Northfield Mount Hermon. Rick shares insights from his first year in his current role, recounting his extensive background in various roles, including public and private school coaching, along with corporate experiences. The discussion touches on the critical life lessons learned through athletics, the importance of mentorship, conflict resolution, maintaining culture within sports programs, and how athletic skills translate into corporate success. Rick also emphasizes the importance of specialization versus participating in multiple sports. Whether you're a coach, athlete, or administrator, this episode offers valuable takeaways on fostering both personal and professional growth within the realm of sports. This episode is brought to you by LMNT! Spelled LMNT. What is LMNT? It's a delicious, sugar-free electrolyte drink-mix. I tried this recently after hearing about it on another podcast, and since then, I've stocked up on boxes and boxes of this and usually use it 1–2 times per day. LMNT is a great alternative to other commercial recovery and performance drinks. As a coach or an athlete, you will not find a better product that focuses on the essential electrolytes your body needs during competition. LMNT has become a staple in my own training and something we are excited to offer to our coaches and student-athletes as well. LMNT is used by Military Special Forces teams, Team USA weightlifting, at least 5 NFL teams, and more than half the NBA. You can try it risk-free. If you don't like it, LMNT will give you your money back no questions asked. They have extremely low return rates. LMNT came up with a very special offer for you as a listener to this podcast. 
For a limited time, you can claim a free LMNT Sample Pack—you only cover the cost of shipping. For US customers, this means you can receive an 8-count sample pack for only $5. Simply go to DrinkLMNT.com/contacts to claim your free 8-count sample pack. Taking a bunch of pills and capsules is hard on the stomach and hard to keep up with. To help each of us be at our best, we at Athletic Greens developed a better approach to providing your body with everything it needs for optimal performance. 75 vitamins, minerals, whole-food sourced superfoods, probiotics, and adaptogens in one convenient daily serving to bring you the nutrition you need. Go to https://athleticgreens.com/contacts/ for more. 00:00 Introduction and Guest Welcome 00:22 Rick Hendrickson's Early Life and Athletic Journey 01:35 Transition to Teaching and Coaching 03:41 Discovering Boarding Schools and Career Growth 06:28 Corporate America Experience and Return to Athletics 09:44 The Importance of Mentorship and Coaching Philosophy 14:26 Lessons from Coaching and Conflict Resolution 27:38 Embracing Adversity: Lessons from Tom Hanks 31:56 The Role of Wrestling in Overcoming Challenges 38:26 The Craft of Coaching: Building Culture and Values 44:03 Hiring the Right Coaches: Key Separators 49:06 Changing Perspectives: Specialization vs. Generalization in Sports 54:49 Conclusion and Final Thoughts --- Support this podcast: https://podcasters.spotify.com/pod/show/justin-clymo30/support
is with us today. She has done some amazing theory construction research using computational methods before this was really an accepted thing. We discuss which work she built her research around to give it legitimacy, what good stopping rules are for authors or reviewers to know when enough is enough, and how we can engage in humble generalizations of interesting and general regularities.

References
Miranda, S. M., Kim, I., & Summers, J. D. (2015). Jamming with Social Media: How Cognitive Structuring of Organizing Vision Facets Affects IT Innovation Diffusion. MIS Quarterly, 39(3), 591-614.
Walsh, I., Holton, J. A., Bailyn, L., Fernandez, W. D., Levina, N., & Glaser, B. G. (2015). What Grounded Theory Is ... A Critically Reflective Conversation Among Scholars. Organizational Research Methods, 18(4), 581-599.
Levina, N., & Vaast, E. (2015). Leveraging Archival Data from Online Communities for Grounded Process Theorizing. In K. D. Elsbach & R. M. Kramer (Eds.), Handbook of Qualitative Organizational Research: Innovative Pathways and Methods (pp. 215-224). Routledge.
Berente, N., Seidel, S., & Safadi, H. (2019). Data-Driven Computationally-Intensive Theory Development. Information Systems Research, 30(1), 50-64.
Miranda, S. M., Wang, D., & Tian, C. (2022). Discursive Fields and the Diversity-Coherence Paradox: An Ecological Perspective on the Blockchain Community Discourse. MIS Quarterly, 46(3), 1421-1452.
Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2021). Will Humans-in-the-Loop Become Borgs? Merits and Pitfalls of Working with AI. MIS Quarterly, 45(3), 1527-1556.
Lindberg, A., Schecter, A., Berente, N., Hennel, P., & Lyytinen, K. (2024). The Entrainment of Task Allocation and Release Cycles in Open Source Software Development. MIS Quarterly, 48(1), 67-94.
Sahaym, A., Vithayathil, J., Sarker, S., Sarker, S., & Bjørn-Andersen, N. (2023). Value Destruction in Information Technology Ecosystems: A Mixed-Method Investigation with Interpretive Case Study and Analytical Modeling. Information Systems Research, 34(2), 508-531.
Miranda, S. M., Berente, N., Seidel, S., Safadi, H., & Burton-Jones, A. (2022). Computationally Intensive Theory Construction: A Primer for Authors and Reviewers. MIS Quarterly, 46(2), i-xvi.
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design Science in Information Systems Research. MIS Quarterly, 28(1), 75-105.
Adamic, L. A., & Glance, N. (2005). The Political Blogosphere and the 2004 U.S. Election: Divided They Blog. Paper presented at the 3rd International Workshop on Link Discovery, Chicago, Illinois.
Pentland, B. T., Vaast, E., & Ryan Wolf, J. (2021). Theorizing Process Dynamics with Directed Graphs: A Diachronic Analysis of Digital Trace Data. MIS Quarterly, 45(2), 967-984.
Sarker, S., Xiao, X., Beaulieu, T., & Lee, A. S. (2018). Learning from First-Generation Qualitative Approaches in the IS Discipline: An Evolutionary View and Some Implications for Authors and Evaluators (PART 1/2). Journal of the Association for Information Systems, 19(8), 752-774.
Lee, A. S., & Baskerville, R. (2003). Generalizing Generalizability in Information Systems Research. Information Systems Research, 14(3), 221-243.
Tsang, E. W. K., & Williams, J. N. (2012). Generalization and Induction: Misconceptions, Clarifications, and a Classification of Induction. MIS Quarterly, 36(3), 729-748.
Hume, D. (1748/1998). An Enquiry Concerning Human Understanding [Reprint]. In J. Perry & M. E. Bratman (Eds.), Introduction to Philosophy: Classical and Contemporary Readings (3rd ed., pp. 190-220). Oxford University Press.

Exemplar Computationally-Intensive Theory Construction Papers
Bachura, E., Valecha, R., Chen, R., & Rao, H. R. (2022). The OPM Data Breach: An Investigation of Shared Emotional Reactions on Twitter. MIS Quarterly, 46(2), 881-910.
Gal, U., Berente, N., & Chasin, F. (2022). Technology Lifecycles and Digital Innovation: Patterns of Discourse Across Levels of Abstraction: A Study of Wikipedia Articles. Journal of the Association for Information Systems, 23(5), 1102-1149.
Hahn, J., & Lee, G. (2021). The Complex Effects of Cross-Domain Knowledge on IS Development: A Simulation-Based Theory Development. MIS Quarterly, 45(4), 2023-2054.
Indulska, M., Hovorka, D. S., & Recker, J. (2012). Quantitative Approaches to Content Analysis: Identifying Conceptual Drift Across Publication Outlets. European Journal of Information Systems, 21(1), 49-69.
Lindberg, A., Majchrzak, A., & Malhotra, A. (2022). How Information Contributed After an Idea Shapes New High-Quality Ideas in Online Ideation Contests. MIS Quarterly, 46(2), 1195-1208.
Nan, N. (2011). Capturing Bottom-Up Information Technology Use Processes: A Complex Adaptive Systems Model. MIS Quarterly, 35(2), 505-532.
Pentland, B. T., Recker, J., Ryan Wolf, J., & Wyner, G. (2020). Bringing Context Inside Process Research With Digital Trace Data. Journal of the Association for Information Systems, 21(5), 1214-1236.
Vaast, E., Safadi, H., Lapointe, L., & Negoita, B. (2017). Social Media Affordances for Connective Action: An Examination of Microblogging Use During the Gulf of Mexico Oil Spill. MIS Quarterly, 41(4), 1179-1205.
Speakers for AI Engineer World's Fair have been announced! See our Microsoft episode for more info and buy now with code LATENTSPACE — we've been studying the best ML research conferences so we can make the best AI industry conf! Note that this year there are 4 main tracks per day and dozens of workshops/expo sessions; the free livestream will air much less than half of the content this time. Apply for free/discounted Diversity Program and Scholarship tickets here. We hope to make this the definitive technical conference for ALL AI engineers.

ICLR 2024 took place from May 6-11 in Vienna, Austria. Just like we did for our extremely popular NeurIPS 2023 coverage, we decided to pay the $900 ticket (thanks to all of you paying supporters!) and brave the 18 hour flight and 5 day grind to go on behalf of all of you. We now present the results of that work!

This ICLR was the biggest one by far, with a marked change in the excitement trajectory for the conference. Of the 2260 accepted papers (31% acceptance rate), within the subset relevant to our shortlist of AI Engineering topics, we found many, many LLM reasoning and agent related papers, which we will cover in the next episode. We will spend this episode with 14 papers covering other relevant ICLR topics, as below.

As we did last year, we'll start with the Best Paper Awards. Unlike last year, we now group our paper selections by subjective topic area, and mix in both Outstanding Paper talks as well as editorially selected poster sessions. Where we were able to do a poster session interview, please scroll to the relevant show notes for images of their poster for discussion. To cap things off, Chris Ré's spot from last year now goes to Sasha Rush for the obligatory last word on the development and applications of State Space Models.

We had a blast at ICLR 2024 and you can bet that we'll be back in 2025.
In this groundbreaking episode of the Cognitive Revolution, we explore the intersection of AI and biology with expert Amelie Schreiber. Learn about the advances in drug design, protein network engineering, and the unfolding AI revolution in scientific discovery. Discover the implications for human health, longevity, and the future of biological research. Join us as we delve into an exciting conversation that may redefine our understanding of biology and medicine.

SPONSORS:
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive

The Brave Search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR

Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist.

Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: https://www.omneky.com/

CHAPTERS:
(00:00:00) Introduction
(00:04:53) Introduction to Amelie Schreiber and the Podcast
(00:08:59) Understanding Protein Interactions
(00:11:45) Traditional Methods vs. AI Approaches
(00:13:51) Molecular Dynamics and AI Models
(00:18:02) AlphaFold and Protein Structure Prediction
(00:18:43) Sponsors: Oracle | Brave
(00:20:51) Protein Dynamics and New AI Models
(00:32:36) Sponsors: Squad | Omneky
(00:34:22) Challenges in Protein Interaction Models
(00:44:44) Generalization and Data Splitting in AI Models
(00:48:43) Advanced AI Models for Protein Complexes
(00:52:25) Practical Applications of AI in Biochemistry
(01:01:53) Designing Protein Sequences with Ligand and PNN
(01:05:19) Binder Design and Fold Conditioning
(01:08:48) Challenges and Bottlenecks in Drug Discovery
(01:16:09) Adoption and Accessibility of New Technologies
(01:21:04) Future Prospects and Ethical Considerations
(01:37:08) The Role of AI Agents in Biological Research
(01:40:18) Balancing Innovation and Safety in Biotechnology
Lairdinho is joined by Surfula and GatorGuy231 to discuss the pros and cons of specializing in a single competition track versus spreading out as much as possible. Intro and outro music: As You Were by Track Tribe
Edward Gibson is a psycholinguistics professor at MIT and heads the MIT Language Lab. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - Listening: https://listening.com/lex and use code LEX to get one month free - Policygenius: https://policygenius.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - Eight Sleep: https://eightsleep.com/lex to get special savings Transcript: https://lexfridman.com/edward-gibson-transcript EPISODE LINKS: Edward's X: https://x.com/LanguageMIT TedLab: https://tedlab.mit.edu/ Edward's Google Scholar: https://scholar.google.com/citations?user=4FsWE64AAAAJ TedLab's YouTube: https://youtube.com/@Tedlab-MIT PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:53) - Human language (14:59) - Generalizations in language (20:46) - Dependency grammar (30:45) - Morphology (39:20) - Evolution of languages (42:40) - Noam Chomsky (1:26:46) - Thinking and language (1:40:16) - LLMs (1:53:14) - Center embedding (2:19:42) - Learning a new language (2:23:34) - Nature vs nurture (2:30:10) - Culture and language (2:44:38) - Universal language (2:49:01) - Language translation (2:52:16) - Animal communication
This week the whole crew is together. We discuss generalization and whether changed behavior means more than words. Jo, Syd, Mek, Mal --- Send in a voice message: https://podcasters.spotify.com/pod/show/baltimore-podcast-studio/message
Have you ever had someone jump to false conclusions about you based on a single comment or tweet? That's what you call a hasty generalization, and people on both sides of the political and theological aisle have been guilty of committing this logical fallacy, many times resulting in the marginalization of whoever is on the receiving end of the accusation. In this current season of political chaos, how can you speak your mind (and share the truth) without falling into the trap of becoming a victim (or maybe even a perpetrator) of these snap judgments? This week, Frank addresses 10 hot-button topics in a single podcast episode! How will he cram so much in so little time? Hastily! In this lightning fast discussion on politics, religion, and logical fallacies, Frank will answer questions like: Is it racist to care about border security? Does being concerned about racial disparities mean that a person is "woke"? Is it divisive to call out heresy in church? How did Frank respond when someone called him a bigot? Do Christians seem to carry a lot of political influence as of late? What did the "He Gets Us" ad campaign get wrong...again? Later in the episode, Frank responds to an email from a skeptic who made a few hasty generalizations regarding the previous podcast episode featuring Eric Metaxas. Whether it's political issues like border security and election fraud or theological debates like a young earth versus an old earth, we shouldn't make assumptions without having sufficient information. Want to learn more about how to avoid hasty generalizations? Consider attending the Fearless Faith conference in Xenia, OH this upcoming weekend (February 16-17)! To view the entire VIDEO PODCAST be sure to join our CrossExamined private community. It's the perfect place to jump into some great discussions with like-minded Christians while simultaneously providing financial support for our ministry. You can also SUPPORT THE PODCAST HERE.
Can world religions possibly share more similarities than we may have thought? The similarities may surprise us if we look back at ancient religions and trace them through history. Listen up to learn:
- Why comparing religions can be dangerous
- What inspired the formation of religious texts
- How religious study can impact students
Yair Lior, Ph.D., of the Department of Religion at Boston University, shares his research in comparative religion and its science. While many of the major world religions today may appear vastly different at first glance, they may share ancient similarities branching from distant commonalities. Especially when considering beliefs in the distant past, before modern religious ideas, we can begin to uncover the similarities that branch from unexpected practices. If we step back from the foundations of our own religious beliefs, we can adopt an analytical mindset when studying the world of religion. Moreover, introducing exploration and education into our current beliefs may even strengthen them, allowing us to explore more deeply than we previously thought possible. Visit https://bu.academia.edu/YLior to learn more. Episode also available on Apple Podcast: http://apple.co/30PvU9C