Podcasts about SSO

  • 380 podcasts
  • 680 episodes
  • 40m average duration
  • 5 weekly new episodes
  • Latest episode: May 22, 2025

Popularity by year: 2017–2024


Best podcasts about SSO

Latest podcast episodes about SSO

The Lunar Society
How Does Claude 4 Think? — Sholto Douglas & Trenton Bricken


May 22, 2025 · 144:01


New episode with my good friends Sholto Douglas & Trenton Bricken. Sholto focuses on scaling RL and Trenton researches mechanistic interpretability, both at Anthropic. We talk through what's changed in the last year of AI research; the new RL regime and how far it can scale; how to trace a model's thoughts; and how countries, workers, and students should prepare for AGI.

See you next year for v3. Here's last year's episode, btw. Enjoy!

Watch on YouTube; listen on Apple Podcasts or Spotify.

SPONSORS
• WorkOS ensures that AI companies like OpenAI and Anthropic don't have to spend engineering time building enterprise features like access controls or SSO. It's not that they don't need these features; it's just that WorkOS gives them battle-tested APIs that they can use for auth, provisioning, and more. Start building today at workos.com.
• Scale is building the infrastructure for safer, smarter AI. Scale's Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you're an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh.
• Lighthouse is THE fastest immigration solution for the technology industry. They specialize in expert visas like the O-1A and EB-1A, and they've already helped companies like Cursor, Notion, and Replit navigate U.S. immigration. Explore which visa is right for you at lighthousehq.com/ref/Dwarkesh.

To sponsor a future episode, visit dwarkesh.com/advertise.

TIMESTAMPS
(00:00:00) – How far can RL scale?
(00:16:27) – Is continual learning a key bottleneck?
(00:31:59) – Model self-awareness
(00:50:32) – Taste and slop
(01:00:51) – How soon to fully autonomous agents?
(01:15:17) – Neuralese
(01:18:55) – Inference compute will bottleneck AGI
(01:23:01) – DeepSeek algorithmic improvements
(01:37:42) – Why are LLMs 'baby AGI' but not AlphaZero?
(01:45:38) – Mech interp
(01:56:15) – How countries should prepare for AGI
(02:10:26) – Automating white collar work
(02:15:35) – Advice for students

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Risky Business
Risky Biz Soap Box: Push Security's browser-first twist on identity security


May 15, 2025 · 34:24


In this wholly sponsored Soap Box edition of the show, Patrick Gray chats with Adam Bateman and Luke Jennings from Push Security. Push has built an identity security platform that collects identity information and events from your users' browsers. It can detect phish kits and shut down phishing attempts, protect SSO credentials, and find shadow/personal accounts that a user has spun up. It's extremely difficult to bypass. That's because when you're in the browser it doesn't matter how a phishing link arrives, or how a threat actor has concealed it from your detection stack – if the user sees it, Push sees it. There are solutions for protecting your users' SSO credentials, like passkeys. But what about all the SaaS in your environment? Even if it's enrolled into your SSO, are you sure that's how your users are authenticating to it? What about the automation platforms your developers and admins use? What about data platforms like Snowflake? Are your users setting up passkeys for those accounts? How would you know, and what problems can it cause if those accounts are vulnerable? This is a fun one! This episode is also available on YouTube. Show notes

COMPRESSEDfm
203 | Feature Flags, Framework Wars, and Landing Your Next Dev Job


May 13, 2025 · 46:34


In this hosts-only episode, Amy and Brad get real about the developer experience - from the stress of job interviews to the complexities of choosing the right framework. They discuss why companies are comparing candidates more than ever, share strategies for answering behavioral interview questions, and debate the merits of Remix versus Next.js (spoiler: Brad's all-in on Remix). The conversation shifts to feature flags and progressive rollouts, with insights from Brad's work at Stripe.

Sponsor
WorkOS helps you launch enterprise features like SSO and user management with ease. Thanks to the AuthKit SDK for JavaScript, your team can integrate in minutes and focus on what truly matters—building your app.

Chapter Marks
00:00 - Intro
00:41 - Sponsor: WorkOS
01:47 - Brad's Keyboard and Mouse Shopping Spree
04:30 - Keyboard Layout Discussion
07:23 - Apple Ecosystem: Reminders and Notes
09:23 - Family Sharing and Raycast Integration
09:43 - Notion vs Apple Notes for Project Management
11:31 - File Storage and Backup Strategies
14:00 - Machine Backup Philosophy
16:46 - Job Interview Preparation Tips
19:40 - Answering the "Weakness" Question
21:53 - Addressing Weaknesses: Delegation Examples
24:29 - Conflict Resolution Interview Questions
25:46 - Company Research Before Interviews
27:00 - Tech Stack Considerations: Remix vs Next.js
28:30 - Framework Migration Decisions
29:30 - Astro for Content Sites
31:02 - Backend Languages: Go vs TypeScript
32:30 - React Server Components Future
34:23 - Feature Flags and Boolean as a Service
35:30 - Feature Flag Segmentation and A/B Testing
36:54 - PostHog and Analytics Tools
38:30 - Progressive Rollouts and Error Monitoring
40:20 - Amy's Picks and Plugs
43:35 - Brad's Picks and Plugs

Hacker Public Radio
HPR4377: Password store and the pass command


May 13, 2025


This show has been flagged as Clean by the host.

Standard UNIX password manager

Password management is one of those computing problems you probably don't think about often, because modern computing usually has an obvious default solution built-in. A website prompts you for a password, and your browser auto-fills it in for you. Problem solved. However, not all browsers make it very easy to get to your password store, which makes it complex to migrate passwords to a new system without also migrating the rest of your user profile, or to share certain passwords between different users. There are several good open source options that offer alternatives to the obvious defaults, but as a user of Linux and UNIX, I love a minimal and stable solution when one is available. The pass command is a password manager that uses GPG encryption to keep your passwords safe, and it features several system integrations so you can use it seamlessly with your web browser of choice.

Install pass

The pass command is provided by the PasswordStore project. You can install it from your software repository or ports collection. For example, on Fedora:

$ sudo dnf install pass

On Debian and similar:

$ sudo apt install pass

Because the word pass is common, the name of the package may vary depending on your distribution and operating system. For example, pass is available on Slackware and FreeBSD as password-store. The pass command is open source, so the source code is available at git.zx2c4.com/password-store.

Create a GPG key

First, you must have a GPG key to use for encryption. You can use a key you already have, or create a new one just for your password store. To create a GPG key, use the gpg command along with the --gen-key option (if you already have a key you want to use for your password store, you can skip this step):

$ gpg --gen-key

Answer the prompts to generate a key. When prompted to provide values for Real name, Email, and Comment, you must provide a response for each one, even though GPG allows you to leave them empty. In my experience, pass fails to initialize when one of those values is empty. For example, here are my responses for the purposes of this article:

Real name: Tux
Email: tux@example.com
Comment: My first key

This information is combined, in a different order, to create a unique GPG ID. You can see your GPG key ID at any time:

$ gpg --list-secret-keys | grep uid
uid: Tux (My first key) <tux@example.com>

Other than that, it's safe to accept the default and recommended options for each prompt. In the end, you have a GPG key to serve as the master key for your password store. You must keep this key safe: back it up, and keep a copy of your GPG keyring on a secure device. Should you lose this key, you lose access to your password store.

Initialize a password store

Next, you must initialize a password store on your system. When you do, you create a hidden directory where your passwords are stored, and you define which GPG key to use to encrypt passwords. To initialize a password store, use the pass init command along with your unique GPG key ID. Using my example key:

$ pass init "Tux (My first key) <tux@example.com>"

You can define more than one GPG key to use with your password store, should you intend to share passwords with another user or on another system using a different GPG key.

Add and edit passwords

To add a password to your password store, use the pass insert command followed by the URL (or any string) you want pass to keep:

$ pass insert example.org

Enter the password at the prompt, and then again to confirm.
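Reading an entry back is the other half of the workflow: pass decrypts the file with your GPG key and prints it to standard output, and the -c option copies only the first line (the password itself) to the clipboard instead of displaying it. A minimal sketch, assuming the example.org entry created above:

$ pass show example.org        # decrypts and prints the entry; the first line is the password
myFakePassword123
$ pass -c example.org          # copies just the password to the clipboard

The clipboard copy is temporary; pass clears it again after a short timeout (about 45 seconds by default).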
Most websites require more than just a password, and so pass can manage additional data, like username, email, and any other field. To add extra data to a password file, use pass edit followed by the URL or string you saved the password as:

$ pass edit example.org

The first line of a password file must be the password itself. After that first line, however, you can add any additional data you want, in the format of the field name followed by a colon and then the value. For example, to save tux as the value of the username field on a website:

myFakePassword123
username: tux

Some websites use an email address instead of a username:

myFakePassword123
email: tux@example.com

A password file can contain any data you want, so you can also add important notes or one-time recovery codes, and anything else you might find useful:

myFake;_;Password123
email: tux@example.com
recovery email: tux@example.org
recovery code: 03a5-1992-ee12-238c
note: This is your personal account, use company SSO at work

List passwords

To see all passwords in your password store:

$ pass list
Password Store
├── example.com
├── example.org

You can also search your password store:

$ pass find bandcamp
Search Terms: bandcamp
└── www.bandcamp.com

Integrating your password store

Your password store is perfectly usable from a terminal, but that's not the only way to use it. Using extensions, you can use pass as your web browser's password manager. There are several different applications that provide a bridge between pass and your browser. Most are listed in the CompatibleClients section of passwordstore.org. I use PassFF, which provides a Firefox extension. For browsers based on Chromium, you can use Browserpass with the Browserpass extension. In both cases, the browser extension requires a "host application", a background bridge service that allows your browser to access the encrypted data in your password store.

For PassFF, download the install script:

$ wget https://codeberg.org/PassFF/passff-host/releases/download/latest/install_host_app.sh

Review the script to confirm that it's just installing the host application, and then run it:

$ bash ./install_host_app.sh firefox
Python 3 executable located at /usr/bin/python3
Pass executable located at /usr/bin/pass
Installing Firefox host config
Native messaging host for Firefox has been installed to /home/tux/.mozilla/native-messaging-hosts.

Install the browser extension, and then restart your browser. When you navigate to a URL with an entry in your password store, a pass icon appears in the relevant fields. Click the icon to complete the form. Alternately, a pass icon appears in your browser's extension tray, providing a menu for direct interaction with many pass functions (such as copying data directly to your system clipboard, or auto-filling only a specific field, and so on).

Password management like UNIX

The pass command is extensible, and there are some great add-ons for it. Here are some of my favourites:

• pass-otp: Add one-time password (OTP) functionality.
• pass-update: Add an easy workflow for updating passwords that you frequently change.
• pass-import: Import passwords from chrome, 1password, bitwarden, apple-keychain, gnome-keyring, keepass, lastpass, and many more (including pass itself, in the event you want to migrate a password store).

The pass command and the password store system are a comfortably UNIX-like password management solution. It stores your passwords as text files in a format that doesn't even require you to have pass installed for access.
As long as you have your GPG key, you can access and use the data in your password store. You own your data, not only in the sense that it's local, but also in the sense that you decide how to interact with it. You can sync your password stores between different machines using rsync or syncthing, or even back up the store to cloud storage. It's encrypted, and only you have the key.
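To make that syncing idea concrete: because the store is just a directory of GPG-encrypted files (by default ~/.password-store), an ordinary rsync is enough to mirror it to another machine. A minimal sketch, where the host name "laptop" is only a placeholder for illustration:

$ rsync -av ~/.password-store/ laptop:.password-store/   # mirror the encrypted entries to another machine

The entries stay encrypted in transit and at rest on the destination; to actually decrypt them there, the matching GPG private key also has to be imported on that machine.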

UBC News World
This IAM Consultant Offers MFA/SSO Solutions for Banks & Financial Institutions


May 7, 2025 · 3:24


Protect your customers' vital data with the help of Azure IAM, the industry's leading cybersecurity consulting firm. They can implement dynamic MFA and SSO solutions for your business. To team up with them and secure your data for good, visit https://azureiam.com/

Azure IAM, LLC
City: Sterling
Address: P.O. Box 650685
Website: https://azureiam.com

Dans La Tech
Cloud Security: Our Experiences, Best Practices, and Anecdotes


Apr 29, 2025 · 80:32


In this episode of Dans la Tech, after a (small) extended break, the full team is back together to tackle an essential topic: security in the cloud. For the occasion, we welcome Victor, an independent consultant specializing in AWS, infrastructure, and security, for a rich, unfiltered discussion! On the agenda:
• Our personal journeys with cloud security (AWS, Société Générale, startups, large corporations, etc.)
• First reflexes for securing a new infrastructure on a public cloud (AWS, Scaleway, OVH…)
• Best practices around Infrastructure as Code, IAM, CI/CD, backups, SSO, network isolation, permission management, and secure self-service platforms.
• Security incidents we have lived through: phishing, crypto-mining, human error, shadow IT, supply chain…
• An open debate on SSH, human compromise, insider-attack risks, and the limits of MFA.
• A focus on protecting sensitive data, the role of tools like Riot and AWS Control Tower, and the importance of auditing and continuous awareness training.

The Lunar Society
AGI is Still 30 Years Away — Ege Erdil & Tamay Besiroglu


Apr 17, 2025 · 188:28


Ege Erdil and Tamay Besiroglu have 2045+ timelines, think the whole "alignment" framing is wrong, don't think an intelligence explosion is plausible, but are convinced we'll see explosive economic growth (economy literally doubling every year or two). This discussion offers a totally different scenario than my recent interview with Scott and Daniel.

Ege and Tamay are the co-founders of Mechanize, a startup dedicated to fully automating work. Before founding Mechanize, Ege and Tamay worked on AI forecasts at Epoch AI.

Watch on YouTube; listen on Apple Podcasts or Spotify.

Sponsors
• WorkOS makes it easy to become enterprise-ready. With simple APIs for essential enterprise features like SSO and SCIM, WorkOS helps companies like Vercel, Plaid, and OpenAI meet the requirements of their biggest customers. To learn more about how they can help you do the same, visit workos.com
• Scale's Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you're an AI researcher or engineer, learn about how Scale's Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh
• Google's Gemini Pro 2.5 is the model we use the most at Dwarkesh Podcast: it helps us generate transcripts, identify interesting clips, and code up new tools. If you want to try it for yourself, it's now available in Preview with higher rate limits! Start building with it today at aistudio.google.com.

Timestamps
(00:00:00) - AGI will take another 3 decades
(00:22:27) - Even reasoning models lack animal intelligence
(00:45:04) - Intelligence explosion
(01:00:57) - Ege & Tamay's story
(01:06:24) - Explosive economic growth
(01:33:00) - Will there be a separate AI economy?
(01:47:08) - Can we predictably influence the future?
(02:19:48) - Arms race dynamic
(02:29:48) - Is superintelligence a real thing?
(02:35:45) - Reasons not to expect explosive growth
(02:49:00) - Fully automated firms
(02:54:43) - Will central planning work after AGI?
(02:58:20) - Career advice

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Cyber Bites
Cyber Bites - 11th April 2025


Apr 10, 2025 · 7:45


* Cyber Attacks Target Multiple Australian Super Funds, Half Million Dollars Stolen
* Intelligence Agencies Warn of "Fast Flux" Threat to National Security
* SpotBugs Token Theft Revealed as Origin of Multi-Stage GitHub Supply Chain Attack
* ASIC Secures Court Orders to Shut Down 95 "Hydra-Like" Scam Companies
* Oracle Acknowledges "Legacy Environment" Breach After Weeks of Denial

Cyber Attacks Target Multiple Australian Super Funds, Half Million Dollars Stolen
https://www.itnews.com.au/news/aussie-super-funds-targeted-by-fraudsters-using-stolen-creds-616269
https://www.abc.net.au/news/2025-04-04/superannuation-cyber-attack-rest-afsa/105137820

Multiple Australian superannuation funds have been hit by a wave of cyber attacks, with AustralianSuper confirming that four members have lost a combined $500,000 in retirement savings. The nation's largest retirement fund has reportedly faced approximately 600 attempted cyber attacks in the past month alone.

AustralianSuper has now confirmed that "up to 600" of its members were impacted by the incident. Chief member officer Rose Kerlin stated, "This week we identified that cyber criminals may have used up to 600 members' stolen passwords to log into their accounts in attempts to commit fraud." The fund has taken "immediate action to lock these accounts" and notify affected members.

Rest Super has also been impacted, with CEO Vicki Doyle confirming that "less than one percent" of its members were affected—equivalent to fewer than 20,000 accounts based on recent membership reports. Rest detected "unauthorised activity" on its member access portal "over the weekend of 29-30 March" and "responded immediately by shutting down the member access portal, undertaking investigations and launching our cyber security incident response protocols." While Rest stated that no member funds were transferred out of accounts, "limited personal information" was likely accessed. "We are in the process of contacting impacted members to work through what this means for them and provide support," Doyle said.

HostPlus has confirmed it is "actively investigating the situation" but stated that "no HostPlus member losses have occurred" so far. Several other funds including Insignia and Australian Retirement were also reportedly affected. Members across multiple funds have reported difficulty accessing their accounts online, with some logging in to find alarming $0 balances displayed. The disruption has caused considerable anxiety among account holders.

National cyber security coordinator Lieutenant General Michelle McGuinness confirmed that "cyber criminals are targeting individual account holders of a number of superannuation funds" and is coordinating with government agencies and industry stakeholders in response. The Australian Prudential Regulation Authority (APRA) and Australian Securities and Investments Commission (ASIC) are engaging with all potentially impacted funds.

AustralianSuper urged members to log into their accounts "to check that their bank account and contact details are correct and make sure they have a strong and unique password that is not used for other sites."
The fund also noted it has been working with "the Australian Signals Directorate, the National Office of Cyber Security, regulators and other authorities" since detecting the unauthorised access. If you're a member of any of those funds, watch for official communications and be wary of potential phishing attempts that may exploit the situation.

Intelligence Agencies Warn of "Fast Flux" Threat to National Security
https://www.cyber.gov.au/about-us/view-all-content/alerts-and-advisories/fast-flux-national-security-threat

Multiple intelligence agencies have issued a joint cybersecurity advisory warning organizations about a significant defensive gap in many networks against a technique known as "fast flux." The National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), FBI, Australian Signals Directorate, Canadian Centre for Cyber Security, and New Zealand National Cyber Security Centre have collaborated to raise awareness about this growing threat.

Fast flux is a domain-based technique that enables malicious actors to rapidly change DNS records associated with a domain, effectively concealing the locations of malicious servers and creating resilient command and control infrastructure. This makes tracking and blocking such malicious activities extremely challenging for cybersecurity professionals.

"This technique poses a significant threat to national security, enabling malicious cyber actors to consistently evade detection," states the advisory. Threat actors employ two common variants: single flux, where a single domain links to numerous rotating IP addresses, and double flux, which adds an additional layer by frequently changing the DNS name servers responsible for resolving the domain.

The advisory highlights several advantages that fast flux networks provide to cybercriminals: increased resilience against takedown attempts, rendering IP blocking ineffective due to rapid address turnover, and providing anonymity that complicates investigations. Beyond command and control communications, fast flux techniques are also deployed in phishing campaigns and to maintain cybercriminal forums and marketplaces.

Notably, some bulletproof hosting providers now advertise fast flux as a service differentiator. One such provider boasted on a dark web forum about protecting clients from Spamhaus blocklists through easily enabled fast flux capabilities.

The advisory recommends organizations implement a multi-layered defense approach, including leveraging threat intelligence feeds, analyzing DNS query logs for anomalies, reviewing time-to-live values in DNS records, and monitoring for inconsistent geolocation.
It also emphasizes the importance of DNS and IP blocking, reputation filtering, enhanced monitoring, and information sharing among cybersecurity communities. "Organizations should not assume that their Protective DNS providers block malicious fast flux activity automatically, and should contact their providers to validate coverage of this specific cyber threat," the advisory warns.

Intelligence agencies are urging all stakeholders—both government and providers—to collaborate in developing scalable solutions to close this ongoing security gap that enables threat actors to maintain persistent access to compromised systems while evading detection.

SpotBugs Token Theft Revealed as Origin of Multi-Stage GitHub Supply Chain Attack
https://unit42.paloaltonetworks.com/github-actions-supply-chain-attack/

Security researchers have traced the sophisticated supply chain attack that targeted Coinbase in March 2025 back to its origin point: the theft of a personal access token (PAT) associated with the popular open-source static analysis tool SpotBugs.

Palo Alto Networks Unit 42 revealed in their latest update that while the attack against cryptocurrency exchange Coinbase occurred in March 2025, evidence suggests the malicious activity began as early as November 2024, demonstrating the attackers' patience and methodical approach.

"The attackers obtained initial access by taking advantage of the GitHub Actions workflow of SpotBugs," Unit 42 explained. This initial compromise allowed the threat actors to move laterally between repositories until gaining access to reviewdog, another open-source project that became a crucial link in the attack chain. Investigators determined that the SpotBugs maintainer was also an active contributor to the reviewdog project. When the attackers stole this maintainer's PAT, they gained the ability to push malicious code to both repositories.

The breach sequence began when attackers pushed a malicious GitHub Actions workflow file to the "spotbugs/spotbugs" repository using a disposable account named "jurkaofavak." Even more concerning, this account had been invited to join the repository by one of the project maintainers on March 11, 2025 – suggesting the attackers had already compromised administrative access.

Unit 42 revealed the attackers exploited a vulnerability in the repository's CI/CD process. On November 28, 2024, the SpotBugs maintainer modified a workflow in the "spotbugs/sonar-findbugs" repository to use their personal access token while troubleshooting technical difficulties. About a week later, attackers submitted a malicious pull request that exploited a GitHub Actions feature called "pull_request_target," which allows workflows from forks to access secrets like the maintainer's PAT. This compromise initiated what security experts call a "poisoned pipeline execution" (PPE) attack. The stolen credentials were later used to compromise the reviewdog project, which in turn affected "tj-actions/changed-files" – a GitHub Action used by numerous organizations including Coinbase.

One puzzling aspect of the attack is the three-month delay between the initial token theft and the Coinbase breach. Security researchers speculate the attackers were carefully monitoring high-value targets that depended on the compromised components before launching their attack. The SpotBugs maintainer has since confirmed the stolen PAT was the same token later used to invite the malicious account to the repository.
All tokens have now been rotated to prevent further unauthorized access. Security experts remain puzzled by one aspect of the attack: "Having invested months of effort and after achieving so much, why did the attackers print the secrets to logs, and in doing so, also reveal their attack?" Unit 42 researchers noted, suggesting there may be more to this sophisticated operation than currently understood.

ASIC Secures Court Orders to Shut Down 95 "Hydra-Like" Scam Companies
https://asic.gov.au/about-asic/news-centre/find-a-media-release/2025-releases/25-052mr-asic-warns-of-threat-from-hydra-like-scammers-after-obtaining-court-orders-to-shut-down-95-companies/

The Australian Securities and Investments Commission (ASIC) has successfully obtained Federal Court orders to wind up 95 companies suspected of involvement in sophisticated online investment and romance baiting scams, commonly known as "pig butchering" schemes.

ASIC Deputy Chair Sarah Court warned consumers to remain vigilant when engaging with online investment websites and mobile applications, describing the scam operations as "hydra-like" – when one is shut down, two more emerge in its place. "Scammers will use every tool they can think of to steal people's money and personal information," Court said. "ASIC takes action to frustrate their efforts, including by prosecuting those that help facilitate their conduct and taking down over 130 scam websites each week."

The Federal Court granted ASIC's application after the regulator discovered most of the companies had been incorporated using false information. Justice Stewart described the case for winding up each company as "overwhelming," citing a justifiable lack of confidence in their conduct and management.

ASIC believes many of these companies were established to provide a "veneer of credibility" by purporting to offer genuine services. The regulator has taken steps to remove numerous related websites and applications that allegedly facilitated scam activity by tricking consumers into making investments in fraudulent foreign exchange, digital assets, or commodities trading platforms. In some cases, ASIC suspects the companies were incorporated using stolen identities, highlighting the increasingly sophisticated techniques employed by scammers. These operations often create professional-looking websites and applications designed to lull victims into a false sense of security.

The action represents the latest effort in ASIC's ongoing battle against investment scams. The regulator reports removing approximately 130 scam websites weekly, with more than 10,000 sites taken down to date – including 7,227 fake investment platforms, 1,564 phishing scam hyperlinks, and 1,257 cryptocurrency investment scams.

Oracle Acknowledges "Legacy Environment" Breach After Weeks of Denial
https://www.bloomberg.com/news/articles/2025-04-02/oracle-tells-clients-of-second-recent-hack-log-in-data-stolen

Oracle has finally admitted to select customers that attackers breached a "legacy environment" and stole client credentials, according to a Bloomberg report. The tech giant characterized the compromised data as old information from a platform last used in 2017, suggesting it poses minimal risk. However, this account conflicts with evidence provided by the threat actor, who posted records from late 2024 and 2025 on a hacking forum.
The attacker, known as "rose87168," listed 6 million data records for sale on BreachForums on March 20, including sample databases, LDAP information, and company lists allegedly stolen from Oracle Cloud's federated SSO login servers.

Oracle has reportedly informed customers that cybersecurity firm CrowdStrike and the FBI are investigating the incident. According to cybersecurity firm CybelAngel, Oracle told clients that attackers gained access to the company's Gen 1 servers (Oracle Cloud Classic) as early as January 2025 by exploiting a 2020 Java vulnerability to deploy a web shell and additional malware. The breach, detected in late February, reportedly involved the exfiltration of data from the Oracle Identity Manager database, including user emails, hashed passwords, and usernames.

When initially questioned about the leaked data, Oracle firmly stated: "There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data." However, cybersecurity expert Kevin Beaumont noted this appears to be "wordplay," explaining that "Oracle rebadged old Oracle Cloud services to be Oracle Classic. Oracle Classic has the security incident."

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit edwinkwan.substack.com

Passage to Profit Show
Entrepreneurs, Build a Thriving Travel Brand Through Social Media with Jessica Dante + Others (Full Episode)


Apr 7, 2025 · 76:10


Richard Gearhart and Elizabeth Gearhart, co-hosts of Passage to Profit Show, interview Jessica Dante from Dante Media and the "Love and London" brand, "The Mind Whisperer" Dawna Campbell from The Healing Heart, Inc. and Ian L. Paterson from Plurilock™.

In this episode, we chat with Jessica Dante, founder of Dante Media and the savvy travel guru behind the viral "Love and London" brand. From uncovering classic tourist scams to dishing out honest advice on what to skip (sorry, Madame Tussauds!), Jessica shares how she built a million-strong following by helping travelers have smarter, more authentic adventures in London and Paris. Read more at: Love and London website: https://loveandlondon.com/, Youtube: https://www.youtube.com/user/loveandlondon, Instagram: https://www.instagram.com/loveandlondon/?hl=en, Love and London's free 101 Guide: https://loveandlondon.com/london-101-guide-main/

Dawna Campbell is the CEO and Founder of The Healing Heart, Inc., an international business that provides life-changing services to clients all over the world. Dawna is widely recognized as The Mind Whisperer for her unparalleled ability to reprogram the subconscious brain for instant money creation, enabling her clients to manifest a life of happiness, prosperity, and love. Read more at: https://www.dawnacampbell.com/

Ian L. Paterson is the CEO of Plurilock™ and is a data entrepreneur with more than 15 years of experience in leading and commercializing technology companies focused on data analytics and cybersecurity. Plurilock™ is a global cyber solutions provider and maker of Plurilock AI, a leading platform for SSO, CASB, DLP, AI identity + AI safety. Read more at: https://plurilock.com/

Whether you're a seasoned entrepreneur, a startup, an inventor, an innovator, a small business or just starting your entrepreneurial journey, tune into Passage to Profit Show for compelling discussions, real-life examples, and expert advice on entrepreneurship, intellectual property, trademarks and more. Visit https://passagetoprofitshow.com/ for the latest updates and episodes.
Chapters
(00:00:00) - Start Your Business Now
(00:00:25) - Passage to Profit
(00:01:38) - How to Spot Unsightly Opportunities as an Entrepreneur
(00:03:28) - How to Spot Unseen Opportunities?
(00:05:06) - Spotting Unsightly Opportunities
(00:06:13) - The Importance of Identifying Unsightly Opportunities
(00:07:25) - Meet Jessica Dante
(00:10:05) - Love and London
(00:11:51) - Tutorial on How to Make a Living on YouTube
(00:15:36) - Have All the Attention Made You a Better Manager?
(00:17:28) - Oprah on Her Own Career
(00:18:13) - The challenges of running a small business
(00:19:20) - Jessica Alba on Meet and Focuses
(00:20:12) - How to Make a Money on YouTube With Shorts
(00:24:01) - Small Business Health Insurance
(00:25:01) - Travel Guides for London
(00:27:10) - Intellectual Property News: AI and Copyright
(00:30:31) - Do Authors Own AI Content?
(00:36:57) - Home Warranty: How to Prosper Yourself
(00:38:57) - Richard and Elizabeth Gearhart
(00:39:22) - What's Going On With Your Projects?
(00:41:07) - Carb and colorectal cancer risk
(00:41:55) - How to Read Your Mind's Quantum Field
(00:45:53) - How to Stop Resisting in Your Life
(00:48:58) - Does Money Play a Role in Healing?
(00:50:34) - What Made Me Who I Am
(00:53:20) - Where Do You See Your Practice Taking You?
(00:54:50) - Cybersecurity in the Elevator
(00:55:56) - How to Outrun Cyber Threats
(01:01:30) - Top 5 tips for cyber security
(01:06:40) - Is there anything really exciting coming down the pike in cybersecurity?
(01:09:34) - Tax Doctor
(01:10:55) - What is Your Secret to Success?
(01:13:37) - Ian L. Patterson on Networking
(01:15:11) - Passive to Profit

SurgOnc Today
Live at SSO 2025: Society of Surgical Oncology International Committee presents: the International Career Development Exchange Program


Apr 4, 2025 · 15:05


The International Career Development Exchange (ICDE) program provides support for up-and-coming, early career surgical oncologists with leadership potential to receive one-on-one mentoring and engagement with a distinguished senior SSO member. SSO supports a participant from each of our 15 Global Partner Societies, plus two SSO member participants from countries not connected to one of our Global Partners. Participants receive complimentary registration for the SSO annual meeting and the opportunity to participate in a minimum one-week clinical observership at a US-based SSO member's institution. Each participant is paired with an SSO Member Mentor with the goal that a long-term professional relationship will develop and continue over the course of the participant's career. In this episode past ICDE recipients are interviewed about their experiences and the impact of the ICDE program on their career trajectories.

The Secure Developer
Authentication, Authorization, And The Future Of AI Security With Alex Salazar


Apr 1, 2025 · 38:36


Episode Summary
In this episode of The Secure Developer, host Danny Allan sits down with Alex Salazar, founder and CEO of Arcade, to discuss the evolving landscape of authentication and authorization in an AI-driven world. Alex shares insights on the shift from traditional front-door security to back-end agent interactions, the challenges of securing AI-driven agents, and the role of identity in modern security frameworks. The conversation delves into the future of AI, agentic workflows, and how organizations can navigate authentication, authorization, and security in this new era.

Show Notes
Danny Allan welcomes Alex Salazar, an experienced security leader and CEO of Arcade, to explore the transformation of authentication and authorization in AI-powered environments. Drawing from his experience at Okta, Stormpath, and venture capital, Alex provides a unique perspective on securing interactions between AI agents and authenticated services.

Key topics discussed include:
• The Evolution of Authentication & Authorization: Traditional models focused on front-door access (user logins, SSO), whereas AI-driven agents require secure back-end interactions.
• Agentic AI and Security Risks: How AI agents interact with services on behalf of users, and why identity becomes the new perimeter in security.
• OAuth and Identity Challenges: Adapting OAuth for AI agents, ensuring least-privilege access, and maintaining security compliance.
• AI Hallucinations & Risk Management: Strategies for mitigating LLM hallucinations, ensuring accuracy, and maintaining human oversight.
• The Future of AI & Agentic Workflows: Predictions on how AI will continue to evolve, the rise of specialized AI models, and the intersection of AI and physical automation.

Alex and Danny also discuss the broader impact of AI on developer productivity, with insights into how companies can leverage AI responsibly to boost efficiency without compromising security.

Links
• Arcade.dev - Make AI Actually Do Things
• Okta - Identity
• OAuth - Authorization Protocol
• LangChain - Applications that Can Reason
• Hugging Face - The AI Community Building the Future
• Snyk - The Developer Security Company

Follow Us
• Our Website
• Our LinkedIn

Sengoku Daimyo's Chronicles of Japan
Episode 123: Embassy Interrupted

This episode we will discuss various embassies to and from Yamato during the reign of Takara Hime, with a particular focus on the embassy of 659, which occurred at a particularly eventful time and happened to be extremely well-recorded for the period by Iki no Hakatoko, who was apparently on the mission to the Tang court itself. For more, check out our blog post at: https://sengokudaimyo.com/podcast/episode-123

Rough Transcript

Welcome to Sengoku Daimyo's Chronicles of Japan. My name is Joshua, and this is episode 123: Embassy Interrupted.

Iki no Hakatoko sat in his room, gazing out at the city. It was truly an amazing place, filled with all kinds of people from around the world. And yet, still, after 9 months of confinement, the place felt small. Sure, there were visits from ranking nobles and dignitaries, but even the most lenient of house arrests was still house arrest. But that didn't mean that he had nothing to do. There were books and more that he had access to—many that had not yet made it to the archipelago, and some of which he no doubt hoped he could bring back with him. And of course, there was paper, brush, and ink. And then there were the experiences he and others had acquired on this mission to the Great Tang. From the very beginning the mission had been plagued with disaster when they lost half of their ships and company to rogue winds on the open seas. Now they were trapped because the Emperor himself wouldn't let them return home. They had experienced and seen so much, and that provided ample material for one to catalogue.

As the seasons changed, and rumors arrived that perhaps something would change soon, Iki no Hakatoko spread out the paper on the desk in front of him, dipped his brush in the ink, and began to write. He wrote down notes about his experiences, and what had befallen him and the others. It is unclear whom he thought might read it, and whether he was intending this to be an official or personal record, but he wrote it down anyway. He couldn't have known then that his words would eventually be captured in a much larger work, chronicling the entire history of Yamato from its very creation, nor that his would be one of the oldest such personal accounts to be handed down. His words would only survive in fragments—or perhaps his writing was simply that terse—but they would be preserved, in a format that was still being read over a thousand years later.

Last episode we finished up the story of Xuanzang and his Journey to the West—which is to say the Western Regions, and thence on to India, or Tianzhu—where he walked in the footsteps of the historical Buddha, studied the scriptures at the feet of venerable teachers, such as Silabadhra at the Great Monastery of Nalanda, and eventually brought back hundreds of manuscripts to Chang'an, which he and others translated and disseminated, impacting Buddhist thought across East Asia. Xuanzang's travels lasted from around 629 to 645, and he was still teaching in Chang'an in the 650s when various student-monks from Yamato arrived to study and learn from him, eventually bringing back his teachings to the archipelago as part of the Faxiang, or Hossou, school of Buddhism. Before that we talked about the visitors from "Tukhara" and "Sha'e" recorded in the Chronicles.
As we noted, these people were most likely from the Ryukyuan islands, and the names may have been conflated with distant lands overseas. Whether or not it was a mistake, it does seem to indicate that Yamato had at least an inkling of the wider world, introduced through the continental literature that they had been importing, if not the direct interactions with individuals from the Korean peninsula and the Tang court.

This episode, we're going to talk about some of the relations between Yamato and the continent, including the various embassies sent back and forth, as well as one especially detailed embassy from Yamato to the Tang Court that found itself in a bit of a pickle. After all, what did you do, back in those days, when you were an ambassador, and your country suddenly went to war? We'll talk about that and what happened.

To reorient ourselves in time, we're in the reign of Takara Hime, aka Kyogoku Tennou during her first reign, who had reascended to the throne in 655, following the death of her brother, Prince Karu. The Chroniclers would dub her Saimei Tennou in her second run on the throne. From the very beginning of her second reign, Takara Hime was entertaining foreign envoys. In 654, the Three Han of the Korean Peninsula—Goguryeo, Baekje, and Silla—all sent ambassadors to express their condolence on the death of her brother, and presumably to witness her ascension. And in the 8th month of her reign, Kawabe no Maro no Omi, along with others, returned from Chang'an. Kawabe no Maro no Omi had been the Chief Ambassador to the Tang, traveling there in the 2nd month of the previous year. Originally he had been under the command of the controlling envoy, Takamuku no Obito no Kuromaro, but Kuromaro unfortunately died in Chang'an, and so Kawabe no Maro no Omi took over his role.

That same year, 655, we know that there were about 100 persons recorded in Yamato from Baekje, along with envoys of Goguryeo and Silla. These are likely the same ones we mentioned back in episode 117, when 150 Baekje envoys were present at court along with multiple members of the Emishi. Silla, for their part, had sent to Yamato a special hostage, whom we know as something like "Mimu", along with skilled workmen. Unfortunately, we are told that Mimu fell ill and died. The Chronicles are pretty sparse on what this meant, but I can't imagine it was great. After all, the whole idea of sending a hostage to another nation was as a pledge of good behavior – the idea being that the hostage was valuable enough that the sending nation wouldn't do anything too rash. The flip side of that is that if the hostage died, the hosting country lost any leverage—and presumably the sending nation would be none too pleased. That said, people getting sick and passing away was hardly a hostile action, and likely just considered an unfortunate situation.

The following year, in 656, we see that Goguryeo, Baekje, and Silla again all sent ambassadors to offer "tribute". The Chronicles mention that dark purple curtains were drawn around the palace site to entertain the ambassadors—likely referring to the new palace site at Asuka no Wokamoto, which probably was not yet fully built out. We are given the names of the Goguryeo ambassador, Talsa, and associate ambassador, Ilchi, who arrived in the 8th month with 81 total members in the Goguryeo embassy.
In seeming response, an embassy was sent from Yamato to Goguryeo with the likes of Kashiwade no Omi no Hatsumi as the Chief Ambassador and Sakahibe no Muraji no Iwasuki as the Associate Ambassador. We also see the likes of Inugami no Shiromaro, Kawachi no Fumi no Obito—no personal name is given—and Ohokura no Maro. There is also a note in the Chronicles that Yamato ambassadors to the quote-unquote "Western Sea"—which seems to refer to the Tang court, but could possibly refer to anything from the Korean Peninsula west—returned in that same year. The two are named as Saheki no Muraji no Takunaha and Oyamashita no Naniha no Kishi no Kunikatsu. These are both families that were clearly involved in cross-strait relations, based on how frequently they are referenced in the Chronicles as being associated with various overseas missions. However, we don't seem to have clear evidence of when these particular individuals left on this mission. "Kunikatsu" may refer to an earlier ambassador to Baekje, but the names are different, so that is largely just speculation. In any case, upon their return, they are said to have brought with them a parrot. This wasn't the first parrot the court had seen—that feathery traveler had arrived in 647, or at least that is the first instance we have in the written record. And that one came from Silla as part of that embassy's gifts.

The following year, in 657, another group of ambassadors returned from the "Western Seas", in this case coming back from—or through—Baekje. These were Adzumi no Muraji no Tsuratari and Tsu no Omi no Kutsuma. The presents they brought back were, of all things: one camel and two donkeys. And can you imagine bringing a camel back across the sea at this point? Even if they were using the larger ships based on continental designs, it still must have been something else to put up with a camel and donkeys onboard, animals that are not exactly known for their easy-going and compliant nature.

Speaking of boats, we should probably touch on what we *think* they were using. I say *think* because we only get glimpses of the various boats being used in the archipelago, whether from mentions in or around Yamato, archaeology, or artistic depictions, many of which came from later periods. So while it is generally assumed that Yamato was using Tang style vessels by the 8th and 9th century, there does not appear to be clear evidence of exactly what kind of boats were being used during the earlier periods of contact.

A quick note on boat technology and navigation: while travel between the Japanese archipelago and the Korean Peninsula, and up the Yellow Sea, wasn't safe, it would have been possible with the vessels of the time. Japan sits on the continental shelf, meaning that to the east, where the shelf gives way to the Pacific Ocean with the Philippine Sea to the south, the waters are much, much deeper than they are to the west. In deep waters, waves are not necessarily affected by the ocean floor, meaning they can build up much more energy and require different kinds of technology to sail. In shallower areas, such as the Sea of Japan, the Yellow Sea, the East China Sea, or the Korean Straits to the west of the archipelago, there's more drag that dampens out the wave effect – it's not that these areas are uniformly shallow and calm, but they are calmer and easier to navigate in general.
Our oldest examples of boats in the archipelago of any kind are dugout canoes: logs that are hollowed out and shaped. These appear to be what Jomon era populations used to cross to the archipelago and travel between the various islands. Though they may be considered primitive, without many of the later innovations that would increase stability and seaworthiness—something I'll touch on more a bit later—they were clearly effective enough to populate the islands of the Ryukyuan chain and even get people and livestock, in the form of pigs, down to the Hachijo islands south of modern Tokyo. So they weren't ineffective.

Later, we see a different type of vessel appear: a built vessel, made of multiple hewn pieces of wood. The examples that we see show a rather square front and back that rise up, sometimes dramatically, with various protrusions on either side. We've seen examples of this shape in haniwa from about the 6th century, and we have some corresponding wooden pieces found around the Korean peninsula that pretty closely match the haniwa boat shape, suggesting similar boats were in use there as well—not surprising given the cultural connections. These boats do not show examples of sails, and were likely crewed by rowers. Descriptions of some suggest that they might be adorned with branches, jewels, mirrors, and other such things for formal occasions to identify some boats as special, and we even have one record of the rowers in ceremonial garb with deer antlers. But none of this suggests more than one basic boat type.

In the areas of the Yellow and Yangzi rivers, in what is now modern China, the boats we see are a little different. They tend to be flat-bottomed boats, which appear to have been designed from rafts or similar craft. These vessels would have evolved out of those used to transport goods and people up and down the Yellow and Yangzi rivers and their tributaries.
These boats had developed sails, but they were not necessarily the most stable on the open ocean. Larger boats could perhaps make their way through some of the waves, and were no doubt used throughout the Yellow Sea and similar regions. However, for going farther abroad, court chronicles note that there were other boats that were preferred: these are sometimes called the Kun'lun-po, or Boats of the Kunlun, or the Boats of the Dark-skinned people.

A quick dive here into how this name came to be. Originally, "Kunlun" appears to refer to a mythical mountain range, the Kunlun-shan, which may have originated in the Shan-hai-jing, the Classic of Mountains and Seas, and so may not have referred to any specific terrestrial mountain range, although the term would later be used to describe the mountain chain that forms the northern edge of the Tibetan plateau, on the southern edge of the Tarim Basin. However, at some point, it seems that "Kunlun" came to refer to people: specifically, people of dark complexion, with curly hair. There are Tang era depictions of such people, but their origin is not exactly known: it may have referred to dark-skinned individuals of African descent, or possibly to some of the dark-skinned people who lived in the southern seas—people like the Andamanese living on the islands west of modern Thailand or some of the people of the Malay peninsula, for example. It is these latter groups that were likely the origin, then, of the "Kun'lun-po", referring to the ships of the south, such as those of Malay and Austronesian origin. We know that from the period of at least the Northern and Southern Dynasties, and even into the early Tang, these foreign ships were often plying the waters from trade port to trade port, and were the preferred sailing vessels for voyages to the south, where the waters could be more treacherous. Indeed, the Malay language eventually gives us the term for their vessels as "Djong", a term that made its way into Portuguese as "Junco" and thus into English as "junk", though this term has since been rather broadly applied to different "Asian" style sailing vessels.

So that leaves us with three ship types that the Yamato court could have been using to send these embassies back and forth to the continent. Were they still using their own style of native boat as seen on haniwa, or were they adopting continental boats to their needs? If so, were they using the flat-bottomed boats of the Tang dynasty, or the more seaworthy vessels of the foreign merchants? The general thinking is that they were probably using the more continental-style flat-bottomed, riverine vessels. After all, they were copying so much of what the Sui and Tang courts were doing, why would they not consider these ships to likewise be superior to their own? At least for diplomatic purposes. I suspect that local fishermen kept their own counsel as far as ships were concerned, and I also have to wonder what was used for military purposes.
Certainly we see the Tang style boats used in later centuries, suggesting that these had been adopted at some earlier point, possibly by the 650s or earlier. Whatever they used, and while long-distance sailing vessels could be larger than short-distance riverine craft, this was not a luxury cruise. From later accounts we know that people could be packed into these ships quite tightly. It should be noted that individual beds and bedrooms were a luxury in much of the world, and many people probably had little more than a mat to sleep on. Think of the size of some of these embassies, which are said to be 80 to 150 people in size. A long, overseas journey likely meant getting quite cozy with your neighbors on the voyage. So how much more so with a camel and two donkeys on board a vessel that was likely never meant to carry them? Not exactly the most pleasant experience, I imagine – and this is not really any different than European sailing vessels during the later age of exploration.

So, from the records for just the first few years of Takara-hime's second reign, we see that there are lots of people going back and forth, and we have a sense of how they might be getting to and from the continent and peninsula. Next, we are going to talk about one of the most heavily documented embassies to the Tang court, which set out in the 7th month of the year 659. Not only do we get a pretty detailed account of this embassy, but we even know who wrote the account: as in our imagined intro, this is one of the accounts by the famous Iki no Muraji no Hakatoko, transcribed by Aston as "Yuki" no Muraji.

Iki no Hakatoko's name first appears in an entry for 654, where he is quoted as giving information about the status of some of the previous embassies to the Tang court. Thereafter, various entries are labeled as "Iki no Muraji no Hakatoko says:", which would seem to indicate that these particular entries are taken directly from another work written by Iki no Hakatoko and referred to as the "Iki Hakatoko Sho". Based on the quoted fragments found in the Nihon Shoki, this appears to be one of our oldest Japanese travelogues. It spends considerable time on the mission of 659, of which it would appear that Iki no Hakatoko was himself a member, though not a ranking one. Later, Iki no Hakatoko would find himself mentioned in the Nihon Shoki directly, and he would even be an ambassador, himself.

The embassy of 659 itself, as we shall see, was rather momentous. Although it started easily enough, the embassy would be caught up in some of the most impactful events that would take place between the Tang, Yamato, and the states of the Korean peninsula. This embassy was formally under the command of Sakahibe no Muraji no Iwashiki and Tsumori no Muraji no Kiza. It's possible that the former is the same person as the previously mentioned associate envoy, Sakahibe no Iwasuki—but the kanji are different enough, and there is another Sakahibe no Kusuri who shows up between the two in the record. However, they are both listed as envoys during the reign of Takara Hime, aka Saimei Tennou, and as we've abundantly seen, it wouldn't be the first time that scribal error crept in, especially if the Chroniclers were pulling from different sources.
The ambassadors took a retinue with them, including members of the northern Emishi, whom they were bringing along to show to the Tang court.  They also took two ships, perhaps because of the size of the retinue, but I suspect this was also because if anything happened to the one, you still had the other: a kind of backup plan, given the likelihood that something would go wrong.  And wouldn't you know it, something did go wrong.  Things started out fine, departing Mitsu Bay, in Naniwa, on the 3rd day of the 7th month.  They sailed through the Seto Inland Sea and stopped at Tsukushi, likely for one last resupply and to check in with the Dazai, located near modern Fukuoka, who would have been in charge of overseeing ships coming and going to the archipelago.  They departed from Ohotsu bay in Tsukushi on the 11th day of the 8th month.  A quick note: speedboats these were not.  Today, one can cross from Fukuoka to Busan, on the southeast corner of the Korean peninsula, in less than a day.  The envoys, however, were taking their time.  They may have even stopped at the islands of Iki and Tsushima on their way.  By the 13th day of the 9th month, over a month after leaving Kyushu behind, the ships finally came to an island along the southern border of Yamato's ally, Baekje.  Hakatoko does not recall the name of the island.  On the following morning, around 4 AM, just before sunrise, the two ships put out to sea together to cross the ocean, heading south, towards the mouth of the Yangzi river.  Unfortunately, the following day, the ship Iwashiki was on met with a contrary wind and was driven away from the other ship, with nothing known of its fate until some time afterwards.  Meanwhile, the other ship, under the command of Tsumori no Muraji no Kiza, continued on, and by midnight on the 16th day it arrived at Mt. Xuan near Kuaiji Commandery in the Yue district, in modern Zhejiang.  Suddenly a violent northeast wind blew up, and they sailed another 7 days before they finally arrived at Yuyao.  Today, this is part of the city of Ningbo, at the mouth of the Qiantang river, south of Shanghai, and considered a part of the Yangzi Delta Region.  This area has been inhabited since at least 6300 years ago, and it has long been a trade port, especially with the creation of the Grand Canal connecting the Yangzi and the Yellow River, which would have allowed transshipment of goods to both regions.  The now half-size Yamato contingent left their ship at Yuyao, disembarked, and made their way to Yuezhou, the capital of the Kuaiji Commandery.  This took them a bit of time, a little over a month.  Presumably this was because of paperwork and logistics: they probably had to send word ahead, and I suspect they had to inventory everything they brought and negotiate carts and transportation, since they didn't exactly have bags of holding to stuff it all in.  They finally made it to Yuezhou on the first day of the 11th intercalary month.  An "intercalary" month refers to an extra month in a year; it was determined by various calculations and was added to keep the lunar and solar years in relative synch.  From Yuezhou, things went a bit more quickly, as they were placed on post-horses up to the Eastern Capital, Luoyang, where the Emperor Tang Gaozong was in residence.  The Tang kept a capital at Luoyang and another to the west, in Chang'an.  
The trip to Luoyang was long: over 1,000 kilometers, or 1 megameter, as it were.  The trip first took them through the Southern Capital, meaning the area of modern Nanjing, which they entered on the 15th day of the month.  They then continued onwards, reaching Luoyang on the 29th day of the 11th month.  The following day, on the 30th day of the 11th intercalary month of the year 659, the Yamato envoys were granted an audience with Emperor Tang Gaozong.  As was proper, he inquired about the health of their sovereign, Takara Hime, and the envoys reported that she was doing well.  He asked other questions about how the officials were doing and whether there was peace in Yamato.  The envoys all responded affirmatively, assuring him that Yamato was at peace.  Tang Gaozong also asked about the Emishi they had brought with them.  We mentioned this event previously, back in Episode 117: how the Emishi had been shown to the Tang Emperor, and how they had described them for him.  This is actually one of the earliest accounts that we have describing the Emishi from the Yamato point of view, rather than just naming them, presumably because everyone in Yamato already knew who they were.  From a diplomatic perspective, of course, this was no doubt Yamato demonstrating how they were, in many ways, an Empire, similar to the Tang, with their own subordinate ethnicities and "barbarians".  After answering all of the emperor's questions, the audience was concluded.  The following day, however, was something of its own.  This was the first day of the regular 11th lunar month, and it was also the celebration of the Winter Solstice, so though it was the 11th month, it may have been about 22 December according to our modern western calendars.  The envoys once again met with the emperor, and they were treated as distinguished guests, at least according to their own records of it.  Unfortunately, during the festivities, it seems that a fire broke out, creating some confusion, and the matters of the diplomatic mission were put on hold while all of that went on.  We don't know exactly what happened in the ensuing month.  Presumably the envoys took in the sights of the city, may have visited various monasteries, and likely got to know the movers and shakers in the court, who likely would have wined and dined them, inviting them to various gatherings, since they brought their own exotic culture and experiences to the Tang court.  Unfortunately, things apparently turned sour.  First off, it seems clear that the members of this embassy weren't the only Japanese in the court.  There may have been various merchants, of course, and we definitely know that there were students who had come on other missions and were likely still there studying, such as those who had been studying with Master Xuanzang, whose journeys we mentioned in the last several episodes.  We are given a very specific name of a troublemaker, however: Kawachi no Aya no Ohomaro, a servant of Han Chihung.  Han Chihung himself is thought to have been of mixed ethnicity, both Japanese and ethnic Han, and he may have traveled to the Tang court on or around 653, based on some of the records, though it isn't entirely clear.  For whatever reason, on the 3rd day of the 12th month of the year 659, Kawachi no Aya no Ohomaro slandered the envoys.  We don't know exactly what he said, but the Tang court caught wind of the accusations and found the envoys guilty.  
They were condemned to banishment, until the author of our tale, none other than Iki no Hakatoko himself, stepped up.  He made representation to the Emperor, pleading against the slander, and the punishment was remitted, so they were no longer banished.  However, they were then told that they could not return home.  You see, the Tang court was in the middle of some sensitive military operations in the lands east of the sea; in other words, they were working with Silla to invade the Kingdom of Baekje.  Since Yamato was an ally of Baekje, it would be inconvenient if the envoys were to return home and rally Yamato to Baekje's defense.  And so the entire Yamato embassy was moved to the Western Capital, Chang'an, where they were placed under individual house arrest.  They no doubt were treated well, but they were not allowed to leave, and they ended up spending the next year in this state.  Unfortunately, we don't have a record of just how they passed their time in Chang'an.  They likely studied, and were probably visited by nobles and others.  They weren't allowed to leave, but they weren't exactly thrown in jail, either.  After all, they were foreign emissaries, and though the Tang might be at war with their ally, there was no formal declaration of war with Yamato, as far as I can make out.  And so the embassy just sat there, for about 9 months.  Finally, in the 7th month of 660, the Tang and Silla forces were successful: Baekje was destroyed.  News must have reached Chang'an a month later, as Iki Hakatoko writes that this occurred in the 8th month of the year 660.  With the Tang special military operation on the Korean peninsula concluded, they released the envoys and allowed them to return to their own countries.  The envoys began their preparations as of the 12th day of the 9th month, no doubt eager to return home, and left Chang'an a week later, on the 19th day of the 9th month.  From there, it took them almost a month to reach Luoyang, arriving on the 16th day of the 10th month, and here they were greeted with more good news, for it was here that they met up once again with those members of their delegation who had been blown off course.  As you may remember, the ship carrying Iwashiki was blown off course on the 15th day of the 9th month in the year 659, shortly after setting out from the Korean peninsula.  The two ships had lost contact, and Tsumori no Muraji no Kiza and his ship had been the one that continued on.  Iwashiki and those with him, however, found themselves at the mercy of the contrary winds and eventually came ashore at an island in the Southern Sea, which Aston translates as "Erh-kia-wei".  There appears to be at least some suggestion that this was an island in the Ryukyuan chain, possibly the island of Kikai.  There, local islanders, none too happy about these foreigners crashing onto their beach, destroyed the ship and presumably attacked the embassy.  Several members, including Yamato no Aya no Wosa no Atahe no Arima (yeah, that *is* a mouthful), Sakahibe no Muraji no Inadzumi (perhaps a relative of Iwashiki), and others stole a local ship and made their way off the island.  They eventually made landfall at Kuazhou, southeast of Lishui City in modern Zhejiang province, where they met with local officials of the Tang government, who then sent them under escort to the capital at Luoyang.  
Once there, they were probably held in a similar state of house arrest, due to the invasion of Baekje, but they met back up with Kiza and Hakatoko's party.  The envoys, now reunited, hung out in Luoyang for a bit longer, and thus it was on the first day of the 11th month of 660 that they witnessed war captives being brought to the capital.  This included 13 royal persons of Baekje, from the King on down to the Crown Prince and various nobles, including the Prime Minister, as well as 37 other persons of lower rank: 50 people all told.  These captives were delivered up to the Tang government and led before the emperor.  Of course, with the war concluded and Baekje no longer a functioning state, while he could have had them executed, Tang Gaozong instead released them, demonstrating a certain amount of magnanimity.  The Yamato envoys remained in Luoyang for most of the month.  On the 19th, they had another audience with the emperor, who bestowed on them various gifts and presents, and then five days later they departed Luoyang and began the trek back to the archipelago in earnest.  By the 25th day of the first month of 661, the envoys arrived back at Yuezhou, head of the Kuaiji Commandery.  They stayed there for another couple of months, possibly waiting for the right time, as crossing the sea in the wrong season could be disastrous.  They finally departed east from Yuezhou on the first day of the fourth month.  They came to Mt. Cheng-an 6 days later, on the 7th, and set out to sea first thing in the morning on the 8th.  They had a southwest wind initially in their favor, but they lost their way in the open ocean, an all-too-common problem without modern navigational aids.  Fortunately, the favorable winds had carried them far enough that only a day later they made landfall on the island of Tamna, aka Jeju island.  Jeju island was, at this point, its own independent kingdom, situated off the southern coast of the Korean peninsula.  Dr. Alexander Vovin suggested that the name "Tamna" may have been a corruption of a Japonic or proto-Japonic name: Tanimura.  The island was apparently quite strange to the Yamato embassy, and they met with various residents of Jeju island, even convincing Prince Aphaki and eight other men of the island to come with them to be presented at the Yamato court.  The rest of their journey took a little over a month.  They finally arrived back in Yamato on the 23rd day of the fifth month of 661.  They had been gone for approximately two years, and a lot had changed, especially with the destruction of Baekje.  The Yamato court had already learned of what had happened and was in the process of drawing up plans for an expedition back to the Korean peninsula to restore the Baekje kingdom, and Prince Naka no Oe himself was set to lead the troops.  The icing on the cake was the reception that the envoys received upon their return, which was rather cold.  Apparently they had been slandered to the Yamato court by another follower of Han Chihung, Yamato no Aya no Atahe no Tarushima, and so they weren't met with any fanfare.  We still don't know what it was that Tarushima was saying; possibly he had gotten letters from Chihung or Ohomaro and was simply repeating what they had said.  Either way, the envoys were sick of it.  They had traveled all the way to the Tang capitals, they had been placed under house arrest for a year, and now they had returned.  
They not only had gifts from the Tang emperor, but they were also bringing the first ever embassy from the Kingdom of Tamna along with them.  The slander would not stand.  And so they did what anyone would do at the time: they appealed to the Kami.  We are told that their anger reached the Gods of the High Heaven, which is to say the kami of Takamanohara, who killed Tarushima with a thunderbolt.  Which I guess was one way to shut him up.  From what we can tell, the embassy was eventually considered a success.  Iki no Hakatoko's star would rise, fall, and rise again in court circles.  As I noted, his account of this embassy is really one of the best and most in-depth that we have from this time.  It lets us see the relative route that the envoys were taking: the Chronicles in particular note that they traveled to the Great Tang of Wu, and, sure enough, they had set out along the southern route to the old Wu capital, rather than trying to cross the Bohai Sea and make landfall by the Shandong peninsula or at the mouth of the Yellow River.  From there they traveled through Nanjing (the southern "capital" likely referring, in this instance, to the old Wu capital) and then to Luoyang.  Though they stayed there much longer than they had anticipated, they ended up living there through some of the most impactful events that occurred at this point in Northeast Asia.  And that is something we will touch on next episode.  Until then, thank you once again for listening and for all of your support. If you like what we are doing, please tell your friends and feel free to rate us wherever you listen to podcasts.  If you feel the need to do more, and want to help us keep this going, we have information about how you can donate on Patreon or through our KoFi site, ko-fi.com/sengokudaimyo, or find the links over at our main website, SengokuDaimyo.com/Podcast, where we will have some more discussion on topics from this episode. Also, feel free to reach out to our Sengoku Daimyo Facebook page.  You can also email us at the.sengoku.daimyo@gmail.com.  Thank you, also, to Ellen for their work editing the podcast. And that's all for now.  Thank you again, and I'll see you next episode on Sengoku Daimyo's Chronicles of Japan

SurgOnc Today
Live at SSO 2025: Mastering Debulking of NET Liver Metastases

SurgOnc Today

Play Episode Listen Later Apr 1, 2025 27:48


In this episode of SurgOnc Today, Dr. Julie Hallet, chair of the HPB disease site working group, and Dr. Callisia Clarke, member of the SSO board of directors, are joined by Dr. Jessica Maxwell and Dr. Alexandra Gangi to explore the evolving field of surgery for neuroendocrine tumor liver metastases. They discuss patient selection, pre-operative optimization, and unique surgical techniques to optimize perioperative and oncologic outcomes.

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Monday, March 31st: Comparing Phishing Sites; DOH and MX Abuse Phishing; opkssh

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Mar 31, 2025 7:15


A Tale of Two Phishing Sites Two phishing sites may use very different backends, even if the site itself appears to be visually very similar. Phishing kits are often copied and modified, leading to sites using similar visual tricks on the user-facing site, but very different backends to host the sites and report data to the miscreant. https://isc.sans.edu/diary/A%20Tale%20of%20Two%20Phishing%20Sites/31810 A Phishing Tale of DOH and DNS MX Abuse Infoblox discovered a new variant of the Meerkat phishing kit that uses DoH in JavaScript to discover MX records and generate better-customized phishing pages. https://blogs.infoblox.com/threat-intelligence/a-phishing-tale-of-doh-and-dns-mx-abuse/ Using OpenID Connect for SSH Cloudflare open-sourced its OPKSSH tool. It integrates SSO systems supporting OpenID Connect with SSH. https://github.com/openpubkey/opkssh/
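To make the DoH and MX-record trick concrete: the behavior Infoblox describes boils down to client-side JavaScript asking a public DNS-over-HTTPS resolver for a domain's MX records and then choosing which branded login page to render. The sketch below is a minimal TypeScript illustration of that lookup, not code from the Meerkat kit; the dns.google JSON endpoint is a real public resolver, while the function name and the provider mapping are assumptions for illustration.

```typescript
// Sketch: how a script can learn a victim domain's mail provider from the
// browser using DNS-over-HTTPS (DoH). The dns.google JSON API is real; the
// provider mapping below is illustrative only.

interface DoHAnswer { name: string; type: number; data: string }
interface DoHResponse { Status: number; Answer?: DoHAnswer[] }

async function lookupMailProvider(domain: string): Promise<string> {
  // Query MX records via Google's public DoH JSON endpoint.
  const res = await fetch(
    `https://dns.google/resolve?name=${encodeURIComponent(domain)}&type=MX`
  );
  const body = (await res.json()) as DoHResponse;
  if (!body.Answer || body.Answer.length === 0) return "unknown";

  // MX record data looks like "10 aspmx.l.google.com."
  const hosts = body.Answer.map((a) => a.data.split(" ").pop() ?? "");

  // Hypothetical mapping from MX host to a mail provider label.
  if (hosts.some((h) => h.includes("google.com"))) return "gmail";
  if (hosts.some((h) => h.includes("outlook.com"))) return "microsoft365";
  return "other";
}

// Example: decide which branded page to render (what the phishing kit abuses,
// and what defenders can look for in client-side JavaScript).
lookupMailProvider("example.com").then((p) => console.log(p));
```

Spotting this pattern in page JavaScript, a DoH query for MX records followed by template selection, is one practical detection signal for this class of phishing kit.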

Cyber Bites
Cyber Bites - 28th March 2025

Cyber Bites

Play Episode Listen Later Mar 27, 2025 10:24


* Critical Flaw in Next.js Allows Authorization Bypass* Hackers Can Now Weaponize AI Coding Assistants Through Hidden Configuration Rules* Hacker Claims Oracle Cloud Data Theft, Company Refutes Breach* Chinese Hackers Infiltrate Asian Telco, Maintain Undetected Network Access for Four Years* Cloudflare Launches Aggressive Security Measure: Shutting Down HTTP Ports for API AccessCritical Flaw in Next.js Allows Authorization Bypasshttps://zhero-web-sec.github.io/research-and-things/nextjs-and-the-corrupt-middlewareA critical vulnerability, CVE-2025-29927, has been discovered in the Next.js web development framework, enabling attackers to bypass authorization checks. This flaw allows malicious actors to send requests that bypass essential security measures.Next.js, a popular React framework used by companies like TikTok, Netflix, and Uber, utilizes middleware components for authentication and authorization. The vulnerability stems from the framework's handling of the "x-middleware-subrequest" header, which normally prevents infinite loops in middleware processing. Attackers can manipulate this header to bypass the entire middleware execution chain.The vulnerability affects Next.js versions prior to 15.2.3, 14.2.25, 13.5.9, and 12.3.5. Users are strongly advised to upgrade to patched versions immediately. Notably, the flaw only impacts self-hosted Next.js applications using "next start" with "output: standalone." Applications hosted on Vercel and Netlify, or deployed as static exports, are not affected. As a temporary mitigation, blocking external user requests containing the "x-middleware-subrequest" header is recommended.Hackers Can Now Weaponize AI Coding Assistants Through Hidden Configuration Ruleshttps://www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agentsResearchers Uncover Dangerous "Rules File Backdoor" Attack Targeting GitHub Copilot and CursorIn a groundbreaking discovery, cybersecurity researchers from Pillar Security have identified a critical vulnerability in popular AI coding assistants that could potentially compromise software development processes worldwide. The newly unveiled attack vector, dubbed the "Rules File Backdoor," allows malicious actors to silently inject harmful code instructions into AI-powered code editors like GitHub Copilot and Cursor.The vulnerability exploits a fundamental trust mechanism in AI coding tools: configuration files that guide code generation. These "rules files," typically used to define coding standards and project architectures, can be manipulated using sophisticated techniques including invisible Unicode characters and complex linguistic patterns.According to the research, nearly 97% of enterprise developers now use generative AI coding tools, making this attack particularly alarming. By embedding carefully crafted prompts within seemingly innocent configuration files, attackers can essentially reprogram AI assistants to generate code with hidden vulnerabilities or malicious backdoors.The attack mechanism is particularly insidious. 
Researchers demonstrated that attackers could:* Override security controls* Generate intentionally vulnerable code* Create pathways for data exfiltration* Establish long-term persistent threats across software projectsWhen tested, the researchers showed how an attacker could inject a malicious script into an HTML file without any visible indicators in the AI's response, making detection extremely challenging for developers and security teams.Both Cursor and GitHub have thus far maintained that the responsibility for reviewing AI-generated code lies with users, highlighting the critical need for heightened vigilance in AI-assisted development environments.Pillar Security recommends several mitigation strategies:* Conducting thorough audits of existing rule files* Implementing strict validation processes for AI configuration files* Deploying specialized detection tools* Maintaining rigorous manual code reviewsAs AI becomes increasingly integrated into software development, this research serves as a crucial warning about the expanding attack surfaces created by artificial intelligence technologies.Hacker Claims Oracle Cloud Data Theft, Company Refutes Breachhttps://www.bleepingcomputer.com/news/security/oracle-denies-data-breach-after-hacker-claims-theft-of-6-million-data-records/Threat Actor Offers Stolen Data on Hacking Forum, Seeks Ransom or Zero-Day ExploitsOracle has firmly denied allegations of a data breach after a threat actor known as rose87168 claimed to have stolen 6 million data records from the company's Cloud federated Single Sign-On (SSO) login servers.The threat actor, posting on the BreachForums hacking forum, asserts they accessed Oracle Cloud servers approximately 40 days ago and exfiltrated data from the US2 and EM2 cloud regions. The purported stolen data includes encrypted SSO passwords, Java Keystore files, key files, and enterprise manager JPS keys.Oracle categorically rejected the breach claims, stating, "There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data."To substantiate their claims, the hacker shared an Internet Archive URL indicating they uploaded a text file containing their ProtonMail email address to the login.us2.oraclecloud.com server. The threat actor also suggested that SSO passwords, while encrypted, could be decrypted using available files.The hacker's demands are multifaceted: they are selling the allegedly stolen data for an undisclosed price or seeking zero-day exploits. Additionally, they proposed offering partial data removal for companies willing to pay a specific amount to protect their employees' information.In a provocative move, rose87168 claimed to have emailed Oracle, demanding 100,000 Monero (XMR) in exchange for breach details. 
According to the threat actor, Oracle refused the offer after requesting comprehensive information for fixing and patching the vulnerability.The threat actor alleges that Oracle Cloud servers are running a vulnerable version with a public CVE (Common Vulnerabilities and Exposures) that currently lacks a public proof-of-concept or exploit.Chinese Hackers Infiltrate Asian Telco, Maintain Undetected Network Access for Four Yearshttps://www.sygnia.co/threat-reports-and-advisories/weaver-ant-tracking-a-china-nexus-cyber-espionage-operation/Sophisticated Espionage Campaign Exploits Vulnerable Home RoutersCybersecurity researchers from Sygnia have uncovered a sophisticated four-year cyber espionage campaign by Chinese state-backed hackers targeting a major Asian telecommunications company. The threat actor, dubbed "Weaver Ant," demonstrated extraordinary persistence and technical sophistication in maintaining undetected access to the victim's network.The attack began through a strategic compromise of home routers manufactured by Zyxel, which served as the initial entry point into the telecommunications provider's environment. Sygnia attributed the campaign to Chinese actors based on multiple indicators, including the specific targeting, campaign objectives, hacker working hours, and the use of the China Chopper web shell—a tool frequently employed by Chinese hacking groups.Oren Biderman, Sygnia's incident response leader, described the threat actors as "incredibly dangerous and persistent," emphasizing their primary goal of infiltrating critical infrastructure and collecting sensitive information. The hackers demonstrated remarkable adaptability, continuously evolving their tactics to maintain network access and evade detection.A key tactic in the attack involved operational relay box (ORB) networks, a sophisticated infrastructure comprising compromised virtual private servers, Internet of Things devices, and routers. By leveraging an ORB network primarily composed of compromised Zyxel routers from Southeast Asian telecom providers, the hackers effectively concealed their attack infrastructure and enabled cross-network targeting.The researchers initially discovered the campaign during the final stages of a separate forensic investigation, when they noticed suspicious account restoration and encountered a web shell variant deployed on a long-compromised server. Further investigation revealed multiple layers of web shells that allowed the hackers to move laterally within the network while remaining undetected.Sygnia's analysis suggests the campaign's ultimate objective was long-term espionage, enabling continuous information collection and potential future strategic operations. The hackers' ability to maintain access for four years, despite repeated elimination attempts, underscores the sophisticated nature of state-sponsored cyber intrusions.Cloudflare Launches Aggressive Security Measure: Shutting Down HTTP Ports for API Accesshttps://blog.cloudflare.com/https-only-for-cloudflare-apis-shutting-the-door-on-cleartext-traffic/Company Takes Bold Step to Prevent Potential Data ExposuresCloudflare has announced a comprehensive security initiative to completely eliminate unencrypted HTTP traffic for its API endpoints, marking a significant advancement in protecting sensitive digital communications. 
The move comes as part of the company's ongoing commitment to enhancing internet security by closing cleartext communication channels that could potentially expose critical information.Starting immediately, any attempts to connect to api.cloudflare.com using unencrypted HTTP will be entirely rejected, rather than simply redirected. This approach addresses a critical security vulnerability where sensitive information like API tokens could be intercepted during initial connection attempts, even before a secure redirect could occur.The decision stems from a critical observation that initial plaintext HTTP requests can expose sensitive data to network intermediaries, including internet service providers, Wi-Fi hotspot providers, and potential malicious actors. By closing HTTP ports entirely, Cloudflare prevents the transport layer connection from being established, effectively blocking any potential data exposure before it can occur.Notably, the company plans to extend this feature to its customers, allowing them to opt-in to HTTPS-only traffic for their websites by the last quarter of 2025. This will provide users with an additional layer of security at no extra cost.While the implementation presents challenges—with approximately 2-3% of requests still coming over plaintext HTTP from "likely human" clients and over 16% from automated sources—Cloudflare has developed sophisticated technical solutions to manage the transition. The company has leveraged tools like Tubular to intelligently manage IP addresses and network connections, ensuring minimal disruption to existing services.The move is part of Cloudflare's broader mission to make the internet more secure, with the company emphasizing that security features should be accessible to all users without additional charges. Developers and users of Cloudflare's API will need to ensure they are using HTTPS connections exclusively moving forward. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit edwinkwan.substack.com
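Circling back to the Next.js item above: besides upgrading to a patched release, the advisory's temporary mitigation is to block external requests that carry the x-middleware-subrequest header before they reach the application. The TypeScript sketch below shows that idea as a hypothetical front-layer filter; in practice the same rule would usually live in your reverse proxy or CDN configuration rather than a hand-rolled Node server.

```typescript
// Sketch: stopgap filtering for CVE-2025-29927 in front of a self-hosted
// Next.js app, per the advisory's suggestion to block external requests
// carrying the x-middleware-subrequest header. Upgrading Next.js is the real
// fix; this is only an illustrative shim.
import http from "node:http";

const BLOCKED_HEADER = "x-middleware-subrequest";

// Hypothetical front-layer server; Node lowercases incoming header names.
const server = http.createServer((req, res) => {
  if (req.headers[BLOCKED_HEADER] !== undefined) {
    // External clients have no legitimate reason to send this internal header.
    res.writeHead(403, { "content-type": "text/plain" });
    res.end("Forbidden");
    return;
  }
  // ...forward the request to the Next.js upstream here...
  res.writeHead(200, { "content-type": "text/plain" });
  res.end("OK (request would be proxied to Next.js)");
});

server.listen(8080);
```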
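For the Rules File Backdoor item, one of the recommended mitigations is auditing AI assistant rule files for hidden instructions. A simple place to start is scanning those files for invisible or bidirectional-control Unicode characters, which is what the sketch below does; the file paths are assumptions about where such rule files commonly live, and the character list is illustrative rather than exhaustive.

```typescript
// Sketch: audit AI assistant rule files (e.g. .cursorrules or a Copilot
// instructions file; names here are assumptions) for invisible Unicode
// characters of the kind the "Rules File Backdoor" research abuses.
import { readFileSync } from "node:fs";

// Zero-width and bidirectional control characters commonly used to hide text.
const SUSPICIOUS = /[\u200B\u200C\u200D\u200E\u200F\u2060\u202A-\u202E\u2066-\u2069\uFEFF]/g;

function auditRuleFile(path: string): void {
  const text = readFileSync(path, "utf8");
  text.split("\n").forEach((line, i) => {
    const hits = line.match(SUSPICIOUS);
    if (hits) {
      const codepoints = hits.map(
        (c) => "U+" + c.codePointAt(0)!.toString(16).toUpperCase()
      );
      console.warn(`${path}:${i + 1}: hidden characters found: ${codepoints.join(", ")}`);
    }
  });
}

// Example usage on a couple of hypothetical rule-file paths.
[".cursorrules", ".github/copilot-instructions.md"].forEach((p) => {
  try {
    auditRuleFile(p);
  } catch {
    // File not present in this repo; skip.
  }
});
```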

Identity At The Center
#337 - Adaptive Authentication and Fraud Prevention with Ping's Patrick Harding

Identity At The Center

Play Episode Listen Later Mar 17, 2025 58:14


In this episode of the Identity Center Podcast, Jim McDonald discusses policy enforcement, adaptive authentication, and fraud prevention with Patrick Harding, Chief Product Architect at Ping Identity. They delve into how policy enforcement can be managed locally to maintain performance for SaaS applications while ensuring greater flexibility using standards like AuthZEN. Jim and Patrick also cover the benefits and challenges of using SAML and OpenID Connect for single sign-on (SSO) and explore the future role of AI agents in identity and access management. Additionally, they provide valuable tips for attending identity-focused conferences in Berlin and Las Vegas.Chapters00:00 Introduction to Policy Enforcement01:29 Welcome to the Identity Center Podcast01:54 Conference Discount Codes03:03 Guest Introduction: Patrick Harding from Ping Identity03:54 Patrick's Journey into Identity06:56 Challenges in Adaptive Authentication10:50 SaaS Applications and Policy Enforcement21:18 Advanced Fraud Analytics29:23 Integrating On-Premise and Cloud Applications30:35 Effort and Challenges in Modernizing Applications31:22 The Shift to OpenID Connect32:22 SaaS Applications and Single Sign-On Costs33:52 AI Agents and Adaptive Authentication34:54 The Future of AI Agents in Business39:15 Delegation and Authentication for AI Agents43:46 The Impact of AI on Jobs and Efficiency47:11 Advice for Future Careers in a Tech-Driven World52:57 Conference Tips and Final ThoughtsConnect with Patrick: https://www.linkedin.com/in/pharding/Conference Discounts!European Identity and Cloud Conference 2025 - Use code idac25mko for 25% off: https://www.kuppingercole.com/events/eic2025?ref=partneridacIdentiverse 2025 - Use code IDV25-IDAC25 for 25% off: https://identiverse.com/Connect with us on LinkedIn:Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/Visit the show on the web at http://idacpodcast.com
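The AuthZEN standard mentioned in these notes is essentially a common request/response shape for asking a policy decision point whether a subject may perform an action on a resource. The sketch below is a rough TypeScript illustration of such a call; the endpoint path and field names follow my reading of the draft AuthZEN evaluation API and should be checked against the current specification, and the PDP URL is a placeholder.

```typescript
// Sketch of an AuthZEN-style policy evaluation call from an application to a
// Policy Decision Point (PDP). Field names and the endpoint path are
// approximations of the draft spec, not a definitive implementation.

interface EvaluationRequest {
  subject: { type: string; id: string };
  resource: { type: string; id: string };
  action: { name: string };
  context?: Record<string, unknown>;
}

async function isAllowed(req: EvaluationRequest): Promise<boolean> {
  const res = await fetch("https://pdp.example.com/access/v1/evaluation", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req),
  });
  const body = (await res.json()) as { decision: boolean };
  return body.decision;
}

// Example: can this user reopen a support ticket?
isAllowed({
  subject: { type: "user", id: "alice@example.com" },
  resource: { type: "ticket", id: "1234" },
  action: { name: "reopen" },
}).then((ok) => console.log(ok ? "allow" : "deny"));
```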

Jamf After Dark
Jamf ID, Jamf Account, Jamf Updates and New Features, Oh My!

Jamf After Dark

Play Episode Listen Later Mar 11, 2025 42:20


Join Kat and Sean to discuss some exciting updates: Jamf ID, Jamf Account, SSO, betas, Compliance Benchmarks, Blueprints and so much more. They are joined by William (Bill) Smith, Technical Enablement Manager III, to discuss upcoming changes, what's currently in beta, and how partners and Jamf Admins can maximize benefits from these latest updates. The team also makes constant references to Severance throughout the segment, with nods to the recent Apple event.

The Secure Developer
The Future Of Security, Privacy And Control With Wayne Chang

The Secure Developer

Play Episode Listen Later Mar 4, 2025 39:22


Episode SummaryIn this episode of The Secure Developer, Danny Allan, CTO of Snyk, sits down with Wayne Chang, Founder and CEO of SpruceID, to explore the evolving landscape of digital identity and security. From self-sovereign identity to the role of AI in authentication, they discuss the future of identity management, the risks of centralized systems, and the benefits of decentralized approaches. They also dive into how policy, compliance, and emerging technologies like passkeys and zero-knowledge proofs are shaping the security ecosystem.Show NotesThe world of digital identity is changing fast, and in this episode of The Secure Developer, we explore how security professionals and developers can navigate this evolving space. Host Danny Allan is joined by Wayne Chang, Founder and CEO of SpruceID, to discuss key trends and challenges in identity management.Topics Discussed:Wayne's Background: From health tech to digital identity, how Wayne's early struggles with integrating health records led to his passion for self-sovereign identity.The Evolution of Digital Identity: Why usernames and passwords are no longer the gold standard, and how newer methods like passkeys and cryptographic credentials improve security.Decentralization vs. Centralization: The trade-offs between federated identity systems (like OAuth and SSO) and self-hosted identity wallets.The Role of AI in Identity Security: How AI is both a tool for improving security and a threat vector for identity fraud.Privacy and Compliance: How regulations like GDPR, CCPA, and emerging state-level laws influence digital identity strategies.The Future of Authentication: The move from multi-factor authentication to "myriad factor authentication," leveraging multiple signals for seamless and secure access.Wayne and Danny also discuss real-world use cases, including the development of mobile driver's licenses, emerging digital identity wallets, and the challenges of ensuring privacy and security while maintaining usability. The conversation highlights how organizations can stay ahead with better authentication practices and privacy-preserving architectures as fraud becomes more sophisticated.LinksSpruceID - Identity infrastructure for the digital worldNIST - The National Institute of Standards and TechnologyNIST SP 800-63 - Digital Identity GuidelinesACLU Digital ID State Legislative RecommendationsSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
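Since the notes contrast passwords with passkeys, here is a minimal browser-side sketch of what creating a passkey with the WebAuthn API looks like. It assumes a server that issues the challenge and verifies the returned credential; the relying party, user details, and challenge bytes shown here are placeholders.

```typescript
// Minimal browser-side sketch of creating a passkey with the WebAuthn API.
// In a real flow the challenge and user handle come from your server, and the
// returned credential is posted back for verification.

async function registerPasskey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      // Server-generated random challenge (placeholder bytes here).
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "Example App", id: "example.com" },
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "alice@example.com",
        displayName: "Alice",
      },
      // -7 = ES256, -257 = RS256
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },
        { type: "public-key", alg: -257 },
      ],
      authenticatorSelection: {
        residentKey: "required", // discoverable credential, i.e. a passkey
        userVerification: "preferred",
      },
      timeout: 60_000,
    },
  });
  // Send `credential` to the server for attestation verification.
  console.log("created credential:", credential?.id);
}

registerPasskey();
```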

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Today's episode is with Paul Klein, founder of Browserbase. We talked about building browser infrastructure for AI agents, the future of agent authentication, and their open source framework Stagehand.* [00:00:00] Introductions* [00:04:46] AI-specific challenges in browser infrastructure* [00:07:05] Multimodality in AI-Powered Browsing* [00:12:26] Running headless browsers at scale* [00:18:46] Geolocation when proxying* [00:21:25] CAPTCHAs and Agent Auth* [00:28:21] Building “User take over” functionality* [00:33:43] Stagehand: AI web browsing framework* [00:38:58] OpenAI's Operator and computer use agents* [00:44:44] Surprising use cases of Browserbase* [00:47:18] Future of browser automation and market competition* [00:53:11] Being a solo founderTranscriptAlessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.swyx [00:00:12]: Hey, and today we are very blessed to have our friends, Paul Klein, for the fourth, the fourth, CEO of Browserbase. Welcome.Paul [00:00:21]: Thanks guys. Yeah, I'm happy to be here. I've been lucky to know both of you for like a couple of years now, I think. So it's just like we're hanging out, you know, with three ginormous microphones in front of our face. It's totally normal hangout.swyx [00:00:34]: Yeah. We've actually mentioned you on the podcast, I think, more often than any other Solaris tenant. Just because like you're one of the, you know, best performing, I think, LLM tool companies that have started up in the last couple of years.Paul [00:00:50]: Yeah, I mean, it's been a whirlwind of a year, like Browserbase is actually pretty close to our first birthday. So we are one years old. And going from, you know, starting a company as a solo founder to... To, you know, having a team of 20 people, you know, a series A, but also being able to support hundreds of AI companies that are building AI applications that go out and automate the web. It's just been like, really cool. It's been happening a little too fast. I think like collectively as an AI industry, let's just take a week off together. I took my first vacation actually two weeks ago, and Operator came out on the first day, and then a week later, DeepSeat came out. And I'm like on vacation trying to chill. I'm like, we got to build with this stuff, right? So it's been a breakneck year. But I'm super happy to be here and like talk more about all the stuff we're seeing. And I'd love to hear kind of what you guys are excited about too, and share with it, you know?swyx [00:01:39]: Where to start? So people, you've done a bunch of podcasts. I think I strongly recommend Jack Bridger's Scaling DevTools, as well as Turner Novak's The Peel. And, you know, I'm sure there's others. So you covered your Twilio story in the past, talked about StreamClub, you got acquired to Mux, and then you left to start Browserbase. So maybe we just start with what is Browserbase? Yeah.Paul [00:02:02]: Browserbase is the web browser for your AI. We're building headless browser infrastructure, which are browsers that run in a server environment that's accessible to developers via APIs and SDKs. It's really hard to run a web browser in the cloud. You guys are probably running Chrome on your computers, and that's using a lot of resources, right? So if you want to run a web browser or thousands of web browsers, you can't just spin up a bunch of lambdas. You actually need to use a secure containerized environment. 
You have to scale it up and down. It's a stateful system. And that infrastructure is, like, super painful. And I know that firsthand, because at my last company, StreamClub, I was CTO, and I was building our own internal headless browser infrastructure. That's actually why we sold the company, is because Mux really wanted to buy our headless browser infrastructure that we'd built. And it's just a super hard problem. And I actually told my co-founders, I would never start another company unless it was a browser infrastructure company. And it turns out that's really necessary in the age of AI, when AI can actually go out and interact with websites, click on buttons, fill in forms. You need AI to do all of that work in an actual browser running somewhere on a server. And BrowserBase powers that.swyx [00:03:08]: While you're talking about it, it occurred to me, not that you're going to be acquired or anything, but it occurred to me that it would be really funny if you became the Nikita Beer of headless browser companies. You just have one trick, and you make browser companies that get acquired.Paul [00:03:23]: I truly do only have one trick. I'm screwed if it's not for headless browsers. I'm not a Go programmer. You know, I'm in AI grant. You know, browsers is an AI grant. But we were the only company in that AI grant batch that used zero dollars on AI spend. You know, we're purely an infrastructure company. So as much as people want to ask me about reinforcement learning, I might not be the best guy to talk about that. But if you want to ask about headless browser infrastructure at scale, I can talk your ear off. So that's really my area of expertise. And it's a pretty niche thing. Like, nobody has done what we're doing at scale before. So we're happy to be the experts.swyx [00:03:59]: You do have an AI thing, stagehand. We can talk about the sort of core of browser-based first, and then maybe stagehand. Yeah, stagehand is kind of the web browsing framework. Yeah.What is Browserbase? Headless Browser Infrastructure ExplainedAlessio [00:04:10]: Yeah. Yeah. And maybe how you got to browser-based and what problems you saw. So one of the first things I worked on as a software engineer was integration testing. Sauce Labs was kind of like the main thing at the time. And then we had Selenium, we had Playwright, we had all these different browser things. But it's always been super hard to do. So obviously you've worked on this before. When you started browser-based, what were the challenges? What were the AI-specific challenges that you saw versus, there's kind of like all the usual running browser at scale in the cloud, which has been a problem for years. What are like the AI unique things that you saw that like traditional purchase just didn't cover? Yeah.AI-specific challenges in browser infrastructurePaul [00:04:46]: First and foremost, I think back to like the first thing I did as a developer, like as a kid when I was writing code, I wanted to write code that did stuff for me. You know, I wanted to write code to automate my life. And I do that probably by using curl or beautiful soup to fetch data from a web browser. And I think I still do that now that I'm in the cloud. And the other thing that I think is a huge challenge for me is that you can't just create a web site and parse that data. And we all know that now like, you know, taking HTML and plugging that into an LLM, you can extract insights, you can summarize. 
So it was very clear that now like dynamic web scraping became very possible with the rise of large language models or a lot easier. And that was like a clear reason why there's been more usage of headless browsers, which are necessary because a lot of modern websites don't expose all of their page content via a simple HTTP request. You know, they actually do require you to run this type of code for a specific time. JavaScript on the page to hydrate this. Airbnb is a great example. You go to airbnb.com. A lot of that content on the page isn't there until after they run the initial hydration. So you can't just scrape it with a curl. You need to have some JavaScript run. And a browser is that JavaScript engine that's going to actually run all those requests on the page. So web data retrieval was definitely one driver of starting BrowserBase and the rise of being able to summarize that within LLM. Also, I was familiar with if I wanted to automate a website, I could write one script and that would work for one website. It was very static and deterministic. But the web is non-deterministic. The web is always changing. And until we had LLMs, there was no way to write scripts that you could write once that would run on any website. That would change with the structure of the website. Click the login button. It could mean something different on many different websites. And LLMs allow us to generate code on the fly to actually control that. So I think that rise of writing the generic automation scripts that can work on many different websites, to me, made it clear that browsers are going to be a lot more useful because now you can automate a lot more things without writing. If you wanted to write a script to book a demo call on 100 websites, previously, you had to write 100 scripts. Now you write one script that uses LLMs to generate that script. That's why we built our web browsing framework, StageHand, which does a lot of that work for you. But those two things, web data collection and then enhanced automation of many different websites, it just felt like big drivers for more browser infrastructure that would be required to power these kinds of features.Alessio [00:07:05]: And was multimodality also a big thing?Paul [00:07:08]: Now you can use the LLMs to look, even though the text in the dome might not be as friendly. Maybe my hot take is I was always kind of like, I didn't think vision would be as big of a driver. For UI automation, I felt like, you know, HTML is structured text and large language models are good with structured text. But it's clear that these computer use models are often vision driven, and they've been really pushing things forward. So definitely being multimodal, like rendering the page is required to take a screenshot to give that to a computer use model to take actions on a website. And it's just another win for browser. But I'll be honest, that wasn't what I was thinking early on. I didn't even think that we'd get here so fast with multimodality. I think we're going to have to get back to multimodal and vision models.swyx [00:07:50]: This is one of those things where I forgot to mention in my intro that I'm an investor in Browserbase. And I remember that when you pitched to me, like a lot of the stuff that we have today, we like wasn't on the original conversation. 
But I did have my original thesis was something that we've talked about on the podcast before, which is take the GPT store, the custom GPT store, all the every single checkbox and plugin is effectively a startup. And this was the browser one. I think the main hesitation, I think I actually took a while to get back to you. The main hesitation was that there were others. Like you're not the first hit list browser startup. It's not even your first hit list browser startup. There's always a question of like, will you be the category winner in a place where there's a bunch of incumbents, to be honest, that are bigger than you? They're just not targeted at the AI space. They don't have the backing of Nat Friedman. And there's a bunch of like, you're here in Silicon Valley. They're not. I don't know.Paul [00:08:47]: I don't know if that's, that was it, but like, there was a, yeah, I mean, like, I think I tried all the other ones and I was like, really disappointed. Like my background is from working at great developer tools, companies, and nothing had like the Vercel like experience. Um, like our biggest competitor actually is partly owned by private equity and they just jacked up their prices quite a bit. And the dashboard hasn't changed in five years. And I actually used them at my last company and tried them and I was like, oh man, like there really just needs to be something that's like the experience of these great infrastructure companies, like Stripe, like clerk, like Vercel that I use in love, but oriented towards this kind of like more specific category, which is browser infrastructure, which is really technically complex. Like a lot of stuff can go wrong on the internet when you're running a browser. The internet is very vast. There's a lot of different configurations. Like there's still websites that only work with internet explorer out there. How do you handle that when you're running your own browser infrastructure? These are the problems that we have to think about and solve at BrowserBase. And it's, it's certainly a labor of love, but I built this for me, first and foremost, I know it's super cheesy and everyone says that for like their startups, but it really, truly was for me. If you look at like the talks I've done even before BrowserBase, and I'm just like really excited to try and build a category defining infrastructure company. And it's, it's rare to have a new category of infrastructure exists. We're here in the Chroma offices and like, you know, vector databases is a new category of infrastructure. Is it, is it, I mean, we can, we're in their office, so, you know, we can, we can debate that one later. That is one.Multimodality in AI-Powered Browsingswyx [00:10:16]: That's one of the industry debates.Paul [00:10:17]: I guess we go back to the LLMOS talk that Karpathy gave way long ago. And like the browser box was very clearly there and it seemed like the people who were building in this space also agreed that browsers are a core primitive of infrastructure for the LLMOS that's going to exist in the future. And nobody was building something there that I wanted to use. So I had to go build it myself.swyx [00:10:38]: Yeah. I mean, exactly that talk that, that honestly, that diagram, every box is a startup and there's the code box and then there's the. The browser box. I think at some point they will start clashing there. There's always the question of the, are you a point solution or are you the sort of all in one? 
And I think the point solutions tend to win quickly, but then the only ones have a very tight cohesive experience. Yeah. Let's talk about just the hard problems of browser base you have on your website, which is beautiful. Thank you. Was there an agency that you used for that? Yeah. Herb.paris.Paul [00:11:11]: They're amazing. Herb.paris. Yeah. It's H-E-R-V-E. I highly recommend for developers. Developer tools, founders to work with consumer agencies because they end up building beautiful things and the Parisians know how to build beautiful interfaces. So I got to give prep.swyx [00:11:24]: And chat apps, apparently are, they are very fast. Oh yeah. The Mistral chat. Yeah. Mistral. Yeah.Paul [00:11:31]: Late chat.swyx [00:11:31]: Late chat. And then your videos as well, it was professionally shot, right? The series A video. Yeah.Alessio [00:11:36]: Nico did the videos. He's amazing. Not the initial video that you shot at the new one. First one was Austin.Paul [00:11:41]: Another, another video pretty surprised. But yeah, I mean, like, I think when you think about how you talk about your company. You have to think about the way you present yourself. It's, you know, as a developer, you think you evaluate a company based on like the API reliability and the P 95, but a lot of developers say, is the website good? Is the message clear? Do I like trust this founder? I'm building my whole feature on. So I've tried to nail that as well as like the reliability of the infrastructure. You're right. It's very hard. And there's a lot of kind of foot guns that you run into when running headless browsers at scale. Right.Competing with Existing Headless Browser Solutionsswyx [00:12:10]: So let's pick one. You have eight features here. Seamless integration. Scalability. Fast or speed. Secure. Observable. Stealth. That's interesting. Extensible and developer first. What comes to your mind as like the top two, three hardest ones? Yeah.Running headless browsers at scalePaul [00:12:26]: I think just running headless browsers at scale is like the hardest one. And maybe can I nerd out for a second? Is that okay? I heard this is a technical audience, so I'll talk to the other nerds. Whoa. They were listening. Yeah. They're upset. They're ready. The AGI is angry. Okay. So. So how do you run a browser in the cloud? Let's start with that, right? So let's say you're using a popular browser automation framework like Puppeteer, Playwright, and Selenium. Maybe you've written a code, some code locally on your computer that opens up Google. It finds the search bar and then types in, you know, search for Latent Space and hits the search button. That script works great locally. You can see the little browser open up. You want to take that to production. You want to run the script in a cloud environment. So when your laptop is closed, your browser is doing something. The browser is doing something. Well, I, we use Amazon. You can see the little browser open up. You know, the first thing I'd reach for is probably like some sort of serverless infrastructure. I would probably try and deploy on a Lambda. But Chrome itself is too big to run on a Lambda. It's over 250 megabytes. So you can't easily start it on a Lambda. So you maybe have to use something like Lambda layers to squeeze it in there. Maybe use a different Chromium build that's lighter. And you get it on the Lambda. Great. It works. But it runs super slowly. It's because Lambdas are very like resource limited. They only run like with one vCPU. 
You can run one process at a time. Remember, Chromium is super beefy. It's barely running on my MacBook Air. I'm still downloading it from a pre-run. Yeah, from the test earlier, right? I'm joking. But it's big, you know? So like Lambda, it just won't work really well. Maybe it'll work, but you need something faster. Your users want something faster. Okay. Well, let's put it on a beefier instance. Let's get an EC2 server running. Let's throw Chromium on there. Great. Okay. I can, that works well with one user. But what if I want to run like 10 Chromium instances, one for each of my users? Okay. Well, I might need two EC2 instances. Maybe 10. All of a sudden, you have multiple EC2 instances. This sounds like a problem for Kubernetes and Docker, right? Now, all of a sudden, you're using ECS or EKS, the Kubernetes or container solutions by Amazon. You're spinning up and down containers, and you're spending a whole engineer's time on kind of maintaining this stateful distributed system. Those are some of the worst systems to run because when it's a stateful distributed system, it means that you are bound by the connections to that thing. You have to keep the browser open while someone is working with it, right? That's just a painful architecture to run. And there's all this other little gotchas with Chromium, like Chromium, which is the open source version of Chrome, by the way. You have to install all these fonts. You want emojis working in your browsers because your vision model is looking for the emoji. You need to make sure you have the emoji fonts. You need to make sure you have all the right extensions configured, like, oh, do you want ad blocking? How do you configure that? How do you actually record all these browser sessions? Like it's a headless browser. You can't look at it. So you need to have some sort of observability. Maybe you're recording videos and storing those somewhere. It all kind of adds up to be this just giant monster piece of your project when all you wanted to do was run a lot of browsers in production for this little script to go to google.com and search. And when I see a complex distributed system, I see an opportunity to build a great infrastructure company. And we really abstract that away with Browserbase where our customers can use these existing frameworks, Playwright, Puppeteer, Selenium, or our own stagehand and connect to our browsers in a serverless-like way. And control them, and then just disconnect when they're done. And they don't have to think about the complex distributed system behind all of that. They just get a browser running anywhere, anytime. Really easy to connect to.swyx [00:15:55]: I'm sure you have questions. My standard question with anything, so essentially you're a serverless browser company, and there's been other serverless things that I'm familiar with in the past, serverless GPUs, serverless website hosting. That's where I come from with Netlify. One question is just like, you promised to spin up thousands of servers. You promised to spin up thousands of browsers in milliseconds. I feel like there's no real solution that does that yet. And I'm just kind of curious how. The only solution I know, which is to kind of keep a kind of warm pool of servers around, which is expensive, but maybe not so expensive because it's just CPUs. So I'm just like, you know. Yeah.Browsers as a Core Primitive in AI InfrastructurePaul [00:16:36]: You nailed it, right? 
I mean, how do you offer a serverless-like experience with something that is clearly not serverless, right? And the answer is, you need to be able to run... We run many browsers on single nodes. We use Kubernetes at browser base. So we have many pods that are being scheduled. We have to predictably schedule them up or down. Yes, thousands of browsers in milliseconds is the best case scenario. If you hit us with 10,000 requests, you may hit a slower cold start, right? So we've done a lot of work on predictive scaling and being able to kind of route stuff to different regions where we have multiple regions of browser base where we have different pools available. You can also pick the region you want to go to based on like lower latency, round trip, time latency. It's very important with these types of things. There's a lot of requests going over the wire. So for us, like having a VM like Firecracker powering everything under the hood allows us to be super nimble and spin things up or down really quickly with strong multi-tenancy. But in the end, this is like the complex infrastructural challenges that we have to kind of deal with at browser base. And we have a lot more stuff on our roadmap to allow customers to have more levers to pull to exchange, do you want really fast browser startup times or do you want really low costs? And if you're willing to be more flexible on that, we may be able to kind of like work better for your use cases.swyx [00:17:44]: Since you used Firecracker, shouldn't Fargate do that for you or did you have to go lower level than that? We had to go lower level than that.Paul [00:17:51]: I find this a lot with Fargate customers, which is alarming for Fargate. We used to be a giant Fargate customer. Actually, the first version of browser base was ECS and Fargate. And unfortunately, it's a great product. I think we were actually the largest Fargate customer in our region for a little while. No, what? Yeah, seriously. And unfortunately, it's a great product, but I think if you're an infrastructure company, you actually have to have a deeper level of control over these primitives. I think it's the same thing is true with databases. We've used other database providers and I think-swyx [00:18:21]: Yeah, serverless Postgres.Paul [00:18:23]: Shocker. When you're an infrastructure company, you're on the hook if any provider has an outage. And I can't tell my customers like, hey, we went down because so-and-so went down. That's not acceptable. So for us, we've really moved to bringing things internally. It's kind of opposite of what we preach. We tell our customers, don't build this in-house, but then we're like, we build a lot of stuff in-house. But I think it just really depends on what is in the critical path. We try and have deep ownership of that.Alessio [00:18:46]: On the distributed location side, how does that work for the web where you might get sort of different content in different locations, but the customer is expecting, you know, if you're in the US, I'm expecting the US version. But if you're spinning up my browser in France, I might get the French version. Yeah.Paul [00:19:02]: Yeah. That's a good question. Well, generally, like on the localization, there is a thing called locale in the browser. You can set like what your locale is. If you're like in the ENUS browser or not, but some things do IP, IP based routing. And in that case, you may want to have a proxy. 
Like let's say you're running something in the, in Europe, but you want to make sure you're showing up from the US. You may want to use one of our proxy features so you can turn on proxies to say like, make sure these connections always come from the United States, which is necessary too, because when you're browsing the web, you're coming from like a, you know, data center IP, and that can make things a lot harder to browse web. So we do have kind of like this proxy super network. Yeah. We have a proxy for you based on where you're going, so you can reliably automate the web. But if you get scheduled in Europe, that doesn't happen as much. We try and schedule you as close to, you know, your origin that you're trying to go to. But generally you have control over the regions you can put your browsers in. So you can specify West one or East one or Europe. We only have one region of Europe right now, actually. Yeah.Alessio [00:19:55]: What's harder, the browser or the proxy? I feel like to me, it feels like actually proxying reliably at scale. It's much harder than spending up browsers at scale. I'm curious. It's all hard.Paul [00:20:06]: It's layers of hard, right? Yeah. I think it's different levels of hard. I think the thing with the proxy infrastructure is that we work with many different web proxy providers and some are better than others. Some have good days, some have bad days. And our customers who've built browser infrastructure on their own, they have to go and deal with sketchy actors. Like first they figure out their own browser infrastructure and then they got to go buy a proxy. And it's like you can pay in Bitcoin and it just kind of feels a little sus, right? It's like you're buying drugs when you're trying to get a proxy online. We have like deep relationships with these counterparties. We're able to audit them and say, is this proxy being sourced ethically? Like it's not running on someone's TV somewhere. Is it free range? Yeah. Free range organic proxies, right? Right. We do a level of diligence. We're SOC 2. So we have to understand what is going on here. But then we're able to make sure that like we route around proxy providers not working. There's proxy providers who will just, the proxy will stop working all of a sudden. And then if you don't have redundant proxying on your own browsers, that's hard down for you or you may get some serious impacts there. With us, like we intelligently know, hey, this proxy is not working. Let's go to this one. And you can kind of build a network of multiple providers to really guarantee the best uptime for our customers. Yeah. So you don't own any proxies? We don't own any proxies. You're right. The team has been saying who wants to like take home a little proxy server, but not yet. We're not there yet. You know?swyx [00:21:25]: It's a very mature market. I don't think you should build that yourself. Like you should just be a super customer of them. Yeah. Scraping, I think, is the main use case for that. I guess. Well, that leads us into CAPTCHAs and also off, but let's talk about CAPTCHAs. You had a little spiel that you wanted to talk about CAPTCHA stuff.Challenges of Scaling Browser InfrastructurePaul [00:21:43]: Oh, yeah. I was just, I think a lot of people ask, if you're thinking about proxies, you're thinking about CAPTCHAs too. I think it's the same thing. You can go buy CAPTCHA solvers online, but it's the same buying experience. It's some sketchy website, you have to integrate it. 
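Stepping back to the locale-versus-proxy point for a moment, the two knobs look roughly like this in plain Playwright. The proxy server address and credentials below are made-up placeholders: the locale tells the page what the "user" prefers, while the proxy changes where the traffic actually comes from, which is what IP-based routing sees.

```typescript
import { chromium } from "playwright";

async function run() {
  // Proxy controls where connections appear to originate (IP-based routing).
  const browser = await chromium.launch({
    proxy: {
      server: "http://us.residential-proxy.example.com:8080", // hypothetical
      username: "PROXY_USER",
      password: "PROXY_PASS",
    },
  });

  // Locale and timezone control what the page is told about the user.
  const context = await browser.newContext({
    locale: "en-US",
    timezoneId: "America/New_York",
  });

  const page = await context.newPage();
  await page.goto("https://example.com");
  console.log(await page.title());

  await browser.close();
}

run().catch(console.error);
```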
It's not fun to buy these things and you can't really trust that the docs are bad. What Browserbase does is we integrate a bunch of different CAPTCHAs. We do some stuff in-house, but generally we just integrate with a bunch of known vendors and continually monitor and maintain these things and say, is this working or not? Can we route around it or not? These are CAPTCHA solvers. CAPTCHA solvers, yeah. Not CAPTCHA providers, CAPTCHA solvers. Yeah, sorry. CAPTCHA solvers. We really try and make sure all of that works for you. I think as a dev, if I'm buying infrastructure, I want it all to work all the time and it's important for us to provide that experience by making sure everything does work and monitoring it on our own. Yeah. Right now, the world of CAPTCHAs is tricky. I think AI agents in particular are very much ahead of the internet infrastructure. CAPTCHAs are designed to block all types of bots, but there are now good bots and bad bots. I think in the future, CAPTCHAs will be able to identify who a good bot is, hopefully via some sort of KYC. For us, we've been very lucky. We have very little to no known abuse of Browserbase because we really look into who we work with. And for certain types of CAPTCHA solving, we only allow them on certain types of plans because we want to make sure that we can know what people are doing, what their use cases are. And that's really allowed us to try and be an arbiter of good bots, which is our long term goal. I want to build great relationships with people like Cloudflare so we can agree, hey, here are these acceptable bots. We'll identify them for you and make sure we flag when they come to your website. This is a good bot, you know?Alessio [00:23:23]: I see. And Cloudflare said they want to do more of this. So they're going to set by default, if they think you're an AI bot, they're going to reject. I'm curious if you think this is something that is going to be at the browser level or I mean, the DNS level with Cloudflare seems more where it should belong. But I'm curious how you think about it.Paul [00:23:40]: I think the web's going to change. You know, I think that the Internet as we have it right now is going to change. And we all need to just accept that the cat is out of the bag. And instead of kind of like wishing the Internet was like it was in the 2000s, we can have free content line that wouldn't be scraped. It's just it's not going to happen. And instead, we should think about like, one, how can we change? How can we change the models of, you know, information being published online so people can adequately commercialize it? But two, how do we rebuild applications that expect that AI agents are going to log in on their behalf? Those are the things that are going to allow us to kind of like identify good and bad bots. And I think the team at Clerk has been doing a really good job with this on the authentication side. I actually think that auth is the biggest thing that will prevent agents from accessing stuff, not captchas. And I think there will be agent auth in the future. I don't know if it's going to happen from an individual company, but actually authentication providers that have a, you know, hidden login as agent feature, which will then you put in your email, you'll get a push notification, say like, hey, your browser-based agent wants to log into your Airbnb. You can approve that and then the agent can proceed. That really circumvents the need for captchas or logging in as you and sharing your password. 
I think agent auth is going to be one way we identify good bots going forward. And I think a lot of this captcha solving stuff is really short-term problems as the internet kind of reorients itself around how it's going to work with agents browsing the web, just like people do. Yeah.Managing Distributed Browser Locations and Proxiesswyx [00:24:59]: Stitch recently was on Hacker News for talking about agent experience, AX, which is a thing that Netlify is also trying to clone and coin and talk about. And we've talked about this on our previous episodes before in a sense that I actually think that's like maybe the only part of the tech stack that needs to be kind of reinvented for agents. Everything else can stay the same, CLIs, APIs, whatever. But auth, yeah, we need agent auth. And it's mostly like short-lived, like it should not, it should be a distinct, identity from the human, but paired. I almost think like in the same way that every social network should have your main profile and then your alt accounts or your Finsta, it's almost like, you know, every, every human token should be paired with the agent token and the agent token can go and do stuff on behalf of the human token, but not be presumed to be the human. Yeah.Paul [00:25:48]: It's like, it's, it's actually very similar to OAuth is what I'm thinking. And, you know, Thread from Stitch is an investor, Colin from Clerk, Octaventures, all investors in browser-based because like, I hope they solve this because they'll make browser-based submission more possible. So we don't have to overcome all these hurdles, but I think it will be an OAuth-like flow where an agent will ask to log in as you, you'll approve the scopes. Like it can book an apartment on Airbnb, but it can't like message anybody. And then, you know, the agent will have some sort of like role-based access control within an application. Yeah. I'm excited for that.swyx [00:26:16]: The tricky part is just, there's one, one layer of delegation here, which is like, you're authoring my user's user or something like that. I don't know if that's tricky or not. Does that make sense? Yeah.Paul [00:26:25]: You know, actually at Twilio, I worked on the login identity and access. Management teams, right? So like I built Twilio's login page.swyx [00:26:31]: You were an intern on that team and then you became the lead in two years? Yeah.Paul [00:26:34]: Yeah. I started as an intern in 2016 and then I was the tech lead of that team. How? That's not normal. I didn't have a life. He's not normal. Look at this guy. I didn't have a girlfriend. I just loved my job. I don't know. I applied to 500 internships for my first job and I got rejected from every single one of them except for Twilio and then eventually Amazon. And they took a shot on me and like, I was getting paid money to write code, which was my dream. Yeah. Yeah. I'm very lucky that like this coding thing worked out because I was going to be doing it regardless. And yeah, I was able to kind of spend a lot of time on a team that was growing at a company that was growing. So it informed a lot of this stuff here. I think these are problems that have been solved with like the SAML protocol with SSO. I think it's a really interesting stuff with like WebAuthn, like these different types of authentication, like schemes that you can use to authenticate people. The tooling is all there. It just needs to be tweaked a little bit to work for agents. And I think the fact that there are companies that are already. 
Providing authentication as a service really sets it up. Well, the thing that's hard is like reinventing the internet for agents. We don't want to rebuild the internet. That's an impossible task. And I think people often say like, well, we'll have this second layer of APIs built for agents. I'm like, we will for the top use cases, but instead of we can just tweak the internet as is, which is on the authentication side, I think we're going to be the dumb ones going forward. Unfortunately, I think AI is going to be able to do a lot of the tasks that we do online, which means that it will be able to go to websites, click buttons on our behalf and log in on our behalf too. So with this kind of like web agent future happening, I think with some small structural changes, like you said, it feels like it could all slot in really nicely with the existing internet.Handling CAPTCHAs and Agent Authenticationswyx [00:28:08]: There's one more thing, which is the, your live view iframe, which lets you take, take control. Yeah. Obviously very key for operator now, but like, was, is there anything interesting technically there or that the people like, well, people always want this.Paul [00:28:21]: It was really hard to build, you know, like, so, okay. Headless browsers, you don't see them, right. They're running. They're running in a cloud somewhere. You can't like look at them. And I just want to really make, it's a weird name. I wish we came up with a better name for this thing, but you can't see them. Right. But customers don't trust AI agents, right. At least the first pass. So what we do with our live view is that, you know, when you use browser base, you can actually embed a live view of the browser running in the cloud for your customer to see it working. And that's what the first reason is the build trust, like, okay, so I have this script. That's going to go automate a website. I can embed it into my web application via an iframe and my customer can watch. I think. And then we added two way communication. So now not only can you watch the browser kind of being operated by AI, if you want to pause and actually click around type within this iframe that's controlling a browser, that's also possible. And this is all thanks to some of the lower level protocol, which is called the Chrome DevTools protocol. It has a API called start screencast, and you can also send mouse clicks and button clicks to a remote browser. And this is all embeddable within iframes. You have a browser within a browser, yo. And then you simulate the screen, the click on the other side. Exactly. And this is really nice often for, like, let's say, a capture that can't be solved. You saw this with Operator, you know, Operator actually uses a different approach. They use VNC. So, you know, you're able to see, like, you're seeing the whole window here. What we're doing is something a little lower level with the Chrome DevTools protocol. It's just PNGs being streamed over the wire. But the same thing is true, right? Like, hey, I'm running a window. Pause. Can you do something in this window? Human. Okay, great. Resume. Like sometimes 2FA tokens. Like if you get that text message, you might need a person to type that in. Web agents need human-in-the-loop type workflows still. You still need a person to interact with the browser. And building a UI to proxy that is kind of hard. You may as well just show them the whole browser and say, hey, can you finish this up for me? And then let the AI proceed on afterwards. 
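For readers curious about the lower-level mechanism mentioned above: the Chrome DevTools Protocol exposes Page.startScreencast, which streams the page as a series of encoded frames. Here is a minimal sketch using Playwright's raw CDP session. It is illustrative only, not how Browserbase's live view is actually implemented, and shipping the frames to an embedding iframe (for example over a WebSocket) is left out.

```typescript
import { chromium } from "playwright";

async function run() {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto("https://example.com");

  // Open a raw Chrome DevTools Protocol session for this page.
  const cdp = await context.newCDPSession(page);

  cdp.on("Page.screencastFrame", async (frame) => {
    // frame.data is a base64-encoded image; a real live view would forward it
    // to whatever frontend renders it for the user.
    console.log(`frame received: ${frame.data.length} base64 chars`);
    // Acknowledge the frame so Chromium keeps streaming.
    await cdp.send("Page.screencastFrameAck", { sessionId: frame.sessionId });
  });

  await cdp.send("Page.startScreencast", { format: "png", everyNthFrame: 1 });

  await page.waitForTimeout(3000); // stream for a few seconds
  await cdp.send("Page.stopScreencast");
  await browser.close();
}

run().catch(console.error);
```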
Is there a future where I stream my current desktop to browser base? I don't think so. I think we're very much cloud infrastructure. Yeah. You know, but I think a lot of the stuff we're doing, we do want to, like, build tools. Like, you know, we'll talk about the stage and, you know, web agent framework in a second. But, like, there's a case where a lot of people are going desktop first for, you know, consumer use. And I think cloud is doing a lot of this, where I expect to see, you know, MCPs really oriented around the cloud desktop app for a reason, right? Like, I think a lot of these tools are going to run on your computer because it makes... I think it's breaking out. People are putting it on a server. Oh, really? Okay. Well, sweet. We'll see. We'll see that. I was surprised, though, wasn't I? I think that the browser company, too, with Dia Browser, it runs on your machine. You know, it's going to be...swyx [00:30:50]: What is it?Paul [00:30:51]: So, Dia Browser, as far as I understand... I used to use Arc. Yeah. I haven't used Arc. But I'm a big fan of the browser company. I think they're doing a lot of cool stuff in consumer. As far as I understand, it's a browser where you have a sidebar where you can, like, chat with it and it can control the local browser on your machine. So, if you imagine, like, what a consumer web agent is, which it lives alongside your browser, I think Google Chrome has Project Marina, I think. I almost call it Project Marinara for some reason. I don't know why. It's...swyx [00:31:17]: No, I think it's someone really likes the Waterworld. Oh, I see. The classic Kevin Costner. Yeah.Paul [00:31:22]: Okay. Project Marinara is a similar thing to the Dia Browser, in my mind, as far as I understand it. You have a browser that has an AI interface that will take over your mouse and keyboard and control the browser for you. Great for consumer use cases. But if you're building applications that rely on a browser and it's more part of a greater, like, AI app experience, you probably need something that's more like infrastructure, not a consumer app.swyx [00:31:44]: Just because I have explored a little bit in this area, do people want branching? So, I have the state. Of whatever my browser's in. And then I want, like, 100 clones of this state. Do people do that? Or...Paul [00:31:56]: People don't do it currently. Yeah. But it's definitely something we're thinking about. I think the idea of forking a browser is really cool. Technically, kind of hard. We're starting to see this in code execution, where people are, like, forking some, like, code execution, like, processes or forking some tool calls or branching tool calls. Haven't seen it at the browser level yet. But it makes sense. Like, if an AI agent is, like, using a website and it's not sure what path it wants to take to crawl this website. To find the information it's looking for. It would make sense for it to explore both paths in parallel. And that'd be a very, like... A road not taken. Yeah. And hopefully find the right answer. And then say, okay, this was actually the right one. And memorize that. And go there in the future. On the roadmap. For sure. Don't make my roadmap, please. You know?Alessio [00:32:37]: How do you actually do that? Yeah. How do you fork? I feel like the browser is so stateful for so many things.swyx [00:32:42]: Serialize the state. Restore the state. I don't know.Paul [00:32:44]: So, it's one of the reasons why we haven't done it yet. It's hard. You know? 
Like, to truly fork, it's actually quite difficult. The naive way is to open the same page in a new tab and then, like, hope that it's at the same thing. But if you have a form halfway filled, you may have to, like, take the whole, you know, container. Pause it. All the memory. Duplicate it. Restart it from there. It could be very slow. So, we haven't found a thing. Like, the easy thing to fork is just, like, copy the page object. You know? But I think there needs to be something a little bit more robust there. Yeah.swyx [00:33:12]: So, MorphLabs has this infinite branch thing. Like, wrote a custom fork of Linux or something that let them save the system state and clone it. MorphLabs, hit me up. I'll be a customer. Yeah. That's the only. I think that's the only way to do it. Yeah. Like, unless Chrome has some special API for you. Yeah.Paul [00:33:29]: There's probably something we'll reverse engineer one day. I don't know. Yeah.Alessio [00:33:32]: Let's talk about StageHand, the AI web browsing framework. You have three core components, Observe, Extract, and Act. Pretty clean landing page. What was the idea behind making a framework? Yeah.Stagehand: AI web browsing frameworkPaul [00:33:43]: So, there's three frameworks that are very popular or already exist, right? Puppeteer, Playwright, Selenium. Those are for building hard-coded scripts to control websites. And as soon as I started to play with LLMs plus browsing, I caught myself, you know, code-genning Playwright code to control a website. I would, like, take the DOM. I'd pass it to an LLM. I'd say, can you generate the Playwright code to click the appropriate button here? And it would do that. And I was like, this really should be part of the frameworks themselves. And I became really obsessed with SDKs that take natural language as part of, like, the API input. And that's what StageHand is. StageHand exposes three APIs, and it's a super set of Playwright. So, if you go to a page, you may want to take an action, click on the button, fill in the form, etc. That's what the act command is for. You may want to extract some data. This one takes a natural language, like, extract the winner of the Super Bowl from this page. You can give it a Zod schema, so it returns a structured output. And then maybe you're building an API. You can do an agent loop, and you want to kind of see what actions are possible on this page before taking one. You can do observe. So, you can observe the actions on the page, and it will generate a list of actions. You can guide it, like, give me actions on this page related to buying an item. And you can, like, buy it now, add to cart, view shipping options, and pass that to an LLM, an agent loop, to say, what's the appropriate action given this high-level goal? So, StageHand isn't a web agent. It's a framework for building web agents. And we think that agent loops are actually pretty close to the application layer because every application probably has different goals or different ways it wants to take steps. I don't think I've seen a generic. Maybe you guys are the experts here. I haven't seen, like, a really good AI agent framework here. Everyone kind of has their own special sauce, right? I see a lot of developers building their own agent loops, and they're using tools. And I view StageHand as the browser tool. So, we expose act, extract, observe. Your agent can call these tools. And from that, you don't have to worry about it. You don't have to worry about generating playwright code performantly. 
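To ground the act / extract / observe description above, here is a rough TypeScript sketch. The package name, constructor options, and exact method signatures are taken from the public Stagehand project as a best guess and should be treated as illustrative rather than authoritative.

```typescript
import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

async function run() {
  const stagehand = new Stagehand({ env: "LOCAL" }); // or point it at a remote browser
  await stagehand.init();
  const page = stagehand.page;

  await page.goto("https://example-news-site.com"); // hypothetical page

  // act: a natural-language action on the page
  await page.act("click the link to the Super Bowl recap");

  // extract: natural-language extraction with a Zod schema for structured output
  const { winner } = await page.extract({
    instruction: "extract the winner of the Super Bowl from this page",
    schema: z.object({ winner: z.string() }),
  });
  console.log(winner);

  // observe: list candidate actions, optionally guided toward a goal
  const actions = await page.observe("actions related to sharing this article");
  console.log(actions);

  await stagehand.close();
}

run().catch(console.error);
```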
You don't have to worry about running it. You can kind of just integrate these three tool calls into your agent loop and reliably automate the web. swyx [00:35:48]: A special shout-out to Anirudh, who I met at your dinner, who I think listens to the pod. Yeah. Hey, Anirudh. Paul [00:35:54]: Anirudh's the man. He's a StageHand guy. swyx [00:35:56]: I mean, the interesting thing about each of these APIs is they're kind of each a startup. Like, specifically extract, you know, Firecrawl is extract. There's, like, Expand AI. There's a whole bunch of, like, extract companies. They just focus on extract. I'm curious. Like, I feel like you guys are going to collide at some point. Like, right now, it's friendly. Everyone's in a blue ocean. At some point, it's going to be valuable enough that there's some turf battle here. I don't think you have a dog in this fight. I think you can mock extract to use an external service if they're better at it than you. But it's just an observation that, like, in the same way that I see each option, each checkbox in the side of custom GPTs becoming a startup or each box in the Karpathy chart being a startup. Like, this is also becoming a thing. Yeah. Paul [00:36:41]: I mean, like, so the way StageHand works is that it's MIT-licensed, completely open source. You bring your own API key to your LLM of choice. You could choose your LLM. We don't make any money off of the extract, really. We only really make money if you choose to run it with our browser. You don't have to. You can actually use your own browser, a local browser. You know, StageHand is completely open source for that reason. And, yeah, like, I think if you're building really complex web scraping workflows, I don't know if StageHand is the tool for you. I think it's really more if you're building an AI agent that needs a few general tools or if it's doing a lot of, like, web automation-intensive work. But if you're building a scraping company, StageHand is not your thing. You probably want something that's going to, like, get HTML content, you know, convert that to Markdown, query it. That's not what StageHand does. StageHand is more about reliability. I think we focus a lot on reliability and less so on cost optimization and speed at this point. swyx [00:37:33]: I actually feel like StageHand, so the way that StageHand works, it's like, you know, page.act, click on the quick start. Yeah. It's kind of the integration test for the code that you would have to write anyway, like the Puppeteer code that you have to write anyway. And when the page structure changes, because it always does, then this is still the test. This is still the test that I would have to write. Yeah. So it's kind of like a testing framework that doesn't need implementation detail. Paul [00:37:56]: Well, yeah. I mean, Puppeteer, Playwright, and Selenium were all designed as testing frameworks, right? Yeah. And now people are, like, hacking them together to automate the web. I would say, and, like, maybe this is, like, me being too specific. But, like, when I write tests, if the page structure changes without me knowing, I want that test to fail. So I don't know if, like, AI, like, regenerating that. Like, people are using StageHand for testing. But it's more for, like, usability testing, not, like, testing of, like, does the front end, like, has it changed or not. Okay. But generally where we've seen people, like, really, like, take off is, like, if they're using, you know, something.
If they want to build a feature in their application that's kind of like Operator or Deep Research, they're using StageHand to kind of power that tool calling in their own agent loop. Okay. Cool. swyx [00:38:37]: So let's go into Operator, the first big agent launch of the year from OpenAI. Seems like they have a whole bunch scheduled. You were on break and your phone blew up. What's your general view of computer use agents, which is what they're calling it? The overall category, before we go into Open Operator, just the overall promise of Operator. I will observe that I tried it once. It was okay. And I never tried it again. OpenAI's Operator and computer use agents Paul [00:38:58]: That tracks with my experience, too. Like, I'm a huge fan of the OpenAI team. Like, I do not view Operator as a company killer for Browserbase at all. I think it actually shows people what's possible. I think, like, computer use models make a lot of sense. And what I'm actually most excited about with computer use models is, like, their ability to, like, really take screenshots and reason and output steps. I think that using mouse clicks or mouse coordinates, I've seen that prove to be less reliable than I would like. And I just wonder if that's the right form factor. What we've done with our framework is anchor it to the DOM itself, anchor it to the actual item. So, like, if it's clicking on something, it's clicking on that thing, you know? Like, it's more accurate. No matter where it is. Yeah, exactly. Because it really ties in nicely. And it can handle, like, the whole viewport in one go, whereas, like, Operator can only handle what it sees. Can you hover? Is hovering a thing that you can do? I don't know if we expose it as a tool directly, but I'm sure there's, like, an API for hovering. Like, move mouse to this position. Yeah, yeah, yeah. I think you can trigger hover, like, via, like, the JavaScript on the DOM itself. But, no, I think, like, when we saw computer use, everyone's eyes lit up because they realized, like, wow, like, AI is going to actually automate work for people. And I think seeing that kind of happen from both of the labs, and I'm sure we're going to see more labs launch computer use models, I'm excited to see all the stuff that people build with it. I think that I'd love to see computer use power, like, controlling a browser on Browserbase. And I think, like, Open Operator, which was, like, our open source version of OpenAI's Operator, was our first take on, like, how can we integrate these models into Browserbase? And we handle the infrastructure and let the labs do the models. I don't have a sense that Operator will be released as an API. I don't know. Maybe it will. I'm curious to see how well that works because I think it's going to be really hard for a company like OpenAI to do things like support CAPTCHA solving or, like, have proxies. Like, I think it's hard for them structurally. Imagine this New York Times headline: OpenAI CAPTCHA Solving. Like, that would be a pretty bad headline. This New York Times headline: Browserbase Solves CAPTCHAs. No one cares. No one cares. And, like, our investors are bored. Like, we're all okay with this, you know? We're building this company knowing that the CAPTCHA solving is short-lived until we figure out how to authenticate good bots. I think it's really hard for a company like OpenAI, who has this brand that's so, so good, to balance with, like, the icky parts of web automation, which can be kind of complex to solve.
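The coordinates-versus-DOM distinction drawn above is easy to see with plain Playwright. A minimal sketch follows; the pixel position and the example link are arbitrary, and neither snippet claims to show how Operator or Stagehand is actually implemented.

```typescript
import { chromium } from "playwright";

async function run() {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com");

  // Coordinate-style action, the way screenshot-driven computer use models work:
  // click whatever happens to be at this pixel. Breaks if the layout shifts.
  await page.mouse.click(640, 360);

  // DOM-anchored action: resolve the target to an element, then click that
  // element wherever it currently sits on the page.
  await page.getByRole("link", { name: "More information" }).click();

  await browser.close();
}

run().catch(console.error);
```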
I'm sure OpenAI knows who to call whenever they need you. Yeah, right. I'm sure they'll have a great partnership. Alessio [00:41:23]: And is Open Operator just, like, a marketing thing for you? Like, how do you think about resource allocation? So, you can spin this up very quickly. And now there's all this, like, open deep research, just open all these things that people are building. We started it, you know. You're the original Open. We're the original Open Operator, you know? Is it just, hey, look, this is a demo, but, like, we'll help you build out an actual product for yourself? Like, are you interested in going more of a product route? That's kind of the OpenAI way, right? They started as a model provider and then… Paul [00:41:53]: Yeah, we're not interested in going the product route yet. I view Open Operator as a reference project, you know? Let's show people how to build these things using the infrastructure and models that are out there. And that's what it is. It's, like, Open Operator is very simple. It's an agent loop. It says, like, take a high-level goal, break it down into steps, use tool calling to accomplish those steps. It takes screenshots and feeds those screenshots into an LLM with the step to generate the right action. It uses Stagehand under the hood to actually execute this action. It doesn't use a computer use model. And it, like, has a nice interface using the live view that we talked about, the iframe, to embed that into an application. So I felt like people on launch day wanted to figure out how to build their own version of this. And we turned that around really quickly to show them. And I hope we do that with other things like deep research. We don't have a deep research launch yet. I think David from AOMNI actually has an amazing open deep research that he launched. It has, like, 10K GitHub stars now. So he's crushing that. But I think if people want to build these features natively into their application, they need good reference projects. And I think Open Operator is a good example of that. swyx [00:42:52]: I don't know. Actually, I'm actually pretty bullish on API-driven Operator. Because that's the only way that you can sort of, like, once it's reliable enough, obviously. And now we're nowhere near. But, like, give it five years. It'll happen, you know. And then you can sort of spin this up and browsers are working in the background and you don't necessarily have to know. And it just is booking restaurants for you, whatever. I can definitely see that future happening. I had this on the landing page here. This might be slightly out of order. But, you know, you have, like, sort of three use cases for Browserbase. Open Operator, or this is the Operator sort of use case. It's kind of like the workflow automation use case. And it competes with UiPath in the sort of RPA category. Would you agree with that? Yeah, I would agree with that. And then there's Agents we talked about already. And web scraping, which I imagine would be the bulk of your workload right now, right? Paul [00:43:40]: No, not at all. I'd say actually, like, the majority is browser automation. We're kind of expensive for web scraping. Like, I think that if you're building a web scraping product, if you need to do occasional web scraping or you have to do web scraping that works every single time, you want to use browser automation. Yeah. You want to use Browserbase. But if you're building web scraping workflows, what you should do is have a waterfall.
You should have the first request be a curl to the website. See if you can get it without even using a browser. And then the second request may be, like, a scraping-specific API. There's, like, a thousand scraping APIs out there that you can use to try and get data. ScrapingBee. ScrapingBee is a great example, right? Yeah. And then, like, if those two don't work, bring out the heavy hitter. Like, Browserbase will 100% work, right? It will load the page in a real browser, hydrate it. I see. swyx [00:44:21]: Because a lot of people don't render to JS. swyx [00:44:25]: Yeah, exactly. Paul [00:44:26]: So, I mean, the three big use cases, right? Like, you know, automation, web data collection, and then, you know, if you're building anything agentic that needs, like, a browser tool, you want to use Browserbase. Alessio [00:44:35]: Is there any use case that, like, you were super surprised by that people might not even think about? Oh, yeah. Or is it, yeah, anything that you can share? The long tail is crazy. Yeah. Surprising use cases of Browserbase Paul [00:44:44]: One of the case studies on our website that I think is the most interesting is this company called Benny. So, the way that it works is if you're on food stamps in the United States, you can actually get rebates if you buy certain things. Yeah. You buy some vegetables. You submit your receipt to the government. They'll give you a little rebate back. Say, hey, thanks for buying vegetables. It's good for you. That process of submitting that receipt is very painful. And the way Benny works is you use their app to take a photo of your receipt, and then Benny will go submit that receipt for you and then deposit the money into your account. That's actually using no AI at all. It's all, like, hard-coded scripts. They maintain the scripts. They've been doing a great job. And they built this amazing consumer app. But it's an example of, like, all these, like, tedious workflows that people have to do to kind of go about their business. And they're doing it for the sake of their day-to-day lives. And I had never known about, like, food stamp rebates or the complex forms you have to fill out to get them. But the world is powered by millions and millions of tedious forms, visas. You know, Lighthouse is a customer, right? You know, they do the O-1 visa. Millions and millions of forms are taking away humans' time. And I hope that Browserbase can help power software that automates away the web forms that we don't need anymore. Yeah. swyx [00:45:49]: I mean, I'm very supportive of that. I mean, forms. I do think, like, government itself is a big part of it. I think the government itself should embrace AI more to do more sort of human-friendly form filling. Mm-hmm. But I'm not optimistic. I'm not holding my breath. Yeah. We'll see. Okay. I think I'm about to zoom out. I have a little brief thing on computer use, and then we can talk about founder stuff, which is, I tend to think of developer tooling markets in impossible triangles, where everyone starts in a niche, and then they start to branch out. So I already hinted at a little bit of this, right? We mentioned Morph. We mentioned E2B. We mentioned Firecrawl. And then there's Browserbase. So there's, like, all this stuff of, like, have a serverless virtual computer that you give to an agent and let them do stuff with it. And there's various ways of connecting it to the internet. You can just connect to a search API, like SERP API, whatever other, like, EXA is another one. That's what you're searching.
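Backing up to the scraping waterfall outlined above: in code it reduces to a three-tier fallback. This is a rough sketch; the scraping-API endpoint and key are hypothetical, and the browser tier is only stubbed (it would be something like the earlier Playwright connection example).

```typescript
// Tier 1: plain fetch. Tier 2: a scraping API (URL and key made up).
// Tier 3: a real headless browser for pages that need full hydration.
async function getPageHtml(url: string): Promise<string> {
  // 1. Cheap and fast: a plain HTTP request. Fails on JS-rendered pages and bot walls.
  try {
    const res = await fetch(url);
    if (res.ok) {
      const html = await res.text();
      if (html.length > 500) return html; // crude "did we get real content?" check
    }
  } catch {
    /* fall through to the next tier */
  }

  // 2. A scraping-specific API that renders the page for you (hypothetical endpoint).
  try {
    const res = await fetch(
      `https://scraper.example.com/render?apiKey=YOUR_KEY&url=${encodeURIComponent(url)}`
    );
    if (res.ok) return await res.text();
  } catch {
    /* fall through */
  }

  // 3. The heavy hitter: load the page in a real browser and return the hydrated HTML.
  return await renderWithRealBrowser(url);
}

// Stand-in for the browser tier; see the earlier connectOverCDP sketch.
async function renderWithRealBrowser(url: string): Promise<string> {
  throw new Error(`sketch only: would load ${url} in a headless browser here`);
}
```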
You can also have a JSON markdown extractor, which is Firecrawl. Or you can have a virtual browser like Browserbase, or you can have a virtual machine like Morph. And then there's also maybe, like, a virtual sort of code environment, like Code Interpreter. So, like, there's just, like, a bunch of different ways to tackle the problem of give a computer to an agent. And I'm just kind of wondering if you see, like, everyone's just, like, happily coexisting in their respective niches. And as a developer, I just go and pick, like, a shopping basket of one of each. Or do you think that you eventually, people will collide?Future of browser automation and market competitionPaul [00:47:18]: I think that currently it's not a zero-sum market. Like, I think we're talking about... I think we're talking about all of knowledge work that people do that can be automated online. All of these, like, trillions of hours that happen online where people are working. And I think that there's so much software to be built that, like, I tend not to think about how these companies will collide. I just try to solve the problem as best as I can and make this specific piece of infrastructure, which I think is an important primitive, the best I possibly can. And yeah. I think there's players that are actually going to like it. I think there's players that are going to launch, like, over-the-top, you know, platforms, like agent platforms that have all these tools built in, right? Like, who's building the rippling for agent tools that has the search tool, the browser tool, the operating system tool, right? There are some. There are some. There are some, right? And I think in the end, what I have seen as my time as a developer, and I look at all the favorite tools that I have, is that, like, for tools and primitives with sufficient levels of complexity, you need to have a solution that's really bespoke to that primitive, you know? And I am sufficiently convinced that the browser is complex enough to deserve a primitive. Obviously, I have to. I'm the founder of BrowserBase, right? I'm talking my book. But, like, I think maybe I can give you one spicy take against, like, maybe just whole OS running. I think that when I look at computer use when it first came out, I saw that the majority of use cases for computer use were controlling a browser. And do we really need to run an entire operating system just to control a browser? I don't think so. I don't think that's necessary. You know, BrowserBase can run browsers for way cheaper than you can if you're running a full-fledged OS with a GUI, you know, operating system. And I think that's just an advantage of the browser. It is, like, browsers are little OSs, and you can run them very efficiently if you orchestrate it well. And I think that allows us to offer 90% of the, you know, functionality in the platform needed at 10% of the cost of running a full OS. Yeah.Open Operator: Browserbase's Open-Source Alternativeswyx [00:49:16]: I definitely see the logic in that. There's a Mark Andreessen quote. I don't know if you know this one. Where he basically observed that the browser is turning the operating system into a poorly debugged set of device drivers, because most of the apps are moved from the OS to the browser. So you can just run browsers.Paul [00:49:31]: There's a place for OSs, too. Like, I think that there are some applications that only run on Windows operating systems. 
And Eric from pig.dev in this upcoming YC batch, or last YC batch, like, he's building all run tons of Windows operating systems for you to control with your agent. And like, there's some legacy EHR systems that only run on Internet-controlled systems. Yeah.Paul [00:49:54]: I think that's it. I think, like, there are use cases for specific operating systems for specific legacy software. And like, I'm excited to see what he does with that. I just wanted to give a shout out to the pig.dev website.swyx [00:50:06]: The pigs jump when you click on them. Yeah. That's great.Paul [00:50:08]: Eric, he's the former co-founder of banana.dev, too.swyx [00:50:11]: Oh, that Eric. Yeah. That Eric. Okay. Well, he abandoned bananas for pigs. I hope he doesn't start going around with pigs now.Alessio [00:50:18]: Like he was going around with bananas. A little toy pig. Yeah. Yeah. I love that. What else are we missing? I think we covered a lot of, like, the browser-based product history, but. What do you wish people asked you? Yeah.Paul [00:50:29]: I wish people asked me more about, like, what will the future of software look like? Because I think that's really where I've spent a lot of time about why do browser-based. Like, for me, starting a company is like a means of last resort. Like, you shouldn't start a company unless you absolutely have to. And I remain convinced that the future of software is software that you're going to click a button and it's going to do stuff on your behalf. Right now, software. You click a button and it maybe, like, calls it back an API and, like, computes some numbers. It, like, modifies some text, whatever. But the future of software is software using software. So, I may log into my accounting website for my business, click a button, and it's going to go load up my Gmail, search my emails, find the thing, upload the receipt, and then comment it for me. Right? And it may use it using APIs, maybe a browser. I don't know. I think it's a little bit of both. But that's completely different from how we've built software so far. And that's. I think that future of software has different infrastructure requirements. It's going to require different UIs. It's going to require different pieces of infrastructure. I think the browser infrastructure is one piece that fits into that, along with all the other categories you mentioned. So, I think that it's going to require developers to think differently about how they've built software for, you know

Windows Weekly (MP3)
WW 921: Regret as a Service - Drag tray, 3 new Framework PCs, Free Office test?

Windows Weekly (MP3)

Play Episode Listen Later Feb 27, 2025 159:02


Week D - If a preview update falls in the woods and no one downloads it, did it really happen? Plus, what is going on with AI for free? Isn't this stuff expensive? Windows 23H2/24H2: Taskbar share, Spotlight updates, Windows Backup snooze in File Explorer, etc. Dev and Beta - Semantic search adds OneDrive photo search to Search (was in File Explorer previously), plus the Recall reboot no one is explaining. And Trim comes to Snipping Tool (Canary and Dev) Beta (23H2) - Share gets a drag tray and Start All apps gets new Grid and Category views Lenovo revenues surge 20 percent Framework announces Ryzen AI-based Laptop 13, plus Laptop 12 and Desktop Opera adds Bluesky, Discord, and Slack to the sidebar Microsoft 365 Microsoft confuses us with a test of a free, ad-supported core Office suite for Windows Amazon kills Chime, will use Zoom, Teams, and more Amazon kills Appstore for Android Google to drop SMS-based 2FA, move to QR codes Paul continues with his SSO removals, an update on whether this impacts account availability AI/Dev Following up the previous discussion with an interesting way to use an AI chatbot Alexa enters the AI era OpenAI now has 400 million weekly active users Microsoft cancels some AI datacenter leases, but it's not done spending billions on AI Anthropic releases first reasoning model, with a twist Gemini Code Assist is now free for individuals! ThinkDeeper and Voice in Copilot no longer have usage restrictions OpenAI makes Deep Research available to all paid customers Apple delays biggest Siri advances past iOS 18.4 - Math is hard, but AI is even harder Spotify expands into AI-narrated audiobooks NVIDIA partners to bring free ASL training to everyone .NET 10 Preview 1 arrives with the promise of LTS and not much else Xbox Xbox Cloud Gaming gets its first update in a while, and it's a big one Microsoft delays Fable reboot to 2026 Tips and Picks Tip of the week: You can view the source code for the oldest machine-readable version of Unix App pick of the week: Adobe Photoshop for iPhone RunAs Radio this week: Exchange Server in 2025 with Michel de Rooij Brown liquor pick of the week: Glenrothes 15 Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: 1password.com/windowsweekly cachefly.com/twit

The Cybersecurity Defenders Podcast
#190 - How MSSPs can help clients meet regulatory requirements with Garret Grajek, CEO at YouAttest

The Cybersecurity Defenders Podcast

Play Episode Listen Later Feb 3, 2025 38:18


On this episode of The Cybersecurity Defenders Podcast we speak with Garret Grajek, CEO of YouAttest, about how MSSPs help clients meet regulatory requirements and what it means for the MSSP. Garret is a certified security leader with nearly 30 years of experience in information security. Garret is widely recognized as a visionary in identity, access, and authentication, holding 13 patents in areas such as X.509, mobile security, single sign-on (SSO), federation, and multi-factor technologies. Over the course of his career, he has contributed to major security projects for prominent commercial clients like Dish Networks, Office Depot, TicketMaster, and E*Trade, as well as public sector organizations including the U.S. Navy and the EPA. Garret began his career as a security programmer at Texas Instruments, IBM, and Tandem Computers, later advancing to key roles at RSA, Netegrity, and Cisco. He is also the founder and creator of SecureAuth IdP, a two-factor authentication and SSO platform. Known for his expertise in security architecture, product development, and leadership, Garret is a thought leader in modern IT architecture, including mobile deployments, cloud, hybrid environments, and advanced authentication technologies.

ASCO Daily News
Advances in Adjuvant Therapy for High-Risk Early Breast Cancer With Germline Mutations

ASCO Daily News

Play Episode Listen Later Jan 30, 2025 19:38


Dr. Jasmine Sukumar and Dr. Dionisia Quiroga discuss advances in adjuvant therapy for patients with early breast cancer and BRCA1/2 mutations, including how to identify patients who should receive genetic testing and the significant survival benefits of olaparib that emerged from the OlympiA trial. TRANSCRIPT Dr. Jasmine Sukumar: Hello, I'm Dr. Jasmine Sukumar, your guest host of the ASCO Daily News Podcast today. I'm an assistant professor and breast medical oncologist at the University of Texas MD Anderson Cancer Center. On today's episode, we'll be exploring advances in adjuvant therapy for high-risk early breast cancer in people with BRCA1/2 germline mutations. Joining me for this discussion is Dr. Dionisa Quiroga, an assistant professor and breast medical oncologist at the Ohio State University Comprehensive Cancer Center.  Our full disclosures are available in the transcript of this episode.  Dr. Quiroga, it's great to have you on the podcast. Thanks for being here. Dr. Dionisia Quiroga: Thank you. Looking forward to discussing this important topic. Dr. Jasmine Sukumar: Let's start by going over who should be tested for BRCA1/2 genetic mutations. How do you identify patients with breast cancer in your clinic who should be offered BRCA1/2 genetic testing? Dr. Dionisia Quiroga: So, guidelines on who to offer testing to somewhat differ between organizations at this point. I would say, generally, I do follow our current ASCO-Society of Surgical Oncology (SSO) Guidelines, though. Those guidelines recommend that BRCA1/2 mutation testing be offered to all patients who are diagnosed with breast cancer and are 65 years old or younger. For those that are older than 65 years old, there are additional factors to really take into account to decide on who to recommend testing for. Some of this has to do with personal and family history as well as ancestry. The NCCN also has their own specific guidelines for who to offer testing to. For example, people assigned male at birth; those who are found to have a second breast primary; those who are diagnosed at a young age; and those with significant family history should also be offered BRCA1/2 testing.  I think, very important for our discussion today, ASCO and SSO also made a very important point that all patients who may be eligible for PARP inhibitor therapy should be offered testing. So clearly this includes a large amount of our patient population. In my practice, we often refer to our Cancer Genetics Program. We're fortunate to have many experienced genetic counselors who can complete pre-test and post-test counseling with our patients. However, in settings where this may not be accessible to patients, it can also be appropriate for oncology providers to order the testing and ideally perform some of this counseling as well. Dr. Jasmine Sukumar: Thank you Dr. Quiroga. Let's next review where we are in current clinical practice guidelines. What current options do we have for adjuvant therapy specific to people with high-risk early breast cancer and BRCA1/2 genetic mutations? Dr. Dionisia Quiroga: Our current guidelines recommend adjuvant olaparib for one year for individuals with HER2-negative high risk breast cancer. This approval largely came from the data and the results of the OlympiA trial. This was a prospective phase 3, double blind, randomized clinical trial. 
It enrolled patients who had been diagnosed with HER2-negative early-stage breast cancer who also carried germline pathogenic or likely pathogenic variants of either the BRCA1 and/or BRCA2 genes. The disease also had to be considered high-risk and there were several criteria that had to be evaluated to deem whether or not these patients were high-risk. For example, those who are treated with neoadjuvant chemotherapy, if they had disease that was triple-negative, they needed to have some level of invasive residual disease at time of surgery. Alternatively, if the disease was hormone receptor-positive, they needed to have residual disease and a calculated CPS + EG score of 3 or higher. This scoring system is something that estimates relapse probability on the basis of clinical and pathologic stage, ER status, and histologic grade, and this will give you a score ranging from 0 to 6. In general, the higher the score, the worse the prognosis. This calculator though is available to the public online to allow providers to calculate this risk.  For the subset of patients who received adjuvant chemotherapy, for them to qualify for the OlympiA trial, if they had triple-negative disease, they needed to have a tumor of at least 2 cm or greater and/or have positive lymph nodes for disease. For hormone receptor-positive disease that was treated with adjuvant chemotherapy, they were required to have four or more pathologically confirmed positive lymph nodes at time of surgery. From this specified pool, patients were then randomized 1:1 to get either adjuvant olaparib starting at 300 mg twice a day or a matching placebo twice a day after they had completed surgery, chemotherapy and radiation treatment if needed. Dr. Jasmine Sukumar: And what were the outcomes of this study? Dr. Dionisia Quiroga: The study ended up enrolling over 1,800 patients and from these 1,800 patients, 70% had a BRCA1 mutation while 30% had a BRCA2 mutation. About 80% of the patients had triple-negative disease compared to hormone receptor-positive disease. Interestingly, about half of all patients enrolled had received neoadjuvant chemotherapy while the other half received adjuvant chemotherapy.  Looking at the outcomes, this was overall a very positive study. We actually now have outcomes data from a median of about 6 years out. This was just reported in December at the 2024 San Antonio Breast Cancer Symposium. There was found to be a 9.4% absolute difference in six-year invasive disease-free survival favoring the olaparib arm over the placebo arm. What was also interesting is that this was consistent across multiple subgroups of patients and the benefit was really seen whether or not they had hormone receptor-positive or triple-negative disease. The absolute difference in distant disease-free survival was also high at 7.8% and additionally favored olaparib. Most importantly, there was found to be a significant overall survival benefit. The six-year overall survival was 87.5% in the olaparib group compared to 83.2% in the placebo group. This translates to about a 4.4% difference and a relative 28% overall survival benefit in using olaparib.  Now, future follow up is going to be very important. Follow up for this study is actually planned to continue out until June 2029 so we can continue to observe if these survival curves will continue to branch apart as they have so far at each follow up. 
And I think this is especially important for those patients diagnosed with hormone receptor-positive cancers because we know those patients are at particular risk for later recurrences.  As an additional side note, the researchers also noted that there were fewer primary malignancies in the olaparib group, not just of the breast but also primary ovarian or fallopian tube cancers as well, which is not completely surprising knowing that this drug is also heavily used and beneficial in different types of gynecologic cancers. Ultimately, the amount of adverse events reported have been low with only about 9.9% of patients receiving olaparib needing to discontinue drug due to adverse events, and this is compared to 4.2% reported in the placebo group. Dr. Jasmine Sukumar: You mentioned that the OlympiA trial showed an overall survival benefit, but interestingly the OlympiAD trial looking at olaparib versus chemotherapy in patients with advanced metastatic HER2-negative breast cancer did not show a significant overall survival benefit. Could you discuss those differences? Dr. Dionisia Quiroga: I agree, that's a very good point. So OlympiA's comparator arm was, of course, a placebo. So while this isn't the same as comparing to chemotherapy, it does still potentially suggest that there is a degree of benefit that olaparib can provide when it's introduced in the early local disease setting compared to advanced metastatic disease. I think we need more future trials looking at potential other combinations to see if we can improve the efficacy of PARP inhibitors in the metastatic setting. Dr. Jasmine Sukumar: For patients who do choose to proceed with use of adjuvant olaparib due to the promising efficacy, what side effects should oncologists counsel their patients about? Dr. Dionisia Quiroga: The most common notable side effects, I would say with olaparib and other PARP inhibitors are really cytopenias. Gastrointestinal side effects such as nausea and vomiting can occur as well as fatigue. There are some less common but potentially more serious side effects that we should counsel our patients on. This includes pneumonitis. So counseling patients on if they're short of breath or experiencing cough to let their provider know. Venous thromboembolism can also be increased rates of occurrence. And then of course myelodysplastic syndromes or acute myeloid leukemia is something that we often are concerned about. That being said, I think it should be noted that interestingly in the OlympiA trial so far, there have been less new cases of MDS and AML in the olaparib group than actually what's been reported in the placebo group at this median follow up of over six years out. So we'll need to continue to monitor this endpoint over time, but I do think this provides some reassurance. Dr. Jasmine Sukumar: Since the initiation of the OlympiA trial, other adjuvant treatments have also been studied and FDA approved for non-metastatic HER2-negative breast cancer. So for example, the CREATE-X trial established adjuvant capecitabine as an FDA approved treatment option in patients with triple-negative breast cancer who had residual disease following neoadjuvant chemotherapy. So if a patient with triple-negative breast cancer with residual disease is eligible for both adjuvant olaparib and adjuvant capecitabine treatments, how do you decide amongst the two? Dr. 
Dionisia Quiroga: If a patient's eligible for both, I honestly often favor olaparib, and I do this because I find the data for adjuvant olaparib a little bit more compelling. There are also differences in toxicity profile and treatment duration between the two that I think we should discuss with patients. For example, olaparib is supposed to be taken for a year total, whereas with capecitabine we typically treat for six to eight cycles with each cycle taking three weeks. There are some who may also sequence the two drugs in very high-risk disease. However, this is very much a data free zone. We don't have any current clinical trials really comparing these two or if sequencing of these agents is appropriate. So I don't currently do this in my own clinical practice. Dr. Jasmine Sukumar: Nowadays, almost all patients with stage 2 to 3 triple-negative breast cancer will be offered neoadjuvant chemotherapy plus immune checkpoint inhibitor therapy pembrolizumab per our KEYNOTE-522 trial data. With our current approach, pembrolizumab is continued into the adjuvant setting regardless of surgical outcome, so that patients receive a year total of immunotherapy. So in patients with residual disease and a BRCA germline mutation, do you suggest using adjuvant olaparib concurrently with pembrolizumab? Do we have any data to support that approach? Dr. Dionisia Quiroga: I do. I do use them concurrently. If a patient is eligible for adjuvant olaparib, I would use it the same way as if they were not on pembrolizumab. That being said, there are no large studies currently that have shown what the benefit or the toxicity of pembrolizumab plus olaparib are for early-stage disease. However, we do have some safety data of this combinatorial approach from other studies. For example, the phase 2/3 KEYLYNK-009 study showed that patients with advanced metastatic triple-negative breast cancer who were receiving concurrent pembrolizumab and olaparib had a manageable safety profile, particularly as the toxicities of these drugs alone don't tend to overlap. Dr. Jasmine Sukumar: And what about endocrine therapy for those that also have hormone receptor-positive disease? Dr. Dionisia Quiroga: Adjuvant endocrine therapy should definitely be continued while patients are on olaparib if they're hormone receptor-positive. An important component of this will also likely be ovarian suppression, which should include recommendation of risk reducing bilateral salpingo oophorectomy due to the risk of ovarian cancer development in patients who carry BRCA1/2 gene mutations. In most cases, this should happen at age 40 or before for those that carry a BRCA1 mutation, and at age 45 or prior for those with BRCA2 mutations. Dr. Jasmine Sukumar: And do you also consider adjuvant bisphosphonates in this context? Dr. Dionisia Quiroga: Yes. Like adjuvant endocrine therapy, adjuvant bisphosphonates were also instructed to be given according to standard guidelines in the OlympiA trial, so I would recommend use of bisphosphonates when indicated. You can refer to the ASCO Ontario Health Guidelines on Adjuvant Bone-Modifying Therapy Breast Cancer to guide that decision in order to utilize this due to multiple clinical benefits. It doesn't just help in terms of adjuvant breast cancer treatment but also reduction of fracture rate and down the line, improved breast cancer mortality.  Dr. 
Jasmine Sukumar: Particularly in hormone receptor-positive breast cancer, another adjuvant therapy option that was not available when the OlympiA trial started are the CDK4/6 inhibitors, ribociclib and abemaciclib, based on the NATALEE and monarchE studies. So how do you consider the use of these adjuvant therapy drugs in the context of olaparib and BRCA mutations? Dr. Dionisia Quiroga: Yeah, so we are definitely in a data-free zone here. And that's in part because the NATALEE and the monarchE studies are still ongoing and reporting data out at the same time that we're getting updated OlympiA data. So unlike some of our other adjuvant treatments that we discussed, where olaparib could be safely given concurrently, the risk of myelosuppression and using both a CDK4/6 inhibitor and a PARP inhibitor at the same time would be too high. In some cases, even if a patient has a BRCA1/2 mutation, they may not meet that specified inclusion criteria that OlympiA set for what they consider to be high-risk disease. And we know from the NATALEE and the monarchE trial there are also different markers that they use to denote high-risk disease. So it's possible, for example, in the NATALEE trial that looks specifically at adjuvant ribociclib, they included a much larger pool of hormone receptor-positive early-stage breast cancers, including a subset that did not have positive axillary lymph nodes.  In cases where patients would qualify for both olaparib and a CDK4/6 inhibitor, I think this is worth a nuanced discussion with our patients about the potential benefits, risks and administration of these drugs. I think another point to bring up is the cost associated with these drugs and the length of time patients will be on for, because financial toxicity is always something that we should bring up with patients as well. When sequencing these in high-risk disease, my practice is to generally favor olaparib first due to the overall survival data. There is also some data to support that patients with BRCA1/2 germline mutations may not respond quite as well to CDK4/6 inhibitors compared to those without. But again, this is still outside of the purview of current guidelines. Fortunately, we have more potential choices for patients, and that's a good thing, but shared decision making also needs to be key. Dr. Jasmine Sukumar: And while our focus today is on adjuvant treatment for people who carry germline BRCA mutations, what about other related gene mutations such as PALB2 pathogenic variant? Dr. Dionisia Quiroga: That's a great question. Clinical trials in the advanced metastatic setting have shown that there is efficacy of olaparib in the setting for PALB2 mutations. This is largely based on the TBCRC 048 phase 2 trial and that provided a Category 2B NCCN recommendation for patients with these PALB2 gene mutations. However, we're really still lacking enough clinical data for use in early-stage disease, so I don't currently use adjuvant olaparib in this case. I am definitely eager for more data in this area as the efficacy of PARP inhibitors in PALB2 gene mutations is very compelling. I think also, in the same line, there's been some data for somatic BRCA1/2 mutations in the metastatic setting, but we still have a lack of data for the early stage setting here as well. Dr. Jasmine Sukumar: Thank you Dr. Quiroga, for sharing your valuable insights with us today on the ASCO Daily News Podcast. Dr. Dionisia Quiroga: Thank you, Dr. Sukumar. Dr. Jasmine Sukumar: And thank you to our listeners for your time today. 
You'll find links to the studies discussed today in the transcript of this episode. Finally, if you value the insights that you hear on the ASCO Daily News Podcast, please take a moment to rate, review and subscribe wherever you get your podcasts. Thank you.
Disclaimer: The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement.
Follow today's speakers:
Dr. Dionisia Quiroga @quirogad @quirogad.bsky.social
Dr. Jasmine Sukumar @JasmineSukumar @jasmine.sukumar.bsky.social
Follow ASCO on social media:
@ASCO on X
@ASCO on Bluesky
ASCO on Facebook
ASCO on LinkedIn
Disclosures:
Dr. Dionisia Quiroga: No relationships to disclose
Dr. Jasmine Sukumar: Honoraria: Sanofi (Immediate Family Member)

Tech Optimist
#87 - Meet the Startup Changing How We Authenticate Everything

Tech Optimist

Play Episode Listen Later Jan 16, 2025 23:35


In this Meet the Startup episode of the Alumni Ventures Tech Optimist Podcast, Lucas Pasch sits down with Chad Gerstensang, co-founder and CEO of UNIXi, to explore how the company is redefining cybersecurity. Chad explains how UNIXi's integration-free technology delivers universal single sign-on (SSO) to protect enterprises against social engineering attacks like phishing and credential theft. By addressing critical gaps in identity and access management solutions, UNIXi empowers businesses to secure every application seamlessly. Tune in to learn how UNIXi is shaping the future of cybersecurity and gaining traction in industries like healthcare and finance.
To Learn More:
Alumni Ventures (AV)
AV LinkedIn
AV Deep Tech Fund
Tech Optimist
UNIXi
Legal Disclosure: https://av-funds.com/tech-optimist-disclosures
Creators & Guests:
Chad Gerstensang - Guest
Lucas Pasch - Guest

Ridgefield Tiger Talk
Ridgefield Tiger Talk 115: School Security Update

Ridgefield Tiger Talk

Play Episode Listen Later Dec 13, 2024 18:50


In this week's episode of Ridgefield Tiger Talk, we welcome back to the show the Director of Safety and Security, Josh Zabin. We discuss how the new school security officer (SSO) model is working, the benefits/community impact of having these SSOs at each building, the continuing work in making our buildings secure, and Sandy Hook Promise, our anonymous reporting system. Thanks for listening!

Unofficial SAP on Azure podcast
#219 - The one with SSO to SAP GUI using Global Secure Access (Martin Raepple) | SAP on Azure Video Podcast

Unofficial SAP on Azure podcast

Play Episode Listen Later Nov 29, 2024 32:03


In episode 219 of our SAP on Azure video podcast we talk about SSO to the SAP GUI using Global Secure Access. In a previous episode we already talked about using MFA with SAP GUI leveraging the SAP Secure Login Service. Today we look again at providing secure access from your SAP GUI to your SAP system, but this time using Microsoft Global Secure Access. For this we have Martin Raepple joining!
Find all the links mentioned here: https://www.saponazurepodcast.de/episode219
Reach out to us for any feedback / questions:
* Robert Boban: https://www.linkedin.com/in/rboban/
* Goran Condric: https://www.linkedin.com/in/gorancondric/
* Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/
#Microsoft #SAP #Azure #SAPonAzure #SAPGUI #SSO

The Cybersecurity Defenders Podcast
#172 - Cybercrime cottage industries with Reed McGinley-Stempel, the Co-Founder and CEO of Stytch

The Cybersecurity Defenders Podcast

Play Episode Listen Later Nov 27, 2024 35:28


On today's episode of The Cybersecurity Defenders Podcast we talk about cybercrime cottage industries with Reed McGinley-Stempel, the Co-Founder and CEO of Stytch. Stytch is a platform designed to streamline authentication, authorization, and fraud prevention in a way that enhances security while minimizing user friction. Stytch serves both consumer and B2B applications, offering a variety of authentication solutions, including features like Google One-Tap and Biometrics for consumer-facing applications, as well as SSO, Role-Based Access Control, and SCIM integrations for enterprise SaaS. Reed founded Stytch after witnessing the challenges teams face when building secure and user-friendly authentication solutions, a problem he first encountered while working at Plaid. He is also a proud Duke alumnus and was the recipient of the prestigious Fulbright Scholarship.

SurgOnc Today
SOI Article Series: The Prognostic Role of Post-operative cfDNA after Resection of Colorectal Liver Metastases: A Systematic Review and Meta-Analysis

SurgOnc Today

Play Episode Listen Later Nov 4, 2024 14:24


In this inaugural episode of the Surgical Oncology Insight series of SurgOnc Today®, Dr. Shishir Maithel, Editor of Surgical Oncology Insight, SSO's open-access journal, discusses with Dr. Brett Ecker the results of a systematic review and meta-analysis characterizing the incidence of cfDNA-positivity after resection of colorectal cancer liver metastases  with quantification of its sensitivity and specificity for postoperative recurrence, as reported in his article, "The Prognostic Role of Post-operative cfDNA after Resection of Colorectal Liver Metastases: A Systematic Review and Meta-Analysis."

Paul's Security Weekly
Making TLS More Secure, Lessons from IPv6, LLMs Finding Vulns - Arnab Bose, Shiven Ramji - ASW #305

Paul's Security Weekly

Play Episode Listen Later Oct 29, 2024 82:48


Better TLS implementations with Rust, fuzzing, and managing certs, appsec lessons from the everlasting transition to IPv6, LLMs for finding vulns (and whether fuzzing is better), and more! Also check out this presentation from BSides Knoxville that we talked about briefly, https://youtu.be/DLn7Noex_fc?feature=shared Generative AI has been the talk of the technology industry for the past 18+ months. Companies are seeing its value, so generative AI budgets are growing. With more and more AI agents expected in the coming years, it's essential that we are securing how consumers interact with generative AI agents and how developers build AI agents into their apps. This is where identity comes in. Shiven Ramji, President of Customer Identity Cloud at Okta, will dive into the importance of protecting the identity of AI agents and Okta's new security tools revealed at Oktane that address some of the largest issues consumers and businesses have with generative AI right now. Segment Resources: https://www.okta.com/oktane/ https://www.okta.com/press-room/press-releases/okta-helps-builders-easily-implement-auth-for-genai-apps-secure-how/ Today, there isn't an identity security standard for enterprise applications that ensures interoperability across all SaaS and IDPs. There also isn't an easy way for an app, resource, workload, API or any other enterprise technology to make itself discoverable, governable, support SSO and SCIM and continuous authentication. This lack of standardization is one of the biggest barriers to cybersecurity today. Arnab Bose, Chief Product Officer, Workforce Identity Cloud at Okta, joins Security Weekly's Mandy Logan to discuss the need for a new, comprehensive identity security standard for enterprise applications, and the work Okta is doing alongside other industry players to institute a framework for SaaS companies to enhance the end-to-end security of their products across every touchpoint of their technology stack. Segment Resources: https://www.okta.com/oktane/ https://www.okta.com/press-room/press-releases/okta-openid-foundation-tech-firms-tackle-todays-biggest-cybersecurity/ https://www.okta.com/press-room/press-releases/okta-is-reducing-the-risk-of-unmanaged-identities-social-engineering/ This segment is sponsored by Oktane, to view all of the CyberRisk TV coverage from Oktane visit https://securityweekly.com/oktane. Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-305

Paul's Security Weekly TV
Protecting Identity of AI Agents & Standardizing Identity Security for SaaS Apps - Shiven Ramji, Arnab Bose - ASW #305

Paul's Security Weekly TV

Play Episode Listen Later Oct 29, 2024 30:42


Generative AI has been the talk of the technology industry for the past 18+ months. Companies are seeing its value, so generative AI budgets are growing. With more and more AI agents expected in the coming years, it's essential that we are securing how consumers interact with generative AI agents and how developers build AI agents into their apps. This is where identity comes in. Shiven Ramji, President of Customer Identity Cloud at Okta, will dive into the importance of protecting the identity of AI agents and Okta's new security tools revealed at Oktane that address some of the largest issues consumers and businesses have with generative AI right now. Segment Resources: https://www.okta.com/oktane/ https://www.okta.com/press-room/press-releases/okta-helps-builders-easily-implement-auth-for-genai-apps-secure-how/ Today, there isn't an identity security standard for enterprise applications that ensures interoperability across all SaaS and IDPs. There also isn't an easy way for an app, resource, workload, API or any other enterprise technology to make itself discoverable, governable, support SSO and SCIM and continuous authentication. This lack of standardization is one of the biggest barriers to cybersecurity today. Arnab Bose, Chief Product Officer, Workforce Identity Cloud at Okta, joins Security Weekly's Mandy Logan to discuss the need for a new, comprehensive identity security standard for enterprise applications, and the work Okta is doing alongside other industry players to institute a framework for SaaS companies to enhance the end-to-end security of their products across every touchpoint of their technology stack. Segment Resources: https://www.okta.com/oktane/ https://www.okta.com/press-room/press-releases/okta-openid-foundation-tech-firms-tackle-todays-biggest-cybersecurity/ https://www.okta.com/press-room/press-releases/okta-is-reducing-the-risk-of-unmanaged-identities-social-engineering/ This segment is sponsored by Oktane, to view all of the CyberRisk TV coverage from Oktane visit https://securityweekly.com/oktane. Show Notes: https://securityweekly.com/asw-305

Application Security Weekly (Audio)
Making TLS More Secure, Lessons from IPv6, LLMs Finding Vulns - Arnab Bose, Shiven Ramji - ASW #305

Application Security Weekly (Audio)

Play Episode Listen Later Oct 29, 2024 82:48


Better TLS implementations with Rust, fuzzing, and managing certs, appsec lessons from the everlasting transition to IPv6, LLMs for finding vulns (and whether fuzzing is better), and more! Also check out this presentation from BSides Knoxville that we talked about briefly, https://youtu.be/DLn7Noex_fc?feature=shared Generative AI has been the talk of the technology industry for the past 18+ months. Companies are seeing its value, so generative AI budgets are growing. With more and more AI agents expected in the coming years, it's essential that we are securing how consumers interact with generative AI agents and how developers build AI agents into their apps. This is where identity comes in. Shiven Ramji, President of Customer Identity Cloud at Okta, will dive into the importance of protecting the identity of AI agents and Okta's new security tools revealed at Oktane that address some of the largest issues consumers and businesses have with generative AI right now. Segment Resources: https://www.okta.com/oktane/ https://www.okta.com/press-room/press-releases/okta-helps-builders-easily-implement-auth-for-genai-apps-secure-how/ Today, there isn't an identity security standard for enterprise applications that ensures interoperability across all SaaS and IDPs. There also isn't an easy way for an app, resource, workload, API or any other enterprise technology to make itself discoverable, governable, support SSO and SCIM and continuous authentication. This lack of standardization is one of the biggest barriers to cybersecurity today. Arnab Bose, Chief Product Officer, Workforce Identity Cloud at Okta, joins Security Weekly's Mandy Logan to discuss the need for a new, comprehensive identity security standard for enterprise applications, and the work Okta is doing alongside other industry players to institute a framework for SaaS companies to enhance the end-to-end security of their products across every touchpoint of their technology stack. Segment Resources: https://www.okta.com/oktane/ https://www.okta.com/press-room/press-releases/okta-openid-foundation-tech-firms-tackle-todays-biggest-cybersecurity/ https://www.okta.com/press-room/press-releases/okta-is-reducing-the-risk-of-unmanaged-identities-social-engineering/ This segment is sponsored by Oktane, to view all of the CyberRisk TV coverage from Oktane visit https://securityweekly.com/oktane. Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-305

Blerd and Beyond
Horror Xiaolin Showdown ( Season 6 Finale)

Blerd and Beyond

Play Episode Listen Later Oct 29, 2024 90:59


What's up you beautiful people out there and Our Fellow Space Rangers. We are Blerd and Beyond and this is our Season 6 Finale, Horror Xiaolin Showdown. We have some favorites and some horror greats in today's battle. So grab your Shen Gong Wu; you don't want to miss this finale. We really appreciate everyone that's been following our Journey. P.S. So don't let anyone stop you from going beyond your dreams.

COMPRESSEDfm
184 | Laravel vs. Full Stack JavaScript: The Debate

COMPRESSEDfm

Play Episode Listen Later Oct 23, 2024 54:23


Amy, Brad, and Aaron discuss how Laravel and JavaScript frameworks like React can coexist in a modern web development workflow. They examine the benefits of Laravel's integrated ecosystem, the pragmatism behind choosing technologies like SQLite, and the cultural differences between Laravel's "benevolent dictator" model versus the JavaScript community's preference for composability and modularity.
Sponsor
WorkOS - WorkOS helps you launch enterprise features like SSO and user management with ease. Thanks to the AuthKit for JavaScript, your team can integrate in minutes and focus on what truly matters—building your app.
Show Notes
00:00:00 - Introduction and Sponsors
00:01:06 - Welcoming Aaron Francis
00:02:07 - Amy's Experience with Laravel and AI
00:04:12 - Aaron's Transition to Try Hard Studios
00:08:02 - Production Setup and Process
00:10:15 - Monetization Strategies for Try Hard Studios
00:13:39 - The Resurgence of SQLite
00:18:08 - SQLite in Modern Development Workflows
00:23:24 - Turso and Innovations with SQLite
00:28:00 - Laravel vs. Full Stack JavaScript: The Debate
00:33:16 - Integrating Laravel with Frontend Frameworks
00:39:03 - Pragmatic Approaches to Web Development
00:44:54 - JavaScript Ecosystem and Laravel Comparisons
00:53:00 - Laravel's Evolution to Embrace JavaScript
00:54:09 - Closing Remarks

CISO-Security Vendor Relationship Podcast
Does Burying Your Head in the Sand Count as a Security Posture? (LIVE in Boca Raton, FL)

CISO-Security Vendor Relationship Podcast

Play Episode Listen Later Oct 8, 2024 45:54


All links and images for this episode can be found on CISO Series. This week's episode is hosted by me, David Spark (@dspark), producer of CISO Series, and Eduardo Ortiz, VP, global head of cybersecurity, Techtronic Industries. Joining us is Adam Fletcher, CSO, Blackstone.
In this episode:
Keeping our eyes on new risks
The hiring disconnect
Mental health in incident response
Moving on from CrowdStrike
Thanks to our podcast sponsors, Fortra, Quadrant Information Security, and Savvy Security! Fortra's Data Protection solutions protect sensitive data while keeping users productive. Our interlocking data loss prevention (DLP), data classification, and secure collaboration tools can be SaaS deployed or on-premises, and we offer managed services to extend your team and reduce risk. Visit www.fortra.com/solutions/data-security/data-protection for more information. Quadrant Security is bad news for bad dudes. Quadrant's XDR solution combines the best people, processes, and technology — managing your security so you can manage business operations. For a limited time, our analysts will provide your organization a free dark web report, detailing the data leaving you vulnerable. Learn more: quadrantsec.com/darkweb. Despite significant investments in SSO, MFA, IGA, and PAM, organizations still face significant challenges in securing identities, particularly with SaaS apps. Savvy Security augments these tools with full app and identity visibility to discover and remediate shadow and shared accounts, misconfigured authentication, and weak, reused, or compromised credentials. Visit savvy.security/ciso-series to learn more.

COMPRESSEDfm
183 | Auth-some Sauce: Spicing up Security

COMPRESSEDfm

Play Episode Listen Later Oct 2, 2024 37:37


In this episode, Amy and Brad sit down with Michael Chan to discuss WorkOS, a tool simplifying authentication and authorization for developers. They explore how WorkOS makes complex processes like OAuth, SSO, and MFA easy to implement, compare it to other auth providers, and dive deep into AuthKit's capabilities.
Sponsors
WorkOS - WorkOS helps you launch enterprise features like SSO and user management with ease. Thanks to the AuthKit for JavaScript, your team can integrate in minutes and focus on what truly matters—building your app.
Show Notes
00:00 - Intro
01:15 - Introduction to WorkOS (WorkOS, AuthKit, WorkOS on YouTube)
02:23 - Comparing WorkOS with Competitors
03:50 - Features of WorkOS AuthKit
06:53 - WorkOS's Evolution and Target Audience
09:30 - Challenges in Implementing Auth Solutions
10:30 - Should Developers Build Their Own Auth? (Selma's Blog Post: One Does Not Simply Delete Cookies)
12:50 - The Cascade of Auth Decisions: Emails and Databases
14:22 - WorkOS Integration with Astro and Remix
19:50 - Key Benefits of WorkOS for Developers
22:00 - Integrating AuthKit with Next and Remix (Sam Selikoff's YouTube Video on WorkOS + AuthKit + Remix: Using AuthKit's Headless APIs in Remix)
24:01 - Challenges in Documentation for Developers (Divio's Guide to Documentation)
33:06 - The Future of Documentation and AI's Role
35:00 - Wrap-up

SurgOnc Today
Superficial Soft Tissue Sarcoma

SurgOnc Today

Play Episode Listen Later Sep 26, 2024 21:31


The surgical management of superficial soft tissue sarcoma requires a multidisciplinary surgical team, given the defect and reconstruction required after an extended resection to achieve negative margins. This team most often includes the primary surgeon (surgical oncology or orthopedic oncology) and a plastic surgeon. A critical decision in the treatment plan is whether to reconstruct immediately at the index operation or delay final reconstruction pending pathologic assessment of margins. In this podcast, we will focus on the multidisciplinary surgical approach for superficial sarcoma. We will highlight the role of delayed reconstruction and key clinical considerations in this approach.

Paul's Security Weekly
Paying Down Tech Debt, Rust in Firmware, EUCLEAK, Deploying SSO - ASW #298

Paul's Security Weekly

Play Episode Listen Later Sep 10, 2024 56:25


Considerations in paying down tech debt, make Rust work on bare metal, ECDSA side-channel in Yubikeys, trade-offs in deploying SSO quickly, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-298

Paul's Security Weekly TV
Paying Down Tech Debt, Rust in Firmware, EUCLEAK, Deploying SSO - ASW #298

Paul's Security Weekly TV

Play Episode Listen Later Sep 10, 2024 56:25


Considerations in paying down tech debt, make Rust work on bare metal, ECDSA side-channel in Yubikeys, trade-offs in deploying SSO quickly, and more! Show Notes: https://securityweekly.com/asw-298

Application Security Weekly (Audio)
Paying Down Tech Debt, Rust in Firmware, EUCLEAK, Deploying SSO - ASW #298

Application Security Weekly (Audio)

Play Episode Listen Later Sep 10, 2024 56:25


Considerations in paying down tech debt, make Rust work on bare metal, ECDSA side-channel in Yubikeys, trade-offs in deploying SSO quickly, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-298

Multifamily Investing Made Simple
3 Things You Need To Know About Return Projections | Ep. 573

Multifamily Investing Made Simple

Play Episode Listen Later Sep 3, 2024 28:02


Return projections can be a fickle thing. There is a lot that goes into them... data, figures, numbers... and they're almost always wrong. However, here are the 3 things that you NEED to know about return projections. Returns almost never... scratch that... returns NEVER meet expectations. A proforma is, at its most basic level, just a guess into the future. Even the most finely tuned budget in the world is... really just a best guess. But knowing these 3 things can help you make a more accurate guess.
1) Interest Only Periods
2) Winter
3) Uh-oh's
So, what do we mean by this? How do winters affect your return projections? What is an uh-oh? Find out on this week's episode of Multifamily Investing Made Simple. LEAVE A REVIEW if you liked this episode!! Keep up with the podcast! Follow us on Apple, Stitcher, Google, and other podcast streaming platforms. To learn more, visit us at https://invictusmultifamily.com/.
**Want to learn more about investing with us?** We'd love to learn more about you and your investment goals. Please fill out this form and let's schedule a call: https://invictusmultifamily.com/contact/
**Let's Connect On Social Media!**
LinkedIn: https://www.linkedin.com/company/11681388/admin/
Facebook: https://www.facebook.com/InvictusMultifamily
YouTube: https://bit.ly/2Lc0ctX

Azure Friday (HD) - Channel 9
Get full-stack observability with the Azure Native New Relic Service

Azure Friday (HD) - Channel 9

Play Episode Listen Later Aug 16, 2024


New Relic's all-in-one observability platform makes it simple to optimize your performance by giving you a single source of truth to analyze your apps, infrastructure, and all of your Azure services. Glenn Thomas from New Relic joins Scott Hanselman to talk about Azure's Native New Relic Service in Azure. Glenn demos how easy it is to get started with New Relic and manage Azure resources directly in the Azure portal. In addition, he provides an overview of how New Relic can help quickly identify and troubleshoot performance issues, including a look at Ask AI in New Relic Observability. Chapters 00:00 - Introduction 00:53 - Getting started from Azure Marketplace 02:27 - Exploring a new service 03:10 - Installing New Relic extension in a VM 04:05 - Accessing your New Relic service with SSO 04:47 - Troubleshooting scenario walkthrough with AI analysis 11:23 - Ask AI in New Relic 14:00 - Wrap-up Recommended resources New Relic's Azure Native Solution on the Azure Marketplace Azure Native New Relic Service Introduction New Relic AI New Relic Errors Inbox New Relic Distributed Tracing Azure Native New Relic Service: Full stack observability in minutes Create a Pay-as-You-Go account (Azure) Create a free account (Azure) Connect Scott Hanselman | Twitter/X: @SHanselman Azure Friday | Twitter/X: @AzureFriday New Relic | Twitter/X: @NewRelic Azure Support | Twitter/X: @AzureSupport

Azure Friday (Audio) - Channel 9
Get full-stack observability with the Azure Native New Relic Service

Azure Friday (Audio) - Channel 9

Play Episode Listen Later Aug 16, 2024


New Relic's all-in-one observability platform makes it simple to optimize your performance by giving you a single source of truth to analyze your apps, infrastructure, and all of your Azure services. Glenn Thomas from New Relic joins Scott Hanselman to talk about Azure's Native New Relic Service in Azure. Glenn demos how easy it is to get started with New Relic and manage Azure resources directly in the Azure portal. In addition, he provides an overview of how New Relic can help quickly identify and troubleshoot performance issues, including a look at Ask AI in New Relic Observability. Chapters 00:00 - Introduction 00:53 - Getting started from Azure Marketplace 02:27 - Exploring a new service 03:10 - Installing New Relic extension in a VM 04:05 - Accessing your New Relic service with SSO 04:47 - Troubleshooting scenario walkthrough with AI analysis 11:23 - Ask AI in New Relic 14:00 - Wrap-up Recommended resources New Relic's Azure Native Solution on the Azure Marketplace Azure Native New Relic Service Introduction New Relic AI New Relic Errors Inbox New Relic Distributed Tracing Azure Native New Relic Service: Full stack observability in minutes Create a Pay-as-You-Go account (Azure) Create a free account (Azure) Connect Scott Hanselman | Twitter/X: @SHanselman Azure Friday | Twitter/X: @AzureFriday New Relic | Twitter/X: @NewRelic Azure Support | Twitter/X: @AzureSupport

LINUX Unplugged
575: Brent's Busted Builds

LINUX Unplugged

Play Episode Listen Later Aug 12, 2024 86:18


Brent's computer pulls an all-nighter at the worst possible moment, and the hits keep coming for open-source Android distributions and our new 2FA tool.
Sponsored By:
Core Contributor Membership: Take $1 a month of your membership for a lifetime!
Tailscale: Tailscale is a programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Support LINUX Unplugged
Links:

Help Me With HIPAA
How Can SMBs Do SSO? - Ep 466

Help Me With HIPAA

Play Episode Listen Later Jul 12, 2024 50:41


How can small and medium businesses (SMBs) tackle the complexities of single sign-on (SSO) and boost their password security? A recent study from CISA highlighted the lag in SSO adoption among SMBs and why basic security measures like SSO and multi-factor authentication (MFA) should be standard. Join us as we navigate through the maze of managing multiple passwords, the pitfalls of manual methods, and the critical need for vendors to prioritize security from the get-go.  More info at HelpMeWithHIPAA.com/466

Radio Wonderland
#361 - Radio Wonderland

Radio Wonderland

Play Episode Listen Later Apr 19, 2024 60:34


Alison plays new music from Billy Xane, FrostTop, Dillon Francis, Oddly Godly, RemK and more! Don't forget to rate & review on all of your favorite podcast apps! Post your comments on twitter @awonderland #RADIOWONDERLAND
Tracklist:
1. RADIO WONDERLAND OPENER
2. Billy Xane - Arkham
3. Dillon Francis - Rainy (Mary Droppinz Remix)
4. Jauz - Teardrops
5. Łaszewo - Shimmering Light
6. Dillon Francis, Arden Jones - I'm My Only Friend (FrostTop Remix)
7. FRED AGAIN, LIL YACHTY, OVERMONO - STAYINIT (frosttop edit)
8. Eli Brown & Lilly Palmer - Gasoline
9. Oddly Godly - NO FRNDZ
10. Ivy Lab - Cake (Nikki Nair Remix)
11. Pirra - Backfoot (juuku Remix)
12. SSOS, Ekali - Vendetta
13. Dabow, Julieta Diorio, Ivan Reich - ADC
14. So Sus & Karn - S T A Y
15. NXSTY & Blush - Switch
16. Malixe - Deadman
17. GTA x Skrillex - Red Lips (RemK 2024 Remix)
18. Louis the Child, NJOMZA, Daniel Allan - Falling
19. Skrillex - Cinema (Walschlager Rework)
20. Whethan - Cruise Control
21. CORTR, Maazel - Apex
22. Kenya Grace - Hey, Hi, How Are You?