Autoencoders are neural networks that compress data into a smaller "code," enabling dimensionality reduction, data cleaning, and lossy compression by reconstructing original inputs from this code. Advanced autoencoder types, such as denoising, sparse, and variational autoencoders, extend these concepts for applications in generative modeling, interpretability, and synthetic data generation.

Links
Notes and resources at ocdevel.com/mlg/36
Try a walking desk - stay healthy & sharp while you learn & code
Build the future of multi-agent software with AGNTCY.
Thanks to T.J. Wilder from intrep.io for recording this episode!

Fundamentals of Autoencoders
Autoencoders are neural networks designed to reconstruct their input data by passing it through a compressed intermediate representation called a “code.” The architecture typically follows an hourglass shape: a wide input and output separated by a narrower bottleneck layer that enforces information compression. The encoder compresses input data into the code, while the decoder reconstructs the original input from this code.

Comparison with Supervised Learning
Unlike traditional supervised learning, where the output differs from the input (e.g., image classification), autoencoders use the same vector for both input and output.

Use Cases: Dimensionality Reduction and Representation
Autoencoders perform dimensionality reduction by learning compressed forms of high-dimensional data, making it easier to visualize and process data with many features. The compressed code can be used for clustering, visualization in 2D or 3D graphs, and as input to subsequent machine learning models, saving computational resources and improving scalability.

Feature Learning and Embeddings
Autoencoders enable feature learning by extracting abstract representations from the input data, similar in concept to learned embeddings in large language models (LLMs). While effective for many data types, autoencoder-based encodings are less suited to variable-length text than LLM embeddings.

Data Search, Clustering, and Compression
By reducing dimensionality, autoencoders facilitate vector search, efficient clustering, and similarity retrieval. The compressed codes enable lossy compression analogous to audio codecs like MP3, with the difference that autoencoders lack domain-specific optimizations for preserving perceptually important data.

Reconstruction Fidelity and Loss Types
Loss functions in autoencoders compare reconstructed outputs to original inputs, often using different loss types depending on the input variable types (e.g., Boolean vs. continuous). Compression via autoencoders is typically lossy: some information from the input is lost during reconstruction, and which information is lost may not be easily controlled.

Outlier Detection and Noise Reduction
Since reconstruction errors tend to move data toward the mean, autoencoders can be used to reduce noise and identify outliers. Large reconstruction errors can signal atypical or outlier samples in the dataset.

Denoising Autoencoders
Denoising autoencoders are trained to reconstruct clean data from noisy inputs, making them valuable for image and audio de-noising as well as signal smoothing. Iterative denoising as a principle forms the basis for diffusion models, where repeated application of a denoising autoencoder can gradually turn random noise into structured output.
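Not from the episode itself, but to make the hourglass architecture above concrete, here is a minimal sketch in PyTorch; the layer sizes, the MSE loss, and all names are illustrative assumptions.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features=784, code_size=32):
        super().__init__()
        # Encoder: wide input squeezed down to the narrow bottleneck "code"
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, code_size),
        )
        # Decoder: reconstruct the original input from the code
        self.decoder = nn.Sequential(
            nn.Linear(code_size, 128), nn.ReLU(),
            nn.Linear(128, n_features),
        )

    def forward(self, x):
        code = self.encoder(x)       # dimensionality reduction happens here
        return self.decoder(code)    # lossy reconstruction

model = Autoencoder()
x = torch.randn(64, 784)                    # a random batch standing in for real data
loss = nn.functional.mse_loss(model(x), x)  # the input doubles as the training target
loss.backward()

The trained encoder can then be reused on its own, e.g. to produce compressed codes for clustering, visualization, or as input to a downstream model.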
Data Imputation
Autoencoders can aid in data imputation by filling in missing values: training on complete records and reconstructing missing entries for incomplete records using learned code representations. This approach leverages the model's propensity to output ‘plausible' values learned from the overall data structure.

Cryptographic Analogy
The separation of encoding and decoding draws parallels to encryption and decryption, though autoencoders are not intended or suitable for secure communication due to their inherent lossiness.

Advanced Architectures: Sparse and Overcomplete Autoencoders
Sparse autoencoders use constraints to encourage code representations with only a few active values, increasing interpretability and explainability. Overcomplete autoencoders have a code size larger than the input, often used in applications that require extracting distinct, interpretable features from complex model states.

Interpretability and Research Example
Research such as Anthropic's “Towards Monosemanticity” applies sparse autoencoders to the internal activations of language models to identify interpretable features correlated with concrete linguistic or semantic concepts. These models can be used to monitor and potentially control model behavior (e.g., detecting specific language usage or enforcing safety constraints) by manipulating feature activations.

Variational Autoencoders (VAEs)
VAEs extend the autoencoder architecture by encoding inputs as distributions (means and standard deviations) instead of point values, enforcing a continuous, normalized code space. Decoding from sampled points within this space enables synthetic data generation, as any point near the center of the code space corresponds to plausible data according to the model.

VAEs for Synthetic Data and Rare Event Amplification
VAEs are powerful in domains with sparse data or rare events (e.g., healthcare), allowing generation of synthetic samples representing underrepresented cases. They can improve model performance by augmenting datasets without requiring changes to existing model pipelines.

Conditional Generative Techniques
Conditional autoencoders extend VAEs by allowing controlled generation based on specified conditions (e.g., generating a house with a pool), through additional decoder inputs and conditional loss terms.

Practical Considerations and Limitations
Training autoencoders and their variants requires computational resources, and their stochastic training can produce differing code representations across runs. Lossy reconstruction, lack of domain-specific optimizations, and limited code interpretability restrict some use cases, particularly where exact data preservation or meaningful decompositions are required.
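As with the previous sketch, this is not from the episode: a minimal PyTorch illustration of the VAE idea described above, where the encoder outputs a mean and a log-variance and the code is sampled via the reparameterization trick. Sizes, names, and the loss bookkeeping are assumptions.

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features=784, code_size=16):
        super().__init__()
        self.hidden = nn.Linear(n_features, 128)
        self.to_mu = nn.Linear(128, code_size)      # mean of the code distribution
        self.to_logvar = nn.Linear(128, code_size)  # log-variance of the code distribution
        self.decoder = nn.Sequential(
            nn.Linear(code_size, 128), nn.ReLU(),
            nn.Linear(128, n_features),
        )

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample a code while keeping gradients flowing
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = self.decoder(z)
        # The KL term pulls codes toward a standard normal, keeping the code space
        # continuous so that sampled points decode to plausible synthetic data
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl

Sampling z from a standard normal and running it through the decoder alone is what enables the synthetic data generation discussed above.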
Optimizing your online profiles for professional networking is essential in today's digital-first professional landscape. Your ClearanceJobs profile is often the first interaction someone has with you—whether it's a recruiter, hiring manager, or other executive leader using our platform. Platforms like ClearanceJobs rank profiles against Boolean searches (or Intellisearch) that recruiters key in to match the most qualified candidates. By strategically using keywords related to your industry, roles, and skills, you improve your chances of appearing in searches, making it easier for hiring managers to find you. Hosted on Acast. See acast.com/privacy for more information.
Andrew and Chris dive into issues with SMTP configuration on new Digital Ocean droplets, their experiences with various email delivery gems like Postmark and Mailtrap, and best practices for handling account creation and user associations in Rails applications. The conversation also touches on deployment automation, developing new features like the inbox on Podia, and the importance of having visual tools and browser extensions for effective debugging. They share some lighter moments discussing fun side projects, including Andrew's insult generator app and their humorous take on turning everyday developer annoyances into creative gem ideas. The episode wraps up with some Stripe announcements and TV show recommendations.

Links
Judoscale - Remote Ruby listener gift
Mailtrap
Hotwire Dev Tools
ActualDbSchema
RailsCasts - Episode 288: Billing with Stripe
ActiveSupport: Allow quick cast Boolean to integer #18552
Our top product updates from Sessions 2025 (Stripe Blog)
Developer Insult Generator by Andrew
Shoresy
Star Wars: Andor
Star Wars: Skeleton Crew
Chris Oliver X/Twitter
Andrew Mason X/Twitter
Jason Charnes X/Twitter
In this hosts-only episode, Amy and Brad get real about the developer experience - from the stress of job interviews to the complexities of choosing the right framework. They discuss why companies are comparing candidates more than ever, share strategies for answering behavioral interview questions, and debate the merits of Remix versus Next.js (spoiler: Brad's all-in on Remix). The conversation shifts to feature flags and progressive rollouts, with insights from Brad's work at Stripe.

Sponsor
WorkOS helps you launch enterprise features like SSO and user management with ease. Thanks to the AuthKit SDK for JavaScript, your team can integrate in minutes and focus on what truly matters—building your app.

Chapter Marks
00:00 - Intro
00:41 - Sponsor: WorkOS
01:47 - Brad's Keyboard and Mouse Shopping Spree
04:30 - Keyboard Layout Discussion
07:23 - Apple Ecosystem: Reminders and Notes
09:23 - Family Sharing and Raycast Integration
09:43 - Notion vs Apple Notes for Project Management
11:31 - File Storage and Backup Strategies
14:00 - Machine Backup Philosophy
16:46 - Job Interview Preparation Tips
19:40 - Answering the "Weakness" Question
21:53 - Addressing Weaknesses: Delegation Examples
24:29 - Conflict Resolution Interview Questions
25:46 - Company Research Before Interviews
27:00 - Tech Stack Considerations: Remix vs Next.js
28:30 - Framework Migration Decisions
29:30 - Astro for Content Sites
31:02 - Backend Languages: Go vs TypeScript
32:30 - React Server Components Future
34:23 - Feature Flags and Boolean as a Service
35:30 - Feature Flag Segmentation and A/B Testing
36:54 - PostHog and Analytics Tools
38:30 - Progressive Rollouts and Error Monitoring
40:20 - Amy's Picks and Plugs
43:35 - Brad's Picks and Plugs
Lots of people are using Chat and other AIs to generate copy and search the web, but this is the tip of the iceberg when it comes to recruitment. In this edition of the #MARShow we are delighted to have Nitin Sharma as our guest. He will be exploring a wide range of uses that recruiters can put these powerful AI tools to:

✅ Coach you through difficult business situations with clients
✅ Rate candidates against a vacancy from their CV and any of your notes and emails
✅ Provide powerful Boolean searches for any vacancy type
✅ Bring job adverts to life by creating engaging, SEO-optimised adverts that work
✅ Help you offer a CV assistance service to help your candidates optimise their CVs

These are only a few of the uses Nitin will explain. He'll also be exploring many more uses with us that may surprise you. The key is in the way you prompt the machine to do what you want, so we'll also be looking at the best way to do that too. Finally, we'll end up by predicting how AI will affect the rec industry in the future – who knows what Deepseek and its descendants will do! So, please join us if you want to make the most of this very powerful tool. Practical, powerful, and easy to use – a bit like us really.
Are you connecting with random people on LinkedIn but not seeing any real business results? In this episode, Scott and Nancy reveal their proven process for building a strategic network on LinkedIn that actually generates clients and referrals.

After helping hundreds of service professionals transform their LinkedIn strategy, they share the two specific types of people you should be connecting with (and why most people waste time on the wrong connections), how to use Boolean search techniques to find your ideal clients, and real examples of how just one strategic connection can lead to multiple high-value opportunities.

Whether you're a coach, consultant, or service professional looking to leverage LinkedIn more effectively, this episode gives you an actionable framework for building a network that delivers real business results.

Key takeaways:
The only two types of people worth connecting with on LinkedIn
How to use Boolean search to find high-value connections quickly
Why being specific in your search criteria expands rather than limits opportunities
Real examples of how one strategic connection led to 16 paid speaking engagements
Common LinkedIn networking mistakes that waste your time and how to avoid them

Stay updated on our upcoming workshops by visiting: thetimetogrow.com/workshops.
In today's episode, Sam and Vivien dive into the ~fascinating~ world of Boolean operators, discussing their importance in recruitment — and how to effectively use them to enhance candidate sourcing. Throughout the conversation, they explore both advanced and basic techniques, emphasizing the need for recruiters to adapt their search strategies to find the best candidates. Different candidates may express their skills and experiences using different terminology — which is why using Boolean effectively can help ensure you find the best possible match.

Chapters:
00:00 - Welcome to Boolean Mastery: Sourcing Top Talent
02:30 - Circumflex Power: Ranking Variables in Boolean Searches
06:06 - Logic in Sourcing: Harnessing AND, OR, and Parenthesis
08:54 - Getting Started with Boolean: Practical Advice for Recruiters

Explore all our episodes and catch the full video experience at loxo.co/podcast
Becoming a Hiring Machine is brought to you by Loxo. To discover more about us, just visit loxo.co
This sponsored episode features mathematician Ohad Asor discussing logical approaches to AI, focusing on the limitations of machine learning and introducing the Tau language for software development and blockchain tech. Asor argues that machine learning cannot guarantee correctness. Tau allows logical specification of software requirements, automatically creating provably correct implementations, with the potential to revolutionize distributed systems. The discussion highlights program synthesis, software updates, and applications in finance and governance.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT + RESEARCH:
https://www.dropbox.com/scl/fi/t849j6v1juk3gc15g4rsy/TAU.pdf?rlkey=hh11h2mhog3ncdbeapbzpzctc&dl=0

Tau: https://tau.net/
Tau Language: https://tau.ai/tau-language/
Research: https://tau.net/Theories-and-Applications-of-Boolean-Algebras-0.29.pdf

TOC:
1. Machine Learning Foundations and Limitations
[00:00:00] 1.1 Fundamental Limitations of Machine Learning and PAC Learning Theory
[00:04:50] 1.2 Transductive Learning and the Three Curses of Machine Learning
[00:08:57] 1.3 Language, Reality, and AI System Design
[00:12:58] 1.4 Program Synthesis and Formal Verification Approaches

2. Logical Programming Architecture
[00:31:55] 2.1 Safe AI Development Requirements
[00:32:05] 2.2 Self-Referential Language Architecture
[00:32:50] 2.3 Boolean Algebra and Logical Foundations
[00:37:52] 2.4 SAT Solvers and Complexity Challenges
[00:44:30] 2.5 Program Synthesis and Specification
[00:47:39] 2.6 Overcoming Tarski's Undefinability with Boolean Algebra
[00:56:05] 2.7 Tau Language Implementation and User Control

3. Blockchain-Based Software Governance
[01:09:10] 3.1 User Control and Software Governance Mechanisms
[01:18:27] 3.2 Tau's Blockchain Architecture and Meta-Programming Capabilities
[01:21:43] 3.3 Development Status and Token Implementation
[01:24:52] 3.4 Consensus Building and Opinion Mapping System
[01:35:29] 3.5 Automation and Financial Applications

CORE REFS (more in pinned comment):
[00:03:45] PAC (Probably Approximately Correct) Learning framework, Leslie Valiant
https://en.wikipedia.org/wiki/Probably_approximately_correct_learning
[00:06:10] Boolean Satisfiability Problem (SAT), Various
https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
[00:13:55] Knowledge as Justified True Belief (JTB), Matthias Steup
https://plato.stanford.edu/entries/epistemology/
[00:17:50] Wittgenstein's concept of the limits of language, Ludwig Wittgenstein
https://plato.stanford.edu/entries/wittgenstein/
[00:21:25] Boolean algebras, Ohad Asor
https://tau.net/tau-language-research/
[00:26:10] The Halting Problem
https://plato.stanford.edu/entries/turing-machine/#HaltProb
[00:30:25] Alfred Tarski (1901-1983), Mario Gómez-Torrente
https://plato.stanford.edu/entries/tarski/
[00:41:50] DPLL
https://www.cs.princeton.edu/~zkincaid/courses/fall18/readings/SATHandbook-CDCL.pdf
[00:49:50] Tarski's undefinability theorem (1936), Alfred Tarski
https://plato.stanford.edu/entries/tarski-truth/
[00:51:45] Boolean Algebra mathematical foundations, J. Donald Monk
https://plato.stanford.edu/entries/boolalg-math/
[01:02:35] Belief Revision Theory and AGM Postulates, Sven Ove Hansson
https://plato.stanford.edu/entries/logic-belief-revision/
[01:05:35] Quantifier elimination in atomless boolean algebra, H. Jerome Keisler
https://people.math.wisc.edu/~hkeisler/random.pdf
[01:08:35] Quantifier elimination in Tau language specification, Ohad Asor
https://tau.ai/Theories-and-Applications-of-Boolean-Algebras-0.29.pdf
[01:11:50] Tau Net blockchain platform
https://tau.net/
[01:19:20] Tau blockchain's innovative approach treating blockchain code itself as a contract
https://tau.net/Whitepaper.pdf
Have You Ever Used a Boolean Search on LinkedIn?
Episode: 00257 Released on March 10, 2025
Description: In this episode of Analyst Talk with Jason Elder, we discuss another Open Secret in the world of open-source intelligence (OSINT) with expert Jan Mondale. From the basics of Google and Bing searches to advanced tips for finding hidden information online, Jan breaks down the art of effective searching. We explore Boolean operators, the power of search shortcuts, and why it sometimes pays to think like a "Google dork." Whether you're an investigative support analyst or just looking to sharpen your search skills, this episode is packed with actionable advice. Tune in to learn how to uncover open secrets like a pro! [Note: Description produced by ChatGPT.]
Get to know more about Jan by listening to his episode on Analyst Talk With Jason Elder: https://www.leapodcasts.com/e/atwje-jan-mondale-inquiring-minds/
CHALLENGE: There are Easter eggs in one of the tables of the Excel chapter that Jason wrote for the IACA textbook. The first person to email us at leapodcasts@gmail.com about what the Easter eggs are will receive a $75 gift card from us. Happy hunting!
*** Episode 7 Analysis - IACA Conference Preview - Rethinking Thought https://youtu.be/YC_b8GWofDk ***
Related Links: Jan's Online Search document: https://mcdn.podbean.com/mf/web/437i47bpbj5n8wzh/OS02_Online_Search8tf85.pdf
Association(s) Mentioned:
Vendor(s) Mentioned:
Contact: https://www.linkedin.com/in/janmondale/
Transcript: https://mcdn.podbean.com/mf/web/5ruixn7e7crti923/OS02_transcript.pdf
Podcast Writer:
Podcast Researcher:
Theme Song: Written and Recorded by The Rough & Tumble. Find more of their music at www.theroughandtumble.com.
Logo: Designed by Kyle McMullen. Please visit www.moderntype.com for any printable business forms and planners.
Podcast Email: leapodcasts@gmail.com
Podcast Webpage: www.leapodcasts.com
Podcast Twitter: @leapodcasts
This show has been flagged as Clean by the host.

The most basic security toggle on your Linux computer is the setenforce command. Using just a single setenforce instruction, you can configure SELinux to allow a violation it would normally prevent. There are two states: Enforcing and Permissive. By default, SELinux is Enforcing (also represented as 1 when using Boolean values). To set SELinux to permissive mode:

$ sudo setenforce Permissive

When something works in Permissive mode, you've successfully identified the symptom, but you haven't fixed the problem yet. Activate Enforcing mode again:

$ sudo setenforce Enforcing

Check the status of SELinux
You can check the state of SELinux at any time using the sestatus command:

$ sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
[...]

Look at labels and contexts
If you have a running Linux system, then you have an example of what SELinux requires for normal operation. You don't have to learn about security contexts or memorize labels. For most anything you try to do on your computer, there are likely already files doing something similar. Use those files as templates. You can look at the security labels of any file you have access to by using the -Z (that's a capital Z) option of ls:

$ touch hello.txt
$ ls -Z hello.txt
unconfined_u:object_r:user_home_t:s0 hello.txt

An empty file created by a user in the user's own home directory has, as you might expect, a very specific security profile. Even with the executable bit set, that file would not be permitted to run as a systemwide service. It just doesn't have the correct security context. If you use an ll alias, try adding the -Z option to its option list so you get used to seeing SELinux labels. The more you see what labels exist on your system, and how they relate to various system roles, the more likely you are to recognize when they're wrong.

Copy contexts
Suppose you were developing a custom SELinux service for your laptop. You've written a shell script, a service file, and you've placed them in the appropriate system locations. You're also careful to set ownership and permissions correctly. But no matter what you do, you get errors when attempting to start the service. You suspect that SELinux might be preventing an unrecognized service from running. That would normally be appreciated, but in this case you want to make an exception. First, confirm that the service runs successfully with SELinux in Permissive mode:

$ sudo setenforce Permissive
$ sestatus | grep Current
Current mode: permissive
$ sudo systemctl start hello.service || echo "fail"
$ sudo setenforce Enforcing

Then look at the files you've created using the -Z option and compare them with other files that you know to be working properly. Note the differences:

$ ls -Z /usr/lib/systemd/system/hello.service
unconfined_u:object_r:systemd_unit_file_t:s0
$ ls -Z /usr/lib/systemd/system/rdisc.service
system_u:object_r:rdisc_unit_file_t:s0

The working service (rdisc.service in this example, chosen at random) features the system_u label as well as a special rdisc_unit_file_t label. Suppose you know from previous experience with ls -Z that a common SELinux label for systemd service files is systemd_unit_file_t, so you ignore that difference. However, unconfined_u and system_u seem to be important.
Use the chcon ("change context") command to change the security context of your service file:

$ sudo chcon system_u:object_r:systemd_unit_file_t:s0 /usr/lib/systemd/system/hello.service
$ ls -Z /usr/lib/systemd/system/hello.service
system_u:object_r:systemd_unit_file_t:s0

Your systemd service is probably triggering some executable file on your system. If you created that yourself, it probably also has the incorrect security context. Comparing it to a known working script:

$ ls -Z /usr/bin/example.sh
unconfined_u:object_r:gconf_home_t:s0
$ ls -Z /usr/bin/brltty-prologue.sh
system_u:object_r:bin_t:s0

Again, there's one obvious difference, which you can correct with chcon:

$ chcon system_u:object_r:bin_t:s0 /usr/bin/example.sh

Provide feedback on this episode.
Holly Bond, President of Facet Recruitment, is living proof that hiring is still a human game—no matter how fancy the tech gets. From her days as a franchise owner to leading in the AI hiring era, she shares the real secrets to landing (or making) the perfect hire: build real relationships, take bold risks, and never let an algorithm do all the thinking. Tune in to hear how Holly blends experience, intuition, and a dash of humor to keep recruiting refreshingly human.

Key Highlights of Our Interview:

From Focus Groups to Foundations
“When we launched Facets, we started with brutal honesty: focus groups full of blunt feedback about recruiters. We listened, and we built a company rooted in empathy and care.”

Breaking the Commission Chain
“Recruiting isn't about commissions; it's about people. I refused to return to a model where clients matter more than candidates. Instead, I built a team paid for their passion, not percentages.”

Catching What AI Misses
“If AI had done my recruitment, I would've slipped through the cracks. Boolean searches don't see potential outside the box. Humans do. That's why we look beyond traditional roles, exploring adjacent sectors for talent.”

Spotting the Unsung Stars
“A recruiter's superpower? Seeing someone's potential before they do. When I reached out to a candidate in her 60s, she couldn't believe I meant her. But age? It's just a number—wisdom wins every time.”

Putting Yourself Out There
“Take risks, be bold, and let people know what you're looking for. Whether it's an informational interview or a thoughtful message, putting yourself out there often leads to unexpected opportunities—sometimes even before the job officially exists.”

The True Value of a Strong Network
“A broad network isn't just about advancing your career; it's about helping others too. Being able to connect someone to the right opportunity or advice is the most rewarding part of building genuine, lasting relationships.”

_________________________

Connect with us: Host: Vince Chan | Guest: Holly Bond

--Chief Change Officer--
Outgrow Yourself. Change Ambitiously.
The Global Go-To-Source of Raw Human Intelligence for Growth Progressives, Visionary Underdogs, Transformation Gurus & Bold Hearts.
Global Top 3% Podcast on Listen Notes.
Top 20 US Business Podcast on Apple.
Top 1 US Careers Podcast on Apple.
5+ Million All-Time Downloads.
Reaching 80+ Countries Daily.
>>>100,000+ subscribers are outgrowing. Act Today.
Nalini Anantharaman
Géométrie spectrale
Collège de France
Academic year 2023-2024
Colloquium - Random geometries and applications: Random image models and applications in digital mammography
Speaker: Agnès Desolneux, CNRS, École normale supérieure Paris-Saclay

Abstract
In this talk I will present several random image models that are either explicit (such as Gaussian models or Boolean models, for instance) or more "implicit" (such as images generated by a neural network). I will discuss how these models are used to understand the detectability of certain lesions in digital mammograms. I will also discuss another interest of such models, which is that they make it possible to perform virtual clinical trials.

----

The term "random geometry" refers to any process that constructs a geometric object, or families of geometric objects, at random. A simple procedure is to assemble basic elements randomly: vertices and edges in the case of random graphs, triangles or squares in certain models of random surfaces, or triangles, "pairs of pants," or hyperbolic tetrahedra in the setting of hyperbolic geometries. The theory of random graphs permeates every branch of modern mathematics, from the most theoretical (group theory, operator algebras, etc.) to the most applied (modeling communication networks, for example). In mathematics, the probabilistic approach consists of evaluating the probability that a given geometric property appears: when one does not know whether a theorem is true, one can try to prove that it holds in 99% of cases.

Another classical method for generating random landscapes is to use random Fourier series, with many applications in signal theory and imaging.

In theoretical physics, random geometries are at the heart of quantum gravity and other quantum field theories. The different mathematical aspects turn out to be curiously intertwined there; for example, the combinatorics of quadrangulations or triangulations appears in the computation of certain partition functions.

This colloquium will offer a non-exhaustive panorama of random geometries, covering aspects ranging from the most abstract to concrete applications in imaging and telecommunications.
Jay McKeown is the Director of Talent Acquisition at Red River, a defense contractor that brings together the ideal combination of talent, partners and products to disrupt the status quo in technology and drive success for business and government in ways previously unattainable. Red River serves organizations well beyond traditional technology integration, bringing more than 20 years of experience and mission-critical expertise in security, networking, analytics, collaboration, mobility and cloud solutions. Learn more at redriver.com.

THE CHALLENGE
In today's post-pandemic candidate market, we are dealing with back-to-office mandates, and this is an industry that requires some travel to a SCIF or customer site. McKeown says, “I think for us and for everybody else, the challenge is finding people that are willing to come in the office to some degree in a hybrid capacity… but there are still a ton of candidates out there looking for fully remote positions.”

HOW CLEARANCEJOBS HELPS
Jay spent a decade in the US Army, served a decade as a police officer, and got into business about seven years ago. “And I have always been reluctant about joining this business world, coming from 2 tactical sides. I was a vice cop in DC working undercover for a decade…and was like what the heck is recruiting?” After starting his career and learning all about the different job boards from the very beginning of working in the government contracting sector, he became a super user of ClearanceJobs.com. “Coming from that side of the industry and having a board that's dedicated towards clearances and tends to be military and government heavy - it almost feels comfortable. I guess when you're in there, it almost feels like a board made for prior service type of people.”

For ClearanceJobs, McKeown loves the user friendliness and ease of functionality. “The UI / UX or the front end of ClearanceJobs has an ease of use and feel. You know, when I'm comparing it to the other boards, it is pretty self-explanatory where you can navigate around and buttons and pretty much find what you're looking for. I've noticed with the other boards some of the additional seem to be hidden or hard to find.”

Red River's favorite functionality of the site is being able to build pipelines and tap into the most engaged talent to land a phone call and eventually extend an offer. Their recruiting team understands that the deadline is today when the government says they need a candidate today. “ClearanceJobs has bailed me out.” After using the Boolean or Intellisearch function to find qualified candidates, Red River sorts candidates by who was last active on the site to get a sense of who's been on the board most recently. By pulling the last ten active candidates that are qualified and calling those individuals, they've improved their success rate for receiving call backs to the 90th percentile. “Just for that reason alone justifies your board. Much less all the other cool functionalities and features.”

In any industry, but particularly the cleared space, it is important to act quickly and find that talent for national security programs. Is ClearanceJobs a top source for these types of missions? McKeown says, “Yeah, by far you're our top source for cleared candidates.”

Hosted on Acast. See acast.com/privacy for more information.
Lourna Dee is on a quest for redemption with the Jedi in their fight against Marchion Ro and the Nihil…or is she? We explore the latest The High Republic audio drama from Cavan Scott. In this fully armed and operational episode of Podcast Stardust, we discuss:

Our overall non-spoiler thoughts about this audio drama,
The performance of the voice cast, with special props to Jessica Almasy as Lourna Dee,
Avar Kriss's fall back into behavior that predated the events of Tessa Gratton's Temptation of the Force,
Jedi Master Keeve Trennis and her struggle with the Jedi's role in the galaxy and becoming the “Light of the Jedi,”
Lourna Dee's confrontation with Marchion Ro at the ball,
Boolean's motivations, and
Whether Lourna Dee found redemption.

For our discussion of Tempest Runner, check out episode 308. Thanks for joining us for another episode! Subscribe to Podcast Stardust for all your Star Wars news, reviews, and discussion wherever you get your podcasts. And please leave us a five star review on Apple Podcasts. Find Jay and her cosplay adventures on J.Snips Cosplay on Instagram. Join us for real time discussion on the RetroZap Discord Server here: RetroZap Discord. Follow us on social media: Twitter | Facebook | Instagram | Pinterest | YouTube. T-shirts, hoodies, stickers, masks, and posters are available on TeePublic. Find all episodes on RetroZap.com.
Holly Bond, once a franchise owner and now the President of Facet Recruitment, spills the secrets of blending old-school charm with modern tech in recruiting. From fax machine résumés to navigating an AI-driven hiring world, Holly's seen it all. She dishes on the power of networking (spoiler: it's not about business cards and stale canapés), the importance of bold moves (sometimes you just email anyway), and why being real beats AI algorithms every time. With wisdom, wit, and a knack for finding hidden talent, Holly reminds us that while AI might help, people are still the heart of every great hire.

Key Highlights of Our Interview:

From Focus Groups to Foundations
“When we launched Facets, we started with brutal honesty: focus groups full of blunt feedback about recruiters. We listened, and we built a company rooted in empathy and care.”

Breaking the Commission Chain
“Recruiting isn't about commissions; it's about people. I refused to return to a model where clients matter more than candidates. Instead, I built a team paid for their passion, not percentages.”

Catching What AI Misses
“If AI had done my recruitment, I would've slipped through the cracks. Boolean searches don't see potential outside the box. Humans do. That's why we look beyond traditional roles, exploring adjacent sectors for talent.”

Spotting the Unsung Stars
“A recruiter's superpower? Seeing someone's potential before they do. When I reached out to a candidate in her 60s, she couldn't believe I meant her. But age? It's just a number—wisdom wins every time.”

Putting Yourself Out There
“Take risks, be bold, and let people know what you're looking for. Whether it's an informational interview or a thoughtful message, putting yourself out there often leads to unexpected opportunities—sometimes even before the job officially exists.”

The True Value of a Strong Network
“A broad network isn't just about advancing your career; it's about helping others too. Being able to connect someone to the right opportunity or advice is the most rewarding part of building genuine, lasting relationships.”

_________________________

Connect with us: Host: Vince Chan | Guest: Holly Bond

Chief Change Officer: Make Change Ambitiously.
Experiential Human Intelligence for Growth Progressives
Global Top 3% Podcast on Listen Notes
World's #1 Career Podcast on Apple
Top 1: US, CA, MX, IE, HU, AT, CH, FI, JP
2.5+ Million Downloads
80+ Countries
Tape your glasses and get out your pocket protector — we're taking it old school to find a job on LinkedIn.
Hector and Alicia explore the extensive capabilities of QuickBooks Online Advanced's modern reports and custom report builder. They walk through key features including dynamic column customization, Boolean filters, multi-level grouping, pivot tables, and interactive charts - demonstrating how these tools can transform raw data into actionable insights. The hosts share practical tips for leveraging related table data, creating KPI widgets, and maximizing the flexibility of the new reporting system, while also discussing how these features compare to classic QuickBooks reports.

Sponsors
Zoho Expense - https://uqb.promo/zohoexpense
Ignition - https://uqb.promo/ignition
Coefficient - https://uqb.promo/coefficient

Send your Questions/Comments (we could read/answer them on air): ask@uqapodcast.com

Links/Apps Mentioned in this episode:
Enroll for Alicia's Nov-Dec QBO Complete Hands-On Training (HOT) at https://royalwise.lpages.co/qbo-complete/
Intuit's 2024 Investor Day event: https://www.youtube.com/embed/8cJ9vqr6gYg?si=C3UD7Hsxu2QMtx4r
Hector's App - RightTool www.righttool.app
Alicia's 1099 class: http://royl.ws/QBO1099
Alicia's RoyalWise OWLS QBO Training - http://royl.ws/uqapodcast
Intuit Connect Conference www.quickbooksconnect.com
Check out Alicia's step-by-step QBO Textbooks at http://www.questivaconsultants.com
The Comprehensive Guide to Converting from QuickBooks® Desktop (QBDT®) to QuickBooks® Online (QBO®) https://www.amazon.com/dp/B0D8L29Z5L
QuickBooks Online: From Setup to Tax Time https://www.amazon.com/dp/B0CXZB1R95
Sign up to Earmark to earn free CPE for listening to this podcast: https://www.earmark.app/onboarding

(00:00) - Welcome to the Unofficial QuickBooks Accountants Podcast
(02:19) - Exploring Custom Report Builder Templates
(05:22) - Visualizing Reports with Chart View
(10:22) - Advanced Filtering and Boolean Logic
(18:21) - Grouping and Subgrouping Data
(22:20) - Leveraging More Columns for Detailed Insights
(28:21) - Pivot Tables and Summary Reports
(30:01) - Benefits of Pivot Tables
(34:33) - General Options and Formatting
(41:37) - Exporting and Sharing Reports
(45:19) - Upcoming Classes and Events
(48:05) - Conclusion and Final Thoughts
It's no secret that your database is your biggest asset — but are you leveraging it to the best of your ability? In this Tactical Tuesday episode, Sam walks us through how to use tags on candidate profiles to make your searches even smarter. It's an underused feature in Loxo (and for recruiters in general) and one that takes the concept of an MPC approach and puts it on steroids.

Getting the most out of your database — and having a long-term approach to candidate relationships — is the way to win in 2024 and beyond. Do you feel confident that you're doing so?

Chapters:
00:00 - Podcast intro
01:40 - How to leverage the power of tags in your candidate database
05:00 - Boolean logic meets tags: A smarter way to find top talent
05:43 - Podcast wrap-up

Explore all our episodes and catch the full video experience at loxo.co/podcast
Becoming a Hiring Machine is brought to you by Loxo. To discover more about us, just visit loxo.co
Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa

Questions include:
How do you envision the future of physics-informed neuroscience? In particular, do you believe that despite the brain being a warm environment, quantum effects such as entanglement and superposition play a role in its function? Finally, do you think the concept of "quantum cognition" will remain more philosophical than scientific?
Are microtubules like electrochemical transistors?
Could the concrete Boolean arithmetic functional devices in our brains be affected by temperature, or is temperature one layer above that?
Which do you think would happen first: repairing brains naturally through natural science research or having the first "computer brain" transplant for those who suffer brain traumas?
I've heard AI should be able to develop treatments for cancer, but it will take decades of machine learning. What do you think could accelerate this learning process?
Maybe not a cure, but a control? Micro-monitoring and cancer-killing nanobots?
Will we ever perfect the human immune system?
Do you think that the relevance weight of the "microbiome" in medical science will increase in the future?
Maybe not an artificial brain, but what about artificial hearts? Would those be easier to have a technological implant vs. a natural one? Or even livers or kidneys?
In the future, hopefully we can have a machine/detector that can detect every atom or molecule in our bodies, and we can simulate solutions on a fast computer.
Find More Episodes on PCA Overdrive: https://www.pcaoverdrive.org/ask-a-painter
PCA Overdrive is free for members. Not a member? Download the app on the Apple Store or Google Play and enjoy a 7 day free trial! Become a member: https://www.pcapainted.org/membership-resources/
Guests: Chris Kiefer and Nathan TenNapel, www.helloboolean.com
Book a discovery call with Nathan: https://meetings.hubspot.com/nathan-tennapel/discovery
Attend a Painting Contractors Association - PCA event for FREE! Scholarship applications: https://www.pcapainted.org/event-scholarships/
Want a LIVE Masters Class in your area? Contact Marsha at the PCA mbass@pcapainted.org and we can arrange it for you! https://app.be.live/ytsmZZsSU9Zsk5h6MhcbmWixPnMKBpkE/guest
PCA member companies are 15x larger, more profitable and more stable than the average painting company. How did they do this? PCA's Business Training: https://www.pcapainted.org/business-training/
Link to the PCA's event page: https://www.pcapainted.org/events/

Upcoming events:
-PCA en Espanol 12-13 September Anaheim, Ca
-Tom Reber's Mile High Profit Summit - 19 Sep
-SoCal // SurfPrep Master's Class - 27 Sep - Lake Elsinore Ca
-Women In Paint - 8-10 Oct Hollywood Beach Fl
-PCA Residential Conference 24-25 Oct Minneapolis Mn
-PCA Commercial Contractor's Retreat 12-15 Nov Scottsdale, Az
-Gathering of MN Painters / SW / Graco Master's Class - 6 Dec
-PCA Expo Feb 3-7 2025 Colorado Springs
By mastering LinkedIn's advanced search, you can take a proactive approach to job hunting and networking. The post #645 Powerful LinkedIn Boolean Search Methods appeared first on Cheeky Scientist.
Reviews have a huge impact on your business. Five-star writeups are a big boost! But a few negative reviews can be ruinous. Chris Kiefer of Boolean explains the art of generating great reviews from your customers. (Plus: Andrew's just had dental work, and his face is, well…half frozen?) Sponsored by: BEHR & HYDE
In this episode of Cheeky Scientist, we discuss how to increase your Social Selling Index online and how to attract more employers to your professional online profiles. We also discuss how to do advanced Boolean searches for jobs that are a good fit for you by using combinations of search terms such as skills, groups, companies and more. The post #606 Increasing Your Social Selling Index & Doing Advanced Boolean Searches To Get Hired appeared first on Cheeky Scientist.
Listen to Chris Kiefer talk about his journey and give tips on using automation in your industry. This episode is full of ideas on how automation and AI can change your business!

Key Takeaways:
Save Time & Money with Automation: Using technology to handle tasks like updating customer records, making quotes, and managing schedules can save lots of time and money, letting your team work on more important things.
Find and Fix Workflow Problems: Look at how your work gets done and find the slow spots. Then, you can use automation to make things run smoother and faster.
Improve Jobs, Don't Replace Them: Automation should make jobs better by letting people focus on more valuable tasks, not taking their jobs away.

Don't miss out on this opportunity to gain valuable insights and take your business to the next level with AI and automation!

Find Chris:
Via Email: chris@helloboolean.com
LinkedIn: www.linkedin.com/in/chris-kiefer
IG: www.instagram.com/pursuitofpurpose.pod
FB: www.facebook.com/chriskiefer4
TikTok: https://www.tiktok.com/@chriskiefer1

Join Our Group: https://www.facebook.com/groups/hvacrevealed
Presented By On Purpose Media: https://www.onpurposemedia.ca/
For HVAC Internet Marketing reach out to us at info@onpurposemedia.ca or 888-428-0662

Sponsored By:
Chiirp: https://chiirp.com/hssr
Elite Call: https://elitecall.net
Service World Expo: https://www.serviceworldexpo.com/
On Purpose Media: https://onpurposemedia.ca
John Venn created the Venn diagram, and though he's an important figure in the fields of mathematics and logic, he eventually left that work behind to write historical accounts of the places and people that were important in his life.

Research:
Baron, Margaret E. “A Note on the Historical Development of Logic Diagrams: Leibniz, Euler and Venn.” The Mathematical Gazette, vol. 53, no. 384, 1969, pp. 113–25. JSTOR, https://doi.org/10.2307/3614533
Bassett, Troy J. “Author: Susanna Carnegie Venn.” At the Circulating Library: A Database of Victorian Fiction, 1837—1901, 3 June 2024, http://www.victorianresearch.org/atcl/show_author.php?aid=661
Biography.com Editors. “John Venn Biography.” A&E. April 2, 2014. https://www.biography.com/scientists/john-venn
Boyer, Carl B. “Leonhard Euler.” Encyclopedia Britannica, 21 Jun. 2024, https://www.britannica.com/biography/Leonhard-Euler
Britannica, The Editors of Encyclopaedia. “Boolean algebra.” Encyclopedia Britannica, 14 May 2024, https://www.britannica.com/topic/Boolean-algebra
Britannica, The Editors of Encyclopaedia. “Kingston upon Hull.” Encyclopedia Britannica, 23 Jun. 2024, https://www.britannica.com/place/Kingston-upon-Hull
“A Cricket Sensation.” Saffron Walden Weekly News. June 11, 1909. https://www.newspapers.com/image/800046974/?match=1&terms=John%20Venn%20cricket%20machine
Collier, Irwin. “Cambridge. Guide to the Moral Sciences Tripos. James Ward, editor, 1891.” Feb 26, 2018. https://www.irwincollier.com/cambridge-on-the-moral-sciences-tripos-james-ward-editor-1891/
Duignan, Brian. “John Venn.” Encyclopedia Britannica, 12 Jun. 2024, https://www.britannica.com/biography/John-Venn
Duignan, Brian. “Venn diagram.” Encyclopedia Britannica, 25 Apr. 2024, https://www.britannica.com/topic/Venn-diagram
Gordon, Neil. “Venn: the person behind the famous diagrams – and why his work still matters today.” EconoTimes. April 14, 2023. https://www.econotimes.com/Venn-the-person-behind-the-famous-diagrams--and-why-his-work-still-matters-today-1654353
Hall, Madeleine. “The Improbable Genius of John Venn.” The Spectator. April 4, 2023. https://www.spectator.co.uk/article/the-improbable-genius-of-john-venn/
“History.” Highgate School. https://www.highgateschool.org.uk/about/our-history/
“The Jargon.” Queens' College Cambridge. https://www.queens.cam.ac.uk/visiting-the-college/history/university-facts/the-jargon
“John Venn Of Caius.” The British Medical Journal, vol. 1, no. 3250, 1923, pp. 641–42. JSTOR, http://www.jstor.org/stable/20423118
Lenze, Wolfgang. “Leibniz: Logic.” Internet Encyclopedia of Philosophy. https://iep.utm.edu/leib-log/
O'Connor, J.J. and E.F. Robertson. “John Venn.” MacTutor. School of Mathematics and Statistics, University of St. Andrews, Scotland. October 2003.
“Professor Hugh Hunt leads engineering team to recreate historic cricket bowling machine.” Trinity College Cambridge. June 6, 2024. https://www.trin.cam.ac.uk/news/professor-hugh-hunt-leads-engineering-team-to-recreate-historic-bowling-machine-that-bowled-out-australian-cricketers-more-than-100-years-ago/
Venn, John. “The logic of chance. An essay on the foundations and province of the theory of probability, with especial reference to its logical bearings and its application to moral and social science.” London. Macmillan, 1876. Accessed online: https://archive.org/details/50424309/page/n19/mode/2up
Venn, John. “The principles of empirical or inductive logic.” 1889. https://archive.org/details/principlesempir00venngoog
B.H. “John Venn.” Obituary notices of fellows deceased. Royal Society Publishing. April 1, 1926. Accessed online: https://royalsocietypublishing.org/doi/epdf/10.1098/rspa.1926.0036
Young, Angus. “John Venn Inspired £325k makeover of Hull's Drypool Bridge is now complete.” Hull Live. June 5, 2017. https://www.hulldailymail.co.uk/news/drypool-bridge-turned-work-art-91547

See omnystudio.com/listener for privacy information.
The term ‘nil' refers to the absence of value, but we often imbue it with much more meaning than just that. Today, hosts Joël and Stephanie discuss the various ways we tend to project extra semantics onto nil and the implications of this before unpacking potential alternatives and trade-offs. Joël and Stephanie highlight some of the key ways programmers project additional meaning onto nil (and why), like when it's used to create a guest session, and how this can lead to bugs, confusion, and poor user experiences. They discuss solutions to this problem, like introducing objects for improved readability, before taking a closer look at the implications of excessive guard clauses in code. Our hosts also explore the three-state Boolean problem, illustrating the pitfalls of using nullable Booleans, and why you should use default values in your database. Joël then shares insights from the Elm community and how it encourages rigorous checks and structured data modeling to manage nil values effectively. They advocate for using nil only to represent truly optional data, cautioning against overloading nil with additional meanings that can compromise code clarity and reliability. Joël also shares a fun example of modeling a card deck, explaining why you might be tempted to add extra semantics onto nil, and why the joker always inevitably ends up causing chaos!

Key Points From This Episode
The project Joël is working on and why he's concerned about bugs and readability.
Potential solutions for a confusing constant definition in a nested module.
A client work update from Stephanie: cleaning up code and removing dead dependencies.
How she used Figjam to discover dependencies and navigate her work.
Today's topic: how programmers project extra semantics onto nil.
What makes nil really tricky to use, like forcing you to go down a default path.
How nil sweeps the cases you don't want to think too hard about under the rug.
Extra semantics that accompany nil (that you might not know about), like a guest session.
Examples of how these semantics mean different things in different contexts.
How these can lead to bugs, hard-to-find knowledge, confusion, and poor user experiences.
Introducing objects to replace extra nil semantics, improve readability, and other solutions.
Some of the reasons why programmers tend to project extra semantics onto nil.
How to notice that nil has additional meanings, and when to model it differently.
The implications of excessive guard clauses in code.
An overview of the three-state Boolean problem with nullable Booleans (see the sketch after this list).
Connecting with the Elm community: how it can help you conduct more rigorous checks.
Some of the good reasons to have nil as a value in your database.
The benefits of using nil only to represent truly optional data.
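A quick sketch of the three-state Boolean problem mentioned in the key points above, written in Python for illustration (the show's examples are Ruby-centric) with hypothetical names:

from typing import Optional

def newsletter_opt_in(subscribed: Optional[bool]) -> bool:
    # A nullable Boolean has three states: True, False, and None.
    # None is ambiguous: never asked? opted out? a data migration gap?
    # Every caller must re-decide what None means, usually in a guard clause.
    if subscribed is None:
        return False  # an implicit business rule hiding in a default branch
    return subscribed

Declaring the column non-nullable with a database default (e.g. subscribed BOOLEAN NOT NULL DEFAULT FALSE) removes the third state at the source, which is the episode's point about using default values in your database.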
Links Mentioned in Today's Episode
Figjam (https://www.figma.com/figjam/)
Miro (https://miro.com/)
'Working Iteratively' blog post (https://thoughtbot.com/blog/working-iteratively)
Mermaid.js (https://mermaid.js.org/)
Draw.io (https://draw.io/)
Check your return values (web) (https://thoughtbot.com/blog/check-return-values-web)
Check your return values (API) (https://thoughtbot.com/blog/check-return-values-api)
Primitive obsession (https://wiki.c2.com/?PrimitiveObsession)
'Avoid the Three-state Boolean Problem' (https://thoughtbot.com/blog/avoid-the-threestate-boolean-problem)
Elm Community (https://elm-lang.org/community)
'The Shape of Data': Modeling a deck of cards (https://thoughtbot.com/blog/modeling-with-union-types#the-shape-of-data)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Public health job titles and postings are not as standardized as they are in many other fields. Another issue is that many public health organizations use different terms to describe similar roles. So what happens when your search is no longer yielding results?

Fortunately, there are simple strategies for finding job opportunities in the public health field beyond the obvious ones. When you're stuck, try generating specific and general job titles to search for using a formula that combines a prefix (topic) with a suffix (functional duty). Once you have your job search terms, a Boolean search strategy can help you combine different job titles and keywords to yield more specific results, as illustrated below. Find out more in this episode of the Public Health Insight Podcast.

References
◼️ LinkedIn Post: One way to find jobs when you're stuck...

Hosts & Producers
◼️ Gordon Thane, BMSc, MPH, PMP®
◼️ Leshawn Benedict, MPH, MSc, PMP®

Production Notes
◼️ Music from Johnny Harris x Tom Fox: The Music Room

Subscribe to the Newsletter
Subscribe to The Insight newsletter so you don't miss out on the latest podcast episodes, live events, job skills, learning opportunities, and other engaging professional development content here.

Send us a Text Message to let us know what you think.
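To illustrate the prefix/suffix formula and Boolean strategy described above (my example, not from the episode), you might pair topic prefixes with functional-duty suffixes and join the variants with Boolean operators:

("public health" OR epidemiology) AND (analyst OR coordinator OR "program officer")

Each OR group covers the different terms organizations use for similar roles, while AND requires a match from every group, so the search widens across titles without losing specificity.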
This Tactical Tuesday conversation highlights the limitations of using job titles as a search term when looking for candidates — and introduces better ways to source talent.

Job titles are not always indicative of the specific skills, experience, and responsibilities of a candidate — and often, they're more of a representation of where the candidate is than where they're going. There are other issues, too, like the fact that companies often use creative job titles that may not accurately represent the role, and job titles can vary widely between companies — making it difficult to compare candidates.

Vivien's recommendation is that instead of relying on job titles, recruiters should use Boolean search and filters to find candidates based on specific skills and experiences.

Chapters:
00:00 - Podcast Intro
02:02 - Rethinking Job Titles in Talent Sourcing
08:33 - Optimizing Candidate Search with Boolean Techniques
13:37 - Final Tips and Podcast Wrap-Up

Explore all our episodes and catch the full video experience at loxo.co/podcast
Becoming a Hiring Machine is brought to you by Loxo. To discover more about us, just visit loxo.co
In The Room Series 4 Eps 05. Angels, Demons, NDE, then 1s and 0s. Garvin and George started talking about angels and demons, and Garvin asked George if he believed in them. As a deacon, he does, but things have to be tested to see what is true or false. A Boolean statement looked as though it was going to emerge; however, things went off in other directions, toward what drives us and our purpose.
Are you tired of feeling like you're always one step behind in finding the best candidates in today's competitive job market? In this enlightening episode of The Elite Recruiter Podcast, host Benjamin Mena welcomes recruiting expert Michael Rasmussen to share his wealth of knowledge on modern sourcing hacks that can elevate your recruitment game. Amid the rapid changes in recruiting technology and techniques, knowing how to effectively source talent is critical for any recruiter aiming for success. Whether you are a seasoned pro or just starting out, this episode delivers actionable insights to enhance your sourcing strategies and achieve better results.

1. **Harnessing AI for Greater Efficiency:** Michael Rasmussen reveals how to leverage AI tools like Chat GPT to fine-tune Boolean strings, craft personalized candidate messages, and identify synonymous terms that can expand your search results—saving you time while increasing your productivity.

2. **Exploring Beyond LinkedIn:** Learn how to tap into a vast array of platforms such as Facebook and GitHub, and how to employ sophisticated Google search techniques using the site operator to uncover diverse candidate profiles, including those in trades who may not be on LinkedIn.

3. **Game-Changing Sourcing Tools:** Gain expert recommendations on a variety of powerful tools like Lucia, ContactOut, Rocket Reach, Kendo, and several others to streamline your contact methods and ensure you have every possible edge in sourcing the right talent.

Don't miss out on these transformative sourcing strategies—listen to the episode now to unlock the full potential of your recruiting efforts and start placing top-tier talent effortlessly!

Join The Elite Recruiter Community: https://elite-recruiter.circle.so/join?invitation_token=5089bd69d8ac69486fc7afca52662675ec3ffc8a-d63afaf0-02f2-4925-9f80-b83f00d142de
Sign up for future emails from The Elite Recruiter Podcast: https://eliterecruiterpodcast.beehiiv.com/subscribe
YouTube: https://youtu.be/AXD0k-Iak_8
Michael Rasmussen LinkedIn: https://www.linkedin.com/in/michaelrasmussen408/
With your Host Benjamin Mena with Select Source Solutions: http://www.selectsourcesolutions.com/
Benjamin Mena LinkedIn: https://www.linkedin.com/in/benjaminmena/
Benjamin Mena Instagram: https://www.instagram.com/benlmena/
Benjamin Mena TikTok: https://www.tiktok.com/@benjaminlmena
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The consistent guessing problem is easier than the halting problem, published by Jessica Taylor on May 20, 2024 on The AI Alignment Forum.

The halting problem is the problem of taking as input a Turing machine M, returning true if it halts, false if it doesn't halt. This is known to be uncomputable. The consistent guessing problem (named by Scott Aaronson) is the problem of taking as input a Turing machine M (which either returns a Boolean or never halts), and returning true or false; if M ever returns true, the oracle's answer must be true, and likewise for false. This is also known to be uncomputable.

Scott Aaronson inquires as to whether the consistent guessing problem is strictly easier than the halting problem. This would mean there is no Turing machine that, when given access to a consistent guessing oracle, solves the halting problem, no matter which consistent guessing oracle (of which there are many) it has access to. As prior work, Andrew Drucker has written a paper describing a proof of this, although I find the proof hard to understand and have not checked it independently. In this post, I will prove this fact in a way that I at least find easier to understand. (Note that the other direction, that a Turing machine with access to a halting oracle can be a consistent guessing oracle, is trivial.)

First I will show that a Turing machine with access to a halting oracle cannot in general determine whether another machine with access to a halting oracle will halt. Suppose M(O, N) is a Turing machine that returns true if N(O) halts, false otherwise, when O is a halting oracle. Let T(O) be a machine that runs M(O, T), halting if it returns false, running forever if it returns true. Now M(O, T) must be its own negation, a contradiction.

In particular, this implies that the problem of deciding whether a Turing machine with access to a halting oracle halts cannot be a Σ⁰₁ statement in the arithmetic hierarchy, since these statements can be decided by a machine with access to a halting oracle.

Now consider the problem of deciding whether a Turing machine with access to a consistent guessing oracle halts for all possible consistent guessing oracles. If this is a Σ⁰₁ statement, then consistent guessing oracles must be strictly weaker than halting oracles. For if there were a reliable way to derive a halting oracle from a consistent guessing oracle, then any machine with access to a halting oracle could be translated to one making use of a consistent guessing oracle, which halts for all consistent guessing oracles if and only if the original halts when given access to a halting oracle. That would make the problem of deciding whether a Turing machine with access to a halting oracle halts a Σ⁰₁ statement, which we have shown to be impossible.

What remains to be shown is that the problem of deciding whether a Turing machine with access to a consistent guessing oracle halts for all consistent guessing oracles is a Σ⁰₁ statement. To do this, I will construct a recursively enumerable propositional theory T that depends on the Turing machine. Let M be a Turing machine that takes an oracle as input (where an oracle maps encodings of Turing machines to Booleans). Add to T the following propositional variables:
O_N for each Turing machine encoding N, representing the oracle's answer about this machine.
H, representing that M(O) halts.
In particular, this implies that the problem of deciding whether a Turing machine with access to a halting oracle halts cannot be a Σ⁰₁ statement in the arithmetic hierarchy, since such statements can be decided by a machine with access to a halting oracle. Now consider the problem of deciding whether a Turing machine with access to a consistent guessing oracle halts for all possible consistent guessing oracles. If this is a Σ⁰₁ statement, then consistent guessing oracles must be strictly weaker than halting oracles. For suppose there were a reliable way to derive a halting oracle from a consistent guessing oracle. Then any machine with access to a halting oracle could be translated into one making use of a consistent guessing oracle, one that halts for all consistent guessing oracles if and only if the original halts when given access to a halting oracle. That would make the problem of deciding whether a Turing machine with access to a halting oracle halts a Σ⁰₁ statement, which we have shown to be impossible. What remains to be shown is that the problem of deciding whether a Turing machine with access to a consistent guessing oracle halts for all consistent guessing oracles is a Σ⁰₁ statement. To do this, I will construct a recursively enumerable propositional theory T that depends on the Turing machine. Let M be a Turing machine that takes an oracle as input (where an oracle maps encodings of Turing machines to Booleans). Add to T the following propositional variables:
- O_N for each Turing machine encoding N, representing the oracle's answer about this machine.
- H, representing that M(O) halts.
- R_s for each possible state s of the Turing machine, where the state includes the head state and the state of the tape, representing that s is reached by the machine's execution.

Clearly, these variables are recursively enumerable and can be computably mapped to the natural numbers. We introduce the following axiom schemas:
(a) For any machine N that halts and returns true, O_N.
(b) For any machine N that halts and returns false, ¬O_N.
(c) For any ...
Are you looking to learn the ins and outs of LinkedIn Boolean search? Then this podcast is for you! I'm breaking down the basics of Boolean search: the best keywords to use and how to combine Boolean operators to refine your search and get the best results. By the end of this podcast, you'll have a better understanding of how to use Boolean searches on LinkedIn and save time on your search. Learn everything you need to know about Boolean search on LinkedIn today!
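As a flavor of what the episode covers, a typical LinkedIn Boolean query (an illustrative example of the standard operators, not one quoted from the episode) combines quoted phrases, OR-groups in parentheses, and NOT exclusions:

```
("software engineer" OR "backend developer") AND (Python OR Go) NOT recruiter
```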
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can Kauffman's NK Boolean networks make humans swarm?, published by Yori Ong on May 8, 2024 on The AI Alignment Forum. With this article, I intend to initiate a discussion with the community on a remarkable (thought) experiment and its implications. The experiment is to conceptualize Stuart Kauffman's NK Boolean networks as a digital social communication network, which introduces a thus far unrealized method for strategic information transmission. From this premise, I deduce that such a technology would enable people to 'swarm', i.e., engage in self-organized collective behavior without central control. Its realization could result in a powerful tool for bringing about large-scale behavior change. The concept provides a tangible connection between network topology, common knowledge, and cooperation, which can improve our understanding of the logic behind prosocial behavior and morality. It also presents us with the question of how the development of such a technology should be pursued and how the underlying ideas can be applied to the alignment of AI with human values. The intention behind sharing these ideas is to test whether they are correct, create common knowledge of unexplored possibilities, and to seek concrete opportunities to move forward. This article is a more freely written form of a paper I recently submitted to the arXiv, which can be found here.

Introduction
Random NK Boolean networks were first introduced by Stuart Kauffman in 1969 to model gene regulatory systems.[1] The model consists of N automata which are either switched ON (1) or OFF (0). The next state of each automaton is determined by a random Boolean function that takes the current state of K other automata as input, resulting in a dynamic network underpinned by a semi-regular and directed graph. The model can describe gene regulation, in which the activation of some genes leads to the activation or suppression of others, but also physical systems, in which the configuration of spins acting on another spin determines whether it flips up or down. NK Boolean networks evolve deterministically: each following state can be computed from its preceding state. Since the total number of possible states of the network is finite (although potentially very large), the network must eventually return to a previously visited state, resulting in cyclic behavior. The possible instances of Boolean networks can be subdivided into an ordered and a chaotic regime, which is mainly determined by the number of inputs for each node, K. In the ordered regime, the behavior of the network eventually gets trapped in cycles (attractors) that are relatively short and few in number. When a network in the ordered phase is perturbed by an externally induced 'bit-flip', the network eventually returns to the same or slightly altered ordered behavior. If the connectivity K is increased beyond a certain critical threshold, the network's behavior transitions from ordered to chaotic. States of the network become part of many long cycles, and minute external perturbations can easily change the course of the network state's evolution to a different track. This is popularly called the 'butterfly effect'.
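To see these dynamics concretely, here is a minimal Python sketch of an NK Boolean network (the function names and parameter choices are mine, not from Kauffman's papers or this article):

```python
import random

def make_network(N, K, seed=0):
    """Random NK Boolean network: each of N nodes reads K randomly chosen
    nodes through its own random Boolean function (a truth table of size 2^K)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(N), K) for _ in range(N)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous, deterministic update of every node from its K inputs."""
    new = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for i in ins:
            idx = (idx << 1) | state[i]  # pack the K input bits into an index
        new.append(table[idx])
    return tuple(new)

def attractor_length(N=12, K=2, seed=0):
    """Iterate until a state repeats; the stretch between the first visit and
    the repeat is the attractor (cycle) the trajectory falls into."""
    inputs, tables = make_network(N, K, seed)
    rng = random.Random(seed + 1)
    state = tuple(rng.randint(0, 1) for _ in range(N))
    seen, t = {}, 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]

print(attractor_length(K=2))  # near the ordered regime: attractors tend to be short
print(attractor_length(K=5))  # past the critical threshold: cycles tend to lengthen
```

Flipping a single bit of the state and re-running from the perturbed state is the externally induced 'bit-flip' experiment described above.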
It has been extensively demonstrated that human behavior is not just determined by our 'own' decisions. Both offline and online social networks determine the input we receive, and causally influence the choices we make and the opinions we adopt autonomously.[2] However, social networks are not regular, social ties are often reciprocal instead of directed, and people are not automata. NK Boolean networks are therefore not very suitable for modeling an existing reality. What is nevertheless possible in the digital age is to conceptualize and realize online communication networks based on their logic: just give N people a 'lightbulb app...
In this episode of the Manufacturing Culture Podcast, host Jim Mayer interviews Ann Wyatt, the founder of Ann Wyatt Recruiting and the host of the Workforce 4.0 show. Ann shares her journey from working in government to starting her own recruiting business. She specializes in recruiting for the manufacturing industry, particularly metals, pulp, and paper. Ann discusses the skills gap in manufacturing and how she helps companies bridge that gap by advocating for workforce development and implementing new technologies. In this conversation, Ann also covers training challenges in the manufacturing industry and the need to advocate for short-term training and transferable skill sets. They also explore the automation culture paradox and the importance of data in measuring workforce performance. Ann shares her insights on the future of talent acquisition and company culture in manufacturing, emphasizing the value of people over capital investments. They conclude by discussing the need to focus on employee experience and the importance of not singling out employees.

Takeaways:
- Ann Wyatt is a pioneer in talent acquisition for the manufacturing industry, focusing on metals, pulp, and paper.
- She is passionate about workforce development and bridging the skills gap in manufacturing.
- Ann helps companies by advocating for workforce development programs and implementing new technologies.
- She uses Boolean search strings to find qualified candidates and enjoys the screening process.
- Advocate for short-term training and identify transferable skill sets to address training challenges in the manufacturing industry.
- Recognize the automation culture paradox and the need for cultural readiness to embrace automation in manufacturing facilities.
- Utilize data to measure workforce performance and make informed talent acquisition and retention decisions.
- Value people as a capital investment in manufacturing and prioritize their well-being and development.
- To create a positive and engaging work culture, focus on the entire employee experience, from recruitment to exit interviews.

Connect with Ann on LinkedIn
Watch Workforce 4.0 on YouTube

Are you ready to elevate your team's skills to the next level? Check out Baltu Technologies! They specialize in advancing workforce development through intuitive micro-learning platforms. Whether in manufacturing or education, Baltu provides tailored upskilling programs and software solutions designed to boost efficiency and expertise. Empower your organization with the tools it needs for tomorrow's challenges. Visit Baltu Technologies today and start your journey towards a smarter workforce.

Imagine a workplace where every team member feels recognized and valued. That's the promise of Secchi, the leading Employee Relationship Management solution. Secchi empowers frontline leaders to effectively inspire, recognize, and coach their teams. With Secchi's system, you can enhance performance through strategic decision-making, impactful recognition, and real-time process control. Ready to transform your organizational culture? Visit Secchi now and see how it can affect your team's dynamics.
Introduction
This is the start of a short series about the JSON data format, and how the command-line tool jq can be used to process such data. The plan is to make an open series to which others may contribute their own experiences using this tool. The jq command is described on the GitHub page as follows:

jq is a lightweight and flexible command-line JSON processor

…and as:

jq is like sed for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.

The jq tool is controlled by a programming language (also referred to as jq), which is very powerful. This series will mainly deal with this.

JSON (JavaScript Object Notation)
To begin we will look at JSON itself. It is defined on the Wikipedia page thus:

JSON is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). It is a common data format with diverse uses in electronic data interchange, including that of web applications with servers.

The syntax of JSON is defined by RFC 8259 and by ECMA-404. It is fairly simple in principle but has some complexity. JSON's basic data types are (edited from the Wikipedia page):

- Number: a signed decimal number that may contain a fractional part and may use exponential E notation, but cannot include non-numbers. (NOTE: Unlike what I said in the audio, there are two values representing non-numbers: NaN: 'nan' and infinity: 'infinity'.)
- String: a sequence of zero or more Unicode characters. Strings are delimited with double quotation marks and support a backslash escaping syntax.
- Boolean: either of the values true or false
- Array: an ordered list of zero or more elements, each of which may be of any type. Arrays use square bracket notation with comma-separated elements.
- Object: a collection of name–value pairs where the names (also called keys) are strings. Objects are delimited with curly brackets and use commas to separate each pair, while within each pair the colon ':' character separates the key or name from its value.
- null: an empty value, using the word null

Examples
These are the basic data types listed above (same order):

42
"HPR"
true
["Hacker","Public","Radio"]
{ "firstname": "John", "lastname": "Doe" }
null

jq
From the Wikipedia page:

jq was created by Stephen Dolan, and released in October 2012. It was described as being "like sed for JSON data". Support for regular expressions was added in jq version 1.5.

Obtaining jq
This tool is available in most of the Linux repositories. For example, on Debian and Debian-based releases you can install it with:

sudo apt install jq

See the download page for the definitive information about available versions.

Manual for jq
There is a detailed manual describing the use of the jq programming language that is used to filter JSON data. It can be found at https://jqlang.github.io/jq/manual/.

The HPR statistics page
This is a collection of statistics about HPR, in the form of JSON data. We will use this as a moderately detailed example in this episode. A link to this page may be found on the HPR Calendar page close to the foot of the page under the heading Workflow. The link to the JSON statistics is https://hub.hackerpublicradio.org/stats.json. If you click on this you should see the JSON data formatted for you by your browser. Different browsers represent this in different ways.
You can also collect and display this data from the command line, using jq of course:

$ curl -s https://hub.hackerpublicradio.org/stats.json | jq '.' | nl -w3 -s' '
  1 {
  2   "stats_generated": 1712785509,
  3   "age": {
  4     "start": "2005-09-19T00:00:00Z",
  5     "rename": "2007-12-31T00:00:00Z",
  6     "since_start": {
  7       "total_seconds": 585697507,
  8       "years": 18,
  9       "months": 6,
 10       "days": 28
 11     },
 12     "since_rename": {
 13       "total_seconds": 513726307,
 14       "years": 16,
 15       "months": 3,
 16       "days": 15
 17     }
 18   },
 19   "shows": {
 20     "total": 4626,
 21     "twat": 300,
 22     "hpr": 4326,
 23     "duration": 7462050,
 24     "human_duration": "0 Years, 2 months, 27 days, 8 hours, 47 minutes and 30 seconds"
 25   },
 26   "hosts": 356,
 27   "slot": {
 28     "next_free": 8,
 29     "no_media": 0
 30   },
 31   "workflow": {
 32     "UPLOADED_TO_IA": "2",
 33     "RESERVE_SHOW_SUBMITTED": "27"
 34   },
 35   "queue": {
 36     "number_future_hosts": 7,
 37     "number_future_shows": 28,
 38     "unprocessed_comments": 0,
 39     "submitted_shows": 0,
 40     "shows_in_workflow": 15,
 41     "reserve": 27
 42   }
 43 }

The curl utility is useful for collecting information from links like this. I have used the -s option to ensure it does not show information about the download process, since it does this by default. The output is piped to jq, which displays the data in a "pretty printed" form by default, as you see. In this case I have given jq a minimal filter which causes what it receives to be printed. The filter is simply '.'. I have piped the formatted JSON through the nl command to get line numbers for reference.

The JSON shown here consists of nested JSON objects. The first opening brace and the last at line 43 define the whole thing as a single object. Briefly, the object contains the following:
- a number called stats_generated (line 2)
- an object called age on lines 3-18; this object contains two strings and two objects
- an object called shows on lines 19-25
- a number called hosts on line 26
- an object called slot on lines 27-30
- an object called workflow on lines 31-34
- an object called queue on lines 35-42

We will look at ways to summarise and reformat such output in a later episode.

Next episode
I will look at some of the options to jq next time, though most of them will be revealed as they become relevant. I will also start looking at jq filters in that episode.

Links
- JSON (JavaScript Object Notation): Wikipedia page about JSON
- Standards: RFC 8259: The JavaScript Object Notation (JSON) Data Interchange Format; ECMA-404: The JSON data interchange syntax
- jq: GitHub page, Downloading jq, The jq manual, Wikipedia page about the jq programming language
- MrX's show on using the HPR statistics in JSON: Modifying a Python script with some help from ChatGPT
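As a small taste of the filters to come in later episodes (this example is my own, not from the show notes), a filter can name a path into that structure to extract a single value:

$ curl -s https://hub.hackerpublicradio.org/stats.json | jq '.hosts'

For the snapshot listed above this prints 356 (see line 26); the live statistics will of course have moved on.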
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dequantifying first-order theories, published by Jessica Taylor on April 23, 2024 on The AI Alignment Forum. The Löwenheim-Skolem theorem implies, among other things, that any first-order theory whose symbols are countable, and which has an infinite model, has a countably infinite model. This means that, in attempting to refer to uncountably infinite structures (such as in set theory), one "may as well" be referring to an only countably infinite structure, as far as proofs are concerned. The main limitation I see with this theorem is that it preserves arbitrarily deep quantifier nesting. In Peano arithmetic, it is possible to form statements that correspond (under the standard interpretation) to arbitrary statements in the arithmetic hierarchy (by which I mean, the union of Σ⁰ₙ and Π⁰ₙ for arbitrary n). Not all of these statements are computable. In general, the question of whether a given statement is provable is a Σ⁰₁ statement. So, even with a countable model, one can still believe oneself to be "referring" to high levels of the arithmetic hierarchy, despite the computational implausibility of this. What I aim to show is that these statements that appear to refer to high levels of the arithmetic hierarchy are, in terms of provability, equivalent to different statements that only refer to a bounded level of hypercomputation. I call this "dequantification", as it translates statements that may have deeply nested quantifiers to ones with bounded or no quantifiers. I first attempted translating statements in a consistent first-order theory T to statements in a different consistent first-order theory U, such that the translated statements have only bounded quantifier depth, as do the axioms of U. This succeeded, but then I realized that I didn't even need U to be first-order; U could instead be a propositional theory (with a recursively enumerable axiom schema).

Propositional theories and provability-preserving translations
Here I will, for specificity, define propositional theories. A propositional theory is specified by a countable set of proposition symbols, and a countable set of axioms, each of which is a statement in the theory. Statements in the theory consist of proposition symbols, the constants ⊤ and ⊥, and statements formed by applying and/or/not to other statements. Proving a statement in a propositional theory consists of an ordinary propositional calculus proof that it follows from some finite subset of the axioms (I assume that base propositional calculus is specified by inference rules, containing no axioms). A propositional theory is recursively enumerable if there exists a Turing machine that eventually prints all its axioms; assume that the (countable) proposition symbols are specified by their natural indices in some standard ordering. If the theory is recursively enumerable, then proofs (which specify the indices of axioms they use in the recursive enumeration) can be checked for validity by a Turing machine. Due to the soundness and completeness of propositional calculus, a statement in a propositional theory is provable if and only if it is true in all models of the theory. Here, a model consists of an assignment of Boolean truth values to proposition symbols such that all axioms are true. (Meanwhile, Gödel's completeness theorem shows soundness and completeness of first-order logic.)
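To pin down the notation used above (these are the standard definitions, not anything specific to this post):

```latex
% A \Sigma^0_1 statement asserts that some witness satisfies a computable check:
\Sigma^0_1:\quad \exists n.\ \varphi(n), \qquad \varphi \text{ computable.}
% Provability in a recursively enumerable theory T has exactly this shape,
% since checking a candidate proof p is computable:
\mathrm{Provable}_T(\psi) \iff \exists p.\ \mathrm{ProofCheck}_T(p,\psi).
```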
Let's start with a consistent first-order theory T, which may, like propositional theories, have a countable set of symbols and axioms. Also assume this theory is recursively enumerable, that is, there is a Turing machine printing its axioms. The initial challenge is to find a recursively enumerable propositional theory U and a computable translation of T-statements to U-statements, such that a T-statement is provable if and only if its translation is provable. This turns out to be trivia...
Our friend Damien Riehl stopped by to talk with Marlene Gebauer about two big happenings at vLex. Riehl unveiled exciting advancements in vLex's AI-powered legal research platform and shed light on vLex's commitment to streamlining legal workflows and reducing the need for extensive prompt engineering. One of the major developments is the enhanced document analysis feature. Users can now upload legal documents, such as complaints, and vLex's AI will automatically extract key information including claims, facts, parties involved, and potential legal defenses. This eliminates the tedious manual process of reviewing and analyzing documents, saving lawyers significant time and effort. Additionally, the platform suggests relevant legal research questions based on the document's content, further expediting the research process. vLex's advancements directly address the growing concerns surrounding prompt engineering in legal tech. By automating key analytical tasks, the platform empowers lawyers to focus on higher-level strategizing and client interactions, rather than spending hours crafting the perfect prompts for AI tools. Riehl echoes the sentiment of OpenAI's Sam Altman, believing that successful AI integration should render prompt engineering obsolete. He acknowledges that the option to fine-tune prompts remains, similar to Boolean search techniques, but emphasizes that vLex aims to make it a choice rather than a necessity. The potential impact on the legal industry is substantial. Clients, especially large corporations, can leverage vLex's capabilities to analyze past legal actions and assess the value provided by their law firms. This transparency could lead to a shift from billable hours to flat-fee arrangements, incentivizing efficiency and cost-effectiveness. Further amplifying vLex's potential, the company welcomes Daniel Hoadley, a renowned legal tech expert, to lead their research and development team. Hoadley's expertise in data science and large language models promises exciting advancements in harnessing the power of vLex's vast legal document database. With a robust roadmap of projects, vLex is poised to continue pushing the boundaries of legal technology and shaping the future of legal practice.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube
Contact Us:
Twitter: @gebauerm or @glambert
Threads: @glambertpod or @gebauerm66
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
TechCrunch is reporting that Checkr, a 10-year-old employee background check company that was last valued at $5 billion, has laid off 382 employees as companies are not significantly hiring talent. https://techcrunch.com/2024/04/10/checkr-layoffs/

Tech company Multiverse has acquired Searchlight, a talent intelligence and skills assessment platform that uses AI to help companies close their skills gaps. https://hrtechfeed.com/multiverse-acquires-ai-talent-software-company-searchlight/

NEW YORK – Cadient, a leading provider of talent acquisition solutions in the hourly hiring sector, has been acquired by Basis Vectors Capital, a private equity and technology investment firm that focuses on the B2B SaaS space. https://hrtechfeed.com/cadient-ats-acquired-by-private-equity-firm/

SeekOut, the Talent Intelligence Platform, announced the release of conversational search as part of its SeekOut Assist generative AI product portfolio. The new feature expands the capabilities of SeekOut Assist, enabling recruiters to use their own language in sourcing. This makes powerful AI-assisted searches accessible to all recruiters, allowing for simple descriptions instead of complex Boolean queries. https://hrtechfeed.com/seekout-adds-conversational-search-to-its-platform/

ReadySetHire, powered by Talroo, is a new recruiting platform built from the ground up exclusively for small businesses that lack the tools and insight needed to follow recruiting best practices to attract new hires. https://hrtechfeed.com/talroo-launches-readysethire/
Dive into the world of Building Automation System (BAS) point objects with Phil Zito in Episode 449 of the Smart Buildings Academy Podcast. This episode is dedicated to demystifying BAS point objects, a fundamental concept for professionals in the building automation industry. Whether you're new to BAS or looking to refresh your knowledge, this episode provides valuable insights into different types of point objects, their applications, and how they interact within a BAS environment.

Episode Highlights:
- Introduction to Point Objects: Phil begins with an overview of the basic types of point objects found in building automation systems, including Boolean, Numeric, and Enumerated objects, and their common aliases (Binary, Analog, Multistate).
- Deep Dive into Point Types: Explore the characteristics of Boolean (Binary), Numeric (Analog), and Enumerated (Multistate) points, including their functions, how they're used in BAS, and nuances across different BAS software.
- Priority Arrays Explained: Gain a clear understanding of priority arrays, a crucial concept for managing point object priorities within a BAS. Learn how priority arrays influence the behavior of BAS objects and ensure desired outcomes in automation logic (a small sketch of the idea appears at the end of these notes).
- Practical Applications: Phil discusses real-world applications and scenarios where different point objects are utilized within a BAS, providing listeners with practical knowledge to apply in their daily work.
- Interactive Q&A: The episode includes an interactive Q&A session, where Phil addresses listeners' questions, offering further clarifications on point objects and their use in building automation systems.

Join Phil Zito for this informative session on BAS point objects, designed to enhance your understanding and mastery of building automation systems. This episode is a must-listen for anyone involved in designing, implementing, or managing BAS, providing the foundational knowledge needed to navigate the complexities of automation with confidence.
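To make the priority-array concept concrete, here is a minimal Python sketch of a commandable point (the class and method names are mine, and real BAS platforms differ in detail, but BACnet's sixteen-level convention works roughly like this):

```python
class CommandablePoint:
    """A point with a 16-slot priority array: lower index = higher priority.
    The effective value is the highest-priority slot that has been written."""

    def __init__(self, relinquish_default):
        self.priority_array = [None] * 16
        self.relinquish_default = relinquish_default

    def write(self, value, priority=16):
        self.priority_array[priority - 1] = value  # priorities run 1..16

    def relinquish(self, priority):
        self.priority_array[priority - 1] = None   # give up that slot

    @property
    def present_value(self):
        for value in self.priority_array:
            if value is not None:
                return value
        return self.relinquish_default

fan = CommandablePoint(relinquish_default=0)  # a Boolean (binary) point
fan.write(1, priority=16)   # a schedule commands the fan on at low priority
fan.write(0, priority=8)    # a manual operator override wins at priority 8
print(fan.present_value)    # -> 0
fan.relinquish(8)           # the operator releases the override
print(fan.present_value)    # -> 1, falling back to the schedule's command
```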
In this episode, I sit down with Chris Kiefer, Automation, Data, and Analytics expert and founder of Boolean. Join us as Chris shares his insights on the importance of automation in a painting business. He gives practical examples of what can be automated to increase efficiency and profitability.
Download our podcast-exclusive toolkit of templates for your painting company: www.paintergrowth.com/podcast/
Learn how to grow your painting company NOW: go.paintergrowth.com?el=podcast
Free business breakthrough session with my team: go.paintergrowth.com/schedule?el=podcast
Free training series on YouTube: https://www.youtube.com/@paintergrowthblueprint
Painter Growth Secrets Facebook group: https://www.facebook.com/groups/paintergrowt
Episode Description:
Dive into the intricacies of building automation programming with Phil Zito in Episode 447 of the Smart Buildings Academy Podcast. This technical episode takes a deep dive into the art and science of writing effective building automation programs, focusing on sequences of operations, design patterns, and translating complex sequences into graphical programming interfaces.

Episode Highlights:
- Introduction to Building Automation Programming: Phil sets the stage for a comprehensive exploration of programming fundamentals, emphasizing the transition from theoretical knowledge to practical application.
- Understanding Sequences of Operations: Learn how to dissect and understand general sequences of operations, focusing on economizers as a primary example to illustrate the process of identifying patterns and translating them into code.
- Graphical vs. Line Code Programming: Phil explains the difference between graphical and line code programming, focusing on the use of graphical blocks to represent programming logic, making it accessible for beginners and seasoned professionals alike.
- Decoding Design Patterns: Discover the importance of design patterns in building automation programming, including comparative patterns and PID (Proportional, Integral, Derivative) patterns, and how they apply to various automation tasks (a small sketch of a PID loop appears at the end of these notes).
- Practical Programming Demonstration: Through a live demonstration, Phil showcases the step-by-step process of writing a program, from identifying variables to implementing logic blocks and adjusting setpoints.
- Troubleshooting and Optimization: Insights into common programming challenges, such as understanding interlocks, utilizing Boolean logic, and the significance of loop enables for efficient PID control.
- Q&A and Interactive Learning: Phil addresses listener questions and emphasizes the importance of community feedback in shaping future podcast topics, particularly focusing on areas like Priority Arrays and BACnet fundamentals.
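Since the episode leans on the PID pattern, here is a minimal Python sketch of that loop, including a loop-enable flag of the kind mentioned above (the gains, setpoint, and names are illustrative assumptions, not values from the episode):

```python
def make_pid(kp, ki, kd, setpoint, dt=1.0):
    """Positional PID: returns a controller mapping a measurement to an output."""
    integral = 0.0
    prev_error = None

    def update(measurement, enable=True):
        nonlocal integral, prev_error
        if not enable:             # loop disabled: reset state, hold output at 0
            integral, prev_error = 0.0, None
            return 0.0
        error = setpoint - measurement
        integral += error * dt                       # I term accumulates error
        derivative = 0.0 if prev_error is None else (error - prev_error) / dt
        prev_error = error
        return kp * error + ki * integral + kd * derivative

    return update

# e.g. a discharge-air temperature loop with a 55.0 degree setpoint
loop = make_pid(kp=2.0, ki=0.1, kd=0.0, setpoint=55.0)
print(loop(58.0))  # measurement above setpoint -> negative output
print(loop(53.0))  # measurement below setpoint -> positive output
```

Whether a positive output drives a damper or valve open or closed depends on whether the device is direct- or reverse-acting, a detail every real sequence of operations has to specify.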
In this episode, we're taking a look at how the explosion in our demand for data storage has led to needing more capacity than ever before, and whether long-vanished ideas from our computing past could influence technological innovation in the future. In 2022 the world generated 97 Zettabytes of data. It has been predicted that, by 2025, that number will almost have doubled to 181 Zettabytes, although at the rate generative AI and machine learning are expanding, that figure could be even higher.

As the Head of the Hewlett Packard Enterprise storage division, Senior Vice President Patrick Osborne has storage at the forefront of everything he does. He sees just how much his customers' needs are growing every year and is always actively looking for new methods and fabrics to meet those needs.

Alongside those requirements for greater data storage also sits the need for faster data processing, and there are a number of technologies nearing maturity which could revolutionise the space. Aidong Xu is Head of Semiconductor Capability at Cambridge Consultants, and is keeping a close eye on these technologies, especially in the memory space. For him, the big challenge is combining performance with efficiency.

However, whilst we're looking at the future of data storage, it's hard not to draw parallels with the past. Colin Eby from the National Museum of Computing knows a thing or two about that, guiding us through the history of the storage technologies which marked our pathway to today, some of which, in the decades since they fell out of favour, may have come round once more.

But what if the future of data storage isn't data at all, but something more organic? Mark Bathe is a professor of biological engineering at MIT, specialising in DNA storage, and what that can mean for the future of our digital archiving needs.

Sources and statistics cited in this episode:
- Zettabytes usage - https://www.statista.com/statistics/871513/worldwide-data-created/
- Sales of storage units - https://www.statista.com/forecasts/1251240/worldwide-storage-unit-sales-volume
- Hard drive shipment figures - https://www.statista.com/statistics/398951/global-shipment-figures-for-hard-disk-drives/
- Random access DNA memory using Boolean search in an archival file storage system - https://www.nature.com/articles/s41563-021-01021-3
Are you making the most of LinkedIn's free search? Don't miss out on untapped opportunities to elevate your sales approach. Join us for this enlightening conversation and discover how to maximize LinkedIn's potential without spending a dime. Listen in as we uncover often-overlooked features and show you how to optimize your LinkedIn strategy. You'll master the art of navigating the search bar and learn how to leverage second-degree connections for warm introductions. Plus, you'll delve into the impact of Boolean search techniques and gain insights into the effectiveness of video messages. Don't overlook the valuable bonus tip on searching companies through first-degree connections. This episode will provide you with the necessary tools to improve your sales on LinkedIn.
We have an awesome returning podcast guest this week: Chris Kiefer, founder of Boolean, an online review software and automation consulting company, is here to discuss the incoming AI "technology wave" that is impacting businesses of all sizes and services. Automating your organization's workflow and adopting no-code tools are things that every business NEEDS to plan for, or risk falling vastly behind the competition. Chris is passionate about this topic and we packed a lot of content into this conversation. If you are not using AI in your business now, it's time to get started.

Shared Resources:
Boolean Automation Channel: https://www.youtube.com/@BooleanAutomation/videos
AI & Technology Facebook Group: https://www.facebook.com/groups/920265796370898/
AI & Technology for Painters Mailing List: https://go.booleanreview.com/ai-technology-for-painters

Chris Kiefer is an engineer, entrepreneur, and thought leader. In 2013, he founded SkEye Media, a top medical marketing and branding agency, followed by the launch of Boolean Review, the highest converting Google review software in the market, and in 2022 created Boolean Automation, a consulting arm that helps residential and commercial painting companies implement no-code automation solutions. He is also the host of Pursuit of Purpose, a celebrated podcast that interviews successful entrepreneurs and inspirational people worldwide. Chris is passionate about exercise, CrossFit, adventure, and spending time with his wife and four kids in northern Idaho. To follow Chris, please visit chriskiefer.com, www.booleanreview.com and @pursuitofpurpose.pod
Thanks to the over 11,000 people who joined us for the first AI Engineer Summit! A full recap is coming, but you can 1) catch up on the fun and videos on Twitter and YouTube, 2) help us reach 1000 people for the first comprehensive State of AI Engineering survey, and 3) submit projects for the new AI Engineer Foundation. See our Community page for upcoming meetups in SF, Paris, NYC, and Singapore. This episode had good interest on Twitter.

Last month, Imbue was crowned as AI's newest unicorn foundation model lab, raising a $200m Series B at a >$1 billion valuation. As "stealth" foundation model companies go, Imbue (f.k.a. Generally Intelligent) has stood as an enigmatic group, given they have no publicly released models to try out. However, ever since their $20m Series A last year, their goal has been to "develop generally capable AI agents with human-like intelligence in order to solve problems in the real world".

From RL to Reasoning LLMs
Along with their Series A, they announced Avalon, "A Benchmark for RL Generalization Using Procedurally Generated Worlds". Avalon is built on top of the open source Godot game engine, and is ~100x faster than Minecraft, enabling fast RL benchmarking with a clear reward and adjustable game difficulty. After a while, they realized that pure RL isn't a good path to teach reasoning and planning. The agents were able to learn mechanical things like opening complex doors and climbing, but couldn't move on to higher level tasks. A pure RL world also doesn't include a language explanation of the agent's reasoning, which made it hard to understand why it made certain decisions. That pushed the team more towards the "models for reasoning" path:

"The second thing we learned is that pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were able to learn all sorts of crazy things: They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing."

Inspired by Chelsea Finn's work on SayCan at Stanford, the team pivoted to have their agents do the reasoning in natural language instead. This development parallels the large leaps in reasoning that humans made with the development of the scientific method:

"We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask:
* What was the original claim that was made?
* What evidence is there for this claim?
* Does the evidence support the claim?
* Is the claim correct?
This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ all the time, lots of heuristics that help us be better at reasoning. And we can generate data that's much more specific to them."

The Full Stack Model Lab
One year later, it would seem that the pivot to reasoning has had tremendous success, and Imbue has now reached a >$1B valuation, with participation from Astera Institute, NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and others. Imbue tackles their work with a "full stack" approach:
* Models.
Pretraining very large (>100B parameter) models, optimized to perform well on internal reasoning benchmarks, with a ~10,000 Nvidia H100 GPU cluster lets us iterate rapidly on everything from training data to architecture and reasoning mechanisms.
* Tools and Agents. Building internal productivity tools, from coding agents for fixing type checking and linting errors to sophisticated systems like CARBS (for hyperparameter tuning and network architecture search).
* Interface Invention. Solving agent trust and collaboration (not merely communication) with humans by creating better abstractions and interfaces: IDEs for users to program computers in natural language.
* Theory. Publishing research about the theoretical underpinnings of self-supervised learning, as well as scaling laws for machine learning research.

Kanjun believes we are still in the "bare metal phase" of agent development, and they want to take a holistic approach to building the "operating system for agents". We loved diving deep into the Imbue approach toward solving the AI Holy Grail of reliable agents, and are excited to share our conversation with you today!

Timestamps
* [00:00:00] Introductions
* [00:06:07] The origin story of Imbue
* [00:09:39] Imbue's approach to training large foundation models optimized for reasoning
* [00:12:18] Imbue's goals to build an "operating system" for reliable, inspectable AI agents
* [00:15:37] Imbue's process of developing internal tools and interfaces to collaborate with AI agents
* [00:17:27] Imbue's focus on improving reasoning capabilities in models, using code and other data
* [00:19:50] The value of using both public benchmarks and internal metrics to evaluate progress
* [00:21:43] Lessons learned from developing the Avalon research environment
* [00:23:31] The limitations of pure reinforcement learning for general intelligence
* [00:28:36] Imbue's vision for building better abstractions and interfaces for reliable agents
* [00:31:36] Interface design for collaborating with, rather than just communicating with, AI agents
* [00:37:40] The future potential of an agent-to-agent protocol
* [00:39:29] Leveraging approaches like critiquing between models and chain of thought
* [00:45:49] Kanjun's philosophy on enabling team members as creative agents at Imbue
* [00:53:51] Kanjun's experience co-founding the communal co-living space The Archive
* [01:00:22] Lightning Round

Show Notes
* Imbue
* Avalon
* CARBS (hyperparameter optimizer)
* Series B announcement
* Kanjun/Imbue's Podcast
* MIT Media Lab
* Research mentioned: Momentum Contrast, SimCLR, Chelsea Finn - SayCan
* Agent Protocol - part of the AI Engineer Foundation
* Xerox PARC
* Michael Nielsen
* Jason Benn
* Outset Capital
* Scenius - Kevin Kelly
* South Park Commons
* The Archive
* Thursday Nights in AI

Transcript
Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]
Swyx: Hey, and today in the studio we have Kanjun from Imbue. Welcome. So you and I have, I guess, crossed paths a number of times. You're formerly named Generally Intelligent and you've just announced your rename, rebrand in huge, humongous ways. So congrats on all of that. And we're here to dive into deeper detail on Imbue. We like to introduce you on a high level basis, but then have you go into a little bit more of your personal side.
So you graduated your BS at MIT and you also spent some time at the MIT Media Lab, one of the most famous, I guess, computer hacking labs in the world. Then you graduated MIT and you went straight into BizOps at Dropbox, where you're eventually chief of staff, which is a pretty interesting role we can dive into later. And then it seems like the founder bug hit you. You were basically a three times founder at Ember, Sorceress, and now at Generally Intelligent slash Imbue. What should people know about you on the personal side that's not on your LinkedIn? That's something you're very passionate about outside of work. [00:01:12]Kanjun: Yeah. I think if you ask any of my friends, they would tell you that I'm obsessed with agency, like human agency and human potential. [00:01:19]Swyx: That's work. Come on.Kanjun: It's not work. What are you talking about?Swyx: So what's an example of human agency that you try to promote? [00:01:27]Kanjun: With all of my friends, I have a lot of conversations with them that's kind of helping figure out what's blocking them. I guess I do this with a team kind of automatically too. And I think about it for myself often, like building systems. I have a lot of systems to help myself be more effective. At Dropbox, I used to give this onboarding talk called How to Be Effective, which people liked. I think like a thousand people heard this onboarding talk, and I think maybe Dropbox was more effective. I think I just really believe that as humans, we can be a lot more than we are. And it's what drives everything. I guess completely outside of work, I do dance. I do partner dance. [00:02:03]Swyx: Yeah. Lots of interest in that stuff, especially in the sort of group living houses in San Francisco, which I've been a little bit part of, and you've also run one of those. [00:02:12]Kanjun: That's right. Yeah. I started the archive with two friends, with Josh, my co-founder, and a couple of other folks in 2015. That's right. And GPT-3, our housemates built. [00:02:22]Swyx: Was that the, I guess, the precursor to Generally Intelligent, that you started doing more things with Josh? Is that how that relationship started? Yeah. [00:02:30]Kanjun: This is our third company together. Our first company, Josh poached me from Dropbox for Ember. And there we built a really interesting technology, laser raster projector, VR headset. And then we were like, VR is not the thing we're most passionate about. And actually it was kind of early days when we both realized we really do believe that in our lifetimes, like computers that are intelligent are going to be able to allow us to do much more than we can do today as people and be much more as people than we can be today. And at that time, we actually, after Ember, we were like, work on AI research or start an AI lab. A bunch of our housemates were joining OpenAI, and we actually decided to do something more pragmatic to apply AI to recruiting and to try to understand like, okay, if we are actually trying to deploy these systems in the real world, what's required? And that was Sorceress. That taught us so much about maybe an AI agent in a lot of ways, like what does it actually take to make a product that people can trust and rely on? I think we never really fully got there. And it's taught me a lot about what's required. And it's kind of like, I think informed some of our approach and some of the way that we think about how these systems will actually get used by people in the real world. 
[00:03:42]Swyx: Just to go one step deeper on that, you're building AI agents in 2016 before it was cool. You got some muscle and you raised $30 million. Something was working. What do you think you succeeded in doing and then what did you try to do that did not pan out? [00:03:56]Kanjun: Yeah. So the product worked quite well. So Sorceress was an AI system that basically looked for candidates that could be a good fit and then helped you reach out to them. And this was a little bit early. We didn't have language models to help you reach out. So we actually had a team of writers that like, you know, customized emails and we automated a lot of the customization. But the product was pretty magical. Like candidates would just be interested and land in your inbox and then you can talk to them. As a hiring manager, that's such a good experience. I think there were a lot of learnings, both on the product and market side. On the market side, recruiting is a market that is endogenously high churn, which means because people start hiring and then we hire the role for them and they stop hiring. So the more we succeed, the more they... [00:04:39]Swyx: It's like the whole dating business. [00:04:40]Kanjun: It's the dating business. Exactly. Exactly. And I think that's the same problem as the dating business. And I was really passionate about like, can we help people find work that is more exciting for them? A lot of people are not excited about their jobs and a lot of companies are doing exciting things and the matching could be a lot better. But the dating business phenomenon like put a damper on that, like it's actually a pretty good business. But as with any business with like relatively high churn, the bigger it gets, the more revenue we have, the slower growth becomes because if 30% of that revenue you lose year over year, then it becomes a worse business. So that was the dynamic we noticed quite early on after our Series A. I think the other really interesting thing about it is we realized what was required for people to trust that these candidates were like well vetted and had been selected for a reason. And it's what actually led us, you know, a lot of what we do at Imbue is working on interfaces to figure out how do we get to a situation where when you're building and using agents, these agents are trustworthy to the end user. That's actually one of the biggest issues with agents that, you know, go off and do longer range goals is that I have to trust, like, did they actually think through this situation? And that really informed a lot of our work today. [00:05:52]Alessio: Let's jump into GI now, Imbue. When did you decide recruiting was done for you and you were ready for the next challenge? And how did you pick the agent space? I feel like in 2021, it wasn't as mainstream. Yeah. [00:06:07]Kanjun: So the LinkedIn says that it started in 2021, but actually we started thinking very seriously about it in early 2020, late 2019, early 2020. So what we were seeing is that scale is starting to work and language models probably will actually get to a point where like with hacks, they're actually going to be quite powerful. And it was hard to see that at the time, actually, because GPT-3, the early versions of it, there are all sorts of issues. We're like, oh, that's not that useful, but we could kind of see like, okay, you keep improving it in all of these different ways and it'll get better. What Josh and I were really interested in is how can we get computers that help us do bigger things? 
Like, you know, there's this kind of future where I think a lot about, you know, if I were born in 1900 as a woman, like my life would not be that fun. I'd spend most of my time like carrying water and literally like getting wood to put in the stove to cook food and like cleaning and scrubbing the dishes and, you know, getting food every day because there's no refrigerator, like all of these things, very physical labor. And what's happened over the last 150 years since the industrial revolution is we've kind of gotten free energy, like energy is way more free than it was 150 years ago. And so as a result, we've built all these technologies like the stove and the dishwasher and the refrigerator, and we have electricity and we have infrastructure, running water, all of these things that have totally freed me up to do what I can do now. And I think the same thing is true for intellectual energy. We don't really see it today, but because we're so in it, but our computers have to be micromanaged. You know, part of why people are like, oh, you're stuck to your screen all day. Well, we're stuck to our screen all day because literally nothing happens unless I'm doing something in front of my screen. I don't, you know, I can't send my computer off to do a bunch of stuff for me. And there is a future where that's not the case, where, you know, I can actually go off and do stuff and trust that my computer will pay my bills and figure out my travel plans and do the detailed work that I am not that excited to do so that I can like be much more creative and able to do things that I as a human, I'm very excited about and collaborate with other people. And there are things that people are uniquely suited for. So that's kind of always been the thing that has been really exciting to me. Like Josh and I have known for a long time, I think that, you know, whatever AI is, it would happen in our lifetimes. And the personal computer kind of started giving us a bit of free intellectual energy. And this is like really the explosion of free intellectual energy. So in early 2020, we were thinking about this and what happened was self-supervised learning basically started working across everything. So it worked in language, SimCLR came out, I think MoCo (Momentum Contrast) had come out earlier in 2019, SimCLR came out in early 2020. And we're like, okay, for the first time, self-supervised learning is working really well across images and text and suspect that like, okay, actually it's the case that machines can learn things the way that humans do. And if that's true, if they can learn things in a fully self-supervised way, because like as people, we are not supervised. We like go Google things and try to figure things out. So if that's true, then like what the computer could be is much bigger than what it is today. And so we started exploring ideas around like, how do we actually go? We didn't think about the fact that we could actually just build a research lab. So we were like, okay, what kind of startup could we build to like leverage self-supervised learning? So that eventually becomes something that allows computers to become much more able to do bigger things for us. But that became Generally Intelligent, which started as a research lab. [00:09:39]Alessio: So your mission is you aim to rekindle the dream of the personal computer. So when did it go wrong and what are like your first products and user facing things that you're building to rekindle it? [00:09:53]Kanjun: Yeah.
So what we do at Imbue is we train large foundation models optimized for reasoning. And the reason for that is because reasoning is actually, we believe the biggest blocker to agents or systems that can do these larger goals. If we think about something that writes an essay, like when we write an essay, we like write it. We put it and then we're done. We like write it and then we look at it and we're like, oh, I need to do more research on that area. I'm going to go do some research and figure it out and come back and, oh, actually it's not quite right. The structure of the outline. So I'm going to rearrange the outline, rewrite it. It's this very iterative process and it requires thinking through like, okay, what am I trying to do? Is the goal correct? Also like, has the goal changed as I've learned more? So as a tool, like when should I ask the user questions? I shouldn't ask them questions all the time, but I should ask them questions in higher risk situations. How certain am I about the like flight I'm about to book? There are all of these notions of like risk certainty, playing out scenarios, figuring out how to make a plan that makes sense, how to change the plan, what the goal should be. That are things that we lump under the bucket of reasoning and models today, they're not optimized for reasoning. It turns out that there's not actually that much explicit reasoning data on the internet as you would expect. And so we get a lot of mileage out of optimizing our models for reasoning in pre-training. And then on top of that, we build agents ourselves and we, I can get into, we really believe in serious use, like really seriously using the systems and trying to get to an agent that we can use every single day, tons of agents that we can use every single day. And then we experiment with interfaces that help us better interact with the agents. So those are some set of things that we do on the kind of model training and agent side. And then the initial agents that we build, a lot of them are trying to help us write code better because code is most of what we do every day. And then on the infrastructure and theory side, we actually do a fair amount of theory work to understand like, how do these systems learn? And then also like, what are the right abstractions for us to build good agents with, which we can get more into. And if you look at our website, we build a lot of tools internally. We have a like really nice automated hyperparameter optimizer. We have a lot of really nice infrastructure and it's all part of the belief of like, okay, let's try to make it so that the humans are doing the things humans are good at as much as possible. So out of our very small team, we get a lot of leverage. [00:12:18]Swyx: And so would you still categorize yourself as a research lab now, or are you now in startup mode? Is that a transition that is conscious at all? [00:12:26]Kanjun: That's a really interesting question. I think we've always intended to build, you know, to try to build the next version of the computer, enable the next version of the computer. The way I think about it is there's a right time to bring a technology to market. So Apple does this really well. Actually, iPhone was under development for 10 years, AirPods for five years. And Apple has a story where iPhone, the first multi-touch screen was created. They actually were like, oh wow, this is cool. Let's like productionize iPhone. 
They actually brought, they like did some work trying to productionize it and realized this is not good enough. And they put it back into research to try to figure out like, how do we make it better? What are the interface pieces that are needed? And then they brought it back into production. So I think of production and research as kind of like these two separate phases. And internally we have that concept as well, where like things need to be done in order to get to something that's usable. And then when it's usable, like eventually we figure out how to productize it. [00:13:20]Alessio: What's the culture like to make that happen, to have both like kind of like product oriented, research oriented. And as you think about building the team, I mean, you just raised 200 million. I'm sure you want to hire more people. What are like the right archetypes of people that work at Imbue? [00:13:35]Kanjun: I would say we have a very unique culture in a lot of ways. I think a lot about social process design. So how do you design social processes that enable people to be effective? I like to think about team members as creative agents, because most companies, they think of their people as assets and they're very proud of this. And I think about like, okay, what is an asset? It's something you own that provides you value that you can discard at any time. This is a very low bar for people. This is not what people are. And so we try to enable everyone to be a creative agent and to really unlock their superpowers. So a lot of the work I do, you know, I was mentioning earlier, I'm like obsessed with agency. A lot of the work I do with team members is try to figure out like, you know, what are you really good at? What really gives you energy and where can we put you such that, how can I help you unlock that and grow that? So much of our work, you know, in terms of team structure, like much of our work actually comes from people. CARBS, our hyperparameter optimizer, came from Abe trying to automate his own research process doing hyperparameter optimization. And he actually pulled some ideas from plasma physics (he's a plasma physicist) to make the local search work. A lot of our work on evaluations comes from a couple of members of our team who are like obsessed with evaluations. We do a lot of work trying to figure out like, how do you actually evaluate if the model is getting better? Is the model making better agents? Is the agent actually reliable? A lot of things kind of like, I think of people as making the like them-shaped blob inside Imbue and I think, you know, yeah, that's the kind of person that we're, we're hiring for. We're hiring product engineers and data engineers and research engineers and all these roles. We have projects, not teams. We have a project around data, data collection and data engineering. That's actually one of the key things that improves the model performance. We have a pre-training kind of project with some fine tuning as part of that. And then we have an agents project that's like trying to build on top of our models as well as use other models in the outside world to try to make agents then we actually use as programmers every day. So all sorts of different, different projects. [00:15:37]Swyx: As a founder, you're now sort of a capital allocator among all of these different investments effectively at different projects.
And I was interested in how you mentioned that you were optimizing for improving reasoning and specifically inside of your pre-training, which I assume is just a lot of data collection. [00:15:55]Kanjun: We are optimizing reasoning inside of our pre-trained models. And a lot of that is about data. And I can talk more about like what, you know, what exactly does it involve? But actually big, maybe 50% plus of the work is figuring out even if you do have models that reason well, like the models are still stochastic. The way you prompt them still makes, is kind of random, like makes them do random things. And so how do we get to something that is actually robust and reliable as a user? How can I, as a user, trust it? We have all sorts of cool things on the, like, you know, I was mentioning earlier when I talked to other people building agents, they have to do so much work, like to try to get to something that they can actually productize and it takes a long time and agents haven't been productized yet for, partly for this reason is that like the abstractions are very leaky. We can get like 80% of the way there, but like self-driving cars, like the remaining 20% is actually really difficult. We believe that, and we have internally, I think some things that like an interface, for example, that lets me really easily like see what the agent execution is, fork it, try out different things, modify the prompt, modify like the plan that it is making. This type of interface, it makes it so that I feel more like I'm collaborating with the agent as it's executing, as opposed to it's just like doing something as a black box. That's an example of a type of thing that's like beyond just the model pre-training, but on the model pre-training side, like reasoning is a thing that we optimize for. And a lot of that is about what data do we put in. [00:17:27]Swyx: It's interesting just because I always think like, you know, out of the levers that you have, the resources that you have, I think a lot of people think that running foundation model company or a research lab is going to be primarily compute. And I think the share of compute has gone down a lot over the past three years. It used to be the main story, like the main way you scale is you just throw more compute at it. And now it's like, Flops is not all you need. You need better data, you need better algorithms. And I wonder where that shift has gone. This is a very vague question, but is it like 30-30-30 now? Is it like maybe even higher? So one way I'll put this is people estimate that Llama2 maybe took about three to $4 million of compute, but probably 20 to $25 million worth of labeling data. And I'm like, okay, well that's a very different story than all these other foundation model labs raising hundreds of millions of dollars and spending it on GPUs. [00:18:20]Kanjun: Data is really expensive. We generate a lot of data. And so that does help. The generated data is close to actually good, as good as human labeled data. [00:18:34]Swyx: So generated data from other models? [00:18:36]Kanjun: From our own models. From your own models. Or other models, yeah. [00:18:39]Swyx: Do you feel like there's certain variations of this? There's the sort of the constitutional AI approach from Anthropic and basically models sampling training on data from other models. 
I feel like there's a little bit of like contamination in there, or to put it in a statistical form, you're resampling a distribution that you already have that you already know doesn't match human distributions. How do you feel about that basically, just philosophically? [00:19:04]Kanjun: So when we're optimizing models for reasoning, we are actually trying to like make a part of the distribution really spiky. So in a sense, like that's actually what we want. We want to, because the internet is a sample of the human distribution that's also skewed in all sorts of ways. That is not the data that we necessarily want these models to be trained on. And so when we're generating data, we're not really randomly generating data. We generate very specific things that are like reasoning traces and that help optimize reasoning. Code also is a big piece of improving reasoning. So generated code is not that much worse than like regular human-written code. You might even say it can be better in a lot of ways. So yeah. So we are trying to already do that. [00:19:50]Alessio: What are some of the tools that you thought were not a good fit? So you built Avalon, which is your own simulated world. And when you first started, the metagame was like using games to simulate things using, you know, Minecraft, and then OpenAI's, like, Gym thing, and all these things. And I think in one of your other podcasts, you mentioned like Minecraft is like way too slow to actually do any serious work. Is that true? Yeah. I didn't say it. [00:20:17]Swyx: I don't know. [00:20:18]Alessio: That's above my pay grade. But Avalon is like a hundred times faster than Minecraft for simulation. When did you figure out that you needed to just, like, build your own thing? Was it kind of like your engineering team was like, hey, this is too slow? Or was it more of a long-term investment? [00:20:34]Kanjun: Yeah. At that time we built Avalon as a research environment to help us learn particular things. And one thing we were trying to learn is like, how do you get an agent that is able to do many different tasks? With RL agents at that time and environments at that time, what we heard from other RL researchers was that the, like, biggest thing holding the field back is a lack of benchmarks that let us explore things like planning and curiosity and things like that, and have the agent actually perform better if the agent has curiosity. And so we were trying to figure out: how can we have agents that are able to handle lots of different types of tasks without the reward being pretty handcrafted? A lot of what we had seen was, like, these very handcrafted rewards. And so Avalon has, like, a single reward across all tasks. And it also allowed us to create a curriculum so we could make the level more or less difficult. And it taught us a lot, maybe two primary things. One is with no curriculum, RL algorithms don't work at all. So that's actually really interesting. [00:21:43]Swyx: For the non-RL specialists, what is a curriculum in your terminology? [00:21:46]Kanjun: So a curriculum in this particular case is basically that the Avalon environment lets us generate simpler environments and harder environments for a given task. What's interesting is that in the simpler environments, as you'd expect, the agent succeeds more often. So it gets more reward. And so, you know, kind of my intuitive way of thinking about it is, okay, the reason why it learns much faster with a curriculum is it's just getting a lot more signal.
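To make the curriculum idea concrete, here is a minimal sketch in Python of a success-rate-driven curriculum. It assumes a hypothetical environment generator with a difficulty knob, loosely in the spirit of what Avalon is described as doing; the class, the 20-episode window, and the 50% success threshold are illustrative assumptions, not Imbue's actual implementation.

```python
class Curriculum:
    """Raise environment difficulty only once the agent succeeds often."""

    def __init__(self, levels=10, target_success=0.5, window=20):
        self.levels = levels
        self.target_success = target_success
        self.window = window
        self.current = 0      # start at the easiest level
        self.results = []     # record of episode outcomes at this level

    def difficulty(self):
        return self.current

    def record(self, succeeded):
        self.results.append(succeeded)
        recent = self.results[-self.window:]
        # When success is frequent, reward has become cheap at this level;
        # move up so the learning signal stays informative.
        if (len(recent) == self.window
                and sum(recent) / self.window > self.target_success
                and self.current < self.levels - 1):
            self.current += 1
            self.results = []


def train(agent, make_env, episodes=1000):
    curriculum = Curriculum()
    for _ in range(episodes):
        env = make_env(difficulty=curriculum.difficulty())
        succeeded = agent.run_episode(env)  # True if the task was solved
        curriculum.record(bool(succeeded))
```

The design choice to notice is that difficulty ramps up only when the agent is already succeeding often, which keeps the reward signal dense, matching the intuition that a curriculum works because the agent is simply getting a lot more signal.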
And that's actually an interesting general intuition to have about training these things: like, what kind of signal are they getting? And like, how can you help it get a lot more signal? The second thing we learned is that reinforcement learning is not a good vehicle, like pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were not able to, they were able to learn all sorts of crazy things. They could learn to climb like hand over hand in VR climbing, they could learn to open, like, very complicated doors, like multiple switches and a lever to open the door, but they couldn't do any higher-level things. And they couldn't do those lower-level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end-to-end RL agent. As a user, like I need much more control over what that agent is doing. And so that actually started to get us on the track of thinking about, okay, how do we do the reasoning part in language? And we were pretty inspired by our friend Chelsea Finn at Stanford, who was, I think, working on SayCan at the time, where it's basically an experiment where they have robots kind of trying to do different tasks and actually do the reasoning for the robot in natural language. And it worked quite well. And that led us to start experimenting very seriously with reasoning. [00:23:31]Alessio: How important is the language part for the agent versus for you to inspect the agent? You know, like is it the interface to kind of the human on the loop really important, or? [00:23:43]Kanjun: Yeah, I personally think of it as it's much more important for us, the human user. So I think you probably could get end-to-end agents that work and are fairly general at some point in the future. But I think you don't want that. Like we actually want agents that we can like perturb while they're trying to figure out what to do. Because, you know, even a very simple example, internally we have like a type error fixing agent and we have like a test generation agent. The test generation agent goes off the rails all the time. I want to know, like, why did it generate this particular test? [00:24:19]Swyx: What was it thinking? [00:24:20]Kanjun: Did it consider, you know, the fact that this is calling out to this other function? And the formatter agent, if it ever comes up with anything weird, I want to be able to debug, like, what happened. With RL end-to-end stuff, like, we couldn't do that. Yeah. [00:24:36]Swyx: It sounds like you have a bunch of agents operating internally within the company. What's your most, I guess, successful agent and what's your least successful one? [00:24:44]Kanjun: The agents don't work. [Swyx: All of them?] I think the only successful agents are the ones that do really small things. So very specific, small things like fix the color of this button on the website or like change the color of this button. [00:24:57]Swyx: Which is what sweep.dev is doing now. [00:25:00]Kanjun: Exactly. [00:25:02]Swyx: Perfect. Okay. Well, we should just use sweep.dev. Well, I mean, okay. I don't know how often you have to fix the color of a button, right? Because all of them raise money on the idea that they can go further. And my fear when encountering something like that is that there's some kind of unknown asymptote, a ceiling that they're going to run head-on into, that you've already run into. [00:25:21]Kanjun: We've definitely run into such a ceiling. [00:25:24]Swyx: But what is the ceiling? Is there a name for it?
Like what? [00:25:26]Kanjun: I mean, for us, we think of it as reasoning plus these tools. So reasoning plus abstractions, basically. I think actually you can get really far with current models and that's why it's so compelling. Like we can pile debugging tools on top of these current models, have them critique each other and critique themselves and do all of these, like spend more compute at inference time, context hacks, retrieval-augmented generation, et cetera, et cetera, et cetera. Like the pile of hacks actually does get us really far. And a way to think about it is like the underlying language model is kind of like a noisy channel. Actually I don't want to use this analogy. It's actually a really bad analogy, but you're kind of, like, trying to get more signal out of the channel. We don't like to think about it that way. That's what the default approach is: like, trying to get more signal out of this noisy channel. But the issue with agents is as a user, I want it to be mostly reliable. It's kind of like self-driving in that way. Like it's not as bad as self-driving, like in self-driving, you know, you're like hurtling at 70 miles an hour. It's like the hardest agent problem. But one thing we learned from Sorceress and one thing we learned by using these things internally is we actually have a pretty high bar for these agents to work. You know, it's actually really annoying if they only work 50% of the time and we can make interfaces to make it slightly less annoying. But yeah, there's a ceiling that we've encountered so far and we need to make the models better. We also need to make the kind of like interface to the user better. And also a lot of the like critiquing. I hope what we can do is help people who are building agents actually like be able to deploy them. I think, you know, that's the gap that we see a lot of today: for everyone who's trying to build agents, getting to the point where it's robust enough to be deployable just takes, like, an unknown amount of time. Okay. [00:27:12]Swyx: So this goes back into what Imbue is going to offer as a product or a platform. How are you going to actually help people deploy those agents? Yeah. [00:27:21]Kanjun: So our current hypothesis, I don't know if this is actually going to end up being the case. We've built a lot of tools for ourselves internally around, like, debugging, around abstractions or techniques for after the model generation happens, like after the language model generates the text; and interfaces for the user; and the underlying model itself, like models talking to each other. Maybe some set of those things, kind of like an operating system. Some set of those things will be helpful for other people. And we'll figure out what set of those things is helpful for us to make our agents. Like what we want to do is get to a point where we can like start making an agent, deploy it, it's reliable, like very quickly. And there's a similar analog to software engineering, like in the early days, in the sixties and seventies, like, to program a computer, you had to go all the way down to the registers and write things, and eventually we had assembly. That was, like, an improvement. But then we wrote programming languages with these higher levels of abstraction and that allowed a lot more people to do this and much faster. And the software created is much less expensive. And I think it's basically a similar route here where we're like in the, like, bare-metal phase of agent building.
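As a rough illustration of the "pile of hacks" just described, here is a minimal self-critique loop that spends extra inference-time compute to squeeze more reliability out of a model. `complete` is a placeholder for any text-completion call, not a specific vendor API, and the prompts are invented for illustration.

```python
def generate_with_critique(complete, task, max_rounds=3):
    """Draft, critique, and revise: trading inference-time compute for reliability."""
    draft = complete(f"Task: {task}\nAnswer:")
    for _ in range(max_rounds):
        critique = complete(
            f"Task: {task}\nProposed answer: {draft}\n"
            "List concrete problems with this answer, or reply OK if there are none:"
        )
        if critique.strip() == "OK":
            break  # the critic is satisfied; stop spending compute
        draft = complete(
            f"Task: {task}\nPrevious answer: {draft}\n"
            f"Problems found: {critique}\nRevised answer:"
        )
    return draft
```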
And we will eventually get to something with much nicer abstractions. [00:28:36]Alessio: We had this conversation with George Hotz and we were like, there's not a lot of reasoning data out there. And can the models really understand? And his take was like, look, with enough compute, you're not that complicated as a human. Like the model can figure out eventually why certain decisions are made. What's been your experience? Like as you think about reasoning data, like do you have to do a lot of like manual work or like is there a way to prompt models to extract the reasoning from actions that they see? [00:29:03]Kanjun: So we don't think of it as, oh, throw enough data at it and then it will figure out what the plan should be. I think we're much more explicit. You know, a way to think about it is as humans, we've learned a lot of reasoning strategies over time. We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask like, huh, what was the original claim that was made? What evidence is there for this claim? Does the evidence support the claim? Is the claim correct? This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them that we employ all the time, lots of heuristics that help us be better at reasoning. And we didn't always have them. And because they're invented, like we can generate data that's much more specific to them. So I think internally, yeah, we have a lot of thoughts on what reasoning is and we generate a lot more specific data. We're not just like, oh, it'll figure out reasoning from this black box or like it'll figure out reasoning from the data that exists. Yeah. [00:30:04]Alessio: I mean, the scientific method is like a good example. If you think about hallucination, right, people are thinking, how do we use these models to do net new, like scientific research? And if you go back in time and the model is like, well, the earth revolves around the sun and people are like, man, this model is crap. It's like, what are you talking about? Like the sun revolves around the earth. It's like, how do you see the future? Like if the models are actually good enough, but we don't believe them, it's like, how do we make the two live together? So you're like, you use Imbue as a scientist to do a lot of your research and Imbue tells you, hey, I think this is like a serious path you should go down. And you're like, no, that sounds impossible. Like how is that trust going to be built? And like, what are some of the tools that maybe are going to be there to inspect it? [00:30:51]Kanjun: Really there are two answers to this. One element of it is as a person, like I need to basically get information out of the model such that I can try to understand what's going on with the model. Then the second question is like, okay, how do you do that? And that's kind of what some of our debugging tools are for; they're not necessarily just for debugging. They're also for like interfacing with and interacting with the model. So like if I go back in this reasoning trace and like change a bunch of things, what's going to happen? Like, what does it conclude instead? So that kind of helps me understand like, what are its assumptions? And, you know, we think of these things as tools. And so it's really about like, as a user, how do I use this tool effectively?
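Kanjun's point that invented reasoning strategies allow much more targeted data generation can be pictured with a hedged sketch: walk a model through one explicit strategy, here the "noticing you're confused" checklist quoted above, and keep the resulting trace as a training example. The trace format and pipeline are invented for illustration; this is not Imbue's actual data pipeline.

```python
# The checklist mirrors the reasoning strategy described in the conversation.
STRATEGY_STEPS = [
    "What was the original claim that was made?",
    "What evidence is there for this claim?",
    "Does the evidence support the claim?",
    "Is the claim correct?",
]

def make_trace(complete, claim):
    """Build one reasoning trace by walking the strategy step by step."""
    trace = [f"Claim under examination: {claim}"]
    for step in STRATEGY_STEPS:
        answer = complete("\n".join(trace) + f"\n{step}")
        trace.append(f"{step} {answer}")
    return "\n".join(trace)

def build_dataset(complete, claims):
    # Each trace becomes one training example that targets this specific
    # strategy, rather than hoping the model infers it from web data.
    return [make_trace(complete, c) for c in claims]
```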
I need to be willing to be convinced as well. It's like, how do I use this tool effectively? And what can it help me with? And what can it tell me? [00:31:36]Swyx: There's a lot of mention of code in your process. And I was hoping to dive in even deeper. I think we might run the risk of giving people the impression that you view code or you use code just as like a tool within Imbue just for coding assistance. But I think you actually train code models. And I think there's a lot of informal understanding about how adding code to language models improves their reasoning capabilities. I wonder if there's any research or findings that you have to share that talks about the intersection of code and reasoning. Hmm. Yeah. [00:32:08]Kanjun: So the way I think about it intuitively is like code is the most explicit example of reasoning data on the internet. [00:32:15]Swyx: Yeah. [00:32:15]Kanjun: And it's not only structured, it's actually very explicit, which is nice. You know, it says this variable means this, and then it uses this variable. And then the function does this. As people, when we talk in language, it takes a lot more to extract that explicit structure out of our language. And so that's one thing that's really nice about code is I see it as almost like a curriculum for reasoning. I think we use code in all sorts of ways. The coding agents are really helpful for us to understand what are the limitations of the agents. The code is really helpful for the reasoning itself. But also code is a way for models to act. So by generating code, it can act on my computer. And, you know, when we talk about rekindling the dream of the personal computer, kind of where I see computers going is, you know, like computers will eventually become these much more malleable things where I, as a user today, I have to know how to write software code, like in order to make my computer do exactly what I want it to do. But in the future, if the computer is able to generate its own code, then I can actually interface with it in natural language. And so one way we think about agents is kind of like a natural language programming language. It's a way to program my computer in natural language that's much more intuitive to me as a user. And these interfaces that we're building are essentially IDEs for users to program our computers in natural language. Maybe I should say what we're doing that way. Maybe it's clearer. [00:33:47]Swyx: I don't know. [00:33:47]Alessio: That's a good pitch. What do you think about the different approaches people have, kind of like text first, browser first, like MultiOn? What do you think the best interface will be? Or like, what is your, you know, thinking today? [00:33:59]Kanjun: In a lot of ways, like chat as an interface, I think Linus, Linus Lee, who you had on this show, I really like how he put it. Chat as an interface is skeuomorphic. So in the early days, when we made word processors on our computers, they had notepad lines because that's what we understood these like objects to be. Chat, like texting someone is something we understand. So texting our AI is something that we understand. But today's word documents don't have notepad lines. And similarly, the way we want to interact with agents, like chat is a very primitive way of interacting with agents. What we want is to be able to inspect their state and to be able to modify them and fork them and all of these other things. And we internally think about: what are the right representations for that?
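Before the conversation returns to representations, the "code is a way for models to act" idea can be pictured with a minimal sketch: the model turns a natural-language request into a script, which is then executed. `complete` is again a placeholder completion call, and a real system would need proper sandboxing; running model-generated code directly is unsafe.

```python
import subprocess
import sys
import tempfile

def act(complete, request):
    """Turn a natural-language request into code, then run it as an action."""
    code = complete(
        "Write a self-contained Python script that does the following:\n"
        f"{request}\nRespond with code only."
    )
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    # The timeout bounds runaway scripts; it is not a substitute for a sandbox.
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )
    return result.stdout, result.stderr
```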
Like architecturally, like what are the right representations? What kind of abstractions do we need to build? And how do we build abstractions that are not leaky? Because if the abstractions are leaky, which they are today, like, you know, this stochastic generation of text is like a leaky abstraction. I cannot depend on it. And that means it's actually really hard to build on top of. But our experience and belief is actually by building better abstractions and better tooling, we can actually make these things non-leaky. And now you can build like whole things on top of them. So these other interfaces, because of where we are, we don't think that much about them. [00:35:17]Swyx: Yeah. [00:35:17]Alessio: I mean, you mentioned, this is kind of like the Xerox PARC moment for AI. And we had a lot of stuff come out of PARC, like the what-you-see-is-what-you-get editors and like MVC and all this stuff. But yeah, but then we didn't have the iPhone at PARC. We didn't have all these like higher things. What do you think it's reasonable to expect in like this era of AI, you know, call it like five years or so? Like what are like the things we'll build today and what are things that maybe we'll see in kind of like the second wave of products? [00:35:46]Kanjun: That's interesting. I think the waves will be much faster than before. Like what we're seeing right now is basically like a continuous wave. Let me zoom a little bit earlier. So people like the Xerox PARC analogy I give, but I think there are many different analogies. Like one is the, like, analog-to-digital computer transition; that's kind of an example, like another analogy to where we are today. The analog computer Vannevar Bush built in the 1930s, I think, is like a system of pulleys and it can only calculate one function. Like, it can calculate, like, an integral. And that was so magical at the time because you actually did need to calculate this integral a bunch, but it had a bunch of issues, like in analog, errors compound. And so there was actually a set of breakthroughs necessary in order to get to the digital computer, like Turing's decidability, Shannon's insight that, like, relay circuits can be mapped to Boolean operators, and a set of other, like, theoretical breakthroughs, which essentially were abstractions. They were, like, creating abstractions for these very lossy, very analog circuits, and digital had this nice property of, like, being error correcting. And so when I talk about like less leaky abstractions, that's what I mean. That's what I'm kind of pointing a little bit to. It's not going to look exactly the same way. And then the Xerox PARC piece, a lot of that is about like, how do we get to computers that as a person, I can actually use well. And the interface actually helps it unlock so much more power. So the sets of things we're working on, like the sets of abstractions and the interfaces, like hopefully that like help us unlock a lot more power in these systems. Like hopefully that'll come not too far in the future. I could see a next version, maybe a little bit farther out. It's like an agent protocol. So a way for different agents to talk to each other and call each other. Kind of like HTTP. [00:37:40]Swyx: Do you know if it exists already? [00:37:41]Kanjun: Yeah, there is a nonprofit that's working on one. I think it's a bit early, but it's interesting to think about right now.
Part of why I think it's early is because the issue with agents, it's not quite like the internet where you could like make a website and the website would appear. The issue with agents is that they don't work. And so it may be a bit early to figure out what the protocol is before we really understand how these agents get constructed. But, you know, I think that's, I think it's a really interesting question. [00:38:09]Swyx: While we're talking on this agent to agent thing, there's been a bit of research recently on some of these approaches. I tend to just call them extremely complicated chain of thoughting, but any perspectives on kind of MetaGPT, I think that's the name of the paper. I don't know if you care about it at the level of individual papers coming out, but I did read that recently and TLDR, it beat GPT-4 on HumanEval by role-playing a software development agency: instead of having sort of a single shot or single role, you have multiple roles, and all of them criticize each other as agents communicating with other agents. [00:38:45]Kanjun: Yeah, I think this is an example of an interesting abstraction of like, okay, can I just plop in this like multi-role critiquing and see how it improves my agent? And can I just plop in chain of thought, tree of thought, plop in these other things and see how they improve my agent? One issue with this kind of prompting is that it's still not very reliable. It's like, there's one lens, which is like, okay, if you do enough of these techniques, you'll get to high reliability. And I think actually that's a pretty reasonable lens. We take that lens often. And then there's another lens that's like, okay, but it's starting to get really messy what's in the prompt and like, how do we deal with that messiness? And so maybe you need like cleaner ways of thinking about and constructing these systems. And we also take that lens. So yeah, I think both are necessary. Yeah. [00:39:29]Swyx: Side question, because I feel like this also brought up another question I had for you. I noticed that you work a lot with your own benchmarks, your own evaluations of what is valuable. I would say I would contrast your approach with OpenAI as OpenAI tends to just lean on, hey, we played StarCraft or hey, we ran it on the SAT or the, you know, the AP bio test and got these results. Basically, is benchmark culture ruining AI? Or is that actually a good thing? Because everyone knows what an SAT is and that's fine. [00:40:04]Kanjun: I think it's important to use both public and internal benchmarks. Part of why we build our own benchmarks is that there are not very many good benchmarks for agents, actually. And to evaluate these things, you actually need to think about it in a slightly different way. But we also do use a lot of public benchmarks for like, is the reasoning capability in this particular way improving? So yeah, it's good to use both. [00:40:26]Swyx: So for example, the Voyager paper coming out of NVIDIA played Minecraft and set their own benchmarks on getting the Diamond X or whatever and exploring as much of the territory as possible. And I don't know how that's received. That's obviously fun and novel for the rest of the engineers, the people who are new to the scene. But for people like yourselves, you built Avalon just because you already found deficiencies with using Minecraft. Is that valuable as an approach? [00:40:57]Kanjun: Oh, yeah. I love Voyager. I mean, Jim, I think, is awesome.
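To picture the multi-role critiquing abstraction mentioned a moment ago, here is a hedged sketch in the spirit of MetaGPT, not a reimplementation of that paper: several role-conditioned prompts review the same draft, and the author model revises against the pooled feedback. The roles and prompts are illustrative, and `complete` is a placeholder completion call.

```python
ROLES = ["product manager", "software architect", "QA engineer"]

def multi_role_review(complete, spec):
    """One authoring pass, several role-played reviews, one revision pass."""
    draft = complete(f"Implement this spec:\n{spec}\nCode:")
    reviews = [
        complete(
            f"You are a {role}. Review this code against the spec.\n"
            f"Spec: {spec}\nCode: {draft}\nConcerns:"
        )
        for role in ROLES
    ]
    # A single revision over the pooled critiques; a real system might
    # iterate until the reviewers raise no remaining concerns.
    return complete(
        f"Spec: {spec}\nDraft: {draft}\n"
        f"Reviewer concerns: {' | '.join(reviews)}\nRevised code:"
    )
```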
And I really like the Voyager paper and I think it has a lot of really interesting ideas, which is like the agent can create tools for itself and then use those tools. [00:41:06]Swyx: He had the idea of the curriculum as well, which is something that we talked about earlier. [00:41:09]Kanjun: Exactly. And that's like a lot of what we do. We built Avalon mostly because we couldn't use Minecraft very well to like learn the things we wanted. And so it's like not that much work to build our own. [00:41:19]Swyx: It took, what, I don't know? [00:41:22]Kanjun: We had like eight engineers at the time; it took about eight weeks. So, six weeks. [00:41:27]Swyx: And OpenAI built their own as well, right? Yeah, exactly. [00:41:30]Kanjun: It's just nice to have control over our environment. We built our own sandbox to really try to inspect our own research questions. But if you're doing something like experimenting with agents and trying to get them to do things, Minecraft is a really interesting environment. And so Voyager has a lot of really interesting ideas in it. [00:41:47]Swyx: Yeah. Cool. One more element that we had on this list, which is context and memory. I think that's kind of like the foundational, quote unquote, RAM of our era. I think Andrej Karpathy has already made this comparison. So there's nothing new here. And that's just the amount of working knowledge that we can fit into one of these agents. And it's not a lot, right? Especially if you need to get them to do long-running tasks. If they need to self-correct from errors that they observe while operating in their environment. Do you see this as a problem? Do you think we're going to just trend to infinite context and that'll go away? Or how do you think we're going to deal with it? [00:42:22]Kanjun: I think when you talked about what's going to happen in the first wave and then in the second wave, I think what we'll see is we'll get like relatively simplistic agents pretty soon. And they will get more and more complex. And there's like a future wave in which they are able to do these like really difficult, really long-running tasks. And the blocker to that future, one of the blockers, is memory. And that was true of computers too. You know, I think when von Neumann made the von Neumann architecture, he was like, the biggest blocker will be like, we need this amount of memory, which is like, I don't remember exactly, like 32 kilobytes or something, to store programs. And that will allow us to write software. He didn't say it this way because he didn't have these terms, but that only really, like, happened in the seventies with the microchip revolution. It may be the case that we're waiting for some research breakthroughs or some other breakthroughs in order for us to have like really good long-running memory. And then in the meantime, agents will be able to do all sorts of things that are a little bit smaller than that. I do think with the pace of the field, we'll probably come up with all sorts of interesting things like, you know, RAG is already very helpful. [00:43:26]Swyx: Good enough, you think? [00:43:27]Kanjun: Maybe good enough for some things. [00:43:29]Swyx: How is it not good enough? I don't know. [00:43:31]Kanjun: I just think about a situation where you want something that's like an AI scientist. As a scientist, I have learned so much about my field and a lot of that data is maybe hard to fine-tune on, or maybe hard to, like, put into pre-training.
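For reference, the RAG mentioned above can be sketched in a few lines: accumulated notes are embedded, the nearest ones to a query are retrieved, and they are stuffed into the prompt. `embed` and `complete` are placeholder model calls, and cosine similarity over a flat list stands in for a real vector index.

```python
import numpy as np

class Memory:
    """Retrieval-augmented memory: embed notes, recall the nearest ones."""

    def __init__(self, embed):
        self.embed = embed
        self.texts, self.vectors = [], []

    def add(self, text):
        self.texts.append(text)
        self.vectors.append(np.asarray(self.embed(text)))

    def recall(self, query, k=3):
        q = np.asarray(self.embed(query))
        sims = [np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))
                for v in self.vectors]
        best = np.argsort(sims)[-k:][::-1]  # indices of the k most similar
        return [self.texts[i] for i in best]

def answer(complete, memory, question):
    notes = "\n".join(memory.recall(question))
    return complete(f"Relevant notes:\n{notes}\n\nQuestion: {question}\nAnswer:")
```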
Like a lot of that data, I don't have a lot of like repeats of the data that I'm seeing. You know, like if I'm a scientist, I've like accumulated so many little data points. And ideally I'd want to store those somehow, or like use those to fine-tune myself as a model somehow, or like have better memory somehow. I don't think RAG is enough for that kind of thing. But RAG is certainly enough for like user preferences and things like that. Like what should I do in this situation? What should I do in that situation? That's a lot of tasks. We don't have to be a scientist right away. Awesome. [00:44:21]Swyx: I have a hard question, if you don't mind me being bold. Yeah. I think the most comparable lab to Imbue is Adept. You know, a research lab with like some amount of product situation on the horizon, but not just yet, right? Why should people work for Imbue over Adept? And we can cut this if it's too like... Yeah. [00:44:40]Kanjun: The way I think about it is I believe in our approach. The type of thing that we're doing is we're trying to like build something that enables other people to build agents and build something that really can be maybe something like an operating system for agents. I know that that's what we're doing. I don't really know what everyone else is doing. You know, I can kind of like talk to people and have some sense of what they're doing. And I think it's a mistake to focus too much on what other people are doing, because extremely focused execution on the right thing is what matters. To the question of like, why us? I think like strong focus on reasoning, which we believe is the biggest blocker, on inspectability, which we believe is really important for user experience and also for the power and capability of these systems. Building non-leaky, good abstractions, which we believe is solving the core issue of agents, which is around reliability and being able to make them deployable. And then really seriously trying to use these things ourselves, like every single day, and getting to something that we can actually ship to other people that becomes something that is a platform. Like, it feels like it could be Mac or Windows. [00:45:49]Swyx: I love the dogfooding approach. That's extremely important. And you will not be surprised how many agent companies I talk to that don't use their own agent. [00:45:59]Kanjun: Oh no, that's not good. That's a big surprise. Yeah, I think if we didn't use our own agents, then we would have all of these beliefs about how good they are. Wait, did you have any other hard questions you wanted to ask? [00:46:08]Swyx: Yeah, mine was just the only other follow-up that you had based on the answer you just gave was, do you see yourself releasing models or do you see yourself, what are the artifacts that you want to produce that lead up to the general operating system that you want to have people use, right? And so a lot of people just as a byproduct of their work, just to say like, hey, I'm still shipping, is like, here's a model along the way. Adept took, I don't know, three years, but they released Persimmon recently, right? Like, do you think that kind of approach is something on your horizon? Or do you think there's something else that you can release that can show people, here's kind of the idea, not the end products, but here's the byproducts of what we're doing? [00:46:51]Kanjun: Yeah, I don't really believe in releasing things to show people like, oh, here's what we're doing that much.
I think as a philosophy, we believe in releasing things that will be helpful to other people. [00:47:02]Swyx: Yeah. [00:47:02]Kanjun: And so I think we may release models or we may release tools that we think will help agent builders. Ideally, we would be able to do something like that, but I'm not sure exactly what they look like yet. [00:47:14]Swyx: I think more companies should get into the releasing evals and benchmarks game. Yeah. [00:47:20]Kanjun: Something that we have been talking to agent builders about is co-building evals. So we build a lot of our own evals and every agent builder tells me, basically evals are their biggest issue. And so, yeah, we're exploring right now. And if you are building agents, please reach out to me because I would love to, like, figure out how we can be helpful based on what we've seen. Cool. [00:47:40]Swyx: That's a good call to action. I know a bunch of people that I can send your way. Cool. Great. [00:47:43]Kanjun: Awesome. [00:47:44]Swyx: Yeah. We can zoom out to other interests now. [00:47:46]Alessio: We got a lot of stuff. So we have Sherif from Lexicon, the podcast. He had a lot of interesting questions on his website. You similarly have a lot of them. Yeah. [00:47:55]Swyx: I need to do this. I'm very jealous of people with personal websites right there. Like, here are the high-level questions of goals of humanity that I want to set people on. And I don't have that. [00:48:04]Alessio: It's never too late, Sean. [00:48:05]Swyx: Yeah. [00:48:05]Alessio: It's never too late. [00:48:06]Kanjun: Exactly. [00:48:07]Alessio: There were a few that stuck out as related to your work that maybe you're kind of learning more about. So one is why are curiosity and goal orientation often at odds? And from a human perspective, I get it. It's like, you know, would you want to like go explore things or kind of like focus on your career? How do you think about that from like an agent perspective? Where it's like, should you just stick to the task and try and solve it within the guardrails as much as possible? Or like, should you look for alternative solutions? [00:48:34]Swyx: Yeah. [00:48:34]Kanjun: I think one thing that's really interesting about agents actually is that they can be forked. Like, you know, we can take an agent that's executed to a certain place and say, okay, here, like, fork this and do a bunch of different things, try a bunch of different things. Some of those agents can be goal oriented and some of them can be like more curiosity driven. You can prompt them in slightly different ways. And something I'm really curious about, like what would happen if in the future, you know, we were able to actually go down both paths. As a person, why I have this question on my website is I really find that like I really can only take one mode at a time and I don't understand why. And like, is it inherent in like the kind of context that needs to be held? That's why I think from an agent perspective, like forking it is really interesting. Like I can't fork myself to do both, but I maybe could fork an agent, like, at a certain point in a task. [00:49:26]Swyx: Yeah. Explore both. Yeah. [00:49:28]Alessio: How has the thinking changed for you as the funding of the company changed? That's one thing that I think a lot of people in the space think is like, oh, should I raise venture capital? Like, how should I get money?
How do you feel your options to be curious versus like goal oriented have changed as you raised more money and kind of like the company has grown? [00:49:50]Kanjun: Oh, that's really funny. Actually, things have not changed that much. So we raised our Series A, $20 million, in late 2021. And our entire philosophy at that time was, and still kind of is, like: how do we figure out the stepping stones, like collect stepping stones that eventually let us build agents, kind of these new computers that help us do bigger things. And there was a lot of curiosity in that. And there was a lot of goal orientation in that. Like the curiosity led us to build CARBS, for example, this hyperparameter optimizer. [00:50:28]Swyx: Great name, by the way. [00:50:29]Kanjun: Thank you. [00:50:30]Swyx: Is there a story behind that name? [00:50:31]Kanjun: Yeah. Abe loves carbs. It's also cost-aware. So as soon as he came up with cost-aware, he was like, I need to figure out how to make this work. But the cost awareness of it was really important. So that curiosity led us to this really cool hyperparameter optimizer. That's actually a big part of how we do our research. It lets us experiment on smaller models and have those experiment results carry over to larger ones. [00:50:56]Swyx: And you also published scaling laws, which is great. I think the scaling laws paper from OpenAI was like the biggest, and from Google, I think, was the greatest public service to machine learning that any research lab can do. Yeah, totally. [00:51:10]Kanjun: What was nice about CARBS is it gave us scaling laws for all sorts of hyperparameters. So yeah, that's cool. It basically hasn't changed very much. So there's some curiosity. And then there's some goal-oriented parts. Like Avalon, it was like a six to eight week sprint for all of us. And we got this thing out. And then now different projects do like more curiosity or more goal orientation at different times. Cool. [00:51:36]Swyx: Another one of your questions that we highlighted was, how can we enable artificial agents to permanently learn new abstractions and processes? I think this might be called online learning. [00:51:45]Kanjun: Yeah. So I struggle with this because, you know, that scientist example I gave. As a scientist, I've like permanently learned a lot of new things. And I've updated and created new abstractions and learned them pretty reliably. And you were talking about like, okay, we have this RAM that we can store learnings in. But how well does online learning actually work? And the answer right now seems to be like, as models get bigger, they fine-tune faster. So they're more sample efficient as they get bigger.
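CARBS itself is only loosely described in this conversation (cost-aware, local search, ideas borrowed from plasma physics), so the following is explicitly not its algorithm, just a loose sketch of what cost-aware local search over hyperparameters can look like: candidates are scored by performance per unit of compute cost, so cheap small-model runs can guide where the expensive compute goes.

```python
import random

def cost_aware_search(evaluate, start, neighbors, budget=20.0):
    """evaluate(params) -> (score, cost); neighbors(params) -> candidate list.

    Costs are assumed positive, e.g. GPU-hours for a training run.
    """
    best = start
    best_score, best_cost = evaluate(best)
    spent = best_cost
    while spent < budget:
        candidate = random.choice(neighbors(best))
        score, cost = evaluate(candidate)
        spent += cost
        # Cost-awareness: prefer candidates that buy more score per unit of
        # cost, rather than the highest raw score regardless of expense.
        if score / cost > best_score / best_cost:
            best, best_score, best_cost = candidate, score, cost
    return best, best_score
```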
Ever wondered how the complexity of large language models (LLMs) and cybernetics can revolutionize the way we communicate and interact? Are you curious about the sociopolitical implications of such advancements? Join us in an enlightening discussion with the well-versed Mark Rainey, as we dissect these technologies and their potential impact on our society. Our journey begins with an exploration of LLMs, their potential, and inherent limitations. We discuss the nuanced logic of human language, Boolean operators, and their influence on the design of these systems. We then shift gears to delve into the intriguing world of cybernetics, viable systems, and the behavioralist and cognitivist wars. We scrutinize the implications of these advancements and the tests used to measure intelligence, all the while contemplating the potential and pitfalls of Stafford Beer's Viable System Model. As we navigate further, we probe into the relationship between technology and Marxism, questioning the teleological feedback loop of capital and its effects on the proletariat. We also explore the exciting realm of cybernetic planning and the potential role of LLMs in such systems. Finally, we reflect on the concepts of agency, alienation, class dynamics, and the implications of capitalism on social reproduction. This rich and riveting conversation with Mark Rainey is not to be missed!
Joël recently had a fascinating conversation with some friends about the power of celebrating and highlighting small wins in their lives. He talks about bringing this into his work life. May Stephanie interest you in a secret she learned regarding homemade pizza? RubyConf is coming! Who's submitting talks?! It's hekkin scary. Don't fret! Joël and Stephanie are here to help. Today, they discussed submitting a conference talk proposal from start to finish. Sheet pan pizza (https://anewsletter.alisoneroman.com/p/may-we-interest-you-in-a-party-of) RubyConf CFP (https://sessionize.com/rubyconf-2023/) Speakerline.io (https://speakerline.io/) WNB.rb (https://www.wnb-rb.dev/) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we've come here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I've been having a really interesting conversation with some of my friends recently about the power of celebrating and highlighting small wins in our lives, both in, like, kind of sharing it with each other, like, you know, if something small happens, it's good for me to share it with my friends. But also, where it becomes really cool is where the friend group kind of gets together and celebrates that small win for one person, and how that can be, like, a small step to take, but it's just really powerful and encouraging for a friend group. And I think that applies not just among friends but in a team or other grouping in the workplace. STEPHANIE: That's so fun. How are you celebrating these small wins, like, over text? Is that the main way you're communicating something good that happened? JOËL: It depends on the friend group. I think, like, different friend groups will have, like, a different kind of cadence for the kind of things they do. And do they all hang out together? Do they have a group text, things like that? One of the friend groups I'm a part of, we meet weekly to go climbing at a rock-climbing gym, so that's kind of our hang-out. And [inaudible 01:34], we're there to do stuff at the gym, but it's also a social thing. And it's an opportunity to be like, "Oh, you know, did that thing work out, you know, at work?" "You know, good for you," or "Did you get this project accepted?" And yeah, when small wins come up, it's a great time to celebrate. STEPHANIE: That's awesome. I think having regular time that you see people and being able to ask them about something that they had mentioned previously is so special and really important to me, like, in bonding and building the relationship. I also love the idea of celebrating milestones. So, this is, I guess, more of a bigger win, but milestones that aren't traditionally celebrated. You know, so, yeah, we'll have, like, a party when someone graduates or someone gets married. But I also have really enjoyed celebrating when someone gets a promotion at work. And, you know, maybe that's not, like, a once-in-a-lifetime thing, but it's still so worthy of going out for dinner or buying them a drink. I also will maybe, like, send my friends a little treat if I know that they did something small but hard for them, right? And sometimes that's even, like, responding to a scary email that they had sitting in their inbox for a while. Yeah, I really love that idea of supporting people, even in the small things in life that they do.
JOËL: Yeah, and that's really validating, I think when you've done something hard and then a friend or a colleague reaches out to you. And it's kind of like, hey, I saw that. Good for you. STEPHANIE: How have you been thinking about bringing this into your work life? JOËL: I think it's about being on the lookout for things that other people do. And I think one thing I like to do is kind of publicly calling that out. It sounds like a negative thing, right? But just giving people kind of a public shout-out when they've succeeded at something. I think we're all kind of socialized not to maybe talk too much about accomplishments, especially if they feel kind of small and mundane. Being somebody else, I think, gives you a lot more leeway to say, "Hey, no, Stephanie, I see that you did that thing. And maybe it feels kind of like, oh no, you're just doing your job, but I think that's cool. And I want to, you know, just give you a shout-out in the company Slack channel or something." It doesn't have to be something big. You know, I'm not sending champagne to your home. But having that opportunity to just kind of celebrate something small and say, "Wait a minute, let's pause and acknowledge that you just did something cool." STEPHANIE: Yeah, I was thinking about how that's kind of, like, amplifying the win a little bit. I've definitely done this before, too, when I see someone share a win of theirs, maybe in a smaller Slack channel or kind of a personal level, or even just to me individually. And I really want other people to know that that happened to you and that they, you know, did an awesome job. And so, I have enjoyed, you know, sharing them more publicly on their behalf if they are comfortable with it. JOËL: And I'll say on the other end of that, I think it feels really good to be acknowledged by someone else that you've done something that they recognize. It's fun to share a win with other people because you're excited, but it's doubly fun when somebody else shares it for you. STEPHANIE: I agree. I think one thing that you, Joël, do really well, actually, is sharing your own personal wins in a very casual way. That's something I've always admired about you is how you recognize the small wins for yourself. JOËL: It's taken me, I think, a long time to get to that and find a way where, you know, you are sharing things that are fun for other people to see, things that might be inspiring, things that are kind of cool, and that are not just kind of, like, self-aggrandizing, like, bragging about stuff. It can be a fine line to walk. And, to a certain extent, you're a little bit marketing yourself. But yeah, I think I've kind of hit that right balance. STEPHANIE: Yeah, I think the thing that makes it work is that there's usually, like, a challenge or something that maybe you, like, went through a journey or overcame a little bit. And I think that's what is the inspiring part that makes me feel like, oh, okay, so, like, this is a realistic thing that, you know, Joël went through and, you know, he struggled with it maybe. But then, like, ultimately, you know, had some insights or came out the other side with some learnings. And I like that it's real, right? It's not just, "Hey, like, I did this, like, cool thing." It's like, "I went on this journey." And I find that really motivating when I am in that kind of situation next time. JOËL: There's a power to stories, right? And I think especially when you can make something relatable to other people. 
So, it's not just like, "Hey, I did a cool thing," which, you know, is also fun. But being able to say, "Hey, I messed up," or "I, you know, had this challenging problem dropped in my lap, and here's the journey I went on to resolve it. Hopefully, it acts a little bit as like a here's a template you could follow if you're ever in that situation." But maybe also a little bit of, like, inspiration for others as well, just being like, hey, Joël, messes up sometimes. So, Stephanie, what is new in your world? STEPHANIE: Speaking of small wins, I have finally perfected our at-home pizza situation for making pizza at home, which I have been struggling with for so long. Because I always was excited by the idea of making pizza and, you know, sometimes we would make our own dough. And sometimes, we would buy store-bought dough, but it never ended up being as crispy and cooked well-done the way that I want it to. It was always, like, a little bit mushy on the inside. The dough wasn't totally baked. And I would inevitably be disappointed when I had been, you know, building that excitement for pizza. And the other week, I found a new recipe to try, and I think it will be my new go-to recipe for making pizza at home. JOËL: I don't know if I'm allowed to ask this on air, but what's the secret? STEPHANIE: The secret? Well, okay, the first secret and/or learning that I've gathered is to not put as much sauce, cheese, and toppings as you think you want to because that's definitely what contributes to the under-doneness of the dough. But I pivoted to trying a more grandma-style crust that is kind of more like focaccia; really, you know, it involves a lot of olive oil. And you're cooking it for a while on pretty high heat to ensure the crispness and, you know, that it's cooked through. And, I mean, I love focaccia bread, so I don't mind it as, you know, the base of my pizza. It is a bit different from, you know, other kinds of pizzas. And if we had, like, a really, you know, fancy pizza oven to do the, like, super high heat, like, Neapolitan-style deal, I would also really enjoy that. But you know what? That's just not the reality of my home kitchen. So, I have really been enjoying this pizza recipe by Alison Roman that I will link in the show notes. But yeah, it has really changed my at-home pizza game. And I hopefully won't have any of my, you know, soggy dough bottom problems anymore. JOËL: So, you mentioned just kind of offhand, like, oh yeah, you know, the crust is just kind of, like, how you make focaccia. It sounds like you've made focaccia yourself before. STEPHANIE: I have made focaccia at home, and so I think applying it to Pizza was a real, like, light bulb moment for me. But, you know, it's not, like, totally effortless. But I think it's a lot more forgiving than other types of bread and, therefore, other types of pizza crust. And the one really enjoyable thing about making focaccia is there's a step where you use your fingers, and you're kind of holding your hands like you're playing a piano. And you, like, press into the dough after it has risen a little bit to create dimples and, you know, lets the oil kind of seep into the little holes. And it's very satisfying. It's a very good feeling. JOËL: The kind of the tactile aspect of it? STEPHANIE: Yeah, exactly. It's very fun. [chuckles] So, yeah, it's just an added bonus to my pizza adventures. JOËL: A win on top of a win. We'll take it. 
So, there's some news in the Ruby community this week because RubyConf has just opened their CFP, their call for proposals. And so, they're asking for people to submit their ideas for conference talks, and if you're lucky, you get picked to speak at the conference. And, Stephanie, I know that over the course of a year, you have a document where you collect conference talk ideas so that you have ideas to work on when the CFP comes around. Are you looking at any of them to potentially submit to RubyConf this year? STEPHANIE: Joël, I have to be honest with you; so far, I only have one idea on that list. [laughs] But that is one that I suppose could eventually become a conference talk proposal. So, when I heard the news, I definitely went down the rabbit hole of revisiting that idea and kind of starting to think about if it's something I wanted to pursue. I think the answer is yes. I definitely got a big push of motivation when I was like, oh, it's open. Like, now I can just get started if I want to. And then I was like, well, it's open for a month, so I could also just sit on it a little longer, you know, put it aside and revisit it when I have a little more time. But yeah, I was pretty excited because I think it gave me the motivation I needed to really think a little more deeply about this idea that I have. Otherwise, I think it would have continued to sit half-baked in my document for a long time. JOËL: And just for all of our listeners, the CFP just opened on July 12th, and it closes on August 20th. So, if you are listening and it's before August 20th, you still have a shot to submit your idea to be a speaker. STEPHANIE: Something that I've talked about with my other friends who enjoy speaking at conferences is how they come up with proposals, and I found that we all have different approaches. And I am really interested in digging into this further with you. But I realized that, for me, I really struggle with just, like, throwing out ideas and submitting them before I feel really confident that it's something that I have interesting things to say really, or, like, kind of adding a new perspective, or maybe approaching a topic that hasn't been approached before. I feel sometimes a bit hindered by my process, where I need to feel really confident before submitting something. Because a friend of mine she was telling me that her approach is to submit CFP for topic ideas that she wants to explore further. So, maybe it is something that she doesn't know a lot about yet, and she's using this process to learn more and dive deeper, and that, you know, gives her a reason to do that, whereas that seems really scary to me. JOËL: That's really interesting because it sounds like kind of an underlying motivation for your friend for submitting these talks is curiosity, exploration. And thinking back to myself, I think I usually submit ideas that have me excited or passionate, so that's kind of my underlying motivation for a talk. What would you say is maybe your underlying motivation when you're pitching an idea? STEPHANIE: Yeah, I think, for me, it is impact and, like, having an impact, especially for something that I've struggled with and wanting to share my experience and, hopefully, sharing something where other people can relate to. It's funny you mentioned that your motivators are, you know, excitement and passion. Because another person that I kind of had this conversation with mentioned that she writes talks based on experiences that have been very aggravating [chuckles] and painful for her. 
So, that ends up being, you know, a big motivator because she's so frustrated. [laughs] And, you know, wants to share this journey that she went on from a point of, I guess, maybe similar to me, like, making it easier for someone else who might find themselves struggling with the same problem. JOËL: I kind of like the idea of taking that to an extreme, and you're, like, rage submitting. STEPHANIE: Yeah, I feel like there would just be an infinite number [laughs] of topics that you could come up with in that case. JOËL: Like, I'm so angry at this bug. It cost me a week of my life. And now, it is going to get the spotlight on it at RubyConf. And I get to share that moment with everyone, express a lot of emotions, and, hopefully, save everyone else from having to do the same thing I did. STEPHANIE: Yeah. Or this terrible bug cost me a week of my life, and now you all get to hear about it. [laughter] Let me tell you -- JOËL: Yes. STEPHANIE: Exactly all the problems that I had to deal with. JOËL: And, honestly, as a narrative, it kind of works, right? There are different types of talks. Sometimes you go to a talk because you really want to learn a deep topic. Sometimes I just want to go and listen to, like, a good horror story. If someone's a good storyteller, like, yes, there are lessons I can take away from it, and I can be like, okay, this is what I can do. And I heard Stephanie talk about this bug, and so I'm going to use inspiration from that the next time I hit a bug. But sometimes it's also just good to, like, go there and sit and be, like, yes, I've been there. Yeah, kind of following along with the story and, you know, kind of the ups and downs because it is so relatable. STEPHANIE: Yeah. And I like that you mentioned that there are different types of talks that leave the audience, you know, with different things. Because I know some people who have been interested in speaking in the past maybe feel a bit hesitant to because they don't think they have something to say, or, like, they don't have something to share that other people might find interesting. And to that, I really believe that everyone has something that they are knowledgeable about and something that they can bring to others that is valuable. Even if it's not for every single person at the conference if you give a talk that is meaningful to a handful of people, right? Especially because, you know, there's people of all different kinds of levels at these conferences. Those are really important too. In fact, I think it can be even more powerful because they are targeting a specific audience. JOËL: And I think you've hit on a key point, that is, it's great when you're building the talk, but even when you're pitching the idea is, who is this talk for? Who is the audience for this talk? And if the audience is whoever shows up at the conference center, maybe you need to workshop a little bit more. STEPHANIE: Yeah, because one thing can't really be for everyone. JOËL: Right. You're kind of diffusing its impact at that point. You were talking about how sometimes it's difficult to take an idea, flesh it out, and submit it until you're feeling, like, 100% confident about it. I'm curious how the transition goes from kind of the earlier phase of, like, you have a document, and I assume these are, like, bullet points with, like, one sentence, or maybe even just title idea. How does it go from bullet point to multiple paragraphs that might be submittable? STEPHANIE: Yeah, that's a good question. 
I think it starts as a bullet point because maybe I notice something that caused me pain or caused a teammate pain, and maybe we had, like, kind of an interesting discussion about it. And, yeah, I write it down as something to explore further as, like, is this an idea that can be a little broader in scope, can have a few more applications beyond this particular instance that sparked it? And so, maybe from there, I will think about, like, okay, like, the pain point that I jotted down was coupling and tests, right? And let me go, you know, jog through my memory of other times where I kind of felt a similar thing or was doing some code review and also noted a similar problem. And I think if I am able to find enough, like, supporting examples that might go along with this, for me, it's really a feeling. [laughs] Then I'll try to extract that a little further and come up with a theme, right? A theme that's a little more encompassing because what I hope to do is to be able to come up with some kind of takeaway that can be a strong thesis for a conference proposal. JOËL: And that's kind of how conference proposals work, right? There are a few different sections you have to fill out. But the really important one is the abstract, which is usually just a few sentences. It's character limited. And that's what's got to sell your talk both to the committee, but then also, that's what's going to be publicly viewable. And so, that's what's going to get people excited to show up at your conference room. So, my kind of secret trick for writing a proposal is to do the abstract last. Even though it's that first section on the form, I struggle to write a compelling abstract. And so, I'll go through and fill out some of the other fields that are only for the committee, and there'll be, you know, a lot of detail in there. And then, sometimes, I find that I put, like, really good compelling sentences in there, and I'll pull them out and put them in the abstract and kind of use that to start. But those other sections, like the pitch and all that, I think, are a great place to start because you get to go a little bit more into detail. And you can talk about here are the themes I want to address. Here are maybe the examples I'm going to be building around. Here's the audience that I want to speak to. STEPHANIE: Audience is interesting for me because I tend to write the kind of talks that I wish I had watched earlier or, like, what really speaks to me. In fact, one of my first conference talks was literally called The Intro to Abstraction I Wish I'd Received. [laughs] So, that is a good place for me to start: thinking about, like, well, who was I at the time? Like, what kind of developer was I at the time that I, like, really needed this information or really wished for this information? And similarly, I had mentioned, you know, like, maybe my ideas are coming from conversations I've had with other people. So, I'm imagining those other people, and I'm asking myself, like, who are they? Like, where are they in their development careers? And is there a specific demographic or audience persona that kind of fits them, and, you know, usually there is, right? And what is nice is I can kind of go to them as well and be like, "Hey, like, I have this idea. Do you think this would be helpful for you? Or is this something you would be interested in watching?"
And that at least helps me ground it in an audience that is real to me as opposed to kind of trying to imagine who might show up without a clear idea of what takeaway they might get or, like, what they might be wanting in a conference talk. JOËL: Would it be fair to say that when you're coming up with an idea for a presentation, the audience you have in mind is you or maybe a particular version of you, so you two years ago or you five years ago? STEPHANIE: Yeah, I think that's a really compelling way for me to write these because, you know, I almost think it kind of goes back to the idea that everyone has something to say, right? It's like I have something to say to me, my past self. And I believe that other people, you know, are in that position as well. And so, that's been my approach. But I'm curious about yours because I think the types of talks that you write are maybe less about, like, what you wished you had learned earlier and more for a different kind of audience. JOËL: Yeah, I think they are...I start with a topic that I'm excited about. And then, sometimes, I have to find what element of it I want to pull out because it can be kind of a whole kind of cloud of themes, and I have to pick one to commit to. Depending on the one I commit to and the approach I want to take, it will define the audience that...or vice versa. I can say, okay, this is specifically for this audience, and that will show how I want to approach it. So, for example, I gave a talk at RailsConf this past spring on the math every programmer needs, talking a little bit about discrete math and how it's applicable in day-to-day programming. And I think I very quickly came to the realization that I wanted this talk to be for people who had never done a formal, like, discrete math class, likely people who don't have a traditional, like, CS background. And so, once I knew this is the audience I'm speaking to, that really shaped how I pitched the talk, what elements I want to bring in, what examples I'm using, what do I want to emphasize during this talk. Finding that audience really helped that proposal come together. Even though I knew...before I found the audience, I knew I wanted to talk about discrete math and how cool and relevant it was to day-to-day programming. But that's not enough. I needed to really fit it to an audience. STEPHANIE: Yeah, I have two thoughts about this. One was that when you were writing the proposal for this talk, I remember you had shared a bunch of your different ideas about the topic with your co-workers. And it was almost kind of, like, a buffet of topics. And you were asking for feedback about, like, hey, like, what is interesting to you? Like, what would be, like, helpful for you to know? And I think that ended up really helping you focus on, like, what your audience would want. But I'm curious, do you recall, like, how you decided that you wanted to target people who didn't have that traditional CS background? Like, why was that important to you? JOËL: I think I'm generally most excited about taking some, like, larger technical insights and bringing them to people who maybe have some of the intuition but don't always know why the things they do work the way they do and kind of bridging a little bit of that, like, practical, theoretical gap. That's the space that I'm probably most excited about when it comes to sharing and teaching, helping people go from things that are really practical and then just throwing just enough theory at them.
But keeping it really grounded so that they can kind of hit the next level of where they want to be. Because that's an area that I think I thrive in, an area that gets me most excited to share about. And so, I think, naturally, I'm kind of moving in that direction. But also, like you said, it's talking to other people and seeing, like, what are the elements that are interesting to you? And then, like, once you start seeing some of these, it's like, okay, well, what is exciting in talking about Boolean algebra? Do I want to go really deep on some of the theory? Or do I want to speak to, you know, someone who has a vague notion of this because they've been writing code for several years but doesn't know the theory behind it? That interaction, I think, was more compelling to me. STEPHANIE: Got it. It's almost like knowledge sharing at just this really high level, or, like, at a really large scale. I like that a lot. JOËL: So, you highlighted something interesting, and that is that writing a proposal doesn't have to be a solo activity, and getting feedback on ideas can totally transform your proposal. Do you find that you reach out to a lot of people to get feedback on your proposals? And what does that look like in practice? STEPHANIE: Oh yeah, I definitely need someone to rubber-duck an idea for me. [laughs] JOËL: So, even at the idea stage. So, you've got that topic sentence or whatever, and then you say, "Someone, can you sit down with me, and we'll just talk through places this might go?" STEPHANIE: Yeah. I have found that really helpful for me. Otherwise, I think I get a little too precious about it, right? If I've just been working on it by myself. And then it feels really scary to submit it and be like, okay, I don't know if this is any good. It might get rejected. But the first time that I did a conference talk, WNB.rb, the women and non-binary Ruby group I'm in, they had organized a CFP working group channel. And so, there were, you know, a handful of people, some of them writing conference talks for the first time, some of them having done it before, just getting together and holding each other accountable, and checking in and asking for feedback. And, yeah, I think it helps to find other people who have done it before. I've also, you know, reached out to people whose conference talks I loved and felt really inspired by. And if they were available, like, kind of asking them how to get started. But also, like, peer support as well, from other people doing it for the first time, can be really important in just making it feel a little more manageable, a little less lonely. I think there are, like, more people out there who are interested in dipping their toe in conference speaking than one might think because it can definitely feel very overwhelming. But with a support group, I think it makes it a lot easier. JOËL: So, you've gotten feedback. You've gotten support. You've put this idea together. You're feeling pretty confident. You hit that submit button. And now you can't take it back. [laughs] How does that feel at that point? STEPHANIE: Terrifying. [laughter] Like, for me, I have to exorcise it from my mind and not think about it, not dwell on it at all. And like, ideally, you know, when I hear back, I will have forgotten all about it so that, you know, I won't have spent the whole month or however many weeks, like, ruminating about whether or not it was accepted. Yeah, I really struggle with that part, I think, because I, yeah, have a hard time with rejection, you know, I'm just going to say it.
[laughs] And, you know, it's hard for me not to take it personally. But I think that's actually one area that I want to get better at: feeling a little less, like, personally attached. And I think working with others helps me with that because it's not just something I've, you know, like, squirreled away and feel very attached to. Working with others and then, like, hopefully, coming up with other ideas along the way, right? Within conversations that we have that might spark ideas for the future. So, knowing that if this one doesn't end up being submitted, there's always next time. There's always another conference season. And also, you know, celebrating others when their conference talks do get accepted, that is also really buoying because it helps me direct that energy into wanting to celebrate my friends and inspiring me for next time. Joël, I know you oftentimes submit more than one proposal, and I'm wondering if that helps with those feelings of being too attached to a topic idea or, you know, worrying about whether they will be accepted. JOËL: I think it definitely helps with the attachment thing that I've not kind of put all of my work and all of my...like, pinned all of my hopes on one topic idea. Sometimes it can hurt, you know, if you've got, like, you know, two or three and, like, you just get multiple rejection notices in a day. That kind of sucks sometimes. But I think, in some ways, yes, it does help with that feeling of rejection because you've not tied yourself emotionally so much to a single idea that has to, like, succeed or fail. STEPHANIE: Do you then submit those ideas to other conferences? JOËL: The ones that get rejected? Yes. I've definitely resubmitted ideas. In fact, I plan to resubmit a rejection to RubyConf this year, so we'll see how that goes. Actually, now that I think of it, that could be a really fun opening line for a talk. Like, let's say it gets accepted. And, like, you know, you're on the stage, and you open it, and you're just like, "This talk got rejected." That'd be a fun intro. STEPHANIE: Yeah, it would be. I think, oftentimes, you know, it's not always even about the idea itself, right? It's just about maybe the theme of the conference that year, and what they were looking for, and the direction they wanted to go. And there are other conferences, or another year, right? Where maybe there isn't another talk that touches on the same, like, area. And that will be the opportunity where it is a fit for the conference. JOËL: Yeah, definitely. It is a little bit haphazard to get in. And just because your talk gets rejected does not mean it's a bad idea. It just means that it wasn't the best fit for that conference at that time. STEPHANIE: I actually want to plug a website, speakerline.io, where people can post all of their, you know, proposals that they've submitted, whether they were accepted or rejected. And I found that resource really helpful in, you know, just knowing that, like, very good ideas get rejected sometimes, and that's okay. As well as, you know, kind of trying to get a sense of, you know, for the ones that were accepted, okay, like, what about these proposals really stood out or, like, really shone? And how might I get some inspiration from that to incorporate next time around? JOËL: So, you've submitted a proposal. Terrifying. You're trying to not think about it for a couple of weeks, assuming you're submitting a couple of weeks ahead of the deadline, I don't know. Are you a last-minute kind of submitter?
STEPHANIE: I'm a probably two or three days before the deadline kind of submitter. JOËL: So, you've submitted the talk two or three days before the deadline. I guess, like, a couple of weeks after that to get reviewed. And then, you get that notification that says, you know, you've got a response on your talk from the committee. Are you the kind of person that, like, drops everything and immediately looks at it? Do you kind of, like, wait for, like, maybe a moment where you're, like, more in a good zone emotionally before you open that email to find out if you're accepted or rejected? What's your strategy? STEPHANIE: Oh God, I don't think I have the willpower to wait until I'm, you know, in an emotionally good state. I will just click on that thing. And yeah, I think, I mean, having been on the receiving end of acceptances, rejections, and, once, being waitlisted, [laughs] which was a real doozy because it's like, great, like, now I have to write a talk. But, like, I don't know if it will actually be given or not. I think this is also where the support group really shines as well because maybe some of my other friends are also sharing the results and making it okay, like, sharing a rejection, right? And I think it's nice to just have, like, an outlet for that, whatever the outcome is, and not having to just, like, sit alone in either the sadness or the happiness, right? Like, we're talking about celebrating small wins. Like, it really is even more special when someone else can validate your success. JOËL: Have you ever had to navigate kind of, like, slight feelings of jealousy where it's, like, another friend gets in? Or maybe somebody else gets in with, like, your topic, and their talk got picked instead of yours? STEPHANIE: Yeah, for sure. I think it's totally natural and human. I think one nice thing, though, is that there are so many conferences all of the time. You know, this is not a once-in-a-lifetime situation, right? And maybe the next conference, you know, the people who submit will be different, the people who review will be different. And you've kind of already done the hard part of writing the thing. I actually was just thinking about how a few of my friends are writers, and the submission process for them, you know, of pitching a book proposal or short stories to, like, a magazine or something like that, is, like, fraught with rejections. And they've really built that muscle of acceptance and, like, knowing that it's not a reflection of their value, and building the resilience to keep trying. And so, yeah, I think definitely going through that process has helped me feel a little bit more comfortable with that, not completely, but I guess it's like exposure therapy, [laughs], isn't it? JOËL: I think that the not helpful answer here is that it gets better when you've given more talks. When you're trying to break in and give your first talk, right? It is such a big deal. And, you know, the high of getting accepted is just, you know, mountain top. But the feelings of rejection are also similarly intense. As opposed to when you've done a few, then it's like, you know what? Win some, lose some. And it's much easier to move on. STEPHANIE: I think another suggestion that I might have would be to maybe start smaller, right? Even giving a talk at work for your co-workers, or, as a next step, giving a talk at your local meetup, or then a small regional conference.
There are so many in-between steps, I think, that exist that bestow the benefits of giving a conference talk, and meeting new people, and feeling good about the impact you're having beyond some of the bigger, more traditional conferences. So, if that does seem really scary or, you know, maybe you've given it a shot and feel a little bit demoralized from trying again, there is a group out there who will benefit and be interested in hearing what you have to say. JOËL: That's a really important reminder because just because a conference rejected your talk doesn't mean that it's a bad idea. And yes, you can shop it around and bring it to other conferences, but maybe think about other venues for the idea. You've already done the hard work of crafting a pitch, so maybe turn it into a blog post and share it that way. Maybe turn it into a pitch to be a guest on a podcast that you enjoy. Podcasts that do weekly guests are constantly looking for interesting people to talk to. And you've kind of, like, done all the work for them, where you can say, "Hey, here's the thing I'm an expert on. Ask me questions about this." And most places will gladly bring you on. STEPHANIE: Yeah, I like to think of conference talks as really, like, a supplement of what you're learning and investing in in your career, right? You know, it is nice to be able to share all of those things in a perfectly wrapped package. But also, there are so many different ways for that to manifest. And there are people who know that speaking is not for them and really focus on writing, and that's, like, their avenue. But yeah, it's not...I don't think it's, like, a pinnacle of, like, something you have to do in your career at all. It's just something that can be fun. JOËL: Yeah, and sharing takes many different forms. It can be a talk in a conference room, but it can just as easily turn into maybe some kind of video, some kind of written work. Like I said, it could be an interview on a podcast. There are so many different ways that you can share your ideas. And just because it didn't fit in one place, now that you've done the work to kind of polish that gem a little bit, oftentimes, it's very little additional work to just convert it to a different form. STEPHANIE: Yeah, I like what you just said about polishing a gem. Really, I think the value for me is having a channel to funnel and reflect on my experiences, and, you know, conference talks happen to be, like, one form of that for me. But I hate to say it's about the journey, not the destination, but sometimes it is. And, yeah, I think everyone kind of has to, like, figure that out for themselves. JOËL: That being said, sometimes the destination is pretty exciting. And when you open that email that says, "Congratulations, your talk has been accepted," wow, what a rush. STEPHANIE: On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!! 
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
It's updates on the work front today! Stephanie was tasked with removing a six-year-old feature flag from a codebase. Joël's been doing a lot of small database migrations. A listener question sparked today's main discussion on gerunds' interesting relationship to data modeling. Episode 386: Value Objects Revisited: The Tally Edition (https://www.bikeshed.fm/386) RailsConf 2017: In Relentless Pursuit of REST by Derek Prior (https://www.youtube.com/watch?v=HctYHe-YjnE) REST Turns Humans Into Database Clients (https://chrislwhite.com/rest-contortion/) Parse, don't validate (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/) Wikipedia Getting to Philosophy (https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosophy) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, this week, I've been tasked with something that I've been finding very fun, which is removing a six-year-old feature flag from the codebase that is still very much in use in the sense that it is actually a mechanism for providing customers access to a feature that had been originally launched as a beta. And that was why the feature flag was introduced. But in the years since, you know, the business has shifted to a model where you have to pay for those features. And some customers are still hanging on to this beta feature flag that lets them get the features for free. So one of the ways that we're trying to convert those people to be paying for the feature is to, you know, gradually remove the feature flag and maybe, you know, give them a heads up that this is happening. I'm also getting to improve the codebase with this change as well because it has really been propagating [laughs] in there. There wasn't necessarily a single, I guess, entry point for determining whether customers should get access to this feature through the flag or not. So it ended up being repeated in a bunch of different places because the feature set has grown. And so, now we have to do this check for the flag in several places, like, different pages of the application. And it's been really interesting to see just how this kind of stuff can grow and mutate over several years. JOËL: So, if I understand correctly, there's kind of two overlapping conditions now around this feature. So you have access to it if you've either paid for the feature or if you were a beta tester. STEPHANIE: Yeah, exactly. And the interesting thought that I had about this was it actually sounds a lot like the strangler fig pattern, which we've talked about before, where we've now introduced the new source of data that we want to be using moving forward. But we still have this, you know, old limb or branch hanging on that hasn't quite been removed or pruned off [chuckles] yet. So that's what I'm doing now. And it's nice in the sense that I can trust that we are already sending the correct data that we want to be consuming, and it's just the cleanup part. So, in some ways, we had been in that half-step for several years, and they're now getting to the point where we can finally remove it. 
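To ground that cleanup story, here is one hedged sketch of giving scattered flag checks a single entry point before deleting them; every name below is hypothetical rather than from the actual codebase Stephanie describes:

```ruby
# Hypothetical sketch: funnel every call site through one predicate, so the
# eventual flag removal touches a single method instead of every page.
class FeatureAccess
  def initialize(customer)
    @customer = customer
  end

  # Paid plans are the path forward; the old beta flag is the legacy
  # branch still waiting to be pruned off.
  def reporting_enabled?
    @customer.paid_plan? || @customer.beta_flag_enabled?
  end
end

# Call sites then ask the one entry point instead of checking the flag directly:
#   FeatureAccess.new(current_customer).reporting_enabled?
```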
JOËL: I think in kind of true strangler fig pattern, you would probably move all of your users off of that feature flag so that the people that have it active are zero, at which point it is effectively dead code, and then you can remove it. STEPHANIE: Yeah, that's a great point. And we had considered doing that first, but the thing that we had kind of come away with was that removing all of those customers from that feature flag would probably require a script or, you know, updating the production data. And that seemed a bit riskier actually to us because it wasn't as reversible as a code change. JOËL: I think you bring up a really interesting point, which is that production data changes, in general, are just scarier than code changes. At least for me, it feels like it's fairly easy generally to revert a code change. Whereas if I've messed up the production database, [laughs] that's going to be an unpleasant few days. STEPHANIE: What's interesting is that this feature flag is not really supported by a nice user interface for managing it. And so, we inevitably had to do a more developer-focused solution to remove these customers from being able to access this feature. And so, the two options, you know, that we had available were to do it through data, like I mentioned, or do it through that code change. And again, I think we evaluated both options. But what's kind of nice about doing it with the code change is that when we eventually get to delete those feature flag records, it will be really nice and easy. JOËL: That's really exciting. One thing that's different about kind of more mature projects is that we often get to do some kind of change management, unlike a greenfield app where you just get to, oh, let's introduce this new thing, cool. Oftentimes, on a more mature project, before you introduce the new thing, you have to figure out, like, what is the migration path towards that? Is that a kind of work that you enjoy? STEPHANIE: I think this was definitely an exercise in thinking about how to break this down into steps. So, yeah, that change management process you mentioned, I, like, did find a lot of satisfaction in trying to break it up, you know, especially because I was also thinking that, you know, maybe I am not able to see the complete, like, cleanup and removal, and, like, where can someone pick up after me? In some ways, I feel like I was kind of stepping into that migration, you know, six years [laughs] in the making from beta to the paid product. But I think I will feel really satisfied if I'm able to see this thing through and get to celebrate the success of saying, hey, like, I removed...at this point, it's a few hundred lines of code. [laughs] And also, you know, with the added business value of encouraging more customers to pay for the product. But I think I'm also maybe figuring out how to accept, like, okay, how could I step away from this in the middle and be able to feel good that I've left it in a place where someone else could see it through? JOËL: So you mentioned you're taking this over from somebody else, and this has been kind of six years in the making. I'm curious: the person who introduced this feature flag six years ago, are they even still at the company? STEPHANIE: No, they are not, which I think is pretty typical, you know, it's, like, really common for the person who had all that context about how it came to be to have moved on.
In fact, I actually didn't even realize that the feature flag was the original beta version of the product because that's not what it's called. [laughs] And it was when I was first onboarding onto this project, and I was like, "Hey, like, what is this? Like, why is this still here?" Knowing that the canonical, you know, version that customers were using was the paid version. And the team was like, "Oh, yeah, like, that's this whole thing that we've been meaning to remove for a long time." So it's really interesting to see the lifecycle of some of this code a little bit. And sometimes, it can be really frustrating, but this has felt a little more like an archaeology dig a little bit. JOËL: That sounds like a really interesting project to be on. STEPHANIE: Yeah. What about you, Joël, what's new in your world? JOËL: So, on my project, I've been having to do a lot of small database migrations. So I've got a bunch of these little features to do that all involve doing database migrations. They're not building on each other. So I'm just doing them all, like, in different feature branches, and pushing them all up to GitHub to get reviewed, kind of working on them in parallel. And the problem that happens is that when you switch from one branch where you've run a migration to another and then run migrations again, some local database state persists across the branch switch. This app uses a structure.sql, and when you run the migrations, the structure.sql ends up with a bunch of extra junk from other branches you've been on that you don't want as part of your diff. And beyond, like, two or three branches, this becomes an absolute mess. STEPHANIE: Oh, I have been there. [laughs] It's always really frustrating when I switch branches and then try to do my development and then realize that I still have leftover database changes. And then having to go back and then always forgetting what order of operations to do to reverse the migration and then having to re-migrate. I know that pain very well. JOËL: Something I've been doing for this project is when I switch branches, making sure that my structure.sql is checked out to the latest version from the main branch. So I have a clean structure.sql; then I drop my local database, recreate an empty one, and run a rake db:schema:load. And that will load that structure file as it is on the main branch into the database schema. That does not have any of the migrations on this branch run, so, at that point, I can run a rake db:migrate. And I will get exactly what's on main plus what gets generated on this branch and nothing else. And so, that's been a way that I've been able to kind of switch between branches and run database operations without getting any cross-contamination. STEPHANIE: Cross-contamination. I like that term. Have you automated this at all, or are you doing this manually? JOËL: Entirely manually. I could probably script some of this. Right now...so it's three steps, right? Drop, create, schema load. I just have them in one command because you can chain Unix commands with a double ampersand. So that's what I'm doing right now. I want to say there's a db:reset task, but I think that it uses migrate rather than schema load. And I don't want to actually run migrations. STEPHANIE: Yeah, that would take longer. That's funny. I do love the up arrow key [laughs] in your terminal for, you know, going back to the thing you're running over and over again.
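For reference, a minimal sketch of the workflow Joël just described, written up as a Rake task; the task name is invented, and it assumes a standard Rails app with a checked-in db/structure.sql:

```ruby
# lib/tasks/db_rebuild.rake (hypothetical) -- rebuild the local database from
# main's structure file, then run only this branch's migrations.
desc "Reset the local DB to main's structure.sql, then migrate this branch"
task :db_rebuild do
  # Discard any cross-branch junk that leaked into the structure file.
  sh "git checkout main -- db/structure.sql"
  # Drop, recreate, load the clean structure, then migrate; equivalently,
  # chain these in your shell with && as described above.
  sh "bin/rails db:drop"
  sh "bin/rails db:create"
  sh "bin/rails db:schema:load"
  sh "bin/rails db:migrate"
end
```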
I also appreciate the couple of extra seconds that you're spending waiting for your database to recreate. Like, you're paying that cost upfront rather than down the line when you are in the middle of doing [laughs] what you're trying to do and realize, oh no, my database is not in the state that I want it to be for this branch. JOËL: Or I'm dealing with some awful git conflict when trying to merge some of these branches. Or, you know, somebody comments on my PR and says, "Why are you touching the orders table? This change has nothing to do with orders." I'm like, "Oh, sorry, that actually came out of a different thing that I did." So, yep, keeping those diffs small. STEPHANIE: Nice. Well, I'm glad that you found a way to manage it. JOËL: So you mentioned the up arrow key and how that's really nice in the terminal. Something that I've been relying on a lot recently is reverse history search, CTRL+R in the terminal. That allows me to, instead of, like, going one by one in order of the history, filter for something that matches the thing that I've written. So, in this case, I'll hit CTRL+R, type, you know, Rails DB or whatever, then immediately it shows me, oh, did you want this long command? Hit enter, and I'm done. Even if I've done, you know, 20 git commands between then and the last time I ran it. STEPHANIE: Yeah, that's a great tip. So, a few weeks ago, we received a listener question from John, and he was responding to an episode where I'd asked about what the grammatical term is for verbs that are also nouns. He told us about the phrase "verbal noun," for which there's a specific term, the gerund, which is basically, in English, the words ending in ING. So, the gerund version of bike would be biking. And he pointed out a really interesting relationship that gerunds have to data modeling, where you can use a gerund to model something that you might describe as a verb, especially as a user interaction, but can be turned into a noun to form a resource that you might want to introduce CRUD operations for in your application. So one example that he was telling us about is the idea of maybe confirming a reservation. And, you know, we think of that as an action, but there is also a noun form of that, which is a confirmation. And so, confirmation could be a new resource, right? It could even be backed at the database level. And now you have a simpler way of representing the idea of confirming a reservation that is more about the confirmation as the resource itself rather than some kind of action appended to the reservation itself. JOËL: That's really cool. We get to have a crossover between grammar terms and programming, and being able to connect those two is always a fun day for me. STEPHANIE: Yeah, I actually find it quite difficult, I think, to come up with noun forms of verbs on my own. Like, I just don't really think about resources that way. I'm so used to thinking about them in a more tangible way, I suppose. And it's really kind of cool that, you know, in the English language, we have turned these abstract ideas, these actions into, like, an object form. JOËL: And this is particularly useful when we're trying to design either RESTful APIs or even just resources for a Rails app that's server-rendered so that instead of trying to create all these, like, extra actions on our controller that are verbs, we might decide to instead create new resources in the system, new nouns that people can do the standard seven to. STEPHANIE: Yes.
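To make the gerund idea concrete before the conversation continues, here's a hedged Rails routing sketch contrasting the two styles; the reservation/confirmation names come from John's example, and the rest is invented for illustration:

```ruby
# config/routes.rb -- two ways to expose "confirm a reservation"
# (a real app would pick one style, not declare both).

# Verb style: a custom action bolted onto the resource.
resources :reservations do
  member { post :confirm } # POST /reservations/:id/confirm
end

# Noun style: the verbal noun becomes its own resource, served by
# one of the standard seven actions.
resources :reservations do
  resource :confirmation, only: [:create] # POST /reservations/:reservation_id/confirmation
end

# app/controllers/confirmations_controller.rb
class ConfirmationsController < ApplicationController
  def create
    reservation = Reservation.find(params[:reservation_id])
    reservation.update!(confirmed: true) # assumes a boolean column
    redirect_to reservation
  end
end
```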
I like that better than introducing custom controller actions or routes that deviate from RESTful conventions because, you know, I probably have seen a slash confirm reservation [laughs] URL. And, you know, this is, I think, an interesting way of avoiding having too many of those deviating endpoints. JOËL: Yeah, I found that while Rails does have support for those, just all the built-in things play much more nicely if you're restricting yourself to the classic seven. And I think, in general, it's easier to model and think about things in a Rails app when you have a lot of noun resources rather than one giant controller with a bunch of kind of verb actions that you can do to it. In the more formal jargon, I think we might refer to that as RESTful style versus RPC style, a Remote Procedure Call. STEPHANIE: Could you tell me more about Remote Procedure Calls and what that means? JOËL: The general idea is that it's almost like doing a method call on an object somewhere. And so, you would say, hey, I've got an account, and I want to call the confirm method on it because I know that maybe underlying this is an ActiveRecord account model. And the API or the web UI is just a really thin layer over those objects. And so, more or less, whatever methods your object has can be accessed through the API. So the two kind of mirror each other. STEPHANIE: Got it. That's interesting because I can see how someone might want to do that, especially if, you know, the account is the domain object they're using at the, you know, persistence layer, and maybe they're not quite able to see an abstraction for something else. And so, they kind of want to try to fit that into their API design. JOËL: So I have a perhaps controversial opinion, which is that the resources in your Rails application, so your controllers, shouldn't map one-to-one with your database tables, your models. STEPHANIE: So, are you saying that you are more likely to have more abstractions or various resources than what you might have at the database level? JOËL: Well, you know what? Maybe more, but I would say, in general, different. And I think that's because both layers, the controller layer and the model layer, are playing with very different sets of constraints. So when I'm designing database tables, I'm thinking in terms of normalization. And so, maybe I would take one big concept and split it up into smaller concepts, smaller tables because I need this data to be normalized so that there's no ambiguity when I'm making queries. So maybe something that's one resource at the controller layer might actually be multiple tables at the database layer. But the inverse could also be true, right? You might have, in the example that John gave, you know, an account that has a single table in the database with just a Boolean field confirmed yes or no. And maybe there's just a generic account resource. But then, separately, there's also a confirmation resource. And so, now we've got more resources at the controller layer than at the database layer. So I think it can go either way, but they're just not tightly coupled to each other. STEPHANIE: Yeah, that makes sense. I think another way that I've seen this manifest is when, like you said, like, maybe multiple database tables need to be updated by, you know, a request to this endpoint. And now we get into [chuckles] what some people may call services, or basically that sort of territory. And what's interesting is that a lot of the service classes are named as verbs, right? So, OrderCreator.
And, like, whatever order of operations needs to happen on multiple database objects as a result of a user placing an order. But the idea that those are frequently named as verbs was kind of interesting to me and a bit of a connection to our new gerund tip. JOËL: That's really interesting. I had not made that connection before. Because I think my first instinct would be to avoid a service object there and instead use something closer to a form object that takes the same idea and represents it as a noun, potentially with the same name as the resource. So maybe leaning really heavily into that idea of the verbal noun, not just in describing the controller or the route but then also maybe the object backing it, even if it's not connecting directly to a database table. STEPHANIE: Interesting. So, in this case, would the form object be mapped closer to your controller resource? JOËL: Potentially, yes. So maybe I do have some kind of, like, object that represents a confirmation and makes it nicer to render the confirmation form on the edit page or the new page. In this case, you know, it's probably just one checkbox, so maybe it's not worth creating an object. But if there were multiple fields, then yes, maybe it's nice to create an in-memory object that has the same name as the resource. Similar maybe for a resource that represents multiple underlying database tables. It can be nice to have kind of one object that represents all of them, almost like a facade, I guess. STEPHANIE: Yeah, that's really interesting. I like that idea of a facade, or it's, like, something at a higher level representing hopefully, like, some kind of meaning of all of these database objects together. JOËL: I want to give a shout-out to a talk from a former thoughtboter, Derek Prior—actually, former Bike Shed host—from RailsConf 2017 called In Relentless Pursuit of REST, where he digs into a lot of these concepts, particularly how to model resources in your Rails app that don't necessarily map one to one with a database table, and why that can be a good thing. Have you seen that talk? STEPHANIE: I haven't, but I love the title of it. It's a great pun. It's very evocative, I think because I'm really curious about this idea of a relentless pursuit. Because I think another way to react to that could be to be done with REST entirely and maybe go with something like GraphQL. JOËL: So instead of a relentless pursuit, it's a relentless...what's the opposite of pursuing? Fleeing? STEPHANIE: Fleeing? [laughs] I like how we arrived there at the same time. Yes. So now I'm thinking of how I had mentioned a little bit ago on the show that we had our spicy takes Lightning Talks on our Boost Team. And a fellow thoughtboter, Chris White, had given a talk about Why REST Is Not the Best, and for -- JOËL: Also, a great title. STEPHANIE: Yes, also, a great title. JOËL: I love the rhyming there. STEPHANIE: Yeah. And his reaction to trying to force user interactions that don't quite map to a noun or an obvious resource into that shape was to potentially introduce GraphQL, where you have one endpoint that can service really anything that you can think of, I suppose. But, in his example, he was making the argument that human interactions are not database resources, right? And maybe if you're not able to find that abstraction as a noun or object, with GraphQL, you can encapsulate those ideas as closer to actions, but in the GraphQL world, like, I think they're called mutations.
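Two hedged sketches to ground that exchange before it continues. First, the kind of noun-named, table-less form object Joël prefers over a verb-named service; the names are hypothetical:

```ruby
# app/models/confirmation.rb -- an in-memory object backing the confirmations
# resource; no database table of its own.
class Confirmation
  include ActiveModel::Model

  attr_accessor :reservation, :confirmed_by

  validates :confirmed_by, presence: true

  def save
    return false unless valid?

    reservation.update!(confirmed: true)
    true
  end
end
```

And second, roughly what encapsulating the action as a mutation might look like with the graphql-ruby gem; Types::ReservationType and the field names are assumptions, not code from Chris's talk:

```ruby
# app/graphql/mutations/confirm_reservation.rb (hypothetical names throughout)
module Mutations
  class ConfirmReservation < GraphQL::Schema::Mutation
    argument :reservation_id, ID, required: true
    field :reservation, Types::ReservationType, null: true

    def resolve(reservation_id:)
      reservation = Reservation.find(reservation_id)
      reservation.update!(confirmed: true)
      { reservation: reservation }
    end
  end
end
```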
But it is, I think, a whole world of, like, deciding what you want to be changed on the server side that is a little less constrained by having to come up with the right abstraction. JOËL: I feel like GraphQL kind of takes that, like, complete opposite philosophy in that instead of saying, hey, let's have, like, this decoupling between the API layer and the database, GraphQL almost says, "No, let's lean into that." And yeah, you want to traverse the graph of, like, tables under the hood? Absolutely. You get to know the tables. You get to know how they're related to each other. I guess, in theory, you could build a middle layer, and that's the graph that gets traversed rather than the graph of the tables. In practice, I think most people build it so that the API layer more or less has access directly to tables. Has that been your experience? STEPHANIE: That's really interesting that you brought that up. I haven't worked with GraphQL in a while, but I was reading up on it before we started recording because I was kind of curious about how it might play with what we're talking about now. But the idea that it's graph-based, to me, was like, oh, like, naturally, it could look very much like, you know, an entity graph of your relational database. But the more I was reading about the GraphQL schema and different types, I realized that it could actually look quite different. And because it is a little bit closer to your UI layer, like, maybe you are building an abstraction that serves as that middle layer between your front end and your back end. JOËL: That's really interesting that you mentioned that because I feel like the sort of traditional way that APIs are built is that they are built by the back-end team. And oftentimes, they will reflect the database schema. But you kind of mentioned with GraphQL here, sometimes it's the opposite that happens. Instead of being driven kind of from the back towards the front, it might be driven from the front towards the back where the UI team is building something that says, hey, we need these objects. We need these connections. Can you expose them to us? And then they get access to them. What has been your experience when you've been working with front ends that are backed by a GraphQL API? STEPHANIE: I think I've tended to see a GraphQL API when you do have a pretty rich client-side application with a lot of user interactions that then need to, you know, go and fetch some data. And you, like, really, you know, obviously don't want a page reload, right? So it's really interesting, actually, that you pointed out that it's, like, perhaps the front end or the UI driving the API. Because, on one hand, the flexibility is really nice. And there's a lot more freedom even in maybe, like, what the product can do or how it would look. On the other hand, what I've kind of also seen is that eventually, maybe we do just want an API that we can talk to separate from, you know, any kind of UI. And, at that point, we have to go and build a separate thing [laughs] for the same data. JOËL: So we've been talking about structuring APIs and, like, boundaries and things like that. I think my personal favorite feature of GraphQL is not the graph part but the fact that it comes with a built-in schema. And that plays really nicely with some typed technologies. Particularly, I've used Elm with some of the GraphQL libraries there, and that experience is just really nice.
Where it will tell you if your front-end code is not compatible with the current API schema, and it will generate some things based off the schema. So you have this really nice feedback cycle where somebody makes a change to the API, or you want to make a change to the code, and it will tell you immediately: is your front end compatible with the current state of the back end? Which is a classic problem with developing front-end code. STEPHANIE: First of all, I think it's very funny that you admitted to not preferring the graph part of GraphQL as a graph enthusiast yourself. [laughs] But I think I'm in agreement with you because, like, normally, I'm looking at it in its schema format. And that makes a lot of sense to me. But what you said was really interesting because, in some ways, we're now kind of going back to the idea of maybe boundaries blurring because the types that you are creating for GraphQL are kind of then servicing both your front end and your back end. Do you think that's accurate? JOËL: Ooh. That is an important distinction. I think you can. And I want to say that in some TypeScript implementations, you do use the types on both sides. In Elm, typically, you would not unless there's something really primitive, like a string or something like that. STEPHANIE: Okay, how does that work? JOËL: So you have some conversion layer that happens. STEPHANIE: Got it. JOËL: Honestly, I think that's my preference, and not just at the front end versus API layer but kind of all throughout. So the shape of an object in the database should not be the same shape as the object in the business logic that runs on the back end, which should not be the same shape as the object in transport, so JSON or whatever, which is also not the same shape as the object in your front-end code. Those might be similar, but each of these layers has different responsibilities, different things it's trying to optimize for. Your code should be built, in my opinion, in a way that allows all four of those layers to diverge in their interpretation of not only what maybe common entities are, so maybe a user looks slightly different at each of these layers, but maybe even what the entities are to start with. And maybe in the database, we don't have a full user; we've got a profile and an account, and those get merged somehow. And eventually, when it gets to the front end, all we care about is the concept of a user because that's what we need in that context. STEPHANIE: Yeah, that's really interesting because now it almost sounds like separate systems, which they kind of are, and then finding a way to make them work also as one bigger [laughs] system. I would love to ask, though, what that conversion looks like to you. Or, like, how have you implemented that? Or, like, what kind of pattern would you use for that? JOËL: So I'm going to give a shout-out to the article that I always give a shout-out to: Parse, Don't Validate. In general, yeah, you do a transformation, and potentially it can fail. Let's say I'm pulling data from a GraphQL API into an Elm app. Elm has some built-in libraries for doing those transformations and will tell you at compile time if you're incorrectly transforming the data from the shape that we expect from the schema. But just because the schema comes in as, like, a flat object with certain fields or maybe it's a deeply nested chain of objects in GraphQL, it doesn't mean that it has to be that way in your Elm app.
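As an aside, here is a minimal Ruby analogue of the boundary transformation being described, including the tagged-union rehydration and packing-down that come up below; it assumes Ruby 3.2's Data classes, and the payment-method variants are invented for illustration:

```ruby
require "json"

# Two variants of a hypothetical tagged union.
Card = Data.define(:last4)
Bank = Data.define(:iban)

# Rehydrate: parse tagged JSON at the boundary into a rich domain value,
# failing loudly on anything unexpected (parse, don't validate).
def parse_payment_method(raw)
  attrs = JSON.parse(raw)
  case attrs.fetch("type")
  when "card" then Card.new(last4: attrs.fetch("last4"))
  when "bank" then Bank.new(iban: attrs.fetch("iban"))
  else raise ArgumentError, "unknown payment method: #{attrs["type"].inspect}"
  end
end

# Pack down: flatten the union back into tagged JSON for transport.
def pack_payment_method(method)
  case method
  in Card(last4:) then JSON.generate({ type: "card", last4: last4 })
  in Bank(iban:)  then JSON.generate({ type: "bank", iban: iban })
  end
end
```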
So that transformation step, you get to sort of make it whatever you want. So my general approach is, at each layer, forget what other people are sending you and just design the entities that you would like to have. I've heard the term wish-driven development, which I really like. So just, you know, if you could have, like, to make your life easy, what would the entities look like? And then kind of work backwards from there to make that sort of perfect world a reality for you and make it play nicely with other systems. And, to me, that's true at every layer of the application. STEPHANIE: Interesting. So I'm also imagining that the transformation kind of has to happen both ways, right? Like, the server needs a way to transform data from the front end or some, you know, whatever, third party. But that's also true of the front end because what you're kind of saying is that these will be different. [laughs] JOËL: Right. And, in many ways, it has to be because JSON is a very limited format. But some of the fancier things that you might have access to either on the back end or on the front end might be challenging to represent natively in JSON. And a classic one would be what Elm calls a custom type. You know, they're also called tagged unions, discriminated unions, algebraic data types. These things go by a bajillion names, and it's confusing. But they're really kind of awkward and hard, almost impossible to represent in straight-up JSON because JSON is a very limited kind of transportation format. So you have to almost, like, have a rehydration step on one side and a kind of packing down step on the other when you're reading or writing from a JSON API. STEPHANIE: Have you ever heard of or played that Wikipedia game Getting to Philosophy? JOËL: I've done, I think, variations on it, the idea that you have a start and an end article, and then you have to either get through in the fewest number of clicks, or it might be a timed thing, whoever can get to the target article first. Is that what you're referring to? STEPHANIE: Yeah. So, in this case, I'm thinking, how many clicks through Wikipedia to get to the Wiki article about philosophy? And that's how I'm thinking about how we end up getting to [laughs] talking about types and parsing, and graphs even [laughs] on the show. JOËL: It's all connected, almost as if it forms a graph of knowledge. STEPHANIE: Learning, that's another common topic on the show. [laughs] I think it's great. It's a lot of interesting lenses to view, like, the same things through, and just digging deeper and deeper into them to always, like, come away with a little more perspective. JOËL: So, in the vein of wish-driven development, if you're starting a brand-new front-end UI, what is your sort of dream approach for working with an API? STEPHANIE: Wish-driven development is very visceral to me because I often think about it when I'm working with legacy code, and what my wishes and dreams were for, you know, the stack or the technology or whatever. But, at that point, I don't really have the power to change it. You know, it's like I have what I have. And that's different from being in the driver's seat of a greenfield application where you're not just wishing. You're just deciding for yourself. You get to choose. At the end of the day, though, I think, you know, you're likely starting from a simple application. And you haven't gotten to the point where you have, like, a lot of features that you have to figure out how to support and, like, complexity to manage.
And, you know, you don't even know if you're going to get there. So I would probably start with REST. JOËL: So we started this episode from a very back-end perspective where we're talking about Rails, and routes, and controllers. And we kind of ended it talking from a very front-end perspective. We also contrasted kind of a more RESTful approach, versus GraphQL, versus more kind of old-school RPC-style routing. And now, I'm almost starting to wonder if there's some kind of correlation between someone primarily working on the back end preferring, let's say, REST, versus somebody on the front end preferring GraphQL. So, if any of our listeners have strong opinions preferring GraphQL, or REST, or something else, message us at hosts@bikeshed.fm and let us know. And, if you do, please let us know if you're primarily a front-end or a back-end developer because I think it would be really fun to see any connections there. STEPHANIE: Absolutely. On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.