Mua Nhà To: Vì "An" Hay Vì "Oai"? (Buying a Big House: For Security or For Show?) | #Homentor SS02 EP03 | Văn Phú X Spiderum | "A tall house with wide doors" - a benchmark of success that earlier generations longed for - may now be turning into a burden for the current one. In today's context, where people prize experiences, family structures are changing, and land is ever scarcer, are square meters still what decide a family's happiness and prosperity? To dig deeper into this shift in how people choose a home, episode 3 of Homentor season 2 unpacks the "tall house, wide doors" standard through the eyes of a new generation: which old expectations are outdated, and which real needs actually make a home? Joining this episode is MC Đức Bảo. Having gone from "old-quarter boy" to breadwinner of his own young family, will he prioritize convenience, safety, and fit, or an imposing, spacious house? What does he weigh most when building a home for his small family? Alongside him, expert Hoa Hồng Nhung from Văn Phú - the human-centered ("vị nhân sinh") real estate brand - brings a market perspective, explaining why "just-enough" apartments are becoming a trend and how developers are creating human-centered value rather than focusing only on floor area. Through candid stories and sharp analysis, the podcast hopes to redefine what "settling down" means today: happiness lies not in square footage but in fit and in the connections within a family. HOMENTOR SEASON 2 EP03. Script & Host: BeP. Expert: Hoa Hồng Nhung - Brand Director, Văn Phú - the human-centered real estate brand. Guest: MC Đức Bảo. Executive Producer: Văn Phú. Project Manager: Nga Levi. Account: Trúc Quỳnh. Production House: Hustle. Sound Engineer: Pinkdot. Graphic Designer: wxrdie. Marketing: Quỳnh Phương.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Daily News Rundown, November 03, 2025: Welcome to AI Unraveled, your daily briefing on the real-world business impact of AI. In today's edition:
Blaise Agüera y Arcas explores some mind-bending ideas about what intelligence and life really are—and why they might be more similar than we think (filmed at ALIFE conference, 2025 - https://2025.alife.org/).Life and intelligence are both fundamentally computational (he says). From the very beginning, living things have been running programs. Your DNA? It's literally a computer program, and the ribosomes in your cells are tiny universal computers building you according to those instructions.**SPONSOR MESSAGES**—Prolific - Quality data. From real people. For faster breakthroughs.https://www.prolific.com/?utm_source=mlst—cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economyOct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlstSubmit investment deck: https://cyber.fund/contact?utm_source=mlst— Blaise argues that there is more to evolution than random mutations (like most people think). The secret to increasing complexity is *merging* i.e. when different organisms or systems come together and combine their histories and capabilities.Blaise describes his "BFF" experiment where random computer code spontaneously evolved into self-replicating programs, showing how purpose and complexity can emerge from pure randomness through computational processes.https://en.wikipedia.org/wiki/Blaise_Ag%C3%BCera_y_Arcashttps://x.com/blaiseaguera?lang=enTRANSCRIPT:https://app.rescript.info/public/share/VX7Gktfr3_wIn4Bj7cl9StPBO1MN4R5lcJ11NE99hLgTOC:00:00:00 Introduction - New book "What is Intelligence?"00:01:45 Life as computation - Von Neumann's insights00:12:00 BFF experiment - How purpose emerges00:26:00 Symbiogenesis and evolutionary complexity00:40:00 Functionalism and consciousness00:49:45 AI as part of collective human intelligence00:57:00 Comparing AI and human cognitionREFS:What is intelligence [Blaise Agüera y Arcas]https://whatisintelligence.antikythera.org/ [Read free online, interactive rich media]https://mitpress.mit.edu/9780262049955/what-is-intelligence/ [MIT Press]Large Language Models and Emergence: A Complex Systems Perspectivehttps://arxiv.org/abs/2506.11135Our first Noam Chomsky MLST interviewhttps://www.youtube.com/watch?v=axuGfh4UR9Q Chance and Necessity [Jacques Monod]https://monoskop.org/images/9/99/Monod_Jacques_Chance_and_Necessity.pdfWonderful Life: The Burgess Shale and the History of Nature [Stephen Jay Gould]https://www.amazon.co.uk/Wonderful-Life-Burgess-Nature-History/dp/0099273454 The major evolutionary transitions [E Szathmáry, J M Smith]https://wiki.santafe.edu/images/0/0e/Szathmary.MaynardSmith_1995_Nature.pdfDon't Sleep, There Are Snakes: Life and Language in the Amazonian Jungle [Dan Everett]https://www.amazon.com/Dont-Sleep-There-Are-Snakes/dp/0307386120 The Nature of Technology: What It Is and How It Evolves [W. 
Brian Arthur] https://www.amazon.com/Nature-Technology-What-How-Evolves-ebook/dp/B002RI9W16/ The MANIAC [Benjamin Labatut]https://www.amazon.com/MANIAC-Benjam%C3%ADn-Labatut/dp/1782279814 When We Cease to Understand the World [Benjamin Labatut]https://www.amazon.com/When-We-Cease-Understand-World/dp/1681375664/ The Boys in the Boat [Daniel James Brown]https://www.amazon.com/Boys-Boat-Americans-Berlin-Olympics/dp/0143125478 [Petter Johansson] (Split brain)https://www.lucs.lu.se/fileadmin/user_upload/lucs/2011/01/Johansson-et-al.-2006-How-Something-Can-Be-Said-About-Telling-More-Than-We-Can-Know.pdfIf Anyone Builds It, Everyone Dies [Eliezer Yudkowsky, Nate Soares]https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640 The science of cycologyhttps://link.springer.com/content/pdf/10.3758/bf03195929.pdf
We sat down with Sara Saab (VP of Product at Prolific) and Enzo Blindow (VP of Data and AI at Prolific) to explore the critical role of human evaluation in AI development and the challenges of aligning AI systems with human values. Prolific is a human annotation and orchestration platform for AI used by many of the major AI labs. This is a sponsored show in partnership with Prolific. **SPONSOR MESSAGES**—cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economyOct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlstSubmit investment deck: https://cyber.fund/contact?utm_source=mlst— While technologists want to remove humans from the loop for speed and efficiency, these non-deterministic AI systems actually require more human oversight than ever before. Prolific's approach is to put "well-treated, verified, diversely demographic humans behind an API" - making human feedback as accessible as any other infrastructure service.When AI models like Grok 4 achieve top scores on technical benchmarks but feel awkward or problematic to use in practice, it exposes the limitations of our current evaluation methods. The guests argue that optimizing for benchmarks may actually weaken model performance in other crucial areas, like cultural sensitivity or natural conversation.We also discuss Anthropic's research showing that frontier AI models, when given goals and access to information, independently arrived at solutions involving blackmail - without any prompting toward unethical behavior. Even more concerning, the more sophisticated the model, the more susceptible it was to this "agentic misalignment." Enzo and Sarah present Prolific's "Humane" leaderboard as an alternative to existing benchmarking systems. By stratifying evaluations across diverse demographic groups, they reveal that different populations have vastly different experiences with the same AI models. Looking ahead, the guests imagine a world where humans take on coaching and teaching roles for AI systems - similar to how we might correct a child or review code. This also raises important questions about working conditions and the evolution of labor in an AI-augmented world. Rather than replacing humans entirely, we may be moving toward more sophisticated forms of human-AI collaboration.As AI tech becomes more powerful and general-purpose, the quality of human evaluation becomes more critical, not less. We need more representative evaluation frameworks that capture the messy reality of human values and cultural diversity. 
Visit Prolific: https://www.prolific.com/Sara Saab (VP Product):https://uk.linkedin.com/in/sarasaabEnzo Blindow (VP Data & AI):https://uk.linkedin.com/in/enzoblindowTRANSCRIPT:https://app.rescript.info/public/share/xZ31-0kJJ_xp4zFSC-bunC8-hJNkHpbm7Lg88RFcuLETOC:[00:00:00] Intro & Background[00:03:16] Human-in-the-Loop Challenges[00:17:19] Can AIs Understand?[00:32:02] Benchmarking & Vibes[00:51:00] Agentic Misalignment Study[01:03:00] Data Quality vs Quantity[01:16:00] Future of AI OversightREFS:Anthropic Agentic Misalignmenthttps://www.anthropic.com/research/agentic-misalignmentValue Compasshttps://arxiv.org/pdf/2409.09586Reasoning Models Don't Always Say What They Think (Anthropic)https://www.anthropic.com/research/reasoning-models-dont-say-think https://assets.anthropic.com/m/71876fabef0f0ed4/original/reasoning_models_paper.pdfApollo research - science of evals blog posthttps://www.apolloresearch.ai/blog/we-need-a-science-of-evals Leaderboard Illusion https://www.youtube.com/watch?v=9W_OhS38rIE MLST videoThe Leaderboard Illusion [2025]Shivalika Singh et alhttps://arxiv.org/abs/2504.20879(Truncated, full list on YT)
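A loose illustration of the stratified-evaluation idea behind the "Humane" leaderboard discussed above; this is a toy sketch, not Prolific's pipeline, and the groups, field names, and records are invented. Instead of pooling every rater into one score, it reports the preference rate per demographic group.

from collections import defaultdict

# Each record: which model the rater preferred, plus the rater's demographic group.
# Groups, field names, and data are hypothetical, purely for illustration.
ratings = [
    {"group": "18-29, UK",  "preferred": "model_a"},
    {"group": "18-29, UK",  "preferred": "model_b"},
    {"group": "60+, India", "preferred": "model_b"},
    {"group": "60+, India", "preferred": "model_b"},
]

def stratified_preference(ratings, model="model_a"):
    """Preference rate for `model`, reported separately for each demographic group."""
    counts = defaultdict(lambda: [0, 0])          # group -> [wins for model, total ratings]
    for r in ratings:
        counts[r["group"]][0] += r["preferred"] == model
        counts[r["group"]][1] += 1
    return {group: wins / total for group, (wins, total) in counts.items()}

print(stratified_preference(ratings))             # {'18-29, UK': 0.5, '60+, India': 0.0}

A single pooled win rate over these records (25%) would hide that the two groups disagree sharply, which is the point the guests make about disaggregated evaluation.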
On this week's episode, Paul Matteis, John Maraganore, Eric Schmidt, and Graig Suvannavejh open with a look at biotech market sentiment, which has notably strengthened amid steady M&A and successful drug launches. The XBI is also up over 40% in six months, signaling optimism that the long “biotech winter” may be ending. While cautious, the co-hosts agree the recovery feels sustainable. The group then discussed the IPO and private financing landscape, noting a more mature crop of companies could drive strong IPOs in 2026. On the regulatory front, the co-hosts discussed the FDA's announcement of nine voucher recipients under the new Commissioner's National Priority Voucher (CNPV) pilot program. President Trump's comments on reducing GLP-1 pricing were also noted. In M&A, BioCryst's ~$700M acquisition of Astria Therapeutics was seen as a healthy sign of industry consolidation. The FDA's OAI letter to Novo Nordisk also has implications for Scholar Rock and Regeneron. In data news, Praxis' positive essential tremor results were highlighted as a win in the CNS space, showing strong data can drive meaningful raises. Next, John recapped his STAT Summit panel with Chris Viehbacher and Emma Walmsley on the hurdles the pharma industry has faced and the next decade ahead. Bicara Therapeutics' breakthrough therapy designation in head and neck cancer was another sentiment boost. The group also previewed Alector's upcoming Phase 3 readout in frontotemporal dementia. The episode closed with excitement heading into ESMO this weekend. *This episode aired on October 17, 2025.
Dr. Ilia Shumailov - Former DeepMind AI Security Researcher, now building security tools for AI agentsEver wondered what happens when AI agents start talking to each other—or worse, when they start breaking things? Ilia Shumailov spent years at DeepMind thinking about exactly these problems, and he's here to explain why securing AI is way harder than you think.**SPONSOR MESSAGES**—Check out notebooklm for your research project, it's really powerfulhttps://notebooklm.google.com/—Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst and be the first to see the results and benchmark their practices against the wider community!—cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economyOct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlstSubmit investment deck: https://cyber.fund/contact?utm_source=mlst— We're racing toward a world where AI agents will handle our emails, manage our finances, and interact with sensitive data 24/7. But there is a problem. These agents are nothing like human employees. They never sleep, they can touch every endpoint in your system simultaneously, and they can generate sophisticated hacking tools in seconds. Traditional security measures designed for humans simply won't work.Dr. Ilia Shumailovhttps://x.com/iliaishackedhttps://iliaishacked.github.io/https://sequrity.ai/TRANSCRIPT:https://app.rescript.info/public/share/dVGsk8dz9_V0J7xMlwguByBq1HXRD6i4uC5z5r7EVGMTOC:00:00:00 - Introduction & Trusted Third Parties via ML00:03:45 - Background & Career Journey00:06:42 - Safety vs Security Distinction00:09:45 - Prompt Injection & Model Capability00:13:00 - Agents as Worst-Case Adversaries00:15:45 - Personal AI & CAML System Defense00:19:30 - Agents vs Humans: Threat Modeling00:22:30 - Calculator Analogy & Agent Behavior00:25:00 - IMO Math Solutions & Agent Thinking00:28:15 - Diffusion of Responsibility & Insider Threats00:31:00 - Open Source Security Concerns00:34:45 - Supply Chain Attacks & Trust Issues00:39:45 - Architectural Backdoors00:44:00 - Academic Incentives & Defense Work00:48:30 - Semantic Censorship & Halting Problem00:52:00 - Model Collapse: Theory & Criticism00:59:30 - Career Advice & Ross Anderson TributeREFS:Lessons from Defending Gemini Against Indirect Prompt Injectionshttps://arxiv.org/abs/2505.14534Defeating Prompt Injections by Design. Debenedetti, E., Shumailov, I., Fan, T., Hayes, J., Carlini, N., Fabian, D., Kern, C., Shi, C., Terzis, A., & Tramèr, F. https://arxiv.org/pdf/2503.18813Agentic Misalignment: How LLMs could be insider threatshttps://www.anthropic.com/research/agentic-misalignmentSTOP ANTHROPOMORPHIZING INTERMEDIATE TOKENS AS REASONING/THINKING TRACES!Subbarao Kambhampati et alhttps://arxiv.org/pdf/2504.09762Meiklejohn, S., Blauzvern, H., Maruseac, M., Schrock, S., Simon, L., & Shumailov, I. (2025). Machine learning models have a supply chain problem. https://arxiv.org/abs/2505.22778 Gao, Y., Shumailov, I., & Fawaz, K. (2025). Supply-chain attacks in machine learning frameworks. https://openreview.net/pdf?id=EH5PZW6aCrApache Log4j Vulnerability Guidancehttps://www.cisa.gov/news-events/news/apache-log4j-vulnerability-guidance Bober-Irizar, M., Shumailov, I., Zhao, Y., Mullins, R., & Papernot, N. (2022). Architectural backdoors in neural networks. 
https://arxiv.org/pdf/2206.07840Position: Fundamental Limitations of LLM Censorship Necessitate New ApproachesDavid Glukhov, Ilia Shumailov, ...https://proceedings.mlr.press/v235/glukhov24a.html AlphaEvolve MLST interview [Matej Balog, Alexander Novikov]https://www.youtube.com/watch?v=vC9nAosXrJw
We need AI systems to synthesise new knowledge, not just compress the data they see. Jeremy Berman, is a research scientist at Reflection AI and recent winner of the ARC-AGI v2 public leaderboard.**SPONSOR MESSAGES**—Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst and be the first to see the results and benchmark their practices against the wider community!—cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economyOct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlstSubmit investment deck: https://cyber.fund/contact?utm_source=mlst— Imagine trying to teach an AI to think like a human i.e. solving puzzles that are easy for us but stump even the smartest models. Jeremy's evolutionary approach—evolving natural language descriptions instead of python code like his last version—landed him at the top with about 30% accuracy on the ARCv2.We discuss why current AIs are like "stochastic parrots" that memorize but struggle to truly reason or innovate as well as big ideas like building "knowledge trees" for real understanding, the limits of neural networks versus symbolic systems, and whether we can train models to synthesize new ideas without forgetting everything else. Jeremy Berman:https://x.com/jerber888TRANSCRIPT:https://app.rescript.info/public/share/qvCioZeZJ4Q_NlR66m-hNUZnh-qWlUJcS15Wc2OGwD0TOC:Introduction and Overview [00:00:00]ARC v1 Solution [00:07:20]Evolutionary Python Approach [00:08:00]Trade-offs in Depth vs. Breadth [00:10:33]ARC v2 Improvements [00:11:45]Natural Language Shift [00:12:35]Model Thinking Enhancements [00:13:05]Neural Networks vs. Symbolism Debate [00:14:24]Turing Completeness Discussion [00:15:24]Continual Learning Challenges [00:19:12]Reasoning and Intelligence [00:29:33]Knowledge Trees and Synthesis [00:50:15]Creativity and Invention [00:56:41]Future Directions and Closing [01:02:30]REFS:Jeremy's 2024 article on winning ARCAGI1-pubhttps://jeremyberman.substack.com/p/how-i-got-a-record-536-on-arc-agiGetting 50% (SoTA) on ARC-AGI with GPT-4o [Greenblatt]https://blog.redwoodresearch.org/p/getting-50-sota-on-arc-agi-with-gpt https://www.youtube.com/watch?v=z9j3wB1RRGA [his MLST interview]A Thousand Brains: A New Theory of Intelligence [Hawkins]https://www.amazon.com/Thousand-Brains-New-Theory-Intelligence/dp/1541675819https://www.youtube.com/watch?v=6VQILbDqaI4 [MLST interview]Francois Chollet + Mike Knoop's labhttps://ndea.com/On the Measure of Intelligence [Chollet]https://arxiv.org/abs/1911.01547On the Biology of a Large Language Model [Anthropic]https://transformer-circuits.pub/2025/attribution-graphs/biology.html The ARChitects [won 2024 ARC-AGI-1-private]https://www.youtube.com/watch?v=mTX_sAq--zY Connectionism critique 1998 [Fodor/Pylshyn]https://uh.edu/~garson/F&P1.PDF Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis [Kumar/Stanley]https://arxiv.org/pdf/2505.11581 AlphaEvolve interview (also program synthesis)https://www.youtube.com/watch?v=vC9nAosXrJw ShinkaEvolve: Evolving New Algorithms with LLMs, Orders of Magnitude More Efficiently [Lange et al]https://sakana.ai/shinka-evolve/ Deep learning with Python Rev 3 [Chollet] - READ CHAPTER 19 NOW!https://deeplearningwithpython.io/
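As a rough sketch of the evolutionary loop described above, assuming candidates are natural-language rule descriptions scored against the ARC training pairs. The helpers propose_variants (an LLM call that drafts or mutates descriptions) and apply_rule (something that executes a description on an input grid) are hypothetical placeholders, not Jeremy's actual code:

def evolve(train_pairs, propose_variants, apply_rule, generations=10, pop_size=20):
    """Evolve natural-language rule descriptions; keep those explaining the most training pairs."""
    population = propose_variants(parent=None, n=pop_size)      # initial guesses from the model
    best = (0, population[0])
    for _ in range(generations):
        scored = sorted(
            ((sum(apply_rule(rule, x) == y for x, y in train_pairs), rule) for rule in population),
            key=lambda pair: pair[0],
            reverse=True,
        )
        best = scored[0]
        if best[0] == len(train_pairs):
            break                                                # a rule that fits every training pair
        parents = [rule for _, rule in scored[: pop_size // 4]]  # keep the strongest quarter
        population = parents + [v for p in parents for v in propose_variants(parent=p, n=3)]
    return best[1]

The depth-versus-breadth trade-off discussed in the episode shows up here as the split between generations and pop_size.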
Meta's new AI Ray-Ban Display & Mark Zuckerberg give us a glimpse of what an augmented-reality future might look like & they're here VERY soon. Zuck's new AR glasses will be here in two weeks for $800, but he also showed off Hyperscape, a new Gaussian-splat-style virtual environment. They also struggled to demo some of the new tech but, hey, live demos… we've all been there, right? Plus, OpenAI brings GPT-5 to Codex, wins yet another programming competition & might be bringing *spicy* talk to ChatGPT. New tech from Reve, Marble World Labs, Wuji's robot hand and, yes, we made a dumb Italian Brainrot website. IT'S ALL JUST FODDER FOR THE SIMULATION. WE STILL HEART Y'ALL. Come to our Discord to try our Secret Project: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // Meta AR / AI Glasses Reveal Video + More https://www.facebook.com/Meta/videos/1927325824791552/ Demo Fails https://x.com/nearcyan/status/1968473003592990847 The Verge's Review https://www.theverge.com/tech/779566/meta-ray-ban-display-hands-on-smart-glasses-price-battery-specs RottoBotto https://rottobotto.com/ GPT-5 Codex Update https://openai.com/index/introducing-upgrades-to-codex/ OpenAI Gets Perfect Score on International Collegiate Programming Contest (ICPC) World Finals https://x.com/OpenAI/status/1968368138535436297 Research Lead at OAI https://x.com/MillionInt/status/1968370113297723588 How people are using ChatGPT https://openai.com/index/how-people-are-using-chatgpt/ Sam Altman on ChatGPT Guardrails & Freedom https://x.com/sama/status/1967955739911364693 Marble World Labs https://marble.worldlabs.ai/ New Reve Update https://x.com/reve/status/1967640858372751540 The Remarkable Recovering Unitree (More Kicking of Robots) https://x.com/Sentdex/status/1967652309258920232 Wuji Hand Robot https://x.com/CyberRobooo/status/1968324425809580379 Neural Viz: The Adventures of Reemo Green https://youtu.be/5bYA2Rv2CQ8?si=_floiCUdaxlDVSBf
Professor Andrew Wilson from NYU explains why many common-sense ideas in artificial intelligence might be wrong. For decades, the rule of thumb in machine learning has been to fear complexity. The thinking goes: if your model has too many parameters (is "too complex") for the amount of data you have, it will "overfit" by essentially memorizing the data instead of learning the underlying patterns. This leads to poor performance on new, unseen data. This is known as the classic "bias-variance trade-off" i.e. a balancing act between a model that's too simple and one that's too complex.**SPONSOR MESSAGES**—Tufa AI Labs is an AI research lab based in Zurich. **They are hiring ML research engineers!** This is a once in a lifetime opportunity to work with one of the best labs in EuropeContact Benjamin Crouzier - https://tufalabs.ai/ —Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst and be the first to see the results and benchmark their practices against the wider community!—cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economyOct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlstSubmit investment deck: https://cyber.fund/contact?utm_source=mlst— Description Continued:Professor Wilson challenges this fundamental belief (fearing complexity). He makes a few surprising points:**Bigger Can Be Better**: massive models don't just get more flexible; they also develop a stronger "simplicity bias". So, if your model is overfitting, the solution might paradoxically be to make it even bigger.**The "Bias-Variance Trade-off" is a Misnomer**: Wilson claims you don't actually have to trade one for the other. You can have a model that is incredibly expressive and flexible while also being strongly biased toward simple solutions. He points to the "double descent" phenomenon, where performance first gets worse as models get more complex, but then surprisingly starts getting better again.**Honest Beliefs and Bayesian Thinking**: His core philosophy is that we should build models that honestly represent our beliefs about the world. We believe the world is complex, so our models should be expressive. But we also believe in Occam's razor—that the simplest explanation is often the best. He champions Bayesian methods, which naturally balance these two ideas through a process called marginalization, which he describes as an automatic Occam's razor.TOC:[00:00:00] Introduction and Thesis[00:04:19] Challenging Conventional Wisdom[00:11:17] The Philosophy of a Scientist-Engineer[00:16:47] Expressiveness, Overfitting, and Bias[00:28:15] Understanding, Compression, and Kolmogorov Complexity[01:05:06] The Surprising Power of Generalization[01:13:21] The Elegance of Bayesian Inference[01:33:02] The Geometry of Learning[01:46:28] Practical Advice and The Future of AIProf. Andrew Gordon Wilson:https://x.com/andrewgwilshttps://cims.nyu.edu/~andrewgw/https://scholar.google.com/citations?user=twWX2LIAAAAJ&hl=en https://www.youtube.com/watch?v=Aja0kZeWRy4 https://www.youtube.com/watch?v=HEp4TOrkwV4 TRANSCRIPT:https://app.rescript.info/public/share/H4Io1Y7Rr54MM05FuZgAv4yphoukCfkqokyzSYJwCK8Hosts:Dr. Tim Scarfe / Dr. 
Keith Duggar (MIT Ph.D)REFS:Deep Learning is Not So Mysterious or Different [Andrew Gordon Wilson]https://arxiv.org/abs/2503.02113Bayesian Deep Learning and a Probabilistic Perspective of Generalization [Andrew Gordon Wilson, Pavel Izmailov]https://arxiv.org/abs/2002.08791Compute-Optimal LLMs Provably Generalize Better With Scale [Marc Finzi, Sanyam Kapoor, Diego Granziol, Anming Gu, Christopher De Sa, J. Zico Kolter, Andrew Gordon Wilson]https://arxiv.org/abs/2504.15208
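For readers who want the "automatic Occam's razor" point spelled out, this is the standard Bayesian model-evidence formula it refers to (textbook notation, not a formula quoted from the episode or the papers above):

p(D \mid M) = \int p(D \mid \theta, M)\, p(\theta \mid M)\, d\theta

Because p(D | M) must normalize over all possible datasets D, an extremely flexible model that could explain almost anything spreads its probability thinly and scores poorly on any particular dataset, while a model whose prior concentrates on simple solutions scores well on the data it actually fits. Marginalizing over parameters therefore trades off fit against simplicity automatically, which is how a model can be highly expressive yet still biased toward simple solutions.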
OpenAI just did a $300B deal with Oracle to ensure that they have the compute to get to the next stage of AI. The future of AI is EXPENSIVE. Sam Altman is making deals all over the place, trying to set OpenAI up for the future. But there's a LOT of competition now, including from companies like Replit & their new Agent 3 which can work for hours at a time. Plus, Seedream 4.0 is an incredible new AI image model, Apple's new AirPods Pro 3 can live translate using AI, AlterEgo can *kind of* read your mind & we get insight into a weird AI watermelon world. IT'S A NEW WEEK BUT AI KEEPS ON ROLLING. #ai #ainews #openai Come to our Discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // OpenAI Signs $300B Dollar Deal With Oracle (Say What) https://www.theverge.com/ai-artificial-intelligence/776170/oracle-openai-300-billion-contract-project-stargate Sam Says AI Going From 10 to 100 will maybe feel less crazy than 0 to 1 https://x.com/slow_developer/status/1965441316466421772 Microsoft “Shifting” from OAI? https://www.reuters.com/business/microsoft-use-some-ai-anthropic-shift-openai-information-reports-2025-09-09/ Replit's Agent 3 = 10x Increase in Autonomy https://x.com/amasad/status/1965800350071590966 OpenAI Routes Sensitive Conversations & Adds Parental Controls https://openai.com/index/building-more-helpful-chatgpt-experiences-for-everyone/ Apple's AirPods Pro 3 Live Translation https://x.com/adrianweckler/status/1965463329734041970 AlterEgo: We No Longer Need To Talk? https://x.com/alterego_io/status/1965113585299849535 Seedream 4.0 AI Image Is VERY Good https://seed.bytedance.com/en/seedream4_0 https://x.com/fofrAI/status/1965422936367743429 Eleven Labs Voice Re-Mixing https://x.com/elevenlabsio/status/1965806127897264300 Oboe: Learn Anything With AI https://x.com/mignano/status/1965780172688494653 The Sphere's AI Re-Imagining Of The Wizard of Oz Printing 2m Per Day https://www.hollywoodreporter.com/business/business-news/wizard-of-oz-sphere-more-films-1236364915/ Unitree IPO Coming https://x.com/ns123abc/status/1965083434847703481 The Return of The “Cute” Robot (Fourier) https://x.com/TheHumanoidHub/status/1965861846138954048 Romanian Watermelon Village https://x.com/venturetwins/status/1965609196348735785 Mortar Boom (PJ Ace New Joint) https://x.com/PJaccetturo/status/1966136806652653826 Fartscroll-lid https://x.com/iannuttall/status/1966074800595698131
In this episode, hosts Tim and Keith finally realize their long-held dream of sitting down with their hero, the brilliant neuroscientist Professor Karl Friston. The conversation is a fascinating and mind-bending journey into Professor Friston's life's work, the Free Energy Principle, and what it reveals about life, intelligence, and consciousness itself.**SPONSORS**Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal - https://github.com/google-gemini/gemini-cli--- Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst and be the first to see the results and benchmark their practices against the wider community!---cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economyOct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlstSubmit investment deck: https://cyber.fund/contact?utm_source=mlst***They kick things off by looking back on the 20-year journey of the Free Energy Principle. Professor Friston explains it as a fundamental rule for survival: all living things, from a single cell to a human being, are constantly trying to make sense of the world and reduce unpredictability. It's this drive to minimize surprise that allows things to exist and maintain their structure.This leads to a bigger question: What does it truly mean to be "intelligent"? The group debates whether intelligence is everywhere, even in a virus or a plant, or if it requires a certain level of complexity. Professor Friston introduces the idea of different "kinds" of things, suggesting that creatures like us, who can model themselves and think about the future, possess a unique and "strange" kind of agency that sets us apart.From intelligence, the discussion naturally flows to the even trickier concept of consciousness. Is it the same as intelligence? Professor Friston argues they are different. He explains that consciousness might emerge from deep, layered self-awareness—not just acting, but understanding that you are the one causing your actions and thinking about your place in the world.They also explore intelligence at different sizes. Is a corporation intelligent? What about the entire planet? Professor Friston suggests there might be a "Goldilocks zone" for intelligence. It doesn't seem to exist at the super-tiny atomic level or at the massive scale of planets and solar systems, but thrives in the complex middle-ground where we live.Finally, they tackle one of the most pressing topics of our time: Can we build a truly conscious AI? Professor Friston shares his doubts about whether our current computers are capable of a feat like that. He suggests that genuine consciousness might require a different kind of "mortal" computation, where the machine's physical body and its "mind" are inseparable, much like in biological creatures.TRANSCRIPT:https://app.rescript.info/public/share/FZkF8BO7HMt9aFfu2_q69WGT_ZbYZ1VVkC6RtU3eeOITOC:00:00:00: Introduction & Retrospective on the Free Energy Principle00:09:34: Strange Particles, Agency, and Consciousness00:37:45: The Scale of Intelligence: From Viruses to the Biosphere01:01:35: Modelling, Boundaries, and Practical Application01:21:12: Conclusion
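For the mathematically curious, the "minimize surprise" idea has a standard formulation as variational free energy (textbook notation, not quoted from the episode):

F = \mathbb{E}_{q(s)}\left[ \log q(s) - \log p(o, s) \right] = D_{\mathrm{KL}}\left[ q(s) \,\|\, p(s \mid o) \right] - \log p(o)

Since the KL divergence is never negative, F is an upper bound on surprise, -\log p(o). A creature that minimizes F is simultaneously improving its internal model q(s) of hidden states s and keeping its observations o unsurprising, which is the sense in which minimizing surprise lets a thing persist and maintain its structure.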
Homily by Father Kennedy da Silva, IVE: Gospel of Jesus Christ according to Luke 6:20-26. At that time, Jesus, raising his eyes toward his disciples, said: "Blessed are you who are poor, for yours is the Kingdom of God! Blessed are you who are hungry now, for you will be satisfied! Blessed are you who weep now, for you will laugh! Blessed are you when people hate you, and when they exclude you, insult you, and curse your name on account of the Son of Man! Rejoice on that day and leap for joy, for your reward will be great in heaven; for that is how their ancestors treated the prophets. But woe to you who are rich, for you have already received your consolation! Woe to you who are well fed now, for you will go hungry! Woe to you who laugh now, for you will mourn and weep! Woe to you when all speak well of you! For that is how their ancestors treated the false prophets." The Word of Salvation.
We are joined by Cristopher Moore, a professor at the Santa Fe Institute with a diverse background in physics, computer science, and machine learning.The conversation begins with Cristopher, who calls himself a "frog" explaining that he prefers to dive deep into specific, concrete problems rather than taking a high-level "bird's-eye view". They explore why current AI models, like transformers, are so surprisingly effective. Cristopher argues it's because the real world isn't random; it's full of rich structures, patterns, and hierarchies that these models can learn to exploit, even if we don't fully understand how.**SPONSORS**Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst and be the first to see the results and benchmark their practices against the wider community!---cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy.Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlstSubmit investment deck: https://cyber.fund/contact?utm_source=mlst***Cristopher Moore:https://sites.santafe.edu/~moore/TOC:00:00:00 - Introduction00:02:05 - Meet Christopher Moore: A Frog in the World of Science00:05:14 - The Limits of Transformers and Real-World Data00:11:19 - Intelligence as Creative Problem-Solving00:23:30 - Grounding, Meaning, and Shared Reality00:31:09 - The Nature of Creativity and Aesthetics00:44:31 - Computational Irreducibility and Universality00:53:06 - Turing Completeness, Recursion, and Intelligence01:11:26 - The Universe Through a Computational Lens01:26:45 - Algorithmic Justice and the Need for TransparencyTRANSCRIPT: https://app.rescript.info/public/share/VRe2uQSvKZOm0oIBoDsrNwt46OMCqRnShVnUF3qyoFkFilmed at DISI (Diverse Intelligences Summer Institute)https://disi.org/REFS:The Nature of computation [Chris Moore]https://nature-of-computation.org/ Birds and Frogs [Freeman Dyson]https://www.ams.org/notices/200902/rtx090200212p.pdf Replica Theory [Parisi et al]https://arxiv.org/pdf/1409.2722 Janossy pooling [Fabian Fuchs]https://fabianfuchsml.github.io/equilibriumaggregation/ Cracking the cryptic [YT channel]https://www.youtube.com/c/CrackingTheCrypticSudoko Bench [Sakana]https://sakana.ai/sudoku-bench/Fractured entangled representations “phylogenetic locking in comment” [Kumar/Stanley]https://arxiv.org/pdf/2505.11581 (see our shows on this)The War Against Cliché: [Martin Amis]https://www.amazon.com/War-Against-Cliche-Reviews-1971-2000/dp/0375727167Rule 110 (CA)https://mathworld.wolfram.com/Rule150.htmlUniversality in Elementary Cellular Automata [Matt Cooke]https://wpmedia.wolfram.com/sites/13/2018/02/15-1-1.pdf Small Semi-Weakly Universal Turing Machines [Damien Woods] https://tilde.ini.uzh.ch/users/tneary/public_html/WoodsNeary-FI09.pdf COMPUTING MACHINERY AND INTELLIGENCE [Turing, 1950]https://courses.cs.umbc.edu/471/papers/turing.pdf Comment on Space Time as a causal set [Moore, 88]https://sites.santafe.edu/~moore/comment.pdf Recursion Theory on the Reals and Continuous-time Computation [Moore, 96]
Follow Monstercat Silk on all platforms - https://monster.cat/silk Follow our playlists: https://ffm.bio/monstercat Tracklist 1. ALLKNIGHT - Should've Known Better [Monstercat Silk] [00:35] Silk Spotlight: 2. Finding Mero - Glass Souls [Monstercat Silk] [05:14] 3. MXV - Can You (ft. Courtney Storm) [Monstercat Silk] [08:26] 4. Modd & Lisandro - Balloon [Monstercat Silk] [12:40] 5. ORACLE - Reflections [Monstercat Silk] [17:58] 6. OAI & Ra5im - Echoes of Silence [Monstercat Silk] [21:56] 7. ATTLAS & Jodie Knight - Used To The Silence [Monstercat Silk] [27:01] Silk Exclusive: 8. Oncor - Bones [Monstercat Silk] [30:26] 9. PROFF - Nara [Monstercat Silk] [32:36] 10. ALLKNIGHT - Sunflower [Monstercat Silk] [39:14] 11. LTN pres. Ghostbeat & Passive Progressive - Echo [Monstercat Silk] [42:06] 12. Shingo Nakamura - Driving [Monstercat Silk] [45:30] 13. rshand - Loveblind [Monstercat Silk] [49:09] 14. Hausman - Secrets [Monstercat Silk] [53:20] Thank you for listening to Monstercat Silk Showcase! Learn more about your ad choices. Visit megaphone.fm/adchoices
OAI ambassador Erik Wilde explores how new specs like Arazzo and MCP are shaping AI-driven workflows, and why APIs may soon be your last line of defense.
Hey everyone, Alex here
In this episode, we sit down with Oai Truong, the visionary behind Bounce Design, a creative agency known for bridging traditional business strategy with cutting-edge technologies. As the lead of a multidisciplinary team in design, development, and videography, Oai brings a results-driven approach to everything from brand strategy to social media trends.With a background rooted in both creative storytelling and strategic marketing, Oai shares how he helps businesses cut through the noise to focus on what truly matters: clarity, execution, and measurable results. We dive into his process, what makes a great brand in 2025, and how emerging tech is reshaping the way companies connect with their audience.This episode is all about vision, precision, and getting real with what it takes to succeed in a saturated digital world.In this episode:
Dark and hopeful Illusionary Images 162, Emerald Sunrise is here. Tycho - Cypress (Original Mix) [Mom+Pop] bleach.bath - data syringe_17 (Original Mix) [bleach.bath] Thylacine - The Road (Parra For Cuva Remix) [Intuitive Records] lycoriscoris - Light Leaks (Original Mix) [Ki Records] Seb Wildblood - final lap (Original Mix) [Beyond Rec] Parra for Cuva - Playa Ride (Original Mix) [Parra For Cuva] AK - insecurities (Original Mix) [Aljosha Frederick Konstanty] Jazver & Zorah feat. maybeallice - Meet You In The Rain (Original Mix) [scenery.] Fløa, OAI, Polyline - The Same Melodic Guy (Original Mix) [NORR] Stendahl - First Breath (Extended Mix) [Only For A Moment] LeyeT, Klur - Impossible (Extended Mix) [Colorize (Enhanced)] Banyan, Afnan Prince - Surrender (Extended Mix) [Lilly Era (DE)] Half Tone - Haze (Original Mix) [Half Tone] Blugazer - Watching Dreamscapes Passing By (Extended Mix) [Enhanced Chill] Sterling Grove, Ellyn Woods - Wake Up (BAILE Remix Extended Edit) [House of Youth] Fløa - Unlock (Extended Mix) [Rewoven] Mees Dierdorp - Toumate (Original Mix) [MEES Records] SØNIN - Swans (Extended Mix) [Songspire Records] Mark Novas - Half Truths (Extended Mix) [Anjunadeep Explorations] coiro - Una (Original Mix) [Ki Records] Booka Shade, Satin Jackets - Fusion Royale (Original Mix) [Blaufield Music] Natascha Polké - Poison Of Choice (Original Mix) [[PIAS] ÉLECTRONIQUE] Natascha Polké, Fejká - Echoes (Extended Mix) [Anjunadeep] Echolocation, Jordan Whitlock - Condor (Echo Edit) [There Is A Light Explorations] Alex Pich - Apollo (Extended Mix) [Sekora] Monojoke - Flavors of the World (Original Mix) [Earth Sound Recordings] Braxton - On The Shores Of A Happy Sea (Extended Mix) [Anjunadeep] Keanler, Ren Ocean - Soft Lights (Extended Mix) [Lilly Era (DE)] Jody Wisternoff, PROFF, James Grant, Siobhan Wilson, Takeshi Furukawa - Mui (Ezequiel Arias Extended Mix) [Anjunadeep] ARVOW - Days Pass (Original Mix) [Be Your Own Studio Label] Bliss Looper & Sion Louks - Hope (Original Mix) [High Vibe Records] Manu Zain - Surrounded by Impatience (Original Mix) [Songspire Records] New Silence - Breaking Free (Original Mix) [Timelock-Music]
The free livestreams for AI Engineer Summit are now up! Please hit the bell to help us appease the algo gods. We're also announcing a special Online Track later today.Today's Deep Research episode is our last in our series of AIE Summit preview podcasts - thanks for following along with our OpenAI, Portkey, Pydantic, Bee, and Bret Taylor episodes, and we hope you enjoy the Summit! Catch you on livestream.Everybody's going deep now. Deep Work. Deep Learning. DeepMind. If 2025 is the Year of Agents, then the 2020s are the Decade of Deep.While “LLM-powered Search” is as old as Perplexity and SearchGPT, and open source projects like GPTResearcher and clones like OpenDeepResearch exist, the difference with “Deep Research” products is they are both “agentic” (loosely meaning that an LLM decides the next step in a workflow, usually involving tools) and bundling custom-tuned frontier models (custom tuned o3 and Gemini 1.5 Flash).The reception to OpenAI's Deep Research agent has been nothing short of breathless:"Deep Research is the best public-facing AI product Google has ever released. It's like having a college-educated researcher in your pocket." - Jason Calacanis“I have had [Deep Research] write a number of ten-page papers for me, each of them outstanding. I think of the quality as comparable to having a good PhD-level research assistant, and sending that person away with a task for a week or two, or maybe more. Except Deep Research does the work in five or six minutes.” - Tyler Cowen“Deep Research is one of the best bargains in technology.” - Ben Thompson“my very approximate vibe is that it can do a single-digit percentage of all economically valuable tasks in the world, which is a wild milestone.” - sama“Using Deep Research over the past few weeks has been my own personal AGI moment. It takes 10 mins to generate accurate and thorough competitive and market research (with sources) that previously used to take me at least 3 hours.” - OAI employee“It's like a bazooka for the curious mind” - Dan Shipper“Deep research can be seen as a new interface for the internet, in addition to being an incredible agent… This paradigm will be so powerful that in the future, navigating the internet manually via a browser will be "old-school", like performing arithmetic calculations by hand.” - Jason Wei“One notable characteristic of Deep Research is its extreme patience. I think this is rapidly approaching “superhuman patience”. One realization working on this project was that intelligence and patience go really well together.” - HyungWon“I asked it to write a reference Interaction Calculus evaluator in Haskell. A few exchanges later, it gave me a complete file, including a parser, an evaluator, O(1) interactions and everything. The file compiled, and worked on my test inputs. There are some minor issues, but it is mostly correct. So, in about 30 minutes, o3 performed a job that would take me a day or so.” - Victor Taelin“Can confirm OpenAI Deep Research is quite strong. In a few minutes it did what used to take a dozen hours. The implications to knowledge work is going to be quite profound when you just ask an AI Agent to perform full tasks for you and come back with a finished result.” - Aaron Levie“Deep Research is genuinely useful” - Gary MarcusWith the advent of “Deep Research” agents, we are now routinely asking models to go through 100+ websites and generate in-depth reports on any topic. 
The Deep Research revolution has hit the AI scene over the last few months:

* Dec 11th: Gemini Deep Research (today's guest!) rolls out with Gemini Advanced
* Feb 2nd: OpenAI releases Deep Research
* Feb 3rd: a dozen "Open Deep Research" clones launch
* Feb 5th: Gemini 2.0 Flash GA
* Feb 15th: Perplexity launches Deep Research
* Feb 17th: xAI launches Deep Search

In today's episode, we welcome Aarush Selvan and Mukund Sridhar, the lead PM and tech lead for Gemini Deep Research and the originators of the entire category. We asked detailed questions covering everything from inspiration to implementation: why they had to finetune a special model instead of using the standard Gemini model, how to run evals for these agents, and how to think about the distribution of use cases. (We also have an upcoming Gemini 2 episode with our returning first guest Logan Kilpatrick, so stay tuned.)
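To make "agentic" concrete here (an LLM deciding the next step in a tool-using workflow), a minimal sketch of the loop such a product might run; llm_decide, web_search, and read_page are hypothetical stand-ins, not any lab's actual implementation:

def deep_research(question, llm_decide, web_search, read_page, max_steps=50):
    """Minimal agentic loop: the model picks each next action until it decides to write the report."""
    notes = []
    for _ in range(max_steps):
        action = llm_decide(question, notes)                    # e.g. {"type": "search", "query": "..."}
        if action["type"] == "search":
            notes.append({"query": action["query"], "results": web_search(action["query"])})
        elif action["type"] == "read":
            notes.append({"url": action["url"], "page": read_page(action["url"])})
        else:                                                   # action["type"] == "report"
            return action["text"]                               # the final cited write-up
    return None                                                 # step budget exhausted without a report

What separates the products above from a thin search wrapper is mostly what sits inside llm_decide: a model tuned to plan, to keep searching patiently across 100+ pages, and to judge when it has enough to write.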
For many young adults, turning 18 is an exciting milestone—a step toward independence while still having the support of family. But for children in foster care, this birthday marks the date you age out of the system.Sometimes, these individuals don't feel fully prepared to navigate the adult world. Often, an 18-year-old aging out of foster care has to face life decisions most people wouldn't expect to tackle until their mid-20s. It's a daunting situation that can feel impossible to face alone, and our latest guest is here to shed light on this challenging issue.Nicole Davis is the Executive Director of Operation: Achieve Independence (OAI). OAI focuses on supporting youth aging out of foster care by providing mentoring, life skills training, education, and career preparation.In this episode, Nicole shares the important role education plays in breaking cycles of generational trauma, why the challenges of aging out will look different for every child, how we can best support young adults who are about to age out, and much more.Find the show notes and links to anything we discussed here: riversideproject.org/nicole-davis-33Connect with us!Website: https://riversideproject.orgInstagram: https://www.instagram.com/the.riverside.projectFacebook: https://www.facebook.com/riversideproject.htx
AI News: OpenAI says AGI is incoming… James Cameron says that might not be good, even though OAI is building new chips AND has a new frontier model (GPT-5) on the way. Plus, Apple Intelligence is here and it's just ok, Meta is taking on Google Search, Microsoft's GitHub takes on Cursor with Spark, which can write code, Red Panda is a mysterious new AI image model, MuVi brings automatic soundtracks to video using AI & we're visited by the future ghost of Robert Downey Jr., who has something to say about present-day RDJ's stance on AI. IT'S ALL MOVING FAST Y'ALL Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // OpenAI CFO on AGI https://x.com/tsarnick/status/1850999598032306569 OpenAI Builds Its Own Chips https://www.reuters.com/technology/artificial-intelligence/openai-builds-first-chip-with-broadcom-tsmc-scales-back-foundry-ambition-2024-10-29/ Next OpenAI Model Coming By December https://x.com/kyliebytes/status/1849625175354233184 Sam Altman Says Fake News But Prob Just Cuz Not Called Orion https://x.com/sama/status/1849661093083480123 New o1 Full Features Revealed at DevDay in London https://x.com/stevenheidel/status/1851574257819562195 James Cameron on AGI https://youtu.be/e6Uq_5JemrI?si=nmrZPwACepoJ3ikN Apple Intelligence is…fine? https://www.tomsguide.com/phones/iphones/i-tried-all-new-apple-intelligence-features-in-ios-18-1-heres-the-best-and-worst Meta's AI Search Plans https://www.theverge.com/2024/10/28/24282017/meta-ai-powered-search-engine-report Notebook Llama Open Source Podcast Model https://x.com/reach_vb/status/1850522281681813862 GitHub Spark Kills Cursor? https://techcrunch.com/2024/10/29/github-spark-lets-you-build-web-apps-in-plain-english/ VIDEO: https://x.com/ashtom/status/1851333075374051725 Microsoft Owned GitHub Co-Pilot Will Support Anthropic, Google & OAI https://www.theverge.com/2024/10/29/24282544/github-copilot-multi-model-anthropic-google-open-ai-github-spark-announcement Google Says 25% of All New Code is AI Generated https://x.com/AndrewCurran_/status/1851374530998256126 Canva Integrates Leonardo https://www.theverge.com/2024/10/22/24276662/canva-ai-update-new-text-to-image-generator-leonardo Robert Downey Jr Will Sue From The Grave If You Use Him For AI https://gizmodo.com/robert-downey-jr-will-sue-from-the-grave-if-hollywood-ever-recreates-his-likeness-with-ai-2000517884 Then & Now Flux Lora https://x.com/andrew_n_carr/status/1851031004070424672 https://glif.app/glifs/cm2swpljc0000yqd7v20vtskv PDF to Brain Rot https://x.com/kimmonismus/status/1850635739312042086 https://www.memenome.gg/ LLM Pictionary https://x.com/paul_cal/status/1850262678712856764 MuVi Generates Music Based on Visuals https://x.com/dreamingtulpa/status/1850588949514756274 Arthur Morgan (Thick of It) https://youtu.be/uai4Y_-FRtY?si=zP0FkJOORDN8V9Ne Gavin's Act-One Video https://youtu.be/W_L2bEKJBSc?si=up3JBi9Hsas1AzNA
- On the morning of October 24, 2024, the Vietnam Coast Guard held a conference to roll out the plan for the nationwide contest "Em yêu biển, đảo quê hương" ("I Love My Homeland's Seas and Islands") for 2024-2030 and beyond, conducted in person at the Coast Guard Command and online with four bridge points at Coast Guard Region Commands 1, 2, 3, and 4, with 450 delegates taking part. Lieutenant General Bùi Quốc Oai, Secretary of the Party Committee and Political Commissar of the Vietnam Coast Guard, chaired the conference. --- Support this podcast: https://podcasters.spotify.com/pod/show/vov1sukien/support
- Widespread heavy rain has pushed water levels on the Bùi and Tích rivers up sharply, leaving many areas on Hanoi's outskirts, including communes of Quốc Oai and Chương Mỹ districts, submerged. With local dike systems under threat, local authorities are straining to protect them. Topics: Bùi River, Tích River, Hanoi --- Support this podcast: https://podcasters.spotify.com/pod/show/vov1tintuc/support
- As part of the exchange program on party and political work between the Vietnam Coast Guard and the China Coast Guard, on the morning of August 28, 2024 the Vietnam-China Coast Guard delegations visited and held an exchange at the Coast Guard Region 1 Command in Hai Phong City. The Vietnamese delegation was led by Lieutenant General Bùi Quốc Oai, Political Commissar of the Vietnam Coast Guard; the Chinese delegation was led by Major General Lưu Hậu Kiệt, Deputy Political Commissar of the China Coast Guard Bureau. Reported by Thu Lan. --- Support this podcast: https://podcasters.spotify.com/pod/show/vov1sukien/support
- Chairing a meeting of the Central Steering Committee on Judicial Reform, President Tô Lâm called for continued acceleration of judicial reform and for combating conservatism and parochialism. - During his state visit to India, Prime Minister Phạm Minh Chính received leaders of several top Indian corporations investing in Vietnam. - Flooding in the northern provinces continues to cause landslides on many roads. Hanoi has set up a steering board to handle and recover from flooding in the three districts of Chương Mỹ, Quốc Oai, and Thạch Thất. The Ministry of Transport has proposed two toll levels for state-funded expressways, from a minimum of 900 VND/km to a maximum of 5,200 VND/km. - Fighting in the Middle East escalated as Israel carried out an airstrike that killed a Hezbollah commander in Lebanon's capital, Beirut. - US Vice President Kamala Harris formally launched her election campaign, scoring points in six of the seven battleground states. Topics: President, Tô Lâm --- Support this podcast: https://podcasters.spotify.com/pod/show/vov1thoisu0/support
- Socio-economic conditions in July and over the first seven months of the year maintained a positive trend, with many sectors and fields achieving important results and building momentum for the coming months and quarters. - As of this afternoon, more than 702,000 candidates had registered their preferences for university admission in 2024. - Dozens of earthquakes struck Kon Tum in a single day; experts advise local authorities to prepare response scenarios for induced earthquakes. - Venezuelan President Nicolás Maduro was re-elected to a third term with 51.2% of the vote, and many world leaders have sent congratulations on his victory. - In China, a series of dike breaches forced thousands of people to evacuate; the authorities have issued an orange alert for rain and flooding. Topics: China, floods, Quốc Oai, university --- Support this podcast: https://podcasters.spotify.com/pod/show/vov1thoisu0/support
This week on Futuristic we're talking about the new ChatGPT-4o model, GPT officially passes the Turing Test, the OAI founder who thinks AGI is only 2-3 years away, Ilya has left OAI, Sam Altman doesn't think we are worried enough about how AI will impact the economy, Google's medical AI destroys GPT's benchmark and outperforms doctors and ChatGPT-4 beat 100% of all psychologists in a study of Social Intelligence.
To celebrate the release of their stunning EP 'Luminous (with CEAUS) / Soil (with OAI)', Heard Right returns to Colorcast Radio! This show is syndicated & distributed exclusively by Syndicast. If you are a radio station interested in airing the show or would like to distribute your podcast / radio show please register here: https://syndicast.co.uk/distribution/registration
- Ahead of the Lunar New Year of the Dragon (Giáp Thìn) 2024, on January 20-21, 2024, a Vietnam Coast Guard Command delegation led by Lieutenant General Bùi Quốc Oai, Political Commissar of the Vietnam Coast Guard, visited, inspected, and extended Tết greetings to officers and soldiers of combat-ready units: the Coast Guard Region 4 Command and Drug Crime Prevention Task Force No. 4, as well as local authorities, armed forces units, and residents in Cà Mau Province. --- Support this podcast: https://podcasters.spotify.com/pod/show/vov1tintuc/support
David introduces upcoming pods and then David and Paul talk Startup Boards of Directors. We discuss the recent BoD drama at OpenAI, the OAI structure, and how it might have been better. We separately cover:
1. What is a startup board?
2. Why do startups have boards?
3. Typical board structure and roles.
4. Board chairs.
5. Independent directors.
6. Board Observers.
7. Advisory boards.
8. What goes wrong with boards?
9. Legal challenges boards face?
10. Should you serve on a BoD?
11. What are good resources for Startup Boards?
Reach David on Twitter/X @DGRollingSouth for comments and entertaining cartoons on Venture. We invite your feedback and suggestions at ventureinthesouth.com or email david@ventureinthesouth.com. Learn more about RollingSouth at rollingsouth.vc or email david@rollingsouth.vc. Follow Paul on LinkedIn. Download our White Papers and Cheat Sheets HERE. Thanks for listening and remember: Our mission is to MAKE MONEY, HAVE FUN AND DO GOOD.
This week, we discuss the distribution of cloud revenue, explore who is investing in A.I., and take a look back at Mesosphere DC/OS. Plus, some thoughts on the peacefulness of flying. Watch the YouTube Live Recording of Episode 445 (https://www.youtube.com/watch?v=EX5FY4MJpR0).
Runner-up Titles:
- I feel very safe on an airplane
- We're one mishap away from Lost
- Difference between the right decision and the safe decision
- First rule of AI Ethics Team: Don't fake the demo
- "Rice"
- They got run over by the Kubernetes Truck
- Figure out where all the yaml goes
Rundown:
- Many Different Kinds Of Cloud, Very Big Piles Of Money - IT Jungle (https://www.itjungle.com/2023/12/11/many-different-kinds-of-cloud-very-big-piles-of-money/)
- Software Startup That Rejected Buyout From Microsoft Shuts Down, Sells Assets to Nutanix (https://www.theinformation.com/articles/a16z-backed-startup-that-once-rejected-150m-sale-to-microsoft-shuts-down)
- Apache Mesos (https://en.wikipedia.org/wiki/Apache_Mesos)
- Docker acquires AtomicJar, a testing startup that raised $25M in January (https://techcrunch.com/2023/12/11/docker-acquires-atomicjar-a-testing-startup-that-raised-25m-in-january/)
Relevant to your Interests:
- Just about every Windows and Linux device vulnerable to new LogoFAIL firmware attack (https://arstechnica.com/security/2023/12/just-about-every-windows-and-linux-device-vulnerable-to-new-logofail-firmware-attack/)
- OAI staff deeply did not want to work for Microsoft (https://www.threads.net/@kalihays1/post/C0hUizEJfPV/?igshid=MzRlODBiNWFlZA==)
- Silicon Valley Confronts a Grim New A.I. Metric (https://www.nytimes.com/2023/12/06/business/dealbook/silicon-valley-artificial-intelligence.html)
- The Fastest Growing Brands of 2023 (https://pro.morningconsult.com/analyst-reports/fastest-growing-brands-2023)
- Vast Data lands $118M to grow its data storage platform for AI workloads (https://techcrunch.com/2023/12/06/vast-data-lands-118m-to-grow-its-data-storage-platform-for-ai-workloads/)
- 'No one saw this coming': Kevin O'Leary says remote work trend is now hurting sectors other than real estate — here's why he's saying certain 'banks are going to fail' (https://finance.yahoo.com/news/no-one-saw-coming-kevin-133000274.html)
- The OpenAI Board Member Who Clashed With Sam Altman Shares Her Side (https://www.wsj.com/tech/ai/helen-toner-openai-board-2e4031ef)
- Apple joins AI fray with release of model framework (https://www.theverge.com/2023/12/6/23990678/apple-foundation-models-generative-ai-mlx)
- Will VMware customers balk as Broadcom transitions them to subscriptions? (https://www.constellationr.com/blog-news/insights/will-vmware-customers-balk-broadcom-transitions-them-subscriptions)
- Broadcom to divest VMware's EUC and Carbon Black units (https://www.theregister.com/2023/12/07/broadcom_q4_2023/)
- Apple Confirms It Shut Down iMessage for Android App Beeper Mini (https://www.macrumors.com/2023/12/10/apple-confirms-it-shut-down-beeper-mini/)
- Tech company VMware slashing 577 jobs in Austin (https://www.kvue.com/article/money/economy/boomtown-2040/vmware-austin-layoffs/269-c65851d4-54cb-4cf3-a3db-34fb907de932)
- The Problems with Money In (Open Source) Software | Aneel Lakhani | Monktoberfest 2023 (https://www.youtube.com/watch?v=LTCuLyv6SHo)
- WFH levels have become "flat as a pancake" (https://x.com/I_Am_NickBloom/status/1729557222424731894?s=20)
- Broadcom halves sub price for VMware's flagship hybrid cloud (https://www.theregister.com/2023/12/12/vmware_broadcom_licensing_changes/)
- VMware by Broadcom Dramatically Simplifies Offer Lineup and Licensing Model (https://news.vmware.com/company/vmware-by-broadcom-business-transformation)
- Cloud engineer gets 2 years for wiping ex-employer's code repos (https://www.bleepingcomputer.com/news/security/cloud-engineer-gets-2-years-for-wiping-ex-employers-code-repos/)
- The Kubernetes 1.29 release interview (https://open.substack.com/pub/craigbox/p/the-kubernetes-129-release-interview?r=1s6gmq&utm_campaign=post&utm_medium=web)
- HashiCorp Shares Drop 22% After Forecasting Slowing Sales Growth (https://www.marketwatch.com/story/hashicorp-shares-drop-22-after-forecasting-slowing-sales-growth-d3dd4d7d)
- Oracle shares slide as revenue misses estimates (https://www.cnbc.com/2023/12/11/oracle-orcl-q2-earnings-report-2024.html)
- Platform teams need a delightfully different approach, not one that sucks less (https://www.chkk.io/blog/platform-teams-different-approach)
Nonsense:
- Nearly Everyone Gets A's at Yale. Does That Cheapen the Grade? (https://www.nytimes.com/2023/12/05/nyregion/yale-grade-inflation.html?searchResultPosition=1)
- It's time for the Excel World Championships! (https://www.theverge.com/2023/12/9/23995236/its-time-for-the-excel-world-championships)
- Only in Australia: huge snake drops from roof during podcast recording – video (https://www.theguardian.com/environment/video/2023/dec/11/only-in-australia-huge-snake-drops-from-roof-during-podcast-recording-video)
- Committing to Costco (https://twitter.com/Wes_nship/status/1734207909137989969)
- 220-ton Nova Scotia building moved using 700 bars of soap (https://www.upi.com/Odd_News/2023/12/08/canada-Halifax-Nova-Scotia-Elmwood-building-moved-soap/2971702059148/)
Conferences:
- That Conference Texas, Jan 29, 2024 to Feb 1, 2024 (https://that.us/events/tx/2024/schedule/)
- SCaLE 21x, March 14th to 17th, 2024 (https://www.socallinuxexpo.org/scale/21x)
- DevOpsDays Birmingham 2024, April 17-18, 2024 (https://talks.devopsdays.org/devopsdays-birmingham-al-2024/cfp)
If you want your conference mentioned, let's talk media sponsorships.
SDT news & hype:
- Join us in Slack (http://www.softwaredefinedtalk.com/slack).
- Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers!
- Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured).
- Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total.
- Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)!
Recommendations:
- Brandon: The Fund (https://www.amazon.com/Fund-Bridgewater-Associates-Unraveling-Street/dp/1250276934)
- Matt: Reese's minis unwrapped (https://amzn.to/3NoGV9i). Anti-recommendation: Keychron Acoustic Kit (https://www.keychron.com/collections/keychron-add-on/products/keychron-q10-acoustic-upgrade-kit)
- Coté: Sex Education (https://en.wikipedia.org/wiki/Sex_Education_(TV_series)), season 3. (Some follow-ups: Menewood was good, a bit too Moby Dick w/r/t wheat and barley in the middle; Descript is still fantastic, getting even better with the AI stuff.)
Photo Credits:
- Header (https://unsplash.com/photos/blue-and-white-airplane-seats-DSOohFTAfno)
- Artwork (https://unsplash.com/photos/gray-and-black-airplane-seats-1KcGnn5HAPU)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Epistemic range of motion" and LessWrong moderation, published by habryka on November 28, 2023 on LessWrong. (Context for the reader: Gabriel reached out to me a bit more than a year ago to ask me to delete a few comments on this post by Jacob Hilton, who was working at OpenAI at the time. I referenced this in my recent dialogue with Olivia, where I quoted an email I sent to Eliezer about having some concerns about Conjecture partially on the basis of that interaction. We ended up scheduling a dialogue to talk about that and related stuff.) You were interested in a dialogue, probably somewhat downstream of my conversation with Olivia and also some of the recent advocacy work you've been doing. Yup. Two things I'd like to discuss: I was surprised by you (on a recent call) stating that you found LessWrong to be a good place for the Lying is Cowardice not Strategy post. I think you misunderstand my culture. Especially around civility, and honesty. Yeah, I am interested in both of the two things. I don't have a ton of context on the second one, so am curious about hearing a bit more. Gabriel's principles for moderating spaces About the second one: I think people should be free to be honest in their private spaces. I think people should be free to create their own spaces, enact their vision, and to the extent you participate in the space, you should help them. If you invite someone to your place, you ought to not do things that would have caused them not to come if they knew ahead of time. So, about my post and the OAI thing: By 3, I feel ok writing my post on my blog. I feel ok with people dissing OAI on their blogs, and on their posts if you are ok with it (I take you as proxy for "person with vision for LW") I feel much less ok about ppl dissing OAI on their own blog posts on LW. I assume that if they knew ahead of time, they would have been much less likely to participate. I would have felt completely ok if you told me "I don't think your post has the tone required for LW, I want less adversariality / less bluntness / more charitability / more ingroupness" How surprising are these to you? Meta-comment: Would have been great to know that the thing with OAI shocked you enough to send a message to Eliezer about it. Would have been much better from my point of view to talk about it publicly, and even have a dialogue/debate like this if you were already opened to it. If you were already open to it, I should have offered. (I might have offered, but can't remember.) Ah, ok. Let me think about this a bit. I have thoughts on the three principles you outline, but I think I get the rough gist of the kind of culture you are pointing to without needing to dive into that. I think I don't understand the "don't do things that will make people regret they came" principle. Like, I can see how it's a nice thing to aspire to, but if you have someone submit a paper to a journal, and then the paper gets reviewed and rejected as shoddy, then like, they probably regret submitting to you, and this seems good. Similarly if I show up in a jewish community gathering or something, and I wasn't fully aware of all of the rules and guidelines they follow and this make me regret coming, then that's sad, but it surely wouldn't have been the right choice for them to break their rules and guidelines just because I was there. 
I do think I don't really understand the "don't do things that will make people regret they came" principle. Like, I can see how it's a nice thing to aspire to, but if you have someone submit a paper to a journal, and then the paper gets reviewed and rejected as shoddy, then like, they probably regret submitting to you, and this seems good.
You mention 'the paper gets reviewed and rejected', but I don't think the comments on the OAI post were much conditioned on the quality of the post....
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Altman fired from OpenAI, published by Lawrence Chan on November 17, 2023 on The AI Alignment Forum. Basically just the title, see the OAI blog post for more details. Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI. In a statement, the board of directors said: "OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam's many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company's research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. EDIT: Also, Greg Brockman is stepping down from his board seat: As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO. The remaining board members are: OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner. EDIT 2: Sam Altman tweeted the following. i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what's next later. Greg Brockman has also resigned. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Altman fired from OpenAI, published by LawrenceC on November 17, 2023 on LessWrong. Basically just the title, see the OAI blog post for more details. Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI. In a statement, the board of directors said: "OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam's many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company's research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
In this episode, Lynn Tonini interviews Nicole Davis, Executive Director of Operation: Achieve Independence (OAI), located in Spring, TX. We discuss Nicole's background and the journey that brought her to working with youth aging out of foster care. We then talk about OAI's strategy for providing caring mentors for youth and developing strong partnerships with other organizations and agencies to be able to provide a spectrum of services. Their primary programs provide financial support for youth in transitional living situations and help youth with educational and career preparation.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much do markets value Open AI?, published by Ben West on May 14, 2023 on The Effective Altruism Forum.
Summary: A BOTEC indicates that Open AI might have been valued at 220-430x their annual recurring revenue, which is high but not unheard of. Various factors make this multiple hard to interpret, but it generally does not seem consistent with investors believing that Open AI will capture revenue consistent with creating transformative AI.
Overview
Epistemic status: revenue multiples are intended as a rough estimate of how much investors believe a company is going to grow, and I would be surprised if my estimated revenue multiple was off by more than a factor of 5. But the "strategic considerations" portion of this is a bunch of wild guesses that I feel much less confident about.
There has been some discussion about how much markets are expecting transformative AI, e.g. here. One obvious question is "why isn't Open AI valued at a kajillion dollars?" I estimate that Microsoft's investment implicitly valued OAI at 220-430x their annual recurring revenue. This is high - average multiples are around 7x, but some pharmaceutical companies have multiples > 1000x. This would seem to support the argument that investors think that OAI is exceptional (but not "equivalent to the Industrial Revolution" exceptional). However, Microsoft received a set of benefits from the deal which make the EV multiple overstated. Based on adjustments, I can see the actual implied multiple being anything from -2,200x to 3,200x. (Negative multiples imply that Microsoft got more value from access to OAI models than the amount they invested and are therefore willing to treat their investment as a liability rather than an asset.)
One particularly confusing fact is that OAI's valuation appears to have gone from $14 billion in 2021 to $19 billion in 2023. Even ignoring anything about transformative AI, I would have expected that the success of ChatGPT etc. should have resulted in more than a 35% increase. Qualitatively, my guess is that this was a nice but not exceptional deal for OAI, and I feel confused why they took it. One possible explanation is "the kind of people who can deploy $10B of capital are institutionally incapable of investing at > 200x revenue multiples", which doesn't seem crazy to me. Another explanation is that this is basically guaranteeing them a massive customer (Microsoft), and they are willing to give up some stock to get that customer. Squiggle model here.
It would be cool if someone did a similar write-up about Anthropic, although publicly available information on them is slim. My guess is that they will have an even higher revenue multiple (maybe infinite? I'm not sure if they had revenue when they first raised).
Details
Valuation: $19B
A bunch of news sites (e.g. here) reported that Microsoft invested $10 billion to value OAI at $29 billion. I assume that this valuation is post money, meaning the pre-money valuation is $19 billion. Although this site says that they were valued at $14 billion in 2021, meaning that they only increased in value 35% over the past two years. This seems weird, but I guess it is consistent with the view that markets aren't valuing the possibility of TAI.
Revenue: $54M/year
Reuters claims they are projecting $200M revenue in 2023. FastCompany says they made $30 million in 2022.
If the deal closed in early 2023, then presumably annual projections of their monthly revenue were higher than $30 million, though it's unclear how much. Let's arbitrarily say MRR will increase 10x this year, implying a monthly growth rate of 10^(1/12) ≈ 1.22. Solving the geometric series 200 = x(1 - 1.22^12)/(1 - 1.22), we get that their first-month revenue is $4.46M, i.e. a run rate of $53.52M/year (a rough sketch of this arithmetic follows below). Other factors: The vast majority of the investment is going to be spent on Micros...
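For readers who want to check the back-of-the-envelope numbers above, here is a minimal Python sketch of that revenue estimate. It only reproduces the post's stated assumptions ($200M projected 2023 revenue, roughly 10x revenue growth over the year compounded monthly, and a $19B pre-money valuation); the variable names and the Python rendering are illustrative, not the post's actual Squiggle model.

```python
# Rough reproduction of the post's BOTEC. Assumptions are the post's own:
# $200M projected 2023 revenue, revenue growing ~10x over the year,
# growth compounding monthly, $19B pre-money valuation.
projected_2023_revenue_m = 200.0       # $M, the Reuters projection cited above
monthly_growth = 10 ** (1 / 12)        # ~1.21 (the post rounds this to 1.22)

# Twelve months of revenue form a geometric series:
#   total = first_month * (1 - r**12) / (1 - r)
# so the first month's revenue is:
first_month_m = projected_2023_revenue_m * (1 - monthly_growth) / (1 - monthly_growth ** 12)
annual_run_rate_m = first_month_m * 12  # annualized first-month revenue

pre_money_valuation_m = 19_000.0        # $19B pre-money, as estimated above
multiple = pre_money_valuation_m / annual_run_rate_m

print(f"first-month revenue ~ ${first_month_m:.2f}M")    # ~$4.7M (the post gets $4.46M using r = 1.22)
print(f"annual run rate     ~ ${annual_run_rate_m:.1f}M")  # ~$56M (the post gets $53.52M)
print(f"implied multiple    ~ {multiple:.0f}x")             # ~340x, within the post's 220-430x range
```

Plugging in the post's rounded 1.22 growth factor instead of 10^(1/12) returns the $4.46M and $53.52M figures quoted above; either way the implied multiple lands in the same ballpark.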
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sam Altman on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367, published by Gabriel Mukobi on March 25, 2023 on LessWrong. Lex Fridman just released a podcast episode with Sam Altman, CEO of OpenAI. In my opinion, there wasn't too much new here that hasn't been said in other recent interviews. However, here are some scattered notes on parts I found interesting from an AI safety lens:
AI risk
Lex asks Sama to steelman Eliezer Yudkowsky's views. Sama said there's some chance of no hope, but the only way he knows how to fix things is to keep iterating and eliminating the "1-shot-to-get-it-right" cases. He does like one of Eliezer's posts that discusses his reasons why he thinks alignment is hard [I believe this is in reference to AGI Ruin: A List of Lethalities]. Lex confirms he will do an interview with Eliezer. Sama: Now is the time to ramp up technical alignment work. Lex: What about fast takeoffs? Sama: I'm not that surprised by GPT-4, was a little surprised by ChatGPT [I think this means this feels slow to him]. I'm in the long-takeoffs, short-timelines quadrant. I'm scared of the short-takeoff scenarios. Sama has heard of but not seen Ex Machina.
On power
Sama says it's weird that it will be OOM thousands of people in control of the first AGI. Acknowledges the AIS people think OAI deploying things fast is bad. Sama asks how Lex thinks they're doing. Lex likes the transparency and openly sharing the issues. Sama: Should we open source GPT-4? Lex: Knowing people at OAI, no (bc he trusts them). Sama: I think people at OAI know the stakes of what we're building. But we're always looking for feedback from smart people. Lex: How do you take feedback? Sama: Twitter is unreadable. Mostly from convos like this.
On responsibility
Sama: We will have very significant but new and different challenges [with governing/deciding how to steer AI]. Lex: Is it up to GPT or the humans to decrease the amount of hate in the world? Sama: I think we as OAI have responsibility for the tools we put out in the world, I think the tools can't have responsibility. Lex: So there could be harm caused by these tools. Sama: There will be harm caused by these tools. There will be tremendous benefits. But tools do wonderful good and real bad. And we will minimize the bad and maximize the good.
Jailbreaking
Lex: How do you prevent jailbreaking? Sama: It kinda sucks being on the side of the company being jailbroken. We want the users to have a lot of control and have the models behave how they want within broad bounds. The existence of jailbreaking shows we haven't solved that problem yet, and the more we solve it, the less need there will be for jailbreaking. People don't really jailbreak iPhones anymore.
Shipping products
Lex: shows this tweet summarizing all the OAI products in the last year. Sama: There's a question of should we be very proud of that or should other companies be very embarrassed. We have a high bar on our team, we work hard, we give a huge amount of trust, autonomy, and authority to individual people, and we try to hold each other to very high standards. These other things enable us to ship at such a high velocity. Lex: How do you go about hiring? Sama: I spend 1/3 of my time hiring, and I approve every OAI hire. There are no shortcuts to good hiring.
Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
- National Assembly Chairman Vương Đình Huệ and a Central working delegation hold a working session with the Standing Committee of the Bình Thuận Provincial Party Committee.
- Vietnam is short roughly 1 million workers in the technology sector.
- 180 Japanese passengers land at Đà Nẵng International Airport, officially resuming the direct Narita (Japan) - Đà Nẵng route.
- Hạ Long International Cruise Port (Quảng Ninh) welcomes more than 2,000 European tourists.
- The Vietnam Coast Guard carries out a peak campaign against illegal, unreported and unregulated (IUU) fishing; a VOV (Voice of Vietnam) reporter interviews Lieutenant General Bùi Quốc Oai, Political Commissar of the Vietnam Coast Guard, about this campaign.
- Russian President Putin announces that Russia will deploy tactical nuclear weapons on the territory of neighboring Belarus.
- The U.S. Federal Reserve says nearly 100 billion USD was withdrawn from U.S. commercial banks over the past week.
- The Managing Director of the International Monetary Fund warns that global financial risks are rising.
Topics: National Assembly Chairman, Vương Đình Huệ, Central working delegation, working session, Bình Thuận province --- Support this podcast: https://podcasters.spotify.com/pod/show/vov1thoisu0/support
Follow Monstercat Silk on all platforms - monster.cat/silk Tracklist 1. Approaching Black - Stella Held My Hand [Monstercat Silk] [00:35] 2. Talamanca & Roald Velden - Silk Road [Monstercat Silk] [06:15] 3. Vintage & Morelli - Karibu [Monstercat Silk] [10:54] 4. Forty Cats & Ra5im - Hope [Monstercat Silk] [15:33] Silk Exclusive: 5. Keanler & Lewyn - Oxygen [Monstercat Silk] [20:42] 6. OAI & Ra5im - Echoes Of Silence [Monstercat Silk] [24:35] 7. Into The Ether - Back To Me [Monstercat Silk] [29:48] Silk Spotlight: 8. Aether - Sapphire [Monstercat Silk] [32:42] 9. Shingo Nakamura - Falling Off (Club Mix) [Monstercat Silk] [35:29] 10. PROFF - Nara [Monstercat Silk] [38:27] 11. Lane 8 - Closer (Falden Remix) [This Never Happened] [45:10] 12. Vintage and Morelli & Anthony Nikita - Terra Nuova [Anjunabeats] [48:31] 13. Spooky - Belong (Sasha Involver Remix) [Prankster Edit] [Anjunadeep] [53:41] Thank you for listening to Monstercat Silk Showcase!
Follow Monstercat Silk on all platforms - monster.cat/silk Tracklist 1. Direct & Harvey - I Love You [Monstercat Silk] [00:35] 2. Liam Thomas - Waterloo [Monstercat Silk] [03:32] 3. LAR - Haze [Monstercat Silk] [05:08] 4. Jay FM - Borderline [Monstercat Silk] [06:28] 5. Bound to Divide x Julian Gray x Avrii Castle - Losing My Mind [Monstercat Silk] [08:34] 6. Elypsis - Compass [Monstercat Silk] [10:07] 7. Dokho & Banaati - Espérance [Monstercat Silk] [12:28] 8. Enjac - you said you'd try [Monstercat Silk] [15:08] 9. Feathervane & OREONIC - Overcast [Monstercat Silk] [18:10] 10. OAI & Ra5im - Dream Says Hello [Monstercat Silk] [19:24] 11. ATTLAS & Mango - Over The Water [Monstercat Silk] [21:30] 12. Cloudcage & Feathervane - Arcus [Monstercat Silk] [25:36] 13. OCULA - Renaissance (feat. Luke Coulson) [Monstercat Silk] [30:31] 14. Claes Rosen - True Love [Monstercat Silk] [33:24] 15. Shingo Nakamura - Falling Off [Monstercat Silk] [35:28] 16. Vintage & Morelli - Once Upon A World [Monstercat Silk] [38:09] 17. PROFF - Nara [Monstercat Silk] [40:33] 18. Hausman, Wynnwood & Lumynesynth - Calliope (Club Mix) [Monstercat Silk] [43:11] 19. Odsen & Katrine Steinbekk - Horizon [Monstercat Silk] [46:45] 20. Flexible Fire - Las Rosas [Monstercat Silk] [48:21] 21. Stendahl - Austrumi [Monstercat Silk] [49:59] 22. Fløa & Astroleaf - Anna [Monstercat Silk] [51:44] 23. Approaching Black - A Cause And Effect [Monstercat Silk] [53:24] 24. Blood Groove & Kikis x Brandon Mignacca - Let Me Hold You [Monstercat Silk] [55:47] Silk Exclusive: 25. ID - ID [Monstercat Silk] [58:01] Thank you for listening to Monstercat Silk Showcase!
Follow Monstercat Silk on all platforms - monster.cat/silk Tracklist 1. Cloudcage - Drift [Monstercat Silk] [00:35] 2. Ra5im - Talk To Me [Monstercat Silk] [02:32] 3. Glaue - Everything Changes [Monstercat Silk] [04:10] 4. Aftruu - Addiction [Monstercat Silk] [05:46] 5. Dokho & Banaati - Dream Cycle [Monstercat Silk] [06:50] 6. Into The Ether & Lumynesynth - Dive [Monstercat Silk] [09:28] 7. OAI & Ra5im - Dream Says Hello [Monstercat Silk] [11:17] 8. Bound to Divide & Lauren L'aimant - When The Sun Goes Down [Monstercat Silk] [14:25] 9. Odsen - Flip The Coin [Monstercat Silk] [15:27] 10. Eminence x Weston & Teston - Take Me Away (ft. Meredith Bull) [Monstercat Silk] [17:15] 11. Blood Groove & Kikis x Brandon Mignacca - Let Me Hold You [Monstercat Silk] [19:50] 12. Banaati & Brandon Mignacca - Hopeful [Monstercat Silk] [21:00] 13. Feathervane & OREONIC - Overcast [Monstercat Silk] [23:34] 14. Bound to Divide - All I Need [Monstercat Silk] [23:57] 15. OCULA - Renaissance (ft. Luke Coulson) [Monstercat Silk] [26:52] 16. PROFF - Nibbana [Monstercat Silk] [28:36] 17. Approaching Black - Make You Mine (ft. Indi Starling) [Monstercat Silk] [31:16] 18. Fløa & Astroleaf - Anna [Monstercat Silk] [32:53] 19. Direct & Harvey - I Love You [Monstercat Silk] [34:51] 20. OCULA - Waiting [Monstercat Silk] [37:16] 21. OAI & Ra5im - Echoes Of Silence [Monstercat Silk] [38:30] 22. Glaue - Coyote [Monstercat Silk] [39:33] 23. ATTLAS & Mango - Over The Water [Monstercat Silk] [41:38] 24. Aftruu - Words Left Unsaid [Monstercat Silk] [43:53] 25. Banaati - No Time [Monstercat Silk] [45:27] 26. Bound to Divide x Julian Gray x Avrii Castle - Losing My Mind [Monstercat Silk] [47:02] 27. Elypsis - Compass [Monstercat Silk] [48:35] 28. Terry Da Libra - Sparkles [Monstercat Silk] [50:08] 29. LAR & Kliran.B - Hardway [Monstercat Silk] [51:12] 30. Flexible Fire & Etza - Sunlight [Monstercat Silk] [52:16] 31. Vintage & Morelli - Karibu [Monstercat Silk] [53:53] 32. Elypsis - All Around Me [Monstercat Silk] [54:56] 33. Enjac - you said you'd try [Monstercat Silk] [56:29] 34. zensei - dreaming of you [Monstercat Silk] [57:45] 35. Scarr. & Finding Mero - i can feel your eyes on me [Monstercat Silk] [59:06] Thank you for listening to Monstercat Silk Showcase!
Follow the show: https://www.monstercat.com/COTW Tracklist 00:45 No Mana - Space (ft. ill-esha) [Instinct Spotlight] 04:45 Curbi - Vertigo (ft. PollyAnna) 07:15 WILL K & Drove - Ghost 10:54 Pegboard Nerds - Downhearted (ft. Jonny Rose) [Chimeric Remix] 13:10 Reach - Throw Handz 15:44 Chime - Bring Me Back [Monstercat Exclusive] 19:03 Papa Khan - So Far Away 22:07 Sullivan King & Wooli - Let Me Go [Uncaged Spotlight] 25:45 ROY KNOX, hayve & Mike Robert - Bad Habits [Gold Feature] 28:41 Pixel Terror - Ultima 31:47 Maazel - Crashing Down 33:25 San Holo - They Just Haven't Seen It (ft. The Nicholas) 37:32 Mr FijiWiji & Direct - Tomorrow (ft. Matt Van & Holly Drummond) 41:17 Alex H - Eagle Rock [Silk Spotlight] 45:09 Eminence x Weston & Teston - Around You [Monstercat Exclusive] 48:15 Tony Romera - My Mind (ft. Karina Ramage) [Badjokes Remix] [Monstercat Exclusive] 52:18 OAI & Ra5im - Dream Says Hello 57:03 Koven & Crystal Skies - You Me And Gravity [Community Pick] Thank you for listening to Monstercat: Call of the Wild!
Follow Monstercat Silk on all platforms - monster.cat/silk Tracklist 1. Vintage & Morelli - Karibu [Monstercat Silk] [00:35] 2. Approaching Black - A Cause And Effect [Monstercat Silk] [06:15] 3. Flexible Fire & Etza - Cyan [Monstercat Silk] [09:37] 4. OAI & Ra5im - Echoes Of Silence [Monstercat Silk] [14:00] 5. Maarja Nuut x Ruum x Sultan + Shepard - Kuud Kuulama (Sultan + Shepard Remix) [This Never Happened] [18:42] 6. Maestro Chives & Martin Graff - The Moment [Monstercat Silk] [23:37] 7. PROFF - Nara [Monstercat Silk] [28:31] 8. Forty Cats & Arentis - The Lost Ancient Charm (Club Mix) [Monstercat Silk] [34:27] Silk Exclusive: 9. Alex H - Eagle Rock [Monstercat Silk] [39:06] 10. Vintage & Morelli & Anthony Nikita - Terra Nuova [Anjunabeats] [45:02] Silk Spotlight: 11. zensei ゼンセー - Dreaming of You [Monstercat Silk] [52:15] 12. Veeshy & Phonic Youth - Arcade Highs [Monstercat Silk] [55:27] Thank you for listening to Monstercat Silk Showcase!
The mid-September 2022 episode features tracks by Cedric Gervais, Heard Right & (Oai), D'nox &…
Full Spectrum - Trance, Psytrance, Progressive, Breaks, Bass, EDM - Mixed by frequenZ phaZe
"You are some kind of a mystery suspended between two eternities. And when a mind looks out at the world and asks the question, ‘What is it?'... In that moment art is created." - Terence McKenna || 01. Bound to Divide - When The Sun Goes Down [Monstercat] (fZ Intro Mix) || 02. OAI & Ra5im - Dream Says Hello [Monstercat] || 03. Chicane - Offshore (Evolution Mix) [Modena] || 04. Ranj Kaler ft. ASYN - Last Sunset (Ranj Kaler Extended Sunset Breaks Mix) [Dissident] || 05. ORNICAN - Refuse You [Intricate] || 06. KAMADEV & Aeron Aether - Louvre [Maldesoule] || 07. Leena Punks - On The Floor [Anjunabeats] || 08. Solanca & Joel Oliver - Seed [Songspire] || 09. RYAN (CUB) - Aufklarung (Static Guru Remix) [One Of A Kind] || 10. 06R - Night Night Sky (Dark Sky Version) [Zodiac13] || 11. Ákos Győrfy - Outside Of The Box (Retroid Remix) [Morphosis] || 12. Huminal & Paul Deetman - Dancing Underwater [Songspire] Never miss an episode! Subscribe to the Full Spectrum podcast, find the latest releases at https://ffaze.com
Follow Monstercat Silk on all platforms - monster.cat/silk Tracklist 1. Forty Cats & Ra5im - Hope [Monstercat] [00:35] 2. Andromedha - Only Wide Open Space And Me [Monstercat] [06:53] 3. OAI & Ra5im - Echoes Of Silence [Monstercat] [12:38] 4. ATTLAS & Mango - Over The Water [Monstercat] [17:57] Silk Exclusive: 5. Heard Right & Meeting Molly - Mirror Of Erised [Monstercat] [22:02] 6. Into The Ether - Back To Me [Monstercat] [25:44] 7. Cressida - Beacon [Monstercat] [31:15] Silk Spotlight: 8. Manu Zain - Lois Eyes [Monstercat] [36:14] 9. Fløa - Journey [Monstercat] [38:44] 10. Stendahl - Parhelion [Monstercat] [43:39] 11. Sol Rising & Banaati - Arise [Monstercat] [48:20] 12. Martin Roth - Make Love To Me Baby [Anjunadeep] [53:02] Thank you for listening to Monstercat Silk Showcase!
Follow Monstercat Silk on all platforms - monster.cat/silk Tracklist 1. Stendahl - Paraselene [Monstercat] [00:35] 2. Embliss & Lumynesynth - Phases of the Moon [Monstercat] [05:48] 3. Skua & Cosmaks x Anita Tatlow - Life [Monstercat] [10:55] 4. Fløa - Watch When I'm Near [Monstercat] [15:09] 5. Skua & Cosmaks x Anita Tatlow - Time [Monstercat] [19:30] Silk Spotlight: 6. Vintage & Morelli - On The Beach [Monstercat] [25:34] Silk Exclusive: 7. Fløa & Astroleaf - Anna [Monstercat] [29:16] 8. Andromedha - Only Wide Open Space And Me [Monstercat] [33:53] 9. OAI & Ra5im - Echoes Of Silence [Monstercat] [39:40] 10. ATTLAS & Mango - Over The Water [Monstercat] [45:00] 11. Odsen - Your Way [Monstercat] [49:04] 12. Fløa - Journey [Monstercat] [52:17] 13. PROFF pres. Soultorque x flowanastasia - Sound & Silence [Monstercat] [56:48] Thank you for listening to Monstercat Silk Showcase!
Follow the show: https://www.monstercat.com/COTW Tracklist 00:45 Ellis & Nu Aspect - U (2017) 06:03 helloworld - see str8 [Instinct Spotlight] 09:55 CloudNone - Spring Snow 14:43 OAI & Ra5im - Echoes Of Silence [Silk Spotlight] 18:12 Mr FijiWiji - Yours Truly (ft. Danyka Nadeau) 22:21 Stonebank - Another Day (ft. EMEL) 25:51 Grant & Ellis - Dead Man Walking [Gold Feature] 29:53 Justin OH & Nitro Fun - Killswitch 32:45 Mazare & Calva Louise - Throne [Uncaged Spotlight] 36:38 Leah Culver - Cold 39:50 Dion Timmer - Panic [Community Pick] 43:34 Maazel & Darby - Mirrors (ft. BELELA) [Monstercat Exclusive] 45:21 Stonebank - Eagle Eyes (ft. EMEL) 50:23 Eptic - Power 54:26 Bossfight - Toxic [Monstercat Exclusive] 57:51 Lookas - Eclipse Thank you for listening to Monstercat: Call of the Wild!
Follow Monstercat Silk on all platforms - monster.cat/silk Tracklist 1. Angara - Rwanda [Monstercat Silk] [00:35] 2. LOOSID - Talisman (ft. Raycee Jones & Lyon Hart) [Monstercat Instinct] [03:36] 3. Noel Sanger & Mezo - Believed In You [Monstercat Silk] [05:36] 4. Kage & SLATIN - Limit [Monstercat Uncaged] [07:07] 5. Pegboard Nerds - Blackout [Monstercat Uncaged] [08:46] 6. CloudNone & Direct - Arms Race [Monstercat Instinct] [09:50] 7. Shingo Nakamura - Glow [Monstercat Silk] [12:14] 8. EDDIE - Night Runners (ft. Voicians) [Monstercat Uncaged] [13:30] 9. Infected Mushroom, Freedom Fighters & Mr. Bill - Freedom Bill [Monstercat Uncaged] [15:29] 10. Direct & CloudNone - Elixir [Monstercat Instinct] [17:25] 11. Tisoki - GLASS [Monstercat Uncaged] [19:20] 12. Godlands - GODSP33D [Monstercat Uncaged] [21:41] 13. Rameses B - Never Forget [Monstercat Instinct] [22:08] 14. zensei ゼンセー & Mr. Hilroy - patience [Monstercat Silk] [23:46] 15. DROELOE - Bon Voyage [Monstercat Instinct] [25:26] 16. AK & Liam Thomas - Purple [Monstercat Silk] [26:33] 17. Drinks On Me - Falling Down [Monstercat Instinct] [28:27] 18. Maliboux - Remedy [Monstercat Instinct] [30:24] 19. Nigel Good - Discover [Monstercat Silk] [31:09] 20. Eminence x Weston & Teston - Take Me Away (ft. Meredith Bull) [Monstercat Silk] [32:39] 21. Melchi - Lights [Monstercat Silk] [34:42] 22. Conro - back2u [Monstercat Instinct] [36:47] 23. Bound to Divide & Lauren L'aimant - When The Sun Goes Down [Monstercat Silk] [38:06] 24. Bound to Divide - Holding Down [Monstercat Silk] [40:41] 25. Rootkit - Voyage [Monstercat Instinct] [42:15] 26. Melchi - Lights [Monstercat Silk] [43:22] Silk Spotlight: 27. Embliss & Lumynesynth - Phases of the Moon [Monstercat Silk] [45:37] 28. Sound Quelle & Referna - Lauria [Monstercat Silk] [47:32] 29. Seaways & Fløa - Not Sorry [Monstercat Silk] [49:10] 30. Throttle - Japan [Monstercat Instinct] [51:01] 31. Bad Computer - Blue [Monstercat Silk] [52:35] 32. Vicetone - Animal (ft. Jordan Powers & Bekah Novi) [Monstercat Silk] [54:10] 33. Tony Romera & OddKidOut - I'll Love U [Monstercat Instinct] [55:58] Silk Exclusive: 34. OAI & Ra5im - Echoes Of Silence [Monstercat Silk] [57:31] Thank you for listening to Monstercat Silk Showcase!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slowing down AI progress is an underexplored alignment strategy, published by Michael Huang on July 13, 2022 on The Effective Altruism Forum. LessWrong user "Norman Borlaug" has an interesting post on reducing existential risk through deliberate overregulation of AI development: In the latest Metaculus forecasts, we have 13 years left until some lab somewhere creates AGI, and perhaps far less than that until the blueprints to create it are published and nothing short of a full-scale nuclear war will stop someone somewhere from doing so. The community strategy (insofar as there even is one) is to bet everything on getting a couple of technical alignment folks onto the team at top research labs in the hopes that they will miraculously solve alignment before the mad scientists in the office next door turn on the doomsday machine. While I admit there is at least a chance this might work, and it IS worth doing technical alignment research, the indications we have so far from the most respected people in the field are that this is an extremely hard problem and there is at least a non-zero chance it is fundamentally unsolvable. There are a dozen other strategies we could potentially deploy to achieve alignment, but they all depend on someone not turning on the doomsday machine. But thus far we have almost completely ignored the class of strategies that might buy more time. The cutting edge of thought on this front seems to come from one grumpy former EA founder on Twitter who isn't even trying that hard. From Kerry Vaughan's Twitter thread: I've recently learned that this is a spicy take on AI Safety: AGI labs (eg OpenAI, DeepMind, and others) are THE CAUSE of the fundamental problem the AI Safety field faces. I thought this was obvious until very recently. Since it's not, I should explain my position. (I'll note that while I single out OpenAI and DeepMind here, that's only because they appear to be advancing the cutting edge the most. This critique applies to any company or academic researcher that spends their time working to solve the bottlenecks to building AGI.) To vastly oversimplify the situation, you can think of AI Safety as a race. In one corner you have the AGI builders who are trying to create AGI as fast as possible. In the other corner, you have people trying to make sure AGI will be aligned with human goals once we build it. If AGI gets built before we know how to align it, it might be CATASTROPHIC. Fortunately, aligning an AGI is unlikely to be impossible. So, given enough time and effort into the problem, we will eventually solve it. This means the actual enemy is time. If we have enough time to both find capable people and have them work productively on the problem, we will eventually win. If not, we lose. I think the fundamental dynamic is really just that simple. AGI labs like OAI and DeepMind have it as their MISSION to decrease the time we have. Their FOUNDING OBJECTIVE is to build AGI and they are very clearly and obviously trying as hard as they can to do just that. They raise money, hire talent, etc. all premised on this goal. Every day an AGI engineer at OpenAI or DeepMind shows up to work and tries to solve the current bottlenecks in creating AGI, we lose just a little bit of time. Every day they show up to work, the odds of victory get a little bit lower.
My very bold take is that THIS IS BAD. Now you might be thinking: "Demis Hassabis and Sam Altman are not psychopaths or morons. If they get close to AGI without solving alignment they can just not deploy the AGI." There are a number of problems with this, but the most obvious is: they're still robbing us of time. Every. Single. Day. the AGI labs are steadily advancing the state of the art on building AGI. With every new study they publish, researcher they train, and technology they commercialize, ...