Our next SF event is AI UX 2024 - let's see the new frontier for UX since last year! Last call: we are recording a preview of the AI Engineer World's Fair with swyx and Ben Dunphy; send any questions about Speaker CFPs and Sponsor Guides you have!

Alessio is now hiring engineers for a new startup he is incubating at Decibel. The ideal candidate is an "ex-technical co-founder type". Reach out to him for more!

David Luan has been at the center of the modern AI revolution: he was the ~30th hire at OpenAI, he led Google's LLM efforts and co-led Google Brain, and then started Adept in 2022, one of the leading companies in the AI agents space. In today's episode, we asked David for some war stories from his time in early OpenAI (including working with Alec Radford ahead of the GPT-2 demo with Sam Altman that resulted in Microsoft's initial $1b investment), and how Adept is building agents that can "do anything a human does on a computer" — his definition of useful AGI.

Why Google *couldn't* make GPT-3

While we wanted to discuss Adept, we couldn't talk to a former VP of Engineering at OpenAI and former LLM tech lead at Google Brain and not ask about the elephant in the room. It's often asked how Google had such a huge lead in 2017, with Vaswani et al creating the Transformer and Noam Shazeer predicting trillion-parameter models, and yet it was David's team at OpenAI that ended up making GPT-1/2/3. David has some interesting answers:

"So I think the real story of GPT starts at Google, of course, right? Because that's where Transformers sort of came about. However, the number one shocking thing to me was that, and this is like a consequence of the way that Google is organized… what they (should) have done would be say, hey, Noam Shazeer, you're a brilliant guy. You know how to scale these things up. Here's half of all of our TPUs. And then I think they would have destroyed us. He clearly wanted it too… You know, every day we were scaling up GPT-3, I would wake up and just be stressed. And I was stressed because, you know, you just look at the facts, right? Google has all this compute. Google has all the people who invented all of these underlying technologies. There's a guy named Noam who's really smart, who's already gone and done this talk about how he wants a trillion parameter model. And I'm just like, we're probably just doing duplicative research to what he's doing. He's got this decoder only transformer that's probably going to get there before we do. And it turned out the whole time that they just couldn't get critical mass. So during my year where I led the Google LM effort and I was one of the brain leads, you know, it became really clear why. At the time, there was a thing called the Brain Credit Marketplace. Everyone's assigned a credit. So if you have a credit, you get to buy N chips according to supply and demand. So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work. And if that's how it works, it's really hard to get that bottom up critical mass to go scale these things. And the team at Google were fighting valiantly, but we were able to beat them simply because we took big swings and we focused."

Cloning HGI for AGI

Human intelligence got to where it is today through evolution.
Some argue that to get to AGI, we will approximate all the "FLOPs" that went into that process, an approach most famously mapped out by Ajeya Cotra's Biological Anchors report.

The early days of OpenAI were very reinforcement learning-driven with the Dota project, but that's a very inefficient way for these models to re-learn everything. (Kanjun from Imbue shared similar ideas in her episode.)

David argues that there's a shortcut: we can bootstrap from existing intelligence.

"Years ago, I had a debate with a Berkeley professor as to what will it actually take to build AGI. And his view is basically that you have to reproduce all the flops that went into evolution in order to be able to get there… I think we are ignoring the fact that you have a giant shortcut, which is you can behaviorally clone everything humans already know. And that's what we solved with LLMs!"

LLMs today basically model intelligence using all (good!) written knowledge (see our Datasets 101 episode), and have now expanded to non-verbal knowledge (see our HuggingFace episode on multimodality). The SOTA self-supervised pre-training process is surprisingly data-efficient in taking large amounts of unstructured data and approximating reasoning without overfitting.
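Since the behavioral-cloning shortcut is the technical crux of this section, here is a minimal sketch of what it looks like as a training objective: plain supervised next-token prediction over logged human demonstrations (goal text, screen observations, actions), with no reward signal and no environment rollouts. Everything below (the tiny stand-in model, the fake batch, the hyperparameters) is illustrative, not Adept's or OpenAI's actual stack.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Behavioral cloning over computer use, sketched as next-token prediction.
# A "demonstration" is a tokenized human trajectory: goal text, screen
# observations, and the actions the human actually took.

class TinyCloner(nn.Module):
    """Illustrative stand-in for a decoder-only LM (a real one would use a causal mask)."""
    def __init__(self, vocab_size: int = 32_000, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(self.embed(tokens)))

def behavioral_cloning_loss(model: nn.Module, demo: torch.Tensor) -> torch.Tensor:
    """Cross-entropy on the human's next token/action: no reward, no rollout."""
    logits = model(demo[:, :-1])
    targets = demo[:, 1:]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

# One gradient step on a (placeholder) batch of tokenized human demonstrations.
model = TinyCloner()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
demos = torch.randint(0, 32_000, (4, 128))  # stands in for real demonstration data
loss = behavioral_cloning_loss(model, demos)
loss.backward()
opt.step()
```

The contrast with de novo RL is the point: rather than rediscovering how the world works from reward alone, the model imitates what humans already do, and the RL-era lessons can be layered on top later.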
But how do you cross the gap from the LLMs of today to building the AGI we all want? This is why David & friends left to start Adept.

"We believe the clearest framing of general intelligence is a system that can do anything a human can do in front of a computer. A foundation model for actions, trained to use every software tool, API, and webapp that exists, is a practical path to this ambitious goal" — ACT-1 Blogpost

Critical Path: Abstraction with Reliability

The AGI dream is fully autonomous agents, but there are levels of autonomy we are comfortable giving our agents, based on how reliable they are. In David's word choice, we always want higher levels of "abstraction" (aka autonomy), but our need for "reliability" is the practical limit on how high an abstraction we can use.

"The critical path for Adept is we want to build agents that can do higher and higher level abstraction things over time, all while keeping an insanely high reliability standard. Because that's what turns us from research into something that customers want. And if you build agents with a really high reliability standard, but are continually pushing the level of abstraction, you then learn from your users how to get that next level of abstraction faster. So that's how you actually build the data flow. That's the critical path for the company. Everything we do is in service of that."

We saw how Adept thinks about different levels of abstraction at the 2023 Summit: the highest abstraction is the "AI Employee", but we'll get there with "AI-enabled employees". Alessio recently gave a talk about the future of work with "services as software" at this week's Nvidia GTC (slides).

No APIs

Unlike a lot of large research labs, Adept's framing of AGI as "being able to use your computer like a human" carries with it a useful environmental constraint:

"Having a humanoid robot lets you do things that humans do without changing everything along the way. It's the same thing for software, right? If you go itemize out the number of things you want to do on your computer for which every step has an API, those numbers of workflows add up pretty close to zero. And so then at many points along the way, you need the ability to actually control your computer like a human. It also lets you learn from human usage of computers as a source of training data that you don't get if you have to somehow figure out how every particular step needs to be some particular custom private API thing. And so I think this is actually the most practical path (to economic value)."

This realization and conviction means that multimodal models are the way to go. Instead of using function calling to call APIs to build agents, which is what OpenAI and most of the open LLM industry have done to date, Adept wants to "drive by vision" (aka see the screen as a human sees it) and pinpoint where to click and type as a human does. No APIs needed, because most software doesn't expose APIs.

Extra context for readers: you can see the DeepMind SIMA model in the same light: one system that learned to play a diverse set of games (instead of one dedicated model per game) using only pixel inputs and keyboard-and-mouse action outputs! The OpenInterpreter team is working on a "Computer API" that also does the same.

To do this, Adept had to double down on a special kind of multimodality for knowledge work:

"A giant thing that was really necessary is really fast multimodal models that are really good at understanding knowledge work and really good at understanding screens. And that needs to kind of be the base for some of these agents… I think one big hangover of the primarily academic focus for multimodal models is most multimodal models are primarily trained on like natural images, cat and dog photos, stuff that's come out of the camera… (but) where are they going to be the most useful? They're going to be most useful in knowledge work tasks. That's where the majority of economic value is going to be. It's not in cats and dogs. And so if that's what it is, what do you need to train? I need to train on like charts, graphs, tables, invoices, PDFs, receipts, unstructured data, UIs. That's just a totally different pre-training corpus. And so Adept spent a lot of time building that."

With this context, you can now understand the full path of Adept's public releases:

* ACT-1 (Sept 2022): a large Transformer model optimized for browser interactions. It has a custom rendering of the browser viewport that allows it to better understand the page and take actions.
* Persimmon-8B (Sept 2023): a permissive open LLM (weights and code here)
* Fuyu-8B (Oct 2023): a small version of the multimodal model that powers Adept. A vanilla decoder-only transformer with no specialized image encoder, which allows it to handle input images of varying resolutions without downsampling.
* Adept Experiments (Nov 2023): a public tool to build automations in the browser. This is powered by Adept's core technology, but it's just a piece of their enterprise platform. They use it as a way to try various design ideas.
* Fuyu Heavy (Jan 2024): a new multimodal model designed specifically for digital agents and the world's third-most-capable multimodal model (beating Gemini Pro on MMMU, AI2D, and ChartQA), "behind only GPT4-V and Gemini Ultra, which are 10-20 times bigger"

The Fuyu-8B post in particular exhibits a great number of examples of knowledge work multimodality.
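To make the "drive by vision" framing concrete, here is a deliberately minimal sketch of the kind of perceive-decide-act cycle such an agent runs: screenshot in, goal plus pixels to a multimodal model, a grounded click/type action out, repeat. The propose_action stub and its action schema are invented for illustration; this is not Adept's API, and a production agent would wrap this loop in the guardrails and reliability machinery discussed above.

```python
import pyautogui  # real library: takes screenshots and drives the mouse/keyboard

def propose_action(goal: str, screenshot) -> dict:
    """Hypothetical stand-in for a multimodal agent model (e.g. a Fuyu-class model).
    It looks at the raw pixels plus the goal and returns one grounded action, e.g.:
      {"kind": "click", "x": 412, "y": 230}
      {"kind": "type", "text": "quarterly invoices"}
      {"kind": "done"}
    The action schema is invented for this sketch."""
    raise NotImplementedError("call your multimodal model here")

def run_agent(goal: str, max_steps: int = 20) -> None:
    """Minimal perceive -> decide -> act loop: no per-app APIs, only what a human
    sees (the screen) and does (clicks and keystrokes)."""
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()        # perceive: pixels, like a human
        action = propose_action(goal, screenshot)  # decide: model picks the next step
        if action["kind"] == "done":
            return
        if action["kind"] == "click":
            pyautogui.click(action["x"], action["y"])
        elif action["kind"] == "type":
            pyautogui.typewrite(action["text"])
    # Out of steps: escalate to a human instead of guessing, in line with the
    # augmentation / reliability framing above.
    print(f"Did not finish {goal!r} in {max_steps} steps; handing off to a human.")

# Example (would require a display and a real model behind propose_action):
# run_agent("Log yesterday's call in the CRM")
```

The loop itself is trivial; the reliability problem the post keeps returning to lives almost entirely inside propose_action and in what happens when it fails.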
Why Adept is NOT a Research Lab

With OpenAI now worth >$90b and Anthropic >$18b, it is tempting to conclude that the AI startup metagame is to build a large research lab, and attract the brightest minds and highest capital to build AGI. Our past guests Raza (see the Humanloop episode) and Kanjun (from Imbue) combined to ask the most challenging questions of the pod: with David/Adept's deep research pedigree from DeepMind and OpenAI, why is Adept not building more general foundation models (like Persimmon) and playing the academic benchmarks game? Why is Adept so focused on commercial agents instead?

"I feel super good that we're doing foundation models in service of agents and all of the reward within Adept is flowing from 'Can we make a better agent'… I think pure play foundation model companies are just going to be pinched by how good the next couple of (Meta Llama models) are going to be… And then seeing the really big players put ridiculous amounts of compute behind just training these base foundation models, I think is going to commoditize a lot of the regular LLMs and soon regular multimodal models. So I feel really good that we're just focused on agents."

And the commercial grounding is his answer to Kanjun too (whom we also asked the inverse question, to compare with Adept):

"… the second reason I work at Adept is if you believe that actually having customers and a reward signal from customers lets you build AGI faster, which we really believe, then you should come here. And I think the examples for why that's true is, for example, our evaluations are not academic evals. They're not simulator evals. They're like, okay, we have a customer that really needs us to do these particular things. We can do some of them. These are the ones they want us to do that we can't do at all. We've turned those into evals… I think that's a degree of practicality that really helps."

And his customers seem pretty happy, because David didn't need to come on to do a sales pitch:

David: "One of the things we haven't shared before is we're completely sold out for Q1."

Swyx: "Sold out of what?"

David: "Sold out of bandwidth to onboard more customers."

Well, that's a great problem to have.

Show Notes

* David Luan
* Dextro at Data Driven NYC (2015)
* Adept
* ACT-1
* Persimmon-8B
* Adept Experiments
* Fuyu-8B
* $350M Series B announcement
* Amelia Wattenberger talk at AI Engineer Summit
* Figure

Chapters

* [00:00:00] Introductions
* [00:01:14] Being employee #30 at OpenAI and its early days
* [00:13:38] What is Adept and how do you define AGI?
* [00:21:00] Adept's critical path and research directions
* [00:26:23] How AI agents should interact with software and impact product development
* [00:30:37] Analogies between AI agents and self-driving car development
* [00:32:42] Balancing reliability, cost, speed and generality in AI agents
* [00:37:30] Potential of foundation models for robotics
* [00:39:22] Core research questions and reasons to work at Adept

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:15]: Hey, and today we have David Luan, CEO and co-founder of Adept, in the studio. Welcome.

David [00:00:20]: Yeah, thanks for having me.

Swyx [00:00:21]: Been a while in the works. I've met you socially at one of those VC events and you said that you were interested in coming on, and glad we finally were able to make this happen.

David: Yeah, happy to be part of it.

Swyx: So we like to introduce the speaker and then also just like have you talk a little bit about like what's not on your LinkedIn, what people should just generally know about you.
You started a company in college, which was the first sort of real-time video detection and classification API, that was Dextro, and that was your route to getting acquired into Axon, where you were a director of AI. Then you were the 30th hire at OpenAI?

David [00:00:53]: Yeah, 30, 35, something around there. Something like that.

Swyx [00:00:56]: So you were VP of Eng for two to two and a half years, briefly served as tech lead of large models at Google, and then in 2022 started Adept. So that's the sort of brief CV. Is there anything else you like want to fill in the blanks, or like people should know more about?

David [00:01:14]: I guess a broader story was I joined OpenAI fairly early and I did that for about two and a half to three years leading engineering there. It's really funny, I think the second or third day of my time at OpenAI, Greg and Ilya pulled me in a room and were like, you know, you should take over our directs and we'll go mostly do IC work. So that was fun, just coalescing a bunch of teams out of a couple of early initiatives that had already happened. The company, the Dota effort was going pretty hard, and then more broadly trying to put bigger picture direction around what we were doing with basic research. So I spent a lot of time doing that. And then I led Google's LLM efforts, but also co-led Google Brain, was one of the Brain leads more broadly. You know, there's been a couple of different eras of AI research, right? If we count everything before 2012 as prehistory, which people hate it when I say that, we kind of had this like "you and your three best friends write a research paper that changes the world" period from like 2012 to 2017. And I think the game changed in 2017 and like most labs didn't realize it, but we at OpenAI really did. I think in large part helped by like Ilya's constant beating of the drum that the world would be covered in data centers. And I think-

Swyx [00:02:15]: It's causally neat.

David [00:02:16]: Yeah. Well, like I think we had conviction in that, but it wasn't until we started seeing results that it became clear that that was where we had to go. But also part of it as well was for OpenAI, like when I first joined, I think one of the jobs that I had to do was how do I tell a differentiated vision for who we were technically compared to, you know, hey, we're just smaller Google Brain, or like you work at OpenAI if you live in SF and don't want to commute to Mountain View or don't want to live in London, right? That's like not enough to like hang your technical identity on as a company. And so what we really did was, and I spent a lot of time pushing this, is just how do we get ourselves focused on a certain class of like giant swings and bets, right? Like how do you flip the script from you just do bottom-up research to more about how do you like leave some room for that, but really make it about like, what are the big scientific outcomes that you want to show? And then you just solve them at all costs, whether or not you care about novelty and all that stuff. And that became the dominant model for a couple of years, right? And then what's changed now is I think the number one driver of AI products over the next couple of years is going to be the deep co-design and co-evolution of product and users for feedback and actual technology. And I think labs that have every tool to go do that are going to do really well.
And that's a big part of why I started Adept.

Alessio [00:03:20]: You mentioned Dota, any memories thinking from like the switch from RL to Transformers at the time and kind of how the industry was evolving more on the LLM side and leaving behind some of the more agent simulation work?

David [00:03:33]: Like zooming way out, I think agents are just absolutely the correct long-term direction, right? You just go define what AGI is, right? You're like, hey, like, well, first off, actually, I don't love AGI definitions that involve human replacement because I don't think that's actually how it's going to happen. Even this definition of like, hey, AGI is something that outperforms humans at economically valuable tasks has kind of an implicit view of the world about what's going to be the role of people. I think what I'm more interested in is like a definition of AGI that's oriented around like a model that can do anything a human can do on a computer. If you go think about that, which is like super tractable, then an agent is just a natural consequence of that definition. And so what did all the work we did on our own stuff like that get us? It got us a really clear formulation. Like you have a goal and you want to maximize the goal, you want to maximize reward, right? And the natural LLM formulation doesn't come with that out of the box, right? I think that we as a field got a lot right by thinking about, hey, how do we solve problems of that caliber? And then the thing we forgot is that de novo RL is like a pretty terrible way to get there quickly. Why are we rediscovering all the knowledge about the world? Years ago, I had a debate with a Berkeley professor as to what will it actually take to build AGI. And his view is basically that you have to reproduce all the flops that went into evolution in order to be able to get there. Right.

Swyx [00:04:44]: The biological basis theory. Right.

David [00:04:46]: So I think we are ignoring the fact that you have a giant shortcut, which is you can behaviorally clone everything humans already know. And that's what we solved with LLMs. We've solved behavioral cloning, everything that humans already know. Right. So like today, maybe LLMs is like behaviorally cloning every word that gets written on the internet; in the future, the multimodal models are becoming more of a thing where they're behaviorally cloning the visual world. But really, what we're just going to have is like a universal byte model, right? Where tokens of data that have high signal come in, and then all of those patterns are like learned by the model. And then you can regurgitate any combination now. Right. So text into voice out, like image into other image out or video out or whatever, like these like mappings, right? Like all just going to be learned by this universal behavioral cloner. And so I'm glad we figured that out. And I think now we're back to the era of how do we combine this with all of the lessons we learned during the RL period. That's what's going to drive progress.

Swyx [00:05:35]: I'm still going to pressure you for a few more early OpenAI stories before we turn to the Adept stuff. On your personal site, which I love, because it's really nice, like personal, you know, story context around like your history. I need to update it. It's so old. Yeah, it's so out of date. But you mentioned GPT-2. Did you overlap with GPT-1? I think you did, right?

David [00:05:53]: I actually don't quite remember. I think I was joining right around- Right around then?

Swyx [00:05:57]: I was right around that, yeah.
Yeah. So what I remember was Alec, you know, just kind of came in and was like very obsessed with Transformers and applying them to like Reddit sentiment analysis. Yeah, sentiment, that's right. Take us through-

David [00:06:09]: Sentiment neuron, all this stuff.

Swyx [00:06:10]: The history of GPT as far as you know, you know, according to you. Ah, okay.

David [00:06:14]: History of GPT, according to me, that's a pretty good question. So I think the real story of GPT starts at Google, of course, right? Because that's where Transformers sort of came about. However, the number one shocking thing to me was that, and this is like a consequence of the way that Google is organized, where like, again, you and your three best friends write papers, right? Okay. So zooming way out, right? I think about my job when I was a full-time research leader as a little bit of a portfolio allocator, right? So I've got really, really smart people. My job is to convince people to coalesce around a small number of really good ideas and then run them over the finish line. My job is not actually to promote a million ideas and never have critical mass. And then as the ideas start coming together and some of them start working well, my job is to nudge resources towards the things that are really working and then start disbanding some of the things that are not working, right? That muscle did not exist during my time at Google. And I think had they had it, what they would have done would be say, hey, Noam Shazeer, you're a brilliant guy. You know how to scale these things up. Here's half of all of our TPUs. And then I think they would have destroyed us. He clearly wanted it too.

Swyx [00:07:17]: He's talking about trillion parameter models in 2017.

David [00:07:20]: Yeah. So that's the core of the GPT story, right? Which is that, and I'm jumping around historically, right? But after GPT-2, we were all really excited about GPT-2. I can tell you more stories about that. It was the last paper that I even got to really touch before everything became more about building a research org. You know, every day we were scaling up GPT-3, I would wake up and just be stressed. And I was stressed because, you know, you just look at the facts, right? Google has all this compute. Google has all the people who invented all of these underlying technologies. There's a guy named Noam who's really smart, who's already gone and done this talk about how he wants a trillion parameter model. And I'm just like, we're probably just doing duplicative research to what he's doing, right? He's got this decoder only transformer that's probably going to get there before we do. And I was like, but like, please just like let this model finish, right? And it turned out the whole time that they just couldn't get critical mass. So during my year where I led the Google LM effort and I was one of the Brain leads, you know, it became really clear why, right? At the time, there was a thing called the Brain Credit Marketplace. And did you guys know the Brain Credit Marketplace? No, I never heard of this. Oh, so it's actually, it's a, you can ask any Googler.

Swyx [00:08:23]: It's like just like a thing that, that, I mean, look like, yeah, limited resources, you got to have some kind of marketplace, right? You know, sometimes it's explicit, sometimes it isn't, you know, just political favors.

David [00:08:34]: You could. And so then basically everyone's assigned a credit, right? So if you have a credit, you get to buy N chips according to supply and demand.
So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work. And if that's how it works, it's really hard to get that bottom-up critical mass to go scale these things. And the team at Google were fighting valiantly, but we were able to beat them simply because we took big swings and we focused. And I think, again, that's like part of the narrative of like this phase one of AI, right? Of like this modern AI era to phase two. And I think in the same way, I think phase three companies are going to out-execute phase two companies because of the same asymmetry of success.

Swyx [00:09:12]: Yeah. I think it's underrated how much NVIDIA worked with you in the early days as well. I think maybe, I think it was Jensen. I'm not sure who circulated a recent photo of him delivering the first DGX to you guys.

David [00:09:24]: I think Jensen has been a complete legend and a mastermind throughout. I have so much respect for NVIDIA. It is unreal.

Swyx [00:09:34]: But like with OpenAI, did they kind of give their requirements, like co-design it, or just work with whatever NVIDIA gave them?

David [00:09:40]: So we worked really closely with them. There's, I'm not sure I can share all the stories, but examples of ones that I've found particularly interesting. So Scott Gray is amazing. I really like working with him. He was on one of my teams, the supercomputing team, which Chris Berner runs and Chris Berner still does a lot of stuff in that. As a result, like we had very close ties to NVIDIA. Actually, one of my co-founders at Adept, Erich Elsen, was also one of the early GPGPU people. So he and Scott and Bryan Catanzaro at NVIDIA and Jonah and Ian at NVIDIA, I think all were very close. And we're all sort of part of this group of how do we push these chips to the absolute limit? And I think that kind of collaboration helped quite a bit. I think one interesting set of stuff is knowing in the A100 generation that 2:4 sparsity was going to be a thing. Is that something that we want to go look into, right? And figure out if that's something that we could actually use for model training. Really what it boils down to is that, and I think more and more people realize this, six years ago, even three years ago, people refused to accept it. This era of AI is really a story of compute. It's really the story of how do you more efficiently map actual usable model flops to compute.

Swyx [00:10:38]: Is there another GPT-2, 3 story that you love to get out there that you think is underappreciated for the amount of work that people put into it?

David [00:10:48]: So two interesting GPT-2 stories. One of them was I spent a good bit of time just sprinting to help Alec get the paper out. And I remember one of the most entertaining moments was we were writing the modeling section. And I'm pretty sure the modeling section was the shortest modeling section of any ML, reasonably legitimate ML paper to that moment. It was like, Section 3: Model. This is a standard vanilla decoder-only transformer with like these particular things. It was a paragraph long, if I remember correctly. And both of us were just looking at it and being like, man, the OGs in the field are going to hate this. They're going to say no novelty. Why did you guys do this work?
So now it's funny to look at in hindsight that it was a kind of pivotal paper, but I think it was one of the early ones where we just leaned fully into all we care about is solving problems in AI, and not about, hey, are there like four different really simple ideas that are cloaked in mathematical language that doesn't actually help move the field forward?

Swyx [00:11:42]: Right. And it's like you innovate on maybe like data set and scaling and not so much the architecture.

David [00:11:48]: We all know how it works now, right? Which is that there's a collection of really hard-won knowledge that you get only by being at the frontiers of scale. And that hard-won knowledge, a lot of it's not published. A lot of it is stuff that's actually not even easily reducible to what looks like a typical academic paper. But yet that's the stuff that helps differentiate one scaling program from another. You had a second one? So the second one is, there's like some details here that I probably shouldn't fully share, but hilariously enough, for the last meeting we did with Microsoft before Microsoft invested in OpenAI, Sam Altman, myself and our CFO flew up to Seattle to do the final pitch meeting. And I'd been a founder before. So I always had a tremendous amount of anxiety about partner meetings, which this basically, this is what it was. I had Kevin Scott and Satya and Amy Hood, and it was my job to give the technical slides about what's the path to AGI, what's our research portfolio, all of this stuff, but it was also my job to give the GPT-2 demo. We had a slightly bigger version of GPT-2 that we had just cut maybe a day or two before this flight up. And as we all know now, model behaviors you find predictable at one checkpoint are not predictable in another checkpoint. And so I'd spent all this time trying to figure out how to keep this thing on rails. I had my canned demos, but I knew I had to go turn it over to Satya and Kevin and let them type anything in. And that just, that really kept me up all night.

Swyx [00:13:06]: Nice. Yeah.

Alessio [00:13:08]: I mean, that must have helped you, talking about partner meetings. You raised $420 million for Adept. The last round was a $350 million Series B, so I'm sure you do great in partner meetings.

Swyx [00:13:18]: Pitch meetings. Nice.

David [00:13:20]: No, that's a high compliment coming from a VC.

Alessio [00:13:22]: Yeah, no, I mean, you're doing great already for us. Let's talk about Adept. And we were doing pre-prep and you mentioned that maybe a lot of people don't understand what Adept is. So usually we try and introduce the product and then have the founders fill in the blanks, but maybe let's do the reverse. Like what is Adept? Yeah.

David [00:13:38]: So I think Adept is the least understood company in the broader space of foundational models plus agents. So I'll give some color and I'll explain what it is, and I'll explain also why it's actually pretty different from what people would have guessed. So the goal for Adept is we basically want to build an AI agent that can do, that can basically help humans do anything a human does on a computer. And so what that really means is we want this thing to be super good at turning natural language like goal specifications right into the correct set of end steps and then also have all the correct sensors and actuators to go get that thing done for you across any software tool that you already use.
And so the end vision of this is effectively like I think in a couple of years everyone's going to have access to like an AI teammate that they can delegate arbitrary tasks to and then also be able to, you know, use it as a sounding board and just be way, way, way more productive. Right. And it just changes the shape of every job from something where you're mostly doing execution to something where you're mostly actually doing like these core liberal arts skills of what should I be doing and why. Right. And I find this like really exciting and motivating because I think it's actually a pretty different vision for how AGI will play out. I think systems like Adept are the most likely systems to be proto-AGIs. But I think the ways in which we are really counterintuitive to everybody is that we've actually been really quiet because we are not a developer company. We don't sell APIs. We don't sell open source models. We also don't sell bottom-up products. We're not a thing that you go and click and download the extension and like we want more users signing up for that thing. We're actually an enterprise company. So what we do is we work with a range of different companies, some like late-stage multi-thousand-person startups, some Fortune 500s, et cetera. And what we do for them is we basically give them an out-of-the-box solution where big complex workflows that their employees do every day could be delegated to the model. And so we look a little different from other companies in that in order to go build this full agent thing, the most important thing you've got to get right is reliability. So initially, zooming way back when, one of the first things that Adept did was we released this demo called Act One, right? Act One was like pretty cool. It's like kind of become a hello world thing for people to show agent demos by going to Redfin and asking to buy a house somewhere, because like we did that in the original Act One demo and like showed that, showed like Google Sheets, all this other stuff. Over the last like year since that has come out, there's been a lot of really cool demos, and you go play with them and you realize they work 60% of the time. But since we've always been focused on how do we build an amazing enterprise product, enterprises can't use anything that isn't in the nines of reliability. And so we've actually had to go down a slightly different tech tree than what you might find in the prompt engineering sort of plays in the agent space to get that reliability. And we've decided to prioritize reliability over all else. So like one of our use cases is crazy enough that it actually ends with a physical truck being sent to a place as the result of the agent workflow. And if that works like 60% of the time, you're just blowing money and sending poor truck drivers to places.

Alessio [00:16:30]: Interesting. One of our investment teams has this idea of services as software. I'm actually giving a talk at NVIDIA GTC about this, but basically with software as a service, you're wrapping user productivity in software; with agents and services as software, you're replacing things that, you know, you would ask somebody to do, and the software just does it for you. When you think about these use cases, do the users still go in and look at the agent kind of like doing the things and can intervene, or like are they totally removed from them?
Like the truck thing is like, does the truck just show up, or are there people in the middle checking in?

David [00:17:04]: I think there's two current flaws in the framing for services as software, or I think what you just said. I think that one of them is like, in our experience, as we've been rolling out Adept, the people who actually do the jobs are the most excited about it because they don't go from "I do this job" to "I don't do this job." They go from "I do this job for everything, including the shitty rote stuff" to "I'm a supervisor." And I literally like, it's pretty magical when you watch the thing being used, because now it parallelizes a bunch of the things that you had to do sequentially by hand as a human. And you can just click into any one of them and be like, hey, I want to watch the trajectory that the agent went through to go solve this. And the nice thing about agent execution as opposed to like LLM generations is that a good chunk of the time when the agent fails to execute, it doesn't give you the wrong result. It just fails to execute. And the whole trajectory is just broken and dead and the agent knows it, right? So then those are the ones that the human then goes and solves. And so then they become a troubleshooter. They work on the more challenging stuff. They get way, way more stuff done and they're really excited about it. I think the second piece of it that we've found is our strategy as a company is to always be an augmentation company. And I think one, out of principle, that's something we really care about. But two, actually, if you're framing yourself as an augmentation company, you're always going to live in a world where you're solving tasks that are a little too hard for what the model can do today and still need a human to provide oversight, provide clarifications, provide human feedback. And that's how you build a data flywheel. That's how you actually learn from the smartest humans how to solve things models can't do today. And so I actually think that being an augmentation company forces you to go develop your core AI capabilities faster than someone who's saying, ah, okay, my job is to deliver you a lights-off solution for X.

Alessio [00:18:42]: Yeah. It's interesting because we've seen two parts of the market. One is, we have one company that does agents for SOC analysts. People just don't have them, you know, and they just cannot attract the talent to do it. And similarly, in software development, you have Copilot, which is the augmentation product, and then you have sweep.dev and you have these products which just do the whole thing. I'm really curious to see how that evolves. I agree that today the reliability is so important in the enterprise that they just don't use most of them. Yeah. Yeah. No, that's cool. But it's great to hear the story, because I think from the outside, people are like, oh, Adept, they do Act One, they do Persimmon, they do Fuyu, they do all this stuff. Yeah, it's just the public stuff.

Swyx [00:19:20]: It's just public stuff.

David [00:19:21]: So one of the things we haven't shared before is we're completely sold out for Q1. And so I think...

Swyx [00:19:26]: Sold out of what?

David [00:19:27]: Sold out of bandwidth to go onboard more customers. And so we're like working really hard to go make that less of a bottleneck, but our expectation is that I think we're going to be significantly more public about the broader product shape and the new types of customers we want to attract later this year.
So I think that clarification will happen by default.

Swyx [00:19:43]: Why have you become more public? You know, if the whole push has... You're sold out, you're mostly enterprise, but you're also clearly putting effort towards being more open or releasing more things.

David [00:19:53]: I think we just flipped over that way fairly recently. That's a good question. I think it actually boils down to two things. One, I think that, frankly, a big part of it is that the public narrative is really forming around agents as being the most important thing. And I'm really glad that's happening, because when we started the company in January 2022, everybody in the field knew about the agents thing from RL, but the general public had no conception of what it was. They were still hanging their narrative hat on the tree of everything's a chatbot. And so I think now one of the things that I really care about is that when people think agent, they actually think the right thing. All sorts of different things are being called agents. Chatbots are being called agents. Things that make a function call are being called agents. To me, an agent is something that you can give a goal and get an end-step workflow done correctly in the minimum number of steps. And so that's a big part of why. And I think the other part is because I think it's always good for people to be more aware of Adept as they think about what the next thing they want to do in their careers. The field is quickly pivoting in a world where foundation models are looking more and more commodity. And I think a huge amount of gain is going to happen from how do you use foundation models as the well-learned behavioral cloner to go solve agents. And I think people who want to do agents research should really come to Adept.

Swyx [00:21:00]: When you say agents have become more part of the public narrative, are there specific things that you point to? I'll name a few. Bill Gates, in his blog post, mentioning that agents are the future. I'm the guy who made OSes, and I think agents are the next thing. So Bill Gates, I'll call that out. And then maybe Sam Altman also saying that agents are the future for OpenAI.

David [00:21:17]: I think before that even, I think there was something like the New York Times, Cade Metz wrote a New York Times piece about it. Right now, in a bid to differentiate, I'm seeing AI startups that used to just brand themselves as an AI company, but now brand themselves as an AI agent company. It's just like, it's a term I just feel like people really want.

Swyx [00:21:31]: From the VC side, it's a bit mixed. Is it? As in like, I think there are a lot of VCs where like, I would not touch any agent startups because like- Why is that? Well, you tell me.

Alessio [00:21:41]: I think a lot of VCs that are maybe less technical don't understand the limitations of the-

Swyx [00:21:46]: No, that's not fair.

Alessio [00:21:47]: No, no, no, no. I think like- You think so? No, no. I think like the, what is possible today and like what is worth investing in, you know? And I think like, I mean, people look at you and say, well, these guys are building agents. They needed 400 million to do it. So a lot of VCs are maybe like, oh, I would rather invest in something that is tacking on AI to an existing thing, which is like easier to get to market and kind of get some of the flywheel going. But I'm also surprised a lot of funders just don't want to do agents. It's not even the funding. Sometimes we look around and it's like, why is nobody doing agents for X?
Wow.

David [00:22:17]: That's good to know, actually. I never knew that before. My sense from my limited perspective is there's a new agent company popping up every day.

Swyx [00:22:24]: So maybe I'm- They are. They are. But like I have advised people to take agents off of their title because it's so diluted.

David [00:22:31]: It's now so diluted.

Swyx [00:22:32]: Yeah. So then it doesn't stand for anything. Yeah.

David [00:22:35]: That's a really good point.

Swyx [00:22:36]: So like, you know, you're a portfolio allocator. You have people know about Persimmon, people know about Fuyu and Fuyu Heavy. Can you take us through like how you think about the evolution of that and what people should think about what that means for Adept and its sort of research directions? Kind of take us through the stuff you shipped recently and how people should think about the trajectory of what you're doing.

David [00:22:56]: The critical path for Adept is we want to build agents that can do higher and higher level abstraction things over time, all while keeping an insanely high reliability standard. Because that's what turns us from research into something that customers want. And if you build agents with a really high reliability standard, but are continually pushing the level of abstraction, you then learn from your users how to get that next level of abstraction faster. So that's how you actually build the data flow. That's the critical path for the company. Everything we do is in service of that. So if you go zoom way, way back to the Act One days, right? Like the core thing behind Act One is can we teach a large model basically how to even actuate your computer? And I think we're one of the first places to have solved that and shown it, and shown the generalization that you get when you give it various different workflows and tasks. But I think from there on out, what we really realized was that in order to get reliability, companies just do things in various different ways. You actually want these models to be able to get a lot better at having some specification of some guardrails for what it actually should be doing. And I think in conjunction with that, a giant thing that was really necessary is really fast multimodal models that are really good at understanding knowledge work and really good at understanding screens. And that needs to kind of be the base for some of these agents. Back then we had to do a ton of research basically on how do we actually make that possible? Well, first off, like back in, I forget exactly when in '23, there were no multimodal models really that you could use for things like this. And so we pushed really hard on stuff like the Fuyu architecture. I think one big hangover of the primarily academic focus for multimodal models is most multimodal models are primarily trained on like natural images, cat and dog photos, stuff that's come out of the camera. COCO. Yeah, right. And COCO is awesome. Like I love COCO. I love TY. Like it's really helped the field. Right. But like that's the build one thing. I actually think it's really clear today: multimodal models are the default foundation model, right? It's just going to supplant LLMs. Like you just train a giant multimodal model. And so for that though, like where are they going to be the most useful? They're going to be most useful in knowledge work tasks. That's where the majority of economic value is going to be. It's not in cats and dogs. Right. And so if that's what it is, what do you need to train?
I need to train on like charts, graphs, tables, invoices, PDFs, receipts, unstructured data, UIs. That's just a totally different pre-training corpus. And so Adept spent a lot of time building that. And so the public Fuyu stuff isn't trained on our actual corpus; it's trained on some other stuff. But you take a lot of that data and then you make it really fast and make it really good at things like dense OCR on screens. And then now you have the right like raw putty to go make a good agent. So that's kind of like some of the modeling side. We've kind of only announced some of that stuff. We haven't really announced much of the agents work, but if you put those together with the correct product form factor, and I think the product form factor also really matters. I think we're seeing, and you guys probably see this a little bit more than I do, but we're seeing like a little bit of a pushback against the tyranny of chatbots as a form factor. And I think that the reason why the form factor matters is the form factor changes what data you collect in the human feedback loop. And so I think we've spent a lot of time doing full vertical integration of all these bits in order to get to where we are.

Swyx [00:25:44]: Yeah. I'll plug Amelia Wattenberger's talk at our conference, where she gave a little bit of the thinking behind like what else exists other than chatbots that you could do if you could delegate to reliable agents. I was kind of excited at Adept Experiments, or Adept workflows, I don't know what the official name for it is. I was like, okay, like this is something I can use, but it seems like it's just an experiment for now. It's not your product.

David [00:26:06]: So you basically just use Experiments as like a way to go push various ideas on the design side to some people and just be like, yeah, we'll play with it. Actually the Experiments code base underpins the actual product, but it's just, the code base itself is kind of like a skeleton for us to go deploy arbitrary cards on the side.

Swyx [00:26:22]: Yeah.

Alessio [00:26:23]: Makes sense. I was going to say, I would love to talk about the interaction layer. So you train a model to see UI, but then there's the question of how do you actually act on the UI? I think there were some rumors about OpenAI building agents that are kind of like, they manage the endpoint. So the whole computer, you're more at the browser level. I read in one of your papers, you have like a different representation, kind of like you don't just take the DOM and act on it. You do a lot more stuff. How do you think about the best way the models will interact with the software, and like how the development of products is going to change with that in mind, as more and more of the work is done by agents instead of people?

David [00:26:58]: This is, there's so much surface area here, and it's actually one of the things I'm really excited about. And it's funny, because I've spent most of my time doing research stuff, but there's like a whole new ball game that I've been learning about and I find it really cool. So I would say the best analogy I have to why Adept is pursuing a path of being able to use your computer like a human, plus of course being able to call APIs (and being able to call APIs is the easy part; being able to use your computer like a human is the hard part), it's in the same way why people are excited about humanoid robotics, right? In a world where you had T equals infinity, right?
You're probably going to have various different form factors that robots could just be in and like all the specialization. But the fact is that humans live in a human environment. So having a humanoid robot lets you do things that humans do without changing everything along the way. It's the same thing for software, right? If you go itemize out the number of things you want to do on your computer for which every step has an API, those numbers of workflows add up pretty close to zero. And so then at many points along the way, you need the ability to actually control your computer like a human. It also lets you learn from human usage of computers as a source of training data that you don't get if you have to somehow figure out how every particular step needs to be some particular custom private API thing. And so I think this is actually the most practical path. I think because it's the most practical path, I think a lot of success will come from going down this path. I kind of think about these early days of the agent interaction layer as a little bit like, do you all remember Windows 3.1? Like those days? Okay, this might be, I might be too old for you guys on this. But back in the day, Windows 3.1, we had this transition period between pure command line, right, being the default, into this new world where the GUI is the default and then you drop into the command line for like programmer things, right? The old way was you booted your computer up, DOS booted, and then it would give you the C colon slash thing. And you typed Windows and you hit enter, and then you got put into Windows. And then the GUI kind of became a layer above the command line. The same thing is going to happen with agent interfaces, is like today the GUI is like the base layer, and then the agent just controls the current GUI layer plus APIs. And in the future, as more and more trust is built towards agents and more and more things can be done by agents, if more UIs for agents are actually generative in and of themselves, then that just becomes a standard interaction layer. And if that becomes a standard interaction layer, what changes for software is that a lot of software is going to be either systems of record or like certain customized workflow execution engines. And a lot of how you actually do stuff will be controlled at the agent layer.

Alessio [00:29:19]: And you think the Rabbit interface is more like, it would like, you're not actually seeing the app that the model interacts with. You're just saying, hey, I need to log this call on Salesforce. And you're never actually going on salesforce.com directly as the user. I can see that being a model.

David [00:29:33]: I think I don't know enough about what using Rabbit in real life will actually be like to comment on that particular thing. But I think the broader idea that, you know, you have a goal, right? The agent knows how to break your goal down into steps. The agent knows how to use the underlying software and systems of record to achieve that goal for you. The agent maybe presents you information in a custom way that's only relevant to your particular goal. All of that just really leads to a world where you don't really need to ever interface with the apps underneath unless you're a power user for some niche thing.

Swyx [00:30:03]: General question. So first of all, I think like the sort of input mode conversation.
I wonder if you have any analogies that you like with self-driving, because I do think like there's a little bit of how the model should perceive the world. And you know, the primary split in self-driving is LiDAR versus camera. And I feel like most agent companies that I'm tracking are all moving towards the camera approach, which is like the multimodal approach, you know, multimodal vision, very heavy vision, all the Fuyu stuff that you're doing. You're focusing on that, including charts and tables. And do you find that inspiration there from like the self-driving world? That's a good question.

David [00:30:37]: I think sometimes the most useful inspiration I've found from self-driving is the levels analogy. I think that's awesome. But I think that our number one goal is for agents not to look like self-driving. We want to minimize the chances that agents are sort of a thing that you just have to bang your head at for a long time to get to like two discontinuous milestones, which is basically what's happened in self-driving. We want to be living in a world where you have the data flywheel immediately, and that takes you all the way up to the top. But similarly, I mean, compared to self-driving, like two things that people really undervalue: one is, it's like really easy to do a "driving a car down Highway 101 on a sunny day" demo. That actually doesn't prove anything anymore. And I think the second thing is that, as a non-self-driving expert, I think one of the things that we believe really strongly is that everyone undervalues the importance of really good sensors and actuators. And actually a lot of what's helped us get a lot of reliability is a really strong focus on actually why does the model not do this thing? And a non-trivial amount of the time, the model doesn't actually do the thing because, if you're Wizard-of-Oz'ing it yourself, or if you have unreliable actuators, you can't do the thing. And so we've had to fix a lot of those problems.

Swyx [00:31:43]: I was slightly surprised, just because I do generally consider the Waymos that we see all around San Francisco as the most, I guess, real case of agents that we have in very material ways.

David [00:31:55]: Oh, that's absolutely true. I think they've done an awesome job, but it has taken a long time for self-driving to mature from when it entered the consciousness and the driving down 101 on a sunny day moment happened to now. Right. So I want to see that more compressed.

Swyx [00:32:07]: And I mean, you know, Cruise, you know, RIP. And then one more thing on just like, just going back on this reliability thing: something I have been holding in my head that I'm curious to get your commentary on is I think there's a trade-off between reliability and generality, or I want to broaden reliability into just general like sort of production readiness and enterprise readiness scale. Because you have reliability, you also have cost, you have speed; speed is a huge emphasis for Adept. The tendency or the temptation is to reduce generality to improve reliability and to improve cost, improve speed. Do you perceive a trade-off? Do you have any insights that solve those trade-offs for you guys?

David [00:32:42]: There's definitely a trade-off if you're at the Pareto frontier. I think a lot of folks aren't actually at the Pareto frontier. I think the way you get there is basically how do you frame the fundamental agent problem in a way that just continues to benefit from data?
I think one of the main ways of being able to solve that particular trade-off is you basically just want to formulate the problem such that every particular use case just looks like you collecting more data to go make that use case possible. I think that's how you really solve it. Then you get into the other problems like, okay, are you overfitting on these end use cases? You're not doing a thing where you're being super prescriptive about the end steps that the model can only do, for example.Swyx [00:33:17]: Then the question becomes, do you have one house model that you can then customize for each customer, and you're fine-tuning them on each customer's specific use case?David [00:33:25]: Yeah.Swyx [00:33:26]: We're not sharing that. You're not sharing that. It's tempting, but that doesn't look like AGI to me. You know what I mean? That is just you have a good base model and then you fine-tune it.David [00:33:35]: For what it's worth, I think there's two paths to a lot more capability coming out of the models that we all are training these days. I think one path is you figure out how to spend compute and turn it into data. In that path, I consider search, RL, all the things that we all love in this era as part of that path, like self-play, all that stuff. The second path is how do you get super competent, high intelligence demonstrations from humans? I think the right way to move forward is you kind of want to combine the two. The first one gives you maximum sample efficiency for a little second, but I think that it's going to be hard to be running at max speed towards AGI without actually solving a bit of both.Swyx [00:34:16]: You haven't talked much about synthetic data, as far as I can tell. Probably this is a bit too much of a trend right now, but any insights on using synthetic data to augment the expensive human data?David [00:34:26]: The best part about framing AGI as being able to help people do things on computers is you have an environment.Swyx [00:34:31]: Yes. So you can simulate all of it.David [00:34:35]: You can do a lot of stuff when you have an environment.Alessio [00:34:37]: We were having dinner for our one-year anniversary. Congrats. Yeah. Thank you. Raza from HumanLoop was there, and we mentioned you were coming on the pod. This is our first-Swyx [00:34:45]: So he submitted a question.Alessio [00:34:46]: Yeah, this is our first, I guess, like mailbag question. He asked: when you started Adept, GPT-4 didn't exist, and now you have GPT-4 with vision to help you build a lot of those things. How do you think about the things that are unique to you as Adept, and, going back to maybe the research direction that you want to take the team and what you want people to come work on at Adept, versus what has maybe now become commoditized that you didn't expect everybody would have access to?David [00:35:11]: Yeah, that's a really good question. I think implicit in that question, and I wish he were here too so he could push back on my assumption about his question, but I think implicit in that question is a calculus of where advantage accrues in the overall ML stack. And maybe part of the assumption is that advantage accrues solely to base model scaling. But I actually believe pretty strongly that the way that you really win is that you have to go build an agent stack that is much more than that of the base model itself. And so I think like that is always going to be a giant advantage of vertical integration. 
I think like it lets us do things like have a really, really fast base model that is really good at agent things but is bad at cat and dog photos. Well, it's pretty good at cat and dog photos, it's just not SOTA at cat and dog photos, right? So like we're allocating our capacity wisely, right? That's like one thing that you really get to do. I also think that the other thing that is pretty important now in the broader foundation modeling space is, despite any potential concerns about how good agents are as like a startup area, right? Like we were talking about earlier, I feel super good that we're doing foundation models in service of agents and all of the reward within Adept is flowing from can we make a better agent? Because right now I think we all see that, you know, if you're training on publicly available web data, you put in the flops and you do reasonable things, then you get decent results. And if you just double the amount of compute, then you get predictably better results. And so I think pure play foundation model companies are just going to be pinched by how good the next couple of Llamas are going to be, and whatever the next good open source thing is. And then seeing the really big players put ridiculous amounts of compute behind just training these base foundation models, I think is going to commoditize a lot of the regular LLMs and soon regular multimodal models. So I feel really good that we're just focused on agents.Swyx [00:36:56]: So you don't consider yourself a pure play foundation model company?David [00:36:59]: No, because if we were a pure play foundation model company, we would be training general foundation models that do summarization and all this other...Swyx [00:37:06]: You're dedicated towards the agent. Yeah.David [00:37:09]: And our business is an agent business. We're not here to sell you tokens, right? And I think like selling tokens, unless there's like a...Swyx [00:37:14]: Not here to sell you tokens. I love it.David [00:37:16]: It's like if you have a particular area of specialty, right? Then you won't get caught in the fact that everyone's just scaling to ridiculous levels of compute. But if you don't have a specialty, I think it's going to be a little tougher.Swyx [00:37:27]: Interesting. Are you interested in robotics at all? Just a...David [00:37:30]: I'm personally fascinated by robotics. I've always loved robotics.Swyx [00:37:33]: Embodied agents as a business, you know, Figure is like a big, also sort of OpenAI-affiliated company that raised a lot of money.David [00:37:39]: I think it's cool. I think, I mean, I don't know exactly what they're doing, but...Swyx [00:37:44]: Robots. Yeah.David [00:37:46]: Well, I mean, that's a...Swyx [00:37:47]: Yeah. What question would you ask? If we had them on, what would you ask them?David [00:37:50]: Oh, I just want to understand what their overall strategy is going to be between now and when there's reliable stuff to be deployed. But honestly, I just don't know enough about it.Swyx [00:37:57]: And if I told you, hey, fire your entire warehouse workforce and, you know, put robots in there, isn't that a strategy? Oh yeah.David [00:38:04]: Yeah. Sorry. I'm not questioning whether they're doing smart things. I genuinely don't know what they're doing as much, but I think there's two things. One, I'm so excited for someone to train a foundation model of robots. It's just, I think it's just going to work. 
Like I will die on this hill, but I mean, like again, this whole time we've been on this podcast, we've just been continually saying these models are basically behavioral cloners. Right. So let's go behavioral clone all this like robot behavior. Right. And then you figure out everything else you have to do in order to teach it how to solve a new problem. That's going to work. I'm super stoked for that. I think unlike what we're doing with helping humans with knowledge work, it just sounds like a more zero-sum job replacement play. Right. And I'm personally less excited about that.Alessio [00:38:46]: We had Kanjun from Imbue on the podcast. We asked her why people should go work there and not at Adept.Swyx [00:38:52]: Oh, that's so funny.Alessio [00:38:54]: Well, she said, you know, there's space for everybody in this market. We're all doing interesting work. And she said, they're really excited about building an operating system for agents. And for her, the biggest research thing was like getting models better at reasoning and planning for these agents. The reverse question to you, you know, why should people be excited to come work at Adept instead of Imbue? And maybe what are like the core research questions that people should be passionate about to have fun at Adept? Yeah.David [00:39:22]: First off, I think that I'm sure you guys believe this too. The AI space, to the extent there's an AI space, and the AI agent space are both, exactly as she likely said, I think colossal opportunities, and people are just going to end up winning in different areas and a lot of companies are going to do well. So I really don't feel that zero-sum at all. I would say, to change the zero-sum framing: why should you be at Adept? I think there's two huge reasons to be at Adept. I think one of them is everything we do is in the service of like useful agents. We're not a research lab. We do a lot of research in service of that goal, but we don't think about ourselves as like a classic research lab at all. And I think the second reason to work at Adept is if you believe that actually having customers and a reward signal from customers lets you build AGI faster, which we really believe, then you should come here. And I think an example of why that's true is our evaluations. They're not academic evals. They're not simulator evals. They're like, okay, we have a customer that really needs us to do these particular things. We can do some of them; these are the ones they want us to do that we can't do at all. We've turned those into evals: solve it, right? I think that's really cool. Like everybody knows a lot of these evals are pretty saturated, and even the new ones that are not saturated, you look at one and you're like, is this actually useful? Right? I think that's a degree of practicality that really helps. Like we're equally excited about the same problems around reasoning and planning and generalization and all of this stuff, but they're very grounded in actual needs right now, which is really cool.Swyx [00:40:45]: Yeah. This has been a wonderful dive. You know, I wish we had more time, but I would just leave it kind of open to you. I think you have broad thoughts, you know, just about
March madness... I know for some folks this means basketball or something, but since this is an AI newsletter, and this March was indeed mad, I am claiming it. This week seemed madder from one day to the next. And the AI announcements kept coming throughout the recording; I used the "breaking news" button a few times during this week's show! This week we covered tons of corporate AI drama in the BigCO segment, from the Inflection → Microsoft move, to Apple Gemini rumors, to the Nvidia GTC conference, but we also had a bunch of OpenSource to go over, including an exciting glimpse into the O1 from Open Interpreter, which the founder Killian (of the ThursdAI mafia haha) joined to chat about briefly after an all-nighter release push! Another returning FOTP (friend of the pod), Matt Shumer, joined as we did a little deep dive into prompting Claude, and how he went viral (seems to happen a lot to Matt) with a project of his to make Claude write prompts for itself! Definitely worth a listen; it's the first segment after the TL;DR on the pod
53rd episode of the Quantum news roundup with Olivier Ezratty and Fanny Bouton. Events: Erasmus conference in Delft on November 16 https://institutfrancais.nl/21st-erasmus-descartes-conference-quantum-leap-getting-europe-ready-for-technological-sovereignty/ Launch of Kyutai by Scaleway on November 17: an Nvidia SuperPod installed at Scaleway with 127 DGX systems equipped with H100 processors, plus an announcement from Quandela with the Perceval emulator, as at OVHcloud, and the launch of an AI laboratory aiming to compete with OpenAI, with €300M. European conference in Spain on November 23: the "Quantum Technologies Conference in Europe", sponsored by QuantERA and organized by the Spanish presidency of the European Union in Madrid. https://www.linkedin.com/pulse/highlights-from-eqtc-2023-silvia-marigonda-yuzye/ 4-hour video: https://www.youtube.com/watch?v=QMFp2s2Aixs QEI Workshop in Singapore from November 20 to 24: the first workshop of the Quantum Energy Initiative took place in Singapore with 140 participants and speakers. https://www.linkedin.com/feed/update/urn:li:activity:7135454151802564608/ The workshop videos are almost all available at https://www.youtube.com/playlist?list=PLjqlGitBPAYCCeSyhNTeThuHFYtsS44Bq. L'Agora des nanos on November 25, organized by the CEA and the École Estienne on the topic of science outreach, at the Théâtre de la Ville du Châtelet. https://www.theatredelaville-paris.com/fr/spectacles/projets-passerelles/rencontres/lagora-des-nanos OVHcloud Summit on November 28 in Paris: Octave Klaba stresses how important it is for a company to take up the subject now and announces 5 quantum emulators at OVHcloud by the end of 2023, including, as of today, Felis from Alice & Bob. Quantum conference at the Vatican from November 30 to December 2: the Vatican organized a sort of three-day Solvay conference https://www.pas.va/en/events/2023/quantum_science.html Quantum 2042 at Bouygues SA on December 1. Q2B in Santa Clara in early December https://q2b.qcware.com/2023-conferences/silicon-valley/ Copenhagen in December with DTU, at the same time as Q2B, on December 6, 7 and 8 https://www.eventbrite.dk/e/quantum-dtu-networking-event-on-quantum-tech-and-cybersecurity-tickets-751620405867?aff=oddtdtcreator Lots of news about Quandela: a €50M funding round, including €9.5M in financing from Bpifrance for their factory inaugurated in June 2023. A nice raise. https://www.bfmtv.com/economie/replay-emissions/tech-and-co/valerian-giesz-quandela-quandela-au-coeur-de-la-revolution-quantique-07-11_VN-202311071064.html https://thequantuminsider.com/2023/11/07/quandela-secures-e50-million-to-support-international-expansion-further-industrial-development/ https://www.lesechos.fr/start-up/portraits/ordinateur-quantique-pascale-senellart-mardon-la-tete-chercheuse-de-quandela-2027097 A change of CEO along the way: Niccolo Somaschi, who was CTO, becomes CEO, and Valérian Giesz, who was CEO, becomes COO. Sale of three quantum computers to Exaion in Canada and France. Use case: learning quantum computation and experimenting with building algorithms at small scale. Publication on arXiv of a new Quandela blueprint with an interesting alternative to the MBQC model: A Spin-Optical Quantum Computing Architecture by Grégoire de Gliniasty, Paul Hilaire, Pierre-Emmanuel Emeriau, Stephen C. Wein, Alexia Salavrakos, Shane Mansfield, Quandela and LIP6, November 2023 (20 pages). 
Photonic: a $100M funding round https://photonic.com/news/photonic-raises-100m-for-quantum-technology/ Scalable Fault-Tolerant Quantum Technologies with Silicon Colour Centres by Stephanie Simmons, Photonic, October 2023 (16 pages), which documents their architecture well. Alice & Bob partners with Equinix https://www.lesechos.fr/tech-medias/hightech/la-puissance-quantique-dalice-bob-accessible-a-distance-via-lamericain-equinix-2029338 OQC is also at Equinix, but in Japan https://www.linkedin.com/feed/update/urn:li:activity:7134823045331853312/ https://www.prnewswire.com/news-releases/oqc-launches-oqc-toshiko-the-worlds-first-enterprise-ready-quantum-platform-301997838.html Amazon announces its first logical qubit https://www.youtube.com/watch?v=pJG6nmR7XxI https://www.linkedin.com/posts/simoneseverini_quantumcomputing-reinvent2023-activity-7135156264887619584-iPKP IQM announces 54 and 150 qubits for 2024 and 2025 https://www.prnewswire.com/apac/news-releases/iqm-quantum-computers-launches-iqm-radiance--a-150-qubit-system-paving-the-way-to-quantum-advantage-301981384.html Xanadu https://www.digitimes.com/news/a20231116VL209/xanadu-silicon-photonics-canada-quantum-computing.html UK mission: the UK government sets out a roadmap for quantum computing. https://www.gov.uk/government/publications/national-quantum-strategy/national-quantum-strategy-missions Ed Gerck, who claims to have cracked RSA with a smartphone https://www.linkedin.com/feed/update/urn:li:activity:7125215279688601600/?msgControlName=view_message_button&msgConversationId=2-ODFkZTk3ZTktM2ZmOC00ZTY4LWI1NTEtODM0ZGIyNWIwMTBmXzAxMg%3D%3D&msgOverlay=true https://www.mdpi.com/2227-7390/11/1/68 https://www.govinfosecurity.com/blogs/researcher-claims-to-crack-rsa-2048-quantum-computer-p-3536 Alibaba shuts down its quantum lab https://thequantuminsider.com/2023/11/25/reports-chinas-alibaba-shuts-down-quantum-lab/ Multiverse Computing launches CompactifAI https://multiversecomputing.com/resources/multiverse-computing-launches-compactifai-to-streamline-llms-to-reduce-energy-use-and-compute LLM hype https://techbullion.com/exploring-the-possibility-can-quantum-computers-unlock-the-secrets-of-time-travel/ Huge LOL: "Ethics and Safety Concerns: The concept of time travel raises ethical and safety concerns that must be carefully considered before conducting any experiments. Tampering with events in the past could potentially have severe consequences for the future, leading to unintended consequences or paradoxes". https://www.supplychaindive.com/spons/navigating-the-quantum-revolution-in-logistics/696224/ All these factors combine to make logistics ripe for quantum innovation. Logistics organizations face four key challenges where quantum computing shines: On Twitter/X, Dulwich Quantum, which is hilarious! https://dulwichquantum.github.io/blog/how-to-talk-about-quantum-computing/ And on the book "Quantum Computing for Dummies" by William Hurley of Strangeworks: https://twitter.com/DulwichQuantum/status/1729791019892290017
John Wynkoop, Cloud Economist & Platypus Herder at The Duckbill Group, joins Corey on Screaming in the Cloud to discuss why he decided to make a career move and become an AWS billing consultant. Corey and John discuss how once you're deeply familiar with one cloud provider, those skills become transferable to other cloud providers as well. John also shares the trends he has seen post-pandemic in the world of cloud, including the increased adoption of a multi-cloud strategy and the need for costs control even for VC-funded start-ups. About JohnWith over 25 years in IT, John's done almost every job in the industry, from running cable and answering helpdesk calls to leading engineering teams and advising the C-suite. Before joining The Duckbill Group, he worked across multiple industries including private sector, higher education, and national defense. Most recently he helped IGNW, an industry leading systems integration partner, get acquired by industry powerhouse CDW. When he's not helping customers spend smarter on their cloud bill, you can find him enjoying time with his family in the beautiful Smoky Mountains near his home in Knoxville, TN.Links Referenced: The Duckbill Group: https://duckbillgroup.com LinkedIn: https://www.linkedin.com/in/jlwynkoop/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. And the times, they are changing. My guest today is John Wynkoop. John, how are you?John: Hey, Corey, I'm doing great. Thanks for having me.Corey: So, big changes are afoot for you. You've taken a new job recently. What are you doing now?John: Well [laugh], so I'm happy to say I have joined The Duckbill Group as a cloud economist. So, came out of the big company world, and have dived back in—or dove back into the startup world.Corey: It's interesting because when we talk to those big companies, they always identify us as oh, you're a startup, which is hilarious on some level because our AWS account hangs out in AWS's startup group, but if you look at the spend being remarkably level from month to month to month to year to year to year, they almost certainly view us as they're a startup, but they suck at it. They completely failed. And so, many of the email stuff that you get from them presupposes that you're venture-backed, that you're trying to conquer the entire world. We don't do that here. We have this old-timey business model that our forebears would have understood of, we make more money than we spend every month and we continue that trend for a long time. So first, thanks for joining us, both on the show and at the company. We like having you around.John: Well, thanks. And yeah, I guess that's—maybe a startup isn't the right word to describe what we do here at The Duckbill Group, but as you said, it seems to fit into the industry classification. 
But that was one of the things I actually really liked about the—that was appealing about joining the team was, we do spend less than we make and we're not after hyper-growth and we're not trying to consume everything.Corey: So, it's interesting when you put a job description out into the world and you see who applies—and let's be clear, for those who are unaware, job descriptions are inherently aspirational shopping lists. If you look at a job description and you check every box on the thing and you've done all the things they want, the odds are terrific you're going to be bored out of your mind when you wind up showing up to do these… whatever that job is. You should be learning stuff and growing. At least that's always been my philosophy to it. One of the interesting things about you is that you checked an awful lot of boxes, but there is one that I think would cause people to raise an eyebrow, which is, you're relatively new to the fun world of AWS.John: Yeah. So, obviously I, you know, have been around the block a few times when it comes to cloud. I've used AWS, built some things in AWS, but I wouldn't have classified myself as an AWS guru by any stretch of the imagination. I spent the last probably three years working in Google Cloud, helping customers build and deploy solutions there, but I do at least understand the fundamentals of cloud, and more importantly—at least for our customers—cloud costs because at the end of the day, they're not all that different.Corey: I do want to call out that you have a certain humility to you which I find endearing. But you're not allowed to do that here; I will sing your praises for you. Before they deprecated it like they do almost everything else, you were one of the relatively few Google Cloud Certified Fellows, which was sort of like their Heroes program only, you know, they killed it in favor of something else like there's a Champion program or whatnot. You are very deep in the world of both Kubernetes and Google Cloud.John: Yeah. So, there was a few of us that were invited to come out and help Google pilot that program in, I believe it was 2019, and give feedback to help them build the Cloud Fellows Program. And thankfully, I was selected based on some of our early experience with Anthos, and specifically, it was around Certified Fellow in what they call hybrid multi-cloud, so it was experience around Anthos. Or at the time, they hadn't called it Anthos; they were calling it CSP or Cloud Services Platform because that's not an overloaded acronym. So yeah, definitely, was very humbled to be part of that early on.I think the program, as you said, grew to about 70 or so maybe 100 certified individuals before they transitioned—not killed—transitioned to that program into the Cloud Champions program. So, those folks are all still around, myself included. They've just now changed the moniker. But we all get to use the old title still as well, so that's kind of cool.Corey: I have to ask, what would possess you to go from being one of the best in the world at using Google Cloud over here to our corner of the AWS universe? Because the inverse, if I were to somehow get ejected from here—which would be a neat trick, but I'm sure it's theoretically possible—like, “What am I going to do now?” I would almost certainly wind up doing something in the AWS ecosystem, just due to inertia, if nothing else. You clearly didn't see things quite that way. Why make the switch?John: Well, a couple of different reasons. 
So, being at a Google partner presents a lot of challenges and one of the things that was supremely interesting about coming to Duckbill is that we're independent. So, we're not an AWS partner. We are an independent company that is beholden only to our customers. And there isn't anything like that in the Google ecosystem today.There's, you know, there's Google partners and then there's Google customers and then there's Google. So, that was part of the appeal. And the other thing was, I enjoy learning new things, and honestly, learning, you know, into the depths of AWS cost hell is interesting. There's a lot to learn there and there's a lot of things that we can extract and use to help customers spend less. So, that to me was super interesting.And then also, I want to help build an organization. So, you know, I think what we're doing here at The Duckbill Group is cool and I think that there's an opportunity to grow our services portfolio, and so I'm excited to work with the leadership team to see what else we can bring to market that's going to help our customers, you know, not just with cost optimization, not just with contract negotiation, but you know, through the lifecycle of their AWS… journey, I guess we'll call it.Corey: It's one of those things where I always have believed, on some level, that once you're deep in a particular cloud provider, if there's reason for it, you can rescale relatively quickly to a different provider. There are nuances—deep nuances—that differ from provider to provider, but the underlying concepts generally all work the same way. There's only so many ways you can have data go from point A to point B. There's only so many ways to spin up a bunch of VMs and whatnot. And you're proof-positive that theory was correct.You'd been here less than a week before I started learning nuances about AWS billing from you. I think it was something to do with the way that late fees are assessed when companies don't pay Amazon as quickly as Amazon desires. So, we're all learning new things constantly and no one stuffs this stuff all into their head. But that, if nothing else, definitely cemented that yeah, we've got the right person in the seat.John: Yeah, well, thanks. And certainly, the deeper you go on a specific cloud provider, things stay fresh in your memory, you know, cached, so to speak. So, coming up to speed on AWS has been a little bit more documentation reading than it would have been if I were, say, jumping right into a GCP engagement. But as you said, at the end of the day, there's a lot of similarities. Obviously understanding the nuances of, for example, account organization versus, you know, GCP's Projects and Folders. Well, that's a substantial difference and so there's a lot of learning that has to happen.Thankfully, you know, all these companies, maybe with the exception of Oracle, have done a really good job of documenting all of the concepts in their publicly available documentation. And then obviously, having a team of experts here at The Duckbill Group to ask stupid questions of doesn't hurt. But definitely, it's not as hard to come up to speed as one may think, once you've got it understood in one provider.Corey: I took a look recently and was kind of surprised to discover that I've been doing this—as an independent consultant prior to the formation of The Duckbill Group—for seven years now. And it's weird, but I've gone through multiple industry cycles and changes as a part of this. 
And it feels like I haven't been doing it all that long, but I guess I have. One thing that's definitely changed is that it used to be that companies would basically pick one provider and almost everything would live there. At any reasonable point of scale, everyone is using multiple things.I see Google in effectively every client that we have. It used to be that going to Google Cloud Next was a great place to hang out with AWS customers. But these days, it's just as true to say that a great reason to go to re:Invent is to hang out with Google Cloud customers. Everyone uses everything, and that has become much more clear over the last few years. What have you seen change over the… I guess, since the start of the pandemic, just in terms of broad cycles?John: Yeah. So, I think there's a couple of different trends that we're seeing. Obviously, one is that as you said, especially as large enterprises make moves to the cloud, you see independent teams or divisions within a given organization leveraging… maybe not the right tool for the job because I think that there's a case to be made for swapping out a specific set of tools and having your team learn it, but we do see what I like to refer to as tool fetishism where you get a team that's super, super deep into BigQuery and they're not interested in moving to Redshift, or Snowflake, or a competitor. So, you see, those start to crop up within large organizations where the distributed—the purchasing power, rather—is distributed. So, that's one of the trends is the multi-cloud adoption.And I think the big trend that I like to emphasize around multi-cloud is, just because you can run it anywhere doesn't mean you should run it everywhere. So Kubernetes, as you know, right, as it took off 2019 timeframe, 2020, we started to see a lot of people using that as an excuse to try to run their production application in two, three public cloud providers and on-prem. And unless you're a SaaS customer—or SaaS company with customers in every cloud, there's very little reason to do that. But having that flexibility—that's the other one, is we've seen that AWS has gotten a little difficult to negotiate with, or maybe Google and Microsoft have gotten a little bit more aggressive. So obviously, having that flexibility and being able to move your workloads, that was another big trend.Corey: I'm seeing a change in things that I had taken as givens, back when I started. And that's part of the reason, incidentally, I write the Last Week in AWS newsletter because once you learn a thing, it is very easy not to keep current with that thing, and things that are not possible today will be possible tomorrow. How do you keep abreast of all of those changes? And the answer is to write a deeply sarcastic newsletter that gathers in everything from the world of AWS. But I don't recommend that for most people. One thing that I've seen in more prosaic terms that you have a bit of background in is that HPC on cloud was, five, six years ago, met with, “Oh, that's a good one; now pull the other one, it has bells on it,” into something that, these days, is extremely viable. How'd that happen?John: So, [sigh] I think that's just a—again, back to trends—I think that's just a trend that we're seeing from cloud providers and listening to their customers and continuing to improve the service. So, one of the reasons that HPC was—especially we'll call it capacity-level HPC or large HPC, right—you've always been able to run high throughput; the cloud is a high throughput machine, right? 
You can run a thousand disconnected VMs no problem, auto-scaling, anybody who runs a massive web front-end can attest to that. But what we saw with HPC—and we used to call those [grid 00:12:45] jobs, right, the small, decoupled computing jobs—but what we've seen is a huge increase in the quality of the underlying fabric—things like RDMA being made available, things like improved network locality, where you now have predictable latency between your nodes or between your VMs—and I think those, combined with the huge investment that companies like AWS have made in their file systems, the huge investment companies like Google have made in their data storage systems, have made HPC—cloud-based HPC specifically—viable for organizations, especially at smaller scale.And for a small engineering team who's looking to run, say, computer-aided engineering simulation, or who's looking to prototype some new way of testing or doing some kind of simulation, it's a huge, huge improvement in speed because now they don't have to order a dozen or two dozen or five dozen nodes, have them shipped, rack them, stack them, cool them, power them, right? They can just spin up the resource in the cloud, test it out, try their simulation, try out the new—the software that they want, and then spin it all down if it doesn't work. So, that elasticity has also been huge. And again, I think the big—to kind of summarize, I think the big driver there is the improvement in the service itself, right? We're seeing cloud providers taking that discipline a little bit more seriously.Corey: I still see that there are cases where the raw math doesn't necessarily add up for sustained, long-term use cases. But I also see increasingly that with HPC, that's usually not what the workload looks like. With, you know, the exception of we're going to spend the next 18 months training some new LLM thing, but even then the pricing is ridiculous. What is it, their new P6 or whatever it is—P5—the instances that have those giant half-rack Nvidia cards that are $800,000 or so a year each if you were to just rent them straight out, and then people running fleets of these things, it's… wow, that's more commas in that training job than I would have expected. But I can see the availability just now driving some of that, but the economics of that once you can get them in your data center doesn't strike me as particularly favoring the cloud.John: Yeah, there's a couple of different reasons. So, it's almost like an inverse curve, right? There's a crossover point or a breakeven point at which—you know, and you can make this argument with almost any level of infrastructure—if you can keep it sufficiently full, whether it's AI training, AI inference, or even traditional HPC, if you can keep the machine or the group of machines sufficiently full, it's probably cheaper to buy it and put it in your facility. But if you don't have a facility or if you don't need to use it a hundred percent of the time, the dividends aren't always there, right? It's not always worth, you know, buying a $250,000 compute system, you know, like say, an Nvidia, as you—you know, like, a DGX, right, is a good example.The DGX H100, I think those are a couple hundred thousand dollars. If you can't keep that thing full and you just need it for training jobs or for development and you have a small team of developers that are only going to use it six hours a day, it may make sense to spin that up in the cloud and pay for fractional use, right? 
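(A minimal back-of-the-envelope sketch of that break-even logic, in Python. Every dollar figure, hourly rate, and utilization number below is an illustrative assumption in the spirit of the conversation, not a quote from any vendor or cloud price list.)

```python
# Back-of-the-envelope break-even: buy a GPU box outright vs. rent equivalent
# capacity in the cloud. Every number here is an illustrative assumption.

def on_prem_cost(purchase_price: float, annual_overhead: float, years: int) -> float:
    """Total cost of owning the system: purchase plus power/cooling/hosting."""
    return purchase_price + annual_overhead * years

def cloud_cost(hourly_rate: float, hours_per_day: float, days_per_year: int, years: int) -> float:
    """Total cost of renting only the hours you actually use."""
    return hourly_rate * hours_per_day * days_per_year * years

if __name__ == "__main__":
    YEARS = 3
    # Assumed: a ~$250k system with ~$30k/year of power, cooling, and hosting.
    own = on_prem_cost(250_000, 30_000, YEARS)
    # Assumed: ~$35/hour for comparable rented capacity.
    part_time = cloud_cost(35.0, hours_per_day=6, days_per_year=260, years=YEARS)     # small team, weekdays only
    around_clock = cloud_cost(35.0, hours_per_day=24, days_per_year=365, years=YEARS)  # kept full 24/7
    print(f"Own the box for {YEARS} years:       ${own:,.0f}")
    print(f"Rent ~6 h/weekday for {YEARS} years: ${part_time:,.0f}")
    print(f"Rent 24/7 for {YEARS} years:         ${around_clock:,.0f}")
```

At low utilization the rental side wins; keep the machine full around the clock and ownership wins, which is the crossover John is describing.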
It's no different than what HPC has been doing for probably the past 50 years with national supercomputing centers, which is where my background came from before cloud, right? It's just a different model, right? One is public economies of, you know, insert your credit card and spend as much as you want, and the other is grant-funded and supporting academic research, but the economies of scale are kind of the same on both fronts.Corey: I'm also seeing a trend that this is something that is sort of disturbing when you realize what I've been doing and how I've been going about things, that for the last couple of years, people actually started to care about the AWS bill. And I have to say, I felt like I was severely out of sync with a lot of the world the first few years because there's giant savings lurking in your AWS bill, and the company answer in many cases was, "We don't care. We'd rather focus our energies on shipping faster, building something new, expanding, capturing market." And that is logical. But suddenly those chickens are coming home to roost in a big way. Our phone is ringing off the hook, as I'm sure you've noticed in your time here, and suddenly money means something again. What do you think drove it?John: So, I think there's a couple of driving factors. The first is obviously the broader economic conditions, you know, with the economic growth in the US especially slowing down post-pandemic, we're seeing organizations looking for opportunities to spend less to be able to deliver—you know, recoup that money and deliver additional value. But beyond that, right—because, okay, but startups are probably still lighting giant piles of VC money on fire, and that's okay, but what's happening, I think, is that the first wave of CIOs that said cloud-first, cloud-only basically got their comeuppance. And, you know, these enterprises saw their explosive cloud bills and they saw that, oh, you know, we moved 5000 servers to AWS or GCP or Azure and we got the bill, and that's not sustainable. And so, we see a lot of cloud repatriation, cloud optimization, right, a lot of second-gen… cloud, I'll call them second-gen cloud-native CIOs coming into these large organizations where their predecessor made some bad financial decisions and either left or got asked to leave, and now they're trying to stop from lighting their giant piles of cash on fire, they're trying to stop spending 3X what they were spending on-prem.Corey: I think an easy mistake for folks to make is to get lost in the raw infrastructure cost. I'm not saying it's not important. Obviously not, but you could save a giant pile of money on your RDS instances by running your own database software on top of EC2, but I don't generally recommend folks do it because you also need engineering time to be focusing on getting those things up, care and feeding, et cetera. And what people lose sight of is the fact that the payroll expense is almost universally more than the cloud bill at every company I've ever talked to.So, there's a consistent series of, "Well, we're just trying to get to be the absolute lowest dollar figure total." It's the wrong thing to emphasize, otherwise, "Cool, turn everything off and your bill drops to zero." Or, "Migrate to another cloud provider. AWS bill becomes zero. Our job is done." It doesn't actually solve the problem at all. It's about what's right for the business, not about getting the absolute lowest possible score like it's some kind of code golf tournament.John: Right. 
So, I think that there's a couple of different ways to look at that. One is obviously looking at making your workloads more cloud-native. I know that's a stupid buzzword to some people, but—Corey: The problem I have with the term is that it means so many different things to different people.John: Right. But I think the gist of that is taking advantage of what the cloud is good at. And so, what we saw was that excess capacity on-prem was effectively free once you bought it, right? There was no accountability for burning through extra vCPUs or extra RAM. And then you had—Corey: Right. You spin something up in your data center and the question is, "Is the physical capacity there?" And very few companies had a reaping process until they were suddenly seeing capacity issues and suddenly everyone starts asking you a whole bunch of questions about it. But that was a natural forcing function that existed. Now, S3 has infinite storage, or it might as well. They can add capacity faster than you can fill it—I know this; I've tried—and the problem that you have then is that it's always just a couple more cents per gigabyte and it keeps on going forever. There's no, we need to make an investment decision because the SAN is at 80% capacity. Do you need all those 16 copies of the production data that you haven't touched since 2012? No, I probably don't.
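(One way to put a forcing function back on storage growth is a lifecycle policy that ages data out automatically. A minimal sketch with boto3 follows; the bucket name, prefix, and retention windows are hypothetical choices for illustration, not recommendations.)

```python
# Sketch: a lifecycle rule so stale copies age out instead of accruing
# "a couple more cents per gigabyte" forever. Bucket, prefix, and day
# counts are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-prod-data",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-stale-copies",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                # Move rarely touched data to cheaper storage after 90 days...
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                # ...delete it outright after a year...
                "Expiration": {"Days": 365},
                # ...and drop old object versions after 30 days.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)
```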
Very rarely do we walk into a customer and see that they haven't bought a, you know, Reserved Instance, or a Savings Plan. That's just not a thing. And the proliferation of software tools to help with those things, of course, in some cases, dubious proposition of, “We'll fix your cloud bill automatically for a small percentage of the savings,” that some of those software tools have, I think those have kind of run their course. And now you've got a smarter populace or smarter consumer and it does come into the more nuanced stuff, right.All right, do you really need to replicate data across AZs? Well, not if your workloads aren't stateful. Well, so some of the old things—and Kubernetes is a great example of this, right—the age old adage of, if I'm going to spin up an EKS cluster, I need to put it in three AZs, okay, why? That's going to cost you money [laugh], the cross-AZ traffic. And I know cross-AZ traffic is a simple one, but we still see that. We still see, “Well, I don't know why I put it across all three AZs.”And so, the service-to-service communication inside that cluster, the control plane traffic inside that cluster, is costing you money. Now, it might be minimal, but as you grow and as you scale your product or the services that you're providing internally, that may grow to a non-trivial sum of money.Corey: I think that there's a tipping point where an unbounded growth problem is always going to emerge as something that needs attention and needs to be focused on. But I should ask you this because you have a skill set that is, as you know, extremely in demand. You also have that rare gift that I wish wasn't as rare as it is where you can be thrown into the deep end knowing next to nothing about a particular technology stack, and in a remarkably short period of time, develop what can only be called subject matter expertise around it. I've seen you do this years past with Kubernetes, which is something I'm still trying to wrap my head around. You have a natural gift for it which meant that, from many respects, the world was your oyster. Why this? Why now?John: So, I think there's a couple of things that are unique at this thing, at this time point, right? So obviously, helping customers has always been something that's fun and exciting for me, right? Going to an organization and solving the same problem I've solved 20 different times, for example, spinning up a Kubernetes cluster, I guess I have a little bit of a little bit of squirrel syndrome, so to speak, and that gets—it gets boring. I'd rather just automate that or build some tooling and disseminate that to the customers and let them do that. So, the thing with cost management is, it's always a different problem.Yeah, we're solving fundamentally the same problem, which is, I'm spending too much, but it's always a different root cause, you know? In one customer, it could be data transfer fees. In another customer, it could be errant development growth where they're not controlling the spend on their development environments. In yet another customer, it could be excessive object storage growth. So, being able to hunt and look for those and play detective is really fun, and I think that's one of the things that drew me to this particular area.The other is just from a timing perspective, this is a problem a lot of organizations have, and I think it's underserved. I think that there are not enough companies—service providers, whatever—focusing on the hard problem of cost optimization. 
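(To put rough numbers on that cross-AZ chatter, here is a quick estimate in Python; the per-GB rate and traffic volumes are assumptions for illustration, so check your provider's current pricing rather than trusting the constant below.)

```python
# Rough cost of service-to-service traffic that crosses AZ boundaries.
# The rate and volumes are illustrative assumptions, not a price list.

CROSS_AZ_RATE_PER_GB = 0.02  # assumed: roughly $0.01/GB charged in each direction

def monthly_cross_az_cost(gb_per_day: float, rate: float = CROSS_AZ_RATE_PER_GB) -> float:
    """Monthly cost of inter-AZ traffic at a given daily volume."""
    return gb_per_day * 30 * rate

if __name__ == "__main__":
    for gb_per_day in (50, 500, 5_000):
        cost = monthly_cross_az_cost(gb_per_day)
        print(f"{gb_per_day:>5} GB/day across AZs is roughly ${cost:,.0f}/month")
```

Minimal at small volumes, but as the transcript notes, it grows with the product, which is why the three-AZ default deserves an explicit decision rather than a reflex.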
There's too many people who think it's a finance problem and not enough people who think it's an engineering problem. And so, I wanted to do work on a place where we think it's an engineering problem.Corey: It's been a very… long road. And I think that engineering problems and people problems are both fascinating to me, and the AWS bill is both. It's often misunderstood as a finance problem, and finance needs to be consulted absolutely, but they can't drive an optimization project, and they don't know what the context is behind an awful lot of decisions that get made. It really is breaking down bridges. But also, there's a lot of engineering in here, too. It scratches my itch in that direction, anyway.John: Yeah, it's one of the few business problems that I think touches multiple areas. As you said, it's obviously a people problem because we want to make sure that we are supporting and educating our staff. It's a process problem. Are we making costs visible to the organization? Are we making sure that there's proper chargeback and showback methodologies, et cetera? But it's also a technology problem. Did we build this thing to take advantage of the architecture or did we shoehorn it in a way that's going to cost us a small fortune? And I think it touches all three, which I think is unique.Corey: John, I really want to thank you for taking the time to speak with me. If people want to learn more about what you're up to in a given day, where's the best place for them to find you?John: Well, thanks, Corey, and thanks for having me. And, of course obviously, our website duckbillgroup.com is a great place to find out what we're working on, what we have coming. I also, I'm pretty active on LinkedIn. I know that's [laugh]—I'm not a huge Twitter guy, but I am pretty active on LinkedIn, so you can always drop me a follow on LinkedIn. And I'll try to post interesting and useful content there for our listeners.Corey: And we will, of course, put links to that in the [show notes 00:28:37], which in my case, is of course extremely self-aggrandizing. But that's all right. We're here to do self-promotion. Thank you so much for taking the time to chat with me, John. I appreciate it. Now, get back to work.John: [laugh]. All right, thanks, Corey. Have a good one.Corey: John Wynkoop, cloud economist at The Duckbill Group. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice while also taking pains to note how you're using multiple podcast platforms these days because that just seems to be the way the world went.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
S&P Futures are gaining this morning as the market is focused on earnings reports. Middle East tensions have eased somewhat as diplomats continue to work toward containing the crisis and providing aid. Treasury yields are displaying weakness and markets are displaying positive action as earnings season heats up. KO, KMB, VZ, ADM, DGX, HAL, GM, & MMM all delivered earnings beats this morning. In Europe, stocks are mostly higher, even though PMI data remains under 50. Oil prices are displaying slight gains.
MLPerf Training 3.0 benchmark results show performance gains of up to 1.54x compared to six months ago and a 33-49x improvement over the first round, driving innovation and energy efficiency in the industry. Intel's Habana Gaudi2 ML training engine competes with Nvidia's offerings, boasting better performance than the A100 and lower pricing than the H100. Nvidia, on the other hand, unveils its NeMo model with half a trillion parameters and expands the MLPerf Training suite to include GPT-3 and a new recommendation engine. Its collaboration with CoreWeave showcases the superior performance of the H100, providing a 3.6x speed increase for GPT-3 compared to Intel Xeon and Gaudi2. Nvidia is also developing foundation models for its DGX Cloud, collaborating with major players in the industry, and Intel is widely rumored to be developing its own Gaudi2-as-a-Service offering. Then there's the MLPerf Tiny 1.1 inferencing benchmark, which saw over 150 results and performance improvements of up to 1000x. Time Stamps: 0:00 - Welcome to the Rundown 0:48 - What Red Hat is doing with CentOS 3:36 - Moving Windows to the cloud for consumers 6:31 - IBM acquires Apptio 8:51 - Cisco set to acquire SamKnows 12:04 - Databricks Acquires MosaicML 15:37 - Cato Networks introduces AI tracker for malware command and control 18:39 - MLPerf 3 Upsets the AI Apple Cart 32:10 - The Weeks Ahead 33:40 - Thanks for Watching Follow our Hosts on Social Media Tom Hollingsworth: https://www.twitter.com/NetworkingNerd Stephen Foskett: https://www.twitter.com/SFoskett Tim Bertino: https://www.twitter.com/TimBertino Follow Gestalt IT Website: https://www.GestaltIT.com/ Twitter: https://www.twitter.com/GestaltIT LinkedIn: https://www.linkedin.com/company/Gestalt-IT #Rundown, #MLPerf, #CentOS, #RHEL, @RedHat, #Cloud, @Microsoft, @Windows, @IBM, @Apptio, #NetworkMonitoring, @Cisco, @SamKnows, @Databricks, @MosaicML, @CatoNetworks, #AI, @MLCommons, #MLPerf3,
EvenUp, a legal tech start-up, has raised $50.5m in a series B funding round led by Bessemer Venture Partners, with participation from Bain Capital Ventures, Behance founder Scott Belsky and legal tech firm Clio.https://techcrunch.com/2023/06/08/evenup-wants-to-automate-personal-injury-settlements-to-a-point/ Researchers at Purdue University have developed an AI-driven technology that uses a smartphone camera to detect and diagnose medical conditions like anemia.https://medicalxpress.com/news/2023-06-ai-driven-mobile-health-algorithm-camera.html Researchers from MIT and IBM have developed a new technique for analysing unlabeled audio and visual data that could improve the performance of machine-learning models.https://news.mit.edu/2023/scaling-audio-visual-learning-without-labels-0605 NVIDIA has announced its next DGX supercomputer, the DGX GH200, which is designed to help companies develop generative AI models.https://www.engadget.com/nvidias-next-dgx-supercomputer-is-all-about-generative-ai-043053544.html Visit www.integratedaisolutions.com
The podcast is also available as a newsletter: https://ainewsletter.integratedaisolutions.com/ EvenUp, a legal tech start-up, has raised $50.5 million in a Series B funding round led by Bessemer Venture Partners, with participation from Bain Capital Ventures, Behance founder Scott Belsky and the legal tech firm Clio. https://techcrunch.com/2023/06/08/evenup-wants-to-automate-personal-injury-settlements-to-a-point/ Researchers at Purdue University have developed an AI-based technology that uses a smartphone camera to detect and diagnose medical conditions such as anemia. https://medicalxpress.com/news/2023-06-ai-driven-mobile-health-algorithm-camera.html Researchers from MIT and IBM have developed a new technique for analyzing unlabeled audio and video data that could improve the performance of machine learning models. https://news.mit.edu/2023/scaling-audio-visual-learning-without-labels-0605 NVIDIA has announced its next DGX supercomputer, the DGX GH200, designed to help companies develop generative AI models. https://www.engadget.com/nvidias-next-dgx-supercomputer-is-all-about-generative-ai-043053544.html Visit www.integratedaisolutions.com
Why I'm investing in Nvidia! The engine of artificial intelligence! I'm going all in on artificial intelligence with Nvidia and ChatGPT (Nvidia DGX)!!! NVIDIA accelerated computing starts with DGX, the world's AI supercomputer and the engine behind the breakthrough in large language models. The DGX GPU complex is made up of eight H100 modules, linked to one another by NVLink to allow fully non-blocking transactions. The eight H100s operate as one giant GPU, and the compute fabric is one of the most vital systems in the AI supercomputer. NVIDIA DGX H100 is the blueprint for customers building AI infrastructure around the world and is now in full production. Add to that the power of Nvidia Omniverse and you have a major player in artificial intelligence! Website: https://www.metamorphose47.com TikTok: https://www.tiktok.com/@metamorphose47 Twitter: https://twitter.com/Metamorphose_47 Discord: https://discord.gg/2njKqNp8j7 Podcasts: Metamorphose 47 Recording date: April 6, 2023 Investing involves a risk of loss. I do not provide investment advice. #Nvidia #ChatGPT #Bourse #Investir #IA
For keyboard players, this episode is about easy play music for O Brother, Where Art Thou. The Yamaha E463 and DGX-640 keyboards are what I use to play in my videos. No Sugar is the soundtrack from the YouTube Audio Library. Thank you for watching.
Nvidia GTC is happening this week and there's lots of new hardware coming out. From the H100 card optimized for large language models to screaming fast GPUs for laptops and workstations, Nvidia has something for everyone. We've already referenced the announcements around Bluefield 3, but Nvidia has also announced a new supercomputing cluster with DGX Cloud that you can totally rent for $37,000 a month. 0:00 | Welcome to the Rundown 0:42 | SSE Revenues Growing per Dell'Oro 4:09 | Nutanix CIO Out Amid Operational Oddities 8:42 | Nvidia Brings Bluefield 3 to Oracle Cloud 12:23 | They Don't Make HDDs Like They Used To 18:53 | Nvidia GTC Looms Large 28:02 | The Weeks Ahead 29:36 | Thanks for Watching Follow our hosts on Social MediaTom Hollingsworth: https://www.twitter.com/NetworkingNerdStephen Foskett: https://www.twitter.com/SFoskett Chris Grundemann: https://www.twitter.com/ChrisGrundemann Follow Gestalt ITWebsite: https://www.GestaltIT.com/Twitter: https://www.twitter.com/GestaltITLinkedIn: https://www.linkedin.com/company/1789 Tags: #Rundown #NVIDIAGTC #Bluefield3 #Nividia #SSD #HDD #SSE @NVIDIA @Oracle @WesternDigital
This is about the sound from two different Yamaha keyboards for a song from the Wizard of OZ - If I Only Had a Brain. Compare the sound of the DGX-640 church organ with the grand piano voice from the Yamaha PSR-E463. What do you think? Both are great, right? There it is. Thanks for watching and listening. NOTE TO WEDNESDAY NIGHT MUSIC CLUB STUDENTS. That's it for me heading into 2023. One song, remember? Be of good courage like the Lion in the Wizard of OZ. What song did you choose or are working on? Record or videotape your performance for further study concerning technique. Record your progress in writing in a journal. Stay positive. Thanks for watching and/or listening.
Taiwan is a country about half the size of Maine with about 17 times the population of that state. Taiwan sits just over a hundred miles off the coast of mainland China. It's home to some 23 and a half million humans, roughly halfway between the populations of Texas and Florida, or a few more than live in Romania, for the Europeans. Taiwan was connected to mainland China by a land bridge in the Late Pleistocene and human remains have been found dating back 20,000 to 30,000 years. About half a million people on the island nation are aboriginal, or their ancestors are from there. But the population became more and more Chinese in recent centuries. Taiwan had not been part of China during the earlier dynastic ages but had been used by dynasties in exile to attack one another, and so became a part of the Chinese empire in the 1600s. Taiwan was won by Japan in the late 1800s and held by the Japanese until World War II. During that time, a civil war had raged on the mainland of China, with the Republic of China eventually formed as the replacement government for the Qing dynasty following a bloody period of turf battles by warlords and then civil war. Taiwan was under martial law from the time the pre-communist government of China retreated there during the exit of the Nationalists from mainland China in the 1940s to the late 1980s. During that time, just like the exiled Han dynasty, they orchestrated war from afar. They stopped fighting, much like the Koreans, but have still never signed a peace treaty. And so large parts of the world remained in stalemate. As the years became decades, Taiwan, or the Republic of China as they still call themselves, has always had an unsteady relationship with the People's Republic of China, or China as most in the US call them. The Western world recognized the Republic of China, and the Soviet and Chinese-aligned countries recognized the mainland government. US President Richard Nixon visited mainland China in 1972 to re-open relations with the communist government there, and relations slowly improved. The early 1970s was a time when much of the world still recognized the ruling government of Taiwan as the official Chinese government, and there were proxy wars the two continued to fight. The Taiwanese and Chinese still aren't besties. There are deep scars and propaganda that keep relations from being repaired. Before World War II, the Japanese also invaded Hong Kong. During the occupation there, Morris Chang's family became displaced and moved between a few cities during his teens before he moved to Boston to go to Harvard and then MIT, where he did everything to get his PhD except defend his thesis. He then went to work for Sylvania Semiconductor and then Texas Instruments, finally getting his PhD from Stanford in 1964. He became a Vice President at TI and helped build an early semiconductor designer and foundry relationship when TI designed a chip and IBM manufactured it. The Premier of Taiwan at the time, Sun Yun-suan, played a central role in Taiwan's transformation from an agrarian economy to a large exporter. His biggest win was recruiting Chang to move to Taiwan and found TSMC, the Taiwan Semiconductor Manufacturing Company. Some of this might sound familiar, as it mirrors stories from companies like Samsung in South Korea. 
In short: Japanese imperialism, democracies versus communists, then rapid economic development as a massive manufacturing powerhouse, in large part because semiconductor designers were split from semiconductor foundries, where chips are actually fabricated. In this case, a former Chinese national was recruited to return as founder, and he led TSMC for 31 years before he retired in 2018. Chang could see from his time with TI that more and more companies would design chips for their needs and outsource manufacturing. TSMC worked with Texas Instruments, Intel, AMD, NXP, Marvell, MediaTek, and ARM, and then came the big success when they started to make the Apple chips. The company started down that path in 2011 with trial runs of the A5 and A6 SoCs for iPhone and iPad, but picked up steam with the A8 and A9 through the A14 and the Intel replacement for the Mac, the M1. They now sit at a half-trillion-US-dollar market cap and are the largest company in Taiwan. For perspective, their market cap only trails the GDP of the whole country by a few billion dollars. TSMC is also a foundry Nvidia uses. As of the time of this writing, Nvidia is the 8th largest semiconductor company in the world. We've already covered Broadcom, Qualcomm, Micron, Samsung, and Intel. Nvidia is a fabless semiconductor company and so designs chips that vendors like TSMC manufacture. Nvidia was founded by Jensen Huang, Chris Malachowsky, and Curtis Priem in 1993 in Santa Clara, California (although the company is now incorporated in Delaware). Not all who leave the country they were born in due to war, or during times of war, return. Huang was born in Taiwan, and his family moved to the US right around the time Nixon re-established relations with mainland China. Huang went to grad school at Stanford and worked as a CPU designer at AMD and a director at LSI Logic, so he had experience as a do-er, a manager, and a manager's manager. He was joined by Chris Malachowsky and Curtis Priem, who had designed the IBM Professional Graphics Adapter and then the GX graphics chip at Sun. They saw the Mac, Windows, and Amiga OS graphical interfaces, they saw the games one could play on those machines, and they thought graphics cards would be the next wave of computing. And so for a long time, Nvidia managed to avoid competition with other chip makers by focusing on graphics. That initially meant gaming and higher-end video production, but has since expanded into much more, like parallel programming and even cryptocurrency mining. The founders were always more concerned with the next version of the idea or chip or company, and used NV in the naming convention for their files. When it came time to name the company, they looked for words that started with those letters, which of course don't exist, so they instead chose Invidia, or Nvidia for short, as it's Latin for envy: what everyone who saw the sweet graphics the cards rendered would feel. They raised $20 million in funding and got to work, first with SGS-Thomson Microelectronics in 1994 to manufacture what they were calling a graphical user interface accelerator, packaged on a single chip. They worked with Diamond Multimedia Systems to install the chips onto boards. In 1995 they released the NV1. The PCI card was sold as the Diamond Edge 3D and came with a 2D/3D graphics core with quadratic texture mapping. It was screaming fast, and Virtua Fighter from Sega was ported to the platform. DirectX had come in 1995, so Nvidia released DirectX drivers that supported Direct3D, the API Microsoft developed to render 3D graphics.
This was a time when 3D was on the rise for consoles and desktops. Nvidia timed it perfectly and reaped the rewards when they hit a million units sold in the first four months of the RIVA 128, a 128-bit 3D processor that got used as an OEM part in 1997. Then came 1998's RIVA ZX and RIVA TNT for multi-texture 3D processing. They also needed more manufacturing support at this point and entered into a strategic partnership with TSMC to manufacture their chips. A lot of vendors had a good amount of success in their niches. By the late 1990s there were companies who made memory, or the survivors of the DRAM industry after ongoing price-dumping issues. There were companies that made central processors, like Intel. Nvidia led the charge for a new type of chip, the GPU. They invented the GPU in 1999 when they released the GeForce 256. This was the first single-chip GPU: it integrated lighting, triangle setup, and rendering, like the old math coprocessor but for video. Millions of polygons could be drawn on screens every second. They also released the Quadro Pro GPU for professional graphics and went public in 1999 at an IPO price of $12 per share. Nvidia used some of the funds from the IPO to scale operations, organically and inorganically. In 2000 they released the GeForce2 Go for laptops and acquired 3dfx, closing deals to get their 3D chips into devices from OEMs who made PCs and into the new Microsoft Xbox. By 2001 they hit $1 billion in revenue and released the GeForce 3 with a programmable GPU, using APIs to make their GPU a platform. They also released the nForce integrated graphics, and so by 2002 had 100 million processors out on the market. They acquired MediaQ in 2003 and partnered with game designer Blizzard on Warcraft. They continued their success in the console market when their graphics hardware was used in the PS3 in 2005, and by 2006 they had sold half a billion processors. They also added the CUDA architecture that year to put a general-purpose GPU on the market, and acquired Hybrid Graphics, which developed 2D and 3D embedded software for mobile devices. In 2008 they went beyond consoles and PCs when Tesla used their GPUs in cars. They also acquired PortalPlayer, which supplied semiconductors and software for personal media players, and launched the Tegra mobile processor to get into the exploding mobile market. There were more acquisitions in 2008, but also a huge win when the GeForce 9400M was put into Apple MacBooks. Then more smaller chips in 2009, when the Tegra processors were used in Android devices. They also continued to expand how GPUs were used: they showed up in ultrasound machines and, in 2010, in Audi vehicles. By then their GPUs were going into the Tianhe-1A supercomputer, and they had Optimus ready to go. All these types of devices that could use a GPU meant they hit a billion processors sold in 2011, which is when they went dual-core with the Tegra 2 mobile processor and entered into cross-licensing deals with Intel. At this point TSMC was able to pack more and more transistors into smaller and smaller spaces. This was a big year for larger jobs on the platform. By 2012 Nvidia had the Kepler-based GPUs out, and their chips were used in the Titan supercomputer. They also released GRID, a virtualized GPU for cloud processing. It wasn't all about large-scale computing efforts: the Tegra 3 and GTX 600 came out in 2012 as well. Then in 2013 came the Tegra 4, a quad-core mobile processor; a 4G LTE mobile processor; the Nvidia Shield for portable gaming; the GTX Titan; and a GRID appliance.
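To make "general purpose GPU" a bit more concrete, here is a minimal sketch of the kind of data-parallel code CUDA opened up: a toy vector addition written against the standard CUDA toolkit. This is an illustration rather than anything from the companies discussed here; the kernel and variable names are made up, and the unified-memory call used below arrived in later CUDA releases, not the original 2006 version.

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements -- the same kind of massive
// parallelism graphics workloads use, exposed for arbitrary computation.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                     // one million floats
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);              // unified memory, visible to host and device
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover all n elements
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);               // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Compiled with nvcc and run on any CUDA-capable GPU, every element-wise addition executes as its own GPU thread, which is roughly what it means to treat a graphics chip as a general-purpose parallel processor.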
In 2014 came the 192-core Tegra K1, the Shield tablet, and Maxwell. In 2015 came the 256-core Tegra X1 aimed at deep learning, the Titan X, the Jetson TX1 for smart machines, and Nvidia Drive for autonomous vehicles. They continued that deep learning work with an appliance in 2016, the DGX-1. Drive got an update in the form of the PX 2 for in-vehicle AI. By then they were a 20-year-old company working on the 11th generation of the GPU, and most CPU architectures had dedicated cores for machine learning of various types. 2017 brought Volta, the Jetson TX2, and SHIELD ported over to the Google Assistant. 2018 brought the Turing GPU architecture, the DGX-2, AGX Xavier, and Clara. 2019 brought AGX Orin for robots and the autonomous or semi-autonomous piloting of various types of vehicles. They also made the Jetson Nano and Xavier, and EGX for edge computing. At this point there were plenty of people using the GPUs to mine hashes for various blockchains, as with cryptocurrencies, and ARM had finally given Intel a run for its money, with designs from the ARM alliance showing up in everything but Windows devices (so Apple and Android). So Nvidia tried to buy ARM from SoftBank in 2020. That deal eventually fell through, but it would have been an $8 billion windfall for SoftBank, since they paid $32 billion for ARM in 2016. We probably don't need more consolidation in the CPU sector. Standardization, yes. Some of Nvidia's top competitors include Samsung, AMD, Intel, Qualcomm, and even companies like Apple, who make their own CPUs (but not their own GPUs as of the time of this writing). In their niche they can still make well over $15 billion a year. The invention of the MOSFET came from immigrants Mohamed Atalla, originally from Egypt, and Dawon Kahng, originally from Seoul, South Korea. Kahng was born in Korea in 1931 but immigrated to the US in 1955 to get his PhD at The Ohio State University, then went to work for Bell Labs, where he and Atalla invented the MOSFET and where Kahng retired. The MOSFET was an important step on the way to a microchip. That microchip market, with companies like Fairchild Semiconductor, Intel, IBM, Control Data, and Digital Equipment, saw a lot of chip designers whose chips got knocked off, either legally in a clean room or illegally outside of one. Some of those cases ended in legal action; some didn't. But the fact that factories overseas could reproduce chips was a huge part of the movement that came next, which was that companies started to think about whether they could just design chips and let someone else make them. That was in an era of increasing labor outsourcing, when factories could build cars offshore, and so the foundry movement was born: companies that just make chips for those who design them. As we have covered in this section and many others, many of the people who work on these kinds of projects moved to the United States from foreign lands in search of a better life. That might have been to flee Europe or Asian theaters of Cold War jackassery, or it might have been a civil war, like in Korea or Taiwan. They had contacts and were able to work with places to outsource to, and all of this happened at the same time that Hong Kong, Singapore, South Korea, and Taiwan became safe and largely free of violence.
And so the Four Asian Tigers' economies exploded, fueled by exports and a rapid period of industrialization that began in the 1960s and continues through to today with companies like TSMC, a pure-play foundry, or Samsung, a mixed foundry, aided by companies like Nvidia, which continue to effectively outsource their manufacturing operations to companies in those areas. At least, while it's safe to do so. We certainly hope the entire world becomes safe. But it currently is not. There are currently nearly a million Rohingya refugees fleeing war in Myanmar. Over 3.5 million have fled the violence in Ukraine. 6.7 million have fled Syria. 2.7 million have left Afghanistan. Over 3 million are displaced between Sudan and South Sudan. Over 900,000 have fled Somalia. Before Ukrainian refugees fled to mostly Eastern European countries, the world's refugees had mainly settled in Turkey, Jordan, Lebanon, Pakistan, Uganda, Germany, Iran, and Ethiopia. Comparatively few settled in the world's two most populous countries, China and India, or in the United States. It took decades for those who moved, or who sent their children abroad for a better life, to actually find that better life. But we hope that history teaches us to get there faster, for the benefit of all.
Warren Buffett isn't planning on paying his shareholders a dividend, but he's a big fan of getting a piece of his stocks' profits. Ricky Mulvey joins The Motley Fool's Matt Argersinger and Anthony Schiavone to talk about the fundamentals of dividend investing, including: - The case for buying dividend stocks - How to spot healthy payouts - A few interesting income-generating opportunities Tickers mentioned: BRK.A, BRK.B, GE, KO, MTN, VYM, VIG, NOBL, EBAY, DGX, EPR Bonus Resource - List of Dividend Aristocrats - https://www.fool.com/investing/stock-market/types-of-stocks/dividend-stocks/dividend-aristocrats/ Host: Ricky Mulvey Guests: Matt Argersinger, Anthony Schiavone Engineers: Dan Boyd, Rick Engdahl
Hey, man. If you were ever curious how Elvis would react to modern-day things like the DGX and Starbucks…ya came to the right place.
Episode Summary: Elon Musk Sells $5 Billion of Tesla Shares; $500 Shiba Inu Giveaway. Guests: Ben Rabizadeh, StoryTrading 10:00; Vivi, Biotech Queen https://twitter.com/Biotech_SD 24:00; Zandy Forbes, Ph.D., President & CEO of MeiraGTx (NASDAQ: MGTX) 41:00; Ronen Samuel, CEO of Kornit (KRNT) 55:00; Scott Mathis, CEO and Chairman of Gaucho Holdings $VINO 70:00; Renato Capelj, Benzinga, Physik Invest 110:00 https://physikinvest.com/ Hosts: Spencer Israel, Twitter: https://twitter.com/sjisrael; Aaron Bry, Twitter: https://twitter.com/aaronbry5. Subscribe to all Benzinga Podcasts here. Click here for BENZINGA TRADING SCHOOL. Get 20% off Benzinga PRO here. Become a BENZINGA AFFILIATE and earn 30% on new subscriptions. Disclaimer: All of the information, material, and/or content contained in this program is for informational purposes only. Investing in stocks, options, and futures is risky and not suitable for all investors. Please consult your own independent financial adviser before making any investment decisions.
Unedited Transcript: We got a lot of guests today. Here, that's what I said, here's what we got. We got Ben from StoryTrading in like eight minutes. We've got Vivi, uh, the biotech queen, at, uh, 12:15. We've got Zandy Forbes from MeiraGTx, she's the president and CEO, and we're talking gene editing at 12:30. We got Ronen Samuel from Kornit Digital, we're talking, uh, fashion and the fashion supply chain at 12:45. Did I get all that right, as far as timing goes? Yep. We got Scott Mathis from Gaucho Holdings, ticker VINO, uh, at one, and then Renato, uh, Capelj, he is a Benzinga writer and also does some really, really cool options trading on the side. Uh, he'll be on the show at 1:30. So we got, what, 1, 2, 3, we got six guests today. That's a lot, frankly. Uh, maybe a few too many, but nonetheless, here we are. So before we get to those guests, uh, we're going to talk about what's moving. We're going to talk about, uh, crypto. We're going to do a guess-that-chart segment, 'cause we got some good feedback from that yesterday. So, um, AB, where should we start? Um, well, let's start with just looking at the overall market, Spencer. I see Christian in the chat asking who we have on for a guest today. You just ran through them, but you can also check the YouTube description for the guests for the day. Um, and shout out to the chat yesterday, we had some good trade ideas thrown out in the chat yesterday. We did, uh, Easy Mike was talking about Uber puts. I played those, they were up nicely. Um, and we were talking about playing Disney for a big move on either side. We got talked out of it by our main man, Nick Shaheen. But yeah, I don't know, maybe we should have done it. Yeah, I felt bad about that one, getting talked out of it, but, uh, you know, it was a pretty brutal quarter. So, um, anyway, I don't even know why I look at Disney, it's in my never-sell portfolio. That's my first mistake: don't look at stuff in your never-sell portfolio. Otherwise you're just giving yourself anxiety. Rowan DAS pips is saying audio levels. Are we good, Bruce? Or Ron? Are you awake? I'm here. Okay. Oh, he's coming through to us through the sky. I did not know that was coming, from Terminator, hanging out in the background. Hey, uh, while we figure out those levels, if you all want to do us a solid, as DK suggested, and hit that like button, ladies and gentlemen, we'd appreciate that. Thank you very much. Um, hey, let's do the guess-that-chart segment.
We're going to start doing this every day yesterday. Wasn't it easy one a B I don't even remember what it was. It was D whack. Oh yeah. That was that. That was yours. Full disclosure, I guess the one today. No, because I changed it. Shoot. So beforehand Spencer showed me the guests that chart and I guessed it and he didn't like the fact that I got it so easily.So he went out and picked a new one. I don't know what it is. I went out and I picked her from, Ooh, this will be a good drawer. And then he gets to like right away. So it's not a firms that don't get that, um, drop your answers on the chat and whoever's right. Uh, email us afterwards and we'll send you some swag.Um, here is two days chart of the day. This is going back to like February, or actually this is going back to the start of the year. This is going back to the interests of the start of the year. Now I will give you some hints because otherwise it'd be impossible. I feel like, um, in some respects, this is a technology stock in some respects.Oh man, we can have a winner already. Holy moly. Christian Gallagher. Wait, it was PayPal. This is PayPal. How so are we looking at weekly candles right now? Is this a daily PayPal has gotten beat up over the past month or so it looks like, yes, it has gotten beat up is, is, is, is a nice way of putting it, but yes, Christian Gallagher.Did anyone yesterday? No, Frendo on yesterday. Christian email us shows app benzinga.com. Hey, why is PayPal down so much? Didn't they just announce a Venmo, Amazon integration. So yeah, you would think that tell me you don't own PayPal or maybe you do and you don't know it. Um, I think in my like real portfolio, the one that I don't manage it's in there scourge.It's too easy with showing prices. Well, that's an idea. I take the price off ridiculously hard, but we could do that. Should I buy? I like calls on PayPal. No, it's gotta be coming back up at some point, right? That's one way of putting it. Um, you want to throw out like, uh, you know, you want to buy the, the, the, the, the two 80 strikes expiring and like six months.Really worth looking at. I'm sure it's not very much money. I do think though we have to, um, you know, keep up, like if we start doing obscure biotech stocks, you know, it'd be nearly impossible. So what we'll probably try to stick to, I dunno, S and P 500. Yeah. That's, that's, that's a good plan. Stick to the S and P 500.So we're going to do this everyday. Christian email us shows up and it's going to come. We will hook you up with some schwag before we go to our first guest AB I seem working backstage. Let's do our first crypto update of the day. Sure. Yeah. So yesterday Bitcoin spiked on the CPI data and then gave up all those gains and more.Let's take a look today to see how the, the crypto markets are responding to that yesterday. Uh, we, we have a lot of people in the chat that have been asking kind of how are they supposed to be trading crypto right now? So I know you and I, Spencer are in that boat where we're just like adding Ethereum and not really touching it or selling it.I haven't even added recently. It's been a while for me. I hadn't bought a theorem. I wanted the board. So yesterday, w like I've mentioned, once that CPI data came out, Bitcoin spiked and actually hit brand new, all time highs, um, gave all those gains up is currently trading about $65,000 a coin down 4.8%.Ethereum is also down, albeit not as much Ethereum currently, right around $4,800. 
So we talked about that $5,000 level being a big level for Ethereum. I think once we see a theory and finally breakthrough that $5,000 level, we can really see it run. Um, but yeah, everything pretty much in the red, like you can see from this heat map Sheba, he knew though in the green up 5%, um, we're actually going to have a big Shiba Inu guest on the show tomorrow.Uh, his name is Ross. He was an early investor and also a kind of co-developer of Sheba. So he's someone that I think I can say confidently knows more about Shiba Inu than 99.9% of people in the world. Um, so if you're interested in Sheba or what's next for the coin tune in tomorrow, he'll be on about one 30, uh, Spencer, any thoughts on this crypto heat map?I I'm trying to get a link right now because we're actually doing a sheep giveaway. I'm trying to get the link for that. I don't have it handy, but we're giving away some free shipping. I hope somebody gets me that link so I can get it to you. Uh, that's my thought, my thought is no. I mean, Bitcoin is an inflation hit crypto isn't inflation, hedge.Um, I don't quite understand the why it's down like this today, but you don't. All you need to do is look at the reaction to Bitcoin at 8:30 AM. Eastern time yesterday, right? Inflation comes out. It's harder than expected. It's it's more than expected. What does Bitcoin do? It goes up. Mike is saying, Hey, be it.It's a good play. Definitely go for PayPal options. Um, so I don't know if he's, you know, facetiously saying that he wants to see me lose some money, but I'm looking at him. I'm looking at the calls. Um, but yeah, Spencer, like you mentioned get some free Bitcoin we'll we'll throw that zing token up real quick.Go to Voyager. You put in a hundred dollars, you get $50 free Bitcoin, if you use the code zing. Uh, so not a bad deal at all. I mean, who doesn't want $50 of free Bitcoin checkout Voyager for that? Use the code zing. Um, all right. Real quick before we get to Ben. Yeah. Yeah. And, and I'm going to post the link to the ship giveaway.Um, when I, once you find that I find it. Yeah, no problem. Somewhere out there, easy Mick, you might be the guy who knows technicals what they're talking about when it comes to technicals, I'm watching a. Dash, which has been up a lot over the past, I don't know, week or so. Is this, is this, this the kind of subtle head and shoulders forming on the, on the one day chart?I think we're looking at five minute candles right here. I see one shoulder here. This could be the head, if this is the, the second shoulder, um, you know, could it start falling down or should I be looking at some dash puts, I need someone who knows technicals better than I do to let me know if this is, uh, you know, I'm not a big patterns guy, so I'm not really know, but we have some people out there that are outsourcing.And I know, I know when I need to listen to someone that knows more than I do. Yeah. Um, Alvin say, and he's looking at Wayfair puts an ADSL. K what's ADSL desk. Autodesk. Yeah, by the way, I just put the link in the chat for the ship giveaway. There it is. There it is. You can't click it on the screen, but it's in the chat.I'm just throwing up there. So y'all know it is there. All right. Y'all so story train. If you guys have watched a show before you've seen Ben on the show, he gave us a E H R at, I don't know, four bucks, three bucks. What does it now? It's like, I like 20 let's check real quick before he comes on. He'll give us an update on AHR.Um, it's at $23. So let's see. 
I mean, it's, it's up. I want to say like at least 200% since he pitched us to us on the show, he's got a couple more stocks. We're going to talk about, uh, Should we bring them on easy MC RSI? Isn't it. If the RSA is 90, isn't that a sign it could come to? I don't know. You, you know, more about technicals than I do.Let me know. All right. Without further ado, let's go ahead and get BenOh, Ben, what was the, what was the price of AHR when you first came on the calendar? What's up guys? Yeah. When I first came on, the show was around $5, five 40, somewhere that somewhere in the low fives, I believe it was. Yeah. And when we first presented it to our VIP community, when I presented as a trade idea, it was $2 and 74 cents.What, all the way up to 27. So it was a 10 bagger if you captured that. And I happened to pretty much captured that I was done. I'm handing this from 2 74. I didn't sell my first shares until 16. And then, um, I sold a lot more in the, in the 20 to 25, 26 weeks and maybe even gotten a little bit at 27. So, um, yeah, I've got a few updates for you on, on that.And a couple other sucks. Yeah. You got a new background. This is the first time we're seeing this. It looks pretty good. Yeah. A lot more work to do. We got to get that our mikes and cameras and everything, but yeah, this is the first time anyone's seen this background. Cool. And you said too, it looks like we did.It took us far too long, but we eventually got it together. You get a zoom out so Ben can see our beautiful. Oh, wow. That's great, man. Um, all right, Ben, you want to go ahead and get your screen shared and we can go ahead and run through those slides real quick. I know we only got about 10 minutes, so we've got a few I'll work.I'll be fast. All right, there we go. All right. So yeah, first we got that quick disclaimer, we got to always do that. A story is not investment advisor and missing his spirit is in most gimmick where some losses of you who are new to store trading, what is associate is the practice of understanding market pricing.We also call that the story behind the trade through the four pillars, which we say are sentiment catalyst, fundamentals, and technicals. That allows us to take a holistic look at markets and make choices based on all of these factors, not just one of them. So air story, trade idea, update. I officially opened that in my community on July 8th of 2 74, I did close it on November 8th of 25, 79.I still own some shares. Uh, I ended up selling about nine giving 95% of my shares. Um, Monday morning was really the trigger for me and that's because I saw something, the technicals and also the sentiment, which caused me to say, let me, let me lock in these games. And the technicals last Friday was the.Solid green candle on the chart, you can go look it up the first solid green channel and then tire run-up, which, which means it started the day high and then went down lower throughout the day. And throughout this entire run, it was starting low and going higher throughout the day. I took that to be a reversal sign plus the sentiment, I think when reached peak Eby sentiment for now, for this cycle with, uh, Elon Musk selling shares of Tesla with the Caribbean IPO, with excitement, but Eby infrastructure.I'm trying to time that sentiment top. So I took my profits. I am keeping about 5% of my shares because it could go much higher still if they get the right contracts in the future. Got it. 
Um, so it took some money off the table, trim some of your position and air took those profits, never a bad move. Uh, Ben, what else?I was on your radar today. I wanted to give you an update on side, cause I was on your show. July 26, we presented this at $5 and 70 cents. Uh, this is stuff that's been up 66% since it was initiated in our community. At the time we said, Hey, T-Mobile's come in. End of month in August. It didn't happen then.But it happened last night. Uh, so a few updates. They have earnings last year. Um, so this T-Mobile deal came three months late, but it's finally here. Now, key, this was announced last night on the conference call only. There's no PR yet. So people who are in the know who are listening to the conference call, they have a big edge getting into the stock right now.Um, there probably will be a PR at some point in the future, and there's also tremendous traction with their other customers, 18 T and Verizon. Um, there was an upgrade today to $9 and I'm not even sure that Benzinga caught it, that upgrade by the analyst and $9 by lake street. But we think in our community, it can go much higher.Uh, there's uh, estimates out there in our community saying we can do go up to a dollar 33 DPS by 2024, which would be a $40 price target. So that's the update on SSI. You know, our investigative research worked and we're read about T-Mobile just a few months late. So this company is going to start printing lots of tests going to be a very profitable company.Do, do you have a target? Um, yeah, no, I don't have to stop losses. God forbid men. No, I do stories yet. I never do stop losses. So I've been holding the stock for like two or three years. I increase in decrease my position around catalysts. So, um, I was buying and after hours last night I bought a lot more today.It's now my largest position actually. And uh, I'm not going to put a price target. I see how it goes. I, I assess the fundamental sets of metallics and technicals on an ongoing basis to determine. There you have it. Yep. So I do have a new pig that I'll get to in a second. But before that, just a little quick alert, maybe something for you guys to look into and talk about, because this is a big kind of big cap for us.At $1.5 billion company GoPro. We presented it to our community Sunday night, um, because the fundamentals are super strong. They had earnings last Friday and we have anticipated a technical breakout of the 200 DMA, which just happened this morning. And this is really, it could be a really fun situation, wanted to bring your attention because it has, this has short and gamble, squeeze potential, very high, short position.He has sense of it has been very, very low, but the financials have completely turned around with this company and they're printing tons of cash. Now you see the technicals broke. The options are very liquid, very cheap. And if it gets into the right hands and the Reddit community, et cetera, this could be a crazy profit potential, you know, with shorten game of squeezes.So keep your eye on that. But go. So GoPro's up about 9% today on the strong earnings. Um, so I, I mean, I, I don't know, personally, Ben, if I'm going to go in and try to chase GoPro and it's already up 10%, but I, or 9% today, but I definitely like having it on my radar. Uh, J rice in the chat was also talking about GoPro saying that he thinks it could be a long-term turnaround play.Uh, I don't know. Munis has been one of those stocks that has just been like beaten down over time historically. 
Um, but you know, at some point I don't think you can ignore the fundamentals or Fridays when I got in added more Monday, uh, I got a little bit messed up on the options than playing the options.So I actually lost some money. Cause I got scared with the whole inflation thing yesterday, but I'm in it now and yeah, the sense of it's sport, that's the only thing people hate this company, but they're printing cash like crazy. The technicals are turning and shorting, given squeeze potential, such that this, yeah, my all hold back a GoPro has always been that.I feel like they have a very limited, uh, customer base, you know, it's like who, who who's going out and buying GoPros. It's people that take part in extreme sports, you know, mountain bikers, snowboarders, skiers, et cetera, outside of that. Um, I don't, I don't know how many, you know, everyday people are GoPro customers.It's absolutely correct. And I'm not, you know, I wouldn't take issue with that, but the amount of earnings they have like 60 million EBITDA this last quarter. And if you compare it with their market cap, I mean, this thing can easily be 17 bucks. Even with that knock against it. Got it. Um, I been, what else is on the radar?I have a presence. It's my first story. Trade ideas. Since air. I presented this to my VIP community last week. Okay. Listen to the presentation. Don't just jump into buying guys. Okay. Because of what happened with their, every stocks, not air, I can guarantee you, this is not going to go a thousand percent in the next three months, like heritage.Okay. So that sock is Gaia, ticker symbol, G a I a. All right. So we're going to look at it and yeah, it's a smaller company on the ground, 200 million market cap or so, but you know, what's the story behind the trade. That's what we're trying to figure out. And again, we look at the sentiment, the fundamentals, catalyst and technical.So let's start with the fundamentals guys, digital video subscription service, like in some ways like, like Netflix, they sell, um, they make original content for yoga, alternative health, holistic healing, nutrition. It's a monthly subscription service. They've been growing steadily over the last many quarters.They're profitable. Uh, fundamentally I think their inflection really happened to a quarter or two ago when they became profitable. And you can see some of the, uh, the trends here in terms of their, um, their revenue and their EPS, although in the right direction. So in our community, we collaborate with people who are really steeped in fundamentals, just charts, courtesy of Mark Holmes.He has a risk reward chart here in terms of what is the value of this stock and. You know, it's worth, it could be worth at least $17 a share. And the stock is very cheap here. So, so that's the fundamentals. You can check it out on your own. Go look at the earnings report and you'll be able to verify everything I'm saying about, uh, the growth and the subscriber growth and money they're making written it now.And it could potentially be, uh, you know, Netflix may buy them out one day. You know, there's, there's a lot of opportunities here. So, um, yeah, catalyst let's go to the next pillar. Uh, in these sorts trading four pillars, they had their earnings just recently, November 1st, it was 20, 20, 20 2% revenue growth, uh, year to date compared to last year Q3 EBITDA of $4 million.Uh, even a margin of 20% was their fifth straight quarter positive earnings in cashflow. Um, and then they had an additional catalyst the next day. 
So we'll talk about that catalyst in just a second. What happened the day after earnings, but first let's go to sentiment. So the sentiment is kind of poor with the stock because fundamental investors are frustrated at the price action.I know a lot of fundamental investors saying this should be worth 17, 20, 25 bucks. Why is it $10? Why is it moving yet? So I listened to other participants. I say, I talked to people in social media. I talked to people in my community. Why aren't you interested in the stock? And this is what I'm hearing.The total adjustable market may be too small. It may be too niche, their content. We talked about alternative medicine, yoga, meditation, and things like that. So it feels like I'm just not interested. It doesn't seem like a huge part. Other people say, Hey, the content, they have some content that's kind of fringe on their French content.Like some of that alternative medicine, there might be some videos on, you know, some vaccine hesitancy type stuff or who knows, like some things are it's alternative content, right? So some ESG, uh, buyers, uh, may stay away from that environmental social governance. So that's another knock on the sentiment.The other knock is this is just slow and steady growth it's and where's the hockey stick potential on that. So just remember, this is the poor sentiment. This is what happened. Going back to the catalyst the next day after earnings, after we know the fundamentals are great, the next day another catalyst hit the other, the catalyst was there was a PR that Demi Lovato became a brand ambassador for.And there was a press release that a lot of people didn't see a where Demi Lovato says I'm excited to be one of Dias. First celebrity ambassadors, and honor to join a platform. I've been a fan of for some time she has 118 million followers on Instagram. And the market cap is again like 189 million. So this company has been growing slow and steady.And all of a sudden, they hit you with this news of Demi Lovato is a huge mainstream personality, that and company, I want to go back to this. Airpoints the one thing I saw that the thing about Demi Lovato, uh, in, and I, I guess I didn't realize this was the same company, but like they got a lot of weird shit on their platform.And that was the sentiment I was talking to. They're saying, oh, there's stuff is weird. I just don't want to own this company. But you know, I think that, you know, they're growing steadily, they're getting to a place where they can really focus on growth now, and to me, and let me go back to that bad point.You just made that pinpoint. You brought it up for me, right? Yeah. They have weird stuff. It's too niche. The French content. They keep some buyers away. But this is where I think that Demi Lovato news is really significant because they're in a financial position now to really grow the company. And to me, this signals.And in fact, in that PR said, I'm excited to be one of guys. First celebrity ambassador, And I have a feeling this company is going to start growing their content and start getting into more mainstream content. And based on what you're saying, I think they may be looking for more celebrity ambassadors.And it's just a great situation because the downside is so limited. You have a fundamental floor here and now you have optionality upside. If the company starts doing things to get that hockey stick growth potential. And that's why I really add it to my position here. 
And I'm very excited for the next several months on this stuff.Um, let me just go to the last or technical resulted with her awesome technician Rex. And this is a monthly chart is saying, uh, this can go to the twelves if it breaks out of the yellow.here, which thought is that shorter than I think the current breakout circled right there. It's got a breakout of, I guess, 10 70 area.He thinks it can go to 12 for the month. And here's another view which looks much more bullish. This is a long view of the monthly chart is a sometime in the future. When the 12 to 13 breakout at the blue line, it should proceed to break out the all-time trend at the white line. And he thinks this is a several month play for that to happen, but you can kind of see where this can go.Uh, if that happens. Uh, any questions on that or any of the other stops, man? I didn't see this one coming. I didn't realize this. I read about this company and I was like, man, I didn't even realize they were a public company until you came on here. So, um, I'm all I'm putting two and two together here. Uh, Ben, thank you as always for coming on the show, we appreciate it and uh, and have a good rest of your day.Please follow us on YouTube. Thank you guys. Thanks a lot, Dan. We got to get moving. We got our next guest. She's already here. We're going to talk biotech guys. I know that we, we always get asked. We always get questions like when's Vivian. When's Vivian. When well she's on right now. Not right now. She'll be on in like five seconds when we bring her on.But Vivi Bio-Queen will be joining us every Thursday. At this time, I told you all to save your questions for your bites. The questions for right now, let's bring her on Vivi. How are we doing today? Good. How are you guys? Can you hear me? Well, we hear you. We see you and we have questions. Oh, awesome. And before we do any questions, I think, um, we can do some updates.I did have a request on Twitter to cover it stock. Um, like at my old good lawyer friend, mellow at Twitter, I, this is out for entertainment purposes. So, uh, is only my opinions are for entertainment, purpose, not a financial advice. So I wanted to, uh, first of all, um, talk to you guys, um, just given up a date, we just had the, uh, ER, on KMP.H can you guys put that up over there? Yeah, this is a daily chart. Okay. And P Hit's quiet. So, yeah, so we just had the ER and I want to, so when I, when everybody kept asking me, like, what do you, what are you going to do for 'em? What are you going to do for K MPH? My position was this small because he is a, some of the concerns I had. I spoke to the manager and he interviewed me. And, uh, he said, you know, we're not going to put any reps in New York yet.And it's just going to be certain regions in the United States. And for me, it showed a little bit of a weakness because for me, if you launching a drug, you should have put reps all over the map. But I think they're just trying to be cautious. So they report around like, I think 2 million in revenue, but here's what I'm bullish.Now. They just launched the rest of the Salesforce. And this company here, you know, their burn rate is really, really small because Korean is doing all the selling. So the burn rates is like a million a quarter and they have a, still have 135 million. And he is what I'm bullish of. 
Um, there's a company that sells a scripts and there's a guy that, uh, his friend works for this company and feeds him all the script.So for you guys to have an idea, right, the, the feedback has been amazing. So July, they had a nine scripts for the monthly and then August, they had 173 scripts, um, September 416, October 886, still low because there were not out throughout the nation. But I think the feedback has been tremendous from psychiatrist, from the drug and the differentiator.And I feel like as they deploy, uh, the other sales reps, we going to just ramp up the sales. So I feel like at this moment, I wanted to add a little bit more to my position because I, I see the future being very bright here, KMBH. And I think that, um, we gotnot yet. Not yet. I, I should have now I have an, I am waiting for some of my swings to flourish, so I haven't been able to, to add, but I wanted to for sure. Okay. And then I want it to put you guys out. Somebody asking me to, uh, to, uh, cover a, uh, a N N S. Hmm. Okay. Now I want I'm familiar with, but it is biotech.So it's a biotech it's under the radar. So I wanted to explain to you guys some of the reasons how I invest in biotech, and I told you guys, if there's no commercial products, there's, there's got the most important thing you can look at is cash, right? Because if you don't have a cash, no product, are they going to burn too much?And they also gonna have a, to do offerings. And if the stock is low, they do reverse the split. So the first thing I do is to look what was the cash burn and how much cash they have left in a would the future and what the catalysts are going to be. So this company here, the first thing that got me to, to look at it was they have a $271 million in cash.So they're really the city really strong. So I thought that was a really, really, uh, um, uh, valid, uh, information, very important. Then I look at a financial institution on. EVestment, which is a goblin, was the director of FDA is a partner on their firm. So they own 2.5 million shares. So I thought that was another very important information because goblet is well connected to FDA.Not that you know, it nobody's going to be bought out, but when you have investors that work with FDA, they know what it takes to, to be compliant, to get a drug approved, right. Because it's just so much behind to get a drug approved. So manufacturing, you know, how the studies are designed. So I like the fact that there's Nia investment behind.And then I also know VOD is, oh, 2.1 million. So Novartis has some interested and, and the pipeline, it looks really, really amazing is all CNS, um, uh, and mass. Um, they're going to have, uh, Gilliam Barre syndrome, which is a very rare disease. So I really like this company. I really do. I, I'm not, uh, obviously I can't be in every single stock.Uh, but I, I, I, I think this is a really good a long-term, uh, stock to hold for sure. So that would be one. And then if you can, um, bring back, uh, pro GPRO G we have a lot of fans PRG. It's like almost like a min stock, but also a really good stock to hold long and, um, really. Yes. Yes. So, so what, what do you see that the market doesn't see, um, what I see that the market doesn't see.Um, I will tell you why this company is going to be huge. They have a two types of, of delivery system. They have a, um, on, I'll tell you guys, they have a two to two technologies and I wanted to bring to you guys, let me see. I can share a screen. Okay. But I have it here right in front of me. 
So they have two things.They have the OBDs okay. Which is oral Biotherapeutics delivery system. So what it does is, is able to take big formulations and put in a form of a pill. So for example, Humira is one of the biggest blockbuster drugs in the world. If they found a $10 billion. So you imagine if Abby, I think Abby is a Humira honor.If, imagine if Humira is loses patterns and five years, right. And all of a sudden, because doctors love the efficacy of this drug, and that's why it's so well prescribed. Right? But it's an injection. Imagine for this company sell their technology. And all of a sudden you can have a drug, like a Humira being, um, given orally, all of a sudden you create a whole new patent for that drug.Do you follow me? Because a different formulation. So all of a sudden you gaining another 15, 20 years. I have a patent on that drug. Now imagine how many pharmaceutical companies would have be jumping all over because they like, geez, I have this, this drug that's high formulation. And now I have a, I would love this drug to be an oral form because patients do prefer to be an oral form.So I see, um, they announced that they have three partnerships with the big pharma, but they have announced who, so everyone is kind of on a suspense, like who are going to be the big pharma. So they have, uh, right now with the Pfizer there, just to have an idea, they, the not only the delivery, the delivery system does this to the big formulation, but also one of the drugs of four in any boat to the second one, the oral bio biotherapeutic delivery system.The OBDs, what it does is it's designed it to, to, um, to take it a pill and the pill, the way this delivers it, doesn't go all over your bloodstream. So it's it's for the GI tract. So is GI specific drugs and there's one. For for, for, uh, uh, Pfizer they're there. The preclinical, what they found was not only that, that would their delivery system, that drug was 25 times more potent than the Pfizer drug, but had a no toxicity because it doesn't go to the bloodstream.Like the other drugs would go. So you have a less toxicity, less, less side effects. So imagine what they can do if they already doing this with Pfizer drug, they're studying the Humira. Imagine like for me, this drug should have just literally like get royalties for every farmer, choose the technology instead of it being bought out.Right? So I believe this, the future of this company is super, super bright. Uh, you know, it's heavily shorted. So I think that a lot of people are here for the, the, the, the gum is squeezed because if you look at the amount of, of, uh, of, uh, options, that the options chain is crazy, uh, for this, for this company.But I, I will, like, I have a big position because I wanted to, you know, to trade around my core, but it's some, it's a company that I wanted to keep it. And, um, and a long term, uh, option, because I feel this company is going to be huge. So they just appointed also geo hall, believe it or not do, how do you, how, um, she's in a board of directors and this woman, it's like a powerhouse in biotech, a friend of my work.As so maximum, so messenger gas sold. So she comes in with a lot of experience in pharma and a lot experience in acquisition. So, uh, the team is fabulous. They are four miles away from my house. I should have just bring them a bottle of champagne when we hit $10. 
But I believe in this company, this company has a bright future and that, um, right now there's a lot of people on it, you know, waiting for that short squeeze.But, uh, it's been keeping really it's being holding like it dipped to, to like 2, 3 0 3 today and it went right back up. So it's been keeping, you know, I think the short sellers were expecting after der cause you know, there's, you know, a yard for a state, uh, initial stages of biotech doesn't produce revenues.Right? So it, it dipped to fund 360 2 all the way to three, but it's been holding for weeks at that average. So people are not selling people believe in this company for sure. Uh, can I ask you, what do you, what do you get, do you have any favorites in the, in the gene therapy space? I do. I, um, I, uh, I, I'm a loan holder for ADP and they have the, uh, it's one of the car T therapies, but it's not the car T is the RTC, uh, ADP.And, uh, the reason I like this company is, uh, not only they have a partnership with Genentech, uh, the Genentech partnership is up to three. And, uh, they already have enough funny from Genentech cause they got, they gave them a prompt payment payment. They have funded into 2024. So I feel like this is a very safe play in regards to not having offerings and that they are sitting with 285 million in the bank and they do have, uh, some, um, some, uh, catalyst coming.And, uh, I, I believe this company will be a multibagger on day, you know? Uh, so it's one of those that you set and forget it, but I like the position of cash because it gives me the comfort that they're not going to be throwing in offerings after a big catalyst. You know, they, they have they're in a really strong position.And when you look at, uh, um, institutional ownership matrix, all 15 million shares of this company, baker brothers owned 18 million, the institutional ownership. It's so strong in this company. And obviously Jen at that has a huge, huge portion of the company and has their eyes on the company. So, uh, this, this is a big one for me.I got to ask you about, about BCR X here. That's the rule. Every week we got to talk about BCR ex of course, it's this year access to my unicorn. You guys, for sure. So, um, it's, it's funny. I held this space as on Twitter yesterday, and I was talking to a pharm D you know, I do respect, they have a lot more knowledge than me, you know, uh, in terms of a clinical.And he validates my position on the CRX and GRTs, which is great stone. And he says to me, you know, be CRX is, is a rare diseases monster in the making. So they, um, they just released the earnings. They put 38 million for the quarter, and people, BU people were really upset that it wasn't 7 million like this huge numbers, but for the mentally long-term, it's still there, right.Because the science hasn't changed. So I see this as an amazing opportunity, uh, if you're not in the CRX, but, uh, just, you guys have an idea. Uh, the biggest drug for, um, for Alex yawn is, uh, Alto Morris. And they are not even that good because it's not only an infusions for PNH, but patients still need transfusion, uh, taking this infusion every eight weeks.Uh, BCRA X has the competitor, which is going to be an oral oral, uh, competitor factor D and not only patients that have been on this study up to now, not only they jumped from phase one to phase three, because they did so. Patients to this date. 
I think there's 40 weeks, 30 weeks of, could it be this fusion to this date?So imagine having a drug that is, it's already a rare disease for PNH and patients only at the choice on the available is in Jackie, no infusions. And you still have to go through the transfusions. Imagine having an oral pill that you don't have to have a transfusion at all. So just make them do the math and Alex, the on 70% of its revenue, 70 was on PNH for this drug and they got bought out for $39 billion that would have put the CRX at a $230 a share.And B CRX has a better pipeline with a more potential and it's going to be all oral. So you guys do the math, if you don't think this is a monster in the making. Alright, Vivi the bio queen, she joins his every single Thursday at biotech. Underscore SD is for Twitter handle it's up on the screen. And uh, and then please.Yeah, please, if you, I will post my DD there because this is a very short, so you guys searched the bioclean on Twitter and you can find me all right. Thanks a lot. Viva, talk to you again next week, next week. All right. Uh, Hey, let's stick with biotech for a second here because our next guest is the CEO of a gene editor.Company, hence why I asked to VV about that one to get her thoughts. So, uh, if we can, let's go ahead and, uh, and, and, uh, bring her on guys. Andy Ford. She's the president CEO of Mira GTX. His company is a lot going on right now. They're at a very critical point. So let's get Zandy onum, Ford,by the way. Yes. Thank you. Thank you. Where we're actually, it's on our to-do list to get new music, but, um, thank you for, for, for the compliment. Uh, so as I said, it's a pretty critical time for, for mirror DGX. Uh, you guys just presented at, uh, uh, the virtual, uh, oh gosh, uh, the European society of gene and cell therapy Congress, right?Uh, yes, we did add three abstracts. Yes. Right. And then there's a, we're due for another, a little bit surprised when I found out you're on the calendar. Cause I thought you guys, you have another, uh, presentation coming up in a few weeks. I, I, I think I believe right. We do. So we have, um, quite a number of presentations in the second half of this year, uh, which included, uh, some presentations at the meeting.You just mentioned on our programs and our switch that allows you to switch gene therapies on and off with a pale. Um, related to what video was just talking about, actually, that's what I do want to talk about. I'm sorry about, and then, uh, at the beginning of December, we, uh, having a clinical update on our xerostomia program for patients who've been cured of had a neck cancer, but don't make saliva.So we'll be completing that study this year and we'll be updating on the clinical progress so far at the beginning of December. And then a couple of weeks later in mid December, we're having a science day to discuss in some more detail, uh, Ribas switch technology and our promoter platforms, which allow us to really optimize gene therapy.And for the first time switched gene therapies on and off with an oral drug and not just switched them on and off, but quite precisely dose the amount of gene therapy at a particular time with a dose of an oral pill. So let's talk about the switching via via pill. Yes, exactly how that. 
So obviously gene therapies are a virus which contain a gene and there's a coding sequence of the gene, which will make your protein, whether it's Epogen for example, or whatever, uh, gene therapy, it might be RPG or for the eye.And that gene is activated by a promoter, a regulatory sequence at the beginning of the gene. So that's the normal gene therapy promoter and a gene, the promoter switches the gene on, and it remains on for the rest of that patient's life, all that sells life. So you have persistently expressed gene therapy, but what we've been able to do for the first time is we do everything.I've just told you with the promoter that regulates the gene therapy and switches it on. But on top of that, we put into the gene sequence, a small sequence of DNA, which instructs the entire RNA produced from that gene to degrade. However, if we give a small molecule, but via pill, and we've got many small molecules, because we've developed this as a platform, it stops that degradation.It cuts the entire degrading sequence out of the gene. Produced RNA and you'd get the gene switched on as if it was never there. So for the first time we can deliver gene therapies, which are not on, so they're not producing weird proteins or bits of proteins. And we give a pill and bomb that I'll call it.The degrading signal is cut out of the RNA and you get a perfectly normal protein product.I guess I have so many questions. I don't even know where to, I like that all sounds incredibly complicated. Um, I guess, uh, how can you make sure that it works? So it's, we've when we set up the company, this was one of the technologies that we, um, we wanted to build and these switches made of RNA shape.There are thousands of them and bacteria. And for, for decades, people have tried to take bacterial switches and make them work in human cells. And rather than doing that, which hasn't worked very well. We built, we use the theory of Reiber switches and we built based by base our own switch. So we built it in mammalian cells.We then tested many switches and we have a platform of switches we can control. We were able to make these really simple switches, which switched on and off to high dynamic range. So 5,000 fold above the off level when they're switched on. And as a consequence of that, we were then able to change the drug that we activated with.So now we have multiple genes that we've put our switches in sitting in our freezers. So, uh, various antibody, PCSK nine antibody. You'll be aware of, uh, PD, one antibody, the, the very large drugs we can regulate. And then other drugs like GLP one, obviously a diabetes and obesity drug, which we can regulate.So we've got those genes and we can now put them into vivo in mice and NH PS, and we give those animals small molecules and we've already shown. Based specifically on the dose of the small molecule, we see our drug switched on to exactly the right level in each animal, depending on the dose of the small molecule you give.So we have built this over the last five years and we have moved from cells to mice, to non-human primates. And we're currently in a position to start doing I N I N D enabling studies for both the small molecule, all drugs and the genes that they regulate. Uh, and then as far as use cases, I know you're working on, um, you know, you're working on applying this, um, to, uh, I disorders, right.Uh, but is that the only use case right now? Tell us about the other one. No. 
So we developed this technology of controlling gene therapy with a pill in order to much more broadly open up the space that gene therapy could be used in. We do have a lot of expertise in the eye and a partnership with Johnson and Johnson for our rare eye disease programs, but in diseases like wet AMD or dry AMD or uveitis, those large diseases, these are targets for regulation with our cassette and small molecules. In the case of our wet AMD program, we inhibit VEGF like other companies do, but what we're able to do potentially is, when we put that gene that blockades VEGF into the eye, we can formulate one of our small molecules, which is otherwise oral, into eyedrops. So what we're working on now is turning our small molecules into eyedrops, so we can put a wet AMD drug or uveitis drug into the eye as a gene and switch it on each day with an eyedrop. So the eye is an excellent place to be able to regulate gene therapy with a small molecule. Another place which is really important is the brain, because it's very difficult to get antibodies or biologics across the blood-brain barrier. But what we're able to do potentially is, since we have regulated antibodies, we can put the gene in by an injection into the brain, just a one-time injection within the blood-brain barrier, and then all you need is a pill which crosses the blood-brain barrier. So it allows us to deliver drugs that are really hard to deliver by other routes. And there are many, many more applications. It hugely expands what you can use gene therapy for, because for the first time you can control how much you give and at what time. It seems like, broadly speaking, Zandy, gene therapy as an investment was super sexy a couple of years ago, right? It was super hot, and then it came down a little bit, like, oh, wait a minute, this is still really early days. Where are we now? Are we back to gene therapy being the hottest topic in biotech, or are we still sort of in the off cycle? I don't know how else to phrase it. Well, I think there are many different gene therapy companies and there are cell therapy companies; there is a very, very large number of therapies in the genetic medicine space. And there are some companies that just have a product or a platform or a particular organ that they focus on, and that is a somewhat higher risk for those companies that depend on data around a particular study, right? What is quite different about Meira is that we established the company to really innovate in gene therapy and to choose indications in the clinic that had good proof of concept, were highly likely to work, and would support a future pipeline. We built everything you need to be a gene therapy company in house, so we have multiple promoter platforms and multiple capsid discovery efforts. We have our own internal manufacturing, which is probably the broadest in gene therapy today in that we manufacture our own GMP plasmid, we have two viral vector manufacturing facilities which are flexible and scalable to commercial scale, and we do our own QC and analytics as well as potency assays. So we have a very, very broad, I suppose, toolkit that's required for anything
that you need to do in gene therapy, and we're now positioned, with this regulation ability, with a deep pipeline of regulated genes that we can then take through to the clinic with our vector technology and our own GMP manufacturing, having brought all of that in house. I was just going to ask: as we get more developed in this space, like Spencer said, it seems like a couple of years ago the gene editing space was huge for investors. What advice would you give investors that are looking at different genomics companies, to be able to discern which ones are going to have an advantage in the field once the industry does become hot among investors again? I do think that right now manufacturing is not just a bottleneck with respect to capacity; dealing with regulatory agencies globally, expertise in the manufacturing process, and the assays required to show the release and stability of your products are very, very important. Being able to have as much of that as possible in house de-risks the clinical programs, particularly if you have those sorts of capabilities early on. You really don't want to see companies that start manufacturing their product one way, then at phase two switch to another way, and then have to scale it later. Ideally, you would look for companies that have capabilities that allow them not to necessarily rely on CROs for plasmid manufacturing or QC. We learned that over the last five years; it's one of the reasons we've brought so many of these capabilities in house. But I do think that's very important, in addition to, obviously, do the targets work and is this an appropriate disease for gene therapy; the nuts and bolts of being able to produce, and to show the agencies that you've produced, the right thing are really important. Zandy Forbes is the president and CEO of MeiraGTx. As I mentioned, there's a lot going on; you guys also got some positive news over the weekend and a lot of presentations after being in stealth mode for quite some time. So, looking forward to seeing how things develop here, and good luck going forward. Thanks a lot for coming on today. Thank you so much. All right. Hey, we've got to keep the train running on time here; we've got so many guests today, back to back to back to back. Let's pivot, if we can. We just spent the last half hour or so talking biotech; let's pivot to supply chain, specifically the supply chain of textiles, right, fashion. What exactly is going on there? We're about to find out. We're going to bring on our next guest here in just a second: Ronen Samuel. He is the CEO of Kornit Digital, so let's bring Kornit and Ronen on the show now, if we can, guys. I guess I'm doing that. All right, I got you. There you go. Good morning for us, this afternoon for you, or later on in the evening, so I appreciate you coming on the show here today. So let's talk about textile supply chains, right? What exactly is going on there right now? Are things as bad there as they are in other areas? Well, yeah, it's bad, and it has to change. The supply chain is broken, but even more than that, the textile industry, and the fashion industry in particular, is the second most polluting industry in the world, for different reasons.
One of the reasons is that 30% of whatever is produced in textile is actually never sold, and this creates a huge amount of waste, both of materials and water, and we have to save the world. So we have to change the industry. Now, part of the reason is that the supply chain of today doesn't meet or doesn't fit the needs of the consumer of today. The supply chain of the textile industry is like centuries ago: you produce in large quantities offshore, in China, in Bangladesh, and you try to forecast what people would like to buy a year in advance, sometimes 18 months in advance, which is impossible. It's crazy to think that you can predict what the consumer today would like to wear a year and a half from now. So we have to change it. The world moved to digital in many, many industries, and this industry is still fully analog. Yeah, I'm glad you brought up the environmental impact of the textile industry, because that's something that's gained a lot of attention over the past year or so. I mean, you have the quote unquote fast fashion companies, such as Shein, and people have started attacking the idea that buying a cheap t-shirt for $15 or some pants for $15 that you see in an ad on Instagram causes a lot of environmental distress. So what do you think needs to be done in the industry to address that? So what needs to change, instead of trying to predict what the consumer would like to buy and producing large amounts of products which will never be sold, is to produce on demand, to produce after the consumer orders. But that's not efficient. I mean, you're saying it is, but that's not necessarily an efficient use of capital, right? No, no, it's actually very, very efficient. Hold on, explain that to me. Yeah, let's begin with, first of all, what we see: production is really moving onshore. Why is it moving onshore? Not only because the supply chain is broken, but because you have to be closer to the consumer; you have to react fast to consumer trends. Now, the world of fashion and textile is moving online. Today 30% of all purchases are being done online, e-commerce, and the forecast is that by 2025 it will be more than 60%. Now, the e-commerce of today is still trying to sell you what they have in the inventory or what they have in the shops, in the stores, and if you go to order products, sometimes they don't exist at all, and sometimes for the brands it's really, really difficult to... Okay, can you hear me okay? Yeah, we're fine, we just got disconnected, but we're back. Okay, so I missed everything you just said. Okay, so let me try to explain again. So the world is moving digital. What does it mean? The consumer today is buying through e-commerce, online. 30% of all sales are being done online and the forecast is that by 2025 it will be 60%, but the online of today is actually a mirror of the store. It doesn't allow you to choose your product; they're trying to sell you what they have in inventory, which doesn't fit what the consumer would like. What we believe needs to be done is that the online should be virtual. You shouldn't need any real physical products; you can have an endless amount of products virtually and connect the virtual world to the physical world. And this is exactly what Kornit is doing: enabling on-demand production. You order what you want; only then do you produce it.
You produce it close to the consumer, onshore, and deliver the product the same day or the next day to the consumer. I feel like... I mean, it sounds great, but that's more difficult, right? Well, I'll give you an example, a great example. Think about the book market. Twenty-five years ago it was fully analog: you went to a bookshop, you tried to buy a book, and you only had on the shelf the books that were selling in millions of copies. Amazon disrupted this market. They created a digital world; you could go to Amazon and buy any type of book, even from 200 years ago. But what they created was actually much more, the impact was much bigger than that, because now everybody can become a writer. You can write a book about your family, about cooking, about anything you like, and you publish it virtually; it doesn't cost you anything. Only when someone orders it do you print it and send it to the consumer. So the same thing is happening now in the fashion world. You don't need to have it physically. You are actually unleashing creativity, because you can have endless creativity and each consumer can choose whatever they want, in any color, any design, and once you choose it, only then do you produce. So this is efficient and there is no waste. And you produce it using Kornit technology, which is a fully sustainable, green technology. That was my next question, just to clarify that. So a company like Amazon, for example, could use your technology, right? Or any retailer could just buy your technology and use it along their supply chain to make it more green, more efficient, right? Actually, Amazon is our biggest customer. Okay. Amazon is our biggest customer, but we have many, many more customers; we have more than 1,300 customers using our technology all over the world, some of them very big companies like Amazon, like Adidas, like Fanatics. And you can go online, order products, customize the product, order your t-shirt, like the one you can see here with Kornit on top of it, in any color, any size, any shape. And this is the new world. Look at what the world is moving into: the metaverse. In the metaverse everything is virtual. You will have your image in the metaverse, and you will be able to dress it as you wish with any type of goods, and only when you feel that you like it, then you order it, and then it connects to the physical world, where it will be produced next to you. If you are in New York, it will be produced in New York. If you are in Beijing, it will be produced in Beijing and shipped to you the same day. So the impact on the environment in terms of sustainability is huge, the efficiencies are unbelievable, and it unleashes creativity for the designers and for the brands. Why is no one else doing this? Or are they? Well, there are some companies, our customers, that are using our technology, like Amazon. If you go to Amazon, they're doing it. Right, that's what I meant. Well, we are kind of unique, first of all, in terms of the physical world. Our technology, what we have developed, is systems, inks and services that are all sustainable, digital systems that can produce one-offs.
If you wanted to produce a t-shirt or any garment or any fabric in the past using analog technology, you had to produce hundreds of meters in order for it to be economical, and the sustainability impact is huge: there's a lot of pollution and consumption of water for every meter. With our technology, because it's digital, you are not limited. You can print one t-shirt, you can print one meter, one piece; there's no limitation, and every piece can be a different design. So this is the advantage of digital: it removes the limitations that you had before. And we are not using water, so there's no water consumption; it's pigment ink, so it's fully green, no impact on the environment. So who could use Kornit? Amazon, obviously, but they're the largest retailer in the world. Could I use it on my online Shopify store that sells, I dunno, 10 shirts a year? Exactly the point. It's a platform for the biggest retailers, the biggest e-commerce players, the biggest brands like Adidas and Nike, but of course also for anyone, any consumer that would like to open a shop on Shopify. Now, what is the problem with Shopify? Say you are sitting somewhere in India right now and you would like to sell your product. You open a shop on Shopify in five minutes and you put up your design. What is the problem? Once you get the order, what are you going to do with it? How are you going to produce it? How are you going to ship it? How can you compete against the other marketplaces? What KornitX enables is to connect all those marketplaces, all those designers, to a network of fulfillers that can fulfill for them. So you need to take care only of the design and open a shop, and then you connect to KornitX, which is a platform that connects them to a network of fulfillers using our technology that can produce it anywhere around the world. It sounds good, and the market clearly likes it, because if you look at your stock, it's had a pretty tremendous run, actually, even last year; COVID didn't seem to hold it down too long. So the market agrees with you, so I guess keep doing what you're doing. Ronen Samuel is the CEO of Kornit Digital. We will have to get you back on the show, hopefully maybe next year when the supply chain starts to work itself out a little bit, but I'm very curious about this space because you're one of the best performing stocks out there right now, I think. So, Ronen, thank you so much for coming on the show today. Thank you very much, pleasure being here. All right. It is 12:59. We've got our next guest coming on in a couple of minutes, whenever they join, to be honest, because they're not even here yet, but that's okay. Scott Mathis is the CEO and chairman of Gaucho Holdings, ticker VINO. And then I'm very excited for our 1:30 guest from Benzinga, and really Benzinga is his side gig; his main gig is doing really complex trading strategies. So I'm very much looking forward to that in half an hour.
And I have not begged for likes yet this hour as we enter hour two of our show, so if you could be so kind and hit that thumbs up button on your screen. I'm not sure where we're at on the like counter right now. Let's look... we're at, come on, computer... 74, 75. We can do better than that; we can do over a hundred, easy. The goal for the day is 200, but we can get to a hundred right now, I suspect. Yeah. So before we get to Scott Mathis with Gaucho Holdings, the previous guest, I liked that idea, because basically what he's saying is that companies now are producing clothes for a year down the line, right? But they don't know what's going to be hot on Instagram and TikTok and what the trends are going to be. It's not efficient. So what he's saying they're doing is waiting, and basically it's print on demand but on a huge scale, like what Shelly was talking about in the chat with the economies of scale; they're able to produce the goods for cheaper when they're doing it on a large scale. If their technology is able to shift that whole industry, it would have a tremendous impact on the negative environmental impact that the textile industry has right now. Another fun fact. Spencer, did you know this, that a lot of luxury brands, such as Gucci and Louis Vuitton... do you know what they do with their extra goods? No. I don't want to say something that's politically incorrect, but no, I have no idea. What would they... I don't know. Do they give it to animals? I don't
Enterprises are working to simplify the process of deploying and managing systems to support AI applications. That's what NVIDIA's DGX architecture is designed to do, and what we'll talk about on this episode. Frederic Van Haren and Stephen Foskett are joined by Tony Paikeday, Senior Director, AI Systems at NVIDIA, to discuss the tools needed to operationalize AI at scale. Although many NVIDIA DGX systems have been purchased by data scientists or directly by lines of business, it is also a solution that CIOs have embraced. The system includes NVIDIA GPUs, of course, but also CPU, storage, and connectivity, and all of this is held together with software that makes it easy to use as a unified solution. AI is a unique enterprise workload in that it requires high storage IOPS and low storage and network latency. Another issue is balancing these needs to scale performance in a linear manner as more GPUs are used, which is why NVIDIA relies on NVLink and NVSwitch as well as DPUs and InfiniBand to connect the largest systems. Three Questions: How big can ML models get? Will today's hundred-billion parameter model look small tomorrow or have we reached the limit? Will we ever see a Hollywood-style "artificial mind" like Mr. Data or other characters? Can you give an example where an AI algorithm went terribly wrong and gave a result that clearly wasn't correct? *Question asked by Mike O'Malley of SenecaGlobal. Guests and Hosts: Tony Paikeday, Senior Director, AI Systems at NVIDIA. Connect with Tony on LinkedIn or on Twitter at @TonyPaikeday. Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren. Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett. Date: 9/21/2021 Tags: @TonyPaikeday, @nvidia, @SFoskett, @FredericVHaren
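To make the interconnect point above a little more concrete, here is a minimal PyTorch sketch (not from the episode, and not NVIDIA code) that enumerates the GPUs in a box and checks which pairs can reach each other directly over peer-to-peer links. On a DGX-class system with NVLink/NVSwitch you would generally expect every pair to report peer access; the script is only an illustration of how one might inspect that.

```python
# Minimal sketch: list visible GPUs and check pairwise peer-to-peer access,
# the capability that NVLink/NVSwitch fabrics provide on DGX-class systems.
# Assumes a machine with CUDA-enabled PyTorch; on a CPU-only host it just reports that.
import torch

def survey_gpus() -> None:
    if not torch.cuda.is_available():
        print("No CUDA devices visible.")
        return
    n = torch.cuda.device_count()
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")
    # Peer access between pairs is what lets activations and gradients move
    # GPU-to-GPU without bouncing through host memory.
    for i in range(n):
        peers = [j for j in range(n)
                 if j != i and torch.cuda.can_device_access_peer(i, j)]
        print(f"GPU {i} has direct peer access to: {peers}")

if __name__ == "__main__":
    survey_gpus()
```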
Johann Sebastian Bach believed music is learnable and teachable. However, an author for Encyclopedia Britannica refuted this notion. Northeastern Bible College, formerly located in the small community of Essex Fells, New Jersey, is where I was trained in theory, advanced conducting and classical piano. Click on the YouTube video titled Warming up with Cat Paws, where I attempt to play harmony to a pre-recorded demo on my Yamaha DGX-640 portable electronic grand piano. Thanks for listening.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. Most folks know Bryan as one of the founders/creators of cuDNN, the accelerated library for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent overlap with AI, his current work on Megatron, a framework for training giant language models, and the basic approach for distributing a large language model on DGX infrastructure. We also discuss the three different kinds of parallelism, tensor parallelism, pipeline parallelism, and data parallelism, that Megatron provides when training models, as well as his work on the Deep Learning Super Sampling project and the role it's playing in the present and future of game development via ray tracing. The complete show notes for this episode can be found at twimlai.com/go/507.
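The three flavors of parallelism Bryan describes can be hard to picture from audio alone. Below is a small NumPy sketch, written for this page rather than taken from Megatron itself, showing the idea behind tensor parallelism: a single linear layer's weight matrix is split column-wise across two simulated devices, each computes its shard, and gathering the shards reproduces the unsharded result. Pipeline parallelism would instead place consecutive layers on different devices, and data parallelism would replicate the whole model and split the batch.

```python
# Illustrative only: column-parallel linear layer, the core trick behind
# Megatron-style tensor parallelism (simulated on CPU with NumPy).
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 4, 8, 6

x = rng.standard_normal((batch, d_in))
w = rng.standard_normal((d_in, d_out))   # full weight matrix of one linear layer

# Reference: the unsharded layer.
y_full = x @ w

# "Two devices": each holds half of the output columns of W.
w_shard0, w_shard1 = np.split(w, 2, axis=1)
y_shard0 = x @ w_shard0                  # computed on device 0
y_shard1 = x @ w_shard1                  # computed on device 1

# An all-gather across devices stitches the activations back together.
y_parallel = np.concatenate([y_shard0, y_shard1], axis=1)

assert np.allclose(y_full, y_parallel)
print("sharded output matches the unsharded layer:", y_parallel.shape)
```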
We discuss the announcements from last week's GPU Technology Conference, along with AI hardware news from elsewhere, and look at Microsoft's proposed acquisition of language AI specialist Nuance for $16 billion – actually closer to $20bn once you account for debt. We start with our regular segment chronicling the revolution in hardware for AI – it's Chip Wars! When Joe Biden is waving around silicon wafers, you know interesting things are going to happen. Nvidia is busy building its first ever CPU to support machine learning workloads, codenamed Grace – after the absolute legend that was Rear Admiral Grace Hopper. Plus, Nvidia's DGX family of ‘building blocks' for supercomputers will now feature Data Processing Units (DPUs) by default, running a wide variety of networking tasks. DPUs weren't originally developed at Nvidia – these came out of the Mellanox acquisition, which equipped the company with clever networking silicon. Intel's Habana – which has designed its own family of chips for AI – has landed a contract with the San Diego Supercomputer Center, and will build a supercomputer called Voyager. But is it the kind of customer that Intel needs at this point? Meanwhile, SambaNova – which has designed its own family of chips for AI – has announced a massive $676 million funding round, just a few months after emerging from stealth. Its CEO Rodrigo Liang appeared on this podcast just a few short weeks ago – making participation a sure indicator of future success. Next, we talk about Nuance, the AI company built from countless acquisitions, now being acquired by Microsoft. The speech recognition and language specialist has helped shape the emerging virtual assistant market – can it give Cortana a shot in the arm? We also cover: Shopping for praying mantises! The importance of haircuts! Chaos at Arm China! And there's even a rendition of the national anthem of the USSR. As always, you can find the people responsible for the circus podcast online: Max Smolaks (@maxsmolax) Sebastian Moss (@SebMoss) Tien Fu (@tienchifu) Ben Wodecki (@benwodecki)
Youtube : https://www.youtube.com/user/klafmann/channels Instagram : https://www.instagram.com/klafmann/ Twitter : https://twitter.com/Klafmann Facebook : https://www.facebook.com/Klafmann/ Email : klafmannhk@gmail.com Website : http://klafmann.com Patreon : http://www.patreon.com/Klafmann SoundOn/Apple Podcast : Klafmann
Matthew and Rizzle chat with Bullionix.io creator Jesse Johnson and dig into the gold-backed digital NFT minting project. They talk about a future with $DGX gold-backed digital wearables (!), the myriad partnerships Bullionix has been striking with artists and projects like Axie Infinity and the Moonshot World Cup in Decentraland, as well as their plans for a gold-backed digital future. Follow Jesse on Twitter @gldnXross
An interview with Shaun Djie, Co-Founder and COO of Digix — the blockchain company behind the world’s first gold-backed digital asset class. Tune in as we talk with Shaun about the pros and cons of putting gold on the blockchain, Digix's recently relaunched marketplace, and the DGX token and its features. To find out more about Digix you can visit their website — https://digix.global/ — or follow them on Twitter @DigixGlobal If you enjoyed this episode, make sure you visit our website — https://coinpm.news — or follow us on Twitter — @coinpm — to keep up to date with our latest content.
In this episode, Tony Paikeday, Director of AI Systems at NVIDIA, defines enterprise AI, its use cases and benefits. He then explains the expertise, resources and infrastructure needed for successful enterprise AI implementation. He describes NVIDIA's DGX solution and partnership with Dell which offers a complete converged solution for Enterprise AI. Tony shares how both cloud and on prem have a place in one's enterprise AI journey. Tony concludes with a customer success story, where to find more information and final thoughts.
The guys mourn the NBA dunk contest results, Aaron Gordon robbed of his rightful title, Marc Maron talks crap about us, DGX (a fancy new dollar store) opens downtown, and a famous YouTuber trespasses at Disney and Universal after hours. Plus we intro a new segment, 0-60 (some quick hits and our reactions), and respond to a great idea from some listener feedback - a Florida Man Theme Park!
Our 159th episode was recorded at Ace Cafe Orlando at the kick-off opening event for their new Ace Backyard space - a large green area for outdoor concerts and events. In this week's episode, we chatted about Beer Bus Loops, the new DGX that's about to open, and how it's not cool to be hating on Foxtail Coffee for expanding their biz. Tune in to Bungalower and the Bus every week on 104.1 Real Radio or our podcast to learn all about the top headlines, new restaurants, and best-bet events to attend this week.
Addison Snell and Tiffany Trader discuss NVIDIA's and Google's performance on the MLPerf benchmark, plus news about the DGX
Digix started in 2014 with the idea of tokenizing gold on the blockchain. Anthony Eufemio, Co-Founder and CTO of Digix, shares how the company created a way to tokenize gold and why gold is the perfect asset class for their protocol. Anthony adds that they are working to make this global by building an ecosystem and partnering with many point-of-sale companies that are creating crypto-based systems so retailers can accept DGX tokens. Anthony then dives deeper into how this platform is beneficial to us. Love the show? Subscribe, rate, review, and share! Here's How » Join the New Trust Economy Community today: newtrusteconomy.com New Trust Economy Facebook New Trust Economy YouTube Tracy Hazzard LinkedIn Monika Proffitt LinkedIn
SBTV's latest guest is Kai C. Chng, CEO of Digix Global, a company that is tokenizing real-world physical assets like gold on the blockchain. KC shares how gold-backed DGX tokens allow gold to be even more portable and usable for payments today.
Today we talk with Shaun Djie from Digix. Using blockchain technology, they represent physical gold with DGX tokens, where 1 DGX represents 1 gram of gold on Ethereum. The transparency, security, and traceability of the blockchain ensure that DGX tokens can be transacted and transferred with full visibility and auditability. Not only does the immutable ledger heighten security, the smart contract platform eliminates possible human error and the risk of fraud that would otherwise be present in the supply chain of gold. They democratise access to gold. Enjoy! Link: https://digix.global/
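Since DGX is a standard ERC-20 token on Ethereum, its on-chain behavior can be inspected like any other token. The sketch below uses web3.py (v6 naming) to read a token balance; the RPC endpoint, contract address, and wallet address are placeholders you would substitute yourself, not values confirmed by the episode or by Digix.

```python
# Hedged sketch: read an ERC-20 balance (e.g. DGX) with web3.py.
# All addresses and the endpoint below are placeholders, not official values.
from web3 import Web3

RPC_URL = "https://example-ethereum-node.invalid"                    # placeholder endpoint
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"        # placeholder DGX contract
HOLDER = "0x0000000000000000000000000000000000000001"               # placeholder wallet

# Minimal ERC-20 ABI: only the two read calls this sketch needs.
ERC20_ABI = [
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
token = w3.eth.contract(address=Web3.to_checksum_address(TOKEN_ADDRESS), abi=ERC20_ABI)

raw = token.functions.balanceOf(Web3.to_checksum_address(HOLDER)).call()
decimals = token.functions.decimals().call()
# 1 DGX is meant to track 1 gram of gold, so the scaled balance reads as grams.
print(f"balance: {raw / 10**decimals} DGX")
```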
Shaun Djie, co-founder and COO of Digix joins the podcast to discuss DGX and DGD. We talk about the history of Digix and how the platform enables people to buy fully audited, tokenized gold. Then we transition into talking about DGD which is the token that enables the governance side of Digix. They’ve recently launched their governance portal for DGD holders which allows users to participate in proposals and voting. We discuss some of the struggles faced so far and the path forward for the team and product. Support our show! Get on the email list at ethhub.substack.com
Machine learning and deep learning are the tools organizations have been using to bring artificial intelligence into their IT practices. They provide valuable solutions but also require best practices to deploy well. In this audio version of the latest episode of World Wide Technology’s Between the Racks video series, WWT’s Tim Brooks and Matt DuBell, along with Tony Paikeday from NVIDIA, discuss how they work with customers to build solutions with these tools, how customers use NVIDIA’s DGX systems to deploy them, and several use cases where this technology has delivered valuable benefits to organizations. Tags: Artificial Intelligence, AI Research and Development, Deep Learning with NVIDIA DGX-1
This week on the podcast, NetApp Senior Technical Director Santosh Rao joins us to talk about how NetApp and NVidia are partnering to enhance AI solutions with the DGX-1, ONTAP and FlexGroup volumes using NFS!
https://Alphainvestors.Club Hey guys! Thanks for joining us here at https://Alphainvestors.Club where today we will be reviewing the TokenCard price prediction. The TokenCard coin is connected to a smart debit card called TokenCard, an Ethereum token debit card. Transactions are enabled for ERC20 tokens, which allows payments around the world in any ERC20 token: you can pay in ETH, DGX or any other ERC20 token, and you can also spend multiple tokens. The platform consists of three elements: the Token Contract Wallet, the TokenCard, and the Token App. The aim of TokenCard is to encourage and spread the acceptance of cryptocurrencies. Price forecasts: Many people are looking to invest in digital assets (including TokenCard) owing to the large returns they can provide, and even institutional investors are trying to invest in such assets. This is positive for TokenCard as a platform: as the platform gains traction, the price of the cryptocurrency could also go up. With the push toward the digitization of assets and growing awareness of cryptocurrency assets, TokenCard has a lot of potential ahead; as long as the organisation is able to execute its strategy, the price of the token could keep increasing, which would attract more investors to this cryptocurrency... tune in for our full review! Be sure to join our Alpha Investors Email list!! https://Alphainvestors.Club
Digix tokenizes physical gold bars on the Ethereum blockchain. We caught up with Shaun Djie, Co-Founder and COO of Digix Global, on location at The Safe House vault to find out how the DigixDAO token (DGD) works and Digix's plan for DGX to be a stablecoin.
Nvidia GTC 2018 Quadro GV100 - Anandtech V100 32GB - Anandtech NVSwitch and DGX-2 - Anandtech More Intel 8th Gen Core CPUs Anandtech overview Mobile Core i9-8950HK Mobile Core i7-8559U Dell with 8th Gen Core CPUs - Anandtech ARM PC followup Gigabit ARM Workstation - Bit Tech Cloudflare testing ARM in Servers - Matthew Prince on Twitter PCIe SD cards RED proprietary SSDs for cinema cameras Google Knusperli Generating images from human brain activity Guillaume Dumas Tweet Paper Newsletters we recommend Wild Week in AI Jupyter Last Week in AWS Issue 52 Import AI Detecting jaywalking with facial recognition Stratechery Stratechery 4.0 Benedict Evans Microsoft re-org Stackshare Instacart Article Aftershow VR game reviews Elite Dangerous - UpIsNotJump Fallout 4 Negatives - UpIsNotJump Fallout 4 Positives - UpIsNotJump More Computer Vision Trolling: Request for sheep - Janelle Shane on Twitter Do Neural Nets Dream of Electric Sheep Diagnosis Pain Levels in Sheep Amazon Transcribe Episode 27 transcript sample
Patrick Moorhead and Ryan Shrout are live at NVIDIA GTC 2018 and talk through the numerous announcements made. This includes the Quadro GV100 and the $399,000 DGX-2 system, built around 16 Volta GPUs. NVIDIA CEO Jensen Huang also talked through improvements in training for robotics and self-driving cars with DRIVE Constellation, plus deep learning and AI performance increases, but nothing in the cards for gamers quite yet.
IBMIT IBM is testing a 5 nm process. IBM Z, the paranoid mainframe. Progress on nanotubes. But the road is long. And winding. And poorly signposted. The first real use of a quantum computer? Simulating a quantum system! MIT combines SoC and RAM on a 3D chip. IBM and MIT cozy up: a $240M, 10-year AI research plan. NVID’IA Isaac Initiative: the matrix for robots! And what about us? Are we living in a simulation, Elon? What should we make of Bostrom's argument? NVIDIA is studying the possibility of modular multi-chip GPUs. Deep learning for ray tracing, animation and antialiasing! Love triangle: Baidu may be dumping NVIDIA in favor of AMD. Want a DGX supercomputer on Volta? It's possible. And cheap. Well... By the way..? Android Oreo: the biggest update ever deployed. Really. DolphinAttack is unnerving: how do you talk to Siri silently? Is Google trying to claim the patent on the best entropy coder? By the way, what is an entropy coder? And what is entropy anyway? And what is it actually good for?
Dr. Julian Hosp - Enter the Blockchain Revolution Show Notes: Dr. Julian Hosp, Julian on YouTube, Julian's Podcast, Facebook Group, Coinmarketcap, Le Morne (Mauritius), TenX.tech, Dgx.io, BTC-Echo, Mastering Bitcoin - Andreas Antonopoulos, Warren Buffett biography - Alice Schroeder, Buffettology - Mary Buffett. Quick Links: If you enjoyed it, please leave me a short review on iTunes and subscribe to the show! Check out my new 66 Life Lessons poster collection. My first book "Dein nächstes großes Ding" is out, go get it here! My 66 Day Journal - reach your next goal in 66 days! SUPERNOTES - the official book for the podcast with all the top take-aways from over 200 episodes - get it here! It would be a dream if you could write a short Amazon review for my book; it doesn't even have to be a good one :) many thanks in advance! Direct link to the review. FREE e-book "Mit Freunden macht man keine Geschäfte" - get it here for free! Get my "Weekly Update” + 5 tools + 11 hacks + the first 35 pages of my book FOR FREE - just sign up here! Sponsor: 7Mind incl. 30% discount. Thanks, stay inspired, your fan, matthew :)
...Statler and Waldorf talk with Fozzie ...What's the "OpsOps" of DevOps?. ...Never say you're going to spend $1bn on anything What exactly is DevOps? We dare to discuss that at first and then get into Amazon's new managed hosting offering. There's some new container news with containerd from DockerInc land, and some little notes on Azure's features and Cisco's InterCloud shutting down. Also, we find out which Muppet each of us would be played by in The Muppets Take Over Software Defined Talk. Mid-roll Coté: Come see me January 10th in Phoenix (https://www.meetup.com/Arizona-Cloud-Foundry-Meetup/events/236191762/), 5:30pm at the Galvanize Office. Free parking! Coté: check out my interview with Tony at Home Depot about their first year being cloud native, on Pivotal Cloud Foundry (https://blog.pivotal.io/pivotal-conversations/features/045-cloud-native-at-home-depot-with-tony-mcculley). They went from 0 to ~150 apps in their first year. Like, real, business critical apps that you probably end up interacting with (pro tools, paint), plus internal facing apps. Feedback & Follow-up The Doc Martin shoes: Hickmire (http://amzn.to/2hlPnIJ). Thanks to Chris Short (https://twitter.com/ChrisShort/status/808339167604338688). The DevOps App dev vs. IT service delivery. DevOps Kung Fu (https://www.youtube.com/watch?v=_DEToXsgrPc), Adam Jacob's talk on the inclusion of everyone in the org chart in DevOps What is DevOps without Dev? Is there OpsOps? AWS Managed Services Amazon will manage your shit now, with real live peoples (https://aws.amazon.com/blogs/aws/aws-managed-services-infrastructure-operations-management-for-the-enterprise/) "This is actually a thing. It's called managed cloud." (http://venturebeat.com/2016/12/12/amazon-launches-aws-managed-services-to-help-more-big-companies-adopt-cloud/) "This is actually a thing. It's called managed cloud." - this is a good example of the more subtle way of "paying off analysts." (https://twitter.com/cote/status/809205833586409472) More like: changing their minds. "Designed for the Fortune 1000 and the Global 2000, this service is designed to accelerate cloud adoption" AKA "We're eating our partners" AKA "RACKSPACE: YOU'RE UP!" Coté: Is this like a service desk and a runbook for spinning up AWS stuff? Plus actual AMZN staff to "manage" the infrastructure like patching and such right? Coté: I was just talking with someone yesterday who's mission was "optimize how we do IT without me telling you what I want to do with IT." That is: lower costs and give us the ability to do whatever we may want in the future in under a year's planning/effort. Bezos doesn't like meetings without a memo http://static2.businessinsider.com/image/5851aebfca7f0c24018b5b6f-2400/ap16349721408436.jpg Don't Sleep on Microsoft Damn, that's a monstrous URL (https://pages.email.microsoftemail.com/page.aspx?qs=773ed3059447707dab3a47fc5c2937dcbf750d2a6d7e8feab247991209f258cd86e8606f2837501c341831b6f3896ebcb5673dff86feb6303e458a94181db250c28f58237fd3b737cd39c6339094ff6800649c38da065423db508d0369c1992e) GPUs, HANA, Media Services, Machine Deep Learning, Data Lake, Single-instance virtual machines Coté: I hear data is a thing. And AI. 
Cisco Shutting Down Their InterCloud Coté's audition for an ElReg headline writer: Cloud InterRUPPTED $1 Billion isn't enough (http://www.businessinsider.com.au/amazon-claims-another-victim-cisco-kills-its-1-billion-cloud-2016-12), "score another body bag win for the unstoppable Amazon Web Services" "Meanwhile, the cloud providers like Amazon, Microsoft, and Google aren't using a lot of Cisco gear. They are increasingly using a new style to build networks that relies more on software and less on high-end, expensive hardware." Sharwood@ElReg (http://www.theregister.co.uk/2016/12/13/cisco_to_kill_its_intercloud_public_cloud_on_march_31st_2017/): "OpenStack public clouds have an unhappy history: Rackspace felt it could build a business on the platform, but has since changed tack. HP pulled out of its own Helion public cloud. If Cisco is indeed changing direction, the OpenStack Board has some interesting matters to ponder." Theory: AWS means on-premise IT is over-serving. You actually don't need all that. Incumbent vendors succumbed to the strategy aphasia of the disruptor's' dilemma (weren't willing to sacrifice/take eye off the ball of existing success and revenue) and lost to Amazon's lower capabilities, lower price approach. WHEN WILL TECH PEOPLE LURN? There was this talk several years ago that was all like: "well, obviously, we shouldn't compete strategy-to-strategy with Amazon. We should provide the enterprise version!" Apparently, that was dead wrong. People confused Apple's ability to sell at an insane premium with the market not caring about x86 &co. Docker Contributes Containerd Docker-engine standardized container runtime for the industry (https://blog.docker.com/2016/12/introducing-containerd/) Engine vs. Machine (https://docs.docker.com/machine/overview/#/whats-the-difference-between-docker-engine-and-docker-machine) Check out this TheNewStack story for a new strategy slide (http://thenewstack.io/docker-spins-containerd-independent-open-source-project/): Containers in Production! Round-up of some container survey poking (http://redmonk.com/fryan/2016/12/01/containers-in-production-is-security-a-barrier-a-dataset-from-anchore/) n=338 respondents Sidenote: Jenkins win. Good job biffing that one Oracle. But then again: is there any money in it? "This leads us to a very difficult operational problem – how do we ensure security, and understand the makeup of an application while still allowing developer velocity to increase." More Docker usage numbers from DataDog (https://www.datadoghq.com/blog/3-clear-trends-in-ecs-adoption/)! "ECS adoption has climbed steadily from zero to 15 percent of Docker organizations using Datadog. (And more than 10 percent of all Datadog customers are now using Docker.)" How do I read this? Does it mean adoption is fast after an initial tire-kicking? "In the 30 days after an organization starts reporting ECS metrics, we see a 35 percent increase in the number of running containers as compared to the 60-day baseline that came before. Using the same parameters, we see a 27 percent increase in the number of running Docker hosts." CoreOS Tectonic Goes Freemium Erryone's favorite business model (https://coreos.com/blog/tectonic-self-driving.html) Kubernetes 1.5 coming soon Shipping upstream version3 Renamed their distro to Container Linux They have attempted to coin the phrase "self-driving Kubernetes" -- God help us. BONUS LINKS! Not discussed on show. More AWS Followup Missed a talk (https://gist.github.com/stevenringo/5f0f9cc7b329dbaa76f495a6af8241e9)? 
Open sourced a Deep Learning library (https://github.com/amznlabs/amazon-dsstne/blob/master/FAQ.md): AWS is still really new to contributing to OSS, Cockcroft has been pushing them. Also see the Blox.github.io stuff we didn't talk about last show AWS OpsWorks for Chef Automate Q&A (https://blog.chef.io/2016/12/08/rule-the-cloud-with-chef-automate-and-aws/) AWS Canada & London! Strange Brew Region (https://aws.amazon.com/blogs/publicsector/canada-central-region-now-open/) hello hello hello what's all this then region (https://aws.amazon.com/blogs/aws/now-open-aws-london-region/) "brings our global footprint to 16 Regions and 40 Availability Zones, with seven more Availability Zones and three more Regions coming online through the next year" Docker Acquires Distributed Storage Startup Inifinit "the Infinit platform provides interfaces for block, object and file storage: NFS, SMB, AWS S3, OpenStack Swift, iSCSI, FUSE etc." (https://blog.docker.com/2016/12/docker-acquires-infinit/) To be open-sourced Extends the stateful application story CA Buys Automic for $635 million "CA fights legacy status with DevOps automation tools buy" (http://searchitoperations.techtarget.com/news/450404297/CA-fights-legacy-status-with-DevOps-automation-tools-buy) - that's not a good headline for your Christmas cards. $635 million, Crunchbase says they were founded in 1985(?) Hey look, it's my man Carl Lehmann at 451! New CEO at BMC Beauchamp goes to board, Polycom dude steps in a CEO (http://www.forbes.com/sites/maribellopez/2016/12/12/bmc-adds-peter-leav-as-ceo-prepares-for-new-growth-chapter/#1b2662bc4651) "Beauchamp said many of BMC's products are achieving double digit growth and double-digit profitability." Red Hat OpenShift on GCE and JBoss on OpenShift In case you need more management on your GCE (http://www.cio.com/article/3148671/cloud-computing/red-hat-brings-openshift-to-google-cloud-platform.html)? AWS is already there, probably Azure soon. I wonder if there's a deficiency in Google's offering that it's more of a consumed resource than a platform a la AWS? Plenty of management in AWS already? JBoss on it (http://www.zdnet.com/article/red-hat-brings-full-jboss-software-stack-to-openshift/) Dell Q3 "Dell Technologies Posts $2B Loss, But EMC Deal Already Boosting Revenue" (http://austininno.streetwise.co/2016/12/08/dell-technologies-q3-earnings-report-revenues-and-losses/) Stonic, (not) An Ansible Fork? Stonic (https://blog.stonic.io/0000-it-is-not-a-fork-c0b03c33e408) will be licensed under AGPL-3.0 :facepalm: Coté: why is AGPL bad? Australian 2016 Word of the Year: "Democracy Sausage" (saved you a click) Democracy Sausage (http://www.abc.net.au/news/2016-12-14/democracy-sausage-snags-word-of-the-year/8117684) Google Makes So Much Money It Never Had to Worry About Financial Discipline - Until Now Candy, not CREAM (https://www.bloomberg.com/news/features/2016-12-08/google-makes-so-much-money-it-never-had-to-worry-about-financial-discipline) Brandon called this way back when. But what about Google Fiber in my neighborhood? Best shruggie use of th eyear (https://assets.bwbx.io/images/users/iqjWHBFdfxIU/i2yPkZEJdBec/v0/1000x-1.jpg) NVIDA $129k computer. "Fewer than 100 companies and organizations have bought DGX-1s since they started shipping in the fall, but early adopters say Nvidia's claims about the system seem to hold up." (https://www.technologyreview.com/s/603075/the-pint-sized-supercomputer-that-companies-are-scrambling-to-get/) Does it pass the Coté AI Test? 
I.e.: can it fix scheduling meetings across different organizations? Recommendations Brandon: Mobile eating the world (http://ben-evans.com/benedictevans/2016/12/8/mobile-is-eating-the-world). Matt: Jenn Schiffer's "No One Expects The Lady Code Troll" (https://www.youtube.com/watch?v=wewAC5X_CZ8) Coté: Senso bluetoother headphones (http://amzn.to/2hLC0lF). Trapper hats (http://amzn.to/2hAupDi) all winter long (https://www.instagram.com/p/BODJGTPjv2b/).
The event: GTC, the GPU Technology Conference (presented by NVIDIA). NVIDIA unveils its Tesla Pascal GP100, a big chip for HPC. NVIDIA unveils the DGX-1: one (big) box, 8 Teslas, 170 TFlops, $129,000 and superpowers. Don't take my word for it: “Huang said in a press release. "The DGX-1 is easy to deploy and was created for one purpose: to unlock the powers of superhuman capabilities and apply them to problems that were once unsolvable.” NVIDIA takes the opportunity to update its deep learning offering, cuDNN and CUDA 8. NVIDIA will power the autonomous cars of Roborace. NVIDIA offers real-time ray tracing in VR! NVIDIA will also offer multi-resolution shading with and without VR! NVIDIA helps Chuck Norris free North Korea from the communist yoke. NVIDIA brings world peace, cures cancer and cavities, eradicates mosquitoes and inspires Bono's finest song. News. Clearance sale: Foxconn buys Sharp. Are the Japanese throwing in the towel on semiconductors? Back to basics on the show: CPU architectural improvements over the years and what the future may hold. VISC: Soft Machines' Virtual Instruction Set, or how to make better use of the available transistors. Racing (very very very) small cars: the NanoCar Race. A word on nanotech? Nanoparticles, nanostructures and maybe, one day, nanomachines... It's a (too) vast subject... Speaking of nanotech, entry-level quantum dots for more colors. Thanks, Philips! And what exactly are quantum dots, anyway? Washing light whiter than white? Of course, since it is never white... Focus: how does Shazam work? Likewise, the spectral decomposition of light in astronomy: the emission spectra of atoms in stars, which reveal their composition. Generalizing to other signals: Fourier series, the Fourier transform. The example of sound: what is a musical note? Different instruments play the same note without making the same sound. The SingStar case (or U-Sing, or Lips, or whatever...), because karaoke matters. Uses in audio compression and filtering, and why old people think young people's music is garbage. In 2D too: use in JPEG compression, a word on wavelets, decomposing functions into various bases suited to each need. The spectrogram, a song's fingerprint, and the principle behind Shazam. Participants: Dr Guillaume alias Guillaume Poggiaspalla, Emmanuel Vendé (@EmmanuelVende). Presented by Guillaume Vendé (@GuillaumeVendé)
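To make the Shazam segment above concrete in text: the core trick is that a short clip of audio is turned into a spectrum (and, over time, a spectrogram) via the Fourier transform, and the positions of the strongest peaks form a compact fingerprint. Here is a small NumPy sketch of the first step, synthesizing a note and recovering its pitch from the FFT; it illustrates the idea discussed in the episode rather than Shazam's actual algorithm.

```python
# Sketch of the Fourier idea behind Shazam-style fingerprinting:
# a pure A4 (440 Hz) tone shows up as a sharp peak in the magnitude spectrum.
import numpy as np

sample_rate = 44_100                      # samples per second
duration = 1.0                            # seconds of audio
t = np.arange(int(sample_rate * duration)) / sample_rate

signal = np.sin(2 * np.pi * 440.0 * t)    # idealized A4; a real instrument adds harmonics

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

peak_hz = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak_hz:.1f} Hz")   # ~440.0 Hz

# A fingerprinting service would slice the signal into short windows, compute a
# spectrogram, keep only the strongest peaks per window, and hash their
# time/frequency positions for fast lookup.
```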
Option Block 332: How to Waste a Good Hedge Trading Block: Back to rally mode on the street. Vol retreats yet again. Names before the bell: Halliburton (HAL), Hasbro (HAS) - After: Netflix (NFLX) Odd Block: Put sellers trade in SBA Communications Corp. (SBAC), call buyers trade in Merrill Lynch/Market Vectors Semiconductors ETF (SMH), review from 3/27/14 - calls trade in Ares Capital Corp. (ARCC) (this was a loser), and a review from 3/24/14 - call buyers jump in Quest Diagnostics Inc. (DGX) (possible winner). Xpress Block: Apple and Facebook dominate trades, though, a lighter than expected day today. Strategy Block: This week Uncle Mike talks about expiration day risk - just in case some of you got burned last week. Around the Block: Earnings season shifts into full gear. On tap for this week: AAPL - 4/23, FB - 4/23, F - 4/25, GM - 4/24 before, MSFT - 4/24 after, UAL - 4/24 before & AMZN - 4/24 after.
Option Block 324: Behold the Power of the Stupid Trading Block: A mild day on the street. Controversy over SPX settlement on Friday. Mutual fund use of options: public holding and trends (presented at RMC). Minis one year later - boom or bust? Odd Block: Call buyers jump in Quest Diagnostics Inc. (DGX), big call roll in Coronado Biosciences Inc. (CNDO), puts trade in Spectrum Pharmaceuticals, Inc. (SPPI), and call sellers in Prologis Inc. (PLD). Xpress Block: Alex talks in depth about the SPX settlement controversy on Friday Strategy Block: Tosaw discusses the "Stupid." What is it and when do you use it? Around the Block: Watching and waiting to see what happens in the Ukraine. Parade of Fed people. What's happening with the super cheap treasury vol?
The Value Guys! discuss: $HLF, $DGX, $YHOO
The Music Speaks for Itself. For this gig, we had no bass player nor a drummer. Therefore, for some songs, a pre-recorded/arranged track of the drum and bass parts (and sometimes a quiet keyboard part such as e-piano or grand piano) was played from the Yamaha DGX-305. For other songs, a drum pattern from the Yamaha DGX-305 was used and K. Selby played bass and keys live (using the DGX-305 in Split mode). In a few cases, a drum pattern from the Yamaha DGX-305 was used and K. Selby played bass on the DGX-305 and keys using a Roland A-30 triggering various sound libraries hosted in Sonar 3.11 on the Notebook PC (see next paragraph). These permutations are noted for each song listed below. We are providing this information because we believe in fully informing our listeners of the specifics of our shows, as well as possibly inspiring others to use relatively basic, inexpensive and available equipment to play live. Try it! It's fun! K. Selby uses a variety of keyboard sound libraries for the main keyboard parts he plays. These are hosted in Sonar 3.11 on the Notebook PC and triggered via a Roland A-30 controller keyboard.