In Episode 39, we sit down with Pooja Kokatnur, Engineering Lead at BNZ, who shares a deeply personal and inspiring journey through the tech industry. From growing up in a multicultural household in Dubai to carving out a meaningful career in New Zealand, Pooja reflects on the challenges of self-doubt, imposter syndrome, and the subtle biases women of colour continue to face in tech. She talks about the power of mentorship, what it means to lead with authenticity, and why staying true to your personality is a strength, not a liability. Pooja also opens up about key turning points in her career, including moments where speaking up made all the difference and where being overlooked led her to become a better advocate for others. This episode is a must-listen for anyone navigating their early career, stepping into leadership, or simply trying to find their voice in a fast-moving industry.
#WomenInTech #TechLeadership #AuthenticLeadership #SelfBelief #MentorshipMatters #WomenOfColourInTech #DiversityInTech #InclusionInTech #EmpathyInLeadership #TechBeyondGender #CareerGrowth #SpeakUp #BiasInTech #LeadWithKindness #RepresentationMatters
https://www.genspark.ai/ https://genaimeetup.com/ Follow the podcast: https://podcast.genaimeetup.com/
Join hosts Shishank and Mark as they dive deep into the world of generative AI agents with Lenzoy Lin, Engineering Lead at Genspark. Discover how this rapidly growing startup is revolutionizing productivity through its suite of AI agents, from its groundbreaking phone call agent to deep research tools and slide creation capabilities. Learn how Genspark has grown from $10M to $22M in ARR in just one month, hear about their approach to building reliable AI systems, and get a glimpse into the future of human-AI collaboration. Whether you're a tech enthusiast, entrepreneur, or AI professional, this episode offers valuable insights into one of 2025's most promising AI startups.
In this episode:
- Lenzoy Lin's journey from Google to leading Genspark's engineering team
- How Genspark's mixture-of-agents approach solves complex tasks
- Behind the scenes of their phone call agent, deep research tools, and slide creation capabilities
- The technical challenges of building reliable AI agents at scale
- Genspark's position in the competitive AI landscape and future roadmap
Everyone's talking about AI, but the real opportunities for data scientists come from being in the room where key AI decisions are made. In this Value Boost episode, technology leader Andrei Oprisan joins Dr Genevieve Hayes to share a specific, proven strategy for leveraging the current AI boom and becoming your organisation's go-to AI expert.
This episode explains:
- How to build a systematic framework for evaluating AI models [02:05]
- The key metrics that help you compare different models objectively [02:28]
- Why understanding speed-cost-accuracy tradeoffs gives you an edge [05:47]
- How this approach gets you "in the room where it happens" for key AI decisions [07:20]
Guest Bio
Andrei Oprisan is a technology leader with over 15 years of experience in software engineering, specializing in product development, machine learning, and scaling high-performance teams. He is the founding Engineering Lead at Agent.ai and is also currently completing an Executive MBA through MIT's Sloan School of Management.
Links
Connect with Andrei on LinkedIn
Andrei's website
Agent.ai website
Connect with Genevieve on LinkedIn
Be among the first to hear about the release of each new podcast episode by signing up HERE
Genevieve Hayes Consulting Episode 59: [Value Boost] How Data Scientists Can Get in the AI Room Where It Happens
Read Full Transcript
[00:00:00] Dr Genevieve Hayes: Hello, and welcome to your value boost from Value Driven Data Science, the podcast that helps data scientists transform their technical expertise into tangible business value, career autonomy, and financial reward. I'm Dr. Genevieve Hayes, and I'm here again with Andrei Oprisan, head of engineering at Agent.ai, [00:00:21] to turbocharge your data science career in less time than it takes to run a simple query. In today's episode, we're going to explore how data scientists can leverage the current AI boom to accelerate their career progression. Welcome back, Andre.
[00:00:40] Andrei Oprisan: Thank you.
Great to be here.
[00:00:41] Dr Genevieve Hayes: So as I mentioned at the start of our previous episode together, we are at the dawn of an AI revolution with unprecedented opportunities for data scientists. [00:00:51] Now, through your current role at Agent.ai, and prior roles at AI-centric companies such as OneScreen.ai, you've clearly managed to capitalize on this AI boom, are actively continuing to do so, and have managed to build a very impressive career for yourself, partly as a result. Now, the Internet's full of career tips, but they're usually very generic advice from career coaches who've never worked in the data science or technology space, and their advice usually doesn't take into account the specific context of the AI landscape. [00:01:35] What's one specific strategy that data scientists can use right now to leverage the AI boom for faster career progression?
[00:01:44] Andrei Oprisan: I would say first, building some expertise in prompt engineering and AI model evaluation. I think that's the foundation. On top of that, it's developing some systematic approaches for comparing different models' outputs on domain-specific tasks, and then creating something like a reliable evaluation framework. [00:02:05] For example, you could create an eval set of tasks in a field and develop some quantitative or qualitative metrics to assess how different models perform compared to traditional approaches. That can really position you as someone who can properly integrate AI tools into existing workflows while keeping that element of scientific rigor. [00:02:28] It's leveraging the existing trends around prompt engineering and around the different models that are coming out every week, every month.
Every quarter. Figuring out how we are going to showcase when to use one model versus another, with a scientific approach. Again, I would start as simple as [00:02:47] an eval built from the kind of work that you're doing in your current role or organization, or thinking about adjacent organizations and adjacent strategies, to then create some examples of when you would and wouldn't use certain models, backed by some numbers, where you can show in an email that this model does really well in, let's say, classification in this specific domain, versus one that doesn't. I think from there, you can iterate and do some even more interesting work very repeatably, looking at some adjacent domains and applying the same sort of technical solutioning to other domains.
[00:03:26] Dr Genevieve Hayes: I read an article recently that was written shortly after the launch of the DeepSeek LLM. There was a group of researchers at a university that were evaluating the model, and they had a series of prompts that could be used to find out: can this model be used to produce offensive or dangerous information? [00:03:49] They had something like 50 prompts, and they randomly chose 10 of them and ran them against the model. Is that the same sort of thing that you're proposing, but obviously specific to the person's organization?
[00:04:03] Andrei Oprisan: That's exactly it.
So I think starting as simple as, again, this prompt engineering: writing out a few of those prompts and being able to get some kind of repeatable answer, whether it's a score, whether it's selecting from a set of options, just anything that you can then repeat and measure in a quantitative way. [00:04:24] Then we can say: okay, in this category, with these, let's say, 50 prompts, we're consistently getting 10 percent of the answers incorrect, but 90 percent where we're getting this kind of consistent answer, and an answer that can actually be useful. [00:04:40] And then looking at different kinds of models and figuring out: how do they perform? But also, how might you improve that? Apply some level of scientific-method thinking around, ultimately, what you can change to improve what are still, for most folks, black boxes, these LLMs that take something in and give something else out, and maybe demystify what that looks like, in terms of consistency at the very least, and in terms of accuracy over time. [00:05:12] And then it could even take on more advanced topics, like: how can you improve those results? Once you have a baseline starting point, you can say: okay, sure, now here's how I improved, or here's how maybe the prompts were incorrect, or how they behave differently given a different LLM, or maybe you push different boundaries around context window size. Maybe the Google models are not the best overall, [00:05:38] but they're the best at dealing with large data sets.
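The repeatable-scoring idea Andrei describes (a fixed prompt set, consistent answers, an accuracy percentage per model) can be sketched as a tiny eval harness. Everything below, the model names, the canned responses, and the classification task, is a hypothetical placeholder standing in for real API calls, not anything from the episode:

```python
# A tiny eval harness: run a fixed prompt set against several models,
# then score each model's accuracy on the expected answers.

EVAL_SET = [
    ("Classify the sentiment: 'The checkout flow is broken.'", "negative"),
    ("Classify the sentiment: 'Delivery was fast and easy.'", "positive"),
    ("Classify the sentiment: 'The parcel arrived today.'", "neutral"),
    ("Classify the sentiment: 'Support never replied to me.'", "negative"),
]

# Stand-in for real model calls; in practice each list would come from
# sending every prompt to one provider's API and parsing the replies.
FAKE_RESPONSES = {
    "model-a": ["negative", "positive", "neutral", "negative"],   # 4/4 correct
    "model-b": ["negative", "positive", "positive", "negative"],  # 3/4 correct
}

def evaluate(model_name: str) -> float:
    """Return the fraction of eval-set answers the model got right."""
    answers = FAKE_RESPONSES[model_name]  # replace with real API calls
    correct = sum(answer == expected
                  for answer, (_, expected) in zip(answers, EVAL_SET))
    return correct / len(EVAL_SET)

scores = {name: evaluate(name) for name in FAKE_RESPONSES}
print(scores)  # {'model-a': 1.0, 'model-b': 0.75}
```

The point is not the toy task but the shape: a versioned eval set, one score per model, and a loop you can rerun whenever a new model ships.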
There's a trade-off at a certain point in terms of speed and accuracy and cost. [00:05:47] And so then you introduce some of these different dimensions, or maybe only look at those in terms of, you know: yes, this LLM takes 10 seconds to get me a 98 percent accurate answer, but this other one takes half a second to give me a 95 percent accurate answer. Which one would you choose? In a business context, the faster one that is a little bit cheaper might actually be the right answer. So there are different kinds of trade-offs, I think, given different kinds of context. And I think exploring what that might look like would be a really good way to apply some of those technical skills, looking at some of those other dimensions around things like pricing and runtime execution time.
[00:06:31] Dr Genevieve Hayes: And I can guarantee that if you take a strategy like this, you will become the AI expert in your office, and you will be invited to every single AI-centric meeting the senior management have forevermore, because I did something similar to this. It was before LLMs; it was with those cloud cognitive service type APIs. [00:06:50] And anytime one of those came up, I was the person people thought of. I got invited to the meeting. So this is really good career advice.
[00:06:59] Andrei Oprisan: And really, it starts, I think, with growth. Especially, think about how you grow your career as a technical person. Obviously, part of it is being in the right room at the right time, to be able to ask the right kinds of questions and to present a technical perspective. And again, I think by pushing on some of these boundaries, you get exposed to even bigger [00:07:20] opportunities and bigger challenges that do need technical solutions, that do need someone with a technical mind to say: you know what, maybe that doesn't make sense.
Or maybe there is a way to leverage AI for this problem, but not in the way that you're thinking. And I think being able to at least present that perspective is incredibly valuable.
[00:07:39] Dr Genevieve Hayes: And regardless of which industry you're working in, the secret to success is you've got to get in the room where it happens, as the Hamilton song says, and this sounds like a really good strategy for getting there with regard to LLMs. [00:07:53] That's a wrap for today's Value Boost, but if you want more insights from Andre, you're in luck. [00:08:00] We've got a longer episode with Andre where we discuss how data scientists can grow into business leadership roles by exploring Andre's own career evolution from technology specialist to seasoned technology leader. And it's packed with no-nonsense advice for turning your data skills into serious clout, cash and career freedom. [00:08:23] You can find it now, wherever you found this episode, or at your favorite podcast platform. Thanks for joining me again, Andre.
[00:08:31] Andrei Oprisan: Thanks for having me. This is great.
[00:08:33] Dr Genevieve Hayes: And for those in the audience, thanks for listening. I'm Dr. Genevieve Hayes, and this has been Value Driven Data Science.
The post Episode 59: [Value Boost] How Data Scientists Can Get in the AI Room Where It Happens first appeared on Genevieve Hayes Consulting and is written by Dr Genevieve Hayes.
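The speed-cost-accuracy trade-off discussed in this episode (a 10-second, 98 percent accurate model versus a half-second, 95 percent accurate one) can be made concrete with a small scoring function. The stats and weights below are illustrative assumptions, not measurements from the episode:

```python
# Hypothetical per-model stats for the trade-off described above: a slow,
# slightly more accurate model versus a fast, cheap one.
MODELS = {
    "slow-accurate": {"latency_s": 10.0, "accuracy": 0.98, "cost_usd": 0.010},
    "fast-cheap":    {"latency_s": 0.5,  "accuracy": 0.95, "cost_usd": 0.002},
}

def business_score(stats, w_speed=0.5, w_cost=0.5):
    """Reward accuracy, penalise latency and cost. The weights encode the
    business context; they are knobs to tune, not universal constants."""
    return (stats["accuracy"]
            - w_speed * stats["latency_s"] / 10.0    # normalise latency to ~[0, 1]
            - w_cost * stats["cost_usd"] / 0.01)     # normalise cost to ~[0, 1]

best = max(MODELS, key=lambda name: business_score(MODELS[name]))
print(best)  # fast-cheap
```

With these weights, the fast, cheap model wins despite being slightly less accurate, which is exactly the kind of result worth showing in that email to stakeholders; changing the weights flips the answer, which is the conversation to have with the business.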
Curiosity may have killed the cat, but for data scientists, it can open doors to leadership opportunities. In this episode, technology leader Andrei Oprisan joins Dr Genevieve Hayes to share how his habit of asking deeper questions about the business transformed him from software engineer #30 at Wayfair to a seasoned technology executive and MIT Sloan MBA candidate.
You'll discover:
- The critical business questions most technical experts never think to ask [02:21]
- Why understanding business context makes you better at technical work (not worse) [14:10]
- How to turn natural curiosity into career opportunities without losing your technical edge [09:19]
- The simple mindset shift that helps you spot business impact others miss [21:05]
Guest Bio
Andrei Oprisan is a technology leader with over 15 years of experience in software engineering, specializing in product development, machine learning, and scaling high-performance teams. He is the founding Engineering Lead at Agent.ai and is also currently completing an Executive MBA through MIT's Sloan School of Management.
Links
Connect with Andrei on LinkedIn
Andrei's website
Agent.ai website
Connect with Genevieve on LinkedIn
Be among the first to hear about the release of each new podcast episode by signing up HERE
Genevieve Hayes Consulting Episode 58: Why Great Data Scientists Ask 'Why?' (And How It Can Transform Your Career)
Read Full Transcript
[00:00:00] Dr Genevieve Hayes: Hello, and welcome to Value Driven Data Science, the podcast that helps data scientists transform their technical expertise into tangible business value, career autonomy, and financial reward. I'm Dr. Genevieve Hayes, and today I'm joined by Andrei Oprisan. Andrei is a technology leader with over 15 years of experience in software engineering, [00:00:24] specializing in product development, machine learning, and scaling high-performance teams. He is the founding engineering lead at Agent.ai, and is also currently completing an executive MBA through MIT's Sloan School of Management. In this episode, we'll be discussing how data scientists can grow into business leadership roles by exploring Andre's own career evolution from technology specialist to seasoned technology leader. [00:00:55] And more importantly, we'll be sharing specific steps that you can take to follow his path. So get ready to boost your impact, earn what you're worth, and rewrite your career algorithm. Andre, welcome to the show.
[00:01:09] Andrei Oprisan: Thank you. Great to be here.
[00:01:11] Dr Genevieve Hayes: We're at the dawn of the AI revolution, with everyone wanting to get in on the act and many organizations terrified of being left behind. [00:01:21] As a result, there are more technical, data science, and AI-centric roles being advertised now than ever before. However, this also brings with it unprecedented opportunities for data scientists to make the leap into business leadership, if they're willing and if they know how. And those are two very big ifs, because in my experience, many data scientists either don't know how to successfully make this transition, or write off the possibility of doing so entirely for fear that it'll take them too far away from the tools. [00:01:55] Now, Andre, you started your career as a software engineer, but have since held a number of technology leadership roles, including VP of Engineering at Liberty Mutual Insurance and Chief Technology Officer at OneScreen.ai, and your current role is head of engineering at Agent.ai. What is it that first started you on the path from technical specialist to business leader?
[00:02:21] Andrei Oprisan: Great question. So for me, it was all about asking deeper questions as to the why, and that led me to ask more questions, you know: but why, and why again? Why are we doing this? Why are we prioritizing this kind of work?
What makes us believe this is the right kind of feature to work on as a developer? Which inevitably leads to some kind of business questions, some questions about who the customer is and why we're serving those customers. Are those customers the right kinds of customers to serve in the first place, or should we be thinking about different kinds of customer personas? [00:02:56] And what does that mean? All the way to: how do you actually make money as a business? Why are we doing this? Is it to drive efficiency? Is it to serve a new, untapped market, potentially? And so, as you mentioned, I started as a developer. I started my career at Wayfair back in the early days; I was engineer number 30 at a company of 100 or so people, back in the early 2000s. [00:03:20] And we were developing big features. I remember I owned a big part of baby and wedding registries, and checkout, and customer reviews. And I was building more and more features, and I was sitting in more and more meetings with product managers, who are usually the interface, in a tech world, to the business. [00:03:42] And I kept asking more and more questions around it: hey, but why are we doing this? Why are we solving for baby registries? Why are we solving for wedding registries? [00:03:51] So again, for me, it really started from the early days of my career, all the way through later stages, where I was always asking more questions about: is it the right thing, the highest-value thing, that we can work on as engineers, as developers, as technical folks? Or is there something more valuable that we should be working on, that we should be aware of, that we should be asking deeper questions about? And it really started with that kind of inquisitive nature, always asking: why are we doing this? [00:04:16] You know, I'm here as part of this team, and I want to understand why we're doing these things, so I can be more effective, so I can make sure that I
do as much as possible to make it successful.
[00:04:27] Dr Genevieve Hayes: That approach of asking all those why questions is what they recommend to people in pretty much every management consulting advice book. The three whys of management consulting: why this? Why now? Why me? Did you pick that up from reading some sort of management consulting book, or do you just have a naturally inquisitive nature?
[00:04:48] Andrei Oprisan: No, for me it was more natural. Maybe a bit stubborn, maybe, depending on who you ask, a bit irreverent, just asking the question: so, why are we doing this? But as a developer, as you're building out features, you can build a very simple version of an ask, or you can build something very complex that needs to scale, that needs to take into account a number of different kinds of factors. And so we really started with trying to understand: okay, what is the actual technical requirement, and why do we think that is? [00:05:16] And that's usually defined by some kind of tech lead on a team, or a product manager, or some combination thereof. And I found it very helpful, both for me and for those non-technical counterparts, to ask those why questions, because it really revealed a lot of the assumptions that went into the roadmap, that went into even the business thinking. There's obviously some assumption that, [00:05:41] for instance, we're going to invest in scale from a DevOps standpoint, to make sure these servers don't tip over, that we'll be able to handle more traffic because we expect growth. Okay, but when is that? Why is that? [00:05:53] And it started from me just not really understanding the business and wanting to learn, and wanting to learn on a deeper level, to say, okay, I can understand.
I became an expert in baby and wedding registries and all the competitors, and I think that's part of what's necessary to be able to build [00:06:12] good products: that kind of obsession with the product, and asking questions until you really understand the landscape and what you should and shouldn't be building. I think those are critical aspects of knowing what to build and not to build, to be able to [00:06:26] get some better outcomes.
[00:06:28] Dr Genevieve Hayes: And so by asking these questions, did senior leadership see that as a sign that you had management or leadership potential, and then did you naturally get promoted? Or did you actively seek out those business leadership roles?
[00:06:44] Andrei Oprisan: I think a little bit of both, but in the beginning it was more the former. I was asking [00:07:02] more of the questions for the sake of the questions, and really wanting to build a better product, which then led to just more responsibilities. And it was clear to me that I wanted those kinds of questions to be asked and answered. And many times, in many of those technical conversations we were having, those kinds of questions weren't really asked by the technical folks. And so I became the kind of person that would always ask those questions and always [00:07:19] push us to get good answers to those questions, and really test those assumptions. Over time, as I became more senior in my roles, building more complex systems, that led to more complex questions that needed answers, and increasingly I got in front of more senior folks. [00:07:37] So what began as conversations within a team, with a product manager or a junior product manager talking to junior engineers, became conversations between senior engineers and directors and things like that. And so I just became part of those rooms where those conversations were happening at a higher level, which led me to ask more important why questions, more around [00:08:01] the business strategy: why do we think this is the right segment to tackle? Why do we think we're going to build technology that is really differentiated, that is not just another solution that we could have just bought off the shelf? [00:08:13] And those are very interesting conversations to have, and I think the kinds of conversations we don't get to really have if we're focused on technical just for the sake of technical solutioning, rather than technology in the service of the business, a business that wants to grow and stay competitive and be able to win at whatever it is trying to do.
[00:08:40] Dr Genevieve Hayes: It sounds like your nature made you very well suited to a business leadership role, even though you started off as a technical specialist. But I've met a lot of data scientists over the years who are very adamant that they don't want to move away from purely technical roles and into leadership roles. [00:09:01] For example, I've been in teams where the team leader role has been advertised and every single technical person in that team has refused to apply for it, because they don't want to move away from the tools. Is this something that you experienced early in your career?
[00:09:19] Andrei Oprisan: Definitely, and that's part of every individual's journey as we're moving through those individual contributor ranks.
There are levels to the individual contributor roles: you can go from junior to very senior, to principal or staff or member of technical staff, and different companies have laddering that can even go up to the equivalent of the management side, all the way to VP levels. Microsoft is famous for their laddering, where you can have distinguished engineers who are the equivalent of VPs, who have hundreds of people reporting to them and have similar compensation structures. [00:09:55] So again, it is possible. Not every organization is set up for that, though. And so I think part of this has to first start with the right level of research, to say: okay, if I'm the kind of person that wants to do only technical work, will the career progression in this organization really support my objective? [00:10:14] If the most senior level you can go to is just a senior engineer level, that might be okay, and that might be the right place for you. But if you want more responsibility, and you want to be more of an architect, or someone who is coordinating larger project deployments across multiple divisions, [00:10:37] I would say figure out if the organization has those kinds of opportunities. And in many cases they don't, because they don't know they need it; it hasn't been proven as an actual need. So part of it is: how comfortable are you with being that sort of trailblazer, taking some risks, and crafting your own role, versus working within the existing bounds, where you may have a well-defined ladder? [00:11:03] And in other cases, it might be that, no, there is a ceiling, and in many organizations that is the case, especially in non-technology companies, companies that have a technology or IT department in some fashion.
But they might not have the same level that you can go to [00:11:21] compared to a potential business role, and a decision needs to be made: okay, is this the right kind of place for me? Can I grow and learn to the level that I'm looking to grow and learn to? And then figure out: if you can [00:11:36] move beyond some of those limitations, what are they, and what are you comfortable with?
[00:11:41] Dr Genevieve Hayes: Early in my career, it was the case that, basically, in Australia, if you were a technical person and wanted to get beyond a very moderate salary, you had to go into management. But [00:12:03] in recent years there are an increasing number of companies and organizations that are building in that technical stream. I think Deloitte in Australia now does have a technical stream where you can get quite senior, and I know of some government organizations that also do. I'm not quite sure how well that works in practice, but it's a move in the right direction.
[00:12:20] Andrei Oprisan: Right, and I think that's only increased over time. I've only seen companies create more opportunities for those very senior technical folks, not fewer. So again, I think it is encouraging, but I'd also say you're not going to find the same [00:12:36] leveling across the board for technical folks as you would, let's say, for management-oriented folks, and at a certain point you need to make the decision: do you want to stay as an individual contributor, or are you open to management? [00:12:51] It doesn't mean, from a management standpoint, that you're not technical, or that you don't need your technical skills, but it may mean that, yes, you're no longer coding every day.
Right, you are maybe at best reviewing architecture documents and really pressure-testing the way the systems are designed, and having bigger conversations around cost optimization and [00:13:14] the privacy and security implications of the work that is being done, and making sure those are addressed. Which, again, are different kinds of challenges. They're still technically challenging, and you're going to need good advice from those individual contributors on the teams, but they are different.
[00:13:32] Dr Genevieve Hayes: The other thing I'd add to all this is: even if you choose to remain in that individual contributor stream, as you move up the ranks, you are still going to be associating more and more with senior leadership and having to think about things from a business point of view. It doesn't matter whether you're managing staff or not. [00:13:51] You need to become more business-centric. And that idea that a lot of very technical data scientists have, of just being left alone in a room to code all day: that's not going to happen once you get above a certain level, regardless of whether you're technical or a leader.
[00:14:10] Andrei Oprisan: That's right, and I think it's figuring out the right balance of enough technical work, and that can mean different things over time, with enough organizational impact, which is another way to look at the business elements. You know, we're doing a bunch of work, but again: is it making money? [00:14:29] Is it helping our customers get more of what they need? Is it improving some kind of output that the organization is measuring? If we can't answer any of those questions to some level of sophistication, then would we even know whether we're working on the right thing or not? [00:14:45] And would it even matter? It may be a very interesting technical problem, of course, but does it matter at all? Will anyone even see it? Will anyone even care?
I think by understanding the business, understanding maybe how many eyeballs the product is going to get in front of and what the assumptions are, and even coming up with some of those numbers, it's going to really affect what you're thinking about, what you're building, and why you're building it.
[00:15:09] Dr Genevieve Hayes: It sounds like making that transition from being a technical expert to being a business leader was very organic for you, but was there ever a point in time where you actually consciously thought: okay, I'm now focusing on this business leadership thing. I'm no longer a technical specialist; [00:15:28] I am a data science or engineering leader?
[00:15:32] Andrei Oprisan: Yes, when I transitioned from Wayfair, I worked for an e-commerce consulting shop. That's where I learned a lot of my consulting skills and really learned how to talk to chief marketing officers and CEOs, to understand: what exactly are you trying to accomplish? [00:15:48] But in those conversations, it became very clear to me that I needed to understand more about the business, not less, even though I was very technical. I was a tech lead; I was running the technology team, in charge of recruiting, of defining the staffing plans, and also of architecting some of the solutions. [00:16:10] And so it became very clear that I needed to understand even more about what the actual goals of the organization were, because on the very first iteration of the project we came in with completely the wrong assumptions, and we came up with some technical solutions that made no sense for where they were trying to go two, three, five years later. [00:16:30] We came up with something that made sense for a proof of concept, sort of to get to an initial contract.
But actually, we were setting them up for failure in four to five years, where the solution we were proposing wouldn't be able to support the kinds of customization they would need when they moved to 20 different supply chain partners. Just having those conversations at a higher level[00:16:57] was very eye-opening. I walked out of a few of those meetings understanding that 90 percent of our assumptions were just incorrect. It's like, oh my God, what are we doing? Why do we have this entire team of engineers building these features for, I think it was the Portugal and Spain stores, when we were just expected to lift and shift that for Japan, and that was just not going to be possible? We said, okay,[00:17:22] this makes absolutely no sense. Let's have deeper conversations about the business, what their goals are, and how the technology is going to support that, both now in the very short term, where we're applying a very short-term kind of mentality, but also long term, in four to five years, assuming the business is successful and they meet their objectives.[00:17:44] How can we make sure we're enabling their long-term growth?[00:17:48] Dr Genevieve Hayes: So it sounds like if one of our listeners wanted to follow your lead and move from technical specialist into a business leadership role, one of the first steps they should take is to understand the objectives and goals of their organization and how their work can feed into achieving those goals and objectives.[00:18:09] Andrei Oprisan: Absolutely. I think it's just having those simple questions answered: What is the business? What is it doing? Why is it doing it? Why are they in this specific sector now? How has this evolved? And then being able to answer: how are they actually able to do that? Is it people?[00:18:28] Is it process? Is it technology? It's probably a combination of all of those factors, but technology can have a multiplying effect, right?
And I think it's asking those questions in terms of where they are now, and looking at different ways of expanding, different ways of providing goods and services, and using technology to be more efficient.[00:18:49] It's just looking at the business with what I would call a common-sense approach and asking those kinds of questions of someone on the business side. If they can't answer things in a simple way, ask more questions. If you can't understand them in the terms[00:19:08] they're giving back to you, then ask more clarifying questions. Don't just assume. And it's okay to not be an expert in those things. The challenge I had in the beginning was getting frustrated with my blind spots and my lack of real understanding. I think[00:19:24] one of the early examples was around tax treatments and how, obviously, different territories have different rules for when and how you collect taxes.[00:19:34] It gets into a lot of complexity, but it was very eye-opening to ask more of those questions and to understand just how complex an environment the business operates in, which allowed me to be a better developer, which allowed me to be a better team lead, which allowed me to then be a better partner, frankly, to those business folks, who have the same goals for the organization that we should have.[00:19:59] The company is going to grow, and if the company grows and does well, then that means good things for everybody on the team. And if it doesn't, that's going to lead to equally bad things for everybody on the team. So I think part of it is having that ownership mindset: it's not someone else's problem.[00:20:16] If we don't understand this, it's my problem. It's my problem that we don't understand how we're going to need to customize this tax engine, because we might get hit with fines and we might need to retroactively treat it as a severity-one, drop-everything-now
kind of issue later down the line.[00:20:34] Dr Genevieve Hayes: So what is the single most important change our listeners could make tomorrow, regardless of whether their role is purely technical or not, to accelerate their data science impact and results and increase their business exposure?[00:20:47] Andrei Oprisan: I would say, ask those deeper questions and figure out exactly how the kind of work you're doing has an impact on the bottom line, whether it does or not. Understanding that very well, understanding whether the group and division you're in is seen as a cost center or a revenue center,[00:21:05] is, I think, the biggest eye-opening question you can get answered. Then figure out the broader objectives, whether they're technical objectives the team has or business objectives the whole division has, and figure out, okay, am I playing a part in that today or not,[00:21:26] directly or indirectly? How are my bosses, or my bosses' bosses, seeing the impact of the work that I'm doing in relation to the business's success? And if there is no pathway for that, I think it's the wrong kind of role in terms of long-term growth. So again, if the work you're doing doesn't have a measurable impact on the bottom line or on the growth of the organization, I think it's worth asking deeper questions as to why that is, or why it's seen that way, and how you can get into the kind of role that can help[00:22:03] with the growth and resiliency of the business.[00:22:06] Dr Genevieve Hayes: For listeners who want to get in contact with you, Andrei, what can they do?[00:22:10] Andrei Oprisan: Sure. You can email me at Andrei at agent.ai. You can find me on the web at oprisan.com. My blog is linked there as well. I'm on LinkedIn and X and all the social networks with the same handles. But more importantly, just find me on agent.
ai, where I spend most of my time building AI agents, helping out in the community, and giving folks feedback on how to build better agents,[00:22:35] ultimately aiming to democratize AI and make it more accessible.[00:22:40] Dr Genevieve Hayes: And there you have it, another value-packed episode to help turn your data skills into serious clout, cash, and career freedom. If you enjoyed this episode, why not make it a double? Next week, catch Andrei's value boost, a five-minute episode where he shares one powerful tip for getting real results real fast.[00:23:01] Make sure you're subscribed so you don't miss it. Thank you for joining me today, Andrei.[00:23:05] Andrei Oprisan: Thank you. Great to be here.[00:23:07] Dr Genevieve Hayes: And for those in the audience, thanks for listening. I'm Dr. Genevieve Hayes, and this has been Value Driven Data Science. The post Episode 58: Why Great Data Scientists Ask 'Why?' (And How It Can Transform Your Career) first appeared on Genevieve Hayes Consulting and is written by Dr Genevieve Hayes.
Dr. Jeff Esposito, Engineering Lead at Lenovo R&D, shares how his team is shaping the future of AI with innovations like the Hive Transformer and EdgeGuard. He emphasizes the importance of ethical innovation and building technologies that are intended to serve society's greater good. He also stresses the value of collective contributions and diverse perspectives in shaping a future where technology effectively addresses real-world challenges. Key Takeaways: AI's role in building smarter cities through Lenovo's collaborations with NVIDIA and other partners. How AI security is evolving with EdgeGuard and other cutting-edge protections. The role of hybrid AI in combining machine learning and symbolic logic for real-world applications. Corporate responsibility in AI development and the balance between open-source innovation and commercialization. Why diverse perspectives are essential in shaping AI that benefits everyone. Guest Bio: Dr. Jeff Esposito has over 40 patent submissions, with a long background in research and development at Dell, Microsoft, and Lenovo. He lectures on advanced technological development at various US government research labs, and believes that technology is at its best when serving the greater good and social justice. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. 
The Brave Technologist Podcast is hosted by Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte
Tell us what you think of the show! This Week in Cleantech is a new, weekly podcast covering the most impactful stories in cleantech and climate in 15 minutes or less. This week's episode features Dana Clare Redden, from our "Cleantecher of the Year" committee, who has picked our Cleantecher of the Year!
This Week in Cleantech — December 20, 2024
Trump's Energy Pick, Chris Wright, Argues Fossil Fuels Are Virtuous — The New York Times
Cipher analysis: Emerging clean technologies see sharp drop in investments this year — Cipher News
The Age of Electrons Has Arrived, but Maybe Not for the Right Reasons — Heatmap News
PG&E Secures $15 Billion Loan From U.S. Energy Department — The New York Times
Watch the full episode on YouTube
Cleantechers of the Year
Most Impactful - Bill Weihl - ClimateVoice
Bill Weihl transitioned from ClimateVoice's Co-Executive Director to the role of Founder & Chief Strategic Advisor. ClimateVoice leverages corporate influence from climate-positive companies to win policy battles.
Most Entrepreneurial - Emilie Oxel O'Leary - Green Clean Wind LLC
Emilie is actively urging her LinkedIn followers to reach out for their solar recycling needs, highlighting the growing concern over solar components ending up in landfills. She's asking for every steel pile, aluminum racking component, nut, and bolt, so she can help clean up your site.
Most Disruptive - Michael Tekabe / Hayat Bedane - Kubik
Michael Tekabe, Chief Operating Officer, and Hayat Bedane, Engineering Lead at Kubik, an Africa-based startup, make building materials out of recycled plastic for affordable, sustainable, and easy-to-deploy homes, clinics, and warehouses. Their construction materials produce 5x less carbon pollution than traditional materials.
Most Innovative - Gregg Patterson - Origami Solar
Gregg is leading the effort to reframe the global solar industry with recycled steel, replacing aluminum solar module frames.
This entire effort is designed to lower the carbon pollution footprint of the global solar industry.
Cleantecher of the Year - Jonathan Foley - Drawdown
Jonathan Foley, climatologist and Project Drawdown Executive Director, wrote a piece on the impact of agricultural pollution and how we need to prioritize sustainable agricultural practices. Jonathan shared that total carbon pollution from the food system, including food waste, transport, packaging, and refrigeration, sits at 34%, making food the single largest polluting economic sector — more than power generation or industry, which both sit at roughly 23%.
In this sponsored episode of The Cognitive Revolution, Nathan interviews Andrei Oprisan, Engineering Lead at Agent.ai. They explore the cutting-edge world of AI agents and their impact on the future of work. Andrei shares insights on language model limitations, best practices for building AI agents, and Agent AI's vision as a professional network for AI agents. The conversation covers technical details like fine-tuning models, vector database choices, and privacy-preserving techniques. Don't miss this deep dive into AI's role in transforming industries and the skills needed in an AI-augmented workplace. Check out Agent.ai website here: https://agent.ai Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork.co/ SPONSORS: Weights & Biases Weave: Weights & Biases Weave is a lightweight AI developer toolkit designed to simplify your LLM app development. With Weave, you can trace and debug input, metadata and output with just 2 lines of code. Make real progress on your LLM development and visit the following link to get started with Weave today: https://wandb.me/cr Oracle: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive Brave: The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. 
Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Omneky: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ Squad: Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist. RECOMMENDED PODCAST: This Won't Last - Eavesdrop on Keith Rabois, Kevin Ryan, Logan Bartlett, and Zach Weinberg's monthly backchannel ft. their hottest takes on the future of tech, business, and venture capital. Spotify: https://open.spotify.com/show/2HwSNeVLL1MXy0RjFPyOSz
CHAPTERS:
(00:00:00) About the Show
(00:00:22) Sponsors: Weights & Biases Weave
(00:01:28) About the Episode
(00:05:36) Introduction and AI Agents Overview
(00:07:02) Current State of AI Agents
(00:11:30) Building and Optimizing AI Agents
(00:19:01) Sponsors: Oracle | Brave
(00:21:05) Agent.ai Platform and Marketplace
(00:31:12) Sponsors: Omneky | Squad
(00:32:38) Customization and Context for Agents
(00:33:34) Business Model and Monetization
(00:36:53) Tech Stack and Development Process
(00:43:55) Future of Work and AI Impact
(01:03:46) Fine-tuning and Chain of Thought
(01:14:30) Capturing Human Reasoning Process
(01:21:00) Preparing for Rapid AI Advancement
(01:40:58) AI's Impact on Jobs and Society
(02:00:16) Closing Thoughts and Future Outlook
(02:03:33) Sponsors: Outro
Being a Digital Leader - the Good, Bad AND Ugly of Digital Transformation
What if your neurodiversity could be your ultimate career superpower? Join us as we sit down with Pete Brown, Engineering Lead at Lloyds Banking Group, who shares his remarkable 15-year journey from graduate scheme entrant to digital lab leader. From eye-opening customer-facing roles to complex data center migrations and transformative mentorships, Pete reveals the pivotal experiences that have shaped his career. Discover how he has navigated the nuances of a regulated industry while leveraging his unique strengths to drive IT transformation with significant business impact. Visit AND Digital's website here for the latest episodes and to stay informed.
Follow us on:
LinkedIn: and_digital
X: AND_digital
Insta: and.digital
"Innovation in Rail signalling and systems isn't just about technology; it's about integrating people and a system of systems for successful deployment, ensuring safety, enhancing operational efficiency, and compliance across rail systems." — Mohan Sankarasubbu
We're thrilled to announce that our 60th episode of the Railway Transportation Systems podcast features none other than Mohan Sankarasubbu, a distinguished Technical Executive and Engineering Lead at WSP Australia with over 23 years of experience in rail signalling systems!
Mohan has been at the forefront of groundbreaking projects across the globe, working with companies like Ansaldo STS and Hitachi Rail for major RIMs like Cross River Rail, Queensland Rail, Rio Tinto, Roy Hill, and Indian Railways. As a Fellow of the Institution of Railway Signal Engineers and a Chartered Professional Engineer, his expertise in ERTMS/ETCS, CBTC, ATP, and more is unmatched.
Like many cities around the world, Cape Town used to have a tramway system, but trams have long since fallen out of favour, losing out to buses, cars, and minibus taxis. So why are some groups pushing for their revival? Lester Kiewit speaks to Luke van Wyk, Urban Mobility and Engineering Lead for the Young Urbanists Committee. See omnystudio.com/listener for privacy information.
What is the importance of communication skills for remote teams? In this episode of Product Talk, hosted by Workleap Director of Product Barbara Bermes, former TIM Engineering Lead Andrew Parker speaks on the importance of strong communication skills for remote teams. Drawing on his experience managing remote teams across multiple locations, he emphasizes that while technical skills are important, the ability to collaborate and communicate effectively across disciplines is what truly defines a high performer working remotely. He highlights best practices for asynchronous communication and provides insights into how output can be measured in remote environments through a focus on outcomes rather than activities. Andrew also shares his perspective on the future of remote work and the incremental changes we may see in how virtual teams communicate and collaborate over the next five years.
In this episode, we meet Benjamin Mann, the engineering lead for Delivery Hero, one of the world's leading food delivery platforms. Benjamin shares his fascinating career path, from his early days as a passionate "kid coder" to his current role overseeing a global team of engineers. He discusses the complexities of building a platform that handles billions of orders annually across 72 countries. We'll hear about the challenges of ensuring near-perfect execution to deliver food quickly and efficiently, the importance of a "watch me" attitude when faced with technical hurdles, and the butterfly effect that even small changes can have on a massive delivery chain. Tune in for an inspiring conversation about lifelong learning, perseverance, and the impact of software engineering on the global food delivery industry. --- Send in a voice message: https://podcasters.spotify.com/pod/show/getu-chandler/message Support this podcast: https://podcasters.spotify.com/pod/show/getu-chandler/support
Speakers: Maria Riaz, Cloud Counter-Abuse Engineering Lead, Google Cloud
Topics:
What is "counter abuse"? Is this the same as security?
What does counter-abuse look like for GCP?
What are the popular abuse types we face? Do people use stolen cards to get accounts to then violate the terms with?
How do we deal with this, generally?
Beyond core technical skills, what are some of the relevant competencies for working in this space that would appeal to a diverse audience?
You have worked in academia and industry. What similarities or differences have you observed?
Resources / reading:
Video: EP165 Your Cloud Is Not a Pet - Decoding 'Shifting Left' for Cloud Security
EP161 Cloud Compliance: A Lawyer - Turned Technologist! - Perspective on Navigating the Cloud
"Art of War" by Sun Tzu
"Dare to Lead" by Brené Brown
"Multipliers" by Liz Wiseman
Speaker CFPs and Sponsor Guides are now available for AIE World's Fair — join us on June 25-27 for the biggest AI Engineer conference of 2024!
Soumith Chintala needs no introduction in the ML world — his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you'll have caught up on the History of PyTorch pod from last year and cover different topics). He's well known as the creator of PyTorch, but he's more broadly the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta.
Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.
Life in the GPU-Rich Lane
Back in January, Zuck went on Instagram to announce their GPU wealth: by the end of 2024, Meta will have 350k H100s. By adding all their GPU clusters, you'd get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we used George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters. Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.
Meta AI's Epic LLM Run
Before Llama broke the internet, Meta released an open source LLM in May 2022, OPT-175B, which was notable for how "open" it was - right down to the logbook!
They used only 16 NVIDIA V100 GPUs and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size.
In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 65B and 33B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral.
July 2023 was Llama2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The three models accounted for a grand total of 3,311,616 GPU hours for all pre-training work. CodeLlama followed shortly after, a fine-tune of Llama2 specifically focused on code generation use cases. The family had models in the 7B, 13B, 34B, and 70B sizes, all trained with 500B extra tokens of code and code-related data, except for 70B which is trained on 1T.
All of this on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless, and in one year, Meta transformed from a company people made fun of for its "metaverse" investments to one of the key players in the AI landscape, and its stock has almost tripled since (about $830B in market value created in the past year).
Why Open Source AI
The obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements:
But for Soumith, the motivation is even more personal:
"I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars. And I think that was a strong reason why I ended up where I am.
So like that, like the open source side of things, I always push regardless of like what I get paid for, like I think I would do that as a passion project on the side…
…I think at a fundamental level, the most beneficial value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me…
…Like, okay, I again always go back to like I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble."
We like the way Soumith put it last year: Closed AI "rate-limits against people's imaginations and needs"!
What It Takes For Open Source AI to Win
However, Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.
"Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback.
And if you see with open source models, like if you go to the /r/localllama subreddit, like there's so many variations of models that are being produced from, say, Nous research. I mean, like there's like so many variations built by so many people. And one common theme is they're all using these fine-tuning or human preferences datasets that are very limited and they're not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or like Hugging Chat or Ollama, they don't really have feedback buttons. All the people using all these front-ends, they probably want to give feedback, but there's no way for them to give feedback… So we're just losing all of this feedback. Maybe open source models are being as used as GPT is at this point in like all kinds of, in a very fragmented way, like in aggregate all the open source models together are probably being used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is like negligible, maybe less than 1% of like the usage. 
So I think like some, like the blueprint here I think is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, I think that probably has a real chance of the open source models having a runaway effect against OpenAI. I think like there's a clear chance we can take at truly winning open source."
If you're working on solving open source coordination, please get in touch!
Show Notes
* Soumith Chintala Twitter
* History of PyTorch episode on Gradient Podcast
* The Llama Ecosystem
* Apple's MLX
* Neural ODEs (Ordinary Differential Equations)
* AlphaGo
* LMSys arena
* Dan Pink's "Drive"
* Robotics projects:
  * Dobb-E
  * OK Robot
* Yann LeCun
* Yangqing Jia of Lepton AI
* Ed Catmull
* George Hotz on Latent Space
* Chris Lattner on Latent Space
* Guillaume Lample
* Yannic Kilcher of OpenAssistant
* LMSys
* Alex Atallah of OpenRouter
* Carlo Sferrazza's 3D tactile research
* Alex Wiltschko of Osmo
* Tangent by Alex Wiltschko
* Lerrel Pinto - Robotics
Timestamps
* [00:00:00] Introductions
* [00:00:51] Extrinsic vs Intrinsic Success
* [00:02:40] Importance of Open Source and Its Impact
* [00:03:46] PyTorch vs TinyGrad
* [00:08:33] Why PyTorch is the Switzerland of frameworks
* [00:10:27] Modular's Mojo + PyTorch?
* [00:13:32] PyTorch vs Apple's MLX
* [00:16:27] FAIR / PyTorch Alumni
* [00:18:50] How can AI inference providers differentiate?
* [00:21:41] How to build good benchmarks and learnings from AnyScale's
* [00:25:28] Most interesting unexplored ideas
* [00:28:18] What people get wrong about synthetic data
* [00:35:57] Meta AI's evolution
* [00:38:42] How do you allocate 600,000 GPUs?
* [00:42:05] Even the GPU Rich are GPU Poor
* [00:47:31] Meta's MTIA silicon
* [00:50:09] Why we need open source
* [00:59:00] Open source's coordination problem for feedback gathering
* [01:08:59] Beyond text generation
* [01:15:37] Osmo and the Future of Smell Recognition Technology
Transcript
Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast.
This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala, welcome.Soumith [00:00:17]: Thanks for having me.Swyx [00:00:18]: On one of your rare visits from New York where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast. So if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I was just marveling at your luck, or I don't know if it's your luck or your drive, to find AI early and then find the right quality mentor, because I guess Yann really sort of introduced you to that world.Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think are fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. But actually, the first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy. It's just like maybe the perception of, oh, is this person successful or not might be different. I think after a baseline, your happiness is probably more correlated with your intrinsic stuff.Swyx [00:01:44]: Yes. I think Dan Pink has this book on drive that I often refer to about the power of intrinsic motivation versus extrinsic and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX.
Yes.Soumith [00:02:01]: I mean, in a very convoluted way.Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed or didn't get accepted by Disney, and then went and created Pixar, which then got bought by Disney after creating Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the Gradient. I think maybe people don't know that you were also involved in more sort of hardware and cluster decisions at FAIR. And we can dive into more details there because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for, like I think I would do that as a passion project on the side.Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to the sort of Meta AI in general.
PyTorch vs Tinygrad tradeoffs
Alessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes.
So we had George Hotz from TinyGrad on. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch's design direction. I know you talk a lot about having a happy path to start with, and then making complexity hidden away but still available to the end user. One of the things George mentioned is that you have something like 250 primitive operators in PyTorch, while TinyGrad has four. So how do you think about some of the learnings he's going to run into that you already had over the past seven, almost eight years of running PyTorch?

Soumith [00:04:24]: Yeah, I think there are two different models that people generally start with. Either they go, I have a grand vision and I'm going to build a giant system that achieves this grand vision, and maybe one that is super feature-complete. Or other people get incrementally ambitious: they say, we'll start with something simple and then slowly layer on complexity, in a way that optimally applies Huffman coding to where the density of users is and what they're using. I want to keep the dense part on the easy, happy path, while for the more niche, advanced use cases, I still want people to be able to try them, but they need to take additional, frictional steps. I think George, just like we did with PyTorch, started with the incrementally ambitious thing. I remember TinyGrad used to be limited to a thousand lines of code, and I think now it's at 5,000. So there is no real magic to why PyTorch has the kind of complexity it has. I think it's partly necessitated and partly because we built with the technology available to us at that time. PyTorch is something like 190,000 lines of code at this point. If we had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way, for sure.
But a lot of that complexity comes from the fact that, in a very simple, explainable way, you have memory hierarchies. The CPU has three levels of cache, and then you have DRAM and SSD, and then you have the network. Similarly, the GPU has several levels of memory, and then you have different levels of network hierarchy, NVLink plus InfiniBand or RoCE or something like that, right? The flops on your hardware are available in a certain way, your computation is structured in a certain way, and you have to retrofit your computation onto both the memory hierarchy and the flops available. When you're doing this, it is actually a fairly hard mathematical problem to find the optimal setup, and what is optimal depends on the input variables themselves: what is the shape of your input tensors, what is the operation you're trying to do, and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. For example, as the shapes of the tensors change, say you have three input tensors going into a Sparstar product or something like that, the shape of each of these input tensors will vastly change how you optimally place that operation onto the hardware in a way that gets you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator, templatizing these things, and symbolically generating the final CUDA code or CPU code. There's no way to avoid it, because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also have to be willing to eat a long compile time while searching for that optimal performance at runtime. That's the trade-off.
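The per-shape configuration problem described here can be sketched in miniature. This is a hedged illustration, not PyTorch internals: the thresholds and config fields below are invented, but the shape of the logic, dispatching each input shape to a separately tuned configuration, is the point.

```python
# Hypothetical sketch (NOT real PyTorch code): picking a kernel
# configuration based on input shape, in the spirit of the templatized
# per-shape configurations described in the conversation.

def pick_config(shape):
    """Return an invented (tile, vectorize) config for a 2-D tensor shape.

    The thresholds are made up for illustration; a real dispatcher would
    encode hundreds of tuned configurations per operator, per hardware.
    """
    m, n = shape
    if m * n < 1024:
        # Small problem: plausibly fits in cache, no tiling needed.
        return {"tile": (m, n), "vectorize": False}
    if n % 8 == 0:
        # Inner dimension divisible by an assumed vector width of 8.
        return {"tile": (64, 64), "vectorize": True}
    # Fallback: conservative tiling, scalar code.
    return {"tile": (32, 32), "vectorize": False}

print(pick_config((16, 16)))
print(pick_config((4096, 4096)))
print(pick_config((100, 33)))
```

Even this toy version shows why the code balloons: each branch is a separate "configuration," and real operators multiply this across dtypes, strides, devices, and memory layouts.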
I don't think George's vision is achievable unless we have great breakthroughs. He should be thinking about a narrower problem, such as: I'm only going to make this work for self-driving car convnets, or I'm only going to make this work for Llama-style LLM transformers. If you start narrowing the problem down, you can make a vastly simpler framework. But if you need the generality to power all of the AI research that is happening, and keep zero compile time, and all these other factors, I think it's not easy to avoid the complexity.

PyTorch vs Mojo

Alessio [00:08:33]: That's interesting. We kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target, they have the hardware target, they have different things to think about. He mentioned that when he was at Google, TensorFlow was being optimized to make TPUs go brr, and go as fast as possible. I think George is trying to make especially the AMD stack better than ROCm. How come PyTorch has been such a Switzerland, versus just making Meta hardware go brr?

Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is that it's net good for Meta to fund PyTorch, because PyTorch has become a standard and a big open source project, and generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? We don't actually think about it in terms of Switzerland or not. The way we articulate it to all the hardware vendors and software vendors who come to us wanting to build a backend in core for PyTorch and ship it by default is that we only look at the user side of things. If users are using a particular piece of hardware, then we want to support it.
We very much don't want to kingmake on the hardware side of things. So as MacBooks got GPUs and that stuff started getting increasingly interesting, we pushed Apple to put some engineers on the MPS support, and we spent significant time from Meta-funded engineers on that as well, because a lot of people are using the Apple GPUs and there's demand. So we mostly look at it from the demand side. We never look at it from, oh, which hardware should we start taking opinions on.

Swyx [00:10:27]: Is there a future in which, because Mojo, or Modular's Mojo, is kind of a superset of Python, PyTorch might use Mojo features optionally?

Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. If Mojo is a pip install away and readily available, and users feel like they can use Mojo so smoothly within their workflows, in a way that is low friction, we would definitely look into it. In the same way, PyTorch now depends on Triton, OpenAI's Triton, and we never had a conversation like, huh, that's a dependency, should we build a Triton of our own or should we just use Triton? Those conversations don't really come up for us. The conversations are more: does Triton have 10,000 dependencies, and is it hard to install? We almost don't look at these things from a strategic-leverage point of view. We look at them from a user-experience point of view: is it easy to install, is it smoothly integrated, and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.

Swyx [00:11:37]: You're inclusive by default, as long as it meets the minimum bar of... yeah, but maybe I phrased it wrongly.
Maybe it's more like, what problems would you look to solve that you have right now?

Soumith [00:11:48]: I think it depends on what problems Mojo will be useful for.

Swyx [00:11:52]: Mainly a performance pitch, and some amount of a cross-compiling pitch.

Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was: we're going to be performant even if you have a lot of custom stuff; you can write arbitrary custom things and we will be performant. And that value proposition is not clear enough from the PyTorch side to consider it for PyTorch. So PyTorch, it's actually not 250 operators, it's about a thousand. PyTorch exposes about a thousand operators, and people write their ideas in those thousand operators of PyTorch. Mojo says, well, maybe it's okay to completely sidestep those thousand operators and just write things in a more natural form: write raw Python, write for loops, or whatever, right? So in terms of how we might intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. We could probably figure out how to make it easier for, say, torch.compile to smoothly consume Mojo subgraphs, and make that interoperability actually usable; that, I think, is valuable. But Mojo as a fundamental frontend would be replacing PyTorch, not augmenting it. So in that sense, I don't see a synergy in more deeply integrating Mojo.

PyTorch vs MLX

Swyx [00:13:21]: So, a callout to Mojo for whenever they have written something in Mojo and there's some performance-related thing going on. And then, since you mentioned Apple, what should people think of PyTorch versus MLX?

Soumith [00:13:32]: I mean, MLX is early, and I know the folks well. Awni used to work at FAIR, and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now.
It has a happy path because it's defined its product in a narrow way. At some point, either MLX says, we will only support Apple, and we will just focus on being the framework you use on your MacBook, and once you go server-side or whatever, that's not my problem and I don't care; or MLX enters the server-side set of things as well. One of those two things will happen, right? If the first happens, MLX's overall addressable market will be small, but it'll probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They won't have any magic wand, and they will have more complex work to do. They probably wouldn't be able to move as fast.

Swyx [00:14:44]: Like having to deal with distributed compute?

Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, having a generalization of the concept of a backend, how they treat compilation and the overheads that come with it. Right now the whole MPSGraph approach is deeply assumed. So they'd need to think about all these additional things if they end up expanding to the server side, and they'll probably build something like PyTorch as well, right? Eventually that's where it will land. And I think there they would fail on the lack of differentiation; it wouldn't be obvious to people why they would want to use it.

Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers. I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.

Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, then maybe it can become a thing.

Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.

Soumith [00:15:43]: I think the moat that NVIDIA has right now is that they have the interconnect that no one else has. AMD GPUs are pretty good.
I'm sure there's various silicon that is not bad at all, but the interconnect, NVLink, is uniquely awesome. I'm sure the other hardware providers are working on it, but-

Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. The rest of us just hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the acquisition maybe like 15 years ago.

Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.

PyTorch Mafia

Alessio [00:16:27]: Who are some of the other FAIR/PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton. And Yangqing, you've known since college, when he was building Caffe?

Soumith [00:16:40]: Yeah, Yangqing and I used to be framework rivals, Caffe versus Torch. I mean, we were all a very small, close-knit community back then: Caffe, Torch, Theano, Chainer, Keras, various frameworks. It used to be more like 20 frameworks; I can't remember all the names. CCV by Liu Liu, who is also based out of SF. One of the ways it was interesting is that you went into the framework guts and saw whether someone wrote their own convolution kernel or was just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia; I forget the name, but I remember he was awesome enough to have written his own kernel. And at some point, I built out these benchmarks called convnet-benchmarks, which just benchmarked all the convolution kernels available at that time.
It hilariously became big enough that, at that time, AI was getting important, but not important enough for industrial-strength players to come in and do this kind of benchmarking and standardization, like we have with MLPerf today. So a lot of the startups were using convnet-benchmarks in their pitch decks: on convnet-benchmarks, this is how we fare, so you should fund us. I remember Nervana was actually at the top of the pack, because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but separate times. But to answer your question, Alessio, I think mainly Lepton and Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. There are people who worked within the PyTorch/Caffe2 cohort of things and now end up at various other places.

Swyx [00:18:50]: I think, both as an investor and as someone looking to build on top of their services, it's an uncomfortable, I-don't-know-what-I-don't-know pitch. Because I've met Yangqing and I've met Lin Qiao. I've met these folks, and they're like, we are deep in the PyTorch ecosystem and we serve billions of inferences a day at Facebook, and now we can do it for you. And I'm like, okay, that's great. What should I be wary of or cautious of when these things happen? Because obviously this experience is extremely powerful and valuable; I just don't know what I don't know. What should people know about these new inference-as-a-service companies?

Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. If they've been at a large company but doing amazing work, you would think about it as: what these people bring to the table is that they're really good at GPU programming, or at understanding the complexity of serving models once it hits a certain scale.
You know, various expertise from the infra, AI, and GPU points of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to run a business, how to be disciplined about making money, and various things like that.

Swyx [00:20:23]: Maybe I'll put it like, actually I'll de-emphasize the investing bit and ask more as a potential customer. Okay, you have PyTorch gods, of course. What else should I know?

Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...

Swyx [00:20:44]: Benchmarks.

Soumith [00:20:44]: Yeah, whether I can use it, and its usability and reliability and speed, right?

Swyx [00:20:51]: Quality as well.

Soumith [00:20:51]: Yeah. If someone from some random unknown place came to me and said their stuff is great, and I have the bandwidth, I'd probably give it a shot. And if it turns out to be great, I'll just use it.

Benchmark drama

Swyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up convnet-benchmarks. There was some recent drama around Anyscale. Anyscale released their own benchmarks, and obviously they look great on their own benchmarks, but maybe they didn't give the others a fair shake. I feel there are two lines of criticism. One is that they didn't test apples to apples against the kinds of endpoints that the other providers, their competitors, offer, and that is a due-diligence baseline. And the second would be more about optimizing for the right thing. You had some commentary on it; I'll just let you riff.

Soumith [00:21:41]: Yeah. In summary, my criticism was basically that Anyscale built these benchmarks for end users to understand what they should pick, right?
And that's a very good thing to do. What I think they didn't do a good job of is giving that end user a full understanding of what they should pick. They gave them a very narrow slice of understanding: I think just latency numbers, and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not, oh, one API call is one cent, but a thousand API calls are 10 cents; people can misprice to cheat on those benchmarks. So you want to understand, okay, how much is it going to cost me if I actually subscribe to you and do a million API calls a month or something? And then you want to understand the latency and reliability, not just from one call you made, but as an aggregate of calls made over various times of the day and times of the week. And the nature of the workloads: is it just some generic single paragraph that you're sending that is cacheable, or is it a test of real-world workloads? That kind of rigor in presenting the benchmark wasn't there. It was a much narrower sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure that if, before releasing it, they had shown it to the other stakeholders who would care about this benchmark, because they are present in it, those stakeholders would have easily pointed out these gaps. They didn't do that; they just released it. So those were the two main criticisms. I think they were fair, and Robert took it well.

Swyx [00:23:40]: He took it very well, and we'll have him on at some point and discuss it. But I think it's important, with the market maturing enough that people start caring and competing on these kinds of things, that we establish what best practice is, because otherwise everyone's going to play dirty.

Soumith [00:23:55]: Yeah, absolutely.
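The benchmarking point above, that a single latency number hides total cost of ownership and tail behavior, can be made concrete with a small sketch. All prices and latencies below are invented for illustration; only the aggregation logic is the point.

```python
# Toy sketch of the rigor described above: instead of quoting one call's
# latency, aggregate many calls into a median, a worst case, and an
# estimated monthly bill at a realistic volume. Numbers are made up.
import statistics

calls = [
    {"latency_ms": 120, "cost_usd": 0.0010},
    {"latency_ms": 135, "cost_usd": 0.0010},
    {"latency_ms": 980, "cost_usd": 0.0010},  # one slow tail call
    {"latency_ms": 128, "cost_usd": 0.0010},
]

p50 = statistics.median(c["latency_ms"] for c in calls)
worst = max(c["latency_ms"] for c in calls)
# Estimated total cost of ownership at 1M calls/month.
monthly_cost = sum(c["cost_usd"] for c in calls) / len(calls) * 1_000_000

print(f"p50={p50}ms worst={worst}ms est. cost at 1M calls/month=${monthly_cost:.0f}")
```

A benchmark that reported only the 120 ms best case would miss both the 980 ms tail and the fact that pricing only matters in aggregate.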
My view of the LLM inference market in general is that it's the laundromat model. The margins are going to be driven down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for, how much you sell the API for, and how much latency your customers are willing to let go of. You need to figure out how to squeeze your margins: what is your unique thing here? I think Together and Fireworks and all these people are trying to build faster CUDA kernels and faster hardware kernels in general, but those moats only last for a month or two. These ideas quickly propagate.

Swyx [00:24:38]: Even if they're not published?

Soumith [00:24:39]: Even if they're not published, the idea space is small, so the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial space that is really large. You're talking about Llama-style LLM models, and we're going to beat those to death on a few different hardware SKUs, right? It's not as if there's a huge diversity of hardware you're going to aim to run it on. When you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas get figured out is going to be pretty rapid.

Swyx [00:25:15]: Is it a standard bag of tricks? The standard one that I know of is fusing operators and-

Soumith [00:25:22]: Yeah, it's the standard bag of tricks of figuring out how to improve your memory bandwidth and all that, yeah.

Alessio [00:25:28]: Any ideas for things that are not being beaten to death that people should be paying more attention to?

Novel PyTorch Applications

Swyx [00:25:34]: One thing I was thinking: you have a thousand operators, right?
What's the most interesting usage of PyTorch that you're seeing, maybe outside of this little bubble?

Soumith [00:25:41]: So PyTorch is very interesting and scary at the same time, because it's used in a lot of exotic ways. From the ML angle, what kind of models are being built? You get everything from state space models all the way to nth-order differentiable models, like neural ODEs and stuff like that. So there's one set of interestingness from the ML side of things, and then there's another set from the applications point of view. It's used in Mars rover simulations, in drug discovery, in Tesla cars. There's a huge diversity of applications in which it is used. In terms of the application side, I'm scared by how many interesting things it's used in that are also very critical and really important. I think the scariest was when I visited CERN at some point, and they said they were using PyTorch, and they were using GANs at the same time, for particle physics research. And I was scared more by the fact that they were using GANs than that they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting: how many different things it is being used in. From the models perspective, the really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models; the whole AlphaGo style of model is one example. And I think we're attempting to do it for LLMs as well, with various reward models and search. I don't think PyTorch is being used in this, but the whole AlphaGeometry thing was interesting, because again, it's an example of combining symbolic models with gradient-based ones.
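The combination of a learned generator with symbolic machinery described here can be sketched in a toy form. This is a hedged illustration, not any real system: the "generator" is a hard-coded stand-in for a model proposing candidates, and the verifier is an exact symbolic check, which is the division of labor these hybrid systems rely on.

```python
# Toy sketch of generate-then-verify, in the spirit of the search-plus-
# symbolic systems mentioned above. Everything here is invented for
# illustration; a real system would sample candidates from a model.

def generator(n):
    # Stand-in for a learned model proposing candidate factorizations of 91.
    return [(3, 31), (9, 10), (7, 13), (7, 12)][:n]

def symbolic_check(candidate):
    # Exact symbolic verification, not a learned score.
    a, b = candidate
    return a * b == 91

# Search: keep the first candidate the symbolic checker accepts.
best = next(c for c in generator(4) if symbolic_check(c))
print(best)
```

The learned part only has to propose; the symbolic part provides the hard guarantee on the output, which is exactly why these hybrids are attractive where correctness matters.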
But there is stuff like AlphaGeometry where PyTorch is used, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, from the ML side, those things are very interesting to me right now.

Swyx [00:28:03]: Yeah. People are very excited about the AlphaGeometry thing, and for me it's theoretical. It's great, you can solve some Olympiad questions. I'm not sure how to make the bridge over into real-world applications, but I'm sure people smarter than me will figure it out.

Synthetic Data vs Symbolic Models

Soumith [00:28:18]: Let me give you an example. You know how there's this whole thing about synthetic data being the next rage in LLMs?

Swyx [00:28:27]: It already is a rage.

Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world, or made up certain symbolic models because of human innate biases. We've figured out how to ground particle physics in a 30-parameter model. It's very hard to compute, as in it takes a lot of flops, but it only has 30 parameters or so. I'm not a physics expert, but it's a very low-rank model. We built mathematics as a field that is basically very low rank. Language, a deep understanding of language, the whole syntactic parse trees and understanding how language can be broken down into a formal symbolism, is something we figured out. So we as humans have accumulated all this knowledge on these subjects: either synthetically, we created those subjects in our heads, or we grounded some real-world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly.
The only way we have to teach them is by generating a bunch of inputs and outputs and gradient-descending over them. So in areas where we have the symbolic models, and we need to teach all the knowledge that is better encoded in those symbolic models, what we do is generate a bunch of synthetic data, a bunch of input-output pairs, and then give that to the neural network and ask it to learn, via gradient descent in a much more over-parameterized way, the same thing we already have a better low-rank model of. Outside of this, where we don't have good symbolic models, synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand that will work in every case. It's just for the things we as humans already have good symbolic models of: we need to impart that knowledge to neural networks, and we figured out that synthetic data is a vehicle to impart it. But people, maybe because they don't know enough about synthetic data as a notion, hear that the next wave of the data revolution is synthetic data, and they think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and they think that alone is a revolution. I think that's a gap in understanding most people have in this hype cycle.

Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there are two more that I'll put in front of you, and then you can see how you respond. One is, I have this joke that it's only synthetic data if it's from the Mistral region of France; otherwise it's just sparkling distillation. Which is what Nous Research is doing: they're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi 2, and then fine-tuning open source models like Llama. So I don't know, should we call that synthetic data? Should we call it something else?
I don't know.

Soumith [00:31:57]: Yeah. I mean, the outputs of LLMs, are they synthetic data? They probably are, but I think it depends on the goal you have. If your goal is to create synthetic data with the aim of distilling GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels disingenuous, because your goal is: I need to copy the behavior of GPT-4 and-

Swyx [00:32:25]: It's also not just behavior, but the dataset. I've often thought of this as dataset washing. You need one model at the top of the chain, an unnamed French company, say, that makes a model containing all the data we don't know the provenance of, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very, very accepted use of synthetic data.

Soumith [00:32:53]: Correct. I think it's a very interesting time, where we don't really have good social models of what is acceptable depending on how many bits of information you use from someone else, right? Okay, you use one bit; is that okay? Yeah, let's accept that as okay. What about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as a society have ever been in this conundrum, where we have to decide where the boundary of copyright is, or where the boundary of the socially accepted understanding of copying someone else is. We haven't been tested on this mathematically before, in my opinion.

Swyx [00:33:38]: Whether it's transformative use. Yes. So yeah, I think this New York Times-OpenAI case is going to go to the Supreme Court, and we'll have to decide it, because I think we've never had to deal with it before.
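Soumith's core description of synthetic data, sampling input-output pairs from a symbolic model we already trust, can be made concrete with a toy sketch. The formula and sampling ranges below are invented for illustration; the point is only that the "data" is entirely derived from a known low-parameter model.

```python
# Illustrative sketch: when we have a symbolic model (here, a toy
# physics-style formula), synthetic data is just sampled input/output
# pairs from it, which an over-parameterized network could then be fit on.
import random

def symbolic_model(mass, velocity):
    # Kinetic energy: a "low-rank" model with no free parameters to learn.
    return 0.5 * mass * velocity ** 2

random.seed(0)  # deterministic sampling for the sketch
dataset = []
for _ in range(5):
    m = random.uniform(1.0, 10.0)
    v = random.uniform(0.0, 5.0)
    dataset.append(((m, v), symbolic_model(m, v)))  # (input, target) pair

print(len(dataset), "synthetic pairs, e.g.", dataset[0])
```

Where no such symbolic model exists, there is nothing principled to sample from, which is exactly the limit Soumith describes.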
And then finally, for synthetic data, the thing that I'm personally exploring is solving this great, stark paradigm difference between RAG and fine-tuning, where you can create synthetic data off of your retrieved documents and then fine-tune on that. That's kind of synthetic. All you need is variation, or diversity of samples, to fine-tune on, and then you can fine-tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.

Soumith [00:34:13]: I think what you're doing is saying: well, language, I know how to parametrize it to an extent, and I need to teach my model variations of this input data so that it's resilient to, or invariant to, different language uses of that data.

Swyx [00:34:32]: Yeah, so it doesn't overfit on the wrong source documents.

Soumith [00:34:33]: So I think that's 100% synthetic. And the key is, you create variations of your documents, and you know how to do that because you have a symbolic model, or some implicit symbolic model, of language.

Swyx [00:34:48]: Okay.

Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think the thing that people latch onto is the inability of transformers to deal with numbers because of the tokenizer. Is there a fundamental issue there too? And do you see alternative architectures that will be better at symbolic understanding?

Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture; I mean the use of transformers today, combining the tokenizer and transformers and the dynamics of training, when you show math-heavy questions versus not. I don't have a good calibration of whether I know the answer or not. There are common criticisms that transformers will just fail at X.
But then when you scale them up to sufficient scale, they actually don't fail at that X. There's an entire subfield trying to figure out these answers, called the science of deep learning or something. So we'll get to know more. I don't know the answer.

Meta AI and Llama 2/3

Swyx [00:35:57]: Got it. Let's touch a little bit on Meta AI and what's going on there. I don't know how deeply you're personally involved in it, but you're our first guest from Meta AI, which is really fantastic. And Llama 1: you are such a believer in open source, and Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering it on this podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling models for open source models, or smaller models, or whatever that design decision was when you guys were doing it?

Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of, because we bridged the gap in the world's understanding of how complex it is to train these models. Until then, no one had really published it in gory detail.

Swyx [00:36:50]: The logs.

Soumith [00:36:51]: Yeah. Why is it complex? Everyone says, oh, it's complex, but no one really talked about why. I think OPT was cool.

Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.

Soumith [00:37:05]: We probably didn't train it for long enough, right? That's kind of obvious in retrospect.

Swyx [00:37:12]: For a 175B model. Yeah. Did you train it according to Chinchilla at the time, or?

Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we had trained OPT longer, it would actually have ended up better. Llama 1, I think, was Guillaume Lample and team. Guillaume is fantastic and went on to build Mistral.
I wasn't too involved in that side of things. So I don't know what you're asking me, which is how did they think about scaling laws and all of that. Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. We needed more data, and we needed to train the models for longer. And we made, I think, a few tweaks to the architecture, and we scaled up more. And that was Llama 2. I think Llama 2, you can think of it as, after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, I think, who's the first author, is fantastic. And I think he did play a reasonably big role in Llama 1 as well.

Soumith [00:38:35]: And he overlaps between Llama 1 and 2. So in Llama 3, obviously, hopefully, it'll be awesome.

Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34 and 70B parameter models still seem kind of steep, like they could go lower. From an infrastructure level, how do you allocate resources? Could they have just gone longer, or were you just, hey, this is all the GPUs that we can burn, and let's just move on to Llama 3 and then make that one better?

Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, we're, I mean, Mark really has shared some numbers, right?

Swyx [00:39:20]: So let's cite those things again. All I remember is like 600K GPUs.

Soumith [00:39:24]: That is by the end of this year, and 600K H100 equivalents. With 250K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.

Swyx [00:39:38]: That's a lot of GPUs.

Soumith [00:39:39]: We'll talk about that separately.
But the way we think about it is, we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs. Like-

Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.

Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one, and when do you start training the next one? And how do you make those decisions? The data: do you have net new data, better clean data for the next one, in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when do you stop working on iPhone 1? When do you start working on iPhone 2? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.

Alessio [00:40:31]: So one of the things with the scaling laws: Chinchilla is optimal to balance training and inference costs. I think at Meta's scale, you would rather pay a lot more maybe at training and then save on inference. How do you think about that from an infrastructure perspective? I think in your tweet, you say you can try and guess how we're using these GPUs. Can you just give people a bit of understanding? Because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs, and that's obviously not true, I'm sure. How do you allocate between the research, FAIR, and the Llama training, the inference on Instagram suggestions that get me to scroll, the AI-generated stickers on WhatsApp, and all of that?

Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's like how we would allocate resources of any other kind at any company. You run a VC portfolio, how do you allocate your investments between different companies or whatever? You kind of make various trade-offs, and you kind of decide, should I invest in this project or this other project, or how much should I invest in this project?
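The Chinchilla trade-off Alessio raises can be put into back-of-envelope numbers. This sketch uses commonly cited approximations (roughly 20 training tokens per parameter at the compute optimum, 6·N·D training FLOPs, 2·N inference FLOPs per token); the model sizes are illustrative, not Meta's actual choices.

```python
# Rough arithmetic behind "pay more at training, save on inference".
# Approximations from the Chinchilla scaling-law fits:
#   compute-optimal tokens:  D ~= 20 * N   (N = parameter count)
#   training FLOPs:          C ~= 6 * N * D
#   inference FLOPs/token:     ~= 2 * N
N = 70e9                  # a 70B-parameter model
D_opt = 20 * N            # tokens at the Chinchilla optimum
C_opt = 6 * N * D_opt     # training compute at that optimum

# Over-training a half-size model on extra tokens costs more training
# compute per unit of quality, but halves the cost of every served token.
N_small = N / 2
inference_ratio = (2 * N_small) / (2 * N)
print(D_opt)
print(inference_ratio)  # 0.5
```

At serving volumes like Meta's, that halved per-token cost can dominate the one-time extra training spend, which is why inference-heavy deployments over-train past the Chinchilla point.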
It's very much a zero-sum set of trade-offs. And it also comes into play how your clusters are configured: overall, what you can fit of what size in what cluster, and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but also wouldn't add more understanding. It's just gonna be like, oh, okay, I mean, this looks like they just think about this as I would normally do.

Alessio [00:42:05]: So even the GPU-rich run through the same struggles of having to decide where to allocate things.

Soumith [00:42:11]: Yeah, I mean, at some point, I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think, would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, we feel like we have sufficient compute and we couldn't have done better. So that conversation, I don't think I've heard from any of my friends at other companies.

Eleuther

Swyx [00:42:47]: Stella from Eleuther sometimes says that because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.

Soumith [00:42:57]: I mean, that's a cool, high-conviction opinion that might pay off.

Swyx [00:43:01]: Why?

Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take in this climate, and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? So everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff, and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not, because, oh yeah, someone else did the same thing you did.
It's like, there's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.

Making Bets

Alessio [00:43:53]: And how do you reconcile that with how we started the discussion, about intrinsic versus extrinsic kind of accomplishment or success? How should people think about that, especially when they're doing a PhD or early in their career? I think at NeurIPS, I walked through a lot of the posters and whatnot, and there seems to be mode collapse in a way in the research, a lot of people working on the same things. Is it worth it for a PhD to not take a bet on something that is maybe not as interesting, just because of funding and visibility and whatnot? Or yeah, what suggestions would you give?

Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Like whatever reasonable, normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. Like you wouldn't wanna be doing something so obscure that people are like, I don't know, like you can work on it.

Swyx [00:44:59]: Would you put a limit on fundability? I'm just observing, something like three months of compute, right? That's the top line, that's like the max that you can spend on any one project.

Soumith [00:45:09]: But like, I think that's very ill-specified, like how much compute, right? I think that the notion of fundability is broader. It's more like, hey, is this family of models within the acceptable set of, you're not crazy or something, right? Even something like neural ODEs, which is a very boundary-pushing thing, or state space models or whatever. Like all of these things I think are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable.
Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of this podcast, then it's more questionable. The way I think about it is, you need to figure out how you can be at the baseline level of fundability, just so that you can live. And then after that, really focus on intrinsic motivation, and it depends on your strengths, like how you can play to your strengths and your interests at the same time. Like I try to look at a bunch of ideas that are interesting to me, but also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like that also play to your strengths. And I'd go from there. Everything else, like actually finding extrinsic success and all of that, the way I think about it is, it's somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.

Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion: that's for all of Meta. So including your own inference needs, right? It's not just about training.

Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.

Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then is there interest in your own hardware?

MTIA

Soumith [00:47:31]: We already talked about our own hardware. It's called MTIA.
Our own silicon. I think we've even shown the standard photograph of you holding the chip that doesn't work. Like, as in the chip that you basically just get like-

Swyx [00:47:51]: As a test, right?

Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon, and we'll probably talk more about it when the time is right, but-

Swyx [00:48:00]: Like what gaps do you have that the market doesn't offer?

Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about there's this memory hierarchy and like sweet spots and all of that? Fundamentally, when you build hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively, while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware-efficient it's going to be, the more power-efficient it's gonna be, and the easier it's going to be to write the software, like the kernels, to just map that one or two workloads to that hardware, and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume, enough workload, you can specialize it and get some efficiency gains, like power gains and so on. So the way you can think about every large company building silicon, and I think a bunch of the other large companies are building their own silicon as well, is that each large company has a sufficient set of verticalized workloads that can be specialized, that have a pattern to them, that say a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale, and you have sufficient forecasted stability that those workloads will exist in the same form, that it's worth spending the time to build out a chip to exploit that sweet spot.
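The "sweet spot" argument above is essentially a roofline argument: whether a kernel is memory-bound or compute-bound depends on its arithmetic intensity (FLOPs per byte moved), and a chip built for workloads with a known intensity profile can shed generality that a generic GPU pays area and power for. A minimal sketch of that model, with made-up hardware numbers:

```python
# Roofline model: attainable throughput is capped either by the compute
# roof (peak TFLOP/s) or by memory bandwidth times arithmetic intensity.
# The hardware figures below are illustrative, not any real chip's specs.

def attainable_tflops(peak_tflops: float, mem_bw_tbs: float,
                      flops_per_byte: float) -> float:
    # min(compute roof, bandwidth roof)
    return min(peak_tflops, mem_bw_tbs * flops_per_byte)

peak, bw = 100.0, 2.0  # 100 TFLOP/s peak, 2 TB/s memory bandwidth
for name, intensity in [("elementwise add", 0.25), ("large matmul", 300.0)]:
    t = attainable_tflops(peak, bw, intensity)
    bound = "memory-bound" if t < peak else "compute-bound"
    print(f"{name}: {t} TFLOP/s attainable ({bound})")
```

If a company's verticalized workloads cluster at one point on this curve, silicon can be tuned for exactly that point, which is the efficiency being left on the table by a general-purpose accelerator.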
Like obviously, something like this is only useful if you hit a certain scale, and your forecasted prediction that those kinds of workloads will stay in the same specializable, exploitable form is true. So yeah, that's why we're building our own chips.

Swyx [00:50:08]: Awesome.

Open Source AI

Alessio [00:50:09]: Yeah, I know we've been talking a lot on a lot of different topics, and going back to open source, you had a very good tweet. You said that a single company's closed source effort rate limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been doing, and maybe directions of the whole open source AI space?

Soumith [00:50:32]: Yeah, in general, I think first, it's worth talking about this in terms of open and not just open source, because with the whole notion of model weights, no one even knows what source means for these things. But just for the discussion, when I say open source, you can assume I'm just talking about open. And then there's the whole notion of licensing and all that: commercial, non-commercial, commercial with clauses, and all that. I think at a fundamental level, the most beneficial thing about open source is that you make the distribution very wide. It's just available with no friction, and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license, and I'm a student in India. I don't care about the license. I just don't even understand the license. But the fact that I can use it and do something with it is very transformative to me. Like I got this thing in a very accessible way. And then it's various degrees, right? And then if it's open source, but it's actually a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed source company for.
So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open sourcing it. And that kind of effort is not really feasible by, say, a band of volunteers doing it the same way. So there's both a capital and operational expenditure that the large company just decided to ignore and give away to the world for some benefits of some kind. They're not as tangible as direct revenue. So in that part, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open sourced Llama and that family of models, and several other fairly transformative projects. FAISS is one, Segment Anything, Detectron, Detectron 2, DensePose. I mean, it's-

Swyx [00:52:52]: Seamless. Yeah, Seamless.

Soumith [00:52:53]: Like, the list is just so long that we're not gonna cover it. So I think Meta comes into that category where we spend a lot of CapEx and OpEx, and we have a high talent density of great AI people, and we open our stuff. And the thesis for that, I remember when FAIR was started, the common thing was like, wait, why would Meta wanna start an open AI lab? What exactly is the benefit from a commercial perspective? And back then, the thesis was very simple. It was: AI is currently rate-limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other factors. AI was the limiting factor, and we just wanted AI to advance more, and we didn't care if the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab, and we said, if this helps accelerate the progress of AI, that's strictly great for us. Very easy, rational, right? It's still the same to a large extent with the Llama stuff. And it's the same values, but the argument is a bit more nuanced.
And then there's a second kind of open source, which is, oh, we built this project nights and weekends, and we're very smart people, and we open sourced it, and then we built a community around it. This is the Linux kernel and various software projects like that. So I think about open source as both of these things being beneficial and both of these things being different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done. If someone's not really looking at a particular space because it's not commercially viable or whatever, a band of volunteers can just coordinate online and do something and then make that happen. And that's great.

Open Source LLMs

I wanna cover a little bit about open source LLMs, maybe. So open source LLMs have been very interesting, because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or something, where more and more pressure within the community was to open source their stuff so that their methods and stuff get adopted. And then the LLM revolution kind of took the opposite turn. OpenAI stopped open sourcing their stuff, and DeepMind kind of didn't either; like all the other cloud and all these other providers, they didn't open source their stuff. And it was not good, in the sense that, first, science done in isolation probably will just form its own bubble, where people believe their own b******t or whatever. So there's that problem. And then there was the other problem, which was the accessibility part. Like, okay, I again always go back to, I'm a student in India with no money. What is my accessibility to any of these closed models? At some scale, I have to pay money. That makes it a non-starter and stuff. And there's also the control thing. I strongly believe if you want human-aligned stuff, you want all humans to give feedback. And you want all humans to have access to that technology in the first place.
And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. Like all the friends I hang out with talk about some random thing like Dyson spheres or whatever; that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble, and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think the distribution of open source powers a certain kind of non-falsifiability that I think is very important. I think on the open source models, it's going great in the fact that LoRA, I think, came out of the necessity of open source models needing to be fine-tunable in some way. Yeah, and I think DPO also came out of the academic open source side of things. So did any of the closed source labs already have LoRA or DPO internally? Maybe, but that does not advance humanity in any way. It advances some company's probability of doing the winner-takes-all that I talked about earlier in the podcast.

Open Source and Trust

I don't know, it just feels fundamentally good. When people try to, you know, people are like, well, what are the ways in which it is not okay? I find most of these arguments, and this might be a little controversial, but I find a lot of arguments based on whether closed source models are safer or open source models are safer very much related to what kind of culture people grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government, obviously it's corrupt or whatever, then I think the open source argument is what they take.
I think there's a deep connection to people's innate biases from their childhood and their trust in society and governmental aspects that push them towards one opinion or the other. And I'm definitely in the camp that open source is going to have better outcomes for society. Closed source to me just means centralization of power, which, you know, is really hard to trust. So I think it's going well.
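The LoRA technique Soumith credits to the open source ecosystem can be sketched in a few lines: the pretrained weight W stays frozen, and fine-tuning only learns a rank-r update B·A, scaled by alpha/r. This is a toy pure-Python illustration with hypothetical dimensions; real implementations apply the update to the attention projection matrices of a transformer.

```python
# Toy LoRA forward pass: y = W x + (alpha / r) * B (A x), with W frozen.
# Trainable parameters drop from d_out * d_in to r * (d_in + d_out).

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=1.0):
    r = len(A)              # rank = number of rows in A
    base = matvec(W, x)     # frozen pretrained path
    down = matvec(A, x)     # project input down to rank r
    up = matvec(B, down)    # project back up to d_out
    return [b + (alpha / r) * u for b, u in zip(base, up)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 "pretrained" weight (identity)
A = [[1.0, 0.0]]               # rank-1 down-projection (1 x 2)
B = [[0.0], [1.0]]             # up-projection (2 x 1)
print(lora_forward([3.0, 4.0], W, A, B))  # [3.0, 7.0]
```

Because only A and B are trained, the fine-tuned delta is tiny to store and share, which is exactly why it suited the open model community's need for cheap fine-tuning.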
Understanding Search, GenAI, RAG methodology, and vector databases with Nixon Cheaz, Engineering Lead at IBM's Experience Engine. 02:24 Meet Nixon Cheaz04:32 Search without Google06:35 Experience Engine08:30 Elements of Good Search12:46 Search Data Source15:36 GenAI Use Cases and Vector DBs 19:40 Foundational Models?22:07 Impact of Vector DBs25:38 IBM Public Content DB28:02 Use Cases29:58 IBM Technologies32:54 RAG40:12 Health is WealthLinkedIn: linkedin.com/in/nixon-cheazWant to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun. Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Understanding Search, GenAI, RAG methodology, and vector databases with Nixon Cheaz, Engineering Lead at IBM's Experience Engine. 02:24 Meet Nixon Cheaz04:32 Search without Google06:35 Experience Engine08:30 Elements of Good Search12:46 Search Data Source15:36 GenAI Use Cases and Vector DBs 19:40 Foundational Models?22:07 Impact of Vector DBs25:38 IBM Public Content DB28:02 Use Cases29:58 IBM Technologies32:54 RAG40:12 Health is WealthLinkedIn: linkedin.com/in/nixon-cheazWant to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun. Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
In this episode of the IoT For All Podcast, Richard Beranek, Managing Partner and Engineering Lead to Brash Inc, joins Ryan Chacon to discuss custom versus off-the-shelf IoT solutions. They cover what off-the-shelf and custom IoT solutions are, combining them, risks and things to consider, factors that influence the decision between them, and advice for companies starting their IoT journey. Dr. Richard Beranek has a Bachelor of Engineering in Aerospace Engineering and a PhD in Mechanical Engineering, both from Carleton University. In 2012, he co-founded GaitTronics to commercialize SoloWalk, a robotic walker used in early mobilization rehabilitation. He would go on to form Brash Inc. in 2017, leading the product design and development of new devices, consumer electronics, robotics, software, and IoT products. Brash Inc. is in the business of collectively finding creative solutions to interesting problems. Whether it's starting the next big idea or revamping an existing one, they partner with clients to deliver customized solutions from the initial scribble all the way to market (and anywhere in between). Their team of innovators include designers, engineers, software developers, data scientists, and researchers that are not only technical experts in their respective fields, but also well-versed in the startup journey and the various stages of product development. Alongside with you, they dig into developing products that fuse design and engineering to create intuitive customer experiences. Discover more about IoT at https://www.iotforall.com More about Brash: https://www.brashinc.com/ Connect with Richard: https://www.linkedin.com/in/richardberanek/ Our sponsor: https://www.wildernesslabs.co/ (00:00) Sponsor (00:21) Intro (00:31) Richard Beranek and Brash Inc. (02:16) What are off-the-shelf IoT solutions? 
(04:11) Custom IoT solutions (05:40) Hybrid approach: combining off-the-shelf and custom (10:52) Impact of legacy infrastructure on IoT solution choice (12:02) What other factors influence custom vs off-the-shelf? (21:06) Advice for companies starting their IoT journey (22:22) Learn more and follow up SUBSCRIBE TO THE CHANNEL: https://bit.ly/2NlcEwm Join Our Newsletter: https://www.iotforall.com/iot-newsletter Follow Us on Social: https://linktr.ee/iot4all Check out the IoT For All Media Network: https://www.iotforall.com/podcast-overview
Reveal AI insights in retail with our Disruption Talks guest, Tomasz Glowacki, Head of Data Science at Żabka Group.On “AI-Driven Efficiency: Transforming Retail from the Ground Up” episode we talked about how to:- Leverage AI at the management level for actionable data-driven insights- Elevate retail customer experiences with personalized AI interactions- Revolutionize supply chain, demand forecasting, and logistics all through AIHost: Radek Szeja, Engineering Lead at Netguru
In this episode, Emily is joined by her former colleague and COO, Alex Papadopulos. They explore the dynamics of the two roles, the similarities and differences, and frequent communication and maintaining a unified executive team. They share stories from their time working together, reminiscing about good times, fun moments, and challenging times like COVID-19. Alex on LinkedIn Alex Papadopulos, originally from South Africa, relocated to the U.S for work in 2007 and has built a thriving career in the tech industry. Starting as a developer, he ascended the ranks, becoming Technical Project Manager and Engineering Lead before transitioning into operations. Alex joined fusionZONE in 2018, initially as the Chief Technology Officer (CTO). His role expanded to Chief Operating Officer (COO) 18 months later, overseeing the technical and operational aspects of the company. He played a significant role in re-structuring teams and fostering an environment that promotes growth and productivity. His leadership style blends his technical expertise and operational acumen, making significant contributions to the organization. – – – – – 00:55 Guest Introduction: Alex Papadopoulos 01:28 Understanding the Role of Chief of Staff and Chief Operating Officer 01:48 Alex's Journey to Fusion Zone 04:30 The Evolution of Roles within the Company 14:15 The Importance of Strategic Prioritization 15:10 The Impact of PBS Sessions 17:55 Understanding the Role of a Chief of Staff 19:39 The Importance of Communication in Leadership 20:12 Comparing Traditional and Non-Traditional Organizational Structures 22:16 The Dynamic Relationship Between Executives and Chiefs of Staff 30:33 Navigating Through Crisis: The COVID-19 Experience 33:48 The Interplay Between the Chief of Staff and the CEO 36:24 Reflecting on the Relationship Between the COO and the Chief of Staff 39:03 Closing Remarks and Appreciation – – – – – Emily Sander is a C-suite executive turned leadership coach. 
Her corporate career spanned Fortune 500 companies and scrappy start-ups. She is an ICF-certified leadership coach and the author of two books, An Insider’s Perspective on the Chief of Staff and Hacking Executive Leadership. Emily works with early to senior executives to step into effective leadership with one-on-one coaching. Go here to read her story from seasoned executive to knowledgeable coach. Get your free Chief of Staff Skills Assessment Checklist here. Download your free Chief of Staff Toolkit here. To learn more about Emily and Next Level Coaching, click here. Follow Emily on LinkedIn | YouTube
Programmer Hour Podcast by Programmer Couple YouTube: https://www.youtube.com/@programmercouple ❤️ YouTube - https://www.youtube.com/@programmercouple
✨ Subscribe to the Green Pill Podcast ✨ https://pod.link/1609313639
In this episode of the Modern Web podcast, host Rob Ocel engages in a conversation with Willow, also known as Ghost. Willow is not only one of the co-founders of Svelte Sirens but also an active member of the Svelte community, known for her contributions and expertise. The episode dives into Willow's personal journey into web development. Willow shares how she stumbled upon Svelte through a YouTube video and was immediately drawn to its exceptional capabilities. She described the pivotal moment when she realized Svelte's potential to revolutionize web development. One of the key highlights of the episode is the emergence of SvelteKit and its transformative effect on web development. Willow underscores the remarkable flexibility of SvelteKit, emphasizing how it allows developers to effortlessly tackle various project types, from single-page applications (SPAs) to static websites, all without the burden of extensive configuration overhead. Host Rob Ocel- Architect and Engineering Lead at This Dot Labs Guest Willow aka GHOST- Co-Founder of Svelte Sirens This episode is sponsored by This Dot Labs
In this episode, Rob Ocel is joined by Kayla Sween to discuss her tech journey, the conference scene, organizing events, and the trials of managing your own deployments in AWS. Along the way they talk about the benefits and challenges of online, live, and hybrid conferences, why all developers should look to attend conferences, how small teams can grow into more complicated DevOps configurations, and how Flightcontrol aims to make that process less painful. Finally, they discuss living with ADHD and how learning about yourself and others can be so empowering. Guest Kayla Sween - MagnoliaJS Organizer, Front End Developer Host Rob Ocel - Architect and Engineering Lead at This Dot Labs Sponsored by This Dot Labs
When Ilana Davidi Reeves walked into the computer science placement test at the start of college and she was the only woman, she walked out and found a new major. But tech-related internships at ESPN and an early internet company kept her interest piqued and on the second day of her first consulting job out of college, she found herself in their Java lab and kept programming and technology front and center in her career.Seemingly against the odds, she exceled in software engineering as one of the only women in the field. She had to deal not only with the volatility of the industry but also the discrimination there during the 2009 recession and later when she needed to juggle it all with motherhood. The struggles and the support she found from her network at that time, however, would prove to be beneficial when she found the odds stacked against her in other ways in her personal life later on.In this episode, find out from Ilana how sticking through the hard things can set you up for success and a sense of gratitude later…on today's Roads Taken, with me, Leslie Jennings Rowley. About This Episode's GuestIlana Davidi Reeves is a software engineering and architecture leader with 25 years of experiences across diverse industries, who currently serves as Engineering Lead, Fulfillment at Square. She has managed, mentored, and directed local, off-shore, and near-shore teams including the new wave of women in the space. She lives in Atlanta with her husband and their two boys. (252) For another story about being an outlier in your industry and leading through it, listen to our episode with Tim Chow. Find more episodes at https://roadstakenshow.com Executive Producer/Host: Leslie Jennings RowleyMusic: Brian BurrowsEmail the show at RoadsTakenShow@gmail.com
In this episode, Nav's VP of Engineering, Gian Perrone, joins Amir Bormand to discuss organizational design and leadership styles. The discussion focuses on how leaders can adapt to the company's culture or change the organization to fit their leadership style. They discuss the importance of assessing the organization before making any changes and the potential pitfalls of imposing a leadership style that doesn't fit the company culture. Highlights [00:01:29] Adapting leadership style to organization. [00:05:21] New leaders and reorgs. [00:07:04] Change management in organizations. [00:10:23] Comms plan and collaboration. [00:14:43] Dealing with organizational change. Guest: Gian Perrone is the Head of Engineering at Nav. Before joining Nav, he served as an Engineering Lead at Square Banking, where he played a key role in launching various financial products. Before Square, he worked as a web developer at The Folio Society and a research programmer at the University of Waikato. He holds a BS and MS in Computer Science from the University of Waikato and a Ph.D. from the IT University of Copenhagen. With a decade of witnessing small business owners' financial challenges, Gian looks forward to simplifying business financing at Nav. He works from Milwaukee, adding to Nav's distributed team across 23 states. https://www.linkedin.com/in/gianp/ --- Thank you so much for checking out this episode of The Tech Trek, and we would appreciate it if you would take a minute to rate and review us on your favorite podcast player. Want to learn more about us? Head over at https://www.elevano.com Have questions or want to cover specific topics with our future guests? Please message me at https://www.linkedin.com/in/amirbormand (Amir Bormand)
This is a rebroadcast episode with Dona Sarkar, Director of Technology for Accessibility at Microsoft. This was the first episode for this podcast and one of my favorites. This conversation with Dona is as relevant today as it was in 2015. I hope it encourages you to believe that having a portfolio career such as technologist, fashion designer, and author is actually possible. At the time we recorded this episode, Dona was an Engineering Lead for Microsoft's HoloLens. I talked to Dona about her career in technology and how following your interests on the side can enhance your career. Growing up in Detroit, Dona didn't have a lot of money or resources, so she made up her own stories and characters. Dona talked about how growing up with a need to constantly use her own creativity to entertain herself is now an asset. Dona says "when you don't have very much, consider it your superpower". Dona also explained how a lack of resources can help you understand the billions of underserved people in the market today. "We keep going after the same 2 billion people over and over again in tech, but why? Let's go after 5 billion." Dona's Blog Topics discussed: Tips for getting started in technology Coding is just building things for people and how not to be scared when trying to learn technical skills If you don't succeed at something, Dona suggests "do it again!" Resources to learn new skills (Skillfeed, Udemy, YouTube, Lynda, Codecademy) Dona's time management secret Dona talks about her book on eating disorders in minority women How growing up with less can be your superpower Dona's philosophy on staying away from non-believers and negativity
In this episode, Rob Ocel is joined by Debbie O'Brien (Senior Program Manager at Microsoft and Playwright Developer Advocate) to discuss automated testing with Playwright! They talk about the evolution of the testing pyramid, how to write Playwright tests with the Playwright recorder, and some of the upcoming features of Playwright to deal with Shadow DOM, frames, selector issues, and so much more! Guest Debbie O'Brien: Senior Program Manager at Microsoft and Playwright Web Developer Advocate. @debs_obrien Host Rob Ocel: Architect and Engineering Lead at This Dot Labs. @robocell
BONUS: From Scrum Master to Engineering Lead, how to prepare the transition with Tim Bourguignon We start this episode reflecting on Tim's journey of realizing the importance of working collaboratively and embracing agile methodologies. Tim noticed early on that he drifted toward teaching and providing assistance to others rather than actively developing software. After a while, he moved to consulting in Agile, and in that role, he noticed recurring patterns and struggled to unclog processes that seemed to be missing something crucial. Over time, frustration set in. He felt like he was fighting an uphill battle and highlighted the disruptive nature of Agile, which aimed to uncover and solve problems but often revealed bigger and deeper underlying issues. The big problem with Agile adoption, and what we can do to prepare for it With time and experience, Tim realized that leadership was a crucial factor in the team's success. He observed a recurring pattern where leadership was either blocking progress or not fulfilling their role effectively. This realization led him to recognize the significance of leadership's involvement and the impact it had on the overall performance of the team. From developer, to coach, and finally to leader: learning to help teams at all levels of the organization Tim shares his journey at WeMaintain and discusses the challenges of scaling while maintaining agility. Before joining, and during the interview process, Tim already sought to identify the problems he could help solve but couldn't pinpoint a specific issue. His boss expressed concerns about managing fast growth while staying agile. Initially, WeMaintain had two teams working efficiently from a backlog, releasing frequently, and measuring their progress. But they wanted to grow the company without resorting to a traditional approach of multiple teams working on the same problem, which often led to communication issues. 
Instead, they advocated for compartmentalized teams with strong ownership and defined success metrics for each team based on specific business streams. Each team had the necessary skills and accountability to achieve their success metrics, ensuring a shared responsibility for success. From coach to leader, and the critical lessons learned on the way Tim reflects on the differences between his current approach and what he observed in the past, when he was the coach trying to help teams and organizations. He emphasizes the importance of taking personal responsibility as a leader when facing problems within a team. Previously, his clients would assume that the leaders were right and focus only on changing the teams, and he wanted to avoid that anti-pattern at all costs. Tim shares the tip of starting with leadership and establishing a clear vision, emphasizing the impact that a lack of clear vision and passionate individuals has on the organization. When it comes to reflecting on our performance, Tim recommends evaluating oneself against the 12 principles of the Agile Manifesto and highlights the necessity of enabling developers to communicate directly with customers to foster agility. Scaling with Agility: Building Compartmentalized Teams and other strategies for growing companies, and staying Agile Tim's current focus is on stream-aligned teams and metrics. Tim recommends the book "Team Topologies", whose authors have been guests on the podcast. Tim also shares the tip of asking teams to create a portfolio of metrics that demonstrate they are working on the right things. Various teams have found interesting metrics to track their progress. The PDCA cycle and DORA (DevOps Research and Assessment) metrics are mentioned. Tim highlights a positive sign of organizational health, with a rate of 1.5 deployments per day across the entire product group. 
They suggest having frequent discussions with people throughout the organization and implementing practices like showcasing Monday morning deliveries and sharing post-release messages on Slack as early documentation for stakeholders. The book "Accelerate" is also recommended for further insights into metrics. In this final segment, we also refer to Extreme Programming.
In this episode, Rob Ocel is joined by Rick Conrades, Engineering Manager at This Dot Labs, to discuss software development and engineering management! They discuss how to identify and mitigate risks, how to make difficult technology tradeoffs, and how to "manage up"! Finally, they share tips for all software engineers about relating to engineering managers, and the surprising challenges that engineering managers face with their teams. Guest Rick Conrades, Engineering Manager at This Dot Labs Host Rob Ocel, Software Architect and Engineering Lead at This Dot Labs Sponsored by This Dot Labs
In this episode, Jesse Tomchak is joined by Sam Julien, Director of Developer Advocacy for Auth0 at Okta. What is the day-to-day of someone in developer advocacy outside of what we see from blog posts and conference talks? Sam is passionate about helping developers become the best versions of themselves through his Guide to Tiny Experiments book and his current newsletter, Developer Microskills. We dive into the idea of self-taught developers, developer productivity that is actually effective, sustainable progress, and so much more. Guest Sam Julien - Director of Developer Advocacy, Auth0 at Okta Host Jesse Tomchak - Architect and Engineering Lead at This Dot Labs Developer Microskills newsletter (https://developermicroskills.com) Sam's website (http://www.samjulien.com/) How to Finish What You Start article (https://www.samjulien.com/how-to-finish-what-you-start) Guide to Tiny Experiments book (https://learn.samjulien.com/guide-to-tiny-experiments) Getting Started in Developer Relations book (http://www.gettingstartedindevrel.com) Sponsored by This Dot Labs
On today's episode of AI Proficiency: Turning Tomorrow into Today, we are joined by Ashish Bansal, Engineering Lead at Google. Ashish took us through the process of microchip design and what kinds of processes they help to streamline, as well as the attention to detail that programming requires. Stay tuned to hear more about what goes into refining the user experience on platforms like Twitch and Twitter, and the biggest differences he sees between Machine Learning and Artificial Intelligence. CDAO: https://www.ai.mil/ Tradewind AI: https://www.tradewindai.com/ ATARC: https://atarc.org/
In this episode, Rob Ocel is joined by Dominik Dorfmeister (Frontend Tech Lead at Adverity and Maintainer of TanStack Query), to talk about open-source software, TanStack Query, and more! They discuss the rapid growth of the TanStack, how Dominik first became involved as a core team maintainer and the challenges he's faced, the future of the TanStack and data-management as a driving force in front-end framework development, using TanStack in other frameworks, and the joys of skiing. Guest Dominik Dorfmeister, Frontend Lead at Adverity and Maintainer for TanStack Query Host Rob Ocel, Software Architect and Engineering Lead at This Dot Labs Sponsored by This Dot Labs
A data analytics pipeline is important to modern businesses because it allows them to extract valuable insights from the large amounts of data they generate and collect on a daily basis. This leads to better decision making, improved efficiency, and increased ROI. However, despite your best efforts, sensitive customer data tends to find its way into your analytics pipelines, ending up in your data warehouses and metrics dashboards. Replicating customer PII to your downstream services greatly increases your compliance scope and makes maintaining data privacy and security significantly more challenging. In this episode, Piper Keyes, Engineering Lead at Skyflow, joins the show to discuss what goes into building a privacy-aware data pipeline, what tools and technologies you should be using, and how Skyflow addresses this problem. Topics: What is a data analytics pipeline? What does it mean to build a privacy-aware data pipeline? Can you give some examples of use cases where privacy-aware data pipelines are particularly important? What does it mean to de-identify data, and how does that work? What are some common techniques used to preserve privacy in data pipelines? How does analytics work for de-identified data? How do you balance the need for data privacy with the need for actually being able to use the data? What does it take to build a privacy-aware pipeline from scratch? What are some of the biggest challenges in building privacy-aware data pipelines? How does something like this work with Skyflow? Let's say I have customers' transactional data from Visa, how could I ingest that data into my data warehouse but avoid having to build PCI compliance infrastructure? Walk me through how that works. Could you build a machine learning model based on the de-identified data? Once I have the data in my warehouse, let's say I needed to inform a clinical trial participant about an issue but I also want to maintain their privacy, how could I perform an operation like that? 
What other use cases does this product enable? Resources: Running Secure Workflows with Sensitive Customer Data Maximize Privacy while Preserving Utility for Data Analytics
Serverless is taking the world by storm. Despite there being countless blogs, tutorials, and recommended best practices around serverless, there's shockingly little material on observability. Why? It's hard! Join AJ and Allen as they talk about the difficulties of observing a serverless application, dive into the differences between KPIs and infrastructure metrics, what the future holds for observability, and more. About AJ: AJ Stuyvenberg is the Engineering Lead for Serverless APM at Datadog, and has been a member of the Serverless community for 6+ years. He's an AWS Community Builder, serverless meetup organizer, open source author, and frequently blogs about serverless topics. Before Datadog, he was a Principal Engineer at Serverless Inc, the company behind the Serverless Framework. In his spare time, AJ is an avid BASE jumper and enjoys flying his wingsuit in the Alps. Links Twitter - https://twitter.com/astuyve LinkedIn - https://www.linkedin.com/in/aaron-stuyvenberg Blog - https://aaronstuyvenberg.com Email: aj@datadoghq.com --- Send in a voice message: https://podcasters.spotify.com/pod/show/readysetcloud/message Support this podcast: https://podcasters.spotify.com/pod/show/readysetcloud/support
Deniz Dalkilic is the CTO of MRHB and Co-Founder of Paribus. Deniz has a Master of Science in Computer Software Engineering from the City University of London and a software development career spanning more than ten years. Scouted by Tottenham Hotspur FC at 13, Deniz was actively involved in football until interrupted by certain health issues, whereupon he decided to pursue software engineering instead. Deniz worked in the Web2 world with startups and large corporations: he was an Engineering Lead at Vodafone, a Senior Software Engineer at Thomson Reuters, and a Software Engineering Lead at Capgemini, where he developed software solutions for clients including Royal Mail, Pole Star Space Applications, and Lloyd's Register. In 2018, Deniz ventured into the blockchain space and has since co-founded two Web3 companies: decentralized bounty management platform BountyOx, where he was CTO for two years until 2020, and Paribus in 2021, a Cardano-powered cross-chain borrowing and lending protocol for NFTs, liquidity positions, synthetics, and yield-bearing assets. In the latter half of 2021, Deniz joined MRHB DeFi as Chief Technology Officer, where he is in charge of the company's product and architecture development. In this conversation, we discuss: - Decentralizing cloud computing for the Web3 industry - Benefits of tokenizing commodities - What is Halal DeFi? - Challenges of building a Halal DeFi platform - How staking works from a Halal point of view - Importance of philanthropy for Muslims - Decentralized philanthropy (an upcoming MRHB DeFi product) - Traditional methods of giving to charity - Being scouted by Tottenham Hotspur at the age of 13
MRHB Website: mrhb.network Twitter: @MarhabaDeFi Discord: discord.gg/8ZQDHFHvja Telegram: t.me/mdf_official
Deniz Dalkilic Twitter: @DnzDlklc LinkedIn: Deniz Dalkilic --------------------------------------------------------------------------------- This episode is brought to you by PrimeXBT. 
PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers. PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50 This promotion is available for a month after activation. Click the link below: PrimeXBT x CRYPTONEWS50
In this episode, Rob Ocel is joined by Shai Reznik, Founder and CEO at HiRez.io and Community DevRel at Qwik, to talk about Qwik. They discuss the history of front-end development and frameworks, what has led to this recent surge in framework development, what architectural principles Qwik values most, how Qwik and QwikCity differ, how to build Qwik apps, and so much more! Guest Shai Reznik - Founder and CEO at HiRez.io and Community DevRel at Qwik Host Rob Ocel - Software Architect and Engineering Lead at This Dot Labs Sponsored by This Dot Labs
In this episode, Rob Ocel is joined by Jordan Powell, Developer Experience Engineer at Cypress, to talk about Component Testing in Cypress. They discuss the explosive recent growth of Cypress, the exciting new features just launched in recent versions, and the changing shape of the testing "trophy". Then, they dive deeply into component testing in Cypress: motivations, best practices, tips, and so much more! Guest Jordan Powell - Developer Experience Engineer at Cypress Host Rob Ocel - Software Architect and Engineering Lead at This Dot Labs Sponsored by This Dot Labs
In this episode, Rob Ocel is joined by Ray Gesualdo, live at Connect Tech 2022, to discuss API development and how the process is a lot like building with the internationally famous toy, Legos. They share some fond memories of growing up and now raising kids around Legos, talk about the traits and characteristics of the best component and service APIs, and show how some of the great victories and lasting lessons of Lego apply to building more resilient, modular, reusable, and consistent software. Host Rob Ocel- Software Architect and Engineering Lead at This Dot Labs Guest Ray Gesualdo- Staff Software Architect at SalesLoft Sponsored by This Dot Labs
In this episode, Rob Ocel and Jesse Tomchak are joined by Jeremy Meiss live at Connect Tech 2022. They talk about the table stakes of CI/CD and having a high performing team, and the vast array of options available to run DevOps and be successful. They also dive into the topic of mentorship and documentation, and how it benefits teams and the industry at large. Hosts Rob Ocel- Software Architect and Engineering Lead at This Dot Labs Jesse Tomchak- Software Architect at This Dot Labs Guest Jeremy Meiss- Director of DevRel at CircleCI Sponsored by This Dot Labs
In this week's episode, we are joined by Troy Heland, Security Engineering Lead at Verizon Asia Pacific's Security Operations Centre. In this episode we discuss in detail the findings from this year's Mobile Security Index, including the amount of work done on mobile and over-usage causing distractions and bad decisions. We also talk about hot topics like the right-to-disconnect laws being passed in Europe and data leakage through unsanctioned apps. We wrap up with how zero trust should be applied to mobile devices.
Futurum Research Senior Analyst and VP of Sales Steven Dickens talks with Kyndryl's Richard Baird, VP, CTO and Engineering Lead, Core Enterprise and zCloud, and Jonathan P. Dietz, VP, Core Enterprise and zCloud U.S. Practice Leader, during the SHARE Conference in Columbus. Their conversation explored Kyndryl's growth trajectory over the last year, their focus on building alliances, and what they are doing to modernize the mainframe platform. To learn more about Kyndryl visit their website.
Steven Laver (Solana Mobile Engineering Lead) joins The Zeitgeist to discuss how Solana Mobile Stack and the Saga phone will enable the user experiences and rich ecosystem that we need for the future of Web3.
Show Notes
00:05 - Intro
00:45 - Background
01:27 - Before Solana
02:24 - Why mobile and SMS?
04:12 - What is SMS?
06:15 - Seed Vault
08:39 - ARM TrustZone
10:06 - Security with Seed Vault
11:44 - Restrictions with Seed Vault
12:46 - Importing and exporting keys
15:02 - Mobile wallet adapter
21:39 - Plans for deep linking
23:27 - The dApp store
27:31 - Plans for Solana Pay
31:18 - Saga
33:16 - Expanding SMS to other devices and blockchains
38:54 - Working with TJ from Mountain Pay
40:18 - Where to connect with Steven and SMS
41:14 - Outro
Transcript
Brian (00:06): Hey, everyone. And welcome to the Zeitgeist, the show where we highlight the founders, developers, and designers who are pushing the Web 3.0 space forward. I'm Brian Friel, developer relations at Phantom, and I'm super excited to introduce my guest, Steven Laver. Steven is the lead software engineer for a suite of new Solana developer tools known as SMS, the Solana Mobile Stack. SMS was unveiled recently at NFT NYC alongside a flagship smartphone that will be powered by this new suite of technologies. Steven, welcome to the show.
Steven (00:37): Thanks Brian. Excited to be here.
Brian (00:39): Thanks for coming on. We have a lot of ground to cover today, a couple of really great announcements that you guys just released. But before we dive into all that, could you give us a brief background of who you are and how you became involved with building mobile software for Solana?
Steven (00:52): Sure. So I have been doing professional engineering now for round about 20 years, but about 15 of those have been spent working on phones in one fashion or another. Right after college, I kind of fell backwards into a job. 
I just crossed the road from my university and started working for Research In Motion, and back then worked on BlackBerries, back when they were cool. And since then I've had a long career building all kinds of phones and software for phones. A few forays out into some other areas of consumer electronics, but I keep seeming to be dragged back into phones. So I must love it.
Brian (01:28): Right before Solana, what were some of the companies you were working at?
Steven (01:31): When I took the call from Anatoly for this cool new, exciting opportunity for Solana Mobile, I was sitting in a beige conference room at a very large company that likes to make very monochromatic products. And when he started selling me on what Solana Labs was building here and what the vision for the SAGA device was, at the time under a very cool code name, I was pretty quickly sold on that. And previous to that, I spent a few years working at Google, working on the Android products there. So like I said, long background, lots of different smartphones and phone software in there, but this is the most excited I've been about a phone project in a long, long time.
Brian (02:09): Oh, that's awesome to hear. Most people who interact with crypto today do so on a desktop from a browser extension. Maybe they're signing with a Ledger, maybe they're signing with a web wallet like Phantom. And on Solana specifically, mobile wallets only launched a little over six months ago. Why is the time right now for a push into mobile, and why SMS specifically?
Steven (02:29): So this is actually exactly the problem statement that convinced me that I should join Solana Labs and help build this awesome product. Our phones are everything to us. They're our web browsers. They are cameras. We watch TV on them. We play games on them. They're the first place we go to for everything; they're even our alarm clocks now. 
And the fact that the Web 3.0 ecosystem just isn't present on our phones the same way it is on our desktops, that to me is what really gets me excited and got me interested in building this. We're really at square one here, as far as building a rich ecosystem for Web 3.0 and for Solana on mobile devices. And we get to use SAGA and use SMS. This is our stake in the ground.
Steven (03:13): This is us saying, this is how we should be building for mobile devices. These are the user experiences we need to offer. These are the technologies we need to offer. And being on the ground floor of that, getting to build these fundamental building blocks and then giving them to the community, so the community can then go build all the great, amazing ideas that are, honestly, things I would never have thought of. But when I look back at them, I'm like, wow. We have such a rich and exciting community of builders for Solana. That's what I'm most excited about for building SAGA and building the Solana Mobile Stack.
Brian (03:45): That's awesome. Yeah. From Phantom's side, we've seen just in the last six months that even though that foundation for building mobile apps isn't really there today, SMS hopefully will be able to lay this great foundation. Like you said, we've seen that mobile has been our fastest growing segment by far. So we're super excited for this as well. SMS covers a lot of new developer features here. It's kind of this umbrella term for this whole suite of products that you guys are building. In your own words, what is SMS, and how should developers be thinking about this new suite of tools?
Steven (04:17): I think of it from two angles. So first of all, let me talk about it just briefly from a consumer angle: our Web 3.0 ecosystem for consumers. Many of us are very in the know about what that means, but for some others, they may be a little less so. They're new to the space, they're still learning it. 
There's a lot of words and phrases and sayings and memes that are very unique to this ecosystem that people take a little while to get onboarded with. From a consumer perspective, SMS really allows them to understand what a phone offers in terms of its capabilities with Solana. And so by participating in an ecosystem, or on a device that has the SMS technologies on it, to a certain extent they know what they're going to get there. They know they're going to get a device that is going to work well, and they're going to be able to fully participate in the Solana ecosystem.
Steven (05:06): For developers, SMS is a collection of technologies. The big ones being: we have Seed Vault for secure key custody. We have mobile wallet adapter for connecting dApps to wallets. We have deeper, richer integrations with the OS for Solana Pay, and then the Solana dApp store, which people are really, really excited about.
Steven (05:25): And so from the developer perspective, we have this collection of tools which will continue to grow over time. We're going to put more libraries in there. We're going to have more samples for developers, and we're going to keep expanding on this initial set of technologies. And I'm happy to talk about each of those in detail, but as far as SMS for developers, we've got an SDK out now and we're going to be continuing to enhance that. And each of those technology offerings within SMS has value to offer to a different slice or a different segment of our developer ecosystem.
Brian (05:55): That's awesome. They're very complementary as well. Let's dive into each of them. So you mentioned four of those there. We have the Seed Vault. We have the mobile wallet adapter, the dApp store that you guys are launching, and then better integrations around Solana Pay, which you guys recently unveiled earlier this year. Let's start with the Seed Vault. I feel like this is the foundation upon which everything else is built. 
Can you describe a little bit more, like, what is the Seed Vault? Is it related to what a secure enclave is on iOS, or a key store? How should people be thinking about this new term?
Steven (06:25): So there's a pretty big gap between the level of security that a hardware wallet offers, something like a Ledger or a Trezor. Those are great devices, they're purpose-built hardware, and they offer the maximum level of security for users. I think they do a fantastic job with that, but ultimately they are a little bit limited by the fact that they're a separate device. It needs to be connected to your wallet. It's got a very limited user interface in terms of LCD displays or buttons, et cetera. Whereas on the other end of the spectrum, we have the software wallets running on Android or iOS devices, like Phantom for example. User experience is top notch, but the environment that they're running in, the Android or iOS high-level operating systems, they are general-purpose operating systems. They're connected to the network. They run other user code on them.
Steven (07:15): And so while they can offer that great user experience because of the environment they operate in, they don't have quite the same level of security as a hardware wallet would. Seed Vault allows us to bring a lot of the benefits that a hardware wallet is able to, in terms of custody of keys, in terms of taking advantage of extremely secure hardware on mobile phones, and bring those to the software wallets. And so I think that's actually an important point for Seed Vault. We aren't actually building any new or novel hardware into phones to bring this technology up to wallets. Instead, we're taking advantage of the very highly secure elements that already exist on phones. And we're integrating those into the system layer, and we're integrating Solana blockchain operations into the system to support these functionalities. 
So as a concrete example: your seeds are sort of the root of all your secrets on the Solana blockchain. All your wallet accounts, all your private keys, everything is derived from those seeds.
Steven (08:13): We use a secure element on SAGA to secure your seeds in a way that's very protected, even down to the level of forensic attacks. Those are the kind of attacks that would go on in, like, a lab, where they take your device apart and use all kinds of equipment to try to extract your secrets from that device. So we use the secure element, specialized hardware very similar to the secure enclave that you would hear about on an iOS device, for example. And then we combine that with a very secure operating environment that is available on most Android devices. It's called Arm TrustZone, and it's an environment that is running below the level of Android. So beneath everything the user sees and is used to seeing in terms of the Android operating system, there's actually another operating system running below that on the device. It's a very specialized environment.
Steven (09:02): It's very secure, and it's called Arm TrustZone. We run a trusted application within that environment that cooperates with the secure element to do all of your signing operations: all of your key derivations from that initial seed, and all of the secure signing operations based on that, to sign transactions on behalf of the wallets that are running in the Android OS. We make use of a bunch of specialized secure technologies, such as secure input and secure display. So if you have a password associated with your seed, when you type in that password, you're actually not even typing that into Android. You're typing that into a specialized OS, highly secure. It actually takes over the display temporarily, so that when you enter your password, it's only going into an environment that is secure and ready to accept it. 
And then once we sign a transaction on behalf of the user, we hand that back to the wallet, for example, Phantom running in the Android operating system, and then Phantom takes that and is able to submit it to the Solana blockchain for processing.
Brian (10:04): That's a great overview. Thanks for that. So is it safe to say, from like an end user's perspective, that the Seed Vault is essentially bringing the same security guarantees, or even more, than a traditional hardware wallet would, but with kind of an added ease-of-use component, being that it's directly embedded into your phone?
Steven (10:21): So the hardware wallets are designed from the ground up to be the most secure wallets possible. And certain design choices that they've made, such as not having network connectivity on them (they're air-gapped devices) and being designed with very, very optimized hardware specifically for security, mean that those are pretty much the perfect devices for the security of seed storage. But as we all know, the trade-off comes in terms of the user experience.
Steven (10:48): Seed Vault gets us most of the way there. We get to take advantage of secure elements for seed encryption and specialized processing environments. It's still running inside a general-purpose device, which does have network connectivity, although the secure environments themselves on the device don't. So cold storage wallets are definitely going to still have a place in the ecosystem for very, very secure storage of high-value assets, high-value wallets, for example, but Seed Vault gets us a substantial distance closer to that on phones. And it also does it in a way that remains readily available and easy for users to use. So Seed Vault is very much a day-to-day technology that will allow users to bring a huge amount of security to their seeds and their keys while still maintaining all the usability of an Android device.
Brian (11:37): That's fantastic. 
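The derivation flow Steven describes, where every account key traces back to one seed, can be sketched with SLIP-0010-style hardened ed25519 derivation, a scheme commonly used for Solana key paths. This is an illustrative sketch only, not Seed Vault's actual implementation; the path constants and function names are assumptions for the example.

```python
import hmac
import hashlib

HARDENED = 0x80000000  # ed25519 derivation only supports hardened children

def master_key(seed: bytes):
    # SLIP-0010 master key for ed25519: HMAC-SHA512 keyed with "ed25519 seed"
    digest = hmac.new(b"ed25519 seed", seed, hashlib.sha512).digest()
    return digest[:32], digest[32:]  # (private key, chain code)

def derive_child(key: bytes, chain: bytes, index: int):
    # Hardened child: HMAC-SHA512(chain, 0x00 || key || big-endian index)
    data = b"\x00" + key + (index | HARDENED).to_bytes(4, "big")
    digest = hmac.new(chain, data, hashlib.sha512).digest()
    return digest[:32], digest[32:]

def derive_path(seed: bytes, path):
    # e.g. (44, 501, 0, 0) ~ m/44'/501'/0'/0', a path many Solana wallets use
    key, chain = master_key(seed)
    for index in path:
        key, chain = derive_child(key, chain, index)
    return key

# Two accounts, one seed: different indices yield independent private keys
seed = bytes.fromhex("deadbeef" * 8)  # in practice, derived from a BIP-39 phrase
account0 = derive_path(seed, (44, 501, 0, 0))
account1 = derive_path(seed, (44, 501, 1, 0))
```

The point of the sketch is the one-way fan-out: back up the single seed phrase and every derived account key is recoverable, which is why the seed itself is what Seed Vault locks inside the secure element.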
I think anyone who's been around in crypto long enough deeply understands that trade-off between security and ease of use. Can Seed Vault handle all types of signing, or is it restricted in any way as to what types of transactions or messages it can sign?Steven (11:53):For the initial launch we're focused on the Solana chain. All the operations built into Seed Vault are the cryptographic operations necessary for key generation, key storage, and transaction signing on Solana. That said, like I said before, we're not building any new hardware into phones to support this. We're making use of hardware, very, very secure hardware, that was already present. This is a systems integration problem. And so we're building this really, really secure and specialized system deep into the OS, below the level of Android, so that users can have a secure Solana experience. But there's nothing that would stop us in the future from expanding this as well. So what I would say to users is, go to Solanamobile.com. We have a wait list there, but you can also leave comments as well. So please, if other chains are an area you have an interest in, that would be a great place to let us know about it.Brian (12:45):That's fantastic. And one final point on the Seed Vault: is it possible to import or export your keys to another phone, or is the Seed Vault tied to essentially one device?Steven (12:56):So the Seed Vault will support the standard BIP-39 seed phrases. So those 12 or 24 word seed phrases that we're all intimately familiar with. A big part of key custody is understanding the importance of protecting your own keys. We strongly believe in self custody of keys and secrets with the SAGA device. And so when you first set up the device, the user will be guided through a process to either create a new seed or to import an existing seed. 
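The "12 or 24 word" figure Steven mentions falls directly out of the BIP-39 arithmetic: 128 bits of entropy plus a 4-bit checksum split into twelve 11-bit word indices, and 256 bits plus an 8-bit checksum into twenty-four. A minimal sketch, with the caveat that the placeholder wordlist below stands in for the real 2048-word BIP-39 English list:

```python
import hashlib
import secrets

# Placeholder wordlist: the real BIP-39 list has exactly 2048 English words.
WORDLIST = [f"word{i:04d}" for i in range(2048)]

def entropy_to_mnemonic(entropy: bytes) -> list[str]:
    """BIP-39: append a checksum (the first ENT/32 bits of SHA-256 of the
    entropy), then split the result into 11-bit groups, each indexing a word."""
    ent_bits = len(entropy) * 8
    checksum_bits = ent_bits // 32
    checksum = hashlib.sha256(entropy).digest()
    bits = int.from_bytes(entropy, "big") << checksum_bits
    bits |= int.from_bytes(checksum, "big") >> (256 - checksum_bits)
    total = ent_bits + checksum_bits
    return [WORDLIST[(bits >> (total - 11 * (i + 1))) & 0x7FF]
            for i in range(total // 11)]

twelve = entropy_to_mnemonic(secrets.token_bytes(16))   # 128-bit seed
twenty_four = entropy_to_mnemonic(secrets.token_bytes(32))  # 256-bit seed
```

The checksum is also why a mistyped word is usually caught on import: a random substitution almost always breaks the checksum rather than producing a different valid phrase.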
As part of that process, we'll be guiding the user to back up their seed phrase, to write down, for example on a reference card, all of the words in their seed phrase, and store that somewhere safe and secure. That seed phrase can be used to recover your seed later. If you were to get another SAGA device, it could be used to import your seed into that other device.Steven (13:47):Or if you do already have wallets elsewhere, you can use that seed to import those other wallets into the SAGA. There is a security caveat that comes with that, which is that your seed is only as secure as the least secure place you've ever stored it. So our recommendation will be for users to go and create a brand new seed when they get the SAGA device. We've built this great Seed Vault with super secure storage capabilities, and we would really like for users to create new seeds. But we also understand that users may already have many accounts set up, and they may want to add existing seeds into their Seed Vault to facilitate the transfer of their assets to a brand new seed for the device. And so we will support both importing and exporting seeds on the SAGA device.Brian (14:30):So I think that does a really great job of laying the foundation for the Seed Vault. You mentioned SAGA, the flagship phone upon which all this is built. There are three other technologies, though, that you guys are also bringing to market as part of SMS.Brian (14:43):One of the next ones which I want to cover is the mobile wallet adapter. I think anyone who's used Solana, both from a development perspective or as an end user, is very familiar with the wallet adapter that we have on web. It does a great job of normalizing all the different wallets on Solana, making it really easy to just click, see a drop down menu, and connect to your favorite wallet. What is the mobile wallet adapter? 
How would you say it differs from the web wallet adapter, and what can users expect to see when they interact with one of these?Steven (15:12):First of all, mobile wallet adapter is actually the one technology in all of SMS that I'm personally most excited about. It's the least flashy. It's something that users will hopefully never even know exists, because it's such a fundamental and basic thing that users will assume that this is just how things are supposed to work. Mobile wallet adapter is an analog of the wallet adapter on the web that we all have come to know and love. It's the communication fabric by which we're going to bind dApps and wallets on mobile devices together. So on the web, you're used to visiting some dApp, clicking that connect wallet button, seeing a popup of the available wallets that you have installed in your browser, picking one, and just getting this seamless transaction signing experience. Unfortunately, the same thing doesn't exist today on mobile devices, either on Android or on iOS devices, for the Solana chain.Steven (16:04):And that I think is the number one thing that is holding back broad mobile adoption for Solana and for Web 3.0 on mobile devices. So just like we built a protocol and a plug-in interface for wallet adapter on the web, we're building something very similar on mobile devices. We're going to have a web socket based communication protocol that allows dApps to connect to wallets wherever they are for transaction signing purposes. And that "wherever they are" I think is the big key difference between transaction signing on mobile devices versus transaction signing on laptops or desktops. With the traditional wallet adapter, there's one environment that everything operates in: the web browser. And it's a great sandbox with a super rich set of tools. But on mobile devices, the ecosystem is a little bit broader. 
We have dApps that will run in your web browser.Steven (16:57):Any mobile friendly dApps should be able to work in your mobile Chrome, just like they would work in desktop Chrome. You've got native applications that run on the device as well, whether those are written in Kotlin or Java on an Android device, or in a cross platform framework like Dart and Flutter, or React Native. And then a natural extension from there is saying, well, we've built this great wallet on these SMS devices with Seed Vault. Why can't this be my only wallet? Why do I even need to have a different wallet on every platform that I want to use dApps on? And so part of what we're building with mobile wallet adapter is the concept of remote signing as well. You've got this phone, you've got a wallet on there, it has network connectivity, it has a great user interface.Steven (17:42):This should be able to act as a remote wallet for dApps that are running on other nearby mobile devices, or even on your nearby laptops and desktops. So mobile wallet adapter is a protocol, and it's a fabric that's going to bind all of these things together with the initial release of SMS. We have a reference implementation that we've built for Android, but this is an area we see expanding beyond Android as well. So the mobile wallet adapter protocol was designed to be agnostic to the platform on which it's running. Any platform that has some of these standard web technologies, like web sockets, for example, would be able to participate in the mobile wallet adapter protocol.Brian (18:19):Yeah, that's fantastic. That's super exciting for us. You hit on a couple of things there, one of which I think I could sense your excitement over: the ability to do this remote signing on your phone while you're interacting with a web app, either on another mobile device or potentially on your laptop. Is this similar to what WalletConnect is on Ethereum? 
I know we haven't really had WalletConnect on a major dApp on Solana yet. How is this essentially handling this connection? Is there some middleman server involved? How are you guys thinking about that?Steven (18:49):In principle, it is similar to the functionality provided by WalletConnect, though I think we've definitely made some design decisions in the mobile wallet adapter space to make it very suitable for operation on mobile devices. We've really optimized it for local use cases. And as part of that, one of the design decisions we've made is that we shouldn't have to reach out to an intermediate server during the signing process, if the operations that you're performing are those that can be done entirely locally on the device. So, to give a concrete example, you have the Phantom wallet installed on the device, and let's say you wanted to use Magic Eden through your Chrome web browser on that same device. Because they're both running locally, there's no reason we should have to reach out to an intermediate server to make that connection between those two parties.Steven (19:38):And so we've split up the process into two phases. The first of which we call association, which answers the question of, well, how does Magic Eden running in the browser even know what wallets are available on the device? How does it start up the wallet so that it's in front of the user and the user has context of what's happening? And then how does it create an encrypted channel through which communications can happen? On Android, we've done that through an intent based scheme and then a Diffie-Hellman key exchange, but we've built the protocol in a flexible way so that we can add additional types of association in the future.Steven (20:13):For example, we have the ability to use QR codes for association. 
You can imagine scanning a QR code from your mobile phone, and that would encode all the information necessary to inform the two parties of each other. Or alternatively, there are other standards we want to explore, like Web Bluetooth, which gives you a great way to only connect to devices that are in proximity with you.Steven (20:35):So in terms of security, it means that you can have a connection that also has a locality element to it, which would be really interesting when it comes to making use of very highly secure operations, like signing with your private keys for your wallet accounts.Steven (20:50):The second half of the protocol is connections, and we use web sockets. We create an encrypted channel, and then all of the operations that a user would make use of through wallet adapter today (authorizing a dApp, signing transactions, sending transactions across the network), all of that has been created in the mobile wallet adapter protocol as well. So, taken all together, that's a whole bunch of technology running, hopefully invisibly to the user. And we're even building a plugin for regular wallet adapter, so that dApps are able to get support from mobile wallet adapter with just an extremely small amount of work. Rebuild, select the right plugins for wallet adapter, make sure your dApp is mobile web friendly, and you'll be ready to go on day one.Brian (21:34):That's great. That's very elegant too, getting rid of the middleman server and using just the local network. Having been at Phantom for a while, we've personally sensed the frustration in the mobile scene. A lot of wallets today are kind of forced to have these in-app browsers, just given the state of the mobile phone industry. One thing that Phantom has done to get around this, in particular in interacting with native dApps, is deep linking. Does mobile wallet adapter have any plans for deep linking? 
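The association handshake Steven outlines, an intent to wake the wallet followed by a Diffie-Hellman key exchange to establish the encrypted channel, rests on the property that both parties compute the same secret without ever transmitting it. A toy finite-field sketch of that property follows; a real implementation would use a vetted elliptic-curve exchange and proper parameters, not this illustrative 64-bit prime:

```python
import secrets

# Toy finite-field Diffie-Hellman. These parameters are for illustration
# only; 2**64 - 59 is far too small for real use.
P = 2**64 - 59  # a prime modulus
G = 2           # public base

# dApp side: pick a private exponent, publish G^a mod P.
a = secrets.randbelow(P - 2) + 1
A = pow(G, a, P)

# Wallet side: pick a private exponent, publish G^b mod P.
b = secrets.randbelow(P - 2) + 1
B = pow(G, b, P)

# Each side combines its own private value with the other's public value.
# Both arrive at G^(a*b) mod P without the secret ever crossing the channel.
shared_dapp = pow(B, a, P)
shared_wallet = pow(A, b, P)
assert shared_dapp == shared_wallet
```

An eavesdropper who sees only A and B cannot feasibly recover the shared value, which is what lets the subsequent web socket traffic be encrypted end to end.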
How are you guys thinking about handling those deep linking protocols?Steven (22:07):Yep. So number one, I think the deep linking protocols are a very elegant solution to the problem of how we break out of the world of browsers inside of wallets. I think browsers inside of wallets are a very expedient solution, but I believe they're a stepping stone on the way to having dApps and wallets as full participants on a mobile device, using the user experience paradigms and patterns that users are used to: standalone native applications that are able to directly communicate with each other.Steven (22:41):The deep linking protocols, I think, are very straightforward and do a great job for the use cases local to the device. And so I view those as complementary to mobile wallet adapter. Mobile wallet adapter is designed to handle those use cases as well, but also the broader use cases of on and off device. And I think that is where mobile wallet adapter can take the ecosystem even one more step forward: through separation of the transport layer from the association layer, we're able to design for use cases that extend beyond the device and beyond what the deep linking protocols are currently able to accomplish today.Brian (23:21):That's great. So we hit a lot here with the mobile wallet adapter, which will be impacting how dApps and wallets interact with one another. You guys are also releasing another initiative that'll be impacting dApps, which you guys are dubbing the dApp store. The big takeaway here is that there won't be these rent extracting fees of 30% of all commerce in apps from some of the big players that we all know and love. Can you touch a little more on what this dApp store is? What are some of the plans for it, and how should current dApp developers be thinking about it?Steven (23:53):Sure. So I have received more questions on the dApp store than on anything else. 
It just goes to show how excited developers are for this. We're all very, very intimately familiar with some of the difficulties that are posed by the current app store ecosystems for mobile devices, the Google Play Store for Android and the App Store for Apple. And they primarily fall into two categories. There is the policy aspect: what am I allowed to do, what am I not allowed to do, is my app going to be approved for the store, or am I going to be rejected for what often feels like an arbitrary reason? And then the second part of it is the economic angle. A 30% cut of fees is a pretty hard pill to swallow, especially when we come to some of the use cases like purchases of digital goods. A 30% fee is pretty much a nail in the coffin of trying to do, for example, an NFT auction house or a marketplace on a device like an iPhone.Steven (24:51):And the Solana dApp store... This is our opportunity to change that. Our north star on this is that once a dApp is installed, any further interactions between that dApp and the user are a matter between that dApp and the user. We're not going to be getting involved with ongoing fees or anything of that nature. And in fact, we're making our dApp store no fee. So there won't be transaction fees in the store if users want to purchase apps or applications from the store. And like I said, once it's installed, any further relationship is between the dApp and the user. We're very much building this in the model of a permissionless Web 3.0 experience.Steven (25:32):Now that said, I do want to touch on one really important area, which is the curation of the catalog. In my background, I worked for a couple of years on an app store for mobile devices when I was at Microsoft. And I was a young, slightly naive engineer at that point, and I didn't really think that the curation problem would be that substantial. 
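To make the economics concrete, here is a quick back-of-the-envelope comparison with illustrative numbers; the 2% marketplace fee is an assumption for the example, not a figure quoted in the conversation:

```python
# Illustrative only: what a 30% platform cut does to a digital-goods sale,
# compared with a hypothetical low on-chain marketplace fee.
sale_price = 100.00
app_store_cut = 0.30      # traditional in-app purchase fee
marketplace_fee = 0.02    # assumed 2% on-chain marketplace fee

seller_after_app_store = sale_price * (1 - app_store_cut)    # 70.00
seller_after_marketplace = sale_price * (1 - marketplace_fee)  # 98.00
```

For a marketplace whose own margin is only a few percent, a 30% platform cut exceeds the entire business model, which is the "nail in the coffin" Steven describes.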
And I think that in fact, the trust and safety aspect of app stores is probably the single most important topic to look into. So at the beginning, we are going to be curating the contents of the app store, and we're going to be doing that to make sure that the contents are useful to users, but also because of the huge amount of trust users place in an app store: who is the publisher, and what are they doing to protect me?Steven (26:21):And so Solana Labs will be curating the contents at the initial release, but we do have aspirations to involve the community in the curation of this app catalog. The community's involvement is always a huge area for Web 3.0, in areas like DAOs, for example, and we have the same aspirations for Solana Mobile and the Solana Mobile dApp store.Brian (26:45):That's great. I think it's really important that you emphasize that from the start, setting expectations like that. In our experience, crypto can be really exciting. It's this new world, but it's also very permissionless, and that is a sword that cuts both ways. We've seen that firsthand being a wallet. We actually now run, I believe, the biggest block list of spam NFTs and scam domains on Solana. It's a huge, huge issue. It really is important to nip that in the bud, especially for new users when they're just getting acquainted with this ecosystem. We turned that block list open source, and we get community contributions to it every day now. I think whenever you're able to leverage the power of the end users, the community that's using this every day, that's a really great way to handle it.Brian (27:33):So we hit on a lot here. We just covered the dApp store. We also went over the Seed Vault, which lays the foundation for SMS, and then the mobile wallet adapter, which you're super excited about. We're super excited about that as well. 
There's one final component to this, which is Solana Pay. I think most people are familiar with Solana Pay at this point. It was unveiled earlier in 2022, I believe. There are a couple of great use cases around this: making point of sale a lot easier, using SPL tokens on Solana, getting rid of middleman fees once again.Brian (28:04):But I also think Solana Pay might at this point be a bit of a misnomer. I've seen some really great use cases leveraging Solana Pay with NFT ticketing. I know the Mountain Pay guys have built this great photo booth where you can snap a photo, then scan with your Phantom wallet, and it mints into an NFT right on your phone. And that's all running on Solana Pay behind the scenes. What do you guys have planned for Solana Pay as it relates to mobile and SMS? What can you tell us about that today?Steven (28:32):Mobile devices, we carry them everywhere with us, and in the last few years people have really, really started to use mobile payment technologies. On Android devices you have Google Pay, for example. So we have these perfect devices that you carry with you and that people are already used to interacting with in the real world to effectuate payments. And that's really what we're looking at for SMS and Solana Pay. We're actually not making any protocol changes at all to Solana Pay with the SMS stack. Instead, what we're doing is providing guidance on how wallets should integrate Solana Pay into the Android system. For many wallets, there's actually almost nothing to do here. Wallets like Phantom do a great job of already integrating some of these best practices into Android devices. But by providing a set of best practices, we can really make sure that there's a standardized way by which users can expect their phone to work with Solana Pay. 
In terms of snapping QR codes, in terms of tapping your phone on NFC readers, or in terms of interacting with Solana Pay links that are generated from within the mobile web browser.Steven (29:40):And so, by providing that set of best practices and providing samples on how to integrate Solana Pay, those best practices will give us a foundation by which we can expect that all of these real world Solana Pay interactions that users are going to be using with their phones will be consistent across devices. And by making it consistent, we give the other side of the equation, those who are working on merchant terminals, for example, an understanding, or a base, if you will: they know that if they develop some of these Solana Pay technologies, for example QR codes or NFC in merchant terminals, there's a base of devices that can take advantage of those.Steven (30:21):And so, so much of Solana Pay is going to be building out this whole network of providers, software providers, both on the merchant side as well as on mobile devices, to make sure that users can pay with Solana Pay in the real world, just like they're used to paying with, say, their credit cards using Google Pay. And a quick note: Google Pay will also be supported on the SAGA device. And so users should expect that their SAGA device will be their mechanism by which they can effectuate real world payments, whether it's through Solana Pay or through traditional payment networks.Brian (30:56):That's super exciting. So I think this is a great overview we just did of SMS, the whole suite of developer tools that you guys are unveiling. You've hit on this a couple of times, though: there is a flagship phone that you guys are releasing, SAGA. This is separate from SMS, but it will be powered entirely by SMS, as well as traditional things you would expect from Android devices, like Google Pay, which you mentioned.Brian (31:20):Let's talk a little bit more about SAGA. 
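The QR-code flow Brian and Steven describe is built on Solana Pay transfer-request URLs. A rough sketch of how such a URL is assembled follows; the field names reflect the published Solana Pay spec as I understand it, and the recipient address, label, and amount are placeholders invented for the example:

```python
from urllib.parse import urlencode, quote

def solana_pay_url(recipient, amount, label=None, message=None):
    """Build a Solana Pay transfer-request URL of the form
    solana:<recipient>?amount=...&label=...&message=...
    The recipient would be a base58 Solana address in practice."""
    params = {"amount": amount}
    if label:
        params["label"] = label
    if message:
        params["message"] = message
    return f"solana:{recipient}?{urlencode(params, quote_via=quote)}"

# Placeholder recipient address; a real one is a base58-encoded public key.
url = solana_pay_url("RecipientAddress11111111111111111111111111", "1.5",
                     label="Coffee Shop", message="Flat white")
```

A merchant terminal renders this URL as a QR code or NFC payload; the wallet that scans it parses the same fields back out and constructs the on-chain transfer.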
What is it like? I saw Anatoly up there on stage flashing it to the crowd. It looked pretty sleek. Would you say this is something that's purely for crypto natives, or can ordinary people continue to use this for their favorite apps, say TikTok or Instagram, without noticing much of a difference? How would you characterize this phone broadly?Steven (31:40):We've designed the phone with crypto natives in mind, but it is a standard Android device in every other way. And so it's going to be a full GMS device, meaning it has Chrome, it has Gmail, it has the Google Play Store. It has everything that users expect an Android device to do, with the SMS stack added on top. And so there's a huge amount of value here that we're going to be able to give to the Solana ecosystem, to degens who live and breathe their Web 3.0 and crypto. But it's also going to be a flagship Android device. It's got 12 gigs of RAM, 512 gigs of flash. It's got the latest and greatest Qualcomm Snapdragon chipset and a beautiful 6.67 inch OLED display. Everything about this looks and feels like a flagship phone.Steven (32:31):We got that device in just before the event, and so Anatoly was able to hand it around a little bit, show some people, let them touch and feel it in person. And it's an impressive device. We have a great partner in OSOM helping us build this device, and I am thrilled. I don't know that there's any hardware partner other than OSOM who could have helped us realize this vision the way we've been able to realize it for the SAGA device. I've had the good fortune to have a prototype I've been working on for the last few months, and I am thrilled for when people are able to get this device and hold it in their hands. It really does feel like a super, super premium, top end device.Brian (33:13):Jealous. You're one of the lucky few that has the actual incarnation of this in their hand right now. What is the plan to expand SMS to additional devices, though? 
We have SAGA, and I saw there's actually a huge backlog of pre-orders on this thing. But I'm sure a lot of people are thinking to themselves, well, I already have an Android phone, maybe a Google Pixel or a Samsung device. And there are a lot of folks probably listening to this who are on iOS devices. What is that timeline like? Is this something you think could be running on iOS one day? How are you guys thinking about rolling this out broadly across the smartphone market?Steven (33:50):So we started our conversation on SMS talking about the collection of technologies. And that, I think, is really the point to hit on here: SMS isn't just one technology. It's a whole series of them, and each of them has slightly different applications and slightly different system needs in terms of integrating them. At one end of the spectrum, we have Seed Vault. Seed Vault really needs the phone manufacturer to be directly involved in the systems integration process. It needs access to the secure element. It needs a trusted application that can run within the secure execution environment, Arm TrustZone, for example. And then it needs UI baked right into the system image, privileged UI that's able to make use of those lower layers in the secure execution environments of the device.Steven (34:40):On the other end of the spectrum, we have technologies like mobile wallet adapter, and mobile wallet adapter actually doesn't have any hardware requirements at all. It's purely a protocol system between wallets and dApps for binding them together. The specification for mobile wallet adapter is currently in draft. 
We're working with our wallet and dApp partners to finalize that, to make sure we can take all of the ecosystem feedback so that when we do release it, it serves as broad a set of use cases as possible.Steven (35:10):But mobile wallet adapter will actually be available and ready to use before the SAGA launches. And so over time, what it means to be an SMS device really comes down to how many of these SMS technologies are integrated into that particular device. As I was saying, some of them do need deep integration; we would have to work directly with hardware manufacturers, the Seed Vault being the primary one of those. So it's a little bit hard to say. From a user standpoint, SMS has a certain branding associated with it, but from a technology standpoint, there's a whole spectrum of what an SMS device could look like.Steven (35:45):In terms of specific devices, I can't comment on that, other than the fact that OSOM is an amazing partner. And we just view SAGA as the first step in many steps towards bringing all of SMS to mobile users, so that Web 3.0 really has a home with the Solana ecosystem on mobile devices. And then I did also want to hit quickly on iOS. iOS is not as open a platform as Android is. And so some of these technologies, we just don't have the capability to independently build into an iOS device.Steven (36:18):But we do also know how much consumers love iOS devices. And there are many people whose fingers we'd have to pry apart to get their iPhones out of their hands. And so technologies like mobile wallet adapter, we're designing them to make sure that they're not Android specific. There are things we can do to make sure that it becomes the fabric by which dApps and wallets communicate wherever they are, including on iOS devices. 
And so we're not forgetting about iOS, and we want to make sure that as much of SMS as possible works on as many devices as possible, whether those are Android devices or iOS devices.Brian (36:53):Well, speaking as somebody who's had a death grip on their iOS device for probably the last decade, I have to say, I am pre-ordering one. You guys have done a great job of convincing me. So I'm going team Android just for this. I'm super excited about it.Steven (37:07):That is great news. I want your entire audience to find me on Twitter and tell me the exact same thing. You'll make my week.Brian (37:13):That's great. So you hit a lot here about the plan for rolling SMS, each of the four components, out broadly across the smartphone market. You did, a couple of times in our conversation, hint at the idea that even though this is called SMS, the Solana Mobile Stack, really this could be applicable to a broad number of blockchains across the Web 3.0 ecosystem. How are you guys thinking about that problem? Is your core team focused on Solana for the time being, and you're inviting others in because it's open source? Is that the general framing of that? How are you guys thinking about unveiling this to Ethereum, and potentially Bitcoin one day, and more broadly across the Web 3.0 space?Steven (37:52):So we have a literal mountain of work to do to deliver the best Solana experience that we can on mobile devices. And so the Solana ecosystem remains firmly our goal right now. That said, I'll answer your question in two ways. This is an open phone, and we would never try to prevent another chain, for example, from being installed on this device. So while we are focusing on Solana, this is an Android device; everything that works on Android would work here. 
And we would never try to stop any of the other chains from participating on this device in all the normal ways.Steven (38:26):And then the other thing I would say is, we're always interested to hear from the community. I think I mentioned it earlier in the podcast, but if you do have an interest, you can always go to Solanamobile.com. Please, while you're there, place a pre-order, that would be awesome. But you can also register your interest in other things you would like to see on this device, whether those be features for the Solana chain, or, if you have interest in other chains, we'd love to hear about it.Brian (38:50):That's great. And one closing question, a good segue, that we always ask our guests: given your guys' focus on the Solana ecosystem, who is a builder that you admire in the Solana ecosystem?Steven (39:02):Oh, that's a good one. There's someone that I've had an opportunity to work with, TJ from Mountain Pay, and he has been contributing to the Solana Pay side of the ecosystem. And as crazy as it seems some days, when I wake up and say, oh, all we're doing is taking on the mobile phone industry by building SAGA, I think of what TJ is doing. He's just looked around at all the incumbents in the payment space and said, yeah, I think I want to go head on into this space.Steven (39:30):So he's a great guy. Every time I've had the opportunity to chat with him, I've always left thinking, wow, I didn't even think about that. So I really admire him, both for the work he's doing as well as for the guts to take on the space that he's taking on.Brian (39:48):Yeah, I couldn't agree more. His energy is pretty electric. I don't know how he does it. He's around at all the hacker houses at Mountain DAO, he's in the crowd, and then he's got his headphones on, banging away on his laptop, still shipping code. 
But yeah, he's got a great infectious energy, and I think there's no one better to be taking on the Stripes and the PayPals and the big payments giants of the world. He's someone I would want to have in my corner, for sure.Brian (40:13):Well, Steven, this has been a really great conversation. Thank you for going deep on SMS. I'm super excited about it. I'm ready to pre-order my SAGA right now. Where can people go to learn more, both about SMS and yourself? You mentioned your Twitter. I want to make sure that people can find you and ping you with their stories of how you're prying their iPhones out of their hands.Steven (40:35):To find out more about the SAGA and about SMS, hit up Solanamobile.com. We've got links there to the Solana Mobile Twitter, our Discord community, discord.gg/solanamobile, as well as a link to the mobile stack SDK, which is all in the open on GitHub. So I would say those are the best places to go to learn more about SMS and SAGA. Please join us in Discord. We're a friendly community. I'm there, I hang out there, I answer questions there, and lots of people from my team as well. So I'm looking forward to seeing all kinds of people from your audience join our Discord and help us build a really cool community around SMS and SAGA.Brian (41:16):I can't wait. Steven Laver, thank you for your time. This has been great.Steven (41:20):Thank you, Brian. Thanks for having me.
The recent news surrounding the Log4j vulnerability exposes the many cracks that exist within information security tools. Unfortunately, most technologies lack the visibility to detect critical vulnerabilities, creating massive enterprise-wide challenges. When I learned that Lakeside Software's technology is so unique that it identifies dormant vulnerabilities before they become a massive threat to organizations' information security, I felt compelled to find out more. Ryan Purvis, Head of Solutions at Lakeside Software, joins me on Tech Talks Daily. Ryan is focused on helping Lakeside customers solve complex problems and transform their businesses through the strategic application of its technology. In his 20+ year career, Ryan has led cross-functional teams in multiple roles, ranging from Chief Information Officer for HiLo Maritime Risk Management to Director for Cyber Defence at UBS and Engineering Lead for JP Morgan Chase. Ryan divides his time between the UK and South Africa. We discuss why many leading information security tools lack the visibility to detect critical vulnerabilities and the severe impact that this can have on enterprise organizations. We also talk about the importance of speed as a critical factor in remediation.
Andrew Xu and Daniel Aboul-Hassan speak with Connor McEwen (@connormcewen) from Uniswap Labs! Uniswap allows people to swap, earn, and build on the leading decentralized crypto trading protocol. (0:32) - Introduction (1:22) - What led Connor to join Dorm Room Fund (2:52) - Deciding whether or not to take a gap year during college (4:24) - Important lessons learned from returning to school (6:11) - Experience launching makeBU (8:48) - Being on the engineering side & what people on the Business and Engineering sides can learn from each other (10:55) - How Connor approached joining a large company vs. a startup (14:20) - Is Connor a Bitcoin millionaire? (14:30) - Connor's funny story about mining Bitcoin (15:31) - How to land an engineering role at an attractive startup (18:10) - 2013 startups vs. 2022 startups (20:06) - Hottest Web3 take (21:43) - How should engineers working at early-stage startups view fundraising? (23:31) - Staying up to date in the crypto space -- Get in touch with Connor! Twitter: @connormcewen LinkedIn: https://www.linkedin.com/in/cmcewen/
Henry Golding is a Principal Software Engineering Lead at Xbox Game Studios who champions the significance of Test-Driven Development and Automated Testing in game development. He's a native of the UK who has programmed for Sea of Thieves, Minecraft, Disney Universe, and Dead Space: Extraction for the Wii. We talk about his latest endeavors and discuss his journey into the games industry. Chapters: (00:00) Who is Henry Golding? (11:21) Software Architecture In Games (25:13) Development Team Dynamics (37:30) Academics & University (51:02) Deadspace Extraction (01:05:44) Rare & Work Visas (01:19:01) Breaking in at Microsoft (01:27:19) Final Round (01:37:43) John's Outro Links: Henry's LinkedIn: https://www.linkedin.com/in/hgolding/ Henry's MobyGames: https://www.mobygames.com/developer/sheet/view/developerId,451834/ The Games Automated Testing Resource Hub: https://trello.com/b/nGE5yqZk/game-automated-testing-resource-hub Henry's GDC Talk - Lessons Learned in Adapting the 'Sea of Thieves' Automated Testing Methodology to 'Minecraft': https://gdcvault.com/play/1027345/Lessons-Learned-in-Adapting-the Connect with the host: http://elkingpin.com Come On The Show: https://outofplayarea.com Apple Podcast: https://podcasts.apple.com/us/podcast/out-of-play-area/id1550421307 Spotify: https://open.spotify.com/show/0Da0qLAOBi6rV6ccSzx1gX?si=043f4ec5a6034bf0 Follow @outofplayarea & @elkingpin on Twitter Check out Descript using my affiliate link: http://descript.com?lmref=john@outofplayarea.com
This week, we're busting myths around Cloud Spanner with our guests Pritam Shah and Vaibhav Govil. Mark Mirchandani and Max Saltonstall host this episode and learn about the fantastic capabilities of Cloud Spanner. Our guests give us a quick rundown of the Spanner database software and its fully managed offerings. Spanner's unique take on the relational database has sparked some myths. We start by addressing cost and the idea that Spanner is expensive. With its high availability achieved through synchronously replicating data, failures are virtually a non-issue, making the cost well worth it. Our guests describe other features that add to the value of Spanner as well. Workloads of any size are a good fit for Spanner because of its scalability and pricing based on use. Despite rumors, Spanner is now very easy to start using. New additions like the PostgreSQL interface and ORM support have made the usability of Spanner much more familiar. Regional and multi-regional instances are supported, busting the myth that Spanner is only good for global workloads. Our guests offer examples of projects using local and global configurations with Spanner. In the database world, Vaibhav sees trends like the convergence of non-relational and relational databases, as well as convergence in OLTP and OLAP database semantics, and he tells us how Spanner is adapting and growing with these trends. Pritam points out that customers are paying more attention to total cost of ownership, the importance of scalable and reliable database solutions, and the peace of mind that comes with a managed database system. Spanner helps customers with these, freeing up business resources for other things. This year, Spanner has made many announcements about new capabilities coming soon, like the PostgreSQL interface on Spanner GA, Query Insights visualization tools, cross-regional backups GA, and more. We hear all about these awesome updates.
Pritam Shah Pritam is the Director of Engineering for Cloud Spanner. He has been with Google for about four and a half years. Before Spanner, he was the Engineering Lead for observability libraries at Google, which included Distributed Tracing and Metrics at Google scale. His mission was to democratize the instrumentation libraries. That is when he launched OpenCensus and then took on Cloud Spanner. Vaibhav Govil Vaibhav is the Product Lead for Spanner. He has been in this role for the past three years; before that he was a Product Manager for Google Cloud Storage at Google. Overall, he has spent close to four years at Google, and it has been a great experience. Cool things of the week Our plans to invest $9.5 billion in the U.S. in 2022 blog A policy roadmap for 24/7 carbon-free energy blog SRE Prodcast site Meet the people of Google Cloud: Grace Mollison, solutions architect and professional problem solver blog GCP Podcast Episode 224: Solutions Engineering with Grace Mollison and Ann Wallace podcast Interview Spanner site Cloud Spanner myths busted blog PostgreSQL interface docs Cloud Spanner Ecosystem site Spanner: Google's Globally-Distributed Database white paper Spanner Docs docs Spanner Qwiklabs site Using the Cloud Spanner Emulator docs GCP Podcast Episode 62: Cloud Spanner with Deepti Srivastava podcast GCP Podcast Episode 248: Cloud Spanner Revisited with Dilraj Kaur and Christoph Bussler podcast Cloud Spanner federated queries docs What's something cool you're working on? Max is working on a new podcast platform and some spring break projects. Hosts Mark Mirchandani and Max Saltonstall
Jonathan Harrison is Managing Director and North America Infrastructure Engineering Lead within Accenture Cloud First, where he is responsible for helping organizations navigate the Cloud Continuum, from public to private to the edge. In September of 2020, Accenture made a historic investment in cloud-first initiatives and has since brought together 70,000 IT professionals to help businesses move to the cloud faster. In this episode, Kathleen and Jonathan discuss how VMware and Accenture's expanded partnership, the VMware Accenture Business Group, is helping organizations accelerate cloud migration and ultimately realize greater value. Saving cost is often the catalyst to change, but businesses are leveraging the alliance between VMware and Accenture to achieve far more. Enjoy this encouraging and articulate conversation with a fantastic guest. About the Guest: Jonathan Harrison is Managing Director and North America Infrastructure Engineering Lead within Accenture Cloud First. You can find Jonathan on LinkedIn at: https://www.linkedin.com/in/jonathan-harrison-550ab72/. About the Host: Kathleen Tandy is Vice President of Global Partner and Alliances Marketing at VMware. You can find Kathleen on LinkedIn at: https://www.linkedin.com/in/ktandy85/, or on Twitter at @kaktandy. To learn more about VMware's partner programs, please visit: https://www.vmware.com/partners/partner-executive-edge.html. Subscribe, follow, and review the VMware Partnership Perspectives podcast.
In this segment, you will hear Kara Branch, MBA, discuss the importance of Black women and Black girls in STEM. She talks about the need for women of color to tap into STEM fields, where you do not see many of them. Her oldest daughter was the inspiration for her founding Black Girls Do Engineer Corporation when she stated, at eight years old, that she wanted to be a Software Engineer. Kara is also passionate about sharing her experience and education so others do not have to face the trials and tribulations she endured in the workforce. WHO IS KARA? Kara Branch, MBA, is the Founder and CEO of Black Girls Do Engineer Corporation, a 501(c)(3) nonprofit organization founded in June 2019 and located in Houston, TX. Black Girls Do Engineer is passionate about developing young women in all areas of STEM. The organization provides mentoring, personal growth regimens, STEM projects and activities, and guidance on career paths based on insight from professional women who are leaders in the STEM industry. Its goal is to provide access, awareness, and advocacy for two million Black American girls to pursue STEM careers by the year 2050. Kara is an alumna of Prairie View A&M University and holds a Bachelor of Science in Chemical Engineering and an Executive MBA. Black Girls Do Engineer was founded based on Kara's personal experiences in the STEM industry and her wish that she had had someone who looked like her to show her the way. Now she is here to be the one who looks like you and shows you the way. She has gone through many trials, but she has developed a STEM program to help girls ages 9 to 21 prepare for and thrive in the STEM industry. Her STEM program challenges girls to be critical thinkers and encourages them to choose STEM, and they are. Her efforts are paying off.
She took girls in her organization who had no STEM background to the finals of the NASAWear Challenge, where they beat out 70-plus teams from around the nation. In her 5-year career, she has managed 10-plus improvement projects, ranging from $1 million to multi-million dollars, in the Oil & Gas industry, and she now serves as a Project Manager in the Space Industry. She has served as a Science and Engineering Lead and a Diversity & Inclusion Region Chapter Leader for ERG groups in Corporate America. She also holds a Lean Six Sigma Green Belt certification and is one semester away from completing her Executive MBA. She was selected to participate in the National Black MBA Association Accelerator Program due to the impact her STEM program has made not only in Texas, but in California and South Carolina as well. KARA'S GEM "Always have a plan; do not hold back. Just Do It. Keep your eye on the PRIZE. Have the drive and determination, stick with it, and never give up on yourself, because you are always going to have ups as well as downs. If you are a parent, make sure you educate yourself on what STEM is, since it is definitely part of the future." KARA'S CONTACT INFO https://www.blackgirlsdoengineer.org/ info@blackgirlsdoengineer.org KARA'S CALL TO ACTION (CTA) Contribute to the mission of Black Girls Do Engineer: https://www.blackgirlsdoengineer.org/donate.html GENESIS'S INFO GEMSwithGenesisAmarisKemp@gmail.com https://linktr.ee/GenesisAmarisKemp SUPPORT https://www.buymeacoffee.com/GenesisAKemp CALL TO ACTION Subscribe to the GEMS with Genesis Amaris Kemp channel, hit the notifications so you don't miss any content, and share with family and friends. **REMEMBER - You do not have to let limitations or barriers keep you from achieving your success. Mind over Matter...It's time to shift and unleash your greatest potential. --- Send in a voice message: https://anchor.fm/genesis-amaris-kemp/message Support this podcast: https://anchor.fm/genesis-amaris-kemp/support