Podcasts about host there

  • 11 PODCASTS
  • 11 EPISODES
  • 32m AVG DURATION
  • ? INFREQUENT EPISODES
  • May 8, 2023 LATEST

POPULARITY

(popularity chart, 2017-2024)


Latest podcast episodes about host there

The AI Founder Gene: Being Early, Building Fast, and Believing in Greatness — with Sharif Shameem of Lexica

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

May 8, 2023 · 50:37


Thanks to the over 42,000 latent space explorers who checked out our Replit episode! We are hosting/attending a couple more events in SF and NYC this month. See you if you're in town!

Lexica.art was introduced to the world 24 hours after the release of Stable Diffusion as a search engine for prompts, gaining instant product-market fit as a world discovering generative AI also found they needed to learn prompting by example.

Lexica is now 8 months old, serving 5B image searches/day, and just shipped V3 of Lexica Aperture, their own text-to-image model! Sharif Shameem breaks his podcast hiatus with us for an exclusive interview covering his journey building everything with AI!

The conversation is nominally about Sharif's journey through his three startups VectorDash, Debuild, and now Lexica, but really a deeper introspection into what it takes to be a top founder in the fastest moving tech startup scene (possibly ever) of AI. We hope you enjoy this conversation as much as we did!

Full transcript is below the fold. We would really appreciate it if you shared our pod with friends on Twitter, LinkedIn, Mastodon, Bluesky, or your social media poison of choice!

Timestamps
* [00:00] Introducing Sharif
* [02:00] VectorDash
* [05:00] The GPT3 Moment and Building Debuild
* [09:00] Stable Diffusion and Lexica
* [11:00] Lexica's Launch & How it Works
* [15:00] Being Chronically Early
* [16:00] From Search to Custom Models
* [17:00] AI Grant Learnings
* [19:30] The Text to Image Illuminati?
* [20:30] How to Learn to Train Models
* [24:00] The Future of Agents and Human Intervention
* [29:30] GPT4 and Multimodality
* [33:30] Sharif's Startup Manual
* [38:30] Lexica Aperture V1/2/3
* [40:00] Request for AI Startup - LLM Tools
* [41:00] Sequencing your Genome
* [42:00] Believe in Doing Great Things
* [44:30] Lightning Round

Show Notes
* Sharif's website, Twitter, LinkedIn
* VectorDash (5x cheaper than AWS)
* Debuild: Insider, Fast Company, MIT Tech Review, tweet, tweet
* Lexica
* Introducing Lexica
* Lexica Stats
* Aug: "God mode" search
* Sep: Lexica API
* Sep: Search engine with CLIP
* Sep: Reverse image search
* Nov: teasing Aperture
* Dec: Aperture v1
* Dec: Aperture v2
* Jan 2023: Outpainting
* Apr 2023: Aperture v3
* Same.energy
* AI Grant
* Sharif on Agents: prescient AirPods tweet, Reflection
* MiniGPT4 - Sharif on Multimodality
* Sharif Startup Manual
* Sharif Future
* 23andMe Genome Sequencing Tool: Promethease
* Lightning Round
* Fave AI Product: Cursor.so. Swyx ChatGPT Menubar App.
* Acceleration: Multimodality of GPT4. Animated Drawings
* Request for Startup: Tools for LLMs, Brex for GPT Agents
* Message: Build Weird Ideas!

Transcript

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners. I'm joined by my co-host swyx, writer and editor of Latent Space. And today we have Sharif Shameem. Welcome to the studio. Sharif: Awesome. Thanks for the invite. Swyx: Really glad to have you.

[00:00] Introducing Sharif
Swyx: You've been a dream guest, actually, since we started drafting guest lists for this pod. So glad we could finally make this happen. So what I like to do is usually introduce people, offer their LinkedIn, and then prompt you for what's not on your LinkedIn. And to get a little bit of the person behind the awesome projects. So you graduated University of Maryland in CS. Sharif: So I actually didn't graduate, but I did study. Swyx: You did not graduate. You dropped out. Sharif: I did drop out. Swyx: What was the decision behind dropping out?
Sharif: So first of all, I wasn't doing too well in any of my classes. I was working on a side project that took up most of my time. Then I spoke to this guy who ended up being one of our investors. And he was like, actually, I ended up dropping out. I did YC. And my company didn't end up working out. And I returned to school and graduated along with my friends. I was like, oh, it's actually a reversible decision. And that was like that. And then I read this book called The Case Against Education by Bryan Caplan. So those two things kind of sealed the deal for me on dropping out. Swyx: Are you still on hiatus? Could you still theoretically go back? Sharif: Theoretically, probably. Yeah. Still on indefinite leave. Swyx: Then you did some work at MITRE? Sharif: MITRE, yeah. So they're lesser known. So they're technically like an FFRDC, a federally funded research and development center. So they're kind of like a large government contractor, but nonprofit. Yeah, I did some computer vision work there as well.

[02:00] VectorDash
Swyx: But it seems like you always have an independent founder bone in you. Because then you started working on VectorDash, which is distributed GPUs. Sharif: Yes. Yeah. So VectorDash was a really fun project that we ended up working on for a while. So while I was at MITRE, I had a friend who was mining Ethereum. This was, I think, 2016 or 2017. Oh my God. Yeah. And he was mining on his NVIDIA 1080Ti, making around like five or six dollars a day. And I was trying to train a character recurrent neural network, like a character RNN on my iMessage text messages to make it like a chatbot. Because I was just curious if I could do it. Because iMessage stores all your past messages from years ago in a SQL database, which is pretty nifty. But I wanted to train it. And I needed a GPU. And it was, I think, $60 to $80 for a T4 on AWS, which is really slow compared to a 1080Ti. If you normalize the cost and performance versus the 1080Ti when someone's mining Ethereum, it's like a 20x difference. So I was like, hey, his name was Alex. Alex, I'll give you like 10 bucks if you let me borrow your 1080Ti for a week. I'll give you 10 bucks per day. And it was like 70 bucks. And I used it to train my model. And it worked great. The model was really bad, but the whole trade worked really great. I got a really high performance GPU to train my model on. He got much more than he was making by mining Ethereum. So we had this idea. I was like, hey, what if we built this marketplace where people could rent their GPUs where they're mining cryptocurrency and machine learning researchers could just rent them out and pay a lot cheaper than they would pay AWS. And it worked pretty well. We launched in a few months. We had over 120,000 NVIDIA GPUs on the platform. And then we were the cheapest GPU cloud provider for like a solid year or so. You could rent a pretty solid GPU for like 20 cents an hour. And cryptocurrency miners were making more than they would make mining crypto because this was after the Ethereum crash. And yeah, it was pretty cool. It just turns out that a lot of our customers were college students and researchers who didn't have much money. And they weren't necessarily the best customers to have as a business. Startups had a ton of credits and larger companies were like, actually, we don't really trust you with our data, which makes sense. Yeah, we ended up pivoting that to becoming a cloud GPU provider for video games. So we would stream games from our GPUs.
Oftentimes, like many were located just a few blocks away from you because we had the lowest latency of any cloud GPU provider, even lower than like AWS and sometimes Cloudflare. And we decided to build a cloud gaming platform where you could pretty much play your own games on the GPU and then stream it back to your Mac or PC. Swyx: So Stadia before Stadia. Sharif: Yeah, Stadia before Stadia. It's like a year or so before Stadia. Swyx: Wow. Weren't you jealous of, I mean, I don't know, it sounds like Stadia could have bought you or Google could have bought you for Stadia and that never happened? Sharif: It never happened. Yeah, it didn't end up working out for a few reasons. The biggest thing was internet bandwidth. So a lot of the hosts, the GPU hosts had lots of GPUs, but average upload bandwidth in the United States is only 35 megabits per second, I think. And like a 4K stream needs like a minimum of 15 to 20 megabits per second. So you could really only utilize one of those GPUs, even if they had like 60 or 100.

[05:00] The GPT3 Moment and Building Debuild
Swyx: And then you went to Debuild. July 2020 is the date that I have. I'm actually kind of just curious, like what was your GPT-3 aha moment? When were you like GPT-3-pilled? Sharif: Okay, so I first heard about it because I was also working on another chatbot. So this was like after, like everything ties back to this chatbot I'm trying to make. This was after working on VectorDash. I was just like hacking on random projects. I wanted to make the chatbot using not really GPT-2, but rather just like it would be pre-programmed. It was pretty much you would give it a goal and then it would ask you throughout the week how much progress you're making to that goal. So take your unstructured response, usually a reply to a text message, and then it would like, plot it for you in like a table and you could see your progress over time. It could be for running or tracking calories. But I wanted to use GPT-3 to make it seem more natural because I remember someone on Bookface, which is still YC's internal forum. They posted and they were like, OpenAI just released AGI and it's GPT-3. I asked it like a bunch of logic puzzles and it solved them all perfectly. And I was like, what? How's no one else talking about this? Like this is either like the greatest thing ever that everyone is missing or like it's not that good. So like I tweeted out if anyone could get me access to it. A few hours later, Greg Brockman responded. Swyx: He is everywhere. Sharif: He's great. Yeah, he's on top of things. And yeah, by that afternoon, I was like messing around with the API and I was like, wow, this is incredible. You could chat with fake people or people that have passed away. You could like, I remember the first conversation I did was this is a chat with Steve Jobs and it was like, interviewer, hi. What are you up to today, Steve? And then like you could talk to Steve Jobs and it was somewhat plausible. Oh, the thing that really blew my mind was I tried to generate code with it. So I'd write the function for a JavaScript header or the header for a JavaScript function. And it would complete the rest of the function. I was like, whoa, does this code actually work? Like I copied it and ran it and it worked. And I tried it again.
I gave more complex things and like I kind of understood where it would break, which was like if it was something you couldn't easily describe in a sentence and like contain all the logic for in a single sentence. So I wanted to build a way where I could visually test whether these functions were actually working. And what I was doing was like I was generating the code in the playground, copying it into my VS Code editor, running it and then reloading the React development page. And I was like, okay, cool. That works. So I was like, wait, let me just put this all in like the same page so I can just compile in the browser, run it in the browser and then submit it to the API in the browser as well. So I did that. And it was really just like a simple loop where you just type in the prompt. It would generate the code and then compile it directly in the browser. And it showed you the response. And I did this for like very basic JSX React components. I mean, it worked. It was pretty mind blowing. I remember staying up all night, like working on it. And it was like the coolest thing I'd ever worked on at the time so far. Yeah. And then, it was like so mind blowing that no one was talking about this whole GPT-3 thing. I was like, why is this not on everyone's minds? So I recorded a quick 30 second demo and I posted on Twitter and like I went to bed after staying awake for like 20 hours straight. When I woke up the next morning, I had like 20,000 likes and like 100,000 people had viewed it. I was like, oh, this is so cool. And then I just kept putting demos out for like the next week. And yeah, that was like my GPT-3 spark moment. Swyx: And you got featured in like Fast Company, MIT Tech Review, you know, a bunch of stuff, right? Sharif: Yeah. Yeah. I think a lot of it was just like the API had been there for like a month prior already. Swyx: Not everyone had access. Sharif: That's true. Not everyone had access. Swyx: So you just had the gumption to tweet it out. And obviously, Greg, you know, on top of things as always. Sharif: Yeah. Yeah. I think it also makes a lot of sense when you kind of share things in a way that's easily consumable for people to understand. Whereas if you had shown a terminal screenshot of it generating code, that'd be less compelling. But seeing it get rendered and compiled directly in front of you is a lot more interesting. There's also that human aspect to it where you want to relate things to the end user. No one really cares about evals when you can create a much more compelling demo explaining how it does on certain tasks.

[09:00] Stable Diffusion and Lexica
Swyx: Okay. We'll round it out soon. But in 2022, you moved from Debuild to Lexica, which was the search engine. I assume this was inspired by Stable Diffusion, but can I get the history there a little bit? Sharif: Yeah. So I was still working on Debuild. We were growing at like a modest pace and I was in the stable... Swyx: I was on the signup list. I never got off. Sharif: Oh yeah. Well, we'll get you off. It's not getting many updates anymore, but yeah, I was in the Stable Diffusion Discord and I was in it for like many hours a day. It was just like the most exciting thing I'd ever done in a Discord. It was so cool. Like people were generating so many images, but I didn't really know how to write prompts and people were like writing really complicated things.
They would be like, like a modern home, trending on ArtStation, by Greg Rutkowski, like 4K, Unreal Engine. It's like, there's no way that actually makes the images look better. But everyone was just kind of copying everyone else's prompts and like changing like the first few words. Swyx: Yeah. Yeah. Sharif: So I was like using the Discord search bar and it was really bad because it showed like five images at a time. And I was like, you know what? I could build a much better interface for this. So I ended up scraping the entire Discord. It was like 10 million images. I put them in a database and I just pretty much built a very basic search engine where you could just type a word and then it returned all the prompts that had that word. And I built the entire website for it in like about two days. And I shipped it the day after the Stable Diffusion weights were open sourced. So about 24 hours later and it kind of took off in a way that I never would have expected. Like I thought it'd be this cool utility that like hardcore Stable Diffusion users would find useful. But it turns out that almost anyone who mentioned Stable Diffusion would also kind of mention Lexica in conjunction with it. I think it's because it was like it captured the zeitgeist in an easy to share way where it's like this URL and there's this gallery and you can search. Whereas running the model locally was a lot harder. You'd have to like deploy it on your own GPU and like set up your own environment and like do all that stuff. Swyx: Oh, my takeaway. I have two more to add to the reasons why Lexica worked at the time. One is lower latency is all you need. So in other words, instead of waiting a minute for your image, you could just search and find stuff that other people have done. That's good. And then two is everyone knew how to search already, but people didn't know how to prompt. So you were the bridge. Sharif: That's true. Yeah. You would get a lot better looking images by searching a one word query versus prompting for that one word. Yeah. Swyx: Yeah. That is interesting.

[11:00] Lexica's Explosion at Launch
Alessio: The numbers kind of speak for themselves, right? Like 24 hours post launch, 51,000 queries, like 2.2 terabytes in bandwidth. Going back to the bandwidth problem that you had before, like you would have definitely run into that. Day two, you doubled that. It's like 111,000 queries, four and a half terabytes in bandwidth, 22 million images served. So it's pretty crazy. Sharif: Yeah. I think we're doing like over 5 billion images served per month now. It's like, yeah, it's pretty crazy how much things have changed since then. Swyx: Yeah. I'm still showing people like today, even today, you know, it's been a few months now. This is where you start to learn image prompting because they don't know. Sharif: Yeah, it is interesting. And it's weird because I didn't really think it would be a company. I thought it would just be like a cool utility or like a cool tool that I would use for myself. And I really was just building it for myself just because I didn't want to use the Discord search bar. But yeah, it was interesting that a lot of other people found it pretty useful as well.

[11:00] How Lexica Works
Swyx: So there's a lot of things that you released in a short amount of time.
The God mode search was kind of like, obviously, the first thing, I guess. Like maybe to talk about some of the underlying technology: you're using CLIP to kind of, you know, go from image to like description and then let people search it. Maybe talk a little bit about what it takes to actually make the search magic happen. Sharif: Yeah. So the original search was just using Postgres' full text search and it would only search the text contents of the prompt. But I was inspired by another website called Same Energy, which was like a visual search engine. It's really cool. Do you know what happened to that guy? Swyx: I don't. He released it and then he disappeared from the internet. Sharif: I don't know what happened to him, but I'm sure he's working on something really cool. He also worked on like Tabnine, which was like the very first version of Copilot or like even before Copilot was Copilot. But yeah, inspired by that, I thought like being able to search images by their semantics, the contents of the image, was really interesting. So I pretty much decided to create a search index on the CLIP embeddings, the CLIP image embeddings of all the images. And when you would search it, we would just do KNN search on pretty much the image embedding index. I mean, we had way too many embeddings to store on like a regular database. So we had to end up using FAISS, which is a Facebook library for really fast KNN search and embedding search. That was pretty fun to set up. It actually runs only on CPUs, which is really cool. It's super efficient. You compute the embeddings on GPUs, but like you can serve it all on like an eight core server and it's really, really fast. Once we released the semantic search on the CLIP embeddings, people were using the search way more. And you could do other cool things. You could do like similar image search where if you found like a specific image you liked, you could upload it and it would show you relevant images as well. Swyx: And then right after that, you raised your seed money from AI Grant, Nat Friedman and Daniel Gross. Sharif: Yeah, we raised about $5 million from Daniel Gross. And then we also participated in AI Grant. That was pretty cool. That was kind of the inflection point. Before that point, Lexica was kind of still a side project. And I told myself that I would focus on it full time or I'd consider focusing on it full time if we broke like a million users. I was like, oh, that's gonna be like years away for sure. And then we ended up doing that in like the first week and a half. I was like, okay, there's something here. And at the time, Debuild was like growing pretty slowly and like pretty linearly. And then Lexica was just this thing that just kept going up and up and up. And I was so confused. I was like, man, people really like looking at pictures. This is crazy. Yeah. And then we decided to pivot the entire company and just focus on Lexica full time at that point. And then we raised our seed round.

[15:00] Being Chronically Early
Swyx: Yeah. So one thing that you casually dropped out there, a little slip: you said you were working on Lexica before the launch of Stable Diffusion, such that you were able to launch Lexica one day after Stable Diffusion. Sharif: Yeah. Swyx: How did you get so early into Stable Diffusion? Cause I didn't hear about it. Sharif: Oh, that's a good question. Where did I first hear about Stable Diffusion? I'm not entirely sure. It must've been like somewhere on Twitter or something.
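For the curious, a minimal sketch of the kind of search stack described above: KNN search over CLIP image embeddings with FAISS, served on CPU. The dimensions and random embeddings here are stand-in assumptions for illustration, not Lexica's actual pipeline.

```python
# Sketch of KNN search over CLIP image embeddings with FAISS (pip install faiss-cpu).
# The embeddings are random stand-ins; in practice they would come from a CLIP
# image encoder run on GPU, while serving stays on CPU as described above.
import numpy as np
import faiss

dim = 512  # e.g. CLIP ViT-B/32 embedding size (an assumption for this sketch)
index = faiss.IndexFlatIP(dim)  # inner product = cosine similarity on unit vectors

image_embeddings = np.random.rand(100_000, dim).astype("float32")
faiss.normalize_L2(image_embeddings)  # normalize in place so IP == cosine
index.add(image_embeddings)

query = np.random.rand(1, dim).astype("float32")  # embedding of a text query or an uploaded image
faiss.normalize_L2(query)
scores, ids = index.search(query, 10)  # ids of the 10 nearest images
print(ids[0], scores[0])
```

The same index serves both text-to-image search and the similar-image search mentioned above, since CLIP places text and image embeddings in one space.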
Swyx: That changed your life. Sharif: Yeah, it was great. And I got into the Discord cause I'd used DALL-E 2 before, but, um, there were a lot of restrictions in place where you couldn't generate human faces at the time. You can do that now. But when I first got access to it, like you couldn't do any faces. The list of adjectives you couldn't use was quite long. Like I had a friend from Pakistan and it couldn't generate anything with the word Pakistan in it for some reason. But Stable Diffusion was like kind of the exact opposite where there were like very, very few rules. So that was really, really fun and interesting, especially seeing the chaos of like a bunch of other people also using it right in front of you. That was just so much fun. And I just wanted to do something with it. I thought it was honestly really fun. Swyx: Oh, well, I was just trying to get tips on how to be early on things. Cause you're pretty consistently early to things, right? You were Stadia before Stadia. Um, and then obviously you were on. Sharif: Well, Stadia is kind of shut down now. So I don't know if being early to that was a good one. Swyx: Um, I think like, you know, just being consistently early to things that, uh, you know, have a lot of potential, like one of them is going to work out and you know, then that's how you got Lexica.

[16:00] From Search to Custom Models
Alessio: How did you decide to go from search to running your own models for generation? Sharif: That's a good question. So we kind of realized that the way people were using Lexica was they would have Lexica open in one tab and then in another tab, they'd have a Stable Diffusion interface. It would be like either a Discord or like a locally run interface, like the AUTOMATIC1111 web UI, um, or something else. I just, I would watch people use it and they would alt-tab back and forth between Lexica and their other UI. And they would like scroll through Lexica, click on an image, copy the prompt, and then paste it and maybe change a word or two. And I was like, this should really kind of just be all within Lexica. Like, it'd be so cool if you could just click a button in Lexica and get an editor and generate your images. And I found myself also doing the alt-tab thing, and it was really frustrating. I was like, man, this is kind of tedious. Like I really wish it was much simpler. So we just built generations directly within Lexica. Um, we deployed it on... I don't remember when we first launched; I think it was November, December. And yeah, people love generating directly within it.

[17:00] AI Grant Learnings
Swyx: I was also thinking that this was coming out of AI Grant where, you know, I think, um, yeah, it was like a very special program. I was just wondering if you learned anything from, you know, that special week where everyone was in town. Sharif: Yeah, that was a great week. I loved it. Swyx: Yeah. Bring us, bring us in a little bit. Cause it was awesome there. Sharif: Oh, sure. Yeah. It's really, really cool. Like all the founders in AI Grant are like fantastic people. And so I think the main takeaway from AI Grant was like, you have this massive overhang in compute or in capabilities in terms of like these latest AI models, but to the average person, there's really not that many products that are that cool or useful to them.
Like the latest one that has hit the zeitgeist was ChatGPT, which used arguably the same GPT-3 model, but with RLHF. But you could have arguably built like a decent ChatGPT product just using the original GPT-3 model. But no one really did it. Now, there were some restrictions in place, and OpenAI did slowly release them over the few months or years after they released the original API. But the core premise behind AI Grant is that there are way more capabilities than there are products. So focus on building really compelling products and get people to use them. And focus less on things like hitting state of the art on evals and more on getting users to use something. Swyx: Make something people want. Sharif: Exactly. Host: Yeah, we did an episode on LLM benchmarks and we kind of talked about how the benchmarks kind of constrain what people work on, because if your model is not going to do well on the well-known benchmarks, it's not going to get as much interest and like funding. So going at it from a product lens is cool.

[19:30] The Text to Image Illuminati?
Swyx: My hypothesis when I was seeing the sequence of events for AI Grant and then for Lexica Aperture was that you had some kind of magical dinner with Emad and David Holz. And then they taught you the secrets of training your own model. Is that how it happened? Sharif: No, there's no secret dinner. The Illuminati of text to image. We did not have a meeting. I mean, even if we did, I wouldn't tell you. But it really boils down to just having good data. If you think about diffusion models, really the only thing they do is learn a distribution of data. So if you have high quality data, it will learn that high quality distribution. Or if you have low quality data, it will learn to generate images that look like they're from that distribution. So really it boils down to the data and the amount of data you have and the quality of that data, which means a lot of the work in training high quality models, at least diffusion models, is not really in the model architecture, but rather just filtering the data in a way that makes sense. So for Lexica, we do a lot of aesthetic scoring on images and we use the rankings we get from our website because we get tens of millions of people visiting it every month. So we can capture a lot of rankings. Oh, this person liked this image when they saw this one right next to it. Therefore, they probably preferred this one over that. You can do pairwise ranking to rank images and then compute like Elo scores. You can also just train aesthetic models to learn to classify an image, whether or not someone will like it, or rank it on a scale of like one to ten, for example. So we mostly use a lot of the traffic we get from Lexica and use that to kind of filter our data sets and use that to train better aesthetic models.

[20:30] How to Learn to Train Models
Swyx: You hadn't been a machine learning engineer before. You've been more of an infrastructure guy. At Debuild, you were more of a prompt engineer with a bit of web design. This was the first time that you were basically training your own model. What was the ramp up like? You know, not to give away any secret sauce, but I think a lot of people who are traditional software engineers are feeling a lot of, I don't know, fear when encountering these kinds of domains. Sharif: Yeah, I think it makes a lot of sense. And to be fair, I didn't have much experience training massive models at this scale before I did it.
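As a concrete illustration of the pairwise-ranking idea above, here is a minimal Elo-style sketch. The K-factor, starting rating, and image IDs are arbitrary assumptions for illustration; Lexica's actual scoring code is not public.

```python
# Sketch: turn "user preferred image A over image B shown next to it" events
# into Elo-style aesthetic scores. K-factor and starting rating are arbitrary.
from collections import defaultdict

ratings = defaultdict(lambda: 1000.0)  # image_id -> aesthetic score

def record_preference(winner: str, loser: str, k: float = 32.0) -> None:
    """Standard Elo update after one pairwise preference."""
    expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1.0 - expected_win)
    ratings[loser] -= k * (1.0 - expected_win)

# e.g. a click on image_a while image_b was displayed right next to it:
record_preference("image_a", "image_b")
```

After enough traffic, the top-rated slice of the dataset can be kept for training, which is one way the "filter the data" step described above could be implemented.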
A lot of times it's really just like, in the same way when you're first learning to program, you would just take the problem you're having, Google it, and go through the Stack Overflow posts. And then you figure it out, but ultimately you will get to the answer. It might take you a lot longer than someone who's experienced, but I think there are enough resources out there where it's possible to learn how to do these things. Either just reading through GitHub issues for relevant models. Swyx: Oh God. Sharif: Yeah. It's really just like, you might be slower, but it's definitely still possible. And there are really great courses out there. The fast.ai course is fantastic. There's the Deep Learning book, which is great for fundamentals. And then Andrej Karpathy's online courses are also excellent, especially for language modeling. You might be a bit slower for the first few months, but ultimately I think if you have the programming skills, you'll catch up pretty quickly. It's not like this magical dark science that only three people in the world know how to do well. It probably was like 10 years ago, but now it's becoming much more open. You have open source collectives like EleutherAI and LAION, where they share the details of their large scale training runs. So you can learn from a lot of those people. Swyx: Yeah. I think what is different for programmers is having to estimate significant costs upfront before they hit run. Because it's not a thing that you normally consider when you're coding, but yeah, like burning through your credits is a fear that people have. Sharif: Yeah, that does make sense. In that case, like fine tuning larger models gets you really, really far. Even using things like low rank adaptation to fine tune, where you can like fine tune much more efficiently on a single GPU. Yeah, I think people are underestimating how far you can really get just using open source models. I mean, before Lexica, I was working on Debuild and we were using the GPT-3 API, but I was also like really impressed at how well you could get open source models to run by just like using the API, collecting enough samples from like real world user feedback or real world user data using your product. And then just fine tuning the smaller open source models on those examples. And now you have a model that's pretty much state of the art for your specific domain. Whereas the runtime cost is like 10 times or even 100 times cheaper than using an API. Swyx: And was that like GPT-J or are you talking BERT? Sharif: I remember we tried GPT-J, but I think FLAN-T5 was like the best model we were able to use for that use case. FLAN-T5 is awesome. If you can, like if your prompt is small enough, it's pretty great. And I'm sure there are much better open source models now. Like Vicuna, which is like the GPT-4 variant of like LLaMA fine tuned on like GPT-4 outputs. Yeah, they're just going to get better and they're going to get better much, much faster. Swyx: Yeah. We were just talking in a previous episode to the creator of Dolly, Mike Conover. Dolly is actually commercially usable, unlike Vicuna, which is a research project. Sharif: Oh, wow. Yeah, that's pretty cool.

[24:00] Why No Agents?
Alessio: I know you mentioned being early. Obviously, agents are one of the hot things here. In 2021, you had this "please buy me AirPods" like demo that you tweeted with the GPT-3 API. Obviously, one of the things about being early in this space, you can only do one thing at a time, right?
And you had one tweet recently where you said you hoped that that demo would open Pandora's box for a bunch of weird GPT agents. But all we got were docs powered by GPT. Can you maybe talk a little bit about, you know, things that you wish you would see or, you know, in the last few weeks, we've had, you know, HuggingGPT, BabyAGI, AutoGPT, all these different kind of like agent projects that maybe now are getting closer to the, what did you say, 50% of internet traffic being GPT agents. What are you most excited about, about these projects and what's coming? Sharif: Yeah, so we wanted a way for users to be able to paste in a link for the documentation page for a specific API, and then describe how to call that API. And then the way we would need to pretty much do that for Debuild was we wondered if we could get an agent to browse the docs page, read through it, summarize it, and then maybe even do things like create an API key and register it for that user. To do that, we needed a way for the agent to read the web page and interact with it. So I spent about a day working on that demo where we just took the web page, serialized it into a more compact form that fit within the 2048 token limit of like GPT-3 at the time. And then it would just decide what action to do. And then, if the page was too long, it would break it down into chunks. And then you would have like a sub prompt to decide on which chunk had the best action. And then at the top node, you would just pretty much take that action and then run it in a loop. It was really, really expensive. I think that one 60 second demo cost like a hundred bucks or something, but it was wildly impractical. But you could clearly see that agents were going to be a thing, especially ones that could read and write and take actions on the internet. It was just prohibitively expensive at the time. And the context limit was way too small. But yeah, I think it seems like a lot of people are taking it more seriously now, mostly because GPT-4 is way more capable. The context limit's like four times larger at 8,000 tokens, soon 32,000. And I think the only problem that's left to solve is finding a really good representation for a webpage that allows it to be consumed by a text only model. So some examples are like, you could just take all the text and pass it in, but that's probably too long. You could take only the interactive elements like buttons and inputs, but then you miss a lot of the relevant context. One interesting example, which I really like, is you could run the webpage in a terminal based browser. So there are some browsers that run in your terminal, which serialize everything into text. And what you can do is just take that frame from that terminal based browser and pass that directly to the model. And it's like a really, really good representation of the webpage because they do things where for graphical elements, they kind of render it using ASCII blocks. But for text, they render it as actual text. So you could just remove all the weird graphical elements, just keep all the text. And that works surprisingly well. And then there are other problems to solve, which is how do you get the model to take an action? So for example, if you have a booking page and there's like a calendar and there are 30 days on the calendar, how do you get it to specify which button to press? It could say 30, and you can do string matching and like find the 30.
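The chunk-then-subprompt loop described above might look roughly like this sketch. The `complete` callable is a stand-in for any text-completion API, and the characters-per-token budget is a crude heuristic rather than a real tokenizer; none of the prompt wording is from the original demo.

```python
# Rough sketch of the chunking loop described above: serialize the page, split
# it to fit a small context window, ask for an action per chunk, then have a
# top-level prompt pick among the candidates.
from typing import Callable, List

def chunk(text: str, budget_tokens: int = 1500, chars_per_token: int = 4) -> List[str]:
    size = budget_tokens * chars_per_token  # crude token estimate, not a real tokenizer
    return [text[i:i + size] for i in range(0, len(text), size)]

def next_action(page_text: str, goal: str, complete: Callable[[str], str]) -> str:
    chunks = chunk(page_text)
    if len(chunks) == 1:
        return complete(f"Goal: {goal}\nPage:\n{chunks[0]}\nNext action:")
    # Sub-prompt per chunk, then a top-level prompt picks the best candidate.
    candidates = [
        complete(f"Goal: {goal}\nPage fragment:\n{c}\nBest action in this fragment:")
        for c in chunks
    ]
    menu = "\n".join(f"{i}: {a}" for i, a in enumerate(candidates))
    pick = complete(f"Goal: {goal}\nCandidate actions:\n{menu}\nReply with the best index:")
    return candidates[int(pick.strip())]  # assumes the model replies with a bare index
```

Each iteration of the agent loop would call `next_action`, execute the action, re-serialize the page, and repeat, which also shows why the demo was so expensive: every chunk costs a model call.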
But for example, what if it's like a list of friends on Facebook and you're trying to delete a friend? There might be like 30 delete buttons. How do you specify which one to click on? The model might say like, oh, click on the one for like Mark. But then you'd have to figure out the delete button in relation to Mark. And there are some ways to solve this. One is there's a cool Chrome extension called Vimium, which lets you use Vim in your Chrome browser. And what you do is you can press F and over every interactive element, it gives you like a character or two characters. And if you type those two characters, it presses that button or it opens or focuses on that input. So you could combine a lot of these ideas and then get a really good representation of the web browser in text, and then also give the model a really, really good way to control the browser as well. And I think those two are the core part of the problem. The reasoning ability is definitely there. If a model can score in the top 10% on the bar exam, it can definitely browse a web page. It's really just how do you represent the web page as text to the model and how do you get the model to perform actions back on the web page? Really, it's just an engineering problem. Swyx: I have one doubt, which I'd love your thoughts on. How do you get the model to pause when it doesn't have enough information and ask you for additional information because you under specified your original request? Sharif: This is interesting. I think the only way to do this is to have a corpus where your training data is like these sessions of agents browsing the web. And you have to pretty much figure out the sessions that went wrong, where the agents went wrong, and just replace it with, hey, I need some help. And then if you were to fine tune a larger model on that data set, you would pretty much get them to say, hey, I need help on the instances where they didn't know what to do next. Or if you're using a closed source model like GPT-4, you could probably tell it if you're uncertain about what to do next, ask the user for help. And it probably would be pretty good at that. I've had to write a lot of integration tests in my engineering days, and like, the DOM... Alessio: They might be over. Yeah, I hope so. I hope so. I don't want to, I don't want to deal with that anymore. I, yeah, I don't want to write them the old way. Yeah. But I'm just thinking like, you know, we had the robots.txt for like crawlers. Like I can definitely see the DOM being reshaped a little bit in terms of accessibility. Like sometimes you have to write XPaths that are like so long just to get to a button. Like there should be a better way to do it. And maybe this will drive the change, you know, making it easier for these models to interact with your website. Sharif: There is the Chrome accessibility tree, which is used by screen readers, but a lot of times it's missing a lot of useful information. But like in a perfect world, everything would be perfectly annotated for screen readers and we could just use that. That's not the case.

[29:30] GPT4 and Multimodality
Swyx: GPT-4 multimodal: has your buddy Greg come through? And do you think that that would solve essentially browser agents or desktop agents? Sharif: Greg has not come through yet, unfortunately. But it would make things a lot easier, especially for graphically heavy web pages. So for example, if you were using Yelp and like using the map view, it would make a lot of sense to use something like that versus a text based input.
Like, how do you serialize a map into text? It's kind of hard to do that. So for more complex web pages, that would make it a lot easier. You get a lot more context to the model. I mean, it seems like that multimodal input is very dense in the sense that it can read text and it can read it really, really well. So you could probably give it like a PDF and it would be able to extract all the text and summarize it. So if it can do that, it could probably do anything on any webpage. Swyx: Yeah. And given that you have some experience integrating CLIP with language models, how would you describe how different GPT-4 is compared to that stuff? Sharif: Yeah. CLIP is entirely different in the sense that it's really just good at putting images and text into the same latent space. And really the only thing that's useful for is similarity and clustering. Swyx: Like literally Same Energy, right? Sharif: Yeah. Swyx: Yeah. And then there's BLIP and BLIP-2. I don't know if you like those. Sharif: Yeah. BLIP-2 is a lot better. There's actually a new project called, I think, MiniGPT-4. Swyx: Yes. It was just out today. Sharif: Oh, nice. Yeah. It's really cool. It's actually really good. I think that one is based on the LLaMA model, but yeah, that's like another. Host: It's BLIP plus LLaMA, right? So they're like running it through BLIP and then have LLaMA interpret your questions so that you can do visual QA. Sharif: Oh, that's cool. That's really clever. Yeah. Ensemble models are really useful. Host: Well, so I was trying to articulate, cause that was, that's, there's two things people are talking about today. You have to like, you know, the moment you wake up, you open Hacker News and go like, all right, what's the new thing today? One is RedPajama. And then the other one is MiniGPT-4. So I was trying to articulate like, why is this not GPT-4? Like what is missing? And my only conclusion was it just doesn't do OCR yet. But I wonder if there's anything core to this concept of multimodality that you have to train these things together. Like what does one model doing all these things do that is separate from an ensemble of models that you just kind of duct tape together? Sharif: It's a good question. This is pretty related to interpretability. Like how do we understand that? Or, why do different modalities trained within the same model perform better than two models trained separately? I can kind of see why that is the case. Like, it's kind of hard to articulate, but when you have two different models, you get the reasoning abilities of a language model, but also like the text or the vision understanding of something like CLIP. Whereas CLIP clearly lacks the reasoning abilities, but if you could somehow just put them both in the same model, you get the best of both worlds. There were even cases where I think the vision version of GPT-4 scored higher on some tests than the text only version. So like there might even be some additional learning from images as well. Swyx: Oh yeah. Well, uh, the easy answer for that was there was some chart in the test that wasn't translated. When I read that, I was like, oh yeah, okay, that makes sense. Sharif: That makes sense. I thought it'd just be like, it sees more of the world. Therefore it has more tokens. Swyx: So my equivalent of this is I think it's a well-known fact that adding code to a language model training corpus increases its ability to do language, not just with code.
So, code represents some kind of internal logic and is obviously very internally logically consistent, which helps the language model learn some internal structure. Which I think, so, you know, my ultimate test for GPT-4 is to show it the "is this a pipe" image and ask it if it's a pipe or not and see what it does. Sharif: Interesting. That is pretty cool. Yeah. Or just give it a screenshot of your like VS Code editor and ask it to fix the bug. Yeah. That'd be pretty wild if it could do that. Swyx: That would be adult AGI. That would be, that would be the grownup form of AGI.

[33:30] Sharif's Startup Manual
Swyx: On your website, you have this, um, startup manual where you give a bunch of advice. This is fun. One of them was that you should be shipping to production like every two days, every other day. This seems like a great time to do it because things change every other day. But maybe, yeah, tell some of our listeners a little bit more about how you got to some of these heuristics, and you obviously built different projects and iterated on a lot of things. Yeah. Do you want to reference this? Sharif: Um, sure. Yeah, I'll take a look at it. Swyx: And we'll put this in the show notes, but I just wanted you to have the opportunity to riff on this list, because I think it's a very good list. And, which one of them helped you for Lexica, if there's anything interesting. Sharif: So this list is, it's pretty funny. It's mostly just like me yelling at myself based on all the mistakes I've made in the past and me trying to not make them again. Yeah. Yeah. So, the first one is like, I think the most important one: when you're building a product, try to build the smallest possible version. And I mean, for Lexica, it was literally one screen in the React app with a Postgres database, and it just showed you like images. And I don't even know if the first version had search. Like I think it did, but I'm not sure. Like, I think it was really just like a grid of images that were randomized. But yeah, build the absolute smallest thing that can be considered a useful application and ship it. For Lexica, that was: it helps me write better prompts. That's pretty useful. It's not that useful, but it's good enough. Don't fall into the trap of intellectual indulgence with over-engineering. I think that's a pretty important one for myself. And also anyone working on new things, oftentimes you fall into the trap of like thinking you need to add more and more things, when in reality, like, the moment it's useful, you should probably get it in the hands of your users and they'll kind of set the roadmap for you. I know this has been said millions of times prior, but just, I think it's really, really important. And I think if I'd spent like two months working on Lexica, adding a bunch of features, it wouldn't have been anywhere as popular as it was if I had just released the really, really boiled down version alongside the Stable Diffusion release. Yeah. And then there are a few more, like: product development doesn't start until you launch. Think of your initial product as a means to get your users to talk to you. It's also related to the first point where you really just want people using something as quickly as you can get that to happen. And then a few more are pretty interesting. Create a product people love before you focus on growth.
If your users are spontaneously telling other people to use your product, then you've built something people love. Swyx: So this is pretty, it sounds like you've internalized Paul Graham's stuff a lot. Yeah. Because I think he said stuff like that. Sharif: A lot of these are just probably me taking notes from books I found really interesting or like PG essays that were really relevant at the time. And then just trying to not forget them. I should probably read this list again. There's some pretty personalized advice for me here. Oh yeah. One of my favorite ones is, um, don't worry if what you're building doesn't sound like a business. Nobody thought Facebook would be a $500 billion company. It's easy to come up with a business model once you've made something people want. You can even make pretty web forms and turn that into a 200 person company. And then if you click the link, it's to the LinkedIn page for Typeform, which is now, uh, I think they're like an 800 person company or something like that. So they've grown quite a bit. There you go. Yeah. Pretty web forms are pretty good business, even though it doesn't sound like it. Yeah. It's worth a billion dollars.

[38:30] Lexica Aperture V1/2/3
Swyx: One way I would like to tie that to the history of Lexica, which we didn't go over, which was just walk us through like Aperture V1, V2, V3, uh, which you just released last week. And how maybe some of those principles helped you in that journey. Sharif: Yeah. So, um, V1 was us trying to create a very photorealistic version of our model of Stable Diffusion. Uh, V1 actually didn't turn out to be that popular. It turns out people loved not generating photorealistic images. Swyx: Your marketing tweets were popular. Sharif: They were quite popular. So I think at the time you couldn't get Stable Diffusion to generate like photorealistic images that were consistent with your prompt that well. It was more so like you were sampling from this distribution of images and you could slightly pick where you sampled from using your prompt. This was mostly just because the CLIP text encoder is not the best text encoder. If you use a real language model, like T5, you get much better results. Like the T5 XXL model is like a hundred times larger than the CLIP text encoder for Stable Diffusion 1.5. So you could kind of steer it into like the general direction, but for more complex prompts, it just didn't work. So a lot of our users actually complained that they preferred the Stable Diffusion 1.5 model over the Aperture model. And it was just because a lot of people were using it to create like art and like really weird abstract looking pictures that didn't really work well with the photorealistic model trained solely on photos. And then for V2, we kind of took that into consideration and then just trained it more on a lot of the art images on Lexica. So we took a lot of images that were on Lexica that were art, used that to train aesthetic models that ranked art really well, and then filtered larger sets to train V2. And then V3 is kind of just like an improved version of that with much more data. I'm really glad we didn't spend too much time on V1. I think we spent about one month working on it, which is a lot of time, but a lot of the things we learned were useful for training future versions. Swyx: How do you version them? Like where do you decide, okay, this is V2, this is V3? Sharif: The versions are kind of weird where you can't really use semantic versioning because like if you have a small update, you usually just make that like V2.
Versions are kind of used for different base models, I'd say. So each of the versions were a different base model, but we've done like fine tunes of the same version and then just released an update without incrementing the version. But I think when there's like a clear change between running the same prompt on a model and you get a different image, that should probably be a different version.

[40:00] Request for AI Startup - LLM Tools
Alessio: So the startup manual was more about what you can actually do today to make things better. And then you have a whole future page that has ideas from, you know, what the Siri successor is going to be like, to like why everyone's genome should be sequenced. There's a lot of cool stuff in there, like why we need to develop stimulants with shorter half-lives so that we can sleep better. Maybe talk a bit about, you know, when you're a founder, you need to be focused, right? So sometimes there's a lot of things you cannot build. And I feel like this page is a bit of a collection of these. Like, yeah. Are there any of these things where you're like, if I were not building Lexica today, this is like a very interesting thing? Sharif: Oh man. Yeah. There's a ton of things that I want to build. I mean, off the top of my head, the most exciting one would be better tools for language models. And I mean, not tools that help us use language models, but rather tools for the language models themselves. So things like giving them access to browsers, giving them access to things like payments and credit cards, giving them access to like real world robots. So like, it'd be cool if you could have a Boston Dynamics Spot powered by a language model reasoning module, and it would do things for you, like go and pick up your order, stuff like that, entirely autonomously, given like high level commands. That'd be like the number one thing if I wasn't working on Lexica.

[40:00] Sequencing your Genome
And then there's some other interesting things like genomics I find really cool. Like there's some pretty cool things you can do with consumer genomics. So you can export your genome from 23andMe as a text file, like literally a text file of your entire genome. And there is another tool called Promethease, I think, where you upload your 23andMe text file genome and then they kind of map specific SNPs that you have in your genome to studies that have been done on those SNPs. And it tells you really, really useful things about yourself. Like, for example, I have the SNP for this thing called delayed sleep phase disorder, which makes me go to sleep about three hours later than the general population. So like I used to always be a night owl and I never knew why. But after using Promethease, it pretty much tells you, oh, you have the specific SNP for DSPS. It's like a really tiny percentage of the population. And it's like something you should probably know about. And there's a bunch of other things. It tells you your likelihood for getting certain diseases, for certain cancers, oftentimes, like even weird personality traits. There's one for like, I have one of the SNPs for increased risk taking and optimism, which is pretty weird. That's an actual thing. Like, I don't know how. Swyx: This is the founder gene. You should sequence everybody. Sharif: It's pretty cool. And it's like $10 for Promethease and like 70 bucks for 23andMe.
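The 23andMe export mentioned above really is a plain text file with one tab-separated line per SNP, so a lookup tool needs very little code. A minimal sketch, assuming the standard export layout (rsid, chromosome, position, genotype, with "#" header comments); the file name and example rsID (a CLOCK-gene variant often cited in circadian research) are illustrative assumptions.

```python
# Sketch of reading a 23andMe raw-data export and looking up a single SNP.
# Format assumption: "#" comment lines, then tab-separated
# rsid, chromosome, position, genotype on each data line.
def load_genome(path: str) -> dict:
    genome = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#"):  # skip header comments
                continue
            rsid, chromosome, position, genotype = line.rstrip("\n").split("\t")
            genome[rsid] = genotype
    return genome

genome = load_genome("23andme_raw_data.txt")  # hypothetical file name
# rs1801260 is a CLOCK-gene variant that circadian-rhythm studies often reference.
print("rs1801260:", genome.get("rs1801260", "not genotyped"))
```

A tool like the one described would then join each rsID and genotype against a database of published studies, which is where the real work lives.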
And it explains to you how your body works and like the things that are different about you compared to the general population. Swyx: Wow. Sharif: Highly recommend everyone do it. Like, if you're concerned about privacy, just purchase a 23andMe kit with a fake name. You don't have to use your real name. I didn't use my real name. Swyx: It's just my genes. Worst you can do is clone me. It ties in with what you were talking about with, you know, we want the future to be like this. And like people are building uninspired B2B SaaS apps and you and I had an exchange about this.

[42:00] Believe in Doing Great Things
How can we get more people to believe they can do great things? Sharif: That's a good question. And like, a lot of the things I've been working on with GPT-3 have been trying to solve this by getting people to think about more interesting ideas. I don't really know. I think one is just like the low effort version of this is just putting out really compelling demos and getting people inspired. And then the higher effort version is like actually building the products yourself and getting people to like realize this is even possible in the first place. Like I think the BabyAGI project and like the GPT agent projects on GitHub, like in practice today, they're not super useful, but I think they're doing an excellent job of getting people incredibly inspired by what can be possible with language models as agents. And also the Stanford paper where they had like the mini version of The Sims. Yeah. That one was incredible. That was awesome. Swyx: It was adorable. Did you see the part where they invented day drinking? Sharif: Oh, they did? Swyx: Yeah. You're not supposed to go to these bars in the afternoon, but they were like, we're going to go anyway. Nice. Sharif: That's awesome. Yeah. I think we need more stuff like that. That one paper is probably going to inspire a whole bunch of teams to work on stuff similar to that. Swyx: And that's great. I can't wait for NPCs to actually be something that you talk to in a game and, you know, have their own lives and you can check in and, you know, they would have their own personalities as well. Sharif: Yeah. This is kind of off topic, but I was playing The Last of Us Part II and the NPCs in that game are really, really good. Where if you like point a gun at them, they'll beg for their life, like, please, I have a family. And like when you kill people in the game, they're like, oh my God, you shot Alice. Like they're just NPCs, but they refer to each other by their names and like they plead for their lives. And this is just using regular conditional rules on NPC behavior. Imagine how much better it'd be if it was like a small GPT-4 agent running in every NPC and they had the agency to make decisions and plead for their lives. And I don't know, you'd feel way more guilty playing that game. Alessio: I'm scared it's going to be too good. I played a lot of hours of Fallout. So I feel like if the NPCs were a lot better, you would spend a lot more time playing the game. Yeah.

[44:30] Lightning Round
Alessio: Let's jump into the lightning round. First question is your favorite AI product. Sharif: Favorite AI product. The one I use the most is probably ChatGPT. The one I'm most excited about is, it's actually a company in AI Grant. They're working on a version of VS Code that's like entirely AI powered. Cursor, yeah.
Cursor, where you would like give it a prompt and like iterate on your code, not by writing code, but rather by just describing the changes you want to make. And it's tightly integrated into the editor itself. So it's not just another plugin. Swyx: Would you, as a founder of a low code prompting-to-code company that pivoted, would you advise them to explore some things or stay away from some things? Like what's your learning there that you would give to them? Sharif: I would focus on one specific type of user. So if I was building a low code tool, I would try to not focus too much on appealing to developers. Whereas if I was building an alternative to VS Code, I would focus solely on developers. So in that regard, I think they're doing a pretty good job focusing on developers. Swyx: Are you using Cursor right now? Sharif: I've used it a bit. I haven't converted fully, but I really want to. Okay. It's getting better really, really fast. Yeah. Um, I can see myself switching over sometime this year if they continue improving it. Swyx: Hot tip for, for ChatGPT, people always say, you know, they love ChatGPT. Biggest upgrade to my life right now is the, I forked a menu bar app I found on GitHub and now I just have it running in a menu bar app and I just do command-shift-G and it pops it up as a single use thing. And there's no latency because it just always is live. And I just type in the thing I want and then it just goes away after I'm done. Sharif: Wow. That's cool. Big upgrade. I'm going to install that. That's cool. Alessio: Second question. What is something you thought would take much longer, but it's already here? Like what, what's your acceleration update? Sharif: Ooh, um, it would take much longer, but it's already here. This is your question. Yeah, I know. I wasn't prepared. Um, so I think it would probably be kind of, I would say text to video. Swyx: Yeah. What's going on with that? Sharif: I think within this year, uh, by the end of this year, we'll have like the jump between like the original DALL-E 1 to like something like Midjourney. Like we're going to see that leap in text to video within the span of this year. Um, it's not already here yet. So I guess the thing that surprised me the most was probably the multi-modality of GPT-4 and the fact that it can technically see things, which is pretty insane. Swyx: Yeah. Is text to video something that Aperture would be interested in? Sharif: Uh, it's something we're thinking about, but it's still pretty early. Swyx: There was one project with, um, hand-drawn animation with human poses. It was also coming out of Facebook. I thought that was a very nice way to accomplish text to video while having a high degree of control. I forget the name of that project. It was like, I think it was like Animated Drawings. Swyx: Yeah. It sounds familiar. Well, you already answered "a year from now, what will people be most surprised by?" Um, and maybe the, uh, the usual request for startup, you know: what's one thing you will pay for if someone built it? Sharif: One thing I would pay for if someone built it. Um, so many things, honestly. I would probably really like, um, like I really want people to build more, uh, tools for language models, like useful tools, give them access to Chrome. And I want to be able to give it a task. And then just, it goes off and spins up a hundred agents that perform that task. And like, sure. Like 80 of them might fail, but like 20 of them might kind of succeed. That's all you really need. And they're agents.
You can spin up thousands of them. It doesn't really matter. The law of large numbers is on your side. So I would pay a lot of money for that, even if it was capable of only doing really basic tasks, like signing up for a SaaS tool and booking a call or something. If it could do even more things, like handling the email thread and getting the person on the other end to do something so I don't even have to book the demo, they just give me access to it, that'd be great. Yeah. More really weird language model tools would be really fun. Swyx: Are ChatGPT plugins a step in the right direction, or are you envisioning something else? Sharif: I think ChatGPT plugins are great, but they seem to only have read-only access right now. I want these theoretical agents to have write access to the world too. So they should be able to perform actions in web browsers, have their own email inbox, and have their own credit card with their own balance. They could send emails to people that might be useful in achieving their goal, ask them for help, be able to sign up and register for accounts on tools and services, be able to use graphical user interfaces really, really well, and also phone home if they need help. Swyx: You've just described virtual employees. You want to give them a Brex card, right? Sharif: I wouldn't be surprised if, a year from now, there was Brex GPT, or Brex cards for your GPT agents. Swyx: I mean, okay. I'm excited by this. Yeah. Kind of want to build it. Sharif: You should. Yeah. Alessio: Well, just to wrap up, we always have one big takeaway for people, like, you know, to display on a signboard for everyone to see. What is the big message to everybody? Sharif: Yeah. I think the big message to everybody is: you might think that a lot of the time the ideas you have have already been done by someone. And that may be the case, but a lot of the time the ideas you have are actually pretty unique and no one's ever tried them before. So if you have weird and interesting ideas, you should actually go out and just do them and make the thing, and then share that with the world. Because I feel like we need more people building weird ideas and fewer people building, like, better GPT search for your documentation. Host: There are like 10 of those in the recent YC batch. Well, thank you so much. You've been hugely inspiring, and excited to see where Lexica goes next. Sharif: Appreciate it. Thanks for having me. Get full access to Latent Space at www.latent.space/subscribe
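To make the fan-out idea above concrete: a minimal sketch of "launch many independent agent attempts, keep the few that succeed." Everything here is hypothetical; run_agent stands in for whatever LLM-agent call you would use (no real agent API is implied), and the roughly 20% per-attempt success rate is just the episode's example, not a benchmark.

import asyncio
import random

async def run_agent(task: str, agent_id: int) -> str | None:
    # Hypothetical agent attempt: returns a result on success, None on failure.
    await asyncio.sleep(random.uniform(0.1, 0.5))  # stand-in for real agent work
    # Illustrative assumption: each attempt succeeds ~20% of the time.
    if random.random() < 0.2:
        return f"agent {agent_id}: completed {task!r}"
    return None

async def fan_out(task: str, n: int = 100) -> list[str]:
    # Launch n independent attempts concurrently; keep whatever succeeds.
    results = await asyncio.gather(*(run_agent(task, i) for i in range(n)))
    return [r for r in results if r is not None]

if __name__ == "__main__":
    wins = asyncio.run(fan_out("sign up for a SaaS tool and book a call"))
    print(f"{len(wins)} of 100 attempts succeeded")

Because attempts are cheap and disposable, the design optimizes for throughput over per-attempt reliability, which is exactly the "large numbers are on your side" argument.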

Stories From Women Who Walk
How Quiet Disruptors Are Rearranging Our World for the Better Led by Sue Heatherington. Part 1.

Stories From Women Who Walk

Play Episode Listen Later Nov 11, 2021 38:24


Coming to you from Whidbey Island, Washington, this is Stories From Women Who Walk. You'll recognize yourself in these true-life stories from women who are walking their lives while their lives walk them, and the lasting difference these journeys have made. I'm your host, Diane Wyzga.
Today my guest is Sue Heatherington, writer, catalyst and thinking partner for pioneers, innovators and change-makers who want to cultivate a better change perspective in the world. In 2018 Sue coined the term "Quiet Disruptor" as a short-hand introduction to her unusually broad back story, which includes 10 years as a pioneering public service chief executive. Now, Quiet Disruptors: Creating Change Without Shouting is the title of her newly published book.
Sue is joining us all the way from Swansea, Wales, where she writes daily thought-provoking blog posts while inviting us to explore her valley through the lens of her camera. I've invited Sue to talk with us about the practice of writing, changing the narrative around conversations and belonging, her manifesto, her book, and most significantly doing it all without shouting. Welcome to the podcast, Sue!
Minutes 38:24
00 to 4:02       Intro and Alpacas
Sue (along with her podcaster husband, Steve Heatherington) is raising about 45 alpacas
Did you know that alpacas hum?
They also anxiously look around: "Where's the baby! Where's the baby!"
Some cluck to their babies weeks before the babies are born
Sue learns so much from them
4:02 to 6:28       Start From Where You Are
Host: One of these fine days we will return to travel, and Wales - among other places - is on my hiking itinerary. If I met up with you on one of your favorite walks, where is it and what might I notice about you?
Sue smiles!
Sue's valley makes her feel so good, to be in her own skin
Sue strolls to be where she is instead of fast walking to get somewhere
Sue is a wanderer, taking everything in, open to seeing what she needs to notice
Sue is a stopper and a looker and a listener
Sue's intention is to be very present
Host: Sue's blog posts with her photography come from a place of slowing down, pausing and being aware.
6:28 to 20:05       A Bit of Back Story. Before we get to your manifesto and your book, entitled Quiet Disruptors: Creating Change Without Shouting, I'd like to know a bit about how you coined the term "Quiet Disruptor" in 2018. Would you share with us how that came about and what you discovered about yourself in that process?
A beautiful defining moment that Sue didn't see coming
Sue had gone through challenging times professionally and personally, not expecting beauty to emerge
Sue's approach (how can we do this differently) is not traditional and may make folks uncomfortable, like The Emperor's New Clothes story
Health, family, and Sue's own purpose, direction and income challenges together created a constellation of taxing and trying times, and then Sue's diagnosis of breast cancer
Out of all of this acute chaos, in her recovery time Sue was exploring "Now what?"
Sue joined an online community with Bernadette Jiwa and coined "Quiet Disruptor" as a means of introducing herself with honesty but without drama
Also reading David Hieatt and The DO Lectures, as well as Susan Cain and her book Quiet Power
Being a Quiet Disruptor is Sue's life - in a quiet voice
After Sue began using this phrase she was stunned by how much people were drawn to it and wanted to use it as well: I can be a change-maker my own way; I feel seen and heard.
Quiet Disruptor changes how people perceive themselves because change is theirs to say and do - call out the status quo, etc., but differently
Now the "do" happens: as a way of changing the conversation for folks, Sue wrote a Manifesto to enable those who want to be seen and heard to have a way to quietly disrupt
Sue was seeing that the voices of the Quiet Disruptors were experiencing their time having come as a collective connective, to say, do, explore and create
One person can no longer be The Hero
Organizations will have to be on the forefront of the global change
Host: an unrealistic burden to expect The One to have the answers, especially when imposed by society. If The One has no answers, or not the right answers, they might push, shove, shout or masquerade
Sue is extending a welcome to The Others
Come on board and work together, all voices are to be heard
20:05 to 36:30       Manifesto for Quiet Disruptors
The word Manifesto is strong, a public declaration of intention, of design. Would you give us the opportunity to hear your Manifesto?
Sue reads her Manifesto as an advocate for Quiet Disruptors
We need Quiet Disruptors and here's why.
Host: There is so much in the invitation to come along. People drawn like filings to a magnet. Sue named qualities or attributes shared by the Quiet Disruptors: stubborn courage, doing work quietly and insistently. Question: How might we notice change-makers among us, or even notice if we could be a change-maker?
Those who have read Sue's book and claim the Quiet Disruptor label for themselves are self-identifying
Being seen and heard is key
Sue uses tree and fruit as a metaphor to explain how to notice the others.
The Others have certain characteristics which we recognize: thoughtfully curious, how they are in the world, with others, those who ask the questions, delve deeper, join the dots, see the bigger picture, expression of creativity, the effect on others
Host: I imagine you have people and situations in mind. Question: would you share a few who are already carrying on as Quiet Disruptors?
The book highlights who is already out there - some well known, some not so well known - making courageous connections
One example: Dr. Rachel Naomi Remen, MD: as a practicing physician she could see her patients in a way colleagues could not. She also saw that her colleagues were unseen and unheard and became protectively hardened. Remen shifted this experience and radically changed clinical practice in the United States
Host: One woman begins the shift and then change happens over the course of time. Similarly in the legal profession: litigators who learned to tell their own stories - doing an archeological dig on themselves - were better able to tell their clients' stories. Being seen and being heard goes toward our understanding and being understood.
So often people struggle to listen deeply
The traditional model is waiting to get the next word in
Cultivating your curiosity has more value in the questions
We are at a space of asking deeper questions
Question: Let's assume you might have to work with or encounter someone who is not so agreeable to this notion of disrupting quietly. How would you suggest working with folks like that to get them to see the necessity of hopeful, quiet change?
Here we are, at the end of the road but not the journey.
Thank you for listening to Part 1 of this episode of Stories From Women Who Walk with your host Diane Wyzga and my guest Sue Heatherington, Quiet Disruptor, blogger, author of the book Quiet Disruptors: Creating Change Without Shouting, and a catalyst for pioneers, innovators and change-makers, who joined us from her small valley in Swansea, Wales. We hope you enjoyed and are motivated by our quietly disrupting conversation. We'll be back in a week with Part 2.
You're also invited to check out over 450 episodes of this podcast Stories From Women Who Walk, found on Simplecast, your favorite podcast platform and my website: Quarter Moon Story Arts. This is the place to thrive together. Come for the stories - stay for the magic. Speaking of magic, I hope you'll subscribe, follow, share a nice shout out on your social media or podcast channel of choice, and join us next time! You will have wonderful company as we walk our lives together.
About Sue Heatherington
Sue Heatherington offers fresh sight from the quiet edge and believes that our words shape our reality. She is a writer and host of Waterside Conversations from a little valley in South West Wales.
Sue coined the phrase 'quiet disruptor' in 2018 when trying to make sense of the journey of her life and work. Her book Quiet Disruptors: Creating Change Without Shouting was published in January 2021 and has been read across the globe.
Her short daily pause points combine words of inspiration, provocation and encouragement with a beautiful photograph from her home in the valley, which she shares with her husband, learning disabled brother and a herd of alpacas.
Quiet Disruptors: Creating Change Without Shouting is available on Amazon in printed and Kindle versions. The short ebook Being Seen and Heard… Quiet Disruptors: Companion Notes for People Professionals is also available for free to those who have the main title.
LINKS to Stay in Touch With Sue:
email: sue@quietdisruptors.com
https://thewaterside.co.uk/
https://quietdisruptors.com/
linkedin.com/in/sue-heatherington-98a87013
https://twitter.com/SueWaterside
Stories From Women Who Walk Production Team
Podcaster: Diane F Wyzga: Quarter Moon Story Arts
Music: Entering Erdenheim from Crossing the Waters by Steve Schuch & Night Heron Music
Sound Editing: Darwin Carlisle & First Class Reels
All content and image © 2019 - Present: for credit and attribution Quarter Moon Story Arts

Behind The Veil
Behind The Veil Episode 13: Bridal Expos, Wedding Association, and More

Behind The Veil

Play Episode Listen Later Dec 1, 2020 60:54


On this show we welcome the owner of one of the largest Bridal Expos in the country - Karen Lee. Karen is also the owner of Your Wedding TV and Perfect Wedding Guide Central Florida. We also welcome Kevin Dennis, the owner of WeddingIQ, who has focused on helping businesses set realistic goals for 2021 and making the most out of wedding shows!  Note from Host: There is a 30-second video clip near the beginning of the show that provided visuals for our YouTube audience about the last wedding expo. The show is recorded live, and we don't like to edit any part of it, so it's truly authentic. The video showed that everyone at the wedding expo was wearing masks and being socially distant.
Host: Keith Willard - Keith Willard Events
Behind The Veil Crew: Marci Guttenberg - An Affair To Remember By Marci, Brooke Logan Stoner
Behind The Veil Guests: Karen Lee - The Bridal Expo and Your Wedding TV, Kevin Dennis - WeddingIQ.com
Support the show

Engineering Influence from ACEC
Looking into the Future of the Office Market

Engineering Influence from ACEC

Play Episode Listen Later Oct 13, 2020 17:20


Carl Shilling, a principal at Stantec, joins the Engineering Influence podcast to share his thoughts on the current state of the office market sector and how it may change going forward. Host: Welcome to the Engineering Influence podcast, sponsored by the ACEC Life/Health Trust. One of the biggest business impacts of the COVID-19 pandemic has been the transition to working at home for many professional and office workers. In many downtowns and suburbs, offices are dark and empty. Not surprisingly, this situation has raised questions about the concept of the office. Many pundits have speculated that our traditional views on the office must change and that remote work will play an increasingly large role, maybe even a major role, in the future. Others promote the benefits of the office--collaboration, teamwork, efficiency--and argue that once the pandemic has abated, we will return to the office. This is an important discussion for the engineering industry and for firms that work in the office space. To discuss these issues and more, we're here with Carl Shilling of Stantec. He is a principal based in the firm's Butler, Pennsylvania office and has more than two decades of experience focusing on the sustainability of the built environment. Carl, welcome. Shilling: Thanks for having me. I appreciate it. Host: So in the initial days and weeks of the pandemic, many companies shut down their offices, and employees began working virtually. Assuming that we eventually have a vaccine for the virus, do you expect the virtual work environment to perpetuate, will we return to the old normal, or will we find something in between? Shilling: We, like many companies, vacated our offices and performed our business remotely. And I will admit, I did not expect it to work as well as it has. The technology has come through. We've been very successful in conducting our business virtually through, like the interview today, Zoom meetings, Teams meetings, and others. They have worked very well, but as you're asking, we're asking ourselves: What do the next couple of months mean? What does the next year mean? How are we going to conduct business in the future? Beyond our personal experience, we have also polled a lot of our clients--over 130 of them, actually--to get their perception of what they think is going to happen in the future, so that we can position ourselves to design appropriate space. What we found is that although it's been successful, we're missing the human component. So we're hoping that beyond today and tomorrow there's still going to be some interaction within the office, whether that's two days a week or three days a week. But people have learned that they can be effective from home, so what used to be considered a luxury--being able to work from home--is now going to be considered the norm. So I think there's going to be an expectation that we still need the office space where we can go in and collaborate and understand our partners, their in-person reactions to real-life situations, but we also understand that we can be effective from home. And there are some advantages to being home with our spouses and children and conducting business from there. So what we're expecting is that, yes, there's still going to be a demand for office space, but that workers probably won't be coming back five days a week, 40 hours, like they did prior to the coronavirus situation. Host: So, as an office designer, as an engineer, what does that mean for office design? Is this going to change it? Shilling: It will.
One of the questions that's paramount is: do you need the same amount of square footage or space to conduct business that you did prior to entering the pandemic? And what we're coming back with is that the dedicated office space that people are used to having, your cubicle, wherever you do your work, is probably going to change, because typically it does not provide the right amount of separation or distancing from coworkers. But if we go into design expecting to provide that, maybe the same amount of square footage works, but we have to reconfigure it so that it still promotes interaction between coworkers. We may need to spread out so fewer people are going to be in the office at any given time, or it's going to take a larger square footage to accomplish the same thing. Host: In your conversations with your clients, what are their concerns? Shilling: It's kind of the same thing. How do I get my people into the office safely? How do they interact safely while they're there? We've all been out shopping during the pandemic and you see one-way aisles. You see limitations where we can't all go in at the same time. I think that's what we're going to see. You may go into the office, but you have to enter at multiple points so there isn't a large grouping of people at any one location. When you get into the office, there might be direction; it's probably not going to be arrows on the floor, but there are probably going to be elements within the design to encourage people not to all go the same way or congregate in the same place. Shilling: You're probably going to see people spread out a little more than what you see in today's office, but there are still going to be spaces where you can interact with each other and do your work. There's probably also going to be additional technology within those spaces so that your coworkers and staff who aren't in the office at the time can log in or dial in and be a part of the team, whether they're there personally or not. Host: You mentioned retail as one area that you can take some lessons from. This is a unique situation. So where have you looked for guidance to make design decisions on health and safety concerns? Shilling: I mentioned that we do multiple types of buildings, and one of them is health care, and doing that kind of design has given us a lot of direction on whether the virus is transmitted through the air or is mainly a contact risk. Shilling: Those kinds of things really go across building types. It's not necessarily just indicative of the office environment. And so the first thing, the biggest risk, is proximity to your coworkers. Now we've heard about six-foot distancing. But given the particle size that you put off wearing a mask, does the mask stop it? As the particles dry, do they float longer than a couple of hours? So we're thinking about these issues between workers. We're thinking about how it is distributed by the air handling systems. There are many aspects to how we keep people safe. Host: Focusing on HVAC, what changes do you expect to see in HVAC going forward? Shilling: The first thing is ventilation. There are three main bullet points, and the first one is ventilation.
The minimum that we have to think about is: do the systems within the building provide the minimum ventilation rates required by current code, whether that's the International Mechanical Code or ASHRAE 62? A lot of office spaces, for energy efficiency reasons, have reduced the amount of outside air, or have chosen not to bring it in during off-hours when people aren't there, but I think there's a general understanding out there now that we need to continually ventilate the spaces we're in. The benefit to the personnel outweighs the energy demand on the building. Shilling: Number two is filtration. Typical filtration for office environments is something like MERV-6, 25-30 percent. I think there's a question out there: can the particle sizes that we're dealing with be captured by a filter? They can. We're all wearing masks, right? Masks aren't really a very high particulate filter, but the virus lives in things that are larger, like water droplets. The same thing applies to HVAC systems: if we put higher-efficiency filters in, there is a benefit to the office environment. The parallel argument with that is that I'm going to need more horsepower and fans to push the air through the higher-efficiency filters. So there is an efficiency offset with putting in higher-efficiency filters; I'm going to use more energy to do that. Shilling: Lastly, there are lots of products coming to market that we all trust, that we've been using, that have a benefit in combatting the virus, whether it is additional filtration, HEPA filtration, UV lighting, or a technology like bipolar ionization. These are all things we need to have in our toolkit and our approach to making spaces safer, that we can employ to respond to the demand that's out there. Host: In many office buildings, one choke point would be the elevator, because you have to get people upstairs and the elevator is, by its nature, a confined space. What do you see happening with elevators? Shilling: I've seen a raft of things coming to light. And again, it depends on whether you're dealing with an existing situation or a new design. The most creative thing that I've heard reported to me--and it was experienced by one of our own employees visiting a client site--was when they entered an elevator. It was an existing elevator in a small facility. The owner had attached a sponge to the wall and put a bunch of toothpicks in the sponge. And whenever you entered the elevator, you were to pick a toothpick and use that to push the button. Shilling: That was a very low-dollar, very innovative solution to be as safe as they could with what they had. On the other end of the spectrum, we're designing new elevators. There are all kinds of new control technologies out there where, instead of going to the elevator lobby, entering an elevator and pushing a conventional button, before you even get to the lobby there's a panel where you can enter where you want to go. That panel then looks at where the elevators in the building are, directs you to a specific one, and controls how many passengers are in that elevator. That particular car delivers you to your floor without your ever having to touch anything within the elevator itself. So there is a lot of technology coming out to address that situation. But we have a whole lot of existing elevators that we're going to have to be creative with. What are we going to do for those specific cases? Host: The economic forecasts for the office market are pretty bleak right now.
What opportunities does Stantec see? Shilling: Again, communicating with our clients, there's a lot of waiting to see what happens. I will say that when I left the office back at the beginning of the year, I never expected that we would be gone this long. It has persisted far longer than I ever expected. Shilling: I think everybody's in a holding pattern to see where this is going. Is there going to be another wave of infections here in the fall? As the weather gets colder and the humidity level drops, are we all going to be more susceptible to the virus? I think it's going to be another six months of what we're seeing, but I really think that there is a real desire for offices to open back up and for people to at least get back into the office in some way so that we can continue what we're doing. We're surviving just fine, but we are not thriving. We need to do additional things that we're not doing now, such as getting new people into the workforce. That isn't possible remotely, the way we're working now. Host: From an engineering perspective, do you expect more retrofit and renovation work in the office market in the short to mid-term? Shilling: I do. Yes. We're going to have to take a look at the existing systems. We're going to assess them. Are they bringing in any outside air, or the right amount of outside air? We're going to look at whether all areas of the office have air distribution. Are there any dead spots? I think we're going to look at whether existing air handling units can support additional filtration beyond where they're at right now. And then I think we're going to be looking at: can we apply things like UV lighting to sterilize the airflow? Can we employ things like bipolar ionization within the airstream to sterilize the airstream? Ionized hydrogen peroxide has come onto the market and can be independent of the air handling system. There are devices that we can just hang on the wall to deliver these ions to the space, to clean the surfaces, to sterilize the air. There are many, many things out there that I think we can apply without spending a lot of money to make the space safer on a broader scale. Host: There's been a lot of talk that one of the impacts of the pandemic will be that the downtown business districts will shrink and the offices will move to the suburbs where there's less density. What is Stantec's view on that? Shilling: We're not seeing that. We're seeing that companies are still going to make the decision on where they want to locate their offices based upon serving their clients and where it makes the most sense for their office to be. I think what you're going to see is the opportunity for employees to choose whether to go into the office, more or less independent of where the office happens to be located. Shilling: We are not seeing companies changing their business approach. Some of them are hub and spoke, where the hub is within the city and the spokes are in the suburbs. Some companies choose to do work in the suburbs; that's where their clients are and where they interact. I don't think the specific office location is necessarily going to change. I think it's going to be focused on the ability to give their workforce the opportunity to say, "I'm going to be there all the time," or "I'm not going to be there all the time." And part of that is attracting new talent. The new generation of employees is going to demand the flexibility to say, "I'm going to work from home" or "I'm going to come into the office."
And I think that's where companies will find success, in not necessarily changing the office location, but changing what they're asking their employees to do. Host: Great. That gives us some good insight into the office market. I appreciate your taking the time to speak with us. Shilling: I appreciate it. Thank you for having me.

Transformation Hour Network International
Fragrance of the Spirit Episode 30

Transformation Hour Network International

Play Episode Listen Later Jul 29, 2020 5:22


*FRAGRANCE OF THE SPIRIT* 30 You're welcome to the FRAGRANCE OF THE SPIRIT PODCAST... EPISODE 30 I'm Pst Alo Olatokunboh Akin, your Host. *There's a higher level of Power that is available in the kingdom.* This level is called *'Authority'*, which is having power and the legal right or backing to make use of the power. For instance, a person can have a gun (power), but without the legal right (authority) to use it, if such a person uses the power, it will be counted as an offence. But when there's legal backing for the power you have, it becomes authority, which can be defined as the power or right to give orders or make decisions. In Luke 9:1, Jesus gave His disciples both power (dunamis in Greek) and authority (exousia, another Greek word), with which they could cast out devils and cure diseases. He didn't only give power to His disciples, He also gave them the right, privilege and backing to make use of the power without limits or hindrance. If you've not been making use of your rights and privileges *in Christ, I say to you categorically that you have rights over: lack, sickness, stagnation, sin, poverty, and every negative situation and circumstance*, therefore begin to use your rights from today. *May your spirit eyes be flooded with light for you to see your rights and privileges in Christ.* Declare that: *I take authority over every situation and circumstance now!* God bless you with the Fragrance of the Spirit. I remain your host, Alo Olatokunboh Akin. Pastor Dr Alo Olatokunboh Akin, Anchor Man, Transformation Hour Network International, Nigeria. 08052930044 --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

Microsoft Research India Podcast
Podcast: Potential and Pitfalls of AI with Dr. Eric Horvitz

Microsoft Research India Podcast

Play Episode Listen Later Mar 2, 2020


Episode 001 | March 06, 2020 Dr. Eric Horvitz is a technical fellow at Microsoft, and is director of Microsoft Research Labs, including research centers in Redmond, Washington, Cambridge, Massachusetts, New York, New York, Montreal, Canada, Cambridge, UK, and Bengaluru, India. He is one of the world's leaders in AI, and a thought leader in the use of AI in the complexity of the real world. On this podcast, we talk to Dr. Horvitz about a wide range of topics, including his thought leadership in AI, his study of AI and its influence on society, the potential and pitfalls of AI, and how useful AI can be in a country like India. Transcript Eric Horvitz: Humans will always want to make connection with humans – sociologists, social workers, physicians, teachers – we're always going to want to make human connections and have human contacts. I think they'll be amplified in a world of richer automation, so much so that even when machines can generate art and write music, even music with lyrics that might put a tear in someone's eye if they didn't know it was a machine, that will lead us to say, "Was that written by a human? I want to hear a song sung by a human who experienced something, the way I would experience something, not a machine." And so I think human touch, human experience, human connection will grow even more important in a world of rising automation, and those kinds of tasks and abilities will be even more compensated than they are today. (music plays) Host: Welcome to the Microsoft Research India podcast, where we explore cutting-edge research that's impacting technology and society. I'm your host, Sridhar Vedantham. Host: Our guest today is Dr. Eric Horvitz, Technical Fellow and director of the Microsoft Research Labs. It's tremendously exciting to have him as the first guest on the MSR India podcast because of his stature as a leader in research and his deep understanding of the technical and societal impact of AI. Among the many honors and recognitions Eric has received over the course of his career are the Feigenbaum Prize and the Allen Newell Prize for contributions to AI, and the CHI Academy honor for his work at the intersection of AI and human-computer interaction. He has been elected fellow of the National Academy of Engineering (NAE), the Association for Computing Machinery (ACM) and the Association for the Advancement of Artificial Intelligence (AAAI), where he also served as president. Eric is also a fellow of the American Association for the Advancement of Science (AAAS), the American Academy of Arts and Sciences, and the American Philosophical Society. He has served on advisory committees for the National Science Foundation, National Institutes of Health, President's Council of Advisors on Science and Technology, DARPA, and the Allen Institute for AI. Eric has been deeply involved in studying the influences of AI on people and society, including issues around ethics, law, and safety. He chairs Microsoft's Aether Committee on AI, Ethics, and Effects in Engineering and Research. He established the One Hundred Year Study on AI at Stanford University and co-founded the Partnership on AI. Eric received his PhD and MD degrees at Stanford University. On this podcast, we talk to Eric about his journey in Microsoft Research, his own research, the potential and pitfalls he sees in AI, how AI can help in countries like India, and much more. Host: Eric, welcome to the podcast. Eric Horvitz: It's an honor to be here. I just heard I am the first interviewee for this new series.
Host: Yes, you are, and we are really excited about that. I can't think of anyone better to do the first podcast of the series with! There's something I've been curious about for a long time. Researchers at Microsoft Research come with extremely impressive academic credentials. It's always intrigued me that you have a medical degree and also a degree in computer science. What was the thinking behind this, and how does one complement the other in the work that you do? Eric Horvitz: One of the deep shared attributes of folks at Microsoft Research, and so many of our colleagues doing research in computer science, is deep curiosity, and I've always been one of these folks that says "why" to everything. I'm sure my parents were frustrated with my sequences of whys, starting with one question and going to another. So I was very curious as an undergraduate. I did deep dives into physics and chemistry, of course math to support it all, and biology, and by the time I was getting ready to go to grad school I really was exploring so many sciences. But the big "why" for me that I could not figure out was the why of human minds, the why of cognition. I just had no intuition as to how the cells, these tangles of cells that we learn about in biology and neuroscience, could have anything to do with my second-to-second experience of being a human being. So, you know what, I had to just spend my graduate years diving into the unknowns about this from the scientific side of things. Of course, many people have provided answers over the centuries; some of the answers are the foundations of religious beliefs of various kinds and religious systems. So I decided to go get an MD-PhD: why not understand humans deeply, and human minds, as well as the scientific side of nervous systems? But I was still on an arc of learning as I hit grad school at Stanford, and it was great to be at Stanford because the medical school was right next to the computer science department. You could literally walk over, and I found myself sitting in computer science classes, philosophy classes, philosophy-of-mind-oriented classes and cognitive psychology classes. So there, to the side of that kind of grad school life and MD-PhD program, with the anatomy classes and being socialized into the medical school class, I was delighted by the pursuit of, you might call it, the philosophical and computational side of mind. And eventually I made the jump, the leap. I said, "You know what, my pursuit is principles. I think that's the best hope for building insights about what's going on." And I turned those principles around onto real-world problems; in particular, since I had a foot in the medical school, how do we apply these systems in time-critical settings to help emergency room physicians and trauma surgeons? Time-critical action, where computer systems had to act quickly, but really also had to act precisely when they maybe didn't have enough time to think all the way. And this led me to what I think is an interesting direction, which is models of bounded rationality, which I think describe us all. Host: Let's jump into a topic that seems to be on everybody's mind today – AI. Everyone seems to have a different idea about what AI actually is and what it means to them. I also constantly keep coming across people who use AI and the term ML or machine learning as synonyms. What does AI mean to you, and do you think there's a difference between AI and ML?
Eric Horvitz: The scientists and engineers who first used the phrase artificial intelligence did so in a beautiful document that's so well written, in terms of the questions it asks, that it could be a proposal today to the National Science Foundation, and it would seem modern given that so many of the problems have not been solved. But they laid out the vision, including the pillars of artificial intelligence. This notion of perception: building systems that could recognize or perceive and sense things in the world. This idea of reasoning: using logic or other methods to reason about problems and solve problems. Learning: how systems can become better at what they do with experience and with other kinds of sources of information. And this final notion they focused on as being very much in the realm of human intelligence: language, understanding how to manipulate symbols in streams or sequences to express concepts, and the use of language. So, learning has always been an important part of artificial intelligence; it's one of several pillars of work. It's grown in importance of late, so much so that people often write AI/ML to refer to machine learning, but it's one piece, and it's always been an important piece, of artificial intelligence. Host: I think that clarifies the difference between AI and ML. Today, we see AI all around us. What about AI really excites you, and what do you think the potential pitfalls of AI could be? Eric Horvitz: So let me first say that AI is a constellation of technologies. It's not a single technology. Although these days there's quite a bit of focus on the ability to learn how to predict or move or solve problems via machine learning, analyzing large amounts of data which has become available over the last several decades, when it used to be scarce. I'm most excited about my initial goals to understand human minds. So, whenever I read a paper on AI or see a talk or see a new theorem being proved, my first reaction is: how does it grow my understanding, how does it help to answer the questions that have been long-standing in my mind about the foundations of human cognition? I don't often say that to anybody, but that's what I'm thinking. Secondly, my sense is: what a great endeavor, to be pushing your whole life to better understand and comprehend human minds. It's been a slow slog. However, insights have come with advances and how they relate to those questions. But along the way, what a fabulous opportunity to apply the latest advances to enhancing the lives of people, to empowering people in new ways, and to creating new kinds of automation that can lead to new kinds of value, new kinds of experiences for people. The whole notion of augmenting human intellect with machines has been something that's fascinated me for many decades. So I love the fact that we can now leverage these technologies and apply them, even though we're still very early on in how these ideas relate to what's going on in our minds. Applications include healthcare. There's so much to do in healthcare with decreasing the cost of medicine while raising the quality of care. This idea of being able to take large amounts of data to build high-quality, high-precision diagnostic systems. Systems that can predict outcomes. We just created a system recently, for example, that can detect when a patient in a hospital is going to crash unexpectedly with organ system failures, and that can be used in ways that could alert physicians in advance, so medical teams can be ready to actually save patients' lives.
Even applications that we're now seeing in daily life, like cars that drive themselves. I drive a Tesla and I've been enjoying the experience of the semi-automated driving the system can do. Just seeing how far we've gotten in a few years with systems that recognize patterns, like the patterns on a road, or that recognize objects in their way for automatic braking. These systems can save thousands of lives. I'm not sure about India, but I know the United States statistics, and there are a little bit more than 40,000 lives lost on the highways in the United States per year. Looking at the traffic outside here in Bangalore, I'm guessing that India is at least up there with tens of thousands of deaths per year. I believe that AI systems can reduce these numbers of deaths by helping people to drive better, even if it's just in safety-related features. Host: The number of fatalities on Indian roads is indeed huge, and that's in fact been one of the motivators for a different research project in the lab, on which I hope to do a podcast in the near future. Eric Horvitz: I know, it's the HAMS project. Host: It is the HAMS project, and I'm hoping that we can do a podcast with the researchers on that sometime soon. Now, going back to AI, what do you think we need to look out for or be wary of? People, including industry leaders, seem to land on various points on a very broad spectrum, ranging from "AI is great for humanity" to "AI is going to overpower and subsume the human race at some point of time." Eric Horvitz: So, what's interesting to me is that over the last three decades we've gone from "AI stands for almost implemented" – it doesn't really work very well, have fun, good luck – to this idea of just getting things up and running and being so excited that there's no other concern but to get this thing out the door and have it, for example, help physicians diagnose patients more accurately, to now: "Wait a minute! We are putting these machines in places that historically have always relied upon human intelligence. As these machines for the first time edge into the realm of human intellects, what are the ethical issues coming to the fore? Are there intrinsic biases in the way data is created or collected, some of which might come from the society's biases that create the data? What about the safety issues and the harms that can come from these systems when they make a mistake? When will systems be used in ways that could deny people consequential services like a loan or education because of an unfair decision, or a decision that aligns mysteriously or obviously with the way society has worked, amplifying deep biases that have come through our history?" These are all concerns that many of us are bringing to light, asking for more resources and attention to focus on them, and also trying to cool the jets of some enthusiasts who want to just blast ahead and apply these technologies without thinking deeply about the implications, I'd say sometimes the rough edges, of these technologies. Now, I'm very optimistic that we will find pathways to getting incredible amounts of value out of these systems when properly applied, but we need to watch out for all sorts of possible adverse effects when we take our AI and throw it into the complexity of the open world outside of our clean laboratories. Host: You've teed up my next question perfectly.
Is it incumbent upon the large tech companies who are leading the charge on AI to be responsible for what AI is doing, and for the ethics and the fairness and all the work behind AI that makes it equitable for people at large? Eric Horvitz: It's a good question. There are different points of view on that question. We've heard some company leaders issue policy statements along the lines of: "We will produce technologies and make them available, and it's the laws of the country that will help guide how they're used or regulate what we do. If there are no laws, there's no reason why we shouldn't be selling something, with a focus on profit matching our zeal for technology." Microsoft's point of view has been that the technology being created by experts inside its laboratories and by its engineers is sometimes getting ahead of where legislation and regulation need to be, and therefore we bear a responsibility as a company, both in informing regulatory agencies and the public at large about the potential downsides of technology and appropriate uses and misuses, and in looking carefully at what we do when we actually ship our products or make a cloud service available or build something for a customer. Host: Eric, I know that you personally are deeply involved in thinking through AI and its impact on society, how to make it fair, how to make it transparent and so on. Could you talk a little bit about that, especially in the context of what Microsoft is doing to ensure that AI is actually good for everybody? Eric Horvitz: You know, this is why this is such a passion for me. I've been extremely interested, starting with the technical issues, which I think are really deep and fascinating: when you build a limited system that by definition is much simpler than the complex universe it's going to be immersed in, and you take it from the laboratory into the open world. I refer to that as AI in the open world. You learn a lot about the limitations of the AI. You also learn to ask questions and to extend these systems so they're humble, they understand their limitations, they understand how accurate they are; you give them a level of self-knowledge. This is a whole area of open-world intelligence that I think really reads upon some of the early questions for me about what humans are doing, what their minds are doing, and potentially other animals, vertebrates. It started there for me. Back to your question now: we are facing the same kinds of things when we take an AI technology and put it in the hands of a judge who might make decisions about criminal justice, looking at recommendations based on statistics to help him or her take an action. Now we have to realize that the systems we're building work with people. People want explanations. They don't want to look at a black box with an indicator on it. They will say: why is this system telling me this? So at Microsoft we've made significant investments, in our research team, in our engineering teams and in our policy groups, in thinking through the details of the problems and solutions when it comes to a set of challenges, and I'll just list a few right now. Safety and robustness of AI systems; transparency and intelligibility of these systems – can they explain themselves?; bias and fairness – how can we build systems that are fair along certain dimensions?; engineering best practices. Well, what does it mean for a team working with tools to understand how to build a system and maintain it over time so that it's trustworthy?
Human-AI collaboration: what are the principles by which we can enable people to better work in a fluid way with systems that might be trying to augment their intelligence, such that there is a back and forth and an understanding of when a system is not confident, for example. Even notions about attention and cognition: are these systems being used in ways that might be favorable to advertisers, grabbing your attention and holding it on an application because they've learned how to do that mysteriously – should we have a point of view about that? So Microsoft Research has stood up teams looking at these questions. We also have stood up an ethics advisory board that we call the Aether Committee to deliberate and provide advice on hard questions that are coming up across the spectrum of these issues, providing guidance to our senior leadership team at Microsoft in how we do our business. Host: I know you were the co-founder of the Partnership on AI. Can you talk a little bit about that and what it sought to achieve? Eric Horvitz: This vision arose literally at conferences; in fact, one of the key meetings was at a pub in New York City after a meeting at NYU, where several computer scientists got together, all passionate about seeing it go well for artificial intelligence technologies by investing in understanding and addressing some of these rough edges. And we decided we could bring together the large IT companies – Amazon, Apple, Facebook, Google, Microsoft – to think together about what it might mean to build an organization, a nonprofit, that balanced the IT companies with groups in civil society, academic groups, and nonprofit AI research, to think through these challenges and come up with best practices in a way that brought the companies together rather than separating them through a competitive spirit. Actually, this organization was created by the force of the friendships of AI scientists, many of whom go back to being in grad school together across many universities, this invisible college of people united in an interest in understanding how to do AI in the open world. Host: Do you think there is a role for governments to play where policies governing AI are concerned, or do you think it's best left to technology companies, individual thinkers and leaders to figure out what to do with AI? Eric Horvitz: Well, AI is evolving quickly, and like other technologies, governments have a significant role to play in assuring the safety of these technologies, their fairness, their appropriate uses. I see regulatory activity being largely in the hands of governments, advised by leadership in academia and in industry, and by the public, which has a lot to say about these technologies. There's been quite a bit of interest and activity; some of that is part of the enthusiastic energy, you might say, going into thinking through AI right now. Some people say there's a hype cycle that's leaking everywhere, into all regimes, including governments right now, but it's great to see various agencies writing documents, asking for advice, looking for sets of principles, publishing principles and engaging multi-stakeholder groups across the world. Host: There's been a lot of talk and many conversations about the impact that AI can have on the common man. One of the areas of concern with AI spreading is the loss of jobs at a large scale. What's your opinion on how AI is going to impact jobs?
Eric Horvitz: My sense is there's a lot of uncertainty about this: what kinds of jobs will be created, what kinds of jobs will go away. If you take a segment like driving cars, I was surprised at how large a percentage of the US population makes their living driving trucks. Now, what if the long-haul parts of truck driving, the long highway stretches, go away when they become automated? It's unclear what the ripple effects of that will be on society, on the economy. It's interesting; there are various studies underway. I was involved in the international academy study looking at the potential effects of new kinds of automation coming via computer science and other related technologies, and the result of that analysis was that we're flying in the dark. We don't have enough data to make these decisions yet, or to make these recommendations, or to have an understanding of how things are going to go. So, we see people saying things on all sides right now. My own sense is that there'll be some significant influences of AI on our daily lives and how we make our livings. But I'll say one thing. One of my expectations, and it's maybe also a hope, is that as we see more automation in the world, and as that shifts the nature of what we do daily and what we're paid or compensated to do, what we call work, there'll be certain aspects of human discourse that we simply will learn, for a variety of reasons, that we cannot automate, we aren't able to automate, or we shouldn't automate. The way I refer to this is: amidst the rise of new kinds of automation, some of it reading on tasks and abilities we would have in the past assumed were the realm of human intellect, we'll see a concurrent rise of an economy around human caring. You think about this: humans will always want to make connection with humans – sociologists, social workers, physicians, teachers – we're always going to want to make human connections and have human contacts. I think they'll be amplified in a world of richer automation, so much so that even when machines can generate art and write music, even music with lyrics that might put a tear in someone's eye if they didn't know it was a machine, that will lead us to say, "Was that written by a human? I want to hear a song sung by a human who experienced something, the way I would experience something, not a machine." And so I think human touch, human experience, human connection will grow even more important in a world of rising automation, and those kinds of tasks and abilities will be even more compensated than they are today. So, we'll see even more jobs in this realm of human caring. Host: Now, switching gears a bit, you've been in Microsoft Research for a long time. How have you seen MSR evolve over time, and as a leader of the organization, what's your vision for MSR over the next few years? Eric Horvitz: It's been such an interesting journey. When I came to Microsoft Research it was 1992, and Rick Rashid and Nathan Myhrvold convinced me to stay, along with two colleagues. We had just come out of Stanford grad school and had ideas about going into academia. We came up to Microsoft to visit; we thought we were just here for a day to check things out. There were maybe seven or eight people in what was then called Microsoft Research, and we said, "Oh come on, please" – we didn't really see a big future. But somehow we took a risk, and we loved this mission statement that starts with "Expand the state of the art." Period.
Second part of the mission statement: "Transfer those technologies as fast as possible into real products and services." Third part of the statement: "Contribute to the vibrancy of this organization." I remember seeing in my mind, as we committed to doing this, to trying it out, a vision of a lever with the fulcrum at the mountaintop on the horizon. And I thought: how can we make this company ours, our platform, to take our ideas, which then were bubbling – we had so many ideas about what we could do with AI from my graduate work – and move the world? And that's always been my sense of what Microsoft Research has been about. It's a place where the top intellectual talent in the world, top scholars, often with entrepreneurial bents, who want to get something done, can make Microsoft their platform for expressing their creativity and having real influence, enhancing the lives of millions of people. Host: Something I've heard for many years at Microsoft Research is that finding the right answer is not the biggest thing; what's important is to ask the right, tough questions. And also that if you succeed in everything you do, you are probably not taking enough risks. Does MSR continue to follow these philosophies? Eric Horvitz: Well, I'd say three things about that. First of all, why should a large company have an organization like Microsoft Research? It's unique. We don't see that even in competitors. Most competitors are taking experts, if they can attract them, and embedding them in product teams. Microsoft has had the foresight – and we're reaching 30 years now since we kicked off Microsoft Research – to say: if we attract this top talent into the company, and we give these people time, and we familiarize them with many of our problems and aspirations, they can not only come up with new ideas, out-of-the-box directions, they can also provide new kinds of leadership to the company as a whole, setting its direction, providing a weathervane, looking out to the late-breaking changes on the frontiers of computer science and other sciences, and helping to shape Microsoft in the world – versus, for example, helping a specific product team do better with an existing, current conception of what a product should be. Host: Do you see this role of Microsoft Research changing over the next few years? Eric Horvitz: Microsoft has changed over its history, and one of my interests and my reflections – and I shared this in an all-hands meeting just last night with MSR India, in fact trying out some new ideas coming out of a retreat that the leadership team from Microsoft Research had in December, just a few months ago – is how we might continue to think and reflect about being the best we can be, given who we are. I've called it polishing the gem: not breaking it, but polishing, buffing it out, thinking about what we can do with it to make ourselves even more effective in the world. One trend we've seen at Microsoft is that over the years we've gone from Microsoft Research as this separate tower of intellectual depth reaching out into the company in a variety of ways – forming teams, advising, working with outside agencies, with students in the world, with universities – to a larger ecosystem of research at Microsoft, where we have pockets, or advanced technology groups, around the company doing great work and in some ways doing the kinds of things that Microsoft Research used to be doing, or was solely doing at Microsoft.
So we see that upping the game as to what a center of excellence should be doing. I’m asking the question right now: what are our deep strengths – this notion of deep scholarship, deep ability – how can we best leverage them for the world and for the company, and how can we work with other teams in the larger R&D ecosystem that has come to be at Microsoft?

Host: You’ve been at the India lab for a couple of days now. How has the trip been, and what do you think of the work that the lab in India is doing?

Eric Horvitz: You know, we just hit 15 here – the lab is 15 years old, so it’s in its adolescence; it’s a teenager. It seems like just yesterday that I was sitting with Anandan, the first director of this lab, looking at a one-pager he had written about standing up a lab in India. I was sitting in Redmond having coffee, and I tell you, that was a fast 15 years – but it’s been great to see what this lab became and what it does. Each of our labs is unique in so many ways, typically based on the culture it’s immersed in. The India lab is famous for its deep theoretical chops – fabulous theorists here, the best in the world – and for an interdisciplinary spirit of taking theory and melding it with real-world challenges to create incredible new kinds of services and software. One of the marquee areas of this lab has been taking a hard look, an insightful gaze, at emerging markets and Indian culture all up, and thinking about how computing, computing platforms, and communications can be harnessed in a variety of ways to enhance the lives of people: how they can be better educated, how we can make farms and agriculture more efficient and productive, how we can think about new economic models and new kinds of jobs, how we can leverage new notions of what it means to do freelance or gig work. So the lab has its own feel, its own texture, and when I immerse myself in it for a few days I just love getting familiar with the latest new hires, the new research fellows, the young folks coming out of undergrad who are bright-eyed and inject energy into this place. I find Microsoft Research India to have a unique combination of talented researchers and engineers who bring to the table some of the world’s deepest theoretical understanding of hard computer science, including the challenge of understanding the foundations of AI systems. There’s a lot of work going on right now in machine learning, as we discussed earlier, but we don’t have a deep understanding, for example, of how these neural network systems work and why they’re working so well. I just came out of a meeting where folks in this lab have come up with some of the first insights into why some of these procedures work so well. To understand that – and to understand their limitations, which ways to go, and how to navigate these problems – is rare; it takes a deep focus and the ability to understand the complexity arising in these representations and methods. At the same time, we have the same kind of focus and intensity in our gaze at culture and at emerging markets. There are some grand challenges in understanding the role of technology in society when it comes to a complex civilization – or I should say set of civilizations – like we see in India today.
It’s a mix of futuristic, out-of-the-box advanced technology with rural farms and classical ways of doing things – meshing the old and the new, with so many differences as you move from province to province, state to state. The sociologists and practitioners who are looking carefully at ethnography, epidemiology, and sociology, coupled with computer science, are doing fabulous things here at the Microsoft Research India lab – even coming up with new thinking about how we can mesh opportunistic Wi-Fi with sneakers: Sneakernet, people walking around to share large amounts of data. I don’t think that project would have arisen anywhere but at this lab.

Host: Right. So you’ve again teed up my next question perfectly. As you said, India is a very complex place in terms of societal inequities and wealth inequalities.

Eric Horvitz: And technical inequality – it’s amazing how different things are from place to place.

Host: That’s right. So what do you think India can do to utilize AI better, and do you think India is a place that can generate new, innovative kinds of AI?

Eric Horvitz: Well, the latter is absolutely going to be true, because some of the best computer science talent in the world is being educated and is working in this country, so of course we will see fabulous innovations originating in India, both in the universities and in research labs, including Microsoft Research. As to how to harness these technologies: it takes a special skill to look at the currently available capabilities in a constellation of technologies and to think deeply about how to take them into the open world – the real world, the complex, messy world. It often takes insight, as well as a very caring team of people, to stick with an idea, to try things out, to watch it and nurture it, and to involve multiple stakeholders in watching over time how a deployment works, gathering data about it, and so on. I think some very promising areas include healthcare. There are some sets of illnesses that are low-hanging fruit for early detection and diagnosis: understanding where we could intervene early, by looking at pre-diabetic states, for example, and guiding patients to care early on so they don’t progress to more serious pathophysiologies; understanding when someone needs to be hospitalized and how long they should be hospitalized. In a resource-limited realm, we have to allocate resources selectively, and doing that more optimally can lead to great effects. Then there’s education: how to educate people, how to engage them over time, diagnosing which students might drop out early on and alerting teachers to invest more effort, understanding when students don’t understand something and automatically helping them get through a hard concept. We’re seeing interesting breakthroughs now in tutoring systems that can detect these states. Transportation – it’s funny, we build systems in the United States, and this is what I was doing, to predict traffic and route cars ideally. Then we come to India, we look at the streets here, and we say, “I don’t think so – we need a different approach.” But that just raises the stakes on how we can apply AI in new ways. So the big pillars are education, healthcare, transportation, and even understanding how to guide resources and allocations in the economy. I think we’ll see big effects of insightful applications in this country.

Host: This has been a very interesting conversation. Before we finish, do you want to leave us with some final thoughts?

Eric Horvitz: Maybe I’ll make a call-out to young folks who are thinking about their careers and what they might want to do, and assure them that it’s worth it. It’s worth investing in taking your classes seriously, in asking lots of questions, in having your curiosities addressed by your teachers, your colleagues, and your family. There’s so much excitement and fun in doing research and development: being able to build things, feel them, and see how they work in the world – and maybe most of all, being able to take ideas into reality in ways where you can see the output of your efforts and ideas really delivering value to people in the world.

Host: That was a great conversation, Eric. Thank you!

Eric Horvitz: Thank you, it’s been fun.

Geo Radio - Waitaki Whitestone Geopark
Whitestone Geopark trustee Helen Jansen explains the ideas behind the Waitaki Whitestone Geopark

Geo Radio - Waitaki Whitestone Geopark

Play Episode Listen Later Sep 24, 2019 3:13


Co-chair Helen Jansen was on Real Radio 104 and talked about what a Geopark is and the great opportunities it offers the community. Missed it? No problem, you can listen to it here.

Transcript:

Host: Right, time to talk about the Geopark. And we are joined by Helen Jansen, one of the trustees of the Whitestone Waitaki Geopark project. How are you doing?

Helen: Absolutely splendid. It’s a great day.

Host: Absolutely. Thank you for coming in.

Helen: It’s an opportunity for Waitaki to take advantage of a framework – a globally recognized framework based on the geology of the area. It’s got to be unique, and in Waitaki we’ve got unique geology. We tell a story about incredibly ancient geotectonic movements right through to the present. It’s about telling the story of the people, the animals, and the plants that have taken advantage of this geology right up to the present day – and going into the future, because the land that we live and work in sculpts the things that have happened, that do happen, and that can happen in the future.

Host: What’s the potential as far as tourism is concerned here?

Helen: Geoparks around the world have increased their visitor numbers and their local economies many times over, so it’s an opportunity – one we have had scoped, and the scoping shows there will be enormous benefits to the district, depending, of course, on the level of investment you’re prepared to put into it.

Host: There’s been a lot of talk about the money that goes into it, but the amount that we are investing as the Waitaki district, versus the amount of potential investment from external sources – we’re only putting in a small portion, potentially, aren’t we?

Helen: Absolutely. And we broke it down into small stages, so the district plan has ongoing support for it for ten years. We’ve taken the annual amounts for the last couple of years, and we have just been granted money from this year’s surplus by the council, in order to get us to the point where the UNESCO Geopark application can be completed and assessed. And we are in a very good space to be accredited as a Geopark. We cannot count our chickens, but we are in a very good place to do that.

Host: So why should people get behind the idea of a Geopark?

Helen: Because it’s going to unify the district. It’s going to give opportunity to all sorts of things – initially tourism opportunities right along the east coast and up through Oamaru and right up through Ohau. We’ve already seen excitement from the culinary world and the producers of food in the Geogastronomy group that has just started. They are champing at the bit to get this going, so this is really a great opportunity. We’ve got investors lined up, ready to sign contracts now that we’ve got this assurance that we can get this through to June. We will have those investors joining up, so it’s a huge opportunity for this district and for everyone. It depends very much on the energy and enthusiasm the district generates as to how much investment we will be able to bring.

Host: Thanks, Helen. We’ll catch up again soon.

Helen: Thanks very much!

Devchat.tv Master Feed
AiA 216: Building a Complete Web Application from Scratch Alone with Amir Tugendhaft

Devchat.tv Master Feed

Play Episode Listen Later Nov 20, 2018 42:56


Panel: Aaron Frost, Brian Love
Special Guest: Amir Tugendhaft
In this episode, Aaron and Brian talk with Amir Tugendhaft, a web developer located in Israel who finds much gratification in developing and building things from scratch. Check out today’s episode where Aaron, Brian, and Amir talk about just that. Other topics include UI design, Flexbox, UX design, PrimeNG, and ag-Grid.

Show Topics:

0:00 – Advertisement: AngularBootCamp.Com
0:52 – Host: Welcome! Today’s panel is myself, Brian, and our guest is Amir Tugendhaft!
1:13 – Guest: I am a developer with experience in Angular and React.
1:56 – Host: You spend your days/nights there?
2:03 – Panel: He is committed.
2:08 – Host: I am going to back up a second. Brian, could you please introduce yourself?
2:26 – Brian: I am the CETO at an Angular consulting firm (Denver, CO). We have the pleasure of working with Aaron from time to time. My Twitter handle is @brian_love – check it out!
2:52 – Host: What does CETO stand for?
2:59 – Brian answers the question. Brian: I oversee the crew, among other things.
3:31 – Host: What do you want to talk about today, Amir? You are the guest of honor today!
3:40 – Guest.
4:00 – Host: That is a lot of information – that might be more than one episode. We have to stay focused!
4:14 – Host: I read one of your recent blogs, about cross-field validators. I met you through your blog before we did the show. Host: Give us your own ideas about starting your own app.
4:50 – Guest answers the question.
6:17 – Host: I am biased, but here is a fact. I used to work on a large team (60 people) with everyone committing to the same single-page app. We were using Angular.js 1.5, which I think they are still using. I know that it worked, but it wasn’t the easiest or fastest one to maintain – but it worked.
7:05 – Brian.
7:10 – Host: What are you trying to do? React doesn’t fulfill that need. I think you are being hyperbolic and using extreme cases as the norm. Let’s be honest: we did cool stuff with jQuery plugins when we didn’t have a framework. When they say that the framework is stopping them, I say: I agree to disagree.
8:00 – Host: What do you think, Amir?
8:04 – Guest: I don’t have preferences. I try to build applications through the technologies and create components and simple applications.
8:30 – Brian.
8:33 – Guest: You create the component, and then...
9:21 – Brian: You don’t have to have a template file and another file – right?
9:35 – Guest.
9:48 – Host: I do inline styles and inline templates. One thing I learned from React is that I like my HTML, style, and code together – I like them being in the same file as my component. I like single-file components. It prompts you to simplify if it gets too big: yeah, if it’s more than 500 lines, you have to simplify. That’s one of the things that I like. (See the single-file component sketch after these show notes.)
10:47 – Brian: Modules versus...
10:55 – Guest.
11:07 – Host: I think in React and Vue you have the word module, but in JavaScript you have a file that exports...
11:26 – Host: I have my opinion here, from talking with Joe. He made a good point: at a certain level, the frontend frameworks are the same. You could be doing different things, but they basically do the same thing.
13:57 – Guest: Basically what that means is that whichever technology is used, it will do the same thing. Your patterns and practices are what matter.
14:17 – Brian: If you are talking about the three popular frameworks out there – they are basically doing the same thing. I like Angular a little bit more, though.
Like you said, Aaron, people tend to pick the same one. I like the opinionated things about Angular. With properties and components – called props or inputs – you are getting a lot of the same features. It comes down to your personal preference.
15:31 – Host: What else, Amir?
15:35 – Guest: Let’s talk about the UI.
16:05 – Brian.
16:08 – Guest asks a question.
16:25 – Brian: How have you tackled this problem?
16:34 – Guest: I kind of ran with it. If there wasn’t something that I liked, I started from scratch, because it really didn’t feel right.
16:51 – Brian: I am an enemy of the starting-over type of thing. You have a lot of engineers who START projects, and they can say that they started this piece, but the expert team members have what it takes to ship a feature – I mean fully ship it, not just 80%, but also the final 20%. I think it takes a lot of poise in decision-making to say, “I want to rewrite it, but not right now; I still need to ship this code.” I have always been a bigger fan of not rewriting as much as possible; however, if you started with good patterns then that holds, but if you are starting off with bad patterns then maybe a rewrite makes sense. I like that opinion b/c you have to start right. Brian: How do you do your CSS?
19:05 – Guest.
19:52 – Advertisement: Get A Coder Job!
20:30 – Brian: How do you make those decisions, Amir?
20:39 – Guest: I see something that I like, ask myself how to apply it to my design, and start scaling things.
21:50 – Host: Are you using a tool like Sketch for your initial UI design?
22:05 – Guest.
22:54 – Host: I worked on a project where the client had a designer (UX).
24:00 – Host and Guest go back and forth.
24:51 – Host: I am sure it’s all about the quality from your designer, too. Hopefully it works well for you and it’s quality.
25:18 – Host: There is a lot to building an app from scratch. I am not a good designer. I am not a designer – I mean straight-up, I got nothing. I appreciate team members who can do that.
26:06 – Guest: Do you write...?
26:35 – Host: Only on the most recent project. The designer didn’t own the HTML/CSS, but he initially wrote it and then gave it to me, and now I own it, and it’s in components. If he wants updates, I have to go and make changes b/c he doesn’t know Angular. If it’s a sketch or a PNG, you have to make it look like that. That’s what most of my career has been. Host: HTML and CSS got 762x easier for me once Flexbox came around! I know there is a decimal in there!
28:23 – Host talks about Flexbox some more.
28:42 – Guest asks a question.
28:50 – Host: I suppose if I really had heavy needs for a table, then I would try CSS Grid – it could solve some problems. I might just use a styled table.
29:12 – Brian: ag-Grid or something else.
29:21 – Host: On this recent project I’ve used in-house design and other things. If I ever needed a table, it was there. I don’t rebuild components b/c that can get expensive for me.
30:50 – Brian: Accessibility.
31:00 – Host: Your upgrade just got 10x harder b/c you own the component loop. I really don’t build tables or drop-downs. The only way is if I really need to build one for a specific request.
31:30 – Brian.
31:58 – Host: Let me give you an example. You can think I am crazy, but a designer gave me a drop-down, and he told me to use PrimeNG. I had the choice of building my own drop-down, or the designer has to accept whatever they gave him. I made the UI look the way he wanted and made the drop-down zero opacity, and then... (a sketch of this overlay trick follows these show notes). Host: When you click on what you see, you are clicking on the...
Host: Does that make sense?
33:35 – Guest.
33:50 – Host.
34:25 – Brian: That is interesting; remember when...
34:58 – Host: We will send this episode to Jeremy – come on, Jeremy! Any last ideas? Let’s move on to picks!
35:20 – Advertisement – Fresh Books! 30-day free trial!
END – Advertisement – Cache Fly!

Links: Vue, jQuery, Angular, React, C#, What is a UX Design?, UI Design, Flexbox, Sketch, ag-Grid, PrimeNG, Brian Love’s Twitter, Aaron Frost’s Medium, Amir’s Medium, Amir’s Twitter, Amir’s GitHub, Amir’s LinkedIn, Amir’s Facebook

Sponsors: Angular Boot Camp, Fresh Books, Get a Coder Job Course, Cache Fly

Picks:
Aaron – Movie: “A Star Is Born”; Concept: Model-Driven Forms
Amir – Puppeteer; Arrow Function Converter
Brian – TV Series: “The 100”; Angular Schematics
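For listeners who want to see what the inline-templates discussion at 9:48 looks like in practice, here is a minimal sketch of an Angular single-file component: template, styles, and logic all in one file, with the styles using the Flexbox layout the hosts credit with making CSS so much easier (26:35 and 28:23). The component and selector names are made up for illustration, and in a real app the component would still need to be declared in, or imported by, a module that provides *ngFor (CommonModule).

```typescript
// card-row.component.ts – hypothetical name, for illustration only
import { Component } from '@angular/core';

@Component({
  selector: 'app-card-row',
  // Inline template: the HTML lives next to the logic that drives it.
  template: `
    <div class="row">
      <div class="card" *ngFor="let title of titles">{{ title }}</div>
    </div>
  `,
  // Inline styles: a few Flexbox rules replace the old float/margin hacks.
  styles: [`
    .row {
      display: flex;         /* lay the cards out horizontally */
      flex-wrap: wrap;       /* wrap onto new rows on narrow screens */
      gap: 1rem;             /* even spacing without margin tricks */
      align-items: stretch;  /* equal-height cards for free */
    }
    .card {
      flex: 1 1 200px;       /* grow, shrink, 200px base width */
      padding: 1rem;
      border: 1px solid #ccc;
    }
  `],
})
export class CardRowComponent {
  titles = ['One', 'Two', 'Three'];
}
```

One nice property of this style, echoed in the episode: when a component like this creeps past a few hundred lines, the single file makes the bloat obvious and prompts you to split it up.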
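And here is a guess at the “zero opacity” drop-down trick described at 31:58: the designer’s custom UI is rendered normally, and the real, working control is stretched over it at opacity 0, so a click on the pretty element actually lands on the functional drop-down. The episode used a PrimeNG drop-down; this sketch substitutes a native select element so it stays dependency-free, and every name in it is hypothetical.

```typescript
// styled-select.component.ts – hypothetical sketch of the overlay trick
import { Component } from '@angular/core';

@Component({
  selector: 'app-styled-select',
  template: `
    <div class="wrapper">
      <!-- What the user sees: the designer's custom UI. -->
      <div class="pretty">{{ selected || 'Pick a framework' }} ▾</div>
      <!-- What the user actually clicks: the real control, made invisible. -->
      <select class="overlay" (change)="onChange($event)">
        <option *ngFor="let opt of options" [value]="opt">{{ opt }}</option>
      </select>
    </div>
  `,
  styles: [`
    .wrapper { position: relative; width: 220px; }
    .pretty  { padding: 0.5rem 1rem; border-radius: 8px; background: #222; color: #fff; }
    .overlay {
      position: absolute;
      inset: 0;       /* cover the styled element exactly */
      opacity: 0;     /* invisible but still receives clicks */
      cursor: pointer;
    }
  `],
})
export class StyledSelectComponent {
  options = ['Angular', 'React', 'Vue'];
  selected = '';

  onChange(event: Event) {
    // Mirror the native control's value into the custom-styled UI.
    this.selected = (event.target as HTMLSelectElement).value;
  }
}
```

As Brian hints at 30:50, overlays like this are exactly where accessibility and future upgrades get harder, which is part of why the hosts prefer not to rebuild drop-downs at all.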

Medical Intel
DCIS Breast Cancer

Medical Intel

Play Episode Listen Later Oct 9, 2018 17:37


DCIS is sometimes called “stage zero breast cancer.” It’s contained to the milk ducts, and the primary concern is whether it comes back after treatment. Dr. Patricia Wehner discusses how we diagnose and treat this disease.

TRANSCRIPT

Intro: MedStar Washington Hospital Center presents Medical Intel, where our healthcare team shares health and wellness insights and gives you the inside story on advances in medicine.

Host: Thanks for joining us today. We’re speaking with Dr. Patricia Wehner, a breast surgeon with fellowship training in surgical oncology for the MedStar Regional Breast Health Program. Today we’re discussing ductal carcinoma in situ, or DCIS, breast cancer. Welcome, Dr. Wehner.

Dr. Patricia Wehner: Thank you!

Host: What is ductal carcinoma in situ, or DCIS, breast cancer?

Dr. Wehner: So, ductal carcinoma in situ, which is referred to as DCIS for short, is a stage 0 type of non-invasive breast cancer. What that means is that DCIS is a type of breast cancer confined to just the milk ducts of the breast. If you think of the composition of a breast, it’s composed of multiple milk-carrying pipes or tubes that come through the breast and join up at the nipple. DCIS is a cancer contained just within those milk ducts.

Host: Is DCIS life-threatening, if it doesn’t spread?

Dr. Wehner: So, ductal carcinoma in situ, or DCIS, in and of itself is a non-invasive cancer, and that’s why we call it a stage 0 cancer – stage 0, again, meaning confined to the milk ducts of the breast. If treated, DCIS is not necessarily life-threatening. The concern with DCIS is recurrence – after it’s treated, will it come back? And if it comes back, will it come back as an invasive cancer? The difference is that invasive cancers have the potential to spread. They can go to other places in the body – the lungs, the liver, the brain. DCIS, in and of itself, does not have that ability, but when it morphs into an invasive cancer, that’s when it can spread to these other locations.

Host: How is DCIS detected if it’s just in the milk ducts?

Dr. Wehner: DCIS is generally detected with screening mammography. Within the United States, about 55 million screening mammograms are performed on a yearly basis, and of those, about 1 in 1,300 will show ductal carcinoma in situ. For the majority of patients, when we look at a screening mammogram, what we’re looking for is whether there are any new lumps – what we consider to be nodules or asymmetries – or any new areas of calcium, or calcifications, that have shown up. In the majority of cases, DCIS shows up as new calcifications: essentially, new little white dots on the mammogram. White dots on a mammogram don’t necessarily equate to a cancer, and that’s why, when we see them, we ask a couple of questions: Are they new? Are they increasing in size or number? Are they fine and tiny, or are they big and chunky? If they’re big and chunky, then generally we don’t have to worry about them.
But if they’re small and fine and clustered or grouped together, that’s when we get concerned and will recommend a minimally invasive needle biopsy to determine: is this DCIS, or is this something we don’t need to worry about – just representative of normal changes that we can see in the breast? Whenever we recommend a core needle biopsy, what’s involved is coming into an imaging center. We take pictures with the help of a mammogram machine, and then we place some local, numbing medicine within the breast. Then a small needle is placed within the breast, around the area where the calcifications are located, and a little bit of tissue is removed. That tissue is then looked at by the pathologist under the microscope, and that’s what gives us our diagnosis of cancer or not cancer. So, going back to the number of annual screenings: about 1 in 1,300 patients who get screened with a mammogram throughout the year will actually end up positive for DCIS. There are about 51,000 new diagnoses of DCIS, or stage 0 breast cancer, each year.

Host: Do all women with DCIS require treatment, if they’re at average risk?

Dr. Wehner: When DCIS is found – when you have a new diagnosis of ductal carcinoma in situ – the standard recommendation is a multidisciplinary approach to treatment. This may include surgery, it may include radiation therapy, and it may include a pill that’s essentially an anti-hormone pill, or endocrine therapy. Chemotherapy is not something we will ever use for ductal carcinoma in situ. Chemotherapy is what everybody thinks of when they think of cancer treatment, because chemotherapy is the stuff that can make you feel sick, make your hair fall out, make you sick to your stomach. We don’t use that for DCIS. As a standard, if it’s small – so the area of calcifications, the area of known cancer, is small – then we can generally remove it by doing something called a partial mastectomy. Now, for a partial mastectomy there are a lot of terms that mean the same thing: you may hear it called a lumpectomy, or removal of just the cancer, or breast conservation therapy. All of those terms mean exactly the same thing. The goal is to remove where the cancer cells are living and to get a little bit of healthy, non-cancerous tissue around them. Whenever we remove just part of the breast and leave the rest behind, we generally recommend radiation therapy to follow. Radiation generally doesn’t start until a month or so after the completion of surgery and after healing, and then it can run anywhere from three to six weeks, on a daily basis. That’s not standard for each and every patient, but that’s the general guideline we use. Now, we currently have a clinical trial that has just opened that’s looking at observation alone for certain cases of ductal carcinoma in situ, rather than moving forward with surgery. That particular trial is called the COMET trial. For certain patients who have fairly favorable DCIS – and favorable just refers to various things we see under the microscope – rather than moving to surgery first, those patients can be observed with mammograms every six months. And as long as nothing is changing on the imaging, we can continue to observe them.

Host: Why is radiation therapy preferred over chemotherapy for DCIS?

Dr. Wehner: Radiation therapy and chemotherapy treat cancer in very different ways. When we talk about cancer treatments, we talk about local/regional treatments and systemic treatments. What that means is that certain treatments are given just to the breast and the surrounding breast tissue, and other treatments are given to the entire body. The point of both is the same – to treat the breast cancer – they just go about it in different ways. For stage 0 ductal carcinoma in situ, we know that giving a systemic, whole-body treatment through chemotherapy doesn’t work. It doesn’t affect the recurrence of this stage 0 cancer, and it doesn’t affect overall lifetime survival with this type of breast cancer. That’s not true for other, invasive cancers, but for DCIS, chemotherapy just has no real added benefit, which is why we don’t use it. Radiation therapy, however, is a treatment directed just at the breast and the surrounding breast tissue, and the point of radiation is to reduce the risk of recurrence. What we want to do is reduce the risk of DCIS coming back, because we know that when it comes back, about half the time it comes back in that invasive form. And it’s the invasive form we worry about most, because that’s the form that can spread to other parts of the body.

Host: There’ve been discussions for several years about the over- or under-treatment of DCIS. What is your opinion about whether it should be treated, and how do you help women make that choice?

Dr. Wehner: Breast cancer is no longer a one-size-fits-all disease. Historically, way back when, we treated breast cancer with a modified radical mastectomy: everybody lost their breast, most patients got chemotherapy, and most patients got radiation therapy, regardless of the type of breast cancer they had. In today’s era, we are much more specific about the treatments we recommend, because it’s not a one-size-fits-all criterion any longer. DCIS, we know, is a non-invasive type of breast cancer that has a chance of turning into an invasive type, and historically we have always treated it with at least a minimum of surgery. Current trials show us that treating it with surgery and then adding radiation will decrease the risk of recurrence but doesn’t necessarily change overall survival. And with that type of data is where people start to say, “Well, maybe we’re overtreating this.” So maybe we don’t need radiation therapy – or maybe we don’t even need surgery. Radiation, for certain patients with DCIS, is no longer a standard recommendation: if patients are a little bit older, or have a really low-grade type of breast cancer that seems favorable, for those patients we may not recommend radiation therapy. There’s also a new genetic test that has recently come on the market that gives us a discussion point as to what somebody’s risk of recurrence of DCIS is, and that may help us decide on adjuvant treatments. But again, this goes back to the clinical trial we currently have open, the COMET trial, which is really trying to answer the question: are there certain patients that we don’t need to treat with radiation, and don’t even need to treat with surgery?
And for those patients we choose to just follow with imaging, such as mammography – do they do okay overall? There’s no real way for us to answer that question without enrolling patients in this trial. So we have high hopes that we can get patients to enroll, and then we can really have a good answer to that question.

Host: Could you tell us a little bit about your patient population – what are the ages of women that you’re seeing with DCIS?

Dr. Wehner: Women with DCIS can be any age. They can be as young as their 30s, and we can see them up to the age of 103 – I think that’s the oldest patient I’ve treated. So it’s not that there’s an age criterion for these patients. We do know that the number one risk factor for developing breast cancer is age: the older we get, the more likely we become to develop a breast cancer. And there’s nothing you can do about that, because nobody can magically become younger. So we do see it across a wide variety of patients.

Host: Have you ever had a patient who was maybe on the fence, or had a cancer that you felt needed to be treated and she just wasn’t sure? Could you tell us about that experience and how you helped?

Dr. Wehner: We have a lot of women who come in very anxious about their diagnosis of cancer, and rightfully so. However, what I try to explain to patients is that breast cancer is a very different cancer compared to something like a brain cancer or a pancreatic cancer, because breast cancer is very, very treatable. Most patients treated for DCIS are alive and well five years, ten years later. That’s partially the nature of the disease and partially because our treatment options have gotten so good that, while nobody wants to say they have a cancer and have to undergo therapy for it, treatment really has become very streamlined and fairly minimal. For most patients who have a lumpectomy or partial mastectomy – the two mean the same thing – it’s actually an outpatient procedure. It only takes about one to two hours in the OR. For most patients, we don’t even have to put them fully to sleep: we give them anesthesia, but not so much that we have to put a breathing tube down. And then patients go home the same day. Patients really do incredibly well with this sort of surgery, and they recover very, very quickly. So I find a lot of the anxiety and fear of treatment comes from patients not understanding what’s really involved in it.

Host: Is there anything that women need to know about self-care and recovery? What does the recovery look like for DCIS, if you have a procedure?

Dr. Wehner: For a partial mastectomy, I send everybody home with essentially just a small band-aid covering the incision, and we send them home in a tight, supportive bra. The key, I tell everybody, is that they should wear this supportive bra for at least a week after surgery – that includes sleeping in the bra – and that’s just because we do use some deeper sutures within the breast, so it’s more comfortable to support your breast with this compression bra. It can be the one we send you home in, or one that you have at home – just something that provides compression and support. Patients are allowed to shower the day after surgery.
It’s completely fine to get it wet with warm, soapy water, then pat it dry and put your bra back on. There aren’t a lot of restrictions. Patients can use their arms; they can brush their hair; they can cook meals. They’re not allowed to drive if they’re taking narcotic pain medicine or if they cannot sit comfortably in a car with the seatbelt on. But the recovery time is actually pretty minimal. Most patients end up coming back into the hospital saying, “You know, I only really needed a pain medicine for maybe one evening, and then extra-strength Tylenol worked just fine for me.” Patients are pretty surprised at how well they feel afterwards. A mastectomy is a very different procedure from a partial mastectomy: the goal of a mastectomy is to remove all of the breast tissue, whereas the goal of a partial mastectomy is to remove just part of the breast instead of the entire breast. When we remove an entire breast, things become a little more complicated when we start talking about reconstruction – rebuilding another breast. That can come in the form of an implant-based reconstruction or autologous reconstruction, which just means taking tissue from someplace else on your body and moving it to where your former breast used to be. Those surgeries are very different, much more complex and much more involved; some of them mean a hospital stay of four or five days, depending on the type of reconstruction. For a very simple mastectomy, where the goal is to leave the chest flat and not reconstruct, patients stay in the hospital one night and go home the next day.

Host: Why is MedStar Washington Hospital Center the best place to seek DCIS expertise?

Dr. Wehner: All breast cancer needs to be treated with a very multidisciplinary approach, and as I alluded to a little earlier, we have a lot of different treatment options for breast cancer. Here at MedStar Washington Hospital Center, we have a very active and very robust multidisciplinary team. Part of that is because our patient population here just isn’t always the healthiest; they have a lot of comorbidities and social factors – things that can keep them from being able to come to the hospital for treatments. So when you come here to have your breast cancer treated, you really come here to have everything treated and everything looked at. When we meet as a multidisciplinary team, which we do every Wednesday morning, we have about thirty practitioners at our meeting: breast surgeons, breast medical oncologists, breast radiation oncologists, breast imagers, physical therapists, nutritionists, social workers, nurse navigators. There are a lot of additional people who take part in your care that you don’t even realize are working behind the scenes. And that really is to make certain that you, as a person, are taken care of – not just you as a breast cancer patient.

Host: Thanks for joining us today, Dr. Wehner.

Dr. Wehner: Absolutely.

Conclusion: Thanks for listening to Medical Intel with MedStar Washington Hospital Center. Find more podcasts from our healthcare team by visiting medstarwashington.org/podcast or subscribing in iTunes or iHeartRadio.